

On the Decomposition of Test Sets:

Building Blocks, Connection Sets, and Algorithms

Dissertation approved by the Faculty of Natural Sciences

of the Gerhard-Mercator-Universität Duisburg

for the award of the academic degree of

Dr. rer. nat.

by

Raymond Hemmecke

from

Kölleda

Referee: Prof. Dr. Rüdiger Schultz

Co-referee: Prof. Dr. Rekha R. Thomas

Date of the oral examination: 24 September 2001


Contents

Introduction 9

1 The Positive Sum Property 21

1.1 Positive Sum Property implies Universal Test Set Property . . . . . . 21

1.2 Criteria to check Positive Sum Property . . . . . . . . . . . . . . . . 22

1.2.1 Integer case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

1.2.2 Continuous case . . . . . . . . . . . . . . . . . . . . . . . . . . 24

1.3 Completion Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 26

1.3.1 Integer Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

1.3.2 Continuous Case . . . . . . . . . . . . . . . . . . . . . . . . . 28

1.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2 Graver Test Sets 31

2.1 IP Graver Test Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

2.2 Truncated Graver Test Sets for (IP)_{c,b} . . . . . . . . . . . . . . . . . 33

2.3 LP Graver Test Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

2.4 MIP Graver Test Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.4.2 Finitely many Integer Parts . . . . . . . . . . . . . . . . . . . 40

2.4.3 Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

2.5 Termination of Augmentation Algorithm . . . . . . . . . . . . . . . . 45

2.6 Feasible Initial Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 49

2.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

2.8 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53


3 Decomposition of Test Sets in Two-Stage Stochastic Programming 55

3.1 Building Blocks of Graver Test Sets . . . . . . . . . . . . . . . . . . . 56

3.2 Finiteness of H∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

3.2.1 IP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

3.2.2 LP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

3.3 Computation of H∞ . . . . . . . . . . . . . . . . . . . . . . . . . 61

3.3.1 IP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

3.3.2 LP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.4 Solving the Optimization Problem with the Help of H∞ . . . . . . . . 67

3.4.1 IP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

3.4.2 LP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

3.5 Simple Recourse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

3.5.1 IP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

3.5.2 LP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

3.6 Computations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

3.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

4 Decomposition of Test Sets in Multi-Stage Stochastic Programming 75

4.1 Building Blocks of Graver Test Sets . . . . . . . . . . . . . . . . . . . 77

4.2 Computation of H∞ . . . . . . . . . . . . . . . . . . . . . . . . . 77

4.2.1 IP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

4.2.2 LP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

4.3 Solving the Optimization Problem with the Help of H∞ . . . . . . . . 86

4.3.1 IP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

4.3.2 LP case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

4.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

5 Decomposition of Test Sets for Arbitrary Matrices 91

5.1 Connection Sets of LP, IP, and MIP Test Sets . . . . . . . . . . . . . 92

5.2 Connection Sets in Stochastic Programming . . . . . . . . . . . . . . 94


5.3 Connection Sets in Stochastic Mixed-Integer Programming . . . . . . 96

5.3.1 The Main Result . . . . . . . . . . . . . . . . . . . . . . . . . 97

5.3.2 Computation of CS_m(H∞) . . . . . . . . . . . . . . . . . . . . 98

5.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

6 Some Open Problems 107

6.1 Algorithmic Improvements . . . . . . . . . . . . . . . . . . . . . . . . 107

6.2 Extension of Maclagan's Theorem . . . . . . . . . . . . . . . . . . . . 108

7 Implementational Details 109

7.1 The Program MLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

7.1.1 Invocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

7.1.2 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

7.1.3 Input File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

7.1.4 Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

7.2 Data Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

7.3 Algorithmic Improvements . . . . . . . . . . . . . . . . . . . . . . . . 113

7.3.1 The Option "sla" . . . . . . . . . . . . . . . . . . . . . . . 113

7.3.2 Computing the Minkowski Sum of Two Sets of Vectors . . . . 118

7.3.3 Properties and Structure of H∞ . . . . . . . . . . . . . . . 119

7.3.4 Minimal Integer Solutions to Az = b . . . . . . . . . . . . . . 120

Notation 126

Conclusions 126


Acknowledgements

First I want to thank my advisor, Rüdiger Schultz, for providing excellent working

and studying conditions both at the University of Leipzig and at the University of

Duisburg. His advice and support strongly influenced my academic development, in

teaching, in giving talks and lectures, and in writing articles.

Moreover, I want to thank Rekha Thomas (University of Washington, Seattle), for

pointing my attention to Diane Maclagan's theorem ([32, 33]) which finally proved

finiteness of the set H∞ also for two-stage stochastic integer programs, as presented

in Chapter 3.

In this respect, I am clearly very grateful to Diane Maclagan (Stanford University)

for proving her theorem on finite sequences of monomial ideals.

Moreover, I would like to thank Jesus de Loera (University of California at Davis),

Herbert Scarf (Yale University, New Haven), Bernd Sturmfels (University of California

at Berkeley), and Rekha Thomas for their support and many interesting discussions.

I am grateful to Jack Graver (Syracuse University) for giving access to the unpublished

manuscript [17].

I want to thank my brother, Ralf Hemmecke (RISC Linz), for his help in optimizing

the final C-code of the program MLP, which has been developed in the scope of this

thesis.

Moreover, I want to thank Holger Traczinski (University of Duisburg) for fruitful

discussions on the fast computation of the Minkowski sum of two lists of vectors,

which led to the improvement presented in Subsection 7.3.2.

Last but not least, I gratefully acknowledge the support by the Schwerpunktprogramm

"Echtzeit-Optimierung großer Systeme" of the Deutsche Forschungsgemeinschaft.


Introduction

Many important classes of optimization problems arising in practical applications can

be modeled as (usually) large mixed-integer linear programming problems. Therefore,

much research both on theory and on algorithmic aspects has been done. Powerful

algorithms like branch-and-bound or cutting-plane methods have been developed,

improved, and implemented into fast optimization software in order to solve bigger

and bigger problems. Algorithmic improvements, however, seem to become smaller

and harder to achieve. Therefore, it is worth studying other algorithmic ideas.

During the last few years, there has been a renewed interest in a different solution

approach. These so-called primal methods are based on a simple augmentation

procedure that repeatedly improves a given feasible solution as long as this solution is

not optimal. A prominent example of such an algorithm is the simplex algorithm

that moves along edges of the feasible region from vertex to vertex in order to find an

optimal solution to a given linear program. In the general augmentation algorithm,

however, we are allowed to make augmentation steps also through the interior of the

set of feasible solutions.

Although primal methods have not yet been proven applicable to solve optimization

problems of interesting (practical) sizes, some promising ideas and computational

experience have been reported very recently [20, 21, 29].

The major ingredient of the augmentation algorithm is the computation of improving

directions to given non-optimal feasible solutions. This problem can be solved with the

help of test sets.

Test sets are (finite) sets of vectors (or directions) with the property that there is

always an (improving) direction in this set that can be used to augment a given non-

optimal feasible solution of our problem. In this work we will describe a particularly

nice test set, the Graver test set or Graver basis, that was introduced by Graver [19]

in 1975. We will present novel algorithms to compute these sets in the linear and

the integer linear situations, together with a finite augmentation procedure for the

mixed-integer linear case, where test sets need not be finite.


These algorithms, however, are not the main bottleneck of the presented test set

approach. Test sets turned out to be quite large already for small problems and need

not even be �nite in the mixed-integer situation. Therefore, in order to overcome this

problem, we move our attention from algorithmic improvements to size reduction

of test sets. Thus, the major and novel concept of this thesis is the decomposition

of test set vectors into smaller building blocks. We present algorithms to compute

this much smaller set of building blocks, together with the details of how to find improving

vectors for the augmentation algorithm that solves the optimization problem at hand.

All algorithms work entirely on the building block level, that is, the full test set is

not computed. We apply this decomposition idea specifically to two- and multi-stage

stochastic programs, where the special structure of the problem matrices admits a

particularly effective decomposition. We include some first computational experience

where our decomposition approach is superior to the existing algorithms (computer

codes). Thereafter, we extend the decomposition idea to test sets for problems with

arbitrary matrices.

After this general introduction let us become more specific. In this work we deal with

the solution of mixed-integer linear programs via a simple augmentation algorithm.

For given d = d_z + d_q, A ∈ Z^{l×d}, c ∈ R^d, and b ∈ R^l, let

(P)_{c,b} : min{c^T z : Az = b, z ∈ X_+}

be the family of mixed-integer linear optimization problems as c ∈ R^d and b ∈ R^l

vary. Herein, X = Z^{d_z} × R^{d_q}, and X_+ denotes the corresponding non-negative orthant.

As usual, Z, Q, R denote the integers, rationals, and reals, respectively. Instead of

(P)_{c,b} we write (LP)_{c,b} if X = R^d and (IP)_{c,b} if X = Z^d. By abuse of notation we refer

to subsets of the problem family (P)_{c,b} as (P)_{c,b} as well, but state which data is kept

fixed and which is allowed to vary. Thus, a single instance, that is, where b and c are

given, is also denoted by (P)_{c,b}.

Primal methods are based on repeated augmentation of feasible solutions to (P)_{c,b} to

optimality. Strongly related to this augmentation scheme is the notion of a test set.

Definition 0.0.1 (Test Set)

A set T ⊆ R^d is called a test set for (P)_{c,b} if

1. c^T t > 0 for all t ∈ T, and

2. for every b ∈ R^l and for every non-optimal feasible point z_0 of (P)_{c,b} there exist

a vector t ∈ T and a scalar α > 0 such that z_0 − αt is feasible.


A vector t 2 T satisfying these two conditions is called an improving vector or an

improving direction.

A set is called a universal test set for (P)_{c,b} if it contains a test set for (P)_{c,b} for every

cost vector c ∈ R^d.

In contrast to the common definition of test sets we do not impose finiteness on T.

This allows a treatment of test sets for mixed-integer programs which need not be

finite in general (see for example Cook et al. [14]).

Once a finite (universal) test set T and a feasible solution z_0 for (P)_{c,b} are available,

the following augmentation algorithm can be employed in order to solve the

optimization problem (P)_{c,b}.

Algorithm 0.0.2 (Augmentation Algorithm)

Input: X, a matrix A, a finite test set T for (P)_{c,b}, a cost vector c, a feasible solution

z_0 to (P)_{c,b}

Output: an optimum z_min of (P)_{c,b}

while there is t ∈ T with c^T t > 0 and a scalar α > 0 such that z_0 − αt is feasible do

    z_0 := z_0 − αt, where α is chosen maximal such that z_0 − αt is still feasible

return z_0
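As a concrete illustration of this loop (a minimal sketch, not code from this thesis), consider the toy integer program min{z_1 + 2 z_2 : z_1 + z_2 = 4, z ∈ Z^2_+}; the test set and the feasibility oracle below are supplied by hand for this one example.

```python
def augment(z0, T, c, is_feasible):
    """Augmentation Algorithm 0.0.2 (sketch): repeatedly subtract an
    improving test-set direction t, with the step length alpha maximal."""
    improved = True
    while improved:
        improved = False
        for t in T:
            # only directions with positive cost c^T t can improve z0
            if sum(ci * ti for ci, ti in zip(c, t)) <= 0:
                continue
            # choose alpha maximal such that z0 - alpha*t stays feasible
            alpha = 0
            while is_feasible([zi - (alpha + 1) * ti for zi, ti in zip(z0, t)]):
                alpha += 1
            if alpha > 0:
                z0 = [zi - alpha * ti for zi, ti in zip(z0, t)]
                improved = True
    return z0

# Toy problem: min z1 + 2*z2  s.t.  z1 + z2 = 4, z in Z^2_+.
# The Graver test set of A = [1 1] is {(1,-1), (-1,1)} (stated by hand here).
T = [(1, -1), (-1, 1)]
c = [1, 2]
feasible = lambda z: z[0] + z[1] == 4 and all(zi >= 0 for zi in z)
print(augment([0, 4], T, c, feasible))  # -> [4, 0], optimal with cost 4
```

Starting from the feasible point (0, 4) of cost 8, the only improving direction is t = (−1, 1) with c^T t = 1, and the maximal step α = 4 reaches the optimum (4, 0) in a single augmentation.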

In 1975, Graver [19] introduced finite universal test sets for linear programs (LP)

and for integer linear programs (IP). However, he provided no algorithm to compute

them. He also presented a similar test set for mixed-integer linear programs (MIP)

and already pointed out that these sets need not be finite in general [17]. Graver test

sets can be characterized by the following simple property.

Definition 0.0.3 (Positive Sum Property)

A set G has the positive sum property with respect to S ⊆ R^d if G ⊆ S and if any

non-zero v ∈ S can be written as a finite linear combination v = Σ_i α_i g_i with

- g_i ∈ G, α_i > 0, α_i g_i ∈ S, and

- for all i, g_i and v belong to the same orthant, that is, g_i^(k) v^(k) ≥ 0 for every

component k = 1, …, d.

Graver test sets are inclusion-minimal sets having the positive sum property with

respect to the set ker_X(A) := {v ∈ X : Av = 0}, the (mixed-integer) kernel of A.
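To make the orthant condition of Definition 0.0.3 concrete, the following sketch greedily extracts such a sign-compatible decomposition of a kernel vector; the set G below is the Graver test set of the hand-picked 1×3 matrix A = [1 1 −2], written out explicitly for this illustration, and the greedy step succeeds precisely because G has the positive sum property.

```python
def positive_sum_decomposition(v, G):
    """Sketch: write a kernel vector v as a sign-compatible positive sum of
    elements of G (Definition 0.0.3) by greedily subtracting any g in G that
    lies in the same orthant as v with |g_k| <= |v_k| for every component k."""
    terms = []
    while any(v):
        g = next(g for g in G
                 if all(gk * vk >= 0 and abs(gk) <= abs(vk)
                        for gk, vk in zip(g, v)))
        v = tuple(vk - gk for vk, gk in zip(v, g))
        terms.append(g)
    return terms

# Graver test set of A = [1 1 -2], written out by hand for this example:
G = [(1, 1, 1), (-1, -1, -1), (1, -1, 0), (-1, 1, 0),
     (2, 0, 1), (-2, 0, -1), (0, 2, 1), (0, -2, -1)]
print(positive_sum_decomposition((3, 1, 2), G))  # -> [(1, 1, 1), (2, 0, 1)]
```

Indeed (3, 1, 2) = (1, 1, 1) + (2, 0, 1), and both summands lie in the same orthant as (3, 1, 2).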


Already Graver [19] showed that the positive sum property implies the universal test

set property. For completeness, we include a proof of this important fact in Section

1.1.

In 1981, Scarf [43, 44, 45] presented a finite test set for (IP)_{c,b} for fixed c, the neighbors

of the origin. Herein, however, the matrix A had to fulfill strong non-degeneracy

assumptions. At that time, Scarf, too, could not provide an algorithm for the

computation of his set.

Only in 1991, Conti and Traverso [13] succeeded in computing a test set for (IP)_{c,b}

for fixed c by transforming the problem into an algebraic question about polynomial

ideals and by using Buchberger's Gröbner basis algorithm [6] in order to solve it.

The algorithm by Conti and Traverso, however, had a large computational overhead,

because it used many additional variables. Since then, however, a lot of work has been

done to improve its performance. By fundamental algebraic relations the problem was

reduced to the saturation of a certain polynomial ideal with respect to all occurring

variables and to a final Gröbner basis computation. Both algorithms involve only

the original n variables. Later it was found that saturation with respect to a

subset of the variables already suffices. For more details see for example Sturmfels [48].

Buchberger's algorithm being a field of active research in computer algebra, test

set computation has also benefited from efficient implementations and improvements

achieved in that area (see Bigatti et al. [5], or Hosten and Sturmfels [25]).

The first geometrical interpretation of the Conti-Traverso algorithm was given by

Thomas [52]. This has initiated further work on geometric algorithms for the

computation of test sets, see Cornuéjols et al. [15] and Urbaniak et al. [55].

Structural similarities of Graver test sets in LP and IP were elaborated by Sturmfels

and Thomas [49]. In [57], Weismantel gives a survey on all currently known test sets

in IP together with their computation.

Some recent work has been done on MIP test sets. Following a geometric approach,

Köppe et al. [23, 28] obtained a finiteness result similar to the one we obtain by

an algebraic argument in Chapter 2. This allows a finite algorithmic treatment of

mixed-integer linear programs via the Augmentation Algorithm 0.0.2.

Regardless of these algorithmic improvements, it turned out that test sets tend to be

huge already for comparatively small problem matrices, having 100 columns, say. To

overcome this problem, truncated test sets were considered. A truncated test set has

to give improving vectors to non-optimal feasible solutions only for a subset of all

possible problems (IP)_{c,b} [53, 55], or even only for a fixed problem [30]. In Section 2.2

we will introduce truncated Graver test sets where fixed upper bounds on variables


are used to truncate the test set to a sufficient subset. Although the presented ideas

lead to drastic improvements in the speed of the algorithms, the resulting test sets

remain huge for small problems.

This enormous size of test sets led us to the question of whether one could use the

structure of the problem matrix in order to encode the test set more efficiently. For

this, we turned our attention to two-stage stochastic optimization problems. The

two-stage mixed-integer linear stochastic program is the optimization problem

min{h^T x + Q(x) : Ax = a, x ∈ X}     (1)

where

Q(x) := ∫_{R^s} Φ(ξ − Tx) μ(dξ)     (2)

and

Φ(τ) := min{q^T y : Wy = τ, y ∈ Y}.     (3)

Here, X ⊆ R^m_+ and Y ⊆ R^n_+ denote the non-negative orthants, possibly involving

integer requirements on variables. The measure μ is a Borel probability measure on

R^s, and all the remaining data have conformal dimensions.

The model (1)-(3) arises in optimization under uncertainty. Given an optimization

problem with random data where parts of the decisions (the first-stage variables x)

have to be taken before and parts (the second-stage variables y) are taken after the

realizations of the random data are known, the purpose of (1)-(3) is to minimize

the sum of the direct costs h^T x and the expected costs of optimal decisions in the

second stage. The model has a multi-stage extension where a multi-stage process of

alternating decision and observation replaces the two-stage process assumed above.

For further details on the modeling background we refer to [3, 26, 38].

Under mild assumptions, all ingredients in the above model are well defined [46].

Algorithmically, (1)-(3) provides some challenges, since the integral behind Q(x) is

multidimensional and its integrand is given only implicitly. Due to the fact that

the model is stable under perturbations of the underlying probability measure

[3, 26, 38, 46], computations almost exclusively work with discrete measures μ,

tacitly assuming that, if necessary, continuous measures have been approximated by

discrete ones beforehand. Therefore, our algorithmic considerations will rest on the

assumption that μ is a discrete probability measure with finitely many realizations

(or scenarios) ξ^1, …, ξ^N, and probabilities π_1, …, π_N. Then (1)-(3) is equivalent to

the mixed-integer linear program

min{h^T x + Σ_{ν=1}^N π_ν q^T y_ν : Ax = a, x ∈ X,

Tx + W y_ν = ξ^ν, y_ν ∈ Y, ν = 1, …, N}.     (4)
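The block structure of (4) can be assembled mechanically from A, T, W, and N; the following sketch does this with plain Python lists, using tiny hypothetical blocks (it assumes T and W have the same number of rows, one per second-stage constraint).

```python
def two_stage_matrix(A, T, W, N):
    """Assemble the constraint matrix of the deterministic equivalent (4):
    a first block row [A | 0 ... 0] for Ax = a and, for each scenario nu,
    block rows [T | 0 .. W .. 0] for Tx + W y_nu = xi^nu."""
    n_x, n_W = len(A[0]), len(W[0])
    rows = [row + [0] * (N * n_W) for row in A]       # [A | 0 ... 0]
    for nu in range(N):
        for t_row, w_row in zip(T, W):                # [T | 0 .. W .. 0]
            row = t_row + [0] * (N * n_W)
            row[n_x + nu * n_W : n_x + (nu + 1) * n_W] = w_row
            rows.append(row)
    return rows

# hypothetical 1x2 blocks and N = 2 scenarios
M = two_stage_matrix(A=[[1, 1]], T=[[1, 0]], W=[[1, -1]], N=2)
for row in M:
    print(row)
# -> [1, 1, 0, 0, 0, 0]
#    [1, 0, 1, -1, 0, 0]
#    [1, 0, 0, 0, 1, -1]
```

The number of columns grows linearly in N while the blocks A, T, and W are repeated unchanged, which is exactly the structure the test set decomposition of Chapter 3 exploits.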


The number N of scenarios being big in general, (4) is large-scale and not amenable

to general purpose mixed-integer linear programming solvers. This has motivated

research into decomposition algorithms. The latter have a long tradition in stochastic

linear programming without integer requirements [3, 24, 26, 38, 41]. Then models

enjoy convexity properties, and algorithmic developments can be based on

subgradient techniques from convex minimization and convex duality, for instance. With

integer requirements in (4) these convexity properties are lost, leading to

substantially different decomposition approaches (see [27] for a recent survey). So far, two

main lines of research have been pursued in decomposition of stochastic mixed-integer

programs: (primal) decomposition by rows and (dual) decomposition by columns of

the constraint matrix of (4) whose block-angular structure is depicted in Figure 1.

A
T  W
T     W
:        ...
T           W

Figure 1: Constraint matrix structure of (4)

In primal decomposition, the variable x is iterated in an outer master problem, and

advantage is taken of the fact that, for fixed x, the minimization with respect to

(y_1, …, y_N) can be carried out separately for each y_ν. The major difficulty with this

approach in the (mixed-)integer situation is the complicated structure of the

master problem whose objective function is non-convex and discontinuous, in fact lower

semicontinuous. Research has been devoted to studying cutting planes for the master

problem that are derived via subadditive duality [8, 11, 12], to solving the master

problem by enumeration and bounding while handling the second-stage problems

by methods exploiting problem similarities [47], and to extending the latter into a

branch-and-bound method in the space of first-stage variables [1].

Dual decomposition methods start from an explicit (linear) representation of non-

anticipativity constraints. These constraints are inherent to (1)-(3) and (4). They

state that �rst-stage decisions must not depend on future observations. In (1)-(3)

and (4) this is modeled implicitly by the independence of the variable x from the

random vector ξ in (1)-(3) and from the scenarios ξ^ν, ν = 1, …, N, in (4), respectively.


Decomposition then is achieved by Lagrangian relaxation of the non-anticipativity

constraints. In [31] the framework of progressive hedging [39] is adopted where

Augmented Lagrangians lead to decomposition into quadratic mixed-integer programs

which are tackled by tabu search. In [9, 10] the standard Lagrangian is employed. This

leads to linear mixed-integer subproblems that can be solved by general-purpose or

custom-made mixed-integer linear programming software. On top of the Lagrangian

relaxation, a branch-and-bound scheme in the space of first-stage variables is placed,

providing quality bounds in the course of the iteration.

Another mode of decomposition that has been applied successfully mainly to

industrial problems in power optimization relies on problem dependent decoupling

into submodels which are still two- or even multi-stage stochastic integer programs

but amenable to specialized algorithms based on dynamic programming or network

flow, see for instance [34, 35, 51].

A common feature of all the algorithms discussed above is decomposition of the

problem itself. In this thesis we adopt an entirely different viewpoint. Instead of

decomposition of the problem (4), we study decomposition of its Graver test set.

Decomposition of the Graver test set of (4) will lead us to an augmentation

algorithm that, to the best of our knowledge, so far has no counterpart in the existing

stochastic programming literature. In particular, we have observed that, once an

algorithmic bottleneck determined by the sizes of A, T, and W is passed, the algorithmic

effort grows only mildly with the number N of scenarios. In contrast, the algorithms

discussed above have been observed to be quite sensitive to that number.

We extend this decomposition idea to multi-stage stochastic programs and to

problems with arbitrary problem matrix. To this end, we introduce the concept of

connection vectors that couple the building blocks of (Graver) test set vectors. This yields

finite sets of connection vectors, so-called connection sets, even in the mixed-integer

situation, where test sets need not be finite in general.

This thesis is structured as follows.

Chapters 1 and 2 lay the notational and algorithmic basis to our test set

decomposition approaches to two- and multi-stage stochastic programs and to arbitrary

(mixed-integer and integer) linear programs as presented in Chapters 3, 4, and 5.

In Chapter 1 we present the positive sum property and show that this property ensures

the universal test set property. This fact has been proved already by Graver [19].

However, it is also this positive sum property that directs us to completion procedures

[7], a well-known and powerful algorithmic concept from computational algebra: step

by step an initial set of vectors is completed with respect to the positive sum property

Page 16: On the Decomp osition of T est SetsLP case. 68 3.5 Simple Recourse. 69 3.5.1 IP case. 70 3.5.2 LP case. 72 3.6 Computations. 73 3.7 Conclusions. 74 4 Decomp osition of T est Sets in

16

by adding new vectors to the set as long as necessary. This leads to novel algorithms

to compute LP and IP Graver test sets (see Chapter 2) that work entirely in the

kernel of the problem matrix. It turned out, however, that the IP algorithm had been

already presented in the disguised form of a d-dimensional Euclidean algorithm in a

research report by Pottier [37]. We present a slightly more general IP algorithm that

computes minimal points (with respect to a certain property) not only in ker_{Z^d}(A)

but in general integer lattices Λ ⊆ Z^d. In this way we can extend the concept of IP

Graver bases also to optimization problems

(MOD)_{c,b,b̄} : min{c^T z : Az = b, Āz ≡ b̄ (mod p), z ∈ Z^d_+},

where Āz ≡ b̄ (mod p) abbreviates the relations ā_i z ≡ b̄_i (mod p_i) for p_i ∈ Z_+,

i = 1, …, l̄. These problems have a close connection for example to the "group

problem in integer programming" [42] and to the integer Fourier-Motzkin elimination

[58, 59, 60], respectively.

sets can be found for example in [48].
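The flavour of such a completion procedure can be conveyed by a heavily simplified, Pottier-style sketch (none of the thesis's improvements, and a hand-picked matrix A = [1 2 −3] whose kernel lattice is generated by (1, 1, 1) and (2, −1, 0)): sums of pairs are reduced to a sign-compatible normal form, and non-zero remainders are added until nothing new appears.

```python
def normal_form(v, G):
    """Reduce v by subtracting elements of G that lie in the same orthant
    as v and are componentwise no larger in absolute value."""
    changed = True
    while changed:
        changed = False
        for g in G:
            if any(g) and all(gk * vk >= 0 and abs(gk) <= abs(vk)
                              for gk, vk in zip(g, v)):
                v = tuple(vk - gk for vk, gk in zip(v, g))
                changed = True
    return v

def complete(F):
    """Completion procedure sketch: close F (a lattice generating set together
    with its negatives) under pairwise sums, keeping irreducible remainders."""
    G = list(F)
    pairs = [(f, g) for f in G for g in G]
    while pairs:
        f, g = pairs.pop()
        s = normal_form(tuple(fk + gk for fk, gk in zip(f, g)), G)
        if any(s):
            pairs += [(s, h) for h in G] + [(h, s) for h in G]
            G.append(s)
    return G

# kernel lattice basis of A = [1 2 -3], plus negatives
G = complete([(1, 1, 1), (-1, -1, -1), (2, -1, 0), (-2, 1, 0)])
print(sorted(G))  # the ten Graver test set elements of A
```

On this instance the completion stops with the ten Graver elements ±(1, 1, 1), ±(2, −1, 0), ±(1, −2, −1), ±(3, 0, 1), ±(0, 3, 2); afterwards every kernel vector reduces to zero, which is exactly the positive sum property.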

In Chapter 2 we introduce Graver test sets for LP, IP, and MIP as inclusion-minimal

sets that have the positive sum property with respect to ker_X(A). Then we treat each

case separately in more detail. In the LP and in the IP cases we relate the Graver test

set to the circuits of the matrix A and to Hilbert bases of certain pointed rational

cones, respectively.

Right after the section on IP Graver bases we include a section on truncated Graver

test sets for the family of problems min{c^T z : Az = b, 0 ≤ z ≤ u, z ∈ Z^d} as only

c ∈ R^d and b ∈ R^l vary. Here, fixed finite upper bounds on z are used to truncate

the Graver test set for min{c^T z : Az = b, z ∈ Z^d_+} to a sufficient subset and to

speed up the direct computation of this truncated set. The presented algorithm,

however, is more general and computes all non-zero minimal integer vectors in the

set {z : Az = 0, l ≤ z ≤ u, z ∈ Z^d}, l ≤ 0 ≤ u, where a vector v is called minimal

if there is no vector w in this set such that v and w belong to the same orthant of

R^d and such that the components of w are not greater in absolute value than the

corresponding components of v. Again, the same algorithmic pattern as for general

Graver test set computations, a completion procedure, is used. For special choices of

l and u this includes algorithmic solutions to the following problems:

- For l = 0 and u = 1 (component-wise) the algorithm computes (partial) test

sets for combinatorial optimization problems.

- For l = −u the algorithm computes the desired truncated Graver test set.


- For l = −∞ and u = +∞ the algorithm is similar to Algorithm 2.1.7 in [54]

and computes the IP Graver test set corresponding to A. However, it is much

slower than Algorithm 1.3.1 (Section 1.3), since the computation is done in a

higher dimensional space.

- For l = 0 and u = +∞ the algorithm computes the unique minimal Hilbert

basis for the pointed rational cone {z : Az = 0, z ∈ R^d_+}.
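For very small instances the defining property can be checked directly; the sketch below obtains the minimal vectors of {z : Az = 0, l ≤ z ≤ u, z ∈ Z^d} by brute-force enumeration over the box, which is of course not the completion procedure of the thesis but only a naive reference implementation of the definition.

```python
from itertools import product

def minimal_kernel_vectors(A, l, u):
    """All non-zero minimal integer vectors in {z : Az = 0, l <= z <= u},
    where w 'dominates' z if both lie in the same orthant and |w_k| <= |z_k|."""
    box = [range(lo, hi + 1) for lo, hi in zip(l, u)]
    kernel = [z for z in product(*box)
              if any(z) and all(sum(a * x for a, x in zip(row, z)) == 0
                                for row in A)]
    def dominates(w, z):
        return all(wk * zk >= 0 and abs(wk) <= abs(zk) for wk, zk in zip(w, z))
    return [z for z in kernel
            if not any(w != z and dominates(w, z) for w in kernel)]

# l = 0, u = 1: 0/1 test set vectors for a combinatorial-style example
print(minimal_kernel_vectors([[1, 1, -1]], l=[0, 0, 0], u=[1, 1, 1]))
# -> [(0, 1, 1), (1, 0, 1)]
```

With l = −u this enumerates exactly the truncated Graver test set described above, only far less efficiently than the completion procedure.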

The notion of a truncated Graver test set already appeared in a paper by Thomas

and Weismantel [53]. They define a partial order on the right-hand side (b, u) in

order to truncate test sets for problems of type min{c^T z : Az = b, 0 ≤ z ≤ u, z ∈ Z^d}.

Our algorithm to compute a truncated Graver test set could be deduced from their

approach by defining their partial order not on all components of the right-hand side

(b, u) but only on the components of u, the upper bounds on z. Our approach uses

a simpler algebraic machinery, whereas the one by Thomas and Weismantel is more

general.

Next, we introduce MIP Graver test sets. We present an example similar to the one

by Cook et al. [14], which implies that MIP test sets need not be finite in general. In

[14] the existence of finite MIP test sets is shown if considerations are restricted to a

smaller family of optimization problems. In contrast to this approach, we introduce

a finite set of integer vectors that is not a test set in the sense of Definition 0.0.1. However, we will

present algorithms to compute these finitely many integer vectors and to compute

an improving direction for a given non-optimal solution. Thus, the Augmentation

Algorithm 0.0.2 can be employed in the MIP case, too.

In the following two sections we deal with the two main questions concerning the Augmentation Algorithm 0.0.2: does the augmentation algorithm always terminate, and how do we find an initial feasible solution for its input? First, in Section 2.5, we discuss termination of the Augmentation Algorithm 0.0.2. We demonstrate that, particularly in the LP case, some caution is appropriate to avoid zig-zagging (even to non-optimal solutions), and we present a strategy which ensures termination. Then we deal with feasible solutions. A test set has to provide improving directions for any right-hand side $b$ and hence cannot depend on the specific choice of $b$. The only step specific to $b$, and usually a hard one if integrality constraints on variables are present, is to find some feasible solution of the given problem. Although there are algorithmic alternatives, we show in Section 2.6 that universal test sets can be used to solve this feasibility problem as well. Again, some care has to be taken in the LP case to ensure termination of this procedure.


In Chapter 3 we present our decomposition approach for two-stage stochastic programming. We introduce building blocks that can be employed to construct test set vectors for two-stage stochastic linear and integer linear programs. We prove the existence of a finite set $H_\infty$ that contains all the building blocks of test set vectors of (4) for arbitrary objective functions, arbitrary right-hand side vectors, and, most importantly, an arbitrary number $N$ of scenarios. Then we develop a finite algorithm that computes $H_\infty$, which again is based on a completion procedure. This algorithm works entirely at the building block level; no explicit information on test set vectors is needed. The set $H_\infty$ is employed by a finite augmentation procedure for solving (4) that, again, works entirely at the building block level. The chapter is completed by a discussion of stochastic programs with simple recourse and by a section on initial computational experience in the integer situation.

In Chapter 4 we show how the above decomposition approach for two-stage stochastic programs, together with all necessary notions, can be generalized to the multi-stage situation. To this end, however, we present the details for the 3-stage case only. Again, we define a set $H_\infty$ of building blocks that does not depend on the number of scenarios. Moreover, we present an algorithm, again a completion procedure, which, upon termination, returns the set $H_\infty$. We prove termination of the algorithm, and thus finiteness of $H_\infty$, for the LP case. The same question, however, is still open in the IP situation. Again, the set $H_\infty$ is employed by a finite augmentation procedure for solving the multi-stage problem. Note that this procedure, too, works entirely at the building block level.

The treatment of the 3-stage case should make clear how to define the decomposition, the notions, and the algorithms for higher multi-stage situations. Also, the proofs follow analogous counting or construction arguments.

In Chapter 5 we apply our decomposition method to (mixed-integer and integer) linear optimization problems with arbitrary problem matrix $A$. We split the matrix $A$ into $(A_1 \mid A_2)$ and each test set vector $v$ accordingly into $(v_1, v_2)$. We call $v_1$ and $v_2$ the building blocks of $v$; they are connected via the vector $A_1 v_1 = -A_2 v_2$. The collection of all these connection vectors $A_1 v_1$, the connection set, may be much smaller than the test set it corresponds to. Moreover, we show that knowledge of the connection set already suffices to solve the original optimization problem. More importantly, even if the full test set is known, the problem under consideration may be solved much faster by first extracting the connection set and then solving the problem with the help of these connection vectors only.

We look again at Graver bases for LP, IP, and MIP, and at our decomposition approach for two-stage stochastic programs (Section 5.2), but this time from the point of view of connection sets. We show how an improving vector can efficiently be reconstructed from the connection set in all these cases, which makes the application of the Augmentation Algorithm 0.0.2 possible.

Finally, we present an algorithm to compute connection sets also for two-stage stochastic mixed-integer linear programs in which first-stage and second-stage continuous variables are not coupled by a row of $(T \mid W)$. Surprisingly, the connection set corresponding to $H_\infty$ turns out to be finite in this case.

In Chapter 6 we collect some open problems which arose in the previous chapters. These problems deal either with structural questions (finiteness of $H_\infty$ for multi-stage stochastic integer programs) or with computational improvements.

In Chapter 7 we present implementational details of the computer code MLP [22], which has been developed during the last three years within the scope of this thesis. MLP computes (truncated) IP Graver test sets, Hilbert bases of certain pointed rational cones, and the set $H_\infty$ for two-stage stochastic integer programs. In this chapter we also present a novel algorithmic improvement to IP Graver basis computations for the case that the kernel of the problem matrix has a particularly nice generating set of the form $\{(e_1, -Ae_1), \ldots, (e_d, -Ae_d)\}$, where $e_1, \ldots, e_d$ denote the unit vectors in $\mathbb{R}^d$. The same improvement can be employed for the computation of Hilbert bases of $\ker_{\mathbb{R}^d}(A) \cap \mathbb{R}^d_+$.

Finally, the thesis closes with a collection of the notation used and a conclusions chapter.


Chapter 1

The Positive Sum Property

In this chapter we introduce the positive sum property and show that it implies the universal test set property. Afterwards we present two criteria which allow a finite test of whether a given symmetric set $G$ has the positive sum property with respect to an integer lattice $\Lambda \subseteq \mathbb{Z}^d$, for example $\ker_{\mathbb{Z}^d}(A)$, or with respect to $\ker_{\mathbb{R}^d}(A)$. Herein, a set $G$ is called symmetric if $v \in G$ implies $-v \in G$.

These two criteria immediately lead us to the concept of a completion procedure: a given (symmetric) generating set $G$ of an integer lattice $\Lambda$ over $\mathbb{Z}$, or of $\ker_{\mathbb{R}^d}(A)$ over $\mathbb{R}$, is completed with respect to the positive sum property by adding new elements to $G$ as long as necessary. Once the completion procedure terminates, the two criteria below already imply its correctness, that is, the set of vectors returned indeed has the positive sum property with respect to $\Lambda$ or $\ker_{\mathbb{R}^d}(A)$.

1.1 Positive Sum Property implies Universal Test Set Property

To simplify the subsequent proofs it is convenient to introduce the following relation.

Definition 1.1.1 We define the relation $\sqsubseteq$ on $\mathbb{R}^d$ by $u \sqsubseteq v$ if $u^{(j)} v^{(j)} \ge 0$ and $|u^{(j)}| \le |v^{(j)}|$ for all components $j = 1, \ldots, d$, that is, $u$ belongs to the same orthant as $v$ and its components are not greater in absolute value than the corresponding components of $v$.
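This componentwise test is straightforward to state in code. The following sketch (the helper name `conforms` is our own choice, not part of the thesis or of MLP) checks $u \sqsubseteq v$ for vectors given as tuples:

```python
def conforms(u, v):
    """Return True iff u conforms to v (u ⊑ v): for every component j,
    u_j and v_j do not have opposite signs and |u_j| <= |v_j|."""
    return all(uj * vj >= 0 and abs(uj) <= abs(vj) for uj, vj in zip(u, v))
```

For example, $(1,-2,0) \sqsubseteq (2,-3,1)$, while $(1,2) \not\sqsubseteq (-1,3)$ because the first components lie in different orthants.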

Clearly, if $u \sqsubseteq v$ then also $v - u \sqsubseteq v$. The following lemma was already proved by Graver [19]. We include a proof because of its importance.


Lemma 1.1.2 (Positive Sum Property implies Universal Test Set Property)
If $G$ has the positive sum property with respect to $\ker_X(A)$ then $G$ is a universal test set for $(P)_{c,b}$.

Proof. Let $b \in \mathbb{R}^l$, $c \in \mathbb{R}^d$, and a non-optimal feasible point $z_0$ of $(P)_{c,b}$ be given. We have to prove that there exist a vector $v \in G$ and a scalar $\lambda > 0$ such that $z_0 - \lambda v$ is a better feasible solution than $z_0$.

Denote by $z_1$ another feasible solution with smaller cost function value than $z_0$. By the assumptions on $G$, we can write the vector $z_0 - z_1$ as a finite linear combination $z_0 - z_1 = \sum \lambda_i g_i$, where $g_i \in G$, $\lambda_i > 0$, $\lambda_i g_i \in X$, and $\lambda_i g_i \sqsubseteq z_0 - z_1$ for all $i$. Since $c^\top (z_0 - z_1) > 0$ we have $c^\top (\lambda_j g_j) > 0$ for at least one $g_j$. We claim that $z_0 - \lambda_j g_j$ is feasible and has a better cost function value than $z_0$.

Clearly, $A(z_0 - \lambda_j g_j) = A z_0 - \lambda_j A g_j = b - 0 = b$ and $c^\top z_0 > c^\top (z_0 - \lambda_j g_j)$, since $g_j \in \ker_X(A)$ and $c^\top (\lambda_j g_j) > 0$. So it remains to prove that $z_0 - \lambda_j g_j \ge 0$. We prove this for each component $k$ separately. By the definition of the relation $\sqsubseteq$ we conclude from $\lambda_j g_j \sqsubseteq z_0 - z_1$ that

$$\lambda_j g_j^{(k)} (z_0 - z_1)^{(k)} \ge 0 \qquad (1.1)$$

and

$$\big| \lambda_j g_j^{(k)} \big| \le \big| (z_0 - z_1)^{(k)} \big|. \qquad (1.2)$$

If $\lambda_j g_j^{(k)} \le 0$ then $(z_0 - \lambda_j g_j)^{(k)} = z_0^{(k)} - \lambda_j g_j^{(k)} \ge z_0^{(k)} \ge 0$. If, on the contrary, $\lambda_j g_j^{(k)} > 0$ then, by (1.1) and (1.2), also $(z_0 - z_1)^{(k)} \ge \lambda_j g_j^{(k)} > 0$, which implies $z_0^{(k)} - \lambda_j g_j^{(k)} \ge z_1^{(k)} \ge 0$, as desired. $\square$

1.2 Criteria to check Positive Sum Property

1.2.1 Integer case

The following result is not only valid for the (saturated) lattice $\ker_{\mathbb{Z}^d}(A)$ but holds for arbitrary integer lattices $\Lambda$ as well. This more general version allows us to generalize Graver test sets and their computation to the optimization problems $(MOD)_{c,b,\bar b}$ (introduced in the introductory chapter, page 16); see Section 2.1.

Lemma 1.2.1 (Criterion for Positive Sum Property with respect to an integer lattice)
Let $\Lambda$ be an integer sublattice of $\mathbb{Z}^d$. A symmetric set $G \subseteq \Lambda$ has the positive sum property with respect to $\Lambda$ if and only if the following two conditions hold:

- $G$ finitely generates $\Lambda$ over $\mathbb{Z}$, and

- for every pair $v, w \in G$, the vector $v + w$ can be written as a finite linear combination $v + w = \sum \alpha_i g_i$, where for all $i$ we have $g_i \in G$, $\alpha_i \in \mathbb{Z}_{>0}$, and $g_i \sqsubseteq v + w$.

Proof. We have to show that any non-zero $z \in \Lambda$ can be written as a finite positive integer linear combination of elements from $G$ where each vector in this combination belongs to the same orthant as $z$. Since $G$ generates $\Lambda$ over $\mathbb{Z}$ and since $G$ is symmetric, the vector $z$ can be written as a linear combination $z = \sum \alpha_i v_i$ for finitely many $\alpha_i \in \mathbb{Z}_{>0}$ and $v_i \in G$. Note that $\sum \alpha_i \|v_i\|_1 \ge \|z\|_1$, with equality if and only if $v_i \sqsubseteq z$ for all $i$.

From the set of all such integer linear combinations $\sum \alpha_i v_i$ choose one such that $\sum \alpha_i \|v_i\|_1$ is minimal and assume that $\sum \alpha_i \|v_i\|_1 > \|z\|_1$; otherwise $v_i \sqsubseteq z$ for all $i$ and we are done. Then there have to exist vectors $v_{i_1}, v_{i_2}$ in this representation which have different signs in some component $k = k_0$.

By the assumptions on $G$, the vector $v_{i_1} + v_{i_2}$ can be written as $v_{i_1} + v_{i_2} = \sum \beta_j v'_j$ for finitely many $\beta_j \in \mathbb{Z}_{>0}$, $v'_j \in G$, and $v'_j \sqsubseteq v_{i_1} + v_{i_2}$ for all $j$. The latter implies that for each component $k = 1, \ldots, d$,

$$\sum_j \beta_j \big| v'^{(k)}_j \big| = \Big| \sum_j \beta_j v'^{(k)}_j \Big| = \big| (v_{i_1} + v_{i_2})^{(k)} \big| \le \big| v_{i_1}^{(k)} \big| + \big| v_{i_2}^{(k)} \big|,$$

where the last inequality is strict for $k = k_0$ by construction. Summing up over $k = 1, \ldots, d$ yields $\sum \beta_j \|v'_j\|_1 = \|v_{i_1} + v_{i_2}\|_1 < \|v_{i_1}\|_1 + \|v_{i_2}\|_1$. But now $z$ can be represented as

$$z = \alpha_{i_1} v_{i_1} + \alpha_{i_2} v_{i_2} + \sum_{i \ne i_1, i_2} \alpha_i v_i = \sum \beta_j v'_j + (\alpha_{i_1} - 1) v_{i_1} + (\alpha_{i_2} - 1) v_{i_2} + \sum_{i \ne i_1, i_2} \alpha_i v_i,$$

and it holds that

$$\sum \beta_j \|v'_j\|_1 + (\alpha_{i_1} - 1) \|v_{i_1}\|_1 + (\alpha_{i_2} - 1) \|v_{i_2}\|_1 + \sum_{i \ne i_1, i_2} \alpha_i \|v_i\|_1 < \sum \alpha_i \|v_i\|_1,$$

in contradiction to the required minimality of $\sum \alpha_i \|v_i\|_1$. Altogether we obtain $\|z\|_1 = \sum \alpha_i \|v_i\|_1$, that is, $G$ has the positive sum property with respect to $\Lambda$. $\square$


1.2.2 Continuous case

In order to prove the criterion of Lemma 1.2.7 below we need to introduce the set of circuits associated with a matrix $A$. This set will turn out to be a minimal set having the positive sum property with respect to $\ker_{\mathbb{R}^d}(A)$.

Definition 1.2.2 For $v \in \mathbb{R}^d$ the set $\mathrm{supp}(v) := \{j : v^{(j)} \ne 0\}$ is called the support of $v$.

Definition 1.2.3 (Circuits of a Matrix)
A circuit of a matrix $A \in \mathbb{Z}^{l \times d}$ is a vector $q \in \ker_{\mathbb{R}^d}(A) \cap \mathbb{Z}^d$ of inclusion-minimal support in $\ker_{\mathbb{R}^d}(A) \setminus \{0\}$ whose components have greatest common divisor 1.
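For small instances the circuits can be found by brute force directly from this definition. The sketch below is purely illustrative (the matrix $A = (1\ 1\ {-1})$ and all names are our own assumptions, not the method used later in this thesis); it enumerates small integer kernel vectors and keeps those with inclusion-minimal support and coprime entries:

```python
from itertools import product
from math import gcd
from functools import reduce

A = [[1, 1, -1]]  # small illustrative matrix (our assumption, not from the text)

def in_kernel(v):
    return all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)

def support(v):
    return frozenset(j for j, x in enumerate(v) if x != 0)

# non-zero kernel vectors with entries in {-2, ..., 2}; for this matrix every
# minimal support is already witnessed inside this small box
cands = [v for v in product(range(-2, 3), repeat=3) if any(v) and in_kernel(v)]
supports = {support(v) for v in cands}
minimal = {s for s in supports if not any(t < s for t in supports)}
circuits = {v for v in cands
            if support(v) in minimal and reduce(gcd, map(abs, v)) == 1}
```

Here the circuits are $\pm(1,0,1)$, $\pm(0,1,1)$, and $\pm(1,-1,0)$, one pair for each of the three minimal supports.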

Note that if $A$ has only integer (or rational) entries then there is always a rational (and thus also an integer) vector for every minimal support in $\ker_{\mathbb{R}^d}(A)$. Proofs for the following three statements can be found in [19].

Lemma 1.2.4 If $v$ is a circuit of $A$ and $w \in \ker_{\mathbb{R}^d}(A)$ satisfies $\mathrm{supp}(w) \subseteq \mathrm{supp}(v)$ then $w = \lambda v$ for some $\lambda \in \mathbb{R}$. In particular, we have $w = \pm v$ if $v$ and $w$ are circuits with the same support.

Corollary 1.2.5 Every matrix $A \in \mathbb{Z}^{l \times d}$ has only finitely many circuits.

Lemma 1.2.6 The set of all circuits of $A$ has the positive sum property with respect to $\ker_{\mathbb{R}^d}(A)$.

Now we are in the position to state and prove the main result of this subsection.

Lemma 1.2.7 (Criterion for Positive Sum Property with respect to $\ker_{\mathbb{R}^d}(A)$)
A symmetric set $G \subseteq \ker_{\mathbb{R}^d}(A)$ has the positive sum property with respect to $\ker_{\mathbb{R}^d}(A)$ if and only if the following two conditions hold:

- $G$ finitely generates $\ker_{\mathbb{R}^d}(A)$ over $\mathbb{R}$, and

- for every pair $v, w \in G$ and all $\lambda \in \mathbb{R}$, the vector $v + \lambda w$ can be written as a finite linear combination $v + \lambda w = \sum \alpha_i g_i$, where for all $i$ we have $g_i \in G$ and $\mathrm{supp}(g_i) \subseteq \mathrm{supp}(v + \lambda w)$. Herein, only those values of $\lambda \in \mathbb{R}$ need to be considered for which $v + \lambda w$ contains a zero entry at some component $k$ at which neither $v$ nor $w$ has a zero entry, that is, $(v + \lambda w)^{(k)} = 0$ but $v^{(k)} w^{(k)} \ne 0$.

Note that the condition $\mathrm{supp}(g_i) \subseteq \mathrm{supp}(v + \lambda w)$ is in particular satisfied if $\alpha_i > 0$ and $\alpha_i g_i \sqsubseteq v + \lambda w$.

Proof. It suffices to prove that $G$ contains some non-zero scalar multiple of every circuit $q$ of $A$. Lemma 1.2.7 then follows by application of Lemma 1.2.6.

Let $q$ be an arbitrary circuit of $A$. Without loss of generality we may assume that the zero components of $q$ are the first $m$ components; otherwise we may rearrange the variables to ensure this property. Since the vectors in $G$ generate $\ker_{\mathbb{R}^d}(A)$ over $\mathbb{R}$, we have $q = \sum \lambda_i g_i$ for finitely many non-zero $\lambda_i \in \mathbb{R}$ and $g_i \in G$.

We will now rewrite this linear combination into a linear combination in which all appearing vectors from $G$ have a zero first component. Repeating this process with the second, third, ..., $m$th component, we arrive at a linear representation of $q$ by elements from $G$ which have zeros in their first $m$ components. Choose any $g_j \in G$ that occurs in this last linear combination. Since $\mathrm{supp}(g_j) \subseteq \mathrm{supp}(q)$ and since $q$ is a circuit, we conclude by Lemma 1.2.4 that $g_j \in G$ is a non-zero scalar multiple of $q$, and our claim follows. Thus, it remains to show that this rewriting step is always possible.

If all $g_i$ in $q = \sum \lambda_i g_i$ already have a zero first component we can go on with the second, third, ..., $m$th component. If not, $0 = q^{(1)} = \sum \lambda_i g_i^{(1)}$ implies that there exist at least two vectors $g_j$ and $g_k$ with non-zero first components. Take $g_j$ and $g_k$ and rewrite $\lambda_j g_j + \lambda_k g_k$ as follows:

$$\lambda_j g_j + \lambda_k g_k = \lambda_j g_j + \lambda_k \left( g_k - \frac{g_k^{(1)}}{g_j^{(1)}}\, g_j \right) + \lambda_k \frac{g_k^{(1)}}{g_j^{(1)}}\, g_j = \left( \lambda_j + \lambda_k \frac{g_k^{(1)}}{g_j^{(1)}} \right) g_j + \lambda_k \left( g_k - \frac{g_k^{(1)}}{g_j^{(1)}}\, g_j \right).$$

Since the first component of $g_k - (g_k^{(1)}/g_j^{(1)}) g_j$ vanishes, the assumptions on $G$ yield a representation of $g_k - (g_k^{(1)}/g_j^{(1)}) g_j$ as a linear combination of elements in $G$ whose support is contained in the support of $g_k - (g_k^{(1)}/g_j^{(1)}) g_j$. Thus, each vector in this linear combination has a zero first component. Substituting this representation back into the above linear combination for $q$, we arrive at a new representation $q = \sum \lambda'_i g'_i$ in which the number of vectors with non-zero first component is at least one less than in $q = \sum \lambda_i g_i$. Hence this process terminates with a finite linear representation of $q$ using only elements from $G$ with a zero first component. Now we can go on with the second, third, ..., $m$th component. Clearly, zero entries in components already considered will not be destroyed by our rewriting steps. This finally concludes the proof. $\square$

1.3 Completion Algorithm

Lemmas 1.2.1 and 1.2.7 reduce the decision of whether a finite symmetric set $F$ has the positive sum property with respect to an integer lattice $\Lambda$ or with respect to $\ker_{\mathbb{R}^d}(A)$ to a finite number of representability tests which may be treated algorithmically. In order to complete a set of vectors with respect to the positive sum property, these lemmas suggest the following procedure, which adds further elements to the set as long as it does not have the desired positive sum property. Such procedures are known as completion procedures [7].

Algorithm 1.3.1 (Completion Procedure)

Input: a finite symmetric set $F \subseteq \ker_{\mathbb{R}^d}(A)$ generating $\Lambda$ over $\mathbb{Z}$ or $\ker_{\mathbb{R}^d}(A)$ over $\mathbb{R}$
Output: a set $G \supseteq F$ that has the positive sum property with respect to $\Lambda$ or $\ker_{\mathbb{R}^d}(A)$

G := F
C := ⋃_{f,g ∈ G} S-vectors(f, g)
while C ≠ ∅ do
    s := an element in C
    C := C \ {s}
    f := normalForm(s, G)
    if f ≠ 0 then
        C := C ∪ ⋃_{g ∈ G} S-vectors(f, g)
        G := G ∪ {f}
return G

In this algorithm, the set S-vectors$(f, g)$ corresponds to the "critical" vectors described in Lemmas 1.2.1 and 1.2.7, respectively. That is, S-vectors$(f, g) = \{f + g\}$ in the integer case and S-vectors$(f, g) = \bigcup_{\lambda} \{f + \lambda g\}$ in the continuous case. As specified in Lemma 1.2.7, only those values of $\lambda \in \mathbb{R}$ need to be considered for which $f + \lambda g$ contains a zero entry at some component $k$ at which neither $f$ nor $g$ has a zero entry, that is, $(f + \lambda g)^{(k)} = 0$ but $f^{(k)} g^{(k)} \ne 0$. Note that if $s \in$ S-vectors$(v, w)$ then $\mathrm{supp}(s) \subseteq (\mathrm{supp}(v) \cup \mathrm{supp}(w)) \setminus \{e\}$ for some $e \in \mathrm{supp}(v) \cap \mathrm{supp}(w)$. This construction is also used in matroid theory to characterize circuits (see for example Oxley [36]).

Behind the function normalForm$(s, G)$ there is the following algorithm, which returns $0$ if a representation $s = \sum \alpha_i g_i$ with finitely many $\alpha_i \in \mathbb{Z}_{>0}$ (or $\alpha_i \in \mathbb{R}_{>0}$), $g_i \in G$, and $\alpha_i g_i \sqsubseteq s$ is found, or which returns a vector $t$ such that a representation of this kind is possible for $G \cup \{t\}$.

The normalForm algorithm aims at finding such a representation by reducing $s$ by elements of $G$ in such a way that, if at some point of this reduction the zero vector is reached, a desired representation $s = \sum \alpha_i g_i$ has been found. If the reduction process terminates with a vector $t \ne 0$, then a desired representation $s = \sum \alpha_i g_i$ with vectors from $G \cup \{t\}$ has been constructed. The vector $t$ is called a normal form of $s$ with respect to the set $G$.

Algorithm 1.3.2 (Normal Form Algorithm)

Input: a vector s, a set G of vectors
Output: a normal form of s with respect to G

while there is some g ∈ G such that s is reducible by g do
    s := reduce s by g
return s

The reduction involved in the normalForm algorithm has to be defined separately for the integer and the continuous cases. For the special situation $\Lambda = \ker_{\mathbb{Z}^d}(A)$, our definition for the integer case leads to an algorithm already presented by Pottier [37] in the disguised form of a $d$-dimensional Euclidean algorithm.

1.3.1 Integer Case

Here we say that $s \in \mathbb{Z}^d$ can be reduced by $g \in \mathbb{Z}^d$ to $s - g$ if $g \sqsubseteq s$. Thus, in case of reducibility, we have $s = g + (s - g)$ with $g \sqsubseteq s$ and $s - g \sqsubseteq s$. Since $\|s - g\|_1 < \|s\|_1$, we conclude that normalForm$(s, G)$ always terminates.
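Taken together, Algorithms 1.3.1 and 1.3.2 with this reduction are easy to prototype. The following self-contained sketch is our own illustration (the names `conforms`, `normal_form`, and `complete` are hypothetical, and none of the efficiency devices of the actual MLP code are used): it instantiates the completion procedure with S-vectors$(f, g) = \{f + g\}$ and recovers the Graver test set of the example matrix $A = (1\ 1\ {-1})$ from the symmetric generating set $\{\pm(1,0,1), \pm(0,1,1)\}$ of $\ker_{\mathbb{Z}^3}(A)$:

```python
def conforms(g, s):
    # g ⊑ s: no component of g opposes the sign of s, and |g_j| <= |s_j|
    return all(gj * sj >= 0 and abs(gj) <= abs(sj) for gj, sj in zip(g, s))

def normal_form(s, G):
    # integer-case reduction: replace s by s - g whenever g ⊑ s
    while any(s):
        for g in G:
            if conforms(g, s):
                s = tuple(si - gi for si, gi in zip(s, g))
                break
        else:
            return s  # irreducible and non-zero
    return s          # reduced to zero

def complete(F):
    # Algorithm 1.3.1 with S-vectors(f, g) = {f + g}
    G = list(F)
    C = [tuple(fi + gi for fi, gi in zip(f, g)) for f in G for g in G]
    while C:
        s = C.pop()
        f = normal_form(s, G)
        if any(f):
            C.extend(tuple(fi + gi for fi, gi in zip(f, g)) for g in G)
            G.append(f)
    return set(G)

# symmetric generating set of ker_Z([1 1 -1]) over Z
F = [(1, 0, 1), (-1, 0, -1), (0, 1, 1), (0, -1, -1)]
```

For this small instance the completed set is exactly $\{\pm(1,0,1), \pm(0,1,1), \pm(1,-1,0)\}$; in general the output may also contain vectors that are not $\sqsubseteq$-minimal, and the IP Graver test set is extracted from it as the set of $\sqsubseteq$-minimal elements.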

To prove termination of the completion algorithm in the integer case we need the Gordan-Dickson Lemma (see for example Section 4.2 in [16]). The following is an equivalent geometric formulation.

Lemma 1.3.3 (Gordan-Dickson Lemma)
Let $\{p_1, p_2, \ldots\}$ be a sequence of points in $\mathbb{Z}^d_+$ such that $p_i \not\le p_j$ whenever $i < j$. Then this sequence is finite.

Lemma 1.3.4 With the above definitions of S-vectors and of normalForm for the integer case, the Completion Procedure 1.3.1 terminates and satisfies its specifications.

Proof. Termination of the above algorithm follows immediately by applying the Gordan-Dickson Lemma to the sequence $\{(v^+, v^-) : v \in G \setminus F\}$, where for $v \in \mathbb{Z}^d$ we define component-wise $v^+ := \max(0, v)$ and $v^- := (-v)^+ = \max(0, -v)$. To see this, note that $f = \text{normalForm}(s, G)$ implies that there is no $g \in G$ with $g \sqsubseteq f$, or in other words, there is no $g \in G$ with $(g^+, g^-) \le (f^+, f^-)$. Thus, the algorithm produces a sequence of vectors $G \setminus F = \{f_1, f_2, \ldots\}$ with $(f_i^+, f_i^-) \not\le (f_j^+, f_j^-)$ for $i < j$. Such a sequence is always finite by the Gordan-Dickson Lemma. Correctness of the algorithm follows immediately from Lemma 1.2.1, since upon termination $\text{normalForm}(v + w, G) = 0$ for all $v, w \in G$, giving a representation $v + w = \sum \alpha_i g_i$ with $\alpha_i \in \mathbb{Z}_{>0}$, $g_i \in G$, and $g_i \sqsubseteq v + w$. $\square$

1.3.2 Continuous Case

Here we say that $s \in \mathbb{R}^d$ can be reduced by $g \in \mathbb{R}^d$ if $\mathrm{supp}(g) \subseteq \mathrm{supp}(s)$. In case of reducibility, $s$ is reduced to $s - \lambda g$, where $\lambda \in \mathbb{R}$ is chosen in such a way that $\mathrm{supp}(s - \lambda g) \subsetneq \mathrm{supp}(s)$, which implies that normalForm$(s, G)$ always terminates. Moreover, $s = \lambda g + (s - \lambda g)$, where $\mathrm{supp}(\lambda g) \subseteq \mathrm{supp}(s)$ and $\mathrm{supp}(s - \lambda g) \subseteq \mathrm{supp}(s)$.

Remark 1.3.5 The main ingredient in the proof of Lemma 1.2.7 was that $f + \lambda g$ can be written as $f + \lambda g = \sum \alpha_i g_i$ with $g_i \in G$ and $\mathrm{supp}(g_i) \subseteq \mathrm{supp}(f + \lambda g)$ for all $i$. Note that this is less strict than the reduction defined above, since we do not require that $\mathrm{supp}(f + \lambda g - \alpha_i g_i) \subsetneq \mathrm{supp}(f + \lambda g)$ for some $i$. Reducing $f + \lambda g$ by some $g_i$ to $f + \lambda g - \alpha_i g_i$, however, would require this latter condition.

Thus, in later proofs, we will only construct representations $f + \lambda g = \sum \alpha_i g_i$ with $g_i \in G$ and $\mathrm{supp}(g_i) \subseteq \mathrm{supp}(f + \lambda g)$ for all $i$.

Lemma 1.3.6 With the above definitions of S-vectors and of normalForm for the continuous case, the Completion Procedure 1.3.1 terminates and satisfies its specifications.

Proof. Each vector in $G \setminus F$ must have a different support. Hence the algorithm terminates. Correctness of the above algorithm follows from Lemma 1.2.7, since upon termination $\text{normalForm}(v + \lambda w, G) = 0$ for all $v, w \in G$ and for all $\lambda$ such that $v + \lambda w$ contains a zero entry at some component at which both $v$ and $w$ are non-zero. This normalForm reduction to zero provides a representation $v + \lambda w = \sum \alpha_i g_i$ with $\alpha_i \in \mathbb{R}$, $g_i \in G$, and $\mathrm{supp}(\alpha_i g_i) \subseteq \mathrm{supp}(v + \lambda w)$, as required. $\square$

1.4 Conclusions

In this chapter we have presented the positive sum property, a property inherent to LP, IP, and MIP Graver bases, which already ensures their universal test set property. In Lemmas 1.2.1 and 1.2.7 we identified two criteria to check the positive sum property of a set with respect to an integer lattice or with respect to the (continuous) kernel of a given matrix. These criteria led us to the algorithmic pattern of a completion procedure in order to complete a given generating set of the lattice or of the kernel with respect to the positive sum property. As we will see in the subsequent chapters, this algorithmic pattern is closely related to Graver basis computations and reappears using more complicated structures than vectors. It will suffice to redefine the input set, the function normalForm, and the set of S-vectors that are employed by the completion procedure.

In the following we will introduce Graver test sets for LP, IP, and MIP.


Chapter 2

Graver Test Sets

In the sequel we introduce, separately for each situation, Graver test sets in LP, IP, and MIP as minimal sets of vectors having the positive sum property with respect to $\ker_X(A)$, where $X = \mathbb{R}^d$, $X = \mathbb{Z}^d$, and $X = \mathbb{Z}^{d_z} \times \mathbb{R}^{d_q}$, $d = d_z + d_q$, respectively.

LP and IP Graver test sets are always finite, as was already shown by Graver in 1975 [19]. We show how the two completion procedures presented in Section 1.3 for the integer and the continuous cases can be used to compute both sets.

Moreover, we will introduce truncated IP Graver test sets and present a completion algorithm to compute them.

MIP Graver test sets need not be finite in general. To demonstrate this, we present a small example similar to the one by Cook et al. [14]. Thereafter, we give a solution to this finiteness problem by presenting a finite set of integer vectors which is inherent to the MIP Graver test set and from which an improving vector to a non-optimal solution can be reconstructed by solving a finite number of pure LPs. Moreover, we will again employ a completion procedure to compute these finitely many integer vectors.

Having introduced Graver test sets, we will deal with termination of the Augmentation Algorithm 0.0.2. It turns out that some caution is appropriate in the LP case to avoid zig-zagging, even to non-optimal points. Finally, we will concentrate on finding initial feasible solutions to our problem. Such a solution is needed as input to the Augmentation Algorithm 0.0.2. Although there are algorithmic alternatives, we show that universal test sets can be used to solve this feasibility problem as well. Again, some care has to be taken in the LP case to ensure termination of this procedure.


2.1 IP Graver Test Sets

The notion of a Graver test set in IP is strongly related to the notion of a Hilbert basis, which we recall now. For further details we refer to Schrijver [42] and Weismantel [57].

Definition 2.1.1 (Hilbert Basis)
Let $C$ be a rational cone. A finite set $H = \{h_1, \ldots, h_t\} \subseteq C \cap \mathbb{Z}^d$ is called a Hilbert basis of $C$ if every $z \in C \cap \mathbb{Z}^d$ has a representation of the form

$$z = \sum_{i=1}^{t} \alpha_i h_i$$

with non-negative integral multipliers $\alpha_1, \ldots, \alpha_t$.

Note that every pointed rational cone possesses a unique Hilbert basis that is minimal with respect to inclusion.

Definition 2.1.2 Let $O_j$ be the $j$th orthant of $\mathbb{R}^d$ and $H_j$ the unique minimal Hilbert basis of $\ker_{\mathbb{R}^d}(A) \cap O_j$. Then we define $G_{IP}(A) := \bigcup H_j \setminus \{0\}$ to be the IP Graver test set (or IP Graver basis) of $A$.

As $G_{IP}(A)$ is a finite union of finite sets, $G_{IP}(A)$ has finite cardinality. By construction, the elements of $G_{IP}(A)$ are exactly the elements of $\ker_{\mathbb{Z}^d}(A) \setminus \{0\}$ that are minimal with respect to the partial ordering $\sqsubseteq$ on $\mathbb{Z}^d$. Moreover, if $v \in G_{IP}(A)$ then also $-v \in G_{IP}(A)$, that is, $G_{IP}(A)$ is symmetric.

Lemma 2.1.3 $G_{IP}(A)$ has the positive sum property with respect to $\ker_{\mathbb{Z}^d}(A)$.

Proof. Take any non-zero element $z \in \ker_{\mathbb{Z}^d}(A)$. Then $z$ belongs to some orthant $O_j$ and thus can be written as a positive integer linear combination of elements of the Hilbert basis $H_j \subseteq G_{IP}(A)$ of $\ker_{\mathbb{Z}^d}(A) \cap O_j$. $\square$

Corollary 2.1.4 $G_{IP}(A)$ is a universal test set for $(IP)_{c,b}$.

In Section 1.3 we have already seen how a generating set of $\ker_{\mathbb{Z}^d}(A)$ over $\mathbb{Z}$ can be completed with respect to the positive sum property. The resulting set $G$ has to contain the IP Graver test set, which consists of all $\sqsubseteq$-minimal elements in $G$.


In fact, this algorithm computes the set $G(\Lambda)$ of all $\sqsubseteq$-minimal elements in $\Lambda \setminus \{0\}$ for any integer lattice $\Lambda := \{z \in \mathbb{Z}^d : Az = 0,\ \bar{A} z \equiv 0 \pmod{p}\}$. We now show that $G(\Lambda)$ has the positive sum property with respect to $\Lambda$ and thus constitutes a universal test set for the family of optimization problems

$$(MOD)_{c,b,\bar{b}} : \quad \min\{c^\top z : Az = b,\ \bar{A} z \equiv \bar{b} \pmod{p},\ z \in \mathbb{Z}^d_+\},$$

as $c \in \mathbb{R}^d$, $b \in \mathbb{R}^l$, and $\bar{b} \in \mathbb{R}^{\bar{l}}$ vary.

Lemma 2.1.5 The set $G(\Lambda)$ has the positive sum property with respect to $\Lambda$.

Proof. We have to prove that every non-zero $z \in \Lambda$ can be written as a finite linear combination $z = \sum \alpha_i g_i$ with $\alpha_i \in \mathbb{Z}_{>0}$, $g_i \in G(\Lambda)$, and $g_i \sqsubseteq z$ for all $i$.

Let $z \in \Lambda \setminus \{0\}$ be given. By the definition of $G(\Lambda)$, there is some $g_1 \in G(\Lambda)$ with $g_1 \sqsubseteq z$. Thus, $z - g_1 \sqsubseteq z$ and $\|z - g_1\|_1 < \|z\|_1$. Continuing with the same argument for $z - g_1$ instead of $z$, the latter relation implies that we must eventually arrive at $0$. But then $z = \sum_{i=1}^{k} g_i$ with $g_i \in G(\Lambda)$ and $g_i \sqsubseteq z$ for all $i$, which completes the proof. $\square$
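The proof is constructive: greedily subtracting any conforming element strictly decreases the 1-norm and therefore terminates. A minimal sketch of this greedy decomposition (our own illustrative code; `graver` below is the Graver test set of the small example matrix $(1\ 1\ {-1})$, an assumption not taken from the text):

```python
def conforms(g, z):
    # g ⊑ z
    return all(gj * zj >= 0 and abs(gj) <= abs(zj) for gj, zj in zip(g, z))

def decompose(z, G):
    """Write z as a sum of elements of G that all conform to z,
    following the greedy argument in the proof of Lemma 2.1.5."""
    parts = []
    while any(z):
        g = next(g for g in G if conforms(g, z))  # exists whenever G = G(Λ)
        parts.append(g)
        z = tuple(zi - gi for zi, gi in zip(z, g))
    return parts

# Graver test set of the matrix (1 1 -1), used here as a toy instance
graver = [(1, 0, 1), (-1, 0, -1), (0, 1, 1), (0, -1, -1), (1, -1, 0), (-1, 1, 0)]
```

For instance, $z = (3, -1, 2) \in \ker_{\mathbb{Z}^3}((1\ 1\ {-1}))$ decomposes into conforming Graver elements whose sum returns $z$.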

Moreover, employing the same arguments as in the proof of Lemma 1.1.2, we conclude the following.

Corollary 2.1.6 The set $G(\Lambda)$ constitutes a universal test set for the family of optimization problems $(MOD)_{c,b,\bar{b}}$ as $c \in \mathbb{R}^d$, $b \in \mathbb{R}^l$, and $\bar{b} \in \mathbb{R}^{\bar{l}}$ vary.

Universal test sets (universal Gröbner bases in particular) for $(MOD)_{c,b,\bar{b}}$ were already considered in [50]. Here, we only want to emphasize that all algorithmic results in this thesis for $(IP)_{c,b}$ readily extend to $(MOD)_{c,b,\bar{b}}$, since only the lattice structure of $\ker_{\mathbb{Z}^d}(A)$ and the positive sum property with respect to this lattice are exploited in the proofs. However, for ease of exposition, we will restrict our attention to the special case $\Lambda = \ker_{\mathbb{Z}^d}(A)$.

2.2 Truncated Graver Test Sets for $(IP)_{c,b}$

The notion of a truncated Graver test set already appeared in a paper by Thomas and Weismantel [53]. Our algorithm to compute a truncated Graver test set could be deduced from their approach as well, by defining their partial order not on all components of the right-hand side $(b, u)$ but only on the components of $u$, the upper bounds.


Consider the family of optimization problems $\min\{c^\top z : Az = b,\ z \le u,\ z \in \mathbb{Z}^d_+\}$, where only the vectors $b \in \mathbb{R}^l$ and $c \in \mathbb{R}^d$ are allowed to vary. The upper bounds $u \in (\mathbb{Z} \cup \{\infty\})^d$ are assumed to remain fixed. Let $u^{(i)} = \infty$ if $z^{(i)}$ is not bounded from above. Prominent examples arise in combinatorial optimization as 0-1 problems.

Example. Before we introduce truncated Graver test sets, let us demonstrate how the algorithm presented in Section 1.3 deals with the upper bounds when it computes a Graver test set for this problem. Introduction of slack variables leads to the computation of $G_{\mathrm{IP}}(B)$, where
$$B = \begin{pmatrix} A & 0 \\ E & E \end{pmatrix}.$$
Herein, $E$ denotes the identity matrix of size $d$. Each element of $\ker_{\mathbb{Z}^{2d}}(B)$ has the form $(v, -v)$. Moreover, $(w, -w) \sqsubseteq (v, -v)$ if and only if $w \sqsubseteq v$. This shows that the computations of $G_{\mathrm{IP}}(A)$ and of $G_{\mathrm{IP}}(B)$ are isomorphic via the correspondence $v \leftrightarrow (v, -v)$. That is, to speed up the calculation of $G_{\mathrm{IP}}(B)$, we may first compute $G_{\mathrm{IP}}(A)$ and finally map each element $v \in G_{\mathrm{IP}}(A)$ to $(v, -v)$.

Thus, essentially, the bounds $z \le u$ are ignored during the computation, which is due to the fact that $u$ is assumed to vary as well. Therefore, computations are done in $\mathbb{Z}^{2d}$ and do not exploit the given fixed upper bounds. $\square$

However, the fixed upper bounds on the variables allow a truncation of the Graver test set $G_{\mathrm{IP}}(A)$ for $\min\{c^\top z : Az = b,\ z \in \mathbb{Z}^d_+\}$: since $v = z_1 - z_2$, only those vectors $v$ with $|v^{(i)}| \le u^{(i)}$ for all $i = 1, \ldots, d$ may transform a feasible solution $0 \le z_1 \le u$ into a feasible solution $z_2 = z_1 - v$ satisfying $0 \le z_2 \le u$.

The following lemma is an immediate consequence of the positive sum property of $G_{\mathrm{IP}}(A)$ with respect to $\ker_{\mathbb{Z}^d}(A)$.

Lemma 2.2.1 The truncated Graver test set
$$G_u(A) := G_{\mathrm{IP}}(A) \cap \{z \in \mathbb{Z}^d : -u \le z \le u\}$$
is a universal test set for $\min\{c^\top z : Az = b,\ z \le u,\ z \in \mathbb{Z}^d_+\}$, where only $b \in \mathbb{R}^l$ and $c \in \mathbb{R}^d$ may vary.

Proof. Take an arbitrary $v \in \ker_{\mathbb{Z}^d}(A) \cap \{z \in \mathbb{Z}^d : -u \le z \le u\}$. By the positive sum property of $G_{\mathrm{IP}}(A)$ with respect to $\ker_{\mathbb{Z}^d}(A)$, $v$ can be written as a positive integer linear combination $v = \sum \lambda_i g_i$ of elements $g_i \in G_{\mathrm{IP}}(A)$, with $\lambda_i \in \mathbb{Z}_{>0}$ and $g_i \sqsubseteq v$. The latter condition, however, together with $-u \le v \le u$ implies that also $-u \le g_i \le u$,


and thus $g_i \in G_u(A)$ for all $i$. Hence, $G_u(A)$ has the positive sum property with respect to $\ker_{\mathbb{Z}^d}(A) \cap \{z \in \mathbb{Z}^d : -u \le z \le u\}$. As in the proof of Lemma 1.1.2, this immediately implies that $G_u(A)$ is a universal test set for $\min\{c^\top z : Az = b,\ z \le u,\ z \in \mathbb{Z}^d_+\}$, where only $b \in \mathbb{R}^l$ and $c \in \mathbb{R}^d$ may vary. $\square$

Example. To see that the truncated Graver test set $G_u(A)$ may be much smaller than $G_{\mathrm{IP}}(A)$, consider the problem $\min\{c^\top z : (k\ 1\ 1)z = b,\ z \le \mathbf{1},\ z \in \mathbb{Z}^3_+\}$ for a given integer $k > 2$. Then we have
$$G_{\mathrm{IP}}(A) = \{\pm(0,1,-1), \pm(1,0,-k), \pm(1,-1,-(k-1)), \ldots, \pm(1,-k,0)\},$$
but $G_{(1,1,1)}(A) = \{\pm(0,1,-1)\}$. Note that this truncated test set does not change if we do not impose a finite upper bound on $z_1$. $\square$
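The truncation of this example is easy to reproduce; the following sketch (an illustration written for this thesis excerpt, not the thesis implementation, shown for $k = 4$) builds the set $G_{\mathrm{IP}}(A)$ listed above and filters it by the bounds, as in Lemma 2.2.1:

```python
k = 4
# G_IP(A) for A = (k 1 1), as listed above: ±(0,1,-1) and ±(1,-j,-(k-j)), j = 0..k
G_IP = [(0, 1, -1), (0, -1, 1)]
for j in range(k + 1):
    G_IP += [(1, -j, -(k - j)), (-1, j, k - j)]

# truncation by the upper bounds u = (1,1,1): keep only v with |v_i| <= u_i
u = (1, 1, 1)
G_u = [v for v in G_IP if all(abs(x) <= bound for x, bound in zip(v, u))]
print(G_u)  # only ±(0, 1, -1) survive
```

For $k = 4$ the twelve vectors of $G_{\mathrm{IP}}(A)$ shrink to the two vectors $\pm(0,1,-1)$, matching $G_{(1,1,1)}(A)$ above.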

Therefore, truncation may drastically reduce the number of interesting vectors in a Graver test set. Of major importance, however, is the fact that $G_u(A)$ can be computed directly, without first computing the much bigger set $G_{\mathrm{IP}}(A)$. In the following, we present an algorithm which computes all $\sqsubseteq$-minimal solutions in the set $\{z \in \mathbb{Z}^d : Az = 0,\ l \le z \le u\} \setminus \{0\}$, where $l \le 0 \le u$. Thus, for special choices of $l$ and $u$, this includes algorithmic solutions to the following problems:

- For $l = 0$ and $u = \mathbf{1}$ (component-wise), the algorithm computes (partial) test sets for combinatorial optimization problems.
- For $l = -u$, the algorithm computes the desired truncated Graver test set $G_u(A)$.
- For $l = -\infty$ and $u = +\infty$, the algorithm is similar to Algorithm 2.1.7 in Urbaniak [54] and computes $G_{\mathrm{IP}}(A)$. However, it is much slower than Algorithm 1.3.1 (Section 1.3), since the computation is done in a higher-dimensional space.
- For $b = 0$, $l = 0$, and $u = +\infty$, the algorithm computes the unique minimal Hilbert basis of the pointed rational cone $\ker_{\mathbb{R}^d}(A) \cap \mathbb{R}^d_+$.

Analogously to the case of integer lattices (Lemma 2.1.5), one can prove that the set of all $\sqsubseteq$-minimal solutions in the set $\{z \in \mathbb{Z}^d : Az = 0,\ l \le z \le u\} \setminus \{0\}$ has the positive sum property with respect to $\{z \in \mathbb{Z}^d : Az = 0,\ l \le z \le u\}$. Thus, it is not surprising that the computation of these $\sqsubseteq$-minimal solutions follows the pattern of a completion procedure as given in Algorithm 1.3.1. We now have to specify the input set, the reduction needed in the normalForm algorithm, and the set of S-vectors.

As input set we choose $F = \bigcup_{i : l^{(i)} < 0} \{(-e_i, -Ae_i)\} \cup \bigcup_{i : u^{(i)} > 0} \{(e_i, Ae_i)\}$, where $e_i$ denotes the $i$th unit vector in $\mathbb{R}^d$. Moreover, $s \in \mathbb{Z}^{d+l}$ reduces by $g \in \mathbb{Z}^{d+l}$ to $s - g$ if $g \sqsubseteq s$.


Finally, we define
$$\text{S-vectors}((v, Av), (w, Aw)) := \begin{cases} \{(v + w,\ A(v + w))\} & \text{if } l \le v + w \le u \text{ and both } v \text{ and } w \\ & \text{lie in the same orthant of } \mathbb{R}^d, \\ \emptyset & \text{otherwise.} \end{cases}$$
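With these three ingredients fixed, the completion procedure can be sketched compactly. The following is an unoptimized illustration written for this excerpt, not the thesis implementation: `lift` forms the pairs $(v, Av)$, the inner loop plays the role of normalForm, and S-vectors are only built under the orthant and bound conditions above.

```python
def conforms(g, s):
    # g ⊑ s: g and s lie in the same orthant and |g_i| <= |s_i| for all i
    return all(a * b >= 0 and abs(a) <= abs(b) for a, b in zip(g, s))

def vminimal_kernel_solutions(A, l, u):
    """All ⊑-minimal nonzero z with Az = 0 and l <= z <= u (completion sketch)."""
    d = len(A[0])
    def lift(v):  # augment v by Av, mirroring the pairs (v, Av)
        return v + tuple(sum(row[i] * v[i] for i in range(d)) for row in A)
    # input set F: unit vectors e_i with u_i > 0, and -e_i with l_i < 0
    units = [tuple(s if j == i else 0 for j in range(d))
             for i in range(d) for s in (1, -1)
             if (s == 1 and u[i] > 0) or (s == -1 and l[i] < 0)]
    G = [lift(v) for v in units]
    pairs = [(f, g) for f in G for g in G]
    while pairs:
        f, g = pairs.pop()
        v, w = f[:d], g[:d]
        s = tuple(a + b for a, b in zip(v, w))
        # S-vector only if v, w share an orthant and v + w respects the bounds
        if not all(a * b >= 0 for a, b in zip(v, w)):
            continue
        if not all(l[i] <= s[i] <= u[i] for i in range(d)):
            continue
        h = lift(s)
        reduced = True          # normalForm: subtract ⊑-divisors until irreducible
        while reduced and any(h):
            reduced = False
            for g2 in G:
                if conforms(g2, h):
                    h = tuple(a - b for a, b in zip(h, g2))
                    reduced = True
                    break
        if any(h):              # nonzero normal form: add it, schedule new pairs
            pairs += [(h, g2) for g2 in G] + [(g2, h) for g2 in G] + [(h, h)]
            G.append(h)
    # keep the z with Az = 0 whose pair (z, 0) is irreducible w.r.t. G \ {(z, 0)}
    sols = [g for g in G if not any(g[d:])]
    return sorted(g[:d] for g in sols
                  if not any(conforms(g2, g) for g2 in G if g2 != g))
```

For $A = (3\ 1\ 1)$ and $l = -u$ with $u = (1,1,1)$, this returns exactly $\pm(0,1,-1)$, in agreement with the truncated Graver set of the example above.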

Lemma 2.2.2 With the above definitions of input set, normalForm, and S-vectors, the Completion Algorithm 1.3.1 terminates and returns a set $G$ which contains $(z_0, 0)$ for all $\sqsubseteq$-minimal integer solutions $z_0$ of the system $Az = 0$, $l \le z \le u$, $z \ne 0$. The set of all $\sqsubseteq$-minimal integer solutions to $Az = 0$, $l \le z \le u$, $z \ne 0$, is the set of all those vectors $z$ such that $(z, 0) \in G$ and such that $(z, 0)$ is irreducible with respect to $G \setminus \{(z, 0)\}$.

The following proof is similar to that of Lemma 1.2.1. However, we use the special input set $F$ to show that far fewer S-vectors need to be considered here than in Algorithm 1.3.1.

Proof. Again, termination of the above algorithm follows immediately by applying the Gordan-Dickson Lemma to the sequence $\{(v^+, v^-) : v \in G \setminus F\}$ as it is generated by the algorithm.

For the correctness proof, let $G$ denote the set that is returned by the algorithm. We show that any non-zero vector $(z, 0)$ with $z \in \ker_{\mathbb{Z}^d}(A)$ and $l \le z \le u$ can be written as a positive integer linear combination of elements of $G$, where each vector in this combination lies in the same orthant as $(z, 0)$. From this the claim immediately follows, since there is exactly one such representation, $(z, 0) = 1 \cdot (z, 0)$, for all those $z \in \ker_{\mathbb{Z}^d}(A) \setminus \{0\}$ that are minimal with respect to $\sqsubseteq$.

Since $G$ contains the (particularly nice) symmetric input set $F$, the vector $(z, 0)$ can be written as a positive integer linear combination $(z, 0) = \sum \alpha_i (v_i, Av_i)$ with $\alpha_i \in \mathbb{Z}_{>0}$ and vectors $(v_i, Av_i) \in G$, $v_i \sqsubseteq z$ for all $i$. From the set of all such positive integer linear combinations $\sum \alpha_i (v_i, Av_i)$, choose one such that $\sum \alpha_i \|Av_i\|_1$ is minimal. Note that $\sum \alpha_i \|Av_i\|_1 \ge 0$, with equality if and only if $\|Av_i\|_1 = 0$ for all $i$.

If $\sum \alpha_i \|Av_i\|_1 = 0$ were true, we could conclude $Av_i = 0$ for all $i$, and we had found a desired integer linear representation of $z$. Thus, assume on the contrary that $\sum \alpha_i \|Av_i\|_1 > 0$ holds. Then there have to exist $(v_{i_1}, Av_{i_1})$ and $(v_{i_2}, Av_{i_2})$ such that $(Av_{i_1})^{(k_0)} (Av_{i_2})^{(k_0)} < 0$ for some component $k_0$. By construction, $v_{i_1}$ and $v_{i_2}$ lie in the same orthant of $\mathbb{R}^d$, and from $v_{i_1} + v_{i_2} \sqsubseteq z$ we conclude $l \le -z^- \le v_{i_1} + v_{i_2} \le z^+ \le u$. Thus, the vector


$(v_{i_1}, Av_{i_1}) + (v_{i_2}, Av_{i_2})$ is an S-vector and was reduced to $0$ by $G$ in the course of the algorithm. This gives a representation $(v_{i_1}, Av_{i_1}) + (v_{i_2}, Av_{i_2}) = \sum \beta_j (v'_j, Av'_j)$ for some positive integers $\beta_j$ and some vectors $(v'_j, Av'_j) \in G$. Moreover, $\beta_j (v'_j, Av'_j) \sqsubseteq (v_{i_1}, Av_{i_1}) + (v_{i_2}, Av_{i_2})$ holds for all $j$, implying that $v'_j \sqsubseteq z$ and that for each component $k$
$$\sum \beta_j \bigl|(Av'_j)^{(k)}\bigr| = \Bigl|\sum \beta_j (Av'_j)^{(k)}\Bigr| = \bigl|(A(v_{i_1} + v_{i_2}))^{(k)}\bigr| \le \bigl|(Av_{i_1})^{(k)}\bigr| + \bigl|(Av_{i_2})^{(k)}\bigr|$$
holds, where the last inequality is strict for $k = k_0$ by construction. Summation over $k$ now gives
$$\sum \beta_j \|Av'_j\|_1 = \|A(v_{i_1} + v_{i_2})\|_1 < \|Av_{i_1}\|_1 + \|Av_{i_2}\|_1.$$
This implies that
$$\sum \beta_j (v'_j, Av'_j) + (\alpha_{i_1} - 1)(v_{i_1}, Av_{i_1}) + (\alpha_{i_2} - 1)(v_{i_2}, Av_{i_2}) + \sum_{i \ne i_1, i_2} \alpha_i (v_i, Av_i)$$
is a valid integer linear representation of $(z, 0)$ that contradicts the minimality of $\sum \alpha_i \|Av_i\|_1$. We conclude that already $\sum \alpha_i \|Av_i\|_1 = 0$, and the claim follows. $\square$

Lemma 2.2.3 If $A = (A' \mid E)$, then no additional variables are needed.

Proof. The input vectors of the above algorithm become $(e_i, A'e_i, e_i)$. Thus, each vector that appears in the algorithm has the form $(v, A'v, v)$. Since $(w, A'w, w) \sqsubseteq (v, A'v, v)$ if and only if $(w, A'w) \sqsubseteq (v, A'v)$, we can ignore the additional variables. $\square$

A similar proof works for matrices $A = (A' \mid -E)$, or for the case where only some of the equations arise from inequalities via slack variables. Thus, we can avoid all, or at least some, of the additional variables, which leads to an algorithmic speed-up.

2.3 LP Graver Test Sets

For every orthant $O_j$ of $\mathbb{R}^d$, consider the pointed rational cone $C_j := \ker_{\mathbb{R}^d}(A) \cap O_j$. Up to scalar factors, this cone has a unique inclusion-minimal generating set over $\mathbb{R}$. Since $C_j$ is a (pointed) rational cone, we may scale each generator to have only integer components with greatest common divisor $1$.


Definition 2.3.1 For every orthant $O_j$ of $\mathbb{R}^d$, let $H_j$ be the unique minimal generating set over $\mathbb{R}$ of the pointed rational cone $\ker_{\mathbb{R}^d}(A) \cap O_j$, where the components of each generator are scaled to integers with greatest common divisor $1$. Then we define $G_{\mathrm{LP}}(A) := \bigcup H_j$ to be the LP Graver test set (or LP Graver basis) of $A$.

As $G_{\mathrm{LP}}(A)$ is a finite union of finite sets, $G_{\mathrm{LP}}(A)$ has finite cardinality.

Lemma 2.3.2 $G_{\mathrm{LP}}(A)$ has the positive sum property with respect to $\ker_{\mathbb{R}^d}(A)$. Thus, $G_{\mathrm{LP}}(A)$ is a universal test set for $(LP)_{c,b}$.

Proof. Take any non-zero element $z \in \ker_{\mathbb{R}^d}(A)$. Then $z$ belongs to some orthant $O_j$ and can be written as a positive linear combination of elements of the generating set $H_j \subseteq G_{\mathrm{LP}}(A)$ of $\ker_{\mathbb{R}^d}(A) \cap O_j$. The second part follows from Lemma 1.1.2. $\square$

Lemma 2.3.3 $G_{\mathrm{LP}}(A)$ coincides with the set of all circuits of $A$.

Proof. We show that $G_{\mathrm{LP}}(A)$ contains every circuit. The claim then follows from the positive sum property of the set of circuits, Lemma 1.2.6. Let $q$ be a circuit of $A$ belonging to some orthant $O_j$ of $\mathbb{R}^d$. Then $q$ can be written as $q = \sum \alpha_i g_i$, where $\alpha_i > 0$ and $g_i \in H_j$. This implies $\alpha_i g_i \sqsubseteq q$ for all $i$. We conclude that $\mathrm{supp}(g_i) \subseteq \mathrm{supp}(q)$, and thus $q = g_i$ by Lemma 1.2.4. $\square$

In Section 1.3 we have already seen how a generating set of $\ker_{\mathbb{R}^d}(A)$ over $\mathbb{R}$ can be completed with respect to the positive sum property. The resulting set $G$ has to contain a scalar multiple of each circuit of $A$. Thus, the LP Graver test set consists of all support-minimal elements in $G$, normalized to integer vectors whose components have greatest common divisor $1$. In this way we may compute the LP Graver test set.
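For a single-row matrix the circuits can be written down directly, which gives a small sanity check for Lemma 2.3.3. The following sketch (an illustration for this excerpt, covering only the $1 \times d$ case; the general case requires support-minimal kernel vectors of a full matrix) lists all circuits of $A = (a_1 \ \ldots \ a_d)$:

```python
from math import gcd
from itertools import combinations

def circuits_of_row(a):
    """Circuits of the 1×d matrix A = (a_1 … a_d): support-minimal integer
    kernel vectors, scaled to coprime components (1-row case only)."""
    d = len(a)
    C = set()
    for i in range(d):          # a_i = 0 gives the circuits ±e_i
        if a[i] == 0:
            v = tuple(1 if j == i else 0 for j in range(d))
            C.update({v, tuple(-x for x in v)})
    for i, j in combinations(range(d), 2):
        if a[i] and a[j]:       # support {i, j}: a_j e_i - a_i e_j, made coprime
            g = gcd(abs(a[i]), abs(a[j]))
            v = [0] * d
            v[i], v[j] = a[j] // g, -a[i] // g
            C.update({tuple(v), tuple(-x for x in v)})
    return sorted(C)
```

For $A = (4\ 1\ 1)$ this yields $\pm(0,1,-1)$, $\pm(1,-4,0)$, and $\pm(1,0,-4)$, which by Lemma 2.3.3 is exactly $G_{\mathrm{LP}}(A)$.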

2.4 MIP Graver Test Sets

2.4.1 Introduction

Let $A = (A_1 \mid A_2)$, where the columns of $A_1$ and $A_2$ correspond to the integer and continuous variables, respectively. Throughout this section, $A_1$ and $A_2$ are assumed to be integer matrices of sizes $l \times d_1$ and $l \times d_2$. Analogously, we subdivide $c = (c_z, c_q)$, where $c_z \in \mathbb{R}^{d_1}$ and $c_q \in \mathbb{R}^{d_2}$. Let $z \in \mathbb{Z}^{d_1}$ and $q \in \mathbb{R}^{d_2}$ denote the integer and continuous variables, respectively, and let $d = d_1 + d_2$. Our aim is to construct a universal test set for the family of optimization problems $(P)_{c,b}$ as $c \in \mathbb{R}^d$ and $b \in \mathbb{R}^l$ vary.

Again, a Graver test set in the mixed-integer situation is defined to be an inclusion-minimal subset of $\ker_X(A)$ which has the positive sum property with respect to $\ker_X(A)$. This leads us to the following set of vectors.

Definition 2.4.1 The MIP Graver test set $G_{\mathrm{MIP}}(A)$ contains all vectors
- $(0, q)$ with $q \in G_{\mathrm{LP}}(A_2)$, and
- $(z, q) \in \ker_X(A)$ with $z \ne 0$ such that there is no $(z', q') \in \ker_X(A) \setminus \{0, (z, q)\}$ satisfying $(z', q') \sqsubseteq (z, q)$.

The proof of the following lemma can be found in [17]. (To be self-contained, we reproduce its proof in the appendix, Section 2.7.)

Lemma 2.4.2 $G_{\mathrm{MIP}}(A)$ is an inclusion-minimal set which has the positive sum property with respect to $\ker_X(A)$. Consequently, $G_{\mathrm{MIP}}(A)$ is a universal test set for $(P)_{c,b}$.

The set $G_{\mathrm{MIP}}(A)$, however, need not be finite in general. To demonstrate this, we present an example similar to the one by Cook et al. [14].

Example. Consider the family of optimization problems
$$\min\{c_0 z + c_1 q_1 + c_2 q_2 : z + q_1 + q_2 = b,\ z \in \mathbb{Z}_+,\ q_1, q_2 \in \mathbb{R}_+\}$$
as $c = (c_0, c_1, c_2) \in \mathbb{R}^3$ and $b \in \mathbb{R}$ vary. Thus, we have $A = (A_1 \mid A_2)$ where $A_1 = (1)$ and $A_2 = (1\ 1)$. Every element in $\ker_{\mathbb{Z} \times \mathbb{R}^2}(A)$ can be written as a linear combination $\lambda_1 (1, -1, 0) + \lambda_2 (0, 1, -1)$ with $\lambda_1 \in \mathbb{Z}$ and $\lambda_2 \in \mathbb{R}$. We will prove that for every $\lambda \in \mathbb{R}$, $0 < \lambda < 1$, the vector
$$u := (1, -1, 0) + \lambda (0, 1, -1) = (1, -1 + \lambda, -\lambda) \in \ker_{\mathbb{Z} \times \mathbb{R}^2}(A) \setminus \{0\}$$
cannot be written as a sum $v + w$ with $v, w \in \ker_{\mathbb{Z} \times \mathbb{R}^2}(A) \setminus \{0\}$ and $v, w \sqsubseteq u$.

Suppose on the contrary that such vectors $v, w$ exist. Since $v, w \sqsubseteq u$, one of the two vectors, say $v$, must have a zero integer component. Hence $v = \gamma (0, 1, -1)$ for some $\gamma \in \mathbb{R}$. But clearly, for no choice of $\gamma \in \mathbb{R}$ and of $\lambda \in \mathbb{R}$, $0 < \lambda < 1$, do the vectors $(1, -1 + \lambda, -\lambda)$ and $\gamma (0, 1, -1)$ belong to the same orthant of $\mathbb{R}^3$, which is a contradiction to $v \sqsubseteq u$. We conclude that for all $\lambda \in \mathbb{R}$, $0 < \lambda < 1$, the vector $(1, -1 + \lambda, -\lambda)$ is $\sqsubseteq$-minimal and thus belongs to $G_{\mathrm{MIP}}(A)$. $\square$


Having only an infinite test set available, we cannot yet make algorithmic use of the Augmentation Algorithm 0.0.2 to improve a feasible initial solution to optimality. However, we will construct a finite set $G_{\mathbb{Z}}(A) \subseteq \mathbb{Z}^{d_1}$ from which an improving vector for a non-optimal solution of the given problem can be reconstructed in finitely many steps. Thus, the Augmentation Algorithm 0.0.2 can be employed again to find an optimal solution.

2.4.2 Finitely many Integer Parts

Consider the projection $\pi : \mathbb{Z}^{d_1} \times \mathbb{R}^{d_2} \to \mathbb{Z}^{d_1}$ which maps each mixed-integer vector onto its $d_1$ integer components. Define $G_{\mathbb{Z}}(A) = G_{\mathbb{Z}}(A_1 \mid A_2) := \pi(G_{\mathrm{MIP}}(A))$ to be the set of images of the elements in $G_{\mathrm{MIP}}(A)$.

Lemma 2.4.3 $G_{\mathbb{Z}}(A_1 \mid A_2)$ is finite for every matrix $A \in \mathbb{Z}^{l \times d}$ and for any subdivision $A = (A_1 \mid A_2)$.

Proof. Each element $(z, q) \in G_{\mathrm{MIP}}(A)$ satisfies $\|z\|_1 \le \gamma(A_1, A_2)$ for some scalar $\gamma(A_1, A_2)$ which depends only on $A_1$ and $A_2$; this can be true only for finitely many vectors $z \in \mathbb{Z}^{d_1}$ ([17]; again, to be self-contained, this is reproduced in the appendix, Section 2.7). $\square$

In [28], Köppe obtained the above finiteness result by a geometric description of the rational parts of mixed-integer test set elements when the integer parts are kept fixed. The analysis in [28] then also led to the conclusions we obtain in the following lemma.

Lemma 2.4.4 Given vectors $b, c$ and a non-optimal feasible solution $(z_0, q_0)$ to $A_1 z + A_2 q = b$, there exists $z_1 \in G_{\mathbb{Z}}(A)$ from which an improving vector $(z_1, q_1)$ for $(z_0, q_0)$ can be reconstructed.

Proof. Since $(z_0, q_0)$ is not optimal, there is an improving vector $(z', q') \in G_{\mathrm{MIP}}(A)$. Thus, an improving vector can be reconstructed from $z_1 := z' \in G_{\mathbb{Z}}(A)$. It remains to show that we can find $z_1$ and a suitable $q_1$ algorithmically.

For every $z_1 \in G_{\mathbb{Z}}(A)$ with $z_0 - z_1 \ge 0$, we try to find $q_1$ such that $(z_0 - z_1, q_0 - q_1)$ is feasible and such that the objective value $c^\top (z_0 - z_1, q_0 - q_1)$ is as small as possible. If $c^\top (z_0 - z_1, q_0 - q_1) < c^\top (z_0, q_0)$, then $(z_1, q_1)$ is an improving vector. This problem is equivalent to the optimization problem
$$q_1 \in \operatorname{argmax}_q \{c_z^\top z_1 + c_q^\top q : A_1(z_0 - z_1) + A_2(q_0 - q) = b,\ q_0 - q \ge 0,\ q \in \mathbb{R}^{d_2}\}. \tag{2.1}$$


Clearly, there is an improving vector for $(z_0, q_0)$ in $G_{\mathrm{MIP}}(A)$ starting with $z_1$ if and only if $(z_1, q_1)$ is an improving vector, that is, if and only if the above maximization problem (2.1) has a feasible solution with strictly positive cost function value. $\square$

Corollary 2.4.5 If for all $z_1 \in G_{\mathbb{Z}}(A)$ the maximal value of (2.1) is non-positive, then $(z_0, q_0)$ is optimal.

Proof. If the solution $(z_0, q_0)$ is not optimal, then there has to exist an improving vector $(z_1, \bar q) \in G_{\mathrm{MIP}}(A_1 \mid A_2)$. But for this $z_1$ the maximal value of (2.1) is at least $c_z^\top z_1 + c_q^\top \bar q > 0$, contradicting the assumption that no such $z_1$ exists. $\square$

We need to solve many LP subproblems in order to reconstruct an improving vector. However, compared to the original mixed-integer problem, these LP computations can be considered cheap. The advantages of our approach are clear: $G_{\mathbb{Z}}(A)$ is always a finite set of vectors from which, for every given $b \in \mathbb{R}^l$, improving vectors for non-optimal solutions can be reconstructed. Moreover, this approach allows us to use powerful LP solvers to take care of the continuous part of our mixed-integer problem once the set $G_{\mathbb{Z}}(A)$ has been computed. Finally, note that this project-and-lift approach is not restricted to MIP Graver test sets; it can be used for any other type of mixed-integer test set as well.
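To make the project-and-lift idea concrete on the small instance from Section 2.4.1 ($A_1 = (1)$, $A_2 = (1\ 1)$), the following sketch assumes $G_{\mathbb{Z}}(A) \subseteq \{-1, 0, 1\}$ (a plausible guess for this $A$, since the $\sqsubseteq$-minimal elements exhibited above all have integer part $0$ or $\pm 1$, but an assumption of this illustration) and solves the one-constraint instance of the LP (2.1) in closed form instead of calling an LP solver:

```python
def solve_toy(b, c):
    """min{c·(z,q1,q2) : z + q1 + q2 = b, z in Z_+, q in R_+^2} by augmentation
    over candidate integer parts z1 in {-1, 0, 1} (assumed to cover G_Z(A))."""
    cz, c1, c2 = c
    def lift(z):
        # best continuous completion for integer part z: here the LP (2.1)
        # simply puts the remainder b - z on the cheaper q-coordinate
        r = b - z
        if z < 0 or r < 0:
            return None
        return (z, r, 0.0) if c1 <= c2 else (z, 0.0, r)
    def val(x):
        return cz * x[0] + c1 * x[1] + c2 * x[2]
    x = lift(0)                      # feasible start (assumes b >= 0)
    improved = True
    while improved:                  # Augmentation Algorithm, project-and-lift style
        improved = False
        for z1 in (-1, 0, 1):        # candidate integer parts from G_Z(A)
            y = lift(x[0] - z1)
            if y is not None and val(y) < val(x) - 1e-9:
                x, improved = y, True
                break
    return x
```

With $b = 2.5$ and $c = (1, 2, 3)$, for instance, the integer part walks $0 \to 1 \to 2$ and the method stops at $(2, 0.5, 0)$ with objective value $3$; each step solved one trivial LP rather than consulting the infinite set $G_{\mathrm{MIP}}(A)$.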

2.4.3 Computation

In the following we show how to compute a superset of $G_{\mathbb{Z}}(A)$. To this end, we assume that we have available an algorithm to solve the following problem.

Problem 2.4.6 Let $A \in \mathbb{Z}^{l \times d}$. For any $b \in \mathbb{R}^l$, define $P_{A,b} := \{z : Az = b,\ z \in \mathbb{R}^d_+\}$. For given $b_1, b_2 \in \mathbb{R}^l$, decide whether $P_{A,b_1+b_2} = P_{A,b_1} + P_{A,b_2}$, where $P_{A,b_1} + P_{A,b_2}$ denotes the Minkowski sum of $P_{A,b_1}$ and $P_{A,b_2}$.

To compute $G_{\mathbb{Z}}(A)$, we will again employ the Completion Algorithm 1.3.1 by defining the necessary notions.

Definition 2.4.7 Let $\bar F$ be a symmetric integer generating set for the integer lattice
$$\{z \in \mathbb{Z}^k : \exists\, q \in \mathbb{R}^{d-k} \text{ with } (z, q) \in \ker_{\mathbb{Z}^k \times \mathbb{R}^{d-k}}(A)\}.$$
Let $F := \{(v, P_{(A_2 \mid -A_2), -A_1 v}) : v \in \bar F\}$ be the input set to the completion algorithm. For better readability, we will write $P_v$ instead of $P_{(A_2 \mid -A_2), -A_1 v}$.


Thus, we apply the completion procedure to more complicated structures than vectors. Therefore, the test $f \ne 0$ in the Completion Algorithm 1.3.1 has to be replaced by $f \ne (0, P_0)$.

Definition 2.4.8 We define $\text{S-vectors}((u, P_u), (u', P_{u'})) := \{(u, P_u) \oplus (u', P_{u'})\}$, where
$$(u, P_u) \oplus (u', P_{u'}) := (u + u', P_{u+u'}).$$
We say that $(u', P_{u'})$ reduces $(u, P_u)$, or $(u', P_{u'}) \sqsubseteq (u, P_u)$ for short, if $u' \sqsubseteq u$ and $P_u = P_{u'} + P_{u-u'}$. In case of reducibility, $(u, P_u)$ is reduced to
$$(u, P_u) \ominus (u', P_{u'}) := (u - u', P_{u-u'}).$$

In order to prove termination of the algorithm below, the following sufficient condition for checking $P_{A,b_1+b_2} = P_{A,b_1} + P_{A,b_2}$ will turn out to be very useful.

Lemma 2.4.9 Let $A \in \mathbb{Z}^{l \times d}$ be of full row rank, and let $b_1, b_2 \in \mathbb{R}^l$ be such that $P_{A,b_1} \ne \emptyset$ and $P_{A,b_2} \ne \emptyset$. Moreover, suppose that for every invertible $l \times l$ submatrix $B$ of $A$ the vectors $B^{-1} b_1$ and $B^{-1} b_2$ lie in the same orthant of $\mathbb{R}^l$. Then $P_{A,b_1+b_2} = P_{A,b_1} + P_{A,b_2}$.

A proof of this lemma can be found, for example, in [28]. We will now collect some results that will turn out useful when proving termination of the completion algorithm.

Corollary 2.4.10 Let $A \in \mathbb{Z}^{l \times d}$ be of full row rank, and denote by $B_1, \ldots, B_M$ all invertible $l \times l$ submatrices of $A$. Moreover, we define for each $z \in \mathbb{Z}^d$ the vector $f(A, z) := (z, \det(B_1) B_1^{-1} (Az), \ldots, \det(B_M) B_M^{-1} (Az)) \in \mathbb{Z}^{d + Ml}$. Then the relation $(z, P_{A,Az}) \not\sqsubseteq (z', P_{A,Az'})$ implies $f(A, z) \not\sqsubseteq f(A, z')$.

Proof. Suppose that $f(A, z) \sqsubseteq f(A, z')$. Then the vectors $f(A, z)$, $f(A, z')$, and $f(A, z' - z) = f(A, z') - f(A, z)$ all lie in the same orthant of $\mathbb{R}^{d + Ml}$. Thus, $z \sqsubseteq z'$, and the vectors $\det(B_i) B_i^{-1} (Az)$ and $\det(B_i) B_i^{-1} (A(z' - z))$ lie in the same orthant of $\mathbb{R}^l$ for all $i = 1, \ldots, M$. Hence, $P_{A,Az'} = P_{A,A(z'-z)} + P_{A,Az}$ by Lemma 2.4.9, and therefore $(z, P_{A,Az}) \sqsubseteq (z', P_{A,Az'})$. Thus, $(z, P_{A,Az}) \not\sqsubseteq (z', P_{A,Az'})$ implies $f(A, z) \not\sqsubseteq f(A, z')$, as claimed. $\square$

Corollary 2.4.11 Let $A \in \mathbb{Z}^{l \times d}$ and let $\{u_1, u_2, \ldots\}$ be a sequence in $\mathbb{Z}^d$. Then every sequence $\{(u_1, P_{A,Au_1}), (u_2, P_{A,Au_2}), \ldots\}$ with $(u_i, P_{A,Au_i}) \not\sqsubseteq (u_j, P_{A,Au_j})$ whenever $i < j$ is finite.

Proof. Consider the sequence $\{f(A, u_1), f(A, u_2), \ldots\}$ of vectors in $\mathbb{Z}^{d + Ml}$, which satisfies $f(A, u_i) \not\sqsubseteq f(A, u_j)$ whenever $i < j$, by Corollary 2.4.10. Applying the Gordan-Dickson Lemma to the sequence
$$\{(f(A, u_1)^+, f(A, u_1)^-), (f(A, u_2)^+, f(A, u_2)^-), \ldots\} \subseteq \mathbb{Z}^{2(d + Ml)},$$
which satisfies $(f(A, u_i)^+, f(A, u_i)^-) \not\le (f(A, u_j)^+, f(A, u_j)^-)$ whenever $i < j$, we conclude that this sequence, and thus also the sequences $\{f(A, u_1), f(A, u_2), \ldots\}$ and $\{(u_1, P_{A,Au_1}), (u_2, P_{A,Au_2}), \ldots\}$, are finite. $\square$

Proposition 2.4.12 If the input set, the procedure normalForm, and the set S-vectors are defined as above, then the Completion Algorithm 1.3.1 terminates and computes a set $G$ such that $G_{\mathbb{Z}}(A) \subseteq \{u : (u, P_u) \in G\}$.

Proof. In the course of the algorithm, a sequence of pairs in $G \setminus F$ is generated that satisfies the conditions of Corollary 2.4.11. Therefore, the algorithm terminates.

To show correctness, we have to prove that
$$\bar G := \{(\bar u, \bar v) \in \mathbb{Z}^{d_1} \times \mathbb{R}^{d_2} : (\bar u, P_{\bar u}) \in G,\ (\bar v^+, \bar v^-) \in P_{\bar u}\}$$
has the positive sum property with respect to $\ker_{\mathbb{Z}^{d_1} \times \mathbb{R}^{d_2}}(A)$. To this end, we construct for arbitrarily chosen $(z, q) \in \ker_{\mathbb{Z}^{d_1} \times \mathbb{R}^{d_2}}(A)$ a representation
$$(z, q) = \sum_{(u, P_u) \in G} \alpha_u (u, q_u),$$
where $\alpha_u \in \mathbb{Z}_+$, $(q_u^+, q_u^-) \in P_u$, and $\alpha_u (u, q_u) \sqsubseteq (z, q)$ for all $u$. Since $G$ contains $F$, at least an integer linear combination
$$(z, q) = \sum_{(u, P_u) \in G} \alpha_u (u, q_u)$$
with $\alpha_u \in \mathbb{Z}_+$ and $(q_u^+, q_u^-) \in P_u$ for all $u$ is possible. For each such integer linear combination, consider the value $\sum \alpha_u \|(u, q_u)\|_1$, which is always non-negative. Thus, this sum is bounded from below by some non-negative infimum. Let us assume for the moment that there is an integer linear combination $\sum \alpha_u (u, q_u)$ that attains this infimum.

If $\sum_u \alpha_u \|(u, q_u)\|_1 = \|(z, q)\|_1$, then all vectors $(u, q_u)$ with $\alpha_u > 0$ lie in the same orthant as $(z, q)$. Hence $\alpha_u (u, q_u) \sqsubseteq (z, q)$ for all $u$ with $\alpha_u > 0$. But since $(z, q)$ was chosen arbitrarily, this implies that $\bar G$ has the positive sum property with respect to $\ker_{\mathbb{Z}^{d_1} \times \mathbb{R}^{d_2}}(A)$ and thus that $\bar G$ contains $G_{\mathrm{MIP}}(A)$. The claim then follows.


Assume on the contrary that we have $\sum_u \alpha_u \|(u, q_u)\|_1 > \|(z, q)\|_1$ for such a minimal combination. Then there exist $u_1, u_2$ with $\alpha_{u_1} > 0$ and $\alpha_{u_2} > 0$, and a component $m$, such that $(u_1, q_{u_1})^{(m)} (u_2, q_{u_2})^{(m)} < 0$. Thus,
$$\|(u_1 + u_2, q_{u_1} + q_{u_2})\|_1 < \|(u_1, q_{u_1})\|_1 + \|(u_2, q_{u_2})\|_1.$$
During the run of the algorithm, $(u_1 + u_2, P_{u_1 + u_2})$ was reduced to $(0, P_0)$ by elements $(v_1, P_{v_1}), \ldots, (v_s, P_{v_s}) \in G$, giving a representation $u_1 + u_2 = \sum_{j=1}^s v_j$ with $v_j \sqsubseteq u_1 + u_2$ and $P_{u_1+u_2} = P_0 + P_{v_1} + \ldots + P_{v_s}$. The latter implies that there are vectors $q_{v_j}$ with $(q_{v_j}^+, q_{v_j}^-) \in P_{v_j}$, $j = 1, \ldots, s$, and $(q_{v_0}^+, q_{v_0}^-) \in P_0$ such that
$$(q_{u_1} + q_{u_2})^+ = \sum_{j=0}^s q_{v_j}^+ \qquad \text{and} \qquad (q_{u_1} + q_{u_2})^- = \sum_{j=0}^s q_{v_j}^-.$$
Thus $q_{v_j} \sqsubseteq q_{u_1} + q_{u_2}$, $j = 0, \ldots, s$, and therefore
$$\|q_{u_1} + q_{u_2}\|_1 = \sum_{j=0}^s \|q_{v_j}\|_1.$$
Altogether, with $v_0 = 0$, we obtain
$$\sum_{j=0}^s \|(v_j, q_{v_j})\|_1 = \Bigl\| \sum_{j=0}^s (v_j, q_{v_j}) \Bigr\|_1 = \|(u_1 + u_2, q_{u_1} + q_{u_2})\|_1 < \|(u_1, q_{u_1})\|_1 + \|(u_2, q_{u_2})\|_1.$$
Now rewrite
$$(z, q) = \sum_{(u, P_u) \in G} \alpha_u (u, q_u)$$
as
$$(z, q) = \sum_{(u, P_u) \in G,\ u \ne u_1, u_2} \alpha_u (u, q_u) + (\alpha_{u_1} - 1)(u_1, q_{u_1}) + (\alpha_{u_2} - 1)(u_2, q_{u_2}) + \sum_{j=1}^s (v_j, q_{v_j}),$$
where
$$\sum_{u \ne u_1, u_2} \alpha_u \|(u, q_u)\|_1 + (\alpha_{u_1} - 1)\|(u_1, q_{u_1})\|_1 + (\alpha_{u_2} - 1)\|(u_2, q_{u_2})\|_1 + \sum_{j=1}^s \|(v_j, q_{v_j})\|_1$$
is strictly less than $\sum_u \alpha_u \|(u, q_u)\|_1$, in contradiction to the minimality assumption on the integer linear combination $\sum_{(u, P_u) \in G} \alpha_u (u, q_u)$.


It remains to show that the infimum of $\sum_{(u, P_u) \in G} \alpha_u \|(u, q_u)\|_1$ is indeed attained by some integer linear combination. This can be seen as follows. Since $z$, $q$, and all $u$ with $(u, P_u) \in G$ are given, we have to find $\alpha_u \in \mathbb{Z}_+$ and vectors $q_u \in \mathbb{R}^{d-k}$ with $(q_u^+, q_u^-) \in P_u$ such that $\sum_{(u, P_u) \in G} \alpha_u \|(u, q_u)\|_1$ attains the infimum value.

First, since there is at least one such linear combination, we have an upper bound $K$ for the infimum. Thus, there are at most finitely many choices for the non-negative integers $\alpha_u$ such that $\sum_{(u, P_u) \in G} \alpha_u \|u\|_1 \le K$. It remains to show that for fixed integers $\alpha_u$ the infimum of $\sum_{(u, P_u) \in G} \alpha_u \|(u, q_u)\|_1$ is indeed attained for some choice of the vectors $q_u$. Then, also the global infimum, as the least of the finitely many minimal values of $\sum_{(u, P_u) \in G} \alpha_u \|(u, q_u)\|_1$ wherein the $\alpha_u$ are fixed, is attained by some combination.

Therefore, suppose in the following that the integers $\alpha_u$ are fixed and that there exists at least one integer linear representation of $(z, q)$ for this fixed choice of the numbers $\alpha_u$. Since the $\alpha_u$ and all $u$ are fixed, we have to minimize $\sum_{(u, P_u) \in G} \alpha_u \|q_u\|_1$ subject to $\sum_{(u, P_u) \in G} \alpha_u q_u = q$ and $A_2 q_u = -A_1 u$ for all $u$ with $(u, P_u) \in G$. Writing $q_u = q_u^+ - q_u^-$, we obtain a linear minimization problem with non-empty feasible region whose objective is bounded from below by $0$. Thus, the minimum is attained for some choice of the vectors $q_u$. This concludes the proof. $\square$

2.5 Termination of Augmentation Algorithm

Let us now have a look at termination of the Augmentation Algorithm 0.0.2 in the LP, IP, and MIP situations. Assume that we are given a feasible solution $z_0$ to our problem. Moreover, assume that $(LP)_{c,b}$ is bounded with respect to $c$, which can be checked by standard techniques from LP. Therefore, if we assume integer (or rational) entries in $A$ and $b$, also $(IP)_{c,b}$ and $(P)_{c,b}$ are bounded with respect to $c$; see Schrijver [42].

In the LP case, circuits provide improving directions for non-optimal solutions. However, a zig-zagging effect, even towards a non-optimal point, is possible, and we have to take some care in how to choose the next circuit for improvement.

Example. Consider the problem
$$\min\{z_1 + z_2 - z_3 : 2z_1 + z_2 \le 2,\ z_1 + 2z_2 \le 2,\ z_3 \le 1,\ (z_1, z_2, z_3) \in \mathbb{R}^3_{\ge 0}\}$$
with optimal solution $(0, 0, 1)$. Introducing slack variables $z_4, z_5, z_6$, we obtain the problem $\min\{c^\top z : Az = (2, 2, 1)^\top,\ z \in \mathbb{R}^6_{\ge 0}\}$ with $c^\top = (1, 1, -1, 0, 0, 0)$ and
$$A = \begin{pmatrix} 2 & 1 & 0 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}.$$
$\ker_{\mathbb{R}^6}(A)$ is generated over $\mathbb{R}$ by the vectors $(1, 0, 0, -2, -1, 0)$, $(0, 1, 0, -1, -2, 0)$, and $(0, 0, 1, 0, 0, -1)$. We obtain $(1, 0, 0, -2, -1, 0)$, $(0, 1, 0, -1, -2, 0)$, $(1, -2, 0, 0, 3, 0)$, $(2, -1, 0, -3, 0, 0)$, $(0, 0, 1, 0, 0, -1)$, together with their negatives, as the circuits of $A$. The improving directions are given by all circuits $v$ for which $c^\top v > 0$. These are the following: $(1, 0, 0, -2, -1, 0)$, $(0, 1, 0, -1, -2, 0)$, $(-1, 2, 0, 0, -3, 0)$, $(2, -1, 0, -3, 0, 0)$, $(0, 0, -1, 0, 0, 1)$.

Now start with the feasible solution $z_0 = (0, 1, 0, 1, 0, 1)$. Going along the directions $(0, 1, 0, -1, -2, 0)$ and $(0, 0, -1, 0, 0, 1)$ as far as possible, we immediately arrive at $(0, 0, 1, 2, 2, 0)$, which corresponds to the desired optimal solution $(0, 0, 1)$ of our problem. However, alternatively choosing only the vectors $(-1, 2, 0, 0, -3, 0)$ and $(2, -1, 0, -3, 0, 0)$ as improving directions, the augmentation process does not terminate. In our original space $\mathbb{R}^3$ this reads as
$$(0, 1, 0) \to \left(\tfrac{1}{2}, 0, 0\right) \to \left(0, \tfrac{1}{4}, 0\right) \to \left(\tfrac{1}{8}, 0, 0\right) \to \left(0, \tfrac{1}{16}, 0\right) \to \ldots,$$
clearly showing the zig-zagging effect towards the non-optimal point $(0, 0, 0)$. Thus, we have to impose certain constraints on the circuit which is chosen next in the augmentation algorithm. $\square$
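The zig-zagging sequence above can be reproduced numerically. The following sketch (an illustration in exact arithmetic via `Fraction`, not part of the thesis) alternates between the two circuits and takes the largest feasible step each time; since both directions lie in $\ker(A)$, only non-negativity limits the step length:

```python
from fractions import Fraction as F

c  = (1, 1, -1, 0, 0, 0)
v1 = (-1, 2, 0, 0, -3, 0)   # improving circuit, c·v1 = 1 > 0
v2 = (2, -1, 0, -3, 0, 0)   # improving circuit, c·v2 = 1 > 0

z = [F(0), F(1), F(0), F(1), F(0), F(1)]   # feasible start (0,1,0) with slacks
for step in range(4):
    v = v1 if step % 2 == 0 else v2
    # largest step alpha keeping z - alpha*v >= 0 (Az = b is preserved anyway)
    alpha = min(zi / vi for zi, vi in zip(z, v) if vi > 0)
    z = [zi - alpha * vi for zi, vi in zip(z, v)]
    print(tuple(z[:3]))     # (1/2,0,0), (0,1/4,0), (1/8,0,0), (0,1/16,0)
```

After four steps the iterate is $(0, \tfrac{1}{16}, 0)$ with objective value $\tfrac{1}{16}$, still far from the optimal value $-1$; the objective values converge to $0$, the value of the non-optimal accumulation point $(0,0,0)$.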

To ensure termination of the Augmentation Algorithm 0.0.2, we split the augmentation process into two phases.

Algorithm 2.5.1 (Augmentation Strategy to reach the Optimum in finitely many Steps)

1. In phase 1, only those circuits are allowed for improvement which lead to a better feasible solution with an additional zero entry. Repeat this step until there is no circuit with this property, then go to phase 2.

2. In phase 2, any appropriate circuit is chosen for improvement. Stop if there is no such circuit; otherwise improve and go back to phase 1.

Proposition 2.5.2 Under the assumption that $(LP)_{c,b}$ is bounded with respect to $c$, the LP Augmentation Algorithm 0.0.2 which uses the Augmentation Strategy 2.5.1 terminates with an optimal solution.


Proof. At the end of phase 1, the current feasible solution $z_0$ has to be optimal with respect to all feasible solutions whose support is contained in $\mathrm{supp}(z_0)$; that is, there is no feasible solution $z_0'$ with $\mathrm{supp}(z_0') \subseteq \mathrm{supp}(z_0)$ and $c^\top z_0' < c^\top z_0$. Assume on the contrary that there is such a solution $z_0'$. Since $z_0 - z_0' \in \ker_{\mathbb{R}^d}(A)$, and by the positive sum property of the set of circuits with respect to $\ker_{\mathbb{R}^d}(A)$, there exists some circuit $q$ with $\mathrm{supp}(q) \subseteq \mathrm{supp}(z_0 - z_0') \subseteq \mathrm{supp}(z_0)$ and $c^\top q > 0$. This circuit, however, improves $z_0$: choose the largest positive scalar $\alpha$ such that $z_0 - \alpha q$ is still feasible. This scalar is finite, since $(LP)_{c,b}$ is bounded with respect to the given cost function. We conclude that $z_0 - \alpha q$ contains an additional zero entry and $c^\top (z_0 - \alpha q) < c^\top z_0$, contradicting the fact that the first phase finished with $z_0$.

Now, if $z_0$ is optimal, then the algorithm terminates in phase 2. Otherwise, the zero pattern of $z_0$ will never reappear after $z_0$ has been improved in phase 2. Thus, no zero pattern reappears at the end of the first phase, and the augmentation process has to terminate.

The feasible solution upon termination has to be optimal, since in the second phase no augmenting circuit was found. The claim now follows by the universal test set property of circuits. $\square$

Finally, let us look at the IP and the MIP situations. Here, the Augmentation Algorithm 0.0.2 always terminates with an optimal solution.

Proposition 2.5.3 If the relaxed problem $(LP)_{c,b}$ is bounded with respect to $c$, the IP Augmentation Algorithm 0.0.2 terminates with an optimal solution.

Proof. Let $\delta := \min\{|c^\top g| : g \in G_{\mathrm{IP}}(A),\ c^\top g \ne 0\}$. Then each time we improve a given feasible solution $z_0$ by an element $g \in G_{\mathrm{IP}}(A)$ to $z_0 - \alpha g$, the cost function value drops by at least $\alpha (c^\top g) \ge \delta > 0$, since $\alpha$ is a positive integer. Since $(LP)_{c,b}$ is bounded with respect to $c$, this can happen only finitely often, and the IP augmentation algorithm has to terminate. The returned solution has to be optimal, since it cannot be improved along any test set direction. $\square$

Proposition 2.5.4 If the relaxed problem $(LP)_{c,b}$ is bounded with respect to $c$, the MIP Augmentation Algorithm 0.0.2 terminates with an optimal solution.

Proof. First transform the problem $(P)_{c,b}$ by Gaussian elimination into an equivalent optimization problem $(P)'_{c,b'}$ with problem matrix
$$A' := \begin{pmatrix} A_1' & A_2' \\ A_1'' & 0 \end{pmatrix},$$


wherein A'_2 has full row rank. Note that the right-hand side b will change to some b' as well, but the cost function c remains fixed. Moreover, since the kernel of the problem matrix does not change, the finite set G_Z(A_1|A_2) remains unchanged under this transformation.

Now suppose on the contrary that the augmentation process does not terminate, that is, it generates an infinite sequence of feasible solutions to (P)'_{c,b'} with strictly decreasing objective value. Divide this sequence into consecutive subsequences of length K + 1, where K denotes the number of invertible submatrices B of A'_2 of full rank. As we will see, within each of these subsequences the cost function value drops by at least some constant ε > 0 depending only on A'_1, A''_1, A'_2, and c. Since (LP)_{c,b'} is bounded with respect to c, this can happen only finitely often, and the MIP Augmentation Algorithm 0.0.2 has to terminate after finitely many steps.

Since A' is an integer matrix, the set {q ∈ R^{d_2}_+ : A'(z, q) = b'} is a polyhedron for every fixed z ∈ Z^{d_1}. After each augmentation step the continuous part q_0 of the current feasible solution (z_0, q_0) is a vertex of {q ∈ R^{d_2}_+ : A'(z_0, q) = b'} (or a point with the same cost function value). Thus, after possible rearrangement of continuous variables, we can write A'_2 as (B | Ā_2), where B is an invertible l × l-matrix and the optimal point is given by (z_0, B^{-1}(b' − A'_1 z_0), 0). Let c = (c_z, c_{q,1}, c_{q,2}) be divided analogously. Then the cost function value of (z_0, B^{-1}(b' − A'_1 z_0), 0) is given by

    c_z^T z_0 + c_{q,1}^T B^{-1}(b' − A'_1 z_0) = (c_z^T − c_{q,1}^T B^{-1} A'_1) z_0 + c_{q,1}^T B^{-1} b' =: c̃_B^T z_0 + c̃_{0,B},

where c̃_B and c̃_{0,B} depend only on the given problem data and on the specific choice of B.

Consider a sequence of K + 1 consecutive feasible solutions as generated by the MIP Augmentation Algorithm 0.0.2. By the pigeon-hole principle there are two solutions (z_1, B^{-1}(b' − A'_1 z_1), 0) and (z_2, B^{-1}(b' − A'_1 z_2), 0) whose continuous parts are determined by the same submatrix B of A'_2, as already indicated by the notation. Moreover, let the second solution have a better cost function value than the first one.

Thus 0 < c̃_B^T z_1 + c̃_{0,B} − (c̃_B^T z_2 + c̃_{0,B}) = c̃_B^T(z_1 − z_2). In the MIP augmentation algorithm the vector z_1 − z_2 was represented as a sum Σ_{i=1}^{K̄} g_i of at most K elements from G_Z(A') = G_Z(A), since z_2 was obtained from z_1 by at most K augmentation steps. Since G_Z(A') is finite, there are only finitely many possible positive values of c̃_B^T(Σ_{i=1}^{K̄} g_i) > 0. Thus, there exists a constant ε_B > 0, depending only on the choice of B and the given data A'_1, A''_1, A'_2, and c, such that

    c̃_B^T(z_1 − z_2) = c̃_B^T (Σ_{i=1}^{K̄} g_i) ≥ ε_B > 0


for all choices of the g_i ∈ G_Z(A'). Therefore, the cost function value drops by at least ε_B > 0 between the two mixed-integer solutions (z_1, B^{-1}(b' − A'_1 z_1), 0) and (z_2, B^{-1}(b' − A'_1 z_2), 0) in the augmentation algorithm.

Since there are only finitely many invertible submatrices B of A'_2 of full rank, the value ε := min_B{ε_B} > 0 is well defined and depends only on the given problem data A'_1, A''_1, A'_2, and c. Moreover, we proved that within each subsequence of length K + 1 the cost function value drops by at least ε_B ≥ ε > 0, which implies that the MIP Augmentation Algorithm 0.0.2 has to terminate provided that the relaxed problem (LP)_{c,b'} is bounded with respect to c. The returned solution has to be optimal, since it cannot be improved along any test set direction. □

2.6 Feasible Initial Solutions

So far we have addressed the problem of improving a given feasible solution for (P)_{c,b} using directions given by test set elements. In doing so, the optimization problem can be solved in finitely many augmentation steps. But, particularly in the IP and MIP cases, it is often already a very hard problem to find an initial feasible solution at all. However, one can at least compute a (mixed-)integer solution to Az = b in polynomial time, ignoring the lower bounds 0 on the variables [42]. Computing a solution in the LP case can be done, for example, by Gaussian elimination or by even faster methods.

In the following we will demonstrate how universal test sets can be used to transform any given solution to Az = b, z ∈ X, into a feasible solution of (P)_{c,b}, that is, a solution to Az = b, z ∈ X, satisfying also the lower bounds z ≥ 0. The proposed algorithm always terminates and returns either a feasible solution or the answer that no such solution exists.

Algorithm 2.6.1 (Algorithm to Find a Feasible Solution)

Input: a solution z_1 ∈ X to Az = b, a universal test set T for (P)_{c,b}
Output: a feasible solution, or "FAIL" if no such solution exists

while there exist t ∈ T and α ∈ R_{>0} such that z_1 − αt ∈ X, ||(z_1 − αt)^-||_1 < ||z_1^-||_1, and (z_1 − αt)^(k) ≥ 0 whenever z_1^(k) ≥ 0 do
    z_1 := z_1 − αt, where α is chosen maximal such that all three conditions are still satisfied
if ||z_1^-||_1 > 0 then return "FAIL" else return z_1
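For the IP case, Algorithm 2.6.1 admits a compact pure-Python sketch. The toy data (A = (1 1), b = 3, universal test set T = {(1, −1), (−1, 1)}) and the helper names are our own; for simplicity the sketch steps with α = 1, while the algorithm above chooses α maximal.

```python
# Repair an infeasible lattice point z (Az = b holds, but z has negative
# entries) using test set directions, following Algorithm 2.6.1.

def neg_norm(z):
    """l1-norm of the negative part z^- of z."""
    return sum(-zi for zi in z if zi < 0)

def find_feasible(z, T):
    z = list(z)
    while neg_norm(z) > 0:
        for t in T:
            cand = [zi - ti for zi, ti in zip(z, t)]
            # nonnegative entries of z must stay nonnegative ...
            keeps = all(ci >= 0 for zi, ci in zip(z, cand) if zi >= 0)
            # ... and the negative part must strictly shrink
            if keeps and neg_norm(cand) < neg_norm(z):
                z = cand
                break
        else:
            return None  # "FAIL": no test set direction repairs z
    return z

repaired = find_feasible([4, -1], [(1, -1), (-1, 1)])  # expect [3, 0]
```

Since the integer quantity ||z^-||_1 strictly decreases in every pass, the loop terminates, mirroring the IP termination argument of Lemma 2.6.3.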


In the MIP case we have to decide for every g_z ∈ G_Z(A) whether we can find g_q and a scalar α > 0 such that t := (g_z, g_q) has the desired properties. For this we distinguish two cases: either g_z = 0 or g_z ≠ 0. In both cases we can fix α = 1 and are left with a pure LP problem: find (g_z, g_q) ∈ ker_X(A) minimizing ||(z_1 − αg)^-||_1 = ||(z_1 − (g_z, g_q))^-||_1, where z_1 and g_z are given.

Lemma 2.6.2 Algorithm 2.6.1 satisfies its specifications.

Proof. Suppose that the algorithm terminates and that at the end z_1 is still infeasible although there is some feasible solution z_0 to our problem. Consider the optimization problem

    min { Σ_{i: z_1^(i) < 0} (−z)^(i) : Az = b + A z_1^-, z ∈ X_+ }.   (2.2)

Then z_1^+ is feasible for problem (2.2) and it admits an objective function value of 0. Moreover, z_0 + z_1^- is feasible, too, with strictly negative objective function value, since (z_0 + z_1^-)^(k) > 0 whenever z_1^(k) < 0 (and z_0^(k) ≥ 0). By the universal test set property of T, there have to exist a vector t ∈ T and a scalar α > 0 such that z_1^+ − αt is feasible for problem (2.2) with strictly negative objective function value. In other words, z_1^+ − αt ≥ 0 and Σ_{k: z_1^(k) < 0} (−αt)^(k) > 0. Therefore, (−αt)^(k) > 0 for at least some k = k_0 with z_1^(k_0) < 0. Moreover, (−αt)^(k) ≥ 0 whenever z_1^(k) < 0, since (z_1^+ − αt)^(k) ≥ 0, α > 0, and (z_1^+)^(k) = 0. Thus, ||(z_1 − αt)^-||_1 < ||z_1^-||_1, and if z_1^(k) ≥ 0 then also (z_1 − αt)^(k) = (z_1^+ − αt)^(k) ≥ 0. Thus, the vector t and the scalar α satisfy the conditions of the while-loop, which is a contradiction to z_1 being the output of Algorithm 2.6.1. □

Lemma 2.6.3 Algorithm 2.6.1 terminates in the IP and MIP cases. The algorithm terminates in the LP case if the following strategy, similar to the Augmentation Strategy 2.5.1, is followed.

1. In phase 1 only those circuits are allowed for improvement which lead to a better solution with an additional zero entry. Repeat this step until there is no circuit with this property and go to phase 2.

2. In phase 2 any appropriate circuit is chosen for improvement. Stop if there is no such circuit; otherwise improve and go back to phase 1.

Proof. The algorithm terminates in the IP case since in each step ||z_1^-||_1 drops by at least 1. The algorithm terminates in the LP case since, analogously to the termination proof of the LP Augmentation Strategy 2.5.1, every support of z_1 appears at most once at the end of phase 1.

Suppose that the algorithm does not terminate in the MIP case and that it generates an infinite sequence z_1, z_2, ... of points in X. Note that all non-negative components of a point z_j remain non-negative for all subsequent points during the run of the above algorithm. Thus, since the algorithm does not terminate, there is a number N such that ∅ ≠ supp(z_N^-) = supp(z_j^-) for all j ≥ N. Now consider the problem

    min { Σ_{i: z_N^(i) < 0} (−z)^(i) : Az = b + A z_N^-, z ∈ X_+ }.   (2.3)

As can be seen from the proof of Lemma 2.6.2, Algorithm 2.6.1 produces a sequence z_1, z_2, ... of points in X such that z_1^- ≥ z_2^- ≥ ..., that is, z_i^- ≥ z_j^- whenever i < j. Thus we have z_j + z_N^- = z_j^+ + (z_N^- − z_j^-) ≥ z_j^+ ≥ 0, and we can conclude that the points z_j + z_N^-, j ≥ N, are all feasible solutions of (2.3). Since, by our assumption on N, we have z_j^(k) < 0 whenever z_N^(k) < 0, all these solutions have a strictly positive objective value. However, since the algorithm does not terminate, we can use a similar argument as in the termination proof of the MIP augmentation algorithm, Proposition 2.5.4, to show that the cost function value drops below any given value within a finite number of steps. Thus, in particular, the cost function value becomes eventually negative for some z_j with j ≥ N. From this contradiction we can conclude that the algorithm terminates in the MIP case, too. □

Remark 2.6.4 In order to find a feasible solution to (P)_{c,b} we have to construct improving vectors for problems of type (2.2). Thus, we can and will focus our attention in the subsequent exposition on finding improving directions.

2.7 Appendix

Lemma 2.7.1 (Foroudi and Graver [17], General Decomposition Theorem)
G_MIP(A) has the positive sum property with respect to ker_X(A).

Proof. Suppose that (z, q) ∈ ker_X(A) cannot be written as a positive linear combination of elements in G_MIP(A). Thus, (z, q) ∉ G_MIP(A). We know that ||z||_1 > 0 since G_LP(A_2) has the positive sum property with respect to ker_{R^{d_2}}(A_2). From all such vectors (z, q) ∈ ker_X(A) choose one such that ||z||_1 + |supp(q)| is minimal.


Now suppose that there is some circuit q_1 ∈ G_LP(A_2) such that supp(q_1) ⊆ supp(q) and such that (0, q_1) lies in the same orthant as (z, q). Choose the largest scalar α_1 ∈ R_{>0} such that (z, q − α_1 q_1) still belongs to the same orthant as (z, q). Then supp(q − α_1 q_1) ⊊ supp(q) and consequently

    ||z||_1 + |supp(q − α_1 q_1)| < ||z||_1 + |supp(q)|.

By the minimality required of ||z||_1 + |supp(q)| we can conclude that there is a linear representation (z, q − α_1 q_1) = Σ α_j(z_j, q_j) where for all j we have α_j(z_j, q_j) ∈ X, α_j(z_j, q_j) ⊑ (z, q − α_1 q_1), and α_j ∈ R_{>0}. Hence (z, q) = α_1(0, q_1) + Σ α_j(z_j, q_j) is a valid representation of (z, q), in contrast to our initial assumption. Thus, we may assume that there is no vector (0, q_1) ∈ ker_X(A) with (0, q_1) ⊑ (z, q).

From (z, q) ∉ G_MIP(A) we conclude that there is some (z', q') ∈ ker_X(A) \ {0} with (z', q') ⊑ (z, q), (z', q') ≠ (z, q). Hence (z, q) = (z', q') + (z − z', q − q'), where also (z − z', q − q') ⊑ (z, q), (z − z', q − q') ≠ (z, q). This implies, in particular, that supp(q') ⊆ supp(q) and supp(q − q') ⊆ supp(q).

But neither z' = 0 nor z − z' = 0, by construction. Therefore, we have ||z'||_1 < ||z||_1 and ||z − z'||_1 < ||z||_1. Thus, by the minimality assumption on ||z||_1 + |supp(q)|, both (z', q') and (z − z', q − q') can be written as valid positive linear combinations of elements from G_MIP(A), all of which belong to the same orthant as (z', q') and as (z − z', q − q'), respectively.

Substituting these representations into (z, q) = (z', q') + (z − z', q − q') gives a valid positive linear combination representing (z, q), contradicting our initial assumption that no such representation exists. □

Lemma 2.7.2 (Foroudi and Graver [17], Lemma 13)
Given A = (A_1 | A_2), the following inequality holds for any (z_0, q_0) ∈ G_MIP(A_1|A_2):

    ||z_0||_1 ≤ Σ_{(z,q) ∈ G_LP(A)} ||z||_1.

Proof. Let (z_0, q_0) ∈ G_MIP(A_1|A_2) ⊆ ker_{R^d}(A). The positive sum property of G_LP(A) with respect to ker_{R^d}(A) yields a finite linear representation (z_0, q_0) = Σ α_i(z_i, q_i) where (z_i, q_i) ∈ G_LP(A), α_i(z_i, q_i) ⊑ (z_0, q_0), and α_i ∈ R_{>0}.

Suppose that there exists a summand α_j(z_j, q_j) in this representation with z_j ≠ 0 and α_j > 1. From (z_j, q_j) ∈ G_LP(A) ⊆ Z^d we conclude that

    (z_j, q_j) ∈ X and (α_j − 1)(z_j, q_j) + Σ_{i≠j} α_i(z_i, q_i) = (z_0, q_0) − (z_j, q_j) ∈ X.


By construction, z_j ≠ 0 and z_0 − z_j ≠ 0, (z_j, q_j), (z_0 − z_j, q_0 − q_j) ⊑ (z_0, q_0), and (z_0, q_0) = (z_j, q_j) + (z_0 − z_j, q_0 − q_j). Therefore, (z_0, q_0) ∉ G_MIP(A_1|A_2). Thus, if (z_0, q_0) ∈ G_MIP(A_1|A_2), then any linear representation (z_0, q_0) = Σ α_i(z_i, q_i) with (z_i, q_i) ∈ G_LP(A), α_i(z_i, q_i) ⊑ (z_0, q_0), and α_i ∈ R_{>0} must satisfy α_j ≤ 1 whenever z_j ≠ 0. Consequently,

    ||z_0||_1 = Σ_i ||α_i z_i||_1 ≤ Σ_{i: z_i ≠ 0} ||z_i||_1 + Σ_{i: z_i = 0} ||α_i z_i||_1 = Σ_{i: z_i ≠ 0} ||z_i||_1 ≤ Σ_{(z,q) ∈ G_LP(A)} ||z||_1,

and the claim is proved. □

With the additional information that z_0 belongs to some given orthant O_j, the above estimate can be strengthened to

    ||z_0||_1 ≤ Σ_{(z,q) ∈ G_LP(A), z ∈ O_j} ||z||_1.

2.8 Conclusions

We have discussed the positive sum property, a property inherent to Graver test sets in LP, IP, and MIP. Already Graver [19] showed that this property implies the universal test set property. We added to this analysis by presenting two criteria which allow an algorithmic test of the positive sum property with respect to ker_{R^d} or with respect to an integer lattice. These criteria led us immediately to a completion algorithm: an initial set of vectors is completed with respect to the positive sum property by adding new vectors to the set as long as necessary. Critical-pair/completion procedures arise in automated theorem proving, polynomial ideal theory, and in the solution of word problems in universal algebras; see Buchberger [7] for a survey.

Then we have demonstrated how fixed (finite) upper bounds on variables can be used to truncate the Graver test set of (IP)_{c,b} and to speed up its computation. The presented algorithm can also be used to compute the unique minimal Hilbert basis of ker_{R^d}(A) ∩ R^d_+ if the input set is chosen accordingly.

Via the positive sum property the notion of a Graver test set can be extended to the mixed-integer situation. Although these test sets need not be finite in general, we have identified a finite set of integer vectors from which an improving vector can be computed. In order to do so, one has to solve a finite number of LPs.

When optimizing with the help of test sets, attention has to be paid in the LP situation. An example shows that simple augmentation may result in zig-zagging towards a non-optimal point. We have presented a strategy to prevent this.


It is usually already a hard problem to find an initial feasible point of (P)_{c,b} when some or all variables are required to be integer. Although there are algorithmic alternatives, we have shown that universal test sets contain enough information to solve this problem, too. Thus, once a universal test set is available, the problem (P)_{c,b} can be solved entirely by test set methods.

In the following we will focus our attention on two- and multi-stage stochastic programs whose problem matrices are highly structured. As we will see, the problem structure can be exploited for a novel decomposition approach. The presented algorithms also follow the pattern of a completion procedure.


Chapter 3

Decomposition of Test Sets in Two-Stage Stochastic Programming

As we have seen in the previous chapter, given solvability, the optimization problem (IP)_{c,b} can be solved to optimality with the help of augmenting vectors from the corresponding Graver test set. Theoretically, this procedure can be used to solve stochastic integer programs (4), page 13, as well. However, due to the huge amount of stored information, Graver test sets are quite large already for small problems. Therefore, a direct test set approach to (4) is not advisable.

As we will see, the block-angular structure of the problem matrix in (4) induces a symmetry structure on the elements of the Graver basis, telling us that these test set vectors are formed from a comparably small number of building blocks. We show that these building blocks can be computed without computing the Graver test set of (4) itself, and that we can construct an improving vector for a given non-optimal feasible solution to (4), scenario by scenario, using building blocks only. Incorporating this into the Augmentation Algorithm 0.0.2, we find an optimal solution with comparably small effort, once the building blocks have been computed.

We present the decomposition approach for the pure integer case. The same decomposition ideas, however, readily extend to the continuous and the mixed-integer situations.


To study Graver test sets of (4) we consider the matrix

    A_N := ( A  0  0  ...  0 )
           ( T  W  0  ...  0 )
           ( T  0  W  ...  0 )
           ( :  :  :   .   : )
           ( T  0  0  ...  W )

together with the objective function vector c = (c_0, c_1, ..., c_N)^T := (h, π_1 q, ..., π_N q)^T and the right-hand side b = (a, ξ_1, ..., ξ_N)^T, where the subscript N corresponds to the number of scenarios, that is, the number of copies of T and W used. Problem (4) then may be written as min{c^T z : A_N z = b, z ∈ Z^{d_N}_+} with d_N = m + Nn and m, n as in (1)-(3); see the introductory chapter. We assume all entries of A, T, and W to be integer.

When referring to components of z, the notation z = (u, v_1, ..., v_N) will be used throughout. Herein, u corresponds to the first stage and is always of dimension m, whereas v_1, ..., v_N are the second-stage vectors, whose dimension is n.

3.1 Building Blocks of Graver Test Sets

The following simple observation is the basis for the decomposition of test set vectors presented below.

Lemma 3.1.1 (u, v_1, ..., v_N) ∈ ker_{Z^{d_N}}(A_N) if and only if (u, v_i) ∈ ker_{Z^{d_1}}(A_1) for i = 1, ..., N.

Proof. The claim follows from 0 = A_N z = (Au, Tu + Wv_1, ..., Tu + Wv_N). □
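Lemma 3.1.1 is easy to check numerically. The sketch below builds A_N from toy 1×1 blocks A, T, W chosen by us purely for illustration; `build_AN` and `matvec` are hypothetical helper names.

```python
# Assemble the block-angular matrix A_N of the text and verify that
# (u, v1, ..., vN) lies in ker(A_N) iff each (u, vi) lies in ker(A_1).

def matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

def build_AN(A, T, W, N):
    n = len(W[0])
    rows = [ai + [0] * (N * n) for ai in A]     # block row (A 0 ... 0)
    for i in range(N):                          # block rows (T 0 .. W .. 0)
        for ti, wi in zip(T, W):
            rows.append(ti + [0] * (i * n) + wi + [0] * ((N - 1 - i) * n))
    return rows

A, T, W = [[0]], [[1]], [[-1]]     # here ker(A_1) = {(u, v) : u = v}
AN = build_AN(A, T, W, 3)

# u = v1 = v2 = v3 = 2 satisfies every scenario constraint Tu + Wvi = 0:
in_kernel = all(r == 0 for r in matvec(AN, [2, 2, 2, 2]))   # True
```

Changing a single scenario component (say v_3 = 3) breaks exactly one block row, which is the "if and only if" of the lemma at work.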

This guarantees that by permuting the v_i we do not leave ker_{Z^{d_N}}(A_N). Moreover, every ⊑-minimal element of ker_{Z^{d_N}}(A_N) \ {0} will always be transformed into another ⊑-minimal element of ker_{Z^{d_N}}(A_N) \ {0}. Thus, a Graver test set vector is transformed into a Graver test set vector by such a permutation. This leads us to the following definition.

Definition 3.1.2 (Building blocks)
Let z = (u, v_1, ..., v_N) ∈ ker_{Z^{d_N}}(A_N) and call the vectors u, v_1, ..., v_N the building blocks of z. Denote by G_N the Graver test set associated with A_N and collect into H_N all those vectors arising as building blocks of some z ∈ G_N. By H_∞ denote the set ∪_{N=1}^∞ H_N.


The set H_∞ contains both the m-dimensional vectors u associated with the first stage in (4) and the n-dimensional vectors v related to the second stage in (4). For convenience, we will arrange the vectors in H_∞ into pairs (u, V_u). For fixed u ∈ H_∞, all those vectors v ∈ H_∞ are collected into V_u for which (u, v) ∈ ker_{Z^{d_1}}(A_1). In what follows, we will apply this arrangement into pairs to arbitrary sets of m- and n-dimensional building blocks, not necessarily belonging to H_∞.

The set H_∞ is of particular interest since, by definition, it contains all building blocks of test set vectors of (4) for an arbitrary number N of scenarios. Next, we will address finiteness of H_∞, computation of H_∞, and reconstruction of improving vectors using H_∞.

3.2 Finiteness of H_∞

As two main results of this thesis we will prove finiteness of H_∞ in the integer and continuous situations.

3.2.1 IP case

To prove our claim for the integer case we have to use some further algebraic machinery. For an introduction to the basic notions we refer to [16]. In order to prove the subsequent Theorem 3.2.11 we will use the following recent theorem on monomial ideals.

Theorem 3.2.1 (Maclagan, [32, 33])
Let I be an infinite collection of monomial ideals in a polynomial ring. Then there are two ideals I, J ∈ I with I ⊆ J.

Assuming that each monomial ideal is generated by only one monomial, it can be seen that Maclagan's theorem includes the Gordan-Dickson Lemma, Lemma 1.3.3, as a special case; the latter was used in Section 1.3 to show finiteness of the Completion Algorithm 1.3.1 to compute IP Graver test sets.

We now add two useful corollaries (cf. [32]) and, for convenience, include their short proofs.

Corollary 3.2.2 Let I be an infinite collection of pairwise different monomial ideals in a polynomial ring. Then I contains only finitely many inclusion-maximal monomial ideals.


Proof. Let M denote the set of all monomial ideals in I that are maximal with respect to inclusion. If M were not finite, then by Theorem 3.2.1 it would contain two different ideals I, J ∈ M with I ⊆ J. Since I ≠ J this last relation is strict, and thus I is not inclusion-maximal in I. □

Corollary 3.2.3 Let (I_1, I_2, ...) be a sequence of monomial ideals in a polynomial ring with I_j ⊈ I_i whenever i < j. Then this sequence is finite.

Proof. Let M denote the set of all inclusion-maximal monomial ideals in the set {I_j : j = 1, ...}. Then M is finite by Corollary 3.2.2. Thus, there is some k ∈ Z_+ such that M ⊆ {I_1, ..., I_k}. Therefore, for all j > k there is some i ∈ {1, ..., k} satisfying I_j ⊆ I_i, in contradiction to the assumption that no such pair i, j exists. □

We will use this last corollary to obtain a similar result on sequences of pairs (u, V_u), which will be employed to prove termination of the subsequent algorithm to compute H_∞. To this end, we define the following notions.

Definition 3.2.4 We say that (u', V_{u'}) reduces (u, V_u), or (u', V_{u'}) ⊑ (u, V_u) for short, if the following conditions are satisfied:

- u' ⊑ u,
- for every v ∈ V_u there exists a v' ∈ V_{u'} with v' ⊑ v,
- u' ≠ 0, or there exist vectors v ∈ V_u and v' ∈ V_{u'} with 0 ≠ v' ⊑ v.

The last condition above is necessary to avoid the situation that u' and all occurring v' are zero, which would form a trivial zero reduction. In other words, for every N and for every vector z = (u, v_1, ..., v_N) constructable from the building blocks in (u, V_u) there is a non-zero (!) vector z' = (u', v'_1, ..., v'_N) constructable from the building blocks in (u', V_{u'}) such that z' ⊑ z. Therefore, (u', V_{u'}) ⊑ (u, V_u) implies that not all of the building blocks in (u, V_u) are needed for the description of Graver basis elements for any N.
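The reduction relation of Definition 3.2.4 can be made concrete in a few lines of Python. Here u' ⊑ u is the Graver order (same orthant, componentwise no larger in absolute value); the tuple encoding, the helper names, and the example pairs are our own illustration.

```python
# Reduction test for pairs (u, Vu) following Definition 3.2.4.

def conf(a, b):
    """a ⊑ b: sign-compatible and |a_i| <= |b_i| componentwise."""
    return all(ai * bi >= 0 and abs(ai) <= abs(bi) for ai, bi in zip(a, b))

def reduces(reducer, pair):
    """Does reducer = (u', Vu') reduce pair = (u, Vu)?"""
    (u1, V1), (u2, V2) = reducer, pair
    if not conf(u1, u2):                                     # u' ⊑ u
        return False
    if not all(any(conf(v1, v2) for v1 in V1) for v2 in V2):
        return False                                         # every v has a v' ⊑ v
    # rule out the trivial zero reduction
    return any(x != 0 for x in u1) or any(
        any(x != 0 for x in v1) and conf(v1, v2)
        for v2 in V2 for v1 in V1)

p_small = ((1, 0), [(1, -1)])
p_big = ((2, 0), [(2, -1), (1, -2)])
# reduces(p_small, p_big) -> True; reduces(p_big, p_small) -> False
```

The all-zero pair is rejected by the last check, exactly as the third condition of the definition demands.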

Definition 3.2.5 We associate with every pair (u, V_u), u ≠ 0, the monomial ideal I(u, V_u) in Q[x_1, ..., x_{2m+2n}] generated by all the monomials x^{(u^+, u^-, v^+, v^-)} with v ∈ V_u, whereas we associate with (0, V_0) the monomial ideal I(0, V_0) in Q[x_1, ..., x_{2n}] generated by all the monomials x^{(v^+, v^-)} with v ∈ V_0 \ {0}.


Lemma 3.2.6 Let (u, V_u) and (u', V_{u'}) with u, u' ≠ 0 be given. Then I(u, V_u) ⊆ I(u', V_{u'}) implies (u', V_{u'}) ⊑ (u, V_u). Therefore, (u', V_{u'}) ⋢ (u, V_u) implies I(u, V_u) ⊈ I(u', V_{u'}).

Proof. Since I(u, V_u) and I(u', V_{u'}) are monomial ideals, we have I(u, V_u) ⊆ I(u', V_{u'}) if and only if every generator x^{(u^+, u^-, v^+, v^-)} of I(u, V_u) is divisible by some generator x^{((u')^+, (u')^-, (v')^+, (v')^-)} of I(u', V_{u'}) (cf. [16]). The latter implies that u' ⊑ u and that for every v ∈ V_u there exists a vector v' ∈ V_{u'} with v' ⊑ v. In other words, we have (u', V_{u'}) ⊑ (u, V_u), and the proof is complete. □
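The divisibility criterion in this proof ("a monomial ideal is contained in another iff each generator is divisible by some generator of the other") is easy to state in code. The exponent-vector encoding follows Definition 3.2.5, but the function names and the tiny example data are ours.

```python
# Monomial encoding of pairs and the containment test for monomial ideals.

def split(x):
    """Exponent vector (x^+, x^-) of an integer vector x."""
    return tuple(max(xi, 0) for xi in x) + tuple(max(-xi, 0) for xi in x)

def generators(u, Vu):
    """Exponents (u^+, u^-, v^+, v^-) of the generators of I(u, Vu)."""
    return [split(u) + split(v) for v in Vu]

def divides(a, b):
    """x^a divides x^b iff a <= b componentwise."""
    return all(ai <= bi for ai, bi in zip(a, b))

def ideal_contains(gens_outer, gens_inner):
    """Is the ideal generated by gens_inner contained in the one by gens_outer?"""
    return all(any(divides(g, h) for g in gens_outer) for h in gens_inner)

I1 = generators((1, 0), [(1, -1)])   # smaller exponents -> bigger ideal
I2 = generators((2, 0), [(2, -1)])
contained = ideal_contains(I1, I2)   # True
```

By Lemma 3.2.6, `contained == True` here certifies that the pair ((1, 0), {(1, −1)}) reduces the pair ((2, 0), {(2, −1)}).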

Lemma 3.2.7 (0, V'_0) ⋢ (0, V_0) implies I(0, V_0) ⊈ I(0, V'_0).

Proof. (0, V'_0) ⋢ (0, V_0) implies that there is some v ∈ V_0 such that there is no v' ∈ V'_0 \ {0} with v' ⊑ v. But this means that x^{(v^+, v^-)} ∉ I(0, V'_0) and, therefore, we have I(0, V_0) ⊈ I(0, V'_0). □

Now we are in the position to prove the above-mentioned lemma on sequences of pairs.

Lemma 3.2.8 Let ((u_1, V_{u_1}), (u_2, V_{u_2}), ...) be a sequence of pairs such that u_i ≠ 0 for all i = 1, 2, ..., and such that (u_i, V_{u_i}) ⋢ (u_j, V_{u_j}) whenever i < j. Then this sequence is finite.

Proof. Consider the sequence (I(u_1, V_{u_1}), I(u_2, V_{u_2}), ...) of monomial ideals. By the above Lemma 3.2.6, it fulfills I(u_j, V_{u_j}) ⊈ I(u_i, V_{u_i}) whenever i < j. Thus, by Corollary 3.2.3, this sequence is finite, implying that the sequence ((u_1, V_{u_1}), (u_2, V_{u_2}), ...) is finite as well. □

Lemma 3.2.9 Let ((0, V_1), (0, V_2), ...) be a sequence of pairs with (0, V_i) ⋢ (0, V_j) whenever i < j. Then this sequence is finite.

Proof. Consider the sequence (I(0, V_1), I(0, V_2), ...) of monomial ideals. By Lemma 3.2.7, it fulfills I(0, V_j) ⊈ I(0, V_i) whenever i < j. Thus, by Corollary 3.2.3, this sequence is finite, implying that the sequence ((0, V_1), (0, V_2), ...) is finite as well. □

As a consequence of the last two lemmas we obtain the following.

Lemma 3.2.10 Let ((u_1, V_{u_1}), (u_2, V_{u_2}), ...) be a sequence of pairs in the IP situation such that (u_i, V_{u_i}) ⋢ (u_j, V_{u_j}) whenever i < j. Then this sequence is finite.


Proof. Suppose the sequence ((u_1, V_{u_1}), (u_2, V_{u_2}), ...) is not finite. Consider the two subsequences in which all u_i are non-zero and in which all u_i are zero, respectively. At least one of these subsequences is not finite and satisfies (u_i, V_{u_i}) ⋢ (u_j, V_{u_j}) whenever i < j. But this contradicts one of the two preceding Lemmas 3.2.8 and 3.2.9. □

Lemma 3.2.10 will imply finiteness of the subsequent algorithm to compute H_∞ in the IP case. Below we will prove correctness of this algorithm. Then, termination and correctness together imply finiteness of H_∞ in the IP situation. Thus, we have the following.

Theorem 3.2.11 Let integer matrices A, T, and W of appropriate dimensions be given, and let H_∞ be defined for the IP case as above. Then H_∞ is a finite set.

3.2.2 LP case

For the LP case we have to define the relation ⊑ differently.

Definition 3.2.12 We say that the pair (u, V_u) can be reduced by the pair (u', V_{u'}), or (u', V_{u'}) ⊑ (u, V_u) for short, if the following conditions are satisfied:

1. supp(u') ⊆ supp(u),
2. for all v_i ∈ V_u there exists v'_i ∈ V_{u'} such that supp(v'_i) ⊆ supp(v_i), and
3. u' ≠ 0, or there exist v_i ∈ V_u and v'_i ∈ V_{u'} such that ∅ ≠ supp(v'_i) ⊆ supp(v_i).

The third condition is necessary to avoid the situation that u' and all occurring v'_i are zero, which would form a trivial zero reduction.
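The support-based reduction of Definition 3.2.12 can be sketched in the same style as the IP version; the helper names and the toy pairs below are our own.

```python
# LP-case reduction test via support inclusion (Definition 3.2.12).

def supp(x):
    return frozenset(i for i, xi in enumerate(x) if xi != 0)

def lp_reduces(reducer, pair):
    """Does reducer = (u', Vu') reduce pair = (u, Vu) via supports?"""
    (u1, V1), (u2, V2) = reducer, pair
    if not supp(u1) <= supp(u2):                               # condition 1
        return False
    if not all(any(supp(v1) <= supp(v2) for v1 in V1) for v2 in V2):
        return False                                           # condition 2
    # condition 3: rule out the trivial zero reduction
    return bool(supp(u1)) or any(
        supp(v1) and supp(v1) <= supp(v2)
        for v2 in V2 for v1 in V1)

pa = ((0.5, 0.0), [(1.0, 0.0)])
pb = ((1.5, 0.0), [(2.0, 3.0)])
# lp_reduces(pa, pb) -> True; lp_reduces(pb, pa) -> False
```

Since only supports matter, any scaling of pa reduces pb just as well, which is the reason the LP argument can count equivalence classes of supports.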

Lemma 3.2.13 Let ((u_1, V_{u_1}), (u_2, V_{u_2}), ...) be a sequence of pairs in the LP situation such that (u_i, V_{u_i}) ⋢ (u_j, V_{u_j}) whenever i < j. Then this sequence is finite.

Proof. Define V_u and V_{u'} to be equivalent if for each v ∈ V_u there is some v' ∈ V_{u'} with supp(v) = supp(v') and vice versa. This defines an equivalence relation with 2^{2^n} equivalence classes.

Define (u, V_u) and (u', V_{u'}) to be equivalent if supp(u) = supp(u') and V_u is equivalent to V_{u'}. This defines an equivalence relation with 2^m · 2^{2^n} equivalence classes.

Moreover, if (u, V_u) and (u', V_{u'}) are equivalent, then (u, V_u) ⊑ (u', V_{u'}) and (u', V_{u'}) ⊑ (u, V_u). Therefore, there are no two equivalent pairs in the sequence ((u_1, V_{u_1}), (u_2, V_{u_2}), ...). Thus, this sequence is finite. □

This lemma will imply finiteness of the subsequent algorithm to compute H_∞ in the LP case. Below we will prove correctness of this algorithm. Then, termination and correctness together imply finiteness of H_∞ in the LP situation. Thus, we have the following.

Theorem 3.2.14 Let integer matrices A, T, and W of appropriate dimensions be given, and let H_∞ be defined for the LP case as above. Then H_∞ is a finite set.

3.3 Computation of H_∞

If H_∞ were finite, we could find this set by computing the Graver test set G_N for sufficiently large N and by decomposing its elements into their building blocks. Even when disregarding that we do not know in advance how big N then has to be taken, this approach is not very practical, due to the size of G_N. The idea now is to retain the pattern of the Graver test set computations from Section 1.3, but to work with pairs (u, V_u) instead and to define the main ingredients of the completion procedure, input set, normalForm, and S-vectors, appropriately. In what follows, the objects f, g, and s are all pairs of the form (u, V_u).

Algorithm 3.3.1 (Algorithm to compute H_∞)

Input: a symmetric generating set F of ker_{Z^{d_1}}(A_1) over Z, or of ker_{R^{d_1}}(A_1) over R, in (u, V_u)-notation to be specified below
Output: a set G ⊇ H_∞

G := F
C := ∪_{f,g ∈ G} {f ⊖ g}   (forming S-vectors)
while C ≠ ∅ do
    s := an element of C
    C := C \ {s}
    f := normalForm(s, G)
    if f ≠ (0, {0}) then
        C := C ∪ ∪_{g ∈ G ∪ {f}} {f ⊖ g}   (adding S-vectors)
        G := G ∪ {f}
return G.

Behind the function normalForm(s, G) there is the following algorithm.

Algorithm 3.3.2 (Normal form algorithm)

Input: a pair s, a set G of pairs
Output: a normal form of s with respect to G

while there is some g ∈ G such that g ⊑ s do
    s := s ⊖ g
return s
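Algorithms 3.3.1 and 3.3.2 follow the generic critical-pair completion pattern. The sketch below instantiates that pattern for plain integer vectors with the Graver order ⊑ (the algorithms above run the same loop on pairs (u, V_u) with the pair difference ⊖); the instance A = (1 1 −2) and all names are our own illustration.

```python
# Generic completion loop: reduce each S-vector, add it if it survives.

def conf(a, b):
    """a ⊑ b: sign-compatible and componentwise no larger."""
    return all(ai * bi >= 0 and abs(ai) <= abs(bi) for ai, bi in zip(a, b))

def normal_form(s, G):
    """Subtract reducers g ⊑ s as long as possible (cf. Algorithm 3.3.2)."""
    changed = True
    while changed:
        changed = False
        for g in G:
            if any(x != 0 for x in g) and conf(g, s):
                s = tuple(si - gi for si, gi in zip(s, g))
                changed = True
    return s

def complete(F):
    """Completion loop of Algorithm 3.3.1, on vectors instead of pairs."""
    G = list(F)
    C = [tuple(fi - gi for fi, gi in zip(f, g)) for f in G for g in G]
    while C:
        s = C.pop()
        f = normal_form(s, G)
        if any(x != 0 for x in f):
            C.extend(tuple(fi - gi for fi, gi in zip(f, g)) for g in G + [f])
            G.append(f)
    return G

# Symmetric generating set of ker(A) for A = (1 1 -2); the completion
# adds missing Graver elements such as (2, 0, 1) and (0, 2, 1).
F = [(1, 1, 1), (-1, -1, -1), (1, -1, 0), (-1, 1, 0)]
G = complete(F)
```

Termination rests on the same Gordan-Dickson-type argument as in the text: every vector added to G is irreducible with respect to the current G.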

It remains to define an appropriate input set, the sum ⊕, and the difference ⊖ of two pairs (u, V_u) and (u', V_{u'}). To get good candidates we may think of a computation of G_N, with N sufficiently large, in which every vector is decomposed into its building blocks.

Lemma 3.3.3 Let F be a symmetric generating set for ker_{Z^{d_1}}(A_1) over Z which contains a generating set for {(0, v) : Wv = 0} ⊆ ker_{Z^{d_1}}(A_1) consisting only of vectors with zero as first-stage component. Moreover, let F_N be the set of all those vectors in ker_{Z^{d_N}}(A_N) whose building blocks are also building blocks of vectors in F ∪ {0}. Then, for any N, the vectors in F_N generate ker_{Z^{d_N}}(A_N) over Z.

The same result holds for the continuous kernel ker_{R^{d_N}}(A_N).

Proof. Let z = (u, v_1, ..., v_N) ∈ ker_{Z^{d_N}}(A_N). We have to show that z can be written as a linear integer combination of elements from F_N.

From (u, v_1) ∈ ker_{Z^{d_1}}(A_1) we conclude that there is a finite linear integer combination (u, v_1) = Σ_i λ_i (u_i, v_{i,1}) with (u_i, v_{i,1}) ∈ F. Hence (u, v_1, ..., v_1) = Σ_i λ_i (u_i, v_{i,1}, ..., v_{i,1}) with (u_i, v_{i,1}, ..., v_{i,1}) ∈ F_N. Moreover, for j = 2, ..., N, we have W(v_j − v_1) = 0, that is, (0, v_j − v_1) ∈ ker_{Z^{d_1}}(A_1), since Tu + Wv_1 = Tu + Wv_j. Since F contains an integer generating set for {(0, v) : Wv = 0} ⊆ ker_{Z^{d_1}}(A_1) consisting only of vectors with zero as first-stage component, we can construct linear integer combinations (0, v_j − v_1) = Σ_i μ_{i,j} (0, v_{i,j}) with (0, v_{i,j}) ∈ F, for j = 2, ..., N.


Chapter 3. Decomposition of Test Sets in Two-Stage Stochastic Programming 63

Thus, we have (0, 0, ..., 0, v_j − v_1, 0, ..., 0) = Σ_i μ_{i,j} (0, 0, ..., 0, v_{i,j}, 0, ..., 0) with (0, 0, ..., 0, v_{i,j}, 0, ..., 0) ∈ F_N. But now we get

    z = (u, v_1, ..., v_1) + Σ_{j>1} (0, 0, ..., 0, v_j − v_1, 0, ..., 0)
      = Σ_i λ_i (u_i, v_{i,1}, ..., v_{i,1}) + Σ_{j>1} Σ_i μ_{i,j} (0, 0, ..., 0, v_{i,j}, 0, ..., 0),

as desired.

An analogous argument for the continuous case shows the result for ker_{R^{d_N}}(A_N). □

Before we define the input set, the sum ⊕, and the difference ⊖ for the IP and the LP cases, let us introduce the following notation.

Definition 3.3.4 For λ ∈ R let

    (u, V_u) + λ(u′, V_{u′}) := (u + λu′, V_u + λV_{u′}),

where

    V_u + λV_{u′} := {v + λv′ : v ∈ V_u, v′ ∈ V_{u′}}.

3.3.1 IP case

Lemma 3.3.3 suggests the following input set.

Definition 3.3.5 Let F be a symmetric generating set for ker_{Z^{d_1}}(A_1) over Z which contains a generating set for {(0, v) : Wv = 0} ⊆ ker_{Z^{d_1}}(A_1) consisting only of vectors with zero as first-stage components. With this, define the building blocks of all vectors in F ∪ {0}, in (u, V_u)-notation, to be the input set to Algorithm 3.3.1 for the integer case.

Definition 3.3.6 For the computation of H∞ for two-stage stochastic integer programs define

    (u, V_u) ⊕ (u′, V_{u′}) := (u, V_u) + (u′, V_{u′}) = (u + u′, V_u + V_{u′})

and

    (u, V_u) ⊖ (u′, V_{u′}) := (u − u′, {v − v′ : v ∈ V_u, v′ ∈ V_{u′}, v′ ⊑ v}).
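Definition 3.3.6 is concrete enough to transcribe directly. The following Python sketch (an illustration, not the thesis implementation) realizes ⊕, ⊖, and the order ⊑ on pairs (u, V_u), with u and the elements of V_u represented as tuples; conforms encodes the componentwise same-orthant order that ⊑ denotes throughout, which we assume matches Definition 3.2.4.

```python
def conforms(a, b):
    """a ⊑ b: componentwise, a lies in the same orthant as b and |a| <= |b|
    (the usual partial order for Graver test sets; assumed to match Def. 3.2.4)."""
    return all(x * y >= 0 and abs(x) <= abs(y) for x, y in zip(a, b))

def add(p, q):
    """(u, V_u) ⊕ (u', V_u') = (u + u', {v + v' : v in V_u, v' in V_u'})."""
    (u, Vu), (u2, Vu2) = p, q
    return (tuple(x + y for x, y in zip(u, u2)),
            {tuple(x + y for x, y in zip(v, v2)) for v in Vu for v2 in Vu2})

def sub(p, q):
    """(u, V_u) ⊖ (u', V_u') = (u - u', {v - v' : v' ⊑ v})."""
    (u, Vu), (u2, Vu2) = p, q
    return (tuple(x - y for x, y in zip(u, u2)),
            {tuple(x - y for x, y in zip(v, v2))
             for v in Vu for v2 in Vu2 if conforms(v2, v)})
```

The ⊖ operation keeps only differences v − v′ with v′ ⊑ v, exactly as in the definition.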


Remark. In (u, V_u) ⊖ (u′, V_{u′}) it suffices to collect only one difference v − v′ for every v ∈ V_u. The proof of the subsequent proposition also shows that Algorithm 3.3.1 still terminates and works correctly if (u, V_u) ⊖ (u′, V_{u′}) were defined in this way. □

Proposition 3.3.7 If the input set, the procedure normalForm, f ⊕ g, and s ⊖ g are defined as in Definitions 3.2.4, 3.3.5, and 3.3.6, then Algorithm 3.3.1 terminates and returns a set containing H∞ for two-stage stochastic integer programs.

Proof. In the course of the algorithm, a sequence of pairs in G \ F is generated that satisfies the conditions of Lemma 3.2.10. Therefore, Algorithm 3.3.1 terminates.

Denote by G the set that is returned by Algorithm 3.3.1. To show that H∞ ⊆ G, we have to prove that H_N ⊆ G for all N ∈ Z_+. Fix N and start a Graver test set computation (see Section 1.3) with

    F̄ := {(u, v_1, ..., v_N) : (u, V_u) ∈ G, v_i ∈ V_u, i = 1, ..., N}

as input set. F̄ generates ker_{Z^{d_N}}(A_N) over Z for all N ∈ Z_+, since F_N ⊆ F̄ by the assumptions on the input set to Algorithm 3.3.1 and by Lemma 3.3.3.

We will now show that all sums z + z′ ∈ S-vectors(z, z′) of two elements z, z′ ∈ F̄ reduce to 0 with respect to F̄. In this case, Algorithm 1.3.1 returns the input set F̄, which implies G_N ⊆ F̄. Therefore, H_N ⊆ G as desired.

Take two arbitrary elements z = (u, v_1, ..., v_N) and z′ = (u′, v′_1, ..., v′_N) from F̄, and consider the vector z + z′ = (u + u′, v_1 + v′_1, ..., v_N + v′_N).

In the above algorithm, (u, V_u) ⊕ (u′, V_{u′}) was reduced to (0, {0}) by elements (u_1, V_{u_1}), ..., (u_k, V_{u_k}) ∈ G. From this sequence we can construct a sequence z_1, ..., z_k of vectors in F̄ which reduce z + z′ to zero as follows.

(u_1, V_{u_1}) ⊑ (u, V_u) ⊕ (u′, V_{u′}) implies that u_1 ⊑ u + u′ and that there exist vectors v_{1,1}, ..., v_{1,N} ∈ V_{u_1} satisfying v_{1,i} ⊑ v_i + v′_i for i = 1, ..., N. Therefore, we conclude z_1 := (u_1, v_{1,1}, ..., v_{1,N}) ⊑ z + z′ and thus z + z′ can be reduced to z + z′ − z_1. Moreover, z_1 ∈ F̄ and all the building blocks of z + z′ − z_1 are contained in the pair ((u, V_u) ⊕ (u′, V_{u′})) ⊖ (u_1, V_{u_1}).

But ((u, V_u) ⊕ (u′, V_{u′})) ⊖ (u_1, V_{u_1}) was further reduced by (u_2, V_{u_2}), ..., (u_k, V_{u_k}) ∈ G. Therefore, we can construct from (u_2, V_{u_2}) a vector z_2 ∈ F̄ with z_2 ⊑ z + z′ − z_1. Thus, z + z′ − z_1 can be further reduced to z + z′ − z_1 − z_2.

Repeating this construction, we arrive in the kth step at z + z′ − z_1 − ... − z_{k−1}, whose building blocks all lie in ((u, V_u) ⊕ (u′, V_{u′})) ⊖ (u_1, V_{u_1}) ⊖ ... ⊖ (u_{k−1}, V_{u_{k−1}}). The latter can be reduced to (0, {0}) by the pair (u_k, V_{u_k}) ∈ G. Therefore, there exists a vector z_k ∈ F̄ such that z_k ⊑ z + z′ − z_1 − ... − z_{k−1} and 0 = z + z′ − z_1 − ... − z_k. □

Note that termination and correctness of the above algorithm imply finiteness of H∞. Therefore, the above proposition finally proves Theorem 3.2.11.

3.3.2 LP case

Lemma 3.3.3 suggests the following input set.

Definition 3.3.8 Let F be a generating set for ker_{R^{d_1}}(A_1) over R which contains a generating set for {(0, v) : Wv = 0} ⊆ ker_{R^{d_1}}(A_1) consisting only of vectors with zero as first-stage components. With this, define the building blocks of all vectors in F ∪ {0}, in (u, V_u)-notation, to be the input set to Algorithm 3.3.1 for the continuous case.

Definition 3.3.9 For the computation of H∞ for two-stage stochastic linear programs define ⊕ and ⊖ as follows. For given (u, V_u), (u′, V_{u′}), and all λ such that

1. u + λu′ contains a zero entry at some component i at which neither u nor u′ has a zero entry, that is, (u + λu′)^(i) = 0 but u^(i)·(u′)^(i) ≠ 0, or

2. for some vectors v ∈ V_u, v′ ∈ V_{u′} the vector v + λv′ contains a zero entry at some component i at which neither v nor v′ has a zero entry, that is, (v + λv′)^(i) = 0 but v^(i)·(v′)^(i) ≠ 0,

let

    (u, V_u) ⊕ (u′, V_{u′}) := {(u, V_u) + λ(u′, V_{u′}) : λ ∈ R satisfying the above conditions 1, 2}.

In case of reduction, that is, if (u′, V_{u′}) ⊑ (u, V_u) (see Definition 3.2.12), (u, V_u) reduces to

    (u, V_u) ⊖ λ(u′, V_{u′}) := (u − λu′, {v_i − λv′_i : v_i ∈ V_u, v′_i ∈ V_{u′} as in Definition 3.2.12}),

where λ takes one of the finitely many values such that supp(u − λu′) ⊊ supp(u) or such that supp(v_i − λv′_i) ⊊ supp(v_i) for at least one i.

Since (u − λu′, {v_i − λv′_i : v_i ∈ V_u, v′_i ∈ V_{u′}}) contains at least one more zero entry in u or in some element of V_u, normalForm(s, G) always terminates.
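The admissible λ in conditions 1 and 2 of Definition 3.3.9 are exactly the finitely many component ratios that create a new zero entry. A small Python sketch (the function name and the tuple representation are our own) collects them:

```python
def cancelling_lambdas(u, u2, Vu, Vu2):
    """All lambda for which u + lambda*u' or some v + lambda*v' gains a new
    zero entry (conditions 1 and 2 of Definition 3.3.9): for each component
    where both vectors are nonzero, lambda = -a/b cancels that entry."""
    lams = set()
    for a, b in zip(u, u2):                 # condition 1, on the pair (u, u')
        if a != 0 and b != 0:
            lams.add(-a / b)
    for v in Vu:                            # condition 2, on all pairs (v, v')
        for v2 in Vu2:
            for a, b in zip(v, v2):
                if a != 0 and b != 0:
                    lams.add(-a / b)
    return lams
```

Each returned λ yields one S-vector (u, V_u) + λ(u′, V_{u′}) of the ⊕ operation.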


Proposition 3.3.10 If the input set, the procedure normalForm, f ⊕ g, and s ⊖ g are defined as above, then Algorithm 3.3.1 terminates and returns a set containing H∞ for two-stage stochastic linear programs.

Proof. In the course of the algorithm, a sequence of pairs in G \ F is generated that satisfies the conditions of Lemma 3.2.13. This means that G \ F contains at most 2^m · 2^{2^n} different pairs (u, V_u). Therefore, Algorithm 3.3.1 terminates.

Let G denote the set that is returned by Algorithm 3.3.1. To show correctness, that is, H∞ ⊆ G, we have to prove H_N ⊆ G for all N ∈ Z_+. Fix N and start a Graver test set computation with F̄ := {(u, v_1, ..., v_N) : (u, V_u) ∈ G, v_i ∈ V_u, i = 1, ..., N} as input set. Note that F̄ generates ker_{R^{d_N}}(A_N) over R for all N ∈ Z_+, since F_N ⊆ F̄ by the assumptions on the input set to Algorithm 3.3.1 and by Lemma 3.3.3. If all S-vectors f + λg ∈ S-vectors(f, g) of two elements f, g ∈ F̄ can be written as a linear combination of elements of F̄ whose support is contained in supp(f + λg), then the Graver basis G_N is contained in F̄ (see Remark 1.3.5). Since N was chosen arbitrarily, G_N ⊆ F̄ implies H_N ⊆ G for all N ∈ Z_+ and we are done.

Take two arbitrary elements z = (u, v_1, ..., v_N) and z′ = (u′, v′_1, ..., v′_N) from F̄ and choose λ ∈ R such that z + λz′ ∈ S-vectors(z, z′). In the above algorithm, (u, V_u) + λ(u′, V_{u′}) ∈ (u, V_u) ⊕ (u′, V_{u′}) was successively reduced to (0, {0}) by (u_1, V_{u_1}), ..., (u_k, V_{u_k}) ∈ G. From this sequence we will now construct a sequence z_1, ..., z_k of vectors in F̄ such that z + λz′ = Σ_{i=1}^k λ_i z_i and supp(z_i) ⊆ supp(z + λz′) as desired.

In the very first reduction step, the pair (u, V_u) + λ(u′, V_{u′}) was reduced to the pair ((u, V_u) + λ(u′, V_{u′})) ⊖ λ_1(u_1, V_{u_1}). But (u_1, V_{u_1}) ⊑ (u, V_u) + λ(u′, V_{u′}) implies that supp(u_1) ⊆ supp(u + λu′) and that there exist vectors v_{1,1}, ..., v_{1,N} ∈ V_{u_1} with supp(v_{1,i}) ⊆ supp(v_i + λv′_i), i = 1, ..., N. Thus, z_1 := (u_1, v_{1,1}, ..., v_{1,N}) ∈ F̄, supp(z_1) ⊆ supp(z + λz′), and all the building blocks of z + λz′ − λ_1 z_1 lie in the pair ((u, V_u) + λ(u′, V_{u′})) ⊖ λ_1(u_1, V_{u_1}).

But ((u, V_u) + λ(u′, V_{u′})) ⊖ λ_1(u_1, V_{u_1}) was further reduced by (u_2, V_{u_2}) ∈ G to the pair ((u, V_u) + λ(u′, V_{u′})) ⊖ λ_1(u_1, V_{u_1}) ⊖ λ_2(u_2, V_{u_2}). Thus, we can construct from (u_2, V_{u_2}) a vector z_2 ∈ F̄ such that supp(z_2) ⊆ supp(z + λz′ − λ_1 z_1), and all building blocks of z + λz′ − λ_1 z_1 − λ_2 z_2 lie in ((u, V_u) + λ(u′, V_{u′})) ⊖ λ_1(u_1, V_{u_1}) ⊖ λ_2(u_2, V_{u_2}).

Repeating this construction, we arrive in the kth step at z + λz′ − Σ_{i=1}^{k−1} λ_i z_i, whose building blocks all lie in ((u, V_u) + λ(u′, V_{u′})) ⊖ λ_1(u_1, V_{u_1}) ⊖ ... ⊖ λ_{k−1}(u_{k−1}, V_{u_{k−1}}). The latter can be reduced to (0, {0}) by the pair (u_k, V_{u_k}) ∈ G. Therefore, there has to exist a vector z_k ∈ F̄ such that supp(z_k) ⊆ supp(z + λz′ − Σ_{i=1}^{k−1} λ_i z_i) and 0 = z + λz′ − Σ_{i=1}^k λ_i z_i. Thus, z + λz′ = Σ_{i=1}^k λ_i z_i with supp(z_i) ⊆ supp(z + λz′) for i = 1, ..., k, which completes the proof. □


Note that termination and correctness of the above algorithm imply finiteness of H∞. Therefore, the above proposition finally proves Theorem 3.2.14.

3.4 Solving the Optimization Problem with the Help of H∞

As we have seen in Section 2.6, universal test sets can help to find an initial feasible solution to our optimization problem and to augment it to optimality. For our optimization problem min{c^T z : A_N z = b, z ∈ Z_+^{d_N}}, however, the universal test set is not given explicitly; only its set of building blocks H∞ is available. As mentioned in Remark 2.6.4, we may concentrate on finding improving directions. In the following, we will see how these directions can be reconstructed from H∞.

3.4.1 IP case

Suppose we are given H∞, a cost function c, and a feasible solution z_0 = (u, v_1, ..., v_N).

Lemma 3.4.1 Suppose there exists no pair (u′, V_{u′}) ∈ H∞ with the properties

1. u′ ≤ u,

2. for all i = 1, ..., N there exists v̄_i ∈ V_{u′} with v̄_i ≤ v_i,

3. c^T z′ > 0, where z′ = (u′, v′_1, ..., v′_N) and v′_i ∈ argmax{c_i^T v̄_i : v̄_i ≤ v_i, v̄_i ∈ V_{u′}} for i = 1, ..., N.

Then z_0 = (u, v_1, ..., v_N) is optimal for min{c^T z : A_N z = b, z ∈ Z_+^{d_N}}.

If there exists such a pair (u′, V_{u′}) ∈ H∞, then z_0 − z′ is a feasible solution and c^T(z_0 − z′) < c^T z_0.

Proof. Suppose that z_0 is not optimal. Then there has to exist some improving vector z″ = (u″, v″_1, ..., v″_N) ∈ G_N such that z_0 − z″ is feasible and c^T(z_0 − z″) < c^T z_0. Feasibility of z_0 − z″ implies z_0 − z″ ≥ 0, hence z″ ≤ z_0. Therefore, u″ ≤ u and v″_i ≤ v_i, i = 1, ..., N, the latter implying that for any i = 1, ..., N there exists a v̄_i ∈ V_{u″} such that v̄_i ≤ v_i.

Let z′ := (u″, v′_1, ..., v′_N), where v′_i ∈ argmax{c_i^T v̄_i : v̄_i ≤ v_i, v̄_i ∈ V_{u″}}. But now c^T(z_0 − z″) < c^T z_0 implies that c^T z″ > 0. Moreover, c^T z′ ≥ c^T z″ > 0. In conclusion, the pair (u′, V_{u′}) with u′ := u″ fulfills conditions 1-3, proving the first claim of the lemma.

With z′ = (u′, v′_1, ..., v′_N) according to condition 3 we obtain c^T(z_0 − z′) < c^T z_0. Moreover, v′_i ≤ v_i, i = 1, ..., N, and u′ ≤ u together imply z′ ≤ z_0, and hence z_0 − z′ ≥ 0. Finally, (u′, v′_1, ..., v′_N) ∈ ker_{Z^{d_N}}(A_N), and therefore A_N(z_0 − z′) = A_N z_0 − 0 = b, which completes the proof. □
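Lemma 3.4.1 turns H∞ into an optimality test by a simple scan over the pairs (u′, V_{u′}). The following Python sketch illustrates the reconstruction; for simplicity it assumes a single common second-stage cost vector c_v for all scenarios (the thesis allows scenario-dependent costs c_i), and all names are our own.

```python
def find_improving(Hinf, c_u, c_v, u, vs):
    """Scan H-infinity for a pair (u', V_u') satisfying conditions 1-3 of
    Lemma 3.4.1; return the improving vector (u', [v'_1, ..., v'_N]) or None.

    Hinf: list of pairs (u', V_u'), vectors as tuples.
    c_u, c_v: first- and second-stage cost vectors (one c_v for all
    scenarios -- a simplifying assumption of this sketch)."""
    for u2, Vu2 in Hinf:
        if not all(a <= b for a, b in zip(u2, u)):   # condition 1: u' <= u
            continue
        best = []
        for v in vs:                                 # condition 2, per scenario
            cands = [w for w in Vu2 if all(a <= b for a, b in zip(w, v))]
            if not cands:
                best = None
                break
            # condition 3: pick the cost-maximal admissible second-stage block
            best.append(max(cands, key=lambda w: sum(x * y for x, y in zip(c_v, w))))
        if best is None:
            continue
        obj = sum(x * y for x, y in zip(c_u, u2)) \
            + sum(sum(x * y for x, y in zip(c_v, w)) for w in best)
        if obj > 0:                                  # condition 3: c^T z' > 0
            return (u2, best)
    return None
```

If the scan returns None, z_0 is optimal; otherwise subtracting the returned vector from z_0 strictly improves the objective. Note that the work per call is linear in the number N of scenarios, as discussed below.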

Theorem 3.4.2 If the problem min{c^T z : A_N z = b, z ∈ Z_+^{d_N}} is solvable, an optimal solution can be computed in finitely many steps by application of Algorithms 0.0.2 and 2.6.1 together with the above reconstruction procedure for finding improving vectors.

Moreover, the reconstruction procedure in Lemma 3.4.1 above is linear with respect to the number N of scenarios. In accordance with that, we observed in our preliminary test runs (cf. Section 3.6) that the method is fairly insensitive to growth in the number N of scenarios. Of course, this becomes effective only after H∞ has been computed.

3.4.2 LP case

Suppose we are given H∞, a cost function c, and a feasible solution z_0 = (u, v_1, ..., v_N).

Lemma 3.4.3 Suppose there exists no pair (u′, V_{u′}) ∈ H∞ with the properties

1. supp((u′)⁺) ⊆ supp(u),

2. for all i = 1, ..., N there exists v̄_i ∈ V_{u′} with supp(v̄_i⁺) ⊆ supp(v_i),

3. c^T z′ > 0, where z′ = (u′, v′_1, ..., v′_N) and v′_i ∈ argmax{c_i^T v̄_i : supp(v̄_i⁺) ⊆ supp(v_i), v̄_i ∈ V_{u′}} for i = 1, ..., N.

Then z_0 = (u, v_1, ..., v_N) is optimal for min{c^T z : A_N z = b, z ∈ R_+^{d_N}}.

If there exists such a pair (u′, V_{u′}) ∈ H∞, then there is some positive scalar λ such that z_0 − λz′ is feasible and c^T(z_0 − λz′) < c^T z_0.

Proof. Suppose that z_0 is not optimal. Then there has to exist some improving vector z″ = (u″, v″_1, ..., v″_N) ∈ G_N and a positive scalar λ such that z_0 − λz″ is feasible and c^T(z_0 − λz″) < c^T z_0. Feasibility of z_0 − λz″ implies z_0 − λz″ ≥ 0, and therefore supp((z″)⁺) ⊆ supp(z_0). Therefore, we have supp((u″)⁺) ⊆ supp(u) and supp((v″_i)⁺) ⊆ supp(v_i), i = 1, ..., N, the latter implying that for any i = 1, ..., N there exists v̄_i ∈ V_{u″} such that supp(v̄_i⁺) ⊆ supp(v_i). Let z′ := (u″, v′_1, ..., v′_N), where v′_i ∈ argmax{c_i^T v̄_i : supp(v̄_i⁺) ⊆ supp(v_i), v̄_i ∈ V_{u″}}.

Now c^T(z_0 − λz″) < c^T z_0 and λ > 0 imply that c^T z″ > 0. Moreover, c^T z′ ≥ c^T z″ > 0. In conclusion, the pair (u′, V_{u′}) with u′ := u″ fulfills conditions 1-3, proving the first claim of the lemma.

With z′ = (u′, v′_1, ..., v′_N) according to condition 3 above, c^T z′ > 0 implies that c^T(z_0 − λz′) < c^T z_0 for any scalar λ > 0. Moreover, supp((v′_i)⁺) ⊆ supp(v_i) for i = 1, ..., N, and supp((u′)⁺) ⊆ supp(u) together imply that we can indeed choose a scalar λ > 0 with z_0 − λz′ ≥ 0. Finally, (u′, v′_1, ..., v′_N) ∈ ker_{R^{d_N}}(A_N), and therefore A_N(z_0 − λz′) = A_N z_0 − λ·0 = b, which completes the proof. □
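In the LP case the step length λ in the last part of the proof can be taken as the largest λ with z_0 − λz′ ≥ 0, the classical ratio test; a minimal sketch in our own notation:

```python
def step_length(z0, zp):
    """Largest lambda > 0 with z0 - lambda*z' >= 0 (classical ratio test).
    Entries of z' that are <= 0 impose no bound; the support condition of
    Lemma 3.4.3 guarantees that every bounding ratio is positive.
    Returns None if lambda is unbounded."""
    ratios = [a / b for a, b in zip(z0, zp) if b > 0]
    return min(ratios) if ratios else None
```

With z_0 = (4, 2, 1) and z′ = (2, −1, 0), only the first component bounds the step, giving λ = 2.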

Theorem 3.4.4 If the problem min{c^T z : A_N z = b, z ∈ R_+^{d_N}} is solvable, an optimal solution can be computed in finitely many steps by application of Algorithm 0.0.2 together with the Augmentation Strategy 2.5.1 and the above reconstruction procedure for finding improving vectors.

Note that the Augmentation Strategy 2.5.1 can also be applied at the building block level, since in phase 1 we have to look for an improving vector v to our feasible solution z_0 for which c^T v > 0 and supp(v⁺) ⊆ supp(z_0) hold.

Moreover, the reconstruction procedure in Lemma 3.4.3 above is linear with respect to the number N of scenarios.

3.5 Simple Recourse

For two-stage stochastic programs with simple recourse the matrix W takes the particularly simple form W = (D | −D), where D is an invertible square matrix of appropriate dimension. Simple recourse instances arise both in two-stage linear [3, 4, 26, 38] and two-stage integer linear programs [27]. In the following we show that for stochastic programs with simple recourse we have H∞ = H_1. Thus, it suffices to compute the Graver basis of A_1 in order to reconstruct an improving vector for the full simple recourse problem. To show this, we have to state another property of Graver bases. A similar lemma for IP Graver test sets of knapsack problems can be found in [15].


3.5.1 IP case

Lemma 3.5.1 Let B = (A  a  −a) be a matrix with two columns that differ only in their respective signs. Then the IP Graver basis of B can be constructed from the IP Graver basis of B′ = (A  a) in the following way:

    G_IP(B) = {(u, v, w) : vw ≤ 0, (u, v − w) ∈ G_IP(B′)} ∪ {±(0, 1, 1)}.

Proof. Let (u, v, w) ∈ G_IP(B). Since ±(0, 1, 1) are the only ⊑-minimal elements in ker_{Z^{n+2}}(B) \ {0} with u = 0, we may assume that u ≠ 0. Moreover, vw ≤ 0 holds, since otherwise either (u, v + 1, w + 1) ⊑ (u, v, w) or (u, v − 1, w − 1) ⊑ (u, v, w) contradicts the ⊑-minimality of (u, v, w). Thus, without loss of generality, we assume in the following that v ≥ 0 and w ≤ 0. Next we show (u, v − w) ∈ G_IP(B′).

Suppose that (u, v − w) ∉ G_IP(B′). Then there is some vector (u′, z′) ∈ G_IP(B′) with (u′, z′) ⊑ (u, v − w) and u′ ≠ 0. (If u′ = 0 then z′ = 0, contradicting (u′, z′) ∈ G_IP(B′), since only non-zero elements are in G_IP(B′).) Of course, (u′, z′) ≠ (u, v − w), since (u, v − w) ∉ G_IP(B′). From v − w ≥ 0 we get 0 ≤ z′ ≤ v − w. Next we show that (u, v, w) ∉ G_IP(B), which will contradict our initial assumption (u, v, w) ∈ G_IP(B).

To prove this, note that 0 ≤ min(z′, v) ≤ v and 0 ≥ −z′ + min(z′, v) ≥ w. Whereas the first chain of inequalities holds because of 0 ≤ z′, the second can be seen as follows. If z′ ≤ v, we get 0 ≥ −z′ + z′ = 0 ≥ w by our assumption on w. If, on the contrary, z′ > v, we get 0 ≥ −z′ + v ≥ −(v − w) + v = w. But this implies that (u′, min(z′, v), −z′ + min(z′, v)) ⊑ (u, v, w). Moreover, we know that u′ ≠ 0 and (u′, min(z′, v), −z′ + min(z′, v)) ∈ ker_{Z^{n+2}}(B). Thus, it remains to prove that (u′, min(z′, v), −z′ + min(z′, v)) ≠ (u, v, w), which implies that (u, v, w) is not ⊑-minimal in ker_{Z^{n+2}}(B) \ {0} and therefore (u, v, w) ∉ G_IP(B).

Suppose that (u′, min(z′, v), −z′ + min(z′, v)) = (u, v, w) were true. This yields u = u′, min(z′, v) = v, and w = −z′ + min(z′, v) = −z′ + v. But this implies that z′ = v − w and therefore (u′, z′) = (u, v − w), in contrast to our assumption (u′, z′) ≠ (u, v − w) above. Thus (u, v, w) ∉ G_IP(B).

Therefore, it remains to prove that every vector (u, v, w) with u ≠ 0, v ≥ 0, w ≤ 0, and (u, v − w) ∈ G_IP(B′) belongs to G_IP(B). Clearly, (u, v, w) ∈ ker_{Z^{n+2}}(B). Suppose that there exists a vector (u′, v′, w′) ∈ ker_{Z^{n+2}}(B) with u′ ≠ 0, (u′, v′, w′) ⊑ (u, v, w), and (u, v, w) ≠ (u′, v′, w′). But then we conclude (u′, v′ − w′) ∈ ker_{Z^{n+1}}(B′) and (u′, v′ − w′) ⊑ (u, v − w). If (u, v − w) ≠ (u′, v′ − w′) were true, this would contradict (u, v − w) ∈ G_IP(B′).

Therefore, suppose that we have (u, v − w) = (u′, v′ − w′), (u′, v′, w′) ⊑ (u, v, w), and (u, v, w) ≠ (u′, v′, w′). But then 0 ≤ v′ ≤ v and 0 ≥ w′ ≥ w together imply that 0 ≤ v′ − w′ ≤ v − w. Since u = u′ and (u, v, w) ≠ (u′, v′, w′), at least one of the inequalities v′ ≤ v and −w′ ≤ −w holds strictly. Therefore, 0 ≤ v′ − w′ < v − w, a contradiction to v − w = v′ − w′. □

Corollary 3.5.2 Let B = (A  D  −D) be a matrix with a number of paired columns that differ only in their respective signs. Then the IP Graver basis of B can be constructed from the IP Graver basis of B′ = (A  D) in the following way:

    G_IP(B) = {(u, v, w) : v^(i)·w^(i) ≤ 0 for all i, (u, v − w) ∈ G_IP(B′)} ∪ {±(0, 1, 1)}.

Herein, v, w, and 1 are vectors whose dimensions equal the number of columns of D.

Proof. The proof is analogous to the proof of Lemma 3.5.1, since we can consider the pairs of columns in D and −D independently. □
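For a single paired column, the lift of Lemma 3.5.1 can be enumerated explicitly: each (u, z) ∈ G_IP(B′) yields the |z| + 1 splits z = v − w with vw ≤ 0, plus the two elements ±(0, 1, 1). A Python sketch (illustrative only; names and representation are our own):

```python
def lift_paired_column(graver_Bp):
    """Lift the IP Graver basis of B' = (A a) to B = (A a -a) per
    Lemma 3.5.1.  Elements of graver_Bp are pairs (u, z), where u is a
    tuple and z the integer entry of the column a.  Each z is split into
    all v - w = z with v and w of opposite (or zero) sign."""
    lifted = []
    for u, z in graver_Bp:
        s = 1 if z >= 0 else -1
        lifted += [(u, s * t, s * t - z) for t in range(abs(z) + 1)]
    zero_u = tuple(0 for _ in graver_Bp[0][0])
    lifted += [(zero_u, 1, 1), (zero_u, -1, -1)]
    return lifted
```

For (u, z) = ((1,), 2) this produces the three splits (0, −2), (1, −1), (2, 0) together with ±(0, 1, 1).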

Lemma 3.5.3 H∞ coincides with H_1 for all two-stage stochastic integer programs with simple recourse.

Proof. The Graver basis of A_N can be reconstructed (in particular, each building block of it can be reconstructed) from the Graver basis of the matrix

    A′_N := ( A  0  0  ...  0 )
            ( T  D  0  ...  0 )
            ( T  0  D  ...  0 )
            ( :  :  :       : )
            ( T  0  0  ...  D )

Thus, it suffices to compute the set H′∞ associated with this two-stage stochastic program. However, applying our decomposition approach to A′_N we notice that all occurring sets V_u are singletons with V_u = {D⁻¹(−Tu)}. Therefore, our completion algorithm to compute H′∞ is isomorphic to the completion algorithm to compute the Graver basis of

    ( A  0 )
    ( T  D )

via the bijection (u, {v}) ↔ (u, v). Hence, both algorithms lead to the same set of building blocks. Since the algorithm to reconstruct H∞ from H′∞ works entirely on the building block level, this already proves our claim. □

To reconstruct H∞ from H′∞ we replace every pair (u′, V_{u′}) ∈ H′∞ with u′ ≠ 0 by the pair (u, V_u) given by u = u′ and V_u := {(v, w) : v^(i)·w^(i) ≤ 0 for all i, v − w ∈ V_{u′}}. Furthermore, note that for u = 0 the set V_u consists of the building blocks (e_i, e_i) and their negatives. Herein, the e_i are the unit vectors of appropriate dimension.


3.5.2 LP case

Lemma 3.5.4 Let B = (A  a  −a) be a matrix with two columns that differ only in their respective signs. Then the LP Graver basis of B can be constructed from the LP Graver basis of B′ = (A  a) in the following way:

    G_LP(B) = {(u, v, 0) : (u, v) ∈ G_LP(B′)} ∪ {(u, 0, −v) : (u, v) ∈ G_LP(B′)} ∪ {±(0, 1, 1)}.

Proof. Since ±(0, 1, 1) are the only support-minimal elements in ker_{R^{n+2}}(B) \ {0} with u = 0, we may assume that u ≠ 0. Let (u, v, w) ∈ G_LP(B) be given. Because of ±(0, 1, 1) ∈ ker_{R^{n+2}}(B), (u, v, w) can have minimal support in ker_{R^{n+2}}(B) \ {0} only if v = 0 or w = 0. If w = 0, then (u, v, 0) ∈ G_LP(B) implies that there is no non-zero vector (u′, v′, w′) ∈ ker_{R^{n+2}}(B) with supp(u′, v′, w′) ⊊ supp(u, v, 0) and thus, in particular, none with w′ = 0. Therefore, (u, v) ∈ ker_{R^{n+1}}(B′) has inclusion-minimal support in ker_{R^{n+1}}(B′) \ {0}, which implies (u, v) ∈ G_LP(B′). Analogously, we conclude that (u, −w) ∈ G_LP(B′) in case of v = 0.

It remains to show that

    {(u, v, 0) : (u, v) ∈ G_LP(B′)} ∪ {(u, 0, −v) : (u, v) ∈ G_LP(B′)} ∪ {±(0, 1, 1)} ⊆ G_LP(B).

Clearly, ±(0, 1, 1) ∈ G_LP(B). Let (u, v) ∈ G_LP(B′), u ≠ 0. This means that there is no non-zero vector (u′, v′) ∈ ker_{R^{n+1}}(B′) with supp(u′, v′) ⊊ supp(u, v). This implies that there are no vectors (u′, v′, 0) and (u′, 0, −v′) with supp(u′, v′, 0) ⊊ supp(u, v, 0) and supp(u′, 0, −v′) ⊊ supp(u, 0, −v), respectively. Therefore, both vectors (u, v, 0) and (u, 0, −v) belong to G_LP(B). □

Corollary 3.5.5 Let B = (A  D  −D) be a matrix with a number of paired columns that differ only in their respective signs. Then the LP Graver basis of B can be constructed from the LP Graver basis of B′ = (A  D) in the following way:

    G_LP(B) = {(u, v, w) : (u, v − w) ∈ G_LP(B′), supp(v) ∩ supp(w) = ∅} ∪ {±(0, 1, 1)}.

Herein, v, w, and 1 are vectors whose dimensions equal the number of columns of D.

Proof. The proof is analogous to the proof of Lemma 3.5.4, since we can consider the pairs of columns in D and −D independently. □

Lemma 3.5.6 For two-stage stochastic linear programs with simple recourse H∞ coincides with H_1.

Proof. The proof follows exactly the same lines as the proof of Lemma 3.5.3. □

To construct H∞ from H′∞ we replace every pair (u′, V_{u′}) ∈ H′∞ with u′ ≠ 0 by (u, V_u), where u = u′ and V_u := {(v, w) : v − w ∈ V_{u′}, supp(v) ∩ supp(w) = ∅}. Furthermore, note that for u = 0 the set V_u consists of the building blocks (e_i, e_i) and their negatives.

3.6 Computations

Algorithm 3.3.1 as well as the initialization and augmentation procedures from Section 3.4 have been implemented for the integer case. The current version of that implementation can be obtained from [22]. In Chapter 7 we give a more detailed description of that implementation. To indicate the principal behaviour of our method, we report on test runs with an academic example. The algorithmic bottleneck of the method is the completion procedure in Algorithm 3.3.1. Therefore, the sizes of the matrices T and W are very moderate in this initial phase of testing. On the other hand, the method is fairly insensitive with respect to the number of scenarios.

Consider the two-stage program

    min { 35x_1 + 40x_2 + (1/N) Σ_{ν=1}^{N} (16y_1^ν + 19y_2^ν + 47y_3^ν + 54y_4^ν) :
          x_1 + y_1^ν + y_3^ν ≥ ξ_1^ν,
          x_2 + y_2^ν + y_4^ν ≥ ξ_2^ν,
          2y_1^ν + y_2^ν ≤ ξ_3^ν,
          y_1^ν + 2y_2^ν ≤ ξ_4^ν,
          x_1, x_2, y_1^ν, y_2^ν, y_3^ν, y_4^ν ∈ Z_+ }.

Here, the random vector ξ ∈ R^s is given by the scenarios ξ^1, ..., ξ^N, all with equal probability 1/N. The realizations of (ξ_1^ν, ξ_2^ν) and (ξ_3^ν, ξ_4^ν) are given by uniform grids (of differing granularity) in the squares [300, 500] × [300, 500] and [0, 2000] × [0, 2000], respectively. Timings are given in CPU seconds on a SUN Enterprise 450, 300 MHz Ultra-SPARC.

It took 3.3 seconds to compute H∞, altogether consisting of 1464 building blocks arranged into 25 pairs (u, V_u). The column Aug then gives the times needed to augment the solution x_1 = x_2 = y_1^ν = y_2^ν = 0, y_3^ν = ξ_1^ν, and y_4^ν = ξ_2^ν, ν = 1, ..., N, to optimality.

    (ξ1, ξ2)   (ξ3, ξ4)   scenarios   variables   optimum       Aug     CPLEX     dualdec
    5 × 5      3 × 3            225         902   (100, 150)    1.52     0.63     > 1800
    5 × 5      21 × 21        11025       44102   (100, 100)   66.37   696.10        --
    9 × 9      21 × 21        35721      142886   (108, 96)   180.63   > 1 day       --
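The scenario and variable counts in the table follow directly from the grid sizes: with a g × g grid for (ξ_1, ξ_2) and an h × h grid for (ξ_3, ξ_4) there are g²·h² scenarios, and the problem has 2 first-stage variables plus 4 second-stage variables per scenario. A quick Python check of all three table rows:

```python
def problem_size(grid12, grid34, n_first=2, n_second=4):
    """Scenario and variable counts for the academic example: scenarios are
    all combinations of the two uniform grids; each scenario contributes
    n_second second-stage variables."""
    scenarios = (grid12 ** 2) * (grid34 ** 2)
    variables = n_first + n_second * scenarios
    return scenarios, variables

# the three rows of the table
assert problem_size(5, 3) == (225, 902)
assert problem_size(5, 21) == (11025, 44102)
assert problem_size(9, 21) == (35721, 142886)
```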


Although further exploration is necessary, the above table seems to indicate linear dependence of the computing time on the number N of scenarios once H∞ has been computed. Existing methods in stochastic integer programming are much more sensitive to N. The implemented dual decomposition algorithm for two-stage stochastic mixed-integer programs [10] involves a non-smooth convex minimization in a space of dimension (N − 1)·m, where m denotes the dimension of the first-stage vector x. This explains the failure of the dual decomposition algorithm on the two bigger examples, as indicated in the column dualdec. On the other hand, the dual decomposition algorithm is a more general method in that it works for mixed-integer problems as well. Of course, it is possible to tackle the full-size integer linear program (4) by standard solvers, such as CPLEX, directly. Not surprisingly, this strategy comes to its limits if N is sufficiently large; see the last example in the above table.

3.7 Conclusions

We have presented a novel approach for the solution of two-stage (integer) linear stochastic programs based on test set methods. To this end, we constructed a finite object, H∞, which depends only on the matrices A, T, and W, and which allows an algorithmic solution to (4), page 13, for any number N of scenarios, any specific cost function, any specific right-hand side, and any initial feasible solution to the problem. To the best of our knowledge, the set H∞ so far has no counterpart in the existing stochastic programming literature.

Moreover, we have presented an algorithm to compute H∞. First computational tests indicate that, once the bottleneck of finding H∞ has been passed, the augmentation process is less sensitive to the number N of scenarios than existing methods are.

In the next chapter we will show how to extend the presented decomposition approach to the multi-stage extension of (4). Again, we define a set H∞ of building blocks not depending on the number of scenarios and give an algorithm which, upon termination, returns H∞. The building blocks in H∞ can again be employed to reconstruct improving vectors to a given feasible solution. In this more general setting, however, finiteness of H∞ in the integer situation is still an open question.


Chapter 4

Decomposition of Test Sets in Multi-Stage Stochastic Programming

In the following, we generalize the decomposition approach from Chapter 3 to the multi-stage situation. To this end, we will present the details for the 3-stage case only. The treatment below, however, should make clear how to define the decomposition, the notions, and the algorithms for higher multi-stage situations. This treatment and the proofs follow the same counting and construction arguments as in the two-stage situation. For an introduction to multi-stage stochastic programming we refer to [40] and the references therein.

The constraint matrix of the 3-stage extension of (4), page 13, has the following form:

    A_{N1,(N2,1,...,N2,N1)} := ( A1                                       )
                               ( T1'  A2                                  )
                               ( T1"  T2  W2                              )
                               (  :    :       .                          )
                               ( T1"  T2           W2                     )
                               (  :          .                            )
                               ( T1'                   A2                 )
                               ( T1"                   T2  W2             )
                               (  :                     :       .         )
                               ( T1"                   T2           W2    )



All remaining entries in the above matrix are 0, and there are N_1 blocks of the form

    ( A2              )
    ( T2  W2          )
    (  :       .      )
    ( T2          W2  )

The jth block contains N_{2,j} copies of T_2 and of W_2. For simplicity we assume that N_{2,1} = ... = N_{2,N_1} =: N_2. This can always be achieved by introducing additional scenarios (or scenario trees in the multi-stage case) with zero conditional probability. We remark that this symmetrization is done for ease of exposition only. The subsequent analysis done with A_{N1,N2} readily extends to A_{N1,(N2,1,...,N2,N1)}.

Therefore, we may assume that matrices of 3-stage stochastic programs have the following structure:

    A_{N1,N2} := ( A1   0    0   ...   0  )
                 ( T1   W1   0   ...   0  )
                 ( T1   0    W1  ...   0  )
                 (  :    :    :         : )
                 ( T1   0    0   ...   W1 )

where T1 = (T1', T1'', ..., T1'')^T and where W1 is the matrix of a two-stage stochastic program of the form

    W1 := ( A2   0    0   ...   0  )
          ( T2   W2   0   ...   0  )
          ( T2   0    W2  ...   0  )
          (  :    :    :         : )
          ( T2   0    0   ...   W2 )

The subscripts $N_i$, $i = 1, 2$, correspond to the numbers of $T_i$'s and $W_i$'s used. The matrices $A_1$, $A_2$, $T_1$, $T_2$, $W_2$ have integer entries and are of appropriate dimensions in order to fit into the above matrix structure. In the following let $u$, $v$, and $w$ denote variables corresponding to the first, second, and third stages, respectively. Moreover, let their dimensions be $n_1$, $n_2$, and $n_3$. Thus, the dimension of the full 3-stage problem is given by $d_{N_1,N_2} = n_1 + N_1 n_2 + N_1 N_2 n_3$.

Although it would be more convenient to use $u^{(1)}$, $u^{(2)}$, and $u^{(3)}$ for an inductive generalization of all notions to more than two stages, we prefer $u, v, w$ for better readability.
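To make the block structure and the dimension count concrete, here is a small sketch (our illustration, not code from the thesis; the toy blocks and the function names `two_stage_matrix` and `three_stage_matrix` are made up) that assembles $W_1$ from $A_2$, $T_2$, $W_2$ and then $A_{N_1,N_2}$ from $A_1$, $T_1$, $W_1$:

```python
def two_stage_matrix(A2, T2, W2, N2):
    """W1 = [[A2 0 ... 0], [T2 W2 ... 0], ..., [T2 0 ... W2]] (rows as lists)."""
    n3 = len(W2[0])
    rows = [row + [0] * (N2 * n3) for row in A2]          # (A2 | 0 ... 0)
    for j in range(N2):                                   # scenario block j
        for t_row, w_row in zip(T2, W2):
            rows.append(t_row + [0] * (j * n3) + w_row + [0] * ((N2 - 1 - j) * n3))
    return rows

def three_stage_matrix(A1, T1, W1, N1):
    """A_{N1,N2} = [[A1 0 ... 0], [T1 W1 ... 0], ..., [T1 0 ... W1]]."""
    nW = len(W1[0])
    rows = [row + [0] * (N1 * nW) for row in A1]
    for i in range(N1):
        for t_row, w_row in zip(T1, W1):
            rows.append(t_row + [0] * (i * nW) + w_row + [0] * ((N1 - 1 - i) * nW))
    return rows

# toy blocks with n1 = n2 = 2, n3 = 1, N1 = 3, N2 = 2
A2, T2, W2 = [[1, -1]], [[1, 0], [0, 1]], [[2], [3]]
W1 = two_stage_matrix(A2, T2, W2, N2=2)
A1 = [[1, 1]]
T1 = [[1, 0]] + [[0, 1]] * 4          # T1 = (T1', T1'', T1'') stacked row-wise
A = three_stage_matrix(A1, T1, W1, N1=3)
print(len(A), len(A[0]))              # 16 rows, 14 = n1 + N1*n2 + N1*N2*n3 columns
```

The assembled matrix has $n_1 + N_1 n_2 + N_1 N_2 n_3 = 2 + 3 \cdot 2 + 3 \cdot 2 \cdot 1 = 14$ columns, matching the dimension formula above.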


Chapter 4. Decomposition of Test Sets in Multi-Stage Stochastic Programming

4.1 Building Blocks of Graver Test Sets

As in Chapter 3 we present the decomposition approach for the pure integer case. However, the same decomposition ideas readily extend to the continuous and the mixed-integer situations. Again, a simple observation is the basis for the decomposition of test set vectors.

Lemma 4.1.1 The vector $z = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2})$ belongs to $\ker_{\mathbb{Z}^{d_{N_1,N_2}}}(A_{N_1,N_2})$ if and only if $(u, v_i, w_{i,j}) \in \ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ for $i = 1, \dots, N_1$, $j = 1, \dots, N_2$.

Proof. The claim follows immediately if we write $0 = A_{N_1,N_2} z$ explicitly. $\square$

Definition 4.1.2 (Building blocks)
Let $z = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2}) \in \ker_{\mathbb{Z}^{d_{N_1,N_2}}}(A_{N_1,N_2})$ and call the vectors $u, v_1, \dots, v_{N_1}, w_{1,1}, \dots, w_{N_1,N_2}$ the building blocks of $z$. Denote by $G_{N_1,N_2}$ the Graver test set associated with $A_{N_1,N_2}$ and collect into $H_{N_1,N_2}$ all those vectors arising as building blocks of some $z \in G_{N_1,N_2}$.

By $H_\infty$ denote the set $\bigcup_{N_1=1}^{\infty} \bigcup_{N_2=1}^{\infty} H_{N_1,N_2}$.

The set $H_\infty$ contains vectors $u$, $v$, $w$ associated with the first, second, and third stages, respectively. Again, for convenience we will arrange the vectors in $H_\infty$ into a certain structure: for fixed $u, v \in H_\infty$ collect into $W_{u,v}$ all those vectors $w \in H_\infty$ which satisfy $(u, v, w) \in \ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$. Now, for fixed $u \in H_\infty$, collect into $V_u$ all those pairs $(v, W_{u,v})$ such that $v \in H_\infty$ and $W_{u,v} \neq \emptyset$.

Therefore, we will again arrange the building blocks of $H_\infty$ into pairs $(u, V_u)$, but now each element of $V_u$ is not a vector but a pair $(v, W_{u,v})$. In what follows we will extend this arrangement into pairs to arbitrary sets of building blocks of appropriate dimensions, not necessarily belonging to $H_\infty$.

Again, the set $H_\infty$ is of particular interest since, by definition, it contains all building blocks of Graver test set vectors for $A_{N_1,N_2}$ for arbitrary numbers $N_1$ and $N_2$. Thus, $H_\infty$ does not depend on $N_1$ and $N_2$.

4.2 Computation of $H_\infty$

Assuming for the moment that $H_\infty$ is finite also in the 3-stage situation, let us present the necessary notions for the Completion Algorithm 4.2.1 below to compute this set.


We will prove that this algorithm, upon termination, returns a set containing $H_\infty$ in both the LP and the IP cases. Employing a simple counting argument we will show termination of this completion algorithm in the LP situation. Correctness and termination of the algorithm imply finiteness of $H_\infty$ for all 3-stage stochastic linear programs. Moreover, this counting argument readily extends to higher multi-stage stochastic linear programs, proving finiteness of the analogous sets $H_\infty$ in these more general cases.

For stochastic integer programs this finiteness question is still unanswered already for the 3-stage case. In order to prove finiteness of $H_\infty$ and termination of its computation for 3-stage or even higher multi-stage stochastic integer programs, generalizations of Maclagan's Theorem, Theorem 3.2.1, are needed. We will deal with these generalizations in Chapter 6.

To compute $H_\infty$ in the 3-stage case we use the same algorithmic pattern as for Graver test set computations and the computation of $H_\infty$ for the two-stage case. Thus, we have to define the main ingredients of this completion algorithm: the input set, normalForm, $\oplus$, and $\ominus$.

Note that the objects $f$, $g$, and $s$ occurring in the following algorithm have the form $(u, V_u)$, wherein $V_u$ is a set of pairs $(v, W_{u,v})$.

Algorithm 4.2.1 (Algorithm to compute $H_\infty$)

Input: a symmetric generating set $F$ of $\ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{Z}$ or of $\ker_{\mathbb{R}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{R}$, in $(u, V_u)$-notation to be specified below

Output: a set $G \supseteq H_\infty$

$G := F$
$C := \bigcup_{f, g \in G} \{f \oplus g\}$ (forming S-vectors)
while $C \neq \emptyset$ do
    $s :=$ an element in $C$
    $C := C \setminus \{s\}$
    $f :=$ normalForm$(s, G)$
    if $f \neq (0, \{(0, \{0\})\})$ then
        $C := C \cup \bigcup_{g \in G \cup \{f\}} \{f \oplus g\}$ (adding S-vectors)
        $G := G \cup \{f\}$
return $G$.
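The pair operations $\oplus$ and $\ominus$ above are specific to the building-block notation, but the completion pattern itself is the classical one used for plain Graver test set computations (Section 1.3). As an illustration of that pattern only (our sketch, not the thesis's implementation; the function names and the example matrix $(1\ 2\ 3)$ are our own), the following code completes a symmetric generating set of $\ker_{\mathbb{Z}}((1\ 2\ 3))$, where $g \sqsubseteq s$ means sign-compatible and componentwise no larger in absolute value:

```python
def reduces(g, s):
    """g ⊑ s: sign-compatible and |g_i| <= |s_i| in every component."""
    return all(gi * si >= 0 and abs(gi) <= abs(si) for gi, si in zip(g, s))

def normal_form(s, G):
    """Subtract reducers g ⊑ s until none applies (pattern of Algorithm 4.2.2)."""
    reduced = True
    while reduced:
        reduced = False
        for g in G:
            if any(g) and reduces(g, s):
                s = tuple(si - gi for si, gi in zip(s, g))
                reduced = True
                break
    return s

def complete(F):
    """Completion pattern of Algorithm 4.2.1 on plain vectors: returns a
    set of kernel vectors containing the Graver test set."""
    G = list(F)
    C = [tuple(fi + gi for fi, gi in zip(f, g)) for f in G for g in G]
    while C:
        s = C.pop()
        f = normal_form(s, G)
        if any(f):  # f != 0: keep it and form new S-vectors
            C.extend(tuple(fi + gi for fi, gi in zip(f, g)) for g in G + [f])
            G.append(f)
    return set(G)

# symmetric lattice generating set of ker_Z((1 2 3))
F = [(-2, 1, 0), (2, -1, 0), (-3, 0, 1), (3, 0, -1)]
G = complete(F)
print(sorted(G))
```

Among other vectors, the run discovers $(1, 1, -1)$, a Graver element not present in the input set.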


Behind the function normalForm(s;G) there is again the following algorithm.

Algorithm 4.2.2 (Normal form algorithm)

Input: a pair $s$, a set $G$ of pairs
Output: a normal form of $s$ with respect to $G$

while there is some $g \in G$ such that $g \sqsubseteq s$ do
    $s := s \ominus g$
return $s$

In the following we define the necessary notions for the above completion algorithm in the integer and continuous cases.

Lemma 4.2.3 Let $F$ be a symmetric generating set for $\ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{Z}$ which contains both a generating set for $\{(0, v, w) : T_2 v + W_2 w = 0\} \subseteq \ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{Z}$ consisting only of vectors with zero as first-stage component and a generating set for $\{(0, 0, w) : W_2 w = 0\} \subseteq \ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{Z}$ consisting only of vectors with zero as first- and as second-stage components.

Moreover, let $F_{N_1,N_2}$ be the set of all those vectors in $\ker_{\mathbb{Z}^{d_{N_1,N_2}}}(A_{N_1,N_2})$ whose building blocks are also building blocks of vectors in $F \cup \{0\}$.

Then for any $N_1, N_2$ the vectors in $F_{N_1,N_2}$ generate $\ker_{\mathbb{Z}^{d_{N_1,N_2}}}(A_{N_1,N_2})$ over $\mathbb{Z}$. The same result holds for the continuous kernel $\ker_{\mathbb{R}^{d_{N_1,N_2}}}(A_{N_1,N_2})$.

Proof. Let $z = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2}) \in \ker_{\mathbb{Z}^{d_{N_1,N_2}}}(A_{N_1,N_2})$. We have to show that $z$ can be written as a linear combination of elements from $F_{N_1,N_2}$.

Since $(u, v_1, w_{1,1}) \in \ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ there has to exist a finite integer linear combination $(u, v_1, w_{1,1}) = \sum_i \alpha_i (\bar u_i, \bar v_i, \bar w_i)$ with $(\bar u_i, \bar v_i, \bar w_i) \in F$. Hence
$$(u, v_1, w_{1,1}, \dots, w_{1,1}, \dots, v_1, w_{1,1}, \dots, w_{1,1}) = \sum_i \alpha_i (\bar u_i, \bar v_i, \bar w_i, \dots, \bar w_i, \dots, \bar v_i, \bar w_i, \dots, \bar w_i)$$
with $(\bar u_i, \bar v_i, \bar w_i, \dots, \bar w_i, \dots, \bar v_i, \bar w_i, \dots, \bar w_i) \in F_{N_1,N_2}$.


Now we have
$$z = (u, v_1, w_{1,1}, \dots, w_{1,1}, \dots, v_1, w_{1,1}, \dots, w_{1,1}) + \sum_{j=1}^{N_1} (0, 0, 0, \dots, 0, \dots, v_j - v_1, w_{j,1} - w_{1,1}, \dots, w_{j,N_2} - w_{1,1}, \dots, 0, 0, \dots, 0)$$
$$= \sum_i \alpha_i (\bar u_i, \bar v_i, \bar w_i, \dots, \bar w_i, \dots, \bar v_i, \bar w_i, \dots, \bar w_i) + \sum_{j=1}^{N_1} (0, 0, 0, \dots, 0, \dots, v_j - v_1, w_{j,1} - w_{1,1}, \dots, w_{j,N_2} - w_{1,1}, \dots, 0, 0, \dots, 0).$$

Note that $(0, 0, 0, \dots, 0, \dots, v_j - v_1, w_{j,1} - w_{1,1}, \dots, w_{j,N_2} - w_{1,1}, \dots, 0, 0, \dots, 0) \in \ker_{\mathbb{Z}^{d_{N_1,N_2}}}(A_{N_1,N_2})$ and $(v_j - v_1, w_{j,1} - w_{1,1}, \dots, w_{j,N_2} - w_{1,1}) \in \ker_{\mathbb{Z}^{n_2 + N_2 n_3}}(W_1)$. But in the two-stage case (cf. Lemma 3.3.3) we have already seen that we can write $(v_j - v_1, w_{j,1} - w_{1,1}, \dots, w_{j,N_2} - w_{1,1}) \in \ker_{\mathbb{Z}^{n_2 + N_2 n_3}}(W_1)$ as a finite integer linear combination of the given set of vectors. Therefore, we have proved that $z$ can be written as a finite integer linear combination of vectors from $F_{N_1,N_2}$.

An analogous argumentation shows the same result for the continuous case. $\square$

Before we define the input set, the sum $\oplus$, and the difference $\ominus$ for the IP and the LP cases let us introduce the following notation.

Definition 4.2.4 For $\lambda \in \mathbb{R}$ let
$$(u, V_u) + \lambda (u', V_{u'}) := (u + \lambda u',\ V_u + \lambda V_{u'}),$$
where
$$V_u + \lambda V_{u'} := \{V + \lambda V' : V \in V_u,\ V' \in V_{u'}\}$$
with $V + \lambda V'$ as in the 2-stage situation, Definition 3.3.4.

4.2.1 IP case

Lemma 4.2.3 suggests the following input set.

Definition 4.2.5 Let $F$ be a symmetric generating set for $\ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{Z}$ which contains both a generating set for $\{(0, v, w) : T_2 v + W_2 w = 0\} \subseteq \ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{Z}$ consisting only of vectors with zero as first-stage component and a generating set for $\{(0, 0, w) : W_2 w = 0\} \subseteq \ker_{\mathbb{Z}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{Z}$ consisting only of vectors with zero as first- and as second-stage components.


With this, define the building blocks of all vectors in $F \cup \{0\}$ in $(u, V_u)$-notation to be the input set to Algorithm 4.2.1 for the integer case.

It remains to define $(u, V_u) \oplus (u', V_{u'})$ and $(u, V_u) \ominus (u', V_{u'})$.

Definition 4.2.6 Let $(u, V_u) \oplus (u', V_{u'}) := (u, V_u) + (u', V_{u'})$. Moreover, we say that $(u', V_{u'})$ reduces $(u, V_u)$, or $(u', V_{u'}) \sqsubseteq (u, V_u)$ for short, if the following conditions are satisfied:

1. $u' \sqsubseteq u$

2. For every $(v, W_{u,v}) \in V_u$ there exists $(v', W_{u',v'}) \in V_{u'}$ with

- $v' \sqsubseteq v$, and
- for all $w \in W_{u,v}$ there is some vector $w' \in W_{u',v'}$ with $w' \sqsubseteq w$.

3. If we have $u' = 0$ and $v' = 0$ in the condition above, then there are vectors $w \in W_{u,v}$ and $w' \in W_{u',v'}$ with $0 \neq w' \sqsubseteq w$.

If $(u', V_{u'}) \sqsubseteq (u, V_u)$, let $(u, V_u)$ reduce to
$$(u, V_u) \ominus (u', V_{u'}) := (u - u',\ \{V \ominus V' : V \in V_u,\ V' \in V_{u'},\ V' \sqsubseteq V\}),$$
where $V \ominus V'$ is defined as in the 2-stage case, Definition 3.3.6.

Again, the third condition above is necessary to avoid the situation that $u'$ and all occurring $v'$ and $w'$ are zero, which forms a trivial zero reduction.

Remark. In $(u, V_u) \ominus (u', V_{u'})$ it suffices again to collect only one difference $V \ominus V'$ for every $V \in V_u$. In addition, it suffices to take only one difference $w - w'$ when computing $V \ominus V'$ for every $w \in V$. As can be seen from the proof of the following proposition, Algorithm 4.2.1 still works correctly if we defined $(u, V_u) \ominus (u', V_{u'})$ in this way. $\square$

Proposition 4.2.7 If the input set, the procedure normalForm, $f \oplus g$, and $s \ominus g$ are defined as above for the IP case, then Algorithm 4.2.1, upon termination, returns a set of vectors that contains $H_\infty$.

Proof. Suppose that Algorithm 4.2.1 terminates.


To show that $H_\infty \subseteq G$, we have to prove that $H_{N_1,N_2} \subseteq G$ for arbitrary $N_1, N_2 \in \mathbb{Z}_+$. Fix $N_1, N_2$ and start a Graver test set computation (see Section 1.3) with
$$\bar F := \{(u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2}) : (u, V_u) \in G,\ (v_i, W_{u,v_i}) \in V_u,\ w_{i,j} \in W_{u,v_i},\ i = 1, \dots, N_1,\ j = 1, \dots, N_2\}$$
as input set. $\bar F$ generates $\ker_{\mathbb{Z}^{d_{N_1,N_2}}}(A_{N_1,N_2})$ over $\mathbb{Z}$ for all $N_1, N_2 \in \mathbb{Z}_+$, since, by the assumption on the input set to Algorithm 4.2.1, $F_{N_1,N_2} \subseteq \bar F$.

We show next that all sums $z + z' \in \text{S-vectors}(z, z')$ of two elements $z, z' \in \bar F$ reduce to 0 with respect to $\bar F$. In this case, Algorithm 1.3.1 returns the input set $\bar F$, which implies $G_{N_1,N_2} \subseteq \bar F$. Therefore, $H_{N_1,N_2} \subseteq G$ as desired.

Take two arbitrary elements
$$z = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2})$$
and
$$z' = (u', v_1', w_{1,1}', \dots, w_{1,N_2}', \dots, v_{N_1}', w_{N_1,1}', \dots, w_{N_1,N_2}')$$
from $\bar F$ and consider the vector $z + z' \in \text{S-vectors}(z, z')$.

In the above algorithm, $(u, V_u) \oplus (u', V_{u'})$ was reduced to $(0, \{(0, \{0\})\})$ by elements $(u_1, V_{u_1}), \dots, (u_k, V_{u_k}) \in G$. From this sequence we can construct a sequence $z_1, \dots, z_k$ of vectors in $\bar F$ which reduce $z + z'$ to zero as follows.

$(u_1, V_{u_1}) \sqsubseteq (u, V_u) \oplus (u', V_{u'})$ implies that $u_1 \sqsubseteq u + u'$ and that there exist pairs $(v_{1,1}, W_{u_1,v_{1,1}}), \dots, (v_{1,N_1}, W_{u_1,v_{1,N_1}}) \in V_{u_1}$ and $w_{1,j,1}, \dots, w_{1,j,N_2} \in W_{u_1,v_{1,j}}$ such that

- $v_{1,i} \sqsubseteq v_i + v_i'$ for $i = 1, \dots, N_1$, and
- $w_{1,j,k} \sqsubseteq w_{j,k} + w_{j,k}'$ for $j = 1, \dots, N_1$, $k = 1, \dots, N_2$.

Therefore, $z_1 := (u_1, v_{1,1}, w_{1,1,1}, \dots, w_{1,1,N_2}, \dots, v_{1,N_1}, w_{1,N_1,1}, \dots, w_{1,N_1,N_2}) \sqsubseteq z + z'$ and $z + z'$ can be reduced to $z + z' - z_1$. Moreover, $z_1 \in \bar F$ and all the building blocks of $z + z' - z_1$ lie in $((u, V_u) \oplus (u', V_{u'})) \ominus (u_1, V_{u_1})$.

But $((u, V_u) \oplus (u', V_{u'})) \ominus (u_1, V_{u_1})$ was further reduced to $(0, \{(0, \{0\})\})$ by the pairs $(u_2, V_{u_2}), \dots, (u_k, V_{u_k}) \in G$. Therefore, we can construct from $(u_2, V_{u_2})$ a vector $z_2 \in \bar F$ with $z_2 \sqsubseteq z + z' - z_1$. Thus, $z + z' - z_1$ can be further reduced to $z + z' - z_1 - z_2$.

Repeating this construction, we arrive in the $k$th step at $z + z' - z_1 - \dots - z_{k-1}$ whose building blocks all lie in $((u, V_u) \oplus (u', V_{u'})) \ominus (u_1, V_{u_1}) \ominus \dots \ominus (u_{k-1}, V_{u_{k-1}})$. The latter can be reduced to $(0, \{(0, \{0\})\})$ by the pair $(u_k, V_{u_k}) \in G$. Therefore, there exists a vector $z_k \in \bar F$ such that $z_k \sqsubseteq z + z' - z_1 - \dots - z_{k-1}$ and $0 = z + z' - z_1 - \dots - z_k$. $\square$


4.2.2 LP case

Lemma 4.2.3 suggests the following input set.

Definition 4.2.8 Let $F$ be a symmetric generating set for $\ker_{\mathbb{R}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{R}$ which contains both a generating set for $\{(0, v, w) : T_2 v + W_2 w = 0\} \subseteq \ker_{\mathbb{R}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{R}$ consisting only of vectors with zero as first-stage component and a generating set for $\{(0, 0, w) : W_2 w = 0\} \subseteq \ker_{\mathbb{R}^{d_{1,1}}}(A_{1,1})$ over $\mathbb{R}$ consisting only of vectors with zero as first- and as second-stage components.

With this, define the building blocks of all vectors in $F \cup \{0\}$ in $(u, V_u)$-notation to be the input set to Algorithm 4.2.1 for the continuous case.

For the computation of $H_\infty$ for 3-stage stochastic linear programs we define $\oplus$ and $\ominus$ as follows.

Definition 4.2.9 For given $(u, V_u)$, $(u', V_{u'})$, and all $\lambda \in \mathbb{R}$ such that

1. $u + \lambda u'$ contains a zero entry at some component $i$ at which neither $u$ nor $u'$ has a zero entry, that is, $(u + \lambda u')^{(i)} = 0$ but $u^{(i)} (u')^{(i)} \neq 0$, or

2. for some vectors $v \in V_u$, $v' \in V_{u'}$ the vector $v + \lambda v'$ contains a zero entry at some component $i$ at which neither $v$ nor $v'$ has a zero entry, that is, $(v + \lambda v')^{(i)} = 0$ but $v^{(i)} (v')^{(i)} \neq 0$, or

3. for some vectors $w \in W_{u,v}$, $w' \in W_{u',v'}$ the vector $w + \lambda w'$ contains a zero entry at some component $i$ at which neither $w$ nor $w'$ has a zero entry, that is, $(w + \lambda w')^{(i)} = 0$ but $w^{(i)} (w')^{(i)} \neq 0$,

define $(u, V_u) \oplus (u', V_{u'})$ to be the set
$$\{(u, V_u) + \lambda (u', V_{u'}) : \lambda \in \mathbb{R} \text{ satisfying one of the conditions 1, 2, 3 above}\}.$$

Definition 4.2.10 We say that $(u', V_{u'})$ reduces $(u, V_u)$, or $(u', V_{u'}) \sqsubseteq (u, V_u)$ for short, if the following conditions are satisfied:

1. $\operatorname{supp}(u') \subseteq \operatorname{supp}(u)$

2. For every $(v, W_{u,v}) \in V_u$ there exists $(v', W_{u',v'}) \in V_{u'}$ with

- $\operatorname{supp}(v') \subseteq \operatorname{supp}(v)$, and
- for all $w \in W_{u,v}$ there is some $w' \in W_{u',v'}$ with $\operatorname{supp}(w') \subseteq \operatorname{supp}(w)$.

3. If we have $u' = 0$ and $v' = 0$ in the condition above, then there are vectors $w \in W_{u,v}$ and $w' \in W_{u',v'}$ with $\emptyset \neq \operatorname{supp}(w') \subseteq \operatorname{supp}(w)$.

In case of reduction, $(u, V_u)$ reduces to $(u, V_u) \ominus \lambda(u', V_{u'})$, which is defined to be the pair
$$(u - \lambda u',\ \{(v - \lambda v',\ \{w - \lambda w' : w \in W_{u,v},\ w' \in W_{u',v'}\}) : v \in V_u,\ v' \in V_{u'}\}),$$
where all $v'$ and $w'$ satisfy the three conditions above, and where $\lambda$ takes one of the finitely many values such that $\operatorname{supp}(u - \lambda u') \subsetneq \operatorname{supp}(u)$, $\operatorname{supp}(v - \lambda v') \subsetneq \operatorname{supp}(v)$, or $\operatorname{supp}(w - \lambda w') \subsetneq \operatorname{supp}(w)$ for at least one pair of vectors.

Again, the third condition in the definition above is necessary to avoid the situation that $u'$ and all occurring $v'$ and $w'$ are zero, which forms a trivial zero reduction.

Proposition 4.2.11 If the input set, the procedure normalForm, $f \oplus g$, and $s \ominus g$ are defined as above for the LP case, then Algorithm 4.2.1 terminates and returns a set of vectors containing $H_\infty$.

Proof. In the following we show that $G \setminus F$ contains at most $2^{n_1} 2^{2^{n_2} 2^{2^{n_3}}}$ elements, and so the algorithm always terminates.

Define $(v, W_{u,v})$ and $(v', W_{u',v'})$ to be equivalent as in the two-stage case (see proof of Lemma 3.2.13). This defines an equivalence relation with $2^{n_2} 2^{2^{n_3}}$ equivalence classes. Define $V_u$ to be equivalent to $V_{u'}$ if for each $(v, W_{u,v}) \in V_u$ there is some $(v', W_{u',v'}) \in V_{u'}$ equivalent to $(v, W_{u,v})$. This defines an equivalence relation with $2^{2^{n_2} 2^{2^{n_3}}}$ equivalence classes. Moreover, define $(u, V_u)$ and $(u', V_{u'})$ to be equivalent if $\operatorname{supp}(u) = \operatorname{supp}(u')$ and $V_u$ is equivalent to $V_{u'}$. This defines an equivalence relation with $2^{n_1} 2^{2^{n_2} 2^{2^{n_3}}}$ equivalence classes.

Moreover, if $(u, V_u)$ and $(u', V_{u'})$ are equivalent, it holds that $(u, V_u) \sqsubseteq (u', V_{u'})$ and $(u', V_{u'}) \sqsubseteq (u, V_u)$. Therefore, there are no two equivalent pairs $(u, V_u)$ and $(u', V_{u'})$ in $G \setminus F$, because if $(u, V_u)$ was added to $G$ first then $(u', V_{u'})$ could have been reduced by $(u, V_u)$ before being added to $G$. This means that $G \setminus F$ contains at most $2^{n_1} 2^{2^{n_2} 2^{2^{n_3}}}$ different pairs $(u, V_u)$ and the algorithm terminates.
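For intuition on the size of this bound, note that already in the smallest case $n_1 = n_2 = n_3 = 1$, i.e. one variable per stage, it evaluates to
$$2^{n_1}\, 2^{2^{n_2} 2^{2^{n_3}}} = 2 \cdot 2^{2 \cdot 2^{2}} = 2 \cdot 2^{8} = 512,$$
so the counting argument guarantees termination only after at most 512 pairs have been added; the bound is doubly exponential in the later-stage dimensions.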

To show correctness, that is $H_\infty \subseteq G$, we have to prove that $H_{N_1,N_2} \subseteq G$ for arbitrary $N_1, N_2 \in \mathbb{Z}_+$. Fix $N_1, N_2$ and start a Graver test set computation (see Section 1.3) with
$$\bar F := \{(u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2}) : (u, V_u) \in G,\ (v_i, W_{u,v_i}) \in V_u,\ w_{i,j} \in W_{u,v_i},\ i = 1, \dots, N_1,\ j = 1, \dots, N_2\}$$
as input set. $\bar F$ generates $\ker_{\mathbb{R}^{d_{N_1,N_2}}}(A_{N_1,N_2})$ over $\mathbb{R}$ for all $N_1, N_2 \in \mathbb{Z}_+$, since, by the assumption on the input set to Algorithm 4.2.1, $F_{N_1,N_2} \subseteq \bar F$.

If all S-vectors $z + \lambda z' \in \text{S-vectors}(z, z')$ of two elements $z, z' \in \bar F$ can be written as a linear combination of elements of $\bar F$ whose support is contained in $\operatorname{supp}(z + \lambda z')$, then the Graver basis $G_{N_1,N_2}$ is contained in $\bar F$ (see Remark 1.3.5). Since $N_1, N_2$ were chosen arbitrarily, $G_{N_1,N_2} \subseteq \bar F$ will imply $H_{N_1,N_2} \subseteq G$ for all $N_1, N_2 \in \mathbb{Z}_+$ and we are done.

Take two arbitrary elements
$$z = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2})$$
and
$$z' = (u', v_1', w_{1,1}', \dots, w_{1,N_2}', \dots, v_{N_1}', w_{N_1,1}', \dots, w_{N_1,N_2}')$$
from $\bar F$ and any $\lambda \in \mathbb{R}$ such that $z + \lambda z' \in \text{S-vectors}(z, z')$.

In the above algorithm, $(u, V_u) + \lambda(u', V_{u'}) \in (u, V_u) \oplus (u', V_{u'})$ was successively reduced to $(0, \{(0, \{0\})\})$ by elements $(u_1, V_{u_1}), \dots, (u_k, V_{u_k}) \in G$. From this sequence we will now construct a sequence $z_1, \dots, z_k$ of vectors in $\bar F$ such that $z + \lambda z' = \sum \lambda_i z_i$ and $\operatorname{supp}(z_i) \subseteq \operatorname{supp}(z + \lambda z')$ as desired.

$(u_1, V_{u_1})$ reduces $(u, V_u) + \lambda(u', V_{u'})$ to the pair $((u, V_u) + \lambda(u', V_{u'})) \ominus \lambda_1(u_1, V_{u_1})$. But $(u_1, V_{u_1}) \sqsubseteq (u, V_u) + \lambda(u', V_{u'})$ implies that $\operatorname{supp}(u_1) \subseteq \operatorname{supp}(u + \lambda u')$ and that there exist $(v_{1,1}, W_{u_1,v_{1,1}}), \dots, (v_{1,N_1}, W_{u_1,v_{1,N_1}}) \in V_{u_1}$ and $w_{1,j,1}, \dots, w_{1,j,N_2} \in W_{u_1,v_{1,j}}$, $j = 1, \dots, N_1$, such that

- $\operatorname{supp}(v_{1,i}) \subseteq \operatorname{supp}(v_i + \lambda v_i')$ for $i = 1, \dots, N_1$, and
- $\operatorname{supp}(w_{1,j,k}) \subseteq \operatorname{supp}(w_{j,k} + \lambda w_{j,k}')$ for $j = 1, \dots, N_1$, $k = 1, \dots, N_2$.

Thus, we have $z_1 := (u_1, v_{1,1}, w_{1,1,1}, \dots, w_{1,1,N_2}, \dots, v_{1,N_1}, w_{1,N_1,1}, \dots, w_{1,N_1,N_2}) \in \bar F$, $\operatorname{supp}(z_1) \subseteq \operatorname{supp}(z + \lambda z')$, and all the building blocks of $z + \lambda z' - \lambda_1 z_1$ lie in the pair $((u, V_u) + \lambda(u', V_{u'})) \ominus \lambda_1(u_1, V_{u_1})$.

But $((u, V_u) + \lambda(u', V_{u'})) \ominus \lambda_1(u_1, V_{u_1})$ was further reduced by the pair $(u_2, V_{u_2}) \in G$ to $((u, V_u) + \lambda(u', V_{u'})) \ominus \lambda_1(u_1, V_{u_1}) \ominus \lambda_2(u_2, V_{u_2})$. Thus, we can construct from $(u_2, V_{u_2})$ a vector $z_2 \in \bar F$ with $\operatorname{supp}(z_2) \subseteq \operatorname{supp}(z + \lambda z' - \lambda_1 z_1)$ where, again, all building blocks of $z + \lambda z' - \lambda_1 z_1 - \lambda_2 z_2$ lie in $((u, V_u) + \lambda(u', V_{u'})) \ominus \lambda_1(u_1, V_{u_1}) \ominus \lambda_2(u_2, V_{u_2})$.

Repeating this construction, we arrive in the $k$th step at $z + \lambda z' - \sum_{i=1}^{k-1} \lambda_i z_i$ whose building blocks all lie in $((u, V_u) + \lambda(u', V_{u'})) \ominus \lambda_1(u_1, V_{u_1}) \ominus \dots \ominus \lambda_{k-1}(u_{k-1}, V_{u_{k-1}})$. The latter can be reduced to $(0, \{(0, \{0\})\})$ by the pair $(u_k, V_{u_k}) \in G$. Therefore, there


exists a vector $z_k \in \bar F$ such that $\operatorname{supp}(z_k) \subseteq \operatorname{supp}(z + \lambda z' - \lambda_1 z_1 - \dots - \lambda_{k-1} z_{k-1})$ and $0 = z + \lambda z' - \sum_{i=1}^{k} \lambda_i z_i$.

Thus, $z + \lambda z' = \sum_{i=1}^{k} \lambda_i z_i$ with $\operatorname{supp}(z_i) \subseteq \operatorname{supp}(z + \lambda z')$ for $i = 1, \dots, k$, which completes the proof. $\square$

Note that termination and correctness of the above algorithm again imply finiteness of $H_\infty$. Therefore, the above proves the following theorem.

Theorem 4.2.12 Let $A_1$, $A_2$, $T_1$, $T_2$, and $W_2$ be integer matrices of appropriate dimensions, and let $H_\infty$ be defined for 3-stage stochastic linear programs as in Definition 4.1.2. Then $H_\infty$ is a finite set.

Although we have presented the generalization of our decomposition approach for the 3-stage situation only, the above presentation of decomposition, notions, algorithms, and proofs readily extends to linear multi-stage stochastic programs. This gives the following result.

Theorem 4.2.13 The set $H_\infty$ is a finite set for all multi-stage stochastic linear programs.

4.3 Solving the Optimization Problem with the Help of $H_\infty$

As we have seen in Section 2.6, universal test sets can help to find an initial feasible solution to our optimization problem, and to augment it to optimality. However, the universal test set is again not given explicitly. By Remark 2.6.4, we can concentrate on finding improving directions. In the following, we will see how these directions can be reconstructed from $H_\infty$ in the 3-stage situation.

4.3.1 IP case

Suppose we are given $H_\infty$, a cost function vector
$$c = (c_u, c_{v,1}, c_{w,1,1}, \dots, c_{w,1,N_2}, \dots, c_{v,N_1}, c_{w,N_1,1}, \dots, c_{w,N_1,N_2}),$$
and a feasible solution $z_0 = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2})$.

Lemma 4.3.1 Suppose that there exists no pair $(u', V_{u'}) \in H_\infty$ such that


1. $u' \le u$,

2. for $i = 1, \dots, N_1$, there exist $(\bar v_i, W_{u',\bar v_i}) \in V_{u'}$ and $\bar w_{i,1}, \dots, \bar w_{i,N_2} \in W_{u',\bar v_i}$ with $\bar v_i \le v_i$ and $\bar w_{i,j} \le w_{i,j}$ for $j = 1, \dots, N_2$,

3. $c^\top z' > 0$, where $z' = (u', v_1', w_{1,1}', \dots, w_{1,N_2}', \dots, v_{N_1}', w_{N_1,1}', \dots, w_{N_1,N_2}')$ and for $i = 1, \dots, N_1$ the vector $(v_i', w_{i,1}', \dots, w_{i,N_2}')$ is an optimal solution of
$$\max\Bigl\{c_{v,i}^\top \bar v_i + \sum_{j=1}^{N_2} c_{w,i,j}^\top \bar w_{i,j} \;:\; (\bar v_i, W_{u',\bar v_i}) \in V_{u'},\ \bar v_i \le v_i,\ \bar w_{i,j} \in W_{u',\bar v_i},\ \bar w_{i,j} \le w_{i,j},\ j = 1, \dots, N_2\Bigr\}.$$

Then $z_0 = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2})$ is an optimal solution of the problem $\min\{c^\top z : A_{N_1,N_2} z = b,\ z \in \mathbb{Z}_+^{d_{N_1,N_2}}\}$.

If there exists such a pair $(u', V_{u'}) \in H_\infty$ then $z_0 - z'$ is a feasible solution with $c^\top(z_0 - z') < c^\top z_0$.

Proof. Suppose that $z_0$ is not optimal. Then there is some vector
$$z'' = (u'', v_1'', w_{1,1}'', \dots, w_{1,N_2}'', \dots, v_{N_1}'', w_{N_1,1}'', \dots, w_{N_1,N_2}'') \in G_{N_1,N_2}$$
such that $z_0 - z''$ is feasible and $c^\top(z_0 - z'') < c^\top z_0$. Feasibility of $z_0 - z''$ implies $z_0 - z'' \ge 0$, hence $z'' \le z_0$. Therefore, we have $u'' \le u$, $v_i'' \le v_i$, $i = 1, \dots, N_1$, and $w_{j,k}'' \le w_{j,k}$, $j = 1, \dots, N_1$, $k = 1, \dots, N_2$, the latter implying that for any $i = 1, \dots, N_1$, $j = 1, \dots, N_2$, there exist vectors $\bar v_i$ and $\bar w_{i,j}$ as required in the second condition of Lemma 4.3.1.

Choose $u' = u''$ and let $z'$ be as defined in the third condition. Now $c^\top(z_0 - z'') < c^\top z_0$ implies that $c^\top z'' > 0$. Moreover, $c^\top z' \ge c^\top z'' > 0$. In conclusion, the pair $(u', V_{u'})$ fulfills conditions 1.-3., proving the first claim of the lemma.

With $z'$ according to condition 3. we obtain $c^\top(z_0 - z') < c^\top z_0$. Moreover, we have $u' \le u$, $v_i' \le v_i$, $i = 1, \dots, N_1$, and $w_{j,k}' \le w_{j,k}$, $j = 1, \dots, N_1$, $k = 1, \dots, N_2$, which together imply $z' \le z_0$, and hence $z_0 - z' \ge 0$. Finally, we have $z' \in \ker_{\mathbb{Z}^{d_{N_1,N_2}}}(A_{N_1,N_2})$, and therefore $A_{N_1,N_2}(z_0 - z') = A_{N_1,N_2} z_0 - 0 = b$, which completes the proof. $\square$

Remark. Note that for fixed $i$ the above optimization problem
$$\max\Bigl\{c_{v,i}^\top \bar v_i + \sum_{j=1}^{N_2} c_{w,i,j}^\top \bar w_{i,j} \;:\; (\bar v_i, W_{u',\bar v_i}) \in V_{u'},\ \bar v_i \le v_i,\ \bar w_{i,j} \in W_{u',\bar v_i},\ \bar w_{i,j} \le w_{i,j},\ j = 1, \dots, N_2\Bigr\}$$
can easily be solved by considering the finitely many $\bar v_i$ with $(\bar v_i, W_{u',\bar v_i}) \in V_{u'}$ one after another. For fixed $\bar v_i$ the above optimization problem decomposes and we have to solve
$$\max\{c_{w,i,j}^\top \bar w_{i,j} : \bar w_{i,j} \le w_{i,j},\ \bar w_{i,j} \in W_{u',\bar v_i}\}$$
for $j = 1, \dots, N_2$.

Moreover, note that the above procedure to find an improving vector is again linear in the number $N_1 N_2$ of scenarios. $\square$
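The decomposed search just described can be sketched in a few lines. The data layout below (lists of tuples for $V_{u'}$ and $W_{u',\bar v}$, and the helper names `leq` and `best_second_stage`) is our own hypothetical encoding, not the thesis's; the point is only that, for fixed $u'$, the inner problem splits into one independent scan per candidate $\bar v_i$ and per scenario $j$:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def leq(a, b):  # componentwise a <= b
    return all(x <= y for x, y in zip(a, b))

def best_second_stage(c_v, c_w_list, V, v0, w0_list):
    """max c_v·v̄ + Σ_j c_{w,j}·w̄_j over (v̄, W) in V with v̄ <= v0 and
    w̄_j in W with w̄_j <= w0_j; returns (value, v̄, [w̄_j]) or None."""
    best = None
    for v, W in V:                       # finitely many candidate v̄'s
        if not leq(v, v0):
            continue
        value, ws = dot(c_v, v), []
        for c_w, w0 in zip(c_w_list, w0_list):
            cands = [w for w in W if leq(w, w0)]
            if not cands:                # this v̄ admits no w̄_j for scenario j
                break
            w = max(cands, key=lambda w: dot(c_w, w))
            value += dot(c_w, w)
            ws.append(w)
        else:
            if best is None or value > best[0]:
                best = (value, v, ws)
    return best

# toy data: one candidate v̄ = (1,) with W = {(1,), (-2,)}, one scenario
best = best_second_stage(c_v=(1,), c_w_list=[(1,)],
                         V=[((1,), [(1,), (-2,)])],
                         v0=(5,), w0_list=[(0,)])
print(best)   # (-1, (1,), [(-2,)])
```

Each scenario contributes one independent `max` over a fixed finite list, which is why the overall effort grows only linearly in the number $N_1 N_2$ of scenarios.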

Theorem 4.3.2 If the problem $\min\{c^\top z : A_{N_1,N_2} z = b,\ z \in \mathbb{Z}_+^{d_{N_1,N_2}}\}$ is solvable, an optimal solution can be computed in finitely many steps by application of the Augmentation Algorithm 0.0.2 together with the above reconstruction procedure to find improving vectors.
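The Augmentation Algorithm 0.0.2 is referenced but not restated here; the following is our own sketch of the generic augmentation pattern it follows, with a made-up toy instance (for $A = (1\ {-1})$ the set $\{\pm(1,1)\}$ is the Graver test set):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def augment(z, G, c):
    """Keep subtracting directions t in G with c·t > 0 while z - t stays
    nonnegative, taking the longest feasible integer step each time.
    Assumes a bounded problem, so every improving t has a positive entry
    that limits the step length."""
    improved = True
    while improved:
        improved = False
        for t in G:
            if dot(c, t) > 0 and all(zi - ti >= 0 for zi, ti in zip(z, t)):
                step = min(zi // ti for zi, ti in zip(z, t) if ti > 0)
                z = tuple(zi - step * ti for zi, ti in zip(z, t))
                improved = True
    return z

# toy: A = (1 -1), Graver test set {±(1,1)}; minimize c·z over Az = b, z >= 0
G = [(1, 1), (-1, -1)]
z = augment((3, 3), G, c=(1, 2))
print(z)   # (0, 0)
```

Since every step subtracts a kernel vector, $A z$ (and hence feasibility) is preserved throughout, and the objective strictly decreases with each step.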

4.3.2 LP case

Suppose we are given $H_\infty$, a cost function vector
$$c = (c_u, c_{v,1}, c_{w,1,1}, \dots, c_{w,1,N_2}, \dots, c_{v,N_1}, c_{w,N_1,1}, \dots, c_{w,N_1,N_2}),$$
and a feasible solution $z_0 = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2})$.

Lemma 4.3.3 Suppose that there exists no pair $(u', V_{u'}) \in H_\infty$ such that

1. $\operatorname{supp}((u')^+) \subseteq \operatorname{supp}(u)$,

2. for $i = 1, \dots, N_1$, there exist $(\bar v_i, W_{u',\bar v_i}) \in V_{u'}$ and $\bar w_{i,1}, \dots, \bar w_{i,N_2} \in W_{u',\bar v_i}$ with $\operatorname{supp}(\bar v_i^+) \subseteq \operatorname{supp}(v_i)$ and $\operatorname{supp}(\bar w_{i,j}^+) \subseteq \operatorname{supp}(w_{i,j})$ for $j = 1, \dots, N_2$,

3. $c^\top z' > 0$, where $z' = (u', v_1', w_{1,1}', \dots, w_{1,N_2}', \dots, v_{N_1}', w_{N_1,1}', \dots, w_{N_1,N_2}')$ and for $i = 1, \dots, N_1$ the vector $(v_i', w_{i,1}', \dots, w_{i,N_2}')$ is an optimal solution of
$$\max\Bigl\{c_{v,i}^\top \bar v_i + \sum_{j=1}^{N_2} c_{w,i,j}^\top \bar w_{i,j} \;:\; (\bar v_i, W_{u',\bar v_i}) \in V_{u'},\ \operatorname{supp}(\bar v_i^+) \subseteq \operatorname{supp}(v_i),\ \bar w_{i,j} \in W_{u',\bar v_i},\ \operatorname{supp}(\bar w_{i,j}^+) \subseteq \operatorname{supp}(w_{i,j}),\ j = 1, \dots, N_2\Bigr\}.$$

Then $z_0 = (u, v_1, w_{1,1}, \dots, w_{1,N_2}, \dots, v_{N_1}, w_{N_1,1}, \dots, w_{N_1,N_2})$ is an optimal solution of the problem $\min\{c^\top z : A_{N_1,N_2} z = b,\ z \in \mathbb{R}_+^{d_{N_1,N_2}}\}$.

If there exists such a pair $(u', V_{u'}) \in H_\infty$ then there exists $\lambda \in \mathbb{R}_{>0}$ such that $z_0 - \lambda z'$ is a feasible solution with $c^\top(z_0 - \lambda z') < c^\top z_0$.


Proof. Suppose that $z_0$ is not optimal. Then there is some vector
$$z'' = (u'', v_1'', w_{1,1}'', \dots, w_{1,N_2}'', \dots, v_{N_1}'', w_{N_1,1}'', \dots, w_{N_1,N_2}'') \in G_{N_1,N_2}$$
and a positive scalar $\lambda$ such that $z_0 - \lambda z''$ is feasible and $c^\top(z_0 - \lambda z'') < c^\top z_0$. Feasibility of $z_0 - \lambda z''$ implies $z_0 - \lambda z'' \ge 0$, hence $\operatorname{supp}((z'')^+) \subseteq \operatorname{supp}(z_0)$. Therefore, we have $\operatorname{supp}((u'')^+) \subseteq \operatorname{supp}(u)$, $\operatorname{supp}((v_i'')^+) \subseteq \operatorname{supp}(v_i)$, $i = 1, \dots, N_1$, and $\operatorname{supp}((w_{j,k}'')^+) \subseteq \operatorname{supp}(w_{j,k})$, $j = 1, \dots, N_1$, $k = 1, \dots, N_2$, the latter implying that for any $i = 1, \dots, N_1$, $j = 1, \dots, N_2$, there exist vectors $\bar v_i$ and $\bar w_{i,j}$ as required in the second condition of Lemma 4.3.3.

Choose $u' = u''$ and let $z'$ be as defined in the third condition above. But now $c^\top(z_0 - \lambda z'') < c^\top z_0$ and $\lambda > 0$ imply that $c^\top z'' > 0$. Moreover, $c^\top z' \ge c^\top z'' > 0$. In conclusion, the pair $(u', V_{u'})$ fulfills conditions 1.-3., proving the first claim of the lemma.

With $z'$ according to condition 3., $c^\top z' > 0$ implies that $c^\top(z_0 - \lambda z') < c^\top z_0$ for any $\lambda > 0$. Moreover, $\operatorname{supp}((u')^+) \subseteq \operatorname{supp}(u)$, $\operatorname{supp}((v_i')^+) \subseteq \operatorname{supp}(v_i)$, $i = 1, \dots, N_1$, and $\operatorname{supp}((w_{j,k}')^+) \subseteq \operatorname{supp}(w_{j,k})$, $j = 1, \dots, N_1$, $k = 1, \dots, N_2$, together imply that we can indeed choose a strictly positive scalar $\lambda$ such that $z_0 - \lambda z' \ge 0$. Finally, $z' \in \ker_{\mathbb{R}^{d_{N_1,N_2}}}(A_{N_1,N_2})$, and therefore $A_{N_1,N_2}(z_0 - \lambda z') = A_{N_1,N_2} z_0 - \lambda \cdot 0 = b$, which completes the proof. $\square$

Remark. Note that for fixed $i$ the above optimization problem
$$\max\Bigl\{c_{v,i}^\top \bar v_i + \sum_{j=1}^{N_2} c_{w,i,j}^\top \bar w_{i,j} \;:\; (\bar v_i, W_{u',\bar v_i}) \in V_{u'},\ \operatorname{supp}(\bar v_i^+) \subseteq \operatorname{supp}(v_i),\ \bar w_{i,j} \in W_{u',\bar v_i},\ \operatorname{supp}(\bar w_{i,j}^+) \subseteq \operatorname{supp}(w_{i,j}),\ j = 1, \dots, N_2\Bigr\}$$
can easily be solved by considering the finitely many $\bar v_i$ with $(\bar v_i, W_{u',\bar v_i}) \in V_{u'}$ one after another. For fixed $\bar v_i$ the above optimization problem decomposes and we have to solve
$$\max\{c_{w,i,j}^\top \bar w_{i,j} : \operatorname{supp}(\bar w_{i,j}^+) \subseteq \operatorname{supp}(w_{i,j}),\ \bar w_{i,j} \in W_{u',\bar v_i}\}$$
for $j = 1, \dots, N_2$.

Moreover, note that the above procedure to find an improving vector is again linear in the number $N_1 N_2$ of scenarios. $\square$

Theorem 4.3.4 If the problem $\min\{c^\top z : A_{N_1,N_2} z = b,\ z \in \mathbb{R}_+^{d_{N_1,N_2}}\}$ is solvable, an optimal solution can be computed in finitely many steps by application of the Augmentation Algorithm 0.0.2 together with the Augmentation Strategy 2.5.1 and the above reconstruction procedure for finding improving vectors.


Note that the Augmentation Strategy 2.5.1 can also be applied at the building block level, since in phase 1 we have to look for an improving vector $t$ for our feasible solution $z_0$ for which $c^\top t > 0$ and $\operatorname{supp}(t) \subseteq \operatorname{supp}(z_0)$ hold.

4.4 Conclusions

In this chapter we have presented the details of our decomposition for 3-stage stochastic programs. Again, we defined a set $H_\infty$ of building blocks that is independent of the number of scenarios. Moreover, we showed how to compute this set and how to employ it in order to find an improving vector. As in the two-stage situation, both algorithms work entirely on the building block level. The presentation should have made clear how to define the necessary notions for the decomposition and the algorithms for higher multi-stage stochastic programs.

We have shown finiteness of $H_\infty$ for 3-stage stochastic linear programs. Analogous counting arguments show finiteness of $H_\infty$ also for general multi-stage stochastic linear programs. Whether $H_\infty$ is finite in the integer case is an open question already for the 3-stage situation. In order to prove finiteness of $H_\infty$ for general multi-stage stochastic integer linear programs, successive extensions of Maclagan's Theorem, see Chapter 6, would turn out very useful.

In the next chapter we show that our decomposition approach to two- and multi-stage stochastic programming can be generalized to arbitrary problem matrices. This will lead us to the definition of a connection set. These sets turn out to be much smaller than the test sets they correspond to. Moreover, improving vectors can be reconstructed from connection sets without knowledge of the full test set. Most surprising, however, is the fact that this reconstruction of improving directions via connection sets may be much faster even if the corresponding test set is given!


Chapter 5

Decomposition of Test Sets for Arbitrary Matrices

In the previous two chapters we presented a novel decomposition approach for the solution of two- and multi-stage stochastic programs. In this chapter we extend this decomposition idea to problems with an arbitrary integer problem matrix $A$. We define the notions of building blocks and of connection vectors. As with the building blocks of the last two chapters, the connection vectors can be computed directly, without computing the corresponding test set first. Connection vectors can be employed to find improving vectors for non-optimal solutions. More importantly, however, it may be much faster to reconstruct an improving vector from the connection vectors even if the full test set is given.

Definition 5.0.1 Let $A = (A_1 | A_2) \in \mathbb{Z}^{l \times k} \times \mathbb{Z}^{l \times (d-k)}$, $b \in \mathbb{R}^l$, and
$$T \subseteq \{(u_1, u_2) \in \mathbb{R}^k \times \mathbb{R}^{d-k} : A_1 u_1 + A_2 u_2 = b\}.$$
Then we call
$$CS_k(T) := \{A_1 u_1 : (u_1, u_2) \in T\}$$
the connection set of $T$ at $k$, its elements $A_1 u_1$ connection vectors, and the vectors $u_1$ and $u_2$ building blocks.

We will use the above definition only for $b = 0$. For $b = 0$ the above definition introduces connection sets for the different types of test sets. We will see that there are always finite connection sets for IP, LP, and MIP Graver test sets. Moreover, finite connection sets can be defined for two-stage stochastic linear and integer linear programs. In the stochastic mixed-integer case the situation is a bit more complicated.


In Section 5.3 we will deal with this case and show that there are finite connection sets for a large class of two-stage stochastic mixed-integer linear programs, and we give an algorithm to compute them.

In the following two sections, however, we look at connection sets for IP, LP, and MIP Graver test sets (also for stochastic programming), and we show how these connection sets can be used to find an improving vector, the main ingredient of the Augmentation Algorithm 0.0.2.

5.1 Connection Sets of LP, IP, and MIP Test Sets

Let $X, Y \in \{\mathbb{Z}, \mathbb{R}\}$, and suppose we are given the connection set $CS_k(T)$ of a universal test set $T$, say the Graver test set, for the family of linear (mixed-integer) problems
$$\min\{c_1^\top z_1 + c_2^\top z_2 : A_1 z_1 + A_2 z_2 = b,\ z_1 \in X_+^k,\ z_2 \in Y_+^{d-k}\}$$
as $(c_1, c_2) \in \mathbb{R}^d$ and $b \in \mathbb{R}^l$ or only parts of the data vary. We may assume that the connection set $CS_k(T)$ is finite. While this is clear in the LP and IP situations, it was proven in Section 2.4 that a finite connection set suffices also for mixed-integer programs.

Moreover, suppose that we are given a specific cost function vector $(c_1, c_2) \in \mathbb{R}^d$, a right-hand side $b \in \mathbb{R}^l$, and a feasible solution $z_0 = (z_{0,1}, z_{0,2}) \in X_+^k \times Y_+^{d-k}$ to the above problem.

The following lemmas allow us to check, for a fixed connection vector $s \in CS_k(T)$, whether there is an improving vector to $z_0$. Moreover, these lemmas show how to compute an improving vector, if one exists. Since $CS_k(T)$ is finite, this yields a finite procedure to reconstruct an improving vector to $z_0$ or to decide that no such vector exists.

Lemma 5.1.1 Let $A_1, A_2, b, c_1, c_2, X = Y = \mathbb{Z}, T, k, z_0$ be defined as above, and let $s \in CS_k(T)$. If the vector $(\bar u_1, \bar u_2)$ that satisfies

- $\bar u_1 \in \operatorname{argmax}\{c_1^\top z_1 : A_1 z_1 = s,\ z_1 \le z_{0,1},\ z_1 \in \mathbb{Z}^k\}$ and
- $\bar u_2 \in \operatorname{argmax}\{c_2^\top z_2 : A_2 z_2 = -s,\ z_2 \le z_{0,2},\ z_2 \in \mathbb{Z}^{d-k}\}$

does not have a positive cost function value $c_1^\top \bar u_1 + c_2^\top \bar u_2$, then there exists no improving vector for $z_0$ with connection vector $s$.


Lemma 5.1.2 Let $A_1, A_2, b, c_1, c_2, X = Y = \mathbb{R}, T, k, z_0$ be defined as above, and let $s \in CS_k(T)$. If the vector $(\bar u_1, \bar u_2)$ that satisfies

- $\bar u_1 \in \operatorname{argmax}\{c_1^\top z_1 : A_1 z_1 = s,\ z_1 \le z_{0,1},\ z_1 \in \mathbb{R}^k\}$ if $s = 0$, and $\bar u_1 \in \operatorname{argmax}\{c_1^\top z_1 : A_1 z_1 = s,\ \operatorname{supp}(z_1^+) \subseteq \operatorname{supp}(z_{0,1}),\ z_1 \in \mathbb{R}^k\}$ if $s \neq 0$, and
- $\bar u_2 \in \operatorname{argmax}\{c_2^\top z_2 : A_2 z_2 = -s,\ z_2 \le z_{0,2},\ z_2 \in \mathbb{R}^{d-k}\}$ if $s = 0$, and $\bar u_2 \in \operatorname{argmax}\{c_2^\top z_2 : A_2 z_2 = -s,\ \operatorname{supp}(z_2^+) \subseteq \operatorname{supp}(z_{0,2}),\ z_2 \in \mathbb{R}^{d-k}\}$ if $s \neq 0$,

does not have a positive cost function value $c_1^\top \bar u_1 + c_2^\top \bar u_2$, then there exists no improving vector for $z_0$ with connection vector $s$.

The distinction between the two cases $s = 0$ and $s \neq 0$ is necessary, since with

Lemma 5.1.3 Let $A_1, A_2, b, c_1, c_2, X = \mathbb{Z}, Y = \mathbb{R}, T, k, z_0$ be defined as above, and let $s \in CS_k(T)$. If the vector $(\bar u_1, \bar u_2)$ that satisfies

- $\bar u_1 \in \operatorname{argmax}\{c_1^\top z_1 : A_1 z_1 = s,\ z_1 \le z_{0,1},\ z_1 \in \mathbb{Z}^k\}$ and
- $\bar u_2 \in \operatorname{argmax}\{c_2^\top z_2 : A_2 z_2 = -s,\ z_2 \le z_{0,2},\ z_2 \in \mathbb{R}^{d-k}\}$

does not have a positive cost function value $c_1^\top \bar u_1 + c_2^\top \bar u_2$, then there exists no improving vector for $z_0$ with connection vector $s$.

Proof of Lemmas 5.1.1, 5.1.2, 5.1.3. If $(\bar u_1, \bar u_2)$ is defined as above for the IP, LP, and MIP cases, respectively, and if $c_1^\top \bar u_1 + c_2^\top \bar u_2 > 0$ holds, then $(\bar u_1, \bar u_2)$ is an improving direction for $z_0$. On the other hand, if $(u_1, u_2)$ is an improving direction to $z_0$ with connection vector $s$, then the two optimization problems in the IP, LP, and MIP situations, respectively, are non-empty and thus solvable. Moreover, $c_1^\top \bar u_1 + c_2^\top \bar u_2 \ge c_1^\top u_1 + c_2^\top u_2 > 0$. Thus, $(\bar u_1, \bar u_2)$ is an improving direction for $z_0$. □

Lemmas 5.1.1, 5.1.2, and 5.1.3 allow us to apply the Augmentation Algorithm 0.0.2. If $z_0$ is not optimal, then there exists an improving vector $(v_1, v_2) \in T$. Choosing $s = A_1 v_1$ in the (specific) lemma above, $\bar u_1$ and $\bar u_2$ exist (since $v_1$ and $v_2$, respectively, are feasible solutions to the optimization problems that determine $\bar u_1$ and $\bar u_2$), and $c_1^\top \bar u_1 + c_2^\top \bar u_2 \ge c_1^\top v_1 + c_2^\top v_2 > 0$. Thus, if the current feasible solution $z_0$ is not optimal, there is at least one connection vector $s \in CS_k(T)$ from which an improving vector can be reconstructed as indicated in the above three lemmas. In other words, if for all $s \in CS_k(T)$ no improving vector could be found, the solution $z_0$ is optimal.


The following example demonstrates that, even if the full test set is known, it can be

much faster to reconstruct an improving vector from a corresponding connection set

than to extract the improving vector from the test set itself.

Example. The IP Graver basis of the integer program
$$\min\{c^\top z : (k\ 1\ 1)z = b,\ z \in \mathbb{Z}^3_+\}$$
equals $G_{IP}(A) := \{\pm(0,1,-1), \pm(1,0,-k), \pm(1,-1,-(k-1)), \ldots, \pm(1,-k,0)\}$, where $A = (k\ 1\ 1)$. Thus, we can form the two connection sets

- $CS_1(G_{IP}(A)) = \{0, \pm k\}$ and
- $CS_2(G_{IP}(A)) = \{0, \pm 1, \pm 2, \ldots, \pm k\}$.

This already shows that the size of a connection set may depend heavily on the position at which we split the problem matrix.

Given a solution $(z_{0,1}, z_{0,2}, z_{0,3})$, we have to solve for each $s \in CS_1(G_{IP}(A)) = \{0, \pm k\}$ the two problems

- $\max\{c_1 z_1 : k z_1 = s,\ z_1 \le z_{0,1},\ z_1 \in \mathbb{Z}\}$ and
- $\max\{c_2 z_2 + c_3 z_3 : z_2 + z_3 = -s,\ (z_2, z_3) \le (z_{0,2}, z_{0,3}),\ z_2, z_3 \in \mathbb{Z}\}$.

The time to find an improving vector in the test set $G_{IP}(A)$ increases with growing $k$. The time to reconstruct an improving vector from the connection set, however, does not (or at most only very mildly) depend on $k$. This speed-up may be even more drastic for connection sets for stochastic programs, as introduced in the following section. □

5.2 Connection Sets in Stochastic Programming

In two-stage stochastic programming we deal with matrices of the form
$$A_N := \begin{pmatrix} T & W & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ T & 0 & \cdots & W \end{pmatrix},$$
where in the following we assume for simplicity of exposition that the matrix $A$, introduced in (4), page 13, is incorporated into $(T|W)$ as the submatrix $(A|0)$. Then,


the above matrix $A_N$ can be decomposed into
$$A_{N,1} := \begin{pmatrix} T \\ \vdots \\ T \end{pmatrix} \quad \text{and} \quad A_{N,2} := \begin{pmatrix} W & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & W \end{pmatrix}.$$
Applying the decomposition ideas and the three lemmas from the previous section to the matrices $A_{N,1}$ and $A_{N,2}$, we see that each connection vector has the form $(Tu, \ldots, Tu)^\top$, and that the two problems that have to be solved in order to reconstruct an improving vector either simplify or decompose into smaller problems.
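The block shapes of $A_N$, $A_{N,1}$, and $A_{N,2}$ can be illustrated directly. A minimal sketch with pure-Python nested lists; the tiny matrices `T` and `W` are made-up placeholders:

```python
def two_stage_matrix(T, W, N):
    """Assemble A_N: T repeated in the first block column, W on the block diagonal."""
    r, m, n = len(W), len(T[0]), len(W[0])
    rows = []
    for i in range(N):                 # one block of rows per scenario
        for a in range(r):
            row = list(T[a]) + [0] * (n * N)
            row[m + i * n : m + (i + 1) * n] = W[a]
            rows.append(row)
    return rows

def split_columns(A, m):
    """Decompose into A_{N,1} (first m columns) and A_{N,2} (remaining columns)."""
    return [row[:m] for row in A], [row[m:] for row in A]

T = [[1, 2]]                           # one coupling row, two first-stage variables
W = [[1, 1]]                           # two second-stage variables per scenario
A3 = two_stage_matrix(T, W, 3)
A31, A32 = split_columns(A3, 2)        # A31 stacks T three times; A32 is block-diagonal
```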

As we have seen in Chapter 3, the set $H_\infty$ constructed there is finite in the IP and LP cases. Thus, there are only finitely many vectors $Tu$ with $u \in H_\infty$. Therefore, we define the set $CS_m(H_\infty) := \{Tu : u \in H_\infty,\ u \text{ a first-stage building block}\}$ to be the connection set of $H_\infty$. Herein, $m$ denotes again the number of columns of $T$, that is, the number of first-stage variables.

Connection sets for two-stage stochastic mixed-integer programs can be defined in the same way. Unfortunately, they need not be finite; we give an example in Section 5.3. However, we prove finiteness of the connection set for a large subclass of two-stage stochastic mixed-integer programs, and we present an algorithm to compute it.

In the following, assume that we are given a number $N$ of scenarios, a specific cost function $c := (c_u, c_1, \ldots, c_N)$, and a feasible solution $z_0 = (u, v_1, \ldots, v_N)$ to the problem.

In the subsequent lemmas we show again how an improving vector can be found for a fixed connection vector $s \in CS_m(H_\infty)$. For finite $CS_m(H_\infty)$, this yields a finite procedure to reconstruct an improving vector to $z_0$ or to decide that no such vector exists.

Lemma 5.2.1 Let $z_0 = (u, v_1, \ldots, v_N)$ be a feasible solution, $s \in CS_m(H_\infty)$, and suppose that the optimization problems

- $\bar u \in \operatorname{argmax}\{c_u^\top z : Tz = s,\ z \le u,\ z \in \mathbb{Z}^m\}$,
- $\bar v_i \in \operatorname{argmax}\{c_i^\top z : Wz = -s,\ z \le v_i,\ z \in \mathbb{Z}^n\}$, $i = 1, \ldots, N$,

are all solvable. If the vector $(\bar u, \bar v_1, \ldots, \bar v_N)$ does not have a positive cost function value, then there exists no improving vector for $z_0$ with connection vector $s$.

Lemma 5.2.2 Let $z_0 = (u, v_1, \ldots, v_N)$ be a feasible solution, $s \in CS_m(H_\infty)$, and suppose that the optimization problems

- $\bar u \in \operatorname{argmax}\{c_u^\top z : Tz = s,\ z \le u,\ z \in \mathbb{R}^m\}$ if $s = 0$, and $\bar u \in \operatorname{argmax}\{c_u^\top z : Tz = s,\ \operatorname{supp}(z^+) \subseteq \operatorname{supp}(u),\ z \in \mathbb{R}^m\}$ if $s \neq 0$, and
- $\bar v_i \in \operatorname{argmax}\{c_i^\top z : Wz = -s,\ z \le v_i,\ z \in \mathbb{R}^n\}$ if $s = 0$, and $\bar v_i \in \operatorname{argmax}\{c_i^\top z : Wz = -s,\ \operatorname{supp}(z^+) \subseteq \operatorname{supp}(v_i),\ z \in \mathbb{R}^n\}$ if $s \neq 0$, $i = 1, \ldots, N$,

are all solvable. If the vector $(\bar u, \bar v_1, \ldots, \bar v_N)$ does not have a positive cost function value, then there exists no improving vector for $z_0$ with connection vector $s$.

Lemma 5.2.3 Suppose that there is no row in $(T|W)$ that couples a first-stage continuous variable with a second-stage continuous variable, that is, there is no problem constraint that involves both first- and second-stage continuous variables. Let $z_0 = (u, v_1, \ldots, v_N)$ be a feasible solution, $s \in CS_m(H_\infty)$, and suppose that the optimization problems

- $\bar u \in \operatorname{argmax}\{c_u^\top z : Tz = s,\ z \le u,\ z \in \mathbb{Z}^m\}$,
- $\bar v_i \in \operatorname{argmax}\{c_i^\top z : Wz = -s,\ z \le v_i,\ z \in \mathbb{R}^n\}$, $i = 1, \ldots, N$,

are all solvable. If the vector $(\bar u, \bar v_1, \ldots, \bar v_N)$ does not have a positive cost function value, then there exists no improving vector for $z_0$ with connection vector $s$.

Proof of Lemmas 5.2.1, 5.2.2, 5.2.3. If $\bar z := (\bar u, \bar v_1, \ldots, \bar v_N)$ is defined as above for the IP, LP, and MIP cases, respectively, and if $c^\top \bar z > 0$ holds, then $\bar z$ is an improving direction for $z_0$. On the other hand, if $z' := (u', v'_1, \ldots, v'_N)$ is an improving direction to $z_0$ with connection vector $s$, then all the optimization problems in the IP, LP, and MIP situations, respectively, are non-empty and thus solvable. Moreover, $c^\top \bar z \ge c^\top z' > 0$. Thus, $\bar z$ is an improving direction for $z_0$. □
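To see the scenario-wise decomposition of these lemmas in action, consider a deliberately tiny pure-integer toy with $T = W = (1)$, i.e. constraints $u + v_i = b_i$. Everything below (instance data, costs, the closed-form subproblem solutions) is invented for illustration: with $T = (1)$ the first-stage subproblem fixes $z = s$ and each scenario subproblem fixes $z = -s$, so the check of Lemma 5.2.1 reduces to sign and bound tests, and the improving vector $(s, -s, \ldots, -s)$ is subtracted from the current solution.

```python
def augment_toy(u, v, cu, cs, conn_set):
    """Augmentation via connection vectors for the toy two-stage IP
    min cu*u + sum(cs[i]*v[i])  s.t.  u + v[i] = b[i], u, v[i] >= 0  (T = W = (1))."""
    while True:
        best = None
        for s in conn_set:
            if s > u or any(-s > vi for vi in v):
                continue               # some subproblem of Lemma 5.2.1 is infeasible
            val = cu * s + sum(ci * (-s) for ci in cs)
            if val > 0 and (best is None or val > best[0]):
                best = (val, s)
        if best is None:               # no connection vector improves: optimal
            return u, v
        s = best[1]
        u, v = u - s, [vi + s for vi in v]

# two scenarios, b = (2, 2); start at the feasible point u = 2, v = (0, 0)
u, v = augment_toy(2, [0, 0], 5, [1, 1], [0, 1, -1])
# ends at u = 0, v = [2, 2], cost 4 (down from 10), after two augmentation steps
```

Note that the $N$ second-stage checks run independently, one per scenario, exactly as in the lemmas.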

5.3 Connection Sets in Stochastic Mixed-Integer Programming

In Section 3.3 we have seen how to compute the set $H_\infty$ for two-stage stochastic linear and integer linear programs entirely on the building block level. Therefore, we can use these algorithms to compute the connection sets of $H_\infty$ in these two cases. But, since $H_\infty$ contains all building blocks of $G_{LP}(T|W)$ and of $G_{IP}(T|W)$, respectively, we can use the same two algorithms to compute the connection sets $CS_k(G_{LP}(A_1|A_2))$ and $CS_k(G_{IP}(A_1|A_2))$ by choosing $T = A_1$ and $W = A_2$.

Although the knowledge of $H_\infty$ already suffices to solve any given specific problem instance to optimality, the reconstruction of an improving vector may be done much faster via the connection set of $H_\infty$ (see the example on page 94). We refer to the already presented algorithms to stress the fact that connection sets, too, can be computed on the building block level. Clearly, an algorithm that computes the connection set without computing the set $H_\infty$ first is even more desirable.

Connection sets $CS_k(G_{MIP}(A_1|A_2))$ can be computed via the algorithm presented in Section 2.4. Therefore, we restrict our attention in this section to the computation of connection sets of $H_\infty$ for two-stage stochastic mixed-integer linear programs.

5.3.1 The Main Result

As we have seen in Section 2.4, MIP Graver test sets need not be finite in general. Therefore, a decomposition approach similar to that of Chapter 3, applied to two-stage stochastic mixed-integer linear programs, leads to an infinite set $H_\infty$ of building blocks. However, although $H_\infty$ may not be finite, the connection set of $H_\infty$ can be finite. For each of the finitely many connection vectors, a number of integer and mixed-integer linear programs then have to be solved, see Lemma 5.2.3. Thus, we can employ the Augmentation Algorithm 0.0.2 again to solve two-stage stochastic mixed-integer programs in finitely many augmentation steps.

It remains to determine interesting cases in which $CS_m(H_\infty)$ is finite. Our main result in this respect is the following.

Theorem 5.3.1 Suppose that there is no row in $(T|W)$ that couples a first-stage continuous variable with a second-stage continuous variable. Then $CS_m(H_\infty)$ is finite.

Before we prove this claim by presenting an algorithm that computes a finite superset of $CS_m(H_\infty)$, let us show that $CS_m(H_\infty)$ need not be finite if there is a row that couples a first-stage continuous variable with a second-stage continuous variable. For this we use the example from Section 2.4 which showed that MIP Graver test sets need not be finite in general.

Example. Let $T = (1\ 1)$ and $W = (1)$. Moreover, let only the first first-stage variable be integer and let all other variables be continuous. As we have seen in Section 2.4, all vectors $(1, -1+\alpha, -\alpha)$ with $\alpha \in \mathbb{R}$, $0 < \alpha < 1$, lie in $G_{MIP}(T|W)$.


Since all corresponding connection vectors of $G_{MIP}(T|W)$, $(1 + (-1+\alpha)) = (\alpha)$, are also connection vectors in $CS_2(H_\infty)$, the set $CS_2(H_\infty)$ is not finite. □

5.3.2 Computation of $CS_m(H_\infty)$

In the following, we present all ingredients of a completion algorithm to compute $CS_m(H_\infty)$. The treatment, the definitions, and the proofs follow the main lines of Section 2.4 on the computation of MIP Graver test sets. However, as we deal with two-stage stochastic programs, we will also use the arguments from Section 3.3 in the subsequent proofs. First, let us assume as in Theorem 5.3.1 that no first-stage and second-stage continuous variables are coupled by a row. Thus, we can write $T$ and $W$ as follows:
$$T = \begin{pmatrix} T_1 & T_2 \\ T_3 & 0 \end{pmatrix} \quad \text{and} \quad W = \begin{pmatrix} W_1 & 0 \\ W_3 & W_4 \end{pmatrix},$$

where the columns of $T_2$ and of $W_4$ correspond to the first- and second-stage continuous variables, respectively. Moreover, every connection vector can be written as $(s, t)^\top$, where $s$, of dimension $l_s$, corresponds to the rows of $T_1$, $T_2$, and $W_1$, and where $t$, of dimension $l_t$, corresponds to the rows of $T_3$, $W_3$, and $W_4$. For any vector $w$, let the indices $z$ and $q$ indicate its integer and continuous parts, respectively; therefore, $w = (w_z, w_q)$. Moreover, denote by $m_z$, $m_q$, $n_z$, $n_q$ the numbers of first-stage and second-stage integer and continuous variables, respectively. For better readability, let $\ker(A_1)$ and $\ker(A_N)$ denote the mixed-integer kernels of $A_1$ and $A_N$, as sets of appropriate dimensions.

Note that, since $-s = W_1 v_z$ and $t = T_3 u_z$, every connection vector $(s, t)^\top$ has to be integer.

The objects in the completion procedure presented below will be pairs of the form $((u_z, P_{(T_2|-T_2),\, s - T_1 u_z}), V_{u_z,s})$, where
$$V_{u_z,s} := \{(v_z, P_{(W_4|-W_4),\, -t - W_3 v_z}) : W_1 v_z = -s,\ t = T_3 u_z\}$$
for certain vectors $v_z$. Note that with each pair $((u_z, P_{(T_2|-T_2),\, s - T_1 u_z}), V_{u_z,s})$ there is a fixed connection vector $(s, t) = (s, T_3 u_z)$. For better readability, we will write $P_{u_z,s} := P_{(T_2|-T_2),\, s - T_1 u_z}$ and $Q_{v_z,t} := P_{(W_4|-W_4),\, -t - W_3 v_z}$. (Remember that we defined $P_{A,b} := \{z : Az = b,\ z \in \mathbb{R}^d_+\}$ for given dimension $d$.)


Algorithm 5.3.2 (Completion algorithm to compute $CS_m(H_\infty)$)

Input: a symmetric generating set $F$ of $\ker(A_1)$ in $((u_z, P_{u_z,s}), V_{u_z,s})$-notation, to be specified below
Output: a set $G$ from which a superset of $CS_m(H_\infty)$ can be extracted

    $G := F$
    $C := \bigcup_{f, g \in G} \{f \oplus g\}$   (forming S-vectors)
    while $C \neq \emptyset$ do
        $s :=$ an element in $C$
        $C := C \setminus \{s\}$
        $f := \operatorname{normalForm}(s, G)$
        if $f \neq ((0, P_{0,0}), \{(0, Q_{0,0})\})$ then
            $C := C \cup \bigcup_{g \in G \cup \{f\}} \{f \oplus g\}$   (adding S-vectors)
            $G := G \cup \{f\}$
    return $G$.

Behind the function $\operatorname{normalForm}(s, G)$ there is again the following algorithm.

Algorithm 5.3.3 (Normal form algorithm)

Input: a pair $s$, a set $G$ of pairs
Output: a normal form of $s$ with respect to $G$

    while there is some $g \in G$ such that $g \sqsubseteq s$ do
        $s := s \ominus g$
    return $s$
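The completion/normal-form pattern of Algorithms 5.3.2 and 5.3.3 is easiest to see at the level of plain integer vectors, where the pairs above degenerate to vectors, $\oplus$ to vector addition, $\ominus$ to subtraction, and $\sqsubseteq$ to the sign-compatible componentwise order. The following sketch is that simplified variant (it computes an IP Graver test set in the spirit of the earlier chapters, not the pair-valued Algorithm 5.3.2 itself):

```python
def reduces(g, s):
    """g ⊑ s: g nonzero, sign-compatible with s, |g_i| <= |s_i| in every component."""
    return any(g) and all(gi * si >= 0 and abs(gi) <= abs(si) for gi, si in zip(g, s))

def normal_form(s, G):
    """Algorithm 5.3.3 on plain vectors: subtract reducers until none applies."""
    changed = True
    while changed:
        changed = False
        for g in G:
            if reduces(g, s):
                s = tuple(si - gi for si, gi in zip(s, g))
                changed = True
    return s

def complete(F):
    """Algorithm 5.3.2 on plain vectors: close a symmetric generating set
    under S-vectors (pairwise sums) and normal-form reduction."""
    G = list(F)
    C = [tuple(a + b for a, b in zip(f, g)) for f in G for g in G]
    while C:
        f = normal_form(C.pop(), G)
        if any(f):                     # f did not reduce to zero: keep it
            C.extend(tuple(a + b for a, b in zip(f, g)) for g in G + [f])
            G.append(f)
    return G

# symmetric generating set of ker(1 1 -1); completion adds the missing ±(1, -1, 0)
F = [(1, 0, 1), (0, 1, 1), (-1, 0, -1), (0, -1, -1)]
G = complete(F)
```

On this input the procedure terminates with the six sign-minimal kernel elements, i.e. the Graver test set of the matrix $(1\ 1\ {-1})$.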

In the following we define the necessary notions for the above completion algorithm.

Definition 5.3.4 Let $\bar F$ be a symmetric integer generating set for the integer lattice $\{(u_z, v_z) \in \mathbb{Z}^{m_z+n_z} : \exists (u_q, v_q) \in \mathbb{R}^{m_q+n_q} \text{ with } (u_z, u_q, v_z, v_q) \in \ker(A_1)\}$ which contains an integer generating set for the integer lattice $\{(0, v_z) \in \mathbb{Z}^{m_z+n_z} : \exists v_q \in \mathbb{R}^{n_q} \text{ with } (0, 0, v_z, v_q) \in \ker(A_1)\}$ consisting only of vectors with zero first-stage component. Define $F := \{((u_z, P_{u_z,s}), V_{u_z,s}) : (u_z, v_z) \in \bar F \cup \{0\} \text{ for some } v_z \text{ with } W_1 v_z = -s\}$, where $V_{u_z,s} := \{(v_z, Q_{v_z,t}) : (u_z, v_z) \in \bar F \cup \{0\},\ W_1 v_z = -s,\ t = T_3 u_z\}$. Let $F$ be the input set to the completion algorithm.


Analogously to the proof of Lemma 3.3.3 one can show that
$$F_N := \{(\bar u, \bar v_1, \ldots, \bar v_N) : ((\bar u_z, P_{\bar u_z,s}), V_{\bar u_z,s}) \in F,\ (\bar u_q^+, \bar u_q^-) \in P_{\bar u_z,s},\ (\bar v_{i,z}, Q_{\bar v_{i,z},t}) \in V_{\bar u_z,s},\ (\bar v_{i,q}^+, \bar v_{i,q}^-) \in Q_{\bar v_{i,z},t},\ i = 1, \ldots, N\}$$
finitely generates $\ker(A_N)$ over $\mathbb{Z}$ for every $N \in \mathbb{Z}_+$, that is, every vector from $\ker(A_N)$ can be written as a finite integer linear combination of elements from the (possibly infinite) set $F_N$.

Definition 5.3.5 We define
$$V_{u_z,s} \oplus V_{u'_z,s'} := \{(v_z + v'_z, Q_{v_z+v'_z,\, t+t'}) : (v_z, Q_{v_z,t}) \in V_{u_z,s},\ (v'_z, Q_{v'_z,t'}) \in V_{u'_z,s'}\}$$
and
$$((u_z, P_{u_z,s}), V_{u_z,s}) \oplus ((u'_z, P_{u'_z,s'}), V_{u'_z,s'}) := ((u_z + u'_z, P_{u_z+u'_z,\, s+s'}), V_{u_z,s} \oplus V_{u'_z,s'}).$$

Definition 5.3.6 We say that $((u'_z, P_{u'_z,s'}), V_{u'_z,s'})$ reduces $((u_z, P_{u_z,s}), V_{u_z,s})$, or $((u'_z, P_{u'_z,s'}), V_{u'_z,s'}) \sqsubseteq ((u_z, P_{u_z,s}), V_{u_z,s})$ for short, if

- $u'_z \sqsubseteq u_z$,
- $P_{u_z,s} = P_{u'_z,s'} + P_{u_z-u'_z,\, s-s'}$,
- for every $(v_z, Q_{v_z,t}) \in V_{u_z,s}$ there exists $(v'_z, Q_{v'_z,t'}) \in V_{u'_z,s'}$ with
  - $v'_z \sqsubseteq v_z$, and
  - $Q_{v_z,t} = Q_{v'_z,t'} + Q_{v_z-v'_z,\, t-t'}$,
- $u'_z \neq 0$ or there exist $(v_z, Q_{v_z,t}) \in V_{u_z,s}$ and $(v'_z, Q_{v'_z,t'}) \in V_{u'_z,s'}$ with
  - $0 \neq v'_z \sqsubseteq v_z$, and
  - $Q_{v_z,t} = Q_{v'_z,t'} + Q_{v_z-v'_z,\, t-t'}$.

The last condition again avoids the situation where $u'_z$ and all occurring $v'_z$ are zero, which would form a trivial zero reduction.

Definition 5.3.7 In case of reducibility, $((u_z, P_{u_z,s}), V_{u_z,s})$ is reduced to
$$((u_z, P_{u_z,s}), V_{u_z,s}) \ominus ((u'_z, P_{u'_z,s'}), V_{u'_z,s'}) := ((u_z - u'_z, P_{u_z-u'_z,\, s-s'}), V_{u_z,s} \ominus V_{u'_z,s'}).$$
Herein, $V_{u_z,s} \ominus V_{u'_z,s'} := \{(v_z - v'_z, Q_{v_z-v'_z,\, t-t'}) : (v_z, Q_{v_z,t}) \in V_{u_z,s},\ (v'_z, Q_{v'_z,t'}) \in V_{u'_z,s'}\}$, where the $v'_z$ and $Q_{v'_z,t'}$ fulfill the conditions from Definition 5.3.6.

Again, it suffices to take only one difference $(v_z - v'_z, Q_{v_z-v'_z,\, t-t'})$ for every pair $(v_z, Q_{v_z,t}) \in V_{u_z,s}$ in the definition of $V_{u_z,s} \ominus V_{u'_z,s'}$ above.

Proposition 5.3.8 If the input set, the procedure $\operatorname{normalForm}$, $\oplus$, and $\ominus$ are defined as above, then the Completion Algorithm 5.3.2 terminates and computes a set $G$ such that $CS_m(H_\infty) \subseteq \{(s, t) : ((u_z, P_{u_z,s}), V_{u_z,s}) \in G,\ (v_z, Q_{v_z,t}) \in V_{u_z,s}\}$, where $(s, t) = (-W_1 v_z, T_3 u_z)$.

Proof. First let us show termination. Assume that the algorithm does not terminate and that it generates an infinite sequence
$$\{((u_{1,z}, P_{u_{1,z},s_1}), V_{u_{1,z},s_1}), ((u_{2,z}, P_{u_{2,z},s_2}), V_{u_{2,z},s_2}), \ldots\}$$
such that $((u_{i,z}, P_{u_{i,z},s_i}), V_{u_{i,z},s_i}) \not\sqsubseteq ((u_{j,z}, P_{u_{j,z},s_j}), V_{u_{j,z},s_j})$ whenever $i < j$.

To show termination we associate with each pair $((u_{i,z}, P_{u_{i,z},s_i}), V_{u_{i,z},s_i})$ a pair $(U_i, V_{U_i})$, where $U_i$ is an integer vector and $V_{U_i}$ is a set of integer vectors of the same dimension. Thus, $(U_i, V_{U_i})$ has the structure used in the computation of $H_\infty$ for two-stage stochastic integer programs. We will show that $(U_i, V_{U_i}) \not\sqsubseteq (U_j, V_{U_j})$ whenever $i < j$. Application of Lemma 3.2.10 then shows finiteness of the sequence $\{(U_1, V_{U_1}), (U_2, V_{U_2}), \ldots\}$ and thus, the above algorithm has to terminate.

As in the proof of Proposition 2.4.12, we may assume that $T_2$ and $W_4$, respectively, have full row rank. Let $B_1, \ldots, B_M$ denote all invertible square submatrices of $T_2$ of maximal row rank, and let $B'_1, \ldots, B'_{M'}$ denote all invertible square submatrices of $W_4$ of maximal row rank. Then associate with $((u_{i,z}, P_{u_{i,z},s_i}), V_{u_{i,z},s_i})$ the pair $(U_i, V_{U_i})$, where
$$U_i := (u_{i,z}, \det(B_1) B_1^{-1}(s_i - T_1 u_{i,z}), \ldots, \det(B_M) B_M^{-1}(s_i - T_1 u_{i,z})) \in \mathbb{Z}^{m_z + M l_s}$$
and
$$V_{U_i} := \{f(v_z) : (v_z, Q_{v_z,t_i}) \in V_{u_{i,z},s_i}\} \subseteq \mathbb{Z}^{n_z + M' l_t},$$
where
$$f(v_z) := (v_z, \det(B'_1)(B'_1)^{-1}(-t_i - W_3 v_z), \ldots, \det(B'_{M'})(B'_{M'})^{-1}(-t_i - W_3 v_z)).$$

By Lemma 2.4.9 we have that $P_{u_{j,z},s_j} \neq P_{u_{i,z},s_i} + P_{u_{j,z}-u_{i,z},\, s_j-s_i}$ implies $U_i \not\sqsubseteq U_j$, and that $Q_{v_{j,z},t_j} \neq Q_{v_{i,z},t_i} + Q_{v_{j,z}-v_{i,z},\, t_j-t_i}$ implies $f(v_{i,z}) \not\sqsubseteq f(v_{j,z})$. Therefore, the relation $((u_{i,z}, P_{u_{i,z},s_i}), V_{u_{i,z},s_i}) \not\sqsubseteq ((u_{j,z}, P_{u_{j,z},s_j}), V_{u_{j,z},s_j})$ implies that we either have $u_{i,z} \not\sqsubseteq u_{j,z}$ or $U_i \not\sqsubseteq U_j$, or that there is some pair $(v_{j,z}, Q_{v_{j,z},t_j}) \in V_{u_{j,z},s_j}$ such that there is no pair $(v_{i,z}, Q_{v_{i,z},t_i}) \in V_{u_{i,z},s_i}$ with $v_{i,z} \sqsubseteq v_{j,z}$ and $f(v_{i,z}) \sqsubseteq f(v_{j,z})$. Altogether, $((u_{i,z}, P_{u_{i,z},s_i}), V_{u_{i,z},s_i}) \not\sqsubseteq ((u_{j,z}, P_{u_{j,z},s_j}), V_{u_{j,z},s_j})$ implies that $(U_i, V_{U_i}) \not\sqsubseteq (U_j, V_{U_j})$.


Application of Lemma 3.2.10 to the sequence $\{(U_1, V_{U_1}), (U_2, V_{U_2}), \ldots\}$ shows that this sequence has to be finite and thus, the above algorithm has to terminate.

To show correctness, we have to prove that $G$ contains all connection vectors of $G_N$ for arbitrary $N \in \mathbb{Z}_+$. Thus, it suffices for fixed $N$ to prove that

$$\bar G := \{(\bar u, \bar v_1, \ldots, \bar v_N) : ((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G,\ (\bar u_q^+, \bar u_q^-) \in P_{\bar u_z,\bar s},\ (\bar v_{i,z}, Q_{\bar v_{i,z},\bar t}) \in V_{\bar u_z,\bar s},\ (\bar v_{i,q}^+, \bar v_{i,q}^-) \in Q_{\bar v_{i,z},\bar t},\ i = 1, \ldots, N\}$$
has the positive sum property with respect to $\ker(A_N)$. To this end, we choose an arbitrary vector $(u, v_1, \ldots, v_N) \in \ker(A_N)$ and construct a finite representation
$$(u, v_1, \ldots, v_N) = \sum_{k,\ ((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G} \lambda_{\bar u,\bar s,k}\, (\bar u, \bar v_1, \ldots, \bar v_N),$$
where $\lambda_{\bar u,\bar s,k} \in \mathbb{Z}_+$, $((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G$, $(\bar u_q^+, \bar u_q^-) \in P_{\bar u_z,\bar s}$, $(\bar v_{i,z}, Q_{\bar v_{i,z},\bar t}) \in V_{\bar u_z,\bar s}$, $(\bar v_{i,q}^+, \bar v_{i,q}^-) \in Q_{\bar v_{i,z},\bar t}$, $i = 1, \ldots, N$, and $(\bar u, \bar v_1, \ldots, \bar v_N) \sqsubseteq (u, v_1, \ldots, v_N)$ whenever $\lambda_{\bar u,\bar s,k} > 0$. We need the additional subscript $k$ since several vectors constructed from the same pair $((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G$ may appear in the above sum.

Since $G$ contains $F$, and thus $\bar G \supseteq F_N$ for all $N \in \mathbb{Z}_+$, at least an integer linear combination
$$(u, v_1, \ldots, v_N) = \sum_{k,\ ((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G} \lambda_{\bar u,\bar s,k}\, (\bar u, \bar v_1, \ldots, \bar v_N)$$
with $\lambda_{\bar u,\bar s,k} \in \mathbb{Z}_+$, $((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G$, $(\bar u_q^+, \bar u_q^-) \in P_{\bar u_z,\bar s}$, $(\bar v_{i,z}, Q_{\bar v_{i,z},\bar t}) \in V_{\bar u_z,\bar s}$, and $(\bar v_{i,q}^+, \bar v_{i,q}^-) \in Q_{\bar v_{i,z},\bar t}$, $i = 1, \ldots, N$, is possible.

For each such integer linear combination consider the value $\sum \lambda_{\bar u,\bar s,k} \|(\bar u, \bar v_1, \ldots, \bar v_N)\|_1$, which is always non-negative. Thus, this sum is bounded from below by some non-negative infimum. Let us assume for the moment that there is an integer linear combination $\sum \lambda_{\bar u,\bar s,k} (\bar u, \bar v_1, \ldots, \bar v_N)$ that attains this infimum.

If $\sum \lambda_{\bar u,\bar s,k} \|(\bar u, \bar v_1, \ldots, \bar v_N)\|_1 = \|(u, v_1, \ldots, v_N)\|_1$, then all vectors $(\bar u, \bar v_1, \ldots, \bar v_N)$ with $\lambda_{\bar u,\bar s,k} > 0$ lie in the same orthant of $\mathbb{R}^{dN}$ as the vector $(u, v_1, \ldots, v_N)$. Therefore, $(\bar u, \bar v_1, \ldots, \bar v_N) \sqsubseteq (u, v_1, \ldots, v_N)$ whenever $\lambda_{\bar u,\bar s,k} > 0$. But since $(u, v_1, \ldots, v_N)$ was chosen arbitrarily, this implies that $\bar G$ has the positive sum property with respect to $\ker(A_N)$. The claim then follows.

Assume on the contrary that $\sum \lambda_{\bar u,\bar s,k} \|(\bar u, \bar v_1, \ldots, \bar v_N)\|_1 > \|(u, v_1, \ldots, v_N)\|_1$ holds for such a minimal combination. Then there have to exist $(\bar u_1, \bar v_{1,1}, \ldots, \bar v_{1,N})$ and $(\bar u_2, \bar v_{2,1}, \ldots, \bar v_{2,N})$ with $\lambda_{\bar u_1,\bar s_1,k_1} > 0$ and $\lambda_{\bar u_2,\bar s_2,k_2} > 0$, and a component $p$ such that
$$(\bar u_1, \bar v_{1,1}, \ldots, \bar v_{1,N})^{(p)} \cdot (\bar u_2, \bar v_{2,1}, \ldots, \bar v_{2,N})^{(p)} < 0.$$


Thus,
$$\|(\bar u_1 + \bar u_2, \bar v_{1,1} + \bar v_{2,1}, \ldots, \bar v_{1,N} + \bar v_{2,N})\|_1 < \|(\bar u_1, \bar v_{1,1}, \ldots, \bar v_{1,N})\|_1 + \|(\bar u_2, \bar v_{2,1}, \ldots, \bar v_{2,N})\|_1.$$
During the run of the algorithm, $((\bar u_1 + \bar u_2, P_{\bar u_1+\bar u_2,\, \bar s_1+\bar s_2}), V_{\bar u_1,\bar s_1} \oplus V_{\bar u_2,\bar s_2})$ was reduced to $((0, P_{0,0}), \{(0, Q_{0,0})\})$ by elements $((u_1, P_{u_1,s_1}), V_{u_1,s_1}), \ldots, ((u_r, P_{u_r,s_r}), V_{u_r,s_r}) \in G$. Analogously to the construction in the proof of Proposition 3.3.7, this reduction to $((0, P_{0,0}), \{(0, Q_{0,0})\})$ yields a representation
$$(\bar u_1 + \bar u_2, \bar v_{1,1} + \bar v_{2,1}, \ldots, \bar v_{1,N} + \bar v_{2,N}) = \sum_{i=1}^{r} (u_i, v_{i,1}, \ldots, v_{i,N}),$$
where $u_{i,z} \sqsubseteq \bar u_{1,z} + \bar u_{2,z}$, $v_{i,j,z} \sqsubseteq \bar v_{1,j,z} + \bar v_{2,j,z}$,
$$P_{\bar u_1+\bar u_2,\, \bar s_1+\bar s_2} = P_{0,0} + P_{u_1,s_1} + \cdots + P_{u_r,s_r},$$
$$Q_{\bar v_{1,j}+\bar v_{2,j},\, \bar t_1+\bar t_2} = Q_{0,0} + Q_{v_{1,j,z},t_1} + \cdots + Q_{v_{r,j,z},t_r},$$
$(u_{i,q}^+, u_{i,q}^-) \in P_{u_{i,z},s_i}$, $(v_{i,j,q}^+, v_{i,j,q}^-) \in Q_{v_{i,j,z},t_i}$, $i = 1, \ldots, r$, $j = 1, \ldots, N$.

Moreover,
$$P_{\bar u_1+\bar u_2,\, \bar s_1+\bar s_2} = P_{0,0} + P_{u_1,s_1} + \cdots + P_{u_r,s_r} \quad \text{and} \quad Q_{\bar v_{1,j}+\bar v_{2,j},\, \bar t_1+\bar t_2} = Q_{0,0} + Q_{v_{1,j,z},t_1} + \cdots + Q_{v_{r,j,z},t_r}$$
imply that we can choose the continuous parts of the $u_i$ and $v_{i,j}$ in such a way that
$$(\bar u_{1,q} + \bar u_{2,q})^+ = \sum_{i=0}^{r} u_{i,q}^+, \qquad (\bar u_{1,q} + \bar u_{2,q})^- = \sum_{i=0}^{r} u_{i,q}^-,$$
$$(\bar v_{1,j,q} + \bar v_{2,j,q})^+ = \sum_{i=0}^{r} v_{i,j,q}^+, \qquad (\bar v_{1,j,q} + \bar v_{2,j,q})^- = \sum_{i=0}^{r} v_{i,j,q}^-,$$
for $j = 1, \ldots, N$. Thus, $u_{i,q} \sqsubseteq \bar u_{1,q} + \bar u_{2,q}$ and $v_{i,j,q} \sqsubseteq \bar v_{1,j,q} + \bar v_{2,j,q}$, $i = 0, \ldots, r$, $j = 1, \ldots, N$, that is,
$$(u_i, v_{i,1}, \ldots, v_{i,N}) \sqsubseteq (\bar u_1 + \bar u_2, \bar v_{1,1} + \bar v_{2,1}, \ldots, \bar v_{1,N} + \bar v_{2,N}),$$


for $i = 0, \ldots, r$. Therefore,
$$\sum_{i=0}^{r} \|(u_i, v_{i,1}, \ldots, v_{i,N})\|_1 = \|(\bar u_1 + \bar u_2, \bar v_{1,1} + \bar v_{2,1}, \ldots, \bar v_{1,N} + \bar v_{2,N})\|_1 < \|(\bar u_1, \bar v_{1,1}, \ldots, \bar v_{1,N})\|_1 + \|(\bar u_2, \bar v_{2,1}, \ldots, \bar v_{2,N})\|_1.$$

Now rewrite
$$(u, v_1, \ldots, v_N) = \sum_{k,\ ((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G} \lambda_{\bar u,\bar s,k}\, (\bar u, \bar v_1, \ldots, \bar v_N)$$
as
$$(u, v_1, \ldots, v_N) = \sum_{\substack{(\bar u,\bar s,k) \neq (\bar u_1,\bar s_1,k_1) \\ (\bar u,\bar s,k) \neq (\bar u_2,\bar s_2,k_2)}} \lambda_{\bar u,\bar s,k}\, (\bar u, \bar v_1, \ldots, \bar v_N) + (\lambda_{\bar u_1,\bar s_1,k_1} - 1)(\bar u_1, \bar v_{1,1}, \ldots, \bar v_{1,N}) + (\lambda_{\bar u_2,\bar s_2,k_2} - 1)(\bar u_2, \bar v_{2,1}, \ldots, \bar v_{2,N}) + \sum_{i=0}^{r} (u_i, v_{i,1}, \ldots, v_{i,N}),$$
which is an integer linear combination that contradicts the minimality assumption on the integer linear combination
$$\sum_{k,\ ((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G} \lambda_{\bar u,\bar s,k}\, (\bar u, \bar v_1, \ldots, \bar v_N).$$

Thus, it remains to show that the infimum of $\sum \lambda_{\bar u,\bar s,k} \|(\bar u, \bar v_1, \ldots, \bar v_N)\|_1$ is indeed attained by some integer linear combination. Using an argument analogous to the proof of Proposition 2.4.12, this can be seen as follows.

First, since there is at least one integer linear combination
$$(u, v_1, \ldots, v_N) = \sum_{k,\ ((\bar u_z, P_{\bar u_z,\bar s}), V_{\bar u_z,\bar s}) \in G} \lambda_{\bar u,\bar s,k}\, (\bar u, \bar v_1, \ldots, \bar v_N),$$
we have an upper bound $K$ for the infimum. Thus, there are at most finitely many choices for the non-negative integers $\lambda_{\bar u,\bar s,k}$ and the (computed) building blocks $\bar u_z$, $\bar v_{j,z}$, $j = 1, \ldots, N$, such that $\sum \lambda_{\bar u,\bar s,k} \|(\bar u_z, \bar v_{1,z}, \ldots, \bar v_{N,z})\|_1 \le K$. It remains to show that for fixed integers $\lambda_{\bar u,\bar s,k}$ and for fixed building blocks $\bar u_z$, $\bar v_{j,z}$, $j = 1, \ldots, N$, the infimum of $\sum \lambda_{\bar u,\bar s,k} \|(\bar u, \bar v_1, \ldots, \bar v_N)\|_1$ is indeed attained for some choice of the vectors $\bar u_q$, $\bar v_{j,q}$, $j = 1, \ldots, N$. Then also the global infimum, as the least of the finitely many minimal values of $\sum \lambda_{\bar u,\bar s,k} \|(\bar u, \bar v_1, \ldots, \bar v_N)\|_1$, where $\lambda_{\bar u,\bar s,k}$ and $\bar u_z$, $\bar v_{j,z}$, $j = 1, \ldots, N$, are kept fixed, is attained by some combination.

But as in the proof of Proposition 2.4.12, fixing the integers $\lambda_{\bar u,\bar s,k}$ and the building blocks $\bar u_z$, $\bar v_{j,z}$, $j = 1, \ldots, N$, leads to a linear minimization problem with non-empty feasible region whose objective is bounded from below by 0. Thus, the minimum is attained for some choice of the vectors $\bar u_q$, $\bar v_{j,q}$, $j = 1, \ldots, N$. This concludes the proof. □

Termination and correctness of the above algorithm imply finiteness of $CS_m(H_\infty)$ if there is no row of $(T|W)$ that couples a first-stage continuous variable with a second-stage continuous variable. Thus, Theorem 5.3.1 is finally proved.

Note that in case $T_1 = 0$, $T_2 = 0$, $W_1 = 0$, and $W_3 = 0$, we have a two-stage stochastic mixed-integer program with pure integer first stage and pure continuous second stage. The above algorithm then coincides with the algorithm to compute MIP Graver test sets presented in Section 2.4. Moreover, in case $T_2 = 0$ and $W_4 = 0$, the above algorithm simplifies to the algorithm to compute the set $H_\infty$ for two-stage stochastic integer programs.

5.4 Conclusions

In this chapter we have introduced the new concepts of connection vectors and connection sets. The knowledge of a connection set is already enough to reduce the problem of finding an improving direction for a given non-optimal feasible solution to the solution of finitely many smaller optimization problems. These smaller problems can be considered easy to solve compared to the full problem at hand. Moreover, there are only two classes of subproblems, and the problems in each class differ only in their right-hand sides (and in their cost function vectors), which makes further application of test sets for their solution advisable. In this respect, we may even think of further decomposing the smaller matrices $A_1$ and $A_2$, and $T$ and $W$, respectively, into smaller parts. This corresponds to a decomposition of building blocks into still smaller building blocks that are related via certain connection vectors.

When computing connection sets, we compute left-hand (or �rst-stage) and right-

hand side (or second-stage) building blocks as well, although we do not need them

for the construction of improving vectors. Moreover, there may be too many building

blocks compared to the number of connection vectors. In order to speed up computa-

Page 106: On the Decomp osition of T est SetsLP case. 68 3.5 Simple Recourse. 69 3.5.1 IP case. 70 3.5.2 LP case. 72 3.6 Computations. 73 3.7 Conclusions. 74 4 Decomp osition of T est Sets in

5.4. Conclusions 106

tions of connection sets, fast algorithmic solutions to the problems presented in the

next chapter are needed.


Chapter 6

Some Open Problems

6.1 Algorithmic Improvements

In Chapters 3, 4, and 5 we have presented algorithms to compute H_∞ and to compute connection sets. All algorithms included the computation of left-hand side (or first-stage) and of right-hand side (or second-stage) building blocks. However, as we have seen in Chapter 5, these building blocks are not needed for the construction of improving vectors. Since there may be far more building blocks than connection vectors, we should aim at finding an algorithm that computes the connection set without computing the building blocks. Fast algorithmic solutions to the following two problems would speed up the computation of connection sets tremendously. A fast algorithmic solution to Problem 6.1.1, which is the same as Problem 2.4.6 on page 41, is also needed for the computation of G_Z(A), or equivalently, of connection sets of MIP Graver test sets.

Problem 6.1.1 Let A ∈ Z^{l×d}. For any b ∈ R^l define P_{A,b} := {z : Az = b, z ∈ R^d_+}. For given b_1, b_2 ∈ R^l decide whether P_{A,b_1+b_2} = P_{A,b_1} + P_{A,b_2}, where P_{A,b_1} + P_{A,b_2} denotes the Minkowski sum of P_{A,b_1} and P_{A,b_2}.

Problem 6.1.2 Let A ∈ Z^{l×d}. For any b ∈ R^l define P^I_{A,b} := {z : Az = b, z ∈ Z^d_+}. For given b_1, b_2 ∈ R^l decide whether P^I_{A,b_1+b_2} = P^I_{A,b_1} + P^I_{A,b_2}, where P^I_{A,b_1} + P^I_{A,b_2} again denotes the Minkowski sum of P^I_{A,b_1} and P^I_{A,b_2}.

107

Page 108: On the Decomp osition of T est SetsLP case. 68 3.5 Simple Recourse. 69 3.5.1 IP case. 70 3.5.2 LP case. 72 3.6 Computations. 73 3.7 Conclusions. 74 4 Decomp osition of T est Sets in

6.2. Extension of Maclagan's Theorem 108

6.2 Extension of Maclagan's Theorem

In two- and multi-stage stochastic integer linear programming we have constructed a set H_∞ of building blocks that has very nice properties. Moreover, we gave an algorithm that, upon termination, returns this set. Unfortunately, in the integer situation, we can prove termination of this algorithm, and thus finiteness of H_∞, only for two-stage stochastic programs. For the multi-stage integer situation a successive generalization of Maclagan's Theorem would be very useful.

Definition 6.2.1 Let P(S) denote the power set of a set S, and let ⪯_1 denote the partial order ≤ on S_1 := Z^d_+. Then S_{k+1} and ⪯_{k+1} are inductively defined as follows. Let S_{k+1} = P(S_k), and for S, T ∈ S_{k+1} define S ⪯_{k+1} T if and only if for each t ∈ T there is some s ∈ S with s ⪯_k t.
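The inductive definition can be transcribed almost literally. In the following Python sketch (the function name and the data representation are illustrative choices), elements of S_1 are tuples over Z_+ and elements of S_k for k > 1 are sets:

```python
def preceq(k, s, t):
    """Sketch of the orders of Definition 6.2.1: preceq(1, s, t) is the
    componentwise order <= on Z^d_+ (s, t tuples of nonnegative ints);
    for k > 1, s and t are sets of elements of S_{k-1}, and S precedes T
    iff every element of T is dominated by some element of S."""
    if k == 1:
        return all(a <= b for a, b in zip(s, t))
    return all(any(preceq(k - 1, x, y) for x in s) for y in t)
```

For instance, {(1,0)} ⪯_2 {(2,3), (1,1)} since (1,0) ≤ both members, while {(2,0)} ⪯̸_2 {(1,1)}.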

With this definition, the Gordan-Dickson Lemma and Maclagan's Theorem can be restated as follows.

Lemma 6.2.2 (Gordan-Dickson)
Let {p_1, p_2, ...} be a sequence of elements in S_1 such that p_i ⪯̸_1 p_j whenever i < j. Then this sequence is finite.

Lemma 6.2.3 (Maclagan)
Let {p_1, p_2, ...} be a sequence of elements in S_2 such that p_i ⪯̸_2 p_j whenever i < j. Then this sequence is finite.

The following conjecture is a generalization of Maclagan's Theorem.

Conjecture 6.2.4 For fixed k ∈ Z, k > 2, let {p_1, p_2, ...} be a sequence of elements in S_k such that p_i ⪯̸_k p_j whenever i < j. Then this sequence is finite.

If this conjecture were true, then we could prove also in the multi-stage integer situation that H_∞ is finite and that the algorithm to compute this set terminates. Note that already a decision for the next case, k = 3, would be interesting. Clearly, an inductive argument for the general case would be best.


Chapter 7

Implementational Details

Some of the algorithms presented in the previous chapters have been implemented as part of this thesis work. We called the program MLP in order to emphasize that MLP computes sets of minimal lattice points. MLP is written in the programming language C and is available from http://www.testsets.de.

7.1 The Program MLP

For given integer matrices A, T, and W, the program MLP allows the user to compute the following sets:

- the (truncated) IP Graver basis G_IP(A),
- all ⊑-minimal elements in Λ \ {0} for some integer lattice Λ ⊆ Z^d,
- the minimal Hilbert basis of the pointed rational cone ker_{R^d}(A) ∩ R^d_+,
- the set H_∞ for the IP case for given T and W, where (A|0) is assumed to be included in (T|W).

7.1.1 Invocation

The program MLP is invoked as follows:

    mlp option ... option FILENAME

Herein, the input file specified by FILENAME contains the matrices A, T, and W, respectively, together with specifications for the problem sizes. The options specify the set to be computed.

7.1.2 Options

The following options are implemented:

- "gra": Computes the IP Graver basis G_IP(A).
- "hil": Computes the unique minimal Hilbert basis of ker_{R^d}(A) ∩ R^d_+.
- "sip": Computes the set H_∞ for two-stage stochastic integer programs.
- "tru": Switches to truncated Graver/Hilbert basis computation.
- "gen": Input file contains a list of generators of ker_{Z^d}(A).
- "mat": Input is a matrix.
- "tra": Input matrix is the transpose A^T instead of the problem matrix A.
- "per": Read the input and, depending on the form of the input, permute either
  - the components of the generators, or
  - the columns of the input matrix, or
  - the rows of the input matrix, which is given in transposed form.
  The output, however, will be given in the original ordering!
- "sla": This option allows a much faster Graver basis computation if either
  - the input matrix is of the form (I|A), or
  - the set of generators is of the form {(e_1, -Ae_1), ..., (e_d, -Ae_d)}.
  (Each condition has to hold only up to the sign of the unit vectors. The order of the generators is not important for the second condition, whereas it is crucial for the first condition that the rows (columns) are ordered as specified. In that case, MLP will automatically produce a generating set as specified in the second condition.)


One may use longer names for all options, since only the first 3 characters are significant. If no option is chosen, then the default setting

    mlp gra mat FILENAME

is invoked, that is, the IP Graver basis of the matrix given in FILENAME is computed. Note that all ⊑-minimal elements in Λ \ {0} for some integer lattice Λ ⊆ Z^d are computed if

    mlp gra gen FILENAME

is invoked. To this end, the input file has to contain a generating set of Λ over Z.

When the option "sla", applicable only in special cases, is used, a different implementation of the IP Graver basis algorithm is employed, where the special structure of the generating set allows a tremendous speed-up, since far fewer S-vectors have to be considered. Moreover, the algorithm only checks reducibility and does not perform any reduction step, which leads to another speed-up. If an S-vector is reducible with respect to the current basis, then it is guaranteed to reduce to 0 with respect to the current basis. Thus, the reduction need not be executed, since the result, 0, is already known beforehand. In addition to these two improvements, the set of those S-vectors that still have to be considered for reduction need not be stored in the computer memory. This allows us to solve bigger problems within the same given amount of RAM. Termination and correctness of this algorithm are proved in Subsection 7.3.1.

The option "per" allows the user to permute the columns of the problem matrix or the components of the generators in order to make the application of the option "sla" possible.

7.1.3 Input File

The input file has to contain the following data:

- an INTEGER which denotes the number of (first-stage) variables;
- only if H_∞ shall be computed: an INTEGER which denotes the number of second-stage variables;
- an INTEGER which denotes either
  - the number of vectors in the generating set, or
  - the number of rows in the matrix (= number of columns if the matrix is transposed);
- (number of variables) INTEGERS which define a permutation of {1, ..., d}, if option "per" is chosen;
- (number of variables) INTEGERS which define an upper-bound vector, if option "tru" is chosen;
- either
  - a matrix A or (T|W), given by (number of rows) * (number of variables) INTEGERS, or
  - a generating set, given by (number of vectors) * (number of variables) INTEGERS.

Note that in order to compute the unique minimal Hilbert basis of ker_{R^d}(A) ∩ R^d_+, the input file has to contain either the matrix A (use option "mat") or the vectors (e_1, Ae_1), ..., (e_d, Ae_d) (use option "gen").

7.1.4 Output Files

Depending on the specific set that is computed, the following output files are generated:

- "FILENAME.gra" contains the Graver basis G_IP(A).
- "FILENAME.gra2" contains the Graver basis G_IP(A) without brackets, together with the numbers of variables and of vectors.
- "FILENAME.hil" contains the unique minimal Hilbert basis of ker_{R^d}(A) ∩ R^d_+.
- "FILENAME.bin" contains the Graver basis G_IP(A) transformed into binomials of the form x^{v^+} - x^{v^-} in the variables x_1, x_2, ..., x_d.
- "FILENAME.sip" contains the set H_∞.
- "FILENAME.log" contains a copy of the standard output (screen).


7.2 Data Structure

The basic objects in the program MLP are sets of integer vectors of given dimension d. Each d-dimensional vector v is stored as an array of integers of length d+4, where the 4 additional integer entries are used to store the characteristic vectors of v^+ and v^-, and the norms ‖v^+‖_1, ‖v^-‖_1. This additional data allows a quicker decision that a vector v is not reducible by another vector w, since ‖w^+‖_1 > ‖v^+‖_1 or ‖w^-‖_1 > ‖v^-‖_1 already implies w ⋢ v.

Sets of vectors are internally stored both as a list of vectors and in a ternary tree structure. Each leaf of this ternary tree corresponds to exactly one of the 3^d possible sign patterns from {-, 0, +}^d and contains those vectors from the set that have this particular sign pattern. Clearly, unused leaves or branches of the tree are fathomed, that is, the tree is constructed only as far as needed, in order to save memory. This tree structure allows MLP to find reducing vectors faster, since many vectors in the current basis G are not considered at all, their sign pattern being incompatible with that of the vector to be reduced. Moreover, this tree structure allows a quicker construction of S-vectors if parts of the vectors have to lie in the same orthant, as is the case in the computation of truncated Graver bases (Lemma 2.2.2) or when the option "sla" is used.
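The storage idea can be sketched as follows. This is an illustration in Python, not MLP's actual C implementation; a plain dictionary keyed by sign pattern stands in for the fathomed ternary tree:

```python
def sign_pattern(v):
    """Sign pattern of v in {-1, 0, +1}^d: the key the ternary tree is built on."""
    return tuple((x > 0) - (x < 0) for x in v)

class VectorStore:
    """Sketch of the storage scheme of Section 7.2: vectors grouped by
    sign pattern, each with cached ||v^+||_1 and ||v^-||_1, so that most
    reducibility tests fail cheaply."""

    def __init__(self):
        self.buckets = {}

    def add(self, v):
        pos = sum(x for x in v if x > 0)    # ||v^+||_1
        neg = -sum(x for x in v if x < 0)   # ||v^-||_1
        self.buckets.setdefault(sign_pattern(v), []).append((v, pos, neg))

    def reducer_of(self, z):
        """Return some stored w with w ⊑ z, or None."""
        zpos = sum(x for x in z if x > 0)
        zneg = -sum(x for x in z if x < 0)
        zs = sign_pattern(z)
        for ws, entries in self.buckets.items():
            # prune whole buckets: each w_i must be 0 or share z_i's sign
            if any(a != 0 and a != b for a, b in zip(ws, zs)):
                continue
            for w, wpos, wneg in entries:
                if wpos > zpos or wneg > zneg:   # cached-norm shortcut
                    continue
                # signs already compatible, so only magnitudes remain
                if all(abs(wi) <= abs(zi) for wi, zi in zip(w, z)):
                    return w
        return None
```

The bucket-level sign test discards whole groups of vectors at once, and the cached norms reject most remaining candidates before the componentwise comparison is ever run.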

7.3 Algorithmic Improvements

7.3.1 The Option "sla"

The option "sla" can only be used if a lattice generating set F of the special form {(e_1, -Ae_1), ..., (e_d, -Ae_d)} for some integer matrix A is given. Clearly, such a generating set always exists for the lattice ker_{Z^{d+l}}(A|E). Before we introduce and prove the algorithm behind the option "sla", let us consider the following example from [2] (example "altmann" on [22]).

Example. The IP Graver basis of

    ( 1 0 0 2 0 0 2 1 1 3 2 2 2 3 3 3 )
    ( 0 1 0 0 2 0 1 2 1 2 3 2 3 2 3 3 )
    ( 0 0 1 0 0 2 1 1 2 2 2 3 3 3 2 3 )

consists of 73459 vectors and their negatives. If one computes this set via the invocation


    mlp gra mat altmann

that is, without the option "sla", it takes 1297891 seconds, or 360.53 hours, whereas the invocation

    mlp gra mat sla altmann

which uses the option "sla" via the algorithm presented below, takes only 12787 seconds, or 3.55 hours. Times are given in CPU seconds on a SUN Enterprise 450, 300 MHz Ultra-SPARC. □

In the following we present the algorithm behind the option "sla". To this end, let G_i := {(v, -Av) ∈ G_IP(A|E) : ‖v‖_1 = i}.

Clearly, G_0 = {0} and G_1 = F ∪ -F. The idea of the proposed algorithm is to compute G_2, G_3, ... inductively one after another. This process corresponds to a selection strategy by which the next S-vector is chosen from the set C in the completion algorithm to compute IP Graver bases, see Section 1.3. This particular selection strategy, valid only if the given generating set of the lattice is of the above form, leads to a tremendous speed-up, since fewer S-vectors have to be considered, no S-vector has to be reduced but only checked for reducibility, and the list C of S-vectors that are still to be considered need not be stored in memory. In addition to that, no vector is generated in the final output that does not belong to the desired Graver basis, which may happen in the general Graver basis algorithm from Section 1.3.

Suppose we have computed G_1, ..., G_k for some k ≥ 1. Then the following observation allows us to avoid all the reduction steps.

Lemma 7.3.1 The set G_{≤k} := G_1 ∪ ... ∪ G_k has the positive sum property with respect to the set {(v, -Av) ∈ ker_{Z^{d+l}}(A|E) : ‖v‖_1 ≤ k}.

Proof. Let (z, -Az) ∈ {(v, -Av) ∈ ker_{Z^{d+l}}(A|E) : ‖v‖_1 ≤ k} ⊆ ker_{Z^{d+l}}(A|E). By the positive sum property of G_IP(A|E) with respect to ker_{Z^{d+l}}(A|E) we can construct a finite linear representation (z, -Az) = Σ α_i (g_i, -Ag_i) with α_i ∈ Z_+, (g_i, -Ag_i) ∈ G_IP(A|E), and (g_i, -Ag_i) ⊑ (z, -Az) for all i. But these conditions imply ‖g_i‖_1 ≤ ‖z‖_1 ≤ k for all i. Thus, (g_i, -Ag_i) ∈ G_{≤k} as claimed. □

Corollary 7.3.2 Let (z, -Az) ∈ ker_{Z^{d+l}}(A|E) with ‖z‖_1 = k + 1. Then there are precisely two possible situations:

1. There exists a vector (v, -Av) ∈ G_{≤k} with (v, -Av) ⊑ (z, -Az). Then we know already that normalForm((z, -Az), G_{≤k}) = 0.

2. There does not exist a vector (v, -Av) ∈ G_{≤k} with (v, -Av) ⊑ (z, -Az). Then (z, -Az) ∈ G_{k+1}.

Proof. First, suppose that there exists (v, -Av) ∈ G_{≤k} with (v, -Av) ⊑ (z, -Az). Since ‖z - v‖_1 ≤ k, we know by the positive sum property of G_{≤k} that there are finitely many (not necessarily different) elements (g_i, -Ag_i) ∈ G_{≤k} such that (z - v, -A(z - v)) = Σ (g_i, -Ag_i) and (g_i, -Ag_i) ⊑ (z - v, -A(z - v)) ⊑ (z, -Az) for all i. Choosing (v, -Av) together with these (g_i, -Ag_i) in the algorithm normalForm, we obtain normalForm((z, -Az), G_{≤k}) = 0.

On the other hand, if there does not exist (v, -Av) ∈ G_{≤k} with (v, -Av) ⊑ (z, -Az), then (z, -Az) ∈ G_{k+1}, that is, the vector (z, -Az) cannot be written as a sum (v_1, -Av_1) + (v_2, -Av_2) with (v_i, -Av_i) ∈ ker_{Z^{d+l}}(A|E) \ {0}, (v_i, -Av_i) ⊑ (z, -Az), i = 1, 2. To see this, assume on the contrary that such vectors (v_1, -Av_1) and (v_2, -Av_2) exist. Then ‖v_i‖_1 ≤ k, i = 1, 2, since (0, 0) is the only vector in ker_{Z^{d+l}}(A|E) starting with d zero components. Therefore, by the positive sum property of G_{≤k} with respect to {(v, -Av) ∈ ker_{Z^{d+l}}(A|E) : ‖v‖_1 ≤ k}, both (v_1, -Av_1) and (v_2, -Av_2) can be written as positive integer linear combinations of elements from G_{≤k} that all lie in the same orthants as (v_1, -Av_1) and (v_2, -Av_2), respectively. Put together, both combinations yield a positive integer linear representation of (z, -Az) by elements from G_{≤k} that all lie in the same orthant as (z, -Az). But all the summands (v, -Av) ∈ G_{≤k} in this representation of (z, -Az) fulfill (v, -Av) ⊑ (z, -Az), in contradiction to our initial assumption that no such vector (v, -Av) exists. Thus, (z, -Az) ∈ G_{k+1}. □

Next we define the set of necessary S-vectors, namely those that have to be checked for reducibility with respect to G_{≤k}, as follows:

    S-vectors((v, -Av), (w, -Aw)) :=
        { (v + w, -A(v + w)) }   if v and w lie in the same orthant of R^d
                                 and ‖v + w‖_1 = k + 1,
        ∅                        otherwise.

Now we can employ the following algorithm to find G_{k+1}.


Algorithm 7.3.3 (Algorithm to find G_{k+1})

Input: G_{≤k}
Output: G_{k+1}

    G := ∅
    C := ∪_{f,g ∈ G_{≤k}} S-vectors(f, g)
    for each s ∈ C do
        if normalForm(s, G_{≤k}) ≠ 0 then
            G := G ∪ {s, -s}
    return G

Note that the algorithm returns precisely the set G_{k+1}, instead of a possibly larger superset as may happen with the general Graver basis algorithm presented in Section 1.3. Moreover, Corollary 7.3.2 allows a simple and fast algorithmic test whether normalForm(s, G_{≤k}) ≠ 0.

Lemma 7.3.4 With the above definitions of input set, normalForm, and S-vectors, Algorithm 7.3.3 terminates and returns G_{k+1}.

Proof. First, the algorithm terminates, since only a finite number of vectors is checked for reducibility. By Corollary 7.3.2, every vector in G must belong to G_{k+1}. Therefore, it remains to prove that every vector of G_{k+1} is indeed generated by the algorithm.

Denote by G the set that is returned by Algorithm 7.3.3 and let (z, -Az) ∈ G_{k+1}. Since F ∪ -F = G_1 ⊆ G_{≤k}, we can write (z, -Az) as a positive integer linear combination (z, -Az) = Σ α_i (v_i, -Av_i) for some α_i ∈ Z_{>0} and vectors (v_i, -Av_i) ∈ G_{≤k} ∪ G with v_i ⊑ z for all i. From the set of all such positive integer linear combinations Σ α_i (v_i, -Av_i) choose one such that Σ α_i ‖Av_i‖_1 is minimal. Note that we have Σ α_i ‖Av_i‖_1 ≥ ‖Az‖_1, with equality if and only if Av_i ⊑ Az for all i.

If Σ α_i ‖Av_i‖_1 = ‖Az‖_1, then we get (v_i, -Av_i) ⊑ (z, -Az) for all i. Together with (z, -Az) = Σ α_i (v_i, -Av_i), α_i ∈ Z_{>0}, and (z, -Az) ∈ G_{k+1}, this representation must be trivial, that is, (z, -Az) = 1·(z, -Az) ∈ G_{≤k} ∪ G, and consequently (z, -Az) ∈ G. Hence, assume on the contrary that Σ α_i ‖Av_i‖_1 > ‖Az‖_1 holds.

Then there must exist (v_{i_1}, -Av_{i_1}), (v_{i_2}, -Av_{i_2}) such that (Av_{i_1})^{(m_0)} (Av_{i_2})^{(m_0)} < 0 for some component m_0. Now there are two possible cases.

If ‖v_{i_1} + v_{i_2}‖_1 = k + 1, then we must have (v_{i_1}, -Av_{i_1}) + (v_{i_2}, -Av_{i_2}) = (z, -Az), since (0, 0) is the only vector (v, -Av) ∈ ker_{Z^{d+l}}(A|E) with v = 0. Moreover, since (v_{i_1}, -Av_{i_1}) + (v_{i_2}, -Av_{i_2}) ∈ S-vectors((v_{i_1}, -Av_{i_1}), (v_{i_2}, -Av_{i_2})), the vector (z, -Az) was considered during the run of Algorithm 7.3.3 and thus was added to G, since (z, -Az) ∈ G_{k+1} cannot be reduced by G_{≤k}.

If, on the contrary, ‖v_{i_1} + v_{i_2}‖_1 ≤ k, then there is a positive integer linear combination (v_{i_1}, -Av_{i_1}) + (v_{i_2}, -Av_{i_2}) = Σ β_j (v'_j, -Av'_j) for some positive integers β_j and some (v'_j, -Av'_j) ∈ G_{≤k}, by the positive sum property of G_{≤k} with respect to the set {(v, -Av) ∈ ker_{Z^{d+l}}(A|E) : ‖v‖_1 ≤ k} (Lemma 7.3.1).

Moreover, β_j (v'_j, -Av'_j) ⊑ (v_{i_1}, -Av_{i_1}) + (v_{i_2}, -Av_{i_2}) for all j, implying that v'_j ⊑ z and that for each component m

    Σ_j β_j |(Av'_j)^{(m)}| = |Σ_j β_j (Av'_j)^{(m)}| = |(A(v_{i_1} + v_{i_2}))^{(m)}| ≤ |(Av_{i_1})^{(m)}| + |(Av_{i_2})^{(m)}|

holds, where the last inequality is strict for m = m_0 by construction. Summation over m now gives

    Σ_j β_j ‖Av'_j‖_1 = ‖A(v_{i_1} + v_{i_2})‖_1 < ‖Av_{i_1}‖_1 + ‖Av_{i_2}‖_1.

But then (z, -Az) = Σ_j β_j (v'_j, -Av'_j) + (α_{i_1} - 1)(v_{i_1}, -Av_{i_1}) + (α_{i_2} - 1)(v_{i_2}, -Av_{i_2}) + Σ_{i ≠ i_1, i_2} α_i (v_i, -Av_i) contradicts the minimality of Σ α_i ‖Av_i‖_1. We conclude that Σ α_i ‖Av_i‖_1 = ‖Az‖_1 must already hold, and the claim follows. □

It remains to find some k such that G_IP(A|E) = G_{≤k}.

Lemma 7.3.5 Let k ∈ Z_{>0} satisfy G_{k+1} = ... = G_{2k} = ∅. Then G_IP(A|E) = G_{≤k}.

Proof. Since F ⊆ G_{≤k}, the set G_{≤k} generates ker_{Z^{d+l}}(A|E) over Z. Moreover, G_{≤k} has the positive sum property with respect to the set {(v, -Av) ∈ ker_{Z^{d+l}}(A|E) : ‖v‖_1 ≤ k} by Lemma 7.3.1. Since G_{k+1} = ... = G_{2k} = ∅, G_{≤k} must have the positive sum property even with respect to the set {(v, -Av) ∈ ker_{Z^{d+l}}(A|E) : ‖v‖_1 ≤ 2k}.

This implies that, choosing G_{≤k} as the input set to the completion algorithm that computes IP Graver bases, all S-vectors (v + w, -A(v + w)) satisfy ‖v + w‖_1 ≤ 2k and thus, by the positive sum property of G_{≤k}, reduce to 0 in the algorithm normalForm. Therefore, G_{≤k} is returned by the algorithm and hence contains G_IP(A|E). Since each element of G_{≤k} belongs to G_IP(A|E), the claim is proved. □
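For tiny matrices, the layered computation (Algorithm 7.3.3 together with the stopping rule of Lemma 7.3.5) can be sketched as follows. This is a toy Python illustration under the stated assumptions, not the C implementation in MLP:

```python
from itertools import combinations_with_replacement

def conformal(a, b):
    """a ⊑ b: a lies in the same closed orthant as b and |a_i| <= |b_i|."""
    return all(x * y >= 0 and abs(x) <= abs(y) for x, y in zip(a, b))

def graver_sla(A):
    """Layered 'sla'-style computation of the Graver basis for the
    lattice ker(A|E) generated by (e_i, -Ae_i).  Vectors are stored as
    tuples (v, -Av) in Z^{d+l}."""
    l, d = len(A), len(A[0])

    def lift(v):  # v in Z^d  ->  (v, -Av) in Z^{d+l}
        return tuple(v) + tuple(-sum(A[r][j] * v[j] for j in range(d))
                                for r in range(l))

    G = []                      # G_1 = F ∪ -F
    for i in range(d):
        e = [0] * d
        e[i] = 1
        G.append(lift(e))
        G.append(lift([-c for c in e]))

    kmax = 1                    # largest layer found non-empty so far
    k = 1                       # layers 1..k have been computed
    while k < 2 * kmax:         # Lemma 7.3.5: stop once layers
        new = []                # kmax+1, ..., 2*kmax are all empty
        for f, g in combinations_with_replacement(G, 2):
            s = tuple(x + y for x, y in zip(f, g))
            if (all(x * y >= 0 for x, y in zip(f[:d], g[:d]))  # same orthant in R^d
                    and sum(abs(c) for c in s[:d]) == k + 1    # ||v||_1 = k+1
                    and not any(conformal(h, s) for h in G)    # check, don't reduce
                    and s not in new):
                new.append(s)
        G += new
        k += 1
        if new:
            kmax = k
    return G
```

For A = (1 1), the run builds G_1 = {±(1,0,-1), ±(0,1,-1)}, finds ±(1,-1,0) in layer 2, verifies that layers 3 and 4 are empty, and stops, which agrees with the Graver basis of the matrix (1 1 1).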

The input set to the algorithm that computes all ⊑-minimal solutions in the set {z ∈ Z^d : Az = 0, l ≤ z ≤ u} \ {0}, where l ≤ 0 ≤ u, presented in Section 2.2, was of the form {±(e_1, -Ae_1), ..., ±(e_d, -Ae_d)}. Therefore, the above algorithmic improvement also applies to this computation. It has been implemented in MLP as well.

7.3.2 Computing the Minkowski Sum of Two Sets of Vectors

During the computation of H_∞ we often have to compute the Minkowski sum V_u + V_{u'} = {v + v' : v ∈ V_u, v' ∈ V_{u'}} for two given sets V_u, V_{u'} of vectors. The sets V_u and V_{u'} are stored as lists and thus, when we form all possible combinations v + v', v ∈ V_u, v' ∈ V_{u'}, many of these sums will usually occur several times, although they are needed only once in the definition of V_u + V_{u'}. Clearly, these multiple occurrences of sums should be avoided, and if possible, with a small number of comparisons of vectors.

To avoid these multiple occurrences of sums, MLP employs the following strategy. First, the vectors in both lists V_u and V_{u'} are sorted with respect to increasing lexicographic ordering. The idea is to construct the vectors in V_u + V_{u'} with respect to increasing lexicographic ordering as well, starting with the smallest element. Let V_u := {v_1, ..., v_p} and V_{u'} := {v'_1, ..., v'_q} denote the ordered lists, and let C := {1, ..., p} × {1, ..., q} be the set of all those ordered pairs (i, j) of indices such that the sum v_i + v'_j has not been considered so far.

Note that if (i_1, j_1) ≤ (i_2, j_2) componentwise, then v_{i_1} + v'_{j_1} is lexicographically smaller than (or at most equal to) v_{i_2} + v'_{j_2}. Thus, the index pair (i, j) of the next lexicographically smallest sum v_i + v'_j that is to be added to V_u + V_{u'} must be minimal in C with respect to ≤ on Z^2_+.

Thus, the lexicographically smallest element in V_u + V_{u'} is v_1 + v'_1. The second smallest element is either v_1 + v'_2 or v_2 + v'_1, since (1, 2) and (2, 1) are the only two minimal pairs in {1, ..., p} × {1, ..., q} \ {(1, 1)}. Therefore, we can employ the following algorithm to find V_u + V_{u'}.

in order to �nd Vu + Vu0.

Algorithm 7.3.6 (Algorithm to �nd Vu + Vu0)

Input: Vu, Vu0 lexicographically ordered, smallest element �rst

Output: Vu + Vu0 lexicographically ordered, smallest element �rst

G := ;

C := f1; : : : ; pg � f1; : : : ; qg

t := 0

Page 119: On the Decomp osition of T est SetsLP case. 68 3.5 Simple Recourse. 69 3.5.1 IP case. 70 3.5.2 LP case. 72 3.6 Computations. 73 3.7 Conclusions. 74 4 Decomp osition of T est Sets in

Chapter 7. Implementational Details 119

while C 6= ; do

M := fvi + v0j : (i; j) is a �-minimal pair in Cg

Choose s = vi + v0j, a lexicographically minimal vector in M .

C := C n f(i; j)g

if s 6= t then

G := G [ fsg

t := s

return G.

Lemma 7.3.7 Algorithm 7.3.6 terminates and returns V_u + V_{u'}, lexicographically ordered, smallest element first.

Proof. Termination of the algorithm is clear, since each of the pq sums is considered exactly once. Moreover, it is clear that the returned set G contains V_u + V_{u'}, since the only vectors s not added to G are those that already occur in G as its currently last element t. Thus, it remains to show that G is lexicographically ordered, smallest element first.

When a sum v_{i_0} + v'_{j_0} was chosen in the run of Algorithm 7.3.6, v_{i_0} + v'_{j_0} was the lexicographically smallest element in the set {v_i + v'_j : (i, j) ∈ C}. To see this, remember that a lexicographically smallest element in the set {v_i + v'_j : (i, j) ∈ C} must have a pair of indices that is ≤-minimal in C. Thus, each lexicographically smallest element in the set {v_i + v'_j : (i, j) ∈ C} must belong to M. But v_{i_0} + v'_{j_0} was a lexicographically smallest element in M, and thus v_{i_0} + v'_{j_0} is lexicographically smaller than (or at most equal to) all elements in the set {v_i + v'_j : (i, j) ∈ C}. This implies that G is lexicographically ordered, smallest element first. □

Note that since G is ordered, Algorithm 7.3.6 returns a list G that does not contain any element twice, as desired.
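One way to realize this merge, sketched below in Python, keeps candidate index pairs in a priority queue; pushing (i+1, j) and (i, j+1) only after (i, j) was popped mirrors the frontier of ≤-minimal pairs in C. The heap-based realization is our illustrative choice, not necessarily what MLP does internally:

```python
import heapq

def minkowski_sorted(V, W):
    """Sketch of Algorithm 7.3.6: V and W are lists of integer tuples,
    each sorted lexicographically (smallest first); returns V + W sorted
    and without duplicates.  Equal sums pop from the heap consecutively,
    so the 'if s != t' test of the algorithm removes duplicates."""
    if not V or not W:
        return []

    def s(i, j):
        return tuple(a + b for a, b in zip(V[i], W[j]))

    heap = [(s(0, 0), 0, 0)]
    pushed = {(0, 0)}
    out, last = [], None
    while heap:
        cur, i, j = heapq.heappop(heap)
        if cur != last:
            out.append(cur)
            last = cur
        # successors of (i, j) in the componentwise order enter the frontier
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(V) and nj < len(W) and (ni, nj) not in pushed:
                pushed.add((ni, nj))
                heapq.heappush(heap, (s(ni, nj), ni, nj))
    return out
```

Correctness rests on the observation from the text: the lexicographic order is compatible with addition, so a componentwise larger index pair can never yield a lexicographically smaller sum.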

7.3.3 Properties and Structure of H_∞

Given a pair (u, V_u) ∈ H_∞ in the two-stage stochastic integer case, what can we say about its structure?

Lemma 7.3.8 (Structure of V_0)
Let (0, V_0) ∈ H_∞. Then G_IP(W) ⊆ V_0 ⊆ G_IP(W) ∪ {0}.

Proof. For arbitrary N ∈ Z_+, any Graver basis element z = (0, v_1, ..., v_N) ∈ G_N contains exactly one non-zero building block v_i. If, on the contrary, v_i and v_j were both non-zero, then we could construct a non-zero vector z' ∈ ker_{Z^{d_N}}(A_N) with z' ⊑ z and z' ≠ z by replacing v_i by 0 in z. This contradicts the ⊑-minimality of the Graver basis element z. Since there is only one non-zero building block v_i in z, this building block has to be ⊑-minimal in ker_{Z^d}(W) \ {0}, that is, v_i belongs to the Graver basis of W. □

Thus, we can immediately construct V_0 from the IP Graver basis of W.

Lemma 7.3.9 (Structure of V_u)
For any (u, V_u) ∈ H_∞, u ≠ 0, the set V_u coincides with the set of all ⊑-minimal solutions v of Wv = -Tu, where a solution v is called ⊑-minimal if no other solution v' of Wv' = -Tu with v' ⊑ v exists.

Proof. For arbitrary N, any Graver basis element z = (u, v_1, ..., v_N) can contain only ⊑-minimal solutions v of Wv = -Tu as building blocks v_i, i = 1, ..., N. If v_j were not such a ⊑-minimal solution, then there would exist some v'_j ≠ v_j with Wv'_j = -Tu and v'_j ⊑ v_j. Thus, replacing in z the building block v_j by v'_j, we obtain a non-zero vector z' ∈ ker_{Z^{d_N}}(A_N) satisfying z' ⊑ z and z' ≠ z, which contradicts the ⊑-minimality of the Graver basis element z. The set M(u) of all ⊑-minimal solutions of Wv = -Tu, however, is finite. To see this, apply the Gordan-Dickson Lemma to the set {(v^+, v^-) : v ∈ M(u)}.

On the other hand, choosing N sufficiently large and forming z = (u, v_1, ..., v_N) such that each of the finitely many ⊑-minimal solutions of Wv = -Tu appears as some building block v_i, the vector z has to belong to G_N. Thus, H_∞ indeed contains all these ⊑-minimal solutions. The claim now follows. □

Thus, for given u, we can immediately compute the final set V_u as it should appear in H_∞. Replacing the set V_u by the set of all ⊑-minimal solutions of Wv = -Tu before adding (u, V_u) to G in the completion algorithm to compute H_∞ for two-stage stochastic integer programs, Section 3.3, speeds up the computation tremendously: no further pair (u, V_u) with some other V_u will be added to G.

7.3.4 Minimal Integer Solutions to Az = b

To compute all ⊑-minimal integer solutions of Az = b we employ the Completion Algorithm 1.3.1. To this end, we have to specify the input set, the reduction needed in the normalForm algorithm, and the set of S-vectors.

As input set we choose G_IP(A) ∪ {z_0}, where we may choose any integer solution z_0 of Az = b. Moreover, if g ⊑ s and g ∉ G_IP(A), then we say that s ∈ Z^d reduces by g ∈ Z^d to 0. Finally, we define

    S-vectors(v, w) :=
        { v + w }   if v ∈ G and w ∈ G_IP(A),
        ∅           otherwise.

Lemma 7.3.10 With the above definitions of input set, normalForm, and S-vectors, Algorithm 1.3.1 terminates and returns a set containing all ⊑-minimal integer solutions of Az = b.

The set of all minimal integer solutions to Az = b is the set of all those elements z in G which are irreducible with respect to G \ {z}.

Proof. Termination follows by applying the Gordan-Dickson Lemma to the sequence {(g^+, g^-) : g ∈ G \ G_IP(A)}.

To see that every minimal integer solution z_1 to Az = b is contained in the returned set G, note that z_1 = z + Σ α_i g_i for some z ∈ G \ G_IP(A), some positive integers α_i, and some vectors g_i ∈ G_IP(A) with g_i ⊑ z_1 - z, by the positive sum property of G_IP(A). Now choose z ∈ G \ G_IP(A) such that Σ α_i ‖g_i‖_1 is minimal. By the same rewriting technique as in the correctness proof of the integer version of Algorithm 1.3.1, one can prove that this sum must be zero, implying that z_1 = z ∈ G \ G_IP(A). □
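On tiny instances, the set described by Lemma 7.3.10 can be checked against direct enumeration. The following Python sketch searches a finite box; the box bound is an assumption of the sketch (a solution that is minimal inside the box need not be minimal in all of Z^d), so this is an illustration, not the completion procedure itself:

```python
from itertools import product

def vmin_solutions(A, b, box):
    """Brute-force illustration: all ⊑-minimal integer solutions of
    Az = b with |z_i| <= box."""
    l, d = len(A), len(A[0])
    sols = [z for z in product(range(-box, box + 1), repeat=d)
            if all(sum(A[r][j] * z[j] for j in range(d)) == b[r]
                   for r in range(l))]

    def sqsub(u, v):  # u ⊑ v: same orthant and componentwise |u_i| <= |v_i|
        return all(x * y >= 0 and abs(x) <= abs(y) for x, y in zip(u, v))

    return [z for z in sols
            if not any(w != z and sqsub(w, z) for w in sols)]
```

For A = (1 1) and b = 2, the solutions (-1, 3) and (3, -1) are reduced by (0, 2) and (2, 0), respectively, leaving the three pairwise ⊑-incomparable solutions (0, 2), (1, 1), (2, 0).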


Notation

Z                integer numbers
Q                rational numbers
R                real numbers
X_+              {x ∈ X : x ≥ 0}
X_{>0}           {x ∈ X : x > 0}
ker_X(A)         {x ∈ X : Ax = 0}
(P)_{c,b}        min{c^T z : Az = b, z ∈ X_+}
(IP)_{c,b}       (P)_{c,b} where X = Z^d
(LP)_{c,b}       (P)_{c,b} where X = R^d
(MOD)_{c,b,b̄}    min{c^T z : Az = b, Āz ≡ b̄ (mod p), z ∈ Z^d_+}
v^{(i)}          i-th component of v
‖v‖_1            l_1-norm Σ_{i=1}^d |v^{(i)}|
supp(v)          support {i : v^{(i)} ≠ 0}
(v^+)^{(i)}      max(v^{(i)}, 0)
(v^-)^{(i)}      max(-v^{(i)}, 0)
x^v              x_1^{v^{(1)}} · ... · x_d^{v^{(d)}}
e_1, ..., e_d    unit vectors in R^d
v ≤ w, v, w ∈ R^d_+    v^{(i)} ≤ w^{(i)} for i = 1, ..., d
v ⊑ w, v, w ∈ R^d      (v^+, v^-) ≤ (w^+, w^-)
P_{A,b}          {z : Az = b, z ∈ R^d_+}
P^I_{A,b}        {z : Az = b, z ∈ Z^d_+}


Conclusions

In this thesis we dealt with the solution of (mixed-integer) linear programs (P)_{c,b}
via a simple augmentation algorithm that relies on an oracle providing improving
directions to all non-optimal feasible solutions. Graver test sets provide such an oracle
for arbitrary choices of the cost function c and of the right-hand side b. They depend
only on the problem matrix and contain improving directions to all non-optimal
feasible solutions to (P)_{c,b}. This enormous amount of stored information, however,
leads to huge test sets already for small problems with fewer than 100 variables, say.

From a practical point of view, however, we are usually not interested in a test
set as such. Instead, we want to find improving directions to non-optimal solutions, or we
seek a proof that the current solution is optimal. Test sets help with this problem,
but they are too big. Therefore, we have constructed sets of building blocks and
sets of connection vectors, which are usually much smaller than the test set they
correspond to. Nonetheless, an improving vector can still be efficiently constructed
from the vectors in these smaller sets. Thus, larger problems can be solved via the
Augmentation Algorithm 0.0.2 if the improving directions are provided via building
blocks or via connection sets.
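The augmentation scheme itself is simple. The following sketch is ours, not the thesis implementation: the oracle is abstracted away, and a toy oracle is driven by the Graver test set {(1, −1), (−1, 1)} of the 1×2 matrix A = (1 1).

```python
def augment(c, z0, improving_direction):
    """Primal augmentation: improve a feasible solution until the
    oracle certifies optimality by returning None."""
    z = tuple(z0)
    while True:
        t = improving_direction(c, z)
        if t is None:
            return z  # no improving direction exists: z is optimal
        # apply the improving step (greedy step length 1 for simplicity)
        z = tuple(zi + ti for zi, ti in zip(z, t))

def toy_oracle(c, z):
    """Improving directions for {z in Z^2_+ : z(1) + z(2) = const},
    taken from the Graver test set of A = (1 1)."""
    for t in [(1, -1), (-1, 1)]:
        improving = sum(ci * ti for ci, ti in zip(c, t)) < 0
        feasible = all(zi + ti >= 0 for zi, ti in zip(z, t))
        if improving and feasible:
            return t
    return None
```

Minimizing c = (0, 1) from the feasible point z0 = (0, 4) walks along (1, −1) to the optimum (4, 0).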

Connection sets and building blocks encode the full test set much more efficiently
than a plain listing of its vectors. Since less information is computed off-line, before the
specific problem data are known, more computational effort is moved to the online
part of the solution of the optimization problem, when c, b, and a feasible solution z₀
are finally given.

It is our belief that even more compact representations of the information stored in a test
set will further drastically improve the performance of the Augmentation Algorithm
0.0.2, where the improving directions are reconstructed from these compact representations.
To this end, one might think of a further decomposition of building blocks
into even smaller building blocks, of other types of decomposition, or of exploiting the
(still fairly unknown) structure of test sets.

In this thesis we have presented a novel decomposition approach in two-stage stochastic
(mixed-integer) linear programming that, to the best of our knowledge, has no
counterpart in the existing stochastic programming literature. For one part,
this decomposition led to an interesting finiteness result (finiteness of H∞). Moreover,
using this approach we could solve large, albeit academic, examples much faster
than other existing (specialized) computer codes. This supports the above hope
that test set methods will eventually solve problems of practical size and interest.
Most probably, however, they will display their full power when successfully combined
with existing algorithmic ideas from mixed-integer linear programming.

In Chapters 1 and 2 we laid the necessary notational and algorithmic basis for the later
decomposition approach. We presented the positive sum property inherent to
Graver bases, a simple property that already implies the universal test set property.
More importantly, however, this property drew our attention to the notion of a
completion procedure. This algorithmic pattern was then employed throughout the
thesis, not only to compute LP, IP, and MIP Graver test sets but also to compute
building blocks and connection sets.

The computation of H∞ and of connection sets would strongly benefit from fast algorithmic
solutions to Problems 6.1.1 and 6.1.2 in Chapter 6. From a theoretical and
structural point of view, however, we think that the challenging combinatorial
Problem 6.2.4 is even more interesting, since its solution would imply finiteness of H∞ also for
multi-stage stochastic integer linear programs.

Some of the algorithms presented in this thesis for the IP case were implemented in
the computer program MLP. Computational experience showed that these
algorithms work fine for problem matrices arising in combinatorial algebra. In integer
programming, however, many improvements remain to be made before a primal
solution of practical problems via test set methods becomes possible. We think that the
decomposition approach presented in this thesis, in particular the one for two-stage stochastic
programs, already indicates that this practical applicability of test set methods may
be achieved in the future. In this respect, a combination of test set ideas with other
algorithmic ideas from (mixed-)integer linear programming should be established and
exploited.


Declaration

I, Raymond Hemmecke, hereby declare that I have written this thesis without
improper assistance from third parties and without the use of aids other than those
indicated; ideas taken directly or indirectly from other sources are marked as such.
This thesis has not been submitted, in the same or a similar form, to any other
examination authority in Germany or abroad.

Duisburg, 22 May 2001                                    Raymond Hemmecke
