
Chapter 2

MULTIDIMENSIONAL KNAPSACK PROBLEM¹

¹ A part of this chapter has been published as "A new polynomial time algorithm for 0-1 multiple knapsack problem based on dominant principles" in Applied Mathematics and Computation, Vol. 202, 71-77, 2008.


2.1 Introduction

The zero-one multidimensional knapsack problem (MKP) has applications in many fields; consider, for example, an economic setting: a set of projects (variables) j = 1, 2, ..., n and a set of resources (constraints) i = 1, 2, ..., m. Each project j is assigned a profit c_j and resource consumption values a_ij. The problem is to find a subset of projects that yields the highest possible total profit without exceeding the given resource limits b_i.

The MKP is a well-known NP-hard combinatorial optimization problem [73], which can be formulated as follows:

Maximize

Z = f(x_1, x_2, ..., x_n) = Σ_{j=1}^{n} c_j x_j    (2.1)

subject to the constraints

Σ_{j=1}^{n} a_ij x_j ≤ b_i,  i = 1, 2, ..., m    (2.2)

x_j ∈ {0, 1},  j = 1, 2, ..., n    (2.3)

c_j > 0,  a_ij ≥ 0,  b_i ≥ 0    (2.4)

The objective function Z = f(x_1, x_2, ..., x_n) is to be maximized subject to the constraints given by (2.2). In an MKP the variables x_j can take only the two values 0 and 1, and it is necessary that the a_ij are nonnegative. This necessary condition paves the way for better heuristics to obtain optimal or near-optimal solutions.
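The model (2.1)-(2.4) can be checked mechanically. A minimal sketch in Python (function names and the tiny two-project, two-resource instance are illustrative, not from the thesis):

```python
# Evaluate the MKP objective (2.1) and constraints (2.2) for a 0-1 vector x.
def mkp_value(c, x):
    """Objective (2.1): Z = sum_j c_j x_j."""
    return sum(cj * xj for cj, xj in zip(c, x))

def mkp_feasible(A, b, x):
    """Constraints (2.2): sum_j a_ij x_j <= b_i for every resource i."""
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))

# Two projects, two resources: selecting both would exceed the second resource.
c = [10, 7]
A = [[3, 2],
     [4, 4]]
b = [6, 5]
```

Here x = [1, 0] is feasible with profit 10, while x = [1, 1] violates the second resource limit (4 + 4 > 5).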

The popularity of knapsack problems stems from the fact that they have attracted researchers from both camps: theoreticians as well as practitioners. Theoreticians [144] enjoy the fact that these simply structured problems can be used as subproblems to solve more complicated ones. Practitioners, on the other hand, enjoy the fact that these problems can model many industrial applications such as cutting stock, cargo loading, and processor allocation in distributed systems.

The special case of the MKP with m = 1 is the classical knapsack problem (01KP), whose usual statement is the following: given a knapsack of capacity b and n objects, each with an associated profit and volume, select k objects (k ≤ n, k not fixed) such that the total profit is maximized and the capacity of the knapsack is not exceeded. It is well known that 01KP is not strongly NP-hard, since there are fully polynomial-time approximation schemes to solve it; this is not the case for the general MKP. Various algorithms to obtain exact solutions of such problems have been designed and are well documented in the literature [10, 28, 75, 76, 77, 78, 84, 183, 201].
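One reason the m = 1 case is easier is that its LP relaxation is solved greedily by profit-to-weight ratio (the classical Dantzig bound). A small illustrative sketch, not part of the thesis algorithm:

```python
# Greedy LP-relaxation (Dantzig) upper bound for the single-constraint 01KP:
# sort by profit/weight ratio, pack greedily, then take a fractional piece.
def dantzig_upper_bound(profits, weights, capacity):
    order = sorted(range(len(profits)),
                   key=lambda j: profits[j] / weights[j], reverse=True)
    bound, remaining = 0.0, capacity
    for j in order:
        if weights[j] <= remaining:
            bound += profits[j]          # item fits entirely
            remaining -= weights[j]
        else:
            # fractional part of the first item that does not fit
            bound += profits[j] * remaining / weights[j]
            break
    return bound
```

For profits (60, 100, 120), weights (10, 20, 30) and capacity 50, the bound is 60 + 100 + (20/30)·120 = 240.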

This chapter is organized as follows. Bounds on the objective function of the MKP are derived in section 2.2. Previous work pertaining to this problem is surveyed in section 2.3. The dominance principle of the intercept matrix, the proposed algorithm, other issues pertaining to solving the MKP, the worst-case bound, a probabilistic performance analysis, and the computational complexity of DPH are explained in section 2.4. The results obtained by DPH for all the test and benchmark problems are furnished in section 2.5, where the performance of the proposed heuristic is analysed by comparing its results with those of other MKP heuristics; salient features of the algorithm are also presented there. Concluding remarks are given in section 2.6.

2.2 Bounds

In this section, we present a "reduced costs constraint" based upper bound of the MKP.

Consider the linear relaxation of the MKP:

Z(P) = max Σ_{j=1}^{n} c_j x_j

subject to Σ_{j=1}^{n} a_ij x_j ≤ b_i, i = 1, 2, ..., m;  0 ≤ x_j ≤ 1, j = 1, ..., n.

Solving this linear program provides an upper bound (UB) which is much better than the one obtained by bound-consistency propagation of all the constraints (Σ_{j=1}^{n} c_j in this case), and it also provides the so-called reduced costs.

Let (c¹, u) be the reduced costs associated with (x, s); then

Z(P_eq) = max UB + Σ_{j∈N⁻} c¹_j x_j − Σ_{j∈N⁺} c¹_j (1 − x_j) + Σ_{i∈M} u_i s_i    (2.5)

Ax + Ss = b    (2.6)

0 ≤ x ≤ 1,  s ≥ 0    (2.7)

where A, S, b depend on the basis associated with (x, s), and


N⁻ = {j ∈ N | x_j is a nonbasic variable at its lower bound}

N⁺ = {j ∈ N | x_j is a nonbasic variable at its upper bound}

Actually, the equivalence remains valid if the integrality constraints x_j ∈ {0, 1}, j = 1, ..., n, and s_i ∈ ℕ, i = 1, ..., m, are maintained. Hence the MKP can be written as

Z(P_eq) = max UB + Σ_{j∈N⁻} c¹_j x_j − Σ_{j∈N⁺} c¹_j (1 − x_j) + Σ_{i∈M} u_i s_i    (2.8)

Ax + Ss = b    (2.9)

x_j ∈ {0, 1}, j = 1, ..., n;  s_i ∈ ℕ, i = 1, ..., m    (2.10)

Nevertheless, since a feasible integer solution is known, we are interested in finding only better solutions, that is, in finding feasible solutions satisfying the following "reduced costs constraint":

UB + Σ_{j∈N⁻} c¹_j x_j − Σ_{j∈N⁺} c¹_j (1 − x_j) + Σ_{i∈M} u_i s_i ≥ LB    (2.11)

or, equivalently,

−Σ_{j∈N⁻} c¹_j x_j + Σ_{j∈N⁺} c¹_j (1 − x_j) − Σ_{i∈M} u_i s_i ≤ UB − LB    (2.12)

where LB is the lower bound obtained by the heuristic (or the best known solution).


2.3 Previous Work

Exact and heuristic algorithms have been developed for the MKP, as for many NP-hard combinatorial optimization problems.

2.3.1 Exact Algorithms

Existing exact algorithms are essentially based on the branch and bound method, dynamic programming, and MKP relaxation techniques such as Lagrangian, surrogate and composite relaxations [10, 28, 75, 76, 77, 78, 84, 183, 201].

The Balas’s [10] algorithm is a systematically designed one that begins

with a null solution which assigns successively certain variables to 1 in such way

that after testing part of all the 2n combinations, we obtain either an optimal

solution or a proof that no feasible solution exists. The algorithms by Gilmore

and Gomory [76], Weingartner and Ness[201] use dynamic programming approach.

Two problems have been solved by dynamic programming method ( sizes being

n=28, m=2) and (n=105, m=2) using in a forward and backward approach. Shih

[183] proposed a branch and bound algorithm. In this method, an upper bound

is obtained by computing the linear relaxation upper bound of each of the m sin-

gle constraint knapsack problems separately and selecting the minimum objective

function value among those as the upper bound. Computational results showed

that this algorithm performed better than the general zero-one additive algorithm

of Balas[10].

Better algorithms using tighter upper bounds, obtained with other MKP relaxation techniques such as Lagrangian, surrogate and composite relaxations, were developed by Gavish and Pirkul [75]. Their algorithm was compared with Shih's method [183] and was found to be faster by at least one order of magnitude. Osorio et al. [160] used surrogate analysis and constraint pairing to fix some variables to zero and to separate the remaining variables into groups (those that tend to zero and those that tend to one). They use an initial feasible integer solution to generate logical cuts, based on this analysis, at the root of a branch and bound tree. Due to their exponential time complexity, exact algorithms are limited to small instances (n = 200 and m = 5).

2.3.2 Heuristic Algorithms

Heuristic algorithms are designed to produce near-optimal solutions for larger problem instances. The first heuristic approaches for the MKP were largely greedy methods. These algorithms construct a solution by adding, according to a greedy criterion, one object at a time to the current solution without violating the constraints.

The second class of heuristics is based on linear programming, solving various relaxations of the MKP. Balas and Martin [11] introduced a heuristic algorithm for the MKP which utilizes linear programming (LP) by relaxing the integrality constraints x_j ∈ {0, 1} to 0 ≤ x_j ≤ 1. Linear programming problems are not NP-hard and can be solved efficiently, e.g. with the well-known simplex algorithm. The fractional x_j are then set to 0 or 1 according to a heuristic which maintains feasibility. In the last decade, several algorithms based on metaheuristics have been developed, including particle swarm optimization [97], simulated annealing [60], tabu search [81, 92, 196] and genetic algorithms [41, 52, 122]. More examples of heuristic algorithms for the MKP can be found in [139, 144, 165, 197]. Sahni [177] presented a sequence of ε-approximate algorithms together with time estimates for them. A comprehensive review of exact and heuristic algorithms is given in [40, 41, 72]. In this chapter, a heuristic algorithm based on the dominance principle of the intercept matrix is proposed to solve the MKP.

2.4 Dominance Principle based heuristic

The MKP is a well-known zero-one integer programming problem in which many variables take the value zero; these are called redundant variables. We use the intercept matrix of the constraints (2.2) to identify the variables of value 1 and 0.

We construct the intercept matrix by dividing the b_i values by the coefficients of (2.2). The elements of the intercept matrix are arranged in decreasing order, and the leading element identifies the dominant variable. The process of maximizing the profit by using successive leading elements of the intercept matrix is known as the dominance principle. We use the dominant variable to improve the current feasible solution, and this procedure provides an optimal or near-optimal solution of the MKP. The dominance principle focuses on the resource with the lower requirement, which comes forward to maximize the profit. The intercept matrix of the constraints (2.2) plays a vital role in achieving this goal in a heuristic manner.

The pertinent steps, which are the essential points of the algorithm, are as follows. In step-(a) the solution vector is initialized with zero values for all the unknowns. In step-(b) we construct the intercept matrix, and redundant variables are identified through steps (c) and (d). The values corresponding to the column minima (e_ij) are multiplied by the corresponding cost coefficients (c_j) and the maximum among these products is chosen, i.e. max_k e_ik c_k; the corresponding x_k assumes the value 1. Next we update the availability (the right-hand-side column vector) using the relations b_i = b_i − a_ik for all i, and the coefficients of the constraints by replacing a_ik by 0 for all i. This process is repeated until the coefficient matrix becomes the null matrix. The values of the updated variables x_k are substituted into the objective function to obtain its value. The process is repeated at most n times (step-(d) to step-(g) of the DPH algorithm).

2.4.1 DPH Algorithm

Step-(a) Initialize the solution by assigning 0 to all x_j.

Step-(b) Intercept matrix D: d_ij = b_i/a_ij if a_ij > 0; d_ij = M (a large value) otherwise.

Step-(c) Identify zero-valued (redundant) variables: if any column of D contains an entry < 1, the corresponding variable is identified as a redundant variable.

Step-(d) Dominant variable: identify the smallest element (dominant variable) in each column of D.

Step-(e) Multiply the smallest elements by the corresponding cost coefficients. If the product is maximum in the kth column, set x_k = 1 and update the objective function value f(x_1, x_2, ..., x_n).

Step-(f) Update the constraint matrix: b_i = b_i − a_ik for all i, and set a_ik = 0 for all i.

Step-(g) If a_ij = 0 for all i and j, display the solution; otherwise go to step-(d).

ALGORITHM 2.1 DPH Algorithm for MKP
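Steps (a)-(g) above can be sketched in Python as follows. This is a minimal transcription, assuming dense lists for c, A and b; the function name and data layout are illustrative, and tie-breaking may differ from the thesis implementation:

```python
# Dominance-principle heuristic (DPH), steps (a)-(g), as a plain-Python sketch.
BIG = float("inf")

def dph(c, A, b):
    m, n = len(A), len(c)
    b = list(b)                        # step (a): working copies; x starts at 0
    A = [list(row) for row in A]
    x = [0] * n
    active = set(range(n))             # columns not yet fixed to 0 or 1
    while active:
        # steps (b)-(c): intercept d_ij = b_i / a_ij; a column with any
        # intercept < 1 (i.e. a_ij > b_i) cannot be packed -> variable stays 0
        redundant = {j for j in active
                     if any(A[i][j] > b[i] for i in range(m))}
        active -= redundant
        if not active:
            break
        def smallest_intercept(j):     # step (d): column minimum of D
            vals = [b[i] / A[i][j] for i in range(m) if A[i][j] > 0]
            return min(vals) if vals else BIG
        # step (e): pick k maximizing c_k times its smallest intercept
        k = max(active, key=lambda j: c[j] * smallest_intercept(j))
        x[k] = 1
        active.remove(k)
        for i in range(m):             # step (f): consume resources, zero column k
            b[i] -= A[i][k]
            A[i][k] = 0
    return x, sum(cj * xj for cj, xj in zip(c, x))
```

On a tiny made-up instance, dph([10, 7, 4], [[3, 2, 2], [4, 4, 1]], [6, 5]) selects items 1 and 3 for a value of 14 (item 2 becomes redundant after the first pick).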

In the next subsections the worst-case bound of DPH and a probabilistic analysis of DPH's ability to attain the optimal solution of the MKP are presented.


2.4.2 Worst-case bound

Despite its simplicity, a number of heuristics and worst-case results have been obtained for the knapsack problem. These heuristics and results are highly representative of the more general research on worst-case performance of heuristics, and they will be used to introduce the fundamental concepts of worst-case analysis of heuristics and to illustrate many of the various types of results that have been obtained.

The basic rules of a worst-case analysis are simple. The analysis is always conducted with respect to a well-defined heuristic and a set of problem instances. Let P denote the set of problem instances, I ∈ P a particular problem instance, cx* the optimal value of problem I, and cx̄ the value obtained when heuristic H is applied to problem I.

Worst-case results establish a bound on how far cx̄ can deviate from cx*. For simplicity, assume cx* ≥ 0. Then for maximization problems, such a result would prove, for some real number r ≤ 1, that

cx̄ ≥ r cx*  for all I ∈ P    (2.13)

For minimization problems we wish to prove, for some r ≥ 1, that

cx̄ ≤ r cx*  for all I ∈ P    (2.14)

In analyzing a heuristic we would like to find the largest r for which (2.13) holds, or the smallest r for which (2.14) holds. This value is called the worst-case performance ratio. We know that a given r is the best possible if we can find a single problem instance I for which cx̄ = r cx*, or an infinite sequence of instances for which cx̄/cx* approaches r.


One can equivalently express heuristic performance in terms of worst-case relative error. The relative error ε is defined in terms of the performance ratio r by ε = 1 − r for maximization problems and ε = r − 1 for minimization problems. Both r and ε are used frequently, and the choice between the two measures is largely a matter of taste. The worst-case performance ratio r will be used in this section.

Dobson [53] established tight bounds on the worst-case behaviour of a dual greedy algorithm for the minimization case. Starting from the all-ones solution and using the weights w = (1, 1, ..., 1), items are deleted in order of increasing ratio c_j / Σ_{i=1}^{m} a_ij, and coefficient reduction is applied in the remaining columns as soon as a coefficient a_ij becomes greater than the updated constraint level b_i for any i. Since the MKP can easily be transformed into a set covering problem min {cx | Ax ≥ b, x ∈ {0, 1}^n} by complementing the variables x_j ← 1 − x_j, Dobson's [53] greedy algorithm can be applied to the maximization version as well.

Fisher [66] pointed out that this paradoxical dichotomy between minimization and maximization depends strongly on the use of the ratio cx̄/cx* to measure performance. For minimization,

cx̄/cx* ≤ H(d)    (2.15)

Thus the worst-case analysis can be adapted to the maximization case by using another ratio to measure performance. Using the performance measure

P = (cx − cx̄)/(cx − cx*)    (2.16)

where cx = Σ_{j=1}^{n} c_j is a suitable reference value, one obtains that P ≤ H(d), where d = max_{j=1,...,n} Σ_{i=1}^{m} a_ij and H(d) denotes the sum of the first d terms of the harmonic series, H(d) = 1 + 1/2 + ... + 1/d.
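The quantities d and H(d) in the bound are easy to compute. A small sketch, assuming integer constraint coefficients (function names are illustrative):

```python
# d is the largest column sum of A; H(d) is the d-th harmonic number used in
# the worst-case bounds (2.15)/(2.16). Exact rationals avoid rounding issues.
from fractions import Fraction

def harmonic(d):
    return sum(Fraction(1, k) for k in range(1, d + 1))

def column_sum_bound(A):
    m, n = len(A), len(A[0])
    d = max(sum(A[i][j] for i in range(m)) for j in range(n))
    return d, harmonic(d)

# column sums of this A are 1, 3, 4, so d = 4 and H(4) = 25/12
d, hd = column_sum_bound([[1, 2, 1], [0, 1, 3]])
```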


In this section we derive the worst-case bound cx̄/cx* ≤ H(d) for DPH applied to the set covering problem, and convert it to the MKP case, (cx − cx̄)/(cx − cx*) ≤ H(d)/(1 + ε) → H(d), by using (2.15) and (2.16).

Here, two sets of price functions, which will be useful in comparing the value of the heuristic solution with all other feasible solutions, are introduced. First, a set of step functions p_i(s) will represent the prices paid per unit by DPH to satisfy the constraints, so as to cover the interval [0, b_i). In particular, we cover it from right to left, so that the interval remaining to be covered at iteration r is [0, b_i^r).

Intuitively, for each point s in the interval [0, b_i), the unit price paid by DPH to cover this point is

p_i(s) = c_{k_r} b_i^r / a_{i k_r},  if s ∈ [b_i^{r+1}, b_i^r), for r = 1, ..., t,

where k_r is the column chosen at iteration r. That this is indeed the exact price will now be shown.

Lemma 2.4.1. Let x̄ be the solution given by DPH; then

cx̄ ≤ (1/b_min) ∫_{[0,b_i)} p_i(s) ds

Proof.

Σ_{j=1}^{n} c_j x̄_j = Σ_{r=1}^{t} c_{k_r}

= Σ_{r=1}^{t} c_{k_r} b_i^r (b_i^r − b_i^{r+1}) / (b_i^r a_{i k_r})   (since b_i^r − b_i^{r+1} = a_{i k_r})

≤ (1/b_min) Σ_{r=1}^{t} c_{k_r} b_i^r (b_i^r − b_i^{r+1}) / a_{i k_r},   where b_min = min_i {b_i}

= (1/b_min) ∫_{[0,b_i)} p_i(s) ds

Define a unit price function p_ij(s) for each element of the matrix, analogous to the way p_i(s) was defined for each b_i:

p_ij(s) = c_j b_i^r / a^r_ij,  if s ∈ [a^{r+1}_ij, a^r_ij), r = 1, ..., t;

p_ij(s) = p_i(s),  if s ∈ [a¹_ij, b_i).

Here p_ij(s) is nonincreasing for s in [0, b_i). On [0, a¹_ij) this is true since w^r_j = b_i^r / a^r_ij is nonincreasing in r, and on [a¹_ij, b_i) we have p_ij(s) = p_i(s); one can observe that if s ∈ [b_i^{r+1}, b_i^r) and t ∈ [b_i^{r+2}, b_i^{r+1}), then

p_i(s) = c_{k_r}/w^r_{k_r} ≤ c_{k_{r+1}}/w^r_{k_{r+1}} ≤ c_{k_{r+1}}/w^{r+1}_{k_{r+1}} = p_i(t)

so p_i(s) is nonincreasing. Intuitively, p_ij(s) is the price that would be paid to cover the point s in [0, b_i) using column j. Because the heuristic is myopic, we have

Lemma 2.4.2. If s ∈ [0, b_i), then p_i(s) ≤ p_ij(s) for all j.

Proof. Fix j. Let r be the iteration number such that s ∈ [a^{r+1}_ij, a^r_ij). Because a_ij was reduced at iteration r, we must have a^{r+1}_ij = b_i^{r+1}, and clearly a^r_ij ≤ b_i^r; hence s ∈ [b_i^{r+1}, b_i^r). Then

p_i(s) = c_{k_r}/w^r_{k_r} ≤ c_j/w^r_j = p_ij(s)

where the inequality follows from the choice rule.

Theorem 2.4.1. Let f : [0, b) → [0, ∞) be nonincreasing, a ∈ (0, b], S ⊆ [0, b), µ(S) ≥ a. Then

(1/µ(S)) ∫_S f ≤ (1/a) ∫_{[0,a)} f

Proof. Let A = [0, a). Then

(1/µ(S)) ∫_S f = [µ(S∩A)/µ(S)] [(1/µ(S∩A)) ∫_{S∩A} f] + [µ(S−A)/µ(S)] [(1/µ(S−A)) ∫_{S−A} f]

This is a convex combination of two averages. Since f is nonincreasing, the first average is at least as large as the second; thus a convex combination of these averages weighted more heavily on the first term can only be larger:

(1/µ(S)) ∫_S f ≤ [µ(S∩A)/µ(A)] [(1/µ(S∩A)) ∫_{S∩A} f] + [µ(A−S)/µ(A)] [(1/µ(S−A)) ∫_{S−A} f]

Since f is nonincreasing, f on A−S is at least as large as f on S−A; thus

(1/µ(S−A)) ∫_{S−A} f ≤ (1/µ(A−S)) ∫_{A−S} f

After making this replacement in the second term we have

(1/µ(S)) ∫_S f ≤ [µ(A∩S)/µ(A)] [(1/µ(A∩S)) ∫_{S∩A} f] + [µ(A−S)/µ(A)] [(1/µ(A−S)) ∫_{A−S} f]

(1/µ(S)) ∫_S f ≤ (1/µ(A)) ∫_A f

Lemma 2.4.3. (a_ij/b_i) ∫_{[0,b_i)} p_i(s) ds ≤ ∫_{[0,a_ij)} p_ij(s) ds.

Proof. First, since p_i(s) ≤ p_ij(s) for s ∈ [0, b_i) by Lemma 2.4.2, we have

(1/b_i) ∫_{[0,b_i)} p_i(s) ds ≤ (1/b_i) ∫_{[0,b_i)} p_ij(s) ds

Second, recall that p_ij is nonincreasing. Applying Theorem 2.4.1 with S = [0, b_i), a = a_ij and f = p_ij, we obtain

(1/b_i) ∫_{[0,b_i)} p_ij(s) ds ≤ (1/a_ij) ∫_{[0,a_ij)} p_ij(s) ds

Combining the last two inequalities, the result is immediate.

For each j define v_j = min {r | w^{r+1}_j = 0}.

Lemma 2.4.4. (a_ij/b_i) ∫_{[0,b_i)} p_i(s) ds ≤ c_j h_j, where h_j = Σ_{r=1}^{v_j} b_i^r (w^r_j − w^{r+1}_j)/w^r_j.

Proof.

(a_ij/b_i) ∫_{[0,b_i)} p_i(s) ds ≤ (a_ij/w_j) ∫_{[0,w_j)} p_ij(s) ds,  where a_ij = min_r {a^r_ij} and w_j = max_r {w^r_j}

= (a_ij/w_j) Σ_{r=1}^{v_j} (c_j b_i^r / a^r_ij)(w^r_j − w^{r+1}_j)

≤ Σ_{r=1}^{v_j} (a^r_ij/w^r_j)(c_j b_i^r / a^r_ij)(w^r_j − w^{r+1}_j)

= c_j Σ_{r=1}^{v_j} b_i^r (w^r_j − w^{r+1}_j)/w^r_j = c_j h_j


Theorem 2.4.2. Given a set covering problem with positive constraint data and optimal solution x*, if x̄ is the solution given by DPH, then

cx̄/cx* ≤ (b_max/b_min) H(max_{1≤j≤n} w¹_j)

Proof. By Lemma 2.4.1 we have cx̄ ≤ (1/b_min) ∫_{[0,b_i)} p_i(s) ds.

Now let x be any feasible solution of the set covering problem, so Σ_{j=1}^{n} a_ij x_j ≥ b_i for all i, that is, Σ_{j=1}^{n} (a_ij/b_i) x_j ≥ 1. Thus

cx̄ ≤ Σ_{j=1}^{n} x_j (a_ij/b_i) (1/b_min) ∫_{[0,b_i)} p_i(s) ds

and, by Lemma 2.4.4,

cx̄ ≤ (1/b_min) Σ_{j=1}^{n} x_j c_j Σ_{r=1}^{v_j} b_i^r (w^r_j − w^{r+1}_j)/w^r_j

≤ (b_max/b_min) (max_{1≤j≤n} Σ_{r=1}^{v_j} (w^r_j − w^{r+1}_j)/w^r_j) Σ_{j=1}^{n} c_j x_j,  where b_max = max_i {b_i}

= (b_max/b_min) (max_{1≤j≤n} Σ_{r=1}^{v_j} (w^r_j − w^{r+1}_j)/w^r_j) cx

Thus cx̄/cx ≤ (b_max/b_min) max_{1≤j≤n} Σ_{r=1}^{v_j} (w^r_j − w^{r+1}_j)/w^r_j for all feasible x.

We now use the assumption that the a_ij and b_i are positive and b_i ≥ a_ij for all i and j. For each j,

Σ_{r=1}^{v_j} (w^r_j − w^{r+1}_j)/w^r_j = Σ_{r=1}^{v_j} Σ_{w^{r+1}_j < l ≤ w^r_j} 1/w^r_j

≤ Σ_{r=1}^{v_j} Σ_{w^{r+1}_j < l ≤ w^r_j} 1/l = Σ_{1 ≤ l ≤ w¹_j} 1/l = H(w¹_j) ≤ H(max_{1≤j≤n} w¹_j)

Therefore, since the optimal solution x* is feasible, the final bound is

cx̄/cx* ≤ (b_max/b_min) H(max_{1≤j≤n} w¹_j)

Remark: If we start the algorithm with b_i = 1, then the worst-case bound becomes cx̄/cx* ≤ H(max_{1≤j≤n} w¹_j). By (2.15) and (2.16), the corresponding worst-case bound for the maximization version is

(cx − cx̄)/(cx − cx*) ≤ H(max_{1≤j≤n} w¹_j)

To see that the bound is tight, consider the problem:

maximize (1/d) x_1 + (1/(d−1)) x_2 + ... + (1/2) x_{d−1} + (1/1) x_d + (1 + ε) x_{d+1}

subject to

x_1 + x_{d+1} ≤ 1
x_2 + x_{d+1} ≤ 1
...
x_{d−1} + x_{d+1} ≤ 1
x_d + x_{d+1} ≤ 1

where x_j ∈ {0, 1}. The heuristic picks the solution x̄ = (0, ..., 0, 1), whereas the optimal solution is x* = (1, ..., 1, 0), for every ε > 0. In this case, as ε → 0, for minimization cx̄/cx* = H(d), and for maximization

(cx − cx̄)/(cx − cx*) = H(d)/(1 + ε) → H(d)
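The tight instance above can be verified directly by enumeration. A small sketch, assuming a naive profit-greedy pick for the heuristic side (which takes x_{d+1} first, blocking all other items); names are illustrative:

```python
# Verify the tight example: optimum = H(d), and measure (2.16) = H(d)/(1+eps).
from fractions import Fraction
from itertools import product

d = 4
eps = Fraction(1, 100)
# profits 1/d, 1/(d-1), ..., 1/1, then the single item with profit 1 + eps
c = [Fraction(1, d - j) for j in range(d)] + [1 + eps]

def feasible(x):
    # constraints x_j + x_{d+1} <= 1 for j = 1, ..., d
    return not (x[d] == 1 and any(x[:d]))

opt = max(sum(cj * xj for cj, xj in zip(c, x))
          for x in product([0, 1], repeat=d + 1) if feasible(x))

greedy_value = 1 + eps      # profit-greedy takes x_{d+1}, nothing else fits
H = sum(Fraction(1, k) for k in range(1, d + 1))
ratio = (sum(c) - greedy_value) / (sum(c) - opt)   # measure (2.16)
# opt equals H(d), and ratio equals H(d)/(1 + eps), approaching H(d) as eps -> 0
```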


2.4.3 Probabilistic performance analysis

It is possible to examine the probability that a vector x̄ found by DPH defines an optimal solution to the MKP. Before that, we give a sufficient condition for a solution found by DPH to be optimal.

To develop the analysis, let us specify a probabilistic model of the MKP. The profit coefficients c_j, constraint coefficients a_ij and capacities b_i are nonnegative, independent, identically distributed random variables.

Let

p_j = min_i (b_i/a_ij),  j = 1, 2, ..., n,

a_j = {a_ij | a_ij attains min_i (b_i/a_ij)},  i = 1, 2, ..., m,

r̄ = max_j c_j p_j,  r = min_j c_j p_j,  j = 1, 2, ..., n,

MAX = max_j a_j,  MIN = min_j a_j,  j = 1, 2, ..., n.

Lemma 2.4.5. A vector x̄ found by DPH defines an optimal solution to the MKP if r̄ − r < 1/MAX.

Proof. Let x̂ be the optimal solution of the linear relaxation LMKP, with cx̂ ≤ UB, and denote

x̂_f = min_i {(b_i − Σ_{j=1}^{f} a_ij)/a_if}

where f is the smallest j such that Σ_{k=1}^{j} a_ik ≥ b_i (for at least one i), and

d_i = c_i + c_f x̂_f,  if a_i + a_f x̄_f ≤ MAX;  d_i = c_i − c_f (1 − x̂_f),  otherwise.

Then

cx̂ − cx̄ = c_k − d_i = (c_k p_k − d_i p_k)/p_k < (r̄ − r) MAX

since c_k p_k ≤ r̄, d_i p_k > r, and 1/p_k ≤ MAX. Hence

cx̂ − cx̄ ≤ (r̄ − r) MAX < 1

and x̄ defines an optimal solution to the MKP.

Next we define this probability. Fix λ* and |I| (such that λ* ∈ I), and use as our sample space the set of all multidimensional knapsack problems in n variables (where n varies in the interval [|I|, ∞)), whose coefficients a_ij and c_j are random variables drawn from the ranges [a_min, a_max] and [c_min, c_max], respectively, and whose critical ratio c_j b_i/a_ij is λ*. The only assumption we make on the random variables a_ij and c_j is that the pairs (a_j, c_j), j ∈ N, are mutually independent, and that their bivariate distribution function is independent of n.

We define two semi-closed intervals around λ*, namely

R_0 = [λ*, λ* + 1/(2a_max)),

R_1 = [λ* − 1/(2a_max), λ*),

and denote the associated index sets by

J_k = {j ∈ N | c_j b_i/a_ij ∈ R_k},  k = 0, 1.

For a randomly chosen pair (a_j, c_j), and for k = 0, 1, we denote by p_k the probability that j ∈ J_k, i.e., that the ratio c_j b_i/a_ij falls in the interval R_k. Clearly p_k > 0 for k = 0, 1. Then, for k = 0, 1, the probability that among n randomly chosen pairs (c_j, a_j) there are at most |I| pairs whose ratio c_j b_i/a_ij falls in the interval R_k, i.e., the probability that |J_k| ≤ |I|, is given by the cumulative binomial distribution function

B(n, |I|, p_k) = Σ_{h=0}^{|I|} C(n, h) p_k^h (1 − p_k)^{n−h}

Theorem 2.4.3. Let P(n) be the probability that the vector x̄ found by DPH defines an optimal solution to the n-variable knapsack problem (MKP), and denote

Q(n) = 1 − (B(n, |I|, p_0) + B(n, |I|, p_1))

Then P(n) ≥ Q(n) for all n ≥ |I|, and Q(n) is a strictly increasing function of n such that

lim_{n→∞} Q(n) = 1.

Proof. We denote by Pr(α) the probability of the event α. From the definitions of r̄, r, I and J_k, k = 0, 1, it follows that

r̄ ≥ λ* + 1/(2a_max) ⇒ J_0 ⊆ I ⇒ |J_0| ≤ |I|    (2.17)

r ≤ λ* − 1/(2a_max) ⇒ J_1 ⊆ I ⇒ |J_1| ≤ |I|    (2.18)

Therefore, for all n ≥ |I|, we have

P(n) ≥ Pr{r̄ − r < 1/MAX}   (from Lemma 2.4.5)

≥ Pr{r̄ − r < 1/a_max}   (since MAX ≤ a_max)

≥ Pr{r̄ − λ* < 1/(2a_max) and λ* − r < 1/(2a_max)}

= 1 − Pr{r̄ − λ* ≥ 1/(2a_max) or λ* − r ≥ 1/(2a_max)}

≥ 1 − (Pr{r̄ − λ* ≥ 1/(2a_max)} + Pr{λ* − r ≥ 1/(2a_max)})

≥ 1 − (Pr{|J_0| ≤ |I|} + Pr{|J_1| ≤ |I|})   (from (2.17) and (2.18))

= 1 − (B(n, |I|, p_0) + B(n, |I|, p_1)) = Q(n).

The last statement of the theorem follows from well-known properties of the cumulative binomial distribution function.


2.4.4 Computational Complexity

Theorem 2.4.4. DPH runs in O(mn²) time, polynomial in the number of items and constraints.

Proof. The worst-case complexity of finding a solution of an MKP using DPH can be obtained as follows. Assume that there are n variables and m + 1 constraints. The initialization procedure (step-(a)) requires O(n) running time. The construction of the surrogate constraint (step-(b)) and the pseudo-ratio operator (step-(c)) require O(mn) and O(n) time respectively. The formation of the D matrix iterates n times: it identifies entries less than one in each column, finds the smallest intercept in each column, identifies rows containing more than one smallest intercept, and updates the constraint matrix A. Since there are m constraints, steps (d), (e), (f) and (i) require O((m+1)n) time, and steps (g) and (h) require O(n) operations to multiply the costs by the corresponding smallest intercepts and to update the corresponding row of the constraint matrix. The maximum number of iterations required by DPH is n, so the overall running time of DPH is O(mn²).

2.4.5 Illustration

An example of an MKP solved by DPH. Consider

Max 5000x_1 + 5550x_2 + 1000x_3 + 1500x_4

subject to

5x_1 + 7x_2 + 2x_3 ≤ 9

3x_1 + 2x_2 ≤ 3

425x_1 + 300x_2 + 50x_3 + 100x_4 ≤ 500

The algorithm begins with the initial feasible solution: the solution vector is (0, 0, 0, 0) and the objective value is 0. We update this solution iteratively using the DPH algorithm (step-(d) to step-(i)), which updates the solution vector and the objective function value.

Iteration 1: The variable x_2 dominates the other variables and improves the objective function value from 0 to 5550. The new solution vector is (0, 1, 0, 0).

Iteration 2: The variable x_4 dominates the other variables (x_1 and x_3) and improves the objective function value from 5550 to 7050. The solution vector is (0, 1, 0, 1).

Iteration 3: The variable x_3 dominates the other variable (x_1) and improves the objective function value from 7050 to 8050. The solution vector is (0, 1, 1, 1). We assign 0 to x_1, since the x_1 column has an entry < 1.

The final solution is x_1 = 0, x_2 = 1, x_3 = 1, x_4 = 1, with objective function value 8050, which is optimal.
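With only four variables, the claim of optimality for this worked example can be confirmed by brute force over all 2⁴ assignments (a quick check, not part of the thesis procedure):

```python
# Enumerate all 0-1 assignments of the worked example and confirm that
# (0, 1, 1, 1) with profit 8050 is the feasible optimum.
from itertools import product

c = [5000, 5550, 1000, 1500]
A = [[5, 7, 2, 0],
     [3, 2, 0, 0],
     [425, 300, 50, 100]]
b = [9, 3, 500]

def feasible(x):
    return all(sum(a * xi for a, xi in zip(row, x)) <= cap
               for row, cap in zip(A, b))

best = max((x for x in product([0, 1], repeat=4) if feasible(x)),
           key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
best_value = sum(ci * xi for ci, xi in zip(c, best))
# best == (0, 1, 1, 1) and best_value == 8050
```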

We also illustrate the procedure on Peterson4 (m = 10 and n = 28), a multidimensional knapsack problem with 28 variables and 10 knapsacks, available in the OR-Library of Beasley [15]. DPH begins with initial cost zero, and at the end of each iteration it updates the values x_i (0 or 1) and the cost. TABLE 2.1 gives the iteration-wise report of DPH.

The DPH algorithm terminates the iterative process at the 19th iteration, since all the entries in the constraint matrix are then equal to 0. At the end of the 19th iteration DPH reaches the optimum, 12400.


Table 2.1. Iteration-wise report of DPH for the Peterson4 problem

Iteration   Variable set to 1   Variable(s) set to 0     f(x1, x2, ..., xn)
1           21                  -                        3100
2           20                  -                        5650
3           23                  -                        6600
4           19                  -                        6660
5           17                  -                        7140
6           16                  -                        7460
7           2                   -                        7680
8           18                  -                        7760
9           15                  -                        8410
10          27                  -                        8610
11          22                  -                        9710
12          28                  -                        10230
13          1                   -                        10330
14          26                  -                        10550
15          25                  -                        10850
16          14                  -                        12150
17          3                   4                        12240
18          9                   5                        12400
19          -                   6,7,8,10,11,12,24,13     12400

2.5 Experimental Analysis

2.5.1 DPH solutions of Test Problems

Our DPH was initially tested on 55 standard problems (divided into six different sets) which are available from the OR-Library [15]. The size of these problems varies from n = 5 to 105 items and from m = 2 to 50 constraints. We solved these problems using both the general-purpose CPLEX mixed-integer programming solver and our DPH, which was coded in MATLAB 7. The results are shown in TABLE 2.2.

The first two columns in TABLE 2.3 indicate the problem set name and the number of problems in that problem set. The next two columns report the average solution times of CPLEX and of the DPH. The last two columns report the DPH algorithm's average percentage deviation from the optimum solution and the total number of optimum solutions found by the DPH for each problem set.

It is clear from TABLE 2.3 that our DPH finds the optimal or a near-optimal solution in all 55 test problems. On average, CPLEX required 1.13 seconds per problem on the instances shown in TABLE 2.3, while our DPH required only 0.64 seconds.

The application of the DPH algorithm to Weing7[15] is presented in FIGURE 2.1. The maximum number of iterations is fixed at n = 105. The DPH reaches the solution 1094806 at the 90th iteration.
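The deviation figures reported in the tables are percentage shortfalls of the heuristic value from the known optimum. A minimal sketch of that metric, checked against the Weing7 entry of TABLE 2.2 (`pct_deviation` is a hypothetical helper name):

```python
def pct_deviation(optimum, found):
    # percentage deviation of a heuristic value from the known optimum
    return 100.0 * (optimum - found) / optimum

# Weing7 (TABLE 2.2): optimum 1095445, DPH solution 1094806
print(round(pct_deviation(1095445, 1094806), 2))  # 0.06, as in the table
```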

The second set of tested instances consists of 11 benchmarks (the MK-GK problems, also the largest ones, with n = 100 to 2500 items and m = 15 to 100 constraints) proposed recently by Glover and Kochenberger[81] (available at: http://hces.bus.olemiss.edu/tools.html). TABLE 2.4 compares our results with the best known results taken from the above web site. The first two columns in TABLE 2.4 indicate the sizes (n and m), and the third column gives the CPLEX solution of the problems; the last two columns report the DPH solution and the percentage of deviation from the best solutions of the problems. It can be seen from TABLE 2.4 that the DPH is better than CPLEX in 4 out of 11 problems.

2.5.2 Comparison with other heuristics

The comparative study of the DPH with other existing heuristic algorithms (PSO[97], Khuri[122], Hybrid GA[52] and Chu and Beasley[41]) is furnished in TABLE 2.5 in terms of the average deviation and the number of optimal/best solutions obtained (out of a total of 55) for the set 1 problems.

Figure 2.1. Iteration-wise objective function value for Weing7

For the set 2 problems, the comparison between MJ[196], GK-Tabu[81] and the DPH is reported in TABLE 2.6. The DPH found the best solutions for most of the problems, and the average deviation from the best/optimal values for the remaining ones is very small. As both the tables and the figure clearly demonstrate, the DPH is able to locate the global optimum or a near-optimal point for all the test problems in quick time. Our approach reduces the search space in order to find near-optimal solutions of the MKP. The time complexity is O(mn²) and the space complexity is O(mn). The DPH reaches the optimum or a near-optimum point in a small number of iterations, where the maximum number of iterations equals the number of projects (variables), n. Our heuristic algorithm identifies the zero-valued variables quickly.
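The complexity bound can be checked with a one-line count: the heuristic performs at most n iterations, and each iteration scans the m × n constraint matrix once, so

```latex
T_{\text{DPH}}(m,n) \;=\; n \cdot O(mn) \;=\; O(mn^{2}),
\qquad
S_{\text{DPH}}(m,n) \;=\; O(mn),
```

i.e. cubic-order time when m and n grow together, with space dominated by the constraint matrix itself.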


Table 2.2. DPH solutions for small size problems

Problem      m    n     Optimum    DPH solution   Deviation (%)
Peterson1    10   10    8706.1     8706.1         0.00
Peterson2    10   15    4015       4015           0.00
Peterson3    10   20    6120       6120           0.00
Peterson4    10   28    12400      12400          0.00
Peterson5    5    39    10618      10618          0.00
Peterson6    5    50    16537      16537          0.00
Peterson7    2    28    141278     141278         0.00
weing1       2    28    141278     141278         0.00
weing2       2    28    130883     130883         0.00
weing3       2    28    95677      95677          0.00
weing4       2    28    119337     119337         0.00
weing5       2    28    98796      98796          0.00
weing6       2    28    130623     130623         0.00
weing7       2    105   1095445    1094806        0.06
weing8       2    105   624319     624319         0.00
weish1       5    30    4554       4554           0.00
weish2       5    30    4536       4536           0.00
weish3       5    30    4115       4115           0.00
weish4       5    30    4561       4561           0.00
weish5       5    30    4514       4514           0.00
weish6       5    40    5557       5557           0.00
weish7       5    40    5567       5567           0.00
weish8       5    40    5605       5605           0.00
weish9       5    40    5246       5246           0.00
weish10      5    50    6339       6339           0.00

2.6 Conclusion

In this chapter, we have presented the dominance principle based approach for tackling the NP-Hard Zero-One Multidimensional Knapsack Problem (MKP). This approach has been tested on 66 state-of-the-art benchmark instances and yields optimal or near-optimal solutions for most of the tested instances. The proposed approach is a heuristic with O(mn²) complexity, and it requires a maximum of n iterations to


Table 2.3. Computational results for CPLEX and the DPH for small size problems

Problem     Number of   CPLEX average   DPH average     A.P.O.D   N.O.P.T
set name    problems    solution time   solution time
HP          2           0.3             0.2             0.0       2
PB          6           2.8             1.8             0.04      5
PETERSON    7           0.2             0.2             0.0       7
SENTO       2           12.0            3.8             0.0       2
WEING       8           0.6             0.4             0.01      7
WEISH       30          0.5             0.4             0.03      28
Average                 1.13            0.64

A.P.O.D = Average percentage of deviation
N.O.P.T = Number of problems for which the DPH finds the optimal solution

Table 2.4. Computational results for CPLEX and the DPH for large size problems

n      m     CPLEX best          DPH        Percentage of
             feasible solution   solution   deviation
100    15    3766                3766       0.0
100    25    3958                3958       0.0
150    25    5650                5656       0.26 (improved)
150    50    5764                5767       0.05 (improved)
200    25    7557                7557       0.0
200    50    7672                7672       0.0
500    25    19215               19215      0.0
500    50    18801               18806      0.03 (improved)
1500   25    58085               58085      0.0
1500   50    57292               57294      0.003 (improved)
2500   100   95231               95231      0.0


Table 2.5. Comparative study of solutions for set 1 problems

Problem     Number of   PSO    DPH    Khuri   Hybrid GA   Chu and Beasley
set name    problems
HP          2           0.00   0.00   0.00    0.00        0.00
PB          6           0.05   0.04   0.01    0.00        0.00
PETERSON    7           0.00   0.00   0.00    0.00        0.00
SENTO       2           0.32   0.00   0.17    0.00        0.04
WEING       8           2.75   0.01   1.15    0.00        0.00
WEISH       30          1.37   0.03   0.87    0.24        0.08
N.O.S       55          49     48     34      48          49
A.D                     0.74   0.01   0.37    0.04        0.02

N.O.S = Number of optimum solutions
A.D = Average percentage of deviation

Table 2.6. Comparative study of solutions for set 2 problems

n      m     Best known   MJ       GK-Tabu   DPH
             solution     Hybrid   Search
100    15    3766         3766     3766      3766
100    25    3958         3958     3958      3958
150    25    5656         5656     5650      5656
150    50    5767         5767     5764      5767
200    25    7560         7560     7557      7560
200    50    7677         7677     7672      7672
500    25    19220        19220    19215     19220
500    50    18806        18806    18801     18806
1500   25    58087        58087    58085     58087
1500   50    57295        57295    57295     57295
2500   100   95237        95237    95231     95237
A.D                       0.00     0.03      0.01

A.D = Average percentage of deviation


solve the MKP. The performance of this heuristic (TABLE 2.5, TABLE 2.6) has been compared extensively with that of Genetic Algorithms, Particle Swarm Optimization, and other heuristics. The experimental results show that the proposed DPH outperforms many other heuristics. In the next chapter we design a dominance principle based heuristic to solve the generalized assignment problem.