
Volume 1, Number 6 OPERATIONS RESEARCH LETTERS December 1982

SOLVING CERTAIN SINGLY CONSTRAINED CONVEX OPTIMIZATION PROBLEMS IN PRODUCTION PLANNING

Hans ZIEGLER, Fachbereich Wirtschaftswissenschaft, Universität-Gesamthochschule Paderborn, Warburger Strasse 100, D-4790 Paderborn, Fed. Rep. Germany

Received September 1982 Revised November 1982

This paper considers the problem of minimizing a special convex function subject to one linear constraint. Based upon a theorem for lower and upper bounds on the Lagrange multiplier, a fully polynomial time approximation scheme is proposed. The efficiency of the algorithm is demonstrated by a computational experiment.

Approximation, computational complexity, Lagrange multiplier, nonlinear programming, production planning

1. Statement of the problem

In production planning, convex optimization problems subject to one constraint can occur which can be formulated as

minimize  f(x) = Σ_{n=1}^{N} (a_n x_n + b_n/x_n)
s.t.      g(x) = Σ_{n=1}^{N} c_n x_n ≤ M,                                (1)
          x_n ≥ 0,  n = 1, ..., N,

where f(x) is a convex function, x = (x_1, ..., x_N) ∈ ℝ_{≥0}^N, a_n, b_n, c_n ≥ 0, n = 1, ..., N, and M > 0. Problem (1) can e.g. be interpreted as a lot-size problem, where f(x) represents the cost function and g(x)

a capacity constraint. Defining

a_n = ½ cp_n i_n  and  b_n = X_n cf_n,

where

cp_n  costs per unit of product n,
i_n   rate of holding costs of product n,
X_n   total demand of product n in the period,
cf_n  ordering costs per order of product n,
c_n   storage requirements per unit of product n,
M     storage capacity,

(1) corresponds to the problem of minimizing the sum of holding and ordering costs subject to a limited storage capacity.

0167-6377/82/$02.75 © 1982 North-Holland


Solution procedures for (1) have been proposed e.g. by Beckmann [1] and Churchman, Ackoff and Arnoff [2], which try to approximate the optimal solution by a trial-and-error generation of Lagrange multipliers. The main problem of these procedures is that appropriate bounds on the Lagrange multiplier are not known in advance. In the following, it is first shown how lower and upper bounds on the Lagrange multiplier can be generated. Then an outline of an algorithm is given which allows a systematic approximation of the optimal solution to (1). Finally, some theoretical results and computational experiences are described, which show that the proposed algorithm is very efficient.

2. Generating bounds on the Lagrange multiplier

For solving (1), the Lagrange function

L(x, λ) = f(x) + λ(g(x) − M)                                             (2)

can be established. From the Kuhn–Tucker conditions, it is known that the optimal solution x* and the optimal Lagrange multiplier λ* have to meet the conditions

∂L(x*, λ*)/∂x_n ≥ 0,  n = 1, ..., N,                                     (3)

x_n* ∂L(x*, λ*)/∂x_n = a_n x_n* − b_n/x_n* + λ* c_n x_n* = 0,  n = 1, ..., N,   (4)

x_n* ≥ 0,  n = 1, ..., N,                                                (5)

∂L(x*, λ*)/∂λ ≤ 0,                                                       (6)

λ* ∂L(x*, λ*)/∂λ = λ*(g(x*) − M) = 0,                                    (7)

λ* ≥ 0.                                                                  (8)

If λ* is known, x* can be determined by solving (4) for x_n*, i.e.

x_n* = √(b_n / (a_n + λ* c_n)),  n = 1, ..., N.                          (9)

On the other hand, if at least one x_n* is known, λ* can be calculated from (4) according to

λ* = b_n / (c_n (x_n*)²) − a_n / c_n.                                    (10)
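Equations (9) and (10) are inverses of one another. A minimal sketch in Python rather than the paper's APL (the variable names and the toy instance are mine) illustrates the round trip:

```python
import math

def x_of_lambda(a, b, c, lam):
    """Eq. (9): optimal x_n for a given multiplier lambda >= 0."""
    return [math.sqrt(bn / (an + lam * cn)) for an, bn, cn in zip(a, b, c)]

def lambda_of_x(an, bn, cn, xn):
    """Eq. (10): multiplier implied by one known component x_n > 0."""
    return bn / (cn * xn * xn) - an / cn

# Round trip on a small illustrative instance: every component of
# x(lambda) returns the same multiplier via eq. (10).
a, b, c = [2.0, 1.0], [8.0, 18.0], [1.0, 3.0]
x = x_of_lambda(a, b, c, 0.5)
print([lambda_of_x(a[n], b[n], c[n], x[n]) for n in range(2)])
```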

Without loss of generality, b_n, c_n > 0, n = 1, ..., N, can be assumed, since x_n* = 0 for b_n = 0 and x_n* = √(b_n/a_n) for c_n = 0. Note that, in contrast to the unrestricted problem, x_n* does not tend to infinity if a_n = 0 and c_n > 0.

Knowing an appropriate feasible solution to (1), eq. (10) can be used for determining lower and upper bounds on λ*. This special solution can be calculated as follows: First, disregarding the constraint, the optimal solution x̂ minimizing f(x) is calculated by use of the Harris equation [3]:

x̂_n = √(b_n / a_n),  n = 1, ..., N.                                      (11)

If x̂ fulfils g(x̂) ≤ M, the optimal solution to (1) is determined by x* = x̂ and λ* = 0. Otherwise, a feasible



solution x^h can be determined by a linear reduction of x̂ in all its components, i.e.

x_n^h = (M / Σ_{ν=1}^{N} c_ν x̂_ν) · x̂_n,  n = 1, ..., N.                (12)

Using x_n^h in (10), N estimates λ_n for the Lagrange multiplier λ* can be calculated according to

λ_n = b_n / (c_n (x_n^h)²) − a_n / c_n,  n = 1, ..., N,                  (13)

and the following theorem holds:

Theorem. Let g(x̂) > M,

λ^l = min_{n=1,...,N} λ_n  and  λ^u = max_{n=1,...,N} λ_n.

Then, either
(1) λ^l = λ^u = λ* and x^h is an optimal solution to (1), or
(2) λ^l < λ* < λ^u.

Proof. (i) If λ^u = λ^l, then (x^h, λ^u) satisfies the Kuhn–Tucker conditions and therefore x^h is an optimal solution to (1).

(ii) If λ^u ≠ λ^l, then

∃n′: λ_{n′} > λ^l  and  ∃n″: λ_{n″} < λ^u,  n′, n″ ∈ {1, ..., N}.

(a) λ^l ≤ λ_n, n = 1, ..., N, λ^l < λ_{n′} for at least one n′ ∈ {1, ..., N}, and

x_n(λ) = √(b_n / (a_n + λ c_n)),  n = 1, ..., N,                         (14)

imply

x_n(λ^l) ≥ x_n^h,  n = 1, ..., N,

with strict inequality for n = n′, and hence Σ_n c_n x_n(λ^l) > M. Therefore (9) and (14) imply λ* > λ^l.

(b) g(x̂) > M together with (8), (9) and (11) implies λ* > 0, and therefore from (7) we get the condition

Σ_{n=1}^{N} c_n x_n* = M.

λ^u ≥ λ_n, n = 1, ..., N, λ^u > λ_{n″} for at least one n″ ∈ {1, ..., N}, and (14) imply

x_n(λ^u) ≤ x_n^h,  n = 1, ..., N,

with strict inequality for n = n″, and hence Σ_n c_n x_n(λ^u) < M. From Σ_{n=1}^{N} c_n x_n* = M, (9) and (14) we immediately get λ* < λ^u.

As λ* represents the shadow price or opportunity cost of the limited resource, λ^l and λ^u can be interpreted as lower and upper bounds on the shadow price. These bounds can be sharpened by using the following corollary:
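The first phase, eqs. (11)–(13) together with the bounds of the theorem, can be sketched in Python rather than the paper's APL (the function name and the toy instance are mine):

```python
import math

def phase1_bounds(a, b, c, M):
    """Harris solution (11); if it is infeasible, scale it down (12),
    derive the N multiplier estimates (13), and return the theorem's
    bounds lambda^l and lambda^u."""
    xhat = [math.sqrt(bn / an) for an, bn in zip(a, b)]          # eq. (11)
    g_hat = sum(cn * xn for cn, xn in zip(c, xhat))
    if g_hat <= M:                   # constraint inactive: xhat is optimal
        return xhat, 0.0, 0.0
    xh = [M / g_hat * xn for xn in xhat]                         # eq. (12)
    lams = [bn / (cn * xn * xn) - an / cn                        # eq. (13)
            for an, bn, cn, xn in zip(a, b, c, xh)]
    return xh, min(lams), max(lams)                              # λ^l, λ^u

xh, lam_lo, lam_hi = phase1_bounds([2.0, 1.0], [8.0, 18.0], [1.0, 3.0], 6.0)
print(lam_lo, lam_hi)   # the optimal multiplier lies between these values
```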

Corollary. Let g(x̂) > M,

λ^{l′} = max{λ_n | g(x(λ_n)) > M, n = 1, ..., N}

and

λ^{u′} = min{λ_n | g(x(λ_n)) < M, n = 1, ..., N}.

Then λ^{l′} < λ* < λ^{u′}.



Proof. Analogous to part (ii) of the proof of the theorem.

Since g(x(λ)), where x(λ) is obtained from (14), is strictly decreasing in λ ≥ 0 and convex, another improvement of the bounds on λ* can be achieved by applying the Newton method

λ^{l″} = λ^l + (M − g(x(λ^l))) / (∂g(x(λ^l))/∂λ)                         (15)

and the regula falsi

λ^{u″} = λ^u − (λ^u − λ^{l″})(M − g(x(λ^u))) / (g(x(λ^{l″})) − g(x(λ^u))),   (16)

where λ^l and λ^u may be the values given in the theorem or in the corollary.
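A sketch of these sharpening steps, reading (16) as the standard regula falsi secant step between the bounds (the function names, the closed-form derivative of g(x(λ)), and the toy values are mine):

```python
import math

def g_of_lambda(a, b, c, lam):
    """g(x(lambda)) with x(lambda) from eq. (14); strictly decreasing
    and convex in lambda >= 0."""
    return sum(cn * math.sqrt(bn / (an + lam * cn))
               for an, bn, cn in zip(a, b, c))

def dg_dlambda(a, b, c, lam):
    """Derivative of g(x(lambda)) with respect to lambda (negative)."""
    return sum(-0.5 * cn * cn * math.sqrt(bn) * (an + lam * cn) ** -1.5
               for an, bn, cn in zip(a, b, c))

def sharpen_bounds(a, b, c, M, lam_lo, lam_hi):
    """One Newton step (15) from the lower bound and one regula falsi
    step (16) for the upper bound; convexity and monotonicity of g
    keep lambda* bracketed."""
    lam_lo2 = lam_lo + (M - g_of_lambda(a, b, c, lam_lo)) \
                       / dg_dlambda(a, b, c, lam_lo)             # eq. (15)
    g_lo2 = g_of_lambda(a, b, c, lam_lo2)
    g_hi = g_of_lambda(a, b, c, lam_hi)
    lam_hi2 = lam_hi - (lam_hi - lam_lo2) * (M - g_hi) / (g_lo2 - g_hi)  # eq. (16)
    return lam_lo2, lam_hi2
```

The Newton tangent lies below the convex g, so its root stays below λ*; the secant lies above g between the bounds, so its root stays above λ*.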

3. Outline of a solution procedure

The results of Section 2 can be used for the design of an algorithm solving (1) approximately. In a first phase, one proceeds as described in Section 2. In case (2) of the theorem, a primal-dual iterative scheme based on a bisection technique is proposed, which can be depicted as follows:

Step 1: Calculate λ^z = ½(λ^l + λ^u).
Step 2: Determine x^z = x(λ^z).
Step 3: If g(x^z) < M then λ^u := λ^z and x^u := x^z; otherwise λ^l := λ^z and x^l := x^z.
Step 4: If (f(x^u) − f(x^l)) / f(x^l) ≤ ε stop; otherwise go to step 1.

In step 1 of the first iteration, λ^l and λ^u may be the values given in the theorem, in the corollary, or by (15) and (16), respectively. The quality of the current feasible solution x^u is tested in step 4 using the lower bound f(x^l) on the optimal objective value. If the relative deviation of f(x^u) from f(x^l), and hence from f(x*), is within a prespecified quantity ε, the algorithm stops with the approximate solution x^u.

The current feasible solution can be improved by calculating

x_n^h = (M / Σ_{ν=1}^{N} c_ν x_ν^z) · x_n^z,  n = 1, ..., N,             (17)

in case of g(x^z) < M in step 3 and using x^h instead of x^u, because, f(x) being monotone decreasing in each x_n on the interval [0, x̂_n], the objective value will decrease by going from x^z to x^h.
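Putting both phases together, the scheme of steps 1–4 with improvement (17) can be sketched in Python rather than the paper's APL (a minimal sketch under the theorem's bounds; names and defaults are mine):

```python
import math

def solve(a, b, c, M, eps=1e-3):
    """Approximate solution of (1): phase 1 computes the theorem's
    bounds; phase 2 bisects on lambda (steps 1-4), applying the
    primal improvement (17) whenever g(x^z) < M."""
    x_of = lambda lam: [math.sqrt(bn / (an + lam * cn))
                        for an, bn, cn in zip(a, b, c)]          # eq. (14)
    g_of = lambda x: sum(cn * xn for cn, xn in zip(c, x))
    f_of = lambda x: sum(an * xn + bn / xn for an, bn, xn in zip(a, b, x))

    xhat = [math.sqrt(bn / an) for an, bn in zip(a, b)]          # eq. (11)
    if g_of(xhat) <= M:
        return xhat                                              # lambda* = 0
    xh = [M / g_of(xhat) * xn for xn in xhat]                    # eq. (12)
    lams = [bn / (cn * xn * xn) - an / cn
            for an, bn, cn, xn in zip(a, b, c, xh)]              # eq. (13)
    lam_lo, lam_hi = min(lams), max(lams)
    x_lo, x_hi = x_of(lam_lo), xh       # infeasible / feasible incumbents
    while (f_of(x_hi) - f_of(x_lo)) / f_of(x_lo) > eps:          # step 4
        lam_z = 0.5 * (lam_lo + lam_hi)                          # step 1
        x_z = x_of(lam_z)                                        # step 2
        if g_of(x_z) < M:                                        # step 3
            lam_hi = lam_z
            x_hi = [M / g_of(x_z) * xn for xn in x_z]            # eq. (17)
        else:
            lam_lo, x_lo = lam_z, x_z
    return x_hi
```

f(x_lo) is a valid lower bound throughout, since λ^l ≤ λ* is maintained as a loop invariant.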

4. Computational considerations

A computational analysis of the worst-case behavior of the proposed algorithm shows that the first phase can be accomplished in time O(N) if the bounds stated in the theorem are calculated, and in time O(N log₂ N) if the bounds given in the corollary are determined using binary search for finding λ^{l′} and λ^{u′}. Furthermore, applying (15) and (16) to either bounds does not increase the order of magnitude of the computational effort, since (15) and (16) can be done in time O(N).

The computational complexity of the iterative scheme depends on the effort per iteration cycle and the number of cycles s to be performed. The time for running one cycle is bounded by O(N), and hence the computational effort of this phase is determined by O(sN), where s depends on ε and the problem instance. Having knowledge about the data of the specific problem instance to be solved, it can be shown that the



algorithm stops after at most s̄ iterations, where s̄ is given by an expression (18) in the data a_n, b_n, c_n, M of the instance and the accuracy ε.

Assuming a_n, b_n, c_n, M ∈ ℕ and c_n < M, n = 1, ..., N, which can be obtained for problems from reality by appropriate transformations of the data, and defining the problem size L as the length of the binary encoded input, the following, less tight bound on s can be derived:

2^{3L + 2 log₂ L − s} < ε.                                               (19)

Therefore the algorithm stops after at most s = ⌈log₂(1/ε) + 3L + 2 log₂ L⌉ iteration cycles. This yields a total computational effort of O(L log₂(1/ε) + L²), and hence the proposed algorithm is a fully polynomial time approximation scheme in the notation of Horowitz and Sahni [4]. Furthermore, the space requirement of the algorithm is bounded by O(N).

To obtain experience about the efficiency of the algorithm in practice, the solution method with the proposed improvement (17) was coded in APL and run on an Amdahl 470V/8 computer to solve randomly generated lot-size problems. A first experiment showed that calculating the bounds given in the corollary is not advantageous, since ⌈log₂ N⌉ search steps, each of complexity O(N), have to be performed. This is equivalent to ⌈log₂ N⌉ iteration cycles of the second phase. Hence using the values given in the theorem proved to be more efficient.

In a second experiment two versions of the algorithm were compared. Version 1 used the bounds given in the theorem. In version 2 these bounds were sharpened by applying (15) and (16). The data was generated from discrete uniform distributions according to

X_n ∈ {1, 2, ..., 100000},
cp_n ∈ {1, 2, ..., 10000},
cf_n ∈ {0.1 cp_n, 0.2 cp_n, ..., 1.5 cp_n},
i_n ∈ {0.05, 0.06, ..., 0.20},
c_n ∈ {0.1 cp_n, 0.2 cp_n, ..., 2.0 cp_n}.
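An instance generator following these distributions might look as follows (a sketch in Python rather than the paper's APL; the function name and the seed handling are mine, and M is coupled to the constraint sharpness α as described below):

```python
import math
import random

def random_instance(N, alpha=0.5, seed=1):
    """Draw one lot-size instance from the stated discrete uniform
    distributions; capacity M = alpha * g(xhat) controls how sharp
    the storage constraint is."""
    rng = random.Random(seed)
    X  = [rng.randint(1, 100000) for _ in range(N)]             # demand X_n
    cp = [rng.randint(1, 10000) for _ in range(N)]              # unit cost cp_n
    cf = [rng.randint(1, 15) / 10 * cp[n] for n in range(N)]    # ordering cost cf_n
    i  = [rng.randint(5, 20) / 100 for _ in range(N)]           # holding rate i_n
    c  = [rng.randint(1, 20) / 10 * cp[n] for n in range(N)]    # storage need c_n
    a = [0.5 * cp[n] * i[n] for n in range(N)]                  # a_n = cp_n i_n / 2
    b = [X[n] * cf[n] for n in range(N)]                        # b_n = X_n cf_n
    g_hat = sum(c[n] * math.sqrt(b[n] / a[n]) for n in range(N))
    return a, b, c, alpha * g_hat
```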

To gain experience about the influence of the sharpness of the constraint on the number of iteration cycles (cf. (18)), the parameter M was taken into account via M = α · g(x̂), where α is a measure of the sharpness of the constraint. For all possible combinations of N ∈ {10, 50, 100, 500}, ε ∈ {0.1, 0.05, 0.01, 0.005}, α ∈ {0.9, 0.8, 0.6, 0.5}, and for N = 1000, ε = 0.005, α = 0.5, in each case 20 problems were solved.

Fig. 1. μ(s) versus N for version 1 (α = 0.5; ε = 0.005, 0.01, 0.05, 0.1).
Fig. 2. μ(s) versus N for version 2 (α = 0.5; ε = 0.005, 0.01, 0.05, 0.1).
Fig. 3. μ(s) versus 1/ε for N = 10 and N = 500 (α = 0.5; versions 1 and 2).
Fig. 4. μ(s) versus α for N = 10 and N = 500 (ε = 0.005, 0.05; versions 1 and 2).

The computational experiment shows that the algorithm behaves much better than indicated by the theoretical results. For example, choosing N = 1000, ε = 0.005 and α = 0.5, expression (18) yields an upper bound of 60 and (19) of 175000 on the number of iterations to be performed, whereas in the experiment the maximal number occurring was 10 for version 1 and 9 for version 2. Solving these problems, an average computing time of 0.825 CPU-seconds for version 1 and of 0.58 CPU-seconds for version 2 was necessary, with a maximum of 1.46 seconds for version 1 and of 0.65 seconds for version 2.

The main results are summarized in Figs. 1 to 4, where μ(s) denotes the mean value of the number of iteration cycles performed. Figures 1 and 2 show that the number of iterations depends only subproportionally on the number of variables and that the dependence of μ(s) on the solution quality 1/ε is very weak (cf. also Fig. 3). The latter indicates that one should not be worried about choosing a small ε in solving practical problems. Figure 4 shows a certain dependence of μ(s) on α, but taking into account the scales of μ(s) and α, the dependence is very weak.

Comparing the total computational effort of versions 1 and 2, it has to be taken into consideration that calculating the improved bounds (15) and (16) in version 2 is approximately equivalent to 1.5 iteration cycles of phase 2. Therefore the results of the experiment indicate that for α ≥ 0.8 and ε ≥ 0.05 version 1 is superior to version 2, cf. the dotted line in Fig. 4. In all other cases version 2 is superior to version 1.

Acknowledgment

The author is indebted to Prof. Dr. Otto Rosenberg for many helpful comments and suggestions.



References

[1] M. Beckmann, "A Lagrangian multiplier rule in linear activity analysis and some of its applications", Cowles Commission Discussion Paper: Economics No. 2054, unpublished, 1952, cited in [2].

[2] C.W. Churchman, R.L. Ackoff and E.L. Arnoff, Operations Research, 5th ed., Wien (1971).
[3] F. Harris, Operations and Cost, Chicago (1915).
[4] E. Horowitz and S. Sahni, "Combinatorial problems: Reducibility and approximation", Operations Res. 26, 718-759 (1978).
