
Stochastic Linear Programming by Series of Monte-Carlo Estimators

Leonidas SAKALAUSKAS
Institute of Mathematics & Informatics
Vilnius, Lithuania
E-mail: sakal@ktl.mii.lt

CONTENT
- Introduction
- Monte-Carlo estimators
- Stochastic differentiation
  - Dual solution approach (DS)
  - Finite difference approach (FD)
  - Simulated perturbation stochastic approximation (SPSA)
  - Likelihood ratio approach (LR)
- Numerical study of stochastic gradient estimators
- Stochastic optimization by series of Monte-Carlo estimators
- Numerical study of stochastic optimization algorithm
- Conclusions

Introduction

We consider a stochastic approach to stochastic linear problems that is distinguished by:
- adaptive regulation of the Monte-Carlo estimators,
- a statistical termination procedure,
- a stochastic ε-feasible direction approach to avoid "jamming" or "zigzagging" when solving a constrained problem.

Two-stage stochastic programming problem with recourse

The recourse function is

$$Q(x,\xi) = \min_{y}\bigl\{\, q^{\top} y \;\big|\; W y + T x = h,\ y \in \mathbb{R}^{m}_{+} \,\bigr\},$$

subject to the feasible set

$$D = \bigl\{\, x \in \mathbb{R}^{n} \;\big|\; A x = b,\ x \ge 0 \,\bigr\},$$

and the problem is

$$F(x) = c^{\top} x + \mathbf{E}\, Q(x,\xi) \;\rightarrow\; \min_{x \in D},$$

where W, T, h are random in general and defined by an absolutely continuous probability density p(·).

Monte-Carlo estimators of objective function

Let a sample of a certain number N of scenarios

$$Y = (y^{1}, y^{2}, \dots, y^{N}), \qquad y^{j} \sim p(\cdot),$$

be provided for some x ∈ R^n. The sampling estimator of the objective function

$$\tilde{F}(x) = c^{\top} x + \tilde{Q}(x), \qquad \tilde{Q}(x) = \frac{1}{N}\sum_{j=1}^{N} Q(x, y^{j}),$$

as well as the sampling variance

$$\tilde{D}^{2}(x) = \frac{1}{N-1}\sum_{j=1}^{N} \bigl( Q(x, y^{j}) - \tilde{Q}(x) \bigr)^{2}$$

are computed.
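Where the second-stage value Q(x, y) can be evaluated per scenario, these estimators reduce to sample moments. A minimal sketch in Python, assuming a callable `Q` for the second-stage solver (a hypothetical helper, not fixed by the slides):

```python
import numpy as np

def objective_estimators(c, x, scenarios, Q):
    """Monte-Carlo estimators of the objective and its sampling variance.

    scenarios : N sampled scenarios y^1..y^N drawn from p(.)
    Q         : callable Q(x, y) solving the second-stage problem
    """
    values = np.array([Q(x, y) for y in scenarios])
    Q_hat = values.mean()              # (1/N) sum_j Q(x, y^j)
    F_hat = c @ x + Q_hat              # F~(x) = c'x + Q~(x)
    D2 = values.var(ddof=1)            # (1/(N-1)) sum_j (Q(x,y^j) - Q~(x))^2
    return F_hat, D2
```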

Monte-Carlo estimators of stochastic gradient

The gradient estimator

$$\tilde{g}(x) = \frac{1}{N}\sum_{j=1}^{N} g(x, y^{j}),$$

as well as the sampling covariance matrix

$$A(x) = \frac{1}{N-n}\sum_{j=1}^{N} \bigl( g(x, y^{j}) - \tilde{g}(x) \bigr)\bigl( g(x, y^{j}) - \tilde{g}(x) \bigr)^{\top},$$

are evaluated using the same random sample, where $\mathbf{E}\, g(x, \xi) = \nabla F(x)$.
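A matching sketch for the gradient estimators, assuming a callable `g(x, y)` that returns one stochastic gradient per scenario; the (N − n) normalisation follows the slide's covariance formula:

```python
import numpy as np

def gradient_estimators(x, scenarios, g):
    """Sample mean and covariance of the stochastic gradient g(x, y).

    g returns an n-vector per scenario; the covariance uses the
    (N - n) normalisation written on the slide.
    """
    G = np.array([g(x, y) for y in scenarios])     # shape (N, n)
    N, n = G.shape
    g_hat = G.mean(axis=0)                         # g~(x)
    centered = G - g_hat
    A = centered.T @ centered / (N - n)            # sampling covariance A(x)
    return g_hat, A
```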

Statistical testing of optimality hypothesis under asymptotic normality

The Hotelling-type statistic

$$T^{2} = \frac{N-n}{n}\,\bigl(\tilde{g}(x)\bigr)^{\top} \bigl(A(x)\bigr)^{-1} \tilde{g}(x)$$

is compared with the quantile Fish(γ; n, N − n) of the Fisher distribution.

The optimality hypothesis is rejected if:
1) the statistical hypothesis of equality of the gradient to zero is rejected, i.e. T² > Fish(γ; n, N − n); or
2) the confidence interval of the objective function, proportional to $2\,\tilde{D}(x)/\sqrt{N}$, exceeds the admissible value.
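The two-part termination test can be sketched as follows; `gamma` and `beta` are assumed confidence levels and `eps` the admissible confidence-interval width (the slides do not fix these names):

```python
import numpy as np
from scipy import stats

def optimality_rejected(g_hat, A, N, D2, eps, gamma=0.95, beta=0.95):
    """Two-part optimality test sketched from the slides.

    Rejects if (1) the Hotelling-type statistic exceeds the Fisher
    quantile, or (2) the objective's confidence interval is wider than
    the admissible value eps; gamma and beta are assumed levels.
    """
    n = g_hat.shape[0]
    T2 = (N - n) / n * (g_hat @ np.linalg.solve(A, g_hat))
    gradient_not_zero = T2 > stats.f.ppf(gamma, n, N - n)
    eta = stats.norm.ppf(0.5 + beta / 2.0)        # two-sided normal quantile
    ci_too_wide = 2.0 * eta * np.sqrt(D2 / N) > eps
    return gradient_not_zero or ci_too_wide
```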

Stochastic differentiation

We examine several estimators for stochastic gradient:Dual solution approach (DS);Finite difference approach (FD);Simulated perturbation stochastic approach

(SPSA);Likelihood ratio approach (LR).

Dual solution approach (DS)

The stochastic gradient is expressed as

$$g_{1}(x, y) = c - T^{\top} u^{*},$$

using the solution of the dual problem

$$u^{*} = \arg\max_{u} \bigl\{\, (h - T x)^{\top} u \;\big|\; W^{\top} u \le q \,\bigr\}.$$
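A sketch of the DS estimator using scipy's LP solver; the scenario container `(q, W, T, h)` and the sign conventions follow the reconstruction above, so treat them as assumptions rather than the slides' exact API:

```python
import numpy as np
from scipy.optimize import linprog

def g_dual(c, x, scenario):
    """DS estimator for one scenario, following the reconstruction above.

    scenario = (q, W, T, h) is an assumed container for the realised
    second-stage data.  The dual  max (h - Tx)'u  s.t.  W'u <= q  (u free)
    is solved by negating the objective for scipy's minimiser.
    """
    q, W, T, h = scenario
    d = W.shape[0]                                 # number of second-stage constraints
    res = linprog(-(h - T @ x), A_ub=W.T, b_ub=q,
                  bounds=[(None, None)] * d, method="highs")
    u_star = res.x
    return c - T.T @ u_star                        # g_1(x, y) = c - T'u*
```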

Finite difference (FD) approach

In this approach each ith component of the stochastic gradient is computed as

$$\bigl(g_{2}(x, y)\bigr)_{i} = \frac{f(x + \delta e_{i}, y) - f(x, y)}{\delta}, \qquad i = 1, \dots, n,$$

where $e_{i}$ is the vector with zero components except the ith one, equal to 1, and δ is a certain small value.
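A direct sketch of the FD estimator (n + 1 evaluations of f per scenario):

```python
import numpy as np

def g_fd(f, x, y, delta=1e-4):
    """Forward-difference estimator: n + 1 evaluations of f per scenario."""
    n = x.shape[0]
    f0 = f(x, y)
    grad = np.empty(n)
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0                               # unit vector along coordinate i
        grad[i] = (f(x + delta * e_i, y) - f0) / delta
    return grad
```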

Simulated perturbation stochastic approximation (SPSA)

$$\bigl(g_{3}(x, y)\bigr)_{i} = \frac{f(x + \delta \Delta, y) - f(x - \delta \Delta, y)}{2\,\delta\,\Delta_{i}}, \qquad i = 1, \dots, n,$$

where Δ is a random vector whose components take the values 1 or −1 with probabilities p = 0.5, and δ is some small value (Spall (2003)).
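An SPSA sketch needing only two evaluations of f per scenario, regardless of n:

```python
import numpy as np

def g_spsa(f, x, y, delta=1e-4, rng=None):
    """SPSA estimator: two evaluations of f per scenario, for any n."""
    rng = rng or np.random.default_rng()
    Delta = rng.choice([-1.0, 1.0], size=x.shape[0])   # +/-1 with p = 0.5
    df = f(x + delta * Delta, y) - f(x - delta * Delta, y)
    return df / (2.0 * delta * Delta)                  # componentwise division
```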

Likelihood ratio (LR) approach

$$g_{4}(x, y) = \bigl( f(x, y) - f(x) \bigr)\, \nabla_{y} \ln p(y)$$

(Rubinstein and Shapiro (1993); Sakalauskas (2002)).
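A sketch of the LR (score-function) estimator as reconstructed above; the baseline `f_baseline` and the `score` callable are illustrative assumptions, not the slides' API:

```python
def g_lr(f, f_baseline, x, y, score):
    """LR (score-function) estimator as reconstructed above.

    score      : callable returning grad_y ln p(y)
    f_baseline : estimate of f(x) subtracted to reduce variance; both
                 callables are illustrative assumptions, not slide API.
    """
    return (f(x, y) - f_baseline) * score(y)
```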

Numerical study of stochastic gradient estimators (1)

Methods for stochastic differentiation have been explored with test functions

$$F(x) = \mathbf{E}\, f^{0}(y) \rightarrow \min, \qquad f^{0}(y) = \sum_{i=1}^{n} \bigl( a_{i} y_{i}^{2} + b_{i}\,(1 - \cos(c_{i} y_{i})) \bigr),$$

where $y_{i} = x_{i} + \xi_{i}$ and $\xi_{i} \sim N(0, d^{2})$.
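The test function and noise model are straightforward to reproduce; a sketch with assumed parameter vectors a, b, c and noise scale d:

```python
import numpy as np

def f0(y, a, b, c):
    """Slides' test function: sum_i (a_i y_i^2 + b_i (1 - cos(c_i y_i)))."""
    return np.sum(a * y**2 + b * (1.0 - np.cos(c * y)))

def sample_y(x, d, N, rng=None):
    """Noise model y_i = x_i + xi_i with xi_i ~ N(0, d^2)."""
    rng = rng or np.random.default_rng()
    return x + d * rng.standard_normal((N, x.shape[0]))
```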

Numerical study of stochastic gradient estimators (2)

Stochastic gradient estimates from samples of size (number of scenarios) N were computed at the known optimum point x* (i.e. ∇F(x*) = 0) for test functions depending on n parameters. This was repeated 400 times, and the resulting sample of Hotelling T² statistics was analysed according to the ω² (Cramér-von Mises) and A² (Anderson-Darling) goodness-of-fit criteria.

ω² criterion versus number of variables n and Monte-Carlo sample size N (critical value 0.46)

n \ N     50      100     200     500     1000
2         0.30    0.24    0.10    0.08    0.04
3         0.37    0.12    0.09    0.06    0.04
4         0.19    0.19    0.13    0.08    0.04
5         0.75    0.13    0.12    0.08    0.06
6         1.53    0.34    0.10    0.10    0.08
7         1.56    0.39    0.13    0.08    0.09
8         1.81    0.42    0.27    0.18    0.10
9         4.18    0.46    0.26    0.20    0.12
10        8.12    0.56    0.53    0.25    0.17

A² criterion versus number of variables n and Monte-Carlo sample size N (critical value 2.49)

n \ N     50      100     200     500     1000
2         2.57    1.14    0.66    0.65    0.42
3         2.78    0.82    0.65    0.60    0.27
4         3.75    1.17    0.79    0.53    0.31
5         4.34    1.46    0.85    0.64    0.36
6         8.31    2.34    0.79    0.79    0.76
7         8.14    2.72    1.04    0.52    0.45
8         10.22   2.55    1.87    0.89    0.52
9         20.86   2.59    1.57    1.41    0.78
10        40.57   3.69    3.51    1.56    0.98

Statistical criteria versus Monte-Carlo sample size N for n = 40 variables (critical values 0.46 and 2.49)

Sample size, N    ω²      A²
1000              0.92    5.61
1500              0.76    4.15
2000              0.55    3.63
2100              0.68    2.84
2200              0.23    1.28
2500              0.19    1.14
3000              0.12    0.66

Statistical criteria versus Monte-Carlo sample size N for n = 60 variables (critical values 0.46 and 2.49)

Sample size, N    ω²      A²
1000              4.42    23.11
2000              1.31    6.46
3000              1.17    6.05
3300              0.46    2.42
3500              0.22    1.25
4000              0.09    0.56

Statistical criteria versus Monte-Carlo sample size N for n = 80 variables (critical values 0.46 and 2.49)

Sample size, N    ω²      A²
1000              15.53   83.26
2000              5.39    27.67
5000              0.79    3.97
6000              0.27    1.48
7000              0.13    0.68
10000             0.07    0.39

Conclusion: the T²-statistic distribution may be approximated by the Fisher law when the number of scenarios is at least:

Number of variables, n    Number of scenarios, Nmin (Monte-Carlo sample size)
20                        1000
40                        2200
60                        3300
100                       6000

Numerical study of stochastic gradient estimators (8)

Frequency of optimality hypothesis versus distance to optimum (n = 2)

[Figure: four panels (Dual Solution, Finite Difference, SPSA, Likelihood Ratio) showing the frequency (0-100%) of accepting the optimality hypothesis versus the distance to the optimum, with curves for N = 100, 200, 500, 1000, 5000.]

Frequency of optimality hypothesis versus distance to optimum (n = 10)

[Figure: the same four panels for n = 10; curves for N = 100, 200, 500, 1000, 5000.]

Frequency of optimality hypothesis versus distance to optimum (n = 20, 50, 100)

[Figures: the same four panels for n = 20, n = 50 and n = 100; curves for N = 100, 200, 500, 1000, 5000.]

Conclusion: stochastic differentiation by the Dual Solution and Finite Difference approaches enables us to reliably estimate the stochastic gradient when 2 ≤ n ≤ 100; SPSA and the Likelihood Ratio approach work when 2 ≤ n ≤ 20.


Gradient search procedure

Let some initial point x⁰ ∈ D ⊂ Rⁿ be chosen, a random sample of a certain initial size N⁰ be generated at this point, and the Monte-Carlo estimators be computed. The iterative stochastic procedure of gradient search is

$$x^{t+1} = x^{t} - \rho\, \tilde{g}_{\varepsilon}(x^{t}),$$

where $\tilde{g}_{\varepsilon}(x^{t})$ is the projection of $\tilde{g}(x^{t})$ onto the ε-feasible direction set

$$V_{\varepsilon}(x) = \bigl\{\, g \in \mathbb{R}^{n} \;\big|\; A g = 0,\ g_{j} \ge 0 \ \text{if}\ x_{j} \le \varepsilon \,\bigr\}.$$
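A simplified sketch of one iteration; the ε-feasible projection below handles the equality constraints exactly and imposes the sign conditions by clipping, which only approximates the exact projection operator of the slides:

```python
import numpy as np

def epsilon_feasible_direction(g, A_eq, x, eps):
    """Project g onto V_eps(x) = {g : A g = 0, g_j >= 0 if x_j <= eps}.

    A simplified sketch: the equality part is projected exactly
    (assumes A_eq has full row rank); the sign conditions are then
    imposed by clipping, which only approximates the exact projection.
    """
    AAt = A_eq @ A_eq.T
    proj = g - A_eq.T @ np.linalg.solve(AAt, A_eq @ g)   # null-space projection
    proj[(x <= eps) & (proj < 0.0)] = 0.0                # impose g_j >= 0 near active bounds
    return proj

def gradient_step(x, g_tilde, A_eq, rho, eps):
    """One iteration of the search: x^{t+1} = x^t - rho * g_eps(x^t)."""
    return x - rho * epsilon_feasible_direction(g_tilde, A_eq, x, eps)
```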

The rule to choose the number of scenarios

We propose the following rule to regulate the number of scenarios:

$$N^{t+1} = \min\biggl\{ \max\biggl\{ N_{\min},\ \frac{n \cdot \mathrm{Fish}(\gamma; n, N^{t} - n)}{\bigl(\tilde{g}(x^{t})\bigr)^{\top} \bigl(A(x^{t})\bigr)^{-1} \tilde{g}(x^{t})} \biggr\},\ N_{\max} \biggr\}.$$

Thus, the iterative stochastic search is performed until the statistical criteria no longer contradict the optimality conditions.
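The rule translates directly into code; `N_min`, `N_max` and `gamma` stand for the slides' N_min, N_max and confidence level, with assumed default values:

```python
import numpy as np
from scipy import stats

def next_sample_size(g_hat, A, N, N_min=100, N_max=10000, gamma=0.95):
    """Adaptive sample-size rule reconstructed from the slides.

    The next size grows inversely with the Hotelling quadratic form,
    so cheap small samples are used far from the optimum and large
    samples only near it; N_min, N_max, gamma are assumed defaults.
    """
    n = g_hat.shape[0]
    quad = g_hat @ np.linalg.solve(A, g_hat)          # g' A^{-1} g
    target = n * stats.f.ppf(gamma, n, N - n) / quad  # n * Fish(gamma; n, N-n) / quad
    return int(min(max(N_min, target), N_max))
```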

Linear convergence

Under some conditions of finiteness and smooth differentiability of the objective function, the proposed algorithm converges a.s. to a stationary point:

$$\lim_{t \to \infty} \tilde{g}_{\varepsilon}(x^{t}) = 0 \quad \text{a.s.},$$

with linear rate:

$$\mathbf{E}\,\| x^{t+1} - x^{*} \|^{2} \le l \cdot \mathbf{E}\,\| x^{t} - x^{*} \|^{2} + \frac{K}{N^{t}}, \qquad 0 < l < 1,$$

where K, L, C, l are some constants (Sakalauskas (2002), (2004)).

Linear convergence (continued)

Since the Monte-Carlo sample size increases at a geometric progression rate, it follows that

$$\sum_{i=0}^{t} N^{i} \le Q \cdot N^{t}$$

for some constant Q.

Conclusion: the proposed approach enables us to solve SP problems at the cost of computing the expected objective function only a finite number of times.

Numerical study of stochastic optimization algorithm

Test problems were solved from the database of two-stage stochastic linear optimisation problems:

http://www.math.bme.hu/~deak/twostage/ l1/20x20.1/ .

The dimensionality of the tasks ranges from n = 20 to n = 80 (30 to 120 variables at the second stage).

All solutions given in the database were achieved, and for a number of them we succeeded in improving the known solutions, especially for large numbers of variables.

Two-stage stochastic programming problem (n = 20)

The estimate of the optimal value of the objective function given in the database is 182.94234 ± 0.066 (improved here to 182.59248 ± 0.033).

Initial data were as follows:
- N⁰ = N_min = 100, N_max = 10000;
- maximal number of iterations t_max = 100; generation of trials was stopped once the estimated confidence interval of the objective function no longer exceeded the admissible value;
- confidence levels 0.95 and 0.99; admissible interval ε = 0.1, 0.2, 0.5, 1.0.

The solution was repeated 500 times.

Frequency of stopping versus number of iterations and admissible confidence interval

[Figure: stopping frequency (0-100%) over iterations 1-100, with curves for ε = 0.1, 0.2, 0.5, 1.]

Change of the objective function versus number of iterations and admissible interval

[Figure: objective value (about 182 to 184.5) over iterations 1-100, with curves for ε = 0.1, 0.2, 0.5, 1.]

Change of the confidence interval versus number of iterations and admissible interval

[Figure: confidence interval (0-7) over iterations 1-99, with curves for ε = 0.1, 0.2, 0.5, 1.]

Change of the Hotelling statistic versus number of iterations and admissible interval

[Figure: Hotelling statistic (0-10) over iterations 1-91, with curves for ε = 0.1, 0.2, 0.5, 1.]

Change of the Monte-Carlo sample size versus number of iterations and admissible interval

[Figure: sample size (0 to 1,400,000) over iterations 1-100, with curves for ε = 0.1, 0.2, 0.5, 1.]

Ratio under admissible interval (1)

[Figure: ratio of the total number of scenarios to the final sample size, $(\sum_{j=1}^{t} N^{j}) / N^{t}$, versus number of iterations, with curves for ε = 0.1, 0.2, 0.5, 1.]

Accuracy, ε    Objective function    Ratio (Σ_{j=1}^{t} N^{j}) / N^{t}
0.1            182.6101              20.14
0.2            182.6248              19.73
0.5            182.7186              19.46
1.0            182.9475              19.43

Ratio under admissible interval (2)

[Figure: the ratio $(\sum_{j=1}^{t} N^{j}) / N^{t}$ under the admissible interval.]

Solving DB Test Problems (1)

Two-stage SP problem: first stage 80 variables and 40 constraints; second stage 80 variables and 120 constraints.

DB given solution: 649.604 ± 0.053. Solution by the developed algorithm: 646.444 ± 0.999.

Solving DB Test Problems (2)

Two-stage SP problem: first stage 80 variables and 40 constraints; second stage 80 variables and 120 constraints.

DB given solution: 6656.637 ± 0.814. Solution by the developed algorithm: 6648.548 ± 0.999.

Solving DB Test Problems (3)

Two-stage SP problem: first stage 80 variables and 40 constraints; second stage 80 variables and 120 constraints.

DB given solution: 586.329 ± 0.327. Solution by the developed algorithm: 475.012 ± 0.999.

Comparison with Benders decomposition

[Figure: admissible confidence interval (0 to 2.5) versus computing time (0 to 180 s), comparing Benders decomposition and the Monte-Carlo approach.]

Conclusions

- A stochastic iterative method has been developed to solve SLP problems by a finite sequence of Monte-Carlo sampling estimators.
- The approach presented rests on the statistical termination procedure and the adaptive regulation of the Monte-Carlo sample size.
- The computational results show that the approach provides reliable solution and testing of the optimality hypothesis over a wide range of SLP problem dimensionality (2 < n < 100).
- The approach enables us to generate an almost unbounded number of scenarios and to solve SLP problems to admissible accuracy.
- The total volume of computations for solving an SLP problem exceeds by only several times the volume of scenarios needed to evaluate one value of the expected objective function.

References

Rubinstein, R., and Shapiro, A. (1993). Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method. Wiley & Sons, N.Y.

Shapiro, A., and Homem-de-Mello, T. (1998). A simulation-based approach to two-stage stochastic programming with recourse. Mathematical Programming, 81, 301-325.

Sakalauskas, L. (2002). Nonlinear stochastic programming by Monte-Carlo estimators. European Journal of Operational Research, 137, 558-573.

Spall, J.C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Wiley & Sons.

Sakalauskas, L. (2004). Application of the Monte-Carlo method to nonlinear stochastic optimization with linear constraints. Informatica, 15(2), 271-282.

Sakalauskas, L. (2006). Towards implementable nonlinear stochastic programming. In: K. Marti et al. (Eds.), Coping with Uncertainty. Springer Verlag.

Announcements

Welcome to the EURO Mini Conference

“Continuous Optimization and Knowledge Based Technologies (EUROPT-2008)”

May 20-23, 2008, Neringa, Lithuania

http://www.mii.lt/europt-2008
