Monte-Carlo method for Two-Stage SLP

Lecture 5

Leonidas Sakalauskas, Institute of Mathematics and Informatics, Vilnius, Lithuania <[email protected]>

EURO Working Group on Continuous Optimization


DESCRIPTION

AACIMP 2010 Summer School lecture by Leonidas Sakalauskas. "Applied Mathematics" stream. "Stochastic Programming and Applications" course. Part 5. More info at http://summerschool.ssa.org.ua

TRANSCRIPT

Page 1: Monte-Carlo method for Two-Stage SLP

Lecture 5

Page 2

Content

Introduction

Monte Carlo estimators

Stochastic Differentiation

ε-feasible gradient approach for two-stage SLP

Interior-point method for two-stage SLP

Testing optimality

Convergence analysis

Counterexample

Page 3

Two-stage stochastic optimization problem

F(x) = c^T x + E[ min_y { q^T y | W y + T x = h, y ∈ R^m, y ≥ 0 } ] → min,

A x = b, x ∈ R^n, x ≥ 0,

where the vectors q, h and the matrices W, T are assumed random in general.

Page 4

Two-stage stochastic optimization problem

Say, the vector h can be distributed multivariate normally, h ~ N(μ, S), where μ and S are, correspondingly, the vector of means and the covariance matrix.

Page 5

Two-stage stochastic optimization problem

A random vector Z distributed N(μ, S) is simulated as

Z = μ + R^T ξ,

where ξ is a standard normal vector and R is a triangular matrix such that

R^T R = S (Cholesky factorization).
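This simulation step can be sketched with NumPy. A minimal sketch: note that `numpy.linalg.cholesky` returns a lower-triangular factor L with L L^T = S, which plays the role of R^T here; the values of μ and S below are illustrative, not from the lecture.

```python
import numpy as np

# Simulate draws of Z ~ N(mu, S) via a Cholesky factor of S.
# mu and S are made-up illustrative values.
rng = np.random.default_rng(42)
mu = np.array([1.0, -2.0])
S = np.array([[2.0, 0.6],
              [0.6, 1.0]])

L = np.linalg.cholesky(S)               # lower triangular, L @ L.T == S
xi = rng.standard_normal((100_000, 2))  # rows: standard normal vectors
Z = mu + xi @ L.T                       # rows: draws from N(mu, S)
```

The sample mean and sample covariance of the rows of Z then approximate μ and S.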

Page 6

The two-stage stochastic optimization problem with complete recourse is then

F(x) = c^T x + E Q(x, ξ) → min over x ∈ D,

where

Q(x, ξ) = min_y { q^T y | W y + T x = h, y ∈ R^m, y ≥ 0 },

subject to the feasible set

D = { x ∈ R^n | A x = b, x ≥ 0 }.

Page 7

Under the assumption that a solution exists at the second stage and the measure P is continuous, it can be derived that the objective function is smoothly differentiable and its gradient is

∇F(x) = E g(x, ξ),

where

g(x, ξ) = c − T^T u*

is given by the set of solutions u* of the dual problem

(h − T x)^T u* = max_u { (h − T x)^T u | W^T u ≤ q, u ∈ R^s }.
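For a single scenario, the stochastic gradient g(x, ξ) = c − T^T u* can be obtained by solving the dual LP above. A minimal sketch with SciPy on a tiny made-up instance: the data c, q, W, T, h are invented (W is chosen so that W^T u ≤ q bounds u in a box, keeping the dual bounded), they are not the lecture's test problem.

```python
import numpy as np
from scipy.optimize import linprog

n, m, s = 3, 4, 2                       # first-stage vars, second-stage vars, rows
c = np.array([1.0, 2.0, 0.5])
q = np.ones(m)
W = np.array([[1.0, -1.0, 0.0,  0.0],   # W^T u <= q  <=>  |u_i| <= 1
              [0.0,  0.0, 1.0, -1.0]])
rng = np.random.default_rng(0)
T = rng.standard_normal((s, n))
h = rng.standard_normal(s)
x = np.zeros(n)

# Dual of the second stage: max (h - T x)^T u  s.t.  W^T u <= q, u free.
# linprog minimizes, so the objective is negated; bounds=(None, None)
# leaves u unconstrained in sign.
res = linprog(-(h - T @ x), A_ub=W.T, b_ub=q, bounds=(None, None))
u_star = res.x
g = c - T.T @ u_star                    # g(x, xi) = c - T^T u*
```

Averaging such g over a Monte-Carlo sample of scenarios gives the gradient estimator used on the following slides.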

Page 8

Monte-Carlo samples

We assume that a Monte-Carlo sample of a certain size N,

Y = (y^1, y^2, ..., y^N),

is provided for any x ∈ D ⊂ R^n, so that the sampling estimator of the objective function

F̃(x) = (1/N) Σ_{j=1}^{N} f(x, y^j)

and the sampling variance

D̃²(x) = (1/(N−1)) Σ_{j=1}^{N} ( f(x, y^j) − F̃(x) )²

can be computed.
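The two estimators can be computed in a vectorised way. In this sketch, f(x, y) is a hypothetical per-scenario cost standing in for c^T x + Q(x, y); any per-scenario cost works the same way.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
x = 2.0
y = rng.standard_normal(N)             # Monte-Carlo sample y^1, ..., y^N

# Hypothetical scenario cost f(x, y^j), chosen only for illustration.
vals = x + np.maximum(y - x, 0.0)

F_hat = vals.mean()                    # (1/N) * sum_j f(x, y^j)
D2_hat = vals.var(ddof=1)              # (1/(N-1)) * sum_j (f - F_hat)^2
```

`ddof=1` gives exactly the 1/(N−1) normalisation of the slide's variance estimator.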

Page 9

Gradient

The gradient is evaluated using the same random sample:

g̃(x) = (1/N) Σ_{j=1}^{N} g(x, y^j),  x ∈ D ⊂ R^n.

Page 10

Covariance matrix

We use the sampling covariance matrix

A(x) = (1/(N−n)) Σ_{j=1}^{N} ( g(x, y^j) − g̃(x) ) ( g(x, y^j) − g̃(x) )^T

later on for normalising the gradient estimator.
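With the per-scenario gradients stacked as rows of a matrix, g̃(x) and A(x) with the N − n normalisation are one line each. A sketch, with a made-up Gaussian model for the per-scenario gradients:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 3, 500
# Rows play the role of g(x, y^j); the additive-noise model is made up.
G = np.array([1.0, -0.5, 0.2]) + 0.1 * rng.standard_normal((N, n))

g_bar = G.mean(axis=0)                 # gradient estimator g~(x)
dev = G - g_bar                        # deviations from the sample mean
A_x = dev.T @ dev / (N - n)            # sampling covariance, N - n normalisation
```

A(x) is symmetric positive definite (for N > n and non-degenerate samples), so it can safely be inverted in the Hotelling-type statistics below.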

Page 11

ε-feasible direction approach

Let us define the set of feasible directions as follows:

V(x) = { g ∈ R^n | A g = 0, g_j = 0 if x_j = 0, 1 ≤ j ≤ n }.

Page 12

Gradient projection

Denote by g^U the projection of a vector g onto a set U. Since the objective function is differentiable, a solution x ∈ D is optimal if

( ∇F(x) )^{V(x)} = 0.

Page 13

Gradient projection

The projection of G onto the set { g | A g = 0 } is

G^P = P G,

where

P = I − A^T (A A^T)^{−1} A

is the projector.
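The projector formula can be checked numerically. In this sketch A is an arbitrary full-row-rank matrix, not the lecture's constraint matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 5))        # 2 equality constraints, 5 variables

# P = I - A^T (A A^T)^{-1} A projects onto the null space {g : A g = 0}.
P = np.eye(5) - A.T @ np.linalg.inv(A @ A.T) @ A

g = rng.standard_normal(5)
g_proj = P @ g                         # component of g feasible for A g = 0
```

The defining properties are that A (P g) = 0 for every g and that P is idempotent, P² = P.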

Page 14

Assume a certain multiplier ρ > 0 to be given and define the function ρ̂_x : V(x) → R by

ρ̂_x(g) = min( ρ, min { x_j / g_j | 1 ≤ j ≤ n, g_j > 0 } ),

with ρ̂_x(g) = ρ if g_j ≤ 0 for all 1 ≤ j ≤ n. Thus x − ρ g ∈ D when 0 ≤ ρ ≤ ρ̂_x(g), for any g ∈ V(x), x ∈ D.

Page 15

Now, let a certain small value ε > 0 be given. Then we introduce the function

ε_x(g) = max( ε, min { x_j / g_j | 1 ≤ j ≤ n, g_j > 0 } ),

with ε_x(g) = 0 if g_j ≤ 0 for all 1 ≤ j ≤ n, and define the ε-feasible set

V_ε(x) = { g ∈ R^n | A g = 0, g_j = 0 if x_j ≤ ε_x(g) }.

Page 16

The starting point can be obtained as the solution of the deterministic linear problem

(x⁰, y⁰) = arg min_{x,y} { c^T x + q^T y | A x = b, W y + T x = h, y ∈ R^m, x ∈ R^n, x, y ≥ 0 }.

The iterative stochastic procedure of gradient search can then be used:

x^{t+1} = x^t − ρ^t G^t,

where ρ^t = ρ̂_{x^t}(G^t) is the step-length multiplier and

G^t = ( g̃(x^t) )^{V_ε(x^t)}

is the projection of the gradient estimator onto the ε-feasible set.
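The deterministic starting LP can be solved jointly in the stacked variable (x, y). The tiny instance below is entirely made up (W and T are identities and h is replaced by a fixed vector), purely to illustrate how the joint problem is assembled for an LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: first stage A x = b, second stage T x + W y = h_bar.
c = np.array([1.0, 1.0]); q = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
T = np.eye(2); W = np.eye(2)
h_bar = np.array([3.0, 3.0])

# Stack z = (x, y) and minimise c^T x + q^T y over the joint constraints.
obj = np.concatenate([c, q])
A_eq = np.block([[A, np.zeros((1, 2))],
                 [T, W]])
b_eq = np.concatenate([b, h_bar])
res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x0, y0 = res.x[:2], res.x[2:]          # starting point (x^0, y^0)
```

For this instance the unique optimum is x⁰ = (0, 2), y⁰ = (3, 1) with objective value 7.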

Page 17

Monte-Carlo sample size problem

There is no great need to compute the estimators with high accuracy at the start of the optimisation: at that stage it suffices to evaluate, only approximately, the direction leading to the optimum.

Therefore one can use rather small samples at the beginning of the search and increase the sample size later on, so that the objective function is estimated with the desired accuracy exactly at the time of deciding that a solution of the optimisation problem has been found.

Page 18

We propose the following rule for regulating the sample size:

N^{t+1} = min( max( N_min, n · Fish(γ, n, N^t − n) / ( (G̃(x^t))^T (A(x^t))^{−1} G̃(x^t) ) ), N_max ),

where Fish(γ, n, N^t − n) is the γ-quantile of the Fisher distribution with (n, N^t − n) degrees of freedom.
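One step of this rule can be sketched with SciPy's Fisher quantile. The function name and default arguments are mine; the clipping bounds N_min = 100 and N_max = 10000 match the experimental setup reported later in the deck.

```python
import numpy as np
from scipy.stats import f as fisher

def next_sample_size(g_bar, A_x, N_t, n, gamma=0.95,
                     N_min=100, N_max=10_000):
    """Next Monte-Carlo sample size: inversely proportional to the
    covariance-normalised squared gradient norm, clipped to [N_min, N_max]."""
    quantile = fisher.ppf(gamma, n, N_t - n)         # Fish(gamma, n, N^t - n)
    hotelling = g_bar @ np.linalg.solve(A_x, g_bar)  # g~^T A(x)^{-1} g~
    return int(min(max(N_min, n * quantile / hotelling), N_max))
```

Far from the optimum the gradient estimate is large and small samples suffice; near the optimum the statistic shrinks and the rule pushes N^{t+1} toward N_max.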

Page 19

Statistical testing of the optimality hypothesis

The optimality hypothesis can be accepted for some point x^t with significance μ if the following condition is satisfied:

T² = (N^t − n) · (g̃(x^t))^T (A(x^t))^{−1} g̃(x^t) / n ≤ Fish(μ, n, N^t − n).

Next, we can use asymptotic normality again and decide that the objective function is estimated with a permissible accuracy δ if its confidence bound does not exceed this value:

η_β · √( D̃²(x^t) / N^t ) ≤ δ,

where η_β is the β-quantile of the standard normal distribution.
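The optimality test is a Hotelling-type test on the gradient estimator and can be sketched as a small function (the function name is mine; `fisher.ppf` supplies the Fish quantile):

```python
import numpy as np
from scipy.stats import f as fisher

def optimality_accepted(g_bar, A_x, N_t, n, mu=0.95):
    """Accept the optimality hypothesis when the covariance-normalised
    gradient statistic T^2 is below the Fisher quantile Fish(mu, n, N_t - n)."""
    T2 = (N_t - n) * (g_bar @ np.linalg.solve(A_x, g_bar)) / n
    return T2 <= fisher.ppf(mu, n, N_t - n)
```

A near-zero gradient estimate is accepted as statistically indistinguishable from a stationary point, while a clearly nonzero one is rejected and the search continues.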

Page 20

Computer simulation

A two-stage stochastic linear optimisation problem. The dimensions of the task are as follows: the first stage has 10 rows and 20 variables; the second stage has 20 rows and 30 variables.

http://www.math.bme.hu/~deak/twostage/l1/20x20.1/ (2006-01-20).

Page 21

Two-stage stochastic programming

The estimate of the optimal value of the objective function given in the database is 182.94234 ± 0.066.

Initial data were as follows: N⁰ = N_min = 100, N_max = 10000; maximal number of iterations t_max = 100; generation of trials was terminated when the estimated confidence interval of the objective function no longer exceeded the admissible value δ; confidence levels 0.95 and 0.99; δ = 0.1; 0.2; 0.5; 1.0.

Page 22

Frequency of stopping under admissible interval

[Figure: frequency of stopping (0–100) versus iteration number (1–100), one curve per admissible interval δ = 0.1, 0.2, 0.5, 1.0]

Page 23

Change of the objective function under admissible interval

[Figure: objective function estimate (182–184.5) over iterations 1–100, one curve per admissible interval δ = 0.1, 0.2, 0.5, 1.0]

Page 24

Change of confidence interval under admissible interval

[Figure: confidence interval width (0–7) over iterations 1–100, one curve per admissible interval δ = 0.1, 0.2, 0.5, 1.0]

Page 25

Change of the Monte-Carlo sample size under admissible interval

[Figure: Monte-Carlo sample size (up to 1,400,000) over iterations 1–100, one curve per admissible interval δ = 0.1, 0.2, 0.5, 1.0]

Page 26

Hotelling statistics under admissible interval

[Figure: Hotelling statistic (0–10) over iterations 1–100, one curve per admissible interval δ = 0.1, 0.2, 0.5, 1.0]

Page 27

Histogram of ratio under admissible interval 1

[Figure: histogram of the ratio of Monte-Carlo sample sizes (bins 8–36), admissible interval δ = 1]

Page 28

Wrap-Up and Conclusions

A stochastic adaptive method has been developed to solve stochastic linear problems by a finite sequence of Monte-Carlo sampling estimators.

The method is grounded in adaptive regulation of the size of the Monte-Carlo samples and a statistical termination procedure that takes the accuracy of the statistical modeling into account.

The proposed adjustment of the sample size, taken inversely proportional to the square of the norm of the Monte-Carlo estimate of the gradient, guarantees convergence a.s. at a linear rate.