Nonlinear Stochastic Programming by the Monte-Carlo Method


AACIMP 2010 Summer School lecture by Leonidas Sakalauskas. "Applied Mathematics" stream. "Stochastic Programming and Applications" course. Part 4. More info at http://summerschool.ssa.org.ua


Nonlinear Stochastic Programming

by the Monte-Carlo method

Lecture 4

Leonidas Sakalauskas, Institute of Mathematics and Informatics, Vilnius, Lithuania <sakal@ktl.mii.lt>

EURO Working Group on Continuous Optimization

Content

Stochastic unconstrained optimization

Monte Carlo estimators

Statistical testing of optimality

Gradient-based stochastic algorithm

Rule for Monte-Carlo sample size regulation

Counterexamples

Nonlinear stochastic constrained optimization

Convergence Analysis

Counterexample

Stochastic unconstrained optimization

Let us consider the stochastic unconstrained optimization problem

$$F(x) = \mathbf{E}\, f(x, \omega) \to \min, \quad x \in R^n,$$

where $f: R^n \times \Omega \to R$ is a random function, $\omega$ is an elementary event in the probability space $(\Omega, \Sigma, P_x)$, and $P_x$ is the measure defined by the probability density function $p: R^n \to R$.

Monte-Carlo samples

We assume here that a Monte-Carlo sample of a certain size N,

$$Y = (y^1, y^2, \dots, y^N),$$

is provided for any $x \in R^n$, so that the sampling estimator of the objective function

$$\tilde{F}(x) = \frac{1}{N} \sum_{j=1}^{N} f(x, y^j)$$

and the sampling variance

$$\tilde{D}^2(x) = \frac{1}{N - 1} \sum_{j=1}^{N} \left( f(x, y^j) - \tilde{F}(x) \right)^2$$

can be computed.
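For concreteness, here is a minimal numpy sketch of these two estimators. The quadratic random function used at the bottom is purely illustrative, not part of the lecture.

```python
import numpy as np

def objective_estimates(f, x, Y):
    """Sampling mean F~(x) and sampling variance D~2(x) over a Monte-Carlo sample Y."""
    values = np.array([f(x, y) for y in Y])
    F = values.mean()             # F~(x) = (1/N) sum_j f(x, y^j)
    D2 = values.var(ddof=1)       # D~2(x), with the 1/(N-1) divisor
    return F, D2

# Illustrative usage with f(x, y) = ||x - y||^2 and y ~ N(0, I):
rng = np.random.default_rng(0)
x = np.zeros(3)
Y = rng.standard_normal((1000, 3))    # Monte-Carlo sample of size N = 1000
F, D2 = objective_estimates(lambda x, y: np.sum((x - y) ** 2), x, Y)
```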

Assume the stochastic gradient is evaluated using the same random sample:

$$\tilde{g}(x) = \frac{1}{N} \sum_{j=1}^{N} g(x, y^j),$$

and the sampling covariance matrix is computed as well:

$$A(x) = \frac{1}{N} \sum_{j=1}^{N} \left( g(x, y^j) - \tilde{g}(x) \right) \left( g(x, y^j) - \tilde{g}(x) \right)^T.$$
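A matching sketch for the gradient estimates; g(x, y) is a user-supplied stochastic gradient, and the plain 1/N divisor in the covariance is one reasonable reading of the slide.

```python
import numpy as np

def gradient_estimates(g, x, Y):
    """Sampling mean g~(x) and sampling covariance matrix A(x) of the stochastic gradient."""
    G = np.array([g(x, y) for y in Y])    # one gradient per sample point, shape (N, n)
    g_bar = G.mean(axis=0)                # g~(x) = (1/N) sum_j g(x, y^j)
    R = G - g_bar
    A = R.T @ R / len(Y)                  # A(x), assuming the 1/N normalization
    return g_bar, A
```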

Gradient search procedure

Let some initial point $x^0 \in R^n$ be given, let a random sample of a certain initial size N0 be generated at this point, and let the Monte-Carlo estimates be computed.

The iterative stochastic procedure of gradient search can be used further:

$$x^{t+1} = x^t - \rho\, \tilde{g}(x^t),$$

with step length $\rho > 0$.
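Putting the pieces together, the iteration above can be sketched as follows. The fixed step length rho, the iteration count, and the resampling at every iterate are assumptions filled in for illustration; sample(N) stands for any user-supplied Monte-Carlo sampler.

```python
import numpy as np

def gradient_search(g, sample, x0, rho=0.1, N=100, steps=50):
    """Stochastic gradient descent x^{t+1} = x^t - rho * g~(x^t)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        Y = sample(N)                                  # fresh Monte-Carlo sample at x^t
        g_bar = np.mean([g(x, y) for y in Y], axis=0)  # g~(x^t)
        x = x - rho * g_bar
    return x
```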

Monte-Carlo sample size problem

There is no great necessity to compute the estimators with high accuracy when starting the optimization, because at that stage it suffices to evaluate only approximately the direction leading to the optimum.

Therefore, one can use rather small samples at the beginning of the search and increase the sample size later on, so as to obtain the estimate of the objective function with the desired accuracy precisely at the time of deciding that a solution to the optimization problem has been found.

The following version for regulating the sample size is proposed:

$$N^{t+1} = \min\left\{ \max\left\{ N_{\min},\; \frac{n\, \mathrm{Fish}(\gamma, n, N^t - n)}{(\tilde{g}(x^t))^T (A(x^t))^{-1}\, \tilde{g}(x^t)} \right\},\; N_{\max} \right\}.$$
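A sketch of this regulation rule, assuming scipy's Fisher quantile f.ppf plays the role of Fish(γ, n, N − n) and using the estimates g̃, A from the previous slides:

```python
import numpy as np
from scipy.stats import f as fisher

def next_sample_size(g_bar, A, N, n, gamma=0.95, N_min=100, N_max=100000):
    """N^{t+1} = min(max(N_min, n * Fish(gamma, n, N - n) / (g~' A^{-1} g~)), N_max)."""
    hotelling = g_bar @ np.linalg.solve(A, g_bar)   # (g~)' A^{-1} g~
    quantile = fisher.ppf(gamma, n, N - n)          # Fisher quantile Fish(gamma, n, N - n)
    return int(min(max(N_min, n * quantile / hotelling), N_max))
```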

Statistical testing of the optimality hypothesis

The optimality hypothesis could be accepted for some point $x^t$ with significance $1 - \mu$ if the following condition is satisfied:

$$T^2 = \frac{(N^t - n)\, (\tilde{g}(x^t))^T (A(x^t))^{-1}\, \tilde{g}(x^t)}{n} \le \mathrm{Fish}(\mu, n, N^t - n).$$
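The test can be sketched as follows, reading Fish(μ, ·) as the upper-μ critical value of the Fisher distribution, again supplied by scipy:

```python
import numpy as np
from scipy.stats import f as fisher

def optimality_accepted(g_bar, A, N, n, mu=0.05):
    """Accept the optimality hypothesis if the T^2 statistic stays below the critical value."""
    T2 = (N - n) * (g_bar @ np.linalg.solve(A, g_bar)) / n
    return T2 <= fisher.ppf(1 - mu, n, N - n)   # upper-mu critical value Fish(mu, n, N - n)
```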

Statistical testing of the optimality hypothesis

Next, we can use the asymptotic normality again and decide that the objective function is estimated with a permissible accuracy $\varepsilon$, if its confidence bound does not exceed this value:

$$\eta_\beta \sqrt{\tilde{D}^2(x^t)/N^t} \le \varepsilon.$$
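The accompanying accuracy check, with the confidence quantile η_β taken as a standard normal quantile (an assumption consistent with the asymptotic-normality argument):

```python
from scipy.stats import norm

def accuracy_ok(D2, N, eps, beta=0.95):
    """Objective estimated to accuracy eps if the confidence bound is within eps."""
    eta = norm.ppf(beta)                  # eta_beta: standard normal quantile
    return eta * (D2 / N) ** 0.5 <= eps
```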

Importance sampling

Let us consider an application of stochastic programming to the estimation of probabilities of rare events:

$$P(\xi > x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-(t-a)^2/2}\, e^{-at + a^2/2}\, dt = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} g(a, t)\, e^{-(t-a)^2/2}\, dt,$$

where:

$$g(a, t) = e^{-at + a^2/2}.$$
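A minimal sketch of this change of measure for P(ξ > x) with ξ ~ N(0, 1): sample t from the shifted density N(a, 1) and average the likelihood ratio g(a, t) over the tail.

```python
import numpy as np

def importance_estimate(x, a, N, rng):
    """Estimate P(xi > x), xi ~ N(0, 1), by sampling t ~ N(a, 1) and reweighting."""
    t = a + rng.standard_normal(N)        # sample from the shifted density N(a, 1)
    w = np.exp(-a * t + a * a / 2)        # likelihood ratio g(a, t)
    return np.mean((t > x) * w)

rng = np.random.default_rng(0)
print(importance_estimate(5.0, 5.0, 10**6, rng))   # ~2.87e-7
```

With the shift a = x, a million draws already pin the probability down to several decimal places, whereas crude Monte-Carlo would see roughly one hit per 3.5 million draws.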

Importance sampling

Assume $a$ is the parameter that should be chosen. The second moment of the estimator is:

$$\frac{1}{\sqrt{2\pi}} \int_x^{\infty} g(a, t)^2\, e^{-(t-a)^2/2}\, dt = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-2at + a^2}\, e^{-(t-a)^2/2}\, dt = e^{a^2} \cdot \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-(t+a)^2/2}\, dt = e^{a^2}\, P(\xi > x + a).$$


Importance sampling

Select the parameter $a$ in order to minimize the variance:

$$D^2(x, a) = e^{a^2}\, P(\xi > x + a) - \left( P(\xi > x) \right)^2.$$

[Figure: the variance $D^2(x, a)$ plotted as a function of the shift parameter $a$.]
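Given the closed form above, the optimal shift can be found numerically. A sketch using scipy, with norm.sf supplying the exact tail probability P(ξ > ·):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

x = 5.0

def variance(a):
    # D^2(x, a) = e^{a^2} P(xi > x + a) - P(xi > x)^2
    return np.exp(a * a) * norm.sf(x + a) - norm.sf(x) ** 2

best = minimize_scalar(variance, bounds=(0.0, 2 * x), method="bounded")
print(best.x)    # optimal shift; for rare events it lies close to x itself
```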

Importance sampling

t    a^t     Sample size    ε (%)     P̃(x)
1    5.000   1000           16.377    2.48182×10⁻⁷
2    5.092   51219          2.059     2.87169×10⁻⁷
3    5.097   217154         1.000     2.87010×10⁻⁷

(The exact value is $P(\xi > 5) = 2.86650 \times 10^{-7}$.)

Manpower-planning problem

The employer must decide upon a base level of regular staff at various skill levels. The recourse actions available are regular-staff overtime or outside temporary help, used to meet an unknown demand for service at minimum cost (Ermoliev and Wets, 1988).

Manpower-planning problem

$x_j$: base level of regular staff at skill level j = 1, 2, 3

$y_{j,t}$: amount of overtime help

$z_{j,t}$: amount of temporary help

$c_j$: cost of regular staff at skill level j = 1, 2, 3

$q_j$: cost of overtime

$r_j$: cost of temporary help

$w_t$: demand for services at period t

$\omega_t$: absentee rate for regular staff at time t

$\alpha_{j-1}$: ratio of the amount at skill level j required per amount at level j − 1

Manpower-planning problem

The problem is to choose the number of staff at three levels $x = (x_1, x_2, x_3)$ in order to minimize the expected costs:

$$F(x) = \sum_{j=1}^{3} c_j x_j + \mathbf{E} \left[ \min_{y, z} \sum_{t=1}^{12} \sum_{j=1}^{3} \left( q_j y_{j,t} + r_j z_{j,t} \right) \right] \to \min,$$

s.t.

$$x_j \ge 0, \quad y_{j,t} \ge 0, \quad z_{j,t} \ge 0,$$

$$\sum_{j=1}^{3} \left( y_{j,t} + z_{j,t} \right) \ge w_t - (1 - \omega_t) \sum_{j=1}^{3} x_j, \quad t = 1, 2, \dots, 12,$$

$$y_{j,t} \le 0.2\, (1 - \omega_t)\, x_j, \quad j = 1, 2, 3, \quad t = 1, 2, \dots, 12,$$

$$\left( x_j + y_{j,t} + z_{j,t} \right) - \alpha_{j-1} \left( x_{j-1} + y_{j-1,t} + z_{j-1,t} \right) \ge 0, \quad j = 2, 3, \quad t = 1, 2, \dots, 12,$$

where the demands are normal: $w_t \sim N(\mu_t, \sigma_t^2)$, $t = 1, 2, \dots, 12$.
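To make the two-stage structure concrete, here is a simplified single-period sketch of the recourse subproblem, solved with scipy's linprog. The skill-ratio constraints are omitted, and all cost and demand figures are illustrative assumptions, not the lecture's data.

```python
import numpy as np
from scipy.optimize import linprog

def recourse_cost(x, w, omega, q, r):
    """Cheapest overtime y and temporary help z covering demand w in one period."""
    n = len(x)
    c = np.concatenate([q, r])                    # minimize q'y + r'z
    # Coverage: sum_j (y_j + z_j) >= w - (1 - omega) * sum_j x_j
    A_ub = -np.ones((1, 2 * n))
    b_ub = [-(w - (1 - omega) * np.sum(x))]
    # Overtime capped at 20% of the regular staff actually present.
    bounds = [(0, 0.2 * (1 - omega) * xj) for xj in x] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

x = np.array([9222.0, 5533.0, 1106.0])            # first-stage staff levels
print(recourse_cost(x, w=18000.0, omega=0.05,
                    q=np.array([3.0, 2.0, 1.5]), r=np.array([5.0, 4.0, 3.0])))
```

Averaging recourse_cost over sampled (w, ω) and adding the first-stage term Σ c_j x_j gives a Monte-Carlo estimate of F(x), in the spirit of the estimators above.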

Manpower-planning problem

l     x1      x2      x3      F
0     9222    5533    1106    94.899
1     9222    5533    1106    94.899
10    9376    5616    1106    96.832
30    9452    5672    1036    96.614

Manpower amounts and costs in USD, with a confidence interval of 100 USD.

Nonlinear Stochastic Programming

The constrained continuous (nonlinear) stochastic programming problem is in general:

$$F_0(x) = \mathbf{E}\, f_0(x, \omega) = \int_{R^n} f_0(x, z)\, p(z)\, dz \to \min,$$

$$F_1(x) = \mathbf{E}\, f_1(x, \omega) = \int_{R^n} f_1(x, z)\, p(z)\, dz = 0,$$

$$x \in R^n.$$

Nonlinear Stochastic Programming

Let us define the Lagrange function:

$$L(x, \lambda) = F_0(x) + \lambda F_1(x),$$

$$l(x, \lambda, \omega) = f_0(x, \omega) + \lambda f_1(x, \omega),$$

so that

$$L(x, \lambda) = \mathbf{E}\, l(x, \lambda, \omega).$$

Nonlinear Stochastic Programming

Procedure for stochastic optimization:

$$x^{t+1} = x^t - \rho\, \nabla_x \tilde{L}(x^t, \lambda^t),$$

$$\lambda^{t+1} = \max\left[ 0,\; \lambda^t + \rho\, \tilde{F}_1(x^t) \right],$$

where $x^0$, $\lambda^0$ are initial values and $\rho > 0$, $N^0$ are parameters of the optimization.
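A sketch of this primal-dual iteration, assuming user-supplied sampling estimators grad_L (for ∇x L̃) and F1 (for F̃₁); both names are placeholders, not lecture notation.

```python
import numpy as np

def constrained_search(grad_L, F1, x0, lam0=0.0, rho=0.05, steps=200):
    """x^{t+1} = x^t - rho * grad_x L~(x^t, lam^t); lam^{t+1} = max(0, lam^t + rho * F1~(x^t))."""
    x, lam = np.asarray(x0, dtype=float), lam0
    for _ in range(steps):
        x = x - rho * grad_L(x, lam)        # primal step on the Lagrange function
        lam = max(0.0, lam + rho * F1(x))   # projected multiplier update
    return x, lam
```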

Conditions and testing of optimality

The conditions of optimality are

$$\nabla F_0(x) + \lambda\, \nabla F_1(x) = 0, \qquad F_1(x) = 0.$$

The optimality hypothesis is tested through the statistic

$$\frac{(N^t - n)\, (\nabla_x \tilde{L})^T A^{-1}\, (\nabla_x \tilde{L})}{n} \le \mathrm{Fish}(\mu, n, N^t - n),$$

and the accuracy of the estimates $\tilde{F}_i(x^t)$ is checked through the confidence bounds

$$2\, \eta_\beta \sqrt{\tilde{D}_i^2(x^t)/N^t} \le \varepsilon_i, \quad i = 0, 1.$$

Analysis of Convergence

In general, the sample size is increased as a geometric progression:

$$N^{t+1} \approx Q\, N^t, \quad Q > 1.$$

Wrap-Up and Conclusions

The approach presented in this lecture is grounded in a termination procedure and a rule for adaptive regulation of the Monte-Carlo sample size, which take into account the accuracy of the statistical modeling.

Wrap-Up and Conclusions

The computer study shows that the developed approach provides an estimator for reliable testing of the optimality hypothesis over a wide range of dimensions of the stochastic optimization problem (2 < n < 100).

The proposed termination procedure allows us to test the optimality hypothesis and to reliably evaluate the confidence intervals of the objective and constraint functions.
