
K. Salah 1

Input Analysis

Ref: Law & Kelton, Chapter 6

K. Salah 2

Driving Simulation Models

Stochastic simulation models use random variables to represent inputs such as inter-arrival times, service times, probabilities of storms, and the proportion of ATM customers making a deposit.

We need to know the distribution family and parameters of each of these random variables.

Some methods to do this:

•Collect real data, and feed this data to the simulation model. This is called trace-driven simulation.

•Collect real data, build an empirical distribution of the data, and sample from this distribution in the simulation model.

•Collect real data, fit a theoretical distribution to the data, and sample from that distribution in the simulation model.

We will examine the last two of these methods.

K. Salah 3

Why Is This an Issue?

[Figure: Service Times (in Days); histogram over days 1-29, y-axis: number of patients, counts 0-150]

The above graph shows service times (LOS) for a hospital in Ontario. How would you model a patient’s length of stay?

N = 546

K. Salah 4

Option 1: Trace

Patient  LOS
1        4
2        1
3        7
4        6
5        31
6        1
7        4
8        2
9        6
10       5
11       1
12       1
13       6
14       1
15       2

We could attempt a trace. In this scheme, we would hold this list of numbers in a file.

When we generate the first patient, we would assign him or her an LOS of 4; the 2nd 1; the 3rd 7; …

Traces have the advantage of being simple and reproducing the observed behaviour exactly.

Traces don’t allow us to generate values outside of our sample.

Our sample is usually of a limited size, meaning that we may not have observed the system in all states.
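A minimal sketch of the trace scheme in Python (the LOS list is the 15-patient trace from the table above; the generator simply replays it in order and is exhausted when the trace runs out, which is exactly the limitation just noted):

```python
# Trace-driven input: replay the observed LOS values in the order collected.
los_trace = [4, 1, 7, 6, 31, 1, 4, 2, 6, 5, 1, 1, 6, 1, 2]

def trace_los(trace):
    """Yield one observed LOS per generated patient; stops when the trace is exhausted."""
    for value in trace:
        yield value

gen = trace_los(los_trace)
print(next(gen))  # patient 1 -> LOS 4
print(next(gen))  # patient 2 -> LOS 1
print(next(gen))  # patient 3 -> LOS 7
```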

K. Salah 5

Option 2: Empirical Distribution

LOS  Count  Frequency  Cum. Freq.
0    0      0.000      0.000
1    139    0.255      0.255
2    120    0.220      0.474
3    69     0.126      0.601
4    48     0.088      0.689
5    30     0.055      0.744
6    25     0.046      0.789
7    21     0.038      0.828
8    12     0.022      0.850
9    11     0.020      0.870
10   8      0.015      0.885
11   9      0.016      0.901
12   10     0.018      0.919
13   7      0.013      0.932
14   0      0.000      0.932
15   1      0.002      0.934
16   2      0.004      0.938
17   4      0.007      0.945
18   2      0.004      0.949
19   3      0.005      0.954
20   12     0.022      0.976
21   5      0.009      0.985
22   0      0.000      0.985
23   2      0.004      0.989
24   0      0.000      0.989
25   2      0.004      0.993
26   0      0.000      0.993
27   1      0.002      0.995
28   2      0.004      0.998
29   1      0.002      1.000
30   0      0.000      1.000

Total: 546

A second idea might be to use an empirical distribution to model LOS.

We will use the LOS and cumulative frequency as an input to our model.

The LOS represents x, and the cumulative frequency represents F(x).

For instance, assume we pick a random number (say 0.62) as F(x). This corresponds to an x between 3 and 4 (≈3.2 by linear interpolation).

Empirical distributions may, however, be based on a small sample size and may have irregularities.

The empirical distribution cannot generate an x value outside of the range [lowest observed value, highest observed value].
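A minimal sketch of this inverse-CDF lookup in Python (the table below is truncated to the first ten (LOS, cumulative frequency) pairs from the slide; a full implementation would carry the table through LOS 29 and cum. freq. 1.000):

```python
# Empirical-distribution input: invert the tabulated cumulative frequencies.
# (LOS, cumulative frequency) pairs -- truncated; the full table runs to LOS 29.
cum = [(0, 0.000), (1, 0.255), (2, 0.474), (3, 0.601), (4, 0.689),
       (5, 0.744), (6, 0.789), (7, 0.828), (8, 0.850), (9, 0.870)]

def sample_los(u, table):
    """Find the pair of rows bracketing u = F(x) and linearly interpolate x."""
    for (x0, f0), (x1, f1) in zip(table, table[1:]):
        if f0 <= u <= f1:
            return x0 + (u - f0) / (f1 - f0) * (x1 - x0)
    raise ValueError("u falls outside the tabulated range")

print(sample_los(0.62, cum))  # ~3.2, the slide's interpolation example
print(sample_los(0.80, cum))  # ~6.3
```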

K. Salah 6

Option 3: Fitted Distribution

[Figure: Frequency-comparison plot of a fitted logarithmic series distribution; 31 intervals of width 1 between 1 and 31; x-axis: interval midpoint (1.50-29.50), y-axis: proportion (0.00-0.35)]

K. Salah 7

Why Theoretical Distributions?

• Theoretical distributions “smooth out” the irregularities that may be present in trace and empirical distributions.

• Gives the simulation the ability to generate a wider range of values.
– Test extreme conditions.

• There may be a compelling reason to use a theoretical distribution.

• Theoretical distributions are a compact way to represent very large datasets. Easy to change, very practical.

K. Salah 8

Describing Distributions

A probability distribution is described by its family and its parameters. The choice of family is usually made by examining the density function, because this tends to have a unique shape for each distribution family.

Distribution parameters are of three types:

location parameter(s)

scale parameter(s)

shape parameter(s)

[Figure: three density sketches illustrating how location, scale, and shape parameters shift, stretch, and reshape f(x)]

K. Salah 9

Examples: continuous distributions

Uniform distribution (a is the location parameter; b − a is the scale parameter)

[Figure: uniform density, constant at 1/(b − a) over [a, b]]

$$f(x) = \begin{cases} \dfrac{1}{b-a} & a \le x \le b \\ 0 & \text{otherwise} \end{cases}$$

$$F(x) = \begin{cases} 0 & x < a \\ \dfrac{x-a}{b-a} & a \le x \le b \\ 1 & x > b \end{cases}$$

Range: $[a, b]$

Mean: $(a+b)/2$

Variance: $(b-a)^2/12$

Uses: a first model for a quantity about which only the bounds a and b are known.

Essential for generation of other distributions.

K. Salah 10

Examples: continuous distributions

Exponential distribution (one scale parameter)

[Figure: exponential densities for β = 1 and β = 2]

$$f(x) = \begin{cases} \dfrac{1}{\beta}\, e^{-x/\beta} & x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

$$F(x) = \begin{cases} 1 - e^{-x/\beta} & x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

Range: $[0, \infty)$

Mean: $\beta$

Variance: $\beta^2$

Uses: Inter-arrival times.

Notes: Special case of Weibull and Gamma (α = 1 , β = β)

If X1, X2, …, Xm are independent expo(β) then X1+ X2+ …+ Xm is distributed as an m-Erlang or gamma(m, β)
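Because F(x) = 1 − e^(−x/β) inverts in closed form, exponential variates are easy to generate by inverse transform. A minimal sketch in Python (the mean β = 5.0 is an arbitrary illustration):

```python
import math
import random

def expo(beta):
    """Inverse-transform sample from expo(beta): solve u = 1 - exp(-x/beta) for x."""
    u = random.random()              # u ~ U(0, 1)
    return -beta * math.log(1.0 - u)

# e.g. five inter-arrival times with mean beta = 5.0
print([round(expo(5.0), 2) for _ in range(5)])
```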

K. Salah 11

Examples: continuous distributions ...

Gamma distribution (one scale parameter, one shape parameter)

[Figure: gamma densities for α = 1, 2, 3]

$$f(x) = \begin{cases} \dfrac{\beta^{-\alpha}\, x^{\alpha-1}\, e^{-x/\beta}}{\Gamma(\alpha)} & x > 0 \\ 0 & \text{otherwise} \end{cases}$$

where $\Gamma(\alpha) = \int_0^{\infty} t^{\alpha-1} e^{-t}\, dt$, and $\Gamma(\alpha) = (\alpha - 1)!$ if α is a positive integer.

Note: F(x) has no closed-form solution if α is not a positive integer. For a positive integer α = m:

$$F(x) = \begin{cases} 1 - e^{-x/\beta} \displaystyle\sum_{j=0}^{m-1} \frac{(x/\beta)^j}{j!} & x > 0 \\ 0 & \text{otherwise} \end{cases}$$

Range: $[0, \infty)$

Mean: $\alpha\beta$

Variance: $\alpha\beta^2$

Uses: Task completion time.

Notes: For a positive integer m, gamma(m, β) is an m-Erlang distribution.

If X1, X2, …, Xn are independent gamma(αi, β), then X1 + X2 + … + Xn is distributed as gamma(α1 + α2 + … + αn, β).

K. Salah 12

Examples: continuous distributions ...

Weibull distribution (one scale parameter, one shape parameter)

[Figure: Weibull densities for α = 1 and α = 2]

$$f(x) = \begin{cases} \alpha \beta^{-\alpha} x^{\alpha-1} e^{-(x/\beta)^{\alpha}} & x > 0 \\ 0 & \text{otherwise} \end{cases}$$

$$F(x) = \begin{cases} 1 - e^{-(x/\beta)^{\alpha}} & x > 0 \\ 0 & \text{otherwise} \end{cases}$$

Range: $[0, \infty)$

Mean: $\dfrac{\beta}{\alpha}\, \Gamma\!\left(\dfrac{1}{\alpha}\right)$

Variance: $\dfrac{\beta^2}{\alpha} \left\{ 2\,\Gamma\!\left(\dfrac{2}{\alpha}\right) - \dfrac{1}{\alpha} \left[ \Gamma\!\left(\dfrac{1}{\alpha}\right) \right]^2 \right\}$

Uses: Task completion time; equipment failure time

Notes: The expo(β) and the Weibull(1, β) are the same distribution.

K. Salah 13

Examples: continuous distributions ...

Normal distribution (one location parameter, one scale parameter)

[Figure: the bell-shaped normal density, centered at μ]

$$f(x) = \dfrac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \quad -\infty < x < \infty$$

F(x): no closed form.

Range: $(-\infty, \infty)$

Mean: $\mu$

Variance: $\sigma^2$

Uses: Errors from a set point; quantities that are the sum of a large number of other quantities.

Notes:

K. Salah 14

Examples: discrete distribution

Bernoulli distribution

[Figure: Bernoulli pmf with mass 1 − p at 0 and p at 1]

$$p(x) = \begin{cases} 1 - p & x = 0 \\ p & x = 1 \\ 0 & \text{otherwise} \end{cases}$$

$$F(x) = \begin{cases} 0 & x < 0 \\ 1 - p & 0 \le x < 1 \\ 1 & x \ge 1 \end{cases}$$

Range: $\{0, 1\}$

Mean: $p$

Variance: $p(1-p)$

Uses: Outcome of an experiment that either succeeds or fails.

Notes: If X1, X2, …, Xt are independent Bernoulli trials, then X1+X2+ … + Xt is binomially distributed with parameters (t, p).

K. Salah 15

Examples: discrete distribution

Binomial Distribution

$$p(x) = \begin{cases} \dbinom{t}{x} p^x (1-p)^{t-x} & x = 0, 1, \ldots, t \\ 0 & \text{otherwise} \end{cases}$$

$$F(x) = \begin{cases} 0 & x < 0 \\ \displaystyle\sum_{i=0}^{\lfloor x \rfloor} \binom{t}{i} p^i (1-p)^{t-i} & 0 \le x \le t \\ 1 & x > t \end{cases}$$

[Figure: binomial pmf with t = 5, p = 0.5]

Range: $\{0, 1, \ldots, t\}$

Mean: $tp$

Variance: $tp(1-p)$

Uses: Number of defectives in a batch of size t.

Notes: If X1, X2, …, Xm are independent and distributed bin(ti,p), then X1+X2+ … + Xm is binomially distributed with parameters (t1 + t2+ …+ tm, p).

The binomial distribution is symmetric only if p = 0.5.


K. Salah 16

• Poisson

• Pareto

K. Salah 17

Selecting an Input Distribution - Family

• The 1st step in any input-distribution fitting procedure is to hypothesize a family (e.g., exponential).

• Prior knowledge about the distribution and its use in the simulation can provide useful clues.
– The normal shouldn't be used for service times, since negative values can be returned.

• Mostly, we will use heuristics to settle on a distribution family.

K. Salah 18

Summary Statistics

(C = continuous data, D = discrete data)

Function | Summary statistic | Dist. type | Comments
Minimum, Maximum | $X_{(1)},\ X_{(n)}$ | C, D | Estimates the range
Mean | $\bar{X}(n)$ | C, D | Central tendency
Median | $\hat{x}_{0.5}$ (see below) | C, D | Central tendency
Variance | $s^2(n)$ | C, D | Measure of variability
Coefficient of variation | $cv = \sqrt{s^2(n)}\,/\,\bar{X}(n)$ | C | Measure of variability; cv = 1 implies exponential; if cv > 1, then lognormal is a better model than gamma or Weibull
Lexis ratio | $\tau = s^2(n)\,/\,\bar{X}(n)$ | D | Measure of variability; = 1 implies Poisson, < 1 implies binomial, > 1 implies negative binomial
Skewness | $\hat{\nu}$ (see below) | C, D | Measure of symmetry; ν = 0 for normal (symmetric) distributions, ν > 0 for right-skewed distributions, ν < 0 for left-skewed distributions

$$\hat{x}_{0.5} = \begin{cases} X_{((n+1)/2)} & \text{if } n \text{ is odd} \\ \left[ X_{(n/2)} + X_{(n/2+1)} \right] / 2 & \text{if } n \text{ is even} \end{cases}$$

$$\nu = \frac{E\left[(X - \mu)^3\right]}{(\sigma^2)^{3/2}}, \qquad \hat{\nu} = \frac{\tfrac{1}{n} \sum_{i=1}^{n} \left( X_i - \bar{X}(n) \right)^3}{\left[ s^2(n) \right]^{3/2}}$$

K. Salah 19

Summary Example

Consider the following sample (continuous data):

Sample  Value    Sample  Value
1       0.0065   16      0.0826
2       0.0118   17      0.0964
3       0.0135   18      0.1010
4       0.0241   19      0.1083
5       0.0281   20      0.1258
6       0.0285   21      0.1319
7       0.0347   22      0.1458
8       0.0372   23      0.1884
9       0.0380   24      0.1930
10      0.0428   25      0.1958
11      0.0496   26      0.1985
12      0.0521   27      0.2017
13      0.0544   28      0.2517
14      0.0650   29      0.2616
15      0.0652   30      0.2714

Min: 0.0065   Max: 0.2714   Mean: 0.1035   Median: 0.0739
Variance: 0.0066   CV: 0.7875   Lexis: 0.0642   Skewness: 0.7190

CV ≈ 0.79 ≠ 1 suggests the distribution is NOT exponential.

Skewness > 0 suggests a right-skewed family.

Conclusion: Gamma? Weibull? Beta?
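A sketch that recomputes these summary statistics with numpy (assumed available); the printed values should match the slide's to within rounding:

```python
import numpy as np

data = np.array([
    0.0065, 0.0118, 0.0135, 0.0241, 0.0281, 0.0285, 0.0347, 0.0372,
    0.0380, 0.0428, 0.0496, 0.0521, 0.0544, 0.0650, 0.0652, 0.0826,
    0.0964, 0.1010, 0.1083, 0.1258, 0.1319, 0.1458, 0.1884, 0.1930,
    0.1958, 0.1985, 0.2017, 0.2517, 0.2616, 0.2714])

mean = data.mean()
var = data.var(ddof=1)                 # unbiased sample variance s^2(n)
cv = np.sqrt(var) / mean               # coefficient of variation
lexis = var / mean                     # lexis ratio (meaningful for discrete data)
skew = ((data - mean) ** 3).mean() / var ** 1.5   # skewness estimator nu-hat

print(f"min={data.min():.4f}  max={data.max():.4f}  mean={mean:.4f}")
print(f"median={np.median(data):.4f}  variance={var:.4f}")
print(f"cv={cv:.4f}  lexis={lexis:.4f}  skewness={skew:.4f}")
```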

K. Salah 20

Draw a Histogram

[Figure: histogram of the sample; x-axis: value (0.00-0.60), y-axis: count (0-12)]

• Law & Kelton suggest drawing a histogram.

• Use the plot to “eyeball” the family.

• Law and Kelton suggest trying several plots with varying bar width.

• Pick a bar width such that the plot isn’t too boxy, nor too scraggly.
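A sketch of this "eyeballing" step with matplotlib (assumed available), drawing the same sample at the three bin widths tried on the next slides:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.array([
    0.0065, 0.0118, 0.0135, 0.0241, 0.0281, 0.0285, 0.0347, 0.0372,
    0.0380, 0.0428, 0.0496, 0.0521, 0.0544, 0.0650, 0.0652, 0.0826,
    0.0964, 0.1010, 0.1083, 0.1258, 0.1319, 0.1458, 0.1884, 0.1930,
    0.1958, 0.1985, 0.2017, 0.2517, 0.2616, 0.2714])

# Draw one histogram per candidate bin width and pick the one that is
# neither too boxy nor too scraggly.
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, width in zip(axes, [0.025, 0.05, 0.10]):
    ax.hist(data, bins=np.arange(0.0, data.max() + width, width))
    ax.set_title(f"bin width = {width}")
    ax.set_xlabel("Value")
axes[0].set_ylabel("Count")
plt.tight_layout()
plt.show()
```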

K. Salah 21

Histogram Example – I

[Figure: Histogram with bin width = 0.025; x-axis: value (0.00-0.60), y-axis: count (0-8)]

K. Salah 22

Histogram Example – II

[Figure: Histogram with bin width = 0.05; x-axis: value (0.00-0.60), y-axis: count (0-12)]

K. Salah 23

Histogram Example – III

[Figure: Histogram with bin width = 0.1; x-axis: value (0.00-0.60), y-axis: count (0-18)]

K. Salah 24

Sturges' Rule

[Figure: Histogram with bin width = 0.06; x-axis: value (0.060-0.300), y-axis: count (0-14)]

Select k (# of bins): k = Int(1 + 3.322 log10 n), where n = number of samples.

We will guess at a gamma distribution.
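A quick check of Sturges' rule for this sample (n = 30); the slide rounds the resulting bin width up to 0.06:

```python
import math

n = 30                                   # sample size
k = int(1 + 3.322 * math.log10(n))       # Sturges' rule -> 5 bins
width = (0.2714 - 0.0065) / k            # spread the observed range over k bins
print(k, round(width, 3))                # 5, ~0.053
```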

K. Salah 25

Parameter Estimation

• To estimate parameters for our distribution, we use the Maximum Likelihood Estimators (MLE’s) for the selected distribution.

• For a gamma we’ll use the following approximation. Calculate T:

$$T = \left[ \ln \bar{X}(n) - \frac{1}{n} \sum_{i=1}^{n} \ln X_i \right]^{-1} = \left[ \ln(0.1035) - (-2.6534) \right]^{-1} \approx 2.6$$

Use Table 6.20 to obtain α̂: α̂ = 1.464.

Calculate β̂:

$$\hat{\beta} = \frac{\bar{X}(n)}{\hat{\alpha}} = \frac{0.1035}{1.464} \approx 0.07$$
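In practice the table lookup can be replaced by a numerical MLE. A sketch using scipy (assumed available); `floc=0` pins the gamma's location at zero so only the shape α and scale β are fitted, as on the slide:

```python
import numpy as np
from scipy import stats

data = np.array([
    0.0065, 0.0118, 0.0135, 0.0241, 0.0281, 0.0285, 0.0347, 0.0372,
    0.0380, 0.0428, 0.0496, 0.0521, 0.0544, 0.0650, 0.0652, 0.0826,
    0.0964, 0.1010, 0.1083, 0.1258, 0.1319, 0.1458, 0.1884, 0.1930,
    0.1958, 0.1985, 0.2017, 0.2517, 0.2616, 0.2714])

alpha, loc, beta = stats.gamma.fit(data, floc=0)   # numerical MLE
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")   # should land near 1.46 and 0.07
```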

K. Salah 26

A Note on MLEs

• Maximum likelihood estimator (MLE)

• Method for determining parameters of a hypothesized distribution.

• We assume that we have collected n IID observations.

We define the likelihood function:

$$L(\theta) = p_{\theta}(X_1)\, p_{\theta}(X_2) \cdots p_{\theta}(X_n)$$

The MLE is simply the value of θ that maximizes L(θ) over all possible values of θ.

In simple terms, we want to pick an MLE that will give us a good estimate of the underlying parameter of interest.

K. Salah 27

MLE for an Exponential Distribution

We want to estimate β:

$$L(\beta) = \left( \frac{1}{\beta} e^{-X_1/\beta} \right) \left( \frac{1}{\beta} e^{-X_2/\beta} \right) \cdots \left( \frac{1}{\beta} e^{-X_n/\beta} \right) = \beta^{-n} \exp\!\left( -\frac{1}{\beta} \sum_{i=1}^{n} X_i \right)$$

So, the problem is to maximize the right-hand side of the above equation. To make things simpler, we'll maximize ln(L(β)):

$$\max_{\beta}\ \ln L(\beta) = -n \ln \beta - \frac{1}{\beta} \sum_{i=1}^{n} X_i$$

Take the 1st derivative, set it equal to 0, and solve for β:

$$\frac{d}{d\beta} \ln L(\beta) = -\frac{n}{\beta} + \frac{1}{\beta^2} \sum_{i=1}^{n} X_i = 0 \quad \Longrightarrow \quad \hat{\beta} = \frac{1}{n} \sum_{i=1}^{n} X_i = \bar{X}(n)$$

In general the MLEs are difficult to calculate for most distributions. See Chapter 6 of Law and Kelton.
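A quick numerical sanity check of this result (a sketch; the true β = 4.0 is arbitrary): scan a grid of β values and confirm that ln L(β) peaks at the sample mean:

```python
import math
import random

random.seed(1)
xs = [random.expovariate(1 / 4.0) for _ in range(1000)]   # expo with mean 4.0

def log_lik(beta, xs):
    """ln L(beta) = -n ln(beta) - (1/beta) * sum(x_i) for the exponential."""
    return -len(xs) * math.log(beta) - sum(xs) / beta

xbar = sum(xs) / len(xs)
grid = [b / 100 for b in range(100, 801)]          # beta from 1.00 to 8.00
best = max(grid, key=lambda b: log_lik(b, xs))
print(round(xbar, 2), best)                        # agree to grid resolution
```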

K. Salah 28

Determining the “Goodness” of the Model

• As you might imagine, determining whether our hypothesized model is "good" is fraught with difficulties.

• Law and Kelton suggest both heuristic and analytical tests to determine “goodness”.

• Heuristic tests:
– Density/histogram overplots.

– Frequency comparisons.

– Distribution function difference plots.

– Probability-Probability Plots (P-P plots).

• Analytical tests:
– Chi-squared tests.

– Kolmogorov-Smirnov tests

K. Salah 29

Frequency Comparisons

[Figure: Frequency comparison; x-axis: x (0.06-0.30), y-axis: hj, rj (%) for the data histogram and the fitted distribution, where each fitted-model proportion is F(bj) − F(bj−1)]

Cumulative distribution functions:

x      Data   Dist
0.000  0.000  0.000
0.060  0.433  0.385
0.120  0.633  0.688
0.180  0.767  0.850
0.240  0.900  0.931
0.300  1.000  0.968

K. Salah 30

Distribution Function Difference Plot

Plot the CDF of the hypothesized distribution minus the CDF as observed in the data.

If the fit is perfect, this plot should be a straight line at 0.

L&K suggest the difference should be less than .10 for all points.

[Figure: Distribution function difference plot; x-axis: x (0-0.5), y-axis: F'(x) − Fn(x), ranging from −0.2 to 0.2]
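A sketch of the difference plot's computation for the running example, evaluating the fitted gamma CDF (scipy assumed; parameters α̂ = 1.464, β̂ ≈ 0.069 from slide 25) against the empirical CDF at each order statistic:

```python
import numpy as np
from scipy import stats

data = np.sort(np.array([
    0.0065, 0.0118, 0.0135, 0.0241, 0.0281, 0.0285, 0.0347, 0.0372,
    0.0380, 0.0428, 0.0496, 0.0521, 0.0544, 0.0650, 0.0652, 0.0826,
    0.0964, 0.1010, 0.1083, 0.1258, 0.1319, 0.1458, 0.1884, 0.1930,
    0.1958, 0.1985, 0.2017, 0.2517, 0.2616, 0.2714]))
n = len(data)

fitted = stats.gamma.cdf(data, a=1.464, scale=0.069)   # F'(x) at each sample
empirical = np.arange(1, n + 1) / n                    # Fn(x) just after each jump
diff = fitted - empirical

print(np.abs(diff).max())   # L&K's rule of thumb: should stay under ~0.10
```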

K. Salah 31

P-P Plot

[Figure: P-P plot; x-axis: Fn(x) (data), y-axis: F(x) (fitted distribution), both from 0 to 1]

Note: We are plotting the fitted F(x) vs. the empirical Fn(x).
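A sketch of the P-P plot for the same fitted gamma (scipy and matplotlib assumed): plot F(X(i)) against the empirical probabilities (i − 0.5)/n and compare with the 45-degree line:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

data = np.sort(np.array([
    0.0065, 0.0118, 0.0135, 0.0241, 0.0281, 0.0285, 0.0347, 0.0372,
    0.0380, 0.0428, 0.0496, 0.0521, 0.0544, 0.0650, 0.0652, 0.0826,
    0.0964, 0.1010, 0.1083, 0.1258, 0.1319, 0.1458, 0.1884, 0.1930,
    0.1958, 0.1985, 0.2017, 0.2517, 0.2616, 0.2714]))
n = len(data)

empirical = (np.arange(1, n + 1) - 0.5) / n            # Fn at each order statistic
fitted = stats.gamma.cdf(data, a=1.464, scale=0.069)   # F(X(i)) under the model

plt.plot(empirical, fitted, "o")
plt.plot([0, 1], [0, 1])                    # perfect fit falls on this line
plt.xlabel("Fn(x) - data")
plt.ylabel("F(x) - fitted distribution")
plt.show()
```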

K. Salah 32

Q-Q Plot

L&K note that Q-Q plots tend to emphasize errors in the tail. Our fitted distribution doesn’t look appropriate in the upper tail.

[Figure: Q-Q plot; x-axis: the X(i)'s (order statistics), y-axis: F^-1((i − .5)/n), both from 0 to 0.35]
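The corresponding Q-Q computation (a sketch, same assumptions as above): plot the order statistics X(i) against the model quantiles F^-1((i − 0.5)/n); a poor fit in the upper tail shows up as points drifting off the reference line:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

data = np.sort(np.array([
    0.0065, 0.0118, 0.0135, 0.0241, 0.0281, 0.0285, 0.0347, 0.0372,
    0.0380, 0.0428, 0.0496, 0.0521, 0.0544, 0.0650, 0.0652, 0.0826,
    0.0964, 0.1010, 0.1083, 0.1258, 0.1319, 0.1458, 0.1884, 0.1930,
    0.1958, 0.1985, 0.2017, 0.2517, 0.2616, 0.2714]))
n = len(data)

# Model quantiles at the plotting positions (i - 0.5)/n.
q = stats.gamma.ppf((np.arange(1, n + 1) - 0.5) / n, a=1.464, scale=0.069)

plt.plot(data, q, "o")
plt.plot([data.min(), data.max()], [data.min(), data.max()])  # reference line
plt.xlabel("X(i)'s")
plt.ylabel("F^-1((i - .5)/n)")
plt.show()
```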

K. Salah 33

Analytic Test – Chi Square Test

1. Divide the range into k adjacent intervals.

2. Nj = # of Xi’s in jth interval.

3. Determine the proportion pj of the Xi's that should fall into the jth interval. For continuous data, the equal-probability approach is recommended: the pj's are set to equal values.

For a continuous random variable, the expected count in the jth interval is:

$$n\hat{p}_j = n \int_{a_{j-1}}^{a_j} \hat{f}(x)\, dx = n \left[ \hat{F}(a_j) - \hat{F}(a_{j-1}) \right]$$

For a discrete random variable, pj is the total probability that the random variable takes a value in the jth interval:

$$\hat{p}_j = \sum_{a_{j-1} \le x \le a_j} \hat{p}(x)$$

K. Salah 34

Analytic Test – Chi-Squared Test

1. The test statistic is:

$$\chi_0^2 = \sum_{j=1}^{k} \frac{(O_j - E_j)^2}{E_j}$$

where Oj is the observed number of samples in the jth interval and Ej = npj is the expected number of samples in the jth interval.

2. A number of authors suggest that the intervals be grouped such that Ej ≥ 5 in all cases.

3. H0: X conforms to the assumed distribution.
H1: X does not conform to the assumed distribution.

Reject H0 if $\chi_0^2 > \chi^2_{k-1,\, 1-\alpha}$.

K. Salah 35

Example

k = 5, so pj = 1/k = 0.2 and the expected count per interval is npj = 30 × 0.2 = 6.

Cumulative  a(j)      Observed  Expected  (O−E)²/E
0.2         0.033599  6         6         0
0.4         0.063509  7         6         0.166667
0.6         0.101155  5         6         0.166667
0.8         0.160816  4         6         0.666667
1.0         ∞         8         6         0.666667

Sum: 1.666667

χ² critical (df = 5 − 1 = 4, 0.95) = 9.49 > 1.67, so H0 is accepted.
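A sketch reproducing this table with scipy (assumed): the equal-probability interval endpoints are the fitted gamma's 0.2, 0.4, 0.6, 0.8 quantiles, so each interval has expected count npj = 6:

```python
import numpy as np
from scipy import stats

data = np.array([
    0.0065, 0.0118, 0.0135, 0.0241, 0.0281, 0.0285, 0.0347, 0.0372,
    0.0380, 0.0428, 0.0496, 0.0521, 0.0544, 0.0650, 0.0652, 0.0826,
    0.0964, 0.1010, 0.1083, 0.1258, 0.1319, 0.1458, 0.1884, 0.1930,
    0.1958, 0.1985, 0.2017, 0.2517, 0.2616, 0.2714])
n, k = len(data), 5

# Interval endpoints a_j with equal probability 1/k under the fitted gamma.
edges = stats.gamma.ppf(np.linspace(0, 1, k + 1), a=1.464, scale=0.069)
observed, _ = np.histogram(data, bins=edges)
expected = n / k                                     # n * p_j = 6 per interval

chi2_0 = ((observed - expected) ** 2 / expected).sum()
critical = stats.chi2.ppf(0.95, df=k - 1)            # chi^2_{4, 0.95} = 9.49
print(chi2_0, critical)                              # 1.67 < 9.49 -> do not reject
```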

K. Salah 36

Example 6.15

n = 219 (data is in Table 6.7).

Fitted distribution is exponential with β̂ = 0.399.

Set k = 20. We want 20 intervals, so pj = 1/k = 0.05 for each interval.

npj = 0.05 × 219 = 10.95 (≥ 5, so we are ok).

The exponential is an unbounded distribution, so a0 = 0 and a20 = ∞. The remaining aj's must satisfy F̂(aj) = 1 − e^(−aj/0.399) = j/20, i.e.:

$$a_j = -0.399 \ln\!\left(1 - \frac{j}{20}\right), \quad j = 1, 2, \ldots, 19$$

For example:
a1 = −0.399 ln(19/20) = 0.020
a2 = −0.399 ln(18/20) = 0.042

Critical value for α = 0.1: $\chi^2_{19,\, 0.90} = 27.204$ from Table T.2.

K. Salah 37

Example 6.15

[Table: the k = 20 intervals with their aj's and the observed and expected counts; the resulting χ²₀ is less than the critical value, so H0 is not rejected.]

K. Salah 38

Example 6.16

n = 156 (data is in Table 6.9, demand size).

Fitted distribution is geom(0.346).

We cannot make exactly equal-probability intervals for a discrete distribution; we try to make the pj's roughly equal by grouping the data into intervals.

P(X = 0) = 0.346 (the mode of the fitted distribution), so the pj's must be greater than this value.

We end up with 3 intervals, given in Table 6.13.

Critical value for α = 0.1: $\chi^2_{2,\, 0.90} = 4.605$ from Table T.2.

K. Salah 39

Example 6.16

Determine the intervals so that the pj's are more or less equal.

[Table: the 3 intervals with observed and expected counts; the resulting χ²₀ is less than the critical value, so H0 is not rejected.]

K. Salah 40

P Value

[Figure: chi-square density with k − 1 d.f., marking the calculated test statistic, the critical value, α, and the P value]

The P value is the cumulative probability to the right of the calculated test statistic. If P value > α, don't reject H0.

K. Salah 41

The Kolomogorov-Smirnov (K-S) Test

The K-S test compares an empirical distribution function (Fn(x)) with a hypothesized distribution function (F̂(x)).

The K-S test is somewhat stronger than the Chi-Square test.

This test doesn’t require that we aggregate any of our samples into a minimum batch size.

We define a test statistic Dn:

$$D_n = \sup_{x} \left| F_n(x) - \hat{F}(x) \right|$$

i.e., the largest value of the gap over all x.

K. Salah 42

K-S Example

Let's say we had collected 10 samples (0.09, 0.23, 0.24, 0.26, 0.36, 0.38, 0.55, 0.62, 0.65, and 0.76) and have developed an empirical CDF.

We want to test the hypothesis that our sample is drawn from a U(0,1) distribution:

f(x) = 1 for 0 ≤ x ≤ 1
F(x) = x for 0 ≤ x ≤ 1

where H0: Fn(x) = F̂(x)
H1: Fn(x) ≠ F̂(x)

K. Salah 43

K-S Example

[Figure: step plot of the empirical CDF Fn(x) against the fitted CDF F̂(x) for the 10 samples]

K. Salah 44

K-S Example

[Figure: the same plot, highlighting the vertical gaps between F̂(x) and Fn(x)]

Dn is simply the largest of the gaps between F̂(x) and Fn(x). Remember: we need to find Dn− and Dn+ at every discontinuity.

K. Salah 45

K-S Example

Sample  x     F̂(x) (fitted)  Fn(x) (empirical)  Dn−   Dn+
0       0     0              0
1       0.09  0.09           0.1                0.09  0.01
2       0.23  0.23           0.2                0.13  0.03
3       0.24  0.24           0.3                0.04  0.06
4       0.26  0.26           0.4                0.04  0.14
5       0.36  0.36           0.5                0.04  0.14
6       0.38  0.38           0.6                0.12  0.22
7       0.55  0.55           0.7                0.05  0.15
8       0.62  0.62           0.8                0.08  0.18
9       0.65  0.65           0.9                0.15  0.25
10      0.76  0.76           1.0                0.14  0.24

Our Dn is the largest value in the Dn− and Dn+ columns. In this case Dn = 0.25.
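A sketch of the Dn computation for this example; with F̂(x) = x, checking both sides of each jump of Fn reproduces the table's result:

```python
samples = [0.09, 0.23, 0.24, 0.26, 0.36, 0.38, 0.55, 0.62, 0.65, 0.76]
n = len(samples)

# D+ : how far Fn rises above F-hat just after each jump; D- : how far
# F-hat sits above Fn just before each jump.  Here F-hat(x) = x for U(0, 1).
d_plus = max((i + 1) / n - x for i, x in enumerate(samples))
d_minus = max(x - i / n for i, x in enumerate(samples))
print(d_plus, d_minus, max(d_plus, d_minus))   # Dn = 0.25
```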

K. Salah 46

K-S Test

• The value of Dn, when appropriately adjusted (see next slide), is found to be less than the critical point (1.224) for α = 0.10.

• Thus we cannot reject H0.

K. Salah 47

Adjusted Critical Values

• Law and Kelton present critical values for the K-S tables that are non-standard.

• They use a compressed table based on work from Stevens (1962).

Case | Adjusted test statistic
All parameters in F̂(x) known | $\left( \sqrt{n} + 0.12 + \dfrac{0.11}{\sqrt{n}} \right) D_n$
Normal dist'n | $\left( \sqrt{n} - 0.01 + \dfrac{0.85}{\sqrt{n}} \right) D_n$
Expo dist'n | $\left( D_n - \dfrac{0.2}{n} \right) \left( \sqrt{n} + 0.26 + \dfrac{0.5}{\sqrt{n}} \right)$

For our example (all parameters known, n = 10):

$$\left( \sqrt{10} + 0.12 + \frac{0.11}{\sqrt{10}} \right) (0.25) = (3.32)(0.25) = 0.83$$

For tables of critical values see pgs 365-367 of L&K
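A sketch of the adjustment for our example (all parameters of F̂ were known, since U(0,1) was fully specified):

```python
import math

n, dn = 10, 0.25
adjusted = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * dn
print(round(adjusted, 2))   # 0.83 < 1.224 -> cannot reject H0 at alpha = 0.10
```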