
Master Thesis

Moritz Korte-Stapff [email protected]

Point Process Representations of Sum- and Max-Stable Processes with Applications to Simulation

August 2017

Supervisor: Prof. Thomas Mikosch


Abstract

This thesis centers around representations of sum- and max-stable objects involving Poisson processes and their application to the simulation of max-stable random fields. These occur naturally as the weak limits of appropriately centered and scaled componentwise maxima of i.i.d. processes and random fields. Therefore, they are a widely used tool for modeling extremal events such as extreme rain- or snowfall and extreme temperatures; cf. [4].

When simulating max-stable processes, one is confronted with the problem that their representation involves taking the supremum over an infinite set, which, in the absence of suitable algorithms, is generally impossible to do on a computer. To make matters worse, approximating this supremum by replacing the infinite set with a finite one is not advisable, as it runs into severe difficulties, especially if the random field is simulated over a large area. Hence, the task of simulating max-stable random fields is highly complex. As a result, early literature on this subject mainly focused on approximation techniques, for example [20]. Recently, algorithms have been introduced that allow for exact simulation at a finite number of locations, starting with [10]. In the course of this thesis, three recently published exact methods are introduced and compared to the "naive method" described earlier in this paragraph, which, despite its drawbacks, might still be used in applications due to its simplicity.

In Sections 1-3 the theoretical framework is built. Section 1 introduces Poisson random measures as a special case of point processes and provides the important properties that are used in Sections 2 and 3, where the representations of sum-stable and max-stable processes are derived.

In Section 4 the four different simulation methods are introduced. First, the limitations of the naive method are demonstrated by applying it to a specific max-stable random field.

The next two methods were proposed by Dombry, Engelke and Oesting in [11] and are consequently referred to as the Dombry-Engelke-Oesting methods. The first one is a generalization of the method from [10] and is based on drawing samples from a spectral measure. The second one is based on so-called extremal functions. In fact, both methods can be described in terms of extremal functions and are therefore presented in the same section.

Lastly, the Liu-Blanchet-Dieker-Mikosch method from [18] is introduced. This algorithm allows simulation only for a restricted class of max-stable processes; however, the most commonly used Brown-Resnick process can be simulated. While it is asymptotically the fastest algorithm as the number of sample points increases, it shall be assessed how high this number has to be to obtain a significant advantage in speed.

Finally, in Section 5 the methods are put to the test through simulations of Brown-Resnick fields. The goodness-of-fit is assessed and the speeds of the algorithms are compared.


Acknowledgments

I want to thank Prof. Thomas Mikosch for the supervision of this master's thesis and his support during the process of writing it. He introduced me to this interesting and relevant topic but always encouraged me to find my own way.

Additionally, I would like to thank my parents, who made it possible for me to pursue a master's degree in this beautiful city called Copenhagen.


Table of Contents

1 Point Processes
   1.1 Preliminaries
   1.2 The Laplace Functional
   1.3 The Poisson Process
       1.3.1 The General Construction
       1.3.2 Examples
       1.3.3 The Order Statistics Property
       1.3.4 Transformation of Poisson Processes

2 Representation of Sum-Stable Random Variables & Processes
   2.1 Preliminaries
   2.2 A Series Representation of Sum-Stable Random Variables
   2.3 Representation of Sum-Stable Processes

3 Representation of Max-Stable Random Variables & Processes
   3.1 Max-Stable Random Variables
       3.1.1 Preliminaries
       3.1.2 A Representation for Max-Stable Random Variables
   3.2 Max-Stable Random Vectors
       3.2.1 Preliminaries
       3.2.2 The Exponent Measure
   3.3 Representation of Max-Stable Random Fields

4 Simulation Methods for Max-Stable Processes
   4.1 Limitations of the Naive Method
   4.2 The Dombry-Engelke-Oesting Methods: Methods Based on Extremal Functions
       4.2.1 Preliminaries on Extremal Functions
       4.2.2 Algorithm 1 (Generalization of the Dieker-Mikosch Algorithm)
       4.2.3 Algorithm 2
       4.2.4 Sampling from $P_t$
   4.3 The Liu-Blanchet-Dieker-Mikosch Method
       4.3.1 Sampling $(\Gamma_1, \dots, \Gamma_N)$
       4.3.2 Sampling $(W_1, \dots, W_N)$
       4.3.3 Choosing $a$ and $C$

5 Simulation
   5.1 The Naive Method
       5.1.1 Goodness-of-Fit
       5.1.2 Speed of the Algorithm
   5.2 The Dombry-Engelke-Oesting Methods
       5.2.1 Goodness-of-Fit
       5.2.2 Speed of the Algorithms
   5.3 The Liu-Blanchet-Dieker-Mikosch Method
       5.3.1 Goodness-of-Fit
       5.3.2 Speed of the Algorithm

6 Conclusion

A Fractional Brownian Motion & Fractional Brownian Fields

B Java Code

Bibliography


1 Point Processes

Poisson random measures, which are a particular class of point processes, will form the building blocks of the representation theorems for sum-stable and max-stable processes. However, Poisson processes, and more generally point processes, find application in a variety of other fields. For example, Poisson processes are frequently used in insurance mathematics to model arrival times of insurance claims. General point processes might be used to model earthquake locations or oil occurrences relative to a known location. So, while point processes are mostly used here as a means to an end, they are of great interest on their own. This section provides the necessary theory of Poisson processes and is for the most part based on Chapter 3 of [21].

1.1 Preliminaries

The general idea behind point processes is to model random distributions of points in a specific space. Usually, these spaces are subsets of Euclidean space, but they can also be other metric spaces. Throughout this section it is always assumed that they belong to the class of locally compact Hausdorff spaces; such spaces always have a countable basis of open sets with compact closure. Let one such space be denoted by $E$ and equip it with the Borel σ-algebra $\mathcal{B}$, the σ-algebra generated by the open sets. Unless otherwise specified, throughout this section $E$ will always denote a locally compact metric space and $\mathcal{B}$ its Borel σ-algebra. Now let $(x_i)_{i\ge1}$ be a countable family of elements of $E$ and let $\delta_x$ denote the Dirac measure at the point $x$. Then
\[ \mu := \sum_{i=1}^{\infty}\delta_{x_i} \]
is a measure on $(E,\mathcal{B})$. With the further requirement that $\mu(K) < \infty$ for any relatively compact $K \in \mathcal{B}$, that is, $\mu$ is Radon, $\mu$ is called a point measure. Let $M_p(E)$ denote the set of all point measures on $E$. To model the randomness of the points, it is desirable to equip $M_p(E)$ with a σ-algebra. This σ-algebra is the one making the evaluation maps $e_B : M_p(E) \to \mathbb{N}\cup\{\infty\}$, $e_B(\mu) = \mu(B)$, measurable for every $B \in \mathcal{B}$; it will from now on be denoted by $\mathcal{M}_p(E)$. Point processes are now defined as follows.

Definition 1.1. Let $(\Omega,\mathcal{F},P)$ be a probability space and $(E,\mathcal{B})$ a metric space equipped with the Borel σ-algebra. Then a point process is a measurable map $N : (\Omega,\mathcal{F},P) \to (M_p(E),\mathcal{M}_p(E))$.

As usual, the distribution of $N$ is the image measure $P_N(A) = P(N^{-1}(A))$. If $E$ is a Euclidean space, its rectangles are usually convenient sets on which to investigate measures. Rectangles have the useful properties that the intersection $I_1 \cap I_2$ of two rectangles $I_1, I_2$ is again a rectangle, that there is an increasing sequence $(I_n)_{n\ge1}$ of rectangles with $\bigcup_{n\ge1} I_n = E$, and, lastly, that they generate $\mathcal{B}$. One might hope that classes of subsets with the same properties on more general metric spaces $E$ are convenient for investigating the distribution of a point process.

Proposition 1.2. Let $\mathcal{T}$ be a class of relatively compact subsets of a metric space $E$ for which the following statements hold.

1. $\mathcal{T}$ is intersection stable.

2. The σ-algebra generated by $\mathcal{T}$ coincides with the Borel σ-algebra.

3. Either there exists an increasing sequence $(I_n)_{n\in\mathbb{N}} \subset \mathcal{T}$ such that $I_n \uparrow E$, or there exists a partition $(I_n)_{n\in\mathbb{N}} \subset \mathcal{T}$ of $E$ such that $\bigcup_{n\in\mathbb{N}} I_n = E$.

Then the σ-algebra generated by $\{e_I^{-1}(B) : I \in \mathcal{T},\ B \in \mathcal{B}[0,\infty)\}$, denoted by $\mathcal{M}_p^*(E)$, coincides with $\mathcal{M}_p(E)$. Here again, $e_I : M_p(E) \to [0,\infty]$ is the evaluation map for the set $I$.


Proof. Suppose the first condition of 3. holds and let $(I_n)_{n\ge1}$ be that sequence. Define for fixed $n$
\[ \mathcal{G}_n := \{ F \in \mathcal{B} : e_{F \cap I_n} \text{ is } \mathcal{M}_p^*(E)\text{-measurable} \}. \]
Then, since $\mathcal{T}$ is intersection stable, $\mathcal{T} \subset \mathcal{G}_n$. The goal is to prove that $\mathcal{G}_n$ is a Dynkin system and then apply Dynkin's π-λ theorem. From $E \cap I_n = I_n$ it follows that $E \in \mathcal{G}_n$. Secondly, let $F_1 \subset F_2$ with $F_1, F_2 \in \mathcal{G}_n$. Then it is observed that for any $\mu \in M_p(E)$
\[ e_{(F_2\setminus F_1)\cap I_n}(\mu) = \mu((F_2\setminus F_1)\cap I_n) = \mu(F_2\cap I_n) - \mu(F_1\cap I_n) = e_{F_2\cap I_n}(\mu) - e_{F_1\cap I_n}(\mu). \]
Since $\mu$ is finite on relatively compact sets and the $I_n$ are relatively compact, $e_{(F_2\setminus F_1)\cap I_n}$ is the difference of two finite, measurable functions and hence itself measurable. Lastly, let $(A_k)_{k\ge1} \subset \mathcal{G}_n$ be an increasing sequence. Then, by continuity from below of measures, $e_{\bigcup_{j=1}^{k}A_j\cap I_n} \to e_{\bigcup_{j=1}^{\infty}A_j\cap I_n}$ pointwise as $k \to \infty$. Since the pointwise limit of measurable functions is again measurable, $\bigcup_{j=1}^{\infty}A_j \in \mathcal{G}_n$. It follows that $\mathcal{G}_n$ is a Dynkin system. Direct application of Dynkin's π-λ theorem gives
\[ \mathcal{B} = \sigma(\mathcal{T}) \subset \mathcal{G}_n \]
for any $n \in \mathbb{N}$. From this it follows that $e_F$ is measurable for any $F \in \mathcal{B}$, as it is the pointwise limit of $e_{F\cap\bigcup_{k=1}^{n}I_k}$. The proof under the second condition in 3. is entirely similar.

Proposition 1.2 shows that, to check whether a map $N : \Omega \to M_p(E)$ is a point process, it is enough to check measurability on classes of sets as in the assumptions of Proposition 1.2. Furthermore, it also allows one to prove that the distribution of a point process can be described by only considering sets of this form. Therefore, define for a point process $N$ its finite-dimensional distributions through
\[ p_{I_1,\dots,I_k}(n_1,\dots,n_k) = P(N(I_1) = n_1, \dots, N(I_k) = n_k), \quad k \in \mathbb{N}, \]
where $n_j \in \mathbb{N}$ and $I_j \in \mathcal{T}$, with $\mathcal{T}$ as in the assumptions of Proposition 1.2.

Corollary 1.3. The distribution of a point process $N$ is entirely determined by its finite-dimensional distributions.

Proof. Let $\mathcal{G}$ denote the set of all finite intersections of sets of the form
\[ e_I^{-1}(\{n\}), \qquad n \in \mathbb{N},\ I \in \mathcal{T}, \]
where $\mathcal{T}$ is as usual. Then, since point measures take values in $\mathbb{N}\cup\{\infty\}$ and Proposition 1.2 holds, it follows that $\mathcal{G}$ generates $\mathcal{M}_p(E)$. Furthermore, $\mathcal{G}$ is intersection stable and hence any measure on $\mathcal{M}_p(E)$ is uniquely determined by its values on $\mathcal{G}$. But how $P_N$ acts on $\mathcal{G}$ is uniquely determined by its finite-dimensional distributions, since for any $k$,
\[ P\left(N \in \bigcap_{i=1}^{k} e_{I_i}^{-1}(\{n_i\})\right) = P(N(I_1) = n_1, \dots, N(I_k) = n_k). \]

1.2 The Laplace Functional

To proceed, it will be useful to have transformation techniques for distributions of point processes. One such technique is the Laplace functional. It is defined in the following way.


Definition 1.4. Let $Q$ be a measure on $\mathcal{M}_p(E)$. Its Laplace functional is the map $\Psi : B(E) \to [0,\infty)$, where $B(E)$ denotes the set of non-negative Borel functions on $E$, defined through
\[ \Psi(f) := \int_{M_p(E)} \exp\left(-\int_E f \, dm\right) Q(dm) = \int_{M_p(E)} \exp\left(-\sum_{i=1}^{\infty} f(x_i^m)\right) Q(dm), \]
where $(x_i^m)_{i\ge1}$ are the points of the respective point measure $m$.

The Laplace functional of a point process $N$ is that of its distribution, in the following denoted by $\Psi_N$. For it to be a useful transformation technique, one needs the following.

Proposition 1.5. The distribution of a point process $N$ is uniquely determined by its Laplace functional.

Proof. The claim follows from the uniqueness of the Laplace transform. Fix arbitrary Borel sets $B_1,\dots,B_n \subset E$ and $\lambda_1,\dots,\lambda_n \ge 0$. Define the function $f : E \to [0,\infty)$ through
\[ f(x) = \sum_{i=1}^{n}\lambda_i\mathbf{1}_{B_i}(x). \]
Then,
\[ \Psi_N(f) = \int \exp\left(-\sum_{i=1}^{n}\lambda_i N(B_i)\right) dP, \]
which is the Laplace transform of the random vector $(N(B_1),\dots,N(B_n))$ and hence determines its distribution uniquely. Since the finite-dimensional distributions determine the distribution of $N$, the claim follows.

1.3 The Poisson Process

Now to the point process that is most relevant to this thesis.

Definition 1.6. Let $(E,\mathcal{B})$ be a locally compact metric Hausdorff space equipped with the Borel σ-algebra, let $\mu$ be a Radon measure on $\mathcal{B}$, i.e. finite on compact subsets, and let $N$ be a point process. Then $N$ is called a Poisson process with mean measure $\mu$, or alternatively a Poisson random measure ($\mathrm{PRM}(\mu)$), if the following hold:

1. N(A) is Poisson distributed with parameter µ(A) for any A ∈ B;

2. if A1, . . . , Ak are disjoint elements of B, then, N(A1), . . . , N(Ak) are independent.

Next, the existence of a Poisson process will be proven. In order to do so, the Laplace functional is first computed, given that a Poisson process exists. First some notation: for a point process $N$ and a non-negative Borel function $f$ on $E$, define the random variable
\[ N(f) := \int_E f \, dN. \]

Theorem 1.7. A point process $N$ is a Poisson process with mean measure $\mu$ if and only if it has Laplace functional
\[ \Psi_N(f) = \exp\left(-\int_E \left(1 - e^{-f(x)}\right)\mu(dx)\right). \tag{1.1} \]


Proof. First it is shown that conditions 1. and 2. from Definition 1.6 determine the Laplace functional of $N$, so that, by Proposition 1.5, they determine its distribution. Assume therefore that $N$ is a point process with mean measure $\mu$ satisfying 1. and 2. The proof follows the usual strategy of proving the result first for simple functions and then for arbitrary non-negative functions by approximation with simple functions. For the first step, let $f = a\mathbf{1}_A$. Then,
\[
\Psi_N(f) = \int_{M_p(E)}\exp\left(-\sum_{i=1}^{\infty}f(x_i^m)\right)P_N(dm)
= \int_{M_p(E)}\exp\left(-a\sum_{i=1}^{\infty}\mathbf{1}_A(x_i^m)\right)P_N(dm)
\]
\[
= \int_{M_p(E)}\exp(-a\,m(A))\,P_N(dm) = \int_\Omega e^{-aN(A)}\,dP
= \sum_{k=0}^{\infty}e^{-ak}\frac{\mu(A)^k}{k!}e^{-\mu(A)}
\]
\[
= \exp\left(\mu(A)\left(e^{-a}-1\right)\right) = \exp\left(-\int_E\mathbf{1}_A\left(1-e^{-a}\right)d\mu\right) = \exp\left(-\int_E\left(1-e^{-a\mathbf{1}_A}\right)d\mu\right),
\]
which is exactly the required form. If $f$ is of the form $\sum_{i=1}^{k}a_i\mathbf{1}_{A_i}$ with $A_i\cap A_j=\emptyset$ for $i\ne j$, then,
\[
\Psi_N(f) = \int_{M_p(E)}\exp\left(-\sum_{i=1}^{\infty}\sum_{j=1}^{k}a_j\mathbf{1}_{A_j}(x_i^m)\right)P_N(dm)
= \int_\Omega\exp\left(-\sum_{i=1}^{k}a_iN(A_i)\right)dP
\]
\[
= \prod_{i=1}^{k}E\,e^{-a_iN(A_i)} = \prod_{i=1}^{k}\exp\left(-\int_E\left(1-e^{-a_i\mathbf{1}_{A_i}}\right)d\mu\right)
= \exp\left(-\int_E\sum_{i=1}^{k}\left(1-e^{-a_i\mathbf{1}_{A_i}}\right)d\mu\right)
= \exp\left(-\int_E\left(1-e^{-\sum_{i=1}^{k}a_i\mathbf{1}_{A_i}}\right)d\mu\right),
\]
where the third equality comes from the independence of $(N(A_i))_{i=1}^{k}$ and the last one uses the disjointness of the $A_i$. For an arbitrary non-negative Borel function $f$, choose a sequence of simple functions $(f_n)_{n\in\mathbb{N}}$ with $f_n \uparrow f$. Using monotone convergence, $N(f) = \lim_{n\to\infty}N(f_n)$. Furthermore,
\[
\Psi_N(f_n) = \int_\Omega\exp\left(-N(f_n)\right)dP \to \int_\Omega\exp\left(-N(f)\right)dP = \Psi_N(f),
\]
since $e^{-N(f_n)}$ is bounded by $1$ as $f_n \ge 0$, so dominated convergence is applicable. Lastly, since $(1-e^{-f_n}) \uparrow (1-e^{-f})$, an appeal to the monotone convergence theorem gives
\[
\Psi_N(f) = \lim_{n\to\infty}\Psi_N(f_n) = \lim_{n\to\infty}\exp\left(-\int_E\left(1-e^{-f_n}\right)d\mu\right) = \exp\left(-\int_E\left(1-e^{-f}\right)d\mu\right).
\]
This proves that point processes satisfying 1. and 2. from the definition of Poisson processes have the desired Laplace functional; together with Proposition 1.5 this proves the first direction.

Assume conversely that a point process $N$ has Laplace functional as in (1.1). Then, for any $A\in\mathcal{B}$ and $a\ge0$,
\[
\int_\Omega e^{-aN(A)}\,dP = \Psi_N(a\mathbf{1}_A) = \exp\left(-\int_E\left(1-e^{-a\mathbf{1}_A}\right)d\mu\right) = \sum_{k=0}^{\infty}e^{-ak}\frac{\mu(A)^k}{k!}e^{-\mu(A)},
\]
proving that $N(A)$ has the Laplace transform of a Poisson random variable with parameter $\mu(A)$ and must thus have that distribution. Lastly, if $A_1,\dots,A_k$ are pairwise disjoint, then,
\[
\int_\Omega\exp\left(-\sum_{i=1}^{k}a_iN(A_i)\right)dP = \Psi_N\left(\sum_{i=1}^{k}a_i\mathbf{1}_{A_i}\right) = \prod_{i=1}^{k}E\,e^{-a_iN(A_i)},
\]
proving that the joint Laplace transform of $(N(A_1),\dots,N(A_k))$ factors into the product of the marginal Laplace transforms, thus proving independence.


Now the existence of a Poisson process will be proven. This is done via a construction procedure called the General Construction.

1.3.1 The General Construction

Let a measure $\mu$ on $E$ be given such that $0 < \mu(E) < \infty$ and define the probability measure $\bar\mu := (\mu(E))^{-1}\mu$. Let, furthermore, $\tau$ be a Poisson random variable with parameter $\mu(E)$ and let $(X_i)_{i\in\mathbb{N}}$ be a family of i.i.d. random variables with distribution $\bar\mu$, independent of $\tau$. Define
\[
N = \begin{cases} \sum_{i=1}^{\tau}\delta_{X_i} & \text{on } \{\tau > 0\}, \\[2pt] 0 & \text{on } \{\tau = 0\}. \end{cases}
\]

In order to prove that N is Poisson, its Laplace functional will be computed.

ΨN (f) =

∫Ω

exp

(−

τ∑i=1

f(Xi)

)dP =

∞∑j=1

E(e−∑ji=1 f(Xi))P (τ = j)

=∞∑j=1

E(e−f(X1)

)jP (τ = j) = exp

(µ(E)(E(e−f(X1))− 1)

)= exp

(−µ(E)

(1−

∫Ee−f(x) µ(dx)

))= exp

(−∫E

(1− e−f ) dµ

),

as desired. This method extends to the more general case where µ(E) <∞ is not satisfied. Inthat case choose a disjoint sequence of relatively compact sets (Ei)i∈N such that

⋃i∈NEi = E.

Define the measures µi = µ(· ∩Ei) so that µ =∑∞

i=1 µi. By assumption µi must be finite. Usethe µi to construct Poisson processes Ni independently of each other and set

N =∑i

Ni.

Then,
\[
\Psi_N(f) = \lim_{k\to\infty}\Psi_{\sum_{i=1}^{k}N_i}(f)
= \lim_{k\to\infty}\int_\Omega\exp\left(-\sum_{i=1}^{k}N_i(f)\right)dP
= \lim_{k\to\infty}\prod_{i=1}^{k}\int_\Omega e^{-N_i(f)}\,dP
\]
\[
= \lim_{k\to\infty}\prod_{i=1}^{k}\Psi_{N_i}(f)
= \lim_{k\to\infty}\prod_{i=1}^{k}\exp\left(-\int_E\left(1-e^{-f}\right)d\mu_i\right)
= \lim_{k\to\infty}\exp\left(-\int_E\left(1-e^{-f}\right)d\sum_{i=1}^{k}\mu_i\right)
= \exp\left(-\int_E\left(1-e^{-f}\right)d\mu\right),
\]
as desired, where the independence of the $N_i(f)$ was used in the third equality.

1.3.2 Examples

Now some important examples of Poisson processes are given. Consider the special case where $E = [0,\infty)$ and $N$ is a Poisson process with mean measure $\mu$. One may then define the stochastic process $(N(t))_{t\in[0,\infty)}$ through
\[ N(t) = N((0,t]). \]

Then $N(0) = 0$ a.s. and $(N(t))_{t\ge0}$ has independent increments, that is, for $t_1 < t_2 < \cdots$ the random variables $(N(t_{i+1}) - N(t_i))_{i\in\mathbb{N}}$ are independent. Furthermore, for any $t_0$ and sequences $a_n \uparrow t_0$ and $b_n \downarrow t_0$ one has, by continuity of measures, $N(b_n) \to N(t_0)$ a.s. and $N(a_n) \to N((0,t_0))$ a.s., so the process a.s. has paths which are right continuous with left limits, so-called càdlàg paths.

A Poisson process on a Euclidean space $E$ is called homogeneous if its mean measure $\mu$ satisfies
\[ \mu = \alpha\lambda, \qquad \alpha > 0, \]
where $\lambda$ denotes the Lebesgue measure; otherwise it is called inhomogeneous. If $N$ is homogeneous, then for any $A$,
\[ EN(A) = \mu(A) = \alpha\lambda(A). \]
Back to the example where $E = [0,\infty)$ and $(N(t))_{t\ge0}$ is defined as before. For $t > s$,
\[ E(N(t) - N(s)) = \mu((s,t]) = \alpha(t-s), \]
hence $\alpha$ describes the expected number of points per unit interval. In this case, $\alpha$ is referred to as the rate of the Poisson process.

Another important example of a Poisson process is the following: let $(E_i)_{i\in\mathbb{N}}$ be a sequence of i.i.d. random variables with standard exponential distribution and set $\Gamma_i = E_1 + \cdots + E_i$. Then
\[ \sum_{i=1}^{\infty}\delta_{\Gamma_i} \]
is a Poisson process. Proving that this is actually the case requires some work. First some notation is introduced: for random variables $X_1,\dots,X_n$, define $X_{(i)}$ as the $i$-th smallest element of $\{X_j : j \le n\}$. Furthermore, the following lemma is needed.

Lemma 1.8. The following statements hold:

1. Let $X_1,\dots,X_n$ be i.i.d. real-valued random variables with density $f$. Then the joint density of the order statistics is given by
\[ f_{(X_{(1)},\dots,X_{(n)})}(x_1,\dots,x_n) = n!\prod_{i=1}^{n}f(x_i)\,\mathbf{1}_{C_n}(x_1,\dots,x_n), \]
where $C_n := \{(x_1,\dots,x_n)\in\mathbb{R}^n : x_1 < \cdots < x_n\}$;

2. let $(E_n)_{n\in\mathbb{N}}$ be i.i.d. exponentially distributed random variables and set $\Gamma_i = \sum_{j=1}^{i}E_j$. Then,
\[ (\Gamma_1,\dots,\Gamma_n \mid \Gamma_{n+1} = t) \stackrel{d}{=} (U_{(1)},\dots,U_{(n)}), \]
where $U_{(1)},\dots,U_{(n)}$ are the order statistics of $n$ i.i.d. random variables uniformly distributed on $(0,t)$.

Proof. 1. Since the $X_i$ have no atoms, they are a.s. pairwise distinct. Next, it is observed that $\{X_{(1)},\dots,X_{(n)}\} = \{X_{\pi(1)},\dots,X_{\pi(n)}\}$ for one and only one $\pi\in S_n$, where $S_n$ denotes the set of permutations of $\{1,\dots,n\}$. Hence,
\[
P(X_{(1)}\le x_1,\dots,X_{(n)}\le x_n)
= \sum_{\pi\in S_n}P\big((X_{\pi(1)}\le x_1,\dots,X_{\pi(n)}\le x_n)\cap(X_{\pi(i)} = X_{(i)},\ 1\le i\le n)\big)
\]
\[
= \sum_{\pi\in S_n}P\big((X_{\pi(1)}\le x_1,\dots,X_{\pi(n)}\le x_n)\cap((X_{\pi(1)},\dots,X_{\pi(n)})\in C_n)\big)
= \sum_{\pi\in S_n}P\big((X_{\pi(1)},\dots,X_{\pi(n)})\in C_n\cap(-\infty,x_1]\times\cdots\times(-\infty,x_n]\big)
\]
\[
= n!\,P\big((X_1,\dots,X_n)\in C_n\cap(-\infty,x_1]\times\cdots\times(-\infty,x_n]\big)
= \int_{(-\infty,x_1]\times\cdots\times(-\infty,x_n]}n!\,\mathbf{1}_{C_n}\,dP_{(X_1,\dots,X_n)}
= \int_{(-\infty,x_1]\times\cdots\times(-\infty,x_n]}n!\prod_{i=1}^{n}f(x_i)\,\mathbf{1}_{C_n}\,dx_n\cdots dx_1.
\]
Since the rectangles of the form $(-\infty,x_1]\times\cdots\times(-\infty,x_n]$ generate the Borel σ-algebra and are intersection stable, the integrand of the last line is the Radon-Nikodym density of $P_{(X_{(1)},\dots,X_{(n)})}$ with respect to the Lebesgue measure.

2. The idea is to derive the joint density of $\Gamma_1,\dots,\Gamma_n$ via transformation of the joint density of $E_1,\dots,E_n$. The inverse of the transformation from $(E_1,\dots,E_n)$ to $(\Gamma_1,\dots,\Gamma_n)$ is given by
\[ g(x_1,\dots,x_n) = (x_1,\,x_2-x_1,\,\dots,\,x_n-x_{n-1}), \]
which has Jacobian matrix
\[
\begin{pmatrix}
1 & 0 & \cdots & 0 & 0 \\
-1 & 1 & 0 & \cdots & 0 \\
\vdots & \ddots & \ddots & & \vdots \\
0 & \cdots & -1 & 1 & 0 \\
0 & \cdots & 0 & -1 & 1
\end{pmatrix}.
\]
The determinant of this matrix is clearly $1$. Now the joint density of $(E_1,\dots,E_n)$ is
\[ f_{(E_1,\dots,E_n)}(x_1,\dots,x_n) = \lambda^n\prod_{i=1}^{n}e^{-\lambda x_i}\,\mathbf{1}_{\mathbb{R}^n_+}(x_1,\dots,x_n), \]
where $\mathbb{R}^n_+$ denotes the set $\{x\in\mathbb{R}^n : x_i\ge0,\ 1\le i\le n\}$. Hence, the joint density of $(\Gamma_1,\dots,\Gamma_n)$ is
\[ f_{(\Gamma_1,\dots,\Gamma_n)}(x_1,\dots,x_n) = f_{(E_1,\dots,E_n)}(g(x_1,\dots,x_n))\cdot1 = \lambda^n e^{-\lambda x_1}\prod_{i=2}^{n}e^{-\lambda(x_i-x_{i-1})}\,\mathbf{1}_{C_n} = \lambda^n e^{-\lambda x_n}\,\mathbf{1}_{C_n}, \]
where it was used that $g^{-1}(\mathbb{R}^n_+) = C_n$ up to a Lebesgue null set. So, the conditional density is
\[ f_{(\Gamma_1,\dots,\Gamma_n\mid\Gamma_{n+1}=t)}(x_1,\dots,x_n) = \frac{f_{(\Gamma_1,\dots,\Gamma_{n+1})}(x_1,\dots,x_n,t)}{f_{\Gamma_{n+1}}(t)} = \frac{\lambda^{n+1}e^{-\lambda t}\,\mathbf{1}_{C_n}}{(n!)^{-1}\lambda^{n+1}t^n e^{-\lambda t}} = \frac{n!}{t^n}\,\mathbf{1}_{C_n}, \qquad 0 < x_1 < \cdots < x_n < t. \]
To finish the proof, it is observed that by 1. the density of the order statistics of $(U_1,\dots,U_n)$, with the $U_i$ i.i.d. uniform on $(0,t)$, is
\[ f_{(U_{(1)},\dots,U_{(n)})}(x_1,\dots,x_n) = n!\prod_{i=1}^{n}\frac{1}{t}\,\mathbf{1}_{C_n} = \frac{n!}{t^n}\,\mathbf{1}_{C_n}, \]
which coincides with the conditional density above.


With this Lemma, the proof of the initial claim is possible.

Theorem 1.9. Let $(E_i)_{i\in\mathbb{N}}$ be i.i.d. random variables with unit exponential distribution and set $\Gamma_i := \sum_{j=1}^{i}E_j$. Then
\[ N := \sum_{i=1}^{\infty}\delta_{\Gamma_i} \]
is a homogeneous Poisson process with rate $1$ on the state space $E = [0,\infty)$.

Proof. The proof consists of showing that the Laplace functional of $N$ has the right form. By monotone convergence, it is observed that
\[ \Psi_N(f) = \lim_{m\to\infty}\Psi_{\sum_{i=1}^{m}\delta_{\Gamma_i}}(f) = \lim_{m\to\infty}\int_\Omega\exp\left(-\sum_{i=1}^{m}f(\Gamma_i)\right)dP. \]
Hence, using the previous lemma and the law of total probability, and letting $U_1,\dots,U_m$ be i.i.d. random variables uniformly distributed on $(0,1)$ and independent of everything else,
\[
\Psi_N(f) = \lim_{m\to\infty}\int_E E\left(\exp\left(-\sum_{i=1}^{m}f(\Gamma_i)\right)\,\middle|\,\Gamma_{m+1}=s\right)P_{\Gamma_{m+1}}(ds)
= \lim_{m\to\infty}\int_E E\exp\left(-\sum_{i=1}^{m}f(sU_{(i)})\right)P_{\Gamma_{m+1}}(ds)
\]
\[
= \lim_{m\to\infty}\int_E E\exp\left(-\sum_{i=1}^{m}f(sU_i)\right)P_{\Gamma_{m+1}}(ds)
= \lim_{m\to\infty}\int_E\left(E\,e^{-f(sU_1)}\right)^m P_{\Gamma_{m+1}}(ds)
= \lim_{m\to\infty}\int_E\left(\int_{[0,s)}\frac{1}{s}e^{-f(x)}\,dx\right)^m P_{\Gamma_{m+1}}(ds)
\]
\[
= \lim_{m\to\infty}\int_\Omega\left(\int_{[0,\Gamma_{m+1})}\frac{1}{\Gamma_{m+1}}e^{-f(x)}\,dx\right)^m dP
= \lim_{m\to\infty}\int_\Omega\left(\Gamma_{m+1}^{-1}\left(\Gamma_{m+1}-\int_{[0,\Gamma_{m+1})}\left(1-e^{-f(x)}\right)dx\right)\right)^m dP
\]
\[
= \lim_{m\to\infty}\int_\Omega\left(1-\frac{\int_{[0,\Gamma_{m+1})}\left(1-e^{-f(x)}\right)dx}{\Gamma_{m+1}}\right)^{\Gamma_{m+1}\frac{m}{\Gamma_{m+1}}}dP,
\]
where the second equality comes from the lemma and the third is due to the exchangeability of the uniform random variables. Now, since $\Gamma_{m+1}\to\infty$ a.s. and $m^{-1}\Gamma_{m+1} = m^{-1}E_{m+1} + m^{-1}\Gamma_m \to E(E_1) = 1$ a.s. by the strong law of large numbers, the integrand converges a.s. to
\[ \exp\left(-\int_E\left(1-e^{-f(x)}\right)dx\right) \]
as $m\to\infty$. Since both the integrand and the limit are bounded by $1$ a.s., dominated convergence gives
\[
\lim_{m\to\infty}\int_\Omega\left(1-\frac{\int_{[0,\Gamma_{m+1})}\left(1-e^{-f(x)}\right)dx}{\Gamma_{m+1}}\right)^{\Gamma_{m+1}\frac{m}{\Gamma_{m+1}}}dP
= \int_\Omega\exp\left(-\int_E\left(1-e^{-f(x)}\right)dx\right)dP
= \exp\left(-\int_E\left(1-e^{-f(x)}\right)dx\right).
\]


1.3.3 The Order Statistics Property

One of the most useful properties of Poisson processes is the so-called order statistics property, which will be stated now and proven with the help of the general construction.

Theorem 1.10. Let $N$ be a Poisson process on $E = [0,\infty)$ with mean measure $\mu$ and points $(\Gamma_i)_{i\in\mathbb{N}}$. Assume, furthermore, that the restriction of $\mu$ to $[0,t]$ is finite. Then,
\[ (\Gamma_1,\dots,\Gamma_n \mid N([0,t]) = n) \stackrel{d}{=} (X_{(1)},\dots,X_{(n)}), \]
where the $X_i$ are i.i.d. random variables with distribution $\mu([0,t])^{-1}\mu|_{[0,t]}$. If $N$ is homogeneous, then,
\[ (\Gamma_1,\dots,\Gamma_n \mid N([0,t]) = n) \stackrel{d}{=} t\,(U_{(1)},\dots,U_{(n)}), \]
where $(U_i)_{i\in\mathbb{N}}$ are i.i.d. uniform random variables on $(0,1)$.

Proof. Let $N' := N(\cdot\cap[0,t])$ denote the restriction of $N$ to $[0,t]$. Since $\mu$ is finite on $[0,t]$, one may use the general construction to obtain a Poisson process $\tilde N$ such that
\[ N' \stackrel{d}{=} \tilde N = \sum_{i=1}^{\tau}\delta_{X_i}, \]
where the $X_i$ are i.i.d. with distribution $\mu([0,t])^{-1}\mu|_{[0,t]}$ and $\tau$ has Poisson distribution with parameter $\mu([0,t])$. Furthermore, $(X_i)_{i\in\mathbb{N}}$ and $\tau$ are independent. Now, on the event $\{N([0,t]) = n\}$ one has $\tau = n$ and hence
\[ (\Gamma_1,\dots,\Gamma_n \mid N([0,t]) = n) \stackrel{d}{=} (X_{(1)},\dots,X_{(n)} \mid \tau = n) \stackrel{d}{=} (X_{(1)},\dots,X_{(n)}). \]
If $N$ is homogeneous, the restriction of $\mu$ to $[0,t]$ is clearly finite and has a constant density $f_t$ for every $t$. Furthermore, $\mu([0,t])^{-1}f_t$ is clearly the density of $tU$, where $U$ is uniform on $(0,1)$. Hence,
\[ (\Gamma_1,\dots,\Gamma_n \mid N([0,t]) = n) \stackrel{d}{=} t\,(U_{(1)},\dots,U_{(n)}). \]

1.3.4 Transformation of Poisson Processes

It will be very useful to know what happens to a Poisson process when its state space is transformed. Let therefore $E, E'$ be metric spaces equipped with σ-algebras $\mathcal{B}_1, \mathcal{B}_2$, and consider a transformation
\[ T : E \to E'. \]
If $T$ is measurable, it induces a map $\tilde T : M_p(E) \to M_p(E')$ through
\[ \tilde T(\mu) = \mu\circ T^{-1}. \]
Hence, any point process $N : \Omega\to M_p(E)$ induces a point process $N' : \Omega\to M_p(E')$ through
\[ N' = \tilde T\circ N. \]
To prove that this is a point process, it has to be shown that $\tilde T$ is measurable. In order to do so, consider an arbitrary $I\in\mathcal{T}$ as in Proposition 1.2 and $B\in\mathcal{B}[0,\infty)$, the Borel σ-algebra on $[0,\infty)$. Then $T^{-1}(I) = A$ for some $A\in\mathcal{B}_1$ and hence,
\[ \tilde T^{-1}\left(e_I^{-1}(B)\right) = \{\mu\in M_p(E) : \tilde T\mu(I)\in B\} = \{\mu\in M_p(E) : \mu(A)\in B\} \in \mathcal{M}_p(E). \]


Furthermore, if $N$ has points $(X_n)_{n\in\mathbb{N}}$, then $N'$ has points $(TX_n)_{n\in\mathbb{N}}$, since
\[ N'(A) = N(T^{-1}(A)) = \sum_{n\in\mathbb{N}}\delta_{X_n}(T^{-1}(A)) = \sum_{n\in\mathbb{N}}\mathbf{1}_{\{X_n\in T^{-1}(A)\}} = \sum_{n\in\mathbb{N}}\mathbf{1}_{\{TX_n\in A\}} = \sum_{n\in\mathbb{N}}\delta_{TX_n}(A). \]

Moreover, Poisson processes stay Poisson processes under transformations.

Proposition 1.11. Let $(E_1,\mathcal{B}_1), (E_2,\mathcal{B}_2)$ be two metric spaces equipped with their respective Borel σ-algebras. Let, furthermore, $T : E_1\to E_2$ be measurable with the property that for any bounded $B\in\mathcal{B}_2$, $T^{-1}(B)$ is bounded in $E_1$. Then, if $N$ is a Poisson process with mean measure $\mu$, $\tilde T\circ N$ is again a Poisson process with state space $E_2$ and mean measure $\mu\circ T^{-1}$.

Proof. The Laplace functional of $\tilde T\circ N$ will be computed. Let, therefore, $f : E_2\to[0,\infty]$ be measurable. Then $f\circ T$ is also measurable and non-negative, and hence,
\[
\Psi_{\tilde T\circ N}(f) = \int_\Omega\exp\left(-(\tilde T N)(f)\right)dP
= \int_\Omega\exp\left(-\int_{E_2}f\,d\big(N(\omega)\circ T^{-1}\big)\right)P(d\omega)
= \int_\Omega\exp\left(-\int_{E_1}f(T(x))\,N(\omega)(dx)\right)P(d\omega)
\]
\[
= \Psi_N(f\circ T) = \exp\left(-\int_{E_1}\left(1-e^{-f(T(x))}\right)\mu(dx)\right)
= \exp\left(-\int_{E_2}\left(1-e^{-f(x)}\right)\mu\circ T^{-1}(dx)\right),
\]
which is the Laplace functional of a Poisson process with mean measure $\mu\circ T^{-1}$.

Proposition 1.11 allows one to switch between homogeneous and inhomogeneous Poisson processes. Recall that a Poisson process whose state space is Euclidean is called homogeneous if its mean measure is a scalar multiple of the Lebesgue measure. Let $N$ be an inhomogeneous Poisson process on $E = [0,\infty)$ with mean measure $\mu$, which is not finite on $E$ and which has density $f$ w.r.t. the Lebesgue measure. Define the function
\[ m(t) = \int_0^t f(x)\,dx \]
and consider its generalized inverse
\[ m^{\leftarrow}(x) := \inf\{u\in[0,\infty) : m(u)\ge x\}. \]
Since $m(\infty) = \infty$, $m^{\leftarrow}$ is well defined on $[0,\infty)$, and since $m$ is continuous and monotonically increasing, $m^{\leftarrow}$ is strictly increasing; in particular, both maps are measurable. Hence, by Proposition 1.11, $m\circ N$ (the point process with points $(m(X_i))$ if $N$ has points $(X_i)$) is a Poisson process with mean measure $\mu\circ m^{-1}$. Since
\[ \mu\circ m^{-1}([0,t]) = \mu(\{x\in E : m(x)\le t\}) = \mu(\{x\in E : x\le m^{\leftarrow}(t)\}) = \mu([0,m^{\leftarrow}(t)]) = m(m^{\leftarrow}(t)) = t, \]
$\mu\circ m^{-1}$ and the Lebesgue measure agree on an intersection-stable generating system and must therefore be equal. In particular, $m\circ N$ is a homogeneous Poisson process with rate $1$. Conversely, if $\tilde N$ is a homogeneous Poisson process on $[0,\infty)$ with rate $1$, then $m^{\leftarrow}\circ\tilde N$ is also a Poisson process, with mean measure $\lambda\circ(m^{\leftarrow})^{-1}$. Since
\[ \lambda\circ(m^{\leftarrow})^{-1}([0,t]) = \lambda(\{x\in E : m^{\leftarrow}(x)\le t\}) = \lambda(\{x\in E : x\le m(t)\}) = m(t) = \mu([0,t]), \]
$m^{\leftarrow}\circ\tilde N$ is distributed as $N$.

But of course, other transformations are of interest as well. Consider for example the function $m_\alpha : (0,\infty)\to(0,\infty]$ given by $m_\alpha(x) = x^{-1/\alpha}$ for some $\alpha > 0$. The space $(0,\infty]$ can be given a metric making the sets $(x,\infty]$, $x > 0$, relatively compact, e.g. $d(x,y) = |x^{-1}-y^{-1}|$ with the convention $\infty^{-1} = 0$. Then $m_\alpha$ is continuous and preimages of compact sets are compact. So, if $N$ denotes a homogeneous Poisson process with rate $1$, then $m_\alpha\circ N$ is also a Poisson process and has the representation
\[ \sum_{i=1}^{\infty}\delta_{\Gamma_i^{-1/\alpha}}. \tag{1.2} \]
Furthermore, its mean measure is given by $\mu\circ m_\alpha^{-1}([a,b]) = a^{-\alpha}-b^{-\alpha}$. Another transformation of interest is given by the following proposition.

Proposition 1.12. Let $N$ be a Poisson process with points $(X_i)_{i\in\mathbb{N}}$ in a Euclidean space $E_1$ and mean measure $\mu$. Let furthermore $(J_i)_{i\in\mathbb{N}}$ be i.i.d. random elements of a second Euclidean space $E_2$ with distribution $\nu$, which are independent of $(X_i)_{i\in\mathbb{N}}$ but defined on the same probability space. Then the point process given through
\[ \sum_{i=1}^{\infty}\delta_{(X_i,J_i)} \]
is a Poisson process with mean measure $\mu\otimes\nu$, the product measure of $\mu$ and $\nu$.

Proof. The proof will again use the general construction. Assume first that $\mu$ is finite on $E_1$. Then, by the general construction, one may assume the existence of a sequence $(Y_n)_{n\in\mathbb{N}}$ of i.i.d. random variables with distribution $\mu(E_1)^{-1}\mu$ and an independent Poisson distributed random variable $\tau$ with parameter $\mu(E_1)$ such that
\[ N \stackrel{d}{=} \sum_{i=1}^{\tau}\delta_{Y_i}. \]
In this case, the $(Y_i,J_i)$ are i.i.d. on $E_1\times E_2$ and by independence have distribution $(\mu(E_1)^{-1}\mu)\otimes\nu$. It follows from the general construction that
\[ \sum_{i=1}^{\tau}\delta_{(Y_i,J_i)} \stackrel{d}{=} \sum_{i=1}^{\infty}\delta_{(X_i,J_i)} \]
is a Poisson process with state space $E_1\times E_2$ and mean measure $\mu\otimes\nu$. If $\mu(E_1) < \infty$ does not hold, one proceeds as in the general construction by choosing a disjoint, countable collection of relatively compact sets $F_i$ such that $\bigcup_{i\in\mathbb{N}}F_i = E_1$, defining the measures $\mu_i = \mu(\cdot\cap F_i)$ for all $i\in\mathbb{N}$, and superposing the resulting independent Poisson processes.
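Proposition 1.12, combined with the representation (1.2), yields the marked point process with points $(\Gamma_i^{-1/\alpha}, W_i)$ that appears in Section 2. The following Java sketch generates the first few points of such a process (a truncation); the standard normal mark distribution is only an example choice of $\nu$, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Sketch: points (Gamma_i^{-1/alpha}, W_i) of a marked Poisson process, where the
 * Gamma_i are rate-1 arrival times and the marks W_i are i.i.d. (here: standard
 * normal as an example mark distribution nu), independent of the Gamma_i.
 */
public final class MarkedPoissonPoints {

    static List<double[]> sample(double alpha, int numberOfPoints, Random rng) {
        List<double[]> points = new ArrayList<>(numberOfPoints);
        double gamma = 0.0;
        for (int i = 0; i < numberOfPoints; i++) {
            gamma += -Math.log(1.0 - rng.nextDouble());      // Gamma_i = E_1 + ... + E_i
            double location = Math.pow(gamma, -1.0 / alpha); // point of the PRM from (1.2)
            double mark = rng.nextGaussian();                // i.i.d. mark W_i ~ nu
            points.add(new double[] { location, mark });
        }
        return points; // only the first numberOfPoints (largest) locations
    }

    public static void main(String[] args) {
        for (double[] p : sample(1.5, 5, new Random(3))) {
            System.out.printf("(%.4f, %.4f)%n", p[0], p[1]);
        }
    }
}
```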


2 Representation of Sum-Stable Random Variables & Processes

The relevance of sum-stable random variables is implied by the central limit theorem: sum-stable random variables are those whose distribution is the weak limit of appropriately centered and scaled sums of i.i.d. random variables, with the Gaussian one being the most distinguished. While Gaussian models are well understood and have many applications, other sum-stable models have recently drawn an increasing amount of interest. This is mainly due to the observation of data whose distributional tails decay relatively slowly, which may result in infinite second and even first moments; examples are telecommunication traffic data or economic data. While the Gaussian distribution has finite first and second moments, all other sum-stable distributions have infinite second moments and many do not even have a first moment, which suggests using them for "heavy-tailed" data. The derivation of the representation of sum-stable random variables and processes is quite involved. To not exceed the scope of this thesis, it is only carried out in detail for random variables. Most of this chapter is based on [23], where the detailed derivation of representations of sum-stable processes can also be found.

2.1 Preliminaries

As mentioned in the introduction, the definition of sum-stable random variables resembles the central limit theorem.

Definition 2.1. The distribution of a random variable $X$ is called sum-stable if there exist a sequence $(Y_i)_{i\in\mathbb{N}}$ of i.i.d. random variables, a sequence of positive numbers $(b_n)_{n\in\mathbb{N}}$ and a sequence of real numbers $(a_n)_{n\in\mathbb{N}}$ such that
\[ \frac{Y_1+\cdots+Y_n-a_n}{b_n} \xrightarrow{d} X. \]
In that case, $X$ is said to have a domain of attraction and the distribution of $Y_1$ belongs to this domain of attraction.

The prime example of a stable distribution is the Gaussian distribution $\mathcal{N}(0,\sigma^2)$, which, by the central limit theorem, is stable with $b_n = \sqrt{n}$ and $a_n = nEY_1$ for any i.i.d. sequence $(Y_i)_{i\in\mathbb{N}}$ with $\mathrm{Var}(Y_1) < \infty$. In particular, one could have chosen the $Y_i$ to be normally distributed. It turns out that, more generally, the following condition is equivalent to the above definition. Let $X$ be any random variable and let $(X_n)_{n\ge1}$ be a sequence of i.i.d. copies of $X$. Then the distribution of $X$ is called stable if there exist a positive sequence $(b_n)_{n\ge1}$ and a real-valued sequence $(a_n)_{n\ge1}$ such that
\[ \frac{X_1+\cdots+X_n}{b_n} - a_n \stackrel{d}{=} X. \tag{2.1} \]
Obviously, (2.1) implies that $X$ is stable in the sense of Definition 2.1. For the converse, see e.g. [23].

It can be shown that for any random variable with a stable distribution the sequence $(b_n)_{n\in\mathbb{N}}$ must be of the form $b_n = n^{1/\alpha}$ for some $0 < \alpha \le 2$; see [14]. The number $\alpha$ is called the index of stability and describes the rate of decay of the tails of the distribution. Distributions with stability index $\alpha < 2$ have infinite second moment, and if additionally $\alpha \le 1$ the first moment does not exist either. If $X$ has a stable distribution with stability index $\alpha$, its distribution is called α-sum-stable or simply α-stable.


Furthermore, a random variable has an α-stable distribution if and only if its characteristic function is of the form
\[
E\,e^{itX} =
\begin{cases}
\exp\left(-\sigma^\alpha|t|^\alpha\left(1 - i\beta(\operatorname{sgn}t)\tan\frac{\pi\alpha}{2}\right) + i\mu t\right) & \text{if } \alpha \ne 1, \\[4pt]
\exp\left(-\sigma|t|\left(1 + i\beta\frac{2}{\pi}(\operatorname{sgn}t)\ln|t|\right) + i\mu t\right) & \text{if } \alpha = 1,
\end{cases}
\]
where $0 < \alpha \le 2$, $\sigma \ge 0$, $\beta\in[-1,1]$ and $\mu\in\mathbb{R}$; $\alpha$ is the index of stability. The proof that any such characteristic function corresponds to an α-stable distribution is short. Let $X$ have characteristic function as above, let $(X_i)_{i\in\mathbb{N}}$ be i.i.d. copies of $X$ and let $S_n = \sum_{i=1}^{n}X_i$. Assume $\alpha \ne 1$ (if $\alpha = 1$ the proof is similar). Then,
\[
E\,e^{itS_n} = \prod_{j=1}^{n}E\,e^{itX_j}
= \exp\left(n\left(-\sigma^\alpha|t|^\alpha\left(1 - i\beta(\operatorname{sgn}t)\tan\frac{\pi\alpha}{2}\right) + i\mu t\right)\right)
\]
\[
= \exp\left(-\sigma^\alpha|n^{1/\alpha}t|^\alpha\left(1 - i\beta(\operatorname{sgn}t)\tan\frac{\pi\alpha}{2}\right) + i\mu n^{1/\alpha}t\right)\exp\left((n-n^{1/\alpha})\,i\mu t\right)
= E\,e^{it\left(n^{1/\alpha}X + (n-n^{1/\alpha})\mu\right)}.
\]
For the converse see [14]. Hence, any α-stable distribution is determined by the three parameters $\sigma, \beta, \mu$. Therefore, the following notation is introduced: if $X$ has an α-stable distribution with parameters $\sigma, \beta, \mu$, this will be written as $X \sim S_\alpha(\sigma,\beta,\mu)$.

2.2 A Series Representation of Sum-Stable Random Variables

Now the first representation theorem is given. It is more of theoretical than of practical interest: it lays the foundation for representations of other α-stable objects, but it is also useful for deriving and proving properties of α-stable random variables.

Theorem 2.2. Let $(\Gamma_i)_{i\ge1}$ be the points of a homogeneous Poisson process with rate $1$ and let $(W_i)_{i\in\mathbb{N}}$ be an i.i.d. sequence of random variables, independent of $(\Gamma_i)_{i\ge1}$, such that
\[ E|W_1|^\alpha < \infty \ \text{ if } \alpha \ne 1, \qquad E\big|W_1\ln|W_1|\big| < \infty \ \text{ if } \alpha = 1. \]
Furthermore, define for any $0 < \alpha \le 2$ the sequence $(k_i^{(\alpha)})_{i\in\mathbb{N}}$ through
\[
k_i^{(\alpha)} =
\begin{cases}
0 & \text{if } 0 < \alpha < 1, \\[4pt]
E\left(W_1\displaystyle\int_{|W_1|/i}^{|W_1|/(i-1)}x^{-2}\sin x\,dx\right) & \text{if } \alpha = 1, \\[6pt]
\dfrac{\alpha}{\alpha-1}\left(i^{\frac{\alpha-1}{\alpha}} - (i-1)^{\frac{\alpha-1}{\alpha}}\right)EW_1 & \text{if } \alpha > 1.
\end{cases}
\]
Then the series
\[ \sum_{i=1}^{\infty}\left(\Gamma_i^{-1/\alpha}W_i - k_i^{(\alpha)}\right) \tag{2.2} \]
converges almost surely to a random variable $X$ such that $X \sim S_\alpha(\sigma,\beta,0)$ with
\[ \sigma^\alpha = \frac{E|W_1|^\alpha}{C_\alpha} \qquad\text{and}\qquad \beta = \frac{E|W_1|^\alpha\operatorname{sgn}W_1}{E|W_1|^\alpha}, \]
and
\[ C_\alpha = \left(\int_0^\infty x^{-\alpha}\sin x\,dx\right)^{-1} =
\begin{cases}
\dfrac{1-\alpha}{\Gamma(2-\alpha)\cos(\pi\alpha/2)} & \text{if } \alpha \ne 1, \\[6pt]
2/\pi & \text{if } \alpha = 1.
\end{cases}
\]


Moreover, in the special case $\alpha = 1$ the series
\[ \sum_{i=1}^{\infty}\left(\Gamma_i^{-1}W_i - EW_1\int_{1/i}^{1/(i-1)}\frac{\sin x}{x^2}\,dx\right) \tag{2.3} \]
converges a.s. to an $S_1(\sigma,\beta,\mu)$ random variable with $\sigma, \beta$ as above and
\[ \mu = -E\left(W_1\ln|W_1|\right). \]
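To make the normalization concrete, consider the special case $W_i \equiv 1$ and $0 < \alpha < 1$ (an illustrative instance of the theorem, not a separate result): then $k_i^{(\alpha)} = 0$, $E|W_1|^\alpha = 1$ and $\operatorname{sgn}W_1 = 1$, so (2.2) reads
\[ \sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha} \sim S_\alpha\left(C_\alpha^{-1/\alpha},\,1,\,0\right), \]
a positive, totally skewed α-stable random variable.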

Proof. The cases $0 < \alpha < 1$ and $1 < \alpha < 2$ will be proven in detail. If $\alpha = 1$, the proof is essentially similar to the case $1 < \alpha < 2$ and only requires different sum and integral manipulations; therefore, we will only sketch the proof for $\alpha = 1$. To prove convergence, Kolmogorov's three-series theorem is applied. Several lemmas will be necessary, which are introduced before the respective cases.

(a) $0 < \alpha < 1$. Since $\Gamma_i/i \to 1$ a.s., it suffices to prove that $\sum_{i=1}^{\infty}|W_i|\,i^{-1/\alpha}$ converges a.s. For the first series of the three-series theorem, for any $s > 0$,
\[
\sum_{i=1}^{\infty}P\left(|W_i|\,i^{-1/\alpha} > s\right) = \sum_{i=1}^{\infty}P\left(|W_i|^\alpha > i s^\alpha\right)
= \int_0^\infty P\left(|W_1 s^{-1}|^\alpha > \lfloor i\rfloor + 1\right)di
\le \int_0^\infty P\left(|W_1 s^{-1}|^\alpha > i\right)di,
\]
which must be finite since $E|W_1|^\alpha < \infty$. For the second series:
\[
\sum_{i=1}^{\infty}E\left(|W_i|\,i^{-1/\alpha}\mathbf{1}_{(|W_i|i^{-1/\alpha}\le1)}\right)
\le \sum_{i=1}^{\infty}i^{-1/\alpha}\int_0^{i^{1/\alpha}}P(|W_1| > y)\,dy
= \frac{1}{\alpha}\sum_{i=1}^{\infty}\int_0^{i}i^{-1/\alpha}y^{(1-\alpha)/\alpha}P(|W_1|^\alpha > y)\,dy
\]
\[
= \frac{1}{\alpha}\sum_{i=1}^{\infty}\int_0^{\infty}i^{-1/\alpha}\mathbf{1}_{(0,i)}(y)\,y^{(1-\alpha)/\alpha}P(|W_1|^\alpha > y)\,dy
\le \frac{1}{\alpha}\int_0^{\infty}y^{(1-\alpha)/\alpha}P(|W_1|^\alpha > y)\int_y^{\infty}i^{-1/\alpha}\,di\,dy
= \frac{1}{1-\alpha}\int_0^{\infty}P(|W_1|^\alpha > y)\,dy = \frac{E|W_1|^\alpha}{1-\alpha} < \infty,
\]
where the order of integration and summation could be exchanged since everything involved is non-negative. Lastly,
\[
\sum_{i=1}^{\infty}E\left(|W_i|^2 i^{-2/\alpha}\mathbf{1}_{(|W_i|i^{-1/\alpha}\le1)}\right)
\le \sum_{i=1}^{\infty}i^{-2/\alpha}\int_0^{i^{2/\alpha}}P\left(|W_1|^2 > y\right)dy
= \frac{2}{\alpha}\sum_{i=1}^{\infty}\int_0^{\infty}i^{-2/\alpha}\mathbf{1}_{(0,i)}(y)\,y^{(2-\alpha)/\alpha}P(|W_1|^\alpha > y)\,dy
\]
\[
\le \frac{2}{\alpha}\int_0^{\infty}y^{(2-\alpha)/\alpha}P(|W_1|^\alpha > y)\int_y^{\infty}i^{-2/\alpha}\,di\,dy
= \frac{2}{2-\alpha}\int_0^{\infty}P(|W_1|^\alpha > y)\,dy = \frac{2\,E|W_1|^\alpha}{2-\alpha} < \infty.
\]
This proves case (a).

(b) $1 < \alpha < 2$. Two lemmas will be needed.


Lemma 2.3. For $n\in\mathbb{N}$, let $\Gamma_n$ be the sum of the first $n$ members of an i.i.d. sequence $(E_i)_{i\in\mathbb{N}}$ of unit exponentially distributed random variables. Then, for any $\alpha > 0$,
\[ \limsup_{i\to\infty}\frac{\left|\Gamma_i^{-1/\alpha} - i^{-1/\alpha}\right|}{i^{-(1/\alpha)-(1/2)}\sqrt{2\ln\ln i}} = \frac{1}{\alpha} \quad\text{a.s.} \]

Proof of Lemma 2.3. First, it is noted that $\Gamma_i/i \to 1$ a.s. by the law of large numbers, so clearly $(\Gamma_i/i)^{-1/\alpha} \to 1$ a.s. Second, if $\bar E_i = E_i - 1$, then $\Gamma_i - i = \bar E_1 + \cdots + \bar E_i$ and the $\bar E_i$ have mean zero and unit variance, so that the law of the iterated logarithm gives
\[ \limsup_{i\to\infty}\frac{\Gamma_i - i}{\sqrt{2i\ln\ln i}} = 1 \quad\text{a.s., and so also}\quad \limsup_{i\to\infty}\frac{|\Gamma_i - i|}{\sqrt{2i\ln\ln i}} = 1 \quad\text{a.s.} \]
Furthermore, the rule of de l'Hospital applied to the functions
\[ f(x) = (1+x)^{1/\alpha} - 1 \qquad\text{and}\qquad g(x) = \alpha^{-1}x \]
gives
\[ 1 = \lim_{i\to\infty}\frac{\left|\left(1+\frac{\Gamma_i-i}{i}\right)^{1/\alpha}-1\right|}{\alpha^{-1}\frac{|\Gamma_i-i|}{i}}. \]
Hence,
\[
1 = \lim_{i\to\infty}\frac{\left|\left(\frac{\Gamma_i}{i}\right)^{1/\alpha}-1\right|}{\alpha^{-1}\frac{|\Gamma_i-i|}{i}}
= \lim_{i\to\infty}\left(\frac{\Gamma_i}{i}\right)^{-1/\alpha}\frac{\left|\left(\frac{\Gamma_i}{i}\right)^{1/\alpha}-1\right|}{\alpha^{-1}\frac{|\Gamma_i-i|}{i}}
= \lim_{i\to\infty}\frac{\left|\left(\frac{\Gamma_i}{i}\right)^{-1/\alpha}-1\right|}{\alpha^{-1}\frac{|\Gamma_i-i|}{i}}
\]
\[
= \limsup_{i\to\infty}\frac{\left|\left(\frac{\Gamma_i}{i}\right)^{-1/\alpha}-1\right|}{\alpha^{-1}\frac{|\Gamma_i-i|}{i}}\cdot\frac{|\Gamma_i-i|}{\sqrt{2i\ln\ln i}}
= \limsup_{i\to\infty}\frac{i^{1/\alpha}\left|\Gamma_i^{-1/\alpha}-i^{-1/\alpha}\right|}{\alpha^{-1}i^{-1/2}\sqrt{2\ln\ln i}}
= \limsup_{i\to\infty}\frac{\alpha\left|\Gamma_i^{-1/\alpha}-i^{-1/\alpha}\right|}{i^{-(1/\alpha)-(1/2)}\sqrt{2\ln\ln i}},
\]
which proves the claim.

Lemma 2.4. Define
\[
C_i^{(\alpha)} =
\begin{cases}
0 & \text{if } 0 < \alpha < 1, \\[4pt]
\displaystyle\int_{1/i}^{1/(i-1)}x^{-2}\sin x\,dx & \text{if } \alpha = 1, \\[6pt]
\dfrac{\alpha}{\alpha-1}\left(i^{(\alpha-1)/\alpha} - (i-1)^{(\alpha-1)/\alpha}\right) & \text{if } 1 < \alpha < 2.
\end{cases}
\]
Then $\sum_{i=1}^{\infty}\left(\Gamma_i^{-1/\alpha} - C_i^{(\alpha)}\right)$ converges a.s.

Proof of Lemma 2.4. As before, since $\Gamma_i^{-1/\alpha}/i^{-1/\alpha} \to 1$ a.s., the case $0 < \alpha < 1$ is clear, since $1/\alpha > 1$ and $C_i^{(\alpha)} = 0$ in that case. We will only prove the lemma for $1 < \alpha < 2$. One has
\[ \sum_{i=1}^{n}\left(\Gamma_i^{-1/\alpha} - C_i^{(\alpha)}\right) = \sum_{i=1}^{n}\left(\Gamma_i^{-1/\alpha} - i^{-1/\alpha}\right) + \sum_{i=1}^{n}\left(i^{-1/\alpha} - C_i^{(\alpha)}\right). \]
To prove that the first sum on the right-hand side converges a.s., one uses the limit comparison test for series together with Lemma 2.3. For this, it is noted first that $\sqrt{2\ln\ln i}$ is slowly varying, so for any $c > 0$ there exists an $i_c$ such that $\sqrt{2\ln\ln i} \le i^{c}$ for all $i > i_c$. Second, $i^{-1/\alpha-1/2} = i^{-(2+\alpha)/2\alpha}$ and $(2+\alpha)/2\alpha \in (1,3/2]$ for $\alpha\in[1,2)$. Hence, for any $\alpha\in[1,2)$ there exists a $c_\alpha > 0$ such that $(2+\alpha)/2\alpha - c_\alpha > 1$ and so
\[ \sum_{i=i_{c_\alpha}}^{\infty}i^{-(2+\alpha)/2\alpha}\sqrt{2\ln\ln i} \le \sum_{i=i_{c_\alpha}}^{\infty}i^{-\left((2+\alpha)/2\alpha - c_\alpha\right)} < \infty. \]
This, together with the limit comparison test and Lemma 2.3, provides convergence of the first series. For the second sum it is observed that for $1 \le \alpha < 2$ there exists for every $i$ a $\xi_i\in[0,1]$ such that $C_i^{(\alpha)} = (i-\xi_i)^{-1/\alpha}$. Thus, using the mean value theorem once again, one obtains
\[ \sum_{i=1}^{\infty}\left|i^{-1/\alpha} - C_i^{(\alpha)}\right| \le \left|1 - C_1^{(\alpha)}\right| + \sum_{i=2}^{\infty}\left|i^{-1/\alpha} - (i-1)^{-1/\alpha}\right| = \left|1 - C_1^{(\alpha)}\right| + \frac{1}{\alpha}\sum_{i=2}^{\infty}\eta_i^{-(1+\alpha)/\alpha} < \infty, \]
since $\eta_i\in[i-1,i]$.

With these lemmas, the proof can be continued. First, it is observed that, since $E|W_1| < \infty$,
\[ \left|\sum_{i=1}^{\infty}\left(\Gamma_i^{-1/\alpha} - C_i^{(\alpha)}\right)\right| < \infty \quad\text{if and only if}\quad \left|\sum_{i=1}^{\infty}\left(\Gamma_i^{-1/\alpha}EW_1 - C_i^{(\alpha)}EW_1\right)\right| < \infty. \]
And since $EW_1\,C_i^{(\alpha)} = k_i^{(\alpha)}$ and
\[
\sum_{i=1}^{n}\left(\Gamma_i^{-1/\alpha}EW_1 - C_i^{(\alpha)}EW_1\right)
= \sum_{i=1}^{n}\left(\Gamma_i^{-1/\alpha}EW_1 + \Gamma_i^{-1/\alpha}W_i - \Gamma_i^{-1/\alpha}W_i - C_i^{(\alpha)}EW_1\right)
= \sum_{i=1}^{n}\Gamma_i^{-1/\alpha}(EW_1 - W_i) + \sum_{i=1}^{n}\left(\Gamma_i^{-1/\alpha}W_i - C_i^{(\alpha)}EW_1\right),
\]
it is enough to prove convergence of $\sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}(W_i - EW_1)$. Set $Y_i = W_i - EW_1$. Then the $(Y_i)_{i\in\mathbb{N}}$ are i.i.d. with mean $0$ and $E|Y_i|^\alpha < \infty$. Again, Kolmogorov's three-series theorem shall be used, but it is not directly applicable, as $(\Gamma_i^{-1/\alpha}Y_i)_{i\in\mathbb{N}}$ is not a sequence of independent random variables. Hence, the idea is to condition on $(\Gamma_i)_{i\in\mathbb{N}}$, i.e. to show that
\[ P\left(\sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}Y_i \text{ converges}\,\middle|\,(\Gamma_i)_{i\in\mathbb{N}}\right) = 1 \quad\text{a.e.} \]
Now, there is a Markov kernel $\kappa$ such that
\[ P\left(\sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}Y_i \text{ converges}\,\middle|\,(\Gamma_i)_{i\in\mathbb{N}}\right)(\omega) = \kappa_{(\Gamma_i(\omega))_{i\in\mathbb{N}}}(A), \]
where $A$ denotes the event of convergence, and since $(\Gamma_i)_{i\in\mathbb{N}}$ is independent of $(Y_i)_{i\in\mathbb{N}}$,
\[ \kappa_{(\Gamma_i(\omega))_{i\in\mathbb{N}}}(A) = P\left(\sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}(\omega)\,Y_i \text{ converges}\right). \tag{2.4} \]
It has to be shown that the right-hand side is equal to $1$ for almost all $\omega$. In order to do so, define
\[ C := \left\{\omega : \lim_{n\to\infty}\frac{\Gamma_n(\omega)}{n} = 1\right\}, \]


so that, by the law of large numbers, $P(C) = 1$. Hence, it suffices to prove the claim for $\omega\in C$. For any such $\omega$ there exist constants $C_1, C_2 > 0$ such that $C_1 i \le \Gamma_i(\omega) \le C_2 i$ for all $i$. Now, to prove that the right-hand side of (2.4) is equal to $1$ for all $\omega\in C$, Kolmogorov's three-series criterion will be used. To ease notation, set $(b_i)_{i\in\mathbb{N}} = (\Gamma_i(\omega))_{i\in\mathbb{N}}$. Then, for any $\lambda > 0$,
\[ \sum_{i=1}^{\infty}P\left(b_i^{-1/\alpha}|Y_i| > \lambda\right) = \sum_{i=1}^{\infty}P\left(|Y_i|^\alpha > \lambda^\alpha b_i\right) \le \sum_{i=1}^{\infty}P\left(|Y_1\lambda^{-1}|^\alpha > C_1 i\right) < \infty, \]
since $E|Y_1|^\alpha < \infty$. For the second sum it is observed that
\[
E\left(b_i^{-1/\alpha}Y_i\mathbf{1}_{(|b_i^{-1/\alpha}Y_i|\le1)}\right)
= E\left(b_i^{-1/\alpha}Y_i\right) - E\left(b_i^{-1/\alpha}Y_i\mathbf{1}_{(Y_i<-b_i^{1/\alpha})}\right) - E\left(b_i^{-1/\alpha}Y_i\mathbf{1}_{(Y_i>b_i^{1/\alpha})}\right)
= -b_i^{-1/\alpha}\int_{-\infty}^{-b_i^{1/\alpha}}y\,P_{Y_1}(dy) - b_i^{-1/\alpha}\int_{b_i^{1/\alpha}}^{\infty}y\,P_{Y_1}(dy), \tag{2.5}
\]
since $EY_i = 0$. Hence, it suffices to show convergence of the sums
\[ \sum_{i=1}^{\infty}b_i^{-1/\alpha}\int_{b_i^{1/\alpha}}^{\infty}y\,P_{Y_1}(dy) \qquad\text{and}\qquad \sum_{i=1}^{\infty}b_i^{-1/\alpha}\left|\int_{-\infty}^{-b_i^{1/\alpha}}y\,P_{Y_1}(dy)\right|. \]
For the first sum, using monotone convergence, it is observed that
\[
\sum_{i=1}^{\infty}b_i^{-1/\alpha}\int_{b_i^{1/\alpha}}^{\infty}y\,P_{Y_1}(dy)
\le \sum_{i=1}^{\infty}\int_0^{\infty}(C_1 i)^{-1/\alpha}y\,\mathbf{1}_{((C_1 i)^{1/\alpha},\infty)}(y)\,P_{Y_1}(dy)
\le C_1^{-1/\alpha}\int_0^{\infty}y\int_0^{y^\alpha C_1^{-1}}x^{-1/\alpha}\,dx\,P_{Y_1}(dy)
= \frac{\alpha}{\alpha-1}C_1^{-1}\int_0^{\infty}y^\alpha\,P_{Y_1}(dy),
\]
which is finite as $E|Y_1|^\alpha < \infty$; again, the order of integration could be exchanged due to positivity. A similar argument gives the result for the second part. This proves convergence of the second series. Lastly,
\[
\sum_{i=1}^{\infty}E\left(\left(b_i^{-1/\alpha}Y_i\right)^2\mathbf{1}_{(b_i^{-1/\alpha}|Y_i|\le1)}\right)
= \sum_{i=1}^{\infty}b_i^{-2/\alpha}\int_{-b_i^{1/\alpha}}^{b_i^{1/\alpha}}y^2\,P_{Y_1}(dy)
\le \sum_{i=1}^{\infty}\int_{-\infty}^{\infty}(C_1 i)^{-2/\alpha}y^2\,\mathbf{1}_{(-(C_2 i)^{1/\alpha},(C_2 i)^{1/\alpha})}(y)\,P_{Y_1}(dy)
\]
\[
\le C_1^{-2/\alpha}\int_0^{\infty}\int_{-\infty}^{\infty}x^{-2/\alpha}y^2\,\mathbf{1}_{(-(C_2 x)^{1/\alpha},(C_2 x)^{1/\alpha})}(y)\,P_{Y_1}(dy)\,dx
= C_1^{-2/\alpha}\int_{-\infty}^{\infty}y^2\int_{|y|^\alpha C_2^{-1}}^{\infty}x^{-2/\alpha}\,dx\,P_{Y_1}(dy)
= C'\int_{-\infty}^{\infty}|y|^\alpha\,P_{Y_1}(dy) < \infty.
\]
This gives $P\left(\sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}Y_i \text{ converges}\,\middle|\,(\Gamma_i)_{i\in\mathbb{N}}\right) = 1$ almost surely. Hence,
\[ P\left(\sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}Y_i \text{ converges}\right) = E\left(P\left(\sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}Y_i \text{ converges}\,\middle|\,(\Gamma_i)_{i\in\mathbb{N}}\right)\right) = E\,1 = 1. \]

(c) $\alpha = 1$. In the case $\alpha = 1$ one does not have $EW_1 C_i^{(\alpha)} = k_i^{(\alpha)}$; instead, $EW_1 C_i^{(\alpha)}$ is the correcting term in (2.3). To prove convergence of (2.3), one proceeds as in the case $1 < \alpha < 2$ and only slightly changes the arguments for proving that
\[ \sum_{i=1}^{\infty}b_i^{-1/\alpha}\int_{b_i^{1/\alpha}}^{\infty}y\,P_{Y_1}(dy) \qquad\text{and}\qquad \sum_{i=1}^{\infty}b_i^{-1/\alpha}\left|\int_{-\infty}^{-b_i^{1/\alpha}}y\,P_{Y_1}(dy)\right| \]
are finite. To prove convergence of (2.2), one then shows that
\[ \sum_{i=1}^{\infty}\left(EW_1 C_i^{(\alpha)} - k_i^{(\alpha)}\right) \]
converges.

It remains to prove that the distributions of these limits are α-stable. Here the order statistics property of Poisson processes shall be used. Let therefore $(U_i)_{i\in\mathbb{N}}$ be a sequence of i.i.d. random variables, independent of $(W_i)_{i\in\mathbb{N}}$ and each uniformly distributed on $(0,1)$. Set $Z_i = U_i^{-1/\alpha}W_i$. Then,
\[
P(Z_i > \lambda) = P\left(U_i^{-1/\alpha}W_i > \lambda\right) = P\left(U_i < \lambda^{-\alpha}(W_i)_+^\alpha\right)
= \int_0^{\infty}P\left(U_i < \lambda^{-\alpha}w^\alpha\right)P_{(W_i)_+}(dw)
\]
\[
= \int_0^{\lambda}\lambda^{-\alpha}w^\alpha\,P_{(W_i)_+}(dw) + \int_{\lambda}^{\infty}P_{(W_i)_+}(dw)
= \lambda^{-\alpha}\int_0^{\lambda}w^\alpha\,P_{(W_i)_+}(dw) + P((W_i)_+ > \lambda).
\]
Hence,
\[ \lim_{\lambda\to\infty}\lambda^\alpha P(Z_i > \lambda) = E(W_i)_+^\alpha + \lim_{\lambda\to\infty}\lambda^\alpha P((W_i)_+ > \lambda) = E(W_i)_+^\alpha, \]
and analogously for $\lim_{\lambda\to\infty}\lambda^\alpha P(Z_i < -\lambda)$. Hence, it follows from [14], page 581, that $Z_i$ belongs to the normal domain of attraction of a stable random variable. Furthermore, by [14], Theorem XVII.5.3, in the case $0 < \alpha < 1$ one has
\[ \frac{1}{n^{1/\alpha}}\sum_{i=1}^{n}Z_i \xrightarrow{d} X_1, \]
where $X_1 \sim S_\alpha(\sigma,\beta,0)$; in the case $\alpha = 1$,
\[ \frac{1}{n}\sum_{i=1}^{n}Z_i - nE\sin\frac{Z_1}{n} \xrightarrow{d} X_2, \]
where $X_2 \sim S_\alpha(\sigma,\beta,0)$; and lastly, for $1 < \alpha < 2$,
\[ \frac{1}{n^{1/\alpha}}\sum_{i=1}^{n}\left(Z_i - EZ_1\right) \xrightarrow{d} X_3, \]
where $X_3 \sim S_\alpha(\sigma,\beta,0)$, and $\sigma,\beta$ are of the form stated in the theorem. Hence, it needs to be shown that
\[
\sum_{i=1}^{\infty}\left(\Gamma_i^{-1/\alpha}W_i - k_i^{(\alpha)}\right) \stackrel{d}{=}
\begin{cases}
X_1 & \text{if } 0 < \alpha < 1, \\
X_2 & \text{if } \alpha = 1, \\
X_3 & \text{if } 1 < \alpha < 2.
\end{cases}
\]


This is done in the following for $0 < \alpha < 1$ and $1 < \alpha < 2$; if $\alpha = 1$, the arguments are similar. Let $N(t)$ denote the Poisson process of which the $\Gamma_i$ are the points, i.e.
\[ N(t) = \#\{i : \Gamma_i \le t\}. \]
One has $(N(n)/n)^{1/\alpha} \to 1$ a.s., hence
\[ \left(\frac{N(n)}{n}\right)^{1/\alpha}\sum_{i=1}^{N(n)}\Gamma_i^{-1/\alpha}W_i \xrightarrow{a.s.} \sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}W_i. \]
Thus, using the order statistics property, for the case $0 < \alpha < 1$:
\[
\lim_{n\to\infty}P\left(\left(\frac{N(n)}{n}\right)^{1/\alpha}\sum_{i=1}^{N(n)}\Gamma_i^{-1/\alpha}W_i \le y\right)
= \lim_{n\to\infty}E\left(P\left(\left(\frac{N(n)}{n}\right)^{1/\alpha}\sum_{i=1}^{N(n)}\Gamma_i^{-1/\alpha}W_i \le y\,\middle|\,N(n)\right)\right)
\]
\[
= \lim_{n\to\infty}E\left(P\left(\frac{1}{n^{1/\alpha}}\sum_{i=1}^{N(n)}\left(\frac{\Gamma_i}{N(n)}\right)^{-1/\alpha}W_i \le y\,\middle|\,N(n)\right)\right)
= \lim_{n\to\infty}P\left(\frac{1}{n^{1/\alpha}}\sum_{i=1}^{N(n)}U_{(i)}^{-1/\alpha}W_i \le y\right)
\]
\[
= \lim_{n\to\infty}P\left(\frac{1}{n^{1/\alpha}}\sum_{i=1}^{N(n)}U_i^{-1/\alpha}W_i \le y\right)
= P(X_1 \le y).
\]
For the case $1 < \alpha < 2$:
\[
P\left(\frac{1}{n^{1/\alpha}}\sum_{i=1}^{N(n)}\left(Z_i - EZ_1\right) \le y\right)
= P\left(\frac{1}{n^{1/\alpha}}\sum_{i=1}^{N(n)}U_i^{-1/\alpha}W_i - \frac{1}{n^{1/\alpha}}\frac{\alpha}{\alpha-1}N(n)\,EW_1 \le y\right)
\]
\[
= E\left(P\left(\frac{1}{n^{1/\alpha}}\sum_{i=1}^{N(n)}U_{(i)}^{-1/\alpha}W_i - \frac{N(n)}{n^{1/\alpha}}\frac{\alpha}{\alpha-1}EW_1 \le y\,\middle|\,N(n)\right)\right)
= P\left(\left(\frac{N(n)}{n}\right)^{1/\alpha}\sum_{i=1}^{N(n)}\Gamma_i^{-1/\alpha}W_i - \frac{\alpha}{\alpha-1}\frac{N(n)}{n^{1/\alpha}}EW_1 \le y\right),
\]
and
\[
\left(\frac{N(n)}{n}\right)^{1/\alpha}\sum_{i=1}^{N(n)}\Gamma_i^{-1/\alpha}W_i - \frac{\alpha}{\alpha-1}\frac{N(n)}{n^{1/\alpha}}EW_1
\]
\[
= \left(\frac{N(n)}{n}\right)^{1/\alpha}\sum_{i=1}^{N(n)}\left(\Gamma_i^{-1/\alpha}W_i - C_i^{(\alpha)}EW_1\right)
+ EW_1\left(\frac{N(n)}{n}\right)^{1/\alpha}\frac{\alpha}{\alpha-1}\sum_{i=1}^{N(n)}\left(i^{(\alpha-1)/\alpha} - (i-1)^{(\alpha-1)/\alpha}\right) - \frac{\alpha}{\alpha-1}\frac{N(n)}{n^{1/\alpha}}EW_1
\]
\[
= \left(\frac{N(n)}{n}\right)^{1/\alpha}\sum_{i=1}^{N(n)}\left(\Gamma_i^{-1/\alpha}W_i - C_i^{(\alpha)}EW_1\right)
+ EW_1\frac{\alpha}{\alpha-1}\left(\frac{N(n)}{n}\right)^{1/\alpha}N(n)^{(\alpha-1)/\alpha} - EW_1\frac{\alpha}{\alpha-1}\frac{N(n)}{n^{1/\alpha}}
\]
\[
= \left(\frac{N(n)}{n}\right)^{1/\alpha}\sum_{i=1}^{N(n)}\left(\Gamma_i^{-1/\alpha}W_i - C_i^{(\alpha)}EW_1\right)
\longrightarrow \sum_{i=1}^{\infty}\left(\Gamma_i^{-1/\alpha}W_i - k_i^{(\alpha)}\right) \quad\text{a.s.}
\]


This finishes the proof.

As mentioned above, this theorem is not really of use for practical purposes. One could use it to simulate α-stable random variables; however, this is not advisable. While the Poisson process with points $(\Gamma_i^{-1/\alpha}, W_i)$ is straightforward to simulate, the convergence of the sum is so slow that an approximation by a finite sum is either imprecise or requires vast computational effort. Better simulation methods are given, for example, in [23]. This theme will occur again in the case of max-stable objects.

2.3 Representation of Sum-Stable Processes

The series representation lays the foundation for the so-called stable stochastic integral, which in turn can be used to derive representations for α-stable processes. These concepts will be made precise in the following.

First, the notion of an α-stable random measure is introduced. Consider therefore a measure space $(E,\mathcal{E},m)$ and the subset $\mathcal{E}_0$ of $\mathcal{E}$ consisting of the sets of finite measure. An α-stable random measure $M$ is then a set function from $\mathcal{E}_0$ into the set of random variables on a probability space $(\Omega,\mathcal{F},P)$ such that:

• $M(A)$ is α-stable with parameters depending on $A$ and on a so-called skewness intensity function $\beta$;

• if $A_1,\dots,A_n$ are disjoint, then $M(A_1),\dots,M(A_n)$ are mutually independent;

• if $A_1,\dots,A_n$ are disjoint and $\bigcup A_i \in \mathcal{E}_0$, then $M\left(\bigcup A_i\right) = \sum M(A_i)$.

Theorem 2.2 now gives, for the case $\alpha\in(0,1)$, a series representation of the random variable $M(A)$ in terms of a PRM with points $(\Gamma_i^{-1/\alpha},W_i)_{i\in\mathbb{N}}$. Moreover, it is shown in [23] that the following holds: assume $m$ is finite on $E$ and consider an i.i.d. sequence $(V_i,\gamma_i)_{i\in\mathbb{N}}$, independent of $(\Gamma_i^{-1/\alpha})_{i\in\mathbb{N}}$, such that $V_1$ is distributed according to $m(E)^{-1}m$ and
\[ P(\gamma_1 = 1\mid V_1) = 1 - P(\gamma_1 = -1\mid V_1) = \frac{1+\beta(V_1)}{2}, \]
where $\beta$ denotes the skewness intensity function. Then,
\[ \{M(A) : A\in\mathcal{E}_0\} \stackrel{d}{=} \left\{c_\alpha\sum_{i=1}^{\infty}\gamma_i\Gamma_i^{-1/\alpha}\mathbf{1}_{(V_i\in A)} : A\in\mathcal{E}_0\right\}, \]
in the sense that
\[ (M(A_1),\dots,M(A_n)) \stackrel{d}{=} \left(c_\alpha\sum_{i=1}^{\infty}\gamma_i\Gamma_i^{-1/\alpha}\mathbf{1}_{(V_i\in A_j)} : 1\le j\le n\right) \]
for any finite subset $\{A_1,\dots,A_n\}\subset\mathcal{E}_0$. Here, stable random measures are thought of as random fields with index set $\mathcal{E}_0$, and $c_\alpha$ is a normalizing constant depending on $\alpha$ and $m(E)$. Using this, in some sense discrete, structure of such a stable random measure $M$, one can then define stable stochastic integrals of functions in $L^\alpha(E)$ with respect to $M$ through
\[ \int_E^{+}f(x)\,M(dx) := c_\alpha\sum_{i=1}^{\infty}\gamma_i\Gamma_i^{-1/\alpha}f(V_i), \]

according to [23], Theorem 3.10.1. A similar definition is possible in the case $\alpha\in[1,2]$; however, as in Theorem 2.2, some correcting terms are necessary. It can be shown that the stable stochastic integrals are α-stable. Moreover, a vector of stable stochastic integrals is α-stable in the following sense: a random vector $X$ is called α-stable if there exist an $\alpha\in(0,2]$ and, for any $n\ge2$, a vector $D_n$ such that for i.i.d. copies $(X^{(i)})_{i\in\mathbb{N}}$ of $X$,
\[ \frac{X^{(1)}+\cdots+X^{(n)} - D_n}{n^{1/\alpha}} \stackrel{d}{=} X. \]
The distribution of $X$ is then called α-stable. The concept of α-stability extends further to stochastic processes. A real-valued stochastic process $(Y(t))_{t\in T}$ is α-stable if for any finite subset $\{t_1,\dots,t_n\}\subset T$ the vector
\[ (Y(t_1),\dots,Y(t_n)) \]
is α-stable, i.e. if its finite-dimensional distributions are α-stable. Now, the stable stochastic integrals allow a representation of stable processes. Let $(Y(t))_{t\in T}$ be an α-stable stochastic process satisfying the following condition: there is a countable subset $T_0\subset T$ such that for any $t\in T$, $Y(t)$ is the limit in probability of sums of the type
\[ \sum_{i=1}^{n}a_{i,n}Y(t_{i,n}), \qquad a_{i,n}\in\mathbb{R},\ t_{i,n}\in T_0,\ 1\le i\le n. \]
Then, if $\alpha\in(0,2)$, $\alpha\ne1$, there exist a measure space $(E,\mathcal{E},m)$, an α-stable random measure $M$ on it and a family of functions $(f_t)_{t\in T}$ such that
\[ (Y(t))_{t\in T} \stackrel{d}{=} \left(\int_E^{+}f_t(x)\,M(dx)\right)_{t\in T}, \tag{2.6} \]
again in the sense that the finite-dimensional distributions coincide; see [23], Theorem 13.2.1.
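As an illustrative instance of (2.6) (a sketch, not quoted from [23]): consider, for $\alpha\in(0,1)$, an α-stable random measure $M$ on $E = [0,1]$ with control measure $m$ the Lebesgue measure and constant skewness intensity $\beta\equiv1$, so that $\gamma_i\equiv1$ and the $V_i$ are i.i.d. uniform on $(0,1)$. Choosing $f_t = \mathbf{1}_{[0,t]}$ gives a totally skewed α-stable Lévy motion,
\[ Y(t) = \int_{[0,1]}^{+}\mathbf{1}_{[0,t]}(x)\,M(dx), \qquad (Y(t))_{t\in[0,1]} \stackrel{d}{=} \left(c_\alpha\sum_{i=1}^{\infty}\Gamma_i^{-1/\alpha}\mathbf{1}_{(V_i\le t)}\right)_{t\in[0,1]}, \]
whose increments over disjoint intervals are independent and α-stable by the defining properties of $M$.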


3 Representation of Max-Stable Random Variables & Processes

Similar to the case of sum-stable distributions, the relevance of max-stable distributions is implied by a central limit type theorem, the Fisher-Tippett theorem. It states that the appropriately scaled and centered maxima of i.i.d. random variables can weakly converge only to one of three different types of distributions, provided the limiting distribution is non-degenerate. These turn out to be the max-stable distributions.

Maxima of i.i.d. observations are relevant in a wide variety of applications. For example, maxima of seawater levels in Holland are collected in order to assess how high dikes have to be built at a certain point along the coastline to ensure reasonably high safety against flooding; cf. [7]. In the financial industry one might be interested in the distribution of the maximal value of an asset in a portfolio or of the maximal loss. Naturally, the interest in maxima extends to multivariate observations. Here one is most often interested in the maxima of the components, e.g. seawater levels or extreme snowfall at different locations; cf. [4]. However, the multivariate case is much more involved than the one-dimensional one, as, for example, dependencies between the components come into play: if a flood at one point is observed, then the probability of observing high water levels at other locations might be higher. In this section, we give the basic definitions and results for max-stable random variables, vectors and processes, and prove a representation theorem for max-stable processes, upon which the simulation methods of Section 4 are based. For more detailed information on univariate and multivariate max-stable objects we refer to [21] or [8].

3.1 Max-Stable Random Variables

3.1.1 Preliminaries

Let X be a random variable and let (Xn)n∈N be a sequence of i.i.d. copies of X. Set Mn =max(X1, . . . , Xn). Assume, analogously to the α-stable case, there exist a real valued sequence(an)n∈N and a positive sequence (bn)n∈N of normalizing constants such that

Mn − anbn

d−→ Y, (3.1)

where Y has a non-degenerate distribution. Again, there are two questions that arise. Firstly,what are the possible shapes of the distributions of Y and secondly what conditions on X haveto be made and what are the possible choices of (an)n∈N and (bn)n∈N so that the normalizedmaxima converge weakly. The first one is answered by the important Fisher-Tippett Theorem.

Theorem 3.1 (Fisher-Tippett Theorem). Let (Xn)n≥1 be a sequence of i.i.d. random variables.Assume there exist a sequence (an)n∈N, a positive sequence (bn)n∈N and a random variable Ysuch that (3.1) holds. Then, the distribution function of Y belongs to the type of one of thefollowing three distribution functions:

1. Φα(x) =

0, if x ≤ 0

exp (−x−α) elseα > 0, i.e. Frechet distribution;

2. Ψα(x) =

exp (−(−x)α) , if x ≤ 0

1 elseα > 0, i.e. Weibull distribution;

3. Λα(x) = exp(−e−(α−1x)

), i.e. Gumbel distribution.

For the proof see for example [21] Proposition 0.3. These distributions are referred to asextreme value distributions. Furthermore, these distributions are max-stable in the followingsense:

23

Page 32: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Definition 3.2. Let X be a random variable and (Xn)n∈N be a sequence of i.i.d. copies of X.Then, X is called max-stable if there exist a positive sequence (bn)n∈N and a real valued sequence(an)n∈N such that

max(X1, . . . , Xn)− anbn

d= X.

Moreover, they are the only max-stable non-degenerate distributions. To see why this isthe case, let Y be a random variable such that there exist a sequence (Xn)n∈N of i.i.d. randomvariables and sequences (an)n∈N, (bn)n∈N as usual such that (3.1) holds. Let F denote thedistribution function of Y . Then, since the possible distribution functions are continuous, forany fixed k ∈ N,

limn→∞

(P (Mnk ≤ (x+ cn)an)) = limn→∞

(P (X1 ≤ (x+ cn)an)n)k = limn→∞

(P (a−1n Mn − cn ≤ x))k

= (F (x))k.

Furthermore,

limn→∞

P (aknMkn − ckn ≤ x)→ F (x).

Now, the convergence to types theorem, see e.g. [21], gives that for any fixed k, there existconstants ak, bk such that

limn→∞

anank

= ak and limn→∞

bnbnk

= bk

and

max(Y1, . . . , Yk)d= akY + bk, (Yi)i∈N are i.i.d copies of Y,

since max(Y1, . . . , Yk) is distributed according to F k. It follows that the non-degenerate max-stable distributions are exactly those which are the weak limits of normalized partial maxima.Straight forward calculation reveal that the following choices for the normalizing and centralizingsequences work:

Φα :max(X1, . . . , Xn)

n−1/α

d= X1

Ψα :max(X1, . . . , Xn)

n1/α

d= X1

Λα : max(X1, . . . , Xn)− log nd= X1.

From now on, the special case of the Frechet distribution will be important. However, sinceone can easily switch between the three extreme value distributions via monotone transforma-tions, similar results for the other distribution follow.

In what follows, a particular Poisson random measure will be encountered frequently andis therefore described here. If Γi denotes the sum of i i.i.d. unit rate exponential random

variables, then it was shown in (1.2) that (Γ−1/αi )i∈N are the points of a PRM on (0,∞] with

mean measure µα((a, b]) = a−α − b−α. If, furthermore, (Vi)i∈N are i.i.d. random variables onsome Euclidean space E with distribution ν, then by Proposition 1.12,

∞∑i=1

δ(Γ−1/αi ,Vi)

is a Poisson random measure on (0,∞]× E with mean measure µα ⊗ ν.

24

Page 33: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

3.1.2 A Representation for Max-Stable Random Variables

A Frechet random variable has the following representation.

Proposition 3.3. Let Γ1 < Γ2 < · · · be the points of a homogeneous Poisson process withunit rate. Assume, furthermore, that (Vi)i∈N is a sequence of positive i.i.d. random variables,independent of the Poisson process and EV α

1 <∞ for some α > 0. Then,

supi≥1

Γ−1/αi Vi

has ΦEV α1α distribution.

Proof. By Theorem 3.2. of [2], there exists a map f : [0, 1] → [0,∞] such that f(Ui)d= Vi,

where (Ui)i∈N is an i.i.d. sequence with U1 ∼ Unif(0, 1). Hence,

P

(supi≥1

Γ−1/αi Vi ≤ x

)= P

( ∞∑i=1

δ(Γ−1/αi ,Ui)

((y, u) : yf(u) > x) = 0

)

= exp

(−∫ 1

0µα

(x

f(u),∞)

du

)= exp

(−∫ 1

0

(f(u)

x

)αdu

)= e(−x

−αEV α1 ).

In particular, max-stable distributions, analogously to the α-stable ones, have a representa-tion involving the points of a homogeneous Poisson process and some i.i.d. sequence. Note thedifference in length and complexity of the proofs of the respective representation theorems.

3.2 Max-Stable Random Vectors

The concepts of extreme value distributions and max-stable distributions extend to the multi-dimensional case.

3.2.1 Preliminaries

A d-dimensional distribution function G is called an extreme value distribution function if thereexists a sequence of i.i.d. random vectors (Xn)n∈N, real-valued sequences (ain)n∈N and strictlypositive sequences (bin)n∈N, 1 ≤ i ≤ d such that

P

(M1n − a1

n

b1n≤ x1, . . . ,

Mdn − adnbdn

≤ xd)→ G(x1, . . . , xd).

Analogously, G is called max-stable if for 1 ≤ i ≤ d there exist functions αi and βi > 0 suchthat

G(β1(t)x1 + α1(t), . . . , βd(t)xd + αd(t)) = Gt(x1, . . . , xd).

Analogously to the one-dimensional case, a convergence to types argument gives the followingresult.

Proposition 3.4. The class of multivariate non-degenerate extreme value distributions is ex-actly the one of multivariate max-stable distributions.

25

Page 34: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Similar to the one-dimensional case, it is legitimate and useful to consider the special caseof max-stable random vectors whose marginals all have Frechet distributions. It is legitimatebecause of the following fact proven in [21]. Assume from now on that G is a d-dimensionaldistribution functions. Define φi(x) := (1/− lnGi)

←(x) and

G(x1, . . . , xd) := G(φ1(x1), . . . , φn(xd)).

Then, G has unit Frechet marginals and G is a max-stable distribution function if and only ifG is. Hence any max-stable distribution function can be transformed into some G with Frechetmarginals. A useful tool to deal with max-stable random vectors is the so-called exponent mea-sure introduced next. It will reappear later when simulation techniques are discussed.

In what follows, again punctured spaces of the from [0,∞)d\0 are encountered, similar tothe derivation of (1.2). To proceed they have to be equipped with a topology making the sets[x,∞] compact, which is possible; cf [21]. With respect to this topology compact sets have tobe bounded away from 0.

3.2.2 The Exponent Measure

First, some notation is introduced. Write x = (x1, . . . , xn) for some vector and [x,z] for the set

(y1, . . . , yd) ∈ Rd : xi ≤ yi ≤ zi, 1 ≤ i ≤ d.

Max-stable distribution functions have the useful property that there exists a point l ∈[−∞,∞)d and a Radon measure µ on [l,∞)\l such that

G(x1, . . . , xd) =

exp (−µ([−∞,x]c)) if xi ≥ li, 1 ≤ i ≤ d0 else

; (3.2)

cf. [21]. The measure µ is called the exponent measure. In the Frechet case, the marginals areconcentrated on [0,∞), hence it is appropriate to let l = 0. By definition, if G is max-stablewith Frechet marginals, then

Gt(t−1/αx1, . . . , t−1/αxd) = G(x1, . . . , xd),

hence,

exp (−µ([0,x]c)) = Gt−α

(tx) = (exp (−µ ([0, tx]c)))t−α

= exp(−t−αµ ([0, tx]c)

).

Since the paving [0,x]c : x ∈ [0,∞)d\0 is intersection stable and generates the Borel-σ-algebra on [0,∞)d\0, it follows that µ is homogeneous in the sense that

µ(B) = t−αµ(tB),

where B is Borel and t ∈ R+ and tB = tx : x ∈ B. This property is quite useful, in particularto show that µ is in a certain sense a product measure. Let therefore ‖·‖ be a norm on Rd, andlet Sd−1

+ = Sd−1 ∩ [0,∞)d, where Sd−1 denotes the unit sphere on Rd with respect to this norm.

The unit sphere is clearly bounded away from 0 and is moreover compact. Equip Sd−1+ with the

Borel-σ-algebra and define the measure S on Sd−1+ through

S(A) = µ (x : ‖x‖ > 1, x/‖x‖ ∈ A) .

Since µ is Radon and the set x : ‖x‖ ≥ 1 is compact, S is a finite measure on Sd−1+ . S is

called the spectral measure of G. Consider now the map T : [0,∞)d → Sd−1+ × (0,∞], defined

26

Page 35: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

through T (x) = (x/‖x‖, ‖x‖), which is a kind of polar transformation. Then, for A ⊂ Sd−1+ and

r1 > 1 one has

µ(T−1(A× (r1,∞])) = µ (x : x/‖x‖ ∈ A, ‖x‖ > r1)= µ (r1 x : x/‖x‖ ∈ A, ‖x‖ > 1)= r−α1 µ (x : x/‖x‖ ∈ A, ‖x‖ > 1) = µα((r1,∞])S(A). (3.3)

Hence, µ T−1 = µα ⊗ S and indeed, the image measure of µ under T is a product measure.Furthermore, since

T ([0,x]c) = (θ, r) : rθi ≥ xi for some i =

(θ, r) : r ≥ min

xiθi

,

one has

µ([0,x]c) = µα ⊗ S(T [0,x]c) =

∫T ([0,x])

1 dµα ⊗ S =

∫Sd−1

+

µα

(r : r ≥ min

xiθi

)S(dθ)

=

∫Sd−1

+

(min

xiθi

)−αS(dθ) =

∫Sd−1

+

(max

θixi

)−αS(dθ). (3.4)

Lastly, the following fact is useful: let G be a distribution function of a max-stable randomvector with α-Frechet marginals with exponent measure µ and spectral measure S. Then,∫Sd−1

+

θ−αi S(dθ) = µ([0, (∞, . . . , 1, . . . ,∞)]c) = ln(G(∞, . . . , 1, . . . ,∞)) = − ln(P (Xi ≤ 1)) = 1.

(3.5)

3.3 Representation of Max-Stable Random Fields.

One might also extend the concept of max-stability to infinite-dimensional objects. A familyof random variables over an index set T is called random field if T is allowed to be an multi-dimensional Euclidean space. One possible way to define a max-stable random field is given inthe following definition.

Definition 3.5. Let (Y (t))t∈T be d-dimensional stochastic process or random field. Let fur-thermore

((Y (t))it∈T

)i∈N be a sequence of i.i.d. copies of (Y (t))t∈T . Then, (Y (t))t∈T is called

max-stable if

n−1/α

(maxi≤n

Y (t)i)t∈T

d= (Y (t))t∈T ,

in the sense that the finite-dimensional distributions coincide.

In other words: a random field is max-stable if its finite-dimensional distributions are max-stable with Frechet distributed marginals. One could also have chosen normalizing constantsleading to a Gumbel distributed marginals as an example. However, here again, a monotonetransformation of the marginals gives max-stability in the Frechet sense. Also for max-stableprocesses it is desirable to obtain a representation involving points of a Poisson process. Thefollowing example from [15] provides a possible approach. Let Γ1 < Γ2 < · · · denote the pointsof a homogeneous Poisson process with unit rate. Let, furthermore, (Ui)i∈N be a sequence ofi.i.d. uniform random variables on (0, 1). Proposition 1.12 gives that

Mα =

∞∑i=1

δ(Γ−1/αi ,Ui)

27

Page 36: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

is a Poisson random measure on (0,∞] × [0, 1] with mean measure µα ⊗ λ, where λ denotesLebesgue measure, and µα((a, b]) = a−α − b−α. Lastly, let ft be a family of functions such thatEft(U1)α <∞. Then, the process defined through

Y (t) = supi≥1

Γ−1/αi ft(Ui) (3.6)

is max-stable. To prove this, it will be shown that the finite-dimensional distributions are thesame. Let, therefore, n ∈ N and t1, . . . , tn ∈ T be arbitrary. It has to be proved that

P (Y (t1) ≤ x1, . . . , Y (tn) ≤ xn) = P k(Y (t1) ≤ x1k

1/α, . . . , Y (tn) ≤ xnk1/α).

Now,

P

(supi≥1

Γ−1/αi ft1(Ui) ≤ x1, . . . , sup

i≥1Γ−1/αi ftn(Ui) ≤ xn

)= P

(maxj≤n

supi≥1

Γ−1/αi ftj (Ui)x

−1j ≤ 1

)= P

(supi≥1

Γ−1/αi max

j≤nftj (Ui)x

−1j ≤ 1

)= exp

(−∫ 1

0

(maxj≤n

ftj (u)x−1j

)αdu

)(3.7)

and

P k(Y (t1) ≤ x1k

1/α, . . . , Y (tn) ≤ xnk1/α)

= P k(

supi≥1

Γ−1/αi max

j≤nftj (Ui)(k

1/αxj)−1 ≤ 1

)= exp

(−1

k

∫ 1

0

(maxj≤n

ftj (u)x−1j

)αdu

)k= exp

(−∫ 1

0

(maxj≤n

ftj (u)x−1j

)αdu

),

proving equality of the finite-dimensional distributions. De Haan proved in [15] that undercertain conditions also the converse is true, that if (Y (t))t∈R is a max-stable process, thenit possesses a representation as in (3.6). Here this result will be stated where the index set isallowed to be Rd, this does not change the proof significantly. The representation will be derivedthrough a short sequence of theorems. In the following a stochastically continuous random fieldwill be a random field (Y (t))t∈T satisfying

t→ t0 ⇒ P (|Y (t)− Y (t0)| > ε)→ 0 ∀ε > 0.

Furthermore, define the sets Lα([0, 1]) and Lα+([0, 1]) through

Lα([0, 1]) =

f :

∫[0,1]|f |α dλ <∞,

and Lα+([0, 1]) =

f :

∫[0,1]

fα dλ <∞, f ≥ 0

,

where λ denotes the Lebesgue measure on [0, 1].

Theorem 3.6. Let (Y (t))t∈N be a max-stable process. Then, there exists a family of functions(ft)t∈N ⊂ Lα+([0, 1]) such that

(Y (t))t∈Td=

(supi≥1

Γ−1/αi ft(Ui)

)t∈T

,

where (Γ−1/αi , Ui) are the points of a Poisson process on (0,∞]×[0, 1] with mean measure µα⊗λ,

where λ denotes the Lebesgue measure on [0, 1]. Furthermore∫ 1

0ft(s)

α ds = 1, for all t.

28

Page 37: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

The idea of the proof is related to the proof of Proposition 3.3. Before Theorem 3.6 can beproved, the following important theorem is necessary which is also from [15].

Theorem 3.7. Let (Y (t))t∈N be a max-stable stochastic process. Then, there exists a finitemeasure S on R∞+ such that for any n ∈ N and y1, . . . , yn the following holds:

P (Y (t1) ≤ y1, . . . , Y (tn) ≤ yn) = exp

(−∫Rn+

maxi≤n

(xiyi

)αSn(dx)

),

where Sn denotes the restriction of S to Rn+.

Proof of Theorem 3.6. Let S be the measure from Theorem 3.7. Set c = S(R∞+ ). Then,

exp

(−∫Rn+

maxi≤n

(xiyi

)α 1

cSn(dx)

)= P (Y (t1) ≤ cαy1, . . . , Y (tn) ≤ cαyn)

= P (c−αY (t1) ≤ y1, . . . , c−αY (tn) ≤ yn).

Hence, for any process (Y (t))t∈N there exists a number d−α such that the associated measureS to (d−αY (t))t∈N is a probability measure. It follows that it suffices to show the theorem forprocesses where the measure S is a probability measure. Since R∞+ is separable and complete,by Theorem 3.2. from [2] it follows that there exists a function f : [0, 1] → R∞+ such that, if Uis uniform on [0, 1], the distribution of f(U) is the same as S. Write

f(U) = (f1(U), f2(U), . . . ).

The claim is that these functions are the ones necessary for the theorem. First of all,

P

(supi≥1

Γ−1/αi ft1(Ui) ≤ y1, . . . , sup

i≥1Γ−1/αi ftn(Ui) ≤ yn

)= exp

(−∫ 1

0

(maxj≤n

ftj (u)y−1j

)αdu

)as in (3.7). And now, since the restrictions coincide as well

exp

(−∫ 1

0

(maxj≤n

ftj (u)x−1j

)αdu

)= exp

(−∫

Ω

(maxj≤n

ftj (U)y−1j

)αdP

)= exp

(−∫Rn+

(maxj≤n

xjyj

)αSn(dx)

),

which together with Theorem 3.7 proves the result. That∫ 1

0 ft(s) ds = 1 follows from (3.7) andthe fact that the marginals have a Frechet distribution.

Before the general case can be proved, some notation is introduced and two Lemmas arerequired.

Definition 3.8. Let f ∈ Lα+([0, 1]). Furthermore, let (Γ−1/αi , Ui) be the points of a Poisson

process on (0,∞]× [0, 1] with mean measure µα⊗λ, where λ denote Lebesgue measure on [0, 1].Then, ∫ ∨

[0,1]f dMα = sup

i≥1Γ−1/αi f(Ui).

Here, Mα indicates the PRM∑∞

i=1 δ(Γ−1/αi ,Ui)

.

29

Page 38: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Clearly, the integral is not an integral in the usual sense. However, it shares properties withusual integrals. It follows immediately from the definition that

∫ ∨af dMα = a

∫ ∨f dMα for

a ≥ 0. Furthermore, for a Borel set A ⊂ [0, 1] define∫ ∨Af dMα = supΓ−1/α

i f(Ui)|i ≥ 1, Ui ∈ A.

The following properties will be useful:

1. assume f, g ∈ Lα+([0, 1]) and A ⊂ [0, 1] such that f(x) > g(x) for almost all x ∈ A. Then,∫ ∨A f dMα >

∫ ∨A g dMα a.s.;

2. max(∫ ∨

f,∫ ∨

g)

=∫ ∨

max(f, g);

3. A ∩B = ∅ ⇒∫ ∨A f is independent of

∫ ∨B g, and

∫ ∨A∪B f = max

(∫ ∨A f,

∫ ∨B f)

a.s.

The first property follows immediately from the definition. The second property holds sinceeverything involved is positive and hence max and sup are interchangeable. The second part of3. holds since sup(A∪B) = max(sup(A), sup(B)). The first part of 3. holds since, if A∩B = ∅,then,

P

(∫ ∨Af ≤ x1,

∫ ∨Bg ≤ x2

)= P (Mα ((y, u) : yf(u) > x1, u ∈ A) = 0,Mα ((y, u) : yg(u) > x2, u ∈ B) = 0)

= P (Mα ((y, u) : yf(u) > x1, u ∈ A) = 0)P (Mα ((y, u) : yg(u) > x2, u ∈ B) = 0)

= P

(∫ ∨Af ≤ x1

)P

(∫ ∨Bg ≤ x2

),

where the independent increments property of the PRM∑∞

i=1 δ(Γ−1/αi ,Ui)

was used. With these

properties in mind, one might now state and prove two necessary Lemmas.

Lemma 3.9. Let f, g ∈ Lα+([0, 1]). Then,

f = g a.e. ⇔∫ ∨

f =

∫ ∨g a.s.

Proof. Set A = x|f(x) > g(x). Then, by property 1 and the definition of A,∫ ∨Af >

∫ ∨Ag and

∫ ∨Acf ≤

∫ ∨Acg

and, by property 2,∫ ∨f = max

(∫ ∨Af,

∫ ∨Acf

),

∫ ∨g = max

(∫ ∨Ag,

∫ ∨Acg

).

In particular,

P

(∫ ∨f >

∫ ∨g

)= P

(∫ ∨Af >

∫ ∨Acg

). (3.8)

30

Page 39: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Now, similarly to the proof of Proposition 3.3, P(∫ ∨

A f ≤ y)

= exp(− 1yα

∫A f(x)α dx

). There-

fore, and by the first part of property 3,

P

(∫ ∨Af >

∫ ∨Acg

)=

∫ ∞0

P

(∫ ∨Acg < y|

∫ ∨Af = y

)P∫ ∨

A f (dy)

=

∫ ∞0

exp

(− 1

(∫Acg(x)α dx+

∫Af(x)α dx

)) ∫A f(x)α dx

yα+1dy

=

∫A f(x)α dx∫

Ac g(x)α dx+∫A f(x)α dx

[exp

(− 1

yα+1

(∫Acg(x)α dx+

∫Af(x)α dx

))]∞0

=

∫A f(x)α dx∫

Ac g(x)α dx+∫A f(x)α dx

. (3.9)

Hence,

P

(∫ ∨f >

∫ ∨g

)= 0⇔ λ(A) = 0.

Lemma 3.10. For any f ∈ Lα+([0, 1]) define

‖f‖α :=

(∫[0,1]|f |α dλ

)1/α

,

which is a quasinorm if 0 < α < 1 and the usual norm on Lα([0, 1]) otherwise. Then, for apositive sequence of functions (fn)n∈N in Lα([0, 1]) one has ‖fn − f‖α → 0 for some f if andonly if

∫ ∨fn dMα converges in probability. Moreover,

limn→∞

∫ ∨fn =

∫ ∨f.

Proof. Assume first, fn → f in norm. Set A = x : f(x) = 0. Then again,∫ ∨

f =

max(∫ ∨

A f,∫ ∨Ac f

). By looking at 1Afn and 1Af , it is enough to consider the cases f > 0

and f = 0. If f = 0 the statement is clear, hence, it suffices to show the case f > 0. As in (3.8)and (3.9), one has

P

(∫ ∨fn >

∫ ∨f(1 + ε)

)= P

(∫ ∨(fn>f(1+ε))

fn >

∫ ∨(fn>f(1+ε))c

f(1 + ε)

)

=

∫(fn>f(1+ε)) fn(x)α dx∫

max(fn(x), f(1 + ε)(x))α dx.

Now, to show that the enumerator converges to 0 let first 0 < α ≤ 1. Then, since it holds forany positive x, y that xα + yα ≥ (x+ y)α,∣∣∣∣∣

∫(fn>(f+ε))

fαn dλ−∫

(fn>(f+ε))fα dλ

∣∣∣∣∣ =

∣∣∣∣∣∫

(fn>f(1+ε))fαn − fα dλ

∣∣∣∣∣≤

∣∣∣∣∣∫

(fn>f(1+ε))(fn − f)α + fα − fα dλ

∣∣∣∣∣≤ ‖fn − f‖αα → 0.

31

Page 40: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

For 1 < α, by the reverse triangle inequality,∣∣∣∣∣∣(∫

(fn>f(1+ε))fαn dλ

)1/α

(∫(fn>f(1+ε))

fα dλ

)1/α∣∣∣∣∣∣ ≤ ‖fn − f‖α → 0.

Furthermore, fn > f(1 + ε) implies |fn − f |α > (εf)α, since otherwise |fn − f | ≤ εf implyingfn − f ≤ εf - a contradiction. Hence,∫

(fn>f(1+ε))fα dλ ≤

∫(|fn−f |α>(εf)α)

fα dλ < ε−α∫

(|fn−f |α>(εf)α)|fn − f |α dλ

≤ ε−α‖fn − f‖αα → 0.

The analogous argumentation gives P(∫ ∨

fn <∫ ∨

f(1− ε))→ 0. Now, using monotonicity of

the natural logarithm one obtains

P

(∫ ∨fn >

∫ ∨f(1 + ε)

)+ P

(∫ ∨fn <

∫ ∨f(1− ε)

)= P

(ln

∫ ∨fn > ln

∫ ∨f(1 + ε)

)+ P

(ln

∫ ∨fn < ln

∫ ∨f(1− ε)

)= P

(ln

∫ ∨fn − ln

∫ ∨f > ln(1 + ε)

)+ P

(ln

∫ ∨fn − ln

∫ ∨f < ln(1− ε)

)= P

(∣∣∣∣ln∫ ∨ fn − ln

∫ ∨f

∣∣∣∣ > min(|ln(1 + ε)|, |ln(1− ε)|),

proving that ln∫ ∨

fn → ln∫ ∨

in probability, since the first line converges to 0 for any ε. Bycontinuity, also

∫ ∨fn →

∫ ∨f in probability.

For the converse, since∫ ∨

fn converges in probability, in particular it converges in distri-bution. Thus, as before

∫fαn dλ converges to some constant c. If c = 0, then clearly fn → 0.

So suppose c 6= 0, then also ln∫ ∨

fn converges in probability and tracing back the steps of theother direction one obtains

0 = limn,m→∞

P

(∫ ∨fn >

∫ ∨fm(1 + ε)

)= lim

n,m→∞

∫(fn>fm(1+ε)) f

αn dλ∫

(max(fn, fm(1 + ε))α dλ.

The denominator is bounded since∫fαn dλ converges. It follows that

limn,m→∞

∫(fn>fm(1+ε))

fαn dλ = 0 and similarly limn,m→∞

∫((1−ε)fm>fn)

fαm dλ = 0.

Now,∫|fn − fm|α dλ ≤

∫(fn>fm(1+ε))

fαn dλ+

∫((1−ε)fm>fn))

2fαm dλ+ εα∫

(|fn−fm|≤εfm)fαm dλ.

The first two summands on the right-hand side converge to zero as n,m → ∞ and since∫(|fn−fm|≤εfm) f

αm dλ is bounded, the last summand can be made arbitrarily small. Hence,

(fn)n∈N is a Cauchy sequence in Lα([0, 1]).

Lemma 3.10 essentially allows to prove the following, more general version of Theorem 3.6.

Theorem 3.11. Let (Y (t))t∈T be a stochastically continuous random field. Then, there existsa family of functions (ft)t∈T ∈ Lα+([0, 1]) with∫ 1

0ft(s) ds = 1, for all t ∈ T,

32

Page 41: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

such that

(Y (t))t∈T =

(supi≥1

Γ−1/αi ft(Ui)

)t∈T

,

where (Γ−1/αi , Ui) are the points of a Poisson process on (0,∞]×[0, 1] with mean measure µα⊗λ,

where λ denotes the Lebesgue measure on [0, 1].

Proof. Fix some d ∈ N. Then, Qd is dense in Rd and countable. Let (rn)n∈N be an enumerationof Qn. Then, by Theorem 3.6, the process (Y (tn))n∈N has representation

(Y (rn))n∈Nd=

(supi≥1

Γ−1/αi frn(Ui)

)rn

.

Now for an arbitrary t ∈ Rn there exists a sequence (tn)n∈N ⊂ (rn)n∈N s.t. tn → t. By continuityof the sample paths, this implies Y (tn)→ Y (t) in probability. This, together with Lemma 3.10,

implies that ftn → ft where ft ∈ Lα+([0, 1]). Moreover supi≥1 Γ−1/αi ft(Ui) =

∫ ∨ft = Y (t).

Lastly, that∫ 1

0 ft(s) ds = 1 follows again from the fact that the marginals have a Frechetdistribution and from (3.7).

The result from Theorem 3.11 can be generalized to non-stochastically continuous processes.For example, Kabluchko proved in [17] the following. Let (Ω,F , ν) be a sufficiently rich prob-

ability space and let (Y (t))t∈T where T ⊂ R. Let, furthermore, (Γ−1/αi , Ti) be the points of

a PRM on the space (0,∞] × Ω with mean measure µα ⊗ ν. Then, there exists a family offunctions (ft)t∈T in Lα+(Ω,F , ν) such that

(Y (t))t∈Td= sup

i≥1Γ−1/αi ft(Ti).

It should be noted that in addition to allowing processes which are not stochastically continuousthis theorem also allows for more general probability spaces than ([0, 1],B, λ).

From a theoretical point of view, the spectral representation is quite useful. Lemmas 3.9and 3.10 indicate that the spectral representation is indeed useful in the sense that propertiesof the process can be expressed in terms of the family of functions (ft)t∈T which might beeasier to handle. For example Lemma 3.10 states that the process is stochastically continuous

if the functions are Lα-continuous (i.e. if tn → t, then ftnLα→ ft). But more is possible. In

the following, conditions on the family (ft)t∈T will be given such that the resulting process isstationary. Before this is given, a piston is introduced. A map Φ : Lα([0, 1]) → Lα([0, 1]) is apiston, if for any f ∈ Lα([0, 1])

Φ(f(t)) = r(t)f(H(t)),

where H is a measurable bijection from [0, 1] to [0, 1] and r is a correcting function such that∫[0,1]

Φ(f(t))α dt =

∫[0,1]

f(t)α dt.

Proposition 3.12. Let (Γ−1/αi , Ui) be the points of a PRM with mean measure µα ⊗ λ, λ

denoting the Lebesgue measure on (0, 1). Let furthermore f ∈ Lα+([0, 1]) be function and (Φt)t∈Ra family of pistons such that Φs+t = Φs Φt. Then, the process

(supi≥1

Γ−1/αi Φt(f)(Ui))t∈R

is stationary.

33

Page 42: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Proof. Write Φt(f) = ft. Then, as in (3.7)

P

(supi≥1

Γ−1/αi ft1(Ui) ≤ x1, . . . , sup

i≥1Γ−1/αi ftn(Ui) ≤ xn

)= exp

(−∫

[0,1]

(maxi≤n

fti(u)

xi

)αdu

).

Hence it suffices to prove that for any t ∈ R one has∫[0,1]

(maxi≤n

fti(u)

xi

)αdu =

∫[0,1]

(maxi≤n

Φt(fti)(u)

xi

)αdu.

Now the right hand side is∫[0,1]

(maxi≤n

Φt(fti)(u)

xi

)αdu =

∫[0,1]

rt(u)α(

maxi≤n

fti(Ht(u))

xi

)αdu

=

∫[0,1]

(Φt

(maxi≤n

fti(u)

xi

))αdu

=

∫[0,1]

(maxi≤n

fti(u)

xi

)αdu.

The spectral representation is also useful to give conditions under which the sample pathsof the process are almost surely continuous and when the sample paths are separable in thefollowing sense: assume from now on that T = [0,∞).

Definition 3.13. Let x : [0,∞)→ R be a function and D ⊂ [0,∞) be a countable subset. Then,x is called separable with respect to D if for any t ∈ [0,∞) there exists a sequence (tn)n∈N ⊂ Dsuch that

tn → t and x(tn)→ x(t).

A stochastic process (Y (t))t∈[0,∞) is called separable, if there exists a subset D ⊂ [0,∞) suchthat its sample paths are almost surely separable with respect to D.

A family of functions (ft)t∈T from a spectral representation of a max-stable process can beinterpreted as a stochastic process on ([0, 1],B[0,1], λ). Such a stochastic process has always aseparable version; see [3]. By changing this family on a null set, one might assume that ft(u)is separable for all u ∈ [0, 1]. So assume from now on that (ft)t∈T is a family of functions suchthat (ft(u))t∈T is separable for all u ∈ [0, 1]. Set

f∗(u) = supt∈T

ft(u) = supt∈T∩D

ft(u).

The following theorem is due to Resnick and Roy; cf. [22].

Theorem 3.14. Let (Y (t))t∈Td= (supi≥1 Γ

−1/αi ft(Ui))t∈T be a max-stable process where (Γ

−1/αi , Ui)

are as before. Suppose f∗ ∈ Lα+([0, 1]). If, for almost all u ∈ [0, 1],

t 7→ ft(u)

is upper semi-continuous, then (Y (t))t∈T is separable. If, additionally, t 7→ ft(u) is lower semi-continuous for almost all u, then (Y (t))t∈T has almost surely continuous sample paths.

34

Page 43: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Proof. As seen previously (e.g. (3.7)), for any n ∈ N,

µα ⊗ λ((x, u) : xf∗(u) > n−1

)= −n

∫[0,1]

f∗(u) du <∞.

And so, since the Poisson distribution has finite first moment, there exists a set A ⊂ Ω suchthat P (A) = 1 and

Mn(ω) := supi : f∗(Ui(ω))Γ

−1/αi (ω) > n−1

<∞ for all ω ∈ A

for all n ∈ N. Furthermore, set

B := ω : (ft(Ui(ω)))t∈T is continuous ∀i ∈ N .

Then, P (B) = 1 and P (A ∩B) = 1. Now let t0 ∈ T be arbitrary. The goal is to show that theprocess is almost surely upper semi-continuous in t0. In order to do so, let ω ∈ A ∩ B and thefollowing two cases are considered:(1) Y (t0)(ω) > 0.Then, there exists an n0 ∈ N such that n0 > Y (t0)(ω) and since ω ∈ A∩B one has Mn0(ω) <∞.Hence,

lim supt→t0

Y (t)(ω) = lim supt→t0

(supi≥1

Γ−1/αi (ω)ft(Ui(ω))

)= lim sup

t→t0

(max

(sup

i≤Mn0 (ω)Γ−1/αi (ω)ft(Ui(ω)), sup

i>Mn0 (ω)Γ−1/αi (ω)ft(Ui(ω))

))

≤ lim supt→t0

(max

(sup

i≤Mn0 (ω)Γ−1/αi (ω)ft(Ui(ω)), sup

i>Mn0 (ω)Γ−1/αi (ω)f∗(Ui(ω))

))

≤ lim supt→t0

(max

(sup

i≤Mn0 (ω)Γ−1/αi (ω)ft(Ui(ω)), n−1

0

)),

since Γ−1/αi f∗(Ui) < n−1 for all i > Mn. Now, since ω ∈ B one has that ft(Ui(ω)) is continuous

in t0 and hence, since the supremum is over a finite set,

supi≤Mn0 (ω)

Γ−1/αi (ω)ft(Ui(ω))

is continuous in t0 as well. Hence,

lim supt→t0

Y (t)(ω) ≤ max

(sup

i≤Mn0 (ω)Γ−1/αi (ω)ft0(Ui(ω)), n−1

0

)≤ max

(Y (t0)(ω), n−1

0

)= Y (t0)(ω).

(2) Y (t0)(ω) = 0.For any n,

lim supt→t0

Y (t)(ω) = lim supt→t0

(supi≥1

Γ−1/αi (ω)ft(Ui(ω))

)= lim sup

t→t0

(max

(sup

i≤Mn(ω)Γ−1/αi (ω)ft(Ui(ω)), sup

i>Mn(ω)Γ−1/αi (ω)ft(Ui(ω))

))

≤ lim supt→t0

(max

(sup

i≤Mn(ω)Γ−1/αi (ω)ft(Ui(ω)), sup

i>Mn(ω)Γ−1/αi (ω)f∗(Ui(ω))

))

≤ lim supt→t0

(max

(sup

i≤Mn(ω)Γ−1/αi (ω)ft(Ui(ω)), n

))≤ max

(Y (t0)(ω), n−1

)= n−1.

35

Page 44: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Since n was arbitrary, it follows that lim supt→t0 Y (t)(ω) ≤ 0 = Y (t0)(ω).To show lower semi-continuity, let ω ∈ B. Then,

lim inft→t0

supi≥1

Γ−1/αi (ω)ft(Ui(ω)) = sup

i≥1Γ−1/αi ft0(Ui(ω)) = Y (t0)(ω),

as the sup over lower semi-continuous functions is again lower semi-continuous. It should beremarked here, that up to now only upper semi-continuity of the map t 7→ ft(u) for almost uwas necessary.To show separability, let C ⊂ [0, 1] be defined through

C := ω : ft(Ui(ω)) is separable for all i ≥ 1 .

Then again, P (C) = 1. Let ω ∈ A ∩ B ∩ C. Let t0 ∈ T be arbitrary. Again, the two cases areconsidered.(a) Y (t0) > 0.Again, let n0 > Y (t0). Then,

Y (t0)(ω) = maxi≤Mn0 (ω)

Γ−1/αi (ω)ft0(Ui(ω))

Let i0 be the index such that

Y (t0)(ω) = Γ−1/αi0

(ω)ft0(Ui0(ω)).

Since ω ∈ C there must exist a sequence tn → t0 such that one has

Y (tn)(ω) ≥ Γ−1/αi0

(ω)ftn(Ui0(ω))→ Γ−1/αi0

(ω)ft0(Ui0(ω)).

Now, this implies

lim inftn→t0

Y (tn)(ω) = Γ−1/αi0

(ω)ft0(Ui0(ω)) = Y (t0)(ω).

Furthermore, as it has been shown that t 7→ Y (t) is almost surely upper semi-continuous, onehas

lim suptn→t0

Y (tn)(ω) = Y (t0)(ω).

(b) Y (t0)(ω) = 0. This follows immediately from upper semi-continuity since

lim suptn→t0

Y (tn)(ω) ≤ Y (t0)(ω) = 0.

As remarked during the proof, in order to obtain upper semi-continuous sample paths itwould have sufficed to require the map t 7→ ft(u) be upper semi-continuous for almost all u.The analogous statement goes for lower semi-continuity.

We conclude this section by introducing to commonly used examples of max-stable processes.

Examples

The Brown-Resnick Process

One very important example is the Brown-Resnick process. In their original paper, [5], Brownand Resnick introduced the process defined through

Y (t) = supi≥1

Γ−1/αi exp

(Wi(t)−

1

2t

), t ≥ 0 (3.10)

36

Page 45: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

where the (Γi)i∈N are as usual and (Wi)i∈N is an i.i.d. sequence of standard Brownian motions,independent of (Γi)i∈N, and showed that it is max-stable and, moreover, stationary. In [17],Kabluchko et al. considered generalizations of this notion. To provide this generalization, firstsome important properties of random fields are introduced. A random field is said to havestationary increments if the distribution of

(W (t0 + t)−W (t0))t∈T , T ⊂ Rd

does not depend on t0. Furthermore, a random field is said to be centered if at every pointt ∈ T it has mean 0. Since the distribution of Gaussian random vectors whose marginals havemean 0 is characterized by its covariance matrix, it follows that the distribution of stationary,centered Gaussian random fields only depends on the covariance structure and variances ofthe marginal distributions. That implies that its distribution is characterized by the so-calledvariogram defined through

γ(t) =1

2E(W (t0 + t)−W (t0))2

and the variance function σ2(t) = EW (t)2, since

γ(t) = −E(W (t+ t0),W (t)) +1

2(σ2(t0 + t) + σ2(t0))

= −Cov(W (t0 + t),W (t0)) +1

2(σ2(t0 + t) + σ2(t0)).

Kabluchko et al. proved in [17] the following theorem:

Theorem 3.15. Let T ⊂ Rd for some d ∈ N and let (Wi)i∈N be i.i.d. copies of a centered Gaus-sian random field (W (t))t∈T with stationary increments, variance function σ2(t) and variogramγ(t). Then, the process

Y (t) = supi≥1

Γ−1/αi exp

(Wi(t)−

σ2(t)

2

)(3.11)

is max-stable and stationary. Moreover, the distribution of (Y (t))t∈T depends only on γ.

From now on, random fields as defined in (3.11) will be referred to as Brown-Resnick fields.

The Max-Moving Process

Another example is the so called max-moving process, which was introduced in [9]. It is given

through the following representation: Let f be a density on the Rd and let (Γ−1/αi , Ui) be the

points of a unit rate homogeneous Poisson process on (0,∞]× R. Then, the process (Y (t))t∈T

(Y (t))t∈R = supi≥1

Γ−1/αi f(t− Ui) (3.12)

is max-stable and moreover stationary.

37

Page 46: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

38

Page 47: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

4 Simulation Methods for Max-Stable Processes

This section presents the three different methods for simulating max-stable process, which are allbased on the representation theorem for max-stable processes. While describing the simulationmethods, a special emphasis will we put on their application to Brown-Resnick fields becausethe next section, where the methods are tested, focuses on their simulation. Moreover, theunderlying Gaussian fields will be either fractional Brownian motions or fractional Brownianfields. A short introduction of these concepts and their relevant properties can be found in theappendix. The first algorithm is the naive method of replacing the supremum over the infiniteset by a maximum over a finite one.

4.1 Limitations of the Naive Method

The idea is straightforward. Assume the max stable random field (Y (t))t∈T has representation

(Y (t))t∈Td=

(supi≥1

Γ−1/αi ft(Ui)

)t∈T

.

Then, the idea is to obtain an approximation (Y (t))t∈T by setting

Y (t) = supi≤M

Γ−1/αi ft(Ui)

for all t ∈ T . While this can give acceptable results for t with small norm, it runs into troublefor large t. To illustrate this, consider the example of a one-dimensional Brown-Resnick process

(Y (t))t∈T =

(supi≥1

Γ−1/αi exp

(B

(i)H (t)− σ2(t)

2

))t∈T

,

where the underlying Gaussian process (BH(t))t≥0 is a fractional Brownian motion. The law ofthe iterated logarithm for fractional Brownian motion gives

lim supt→∞

BH(t)√2t2H log log t

= 1 a.s.;

see Theorem A.3. Now,

lim supt→∞

√2t2H log log t− 1

2t2H = lim sup

t→∞

(t2H

(√2 log log t

t2H− 1

2

))= −∞, (4.1)

since an application of l’Hopital’s rule gives

limt→∞

2 log log t

t2H= lim

t→∞

2(t log t)−1

2Ht2H−1= lim

t→∞

1

Ht2H log t= 0.

It follows that for any fixed m,

supi≤m

Γ−1/αi exp

(B

(i)H (t)− σ2(t)

2

)→ 0 a.s., as t→∞. (4.2)

Hence, simulation of this Brown-Resnick process by the simple method of replacing the supre-mum in (3.10) by a supremum over a finite set becomes inaccurate for large t or requires vastcomputational effort (i.e. choosing m large enough). However, it is observed that the rate ofconvergence depends on the Hurst parameter. More precisely the term in (4.1) is O(−t2H), i.e.the smaller the Hurst parameter the slower the convergence in (4.2). This suggests that oneshould see better simulation results for small Hurst parameters. This is evidenced in Figure

39

Page 48: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Figure 4.1: Simulation of a Brown-Resnick process on [0, 50] where the underlying Gaussianprocesses are fractional Brownian motions with Hurst parameter H = 0.9 (red line), H = 0.5(blue line) and H = 0.1 (green line).

4.1. However, Figure 4.1 also points at the following fact: most of the dependence structureof Brown-Resnick fields is inherited from the underlying Gaussian fields, i.e. high H-valuescorrespond to positive correlation of the increments, while low H-values correspond to negativecorrelation. Hence, one might not be free to chose an arbitrary H-value.

To continue, it is noted that the spectral representation of a max-stable process is far frombeing unique. So it might be useful to find different representation of the max-stable process.In the following, different representations of Brown-Resnick fields will be given that turn out tobe useful for simulation purposes.

4.2 The Dombry-Engelke-Oesting Methods: Methods Based on ExtremalFunctions

In [11], Dombry et al. introduced two exact algorithms. One is a generalization of the methodfrom [10]. However, their algorithm does not apply to arbitrary max-stable random fields:throughout this section the assumption is made that the max-stable process has almost surelycontinuous sample paths. In Theorem 3.14, it was proved that this is the case if the family offunctions is continuous in the sense that the map t 7→ ft(u) is continuous for almost all u. Asa matter of fact, also the converse is true, that a max-stable process whose sample paths arealmost surely continuous, has a representation such that the map t 7→ ft(u) is almost surelycontinuous; cf. [22]. In what follows, it will be convenient to slightly reformulate the spectralrepresentation given in Theorem 3.11 in the case that the max-stable process has continuouspaths that are almost surely continuous and it is assumed without loss of generality that it hasunit Frechet margins. Let C+ = C(Rd,R+) denote the set of continuous and positive functionsfrom Rd into [0,∞). Let, furthermore, T ⊂ Rd and (Y (t))t∈T be a max-stable random field.

40

Page 49: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Lastly, let (ψ)i∈N denote an i.i.d. sequence random elements of C+ defined through

ψi(t) = ft(Ui),

where (ft)t∈T is the family of functions from the spectral representation of Theorem 3.11. Then,

(Y (t))t∈Td=

(supi≥1

Γ−1i ψi(t)

)t∈T

. (4.3)

Let ν denote the distribution of ψi, so that (Γ−1i , ψi) are the points of a Poisson random measure

on (0,∞]× C+ with mean measure µ1 ⊗ ν. Furthermore,

∞∑i=1

δΓ−1i ψi

is Poisson random measure on C+ whose mean measure µ is given through

µ(A) =

∫C

∫ ∞0

1(xf∈A)x−2 dx ν(df).

To see this, let A ⊂ C+ be Borel, then

P

( ∞∑i=1

δ(Γ−1i ψi)

(A) = k

)= P

( ∞∑i=1

δ(Γ−1i ,ψi)

((x, f) : xf ∈ A) = k

)

and

µ1 ⊗ ν ((r, f) : rf ∈ A) =

∫ ∞0

ν((f : xf ∈ A))x−2 dx.

4.2.1 Preliminaries on Extremal Functions

A crucial role in the Dombry-Engelke-Oesting Method is played by so-called extremal functions.Here we collect relevant results about them for the algorithms. Throughout this section, thefollowing are fixed:

• (Y (t))t∈T is a max-stable random field with unit Frechet marginals where T is compact;

• Φ is the set of functions such that (Y (t))t∈Td=(supi≥1 φi(t)

)t∈T where φi ∈ Φ.

We begin with the definition.

Definition 4.1. Let K ⊂ T be a non-empty compact subset. A function φ ∈ Φ is called K-extreme, if there exists a t ∈ T such that φ(t) = Y (t). Otherwise it is called K-subextreme. Let,furthermore, Φ+

K denote the set of K-extreme functions and Φ−k denote K-subextreme functions.

The elements of the sets Φ+K and Φ−K are the points of Poisson random measures on C+.

Furthermore, for any point t ∈ Rd the set Φ+t contains almost surely one point, denoted by

φ+t, and one has

Pt(A) = P(φ+t/Y (t) ∈ A

)=

∫C+

1f/f(t)∈Af(t) ν(df), A ⊂ C+ Borel, (4.4)

where ν denotes the distribution of the random functions φi; cf [13]. Throughout the remainderof this section, for every ti ∈ Rd, let Pti denote the measure from (4.4).

By definition, φ+t(t) = Y (t), so that Pt is supported by functions such that f(t) = 1.

41

Page 50: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Proposition 4.2. The restricted Point process

Φ ∩ f ∈ C+ : f(t) > 0

is a Poisson point process with intensity∫A

1f(t)>0 µ(df) =

∫C+

∫ ∞0

1(xf∈A)x−2 dxPt(df).

For proof, see [11].

Remark 4.3. This proposition has the following important implication: in the case of a max-stable process with unit Frechet marginals one has that ψ(t) > 0 for all t ∈ T almost surely.Consequently, if (ψi)i∈N is a sequence of i.i.d. random variables with distribution ν and (ψi)i∈Nis a sequence of i.i.d random variables with distribution Pt0, then(

supi≥1

Γ−1/αi ψi(t)

)t∈T

d=

(supi≥1

Γ−1/αi ψi(t)

)t∈T

for any t0 ∈ T . The advantage is that ψ(t0) = 1.

Lastly, we need a result from [12]. By definition, Φ+ ∪ Φ− = Φ. For a set K, furthermore,the notation f1 <K f2 means f1(x) < f2(x) for all x ∈ K.

Proposition 4.4. The conditional distribution of Φ−K given Φ+K is equal to the distribution of

a Poisson random measure on C+ with intensity

µ(A) =

∫A

1(f<KY ) µ(df).

After these introductory remarks we turn to the description of the algorithms.

4.2.2 Algorithm 1 (Generalization of the Dieker-Mikosch Algorithm)

Let (Y (t))t∈T be a max-stable random field and consider the locations t1, . . . , td. The goal is toderive an exact simulation algorithm for the vector

(Y (t1), . . . , Y (td)), (4.5)

where exact means that the simulated vector has the same distribution as (4.5). By definition,(4.5) must be max-stable and hence as in (3.2) must possess an exponent measure, i.e. a measureρ on [0,∞)d\0 such that

P (Y (t1) ≤ x1, . . . , Y (td) ≤ xd) = exp(−ρ(

y ∈ [0,∞)d\0 : yi ≥ xi, 1 ≤ i ≤ d))

.

Consider again the polar transformation T : [0,∞]d\0 → (0,∞]× Sd−1+ given through

T (x) =

(x,

x

‖x‖1

),

where ‖x‖1 = |x1| + · · · + |xd|. This particular norm is chosen for convenience. From (3.3), itfollows that ρ T−1 can be factorized into the measure µ1 on (0,∞] with µ1((r,∞]) = r−1 asusual, and a finite measure S on the positive part of the n−dimensional unit sphere. Hencethe distribution is fully characterized by the finite measure S. Thus the idea is to sample from

42

Page 51: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

this measure. As mentioned after (3.3), the measure S is finite and hence Ψ := 1S(Sd−1

+ )S is a

probability measure. Furthermore, the choice of the norm is justified by

S(Sd−1+ ) =

∫Sd−1

+

1S(dx) =

∫Sd−1

+

‖x‖1 S(dx) =

d∑i=1

∫Sd−1

+

xi S(dx) = d, (4.6)

hence

ρ(F ) =

∫T (F )

1 dµ1 ⊗ S = d−1

∫T (F )

x−2(dx) dΨ;

see (3.4) and (3.5). This motivates the following: Let (Γ−1i )i∈N be the points of a Poisson process

with mean measure dµ1 and (Qi)i∈N be an i.i.d. sequence of random variable with distributionΨ, independent of (Γ−1

i )i∈N. Then,

P(

Γ−1i Qi ∈ [0,x] for all i

)= P

( ∞∑i=1

δ(Γ−1i ,Qi)

((r, u) : ru ∈ [0,x]c) = 0

)= exp (−dµ1 ⊗Ψ (T [0,xc]))

= exp

(−∫Sd−1

+

(max

θixi

)−1

S(dθ)

),

so that (supi≥1

Γ−1i Q

(1)i , . . . , sup

i≥1Γ−1i Q

(d)i

)d= (Y (t1), . . . , Y (td)) ,

where Q(j)i denotes the j-th component of Qi. The crucial advantage of this representation

of (4.5) is that Γ−1i is almost surely monotonically decreasing and the Q

(j)i are bounded by 1.

Hence, maxj≤d Γ−1i Q

(j)i is a monotonically decreasing sequence converging to 0. In particular,

there must exist a finite n0 ∈ N such that

maxj≤d

Γ−1n Q(j)

n ≤ minj≤d

supi≤n0

Γ−1i Q

(j)i ∀ n ≥ n0, (4.7)

hence, (supi≥1

Γ−1i Q

(1)i , . . . , sup

i≥1Γ−1i Q

(d)i

)d=

(supi≤n0

Γ−1i Q

(1)i , . . . , sup

i≤n0

Γ−1i Q

(d)i

).

Thus, the idea of the algorithm is to sample(

Γ−1i Q

(1)i , . . . ,Γ−1

i Q(d)i

)until n0 is reached. How-

ever, there is one problem remaining. It is not obvious how to sample from Ψ. This problem isaddressed in the following theorem.

Theorem 4.5. Let (Ik)k∈N be an i.i.d. sequence of random variables which are uniformly

distributed on the set 1, . . . , d. Furthermore, for any 1 ≤ k ≤ d let (Z(k)i )i∈N be an i.i.d.

sequence of stochastic processes distributed according to Ptk . Then,

Qk :=

(Z

(Ik)k (t1), . . . , Z

(Ik)k (td)

)∥∥∥(Z(Ik)

k (t1), . . . , Z(Ik)(td))∥∥∥

1

∈ Sd−1+

are independent and with distribution Ψ. Hence, if (Γ−1i )i∈N are the points of a Poisson random

measure on (0,∞] with mean measure dµ1, then

(Y (t1), . . . , Y (td))d=

(supi≥1

Γ−1i Q

(d)i , . . . , sup

i≥1Γ−1i Q

(d)i

).

43

Page 52: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Proof. To make notation easier, write f(t) for (f(t1), . . . , f(td)) during this proof. Let A ⊂ Sd−1+

be Borel and define

A :=

f :

f(t)

‖f(t)‖1∈ A

.

(4.4) implies for any k one has∫C+

1f :

f(t)‖f(t)‖1

∈A Ptk(df) = Ptk(A) =

∫C+f(tk)1f :

f(t)‖f(t)‖1

∈A ν(df).

Thus, for arbitrary u > 0,

ρ T−1((u,∞)×A) =

∫C+

∫ ∞0

1(x‖f(t)‖1>u)1f :

f(t)‖f(t)‖1

∈Ax−2 dx ν(df)

=

∫C+

1f :

f(t)‖f(t)‖1

∈A ∫ ∞

u‖f(t)‖1

x−2 dx ν(df) =1

u

∫C+‖f(t)‖11

f :f(t)‖f(t)‖1

∈A ν(df)

=1

u

d∑i=1

∫C+f(ti)1f :

f(t)‖f(t)‖1

∈A ν(df) =

d

u

1

d

d∑i=1

∫C+

1f :

f(t)‖f(t)‖1

∈A Pti(df).

For any 1 ≤ k ≤ d let Y (k) be a random element of C+ with distribution Ptk and let I be uniformon 1, . . . , d, independent of Y (k), 1 ≤ k ≤ d. Then,

P

(Y (I)(t)

‖Y (I)(t)‖1∈ A

)=

1

d

d∑i=1

P

(Y (ti)(t)

‖Y (ti)(t)‖1∈ A

)=

1

d

d∑i=1

∫C+

1f :

f(t)‖f(t)‖1

∈A Pti(df)

=u

dµ1 S((u,∞)×A) = Ψ(A),

which proves the claim.

Figure 4.2 shows two simulations of a Brown-Resnick field on [0, 1]2 one 65× 65 grid.

Figure 4.2: Simulations of a Brown-Resnick field on [0, 1]2 using the Generalized Dieker-Mikoschmethod where the underlying Gaussian field is a fractional Brownian field with Hurst parameterH = 0.6 (left) and H = 0.1 (right).

44

Page 53: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

4.2.3 Algorithm 2

The second algorithm relies on Remark 4.3. Consider the point processes Φ+d = Φ+

t1,...,td and

Φ+d = Φ+

t1,...,td. By definition, Φ+d =

(φ+ti

)i≤d and

Y (ti) = φ+ti

(ti).

Hence, in order to obtain an exact sample of (Y (t1), . . . , Y (td)), it suffices to sample Φ+d exactly.

This shall be achieved by inductively sampling the φ+ti

according to the following theorem:

Theorem 4.6. The sequence (φ+ti

)i≤d can be exactly sampled as follows:

• φ+t1

has the same distribution as Γ−11 ψ where Γ1 is a unit Frechet random variable and ψ

is sampled from Pt1;

• given φ+t1, . . . , φ+

tn, the distribution of φ+tn+1

is equal to the one of

φ+tn+1

=

maxj≤n φ

+tj

(tn+1) if Φ+d = ∅

maxφ∈Φ+n+1

φ(tn+1) if Φ+d 6= ∅

,

where Φ+d is a Poisson random measure on C+ with intensity

µ(A) =

∫A

1(f(ti)<φ+ti

(ti),1≤i≤n)1(f(tn+1)>maxi≤n φ+ti

(tn+1))) µ(df). (4.8)

In other words, in order to obtain Y (tn+1) one proceeds as follows: let Γi denote the pointsof a unit rate Poisson process. As long as Γ−1

i > 1, one samples Γiψ, where ψ is sampled fromPtn+1 and only accepts it as a valid sample if Γiψ(ti) ≤ maxj≤n φ

+tj

(ti) for 1 ≤ i ≤ n. If the

sample was not valid, then Y (tn+1) = maxj≤n φ+tj

(tn+1). Since Γ−1i is almost surely converging

to 0, this algorithm must stop after a finite amount of time. The stopping condition comes fromthe fact that ψ(tn+1) = 1, if ψ is sampled from Ptn+1 .

Proof of Theorem 4.3.2. The distribution of φ+t1

is given in (4.4). For the conditional distribu-tion, it is observed that by Proposition 4.4 the conditional distribution of Φ+

n given Φ−n is theone of a Poisson random measure with intensity

µn(A) =

∫A

1(f<x1,...,xnY ) µ(df) =

∫A

1(f(ti)<Y (ti),1≤i≤d) µ(df). (4.9)

Now, consider φ ∈ Φ−n where φ(tn+1) > maxj≤n φ+tj

(tn+1) and consider the point process

Φ+n+1 = Φ−n ∩ f : f(tn+1) > max

j≤nφ+tj

(tn+1),

which, with (4.9) is a Poisson random measure with intensity (4.8) conditionally on (φ+tj

)j≤n.This gives two cases:

1. Φ+n+1 = ∅ so that no function from Φ−n is greater than maxj≤n φ

+tj

(tn+1) and hence,

φtn+1(tn+1) = maxj≤n φ+tj

(tn+1);

2. Φ+n+1 6= ∅ so that there exists a function from Φ−n which is greater than maxj≤n φ

+tj

(tn+1).

In that case φ+n+1(tn+1) = maxφ∈Φ+

n+1φ(tn+1).

This concludes the proof.

45

Page 54: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Before showing how to sample from the distributions Pt, we give some remarks on the speedof the algorithms.

Remark 4.7. We conclude this section with a remark on the speed of the algorithm. Let N id

denote the number of required simulations of the Gaussian random field depending on the numberof samples for algorithm i, i ∈ 1, 2. Then, Dombry et al. proved in [11] that

• EN1d = dEmaxj≤d Y (tj)

−1 for Algorithm 1;

• EN2d = d for Algorithm 2,

and, moreover, EN1d ≥ EN2

d with equality if and only if Y (t1) = · · · = Y (td). When simulatingBrown-Resnick random fields this has the following implication: for Algorithm 2 the expectednumber of operations is independent of the range where the random field is simulated and in-dependent of the dependence structure of the Brown-Resnick field. This is a crucial advantageover the naive method, but also over Algorithm 1. The simulation study in Section 5 seems tosuggest that Emaxj≤d Y (tj)

−1 increases if the maximum of the norms of (tj)j≤d increases.

4.2.4 Sampling from Pt

For both algorithms, it remains to prove that and how one can sample from the distributions Pt.Fortunately, this is possible specifically in the case of Brown-Resnick fields. First, the followingchange of measure lemma is needed.

Lemma 4.8. Let (W (t))t∈T denote a centered Gaussian random field with variance function σ2

and underlying probability space (Ω,F , P ). For fixed t0 ∈ T , denote by Q the measure definedthrough

Q(A) =

∫A

exp

(W (t0)− σ2(t0)

2

)dP.

Then, the distribution of (W (t))t∈T under Q is equal to the distribution of

(W (t)− Cov(W (t),W (t0)))

under P .

Proof. Since the distributions are determined by their finite-dimensional distributions, it isenough to consider those. The Cramer-Wold device is applied. Let therefore d ∈ N be arbitraryand let a = (a1, . . . , ad) ∈ Rd. Furthermore, set a0 = 1. One now compares moment-generatingfunctions. First it is noted that, since

∑di=0 aiW (ti) is centered and Gaussian, one has

E exp

(d∑i=0

aiW (ti)

)= exp

(−1

2Var

(d∑i=0

aiW (ti)

)),

and, hence, ∫exp

(d∑i=1

aiW (ti)

)dQ =

∫exp

(d∑i=0

aiW (ti)−σ2(t0)

2

)dP

= exp

(−σ

2(t0)

2+ Var

(d∑i=0

aiW (ti)

))

= exp

(1

2aΣaT − 1

2σ2(t0)

), (4.10)

46

Page 55: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

where a = (1,a) and Σ denotes the matrix with entries Σi,j = Cov(W (ti),W (tj)), which is ofthe block form

Σ =

(σ2(t0) Σd,0

ΣTd,0 Σ

),

where (Σi,j)1≤i,j≤d = (Cov(W (ti),W (tj)))1≤i,j≤k and

Σd,0 = (Cov(W (t0),W (t1)), . . . ,Cov(W (t0),W (td))) .

It follows that (4.10) can be rewritten as∫exp

(d∑i=1

aiW (ti)

)dQ = exp

(σ2(t0)

2+

1

2(aΣT

d,0 + Σd,0aT + aΣaT )− σ2(t0)

2

)

= exp

(d∑i=1

aiCov(W (t0),W (ti)) + Var

(d∑i=1

aiW (ti)

)),

which is the moment generating function of∑d

i=1 ai(W (ti)− Cov(W (t0),W (ti)).

This allows to give the spectral measure for the Brown-Resnick case.

Proposition 4.9. Fix t1, . . . , td ⊂ [0,∞)d and consider the representation of the Brown-Resnick process given in (3.11). Then, Pti is equal to the distribution of the process(

exp

(W (t)−W (ti)−

1

2Var(W (t)−W (ti)

))t∈T

. (4.11)

Proof. Fix ti. Let Q be given through dQ = exp(W (ti)− σ2(ti)

2

)dP . One has

Pti(A) =

∫C+f(ti)1f : f

‖f(t)‖1∈A ν(df)

=

∫exp

(W (ti)−

σ2(ti)

2

)1

exp(W (·)−σ2

2

)/ exp

(W (ti)−

σ2(ti)

2

)∈A dP

= Q

(exp

(W (·)− σ2

2

)/ exp

(W (ti)−

σ2(ti)

2

)∈ A

)= P

(exp

(W (·)− Cov(W (·),W (ti))−W (ti)− Cov(W (ti),W (ti))−

1

2(σ2 − σ2(ti))

)∈ A

)= P

(exp

(W (·)−W (ti)−

1

2

(σ2 + σ2(ti)− 2Cov(W (·),W (ti))

))∈ A

),

where the second to last equality comes from Lemma 4.8. This, together with the fact thatσ2 + σ2(ti)− 2Cov(W (·),W (ti)) = Var(W (·)−W (ti)), proves the claim.

This is particularly useful if W has stationary increments, since then (4.11) becomes(exp

(W (t− ti)−

1

2γ(t− ti)

))t∈T

,

where γ denotes the variogram of (W (t))t∈T . If, moreover, σ2(t)2 = γ(t), which is the case for

example for fractional Brownian motions, then (W (t− t0))t∈Td= (W (t)−W (t0))t∈T and (4.11)

becomes (exp

(W (t)−W (ti)−

1

2γ(t− ti)

))t∈T

,

47

Page 56: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

Furthermore, contrary to Gaussian random fields with non-stationary increments, it is feasibleto simulate Gaussian random fields with stationary increments. Hence, in this case in order toobtain exact samples of a Brown-Resnick random field via Algorithm 1, one samples(

Γ−1i d

exp (W (t1)− γ(t1 − Ii))∑dj=1 exp (W (tj)− γ(tj − Ii))

, . . . ,Γ−1i d

exp (W (td)− γ(td − Ii))∑dj=1 exp (W (tj)− γ(tj − Ii))

)until n0 from (4.7) is reached, where (Ii) is an i.i.d sequence of uniformly distributed randomvariables on t1, . . . , td and Γ−1

i are the points of a Poisson process with mean measure µ1.

But as mentioned in the introduction, this method is also applicable to the simulation ofother known max-stable processes, e.g. for the max-moving process. The distributions Pti arein this case given in the following proposition.

Proposition 4.10. Consider the max-moving model given in (3.12) and fix t1, . . . , td ⊂ Rd.Then,

Pti(A) = P

(f(·+ U − ti)

f(U)∈ A

),

where U is distributed according the density f .

Proof. The following computation proves the proposition:

Pti(A) =

∫C+φ(ti)1φ: φ

‖φ(t)‖1∈A ν(dφ) =

∫f(ti − U)1f(·−U)/f(ti−U)∈A dP

=

∫Rdf(ti − u)1f(·−u)/f(ti−u)∈A λ(du) =

∫Rdf(v)1f(·+v−ti)/f(v)∈A λ(du)

= P

(f(·+ U − ti)

f(U)∈ A

),

where the second to last equality comes from the change of variables v = ti−u and the last onesince f is a density on Rd.

For further distributions, the reader is referred to [11].

4.3 The Liu-Blanchet-Dieker-Mikosch Method

The last method described here will be the Liu-Blanchet-Dieker-Mikosch method, which is alsoan exact simulation of a max-stable random field at a finite number of points. It was introducedin [18]. While it is asymptotically the fastest method as the number of locations increases, itis not applicable to arbitrary max-stable random fields. More precisely, it is only applicable torandom fields (Y (t))t∈T which have representation

(Y (t))t∈Td=

(supi≥1

Γ−1/αi exp (Wi(t)− µ(t))

)t∈T

,

where Wi are copies of a centered Gaussian random field and µ is a bounded function. But itis noted that the important case of Brown-Resnick random fields has such representation if Tis bounded. The algorithm is based on the following idea: define the three random variablesNX(c, A), NA(γ) and Na(c, A, γ), given a ∈ (0, 1), C ∈ R and 0 < γ < EΓ1, through

NX = inf

m : max

j≤dWn(tj) ≤ a log n+ C ∀ n ≥ m

NA = inf m : Γn ≥ γn ∀n ≥ m

Na = inf

m : nγ ≥ Γ1n

a exp

(C −min

i≤dW1(ti)

), ∀n ≥ m

.

48

Page 57: Point Process Representations of Sum- and Max- … 2017/M.Korte-Stap… · 3 Representation of Max-Stable Random Variables & Processes 23 3.1 Max-Stable Random Variables ... 2 nF

If N = max(NX , NA, Na), then for any 1 ≤ j ≤ d and i > N

Γ−1i exp(Wi(tj)) ≤ Γ−1

i naeC ≤ (γn)−1naeC ≤ Γ−11 exp

(min1≤d

W1(tj)

),

so that in particular(supi≥1

Γ−1i exp(W (t1)− µ(t1)), . . . , sup

i≥1Γ−1i exp(W (td)− µ(td))

)=

(max

1≤i≤NΓ−1i exp(W (t1)− µ(t1)), . . . , max

1≤i≤NΓ−1i exp(W (td)− µ(td))

).

Hence, to obtain an exact sample, N and simultaneously (Wi(t1), . . . ,Wi(td)) and (Γ−1i ) need

to be sampled for 1 ≤ i ≤ N . In order to describe how to do so, three steps will be fol-lowed. First, it will be described how to sample (Γ−1

1 , . . . ,Γ−1N ), afterwards how to sample

(Wi(t1), . . . ,W (td))1≤i≤N , and lastly how to obtain the sample of the max-stable field.

4.3.1 Sampling $(\Gamma_1,\dots,\Gamma_N)$

In this section it shall be described how to sample $(\Gamma_1,\dots,\Gamma_{N_A+l})$, where $l=N-N_A$. First, it will be described how to sample $(\Gamma_1,\dots,\Gamma_{N_A})$. Define the random walk $S_n$ starting at $0$ through $S_n=n\gamma-\Gamma_n$, and set $\xi_0^+=0$ to define the stopping times
$$\xi_i^-=\begin{cases}\inf\{n:\ n>\xi_{i-1}^+,\ S_n<0\}&\text{if }\xi_{i-1}^+<\infty,\\ \infty&\text{else,}\end{cases}$$
and
$$\xi_i^+=\begin{cases}\inf\{n:\ n>\xi_i^-,\ S_n>0\}&\text{if }\xi_i^-<\infty,\\ \infty&\text{else,}\end{cases}$$
so that $N_A=\sup\{\xi_i^-:\ \xi_i^-<\infty\}$. Then, $S_n$ has negative drift since $\gamma<E\Gamma_1$, and hence $0\le N_A<\infty$ a.s. The idea is now to sample the downcrossing and upcrossing segments of the level $0$, $(S_{\xi_i^++1},\dots,S_{\xi_{i+1}^-})$ and $(S_{\xi_{i+1}^-+1},\dots,S_{\xi_{i+1}^+})$, until $S_{N_A}$ is sampled. In order to describe how to achieve this, some notation has to be introduced. For $x\in\mathbb R$ let $P_x$ denote the distribution of the random walk $S_n^x:=x+S_n$, so that $P_0=P_{S_n}$. For any such $x$ define
$$\tau^-=\inf\{n:\ S_n^x<0\}\quad\text{and}\quad\tau^+=\inf\{n:\ S_n^x>0\}.$$
Then, the task can be reformulated in the following way: simulate the downcrossing segments $(S_1^{\xi_i^+},\dots,S_{\tau^-}^{\xi_i^+})$ and the upcrossing segments $(S_1^{\xi_i^-},\dots,S_{\tau^+}^{\xi_i^-})$ until $\xi_i^+=\infty$.

Because of the negative drift of $S_n^x$ it is no problem to sample the downcrossing segments, as $\tau^-$ is almost surely finite. Hence, the difficulty lies in simulating the upcrossing segments, as the event $\{\tau^+=\infty\}$ has to be detected within a finite number of computations. To achieve this, exponential change of measure techniques are used. Let $f$ denote the density of the increments and let $F(\theta)=E\exp(\theta S_1)$. Then, it is well known that
$$f_\theta(x)=\exp\big(x\theta-\log F(\theta)\big)f(x)$$
is again a density; see Section V1.b from [1]. It is also implied there that the most convenient choice for $\theta$ is the number $\theta^*>0$ such that
$$1=F(\theta^*)=E\exp(\theta^*S_1),$$


which is known as Cramér's root. Let $P_x^{\theta^*}$ denote the distribution of a random walk whose increments are distributed according to $f_{\theta^*}$. Under $f_{\theta^*}$ the increments have positive mean, so that under $P_x^{\theta^*}$ the random walk has positive drift; see Section VI.2 from [1]. Thus, if $x<0$, sampling $(S_1^x,\dots,S_{\tau^+}^x)$ under $P_x^{\theta^*}$ can be done in a finite amount of time. The following proposition shows why this is useful.

Proposition 4.11. Let $x<0$ and let $\theta^*$ denote Cramér's root. If $U$ is a standard uniform random variable, independent of $S_n^x$ under $P_x^{\theta^*}$, then

1. the distribution of $\mathbf 1_{(\tau^+<\infty)}$ under $P_x$ is the same as the distribution of $\mathbf 1_{(U\le\exp(-\theta^*(S_{\tau^+}^x-x)))}$ under $P_x^{\theta^*}$;

2. the distribution of $\tau^+$ given $\tau^+<\infty$ under $P_x$ is the same as the distribution of $\tau^+$ given $U\le\exp(-\theta^*(S_{\tau^+}^x-x))$ under $P_x^{\theta^*}$;

3. for any $k\ge1$ the distribution of $(S_1^x,\dots,S_k^x)$ given that $\tau^+=k$ under $P_x$ is the same as the distribution of $(S_1^x,\dots,S_k^x)$ given that $U\le\exp(-\theta^*(S_{\tau^+}^x-x))$ and $\tau^+=k$ under $P_x^{\theta^*}$.

Hence, to obtain the upcrossing segment one samples $(S_1^{\xi_i^-},\dots,S_{\tau^+}^{\xi_i^-})$ under $P_{\xi_i^-}^{\theta^*}$. If $U\le\exp(-\theta^*(S_{\tau^+}^{\xi_i^-}-x))$, then $(S_1^{\xi_i^-},\dots,S_{\tau^+}^{\xi_i^-})$ is accepted as upcrossing segment. If not, then the algorithm stops and the previous downcrossing segment reached $S_{N_A}$.

Proof of Proposition 4.11. First, it is observed that for Borel sets $B_1,\dots,B_k$ and $k\in\mathbb N$ one has
$$P_x(S_1^x\in B_1,\dots,S_k^x\in B_k,\ \tau^+=k)=\int\exp\big(-\theta^*(S_{\tau^+}^x-x)\big)\,\mathbf 1_{(S_1^x\in B_1,\dots,S_k^x\in B_k,\ \tau^+=k)}\,dP_x^{\theta^*}$$
$$=\int\mathbf 1_{(U\le\exp(-\theta^*(S_{\tau^+}^x-x)))}\,\mathbf 1_{(S_1^x\in B_1,\dots,S_k^x\in B_k,\ \tau^+=k)}\,dP_x^{\theta^*}.$$
All three claims follow from this identity: the first two by letting $B_1,\dots,B_k$ be the whole space, and the third one by applying the first two together with basic rules for conditional probabilities.

It remains to show how to sample $(S_{N_A+1},\dots,S_{N_A+l})$ for $0<l\in\mathbb N$. If $N_A$ is known and one sets $x=S_{N_A}$, then this is equivalent to sampling $(S_1^x,\dots,S_l^x)$ given that $\tau^+:=\inf\{n:\ S_n^x>0\}=\infty$. Clearly, $\tau^+=\infty$ is equivalent to $\max_{k\le l}S_k^x<0$ and $\sup_{k>l}S_k^x<0$. Hence, the idea is to sample $(S_1^x,\dots,S_l^x)$ until these two conditions are satisfied. Checking whether the first condition is satisfied is straightforward. For the second condition, it is noted that, due to the Markov property of random walks, the distribution of $(S_k^x)_{k>l}$ only depends on $S_l^x$. Hence, to check whether $\sup_{k>l}S_k^x<0$, one might check whether $\tau^+_{x_1}=\inf\{n:\ S_n^{x_1}>0\}=\infty$, where $x_1=S_l^x$. This is done using the same method based on Proposition 4.11 that was used when sampling $(S_1,\dots,S_{N_A})$.
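As a concrete illustration of the exponential change of measure, assume (as in the implementation in Appendix B) that the $\Gamma_n$ are the arrival times of a unit-rate Poisson process, so that the increments of $S_n=n\gamma-\Gamma_n$ are $\gamma-E_n$ with $E_n$ standard exponential. Then $F(\theta)=e^{\theta\gamma}/(1+\theta)$, Cramér's root solves $e^{\theta^*\gamma}=1+\theta^*$, and under $f_{\theta^*}$ an increment becomes $\gamma$ minus an $\mathrm{Exp}(1+\theta^*)$ variable. The following is a minimal sketch under these assumptions; the class and method names are hypothetical.

// A minimal sketch, assuming increments gamma - Exp(1) with 0 < gamma < 1.
import java.util.Random;

public class CramerTilting {
    // Cramér's root theta* > 0 of exp(theta*gamma) = 1 + theta, found by bisection.
    static double cramerRoot(double gamma) {
        double lo = 1e-12, hi = 1.0;
        while (Math.exp(hi * gamma) - 1.0 - hi < 0) hi *= 2;   // bracket the root
        for (int i = 0; i < 200; i++) {
            double mid = 0.5 * (lo + hi);
            if (Math.exp(mid * gamma) - 1.0 - mid < 0) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    // One increment of the tilted walk: under f_{theta*} the increment gamma - E
    // becomes gamma - Exp(1 + theta*), which has positive mean.
    static double tiltedIncrement(double gamma, double thetaStar, Random rng) {
        return gamma - (-Math.log(1.0 - rng.nextDouble()) / (1.0 + thetaStar));
    }
}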

4.3.2 Sampling $(W_1,\dots,W_N)$

Next, it will be described how to sample $(W_i(t_1),\dots,W_i(t_d))$ for $1\le i\le N_X+l$, where $l=N-N_X$. To make notation easier, let $W_i$ denote the Gaussian vector $(W_i(t_1),\dots,W_i(t_d))$ throughout this section. Set $\eta_0=0$ and define for $i\ge1$ the record-breaking times
$$\eta_i=\begin{cases}\inf\{m:\ \max_{j\le d}W_m(t_j)>a\log m+C,\ m>\eta_{i-1}\}&\text{if }\eta_{i-1}<\infty,\\ \infty&\text{else,}\end{cases}$$
so that $N_X=\max\{\eta_i:\ \eta_i<\infty\}$. Obtaining the sample $(W_1,\dots,W_{N_X})$ shall now be achieved by successively sampling single records. However, sampling until a single record is reached is


not straightforward and requires auxiliary mechanisms. We first give an overview of the main steps and then proceed to describe these steps in detail. Define
$$T_{n_0}=\inf\Big\{k\in\mathbb N:\ \max_{j\le d}W_k(t_j)>a\log(n_0+k)+C\Big\}.$$
The main steps to sample $(W_1,\dots,W_{T_{n_0}})$ are as follows:

Step 1 Sample an auxiliary random variable $K$ from a probability mass function $g_{n_0}$ designed to approximate $T_{n_0}$ given that it is finite.

Step 2 Sample a vector $W_K$ from a distribution $P^{(n_0+K)}$ which is appropriately close to the distribution of $W$ given $\max_{j\le d}W(t_j)>a\log(n_0+K)+C$.

Step 3 Sample $W_1,\dots,W_{K-1}$ normally.

Step 4 Check whether $\max_{j\le d}W_i(t_j)\le a\log(n_0+i)+C$ for $i=1,\dots,K-1$ and whether $T_{n_0}$ is finite.

Step 5 If both conditions from Step 4 are fulfilled, return $(W_1,\dots,W_K)$; else no record breaking occurred.

Note that $K$ and $P^{(n_0+K)}$ are only approximations, but it will be shown that if $g_{n_0}$ is chosen appropriately, the output is exact, given that $T_{n_0}$ is finite. As it turns out that there are different possible choices for the probability mass function $g_{n_0}$, we first describe Step 2. Afterwards we introduce the full algorithm and give conditions on the probability mass function $g_{n_0}$ under which the algorithm gives an exact output. Lastly, a possible choice for $g_{n_0}$ is provided. For arbitrary $n\in\mathbb N$ the measure $P^{(n)}$ is given through
$$\frac{dP^{(n)}}{dP}(x)=\frac{\sum_{i=1}^{d}\mathbf 1_{(x(t_i)>a\log n+C)}}{\sum_{i=1}^{d}P(W(t_i)>a\log n+C)}.$$

To provide some intuition why this measure is indeed related to a conditional distribution, consider the case where $\{W(t_i)>a\log n+C\}\cap\{W(t_j)>a\log n+C\}=\emptyset$ for $j\ne i$, which is of course never the case for Gaussian vectors. Then,
$$P^{(n)}(A)=\frac{1}{P(\max_{j\le d}W(t_j)>a\log n+C)}\int_A\sum_{i=1}^{d}\mathbf 1_{(x(t_i)>a\log n+C)}\,dP$$
$$=\frac{P\big(A\cap\{\max_{j\le d}W(t_j)>a\log n+C\}\big)}{P(\max_{j\le d}W(t_j)>a\log n+C)}=P\Big(A\,\Big|\,\max_{j\le d}W(t_j)>a\log n+C\Big).$$

We now give an algorithm in pseudo code which samples from $P^{(n)}$. Let $\Phi$ denote the standard normal distribution function and $\sigma(t_j)$ the standard deviation of $W(t_j)$. Lastly, let $w_j$ denote the vector with entries $w_j(t_i)=\operatorname{Cov}(W(t_j),W(t_i))/\operatorname{Var}(W(t_j))$.

Algorithm 1 CondSample$(a,C,n)$ for $a\in(0,1)$, $C\in\mathbb R$ and $n\in\mathbb N$
1: Sample an index $\nu$ from the probability mass function
$$P(\nu=j)=\frac{P(W(t_j)>a\log n+C)}{\sum_{i=1}^{d}P(W(t_i)>a\log n+C)}.$$
2: Sample a standard uniform $U$.
3: Set $X(t_\nu)=\sigma(t_\nu)\,\Phi^{-1}\Big(U+(1-U)\Phi\Big(\frac{a\log n+C}{\sigma(t_\nu)}\Big)\Big)$.
4: Sample $W=(W(t_1),\dots,W(t_d))$ under $P$.
5: Return $W-w_\nu W(t_\nu)+w_\nu X(t_\nu)$.
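For step 3 of CondSample, a minimal sketch of the truncated-normal draw is given below; it assumes Apache Commons Math's NormalDistribution (as used in the code in Appendix B), and the class and method names are hypothetical.

// A minimal sketch of the conditional exceedance draw in step 3 of CondSample.
import org.apache.commons.math3.distribution.NormalDistribution;
import java.util.Random;

public class TruncatedExceedance {
    // Draw W(t_nu) conditioned to exceed the level L = a*log(n) + C, where sigma is
    // the standard deviation of W(t_nu), by inverse-transform sampling.
    static double sampleExceedance(double sigma, double a, double C, int n, Random rng) {
        NormalDistribution phi = new NormalDistribution(0, 1);
        double L = a * Math.log(n) + C;
        double u = rng.nextDouble();
        // u + (1-u)*Phi(L/sigma) is uniform on [Phi(L/sigma), 1), so the result exceeds L.
        return sigma * phi.inverseCumulativeProbability(u + (1 - u) * phi.cumulativeProbability(L / sigma));
    }
}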


Next, an algorithm is provided that, for given $n_0$, $g_{n_0}$ and $n_1\ge n_0$, samples $(W_1,\dots,W_{T_{n_1}})$.

Algorithm 2 SampleSingleRecord$(a,C,n_0,n_1)$ for $a\in(0,1)$, $C\in\mathbb R$, $n_1\ge n_0\in\mathbb N$
1: Sample $K$ from $g_{n_0}$.
2: Sample $(W_1,\dots,W_{K-1})$ under $P$.
3: Sample $W_K$ from CondSample$(a,C,n_1+K)$.
4: Sample $W=(W(t_1),\dots,W(t_d))$ under $P$.
5: Sample a standard uniform $U$.
6: If $\max_{j\le d}W_i(t_j)\le a\log(n_1+i)+C$ for $1\le i\le K-1$ and $Ug_{n_0}(K)\le\frac{dP}{dP^{(n_1+K)}}(W_K)$, then return $(W_1,\dots,W_K)$;
7: else return "degenerate".

With the right choice of $n_0$ and $g_{n_0}$, this algorithm gives exact samples:

Proposition 4.12. If $\sum_{i=1}^{d}P(W(t_i)>a\log(n_0+k)+C)\le g_{n_0}(k)$ for all $k\in\mathbb N$, then, if $(\tilde W_1,\dots,\tilde W_T)$ denotes the output of Algorithm SampleSingleRecord conditioned on not being degenerate, one has

1. SampleSingleRecord returns "degenerate" with probability $P(T_{n_1}=\infty)$.

2. $T$ has the same distribution as $T_{n_1}$.

3. The distribution of $(\tilde W_1,\dots,\tilde W_k)$ given that $T=k$ is the same as that of $(W_1,\dots,W_k)$ given that $T_{n_1}=k$.

Proof. Set $A_m=\{x\in\mathbb R^d:\ \max_{j\le d}x_j\le a\log(n_1+m)+C\}$ for $m\ge1$. Let $x^{(m)}\in A_m$ for $m\le k-1$ and $x^{(k)}\in A_k^c$. For an arbitrary Borel set $A$ and $x\in A$ write $P(W_i\in dx)$ for the density of the measure $P(W_i\in\cdot\cap A)$. Then,
$$P(\tilde W_1\in dx^{(1)},\dots,\tilde W_k\in dx^{(k)},\ T=k)$$
$$=P(K=k)\,P(W_1\in dx^{(1)})\cdots P(W_{k-1}\in dx^{(k-1)})\,P^{(n_1+k)}\Big(Ug_{n_0}(k)\le\tfrac{dP}{dP^{(n_1+k)}}(W_k),\ W_k\in dx^{(k)}\Big)$$
$$=g_{n_0}(k)\,P(W_1\in dx^{(1)})\cdots P(W_{k-1}\in dx^{(k-1)})\,E_{P^{(n_1+k)}}\Big(\big(g_{n_0}(k)\big)^{-1}\tfrac{dP}{dP^{(n_1+k)}}(W_k);\ W_k\in dx^{(k)}\Big)$$
$$=g_{n_0}(k)\,P(W_1\in dx^{(1)})\cdots P(W_{k-1}\in dx^{(k-1)})\,\big(g_{n_0}(k)\big)^{-1}P(W_k\in dx^{(k)})$$
$$=P(W_1\in dx^{(1)},\dots,W_{k-1}\in dx^{(k-1)},\ W_k\in dx^{(k)},\ T_{n_1}=k),$$
where the second-to-last equality follows from the fact that $W_k$ is sampled from $P^{(n_1+k)}$, and the second equality follows from the assumption, which implies $\big(g_{n_0}(k)\big)^{-1}\tfrac{dP}{dP^{(n_1+k)}}(W_k)\le1$. Similarly to the proof of Proposition 4.11, the claims follow from this identity.

Lastly, we provide a possible choice for $g_{n_0}$ from [18], show how to sample from this probability mass function, and then give conditions on $n_0$ that ensure that the conditions of Proposition 4.12 are satisfied. Let $\phi$ denote the density of the standard normal distribution, let $\sigma^2=\max_{j\le d}\sigma^2(t_j)$, and define $g_{n_0}$ through
$$g_{n_0}(k)=\frac{\int_{k-1}^{k}\phi\big((a\log(n_0+s)+C)/\sigma\big)\,ds}{\int_{0}^{\infty}\phi\big((a\log(n_0+s)+C)/\sigma\big)\,ds}.$$
The following lemma shows how to sample from $g_{n_0}$.


Lemma 4.13. Let $U$ be uniform on $(0,1)$. Furthermore, let $\Phi$ denote the standard normal distribution function and $\bar\Phi=1-\Phi$, i.e. $\bar\Phi(x)=\int_x^\infty\phi(t)\,dt$. Then,
$$\left\lceil\exp\Big(\frac{\sigma^2}{a^2}-\frac{C}{a}+\frac{\sigma}{a}\,\bar\Phi^{-1}\Big(U\,\bar\Phi\Big(\frac{a\log n_0+C}{\sigma}-\frac{\sigma}{a}\Big)\Big)\Big)-n_0\right\rceil$$
is a random variable with probability mass function $g_{n_0}$.

Proof. Let $f_{n_0}(U)$ denote the expression inside the exponential. Then,
$$P\big(\lceil\exp(f_{n_0}(U))-n_0\rceil\ge k\big)=P\big(\exp(f_{n_0}(U))>k-1+n_0\big)=P\big(f_{n_0}(U)>\log(k+n_0-1)\big)$$
$$=P\Big(\bar\Phi^{-1}\Big(U\,\bar\Phi\Big(\frac{a\log n_0+C}{\sigma}-\frac{\sigma}{a}\Big)\Big)>\frac{a\log(k+n_0-1)+C}{\sigma}-\frac{\sigma}{a}\Big)$$
$$=P\Big(U<\bar\Phi\Big(\frac{a\log(k+n_0-1)+C}{\sigma}-\frac{\sigma}{a}\Big)\Big(\bar\Phi\Big(\frac{a\log n_0+C}{\sigma}-\frac{\sigma}{a}\Big)\Big)^{-1}\Big).$$
Hence, it remains to show that
$$\sum_{m\ge k}g_{n_0}(m)=\frac{\int_{k+n_0-1}^{\infty}\phi\big((a\log x+C)/\sigma\big)\,dx}{\int_{n_0}^{\infty}\phi\big((a\log x+C)/\sigma\big)\,dx}=\bar\Phi\Big(\frac{a\log(k+n_0-1)+C}{\sigma}-\frac{\sigma}{a}\Big)\Big(\bar\Phi\Big(\frac{a\log n_0+C}{\sigma}-\frac{\sigma}{a}\Big)\Big)^{-1}.$$
To do so, let $y>0$ be arbitrary. Then, a couple of changes of variables give
$$\int_y^{\infty}\phi\Big(\frac{a\log x+C}{\sigma}\Big)\,dx=\frac{1}{\sqrt{2\pi}}\int_{\log y}^{\infty}\exp\Big(-\frac{(at+C)^2}{2\sigma^2}+t\Big)\,dt$$
$$=\frac{e^{-C/a}}{\sqrt{2\pi}}\int_{\log y}^{\infty}\exp\Big(-\frac12\Big(\frac{at+C}{\sigma}-\frac{\sigma}{a}\Big)^2+\frac{\sigma^2}{2a^2}\Big)\,dt
=\frac{e^{-C/a}}{\sqrt{2\pi}\,\phi(\sigma/a)/(\sigma/a)}\,\bar\Phi\Big(\frac{a\log y+C}{\sigma}-\frac{\sigma}{a}\Big),$$
which yields the claim by replacing $y$ with $k+n_0-1$ and $n_0$, respectively.
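A minimal sketch of the resulting inverse-transform sampler for $g_{n_0}$ is the following; it again assumes Apache Commons Math's NormalDistribution (as in Appendix B), $\sigma$ denotes the maximal standard deviation $\max_{j\le d}\sigma(t_j)$, and the names are hypothetical.

// A minimal sketch of sampling K from g_{n_0} via Lemma 4.13.
import org.apache.commons.math3.distribution.NormalDistribution;
import java.util.Random;

public class RecordTimeProposal {
    static int sampleK(int n0, double a, double C, double sigma, Random rng) {
        NormalDistribution std = new NormalDistribution(0, 1);
        double L0 = (a * Math.log(n0) + C) / sigma - sigma / a;
        double u = rng.nextDouble();
        // \bar\Phi^{-1}(u * \bar\Phi(L0)) = \Phi^{-1}(1 - u * (1 - \Phi(L0)))
        double q = std.inverseCumulativeProbability(1 - u * (1 - std.cumulativeProbability(L0)));
        double f = sigma * sigma / (a * a) - C / a + (sigma / a) * q;
        return (int) Math.ceil(Math.exp(f) - n0);
    }
}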

The following gives conditions on $n_0$ so that the prerequisites of Proposition 4.12 are fulfilled.

Proposition 4.14. Let $\delta\in(0,1)$ be given. If $a\log n_0+C>\sigma$ and if
$$d\,e^{-C/a}\,\bar\Phi\Big(\frac{a\log n_0+C}{\sigma}-\frac{\sigma}{a}\Big)\le\delta\,\frac{\sqrt{2\pi}\,\phi(\sigma/a)}{\sigma/a},$$
then $g_{n_0}$ satisfies the conditions of Proposition 4.12 and SampleSingleRecord returns "degenerate" with probability at least $1-\delta$.

Proof. The first claim is proven by the following computation:
$$\sum_{i=1}^{d}P\big(W(t_i)>a\log(n_0+k)+C\big)\le d\,\bar\Phi\Big(\frac{a\log(n_0+k)+C}{\sigma}\Big)\le d\,\phi\Big(\frac{a\log(n_0+k)+C}{\sigma}\Big)$$
$$\le d\int_{k-1}^{k}\phi\Big(\frac{a\log(n_0+s)+C}{\sigma}\Big)\,ds=g_{n_0}(k)\,d\int_{0}^{\infty}\phi\Big(\frac{a\log(n_0+s)+C}{\sigma}\Big)\,ds$$
$$=g_{n_0}(k)\,\frac{d\,e^{-C/a}}{\sqrt{2\pi}\,\phi(\sigma/a)/(\sigma/a)}\,\bar\Phi\Big(\frac{a\log n_0+C}{\sigma}-\frac{\sigma}{a}\Big),$$


where the second inequality comes from the fact that $\bar\Phi(x)\le\phi(x)$ for $x\ge1$, the third from the fact that $\phi$ is monotonically decreasing on $[0,\infty)$ (note that $a\log n_0+C>\sigma>0$), and the last equality from the proof of Lemma 4.13. By the assumption, the last expression is at most $\delta\,g_{n_0}(k)\le g_{n_0}(k)$, which proves the first claim. For the second part it is observed from Proposition 4.12 that the probability of the algorithm returning degenerate is equal to $P(T_{n_1}=\infty)$. Hence, by applying the previous calculation one obtains
$$P(T_{n_1}=\infty)=1-\sum_{k=1}^{\infty}P(T_{n_1}=k)\ge1-\sum_{k=1}^{\infty}\sum_{i=1}^{d}P\big(W(t_i)>a\log(n_1+k)+C\big)$$
$$\ge1-\sum_{k=1}^{\infty}\sum_{i=1}^{d}P\big(W(t_i)>a\log(n_0+k)+C\big)\ge1-\delta\sum_{k=1}^{\infty}g_{n_0}(k)=1-\delta,$$
where the second inequality uses $n_1\ge n_0$ and the last one the first part of the proof together with the assumption. This finishes the proof.

It remains to show how to sample beyond $N_X$. The idea is similar to the case of sampling beyond $N_A$. Here, it is described how to generate a number of samples $(W_1,\dots,W_l)$, $l\in\mathbb N$, given that $T_{n_1}=\infty$. The procedure is simple: sample $W_1,\dots,W_l$ under $P$. If $\max_{j\le d}W_i(t_j)\le a\log(n_1+i)+C$ for all $i\le l$, then the sample is accepted; otherwise the procedure is repeated until a sample is accepted. This subsection is concluded by giving the full algorithm to sample $(W_1,\dots,W_N)$.

Algorithm 3 FinalAlgorithm$(a,\delta,C,n_0,l)$, where $a,\delta\in(0,1)$, $C\in\mathbb R$, $l=N-N_X$ and $n_0$ fulfills the requirements of Proposition 4.14
1: $X=[\,]$, $\eta=n_0$.
2: Sample $(W_1,\dots,W_\eta)$ under $P$, $X=[W_1,\dots,W_\eta]$.
3: repeat
4: segment = SampleSingleRecord$(a,C,n_0,\eta)$
5: if segment is not "degenerate" then $X=[X,\text{segment}]$, $\eta=\text{length}(X)$
6: until segment is "degenerate"
7: if $l>0$ then
8: repeat Sample $(W_{N_X+1},\dots,W_{N_X+l})$
9: until $\max_{j\le d}W_{N_X+i}(t_j)\le a\log(\eta+i)+C$ for all $i\le l$
10: $X=[X,(W_{N_X+1},\dots,W_{N_X+l})]$
11: return $X$

Given $(\Gamma_1^{-1},\dots,\Gamma_N^{-1})$ and $(W_1,\dots,W_N)$, it is straightforward to obtain an exact sample.
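A minimal sketch of this last assembly step (for $\alpha=1$, as in the display above) could look as follows, assuming the $\Gamma_i$ and the Gaussian vectors $W_i$ have already been generated as described and that $\mu(t_j)$ is available (for the Brown-Resnick field, $\mu(t_j)=\operatorname{Var}(W(t_j))/2$); the names are hypothetical.

// A minimal sketch: componentwise maximum of Gamma_i^{-1} exp(W_i(t_j) - mu(t_j)) over i <= N.
static double[] assembleMaxStable(double[] gamma, double[][] W, double[] mu) {
    int d = mu.length;
    double[] Y = new double[d];
    java.util.Arrays.fill(Y, Double.NEGATIVE_INFINITY);
    for (int i = 0; i < gamma.length; i++) {
        for (int j = 0; j < d; j++) {
            double v = (1.0 / gamma[i]) * Math.exp(W[i][j] - mu[j]);
            if (v > Y[j]) {
                Y[j] = v;              // componentwise maximum over i = 1,...,N
            }
        }
    }
    return Y;
}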

4.3.3 Choosing a and C

Even though none of these constants influences the order of the complexity of the algorithm, they do influence the number of necessary computations. Therefore, some strategies for choosing them are provided here. First of all, it is observed that increasing the number of simulation points only increases the random variable $N_X$. Assuming that $C$ is fixed, $N_X$ decreases pathwise as $a$ increases. On the other hand,
$$N_a\ge\left(\frac{\Gamma_1\exp\big(C-\min_{j\le d}W_1(t_j)\big)}{\gamma}\right)^{\frac{1}{1-a}},$$
so that if $\Gamma_1\exp\big(C-\min_{j\le d}W_1(t_j)\big)>\gamma$, one has that $N_a$ increases as $a$ increases. Hence, there is a trade-off between $N_X$ and $N_a$ when choosing $a$. Unfortunately, it is not really possible to give


an expression for the expectation of $N_X$. Therefore, a possible strategy is equating the second condition on $n_0$ from Proposition 4.14 to the expected value of $N_a$, i.e. finding an $a$ such that
$$\exp\left(\frac{\sigma}{a}\,\bar\Phi^{-1}\Big(\frac{\delta\sqrt{2\pi}\,e^{C/a}\,\phi(\sigma/a)}{d\,\sigma/a}\Big)+\frac{\sigma^2}{a^2}-\frac{C}{a}\right)=E\left(\left(\frac{\Gamma_1\exp\big(C-\min_{j\le d}W_1(t_j)\big)}{\gamma}\right)^{\frac{1}{1-a}}\right),$$

which exists since, as $a\uparrow1$, the left-hand side stays bounded while the right-hand side converges to infinity, and, as $a\downarrow0$, the right-hand side stays bounded while the left-hand side converges to infinity. Thus, for fixed $C$ one might use numerical approximation to find $a$. Another strategy is assuming $a=1$ and then adjusting $C$ such that the condition on $N_a$ is always fulfilled, that is, setting
$$C=\min_{j\le d}W_1(t_j)+\log\Big(\frac{\gamma}{\Gamma_1}\Big).$$
In this case $N_X$ is small while $N_a$ is also small.

Remark 4.15. We conclude this section with remarks on the speed of the algorithm. Let $c(d)$ denote the cost of sampling the Gaussian field at the points $t_1,\dots,t_d$, which is computationally the most expensive part of the algorithm. Then, Liu et al. showed in [18] that for any $\varepsilon>0$ and $p>0$ one has
$$ER^p=o\big(d^{\varepsilon}c(d)^p\big),$$

where $R$ denotes the total number of operations necessary to obtain an exact sample. In this sense, the algorithm is asymptotically optimal as the number of sampling locations increases. However, the scalar factor is critical. To explain why, consider simulation of a Brown-Resnick process at points $t_1,\dots,t_d$. Fix $a$ and $C$. Reformulating the second condition on $n_0$ of Proposition 4.14 gives
$$n_0\ge\exp\left(\frac{\sigma}{a}\,\bar\Phi^{-1}\Big(\frac{\delta\sqrt{2\pi}\,e^{C/a}\,\phi(\sigma/a)}{d\,\sigma/a}\Big)+\frac{\sigma^2}{a^2}-\frac{C}{a}\right).$$

If $\sigma\to\infty$, then $\phi(\sigma/a)\to0$ and the argument of $\bar\Phi^{-1}$ converges to $0$. Hence, $n_0\to\infty$ exponentially fast as $\sigma\to\infty$. Changing $a$ or $C$ for different $\sigma$ does not help: if $a$ is too small, then $N_a$ will be too large; similarly, if $C$ is too large, then $N_a$ will be too large. This also hints at why this algorithm is asymptotically optimal: $\delta$ can be chosen arbitrarily close to $0$, and this only enlarges the number of necessary simulations of a Gaussian vector by a constant factor.
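For illustration, the bound displayed above (with the constants as reconstructed there) can be evaluated numerically as in the following sketch, assuming Apache Commons Math's NormalDistribution and that the argument passed to $\bar\Phi^{-1}$ lies in $(0,1)$, which holds for small $\delta$ and large $d$; the names are hypothetical.

// A small numerical sketch of the lower bound on n0 from the display above.
import org.apache.commons.math3.distribution.NormalDistribution;

public class LowerBoundN0 {
    static double lowerBound(double sigma2, double a, double C, int d, double delta) {
        NormalDistribution std = new NormalDistribution(0, 1);
        double sigma = Math.sqrt(sigma2);
        double arg = delta * Math.sqrt(2 * Math.PI) * Math.exp(C / a)
                * std.density(sigma / a) / (d * (sigma / a));
        // \bar\Phi^{-1}(arg) = \Phi^{-1}(1 - arg); arg is assumed to lie in (0, 1)
        double q = std.inverseCumulativeProbability(1 - arg);
        return Math.exp((sigma / a) * q + sigma2 / (a * a) - C / a);
    }
}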

These observations do not come as a surprise: the algorithm does not rely on a differentrepresentation than the naive method. More precisely, the random variable N is only a boundafter which additional simulations of a Gaussian vector do not contribute to the component-wisesupremum. Hence, by the law of the iterated logarithm, it must increase exponentially as themaximal variance of the underlying Gaussian field increases.

Precise numbers of n0 depending on the maximal variance will be provided in the simulationsection.


5 Simulation

In this section, the different simulation methods for max-stable processes are tested, and aspects such as elapsed time, the influence of the underlying Gaussian random fields, and goodness-of-fit are analyzed. The necessary computations were done in Java, where the library JTransforms was used for the required fast Fourier transforms. R was used for plotting. The simulations were run on a machine with a relatively slow CPU, namely an Intel(R) Core(TM) i7-3517U with 1.90 GHz and Turbo Boost up to 2.40 GHz.

Unfortunately, there is no single clean way to compare speed and the influence of the Hurst parameter across the methods. For example, the elapsed time of the naive method depends largely on the number of repetitions, which is chosen manually. Moreover, while for the naive method the Hurst parameter influences precision but not speed, for the other algorithms it influences runtime but not precision. Hence, these aspects are discussed with respect to their relevance for each algorithm. The goodness of fit, however, can be compared across all algorithms. Two methods will be used for this.

The first one is the simple visual method of comparing qq-plots. Unfortunately, qq-plots for Fréchet distributions have the disadvantage that their large quantiles dominate the smaller ones, so that the plots are hard to interpret. To remedy this, we take the logarithm of the sampled marginals and compare it to a Gumbel distribution.

Secondly, to visualize the dependence structure, contour plots are used to compare bivariate finite-dimensional distributions. Here it is used that, by [16], analytic expressions for the bivariate distribution functions of Brown-Resnick processes are known. Namely, if $(Y(t))_{t\in T}$ is a Brown-Resnick process with Gumbel marginals, then
$$P(Y(t_1)\le x_1,\ Y(t_2)\le x_2)=F(x_1,x_2)=\exp\left(-e^{-x_1}\,\Phi\Big(\frac{x_2-x_1}{2\sqrt{\gamma(h)}}+\sqrt{\gamma(h)}\Big)-e^{-x_2}\,\Phi\Big(\frac{x_1-x_2}{2\sqrt{\gamma(h)}}+\sqrt{\gamma(h)}\Big)\right),$$
where $\gamma$ denotes the variogram of the underlying Gaussian random field and $h=t_2-t_1$.
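For the contour-plot comparison, the theoretical bivariate distribution function can be evaluated directly from this formula; a minimal sketch, assuming Apache Commons Math's NormalDistribution for $\Phi$ (the class and method names are hypothetical):

// A minimal sketch of evaluating F(x1, x2) for the Brown-Resnick process with
// Gumbel marginals; gammaH is the variogram value gamma(h) at lag h = t2 - t1.
import org.apache.commons.math3.distribution.NormalDistribution;

public class BivariateBR {
    static double F(double x1, double x2, double gammaH) {
        NormalDistribution std = new NormalDistribution(0, 1);
        double s = Math.sqrt(gammaH);
        double term1 = Math.exp(-x1) * std.cumulativeProbability((x2 - x1) / (2 * s) + s);
        double term2 = Math.exp(-x2) * std.cumulativeProbability((x1 - x2) / (2 * s) + s);
        return Math.exp(-(term1 + term2));
    }
}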

5.1 The Naive Method

Here we provide simulation results for the naive method. We start by analyzing the goodness-of-fit.

5.1.1 Goodness-of-Fit

First we look at qq-plots of the marginals. Based on the discussion of the limitations of the naive method, we expect a relatively good fit for small Hurst parameters, even on larger intervals. On the other hand, we expect serious deviations for larger Hurst parameters at locations far from 0. This is confirmed in Figure 5.1: for small Hurst parameters one obtains surprisingly good results, while for larger Hurst parameters the empirical distribution at a distant location is completely off.


Figure 5.1: qq-plot of log-marginals against a Gumbel distribution at four different locations of two different Brown-Resnick processes simulated on the interval [0, 50] with shape parameter α = 1. The top two plots correspond to a Hurst parameter H = 0.1, whereas the bottom two correspond to a Hurst parameter H = 0.9.

Next we compare the empirical bivariate distributions to the theoretical ones via contour plots. As input we use the same samples as in Figure 5.1.

Figure 5.2: Contour plots comparing the empirical and theoretical distribution functions of (Y(2.5), Y(40)), where the top one corresponds to H = 0.1 and the bottom one to H = 0.9.

Again, for small Hurst parameters the results look very good. As expected, for larger Hurst parameters the simulation is of little use.


5.1.2 Speed of the Algorithm

As mentioned, there is little point in analyzing the speed of this algorithm on its own. However, we provide a table with the elapsed time depending on the number of grid points and repetitions, for comparison with the other methods. Since the Gaussian random fields are simulated using circulant embedding methods, the spacing is, when possible, chosen as the inverse of a power of 2.

                              Number of grid points
Repetitions    1025    2049    3000     4097    5000     6000     7000     8193
1000          0.570   1.101   4.497    2.344   7.331    8.185    8.829    4.746
2000          1.226   2.263   7.803    4.679  14.848   16.376   17.872    9.490
3000          1.553   3.375  11.955    6.971  22.546   24.703   26.648   14.083
4000          2.131   4.706  16.498    9.304  29.569   33.380   35.463   18.797

Table 1: Elapsed time in seconds for the simulation of 1-dimensional Brown-Resnick fields.

5.2 The Dombry-Engelke-Oesting Methods

Next, we turn to the Dombry-Engelke-Oesting methods. We first look at the generalized Dieker-Mikosch method.

5.2.1 Goodness-of-Fit

We again start by looking at qq-plots. However, this time we look at points further away from 0 to highlight the advantage of this method.

Figure 5.3: qq-plot of log-marginals against a Gumbel distribution at four different locations of a Brown-Resnick process simulated on the interval [0, 1000] with shape parameter α = 1.

As one would expect from an exact method, the results are very good at every location. The same goes for the other goodness-of-fit test.


Figure 5.4: Contour plots comparing the empirical and theoretical distribution functions of (Y(50), Y(800)), where the top one corresponds to H = 0.1 and the bottom one to H = 0.9.

Next, we look at the other algorithm. Again, we see very satisfying results.

Figure 5.5: qq-plot of log-marginals against a Gumbel distribution at four different locations of a Brown-Resnick process simulated on the interval [0, 1000] with shape parameter α = 1. The top two plots correspond to a Hurst parameter H = 0.2, while the bottom ones correspond to H = 0.8.


Figure 5.6: Contour plots comparing the empirical and theoretical bivariate distribution functions. The top one corresponds to (Y(200), Y(600)) with Hurst parameter H = 0.1 and the bottom one to (Y(300), Y(800)) with H = 0.8.

5.2.2 Speed of the Algorithms

Before elapsed times are compared, we look at some more detailed information on the crucial parameter for both algorithms, that is, the number of times a Gaussian vector had to be simulated. We therefore look first at the influence of the size of the area over which the random field is simulated. Figure 5.7 shows the histograms of 500 simulations of Brown-Resnick fields with varying interval lengths, constant Hurst parameter, and number of samples equal to 513. The top row corresponds to the generalized Dieker-Mikosch algorithm and the bottom row to Algorithm 2. With a constant number of locations, it is obvious that the necessary number of Gaussian vector simulations increases with the interval size for the generalized Dieker-Mikosch algorithm. In particular, it appears that this algorithm suffers from problems similar to those of the naive method and is best used on smaller intervals. On the other hand, as expected, the mean for Algorithm 2 is constant around 500. Moreover, it appears that with increasing interval length the number of simulations becomes more stable. In this sense the algorithm seems to favor greater interval lengths.

Next, we look at the influence of the Hurst parameter and therefore keep the number of sample locations and the interval length fixed. Figure 5.8 shows the histograms of 500 simulations of Brown-Resnick fields with varying Hurst parameters. Again, the top row corresponds to the generalized Dieker-Mikosch algorithm and the bottom row to Algorithm 2. Here, the picture is less clear. The average number of simulations seems to decrease slightly with increasing Hurst parameter, as does the deviation around the mean. Repeated simulations support this observation. For Algorithm 2 we see again that the mean is not influenced by changes to the Hurst parameter. Moreover, with increasing Hurst parameter, we again see a decrease in the deviation around the mean.


Figure 5.7: Histogram of the number of necessary simulations of a Gaussian vector after 500 simulations of a Brown-Resnick field with Hurst parameter H = 0.5 on varying interval sizes. The top row corresponds to simulations with the generalized Dieker-Mikosch method and the bottom one to Algorithm 2. The red line is the mean.

Figure 5.8: Histogram of the number of necessary simulations of a Gaussian vector after 500 simulations of a Brown-Resnick field with varying Hurst parameters. The top row corresponds to simulations with the generalized Dieker-Mikosch method and the bottom one to Algorithm 2. The red line is the mean.


5.3 The Liu-Blanchet-Dieker-Mikosch Method

5.3.1 Goodness-of-Fit

Due to the drawbacks of this method mentioned in Remark 4.15, we only check locations close to 0. As expected from an exact method, the results are satisfying and seem neither better nor worse than those of the other exact methods.

Figure 5.9: qq-plot of log-marginals against a Gumbel distribution at four different locations of a Brown-Resnick process simulated on the interval [0, 1000] with shape parameter α = 1.

Figure 5.10: Contour plots comparing the empirical and theoretical distribution functions of (Y(0.05), Y(0.8)), where the top one corresponds to H = 0.1 and the bottom one to H = 0.9.


5.3.2 Speed of the Algorithm

Before this algorithm is compared to the other methods, we first give some numbers regarding the problem mentioned in Remark 4.15, i.e. for different maximal variances we give the corresponding lower bounds for $n_0$. For simplicity, we assume here that $a=1$ and that $C$ is chosen as described in Subsection 4.3.3. Since in this case $C$ is random, we give averages. Table 2 shows the data. Already for $\sigma^2=2$, the algorithm can only compete with Algorithm 2 from [11] if the number of simulation points is of order $10^6$.

σ²            1         2            3           4          5          6
Lower bound   349.922   1094711.88   8.952·10⁸   3.25·10¹⁷  2.57·10²⁷  2·10³⁶

Table 2: Average lower bound for n₀.

However, on small intervals this algorithm seems to be superior to the others. In fact, if $a$ and $C$ are chosen with the alternative method indicated in Subsection 4.3.3, these numbers can be decreased even further, unfortunately without changing the fundamental problem. To make this comparison more concrete, Tables 3, 4 and 5 provide average elapsed times over 100 simulations for different Hurst parameters.

Number of samples   Liu et al.   Gen. Dieker-Mikosch   Dombry et al.
1025                0.1345       2.0172                0.6548
2049                0.2877       11.3266               2.7478
3000                0.4716       48.1752               12.7844
4097                0.5662       42.2868               9.81
5000                1.0067       146.0612              41.7478
6000                1.4561       204.6843              50.7254
7000                1.6929       318.8584              63.4229
8193                2.1004       218.018               46.3559

Table 3: Average elapsed time in seconds for a Brown-Resnick process on [0, 1] with Hurst parameter H = 0.2.

Number of samples   Liu et al.   Gen. Dieker-Mikosch   Dombry et al.
1025                0.1215       1.7201                0.6501
2049                0.2578       6.9745                2.6257
3000                0.4277       29.4056               12.6034
4097                0.4538       28.3923               9.0405
5000                1.0067       91.7829               43.6994
6000                1.4561       133.2871              47.298
7000                1.6929       169.0037              77.7317
8193                1.9314       112.5708              46.5151

Table 4: Average elapsed time in seconds for a Brown-Resnick process on [0, 1] with Hurst parameter H = 0.5.


Number of samples   Liu et al.   Gen. Dieker-Mikosch   Dombry et al.
1025                0.1316       1.5199                0.6538
2049                0.2566       5.8750                2.5582
3000                0.4597       25.9550               15.0661
4097                0.571        24.9224               9.4455
5000                1.2          73.7777               48.573
6000                1.507        114.5074              52.361
7000                1.5941       140.695               74.9784
8193                1.9214       119.4252              47.0623

Table 5: Average elapsed time in seconds for a Brown-Resnick process on [0, 1] with Hurst parameter H = 0.7.

Additionally, Figure 5.11 gives an indication of how many times a Gaussian vector had to be simulated. The number is so small for the Liu-Blanchet-Dieker-Mikosch method that it is not visible.

Figure 5.11: Comparison of the average number of simulations of a Gaussian vector for the three exact methods. The red line corresponds to the generalized Dieker-Mikosch method, the green line to the Dombry-Engelke-Oesting method and the blue line to the Liu-Blanchet-Dieker-Mikosch method. The top plot corresponds to a Hurst parameter H = 0.2, the middle one to H = 0.5 and the bottom one to H = 0.7.


6 Conclusion

In this thesis four different simulation methods for max-stable random fields were described and applied to the simulation of Brown-Resnick fields. Unfortunately, in this case there is no method that is superior to all others. Which method is best suited strongly depends on the context.

Assuming that exactness is required, by far the fastest method is the Liu-Blanchet-Dieker-Mikosch method, provided the maximal variance of the underlying Gaussian process is sufficiently close to 1 or smaller. If this is not the case, then this algorithm is only preferable to the other algorithms if the number of points is so large that, from a practical point of view, the simulation is not feasible. Hence, this method should be used carefully.

If the maximal variance of the underlying Gaussian random field is greater than 1, then the best choice seems to be Algorithm 2 from Dombry et al. Its greatest advantage over the Dieker-Mikosch method is that it is robust against increasing the size of the simulation area and therefore might be used for simulation over virtually any compact area. Therefore, it is the only algorithm that successfully overcomes the limitations of simulating directly from the spectral representation of Brown-Resnick processes. However, its drawback is that the number of necessary simulations of Gaussian vectors increases linearly with the number of samples.

Lastly, the naive method of simulating directly from the spectral representation gives surprisingly exact results, especially if the increments of the underlying Gaussian random field are negatively correlated, i.e. for small Hurst parameters. In that case the naive method is a serious alternative to the other algorithms, particularly if the number of simulation points is large.


A Fractional Brownian Motion & Fractional Brownian Fields

Since all of the simulations are done by using either fractional Brownian motions or fractionalBrownian fields and furthermore the influence of the Hurst parameter on precision and runtimeis examined, we give here a brief introduction to these concepts. For more detailed informationthe reader is referred to [23].

Definition A.1. A centered Gaussian process $(B_H(t))_{t\in\mathbb R}$ is called fractional Brownian motion with Hurst parameter $0<H<1$ if $B_H(0)=0$ and if its covariance function is given through
$$\operatorname{Cov}\big(B_H(t),B_H(s)\big)=\tfrac12\big(|t|^{2H}+|s|^{2H}-|t-s|^{2H}\big).\tag{A.1}$$

Fractional Brownian motion was first introduced by Kolmogorov but modern terminologygoes back to Mandelbrot and van Ness in [19]. We give the following properties of fractionalBrownian motion.

Proposition A.2. Let $(B_H(t))_{t\in\mathbb R}$ be a fractional Brownian motion. Then,

1. $(B_H(t))_{t\in\mathbb R}$ is self-similar with index $H$, i.e. $(B_H(at))_{t\in\mathbb R}\stackrel{d}{=}(a^HB_H(t))_{t\in\mathbb R}$ for every $a>0$;

2. $(B_H(t))_{t\in\mathbb R}$ has stationary increments.

Moreover, fractional Brownian motion is the only centered Gaussian process with stationary increments that is self-similar with index $0<H<1$; for a proof see e.g. [23]. The index $H$ also influences the covariance structure of the increments. More precisely, one has the following:

• if $0<H<\tfrac12$, then the increments are negatively correlated;

• if $H=\tfrac12$, then the process is a standard Brownian motion;

• if $\tfrac12<H<1$, then the increments are positively correlated.

Hence, for small Hurst parameters one would expect rougher sample paths and for larger Hurst parameters smoother sample paths. This becomes evident in Figure A.1. For standard Brownian motion, the law of the iterated logarithm is well known. However, here we need a more general version.

Theorem A.3. Let $(B_H(t))_{t\ge0}$ be a fractional Brownian motion with Hurst parameter $0<H<1$. Then
$$\limsup_{t\to\infty}\frac{B_H(t)}{\sqrt{2t^{2H}\log\log t}}=1\quad\text{a.s.}$$

To prove this, one shows first that if $(B_H(t))_{t\ge0}$ is a fractional Brownian motion, then
$$\Big(t^{2H}B_H\big(\tfrac1t\big)\Big)_{t\ge0}\stackrel{d}{=}\big(B_H(t)\big)_{t\ge0},$$
and then applies Theorem 3.2.4 from [6]. The simulation of fractional Brownian motions is done via circulant embedding; cf. [1]. For the simulation of $d$-dimensional max-stable random fields, a $d$-dimensional analog of fractional Brownian motion will be used. Usually one considers a centered Gaussian random field $(B_H(t))_{t\in T}$, $T\subset\mathbb R^d$, as a fractional Brownian field if its covariance function is given through
$$\operatorname{Cov}\big(B_H(t),B_H(s)\big)=\tfrac12\big(\|t\|^{2H}+\|s\|^{2H}-\|t-s\|^{2H}\big),\tag{A.2}$$
where $\|\cdot\|$ is some norm on $\mathbb R^d$; see e.g. [23]. However, in order to obtain a feasible simulation algorithm, here a broader class of Gaussian fields will be regarded as fractional Brownian fields.


Figure A.1: Simulation of fractional Brownian motions with Hurst parameters H = 0.2, H =0.5, H = 0.7.

Definition A.4. A non-stationary Gaussian field $(B_H(t))_{t\in T}$, $T\subset\mathbb R^d$, with constant mean is called a fractional Brownian field if there exists a positive constant $k$ such that
$$\operatorname{Var}\big(B_H(t)-B_H(s)\big)=k\,\|t-s\|^{2H}.$$

It is noted that centered Gaussian random fields fulfilling (A.2) are fractional Brownianfields in the sense of Definition A.4. As in the one-dimensional case, the Hurst parametercontrols the smoothness of the field, where high values correspond to smoother fields whilelower values correspond to rougher fields. Here, only fractional Brownian fields of dimension 2will be simulated. In order to do so, the algorithm introduced in [24] is used, which also relieson circulant embedding.


B Java Code

Fractional Brownian Motion

// Requires java.util.Random and JTransforms' DoubleFFT_1D on the classpath.

// Autocovariance of the fractional Gaussian noise increments at lag k, grid spacing c.
public static double fBM(double h, double k, double c) {
    double y;
    if (h == 0.5 && k != 0) {
        y = 0;
    } else {
        y = 0.5 * (-2 * Math.pow(c * k, 2 * h) + Math.pow(Math.abs(k - 1) * c, 2 * h)
                + Math.pow(Math.abs(k + 1) * c, 2 * h));
    }
    return y;
}

// First row of the covariance matrix of the increments (lags 0,...,n-1).
public static double[] covMafBM(double h, int n, double w) {
    double[] Cov = new double[n];
    for (int i = 0; i < n; i++) {
        Cov[i] = fBM(h, i, w);
    }
    return Cov;
}

// Embed the covariance row into the first row of a circulant matrix of size 2n-2.
public static double[] circulant(double[] x) {
    int n = x.length;
    double[] z = new double[2 * n - 2];
    for (int i = 0; i < z.length; i++) {
        if (i < n) {
            z[i] = x[i];
        } else {
            z[i] = x[n - (i - n + 2)];
        }
    }
    return z;
}

// Real-to-complex FFT (JTransforms), unpacked into an array of (real, imaginary) pairs.
public static double[][] fullFFT(double[] x) {
    DoubleFFT_1D obj = new DoubleFFT_1D(x.length);
    obj.realForward(x);
    int n = x.length;
    int m = n / 2;
    double[][] y = new double[n][2];
    y[0][0] = x[0];
    y[0][1] = 0;
    y[m][0] = x[1];
    y[m][1] = 0;
    for (int i = 1; i < m; i++) {
        y[i][0] = x[2 * i];
        y[i][1] = x[2 * i + 1];
        y[n - i][0] = y[i][0];
        y[n - i][1] = y[i][1];
    }
    return y;
}

// The eigenvalues of the circulant matrix are the FFT of its first row.
public static double[][] eigenvalues(double[] circulant) {
    return fullFFT(circulant);
}

// Generate one sample of the fractional Gaussian noise increments via circulant embedding.
public static double[] genIncrfBM(double[][] eigenvalues) {
    Random rand = new Random();
    int n = (eigenvalues.length + 2) / 2;
    double[] x = new double[2 * eigenvalues.length];
    for (int i = 0; i < eigenvalues.length; i++) {
        x[2 * i] = Math.sqrt(eigenvalues[i][0]) * rand.nextGaussian();
        x[2 * i + 1] = Math.sqrt(eigenvalues[i][0]) * rand.nextGaussian();
    }
    DoubleFFT_1D obj = new DoubleFFT_1D(2 * n - 2);
    obj.complexForward(x);
    int m = 2 * n - 2;
    double[] z = new double[n];
    for (int i = 0; i < n; i++) {
        z[i] = 1.0 / Math.sqrt(m) * x[2 * i];
    }
    return z;
}

// Cumulate the increments to obtain one fractional Brownian motion path starting in 0.
public static double[] fBMsample(double[][] eigenvalues) {
    double[] x = genIncrfBM(eigenvalues);
    int n = (eigenvalues.length + 2) / 2;
    double[] y = new double[n + 1];
    y[0] = 0;
    double sum1 = 0;
    for (int i = 1; i < n + 1; i++) {
        sum1 += x[i - 1];
        y[i] = sum1;
    }
    return y;
}
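Example usage of the listing above (a sketch; it assumes the methods sit in one class and JTransforms is on the classpath):

// Simulate one fBM path with Hurst parameter H on [0, tmax], discretised at n+1 points.
public static void main(String[] args) {
    double H = 0.7;
    int n = 1024;                                   // number of increments
    double tmax = 1.0;
    double w = tmax / n;                            // grid spacing
    double[][] eig = eigenvalues(circulant(covMafBM(H, n, w)));
    double[] path = fBMsample(eig);                 // path[k] approximates B_H(k * w)
    System.out.println("B_H(tmax) = " + path[n]);
}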

2D Fractional Brownian Fields

public static double fkt1 (double k, double h)

double z = 0;

if (0 <= k && k<= 1)

z = 1-h-Math.pow(k, 2*h)+(h*k*k);

else

z = 0;

return z;

public static double[] circulant(double h, int n)

int m = 2*(n-1);

double[]z=new double[m*m];

double[]y=new double[2];

double d = 1.0/n;

int counter = 0;

for(int j = 0; j<m; j++)

for (int i = 0; i<m;i++)

if (j <= m/2.0)

y[0] = d*j;

else

y[0] = d*(j-m);

if (i <= m/2.0)

y[1] = d*i;

else

y[1] = d*(i-m);

z[counter] = fkt1(euclid(y),h);


counter+=1;

return(z);

public static double[] eigenvalues(double[] x)

double[] z = new double[2*x.length];

for (int i = 0; i< x.length;i++)

z[2*i] = x[i];

z[2*i+1] = 0;

double n = Math.sqrt(x.length);

int m = (int)n;

DoubleFFT_2D rand = new DoubleFFT_2D(m,m);

rand.complexForward(z);

for (int i = 0; i < x.length;i++)

x[i] = z[2*i];

return x;

public static double[][] statField(double[]x)

Random rand = new Random();

double[] z = new double[2*x.length];

for(int i = 0; i < x.length;i++)

z[2*i] = Math.sqrt(x[i])*rand.nextGaussian();

z[2*i+1] = Math.sqrt(x[i])*rand.nextGaussian();

double n = Math.sqrt(x.length);

int m = (int)n;

DoubleFFT_2D fft = new DoubleFFT_2D(m,m);

fft.complexForward(z);

double l = (Math.sqrt(x.length)/2) + 1;

double[][] y = new double[(int)l][(int)l];

for (int i = 0; i< l;i++)

for (int j = 0; j< l;j++)

y[i][j] = (1.0/Math.sqrt(x.length))*z[(2*i)*((int)l)+2*j];

return y;

public static double[][] fBMsample(double[][] x, double h, int n)

double c = Math.sqrt(2*h);

double d = 1.0/n;

Random rand = new Random();

double[] Gaussian = new double[2];

Gaussian[0] = c*rand.nextGaussian();

Gaussian[1] = c*rand.nextGaussian();

for (int i = 0; i < x.length; i++)

for(int j = 0; j< x[0].length;j++)

x[i][j] = x[i][j] + i*d*Gaussian[0]+j*d*c*Gaussian[1];

x[i][j] = x[i][j] -x[0][0];

return x;


Generalized Dieker-Mikosch Algorithm

public static double[] DomEngOest (double h, int n, double tmax, double a)

double w = tmax/n;

double[][] eigenvalues = eigenvalues(circulant(covMafBM(h,n,w)));

double min = 0;

Random rand = new Random();

int shift;

double[]Gaussian = new double[n+1];

double[] t = new double[Gaussian.length];

for (int i = 0; i <t.length; i++)

t[i] = i*w;

double[] values = new double[n+2];

for (int i = 0; i < n+1; i++)

values[i] = 0;

double sum = 0;

double gam = 0;

double gam2 =0;

boolean start = true;

double c;

int counter = 0;

while(min < Math.pow(gam2, -a)*Gaussian.length || start)

counter +=1;

shift = rand.nextInt(n+1);

Gaussian = fBMsample(eigenvalues);

sum = 0;

for (int i = 0; i < Gaussian.length; i++)

sum += Math.exp(Gaussian[i]-0.5*Math.pow(Math.abs(t[i]-t[shift]),2*h));

sum = (1.0/Gaussian.length)*sum;

gam += Math.log(1-rand.nextDouble())/(-1.0);

for (int i = 0; i < Gaussian.length; i++)

c =

Math.pow(gam,-a)*Math.exp(Gaussian[i]-0.5*Math.pow(Math.abs(t[i]-t[shift]),2*h));

c = (1/sum)*c;

if (c > values[i])

values[i] = c;

min = values[0];

for (int j = 0; j <Gaussian.length;j++)

if(values[j]< min)

min = values[j];

gam2 = gam +Math.log(1-rand.nextDouble())/(-1.0);

start = false;

values[n+1] = (double)counter;

return values;

Dombry-Engelke-Oesting Algorithm 2


public static double[] DomEngOest2(double h, int n, double tmax, double a)

double w = tmax/n;

double[][] eigenvalues = eigenvalues(circulant(covMafBM(h,n,w)));

Random rand = new Random();

double[]Gaussian = new double[n+1];

double[] t = new double[Gaussian.length];

for (int i = 0; i <t.length; i++)

t[i] = i*w;

double[] values = new double[n+2];

double gam;

double exp = 1/(Math.log(1-rand.nextDouble())/(-1.0));

Gaussian = fBMsample(eigenvalues);

boolean test = false;

for (int i = 0; i < Gaussian.length; i++)

values[i] = exp*Math.exp(Gaussian[i]-Gaussian[0] -

0.5*Math.pow(Math.abs(t[i]-t[0]), 2*h));

int counter = 1;

for (int i = 1; i < Gaussian.length; i++)

exp = Math.log(1-rand.nextDouble())/(-1.0);

while(1/exp > values[i])

gam = 1/exp;

Gaussian = fBMsample(eigenvalues);

counter+=1;

test = false;

for (int j = 0; j< i; j++)

if (gam*Math.exp(Gaussian[j]-Gaussian[i] -

0.5*Math.pow(Math.abs(t[j]-t[i]), 2*h))>values[j])

test = true;

break;

if(!test)

for (int j = i; j < Gaussian.length;j++)

double d =gam*Math.exp(Gaussian[j]-Gaussian[i] -

0.5*Math.pow(Math.abs(t[j]-t[i]), 2*h));

if(d>values[j])

values[j] = d;

exp += Math.log(1-rand.nextDouble())/(-1.0);

values[values.length-1] = counter;

return values;

Liu-Blanchet-Dieker-Mikosch Method

public static double choosea (double C, double[]t, double h, double delta, double

gam)

double[] circulant = ID.covMafBM(h,t.length-1,t[1]-t[0]);

circulant = ID.circulant(circulant);

double[][] eigenvalues = ID.eigenvalues(circulant);


double[] Gaussian = ID.fBMsample(eigenvalues);

double[] var = var(t,h);

double maxsd = var[0];

double min = Gaussian[0];

for(int i = 1; i< Gaussian.length;i++)

if(min > Gaussian[i])

min = Gaussian[i];

if(maxsd < var[i])

maxsd = var[i];

double a = 0.8;

NormalDistribution dist = new NormalDistribution(0,1);

double value1 = delta*Math.sqrt(2*Math.PI)*dist.density(maxsd/a);

value1 = value1/(t.length*(maxsd/a));

value1 = (maxsd/a)*dist.cumulativeProbability(1-value1)+(maxsd*maxsd/(a*a))-(C/a);

value1= Math.exp(value1);

Random rand = new Random();

double sum = 0;

for (int i = 0; i< 100; i++)

sum +=Math.pow(Math.log(1-rand.nextDouble())/(-1.0),1/(1-a));

sum = (1.0/1000)*sum;

double value2 = Math.pow(Math.exp(C-min)/(gam), 1/(1-a))*sum;

if(value1 > value2)

a -= 0.01;

else if (value2 > value1)

a += 0.01;

while (Math.abs(value1-value2)>5)

value1 = delta*Math.sqrt(2*Math.PI)*dist.density(maxsd/a);

value1 = value1/(t.length*(maxsd/a));

value1 =

(maxsd/a)*dist.cumulativeProbability(1-value1)+(maxsd*maxsd/(a*a))-(C/a);

value1= Math.exp(value1);

value2 = Math.pow(Math.exp(C-min)/(gam), 1/(1-a))*sum;

if(value1 > value2)

a -= 0.01;

else if (value2 > value1)

a += 0.01;

if(a<0)

a +=0.01;

break;

else if(a > 1)

a-=0.01;

break;

return a;

public static double chooseC(double min, double gam, double exp)

return min + Math.log(gam/exp);

public static double choosen0 (double a, double C, double h, double[] t, double

delta)

NormalDistribution dist = new NormalDistribution(0,1);


double[] var = var(t,h);

double maxsd = 0;

for (int i = 0; i< var.length; i++)

if(var[i]>maxsd)

maxsd = var[i];

double z = delta*Math.sqrt(2*Math.PI)*dist.density(maxsd/a);

z = z/(t.length*(maxsd/a));

z = dist.inverseCumulativeProbability(1-z);

z = (maxsd/a)*z + (maxsd*maxsd/(a*a))-(C/a);

z = Math.exp(z);

double n1 = z+1;

double n2 = Math.ceil(Math.exp((maxsd-C)/a));

return Math.max(n1, n2);

public static ArrayList<Double> downcrossingSegment(double x, double gam)

Random rand = new Random();

double counter = 0;

double startPoint = x;

boolean start = true;

ArrayList<Double> list = new ArrayList<Double>();

while (x > 0 || start)

counter +=1;

start = false;

x += (gam-Math.log(1-rand.nextDouble())/(-1.0));

list.add(counter*gam - (x-startPoint));

return list;

public static ArrayList<Double> upcrossingSegment(double x, double gam, double theta)

Random rand = new Random();

ArrayList<Double> list = new ArrayList<Double>();

double counter = 0;

double u;

double startPoint = x;

while(x<0)

counter += 1;

u = rand.nextDouble();

x += (Math.log((theta+1)*u) + gam)/(theta+1);

list.add(counter*gam - (x-startPoint));

int i = list.size();

u = rand.nextDouble();

if (u >Math.exp(-theta*((counter * gam - list.get(i-1))-startPoint)))

list = null;

return list;

public static ArrayList<Double> NA (double gam)

ArrayList<Double> sample = new ArrayList<Double>();

ArrayList<Double> tempSample = new ArrayList<Double>();

boolean degenerate = false;

double x=0;

double exp = 0;

double theta = cramer(gam,10);

while(!degenerate)

tempSample = downcrossingSegment(x, gam);


x = tempSample.size()* gam -tempSample.get(tempSample.size()-1);

for (int j = 0; j< tempSample.size();j++)

sample.add(tempSample.get(j)+exp);

exp = sample.get(sample.size()-1);

tempSample = upcrossingSegment(x, gam, theta);

if (tempSample == null)

degenerate = true;

else

x = tempSample.size()* gam -tempSample.get(tempSample.size()-1);

for(int j =1 ; j < tempSample.size();j++)

sample.add(tempSample.get(j)+exp);

exp = sample.get(sample.size()-1);

return sample;

public static double[] pastNA (int l, double startPoint, double gam, double theta)

boolean cont = true;

double max = startPoint;

Random rand = new Random();

double[] values = new double[l];

double x = startPoint;

while (cont)

x = startPoint;

max = x;

for (int i= 0; i < l; i++)

x += gam- Math.log(1-rand.nextDouble())/(-1.0);

if (x > max)

max = x;

values[i] = ((i+1)*gam - (x-startPoint));

if (max < 0)

if(upcrossingSegment(x,gam,theta) == null)

cont = false;

return values;

public static double discSample (double a, double C, double n, double[]var)

double[][] prob = new double[var.length][2];

double sum1 = 0;

for (int i = 0; i < var.length;i++)

if(var[i]>0)

NormalDistribution distribution = new NormalDistribution(0,

Math.sqrt(var[i]));

sum1 += 1-distribution.cumulativeProbability(a*Math.log(n)+C);

else if(a*Math.log(n)+C <= 0)

sum1 +=1;

for (int j = 0; j < prob.length; j++)

if(var[j] > 0)

NormalDistribution distribution = new NormalDistribution(0,

Math.sqrt(var[j]));


prob[j][0] = (1/sum1)*(1-distribution.cumulativeProbability(a*Math.log(n)+C));

prob[j][1] = j;

else if (a*Math.log(n)+C<=0)

prob[j][0] = 1/sum1;

prob[j][1] = j;

else if (a*Math.log(n)+C > 0)

prob[j][0] = 0;

prob[j][1] = j;

Random rand = new Random();

double u = rand.nextDouble();

double[] sumVec = new double[prob.length+1];

sumVec[0] = 0;

double sum = 0;

double z =0;

for (int i = 0; i < prob.length;i++)

sum += prob[i][0];

sumVec[i+1] = sum;

if( u <=sum && u> sumVec[i])

z = prob[i][1];

return z;

public static double[] condSample(double a, double C, double n, double h, double[] t)

double[] var = var(t,h);

int j = (int)discSample(a,C,n,var);

Random rand = new Random();

double u = rand.nextDouble();

NormalDistribution dist = new NormalDistribution(0,1);

double x = dist.cumulativeProbability((a*Math.log(n)+C)/var[(int)j]);

x = u + (1-u)*x;

x = var[(int)j]*dist.inverseCumulativeProbability(x);

double[] W =

ID.fBMsample(ID.eigenvalues(ID.circulant(ID.covMafBM(h,t.length-1,t[1]-t[0]))));

double temp = W[j];

for (int i = 0; i < W.length; i++)

double cov = 0.5*(Math.pow(Math.abs(t[j]),2*h)+Math.pow(Math.abs(t[i]),2*h)-

Math.pow(Math.abs(t[j]-t[i]),2*h));

W[i] = W[i] - (cov /var[j])*temp+x;

return W;

public static double sampleG(int n0, double a, double C, double[] t, double h)

double maxsd = 0;

double mean = 0;

double sd = 1;

double[] var = var(t,h);

for(int i = 0; i < var.length;i++)

if(var[i]> maxsd*maxsd)

maxsd = Math.sqrt(var[i]);

Random rand = new Random();

double u = rand.nextDouble();

NormalDistribution distribution = new NormalDistribution(mean,sd);

double z = (a*Math.log(n0)+C)/(maxsd);


z -= maxsd/a;

z = 1-distribution.cumulativeProbability(z);

z = u*z;

z = distribution.inverseCumulativeProbability(1-z);

z = z*(maxsd/a);

z = (maxsd*maxsd)/(a*a)-(C/a)+z;

z = Math.ceil(Math.exp(z)-n0);

return z;

ublic static List<double[]> sampleSingleRecord(double a, double C, double n1, double

n2,double h, double[] t)

int k = (int) sampleG((int)n1,a,C,t,h);

double[] Gaussian;

List<double[]>z = new ArrayList<double[]>();

boolean fine = true;

for (int i = 0; i < k-1;i++)

Gaussian =

ID.fBMsample(ID.eigenvalues(ID.circulant(ID.covMafBM(h,t.length-1,t[1]-t[0]))));

for (int j = 0; j< Gaussian.length;j++)

if (Gaussian[j]> a*Math.log(n1)+C)

fine = false;

z = null;

break;

if (fine)

z.add(Gaussian);

if (!fine)

break;

if (z != null)

int counter = 0;

double[]condGaussian = condSample(a,C,n2+k,h,t);

double sum1 = 0;

double[] var = var(t,h);

double maxsd = 0;

for (int i = 0; i < var.length;i++)

if (var[i] > 0)

NormalDistribution distribution = new NormalDistribution(0,

Math.sqrt(var[i]));

sum1 += 1-distribution.cumulativeProbability(a*Math.log(n2+k)+C);

else if (a*Math.log(n2+k)+C<=0)

sum1 += 1;

if (condGaussian[i] > a* Math.log(n2 + k)+ C)

counter += 1;

if(var[i]>maxsd*maxsd)

maxsd = Math.sqrt(var[i]);

NormalDistribution dist = new NormalDistribution(0,1);

Random rand = new Random();

double u = rand.nextDouble();

double test1 = -dist.cumulativeProbability((a*Math.log(n1+k-1)+C)/(maxsd)

-(maxsd/a));


test1 += dist.cumulativeProbability((a*Math.log(n1+k)+C)/maxsd - (maxsd/a));

test1 = test1/(1-dist.cumulativeProbability((a*Math.log(n1)+C)/(maxsd)-

(maxsd/a)));

if (u * test1 > sum1/counter)

z = null;

else

z.add(condGaussian);

return z;

public static List<double[]> sampleWithoutRecord(double a, double C, int n, int l,

double[] t, double h)

double[] circulant = ID.covMafBM(h,t.length-1,t[1]-t[0]);

circulant = ID.circulant(circulant);

double[][] eigenvalues = ID.eigenvalues(circulant);

List<double[]> sample = new ArrayList<double[]>();

double[] Gaussian;

boolean cont2 = true;

while(cont2)

cont2 = true;

for(int i = 0; i < l; i++)

Gaussian = ID.fBMsample(eigenvalues);

for(int j = 0; j< Gaussian.length; j++)

if(Gaussian[j] -a * Math.log(n + (i+1))>C)

cont2 = false;

break;

if (!cont2)

break;

sample.add(Gaussian);

if (cont2)

break;

return sample;

public static List<double[]> NX (double a, double C, int n0, double h, double[]t)

List<double[]> samples = new ArrayList<double[]>();

double[] circulant = ID.covMafBM(h,t.length-1,t[1]-t[0]);

circulant = ID.circulant(circulant);

double[][] eigenvalues = ID.eigenvalues(circulant);

double[] Gaussian;

for (int i = 0; i < n0; i++)

Gaussian = ID.fBMsample(eigenvalues);

samples.add(Gaussian);

boolean cont = true;

int n1 = n0;

List<double[]> tempSample = new ArrayList<double[]>();

while(cont)

tempSample = sampleSingleRecord(a,C,(double)n0,(double)n1,h,t);

if (tempSample != null)

for (int i = 0; i < tempSample.size(); i++)

samples.add(tempSample.get(i));


n1 = samples.size();

else

cont = false;

return samples;

public static double[] LBDMalg (double gam, double a, double C, double h, double[]t,

double alpha, double delta)

ArrayList<Double> NA = NA(gam);

double[] circulant = ID.covMafBM(h,t.length-1,t[1]-t[0]);

circulant = ID.circulant(circulant);

double[][] eigenvalues = ID.eigenvalues(circulant);

double[] Gaussian = ID.fBMsample(eigenvalues);

double min = Gaussian[0];

for(int i = 0; i< Gaussian.length;i++)

if(min > Gaussian[i])

min = Gaussian[i];

double[] values;

//C = chooseC(min,gam,NA.get(0));

a = choosea(C,t,h,delta,gam);

int max=0;

int n0 = choosen0(a,C,h,t,delta);

if (n0 < 15000)

List<double[]> NX = NX(a,C,n0,h,t);

NX.set(0,Gaussian);

double exp = NA.get(0);

int n3 = (int)Math.ceil((exp*Math.exp(C-min))/gam);

max = Math.max(NA.size(),n3);

max = Math.max(max, NX.size());

int NAsize = NA.size()-1;

if (NA.size() < max)

double[] pastNA = pastNA(max-NA.size(),NA.size()*gam

-NA.get(NA.size()-1),gam,cramer(gam,10));

for (int i = 0; i< pastNA.length;i++)

NA.add(pastNA[i]+NA.get(NAsize));

if (NX.size() < max)

List<double[]> adSamples = sampleWithoutRecord(a,C,n0,max-NX.size(),t,h);

for (int j = 0; j <adSamples.size();j++)

NX.add(adSamples.get(j));

double[] var = var(t,h);

values = new double[t.length+1];

double d;

for (int i = 0; i < max ; i++)

for (int j = 0; j< t.length; j++)

d = Math.pow(NA.get(i), alpha)*Math.exp(NX.get(i)[j] - 0.5*var[j]);

if (d > values[j] && i != 0)

values[j] = d;

else if(i == 0)

values[j] = d;


else

values = null;

if(values != null)

values[values.length-1]=(double)max;

return values;
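For orientation, the following minimal sketch shows how the routine above might be called. All numerical values (grid, Hurst parameter h, gam, a, C, alpha, delta) are illustrative assumptions and not the parameter choices used in the simulation study.

        // Illustrative call of LBDMalg; every numerical value below is an
        // assumption chosen for demonstration only.
        double h = 0.5;                        // Hurst parameter of the driving fBM
        double[] t = new double[100];
        for (int i = 0; i < t.length; i++)
            t[i] = i * 0.01;                   // equidistant grid, spacing t[1]-t[0] = 0.01
        double gam = 1.0, a = 1.0, C = 5.0;    // a is re-chosen inside LBDMalg via choosea
        double alpha = -1.0, delta = 0.01;
        double[] field = LBDMalg(gam, a, C, h, t, alpha, delta);
        if (field != null) {
            // field[0..t.length-1] holds the simulated values; the last entry
            // stores the number of terms that entered the pointwise maximum.
            System.out.println("terms used: " + (int) field[field.length - 1]);
        }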


References

[1] Asmussen, S., and Glynn, P. Stochastic Simulation: Algorithms and Analysis. Stochastic Modelling and Applied Probability. Springer New York, 2007.
[2] Billingsley, P. Convergence of Probability Measures. Wiley Series in Probability and Mathematical Statistics: Tracts on Probability and Statistics. Wiley, 1968.
[3] Billingsley, P. Probability and Measure, second ed. John Wiley and Sons, 1986.
[4] Blanchet, J., and Davison, A. C. Spatial modeling of extreme snow depth. Ann. Appl. Stat. 5, 3 (2011), 1699–1725.
[5] Brown, B. M., and Resnick, S. I. Extreme values of independent stochastic processes. Journal of Applied Probability 14, 4 (1977), 732–739.
[6] Cohen, S., and Istas, J. Fractional Fields and Applications. Mathématiques et Applications. Springer-Verlag Berlin-Heidelberg, 2013.
[7] de Haan, L., and de Ronde, J. Sea and wind: Multivariate extremes at work. Extremes 1, 1 (1998), 7.
[8] de Haan, L., and Ferreira, A. Extreme Value Theory: An Introduction. Springer Series in Operations Research and Financial Engineering. Springer New York, 2006.
[9] de Haan, L., and Pickands, J. Stationary min-stable stochastic processes. Probability Theory and Related Fields 72, 4 (1986), 477–492.
[10] Dieker, T., and Mikosch, T. Exact simulation of Brown-Resnick random fields at a finite number of locations. Extremes 18 (2015), 301–314.
[11] Dombry, C., Engelke, S., and Oesting, M. Exact simulation of max-stable processes. Biometrika 103, 2 (2016), 303.
[12] Dombry, C., and Eyi-Minko, F. Strong mixing properties of max-infinitely divisible random fields. Stochastic Processes and their Applications 122, 11 (2012), 3790–3811.
[13] Dombry, C., and Eyi-Minko, F. Regular conditional distributions of continuous max-infinitely divisible random fields. Electron. J. Probab. 18 (2013), 21.
[14] Feller, W. An Introduction to Probability Theory and Its Applications. Wiley, 1971.
[15] de Haan, L. A spectral representation for max-stable processes. Ann. Probab. 12, 4 (1984), 1194–1204.
[16] Hüsler, J., and Reiss, R.-D. Maxima of normal random vectors: Between independence and complete dependence. Statistics and Probability Letters 7, 4 (1989), 283–286.
[17] Kabluchko, Z., Schlather, M., and de Haan, L. Stationary max-stable fields associated to negative definite functions. Ann. Probab. 37, 5 (2009), 2042–2065.
[18] Liu, Z., Blanchet, J. H., Dieker, A. B., and Mikosch, T. Optimal exact simulation of max-stable and related random fields. ArXiv e-prints (2016).
[19] Mandelbrot, B. B., and Van Ness, J. W. Fractional Brownian motions, fractional noises and applications. SIAM Review 10, 4 (1968), 422–437.
[20] Oesting, M., Kabluchko, Z., and Schlather, M. Simulation of Brown–Resnick processes. Extremes 15, 1 (2012), 89–107.


[21] Resnick, S. Extreme Values, Regular Variation, and Point Processes. Applied Probability. Springer-Verlag, 1987.
[22] Resnick, S. I., and Roy, R. Random usc functions, max-stable processes and continuous choice. Ann. Appl. Probab. 1, 2 (1991), 267–292.
[23] Samorodnitsky, G., and Taqqu, M. Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Stochastic Modeling Series. Chapman & Hall, 1994.
[24] Stein, M. L. Fast and exact simulation of fractional Brownian surfaces. Journal of Computational and Graphical Statistics 11, 3 (2002), 587–599.
