
THE MEAN SQUARE ERROR OF PREDICTION IN THE CHAIN LADDER RESERVING METHOD

(MACK AND MURPHY REVISITED)

BY

MARKUS BUCHWALDER, HANS BÜHLMANN, MICHAEL MERZ

AND MARIO V. WÜTHRICH

ABSTRACT

We revisit the famous Mack formula [2], which gives an estimate for the mean square error of prediction MSEP of the chain ladder claims reserving method: We define a time series model for the chain ladder method. In this time series framework we give an approach for the estimation of the conditional MSEP. It turns out that our approach leads to results that differ from the Mack formula. But we also see that our derivation leads to the same formulas for the MSEP estimate as the ones given in Murphy [4]. We discuss the differences and similarities of these derivations.

1. MOTIVATION

In this article we revisit the famous Mack formula [2] to estimate the mean square error of prediction MSEP of the chain ladder method. The chain ladder method has to be understood as an algorithm to estimate claims reserves (under certain homogeneity assumptions). Mack [2] was the first to give a probabilistic background to the chain ladder method which allowed for the derivation of MSEP estimates for the claims reserves. We formulate a time series model for the chain ladder method. Within this time series framework we discuss different approaches for the estimation of the MSEP (see Subsection 4.1.2). These different approaches contain conditional and unconditional views of the problem. In the present work we concentrate on the conditional view, i.e. we give formulas and estimates for the MSEP in the conditional approach. Our (conditional) approach leads to the same MSEP estimates as another derivation given by Murphy [4] (in Subsection 4.2 we discuss and compare our derivation to Murphy's derivation). Moreover, we see that the Mack formula [2] is a linear approximation to our result.

We want to emphasize that different approaches to the estimation of the MSEP are acceptable if they are correct in a mathematical sense. They describe the problem from different angles (conditional or unconditional probability laws). The expert in practice should decide which is the appropriate view for his problem.

Astin Bulletin 36 (2), 521-542. doi: 10.2143/AST.36.2.2017933 © 2006 by Astin Bulletin. All rights reserved.


1.1. The development trapezoid

Our claims data have the following structure:


In this trapezoid the variables

k, 1 ≤ k ≤ K, refer to accident years (rows),
j, 1 ≤ j ≤ J, refer to development years (columns).

Usually we have observations in D_K := {Y_{k,j} | 1 ≤ k ≤ K and 1 ≤ j ≤ min{J, K − k + 1}} and we need to estimate the distributions in the complement D_K^c. For K > J the set D_K yields a trapezoid and for K = J we get a triangle of the observations.

The entries Y_{k,j} (as well as their partial sums X_{k,j}) may be interpreted

a) on a paid claims basis,
b) on an occurred claims basis.

Henceforth,

$$X_{k,j} := \sum_{l=1}^{j} Y_{k,l} \qquad (1.1)$$


denotes the cumulative payments of accident year k up to development period j, or the claims incurred for accident year k reported up to reporting year j, respectively.


Interpretation a): Y_{k,j} are the non-cumulative payments for accident year k in development year j; X_{k,j} are the cumulative payments for accident year k up to development year j.

Interpretation b): Y_{k,j} is the change of incurred loss of accident year k in development year j; X_{k,j} is the incurred loss of accident year k up to development year j.

For our exposition we use interpretation a) in the sequel. The reader can easily translate it into interpretation b).

Whereas X_{k,j} (1 ≤ j ≤ J) refers to cumulative payments with ultimate loss X_{k,J} for accident year k, it is sometimes useful to consider the incremental payments in accounting years (diagonals):

$$Y_t := \sum_{\substack{k+j = t+1 \\ 1 \le j \le J}} Y_{k,j} \qquad \text{(sum over the } t\text{-diagonal)} \qquad (1.2)$$

Y_t then stands for the payments in accounting year t (1 ≤ t ≤ K + J − 1).

The Problems:

We have observed D_K, or all t-diagonals of Y_t with 1 ≤ t ≤ K, respectively. Intuitively this means that we are at time K.

• Estimate the reserves R_k = X_{k,J} − X_{k,K−k+1} for all accident years k.

• Predict Y_t for all future t-diagonals for K < t ≤ K + J − 1.

The estimation of the claims reserves R_k is the classical (actuarial) reserving problem studied in every non-life insurance company. The prediction of Y_t for future diagonals yields the cashflow associated with the reserving problem, which is especially helpful for economic considerations of the reserves and the valuation portfolio constructions (see [1]).

Remarks 1.1.

• For each future t-diagonal we have to predict in formula (1.2) not only Y_t but each of its summands Y_{k,j} with k + j = t + 1.

• This also allows, by linearity, the prediction of all missing X_{k,j} (k + j > K + 1), in particular the ultimate X_{k,J} and the reserve R_k.
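To make the notation concrete, the following small sketch (ours, not part of the paper; the names and toy numbers are purely illustrative) builds the cumulative payments X_{k,j} from the incremental payments Y_{k,j} as in (1.1), sums an accounting-year diagonal as in (1.2), and evaluates a reserve R_k once predicted values are available.

```python
# Illustrative sketch only (not from the paper): a triangle stored as a dict
# {(k, j): value} with accident years k = 1..K and development years j = 1..J.

def cumulative_from_incremental(Y, K, J):
    """X_{k,j} = sum_{l=1}^{j} Y_{k,l}, see (1.1); only observed cells j <= K-k+1."""
    X = {}
    for k in range(1, K + 1):
        total = 0.0
        for j in range(1, min(J, K - k + 1) + 1):
            total += Y[(k, j)]
            X[(k, j)] = total
    return X

def accounting_year_diagonal(Y, t, J):
    """Y_t = sum of Y_{k,j} with k + j = t + 1 and 1 <= j <= J, see (1.2)."""
    return sum(v for (k, j), v in Y.items() if k + j == t + 1 and 1 <= j <= J)

def reserve(X_pred, k, K, J):
    """R_k = X_{k,J} - X_{k,K-k+1}; X_pred must contain the predicted ultimate."""
    return X_pred[(k, J)] - X_pred[(k, K - k + 1)]

# toy example with K = J = 3 (numbers invented)
Y = {(1, 1): 100.0, (1, 2): 50.0, (1, 3): 10.0,
     (2, 1): 110.0, (2, 2): 60.0,
     (3, 1): 120.0}
X = cumulative_from_incremental(Y, K=3, J=3)
print(X[(1, 3)], accounting_year_diagonal(Y, t=3, J=3))
```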


Notation

• We use ˆ for both "prediction" and "estimation" to simplify the notation. The interpretation of its meaning should be clear from the context.

• We use capital letters Y_{k,j} (and X_{k,j}) to denote both the random variable as well as its realisations. The meaning of the symbol must be interpreted from the context.

2. THE CHAIN LADDER RESERVING MODEL AND THE CORRESPONDING CANONICAL MODEL

The chain ladder method is based on cumulative payments X_{k,j}. From the cumulative estimates we can easily derive estimates for Y_{k,j}, as we see below.

2.1. Chain ladder method

The algorithmic definition of the chain-ladder method reads as follows:

1. There are constants f_l (l = 1, …, J − 1) such that for all accident years k the future cumulative payments X_{k,j} (j > k*) are estimated by

$$\hat{X}_{k,j} := X_{k,k^*} \cdot \hat{f}_{k^*} \cdot \hat{f}_{k^*+1} \cdots \hat{f}_{j-1}, \qquad (2.1)$$

where k* = k*(k) = K − k + 1 is the last observation for accident year k on the K-diagonal.

2. The chain ladder factors f_l (age-to-age factors) are estimated by

$$\hat{f}_l := \frac{\sum_{k=1}^{K-l} X_{k,l+1}}{S_l}, \qquad (2.2)$$

where

$$S_l := \sum_{k=1}^{K-l} X_{k,l}. \qquad (2.3)$$

The actuarial literature often explains the chain ladder method (as above) purely as a computational algorithm and leaves open the question of which probabilistic model would lead to this procedure.

It is Mack's merit [2] that he was the first to give an answer to this question.

Remark. Formula (2.2) gives one possibility for the estimation of the chain ladder factors f_l. Of course there are other possibilities to estimate f_l. We do not further discuss these in this paper (see also Remarks 2.1).
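For concreteness, a minimal Python sketch (ours) of the algorithm (2.1)-(2.3): it estimates the age-to-age factors from an observed triangle, stored as a list of rows of observed cumulative payments, and projects every accident year to its ultimate. The toy numbers are assumptions for illustration only.

```python
# Sketch of the chain ladder algorithm (2.1)-(2.3); `triangle[k-1]` holds the
# observed cumulative payments of accident year k.

def chain_ladder_factors(triangle):
    """f_hat[l] estimates f_l via (2.2)-(2.3): sum of column l+1 over sum of
    column l, both restricted to accident years with column l+1 observed."""
    J = max(len(row) for row in triangle)
    f_hat = {}
    for l in range(1, J):
        num = sum(row[l] for row in triangle if len(row) > l)      # X_{k,l+1}
        den = sum(row[l - 1] for row in triangle if len(row) > l)  # S_l
        f_hat[l] = num / den
    return f_hat

def project_ultimates(triangle, f_hat):
    """X_hat_{k,j} = X_{k,k*} * f_hat_{k*} * ... * f_hat_{J-1}, see (2.1)."""
    J = max(len(row) for row in triangle)
    ultimates = []
    for row in triangle:
        value = row[-1]                    # last observed value X_{k,k*}
        for l in range(len(row), J):       # development periods k*, ..., J-1
            value *= f_hat[l]
        ultimates.append(value)
    return ultimates

# toy triangle (3 accident years, 3 development years)
triangle = [[1000.0, 1800.0, 1900.0],
            [1100.0, 2000.0],
            [1200.0]]
f_hat = chain_ladder_factors(triangle)
print(f_hat, project_ultimates(triangle, f_hat))
```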


2.2. Mack’s conditions

(M1) Accident years k are independent.

(M2) The X_{k,j} (j = 1, 2, …, J) form a Markov chain for each k (1 ≤ k ≤ K).

(M3) E[X_{k,j} | X_{k,j−1}] = f_{j−1} · X_{k,j−1}.

It is quite natural (but not imperative) to add a condition about conditional variances:

(M4) Var[X_{k,j} | X_{k,j−1}] = s²_{j−1} · X_{k,j−1}.

Remarks 2.1.

• One could also write

$$\text{(M4')} \qquad \operatorname{Var}\!\left[\left.\frac{X_{k,j}}{X_{k,j-1}}\,\right|\,X_{k,j-1}\right] = \frac{s_{j-1}^2}{X_{k,j-1}}.$$

Formula (M4') is known as the Bühlmann-Straub condition in credibility theory. One may therefore interpret X_{k,j−1} as a (conditional) volume measure.

• Assumption (M4) is not imperative and could be generalized to other powers X^g_{k,j−1} (see [3]). However, if we choose g = 1 as in (M4), then the estimator f̂_l (formula (2.2)) has minimal variance, given S_l (see [2] or [5]). If g ≠ 1 one uses estimates for f_l other than (2.2).

• It is also worth mentioning that the chain ladder estimate is only based on the information in the upper triangle. Of course, in practice, we have additional (external) information. In this case our estimate should not only be based on D_K.

It is easy to check that under conditions (M1)-(M3) we have for j > k* (see Mack [2])

$$E[X_{k,j} \,|\, \mathcal{D}_K] = f_{k^*} \cdot f_{k^*+1} \cdots f_{j-1} \cdot X_{k,k^*} = E[X_{k,j} \,|\, X_{k,k^*}],$$

and (CL)

$$E[X_{k,k^*} \cdot \hat{f}_{k^*} \cdots \hat{f}_{j-1} \,|\, X_{k,k^*}] = E[X_{k,j} \,|\, X_{k,k^*}].$$

This means that we have unbiased estimators for the conditional expectations.

2.3. The chain ladder consistent time series model

We choose here a different route. We specify a time series model which yields the chain ladder method. This route seems very promising as, besides allowing us to calculate means and variances, it also defines how to simulate development trapezoids (or triangles).


Different accident years are independent, and for 1 ≤ j ≤ J and 0 ≤ k ≤ K we have

$$X_{k,j} = f_{j-1} \cdot X_{k,j-1} + s_{j-1} \cdot \sqrt{X_{k,j-1}} \cdot e_{k,j}, \qquad \text{(T1)}$$

with s_{j−1} > 0 and e_{k,j} independent with E[e_{k,j}] = 0 and E[e²_{k,j}] = 1. (T2)

Remarks 2.2.

• This time series model coincides with the model studied in Murphy [4], Model IV (weighted average development factor model). We discuss Murphy's results below.

• X_{k,0} (1 ≤ k ≤ K) can be interpreted as the (deterministic) premium P_k earned for accident year k. In an enlarged trapezoid this allows the simulation of the first year development figures.

• Theoretically, our process could have negative values for the cumulative payments X_{k,j}. To avoid this problem, we could reformulate the definition of e_{k,j} such that its distribution is conditionally, given X_{k,j−1}, centered with variance 1 and such that X_{k,j} is positive.

• The time series model is particularly useful for the derivation of the (conditional) estimation error. It reflects the mechanism of generating sets of "other possible" observations: Having an observation X_{k,j−1}, (T1) tells us how to model/generate the chain-ladder factors f̂_{j−1} (see also (T1') and (4.19), below).

It is easy to check that (T1) and (T2) imply again that the chain ladder prediction formula (CL) holds. Also one sees that (T1) and (T2) imply Mack's conditions (M1)-(M4).
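As an illustration of the simulation aspect, the sketch below (ours) generates one development trapezoid from (T1)-(T2). The errors e_{k,j} are drawn standard normal, which is an additional assumption: (T2) only requires mean 0 and variance 1. All parameter values are invented for the example.

```python
import random

def simulate_trapezoid(X0, f, s, J, seed=0):
    """Simulate X_{k,1}, ..., X_{k,J} for each accident year via
    X_{k,j} = f_{j-1}*X_{k,j-1} + s_{j-1}*sqrt(X_{k,j-1})*e_{k,j}   (T1),
    with e_{k,j} i.i.d. standard normal (an assumption; (T2) only requires
    mean 0 and variance 1).  X0[k] plays the role of X_{k,0}."""
    rng = random.Random(seed)
    rows = []
    for x_prev in X0:
        row = []
        for j in range(1, J + 1):
            e = rng.gauss(0.0, 1.0)
            x = f[j - 1] * x_prev + s[j - 1] * max(x_prev, 0.0) ** 0.5 * e
            row.append(x)
            x_prev = x
        rows.append(row)
    return rows

# toy parameters f_0, ..., f_{J-1} and s_0, ..., s_{J-1} for J = 4 (invented)
f = [2.0, 1.5, 1.2, 1.05]
s = [20.0, 10.0, 5.0, 2.0]
X0 = [1000.0, 1100.0, 1200.0]          # e.g. earned premiums P_k
print(simulate_trapezoid(X0, f, s, J=4))
```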

3. ESTIMATION OF MODEL PARAMETERS

3.1. Estimators for f_j and s²_j

Our time series model specified by assumptions (T1) and (T2) has as parameters

$$f_1, f_2, \ldots, f_{J-1}, \qquad s_1^2, s_2^2, \ldots, s_{J-1}^2, \qquad (3.1)$$

which we need to estimate. To estimate the chain ladder factors f_l we use the standard formulae (2.2)-(2.3). The appropriate estimator for the variance s²_{j−1} is given by (2 ≤ j ≤ J)

$$\hat{s}_{j-1}^2 := \frac{1}{K-j} \cdot \sum_{k=1}^{K-j+1} X_{k,j-1} \cdot \left( \frac{X_{k,j}}{X_{k,j-1}} - \hat{f}_{j-1} \right)^2. \qquad (3.2)$$


Remarks 3.1.

• Estimator (2.2) is minimum variance among linear estimators.

• Estimator (3.2) is minimum variance for normal ek, j .

• If we have enough data (i.e. K > J), we are able to estimate s²_{J−1} with (3.2), otherwise there are several possibilities to estimate s²_{J−1} (see e.g. [2]).
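A small sketch (ours) of the variance estimator (3.2), reusing the triangle layout of the earlier sketch; when the degrees of freedom K − j vanish (the triangle case K = J with j = J), the estimator is left undefined, in line with the remark above.

```python
# Sketch of (2.2)-(2.3) and (3.2); `triangle[k-1]` holds the observed
# cumulative payments of accident year k.

def chain_ladder_factors(triangle):
    J = max(len(row) for row in triangle)
    return {l: sum(r[l] for r in triangle if len(r) > l) /
               sum(r[l - 1] for r in triangle if len(r) > l)
            for l in range(1, J)}

def sigma2_estimates(triangle, f_hat):
    """s2_hat[j-1] estimates s^2_{j-1} via (3.2):
    1/(K-j) * sum_k X_{k,j-1} * (X_{k,j}/X_{k,j-1} - f_hat_{j-1})^2,
    summing over the accident years with X_{k,j} observed."""
    s2_hat = {}
    J = max(len(row) for row in triangle)
    for j in range(2, J + 1):
        rows = [r for r in triangle if len(r) >= j]   # X_{k,j} observed
        dof = len(rows) - 1                           # = K - j
        if dof <= 0:
            s2_hat[j - 1] = None                      # needs extrapolation, see [2]
            continue
        s2_hat[j - 1] = sum(
            r[j - 2] * (r[j - 1] / r[j - 2] - f_hat[j - 1]) ** 2 for r in rows
        ) / dof
    return s2_hat

triangle = [[1000.0, 1800.0, 1900.0],
            [1100.0, 2000.0],
            [1200.0]]
f_hat = chain_ladder_factors(triangle)
print(sigma2_estimates(triangle, f_hat))
```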

3.2. Statistical meaning of the estimators f̂_j and ŝ²_j

Generally speaking our model has quite an excessive number of parameters (2J − 2) and our estimation resides on a quite small number of observations, J · (2K − J + 1)/2, in the "worst case" of the triangle J · (J + 1)/2 (see Table 1).

TABLE 1

PARAMETERS VERSUS OBSERVATIONS FOR THE TRIANGLE

J      Parameters   Observations
3      4            6
4      6            10
5      8            15
6      10           21
…      …            …
10     18           55

This seems to indicate that the traditional chain ladder does "overparameterize". One obvious remedy is to put s²_{l−1} = s² for all l. One might even look for a "smoothing procedure" for the age-to-age factors, e.g. a parametric curve for the f_j depending on fewer parameters. We would also like to mention that the chain ladder method makes rather strong homogeneity assumptions across all accident years. Practical data do often have outliers, trends, cycles, etc. which need to be smoothed.

The statistical questions about overparameterization and about checking the real data against the chain-ladder axioms go beyond the scope of this paper and are not further pursued here. We therefore address these challenges as open problems to the reader.

4. MEAN SQUARE ERROR OF PREDICTION

At time t = K we have realizations for the cumulative payments X_{k,j} given by the upper trapezoid D_K = {X_{k,j} | 1 ≤ k ≤ K and 1 ≤ j ≤ min{J, K − k + 1}}. Moreover, given D_K, the chain ladder estimators

$$\hat{X}_{k,j} = X_{k,k^*} \cdot \hat{f}_{k^*} \cdot \hat{f}_{k^*+1} \cdots \hat{f}_{j-1} \qquad (4.1)$$

denote the estimators of the future cumulative payments X_{k,j} (1 ≤ k ≤ K and K − k + 1 < j ≤ J). Formula (T1) implies for X_{k,j} with K − k + 1 < j ≤ J

$$E[X_{k,j} \,|\, \mathcal{D}_K] = X_{k,k^*} \cdot f_{k^*} \cdot f_{k^*+1} \cdots f_{j-1}. \qquad (4.2)$$

4.1. Prediction error for single accident years

As usual the prediction error of X̂_{k,j} can be decomposed into two parts: a) stochastic error (process variance) and b) estimation error (cf. Mack [2], Section 3):

$$E\Big[\big(\hat{X}_{k,j} - X_{k,j}\big)^2 \,\Big|\, \mathcal{D}_K\Big] = \underbrace{E\Big[\big(X_{k,j} - E[X_{k,j}|\mathcal{D}_K]\big)^2 \,\Big|\, \mathcal{D}_K\Big]}_{\text{process variance}} + \underbrace{\big(\hat{X}_{k,j} - E[X_{k,j}|\mathcal{D}_K]\big)^2}_{\text{estimation error}}. \qquad (4.3)$$

The process variance Var(X_{k,j} | D_K) = E[(X_{k,j} − E[X_{k,j}|D_K])² | D_K] originates from the stochastic movement of the process, whereas the estimation error reflects the uncertainty in the estimation of the expectation (chain ladder parameters). We derive estimates for both the process variance and the estimation error. For the estimates of the estimation error there are different approaches. These are described in Subsection 4.1.2, below. The reader who wants to find quickly the relevant formulae for the different errors should use Table 2 for the corresponding numbers of formulae:


TABLE 2

MEAN SQUARE ERROR OF PREDICTION FOR DIFFERENT CONSIDERATIONS

                                            X             Y
Single accident years                       Result 4.1    Result 4.4
Ultimate of aggregated accident years       Result 4.2
Accounting years                            Result 4.3    Result 4.4

For the definition of the symbols G²_{k,j} and D²_{k,j} in these formulae see (4.8) and (4.21), respectively.

4.1.1. (Conditional) Process variance

Using (4.2) we obtain for the (conditional) process variance in formula (4.3): Choose 1 ≤ k ≤ K and K − k + 1 < j ≤ J


$$E\big[(X_{k,j} - E[X_{k,j}|\mathcal{D}_K])^2 \,\big|\, \mathcal{D}_K\big] = E\big(X_{k,j}^2 \,\big|\, \mathcal{D}_K\big) - X_{k,K-k+1}^2 \cdot f_{K-k+1}^2 \cdots f_{j-1}^2. \qquad (4.4)$$

Moreover, using (T1) and (T2) we have

$$\begin{aligned}
E\big(X_{k,j}^2 \,\big|\, \mathcal{D}_K\big) &= f_{j-1}^2 \cdot E\big(X_{k,j-1}^2 \,\big|\, \mathcal{D}_K\big) + s_{j-1}^2 \cdot E\big(X_{k,j-1} \cdot e_{k,j}^2 \,\big|\, \mathcal{D}_K\big) \\
&= f_{j-1}^2 \cdot f_{j-2}^2 \cdots f_{K-k+1}^2 \cdot X_{k,K-k+1}^2 \\
&\quad + s_{j-1}^2 \cdot f_{j-2} \cdot f_{j-3} \cdots f_{K-k+1} \cdot X_{k,K-k+1} \\
&\quad + f_{j-1}^2 \cdot s_{j-2}^2 \cdot f_{j-3} \cdots f_{K-k+1} \cdot X_{k,K-k+1} \\
&\quad + \ldots + f_{j-1}^2 \cdot f_{j-2}^2 \cdots f_{K-k+2}^2 \cdot s_{K-k+1}^2 \cdot X_{k,K-k+1}.
\end{aligned} \qquad (4.5)$$

Hence, from (4.5) we obtain for the (conditional) process variance:

$$\operatorname{Var}\big(X_{k,j} \,\big|\, \mathcal{D}_K\big) = E\big[(X_{k,j} - E[X_{k,j}|\mathcal{D}_K])^2 \,\big|\, \mathcal{D}_K\big] = X_{k,K-k+1} \cdot \sum_{l=K-k+1}^{j-1} s_l^2 \cdot \prod_{m=K-k+1}^{l-1} f_m \cdot \prod_{m=l+1}^{j-1} f_m^2. \qquad (4.6)$$

Finally, using (CL) the (conditional) process variance can be rewritten in the form:

$$E\big[(X_{k,j} - E[X_{k,j}|\mathcal{D}_K])^2 \,\big|\, \mathcal{D}_K\big] = \big(E[X_{k,j}|\mathcal{D}_K]\big)^2 \cdot \sum_{l=K-k+1}^{j-1} \frac{s_l^2/f_l^2}{E[X_{k,l}|\mathcal{D}_K]}. \qquad (4.7)$$

Replacing E[X_{k,j}|D_K], E[X_{k,l}|D_K] and the parameters s²_l, f_l by their estimators leads to the following estimator of the (conditional) process variance

$$\widehat{E}\big[(X_{k,j} - E[X_{k,j}|\mathcal{D}_K])^2 \,\big|\, \mathcal{D}_K\big] := \hat{X}_{k,j}^2 \cdot \sum_{l=K-k+1}^{j-1} \frac{\hat{s}_l^2/\hat{f}_l^2}{\hat{X}_{k,l}} =: G_{k,j}^2. \qquad (4.8)$$

G²_{k,j} can be rewritten in a recursive form:

$$G_{k,j}^2 = G_{k,j-1}^2 \cdot \hat{f}_{j-1}^2 + \hat{s}_{j-1}^2 \cdot \hat{X}_{k,j-1}. \qquad (4.9)$$

Using (4.8), we obtain for the non-cumulative payments Y_{k,j} the following estimator of the (conditional) process variance

$$\widehat{E}\big[(Y_{k,j} - E[Y_{k,j}|\mathcal{D}_K])^2 \,\big|\, \mathcal{D}_K\big] := G_{k,j}^2 + G_{k,j-1}^2 \cdot (1 - 2\hat{f}_{j-1}) = G_{k,j-1}^2 \cdot (\hat{f}_{j-1} - 1)^2 + \hat{s}_{j-1}^2 \cdot \hat{X}_{k,j-1}. \qquad (4.10)$$
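A sketch (ours) of the process variance estimator for a single accident year: G²_{k,j} is built up with the recursion (4.9) alongside the chain ladder projection (2.1); the estimated factors and variances are assumed to be given, and the numbers below are invented.

```python
def process_variance_G2(x_last, kstar, J, f_hat, s2_hat):
    """Return {j: G^2_{k,j}} for j = kstar..J via the recursion (4.9).

    x_last  -- last observed cumulative payment X_{k,k*}
    kstar   -- last observed development period k* = K - k + 1
    f_hat   -- {l: f_hat_l}, s2_hat -- {l: s2_hat_l}, for l = 1..J-1
    """
    G2 = {kstar: 0.0}              # no process variance for the observed cell
    x_hat = {kstar: x_last}
    for j in range(kstar + 1, J + 1):
        G2[j] = G2[j - 1] * f_hat[j - 1] ** 2 + s2_hat[j - 1] * x_hat[j - 1]
        x_hat[j] = x_hat[j - 1] * f_hat[j - 1]   # chain ladder projection (2.1)
    return G2

# toy inputs (assumed, for illustration only)
f_hat = {1: 1.8, 2: 1.1}
s2_hat = {1: 4.0, 2: 1.0}
print(process_variance_G2(x_last=1200.0, kstar=1, J=3, f_hat=f_hat, s2_hat=s2_hat))
```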


4.1.2. (Conditional) Estimation error

Using (4.1) and (4.2) leads to the following formula for the estimation error:

$$\begin{aligned}
\big(\hat{X}_{k,j} - E[X_{k,j}|\mathcal{D}_K]\big)^2 &= X_{k,K-k+1}^2 \cdot \big(\hat{f}_{K-k+1} \cdots \hat{f}_{j-1} - f_{K-k+1} \cdots f_{j-1}\big)^2 \\
&= X_{k,K-k+1}^2 \cdot \left( \prod_{l=K-k+1}^{j-1} \hat{f}_l^2 - 2 \prod_{l=K-k+1}^{j-1} \hat{f}_l\, f_l + \prod_{l=K-k+1}^{j-1} f_l^2 \right).
\end{aligned} \qquad (4.11)$$

The realizations of the estimators f̂_{K−k+1}, …, f̂_{j−1} are known at time t = K, and the "true" chain ladder factors f_{K−k+1}, …, f_{j−1} are unknown (and have to be estimated). In order to determine the (conditional) estimation error we need to determine the volatilities of f̂_j around its true values f_j. These volatilities measure the quality of the estimates f̂_j. We determine the volatilities with the help of resampled observations for f̂_j. There are different approaches to resample our time series, conditional ones and unconditional ones. For j ∈ {1, …, J} we define the σ-fields

$$\mathcal{B}_j = \sigma\big(\{X_{k,l} \in \mathcal{D}_K \,|\, l \le j\}\big). \qquad (4.12)$$

Hence, {B_j}_{j=1,…,J} defines a filtration and f̂_j is B_{j+1}-measurable for all j = 1, …, J − 1. If we fix accident year k for the moment, then in order to determine the volatility in the estimates we need to resample (see right-hand side of (4.11))

$$\hat{f}_{K-k+1}^2 \cdots \hat{f}_{j-1}^2. \qquad (4.13)$$

Define the upper right corner of the observations D_K (with respect to development year K − k + 2)

$$\mathcal{D}_K^O = \mathcal{D}_K \cap \{X_{i,j}\,;\; j > K-k+1,\ i \ge 1\}. \qquad (4.14)$$

Approach 1 (Complete resampling in D_K^O). Calculate

$$E\big[\hat{f}_{K-k+1}^2 \cdots \hat{f}_{j-1}^2 \,\big|\, \mathcal{B}_{K-k+1}\big]. \qquad (4.15)$$

This is the complete averaging over the multidimensional distribution after time K − k + 1. The observed realizations in D_K^O do not play any role in this consideration. Therefore we call this the unconditional version.

Approach 2. Calculate

$$\hat{f}_{K-k+1}^2 \cdots \hat{f}_{j-2}^2 \cdot E\big[\hat{f}_{j-1}^2 \,\big|\, \mathcal{B}_{j-1}\big]. \qquad (4.16)$$

The averaging is only partly done. In this approach one also has the possibility to choose the position at which the averaging should be done. This approach is in some sense similar to the one used in Mack [2] (see also Section 4.2).


Approach 3 (Conditional resampling in D_K^O).

$$E\big[\hat{f}_{K-k+1}^2 \,\big|\, \mathcal{B}_{K-k+1}\big] \cdot E\big[\hat{f}_{K-k+2}^2 \,\big|\, \mathcal{B}_{K-k+2}\big] \cdots E\big[\hat{f}_{j-1}^2 \,\big|\, \mathcal{B}_{j-1}\big]. \qquad (4.17)$$

The averaging is only done over the conditional distributions. Thereby the observed realizations in D_K^O have a direct influence on the estimate since they are used in the conditions. Therefore we call this the conditional version. From a numerical point of view it is important to note that Approach 3 allows for a multiplicative structure of the measure of volatility.

The question of which approach should be chosen is not a mathematical one. It depends on the circumstances which approach should be used for a specific practical problem. Hence, only the practitioner can choose the appropriate approach for his problems and questions.

In the present work we focus on Approach 3, the conditional version. Hence (4.11) is estimated by (see also (4.20), below)

$$X_{k,K-k+1}^2 \cdot \left( \prod_{l=K-k+1}^{j-1} E\big[\hat{f}_l^2 \,\big|\, \mathcal{B}_l\big] - \prod_{l=K-k+1}^{j-1} f_l^2 \right). \qquad (4.18)$$

We therefore resample the observations f̂_{K−k+1}, …, f̂_{j−1}, given the upper trapezoid D_K (see Remarks 2.2). This means that for the determination of an estimator for the conditional estimation error we have to take into account that, given the upper trapezoid D_K, the observations for f̂_j could have been different from the observed values. To account for this source of uncertainty, we generate for 1 ≤ j ≤ J and 0 ≤ k ≤ K, given D_K, a set of "new" observations by the formula

$$Z_{k,j} = f_{j-1} \cdot X_{k,j-1} + s_{j-1} \cdot \sqrt{X_{k,j-1}} \cdot \tilde{e}_{k,j}, \qquad \text{(T1')}$$

with s_{j−1} > 0 and e_{k,j}, ẽ_{k,j} independent and identically distributed (T2')

(cf. (T1) and (T2)). Observe that, given X_{k,j−1}, Z_{k,j} has the same distribution as X_{k,j}. From formula (T1') and (2.2) we obtain the following representation for the resampled estimates of the development factors

resampled estimates of the development factors

,j j k j1 1- -j

j

1

1

-

-f f S,

,

,

k jk

K j

k jk

K j

k jk

K jD

11

11

1

11

1K

$ $= = +

-

=

- +=

- +

-

=

- +

Xs

eZ

X!

!! (1 ≤ j ≤ J) (4.19)

with

j 1- .S ,k jk

K j

11

1

= -

=

- +

X!

Observe that, given Bj – 1, we have fj – 1 =(d )

f DKj – 1 .


Unlike the observations {X_{k,j} | 1 ≤ k ≤ K and 1 ≤ j ≤ min{J, K − k + 1}}, the observations {Z_{k,j} | 1 ≤ k ≤ K and 1 ≤ j ≤ min{J, K − k + 1}} and also the resampled estimates f̂^{D_K}_0, …, f̂^{D_K}_{J−1} are random variables given the upper trapezoid D_K. Furthermore the observations X_{k,j} and the random variables ẽ_{k,j} are unconditionally independent. From (4.19), (T2') and (T2) we see that

1) the estimators f̂^{D_K}_0, …, f̂^{D_K}_{J−1} are conditionally independent w.r.t. D_K,

2) E[f̂^{D_K}_{j−1} | D_K] = f_{j−1} for 1 ≤ j ≤ J, and

3) E[(f̂^{D_K}_{j−1})² | D_K] = f²_{j−1} + s²_{j−1}/S_{j−1} for 1 ≤ j ≤ J.

Hence in Approach 3 (using 1)-3) from above) we can explicitly calculate the conditional estimation error, i.e. the right-hand side of (4.11) is estimated by

$$\begin{aligned}
E\Big[ X_{k,K-k+1}^2 \cdot \big( \hat{f}^{\mathcal{D}_K}_{K-k+1} \cdots \hat{f}^{\mathcal{D}_K}_{j-1} - f_{K-k+1} \cdots f_{j-1} \big)^2 \,\Big|\, \mathcal{D}_K \Big]
&= X_{k,K-k+1}^2 \cdot \left( E\Big[ \prod_{l=K-k+1}^{j-1} \big(\hat{f}^{\mathcal{D}_K}_l\big)^2 \,\Big|\, \mathcal{D}_K \Big] - \prod_{l=K-k+1}^{j-1} f_l^2 \right) \\
&= X_{k,K-k+1}^2 \cdot \left( \prod_{l=K-k+1}^{j-1} \Big( f_l^2 + \frac{s_l^2}{S_l} \Big) - \prod_{l=K-k+1}^{j-1} f_l^2 \right).
\end{aligned} \qquad (4.20)$$

Next, we replace the parameters s²_{K−k+1}, …, s²_{j−1} and f_{K−k+1}, …, f_{j−1} by their estimators, and we obtain the following estimator for the (conditional) estimation error

$$\widehat{E}\Big[ \big( \hat{X}_{k,j} - E[X_{k,j}|\mathcal{D}_K] \big)^2 \,\Big|\, \mathcal{D}_K \Big] := X_{k,K-k+1}^2 \cdot \underbrace{\left( \prod_{l=K-k+1}^{j-1} \Big( \hat{f}_l^2 + \frac{\hat{s}_l^2}{S_l} \Big) - \prod_{l=K-k+1}^{j-1} \hat{f}_l^2 \right)}_{=:\, D_{k,j}^2}. \qquad (4.21)$$

D²_{k,j} can be rewritten in a recursive form:

$$\begin{aligned}
D_{k,j}^2 &= D_{k,j-1}^2 \cdot \hat{f}_{j-1}^2 + \frac{\hat{s}_{j-1}^2}{S_{j-1}} \cdot \prod_{l=K-k+1}^{j-2} \left( \hat{f}_l^2 + \frac{\hat{s}_l^2}{S_l} \right) \\
&= D_{k,j-1}^2 \cdot \left( \hat{f}_{j-1}^2 + \frac{\hat{s}_{j-1}^2}{S_{j-1}} \right) + \frac{\hat{s}_{j-1}^2}{S_{j-1}} \cdot \prod_{l=K-k+1}^{j-2} \hat{f}_l^2.
\end{aligned} \qquad (4.22)$$
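The estimation error term of Approach 3 can be evaluated directly from the product form in (4.21). The sketch below (ours) does exactly that; S_l denotes the column sums (2.3), and all numerical inputs are assumptions made for the illustration.

```python
def estimation_error_D2(kstar, j, f_hat, s2_hat, S):
    """D^2_{k,j} of (4.21): prod_{l=k*}^{j-1}(f_hat_l^2 + s2_hat_l/S_l)
    minus prod_{l=k*}^{j-1} f_hat_l^2   (Approach 3, conditional resampling)."""
    prod_with_noise, prod_plain = 1.0, 1.0
    for l in range(kstar, j):
        prod_with_noise *= f_hat[l] ** 2 + s2_hat[l] / S[l]
        prod_plain *= f_hat[l] ** 2
    return prod_with_noise - prod_plain

# toy inputs (assumed): S_l = sum of column l over the years entering (2.2)
f_hat = {1: 1.8, 2: 1.1}
s2_hat = {1: 4.0, 2: 1.0}
S = {1: 2100.0, 2: 1800.0}
D2 = estimation_error_D2(kstar=1, j=3, f_hat=f_hat, s2_hat=s2_hat, S=S)
x_last = 1200.0
print(D2, x_last ** 2 * D2)   # estimation error of a single year is X_{k,k*}^2 * D^2_{k,j}
```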


Analogously we define for non-cumulative payments the following estimator:

$$\hat{Y}_{k,j} := \hat{X}_{k,j} - \hat{X}_{k,j-1}. \qquad (4.23)$$

Hence the (conditional) estimation error is estimated by (in the 3rd step we use (4.22))

$$\begin{aligned}
\widehat{E}\Big[\big(\hat{Y}_{k,j} - E[Y_{k,j}|\mathcal{D}_K]\big)^2 \,\Big|\, \mathcal{D}_K\Big]
&:= X_{k,K-k+1}^2 \cdot \widehat{E}\Big[\Big(\hat{f}^{\mathcal{D}_K}_{K-k+1}\cdots\hat{f}^{\mathcal{D}_K}_{j-2}\cdot\big(\hat{f}^{\mathcal{D}_K}_{j-1}-1\big) - f_{K-k+1}\cdots f_{j-2}\cdot\big(f_{j-1}-1\big)\Big)^2 \,\Big|\, \mathcal{D}_K\Big] \\
&= X_{k,K-k+1}^2 \cdot \left[ \prod_{l=K-k+1}^{j-2}\Big(\hat{f}_l^2+\frac{\hat{s}_l^2}{S_l}\Big)\cdot\Big(\big(\hat{f}_{j-1}-1\big)^2+\frac{\hat{s}_{j-1}^2}{S_{j-1}}\Big) - \prod_{l=K-k+1}^{j-2}\hat{f}_l^2\cdot\big(\hat{f}_{j-1}-1\big)^2 \right] \\
&= X_{k,K-k+1}^2 \cdot \left[ D_{k,j-1}^2\cdot\big(\hat{f}_{j-1}-1\big)^2 + \frac{\hat{s}_{j-1}^2}{S_{j-1}}\cdot\prod_{l=K-k+1}^{j-2}\Big(\hat{f}_l^2+\frac{\hat{s}_l^2}{S_l}\Big) \right] \\
&= X_{k,K-k+1}^2 \cdot \Big[ D_{k,j}^2 + D_{k,j-1}^2 \cdot \big(1 - 2\hat{f}_{j-1}\big) \Big].
\end{aligned} \qquad (4.24)$$

Remark: The (conditional) estimation errors in the cumulative case and the incremental case have exactly the same structure (compare the 3rd line of (4.24) with (4.22)).

4.2. Result and comparison to the Mack [2] formula and the result by Murphy [4]

From our results we obtain the following estimator for the (conditional) prediction error of a single accident year (cf. (4.8) and (4.21)):

Result 4.1. (MSEP for single accident years) Under Assumptions (T1), (T2), (T1'), (T2') for the estimation of the process variance and the estimation error, respectively, we have the following estimator for the (conditional) mean square prediction error (4.3) of a single accident year

$$\widehat{E}\Big[\big(\hat{X}_{k,j} - X_{k,j}\big)^2 \,\Big|\, \mathcal{D}_K\Big] := G_{k,j}^2 + X_{k,K-k+1}^2 \cdot D_{k,j}^2 = \underbrace{\hat{X}_{k,j}^2 \cdot \sum_{l=K-k+1}^{j-1} \frac{\hat{s}_l^2/\hat{f}_l^2}{\hat{X}_{k,l}}}_{\text{process variance}} + \underbrace{X_{k,K-k+1}^2 \cdot \left( \prod_{l=K-k+1}^{j-1}\Big(\hat{f}_l^2 + \frac{\hat{s}_l^2}{S_l}\Big) - \prod_{l=K-k+1}^{j-1}\hat{f}_l^2 \right)}_{\text{estimation error}}. \qquad (4.25)$$

The recursive form of the estimator of the prediction error is given by (cf. (4.9) and (4.22))


$$\widehat{E}\Big[\big(\hat{X}_{k,j} - X_{k,j}\big)^2 \,\Big|\, \mathcal{D}_K\Big] = G_{k,j-1}^2 \cdot \hat{f}_{j-1}^2 + \hat{s}_{j-1}^2 \cdot \hat{X}_{k,j-1} + X_{k,K-k+1}^2 \cdot \left[ D_{k,j-1}^2 \cdot \Big( \hat{f}_{j-1}^2 + \frac{\hat{s}_{j-1}^2}{S_{j-1}} \Big) + \frac{\hat{s}_{j-1}^2}{S_{j-1}} \cdot \prod_{l=K-k+1}^{j-2} \hat{f}_l^2 \right]. \qquad (4.26)$$

G²_{k,j−1} and D²_{k,j−1} were defined in (4.8) and (4.21), respectively.
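Combining the two pieces, a sketch (ours) of the single accident year estimator (4.25): process variance G²_{k,J} plus estimation error X²_{k,k*} · D²_{k,J}. The small helper computations repeat the recursion (4.9) and the product (4.21) so that the snippet stands alone; the numerical inputs are invented.

```python
import math

def msep_single_year(x_last, kstar, J, f_hat, s2_hat, S):
    """Estimator (4.25) for accident year k: G^2_{k,J} + X_{k,k*}^2 * D^2_{k,J}."""
    # process variance via recursion (4.9)
    G2, x_hat = 0.0, x_last
    # estimation error via the product form (4.21)
    prod_noise, prod_plain = 1.0, 1.0
    for l in range(kstar, J):
        G2 = G2 * f_hat[l] ** 2 + s2_hat[l] * x_hat
        x_hat *= f_hat[l]
        prod_noise *= f_hat[l] ** 2 + s2_hat[l] / S[l]
        prod_plain *= f_hat[l] ** 2
    D2 = prod_noise - prod_plain
    return G2 + x_last ** 2 * D2

# toy inputs (assumed)
f_hat = {1: 1.8, 2: 1.1}
s2_hat = {1: 4.0, 2: 1.0}
S = {1: 2100.0, 2: 1800.0}
msep = msep_single_year(x_last=1200.0, kstar=1, J=3, f_hat=f_hat, s2_hat=s2_hat, S=S)
print(msep, math.sqrt(msep))   # prediction standard error = sqrt of the MSEP estimate
```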

Comparison to the Mack formula [2].

Formula (4.25) looks very similar to the Mack formula [2]. But, at first sight surprisingly, the formula for the estimation error given in Mack [2] differs from our formula (4.22). In [2] the recursive formula for D²_{k,j} only considers the following terms

$$D_{k,j}^2 = D_{k,j-1}^2 \cdot \hat{f}_{j-1}^2 + \frac{\hat{s}_{j-1}^2}{S_{j-1}} \cdot \prod_{l=K-k+1}^{j-2} \hat{f}_l^2, \qquad (4.27)$$

whereas we consider for the second term the following expression (cf. (4.22))

$$\frac{\hat{s}_{j-1}^2}{S_{j-1}} \cdot \prod_{l=K-k+1}^{j-2} \left( \hat{f}_l^2 + \frac{\hat{s}_l^2}{S_l} \right).$$

The difference comes from the fact that Mack calculates the estimation error for a version of Approach 2. Mack [2] sums up the following expressions (see [2], p. 219)

$$E\Big[\big( f_{K+1-k} \cdots f_{j-1} \cdot (\hat{f}_j - f_j) \cdot f_{j+1} \cdots f_{J-1}\big)^2 \,\Big|\, \mathcal{B}_j\Big] = f_{K+1-k}^2 \cdots f_{j-1}^2 \cdot \frac{s_j^2}{S_j} \cdot f_{j+1}^2 \cdots f_{J-1}^2, \qquad (4.28)$$

for K + 1 − k ≤ j ≤ J − 1. Formula (4.28) is similar to Approach 2 (see (4.16)) if we smartly choose the position j at which we want to average.

In fact, it can be shown that the Mack formula is a linear approximation to our result:

$$\begin{aligned}
E\Big[ X_{k,K-k+1}^2 \cdot \big( \hat{f}^{\mathcal{D}_K}_{K-k+1} \cdots \hat{f}^{\mathcal{D}_K}_{j-1} - f_{K-k+1} \cdots f_{j-1} \big)^2 \,\Big|\, \mathcal{D}_K \Big]
&= X_{k,K-k+1}^2 \cdot \left( \prod_{l=K-k+1}^{j-1} \Big( f_l^2 + \frac{s_l^2}{S_l} \Big) - \prod_{l=K-k+1}^{j-1} f_l^2 \right) \qquad (4.29) \\
&= X_{k,K-k+1}^2 \cdot \prod_{l=K-k+1}^{j-1} f_l^2 \cdot \left( \prod_{l=K-k+1}^{j-1} \Big( 1 + \frac{s_l^2/S_l}{f_l^2} \Big) - 1 \right) \\
&\approx X_{k,K-k+1}^2 \cdot \prod_{l=K-k+1}^{j-1} f_l^2 \cdot \sum_{l=K-k+1}^{j-1} \frac{s_l^2/S_l}{f_l^2},
\end{aligned}$$

which is the Mack term for the estimation error (cf. [2], page 219).

In the practical examples that we have looked at, the numerical differences resulting from our formula presented in (4.22) and the Mack formula are rather small (see Table 5).
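The size of this difference is easy to inspect numerically. The sketch below (ours) evaluates, for given f̂_l, ŝ²_l and S_l, the exact Approach 3 term ∏(f̂²_l + ŝ²_l/S_l) − ∏ f̂²_l of (4.21)/(4.22) and Mack's linearised term ∏ f̂²_l · Σ (ŝ²_l/S_l)/f̂²_l of (4.29); since all summands are non-negative, the exact term is always at least as large as the linear approximation. The numbers are invented.

```python
def estimation_error_terms(f_hat, s2_hat, S, lo, hi):
    """Return (Approach 3 term, Mack's linear approximation) for the
    development periods l = lo, ..., hi - 1 (cf. (4.22), (4.27)-(4.29))."""
    prod_noise = prod_plain = 1.0
    mack_sum = 0.0
    for l in range(lo, hi):
        ratio = (s2_hat[l] / S[l]) / f_hat[l] ** 2
        prod_noise *= f_hat[l] ** 2 * (1.0 + ratio)
        prod_plain *= f_hat[l] ** 2
        mack_sum += ratio
    return prod_noise - prod_plain, prod_plain * mack_sum

# toy inputs (assumed)
f_hat = {1: 1.8, 2: 1.1, 3: 1.05}
s2_hat = {1: 4.0, 2: 1.0, 3: 0.5}
S = {1: 2100.0, 2: 1800.0, 3: 1500.0}
exact, mack = estimation_error_terms(f_hat, s2_hat, S, lo=1, hi=4)
print(exact, mack, exact - mack)   # the difference is of second order in s2/S
```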

Comparison to the result of Murphy [4].

We are grateful to one of the referees of our paper for having pointed out that our numerical results for the ultimate losses coincide with those obtained on the basis of the weighted average development (WAD) factor model in Murphy [4], Model IV.

If one looks at this paper [4] one finds indeed that the WAD model coincides with ours. It is also the chain ladder model stated in time series language. To obtain the crucial recursive formula for the estimation error (Theorem 3 in Appendix C of [4]) Murphy assumes independence of the estimates of the chain ladder factors. This assumption is inconsistent with the model assumptions. One can easily see that the chain ladder factor estimates are uncorrelated (see Mack [2], Theorem 2). But other direct calculations show that the squares of the chain ladder estimates are negatively correlated.

The point is that Murphy, by his assumptions, gets a multiplicative structure of the measure of volatility. In this paper we get the multiplicative structure by the choice of Approach 3 for the measure of the (conditional) volatility of the chain ladder estimate (see discussion in Section 4.1.2). Hence, since in both estimates one uses a multiplicative structure, it turns out that our recursive estimate (4.22) is exactly the estimate presented in Theorem 3 of Murphy [4].

4.3. Aggregation of ultimate loss over different accident years

Consider two different accident years k < l. From our assumptions we know that the ultimate losses X_{k,J} and X_{l,J} are independent. Nevertheless we have to be careful if we aggregate the estimators X̂_{k,J} and X̂_{l,J}. The estimators are no longer independent since they use the same observations for estimating the age-to-age factors f_j.

$$E\Big[\big(\hat{X}_{k,J} + \hat{X}_{l,J} - (X_{k,J} + X_{l,J})\big)^2 \,\Big|\, \mathcal{D}_K\Big] = \operatorname{Var}\big(X_{k,J} + X_{l,J} \,\big|\, \mathcal{D}_K\big) + \big(\hat{X}_{k,J} + \hat{X}_{l,J} - E[X_{k,J} + X_{l,J}|\mathcal{D}_K]\big)^2. \qquad (4.30)$$


Using the independence of the different accident years, we obtain for the first term

$$\operatorname{Var}\big(X_{k,J} + X_{l,J} \,\big|\, \mathcal{D}_K\big) = \operatorname{Var}\big(X_{k,J} \,\big|\, \mathcal{D}_K\big) + \operatorname{Var}\big(X_{l,J} \,\big|\, \mathcal{D}_K\big), \qquad (4.31)$$

whereas for the second term we obtain

$$\begin{aligned}
\big(\hat{X}_{k,J} + \hat{X}_{l,J} - E[X_{k,J} + X_{l,J}|\mathcal{D}_K]\big)^2 &= \big(\hat{X}_{k,J} - E[X_{k,J}|\mathcal{D}_K]\big)^2 + \big(\hat{X}_{l,J} - E[X_{l,J}|\mathcal{D}_K]\big)^2 \\
&\quad + 2 \cdot \big(\hat{X}_{k,J} - E[X_{k,J}|\mathcal{D}_K]\big) \cdot \big(\hat{X}_{l,J} - E[X_{l,J}|\mathcal{D}_K]\big).
\end{aligned} \qquad (4.32)$$

Hence we have the following decomposition for the (conditional) prediction error of the sum of two accident years

$$\begin{aligned}
E\Big[\big(\hat{X}_{k,J} + \hat{X}_{l,J} - (X_{k,J} + X_{l,J})\big)^2 \,\Big|\, \mathcal{D}_K\Big] &= E\Big[\big(\hat{X}_{k,J} - X_{k,J}\big)^2 \,\Big|\, \mathcal{D}_K\Big] + E\Big[\big(\hat{X}_{l,J} - X_{l,J}\big)^2 \,\Big|\, \mathcal{D}_K\Big] \\
&\quad + 2 \cdot \big(\hat{X}_{k,J} - E[X_{k,J}|\mathcal{D}_K]\big) \cdot \big(\hat{X}_{l,J} - E[X_{l,J}|\mathcal{D}_K]\big).
\end{aligned} \qquad (4.33)$$

In addition to the (conditional) mean square error of prediction of single accident years, we need to determine the volatilities of f̂_j for the cross-product terms, which gives an additional term in the estimation errors for aggregated accident years.

$$\begin{aligned}
\big(\hat{X}_{k,J} - E[X_{k,J}|\mathcal{D}_K]\big) \cdot \big(\hat{X}_{l,J} - E[X_{l,J}|\mathcal{D}_K]\big) &= X_{k,K-k+1} \cdot \big(\hat{f}_{K-k+1} \cdots \hat{f}_{J-1} - f_{K-k+1} \cdots f_{J-1}\big) \\
&\quad \cdot X_{l,K-l+1} \cdot \big(\hat{f}_{K-l+1} \cdots \hat{f}_{J-1} - f_{K-l+1} \cdots f_{J-1}\big).
\end{aligned} \qquad (4.34)$$

As in (4.20) (Approach 3) this term is (conditionally) estimated by

$$\begin{aligned}
& E\Big[ X_{k,K-k+1}\big(\hat{f}^{\mathcal{D}_K}_{K-k+1} \cdots \hat{f}^{\mathcal{D}_K}_{J-1} - f_{K-k+1} \cdots f_{J-1}\big) \cdot X_{l,K-l+1}\big(\hat{f}^{\mathcal{D}_K}_{K-l+1} \cdots \hat{f}^{\mathcal{D}_K}_{J-1} - f_{K-l+1} \cdots f_{J-1}\big) \,\Big|\, \mathcal{D}_K \Big] \\
&\quad = X_{k,K-k+1} \cdot X_{l,K-l+1} \cdot \prod_{j=K-l+1}^{K-k} f_j \cdot \left( \prod_{j=K-k+1}^{J-1} \Big( f_j^2 + \frac{s_j^2}{S_j} \Big) - \prod_{j=K-k+1}^{J-1} f_j^2 \right). \qquad (4.35)
\end{aligned}$$

But then the estimation of the covariance term is straightforward from the estimate of a single accident year.


Result 4.2. (MSEP for aggregated accident years) Under Assumptions (T1), (T2), (T1'), (T2') for the estimation of the process variance and the estimation error, respectively, we have the following estimator for the (conditional) mean square prediction error of the ultimate loss of aggregated accident years

$$\begin{aligned}
\widehat{E}\left[\left(\sum_{k} \hat{X}_{k,J} - \sum_{k} X_{k,J}\right)^2 \,\middle|\, \mathcal{D}_K\right] &:= \sum_{k} \widehat{E}\Big[\big(\hat{X}_{k,J} - X_{k,J}\big)^2 \,\Big|\, \mathcal{D}_K\Big] \\
&\quad + \sum_{k<l} 2 \cdot X_{k,K-k+1} \cdot X_{l,K-l+1} \cdot \prod_{j=K-l+1}^{K-k} \hat{f}_j \cdot D_{k,J}^2. \qquad (4.36)
\end{aligned}$$

Remarks:

1) The last term (covariance terms) from the result above can be rewritten as

$$\sum_{k<l} 2 \cdot \frac{\hat{X}_{l,K-k+1}}{X_{k,K-k+1}} \cdot X_{k,K-k+1}^2 \cdot D_{k,J}^2, \qquad (4.37)$$

where X²_{k,K−k+1} · D²_{k,J} is the (conditional) estimation error of the single accident year k (see (4.21)). This may be helpful in the implementation since it leads to matrix multiplications.

2) Note that in this section we have presented the aggregation of only two ultimate losses X_{k,J} and X_{l,J} which leads to one covariance term. If we aggregate over all accident years, of course, we obtain the sum over all covariance terms l ≠ k.
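A sketch (ours) of Result 4.2 under our reading of (4.36)-(4.37): the single accident year MSEPs are summed and, for every pair of accident years, twice the product of the older year's last observed value, the younger year's value projected to that development period, and the older year's D² term is added. The data layout and the toy numbers are assumptions made for the illustration only.

```python
def aggregate_msep(years, f_hat, s2_hat, S, J):
    """Result 4.2 (4.36): sum of single-year MSEPs plus covariance terms.

    years -- list of dicts {"x_last": X_{k,k*}, "kstar": k*}, ordered so that
             earlier (more developed) accident years come first (k* decreasing).
    """
    def D2(kstar):                      # estimation error term (4.21)
        pn = pp = 1.0
        for l in range(kstar, J):
            pn *= f_hat[l] ** 2 + s2_hat[l] / S[l]
            pp *= f_hat[l] ** 2
        return pn - pp

    def G2(x_last, kstar):              # process variance via recursion (4.9)
        g2, x_hat = 0.0, x_last
        for l in range(kstar, J):
            g2 = g2 * f_hat[l] ** 2 + s2_hat[l] * x_hat
            x_hat *= f_hat[l]
        return g2

    total = sum(G2(y["x_last"], y["kstar"]) + y["x_last"] ** 2 * D2(y["kstar"])
                for y in years)
    # covariance terms, cf. (4.37): pairs k < l, i.e. year k is more developed
    for a, yk in enumerate(years):
        for yl in years[a + 1:]:
            x_l_proj = yl["x_last"]     # project the younger year up to the older year's k*
            for j in range(yl["kstar"], yk["kstar"]):
                x_l_proj *= f_hat[j]
            total += 2.0 * yk["x_last"] * x_l_proj * D2(yk["kstar"])
    return total

# toy inputs (assumed): J = 3, two open accident years
f_hat = {1: 1.8, 2: 1.1}
s2_hat = {1: 4.0, 2: 1.0}
S = {1: 2100.0, 2: 1800.0}
years = [{"x_last": 2000.0, "kstar": 2}, {"x_last": 1200.0, "kstar": 1}]
print(aggregate_msep(years, f_hat, s2_hat, S, J=3))
```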

4.4. Prediction error for accounting years

In the sequel X_t denotes the total of the cumulative payments originating from all accident years up to accounting year t (1 ≤ t ≤ K + J − 1). Formally this corresponds to the sum of the X_{k,j} over the t-diagonal

$$X_t := \sum_{\substack{k+j=t+1 \\ 1 \le j \le J}} X_{k,j}. \qquad (4.38)$$

We are at time K, i.e. the total payments X_t are observed for t ≤ K. Conveniently, for accident year k we denote the last observation by X_{k,k*} with k + k* = K + 1. We want to predict X_{K+t} for 1 ≤ t ≤ J − 1 and the corresponding estimator is given by

$$\hat{X}_{K+t} := \sum_{\substack{k+j=K+t+1 \\ t+1 \le j \le J}} \hat{X}_{k,j}. \qquad (4.39)$$


Sometimes it is easier to write this sum as

$$\hat{X}_{K+t} = \sum_{k=K+t+1-J}^{K} \hat{X}_{k,k^*+t}. \qquad (4.40)$$

The (conditional) prediction error of X̂_{K+t} can be decomposed as follows (see (4.3))

$$E\Big[\big(\hat{X}_{K+t} - X_{K+t}\big)^2 \,\Big|\, \mathcal{D}_K\Big] = \underbrace{\operatorname{Var}\big(X_{K+t} \,\big|\, \mathcal{D}_K\big)}_{\text{process variance}} + \underbrace{\big(\hat{X}_{K+t} - E[X_{K+t}|\mathcal{D}_K]\big)^2}_{\text{estimation error}}. \qquad (4.41)$$

Using (4.39), (4.40) and the independence of the accident years we obtain for the process variance of X_{K+t}

$$\operatorname{Var}\big(X_{K+t} \,\big|\, \mathcal{D}_K\big) = \sum_{k=K+t+1-J}^{K} \operatorname{Var}\big(X_{k,k^*+t} \,\big|\, \mathcal{D}_K\big). \qquad (4.42)$$

Finally, using (4.8) we obtain as estimator for the (conditional) process variance of X_{K+t}

$$\widehat{E}\Big[\big(X_{K+t} - E[X_{K+t}|\mathcal{D}_K]\big)^2 \,\Big|\, \mathcal{D}_K\Big] := \sum_{\substack{k+j=K+t+1 \\ t+1 \le j \le J}} G_{k,j}^2. \qquad (4.43)$$

For the estimation error we proceed as in Subsection 4.1.2 (Approach 3). We average

$$\big(\hat{X}_{K+t} - E[X_{K+t}|\mathcal{D}_K]\big)^2$$

over the resampled chain ladder factors f̂^{D_K}_{k*}, …, f̂^{D_K}_{k*+t−1}, given the upper trapezoid D_K, to determine the (conditional) estimation error of X_{K+t}. We define

$$g^{\mathcal{D}_K}_{k^*,t} := \hat{f}^{\mathcal{D}_K}_{k^*} \cdots \hat{f}^{\mathcal{D}_K}_{k^*+t-1} \quad \text{and} \quad g_{k^*,t} := f_{k^*} \cdots f_{k^*+t-1}. \qquad (4.44)$$

Observe that

$$\big(\hat{X}_{K+t} - E[X_{K+t}|\mathcal{D}_K]\big)^2 = \left( \sum_{k=K+t+1-J}^{K} X_{k,k^*} \cdot \hat{f}_{k^*} \cdots \hat{f}_{k^*+t-1} - \sum_{k=K+t+1-J}^{K} X_{k,k^*} \cdot f_{k^*} \cdots f_{k^*+t-1} \right)^2. \qquad (4.45)$$


Hence we calculate the average over the (conditionally) resampled observations as follows

$$E\left[ \left( \sum_{k=K+t+1-J}^{K} X_{k,k^*} \cdot \big( g^{\mathcal{D}_K}_{k^*,t} - g_{k^*,t} \big) \right)^2 \,\middle|\, \mathcal{D}_K \right] = \sum_{k,\,l} X_{k,k^*} \cdot X_{l,l^*} \cdot E\Big[ \big( g^{\mathcal{D}_K}_{k^*,t} - g_{k^*,t} \big)\big( g^{\mathcal{D}_K}_{l^*,t} - g_{l^*,t} \big) \,\Big|\, \mathcal{D}_K \Big]. \qquad (4.46)$$

Next, we use D²_{k,k*+t} (cf. (4.21)) and replace f_{k*}, …, f_{l*−1}, f_{k*+t}, …, f_{l*+t−1} by their estimators. Thus we obtain the following (conditional) estimator for the estimation error:

$$\begin{aligned}
\widehat{E}\Big[\big(\hat{X}_{K+t} - E[X_{K+t}|\mathcal{D}_K]\big)^2 \,\Big|\, \mathcal{D}_K\Big]
&:= \sum_{k=K+t+1-J}^{K} X_{k,k^*}^2 \cdot D_{k,k^*+t}^2 \\
&\quad + 2 \sum_{l^*>k^*} X_{k,k^*} \cdot X_{l,l^*} \cdot \prod_{j=k^*}^{l^*-1} \hat{f}_j \cdot \prod_{j=k^*+t}^{l^*+t-1} \hat{f}_j \cdot D_{l,k^*+t}^2. \qquad (4.47)
\end{aligned}$$

From our results (4.43) and (4.47) we obtain for the (conditional) prediction error of an accounting year:

Result 4.3. (MSEP for accounting years, cumulative payments) For the estimator of the (conditional) prediction error (4.41) we have

$$\begin{aligned}
\widehat{E}\Big[\big(\hat{X}_{K+t} - X_{K+t}\big)^2 \,\Big|\, \mathcal{D}_K\Big]
&:= \sum_{\substack{k+j=K+t+1 \\ t+1 \le j \le J}} G_{k,j}^2 + \sum_{k=K+t+1-J}^{K} X_{k,k^*}^2 \cdot D_{k,k^*+t}^2 \\
&\quad + 2 \sum_{l^*>k^*} X_{k,k^*} \cdot X_{l,l^*} \cdot \prod_{j=k^*}^{l^*-1} \hat{f}_j \cdot \prod_{j=k^*+t}^{l^*+t-1} \hat{f}_j \cdot D_{l,k^*+t}^2. \qquad (4.48)
\end{aligned}$$

Analogously, for the estimated non-cumulative payments in accounting year K + t we have

$$\hat{Y}_{K+t} := \sum_{\substack{k+j=K+t+1 \\ t+1 \le j \le J}} \hat{Y}_{k,j} = \sum_{k=K+t+1-J}^{K} \big( \hat{X}_{k,k^*+t} - \hat{X}_{k,k^*+t-1} \big). \qquad (4.49)$$

The corresponding (conditional) prediction error can again be decomposed:

$$E\Big[\big(\hat{Y}_{K+t} - Y_{K+t}\big)^2 \,\Big|\, \mathcal{D}_K\Big] = \underbrace{\operatorname{Var}\big(Y_{K+t} \,\big|\, \mathcal{D}_K\big)}_{\text{process variance}} + \underbrace{\big(\hat{Y}_{K+t} - E[Y_{K+t}|\mathcal{D}_K]\big)^2}_{\text{estimation error}}. \qquad (4.50)$$


Using (4.49), (4.10) and the independence of the accident years we obtain for the (conditional) process variance of Y_{K+t} the estimator

$$\widehat{E}\Big[\big(Y_{K+t} - E[Y_{K+t}|\mathcal{D}_K]\big)^2 \,\Big|\, \mathcal{D}_K\Big] := \sum_{\substack{k+j=K+t+1 \\ t+1 \le j \le J}} \Big( G_{k,j}^2 + G_{k,j-1}^2 \cdot \big(1 - 2\hat{f}_{j-1}\big) \Big). \qquad (4.51)$$

Using (4.49) and (4.44) we get for the estimation error of YK + t (Approach 3)

$$\begin{aligned}
& E\left[ \left( \sum_{k=K+t+1-J}^{K} X_{k,k^*} \cdot \Big( \big(g^{\mathcal{D}_K}_{k^*,t} - g^{\mathcal{D}_K}_{k^*,t-1}\big) - \big(g_{k^*,t} - g_{k^*,t-1}\big) \Big) \right)^2 \,\middle|\, \mathcal{D}_K \right] \\
&\quad = \sum_{k,\,l} X_{k,k^*} \cdot X_{l,l^*} \cdot E\Big[ \Big( \big(g^{\mathcal{D}_K}_{k^*,t} - g^{\mathcal{D}_K}_{k^*,t-1}\big) - \big(g_{k^*,t} - g_{k^*,t-1}\big) \Big) \cdot \Big( \big(g^{\mathcal{D}_K}_{l^*,t} - g^{\mathcal{D}_K}_{l^*,t-1}\big) - \big(g_{l^*,t} - g_{l^*,t-1}\big) \Big) \,\Big|\, \mathcal{D}_K \Big]. \qquad (4.52)
\end{aligned}$$

As before, we obtain in addition to the terms of the single accident years the covariance terms, which we need to calculate separately. If we proceed as in (4.47) we obtain for the single covariance terms in (4.52) the following estimate (l* > k*)

$$L_{k,l} := \hat{X}_{k,l^*} \cdot X_{l,l^*} \cdot \Big[ D_{l,k^*+t}^2 - D_{l,k^*+t-1}^2 \cdot \hat{f}_{k^*+t-1} \Big] \cdot \prod_{m=k^*+t}^{l^*+t-2} \hat{f}_m \cdot \big( \hat{f}_{l^*+t-1} - 1 \big). \qquad (4.53)$$

So, if we collect all our terms, we obtain

Result 4.4. (MSEP for accounting years, incremental payments) For the (conditional) prediction error (4.50) we have the estimator

$$\widehat{E}\Big[\big(\hat{Y}_{K+t} - Y_{K+t}\big)^2 \,\Big|\, \mathcal{D}_K\Big] := \sum_{\substack{k+j=K+t+1 \\ t+1 \le j \le J}} \widehat{E}\Big[\big(\hat{Y}_{k,j} - Y_{k,j}\big)^2 \,\Big|\, \mathcal{D}_K\Big] + 2 \sum_{l^*>k^*} L_{k,l}, \qquad (4.54)$$

where (see (4.10) and (4.24)) (MSEP for single accident year, incremental payments)


$$\widehat{E}\Big[\big(\hat{Y}_{k,j} - Y_{k,j}\big)^2 \,\Big|\, \mathcal{D}_K\Big] := G_{k,j}^2 + G_{k,j-1}^2 \cdot \big(1 - 2\hat{f}_{j-1}\big) + X_{k,K-k+1}^2 \cdot \Big( D_{k,j}^2 + D_{k,j-1}^2 \cdot \big(1 - 2\hat{f}_{j-1}\big) \Big). \qquad (4.55)$$

5. EXAMPLE

We use the first run-off triangle of cumulative payments given in Mack [2] (cf. Table 1 in [2]).


TABLE 3

RUN-OFF-TRIANGLE (CUMULATIVE PAYMENTS), SOURCE [2].

AY Development period j

k 1 2 3 4 5 6 7 8 9 10

1 357848 1124788 1735330 2218270 2745596 3319994 3466336 3606286 3833515 3901463

2 352118 1236139 2170033 3353322 3799067 4120063 4647867 4914039 5339085

3 290507 1292306 2218525 3235179 3985995 4132918 4628910 4909315

4 310608 1418858 2195047 3757447 4029929 4381982 4588268

5 443160 1136350 2128333 2897821 3402672 3873311

6 396132 1333217 2180715 2985752 3691712

7 440832 1288463 2419861 3483130

8 359480 1421128 2864498

9 376686 1363294

10 344014

The estimators f̂_j of the chain ladder factors (cf. Table 4) show that the example chosen is a relatively simple run-off triangle.

TABLE 4

ESTIMATORS OF THE CHAIN LADDER FACTORS fj .

j      1          2          3          4          5          6          7          8          9
f̂_j    3.490607   1.747333   1.457413   1.173852   1.103824   1.086269   1.053874   1.076555   1.017725

For the estimation of s²_9 we use the formula given in [2], before Theorem 3. Hence, using the estimators f̂_j and ŝ²_j we find an estimate (and the corresponding error) for the aggregate reserve of all accident years Σ_k R_k.
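For readers who want to retrace the example, a sketch (ours) that applies the estimators (2.2)-(2.3) and the projection (2.1) to the triangle of Table 3; the resulting age-to-age factors correspond to Table 4 and the summed reserves to the first column of Table 5. The MSEP figures of Table 5 then follow from the sketches of Section 4 and are not recomputed here.

```python
# The run-off triangle of Table 3 (Mack [2], Table 1), cumulative payments.
triangle = [
    [357848, 1124788, 1735330, 2218270, 2745596, 3319994, 3466336, 3606286, 3833515, 3901463],
    [352118, 1236139, 2170033, 3353322, 3799067, 4120063, 4647867, 4914039, 5339085],
    [290507, 1292306, 2218525, 3235179, 3985995, 4132918, 4628910, 4909315],
    [310608, 1418858, 2195047, 3757447, 4029929, 4381982, 4588268],
    [443160, 1136350, 2128333, 2897821, 3402672, 3873311],
    [396132, 1333217, 2180715, 2985752, 3691712],
    [440832, 1288463, 2419861, 3483130],
    [359480, 1421128, 2864498],
    [376686, 1363294],
    [344014],
]

def chain_ladder_factors(tri):
    J = max(len(r) for r in tri)
    return {l: sum(r[l] for r in tri if len(r) > l) /
               sum(r[l - 1] for r in tri if len(r) > l) for l in range(1, J)}

def reserves(tri, f_hat):
    J = max(len(r) for r in tri)
    res = []
    for r in tri:
        ultimate = r[-1]
        for l in range(len(r), J):
            ultimate *= f_hat[l]
        res.append(ultimate - r[-1])
    return res

f_hat = chain_ladder_factors(triangle)
res = reserves(triangle, f_hat)
print({l: round(f, 6) for l, f in f_hat.items()})   # cf. Table 4
print(round(sum(res)))                               # aggregate reserve, cf. Table 5
```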


In Table 5 the process standard deviation, (estimation error)^{1/2}, MSEP and standard error of prediction for the aggregated ultimate loss over (all) different accident years are given. We see that the results from Mack's formula and our formula presented in (4.22) are nearly the same. But since the Mack formula for the estimation error is a linear bound from below of our formula, it is also clear that our results are slightly higher compared to Mack's results.

Other examples that we have examined (long and short term business) confirm the findings in this example.


TABLE 5

MSEP AND STANDARD ERROR OF PREDICTION FOR AGGREGATED ACCIDENT YEARS.

             reserves Σ_k R_k   process std. dev.   (estim. error)^{1/2}   MSEP                  pred. std. error
Mack         18'680'856          1'878'292           1'568'532              5'988'273'257'923     2'447'095
BBMW         18'680'856          1'878'292           1'569'349              5'990'835'395'887     2'447'618
difference   0                   0                   817                    2'562'137'964         523

REFERENCES

[1] BUCHWALDER, M., BÜHLMANN, H., MERZ, M. and WÜTHRICH, M.V. (2005) Legal valuation portfolio in non-life insurance. Conference paper presented at the 36th International ASTIN Colloquium, 4-7 September 2005, ETH Zürich. Available under www.astin2005.ch

[2] MACK, T. (1993) Distribution-free calculation of the standard error of Chain Ladder reserve estimates. Astin Bulletin 23(2), 213-225.

[3] MACK, T. (1999) The standard error of Chain Ladder reserve estimates: recursive calculation and inclusion of a tail factor. Astin Bulletin 29(2), 361-366.

[4] MURPHY, D.M. (1994) Unbiased loss development factors. Proc. CAS, LXXXI, 154-222.

[5] SCHMIDT, K.D. and SCHNAUS, A. (1996) An extension of Mack's model for the chain ladder method. Astin Bulletin 26(2), 247-262.

MARKUS BUCHWALDER
Bâloise, Aeschengraben 21, CH-4002 Basel, Switzerland

HANS BÜHLMANN
ETH Zürich, Department of Mathematics, CH-8092 Zürich, Switzerland

MICHAEL MERZ
University of Tübingen, Faculty of Economics, D-72074 Tübingen, Germany

MARIO V. WÜTHRICH
ETH Zürich, Department of Mathematics, CH-8092 Zürich, Switzerland
