Modelling Loss Data: Established practices and Open Issues - Giulio Mignola, Intesa Sanpaolo


Page 1: Modelling Loss Data: Established practices and Open Issues

Giulio Mignola, Intesa Sanpaolo

Page 2: Disclaimer

Opinions and views expressed here are exclusively those of the speaker and do not necessarily represent the practice or policies of Intesa Sanpaolo S.p.A.

Page 3: Summary

Basics of Op Risk Modelling – the LDA established practices
– Fundamental elements
– Essential tools

Open issues in LDA
– Which distribution to choose: a.k.a. "the extrapolation problem"
– Point vs. interval estimate: how big are the error bars?

Conclusions

Page 4: Fundamental elements of OpRisk: the LDA

Figure: the LDA scheme. A frequency distribution (relative frequency of the number of events per period) and a severity distribution are aggregated, via Monte Carlo or analytical tools, into the total loss distribution, on which the Op. VaR (99.9th percentile) and the unexpected losses are marked.
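As a rough illustration of this scheme, the sketch below (Python, with hypothetical Poisson and lognormal parameters; not the author's code) aggregates a frequency and a severity distribution by Monte Carlo and reads off the 99.9th percentile.

import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000                      # number of simulated years
lam = 20.0                            # hypothetical Poisson frequency (events per year)
meanlog, sdlog = 10.0, 2.0            # hypothetical lognormal severity parameters

counts = rng.poisson(lam, size=n_sims)                 # number of losses in each year
all_losses = rng.lognormal(meanlog, sdlog, size=counts.sum())

# sum the severities year by year to build the annual (aggregate) loss distribution
annual_loss = np.zeros(n_sims)
pos = 0
for i, k in enumerate(counts):
    annual_loss[i] = all_losses[pos:pos + k].sum()
    pos += k

op_var = np.quantile(annual_loss, 0.999)               # Op. VaR: 99.9th percentile
expected_loss = annual_loss.mean()
print(f"Expected annual loss: {expected_loss:,.0f}")
print(f"Op. VaR (99.9%):      {op_var:,.0f}")
print(f"Unexpected loss:      {op_var - expected_loss:,.0f}")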

Page 5: Essential Tools: Frequency distributions

Poisson distribution:
– Expected value = variance = λ

Negative Binomial distribution:
– Expected value = rβ
– Variance = rβ(1+β) > rβ

Figure: histograms of the number of events per period for a Poisson distribution and for a negative binomial distribution with var/mu = 3.
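A small sketch (hypothetical mean of 20 events per period, not taken from the slide's data) showing how a negative binomial with var/mu = 3 spreads the counts more than a Poisson with the same mean:

import numpy as np

rng = np.random.default_rng(1)
mu = 20.0                                    # hypothetical mean number of events per period
n_periods = 100_000

pois = rng.poisson(mu, n_periods)            # variance = mean
# negative binomial with var/mean = 1 + beta = 3  =>  beta = 2, r = mu/beta = 10
r, beta = mu / 2.0, 2.0
negb = rng.negative_binomial(r, 1.0 / (1.0 + beta), n_periods)

for name, x in [("Poisson", pois), ("negative binomial", negb)]:
    print(f"{name:18s} mean = {x.mean():5.2f}  var/mean = {x.var() / x.mean():4.2f}")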

Page 6: Essential Tools: Severity distributions

Lognormal, lognormal-gamma mixture, generalised Pareto, inverse Gaussian, Weibull and Burr densities, all with the same first two moments.

Figure: tail functions (1 - CDF) of the lognormal, GPD, inverse Gaussian, lognormal-gamma, Weibull and Burr distributions on log-log scales (losses from 100 up to 10^10, survival probabilities from 1e-10 to 1e-01); despite sharing the first two moments, their tails differ widely.
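The point of the plot can be reproduced numerically. The sketch below (hypothetical parameters, not fitted to anything) evaluates the survival function 1 - F(x) of a lognormal and of a generalised Pareto far beyond the data range, showing how tails that are comparable near the body diverge by orders of magnitude in the extrapolation region.

import numpy as np
from scipy import stats

lognorm = stats.lognorm(s=2.0, scale=np.exp(10.0))      # hypothetical lognormal severity
gpd = stats.genpareto(c=0.8, loc=0.0, scale=9.0e4)      # hypothetical GPD severity

for x in [1e6, 1e8, 1e10]:
    print(f"x = {x:8.0e}   1-F lognormal = {lognorm.sf(x):.2e}   1-F GPD = {gpd.sf(x):.2e}")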

Page 7: Essential Tools: Fitting distributions to data

Maximum Likelihood Estimation
– Best asymptotic properties
– Simple form, well-established numerical methods
– Sensitive to outliers

Robust estimation methods
– L-moments are more robust than ordinary sample moments
– Trade-off between variance and stability
– Attention to implementation complexity and the introduction of new parameters
– Example 1: the Minimum Distance Estimator minimises a goodness-of-fit measure, such as Cramér-von Mises (introduces no new parameter, simple form; see the sketch after this list)
– Example 2: the Optimal Bias-Robust Estimator is an M-estimator with a bound on the influence function and iterative estimation of the most efficient weights (one new parameter [the bound], complex implementation [double minimisation])
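A minimal sketch, on simulated data with hypothetical parameters, comparing a maximum-likelihood fit of a lognormal with a minimum-distance fit that minimises the Cramér-von Mises statistic (the "Example 1" above); it is illustrative only, not the estimator used by the author.

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=11.0, sigma=1.8, size=158)        # hypothetical sample

# Maximum likelihood: closed form for the lognormal
mu_mle, sd_mle = np.log(losses).mean(), np.log(losses).std()

# Minimum distance: minimise the Cramer-von Mises statistic between model and data
x = np.sort(losses)
i = np.arange(1, len(x) + 1)

def cvm_stat(params):
    mu, sd = params[0], abs(params[1])
    u = stats.lognorm.cdf(x, s=sd, scale=np.exp(mu))          # model CDF at the data
    return 1.0 / (12 * len(x)) + np.sum((u - (2 * i - 1) / (2 * len(x))) ** 2)

res = optimize.minimize(cvm_stat, x0=[mu_mle, sd_mle], method="Nelder-Mead")
mu_mde, sd_mde = res.x[0], abs(res.x[1])

print(f"MLE        meanlog = {mu_mle:.3f}  sdlog = {sd_mle:.3f}")
print(f"MDE (CvM)  meanlog = {mu_mde:.3f}  sdlog = {sd_mde:.3f}")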

Page 8: Essential Tools: Building the annual loss distribution

Monte Carlo – old stuff, very well known
– Easy to implement
– Easy to include insurance
– Can be time and memory consuming: to achieve accuracy one may need millions of scenarios, although simple variance-reduction techniques can be applied (e.g., Latin hypercube sampling)
– If dependence among dimensions is introduced, an ex-post aggregation mechanism has to be implemented

FFT – using the properties of the characteristic function to get the probability distribution of the aggregate loss (a minimal sketch follows this list)
– Very fast
– Not so straightforward to include insurance
– One dimension at a time: ex-post aggregation of dimensions is needed

Single-loss approximation – analytical approximation
– The level of accuracy is not straightforward to keep under control
– Insurance is structurally difficult to implement
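A minimal sketch of the FFT route (illustrative, with hypothetical frequency and severity; not the author's implementation): discretise the severity on a grid, take its FFT, apply the Poisson probability generating function exp(λ(φ - 1)), and invert to get the aggregate annual loss distribution.

import numpy as np
from scipy import stats

lam = 20.0                       # hypothetical Poisson frequency
h, n = 10_000.0, 2**18           # grid step and number of points; the step controls accuracy

# Discretised severity pmf on the grid 0, h, 2h, ... (hypothetical lognormal severity)
sev = stats.lognorm(s=2.0, scale=np.exp(10.0))
edges = np.arange(n + 1) * h
f = np.diff(sev.cdf(edges))      # severity mass in each cell

phi = np.fft.fft(f)                                # characteristic function of the discretised severity
g = np.fft.ifft(np.exp(lam * (phi - 1.0))).real    # aggregate annual loss pmf
g = np.clip(g, 0.0, None)

cdf = np.cumsum(g)
var_999 = edges[1:][np.searchsorted(cdf, 0.999)]   # 99.9% quantile of the aggregate loss
print(f"99.9% aggregate-loss quantile ~ {var_999:,.0f}")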

Page 9: A useful instrument: the single loss "approximation"

Following known results recently highlighted in C. Klüppelberg and K. Böcker, "Operational VaR: a Closed-Form Approximation".

For a large class of severity distributions (the sub-exponential class) it can be proven that

$$\lim_{x \to \infty} \frac{\overline{F^{*n}}(x)}{\overline{F}(x)} = n$$

and hence, for a high percentile of the aggregated distribution, the following approximations hold:

$$F_S^{-1}(p) \approx F^{-1}\!\left(1 - \frac{1-p}{\mathrm{E}[N]}\right), \qquad 1 - F_S(x) \approx \mathrm{E}[N]\,\bigl(1 - F(x)\bigr)$$
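A minimal sketch of the approximation above, with a hypothetical lognormal(10, 2) severity: the p-quantile of the aggregate annual loss is taken as the severity quantile at 1 - (1 - p)/E[N].

import numpy as np
from scipy import stats

def single_loss_var(sev, freq, p=0.999):
    """Single-loss approximation to the p-quantile of the aggregate annual loss."""
    return sev.ppf(1.0 - (1.0 - p) / freq)

sev = stats.lognorm(s=2.0, scale=np.exp(10.0))      # hypothetical lognormal(10, 2) severity
print(f"SLA 99.9% VaR, freq = 100:  {single_loss_var(sev, 100):,.0f}")
print(f"SLA 99.9% VaR, freq = 1000: {single_loss_var(sev, 1000):,.0f}")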

Page 10: How it works in practice: an example

Page 11: From the hyperuranion of ideas to the cave of implementations, i.e. what doesn't work so well in the LDA

Page 12: Understanding some challenges posed by Basel II

In the Basel Accord on Capital Adequacy, Operational Risk is identified as a distinct source of banking risk and a specific capital requirement is introduced; in addition to the basic and standardised measurement methodologies, stringent requirements for internal models are defined.

The crucial quantitative requirement characterising the AMA is to produce a risk measure which is "compatible" with the 99.9th percentile of the annual loss distribution (i.e. the distribution of the total loss over one year).

Another stringent requirement is the collection of loss data (both internal and system-wide) covering at least a five-year history.

The BIS tower in Basel

Page 13: Implications on models (1)

The value of the annual loss, assuming an average frequency of k losses per year, can be evaluated by summing, on average, k losses drawn independently from the severity distribution.

To describe the distribution of the annual loss, the whole procedure should be repeated many times, at least 1000 times to reach the required 99.9% confidence level. As a result, k × 1000 losses should be drawn independently from the loss severity distribution.

The bank is supposed to collect on average k new losses each year; reaching the k × 1000 losses in the database therefore means that the amount of data required is equivalent to 1000 bank-years.

Diagram: annual loss = sum of k individual losses; VaR at the 99.9% confidence level corresponds to the 999th of 1000 simulated annual losses, i.e. a loss database equivalent to 1000 bank-years, or k × 1000 losses.

Page 14: Implications on models (2)

Operational risk loss distributions belong to the class of sub-exponential distributions. This implies that, in the tail, the aggregate annual loss is roughly equal to the largest single loss. Consequently, the properties of losses in the far tail of the severity distribution completely drive the VaR result.

The 99.9% annual loss confidence level, with a five-year loss history in the database, entails a 1000/5 = 200-fold extrapolation beyond the observed data. External data from the industry could help a bit, but a pool of at least 200 banks over five years would be needed, and the homogeneity of the various contributors to the pool remains questionable.

These elements together imply that to obtain the result one must extrapolate. Extrapolation is very risky since it cannot be checked by observation: results that are perfectly fine and compatible in the realm of observed data can diverge dramatically once extrapolated. And the problem is unsolvable unless some other information is brought in. A fair amount of guesswork, isn't it?

Diagram: the observed data would have to be equivalent to 1000 banks (or 1000 bank-years); the annual loss is the sum of individual losses and, in the tail, is driven by the single largest loss, which falls in the extrapolated rather than the observed region.

Page 15: What does it mean in practice: which distribution to choose

Since there are no established "dynamic" models of the loss-generation process, only phenomenological approaches based on collected loss data are viable.

The selection process has to rely on goodness-of-fit statistics (Kolmogorov-Smirnov, Anderson-Darling, Smirnov-Cramér-von Mises, Schwarz Bayesian Criterion, etc.).

There are cases in which two, or even more than two, equally good fits (statistically speaking) are found.

Page 16: An example

Two fits to the same 158 losses above a 50,000 threshold:

Fit 1 – shifted lognormal (dlnorms)
  estimated parameters: meanlog 11.204300 +/- 0.139508, sdlog 1.753589 +/- 0.098629
  constant parameters:  shift 50000
  neg. log-likelihood:  2083.305
  fitted range: (50000, Inf); num. data fitted: 158; CDF extremes: 0.7768362, 1

           chisquare    KS          AD           SCvM          SBC
  g.o.f.   1.999580149  0.09043227  0.001324199  0.0005123347  4176.734
  p-value  0.005948592  0.04330166  0.347246805  0.2094978905  -4176.734

Fit 2 – generalised Pareto (dgpd)
  estimated parameters: shape 0.799519 +/- 0.151729, scale 86880.052984 +/- 13819.608154
  constant parameters:  loc 50000
  neg. log-likelihood:  2081.230
  fitted range: (50000, Inf); num. data fitted: 158; CDF extremes: 0.7768362, 1

           chisquare    KS          AD           SCvM          SBC
  g.o.f.   1.79569796   0.05435913  0.001259334  0.0003931439  4172.585
  p-value  0.01779608   0.24924875  0.280444997  0.3187397029  -4172.585
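A minimal sketch of this kind of comparison on simulated data (the sample and parameters below are illustrative, not the original fit): fit a shifted lognormal and a GPD to the same exceedances above a threshold and compare likelihoods and KS statistics.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
threshold = 50_000.0
exceed = threshold + rng.pareto(1.3, size=158) * 90_000.0     # hypothetical exceedances

# Fit 1: lognormal on the shifted losses (location fixed at the threshold)
shape_ln, _, scale_ln = stats.lognorm.fit(exceed - threshold, floc=0.0)
ln = stats.lognorm(s=shape_ln, scale=scale_ln, loc=threshold)

# Fit 2: generalised Pareto with location fixed at the threshold
shape_gp, _, scale_gp = stats.genpareto.fit(exceed, floc=threshold)
gp = stats.genpareto(c=shape_gp, loc=threshold, scale=scale_gp)

# note: KS p-values are optimistic when parameters are estimated from the same data
for name, dist in [("shifted lognormal", ln), ("GPD", gp)]:
    nll = -np.sum(dist.logpdf(exceed))
    ks = stats.kstest(exceed, dist.cdf)
    print(f"{name:18s} neg. loglik = {nll:8.1f}  KS = {ks.statistic:.4f}  p = {ks.pvalue:.3f}")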

Page 17: ... but far in the tail, using the single-loss approximation (N = 180)

Less than 100 million for fit 1 and more than 500 million for fit 2.
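A sketch applying the single-loss approximation of page 9 to the two fits of page 16. The reading of N = 180 as the total annual frequency, with the fitted CDF placing 1 - 0.7768 of the severity mass above the 50,000 threshold, is an assumption on my part, so the numbers are indicative only.

import numpy as np
from scipy import stats

p, N = 0.999, 180.0
p_exceed = 1.0 - 0.7768362             # severity mass above the 50,000 threshold (assumed)
q = 1.0 - (1.0 - p) / (N * p_exceed)   # required quantile of the exceedance distribution

# Fit 1: shifted lognormal on the exceedances
fit1 = 50_000.0 + stats.lognorm(s=1.753589, scale=np.exp(11.204300)).ppf(q)
# Fit 2: generalised Pareto on the exceedances
fit2 = stats.genpareto(c=0.799519, loc=50_000.0, scale=86880.052984).ppf(q)

print(f"SLA 99.9% VaR, fit 1 (lognormal): ~{fit1 / 1e6:.0f} M")   # roughly below 100 M
print(f"SLA 99.9% VaR, fit 2 (GPD):       ~{fit2 / 1e6:.0f} M")   # roughly above 500 M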

Page 18: Why it happens, i.e. how bad the extrapolation problem can be

To reach the 99.9th percentile of the annual aggregate loss we must explore regions of the loss severity far beyond the observed data points.

Example: to get the 99.9th percentile of an aggregate loss built with a frequency of 100, we need to explore severity losses up to

$$p = 1 - \frac{1 - 0.999}{100} = 0.99999,$$

i.e. 1 loss in 100,000. Recognising that a typical "risk class" contains, in the most favourable conditions, a few thousand data points, it is almost certain that we need to guess the behaviour of losses in the tail.

Page 19: What can be done: external data could help

Figure: probability plots of log(loss) against normal percent points (from 0.1% up to 99.9999%). Left panel: internal data only. Right panel: internal plus external data, with internal and external points marked separately; the external points populate the region beyond the internal data.

Page 20: What can be done... but for the time being it is not the solution

Figure: the same probability plot of log(loss) against normal percent points for internal plus external data, with internal and external points marked separately.

Page 21: Typical consequences

Issues on model output
– Instability of results (also favoured by the "huge" statistical uncertainties coming from the fitting of the severities): re-estimating the severity parameters can change the results of the previous cycle by a material amount (even more than 100%)
– Unreliability of results: uncertainty on the correct level of capital. How do you explain to top management (and regulators) that, with no relevant changes in the data, in March you need 40% more capital than in December?

Implications on model validation
– Absolute validation is unachievable
– Need to focus on trends of the results
– Need to rely heavily on benchmarks

Implications on financial-system consistency
– What a level playing field really means

Page 22: Is there a solution?

With the current formulation of the regulation, probably not.

A few simple ideas on how to proceed
– Internal validation should always be performed jointly with an external party (to provide enough experience to benchmark model outputs). Potentially a new figure could emerge here: "the model validator"
– A mechanism to incorporate trends (taking an in-sample anchor point as the basis against which variation is measured) should be hardwired into the model (even at the price of a potentially inconsistent description of the sample data)

Taking a more radical attitude
– Promote Basel III, starting now, and select a lower confidence level, in, or very near, the observed data range; then find another mechanism to reach the regulatory comfort zone, e.g. by fixing a system multiplier

Page 23: Other sources of uncertainty

Statistical uncertainty
– Better to consider it carefully
– Don't rely on VaR calculated on point estimates of the severity parameters

Numerical approximations
– Monte Carlo
– FFT
– Aggregation of different risk types (a copula sketch follows this list)
  • Copulae vs. naïve approach
  • Which measure of dependency (rank vs. linear correlation)
  • Which data to use (severity, frequency, aggregated losses, ...)
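A minimal sketch of one of the choices listed above: two risk classes aggregated with a Gaussian copula applied to the frequencies (all frequencies, severities and the correlation are hypothetical), compared with the naive sum of stand-alone quantiles. With 50,000 scenarios the 99.9% quantile is only roughly estimated; this is illustrative, not a production aggregation engine.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, rho = 50_000, 0.3                       # scenarios and hypothetical copula correlation

def annual_loss(lam, mu, sigma, u):
    """Annual loss of one risk class: Poisson frequency, lognormal severity.
    `u` is a uniform driving the frequency, so dependence enters via the copula."""
    counts = stats.poisson(lam).ppf(u).astype(int)
    out = np.zeros(len(u))
    for i, k in enumerate(counts):
        out[i] = rng.lognormal(mu, sigma, size=k).sum()
    return out

# Gaussian copula: correlated normals -> uniforms -> class-level frequencies
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)

loss1 = annual_loss(20, 10.0, 2.0, u[:, 0])
loss2 = annual_loss(50, 9.0, 1.8, u[:, 1])

var_joint = np.quantile(loss1 + loss2, 0.999)
var_naive = np.quantile(loss1, 0.999) + np.quantile(loss2, 0.999)
print(f"copula-aggregated 99.9% VaR:   {var_joint:,.0f}")
print(f"naive sum of stand-alone VaRs: {var_naive:,.0f}")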

Page 24: Point vs. interval estimate: statistical uncertainty

The result of a fit is a set of parameters together with the associated uncertainty

Shifted lognormal fit (dlnorms), as on page 16:
  estimated parameters: meanlog 11.204300 +/- 0.139508, sdlog 1.753589 +/- 0.098629
  constant parameters:  shift 50000
  neg. log-likelihood:  2083.305
  fitted range: (50000, Inf); num. data fitted: 158; CDF extremes: 0.7768362, 1

           chisquare    KS          AD           SCvM          SBC
  g.o.f.   1.999580149  0.09043227  0.001324199  0.0005123347  4176.734
  p-value  0.005948592  0.04330166  0.347246805  0.2094978905  -4176.734

$$\Delta P^2 \approx \left(\frac{\partial P}{\partial \mu}\right)^{\!2} \mathrm{var}(\mu) + \left(\frac{\partial P}{\partial \sigma}\right)^{\!2} \mathrm{var}(\sigma)$$

(neglecting correlations between parameters), where P is the percentile of interest and (μ, σ) are the estimated severity parameters.
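A minimal sketch of this error propagation: numerical derivatives of the 99.9% VaR, computed here with the single-loss approximation, with respect to meanlog and sdlog. The fit values are taken from the output above; the annual frequency of fitted losses is a hypothetical input.

import numpy as np
from scipy import stats

mu, sig = 11.2043, 1.753589
dmu, dsig = 0.139508, 0.098629         # standard errors from the fit
freq, p = 40.0, 0.999                  # hypothetical annual frequency of fitted losses

def var_sla(m, s):
    q = 1.0 - (1.0 - p) / freq
    return 50_000.0 + stats.lognorm(s=s, scale=np.exp(m)).ppf(q)

eps = 1e-4
dV_dmu = (var_sla(mu + eps, sig) - var_sla(mu - eps, sig)) / (2 * eps)
dV_dsig = (var_sla(mu, sig + eps) - var_sla(mu, sig - eps)) / (2 * eps)

dV = np.sqrt((dV_dmu * dmu) ** 2 + (dV_dsig * dsig) ** 2)
V = var_sla(mu, sig)
print(f"VaR = {V:,.0f}, delta-method error = {dV:,.0f} ({dV / V:.0%})")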

Page 25: Lognormal(meanlog=10, sdlog=2)

Figure: relative error on the 99.9% quantile of the aggregated loss distribution (ΔP/P) as a function of the relative error on sdlog, for frequencies of 200 and 1000. Severity: lognormal(10, 2); frequency: Poisson(200), Poisson(1000).

These results show how the statistical uncertainty on the estimated parameters propagates to the value of the VaR. Note the size of the error, up to 70%. Under the hypothesis of a symmetric distribution, such a statistical error means that another sample of the same size, independently drawn from the same underlying distribution, would with 68% probability give a result between -70% and +70% of the true value.

Page 26: A recent development: correctly computing VaR with uncertainty

As recently pointed out by X. Luo, P.V. Shevchenko and J.B. Donnelly, the skewness in the response of the quantiles to parameter variation should be correctly taken into account, since naively ignoring this effect can grossly underestimate the risk figure.

A worked example
– Lognormal severity (a typical situation)
– Severity parameters (μ0, σ0) and their uncertainties (Δμ, Δσ)
– Is VaR0 = VaR(μ0, σ0), i.e. the case in which the uncertainty on the parameter estimates is not considered, the right risk measure to use? The answer is definitely no; the correct measure should be:
– VaR = E[VaR(μ, σ)] over the μ, σ distributions, which properly takes their uncertainty into account. To a good degree of precision both distributions can be considered normal, centred at (μ0, σ0), with standard deviations (Δμ, Δσ); see the sketch after this list.
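A minimal sketch of E[VaR(μ, σ)] versus VaR(μ0, σ0): draw the parameters from normal distributions centred on the point estimates and average the resulting VaR, computed here with the single-loss approximation. All numbers are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu0, sig0 = 10.0, 2.0
dmu, dsig = 0.10, 0.10                # hypothetical parameter standard deviations
freq, p = 200.0, 0.999

def var_sla(m, s):
    return stats.lognorm(s=s, scale=np.exp(m)).ppf(1.0 - (1.0 - p) / freq)

var0 = var_sla(mu0, sig0)
mus = rng.normal(mu0, dmu, size=20_000)
sigs = rng.normal(sig0, dsig, size=20_000)
e_var = var_sla(mus, sigs).mean()     # vectorised over the sampled parameters

print(f"VaR0   = {var0 / 1e6:.0f} M")
print(f"E[VaR] = {e_var / 1e6:.0f} M  (> VaR0: in this sketch VaR is convex in the parameters)")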

Page 27: A practical example

Figure: left panel, the distribution (density) of sdlog around its point estimate; right panel, the corresponding distribution of the VaR. The expected sigma equals sigma0, but the expected VaR differs from VaR0: a 10% Δσ produces roughly a 40% ΔVaR, with VaR0 = 111 M and E[VaR] = 160 M.

Page 28: A full MC is needed if the parameter space is multidimensional

Figure: histogram of VaR values from a full Monte Carlo over the parameter space (relative parameter uncertainties of 5% and 2%), giving VaR0 = 111 M and E[VaR] = 139 M, with a spread on the VaR of roughly 25%.

Page 29: Conclusions

The financial industry has reached a high level of understanding of the Operational Risk discipline and a rather sophisticated level of modelling of the risk profile:

– Approaches to modelling operational risk losses have become well engineered, with the use of smart statistical techniques

– Methods have been developed (in some cases intelligently borrowed from other disciplines) to solve difficult problems

– Software tools are available to "industrialise" the VaR production

... but we are still facing some "unpleasant" effects which are difficult to understand properly and to keep under control. In particular, an unlucky definition of the risk measure imposed by the prudential regulation (Basel II) fundamentally flaws the soundness of the measurement framework. This has to:

– be understood in the first place
– be worked around, even at the price of losing some theoretical consistency
– trigger a lobbying initiative to change the current formulation of the regulation appropriately.

In order to bring the Operational Risk measurement framework back to a manageable situation, giving back a proper meaning to the op risk measures, to the validation of models and, last but not least, to the level playing field we all claim to want in place, the regulation should be improved.

Page 30: References

G. Mignola and R. Ugoccioni, "Tests of Extreme Value Theory", Operational Risk, Vol. 6, Issue 10, October 2005.

G. Mignola and R. Ugoccioni, "Sources of Uncertainty in Modelling Operational Risk Losses", Journal of Operational Risk, Vol. 1, No. 2, Summer 2006.

G. Mignola and R. Ugoccioni, "Effect of a Data Collection Threshold in the Loss Distribution Approach", Journal of Operational Risk, Vol. 1, No. 4, Winter 2006/07.

X. Luo, P.V. Shevchenko and J.B. Donnelly, "Addressing the Impact of Data Truncation and Parameter Uncertainty on Operational Risk Estimates", Operational Risk, Vol. 2, No. 4, Winter 2007/08.

S.A. Klugman, H.H. Panjer and G.E. Willmot, Loss Models: From Data to Decisions, 2nd ed., John Wiley and Sons, New York, 2004.

P. Embrechts, A.J. McNeil and R. Frey, Quantitative Risk Management, Princeton University Press, Princeton, 2005.

J. Shih, A. Samad-Khan and P. Medapa, "Is the Size of an Operational Loss Related to Firm Size?", Operational Risk, January 2000; A. Samad-Khan, "An Integrated Framework for Measuring and Managing Operational Risk", OpRisk 2003.

S. Carrillo Menendez and A. Suarez, "Version 2.0 Capital Models for Operational Risk: Bringing Economic Sense to Economic Capital", talk at the Global Conference on Operational Risk, New York, March 13-17, 2007.

F. Beekman, "Using Robust Estimators to Find Parameters", presentation at the Global Conference on Operational Risk, New York, March 13-17, 2007.