Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Actuariat de l’Assurance Non-Vie # 11
A. Charpentier (UQAM & Université de Rennes 1)
ENSAE ParisTech, Octobre 2015 - Janvier 2016.
http://freakonometrics.hypotheses.org
@freakonometrics 1
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Regression Models in Claims Reserving
A natural idea is to assume that incremental payments Yi,j can be explained by
two factors: one related to occurrence year i, and one development factor, related
to j. Formally, we assume that
$Y_{i,j} \sim \mathcal{L}(\theta_{i,j})$, where $\theta_{i,j} = \alpha_i \cdot \beta_j$,
i.e. $Y_{i,j}$ is a random variable, with distribution $\mathcal{L}$, whose parameter(s) can be
related to the two factors.
@freakonometrics 2
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Poisson regression in claims reserving
Renshaw & Verrall (1998) proposed to use a Poisson regression on incremental
payments to estimate claims reserves, i.e.
$Y_{i,j} \sim \mathcal{P}\big(\exp[\gamma + \alpha_i + \beta_j]\big)$.
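The triangle first has to be stacked in long format. A minimal sketch, assuming an incremental triangle payinc (an Ntr x Ntr matrix, with NA in the unobserved lower-right part) and occurrence years starting in 1988 as in the output below:
# hypothetical reconstruction of the long-format vectors used in the regression below
Ntr <- nrow(payinc)
year        <- rep(1988:(1987 + Ntr), times = Ntr)   # occurrence year (varies fastest)
development <- rep(0:(Ntr - 1), each = Ntr)          # development year
vec.C       <- as.vector(payinc)                     # incremental payments, column by column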
devF = as.factor(development); anF = as.factor(year)
REG = glm(vec.C ~ anF + devF, family = poisson(link = "log"))
Here,
> summary(REG)

Call:
glm(formula = vec.C ~ anF + devF, family = poisson(link = "log"),
    data = triangle)

Deviance Residuals:
       Min          1Q      Median          3Q         Max
-2.343e+00  -4.996e-01   9.978e-07   2.770e-01   3.936e+00

Coefficients:
            Estimate Std. Error  z value Pr(>|z|)
(Intercept)  8.05697    0.01551  519.426  < 2e-16 ***
anF1989      0.06440    0.02090    3.081  0.00206 **
anF1990      0.20242    0.02025    9.995  < 2e-16 ***
anF1991      0.31175    0.01980   15.744  < 2e-16 ***
anF1992      0.44407    0.01933   22.971  < 2e-16 ***
anF1993      0.50271    0.02079   24.179  < 2e-16 ***
devF1       -0.96513    0.01359  -70.994  < 2e-16 ***
devF2       -4.14853    0.06613  -62.729  < 2e-16 ***
devF3       -5.10499    0.12632  -40.413  < 2e-16 ***
devF4       -5.94962    0.24279  -24.505  < 2e-16 ***
devF5       -5.01244    0.21877  -22.912  < 2e-16 ***
---

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
AIC: 209.52

Number of Fisher Scoring iterations: 4
Again, it is possible to summarize this information in triangles....
Predictions can be used to complete the triangle.
ANew = rep(1988:(1987 + Ntr), times = Ntr); DNew = rep(0:(Ntr - 1), each = Ntr)
P = predict(REG, newdata = data.frame(anF = as.factor(ANew), devF = as.factor(DNew)))
payinc.pred = exp(matrix(as.numeric(P), nrow = Ntr, ncol = Ntr))
noise = payinc - payinc.pred
1 year development paycum payinc payinc.pred noise
2 1 1988 0 3209 3209 3155.699242 5.330076e+01
3 2 1989 0 3367 3367 3365.604828 1.395172e+00
4 3 1990 0 3871 3871 3863.737217 7.262783e+00
5 4 1991 0 4239 4239 4310.096418 -7.109642e+01
6 5 1992 0 4929 4929 4919.862296 9.137704e+00
7 6 1993 0 5217 5217 5217.000000 1.818989e-12
8 7 1988 1 4372 1163 1202.109851 -3.910985e+01
9 8 1989 1 4659 1292 1282.069808 9.930192e+00
10 9 1990 1 5345 1474 1471.824853 2.175147e+00
11 10 1991 1 5917 1678 1641.857784 3.614222e+01
12 11 1992 1 6794 1865 1874.137704 -9.137704e+00
13 12 1988 2 4411 39 49.820712 -1.082071e+01
14 13 1989 2 4696 37 53.134604 -1.613460e+01
15 14 1990 2 5398 53 60.998886 -7.998886e+00
16 15 1991 2 6020 103 68.045798 3.495420e+01
17 16 1988 3 4428 17 19.143790 -2.143790e+00
18 17 1989 3 4720 24 20.417165 3.582835e+00
19 18 1990 3 5420 22 23.439044 -1.439044e+00
20 19 1988 4 4435 7 8.226405 -1.226405e+00
21 20 1989 4 4730 10 8.773595 1.226405e+00
22 21 1988 5 4456 21 21.000000 -2.842171e-14
@freakonometrics 6
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
The Pearson residuals are
$$\widehat{\varepsilon}^{\,P}_{i,j} = \frac{X_{i,j} - \widehat{\mu}_{i,j}}{\sqrt{\widehat{\mu}_{i,j}}}$$
The deviance residuals are
$$\widehat{\varepsilon}^{\,D}_{i,j} = \mathrm{sign}(X_{i,j} - \widehat{\mu}_{i,j})\,\sqrt{d_{i,j}}$$
Pearson residuals can be obtained with resid = residuals(REG, "pearson"), and
summarized in a triangle
1 > PEARSON
2 [,1] [,2] [,3] [,4] [,5] [,6]
3 [1,] 9.4882e-01 -1.128012 -1.5330 -0.48996 -0.42759 -6.2021e-15
4 [2,] 2.4048e-02 0.277333 -2.2134 0.79291 0.41404 NA
5 [3,] 1.1684e-01 0.056697 -1.0241 -0.29723 NA NA
6 [4,] -1.0829e+00 0.891963 4.2373 NA NA NA
7 [5,] 1.3027e-01 -0.211074 NA NA NA NA
8 [6,] 2.5183e-14 NA NA NA NA NA
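As a sketch, those residuals (one per observed cell) can be rearranged into a triangle, assuming the year, development and vec.C vectors used above:
# sketch: rearranging the Pearson residuals into a triangle (past cells only)
resid <- residuals(REG, "pearson")
past  <- !is.na(vec.C)                                   # observed cells
PEARSON <- matrix(NA, Ntr, Ntr)
PEARSON[cbind(match(year, sort(unique(year))), development + 1)[past, ]] <- resid
PEARSON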
@freakonometrics 7
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Errors in GLMs
@freakonometrics 8
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
log-Poisson regression and Chain-Ladder
The log-Poisson is interesting since it (usually) provides the same amount of
reserves as Chain Ladder.
1 > library( ChainLadder )
2 > an <- 10; ligne = rep (1:an , each=an); colonne = rep (1:an , an)
3 > passe = (ligne + colonne - 1) <=an; n = sum(passe)
4 > PAID=GenIns; INC=PAID
5 > INC [,2:an]= PAID [,2:an]-PAID [ ,1:(an -1)]
6 > Y = as.vector(INC)
7 > lig = as.factor(ligne)
8 > col = as.factor(colonne)
9 > base = data.frame(Y,col ,lig)
10 > reg=glm(Y~col+lig ,data=base ,family="poisson")
11 > sum(exp(predict(reg ,newdata=base))[passe!=TRUE ])
12 [1] 18680856
@freakonometrics 9
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
log-Poisson regression and Chain-Ladder
> MackChainLadder(GenIns)
MackChainLadder(Triangle = GenIns)
      Latest Dev.To.Date  Ultimate      IBNR  Mack.S.E CV(IBNR)
1  3,901,463      1.0000 3,901,463         0         0      NaN
2  5,339,085      0.9826 5,433,719    94,634    71,835    0.759
3  4,909,315      0.9127 5,378,826   469,511   119,474    0.254
4  4,588,268      0.8661 5,297,906   709,638   131,573    0.185
5  3,873,311      0.7973 4,858,200   984,889   260,530    0.265
6  3,691,712      0.7223 5,111,171 1,419,459   410,407    0.289
7  3,483,130      0.6153 5,660,771 2,177,641   557,796    0.256
8  2,864,498      0.4222 6,784,799 3,920,301   874,882    0.223
9  1,363,294      0.2416 5,642,266 4,278,972   970,960    0.227
10   344,014      0.0692 4,969,825 4,625,811 1,362,981    0.295
               Totals
Latest:    34,358,090.00
Ultimate:  53,038,945.61
IBNR:      18,680,855.61
Mack S.E.:  2,441,364.13
CV(IBNR):            0.13
@freakonometrics 11
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
An explicit expression to quantify uncertainty
Recall that we wish to estimate
$$\mathbb{E}\big([\widehat{R} - R]^2\big) = \big[\mathbb{E}(\widehat{R}) - \mathbb{E}(R)\big]^2 + \mathrm{Var}(\widehat{R} - R) \approx \mathrm{Var}(\widehat{R}) + \mathrm{Var}(R)$$
Classically, consider a log-Poisson model, where incremental payments satisfy
$$Y_{i,j} \sim \mathcal{P}(\mu_{i,j}) \quad\text{where}\quad \mu_{i,j} = \exp[\eta_{i,j}] = \exp[\gamma + \alpha_i + \beta_j]$$
Using the delta method, we get that, asymptotically,
$$\mathrm{Var}(\widehat{Y}_{i,j}) = \mathrm{Var}(\widehat{\mu}_{i,j}) \approx \left(\frac{\partial \mu_{i,j}}{\partial \eta_{i,j}}\right)^2 \mathrm{Var}(\widehat{\eta}_{i,j})$$
where, since we consider a log link,
$$\frac{\partial \mu_{i,j}}{\partial \eta_{i,j}} = \mu_{i,j}$$
@freakonometrics 12
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
i.e., with an ODP distribution (i.e. $\mathrm{Var}(Y_{i,j}) = \varphi\,\mathbb{E}(Y_{i,j})$),
$$\mathbb{E}\big([\widehat{Y}_{i,j} - Y_{i,j}]^2\big) \approx \varphi\cdot\widehat{\mu}_{i,j} + \widehat{\mu}_{i,j}^{\,2}\cdot\mathrm{Var}(\widehat{\eta}_{i,j})$$
and
$$\mathrm{Cov}(\widehat{Y}_{i,j}, \widehat{Y}_{k,l}) \approx \widehat{\mu}_{i,j}\cdot\widehat{\mu}_{k,l}\cdot\mathrm{Cov}(\widehat{\eta}_{i,j}, \widehat{\eta}_{k,l})$$
Thus, the overall amount of reserves satisfies
$$\mathbb{E}\big([\widehat{R} - R]^2\big) \approx \sum_{i+j-1>n} \varphi\cdot\widehat{\mu}_{i,j} + \widehat{\boldsymbol{\mu}}^{\top}\,\mathrm{Var}(\widehat{\boldsymbol{\eta}})\,\widehat{\boldsymbol{\mu}}.$$
> an <- 6; ligne = rep(1:an, each=an); colonne = rep(1:an, an)
> passe = (ligne + colonne - 1) <= an; np = sum(passe)
> futur = (ligne + colonne - 1) > an; nf = sum(futur)
> INC = PAID
> INC[, 2:6] = PAID[, 2:6] - PAID[, 1:5]
> Y = as.vector(INC)
> lig = as.factor(ligne); col = as.factor(colonne)
>
> CL <- glm(Y ~ lig + col, family = quasipoisson)
> Y2 = Y; Y2[is.na(Y)] = .001
> CL2 <- glm(Y2 ~ lig + col, family = quasipoisson)
> YP = predict(CL)
> p = 2*6 - 1
> phi.P = sum(residuals(CL, "pearson")^2) / (np - p)
> Sig = vcov(CL)
> X = model.matrix(CL2)
> Cov.eta = X %*% Sig %*% t(X)
> mu.hat = exp(predict(CL, newdata = data.frame(lig, col))) * futur
> pe2 = phi.P * sum(mu.hat) + t(mu.hat) %*% Cov.eta %*% mu.hat
> cat("Total reserve =", sum(mu.hat), "prediction error =", sqrt(pe2), "\n")
Total reserve = 2426.985 prediction error = 131.7726
i.e. $\mathbb{E}\big([\widehat{R} - R]^2\big)^{1/2} = 131.77$.
@freakonometrics 14
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Uncertainty and bootstrap simulations
Based on that theoretical triangle, it is possible to generate residuals to obtain a
simulated triangle. Since the size of the sample is small (here 21 observed
values), assuming normality for Pearson's residuals can be too restrictive. A
resampling (bootstrap) procedure can then be more robust.
In order to get the loss distribution, it is possible to use bootstrap techniques to
generate a matrix of errors, see Renshaw & Verrall (1994). They suggest
bootstrapping Pearson's residuals, and the simulation procedure is the following:
• estimate the model parameters (GLM), $\widehat{\boldsymbol{\beta}}$,
• calculate the fitted values $\widehat{\mu}_{i,j}$ and the residuals
$$r_{i,j} = \frac{Y_{i,j} - \widehat{\mu}_{i,j}}{\sqrt{V(\widehat{\mu}_{i,j})}},$$
• forecast with the original data, $\widehat{\mu}_{i,j}$ for $i + j > n$.
Then we can start the bootstrap loop, repeating B times:
@freakonometrics 15
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
• resample the residuals with replacement, and get a new sample $r^{(b)}_{i,j}$,
• create a pseudo-sample solving $Y^{*}_{i,j} = \widehat{\mu}_{i,j} + r^{(b)}_{i,j} \times \sqrt{V(\widehat{\mu}_{i,j})}$,
• estimate the model using the GLM procedure and derive bootstrap forecasts.
Let resid.sim be the resampled residuals. Note that REG$fitted.values (called here
payinc.pred) is the vector containing the $\widehat{\mu}_{i,j}$'s. Further, $V(\widehat{\mu}_{i,j})$ is here simply
REG$fitted.values, since the variance function of the Poisson regression is the
identity function. Hence, here
$$Y^{*}_{i,j} = \widehat{\mu}_{i,j} + r^{(b)}_{i,j} \times \sqrt{\widehat{\mu}_{i,j}}$$
and thus, set
1 resid.sim = sample(resid ,Ntr*(Ntr +1)/2,replace=TRUE)
2 payinc.sim = resid.sim*sqrt(payinc.pred)+payinc.pred
3
4 [,1] [,2] [,3] [,4] [,5] [,6]
5 [1,] 3155.699 1216.465 42.17691 18.22026 9.021844 22.89738
6 [2,] 3381.694 1245.399 84.02244 18.20322 11.122243 NA
7 [3,] 3726.151 1432.534 61.44170 23.43904 NA NA
8 [4,] 4337.279 1642.832 74.58658 NA NA NA
9 [5,] 4929.000 1879.777 NA NA NA NA
10 [6,] 5186.116 NA NA NA NA NA
For this simulated triangle, we can use the Chain-Ladder estimate to derive a
simulated reserve amount (here 2448.175). The figure below shows the empirical
distribution of those amounts, based on 10,000 random simulations.
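A minimal sketch of the full resampling loop described above, assuming payinc is the incremental triangle (matrix form), payinc.pred its fitted counterpart, resid the Pearson residuals of the observed cells, and CL() the Chain-Ladder reserve function defined later in these slides:
nb.sim  <- 10000
RESERVE <- rep(NA, nb.sim)
past    <- !is.na(payinc)                    # indicator of observed cells
for(b in 1:nb.sim){
  payinc.sim <- payinc.pred                  # start from the fitted triangle
  payinc.sim[past] <- payinc.pred[past] +
    sample(resid, sum(past), replace = TRUE) * sqrt(payinc.pred[past])
  paycum.sim <- t(apply(payinc.sim, 1, cumsum))   # back to cumulative payments
  paycum.sim[!past] <- NA                    # keep only the observed triangle
  RESERVE[b] <- CL(paycum.sim)               # Chain-Ladder reserve of the pseudo triangle
}
hist(RESERVE, breaks = 50, col = "light blue")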
@freakonometrics 17
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
@freakonometrics 18
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Parametric or nonparametric Monte Carlo ?
A natural idea would be to assume that Pearson residuals have a Gaussian
distribution, qqnorm(R); qqline(R).
The graph on the right draws each point with a size proportional to its Cook's distance.
@freakonometrics 19
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Instead of resampling from the observed residuals, we can also draw directly from a
normal distribution, i.e.
1 > rnorm(length(R),mean=mean(R),sd=sd(R))
@freakonometrics 20
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
The second triangle is obtained using a Student t distribution (the blue line
being the bootstrap estimate).
[Figure: VaR for the total reserves as a function of the probability level, with Student, Normal and bootstrap estimates.]
Note that the bootstrap technique is valid only in the case where the residuals are
perfectly independent.
@freakonometrics 21
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
In R, it is also possible to use the BootChainLadder(Triangle, R = 999, process.distr = "od.pois") function.
@freakonometrics 22
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Going further
So far, we have derived a distribution for the best estimate of total reserves.
Note that it is possible to estimate a scale parameter φ. England & Verrall (1999)
suggested
$$\widehat{\varphi} = \frac{\sum_{i,j} \widehat{\varepsilon}_{i,j}^{\,2}}{n - p}$$
where the summation is over all past observations.
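A quick sketch of this estimate, from the Poisson fit REG used above:
eps <- residuals(REG, "pearson")
phi.hat <- sum(eps^2) / (length(eps) - length(coef(REG)))   # n past cells, p parameters
phi.hat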
@freakonometrics 23
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bootstrap Chain-Ladder
> I = as.matrix(read.table("D: triangleC.csv", sep=";", header=FALSE))
> BCL <- BootChainLadder(Triangle = I, R = 999, process.distr = "od.pois")
> BCL
BootChainLadder(Triangle = I, R = 999, process.distr = "od.pois")

  Latest Mean Ultimate Mean IBNR SD IBNR IBNR 75% IBNR 95%
1  4,456         4,456       0.0     0.0        0        0
2  4,730         4,752      22.0    11.8       28       45
3  5,420         5,455      35.3    14.6       44       61
4  6,020         6,086      66.2    20.8       78      102
5  6,794         6,947     152.7    29.1      170      205
6  5,217         7,364   2,146.9   112.5    2,214    2,327

                Totals
Latest:         32,637
Mean Ultimate:  35,060
Mean IBNR:       2,423
SD IBNR:           131
Total IBNR 75%:  2,501
Total IBNR 95%:  2,653
[Figure: BootChainLadder diagnostics: histogram and empirical cdf of the total IBNR; simulated ultimate claims costs by origin period (with the mean ultimate claim); latest actual incremental claims against simulated values, by origin period.]
@freakonometrics 25
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
From Poisson to Over-Dispersed Poisson
Classically, in GLMs we consider distributions with density
$$f(z\,|\,\theta, \varphi) = \exp\left[\frac{z\theta - b(\theta)}{\varphi} + c(z, \varphi)\right],$$
e.g. for the Poisson distribution $\mathcal{P}(\lambda)$,
$$f(z\,|\,\lambda) = \exp(-\lambda)\,\frac{\lambda^z}{z!} = \exp\big[z\log\lambda - \lambda - \log z!\big],\quad z\in\mathbb{N},$$
with $\theta = \log\lambda$, $\varphi = 1$, $b(\theta) = \exp\theta = \lambda$ and $c(z, \varphi) = -\log z!$.
Assume now that $\varphi$, equal to 1 in the Poisson case, becomes an additional parameter (that should be estimated).
Note that in that case $f(z\,|\,\lambda)$ is no longer a density, but a quasi-density.
Further, note that
$$\mathrm{Var}(Z) = \varphi\,\mathbb{E}(Z).$$
Thus, if $\varphi > 1$ there is overdispersion.
@freakonometrics 26
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
On quasiPoisson regression
In order to understand the role of the additional parameter, recall that the
Gaussian linear model $\mathcal{N}(\mu, \sigma^2)$ is an exponential-family distribution with $\theta = \mu$,
$b(\theta) = \theta^2/2$, $\varphi = \sigma^2$ and
$$c(z, \varphi) = -\frac{1}{2}\left[\frac{z^2}{\sigma^2} + \log(2\pi\sigma^2)\right].$$
Thus, $\varphi$ is the variance parameter of
$$Y\,|\,\boldsymbol{X} \sim \mathcal{N}(\boldsymbol{X}^\top\boldsymbol{\beta}, \sigma^2)$$
In that linear model, estimation is based on the following process (see the sketch below),
• estimate $\boldsymbol{\beta}$ as $\widehat{\boldsymbol{\beta}} = (\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{Y}$,
• derive the implied residuals, $\widehat{\boldsymbol{\varepsilon}} = \boldsymbol{Y} - \boldsymbol{X}\widehat{\boldsymbol{\beta}}$,
• estimate $\sigma^2$ as the variance of the implied residuals.
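A minimal illustration of those three steps with lm(), on hypothetical simulated data:
set.seed(1)
x0 <- rnorm(100); y0 <- 1 + 2*x0 + rnorm(100)     # hypothetical data
fit        <- lm(y0 ~ x0)
beta.hat   <- coef(fit)                            # (X'X)^{-1} X'Y
eps.hat    <- residuals(fit)                       # Y - X beta.hat
sigma2.hat <- sum(eps.hat^2) / df.residual(fit)    # variance of the implied residuals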
@freakonometrics 27
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Thus, φ does not impact the estimation of the coefficients, but it will impact their
significance.
1 > an <- 10; ligne = rep(1:an, each=an); colonne = rep(1:an, an)
2 > passe = (ligne + colonne - 1) <= an; n = sum(passe)
3 > PAID = GenIns; INC = PAID
4 > INC[,2:an] = PAID[,2:an] - PAID[,1:(an-1)]
5 > Y = as.vector(INC)
6 > lig = as.factor(ligne)
7 > col = as.factor(colonne)
8 > base = data.frame(Y, col, lig)
9 > reg1 = glm(Y ~ col + lig, data=base, family="poisson")
10 > reg2 = glm(Y ~ col + lig, data=base, family="quasipoisson")
11 > summary(reg1)
12 Call:
13 glm(formula = Y ~ col + lig , family = "poisson", data = base)
14 Coefficients :
15 Estimate Std. Error z value Pr(>|z|)
16 (Intercept) 12.5064047 0.0007540 16587.372 < 2e-16 ***
@freakonometrics 28
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
17 col2 0.3312722 0.0006694 494.848 < 2e-16 ***
18 col3 0.3211186 0.0006877 466.963 < 2e -16 ***
19 col4 0.3059600 0.0007008 436.570 < 2e -16 ***
20 col5 0.2193163 0.0007324 299.461 < 2e-16 ***
21 col6 0.2700770 0.0007445 362.755 < 2e -16 ***
22 col7 0.3722084 0.0007606 489.344 < 2e -16 ***
23 col8 0.5533331 0.0008133 680.377 < 2e-16 ***
24 col9 0.3689342 0.0010429 353.772 < 2e-16 ***
25 col10 0.2420330 0.0018642 129.830 < 2e -16 ***
26 lig2 0.9125263 0.0006490 1406.042 < 2e-16 ***
27 lig3 0.9588306 0.0006652 1441.374 < 2e -16 ***
28 lig4 1.0259970 0.0006840 1499.927 < 2e-16 ***
29 lig5 0.4352762 0.0008019 542.814 < 2e -16 ***
30 lig6 0.0800565 0.0009364 85.492 < 2e -16 ***
31 lig7 -0.0063815 0.0010390 -6.142 8.14e-10 ***
32 lig8 -0.3944522 0.0013529 -291.560 < 2e -16 ***
33 lig9 0.0093782 0.0013963 6.716 1.86e -11 ***
34 lig10 -1.3799067 0.0039097 -352.946 < 2e -16 ***
@freakonometrics 29
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
35 ---
36 ( Dispersion parameter for poisson family taken to be 1)
37 Null deviance: 10699464 on 54 degrees of freedom
38 Residual deviance: 1903014 on 36 degrees of freedom
39 (45 observations deleted due to missingness )
40 AIC: 1903877
41 Number of Fisher Scoring iterations: 4
1 > summary(reg2)
2 Call:
3 glm(formula = Y ~ col + lig , family = " quasipoisson ", data = base)
4 Coefficients :
5 Estimate Std. Error t value Pr(>|t|)
6 (Intercept) 12.506405 0.172924 72.323 < 2e -16 ***
7 col2 0.331272 0.153537 2.158 0.03771 *
8 col3 0.321119 0.157719 2.036 0.04916 *
9 col4 0.305960 0.160736 1.903 0.06499 .
10 col5 0.219316 0.167970 1.306 0.19994
11 col6 0.270077 0.170756 1.582 0.12247
@freakonometrics 30
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
12 col7 0.372208 0.174451 2.134 0.03976 *
13 col8 0.553333 0.186525 2.967 0.00532 **
14 col9 0.368934 0.239181 1.542 0.13170
15 col10 0.242033 0.427562 0.566 0.57485
16 lig2 0.912526 0.148850 6.131 4.65e-07 ***
17 lig3 0.958831 0.152569 6.285 2.90e-07 ***
18 lig4 1.025997 0.156883 6.540 1.33e-07 ***
19 lig5 0.435276 0.183914 2.367 0.02344 *
20 lig6 0.080057 0.214770 0.373 0.71152
21 lig7 -0.006381 0.238290 -0.027 0.97878
22 lig8 -0.394452 0.310289 -1.271 0.21180
23 lig9 0.009378 0.320249 0.029 0.97680
24 lig10 -1.379907 0.896690 -1.539 0.13258
25 ---
26 ( Dispersion parameter for quasipoisson family taken to be 52601.93)
27 Null deviance: 10699464 on 54 degrees of freedom
28 Residual deviance: 1903014 on 36 degrees of freedom
29 (45 observations deleted due to missingness )
@freakonometrics 31
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
30 AIC: NA
31 Number of Fisher Scoring iterations: 4
Thus, the coefficients are identical, so it does not affect the best estimate of claims
reserves... unless we take into account the fact that some covariates are no longer
significant.
1 > base = data.frame(Y,col ,lig)
2 > base$lig[base$lig=="7"]="1"
3 > base$lig[base$lig=="9"]="1"
4 > base$lig[base$lig=="6"]="1"
5 > base$col[base$col=="5"]="1"
6 > base$col[base$col=="10"]="1"
7 > base$col[base$col=="9"]="1"
8 > base$col[base$col=="6"]="1"
9 > base$col[base$col=="4"]="1"
10 > base$col[base$col=="3"]="1"
11 > base$col[base$col=="7"]="1"
12 > base$col[base$col=="2"]="1"
@freakonometrics 32
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
13 > base$lig[base$lig=="8"]="1"
14 > summary(glm(Y~col+lig ,data=base ,family=" quasipoisson "))
15 Call:
16 glm(formula = Y ~ col + lig , family = " quasipoisson ", data = base)
17 Coefficients :
18 Estimate Std. Error t value Pr(>|t|)
19 (Intercept) 12.73401 0.07764 164.022 < 2e-16 ***
20 col8 0.28877 0.14109 2.047 0.04618 *
21 lig2 0.96246 0.10984 8.763 1.59e -11 ***
22 lig3 0.99721 0.11232 8.878 1.07e -11 ***
23 lig4 1.06465 0.11481 9.273 2.82e -12 ***
24 lig5 0.45513 0.14622 3.113 0.00312 **
25 lig10 -1.60752 0.85482 -1.881 0.06611 .
26 ---
27 ( Dispersion parameter for quasipoisson family taken to be 49241.53)
28 Null deviance: 10699464 on 54 degrees of freedom
29 Residual deviance: 2442092 on 48 degrees of freedom
30 (45 observations deleted due to missingness )
@freakonometrics 33
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
31 AIC: NA
32 Number of Fisher Scoring iterations: 4
Thus,
1 > M= cbind(Y,predict(reg1 ,newdata=base0 ,type="response"),
2 + predict(reg2 ,newdata=base0 ,type="response"),
3 + predict(reg3 ,newdata=base , type="response"))
4 > sum(M[is.na(Y)==TRUE ,2])
5 [1] 18680856
6 > sum(M[is.na(Y)==TRUE ,3])
7 [1] 18680856
8 > sum(M[is.na(Y)==TRUE ,4])
9 [1] 18226919
Including an overdispersion parameter φ might impact the estimation of the
overall reserves.
@freakonometrics 34
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Testing for overdispersion
In order to test for overdispersion in an econometric model, we need to specify
how overdispersion appears. A classical test is to assume that
$$\mathrm{Var}(Y|\boldsymbol{X}) = \mathbb{E}(Y|\boldsymbol{X}) + \tau\,\mathbb{E}(Y|\boldsymbol{X})^2$$
which is a standard econometric model with a random effect. We want to test
$$H_0 : \tau = 0 \quad\text{against}\quad H_1 : \tau > 0$$
A standard test statistic is
$$T = \frac{\sum_{i=1}^n \big([Y_i - \widehat{\lambda}_i]^2 - Y_i\big)}{\sqrt{2\sum_{i=1}^n \widehat{\lambda}_i^{\,2}}}$$
which has a $\mathcal{N}(0,1)$ distribution under $H_0$. An alternative is to consider
$$T = \frac{\sum_{i=1}^n \big([Y_i - \widehat{\lambda}_i]^2 - Y_i\big)}{\sqrt{\sum_{i=1}^n \big[[Y_i - \widehat{\lambda}_i]^2 - Y_i\big]^2}}$$
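As a sketch, the first statistic can be computed by hand once the Poisson regression reg1 of the next slides has been fitted:
lambda.hat <- fitted(reg1)
y.obs      <- reg1$y
T.stat     <- sum((y.obs - lambda.hat)^2 - y.obs) / sqrt(2 * sum(lambda.hat^2))
T.stat     # to be compared with standard Gaussian quantiles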
@freakonometrics 35
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
These tests are available in R, respectively
1 > library(AER)
2 > dispersiontest (reglmp)
3 > dispersiontest (reglmp ,trafo = 2)
An alternative is simply the following
1 > library(ChainLadder)
2 > an <- 10; ligne = rep (1:an , each=an); colonne = rep (1:an , an)
3 > passe = (ligne + colonne - 1) <=an; n = sum(passe)
4 > PAID=GenIns; INC=PAID
5 > INC [,2:an]= PAID [,2:an]-PAID [ ,1:(an -1)]
6 > Y = as.vector(INC)
7 > lig = as.factor(ligne)
8 > col = as.factor(colonne)
9 > base = data.frame(Y,col ,lig)
10 > reg1=glm(Y~col+lig ,data=base ,family="poisson")
11 > reg2=glm(Y~col+lig ,data=base ,family=" quasipoisson ")
12 > dispersiontest (reg1)
@freakonometrics 36
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
13 Overdispersion test
14 data: reg1
15 z = 4.3942 , p-value = 5.558e -06
16 alternative hypothesis : true dispersion is greater than 1
@freakonometrics 37
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Alternative models for overdispersion
There is overdispersion if Var(Y) > E(Y), which can be obtained with a negative
binomial distribution (which belongs to the exponential family),
1 > library(MASS)
2 > reg3=glm.nb(Y~col+lig ,data=base)
3 > summary(reg3)
4 ( Dispersion parameter for Negative Binomial (13.8349) family taken to
be 1)
5 Theta: 13.83
6 Std. Err.: 2.61
7 2 x log -likelihood: -1460.766
8 > sum(exp(predict(reg3 ,newdata=base))[passe!=TRUE ])
9 [1] 18085795
@freakonometrics 38
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Uncertainty and overdispersion
Based on the explicit expression for the prediction error, it is possible to obtain
prediction error for those three models,
> predCL = function(reg = reg1, regb = reg1b){
+   p = 2*6 - 1
+   phi.P = sum(residuals(reg, "pearson")^2) / (np - p)
+   Sig = vcov(reg)
+   X = model.matrix(regb)
+   Cov.eta = X %*% Sig %*% t(X)
+   mu.hat = exp(predict(reg, newdata = data.frame(lig, col))) * futur
+   pe2 = phi.P * sum(mu.hat) + t(mu.hat) %*% Cov.eta %*% mu.hat
+   cat("Total reserve =", sum(mu.hat), "prediction error =", sqrt(pe2),
+       sqrt(pe2)/sum(mu.hat), "\n")
+ }
With our three models (Poisson, ODP and negative binomial), we obtain:
1 > predCL(reg1 ,reg1b)
@freakonometrics 39
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
2 Total reserve = 18680856 prediction error = 896876.9 0.04801048
3 > predCL(reg2 ,reg2b)
4 Total reserve = 18680856 prediction error = 4736425 0.2535443
5 > predCL(reg3 ,reg3b)
6 Total reserve = 18085795 prediction error = 2058134 0.1137984
@freakonometrics 40
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
On the prediction error
In order to derive an estimation of the prediction error using bootstrap
techniques, we have not only to randomly generate possible triangles, but also to
add uncertainty in the development, using e.g. the fact that
$$C_{i,j+1} = \lambda_j C_{i,j} + \sigma_j \sqrt{C_{i,j}}\;\varepsilon_{i,j}$$
where the noise can be assumed to be Gaussian, $\mathcal{N}(0,1)$.
The statistical interpretation is that
$$C_{i,j+1}\,|\,C_{i,j} \sim \mathcal{N}\big(\lambda_j C_{i,j},\ \sigma_j^2 C_{i,j}\big)$$
Classically, we use
> CL = function(triangle){
+   n = nrow(triangle)
+   LAMBDA = rep(NA, n-1)
+   for(i in 1:(n-1)){
+     LAMBDA[i] = sum(triangle[1:(n-i), i+1]) /
+                 sum(triangle[1:(n-i), i]) }
+   DIAG = diag(triangle[, n:1])
+   TOTO = c(1, rev(LAMBDA))
+   return(sum(cumprod(TOTO)*DIAG - DIAG)) }
a natural idea is to consider
1 > CLboot=function(triangle ,l,s){
2 + m=nrow(triangle)
3 + for(i in 2:m){
4 + triangle [(m-i+2):m,i]= rnorm(i-1,
5 + mean=triangle [(m-i+2):m,i-1]*l[i-1],
6 + sd=sqrt(triangle [(m-i+2):m,i -1])*s[i -1])
7 + }
8 + ULT=triangle[,m]
9 + DIAG=diag(triangle[,m:1])
10 + return(sum(ULT -DIAG)) }
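The loop below also needs development factors and volatility estimates; a possible sketch, taking them from the Mack model in the ChainLadder package (components f and sigma of the fitted object):
MCL    <- MackChainLadder(PAID)
lambda <- MCL$f[1:(nrow(PAID) - 1)]    # development factors lambda_j
sigma  <- MCL$sigma                    # Mack's sigma_j estimates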
Then, we can run bootstrap simulations,
@freakonometrics 42
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
> base = data.frame(Y, lig, col)
> REG = glm(Y ~ lig + col, family = poisson)
> YP = predict(REG, newdata = base)
> E = residuals(REG, "pearson")
> PROV.BE = rep(NA, 5000)
> PROVISION = rep(NA, 5000)
> for(k in 1:5000){
+   simE = sample(E, size = 36, replace = TRUE)
+   bruit = simE * sqrt(exp(YP))
+   INCsim = exp(YP) + bruit
+   INCM = matrix(INCsim, 6, 6)
+   CUMM = INCM
+   for(j in 2:6){ CUMM[, j] = CUMM[, j-1] + INCM[, j] }
+   PROV.BE[k] = CL(CUMM)
+   PROVISION[k] = CLboot(CUMM, lambda, sigma) }
@freakonometrics 43
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Random Generation of a Quasi-Distribution
It is also possible to generate Poisson, or quasi-Poisson random variables.
Recall that the negative binomial distribution has probability function
$$\mathbb{P}[N = k] = \frac{\Gamma(k+r)}{k!\,\Gamma(r)}\cdot[1-p]^r\,p^k$$
where the expected value and the variance are
$$\mu = r\cdot\frac{p}{1-p} \quad\text{and}\quad \sigma^2 = r\cdot\frac{p}{(1-p)^2}$$
Assume that $\sigma^2 = \varphi\cdot\mu$; then
$$r = \frac{\mu}{\varphi-1} \quad\text{and}\quad 1-p = \frac{1}{\varphi}$$
rqpois = function(n, lambda, phi) {
  return(rnbinom(n, size = lambda/(phi - 1), prob = 1/phi))   # R's 'prob' is 1 - p above
}
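A quick sanity check of this generator, with purely illustrative values:
set.seed(1)
N <- rqpois(1e5, lambda = 10, phi = 3)
c(mean(N), var(N))     # close to 10 and 30 (= phi * lambda)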
@freakonometrics 44
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Using GAM for claims reserving
In the case of GAMs, assume that
$$Y_{i,j} \sim \mathcal{L}(\theta_{i,j}), \quad\text{where}\quad \theta_{i,j} = \varphi\big(u(i) + v(j)\big),$$
where u and v are two unknown functions. We still have an additive form,
but on unknown transformations of the explanatory variates.
Spline functions are used to estimate the functions u and v.
> library(gam)
> GAM = gam(payinc ~ s(year, 5) + s(development, 3), data = D, family = poisson)
> plot.gam(GAM, se = TRUE, col = "red", ask = TRUE, main = "GAM model, df=5, df=3")
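A sketch of how the fitted GAM could then be used to complete the triangle (grid and variable names taken over from the Poisson example, hence assumptions):
D.new <- data.frame(year = rep(1988:(1987 + Ntr), times = Ntr),
                    development = rep(0:(Ntr - 1), each = Ntr))
payinc.gam <- exp(predict(GAM, newdata = D.new))               # log link: exp of the linear predictor
sum(payinc.gam[D.new$year + D.new$development > 1987 + Ntr])   # reserve = sum over future cells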
@freakonometrics 45
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
@freakonometrics 46
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Dealing with negative increments
Negative incremental values can arise due to the timing of reinsurance, recoveries,
or the cancellation of outstanding claims.
One might argue that the problem lies more with the data than with the methods.
England & Verrall (2002) mention that the Gaussian model is less affected by the
presence of negative incremental values. Unfortunately, one can hardly assume
that the data are Gaussian, because of the skewness. Renshaw & Verrall (1994)
suggested adding a “small constant” to the past data, and subtracting this
constant from the forecasts at the end.
@freakonometrics 47
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Dealing with negative increments
A classical technique to avoid negative payments is to
consider a translation of the incremental triangle,
i.e. $Y^{+}_{i,j} = Y_{i,j} + \kappa$ such that $Y^{+}_{i,j} > 0$ for all $i, j$.
@freakonometrics 48
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Why a Poisson regression model ?
There is no reason to assume that incremental payments are Poisson distributed.
The only motivation here is that the expected value is the same as the Chain
Ladder estimate.
[Figure: distribution of the total amount of reserves, GAM model with Chain Ladder.]
@freakonometrics 50
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Tweedie ?
The density of a Tweedie model with power parameter p would be
1 > ftweedie = function(y,p,mu ,psi){
2 + if(p==2){f = dgamma(y, 1/psi , 1/(psi*mu))} else
3 + if(p==1){f = dpois(y/psi , mu/psi)} else
4 + {lambda = mu^(2-p)/psi /(2-p)
5 + if(y==0){ f = exp(-lambda)} else
6 + { alpha = (2-p)/(p -1)
7 + beta = 1 / (psi * (p -1) * mu^(p -1))
8 + k = max (10, ceiling(lambda + 7*sqrt(lambda)))
9 + f = sum(dpois (1:k,lambda) * dgamma(y,alpha*(1:k),beta))
10 + }}
11 + return(f)
12 + }
A numerical problem is that we should have no missing values in the regression,
so artificially, consider
1 > source("http://freakonometrics.free.fr/bases.R")
@freakonometrics 51
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
2 > library(statmod)
3 > an <- 6; ligne = rep (1:an , each=an); colonne = rep (1:an , an)
4 > passe = (ligne + colonne - 1) <=an; n = sum(passe)
5 > INC=PAID
6 > INC [ ,2:6]= PAID [,2:6]- PAID [ ,1:5]
7 > Y = as.vector(INC)
8 > lig = as.factor(ligne)
9 > col = as.factor(colonne)
10 > y = Y[passe]
11 > Y[is.na(Y)]=.01
Then, we can run an econometric regression
> pltweedie <- function(pow){
+   regt = glm(Y ~ lig + col, tweedie(pow, 0))
+   reserve = sum(fitted.values(regt)[!passe])
+   dev = deviance(regt)
+   phi.hat = dev/n
+   mu = fitted.values(regt)[passe]
+   hat.logL = 0
+   for (k in 1:length(y)){
+     hat.logL <- hat.logL + log(ftweedie(y[k], pow, mu[k], phi.hat)) }
+   cat("Power =", round(pow, 3), "phi =", round(phi.hat, 2),
+       "Reserve (tot) =", round(reserve), "logL =", round(hat.logL, 3), "\n")
+   hat.logL }
> for(pow in c(1, 1.25, 1.5, 1.75, 2)){ pltweedie(pow) }
Power = 1    phi = 166.95 Reserve (tot) = 1345 logL = -Inf
Power = 1.25 phi = 42.92  Reserve (tot) = 1216 logL = -151.72
Power = 1.5  phi = 15.8   Reserve (tot) = 996  logL = -145.232
Power = 1.75 phi = 9.02   Reserve (tot) = 609  logL = -153.997
Power = 2    phi = 6.78   Reserve (tot) = 125  logL = -170.614
It is also possible to run an optimization routine,
1 > optimize(pltweedie , c(1.01 ,1.99) , tol =1e-4, maximum = TRUE)
2 -144.624
3 $maximum
@freakonometrics 53
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
4 [1] 1.427873
5
6 $objective
7 [1] -144.6237
Thus, here the Poisson model might not be the appropriate one.
[Figure: profile log-likelihood as a function of the Tweedie power parameter.]
@freakonometrics 54
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian Models in Claims Reserving
The first idea is to consider some credibility-based model, with
$$\widehat{C}_{i,n} = Z\cdot \widehat{C}^{\text{Mack}}_{i,n} + [1 - Z]\cdot\mu_i$$
given some a priori $\mu_i$.
For instance Benktander (1976) and Hovinen (1981) suggested
$$Z = 1 - [1-\beta_i]^2 \quad\text{where}\quad \beta_i = \prod_{k=n-i}^{n-1}\frac{1}{\lambda_k}$$
Note that
$$\widehat{C}_{i,n} = C_{i,n-i} + [1-\beta_i]\Big(\beta_i\cdot\widehat{C}^{\text{Mack}}_{i,n} + [1-\beta_i]\cdot\mu_i\Big)$$
More generally, consider the Cape Cod technique, writing
$$\widehat{C}_{i,n} = C_{i,n-i} + \left(1 - \frac{C_{i,n-i}}{\widehat{C}_{i,n}}\right)\widehat{C}_{i,n}$$
@freakonometrics 55
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
in the form
$$\widehat{C}_{i,n} = C_{i,n-i} + \left(1 - \frac{C_{i,n-i}}{\widehat{C}_{i,n}}\right)\mathrm{LR}_i\cdot P_i,$$
where $\mathrm{LR}_i$ is the loss ratio for year i, i.e. $\mathrm{LR}_i = C_{i,n}/P_i$. The idea of the
so-called Cape Cod method is to write a more general form,
$$\widehat{C}_{i,n} = C_{i,n-i} + (1 - \pi_{n-i})\,\mathrm{LR}_i\,P_i$$
where $\pi_{n-i}$ is a payment pattern, which can be estimated by the Chain Ladder
method. The $\mathrm{LR}_i$ are target loss ratios, corresponding to expert opinion. One
can also use a common target ratio for several occurrence years, and then set
$$\widehat{R}_i = \widehat{C}_{i,n} - C_{i,n-i} = (1 - \pi_{n-i})\,\mathrm{LR}_A\,P_i$$
for $i \in A$, where
$$\mathrm{LR}_A = \frac{\sum_{k\in A} C_{k,n-k}}{\sum_{k\in A} \pi_{n-k}\,P_k}.$$
@freakonometrics 56
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
As a first step, the $\pi_i$'s can be computed from the Chain Ladder method, i.e.
$$\pi_{n-i} = \frac{C_{i,n-i}}{\widehat{C}_{i,n}}$$
where the ultimate amount is the one predicted by the Chain-Ladder method.
> Cultime = MackChainLadder(PAID)$FullTriangle[,6]
> Cdiag <- diag(PAID[,6:1])
> (PI <- (1 - Cdiag/Cultime))
      1       2       3       4       5       6
0.00000 0.00471 0.00656 0.01086 0.02204 0.29181
> LR <- TRIANGLE[,6]/PREMIUM
> (Cultime - Cdiag)/(LR*PREMIUM)
      1       2       3       4       5       6
0.00000 0.00471 0.00656 0.01086 0.02204 0.29181
If we further assume that $A = \{1, 2, \cdots, n\}$, then
> LR = sum(TRIANGLE[,6])/sum(PREMIUM)
> PI*LR*PREMIUM
    1      2      3      4      5       6
  0.0   24.6   35.6   62.7  139.6  2095.3
> sum(PI*LR*PREMIUM)
[1] 2358
The total amount of reserves obtained here (2357.756) is lower than the one
obtained with the Chain Ladder method.
@freakonometrics 58
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian models and Chain Ladder
Generally speaking, a Bayesian method relies on two ingredients:
• a prior distribution for the parameters of the model (on $X_{i,j}$, $C_{i,j}$, $\lambda_{i,j}$,
$\mathrm{LR}_{i,j} = C_{i,j}/P_j$, etc.),
• a technique to compute the posterior distributions, which are usually rather
involved.
@freakonometrics 59
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian models for claim counts
Let $N_{i,j}$ denote the incremental claim counts, i.e. the number of claims that
occurred in year i and were reported in year i + j.
Let $M_i$ be the total number of claims per occurrence year, i.e.
$M_i = N_{i,0} + N_{i,1} + \cdots$. Assume that $M_i \sim \mathcal{P}(\lambda_i)$, and let $\boldsymbol{p} = (p_0, p_1, \cdots, p_n)$
denote the proportions reported in each development year.
Conditionally on $M_i = m_i$, occurrence years are independent, and the vector of
claim counts for occurrence year i follows a multinomial distribution $\mathcal{M}(m_i, \boldsymbol{p})$.
@freakonometrics 60
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian models for claim counts
The likelihood $\mathcal{L}(M_0, M_1, \cdots, M_n, \boldsymbol{p}\,|\,N_{i,j})$ is then
$$\prod_{i=0}^{n} \frac{M_i!}{(M_i - \overline{N}_{i,n-i})!\;N_{i,0}!\,N_{i,1}!\cdots N_{i,n-i}!}\;\big[1 - \overline{p}_{n-i}\big]^{M_i - \overline{N}_{i,n-i}}\; p_0^{N_{i,0}}\,p_1^{N_{i,1}}\cdots p_{n-i}^{N_{i,n-i}}$$
where $\overline{N}_{i,n-i} = N_{i,0} + N_{i,1} + \cdots + N_{i,n-i}$ and $\overline{p}_{n-i} = p_0 + p_1 + \cdots + p_{n-i}$.
We then need to specify a prior distribution for the parameters. The posterior
distribution is then proportional to the product of the likelihood and this prior.
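A small simulation sketch of this claim-count model, with purely illustrative parameter values:
set.seed(1)
n.year <- 5
lambda <- c(100, 110, 120, 130, 140)                     # Poisson means of the M_i
p.dev  <- c(.60, .20, .10, .07, .03)                     # reporting pattern p_0, ..., p_4
M <- rpois(n.year, lambda)                               # total claim counts per occurrence year
N.sim <- t(sapply(M, function(m) rmultinom(1, m, p.dev)))# N.sim[i, j+1]: claims of year i reported after j years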
@freakonometrics 61
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian models for aggregate payments
Set $Y_{i,j} = \log(C_{i,j})$, and assume that $Y_{i,j} = \mu + \alpha_i + \beta_j + \varepsilon_{i,j}$, where
$\varepsilon_{i,j} \sim \mathcal{N}(0, \sigma^2)$. Thus, $Y_{i,j}$ is normally distributed,
$$f(y_{i,j}\,|\,\mu, \boldsymbol{\alpha}, \boldsymbol{\beta}, \sigma^2) \propto \frac{1}{\sigma}\exp\left(-\frac{1}{2\sigma^2}\big[y_{i,j} - \mu - \alpha_i - \beta_j\big]^2\right),$$
and the likelihood is then
$$\mathcal{L}(\boldsymbol{\theta}, \sigma\,|\,\boldsymbol{Y}) \propto \sigma^{-m}\exp\left(-\frac{1}{2\sigma^2}\sum_{i,j}\big[y_{i,j} - \mu - \alpha_i - \beta_j\big]^2\right)$$
where $m = n(n+1)/2$ is the number of past observations. The difficulty is then
to specify a prior distribution for $(\boldsymbol{\theta}, \sigma^2)$, i.e. $(\mu, \boldsymbol{\alpha}, \boldsymbol{\beta}, \sigma^2)$.
@freakonometrics 62
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
MCMC and Bayesian Models
We have a sample $\boldsymbol{x} = \{x_1, \cdots, x_d\}$, i.i.d., from distribution $f_\theta(\cdot)$.
In predictive modeling, we need $\mathbb{E}(g(X)\,|\,\boldsymbol{x}) = \int g(x)\,f_{\theta|\boldsymbol{x}}(x)\,dx$ where
$$f_{\theta|\boldsymbol{x}}(x) = f(x\,|\,\boldsymbol{x}) = \int f(x\,|\,\theta)\cdot\pi(\theta\,|\,\boldsymbol{x})\,d\theta$$
How can we derive $\pi(\theta\,|\,\boldsymbol{x})$? Can we sample from $\pi(\theta\,|\,\boldsymbol{x})$ (and use Monte Carlo
techniques to approximate the integral)?
@freakonometrics 63
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Hastings-Metropolis
Back to our problem, we want to sample from $\pi(\theta\,|\,\boldsymbol{x})$,
i.e. generate $\theta_1, \cdots, \theta_n, \cdots$ from $\pi(\theta\,|\,\boldsymbol{x})$.
The Hastings-Metropolis sampler will generate a Markov chain $(\theta_t)$ as follows,
• generate $\theta_1$
• generate $\theta^\star$ and $U \sim \mathcal{U}([0,1])$, compute
$$R = \frac{\pi(\theta^\star\,|\,\boldsymbol{x})}{\pi(\theta_t\,|\,\boldsymbol{x})}\,\frac{P(\theta_t\,|\,\theta^\star)}{P(\theta^\star\,|\,\theta_t)}$$
if $U < R$ set $\theta_{t+1} = \theta^\star$, if $U \geq R$ set $\theta_{t+1} = \theta_t$.
R is the acceptance ratio: we accept the new state $\theta^\star$ with probability $\min\{1, R\}$.
@freakonometrics 64
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Hastings-Metropolis
Observe that
$$R = \frac{\pi(\theta^\star)\cdot f(\boldsymbol{x}\,|\,\theta^\star)}{\pi(\theta_t)\cdot f(\boldsymbol{x}\,|\,\theta_t)}\,\frac{P(\theta_t\,|\,\theta^\star)}{P(\theta^\star\,|\,\theta_t)}$$
In a more general case, we can have a Markov process, not a Markov chain,
e.g. $\theta^\star\,|\,\theta_t \sim \mathcal{N}(\theta_t, 1)$.
@freakonometrics 65
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Using MCMC to generate Gaussian values
1 > metrop1 <- function(n=1000 , eps =0.5){
2 + vec <- vector("numeric", n)
3 + x=0
4 + vec [1] <- x
5 + for (i in 2:n) {
6 + innov <- runif(1,-eps ,eps)
7 + mov <- x+innov
8 + aprob <- min(1, dnorm(mov)/dnorm(x))
9 + u <- runif (1)
10 + if (u < aprob)
11 + x <- mov
12 + vec[i] <- x
13 + }
14 + return(vec)}
@freakonometrics 66
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Using MCMC to generate Gaussian values
> plot.mcmc <- function(mcmc.out){
+   op <- par(mfrow=c(2,2))
+   plot(ts(mcmc.out), col="red")
+   hist(mcmc.out, 30, probability=TRUE, col="light blue")
+   lines(seq(-4, 4, by=.01), dnorm(seq(-4, 4, by=.01)), col="red")
+   qqnorm(mcmc.out)
+   abline(a=mean(mcmc.out), b=sd(mcmc.out))
+   acf(mcmc.out, col="blue", lag.max=100)
+   par(op)}
> metrop.out <- metrop1(10000, 1)
> plot.mcmc(metrop.out)
@freakonometrics 67
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Heuristics on Hastings-Metropolis
In standard Monte Carlo, generate $\theta_i$'s i.i.d., then
$$\frac{1}{n}\sum_{i=1}^{n} g(\theta_i) \rightarrow \mathbb{E}[g(\theta)] = \int g(\theta)\,\pi(\theta)\,d\theta$$
(strong law of large numbers).
Well-behaved Markov chains ($P$ aperiodic, irreducible, positive recurrent) can
satisfy some ergodic property, similar to that LLN. More precisely,
• $P$ has a unique stationary distribution $\lambda$, i.e. $\lambda = \lambda \times P$,
• ergodic theorem:
$$\frac{1}{n}\sum_{i=1}^{n} g(\theta_i) \rightarrow \int g(\theta)\,\lambda(\theta)\,d\theta$$
even if the $\theta_i$'s are not independent.
@freakonometrics 68
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Heuristics on Hastings-Metropolis
Remark The conditions mentioned above are
• aperiodic, the chain does not regularly return to any state in multiples of
some k.
• irreducible, the state can go from any state to any other state in some finite
number of steps
• positively recurrent, the chain will return to any particular state with
probability 1, and finite expected return time
@freakonometrics 69
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
MCMC and Loss Models
Example: a Tweedie model, $\mathbb{E}(X) = \mu$ and $\mathrm{Var}(X) = \varphi\cdot\mu^p$. Here assume that $\varphi$
and $p$ are given, and $\mu$ is the unknown parameter.
→ we need a predictive distribution for $\mu$ given $\boldsymbol{x}$.
Consider the following transition kernel (a Gamma distribution)
$$\mu^\star\,|\,\mu_t \sim \mathcal{G}\left(\frac{\mu_t}{\alpha}, \alpha\right)$$
with $\mathbb{E}(\mu^\star\,|\,\mu_t) = \mu_t$ and $\mathrm{CV}(\mu^\star) = \frac{1}{\sqrt{\alpha}}$.
Use some a priori distribution, e.g. $\mathcal{G}(\alpha_0, \beta_0)$.
@freakonometrics 70
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
MCMC and Loss Models
• generate $\mu_1$
• at step t: generate $\mu^\star \sim \mathcal{G}(\alpha^{-1}\mu_t, \alpha)$ and $U \sim \mathcal{U}([0,1])$, compute
$$R = \frac{\pi(\mu^\star)\cdot f(\boldsymbol{x}\,|\,\mu^\star)}{\pi(\mu_t)\cdot f(\boldsymbol{x}\,|\,\mu_t)}\,\frac{P_\alpha(\mu_t\,|\,\mu^\star)}{P_\alpha(\mu^\star\,|\,\mu_t)}$$
if $U < R$ set $\mu_{t+1} = \mu^\star$, if $U \geq R$ set $\mu_{t+1} = \mu_t$,
where
$$f(\boldsymbol{x}\,|\,\mu) = \mathcal{L}(\mu) = \prod_{i=1}^{n} f(x_i\,|\,\mu, p, \varphi),$$
$f(\cdot\,|\,\mu, p, \varphi)$ being the density of the Tweedie distribution, dtweedie(x, p, mu, phi)
from library(tweedie).
@freakonometrics 71
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
> library(tweedie)
> p = 2; phi = 2/5
> set.seed(1); X <- rtweedie(50, p, 10, phi)
> metrop2 <- function(n=10000, a0=10, b0=1, alpha=1){
+   vec <- vector("numeric", n)
+   vec[1] <- rgamma(1, a0, b0)
+   for (i in 2:n){
+     mustar <- rgamma(1, vec[i-1]/alpha, alpha)
+     R = prod(dtweedie(X, p, mustar, phi) / dtweedie(X, p, vec[i-1], phi)) *
+         dgamma(mustar, a0, b0) / dgamma(vec[i-1], a0, b0) *
+         dgamma(vec[i-1], mustar/alpha, alpha) / dgamma(mustar, vec[i-1]/alpha, alpha)
+     aprob <- min(1, R)
+     ifelse(runif(1) < aprob, vec[i] <- mustar, vec[i] <- vec[i-1])}
+   return(vec)}
> metrop.output <- metrop2(10000, alpha=1)
@freakonometrics 72
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Gibbs Sampler
For a multivariate problem, it is possible to use the Gibbs sampler.
Example: assume that the loss ratio of a company has a lognormal distribution,
$LN(\mu, \sigma^2)$, e.g.
> LR <- c(0.958, 0.614, 0.977, 0.921, 0.756)
Example: assume that we have a sample $\boldsymbol{x}$ from a $\mathcal{N}(\mu, \sigma^2)$ distribution. We want the
posterior distribution of $\theta = (\mu, \sigma^2)$ given $\boldsymbol{x}$. Observe here that if the priors are
the Gaussian distribution $\mathcal{N}(\mu_0, \tau^2)$ and the inverse Gamma distribution $IG(a, b)$, then
$$\mu\,|\,\sigma^2, \boldsymbol{x} \sim \mathcal{N}\left(\frac{\sigma^2}{\sigma^2 + n\tau^2}\,\mu_0 + \frac{n\tau^2}{\sigma^2 + n\tau^2}\,\overline{x},\ \frac{\sigma^2\tau^2}{\sigma^2 + n\tau^2}\right)$$
$$\sigma^2\,|\,\mu, \boldsymbol{x} \sim IG\left(\frac{n}{2} + a,\ \frac{1}{2}\sum_{i=1}^{n}[x_i - \mu]^2 + b\right)$$
More generally, we need the conditional distribution of $\theta_k\,|\,\theta_{-k}, \boldsymbol{x}$, for all $k$.
> x <- log(LR)
@freakonometrics 73
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Gibbs Sampler
> xbar <- mean(x); n <- length(x)
> mu <- sigma2 <- rep(0, 10000)
> sigma2[1] <- 1/rgamma(1, shape=1, rate=1)
> Z <- sigma2[1]/(sigma2[1] + n*1)
> mu[1] <- rnorm(1, m=Z*0 + (1-Z)*xbar, sd=sqrt(1*Z))
> for (i in 2:10000){
+   Z <- sigma2[i-1]/(sigma2[i-1] + n*1)
+   mu[i] <- rnorm(1, m=Z*0 + (1-Z)*xbar, sd=sqrt(1*Z))
+   sigma2[i] <- 1/rgamma(1, shape=n/2 + 1,
+                         rate=(1/2)*(sum((x - mu[i])^2)) + 1)
+ }
@freakonometrics 74
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Gibbs Sampler
Example: consider some vector $\boldsymbol{X} = (X_1, \cdots, X_d)$ with independent
components, $X_i \sim \mathcal{E}(\lambda_i)$. We want to sample from $\boldsymbol{X}$ given $\boldsymbol{X}^\top\boldsymbol{1} > s$ for some
threshold $s > 0$.
• start with some starting point $\boldsymbol{x}_0$ such that $\boldsymbol{x}_0^\top\boldsymbol{1} > s$,
• pick up (randomly) $i \in \{1, \cdots, d\}$: given $X_i > s - \boldsymbol{x}_{(-i)}^\top\boldsymbol{1}$, the overshoot
$X_i - (s - \boldsymbol{x}_{(-i)}^\top\boldsymbol{1})_+$ has an Exponential distribution $\mathcal{E}(\lambda_i)$, so
draw $Y \sim \mathcal{E}(\lambda_i)$ and set $x_i = y + (s - \boldsymbol{x}_{(-i)}^\top\boldsymbol{1})_+$, so that $\boldsymbol{x}_{(-i)}^\top\boldsymbol{1} + x_i > s$.
E.g. losses and allocated expenses.
@freakonometrics 75
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Gibbs Sampler
1 > sim <- NULL
2 > lambda <- c(1 ,2)
3 > X <- c(3 ,3)
4 > s <- 5
5 > for(k in 1:1000){
6 + i <- sample (1:2 ,1)
7 + X[i] <- rexp(1, lambda[i])+
8 + max(0,s-sum(X[-i]))
9 + while(sum(X)<s){
10 + X[i] <- rexp(1, lambda[i])+
11 + max(0,s-sum(X[-i])) }
12 + sim <- rbind(sim ,X) }
@freakonometrics 76
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
JAGS and STAN
Martyn Plummer developed JAGS (Just Another Gibbs Sampler) in 2007 (stable
since 2013), available in R through library(runjags). It is an open-source, enhanced,
cross-platform version of an earlier engine, BUGS (Bayesian inference Using Gibbs Sampling).
STAN (library(rstan)) is a newer tool that uses the Hamiltonian Monte Carlo
(HMC) sampler.
HMC uses information about the derivative of the posterior probability density
to improve the algorithm. These derivatives are supplied by algorithmic
differentiation in C/C++ code.
@freakonometrics 77
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
JAGS on the $\mathcal{N}(\mu, \sigma^2)$ distribution
> library(runjags)
> jags.model <- "
+ model {
+   mu ~ dnorm(mu0, 1/(sigma0^2))
+   g ~ dgamma(k0, theta0)
+   sigma <- 1 / g
+   for (i in 1:n) {
+     logLR[i] ~ dnorm(mu, g^2)
+   }
+ }"
> jags.data <- list(n=length(LR),
+   logLR=log(LR), mu0=-.2, sigma0=0.02,
+   k0=1, theta0=1)
> jags.init <- list(list(mu=log(1.2), g=1/0.5^2),
+   list(mu=log(.8), g=1/.2^2))
> model.out <- autorun.jags(jags.model,
+   data=jags.data, inits=jags.init,
+   monitor=c("mu", "sigma"), n.chains=2)
> traceplot(model.out$mcmc)
> summary(model.out)
@freakonometrics 78
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
STAN on the $\mathcal{N}(\mu, \sigma^2)$ distribution
> library(rstan)
> stan.model <- "
+ data {
+   int<lower=0> n;
+   vector[n] LR;
+   real mu0;
+   real<lower=0> sigma0;
+   real<lower=0> k0;
+   real<lower=0> theta0;
+ }
+ parameters {
+   real mu;
+   real<lower=0> sigma;
+ }
+ model {
+   mu ~ normal(mu0, sigma0);
+   sigma ~ inv_gamma(k0, theta0);
+   for (i in 1:n)
+     log(LR[i]) ~ normal(mu, sigma);
+ }"
> stan.data <- list(n=length(LR), LR=LR, mu0=mu0,
+   sigma0=sigma0, k0=k0, theta0=theta0)
> stan.out <- stan(model_code=stan.model, data=stan.data, seed=2)
> traceplot(stan.out)
> print(stan.out, digits_summary=2)
@freakonometrics 79
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
MCMC and Loss Models
Example: consider some simple time series of loss ratios,
$$LR_t \sim \mathcal{N}(\mu_t, \sigma^2) \quad\text{where}\quad \mu_t = \varphi\,\mu_{t-1} + \varepsilon_t$$
E.g. in JAGS we can define the vector $\boldsymbol{\mu} = (\mu_1, \cdots, \mu_T)$ recursively
+ model {
+   mu[1] ~ dnorm(mu0, 1/(sigma0^2))
+   for (t in 2:T) { mu[t] ~ dnorm(mu[t-1], 1/(sigma0^2)) }
+ }
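A sketch only: completing this fragment with an observation equation and hypothetical priors on φ and σ, in the runjags set-up used on the previous slides:
jags.lr.model <- "
model {
  mu[1] ~ dnorm(mu0, 1/(sigma0^2))
  for (t in 2:T) { mu[t] ~ dnorm(phi * mu[t-1], 1/(sigma0^2)) }
  for (t in 1:T) { LR[t] ~ dnorm(mu[t], 1/(sigma^2)) }
  phi ~ dunif(-1, 1)          # hypothetical prior on the autoregressive coefficient
  sigma ~ dunif(0, 1)         # hypothetical prior on the observation noise
}"
jags.lr.data <- list(T = length(LR), LR = LR, mu0 = 0.8, sigma0 = 0.2)
lr.out <- autorun.jags(jags.lr.model, data = jags.lr.data,
                       monitor = c("mu", "phi", "sigma"), n.chains = 2)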
@freakonometrics 80
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
A Bayesian version of Chain Ladder
Assume that λi,j ∼ N(µj , τj /Ci,j).
We can use a Gibbs sampler to get the distribution of the transition factors, as
well as a distribution for the reserves,
1 > source("http:// freakonometrics .free.fr/triangleCL.R")
2 > source("http:// freakonometrics .free.fr/bayesCL.R")
3 > mcmcCL <-bayesian.triangle(PAID)
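The file bayesCL.R itself is not reproduced in the slides. As a rough, self-contained sketch
of the kind of Gibbs step such a routine could perform under λi,j ∼ N(µj, τj/Ci,j), the
function below uses a flat prior on µj and an inverse-gamma(a, b) prior on τj, so that both
full conditionals are available in closed form; every name and prior choice here is an
illustrative assumption, not the actual content of bayesCL.R.
1 > gibbs.CL <- function(PAID, n.iter=1000, a=1, b=1){
2 +   n <- nrow(PAID)
3 +   mu <- matrix(NA, n.iter, n-1); tau <- matrix(NA, n.iter, n-1)
4 +   tau.cur <- rep(1, n-1)
5 +   for(k in 1:n.iter){
6 +     for(j in 1:(n-1)){
7 +       i.obs  <- 1:(n-j)                          # rows observed for development period j
8 +       lambda <- PAID[i.obs, j+1]/PAID[i.obs, j]  # individual development factors
9 +       w      <- PAID[i.obs, j]                   # weights C_{i,j}
10 +       mu.cur <- rnorm(1, sum(w*lambda)/sum(w), sqrt(tau.cur[j]/sum(w)))  # mu_j | tau_j
11 +       tau.cur[j] <- 1/rgamma(1, shape=a+length(i.obs)/2,
12 +                              rate=b+sum(w*(lambda-mu.cur)^2)/2)          # tau_j | mu_j
13 +       mu[k,j] <- mu.cur; tau[k,j] <- tau.cur[j]
14 +     }}
15 +   list(Lambda=mu, Tau=tau)}
One reserve path per iteration can then be obtained by completing the lower triangle with
factors drawn from these conditionals, which is, in spirit, what mcmcCL$reserves on the
next slide contains.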
@freakonometrics 81
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
1 > plot.mcmc(mcmcCL$Lambda[,1])
2 > plot.mcmc(mcmcCL$Lambda[,2])
3 > plot.mcmc(mcmcCL$reserves[,6])
4 > plot.mcmc(mcmcCL$reserves[,7])
5 > library(ChainLadder)
6 > MCL <- MackChainLadder(PAID)
7 > m <- sum(MCL$FullTriangle[,6]-
8 + diag(MCL$FullTriangle[,6:1]))
9 > stdev <- MCL$Total.Mack.S.E
10 > hist(mcmcCL$reserves[,7], probability=TRUE,
11 + breaks=20, col="light blue")
12 > x=seq(2000,3000,by=10)
13 > y=dnorm(x,m,stdev)
14 > lines(x,y,col="red")
@freakonometrics 82
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Other Bayesian models
In the reserving models considered here, assume that
λi,j | λj, σj², Ci,j ∼ N(λj, σj²/Ci,j)
Write γj = log(λj). Let λ denote the set of observations, i.e. the λi,j, and γ is the
parameter we want to estimate. The log-likelihood is then
log L(λ|γ, C, σ²) = Σi,j [ log(Ci,j/σj²) − (Ci,j/σj²)·(λi,j − exp(γj))² ]
Using Bayes’ theorem,
log L(γ|λ, C, σ²) = log π(γ) + log L(λ|γ, C, σ²) + constant
  (a posteriori)     (a priori)    (log-likelihood)
If a uniform distribution is used as prior, we obtain
log L(γ|λ, C, σ²) = log L(λ|γ, C, σ²) + constant
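As a reading aid (not in the original slides), the log-likelihood displayed above can be
coded directly; the sketch below assumes lambda and C are square matrices holding the
observed individual factors and cumulative payments, and sigma2 a vector of the σj².
1 > loglik.gamma <- function(gamma, lambda, C, sigma2){
2 +   n <- nrow(C); ll <- 0
3 +   for(i in 1:n) for(j in seq_len(n-i)){         # sum over past observations only
4 +     ll <- ll + log(C[i,j]/sigma2[j]) -
5 +       (C[i,j]/sigma2[j])*(lambda[i,j]-exp(gamma[j]))^2
6 +   }
7 +   ll                         # log-posterior up to a constant, under a uniform prior
8 + }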
@freakonometrics 83
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Conditional distributions can be computed in closed form only in some (very
limited) cases. In general, simulation methods are used to approximate them; in
particular, the Gibbs or Hastings-Metropolis algorithms can be used.
Start from an initial vector γ^(0) = (γ1^(0), · · · , γm^(0)), then iterate
γ1^(k+1) ∼ f(·|γ2^(k), · · · , γm^(k), λ, C, σ)
γ2^(k+1) ∼ f(·|γ1^(k+1), γ3^(k), · · · , γm^(k), λ, C, σ)
γ3^(k+1) ∼ f(·|γ1^(k+1), γ2^(k+1), γ4^(k), · · · , γm^(k), λ, C, σ)
...
γm−1^(k+1) ∼ f(·|γ1^(k+1), γ2^(k+1), · · · , γm−2^(k+1), γm^(k), λ, C, σ)
γm^(k+1) ∼ f(·|γ1^(k+1), γ2^(k+1), · · · , γm−1^(k+1), λ, C, σ)
With this algorithm, triangles C can then be simulated, and the process error
estimated.
@freakonometrics 84
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
The adaptive rejection Metropolis sampling algorithm can then be used to
simulate from these conditional distributions (cf. Balson (2008)).
The rejection method is based on the following idea:
• we wish to draw (independently) from a distribution f that we do not know
how to simulate
• we know how to simulate from a distribution g satisfying f(x) ≤ M g(x) for
all x, where M can be computed.
The algorithm to draw from f is then the following:
• repeat
◦ draw Y from the distribution g
◦ draw U from the uniform distribution on [0, 1], independently of Y ,
• while U > f(Y ) / (M g(Y )),
@freakonometrics 85
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
• then set X = Y .
This technique can be used to simulate a (standard) normal distribution from a
Laplace distribution, with density g(x) = 0.5 · exp(−|x|) and M = √(2e/π). But
this algorithm becomes very time-consuming when there are many rejections.
Adaptive rejection sampling is an extension of this algorithm, valid provided the
density is log-concave. It is also known as the method of chords.
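A minimal R sketch of this particular rejection sampler (standard normal target, Laplace
proposal, M = √(2e/π)); the function name and its vectorised output are illustrative
choices, not code from the course.
1 > rnorm.reject <- function(n){
2 +   M <- sqrt(2*exp(1)/pi)
3 +   out <- numeric(0)
4 +   while(length(out) < n){
5 +     Y <- rexp(1)*sample(c(-1,1),1)              # draw from the Laplace proposal g
6 +     U <- runif(1)
7 +     if(U <= dnorm(Y)/(M*0.5*exp(-abs(Y)))) out <- c(out, Y)
8 +   }
9 +   out}
10 > X <- rnorm.reject(10000); qqnorm(X); qqline(X)
With this bound, the acceptance probability is 1/M ≈ 0.76.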
@freakonometrics 86
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
The function log f is bounded locally from above by linear functions, which
yields an envelope of log f.
The density f is then bounded from above by a function gn that depends on the
chosen grid of nodes.
Formally, let Li,j(x) denote the straight line joining the points (xi, log(f(xi))) and
@freakonometrics 87
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
(xj, log(f(xj))). For x ∈ [xi, xi+1], set
hn(x) = min {Li−1,i(x), Li+1,i+2(x)} ,
which defines an envelope of log(f) (by concavity of log(f)). A rejection
algorithm is then used with reference density
gn(x) = exp(hn(x)) / ∫ exp(hn(t)) dt,
normalised so that it defines a density.
• repeat
◦ draw Y from the distribution gn
◦ draw U from the uniform distribution on [0, 1], independently of Y ,
• while U > f(Y ) / exp(hn(Y )),
• then set X = Y .
@freakonometrics 88
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Finally, adaptive rejection Metropolis sampling adds an extra step to handle
densities that are not log-concave. The idea is to use the previous technique,
even though hn is no longer necessarily an envelope of log(f), and then to add a
further acceptance step. Recall that the goal is to implement a Gibbs algorithm,
i.e. to build a sequence of variables X1, X2, · · · .
Suppose Xk−1 is available. To draw Xk, run the previous algorithm to obtain a
candidate X; the new acceptance step is then the following:
• draw U from the uniform distribution on [0, 1], independently of X and of Xk−1,
◦ if U > min{1, [f(X) min{f(Xk−1), exp(hn(Xk−1))}] / [f(Xk−1) min{f(X), exp(hn(X))}]},
then keep Xk = Xk−1
◦ otherwise set Xk = X
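In R, this extra acceptance step could look like the sketch below, where f is the (possibly
unnormalised) target density, hn the current piecewise-linear hull, X the candidate
returned by the rejection step and Xprev the previous state of the chain; all names are
illustrative.
1 > arms.accept <- function(X, Xprev, f, hn){
2 +   R <- (f(X)*min(f(Xprev), exp(hn(Xprev)))) /
3 +        (f(Xprev)*min(f(X), exp(hn(X))))
4 +   if(runif(1) > min(1, R)) Xprev else X       # reject: keep the previous state
5 + }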
@freakonometrics 89
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
R code for the ARMS algorithm
These piecewise exponential functions are convenient because they are easy to
simulate from. The function hn is piecewise linear, with nodes Nk, so that
hn(x) = ak x + bk for all x ∈ [Nk, Nk+1].
Then gn(x) = exp(hn(x)) / In, where In = ∫ exp(hn(t)) dt, the contribution of each
segment [Nk, Nk+1] to this integral being (exp[hn(Nk+1)] − exp[hn(Nk)]) / ak.
One then computes Gn, the cumulative distribution function associated with gn,
and uses an inversion method to draw from Gn, as in the sketch below.
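The slides rely on external code for this step; the function below is only a sketch of that
inversion, assuming the hull is described by K+1 increasing nodes N and by the slopes a
and intercepts b of hn on each of the K segments (all slopes assumed non-zero).
1 > rgn <- function(n, N, a, b){
2 +   K <- length(a)
3 +   I <- (exp(a*N[-1]+b)-exp(a*N[-(K+1)]+b))/a    # mass of exp(hn) on each segment
4 +   out <- numeric(n)
5 +   for(s in 1:n){
6 +     k <- sample(1:K, 1, prob=I)                 # pick a segment proportionally to its mass
7 +     u <- runif(1)
8 +     out[s] <- log(exp(a[k]*N[k])+u*(exp(a[k]*N[k+1])-exp(a[k]*N[k])))/a[k]
9 +   }
10 +   out}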
@freakonometrics 90
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian estimation for reserves
[Trace plot of the simulated total reserves against the iteration number (0 to 1000), with values roughly between 2200 and 2700.]
@freakonometrics 91
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian estimation for reserves
@freakonometrics 92
Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian estimation for reserves
@freakonometrics 93
Slides ensae 11bis

  • 1. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Actuariat de l’Assurance Non-Vie # 11 A. Charpentier (UQAM & Université de Rennes 1) ENSAE ParisTech, Octobre 2015 - Janvier 2016. http://freakonometrics.hypotheses.org @freakonometrics 1
  • 2. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Regression Models in Claims Reserving A natural idea is to assume that incremental payments Yi,j can be explained by two factors: one related to occurrence year i, and one development factor, related to j. Formally, we assume that Yi,j ∼ L(θi,j), where θi,j = αi · βj i.e. Yi,j is a random variable, with distribution L, where parameter(s) can be related to the two factors. @freakonometrics 2
  • 3. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Poisson regression in claims reserving Renshaw & Verrall (1998) proposed to use a Poisson regression for incremental payments to estimate claim reserve, i.e. Yi,j ∼ P (exp [γ + αi + βj]) . 1 devF=as.factor(development); anF=as.factor(year) 2 REG=glm(vec.C~devF+anF , family = "Poisson") Here, 1 > summary(REG) 2 Call: 3 glm(formula = vec.C ~ anF + devF , family = poisson(link = "log"), 4 data = triangle) 5 6 Deviance Residuals: 7 Min 1Q Median 3Q Max 8 -2.343e+00 -4.996e -01 9.978e -07 2.770e-01 3.936e+00 @freakonometrics 3
  • 4. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 9 10 Coefficients : 11 Estimate Std. Error z value Pr(>|z|) 12 (Intercept) 8.05697 0.01551 519.426 < 2e-16 *** 13 anF1989 0.06440 0.02090 3.081 0.00206 ** 14 anF1990 0.20242 0.02025 9.995 < 2e-16 *** 15 anF1991 0.31175 0.01980 15.744 < 2e-16 *** 16 anF1992 0.44407 0.01933 22.971 < 2e -16 *** 17 anF1993 0.50271 0.02079 24.179 < 2e -16 *** 18 devF1 -0.96513 0.01359 -70.994 < 2e -16 *** 19 devF2 -4.14853 0.06613 -62.729 < 2e-16 *** 20 devF3 -5.10499 0.12632 -40.413 < 2e -16 *** 21 devF4 -5.94962 0.24279 -24.505 < 2e-16 *** 22 devF5 -5.01244 0.21877 -22.912 < 2e -16 *** 23 --- 24 25 ( Dispersion parameter for poisson family taken to be 1) 26 @freakonometrics 4
  • 5. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 27 Null deviance: 46695.269 on 20 degrees of freedom 28 Residual deviance: 30.214 on 10 degrees of freedom 29 AIC: 209.52 30 31 Number of Fisher Scoring iterations: 4 Again, it is possible to summarize this information in triangles.... Predictions can be used to complete the triangle. 1 ANew=rep(1 :Ntr),times=Ntr) ; DNew=rep (0 :(Ntr -1),each=Ntr) 2 P=predict(REG , newdata=data.frame(A=as.factor(ANew),D=as.factor( DNew))) 3 payinc.pred= exp(matrix(as.numeric(P),nrow=n,ncol=n)) 4 noise = payinc -payinc.pred 1 year development paycum payinc payinc.pred noise 2 1 1988 0 3209 3209 3155.699242 5.330076e+01 3 2 1989 0 3367 3367 3365.604828 1.395172e+00 4 3 1990 0 3871 3871 3863.737217 7.262783e+00 @freakonometrics 5
  • 6. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 5 4 1991 0 4239 4239 4310.096418 -7.109642e+01 6 5 1992 0 4929 4929 4919.862296 9.137704e+00 7 6 1993 0 5217 5217 5217.000000 1.818989e-12 8 7 1988 1 4372 1163 1202.109851 -3.910985e+01 9 8 1989 1 4659 1292 1282.069808 9.930192e+00 10 9 1990 1 5345 1474 1471.824853 2.175147e+00 11 10 1991 1 5917 1678 1641.857784 3.614222e+01 12 11 1992 1 6794 1865 1874.137704 -9.137704e+00 13 12 1988 2 4411 39 49.820712 -1.082071e+01 14 13 1989 2 4696 37 53.134604 -1.613460e+01 15 14 1990 2 5398 53 60.998886 -7.998886e+00 16 15 1991 2 6020 103 68.045798 3.495420e+01 17 16 1988 3 4428 17 19.143790 -2.143790e+00 18 17 1989 3 4720 24 20.417165 3.582835e+00 19 18 1990 3 5420 22 23.439044 -1.439044e+00 20 19 1988 4 4435 7 8.226405 -1.226405e+00 21 20 1989 4 4730 10 8.773595 1.226405e+00 22 21 1988 5 4456 21 21.000000 -2.842171e-14 @freakonometrics 6
  • 7. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 The pearson residuals are εP i,j = Xi,j − µi,j µi,j , The deviance residuals are εD i,j = Xi,j − µi,j di,j , Pearson’s error can be obtained from function resid=residuals(REG,"pearson"), and summarized in a triangle 1 > PEARSON 2 [,1] [,2] [,3] [,4] [,5] [,6] 3 [1,] 9.4882e-01 -1.128012 -1.5330 -0.48996 -0.42759 -6.2021e-15 4 [2,] 2.4048e-02 0.277333 -2.2134 0.79291 0.41404 NA 5 [3,] 1.1684e-01 0.056697 -1.0241 -0.29723 NA NA 6 [4,] -1.0829e+00 0.891963 4.2373 NA NA NA 7 [5,] 1.3027e-01 -0.211074 NA NA NA NA 8 [6,] 2.5183e-14 NA NA NA NA NA @freakonometrics 7
  • 8. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Errors in GLMs @freakonometrics 8
  • 9. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 log-Poisson regression and Chain-Ladder The log-Poisson is interesting since it (usually) provides the same amount of reserves as Chain Ladder. 1 > library( ChainLadder ) 2 > an <- 10; ligne = rep (1:an , each=an); colonne = rep (1:an , an) 3 > passe = (ligne + colonne - 1) <=an; n = sum(passe) 4 > PAID=GenIns; INC=PAID 5 > INC [,2:an]= PAID [,2:an]-PAID [ ,1:(an -1)] 6 > Y = as.vector(INC) 7 > lig = as.factor(ligne) 8 > col = as.factor(colonne) 9 > base = data.frame(Y,col ,lig) 10 > reg=glm(Y~col+lig ,data=base ,family="poisson") 11 > sum(exp(predict(reg ,newdata=base))[passe!=TRUE ]) 12 [1] 18680856 @freakonometrics 9
  • 10. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 log-Poisson regression and Chain-Ladder 1 > MackChainLadder (GenIns) 2 MackChainLadder (Triangle = GenIns) 3 Latest Dev.To.Date Ultimate IBNR Mack.S.E CV(IBNR) 4 1 3 ,901 ,463 1.0000 3 ,901 ,463 0 0 NaN 5 2 5 ,339 ,085 0.9826 5 ,433 ,719 94 ,634 71 ,835 0.759 6 3 4 ,909 ,315 0.9127 5 ,378 ,826 469 ,511 119 ,474 0.254 7 4 4 ,588 ,268 0.8661 5 ,297 ,906 709 ,638 131 ,573 0.185 8 5 3 ,873 ,311 0.7973 4 ,858 ,200 984 ,889 260 ,530 0.265 9 6 3 ,691 ,712 0.7223 5 ,111 ,171 1 ,419 ,459 410 ,407 0.289 10 7 3 ,483 ,130 0.6153 5 ,660 ,771 2 ,177 ,641 557 ,796 0.256 11 8 2 ,864 ,498 0.4222 6 ,784 ,799 3 ,920 ,301 874 ,882 0.223 12 9 1 ,363 ,294 0.2416 5 ,642 ,266 4 ,278 ,972 970 ,960 0.227 13 10 344 ,014 0.0692 4 ,969 ,825 4 ,625 ,811 1 ,362 ,981 0.295 14 Totals 15 Latest: 34 ,358 ,090.00 16 Ultimate: 53 ,038 ,945.61 17 IBNR: 18 ,680 ,855.61 @freakonometrics 10
  • 11. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 18 Mack S.E.: 2 ,441 ,364.13 19 CV(IBNR): 0.13 @freakonometrics 11
  • 12. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 An explicit expression to quantify uncertainty Recall that we while to estimate E([R − R]2 ) = E(R) − E(R) 2 + Var(R − R) ≈ Var(R) + Var(R) Classically, consider a log-Poisson model, were incremental payments satisfy Yi,j ∼ P(µi,j) where µi,j = exp[ηi,j] = exp[γ + αi + βj] Using the delta method, we get that asymptotically Var(Yi,j) = Var(µi,j) ≈ ∂µi,j ∂ηi,j 2 Var(ηi,j) where, since we consider a log link, ∂µi,j ∂ηi,j = µi,j @freakonometrics 12
  • 13. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 i.e., with an ODP distribution (i.e. Var(Yi,j) = ϕE(Yi,j), E [Yi,j − Yi,j]2 ≈ ϕ · µi,j + µ2 i,j · Var(ηi,j) and Cov(Yi,j, Yk,l) ≈ µi,j · µk,l · Cov (ηi,j, ηk,l) Thus, since the overall amount of reserves satisfies E [R − R]2 ≈ i+j−1>n ϕ · µi,j + µ Var(η)µ. 1 > an <- 6; ligne = rep (1:an , each=an); colonne = rep (1:an , an) 2 > passe = (ligne + colonne - 1) <=an; np = sum(passe) 3 > futur = (ligne + colonne - 1)> an; nf = sum(passe) 4 > INC=PAID 5 > INC [ ,2:6]= PAID [,2:6]- PAID [ ,1:5] 6 > Y = as.vector(INC) @freakonometrics 13
  • 14. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 7 > lig = as.factor(ligne); col = as.factor(colonne) 8 > 9 > CL <- glm(Y~lig+col , family= quasipoisson ) 10 > Y2=Y; Y2[is.na(Y)]=.001 11 > CL2 <- glm(Y2~lig+col , family= quasipoisson ) 12 > YP = predict(CL) 13 > p = 2*6-1; 14 > phi.P = sum(residuals(CL ,"pearson")^2)/(np -p) 15 > Sig = vcov(CL) 16 > X = model.matrix(CL2) 17 > Cov.eta = X%*%Sig%*%t(X) 18 > mu.hat = exp(predict(CL ,newdata=data.frame(lig ,col)))*futur 19 > pe2 = phi.P * sum(mu.hat) + t(mu.hat) %*% Cov.eta %*% mu.hat 20 > cat("Total reserve =", sum(mu.hat), "prediction error =", sqrt(pe2 ),"n") 21 Total reserve = 2426.985 prediction error = 131.7726 i.e. E [R − R]2 = 131.77. @freakonometrics 14
  • 15. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Uncertainty and bootstrap simulations Based on that theoretical triangle, it is possible to generate residuals to obtain a simulated triangle. Since the size of the sample is small (here 21 observed values), assuming normality for Pearson’s residuals can be too restrictive. Resampling bootstrap procedure can then be more robust. In order to get the loss distribution, it is possible to use bootstrap techniques to generate a matrix of errors, see Renshaw & Verrall (1994). They suggest to boostrap Pearson’s residuals, and the simulation procedure is the following • estimate the model parameter (GLM), β, • calculate fitted values µi,j, and the residuals ri,j = Yi,j − µi,j V (µi,j) , • forecast with original data µi,j for i + j > n. Then can start the bootstrap loops, repeating B times @freakonometrics 15
  • 16. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 • resample the residuals with resample, and get a new sample r (b) i,j , • create a pseudo sample solving Y ∗ i,j = µi,j + r (b) i,j × V (µi,j), • estimate the model using GLM procedure and derive boostrap forecast Let resid.sim be resampled residuals. Note that REG$fitted.values (called here payinc.pred) is the vector containing the µi,j’s. And further V (µi,j) is here simply REG$fitted.values since the variance function for the Poisson regression is the identity function. Hence, here Y ∗ i,j = µi,j + r (b) i,j × µi,j and thus, set 1 resid.sim = sample(resid ,Ntr*(Ntr +1)/2,replace=TRUE) 2 payinc.sim = resid.sim*sqrt(payinc.pred)+payinc.pred 3 4 [,1] [,2] [,3] [,4] [,5] [,6] @freakonometrics 16
  • 17. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 5 [1,] 3155.699 1216.465 42.17691 18.22026 9.021844 22.89738 6 [2,] 3381.694 1245.399 84.02244 18.20322 11.122243 NA 7 [3,] 3726.151 1432.534 61.44170 23.43904 NA NA 8 [4,] 4337.279 1642.832 74.58658 NA NA NA 9 [5,] 4929.000 1879.777 NA NA NA NA 10 [6,] 5186.116 NA NA NA NA NA For this simulated triangle, we can use Chain-Ladder estimate to derive a simulated reserve amount (here 2448.175). Figure below shows the empirical distribution of those amounts based on 10, 000 random simulations. @freakonometrics 17
  • 18. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 @freakonometrics 18
  • 19. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Parametric or nonparametric Monte Carlo ? A natural idea would be to assume that Pearson residual have a Gaussian distribution, qqnorm(R); qqline(R) The graph on the right draw point with a size proportional to its Cook’s distance. @freakonometrics 19
  • 20. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Instead of resampling in the sample obtained, we can also directly draw from a normal distribution, i.e. 1 > rnorm(length(R),mean=mean(R),sd=sd(R)) @freakonometrics 20
  • 21. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 The second triangle is obtained using a Student t distribution (the blue line being the bootstrap estimate). 0.80 0.85 0.90 0.95 1.00 2400245025002550260026502700 VaR for total reserves probability level quantilelevel Student Normal bootstrap Note that the bootstrap technique is valid only in the case were the residuals are perfectly independent. @freakonometrics 21
  • 22. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 In R, it is also possible to use the BootChainLadder(Triangle , R = 999, process.distr = "od.pois") function. @freakonometrics 22
  • 23. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Going further So far, we have derived a ditrisbution for the best estimate of total reserves. Note tat it is possible to estimate a scale parameter φ. England & Verrall (1999) suggested φ = ε2 i,j n − p where the summation is over all past observations. @freakonometrics 23
  • 24. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Bootstrap Chain-Ladder 1 > I=as.matrix(read.table("D: triangleC.csv",sep=";",header=FALSE)) 2 > BCL <- BootChainLadder (Triangle = I, R = 999, process.distr = "od. pois") 3 > BCL 4 BootChainLadder (Triangle = I, R = 999, process.distr = "od.pois") 5 6 Latest Mean Ultimate Mean IBNR SD IBNR IBNR 75% IBNR 95% 7 1 4 ,456 4 ,456 0.0 0.0 0 0 8 2 4 ,730 4 ,752 22.0 11.8 28 45 9 3 5 ,420 5 ,455 35.3 14.6 44 61 10 4 6 ,020 6 ,086 66.2 20.8 78 102 11 5 6 ,794 6 ,947 152.7 29.1 170 205 12 6 5 ,217 7 ,364 2 ,146.9 112.5 2 ,214 2 ,327 13 14 Totals 15 Latest: 32 ,637 16 Mean Ultimate: 35 ,060 @freakonometrics 24
  • 25. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 17 Mean IBNR: 2 ,423 18 SD IBNR: 131 19 Total IBNR 75%: 2 ,501 20 Total IBNR 95%: 2 ,653 Histogram of Total.IBNR Total IBNR Frequency 2000 2200 2400 2600 2800 3000 0100200300 2000 2200 2400 2600 2800 3000 0.00.40.8 ecdf(Total.IBNR) Total IBNR Fn(x) q q qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qq qqqqqqq q q qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqq q q q q qq q qqq qq q qqqq q qq qqq q q 1 2 3 4 5 6 450060007500 Simulated ultimate claims cost origin period ultimateclaimscosts qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq 
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq 
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq q Mean ultimate claim qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq q q q qq qq q q q q qq qq q q q qq q q q q qqq qq qqq qq q q qq q qq q qq qqqq q qq qq q qqq qq qqqqq q qq qqqqqqqq q q qq q qq qq q qqqq q qq q qqq q qq qqqq q q qq qqq q q qq qq qq q qq qq qqq q q qq q q qq qqqq qqqqq q q qq q q q qqq q q qqqqqqqq q qq qqqq q qqqqq qqq qq qq qq q q qq qqqqqq q qqqq q q 1 2 3 4 5 6 020004000 Latest actual incremental claims against simulated values origin period latestincrementalclaims qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq 
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq 
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq Latest actual @freakonometrics 25
  • 26. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 From Poisson to Over-Dispersed Poisson Classical, in GLMs we consider distributions with density f(z|θ, φ) = exp zθ − b(θ) φ + c(z, φ) , e.g. for the Poisson distribution P(λ) then f(z|λ) = exp(−λ) λz z! = exp z log λ − λ − log z! , z ∈ N, with θ = log λ, φ = 1, b(θ) = exp θ = λ and c(z, φ) = − log z!. Assume that φ = 1 becomes an additional parameter (that should be estimated). Note that in that case f(z|λ) is not any more a density, but it is a quasidensity. Further, note that V ar(Z) = φE(Z). Thus, if φ > 1 there is overdispersion. @freakonometrics 26
  • 27. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 On quasiPoisson regression In order to understand the rule of the additional parameter, recall that for the Gaussien linear model, N(µ, σ2 ) it is an exponential distribution with θ = µ, b(θ) = θ2 /2, φ = σ2 and c(z, φ) = − 1 2 y2 σ2 + log(2πσ2 ) . Thus, φ is the variance parameter Y |X ∼ N(Xβ, σ2 ) In that linear model, estimation is done based on the following process, • estimate β as β = (X X)−1 X Y • derive the implied residuals, ε = Y − Xβ • estimate σ as the variance of the implied residuals @freakonometrics 27
  • 28. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Thus, φ does not impact the estimation of the coefficient, but it will impact their significativity. 1 > an <- 10; ligne = rep (1:an , each=an); colonne = rep (1:an , an) 2 > passe = (ligne + colonne - 1) <=an; n = sum(passe) 3 > PAID=GenIns; INC=PAID 4 > INC[,2:an]= PAID [,2:an]-PAID [ ,1:(an -1)] 5 > Y = as.vector(INC) 6 > lig = as.factor(ligne) 7 > col = as.factor(colonne) 8 > base = data.frame(Y,col ,lig) 9 > reg1=glm(Y~col+lig ,data=base ,family="poisson") 10 > reg2=glm(Y~col+lig ,data=base ,family=" quasipoisson ") 11 > summary(reg1) 12 Call: 13 glm(formula = Y ~ col + lig , family = "poisson", data = base) 14 Coefficients : 15 Estimate Std. Error z value Pr(>|z|) 16 (Intercept) 12.5064047 0.0007540 16587.372 < 2e-16 *** @freakonometrics 28
  • 29. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 17 col2 0.3312722 0.0006694 494.848 < 2e-16 *** 18 col3 0.3211186 0.0006877 466.963 < 2e -16 *** 19 col4 0.3059600 0.0007008 436.570 < 2e -16 *** 20 col5 0.2193163 0.0007324 299.461 < 2e-16 *** 21 col6 0.2700770 0.0007445 362.755 < 2e -16 *** 22 col7 0.3722084 0.0007606 489.344 < 2e -16 *** 23 col8 0.5533331 0.0008133 680.377 < 2e-16 *** 24 col9 0.3689342 0.0010429 353.772 < 2e-16 *** 25 col10 0.2420330 0.0018642 129.830 < 2e -16 *** 26 lig2 0.9125263 0.0006490 1406.042 < 2e-16 *** 27 lig3 0.9588306 0.0006652 1441.374 < 2e -16 *** 28 lig4 1.0259970 0.0006840 1499.927 < 2e-16 *** 29 lig5 0.4352762 0.0008019 542.814 < 2e -16 *** 30 lig6 0.0800565 0.0009364 85.492 < 2e -16 *** 31 lig7 -0.0063815 0.0010390 -6.142 8.14e-10 *** 32 lig8 -0.3944522 0.0013529 -291.560 < 2e -16 *** 33 lig9 0.0093782 0.0013963 6.716 1.86e -11 *** 34 lig10 -1.3799067 0.0039097 -352.946 < 2e -16 *** @freakonometrics 29
  • 30. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 35 --- 36 ( Dispersion parameter for poisson family taken to be 1) 37 Null deviance: 10699464 on 54 degrees of freedom 38 Residual deviance: 1903014 on 36 degrees of freedom 39 (45 observations deleted due to missingness ) 40 AIC: 1903877 41 Number of Fisher Scoring iterations: 4 1 > summary(reg2) 2 Call: 3 glm(formula = Y ~ col + lig , family = " quasipoisson ", data = base) 4 Coefficients : 5 Estimate Std. Error t value Pr(>|t|) 6 (Intercept) 12.506405 0.172924 72.323 < 2e -16 *** 7 col2 0.331272 0.153537 2.158 0.03771 * 8 col3 0.321119 0.157719 2.036 0.04916 * 9 col4 0.305960 0.160736 1.903 0.06499 . 10 col5 0.219316 0.167970 1.306 0.19994 11 col6 0.270077 0.170756 1.582 0.12247 @freakonometrics 30
  • 31. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 12 col7 0.372208 0.174451 2.134 0.03976 * 13 col8 0.553333 0.186525 2.967 0.00532 ** 14 col9 0.368934 0.239181 1.542 0.13170 15 col10 0.242033 0.427562 0.566 0.57485 16 lig2 0.912526 0.148850 6.131 4.65e-07 *** 17 lig3 0.958831 0.152569 6.285 2.90e-07 *** 18 lig4 1.025997 0.156883 6.540 1.33e-07 *** 19 lig5 0.435276 0.183914 2.367 0.02344 * 20 lig6 0.080057 0.214770 0.373 0.71152 21 lig7 -0.006381 0.238290 -0.027 0.97878 22 lig8 -0.394452 0.310289 -1.271 0.21180 23 lig9 0.009378 0.320249 0.029 0.97680 24 lig10 -1.379907 0.896690 -1.539 0.13258 25 --- 26 ( Dispersion parameter for quasipoisson family taken to be 52601.93) 27 Null deviance: 10699464 on 54 degrees of freedom 28 Residual deviance: 1903014 on 36 degrees of freedom 29 (45 observations deleted due to missingness ) @freakonometrics 31
  • 32. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 30 AIC: NA 31 Number of Fisher Scoring iterations: 4 Thus, coefficients are identical so it not affect the best estimate of claims reserves.... unless we take into account the fact that some variates are no longer significant..... 1 > base = data.frame(Y,col ,lig) 2 > base$lig[base$lig=="7"]="1" 3 > base$lig[base$lig=="9"]="1" 4 > base$lig[base$lig=="6"]="1" 5 > base$col[base$col=="5"]="1" 6 > base$col[base$col=="10"]="1" 7 > base$col[base$col=="9"]="1" 8 > base$col[base$col=="6"]="1" 9 > base$col[base$col=="4"]="1" 10 > base$col[base$col=="3"]="1" 11 > base$col[base$col=="7"]="1" 12 > base$col[base$col=="2"]="1" @freakonometrics 32
  • 33. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 13 > base$lig[base$lig=="8"]="1" 14 > summary(glm(Y~col+lig ,data=base ,family=" quasipoisson ")) 15 Call: 16 glm(formula = Y ~ col + lig , family = " quasipoisson ", data = base) 17 Coefficients : 18 Estimate Std. Error t value Pr(>|t|) 19 (Intercept) 12.73401 0.07764 164.022 < 2e-16 *** 20 col8 0.28877 0.14109 2.047 0.04618 * 21 lig2 0.96246 0.10984 8.763 1.59e -11 *** 22 lig3 0.99721 0.11232 8.878 1.07e -11 *** 23 lig4 1.06465 0.11481 9.273 2.82e -12 *** 24 lig5 0.45513 0.14622 3.113 0.00312 ** 25 lig10 -1.60752 0.85482 -1.881 0.06611 . 26 --- 27 ( Dispersion parameter for quasipoisson family taken to be 49241.53) 28 Null deviance: 10699464 on 54 degrees of freedom 29 Residual deviance: 2442092 on 48 degrees of freedom 30 (45 observations deleted due to missingness ) @freakonometrics 33
  • 34. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 31 AIC: NA 32 Number of Fisher Scoring iterations: 4 Thus, 1 > M= cbind(Y,predict(reg1 ,newdata=base0 ,type="response"), 2 + predict(reg2 ,newdata=base0 ,type="response"), 3 + predict(reg3 ,newdata=base , type="response")) 4 > sum(M[is.na(Y)==TRUE ,2]) 5 [1] 18680856 6 > sum(M[is.na(Y)==TRUE ,3]) 7 [1] 18680856 8 > sum(M[is.na(Y)==TRUE ,4]) 9 [1] 18226919 Including an overdispersion parameter φ might impact the estimation of the overall reserves. @freakonometrics 34
  • 35. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Testing for overdispersion In order to test for overdispersion in an econometric model, we need to specify how overdispersion appears. A classical test is to assume that V ar(Y |X) = E(Y |X) + τE(Y |X)2 which is a standard econometric model with random effect. We want to test H0 : τ = 0 against H1 : τ > 0 A standard test statistics is T = n i=1[Yi − λi]2 − Yi 2 n i=1 λ2 i which has a N(0, 1) distribution under H0. An alternative is to consider T = n i=1[Yi − λi]2 − Yi n i=1[[Yi − λi]2 − Yi]2 @freakonometrics 35
  • 36. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Those test can be found in R, respectively 1 > library(AER) 2 > dispersiontest (reglmp) 3 > dispersiontest (reglmp ,trafo = 2) An alternative is simply the following 1 > library(ChainLadder) 2 > an <- 10; ligne = rep (1:an , each=an); colonne = rep (1:an , an) 3 > passe = (ligne + colonne - 1) <=an; n = sum(passe) 4 > PAID=GenIns; INC=PAID 5 > INC [,2:an]= PAID [,2:an]-PAID [ ,1:(an -1)] 6 > Y = as.vector(INC) 7 > lig = as.factor(ligne) 8 > col = as.factor(colonne) 9 > base = data.frame(Y,col ,lig) 10 > reg1=glm(Y~col+lig ,data=base ,family="poisson") 11 > reg2=glm(Y~col+lig ,data=base ,family=" quasipoisson ") 12 > dispersiontest (reg1) @freakonometrics 36
  • 37. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 13 Overdispersion test 14 data: reg1 15 z = 4.3942 , p-value = 5.558e -06 16 alternative hypothesis : true dispersion is greater than 1 @freakonometrics 37
  • 38. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Alternative models for overdispersion There is overdispersion if Var(Y ) > E(Y ), which can be obtained with a negative binomial distribution (with belongs to the exponential family) 1 > library(MASS) 2 > reg3=glm.nb(Y~col+lig ,data=base) 3 > summary(reg3) 4 ( Dispersion parameter for Negative Binomial (13.8349) family taken to be 1) 5 Theta: 13.83 6 Std. Err.: 2.61 7 2 x log -likelihood: -1460.766 8 > sum(exp(predict(reg3 ,newdata=base))[passe!=TRUE ]) 9 [1] 18085795 @freakonometrics 38
  • 39. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Uncertainty and overdispersion Based on the explicit expression for the prediction error, it is possible to obtain prediction error for those three models, 1 > predCL=function(reg=reg1 ,regb=reg1b){ 2 + p = 2*6-1; 3 + phi.P = sum(residuals(reg ,"pearson")^2)/(np -p) 4 + Sig = vcov(reg) 5 + X = model.matrix(regb) 6 + Cov.eta = X%*%Sig%*%t(X) 7 + mu.hat = exp(predict(reg ,newdata=data.frame(lig ,col)))*futur 8 + pe2 = phi.P * sum(mu.hat) + t(mu.hat) %*% Cov.eta %*% mu.hat 9 + cat("Total reserve =", sum(mu.hat), " prediction error =", sqrt(pe2 ),sqrt(pe2)/sum(mu.hat),"n") 10 + } Avec nos trois modèles, Poisson, ODP et binomiale négative, on obtient, 1 > predCL(reg1 ,reg1b) @freakonometrics 39
  • 40. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 2 Total reserve = 18680856 prediction error = 896876.9 0.04801048 3 > predCL(reg2 ,reg2b) 4 Total reserve = 18680856 prediction error = 4736425 0.2535443 5 > predCL(reg3 ,reg3b) 6 Total reserve = 18085795 prediction error = 2058134 0.1137984 @freakonometrics 40
  • 41. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 On the prediction error In order to derive an estimation of the prediction error using bootstrap techniques, we have not only to generate randomly possible triangles, but also to add uncertainty in the developpement, using e.g. the fact that Ci,j+1 = λjCi,j + σj Ci,j + εi,j where the noise can be assume to be Gaussian, N(0, 1). The statistical interpretation is that Ci,j+1|Ci,j ∼ N(λjCi,j + σ2 j Ci,j) Classically we use 1 > CL=function(triangle){ 2 + n=nrow(triangle) 3 + LAMBDA=rep(NA ,n -1) 4 + for(i in 1:(n-1)){ @freakonometrics 41
  • 42. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 5 + LAMBDA[i]= sum(triangle [1:(n-i),i+1])/ 6 + sum(triangle [1:(n-i),i]) } 7 + DIAG=diag(triangle[,n:1]) 8 + TOTO=c(1,rev(LAMBDA)) 9 + return(sum(cumprod(TOTO)*DIAG -DIAG)) } a natural idea is to consider 1 > CLboot=function(triangle ,l,s){ 2 + m=nrow(triangle) 3 + for(i in 2:m){ 4 + triangle [(m-i+2):m,i]= rnorm(i-1, 5 + mean=triangle [(m-i+2):m,i-1]*l[i-1], 6 + sd=sqrt(triangle [(m-i+2):m,i -1])*s[i -1]) 7 + } 8 + ULT=triangle[,m] 9 + DIAG=diag(triangle[,m:1]) 10 + return(sum(ULT -DIAG)) } Then, we can run boostrap simulations, @freakonometrics 42
  • 43. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 1 > base=data.frame(Y,lig ,col) 2 > REG=glm(Y~lig+col ,family=poisson) 3 > YP=predict(REG ,newdata=base) 4 > E=residuals(REG ,"pearson") 5 > PROV.BE=rep(NA ,5000) 6 > PROVISION=rep(NA ,5000) 7 > for(k in 1:50000){ 8 + simE=sample(E,size =36, replace=TRUE) 9 + bruit=simE*sqrt(exp(YP)) 10 + INCsim=exp(YP)+bruit 11 + INCM=matrix(INCsim ,6 ,6) 12 + CUMM=INCM 13 + for(j in 2:6){CUMM[,j]= CUMM[,j -1]+ INCM[,j]} 14 + PROV.BE[k]=CL(CUMM) 15 + PROVISION[k]= CLboot(CUMM ,lambda ,sigma)} @freakonometrics 43
  • 44. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Random Generation of a Quasi-Distribution It is also possible to generate Poisson, or quasi-Poisson random variables. Recall that the negative binomial distribution has probability function P[N = k] = Γ(k + r) k!Γ(r) · [1 − p]r pk where the expected value and the variance are µ = r · p 1 − p and σ2 = µ = r · p (1 − p)2 Assume that σ2 = ϕ · µ, then r = µ ϕ − 1 and p = 1 ϕ 1 > rqpois = function(n, lambda , phi) { 2 + return( rnbinom(n, size = lambda/(1-phi), prob = 1/phi) } @freakonometrics 44
  • 45. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Using GAM for claims reserving In the case of GAM’s, assume that Yi,j ∼ L(θçi, j), where θi,j = ϕ(u(i) + v(j)), where here u and v are two unknown functions. We still have an additive form, but on unknown transformations of explanatory variates. Spline functions are considered to estimation functions u and v. 1 > library(gam) 2 > GAM=gam(payinc~s(year ,5)+s(development ,3),data=D,familly="Poisson") 3 > plot.gam(GAM ,se=T,col="red",ask=TRUE ,main="GAM model , df=5, df=3") @freakonometrics 45
  • 46. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 @freakonometrics 46
  • 47. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Dealing with negative increments Negative incremental values can arise due to timing of reinsurance, recoveries, cancellation of outstanding claims. One might argue that the problem is more with the data than with the methods. England & Verall (2002) mention that the Gaussian model is less affected by the presence of negative incremental values. Unfortunately, one can hardly assume that data are Gaussian because of the skewness. Renshaw & Verral (1994) suggested to add a “small constant” to the past data and to substract this constant from forecasts at the end. @freakonometrics 47
  • 48. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Dealing with negative increments A classical technique to avoid negative payments is to consider a translation of the incremental triangle, i.e. Y + i,j = Yi,j + κ such that Y + i,j > 0 for all i, j. @freakonometrics 48
  • 49. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Dealing with negative increments A classical technique to avoid negative payments is to consider a translation of the incremental triangle, i.e. Y + i,j = Yi,j + κ such that Y + i,j > 0 for all i, j. q q q 0 1 2 3 4 012345 q q q @freakonometrics 49
  • 50. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Why a Poisson regression model ? The is no reason to assume that incremental payments are Poisson distribution. The only motivation here is that the expected value is the same as the Chain Ladder estimate. Distribution of the reserves, GAM model with Chain Ladder Total amount of reserves Density 2000 4000 6000 8000 0e+001e−042e−043e−044e−045e−04 @freakonometrics 50
  • 51. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Tweedie ? The density of a tweedie model with power function p would be 1 > ftweedie = function(y,p,mu ,psi){ 2 + if(p==2){f = dgamma(y, 1/psi , 1/(psi*mu))} else 3 + if(p==1){f = dpois(y/psi , mu/psi)} else 4 + {lambda = mu^(2-p)/psi /(2-p) 5 + if(y==0){ f = exp(-lambda)} else 6 + { alpha = (2-p)/(p -1) 7 + beta = 1 / (psi * (p -1) * mu^(p -1)) 8 + k = max (10, ceiling(lambda + 7*sqrt(lambda))) 9 + f = sum(dpois (1:k,lambda) * dgamma(y,alpha*(1:k),beta)) 10 + }} 11 + return(f) 12 + } A numerical problem is that we should have no missing values in the regression, so artificially, consider 1 > source("http:// freakonometrics .free.fr/bases.R") @freakonometrics 51
  • 52. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 2 > library(statmod) 3 > an <- 6; ligne = rep (1:an , each=an); colonne = rep (1:an , an) 4 > passe = (ligne + colonne - 1) <=an; n = sum(passe) 5 > INC=PAID 6 > INC [ ,2:6]= PAID [,2:6]- PAID [ ,1:5] 7 > Y = as.vector(INC) 8 > lig = as.factor(ligne) 9 > col = as.factor(colonne) 10 > y = Y[passe] 11 > Y[is.na(Y)]=.01 Then, we can run an econometric regression 1 > pltweedie <- function(pow){ 2 + regt = glm(Y~lig+col , tweedie(pow ,0)) 3 + reserve = sum(fitted.values(regt)[!passe ]) 4 + dev = deviance(regt) 5 + phi.hat = dev/n 6 + mu = fitted.values(regt)[passe] @freakonometrics 52
  • 53. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 7 + hat.logL = 0 8 + for (k in 1: length(y)){ 9 + hat.logL <- hat.logL + log(ftweedie(y[k], pow , mu[k], phi.hat)) } 10 + cat("Puissance =", round(pow ,3) , "phi =", round(phi.hat ,2), 11 + "Reserve (tot) =", round(reserve), "logL =", round(hat.logL ,3)) 12 + hat.logL} 13 > for(pow in c(1 ,1.25 ,1.5 ,1.75 ,2)){pltweedie(pow)} 14 Puissance = 1 phi = 166.95 Reserve (tot) = 1345 logL = -Inf 15 Puissance = 1.25 phi = 42.92 Reserve (tot) = 1216 logL = -151.72 16 Puissance = 1.5 phi = 15.8 Reserve (tot) = 996 logL = -145.232 17 Puissance = 1.75 phi = 9.02 Reserve (tot) = 609 logL = -153.997 18 Puissance = 2 phi = 6.78 Reserve (tot) = 125 logL = -170.614 It is also possible to run a optimization routine, 1 > optimize(pltweedie , c(1.01 ,1.99) , tol =1e-4, maximum = TRUE) 2 -144.624 3 $maximum @freakonometrics 53
  • 54. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 4 [1] 1.427873 5 6 $objective 7 [1] -144.6237 Thus, here the Poisson model might not be the appropriate one, 1.0 1.2 1.4 1.6 1.8 2.0 −2000−1500−1000−500 power Loglikelihood @freakonometrics 54
  • 55. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Bayesian Models in Claims Reserving The first idea is to consider some credibility based model, with Ci,n = Z · CMack i,n + [1 − Z] · µi given some a priori µi. For instance Benkhtander (1976) and Hovinen (1981) suggested Z = 1 − [1 − βi]2 where βi = n−1 k=n−i 1 λk Note that Ci,n = Ci,n−i + [1 − βi] βi · ·CMack i,n + [1 − βi] · µi More generally, consider the Cape-Code technique, Ci,n = Ci,n−i + 1 − Ci,n−i Ci,n Ci,n @freakonometrics 55
  • 56. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 sous la forme Ci,n = Ci,n−i + 1 − Ci,n−i Ci,n LRi · Pi, où LRi correspond au loss ratio pour l’année i, i.e. LRi = Ci,n/Pi. L’idée de la méthode dite Cape-Code est d’écrire une forme plus générale, Ci,n = Ci,n−i + (1 − πn−i) LRiPi où πn−i correspond à une cadence de paiement, et peut être estimé par la méthode Chain Ladder. Quant aux LRi il s’agit des loss ratio cibles, correspondant à un avis d’expert. On peut aussi proposer un même ratio cible pour plusieurs années de survenance. On posera alors Ri = Ci,n − Ci,n−i = (1 − πn−i)LRAPi. pour i ∈ A, où LRA = k∈A Cn,n−k k∈A πn−kPk . @freakonometrics 56
  • 57. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
As a first step, the $\pi_i$ can be computed from the Chain Ladder method, i.e.
$$\pi_{n-i} = \frac{C_{i,n-i}}{\widehat{C}_{i,n}}$$
where the ultimate amount is the one predicted by the Chain Ladder method.
1 > Cultime = MackChainLadder(PAID)$FullTriangle[,6]
2 > Cdiag <- diag(PAID[,6:1])
3 > (PI <- (1-Cdiag/Cultime))
4       1       2       3       4       5       6
5 0.00000 0.00471 0.00656 0.01086 0.02204 0.29181
6 > LR <- TRIANGLE[,6]/PREMIUM
7 > (Cultime-Cdiag)/(LR*PREMIUM)
8       1       2       3       4       5       6
9 0.00000 0.00471 0.00656 0.01086 0.02204 0.29181
If we further assume that A = {1, 2, · · · , n}, then
1 > LR=sum(TRIANGLE[,6])/sum(PREMIUM)
@freakonometrics 57
  • 58. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
2 > PI*LR*PREMIUM
3     1     2     3     4      5      6
4   0.0  24.6  35.6  62.7  139.6 2095.3
5 > sum(PI*LR*PREMIUM)
6 [1] 2358
We obtain here a total reserve amount lower than the one obtained with the Chain Ladder method, since the amount of reserves here equals 2357.756.
@freakonometrics 58
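Going back to the Benktander-Hovinen credibility estimator from a few slides above, a minimal R sketch (reusing PAID and PREMIUM; the 75% target loss ratio used as a priori ultimate is purely illustrative) could be
library(ChainLadder)
Cultime <- MackChainLadder(PAID)$FullTriangle[,6]   # Chain Ladder ultimates
Cdiag   <- diag(PAID[,6:1])                         # latest observed diagonal
beta    <- Cdiag/Cultime                            # proportion already paid
mu      <- 0.75*PREMIUM                             # a priori ultimates (assumed expert view)
CBH     <- Cdiag + (1-beta)*(beta*Cultime + (1-beta)*mu)
sum(CBH - Cdiag)                                    # total Benktander-Hovinen reserve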
  • 59. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian Models and Chain Ladder
In general, a Bayesian method relies on two building blocks:
• a prior distribution for the parameters of the model (Xi,j, Ci,j, λi,j, LRi,j = Ci,j/Pi, etc.)
• a technique to compute the posterior distributions, which are usually rather complex.
@freakonometrics 59
  • 60. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian Models for Claim Counts
Let Ni,j denote the incremental claim count, i.e. the number of claims occurred in year i and reported in year i + j. Let Mi denote the total number of claims per occurrence year, i.e. Mi = Ni,0 + Ni,1 + · · · .
Assume that Mi ∼ P(λi), and let p = (p0, p1, · · · , pn) denote the proportions reported in each development year. Conditionally on Mi = mi, occurrence years are independent, and the vector of claim counts for occurrence year i follows a multinomial distribution M(mi, p).
@freakonometrics 60
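A small simulation from the model just described may help fix ideas (the values of λ and of the reporting pattern p below are purely illustrative):
set.seed(1)
n <- 5
lambda <- rep(100, n+1)                    # expected number of claims per occurrence year
p <- c(.6, .2, .1, .05, .03, .02)          # reporting pattern, sums to 1
M <- rpois(n+1, lambda)                    # total claim counts M_i
N <- t(sapply(M, function(m) rmultinom(1, m, p)))   # incremental counts N_{i,j}
N[outer(0:n, 0:n, "+") > n] <- NA          # mask the not-yet-observed (future) cells
N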
  • 61. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian Models for Claim Counts
The likelihood L(M0, M1, · · · , Mn, p | Ni,j) is then
$$\prod_{i=0}^{n} \frac{M_i!}{(M_i - N_{n-i})!\, N_{i,0}!\, N_{i,1}! \cdots N_{i,n-i}!}\, [1 - p_{n-i}]^{M_i - N_{n-i}}\, p_0^{N_{i,0}}\, p_1^{N_{i,1}} \cdots p_{n-i}^{N_{i,n-i}}$$
where $N_{n-i} = N_{i,0} + N_{i,1} + \cdots + N_{i,n-i}$ and $p_{n-i} = p_0 + p_1 + \cdots + p_{n-i}$.
One then has to specify a prior distribution for the parameters. The posterior distribution is then proportional to the product of the likelihood and this prior.
@freakonometrics 61
  • 62. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian Models for Aggregate Amounts
Set Yi,j = log(Ci,j), and assume that Yi,j = µ + αi + βj + εi,j, where εi,j ∼ N(0, σ²).
Hence Yi,j is Gaussian,
$$f(y_{i,j}\mid \mu, \alpha, \beta, \sigma^2) \propto \frac{1}{\sigma}\exp\left(-\frac{1}{2\sigma^2}\left[y_{i,j} - \mu - \alpha_i - \beta_j\right]^2\right),$$
and the likelihood is then
$$L(\theta, \sigma \mid Y) \propto \sigma^{-m}\exp\left(-\frac{1}{2\sigma^2}\sum_{i,j}\left[y_{i,j} - \mu - \alpha_i - \beta_j\right]^2\right)$$
where m = n(n + 1)/2 is the number of past observations. The difficulty is then to specify a prior distribution for (θ, σ²), i.e. (µ, α, β, σ²).
@freakonometrics 62
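Before turning to priors, the likelihood part can be checked with an ordinary least-squares fit of the same log-normal regression; a minimal sketch, reusing PAID, lig, col and passe from the Tweedie slides above:
Yl <- log(as.vector(PAID))                   # Y_{i,j} = log C_{i,j}
reglog <- lm(Yl ~ lig + col, subset = passe) # fit on the observed (past) cells only
summary(reglog)$sigma^2                      # plug-in estimate of sigma^2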
  • 63. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
MCMC and Bayesian Models
We have an i.i.d. sample x = {x1, · · · , xd} from distribution fθ(·). In predictive modeling, we need
$$E(g(X)\mid \boldsymbol{x}) = \int g(x)\, f_{\theta\mid \boldsymbol{x}}(x)\, dx \quad\text{where}\quad f_{\theta\mid \boldsymbol{x}}(x) = f(x\mid \boldsymbol{x}) = \int f(x\mid \theta)\cdot \pi(\theta\mid \boldsymbol{x})\, d\theta$$
How can we derive π(θ|x)? Can we sample from π(θ|x) (and use Monte Carlo techniques to approximate the integral)?
@freakonometrics 63
  • 64. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Hastings-Metropolis
Back to our problem, we want to sample from π(θ|x), i.e. generate θ1, · · · , θn, · · · from π(θ|x). The Hastings-Metropolis sampler generates a Markov chain (θt) as follows,
• generate θ1
• generate θ⋆ and U ∼ U([0, 1]), compute
$$R = \frac{\pi(\theta^{\star}\mid x)}{\pi(\theta_t\mid x)} \cdot \frac{P(\theta_t\mid \theta^{\star})}{P(\theta^{\star}\mid \theta_t)}$$
if U < R set θt+1 = θ⋆, if U ≥ R set θt+1 = θt
R is the acceptance ratio: we accept the new state θ⋆ with probability min{1, R}.
@freakonometrics 64
  • 65. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Hastings-Metropolis
Observe that
$$R = \frac{\pi(\theta^{\star}) \cdot f(x\mid \theta^{\star})}{\pi(\theta_t) \cdot f(x\mid \theta_t)} \cdot \frac{P(\theta_t\mid \theta^{\star})}{P(\theta^{\star}\mid \theta_t)}$$
so the (intractable) normalizing constant of the posterior cancels out. In a more general case, we can have a Markov process (on a continuous state space), not a finite-state Markov chain, e.g. P(θ⋆|θt) ∼ N(θt, 1).
@freakonometrics 65
  • 66. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Using MCMC to generate Gaussian values 1 > metrop1 <- function(n=1000 , eps =0.5){ 2 + vec <- vector("numeric", n) 3 + x=0 4 + vec [1] <- x 5 + for (i in 2:n) { 6 + innov <- runif(1,-eps ,eps) 7 + mov <- x+innov 8 + aprob <- min(1, dnorm(mov)/dnorm(x)) 9 + u <- runif (1) 10 + if (u < aprob) 11 + x <- mov 12 + vec[i] <- x 13 + } 14 + return(vec)} @freakonometrics 66
  • 67. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Using MCMC to generate Gaussian values
1 > plot.mcmc <- function(mcmc.out){
2 +   op <- par(mfrow=c(2,2))
3 +   plot(ts(mcmc.out), col="red")
4 +   hist(mcmc.out, 30, probability=TRUE,
5 +     col="light blue")
6 +   lines(seq(-4,4,by=.01), dnorm(seq(-4,4,
7 +     by=.01)), col="red")
8 +   qqnorm(mcmc.out)
9 +   abline(a=mean(mcmc.out), b=sd(mcmc.out))
10 +  acf(mcmc.out, col="blue", lag.max=100)
11 +  par(op)}
12 > metrop.out <- metrop1(10000,1)
13 > plot.mcmc(metrop.out)
@freakonometrics 67
  • 68. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Heuristics on Hastings-Metropolis
In standard Monte Carlo, we generate θi’s i.i.d., and then
$$\frac{1}{n}\sum_{i=1}^{n} g(\theta_i) \to E[g(\theta)] = \int g(\theta)\,\pi(\theta)\, d\theta$$
(strong law of large numbers). Well-behaved Markov chains (P aperiodic, irreducible, positive recurrent) satisfy an ergodic property, similar to that LLN. More precisely,
• P has a unique stationary distribution λ, i.e. λ = λ × P
• ergodic theorem:
$$\frac{1}{n}\sum_{i=1}^{n} g(\theta_i) \to \int g(\theta)\,\lambda(\theta)\, d\theta$$
even if the θi’s are not independent.
@freakonometrics 68
  • 69. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Heuristics on Hastings-Metropolis
Remark The conditions mentioned above are
• aperiodic: the chain does not return to a given state only at multiples of some k > 1,
• irreducible: the chain can go from any state to any other state in a finite number of steps,
• positive recurrent: the chain returns to any particular state with probability 1, and the expected return time is finite.
@freakonometrics 69
  • 70. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
MCMC and Loss Models
Example A Tweedie model, E(X) = µ and Var(X) = ϕ · µ^p. Here assume that ϕ and p are given, and µ is the unknown parameter.
→ we need a predictive distribution for µ given x.
Consider the following transition kernel (a Gamma distribution)
$$\mu\mid \mu_t \sim \mathcal{G}\left(\frac{\mu_t}{\alpha}, \alpha\right) \quad\text{with}\quad E(\mu\mid \mu_t) = \mu_t \text{ and } CV(\mu) = \frac{1}{\sqrt{\alpha}}.$$
Use some a priori distribution, e.g. G(α0, β0).
@freakonometrics 70
  • 71. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
MCMC and Loss Models
• generate µ1
• at step t: generate µ⋆ ∼ G(α⁻¹µt, α) and U ∼ U([0, 1]), compute
$$R = \frac{\pi(\mu^{\star}) \cdot f(\boldsymbol{x}\mid \mu^{\star})}{\pi(\mu_t) \cdot f(\boldsymbol{x}\mid \mu_t)} \cdot \frac{P_{\alpha}(\mu_t\mid \mu^{\star})}{P_{\alpha}(\mu^{\star}\mid \mu_t)}$$
if U < R set µt+1 = µ⋆, if U ≥ R set µt+1 = µt, where
$$f(\boldsymbol{x}\mid \mu) = L(\mu) = \prod_{i=1}^{n} f(x_i\mid \mu, p, \varphi),$$
f(·|µ, p, ϕ) being the density of the Tweedie distribution, dtweedie function (x, p, mu, phi) from library(tweedie).
@freakonometrics 71
  • 72. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
1 > p=2 ; phi=2/5
2 > set.seed(1) ; X <- rtweedie(50,p,10,phi)
3 > metrop2 <- function(n=10000, a0=10, b0=1, alpha=1){
4 +   vec <- vector("numeric", n)
5 +   vec[1] <- rgamma(1,a0,b0)
6 +   for (i in 2:n){
7 +     mustar <- rgamma(1,vec[i-1]/alpha,alpha)
8 +     R <- prod(dtweedie(X,p,mustar,phi)/dtweedie(X,p,vec[i-1],phi))*
9 +       dgamma(mustar,a0,b0)/dgamma(vec[i-1],a0,b0)*
10 +      dgamma(vec[i-1],mustar/alpha,alpha)/dgamma(mustar,vec[i-1]/alpha,alpha)
11 +     aprob <- min(1,R)
12 +     ifelse(runif(1) < aprob, vec[i]<-mustar, vec[i]<-vec[i-1])}
13 +   return(vec)}
14 > metrop.output <- metrop2(10000, alpha=1)
@freakonometrics 72
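A natural follow-up (a minimal sketch, reusing the plot.mcmc() helper defined earlier and discarding an arbitrary burn-in of 1,000 draws) is to inspect the chain and use the remaining draws as an approximate posterior sample for µ:
post <- metrop.output[-(1:1000)]   # drop the burn-in
plot.mcmc(post)                    # trace, histogram, qq-plot and acf
mean(post)                         # approximate posterior mean of mu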
  • 73. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Gibbs Sampler
For a multivariate problem, it is possible to use the Gibbs sampler.
Example Assume that the loss ratio of a company has a lognormal distribution, LN(µ, σ²), e.g.
1 > LR <- c(0.958, 0.614, 0.977, 0.921, 0.756)
Example Assume that we have a sample x from a N(µ, σ²). We want the posterior distribution of θ = (µ, σ²) given x. Observe here that if the priors are the Gaussian N(µ0, τ²) and the inverse Gamma distribution IG(a, b), then
$$\mu\mid \sigma^2, \boldsymbol{x} \sim \mathcal{N}\left(\frac{\sigma^2}{\sigma^2 + n\tau^2}\,\mu_0 + \frac{n\tau^2}{\sigma^2 + n\tau^2}\,\overline{x},\; \frac{\sigma^2\tau^2}{\sigma^2 + n\tau^2}\right)$$
$$\sigma^2\mid \mu, \boldsymbol{x} \sim IG\left(\frac{n}{2} + a,\; \frac{1}{2}\sum_{i=1}^{n}\left[x_i - \mu\right]^2 + b\right)$$
More generally, we need the conditional distribution of θk|θ−k, x, for all k.
1 > x <- log(LR)
@freakonometrics 73
  • 74. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Gibbs Sampler
1 > n <- length(x); xbar <- mean(x)
2 > mu <- sigma2 <- rep(0,10000)
3 > sigma2[1] <- 1/rgamma(1, shape=1, rate=1)
4 > Z <- sigma2[1]/(sigma2[1]+n*1)
5 > mu[1] <- rnorm(1, m=Z*0+(1-Z)*xbar, sd=sqrt(1*Z))
6 > for (i in 2:10000){
7 +   Z <- sigma2[i-1]/(sigma2[i-1]+n*1)
8 +   mu[i] <- rnorm(1, m=Z*0+(1-Z)*xbar, sd=sqrt(1*Z))
9 +   sigma2[i] <- 1/rgamma(1, shape=n/2+1, rate=(1/2)*sum((x-mu[i])^2)+1)
10 + }
(here the prior hyperparameters are µ0 = 0, τ² = 1 and a = b = 1)
@freakonometrics 74
  • 75. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Gibbs Sampler
Example Consider some vector X = (X1, · · · , Xd) with independent components, Xi ∼ E(λi). We want to sample from X given X^T 1 > s, for some threshold s > 0.
• start with some starting point x0 such that x0^T 1 > s
• pick up (randomly) i ∈ {1, · · · , d}; by the memoryless property, Xi − (s − x_{(−i)}^T 1)_+ given the constraint has an Exponential distribution E(λi): draw Y ∼ E(λi) and set xi = y + (s − x_{(−i)}^T 1)_+, so that x_{(−i)}^T 1 + xi > s
E.g. losses and allocated expenses.
@freakonometrics 75
  • 76. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Gibbs Sampler 1 > sim <- NULL 2 > lambda <- c(1 ,2) 3 > X <- c(3 ,3) 4 > s <- 5 5 > for(k in 1:1000){ 6 + i <- sample (1:2 ,1) 7 + X[i] <- rexp(1, lambda[i])+ 8 + max(0,s-sum(X[-i])) 9 + while(sum(X)<s){ 10 + X[i] <- rexp(1, lambda[i])+ 11 + max(0,s-sum(X[-i])) } 12 + sim <- rbind(sim ,X) } @freakonometrics 76
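A quick check of the output (a sketch; the burn-in length is arbitrary): every stored draw should satisfy the constraint X1 + X2 > s, and the cloud of points can be plotted directly.
sim <- sim[-(1:100), ]            # drop a short burn-in
all(rowSums(sim) > s)             # should be TRUE: the constraint holds at every step
plot(sim, cex=.4, xlab="X1", ylab="X2")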
  • 77. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
JAGS and STAN
Martyn Plummer developed JAGS (Just Another Gibbs Sampler) in 2007 (stable since 2013), accessible from R e.g. via library(runjags). It is an open-source, enhanced, cross-platform version of an earlier engine, BUGS (Bayesian inference Using Gibbs Sampling).
STAN, accessible from R via library(rstan), is a newer tool that uses the Hamiltonian Monte Carlo (HMC) sampler. HMC uses information about the derivative of the posterior probability density to improve the algorithm. These derivatives are supplied by algorithmic differentiation in C++ code.
@freakonometrics 77
  • 78. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 JAGS on the N(µ, σ2 ) distribution 1 library(runjags) 2 jags.model <- " 3 + model { 4 + mu ~ dnorm(mu0 , 1/(sigma0 ^2)) 5 + g ~ dgamma(k0 , theta0) 6 + sigma <- 1 / g 7 + for (i in 1:n) { 8 + logLR[i] ~ dnorm(mu , g^2) 9 + } 10 + }" 1 > jags.data <- list(n=length(LR), 2 + logLR=log(LR), mu0=-.2, sigma0 =0.02 , 3 + k0=1, theta0 =1) 4 5 > jags.init <- list(list(mu=log (1.2) , 6 + g=1/0.5^2) , 7 + list(mu=log (.8) , 8 + g=1/.2^2)) 9 10 > model.out <- autorun.jags(jags.model , 11 + data=jags.data , inits=jags.init , 12 + monitor=c("mu", "sigma"), n.chains =2) 13 traceplot(model.out$mcmc) 14 summary(model.out) @freakonometrics 78
  • 79. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
STAN on the N(µ, σ²) distribution
1 > library(rstan)
2 > stan.model <- "
3 + data {
4 +   int<lower=0> n;
5 +   vector[n] LR;
6 +   real mu0;
7 +   real<lower=0> sigma0;
8 +   real<lower=0> k0;
9 +   real<lower=0> theta0;
10 + }
11 + parameters {
12 +   real mu;
13 +   real<lower=0> sigma;
14 + }
1 + model {
2 +   mu ~ normal(mu0, sigma0);
3 +   sigma ~ inv_gamma(k0, theta0);
4 +   for (i in 1:n)
5 +     log(LR[i]) ~ normal(mu, sigma);
6 + }"
7
8 > stan.data <- list(n=length(LR), LR=LR, mu0=mu0,
9 +   sigma0=sigma0, k0=k0, theta0=theta0)
10 > stan.out <- stan(model_code=stan.model,
11 +   data=stan.data, seed=2)
12 > traceplot(stan.out)
13 > print(stan.out, digits_summary=2)
@freakonometrics 79
  • 80. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 MCMC and Loss Models Example Consider some simple time series of Loss Ratios, LRt ∼ N(µt, σ2 ) where µt = φµt−1 + εt E.g. in JAGS we can define the vector µ = (µ1, · · · , µT ) recursively 1 + model { 2 + mu [1] ~ dnorm(mu0 , 1/(sigma0 ^2)) 3 + for (t in 2:T) { mu[t] ~ dnorm(mu[t-1], 1/(sigma0 ^2)) } 4 + } @freakonometrics 80
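A slightly fuller sketch of this state-space model in JAGS might look as follows; the priors, the autoregressive coefficient phi and the observation precision tau are illustrative assumptions, not taken from the slides, and T, mu0, sigma0 and the LR series would be passed through the data list as in the earlier example:
ar1.model <- "
model {
  phi ~ dunif(-1, 1)                    # autoregressive coefficient (assumed prior)
  tau ~ dgamma(1, 1)                    # observation precision (assumed prior)
  mu[1] ~ dnorm(mu0, 1/(sigma0^2))
  LR[1] ~ dnorm(mu[1], tau)
  for (t in 2:T) {
    mu[t] ~ dnorm(phi * mu[t-1], 1/(sigma0^2))
    LR[t] ~ dnorm(mu[t], tau)
  }
}"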
  • 81. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
A Bayesian version of Chain Ladder
Assume that
$$\lambda_{i,j} \sim \mathcal{N}\left(\mu_j, \frac{\tau_j}{C_{i,j}}\right).$$
We can use the Gibbs sampler to get the distribution of the transition factors, as well as a distribution for the reserves,
1 > source("http://freakonometrics.free.fr/triangleCL.R")
2 > source("http://freakonometrics.free.fr/bayesCL.R")
3 > mcmcCL <- bayesian.triangle(PAID)
@freakonometrics 81
  • 82. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
1 > plot.mcmc(mcmcCL$Lambda[,1])
2 > plot.mcmc(mcmcCL$Lambda[,2])
3 > plot.mcmc(mcmcCL$reserves[,6])
4 > plot.mcmc(mcmcCL$reserves[,7])
5 > library(ChainLadder)
6 > MCL <- MackChainLadder(PAID)
7 > m <- sum(MCL$FullTriangle[,6]-
8 +   diag(MCL$FullTriangle[,6:1]))
9 > stdev <- MCL$Total.Mack.S.E
10 > hist(mcmcCL$reserves[,7], probability=TRUE,
11 +   breaks=20, col="light blue")
12 > x=seq(2000,3000, by=10)
13 > y=dnorm(x,m,stdev)
14 > lines(x,y,col="red")
@freakonometrics 82
  • 83. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Other Bayesian Models
In the context of reserving models, assume that
$$\lambda_{i,j}\mid \lambda_j, \sigma_j^2, C_{i,j} \sim \mathcal{N}\left(\lambda_j, \frac{\sigma_j^2}{C_{i,j}}\right)$$
Let γj = log(λj). Here λ denotes the set of observations, i.e. the λi,j’s, and the parameter to estimate is γ. The log-likelihood is then
$$\log L(\lambda\mid \gamma, C, \sigma^2) = \sum_{i,j}\left[\frac{1}{2}\log\frac{C_{i,j}}{\sigma_j^2} - \frac{C_{i,j}}{2\sigma_j^2}\left[\lambda_{i,j} - \exp(\gamma_j)\right]^2\right] + \text{constant}$$
Using Bayes’ theorem,
$$\underbrace{\log L(\gamma\mid \lambda, C, \sigma^2)}_{\text{posterior}} = \underbrace{\log \pi(\gamma)}_{\text{prior}} + \underbrace{\log L(\lambda\mid \gamma, C, \sigma^2)}_{\text{log-likelihood}} + \text{constant}$$
If a uniform prior is used, we obtain
$$\log L(\gamma\mid \lambda, C, \sigma^2) = \log L(\lambda\mid \gamma, C, \sigma^2) + \text{constant}$$
@freakonometrics 83
  • 84. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
The conditional distributions can be derived in closed form only in some (very limited) cases. In general, simulation methods are used to approximate these distributions; in particular, the Gibbs or Hastings-Metropolis algorithms can be used.
Start from an initial vector γ(0) = (γ1(0), · · · , γm(0)), then
$$\begin{cases}
\gamma_1^{(k+1)} \sim f(\cdot\mid \gamma_2^{(k)}, \cdots, \gamma_m^{(k)}, \lambda, C, \sigma)\\
\gamma_2^{(k+1)} \sim f(\cdot\mid \gamma_1^{(k+1)}, \gamma_3^{(k)}, \cdots, \gamma_m^{(k)}, \lambda, C, \sigma)\\
\gamma_3^{(k+1)} \sim f(\cdot\mid \gamma_1^{(k+1)}, \gamma_2^{(k+1)}, \gamma_4^{(k)}, \cdots, \gamma_m^{(k)}, \lambda, C, \sigma)\\
\quad\vdots\\
\gamma_{m-1}^{(k+1)} \sim f(\cdot\mid \gamma_1^{(k+1)}, \gamma_2^{(k+1)}, \cdots, \gamma_{m-2}^{(k+1)}, \gamma_m^{(k)}, \lambda, C, \sigma)\\
\gamma_m^{(k+1)} \sim f(\cdot\mid \gamma_1^{(k+1)}, \gamma_2^{(k+1)}, \cdots, \gamma_{m-1}^{(k+1)}, \lambda, C, \sigma)
\end{cases}$$
With this algorithm, triangles C are then simulated, and the process error can be estimated.
@freakonometrics 84
  • 85. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
The adaptive rejection Metropolis sampling (ARMS) algorithm can then be used to simulate from these conditional distributions (cf. Balson (2008)).
The rejection method is based on the following idea:
• we want to draw (independently) from a distribution f that we cannot simulate directly,
• we can simulate from a distribution g satisfying f(x) ≤ M g(x) for all x, where M can be computed.
The algorithm to draw from f is then the following:
• repeat
◦ draw Y from the distribution g
◦ draw U from the uniform distribution on [0, 1], independently of Y,
• while U > f(Y)/(M g(Y)),
@freakonometrics 85
  • 86. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
• set X = Y.
This technique can be used, for instance, to simulate a normal distribution from a Laplace distribution, with density g(x) = 0.5 · exp(−|x|), and M = √(2e/π) (see the sketch below). But this algorithm is very time-consuming if there are many rejections.
Adaptive rejection sampling is an extension of this algorithm, provided the density is log-concave; it is sometimes called the method of chords (méthode des cordes).
@freakonometrics 86
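A minimal R sketch of the rejection step just described (standard normal target, Laplace proposal, M = √(2e/π); the function names are ours):
rlaplace <- function(n) ifelse(runif(n) < 0.5, 1, -1) * rexp(n)   # density 0.5*exp(-|x|)
rnorm.reject <- function(n){
  M <- sqrt(2*exp(1)/pi)
  out <- numeric(n)
  for (i in 1:n){
    repeat{
      Y <- rlaplace(1); U <- runif(1)
      if (U <= dnorm(Y)/(M*0.5*exp(-abs(Y)))) break    # accept with probability f(Y)/(M g(Y))
    }
    out[i] <- Y
  }
  out}
hist(rnorm.reject(10000), probability=TRUE, breaks=50, col="light blue")
curve(dnorm(x), add=TRUE, col="red")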
  • 87. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
The function log f is bounded above, locally, by linear functions, which yields an (upper) envelope of log f. The density f is then bounded above by a function gn, which depends on the grid of knots. Formally, let Li,j(x) denote the line through the points (xi, log f(xi)) and
@freakonometrics 87
  • 88. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
(xj, log f(xj)). Then set
$$h_n(x) = \min\left\{L_{i-1,i}(x),\, L_{i+1,i+2}(x)\right\}, \quad x \in [x_i, x_{i+1}],$$
which defines an envelope of log f (by concavity of log f). A rejection algorithm is then used, with reference density
$$g_n(x) = \frac{\exp(h_n(x))}{\int \exp(h_n(t))\, dt}$$
normalized so that it defines a density.
• repeat
◦ draw Y from the distribution gn
◦ draw U from the uniform distribution on [0, 1], independently of Y,
• while U > f(Y)/exp(hn(Y)),
• set X = Y.
@freakonometrics 88
  • 89. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Finally, adaptive rejection Metropolis sampling adds an extra step, to handle densities that are not log-concave. The idea is to use the previous technique, even though hn is no longer necessarily an envelope of log f, and then to add an additional (Metropolis) rejection step.
Recall that the goal is to implement a Gibbs sampler, i.e. to generate a sequence of variables X1, X2, · · · . Suppose Xk−1 is available. To draw Xk, use the previous algorithm to obtain a candidate X, and the new rejection step is the following:
• draw U from the uniform distribution on [0, 1], independently of X and Xk−1,
◦ if
$$U > \min\left\{1,\; \frac{f(X)\,\min\{f(X_{k-1}),\, \exp(h_n(X_{k-1}))\}}{f(X_{k-1})\,\min\{f(X),\, \exp(h_n(X))\}}\right\}$$
then keep Xk = Xk−1,
◦ otherwise set Xk = X.
@freakonometrics 89
  • 90. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
R Code for the ARMS Algorithm
These piecewise exponential functions are convenient because they are easy to simulate from. The function hn is piecewise linear, with knots Nk, so that hn(x) = ak x + bk for all x ∈ [Nk, Nk+1]. Then
$$g_n(x) = \frac{\exp(h_n(x))}{I_n} \quad\text{where}\quad I_n = \int \exp(h_n(t))\, dt = \sum_k \frac{\exp[h_n(N_{k+1})] - \exp[h_n(N_k)]}{a_k}.$$
One then computes Gn, the cumulative distribution function associated with gn, and uses an inversion method to draw from Gn.
@freakonometrics 90
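As an illustration of that inversion step, a minimal self-contained sketch (the function name and the example knots are ours; the degenerate case ak = 0 is not handled):
rpwexp <- function(n, knots, hvals){
  # piecewise-linear h on [knots[k], knots[k+1]], with h(knots[k]) = hvals[k]
  a <- diff(hvals)/diff(knots)                          # slopes a_k
  b <- hvals[-length(hvals)] - a*knots[-length(knots)]  # intercepts b_k
  I <- (exp(hvals[-1]) - exp(hvals[-length(hvals)]))/a  # segment masses I_k
  k <- sample(seq_along(I), n, replace=TRUE, prob=I/sum(I))  # pick a segment
  U <- runif(n)
  (log(exp(hvals[k]) + U*a[k]*I[k]) - b[k])/a[k]        # invert the conditional cdf
}
# e.g. draw from the density proportional to exp(h(x)) on [-3, 3]
x <- rpwexp(10000, knots=c(-3,-1,0,1,3), hvals=c(-4,-1,0,-1,-4))
hist(x, probability=TRUE, breaks=50, col="light blue")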
  • 91. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11
Bayesian estimation for reserves
[Figure: trace of the total reserves over 1,000 iterations of the sampler; x-axis: iteration, y-axis: reserves (total), roughly between 2,200 and 2,700]
@freakonometrics 91
  • 92. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Bayesian estimation for reserves @freakonometrics 92
  • 93. Arthur CHARPENTIER - Actuariat de l’Assurance Non-Vie, # 11 Bayesian estimation for reserves @freakonometrics 93