
Čížek • Härdle • Weron

Pavel Čížek • Wolfgang Härdle • Rafał Weron

Statistical Tools for Finance and Insurance

Pavel Čížek Tilburg University Dept. of Econometrics & OR P.O. Box 90153 5000 LE Tilburg, Netherlands e-mail: [email protected]

Rafał Weron Wrocław University of Technology Hugo Steinhaus Center Wyb. Wyspiańskiego 27 50-370 Wrocław, Poland e-mail: [email protected]

Wolfgang Härdle Humboldt-Universität zu Berlin CASE – Center for Applied Statistics and Economics Institut für Statistik und Ökonometrie Spandauer Straße 1 10178 Berlin, Germany e-mail: [email protected]

This book is also available as e-book on www.i-xplore.de. Use the licence code at the end of the book to download the e-book. Library of Congress Control Number: 2005920464

Mathematics Subject Classification (2000): 62P05, 91B26, 91B28

ISBN 3-540-22189-1 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2005
Printed in Germany

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting by the authors
Production: LE-TEX Jelonek, Schmidt & Vöckler GbR, Leipzig
Cover design: design & production GmbH, Heidelberg
Printed on acid-free paper 46/3142YL – 5 4 3 2 1 0

Contents

Contributors  13

Preface  15

I Finance  19

1 Stable Distributions  21

  Szymon Borak, Wolfgang Härdle, and Rafal Weron
  1.1 Introduction  21
  1.2 Definitions and Basic Characteristics  22
      1.2.1 Characteristic Function Representation  24
      1.2.2 Stable Density and Distribution Functions  26
  1.3 Simulation of α-stable Variables  28
  1.4 Estimation of Parameters  30
      1.4.1 Tail Exponent Estimation  31
      1.4.2 Quantile Estimation  33
      1.4.3 Characteristic Function Approaches  34
      1.4.4 Maximum Likelihood Method  35
  1.5 Financial Applications of Stable Laws  36

2 Extreme Value Analysis and Copulas  45
  Krzysztof Jajuga and Daniel Papla
  2.1 Introduction  45
      2.1.1 Analysis of Distribution of the Extremum  46
      2.1.2 Analysis of Conditional Excess Distribution  47
      2.1.3 Examples  48
  2.2 Multivariate Time Series  53
      2.2.1 Copula Approach  53
      2.2.2 Examples  56
      2.2.3 Multivariate Extreme Value Approach  57
      2.2.4 Examples  60
      2.2.5 Copula Analysis for Multivariate Time Series  61
      2.2.6 Examples  62

3 Tail Dependence  65
  Rafael Schmidt
  3.1 Introduction  65
  3.2 What is Tail Dependence?  66
  3.3 Calculation of the Tail-dependence Coefficient  69
      3.3.1 Archimedean Copulae  69
      3.3.2 Elliptically-contoured Distributions  70
      3.3.3 Other Copulae  74
  3.4 Estimating the Tail-dependence Coefficient  75
  3.5 Comparison of TDC Estimators  78
  3.6 Tail Dependence of Asset and FX Returns  81
  3.7 Value at Risk – a Simulation Study  84

4 Pricing of Catastrophe Bonds  93
  Krzysztof Burnecki, Grzegorz Kukla, and David Taylor
  4.1 Introduction  93
      4.1.1 The Emergence of CAT Bonds  94
      4.1.2 Insurance Securitization  96
      4.1.3 CAT Bond Pricing Methodology  97
  4.2 Compound Doubly Stochastic Poisson Pricing Model  99
  4.3 Calibration of the Pricing Model  100
  4.4 Dynamics of the CAT Bond Price  104

5 Common Functional IV Analysis  115
  Michal Benko and Wolfgang Härdle
  5.1 Introduction  115
  5.2 Implied Volatility Surface  116
  5.3 Functional Data Analysis  118
  5.4 Functional Principal Components  121
      5.4.1 Basis Expansion  123
  5.5 Smoothed Principal Components Analysis  125
      5.5.1 Basis Expansion  126
  5.6 Common Principal Components Model  127

6 Implied Trinomial Trees  135
  Pavel Čížek and Karel Komorád
  6.1 Option Pricing  136
  6.2 Trees and Implied Trees  138
  6.3 Implied Trinomial Trees  140
      6.3.1 Basic Insight  140
      6.3.2 State Space  142
      6.3.3 Transition Probabilities  144
      6.3.4 Possible Pitfalls  145
  6.4 Examples  147
      6.4.1 Pre-specified Implied Volatility  147
      6.4.2 German Stock Index  152

7 Heston's Model and the Smile  161
  Rafal Weron and Uwe Wystup
  7.1 Introduction  161
  7.2 Heston's Model  163
  7.3 Option Pricing  166
      7.3.1 Greeks  168
  7.4 Calibration  169
      7.4.1 Qualitative Effects of Changing Parameters  171
      7.4.2 Calibration Results  173

8 FFT-based Option Pricing  183
  Szymon Borak, Kai Detlefsen, and Wolfgang Härdle
  8.1 Introduction  183
  8.2 Modern Pricing Models  183
      8.2.1 Merton Model  184
      8.2.2 Heston Model  185
      8.2.3 Bates Model  187
  8.3 Option Pricing with FFT  188
  8.4 Applications  192

9 Valuation of Mortgage Backed Securities  201
  Nicolas Gaussel and Julien Tamine
  9.1 Introduction  201
  9.2 Optimally Prepaid Mortgage  204
      9.2.1 Financial Characteristics and Cash Flow Analysis  204
      9.2.2 Optimal Behavior and Price  204
  9.3 Valuation of Mortgage Backed Securities  212
      9.3.1 Generic Framework  213
      9.3.2 A Parametric Specification of the Prepayment Rate  215
      9.3.3 Sensitivity Analysis  218

10 Predicting Bankruptcy with Support Vector Machines  225
  Wolfgang Härdle, Rouslan Moro, and Dorothea Schäfer
  10.1 Bankruptcy Analysis Methodology  226
  10.2 Importance of Risk Classification in Practice  230
  10.3 Lagrangian Formulation of the SVM  233
  10.4 Description of Data  236
  10.5 Computational Results  237
  10.6 Conclusions  243

11 Modelling Indonesian Money Demand  249
  Noer Azam Achsani, Oliver Holtemöller, and Hizir Sofyan
  11.1 Specification of Money Demand Functions  250
  11.2 The Econometric Approach to Money Demand  253
      11.2.1 Econometric Estimation of Money Demand Functions  253
      11.2.2 Econometric Modelling of Indonesian Money Demand  254
  11.3 The Fuzzy Approach to Money Demand  260
      11.3.1 Fuzzy Clustering  260
      11.3.2 The Takagi-Sugeno Approach  261
      11.3.3 Model Identification  262
      11.3.4 Fuzzy Modelling of Indonesian Money Demand  263
  11.4 Conclusions  266

12 Nonparametric Productivity Analysis  271
  Wolfgang Härdle and Seok-Oh Jeong
  12.1 The Basic Concepts  272
  12.2 Nonparametric Hull Methods  276
      12.2.1 Data Envelopment Analysis  277
      12.2.2 Free Disposal Hull  278
  12.3 DEA in Practice: Insurance Agencies  279
  12.4 FDH in Practice: Manufacturing Industry  281

II Insurance  287

13 Loss Distributions  289
  Krzysztof Burnecki, Adam Misiorek, and Rafal Weron
  13.1 Introduction  289
  13.2 Empirical Distribution Function  290
  13.3 Analytical Methods  292
      13.3.1 Log-normal Distribution  292
      13.3.2 Exponential Distribution  293
      13.3.3 Pareto Distribution  295
      13.3.4 Burr Distribution  298
      13.3.5 Weibull Distribution  298
      13.3.6 Gamma Distribution  300
      13.3.7 Mixture of Exponential Distributions  302
  13.4 Statistical Validation Techniques  303
      13.4.1 Mean Excess Function  303
      13.4.2 Tests Based on the Empirical Distribution Function  305
      13.4.3 Limited Expected Value Function  309
  13.5 Applications  311

14 Modeling of the Risk Process  319
  Krzysztof Burnecki and Rafal Weron
  14.1 Introduction  319
  14.2 Claim Arrival Processes  321
      14.2.1 Homogeneous Poisson Process  321
      14.2.2 Non-homogeneous Poisson Process  323
      14.2.3 Mixed Poisson Process  326
      14.2.4 Cox Process  327
      14.2.5 Renewal Process  328
  14.3 Simulation of Risk Processes  329
      14.3.1 Catastrophic Losses  329
      14.3.2 Danish Fire Losses  334

15 Ruin Probabilities in Finite and Infinite Time  341
  Krzysztof Burnecki, Pawel Miśta, and Aleksander Weron
  15.1 Introduction  341
      15.1.1 Light- and Heavy-tailed Distributions  343
  15.2 Exact Ruin Probabilities in Infinite Time  346
      15.2.1 No Initial Capital  347
      15.2.2 Exponential Claim Amounts  347
      15.2.3 Gamma Claim Amounts  347
      15.2.4 Mixture of Two Exponentials Claim Amounts  349
  15.3 Approximations of the Ruin Probability in Infinite Time  350
      15.3.1 Cramér–Lundberg Approximation  351
      15.3.2 Exponential Approximation  352
      15.3.3 Lundberg Approximation  352
      15.3.4 Beekman–Bowers Approximation  353
      15.3.5 Renyi Approximation  354
      15.3.6 De Vylder Approximation  355
      15.3.7 4-moment Gamma De Vylder Approximation  356
      15.3.8 Heavy Traffic Approximation  358
      15.3.9 Light Traffic Approximation  359
      15.3.10 Heavy-light Traffic Approximation  360
      15.3.11 Subexponential Approximation  360
      15.3.12 Computer Approximation via the Pollaczek-Khinchin Formula  361
      15.3.13 Summary of the Approximations  362
  15.4 Numerical Comparison of the Infinite Time Approximations  363
  15.5 Exact Ruin Probabilities in Finite Time  367
      15.5.1 Exponential Claim Amounts  368
  15.6 Approximations of the Ruin Probability in Finite Time  368
      15.6.1 Monte Carlo Method  369
      15.6.2 Segerdahl Normal Approximation  369
      15.6.3 Diffusion Approximation  371
      15.6.4 Corrected Diffusion Approximation  372
      15.6.5 Finite Time De Vylder Approximation  373
      15.6.6 Summary of the Approximations  374
  15.7 Numerical Comparison of the Finite Time Approximations  374

16 Stable Diffusion Approximation of the Risk Process  381

  Hansjörg Furrer, Zbigniew Michna, and Aleksander Weron
  16.1 Introduction  381
  16.2 Brownian Motion and the Risk Model for Small Claims  382
      16.2.1 Weak Convergence of Risk Processes to Brownian Motion  383
      16.2.2 Ruin Probability for the Limit Process  383
      16.2.3 Examples  384
  16.3 Stable Lévy Motion and the Risk Model for Large Claims  386
      16.3.1 Weak Convergence of Risk Processes to α-stable Lévy Motion  387
      16.3.2 Ruin Probability in Limit Risk Model for Large Claims  388
      16.3.3 Examples  390

17 Risk Model of Good and Bad Periods  395
  Zbigniew Michna
  17.1 Introduction  395
  17.2 Fractional Brownian Motion and Model of Good and Bad Periods  396
  17.3 Ruin Probability in Limit Risk Model of Good and Bad Periods  399
  17.4 Examples  402

18 Premiums in the Individual and Collective Risk Models  407
  Jan Iwanik and Joanna Nowicka-Zagrajek
  18.1 Premium Calculation Principles  408
  18.2 Individual Risk Model  410
      18.2.1 General Premium Formulae  411
      18.2.2 Premiums in the Case of the Normal Approximation  412
      18.2.3 Examples  413
  18.3 Collective Risk Model  416
      18.3.1 General Premium Formulae  417
      18.3.2 Premiums in the Case of the Normal and Translated Gamma Approximations  418
      18.3.3 Compound Poisson Distribution  420
      18.3.4 Compound Negative Binomial Distribution  421
      18.3.5 Examples  423

19 Pure Risk Premiums under Deductibles  427
  Krzysztof Burnecki, Joanna Nowicka-Zagrajek, and Agnieszka Wyłomańska
  19.1 Introduction  427
  19.2 General Formulae for Premiums Under Deductibles  428
      19.2.1 Franchise Deductible  429
      19.2.2 Fixed Amount Deductible  431
      19.2.3 Proportional Deductible  432
      19.2.4 Limited Proportional Deductible  432
      19.2.5 Disappearing Deductible  434
  19.3 Premiums Under Deductibles for Given Loss Distributions  436
      19.3.1 Log-normal Loss Distribution  437
      19.3.2 Pareto Loss Distribution  438
      19.3.3 Burr Loss Distribution  441
      19.3.4 Weibull Loss Distribution  445
      19.3.5 Gamma Loss Distribution  447
      19.3.6 Mixture of Two Exponentials Loss Distribution  449
  19.4 Final Remarks  450

20 Premiums, Investments, and Reinsurance  453
  Pawel Miśta and Wojciech Otto
  20.1 Introduction  453
  20.2 Single-Period Criterion and the Rate of Return on Capital  456
      20.2.1 Risk Based Capital Concept  456
      20.2.2 How To Choose Parameter Values?  457
  20.3 The Top-down Approach to Individual Risks Pricing  459
      20.3.1 Approximations of Quantiles  459
      20.3.2 Marginal Cost Basis for Individual Risk Pricing  460
      20.3.3 Balancing Problem  461
      20.3.4 A Solution for the Balancing Problem  462
      20.3.5 Applications  462
  20.4 Rate of Return and Reinsurance Under the Short Term Criterion  463
      20.4.1 General Considerations  464
      20.4.2 Illustrative Example  465
      20.4.3 Interpretation of Numerical Calculations in Example 2  467
  20.5 Ruin Probability Criterion when the Initial Capital is Given  469
      20.5.1 Approximation Based on Lundberg Inequality  469
      20.5.2 "Zero" Approximation  471
      20.5.3 Cramér–Lundberg Approximation  471
      20.5.4 Beekman–Bowers Approximation  472
      20.5.5 Diffusion Approximation  473
      20.5.6 De Vylder Approximation  474
      20.5.7 Subexponential Approximation  475
      20.5.8 Panjer Approximation  475
  20.6 Ruin Probability Criterion and the Rate of Return  477
      20.6.1 Fixed Dividends  477
      20.6.2 Flexible Dividends  479
  20.7 Ruin Probability, Rate of Return and Reinsurance  481
      20.7.1 Fixed Dividends  481
      20.7.2 Interpretation of Solutions Obtained in Example 5  482
      20.7.3 Flexible Dividends  484
      20.7.4 Interpretation of Solutions Obtained in Example 6  485
  20.8 Final Remarks  487

III General  489

21 Working with the XQC  491
  Szymon Borak, Wolfgang Härdle, and Heiko Lehmann
  21.1 Introduction  491
  21.2 The XploRe Quantlet Client  492
      21.2.1 Configuration  492
      21.2.2 Getting Connected  493
  21.3 Desktop  494
      21.3.1 XploRe Quantlet Editor  495
      21.3.2 Data Editor  496
      21.3.3 Method Tree  501
      21.3.4 Graphical Output  503

Index  507

Contributors

Noer Azam Achsani  Department of Economics, University of Potsdam
Michal Benko  Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Szymon Borak  Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Krzysztof Burnecki  Hugo Steinhaus Center for Stochastic Methods, Wroclaw University of Technology
Pavel Čížek  Center for Economic Research, Tilburg University
Kai Detlefsen  Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Hansjörg Furrer  Swiss Life, Zürich
Nicolas Gaussel  Société Générale Asset Management, Paris
Wolfgang Härdle  Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Oliver Holtemöller  Department of Economics, RWTH Aachen University
Jan Iwanik  Concordia Capital S.A., Poznań
Krzysztof Jajuga  Department of Financial Investments and Insurance, Wroclaw University of Economics
Seok-Oh Jeong  Institut de statistique, Université catholique de Louvain
Karel Komorád  Komerční Banka, Praha
Grzegorz Kukla  Towarzystwo Ubezpieczeniowe EUROPA S.A., Wroclaw
Heiko Lehmann  SAP AG, Walldorf
Zbigniew Michna  Department of Mathematics, Wroclaw University of Economics
Adam Misiorek  Institute of Power Systems Automation, Wroclaw
Pawel Miśta  Institute of Mathematics, Wroclaw University of Technology
Rouslan Moro  Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Joanna Nowicka-Zagrajek  Hugo Steinhaus Center for Stochastic Methods, Wroclaw University of Technology
Wojciech Otto  Faculty of Economic Sciences, Warsaw University
Daniel Papla  Department of Financial Investments and Insurance, Wroclaw University of Economics
Dorothea Schäfer  Deutsches Institut für Wirtschaftsforschung e.V., Berlin
Rafael Schmidt  Department of Statistics, London School of Economics
Hizir Sofyan  Mathematics Department, Syiah Kuala University
Julien Tamine  Société Générale Asset Management, Paris
David Taylor  School of Computational and Applied Mathematics, University of the Witwatersrand, Johannesburg
Aleksander Weron  Hugo Steinhaus Center for Stochastic Methods, Wroclaw University of Technology
Rafal Weron  Hugo Steinhaus Center for Stochastic Methods, Wroclaw University of Technology
Agnieszka Wyłomańska  Institute of Mathematics, Wroclaw University of Technology
Uwe Wystup  MathFinance AG, Waldems

Preface

This book is designed for students, researchers and practitioners who want to be introduced to modern statistical tools applied in finance and insurance. It is the result of a joint effort of the Center for Economic Research (CentER), the Center for Applied Statistics and Economics (C.A.S.E.), and the Hugo Steinhaus Center for Stochastic Methods (HSC). All three institutions brought in their specific profiles and created with this book a wide-angle view on, and solutions to, up-to-date practical problems.

The text is comprehensible for a graduate student in financial engineering as well as for an inexperienced newcomer to quantitative finance and insurance who wants to get a grip on advanced statistical tools applied in these fields. An experienced reader with a broad knowledge of financial and actuarial mathematics will probably skip some sections but will hopefully enjoy the various computational tools. Finally, a practitioner might be familiar with some of the methods; however, the statistical techniques related to modern financial products, like MBS or CAT bonds, will certainly attract him.

"Statistical Tools for Finance and Insurance" consists naturally of two main parts. Each part contains chapters with a strong focus on practical applications. The book starts with an introduction to stable distributions, which are the standard model for heavy-tailed phenomena. Their numerical implementation is thoroughly discussed and applications to finance are given. The second chapter presents the ideas of extreme value and copula analysis as applied to multivariate financial data. This topic is extended in the subsequent chapter, which deals with tail dependence, a concept describing the limiting proportion in which one margin exceeds a certain threshold given that the other margin has already exceeded that threshold.

The fourth chapter reviews the market in catastrophe insurance risk, which emerged in order to facilitate the direct transfer of reinsurance risk associated with natural catastrophes from corporations, insurers, and reinsurers to capital market investors. The next contribution employs functional data analysis for the estimation of smooth implied volatility surfaces. These surfaces are a result of applying an oversimplified market benchmark model – the Black-Scholes formula – to real data. An attractive approach to overcoming this problem is discussed in chapter six, where implied trinomial trees are applied to modeling implied volatilities and the corresponding state-price densities.

An alternative route to tackling the implied volatility smile has led researchers to develop stochastic volatility models. The relative simplicity and the direct link of model parameters to the market make Heston's model very attractive to front office users. Its application to FX option markets is covered in chapter seven. The following chapter shows how the computational complexity of stochastic volatility models can be overcome with the help of the Fast Fourier Transform. In chapter nine the valuation of Mortgage Backed Securities is discussed; the optimal prepayment policy is obtained via optimal stopping techniques. It is followed by a very innovative topic of predicting corporate bankruptcy with Support Vector Machines. Chapter eleven presents a novel approach to money-demand modeling using fuzzy clustering techniques. The first part of the book closes with productivity analysis for cost and frontier estimation. The nonparametric Data Envelopment Analysis is applied to efficiency issues of insurance agencies.

The insurance part of the book starts with a chapter on loss distributions. The basic models for claim severities are introduced and their statistical properties are thoroughly explained. In chapter fourteen, the methods of simulating and visualizing the risk process are discussed. This topic is followed by an overview of the approaches to approximating the ruin probability of an insurer; both finite and infinite time approximations are presented. Some of these methods are extended in chapters sixteen and seventeen, where classical and anomalous diffusion approximations to the ruin probability are discussed and extended to cases when the risk process exhibits good and bad periods.

The last three chapters are related to one of the most important aspects of the insurance business – premium calculation. Chapter eighteen introduces the basic concepts, including the pure risk premium and various safety loadings under different loss distributions. Calculation of a joint premium for a portfolio of insurance policies in the individual and collective risk models is discussed as well. The inclusion of deductibles into premium calculation is the topic of the following contribution. The last chapter of the insurance part deals with setting the appropriate level of insurance premium within a broader context of business decisions, including risk transfer through reinsurance and the rate of return on capital required to ensure solvability.

Our e-book offers a complete PDF version of this text and the corresponding HTML files with links to algorithms and quantlets. The reader of this book may therefore easily reconfigure and recalculate all the presented examples and methods via the enclosed XploRe Quantlet Server (XQS), which is also available from www.xplore-stat.de and www.quantlet.com. A tutorial chapter explaining how to set up and use the XQS can be found in the third and final part of the book.

We gratefully acknowledge the support of Deutsche Forschungsgemeinschaft (SFB 373 Quantifikation und Simulation Ökonomischer Prozesse, SFB 649 Ökonomisches Risiko) and Komitet Badań Naukowych (PBZ-KBN 016/P03/99 Mathematical models in analysis of financial instruments and markets in Poland). A book of this kind would not have been possible without the help of many friends, colleagues, and students. For the technical production of the e-book platform and quantlets we would like to thank Zdeněk Hlávka, Sigbert Klinke, Heiko Lehmann, Adam Misiorek, Piotr Uniejewski, Qingwei Wang, and Rodrigo Witzel. Special thanks for careful proofreading and supervision of the insurance part go to Krzysztof Burnecki.

Pavel Čížek, Wolfgang Härdle, and Rafal Weron
Tilburg, Berlin, and Wroclaw, February 2005

Part I

Finance

1 Stable Distributions

Szymon Borak, Wolfgang Härdle, and Rafal Weron

1.1 Introduction

Many of the concepts in theoretical and empirical ﬁnance developed over the past decades – including the classical portfolio theory, the Black-Scholes-Merton option pricing model and the RiskMetrics variance-covariance approach to Value at Risk (VaR) – rest upon the assumption that asset returns follow a normal distribution. However, it has been long known that asset returns are not normally distributed. Rather, the empirical observations exhibit fat tails. This heavy tailed or leptokurtic character of the distribution of price changes has been repeatedly observed in various markets and may be quantitatively measured by the kurtosis in excess of 3, a value obtained for the normal distribution (Bouchaud and Potters, 2000; Carr et al., 2002; Guillaume et al., 1997; Mantegna and Stanley, 1995; Rachev, 2003; Weron, 2004). It is often argued that ﬁnancial asset returns are the cumulative outcome of a vast number of pieces of information and individual decisions arriving almost continuously in time (McCulloch, 1996; Rachev and Mittnik, 2000). As such, since the pioneering work of Louis Bachelier in 1900, they have been modeled by the Gaussian distribution. The strongest statistical argument for it is based on the Central Limit Theorem, which states that the sum of a large number of independent, identically distributed variables from a ﬁnite-variance distribution will tend to be normally distributed. However, as we have already mentioned, ﬁnancial asset returns usually have heavier tails. In response to the empirical evidence Mandelbrot (1963) and Fama (1965) proposed the stable distribution as an alternative model. Although there are other heavy-tailed alternatives to the Gaussian law – like Student’s t, hyperbolic, normal inverse Gaussian, or truncated stable – there is at least one good reason


for modeling ﬁnancial variables using stable distributions. Namely, they are supported by the generalized Central Limit Theorem, which states that stable laws are the only possible limit distributions for properly normalized and centered sums of independent, identically distributed random variables. Since stable distributions can accommodate the fat tails and asymmetry, they often give a very good ﬁt to empirical data. In particular, they are valuable models for data sets covering extreme events, like market crashes or natural catastrophes. Even though they are not universal, they are a useful tool in the hands of an analyst working in ﬁnance or insurance. Hence, we devote this chapter to a thorough presentation of the computational aspects related to stable laws. In Section 1.2 we review the analytical concepts and basic characteristics. In the following two sections we discuss practical simulation and estimation approaches. Finally, in Section 1.5 we present ﬁnancial applications of stable laws.

1.2 Definitions and Basic Characteristics

Stable laws – also called α-stable, stable Paretian or Lévy stable – were introduced by Lévy (1925) during his investigations of the behavior of sums of independent random variables. A sum of two independent random variables having an α-stable distribution with index α is again α-stable with the same index α. This invariance property, however, does not hold for different α's.

The α-stable distribution requires four parameters for complete description: an index of stability α ∈ (0, 2], also called the tail index, tail exponent or characteristic exponent, a skewness parameter β ∈ [−1, 1], a scale parameter σ > 0 and a location parameter µ ∈ R. The tail exponent α determines the rate at which the tails of the distribution taper off, see the left panel in Figure 1.1. When α = 2, the Gaussian distribution results. When α < 2, the variance is infinite and the tails are asymptotically equivalent to a Pareto law, i.e. they exhibit a power-law behavior. More precisely, using a central limit theorem type argument it can be shown that (Janicki and Weron, 1994; Samorodnitsky and Taqqu, 1994):

$$\lim_{x\to\infty} x^{\alpha} P(X > x) = C_{\alpha}(1+\beta)\sigma^{\alpha}, \qquad
\lim_{x\to\infty} x^{\alpha} P(X < -x) = C_{\alpha}(1-\beta)\sigma^{\alpha}, \tag{1.1}$$

where

$$C_{\alpha} = \left(2\int_{0}^{\infty} x^{-\alpha}\sin(x)\,dx\right)^{-1} = \frac{1}{\pi}\,\Gamma(\alpha)\sin\frac{\pi\alpha}{2}.$$

Figure 1.1: Left panel: A semilog plot of symmetric (β = µ = 0) α-stable probability density functions (pdfs) for α = 2 (black solid line), 1.8 (red dotted line), 1.5 (blue dashed line) and 1 (green long-dashed line). The Gaussian (α = 2) density forms a parabola and is the only α-stable density with exponential tails. Right panel: Right tails of symmetric α-stable cumulative distribution functions (cdfs) for α = 2 (black solid line), 1.95 (red dotted line), 1.8 (blue dashed line) and 1.5 (green long-dashed line) on double logarithmic paper. For α < 2 the tails form straight lines with slope −α. STFstab01.xpl
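The closed form for C_α given above is easy to check numerically. As an illustrative Python sketch of our own (not one of the book's XploRe quantlets): for the standard Cauchy law (α = 1, β = 0, σ = 1) the tail satisfies x P(X > x) → 1/π, which must equal C₁(1 + β)σ^α = C₁; and C₂ = 0, consistent with the Gaussian law having no power-law tail.

```python
import math

def C(alpha):
    # C_alpha = (1/pi) * Gamma(alpha) * sin(pi*alpha/2)
    return math.gamma(alpha) * math.sin(math.pi * alpha / 2.0) / math.pi

# Cauchy case (alpha = 1): P(X > x) = 1/2 - arctan(x)/pi ~ 1/(pi*x),
# so x * P(X > x) -> 1/pi, matching C_1 * (1 + 0) * 1.
x = 1e6
print(C(1.0))                               # 1/pi ~ 0.3183
print(x * (0.5 - math.atan(x) / math.pi))   # ~ 0.3183 as well
print(C(2.0))                               # 0: no power tail at alpha = 2
```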

The convergence to a power-law tail varies for different α's and, as can be seen in the right panel of Figure 1.1, is slower for larger values of the tail index. Moreover, the tails of α-stable distribution functions exhibit a crossover from an approximate power decay with exponent α > 2 to the true tail with exponent α. This phenomenon is more visible for large α's (Weron, 2001). When α > 1, the mean of the distribution exists and is equal to µ. In general, the pth moment of a stable random variable is finite if and only if p < α. When the skewness parameter β is positive, the distribution is skewed to the right,

i.e. the right tail is thicker, see the left panel of Figure 1.2. When it is negative, it is skewed to the left. When β = 0, the distribution is symmetric about µ. As α approaches 2, β loses its effect and the distribution approaches the Gaussian distribution regardless of β. The last two parameters, σ and µ, are the usual scale and location parameters, i.e. σ determines the width and µ the shift of the mode (the peak) of the density. For σ = 1 and µ = 0 the distribution is called standard stable.

Figure 1.2: Left panel: Stable pdfs for α = 1.2 and β = 0 (black solid line), 0.5 (red dotted line), 0.8 (blue dashed line) and 1 (green long-dashed line). Right panel: Closed form formulas for densities are known only for three distributions – Gaussian (α = 2; black solid line), Cauchy (α = 1; red dotted line) and Lévy (α = 0.5, β = 1; blue dashed line). The latter is a totally skewed distribution, i.e. its support is R₊. In general, for α < 1 and β = 1 (−1) the distribution is totally skewed to the right (left). STFstab02.xpl

1.2.1 Characteristic Function Representation

Due to the lack of closed form formulas for densities for all but three distributions (see the right panel in Figure 1.2), the α-stable law can be most conveniently described by its characteristic function φ(t) – the inverse Fourier transform of the probability density function. However, there are multiple parameterizations for α-stable laws and much confusion has been caused by these different representations, see Figure 1.3. The variety of formulas is caused by a combination of historical evolution and the numerous problems that have been analyzed using specialized forms of the stable distributions. The most popular parameterization of the characteristic function of X ∼ S_α(σ, β, µ), i.e. an α-stable random variable with parameters α, σ, β, and µ, is given by (Samorodnitsky and Taqqu, 1994; Weron, 2004):

$$\ln\phi(t) = \begin{cases}
-\sigma^{\alpha}|t|^{\alpha}\left\{1 - i\beta\,\mathrm{sign}(t)\tan\frac{\pi\alpha}{2}\right\} + i\mu t, & \alpha \neq 1,\\[4pt]
-\sigma|t|\left\{1 + i\beta\,\mathrm{sign}(t)\frac{2}{\pi}\ln|t|\right\} + i\mu t, & \alpha = 1.
\end{cases} \tag{1.2}$$

Figure 1.3: Comparison of S and S⁰ parameterizations: α-stable pdfs for β = 0.5 and α = 0.5 (black solid line), 0.75 (red dotted line), 1 (blue short-dashed line), 1.25 (green dashed line) and 1.5 (cyan long-dashed line). STFstab03.xpl


For numerical purposes, it is often advisable to use Nolan's (1997) parameterization:

$$\ln\phi_0(t) = \begin{cases}
-\sigma^{\alpha}|t|^{\alpha}\left\{1 + i\beta\,\mathrm{sign}(t)\tan\frac{\pi\alpha}{2}\left[(\sigma|t|)^{1-\alpha} - 1\right]\right\} + i\mu_0 t, & \alpha \neq 1,\\[4pt]
-\sigma|t|\left\{1 + i\beta\,\mathrm{sign}(t)\frac{2}{\pi}\ln(\sigma|t|)\right\} + i\mu_0 t, & \alpha = 1.
\end{cases} \tag{1.3}$$

The S⁰_α(σ, β, µ₀) parameterization is a variant of Zolotarev's (M)-parameterization (Zolotarev, 1986), with the characteristic function and hence the density and the distribution function jointly continuous in all four parameters, see the right panel in Figure 1.3. In particular, percentiles and convergence to the power-law tail vary in a continuous way as α and β vary. The location parameters of the two representations are related by µ = µ₀ − βσ tan(πα/2) for α ≠ 1 and µ = µ₀ − (2/π)βσ ln σ for α = 1. Note also that the traditional scale parameter σ_G of the Gaussian distribution defined by:

$$f_G(x) = \frac{1}{\sqrt{2\pi}\,\sigma_G}\exp\left\{-\frac{(x-\mu)^2}{2\sigma_G^2}\right\}, \tag{1.4}$$

is not the same as σ in formulas (1.2) or (1.3). Namely, σ_G = √2 σ.

1.2.2 Stable Density and Distribution Functions

The lack of closed form formulas for most stable densities and distribution functions has negative consequences. For example, during maximum likelihood estimation computationally burdensome numerical approximations have to be used. There are generally two approaches to this problem. Either the fast Fourier transform (FFT) has to be applied to the characteristic function (Mittnik, Doganoglu, and Chenyao, 1999) or direct numerical integration has to be utilized (Nolan, 1997, 1999). For data points falling between the equally spaced FFT grid nodes an interpolation technique has to be used. Taking a larger number of grid points increases accuracy, however, at the expense of higher computational burden. The FFT based approach is faster for large samples, whereas the direct integration method favors small data sets since it can be computed at any arbitrarily chosen point. Mittnik, Doganoglu, and Chenyao (1999) report that for N = 2¹³ the FFT based method is faster for samples exceeding 100 observations and slower for smaller data sets. Moreover, the FFT based approach is less universal – it is efficient only for large α's and only for pdf calculations. When


computing the cdf the density must be numerically integrated. In contrast, in the direct integration method Zolotarev's (1986) formulas either for the density or the distribution function are numerically integrated.

Set ζ = −β tan(πα/2). Then the density f(x; α, β) of a standard α-stable random variable in representation S⁰, i.e. X ∼ S⁰_α(1, β, 0), can be expressed as (note that Zolotarev (1986, Section 2.2) used yet another parametrization):

• when α ≠ 1 and x > ζ:

$$f(x;\alpha,\beta) = \frac{\alpha(x-\zeta)^{\frac{1}{\alpha-1}}}{\pi|\alpha-1|} \int_{-\xi}^{\frac{\pi}{2}} V(\theta;\alpha,\beta)\exp\left\{-(x-\zeta)^{\frac{\alpha}{\alpha-1}}\, V(\theta;\alpha,\beta)\right\} d\theta, \tag{1.5}$$

• when α ≠ 1 and x = ζ:

$$f(x;\alpha,\beta) = \frac{\Gamma\!\left(1+\frac{1}{\alpha}\right)\cos(\xi)}{\pi(1+\zeta^2)^{\frac{1}{2\alpha}}},$$

• when α ≠ 1 and x < ζ:

$$f(x;\alpha,\beta) = f(-x;\alpha,-\beta),$$

• when α = 1:

$$f(x;1,\beta) = \begin{cases}
\dfrac{1}{2|\beta|}\, e^{-\frac{\pi x}{2\beta}} \displaystyle\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} V(\theta;1,\beta)\exp\left\{-e^{-\frac{\pi x}{2\beta}}\, V(\theta;1,\beta)\right\} d\theta, & \beta \neq 0,\\[10pt]
\dfrac{1}{\pi(1+x^2)}, & \beta = 0,
\end{cases}$$

where

$$\xi = \begin{cases} \frac{1}{\alpha}\arctan(-\zeta), & \alpha \neq 1,\\[4pt] \frac{\pi}{2}, & \alpha = 1, \end{cases}$$

and

$$V(\theta;\alpha,\beta) = \begin{cases}
(\cos\alpha\xi)^{\frac{1}{\alpha-1}} \left(\dfrac{\cos\theta}{\sin\alpha(\xi+\theta)}\right)^{\frac{\alpha}{\alpha-1}} \dfrac{\cos\{\alpha\xi+(\alpha-1)\theta\}}{\cos\theta}, & \alpha \neq 1,\\[10pt]
\dfrac{2}{\pi}\left(\dfrac{\frac{\pi}{2}+\beta\theta}{\cos\theta}\right) \exp\left\{\dfrac{1}{\beta}\left(\dfrac{\pi}{2}+\beta\theta\right)\tan\theta\right\}, & \alpha = 1,\ \beta \neq 0.
\end{cases}$$

The distribution function F(x; α, β) of a standard α-stable random variable in representation S⁰ can be expressed as:


• when α ≠ 1 and x > ζ:

$$F(x;\alpha,\beta) = c_1(\alpha,\beta) + \frac{\mathrm{sign}(1-\alpha)}{\pi} \int_{-\xi}^{\frac{\pi}{2}} \exp\left\{-(x-\zeta)^{\frac{\alpha}{\alpha-1}}\, V(\theta;\alpha,\beta)\right\} d\theta,$$

where

$$c_1(\alpha,\beta) = \begin{cases} \frac{1}{\pi}\left(\frac{\pi}{2}-\xi\right), & \alpha < 1,\\[4pt] 1, & \alpha > 1, \end{cases}$$

• when α ≠ 1 and x = ζ:

$$F(x;\alpha,\beta) = \frac{1}{\pi}\left(\frac{\pi}{2}-\xi\right),$$

• when α ≠ 1 and x < ζ:

$$F(x;\alpha,\beta) = 1 - F(-x;\alpha,-\beta),$$

• when α = 1:

$$F(x;1,\beta) = \begin{cases}
\dfrac{1}{\pi} \displaystyle\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \exp\left\{-e^{-\frac{\pi x}{2\beta}}\, V(\theta;1,\beta)\right\} d\theta, & \beta > 0,\\[10pt]
\dfrac{1}{2} + \dfrac{1}{\pi}\arctan x, & \beta = 0,\\[6pt]
1 - F(x;1,-\beta), & \beta < 0.
\end{cases}$$

Formula (1.5) requires numerical integration of the function g(·) exp{−g(·)}, where g(θ; x, α, β) = (x − ζ)^{α/(α−1)} V(θ; α, β). The integrand is 0 at −ξ, increases monotonically to a maximum of 1/e at the point θ* for which g(θ*; x, α, β) = 1, and then decreases monotonically to 0 at π/2 (Nolan, 1997). However, in some cases the integrand becomes very peaked and numerical algorithms can miss the spike and underestimate the integral. To avoid this problem we need to find the argument θ* of the peak numerically and compute the integral as a sum of two integrals: one from −ξ to θ* and the other from θ* to π/2.
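For α = 2 and β = 0 the quantities above simplify (ζ = ξ = 0 and V(θ; 2, 0) = 1/(4 sin²θ)) and the result must agree with a Gaussian of scale σ_G = √2, which makes a convenient test of the splitting strategy. A Python sketch of our own (grid sizes chosen ad hoc, not a production implementation):

```python
import numpy as np

def V2(theta):
    """V(theta; alpha=2, beta=0): here zeta = xi = 0 and the general
    formula collapses to 1 / (4 sin^2 theta)."""
    return 1.0 / (4.0 * np.sin(theta) ** 2)

def trap(y, t):
    # plain trapezoid rule (avoids version-dependent numpy helper names)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def f2(x, n=100_000):
    """Density at x in (0, 2) for alpha = 2, beta = 0 via formula (1.5),
    splitting the integration range at the peak theta*."""
    integrand = lambda th: V2(th) * np.exp(-x ** 2 * V2(th))
    theta_star = np.arcsin(x / 2.0)     # solves g(theta*) = x^2 V2(theta*) = 1
    left = np.linspace(1e-12, theta_star, n)
    right = np.linspace(theta_star, np.pi / 2, n)
    integral = trap(integrand(left), left) + trap(integrand(right), right)
    return 2.0 * x / np.pi * integral   # prefactor of (1.5) for alpha = 2

# alpha = 2 is Gaussian with sigma_G = sqrt(2): f(x) = exp(-x^2/4) / (2 sqrt(pi))
print(f2(1.0))   # ~ 0.2197
```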

1.3 Simulation of α-stable Variables

The complexity of the problem of simulating sequences of α-stable random variables results from the fact that there are no analytic expressions for the


inverse F⁻¹ of the cumulative distribution function. The first breakthrough was made by Kanter (1975), who gave a direct method for simulating S_α(1, 1, 0) random variables, for α < 1. It turned out that this method could be easily adapted to the general case. Chambers, Mallows, and Stuck (1976) were the first to give the formulas. The algorithm for constructing a standard stable random variable X ∼ S_α(1, β, 0), in representation (1.2), is the following (Weron, 1996):

• generate a random variable V uniformly distributed on (−π/2, π/2) and an independent exponential random variable W with mean 1;

• for α ≠ 1 compute:

$$X = S_{\alpha,\beta} \cdot \frac{\sin\{\alpha(V+B_{\alpha,\beta})\}}{\{\cos(V)\}^{1/\alpha}} \cdot \left[\frac{\cos\{V-\alpha(V+B_{\alpha,\beta})\}}{W}\right]^{(1-\alpha)/\alpha}, \tag{1.6}$$

where

$$B_{\alpha,\beta} = \frac{\arctan\left(\beta\tan\frac{\pi\alpha}{2}\right)}{\alpha}, \qquad
S_{\alpha,\beta} = \left\{1+\beta^2\tan^2\frac{\pi\alpha}{2}\right\}^{1/(2\alpha)};$$

• for α = 1 compute:

$$X = \frac{2}{\pi}\left\{\left(\frac{\pi}{2}+\beta V\right)\tan V - \beta\ln\left(\frac{\frac{\pi}{2}W\cos V}{\frac{\pi}{2}+\beta V}\right)\right\}. \tag{1.7}$$

Given the formulas for simulation of a standard α-stable random variable, we can easily simulate a stable random variable for all admissible values of the parameters α, σ, β and µ using the following property: if X ∼ S_α(1, β, 0) then

$$Y = \begin{cases} \sigma X + \mu, & \alpha \neq 1,\\[4pt] \sigma X + \frac{2}{\pi}\beta\sigma\ln\sigma + \mu, & \alpha = 1, \end{cases} \tag{1.8}$$

is S_α(σ, β, µ). It is interesting to note that for α = 2 (and β = 0) the Chambers-Mallows-Stuck method reduces to the well known Box-Muller algorithm for generating Gaussian random variables (Janicki and Weron, 1994). Although many other approaches have been proposed in the literature, this method is regarded as the fastest and the most accurate (Weron, 2004).
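The algorithm translates directly into code. A minimal Python sketch of our own (the function name and the sanity checks are not from the book; the book's own implementation is an XploRe quantlet) implementing (1.6)–(1.8):

```python
import numpy as np

def rnd_stable(alpha, beta=0.0, sigma=1.0, mu=0.0, size=1, rng=None):
    """Chambers-Mallows-Stuck sampler for S_alpha(sigma, beta, mu)
    in representation (1.2), following formulas (1.6)-(1.8)."""
    rng = np.random.default_rng(rng)
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform on (-pi/2, pi/2)
    W = rng.exponential(1.0, size)                 # exponential with mean 1
    if alpha != 1.0:
        t = beta * np.tan(np.pi * alpha / 2)
        B = np.arctan(t) / alpha
        S = (1 + t ** 2) ** (1 / (2 * alpha))
        X = (S * np.sin(alpha * (V + B)) / np.cos(V) ** (1 / alpha)
             * (np.cos(V - alpha * (V + B)) / W) ** ((1 - alpha) / alpha))
        return sigma * X + mu                      # property (1.8), alpha != 1
    X = (2 / np.pi) * ((np.pi / 2 + beta * V) * np.tan(V)
        - beta * np.log((np.pi / 2 * W * np.cos(V)) / (np.pi / 2 + beta * V)))
    return sigma * X + 2 / np.pi * beta * sigma * np.log(sigma) + mu

# alpha = 2 reduces to a Gaussian with sigma_G = sqrt(2) * sigma
x = rnd_stable(2.0, size=200_000, rng=42)
print(x.std())   # ~ sqrt(2) ~ 1.414
```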


Figure 1.4: A double logarithmic plot of the right tail of an empirical symmetric 1.9-stable distribution function for a sample of size N = 10⁴ (left panel) and N = 10⁶ (right panel). Thick red lines represent the linear regression fit. The tail index estimate (α̂ = 3.7320) obtained for the smaller sample is close to the initial power-law like decay of the larger sample (α̂ = 3.7881). The far tail estimate α̂ = 1.9309 is close to the true value of α. STFstab04.xpl

1.4 Estimation of Parameters

Like simulation, the estimation of stable law parameters is in general severely hampered by the lack of known closed-form density functions for all but a few members of the stable family. Either the pdf has to be numerically integrated (see the previous section) or the estimation technique has to be based on a different characteristic of stable laws. All presented methods work quite well assuming that the sample under consideration is indeed α-stable. However, if the data comes from a different distribution, these procedures may mislead more than the Hill and direct tail estimation methods. Since the formal tests for assessing α-stability of a sample are very time consuming, we suggest first applying "visual inspection" tests to see whether the empirical densities resemble those of α-stable laws.

1.4.1 Tail Exponent Estimation

The simplest and most straightforward method of estimating the tail index is to plot the right tail of the empirical cdf on double logarithmic paper. The slope of the linear regression for large values of x yields the estimate of the tail index α, through the relation α = −slope. This method is very sensitive to the size of the sample and the choice of the number of observations used in the regression. For example, a slope of about −3.7 may indicate a non-α-stable power-law decay in the tails or the contrary – an α-stable distribution with α ≈ 1.9. This is illustrated in Figure 1.4. In the left panel a power-law fit to the tail of a sample of N = 10⁴ standard symmetric (β = µ = 0, σ = 1) α-stable distributed variables with α = 1.9 yields an estimate of α̂ = 3.732. However, when the sample size is increased to N = 10⁶ the power-law fit to the extreme tail observations yields α̂ = 1.9309, which is fairly close to the original value of α.

The true tail behavior (1.1) is observed only for very large (also for very small, i.e. the negative tail) observations, after a crossover from a temporary power-like decay (which surprisingly indicates α ≈ 3.7). Moreover, the obtained estimates still have a slight positive bias, which suggests that perhaps even larger samples than 10⁶ observations should be used. In Figure 1.4 we used only the upper 0.15% of the records to estimate the true tail exponent. In general, the choice of the observations used in the regression is subjective and can yield large estimation errors, a fact which is often neglected in the literature.

A well known method for estimating the tail index that does not assume a parametric form for the entire distribution function, but focuses only on the tail behavior, was proposed by Hill (1975). The Hill estimator is used to estimate the tail index α when the upper (or lower) tail of the distribution is of the form 1 − F(x) = Cx^{−α}, see Figure 1.5.
Like the log-log regression method, the Hill estimator tends to overestimate the tail exponent of the stable distribution if α is close to two and the sample size is not very large. For a review of the extreme value theory and the Hill estimator see Härdle, Klinke, and Müller (2000, Chapter 13) or Embrechts, Klüppelberg, and Mikosch (1997). These examples clearly illustrate that the true tail behavior of α-stable laws is visible only for extremely large data sets. In practice, this means that in order to estimate α we must use high-frequency data and restrict ourselves to the most "outlying" observations. Otherwise, inference of the tail index may be strongly misleading and rejection of the α-stable regime unfounded.
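A minimal sketch of the Hill estimator in Python (the exact-Pareto test sample and the particular choice of k are our own illustration, not from the book):

```python
import numpy as np

def hill(data, k):
    """Hill estimate of the tail index alpha from the k largest observations:
    alpha_hat = k / sum_{i=1..k} ln( X_(i) / X_(k+1) ),
    where X_(1) >= X_(2) >= ... are the descending order statistics."""
    x = np.sort(np.asarray(data))[::-1]
    return k / np.sum(np.log(x[:k] / x[k]))

# sanity check on an exact Pareto tail: 1 - F(x) = x^(-2) for x >= 1
rng = np.random.default_rng(7)
sample = rng.uniform(size=100_000) ** (-1.0 / 2.0)   # inverse-cdf sampling
print(hill(sample, k=1000))   # ~ 2
```

For stable samples, as Figure 1.5 shows, the estimate is reliable only for a narrow range of k, which is exactly the difficulty discussed above.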


Figure 1.5: Plots of the Hill statistics α̂_{n,k} vs. the maximum order statistic k for 1.8-stable samples of size N = 10⁴ (top panel) and N = 10⁶ (left and right panels). Red horizontal lines represent the true value of α. For better exposition, the right panel is a magnification of the left panel for small k. A close estimate is obtained only for k = 500, ..., 1300 (i.e. for k < 0.13% of sample size). STFstab05.xpl


We now turn to the problem of parameter estimation. We start the discussion with the simplest, fastest and ... least accurate quantile methods, then develop the slower, yet much more accurate sample characteristic function methods and, finally, conclude with the slowest but most accurate maximum likelihood approach. Given a sample x₁, ..., xₙ of independent and identically distributed S_α(σ, β, µ) observations, in what follows, we provide estimates α̂, σ̂, β̂, and µ̂ of all four stable law parameters.

1.4.2 Quantile Estimation

Already in 1971 Fama and Roll provided very simple estimates for parameters of symmetric (β = 0, µ = 0) stable laws with α > 1. McCulloch (1986) generalized and improved their method. He analyzed stable law quantiles and provided consistent estimators of all four stable parameters, with the restriction α ≥ 0.6, while retaining the computational simplicity of Fama and Roll's method. Following McCulloch, define:

$$v_{\alpha} = \frac{x_{0.95}-x_{0.05}}{x_{0.75}-x_{0.25}}, \tag{1.9}$$

which is independent of both σ and µ. In the above formula x_f denotes the f-th population quantile, so that S_α(σ, β, µ)(x_f) = f. Let v̂_α be the corresponding sample value. It is a consistent estimator of v_α. Now, define:

$$v_{\beta} = \frac{x_{0.95}+x_{0.05}-2x_{0.50}}{x_{0.95}-x_{0.05}}, \tag{1.10}$$

and let v̂_β be the corresponding sample value. v_β is also independent of both σ and µ. As a function of α and β it is strictly increasing in β for each α. The statistic v̂_β is a consistent estimator of v_β.

Statistics v_α and v_β are functions of α and β. This relationship may be inverted and the parameters α and β may be viewed as functions of v_α and v_β:

$$\alpha = \psi_1(v_{\alpha}, v_{\beta}), \qquad \beta = \psi_2(v_{\alpha}, v_{\beta}). \tag{1.11}$$

Substituting v_α and v_β by their sample values and applying linear interpolation between values found in tables provided by McCulloch (1986) yields estimators α̂ and β̂. Scale and location parameters, σ and µ, can be estimated in a similar way. However, due to the discontinuity of the characteristic function for α = 1 and β ≠ 0 in representation (1.2), this procedure is much more complicated. We refer the interested reader to the original work of McCulloch (1986).
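The first step of the method – computing the sample statistics v̂_α and v̂_β – is a one-liner; the subsequent interpolation in McCulloch's tables, which maps (v̂_α, v̂_β) to (α̂, β̂), is omitted in this illustrative Python sketch of our own:

```python
import numpy as np

def mcculloch_stats(x):
    """Sample versions of the quantile statistics (1.9) and (1.10)."""
    q05, q25, q50, q75, q95 = np.quantile(x, [0.05, 0.25, 0.50, 0.75, 0.95])
    v_alpha = (q95 - q05) / (q75 - q25)
    v_beta = (q95 + q05 - 2.0 * q50) / (q95 - q05)
    return v_alpha, v_beta

# symmetric data should give v_beta near 0; for Gaussian data (alpha = 2)
# v_alpha is near 2*1.645 / (2*0.674) ~ 2.44
rng = np.random.default_rng(1)
va, vb = mcculloch_stats(rng.standard_normal(200_000))
print(round(va, 2), round(vb, 3))
```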

1.4.3 Characteristic Function Approaches

Given a sample x₁, ..., xₙ of independent and identically distributed (i.i.d.) random variables, define the sample characteristic function by:

$$\hat\phi(t) = \frac{1}{n}\sum_{j=1}^{n} e^{itx_j}. \tag{1.12}$$

Since |φ̂(t)| is bounded by unity, all moments of φ̂(t) are finite and, for any fixed t, it is the sample average of i.i.d. random variables exp(itx_j). Hence, by the law of large numbers, φ̂(t) is a consistent estimator of the characteristic function φ(t).

Press (1972) proposed a simple estimation method, called the method of moments, based on transformations of the characteristic function. The obtained estimators are consistent since they are based upon estimators of φ(t), Im{φ(t)} and Re{φ(t)}, which are known to be consistent. However, convergence to the population values depends on a choice of four points at which the above functions are evaluated. The optimal selection of these values is problematic and still is an open question. The obtained estimates are of poor quality and the method is not recommended for more than preliminary estimation.

Koutrouvelis (1980) presented a regression-type method which starts with an initial estimate of the parameters and proceeds iteratively until some prespecified convergence criterion is satisfied. Each iteration consists of two weighted regression runs. The number of points to be used in these regressions depends on the sample size and starting values of α. Typically no more than two or three iterations are needed. The speed of the convergence, however, depends on the initial estimates and the convergence criterion.

The regression method is based on the following observations concerning the characteristic function φ(t). First, from (1.2) we can easily derive:

$$\ln(-\ln|\phi(t)|^2) = \ln(2\sigma^{\alpha}) + \alpha\ln|t|. \tag{1.13}$$

The real and imaginary parts of φ(t) are for α ≠ 1 given by

$$\Re\{\phi(t)\} = \exp(-|\sigma t|^{\alpha})\cos\left[\mu t + |\sigma t|^{\alpha}\beta\,\mathrm{sign}(t)\tan\frac{\pi\alpha}{2}\right],$$

and

$$\Im\{\phi(t)\} = \exp(-|\sigma t|^{\alpha})\sin\left[\mu t + |\sigma t|^{\alpha}\beta\,\mathrm{sign}(t)\tan\frac{\pi\alpha}{2}\right].$$


The last two equations lead, apart from considerations of principal values, to

$$\arctan\left(\frac{\Im\{\phi(t)\}}{\Re\{\phi(t)\}}\right) = \mu t + \beta\sigma^{\alpha}\tan\frac{\pi\alpha}{2}\,\mathrm{sign}(t)|t|^{\alpha}. \tag{1.14}$$

Equation (1.13) depends only on α and σ and suggests that we estimate these parameters by regressing y = ln(−ln|φ_n(t)|²) on w = ln|t| in the model:

$$y_k = m + \alpha w_k + \epsilon_k, \qquad k = 1, 2, ..., K, \tag{1.15}$$

where t_k is an appropriate set of real numbers, m = ln(2σ^α), and ε_k denotes an error term. Koutrouvelis (1980) proposed to use t_k = πk/25, k = 1, 2, ..., K, with K ranging between 9 and 134 for different estimates of α and sample sizes.

Once α̂ and σ̂ have been obtained and α and σ have been fixed at these values, estimates of β and µ can be obtained using (1.14). Next, the regressions are repeated with α̂, σ̂, β̂ and µ̂ as the initial parameters. The iterations continue until a prespecified convergence criterion is satisfied.

Kogon and Williams (1998) eliminated this iteration procedure and simplified the regression method. For initial estimation they applied McCulloch's (1986) method, worked with the continuous representation (1.3) of the characteristic function instead of the classical one (1.2) and used a fixed set of only 10 equally spaced frequency points t_k. In terms of computational speed their method compares favorably to the original method of Koutrouvelis (1980). It has a significantly better performance near α = 1 and β ≠ 0 due to the elimination of the discontinuity of the characteristic function. However, it returns slightly worse results for very small α.
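The core regression step (1.13)/(1.15) can be sketched in a few lines of Python (unweighted least squares and no iteration, so this is only the skeleton of the Koutrouvelis procedure; the Gaussian test sample is our own choice):

```python
import numpy as np

def regress_alpha_sigma(x, K=10):
    """OLS fit of model (1.15): regress y_k = ln(-ln|phi_hat(t_k)|^2)
    on w_k = ln(t_k), with t_k = pi*k/25 (Koutrouvelis, 1980);
    slope = alpha, intercept m = ln(2 sigma^alpha)."""
    t = np.pi * np.arange(1, K + 1) / 25.0
    ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)   # sample CF (1.12)
    y = np.log(-np.log(np.abs(ecf) ** 2))
    w = np.log(t)
    alpha, m = np.polyfit(w, y, 1)
    sigma = (np.exp(m) / 2.0) ** (1.0 / alpha)
    return alpha, sigma

# N(0, 2) data have phi(t) = exp(-t^2), i.e. alpha = 2 and sigma = 1
rng = np.random.default_rng(3)
a, s = regress_alpha_sigma(np.sqrt(2.0) * rng.standard_normal(100_000))
print(round(a, 2), round(s, 3))   # ~ 2.0 and ~ 1.0
```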

1.4.4 Maximum Likelihood Method

The maximum likelihood (ML) estimation scheme for α-stable distributions does not differ from that for other laws, at least as far as the theory is concerned. For a vector of observations x = (x₁, ..., xₙ), the ML estimate of the parameter vector θ = (α, σ, β, µ) is obtained by maximizing the log-likelihood function:

$$L_{\theta}(x) = \sum_{i=1}^{n} \ln\tilde f(x_i;\theta), \tag{1.16}$$

where f̃(·; θ) is the stable pdf. The tilde denotes the fact that, in general, we do not know the explicit form of the density and have to approximate it


numerically. The ML methods proposed in the literature differ in the choice of the approximating algorithm. However, all of them have an appealing common feature – under certain regularity conditions the maximum likelihood estimator is asymptotically normal. Modern ML estimation techniques either utilize the FFT-based approach for approximating the stable pdf (Mittnik et al., 1999) or use the direct integration method (Nolan, 2001). Both approaches are comparable in terms of efficiency. The differences in performance result from different approximation algorithms, see Section 1.2.2.

Simulation studies suggest that out of the five described techniques the method of moments yields the worst estimates, well outside any admissible error range (Stoyanov and Racheva-Iotova, 2004; Weron, 2004). McCulloch's method comes in next with acceptable results and computational time significantly lower than the regression approaches. On the other hand, both the Koutrouvelis and the Kogon-Williams implementations yield good estimators, with the latter performing considerably faster but slightly less accurately. Finally, the ML estimates are almost always the most accurate, in particular with respect to the skewness parameter. However, as we have already said, maximum likelihood estimation techniques are certainly the slowest of all the discussed methods. For example, ML estimation for a sample of a few thousand observations using a gradient search routine which utilizes the direct integration method is slower by four orders of magnitude than the Kogon-Williams algorithm, i.e. a few minutes compared to a few hundredths of a second on a fast PC! Clearly, the higher accuracy does not justify the application of ML estimation in many real life problems, especially when calculations are to be performed on-line.
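A bare-bones illustration of the FFT route for approximating the stable pdf in the symmetric standard case (a Python sketch under our own choices of grid size; production implementations such as Mittnik, Doganoglu, and Chenyao, 1999, add interpolation between grid nodes and careful grid tuning):

```python
import numpy as np

def stable_pdf_fft(alpha, N=2**13, h=0.01):
    """Approximate the symmetric standard stable pdf on an equally spaced
    grid by an FFT of the characteristic function phi(t) = exp(-|t|^alpha).
    N/2 is assumed even so that the grid-centering phase factor equals 1."""
    x = (np.arange(N) - N / 2) * h         # x-grid centered at 0
    dt = 2 * np.pi / (N * h)               # conjugate t-grid spacing
    t = (np.arange(N) - N / 2) * dt
    phi = np.exp(-np.abs(t) ** alpha)      # characteristic function
    sign = (-1.0) ** np.arange(N)          # absorbs the centering shifts
    f = dt / (2 * np.pi) * sign * np.fft.fft(phi * sign)
    return x, np.real(f)

x, f = stable_pdf_fft(2.0)    # alpha = 2: Gaussian with sigma_G = sqrt(2)
print(f[len(f) // 2])         # f(0) = 1/(2*sqrt(pi)) ~ 0.2821
```

Evaluating the log-likelihood (1.16) then amounts to interpolating the returned grid values at the observed data points.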

1.5 Financial Applications of Stable Laws

Many techniques in modern finance rely heavily on the assumption that the random variables under investigation follow a Gaussian distribution. However, time series observed in finance – but also in other applications – often deviate from the Gaussian model, in that their marginal distributions are heavy-tailed and, possibly, asymmetric. In such situations, the appropriateness of the commonly adopted normal assumption is highly questionable. It is often argued that financial asset returns are the cumulative outcome of a vast number of pieces of information and individual decisions arriving almost continuously in time. Hence, in the presence of heavy tails it is natural


Table 1.1: Fits to 2000 Dow Jones Industrial Average (DJIA) index returns from the period February 2, 1987 – December 29, 1994. Test statistics and the corresponding p-values based on 1000 simulated samples (in parentheses) are also given.

Parameters:        α         σ         β          µ
α-stable fit       1.6411    0.0050    -0.0126    0.0005
Gaussian fit                 0.0111               0.0003

Tests:             Anderson-Darling    Kolmogorov
α-stable fit       0.6441 (0.020)      0.5583 (0.500)
Gaussian fit       +∞ (<0.005)         4.6353 (<0.005)

STFstab06.xpl

to assume that they are approximately governed by a stable non-Gaussian distribution. Other leptokurtic distributions, including Student's t, Weibull, and hyperbolic, lack the attractive central limit property.

Stable distributions have been successfully fit to stock returns, excess bond returns, foreign exchange rates, commodity price returns and real estate returns (McCulloch, 1996; Rachev and Mittnik, 2000). In recent years, however, several studies have found what appears to be strong evidence against the stable model (Gopikrishnan et al., 1999; McCulloch, 1997). These studies have estimated the tail exponent directly from the tail observations and commonly have found α that appears to be significantly greater than 2, well outside the stable domain. Recall, however, that in Section 1.4.1 we have shown that estimating α only from the tail observations may be strongly misleading and for samples of typical size the rejection of the α-stable regime unfounded. Let us see for ourselves how well the stable law describes financial asset returns.

In this section we want to apply the discussed techniques to financial data. Due to limited space we chose only one estimation method – the regression approach of Koutrouvelis (1980), as it offers high accuracy at moderate computational time. We start the empirical analysis with the most prominent example – the Dow Jones Industrial Average (DJIA) index, see Table 1.1. The data set covers the period February 2, 1987 – December 29, 1994 and comprises 2000

daily returns. Recall that it includes the largest crash in Wall Street history – the Black Monday of October 19, 1987. Clearly the 1.64-stable law offers a much better fit to the DJIA returns than the Gaussian distribution. Its superiority, especially in the tails of the distribution, is even better visible in Figure 1.6.

Figure 1.6: Stable (cyan) and Gaussian (dashed red) fits to the DJIA returns (black circles) empirical cdf from the period February 2, 1987 – December 29, 1994. Right panel is a magnification of the left tail fit on a double logarithmic scale clearly showing the superiority of the 1.64-stable law. STFstab06.xpl

To make our statistical analysis more sound, we also compare both fits through the Anderson-Darling and Kolmogorov test statistics (D'Agostino and Stephens, 1986). The former may be treated as a weighted Kolmogorov statistic which puts more weight on the differences in the tails of the distributions. Although no asymptotic results are known for the stable laws, approximate p-values for these goodness-of-fit tests can be obtained via the Monte Carlo technique, for details see Chapter 13. First the parameter vector is estimated for a given sample of size n, yielding θ̂, and the test statistic is calculated assuming that the sample is distributed according to F(x; θ̂), returning a value of d. Next, a sample of size n of F(x; θ̂)-distributed variates is generated. The parameter

vector is estimated for this simulated sample, yielding θ̂₁, and the test statistic is calculated assuming that the sample is distributed according to F(x; θ̂₁). The simulation is repeated as many times as required to achieve a certain level of accuracy. The estimate of the p-value is obtained as the proportion of times that the test quantity is at least as large as d.

For the α-stable fit of the DJIA returns the values of the Anderson-Darling and Kolmogorov statistics are 0.6441 and 0.5583, respectively. The corresponding approximate p-values based on 1000 simulated samples are 0.02 and 0.5, allowing us to accept the α-stable law as a model of DJIA returns. The values of the test statistics for the Gaussian fit yield p-values of less than 0.005, forcing us to reject the Gaussian law, see Table 1.1.

Figure 1.7: Stable (cyan) and Gaussian (dashed red) fits to the Boeing stock returns (black circles) empirical cdf from the period July 1, 1997 – December 31, 2003. Right panel is a magnification of the left tail fit on a double logarithmic scale clearly showing the superiority of the 1.78-stable law. STFstab07.xpl

Next, we apply the same technique to 1635 daily returns of Boeing stock prices from the period July 1, 1997 – December 31, 2003. The 1.78-stable distribution fits the data very well, yielding small values of the Anderson-Darling (0.3756) and Kolmogorov (0.4522) test statistics, see Figure 1.7 and Table 1.2. The


Table 1.2: Fits to 1635 Boeing stock price returns from the period July 1, 1997 – December 31, 2003. Test statistics and the corresponding p-values based on 1000 simulated samples (in parentheses) are also given.

Parameters:        α         σ         β         µ
α-stable fit       1.7811    0.0141    0.2834    0.0009
Gaussian fit                 0.0244              0.0001

Tests:             Anderson-Darling    Kolmogorov
α-stable fit       0.3756 (0.18)       0.4522 (0.80)
Gaussian fit       9.6606 (<0.005)     2.1361 (<0.005)

STFstab07.xpl

The approximate p-values, based, as in the previous example, on 1000 simulated samples, are 0.18 and 0.8, respectively, allowing us to accept the α-stable law as a model of Boeing returns. On the other hand, the values of the test statistics for the Gaussian fit yield p-values of less than 0.005, forcing us to reject the Gaussian distribution. The stable law seems to be tailor-cut for the DJIA index and Boeing stock price returns. But does it fit other asset returns as well? Unfortunately, not. Although for most asset returns it does provide a better fit than the Gaussian law, in many cases the test statistics and p-values suggest that the fit is not as good as for these two data sets. This can be seen in Figure 1.8 and Table 1.3, where the calibration results for 4444 daily returns of the Japanese yen against the US dollar (JPY/USD) exchange rate from December 1, 1978 to January 31, 1991 are presented. The empirical distribution does not exhibit power-law tails and the extreme tails are largely overestimated by the stable distribution. For a risk manager who likes to play it safe this may not be a bad thing, as the stable law overestimates the risks and thus provides an upper limit on losses. However, from a calibration perspective other distributions, like the hyperbolic or truncated stable, may be more appropriate for many data sets (Weron, 2004).
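The Monte Carlo p-value procedure described above (estimate, simulate, re-estimate, count exceedances) can be sketched as follows. This is an illustrative sketch only: it uses a Gaussian fit and the Kolmogorov statistic so that it runs quickly with standard scipy tools; for the α-stable fits of this chapter one would substitute a stable estimator and sampler (e.g. scipy.stats.levy_stable), which is considerably slower.

```python
import numpy as np
from scipy import stats

def mc_pvalue(x, n_sim=1000, seed=0):
    """Parametric-bootstrap p-value for a Kolmogorov goodness-of-fit test."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Step 1: estimate theta-hat from the data and compute the observed
    # test statistic d under F(.; theta-hat).
    mu, sigma = x.mean(), x.std(ddof=1)
    d = stats.kstest(x, "norm", args=(mu, sigma)).statistic
    # Step 2: simulate samples of size n from F(.; theta-hat),
    # re-estimate the parameters for each, and recompute the statistic.
    exceed = 0
    for _ in range(n_sim):
        y = rng.normal(mu, sigma, size=n)
        d1 = stats.kstest(y, "norm", args=(y.mean(), y.std(ddof=1))).statistic
        exceed += d1 >= d
    # Step 3: the p-value estimate is the proportion of simulated
    # statistics at least as large as the observed one.
    return exceed / n_sim
```

Re-estimating the parameters on every simulated sample is essential: it accounts for the fact that the null distribution was itself fitted to the data.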


Table 1.3: Fits to 4444 JPY/USD exchange rate returns from the period December 1, 1978 – January 31, 1991. Test statistics and the corresponding p-values (in parentheses) are also given.

Parameters:      α        σ        β         µ
α-stable fit     1.3274   0.0020   -0.1393   -0.0003
Gaussian fit              0.0049             -0.0001

Tests:           Anderson-Darling    Kolmogorov
α-stable fit     4.7833 (<0.005)     1.4520 (<0.005)
Gaussian fit     91.7226 (<0.005)    6.7574 (<0.005)

STFstab08.xpl

[Figure 1.8 here: left panel "Stable and Gaussian fit to JPY/USD returns" (CDF(x) vs. x); right panel "Stable, Gaussian, and empirical left tails" (log(CDF(x)) vs. log(x)).]

Figure 1.8: Stable (cyan) and Gaussian (dashed red) fits to the empirical cdf of the JPY/USD exchange rate returns (black circles) from the period December 1, 1978 – January 31, 1991. The right panel is a magnification of the left tail fit on a double logarithmic scale. The extreme returns are largely overestimated by the stable law. STFstab08.xpl


Bibliography

Bouchaud, J.-P. and Potters, M. (2000). Theory of Financial Risk, Cambridge University Press, Cambridge.

Carr, P., Geman, H., Madan, D. B., and Yor, M. (2002). The fine structure of asset returns: an empirical investigation, Journal of Business 75: 305–332.

Chambers, J. M., Mallows, C. L., and Stuck, B. W. (1976). A method for simulating stable random variables, Journal of the American Statistical Association 71: 340–344.

D'Agostino, R. B. and Stephens, M. A. (1986). Goodness-of-Fit Techniques, Marcel Dekker, New York.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer.

Fama, E. F. (1965). The behavior of stock market prices, Journal of Business 38: 34–105.

Fama, E. F. and Roll, R. (1971). Parameter estimates for symmetric stable distributions, Journal of the American Statistical Association 66: 331–338.

Gopikrishnan, P., Plerou, V., Amaral, L. A. N., Meyer, M. and Stanley, H. E. (1999). Scaling of the distribution of fluctuations of financial market indices, Physical Review E 60(5): 5305–5316.

Guillaume, D. M., Dacorogna, M. M., Dave, R. R., Müller, U. A., Olsen, R. B., and Pictet, O. V. (1997). From the bird's eye to the microscope: A survey of new stylized facts of the intra-daily foreign exchange markets, Finance & Stochastics 1: 95–129.

Härdle, W., Klinke, S., and Müller, M. (2000). XploRe Learning Guide, Springer.

Hill, B. M. (1975). A simple general approach to inference about the tail of a distribution, Annals of Statistics 3: 1163–1174.

Janicki, A. and Weron, A. (1994). Simulation and Chaotic Behavior of α-Stable Stochastic Processes, Marcel Dekker.


Kanter, M. (1975). Stable densities under change of scale and total variation inequalities, Annals of Probability 3: 697–707.

Koutrouvelis, I. A. (1980). Regression-type estimation of the parameters of stable laws, Journal of the American Statistical Association 75: 918–928.

Kogon, S. M. and Williams, D. B. (1998). Characteristic function based estimation of stable parameters, in R. Adler, R. Feldman, M. Taqqu (eds.), A Practical Guide to Heavy Tails, Birkhäuser, pp. 311–335.

Lévy, P. (1925). Calcul des Probabilités, Gauthier-Villars.

Mandelbrot, B. B. (1963). The variation of certain speculative prices, Journal of Business 36: 394–419.

Mantegna, R. N. and Stanley, H. E. (1995). Scaling behavior in the dynamics of an economic index, Nature 376: 46–49.

McCulloch, J. H. (1986). Simple consistent estimators of stable distribution parameters, Communications in Statistics – Simulations 15: 1109–1136.

McCulloch, J. H. (1996). Financial applications of stable distributions, in G. S. Maddala, C. R. Rao (eds.), Handbook of Statistics, Vol. 14, Elsevier, pp. 393–425.

McCulloch, J. H. (1997). Measuring tail thickness to estimate the stable index α: A critique, Journal of Business & Economic Statistics 15: 74–81.

Mittnik, S., Doganoglu, T., and Chenyao, D. (1999). Computing the probability density function of the stable Paretian distribution, Mathematical and Computer Modelling 29: 235–240.

Mittnik, S., Rachev, S. T., Doganoglu, T. and Chenyao, D. (1999). Maximum likelihood estimation of stable Paretian models, Mathematical and Computer Modelling 29: 275–293.

Nolan, J. P. (1997). Numerical calculation of stable densities and distribution functions, Communications in Statistics – Stochastic Models 13: 759–774.

Nolan, J. P. (1999). An algorithm for evaluating stable densities in Zolotarev's (M) parametrization, Mathematical and Computer Modelling 29: 229–233.

Nolan, J. P. (2001). Maximum likelihood estimation and diagnostics for stable distributions, in O. E. Barndorff-Nielsen, T. Mikosch, S. Resnick (eds.), Lévy Processes, Birkhäuser, Boston.


Press, S. J. (1972). Estimation in univariate and multivariate stable distribution, Journal of the American Statistical Association 67: 842–846.

Rachev, S., ed. (2003). Handbook of Heavy-tailed Distributions in Finance, North Holland.

Rachev, S. and Mittnik, S. (2000). Stable Paretian Models in Finance, Wiley.

Samorodnitsky, G. and Taqqu, M. S. (1994). Stable Non-Gaussian Random Processes, Chapman & Hall.

Stoyanov, S. and Racheva-Iotova, B. (2004). Univariate stable laws in the field of finance – parameter estimation, Journal of Concrete and Applicable Mathematics 2(4), in print.

Weron, R. (1996). On the Chambers-Mallows-Stuck method for simulating skewed stable random variables, Statistics and Probability Letters 28: 165–171. See also R. Weron, Correction to: On the Chambers-Mallows-Stuck method for simulating skewed stable random variables, Research Report HSC/96/1, Wroclaw University of Technology, 1996, http://www.im.pwr.wroc.pl/~hugo/Publications.html.

Weron, R. (2001). Levy-stable distributions revisited: Tail index > 2 does not exclude the Levy-stable regime, International Journal of Modern Physics C 12: 209–223.

Weron, R. (2004). Computationally intensive Value at Risk calculations, in J. E. Gentle, W. Härdle, Y. Mori (eds.), Handbook of Computational Statistics, Springer, Berlin, 911–950.

Zolotarev, V. M. (1986). One-Dimensional Stable Distributions, American Mathematical Society.

2 Extreme Value Analysis and Copulas

Krzysztof Jajuga and Daniel Papla

2.1 Introduction

The analysis of financial data, usually given in the form of financial time series, has recently received a lot of attention from researchers and finance practitioners, in such areas as the valuation of derivative instruments, the forecasting of financial prices, and risk analysis (particularly market risk analysis). From the practical point of view, multivariate analysis of financial data may be more appropriate than univariate analysis. Most market participants hold portfolios containing more than one financial instrument, so they should perform the analysis for all components of a portfolio. There are more and more financial instruments whose payoffs depend on several underlyings (e.g. rainbow options); to value them one should use multivariate models of the underlying vectors of indices. Risk analysis is strongly based on the issue of correlation or, generally speaking, dependence between the returns (or prices) of the components of a portfolio. Therefore multivariate analysis is an appropriate tool to detect these relations. One of the most important applications of financial time series models is risk analysis, including risk measurement. A significant tendency observed in the market is the occurrence of rare events, which very often lead to exceptionally high losses. This has caused a growing interest in the evaluation of so-called extreme risk. There are two groups of models applied to financial time series: "mean-oriented" models, aiming at modeling the mean (expected value) and the variance of the distribution; and "extreme value" models, aiming at modeling the tails (including the maximum and minimum) of the distribution.


In this chapter we present some methods of time series analysis, for both univariate and multivariate time series. The attention is put on two approaches: extreme value analysis and copula analysis. The presented methods are illustrated by examples coming from the Polish financial market.

2.1.1 Analysis of Distribution of the Extremum

The analysis of the distribution of the extremum is simply the analysis of the random variable defined as the maximum (or minimum) of a set of random variables. For simplicity we concentrate only on the distribution of the maximum. The most important result is the Fisher-Tippett theorem (Embrechts, Klüppelberg, and Mikosch, 1997). In this theorem one considers the limiting distribution of the normalized maximum:

lim_{n→∞} P{(X_{n:n} − b_n)/a_n ≤ x} = G(x),   (2.1)

where X_{n:n} = max(X_1, X_2, ..., X_n). It can be proved that this limiting distribution belongs to the family of so-called Generalized Extreme Value (GEV) distributions, whose distribution function is given as:

G(x) = exp[−{1 + ξ(x − µ)/σ}^{−1/ξ}],   1 + ξσ^{−1}(x − µ) > 0.   (2.2)

The GEV distribution has three parameters (Reiss and Thomas, 2000): the location parameter µ, the scale parameter σ, and the shape parameter ξ, which reflects the fatness of the tails of the distribution (the higher the value of this parameter, the fatter the tails). The family of GEV distributions contains three subclasses: the Fréchet distribution (ξ > 0), the Weibull distribution (ξ < 0), and the Gumbel distribution (ξ → 0). In financial problems one usually encounters the Fréchet distribution. In this case the underlying observations come from a fat-tailed distribution, such as the Pareto distribution, a stable distribution (including the Cauchy), etc. One of the most common methods to estimate the parameters of GEV distributions is maximum likelihood. The method is applied to block maxima, obtained by dividing the set of observations into subsets, called blocks, and taking the maximum of each block.
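The block maxima estimation just described can be sketched as follows, assuming scipy is available. Note that scipy.stats.genextreme parametrizes the shape as c = −ξ relative to (2.2), so c < 0 corresponds to the Fréchet case ξ > 0; the block size of 21 (roughly one trading month) is an illustrative choice, not one taken from the text.

```python
import numpy as np
from scipy.stats import genextreme

def fit_gev_block_maxima(returns, block_size=21):
    """Fit a GEV to block maxima by ML; scipy's shape c equals -xi."""
    x = np.asarray(returns, dtype=float)
    n_blocks = len(x) // block_size
    # divide the observations into blocks and take the maximum of each
    maxima = x[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
    c, mu, sigma = genextreme.fit(maxima)
    return {"xi": -c, "mu": mu, "sigma": sigma}
```

For loss analysis one would apply this to the negated returns, so that block maxima correspond to the largest losses.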


The main weakness of this approach comes from the fact that the maxima for some blocks may not correspond to rare events. On the other hand, in some blocks there may be more than one observation corresponding to rare events. Therefore this approach can be biased by the selection of the blocks.

2.1.2 Analysis of Conditional Excess Distribution

To analyze rare events, another approach can be used. Consider the so-called conditional excess distribution:

F_u(y) = P(X − u ≤ y | X > u) = {F(u + y) − F(u)}/{1 − F(u)},   (2.3)

where 0 ≤ y < x_0 − u and x_0 = sup{x : F(x) < 1}. This distribution (also called the conditional tail distribution) is simply the distribution conditional on the underlying random variable taking a value from the tail. Of course, this distribution depends on the choice of the threshold u. It can be proved (Embrechts, Klüppelberg, and Mikosch, 1997) that the conditional excess distribution can be approximated by the so-called Generalized Pareto distribution (GPD), which is linked by one parameter to the GEV distribution. The following property is important: the larger the threshold (the further one goes in the direction of the tail), the better the approximation. The distribution function of the GPD is given by Franke, Härdle and Hafner (2004) and Reiss and Thomas (2000):

F_u(y) = 1 − (1 + ξy/β)^{−1/ξ},   (2.4)

where β = σ + ξ(u − µ). The shape parameter ξ has the same role as in GEV distributions. The generalized parameter β depends on all three parameters of the GEV distribution, as well as on the threshold u. The family of GPDs contains three types of distributions: the Pareto distribution (ξ > 0), the Pareto type II distribution (ξ < 0), and the exponential distribution (ξ → 0). The mean of the conditional excess distribution can be characterized by a linear function of the threshold and of the parameters of the GPD:

E(X − u | X > u) = β_u/(1 − ξ) + ξu/(1 − ξ)   (2.5)

for ξ < 1. One of the most common methods of estimating the parameters of the GPD is maximum likelihood. However, the GPD depends on the choice of the


threshold u. The higher the threshold, the better the approximation of the tail by the GPD – this is a desired property. On the other hand, one then has fewer observations to perform maximum likelihood estimation, which weakens the quality of the estimates. To choose the threshold, one can use a procedure based on the fact that for the GPD the mean of the conditional excess distribution is a linear function of the threshold. One can therefore use the following function, which is just the arithmetic average of the excesses over the threshold:

ê(u) = Σ_{i=1}^{n} max{(x_i − u), 0} / Σ_{i=1}^{n} I(x_i > u).   (2.6)

We know that for the observations above the threshold this relation should be linear. Therefore a graphical procedure can be applied: the value of ê(u) is calculated for different values of the threshold u, and one selects the value above which the relation is approximately linear.
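A direct implementation of the empirical mean excess function (2.6) used in this graphical procedure might look as follows; the grid of thresholds (median up to the tenth-largest observation) is an illustrative choice, not one prescribed by the text.

```python
import numpy as np

def mean_excess(x, u):
    """e-hat(u) of (2.6): average excess over u among observations > u."""
    x = np.asarray(x, dtype=float)
    exceed = x > u
    if not exceed.any():
        return np.nan
    return np.mean(x[exceed] - u)

def mean_excess_curve(x, n_grid=50):
    """Evaluate e-hat(u) on a grid of thresholds for the graphical check."""
    x = np.sort(np.asarray(x, dtype=float))
    # thresholds between the median and a high order statistic, leaving
    # a few exceedances at the top so the average stays well defined
    grid = np.linspace(np.quantile(x, 0.5), x[-10], n_grid)
    return grid, np.array([mean_excess(x, u) for u in grid])
```

Plotting the returned curve against the threshold grid and looking for the region where it becomes roughly linear is exactly the selection procedure described above; for exponential data the curve is flat, while for Pareto-type tails it slopes upward.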

2.1.3 Examples

Consider the logarithmic rate of returns for the following stock market indices: • Four indices of the Warsaw Stock Exchange (WSE): WIG (index of most traded stocks on this exchange), WIG20 (index of 20 stocks with the largest capitalization), MIDWIG (index of 40 mid-cap stocks), and TECHWIG (index of high technology stocks); • Two US market indices: DJIA and S&P 500; • Two EU market indices: DAX and FT-SE100. In addition we studied the logarithmic rates of return for the following exchange rates: USD/PLN, EUR/PLN, EUR/USD. The ﬁnancial time series of the logarithmic rates of return come from the period January 2, 1995 – October 3, 2003, except for the case of exchange rates EUR/PLN and EUR/USD, where the period January 1, 1999 – October 3, 2003 was taken into account. Figures 2.1–2.3 show histograms of those time series.


[Figure 2.1 here: histograms (Number of observations vs. x) for the WIG, WIG20, MIDWIG, and TECHWIG indices.]
Figure 2.1: Histograms of the logarithmic rates of return for WSE indices STFeva01.xpl

The most common application of the analysis of the extremum is the estimation of the maximum loss of a portfolio. It can be treated as a more conservative measure of risk than the well-known Value at Risk, deﬁned through a quantile of the loss distribution (rather than the distribution of the maximal loss). The limiting distribution of the maximum loss is the GEV distribution. This, of


[Figure 2.2 here: histograms (Number of observations vs. x) for the DJIA, S&P 500, DAX, and FT-SE100 indices.]
Figure 2.2: Histograms of the logarithmic rates of return for world indices STFeva01.xpl

course, requires a rather large sample of observations coming from the same underlying distribution. Since most financial data are in the form of time series, the required procedure calls for at least a check of the hypothesis of stationarity of the time series by using a unit root test, e.g. the Dickey-Fuller test (Dickey and Fuller, 1979). The hypothesis of stationarity states that


[Figure 2.3 here: histograms (Number of observations vs. x) for the USD/PLN, EUR/PLN, and EUR/USD exchange rates.]
Figure 2.3: Histograms of the logarithmic rates of return for exchange rates STFeva01.xpl

the process has no unit roots. With the Dickey-Fuller test we test the null hypothesis of a unit root, that is, that there is a unit root of the characteristic equation of the AR(1) process. The alternative hypothesis is that the time series is stationary. To verify the stationarity hypothesis for each of the considered time series, the augmented Dickey-Fuller test was used. The hypotheses of a


Table 2.1: The estimates of the parameters of GEV distributions, for the stock indices.

Data        ξ        µ       σ
WIG         0.374    0.040   0.012
WIG20       0.450    0.037   0.022
MIDWIG      0.604    0.033   0.011
TECHWIG     0.147    0.066   0.012
DJIA        0.519    0.027   0.006
S&P 500     0.244    0.027   0.007
FT-SE 100   -0.048   0.031   0.006
DAX         -0.084   0.041   0.011

STFeva02.xpl

unit root were rejected at a significance level lower than 1%, so all time series in question are stationary. One of the most important applications of the analysis of the conditional excess distribution is the risk measure called Expected Shortfall (ES), also known as conditional Value at Risk or expected tail loss. It is defined as:

ES = E(X − u | X > u).   (2.7)

So ES is the expected value of the conditional excess distribution, and therefore the GPD can be used to determine it. Then for each time series the parameters of GEV distributions were estimated using the maximum likelihood method. The results of the estimation are presented in Table 2.1 (for stock indices) and in Table 2.2 (for exchange rates). The analysis of the results for stock indices leads to the following conclusions. In most cases we obtained the Fréchet distribution (the estimate of the shape parameter is positive), which suggests that the underlying observations are characterized by a fat-tailed distribution. For the FTSE-100 and DAX indices the estimate of ξ is negative but close to zero, which may suggest either a Weibull or a Gumbel distribution. In the majority of cases, the WSE indices exhibit fatter tails than the other indices. They also have larger estimates of the location parameter (related to the mean return) and larger estimates of the scale parameter (related to volatility).


Table 2.2: The estimates of the parameters of GEV distributions, for the exchange rates.

Data      ξ        µ       σ
USD/PLN   0.046    0.014   0.005
EUR/PLN   0.384    0.015   0.005
EUR/USD   -0.213   0.014   0.004

STFeva03.xpl

The analysis of the results for the exchange rates leads to the following conclusions. Three different distributions were obtained: for USD/PLN a Gumbel distribution, for EUR/PLN a Fréchet distribution, and for EUR/USD a Weibull distribution. This suggests very different behavior of the underlying observations. The location and scale parameters are almost the same. The scale parameters are considerably lower for the exchange rates than for the stock indices.

2.2 Multivariate Time Series

2.2.1 Copula Approach

In this section we present the so-called copula approach. It is performed in two steps. In the first step one analyzes the marginal (univariate) distributions. In the second step one analyzes the dependence between the components of the random vector. Therefore the analysis of dependence is "independent" of the analysis of the marginal distributions. This idea differs from the classical approach, where multivariate analysis is performed "jointly" for the marginal distributions and the dependence structure by considering the complete covariance matrix, as e.g. in the MGARCH approach. So one can think that instead of analyzing the whole covariance matrix (where the off-diagonal elements mix information about scatter and dependence), one analyzes only the main diagonal (scatter measures) and then the structure of dependence "not contaminated" by the scatter parameters. The fundamental concept of copulas becomes clear through Sklar's theorem (Sklar, 1959). The multivariate joint distribution function is represented as a copula


function linking the univariate marginal distribution functions:

H(x_1, ..., x_n) = C{F_1(x_1), ..., F_n(x_n)},   (2.8)

where H is the multivariate distribution function, F_i is the distribution function of the i-th marginal distribution, and C is a copula. The copula describes the dependence between the components of a random vector. It is worth mentioning some properties of copulas relevant for modeling dependence. The most important ones are the following:

• for independent variables we have: C(u_1, ..., u_n) = C^⊥(u_1, ..., u_n) = u_1 u_2 ... u_n,
• the lower limit for a copula function is: C^−(u_1, ..., u_n) = max{u_1 + ... + u_n − n + 1, 0},
• the upper limit for a copula function is: C^+(u_1, ..., u_n) = min(u_1, ..., u_n).

The lower and upper limits for the copula function have important consequences for modeling dependence. This can be explained in the simplest, bivariate case. Suppose there are two variables X and Y and there exists a function T (not necessarily a linear one) which links these two variables. One speaks of so-called total positive dependence between X and Y when Y = T(X) and T is increasing; similarly, one speaks of so-called total negative dependence when Y = T(X) and T is decreasing. Then:

• in the case of total positive dependence: C(u_1, u_2) = C^+(u_1, u_2) = min(u_1, u_2),
• in the case of total negative dependence: C(u_1, u_2) = C^−(u_1, u_2) = max{u_1 + u_2 − 1, 0}.


The introduction of the copula leads to a natural ordering of multivariate distributions with respect to the strength and the direction of the dependence. This ordering is given as C_1(u_1, ..., u_n) ≤ C_2(u_1, ..., u_n), and then we have:

C^−(u_1, ..., u_n) ≤ C^⊥(u_1, ..., u_n) ≤ C^+(u_1, ..., u_n).

The presented properties are valid for any type of dependence, not just linear dependence. More facts about copulas are given in Franke, Härdle and Hafner (2004); Rank and Siegl (2002) and Kiesel and Kleinow (2002). There are many possible copulas. A popular family contains the so-called Archimedean copulas, defined on the basis of a strictly decreasing and convex function ψ, called the generator. In the bivariate case an Archimedean copula is given as:

C(u_1, u_2) = ψ^{−1}{ψ(u_1) + ψ(u_2)},   (2.9)

where ψ: [0, 1] → [0, ∞) and ψ(1) = 0. The most popular and well-studied Archimedean copulas are:

1. The Clayton copula:

   ψ(t) = (t^{−θ} − 1)/θ for θ ∈ [−1, ∞), θ ≠ 0;   ψ(t) = −log(t) for θ = 0.   (2.10)

2. The Frank copula:

   ψ(t) = −log[{exp(−θt) − 1}/{exp(−θ) − 1}] for θ ≠ 0;   ψ(t) = −log(t) for θ = 0.   (2.11)

3. The Ali-Mikhail-Haq copula:

   ψ(t) = log[{1 − θ(1 − t)}/t],   θ ∈ [−1, 1].   (2.12)

Among other copulas, which do not belong to the Archimedean family, it is worth mentioning the Farlie-Gumbel-Morgenstern copula, given in the bivariate case as:

C_θ(u, v) = uv + θuv(1 − u)(1 − v),   θ ∈ [−1, 1].   (2.13)
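The generator construction (2.9) can be illustrated directly. The sketch below evaluates the Clayton copula by composing the generator (2.10) with its inverse; function names are illustrative. For the Clayton case this composition reduces to the known closed form (u_1^{−θ} + u_2^{−θ} − 1)^{−1/θ}.

```python
import numpy as np

def clayton_psi(t, theta):
    """Clayton generator psi(t) of equation (2.10), theta != 0."""
    return (t ** (-theta) - 1.0) / theta

def clayton_psi_inv(s, theta):
    """Inverse generator: psi_inv(psi(t)) == t."""
    return (1.0 + theta * s) ** (-1.0 / theta)

def clayton_copula(u1, u2, theta):
    """C(u1, u2) = psi_inv(psi(u1) + psi(u2)), equation (2.9)."""
    return clayton_psi_inv(clayton_psi(u1, theta) + clayton_psi(u2, theta), theta)
```

As θ approaches 0 the Clayton copula approaches the independence copula u_1 u_2, and its values always lie between the lower and upper copula limits quoted above.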


In all these copulas there is one parameter, which can be interpreted as a dependence parameter. Here dependence has the more general meaning presented above, described by a monotonic function. An often used copula function is the so-called normal (Gaussian) copula, which links the distribution function of the multivariate normal distribution with the distribution functions of the univariate normal distributions. This means that:

C(u_1, ..., u_n) = Φ^n_R{Φ^{−1}(u_1), ..., Φ^{−1}(u_n)}.   (2.14)

Another commonly used example is the Gumbel copula, which in the bivariate case is given as:

C(u_1, u_2) = exp[−{(−log u_1)^δ + (−log u_2)^δ}^{1/δ}].   (2.15)

Figure 2.4 presents an example of the shape of the copula function; in this case it is a Frank copula (see (2.11)), with parameters θ taken from the results presented in Section 2.2.2. The estimation of the copula parameters can be performed by maximum likelihood, given the distribution functions of the marginals. As the simplest approach to the distribution functions of the marginals one can take just the empirical distribution function.

2.2.2 Examples

Consider different pairs of the stock market indices and exchange rates studied in Section 2.1.3. For each pair we fitted a bivariate copula, namely the Clayton, Frank, Ali-Mikhail-Haq, and Farlie-Gumbel-Morgenstern copulas. We present here the results obtained for the Frank copula. Table 2.3 presents selected results for pairs of exchange rates and Table 2.4 for pairs of stock indices. The important conclusion to be drawn from Table 2.3 is that one pair, namely USD/PLN and EUR/USD, shows negative dependence, whereas the other two show positive dependence. This is particularly important for entities that are exposed to exchange rate risk and want to decrease it by appropriate management of assets and liabilities. There is positive extreme dependence between all stock indices. As could have been expected, there is strong dependence between the indices of the WSE and much lower dependence between the WSE and the other exchanges, with weaker dependence between the WSE and the NYSE than between the WSE and the large European exchanges. The copula approach can also be applied to the so-called tail dependence coefficients. A detailed description of tail dependence is given in Chapter 3.


[Figure 2.4 here: two surface plots of the copula C(u, v) over the unit square.]
Figure 2.4: Plot of C(u, v) for the Frank copula with θ = −2.563 in the left panel, and θ = 11.462 in the right panel. STFeva04.xpl STFeva05.xpl

Table 2.3: The estimates of the Frank copula for exchange rates.

Bivariate data          θ
USD/PLN and EUR/PLN     2.730
USD/PLN and EUR/USD     -2.563
EUR/PLN and EUR/USD     3.409

STFeva06.xpl

2.2.3 Multivariate Extreme Value Approach

The copula approach also makes it possible to analyze extreme values in the general multivariate case, by linking it to univariate extreme value analysis. To this end, we concentrate on the multivariate distribution of extrema, where the extremum is taken for each component of a random vector.


Table 2.4: The estimates of the Frank copula for stock indices.

Bivariate data      θ
WIG and WIG20       11.462
WIG and DJIA        0.943
WIG and FTSE-100    2.021
WIG and DAX         2.086

STFeva07.xpl

The main result in multivariate extreme value analysis is given for the limiting distribution of the normalized maxima:

lim_{n→∞} P{(X^1_{n:n} − b^1_n)/a^1_n ≤ x^1, ..., (X^m_{n:n} − b^m_n)/a^m_n ≤ x^m} = G(x^1, ..., x^m).   (2.16)

It was shown by Galambos (1978) that this limiting distribution can be presented in the following form:

G(x^1, ..., x^m) = C_G{G_1(x^1), ..., G_m(x^m)},   (2.17)

where C_G is the so-called Extreme Value Copula (EVC). This is the representation of the multivariate distribution of maxima, called here the Multivariate Extreme Value (MEV) distribution, in the way it is presented in Sklar's theorem. It is composed of two parts, each with a special meaning: the univariate distributions belong to the family of GEV distributions, i.e. they are Fréchet, Weibull or Gumbel distributions. Therefore, to obtain the MEV distribution one has to apply an EVC to univariate GEV distributions (Fréchet, Weibull, or Gumbel). Since there are many possible extreme value copulas, we get many possible multivariate extreme value distributions. An EVC is a copula satisfying the following relation:

C(u_1^t, ..., u_n^t) = C^t(u_1, ..., u_n)  for t > 0.   (2.18)

It can be shown that a bivariate extreme value copula can be represented in the following form:

C(u_1, u_2) = exp[log(u_1 u_2) A{log(u_1)/log(u_1 u_2)}].   (2.19)


Here A is a convex function satisfying the following relations:

A(0) = A(1) = 1,   max(w, 1 − w) ≤ A(w) ≤ 1.   (2.20)

The most common extreme value copulas are:

1. The Gumbel copula:

   C(u_1, u_2) = exp[−{(−log u_1)^θ + (−log u_2)^θ}^{1/θ}],   (2.21)

   with A(w) = {w^θ + (1 − w)^θ}^{1/θ} and θ ∈ [1, ∞).

2. The Gumbel II copula:

   C(u_1, u_2) = u_1 u_2 exp{θ(log u_1 log u_2)/(log u_1 + log u_2)},   (2.22)

   with A(w) = θw² − θw + 1 and θ ∈ [0, 1].

3. The Galambos copula:

   C(u_1, u_2) = u_1 u_2 exp[{(−log u_1)^{−θ} + (−log u_2)^{−θ}}^{−1/θ}],   (2.23)

   with A(w) = 1 − {w^{−θ} + (1 − w)^{−θ}}^{−1/θ} and θ ∈ [0, ∞).

All three presented copulas are one-parameter functions, and this parameter can be interpreted as a dependence parameter. An important property is that for these copulas, as well as for other possible extreme value copulas, there is positive dependence between the two components of the random vector. The main application of the multivariate extreme value approach is the estimation of the maximum loss of each component of a portfolio. We then get the limiting distribution of the vector of maximal losses: the limiting distributions of the components are univariate GEV distributions and the relation between the maxima is reflected through the extreme value copula.


Table 2.5: The estimates of the Galambos copula for exchange rates.

Bivariate data          θ
USD/PLN and EUR/PLN     34.767
USD/PLN and EUR/USD     2.478
EUR/PLN and EUR/USD     2.973

STFeva08.xpl

2.2.4 Examples

As in Section 2.2.2 we consider different pairs of stock market indices and exchange rates. In the first step we analyze the separate components of each pair to get estimates of the generalized extreme value distributions. In the second step, we use the empirical distribution functions obtained in the first step and estimate three copulas belonging to the EVC family: Gumbel, Gumbel II, and Galambos. We present here the results obtained for the Galambos copula (Table 2.5) and the Gumbel copula (Table 2.6). It turns out that in the case of exchange rates we obtained the best fit for the Galambos copula, see Table 2.5. In the case of stock indices the best fit was obtained for different copulas; for comparison we present the results obtained for the Gumbel copula, see Table 2.6. The dependence parameter of the Galambos copula takes only non-negative values. The higher the value of this parameter, the stronger the dependence between the maximal losses of the respective variables. We see that there is strong extreme dependence between the exchange rates USD/PLN and EUR/PLN and rather weak dependence between EUR/PLN and EUR/USD, as well as between USD/PLN and EUR/USD. The dependence parameter of the Gumbel copula takes values greater than or equal to 1. Again, the higher the value of this parameter, the stronger the dependence between the maximal losses of the respective variables. The results in Table 2.6 indicate strong dependence (as could have been expected) between the stock indices of the Warsaw Stock Exchange. They also show stronger extreme dependence between the WSE and the NYSE than between the WSE and the two large European exchanges.

2.2 Multivariate Time Series

Table 2.6: The estimates of the Gumbel copula for stock indices.

  Bivariate data         θ
  WIG and WIG20       21.345
  WIG and DJIA        14.862
  WIG and FTSE-100     2.275
  WIG and DAX          5.562

STFeva09.xpl

2.2.5 Copula Analysis for Multivariate Time Series

One of the basic models applied in the classical (mean-oriented) approach to the analysis of multivariate time series is the multivariate GARCH (MGARCH) model, aimed at modeling the conditional covariance matrix. The disadvantages of this approach include the joint modeling of volatilities and correlations, as well as the reliance on the correlation coefficient as a measure of dependence. In this section we present another approach, in which the conditional volatilities and the conditional dependence in a multivariate time series are modeled separately. This is possible due to the application of the copula approach directly to the univariate component series. Our presentation is based on the idea of Jondeau and Rockinger (2002), which combines univariate time series modeling by GARCH-type models for volatility with copula analysis. The proposed model is given as:

   log(θ_t) = Σ_{j=1}^{16} d_j I{(u_{t−1}, v_{t−1}) ∈ A_j},   (2.24)

where A_j is the jth element of the unit-square grid. To each parameter d_j an area A_j is associated. For instance, A_1 = [0, p_1] × [0, q_1] and A_2 = [p_1, p_2] × [0, q_1], where p_1 = q_1 = 0.15, p_2 = q_2 = 0.5, and p_3 = q_3 = 0.85. The choice of 16 subintervals is, according to Jondeau and Rockinger (2002), somewhat arbitrary. The dependence parameter is thus conditioned on the lagged values of the univariate distribution functions, where the 16 possible sets of pairs of values are taken into account. The larger the value of the parameter d_j, the stronger the dependence on the past values.
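The mapping from a lagged pair (u_{t−1}, v_{t−1}) to its cell A_j, and the resulting conditional dependence parameter θ_t, can be sketched as follows (the 0-based cell indexing and the function names are our own convention, not from the text):

```python
import math

CUTS = (0.15, 0.5, 0.85)   # p1 = q1 = 0.15, p2 = q2 = 0.5, p3 = q3 = 0.85

def cell_index(u, v, cuts=CUTS):
    # 0-based index j of the cell A_j of the 4x4 unit-square grid holding (u, v)
    def bucket(x):
        for i, c in enumerate(cuts):
            if x < c:
                return i
        return len(cuts)
    return 4 * bucket(v) + bucket(u)

def conditional_theta(d, u_prev, v_prev):
    # model (2.24): log(theta_t) = d_j for the cell containing (u_{t-1}, v_{t-1})
    return math.exp(d[cell_index(u_prev, v_prev)])
```

Here cell 0 corresponds to A_1 = [0, 0.15) × [0, 0.15) and cell 15 to the upper-right tail cell; whether boundaries are open or closed is a detail the model description leaves implicit.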


Table 2.7: Conditional dependence parameter for time series WIG, WIG20.

               [0, 0.15)  [0.15, 0.5)  [0.5, 0.85)  [0.85, 1]
  [0, 0.15)      15.951       6.000       -0.286       0.000
  [0.15, 0.5)     4.426      18.307        8.409       2.578
  [0.5, 0.85)     5.010       8.704       19.507       1.942
  [0.85, 1]       1.213       1.524        5.133      19.202

STFeva10.xpl

We now describe the method used in the empirical example, for the case of a bivariate time series. The proposed procedure consists of two steps. In the first step, univariate models are built for both component series; here a combination of ARIMA models for the conditional mean and GARCH models for the conditional variance was used. In the second step, the values of the distribution function of the residuals obtained from the univariate models are subjected to copula analysis.
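Step two can be sketched as follows: transform the residuals to pseudo-uniform observations via their empirical distribution function, then maximize the copula log-likelihood. We use the Frank copula (the one applied in the examples below) and a crude grid search in place of a proper numerical optimizer; all names here are ours:

```python
import math

def pseudo_obs(x):
    # empirical distribution function values (ranks scaled by n + 1)
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)
    return u

def frank_density(u, v, theta):
    # Frank copula density, theta != 0
    et = math.expm1(-theta)        # e^{-theta} - 1
    eu = math.expm1(-theta * u)
    ev = math.expm1(-theta * v)
    return -theta * et * math.exp(-theta * (u + v)) / (et + eu * ev) ** 2

def fit_frank(pairs):
    # maximize the sum of log-densities over a coarse grid theta in (0, 20]
    grid = [0.1 * k for k in range(1, 201)]
    loglik = lambda th: sum(math.log(frank_density(u, v, th)) for u, v in pairs)
    return max(grid, key=loglik)
```

Strongly comonotone pseudo-observations push the fitted θ towards the top of the grid, while a negatively dependent sample pushes it towards 0.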

2.2.6 Examples

In this example we study three pairs of time series: WIG and WIG20, WIG and DJIA, and USD/PLN and EUR/PLN. First, to obtain the best fit, an AR(10)-GARCH(1,1) model was built for each component of the bivariate time series. Then the described procedure of fitting a copula and obtaining the conditional dependence parameter was applied. To this end, the interval [0, 1] of the values of the univariate distribution function was divided into 4 subintervals: [0, 0.15), [0.15, 0.5), [0.5, 0.85), [0.85, 1]. Such a selection of subintervals allows us to concentrate on the tails of the distributions. We thus obtained 16 disjoint areas, and for each area the conditional dependence parameter was estimated using different copula functions. For the purpose of comparison, we present the results obtained in the case of the Frank copula. These results are given in Tables 2.7–2.9. The values on "the main diagonal" of the presented tables correspond to the same subintervals of the univariate distribution functions. Therefore, the values for the lowest interval (upper left corner of the table) and the highest interval (lower


Table 2.8: Conditional dependence parameter for time series WIG, DJIA.

               [0, 0.15)  [0.15, 0.5)  [0.5, 0.85)  [0.85, 1]
  [0, 0.15)       2.182       1.868        1.454      -0.207
  [0.15, 0.5)     1.169       0.532        1.246       0.493
  [0.5, 0.85)     0.809       0.954        0.806       1.301
  [0.85, 1]       2.675       2.845        0.666       1.202

STFeva11.xpl

Table 2.9: Conditional dependence parameter for time series USD/PLN, EUR/PLN.

               [0, 0.15)  [0.15, 0.5)  [0.5, 0.85)  [0.85, 1]
  [0, 0.15)       3.012       3.887        2.432       7.175
  [0.15, 0.5)     2.114       2.817        3.432       3.750
  [0.5, 0.85)     2.421       2.824        2.526       4.534
  [0.85, 1]       0.127       5.399        3.424       4.616

STFeva12.xpl

right corner of the table) correspond to the notions of lower tail dependence and upper tail dependence. Also, the more the values are concentrated along "the main diagonal", the stronger the conditional dependence that is observed. From the results presented in Tables 2.7–2.9 we can see that there is a strong conditional dependence between the returns on WIG and WIG20; the values of the conditional dependence parameter "monotonically decrease with the departure from the main diagonal." This property is not observed in the other two tables, where no significant regular patterns can be identified. We have presented here only some selected non-classical methods of the analysis of financial time series; they proved useful for real data. A plausible future direction of research is the integration of econometric methods, aimed at studying the dynamic properties, with statistical methods, aimed at studying the distributional properties.

Bibliography

Dickey, D. and Fuller, W. (1979). Distribution of the estimators for autoregressive time series with a unit root, Journal of the American Statistical Association, 74: 427–431.

Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer.

Franke, J., Härdle, W. and Hafner, C. (2004). Statistics of Financial Markets, Springer.

Galambos, J. (1978). The Asymptotic Theory of Extreme Order Statistics, Krieger Publishing.

Jondeau, E. and Rockinger, M. (2002). Conditional Dependency of Financial Series: The Copula-GARCH Model, FAME, working paper.

Kiesel, R. and Kleinow, T. (2002). Sensitivity analysis of credit portfolio models, in W. Härdle, T. Kleinow, G. Stahl (eds.), Applied Quantitative Finance, Springer.

Rank, J. and Siegl, T. (2002). Applications of Copulas for the Calculation of Value-at-Risk, in W. Härdle, T. Kleinow and G. Stahl (eds.), Applied Quantitative Finance, Springer.

Reiss, R.-D. and Thomas, M. (2000). Extreme Value Analysis, in W. Härdle, S. Klinke and M. Müller (eds.), XploRe Learning Guide, Springer.

Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges, Publications de l'Institut de Statistique de l'Université de Paris, 8: 229–231.

3 Tail Dependence

Rafael Schmidt

3.1 Introduction

Tail dependence describes the amount of dependence in the tail of a bivariate distribution. In other words, tail dependence refers to the degree of dependence in the corner of the lower-left quadrant or upper-right quadrant of a bivariate distribution. Recently, the concept of tail dependence has been discussed in financial applications related to market or credit risk; see Hauksson et al. (2001) and Embrechts et al. (2003). In particular, tail-dependent distributions are of interest in the context of Value at Risk (VaR) estimation for asset portfolios, since these distributions can model the dependence of large loss events (default events) between different assets. It is obvious that the portfolio's VaR is determined by the risk behavior of each single asset in the portfolio. On the other hand, the general dependence structure, and especially the dependence structure of extreme events, strongly influences the VaR calculation. However, it is not obvious to those unfamiliar with extreme value theory how to measure and model the dependence of, for example, large loss events. In other words, the correlation coefficient, which is the most common dependence measure in financial applications, is often insufficient to describe and estimate the dependence structure of large loss events, and therefore frequently leads to inaccurate VaR estimations, Embrechts et al. (1999). The main aim of this chapter is to introduce and to discuss the so-called tail-dependence coefficient as a simple measure of dependence of large loss events. Kiesel and Kleinow (2002) show empirically that a precise VaR estimation for asset portfolios depends heavily on the proper specification of the tail-dependence structure of the underlying asset-return vector. In their setting,


different choices of the portfolio's dependence structure, which is modelled by a copula function, determine the degree of dependence of large loss events. Motivated by their empirical observations, this chapter defines and explores the concept of tail dependence in more detail. First, we define and calculate tail dependence for several classes of distributions and copulae. In our context, tail dependence is characterized by the so-called tail-dependence coefficient (TDC) and is embedded into the general framework of copulae. Second, a parametric and two nonparametric estimators for the TDC are discussed. Finally, we investigate some empirical properties of the implemented TDC estimators and present an empirical study showing one application of the concept of tail dependence to VaR estimation.

3.2 What is Tail Dependence?

Definitions of tail dependence for multivariate random vectors are mostly related to their bivariate marginal distribution functions. Loosely speaking, tail dependence describes the limiting proportion that one margin exceeds a certain threshold given that the other margin has already exceeded that threshold. The following approach, as provided in the monograph of Joe (1997), represents one of many possible definitions of tail dependence. Let X = (X_1, X_2) be a two-dimensional random vector. We say that X is (bivariate) upper tail-dependent if:

   λ_U := lim_{v↑1} P{X_1 > F_1^{−1}(v) | X_2 > F_2^{−1}(v)} > 0,   (3.1)

in case the limit exists. F_1^{−1} and F_2^{−1} denote the generalized inverse distribution functions of X_1 and X_2, respectively. Consequently, we say X = (X_1, X_2) is upper tail-independent if λ_U equals 0. Further, we call λ_U the upper tail-dependence coefficient (upper TDC). Similarly, we define the lower tail-dependence coefficient, if it exists, by:

   λ_L := lim_{v↓0} P{X_1 ≤ F_1^{−1}(v) | X_2 ≤ F_2^{−1}(v)}.   (3.2)

In case X = (X_1, X_2) is standard normally or t-distributed, formula (3.1) simplifies to:

   λ_U = lim_{v↑1} λ_U(v) = lim_{v↑1} 2 · P{X_1 > F_1^{−1}(v) | X_2 = F_2^{−1}(v)}.   (3.3)

Figure 3.1: The function λ_U(v) = 2 · P{X_1 > F_1^{−1}(v) | X_2 = F_2^{−1}(v)} for a bivariate normal distribution with correlation coefficients ρ = −0.8, −0.6, ..., 0.6, 0.8. Note that λ_U = 0 for all ρ ∈ (−1, 1). STFtail01.xpl

A generalization of bivariate tail dependence, as defined above, to multivariate tail dependence can be found in Schmidt and Stadtmüller (2003). Figures 3.1 and 3.2 illustrate tail dependence for a bivariate normal and a bivariate t-distribution. Irrespective of the correlation coefficient ρ, the bivariate normal distribution is (upper) tail independent. In contrast, the bivariate t-distribution exhibits (upper) tail dependence, and the degree of tail dependence is affected by the correlation coefficient ρ. The concept of tail dependence can be embedded within the copula theory. An n-dimensional distribution function C : [0, 1]^n → [0, 1] is called a copula if it has one-dimensional margins which are uniformly distributed on the interval [0, 1]. Copulae are functions that join or "couple" an n-dimensional distribution function F to its corresponding one-dimensional marginal distribution functions
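The normal curves in Figure 3.1 can be reproduced from the conditional distribution X_1 | X_2 = x ~ N(ρx, 1 − ρ²), which gives λ_U(v) = 2[1 − Φ(Φ^{−1}(v) √{(1−ρ)/(1+ρ)})]. A stdlib-only sketch (the bisection quantile routine is our own helper, not from the text):

```python
import math

def norm_cdf(x):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    # standard normal quantile by bisection (slow but dependency-free)
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lambda_U_normal(v, rho):
    # lambda_U(v) = 2 P{X1 > F^{-1}(v) | X2 = F^{-1}(v)} for the bivariate normal
    z = norm_ppf(v)
    return 2.0 * (1.0 - norm_cdf(z * math.sqrt((1.0 - rho) / (1.0 + rho))))
```

As v ↑ 1 the function drops to 0 for every ρ ∈ (−1, 1), which is exactly the tail independence of the normal distribution visible in Figure 3.1.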

Figure 3.2: The function λ_U(v) = 2 · P{X_1 > F_1^{−1}(v) | X_2 = F_2^{−1}(v)} for a bivariate t-distribution with correlation coefficients ρ = −0.8, −0.6, ..., 0.6, 0.8. STFtail02.xpl

F_i, i = 1, ..., n, in the following way:

   F(x_1, ..., x_n) = C{F_1(x_1), ..., F_n(x_n)}.

We refer the reader to Joe (1997), Nelsen (1999) or Härdle, Kleinow, and Stahl (2002) for more information on copulae. The following representation shows that tail dependence is a copula property. Thus, many copula features transfer to the tail-dependence coefficient, such as the invariance under strictly increasing transformations of the margins. If X is a continuous bivariate random vector, then straightforward calculation yields:

   λ_U = lim_{v↑1} {1 − 2v + C(v, v)}/(1 − v),   (3.4)

where C denotes the copula of X. Analogously, λ_L = lim_{v↓0} C(v, v)/v holds for the lower tail-dependence coefficient.
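Formula (3.4) can be approximated numerically by taking v close to 1. For the Gumbel-Hougaard copula the diagonal is C(v, v) = v^{2^{1/θ}}, so the limit is the known value 2 − 2^{1/θ}. A small sketch (function names are ours):

```python
import math

def gumbel_diag(v, theta):
    # Gumbel-Hougaard copula on the diagonal: C(v, v) = v^{2^{1/theta}}
    return v ** (2.0 ** (1.0 / theta))

def upper_tdc(diag, v=1.0 - 1e-6):
    # finite-v approximation of (3.4): (1 - 2v + C(v, v)) / (1 - v)
    return (1.0 - 2.0 * v + diag(v)) / (1.0 - v)
```

For the independence copula, C(v, v) = v² gives (1 − v) → 0 (no tail dependence), while comonotonicity, C(v, v) = v, gives λ_U = 1.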

3.3 Calculation of the Tail-dependence Coefficient

3.3.1 Archimedean Copulae

Archimedean copulae form an important class of copulae which are easy to construct and have good analytical properties. A bivariate Archimedean copula has the form C(u, v) = ψ^{[−1]}{ψ(u) + ψ(v)} for some continuous, strictly decreasing, and convex generator function ψ : [0, 1] → [0, ∞] such that ψ(1) = 0, where the pseudo-inverse function ψ^{[−1]} is defined by:

   ψ^{[−1]}(t) = ψ^{−1}(t) for 0 ≤ t ≤ ψ(0),  and  ψ^{[−1]}(t) = 0 for ψ(0) < t ≤ ∞.

We call ψ strict if ψ(0) = ∞; in that case ψ^{[−1]} = ψ^{−1}. Within the framework of tail dependence for Archimedean copulae, the following result can be shown (Schmidt, 2003). Note that the one-sided derivatives of ψ exist, as ψ is a convex function. In particular, ψ'(1) and ψ'(0) denote the one-sided derivatives at the boundary of the domain of ψ. Then:

i) upper tail-dependence implies ψ'(1) = 0 and λ_U = 2 − (ψ^{−1} ∘ 2ψ)'(1),
ii) ψ'(1) < 0 implies upper tail-independence,
iii) ψ'(0) > −∞ or a non-strict ψ implies lower tail-independence,
iv) lower tail-dependence implies ψ'(0) = −∞, a strict ψ, and λ_L = (ψ^{−1} ∘ 2ψ)'(0).

Tables 3.1 and 3.2 list various Archimedean copulae in the same ordering as provided in Nelsen (1999, Table 4.1, p. 94) and in Härdle, Kleinow, and Stahl (2002, Table 2.1, p. 42), together with the corresponding upper and lower tail-dependence coefficients (TDCs).
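Results i) and iv) can be checked numerically with one-sided difference quotients of g = ψ^{−1} ∘ 2ψ. Below we do this for the Gumbel-Hougaard generator ψ(t) = (−log t)^θ (expected λ_U = 2 − 2^{1/θ}) and the Clayton generator ψ(t) = t^{−θ} − 1 (expected λ_L = 2^{−1/θ}); the helper names are ours:

```python
import math

def archimedean_upper_tdc(psi, psi_inv, h=1e-7):
    # i): lambda_U = 2 - (psi^{-1} o 2 psi)'(1), one-sided difference at t = 1
    g = lambda t: psi_inv(2.0 * psi(t))
    return 2.0 - (g(1.0) - g(1.0 - h)) / h

def archimedean_lower_tdc(psi, psi_inv, h=1e-7):
    # iv): lambda_L = (psi^{-1} o 2 psi)'(0); g(0) = 0 for a strict generator,
    # so the one-sided derivative at 0 is approximated by g(h) / h
    return psi_inv(2.0 * psi(h)) / h
```

For θ = 2 this gives λ_U ≈ 2 − √2 ≈ 0.586 (Gumbel-Hougaard) and λ_L ≈ 0.5 for the Clayton generator with θ = 1, matching the entries of Table 3.2.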


Table 3.1: Various selected Archimedean copulae. The numbers in the first column correspond to the numbers of Table 4.1 in Nelsen (1999), p. 94.

  Number & Type         C(u, v)                                                    Parameters
  (1) Clayton           max{(u^{-θ} + v^{-θ} - 1)^{-1/θ}, 0}                       θ ∈ [-1, ∞)\{0}
  (2)                   max{1 - [(1-u)^θ + (1-v)^θ]^{1/θ}, 0}                      θ ∈ [1, ∞)
  (3) Ali-Mikhail-Haq   uv / {1 - θ(1-u)(1-v)}                                     θ ∈ [-1, 1)
  (4) Gumbel-Hougaard   exp(-[(-log u)^θ + (-log v)^θ]^{1/θ})                      θ ∈ [1, ∞)
  (12)                  {1 + [(u^{-1} - 1)^θ + (v^{-1} - 1)^θ]^{1/θ}}^{-1}         θ ∈ [1, ∞)
  (14)                  {1 + [(u^{-1/θ} - 1)^θ + (v^{-1/θ} - 1)^θ]^{1/θ}}^{-θ}     θ ∈ [1, ∞)
  (19)                  θ / log(e^{θ/u} + e^{θ/v} - e^θ)                           θ ∈ (0, ∞)

Table 3.2: Tail-dependence coefficients (TDCs) and generators ψ_θ for various selected Archimedean copulae. The numbers in the first column correspond to the numbers of Table 4.1 in Nelsen (1999), p. 94.

  Number & Type         ψ_θ(t)                   Parameter θ      Upper-TDC      Lower-TDC
  (1) Pareto            t^{-θ} - 1               [-1, ∞)\{0}      0 for θ > 0    2^{-1/θ} for θ > 0
  (2)                   (1 - t)^θ                [1, ∞)           2 - 2^{1/θ}    0
  (3) Ali-Mikhail-Haq   log{(1 - θ(1-t))/t}      [-1, 1)          0              0
  (4) Gumbel-Hougaard   (-log t)^θ               [1, ∞)           2 - 2^{1/θ}    0
  (12)                  (1/t - 1)^θ              [1, ∞)           2 - 2^{1/θ}    2^{-1/θ}
  (14)                  (t^{-1/θ} - 1)^θ         [1, ∞)           2 - 2^{1/θ}    1/2
  (19)                  e^{θ/t} - e^θ            (0, ∞)           0              1

3.3.2 Elliptically-contoured Distributions

In this section, we calculate the tail-dependence coefficient for elliptically-contoured distributions (briefly: elliptical distributions). Well-known elliptical distributions are the multivariate normal distribution, the multivariate t-distribution, the multivariate logistic distribution, the multivariate symmetric stable distribution, and the multivariate symmetric generalized-hyperbolic distribution. Elliptical distributions are defined as follows: Let X be an n-dimensional random vector and Σ ∈ R^{n×n} be a symmetric positive semi-definite matrix. If X − µ, for some µ ∈ R^n, possesses a characteristic function of the form φ_{X−µ}(t) = Ψ(tᵀΣt) for some function Ψ : R_0^+ → R, then X is said to be elliptically distributed with parameters µ (location), Σ (dispersion), and Ψ. Let E_n(µ, Σ, Ψ) denote the class of elliptically-contoured distributions with the latter parameters. We call Ψ the characteristic generator. The density function, if it exists, of an elliptically-contoured distribution has the following form:

   f(x) = |Σ|^{−1/2} g{(x − µ)ᵀ Σ^{−1} (x − µ)},   x ∈ R^n,   (3.5)

for some function g : R_0^+ → R_0^+, which we call the density generator.

Observe that the name “elliptically-contoured distribution” is related to the elliptical contours of the latter density. For a more detailed treatment of elliptical distributions see the monograph of Fang, Kotz, and Ng (1990) or Cambanis, Huang, and Simon (1981).


In connection with financial applications, Bingham and Kiesel (2002) and Bingham, Kiesel, and Schmidt (2002) propose a semi-parametric approach for elliptical distributions by estimating the parametric component (µ, Σ) separately from the density generator g. In their setting, the density generator is estimated by means of nonparametric statistics. Schmidt (2002b) shows that bivariate elliptically-contoured distributions are upper and lower tail-dependent if the tail of their density generator is regularly varying, i.e. the tail behaves asymptotically like a power function. Further, a necessary condition for tail dependence is given which is more general than regular variation of the latter tail: more precisely, the tail must be O-regularly varying (see Bingham, Goldie, and Teugels (1987) for the definition of O-regular variation). Although the equivalence of tail dependence and a regularly-varying density generator has not been shown, all density generators of well-known elliptical distributions possess either a regularly-varying tail or a not O-regularly-varying tail. This justifies a restriction to the class of elliptical distributions with regularly-varying density generator if tail dependence is required. In particular, tail dependence is solely determined by the tail behavior of the density generator (except for completely correlated random variables, which are always tail dependent). The following closed-form expression exists (Schmidt, 2002b) for the upper and lower tail-dependence coefficient of an elliptically-contoured random vector (X_1, X_2) ∈ E_2(µ, Σ, Ψ) with positive-definite matrix

   Σ = ( σ_11  σ_12 ; σ_12  σ_22 ),

having a regularly-varying density generator g with regular variation index −α/2 − 1 < 0:

   λ := λ_U = λ_L = ∫_0^{h(ρ)} u^α/√(1 − u²) du  /  ∫_0^1 u^α/√(1 − u²) du,   (3.6)

where ρ = σ_12/√(σ_11 σ_22) and h(ρ) = {1 + (1 − ρ)²/(1 − ρ²)}^{−1/2}. Note that ρ corresponds to the "correlation" coefficient when it exists (Fang, Kotz, and Ng, 1990).
Moreover, the upper tail-dependence coeﬃcient λU coincides with the lower tail-dependence coeﬃcient λL and depends only on the “correlation” coeﬃcient ρ and the regular variation index α, see Figure 3.3.
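Formula (3.6) is easy to evaluate with the substitution u = sin t, which turns ∫ u^α/√(1 − u²) du into ∫ sin^α t dt and removes the endpoint singularity. For ρ = 0 and α = 2 the result is ≈ 0.1817, the value quoted for the bivariate t-distribution with θ = 2 in Section 3.5. A sketch (stdlib only, trapezoidal rule; the names are ours):

```python
import math

def elliptical_tdc(rho, alpha, n=20000):
    # lambda = I(h(rho)) / I(1) with I(x) = integral_0^x u^alpha / sqrt(1-u^2) du,
    # evaluated via u = sin(t) to avoid the singularity at u = 1
    def I(x):
        b = math.asin(x)
        h = b / n
        s = 0.5 * (0.0 + math.sin(b) ** alpha)   # trapezoid endpoints
        for k in range(1, n):
            s += math.sin(k * h) ** alpha
        return s * h
    h_rho = (1.0 + (1.0 - rho) ** 2 / (1.0 - rho ** 2)) ** -0.5
    return I(h_rho) / I(1.0)
```

`elliptical_tdc(0.0, 2.0)` is close to 0.1817, and λ increases in ρ and decreases in α, reproducing the shape of Figure 3.3.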

Figure 3.3: Tail-dependence coefficient λ versus regular variation index α for "correlation" coefficients ρ = 0.5, 0.3, 0.1. STFtail03.xpl

Table 3.3 lists various elliptical distributions, the corresponding density generators (here cn denotes a normalizing constant depending only on the dimension n) and the associated regular variation index α from which one easily derives the tail-dependence coeﬃcient using formula (3.6).


Table 3.3: Tail index α for various density generators g of multivariate elliptical distributions. K_ν denotes the modified Bessel function of the third kind (or Macdonald function).

  Number & Type                    Density generator g or characteristic generator Ψ        Parameters         α for n = 2
  (23) Normal                      g(u) = c_n exp(−u/2)                                     —                  ∞
  (24) t                           g(u) = c_n (1 + u/θ)^{−(n+θ)/2}                          θ > 0              θ
  (25) Symmetric gen. hyperbolic   g(u) = c_n K_{λ−n/2}{√(ς(χ + u))} / (√(χ + u))^{n/2−λ}   ς, χ > 0, λ ∈ R    ∞
  (26) Symmetric θ-stable          Ψ(u) = exp{−(u/2)^{θ/2}}                                 θ ∈ (0, 2]         θ
  (27) logistic                    g(u) = c_n exp(−u)/{1 + exp(−u)}²                        —                  ∞

3.3.3 Other Copulae

For many other closed form copulae one can explicitly derive the tail-dependence coeﬃcient. Tables 3.4 and 3.5 list some well-known copula functions and the corresponding lower and upper TDCs.


Table 3.4: Various copulae. Copulae BBx are provided in Joe (1997).

  Number & Type   C(u, v)                                                                            Parameters

  (28) Raftery    g{min(u, v), max(u, v); θ} with
                  g{x, y; θ} = x − [(1−θ)/(1+θ)] x^{1/(1−θ)} [y^{−θ/(1−θ)} − y^{1/(1−θ)}]            θ ∈ [0, 1]

  (29) BB1        {1 + [(u^{−θ} − 1)^δ + (v^{−θ} − 1)^δ]^{1/δ}}^{−1/θ}                               θ ∈ (0, ∞), δ ∈ [1, ∞)

  (30) BB4        {u^{−θ} + v^{−θ} − 1 − [(u^{−θ} − 1)^{−δ} + (v^{−θ} − 1)^{−δ}]^{−1/δ}}^{−1/θ}      θ ∈ [0, ∞), δ ∈ (0, ∞)

  (31) BB7        1 − (1 − [{1 − (1 − u)^θ}^{−δ} + {1 − (1 − v)^θ}^{−δ} − 1]^{−1/δ})^{1/θ}           θ ∈ [1, ∞), δ ∈ (0, ∞)

  (32) BB8        (1/δ)[1 − {1 − (1 − (1 − δ)^θ)^{−1} (1 − (1 − δu)^θ)(1 − (1 − δv)^θ)}^{1/θ}]       θ ∈ [1, ∞), δ ∈ [0, 1]

  (33) BB11       θ min(u, v) + (1 − θ)uv                                                            θ ∈ [0, 1]

  (34) C_Ω in     β C^s_{(θ̄,δ̄)}(u, v) + (1 − β) C_{(θ,δ)}(u, v) with Archimedean generator
  Junker and      ψ_{(θ,δ)}(t) = {−log[(e^{−θt} − 1)/(e^{−θ} − 1)]}^δ;
  May (2002)      C^s_{(θ̄,δ̄)} is the survival copula with param. (θ̄, δ̄)                             θ, θ̄ ∈ R\{0}; δ, δ̄ ≥ 1; β ∈ [0, 1]

3.4 Estimating the Tail-dependence Coefficient

Suppose X, X^{(1)}, ..., X^{(m)} are i.i.d. bivariate random vectors with distribution function F and copula C. We assume continuous marginal distribution functions F_i, i = 1, 2. Tests for tail dependence or tail independence are given for example in Ledford and Tawn (1996) or Draisma et al. (2004). We consider the following three (non-)parametric estimators for the lower and upper tail-dependence coefficients λ_U and λ_L. These estimators have been discussed in Huang (1992) and Schmidt and Stadtmüller (2003). Let C_m be the


Table 3.5: Tail-dependence coefficients (TDCs) for various copulae. Copulae BBx are provided in Joe (1997).

  Number & Type                       Parameters                            Upper-TDC              Lower-TDC
  (28) Raftery                        θ ∈ [0, 1]                            0                      2θ/(1 + θ)
  (29) BB1                            θ ∈ (0, ∞), δ ∈ [1, ∞)                2 − 2^{1/δ}            2^{−1/(θδ)}
  (30) BB4                            θ ∈ [0, ∞), δ ∈ (0, ∞)                2^{−1/δ}               (2 − 2^{−1/δ})^{−1/θ}
  (31) BB7                            θ ∈ [1, ∞), δ ∈ (0, ∞)                2 − 2^{1/θ}            2^{−1/δ}
  (32) BB8                            θ ∈ [1, ∞), δ ∈ [0, 1]                2 − 2(1 − δ)^{θ−1}     0
  (33) BB11                           θ ∈ [0, 1]                            θ                      θ
  (34) C_Ω in Junker and May (2002)   θ, θ̄ ∈ R\{0}; δ, δ̄ ≥ 1; β ∈ [0, 1]    (1 − β)(2 − 2^{1/δ})   β(2 − 2^{1/δ̄})
empirical copula defined by:

   C_m(u, v) = F_m{F_{1m}^{−1}(u), F_{2m}^{−1}(v)},   (3.7)

with F_m and F_{im} denoting the empirical distribution functions corresponding to F and F_i, i = 1, 2, respectively. Let R_{m1}^{(j)} and R_{m2}^{(j)} be the ranks of X_1^{(j)} and X_2^{(j)}, j = 1, ..., m, respectively. The first estimators are based on formulas (3.1) and (3.2):

   λ̂_{U,m}^{(1)} = (m/k) C_m{(1 − k/m, 1] × (1 − k/m, 1]}
              = (1/k) Σ_{j=1}^m I(R_{m1}^{(j)} > m − k, R_{m2}^{(j)} > m − k)   (3.8)

and

   λ̂_{L,m}^{(1)} = (m/k) C_m(k/m, k/m) = (1/k) Σ_{j=1}^m I(R_{m1}^{(j)} ≤ k, R_{m2}^{(j)} ≤ k),   (3.9)

where k = k(m) → ∞ and k/m → 0 as m → ∞, and the first expression in (3.8) has to be understood as the empirical copula-measure of the interval (1 − k/m, 1] × (1 − k/m, 1]. The second type of estimator is already well known in multivariate extreme-value theory (Huang, 1992). We only provide the estimator for the upper TDC:

   λ̂_{U,m}^{(2)} = 2 − (m/k){1 − C_m(1 − k/m, 1 − k/m)}
              = 2 − (1/k) Σ_{j=1}^m I(R_{m1}^{(j)} > m − k or R_{m2}^{(j)} > m − k),   (3.10)

with k = k(m) → ∞ and k/m → 0 as m → ∞. The optimal choice of k is related to the usual variance-bias problem, and we refer the reader to Peng (1998) for more details. Strong consistency and asymptotic normality for both types of nonparametric estimators are also addressed in the latter three references. Now we focus on an elliptically-contoured bivariate random vector X. In the presence of tail dependence, the previous arguments justify a sole consideration of elliptical distributions having a regularly-varying density generator with regular variation index α. This implies that the distribution function of ||X||_2 also has a regularly-varying tail with index α. Formula (3.6) shows that the upper and lower tail-dependence coefficients λ_U and λ_L depend only on the regular variation index α and the "correlation" coefficient ρ. Hence, we propose the following parametric estimator for λ_U and λ_L:

   λ̂_{U,m}^{(3)} = λ̂_{L,m}^{(3)} = λ_U(α̂_m, ρ̂_m).   (3.11)
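The rank-based estimators (3.8) and (3.10) amount to a few lines of code. A sketch (ties and the data-driven choice of k are ignored; the names are ours):

```python
def ranks(x):
    # rank of each observation (1 = smallest), assuming no ties
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for j, i in enumerate(order, start=1):
        r[i] = j
    return r

def upper_tdc_hat(x, y, k):
    # returns (lambda_hat^(1), lambda_hat^(2)) from formulas (3.8) and (3.10)
    m = len(x)
    rx, ry = ranks(x), ranks(y)
    joint = sum(1 for i in range(m) if rx[i] > m - k and ry[i] > m - k)
    either = sum(1 for i in range(m) if rx[i] > m - k or ry[i] > m - k)
    return joint / k, 2.0 - either / k
```

For comonotone data both estimates equal 1; for countermonotone data both equal 0 (once k ≤ m/2).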


Several robust estimators ρ̂_m for ρ are provided in the literature, such as estimators based on techniques of multivariate trimming (Hahn, Mason, and Weiner, 1991), minimum-volume ellipsoid estimators (Rousseeuw and van Zomeren, 1990), and least squares estimators (Frahm et al., 2002). For more details regarding the relationship between the regular variation index α, the density generator, and the random variable ||X||_2 we refer to Schmidt (2002b). Observe that even though the estimator for the regular variation index α might be unbiased, the TDC estimator λ̂_{U,m}^{(3)} is biased due to the integral transform.

3.5 Comparison of TDC Estimators

In this section we investigate the finite-sample properties of the introduced TDC estimators. One thousand independent copies of m = 500, 1000, and 2000 i.i.d. random vectors (m denotes the sample length) of a bivariate standard t-distribution with θ = 1.5, 2, and 3 degrees of freedom are generated, and the upper TDCs are estimated. Note that the parameter θ equals the regular variation index α which we discussed in the context of elliptically-contoured distributions. The empirical bias and root-mean-squared error (RMSE) for all three introduced TDC estimation methods are derived and presented in Tables 3.6, 3.7, and 3.8, respectively.

Table 3.6: Bias and RMSE for the nonparametric upper TDC estimator λ̂_U^{(1)} (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5             θ = 2               θ = 3
              λ_U = 0.2296        λ_U = 0.1817        λ_U = 0.1161
              Bias (RMSE)         Bias (RMSE)         Bias (RMSE)
  m = 500     25.5 (60.7)         43.4 (72.8)         71.8 (92.6)
  m = 1000    15.1 (47.2)         28.7 (55.3)         51.8 (68.3)
  m = 2000     8.2 (38.6)         19.1 (41.1)         36.9 (52.0)


ˆ (2) Table 3.7: Bias and RMSE for the nonparametric upper TDC estimator λ U 3 (multiplied by 10 ). The sample length is denoted by m. Original θ = 1.5 θ=2 θ=3 parameters λU = 0.2296 λU = 0.1817 λU = 0.1161 ˆ (2) ˆ (2) ˆ (2) Estimator λ λ λ U U U Bias (RMSE) Bias (RMSE) Bias (RMSE) m = 500 m = 1000 m = 2000

53.9 (75.1) 33.3 (54.9) 22.4 (41.6)

70.3 (88.1) 49.1 (66.1) 32.9 (47.7)

103.1 (116.4) 74.8 (86.3) 56.9 (66.0)

Table 3.8: Bias and RMSE for the parametric upper TDC estimator λ̂_U^{(3)} (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5             θ = 2               θ = 3
              λ_U = 0.2296        λ_U = 0.1817        λ_U = 0.1161
              Bias (RMSE)         Bias (RMSE)         Bias (RMSE)
  m = 500      1.6 (30.5)          3.5 (30.8)         16.2 (33.9)
  m = 1000     2.4 (22.4)          5.8 (23.9)         15.4 (27.6)
  m = 2000     2.4 (15.5)          5.4 (17.0)         12.4 (21.4)

Regarding the parametric approach, we apply the procedure introduced in Section 3.4 and estimate ρ by a trimmed empirical correlation coefficient with trimming proportion 0.05% and α (= θ) by a Hill estimator. For the latter we choose the optimal threshold value k according to Drees and Kaufmann (1998). The empirical bias and RMSE corresponding to the estimation of ρ and α are provided in Tables 3.9 and 3.10. Observe that Pearson's correlation coefficient ρ does not exist for θ < 2. In this case, ρ denotes some dependence parameter and a more robust estimation procedure should be used (Frahm et al., 2002).
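The Hill estimator used for α is standard: with order statistics X_{(1)} ≤ ... ≤ X_{(m)}, α̂ = {(1/k) Σ_{i=1}^k log(X_{(m−i+1)}/X_{(m−k)})}^{−1}. A sketch (the data-driven threshold choice of Drees and Kaufmann (1998) is not reproduced here; k is simply passed in):

```python
import math

def hill_alpha(x, k):
    # Hill estimator of the tail index alpha from the k largest observations
    xs = sorted(x)
    m = len(xs)
    x_mk = xs[m - k - 1]                 # threshold order statistic X_{(m-k)}
    gamma = sum(math.log(xs[m - i] / x_mk) for i in range(1, k + 1)) / k
    return 1.0 / gamma
```

On exact Pareto quantiles with tail index α = 2 the estimate comes out close to 2.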


Table 3.9: Bias and RMSE for the estimator of the regular variation index α (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5 (α = 1.5)     θ = 2 (α = 2)       θ = 3 (α = 3)
              Bias (RMSE)           Bias (RMSE)         Bias (RMSE)
  m = 500       2.2 (211.9)         −19.8 (322.8)       −221.9 (543.7)
  m = 1000    −14.7 (153.4)         −48.5 (235.6)       −242.2 (447.7)
  m = 2000    −15.7 (101.1)         −60.6 (173.0)       −217.5 (359.4)

Table 3.10: Bias and RMSE for the "correlation" coefficient estimator ρ̂ (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5 (ρ = 0)     θ = 2 (ρ = 0)     θ = 3 (ρ = 0)
              Bias (RMSE)         Bias (RMSE)       Bias (RMSE)
  m = 500      0.02 (61.6)        −2.6 (58.2)        2.1 (56.5)
  m = 1000    −0.32 (44.9)         1.0 (42.1)        0.6 (39.5)
  m = 2000     0.72 (32.1)        −1.2 (29.3)       −1.8 (27.2)

Finally, Figures 3.4 and 3.5 illustrate the (non-)parametric estimation results of the upper TDC estimators λ̂_U^{(i)}, i = 1, 2, 3. Presented are 3 × 1000 TDC estimates with sample lengths m = 500, 1000, and 2000. The plots visualize the decreasing empirical bias and variance for increasing sample length.

Figure 3.4: Nonparametric upper TDC estimates λ̂_U^{(1)} (left panel) and λ̂_U^{(2)} (right panel) for 3 × 1000 i.i.d. samples of size m = 500, 1000, 2000 from a bivariate t-distribution with parameters θ = 2, ρ = 0, and λ_U = 0.1817. STFtail04.xpl

The empirical study shows that the TDC estimator λ̂_U^{(3)} outperforms the other two estimators. For m = 2000, the bias (RMSE) of λ̂_U^{(1)} is three (two and a half) times larger than the bias (RMSE) of λ̂_U^{(3)}, whereas the bias (RMSE) of λ̂_U^{(2)} is two (ten percent) times larger than the bias (RMSE) of λ̂_U^{(1)}. More empirical and statistical results regarding the estimators λ̂_U^{(1)} and λ̂_U^{(2)} are given in Schmidt and Stadtmüller (2003). However, note that the estimator λ̂_U^{(3)} was especially developed for bivariate elliptically-contoured distributions. Thus, the estimator λ̂_U^{(1)} is recommended for TDC estimations of non-elliptical or unknown bivariate distributions.

3.6 Tail Dependence of Asset and FX Returns

Tail dependence is indeed often found in financial data series. Consider two scatter plots of daily negative log-returns of a tuple of financial securities and the corresponding upper TDC estimate λ̂_U^{(1)} for various k (for notational convenience we drop the index m).


Figure 3.5: Parametric upper TDC estimates λ̂_U^{(3)} for 3 × 1000 i.i.d. samples of size m = 500, 1000, 2000 from a bivariate t-distribution with parameters θ = 2, ρ = 0, and λ_U = 0.1817. STFtail05.xpl

The first data set (D1) contains negative daily stock log-returns of BMW and Deutsche Bank for the time period 1992–2001. The second data set (D2) consists of negative daily exchange rate log-returns of DEM/USD and JPY/USD (so-called FX returns) for the time period 1989–2001. For modelling reasons we assume that the daily log-returns are i.i.d. observations. Figures 3.6 and 3.7 show the presence of tail dependence and the order of magnitude of the tail-dependence coefficient. Tail dependence is present if the plot of the TDC estimates λ̂_U^{(1)} against the thresholds k shows a characteristic plateau for small k. The existence of this plateau for tail-dependent distributions is justified by a regular variation property of the tail distribution; we refer the reader to Peng (1998) or Schmidt and Stadtmüller (2003) for more details. By contrast, the characteristic plateau is not observable if the distribution is tail independent.

Figure 3.6: Scatter plot of BMW versus Deutsche Bank negative daily stock log-returns (2347 data points) and the corresponding TDC estimate λ̂_U^{(1)} for various thresholds k. STFtail06.xpl

The typical variance-bias problem for various thresholds k can also be observed in Figures 3.6 and 3.7. In particular, a small k comes along with a large variance of the TDC estimator, whereas increasing k results in a strong bias. In the presence of tail dependence, k is chosen such that the TDC estimate λ̂_U^{(1)} lies on the plateau between the decreasing variance and the increasing bias. Thus for the data set D1 we take k between 80 and 110, which provides a TDC estimate of λ̂_{U,D1}^{(1)} = 0.31, whereas for D2 we choose k between 40 and 90, which yields λ̂_{U,D2}^{(1)} = 0.17.

The importance of the detection and the estimation of tail dependence becomes clear in the next section. In particular, we show that the Value at Risk estimation of a portfolio is closely related to the concept of tail dependence. A proper analysis of tail dependence results in an adequate choice of the portfolio’s loss distribution and leads to a more precise assessment of the Value at Risk.

Figure 3.7: Scatter plot of DEM/USD versus JPY/USD negative daily exchange rate log-returns (3126 data points) and the corresponding TDC estimate $\hat{\lambda}_U^{(1)}$ for various thresholds k. STFtail07.xpl

3.7 Value at Risk – a Simulation Study

Value at Risk (VaR) estimation refers to the estimation of high target quantiles of single-asset or portfolio loss distributions. VaR estimates are therefore very sensitive to the tail behavior of the underlying distribution model. On the one hand, the VaR of a portfolio is affected by the tail distribution of each single asset. On the other hand, the general dependence structure, and especially the tail-dependence structure among all assets, also has a strong impact on the portfolio's VaR. With the concept of tail dependence we supply a methodology for measuring and modelling one particular type of dependence of extreme events. What follows provides empirical justification that the portfolio's VaR estimation depends heavily on a proper specification of the (tail-)dependence structure of the underlying asset-return vector. To illustrate our assertion we consider three financial data sets: the first two data sets D1 and D2 refer again to the daily stock log-returns of BMW and Deutsche Bank for the time period 1992–2001 and the daily exchange rate log-returns of DEM/USD and JPY/USD for the time period 1989–2001, respectively. The third data set (D3) contains exchange rate log-returns of FFR/USD and DEM/USD for the time period 1984–2002.

Figure 3.8: Scatter plot of foreign exchange data (left panel) and simulated normal pseudo-random variables (right panel) of FFR/USD versus DEM/USD negative daily exchange rate log-returns (5189 data points). STFtail08.xpl

Typically, in practice, either a multivariate normal distribution or a multivariate t-distribution is fitted to the data in order to describe the random behavior (market riskiness) of asset returns. Multivariate t-distributions in particular have recently gained the attention of practitioners due to their ability to model heavy tails while still belonging to the class of elliptically contoured distributions. Recall that the multivariate normal distribution has thin-tailed marginals which exhibit no tail dependence, whereas the t-distribution possesses heavy-tailed marginals which are tail dependent (see Section 3.3.2). Due to this different tail behavior, one might pick one of the latter two distribution classes if the data are elliptically contoured. However, elliptically contoured distributions require a very strong symmetry of the data and might not be appropriate in many circumstances. For example, the scatter plot of the data set D3 in Figure 3.8 reveals that its distributional structure does not seem to be elliptically contoured at all.


To circumvent this problem, one could fit a distribution from a broader class, such as a generalized hyperbolic distribution (Eberlein and Keller, 1995; Bingham and Kiesel, 2002). Alternatively, a split of the dependence structure and the marginal distribution functions via the theory of copulae (as described in Section 3.2) is also attractive. This split exploits the fact that statistical (calibration) methods are well established for one-dimensional distribution functions. For the data sets D1, D2, and D3, one-dimensional t-distributions are utilized to model the marginal distributions. The choice of an appropriate copula function turns out to be delicate. Two structural features are important for the choice of the copula in the context of VaR estimation. First, the general structure (symmetry) of the chosen copula should coincide with the dependence structure of the real data. We visualize the dependence structure of the sample data via the respective empirical copula (Figure 3.9), i.e. the marginal distributions are standardized by the corresponding empirical distribution functions. Second, if the data show tail dependence, then one must utilize a copula which comprises tail dependence. VaR estimations at a small confidence level are especially sensitive to tail dependence. Figure 3.9 indicates that the FX data set D3 has significantly more dependence in the lower tail than the simulated data from a fitted bivariate normal copula. The clustering of data in the lower left corner of the scatter plot of the empirical copula is a strong indication of tail dependence. Based on the latter findings, we use a t-copula (which allows for tail dependence, see Section 3.3.2) and t-distributed marginals (which are heavy tailed).

Note that the resulting common distribution is elliptically contoured only if the degrees of freedom of the t-copula and the t-margins coincide, since in this case the common distribution corresponds to a multivariate t-distribution. The parameters of the marginals and the copula are estimated separately in two consecutive steps via maximum likelihood. For the statistical properties of this procedure, which is called the Inference Functions for Margins (IFM) method, we refer to Joe and Xu (1996). Tables 3.11, 3.12, and 3.13 compare the historical VaR estimates of the data sets D1, D2, and D3 with the average of 100 VaR estimates simulated from different fitted distributions: a bivariate normal, a bivariate t-distribution, and a bivariate distribution with t-copula and t-marginals. The respective standard deviations of the VaR estimates are provided in parentheses. For a better exposition, we have multiplied all numbers by 10^5.
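A two-step IFM-style fit can be sketched in Python (an illustration under simplifying assumptions, not the authors' implementation: the copula correlation is obtained from Kendall's tau via the elliptical-copula relation rather than by full maximum likelihood, and only the copula's degrees of freedom are profiled):

```python
import numpy as np
from scipy import stats, optimize
from scipy.special import gammaln

def fit_t_copula_ifm(x, y):
    """Step 1: ML fit of univariate t marginals; Step 2: bivariate t-copula,
    with correlation from Kendall's tau and degrees of freedom nu by ML."""
    px, py = stats.t.fit(x), stats.t.fit(y)              # (df, loc, scale)
    u = np.clip(stats.t.cdf(x, *px), 1e-6, 1 - 1e-6)     # pseudo-uniforms
    v = np.clip(stats.t.cdf(y, *py), 1e-6, 1 - 1e-6)
    rho = np.sin(np.pi * stats.kendalltau(x, y)[0] / 2)  # elliptical relation

    def neg_loglik(nu):
        # t-copula log-density: bivariate t log-pdf minus marginal t log-pdfs,
        # evaluated at the t quantiles of the pseudo-uniforms.
        a, b = stats.t.ppf(u, nu), stats.t.ppf(v, nu)
        quad = (a**2 - 2 * rho * a * b + b**2) / (nu * (1 - rho**2))
        log_joint = (gammaln((nu + 2) / 2) - gammaln(nu / 2)
                     - np.log(nu * np.pi) - 0.5 * np.log(1 - rho**2)
                     - (nu + 2) / 2 * np.log1p(quad))
        return -np.sum(log_joint - stats.t.logpdf(a, nu) - stats.t.logpdf(b, nu))

    nu = optimize.minimize_scalar(neg_loglik, bounds=(1.0, 60.0),
                                  method="bounded").x
    return rho, nu, px, py
```

Once ρ and ν are estimated, the portfolio VaR can be obtained by simulating from the fitted t-copula, mapping the uniforms through the fitted marginal quantile functions, and taking empirical quantiles of the simulated portfolio losses.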

Figure 3.9: Lower left corner of the empirical copula density plots of real data (left panel) and simulated normal pseudo-random variables (right panel) of FFR/USD versus DEM/USD negative daily exchange rate log-returns (5189 data points). STFtail09.xpl

Table 3.11: Mean and standard deviation of 100 VaR estimations (multiplied by 10^5) from simulated data following different distributions which are fitted to the data set D1.

Quantile   Historical VaR   Normal           t-distribution    t-copula & t-marginals
                            Mean (Std)       Mean (Std)        Mean (Std)
0.01       489.93           397.66 (13.68)   464.66 (39.91)    515.98 (36.54)
0.025      347.42           335.28 (9.67)    326.04 (18.27)    357.40 (18.67)
0.05       270.41           280.69 (7.20)    242.57 (10.35)    260.27 (11.47)

Table 3.12: Mean and standard deviation of 100 VaR estimations (multiplied by 10^5) from simulated data following different distributions which are fitted to the data set D2.

Quantile   Historical VaR   Normal           t-distribution    t-copula & t-marginals
                            Mean (Std)       Mean (Std)        Mean (Std)
0.01       155.15           138.22 (4.47)    155.01 (8.64)     158.25 (8.24)
0.025      126.63           116.30 (2.88)    118.28 (4.83)     120.08 (4.87)
0.05        98.27            97.56 (2.26)     92.35 (2.83)      94.14 (3.12)

Table 3.13: Mean and standard deviation of 100 VaR estimations (multiplied by 10^5) from simulated data following different distributions which are fitted to the data set D3.

Quantile   Historical VaR   Normal           t-distribution    t-copula & t-marginals
                            Mean (Std)       Mean (Std)        Mean (Std)
0.01       183.95           156.62 (3.65)    179.18 (9.75)     179.41 (6.17)
0.025      141.22           131.54 (2.41)    124.49 (4.43)     135.21 (3.69)
0.05       109.94           110.08 (2.05)     91.74 (2.55)     105.67 (2.59)

The results of Tables 3.11–3.13 clearly show that the fitted bivariate normal distribution does not yield an overall satisfying estimation of the VaR for the data sets D1, D2, and D3. The poor estimates for the 0.01- and 0.025-quantile VaR (i.e. the means of the VaR estimates deviate strongly from the historical VaR estimate) are mainly caused by the thin tails of the normal distribution. By contrast, the bivariate t-distribution provides good estimates of the historical VaR for the data sets D1 and D2 over all quantiles. However, both data sets are approximately elliptically-contoured distributed, since the estimated parameters of the copula and the marginals are almost equal. For example, for the data set D1 the estimated degree of freedom of the t-copula is 3.05, whereas the estimated degrees of freedom of the t-marginals are 2.99 and 3.03, respectively. We have already discussed that the distribution of the data set D3 is not elliptically contoured. Indeed, its VaR estimation improves with a split of the copula and the marginals: the corresponding estimated degree of freedom of the t-copula is 1.11, whereas the estimated degrees of freedom of the t-marginals are 4.63 and 5.15. Finally, note that the empirical standard deviations differ significantly between the VaR estimates based on the multivariate t-distribution and on the t-copula, respectively.

Bibliography

Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1987). Regular Variation, Cambridge University Press, Cambridge.

Bingham, N. H. and Kiesel, R. (2002). Semi-parametric modelling in Finance: Theoretical foundation, Quantitative Finance 2: 241–250.

Bingham, N. H., Kiesel, R. and Schmidt, R. (2002). Semi-parametric modelling in Finance: Econometric applications, Quantitative Finance 3 (6): 426–441.

Cambanis, S., Huang, S. and Simons, G. (1981). On the theory of elliptically contoured distributions, Journal of Multivariate Analysis 11: 368–385.

Draisma, G., Drees, H., Ferreira, A. and de Haan, L. (2004). Bivariate tail estimation: dependence in asymptotic independence, Bernoulli 10 (2): 251–280.

Drees, H. and Kaufmann, E. (1998). Selecting the optimal sample fraction in univariate extreme value estimation, Stochastic Processes and their Applications 75: 149–172.

Eberlein, E. and Keller, U. (1995). Hyperbolic distributions in finance, Bernoulli 1: 281–299.

Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events, Springer Verlag, Berlin.

Embrechts, P., Lindskog, F. and McNeil, A. (2001). Modelling Dependence with Copulas and Applications to Risk Management, in S. Rachev (Ed.) Handbook of Heavy Tailed Distributions in Finance, Elsevier: 329–384.

Embrechts, P., McNeil, A. and Straumann, D. (1999). Correlation and Dependency in Risk Management: Properties and Pitfalls, in M. A. H. Dempster (Ed.) Risk Management: Value at Risk and Beyond, Cambridge University Press, Cambridge: 176–223.

Fang, K., Kotz, S. and Ng, K. (1990). Symmetric Multivariate and Related Distributions, Chapman and Hall, London.

Frahm, G., Junker, M. and Schmidt, R. (2002). Estimating the Tail Dependence Coefficient, CAESAR Center Bonn, Technical Report 38, http://stats.lse.ac.uk/schmidt.

Härdle, W., Kleinow, T. and Stahl, G. (2002). Applied Quantitative Finance – Theory and Computational Tools, Springer Verlag, Berlin.

Hahn, M. G., Mason, D. M. and Weiner, D. C. (1991). Sums, Trimmed Sums and Extremes, Birkhäuser, Boston.

Hauksson, H., Dacorogna, M., Domenig, T., Mueller, U. and Samorodnitsky, G. (2001). Multivariate Extremes, Aggregation and Risk Estimation, Quantitative Finance 1: 79–95.

Huang, X. (1992). Statistics of Bivariate Extreme Values, Thesis Publishers and Tinbergen Institute.

Joe, H. (1997). Multivariate Models and Dependence Concepts, Chapman and Hall, London.

Joe, H. and Xu, J. J. (1996). The Estimation Method of Inference Functions for Margins for Multivariate Models, University of British Columbia, Dept. of Statistics, Technical Report 166.

Junker, M. and May, A. (2002). Measurement of aggregate risk with copulas, Research Center CAESAR Bonn, Dept. of Quantitative Finance, Technical Report 2.

Kiesel, R. and Kleinow, T. (2002). Sensitivity analysis of credit portfolio models, in W. Härdle, T. Kleinow and G. Stahl (Eds.) Applied Quantitative Finance, Springer Verlag, New York.

Ledford, A. and Tawn, J. (1996). Statistics for Near Independence in Multivariate Extreme Values, Biometrika 83: 169–187.

Nelsen, R. (1999). An Introduction to Copulas, Springer Verlag, New York.

Peng, L. (1998). Second Order Condition and Extreme Value Theory, Tinbergen Institute Research Series 178, Thesis Publishers and Tinbergen Institute.

Rousseeuw, P. J. and van Zomeren, B. C. (1990). Unmasking multivariate outliers and leverage points, Journal of the American Statistical Association 85: 633–639.

Schmidt, R. (2002a). Credit Risk Modelling and Estimation via Elliptical Copulae, in G. Bohl, G. Nakhaeizadeh, S. T. Rachev, T. Ridder and K. H. Vollmer (Eds.) Credit Risk: Measurement, Evaluation and Management, Physica Verlag, Heidelberg.

Schmidt, R. (2002b). Tail Dependence for Elliptically Contoured Distributions, Mathematical Methods of Operations Research 55 (2): 301–327.

Schmidt, R. (2003). Dependencies of Extreme Events in Finance, Dissertation, University of Ulm, http://stats.lse.ac.uk/schmidt.

Schmidt, R. and Stadtmüller, U. (2002). Nonparametric Estimation of Tail Dependence, The London School of Economics, Department of Statistics, Research Report 101, http://stats.lse.ac.uk/schmidt.

4 Pricing of Catastrophe Bonds

Krzysztof Burnecki, Grzegorz Kukla, and David Taylor

4.1 Introduction

Catastrophe (CAT) bonds are one of the more recent financial derivatives to be traded on the world markets. In the mid-1990s a market in catastrophe insurance risk emerged in order to facilitate the direct transfer of reinsurance risk associated with natural catastrophes from corporations, insurers and reinsurers to capital market investors. The primary instrument developed for this purpose was the CAT bond. CAT bonds are more specifically referred to as insurance-linked securities (ILS). The distinguishing feature of these bonds is that the ultimate repayment of principal depends on the outcome of an insured event. The basic CAT bond structure can be summarized as follows (Lane, 2004):

1. The sponsor establishes a special purpose vehicle (SPV) as an issuer of bonds and as a source of reinsurance protection.

2. The issuer sells bonds to investors. The proceeds from the sale are invested in a collateral account.

3. The sponsor pays a premium to the issuer; this and the investment of bond proceeds are a source of interest paid to investors.

4. If the specified catastrophic risk is triggered, the funds are withdrawn from the collateral account and paid to the sponsor; at maturity, the remaining principal – or if there is no event, 100% of principal – is paid to investors.
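The cash-flow rule in step 4 can be made concrete with a small payoff function. The proportional write-down between a hypothetical attachment and exhaustion point is our own simplification; actual contracts specify their own write-down schedules:

```python
def principal_repaid(principal, trigger_losses, attachment, exhaustion):
    """Principal returned to investors at maturity under a hypothetical
    proportional write-down between attachment and exhaustion levels."""
    if trigger_losses <= attachment:      # no triggering event: full principal
        return principal
    if trigger_losses >= exhaustion:      # losses exhaust the layer entirely
        return 0.0
    written_down = (trigger_losses - attachment) / (exhaustion - attachment)
    return principal * (1.0 - written_down)

# E.g. a bond with USD 100m principal, attaching at 50m of losses:
print(principal_repaid(100.0, 30.0, 50.0, 100.0))   # 100.0 (not triggered)
print(principal_repaid(100.0, 75.0, 50.0, 100.0))   # 50.0 (partial write-down)
```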


There are three types of ILS triggers: indemnity, index and parametric. An indemnity trigger involves the actual losses of the bond-issuing insurer. For example, the event may be the insurer's losses from an earthquake in a certain area of a given country over the period of the bond. An industry index trigger involves, in the US for example, an index created from Property Claim Services (PCS) loss estimates. A parametric trigger is based on, for example, the Richter scale readings of the magnitude of an earthquake at specified data stations. In this chapter we address the issue of pricing CAT bonds with indemnity and index triggers.

4.1.1 The Emergence of CAT Bonds

Until fairly recently, property reinsurance was a relatively well understood market with efficient pricing. However, naturally occurring catastrophes, such as earthquakes and hurricanes, are beginning to have a dominating impact on the industry. In part, this is due to the rapidly changing, heterogeneous distribution of high-value property in vulnerable areas. A consequence of this has been an increased need for a primary and secondary market in catastrophe-related insurance derivatives. The creation of CAT bonds, along with allied financial products such as catastrophe insurance options, was motivated in part by the need to cover the massive property insurance industry payouts of the early- to mid-1990s. They also represent a "new asset class" in that they provide a mechanism for hedging against natural disasters, a risk which is essentially uncorrelated with the capital market indices (Doherty, 1997). Subsequent to the development of the CAT bond, the class of disaster referenced has grown considerably. As yet, there is almost no secondary market for CAT bonds, which hampers the use of arbitrage-free pricing models for the derivative.

Property insurance claims of approximately USD 60 billion between 1990 and 1996 (Canter, Cole, and Sandor, 1996) caused great concern to the insurance industry and resulted in the insolvency of a number of firms. These bankruptcies were brought on in the wake of hurricanes Andrew (Florida and Louisiana affected, 1992), Opal (Florida and Alabama, 1995) and Fran (North Carolina, 1996), which caused combined damage totalling USD 19.7 billion (Canter, Cole, and Sandor, 1996). These, along with the Northridge earthquake (1994) and similar disasters (for an illustration of the US natural catastrophe data see Figure 4.1), led to an interest in alternative means of underwriting insurance. In 1995, when the CAT bond market was born, the primary and secondary (or reinsurance) industries had access to approximately USD 240 billion in capital (Canter, Cole, and Sandor, 1996; Cummins and Danzon, 1997). Given the capital level constraints necessary for the reinsuring of property losses and the potential for single-event losses in excess of USD 100 billion, this was clearly insufficient. The international capital markets provided a potential source of security for the (re-)insurance market. An estimated capitalisation of the international financial markets, at that time, of about USD 19 trillion underwent an average daily fluctuation of approximately 70 basis points or USD 133 billion (Sigma, 1996). The undercapitalisation of the reinsurance industry (and their consequential default risk) meant that there was a tendency for CAT reinsurance prices to be highly volatile. This was reflected in the traditional insurance market, with rates on line being significantly higher in the years following catastrophes and dropping off in the intervening years (Froot and O'Connell, 1997; Sigma, 1997). This heterogeneity in pricing has a very strong damping effect, forcing many reinsurers to leave the market, which in turn has adverse consequences for the primary insurers. A number of reasons for this volatility have been advanced (Winter, 1994; Cummins and Danzon, 1997). CAT bonds and allied catastrophe-related derivatives are an attempt to address these problems by providing effective hedging instruments which reflect long-term views and can be priced according to the statistical characteristics of the dominant underlying process(es). Their impact, since a period of standardisation between 1997 and 2003, has been substantial. As a consequence, the rise in prices associated with the uppermost segments of the CAT reinsurance programs has been dampened. The primary market has developed and both issuers and investors are now well-educated and technically adept. In the years 2000 to 2003, the average total issue exceeded USD 1 billion per annum (McGhee, 2004).
The catastrophe bond market witnessed yet another record year in 2003, with total issuance of USD 1.73 billion, an impressive 42 percent year-on-year increase from 2002's record of USD 1.22 billion. During the year, a total of eight transactions were completed, with three originating from first-time issuers. The year also featured the first European corporate-sponsored transaction (and only the third by any non-insurance company): Électricité de France, the largest electric utility in Europe, sponsored a transaction to address a portion of the risks facing its properties from French windstorms. Since 1997, when the market began in earnest, 54 catastrophe bond issues have been completed with total risk limits of almost USD 8 billion. It is interesting to note that very few of the issued bonds receive better than "non-investment grade" BB ratings and that almost no CAT bonds have been triggered, despite an increased reliance on parametric or index-based payout triggers.

Figure 4.1: Graph of the PCS catastrophe loss data (adjusted PCS catastrophe claims in USD billion), 1990–1999. STFcat01.xpl

4.1.2 Insurance Securitization

Capitalisation of insurance, and the consequential spreading of risk through share issues, is well established: the majority of primary and secondary insurers are public companies. Investors in these companies are thus de facto bearers of risk for the industry. This, however, relies on the idea of risk pooling through the law of large numbers, whereby the loss borne by each investor becomes highly predictable. In the case of catastrophic natural disasters this may not be possible, as the losses incurred by different insurers tend to be correlated. In this situation a different approach to hedging the risk is necessary. A number of such products which realize innovative methods of risk spreading already exist and are traded (Litzenberger, Beaglehole, and Reynolds, 1996; Cummins and Danzon, 1997; Aase, 1999; Sigma, 2003). They are roughly divided into reinsurance-share-related derivatives, including Post-loss Equity Issues and Catastrophe Equity Puts, and asset–liability hedges such as Catastrophe Futures, Options and CAT Bonds.


In 1992, the Chicago Board of Trade (CBOT) introduced CAT futures. In 1995, the CAT future was replaced by the PCS option. This option was based on a loss index provided by PCS. The underlying index represented the development of specified catastrophe damages, was published daily, and eliminated the problems of the earlier ISO index. The options traded better, especially the call option spreads, where insurers would appear on both sides of the transaction, i.e. as buyer and seller. However, they too ceased trading in 2000. Much work in the reinsurance industry concentrated on pricing these futures and options and on modelling the process driving their underlying indices (Canter, Cole, and Sandor, 1996; Embrechts and Meister, 1997; Aase, 1999). CAT bonds are allied but separate instruments which seek to ensure that capital requirements are met in the specific instance of a catastrophic event.

4.1.3 CAT Bond Pricing Methodology

In this chapter we investigate the pricing of CAT Bonds. The methodology developed here can be extended to most other catastrophe related instruments. However, we are concerned here only with CAT speciﬁc instruments, e.g. California Earthquake Bonds (Sigma, 1996; Sigma, 1997; Sigma, 2003; McGhee, 2004), and not reinsurance shares or their related derivatives. In the early market for CAT bonds, the pricing of the bonds was in the hands of the issuer and was aﬀected by the equilibrium between supply and demand only. Consequently there was a tendency for the market to resemble the traditional reinsurance market. However, as CAT bonds become more popular, it is reasonable to expect that their price will begin to reﬂect the fair or arbitrage-free price of the bond, although recent discussions of alternative pricing methodologies have contradicted this expectation (Lane, 2003). Our pricing approach assumes that this market already exists. Some of the traditional assumptions of derivative security pricing are not correct when applied to these instruments due to the properties of the underlying contingent stochastic processes. There is evidence that certain catastrophic natural events have (partial) power-law distributions associated with their loss statistics (Barton and Nishenko, 1994). This overturns the traditional lognormal assumption of derivative pricing models. There are also well-known statistical diﬃculties associated with the moments of power-law distributions, thus rendering it impossible to employ traditional pooling methods and consequently the central limit theorem. Given that heavy-tailed or large deviation results assume, in general, that at least the ﬁrst moment of the distribution


exists, there will be difficulties with applying extreme value theory to this problem (Embrechts, Resnick, and Samorodnitsky, 1999). It would seem that these characteristics may render traditional actuarial or derivative-pricing approaches ineffective. There are additional features of modelling the CAT bond price which are not found in models of ordinary corporate or government issues (although there is some similarity with the pricing of defaultable bonds). In particular, the trigger event underlying CAT bond pricing depends on both the frequency and the severity of natural disasters. In the model described here, we attempt to reduce to a minimum any assumptions about the underlying distribution functions, in the interests of generality of application. The numerical examples will have to make some distributional assumptions and will reference some real data. We will also assume that loss levels are instantaneously measurable and updatable; it is straightforward to adjust the underlying process to accommodate a development period. There is a natural similarity between the pricing of catastrophe bonds and the pricing of defaultable bonds. Defaultable bonds, by definition, must contain within their pricing model a mechanism that accounts for the potential (partial or complete) loss of their principal value. Defaultable bonds yield higher returns, in part, because of this potential defaultability. Similarly, CAT bonds are offered at high yields because of the unpredictable nature of the catastrophe process. With this characteristic in mind, a number of pricing models for defaultable bonds have been advanced (Jarrow and Turnbull, 1995; Zhou, 1997; Duffie and Singleton, 1999). The trigger event for the default process has similar statistical characteristics to that of the equivalent catastrophic event pertaining to CAT bonds.
In an allied application to mortgage insurance, the similarity between catastrophe and default in the log-normal context has been commented on (Kau and Keenan, 1996). With this in mind, we have modelled the catastrophe process as a compound doubly stochastic Poisson process. The underlying assumption is that there is a Poisson point process (of some intensity, in general varying over time) of potentially catastrophic events. However, these events may or may not result in economic losses. We assume the economic losses associated with each of the potentially catastrophic events to be independent and to have a common probability distribution. This is justifiable for the PCS loss indices used as the triggers for the CAT bonds. Within this model, the threshold time can be seen as a point of a Poisson point process with a stochastic intensity depending on the instantaneous index position. We make this model precise later in the chapter.


Baryshnikov, Mayo, and Taylor (1998) presented an arbitrage-free solution to the pricing of CAT bonds under conditions of continuous trading. They modelled the stochastic process underlying the CAT bond as a compound doubly stochastic Poisson process. Burnecki and Kukla (2003) applied their results in order to determine no-arbitrage prices of a zero-coupon and a coupon-bearing CAT bond. In Section 4.2 we present the doubly stochastic Poisson pricing model. In Section 4.3 we study 10-year catastrophe loss data provided by Property Claim Services. We find a distribution function which fits the observed claims in a satisfactory manner and estimate the intensity of the non-homogeneous Poisson process governing the flow of the natural events. In Section 4.4 we illustrate the values of different CAT bonds associated with this loss data with respect to the threshold level and maturity time. To this end we apply Monte Carlo simulations.

4.2 Compound Doubly Stochastic Poisson Pricing Model

The CAT bond we are interested in is described by specifying the region, type of events, type of insured properties, etc. More abstractly, it is described by the aggregate loss process $L_s$ and by the threshold loss D. Fix a probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), \nu)$ with an increasing filtration $\mathcal{F}_t \subset \mathcal{F}$, $t \in [0, T]$. We make the following assumptions:

• There exists a doubly stochastic Poisson process (Bremaud, 1981) $M_s$ describing the flow of (potentially catastrophic) natural events of a given type in the region specified in the bond contract. The intensity of this Poisson point process is assumed to be a predictable bounded process $m_s$. This process describes the estimates based on statistical analysis and scientific knowledge about the nature of the catastrophe causes. We denote the instants of these potentially catastrophic natural events by $0 \le t_1 \le \ldots \le t_i \le \ldots \le T$.

• The losses incurred by the events in the flow $\{t_i\}$ are assumed to be independent, identically distributed random variables $\{X_i\}$ with distribution function $F(x) = P(X_i < x)$.

• There is a progressive process of discounting rates r. Following traditional practice, we assume the process is continuous almost everywhere. This process describes the value at time s of USD 1 paid at time $t > s$ by
$$\exp\{-R(s,t)\} = \exp\left\{ -\int_s^t r(\xi)\, d\xi \right\}.$$

Therefore, one has
$$L_t = \sum_{t_i \le t} X_i = \sum_{i=1}^{M_t} X_i.$$

The definition of the process implies that L is left-continuous and predictable. We assume that the threshold event is the time when the accumulated losses exceed the threshold level D, that is $\tau = \inf\{t : L_t \ge D\}$. Now define a new process $N_t = I(L_t \ge D)$. Baryshnikov et al. (1998) show that this is also a doubly stochastic Poisson process with the intensity

$$\lambda_s = m_s \{1 - F(D - L_s)\}\, I(L_s < D). \qquad (4.1)$$

In Figure 4.2 we show a sample trajectory of the aggregate loss process $L_t$ ($0 \le t \le T = 10$ years) generated under the assumption of log-normal loss amounts with $\mu = 18.3806$ and $\sigma = 1.1052$ and a non-homogeneous Poisson process $M_t$ with the intensity function $m_s^1 = 35.32 + 2.32 \cdot 2\pi \sin\{2\pi(s - 0.20)\}$, together with a real-life catastrophe loss trajectory (which will be analysed in detail in Section 4.3), the mean function of the process $L_t$, and two sample 0.05- and 0.95-quantile lines based on 5000 trajectories of the aggregate loss process; see Chapter 14 and Burnecki, Härdle, and Weron (2004). It is evident that in the studied log-normal case the historical trajectory falls outside even the 0.05-quantile line. This may suggest that "more heavy-tailed" distributions, such as the Pareto or Burr distributions, would be better for modelling the "real" aggregate loss process. In Figure 4.2 the black horizontal line represents a threshold level of D = 60 billion USD.
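A trajectory of this kind can be generated as follows (a Python sketch, not the book's XploRe routine): the non-homogeneous Poisson arrivals are produced by thinning a homogeneous process whose rate bounds the intensity from above, and log-normal losses with the quoted parameters are accumulated.

```python
import numpy as np

rng = np.random.default_rng(42)

def aggregate_loss_trajectory(T=10.0, mu=18.3806, sigma=1.1052):
    """Event times of the NHPP with intensity
    m_s = 35.32 + 2.32*2*pi*sin(2*pi*(s - 0.20)), simulated by thinning,
    and the corresponding cumulative log-normal losses L_t."""
    intensity = lambda s: 35.32 + 2.32 * 2 * np.pi * np.sin(2 * np.pi * (s - 0.20))
    lam_max = 35.32 + 2.32 * 2 * np.pi      # upper bound used for thinning
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)          # candidate arrival
        if t > T:
            break
        if rng.uniform() < intensity(t) / lam_max:   # accept w.p. m_t / lam_max
            times.append(t)
    losses = rng.lognormal(mu, sigma, size=len(times))
    return np.array(times), np.cumsum(losses)

# Threshold time tau = first instant the accumulated losses reach D:
times, L = aggregate_loss_trajectory()
D = 60e9
hit = np.searchsorted(L, D)
tau = times[hit] if hit < len(times) else None       # None: never reached
```

Repeating this over many trajectories gives Monte Carlo estimates of the quantile lines in Figure 4.2 and of the trigger probability $P(\tau \le T)$.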

4.3 Calibration of the Pricing Model

We conducted empirical studies for the PCS data obtained from Property Claim Services. ISO’s Property Claim Services unit is the internationally recognized authority on insured property losses from catastrophes in the United States, Puerto Rico and the U.S. Virgin Islands. PCS investigates reported disasters


Figure 4.2: A sample trajectory of the aggregate loss process Lt (thin blue solid line), a real-life catastrophe loss trajectory (thick green solid line), the analytical mean of the process Lt (red dashed line) and two sample 0.05- and 0.95-quantile lines (brown dotted line). The black horizontal line represents the threshold level D = 60 billion USD. STFcat02.xpl

and determines the extent and type of damage, dates of occurrence, and geographic areas affected (Burnecki, Kukla, and Weron, 2000). The data, see Figure 4.1, concern the US market's loss amounts in USD, which occurred between 1990 and 1999 and were adjusted for inflation using the Consumer Price Index provided by the U.S. Department of Labor. Only natural perils like hurricane, tropical storm, wind, flooding, hail, tornado, snow, freezing, fire, ice, and earthquake were taken into consideration. We note that the peaks in Figure 4.1 mark the occurrence of Hurricane Andrew (August 24, 1992) and the Northridge Earthquake (January 17, 1994).

4 Pricing of Catastrophe Bonds

In order to calibrate the pricing model we have to fit both the distribution function of the incurred losses F and the process M_t governing the flow of natural events. The claim size distributions, especially those describing property losses, are usually heavy-tailed. In the actuarial literature, continuous distributions with domain R+ are often proposed for describing such claims, see Chapter 13. The choice of the distribution is very important because it influences the bond price. In Chapter 14 the claim amount distributions were fitted to the PCS data depicted in Figure 4.1. The log-normal, exponential, gamma, Weibull, mixture of two exponentials, Pareto, and Burr distributions were analysed. The parameters were estimated via minimisation of the Anderson-Darling statistic. The goodness-of-fit was checked with the help of the Kolmogorov-Smirnov, Kuiper, Cramér-von Mises, and Anderson-Darling non-parametric tests. The test statistics were compared with the critical values obtained through Monte Carlo simulations. The Burr distribution with parameters α = 0.4801, λ = 3.9495·10^16, and τ = 2.1524 passed all tests. The log-normal distribution with parameters µ = 18.3806 and σ = 1.1052 was the next best fit. A doubly stochastic Poisson process governing the occurrence times of the losses was fitted by Burnecki and Kukla (2003). The simplest case, with the intensity m_s equal to a nonnegative constant m, was considered. Studies of the quarterly number of losses and the inter-occurrence times of the catastrophes led to the conclusion that the flow of the events may be described by a Poisson process with an annual intensity of m = 34.2. The claim arrival process is also analysed in Chapter 14. The statistical tests applied to the annual waiting times pointed to a renewal process. Finally, the rate function m_s^1 = 35.32 + 2.32·2π sin{2π(s − 0.20)} was fitted and the claim arrival process was treated as a non-homogeneous Poisson process.
Such a choice of the intensity function allows modelling of the annual seasonality present in the natural catastrophe data. Baryshnikov, Mayo, and Taylor (1998) proposed an intensity function of the form m_s^2 = a + b sin²{2π(s + S)}. Using the least squares procedure (Ross, 2001), we fitted the cumulative intensity function (mean value function) E(M_s) = ∫_0^s m_z dz to the accumulated quarterly number of PCS losses. We concluded that a = 35.22, b = 0.224, and S = −0.16. This choice of the rate function allows the incorporation of both an annual cyclic component and a trend, which is sometimes observed in natural catastrophe data.


Figure 4.3: The aggregate quarterly number of PCS losses (blue solid line) together with the mean value functions E(Mt ) corresponding to the HP (red dotted line), NHP1 (black dashed line) and NHP2 (green dashed-dotted line) cases. STFcat03.xpl

It appears that both the mean squared error (MSE) and the mean absolute error (MAE) favour the rate function m_s^1. In this case MSE = 13.68 and MAE = 2.89, whereas m_s^2 yields MSE = 15.12 and MAE = 3.22. Finally, the homogeneous Poisson process with constant intensity gives MSE = 55.86 and MAE = 6.1. All three choices of the intensity function m_s are illustrated in Figure 4.3, where the accumulated quarterly number of PCS losses and the mean value functions on the interval [4, 6] years are depicted. This interval was chosen to best illustrate the differences.

4.4 Dynamics of the CAT Bond Price

In this section we present prices of different CAT bonds, focusing on the influence of the choice of the loss amount distribution and of the claim arrival process on the bond price. We analyse cases using the Burr distribution with parameters α = 0.4801, λ = 3.9495·10^16, and τ = 2.1524, and the log-normal distribution with parameters µ = 18.3806 and σ = 1.1052. We also analyse the homogeneous Poisson process with an annual intensity m = 34.2 (HP) and the non-homogeneous Poisson processes with the rate functions m_s^1 = 35.32 + 2.32·2π sin{2π(s − 0.20)} (NHP1) and m_s^2 = 35.22 + 0.224 sin²{2π(s − 0.16)} (NHP2).

Consider a zero-coupon CAT bond defined by the payment of an amount Z at maturity T, contingent on the threshold time τ > T. Define the process Z_s = E(Z|F_s). We require that Z_s is a predictable process. This can be interpreted as the independence of the payment at maturity from the occurrence and timing of the threshold. The amount Z can be the principal plus interest, usually defined as a fixed percentage over the London Inter-Bank Offered Rate (LIBOR). The no-arbitrage price of the zero-coupon CAT bond associated with a threshold D, catastrophic flow M_s, a distribution function of incurred losses F, and paying Z at maturity is given by Burnecki and Kukla (2003):

V_t^1 = E[ Z exp{−R(t, T)} (1 − N_T) | F_t ]
      = E[ Z exp{−R(t, T)} ( 1 − ∫_t^T m_s {1 − F(D − L_s)} I(L_s < D) ds ) | F_t ].   (4.2)

We evaluate this CAT bond price at t = 0, applying appropriate Monte Carlo simulations. For the purposes of illustration we assume that the annual continuously-compounded discount rate r = ln(1.025) is constant and corresponds to LIBOR, T ∈ [1/4, 2] years, and D ∈ [2.54, 30] billion USD (from the quarterly up to three times the annual average loss). Furthermore, in the case of the zero-coupon CAT bond we assume that Z = 1.06 USD; hence the bond is priced at 3.5% over LIBOR when T = 1 year. Figure 4.4 illustrates the zero-coupon CAT bond values (4.2) with respect


Figure 4.4: The zero-coupon CAT bond price with respect to the threshold level (left axis) and time to expiry (right axis) in the Burr and NHP1 case. STFcat04.xpl

to the threshold level and time to expiry in the Burr and NHP1 case. We can see that as the time to expiry increases, the price of the CAT bond decreases. Increasing the threshold level leads to higher bond prices. When T is a quarter and D = 30 billion USD, the CAT bond price approaches the value 1.06 exp{−ln(1.025)/4} ≈ 1.05 USD. This corresponds to the situation when the threshold time exceeds the maturity (τ > T) with probability one. Consider now a CAT bond which has only coupon payments C_t, which terminate at the threshold time τ. The no-arbitrage price of the CAT bond associated with a threshold D, catastrophic flow M_s, a distribution function of incurred losses F, and with coupon payments C_s which terminate at time τ is


given by Burnecki and Kukla (2003):

V_t^2 = E[ ∫_t^T exp{−R(t, s)} C_s (1 − N_s) ds | F_t ]
      = E[ ∫_t^T exp{−R(t, s)} C_s ( 1 − ∫_t^s m_ξ {1 − F(D − L_ξ)} I(L_ξ < D) dξ ) ds | F_t ].   (4.3)

We evaluate this CAT bond price at t = 0 and assume that C_t ≡ 0.06. The value of V_0^2 as a function of time to maturity (expiry) and threshold level in the Burr and NHP1 case is illustrated by Figure 4.5. We clearly see that the situation is different from that of the zero-coupon case. The price increases with both time to expiry and threshold level. When D = 30 billion USD and T = 2 years, the CAT bond price approaches the value 0.06 ∫_0^2 exp{−ln(1.025) t} dt ≈ 0.12 USD. This corresponds to the situation when the threshold time exceeds the maturity (τ > T) with probability one. Finally, we consider the case of the coupon-bearing CAT bond. Fashioned as floating rate notes, such bonds pay a fixed spread over LIBOR. Loosely speaking, the fixed spread may be analogous to the premium paid for the underlying insured event, and the floating rate, LIBOR, is the payment for having invested cash in the bond to provide payment against the insured event, should a payment to the insured be necessary. We combine (4.2) with Z equal to the par value (PV) and (4.3) to obtain the price of the coupon-bearing CAT bond. The no-arbitrage price of the CAT bond associated with a threshold D, catastrophic flow M_s, a distribution function of incurred losses F, paying PV at maturity, and with coupon payments C_s which cease at the threshold time τ is


Figure 4.5: The CAT bond price, for the bond paying only coupons, with respect to the threshold level (left axis) and time to expiry (right axis) in the Burr and NHP1 case. STFcat05.xpl

given by Burnecki and Kukla (2003):

V_t^3 = E[ PV exp{−R(t, T)} (1 − N_T) + ∫_t^T exp{−R(t, s)} C_s (1 − N_s) ds | F_t ]
      = E[ PV exp{−R(t, T)} + ∫_t^T exp{−R(t, s)} { C_s ( 1 − ∫_t^s m_ξ {1 − F(D − L_ξ)} I(L_ξ < D) dξ )
        − PV exp{−R(s, T)} m_s {1 − F(D − L_s)} I(L_s < D) } ds | F_t ].   (4.4)


Figure 4.6: The coupon-bearing CAT bond price with respect to the threshold level (left axis) and time to expiry (right axis) in the Burr and NHP1 case. STFcat06.xpl

We evaluate this CAT bond price at t = 0 and assume that PV = 1 USD and, again, C_t ≡ 0.06. Figure 4.6 illustrates this CAT bond price in the Burr and NHP1 case. The influence of the threshold level D on the bond value is clear, but the effect of increasing the time to expiry is not immediately obvious. As T increases, the possibility of receiving more coupons increases, but so does the possibility of losing the principal of the bond. In this example (see Figure 4.6) the price decreases with respect to the time to expiry, but this is not always true. We also notice that the bond prices in Figure 4.6 are lower than the corresponding ones in Figure 4.4; recall, however, that in the former case the payment at maturity was Z = 1.06 USD, whereas here PV = 1 USD. The choice of the fitted loss distribution affects the price of the bond. Figure 4.7 illustrates the difference between the zero-coupon CAT bond prices calculated under the two assumptions of Burr and log-normal loss sizes in the NHP1 case.
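The Monte Carlo evaluation of (4.2) and (4.3) at t = 0 can be sketched as follows. For brevity the sketch uses the simpler HP/log-normal pair rather than the Burr/NHP1 case priced in the figures; swapping the two sampling lines inside `simulate_tau` would change the model.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_tau(T, D, m, mu, sigma):
    """Threshold time of one HP/log-normal trajectory (np.inf if D not hit)."""
    n = rng.poisson(m * T)                          # number of losses on [0, T]
    times = np.sort(rng.uniform(0.0, T, size=n))    # HP arrivals given their number
    L = np.cumsum(rng.lognormal(mu, sigma, size=n)) # aggregate loss at each arrival
    hit = np.searchsorted(L, D)
    return times[hit] if hit < n else np.inf

def cat_bond_prices(Z=1.06, C=0.06, T=1.0, D=30e9, r=np.log(1.025),
                    m=34.2, mu=18.3806, sigma=1.1052, n_sim=5000):
    """MC estimates of the zero-coupon price (4.2) and coupon-only price (4.3)."""
    v1 = v2 = 0.0
    for _ in range(n_sim):
        tau = simulate_tau(T, D, m, mu, sigma)
        v1 += Z * np.exp(-r * T) * (tau > T)              # principal paid iff tau > T
        v2 += C * (1.0 - np.exp(-r * min(tau, T))) / r    # coupons until min(tau, T)
    return v1 / n_sim, v2 / n_sim

v1, v2 = cat_bond_prices()
```

When D is so high that the threshold is almost never reached, v1 approaches Z·exp(−rT) and v2 approaches C(1 − e^{−rT})/r, matching the limiting values 1.05 and 0.12 USD quoted in the text for the corresponding maturities.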


Figure 4.7: The diﬀerence between zero-coupon CAT bond prices in the Burr and log-normal cases with respect to the threshold level (left axis) and time to expiry (right axis) under the NHP1 assumption. STFcat07.xpl

It is clear that taking into account heavier tails (the Burr distribution), which can be more appropriate when considering catastrophic losses, leads to higher prices (the maximum diﬀerence in this example reaches 50% of the principal). Figures 4.8 and 4.9 show how the choice of the ﬁtted Poisson point process inﬂuences the CAT bond value. Figure 4.8 illustrates the diﬀerence between the zero-coupon CAT bond prices calculated in the NHP1 and HP cases under the assumption of the Burr loss distribution. We see that the diﬀerences vary from −14% to 3% of the principal. Finally, Figure 4.9 illustrates the diﬀerence between the zero-coupon CAT bond prices calculated in the NHP1 and NHP2 cases under the assumption of the Burr loss distribution. The diﬀerence is always below 12%.


Figure 4.8: The diﬀerence between zero-coupon CAT bond prices in the NHP1 and HP cases with respect to the threshold level (left axis) and time to expiry (right axis) under the Burr assumption. STFcat08.xpl

In our examples, equations (4.2) and (4.4), we have assumed that in the case of a trigger event the bond principal is completely lost. If we wish to incorporate a partial loss of principal into the contract, it suffices to multiply PV by the appropriate constant.
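The partial-loss modification can be sketched directly on a Monte Carlo estimator of (4.4): only the principal term is scaled by a recovery fraction. As before, this is a simplified HP/log-normal stand-in for the Burr/NHP1 setting of the figures.

```python
import numpy as np

rng = np.random.default_rng(13)

def coupon_bearing_price(PV=1.0, C=0.06, T=1.0, D=30e9, r=np.log(1.025),
                         m=34.2, mu=18.3806, sigma=1.1052,
                         recovery=0.0, n_sim=5000):
    """MC estimate of (4.4) at t = 0: discounted principal if tau > T
    (or the recovered fraction of PV otherwise) plus coupons up to min(tau, T)."""
    acc = 0.0
    for _ in range(n_sim):
        n = rng.poisson(m * T)
        times = np.sort(rng.uniform(0.0, T, size=n))
        L = np.cumsum(rng.lognormal(mu, sigma, size=n))
        hit = np.searchsorted(L, D)
        tau = times[hit] if hit < n else np.inf
        principal = PV if tau > T else recovery * PV   # partial loss: scale PV
        acc += principal * np.exp(-r * T) + C * (1.0 - np.exp(-r * min(tau, T))) / r
    return acc / n_sim

total_loss = coupon_bearing_price(recovery=0.0)   # principal fully lost at trigger
half_loss = coupon_bearing_price(recovery=0.5)    # 50% of the principal recovered
```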


Figure 4.9: The diﬀerence between zero-coupon CAT bond prices in the NHP1 and NHP2 cases with respect to the threshold level (left axis) and time to expiry (right axis) under the Burr assumption. STFcat09.xpl

Bibliography

Aase, K. (1999). An Equilibrium Model of Catastrophe Insurance Futures and Spreads, The Geneva Papers on Risk and Insurance Theory 24: 69–96.
Barton, C. and Nishenko, S. (1994). Natural Disasters – Forecasting Economic and Life Losses, SGS special report.
Baryshnikov, Yu., Mayo, A. and Taylor, D. R. (1998). Pricing of CAT Bonds, preprint.
Bremaud, P. (1981). Point Processes and Queues: Martingale Dynamics, Springer, New York.
Burnecki, K., Kukla, G. and Weron, R. (2000). Property Insurance Loss Distributions, Phys. A 287: 269–278.
Burnecki, K. and Kukla, G. (2003). Pricing of Zero-Coupon and Coupon CAT Bonds, Appl. Math. (Warsaw) 30(3): 315–324.
Burnecki, K., Härdle, W. and Weron, R. (2004). Simulation of Risk Processes, in J. L. Teugels and B. Sundt (eds.), Encyclopedia of Actuarial Science, Wiley, Chichester.
Canter, M. S., Cole, J. B. and Sandor, R. L. (1996). Insurance Derivatives: A New Asset Class for the Capital Markets and a New Hedging Tool for the Insurance Industry, Journal of Derivatives 4(2): 89–104.
Cummins, J. D. and Danzon, P. M. (1997). Price Shocks and Capital Flows in Liability Insurance, Journal of Financial Intermediation 6: 3–38.
Cummins, J. D., Lo, A. and Doherty, N. A. (1999). Can Insurers Pay for the "Big One"? Measuring the Capacity of an Insurance Market to Respond to Catastrophic Losses, preprint, Wharton School, University of Pennsylvania.
D'Agostino, R. B. and Stephens, M. A. (1986). Goodness-of-Fit Techniques, Marcel Dekker, New York.
Doherty, N. A. (1997). Innovations in Managing Catastrophe Risk, Journal of Risk & Insurance 64(4): 713–718.
Duffie, D. and Singleton, K. J. (1999). Modelling Term Structures of Defaultable Bonds, The Review of Financial Studies 12(4): 687–720.

Embrechts, P., Resnick, S. I. and Samorodnitsky, G. (1999). Extreme Value Theory as a Risk Management Tool, North American Actuarial Journal 3(2): 30–41.
Embrechts, P. and Meister, S. (1997). Pricing Insurance Derivatives: The Case of Cat-Futures, in Securitization of Insurance Risk, 1995 Bowles Symposium, SOA Monograph M-F197-1: 15–26.
Froot, K. and O'Connell, P. (1997). On the Pricing of Intermediated Risks: Theory and Application to Catastrophe Reinsurance, NBER Working Paper No. w6011.
Jarrow, R. A. and Turnbull, S. (1995). Pricing Options on Financial Securities Subject to Default Risk, Journal of Finance 50: 53–86.
Kau, J. B. and Keenan, D. C. (1996). An Option-Theoretic Model of Catastrophes Applied to Mortgage Insurance, Journal of Risk and Insurance 63(4): 639–656.
Lane, M. N. (2001). Rationale and Results with the LFC CAT Bond Pricing Model, Lane Financial L.L.C.
Lane, M. N. (2004). The Viability and Likely Pricing of "CAT Bonds" for Developing Countries, Lane Financial L.L.C.
Litzenberger, R. H., Beaglehole, D. R. and Reynolds, C. E. (1996). Assessing Catastrophe Reinsurance-Linked Securities as a New Asset Class, Journal of Portfolio Management (December): 76–86.
McGhee, C. (2004). Market Update: The Catastrophe Bond Market at Year-End 2003, Guy Carpenter & Company, Inc.
Ross, S. (2001). Simulation, 3rd ed., Academic Press, Boston.
Sigma (1996). Insurance Derivatives and Securitization: New Hedging Perspectives for the US Catastrophe Insurance Market?, Report Number 5, Swiss Re.
Sigma (1997). Too Little Reinsurance of Natural Disasters in Many Markets, Report Number 7, Swiss Re.
Sigma (2003). The Picture of ART, Report Number 1, Swiss Re.
Winter, R. A. (1994). The Dynamics of Competitive Insurance Markets, Journal of Financial Intermediation 3: 379–415.

Zhou, C. (1994). A Jump Diffusion Approach to Modelling Credit Risk and Valuing Defaultable Securities, preprint, Federal Reserve Board.

5 Common Functional Implied Volatility Analysis

Michal Benko and Wolfgang Härdle

5.1 Introduction

Trading, hedging, and risk analysis of complex option portfolios depend on accurate pricing models. The modelling of implied volatilities (IV) plays an important role, since volatility is the crucial parameter in the Black-Scholes (BS) pricing formula. It is well known from empirical studies that the volatilities implied by observed market prices exhibit patterns known as volatility smiles or smirks that contradict the assumption of constant volatility in the BS pricing model. On the other hand, the IV is a function of two parameters, the strike price and the time to maturity, and it is desirable in practice to reduce the dimension of this object and characterize the IV surface through a small number of factors. Clearly, a dimension-reduced pricing model that reflects the dynamics of the IV surface needs to contain factors and factor loadings that characterize the IV surface itself and its movements across time. A popular dimension reduction technique is principal components analysis (PCA), employed for example by Fengler, Härdle, and Schmidt (2002) in the IV surface analysis. The discretization of the strike dimension and the application of PCA yield suitable factors (weight vectors) in the multivariate framework. Noting that the IVs of fixed maturity can also be viewed as random functions, we propose to use the functional analogue of PCA. We apply the truncated functional basis expansion described in Ramsay and Silverman (1997) to the IVs of European options on the German stock index (DAX). The standard functional PCA, however, yields weight functions that are too rough, hence a smoothed version of functional PCA is proposed here.


Like Fengler, Härdle, and Villa (2003), we discover similarities of the resulting weight functions across maturity groups. Thus we propose an estimation procedure based on the Flury-Gautschi algorithm, Flury (1988), for the simultaneous estimation of the weight functions for two different maturities. This procedure yields common weight functions with the level, slope, and curvature interpretation known from the financial literature. The resulting common factors of the IV surface are the basic elements to be used in applications, such as simulation-based pricing, and deliver a substantial dimension reduction. The chapter is organized as follows. In Section 5.2 the basic financial framework is presented, while in Section 5.3 we introduce the notation of functional data analysis. In the following three sections we analyze the IV functions using functional principal components, smoothed functional principal components, and common estimation of principal components, respectively.

5.2 Implied Volatility Surface

Implied volatilities are derived from the BS pricing formula for European options. Recall that European call and put options are derivatives written on an underlying asset S driven by the price process S_t, which yield the pay-offs max(S_T − K, 0) and max(K − S_T, 0), respectively, at a given expiry time T and for a prespecified strike price K. The difference τ = T − t between the day of trade and the day of expiration (maturity) is called time to maturity. The pricing formula for call options, Black and Scholes (1973), is:

C_t(S_t, K, τ, r, σ) = S_t Φ(d_1) − K e^{−rτ} Φ(d_2),   (5.1)

d_1 = {ln(S_t/K) + (r + σ²/2)τ} / (σ√τ),   d_2 = d_1 − σ√τ,

where Φ(·) is the cumulative distribution function of the standard normal distribution, r is the riskless interest rate, and σ is the (unknown and constant) volatility parameter. The put option price P_t can be obtained from the put-call parity P_t = C_t − S_t + e^{−rτ} K.

For a European option the implied volatility σ̂ is defined as the volatility σ which yields the BS price C_t equal to the price C̃_t observed on the market. For a single asset, we obtain at each time point t a two-dimensional function – the IV surface σ̂_t(K, τ). In order to standardize the volatility functions in time, one


Figure 5.1: Implied volatility surface of ODAX on May 24, 2001. STFfda01.xpl

may rescale the strike dimension by dividing K by the future price Ft (τ ) of the underlying asset with the same maturity. This yields the so-called moneyness κ = K/Ft (τ ). Note that some authors deﬁne moneyness simply as κ = K/St . In contrast to the BS assumptions, empirical studies show that IV surfaces are signiﬁcantly curved, especially across the strikes. This phenomenon is called a volatility smirk or smile. Smiles stand for U-shaped volatility functions and smirks for decreasing volatility functions. We focus on the European options on the German stock index (ODAX). Figure 5.1 displays the ODAX implied volatilities computed from the BS formula (red points) and the IV surface on May 24, 2001 estimated using a local polynomial


estimator for τ ∈ [0, 0.6] and κ ∈ [0.8, 1.2]. We can clearly observe the "strings" of the original data on the maturity grid τ ∈ {0.06111, 0.23611, 0.33333, 0.58611}, which corresponds to 22, 85, 120, and 211 days to maturity. This maturity grid is structured by market conventions and changes over time. The fact that the number of transactions with short maturity is much higher than that with longer maturity is also typical for the IVs observed on the market. The IV surface is a high-dimensional object – for every time point t we have to analyze a two-dimensional function. Our goal is to reduce the dimension of this problem and to characterize the IV surface through a small number of factors. These factors can be used in practice for risk management, e.g. with vega-strategies. The analyzed data, taken from MD*Base, contain EUREX intra-day transaction data for DAX options and DAX futures (FDAX) from January 2 to June 29, 2001. The IVs are calculated by the Newton-Raphson iterative method. The correction of Hafner and Wallmeier (2001) is applied to avoid the influence of the tax scheme in the DAX. For robustness, we exclude contracts with time to maturity of less than 7 days and maturity strings with less than 100 observations. The approximation of the "riskless" interest rate with a given maturity is obtained on a daily basis by linear interpolation of the 1, 3, 6, and 12 month EURIBOR interest rates (obtained from Datastream). The resulting data set is analyzed using the functional data analysis framework. One advantage of this approach, as we will see later in this chapter, is the possibility of introducing smoothness in the functional sense and using it for regularization. The notation of functional data analysis is rather complex, therefore the theoretical motivation and the basic notation will be introduced in the next section.
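The Newton-Raphson inversion mentioned above can be sketched as follows; the derivative used in the iteration is the BS vega. The option parameters in the round-trip check are illustrative, not taken from the data set.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, tau, r, sigma):
    """Black-Scholes call price (5.1)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def implied_vol(C_mkt, S, K, tau, r, sigma0=0.2, tol=1e-10):
    """Newton-Raphson inversion: sigma <- sigma - (BS(sigma) - C_mkt) / vega."""
    sigma = sigma0
    for _ in range(100):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
        vega = S * np.sqrt(tau) * norm.pdf(d1)          # dC/dsigma
        diff = bs_call(S, K, tau, r, sigma) - C_mkt
        if abs(diff) < tol:
            break
        sigma -= diff / vega
    return sigma

# Round-trip check: recover the volatility that generated the price.
price = bs_call(S=6000.0, K=6200.0, tau=0.25, r=0.045, sigma=0.25)
iv = implied_vol(price, 6000.0, 6200.0, 0.25, 0.045)
```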

5.3 Functional Data Analysis

In the functional data framework, the objects are usually modelled as realizations of a stochastic process X(t), t ∈ J, where J is a bounded interval in R. Thus, the set of functions x_i(t), i = 1, 2, . . . , n, t ∈ J, represents the data set. We assume the existence of the mean, variance, and covariance functions of the process X(t) and denote these by EX(t), Var(t), and Cov(s, t), respectively.


For the functional sample we can define the sample counterparts of EX(t), Var(t), and Cov(s, t) in a straightforward way:

X̄(t) = (1/n) Σ_{i=1}^n x_i(t),

V̂ar(t) = {1/(n − 1)} Σ_{i=1}^n {x_i(t) − X̄(t)}²,

Ĉov(s, t) = {1/(n − 1)} Σ_{i=1}^n {x_i(s) − X̄(s)}{x_i(t) − X̄(t)}.

In practice, we observe the function values X = {x_i(t_i1), x_i(t_i2), . . . , x_i(t_ip_i); i = 1, . . . , n} only on a discrete grid {t_i1, t_i2, . . . , t_ip_i} ∈ J, where p_i is the number of grid points for the ith observation. One may estimate the functions x_1, . . . , x_n via standard nonparametric regression methods, Härdle (1990). Another popular way is to use a truncated functional basis expansion. More precisely, let us denote a functional basis on the interval J by {Θ_1, Θ_2, . . .} and assume that the functions x_i are approximated by the first L basis functions Θ_l, l = 1, 2, . . . , L:

x_i(t) = Σ_{l=1}^L c_il Θ_l(t) = c_i^T Θ(t),   (5.2)

where Θ = (Θ_1, . . . , Θ_L)^T and c_i = (c_i1, . . . , c_iL)^T. The number of basis functions L determines the tradeoff between data fidelity and smoothness. The analysis of the functional objects will be implemented through the coefficient matrix C = {c_il, i = 1, . . . , n, l = 1, . . . , L}. The mean, variance, and covariance functions are calculated by:

X̄(t) = c̄^T Θ(t),
V̂ar(t) = Θ(t)^T Cov(C) Θ(t),
Ĉov(s, t) = Θ(s)^T Cov(C) Θ(t),

where c̄_l = (1/n) Σ_{i=1}^n c_il, l = 1, . . . , L, and Cov(C) = {1/(n − 1)} Σ_{i=1}^n (c_i − c̄)(c_i − c̄)^T.
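A small numerical sketch of these coefficient-based formulas, with random stand-in coefficients and (unnormalized) Fourier-type basis functions rather than the ODAX fit:

```python
import numpy as np

rng = np.random.default_rng(3)

def theta(t, L=9):
    """First L (unnormalized) Fourier basis functions on [0, 2*pi], omega = 1."""
    rows, r = [np.ones_like(t)], 1
    while len(rows) < L:
        rows.append(np.sin(r * t)); rows.append(np.cos(r * t)); r += 1
    return np.array(rows[:L])                      # shape (L, len(t))

n, L = 77, 9
C = rng.normal(size=(n, L))                        # stand-in coefficient matrix

c_bar = C.mean(axis=0)                             # coefficients of the mean curve
Cov_C = (C - c_bar).T @ (C - c_bar) / (n - 1)      # Cov(C), an L x L matrix

t = np.linspace(0, 2 * np.pi, 101)
Th = theta(t)
mean_curve = c_bar @ Th                            # X_bar(t) = c_bar' Theta(t)
var_curve = np.einsum('it,ij,jt->t', Th, Cov_C, Th)  # Var(t) = Theta' Cov(C) Theta
```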

The scalar product in the functional space is defined by:

⟨x_i, x_j⟩ = ∫_J x_i(t) x_j(t) dt = c_i^T W c_j,

where

W = ∫_J Θ(t) Θ(t)^T dt.   (5.3)

In practice, the coefficient matrix C needs to be estimated from the data set X. An example of a functional basis is the Fourier basis, defined on J by:

Θ_l(t) = 1 for l = 0,   Θ_l(t) = sin(rωt) for l = 2r − 1,   Θ_l(t) = cos(rωt) for l = 2r,

where the frequency ω determines the period and the length of the interval |J| = 2π/ω. The Fourier basis defined above can easily be transformed to an orthonormal basis, hence the scalar-product matrix in (5.3) is simply the identity matrix.

Our aim is to estimate the IV functions for fixed τ = 1 month (1M) and 2 months (2M) from the daily-specific grid of maturities. We estimate the Fourier coefficients on the moneyness range κ ∈ [0.9, 1.1] for the maturities observed on a particular day i. For τ* = 1M, 2M we calculate σ̂_i(κ, τ*) by linear interpolation of the closest observable IV strings with τ ≤ τ*, denoted σ̂_i(κ, τ*_{i−}), and τ ≥ τ*, denoted σ̂_i(κ, τ*_{i+}):

σ̂_i(κ, τ*) = σ̂_i(κ, τ*_{i−}) {1 − (τ* − τ*_{i−})/(τ*_{i+} − τ*_{i−})} + σ̂_i(κ, τ*_{i+}) (τ* − τ*_{i−})/(τ*_{i+} − τ*_{i−}),

for those days i on which both τ*_{i−} and τ*_{i+} exist. In Figure 5.2 we show the situation for τ* = 1M on May 30, 2001. The blue points and the blue finely dashed curve correspond to the transactions with τ*_{−} = 16 days, and the green points and the green dashed curve to the transactions with τ*_{+} = 51 days. The solid black line is the linear interpolation at τ* = 30 days.
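The two estimation steps above — least-squares Fourier coefficients per string, then linear interpolation in maturity — can be sketched as follows. The two smile-shaped strings are synthetic stand-ins for the EUREX transaction data, which are not reproduced here; the maturities 16, 51, and 30 days mirror the May 30, 2001 example.

```python
import numpy as np

def fourier_design(t, L=9):
    """Design matrix of the first L Fourier basis functions, period |J|."""
    omega = 2 * np.pi / (t.max() - t.min())
    cols, r = [np.ones_like(t)], 1
    while len(cols) < L:
        cols.append(np.sin(r * omega * t)); cols.append(np.cos(r * omega * t)); r += 1
    return np.column_stack(cols[:L])

# Two synthetic IV strings on the moneyness range [0.9, 1.1].
rng = np.random.default_rng(1)
kappa = np.linspace(0.9, 1.1, 60)
smile = lambda a, b: a + b * (kappa - 1.0) ** 2 - 0.1 * (kappa - 1.0)
iv_minus = smile(0.20, 0.5) + rng.normal(0, 0.003, kappa.size)  # tau = 16 days
iv_plus = smile(0.21, 0.4) + rng.normal(0, 0.003, kappa.size)   # tau = 51 days

B = fourier_design(kappa, L=9)
c_minus, *_ = np.linalg.lstsq(B, iv_minus, rcond=None)  # rows of the matrix C
c_plus, *_ = np.linalg.lstsq(B, iv_plus, rcond=None)

# Linear interpolation to tau* = 30 days between the 16- and 51-day strings.
w = (30.0 - 16.0) / (51.0 - 16.0)
iv_1m = (1 - w) * B @ c_minus + w * B @ c_plus
```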

The choice of L = 9 delivers a good tradeoff between flexibility and smoothness of the strings. At this point we exclude from our analysis those days where this procedure cannot be performed due to the complete absence of the needed maturities, as well as strings with poorly estimated coefficients, due to the small number of contracts in a particular string or the presence of strong outliers. Using this procedure we obtain 77 "functional" observations x^{1M}_{i1}(κ) := σ̂_{i1}(κ, 1M), i1 = 1, . . . , 77, for the 1M maturity and 66 observations x^{2M}_{i2}(κ) := σ̂_{i2}(κ, 2M), i2 = 1, . . . , 66, for the 2M maturity, as displayed in Figure 5.3.


Figure 5.2: Linear interpolation of IV strings on May 30, 2001 with L = 9. STFfda02.xpl

5.4 Functional Principal Components

Principal components analysis yields dimension reduction in the multivariate framework. The idea is to find normalized weight vectors γ_m ∈ R^p for which the linear transformations of a p-dimensional random vector x, with E[x] = 0:

f_m = γ_m^T x = ⟨γ_m, x⟩, m = 1, . . . , p,   (5.4)

have maximal variance subject to:

γ_l^T γ_m = ⟨γ_l, γ_m⟩ = I(l = m) for l ≤ m,

where I denotes the indicator function. The solution is the Jordan spectral decomposition of the covariance matrix, Härdle and Simar (2003).


Figure 5.3: Functional observations estimated using the Fourier basis with L = 9: σ̂_{i1}(κ, 1M), i1 = 1, . . . , 77, in the left panel, and σ̂_{i2}(κ, 2M), i2 = 1, . . . , 66, in the right panel. STFfda03.xpl

In functional principal components analysis (FPCA) the dimension reduction can be achieved via the same route, i.e. by finding orthonormal weight functions γ_1, γ_2, . . ., such that the variance of the linear transformation is maximal. In order to keep the notation simple we assume EX(t) = 0. The weight functions satisfy:

||γ_m||² = ∫ γ_m(t)² dt = 1,   ⟨γ_l, γ_m⟩ = ∫ γ_l(t) γ_m(t) dt = 0, l ≠ m.

The linear transformation is:

f_m = ⟨γ_m, X⟩ = ∫ γ_m(t) X(t) dt,

and the desired weight functions solve:

arg max_{⟨γ_l, γ_m⟩ = I(l=m), l ≤ m} Var⟨γ_m, X⟩,   (5.5)

or equivalently:

arg max_{⟨γ_l, γ_m⟩ = I(l=m), l ≤ m} ∫∫ γ_m(s) Cov(s, t) γ_m(t) ds dt.

The solution is obtained by solving the Fredholm functional eigenequation

∫ Cov(s, t) γ(t) dt = λ γ(s).   (5.6)

The eigenfunctions γ_1, γ_2, . . ., sorted with respect to the corresponding eigenvalues λ_1 ≥ λ_2 ≥ . . ., solve the FPCA problem (5.5). The following link between eigenvalues and eigenfunctions holds:

λ_m = Var(f_m) = Var{∫ γ_m(t) X(t) dt} = ∫∫ γ_m(s) Cov(s, t) γ_m(t) ds dt.

In the sampling problem, the unknown covariance function Cov(s, t) needs to be replaced by the sample covariance function Ĉov(s, t). Dauxois, Pousse, and Romain (1982) show that the resulting eigenfunctions and eigenvalues are consistent estimators of γ_m and λ_m and derive some asymptotic results for these estimators.

5.4.1 Basis Expansion

Suppose that the weight function γ has the expansion

γ(t) = Σ_{l=1}^L b_l Θ_l(t) = Θ(t)^T b.

Using this notation we can rewrite the left hand side of the eigenequation (5.6):

∫ Cov(s, t) γ(t) dt = Θ(s)^T Cov(C) ∫ Θ(t) Θ(t)^T b dt = Θ(s)^T Cov(C) W b,

so that:

Cov(C) W b = λ b.

The functional scalar product ⟨γ_l, γ_k⟩ corresponds to b_l^T W b_k in the truncated basis framework, in the sense that if two functions γ_l and γ_k are orthogonal, the corresponding coefficient vectors b_l, b_k satisfy b_l^T W b_k = 0. Matrix W is


Figure 5.4: Weight functions for the 1M and 2M maturity groups. Blue solid lines, γ̂_1^{1M} and γ̂_1^{2M}, are the first eigenfunctions; green finely dashed lines, γ̂_2^{1M} and γ̂_2^{2M}, are the second eigenfunctions; cyan dashed lines, γ̂_3^{1M} and γ̂_3^{2M}, are the third eigenfunctions. STFfda04.xpl

symmetric by definition. Thus, defining u = W^{1/2} b, one finally needs to solve the symmetric eigenvalue problem:

W^{1/2} Cov(C) W^{1/2} u = λ u,

and to compute the inverse transformation b = W^{−1/2} u. For an orthonormal functional basis (i.e. also for the Fourier basis) W = I, i.e. the FPCA problem is reduced to the multivariate PCA performed on the matrix C.

Using the FPCA method on the IV strings for the 1M and 2M maturities we obtain the eigenfunctions plotted in Figure 5.4. It can be seen that the eigenfunctions are too rough. Intuitively, this roughness is caused by the flexibility of the functional basis. In the next section we present a way of incorporating smoothing directly into the PCA problem.
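The reduction to a multivariate eigenproblem can be sketched as follows, in the orthonormal case W = I, where FPCA is plain PCA on the coefficient matrix; the coefficients are random stand-ins with artificially decaying variances, not the ODAX fit.

```python
import numpy as np

rng = np.random.default_rng(5)

# Coefficient matrix of n curves in an orthonormal basis (W = I); for a
# non-orthonormal basis one would instead diagonalize W^{1/2} Cov(C) W^{1/2}.
n, L = 77, 9
C = rng.normal(size=(n, L)) * np.linspace(3.0, 0.3, L)  # decaying variances

Cov_C = np.cov(C, rowvar=False)
lam, U = np.linalg.eigh(Cov_C)                 # eigenvalues in ascending order
order = np.argsort(lam)[::-1]
lam, U = lam[order], U[:, order]               # gamma_m(t) = Theta(t)' U[:, m]

scores = (C - C.mean(axis=0)) @ U              # principal component scores f_m
explained = lam / lam.sum()                    # explained-variance proportions
```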

5.5 Smoothed Principal Components Analysis

As we can see in Figure 5.4, the resulting eigenfunctions are often very rough. Smoothing them could result in a more natural interpretation of the obtained weight functions. Here we apply a popular approach known as the roughness penalty. The downside of this technique is that we lose orthogonality in the L² sense.

Assume that the underlying eigenfunctions of the covariance operator have a continuous and square-integrable second derivative. Let D denote the differentiation operator, Dγ = γ′(t), and define the roughness penalty by Ψ(γ) = ||D²γ||². Moreover, suppose that γ_m has square-integrable derivatives up to degree four and that the second and the third derivatives satisfy one of the following conditions:

1. D²γ, D³γ are zero at the ends of the interval J,
2. the periodicity boundary conditions hold for γ, Dγ, D²γ, and D³γ on J.

Then we can rewrite the roughness penalty in the following way:

||D²γ||² = ∫ D²γ(s) D²γ(s) ds
= Dγ(u)D²γ(u) − Dγ(d)D²γ(d) − ∫ Dγ(s) D³γ(s) ds   (5.7)
= −γ(u)D³γ(u) + γ(d)D³γ(d) + ∫ γ(s) D⁴γ(s) ds   (5.8)
= ⟨γ, D⁴γ⟩,   (5.9)

where d and u are the boundaries of the interval J, and the boundary terms in (5.7) and (5.8) are zero under any of the conditions mentioned above. Given an eigenfunction γ with norm ||γ||² = 1, we can penalize the sample variance of the principal component by dividing it by 1 + α⟨γ, D⁴γ⟩:

PCAPV := ∫∫ γ(s) Ĉov(s, t) γ(t) ds dt / ∫ γ(t)(I + αD⁴)γ(t) dt,   (5.10)

where I denotes the identity operator. The maximum of the penalized sample variance (PCAPV) is attained by an eigenfunction γ corresponding to the largest eigenvalue of the generalized eigenequation:

∫ Ĉov(s, t) γ(t) dt = λ(I + αD⁴)γ(s).   (5.11)


As already mentioned above, the resulting weight functions (eigenfunctions) are no longer orthonormal in the L² sense. Since the weight functions are used as smoothed estimators of the principal component functions, we need to rescale them to satisfy ||γ_l||² = 1. The weight functions γ_l can also be interpreted as orthogonal with respect to a modified scalar product of the Sobolev type,

(f, g) := ⟨f, g⟩ + α⟨D²f, D²g⟩.

A more extended theoretical discussion can be found in Silverman (1991).

5.5.1 Basis Expansion

Define K to be the matrix whose elements are ⟨D²Θ_j, D²Θ_k⟩. Then the generalized eigenequation (5.11) can be transformed to:

W Cov(C) W u = λ(W + αK) u.   (5.12)

Using the Cholesky factorization LL^⊤ = W + αK and defining S = L^{−1}, we can rewrite (5.12) as:

{S W Cov(C) W S^⊤}(L^⊤ u) = λ(L^⊤ u).

Applying smoothed functional PCA (SPCA) to the IV-strings, we obtain the smoothed eigenfunctions plotted in Figure 5.5. We use α = 10^{−7}; the aim is a rather small degree of smoothing, so that only the high-frequency fluctuations are removed. Popular data-driven methods, like cross-validation, could be employed as well; see Ramsay and Silverman (1997). The interpretation of the weight functions displayed in Figure 5.5 is as follows: the first weight function (solid blue) clearly represents the level of the volatility, since the weights are almost constant and positive. The second weight function (finely dashed green) changes sign near the at-the-money point, i.e. it can be interpreted as the in-the-money/out-of-the-money identification factor, or slope. The third (dashed cyan) weight function may play the part of a measure for a deep in-the-money or out-of-the-money factor, or curvature. It can be seen that the weight functions for the 1M maturity (γ̃_1^{1M}, γ̃_2^{1M}, γ̃_3^{1M}) and for the 2M maturity (γ̃_1^{2M}, γ̃_2^{2M}, γ̃_3^{2M}) have a similar structure. From a practical point of view it can be interesting to try to obtain common estimated eigenfunctions (factors in the further analysis) for both groups. In the next section, we introduce an estimation procedure motivated by the Common Principal Components model.
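The transformation of (5.12) into a symmetric problem can be sketched as follows; again a hypothetical numpy helper, not the quantlet used for the figures:

```python
import numpy as np

def smoothed_fpca(C, W, K, alpha=1e-7):
    """Smoothed FPCA: solve W Cov(C) W u = lam (W + alpha K) u via the
    Cholesky factorization L L^T = W + alpha K, cf. equation (5.12).

    C : (n, L) basis coefficients, W : Gram matrix, K : penalty matrix
    with entries <D^2 Theta_j, D^2 Theta_k>, alpha : smoothing parameter.
    """
    cov_C = np.cov(C, rowvar=False)
    L = np.linalg.cholesky(W + alpha * K)     # L L^T = W + alpha K
    S = np.linalg.inv(L)
    M = S @ W @ cov_C @ W @ S.T               # symmetric problem for v = L^T u
    lam, V = np.linalg.eigh(M)
    order = np.argsort(lam)[::-1]
    U = np.linalg.solve(L.T, V[:, order])     # back-transform u = L^{-T} v
    return lam[order], U
```

The returned eigenvectors solve the generalized eigenequation (5.12) exactly, which is easy to verify numerically.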


Figure 5.5: Smoothed weight functions with α = 10^{−7}. Blue solid lines, γ̂_1^{1M} and γ̂_1^{2M}, are the first eigenfunctions; green finely dashed lines, γ̂_2^{1M} and γ̂_2^{2M}, are the second eigenfunctions; and cyan dashed lines, γ̂_3^{1M} and γ̂_3^{2M}, are the third eigenfunctions. STFfda05.xpl

5.6 Common Principal Components Model

The Common Principal Components (CPC) model in the multivariate setting can be motivated as a model for the similarity of the covariance matrices in the k-sample problem, Flury (1988). Having k random vectors x^(1), x^(2), ..., x^(k) ∈ R^p, the CPC model can be written as:

Ψ_j := Cov(x^(j)) = Γ Λ_j Γ^⊤,

where Γ is an orthogonal matrix and Λ_j = diag(λ_{j1}, ..., λ_{jp}). This means that the eigenvectors are the same across samples and only the eigenvalues, i.e. the variances of the principal component scores (5.4), differ. Using the normality assumption, the sample covariance matrices S_j, j = 1, ..., k, are Wishart-distributed:

S_j ∼ W_p(n_j, Ψ_j/n_j),


and the CPC model can be estimated by maximum likelihood with the likelihood function:

L(Ψ_1, Ψ_2, ..., Ψ_k) = C ∏_{j=1}^{k} exp{ tr( −(n_j/2) Ψ_j^{−1} S_j ) } (det Ψ_j)^{−n_j/2}.

Here C is a factor that does not depend on the parameters and n_j is the number of observations in group j. The maximization of this likelihood function is equivalent to the minimization of:

∏_{j=1}^{k} ( det diag(Γ^⊤ S_j Γ) / det(Γ^⊤ S_j Γ) )^{n_j},   (5.13)

and the optimization of this criterion is performed by the so-called Flury-Gautschi (FG) algorithm, Flury (1988).

As shown in Section 5.4, using the functional basis expansion, FPCA and SPCA are essentially implemented via the spectral decomposition of the "weighted" covariance matrix of the coefficients. In view of the minimization property of the FG algorithm, the diagonalization procedure optimizing the criterion (5.13) can be employed here as well. However, the obtained estimates may not be maximum likelihood estimates. Using this procedure for the IV-strings of the 1M and 2M maturities we obtain "common" smoothed eigenfunctions. The first three common eigenfunctions (γ̃_1^c, γ̃_2^c, γ̃_3^c) are displayed in Figures 5.6–5.8. The solid blue curve represents the estimated eigenfunction for the 1M maturity, the finely dashed green curve the one for the 2M maturity, and the dashed black curve is the common eigenfunction estimated by the FG algorithm.

Assuming that the σ̂_i(κ, τ) are centered for τ = 1M and 2M (we subtract the sample mean of the corresponding group from the estimated functions), we may use the obtained weight functions in a factor model of the IV dynamics of the form:

σ̃_i(κ, τ) = Σ_{j=1}^{R} γ̃_j^c(κ) ⟨γ̃_j^c(κ), σ̂_i(κ, τ)⟩,   (5.14)

where τ ∈ {1M, 2M} and R is the number of factors. Thus σ̃_i is an alternative estimator of σ_i. This factor model can be used in simulation applications like Monte Carlo VaR. In particular, the use of the common principal components γ̃_j^c(κ) reduces the high-dimensional IV-surface problem to a small number of functional factors.
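The criterion (5.13) itself is straightforward to evaluate for a candidate Γ. The following sketch (our own helper, with made-up inputs in mind) also illustrates why it is minimized: by Hadamard's inequality, det diag(A) ≥ det(A) for a positive definite A, so the criterion is always at least one, with equality exactly when Γ diagonalizes every S_j.

```python
import numpy as np

def cpc_criterion(Gamma, covs, ns):
    """Evaluate the CPC criterion (5.13),
    prod_j ( det diag(Gamma' S_j Gamma) / det(Gamma' S_j Gamma) )^{n_j},
    which the FG algorithm minimizes over orthogonal matrices Gamma."""
    crit = 1.0
    for S, n in zip(covs, ns):
        G = Gamma.T @ S @ Gamma
        crit *= (np.prod(np.diag(G)) / np.linalg.det(G)) ** n
    return crit
```

When the covariance matrices really share a common eigenvector matrix Q, the criterion evaluated at Q equals one, its global minimum.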


Figure 5.6: First weight functions, α = 10^{−7}; the solid blue line is the weight function of the 1M maturity group (γ̂_1^{1M}), the finely dashed green line that of the 2M maturity group (γ̂_1^{2M}), and the dashed black line is the common eigenfunction (γ̃_1^c), estimated from both groups.

In addition, an econometric approach successfully employed by Fengler, Härdle, and Mammen (2004) can be pursued. It consists of fitting an appropriate model to the time series of the estimated principal component scores, f̃_{ij}^c(τ) = ⟨γ̃_j^c(κ), σ̂_i(κ, τ)⟩, as displayed in Figure 5.9. Note that the σ̂_i(κ, τ) are centered again (sample means are zero). The fitted time series model can be used for forecasting future IV functions.

There are still some open questions related to this topic. First of all, a practitioner would be interested in a good automated choice of the parameters of our method (the dimension L of the truncated functional basis and the smoothing parameter α). The application of the Fourier coefficients in this framework seems to be reasonable for volatility smiles (U-shaped strings); for volatility smirks (typically monotonically decreasing strings), however, the performance



Figure 5.7: Second eigenfunctions, α = 10^{−7}; the solid blue line is the weight function of the 1M maturity group (γ̂_2^{1M}), the finely dashed green line that of the 2M maturity group (γ̂_2^{2M}), and the dashed black line is the common eigenfunction (γ̃_2^c), estimated from both groups.

is rather bad. In particular, the variance of our functional objects and the shape of our weight functions at the boundaries are affected. The application of regression splines in this setting seems promising, but it increases the number of smoothing parameters through the number and the choice of the knots, problems which are generally not easy to deal with. The next natural question, which is still open, concerns the statistical properties of the technique and a testing procedure for the functional common PCA model. Finally, using data for a longer time period, one may also analyze longer maturities such as 3 or 6 months.


Figure 5.8: Third eigenfunctions, α = 10^{−7}; the solid blue line is the weight function of the 1M maturity group (γ̂_3^{1M}), the finely dashed green line that of the 2M maturity group (γ̂_3^{2M}), and the dashed black line is the common eigenfunction (γ̃_3^c), estimated from both groups.


Figure 5.9: Estimated principal component scores f̃_{i1}^c(1M), f̃_{i2}^c(1M), and f̃_{i3}^c(1M) for the 1M maturity (first row), and f̃_{i1}^c(2M), f̃_{i2}^c(2M), and f̃_{i3}^c(2M) for the 2M maturity (second row); α = 10^{−7}.


Bibliography

Black, F. and Scholes, M. (1973). The Pricing of Options and Corporate Liabilities, Journal of Political Economy 81: 637–654.

Dauxois, J., Pousse, A., and Romain, Y. (1982). Asymptotic Theory for the Principal Component Analysis of a Vector Random Function: Some Applications to Statistical Inference, Journal of Multivariate Analysis 12: 136–154.

Flury, B. (1988). Common Principal Components and Related Models, Wiley, New York.

Fengler, M., Härdle, W., and Schmidt, P. (2002). Common Factors Governing VDAX Movements and the Maximum Loss, Journal of Financial Markets and Portfolio Management 16(1): 16–29.

Fengler, M., Härdle, W., and Villa, P. (2003). The Dynamics of Implied Volatilities: A Common Principal Components Approach, Review of Derivative Research 6: 179–202.

Fengler, M., Härdle, W., and Mammen, E. (2004). Implied Volatility String Dynamics, CASE Discussion Paper, http://www.case.hu-berlin.de.

Föllmer, H. and Schied, A. (2002). Stochastic Finance, Walter de Gruyter.

Härdle, W. (1990). Applied Nonparametric Regression, Cambridge University Press.

Hafner, R. and Wallmeier, M. (2001). The Dynamics of DAX Implied Volatilities, International Quarterly Journal of Finance 1(1): 1–27.

Härdle, W. and Simar, L. (2003). Applied Multivariate Statistical Analysis, Springer-Verlag, Berlin Heidelberg.

Kneip, A. and Utikal, K. (2001). Inference for Density Families Using Functional Principal Components Analysis, Journal of the American Statistical Association 96: 519–531.

Ramsay, J. and Silverman, B. (1997). Functional Data Analysis, Springer, New York.

Rice, J. and Silverman, B. (1991). Estimating the Mean and Covariance Structure Nonparametrically when the Data are Curves, Journal of the Royal Statistical Society, Series B 53: 233–243.


Silverman, B. (1996). Smoothed Functional Principal Components Analysis by Choice of Norm, Annals of Statistics 24: 1–24.

6 Implied Trinomial Trees

Pavel Čížek and Karel Komorád

Options are financial derivatives that, conditional on the price of an underlying asset, constitute a right to transfer the ownership of this underlying. More specifically, European call and put options give their owner the right to buy and sell, respectively, at a fixed strike price at a given date. Options are important financial instruments used for hedging since they can be included in a portfolio to reduce risk. Corporate securities (e.g., bonds or stocks) may include option features as well. Last but not least, some new financing techniques, such as contingent value rights, are straightforward applications of options. Thus, option pricing has become one of the basic techniques in finance.

The boom in research on the use of options started after Black and Scholes (1973) published an option-pricing formula based on geometric Brownian motion. Option prices computed by the Black-Scholes formula and the market prices of options exhibit a discrepancy, though. Whereas the volatility implied by market option prices varies with the strike price (or moneyness), a dependency referred to as the volatility smile, the Black-Scholes model is based on the assumption of a constant volatility. Therefore, many new approaches were proposed to model option prices consistently with the market. Probably the most commonly used and rather intuitive procedure for option pricing is based on binomial trees, which represent a discrete form of the Black-Scholes model. To fit the market data, Derman and Kani (1994) proposed an extension of binomial trees: the so-called implied binomial trees, which are able to model the market volatility smile. Implied trinomial trees (ITTs) present an analogous extension of trinomial trees proposed by Derman, Kani, and Chriss (1996). Like their binomial counterparts, they can fit the market volatility smile and actually converge to the same continuous limit as binomial trees.
In addition, they allow for a free choice of the underlying prices at each node of a tree, the so-called state space.


Figure 6.1: Implied volatilities of DAX put options on January 29, 1999. Left panel: the skew structure, implied volatility [%] versus strike price [DM]. Right panel: the term structure, implied volatility [%] versus time to maturity [days].

This feature of ITTs makes it possible to improve the fit of the volatility smile under some circumstances, such as inconsistent, arbitrage-violating, or other market prices leading to implausible or degenerated probability distributions in binomial trees. We introduce ITTs in several steps. We first review the main concepts regarding option pricing (Section 6.1) and implied models (Section 6.2). Later, we discuss the construction of ITTs (Section 6.3) and provide some illustrative examples (Section 6.4).

6.1 Option Pricing

The option-pricing model by Black and Scholes (1973) is based on the assumption that the underlying asset follows a geometric Brownian motion with a constant volatility σ:

dS_t / S_t = μ dt + σ dW_t,   (6.1)

where S_t denotes the underlying-price process, μ is the expected return, and W_t stands for the standard Wiener process. As a consequence, the distribution of S_t is lognormal. More importantly, the volatility σ is the only parameter of the Black-Scholes formula that is not explicitly observable on the market. Thus,


Figure 6.2: Two levels of a CRR binomial tree.

we infer σ by matching the observed option prices. A solution σ_I, "implied" by option prices, is called the implied volatility (or Black-Scholes equivalent). In general, implied volatilities vary both with respect to the exercise price (the skew structure) and the expiration time (the term structure). Both dependencies are illustrated in Figure 6.1, with the first one representing the volatility smile. Let us add that the implied volatility of an option is the market's estimate of the average future underlying volatility during the life of that option. We refer to the market's estimate of an underlying volatility at a particular time and price point as the local volatility.

Binomial trees, as a discretization of the Black-Scholes model, can be constructed in several alternative ways. Here we recall the classic Cox, Ross, and Rubinstein (1979) scheme (CRR), which has a constant logarithmic spacing between nodes on the same level (this spacing represents the future price volatility). A standard CRR tree is depicted in Figure 6.2. Starting at a node S, the price of an underlying asset can either increase to Su with probability p or decrease to Sd with probability 1 − p:

Su = S e^{σ√Δt},   (6.2)
Sd = S e^{−σ√Δt},   (6.3)
p = (F − Sd) / (Su − Sd),   (6.4)


where Δt refers to the time step and σ is the (constant) volatility. The forward price F = e^{rΔt} S in the node S is determined by the continuous interest rate r (for the sake of simplicity, we assume that the dividend yield equals zero; see Cox, Ross, and Rubinstein, 1979, for the treatment of dividends). A binomial tree corresponding to the risk-neutral underlying evolution process is the same for all options on this asset, no matter what the strike price or time to expiration is. There are many extensions of the original Black-Scholes approach that try to capture the volatility variation and to price options consistently with the market prices (that is, to account for the volatility smile). Some extensions incorporate a stochastic volatility factor or discontinuous jumps in the underlying price; see for instance Franke, Härdle, and Hafner (2004) and Chapters 5 and 7. In the next section, we discuss an extension of the Black-Scholes model developed by Derman and Kani (1994): the implied trees.
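One CRR step, equations (6.2)–(6.4), can be computed directly; a small sketch (the function name is ours):

```python
import math

def crr_step(S, sigma, r, dt):
    """One step of a CRR binomial tree: up/down node prices (6.2)-(6.3)
    and the risk-neutral up probability (6.4)."""
    Su = S * math.exp(sigma * math.sqrt(dt))
    Sd = S * math.exp(-sigma * math.sqrt(dt))
    F = S * math.exp(r * dt)           # forward price in the node
    p = (F - Sd) / (Su - Sd)           # probability of the up move
    return Su, Sd, p
```

By construction, p Su + (1 − p) Sd = F, i.e. the tree reproduces the forward price exactly, and Su·Sd = S², reflecting the constant logarithmic spacing.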

6.2 Trees and Implied Trees

While the Black-Scholes model assumes that an underlying asset follows a geometric Brownian motion (6.1) with a constant volatility, more complex models assume that the underlying follows a process with a price- and time-varying volatility σ(S, t); see Dupire (1994) and Fengler, Härdle, and Villa (2003) for details and related evidence. Such a process can be expressed by the following stochastic differential equation:

dS_t / S_t = μ dt + σ(S, t) dW_t.   (6.5)

This approach ensures that the valuation of an option remains preference-free; that is, all uncertainty is in the spot price, and thus we can hedge options using the underlying. Derman and Kani (1994) show that it is possible to determine σ(S, t) directly from the market prices of liquidly traded options. Further, they use this volatility σ(S, t) to construct an implied binomial tree (IBT), which is a natural discrete representation of a non-lognormal evolution process of the underlying prices. In general, we can use, instead of an IBT, any (higher-order) multinomial tree for the discretization of process (6.5). Nevertheless, as the time step tends towards zero, all of them converge to the same continuous process (Hull and White, 1990). Thus, IBTs are minimal among all implied multinomial trees in the sense that they have only one degree of freedom: the arbitrary

Figure 6.3: Computing the Arrow-Debreu price in a binomial tree. The bold lines with arrows depict all (three) possible paths from the root of the tree to point A.

choice of the central node at each level of the tree. Although one may now feel that binomial trees are sufficient, some higher-order trees can be more useful because they allow for a more flexible discretization, in the sense that transition probabilities and probability distributions can vary as smoothly as possible across the tree. This is especially important when the market option prices are inaccurate because of inefficiency, market frictions, and so on.

At the end of this section, let us recall the concept of Arrow-Debreu prices, which is closely related to multinomial trees and becomes very useful in the subsequent derivations (Section 6.3). Let (n, i) denote the i-th (highest) node in the n-th time level of a tree. The Arrow-Debreu price λ_{n,i} at node (n, i) of a tree is computed as the sum, over all paths starting in the root of the tree and leading to node (n, i), of the products of the risklessly discounted transition probabilities. Hence, the Arrow-Debreu price of the root is equal to one, and the Arrow-Debreu prices at the final level of a (multinomial) tree form a discrete approximation of the state price density. Notice that these prices are discounted, and thus the risk-neutral probability corresponding to each node (at the final level) should be calculated as the product of the Arrow-Debreu price and the capitalizing factor e^{rT}.

6.3 Implied Trinomial Trees

6.3.1 Basic Insight

A trinomial tree with N levels is a set of nodes s_{n,i} (representing the underlying price), where n = 1, ..., N is the level number and i = 1, ..., 2n − 1 indexes nodes within a level. Being at a node s_{n,i}, one can move to one of three nodes (see Figure 6.4a): (i) to the upper node with value s_{n+1,i} with probability p_i; (ii) to the lower node with value s_{n+1,i+2} with probability q_i; and (iii) to the middle node with value s_{n+1,i+1} with probability 1 − p_i − q_i. For the sake of brevity, we omit the level index n from transition probabilities unless they refer to a specific level; that is, we write p_i and q_i instead of p_{n,i} and q_{n,i} unless the level has to be specified. Similarly, let us denote the nodes in the new level with capital letters: S_i (= s_{n+1,i}), S_{i+1} (= s_{n+1,i+1}), and S_{i+2} (= s_{n+1,i+2}), respectively (see Figure 6.4b).

Starting from a node s_{n,i} at time t_n, there are five unknown parameters: the two transition probabilities p_i and q_i and the three prices S_i, S_{i+1}, and S_{i+2} at the new nodes. To determine them, we need to introduce some notation and the main requirements a tree should satisfy. First, let F_i denote the known forward price of the spot price s_{n,i} and λ_{n,i} the known Arrow-Debreu price at node (n, i). The Arrow-Debreu prices for a trinomial tree can be obtained by the following iterative formulas:

λ_{1,1} = 1,   (6.6)
λ_{n+1,1} = e^{−rΔt} λ_{n,1} p_1,   (6.7)
λ_{n+1,2} = e^{−rΔt} {λ_{n,1}(1 − p_1 − q_1) + λ_{n,2} p_2},   (6.8)
λ_{n+1,i+1} = e^{−rΔt} {λ_{n,i−1} q_{i−1} + λ_{n,i}(1 − p_i − q_i) + λ_{n,i+1} p_{i+1}},   (6.9)
λ_{n+1,2n} = e^{−rΔt} {λ_{n,2n−1}(1 − p_{2n−1} − q_{2n−1}) + λ_{n,2n−2} q_{2n−2}},   (6.10)
λ_{n+1,2n+1} = e^{−rΔt} λ_{n,2n−1} q_{2n−1}.   (6.11)

An implied tree provides a discrete representation of the evolution process of the underlying prices. To capture and model the underlying price correctly, we desire that an implied tree:

1. reproduces correctly the volatility smile,
2. is risk-neutral,
3. uses transition probabilities from the interval (0, 1).
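Formulas (6.6)–(6.11) amount to one forward-induction sweep: every node spreads its discounted Arrow-Debreu price onto its three daughters. A compact sketch (our own helper) covers all six cases at once:

```python
import math

def arrow_debreu_level(lam, p, q, r, dt):
    """Forward induction for trinomial Arrow-Debreu prices (6.6)-(6.11).

    lam  : Arrow-Debreu prices at level n (length 2n - 1)
    p, q : up/down transition probabilities of the level-n nodes
    Returns the prices at level n + 1 (length 2n + 1).
    """
    new = [0.0] * (len(lam) + 2)
    disc = math.exp(-r * dt)
    for i, (lam_i, p_i, q_i) in enumerate(zip(lam, p, q)):
        new[i] += disc * lam_i * p_i                    # upper daughter
        new[i + 1] += disc * lam_i * (1.0 - p_i - q_i)  # middle daughter
        new[i + 2] += disc * lam_i * q_i                # lower daughter
    return new
```

Since the three probabilities at each node sum to one, the Arrow-Debreu prices at level n sum to e^{−r t_n}, consistent with their interpretation as discounted path probabilities.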

Figure 6.4: Nodes in a trinomial tree. Left panel: a single node with its branches. Right panel: the nodes of two consecutive levels n − 1 and n.

To fulfill the risk-neutrality condition, the expected value of the underlying price in the following time period t_{n+1} has to equal the known forward price:

E s_{n+1,i} = p_i S_i + (1 − p_i − q_i) S_{i+1} + q_i S_{i+2} = F_i = e^{rΔt} s_{n,i},   (6.12)

where r denotes the continuous interest rate and Δt is the time step from t_n to t_{n+1}. Additionally, one can specify such a condition also for the second moments of s_{n,i} and F_i. Hence, one obtains a second constraint on the node prices and transition probabilities:

p_i (S_i − F_i)² + (1 − p_i − q_i)(S_{i+1} − F_i)² + q_i (S_{i+2} − F_i)² = F_i² σ_i² Δt + O(Δt),   (6.13)

where σ_i is the stock or index price volatility during the time period.


Consequently, we have two constraints (6.12) and (6.13) for five unknown parameters, and therefore, there is no unique implied trinomial tree. On the other hand, all trees satisfying these constraints are equivalent in the sense that, as the time spacing Δt tends to zero, they all converge to the same continuous process. A common method for constructing an ITT is first to choose the underlying prices freely and then to solve equations (6.12) and (6.13) for the transition probabilities p_i and q_i. Afterwards, one only has to ensure that these probabilities do not violate the above-mentioned Condition 3. Apparently, using an ITT instead of an IBT gives us additional degrees of freedom. This allows us to better fit the volatility smile, especially when inconsistent or arbitrage-violating market option prices make a consistent tree impossible. Note, however, that even though the constructed tree is consistent, other difficulties can arise when its local volatility and probability distributions are jagged and "implausible."

6.3.2 State Space

There are several methods we can use to construct an initial state space. Let us first discuss the construction of a constant-volatility trinomial tree, which forms a base for an implied trinomial tree. As already mentioned, the binomial and trinomial discretizations of the constant-volatility Black-Scholes model have the same continuous limit, and therefore, are equivalent. Hence, we can start from a constant-volatility CRR binomial tree and then combine two steps of this tree into a single step of a new trinomial tree. This is illustrated in Figure 6.5, where the thin lines correspond to the original binomial tree and the thick lines to the constructed trinomial tree. Consequently, using formulas (6.2) and (6.3), we can derive the following expressions for the nodes of the constructed trinomial tree:

S_i = s_{n+1,i} = s_{n,i} e^{σ√(2Δt)},   (6.14)
S_{i+1} = s_{n+1,i+1} = s_{n,i},   (6.15)
S_{i+2} = s_{n+1,i+2} = s_{n,i} e^{−σ√(2Δt)},   (6.16)

where σ is a constant volatility (e.g., an estimate of the at-the-money volatility at maturity T). Next, summing the transition probabilities in the binomial tree given in (6.4), we can also derive the up and down transition probabilities in


Figure 6.5: Constructing a constant-volatility trinomial tree (thick lines) by combining two steps of a CRR binomial tree (thin lines).

the trinomial tree (the "middle" transition probability is equal to 1 − p_i − q_i):

p_i = ( (e^{rΔt/2} − e^{−σ√(Δt/2)}) / (e^{σ√(Δt/2)} − e^{−σ√(Δt/2)}) )²,
q_i = ( (e^{σ√(Δt/2)} − e^{rΔt/2}) / (e^{σ√(Δt/2)} − e^{−σ√(Δt/2)}) )².

Note that there are more methods for building a constant-volatility trinomial tree, such as combining two steps of a Jarrow and Rudd (1983) binomial tree; see Derman, Kani, and Chriss (1996) for more details.

When the implied volatility varies only slowly with strike and expiration, the regular state space with a uniform mesh size, as described above, is adequate for constructing ITT models. On the other hand, if the volatility varies significantly with strike or time to maturity, we should choose a state space reflecting these properties. Assuming that the volatility is separable in time and stock price, σ(S, t) = σ(S)σ(t), an ITT state space with a proper skew and term structure can be constructed in four steps.
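Putting the pieces together, a constant-volatility trinomial step, node prices (6.14)–(6.16) plus the probabilities above, can be sketched as follows (the function name is ours):

```python
import math

def trinomial_const_vol(s, sigma, r, dt):
    """Constant-volatility trinomial step built from two CRR half-steps:
    node prices (6.14)-(6.16) and the up/middle/down probabilities."""
    nodes = (s * math.exp(sigma * math.sqrt(2.0 * dt)),   # S_i
             s,                                           # S_{i+1}
             s * math.exp(-sigma * math.sqrt(2.0 * dt)))  # S_{i+2}
    den = math.exp(sigma * math.sqrt(dt / 2.0)) - math.exp(-sigma * math.sqrt(dt / 2.0))
    p = ((math.exp(r * dt / 2.0) - math.exp(-sigma * math.sqrt(dt / 2.0))) / den) ** 2
    q = ((math.exp(sigma * math.sqrt(dt / 2.0)) - math.exp(r * dt / 2.0)) / den) ** 2
    return nodes, (p, 1.0 - p - q, q)
```

Because the step is just two CRR half-steps, the risk-neutral mean p S_i + (1 − p − q) S_{i+1} + q S_{i+2} equals the forward price s e^{rΔt} exactly.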


First, we build a regular trinomial lattice with a constant time spacing Δt and a constant price spacing ΔS as described above. Additionally, we assume that all interest rates and dividends are equal to zero.

Second, we modify Δt at different time points. Let us denote the original equally spaced time points by t_0 = 0, t_1, ..., t_n = T. We can then find the unknown scaled times t̃_0 = 0, t̃_1, ..., t̃_n = T by solving the following set of non-linear equations:

t̃_k Σ_{i=1}^{n−1} 1/σ²(t̃_i) = T/σ²(T) + Σ_{i=1}^{k} t̃_i/σ²(t̃_i),   k = 1, ..., n − 1.   (6.17)

Next, we change ΔS at different levels. Denoting by S_1, ..., S_{2n+1} the original (known) underlying prices, we solve for the rescaled underlying prices S̃_1, ..., S̃_{2n+1} using

S̃_k / S̃_{k−1} = exp{ (c/σ(S_k)) ln(S_k/S_{k−1}) },   k = 2, ..., 2n + 1,   (6.18)

where c is a constant. It is recommended to set c to an estimate of the local volatility. Since there are 2n equations for 2n + 1 unknown parameters, an additional equation is needed. Here we always suppose that the new central node equals the original central node: S̃_{n+1} = S_{n+1}. See Derman, Kani, and Chriss (1996) for a more elaborate explanation of the theory behind equations (6.17) and (6.18).

Finally, one can increase all node prices by a sufficiently large growth factor, which removes forward-price violations; see Section 6.3.4. Multiplying all zero-rate node prices at time t̃_i by e^{r t̃_i} should always be sufficient.

6.3.3 Transition Probabilities

Once the state space of an ITT is ﬁxed, we can compute the transition probabilities for all nodes (n, i) at each tree level n. Let C(K, tn+1 ) and P (K, tn+1 ) denote today’s price of a standard European call and put option, respectively, struck at K and expiring at tn+1 . These values can be obtained by interpolating the smile surface at various strike and time points. The values of these options given by the trinomial tree are the discounted expectations of the pay-oﬀ functions: max(Sj − K, 0) = (Sj − K)+ for the call option and max(K − Sj , 0) for the put option at the node (n + 1, j). The expectation


is taken with respect to the probabilities of reaching each node, that is, with respect to the transition probabilities:

C(K, t_{n+1}) = e^{−rΔt} Σ_j {p_j λ_{n,j} + (1 − p_{j−1} − q_{j−1})λ_{n,j−1} + q_{j−2} λ_{n,j−2}} (S_j − K)^+,   (6.19)

P(K, t_{n+1}) = e^{−rΔt} Σ_j {p_j λ_{n,j} + (1 − p_{j−1} − q_{j−1})λ_{n,j−1} + q_{j−2} λ_{n,j−2}} (K − S_j)^+.   (6.20)

If we set the strike price K to S_{i+1} (the stock price at node (n + 1, i + 1)), rearrange the terms in the sum, and use equation (6.12), we can express the transition probabilities p_i and q_i for all nodes above the central node from formula (6.19):

p_i = ( e^{rΔt} C(S_{i+1}, t_{n+1}) − Σ_{j=1}^{i−1} λ_{n+1,j}(F_j − S_{i+1}) ) / ( λ_{n+1,i}(S_i − S_{i+1}) ),   (6.21)
q_i = ( F_i − p_i(S_i − S_{i+1}) − S_{i+1} ) / ( S_{i+2} − S_{i+1} ).   (6.22)

Similarly, we compute from formula (6.20) the transition probabilities for all nodes below (and including) the central node (n + 1, n) at time t_n:

q_i = ( e^{rΔt} P(S_{i+1}, t_{n+1}) − Σ_{j=i+1}^{2n−1} λ_{n+1,j}(S_{i+1} − F_j) ) / ( λ_{n+1,i}(S_{i+1} − S_{i+2}) ),   (6.23)
p_i = ( F_i − q_i(S_{i+2} − S_{i+1}) − S_{i+1} ) / ( S_i − S_{i+1} ).   (6.24)

A detailed derivation of these formulas can be found in Komorád (2002). Finally, the implied local volatilities are approximated from equation (6.13):

σ_i² ≈ ( p_i(S_i − F_i)² + (1 − p_i − q_i)(S_{i+1} − F_i)² + q_i(S_{i+2} − F_i)² ) / ( F_i² Δt ).   (6.25)

6.3.4 Possible Pitfalls

Formulas (6.21)–(6.24) can unfortunately result in transition probabilities which are negative or greater than one. This is inconsistent with rational option prices


Figure 6.6: Two kinds of the forward price violation. Left panel: forward price outside the range of its daughter nodes. Right panel: sharp increase in option prices leading to an extreme local volatility.

and allows arbitrage. We actually have to face two forms of this problem, see Figure 6.6 for examples of such trees. First, we have to check that no forward price Fn,i at node (n, i) falls outside the range of its daughter nodes at the level n + 1: Fn,i ∈ (sn+1,i+2 , sn+1,i ). This inconsistency is not diﬃcult to overcome since we are free to choose the state space. Thus, we can overwrite the nodes causing this problem. Second, extremely small or large values of option prices, which would imply an extreme value of local volatility, can also result in probabilities that are negative or larger than one. In such a case, we have to overwrite the option prices which led to the unacceptable probabilities. Fortunately, the transition probabilities can be always corrected providing that the corresponding state space does not violate the forward price condition Fn,i ∈ (sn+1,i+2 , sn+1,i ). Derman, Kani, and Chriss (1996) proposed to reduce the troublesome nodes to binomial ones or to set Si − Fi 1 Fi − Si+1 Fi − Si+2 1 pi = , qi = , (6.26) + 2 Si − Si+1 Si − Si+2 2 Si − Si+2

for F_i ∈ (S_{i+1}, S_i), and

p_i = \frac{1}{2} \, \frac{F_i - S_{i+2}}{S_i - S_{i+2}}, \qquad q_i = \frac{1}{2} \left( \frac{S_{i+1} - F_i}{S_{i+1} - S_{i+2}} + \frac{S_i - F_i}{S_i - S_{i+2}} \right),   (6.27)

for F_i ∈ (S_{i+2}, S_{i+1}). In both cases, the "middle" transition probability is equal to 1 − p_i − q_i.
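The overwrite rule is easy to mechanize. The following sketch (our own illustration, not the book's XploRe code) picks formula (6.26) or (6.27) according to where the forward price F_i falls between the daughter nodes:

```python
def overwrite_probabilities(F, S_up, S_mid, S_dn):
    """Replacement transition probabilities in the spirit of Derman, Kani,
    and Chriss (1996), formulas (6.26)-(6.27).  S_up > S_mid > S_dn are the
    three daughter-node prices and F is the forward price of the parent node."""
    if not (S_dn < F < S_up):
        raise ValueError("forward price violates F in (S_dn, S_up); "
                         "the state space itself must be overwritten")
    if F > S_mid:      # formula (6.26): F between the middle and the upper node
        p = 0.5 * ((F - S_mid) / (S_up - S_mid) + (F - S_dn) / (S_up - S_dn))
        q = 0.5 * (S_up - F) / (S_up - S_dn)
    else:              # formula (6.27): F between the lower and the middle node
        p = 0.5 * (F - S_dn) / (S_up - S_dn)
        q = 0.5 * ((S_mid - F) / (S_mid - S_dn) + (S_up - F) / (S_up - S_dn))
    return p, 1.0 - p - q, q   # (up, middle, down)
```

With the node values used later in Figure 6.7 (S_up = 116.83, S_mid = 100, S_dn = 85.59) and a forward of 107.69, the rule yields p ≈ 0.58 and q ≈ 0.15, both safely inside (0, 1).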

6.4 Examples

To illustrate the construction of an implied trinomial tree and its use, we consider here ITTs for two artificial implied-volatility functions and an implied-volatility function constructed from real data.

6.4.1 Pre-specified Implied Volatility

Let us consider a case where the volatility varies only slowly with respect to the strike price and time to expiration (maturity). Assume that the current index level is 100 points, the annual riskless interest rate is r = 12%, and the dividend yield equals δ = 4%. The annualized Black-Scholes implied volatility is assumed to be σ = 11%, and additionally, it increases (decreases) linearly by 10 basis points (i.e., 0.1%) with every 10 unit drop (rise) in the strike price K; that is, σ_I = 0.11 − ΔK · 0.001. To keep the example simple, we consider three one-year steps. First, we construct the state space: a constant-volatility trinomial tree as described in Section 6.3.2. The first node at time t_0 = 0, labeled A in Figure 6.7, has the value s_A = 100, today's spot price. The next three nodes, at time t_1, are computed from equations (6.14)–(6.16) and take the values S_1 = 116.83, S_2 = 100.00, and S_3 = 85.59, respectively. In order to determine the transition probabilities, we need to know the price P(S_2, t_1) of a put option struck at S_2 = 100 and expiring one year from now. Since the implied volatility of this option is 11%, we calculate its price using a constant-volatility trinomial tree with the same state space and find it to be 0.987 index points. Further, the forward price corresponding to node A is F_A = S e^{(r^* − δ^*)Δt} = 107.69, where r^* = log(1 + r) denotes the continuous interest rate and δ^* = log(1 + δ) the continuous dividend rate. Hence, the transition probability of a down movement


Figure 6.7: The state space of a trinomial tree with constant volatility σ = 11%. Nodes A and B are reference points for which we demonstrate the construction of an ITT and the estimation of the implied local volatility. STFitt01.xpl

computed from equation (6.23) is

q_A = \frac{e^{\log(1+0.12) \cdot 1} \cdot 0.987 - \Sigma}{1 \cdot (100.00 - 85.59)} = 0.077,

where the summation term Σ in the numerator is zero because there are no nodes with price lower than S_3 at time t_1. Similarly, the transition probability of an upward movement p_A computed from equation (6.24) is

p_A = \frac{107.69 + 0.077 \cdot (100.00 - 85.59) - 100}{116.83 - 100.00} = 0.523.
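These node-A computations are easy to replicate. The sketch below (our own illustration, not the book's XploRe code) rebuilds the constant-volatility state space with the node spacing u = e^{σ√(2Δt)} — the choice from Section 6.3.2 that reproduces S_1 = 116.83 and S_3 = 85.59 — and then evaluates q_A, p_A, the Arrow-Debreu price λ_{1,1}, and the local volatility (6.25); the put price 0.987 is taken as given:

```python
import math

r, delta, sigma, dt = 0.12, 0.04, 0.11, 1.0
S = 100.0
u = math.exp(sigma * math.sqrt(2.0 * dt))     # node spacing, Section 6.3.2
S1, S2, S3 = S * u, S, S / u                  # 116.83, 100.00, 85.59

put = 0.987                                   # P(S2, t1), given in the text
F_A = S * math.exp((math.log(1 + r) - math.log(1 + delta)) * dt)  # 107.69

# formulas (6.23) and (6.24); lambda_{0,1} = 1 and the sum term Sigma is zero
q_A = (math.exp(math.log(1 + r) * dt) * put) / (1.0 * (S2 - S3))
p_A = (F_A + q_A * (S2 - S3) - S2) / (S1 - S2)

# Arrow-Debreu prices at t1 are just discounted transition probabilities
lam_11 = math.exp(-math.log(1 + r) * dt) * p_A

# implied local volatility at node A, formula (6.25)
var_A = (p_A * (S1 - F_A) ** 2 + (1 - p_A - q_A) * (S2 - F_A) ** 2
         + q_A * (S3 - F_A) ** 2) / (F_A ** 2 * dt)
sigma_A = math.sqrt(var_A)

print(round(q_A, 3), round(p_A, 3), round(lam_11, 3), round(sigma_A, 3))
# → 0.077 0.523 0.467 0.095
```

The printed values match the ones derived in the text, including the local volatility σ_A = 9.5% obtained below.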


Figure 6.8: Transition probabilities (upper, middle, and lower panels) for σ_I = 0.11 − ΔK · 0.001. STFitt02.xpl

Finally, the middle transition probability equals 1 − p_A − q_A = 0.4. As one can see from equations (6.6)–(6.11), the Arrow-Debreu prices turn out to be just discounted transition probabilities: λ_{1,1} = e^{−log(1+0.12)·1} · 0.523 = 0.467, λ_{1,2} = 0.358, and λ_{1,3} = 0.069. We can then estimate the value of the implied local volatility at node A from equation (6.25), obtaining σ_A = 9.5%.

Let us demonstrate the computation of one further node. Starting from node B in year t_2 = 2 of Figure 6.7, the index level at this node is s_B = 116.83 and its forward price one year later is F_B = e^{(r^* − δ^*)·1} · 116.83 = 125.82. From this node, the underlying can move to one of three future nodes at time t_3 = 3, with prices s_{3,2} = 136.50, s_{3,3} = 116.83, and s_{3,4} = 100.00. The value of a call option struck at 116.83 and expiring at time t_3 = 3 is C(s_{3,3}, t_3) = 8.87, corresponding to the implied volatility of 10.83% interpolated from the smile. The Arrow-Debreu price computed from equation (6.8) is λ_{2,2} = e^{−log(1+0.12)·1} {0.467 · (1 − 0.517 − 0.070) + 0.358 · 0.523} = 0.339. The numerical values used here are already known from the previous level at time t_1. Now, using equations (6.21) and (6.22) we can find the transition


Figure 6.9: Arrow-Debreu prices for σ_I = 0.11 − ΔK · 0.001. STFitt03.xpl

probabilities:

p_{2,2} = \frac{e^{\log(1+0.12) \cdot 1} \cdot 8.87 - \Sigma}{0.339 \cdot (136.50 - 116.83)} = 0.515,

q_{2,2} = \frac{125.82 - 0.515 \cdot (136.50 - 116.83) - 116.83}{100 - 116.83} = 0.068,

where Σ contributes only one term, 0.215 · (147 − 116.83); that is, there is one single node above s_B, and its forward price is equal to 147. Finally, employing (6.25) again, we find that the implied local volatility at this node is σ_B = 9.3%. The complete trees of transition probabilities, Arrow-Debreu prices, and local volatilities for this example are shown in Figures 6.8–6.10.

As already mentioned in Section 6.3.4, the transition probabilities may fall out of the interval (0, 1). For example, let us slightly modify our previous example and assume that the Black-Scholes volatility increases (decreases) linearly 0.5


Figure 6.10: Implied local volatilities for σ_I = 0.11 − ΔK · 0.001. STFitt04.xpl

Figure 6.11: Transition probabilities (upper, middle, and lower panels) for σ_I = 0.11 − ΔK · 0.005. Nodes C and D had inadmissible transition probabilities (6.21)–(6.24). STFitt05.xpl

Figure 6.12: Arrow-Debreu prices for σ_I = 0.11 − ΔK · 0.005. Nodes C and D had inadmissible transition probabilities (6.21)–(6.24). STFitt06.xpl

percentage points with every 10 unit drop (rise) in the strike price K; that is, σ_I = 0.11 − ΔK · 0.005. In other words, the volatility smile is now five times steeper than before. Using the same state space as in the previous example, we find inadmissible transition probabilities at nodes C and D; see Figures 6.11–6.13. To overwrite them with plausible values, we used the strategy described by (6.26) and (6.27) and obtained reasonable results in the sense of the three conditions stated on page 140.

6.4.2 German Stock Index

Following the artiﬁcial examples, let us now demonstrate the ITT modeling for a real data set, which consists of strike prices for DAX options with maturities from two weeks to two months on January 4, 1999. Given such data, we can


Figure 6.13: Implied local volatilities for σ_I = 0.11 − ΔK · 0.005. Nodes C and D had inadmissible transition probabilities (6.21)–(6.24). STFitt07.xpl

first compute from the Black-Scholes equation (6.1) the implied volatilities at various combinations of prices and maturities, that is, we can construct the volatility smile. Next, we build and calibrate an ITT so that it fits this smile. The procedure is analogous to the examples described above – the only difference lies in replacing an artificial function σ_I(K, t) by an estimate of the implied volatility σ_I at each point (K, t). For the purpose of demonstration, we build a three-level ITT with a time step Δt of two weeks. First, we construct the state space (Section 6.3.2) starting at time t_0 = 0 with the spot price S = 5290 and riskless interest rate r = 4%, see Figure 6.14. Further, we have to compute the transition probabilities. Because option contracts are not available for each combination of price and maturity, we use a nonparametric smoothing procedure to model the whole volatility surface σ_I(K, t), as employed by Aït-Sahalia, Wang, and Yared (2001) and Fengler, Härdle, and Villa (2003), for instance. Given the data, some transition probabilities fall outside the interval (0, 1); they are depicted by dashed lines in Figure 6.14. Such probabilities have to be corrected as described in Section 6.3.4


Figure 6.14: The state space of the ITT constructed for DAX on January 4, 1999. Dashed lines mark the transitions with originally inadmissible transition probabilities. STFitt08.xpl

(there are no forward price violations). The resulting local volatilities, which reflect the volatility skew, are shown in Figure 6.15. Probably the main result of this ITT model can be summarized by the state price density (the left panel of Figure 6.16). This density describes the price distribution given by the constructed ITT and smoothed by the Nadaraya-Watson estimator. Apparently, the estimated density is rather rough because we used just three steps in our tree. To get a smoother state-price density estimate, we doubled the number of steps; that is, we used six one-week steps instead of three two-week steps (see the right panel of Figure 6.16).
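The Nadaraya-Watson estimator used for this smoothing is simply a kernel-weighted average; a minimal sketch (our own illustration, with a Gaussian kernel and an arbitrary bandwidth h) looks like this:

```python
import numpy as np

def nadaraya_watson(x_grid, x_obs, y_obs, h):
    """Nadaraya-Watson kernel regression: at each grid point, a Gaussian-kernel
    weighted average of the observations y_obs located at x_obs."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x_obs[None, :]) / h) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)
```

Applied to the raw ITT state-price points (node price, discounted Arrow-Debreu probability), this produces smooth curves like those in Figure 6.16; the bandwidth h governs the trade-off between roughness and oversmoothing.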


Figure 6.15: Implied local volatilities computed from an ITT for DAX on January 4, 1999. STFitt08.xpl

Finally, it is possible to use the constructed ITT to evaluate various DAX options. For example, a European knock-out option gives the owner the same rights as a standard European option as long as the index price S does not exceed or fall below some barrier B for the entire life of the knock-out option; see Härdle, Kleinow, and Stahl (2002) for details. So, let us compute the price of the knock-out-call DAX option with maturity T = 6 weeks, strike price K = 5200, and barrier B = 4800. The option price at time t_j (t_0 = 0, t_1 = 2, t_2 = 4, and t_3 = 6 weeks) and stock price s_{j,i} will be denoted V_{j,i}. At the maturity t = T = 6, the price is known: V_{3,i} = max{0, s_{3,i} − K}, i = 1, ..., 7. Thus, V_{3,1} = max{0, 4001.01 − 5200} = 0 and V_{3,5} = max{0, 5806.07 − 5200} = 606.07, for instance. To compute the option price at t_j < T, one just has to discount the conditional expectation of the option price at time t_{j+1}:

V_{j,i} = e^{-r^* \Delta t} \left\{ p_{j,i} V_{j+1,i+2} + (1 - p_{j,i} - q_{j,i}) V_{j+1,i+1} + q_{j,i} V_{j+1,i} \right\}   (6.28)

Figure 6.16: State price density estimated from an ITT for DAX on January 4, 1999. The dashed line depicts the corresponding Black-Scholes density. Left panel: State price density for a three-level tree. Right panel: State price density for a six-level tree. STFitt08.xpl STFitt09.xpl

provided that s_{j,i} ≥ B; otherwise V_{j,i} = 0. Hence, at time t_2 = 4 one obtains V_{2,1} = 0 because s_{2,1} = 4391.40 < 4800 = B, and V_{2,3} = e^{−log(1+0.04)·2/52} (0.22 · 606.07 + 0.55 · 90 + 0.23 · 0) = 184.33 (see Figure 6.17). We can continue further and compute the option price at times t_1 = 2 and t_0 = 0 just using the standard formula (6.28), since prices no longer lie below the barrier B (see Figure 6.14). Thus, one computes V_{1,1} = 79.7, V_{1,2} = 251.7, V_{1,3} = 639.8, and finally, the option price at time t_0 = 0 and stock price S = 5290 equals V_{0,1} = e^{−log(1+0.04)·2/52} (0.25 · 639.8 + 0.50 · 251.7 + 0.25 · 79.7) = 303.28.
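The backward induction (6.28) with the knock-out rule can be written generically. The little tree used below is hypothetical (it is not the calibrated DAX tree of Figure 6.17); only the recursion itself follows the text:

```python
import math

def knockout_call(prices, p, q, K, B, r_star, dt):
    """Backward induction (6.28) for a knock-out call on a trinomial tree.
    prices[j][i] is the stock price at node (j, i); p[j][i] and q[j][i] are the
    up and down transition probabilities; nodes below the barrier B are worth 0.
    Indexing follows the text: node (j, i) moves to (j+1, i), (j+1, i+1), and
    (j+1, i+2), where i+2 is the up move."""
    n = len(prices) - 1
    V = [max(0.0, s - K) if s >= B else 0.0 for s in prices[n]]
    disc = math.exp(-r_star * dt)
    for j in range(n - 1, -1, -1):
        V = [disc * (p[j][i] * V[i + 2]
                     + (1 - p[j][i] - q[j][i]) * V[i + 1]
                     + q[j][i] * V[i])
             if prices[j][i] >= B else 0.0
             for i in range(len(prices[j]))]
    return V[0]
```

For a one-step tree with daughter prices (85, 100, 117), strike 100, up probability 0.5 and down probability 0.08, an unreachable barrier gives the plain discounted expectation, while a barrier above the root price knocks the option out entirely.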


Figure 6.17: Transition probabilities (upper, middle, and lower panels) of the ITT constructed for DAX on January 4, 1999. STFitt10.xpl

Bibliography

Aït-Sahalia, Y., Wang, Y., and Yared, F. (2001). Do options markets correctly price the probabilities of movement of the underlying asset? Journal of Econometrics 102: 67–110.

Black, F. and Scholes, M. (1973). The Pricing of Options and Corporate Liabilities. Journal of Political Economy 81: 637–654.

Cox, J. C., Ross, S. A., and Rubinstein, M. (1979). Option Pricing: A Simplified Approach. Journal of Financial Economics 7: 229–263.

Derman, E. and Kani, I. (1994). The Volatility Smile and Its Implied Tree. RISK 7(2): 139–145, 32–39.

Derman, E., Kani, I., and Chriss, N. (1996). Implied Trinomial Trees of the Volatility Smile. The Journal of Derivatives 3(4): 7–22.

Dupire, B. (1994). Pricing with a smile. RISK 7(1): 18–20.

Fengler, M. R., Härdle, W., and Villa, C. (2003). The dynamics of implied volatilities: a common principal components approach. Review of Derivatives Research 6: 179–202.

Franke, J., Härdle, W., and Hafner, C. M. (2004). Statistics of Financial Markets, Springer, Heidelberg, Germany.

Härdle, W., Kleinow, T., and Stahl, G. (2002). Applied Quantitative Finance. Springer-Verlag, Berlin.

Hull, J. (1989). Options, Futures and Other Derivatives. Prentice-Hall, Englewood Cliffs, New Jersey.

Hull, J. and White, A. (1990). Valuing derivative securities using the explicit finite difference method. Journal of Financial and Quantitative Analysis 25: 87–100.

Jarrow, R. and Rudd, A. (1983). Option Pricing, Dow Jones-Irwin Publishing, Homewood, Illinois.

Komorád, K. (2002). Implied Trinomial Trees and Their Implementation with XploRe. Bachelor Thesis, HU Berlin; http://appel.rz.hu-berlin.de/Zope/ise stat/wiwi/ise/stat/forschung/dmbarbeiten/.

Ross, S., Westerfield, R., and Jaffe, J. (2002). Corporate Finance. McGraw-Hill.

7 Heston's Model and the Smile

Rafał Weron and Uwe Wystup

7.1 Introduction

The Black-Scholes formula, based on the assumption of log-normal stock diffusion with constant volatility, is the universal benchmark for option pricing. But as all market participants are keenly aware, it is flawed: the model-implied volatilities for different strikes and maturities of options are not constant and tend to be smile shaped. Over the last two decades researchers have tried to find extensions of the model in order to explain this empirical fact. A very natural approach, suggested already by Merton (1973), allows the volatilities to be a deterministic function of time. While it explains the different implied volatility levels for different times to maturity, it still does not explain the smile shape for different strikes. Dupire (1994), Derman and Kani (1994), and Rubinstein (1994) came up with the idea of allowing not only time but also state dependence of the volatility coefficient, see Fengler (2005) and Chapter 6. This local (deterministic) volatility approach yields a complete market model. Moreover, it allows the local volatility surface to be fitted, but it cannot explain the persistent smile shape which does not vanish as time passes. The next step beyond the local volatility approach was to allow the volatility coefficient in the Black-Scholes diffusion equation to be random. The pioneering work of Hull and White (1987), Stein and Stein (1991), and Heston (1993) led to the development of stochastic volatility models. These are two-factor models with one of the factors being responsible for the dynamics of the volatility coefficient. Different driving mechanisms for the volatility process have been proposed, including geometric Brownian motion and mean-reverting Ornstein-Uhlenbeck type processes.


Heston's model stands out from this class mainly for two reasons: (i) the process for the volatility is non-negative and mean-reverting, which is what we observe in the markets, and (ii) there exists a closed-form solution for vanilla options. It was also one of the first models able to explain the smile and simultaneously allow a front-office implementation and a market-consistent valuation of many exotics. Hence, we concentrate in this chapter on Heston's model. First, in Section 7.2 we discuss the properties of the model, including marginal distributions and tail behavior. In Section 7.3 we adapt the original work of Heston (1993) to a foreign exchange (FX) setting. We do this because the model is particularly useful in explaining the volatility smile found in FX markets. In equity markets the typical volatility structure is an asymmetric skew (also called a smirk or grimace); calibrating Heston's model to such a structure leads to very high, unrealistic values of the correlation coefficient. Finally, in Section 7.4 we show that the smile of vanilla options can be reproduced by suitably calibrating the model parameters.

However, we do have to say that Heston's model is not a panacea. The criticism that we might want to put forward is that the market consistency could potentially be based on a large number of market participants using it! Furthermore, while trying to calibrate short-term smiles, the volatility of volatility often seems to explode along with the speed of mean reversion. This is a strong indication that the process "wants" to jump, which of course it is not allowed to do. This observation, together with market crashes, has led researchers to consider models with jumps. Interestingly, jump-diffusion models had been investigated already in the mid-seventies (Merton, 1976), long before the advent of stochastic volatility. Jump-diffusion models are, in general, more challenging to handle numerically than stochastic volatility models. Like the latter, they result in an incomplete market. But, whereas stochastic volatility models can be made complete by the introduction of one (or a few) traded options, a jump-diffusion model typically requires the existence of a continuum of options for the market to be complete. Recent research by Bates (1996) and Bakshi, Cao, and Chen (1997) suggests using a combination of jumps and stochastic volatility. This approach allows an even better fit to market data, but has so many parameters that it is hard to believe that there is enough information in the market to calibrate them. Andersen and Andreasen (2000) let the stock dynamics be described by a jump-diffusion process with local volatility. This method combines ease of modeling steep short-term volatility skews (jumps) and accurate fitting to quoted option prices (deterministic volatility function). Other alternative approaches utilize Lévy processes (Barndorff-Nielsen, Mikosch, and Resnick, 2001; Eberlein,


Kallsen, and Kristen, 2003) or mixing unconditional disturbances (Tompkins and D’Ecclesia, 2004), but it is still an open question how to price and hedge exotics using such models.

7.2 Heston's Model

Heston (1993) assumed that the spot price follows the diffusion:

dS_t = S_t \left( \mu \, dt + \sqrt{v_t} \, dW_t^{(1)} \right),   (7.1)

i.e. a process resembling geometric Brownian motion (GBM) with a non-constant instantaneous variance v_t. Furthermore, he proposed that the variance be driven by a mean-reverting stochastic process of the form:

dv_t = \kappa (\theta - v_t) \, dt + \sigma \sqrt{v_t} \, dW_t^{(2)},   (7.2)

and allowed the two Wiener processes to be correlated with each other:

dW_t^{(1)} \, dW_t^{(2)} = \rho \, dt.

The variance process (7.2) was originally used by Cox, Ingersoll, and Ross (1985) for modeling the short term interest rate. It is defined by three parameters: θ, κ, and σ. In the context of stochastic volatility models they can be interpreted as the long term variance, the rate of mean reversion to the long term variance, and the volatility of variance (often called the vol of vol), respectively. Surprisingly, the introduction of stochastic volatility does not change the properties of the spot price process in a way that could be noticed just by a visual inspection of its realizations. In Figure 7.1 we plot sample paths of a geometric Brownian motion and the spot process (7.1) in Heston's model. To make the comparison more objective, both trajectories were obtained with the same set of random numbers. Clearly, they are indistinguishable by mere eye. In both cases the initial spot rate S_0 = 0.84 and the domestic and foreign interest rates are 5% and 3%, respectively, yielding a drift of µ = 2%. The volatility in the GBM is constant, √v_t = √(4%) = 20%, while in Heston's model it is driven by the mean-reverting process (7.2) with the initial variance v_0 = 4%, the long term variance θ = 4%, the speed of mean reversion κ = 2, and the vol of vol σ = 30%. The correlation is set to ρ = −0.05.
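A sample-path comparison like the one in Figure 7.1 can be generated with a simple Euler discretization of (7.1)–(7.2). The scheme below uses full truncation (v⁺ in the diffusion terms) to keep the variance from feeding a negative value into the square root — one of several possible discretizations, and not necessarily the one used for the book's figure; the seed is arbitrary:

```python
import numpy as np

def heston_paths(S0=0.84, mu=0.02, v0=0.04, kappa=2.0, theta=0.04,
                 sigma=0.3, rho=-0.05, T=1.0, n=252, seed=42):
    """Euler full-truncation simulation of Heston's model (7.1)-(7.2).
    Returns the time grid, the spot path, and the variance path."""
    rng = np.random.default_rng(seed)
    dt = T / n
    # correlated Brownian increments: dW1 dW2 = rho dt
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    S, v = np.empty(n + 1), np.empty(n + 1)
    S[0], v[0] = S0, v0
    for k in range(n):
        vp = max(v[k], 0.0)                     # full truncation
        S[k + 1] = S[k] * np.exp((mu - 0.5 * vp) * dt
                                 + np.sqrt(vp * dt) * z1[k])
        v[k + 1] = (v[k] + kappa * (theta - vp) * dt
                    + sigma * np.sqrt(vp * dt) * z2[k])
    return np.linspace(0.0, T, n + 1), S, v
```

Reusing the same z1 with vp frozen at v0 reproduces the GBM path drawn with "the same set of random numbers" in Figure 7.1.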

Figure 7.1: Sample paths of a geometric Brownian motion (dotted red line) and the spot process (7.1) in Heston's model (solid blue line) obtained with the same set of random numbers (left panel). Despite the fact that the volatility in the GBM is constant, while in Heston's model it is driven by a mean reverting process (right panel), the sample paths are indistinguishable by mere eye. STFhes01.xpl

A closer inspection of Heston’s model does, however, reveal some important differences with respect to GBM. For example, the probability density functions of (log-)returns have heavier tails – exponential compared to Gaussian, see Figure 7.2. In this respect they are similar to hyperbolic distributions (Weron, 2004), i.e. in the log-linear scale they resemble hyperbolas (rather than parabolas). Equations (7.1) and (7.2) deﬁne a two-dimensional stochastic process for the variables St and vt . By setting xt = log(St /S0 ) − µt, we can express it in terms of the centered (log-)return xt and vt . The process is then characterized by the transition probability Pt (x, v | v0 ) to have (log-)return x and variance v at time t given the initial return x = 0 and variance v0 at time t = 0. The time evolution of Pt (x, v | v0 ) is governed by the following Fokker-Planck (or forward


Figure 7.2: The marginal probability density function in Heston's model (solid blue line) and the Gaussian PDF (dotted red line) for the same set of parameters as in Figure 7.1 (left panel). The tails of Heston's marginals are exponential, which is clearly visible in the right panel where the corresponding log-densities are plotted. STFhes02.xpl

Kolmogorov) equation:

\frac{\partial P}{\partial t} = \kappa \frac{\partial}{\partial v} \{(v - \theta) P\} + \frac{1}{2} \frac{\partial}{\partial x}(vP) + \rho\sigma \frac{\partial^2}{\partial x \, \partial v}(vP) + \frac{1}{2} \frac{\partial^2}{\partial x^2}(vP) + \frac{\sigma^2}{2} \frac{\partial^2}{\partial v^2}(vP).   (7.3)

Solving this equation yields the following analytical formula for the density of centered returns x, given a time lag t of the price changes (Dragulescu and Yakovenko, 2002):

P_t(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{i\xi x + F_t(\xi)} \, d\xi,   (7.4)

with

F_t(\xi) = \frac{\kappa\theta}{\sigma^2} \gamma t - \frac{2\kappa\theta}{\sigma^2} \log\left( \cosh\frac{\Omega t}{2} + \frac{\Omega^2 - \gamma^2 + 2\kappa\gamma}{2\kappa\Omega} \sinh\frac{\Omega t}{2} \right),

\gamma = \kappa + i\rho\sigma\xi, \quad \text{and} \quad \Omega = \sqrt{\gamma^2 + \sigma^2(\xi^2 - i\xi)}.
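Formula (7.4) is straightforward to evaluate numerically. The sketch below (our own, using a plain truncated Riemann sum rather than the book's routines) computes the marginal density for the parameters of Figure 7.1; note that F_t(0) = 0, so the density automatically integrates to one:

```python
import numpy as np

def heston_density(x, t=1.0, kappa=2.0, theta=0.04, sigma=0.3, rho=-0.05):
    """Marginal density of centered log-returns, formula (7.4), with the
    Fourier integral truncated to [-100, 100] (the integrand decays
    exponentially in |xi| for these parameters)."""
    xi = np.linspace(-100.0, 100.0, 4001)
    gam = kappa + 1j * rho * sigma * xi
    Om = np.sqrt(gam**2 + sigma**2 * (xi**2 - 1j * xi))
    c = kappa * theta / sigma**2
    F = c * gam * t - 2.0 * c * np.log(
        np.cosh(0.5 * Om * t)
        + (Om**2 - gam**2 + 2.0 * kappa * gam) / (2.0 * kappa * Om)
        * np.sinh(0.5 * Om * t))
    integrand = np.exp(1j * np.outer(np.atleast_1d(x), xi) + F)
    dxi = xi[1] - xi[0]
    return (integrand.sum(axis=1) * dxi).real / (2.0 * np.pi)
```

For the parameter set of Figure 7.2 the resulting curve peaks near x = 0 at a value close to 2 and integrates to one, matching the left panel of that figure.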


A sample marginal probability density function in Heston’s model is illustrated in Figure 7.2. The parameters are the same as in Figure 7.1, i.e. θ = 4%, κ = 2, σ = 30%, and ρ = −0.05. The time lag is set to t = 1.

7.3 Option Pricing

Consider the value function of a general contingent claim U(t, v, S) paying g(S) = U(T, v, S) at time T. We want to replicate it with a self-financing portfolio. Due to the fact that in Heston's model we have two sources of uncertainty (the Wiener processes W^{(1)} and W^{(2)}), the portfolio must include the possibility to trade in the money market, the underlying, and another derivative security with value function V(t, v, S). We start with an initial wealth X_0 which evolves according to:

dX = \Delta \, dS + \Gamma \, dV + r_d (X - \Gamma V) \, dt - (r_d - r_f) \Delta S \, dt,   (7.5)

where Δ is the number of units of the underlying held at time t and Γ is the number of derivative securities V held at time t. Since we are operating in a foreign exchange setup, we let r_d and r_f denote the domestic and foreign interest rates, respectively. The goal is to find Δ and Γ such that X_t = U(t, v_t, S_t) for all t ∈ [0, T]. The standard approach to achieve this is to compare the differentials of U and X obtained via Itô's formula. After some algebra we arrive at the partial differential equation which U must satisfy:

\frac{1}{2} v S^2 \frac{\partial^2 U}{\partial S^2} + \rho\sigma v S \frac{\partial^2 U}{\partial S \, \partial v} + \frac{1}{2} \sigma^2 v \frac{\partial^2 U}{\partial v^2} + (r_d - r_f) S \frac{\partial U}{\partial S} + \left\{ \kappa(\theta - v) - \lambda(t, v, S) \right\} \frac{\partial U}{\partial v} - r_d U + \frac{\partial U}{\partial t} = 0.   (7.6)

For details on the derivation in the foreign exchange setting see Hakala and Wystup (2002). The term λ(t, v, S) is called the market price of volatility risk. Without loss of generality its functional form can be reduced to λ(t, v, S) = λv, Heston (1993). We obtain a solution to (7.6) by specifying appropriate boundary conditions. For a European vanilla option these are:

U(T, v, S) = \max\{\phi(S - K), 0\},   (7.7)

U(t, v, 0) = \frac{1 - \phi}{2} K e^{-r_d \tau},   (7.8)

\frac{\partial U}{\partial S}(t, v, \infty) = \frac{1 + \phi}{2} e^{-r_f \tau},   (7.9)

(r_d - r_f) S \frac{\partial U}{\partial S}(t, 0, S) + \kappa\theta \frac{\partial U}{\partial v}(t, 0, S) + \frac{\partial U}{\partial t}(t, 0, S) = r_d U(t, 0, S),   (7.10)

U(t, \infty, S) = \begin{cases} S e^{-r_f \tau}, & \text{for } \phi = +1, \\ K e^{-r_d \tau}, & \text{for } \phi = -1, \end{cases}   (7.11)

where φ is a binary variable taking value +1 for call options and −1 for put options, K is the strike in units of the domestic currency, τ = T − t, T is the expiration time in years, and t is the current time. In this case, PDE (7.6) can be solved analytically using the method of characteristic functions (Heston, 1993). The price of a European vanilla option is hence given by:

h(t) = \text{HestonVanilla}(\kappa, \theta, \sigma, \rho, \lambda, r_d, r_f, v_0, S_0, K, \tau, \phi) = \phi \left\{ e^{-r_f \tau} S_t P_+(\phi) - K e^{-r_d \tau} P_-(\phi) \right\},   (7.12)

where a = κθ, u_1 = 1/2, u_2 = −1/2, b_1 = κ + λ − σρ, b_2 = κ + λ, x = log S_t,

d_j = \sqrt{(\rho\sigma\varphi i - b_j)^2 - \sigma^2 (2 u_j \varphi i - \varphi^2)}, \qquad g_j = \frac{b_j - \rho\sigma\varphi i + d_j}{b_j - \rho\sigma\varphi i - d_j},

and

D_j(\tau, \varphi) = \frac{b_j - \rho\sigma\varphi i + d_j}{\sigma^2} \, \frac{1 - e^{d_j \tau}}{1 - g_j e^{d_j \tau}},   (7.13)

C_j(\tau, \varphi) = (r_d - r_f) \varphi i \tau + \frac{a}{\sigma^2} \left\{ (b_j - \rho\sigma\varphi i + d_j) \tau - 2 \log \frac{1 - g_j e^{d_j \tau}}{1 - g_j} \right\},   (7.14)

f_j(x, v, t, \varphi) = \exp\{ C_j(\tau, \varphi) + D_j(\tau, \varphi) v + i \varphi x \},   (7.15)

P_j(x, v, \tau, y) = \frac{1}{2} + \frac{1}{\pi} \int_0^{\infty} \operatorname{Re} \left\{ \frac{e^{-i\varphi y} f_j(x, v, \tau, \varphi)}{i\varphi} \right\} d\varphi,   (7.16)

p_j(x, v, \tau, y) = \frac{1}{\pi} \int_0^{\infty} \operatorname{Re} \left\{ e^{-i\varphi y} f_j(x, v, \tau, \varphi) \right\} d\varphi.   (7.17)

The functions P_j are the cumulative distribution functions (in the variable y) of the log-spot price after time τ = T − t starting at x for some drift µ. The functions p_j are the respective densities. The integration in (7.17) can be done with the Gauss-Legendre algorithm using 100 for ∞ and 100 abscissas. The best approach is to let the Gauss-Legendre algorithm compute the abscissas and weights


once and reuse them as constants for all integrations. Finally:

P_+(\phi) = \frac{1 - \phi}{2} + \phi P_1(\log S_t, v_t, \tau, \log K),   (7.18)

P_-(\phi) = \frac{1 - \phi}{2} + \phi P_2(\log S_t, v_t, \tau, \log K).   (7.19)

Apart from the above closed-form solution for vanilla options, alternative approaches can be utilized. These include finite difference and finite element methods. The former must be used with care since high precision is required to invert sparse matrices. The Crank-Nicolson, ADI (Alternating Direction Implicit), and Hopscotch schemes can be used; however, ADI is not suitable for handling nonzero correlation. Boundary conditions must also be set appropriately. For details see Kluge (2002). Finite element methods can be applied to price both vanillas and exotics, as explained for example in Apel, Winkler, and Wystup (2002).
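The pricing recipe (7.12)–(7.19) translates directly into code. The sketch below is our own Python rendering (with λ = 0 and a Gauss-Legendre rule on [0, 100], as suggested above); it uses the original Heston characteristic function, which is known to suffer branch-cut issues for very long maturities, so it is a sketch rather than a production pricer:

```python
import numpy as np

def heston_vanilla(kappa, theta, sigma, rho, rd, rf, v0, S0, K, tau, phi):
    """European vanilla FX option in Heston's model, formulas (7.12)-(7.19),
    with the market price of volatility risk lambda set to 0."""
    x, y = np.log(S0), np.log(K)
    a = kappa * theta
    # Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 100]
    nodes, weights = np.polynomial.legendre.leggauss(100)
    u = 50.0 * (nodes + 1.0)
    w = 50.0 * weights
    iu = 1j * u
    P = []
    # (u_j, b_j) for j = 1, 2 with lambda = 0
    for uj, bj in ((0.5, kappa - sigma * rho), (-0.5, kappa)):
        d = np.sqrt((rho * sigma * iu - bj) ** 2
                    - sigma**2 * (2 * uj * iu - u**2))
        g = (bj - rho * sigma * iu + d) / (bj - rho * sigma * iu - d)
        D = ((bj - rho * sigma * iu + d) / sigma**2
             * (1 - np.exp(d * tau)) / (1 - g * np.exp(d * tau)))
        C = (rd - rf) * iu * tau + a / sigma**2 * (
            (bj - rho * sigma * iu + d) * tau
            - 2.0 * np.log((1 - g * np.exp(d * tau)) / (1 - g)))
        f = np.exp(C + D * v0 + iu * x)                       # formula (7.15)
        P.append(0.5 + (w * (np.exp(-iu * y) * f / iu).real).sum() / np.pi)
    Pp = (1 - phi) / 2 + phi * P[0]                           # (7.18)
    Pm = (1 - phi) / 2 + phi * P[1]                           # (7.19)
    return phi * (np.exp(-rf * tau) * S0 * Pp - K * np.exp(-rd * tau) * Pm)
```

For S_0 = K = 1, r_d = 5%, r_f = 3%, v_0 = θ = 4%, κ = 2, σ = 30%, ρ = −0.05 and τ = 1, the call price is close to the Black-Scholes value for a 20% volatility, and put-call parity C − P = S_0 e^{−r_f τ} − K e^{−r_d τ} holds by construction of (7.12).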

7.3.1 Greeks

The Greeks can be evaluated by taking the appropriate derivatives or by exploiting homogeneity properties of financial markets (Reiss and Wystup, 2001). In Heston's model the spot delta and the so-called dual delta are given by:

\Delta = \frac{\partial h(t)}{\partial S_t} = \phi e^{-r_f \tau} P_+(\phi) \qquad \text{and} \qquad \frac{\partial h(t)}{\partial K} = -\phi e^{-r_d \tau} P_-(\phi),   (7.20)

respectively. Gamma, which measures the sensitivity of delta to the underlying, has the form:

\Gamma = \frac{\partial \Delta}{\partial S_t} = \frac{e^{-r_f \tau}}{S_t} \, p_1(\log S_t, v_t, \tau, \log K).   (7.21)

Theta = ∂h(t)/∂t can be computed from (7.6). The formulas for rho are the following:

\frac{\partial h(t)}{\partial r_d} = \phi K e^{-r_d \tau} \tau P_-(\phi),   (7.22)

\frac{\partial h(t)}{\partial r_f} = -\phi S_t e^{-r_f \tau} \tau P_+(\phi).   (7.23)

Note that in a foreign exchange setting there are two rho’s – one is a derivative of the option price with respect to the domestic interest rate and the other is a derivative with respect to the foreign interest rate.


The notions of vega and volga usually refer to the first and second derivative with respect to volatility. In Heston's model we use them for the first and second derivative with respect to the initial variance:

\frac{\partial h(t)}{\partial v_t} = e^{-r_f \tau} S_t \frac{\partial}{\partial v_t} P_1(\log S_t, v_t, \tau, \log K) - K e^{-r_d \tau} \frac{\partial}{\partial v_t} P_2(\log S_t, v_t, \tau, \log K),   (7.24)

\frac{\partial^2 h(t)}{\partial v_t^2} = e^{-r_f \tau} S_t \frac{\partial^2}{\partial v_t^2} P_1(\log S_t, v_t, \tau, \log K) - K e^{-r_d \tau} \frac{\partial^2}{\partial v_t^2} P_2(\log S_t, v_t, \tau, \log K),   (7.25)

where

\frac{\partial}{\partial v_t} P_j(x, v_t, \tau, y) = \frac{1}{\pi} \int_0^{\infty} \operatorname{Re} \left\{ \frac{D_j(\tau, \varphi) \, e^{-i\varphi y} f_j(x, v_t, \tau, \varphi)}{i\varphi} \right\} d\varphi,   (7.26)

\frac{\partial^2}{\partial v_t^2} P_j(x, v_t, \tau, y) = \frac{1}{\pi} \int_0^{\infty} \operatorname{Re} \left\{ \frac{D_j^2(\tau, \varphi) \, e^{-i\varphi y} f_j(x, v_t, \tau, \varphi)}{i\varphi} \right\} d\varphi.   (7.27)

7.4 Calibration

Calibration of stochastic volatility models can be done in two conceptually diﬀerent ways. One way is to look at a time series of historical data. Estimation methods such as Generalized, Simulated, and Eﬃcient Methods of Moments (respectively GMM, SMM, and EMM), as well as Bayesian MCMC have been extensively applied, for a review see Chernov and Ghysels (2000). In the Heston model we could also try to ﬁt empirical distributions of returns to the marginal distributions speciﬁed in (7.4) via a minimization scheme. Unfortunately, all historical approaches have one common ﬂaw – they do not allow for estimation of the market price of volatility risk λ(t, v, S). However, multiple studies ﬁnd evidence of a nonzero volatility risk premium, see e.g. Bates (1996). This implies in turn that one needs some extra input to make the transition from the physical to the risk neutral world. Observing only the underlying spot price and estimating stochastic volatility models with this information will not deliver correct derivative security prices. This leads us to the second estimation approach. Instead of using the spot data we calibrate the model to derivative prices.


We follow the latter approach and take the smile of the current vanilla options market as a given starting point. As a preliminary step, we have to retrieve the strikes, since the smile in foreign exchange markets is specified as a function of the deltas. Comparing the Black-Scholes type formulas (in the foreign exchange market setting we have to use the Garman and Kohlhagen (1983) specification) for delta and the option premium yields the relation for the strikes K_i. From a computational point of view this stage requires only an inversion of the cumulative normal distribution. Next, we fit the five parameters: initial variance v_0, volatility of variance σ, long-run variance θ, mean reversion κ, and correlation ρ for a fixed time to maturity and a given vector of market Black-Scholes implied volatilities {σ̂_i}_{i=1}^n for a given set of delta pillars {Δ_i}_{i=1}^n. Since we are calibrating the model to derivative prices, we do not need to worry about estimating the market price of volatility risk, as it is already embedded in the market smile. Furthermore, it can easily be verified that the value function (7.12) satisfies:

\text{HestonVanilla}(\kappa, \theta, \sigma, \rho, \lambda, r_d, r_f, v_0, S_0, K, \tau, \phi) = \text{HestonVanilla}\left(\kappa + \lambda, \frac{\kappa}{\kappa + \lambda}\,\theta, \sigma, \rho, 0, r_d, r_f, v_0, S_0, K, \tau, \phi\right),   (7.28)

which means that we can set λ = 0 by default and just determine the remaining five parameters. After fitting the parameters we compute the option prices in Heston's model using (7.12) and retrieve the corresponding Black-Scholes model implied volatilities {σ_i}_{i=1}^n via a standard bisection method (a Newton-Raphson method could be used as well). The next step is to define an objective function, which we choose to be the Sum of Squared Errors (SSE):

\text{SSE}(\kappa, \theta, \sigma, \rho, v_0) = \sum_{i=1}^{n} \{\hat{\sigma}_i - \sigma_i(\kappa, \theta, \sigma, \rho, v_0)\}^2.   (7.29)

We compare volatilities (rather than prices) because they are all of comparable magnitude. In addition, one could introduce weights for the summands to favor at-the-money (ATM) or out-of-the-money (OTM) fits. Finally, we minimize this objective function using a simplex search routine to find the optimal set of parameters.
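The strike-retrieval step described above can be sketched in a few lines. This is our own illustration, not the book's code: the function names and the (premium-unadjusted) spot-delta convention for a call, Δ = e^{−r_f τ} N(d_1), are assumptions. Inverting it requires exactly one inversion of the cumulative normal, as noted in the text.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()

def gk_call_delta(K, S, vol, tau, rd, rf):
    # Garman-Kohlhagen spot delta of a call: Delta = e^{-rf*tau} * N(d1)
    d1 = (log(S / K) + (rd - rf + 0.5 * vol**2) * tau) / (vol * sqrt(tau))
    return exp(-rf * tau) * N.cdf(d1)

def strike_from_delta(delta, S, vol, tau, rd, rf):
    # Invert Delta = e^{-rf*tau} * N(d1) for d1, then solve d1 for K:
    # K = S * exp(-d1 * vol * sqrt(tau) + (rd - rf + vol^2/2) * tau)
    d1 = N.inv_cdf(delta * exp(rf * tau))
    return S * exp(-d1 * vol * sqrt(tau) + (rd - rf + 0.5 * vol**2) * tau)

# Round trip: the strike recovered from a 25-delta call reproduces that delta.
K = strike_from_delta(0.25, 1.25, 0.10, 0.5, 0.022, 0.015)
assert abs(gk_call_delta(K, 1.25, 0.10, 0.5, 0.022, 0.015) - 0.25) < 1e-12
```

In practice the delta convention (spot vs. forward, premium-adjusted or not) must match the market quotation; the sketch above uses the plain spot-delta case only.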

7.4 Calibration

[Figure: two panels ("Vol of vol and the smile", "Initial variance and the smile") plotting implied volatility [%] against delta [%]]

Figure 7.3: Left panel: Effect of changing the volatility of variance (vol of vol) on the shape of the smile. For the red dashed "smile" with triangles σ = 0.01, and for the blue dotted smile with squares σ = 0.6. Right panel: Effect of changing the initial variance on the shape of the smile. For the red dashed smile with triangles v_0 = 0.008 and for the blue dotted smile with squares v_0 = 0.012. STFhes03.xpl

7.4.1 Qualitative Effects of Changing Parameters

Before calibrating the model to market data we will show how changing the input parameters affects the shape of the fitted smile curve. This analysis will help in reducing the dimensionality of the problem. In all plots of this subsection the solid black curve with circles is the smile obtained for v_0 = 0.01, σ = 0.25, κ = 1.5, θ = 0.015, and ρ = 0.05. First, let us take a look at the volatility of variance (vol of vol); see the left panel of Figure 7.3. Clearly, setting σ equal to zero produces a deterministic process for the variance, and hence a volatility which does not admit any smile; the resulting fit is a constant curve. On the other hand, increasing the volatility of variance increases the convexity of the fit. The initial variance has a different impact on the smile: changing v_0 adjusts the height of the smile curve rather than its shape. This is illustrated in the right panel of Figure 7.3.

[Figure: two panels ("Long-run variance and the smile", "Mean reversion and the smile") plotting implied volatility [%] against delta [%]]

Figure 7.4: Left panel: Effect of changing the long-run variance on the shape of the smile. For the red dashed smile with triangles θ = 0.01, and for the blue dotted smile with squares θ = 0.02. Right panel: Effect of changing the mean reversion on the shape of the smile. For the red dashed smile with triangles κ = 0.01, and for the blue dotted smile with squares κ = 3. STFhes04.xpl

Effects of changing the long-run variance θ are similar to those observed when changing the initial variance, see the left panel of Figure 7.4. This requires some attention in the calibration process. It seems promising to choose the initial variance a priori and only let the long-run variance vary. In particular, a different initial variance for different maturities would be inconsistent. Changing the mean reversion κ affects the ATM part more than the extreme wings of the smile curve. The low deltas remain almost unchanged whereas increasing the mean reversion lifts the center. This is illustrated in the right panel of Figure 7.4. Moreover, the influence of mean reversion is often compensated by a stronger volatility of variance. This suggests fixing the mean reversion parameter and only calibrating the remaining parameters.

Finally, let us look at the influence of correlation. The uncorrelated case produces a fit that looks like a symmetric smile curve centered at-the-money. However, it is not exactly symmetric. Changing ρ changes the degree of symmetry. In particular, positive correlation makes calls more expensive, negative correlation makes puts more expensive. This is illustrated in Figure 7.5. Note that for the model to yield a volatility skew, a typically observed volatility structure in equity markets, the correlation must be set to an unrealistically high absolute value.

[Figure: two panels ("Correlation and the smile", "Correlation and the skew") plotting implied volatility [%] against delta [%]]

Figure 7.5: Left panel: Effect of changing the correlation on the shape of the smile. For the red dashed smile with triangles ρ = 0, for the blue dashed smile with squares ρ = −0.15, and for the green dotted smile with rhombs ρ = 0.15. Right panel: In order for the model to yield a volatility skew, a typically observed volatility structure in equity markets, the correlation must be set to an unrealistically high value (with respect to the absolute value; here ρ = −0.5). STFhes05.xpl

7.4.2 Calibration Results

We are now ready to calibrate Heston's model to market data. We take the EUR/USD volatility surface on July 1, 2004 and fit the parameters in Heston's model according to the calibration scheme discussed earlier. The results are shown in Figures 7.6–7.8. Note that the fit is very good for maturities between three and eighteen months. Unfortunately, Heston's model does not perform satisfactorily for short maturities and extremely long maturities. For the former we recommend using a jump-diffusion model (Cont and Tankov, 2003; Martinez and Senge, 2002), for the latter a suitable long term FX model (Andreasen, 1997).

[Figure: four panels (1W, 1M, 2M, 3M market and Heston volatilities) plotting implied volatility [%] against delta [%]]

Figure 7.6: The market smile (solid black line with circles) on July 1, 2004 and the fit obtained with Heston's model (dotted red line with squares) for τ = 1 week (top left), 1 month (top right), 2 months (bottom left), and 3 months (bottom right). STFhes06.xpl

[Figure: four panels (6M, 1Y, 18M, 2Y market and Heston volatilities) plotting implied volatility [%] against delta [%]]

Figure 7.7: The market smile (solid black line with circles) on July 1, 2004 and the fit obtained with Heston's model (dotted red line with squares) for τ = 6 months (top left), 1 year (top right), 18 months (bottom left), and 2 years (bottom right). STFhes06.xpl

[Figure: two panels plotting the vol of vol and the correlation against tau [year]]

Figure 7.8: Term structure of the vol of vol (left panel) and correlation (right panel) in the Heston model calibrated to the EUR/USD surface as observed on July 1, 2004. STFhes06.xpl

Performing calibrations for different time slices of the volatility matrix produces different values of the parameters. This suggests a term structure of some parameters in Heston's model. Therefore, we need to generalize the Cox-Ingersoll-Ross process to the case of time-dependent parameters, i.e. we consider the process:

dv_t = κ(t){θ(t) − v_t} dt + σ(t)√(v_t) dW_t   (7.30)

for some nonnegative deterministic parameter functions σ(t), κ(t), and θ(t). The formula for the mean turns out to be:

E(v_t) = g(t) = v_0 e^{−K(t)} + ∫_0^t κ(s)θ(s) e^{K(s)−K(t)} ds,   (7.31)

with K(t) = ∫_0^t κ(s) ds. The result for the second moment is:

E(v_t²) = v_0² e^{−2K(t)} + ∫_0^t {2κ(s)θ(s) + σ²(s)} g(s) e^{2K(s)−2K(t)} ds,   (7.32)

and hence for the variance (after some algebra):

Var(v_t) = ∫_0^t σ²(s) g(s) e^{2K(s)−2K(t)} ds.   (7.33)
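As a quick numerical sanity check of (7.31) and (7.33), a sketch with our own function names (not part of the book's XploRe code) can evaluate the variance integral by quadrature for constant parameters and compare it with the well-known closed-form CIR variance:

```python
import math

def cir_mean(kappa, theta, v0, t):
    # (7.31) with constant parameters: g(t) = theta + (v0 - theta) * e^{-kappa*t}
    return theta + (v0 - theta) * math.exp(-kappa * t)

def cir_var_quadrature(kappa, theta, sigma, v0, t, n=20000):
    # (7.33): Var(v_t) = int_0^t sigma^2 g(s) e^{2*kappa*(s-t)} ds  (midpoint rule)
    h = t / n
    return sum(sigma**2 * cir_mean(kappa, theta, v0, s) * math.exp(2 * kappa * (s - t)) * h
               for s in (h * (i + 0.5) for i in range(n)))

# Closed-form CIR variance for comparison, using the smile parameters of Section 7.4.1.
kappa, theta, sigma, v0, t = 1.5, 0.015, 0.25, 0.01, 1.0
closed = (v0 * sigma**2 / kappa * (math.exp(-kappa * t) - math.exp(-2 * kappa * t))
          + theta * sigma**2 / (2 * kappa) * (1 - math.exp(-kappa * t))**2)
assert abs(cir_var_quadrature(kappa, theta, sigma, v0, t) - closed) < 1e-8
```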

The formula for the variance allows us to compute forward volatilities of variance explicitly. Assuming known values σ_{T1} and σ_{T2} for some times 0 < T1 < T2, we want to determine the forward volatility of variance σ_{T1,T2} which matches the corresponding variances, i.e.

∫_0^{T2} σ_{T2}² g(s) e^{2κ(s−T2)} ds = ∫_0^{T1} σ_{T1}² g(s) e^{2κ(s−T2)} ds + ∫_{T1}^{T2} σ_{T1,T2}² g(s) e^{2κ(s−T2)} ds.   (7.34)

The resulting forward volatility of variance is thus:

σ_{T1,T2}² = {σ_{T2}² H(T2) − σ_{T1}² H(T1)} / {H(T2) − H(T1)},   (7.35)

where

H(t) = ∫_0^t g(s) e^{2κs} ds = (θ/2κ) e^{2κt} + (1/κ)(v_0 − θ) e^{κt} + (1/κ)(θ/2 − v_0).   (7.36)
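Formulas (7.35)-(7.36) translate directly into code. The following sketch (our own function names, not the book's implementation) checks the obvious consistency property: a flat term structure of the vol of vol must reproduce itself as the forward value.

```python
import math

def H(t, kappa, theta, v0):
    # (7.36): H(t) = int_0^t g(s) e^{2*kappa*s} ds in closed form
    return (theta / (2 * kappa) * math.exp(2 * kappa * t)
            + (v0 - theta) / kappa * math.exp(kappa * t)
            + (theta / 2 - v0) / kappa)

def forward_vol_of_var(sig_t1, sig_t2, t1, t2, kappa, theta, v0):
    # (7.35): sigma^2_{T1,T2} = (sig_{T2}^2 H(T2) - sig_{T1}^2 H(T1)) / (H(T2) - H(T1))
    h1, h2 = H(t1, kappa, theta, v0), H(t2, kappa, theta, v0)
    return math.sqrt((sig_t2**2 * h2 - sig_t1**2 * h1) / (h2 - h1))

# Flat term structure: the forward vol of vol equals the spot value.
assert abs(forward_vol_of_var(0.25, 0.25, 0.5, 1.0, 1.5, 0.015, 0.01) - 0.25) < 1e-12
```

Note that H(0) = 0, as it must be for an integral starting at zero, which is a quick check of the closed form.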

Assuming known values ρ_{T1} and ρ_{T2} for some times 0 < T1 < T2, we want to determine the forward correlation coefficient ρ_{T1,T2} to be active between times T1 and T2 such that the covariance between the Brownian motions of the variance process and the exchange rate process agrees with the given values ρ_{T1} and ρ_{T2}. This problem has a simple answer, namely:

ρ_{T1,T2} = ρ_{T2},   T1 ≤ t ≤ T2.   (7.37)

This can be seen by writing the Heston model in the form:

dS_t = S_t (µ dt + √(v_t) dW_t^{(1)}),   (7.38)
dv_t = κ(θ − v_t) dt + ρσ√(v_t) dW_t^{(1)} + √(1 − ρ²) σ√(v_t) dW_t^{(2)},   (7.39)

for a pair of independent Brownian motions W^{(1)} and W^{(2)}. Observe that choosing the forward correlation coefficient as stated does not conflict with the computed forward volatility.


As we have seen, Heston’s model can be successfully applied to modeling the volatility smile of vanilla currency options. There are essentially three parameters to ﬁt, namely the long-run variance, which corresponds to the at-the-money level of the market smile, the vol of vol, which corresponds to the convexity of the smile (in the market often quoted as butterﬂies), and the correlation, which corresponds to the skew of the smile (in the market often quoted as risk reversals). It is this direct link of the model parameters to the market that makes the Heston model so attractive to front oﬃce users. The key application of the model is to calibrate it to vanilla options and afterward employ it for pricing exotics, like one-touch options, in either a ﬁnite diﬀerence grid or a Monte Carlo simulation (Hakala and Wystup, 2002; Wystup, 2003). Surprisingly, the results often coincide with the traders’ rule of thumb pricing method. This might also simply mean that a lot of traders are using the same model. After all, it is a matter of belief which model reﬂects the reality most suitably.

Bibliography


Andersen, L. and Andreasen, J. (2000). Jump-Diffusion Processes: Volatility Smile Fitting and Numerical Methods for Option Pricing, Review of Derivatives Research 4: 231–262.

Andreasen, J. (1997). A Gaussian Exchange Rate and Term Structure Model, Essays on Contingent Claim Pricing 97/2, PhD thesis.

Apel, T., Winkler, G., and Wystup, U. (2002). Valuation of options in Heston's stochastic volatility model using finite element methods, in J. Hakala, U. Wystup (eds.) Foreign Exchange Risk, Risk Books, London.

Bakshi, G., Cao, C. and Chen, Z. (1997). Empirical Performance of Alternative Option Pricing Models, Journal of Finance 52: 2003–2049.

Barndorff-Nielsen, O. E., Mikosch, T., and Resnick, S. (2001). Lévy Processes: Theory and Applications, Birkhäuser.

Bates, D. (1996). Jumps and Stochastic Volatility: Exchange Rate Processes Implicit in Deutsche Mark Options, Review of Financial Studies 9: 69–107.

Chernov, M. and Ghysels, E. (2000). Estimation of the Stochastic Volatility Models for the Purpose of Options Valuation, in Y. S. Abu-Mostafa, B. LeBaron, A. W. Lo, and A. S. Weigend (eds.) Computational Finance – Proceedings of the Sixth International Conference, MIT Press, Cambridge.

Cont, R. and Tankov, P. (2003). Financial Modelling with Jump Processes, Chapman & Hall/CRC.

Cox, J. C., Ingersoll, J. E. and Ross, S. A. (1985). A Theory of the Term Structure of Interest Rates, Econometrica 53: 385–407.

Derman, E. and Kani, I. (1994). Riding on a Smile, RISK 7(2): 32–39.

Dragulescu, A. A. and Yakovenko, V. M. (2002). Probability distribution of returns in the Heston model with stochastic volatility, Quantitative Finance 2: 443–453.

Dupire, B. (1994). Pricing with a Smile, RISK 7(1): 18–20.

Eberlein, E., Kallsen, J., and Kristen, J. (2003). Risk Management Based on Stochastic Volatility, Journal of Risk 5(2): 19–44.


Fengler, M. (2005). Semiparametric Modelling of Implied Volatility, Springer.

Garman, M. B. and Kohlhagen, S. W. (1983). Foreign currency option values, Journal of International Money and Finance 2: 231–237.

Hakala, J. and Wystup, U. (2002). Heston's Stochastic Volatility Model Applied to Foreign Exchange Options, in J. Hakala, U. Wystup (eds.) Foreign Exchange Risk, Risk Books, London.

Heston, S. (1993). A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options, Review of Financial Studies 6: 327–343.

Hull, J. and White, A. (1987). The Pricing of Options on Assets with Stochastic Volatilities, Journal of Finance 42: 281–300.

Kluge, T. (2002). Pricing derivatives in stochastic volatility models using the finite difference method, Diploma thesis, Chemnitz Technical University.

Martinez, M. and Senge, T. (2002). A Jump-Diffusion Model Applied to Foreign Exchange Markets, in J. Hakala, U. Wystup (eds.) Foreign Exchange Risk, Risk Books, London.

Merton, R. (1973). The Theory of Rational Option Pricing, Bell Journal of Economics and Management Science 4: 141–183.

Merton, R. (1976). Option Pricing when Underlying Stock Returns are Discontinuous, Journal of Financial Economics 3: 125–144.

Reiss, O. and Wystup, U. (2001). Computing Option Price Sensitivities Using Homogeneity, Journal of Derivatives 9(2): 41–53.

Rubinstein, M. (1994). Implied Binomial Trees, Journal of Finance 49: 771–818.

Stein, E. and Stein, J. (1991). Stock Price Distributions with Stochastic Volatility: An Analytic Approach, Review of Financial Studies 4(4): 727–752.

Tompkins, R. G. and D'Ecclesia, R. L. (2004). Unconditional Return Disturbances: A Non-Parametric Simulation Approach, Journal of Banking and Finance, to appear.

Weron, R. (2004). Computationally intensive Value at Risk calculations, in J. E. Gentle, W. Härdle, Y. Mori (eds.) Handbook of Computational Statistics, Springer.


Wystup, U. (2003). The market price of one-touch options in foreign exchange markets, Derivatives Week, 12(13).

8 FFT-based Option Pricing

Szymon Borak, Kai Detlefsen, and Wolfgang Härdle

8.1 Introduction

The Black-Scholes formula, one of the major breakthroughs of modern ﬁnance, allows for an easy and fast computation of option prices. But some of its assumptions, like constant volatility or log-normal distribution of asset prices, do not ﬁnd justiﬁcation in the markets. More complex models, which take into account the empirical facts, often lead to more computations and this time burden can become a severe problem when computation of many option prices is required, e.g. in calibration of the implied volatility surface. To overcome this problem Carr and Madan (1999) developed a fast method to compute option prices for a whole range of strikes. This method and its application are the theme of this chapter. In Section 8.2, we brieﬂy discuss the Merton, Heston, and Bates models concentrating on aspects relevant for the option pricing method. In the following section, we present the method of Carr and Madan which is based on the fast Fourier transform (FFT) and can be applied to a variety of models. We also consider brieﬂy some further developments and give a short introduction to the FFT algorithm. In the last section, we apply the method to the three analyzed models, check the results by Monte Carlo simulations and comment on some numerical issues.

8.2 Modern Pricing Models

The geometric Brownian motion (GBM) is the building block of modern finance. In particular, in the Black-Scholes model the underlying stock price is assumed to follow the GBM dynamics:

dS_t = r S_t dt + σ S_t dW_t,   (8.1)

which, applying Itô's lemma, can be written as:

S_t = S_0 exp{(r − σ²/2) t + σ W_t}.   (8.2)

The empirical facts, however, do not confirm the model assumptions. Financial returns exhibit much fatter tails than the Black-Scholes model postulates, see Chapter 1. Big returns larger than six standard deviations, which are quite common in practice, should appear less than once in a million years if the Black-Scholes framework were accurate. Squared returns, as a measure of volatility, display positive autocorrelation over several days, which contradicts the constant volatility assumption. Non-constant volatility can be observed as well in the option markets where "smiles" and "skews" in implied volatility occur. These properties of financial time series lead to more refined models. We introduce three such models in the following paragraphs.
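The order of magnitude of the six-standard-deviation claim is easy to verify under normality (assuming, as an illustration, 250 trading days per year):

```python
from statistics import NormalDist

# Two-sided probability of a daily return beyond six standard deviations
p = 2 * (1 - NormalDist().cdf(6.0))
# Expected waiting time in years, assuming 250 trading days per year
waiting_years = 1 / (p * 250)
assert waiting_years > 1_000_000  # i.e. less than once in a million years
```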

8.2.1 Merton Model

If an important piece of information about the company becomes public it may cause a sudden change in the company's stock price. The information usually comes at a random time and the size of its impact on the stock price may be treated as a random variable. To cope with these observations Merton (1976) proposed a model that allows discontinuous trajectories of asset prices. The model extends (8.1) by adding jumps to the stock price dynamics:

dS_t / S_t = r dt + σ dW_t + dZ_t,   (8.3)

where Z_t is a compound Poisson process with a log-normal distribution of jump sizes. The jumps follow a (homogeneous) Poisson process N_t with intensity λ (see Chapter 14), which is independent of W_t. The log-jump sizes Y_i ∼ N(µ, δ²) are i.i.d. random variables with mean µ and variance δ², which are independent of both N_t and W_t.


The model becomes incomplete which means that there are many possible ways to choose a risk-neutral measure such that the discounted price process is a martingale. Merton proposed to change the drift of the Wiener process and to leave the other ingredients unchanged. The asset price dynamics is then given by:

S_t = S_0 exp( µ_M t + σ W_t + Σ_{i=1}^{N_t} Y_i ),

where µ_M = r − σ²/2 − λ{exp(µ + δ²/2) − 1}. Jump components add mass to the tails of the returns distribution. Increasing δ adds mass to both tails, while a negative/positive µ implies relatively more mass in the left/right tail. For the purpose of Section 8.4 it is necessary to introduce the characteristic function (cf) of X_t = ln(S_t/S_0):

φ_{X_t}(z) = exp[ t{ −σ²z²/2 + iµ_M z + λ(e^{−δ²z²/2 + iµz} − 1) } ],   (8.4)

where X_t = µ_M t + σ W_t + Σ_{i=1}^{N_t} Y_i.

8.2.2 Heston Model

Another possible modification of (8.1) is to substitute the constant volatility parameter σ with a stochastic process. This leads to the so-called "stochastic volatility" models, where the price dynamics is driven by:

dS_t / S_t = r dt + √(v_t) dW_t,

where v_t is another unobservable stochastic process. There are many possible ways of choosing the variance process v_t. Hull and White (1987) proposed to use geometric Brownian motion:

dv_t / v_t = c_1 dt + c_2 dW_t.   (8.5)

However, geometric Brownian motion tends to increase exponentially which is an undesirable property for volatility. Volatility exhibits rather a mean reverting behavior. Therefore a model based on an Ornstein-Uhlenbeck-type process:

dv_t = κ(θ − v_t) dt + β dW_t,   (8.6)

was suggested by Stein and Stein (1991). This process, however, admits negative values of the variance v_t. These deficiencies were eliminated in a stochastic volatility model introduced by Heston (1993):

dS_t / S_t = r dt + √(v_t) dW_t^{(1)},
dv_t = κ(θ − v_t) dt + σ√(v_t) dW_t^{(2)},   (8.7)

where the two Brownian components W_t^{(1)} and W_t^{(2)} are correlated with rate ρ:

Cov(dW_t^{(1)}, dW_t^{(2)}) = ρ dt,   (8.8)

for details see Chapter 7. The term √(v_t) in equation (8.7) simply ensures positive volatility. When the process touches the zero bound the stochastic part becomes zero and the non-stochastic part will push it up. Parameter κ measures the speed of mean reversion, θ is the average level of volatility, and σ is the volatility of volatility. In (8.8) the correlation ρ is typically negative, which is consistent with empirical observations (Cont, 2001). This negative dependence between returns and volatility is known in the market as the "leverage effect."

The risk-neutral dynamics is given in a similar way as in the Black-Scholes model. For the logarithm of the asset price process X_t = ln(S_t/S_0) one obtains the equation:

dX_t = (r − v_t/2) dt + √(v_t) dW_t^{(1)}.

The cf is given by:

φ_{X_t}(z) = exp{ κθt(κ − iρσz)/σ² + iztr + izx_0 }
           / { cosh(γt/2) + (κ − iρσz)/γ · sinh(γt/2) }^{2κθ/σ²}
           · exp{ −(z² + iz)v_0 / (γ coth(γt/2) + κ − iρσz) },   (8.9)

where γ = √{σ²(z² + iz) + (κ − iρσz)²}, and x_0 and v_0 are the initial values for the log-price process and the volatility process, respectively.
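For reference, (8.9) can be coded directly. This is our own sketch (the book's implementations live in the accompanying XploRe files); the checks below are only the basic characteristic-function properties φ(0) = 1 and |φ(z)| ≤ 1.

```python
import cmath

def heston_cf(z, t, r, kappa, theta, sigma, rho, v0, x0=0.0):
    # Characteristic function (8.9) of X_t = ln(S_t/S_0) in the Heston model.
    beta = kappa - 1j * rho * sigma * z
    gamma = cmath.sqrt(sigma**2 * (z**2 + 1j * z) + beta**2)
    num = cmath.exp(kappa * theta * t * beta / sigma**2 + 1j * z * t * r + 1j * z * x0)
    den = (cmath.cosh(gamma * t / 2)
           + beta / gamma * cmath.sinh(gamma * t / 2)) ** (2 * kappa * theta / sigma**2)
    tail = cmath.exp(-(z**2 + 1j * z) * v0 / (gamma / cmath.tanh(gamma * t / 2) + beta))
    return num / den * tail

# phi(0) = 1 and |phi(z)| <= 1 must hold for any characteristic function.
phi0 = heston_cf(0.0, 1.0, 0.02, 1.5, 0.015, 0.25, 0.05, 0.01)
assert abs(phi0 - 1) < 1e-12
assert abs(heston_cf(2.0, 1.0, 0.02, 1.5, 0.015, 0.25, 0.05, 0.01)) <= 1.0
```

Note that γ coth(γt/2) is written as gamma / tanh(gamma*t/2), since cmath has no coth.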

8.2.3 Bates Model

The Merton and Heston approaches were combined by Bates (1996), who proposed a model with stochastic volatility and jumps:

dS_t / S_t = r dt + √(v_t) dW_t^{(1)} + dZ_t,   (8.10)
dv_t = κ(θ − v_t) dt + σ√(v_t) dW_t^{(2)},
Cov(dW_t^{(1)}, dW_t^{(2)}) = ρ dt.

As in (8.3), Z_t is a compound Poisson process with intensity λ and log-normal distribution of jump sizes independent of W_t^{(1)} (and W_t^{(2)}). If J denotes the jump size then ln(1 + J) ∼ N(ln(1 + k) − δ²/2, δ²) for some k. Under the risk-neutral probability one obtains the equation for the logarithm of the asset price:

dX_t = (r − λk − v_t/2) dt + √(v_t) dW_t^{(1)} + dZ̃_t,

where Z̃_t is a compound Poisson process with normal distribution of jump magnitudes. Since the jumps are independent of the diffusion part in (8.10), the characteristic function for the log-price process can be obtained as:

φ_{X_t}(z) = φ^D_{X_t}(z) φ^J_{X_t}(z),

where:

φ^D_{X_t}(z) = exp{ κθt(κ − iρσz)/σ² + izt(r − λk) + izx_0 }
             / { cosh(γt/2) + (κ − iρσz)/γ · sinh(γt/2) }^{2κθ/σ²}
             · exp{ −(z² + iz)v_0 / (γ coth(γt/2) + κ − iρσz) }   (8.11)

is the diffusion part cf and

φ^J_{X_t}(z) = exp[ tλ{ e^{−δ²z²/2 + i(ln(1+k) − δ²/2)z} − 1 } ]   (8.12)

is the jump part cf. Note that (8.9) and (8.11) are very similar. The difference lies in the shift λk (risk-neutral correction). Formula (8.12) has a similar structure as the jump part in (8.4), however, µ is substituted with ln(1 + k) − δ²/2.

8.3 Option Pricing with FFT

In the last section, three asset price models and their characteristic functions were presented. In this section, we describe a numerical approach for pricing options which utilizes the characteristic function of the underlying instrument’s price process. The approach has been introduced by Carr and Madan (1999) and is based on the FFT. The use of the FFT is motivated by two reasons. On the one hand, the algorithm oﬀers a speed advantage. This eﬀect is even boosted by the possibility of the pricing algorithm to calculate prices for a whole range of strikes. On the other hand, the cf of the log price is known and has a simple form for many models considered in literature, while the density is often not known in closed form. The approach assumes that the cf of the log-price is given analytically. The basic idea of the method is to develop an analytic expression for the Fourier transform of the option price and to get the price by Fourier inversion. As the Fourier transform and its inversion work for square-integrable functions (see Plancherel’s theorem, e.g. in Rudin, 1991) we do not consider directly the option price but a modiﬁcation of it.


Let C_T(k) denote the price of a European call option with maturity T and strike K = exp(k):

C_T(k) = ∫_k^∞ e^{−rT} (e^s − e^k) q_T(s) ds,

where q_T is the risk-neutral density of s_T = log S_T. The function C_T is not square-integrable because C_T(k) converges to S_0 for k → −∞. Hence, we consider a modified function:

c_T(k) = exp(αk) C_T(k),   (8.13)

which is square-integrable for a suitable α > 0. The choice of α may depend on the model for S_t. The Fourier transform of c_T is defined by:

ψ_T(v) = ∫_{−∞}^∞ e^{ivk} c_T(k) dk.

The expression for ψ_T can be computed directly after an interchange of integrals:

ψ_T(v) = ∫_{−∞}^∞ e^{ivk} ∫_k^∞ e^{αk} e^{−rT} (e^s − e^k) q_T(s) ds dk
       = ∫_{−∞}^∞ e^{−rT} q_T(s) ∫_{−∞}^s (e^{αk+s} − e^{(α+1)k}) e^{ivk} dk ds
       = ∫_{−∞}^∞ e^{−rT} q_T(s) { e^{(α+1+iv)s}/(α + iv) − e^{(α+1+iv)s}/(α + 1 + iv) } ds
       = e^{−rT} φ_T(v − (α + 1)i) / {α² + α − v² + i(2α + 1)v},

where φ_T is the Fourier transform of q_T. A sufficient condition for c_T to be square-integrable is given by ψ_T(0) being finite. This is equivalent to E(S_T^{α+1}) < ∞. A value α = 0.75 fulfills this condition for the models of Section 8.2. With this choice, we follow Schoutens et al. (2003) who found in an empirical study that this value leads to stable algorithms, i.e. the prices are well replicated for many model parameters. Now, we get the desired option price in terms of ψ_T using Fourier inversion:

C_T(k) = {exp(−αk)/π} ∫_0^∞ e^{−ivk} ψ(v) dv.


This integral can be computed numerically as:

C_T(k) ≈ {exp(−αk)/π} Σ_{j=0}^{N−1} e^{−i v_j k} ψ(v_j) η,   (8.14)

where v_j = ηj, j = 0, ..., N − 1, and η > 0 is the distance between the points of the integration grid. Lee (2004) has developed bounds for the sampling and truncation errors of this approximation. Formula (8.14) suggests to calculate the prices using the FFT, which is an efficient algorithm for computing the sums:

w_u = Σ_{j=0}^{N−1} e^{−i(2π/N)ju} x_j, for u = 0, ..., N − 1.   (8.15)

To see why this is the case see Example 1 below, which illustrates the basic idea of the FFT. In general, the strikes near the spot price are of interest because such options are traded most frequently. We consider thus an equidistant spacing of the log-strikes around the log spot price s_0:

k_u = −(1/2)Nζ + ζu + s_0, for u = 0, ..., N − 1,   (8.16)

where ζ > 0 denotes the distance between the log-strikes. Substituting these log-strikes yields for u = 0, ..., N − 1:

C_T(k_u) ≈ {exp(−αk_u)/π} Σ_{j=0}^{N−1} e^{−iζηju} e^{i(Nζ/2 − s_0)v_j} ψ(v_j) η.

Now, the FFT can be applied to

x_j = e^{i(Nζ/2 − s_0)v_j} ψ(v_j), for j = 0, ..., N − 1,   (8.17)

provided that ζη = 2π/N.

This constraint leads, however, to the following trade-oﬀ: the parameter N controls the computation time and thus is often determined by the computational setup. Hence the right hand side may be regarded as given or ﬁxed.
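Putting (8.13)-(8.17) together, a minimal NumPy sketch of the pricing algorithm might look as follows. We test it on the Black-Scholes model, whose cf is known in closed form, so the FFT output can be checked against the Black-Scholes formula. Note that (8.14) is a plain left-endpoint sum; we halve the weight of the first term (a trapezoid correction — Carr and Madan (1999) suggest Simpson's rule weights for further accuracy). All function names here are our own.

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

def carr_madan_calls(cf, s0, r, T, alpha=0.75, N=512, eta=0.25):
    # Returns strikes K_u = exp(k_u) on the grid (8.16) and call prices via (8.14)-(8.17).
    v = eta * np.arange(N)                          # integration grid v_j = eta * j
    zeta = 2 * np.pi / (N * eta)                    # log-strike spacing from zeta * eta = 2*pi/N
    k = -0.5 * N * zeta + zeta * np.arange(N) + s0  # log strikes (8.16)
    psi = np.exp(-r * T) * cf(v - (alpha + 1) * 1j) \
          / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
    x = np.exp(1j * (0.5 * N * zeta - s0) * v) * psi * eta
    x[0] *= 0.5                                     # trapezoid weight on the first term
    prices = np.exp(-alpha * k) / np.pi * np.real(np.fft.fft(x))
    return np.exp(k), prices

# Black-Scholes check: cf of s_T = log S_T under GBM, parameters as in Section 8.4.
S0, r, sigma, T = 100.0, 0.02, 0.2, 1.0
bs_cf = lambda z: np.exp(1j * z * (log(S0) + (r - 0.5 * sigma**2) * T)
                         - 0.5 * sigma**2 * z**2 * T)
strikes, prices = carr_madan_calls(bs_cf, log(S0), r, T)

d1 = (log(S0 / 100.0) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
bs_price = S0 * NormalDist().cdf(d1) - 100.0 * exp(-r * T) * NormalDist().cdf(d1 - sigma * sqrt(T))
u = 256  # with N = 512, k_256 = s_0, i.e. the at-the-money strike
assert abs(strikes[u] - 100.0) < 1e-8
assert abs(prices[u] - bs_price) < 0.05
```

Any model cf from Section 8.2 can be passed in place of bs_cf, which is precisely the appeal of the method.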


One would like to choose a small ζ in order to get many prices for strikes near the spot price. But the constraint then implies a big η, giving a coarse grid for integration. So we face a trade-off between accuracy and the number of interesting strikes.

Example 1. The FFT is an algorithm for computing (8.15). Its popularity stems from its remarkable speed: while a naive computation needs N² operations, the FFT requires only N log(N) steps. The algorithm was first published by Cooley and Tukey (1965) and since then has been continuously refined. We illustrate the original FFT algorithm for N = 4. Writing u and j as binary numbers:

u = 2u_1 + u_0,  j = 2j_1 + j_0,  with u_1, u_0, j_1, j_0 ∈ {0, 1},  u = (u_1, u_0),  j = (j_1, j_0),

the formula (8.15) is given as:

w_{(u_1,u_0)} = Σ_{j_0=0}^1 Σ_{j_1=0}^1 x_{(j_1,j_0)} W^{(2u_1+u_0)(2j_1+j_0)},

where W = e^{−2πi/N}. Because of W^{(2u_1+u_0)(2j_1+j_0)} = W^{2u_0 j_1} W^{(2u_1+u_0) j_0}, we get:

w_{(u_1,u_0)} = Σ_{j_0=0}^1 { Σ_{j_1=0}^1 x_{(j_1,j_0)} W^{2u_0 j_1} } W^{(2u_1+u_0) j_0}.

Now, the FFT can be described by the following three steps:

w¹_{(u_0,j_0)} = Σ_{j_1=0}^1 x_{(j_1,j_0)} W^{2u_0 j_1},
w²_{(u_0,u_1)} = Σ_{j_0=0}^1 w¹_{(u_0,j_0)} W^{(2u_1+u_0) j_0},
w_{(u_1,u_0)} = w²_{(u_0,u_1)}.

While a naive computation of (8.15) requires 4² = 16 complex multiplications, the FFT needs only 4 log₂(4) = 8 complex multiplications. This explains the speed of the FFT because complex multiplications are the most time consuming operations in this context.
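The three-step scheme of Example 1 can be written out literally and checked against a direct evaluation of (8.15). This is an illustrative sketch of ours, not optimized code:

```python
import cmath

def fft4(x):
    # Example 1 for N = 4; x[2*j1 + j0] plays the role of x_(j1, j0).
    W = cmath.exp(-2j * cmath.pi / 4)
    # Step 1: w1_(u0, j0) = sum over j1 of x_(j1, j0) * W^(2*u0*j1)
    w1 = {(u0, j0): sum(x[2 * j1 + j0] * W**(2 * u0 * j1) for j1 in (0, 1))
          for u0 in (0, 1) for j0 in (0, 1)}
    # Step 2: w2_(u0, u1) = sum over j0 of w1_(u0, j0) * W^((2*u1 + u0)*j0)
    w2 = {(u0, u1): sum(w1[(u0, j0)] * W**((2 * u1 + u0) * j0) for j0 in (0, 1))
          for u0 in (0, 1) for u1 in (0, 1)}
    # Step 3: w_u = w2_(u0, u1) with u = 2*u1 + u0 (note the bit reversal)
    return [w2[(u % 2, u // 2)] for u in range(4)]

x = [1.0, 2.0 - 1.0j, 0.5, -1.0]
direct = [sum(x[j] * cmath.exp(-2j * cmath.pi * j * u / 4) for j in range(4))
          for u in range(4)]
assert all(abs(a - b) < 1e-12 for a, b in zip(fft4(x), direct))
```

The bit reversal in the last step is exactly the index swap w_(u1,u0) = w²_(u0,u1) of Example 1.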

[Figure: implied volatility surface plotted against moneyness and time to maturity]

Figure 8.1: Implied volatility surface of DAX options on January 4, 1999. STFfft01.xpl

8.4 Applications

In this section, we apply the FFT option pricing algorithm of Section 8.3 to the models described in Section 8.2. Our aim is to demonstrate the remarkable speed of the FFT algorithm by comparing it to Monte Carlo simulations. Moreover, we present an application of the fast option pricing algorithm to the calibration of implied volatility (IV) surfaces. In Figure 8.1 we present the IV surface of DAX options on January 4, 1999, where the red points are the observed implied volatilities and the surface is fitted with the Nadaraya-Watson kernel estimator. For an analysis of IV surfaces consult Fengler et al. (2002) and Chapter 5.

In order to apply the FFT-based algorithm we need to know the characteristic function of the risk-neutral density, which has been described in Section 8.2 for the Merton, Heston, and Bates models. Moreover, we have to decide on the parameters α, N, and η of the algorithm. Schoutens et al. (2003) used α = 0.75 in a calibration procedure for the Eurostoxx 50 index data. We follow their approach and set α to this value. The computation time depends on the parameter N, which we set to 512. As the number of grid points of the numerical integration is also given by N, this parameter in addition determines the accuracy of the prices. For the parameter η, which determines the distance of the points of the integration grid, we use 0.25. A limited simulation study showed that the FFT algorithm is not sensitive to the choice of η, i.e. small changes in η gave similar results. In Section 8.3, we have already discussed the relation between these parameters.

For comparison, we computed the option prices also by Monte Carlo simulations with 500 time steps and 5000 repetitions. Such simulations are a convenient way to check the results of the FFT-based algorithm. The calculations are based on the following parameters: the price of the underlying asset is S_0 = 100, time to maturity T = 1, and the interest rate r = 0.02. For demonstration we choose the Heston model with parameters: κ = 10, θ = 0.2, σ = 0.7, ρ = −0.5, and v_0 = 0.2. To make our comparison more sound we also calculate prices with the analytic formula given in Chapter 7. In the left panel of Figure 8.2 we show the prices of European call options as a function of the strike price K. As the prices obtained with the analytical formula are close to the prices obtained with the FFT-based method and the Monte Carlo prices oscillate around them, this figure confirms that the pricing algorithm works correctly. The different values of the Monte Carlo prices are mainly due to the random nature of this technique. One needs to use even more time steps and repetitions to get better results.
The minor differences between the analytical and FFT-based prices come from the fact that the latter method gives the exact values only on the grid (8.16); between the grid points one has to use some interpolation method to approximate the price of the option. This problem can be more clearly observed in the right panel of Figure 8.2, where percentage differences between the analytical and FFT prices are presented. In order to preserve the great speed of the algorithm we simply use linear interpolation between the grid points. This approach, however, slightly overestimates the true prices since the call option price is a convex function of the strike. It can be clearly seen that near the grid points the prices obtained by both methods coincide, while between the grid points the FFT-based algorithm generates higher prices than the analytical solution. Although these methods yield similar results they need different computation times. In Table 8.1 we compare the speed of C++ implementations of the Monte Carlo and FFT methods. We calculate Monte Carlo prices for 20 different

194

8

(Analytical - FFT)/Analytical [%]

-0.1

MAPE

20

-0.2

15

option price

25

0

Option prices in the Heston model

FFT-based Option Pricing

80

90

100 strike price

110

80

120

90

100 strike price

110

120

Figure 8.2: Left panel: European call option prices obtained by Monte Carlo simulations (ﬁlled circles), analytical formula (crosses) and the FFT method (solid line) for the Heston model. Right panel: Percentage diﬀerences between analytical and FFT prices. STFfft02.xpl

Table 8.1: The computation times in seconds for the FFT method and the Monte Carlo method for three different models. Monte Carlo prices were calculated for 20 different strikes, with 500 time steps and 5000 repetitions.

  Model     FFT     MC
  Merton    0.01    31.25
  Heston    0.01    34.41
  Bates     0.01    37.53

strikes for each of the three models. The speed superiority of the FFT-based method is clearly visible. It is more than 3000 times faster than the Monte Carlo approach.

8.4 Applications

As an application of the fast pricing algorithm we consider the problem of model calibration. Given option prices observed in the market, we look for model parameters that can reproduce the data well. Normally, the market prices are given by an implied volatility surface which represents the implied volatility of option prices for different strikes and maturities. The calibration can then be done for the implied volatilities or for the option prices; this decision depends on the problem considered. As a measure of fit one can use the Mean Squared Error (MSE):

    MSE = \frac{1}{\text{number of options}} \sum_{\text{options}} \frac{(\text{market price} - \text{model price})^2}{\text{market price}^2},    (8.18)

but other choices like the Mean Absolute Percentage Error (MAPE) or Mean Absolute Error (MAE) are also possible:

    MAPE = \frac{1}{\text{number of options}} \sum_{\text{options}} \frac{|\text{market price} - \text{model price}|}{\text{market price}},

    MAE = \frac{1}{\text{number of options}} \sum_{\text{options}} |\text{market price} - \text{model price}|.
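These fit measures are one-liners in code; a small array-based sketch (function names ours):

```python
import numpy as np

# The three fit measures: MSE as in (8.18), plus MAPE and MAE.
# `market` and `model` are arrays of option prices over the calibration set.
def mse(market, model):
    return np.mean((market - model) ** 2 / market ** 2)

def mape(market, model):
    return np.mean(np.abs(market - model) / market)

def mae(market, model):
    return np.mean(np.abs(market - model))
```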

Moreover, the error function can be modified by weights if some regions of the implied volatility surface are more important or some observations should be ignored completely. The calibration results in a minimization problem for the error function MSE. This optimization can be carried out by different algorithms like simulated annealing, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Nelder-Mead simplex algorithm, or Markov Chain Monte Carlo methods. An overview of optimization methods can be found in Čížková (2003). As minimization algorithms normally have to evaluate the function to be minimized many times, an efficient algorithm for the option prices is essential. The FFT-based algorithm is fairly efficient, as is shown in Table 8.1. Moreover, it returns prices for a whole range of strikes at one maturity. This is an additional advantage, because the calibration of an implied volatility surface requires prices for many different strikes and maturities. As an example we present the results for the Bates model calibrated to the IV surface of DAX options on January 4, 1999. The data set, which can be found in MD*Base, contains 236 option prices for 7 maturities (for each maturity there is a different number of strikes). We minimize (8.18) with respect to the 8 parameters of the Bates model: λ, δ, k, κ, θ, σ, ρ, v0. Since the function (8.18)


Figure 8.3: The observed implied volatilities of DAX options on January 4, 1999 (circles) and ﬁtted Bates model (line) for 4 diﬀerent maturity strings. STFfft03.xpl

has many local minima, we use the simulated annealing minimization method, which offers the advantage of searching for a global minimum, combined with the Nelder-Mead simplex algorithm. As a result we obtain the following estimates for the model parameters: \hat{\lambda} = 0.13, \hat{\delta} = 0.0004, \hat{k} = -0.03, \hat{\kappa} = 4.23, \hat{\theta} = 0.17, \hat{\sigma} = 1.39, \hat{\rho} = -0.55, \hat{v}_0 = 0.10, and the value of MSE is 0.00381. In Figure 8.3 we show the resulting fits of the Bates model to the data for 4 different maturities. The red circles are implied volatilities observed in the market at times to maturity T = 0.21, 0.46, 0.71, 0.96, and the blue lines are implied volatilities calculated from the Bates model with the calibrated parameters. In the calibration we used all data points. As the FFT-based algorithm computes prices for a whole range of strikes, the number of maturities used has the biggest impact on the speed of calibration, while the total number of observations has only a minor influence.

On the one hand, the Carr-Madan algorithm offers a great speed advantage, but on the other hand its applications are restricted to European options, while the Monte Carlo approach works for a wider class of derivatives, including path-dependent options. Thus, the FFT-based approach has been modified in different ways. The accuracy can be improved by using better integration rules. Carr and Madan (1999) also considered the Simpson rule, which leads – taking (8.17) into account – to the following formula for the option prices:

    C_T(k_u) \approx \frac{\exp(-\alpha k_u)}{\pi} \sum_{j=0}^{N-1} e^{-i\zeta\eta ju}\, e^{i(\frac{1}{2}N\zeta - s_0)v_j}\, \psi(v_j)\, \frac{\eta}{3}\left\{3 + (-1)^j - I(j = 0)\right\}.

This representation again allows a direct application of the FFT to compute the sum. An alternative to the original Carr-Madan approach is to consider, instead of (8.13), other modifications of the call prices. For example, Cont and Tankov (2004) used the (modified) time value of the options:

    \tilde{c}_T(k) = C_T(k) - \max(1 - e^{k-rT}, 0).

Although this method also requires the existence of an α satisfying E(S_T^{\alpha+1}) < \infty, the parameter does not enter into the final pricing formula, so it is not necessary to choose any value for α; this makes the approach easier to implement. On the other hand, option price surfaces obtained with this method often have a peak for small maturities and strikes near the spot. This special form differs from the surfaces typically observed in the market. The peak results from the non-differentiability of the intrinsic value at the spot. Hence, other modifications of the option prices have been considered that make the modified option prices differentiable (Cont and Tankov, 2004).
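Stepping back to the calibration problem discussed above, its structure can be sketched with a drastically simplified one-parameter example: a Black-Scholes volatility is fitted to synthetic "market" prices by minimizing the MSE (8.18) with a golden-section search. The chapter's actual calibration runs simulated annealing plus Nelder-Mead over the 8 Bates parameters; every name below is illustrative.

```python
import math

# One-parameter calibration sketch: fit a Black-Scholes volatility to
# synthetic market prices by minimizing the MSE (8.18).

def bs_call(S0, K, T, r, sigma):
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def calib_error(sigma, market, strikes, S0=100.0, T=1.0, r=0.02):
    # MSE of (8.18) over the calibration set
    return sum((m - bs_call(S0, K, T, r, sigma)) ** 2 / m ** 2
               for m, K in zip(market, strikes)) / len(market)

def calibrate(market, strikes, lo=0.05, hi=1.0, tol=1e-6):
    # golden-section search; enough for this unimodal one-parameter error
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if calib_error(c, market, strikes) < calib_error(d, market, strikes):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

strikes = [90.0, 100.0, 110.0]
market = [bs_call(100.0, K, 1.0, 0.02, 0.3) for K in strikes]  # synthetic data
sigma_hat = calibrate(market, strikes)                         # recovers 0.3
```

The multi-parameter case replaces the line search by a global-plus-local optimizer, but the error-function loop is the same, which is why a fast pricer matters so much.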


The calculation of option prices by the FFT-based algorithm introduces several types of errors. The truncation error results from replacing the infinite upper integration limit by a finite number. The sampling error comes from evaluating the integrand only at grid points. Lee (2004) gives bounds for these errors and discusses error minimization strategies. Moreover, he presents and unifies extensions of the original Carr-Madan approach to other payoff classes. Besides the truncation and sampling errors, the implementation of the algorithm often leads to severe roundoff errors because of the complex form of the characteristic function for some models. To avoid this problem, which often occurs for long maturities, it is necessary to transform the characteristic function. Concluding, we can say that the FFT-based option pricing method is a technique that can be used whenever time constraints are important. However, in order to avoid severe pricing errors, its application requires careful decisions regarding the choice of the parameters and the particular algorithm steps used.

Bibliography


Bates, D. (1996). Jumps and Stochastic Volatility: Exchange Rate Processes Implicit in Deutsche Mark Options, Review of Financial Studies 9: 69–107.

Carr, P. and Madan, D. (1999). Option valuation using the fast Fourier transform, Journal of Computational Finance 2: 61–73.

Čížková, L. (2003). Numerical Optimization Methods in Econometrics, in J. M. Rodriguez Poo (ed.), Computer-Aided Introduction to Econometrics, Springer-Verlag, Berlin.

Cont, R. (2001). Empirical properties of asset returns: stylized facts and statistical issues, Quantitative Finance 1: 1–14.

Cont, R. and Tankov, P. (2004). Financial Modelling With Jump Processes, Chapman & Hall/CRC.

Cooley, J. and Tukey, J. (1965). An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation 19: 297–301.

Fengler, M., Härdle, W. and Schmidt, P. (2002). The Analysis of Implied Volatilities, in W. Härdle, T. Kleinow, G. Stahl (eds.), Applied Quantitative Finance, Springer-Verlag, Berlin.

Heston, S. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options, Review of Financial Studies 6: 327–343.

Hull, J. and White, A. (1987). The Pricing of Options on Assets with Stochastic Volatilities, Journal of Finance 42: 281–300.

Lee, R. (2004). Option pricing by transform methods: extensions, unification and error control, Journal of Computational Finance 7.

Merton, R. (1976). Option pricing when underlying stock returns are discontinuous, Journal of Financial Economics 3: 125–144.

Rudin, W. (1991). Functional Analysis, McGraw-Hill.

Schoutens, W., Simons, E. and Tistaert, J. (2003). A Perfect Calibration! Now What?, UCS Technical Report, Catholic University Leuven.


Stein, E. and Stein, J. (1991). Stock price distributions with stochastic volatility: an analytic approach, Review of Financial Studies 4: 727–752.

9 Valuation of Mortgage Backed Securities: from Optimality to Reality

Nicolas Gaussel and Julien Tamine

9.1 Introduction

Mortgage backed securities (MBS) are financial assets backed by a pool of mortgages. Investors buy a part of the pool's principal and receive the corresponding mortgage cash flows. The pooled mortgages generally offer the borrower the opportunity to prepay part or all of the remaining principal before maturity. This prepayment policy is the key point for pricing and hedging MBS. In the existing literature, two broad directions have been explored. On the one hand, the mainstream approach relies on statistical inference: the observed prepayment policy is statistically explained by the level of interest rates and some parameters of the underlying mortgage portfolio, see Schwartz and Torous (1989), Boudoukh et al. (1997). Dedicated to pricing and hedging, these approaches do not address the rationality behind the observed prepayment policy. On the other hand, authors like Nielsen and Poulsen (2002) directly address the problem of optimal prepayment within consumption-based models. This normative approach gives insights into the determinants of prepayment and relies on macro-economic variables. However, it appears to be of little practical use due to the numerous economic variables involved. In this chapter, we propose a third way. The optimality problem is addressed from an unconstrained, financial point of view. Using arguments similar to those for the early exercise of American derivatives, we identify the optimal interest rate level for prepayment. Building on this frontier, we construct a family of


prepayment policies based on the spread between interest rates and the optimal prepayment level. The MBS are then priced as the expected value of their forthcoming discounted cash ﬂows, which is in line with classical methodology for ﬂow product valuation.

Mortgage-specific characteristics

Mortgage cash flows differ from those of a classical bond since their coupon is made partly of interest and partly of principal refunding. Despite this difference in cash-flow structure, the prepayment option embedded in the mortgage is very similar to the callability feature of a bond. Under classical assumptions on the bond market, an optimal time of early exercise can be exhibited, depending on the term structure and on the volatility of interest rates. Such models predict a rise in exercise probability during low interest rate periods, increasing the value of the callability option attached to the bond. These conclusions are supported by empirical evidence. Historical market prices of a non-callable and a callable General Electric bond with the same maturity and coupon are displayed in Figure 9.1; the 10-year US government rate is displayed on the secondary axis. During this period of sharply decreasing interest rates, the value of the non-callable bond rose much more than that of the callable one.

It may be tempting to adapt the callable bond pricing framework to mortgages. Nevertheless, statistical results prevent such a direct extrapolation. Though most mortgagors prepay at low interest rate levels, a significant percentage chooses to go on refunding their loan, no matter how attractive the refinancing conditions are. This phenomenon is often called burnout, Schwartz and Torous (1989). Conversely, some mortgagors choose to exercise their prepayment right at high interest rate levels. Such observations reveal that mortgagors are individuals whose behavior is partly determined by exogenous factors.
Economic studies suggest that the major motivations for early prepayment can be classified within three broad categories, Hayre (1999):

• structural motivations accounting for the occurrence of prepayment during high interest rate periods: unexpected inheritance; a professional move involving house sale (if residential mortgages are considered); insurance prepayment after the mortgagor's death;


Figure 9.1: Historical prices of the 10-year US government bond (solid line, right axis) and a non-callable (dotted line, left axis) and a callable (dashed line, left axis) General Electric bond. STFmbs01.xpl

• specific characteristics explaining burnout: lack of access to interest rate information;

• refunding motivations in accordance with classical financial theory.

Based on these considerations, the subsequent analysis is divided into three parts. Section 2 is concerned with the determination of the optimal time for prepaying a mortgage in an ideal market where interest rates would be the only decision variable. This section sheds light on the influence of interest rates on the refinancing incentive. In Section 3, the MBS price is expressed as the expected value of its future cash flows under some prepayment policy. A numerical procedure based on the resolution of a two-dimensional partial differential equation is put forward. The insights provided by our approach are illustrated through numerical examples.

9.2 Optimally Prepaid Mortgage

9.2.1 Financial Characteristics and Cash Flow Analysis

For the sake of simplicity, all cash flows are assumed to be paid continuously in time. Given a maturity T, the mortgage is defined by a fixed actuarial coupon rate c and a principal N. If the mortgagor chooses not to prepay, he refunds a continuous flow φ dt, related to the maturity T and the coupon rate c through the initial parity condition

    N = \int_0^T \phi \exp(-cs)\, ds,    (9.1)

where

    \phi = N\, \frac{c}{1 - \exp(-cT)}.

As opposed to in fine bonds, where intermediary cash flows are only made of interest and the principal is fully redeemed at maturity, this flow includes payments of both interest and principal. At time t ∈ [0, T] the remaining principal K_t is contractually defined as the forthcoming cash flows discounted at the initial actuarial coupon rate:

    K_t \stackrel{def}{=} \int_t^T \phi \exp\{-c(s-t)\}\, ds
        = \frac{\phi}{c}\, [1 - \exp\{-c(T-t)\}]
        = N\, \frac{1 - \exp\{-c(T-t)\}}{1 - \exp(-cT)}.
Early prepayment at date t means paying Kt to the bank. In ﬁnancial terms, the mortgagor owns an American prepayment option with strike Kt . The varying proportion between interest and capital in the ﬂow φ is displayed in Figure 9.2.
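The parity condition (9.1) and the remaining principal K_t are straightforward to implement; a small sketch (function names ours):

```python
import math

# Cash-flow identities of a continuously refunded fixed-rate mortgage:
# the flow phi implied by the parity condition (9.1) and the remaining
# principal K_t discounted at the coupon rate c.
def flow(N, c, T):
    return N * c / (1.0 - math.exp(-c * T))

def remaining_principal(t, N, c, T):
    return N * (1.0 - math.exp(-c * (T - t))) / (1.0 - math.exp(-c * T))
```

One can check that K_0 = N and K_T = 0, and that integrating φ e^{-cs} over [0, T] gives back the principal N.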

9.2.2 Optimal Behavior and Price

The financial model

Given its callability feature, the mortgage is a fixed income derivative product. Its valuation must therefore be grounded on the definition of a particular


Figure 9.2: The proportion between interest and principal varying in time.

interest rate model. Since many models can be seen as good candidates, we need to specify some additional features. First, this model should be arbitrage free and consistent with the observed forward term structure. This amounts to selecting a standard Heath-Jarrow-Morton (HJM) type approach. Second, we specify an additional Markovian structure for tractability purposes. While our theoretical analysis is valid for any Markovian HJM model, all (numerical) results will be presented, for simplicity, using a one-factor enhanced Vasicek model (Priaulet, 2000); see Martellini and Priaulet (2000) for practical uses or Björk (1998) for more details on theoretical grounds. Let us quickly recap its characteristics.

Assumption A. The short rate process r_t is defined via an Ornstein-Uhlenbeck process:

    dr_t = \lambda\{\theta(t) - r_t\}\, dt + \sigma\, dW_t,    (9.2)

with

    \theta(t) = \frac{\partial}{\partial t} f(0,t) + f(0,t) + \sigma^2\, \frac{1 - \exp(-2\lambda t)}{2\lambda},

and f(0,t) being the initial instantaneous forward curve. The parameters

σ and λ control the volatility σ(τ) of forward rates of maturity τ,

    \sigma(\tau) = \frac{\sigma\{1 - \exp(-\lambda\tau)\}}{\lambda},

and allow for a rough calibration to derivative prices. Note that in this enhanced Vasicek framework, all bond prices can be written in closed form, Martellini and Priaulet (2000).

The optimal stopping problem

The theory of optimal stopping is well known, Pham (2003). It is widely used in mathematical finance for the valuation of American contracts, Musiela and Rutkowski (1997). In the sequel, the optimally prepaid mortgage price is explicitly calculated as the solution of an optimal stopping problem. Let τ ∈ [t, T] be the stopping time at which mortgagors choose to prepay. Cash flows are of two kinds. If τ < T, mortgagors keep on paying φ dt continuously, with discounted (random) value equal to

    \int_t^{\min(\tau,T)} \phi \exp\left(-\int_t^s r_u\, du\right) ds.

At date τ, if τ < T, the remaining capital K_τ must be paid, implying a discounted cash flow equal to

    I(\tau < T) \exp\left(-\int_t^\tau r_u\, du\right) K_\tau.

The mortgagor will choose his prepayment time τ in order to minimize the risk-neutral expected value of these future discounted cash flows. The value of the optimally prepaid mortgage is then obtained as

    V_t = \inf_{\tau > t} E\left[ \int_t^{\min(\tau,T)} \phi \exp\left(-\int_t^s r_u\, du\right) ds + I(\tau < T) \exp\left(-\int_t^\tau r_u\, du\right) K_\tau \,\Big|\, \mathcal{F}_t \right],    (9.3)

where Ft is the relevant ﬁltration. Since rt is Markovian, Vt can be expressed as a function of the current level of the state variables and reduces to V (t, rt ) . The


problem in (9.3) is therefore a standard Markovian optimal stopping problem (Pham, 2003). At time t, the mortgagor's decision whether to prepay or not is based on the following arbitrage: the cost of prepaying immediately (τ = t) is equal to the current value of the remaining mortgage principal K_t. This cost has to be compared to the expected cost V(t, r_t) of going on refunding the continuous flow φ dt and keeping the option to prepay until later. Obviously, the optimal mortgagor should opt for prepayment if

    V(t, r_t) \ge K_t.    (9.4)

Conversely, within the non-prepayment region, the mortgage can be sold or bought: its price must be the solution of the standard Black-Scholes partial differential equation. The following proposition sums up these intuitions. Its proof uses the link between conditional expectations and partial differential equations called the "Feynman-Kac analysis."

PROPOSITION 9.1 Under Assumption A, V(t, r_t) is the solution of the partial differential equation

    \max\left\{ \frac{\partial V(t,r)}{\partial t} + \mu(t,r)\frac{\partial V(t,r)}{\partial r} + \frac{1}{2}\sigma^2\frac{\partial^2 V(t,r)}{\partial r^2} - rV(t,r) + \phi,\; V(t,r) - K_t \right\} = 0,    (9.5)

    V(T, r) = 0,    (9.6)

where \mu(t,r) \stackrel{def}{=} \lambda\{\theta(t) - r\} and σ are fixed by Assumption A.

Proof: We only give a sketch for constructing a solution. The optimal stopping time problem at time t is given by

    V_t = \inf_\tau E\Big[ \int_t^{\min(\tau,T)} \phi \exp\Big(-\int_t^s r_u\, du\Big)\, ds    (9.7)
          \qquad + I(\tau < T) \exp\Big(-\int_t^{\min(\tau,T)} r_u\, du\Big) K_\tau \,\Big|\, \mathcal{F}_t \Big].    (9.8)

The Markovian property allows us to replace conditioning on \mathcal{F}_t by conditioning on r_t. Thus, V_t is a function of (t, r_t). If the mortgagor does not prepay


during the time interval [t, t+h], h > 0, the discounted cash flows refunded in this interval equal

    \int_t^{t+h} \exp\left(-\int_t^s r_u\, du\right) \phi\, ds.

The value at time t+h of the remaining cash flows to be paid by the mortgagor is equal to V(t+h, r_{t+h}). Its discounted value at time t is

    \exp\left(-\int_t^{t+h} r_u\, du\right) V(t+h, r_{t+h}).

Finally, the expected value of the cash flows to be paid for a mortgage not prepaid on the interval [t, t+h] equals

    E\left[ \int_t^{t+h} \exp\left(-\int_t^s r_u\, du\right) \phi\, ds + \exp\left(-\int_t^{t+h} r_u\, du\right) V(t+h, r_{t+h}) \,\Big|\, \mathcal{F}_t \right].

Not prepaying on the time interval [t, t+h] may not be optimal, so that

    V(t, r_t) \le E\left[ \int_t^{t+h} \exp\left(-\int_t^s r_u\, du\right) \phi\, ds + \exp\left(-\int_t^{t+h} r_u\, du\right) V(t+h, r_{t+h}) \,\Big|\, \mathcal{F}_t \right].

Assuming regularity conditions on V, a classical Taylor expansion yields

    0 \le \frac{\partial V(t,r_t)}{\partial t} + \mu(t,r)\frac{\partial V(t,r_t)}{\partial r} + \frac{1}{2}\sigma^2\frac{\partial^2 V(t,r_t)}{\partial r^2} - rV(t,r_t) + \phi.    (9.9)

Furthermore, using the definition (9.7), the inequality V(t, r_t) \le K_t is satisfied. Assuming this inequality to be strictly satisfied, the stopping time τ is defined by τ = \inf\{s \ge t : V(s, r_s) = K_s\}. On the time interval [t, \min\{t+h, \tau\}], the non-prepayment strategy is optimal since V(s, r_s) < K_s. As a consequence:

    V(t, r_t) = E\left[ \int_t^{t+h} \exp\left(-\int_t^s r_u\, du\right) \phi\, ds + \exp\left(-\int_t^{t+h} r_u\, du\right) V(t+h, r_{t+h}) \,\Big|\, \mathcal{F}_t \right].


Figure 9.3: The sensitivity of the optimal prepayment frontier to the forward-rate slope: a steeper forward-rate curve leads to the dotted frontier, a less steep forward-rate curve to the solid frontier.

Letting h → 0 and applying Itô's lemma as previously yields

    0 = \frac{\partial V(t,r_t)}{\partial t} + \mu(t,r)\frac{\partial V(t,r_t)}{\partial r} + \frac{1}{2}\sigma^2\frac{\partial^2 V(t,r_t)}{\partial r^2} - rV(t,r_t) + \phi    (9.10)

as long as V(t, r_t) < K_t. Formula (9.9) combined with (9.10) implies

    \max\left\{ \frac{\partial V_t}{\partial t} + \mu(t,r)\frac{\partial V_t}{\partial r} + \frac{1}{2}\sigma^2\frac{\partial^2 V_t}{\partial r^2} - rV_t + \phi,\; V_t - K_t \right\} = 0.


Figure 9.4: The sensitivity of the optimal prepayment frontier to interest-rate volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.

Discussion and visualization

In this one-dimensional framework, the prepayment condition (9.4) defines a two-dimensional no-prepayment region D = \{(t, r) : V_t < K_t\}. In particular, it includes the set \{(t, r) : r_t \ge c\}. The optimal stopping theory provides a characterization of D, Pham (2003). In fact, there exists an optimal, time-dependent stopping frontier r_t^{opt} such that D = \{(t, r) : r_t > r_t^{opt}\}. The price V_t and the optimal frontier r_t^{opt} are jointly determined: this is a so-called free boundary problem. It can only be calculated via a standard


Figure 9.5: The sensitivity of the time value of the embedded option to interest-rate volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.

finite difference approach, Wilmott (2000). An example is displayed in Figure 9.3. Interestingly enough, the optimal frontier heavily depends on the time to maturity and may be far away from the mortgage coupon c. Both its shape and its level r_t^{opt} strongly depend on market conditions. Figure 9.3 illustrates the positive impact of the slope of the curve on the slope of the optimal frontier. The influence of implicit market volatility on the optimal prepayment frontier is displayed in Figure 9.4. As expected, the more randomness σ puts around future rates, the stronger the incentive for mortgagors to delay their prepayment in time. In the language of derivatives, the time value of the embedded option increases, see Figure 9.5. All these effects are summed up in one key indicator: the duration of the optimally prepaid mortgage. Defined as the sensitivity to the variation of interest rates, this indicator has two interesting interpretations. From an actuarial point of view, it represents the average expected maturity of the future discounted cash flows. From a hedging point of view, duration may be interpreted as the "delta" of the mortgage with respect to interest rates.


Figure 9.6: The sensitivity of the duration to interest-rate volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.

If the interest rate is deep inside the continuation region, the expected time before prepayment is large and the duration increases. As displayed in Figure 9.6, the higher the volatility, the higher the duration. The preceding discussion indicates that the optimally prepaid mortgage can be understood as a standard interest rate derivative, allowing one to get an asymmetric exposure to future interest rate shifts.
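The free-boundary computation of this section can be sketched with an explicit finite-difference scheme: backward induction from V(T, r) = 0, projecting onto the prepayment constraint V ≤ K_t at every step. This is a minimal illustration under simplifying assumptions (a constant θ instead of the curve-fitted θ(t), crude zero-gradient boundaries in r); all parameter values and names are ours.

```python
import numpy as np

# Explicit finite-difference sketch for the variational inequality
# (9.5)-(9.6). Assumes constant theta (the text fits theta(t) to the
# initial forward curve) and zero-gradient boundaries in r.
def mortgage_value(c=0.05, N=1.0, T=15.0, lam=0.2, theta=0.05, sigma=0.008,
                   n_r=200, n_t=1500, r_max=0.20):
    dr, dt = r_max / n_r, T / n_t
    phi = N * c / (1.0 - np.exp(-c * T))          # refunding flow (9.1)
    r = np.linspace(0.0, r_max, n_r + 1)
    V = np.zeros(n_r + 1)                         # terminal condition V(T, r) = 0
    for n in range(n_t):
        t = T - (n + 1) * dt
        K_t = N * (1.0 - np.exp(-c * (T - t))) / (1.0 - np.exp(-c * T))
        mu = lam * (theta - r[1:-1])
        LV = (mu * (V[2:] - V[:-2]) / (2.0 * dr)
              + 0.5 * sigma ** 2 * (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dr ** 2
              - r[1:-1] * V[1:-1] + phi)
        Vi = np.minimum(V[1:-1] + dt * LV, K_t)   # prepayment projection V <= K_t
        V = np.concatenate(([Vi[0]], Vi, [Vi[-1]]))
    return r, V
```

At low rates the constraint binds (V = K_t, immediate prepayment); at high rates V stays strictly below K_t, and the crossing point at each time step traces out the frontier r_t^{opt}.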

9.3 Valuation of Mortgage Backed Securities

As confirmed by empirical evidence, mortgagors do not prepay optimally, Hayre (1999). Nielsen and Poulsen (2002) provide important insights into the constraints and information asymmetries faced by mortgagors. Although bound by these constraints, individuals aim at minimizing their expected future cash flows. Thus, it is natural to root their prepayment policy in the


optimal one. Let d_t \stackrel{def}{=} r_t^{opt} - r_t be the distance between the interest rate and the optimal prepayment frontier. The optimal policy leads to a 100% prepayment of the mortgage if d_t > 0 and a 0% prepayment otherwise: it can thus be seen as a Heaviside function of d_t. When the determinants of mortgagors' behavior cannot be observed, this behavior can be modelled as a noisy version of the optimal one. It is thus natural to look for the effective prepayment policy in the form of a distribution function of d_t, which introduces dispersion around the optimal frontier.

9.3.1 Generic Framework

A pool of mortgages with similar financial characteristics is now considered. This homogeneity assumption is in accordance with market practice: for ease of monitoring and investors' analysis, mortgages with the same coupon rate and the same maturity are chosen for pooling. Without loss of generality, the MBS can be assimilated to a single loan with coupon c and maturity T, issued with principal N normalized to 1. Let F_t be the proportion of unprepaid shares at date t. In the optimal approach, the prepayment policy follows an "all or nothing" strategy, with F_t being worth 0 or 1. When practical policies are involved, F_t is a positive process decreasing in time from 1 to 0. One can write

    F_t \stackrel{def}{=} \exp(-\Pi_t), \qquad F_0 = 1,

where, in probabilistic terms, \Pi_t is the hazard process associated with the refunding dynamics. The size of the underlying mortgage pool gives incentives to model \Pi_t as an absolutely continuous process. In mathematical terms, this amounts to assuming the existence of an intensity process \pi_t such that d\Pi_t = \pi_t\, dt, or equivalently

    F_t = \exp\left(-\int_0^t \pi_u\, du\right).    (9.11)

In this framework, the main point lies in the functional form of the refunding intensity π_t. As will be made precise in the next subsection, π_t must be seen as


a function of d_t rather than directly of r_t. The valuation consists in discounting a continuous sequence of cash flows. Given the prepayment policy π_t, the MBS cash flows during [t, T] can be divided into two parts. Firstly,

    \int_t^T \exp\left(-\int_t^s r_u\, du\right) F_s \phi\, ds

is the discounted value of the continuous flows φ refunded on the outstanding MBS principal F_s. Secondly,

    \int_t^T \exp\left(-\int_t^s r_u\, du\right) \pi_s F_s K(s)\, ds

is the discounted value of the principal prepaid at time s. The MBS value equals the risk-neutral expectation of these cash flows:

    P(t, r_t, F_t) = E\left[ \int_t^T \exp\left(-\int_t^s r_u\, du\right) \{F_s \phi + \pi_s F_s K(s)\}\, ds \right].    (9.12)

Because π_t is chosen as a function of d_t, the explicit computation of P involves the knowledge of r_t^{opt}. As opposed to the classical approach, a simple Monte Carlo technique cannot do the job. P can be characterized as the solution of a standard two-dimensional partial differential equation. In our one-dimensional framework, this means that:

PROPOSITION 9.2 Under Assumption A, the MBS price P(t, r_t, F_t) solves the partial differential equation

    \frac{\partial P(t,r,F)}{\partial t} + \mu(t,r)\frac{\partial P(t,r,F)}{\partial r} - \pi(t,r) F \frac{\partial P(t,r,F)}{\partial F} + \frac{1}{2}\sigma^2\frac{\partial^2 P(t,r,F)}{\partial r^2} + F\{\phi + \pi(t,r) K(t)\} - rP(t,r,F) = 0,    (9.13)

    P(T, r, F) = 0,

where \mu(t,r) \stackrel{def}{=} \lambda\{\theta(t) - r\} and σ are fixed by Assumption A, and π_t has to be properly determined.

9.3.2 A Parametric Specification of the Prepayment Rate

We now come to a particular specification of π_t. For simplicity, we choose an ad hoc parametric form for π in order to analyze its main sensitivities. In accordance with stylized facts on prepayment, the prepayment rate π_t is split into two distinct components, π_t = π_t^S + π_t^R, where π_t^S represents the structural component of prepayment and π_t^R, as a function of d_t, accounts for both the refunding decision and burnout.

Structural prepayment

Structural prepayment can involve many different reasons for prepaying, including:

• professional changes,

• natural disasters followed by insurance prepayment,

• death or default of the mortgagor, also followed by insurance prepayment.

Such prepayment characteristics appear to be stationary in time, Hayre (1999). Their average effect can be captured reasonably well through a deterministic model. The Public Securities Association (PSA) recommends the use of a piecewise linear structural prepayment rate:

    \pi_t^S = k\,\{a t\, I(0 \le t \le 30\text{ months}) + b\, I(30\text{ months} \le t)\}.    (9.14)

This piecewise linear specification takes into account the influence of the age of the mortgage on prepayment. According to the PSA, the mean annualized values for a and b are 2% and 6%, respectively. This implies that the prepayment rate starts from 0% at the issuance date of the mortgage, grows by 0.2% per month during the first 30 months, and is equal to 6% afterwards. This curve is accepted by market practice as the benchmark structural prepayment rate, see Figure 9.7. It is known as the 100% PSA curve. The parameter k sets the desired translation level of this benchmark curve. The PSA regularly publishes statistics on the level of k according to the US geographical region of mortgage issuance.
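The PSA benchmark is easy to encode; a sketch following the 0.2%-per-month description above (function name ours):

```python
# Annualized structural prepayment rate pi_t^S under the PSA benchmark:
# 0.2% per month of age up to month 30, 6% thereafter. k = 1 gives the
# 100% PSA curve; k rescales it as in (9.14).
def psa_rate(t_years, k=1.0):
    months = 12.0 * t_years
    return k * min(0.002 * months, 0.06)
```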


Figure 9.7: The 100% PSA curve.

Refinancing prepayment

The refinancing prepayment rate has to account for both the effect of interest rates and individual characteristics such as burnout. Refinancing incentives linked to the interest rate level can be captured through the optimal prepayment framework of Section 2. This framework implies a 1-to-0 rule for the MBS principal evolution, depending on the optimal short-term interest rate level for prepaying, r_t^{opt}. As soon as d_t > 0, if the mortgagors were optimal, the whole MBS principal would be prepaid. In order to reflect the effect of individual characteristics on the prepayment rate, causing dispersion around the optimal level d_t = 0, we introduce the standard Weibull cumulative distribution function:

    \pi_t^R = \bar{\pi} \cdot \left[1 - \exp\left\{-\left(\frac{d_t}{d}\right)^\alpha\right\}\right].    (9.15)

We do not claim that this parametric form is better than others found in the literature. Its main advantage comes from the easy determination of its parameters


Figure 9.8: Prepayment policy.

thanks to an analytic inversion of its quantile function. In fact, as suggested by Figure 9.8, the determination of quantiles ensures that this parametric specification can easily be interpreted. Parameter d is a scale parameter: being far into the prepayment zone means that d_t/d \gg 0, so that \pi_t^R \approx \bar{\pi}. Parameter \bar{\pi} directly accounts for the magnitude of the burnout effect, since it represents the instantaneous fraction of mortgagors who choose not to prepay even for very low values of r_t. More precisely, if r_t were to stay very low during a time period [0, h] and if refinancing prepayment were the only prepayment component to be considered, then using expression (9.11) the proportion of unprepaid shares at date h would be equal to F_h = \exp(-\bar{\pi} h). This proportion is the burnout rate over the time horizon h. Parameter α controls the speed at which prepayment is made, linking the PSA regime to the burnout regime.
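The Weibull-type refinancing intensity (9.15) can be sketched as follows (names ours; we set the rate to zero outside the prepayment region d_t ≤ 0, in line with the distribution-function reading):

```python
import math

# Refinancing prepayment rate (9.15): pi_bar is the plateau that fixes the
# burnout level, d a scale and alpha a shape parameter; d_t is the distance
# to the optimal prepayment frontier. Outside the prepayment region
# (d_t <= 0) the rate is taken to be zero.
def refinancing_rate(d_t, pi_bar, d, alpha):
    if d_t <= 0.0:
        return 0.0
    return pi_bar * (1.0 - math.exp(-((d_t / d) ** alpha)))
```

Deep in the prepayment zone (d_t ≫ d) the rate saturates at π̄, so over a horizon h the surviving fraction is F_h = exp(−π̄ h), the burnout rate of the text.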


Figure 9.9: The relation between MBS and prepayment policy: MBS without prepayment (solid line), mortgage with prepayment (dashed line), and MBS (dotted line).

9.3.3 Sensitivity Analysis

In order to analyze the main effects of our model, we choose the 100% PSA curve for the structural prepayment rate; the burnout is set equal to 20%. This means that, whatever the market conditions, 20% of the mortgagors will never prepay their loan. The time horizon h for this burnout effect is fixed at 2 years. Parameters d and α are calibrated in such a way that when d_t = 0, ten percent of mortgagors prepay their loan after horizon h, and half of the mortgage is prepaid if half the distance to the optimal prepayment rate is reached. Market conditions are set as of December 2003 in the EUR zone. The short rate equals 2.3% and the long-term rate is 5%. The volatility of the short rate σ is taken equal to 0.8%, and λ is such that the volatility of the 10-year forward rate equals 0.5%. The coupon of the pool of mortgages is c = 5%, its remaining maturity is set to T = 15 years, and no prepayment has been made (F_0 = 1).
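One way to back out the implied intensity $\bar{\pi}$ from the 20% burnout over $h = 2$ years used here is to invert $F_h = \exp(-\bar{\pi} h)$. This is our own illustration under the pure-refinancing approximation, not the authors' calibration code:

```python
import math

h = 2.0          # burnout horizon in years (as in this sensitivity analysis)
burnout = 0.20   # proportion of mortgagors who never prepay

# Invert F_h = exp(-pi_bar * h) to get the instantaneous intensity
pi_bar = -math.log(burnout) / h
```

With these numbers $\bar{\pi} \approx 0.80$, and plugging it back recovers the 20% burnout proportion at the 2-year horizon.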


Figure 9.10: Embedded option price in MBS for a steeper forward-rate curve (dotted line) and a less steep forward-rate curve (solid line).

With such parameters, the price of the MBS is displayed in Figure 9.9 as a function of interest rates, together with the optimally prepaid mortgage (OPM) and the mortgage without the callability feature (NPM). When interest rates go down, the behavior of the MBS is intermediate between the OPM and the NPM. The value at $r_t = 0$ is controlled by the burnout level, and the transition part by the parameters $d$ and $\alpha$. When interest rates increase, the MBS price is higher than the NPM's due to the PSA effect. In fact, by prepaying in the optimal region, mortgagors offer the holder of the MBS a positive NPV. This appears clearly when displaying the value of the option embedded in the MBS. Recall that in the case of the optimally prepaid mortgage, this value was always positive (Figure 9.5). This is no longer the case for the MBS, as indicated in Figure 9.10. As a consequence, the sensitivity of the MBS to interest rate moves is reduced. The duration, plotted in Figure 9.11, is always less than the duration of the underlying pool, and its behavior resembles a smoothed version of the optimally prepaid one.


Figure 9.11: Duration of the MBS: MBS without prepayment (solid line), mortgage with prepayment (dashed line), and MBS (dotted line).

Let us now increase the implied volatility of the underlying derivatives market. The embedded option value increases, reflecting the negative sensitivity of the MBS price to market volatilities, see Figure 9.12. In hedging terms, MBS are "vega negative": a long position in MBS is "short volatility". This is also well reflected in the variation of the duration. Figure 9.13 shows that higher volatility increases the duration when the MBS is "in the money" (low interest rates) and decreases it for "out of the money" MBS. This is not surprising if one thinks of the duration as the "delta" of the MBS with respect to interest rates. The effect of volatility on the delta of a standard vanilla put option is known to be of the opposite sign, depending on the moneyness of the option.


Figure 9.12: The sensitivity of the MBS price to interest-rates volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.


Figure 9.13: The sensitivity of the MBS duration to interest-rates volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.


Bibliography

Björk, T. (1998). Arbitrage Theory in Continuous Time, Oxford University Press.

Boudoukh, J., Richardson, M., Stanton, R., and Whitelaw, R. (1997). Pricing Mortgage-Backed Securities in a Multifactor Interest Rate Environment: A Multivariate Density Estimation Approach, Review of Financial Studies 10: 405–446.

Hayre, L. (1999). Guide to Mortgage-Backed Securities, Technical report, Salomon Smith Barney Fixed Income Research.

Longstaff, F. A. and Schwartz, E. S. (2001). Valuing American Options by Simulation: A Simple Least-Squares Approach, Review of Financial Studies 14: 113–147.

Martellini, L. and Priaulet, P. (2000). Fixed Income Securities: Dynamic Methods for Interest Rate Risk Pricing and Hedging, Wiley.

Musiela, M. and Rutkowski, M. (1997). Martingale Methods in Financial Modelling, Springer.

Nielsen, S. and Poulsen, R. (2002). A Two-Factor Stochastic Programming Model of Danish Mortgage-Backed Securities, Technical report, Department of Statistics and Operations Research, University of Copenhagen.

Pham, H. (2003). Contrôle Optimal Stochastique et Applications en Finance, Lecture Notes.

Schwartz, E. S. and Torous, W. N. (1989). Prepayment and the Valuation of Mortgage-Backed Securities, Journal of Finance 44: 375–392.

Schwartz, E. S. and Torous, W. N. (1993). Mortgage Prepayment and Default Decisions, Journal of the American Real Estate and Urban Economics Association 21: 431–449.

Wilmott, P. (2000). Paul Wilmott on Quantitative Finance, Wiley.

10 Predicting Bankruptcy with Support Vector Machines

Wolfgang Härdle, Rouslan Moro, and Dorothea Schäfer

The purpose of this work is to introduce to corporate bankruptcy analysis one of the most promising recently developed statistical techniques: the support vector machine (SVM). An SVM is implemented for analysing predictors such as financial ratios, and a method of adapting it to default probability estimation is proposed. A survey of methods applied in practice is also given. This work shows that support vector machines are capable of extracting useful information from financial data, although extensive data sets are required in order to fully utilize their classification power.

The support vector machine is a classification method based on statistical learning theory. It has already been successfully applied to optical character recognition, early medical diagnostics, and text classification. One application where SVMs outperformed other methods is electric load prediction (EUNITE, 2001); another is optical character recognition (Vapnik, 1995). SVMs often produce better classification results than parametric methods and than such a popular and widely used nonparametric technique as neural networks, which is deemed to be among the most accurate. In contrast to neural networks, SVMs have very attractive properties: they give a single solution, characterized by the global minimum of the optimized functional, rather than multiple solutions associated with local minima; they do not rely so heavily on heuristics, i.e. an arbitrary choice of the model; and they have a more flexible structure.

10.1 Bankruptcy Analysis Methodology

Although the early works in bankruptcy analysis were published already in the 19th century (Dev, 1974), statistical techniques were not introduced to it until the publications of Beaver (1966) and Altman (1968). Demand from financial institutions for investment risk estimation stimulated subsequent research. However, despite substantial interest, the accuracy of corporate default predictions was much lower than in the private loan sector, largely due to the small number of corporate bankruptcies. Since then, the situation in bankruptcy analysis has changed dramatically. Larger data sets, with the median number of failing companies exceeding 1000, have become available; 20 years ago the median was around 40 companies, and statistically significant inferences could often not be reached. The spread of computer technologies and advances in statistical learning techniques have allowed the identification of more complex data structures. Basic methods are no longer adequate for analysing such expanded data sets. Demand for advanced methods of controlling and measuring default risks has rapidly increased in anticipation of the adoption of the New Basel Capital Accord (BCBS, 2003). The Accord emphasises the importance of risk management and encourages improvements in financial institutions' risk assessment capabilities.

In order to estimate investment risks, one needs to evaluate the default probability (PD) of a company. Each company is described by a set of variables (predictors) x, such as financial ratios, and its class y, which can be either y = −1 ('successful') or y = 1 ('bankrupt'). Initially, an unknown classifier function f : x → y is estimated on a training set of companies (x_i, y_i), i = 1, ..., n. The training set represents the data for companies which are known to have survived or gone bankrupt. Finally, f is applied to computing default probabilities that can be uniquely translated into a company rating.
The importance of ﬁnancial ratios for company analysis has been known for more than a century. Among the ﬁrst researchers applying ﬁnancial ratios for bankruptcy prediction were Ramser (1931), Fitzpatrick (1932) and Winakor and Smith (1935). However, it was not until the publications of Beaver (1966) and Altman (1968) and the introduction of univariate and multivariate discriminant analysis that the systematic application of statistics to bankruptcy analysis began. Altman’s linear Z-score model became the standard for a decade to come and is still widely used today due to its simplicity. However, its assumption of equal normal distributions for both failing and successful companies with the same covariance matrix has been justly criticized. This approach was further developed by Deakin (1972) and Altman et al. (1977).


Later on, the center of research shifted towards the logit and probit models. The original works of Martin (1977) and Ohlson (1980) were followed by Wiginton (1980), Zavgren (1983) and Zmijewski (1984). Among the other statistical methods applied to bankruptcy analysis are the gambler's ruin model (Wilcox, 1971), option pricing theory (Merton, 1974), recursive partitioning (Frydman et al., 1985), neural networks (Tam and Kiang, 1992) and rough sets (Dimitras et al., 1999), to name a few.

There are three main types of models used in bankruptcy analysis. The first type comprises structural or parametric models, e.g. the option pricing model, logit and probit regressions, and discriminant analysis. They assume that the relationship between the input and output parameters can be described a priori. Given their fixed structure, these models are fully determined by a set of parameters, and the solution requires the estimation of these parameters on a training set. Although structural models provide a very clear interpretation of the modelled processes, they have a rigid structure and are not flexible enough to capture information from the data. The non-structural or nonparametric models (e.g. neural networks or genetic algorithms) are more flexible in describing data. They do not impose very strict limitations on the classifier function, but usually do not provide a clear interpretation either. Between the structural and non-structural models lies the class of semiparametric models. These models, like the RiskCalc private company rating model developed by Moody's, are based on an underlying structural model, but all or some predictors enter this structural model after a nonparametric transformation. In recent years the area of research has shifted towards non-structural and semiparametric models, since they are more flexible and better suited for practical purposes than purely structural ones. Statistical models for corporate default prediction are of practical importance.
For example, corporate bond ratings published regularly by rating agencies such as Moody's or S&P correspond closely to company default probabilities that are estimated to a great extent statistically. Moody's RiskCalc model is basically a probit regression estimation of the cumulative default probability over a number of years, using a linear combination of non-parametrically transformed predictors (Falkenstein, 2000). These non-linear transformations $f_1, f_2, \ldots, f_d$ are estimated on univariate models. As a result, the original probit model:

$$E[y_{i,t} \mid x_{i,t}] = \Phi\left(\beta_1 x_{i1,t} + \beta_2 x_{i2,t} + \ldots + \beta_d x_{id,t}\right), \qquad (10.1)$$

is converted into:

$$E[y_{i,t} \mid x_{i,t}] = \Phi\left\{\beta_1 f_1(x_{i1,t}) + \beta_2 f_2(x_{i2,t}) + \ldots + \beta_d f_d(x_{id,t})\right\}, \qquad (10.2)$$

where $y_{i,t}$ is the cumulative default probability within the prediction horizon for company $i$ at time $t$.

Although modifications of traditional methods like probit analysis extend their applicability, it is more desirable to base our methodology on general ideas of statistical learning theory without making many restrictive assumptions. The ideal classification machine applying a classifying function $f$ from the available set of functions $\mathcal{F}$ is based on the so-called expected risk minimization principle. The expected risk

$$R(f) = \int \frac{1}{2}\,|f(x) - y| \, dP(x, y), \qquad (10.3)$$

is computed under the distribution $P(x, y)$, which is assumed to be known. This is, however, never true in practical applications, and the distribution must also be estimated from the training set $(x_i, y_i)$, $i = 1, 2, \ldots, n$, leading to an ill-posed problem (Tikhonov and Arsenin, 1977). In most methods implemented today in statistical packages, this problem is solved by applying another principle, namely empirical risk minimization, i.e. risk minimization over the training set of companies, even when the training set is not representative. The empirical risk, defined as

$$\hat{R}(f) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{2}\,|f(x_i) - y_i|, \qquad (10.4)$$

is nothing else but the average loss over the training set, while the expected risk is the expected value of the loss under the true probability measure. For i.i.d. observations the loss is given by:

$$\frac{1}{2}\,|f(x) - y| = \begin{cases} 0, & \text{if classification is correct,} \\ 1, & \text{if classification is wrong.} \end{cases}$$

The solutions to the problems of expected and empirical risk minimization,

$$f_{\mathrm{opt}} = \arg\min_{f \in \mathcal{F}} R(f), \qquad (10.5)$$

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}(f), \qquad (10.6)$$


Figure 10.1: The minima $f_{\mathrm{opt}}$ and $\hat{f}_n$ of the expected ($R$) and empirical ($\hat{R}$) risk functions generally do not coincide.

generally do not coincide (Figure 10.1), although they converge to each other as $n \to \infty$ if $\mathcal{F}$ is not too large. We cannot minimize the expected risk directly, since the distribution $P(x, y)$ is unknown. However, according to statistical learning theory (Vapnik, 1995), it is possible to estimate the Vapnik-Chervonenkis (VC) bound, which holds with probability $1 - \eta$:

$$R(f) \le \hat{R}(f) + \phi\left(\frac{h}{n}, \frac{\ln(\eta)}{n}\right). \qquad (10.7)$$

For a linear indicator function $g(x) = \mathrm{sign}(x^\top w + b)$:

$$\phi\left(\frac{h}{n}, \frac{\ln(\eta)}{n}\right) = \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) - \ln\frac{\eta}{4}}{n}}, \qquad (10.8)$$

where $h$ is the VC dimension. The VC dimension of the function set $\mathcal{F}$ in a $d$-dimensional space is $h$ if some function $f \in \mathcal{F}$ can shatter $h$ objects $x_i \in \mathbb{R}^d$, $i = 1, \ldots, h$, in all $2^h$ possible configurations, and no set $x_j \in \mathbb{R}^d$, $j = 1, \ldots, q$, with $q > h$ exists that satisfies this property. For example, three points on a plane ($d = 2$) can be shattered by linear indicator functions in $2^3 = 8$ ways, whereas 4 points cannot be


Figure 10.2: Eight possible ways of shattering 3 points on the plane with a linear indicator function.

shattered in $2^4 = 16$ ways. Thus, the VC dimension of the set of linear indicator functions in a two-dimensional space is three, see Figure 10.2. The expression for the VC bound (10.7) is a regularized functional, where the VC dimension $h$ is a parameter controlling the complexity of the classifier function. The term $\phi(h/n, \ln(\eta)/n)$ introduces a penalty for the excessive complexity of a classifier function. There is a trade-off between the number of classification errors on the training set and the complexity of the classifier function. If the complexity were not controlled, it would be possible to find a classifier function that makes no classification errors on the training set, no matter how low its generalization ability.
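The quantities in (10.4) and (10.8) are easy to evaluate numerically. A small sketch with a toy one-dimensional data set and a hypothetical sign classifier:

```python
import math

def empirical_risk(f, xs, ys):
    """Average 0/1 loss over the training set, eq. (10.4)."""
    return sum(0.5 * abs(f(x) - y) for x, y in zip(xs, ys)) / len(xs)

def vc_penalty(h, n, eta):
    """Complexity penalty phi(h/n, ln(eta)/n) of the VC bound, eq. (10.8)."""
    return math.sqrt((h * (math.log(2.0 * n / h) + 1.0) - math.log(eta / 4.0)) / n)

# Toy example: classify by the sign of a single ratio (hypothetical data)
xs = [-0.3, -0.1, 0.2, 0.4]
ys = [-1, -1, 1, 1]
f = lambda x: 1 if x > 0 else -1
risk = empirical_risk(f, xs, ys)   # 0.0: all four points classified correctly

# The bound R <= R_hat + phi: richer classes (larger h) pay a bigger penalty
p_small = vc_penalty(h=3, n=1000, eta=0.05)
p_large = vc_penalty(h=30, n=1000, eta=0.05)
```

The comparison of `p_small` and `p_large` illustrates the trade-off just described: at a fixed sample size, a more complex function class loosens the guarantee on the expected risk.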

10.2 Importance of Risk Classification in Practice

In most countries only a small percentage of firms have been rated to date. The lack of rated firms is mainly due to two factors. Firstly, an external rating is an extremely costly procedure. Secondly, until the recent past most banks decided on their loans to small and medium-sized enterprises (SMEs) without asking for the client's rating figure or applying a rating procedure of their own to estimate the client's default risk. At best, banks based their decisions on rough scoring models; at worst, the credit decision was left entirely to the loan officer.


Table 10.1: Rating grades and risk premia. Source: Damodaran (2002) and Füser (2002).

  Rating Class (S&P)   One-year PD (%)   Risk Premium (%)
  AAA                  0.01               0.75
  AA                   0.02 – 0.04        1.00
  A+                   0.05               1.50
  A                    0.08               1.80
  A-                   0.11               2.00
  BBB                  0.15 – 0.40        2.25
  BB                   0.65 – 1.95        3.50
  B+                   3.20               4.75
  B                    7.00               6.50
  B-                   13.00              8.00
  CCC                  > 13              10.00
  CC                                     11.50
  C                                      12.70
  D                                      14.00

Since learning its own risk is costly and, until recently, the lending procedures of banks failed to set the right incentives, small and medium-sized firms shied away from ratings. However, the regulations are about to change the environment for borrowing and lending decisions. With the implementation of the New Basel Capital Accord (Basel II), scheduled for the end of 2006, not only firms that issue debt securities on the market will be in need of a rating, but also any ordinary firm that applies for a bank loan. If no external rating is available, banks have to employ an internal rating system and deduce each client's specific risk class. Moreover, Basel II puts pressure on firms and banks from two sides. First, banks have to demand risk premia in accordance with the specific borrower's default probability. Table 10.1 presents an example of how individual risk classes map into risk premia (Damodaran, 2002; Füser, 2002). For small US firms, a one-year default probability of 0.11% results in a spread of 2%. Of course, the mapping used by lenders will differ with the firm type or the country in which the bank is located. In any case, however, future loan pricing has to follow the basic rule: the higher the firm's default risk, the higher the risk premium the bank has to charge.
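The mapping of Table 10.1 can be encoded as a simple threshold lookup. The premia and PD ranges below are taken from the table; the threshold representation itself is our own encoding, and grades without a published PD range (CC, C, D) are omitted:

```python
# (upper end of the grade's one-year PD range, risk premium in %), Table 10.1
PD_TO_PREMIUM = [
    (0.0001, 0.75),   # AAA
    (0.0004, 1.00),   # AA
    (0.0005, 1.50),   # A+
    (0.0008, 1.80),   # A
    (0.0011, 2.00),   # A-
    (0.0040, 2.25),   # BBB
    (0.0195, 3.50),   # BB
    (0.0320, 4.75),   # B+
    (0.0700, 6.50),   # B
    (0.1300, 8.00),   # B-
]

def risk_premium(pd):
    """Look up the risk premium (%) for a one-year default probability."""
    for pd_max, premium in PD_TO_PREMIUM:
        if pd <= pd_max:
            return premium
    return 10.00  # CCC: PD > 13%
```

For example, `risk_premium(0.0011)` returns 2.00, matching the 2% spread quoted above for a 0.11% one-year PD.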


Table 10.2: Rating grades and capital requirements. Source: Damodaran (2002) and Füser (2002). The figures in the last column were estimated by the authors for a loan to an SME with a turnover of 5 million euros and a maturity of 2.5 years, using the data from the second column and the recommendations of the Basel Committee on Banking Supervision (BCBS, 2003).

  Rating Class (S&P)   One-year PD (%)   Capital Requirements   Capital Requirements
                                         (Basel I) (%)          (Basel II) (%)
  AAA                  0.01              8.00                    0.63
  AA                   0.02 – 0.04       8.00                    0.93 – 1.40
  A+                   0.05              8.00                    1.60
  A                    0.08              8.00                    2.12
  A-                   0.11              8.00                    2.55
  BBB                  0.15 – 0.40       8.00                    3.05 – 5.17
  BB                   0.65 – 1.95       8.00                    6.50 – 9.97
  B+                   3.20              8.00                   11.90
  B                    7.00              8.00                   16.70
  B-                   13.00             8.00                   22.89
  CCC                  > 13              8.00                   > 22.89
  CC                                     8.00
  C                                      8.00
  D                                      8.00

Second, Basel II requires banks to hold client-specific equity buffers. The magnitudes of these buffers are determined by a risk-weight function defined by the Basel Committee and a solvability coefficient (8%). The function maps default probabilities into risk weights. Table 10.2 illustrates the change in the capital requirements per unit of a loan induced by switching from Basel I to Basel II. Apart from basic risk determinants such as the default probability (PD), maturity and loss given default (LGD), the risk weights also depend on the type of the loan (retail loan, loan to an SME, mortgage, etc.) and the annual turnover. Table 10.2 refers to an SME loan and assumes that the borrower's annual turnover is 5 million EUR (BCBS, 2003). Since the lock-in of the bank's equity affects the provision costs of the loan, these costs are likely to be passed on directly to the individual borrower.
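For illustration, here is a sketch of the corporate IRB risk-weight function in its final 2004 wording. Table 10.2 was estimated from the 2003 consultative paper, so the numbers produced below differ somewhat from the table; the LGD, maturity, and turnover defaults are assumptions:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal CDF and quantile function

def irb_capital(pd, lgd=0.45, m=2.5, turnover=5.0):
    """Capital requirement per unit of exposure, corporate IRB formula
    (final 2004 Accord wording; a sketch, not the chapter's computation)."""
    # Asset correlation with the SME firm-size adjustment (turnover in mn EUR)
    w = (1.0 - exp(-50.0 * pd)) / (1.0 - exp(-50.0))
    r = 0.12 * w + 0.24 * (1.0 - w) - 0.04 * (1.0 - (turnover - 5.0) / 45.0)
    # Maturity adjustment
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    # 99.9% conditional expected loss, net of expected loss
    cond = N.cdf((N.inv_cdf(pd) + sqrt(r) * N.inv_cdf(0.999)) / sqrt(1.0 - r))
    return (lgd * cond - pd * lgd) * (1.0 + (m - 2.5) * b) / (1.0 - 1.5 * b)
```

In contrast to the flat 8% of Basel I, the requirement rises steeply with the default probability, which is the point of Table 10.2.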


Basel II will affect any firm in need of external finance. As both the risk premium and the credit costs are determined by the default risk, firms' ratings will have a deeper economic impact on banks, as well as on the firms themselves, than ever before. Thus, in the wake of Basel II, the choice of the right rating method is of crucial importance. To avoid frictions of a large magnitude, the employed method must meet certain conditions. On the one hand, the rating procedure must keep the number of misclassifications as low as possible. On the other, it must be as simple as possible and, if employed by the borrower, also provide some guidance on how to improve the rating. SVMs have the potential to satisfy both demands. First, the procedure is easy to implement, so that any firm could generate its own rating information. Second, the method is suitable for estimating a unique default probability for each firm. Third, the rating estimation done by an SVM is transparent and does not depend on heuristics or expert judgements. This property implies objectivity and a high degree of robustness against user changes. Moreover, an appropriately trained SVM enables the firm to detect the specific impact of all rating determinants on the overall classification, so the firm can find out, prior to negotiations, what its drawbacks are and how to overcome them. Overall, SVMs employed in the internal rating systems of banks will improve the transparency and accuracy of those systems. Both improvements may help firms and banks adapt to the Basel II framework more easily.

10.3 Lagrangian Formulation of the SVM

Having introduced some elements of statistical learning and demonstrated the potential of SVMs for company rating, we can now give a Lagrangian formulation of an SVM for the linear classification problem and generalize this approach to the nonlinear case. In the linear case the following inequalities hold for all $n$ points of the training set:

$$x_i^\top w + b \ge 1 - \xi_i \quad \text{for } y_i = 1,$$
$$x_i^\top w + b \le -1 + \xi_i \quad \text{for } y_i = -1,$$
$$\xi_i \ge 0,$$


Figure 10.3: The separating hyperplane $x^\top w + b = 0$ and the margin in a non-separable case.

which can be combined into two constraints:

$$y_i(x_i^\top w + b) \ge 1 - \xi_i, \qquad (10.9)$$
$$\xi_i \ge 0. \qquad (10.10)$$

The basic idea of SVM classification is to find a separating hyperplane corresponding to the largest possible margin between the points of different classes, see Figure 10.3. Some penalty for misclassification must also be introduced. The classification error $\xi_i$ is related to the distance from a misclassified point $x_i$ to the canonical hyperplane bounding its class. If $\xi_i > 0$, an error in separating the two sets occurs. The objective function corresponding to penalized margin maximization is formulated as:

$$\frac{1}{2}\|w\|^2 + C\left(\sum_{i=1}^{n} \xi_i\right)^{\upsilon}, \qquad (10.11)$$

10.3 Lagrangian Formulation of the SVM

235

where the parameter $C$ characterizes the generalization ability of the machine and $\upsilon \ge 1$ is a positive integer controlling the sensitivity of the machine to outliers. The conditional minimization of the objective function with constraints (10.9) and (10.10) provides the highest possible margin in the case when classification errors are inevitable due to the linearity of the separating hyperplane. Under such a formulation the problem is convex. One can show that margin maximization reduces the VC dimension. The Lagrange functional for the primal problem for $\upsilon = 1$ is:

$$L_P = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n} \xi_i - \sum_{i=1}^{n} \alpha_i\left\{y_i(x_i^\top w + b) - 1 + \xi_i\right\} - \sum_{i=1}^{n} \mu_i \xi_i, \qquad (10.12)$$

where $\alpha_i \ge 0$ and $\mu_i \ge 0$ are Lagrange multipliers. The primal problem is formulated as:

$$\min_{w_k,\, b,\, \xi_i} \max_{\alpha_i} L_P.$$

After substituting the Karush-Kuhn-Tucker conditions (Gale et al., 1951) into the primal Lagrangian, we derive the dual Lagrangian as:

$$L_D = \sum_{i=1}^{n} \alpha_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j x_i^\top x_j, \qquad (10.13)$$

and the dual problem is posed as:

$$\max_{\alpha_i} L_D,$$

subject to:

$$0 \le \alpha_i \le C,$$
$$\sum_{i=1}^{n} \alpha_i y_i = 0.$$

Those points $i$ for which the equation $y_i(x_i^\top w + b) \le 1$ holds are called support vectors. After training the support vector machine and deriving the Lagrange multipliers (they are equal to 0 for non-support vectors), one can classify a company described by the vector of predictors $x$ using the classification rule:

$$g(x) = \mathrm{sign}\left(x^\top w + b\right), \qquad (10.14)$$


where $w = \sum_{i=1}^{n} \alpha_i y_i x_i$ and $b = -\frac{1}{2}(x_{+1} + x_{-1})^\top w$, with $x_{+1}$ and $x_{-1}$ two support vectors belonging to different classes for which $y_i(x_i^\top w + b) = 1$. The value of the classification function (the score of a company) can be computed as

$$f(x) = x^\top w + b. \qquad (10.15)$$

Each value of $f(x)$ uniquely corresponds to a default probability (PD).

SVMs can also be easily generalized to the nonlinear case. It is worth noting that the training vectors appear in the dual Lagrangian formulation only through scalar products. This means that we can apply a kernel to transform the data into a high-dimensional Hilbert feature space and use linear algorithms there:

$$\Psi : \mathbb{R}^d \to \mathbb{H}. \qquad (10.16)$$
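Before moving to kernels, the linear machine can be illustrated end to end. The chapter estimates $w$ and $b$ by solving the dual QP above; as a self-contained toy alternative, the sketch below minimizes the primal soft-margin objective (10.11) with $\upsilon = 1$ by sub-gradient descent. All data points and parameter values are hypothetical:

```python
def train_linear_svm(xs, ys, c=10.0, epochs=1000, lr=0.01):
    """Sub-gradient descent on 0.5*||w||^2 + C*sum(xi_i), i.e. (10.11) with
    upsilon = 1. A sketch only -- the chapter solves the dual QP instead."""
    d = len(xs[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            margin = y * (sum(wk * xk for wk, xk in zip(w, x)) + b)
            if margin < 1.0:        # hinge active: point violates the margin
                w = [wk - lr * (wk - c * y * xk) for wk, xk in zip(w, x)]
                b += lr * c * y
            else:                   # only the regularizer 0.5*||w||^2 acts
                w = [wk - lr * wk for wk in w]
    return w, b

def score(w, b, x):
    """Company score f(x) = x'w + b as in (10.15)."""
    return sum(wk * xk for wk, xk in zip(w, x)) + b

# Hypothetical training set in the (NI/TA, TL/TA) plane; y = 1 means bankrupt
xs = [(0.05, 0.4), (0.08, 0.3), (0.02, 0.5), (-0.2, 0.9), (-0.1, 1.1), (-0.3, 0.8)]
ys = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(xs, ys)
```

On this separable toy set the learned score has the expected shape: high leverage and negative profitability push the score, and hence the implied default risk, upward.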

If a kernel function $K$ exists such that $K(x_i, x_j) = \Psi(x_i)^\top \Psi(x_j)$, then it can be used without knowing the transformation $\Psi$ explicitly. A necessary and sufficient condition for a symmetric function $K(x_i, x_j)$ to be a kernel is given by Mercer's (1909) theorem. It requires positive definiteness, i.e. for any data set $x_1, \ldots, x_n$ and any real numbers $\lambda_1, \ldots, \lambda_n$ the function $K$ must satisfy

$$\sum_{i=1}^{n}\sum_{j=1}^{n} \lambda_i \lambda_j K(x_i, x_j) \ge 0. \qquad (10.17)$$

Some examples of kernel functions are:

• $K(x_i, x_j) = \exp\left(-\|x_i - x_j\|^2 / 2\sigma^2\right)$ – the isotropic Gaussian kernel;

• $K(x_i, x_j) = \exp\left\{-(x_i - x_j)^\top r^{-2} \Sigma^{-1} (x_i - x_j)/2\right\}$ – the stationary Gaussian kernel with an anisotropic radial basis; we will apply this kernel in our study, taking $\Sigma$ equal to the variance matrix of the training set and $r$ a constant;

• $K(x_i, x_j) = (x_i^\top x_j + 1)^P$ – the polynomial kernel;

• $K(x_i, x_j) = \tanh(k\, x_i^\top x_j - \delta)$ – the hyperbolic tangent kernel.
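Mercer's condition (10.17) can be probed numerically: build the Gaussian kernel matrix for a few points and evaluate the quadratic form for random coefficients $\lambda$. This is a sanity check on sampled data, not a proof of positive definiteness:

```python
import math
import random

def gaussian_kernel(xi, xj, sigma=1.0):
    """Isotropic Gaussian kernel K(x_i, x_j) = exp(-||x_i - x_j||^2 / 2 sigma^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-sq / (2.0 * sigma ** 2))

def quad_form(K, lam):
    """sum_i sum_j lambda_i lambda_j K_ij of (10.17); >= 0 for a valid kernel."""
    return sum(li * lj * K[i][j]
               for i, li in enumerate(lam) for j, lj in enumerate(lam))

random.seed(0)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(6)]
K = [[gaussian_kernel(p, q) for q in points] for p in points]
checks = [quad_form(K, [random.gauss(0, 1) for _ in points]) for _ in range(100)]
```

Every entry of `checks` is non-negative (up to floating-point error), as Mercer's theorem requires for the Gaussian kernel.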

10.4 Description of Data

For our study we selected the largest bankrupt companies with the capitalization of no less than $1 billion that ﬁled for protection against creditors under


Chapter 11 of the US Bankruptcy Code in 2001–2002, after the stock market crash of 2000. We excluded a few companies due to incomplete data, leaving us with 42 companies. They were matched with 42 surviving companies with the closest capitalizations and the same US industry classification codes, available through the Division of Corporate Finance of the Securities and Exchange Commission (SEC, 2004). Of the selected 84 companies, 28 belonged to various manufacturing industries, 20 to telecom and IT industries, 8 to energy industries, 4 to retail industries, 6 to air transportation industries, 6 to miscellaneous service industries, 6 to food production and processing industries, and 6 to construction and construction material industries.

For each company the following information was collected from the annual reports for 1998–1999, i.e. 3 years prior to the defaults of the bankrupt companies (SEC, 2004): (i) S – sales; (ii) COGS – cost of goods sold; (iii) EBIT – earnings before interest and taxes, in most cases equal to the operating income; (iv) Int – interest payments; (v) NI – net income (loss); (vi) Cash – cash and cash equivalents; (vii) Inv – inventories; (viii) CA – current assets; (ix) TA – total assets; (x) CL – current liabilities; (xi) STD – current maturities of the long-term debt; (xii) TD – total debt; (xiii) TL – total liabilities; (xiv) Bankr – bankruptcy (1 if a company went bankrupt, −1 otherwise). The information about the industry was summarized in the following dummy variables: (i) Indprod – manufacturing industries; (ii) Indtelc – telecom and IT industries; (iii) Indenerg – energy industries; (iv) Indret – retail industries; (v) Indair – air transportation industries; (vi) Indserv – miscellaneous service industries; (vii) Indfood – food production and processing industries; (viii) Indconst – construction and construction material industries.
Based on these ﬁnancial indicators the following four groups of ﬁnancial ratios were constructed and used in our study: (i) proﬁt measures: EBIT/TA, NI/TA, EBIT/S; (ii) leverage ratios: EBIT/Int, TD/TA, TL/TA; (iii) liquidity ratios: QA/CL, Cash/TA, WC/TA, CA/CL and STD/TD, where QA is quick assets and WC is working capital; (iv) activity or turnover ratios: S/TA, Inv/COGS.
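A sketch of the ratio construction from the collected financial indicators. The sample figures are hypothetical, and the derivations QA = CA − Inv and WC = CA − CL are common accounting conventions assumed here, since the chapter does not spell them out:

```python
def financial_ratios(f):
    """Build the four ratio groups of Section 10.4 from a dict of financials.
    QA = CA - Inv and WC = CA - CL are assumed conventions."""
    qa = f["CA"] - f["Inv"]   # quick assets
    wc = f["CA"] - f["CL"]    # working capital
    return {
        # (i) profit measures
        "EBIT/TA": f["EBIT"] / f["TA"], "NI/TA": f["NI"] / f["TA"],
        "EBIT/S": f["EBIT"] / f["S"],
        # (ii) leverage ratios
        "EBIT/Int": f["EBIT"] / f["Int"], "TD/TA": f["TD"] / f["TA"],
        "TL/TA": f["TL"] / f["TA"],
        # (iii) liquidity ratios
        "QA/CL": qa / f["CL"], "Cash/TA": f["Cash"] / f["TA"],
        "WC/TA": wc / f["TA"], "CA/CL": f["CA"] / f["CL"],
        "STD/TD": f["STD"] / f["TD"],
        # (iv) activity (turnover) ratios
        "S/TA": f["S"] / f["TA"], "Inv/COGS": f["Inv"] / f["COGS"],
    }

# Hypothetical company, figures in billions of dollars
sample = {"S": 5.0, "COGS": 3.5, "EBIT": 0.8, "Int": 0.15, "NI": 0.16,
          "Cash": 0.2, "Inv": 0.5, "CA": 1.7, "TA": 8.0, "CL": 1.6,
          "STD": 0.2, "TD": 2.0, "TL": 4.9}
ratios = financial_ratios(sample)
```

The resulting dictionary holds the 13 predictors used below, e.g. `ratios["NI/TA"]` and `ratios["TL/TA"]`.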

10.5 Computational Results

The most significant predictors suggested by discriminant analysis belong to the profit and leverage ratios. To demonstrate the ability of an SVM to extract information from the data, we will choose one ratio from each of these groups: NI/TA from the profitability ratios and TL/TA from the leverage ratios. The SVMs,


Table 10.3: Descriptive statistics for the companies. All data except SIZE = log(TA) and the ratios are given in billions of dollars.

  Variable     Min       Max       Mean     Std. Dev.
  TA           0.367     91.072    8.122    13.602
  CA           0.051     10.324    1.657     1.887
  CL           0.000     17.209    1.599     2.562
  TL           0.115     36.437    4.880     6.537
  CASH         0.000      1.714    0.192     0.333
  INVENT       0.000      7.101    0.533     1.114
  LTD          0.000     13.128    1.826     2.516
  STD          0.000      5.015    0.198     0.641
  SALES        0.036     37.120    5.016     7.141
  COGS         0.028     26.381    3.486     4.771
  EBIT        -2.214     29.128    0.822     3.346
  INT         -0.137      0.966    0.144     0.185
  NI          -2.022      4.013    0.161     0.628
  EBIT/TA     -0.493      1.157    0.072     0.002
  NI/TA       -0.599      0.186   -0.003     0.110
  EBIT/S      -2.464     36.186    0.435     3.978
  EBIT/INT   -16.897    486.945   15.094    68.968
  TD/TA        0.000      1.123    0.338     0.236
  TL/TA        0.270      1.463    0.706     0.214
  SIZE        12.813     18.327   15.070     1.257
  QA/CL       -4.003    259.814    4.209    28.433
  CASH/TA      0.000      0.203    0.034     0.041
  WC/TA       -0.258      0.540    0.093     0.132
  CA/CL        0.041   2001.963   25.729   219.568
  STD/TD       0.000      0.874    0.082     0.129
  S/TA         0.002      5.559    1.008     0.914
  INV/COGS     0.000    252.687    3.253    27.555

besides their Lagrangian formulation, can differ in two aspects: (i) the capacity, controlled by the coefficient C in (10.12), and (ii) the complexity of the classifier functions, controlled in our case by the anisotropic radial basis of the Gaussian kernel transformation.


Triangles and squares in Figures 10.4–10.7 represent successful and failing companies from the training set, respectively. The intensity of the gray background corresponds to different score values f: the darker the area, the higher the score and the greater the probability of default. Most successful companies lying in the bright area have positive profitability and a reasonable leverage TL/TA of around 0.4, which makes economic sense.

Figure 10.4 presents the classification results for an SVM using locally near-linear classifier functions (the anisotropic radial basis is 100Σ^{1/2}) with the capacity fixed at C = 1. The discriminating rule in this case can be approximated by a linear combination of predictors and is similar to that suggested by discriminant analysis, although the coefficients of the predictors may differ. If the complexity of the classifying functions increases (the radial basis goes down to 2Σ^{1/2}), as illustrated in Figure 10.5, we get a more detailed picture: the areas of successful and failing companies become localized. If the radial basis is decreased further, down to 0.5Σ^{1/2} (Figure 10.6), the SVM tries to track each observation; the complexity in this case is too high for the given data set.

Figure 10.7 demonstrates the effect of a high capacity (C = 300) on the classification results. As the capacity grows, the SVM localizes only one cluster of successful companies, and the area outside this cluster is associated with approximately equally high score values. Thus, besides estimating the scores of companies, the SVM also managed to learn that there always exists a cluster of successful companies, while the cluster of bankrupt companies vanishes when the capacity is high: a company must possess certain characteristics in order to be successful, whereas failing companies can be located elsewhere. This result was obtained without using any additional knowledge besides that contained in the training set.
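Scores f can be turned into PD estimates by bucketing companies into rating grades and computing per-grade default frequencies. A sketch with hypothetical scores, default indicators, and grade thresholds:

```python
def grade(score, cut=0.0115):
    """Assign a rating grade from the score f (threshold is an assumption)."""
    if score < -cut:
        return "safe"
    if score > cut:
        return "risky"
    return "neutral"

def estimate_pds(scores, defaulted, cut=0.0115):
    """Per-grade default frequency: the PD estimate if the training set
    were representative of the whole population of companies."""
    total, bad = {}, {}
    for s, y in zip(scores, defaulted):
        g = grade(s, cut)
        total[g] = total.get(g, 0) + 1
        bad[g] = bad.get(g, 0) + (1 if y else 0)
    return {g: bad[g] / total[g] for g in total}

# Toy example (hypothetical scores and default indicators)
scores = [-0.05, -0.02, -0.013, 0.001, 0.004, 0.02, 0.03, 0.05]
defaulted = [False, False, True, False, True, True, False, True]
pds = estimate_pds(scores, defaulted)
```

With enough observations the same bucketing can be refined to finer grades by moving the score thresholds until each bucket's default frequency matches the target grade's PD.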
The calibration of the model, i.e. the estimation of the mapping f → PD, can be illustrated by the following example (an SVM with the radial basis 2 Σ^{1/2} and capacity C = 1 is applied). We set three rating grades – safe, neutral and risky – corresponding to the score values f < −0.0115, −0.0115 < f < 0.0115 and f > 0.0115, respectively, and calculate the total number of companies and the number of failing companies in each of the three groups. If the training set is representative of the whole population of companies, the ratio of failing to all companies in a group gives the estimated probability of default. Figure 10.8 shows the power (Lorenz) curve (Lorenz, 1905) – the cumulative default rate as a function of the percentile


10 Predicting Bankruptcy with Support Vector Machines

Figure 10.4: Ratings of companies in two dimensions; the case of a low complexity of classifier functions, the radial basis is 100 Σ^{1/2}, the capacity is fixed at C = 1. STFsvm01.xpl

of companies sorted according to their score – for the training set of companies. For the three rating grades introduced above we derive PD_safe = 0.24, PD_neutral = 0.50 and PD_risky = 0.76. If a sufficient number of observations is available, the model can also be calibrated for finer rating grades such as AAA or BB by adjusting the score values separating the groups of companies so that the estimated default probabilities within each group equal those of the corresponding rating grades. Note that we are calibrating the model on the grid determined by grad(f) = 0, or equivalently grad(PD) = 0 for the estimated default probability, and not on the orthogonal grid as in the Moody's RiskCalc model. In other words, we do not make the restrictive assumption of an independent influence of the predictors, as the latter model does. This can be important since,


Figure 10.5: Ratings of companies in two dimensions; the case of an average complexity of classifier functions, the radial basis is 2 Σ^{1/2}, the capacity is fixed at C = 1. STFsvm02.xpl

for example, the same decrease in profitability will have different consequences for highly and weakly leveraged firms. For multidimensional classification the results cannot be easily visualized. In this case we use the cross-validation technique to compute the percentage of correct classifications and compare it with that of discriminant analysis (DA). Note that both of the most widely used methods – discriminant analysis and logit regression – choose only one predictor (NI/TA) that is significant at the 5% level when forward selection is used. Cross-validation has the following stages: one company is taken out of the sample and the SVM is trained on the remaining companies; then the class of the out-of-sample company is evaluated by the SVM. This procedure is repeated for all companies and the percentage of correct classifications is calculated.
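The leave-one-out procedure just described can be sketched as follows — in Python with scikit-learn (an assumption; the chapter's own computations are XploRe programs) and with synthetic ratios in place of the real data set:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical stand-in for the data set of financial ratios.
X = np.vstack([rng.normal(0.5, 1.0, (40, 3)), rng.normal(-0.5, 1.0, (40, 3))])
y = np.hstack([np.ones(40), -np.ones(40)])

# Leave-one-out cross-validation: train on all companies but one,
# classify the held-out company, repeat for every company.
correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[train_idx], y[train_idx])
    correct += clf.predict(X[test_idx])[0] == y[test_idx][0]
pct = 100.0 * correct / len(y)
print(f"correctly cross-validated: {pct:.1f}%")
```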


Figure 10.6: Ratings of companies in two dimensions; the case of an excessively high complexity of classifier functions, the radial basis is 0.5 Σ^{1/2}, the capacity is fixed at C = 1. STFsvm03.xpl

The best percentage of correctly cross-validated companies (all available ratios were used as predictors) is higher for the SVM than for discriminant analysis (62% vs. 60%). However, the difference is not significant at the 5% level. This indicates that a linear function might be considered an optimal classifier for the number of observations in our data set. As for the direction vector of the separating hyperplane, it can be estimated differently by the SVM and DA without much affecting the accuracy, since the correlation between the underlying predictors is high. Cluster center locations, as estimated using cluster analysis, are presented in Table 10.4. The results of the cluster analysis indicate that the two clusters are likely to correspond to successful and failing companies. Note the


Figure 10.7: Ratings of companies in two dimensions; the case of a high capacity (C = 300), the radial basis is fixed at 2 Σ^{1/2}. STFsvm04.xpl

substantial diﬀerences in the interest coverage ratios, NI/TA, EBIT/TA and TL/TA between the clusters.
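A cluster analysis of the kind behind Table 10.4 can be sketched with k-means — one standard choice; the chapter does not specify its clustering algorithm, and the data below are synthetic stand-ins for two of the ratios:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical (EBIT/TA, TL/TA) ratios for two groups of companies,
# sized like the classes in Table 10.4 (19 successful, 65 failing).
X = np.vstack([rng.normal([0.26, 0.55], 0.05, (19, 2)),   # successful-like
               rng.normal([0.02, 0.75], 0.05, (65, 2))])  # failing-like

# Two-cluster k-means; the fitted centers play the role of the
# cluster centre locations reported in Table 10.4.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster centres:\n", km.cluster_centers_)
print("cluster sizes:", np.bincount(km.labels_))
```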

10.6 Conclusions

As we have shown, SVMs are capable of extracting information from real-life economic data. Moreover, they can uncover results that are not obvious at first glance, and they are easily adjusted, with only a few parameters. This makes them particularly well suited as an underlying technique for the company rating and investment risk assessment methods applied by financial institutions.


Figure 10.8: Power (Lorenz) curve (Lorenz, 1905) – the cumulative default rate as a function of the percentile of companies sorted according to their score – for the training set of companies. An SVM is applied with the radial basis 2 Σ^{1/2} and capacity C = 1. STFsvm05.xpl
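The power (Lorenz) curve and the grade-wise PD estimates can be computed directly from scores and default indicators. A minimal sketch (synthetic scores and defaults, not the chapter's data; the ±0.0115 thresholds are the ones from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical scores f and default indicators for 84 companies;
# a higher score means a higher default risk, as in the chapter.
f = rng.normal(0, 0.02, 84)
default = (rng.random(84) < 1 / (1 + np.exp(-150 * f))).astype(int)

# Power (Lorenz) curve: cumulative default rate against the percentile
# of companies sorted by descending score (riskiest first).
order = np.argsort(-f)
cum_defaults = np.cumsum(default[order]) / default.sum()
percentile = np.arange(1, 85) / 84.0

# Calibration into three rating grades by score thresholds,
# mirroring the safe / neutral / risky grades in the text.
grades = np.digitize(f, [-0.0115, 0.0115])       # 0=safe, 1=neutral, 2=risky
for g, name in enumerate(["safe", "neutral", "risky"]):
    members = default[grades == g]
    if members.size:
        print(f"PD_{name} = {members.mean():.2f}  (n={members.size})")
```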

SVMs also rest on very few restrictive assumptions and can reveal effects overlooked by many other methods. They have produced accurate classification results in other areas and can become a method of choice for company rating. However, in order to create a practically valuable methodology one needs to combine an SVM with an extensive data set of companies and turn to alternative formulations of SVMs better suited to processing large data sets. Overall, we have a valuable tool for company rating that can answer the requirements of the new capital regulations.

Table 10.4: Cluster centre locations. There are 19 members in class {−1} – successful companies, and 65 members in class {1} – failing companies.

Cluster      {−1}      {1}
EBIT/TA      0.263     0.015
NI/TA        0.078    −0.027
EBIT/S       0.313    −0.040
EBIT/INT    13.223     1.012
TD/TA        0.200     0.379
TL/TA        0.549     0.752
SIZE        15.104    15.059
QA/CL        1.108     1.361
CASH/TA      0.047     0.030
WC/TA        0.126     0.083
CA/CL        1.879     1.813
STD/TD       0.144     0.061
S/TA         1.178     0.959
INV/COGS     0.173     0.155


Bibliography

Altman, E. (1968). Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy, The Journal of Finance, September: 589-609.
Altman, E., Haldeman, R. and Narayanan, P. (1977). ZETA Analysis: a New Model to Identify Bankruptcy Risk of Corporations, Journal of Banking and Finance, June: 29-54.
Basel Committee on Banking Supervision (2003). The New Basel Capital Accord, third consultative paper, http://www.bis.org/bcbs/cp3full.pdf.
Beaver, W. (1966). Financial Ratios as Predictors of Failures. Empirical Research in Accounting: Selected Studies, Journal of Accounting Research, supplement to vol. 5: 71-111.
Damodaran, A. (2002). Investment Valuation, second ed., John Wiley & Sons, New York, NY.
Deakin, E. (1972). A Discriminant Analysis of Predictors of Business Failure, Journal of Accounting Research, Spring: 167-179.
Dev, S. (1974). Ratio Analysis and the Prediction of Company Failure, in Debits, Credits, Finance and Profits, ed. H.C. Edey and B.S. Yamey, Sweet and Maxwell, London: 61-74.
Dimitras, A., Slowinski, R., Susmaga, R. and Zopounidis, C. (1999). Business Failure Prediction Using Rough Sets, European Journal of Operational Research, 114: 263-280.
EUNITE (2001). Electricity load forecast competition of the EUropean Network on Intelligent TEchnologies for Smart Adaptive Systems, http://neuron.tuke.sk/competition/.
Falkenstein, E. (2000). RiskCalc for Private Companies: Moody's Default Model, Moody's Investors Service.
Fitzpatrick, P. (2000). A Comparison of the Ratios of Successful Industrial Enterprises with Those of Failed Companies, The Accounting Publishing Company.
Frydman, H., Altman, E. and Kao, D.-L. (1985). Introducing Recursive Partitioning for Financial Classification: The Case of Financial Distress, The Journal of Finance, 40: 269-291.


Füser, K. (2002). Basel II – was muß der Mittelstand tun?, http://www.ey.com/global/download.nsf/Germany/Mittelstandsrating/$file/Mittelstandsrating.pdf.
Gale, D., Kuhn, H.W. and Tucker, A.W. (1951). Linear Programming and the Theory of Games, in Activity Analysis of Production and Allocation, ed. T.C. Koopmans, John Wiley & Sons, New York, NY: 317-329.
Härdle, W. and Simar, L. (2003). Applied Multivariate Statistical Analysis, Springer Verlag.
Lorenz, M.O. (1905). Methods for Measuring the Concentration of Wealth, Journal of the American Statistical Association, 9: 209-219.
Martin, D. (1977). Early Warning of Bank Failure: A Logit Regression Approach, Journal of Banking and Finance, 1: 249-276.
Mercer, J. (1909). Functions of Positive and Negative Type and Their Connection with the Theory of Integral Equations, Philosophical Transactions of the Royal Society of London, 209: 415-446.
Merton, R. (1974). On the Pricing of Corporate Debt: The Risk Structure of Interest Rates, The Journal of Finance, 29: 449-470.
Ohlson, J. (1980). Financial Ratios and the Probabilistic Prediction of Bankruptcy, Journal of Accounting Research, Spring: 109-131.
Ramser, J. and Foster, L. (1931). A Demonstration of Ratio Analysis, Bulletin No. 40, University of Illinois, Bureau of Business Research, Urbana, Illinois.
Division of Corporate Finance of the Securities and Exchange Commission (2004). Standard industrial classification (SIC) code list, http://www.sec.gov/info/edgar/siccodes.htm.
Securities and Exchange Commission (2004). Archive of historical documents, http://www.sec.gov/cgi-bin/srch-edgar.
Tam, K. and Kiang, M. (1992). Managerial Application of Neural Networks: the Case of Bank Failure Prediction, Management Science, 38: 926-947.
Tikhonov, A.N. and Arsenin, V.Y. (1977). Solution of Ill-posed Problems, W.H. Winston, Washington, DC.


Vapnik, V. (1995). The Nature of Statistical Learning Theory, Springer Verlag, New York, NY.
Wiginton, J. (1980). A Note on the Comparison of Logit and Discriminant Models of Consumer Credit Behaviour, Journal of Financial and Quantitative Analysis, 15: 757-770.
Wilcox, A. (1971). A Simple Theory of Financial Ratios as Predictors of Failure, Journal of Accounting Research: 389-395.
Winakor, A. and Smith, R. (1935). Changes in the Financial Structure of Unsuccessful Industrial Corporations, Bulletin No. 51, University of Illinois, Bureau of Business Research, Urbana, Illinois.
Zavgren, C. (1983). The Prediction of Corporate Failure: The State of the Art, Journal of Accounting Literature, 2: 1-38.
Zmijewski, M. (1984). Methodological Issues Related to the Estimation of Financial Distress Prediction Models, Journal of Accounting Research, 20: 59-82.

11 Econometric and Fuzzy Modelling of Indonesian Money Demand

Noer Azam Achsani, Oliver Holtemöller, and Hizir Sofyan

Money demand is an important element of monetary policy analysis. Inflation is supposed to be a monetary phenomenon in the long run, and the empirical relation between money and prices is usually discussed in a money demand framework. The main purpose of money demand studies is to analyze whether a stable money demand function exists in a specific country, especially when a major structural change has taken place. Examples of such structural changes are the monetary union of West Germany and the former German Democratic Republic in 1990 and the introduction of the Euro in 1999. There is broad evidence that money demand has been quite stable both in Germany and in the Euro area; see for example Wolters, Teräsvirta and Lütkepohl (1998) and Holtemöller (2004a). In this chapter, we explore the M2 money demand function for Indonesia in the period 1990:1–2002:3. This period is dominated by the Asian crisis, which started in 1997. In the aftermath of the crisis, a number of immense financial and economic problems emerged in Indonesia. The price level increased by about 16 percent in 1997 compared to the previous year. In the same period, the call money rate increased temporarily from 12.85 percent to 57.10 percent and the money stock increased by about 54 percent. Additionally, Indonesia faced a sharp decrease in real economic activity: GNP decreased by about 11 percent. Given these extraordinary economic developments, it may not be expected that a stable money demand function existed during that period. The contribution of this chapter is twofold. Firstly, we provide a careful analysis of Indonesian money demand, an emerging market economy for which only very few money demand studies exist. Secondly, we do not only apply


the standard econometric methods but also the fuzzy Takagi-Sugeno model, which allows for locally different functional relationships, for example during the Asian crisis. This is interesting and important because the assessment of the monetary policy stance as well as monetary policy decisions depend on the relationship between money and other macroeconomic variables; hence, a stable money demand function should be supported by various empirical methodologies. In Section 11.1 we discuss the specification of money demand functions in general, and in Section 11.2 we estimate a money demand function and the corresponding error-correction model for Indonesia using standard regression techniques. In Section 11.3, we apply the fuzzy approach to money demand. Section 11.4 presents conclusions and a comparison of the two approaches.

11.1 Specification of Money Demand Functions

Major central banks stress the importance of money growth analysis and of a stable money demand function for monetary policy purposes. The Deutsche Bundesbank, for example, followed an explicit monetary targeting strategy from 1975 to 1998, and the analysis of monetary aggregates is one of the two pillars of the European Central Bank's (ECB) monetary policy strategy. Details about these central banks' monetary policy strategies, a comparison, and further references can be found in Holtemöller (2002). The research on the existence and stability of a money demand function is motivated inter alia by the following two observations: (i) Money growth is highly correlated with inflation; see McCandless and Weber (1995) for international empirical evidence. Therefore, monetary policy makers use money growth as one indicator of future risks to price stability. The information content of monetary aggregates for future inflation assessment is based on a stable relationship between money, prices and other observable macroeconomic variables. This relationship is usually analyzed in a money demand framework. (ii) The monetary policy transmission process is still a “black box”; see Mishkin (1995) and Bernanke and Gertler (1995). If we are able to specify a stable money demand function, an important element of the monetary transmission mechanism is revealed, which may help us learn more about monetary policy transmission. There is a huge amount of literature about money demand. The majority of the studies are concerned with industrial countries. Examples are Hafer and Jansen (1991), Miller (1991), McNown and Wallace (1992) and Mehra (1993) for the


USA; Lütkepohl and Wolters (1999), Coenen and Vega (1999), Brand and Cassola (2000) and Holtemöller (2004b) for the Euro area; Arize and Shwiff (1993), Miyao (1996) and Bahmani-Oskooee (2001) for Japan; Drake and Chrystal (1994) for the UK; Haug and Lucas (1996) for Canada; Lim (1993) for Australia; and Orden and Fisher (1993) for New Zealand. There is also a growing number of studies analyzing money demand in developing and emerging countries, primarily triggered by the concern among central bankers and researchers around the world about the impact of moving toward flexible exchange rate regimes, the globalization of capital markets, ongoing financial liberalization, innovation in domestic markets, and country-specific events on the demand for money (Sriram, 1999). Examples are Hafer and Kutan (1994) and Tseng (1994) for China; Moosa (1992) for India; Arize (1994) for Singapore; and Deckle and Pradhan (1997) for ASEAN countries.

For Indonesia, a couple of studies have applied the cointegration and error-correction framework to money demand. Price and Insukindro (1994) use quarterly data from the period 1969:1 to 1987:4. Their results are based on different methods of testing for cointegration. The two-step Engle and Granger (1987) procedure delivers weak evidence for one cointegration relationship, while the Johansen likelihood ratio statistic supports up to two cointegrating vectors. In contrast, Deckle and Pradhan (1997), who use annual data, do not find any cointegrating relationship that can be interpreted as a money demand function.

The starting point of empirical money demand analysis is the choice of variables to be included in the money demand function. It is common practice to assume that the desired level of nominal money demand depends on the price level, a transaction (or scaling) variable, and a vector of opportunity costs (e.g., Goldfeld and Sichel, 1990; Ericsson, 1999):

M*/P = f(Y, R_1, R_2, ...),   (11.1)

where M* is nominal money demand, P is the price level, Y is real income (the transaction variable), and the R_i are the elements of the vector of opportunity costs, which possibly also includes the inflation rate. A money demand function of this type is not only the result of traditional money demand theories but also of modern micro-founded dynamic stochastic general equilibrium models (Walsh, 1998). An empirical standard specification of the money demand function is the partial adjustment model (PAM). Goldfeld and Sichel (1990) show that a desired level of real money holdings MR*_t = M*_t/P_t:

ln MR*_t = φ_0 + φ_1 ln Y_t + φ_2 R_t + φ_3 π_t,   (11.2)


where R_t represents one or more interest rates and π_t = ln(P_t/P_{t−1}) is the inflation rate, and an adjustment cost function:

C = α_1 (ln M*_t − ln M_t)² + α_2 {(ln M_t − ln M_{t−1}) + δ (ln P_t − ln P_{t−1})}²   (11.3)

yield the following reduced form:

ln MR_t = μφ_0 + μφ_1 ln Y_t + μφ_2 R_t + (1 − μ) ln MR_{t−1} + γπ_t,   (11.4)

where:

μ = α_1/(α_1 + α_2)   and   γ = μφ_3 + (1 − μ)(δ − 1).   (11.5)

The parameter δ controls whether nominal money (δ = 0) or real money (δ = −1) adjusts; intermediate cases are also possible. Notice that the coefficient on the inflation rate depends on the value of φ_3 and on the parameters of the adjustment cost function. The imposition of price homogeneity, that is, restricting the price level coefficient in a nominal money demand function to one, is rationalized by economic theory, and Goldfeld and Sichel (1990) propose that empirical rejection of the unity of the price level coefficient should be interpreted as an indicator of misspecification. The reduced form can also be augmented by lagged independent and further lagged dependent variables in order to allow for a more general adjustment process. Rearranging (11.4) yields:

Δln MR_t = μφ_0 + μφ_1 Δln Y_t + μφ_1 ln Y_{t−1} + μφ_2 ΔR_t + μφ_2 R_{t−1} − μ ln MR_{t−1} + γΔπ_t + γπ_{t−1}
         = μφ_0 − μ (ln MR_{t−1} − φ_1 ln Y_{t−1} − φ_2 R_{t−1} − (γ/μ) π_{t−1}) + μφ_1 Δln Y_t + μφ_2 ΔR_t + γΔπ_t.   (11.6)

Accordingly, the PAM can also be represented by an error-correction model like (11.6).
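The algebraic equivalence of the PAM (11.4) and its error-correction representation (11.6) can be verified numerically; the parameter values below are arbitrary illustrations, not estimates:

```python
import numpy as np

rng = np.random.default_rng(4)
# Arbitrary illustrative parameter values (not estimates).
phi0, phi1, phi2, mu, gamma = 0.3, 1.0, -0.2, 0.4, 0.1

T = 20
y = rng.normal(size=T)        # ln Y_t
R = rng.normal(size=T)        # interest rate
pi = rng.normal(size=T)       # inflation
mr = np.zeros(T)              # ln MR_t generated from the PAM (11.4)
for t in range(1, T):
    mr[t] = mu*phi0 + mu*phi1*y[t] + mu*phi2*R[t] + (1-mu)*mr[t-1] + gamma*pi[t]

# ECM representation (11.6) of the same model: Delta ln MR_t
t = np.arange(1, T)
dmr_ecm = (mu*phi0
           - mu*(mr[t-1] - phi1*y[t-1] - phi2*R[t-1] - (gamma/mu)*pi[t-1])
           + mu*phi1*(y[t]-y[t-1]) + mu*phi2*(R[t]-R[t-1])
           + gamma*(pi[t]-pi[t-1]))
assert np.allclose(np.diff(mr), dmr_ecm)   # (11.4) and (11.6) coincide
print("max deviation:", np.abs(np.diff(mr) - dmr_ecm).max())
```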

11.2 The Econometric Approach to Money Demand

11.2.1 Econometric Estimation of Money Demand Functions

Since the seminal works of Nelson and Plosser (1982), who have shown that relevant macroeconomic variables exhibit stochastic trends and are only stationary after differencing, and Engle and Granger (1987), who introduced the concept of cointegration, the (vector) error correction model, (V)ECM, is the dominant econometric framework for money demand analysis. If a certain set of conditions about the number of cointegration relations and exogeneity properties is met, the following single equation error correction model (SE-ECM) can be used to estimate money demand functions:

Δln MR_t = c_t + α (ln MR_{t−1} − β_2 ln Y_{t−1} − β_3 R_{t−1} − β_4 π_{t−1})   [error correction term]
         + Σ_{i=1}^k γ_{1i} Δln MR_{t−i} + Σ_{i=0}^k γ_{2i} Δln Y_{t−i} + Σ_{i=0}^k γ_{3i} ΔR_{t−i} + Σ_{i=0}^k γ_{4i} Δπ_{t−i}.   (11.7)

It can immediately be seen that (11.6) is a special case of the error correction model (11.7). In other words, the PAM corresponds to an SE-ECM with certain parameter restrictions. The SE-ECM can be interpreted as a partial adjustment model with β_2 as the long-run income elasticity of money demand, β_3 as the long-run semi-interest rate elasticity of money demand, and less restrictive short-run dynamics. The coefficient β_4, however, cannot be interpreted directly. In practice, the number of cointegration relations and the exogeneity of certain variables cannot be considered as known. Therefore, the VECM is often applied. In this framework, all variables are assumed to be endogenous a priori, and the imposition of a certain cointegration rank can be justified by statistical tests. The standard VECM is obtained from a vector autoregressive (VAR) model:

x_t = μ_t + Σ_{i=1}^k A_i x_{t−i} + u_t,   (11.8)

where x_t is an (n × 1)-dimensional vector of endogenous variables, μ_t contains deterministic terms like a constant and a time trend, the A_i are (n × n)-dimensional coefficient matrices, and u_t ∼ N(0, Σ_u) is a serially uncorrelated error term. Subtracting x_{t−1} and rearranging terms yields the VECM:

Δx_t = μ_t + Π x_{t−1} + Σ_{i=1}^{k−1} Γ_i Δx_{t−i} + u_t,   (11.9)

where Π and Γ_i are functions of the A_i. The matrix Π can be decomposed into two (n × r)-dimensional matrices α and β: Π = αβ′, where α is called an


adjustment matrix, β comprises the cointegration vectors, and r is the number of linearly independent cointegration vectors (the cointegration rank). Following Engle and Granger (1987), a variable is integrated of order d, or I(d), if it has to be differenced d times to become stationary. A vector x_t is integrated of order d if the maximum order of integration of the variables in x_t is d. A vector x_t is cointegrated, or CI(d, b), if there exists a linear combination β′x_t that is integrated of a lower order (d − b) than x_t. The cointegration framework is only appropriate if the relevant variables are actually integrated, which can be tested using unit root tests. When no unit roots are found, traditional econometric methods can be applied.

11.2.2 Modelling Indonesian Money Demand with Econometric Techniques

We use quarterly data from 1990:1 until 2002:3 for our empirical analysis. The data are not seasonally adjusted and are taken from Datastream (gross national product at 1993 prices, Y, and long-term interest rate, R) and from Bank Indonesia (money stock M2, M, and consumer price index, P). In the following, logarithms of the respective variables are indicated by small letters, and mr = ln M − ln P denotes logarithmic real balances. The data are depicted in Figure 11.1. In the first step, we analyze the stochastic properties of the variables. Table 11.1 presents the results of unit root tests for logarithmic real balances mr, logarithmic real GNP y, the logarithmic price level p, and the logarithmic long-term interest rate r. Note that the log interest rate is used here, while in the previous section the level of the interest rate was used; whether interest rates should be included in logarithms or in levels is mainly an empirical question. Because the time series graphs show that there seem to be structural breaks in real money, GNP and the price level, we allow for the possibility of a mean shift and a change in the slope of a linear trend in the augmented Dickey-Fuller test regression. This corresponds to model (c) in Perron (1989), where the critical values for this type of test are tabulated. In the unit root test for the interest rate, only a constant is considered. According to the test results, real money, real GNP and the price level are trend-stationary, that is, they do not exhibit a unit root, and the interest rate is also stationary. These results are quite stable with respect to the lag length specification. The result of trend-stationarity is also supported by visual inspection of a fitted trend and the corresponding

Figure 11.1: Time series plots of logarithms of real balances, GNP, the long-term interest rate, and consumer price inflation, 1990:1–2002:3. STFmon01.xpl

trend deviations, see Figure 11.2. In the case of real money, the change in the slope of the linear trend is not signiﬁcant. Now, let us denote centered seasonal dummies sit , a step dummy switching from zero to one in the respective quarter ds, and an impulse dummy having


Table 11.1: Unit Root Tests

Variable   Deterministic terms     Lags   Test stat.   1% / 5% / 10% CV
mr         c, t, s, P89c (98:3)     2     −4.55**      −4.75 / −4.44 / −4.18
y          c, t, s, P89c (98:1)     0     −9.40***     −4.75 / −4.44 / −4.18
p          c, t, s, P89c (98:1)     2     −9.46***     −4.75 / −4.44 / −4.18
r          c, s                     2     −4.72***     −3.57 / −2.92 / −2.60

Note: Unit root test results for the variables indicated in the first column. The second column describes the deterministic terms included in the test regression: constant c, seasonal dummies s, linear trend t, and shift and impulse dummies P89c according to model (c) in Perron (1989), allowing for a change in the mean and slope of a linear trend. Break points are given in parentheses. Lags denotes the number of lags included in the test regression. Column CV contains critical values. Three (two) asterisks denote significance at the 1% (5%) level.

value one only in the respective quarter di. Indonesian money demand is then estimated by OLS using the reduced form equation (11.4) (t-values in round and p-values in square parentheses):

mr_t = 0.531 mr_{t−1} + 0.470 y_t − 0.127 r_t − 0.438 − 0.029 s1_t − 0.034 s2_t − 0.036 s3_t
       (6.79)           (4.87)      (−6.15)    (−0.84) (−2.11)      (−2.57)      (−2.77)
     + 0.174 di9802_t + 0.217 di9801_t + 0.112 ds9803_t + u_t
       (3.54)           (5.98)           (5.02)

T = 50 (1990:2 − 2002:3),   R² = 0.987,
RESET(1) = 0.006 [0.941],   LM(4) = 0.479 [0.751],
JB = 0.196 [0.906],         ARCH(4) = 0.970 [0.434].

Here JB refers to the Jarque-Bera test for nonnormality, RESET is the usual test for general nonlinearity and misspecification, LM(4) denotes a Lagrange-Multiplier test for autocorrelation up to order 4, and ARCH(4) is a Lagrange-Multiplier test for autoregressive conditional heteroskedasticity up to order 4. Given these diagnostic statistics, the regression seems to be well specified. There is a mean shift in 1998:3, and the impulse dummies capture the fact that the structural change in GNP occurs two quarters before the change in real money. The inflation rate is not significant and is therefore not included in the equation.

Figure 11.2: Fitted trends and trend residuals for real money and real GNP. STFmon02.xpl STFmon03.xpl


The implied income elasticity of money demand is 0.47/(1 − 0.53) = 1 and the interest rate elasticity is −0.13/(1 − 0.53) = −0.28. These are quite reasonable magnitudes. The estimated equation can be transformed into the following error correction representation:

Δmr_t = −0.47 (mr_{t−1} − y_{t−1} + 0.28 r_{t−1}) + 0.47 Δy_t − 0.13 Δr_t + deterministic terms + u_t.   (11.10)

Stability tests for the real money demand equation (11.10) are depicted in Figure 11.3. The CUSUM of squares test indicates some instability at the time of the Asian crisis, and the coefficients of lagged real money and GNP seem to change slightly after the crisis. A possibility to allow for a change in these coefficients from 1998 on is to introduce two additional right-hand-side variables: lagged real money multiplied by the step dummy ds9803, and GNP multiplied by ds9803. Initially, we also included a corresponding term for the interest rate; its coefficient is negative (−0.04) but not significant (p-value: 0.29), so we excluded this term from the regression equation. The respective coefficients for the period 1998:3–2002:3 can be obtained by summing the coefficients of lagged real money and of lagged real money times the step dummy, and of GNP and GNP times the step dummy, respectively. This reveals that the income elasticity stays approximately constant, 0.28/(1 − 0.70) = 0.93 until 1998:2 and (0.28 + 0.29)/(1 − 0.70 + 0.32) = 0.92 from 1998:3 to 2002:3, and that the interest rate elasticity declines in the second half of the sample from −0.13/(1 − 0.70) = −0.43 to −0.13/(1 − 0.70 + 0.32) = −0.21:

mr_t = 0.697 mr_{t−1} + 0.281 y_t − 0.133 r_t − 0.322 mr_{t−1}·ds9803_t + 0.288 y_t·ds9803_t
       (7.09)           (2.39)      (−6.81)    (−2.54)                    (2.63)
     + 0.133 − 0.032 s1_t − 0.041 s2_t − 0.034 s3_t + 0.110 di9802_t + 0.194 di9801_t + u_t.
       (0.25)  (−2.49)      (−3.18)      (−2.76)      (2.04)            (5.50)

Figure 11.3: Stability tests for the real money demand equation (11.10): recursive coefficients of real balances, GNP and the long-term interest rate, and the CUSUM of squares test (5%). STFmon04.xpl

T = 50 (1990:2 − 2002:3),   R² = 0.989,
RESET(1) = 4.108 [0.050],   LM(4) = 0.619 [0.652],
JB = 0.428 [0.807],         ARCH(4) = 0.408 [0.802].


Accordingly, the absolute adjustment coefficient μ in the error correction representation increases from 0.30 to 0.62. It can be concluded that Indonesian money demand has been surprisingly stable throughout and after the Asian crisis, given that the CUSUM of squares test indicates only minor stability problems. A shift in the constant term and two impulse dummies that correct for the different break points in real money and real output are sufficient to yield a relatively stable money demand function with an income elasticity of one and an interest rate elasticity of −0.28. However, a more flexible specification shows that the adjustment coefficient μ increases and the interest rate elasticity decreases after the Asian crisis. In the next section, we analyze whether these results are supported by a fuzzy clustering technique.
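The CUSUM of squares test behind Figure 11.3 is built from standardized recursive residuals. A self-contained sketch of the computation (toy data with an artificial coefficient break, standing in for the money demand regression; not the STFmon04.xpl code):

```python
import numpy as np

rng = np.random.default_rng(8)
# Toy regression with a mid-sample break in one coefficient.
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.where(np.arange(n) < 50, 1.0, 1.5)        # coefficient shifts
yv = 0.2 + beta * X[:, 1] + rng.normal(scale=0.3, size=n)

# Standardized recursive residuals: at each step t, predict y_t from an
# OLS fit on observations 1..t-1, scaled by the prediction-error factor.
k = X.shape[1]
w = []
for t in range(k + 1, n):
    b, *_ = np.linalg.lstsq(X[:t], yv[:t], rcond=None)
    f = 1.0 + X[t] @ np.linalg.inv(X[:t].T @ X[:t]) @ X[t]
    w.append((yv[t] - X[t] @ b) / np.sqrt(f))
w = np.array(w)

# CUSUM of squares; large deviations from the 45-degree reference line
# signal parameter instability, as in Figure 11.3.
cusum_sq = np.cumsum(w**2) / np.sum(w**2)
print("max deviation from reference line:",
      np.abs(cusum_sq - np.linspace(1/len(w), 1, len(w))).max())
```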

11.3 The Fuzzy Approach to Money Demand

11.3.1 Fuzzy Clustering

Ruspini (1969) introduced the fuzzy partition to describe the cluster structure of a data set and suggested an algorithm to compute the optimum fuzzy partition. Dunn (1973) generalized the minimum-variance clustering procedure to a Fuzzy ISODATA clustering technique. Bezdek (1981) used Dunn's (1973) approach to obtain an infinite family of algorithms known as the Fuzzy C-Means (FCM) algorithm. He generalized the fuzzy objective function by introducing the weighting exponent m, 1 ≤ m < ∞:

J_m(U, V) = Σ_{k=1}^n Σ_{i=1}^c (u_{ik})^m d²(x_k, v_i),   (11.11)

where X = {x_1, x_2, ..., x_n} ⊂ R^p is a subset of the real p-dimensional vector space R^p consisting of n observations, U is a random fuzzy partition matrix of X into c parts, the v_i are the cluster centers in R^p, and d(x_k, v_i) = ‖x_k − v_i‖ = √{(x_k − v_i)′(x_k − v_i)} is an inner-product induced norm on R^p. Finally, u_{ik} refers to the degree of membership of point x_k in the i-th cluster. This degree of membership, which can be seen as a probability of x_k belonging to cluster


i, satisfies the following constraints:

    0 \le u_{ik} \le 1,  for 1 \le i \le c, 1 \le k \le n,    (11.12)
    0 < \sum_{k=1}^{n} u_{ik} < n,  for 1 \le i \le c,    (11.13)
    \sum_{i=1}^{c} u_{ik} = 1,  for 1 \le k \le n.    (11.14)

The FCM uses an iterative optimization of the objective function, based on the weighted similarity measure between x_k and the cluster center v_i. More details on the FCM algorithm can be found in Mucha and Sofyan (2000). In practical applications, a validation method to measure the quality of a clustering result is needed. Its quality depends on many factors, such as the method of initialization, the choice of the number of clusters c, and the clustering method. The initialization requires a good estimate of the clusters, and the cluster validity problem can be reduced to the choice of an optimal number of clusters c. Several cluster validity measures have been developed; see Bezdek and Pal (1992).
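The FCM iteration can be sketched as follows. This is an illustrative NumPy implementation (the function name `fcm`, the random initialization, and the fixed iteration count are our own choices, not from the text); it alternates the standard center and membership updates that minimize (11.11) for a fixed weighting exponent m:

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-Means sketch. X: (n, p) data, c: number of clusters,
    m: weighting exponent (m > 1 in practice). Returns (U, V) with
    U the (c, n) membership matrix and V the (c, p) cluster centers."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                 # column sums equal 1, constraint (11.14)
    for _ in range(n_iter):
        Um = U ** m
        # center update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # distances d(x_k, v_i) for all pairs, shape (c, n)
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)       # guard against division by zero
        # membership update: u_ik proportional to d_ik^{-2/(m-1)}
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=0)
    return U, V
```

With m close to 1 the memberships approach hard assignments; larger m yields fuzzier partitions.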

11.3.2

The Takagi-Sugeno Approach

Takagi and Sugeno (1985) proposed a fuzzy clustering approach using the membership function \mu_A(x): X \to [0, 1], which defines a degree of membership of x \in X in a fuzzy set A. In this context, all the fuzzy sets are associated with piecewise linear membership functions. Based on the fuzzy-set concept, the affine Takagi-Sugeno (TS) fuzzy model consists of a set of rules R_i, i = 1, \ldots, r, which have the following structure:

    IF x is A_i, THEN y_i = a_i^\top x + b_i.

This structure consists of two parts, namely the antecedent part "x is A_i" and the consequent part "y_i = a_i^\top x + b_i," where x \in X \subset R^p is a crisp input vector, A_i is a (multidimensional) fuzzy set defined by the membership function \mu_{A_i}(x): X \to [0, 1], and y_i \in R is an output of the i-th rule depending on a parameter vector a_i \in R^p and a scalar b_i.


Given a set of r rules and their outputs (consequents) y_i, the global output y of the Takagi-Sugeno model is defined by the fuzzy mean formula:

    y = \frac{\sum_{i=1}^{r} \mu_{A_i}(x) y_i}{\sum_{i=1}^{r} \mu_{A_i}(x)}.    (11.15)

It is usually difficult to implement multidimensional fuzzy sets. Therefore, the antecedent part is commonly represented as a combination of equations for the elements of x = (x_1, \ldots, x_p)^\top, each having a corresponding one-dimensional fuzzy set A_{i,j}, j = 1, \ldots, p. Using the conjunctive form, the rules can be formulated as:

    IF x_1 is A_{i,1} AND \cdots AND x_p is A_{i,p}, THEN y_i = a_i^\top x + b_i,

with the degree of membership \mu_{A_i}(x) = \mu_{A_{i,1}}(x_1) \cdot \mu_{A_{i,2}}(x_2) \cdots \mu_{A_{i,p}}(x_p). This elementwise clustering approach is also referred to as product space clustering. The normalized degree of membership (of the antecedent part) is:

    \phi_i(x) = \frac{\mu_{A_i}(x)}{\sum_{j=1}^{r} \mu_{A_j}(x)}.    (11.16)

We can also interpret the affine Takagi-Sugeno model as a quasilinear model with a dependent input parameter (Wolkenhauer, 2001):

    y = \left\{ \sum_{i=1}^{r} \phi_i(x) a_i^\top \right\} x + \sum_{i=1}^{r} \phi_i(x) b_i = a^\top(x)\, x + b(x).    (11.17)
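The fuzzy mean (11.15) is straightforward to compute once antecedents and consequents are given. The sketch below encodes each rule as a triple of a membership function, a parameter vector a_i, and a scalar b_i; the function name `ts_output` and the Gaussian membership functions used in the usage example are purely illustrative assumptions (the chapter itself associates piecewise linear membership functions with the fuzzy sets):

```python
import numpy as np

def ts_output(x, rules):
    """Global output of an affine Takagi-Sugeno model.
    rules: list of (mu_fn, a, b) where mu_fn(x) returns mu_Ai(x),
    a is the parameter vector a_i and b the scalar b_i."""
    mu = np.array([mu_fn(x) for mu_fn, a, b in rules])
    y_local = np.array([a @ x + b for _, a, b in rules])  # y_i = a_i' x + b_i
    phi = mu / mu.sum()               # normalized degrees, (11.16)
    return phi @ y_local              # fuzzy mean, (11.15)
```

Near the core of one fuzzy region the normalized degree of that rule approaches 1, so the global output reduces to the corresponding local linear model.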

11.3.3

Model Identification

The basic principle of model identification by product space clustering is to approximate a nonlinear regression problem by decomposing it into several local linear sub-problems described by IF-THEN rules. A comprehensive discussion can be found in Giles and Draeseke (2001). Let us now discuss identification and estimation of the fuzzy model in the case of multivariate data. Suppose

    y = f(x_1, x_2, \ldots, x_p) + \varepsilon,    (11.18)

where the error term \varepsilon is assumed to be independently, identically, and normally distributed with zero mean. The fuzzy function f represents the conditional mean of the output variable y. In the rest of the chapter, we use a linear form of f and the least squares criterion for its estimation. The algorithm is as follows.


Step 1: For each pair x_r and y, separately partition the n observations of the sample into c_r fuzzy clusters by using fuzzy clustering (where r = 1, \ldots, p).

Step 2: Consider all possible combinations of fuzzy clusters given the number of input variables p, yielding c = \prod_{r=1}^{p} c_r clusters.

Step 3: Form a model by using data taken from each fuzzy cluster:

    y_{ij} = \beta_{i0} + \beta_{i1} x_{1ij} + \beta_{i2} x_{2ij} + \ldots + \beta_{ip} x_{pij} + \varepsilon_{ij},    (11.19)

where the observation index is j = 1, \ldots, n and the cluster index is i = 1, \ldots, c.

Step 4: Predict the conditional mean of y by using:

    \hat{y}_k = \frac{\sum_{i=1}^{c} (b_{i0} + b_{i1} x_{1k} + \ldots + b_{ip} x_{pk}) w_{ik}}{\sum_{i=1}^{c} w_{ik}},  k = 1, \ldots, n,    (11.20)

where w_{ik} = \prod_{r=1}^{p} \delta_{ij} \mu_{rj}(x_k), i = 1, \ldots, c, and \delta_{ij} is an indicator equal to one if the jth cluster is associated with the ith observation.

The fuzzy predictor of the conditional mean y is a weighted average of linear predictors based on the fuzzy partitions of the explanatory variables, with a membership value varying continuously through the sample observations. The effect of this construction is that the nonlinear system can be modelled effectively. The modelling technique based on fuzzy sets can be understood as a local method: it uses partitions of a domain process into a number of fuzzy regions. In each region of the input space, a rule is defined which transforms input variables into output. The rules can be interpreted as local sub-models of the system. This approach is very similar to the inclusion of dummy variables in an econometric model. By allowing interaction of dummy variables and independent variables, we also specify local sub-models. While the number and location of the sub-periods are determined endogenously by the data in the fuzzy approach, they have been imposed exogenously after visual data inspection in our econometric model. However, this is not a fundamental difference because the number and location of the sub-periods could also be determined automatically by using econometric techniques.
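Steps 3 and 4 can be sketched as follows, assuming a fuzzy clustering step has already produced hard cluster labels (for fitting the local models) and a membership matrix U standing in for the weights w_ik; the function name `fuzzy_predict` and this particular encoding are our own:

```python
import numpy as np

def fuzzy_predict(X, y, labels, U):
    """Weighted average of local linear models, cf. (11.19)-(11.20).
    X: (n, p) inputs, y: (n,) output, labels: hard cluster index per
    observation, U: (c, n) membership degrees used as weights w_ik."""
    c = U.shape[0]
    Xd = np.column_stack([np.ones(len(y)), X])   # add intercept column
    coefs = []
    for i in range(c):                           # Step 3: local OLS per cluster
        idx = labels == i
        beta, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        coefs.append(beta)
    B = np.array(coefs)                          # (c, p+1) local coefficients
    local = Xd @ B.T                             # (n, c) local predictions
    W = U.T                                      # (n, c) weights w_ik
    # Step 4: membership-weighted average of the local predictors
    return (local * W).sum(axis=1) / W.sum(axis=1)
```

With hard (one-hot) memberships the predictor collapses to the local model of each observation's own cluster; fuzzy memberships blend neighbouring sub-models smoothly.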

11.3.4

Modelling Indonesian Money Demand with Fuzzy Techniques

In this section, we model the M2 money demand in Indonesia using the approach of fuzzy model identiﬁcation and the same data as in Section 11.2. Like


Table 11.2: Four clusters of Indonesian money demand data

    Cluster   Observations   beta_0 (t-value)    beta_1 (y_t) (t-value)   beta_2 (r_t) (t-value)
    1         1-15            3.9452  (3.402)     0.5479  (5.441)         -0.2047 (-4.195)
    2         16-31           1.2913  (0.328)     0.7123  (1.846)          0.1493  (0.638)
    3         34-39          28.7063  (1.757)    -1.5480 (-1.085)         -0.3177 (-2.377)
    4         40-51          -0.2389 (-0.053)     0.8678  (2.183)          0.1357  (0.901)

in the econometric approach, logarithmic real money demand (m_t^r) depends on logarithmic GNP (y_t) and the logarithmic long-term interest rate (r_t):

    m_t^r = \beta_0 + \beta_1 y_t + \beta_2 r_t.    (11.21)

The results of the fuzzy clustering algorithm are far from unambiguous. Fuzzy clustering with real money and output yields three clusters. However, the real money and output clusters overlap, so that it is difficult to identify three common clusters; hence, we arrange them as four clusters. On the other hand, clustering with real money and the interest rate leads to two clusters. The intersection of both clustering results gives four different clusters. The four local models are presented in Table 11.2. In the first cluster, which covers the period 1990:1-1993:3, GNP has a positive effect on money demand, and the interest rate effect is negative. The output elasticity is substantially below one, but increases in the second cluster (1993:4-1997:3). The interest rate has no significant impact on real money in the second period. The third cluster, from 1997:4 to 1998:4, covers the Asian crisis. In this period, the relationship between real money and output breaks down, while the interest rate effect is stronger than before. The last cluster covers the period 1999:4-2002:3, in which the situation in Indonesia was slowly brought under control after a new government was elected in October 1999. The elasticity of GNP returned approximately to its pre-crisis level. However, the effect of the interest rate is not significant.


[Figure 11.4 here: "Indonesian Money Demand". Log(Money Demand) plotted against time, 1990:1-2002:3, for the true values, the econometric model, and the fuzzy TS model.]

Figure 11.4: Fitted money demand (dotted line): econometric model (dashed line) and fuzzy model (solid line). STFmon05.xpl

The ﬁt of the local sub-models is not as good as the ﬁt of the econometric model (Figure 11.4). The main reasons for this result are that autocorrelation and seasonality of the data have not been considered in the fuzzy approach, mainly for computational reasons. Additionally, the determination of the number of diﬀerent clusters turned out to be rather diﬃcult. Therefore, the fuzzy model for Indonesian money demand described here should be interpreted as an illustrative example for the robustness analysis of econometric models. More research is necessary to ﬁnd a fuzzy speciﬁcation that describes the data as well as the econometric model.

11.4

Conclusions

In this chapter, we have analyzed money demand in Indonesia in a period in which major instabilities in basic economic relations due to the Asian crisis may be expected. In addition to an econometric approach, we have applied fuzzy clustering in order to analyze the robustness of the econometric results. Both the econometric and the fuzzy clustering approach divide the period from 1990 to 2002 into separate sub-periods. In the econometric approach this is accomplished by the inclusion of dummy variables in the regression model; in the fuzzy clustering approach, different clusters are identified in which local regression models are valid. Both approaches reveal that there have been structural changes in Indonesian money demand during the late 1990s. A common result is that the income elasticity of money demand is quite stable before and after the crisis: the econometric estimate of the income elasticity after the crisis is about 0.93 and the fuzzy estimate is 0.87. The interest rate elasticity differs between the two approaches: the econometric model indicates a smaller (in absolute value) but still significant negative interest rate elasticity after the crisis, while the fuzzy approach yields an insignificant interest rate elasticity after the crisis. A further difference is that the fuzzy approach suggests a higher number of sub-periods, namely four clusters, while the econometric model is based on only two sub-periods. However, it might well be that the results of the two approaches become even more similar when the fit of the fuzzy model is improved. Our main conclusions are as follows. Firstly, Indonesian money demand has been surprisingly stable in a troubled and difficult time. Secondly, the fuzzy clustering approach provides a framework for the robustness analysis of economic relationships. This framework can be especially useful if the number and location of sub-periods exhibiting structural differences in the economic relationships are not known ex ante.
Thirdly, our analysis also reveals why previous studies of Indonesian money demand delivered unstable results. These studies applied cointegration techniques. However, we show that the relevant Indonesian time series are trend-stationary, so that the cointegration framework is not appropriate.


Bibliography

Arize, A. C. (1994). A Re-examination of the Demand for Money in Small Developing Economies, Applied Economics 26: 217–228.

Arize, A. C. and Shwiff, S. S. (1993). Cointegration, Real Exchange Rate and Modelling the Demand for Broad Money in Japan, Applied Economics 25(6): 717–726.

Bahmani-Oskooee, M. (2001). How Stable is M2 Money Demand Function in Japan?, Japan and the World Economy 13: 455–461.

Bernanke, B. S. and Gertler, M. (1995). Inside the Black Box: the Credit Channel of Monetary Policy Transmission, Journal of Economic Perspectives 9: 27–48.

Bezdek, J. C. (1981). Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York.

Bezdek, J. C. and Pal, S. K. (1992). Fuzzy Models for Pattern Recognition, IEEE Press, New York.

Brand, C. and Cassola, N. (2000). A Money Demand System for Euro Area M3, ECB Working Paper 39.

Coenen, G. and Vega, J. L. (1999). The Demand for M3 in the Euro Area, ECB Working Paper 6.

Deckle, P. and Pradhan, M. (1997). Financial Liberalization and Money Demand in ASEAN Countries: Implications for Monetary Policy, IMF Working Paper WP/97/36.

Drake, L. and Chrystal, K. A. (1994). Company-Sector Money Demand: New Evidence on the Existence of a Stable Long-run Relationship for the UK, Journal of Money, Credit, and Banking 26: 479–494.

Dunn, J. C. (1973). A Fuzzy Relative of the ISODATA Process and its Use in Detecting Compact Well-Separated Clusters, Journal of Cybernetics 3: 32–57.

Engle, R. F. and Granger, C. W. J. (1987). Co-integration and Error Correction: Representation, Estimation and Testing, Econometrica 55: 251–276.


Ericsson, N. R. (1999). Empirical Modeling of Money Demand, in: Lütkepohl, H. and Wolters, J. (Eds), Money Demand in Europe, Physica, Heidelberg, 29–49.

Giles, D. E. A. and Draeseke, R. (2001). Econometric Modelling Using Pattern Recognition via the Fuzzy c-Means Algorithm, in: Giles, D. E. A. (Ed.), Computer-Aided Econometrics, Marcel Dekker, New York.

Goldfeld, S. M. and Sichel, D. E. (1990). The Demand for Money, in: Friedman, B. and Hahn, F. H. (Eds), Handbook of Monetary Economics, Elsevier, Amsterdam, 299–356.

Hafer, R. W. and Jansen, D. W. (1991). The Demand for Money in the United States: Evidence from Cointegration Tests, Journal of Money, Credit, and Banking 23: 155–168.

Hafer, R. W. and Kutan, A. M. (1994). Economic Reforms and Long-Run Money Demand in China: Implication for Monetary Policy, Southern Economic Journal 60(4): 936–945.

Haug, A. A. and Lucas, R. F. (1996). Long-Term Money Demand in Canada: In Search of Stability, Review of Economics and Statistics 78: 345–348.

Holtemöller, O. (2002). Vector Autoregressive Analysis and Monetary Policy. Three Essays, Shaker, Aachen.

Holtemöller, O. (2004a). Aggregation of National Data and Stability of Euro Area Money Demand, in: Dreger, Chr. and Hansen, G. (Eds), Advances in Macroeconometric Modeling, Papers and Proceedings of the 3rd IWH Workshop in Macroeconometrics, Nomos, Baden-Baden, 181–203.

Holtemöller, O. (2004b). A Monetary Vector Error Correction Model of the Euro Area and Implications for Monetary Policy, Empirical Economics, forthcoming.

Lim, G. C. (1993). The Demand for the Components of Broad Money: Error Correction and Generalized Asset Adjustment Systems, Applied Economics 25(8): 995–1004.

Lütkepohl, H. and Wolters, J. (Eds) (1999). Money Demand in Europe, Physica, Heidelberg.

McCandless, G. T. and Weber, W. E. (1995). Some Monetary Facts, Federal Reserve Bank of Minneapolis Quarterly Review 19: 2–11.


McNown, R. and Wallace, M. S. (1992). Cointegration Tests of a Long-Run Relation between Money Demand and the Effective Exchange Rate, Journal of International Money and Finance 11(1): 107–114.

Mehra, Y. P. (1993). The Stability of the M2 Money Demand Function: Evidence from an Error-Correction Model, Journal of Money, Credit, and Banking 25: 455–460.

Miller, S. M. (1991). Monetary Dynamics: An Application of Cointegration and Error-Correction Modelling, Journal of Money, Credit, and Banking 23: 139–168.

Mishkin, F. S. (1995). Symposium on the Monetary Transmission Mechanism, Journal of Economic Perspectives 9: 3–10.

Miyao, R. (1996). Does a Cointegrating M2 Demand Relation Really Exist in Japan?, Journal of the Japanese and International Economies 10: 169–180.

Moosa, I. A. (1992). The Demand for Money in India: A Cointegration Approach, The Indian Economic Journal 40(1): 101–115.

Mucha, H. J. and Sofyan, H. (2000). Cluster Analysis, in: Härdle, W., Klinke, S. and Hlavka, Z. (Eds), XploRe Application Guide, Springer, Heidelberg.

Nelson, C. R. and Plosser, C. I. (1982). Trends and Random Walks in Macroeconomic Time Series, Journal of Monetary Economics 10: 139–162.

Orden, D. and Fisher, L. A. (1993). Financial Deregulation and the Dynamics of Money, Prices and Output in New Zealand and Australia, Journal of Money, Credit, and Banking 25: 273–292.

Perron, P. (1989). The Great Crash, the Oil Price Shock, and the Unit Root Hypothesis, Econometrica 57: 1361–1401.

Price, S. and Insukindro (1994). The Demand for Indonesian Narrow Money: Long-run Equilibrium, Error Correction and Forward-looking Behaviour, The Journal of International Trade and Economic Development 3(2): 147–163.

Ruspini, E. H. (1969). A New Approach to Clustering, Information Control 15: 22–32.

Sriram, S. S. (1999). Demand for M2 in an Emerging-Market Economy: An Error-Correction Model for Malaysia, IMF Working Paper WP/99/173.


Takagi, T. and Sugeno, M. (1985). Fuzzy Identification of Systems and its Application to Modelling and Control, IEEE Transactions on Systems, Man and Cybernetics 15(1): 116–132.

Tseng, W. (1994). Economic Reform in China: A New Phase, IMF Occasional Paper 114.

Walsh, C. E. (1998). Monetary Theory and Policy, MIT Press, Cambridge.

Wolkenhauer, O. (2001). Data Engineering: Fuzzy Mathematics in System Theory and Data Analysis, Wiley, New York.

Wolters, J., Teräsvirta, T. and Lütkepohl, H. (1998). Modeling the Demand for M3 in the Unified Germany, The Review of Economics and Statistics 90: 309–409.

12 Nonparametric Productivity Analysis

Wolfgang Härdle and Seok-Oh Jeong

How can we measure and compare the relative performance of production units? If input and output variables are one-dimensional, then the simplest way is to compute efficiency by calculating and comparing the ratio of output to input for each production unit. This idea is inappropriate, though, when multiple inputs or multiple outputs are observed. Consider a bank, for example, with three branches A, B, and C. The branches take the number of staff as the input, and measure outputs such as the number of transactions on personal and business accounts. Assume that the following statistics are observed:

• Branch A: 60000 personal transactions, 50000 business transactions, 25 people on staff,

• Branch B: 50000 personal transactions, 25000 business transactions, 15 people on staff,

• Branch C: 45000 personal transactions, 15000 business transactions, 10 people on staff.

We observe that Branch C performed best in terms of personal transactions per staff member, whereas Branch A has the highest ratio of business transactions per staff member. By contrast, Branch B performed better than Branch A in terms of personal transactions per staff member, and better than Branch C in terms of business transactions per staff member. How can we compare these business units in a fair way? Moreover, can we possibly create a virtual branch that reflects the input/output mechanism and thus creates a scale for the real branches? Productivity analysis provides a systematic approach to these problems. We review the basic concepts of productivity analysis and two popular methods,


DEA and FDH, which are given in Sections 12.1 and 12.2, respectively. Sections 12.3 and 12.4 contain illustrative examples with real data.

12.1

The Basic Concepts

The activity of production units such as banks, universities, governments, administrations, and hospitals may be described and formalized by the production set:

    \Psi = \{(x, y) \in R_+^p \times R_+^q \mid x can produce y\},

where x is a vector of inputs and y is a vector of outputs. This set is usually assumed to be free disposable, i.e. if (x, y) \in \Psi, then all (x', y') with x' \ge x and y' \le y belong to \Psi, where the inequalities between vectors are understood componentwise. When y is one-dimensional, \Psi can be characterized by a function g called the frontier function or the production function:

    \Psi = \{(x, y) \in R_+^p \times R_+ \mid y \le g(x)\}.

Under the free disposability condition the frontier function g is monotone nondecreasing in x. See Figure 12.1 for an illustration of the production set and the frontier function in the case of p = q = 1. The black curve represents the frontier function, and the production set is the region below the curve. Suppose the point A represents the input and output pair of a production unit. The performance of the unit can be evaluated by referring to the points B and C on the frontier. One sees that with less input x one could have produced the same output y (point B). One also sees that with the input of A one could have produced C. In the following we describe a systematic way to measure the efficiency of any production unit compared to the peers of the production set in a multi-dimensional setup. The production set \Psi can be described by its sections. The input (requirement) set X(y) is defined by:

    X(y) = \{x \in R_+^p \mid (x, y) \in \Psi\},

which is the set of all input vectors x \in R_+^p that yield at least the output vector y. See Figure 12.2 for a graphical illustration for the case of p = 2. The region over the smooth curve represents X(y) for a given level y. On the other hand, the output (correspondence) set Y(x) is defined by:

    Y(x) = \{y \in R_+^q \mid (x, y) \in \Psi\},


Figure 12.1: The production set and the frontier function, p = q = 1.

the set of all output vectors y \in R_+^q that are obtainable from the input vector x. Figure 12.3 illustrates Y(x) for the case of q = 2. The region below the smooth curve is Y(x) for a given input level x. In productivity analysis one is interested in the input and output isoquants or efficient boundaries, denoted by \partial X(y) and \partial Y(x), respectively. They consist of the attainable boundary in a radial sense:

    \partial X(y) = \{x \mid x \in X(y), \theta x \notin X(y), \forall\, 0 < \theta < 1\}  if y \ne 0,
    \partial X(y) = \{0\}  if y = 0,

and

    \partial Y(x) = \{y \mid y \in Y(x), \lambda y \notin Y(x), \forall\, \lambda > 1\}  if Y(x) \ne \{0\},
    \partial Y(x) = \{0\}  if Y(x) = \{0\}.

Given a production set \Psi with the scalar output y, the production function g can also be defined for x \in R_+^p:

    g(x) = \sup\{y \mid (x, y) \in \Psi\}.


Figure 12.2: Input requirement set, p = 2.

It may be defined via the input set and the output set as well:

    g(x) = \sup\{y \mid x \in X(y)\} = \sup\{y \mid y \in Y(x)\}.

For a given input-output point (x_0, y_0), its input efficiency is defined as

    \theta_{IN}(x_0, y_0) = \inf\{\theta \mid \theta x_0 \in X(y_0)\}.

The efficient level of input corresponding to the output level y_0 is then given by

    x^\partial(y_0) = \theta_{IN}(x_0, y_0)\, x_0.    (12.1)

Note that x^\partial(y_0) is the intersection of \partial X(y_0) and the ray \theta x_0, \theta > 0; see Figure 12.2. Suppose that the point A in Figure 12.2 represents the input used by a production unit. The point B is its efficient input level, and the input efficiency score of the unit is given by OB/OA. The output efficiency score \theta_{OUT}(x_0, y_0) can be defined similarly:

    \theta_{OUT}(x_0, y_0) = \sup\{\theta \mid \theta y_0 \in Y(x_0)\}.    (12.2)


Figure 12.3: Output corresponding set, q = 2.

The efficient level of output corresponding to the input level x_0 is given by

    y^\partial(x_0) = \theta_{OUT}(x_0, y_0)\, y_0.

In Figure 12.3, let the point A be the output produced by a unit. Then the point B is the efficient output level, and the output efficiency score of the unit is given by OB/OA. Note that, by definition,

    \theta_{IN}(x_0, y_0) = \inf\{\theta \mid (\theta x_0, y_0) \in \Psi\},
    \theta_{OUT}(x_0, y_0) = \sup\{\theta \mid (x_0, \theta y_0) \in \Psi\}.    (12.3)

Returns to scale is a characteristic of the surface of the production set. The production set exhibits constant returns to scale (CRS) if, for α ≥ 0 and P ∈ Ψ, αP ∈ Ψ; it exhibits non-increasing returns to scale (NIRS) if, for 0 ≤ α ≤ 1 and P ∈ Ψ, αP ∈ Ψ; it exhibits non-decreasing returns to scale (NDRS) if, for α ≥ 1 and P ∈ Ψ, αP ∈ Ψ. In particular, a convex production set exhibits non-increasing returns to scale. Note, however, that the converse is not true.


For more details on the theory and methods of productivity analysis, see Shephard (1970), Färe, Grosskopf, and Lovell (1985), and Färe, Grosskopf, and Lovell (1994).

12.2

Nonparametric Hull Methods

The production set \Psi and the production function g are usually unknown, but a sample of production units or decision making units (DMUs) is available instead:

    X = \{(x_i, y_i), i = 1, \ldots, n\}.

The aim of productivity analysis is to estimate \Psi or g from the data X. Here we consider only the deterministic frontier model, i.e. there is no noise in the observations and hence X \subset \Psi with probability 1. For example, when q = 1 the structure of X can be expressed as:

    y_i = g(x_i) - u_i,  i = 1, \ldots, n,

or

    y_i = g(x_i)\, v_i,  i = 1, \ldots, n,

where g is the frontier function, and u_i \ge 0 and v_i \le 1 are the random terms for the inefficiency of the observed pair (x_i, y_i) for i = 1, \ldots, n. The most popular nonparametric method is Data Envelopment Analysis (DEA), which assumes that the production set is convex and free disposable. This model is an extension of Farrell's (1957) idea and was popularized by Charnes, Cooper, and Rhodes (1978). Deprins, Simar, and Tulkens (1984), assuming only free disposability of the production set, proposed a more flexible model, the Free Disposal Hull (FDH) model. Statistical properties of these hull methods have been studied in the literature. Park (2001) and Simar and Wilson (2000) provide reviews of the statistical inference for existing nonparametric frontier models. For nonparametric frontier models in the presence of noise, the so-called nonparametric stochastic frontier models, we refer to Simar (2003), Kumbhakar, Park, Simar and Tsionas (2004) and references therein.


12.2.1


Data Envelopment Analysis

The Data Envelopment Analysis (DEA) estimator of the observed sample X is defined as the smallest free disposable and convex set containing X:

    \hat{\Psi}_{DEA} = \{(x, y) \in R_+^p \times R_+^q \mid x \ge \sum_{i=1}^{n} \gamma_i x_i, \; y \le \sum_{i=1}^{n} \gamma_i y_i
                       for some (\gamma_1, \ldots, \gamma_n) such that \sum_{i=1}^{n} \gamma_i = 1, \; \gamma_i \ge 0 \;\forall i = 1, \ldots, n\}.

The DEA efficiency scores for a given input-output level (x_0, y_0) are obtained via (12.3):

    \hat{\theta}_{IN}(x_0, y_0) = \min\{\theta > 0 \mid (\theta x_0, y_0) \in \hat{\Psi}_{DEA}\},
    \hat{\theta}_{OUT}(x_0, y_0) = \max\{\theta > 0 \mid (x_0, \theta y_0) \in \hat{\Psi}_{DEA}\}.

The DEA efficient levels for a given level (x_0, y_0) are given by (12.1) and (12.2) as:

    \hat{x}^\partial(y_0) = \hat{\theta}_{IN}(x_0, y_0)\, x_0;    \hat{y}^\partial(x_0) = \hat{\theta}_{OUT}(x_0, y_0)\, y_0.

Figure 12.4 depicts 50 simulated production units and the frontier built by DEA efficient input levels. The simulated model is as follows:

    x_i \sim Uniform[0, 1],  y_i = g(x_i) e^{-z_i},  g(x) = 1 + \sqrt{x},  z_i \sim Exp(3),

for i = 1, \ldots, 50, where Exp(\nu) denotes the exponential distribution with mean 1/\nu. Note that E[e^{-z_i}] = 0.75. The scenario with an exponential distribution for the logarithm of the inefficiency term and 0.75 as an average of inefficiency is reasonable in the productivity analysis literature (Gijbels, Mammen, Park, and Simar, 1999). The DEA estimate is always downward biased in the sense that \hat{\Psi}_{DEA} \subset \Psi. So an asymptotic analysis quantifying the discrepancy between the true frontier and the DEA estimate would be appreciated. The consistency and the convergence rate of DEA efficiency scores with multidimensional inputs and outputs were established analytically by Kneip, Park, and Simar (1998). For p = 1 and q = 1, Gijbels, Mammen, Park, and Simar (1999) obtained its limit distribution depending on the curvature of the frontier and the density at the boundary. Jeong and Park (2004) and Kneip, Simar, and Wilson (2003) extended this result to higher dimensions.
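In practice the input efficiency score is computed by linear programming: minimize \theta subject to (\theta x_0, y_0) lying in the DEA hull defined above. A minimal sketch using `scipy.optimize.linprog` (the variable layout and the function name `dea_input_efficiency` are our own; the constraint \sum_i \gamma_i = 1 reproduces the convexity restriction in the definition of \hat{\Psi}_{DEA}):

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(x0, y0, X, Y):
    """DEA input efficiency score for the point (x0, y0).
    X: (n, p) observed inputs, Y: (n, q) observed outputs.
    Decision variables: [theta, gamma_1, ..., gamma_n]; minimize theta."""
    n, p = X.shape
    q = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]
    # sum_i gamma_i x_i^j - theta x0^j <= 0  (inputs dominated by theta*x0)
    A1 = np.c_[-x0.reshape(p, 1), X.T]
    b1 = np.zeros(p)
    # -sum_i gamma_i y_i^k <= -y0^k  (outputs at least y0)
    A2 = np.c_[np.zeros((q, 1)), -Y.T]
    b2 = -y0
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # sum_i gamma_i = 1
    res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.r_[b1, b2],
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] + [(0, None)] * n)
    return res.fun
```

The output score is obtained analogously by maximizing \theta with the roles of the input and output constraints exchanged.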


Figure 12.4: 50 simulated production units (circles), the frontier of the DEA estimate (solid line), and the true frontier function g(x) = 1 + \sqrt{x} (dotted line). STFnpa01.xpl

12.2.2

Free Disposal Hull

The Free Disposal Hull (FDH) of the observed sample X is defined as the smallest free disposable set containing X:

    \hat{\Psi}_{FDH} = \{(x, y) \in R_+^p \times R_+^q \mid x \ge x_i, \; y \le y_i \; for some i = 1, \ldots, n\}.

We can obtain the FDH estimates of the efficiency scores for a given input-output level (x_0, y_0) by replacing \hat{\Psi}_{DEA} with \hat{\Psi}_{FDH} in the definition of the DEA efficiency scores. Note that, unlike the DEA estimates, their closed forms can be


derived by a straightforward calculation:

    \hat{\theta}_{IN}(x_0, y_0) = \min_{i:\, y_i \ge y_0} \; \max_{1 \le j \le p} \; x_i^j / x_0^j,
    \hat{\theta}_{OUT}(x_0, y_0) = \max_{i:\, x_i \le x_0} \; \min_{1 \le k \le q} \; y_i^k / y_0^k,

where v^j is the jth component of a vector v. The efficient levels for a given level (x_0, y_0) are obtained in the same way as those for DEA. See Figure 12.5 for an illustration by a simulated example:

    x_i \sim Uniform[1, 2],  y_i = g(x_i) e^{-z_i},  g(x) = 3(x - 1.5)^3 + 0.25x + 1.125,  z_i \sim Exp(3),

for i = 1, \ldots, 50. Park, Simar, and Weiner (1999) showed that the limit distribution of the FDH estimator in a multivariate setup is a Weibull distribution depending on the slope of the frontier and the density at the boundary.
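Unlike DEA, the FDH scores above need no optimization; they follow directly from the min/max formulas. A small NumPy sketch (the function name `fdh_scores` is ours; it assumes, as holds for any sample point, that (x_0, y_0) is dominated by at least one observed unit in each orientation):

```python
import numpy as np

def fdh_scores(x0, y0, X, Y):
    """Closed-form FDH efficiency scores for the point (x0, y0).
    X: (n, p) observed inputs, Y: (n, q) observed outputs."""
    # input orientation: units producing at least y0
    dom_out = np.all(Y >= y0, axis=1)
    theta_in = np.min(np.max(X[dom_out] / x0, axis=1))
    # output orientation: units using at most x0
    dom_in = np.all(X <= x0, axis=1)
    theta_out = np.max(np.min(Y[dom_in] / y0, axis=1))
    return theta_in, theta_out
```

Because each score is just a nested componentwise comparison against the dominating observations, FDH scales easily to large samples.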

12.3

DEA in Practice: Insurance Agencies

In order to illustrate a practical application of DEA we consider an example from the empirical study of Scheel (1999). This data analysis concerns the efficiency of 63 agencies of a German insurance company, see Table 12.1. The input X \in R_+^4 and output Y \in R_+^2 variables were as follows:

X1: Number of clients of Type A,
X2: Number of clients of Type B,
X3: Number of clients of Type C,
X4: Potential new premiums in EURO,
Y1: Number of new contracts,
Y2: Sum of new premiums in EURO.

Clients of an insurance company are those who are currently served by the agencies of the company. They are classified into several types which reflect, for example, the insurance coverage. Agencies should sell to the clients as many contracts with as many premiums as possible. Hence the numbers of clients (X1, X2, X3) are included as input variables, and the number of new contracts (Y1)


Figure 12.5: 50 simulated production units (circles), the frontier of the FDH estimate (solid line), and the true frontier function g(x) = 3(x − 1.5)^3 + 0.25x + 1.125 (dotted line). STFnpa02.xpl

and the sum of new premiums (Y2) are included as output variables. The potential new premiums (X4) are included as an input variable, since they depend on the clients' current coverage. Summary statistics for these data are given in Table 12.2. The DEA efficiency scores and the DEA efficient levels of inputs for the agencies are given in Tables 12.3 and 12.4, respectively. The input efficiency score for each agency provides a gauge for evaluating its activity, and the efficient level of inputs can be interpreted as a "goal" input. For example, agency 1 should have been able to yield its outputs (Y1 = 7, Y2 = 1754) with only 38% of its inputs, i.e., X1 = 53, X2 = 93, X3 = 4, and X4 = 108960. By contrast, agency 63, whose efficiency score is equal to 1, turned out to have used its resources 100% efficiently.


Table 12.1: Activities of 63 agencies of a German insurance company (inputs X1-X4, outputs Y1, Y2)

    Agency    X1     X2     X3     X4           Y1     Y2
    1         138    242    10     283816.7      7     1754
    2         166    124     5     156727.5      8     2413
    3         152     84     3     111128.9     15     2531
    ...       ...    ...    ...    ...          ...    ...
    62         83    109     2     139831.4     11     4439
    63        108    257     0     299905.3     45    30545

Table 12.2: Summary statistics for 63 agencies of a German insurance company

          Minimum    Maximum    Mean       Median     Std.Error
    X1    42         572        225.54     197        131.73
    X2    55         481        184.44     141        110.28
    X3    0          140        19.762     10         26.012
    X4    73756      693820     258670     206170     160150
    Y1    2          70         22.762     16         16.608
    Y2    696        33075      7886.7     6038       7208

12.4

FDH in Practice: Manufacturing Industry

In order to illustrate how FDH works, we consider the Manufacturing Industry Productivity Database from the National Bureau of Economic Research (NBER), USA. This database is downloadable from the website of the NBER [http://www.nber.org]. It contains annual industry-level data on output, employment, payroll and other input costs, investment, capital stocks, and various industry-specific price indices from 1958 onward for hundreds of manufacturing industries (indexed by 4-digit numbers) in the United States. We selected data from the year 1996 (458 industries) with the following 4 input variables, p = 4, and 1 output variable, q = 1 (summary statistics are given in Table 12.5):


12 Nonparametric Productivity Analysis

Table 12.3: DEA efficiency scores of the 63 agencies

  Agency    Efficiency score
  1                  0.38392
  2                  0.49063
  3                  0.86449
  ...                    ...
  62                 0.79892
  63                       1

  STFnpa03.xpl

Table 12.4: DEA efficient level of inputs of the 63 agencies

            Efficient level of inputs
  Agency        X1        X2        X3        X4
  1         52.981    92.909    3.8392    108960
  2         81.444    60.838    2.4531     76895
  3          131.4    72.617    2.5935     96070
  ...          ...       ...       ...       ...
  62        66.311    87.083    1.5978    111710
  63           108       257         0    299910

  STFnpa03.xpl

X1: Total employment,
X2: Total cost of material,
X3: Cost of electricity and fuel,
X4: Total real capital stock,
Y: Total value added.


Table 12.5: Summary statistics for the Manufacturing Industry Productivity Database (NBER, USA)

        Minimum    Maximum      Mean    Median    Std.Error
  X1        0.8      500.5    37.833        21       54.929
  X2       18.5     145130      4313    1957.2        10771
  X3        0.5     3807.8    139.96      49.7          362
  X4       15.8      64590    2962.8    1234.7       6271.1
  Y        34.1      56311    3820.2    1858.5         6392

Table 12.6 summarizes the results of the analysis of US manufacturing industries in 1996. The industry indexed by 2015 was efficient in both input and output orientations. This means that it is one of the vertices of the free disposal hull generated by the 458 observations. On the other hand, industry 2298 performed fairly well in terms of input efficiency (0.96) but rather badly (0.47) in terms of output efficiency. We can obtain the efficient level of inputs (or outputs) by multiplying (or dividing) each corresponding observation by the efficiency score. For example, consider industry 2013, which used inputs X1 = 88.1, X2 = 14925, X3 = 250, and X4 = 4365.1 to yield the output Y = 5954.2. Since its FDH input efficiency score was 0.64, this industry should have used the inputs X1 = 56.667, X2 = 9600, X3 = 160.8, and X4 = 2807.7 to produce the observed output Y = 5954.2. On the other hand, taking into account that the FDH output efficiency score was 0.70, this industry should have increased its output up to Y = 4183.1 with the observed level of inputs.
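Unlike DEA, the FDH scores need no linear programming: each unit is compared only with observed units that dominate it. The sketch below is a minimal illustration; the convention that the output score is reported on a 0–1 scale (the reciprocal of the maximal feasible output expansion, consistent with the "dividing" rule above) is an assumption inferred from Table 12.6, and all names are illustrative.

```python
# FDH efficiency scores -- a minimal sketch. Input score: the smallest
# proportional contraction of unit k's inputs achievable by an observed
# unit producing at least as much of every output. Output score: the
# reciprocal of the largest proportional output expansion supported by an
# observed unit using no more of every input (assumed 0-1 reporting scale).
import numpy as np

def fdh_scores(X, Y, k):
    """Return (input_eff, output_eff) for unit k, with X an (n, p) input
    matrix and Y an (n, q) output matrix, all entries strictly positive."""
    # units producing at least as much of every output as unit k
    dom_out = np.all(Y >= Y[k], axis=1)
    input_eff = np.min(np.max(X[dom_out] / X[k], axis=1))
    # units using no more of every input than unit k
    dom_in = np.all(X <= X[k], axis=1)
    expansion = np.max(np.min(Y[dom_in] / Y[k], axis=1))
    return input_eff, 1.0 / expansion
```

Since unit k always dominates itself, both scores are well defined and never exceed 1.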


Table 12.6: FDH efficiency scores of 458 US industries in 1996

                       Efficiency scores
        Industry      input       output
  1         2011    0.88724      0.94203
  2         2013    0.79505      0.80701
  3         2015    0.66933      0.62707
  4         2021          1            1
  ...        ...        ...          ...
  75        2298    0.80078       0.7439
  ...        ...        ...          ...
  458       3999    0.50809      0.47585

  STFnpa04.xpl


Part II

Insurance

13 Loss Distributions

Krzysztof Burnecki, Adam Misiorek, and Rafał Weron

13.1 Introduction

The derivation of loss distributions from insurance data is not an easy task. Insurers normally keep data files containing detailed information about policies and claims, which are used for accounting and rate-making purposes. However, claim size distributions and other data needed for risk-theoretical analyses can usually be obtained only after tedious data preprocessing. Moreover, the claim statistics are often limited. Data files containing detailed information about some policies and claims may be missing or corrupted. There may also be situations where prior data or experience are not available at all, e.g. when a new type of insurance is introduced or when very large special risks are insured. Then the distribution has to be based on knowledge of similar risks or on extrapolation of lesser risks.

There are three basic approaches to deriving the loss distribution: empirical, analytical, and moment based. The empirical method, presented in Section 13.2, can be used only when large data sets are available. In such cases a sufficiently smooth and accurate estimate of the cumulative distribution function (cdf) is obtained. Sometimes the application of curve fitting techniques – used to smooth the empirical distribution function – can be beneficial. If the curve can be described by a function with a tractable analytical form, then this approach becomes computationally efficient and similar to the second method.

The analytical approach is probably the one most often used in practice and certainly the most frequently adopted in the actuarial literature. It reduces to finding a suitable analytical expression which fits the observed data well and which is easy to handle. Basic characteristics and estimation issues for the most popular and useful loss distributions are discussed in Section 13.3. Note that


sometimes it may be helpful to subdivide the range of the claim size distribution into intervals for which different methods are employed. For example, the small and medium size claims could be described by the empirical claim size distribution, while the large claims – for which the scarcity of data eliminates the use of the empirical approach – by an analytical loss distribution.

In some applications the exact shape of the loss distribution is not required. We may then use the moment based approach, which consists of estimating only the lowest characteristics (moments) of the distribution, like the mean and variance. However, it should be kept in mind that even the lowest three or four moments do not fully define the shape of a distribution, and therefore the fit to the observed data may be poor. Further details on the moment based approach can be found e.g. in Daykin, Pentikainen, and Pesonen (1994).

Having a large collection of distributions to choose from, we need to narrow our selection to a single model and a unique parameter estimate. The type of the objective loss distribution can be easily selected by comparing the shapes of the empirical and theoretical mean excess functions. Goodness-of-fit can be verified by plotting the corresponding limited expected value functions. Finally, the hypothesis that the modeled random event is governed by a certain loss distribution can be statistically tested. In Section 13.4 these statistical issues are thoroughly discussed.

In Section 13.5 we apply the presented tools to modeling real-world insurance data. The analysis is conducted for two datasets: (i) the PCS (Property Claim Services) dataset covering losses resulting from catastrophic events in the USA that occurred between 1990 and 1999 and (ii) the Danish fire losses dataset, which concerns major fire losses that occurred between 1980 and 1990 and were recorded by Copenhagen Re.

13.2 Empirical Distribution Function

A natural estimate for the loss distribution is the observed (empirical) claim size distribution. However, if there have been changes in monetary values during the observation period, inflation corrected data should be used. For a sample of observations {x_1, ..., x_n} the empirical distribution function (edf) is defined as:

F_n(x) = \frac{1}{n} \#\{i : x_i \leq x\},   (13.1)


Figure 13.1: Left panel: Empirical distribution function (edf) of a 10-element log-normally distributed sample with parameters µ = 0.5 and σ = 0.5, see Section 13.3.1. Right panel: Approximation of the edf by a continuous, piecewise linear function (black solid line) and the theoretical distribution function (red dotted line). STFloss01.xpl

i.e. it is a piecewise constant function with jumps of size 1/n at points x_i. Very often, especially if the sample is large, the edf is approximated by a continuous, piecewise linear function with the "jump points" connected by linear functions, see Figure 13.1. The empirical distribution function approach is appropriate only when there is a sufficiently large volume of claim data. This is rarely the case for the tail of the distribution, especially in situations where exceptionally large claims are possible. It is often advisable to divide the range of relevant values of claims into two parts, treating the claim sizes up to some limit on a discrete basis, while the tail is replaced by an analytical cdf.
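Definition (13.1) translates directly into code. A minimal sketch of the piecewise constant estimator (the function name is illustrative):

```python
# Empirical distribution function (13.1) -- a minimal sketch:
# F_n(x) is the fraction of sample points not exceeding x, a step
# function with jumps of size 1/n at the observations.
def edf(sample):
    n = len(sample)
    def Fn(x):
        return sum(1 for xi in sample if xi <= x) / n
    return Fn
```

For the sample {1, 2, 2, 5}, F_n jumps by 1/4 at 1 and 5 and by 2/4 at 2.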

13.3 Analytical Methods

It is often desirable to ﬁnd an explicit analytical expression for a loss distribution. This is particularly the case if the claim statistics are too sparse to use the empirical approach. It should be stressed, however, that many standard models in statistics – like the Gaussian distribution – are unsuitable for ﬁtting the claim size distribution. The main reason for this is the strongly skewed nature of loss distributions. The log-normal, Pareto, Burr, Weibull, and gamma distributions are typical candidates for claim size distributions to be considered in applications.

13.3.1 Log-normal Distribution

Consider a random variable X which has the normal distribution with density

f_N(x) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}, \quad -\infty < x < \infty.   (13.2)

Let Y = e^X, so that X = \log Y. Then the probability density function of Y is given by:

f(y) = \frac{1}{y} f_N(\log y) = \frac{1}{\sqrt{2\pi}\sigma y} \exp\left\{-\frac{(\log y-\mu)^2}{2\sigma^2}\right\}, \quad y > 0,   (13.3)

where σ > 0 is the scale and −∞ < µ < ∞ is the location parameter. The distribution of Y is termed log-normal; however, sometimes it is called the Cobb-Douglas law, especially when applied to econometric data. The log-normal cdf is given by:

F(y) = \Phi\left(\frac{\log y - \mu}{\sigma}\right), \quad y > 0,   (13.4)

where Φ(·) is the standard normal (with mean 0 and variance 1) distribution function. The k-th raw moment m_k of the log-normal variate can be easily derived using results for normal random variables:

m_k = E(Y^k) = E(e^{kX}) = M_X(k) = \exp\left(\mu k + \frac{\sigma^2 k^2}{2}\right),   (13.5)


where M_X(z) is the moment generating function of the normal distribution. In particular, the mean and variance are

E(X) = \exp\left(\mu + \frac{\sigma^2}{2}\right),   (13.6)

Var(X) = \left\{\exp(\sigma^2) - 1\right\} \exp(2\mu + \sigma^2),   (13.7)

respectively. For both standard parameter estimation techniques the estimators are known in closed form. The method of moments estimators are given by:

\hat{\mu} = 2\log\left(\frac{1}{n}\sum_{i=1}^n x_i\right) - \frac{1}{2}\log\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right),   (13.8)

\hat{\sigma}^2 = \log\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) - 2\log\left(\frac{1}{n}\sum_{i=1}^n x_i\right),   (13.9)

while the maximum likelihood estimators are given by:

\hat{\mu} = \frac{1}{n}\sum_{i=1}^n \log(x_i),   (13.10)

\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n \left\{\log(x_i) - \hat{\mu}\right\}^2.   (13.11)

Finally, the generation of a log-normal variate is straightforward. We simply take the exponent of a normal variate. The log-normal distribution is very useful in modeling claim sizes. It is right-skewed, has a thick tail and fits many situations well. For small σ it resembles a normal distribution (see the left panel in Figure 13.2), although this is not always desirable. It is infinitely divisible and closed under scale and power transformations. However, it also suffers from some drawbacks. Most notably, the Laplace transform does not have a closed form representation and the moment generating function does not exist.
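Both estimator pairs are closed-form expressions in the sample moments, so they are a few lines of code. A minimal sketch of the method of moments (13.8)–(13.9) and maximum likelihood (13.10)–(13.11) estimators (function names are illustrative):

```python
# Closed-form log-normal parameter estimators -- a minimal sketch of the
# method of moments (13.8)-(13.9) and maximum likelihood (13.10)-(13.11).
import math

def lognormal_mom(xs):
    n = len(xs)
    m1 = sum(xs) / n                     # first sample raw moment
    m2 = sum(x * x for x in xs) / n      # second sample raw moment
    mu = 2 * math.log(m1) - 0.5 * math.log(m2)
    sigma2 = math.log(m2) - 2 * math.log(m1)
    return mu, sigma2

def lognormal_mle(xs):
    n = len(xs)
    mu = sum(math.log(x) for x in xs) / n
    sigma2 = sum((math.log(x) - mu) ** 2 for x in xs) / n
    return mu, sigma2
```

For a degenerate sample in which every observation equals e, both methods return µ̂ = 1 and σ̂² = 0, as expected.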

13.3.2 Exponential Distribution

Consider the random variable with the following density and distribution functions, respectively:

f(x) = \beta e^{-\beta x}, \quad x > 0,   (13.12)

F(x) = 1 - e^{-\beta x}, \quad x > 0.   (13.13)

Figure 13.2: Left panel: Log-normal probability density functions (pdfs) with parameters µ = 2 and σ = 1 (black solid line), µ = 2 and σ = 0.1 (red dotted line), and µ = 0.5 and σ = 2 (blue dashed line). Right panel: Exponential pdfs with parameter β = 0.5 (black solid line), β = 1 (red dotted line), and β = 5 (blue dashed line). STFloss02.xpl

This distribution is termed an exponential distribution with parameter (or intensity) β > 0. The Laplace transform of (13.12) is

L(t) \stackrel{\mathrm{def}}{=} \int_0^\infty e^{-tx} f(x)\,dx = \frac{\beta}{\beta+t}, \quad t > -\beta,   (13.14)

yielding the general formula for the k-th raw moment

m_k \stackrel{\mathrm{def}}{=} (-1)^k \left.\frac{\partial^k L(t)}{\partial t^k}\right|_{t=0} = \frac{k!}{\beta^k}.   (13.15)

The mean and variance are thus β^{-1} and β^{-2}, respectively. The maximum likelihood estimator (equal to the method of moments estimator) for β is given by:

\hat{\beta} = \frac{1}{\hat{m}_1},   (13.16)


where

\hat{m}_k = \frac{1}{n}\sum_{i=1}^n x_i^k,   (13.17)

is the sample k-th raw moment. To generate an exponential random variable X with intensity β we can use the inverse transform method (L'Ecuyer, 2004; Ross, 2002). The method consists of taking a random number U distributed uniformly on the interval (0,1) and setting X = F^{-1}(U), where F^{-1}(x) = -\frac{1}{\beta}\log(1-x) is the inverse of the exponential cdf (13.13). In fact we can set X = -\frac{1}{\beta}\log U, since 1-U has the same distribution as U.

The exponential distribution has many interesting features. For example, it has the memoryless property, i.e. P(X > x+y \mid X > y) = P(X > x). It also arises as the distribution of the inter-occurrence times of the events in a Poisson process, see Chapter 14. The n-th root of the Laplace transform (13.14) is

\left\{L(t)\right\}^{1/n} = \left(\frac{\beta}{\beta+t}\right)^{1/n},   (13.18)

which is the Laplace transform of a gamma variate (see Section 13.3.6). Thus the exponential distribution is infinitely divisible.

The exponential distribution is often used in developing models of insurance risks. This usefulness stems in large part from its many and varied tractable mathematical properties. However, a disadvantage of the exponential distribution is that its density is monotone decreasing (see the right panel in Figure 13.2), a situation which may not be appropriate in some practical situations.
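The inverse transform recipe described above can be sketched in a few lines (the function names are illustrative):

```python
# Inverse transform sampling for the exponential law -- a minimal sketch:
# X = F^{-1}(U) = -log(1 - U)/beta for U uniform on (0, 1).
import math
import random

def exponential_inverse_cdf(u, beta):
    """Inverse of the exponential cdf (13.13)."""
    return -math.log(1.0 - u) / beta

def sample_exponential(beta, rng=random):
    # 1 - random() lies in (0, 1], which keeps the logarithm finite
    return -math.log(1.0 - rng.random()) / beta
```

With β = 2 the median of the distribution is log(2)/2, which the inverse cdf returns at u = 0.5, and a large simulated sample has mean close to 1/β = 0.5.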

13.3.3 Pareto Distribution

Suppose that a variate X has (conditional on β) an exponential distribution with mean β^{-1}. Further, suppose that β itself has a gamma distribution (see Section 13.3.6). The unconditional distribution of X is a mixture and is called the Pareto distribution. Moreover, it can be shown that if X is an exponential random variable and Y is a gamma random variable, then X/Y is a Pareto random variable.


The density and distribution functions of a Pareto variate are given by:

f(x) = \frac{\alpha \lambda^\alpha}{(\lambda+x)^{\alpha+1}}, \quad x > 0,   (13.19)

F(x) = 1 - \left(\frac{\lambda}{\lambda+x}\right)^\alpha, \quad x > 0,   (13.20)

respectively. Clearly, the shape parameter α and the scale parameter λ are both positive. The k-th raw moment:

m_k = \lambda^k k! \frac{\Gamma(\alpha-k)}{\Gamma(\alpha)},   (13.21)

exists only for k < α. In the above formula

\Gamma(a) \stackrel{\mathrm{def}}{=} \int_0^\infty y^{a-1} e^{-y}\,dy,   (13.22)

is the standard gamma function. The mean and variance are thus:

E(X) = \frac{\lambda}{\alpha-1},   (13.23)

Var(X) = \frac{\alpha\lambda^2}{(\alpha-1)^2(\alpha-2)},   (13.24)

respectively. Note that the mean exists only for α > 1 and the variance only for α > 2. Hence, the Pareto distribution has very thick (or heavy) tails, see Figure 13.3. The method of moments estimators are given by:

\hat{\alpha} = \frac{2(\hat{m}_2 - \hat{m}_1^2)}{\hat{m}_2 - 2\hat{m}_1^2},   (13.25)

\hat{\lambda} = \frac{\hat{m}_1 \hat{m}_2}{\hat{m}_2 - 2\hat{m}_1^2},   (13.26)

where, as before, \hat{m}_k is the sample k-th raw moment (13.17). Note that the estimators are well defined only when \hat{m}_2 - 2\hat{m}_1^2 > 0. Unfortunately, there are no closed form expressions for the maximum likelihood estimators and they can only be evaluated numerically.

Like for many other distributions the simulation of a Pareto variate X can be conducted via the inverse transform method. The inverse of the cdf (13.20) has a simple analytical form F^{-1}(x) = \lambda\left\{(1-x)^{-1/\alpha} - 1\right\}. Hence, we can


Figure 13.3: Left panel: Pareto pdfs with parameters α = 0.5 and λ = 2 (black solid line), α = 2 and λ = 0.5 (red dotted line), and α = 2 and λ = 1 (blue dashed line). Right panel: The same Pareto densities on a double logarithmic plot. The thick power-law tails of the Pareto distribution are clearly visible. STFloss03.xpl

set X = \lambda\left(U^{-1/\alpha} - 1\right), where U is distributed uniformly on the unit interval. We have to be cautious, however, when α is larger than but very close to one. The theoretical mean exists, but the right tail is very heavy. The sample mean will, in general, be significantly lower than E(X).

The Pareto law is very useful in modeling claim sizes in insurance, due in large part to its extremely thick tail. Its main drawback lies in its lack of mathematical tractability in some situations. Like for the log-normal distribution, the Laplace transform does not have a closed form representation and the moment generating function does not exist. Moreover, like the exponential pdf the Pareto density (13.19) is monotone decreasing, which may not be adequate in some practical situations.
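A minimal sketch of Pareto simulation by the inverse transform and of the moment estimators (13.25)–(13.26); the guard reflects the well-definedness condition m̂₂ − 2m̂₁² > 0 noted above, and the function names are illustrative:

```python
# Pareto method of moments (13.25)-(13.26) and inverse transform sampling,
# X = lambda * (U^(-1/alpha) - 1) -- a minimal sketch.
import random

def pareto_mom(xs):
    n = len(xs)
    m1 = sum(xs) / n
    m2 = sum(x * x for x in xs) / n
    d = m2 - 2 * m1 * m1
    if d <= 0:
        raise ValueError("estimators undefined: m2 - 2*m1^2 <= 0")
    return 2 * (m2 - m1 * m1) / d, m1 * m2 / d   # (alpha_hat, lambda_hat)

def sample_pareto(alpha, lam, rng=random):
    u = 1.0 - rng.random()   # uniform on (0, 1]
    return lam * (u ** (-1.0 / alpha) - 1.0)
```

The guard also signals samples that are too light-tailed for a Pareto moment fit.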

13.3.4 Burr Distribution

Experience has shown that the Pareto formula is often an appropriate model for the claim size distribution, particularly where exceptionally large claims may occur. However, there is sometimes a need to find heavy tailed distributions which offer greater flexibility than the Pareto law, including a non-monotone pdf. Such flexibility is provided by the Burr distribution and its additional shape parameter τ > 0. If Y has the Pareto distribution, then the distribution of X = Y^{1/\tau} is known as the Burr distribution, see the left panel in Figure 13.4. Its density and distribution functions are given by:

f(x) = \tau\alpha\lambda^\alpha \frac{x^{\tau-1}}{(\lambda+x^\tau)^{\alpha+1}}, \quad x > 0,   (13.27)

F(x) = 1 - \left(\frac{\lambda}{\lambda+x^\tau}\right)^\alpha, \quad x > 0,   (13.28)

respectively. The k-th raw moment

m_k = \frac{1}{\Gamma(\alpha)} \lambda^{k/\tau} \Gamma\left(1+\frac{k}{\tau}\right) \Gamma\left(\alpha-\frac{k}{\tau}\right),   (13.29)

exists only for k < τα. Naturally, the Laplace transform does not exist in a closed form and the distribution has no moment generating function, as was the case with the Pareto distribution. The maximum likelihood and method of moments estimators for the Burr distribution can only be evaluated numerically.

A Burr variate X can be generated using the inverse transform method. The inverse of the cdf (13.28) has a simple analytical form F^{-1}(x) = \left[\lambda\left\{(1-x)^{-1/\alpha} - 1\right\}\right]^{1/\tau}. Hence, we can set X = \left\{\lambda\left(U^{-1/\alpha} - 1\right)\right\}^{1/\tau}, where U is distributed uniformly on the unit interval. Like in the Pareto case, we have to be cautious when τα is larger than but very close to one. The theoretical mean exists, but the right tail is very heavy. The sample mean will, in general, be significantly lower than E(X).

13.3.5 Weibull Distribution

If V is an exponential variate, then the distribution of X = V^{1/\tau}, τ > 0, is called the Weibull (or Frechet) distribution. Its density and distribution


Figure 13.4: Left panel: Burr pdfs with parameters α = 0.5, λ = 2 and τ = 1.5 (black solid line), α = 0.5, λ = 0.5 and τ = 5 (red dotted line), and α = 2, λ = 1 and τ = 0.5 (blue dashed line). Right panel: Weibull pdfs with parameters β = 1 and τ = 0.5 (black solid line), β = 1 and τ = 2 (red dotted line), and β = 0.01 and τ = 6 (blue dashed line). STFloss04.xpl

functions are given by:

f(x) = \tau\beta x^{\tau-1} e^{-\beta x^\tau}, \quad x > 0,   (13.30)

F(x) = 1 - e^{-\beta x^\tau}, \quad x > 0,   (13.31)

respectively. The Weibull distribution is roughly symmetrical for the shape parameter τ ≈ 3.6. When τ is smaller the distribution is right-skewed, when τ is larger it is left-skewed, see the right panel in Figure 13.4. The k-th raw moment can be shown to be

m_k = \beta^{-k/\tau} \Gamma\left(1+\frac{k}{\tau}\right).   (13.32)

Like for the Burr distribution, the maximum likelihood and method of moments estimators can only be evaluated numerically. Similarly, Weibull variates can be generated using the inverse transform method.

13.3.6 Gamma Distribution

The probability law with density and distribution functions given by:

f(x) = \beta(\beta x)^{\alpha-1} \frac{e^{-\beta x}}{\Gamma(\alpha)}, \quad x > 0,   (13.33)

F(x) = \int_0^x \beta(\beta s)^{\alpha-1} \frac{e^{-\beta s}}{\Gamma(\alpha)}\,ds, \quad x > 0,   (13.34)

where α and β are non-negative, is known as a gamma (or a Pearson's Type III) distribution, see the left panel in Figure 13.5. Moreover, for β = 1 the integral in (13.34):

\Gamma(\alpha, x) \stackrel{\mathrm{def}}{=} \frac{1}{\Gamma(\alpha)} \int_0^x s^{\alpha-1} e^{-s}\,ds,   (13.35)

is called the incomplete gamma function. If the shape parameter α = 1, the exponential distribution results. If α is a positive integer, the distribution is termed an Erlang law. If β = 1/2 and α = ν/2 then it is termed a chi-squared (χ²) distribution with ν degrees of freedom. Moreover, a mixed Poisson distribution with gamma mixing distribution is negative binomial, see Chapter 18.

The Laplace transform of the gamma distribution is given by:

L(t) = \left(\frac{\beta}{\beta+t}\right)^\alpha, \quad t > -\beta.   (13.36)

The k-th raw moment can be easily derived from the Laplace transform:

m_k = \frac{\Gamma(\alpha+k)}{\Gamma(\alpha)\beta^k}.   (13.37)

Hence, the mean and variance are

E(X) = \frac{\alpha}{\beta},   (13.38)

Var(X) = \frac{\alpha}{\beta^2}.   (13.39)

Finally, the method of moments estimators for the gamma distribution parameters have closed form expressions:

\hat{\alpha} = \frac{\hat{m}_1^2}{\hat{m}_2 - \hat{m}_1^2},   (13.40)

\hat{\beta} = \frac{\hat{m}_1}{\hat{m}_2 - \hat{m}_1^2},   (13.41)


Figure 13.5: Left panel: Gamma pdfs with parameters α = 1 and β = 2 (black solid line), α = 2 and β = 1 (red dotted line), and α = 3 and β = 0.5 (blue dashed line). Right panel: Densities of two exponential distributions with parameters β1 = 0.5 (red dotted line) and β2 = 0.1 (blue dashed line) and of their mixture with the mixing parameter a = 0.5 (black solid line). STFloss05.xpl

but maximum likelihood estimators can only be evaluated numerically. Simulation of gamma variates is not as straightforward as for the distributions presented above. For α < 1 a simple but slow algorithm due to Jöhnk (1964) can be used, while for α > 1 the rejection method is more efficient (Bratley, Fox, and Schrage, 1987; Devroye, 1986).

The gamma distribution is closed under convolution, i.e. a sum of independent gamma variates with the same parameter β is again gamma distributed with this β. Hence, it is infinitely divisible. Moreover, it is right-skewed and approaches a normal distribution in the limit as α goes to infinity.

The gamma law is one of the most important distributions for modeling because it has very tractable mathematical properties. As we have seen above, it is also very useful in creating other distributions, but by itself it is rarely a reasonable model for insurance claim sizes.
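The closed-form method of moments estimators (13.40)–(13.41) translate into a few lines of code; a minimal sketch (the function name is illustrative):

```python
# Gamma method of moments -- a minimal sketch of (13.40)-(13.41):
# alpha_hat = m1^2 / (m2 - m1^2),  beta_hat = m1 / (m2 - m1^2).
def gamma_mom(xs):
    n = len(xs)
    m1 = sum(xs) / n                    # first sample raw moment
    m2 = sum(x * x for x in xs) / n     # second sample raw moment
    v = m2 - m1 * m1                    # (biased) sample variance
    return m1 * m1 / v, m1 / v
```

For the sample {1, 2, 3} the sample mean is 2 and the biased variance is 2/3, giving α̂ = 6 and β̂ = 3, consistent with E(X) = α/β and Var(X) = α/β².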

13.3.7 Mixture of Exponential Distributions

Let a_1, a_2, ..., a_n denote a series of non-negative weights satisfying \sum_{i=1}^n a_i = 1. Let F_1(x), F_2(x), ..., F_n(x) denote an arbitrary sequence of exponential distribution functions given by the parameters β_1, β_2, ..., β_n, respectively. Then the distribution function:

F(x) = \sum_{i=1}^n a_i F_i(x) = \sum_{i=1}^n a_i \left\{1 - \exp(-\beta_i x)\right\},   (13.42)

is called a mixture of n exponential distributions (exponentials). The density function of the constructed distribution is

f(x) = \sum_{i=1}^n a_i f_i(x) = \sum_{i=1}^n a_i \beta_i \exp(-\beta_i x),   (13.43)

where f_1(x), f_2(x), ..., f_n(x) denote the density functions of the input exponential distributions. Note that the mixing procedure can be applied to arbitrary distributions. Using the technique of mixing, one can construct a wide class of distributions. The most commonly used in applications is a mixture of two exponentials, see Chapter 15. In the right panel of Figure 13.5 the pdf of a mixture of two exponentials is plotted together with the pdfs of the mixing laws. The Laplace transform of (13.43) is

L(t) = \sum_{i=1}^n a_i \frac{\beta_i}{\beta_i + t}, \quad t > -\min_{i=1,\dots,n}\{\beta_i\},   (13.44)

yielding the general formula for the k-th raw moment

m_k = \sum_{i=1}^n a_i \frac{k!}{\beta_i^k}.   (13.45)

The mean is thus \sum_{i=1}^n a_i \beta_i^{-1}. The maximum likelihood and method of moments estimators for the mixture of n (n ≥ 2) exponential distributions can only be evaluated numerically. Simulation of variates defined by (13.42) can be performed using the composition approach (Ross, 2002). First generate a random variable I, equal to i with probability a_i, i = 1, ..., n. Then simulate an exponential variate with intensity β_I. Note that the method is general in the sense that it can be used for any set of distributions F_i.
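The two-step composition recipe just described can be sketched as follows (names are illustrative):

```python
# Composition sampling for a mixture of exponentials -- a minimal sketch:
# first draw the component index I with P(I = i) = a_i, then draw an
# exponential variate with intensity beta_I by the inverse transform.
import math
import random

def sample_exp_mixture(weights, betas, rng=random):
    u, acc, i = rng.random(), 0.0, 0
    for i, a in enumerate(weights):   # pick component I
        acc += a
        if u < acc:
            break
    return -math.log(1.0 - rng.random()) / betas[i]
```

With a degenerate weight vector the mixture collapses to a single exponential, so the sample mean of many draws approaches 1/β of the selected component.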

13.4 Statistical Validation Techniques

Having a large collection of distributions to choose from, we need to narrow our selection to a single model and a unique parameter estimate. The type of the objective loss distribution can be easily selected by comparing the shapes of the empirical and theoretical mean excess functions. The mean excess function, presented in Section 13.4.1, is based on the idea of conditioning a random variable given that it exceeds a certain level. Once the distribution class is selected and the parameters are estimated using one of the available methods, the goodness-of-fit has to be tested. Probably the most natural approach consists of measuring the distance between the empirical and the fitted analytical distribution function. A group of statistics and tests based on this idea is discussed in Section 13.4.2. However, when using these tests we face the problem of comparing a discontinuous step function with a continuous non-decreasing curve. The two functions will always differ from each other in the vicinity of a step by at least half the size of the step. This problem can be overcome by integrating both distributions once, which leads to the so-called limited expected value function introduced in Section 13.4.3.

13.4.1 Mean Excess Function

For a claim amount random variable X, the mean excess function or mean residual life function is the expected payment per claim on a policy with a fixed amount deductible of x, where claims with amounts less than or equal to x are completely ignored:

e(x) = E(X - x \mid X > x) = \frac{\int_x^\infty \{1 - F(u)\}\,du}{1 - F(x)}.   (13.46)

In practice, the mean excess function e is estimated by \hat{e}_n based on a representative sample x_1, ..., x_n:

\hat{e}_n(x) = \frac{\sum_{x_i > x} x_i}{\#\{i : x_i > x\}} - x.   (13.47)

Note that in a financial risk management context, switching from the right tail to the left tail, e(x) is referred to as the expected shortfall (Weron, 2004).

When considering the shapes of mean excess functions, the exponential distribution plays a central role. It has the memoryless property, meaning that


whether the information X > x is given or not, the expected value of X − x is the same as if one started at x = 0 and calculated E(X). The mean excess function for the exponential distribution is therefore constant. One in fact easily calculates that for this case e(x) = 1/β for all x > 0.

If the distribution of X is heavier-tailed than the exponential distribution we find that the mean excess function ultimately increases; when it is lighter-tailed, e(x) ultimately decreases. Hence, the shape of e(x) provides important information on the sub-exponential or super-exponential nature of the tail of the distribution at hand. Mean excess functions and first order approximations to the tail for the distributions discussed in Section 13.3 are given by the following formulas:

• log-normal distribution:

e(x) = \frac{\exp\left(\mu+\frac{\sigma^2}{2}\right)\left\{1-\Phi\left(\frac{\ln x-\mu-\sigma^2}{\sigma}\right)\right\}}{1-\Phi\left(\frac{\ln x-\mu}{\sigma}\right)} - x = \frac{\sigma^2 x}{\ln x-\mu}\{1+o(1)\},

where o(1) stands for a term which tends to zero as x → ∞;

• exponential distribution:

e(x) = \frac{1}{\beta};

• Pareto distribution:

e(x) = \frac{\lambda+x}{\alpha-1}, \quad \alpha > 1;

• Burr distribution:

e(x) = \frac{\lambda^{1/\tau}\,\Gamma\left(\alpha-\frac{1}{\tau}\right)\Gamma\left(1+\frac{1}{\tau}\right)}{\Gamma(\alpha)} \left(\frac{\lambda}{\lambda+x^\tau}\right)^{-\alpha} \left\{1-B\left(1+\frac{1}{\tau},\,\alpha-\frac{1}{\tau},\,\frac{x^\tau}{\lambda+x^\tau}\right)\right\} - x = \frac{x}{\alpha\tau-1}\{1+o(1)\}, \quad \alpha\tau > 1,

where Γ(·) is the standard gamma function (13.22) and

B(a,b,x) \stackrel{\mathrm{def}}{=} \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_0^x y^{a-1}(1-y)^{b-1}\,dy,   (13.48)

is the beta function;

• Weibull distribution:

e(x) = \frac{\Gamma\left(1+\frac{1}{\tau}\right)}{\beta^{1/\tau}} \exp(\beta x^\tau) \left\{1-\Gamma\left(1+\frac{1}{\tau},\,\beta x^\tau\right)\right\} - x = \frac{x^{1-\tau}}{\beta\tau}\{1+o(1)\},

where Γ(·,·) is the incomplete gamma function (13.35);

• gamma distribution:

e(x) = \frac{\alpha}{\beta}\cdot\frac{1-F(x,\alpha+1,\beta)}{1-F(x,\alpha,\beta)} - x = \beta^{-1}\{1+o(1)\},

where F(x, α, β) is the gamma distribution function (13.34);

• mixture of two exponential distributions:

e(x) = \frac{\frac{a}{\beta_1}\exp(-\beta_1 x) + \frac{1-a}{\beta_2}\exp(-\beta_2 x)}{a\exp(-\beta_1 x) + (1-a)\exp(-\beta_2 x)},

where a is the mixing weight.

Selected shapes are also sketched in Figure 13.6.
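The empirical estimator (13.47) is one line of arithmetic; a minimal sketch (the function name is illustrative):

```python
# Empirical mean excess function (13.47) -- a minimal sketch: the average
# exceedance over x among observations strictly larger than x.
def mean_excess(sample, x):
    exceed = [xi for xi in sample if xi > x]
    if not exceed:
        raise ValueError("no observations above x")
    return sum(exceed) / len(exceed) - x
```

Plotting this estimator over a grid of x values and comparing its shape with the theoretical curves above is the model-selection device described in this section.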

13.4.2 Tests Based on the Empirical Distribution Function

A statistic measuring the difference between the empirical F_n(x) and the fitted F(x) distribution function, called an edf statistic, is based on the vertical difference between the distributions. This distance is usually measured either by a supremum or by a quadratic norm (D'Agostino and Stephens, 1986).

Figure 13.6: Left panel: Shapes of the mean excess function e(x) for the log-normal (green dashed line), gamma with α < 1 (red dotted line), gamma with α > 1 (black solid line) and a mixture of two exponential distributions (blue long-dashed line). Right panel: Shapes of the mean excess function e(x) for the Pareto (green dashed line), Burr (blue long-dashed line), Weibull with τ < 1 (black solid line) and Weibull with τ > 1 (red dotted line) distributions. STFloss06.xpl

The most well-known supremum statistic:

D = \sup_x |F_n(x) - F(x)|,   (13.49)

is known as the Kolmogorov or Kolmogorov-Smirnov statistic. It can also be written in terms of two supremum statistics:

D^+ = \sup_x \{F_n(x) - F(x)\} \quad \text{and} \quad D^- = \sup_x \{F(x) - F_n(x)\},

where the former is the largest vertical difference when F_n(x) is larger than F(x) and the latter is the largest vertical difference when it is smaller. The Kolmogorov statistic is then given by D = max(D^+, D^-). A closely related statistic proposed by Kuiper is simply the sum of the two differences, i.e. V = D^+ + D^-.


The second class of measures of discrepancy is given by the Cramér-von Mises family

Q = n ∫_{−∞}^{∞} {Fn(x) − F(x)}² ψ(x) dF(x),   (13.50)

where ψ(x) is a suitable function which gives weights to the squared difference {Fn(x) − F(x)}². When ψ(x) = 1 we obtain the W² statistic of Cramér-von Mises. When ψ(x) = [F(x){1 − F(x)}]^{−1} formula (13.50) yields the A² statistic of Anderson and Darling.

From the definitions of the statistics given above, suitable computing formulas must be found. This can be done by utilizing the transformation Z = F(X). When F(x) is the true distribution function of X, the random variable Z is uniformly distributed on the unit interval. Suppose that a sample x1, . . . , xn gives values zi = F(xi), i = 1, . . . , n. It can be easily shown that, for values z and x related by z = F(x), the corresponding vertical differences in the edf diagrams for X and for Z are equal. Consequently, edf statistics calculated from the empirical distribution function of the zi's compared with the uniform distribution will take the same values as if they were calculated from the empirical distribution function of the xi's, compared with F(x). This leads to the following formulas given in terms of the order statistics z(1) < z(2) < · · · < z(n):

D+ = max_{1≤i≤n} { i/n − z(i) },   (13.51)

D− = max_{1≤i≤n} { z(i) − (i − 1)/n },   (13.52)

D = max(D+, D−),   (13.53)

V = D+ + D−,   (13.54)

W² = Σ_{i=1}^{n} { z(i) − (2i − 1)/(2n) }² + 1/(12n),   (13.55)

A² = −n − (1/n) Σ_{i=1}^{n} (2i − 1) { log z(i) + log(1 − z(n+1−i)) }   (13.56)

   = −n − (1/n) Σ_{i=1}^{n} { (2i − 1) log z(i) + (2n + 1 − 2i) log(1 − z(i)) }.   (13.57)
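The computing formulas (13.51)–(13.57) map directly into a few lines of code. The sketch below is in Python rather than in the XploRe of the book's quantlets; the function name is ours.

```python
import numpy as np

def edf_statistics(z):
    """Edf statistics (13.51)-(13.57) from the transformed values
    z_i = F(x_i), which are uniform on (0, 1) under the null."""
    z = np.sort(np.asarray(z, dtype=float))
    n = len(z)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - z)                                # (13.51)
    d_minus = np.max(z - (i - 1) / n)                         # (13.52)
    d = max(d_plus, d_minus)                                  # Kolmogorov-Smirnov, (13.53)
    v = d_plus + d_minus                                      # Kuiper, (13.54)
    w2 = np.sum((z - (2 * i - 1) / (2 * n)) ** 2) + 1 / (12 * n)        # (13.55)
    a2 = -n - np.mean((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1])))  # (13.56)
    return d, v, w2, a2
```

Note that z[::-1] reverses the order statistics, so its i-th element is z(n+1−i) and the last line is exactly (13.56).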


The general test of fit is structured as follows. The null hypothesis is that a specific distribution is acceptable, whereas the alternative is that it is not:

H0: Fn(x) = F(x; θ),   H1: Fn(x) ≠ F(x; θ),

where θ is a vector of known parameters. Small values of the test statistic T are evidence in favor of the null hypothesis, large ones indicate its falsity. To see how unlikely such a large outcome would be if the null hypothesis were true, we calculate the p-value:

p-value = P(T ≥ t),   (13.58)

where t is the test value for a given sample. It is typical to reject the null hypothesis when a small p-value is obtained.

However, we are in a situation where we want to test the hypothesis that the sample has a common distribution function F(x; θ) with unknown θ. To employ any of the edf tests we first need to estimate the parameters. It is important to recognize, however, that when the parameters are estimated from the data, the critical values for the tests of the uniform distribution (or equivalently of a fully specified distribution) must be reduced. In other words, if the value of the test statistic T is d, then the p-value is overestimated by PU(T ≥ d). Here PU indicates that the probability is computed under the assumption of a uniformly distributed sample. Hence, if PU(T ≥ d) is small, then the p-value will be even smaller and the hypothesis will be rejected. However, if it is large then we have to obtain a more accurate estimate of the p-value.

Ross (2002) advocates the use of Monte Carlo simulations in this context. First the parameter vector is estimated for a given sample of size n, yielding θ̂, and the edf test statistic is calculated assuming that the sample is distributed according to F(x; θ̂), returning a value of d. Next, a sample of size n of F(x; θ̂)-distributed variates is generated. The parameter vector is estimated for this simulated sample, yielding θ̂1, and the edf test statistic is calculated assuming that the sample is distributed according to F(x; θ̂1). The simulation is repeated as many times as required to achieve a certain level of accuracy. The estimate of the p-value is obtained as the proportion of times that the test quantity is at least as large as d.

An alternative solution to the problem of unknown parameters was proposed by Stephens (1978). The half-sample approach consists of using only half the data to estimate the parameters, but then using the entire data set to conduct the


test. In this case, the critical values for the uniform distribution can be applied, at least asymptotically. The quadratic edf tests seem to converge fairly rapidly to their asymptotic distributions (D'Agostino and Stephens, 1986). Although the method is much faster than the Monte Carlo approach, it is not invariant – depending on the choice of the half-samples different test values will be obtained and there is no way of increasing the accuracy.

As a by-product, the edf tests supply us with a natural technique of estimating the parameter vector θ. We can simply find a θ̂* that minimizes a selected edf statistic. Out of the four statistics presented, A² is the most powerful when the fitted distribution departs from the true distribution in the tails (D'Agostino and Stephens, 1986). Since the fit in the tails is of crucial importance in most actuarial applications, A² is the recommended statistic for the estimation scheme.
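The Monte Carlo recipe of Ross (2002) described above is easy to code. The Python sketch below is ours, not the book's: it uses the Kolmogorov statistic D as the test quantity and an exponential fit purely for illustration, because the maximum likelihood estimator β̂ = 1/x̄ is then explicit.

```python
import numpy as np

def ks_statistic(z):
    # D = max(D+, D-) computed from z_i = F(x_i), cf. (13.51)-(13.53)
    z = np.sort(z)
    i = np.arange(1, len(z) + 1)
    return max(np.max(i / len(z) - z), np.max(z - (i - 1) / len(z)))

def mc_pvalue(x, n_sim=1000, rng=None):
    """Monte Carlo p-value for an exponential fit with estimated beta."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n = len(x)
    beta = 1 / x.mean()                       # MLE for the observed sample
    d = ks_statistic(1 - np.exp(-beta * x))   # test value d for the sample
    count = 0
    for _ in range(n_sim):
        y = rng.exponential(1 / beta, n)      # sample of F(x; beta_hat) variates
        beta_1 = 1 / y.mean()                 # re-estimate on the simulated sample
        count += ks_statistic(1 - np.exp(-beta_1 * y)) >= d
    return count / n_sim                      # proportion of simulated D >= d
```

The re-estimation step inside the loop is essential; skipping it would reproduce the overestimated p-value PU(T ≥ d) discussed above.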

13.4.3 Limited Expected Value Function

The limited expected value function L of a claim size variable X, or of the corresponding cdf F(x), is defined by

L(x) = E{min(X, x)} = ∫_0^x y dF(y) + x {1 − F(x)},   x > 0.   (13.59)

The value of the function L at point x is equal to the expectation of the cdf F(x) truncated at this point. In other words, it represents the expected amount per claim retained by the insured on a policy with a fixed amount deductible of x. The empirical estimate is defined as follows:

L̂n(x) = (1/n) { Σ_{xj<x} xj + Σ_{xj≥x} x }.   (13.60)

In order to fit the limited expected value function L of an analytical distribution to the observed data, the estimate L̂n is first constructed. Thereafter one tries to find a suitable analytical cdf F, such that the corresponding limited expected value function L is as close to the observed L̂n as possible.

The limited expected value function has the following important properties:

1. the graph of L is concave, continuous and increasing;


2. L(x) → E(X), as x → ∞;

3. F(x) = 1 − L′(x), where L′(x) is the derivative of the function L at point x; if F is discontinuous at x, then the equality holds true for the right-hand derivative L′(x+).

A reason why the limited expected value function is a particularly suitable tool for our purposes is that it represents the claim size distribution in the monetary dimension. For example, we have L(∞) = E(X) if it exists. The cdf F, on the other hand, operates on the probability scale, i.e. takes values between 0 and 1. Therefore, it is usually difficult to see, by looking only at F(x), how sensitive the price for the insurance – the premium – is to changes in the values of F, while the limited expected value function shows immediately how different parts of the claim size cdf contribute to the premium (see Chapter 19 for information on various premium calculation principles). Apart from curve-fitting purposes, the function L will turn out to be a very useful concept in dealing with deductibles in Chapter 19. It is also worth mentioning that there exists a connection between the limited expected value function and the mean excess function:

E(X) = L(x) + P(X > x) e(x).   (13.61)

The limited expected value functions for all distributions considered in this chapter are given by:

• log-normal distribution:

L(x) = exp(µ + σ²/2) Φ( (ln x − µ − σ²)/σ ) + x { 1 − Φ( (ln x − µ)/σ ) };

• exponential distribution:

L(x) = (1/β) {1 − exp(−βx)};

• Pareto distribution:

L(x) = { λ − λ^α (λ + x)^{1−α} } / (α − 1);


• Burr distribution:

λ1/τ Γ α − τ1 Γ 1 + τ1 1 1 xτ L(x) = B 1 + ,α − ; Γ(α) τ τ λ + xτ α λ ; + x λ + xτ

• Weibull distribution: L(x)

α Γ (1 + 1/τ ) 1 α + xe−βx ; Γ 1 + , βx τ β 1/τ

=

• gamma distribution: L(x) =

α F (x, α + 1, β) + x {1 − F (x, α, β)} ; β

• mixture of two exponential distributions: L(x) =

1−a a {1 − exp (−β1 x)} + {1 − exp (−β2 x)} . β1 β2

From the curve-fitting point of view the use of the limited expected value function has the advantage, compared with the use of the cdfs, that both the analytical function L and the corresponding observed function L̂n, based on the observed discrete cdf, are continuous and concave, whereas the observed claim size cdf Fn is a discontinuous step function. Property 3 implies that the limited expected value function determines the corresponding cdf uniquely. When the limited expected value functions of two distributions are close to each other, not only are the mean values of the distributions close to each other, but the whole distributions as well.
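The empirical estimate (13.60) and its analytical counterparts are straightforward to implement. A Python sketch (function names ours; the exponential case serves as the analytical example):

```python
import numpy as np

def levf_empirical(x, t):
    """Empirical limited expected value function (13.60) at point t."""
    x = np.asarray(x, dtype=float)
    # claims below t enter in full; claims at or above t are capped at t
    return (x[x < t].sum() + t * np.sum(x >= t)) / len(x)

def levf_exponential(t, beta):
    """L(t) = {1 - exp(-beta*t)}/beta for the exponential distribution."""
    return (1 - np.exp(-beta * t)) / beta
```

Comparing levf_empirical on a grid of points with the analytical L of a fitted candidate reproduces diagnostics of the kind shown in Figure 13.8.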

13.5 Applications

In this section we illustrate some of the methods described earlier in the chapter. We conduct the analysis for two datasets. The first is the PCS (Property Claim Services, see Insurance Services Office Inc. (ISO) web site: www.iso.com/products/2800/prod2801.html) dataset covering losses resulting from natural catastrophic events in the USA that occurred between 1990 and 1999.


The second is the Danish fire losses dataset, which concerns major fire losses in Danish Krone (DKK) that occurred between 1980 and 1990 and were recorded by Copenhagen Re. Here we consider only losses in profits. The overall fire losses were analyzed by Embrechts, Klüppelberg, and Mikosch (1997).

The Danish fire losses dataset has already been adjusted for inflation. However, the PCS dataset consists of raw data. Since the data have been collected over a considerable period of time, it is important to bring the values onto a common basis by means of a suitably chosen index. The choice of the index depends on the line of insurance. For example, an index of the cost of construction prices may be suitable for fire and other property insurance, an earnings index for life and accident insurance, and a general price index may be appropriate when a single index is required for several lines or for the whole portfolio. Here we adjust the PCS dataset using the Consumer Price Index provided by the U.S. Department of Labor. Note that the same raw catastrophe data, adjusted instead using the discount window borrowing rate (the simple interest rate at which depository institutions borrow from the Federal Reserve Bank of New York), were analyzed by Burnecki, Härdle, and Weron (2004). A related dataset containing the national and regional PCS indices for losses resulting from catastrophic events in the USA was studied by Burnecki, Kukla, and Weron (2000).

As suggested in the preceding section, we first look for the appropriate shape of the distribution. To this end we plot the empirical mean excess functions for the analyzed datasets, see Figure 13.7. Both in the case of the PCS natural catastrophe losses and the Danish fire losses the data show a super-exponential pattern, suggesting a log-normal, Pareto or Burr distribution as most adequate for modeling. Hence, in the sequel we calibrate these three distributions.
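The empirical mean excess function plotted in Figure 13.7 is simply the average excess over each threshold. A minimal Python sketch (the function name is ours):

```python
import numpy as np

def mean_excess(x, thresholds):
    """Empirical mean excess e_n(u): average of (x_i - u) over all x_i > u."""
    x = np.asarray(x, dtype=float)
    return np.array([(x[x > u] - u).mean() for u in thresholds])
```

Plotting mean_excess(losses, grid) against the threshold grid reveals the shape of the tail: a roughly linear increase is consistent with the super-exponential (heavy-tailed) pattern described above.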
We apply two estimation schemes: maximum likelihood and A² statistic minimization. Out of the three fitted distributions only the log-normal has closed-form expressions for the maximum likelihood estimators. Parameter calibration for the remaining distributions and for the A² minimization scheme is carried out via a simplex numerical optimization routine. A limited simulation study suggests that the A² minimization scheme tends to return lower values of all edf test statistics than maximum likelihood estimation. Hence, it is exclusively used for further analysis.

The results of parameter estimation and hypothesis testing for the PCS loss amounts are presented in Table 13.1. The Burr distribution with parameters α = 0.4801, λ = 3.9495 · 10^16, and τ = 2.1524 yields the best results and passes all tests at the 2.5% level. The log-normal distribution with parameters

Figure 13.7: The empirical mean excess function ên(x) for the PCS catastrophe data (left panel) and the Danish fire data (right panel).

STFloss07.xpl

µ = 18.3806 and σ = 1.1052 comes in second, however, with an unacceptable fit as measured by the Anderson-Darling statistic. As expected, the remaining distributions presented in Section 13.3 return even worse fits. Thus we suggest choosing the Burr distribution as a model for the PCS loss amounts. In the left panel of Figure 13.8 we present the empirical and analytical limited expected value functions for the three fitted distributions. The plot justifies the choice of the Burr distribution.

The results of parameter estimation and hypothesis testing for the Danish fire loss amounts are presented in Table 13.2. The log-normal distribution with parameters µ = 12.6645 and σ = 1.3981 returns the best results. It is the only distribution that passes any of the four applied tests (D, V, W², and A²) at a reasonable level. The Burr and Pareto laws yield worse fits as the tails of the edf are lighter than power-law tails. As expected, the remaining distributions presented in Section 13.3 return even worse fits. In the right panel of Figure 13.8 we depict the empirical and analytical limited expected value functions for the three fitted distributions. Unfortunately, no definitive conclusions can be drawn regarding the choice of the distribution. Hence, we suggest using the log-normal distribution as a model for the Danish fire loss amounts.


Figure 13.8: The empirical (black solid line) and analytical limited expected value functions (LEVFs) for the log-normal (green dashed line), Pareto (blue dotted line), and Burr (red long-dashed line) distributions for the PCS catastrophe data (left panel) and the Danish fire data (right panel). STFloss08.xpl

Table 13.1: Parameter estimates obtained via the A² minimization scheme and test statistics for the catastrophe loss amounts. The corresponding p-values based on 1000 simulated samples are given in parentheses.

                 log-normal     Pareto              Burr
Parameters:      µ = 18.3806    α = 3.4081          α = 0.4801
                 σ = 1.1052     λ = 4.4767 · 10^8   λ = 3.9495 · 10^16
                                                    τ = 2.1524
Tests:    D      0.0440         0.1049              0.0366
                 (0.033)        (<0.005)            (0.077)
          V      0.0786         0.1692              0.0703
                 (0.022)        (<0.005)            (0.038)
          W²     0.1353         0.7042              0.0626
                 (0.006)        (<0.005)            (0.059)
          A²     1.8606         6.1160              0.5097
                 (<0.005)       (<0.005)            (0.027)

STFloss09.xpl


Table 13.2: Parameter estimates obtained via the A² minimization scheme and test statistics for the fire loss amounts. The corresponding p-values based on 1000 simulated samples are given in parentheses.

                 log-normal     Pareto              Burr
Parameters:      µ = 12.6645    α = 1.7439          α = 0.8804
                 σ = 1.3981     λ = 6.7522 · 10^5   λ = 8.4202 · 10^6
                                                    τ = 1.2749
Tests:    D      0.0381         0.0471              0.0387
                 (0.008)        (<0.005)            (<0.005)
          V      0.0676         0.0779              0.0724
                 (0.005)        (<0.005)            (<0.005)
          W²     0.0921         0.2119              0.1117
                 (0.049)        (<0.005)            (0.007)
          A²     0.7567         1.9097              0.6999
                 (0.024)        (<0.005)            (0.005)

STFloss10.xpl


Bibliography

Bratley, P., Fox, B. L., and Schrage, L. E. (1987). A Guide to Simulation, Springer-Verlag, New York.

Burnecki, K., Härdle, W., and Weron, R. (2004). Simulation of risk processes, in J. Teugels, B. Sundt (eds.) Encyclopedia of Actuarial Science, Wiley, Chichester.

Burnecki, K., Kukla, G., and Weron, R. (2000). Property insurance loss distributions, Physica A 287: 269-278.

D'Agostino, R. B. and Stephens, M. A. (1986). Goodness-of-Fit Techniques, Marcel Dekker, New York.

Daykin, C. D., Pentikainen, T., and Pesonen, M. (1994). Practical Risk Theory for Actuaries, Chapman, London.

Devroye, L. (1986). Non-Uniform Random Variate Generation, Springer-Verlag, New York.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer.

Hogg, R. and Klugman, S. A. (1984). Loss Distributions, Wiley, New York.

Jöhnk, M. D. (1964). Erzeugung von Betaverteilten und Gammaverteilten Zufallszahlen, Metrika 8: 5-15.

Klugman, S. A., Panjer, H. H., and Willmot, G. E. (1998). Loss Models: From Data to Decisions, Wiley, New York.

L'Ecuyer, P. (2004). Random Number Generation, in J. E. Gentle, W. Härdle, Y. Mori (eds.) Handbook of Computational Statistics, Springer, Berlin, 35-70.

Panjer, H. H. and Willmot, G. E. (1992). Insurance Risk Models, Society of Actuaries, Chicago.

Ross, S. (2002). Simulation, Academic Press, San Diego.

Stephens, M. A. (1978). On the half-sample method for goodness-of-fit, Journal of the Royal Statistical Society B 40: 64-70.


Weron, R. (2004). Computationally Intensive Value at Risk Calculations, in J. E. Gentle, W. Härdle, Y. Mori (eds.) Handbook of Computational Statistics, Springer, Berlin, 911-950.

14 Modeling of the Risk Process

Krzysztof Burnecki and Rafał Weron

14.1 Introduction

An actuarial risk model is a mathematical description of the behavior of a collection of risks generated by an insurance portfolio. It is not intended to replace sound actuarial judgment. In fact, a well-formulated model is consistent with and adds to intuition, but cannot and should not replace experience and insight (Willmot, 2001). Even though we cannot hope to identify all influential factors relevant to future claims, we can try to specify the most important ones.

A typical model for insurance risk, the so-called collective risk model, has two main components: one characterizing the frequency (or incidence) of events and another describing the severity (or size or amount) of gain or loss resulting from the occurrence of an event, see also Chapter 18. The collective risk model is often used in health insurance and in general insurance, whenever the main risk components are the number of insurance claims and the amount of the claims. It can also be used for modeling other non-insurance product risks, such as credit and operational risk (Embrechts, Kaufmann, and Samorodnitsky, 2004). In the former, for example, the main risk components are the number of credit events (either defaults or downgrades) and the amount lost as a result of the credit event.

The stochastic nature of both the incidence and the severity of claims is a fundamental component of a realistic model. Hence, in its classical form the model for insurance risk is defined as follows (Embrechts, Klüppelberg, and Mikosch, 1997; Grandell, 1991). If (Ω, F, P) is a probability space carrying (i) a point process {Nt}t≥0, i.e. an integer-valued stochastic process with N0 = 0 a.s., Nt < ∞ for each t < ∞ and nondecreasing realizations, and (ii) an independent sequence {Xk}, k = 1, 2, . . ., of positive independent and identically distributed


(i.i.d.) random variables, then the risk process {Rt}t≥0 is given by

Rt = u + c(t) − Σ_{i=1}^{Nt} Xi.   (14.1)

The non-negative constant u stands for the initial capital of the insurance company. The company sells insurance policies and receives a premium according to c(t). In the classical model c is constant, but in a more general setup it can be a deterministic or even a stochastic function of time. Claims form the aggregate claim loss Σ_{i=1}^{Nt} Xi. The claim severities are described by the random sequence {Xk} and the number of claims in the interval (0, t] is modeled by the point process Nt, often called the claim arrival process.

The modeling of the aggregate loss process consists of modeling the point process {Nt} and the claim size sequence {Xk}. Both processes are usually assumed to be independent, hence they can be treated independently of each other. The modeling of claim severities was covered in detail in Chapter 13. The focus of this chapter is therefore on modeling the claim arrival point process {Nt}.

The simplicity of the risk process (14.1) is only illusory. In most cases no analytical conclusions regarding the time evolution of the process can be drawn. However, it is this evolution that is important for practitioners, who have to calculate functionals of the risk process like the expected time to ruin and the ruin probability, see Chapter 15. All this calls for numerical simulation schemes (Burnecki, Härdle, and Weron, 2004).

In Section 14.2 we present efficient algorithms for five classes of the claim arrival point processes. Next, in Section 14.3 we apply some of them to modeling real-world risk processes. The analysis is conducted for the same two datasets as in Chapter 13: (i) the PCS (Property Claim Services) dataset covering losses resulting from catastrophic events in the USA that occurred between 1990 and 1999 and (ii) the Danish fire losses dataset, which concerns major fire losses of profits that occurred between 1980 and 1990 and were recorded by Copenhagen Re.
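Given simulated arrival times and claim severities, evaluating the risk process (14.1) with a linear premium c(t) = ct is a one-liner. A Python sketch (the function name is ours):

```python
import numpy as np

def risk_process(u, c, arrival_times, claims, t):
    """Value R_t of the risk process (14.1) with premium c(t) = c*t."""
    n_t = np.searchsorted(arrival_times, t, side="right")  # N_t = #{i : T_i <= t}
    return u + c * t - np.sum(claims[:n_t])                # capital + premium - losses
```

Evaluating risk_process on a grid of t values traces out one trajectory of {Rt}; the simulation of the arrival times themselves is the subject of Section 14.2.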
It is important to note that the choice of the model has inﬂuence on both the ruin probability (see Chapter 15) and the reinsurance strategy of the company (see Chapter 20), hence the selection has to be made with great care.

14.2 Claim Arrival Processes

In this section we focus on efficient simulation of the claim arrival point process {Nt}. This process can be simulated either via the arrival times {Ti}, i.e. moments when the ith claim occurs, or via the inter-arrival times (or waiting times) Wi = Ti − Ti−1, i.e. the time periods between successive claims. Note that in terms of the Wi's the claim arrival point process is given by Nt = Σ_{n=1}^{∞} I(Tn ≤ t). In what follows we discuss five prominent examples of {Nt}, namely the classical (homogeneous) Poisson process, the non-homogeneous Poisson process, the mixed Poisson process, the Cox process (also called the doubly stochastic Poisson process) and the renewal process.

14.2.1 Homogeneous Poisson Process

The most common and best known claim arrival point process is the homogeneous Poisson process (HPP) with stationary and independent increments and the number of claims in a given time interval governed by the Poisson law. While this process is normally appropriate in connection with life insurance modeling, it often suffers from the disadvantage of providing an inadequate fit to insurance data in other coverages. In particular, it tends to understate the true variability inherent in these situations.

Formally, a continuous-time stochastic process {Nt : t ≥ 0} is a (homogeneous) Poisson process with intensity (or rate) λ > 0 if (i) {Nt} is a point process, and (ii) the waiting times Wi are independent and identically distributed and follow an exponential law with intensity λ, i.e. with mean 1/λ (see Chapter 13, where the properties and simulation scheme for the exponential distribution were discussed). This definition naturally leads to a simulation scheme for the successive arrival times T1, T2, . . . , Tn of the Poisson process:

Algorithm HPP1
Step 1: set T0 = 0
Step 2: for i = 1, 2, . . . , n do
  Step 2a: generate an exponential random variable E with intensity λ
  Step 2b: set Ti = Ti−1 + E
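Algorithm HPP1 in Python (a sketch; the explicit loop is replaced by a cumulative sum, which is equivalent):

```python
import numpy as np

def hpp1(lam, n, rng=None):
    """Algorithm HPP1: first n arrival times of a HPP with intensity lam."""
    rng = np.random.default_rng() if rng is None else rng
    waiting_times = rng.exponential(1 / lam, n)  # i.i.d. Exp(lam), mean 1/lam
    return np.cumsum(waiting_times)              # T_i = T_{i-1} + E_i
```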


Alternatively, the homogeneous Poisson process can be simulated by applying the following property (Rolski et al., 1999). Given that Nt = n, the n occurrence times T1, T2, . . . , Tn have the same distributions as the order statistics corresponding to n i.i.d. random variables uniformly distributed on the interval (0, t]. Hence, the arrival times of the HPP on the interval (0, t] can be generated as follows:

Algorithm HPP2
Step 1: generate a Poisson random variable N with mean λt
Step 2: generate N random variables Ui distributed uniformly on (0, 1), i.e. Ui ∼ U(0, 1), i = 1, 2, . . . , N
Step 3: set (T1, T2, . . . , TN) = t · sort{U1, U2, . . . , UN}

In general, this algorithm will run faster than the previous one as it does not involve a loop. The only two inherent numerical difficulties involve generating a Poisson random variable and sorting a vector of occurrence times. Whereas the latter problem can be solved via the standard quicksort algorithm, the former requires more attention. A simple algorithm for a Poisson variate with mean λ would take N = min{n : U1 · . . . · Un < exp(−λ)} − 1, which is a consequence of the properties of the Poisson process (for a derivation see Ross, 2002). However, for large λ, this method can become slow. Faster, but more complicated methods have been proposed in the literature. Ahrens and Dieter (1982) suggested a generator which utilizes acceptance-complement with truncated normal variates whenever λ > 10 and reverts to table-aided inversion otherwise. Stadlober (1989) adapted the ratio of uniforms method for λ > 5 and classical inversion for small λ's. Hörmann (1993) advocated the transformed rejection method, which is a combination of the inversion and rejection algorithms.

Sample trajectories of homogeneous and non-homogeneous Poisson processes are plotted in Figure 14.1. The dotted green line is a HPP with intensity λ = 1 (left panel) and λ = 10 (right panel). Clearly the latter jumps more often.
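Algorithm HPP2 in Python (a sketch; NumPy's built-in Poisson generator hides the implementation details discussed above):

```python
import numpy as np

def hpp2(lam, t, rng=None):
    """Algorithm HPP2: arrival times of a HPP with intensity lam on (0, t]."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.poisson(lam * t)   # Step 1: N ~ Poisson with mean lam * t
    u = rng.uniform(size=n)    # Step 2: U_i ~ U(0, 1)
    return t * np.sort(u)      # Step 3: scaled, sorted uniforms
```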
Since for the HPP the expected value E(Nt ) = λt, it is natural to deﬁne the premium function in this case as c(t) = ct, where c = (1+θ)µλ, µ = E(Xk ) and θ > 0 is the relative safety loading which “guarantees” survival of the insurance company. With such a choice of the premium function we obtain the classical form of the risk process.


Figure 14.1: Left panel : Sample trajectories of a NHPP with linear intensity λ(t) = a+b·t for a = 1 and b = 1 (solid blue line), b = 0.1 (dashed red line), and b = 0 (dotted green line). Note that the latter is in fact a HPP. Right panel : Sample trajectories of a NHPP with periodic intensity λ(t) = a + b · cos(2πt) for a = 10 and b = 10 (solid blue line), b = 1 (dashed red line), and b = 0 (dotted green line). Again, the latter is a HPP. STFrisk01.xpl

14.2.2 Non-homogeneous Poisson Process

The choice of a homogeneous Poisson process implies that the size of the portfolio cannot increase or decrease. In addition, it cannot describe situations, like in motor insurance, where claim occurrence epochs are likely to depend on the time of the year or of the week. For modeling such phenomena the non-homogeneous Poisson process (NHPP) suits much better than the homogeneous one. The NHPP can be thought of as a Poisson process with a variable intensity deﬁned by the deterministic intensity (rate) function λ(t). Note that the increments of a NHPP do not have to be stationary. In the special case when λ(t) takes the constant value λ, the NHPP reduces to the homogeneous Poisson process with intensity λ.


The simulation of the process in the non-homogeneous case is slightly more complicated than in the homogeneous one. The first approach, known as the thinning or rejection method, is based on the following fact (Bratley, Fox, and Schrage, 1987; Ross, 2002). Suppose that there exists a constant λ such that λ(t) ≤ λ for all t. Let T1*, T2*, T3*, . . . be the successive arrival times of a homogeneous Poisson process with intensity λ. If we accept the ith arrival time Ti* with probability λ(Ti*)/λ, independently of all other arrivals, then the sequence T1, T2, . . . of the accepted arrival times (in ascending order) forms a sequence of the arrival times of a non-homogeneous Poisson process with the rate function λ(t). The resulting algorithm reads as follows:

Algorithm NHPP1 (Thinning)
Step 1: set T0 = 0 and T* = 0
Step 2: for i = 1, 2, . . . , n do
  Step 2a: generate an exponential random variable E with intensity λ
  Step 2b: set T* = T* + E
  Step 2c: generate a random variable U distributed uniformly on (0, 1)
  Step 2d: if U > λ(T*)/λ then return to step 2a (→ reject the arrival time) else set Ti = T* (→ accept the arrival time)

As mentioned in the previous section, the inter-arrival times of a homogeneous Poisson process have an exponential distribution. Therefore steps 2a–2b generate the next arrival time of a homogeneous Poisson process with intensity λ. Steps 2c–2d amount to rejecting (hence the name of the method) or accepting a particular arrival as part of the thinned process (hence the alternative name). Note that in the above algorithm we generate a HPP with intensity λ employing the HPP1 algorithm. We can also generate it using the HPP2 algorithm, which is in general much faster.

The second approach is based on the observation (Grandell, 1991) that for a NHPP with rate function λ(t) the increment Nt − Ns, 0 < s < t, is distributed as a Poisson random variable with intensity λ̃ = ∫_s^t λ(u) du. Hence, the cumulative distribution function Fs of the waiting time Ws is given by

Fs(t) = P(Ws ≤ t) = 1 − P(Ws > t) = 1 − P(Ns+t − Ns = 0)
      = 1 − exp( −∫_s^{s+t} λ(u) du ) = 1 − exp( −∫_0^t λ(s + v) dv ).
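Algorithm NHPP1 in Python (a sketch; the periodic example intensity in the usage below is ours):

```python
import numpy as np

def nhpp1_thinning(rate, rate_max, t_max, rng=None):
    """Algorithm NHPP1 (thinning): NHPP arrival times on (0, t_max]
    for an intensity function rate(t) bounded above by rate_max."""
    rng = np.random.default_rng() if rng is None else rng
    times, t = [], 0.0
    while True:
        t += rng.exponential(1 / rate_max)       # steps 2a-2b: HPP candidate
        if t > t_max:
            return np.array(times)
        if rng.uniform() <= rate(t) / rate_max:  # steps 2c-2d: accept w.p. rate(t)/rate_max
            times.append(t)
```

For example, nhpp1_thinning(lambda s: 2.0 + 1.5 * np.cos(2 * np.pi * s), 3.5, 50.0) simulates a seasonally varying claim arrival process of the kind shown in the right panel of Figure 14.1.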


If the function λ(t) is such that we can find a formula for the inverse Fs^{−1} for each s, we can generate a random quantity X with the distribution Fs by using the inverse transform method. The algorithm, often called the integration method, can be summarized as follows:

Algorithm NHPP2 (Integration)
Step 1: set T0 = 0
Step 2: for i = 1, 2, . . . , n do
  Step 2a: generate a random variable U distributed uniformly on (0, 1)
  Step 2b: set Ti = Ti−1 + Fs^{−1}(U), where s = Ti−1

The third approach utilizes a generalization of the property used in the HPP2 algorithm. Given that Nt = n, the n occurrence times T1, T2, . . . , Tn of the non-homogeneous Poisson process have the same distributions as the order statistics corresponding to n independent random variables distributed on the interval (0, t], each with the common density function f(v) = λ(v)/∫_0^t λ(u) du, where v ∈ (0, t]. Hence, the arrival times of the NHPP on the interval (0, t] can be generated as follows:

Algorithm NHPP3
Step 1: generate a Poisson random variable N with intensity ∫_0^t λ(u) du
Step 2: generate N random variables Vi, i = 1, 2, . . . , N, with density f(v) = λ(v)/∫_0^t λ(u) du
Step 3: set (T1, T2, . . . , TN) = sort{V1, V2, . . . , VN}

The performance of the algorithm is highly dependent on the efficiency of the computer generator of random variables with density f(v). Moreover, like in the homogeneous case, this algorithm has the advantage of not invoking a loop. Hence, it performs faster than the former two methods if λ(u) is a nicely integrable function.

Sample trajectories of non-homogeneous Poisson processes are plotted in Figure 14.1. In the left panel realizations of a NHPP with linear intensity λ(t) = a + b·t are presented for the same value of parameter a. Note that the higher the value of parameter b, the more pronounced is the increase in the intensity of


the process. In the right panel realizations of a NHPP with periodic intensity λ(t) = a + b·cos(2πt) are illustrated, again for the same value of parameter a. This time, for high values of parameter b the events exhibit a seasonal behavior. The process has periods of high activity (grouped around natural values of t) and periods of low activity, where almost no jumps take place. Finally, we note that since in the non-homogeneous case the expected value E(Nt) = ∫_0^t λ(s) ds, it is natural to define the premium function as c(t) = (1 + θ)µ ∫_0^t λ(s) ds.
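For the linear intensity λ(t) = a + b·t of Figure 14.1, the inverse Fs^{−1} needed by Algorithm NHPP2 is available in closed form: setting Fs(w) = U amounts to solving the quadratic (b/2)w² + (a + bs)w = −log(1 − U) for the waiting time w. A Python sketch of this special case (for b > 0; the function name is ours):

```python
import numpy as np

def nhpp2_linear(a, b, n, rng=None):
    """Algorithm NHPP2 (integration) for lambda(t) = a + b*t, b > 0."""
    rng = np.random.default_rng() if rng is None else rng
    times = np.empty(n)
    s = 0.0
    for i in range(n):
        e = rng.exponential()                       # E = -log(1 - U) ~ Exp(1)
        c = a + b * s                               # intensity at the current time s
        s += (np.sqrt(c * c + 2 * b * e) - c) / b   # waiting time F_s^{-1}(U)
        times[i] = s
    return times
```

The waiting times shrink on average as t grows, reflecting the increasing intensity of the process.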

14.2.3 Mixed Poisson Process

In many situations the portfolio of an insurance company is diversified in the sense that the risks associated with different groups of policy holders are significantly different. For example, in motor insurance we might want to distinguish between male and female drivers or between drivers of different age. We would then assume that the claims come from a heterogeneous group of clients, each one of them generating claims according to a Poisson distribution with the intensity varying from one group to another.

Another practical reason for considering yet another generalization of the classical Poisson process is the following. If we measure the volatility of risk processes, expressed in terms of the index of dispersion Var(Nt)/E(Nt), then very often we obtain estimates in excess of one – the value obtained for both the homogeneous and the non-homogeneous cases. These empirical observations led to the introduction of the mixed Poisson process (Ammeter, 1948).

In the mixed Poisson process the distribution of {Nt} is given by a mixture of Poisson processes (Rolski et al., 1999). This means that, conditioning on an extrinsic random variable Λ (called a structure variable), the process {Nt} behaves like a homogeneous Poisson process. Since for each t the claim numbers {Nt} up to time t are then Poisson variates with intensity Λt, it is now reasonable to consider the premium function of the form c(t) = (1 + θ)µΛt.

The process can be generated in the following way: first a realization of a non-negative random variable Λ is generated and, conditioned upon its realization, {Nt} is constructed as a homogeneous Poisson process with that realization as its intensity. Both the HPP1 and the HPP2 algorithm can be utilized. Making use of the former we can write:

Algorithm MPP1
Step 1: generate a realization λ of the random intensity Λ


Step 2: set T0 = 0
Step 3: for i = 1, 2, . . . , n do
Step 3a: generate an exponential random variable E with intensity λ
Step 3b: set Ti = Ti−1 + E
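Algorithm MPP1 can be sketched in Python as follows. The gamma mixing law used for the structure variable Λ is an illustrative assumption (it yields a Pólya process); any non-negative distribution can be plugged in.

```python
import numpy as np

def simulate_mpp(n, mix_sampler, rng):
    """Algorithm MPP1: first draw a realization lam of the structure variable
    Lambda (Step 1), then build a homogeneous Poisson process with intensity
    lam by summing exponential waiting times (Steps 2-3)."""
    lam = mix_sampler(rng)                  # Step 1
    waits = rng.exponential(1.0 / lam, n)   # Step 3a, vectorized
    return lam, np.cumsum(waits)            # Steps 2, 3b: T_i = T_{i-1} + E

rng = np.random.default_rng(42)
# illustrative mixing law: Lambda ~ Gamma(shape=2, scale=5), mean intensity 10
lam, arrivals = simulate_mpp(1000, lambda r: r.gamma(2.0, 5.0), rng)
```

Conditionally on the drawn λ, the arrival times are exactly those produced by the HPP1 algorithm.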

14.2.4 Cox Process

The Cox process, or doubly stochastic Poisson process, provides flexibility by letting the intensity not only depend on time but also be a stochastic process. Therefore, the doubly stochastic Poisson process can be viewed as a two-step randomization procedure. An intensity process {Λ(t)} is used to generate another process {Nt} by acting as its intensity. That is, {Nt} is a Poisson process conditional on {Λ(t)}, which itself is a stochastic process. If {Λ(t)} is deterministic, then {Nt} is a non-homogeneous Poisson process. If Λ(t) = Λ for some positive random variable Λ, then {Nt} is a mixed Poisson process. In the doubly stochastic case the premium function is a generalization of the former functions, in line with the generalization of the claim arrival process. Hence, it takes the form c(t) = (1 + θ)µ ∫_0^t Λ(s) ds.

The definition of the Cox process suggests that it can be generated in the following way: first a realization of a non-negative stochastic process {Λ(t)} is generated and, conditioned upon its realization, {Nt} is constructed as a non-homogeneous Poisson process with that realization as its intensity. Out of the three methods of generating a non-homogeneous Poisson process, the NHPP1 algorithm is the most general and, hence, the most suitable for adaptation. We can write:

Algorithm CP1
Step 1: generate a realization λ(t) of the intensity process {Λ(t)} for a sufficiently large time period
Step 2: set λ = max {λ(t)}
Step 3: set T0 = 0 and T∗ = 0
Step 4: for i = 1, 2, . . . , n do
Step 4a: generate an exponential random variable E with intensity λ


Step 4b: set T∗ = T∗ + E
Step 4c: generate a random variable U distributed uniformly on (0, 1)
Step 4d: if U > λ(T∗)/λ then return to step 4a (→ reject the arrival time) else set Ti = T∗ (→ accept the arrival time)
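The thinning loop of Algorithm CP1 can be sketched in Python. The piecewise-constant intensity path used in Step 1 is an illustrative assumption standing in for an arbitrary realized path of {Λ(t)}.

```python
import numpy as np

def simulate_cox(n, lam_path, lam_max, rng):
    """Algorithm CP1, Steps 3-4: thinning of a homogeneous process with
    intensity lam_max against the realized intensity path lam_path(t)."""
    arrivals, t = [], 0.0
    while len(arrivals) < n:
        t += rng.exponential(1.0 / lam_max)         # Steps 4a-4b: candidate T*
        if rng.uniform() <= lam_path(t) / lam_max:  # Steps 4c-4d: accept/reject
            arrivals.append(t)
    return np.array(arrivals)

rng = np.random.default_rng(0)
# Step 1 (illustrative): a piecewise-constant intensity path, one
# gamma-distributed level per year over a 50-year horizon
levels = rng.gamma(2.0, 5.0, size=50)
lam_path = lambda t: levels[min(int(t), 49)]
lam_max = levels.max()                              # Step 2
arrivals = simulate_cox(200, lam_path, lam_max, rng)
```

Conditionally on the drawn path, this is exactly the NHPP1 rejection scheme.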

14.2.5 Renewal Process

Generalizing the homogeneous Poisson process we come to the point where, instead of making λ non-constant, we can make a variety of different distributional assumptions on the sequence of waiting times {W1, W2, . . .} of the claim arrival point process {Nt}. In some particular cases it might be useful to assume that the sequence is generated by a renewal process, i.e. the random variables Wi are i.i.d. and positive. Note that the homogeneous Poisson process is a renewal process with exponentially distributed inter-arrival times. This observation lets us write the following algorithm for the generation of the arrival times of a renewal process:

Algorithm RP1
Step 1: set T0 = 0
Step 2: for i = 1, 2, . . . , n do
Step 2a: generate a random variable X with an assumed distribution function F
Step 2b: set Ti = Ti−1 + X

An important point in the previous generalizations of the Poisson process was the possibility to compensate risk and size fluctuations by the premiums. Thus, the premium rate had to be constantly adapted to the development of the claims. For renewal claim arrival processes, a constant premium rate allows for a constant safety loading (Embrechts and Klüppelberg, 1993). Let {Nt} be a renewal process and assume that W1 has finite mean 1/λ. Then the premium function is defined in a natural way as c(t) = (1 + θ)µλt, as for the homogeneous Poisson process.
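Algorithm RP1 amounts to a cumulative sum of i.i.d. waiting times. The sketch below uses log-normal waiting times with the parameters fitted to the PCS data in Section 14.3.1 (µ = −3.91, σ = 0.9051, times in years).

```python
import numpy as np

def simulate_renewal(n, wait_sampler, rng):
    """Algorithm RP1: arrival times are cumulative sums of i.i.d. waiting
    times drawn from the assumed distribution F (Steps 1-2)."""
    return np.cumsum(wait_sampler(n, rng))   # T_i = T_{i-1} + X

rng = np.random.default_rng(1)
arrivals = simulate_renewal(500, lambda n, r: r.lognormal(-3.91, 0.9051, n), rng)
```

The mean inter-arrival time exp(µ + σ²/2) ≈ 0.030 years is consistent with roughly 33 events per year.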


Figure 14.2: Left panel: The empirical mean excess function ên(x) for the PCS waiting times. Right panel: Shapes of the mean excess function e(x) for the log-normal (solid green line), Burr (dashed blue line), and exponential (dotted red line) distributions. STFrisk02.xpl

14.3 Simulation of Risk Processes

14.3.1 Catastrophic Losses

In this section we apply some of the models described earlier to the PCS dataset. The Property Claim Services dataset covers losses resulting from natural catastrophic events in the USA that occurred between 1990 and 1999. It is adjusted for inflation using the Consumer Price Index provided by the U.S. Department of Labor. See Chapters 4 and 13, where this dataset was analyzed in the context of CAT bonds and loss distributions, respectively. Note that the same raw catastrophe data, adjusted instead using the discount window borrowing rate, i.e. the simple interest rate at which depository institutions borrow from the Federal Reserve Bank of New York, was analyzed by Burnecki, Härdle, and Weron (2004).


Table 14.1: Parameter estimates obtained via the A2 minimization scheme and test statistics for the PCS waiting times. The corresponding p-values based on 1000 simulated samples are given in parentheses.

              log-normal        Burr                 exponential
Parameters:   µ = −3.91         α = 1.3051           β = 33.187
              σ = 0.9051        λ = 1.6 · 10⁻³
                                τ = 1.7448
Tests:   D    0.0589 (<0.005)   0.0492 (<0.005)      0.1193 (<0.005)
         V    0.0973 (<0.005)   0.0938 (<0.005)      0.1969 (<0.005)
         W2   0.1281 (0.013)    0.1120 (<0.005)      0.9130 (<0.005)
         A2   1.3681 (<0.005)   0.8690 (<0.005)      5.8998 (<0.005)

STFrisk03.xpl

Now, we study the claim arrival process and the distribution of waiting times. As suggested in Chapter 13 we ﬁrst look for the appropriate shape of the approximating distribution. To this end we plot the empirical mean excess function for the waiting time data (given in years), see Figure 14.2. The initially decreasing, later increasing pattern suggests the log-normal or Burr distribution as most adequate for modeling. The empirical distribution seems, however, to have lighter tails than the two: e(x) does not increase for very large x. The overall impression might be of a highly volatile but constant function, like that for the exponential distribution. Hence, we ﬁt the log-normal, Burr, and exponential distributions using the A2 minimization scheme and check the goodness-of-ﬁt with test statistics. In terms of the values of the test statistics the Burr distribution seems to give the best ﬁt. However, it does not pass any of the tests even at the very low level of 0.5% (see Chapter 13 for test deﬁnitions). The only distribution that passes any of the four applied tests, although at a very low level, is the log-normal law with parameters µ = −3.91 and σ = 0.9051, see Table 14.1. Thus, if we wanted to model the claim arrival process by a renewal process then the log-normal distribution would be the best to describe the waiting times.
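The empirical mean excess function used above is ên(x) = average of (Xi − x) over the observations exceeding x. A minimal Python sketch, using a synthetic exponential sample as a stand-in for the waiting-time data (with the fitted rate β = 33.187 from Table 14.1), illustrates the constant shape expected under the exponential law:

```python
import numpy as np

def mean_excess(x, data):
    """Empirical mean excess e_n(x): mean of (X_i - x) over all X_i > x."""
    exceed = data[data > x]
    return exceed.mean() - x if exceed.size else np.nan

rng = np.random.default_rng(7)
sample = rng.exponential(scale=1.0 / 33.187, size=2000)  # synthetic waiting times
grid = [0.0, 0.01, 0.02, 0.03]                            # x in years
values = [mean_excess(x, sample) for x in grid]
# for exponential data e(x) is flat and equal to the mean 1/beta
```

For the heavier-tailed log-normal and Burr laws the same function would eventually increase in x.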


Figure 14.3: Left panel : The quarterly number of losses for the PCS data. Right panel : Periodogram of the PCS quarterly number of losses. A distinct peak is visible at frequency ω = 0.25 implying a period of 1/ω = 4 quarters, i.e. one year. STFrisk04.xpl

If, on the other hand, we wanted to model the claim arrival process by a HPP, then the studies of the quarterly numbers of losses would lead us to the conclusion that the best HPP is given by the annual intensity λ1 = 34.2. This value is obtained by taking the mean of the quarterly numbers of losses and multiplying it by four. Note that the value of the intensity is significantly different from the parameter β = 32.427 of the calibrated exponential distribution, see Table 14.1. This, together with the very bad fit of the exponential law to the waiting times, indicates that the HPP is not a good model for the claim arrival process. Further analysis of the data reveals its periodicity. The time series of the quarterly number of losses does not exhibit any trends, but an annual seasonality can be very well observed using the periodogram, see Figure 14.3. This suggests that calibrating a NHPP with a sinusoidal rate function would give a good model. We estimate the parameters by fitting the cumulative intensity function, i.e. the mean value function E(Nt), to the accumulated number of PCS losses. The least squares algorithm yields the formula λ2(t) = 35.32 + 2.32 · 2π · sin{2π(t − 0.20)}. This choice of λ(t) gives a reasonably good fit, see also Chapter 4.
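A NHPP with the calibrated sinusoidal rate λ2(t) can be simulated by thinning; the sketch below also writes out the cumulative intensity Λ2(t) in closed form, so the simulated mean count can be checked against E(Nt) = Λ2(t).

```python
import numpy as np

# calibrated sinusoidal intensity and its cumulative (mean value) function
lam2 = lambda t: 35.32 + 2.32 * 2 * np.pi * np.sin(2 * np.pi * (t - 0.20))
Lam2 = lambda t: 35.32 * t - 2.32 * (np.cos(2 * np.pi * (t - 0.20))
                                     - np.cos(2 * np.pi * 0.20))

def nhpp_count(T, lam, lam_max, rng):
    """Thinning: scatter candidates of a rate-lam_max homogeneous process
    on [0, T] and keep each at time t with probability lam(t)/lam_max."""
    cand = rng.uniform(0.0, T, rng.poisson(lam_max * T))
    return int(np.sum(rng.uniform(size=cand.size) * lam_max <= lam(cand)))

rng = np.random.default_rng(3)
lam_max = 35.32 + 2.32 * 2 * np.pi   # upper bound of the sinusoidal rate
counts = [nhpp_count(10.0, lam2, lam_max, rng) for _ in range(400)]
```

Over ten full annual cycles the sine term integrates away, so the mean count should be close to Λ2(10) = 353.2.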


Figure 14.4: The PCS data simulation results for a NHPP with Burr claim sizes (left panel), a NHPP with log-normal claim sizes (right panel), and a NHPP with claims generated from the edf (bottom panel). The dotted lines are the sample 0.001, 0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99, 0.999-quantile lines based on 3000 trajectories of the risk process. STFrisk05.xpl


To study the evolution of the risk process we simulate sample trajectories. We consider a hypothetical scenario where the insurance company insures losses resulting from catastrophic events in the United States. The company's initial capital is assumed to be u = 100 billion USD and the relative safety loading used is θ = 0.5. We choose different models of the risk process whose application is most justified by the statistical results described above. The results are presented in Figure 14.4. In all subplots the thick solid blue line is the "real" risk process, i.e. a trajectory constructed from the historical arrival times and values of the losses. The different shapes of the "real" risk process in the subplots are due to the different forms of the premium function c(t). Recall that the function has to be chosen according to the type of the claim arrival process. The dashed red line is a sample trajectory. The dotted lines are the sample 0.001, 0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99, 0.999-quantile lines based on 3000 trajectories of the risk process. The function x̂p(t) is called a sample p-quantile line if for each t ∈ [t0, T], x̂p(t) is the sample p-quantile, i.e. if it satisfies Fn(x̂p−) ≤ p ≤ Fn(x̂p), where Fn is the edf. Quantile lines are a very helpful tool in the analysis of stochastic processes. For example, they can provide a simple justification of the stationarity (or the lack of it) of a process, see Janicki and Weron (1994). In Figure 14.4 they visualize the evolution of the density of the risk process. The periodic pattern is due to the sinusoidal intensity function λ2(t). We also note that we assumed in the simulations that if the capital of the insurance company drops below zero, the company goes bankrupt, so the capital is set to zero and remains at this level thereafter. This is in agreement with Chapter 15.

The claim severity distribution of the PCS dataset was studied in Chapter 13. The Burr distribution with parameters α = 0.4801, λ = 3.9495 · 10^16, and τ = 2.1524 yielded the best fit. Unfortunately, such a choice of the parameters leads to an undesired feature of the claim size distribution: very heavy tails of order x^(−ατ) ≈ x^(−1.03). Although the expected value exists, the sample mean is, in general, significantly below the theoretical value. As a consequence, the premium function c(t) cannot include the factor µ = E(Xk), or the risk process trajectories will exhibit a highly positive drift. To cope with this problem, in the simulations we substitute the original factor µ with µ̃, equal to the empirical mean of the simulated claims over all trajectories. Despite this change the trajectories possess a positive drift due to the large value of the relative safety loading θ. They are also highly volatile, leading to a large number of ruins: the 0.05-quantile line drops to zero after five years, see the left panel in Figure 14.4. It seems that the Burr distribution overestimates the PCS losses.
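Sample quantile lines are simply pointwise sample quantiles across simulated trajectories. A minimal Python sketch, using a toy risk process (HPP with intensity 10, Exp(1) claims, u = 5, θ = 0.5, all hypothetical parameters chosen only for illustration), shows the construction, including the bankruptcy absorption at zero:

```python
import numpy as np

def risk_trajectory(u, theta, lam, mu, claim_sampler, grid, rng):
    """One trajectory of R_t = u + (1+theta)*lam*mu*t - aggregate claims,
    with the capital absorbed at zero once it drops below zero."""
    n = rng.poisson(lam * grid[-1])
    times = np.sort(rng.uniform(0.0, grid[-1], n))
    cum = np.concatenate(([0.0], np.cumsum(claim_sampler(n, rng))))
    losses = cum[np.searchsorted(times, grid, side="right")]
    path = u + (1.0 + theta) * lam * mu * grid - losses
    return np.where(np.minimum.accumulate(path) < 0.0, 0.0, path)

rng = np.random.default_rng(5)
grid = np.linspace(0.0, 10.0, 101)
paths = np.array([risk_trajectory(5.0, 0.5, 10.0, 1.0,
                                  lambda n, r: r.exponential(1.0, n), grid, rng)
                  for _ in range(3000)])
# pointwise sample p-quantiles across the 3000 trajectories
quantile_lines = np.quantile(paths, [0.05, 0.50, 0.95], axis=0)
```

Plotting the rows of `quantile_lines` against `grid` reproduces the kind of quantile fans shown in Figure 14.4.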


In our second attempt we simulate the NHPP with log-normal claims with µ = 18.3806 and σ = 1.1052, as the log-normal law was found in Chapter 13 to yield a relatively good fit to the data. The results, shown in the right panel of Figure 14.4, are not satisfactory. This time the analytical distribution largely underestimates the loss data. The "real" risk process is well outside the 0.001-quantile line. This leads us to the conclusion that none of the analytical loss distributions describes the data well enough. We either overestimate the risk using the Burr distribution or underestimate it with the log-normal law. Hence, in our next attempt we simulate the NHPP with claims generated from the edf, see the bottom panel in Figure 14.4. The factor µ in the premium function c(t) is set to the empirical mean. This time the "real" risk process lies close to the median and does not cross the lower and upper quantile lines. This approach seems to give the best results. However, we do have to remember that it has its shortcomings. For example, the model is tailor-made for the dataset at hand but is not universal. As the dataset is expanded to include new losses, the model may change substantially. An analytic model would, in general, be less susceptible to such modifications. Hence, it might be preferable to use the Burr distribution after all.
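Generating claims "from the edf" amounts to resampling the historical losses with replacement. A sketch, using a synthetic log-normal sample (with the Chapter 13 parameters) as a stand-in for the recorded PCS losses:

```python
import numpy as np

def edf_claims(historical, size, rng):
    """Sampling from the empirical distribution function: draw historical
    losses uniformly with replacement (a bootstrap of the loss record)."""
    return rng.choice(historical, size=size, replace=True)

rng = np.random.default_rng(11)
# synthetic stand-in for the historical PCS loss amounts; the actual study
# resamples the recorded losses themselves
historical = rng.lognormal(18.3806, 1.1052, size=500)
mu_hat = historical.mean()      # the factor mu in c(t) is the empirical mean
claims = edf_claims(historical, 10_000, rng)
```

Every simulated claim is one of the observed values, which is exactly why the model is tailor-made for the dataset at hand.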

14.3.2

Danish Fire Losses

We conduct empirical studies for Danish fire losses recorded by Copenhagen Re. The data concern major Danish fire losses in Danish Krone (DKK) that occurred between 1980 and 1990, adjusted for inflation. Only losses of profits connected with the fires are taken into consideration, see Chapter 13 and Burnecki and Weron (2004), where this dataset was also analyzed. We start the analysis with a HPP with a constant intensity λ3. Studies of the quarterly numbers of losses and the inter-occurrence times of the fires lead us to the conclusion that the HPP with the annual intensity λ3 = 57.72 gives the best fit. However, as we can see in the right panel of Figure 14.5, the fit is not very good, suggesting that the HPP is too simplistic and forcing us to consider the NHPP. In fact, a renewal process would also give unsatisfactory results, as the data reveals a clear increasing trend in the number of quarterly losses, see the left panel in Figure 14.5. We tested different exponential and polynomial functional forms, but a simple linear intensity function λ4(s) = c + ds gives the best fit. Applying the least squares procedure we arrive at the following values of the parameters: c = 13.97 and d = 7.57. Processes with both choices of the intensity function, λ3 and λ4(s), are illustrated in the right panel of Figure 14.5, where the accumulated number of fire losses and the mean value functions for all 11 years of data are depicted.

Figure 14.5: Left panel: The quarterly number of losses for the Danish fire data. Right panel: The aggregate quarterly number of losses of the Danish fire data (dashed blue line) together with the mean value function E(Nt) of the calibrated HPP (solid black line) and the NHPP (dotted red line). Clearly the latter model gives a better fit to the empirical data. STFrisk06.xpl

After describing the claim arrival process we have to find an appropriate model for the loss amounts. In Chapter 13 a number of distributions were fitted to the loss sizes. The log-normal distribution with parameters µ = 12.6645 and σ = 1.3981 produced the best results. The Burr distribution with α = 0.8804, λ = 8.4202 · 10^6, and τ = 1.2749 overestimated the tails of the empirical distribution; nevertheless, it gave the next best fit. The simulation results are presented in Figure 14.6. We consider a hypothetical scenario where the insurance company insures losses resulting from fire damage. The company's initial capital is assumed to be u = 400 million DKK and the relative safety loading used is θ = 0.5. We choose two models of the risk process whose application is most justified by the statistical results described above:


Figure 14.6: The Danish fire data simulation results for a NHPP with log-normal claim sizes (left panel), a NHPP with Burr claim sizes (right panel), and a NHPP with claims generated from the edf (bottom panel). The dotted lines are the sample 0.001, 0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99, 0.999-quantile lines based on 3000 trajectories of the risk process. STFrisk07.xpl


a NHPP with log-normal claim sizes and a NHPP with Burr claim sizes. For comparison we also present the results of a model incorporating the empirical distribution function. Recall that in this model the factor µ in the premium function c(t) is set to the empirical mean. In all panels of Figure 14.6 the thick solid blue line is the "real" risk process, i.e. a trajectory constructed from the historical arrival times and values of the losses. The different shapes of the "real" risk process in the subplots are due to the different forms of the premium function c(t), which has to be chosen according to the type of the claim arrival process. The dashed red line is a sample trajectory. The dotted lines are the sample 0.001, 0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99, 0.999-quantile lines based on 3000 trajectories of the risk process. As in the PCS data case, we assume that if the capital of the insurance company drops below zero, the company goes bankrupt, so the capital is set to zero and remains at this level thereafter. Clearly, if claim severities are Burr distributed, then extreme events are more likely than in the log-normal case, for which the historical trajectory falls outside the 0.001-quantile line. The overall picture is, in fact, similar to the one obtained for the PCS data. We either overestimate the risk using the Burr distribution or underestimate it with the log-normal law. The empirical approach yields a "real" risk process which lies close to the median and does not cross the very low or very high quantile lines. However, as stated previously, the empirical approach has its shortcomings. Since this time we only slightly undervalue the risk with the log-normal law, it might be advisable to use it for further modeling.
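For the linear intensity λ4(s) = c + ds fitted above, the NHPP can be simulated without thinning: generate unit-rate Poisson arrivals on [0, Λ(T)] and map them back through the inverse of the cumulative intensity Λ(t) = ct + dt²/2. A Python sketch:

```python
import numpy as np

c, d = 13.97, 7.57   # fitted annual intensity lam4(s) = c + d*s

def nhpp_linear(T, rng):
    """NHPP with rate c + d*s on [0, T] via the inverse of the cumulative
    intensity Lambda(t) = c*t + d*t**2/2 (the positive root of the quadratic)."""
    total = c * T + d * T**2 / 2.0
    s = np.sort(rng.uniform(0.0, total, rng.poisson(total)))  # unit-rate arrivals
    return (-c + np.sqrt(c**2 + 2.0 * d * s)) / d

rng = np.random.default_rng(4)
counts = [len(nhpp_linear(11.0, rng)) for _ in range(400)]
```

Over the 11-year horizon the expected number of events is Λ(11) = 13.97·11 + 7.57·121/2 ≈ 611.7, in line with the aggregate count in Figure 14.5.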


Bibliography

Ahrens, J. H. and Dieter, U. (1982). Computer generation of Poisson deviates from modified normal distributions, ACM Trans. Math. Software 8: 163–179.

Ammeter, H. (1948). A generalization of the collective theory of risk in regard to fluctuating basic probabilities, Skand. Aktuarietidskr. 31: 171–198.

Bratley, P., Fox, B. L., and Schrage, L. E. (1987). A Guide to Simulation, Springer-Verlag, New York.

Burnecki, K. and Weron, R. (2004). Modeling the risk process in the XploRe computing environment, Lecture Notes in Computer Science 3039: 868–875.

Burnecki, K., Härdle, W., and Weron, R. (2004). Simulation of risk processes, in J. Teugels, B. Sundt (eds.) Encyclopedia of Actuarial Science, Wiley, Chichester.

Embrechts, P., Kaufmann, R., and Samorodnitsky, G. (2002). Ruin theory revisited: stochastic models for operational risk, in C. Bernadell et al. (eds.) Risk Management for Central Bank Foreign Reserves, European Central Bank, Frankfurt a.M., 243–261.

Embrechts, P. and Klüppelberg, C. (1993). Some aspects of insurance mathematics, Theory Probab. Appl. 38: 262–295.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer.

Grandell, J. (1991). Aspects of Risk Theory, Springer, New York.

Hörmann, W. (1993). The transformed rejection method for generating Poisson random variables, Insurance: Mathematics and Economics 12: 39–45.

Janicki, A. and Weron, A. (1994). Simulation and Chaotic Behavior of α-Stable Stochastic Processes, Marcel Dekker.

L'Ecuyer, P. (2004). Random Number Generation, in J. E. Gentle, W. Härdle, Y. Mori (eds.) Handbook of Computational Statistics, Springer, Berlin, 35–70.


Rolski, T., Schmidli, H., Schmidt, V., and Teugels, J. L. (1999). Stochastic Processes for Insurance and Finance, Wiley, Chichester.

Ross, S. (2002). Simulation, Academic Press, San Diego.

Stadlober, E. (1989). Sampling from Poisson, binomial and hypergeometric distributions: ratio of uniforms as a simple and fast alternative, Math. Statist. Sektion 303, Forschungsgesellschaft Joanneum Graz.

Willmot, G. E. (2001). The nature of modelling insurance losses, The Munich Re Inaugural Lecture, December 5, 2001, Toronto.

15 Ruin Probabilities in Finite and Infinite Time

Krzysztof Burnecki, Paweł Miśta, and Aleksander Weron

15.1 Introduction

In examining the nature of the risk associated with a portfolio of business, it is often of interest to assess how the portfolio may be expected to perform over an extended period of time. One approach concerns the use of ruin theory (Panjer and Willmot, 1992). Ruin theory is concerned with the excess of the income (with respect to a portfolio of business) over the outgo, or claims paid. This quantity, referred to as the insurer's surplus, varies in time. Specifically, ruin is said to occur if the insurer's surplus reaches a specified lower bound, e.g. minus the initial capital. One measure of risk is the probability of such an event, clearly reflecting the volatility inherent in the business. In addition, it can serve as a useful tool in long range planning for the use of the insurer's funds.

We recall now a definition of the standard mathematical model for the insurance risk, see Grandell (1991) and Chapter 14. The initial capital of the insurance company is denoted by u, the Poisson process Nt with intensity (rate) λ describes the number of claims in the interval (0, t], and the claim severities are random, given by an i.i.d. non-negative sequence {Xk}_{k=1}^∞ with mean value µ and variance σ², independent of Nt. The insurance company receives a premium at a constant rate c per unit time, where c = (1 + θ)λµ and θ > 0 is called the relative safety loading. The classical risk process {Rt}t≥0 is given by

Rt = u + ct − Σ_{i=1}^{Nt} Xi.


We define a claim surplus process {St}t≥0 as

St = u − Rt = Σ_{i=1}^{Nt} Xi − ct.

The time to ruin is defined as τ(u) = inf{t ≥ 0 : Rt < 0} = inf{t ≥ 0 : St > u}. Let L = sup_{0≤t<∞} {St} and LT = sup_{0≤t≤T} {St}. The ruin probability in infinite time is then given by

ψ(u) = P(τ(u) < ∞) = P(L > u).   (15.1)

We note that the above definition implies that the relative safety loading θ has to be positive; otherwise c would be less than λµ and thus with probability 1 the risk business would become negative in infinite time. The ruin probability in finite time T is given by

ψ(u, T) = P(τ(u) ≤ T) = P(LT > u).   (15.2)

We also note that obviously ψ(u, T) < ψ(u). However, the infinite time ruin probability may sometimes also be relevant for the finite time case. From a practical point of view, ψ(u, T), where T is related to the planning horizon of the company, may perhaps sometimes be regarded as more interesting than ψ(u). Most insurance managers will closely follow the development of the risk business and increase the premium if the risk business behaves badly. The planning horizon may be thought of as the sum of the following: the time until the risk business is found to behave "badly", the time until the management reacts, and the time until a decision of a premium increase takes effect. Therefore, in non-life insurance, it may be natural to regard T equal to four or five years as reasonable (Grandell, 1991). We also note that the situation in infinite time is markedly different from the finite horizon case, as the ruin probability in finite time can always be computed directly using Monte Carlo simulations.

We also remark that the generalizations of the classical risk process studied in Chapter 14, where the occurrence of the claims is described by point processes other than the Poisson process (i.e., non-homogeneous, mixed Poisson and Cox processes), do not alter the ruin probability in infinite time. This stems from the following fact. Consider a risk process R̃t driven by a Cox process Ñt with the intensity process λ̃(t), namely

R̃t = u + (1 + θ)µ ∫_0^t λ̃(s) ds − Σ_{i=1}^{Ñt} Xi.

Define now Λ(t) = ∫_0^t λ̃(s) ds and Rt = R̃{Λ⁻¹(t)}. Then the point process Nt = Ñ{Λ⁻¹(t)} is a standard Poisson process with intensity 1, and therefore ψ̃(u) = P(inf_{t≥0} {R̃t} < 0) = P(inf_{t≥0} {Rt} < 0) = ψ(u). The time scale defined by Λ⁻¹(t) is called the operational time scale. It naturally affects the time to ruin, hence the finite time ruin probability, but not the ultimate ruin probability.

The ruin probabilities in infinite and finite time can only be calculated for a few special cases of the claim amount distribution. Thus, finding a reliable approximation, especially in the ultimate case, when the Monte Carlo method cannot be utilized, is really important from a practical point of view. In Section 15.2 we present a general formula, called the Pollaczek-Khinchin formula, for the ruin probability in infinite time, which leads to exact ruin probabilities in special cases of the claim size distribution. Section 15.3 is devoted to various approximations of the infinite time ruin probability. In Section 15.4 we compare 12 different well-known and not so well-known approximations. The finite-time case is studied in Sections 15.5, 15.6, and 15.7. The exact ruin probabilities in finite time are discussed in Section 15.5. The most important approximations of the finite time ruin probability are presented in Section 15.6. They are illustrated in Section 15.7. To illustrate and compare approximations we use the PCS (Property Claim Services) catastrophe data example introduced in Chapter 13. The data describes losses resulting from natural catastrophic events in the USA that occurred between 1990 and 1999. This data set was used to obtain the parameters of the discussed distributions. We note that ruin theory has also recently been employed as an interesting tool in operational risk. In view of the data already available on operational risk, ruin type estimates may become useful (Embrechts, Kaufmann, and Samorodnitsky, 2004).
We ﬁnally note that all presented explicit solutions and approximations are implemented in the Insurance library of XploRe. All ﬁgures and tables were created with the help of this library.

15.1.1 Light- and Heavy-tailed Distributions

We distinguish here between light- and heavy-tailed distributions. A distribution FX(x) is said to be light-tailed if there exist constants a > 0, b > 0 such that F̄X(x) = 1 − FX(x) ≤ a e^(−bx) or, equivalently, if there exists z > 0 such that MX(z) < ∞, where MX(z) is the moment generating function, see Chapter 13. A distribution FX(x) is said to be heavy-tailed if for all a > 0, b > 0: F̄X(x) > a e^(−bx) or, equivalently, if MX(z) = ∞ for all z > 0. We study here the claim size distributions listed in Table 15.1.

Table 15.1: Typical claim size distributions. In all cases x ≥ 0.

Light-tailed distributions:
Exponential:    β > 0;                      fX(x) = β exp(−βx)
Gamma:          α > 0, β > 0;               fX(x) = β^α/Γ(α) · x^(α−1) exp(−βx)
Weibull:        β > 0, τ ≥ 1;               fX(x) = βτ x^(τ−1) exp(−βx^τ)
Mixed exp's:    βi > 0, Σ_{i=1}^n ai = 1;   fX(x) = Σ_{i=1}^n ai βi exp(−βi x)

Heavy-tailed distributions:
Weibull:        β > 0, 0 < τ < 1;           fX(x) = βτ x^(τ−1) exp(−βx^τ)
Log-normal:     µ ∈ R, σ > 0;               fX(x) = 1/(√(2π) σx) · exp{−(ln x − µ)²/(2σ²)}
Pareto:         α > 0, λ > 0;               fX(x) = α/(λ + x) · {λ/(λ + x)}^α
Burr:           α > 0, λ > 0, τ > 0;        fX(x) = ατ λ^α x^(τ−1)/(λ + x^τ)^(α+1)

In the case of light-tailed claims the adjustment coefficient (also called the Lundberg exponent) plays a key role in calculating the ruin probability. Let γ = sup{z : MX(z) < ∞} and let R be a positive solution of the equation

1 + (1 + θ)µR = MX(R),   R < γ.   (15.3)

If there exists a non-zero solution R to the above equation, we call it an adjustment coefficient. Clearly, R = 0 satisfies equation (15.3), but there may exist a positive solution as well (this requires that X has a moment generating function, thus excluding distributions such as the Pareto and the log-normal). To see the plausibility of this result, note that MX(0) = 1, M′X(z) > 0, M″X(z) > 0, and M′X(0) = µ. Hence, the curves y = MX(z) and y = 1 + (1 + θ)µz may intersect, as shown in Figure 15.1.

An analytical solution to equation (15.3) exists only for a few claim distributions. However, it is quite easy to obtain a numerical solution.

Figure 15.1: Illustration of the existence of the adjustment coefficient. The solid blue line represents the curve y = 1 + (1 + θ)µz and the dotted red one y = MX(z). STFruin01.xpl

The coefficient R satisfies the inequality

R < 2θµ/µ(2),   (15.4)

where µ(2) = E(Xi²), see Asmussen (2000). Let D(z) = 1 + (1 + θ)µz − MX(z). Thus, the adjustment coefficient R > 0 satisfies the equation D(R) = 0. In order to get the solution one may use the Newton-Raphson formula

Rj+1 = Rj − D(Rj)/D′(Rj),   (15.5)

with the initial condition R0 = 2θµ/µ(2), where D′(z) = (1 + θ)µ − M′X(z).
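The Newton-Raphson scheme (15.5) can be sketched as follows. The function and argument names are illustrative; the check uses exponential claims with rate β, for which the adjustment coefficient is known in closed form, R = θβ/(1 + θ).

```python
def adjustment_coefficient(mx, mx_prime, mu, mu2, theta, tol=1e-12):
    """Newton-Raphson for D(R) = 1 + (1+theta)*mu*R - M_X(R) = 0,
    started from the bound R0 = 2*theta*mu/mu2 of (15.4)."""
    r = 2.0 * theta * mu / mu2
    for _ in range(100):
        d = 1.0 + (1.0 + theta) * mu * r - mx(r)          # D(R_j)
        dp = (1.0 + theta) * mu - mx_prime(r)             # D'(R_j)
        step = d / dp
        r -= step                                          # R_{j+1}
        if abs(step) < tol:
            return r
    raise RuntimeError("Newton-Raphson did not converge")

# exponential claims: M_X(z) = beta/(beta - z), mu = 1/beta, mu2 = 2/beta^2
beta, theta = 1.0, 0.3
R = adjustment_coefficient(lambda z: beta / (beta - z),
                           lambda z: beta / (beta - z) ** 2,
                           mu=1.0 / beta, mu2=2.0 / beta**2, theta=theta)
```

With β = 1 and θ = 0.3 the iteration converges in a few steps to R = 0.3/1.3 ≈ 0.2308, safely below the bound R0 = 0.3.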


Moreover, if it is possible to calculate the third raw moment µ(3), we can obtain a sharper bound than (15.4) (Panjer and Willmot, 1992):

R < 12µθ / [3µ(2) + √{9(µ(2))² + 24µµ(3)θ}],

and use it as the initial condition in (15.5).

15.2 Exact Ruin Probabilities in Infinite Time

In order to present a ruin probability formula we first use the relation (15.1) and express L as a sum of so-called ladder heights. Let L1 be the value that the process {St} reaches for the first time above the zero level. Next, let L2 be the value which is obtained for the first time above the level L1; L3, L4, . . . are defined in the same way. The values Lk are called ladder heights. Since the process {St} has stationary and independent increments, {Lk}_{k=1}^∞ is a sequence of independent and identically distributed variables with the density

fL1(x) = F̄X(x)/µ.   (15.6)

One may also show that the number of ladder heights K is given by the geometric distribution with the parameter q = θ/(1 + θ). Thus, the random variable L may be expressed as

L = Σ_{i=1}^K Li   (15.7)

and it has a compound geometric distribution. The above fact leads to the Pollaczek-Khinchin formula for the ruin probability:

ψ(u) = 1 − P(L ≤ u) = 1 − θ/(1 + θ) · Σ_{n=0}^∞ {1/(1 + θ)}^n F*ⁿ_{L1}(u),   (15.8)

where F*ⁿ_{L1}(u) denotes the nth convolution of the distribution function FL1. One can use it to derive explicit solutions for a variety of claim amount distributions, particularly those whose Laplace transform is a rational function. These cases will be discussed in this section. Unfortunately, heavy-tailed distributions like e.g. the log-normal or Pareto one are not included. In such a case various approximations can be applied or one can calculate the ruin probability directly via the Pollaczek-Khinchin formula using Monte Carlo simulations. This will be studied in Section 15.3.
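The Monte Carlo use of the Pollaczek-Khinchin formula is direct: simulate the geometric number of ladder heights K and their sum L, and estimate ψ(u) = P(L > u). For Exp(β) claims the ladder-height density F̄X(x)/µ is again Exp(β), so the estimate can be checked against the exact exponential-claims formula (15.9) below. A sketch:

```python
import numpy as np

def ruin_prob_pk_exp(u, theta, beta, n_sim, rng):
    """Monte Carlo on the Pollaczek-Khinchin formula psi(u) = P(L > u) for
    Exp(beta) claims, whose ladder heights are again Exp(beta)."""
    q = theta / (1.0 + theta)
    k = rng.geometric(q, n_sim) - 1           # number of ladder heights per run
    heights = rng.exponential(1.0 / beta, int(k.sum()))
    owner = np.repeat(np.arange(n_sim), k)    # which run each height belongs to
    L = np.bincount(owner, weights=heights, minlength=n_sim)
    return float(np.mean(L > u))

beta, theta, u = 1.0, 0.3, 2.0
rng = np.random.default_rng(9)
psi_mc = ruin_prob_pk_exp(u, theta, beta, 200_000, rng)
psi_exact = np.exp(-theta * beta * u / (1.0 + theta)) / (1.0 + theta)
```

The same compound-geometric simulation works for any claim law whose ladder-height density F̄X(x)/µ can be sampled.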


We shall now, in Sections 15.2.1–15.2.4, brieﬂy present a collection of basic exact results on the ruin probability in inﬁnite time. The ruin probability ψ(u) is always considered as a function of the initial capital u.

15.2.1 No Initial Capital

When u = 0 it is easy to obtain the exact formula:

  ψ(0) = 1/(1 + θ).

Notice that the formula depends only on the relative safety loading θ, regardless of the claim frequency rate λ and the claim size distribution; the ruin probability clearly decreases as θ grows.

15.2.2 Exponential Claim Amounts

One of the historically first results on the ruin probability is the explicit formula for exponential claims with parameter β, namely

  ψ(u) = (1/(1+θ)) exp{−θβu/(1+θ)}.  (15.9)

In Table 15.2 we present the ruin probability values for exponential claims with β = 6.3789 · 10^{-9} (see Chapter 13) and the relative safety loading θ = 30% with respect to the initial capital u. We can observe that the ruin probability decreases as the capital grows. When u = 1 billion USD the ruin probability amounts to 18%, whereas u = 5 billion USD reduces the probability to almost zero.
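Formula (15.9) is immediate to evaluate; the following minimal Python sketch (the function name psi_exp is ours, not from the STFruin quantlets) reproduces the entries of Table 15.2:

```python
import math

def psi_exp(u, beta, theta):
    """Exact infinite-time ruin probability (15.9) for exponential claims."""
    return math.exp(-theta * beta * u / (1.0 + theta)) / (1.0 + theta)

# exponential fit to the PCS data (see Chapter 13), relative safety loading 30%
beta, theta = 6.3789e-9, 0.3
for u in [0.0, 1e9, 5e9]:
    print(u, round(psi_exp(u, beta, theta), 6))
```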

15.2.3 Gamma Claim Amounts

Grandell and Segerdahl (1971) showed that for the gamma claim amount distribution with mean 1 and α ≤ 1 the exact value of the ruin probability can be


Table 15.2: The ruin probability for exponential claims with β = 6.3789 · 10^{-9} and θ = 0.3 (u in USD billion).

  u      0         1         2         3         4         5
  ψ(u)   0.769231  0.176503  0.040499  0.009293  0.002132  0.000489

STFruin02.xpl

computed via the formula:

  ψ(u) = θ(1 − R/α) exp(−Ru) / {1 + (1+θ)R − (1+θ)(1 − R/α)} + (αθ sin(απ)/π) · I,  (15.10)

where

  I = ∫_0^∞ x^α exp{−(x+1)αu} / ( [x^α{1 + α(1+θ)(x+1)} − cos(απ)]^2 + sin^2(απ) ) dx.  (15.11)

The integral I has to be calculated numerically. We also notice that the assumption on the mean is not restrictive, since for claims X with arbitrary mean µ we have ψ_X(u) = ψ_{X/µ}(u/µ). As the gamma distribution is closed under scale changes, we obtain ψ_{G(α,β)}(u) = ψ_{G(α,α)}(βu/α). This correspondence enables us to calculate the exact ruin probability via equation (15.10) for gamma claims with arbitrary mean. Table 15.3 shows the ruin probability values for gamma claims with α = 0.9185, β = 6.1662 · 10^{-9} (see Chapter 13) and the relative safety loading θ = 30% with respect to the initial capital u. Naturally, the ruin probability decreases as the capital grows. Moreover, the probability takes values similar to the exponential case, but a closer look reveals that the values in the exponential case are always slightly larger. When u = 1 billion USD the difference is about 1%. This suggests that the choice of the fitted distribution function may have an impact on actuarial decisions.
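Formula (15.10) needs only elementary numerics. In the sketch below (our own helper, not the book's STFruin03.xpl), we take R to be the adjustment coefficient, assumed here to be the positive solution of the Lundberg equation (1 − R/α)^{−α} = 1 + (1+θ)R for the mean-1 gamma law (R is not defined in this excerpt); it is found by bisection and the integral I is computed with Simpson's rule:

```python
import math

def psi_gamma(u, alpha, beta, theta):
    """Ruin probability for gamma claims via (15.10)-(15.11), u > 0.

    Uses the scaling psi_G(alpha,beta)(u) = psi_G(alpha,alpha)(beta*u/alpha)."""
    v = beta * u / alpha  # capital in the mean-1 parametrization

    # adjustment coefficient: (1 - r/alpha)^(-alpha) = 1 + (1+theta)*r on (0, alpha)
    lo, hi = 1e-12, alpha * (1.0 - 1e-9)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (1.0 - mid / alpha) ** (-alpha) - 1.0 - (1.0 + theta) * mid < 0.0:
            lo = mid
        else:
            hi = mid
    R = 0.5 * (lo + hi)

    def integrand(x):
        num = x ** alpha * math.exp(-(x + 1.0) * alpha * v)
        den = (x ** alpha * (1.0 + alpha * (1.0 + theta) * (x + 1.0))
               - math.cos(alpha * math.pi)) ** 2 + math.sin(alpha * math.pi) ** 2
        return num / den

    n = 4000                          # Simpson's rule on [0, x_max];
    x_max = 50.0 / (alpha * v)        # the integrand decays like exp(-alpha*v*x)
    h = x_max / n
    acc = integrand(0.0) + integrand(x_max)
    for k in range(1, n):
        acc += (4.0 if k % 2 else 2.0) * integrand(k * h)
    I = acc * h / 3.0

    first = (theta * (1.0 - R / alpha) * math.exp(-R * v)
             / (1.0 + (1.0 + theta) * R - (1.0 + theta) * (1.0 - R / alpha)))
    return first + alpha * theta * math.sin(alpha * math.pi) / math.pi * I
```

With α = 0.9185, β = 6.1662 · 10^{-9} and θ = 0.3 this reproduces Table 15.3 to within table rounding.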


Table 15.3: The ruin probability for gamma claims with α = 0.9185, β = 6.1662 · 10^{-9} and θ = 0.3 (u in USD billion).

  u      0         1         2         3         4         5
  ψ(u)   0.769229  0.174729  0.039857  0.009092  0.002074  0.000473

STFruin03.xpl

15.2.4 Mixture of Two Exponentials Claim Amounts

For the claim size distribution being a mixture of two exponentials with the parameters β1 , β2 and weights a, 1 − a, one may obtain an explicit formula by using the Laplace transform inversion (Panjer and Willmot, 1992):

  ψ(u) = (1/{(1+θ)(r2 − r1)}) {(ρ − r1) exp(−r1 u) + (r2 − ρ) exp(−r2 u)},  (15.12)

where

  r1 = [ρ + θ(β1 + β2) − {(ρ + θ(β1 + β2))^2 − 4β1β2θ(1+θ)}^{1/2}] / {2(1+θ)},

  r2 = [ρ + θ(β1 + β2) + {(ρ + θ(β1 + β2))^2 − 4β1β2θ(1+θ)}^{1/2}] / {2(1+θ)},

and

  p = aβ1^{-1} / {aβ1^{-1} + (1−a)β2^{-1}},   ρ = β1(1 − p) + β2 p.

Table 15.4 shows the ruin probability values for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 (see Chapter 13) and the relative safety loading θ = 30% with respect to the initial capital u. As before, the ruin probability decreases as the capital grows. Moreover, the increase in the ruin probability values with respect to the previous cases is dramatic. When u = 1 billion USD the difference between the mixture of two exponentials and exponential cases reaches 240%! As the same underlying


Table 15.4: The ruin probability for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u      0         1         5         10        20        50
  ψ(u)   0.769231  0.587919  0.359660  0.194858  0.057197  0.001447

STFruin04.xpl

data set was used in all cases to estimate the parameters of the distributions, this supports the thesis that the choice of the fitted distribution function and checking the goodness of fit are of paramount importance.
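The explicit formula (15.12) takes only a few lines of code; the following Python sketch (function name ours) reproduces Table 15.4:

```python
import math

def psi_mix2exp(u, b1, b2, a, theta):
    """Exact ruin probability (15.12) for a mixture of two exponentials."""
    p = (a / b1) / (a / b1 + (1.0 - a) / b2)
    rho = b1 * (1.0 - p) + b2 * p
    s = rho + theta * (b1 + b2)
    d = math.sqrt(s * s - 4.0 * b1 * b2 * theta * (1.0 + theta))
    r1 = (s - d) / (2.0 * (1.0 + theta))
    r2 = (s + d) / (2.0 * (1.0 + theta))
    return ((rho - r1) * math.exp(-r1 * u)
            + (r2 - rho) * math.exp(-r2 * u)) / ((1.0 + theta) * (r2 - r1))

# parameters fitted to the PCS data in Chapter 13
print(psi_mix2exp(1e9, 3.5900e-10, 7.5088e-9, 0.0584, 0.3))
```

Note that at u = 0 the two exponential terms collapse and ψ(0) = 1/(1 + θ), in agreement with Section 15.2.1.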

Finally, note that it is possible to derive explicit formulae for a mixture of n (n ≥ 3) exponentials (Wikstad, 1971; Panjer and Willmot, 1992). They are not presented here since the complexity of the formulae grows with n, and such mixtures are of little practical importance due to the increasing number of parameters.

15.3 Approximations of the Ruin Probability in Infinite Time

When the claim size distribution is exponential (or closely related to it), simple analytic results for the ruin probability in infinite time exist, see Section 15.2. For more general claim amount distributions, e.g. heavy-tailed, the Laplace transform technique does not work and one needs some estimates. In this section, we present 12 different well-known and not so well-known approximations. We illustrate them on a common claim size distribution example, namely the mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9} and a = 0.0584 (see Chapter 13). Numerical comparison of the approximations is given in Section 15.4.


Table 15.5: The Cramér–Lundberg approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u        0         1         5         10        20        50
  ψCL(u)   0.663843  0.587260  0.359660  0.194858  0.057197  0.001447

STFruin05.xpl

15.3.1 Cramér–Lundberg Approximation

Cramér–Lundberg's asymptotic ruin formula for ψ(u) for large u is given by

  ψCL(u) = C e^{−Ru},  (15.13)

where C = θµ / {M′_X(R) − µ(1+θ)}. For the proof we refer to Grandell (1991). The classical Cramér–Lundberg approximation yields quite accurate results; however, we must remember that it requires the adjustment coefficient R to exist, therefore only light-tailed distributions can be taken into consideration. For exponentially distributed claims, formula (15.13) yields the exact result.

In Table 15.5 the Cramér–Lundberg approximation for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u is given. We see that the Cramér–Lundberg approximation underestimates the ruin probability. Nevertheless, the results coincide quite closely with the exact values shown in Table 15.4. When the initial capital is zero, the relative error is largest and exceeds 13%.
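A sketch for the mixture of two exponentials follows. Here the adjustment coefficient R is taken to solve M_X(R) = 1 + (1+θ)µR (our reading, with M′_X the derivative of the moment generating function in the constant C):

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu = a / b1 + (1.0 - a) / b2          # mean claim size, about 2.88e8 USD

def mgf(r):                           # m.g.f. of the mixture, valid for r < b1
    return a * b1 / (b1 - r) + (1.0 - a) * b2 / (b2 - r)

def mgf_prime(r):
    return a * b1 / (b1 - r) ** 2 + (1.0 - a) * b2 / (b2 - r) ** 2

# adjustment coefficient R: M_X(R) = 1 + (1+theta)*mu*R, 0 < R < b1 (bisection)
lo, hi = 1e-15, b1 * (1.0 - 1e-12)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mgf(mid) - 1.0 - (1.0 + theta) * mu * mid < 0.0:
        lo = mid
    else:
        hi = mid
R = 0.5 * (lo + hi)
C = theta * mu / (mgf_prime(R) - mu * (1.0 + theta))

def psi_cl(u):
    """Cramer-Lundberg approximation (15.13)."""
    return C * math.exp(-R * u)
```

The constant C ≈ 0.6638 equals ψCL(0) in Table 15.5.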


Table 15.6: The exponential approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u       0         1         5         10        20        50
  ψE(u)   0.747418  0.656048  0.389424  0.202900  0.055081  0.001102

STFruin06.xpl

15.3.2 Exponential Approximation

This approximation was proposed and derived by De Vylder (1996). It requires the first three moments to be finite:

  ψE(u) = exp{ −1 − (2µθu − µ^(2)) / [(µ^(2))^2 + (4/3)θµµ^(3)]^{1/2} }.  (15.14)

Table 15.6 shows the results of the exponential approximation for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u. Comparing them with the exact values presented in Table 15.4, we see that the exponential approximation works reasonably well in the studied case. When the initial capital is USD 50 billion, the relative error is largest and reaches 24%.
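Formula (15.14) only needs the first three raw moments of the claim distribution, which are explicit for the mixture (a sketch, with our own variable names):

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
# raw moments of the mixture of two exponentials
mu  = a / b1 + (1.0 - a) / b2
mu2 = 2.0 * (a / b1**2 + (1.0 - a) / b2**2)
mu3 = 6.0 * (a / b1**3 + (1.0 - a) / b2**3)

def psi_e(u):
    """De Vylder's exponential approximation (15.14)."""
    return math.exp(-1.0 - (2.0 * mu * theta * u - mu2)
                    / math.sqrt(mu2**2 + (4.0 / 3.0) * theta * mu * mu3))
```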

15.3.3 Lundberg Approximation

The following formula, called the Lundberg approximation, comes from Grandell (2000). It requires the first three moments to be finite:

  ψL(u) = { 1 + (θu − µ^(2)/(2µ)) · 4θµ^2µ^(3)/(3(µ^(2))^3) } exp{−2µθu/µ^(2)}.  (15.15)

In Table 15.7 the Lundberg approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to


Table 15.7: The Lundberg approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u       0         1         5         10        20        50
  ψL(u)   0.504967  0.495882  0.382790  0.224942  0.058739  0.000513

STFruin07.xpl

the initial capital u is given. We see that the Lundberg approximation works worse than the exponential one. When the initial capital is USD 50 billion, the relative error exceeds 60%.
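Formula (15.15) differs from the exponential approximation only in its polynomial correction factor; a sketch in the same setting:

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / b1 + (1.0 - a) / b2
mu2 = 2.0 * (a / b1**2 + (1.0 - a) / b2**2)
mu3 = 6.0 * (a / b1**3 + (1.0 - a) / b2**3)

def psi_l(u):
    """Lundberg approximation (15.15)."""
    corr = 1.0 + (theta * u - mu2 / (2.0 * mu)) \
        * 4.0 * theta * mu**2 * mu3 / (3.0 * mu2**3)
    return corr * math.exp(-2.0 * mu * theta * u / mu2)
```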

15.3.4 Beekman–Bowers Approximation

The Beekman–Bowers approximation uses the following representation of the ruin probability:

  ψ(u) = P(L > u) = P(L > 0) P(L > u | L > 0).  (15.16)

The idea of the approximation is to replace the conditional distribution function 1 − P(L > u | L > 0) with a gamma distribution function G(u) by fitting the first two moments (Grandell, 2000). This leads to:

  ψBB(u) = (1/(1+θ)) {1 − G(u)},  (15.17)

where the parameters α, β of G are given by

  α = (1+θ) / [ 1 + {4µµ^(3)/(3(µ^(2))^2) − 1} θ ],   β = 2µθ / [ µ^(2) + {4µµ^(3)/(3µ^(2)) − µ^(2)} θ ].

The Beekman–Bowers approximation gives rather accurate results, see Burnecki, Miśta, and Weron (2004). In the exponential case it becomes the exact formula. It can be used only for distributions with finite first three moments.


Table 15.8: The Beekman–Bowers approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u        0         1         5         10        20        50
  ψBB(u)   0.769231  0.624902  0.352177  0.186582  0.056260  0.001810

STFruin08.xpl

Table 15.8 shows the results of the Beekman–Bowers approximation for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u. The results confirm that the approximation yields quite accurate results for small and moderate initial capital, but when the initial capital is USD 50 billion the relative error reaches an unacceptable 25%, cf. the exact values in Table 15.4.
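Under our reading of the α, β formulas above (α as the shape and β as the rate of G, obtained by matching the first two moments of L given L > 0), the approximation can be coded with the standard library only — the regularized incomplete gamma function is evaluated from its power series:

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / b1 + (1.0 - a) / b2
mu2 = 2.0 * (a / b1**2 + (1.0 - a) / b2**2)
mu3 = 6.0 * (a / b1**3 + (1.0 - a) / b2**3)

alpha = (1.0 + theta) / (1.0 + (4.0 * mu * mu3 / (3.0 * mu2**2) - 1.0) * theta)
beta  = 2.0 * mu * theta / (mu2 + (4.0 * mu * mu3 / (3.0 * mu2) - mu2) * theta)

def reg_inc_gamma(s, x):
    """Regularized lower incomplete gamma P(s, x), power-series evaluation."""
    if x <= 0.0:
        return 0.0
    term = total = 1.0 / s
    for n in range(1, 1000):
        term *= x / (s + n)
        total += term
        if term < 1e-16 * total:
            break
    return total * x**s * math.exp(-x) / math.gamma(s)

def psi_bb(u):
    """Beekman-Bowers approximation (15.17)."""
    return (1.0 - reg_inc_gamma(alpha, beta * u)) / (1.0 + theta)
```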

15.3.5 Renyi Approximation

The Renyi approximation (Grandell, 2000) may be derived from (15.16) when we replace the gamma distribution function G with an exponential one, matching only the first moment. Hence, it can be regarded as a simplified version of the Beekman–Bowers approximation. It requires the first two moments to be finite:

  ψR(u) = (1/(1+θ)) exp{ −2µθu / (µ^(2)(1+θ)) }.  (15.18)

In Table 15.9 the Renyi approximation for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u is given. We see that, compared with the exact values presented in Table 15.4, the results are quite accurate. The accuracy of the approximation is similar to that of the Beekman–Bowers approximation, but when the initial capital is USD 50 billion the relative error exceeds 50%.
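Formula (15.18) is a one-liner in code; a sketch for the running example:

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / b1 + (1.0 - a) / b2
mu2 = 2.0 * (a / b1**2 + (1.0 - a) / b2**2)

def psi_r(u):
    """Renyi approximation (15.18)."""
    return math.exp(-2.0 * mu * theta * u / (mu2 * (1.0 + theta))) / (1.0 + theta)
```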


Table 15.9: The Renyi approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u       0         1         5         10        20        50
  ψR(u)   0.769231  0.667738  0.379145  0.186876  0.045400  0.000651

STFruin09.xpl

15.3.6 De Vylder Approximation

The idea of this approximation is to replace the claim surplus process St with a claim surplus process S̄t with exponentially distributed claims such that the first three moments of the processes coincide, namely E(St^k) = E(S̄t^k) for k = 1, 2, 3, see De Vylder (1978). The process S̄t is determined by the three parameters (λ̄, θ̄, β̄), which must satisfy:

  λ̄ = 9λ(µ^(2))^3 / {2(µ^(3))^2},   θ̄ = 2µµ^(3)θ / {3(µ^(2))^2},   β̄ = 3µ^(2)/µ^(3).

Then De Vylder's approximation is given by:

  ψDV(u) = (1/(1+θ̄)) exp{ −θ̄β̄u/(1+θ̄) }.  (15.19)

Obviously, in the exponential case the method gives the exact result. For other claim amount distributions the first three moments have to exist in order to apply the approximation. Table 15.10 shows the results of the De Vylder approximation for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u. The approximation gives surprisingly good results. In the considered case the relative error is largest when the initial capital is zero and amounts to about 13%, cf. Table 15.4.
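The adjusted parameters and formula (15.19) translate directly into code (a sketch; note λ̄ is not needed for ψDV itself, only θ̄ and β̄ enter):

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / b1 + (1.0 - a) / b2
mu2 = 2.0 * (a / b1**2 + (1.0 - a) / b2**2)
mu3 = 6.0 * (a / b1**3 + (1.0 - a) / b2**3)

theta_bar = 2.0 * mu * mu3 * theta / (3.0 * mu2**2)   # adjusted safety loading
beta_bar  = 3.0 * mu2 / mu3                           # rate of the fitted exponential

def psi_dv(u):
    """De Vylder approximation (15.19)."""
    return math.exp(-theta_bar * beta_bar * u / (1.0 + theta_bar)) / (1.0 + theta_bar)
```

This gives ψDV(0) = 1/(1 + θ̄) ≈ 0.6689, in agreement with Table 15.10.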


Table 15.10: The De Vylder approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u        0         1         5         10        20        50
  ψDV(u)   0.668881  0.591446  0.361560  0.195439  0.057105  0.001424

STFruin10.xpl

15.3.7 4-moment Gamma De Vylder Approximation

The 4-moment gamma De Vylder approximation, proposed by Burnecki, Miśta, and Weron (2003), is based on De Vylder's idea to replace the claim surplus process St with another one S̄t for which the expression for ψ(u) is explicit. This time we calculate the parameters of the new process with gamma distributed claims and apply the exact formula (15.10) for the ruin probability. Let us note that the claim surplus process S̄t with gamma claims is determined by the four parameters (λ̄, θ̄, µ̄, µ̄^(2)), so we have to match the four moments of St and S̄t. We also need to assume that µ^(2)µ^(4) < (3/2)(µ^(3))^2 to ensure that µ̄, µ̄^(2) > 0 and µ̄^(2) > µ̄^2, which is true for the gamma distribution. Then

  λ̄ = λ(µ^(2))^3(µ^(3))^2 / [ {µ^(2)µ^(4) − 2(µ^(3))^2}{2µ^(2)µ^(4) − 3(µ^(3))^2} ],

  θ̄ = θµ{2(µ^(3))^2 − µ^(2)µ^(4)} / {(µ^(2))^2 µ^(3)},

  µ̄ = {3(µ^(3))^2 − 2µ^(2)µ^(4)} / (µ^(2)µ^(3)),

  µ̄^(2) = {µ^(2)µ^(4) − 2(µ^(3))^2}{2µ^(2)µ^(4) − 3(µ^(3))^2} / (µ^(2)µ^(3))^2.

When this assumption cannot be fulfilled, the simpler case of matching only the first three moments leads to

  λ̄ = 2λ(µ^(2))^2 / {µ(µ^(3) + µ^(2)µ)},   θ̄ = θµ(µ^(3) + µ^(2)µ) / {2(µ^(2))^2},   µ̄ = µ,   µ̄^(2) = µ(µ^(3) + µ^(2)µ) / (2µ^(2)).


Table 15.11: The 4-moment gamma De Vylder approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u           0         1         5         10        20        50
  ψ4MGDV(u)   0.683946  0.595457  0.359879  0.194589  0.057150  0.001450

STFruin11.xpl

All in all, the 4-moment gamma De Vylder approximation is given by

  ψ4MGDV(u) = θ̄(1 − R/ᾱ) exp(−β̄Ru/ᾱ) / {1 + (1+θ̄)R − (1+θ̄)(1 − R/ᾱ)} + (ᾱθ̄ sin(ᾱπ)/π) · I,  (15.20)

where

  I = ∫_0^∞ x^ᾱ exp{−(x+1)β̄u} / ( [x^ᾱ{1 + ᾱ(1+θ̄)(x+1)} − cos(ᾱπ)]^2 + sin^2(ᾱπ) ) dx,

and ᾱ = µ̄^2/(µ̄^(2) − µ̄^2), β̄ = µ̄/(µ̄^(2) − µ̄^2).

In the exponential and gamma cases this method gives the exact result. For other claim distributions the first four (or three in the simpler case) moments have to exist in order to apply the approximation. Burnecki, Miśta, and Weron (2003) showed numerically that the method gives a slight correction to the De Vylder approximation, which is often regarded as the best among "simple" approximations. In Table 15.11 the 4-moment gamma De Vylder approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 (see Chapter 13) and the relative safety loading θ = 30% with respect to the initial capital u is given. The most striking impression of Table 15.11 is certainly the very good accuracy of the simple 4-moment gamma De Vylder approximation for reasonable choices of the initial capital u. The relative error with respect to the exact values presented in Table 15.4 is largest for u = 0 and equals 11%.


Table 15.12: The heavy traffic approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u        0         1         5         10        20        50
  ψHT(u)   1.000000  0.831983  0.398633  0.158908  0.025252  0.000101

STFruin12.xpl

15.3.8 Heavy Traffic Approximation

The term "heavy traffic" comes from queuing theory. In risk theory it means that, on average, the premiums exceed the expected claims only slightly. This implies that the relative safety loading θ is positive and small. Asmussen (2000) suggests the following approximation:

  ψHT(u) = exp{ −2θµu/µ^(2) }.  (15.21)

This method requires the existence of the first two moments of the claim size distribution. Numerical evidence shows that the approximation is reasonable when the relative safety loading is 10–20% and u is small or moderate, while the approximation may be far off for large u. We also note that the approximation given by (15.21) is also known as the diffusion approximation and is further analysed and generalised to the stable case in Chapter 16, see also Furrer, Michna, and Weron (1997). Table 15.12 shows the results of the heavy traffic approximation for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u. It is clear that the accuracy of the approximation in the considered case is extremely poor. When the initial capital is USD 50 billion, the relative error reaches 93%, cf. Table 15.4.
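Formula (15.21) is trivial to evaluate; a sketch for the running example:

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / b1 + (1.0 - a) / b2
mu2 = 2.0 * (a / b1**2 + (1.0 - a) / b2**2)

def psi_ht(u):
    """Heavy traffic (diffusion) approximation (15.21)."""
    return math.exp(-2.0 * theta * mu * u / mu2)
```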


Table 15.13: The light traffic approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u        0         1         5         10        20        50
  ψLT(u)   0.769231  0.303545  0.072163  0.011988  0.000331  0.000000

STFruin13.xpl

15.3.9 Light Traffic Approximation

As for heavy traffic, the term "light traffic" comes from queuing theory, but it has an obvious interpretation also in risk theory: on average, the premiums are much larger than the expected claims, or in other words, claims appear less frequently than expected. This implies that the relative safety loading θ is positive and large. We may obtain the following asymptotic formula:

  ψLT(u) = (1/{(1+θ)µ}) ∫_u^∞ F̄_X(x) dx.  (15.22)

In risk theory heavy traffic is most often argued to be the typical case rather than light traffic. However, light traffic is of some interest as a complement to heavy traffic, and it is needed for the interpolation approximation studied in the next subsection. It is worth noticing that this method gives accurate results merely for huge values of the relative safety loading, see Asmussen (2000). In Table 15.13 the light traffic approximation for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u is given. The results are even worse than in the heavy traffic case; only for moderate u is the situation better. The relative error dramatically increases with the initial capital.
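For the mixture of two exponentials the tail integral in (15.22) is available in closed form, so the approximation reduces to one line (a sketch):

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu = a / b1 + (1.0 - a) / b2

def psi_lt(u):
    """Light traffic approximation (15.22) for the mixture of two exponentials;
    int_u^inf F_bar(x) dx = (a/b1) e^{-b1 u} + ((1-a)/b2) e^{-b2 u}."""
    tail_int = a / b1 * math.exp(-b1 * u) + (1.0 - a) / b2 * math.exp(-b2 * u)
    return tail_int / ((1.0 + theta) * mu)
```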


Table 15.14: The heavy-light traffic approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u         0         1         5         10        20        50
  ψHLT(u)   0.769231  0.598231  0.302136  0.137806  0.034061  0.001652

STFruin14.xpl

15.3.10 Heavy-light Traffic Approximation

The crude idea of this approximation is to combine the heavy and light traffic approximations (Asmussen, 2000):

  ψHLT(u) = (θ/(1+θ)) ψLT( θu/(1+θ) ) + (1/(1+θ)^2) ψHT(u).  (15.23)

A particular feature of this approximation is that it is exact for the exponential distribution and asymptotically correct in both light and heavy traffic. Table 15.14 shows the results of the heavy-light traffic approximation for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u. Comparing the results with Table 15.12 (heavy traffic), Table 15.13 (light traffic) and the exact values given in Table 15.4, we see that the interpolation is promising. In the considered case the relative error is largest when the initial capital is USD 20 billion, where it exceeds 40%, but usually the error is acceptable.
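The interpolation (15.23) combines the two previous sketches (repeated here so the block is self-contained):

```python
import math

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / b1 + (1.0 - a) / b2
mu2 = 2.0 * (a / b1**2 + (1.0 - a) / b2**2)

def psi_ht(u):                        # heavy traffic (15.21)
    return math.exp(-2.0 * theta * mu * u / mu2)

def psi_lt(u):                        # light traffic (15.22), closed form
    return (a / b1 * math.exp(-b1 * u)
            + (1.0 - a) / b2 * math.exp(-b2 * u)) / ((1.0 + theta) * mu)

def psi_hlt(u):
    """Heavy-light traffic interpolation (15.23)."""
    return (theta / (1.0 + theta) * psi_lt(theta * u / (1.0 + theta))
            + psi_ht(u) / (1.0 + theta) ** 2)
```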

15.3.11 Subexponential Approximation

First, let us introduce the class of subexponential distributions S (Embrechts, Klüppelberg, and Mikosch, 1997), namely

  S = { F : lim_{x→∞} F̄^{*2}(x)/F̄(x) = 2 }.  (15.24)


Here F^{*2}(x) is the convolution square and F̄^{*2} its tail. In terms of random variables, (15.24) means P(X1 + X2 > x) ∼ 2P(X1 > x) as x → ∞, where X1, X2 are independent random variables with distribution F. The class contains the log-normal and Weibull (for τ < 1) distributions. Moreover, all distributions with a regularly varying tail (e.g. Pareto and Burr distributions) are subexponential. For subexponential distributions we can formulate the following approximation of the ruin probability. If F ∈ S, then the asymptotic formula for large u is given by

  ψS(u) = (1/(θµ)) { µ − ∫_0^u F̄(x) dx },  (15.25)

see Asmussen (2000). The approximation is considered to be inaccurate. The problem is a very slow rate of convergence as u → ∞. Even though the approximation is asymptotically correct in the tail, one may have to go out to values of ψ(u) which are unrealistically small before the fit is reasonable. However, we will show in Section 15.4 that this is not always the case. As the mixture of exponentials does not belong to the subexponential class, we do not present a numerical example as in the previously discussed approximations.
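For Pareto claims with tail F̄(x) = (λ/(λ + x))^α, the integral in (15.25) is explicit and the approximation collapses to (1/θ)(λ/(λ + u))^{α−1}. A sketch, using the Pareto fit from Chapter 13; note that at u = 0 the approximation even exceeds one, which illustrates its weakness for small initial capital:

```python
def psi_s(u, alpha, lam, theta):
    """Subexponential approximation (15.25) for Pareto claims.

    With F_bar(x) = (lam/(lam+x))**alpha and mean mu = lam/(alpha-1),
    (15.25) simplifies to (1/theta) * (lam/(lam+u))**(alpha-1)."""
    return (lam / (lam + u)) ** (alpha - 1.0) / theta

# Pareto fit from Chapter 13 and relative safety loading 30%
alpha, lam, theta = 3.4081, 4.4767e8, 0.3
print(psi_s(1e10, alpha, lam, theta))
```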

15.3.12 Computer Approximation via the Pollaczek-Khinchin Formula

One can use the Pollaczek-Khinchin formula (15.8) to derive explicit closed-form solutions for the claim amount distributions presented in Section 15.2, see Panjer and Willmot (1992). For the other distributions studied here, in order to calculate the ruin probability, the Monte Carlo method can be applied to (15.1) and (15.7). The main problem is to simulate random variables from the density f_{L1}(x). Only four of the considered distributions lead to a known density: (i) for exponential claims, f_{L1}(x) is the density of the same exponential distribution; (ii) for a mixture of n exponentials claims, f_{L1}(x) is the density of a mixture of exponential distributions with the weights (a1/β1)/Σ_{i=1}^n(ai/βi), ..., (an/βn)/Σ_{i=1}^n(ai/βi); (iii) for Pareto claims, f_{L1}(x) is the density of the Pareto distribution with the parameters α − 1 and λ; (iv) for Burr claims, f_{L1}(x) is the density of the transformed beta distribution.


Table 15.15: The Pollaczek-Khinchin approximation for mixture of two exponentials claims with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

  u        0         1         5         10        20        50
  ψPK(u)   0.769209  0.587917  0.359705  0.194822  0.057173  0.001445

STFruin15.xpl

For the other distributions studied here we use formula (15.6) and controlled numerical integration to generate the random variables Lk (for the Weibull distribution, f_{L1}(x) does not even have a closed form). We note that the methodology based on the Pollaczek-Khinchin formula works for all considered claim distributions. The computer approximation via the Pollaczek-Khinchin formula will be referred to in short as the Pollaczek-Khinchin approximation. Burnecki, Miśta, and Weron (2004) showed that the approximation can be chosen as the reference method for calculating the ruin probability in infinite time, see also Table 15.15, where the results of the Pollaczek-Khinchin approximation are presented for mixture of two exponentials claims with β1, β2, a and the relative safety loading θ = 30% with respect to the initial capital u. For the Monte Carlo method purposes we generated 100 blocks of 500000 simulations.
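For the mixture of two exponentials the ladder-height density (15.6) is itself a mixture of exponentials with weights proportional to ai/βi, so the Monte Carlo scheme behind the Pollaczek-Khinchin approximation takes only a few lines (a sketch with a fixed seed; function names are ours):

```python
import random

b1, b2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu = a / b1 + (1.0 - a) / b2
w1 = (a / b1) / mu        # ladder-height mixture weight of the first component

def simulate_L(rng):
    """One draw of L = L_1 + ... + L_K via (15.6)-(15.7);
    K is geometric with P(K = k) = (theta/(1+theta)) (1/(1+theta))^k."""
    q = 1.0 / (1.0 + theta)          # probability of another ladder epoch
    total = 0.0
    while rng.random() < q:
        beta = b1 if rng.random() < w1 else b2
        total += rng.expovariate(beta)
    return total

def psi_pk(u, n=200_000, seed=42):
    """Monte Carlo estimate of psi(u) = P(L > u) from (15.8)."""
    rng = random.Random(seed)
    return sum(simulate_L(rng) > u for _ in range(n)) / n
```

With a couple of hundred thousand paths the estimate at u = 1 billion USD agrees with the exact value 0.587919 to about two decimal places.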

15.3.13 Summary of the Approximations

Table 15.16 shows which approximation can be used for a particular choice of a claim size distribution. Moreover, the necessary assumptions on the distribution parameters are presented.

Table 15.16: Survey of approximations with an indication when they can be applied

  Method                Exp.   Gamma   Mix. Exp.   Lognormal   Weibull     Pareto   Burr
  Cramér-Lundberg       +      +       +           –           –           –        –
  Exponential           +      +       +           +           +           α > 3    ατ > 3
  Lundberg              +      +       +           +           +           α > 3    ατ > 3
  Beekman-Bowers        +      +       +           +           +           α > 3    ατ > 3
  Renyi                 +      +       +           +           +           α > 2    ατ > 2
  De Vylder             +      +       +           +           +           α > 3    ατ > 3
  4M Gamma De Vylder    +      +       +           +           +           α > 3    ατ > 3
  Heavy Traffic         +      +       +           +           +           α > 2    ατ > 2
  Light Traffic         +      +       +           +           +           +        +
  Heavy-Light Traffic   +      +       +           +           +           α > 2    ατ > 2
  Subexponential        –      –       –           +           0 < τ < 1   +        +
  Pollaczek-Khinchin    +      +       +           +           +           +        +

15.4 Numerical Comparison of the Infinite Time Approximations

In this section we illustrate all 12 approximations presented in Section 15.3. To this end we consider three claim amount distributions which were fitted to the PCS catastrophe data in Chapter 13, namely the mixture of two exponentials (the running example of Section 15.3) with β1 = 3.5900 · 10^{-10}, β2 = 7.5088 · 10^{-9} and a = 0.0584, the log-normal with µ = 18.3806 and σ = 1.1052, and the Pareto with α = 3.4081 and λ = 4.4767 · 10^8. The logarithm of the ruin probability as a function of the initial capital u ranging from USD 0 to 50 billion for the three distributions is depicted in Figure 15.2. In the case of the log-normal and Pareto distributions the reference Pollaczek-Khinchin approximation is used. We see that the ruin probability values for the mixture of exponentials distribution are much higher than for the log-normal and Pareto distributions. This stems from the fact that the estimated parameters of the mixture result in a mean equal to 2.88 · 10^8, whereas the mean of the fitted log-normal distribution amounts to 1.77 · 10^8 and of the Pareto distribution to 1.86 · 10^8.


Figure 15.2: The logarithm of the exact value of the ruin probability. The mixture of two exponentials (dashed blue line), log-normal (dotted red line), and Pareto (solid black line) claim size distributions. STFruin16.xpl

Figures 15.3–15.5 depict the relative error of the 11 approximations from Sections 15.3.1–15.3.11, with respect to the exact ruin probability values in the mixture of two exponentials case, and with respect to the values obtained via the Pollaczek-Khinchin approximation in the log-normal and Pareto cases. The relative safety loading is set to 30%. We note that for the Monte Carlo method purposes in the Pollaczek-Khinchin approximation we generate 500 blocks of 100000 simulations. First, we consider the mixture of two exponentials case already analysed in Section 15.3. Only the subexponential approximation cannot be used for such a claim amount distribution, see Table 15.16. As we can clearly see in Figure 15.3, the Cramér–Lundberg, De Vylder and 4-moment gamma De Vylder approximations work extremely well. Furthermore, the heavy traffic, light traffic, Renyi,


Figure 15.3: The relative error of the approximations. More effective methods (left panel): the Cramér–Lundberg (solid blue line), exponential (short-dashed brown line), Beekman–Bowers (dotted red line), De Vylder (medium-dashed black line) and 4-moment gamma De Vylder (long-dashed green line) approximations. Less effective methods (right panel): Lundberg (short-dashed red line), Renyi (dotted blue line), heavy traffic (solid magenta line), light traffic (long-dashed green line) and heavy-light traffic (medium-dashed brown line) approximations. The mixture of two exponentials case. STFruin17.xpl

and Lundberg approximations show a total lack of accuracy and the rest of the methods are only acceptable.

In the case of log-normally distributed claims the situation is different, see Figure 15.4. Only the results obtained via the Beekman–Bowers, De Vylder and 4-moment gamma De Vylder approximations are acceptable. The rest of the approximations are well off target. We also note that all 11 approximations except the Cramér–Lundberg one can be employed in the log-normal case.


Figure 15.4: The relative error of the approximations. More effective methods (left panel): the exponential (dotted blue line), Beekman–Bowers (short-dashed brown line), heavy-light traffic (solid red line), De Vylder (medium-dashed black line) and 4-moment gamma De Vylder (long-dashed green line) approximations. Less effective methods (right panel): Lundberg (short-dashed red line), heavy traffic (solid magenta line), light traffic (long-dashed green line), Renyi (medium-dashed brown line) and subexponential (dotted blue line) approximations. The log-normal case. STFruin18.xpl

Finally, we consider the Pareto claim size distribution. Figure 15.5 depicts the relative error for 9 approximations. Only the Cramér–Lundberg and 4-moment gamma De Vylder approximations have to be excluded, as the moment generating function does not exist and the fourth moment is infinite for the Pareto distribution with α = 3.4081. As we see in Figure 15.5, the relative errors for all approximations cannot be neglected. There is no unanimous winner among the approximations, but we may claim that the exponential approximation gives the most accurate results.


Figure 15.5: The relative error of the approximations. More effective methods (left panel): the exponential (dotted blue line), Beekman–Bowers (short-dashed brown line), heavy-light traffic (solid red line) and De Vylder (medium-dashed black line) approximations. Less effective methods (right panel): Lundberg (short-dashed red line), heavy traffic (solid magenta line), light traffic (long-dashed green line), Renyi (medium-dashed brown line) and subexponential (dotted blue line) approximations. The Pareto case. STFruin19.xpl

15.5 Exact Ruin Probabilities in Finite Time

We are now interested in the probability that the insurer's capital as defined by (15.1) remains non-negative for a finite period T rather than permanently. We assume that the number of claims process Nt is a Poisson process with rate λ, and consequently, the aggregate loss process is a compound Poisson process. Premiums are payable at rate c per unit time. We recall that the intensity of the process Nt is irrelevant in the infinite time case provided that it is compensated by the premium, see the discussion at the end of Section 15.1. In contrast to the infinite time case, there is no general formula for the ruin probability like the Pollaczek-Khinchin one given by (15.8). In the literature one can only find a partial integro-differential equation which is satisfied by the probability of non-ruin, see Panjer and Willmot (1992). An explicit result is known merely for exponential claims, and even in this case a numerical integration is needed (Asmussen, 2000).

15.5.1 Exponential Claim Amounts

First, in order to simplify the formulae, let us assume that claims have the exponential distribution with β = 1 and that the premium rate is c = 1. Then

ψ(u, T) = λ exp{−(1 − λ)u} − (1/π) ∫₀^π f₁(x)f₂(x)/f₃(x) dx,   (15.26)

where

f₁(x) = λ exp{ 2√λ T cos x − (1 + λ)T + u(√λ cos x − 1) },
f₂(x) = cos(u√λ sin x) − cos(u√λ sin x + 2x),
f₃(x) = 1 + λ − 2√λ cos x.

Now, notice that the case β ≠ 1 is easily reduced to β = 1 using the formula:

ψ_{λ,β}(u, T) = ψ_{λ/β,1}(βu, βT).   (15.27)

Moreover, the assumption c = 1 is not restrictive, since we have

ψ_{λ,c}(u, T) = ψ_{λ/c,1}(u, cT).   (15.28)

Table 15.17 shows the exact values of the ruin probability for exponential claims with β = 6.3789 · 10⁻⁹ (see Chapter 13) with respect to the initial capital u and the time horizon T. The relative safety loading θ equals 30%. We see that the values converge to those calculated in the infinite time case as T grows, cf. Table 15.2. The speed of convergence decreases as the initial capital u grows.
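Formula (15.26), together with the reductions (15.27) and (15.28), is straightforward to evaluate by numerical quadrature. The Python sketch below is our own illustration, not the book's XploRe quantlet; the function names are ours.

```python
import math
from scipy.integrate import quad

def ruin_exp(u, T, lam):
    """Exact finite time ruin probability psi(u, T) for exponential claims with
    beta = 1 and premium rate c = 1, following (15.26); requires lam < 1."""
    def integrand(x):
        f1 = lam * math.exp(2 * math.sqrt(lam) * T * math.cos(x)
                            - (1 + lam) * T
                            + u * (math.sqrt(lam) * math.cos(x) - 1))
        f2 = (math.cos(u * math.sqrt(lam) * math.sin(x))
              - math.cos(u * math.sqrt(lam) * math.sin(x) + 2 * x))
        f3 = 1 + lam - 2 * math.sqrt(lam) * math.cos(x)
        return f1 * f2 / f3
    integral, _ = quad(integrand, 0.0, math.pi)
    return lam * math.exp(-(1 - lam) * u) - integral / math.pi

def ruin_exp_general(u, T, lam, beta, c):
    """Reduce a general exponential case to beta = c = 1 via (15.27) and (15.28)."""
    return ruin_exp(beta * u, beta * c * T, lam / (beta * c))
```

As T → ∞ the integral vanishes, so ψ(u, T) tends to the classical infinite time value λ exp{−(1 − λ)u}, which provides a convenient sanity check for the implementation.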

15.6 Approximations of the Ruin Probability in Finite Time

In this section, we present five different approximations. We illustrate them on a common claim size distribution example, namely the mixture of two exponentials claims with β₁ = 3.5900 · 10⁻¹⁰, β₂ = 7.5088 · 10⁻⁹, and a = 0.0584 (see Chapter 13). Their numerical comparison is given in Section 15.7.


Table 15.17: The ruin probability for exponential claims with β = 6.3789 · 10⁻⁹ and θ = 0.3 (u in USD billion).

   u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
   0    0.757164   0.766264   0.769098   0.769229   0.769231
   1    0.147954   0.168728   0.176127   0.176497   0.176503
   2    0.025005   0.035478   0.040220   0.040495   0.040499
   3    0.003605   0.007012   0.009138   0.009290   0.009293
   4    0.000443   0.001288   0.002060   0.002131   0.002132
   5    0.000047   0.000218   0.000459   0.000489   0.000489

STFruin20.xpl

15.6.1 Monte Carlo Method

The ruin probability in finite time can always be approximated by means of Monte Carlo simulations. Table 15.18 shows the output for the mixture of two exponentials claims with parameters β₁, β₂, and a with respect to the initial capital u and the time horizon T. The relative safety loading θ is set to 30%. For the purposes of the Monte Carlo method we generated 50 x 10000 simulations. We see that the values approach those calculated in the infinite time case as T increases, cf. Table 15.4. We note that the Monte Carlo method will be used as the reference method when comparing the different finite time approximations in Section 15.7.
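A minimal Monte Carlo sketch (our own, not the book's quantlet) is given below. It assumes the premium rate c = (1 + θ)λµ and checks the surplus only at claim instants, where ruin alone can occur; the book's experiment uses 50 blocks of 10000 replications, while this sketch runs a single block.

```python
import numpy as np

def ruin_prob_mc(u, T, lam, theta, claim_sampler, mean_claim,
                 n_sim=10000, seed=None):
    """Monte Carlo estimate of the finite time ruin probability psi(u, T).

    The surplus is u + c*t - S(t) with premium rate c = (1 + theta)*lam*mean_claim;
    since premiums only increase the capital, ruin can only occur at claim instants."""
    rng = np.random.default_rng(seed)
    c = (1 + theta) * lam * mean_claim
    ruined = 0
    for _ in range(n_sim):
        n = rng.poisson(lam * T)
        if n == 0:
            continue
        times = np.sort(rng.uniform(0.0, T, n))   # claim arrival times
        claims = claim_sampler(rng, n)
        surplus = u + c * times - np.cumsum(claims)
        if surplus.min() < 0:
            ruined += 1
    return ruined / n_sim

def mix_exp_sampler(beta1, beta2, a):
    """Sampler for a mixture of two exponentials (a is the weight of beta1)."""
    def sample(rng, n):
        pick = rng.random(n) < a
        return np.where(pick, rng.exponential(1.0 / beta1, n),
                              rng.exponential(1.0 / beta2, n))
    return sample
```

For exponential claims the estimates can be checked against the exact values of Section 15.5.1; the standard error of a single 10000-replication block is of order 0.005.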

15.6.2 Segerdahl Normal Approximation

The following result, due to Segerdahl (1955), can be viewed as a time-dependent version of the Cramér–Lundberg approximation given by (15.13). Under the assumption that c = 1, cf. relation (15.28), we have

ψ_S(u, T) = C exp(−Ru) Φ( (T − u m_L) / (ω_L √u) ),   (15.29)

where C = θµ / {M′_X(R) − µ(1 + θ)}, m_L = {λM′_X(R) − 1}⁻¹, and ω_L² = λM″_X(R) m_L³.


Table 15.18: Monte Carlo results (50 x 10000 simulations) for the mixture of two exponentials claims with β₁ = 3.5900 · 10⁻¹⁰, β₂ = 7.5088 · 10⁻⁹, a = 0.0584, and θ = 0.3 (u in USD billion).

   u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
   0    0.672550   0.718254   0.753696   0.765412   0.769364
   1    0.428150   0.501066   0.560426   0.580786   0.587826
   5    0.188930   0.256266   0.323848   0.350084   0.359778
  10    0.063938   0.105022   0.159034   0.184438   0.194262
  20    0.006164   0.015388   0.035828   0.049828   0.056466
  50    0.000002   0.000030   0.000230   0.000726   0.001244

STFruin21.xpl

Table 15.19: The Segerdahl approximation for the mixture of two exponentials claims with β₁ = 3.5900 · 10⁻¹⁰, β₂ = 7.5088 · 10⁻⁹, a = 0.0584, and θ = 0.3 (u in USD billion).

   u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
   0    0.663843   0.663843   0.663843   0.663843   0.663843
   1    0.444333   0.554585   0.587255   0.587260   0.587260
   5    0.172753   0.229282   0.338098   0.359593   0.359660
  10    0.070517   0.092009   0.152503   0.192144   0.194858
  20    0.013833   0.017651   0.030919   0.049495   0.057143
  50    0.000141   0.000175   0.000311   0.000634   0.001254

STFruin22.xpl

This method requires the existence of the adjustment coefficient, which implies that only light-tailed distributions can be used. Numerical evidence shows that the Segerdahl approximation gives the best results for large values of the initial capital u, see Asmussen (2000). In Table 15.19, the results of the Segerdahl approximation for the mixture of two exponentials claims with parameters β₁, β₂, and a with respect to the initial capital u and the


Table 15.20: The diffusion approximation for the mixture of two exponentials claims with β₁ = 3.5900 · 10⁻¹⁰, β₂ = 7.5088 · 10⁻⁹, a = 0.0584, and θ = 0.3 (u in USD billion).

   u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
   0    1.000000   1.000000   1.000000   1.000000   1.000000
   1    0.770917   0.801611   0.823343   0.829877   0.831744
   5    0.223423   0.304099   0.370177   0.391556   0.397816
  10    0.028147   0.072061   0.128106   0.150708   0.157924
  20    0.000059   0.001610   0.011629   0.020604   0.024603
  50    0.000000   0.000000   0.000000   0.000017   0.000073

STFruin23.xpl

time horizon T are presented. The relative safety loading θ equals 30%. We see that in the considered case the approximation yields quite accurate results for moderate u, cf. Table 15.18.
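For a claim distribution with a known moment generating function, the Segerdahl approximation (15.29) can be sketched as below. This is our own hypothetical implementation: the arguments mgf, dmgf, ddmgf (M_X and its first two derivatives) and the bracket r_max for the adjustment coefficient are ours, and c = 1 is assumed as in (15.29).

```python
import math
from scipy.optimize import brentq
from scipy.stats import norm

def segerdahl(u, T, lam, mgf, dmgf, ddmgf, mu, theta, r_max):
    """Segerdahl approximation (15.29) with premium rate c = 1.

    The adjustment coefficient R solves lam*{M_X(R) - 1} = R on (0, r_max)."""
    R = brentq(lambda s: lam * (mgf(s) - 1.0) - s, 1e-12, r_max)
    C = theta * mu / (dmgf(R) - mu * (1 + theta))
    mL = 1.0 / (lam * dmgf(R) - 1.0)
    omegaL2 = lam * ddmgf(R) * mL ** 3
    return C * math.exp(-R * u) * norm.cdf((T - u * mL) / math.sqrt(omegaL2 * u))
```

As T → ∞ the normal distribution function tends to one, and the approximation reduces to the Cramér–Lundberg value C exp(−Ru), which can be used to verify the sketch for exponential claims.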

15.6.3 Diffusion Approximation

The idea of the diffusion approximation is first to approximate the claim surplus process S_t by a Brownian motion with drift (arithmetic Brownian motion) by matching the first two moments, and next, to note that such an approximation implies that the first passage probabilities are close. The first passage probability serves as the ruin probability. The diffusion approximation is given by:

ψ_D(u, T) = IG( Tµ_c²/σ_c² ; −1 ; u|µ_c|/σ_c² ),   (15.30)

where µ_c = −λθµ, σ_c² = λµ^(2), and IG(·; ζ; u) denotes the distribution function of the passage time of the Brownian motion with unit variance and drift ζ from the level 0 to the level u > 0 (often referred to as the inverse Gaussian distribution function), namely:

IG(x; ζ; u) = 1 − Φ(u/√x − ζ√x) + exp(2ζu) Φ(−u/√x − ζ√x),

see Asmussen (2000).


Table 15.21: The corrected diffusion approximation for the mixture of two exponentials claims with β₁ = 3.5900 · 10⁻¹⁰, β₂ = 7.5088 · 10⁻⁹, a = 0.0584, and θ = 0.3 (u in USD billion).

   u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
   0    0.521465   0.587784   0.638306   0.655251   0.660958
   1    0.426840   0.499238   0.557463   0.577547   0.584386
   5    0.187718   0.254253   0.321230   0.347505   0.356922
  10    0.065264   0.104967   0.157827   0.182727   0.192446
  20    0.007525   0.016173   0.035499   0.049056   0.055610
  50    0.000010   0.000039   0.000251   0.000724   0.001243

STFruin24.xpl

We also note that in order to apply this approximation we need the existence of the second moment of the claim size distribution. Table 15.20 shows the results of the diffusion approximation for the mixture of two exponentials claims with parameters β₁, β₂, and a with respect to the initial capital u and the time horizon T. The relative safety loading θ equals 30%. The results lead to the conclusion that the approximation does not produce accurate results for such a choice of the claim size distribution. Only for u = 5 billion USD are the results acceptable, cf. the reference values in Table 15.18.
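The inverse Gaussian distribution function and the diffusion approximation (15.30) are simple to implement. The following sketch is ours (function names are assumptions, not the book's quantlet); µ and µ^(2) are passed as the first two raw moments of the claim distribution.

```python
import math
from scipy.stats import norm

def ig_cdf(x, zeta, u):
    """Inverse Gaussian distribution function IG(x; zeta; u): the probability
    that a unit-variance Brownian motion with drift zeta passes level u by time x."""
    sx = math.sqrt(x)
    return (1.0 - norm.cdf(u / sx - zeta * sx)
            + math.exp(2.0 * zeta * u) * norm.cdf(-u / sx - zeta * sx))

def ruin_diffusion(u, T, lam, theta, mu, mu2):
    """Diffusion approximation (15.30); mu and mu2 are the first two raw
    moments of the claim size distribution."""
    mu_c = -lam * theta * mu
    sigma_c2 = lam * mu2
    return ig_cdf(T * mu_c ** 2 / sigma_c2, -1.0, u * abs(mu_c) / sigma_c2)
```

Note that ψ_D(0, T) = 1 for every T, in line with the first row of Table 15.20, and that for x → ∞ the value IG(x; −1; u) tends to exp(−2u), the probability that the drifted Brownian motion ever crosses the level u.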

15.6.4 Corrected Diffusion Approximation

The diffusion approximation presented above ignores the presence of jumps in the risk process (the Brownian motion with drift is skip-free) and the overshoot S_{τ(u)} − u at the moment of ruin. The corrected diffusion approximation takes these and other deficits into consideration (Asmussen, 2000). Under the assumption that c = 1, cf. relation (15.28), we have

ψ_CD(u, T) = IG( Tδ₁/u² + δ₂/u ; −Ru/2 ; 1 + δ₂/u ),   (15.31)

where R is the adjustment coefficient, δ₁ = λM″_X(γ₀), δ₂ = M‴_X(γ₀)/{3M″_X(γ₀)}, and γ₀ satisfies the equation κ′(γ₀) = 0, where κ(s) = λ{M_X(s) − 1} − s.


Table 15.22: The finite time De Vylder approximation for the mixture of two exponentials claims with β₁ = 3.5900 · 10⁻¹⁰, β₂ = 7.5088 · 10⁻⁹, a = 0.0584, and θ = 0.3 (u in USD billion).

   u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
   0    0.528431   0.594915   0.645282   0.662159   0.667863
   1    0.433119   0.505300   0.563302   0.583353   0.590214
   5    0.189379   0.256745   0.323909   0.350278   0.359799
  10    0.063412   0.104811   0.158525   0.183669   0.193528
  20    0.006114   0.015180   0.035142   0.048960   0.055637
  50    0.000003   0.000021   0.000215   0.000690   0.001218

STFruin25.xpl

As in the Segerdahl approximation, the method requires the existence of the moment generating function, so we can use it only for light-tailed distributions. In Table 15.21 the results of the corrected diffusion approximation for the mixture of two exponentials claims with parameters β₁, β₂, and a with respect to the initial capital u and the time horizon T are given. The relative safety loading θ is set to 30%. It turns out that the corrected diffusion method gives surprisingly good results and is vastly superior to the ordinary diffusion approximation, cf. the reference values in Table 15.18.
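A sketch of (15.31) follows, again under the assumption c = 1. It is our own illustration: mgf_derivs is a hypothetical helper returning M_X and its first three derivatives at a point, and r_max is an assumed bracket for the two root searches.

```python
import math
from scipy.optimize import brentq
from scipy.stats import norm

def ig_cdf(x, zeta, u):
    """Distribution function of the passage time of a unit-variance Brownian
    motion with drift zeta through the level u (inverse Gaussian law)."""
    sx = math.sqrt(x)
    return (1.0 - norm.cdf(u / sx - zeta * sx)
            + math.exp(2.0 * zeta * u) * norm.cdf(-u / sx - zeta * sx))

def ruin_corrected_diffusion(u, T, lam, theta, mgf_derivs, r_max):
    """Corrected diffusion approximation (15.31) with premium rate c = 1.

    mgf_derivs(s) returns (M_X(s), M'_X(s), M''_X(s), M'''_X(s))."""
    # adjustment coefficient: lam*{M_X(R) - 1} - R = 0 on (0, r_max)
    R = brentq(lambda s: lam * (mgf_derivs(s)[0] - 1.0) - s, 1e-12, r_max)
    # gamma0 solves kappa'(gamma0) = lam*M'_X(gamma0) - 1 = 0
    g0 = brentq(lambda s: lam * mgf_derivs(s)[1] - 1.0, 1e-12, r_max)
    _, _, m2, m3 = mgf_derivs(g0)
    delta1 = lam * m2
    delta2 = m3 / (3.0 * m2)
    return ig_cdf(T * delta1 / u ** 2 + delta2 / u, -R * u / 2.0, 1.0 + delta2 / u)
```

For exponential claims the T → ∞ limit of the sketch is exp{−Ru − Rδ₂}, which is very close to the exact infinite time value, in line with the accuracy reported above.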

15.6.5 Finite Time De Vylder Approximation

Let us recall the idea of the De Vylder approximation in infinite time: we replace the claim surplus process with one with θ = θ̄, λ = λ̄, and exponential claims with parameter β̄, fitting the first three moments, see Section 15.3.6. Here, the idea is the same. First, we compute

β̄ = 3µ^(2)/µ^(3),   λ̄ = 9λ(µ^(2))³/{2(µ^(3))²},   and   θ̄ = {2µµ^(3)/(3(µ^(2))²)} θ.

Next, we employ relations (15.27) and (15.28) and ﬁnally use the exact, exponential case formula presented in Section 15.5.1.
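The three fitted parameters can be computed directly from the raw moments. The sketch below is ours; µ, µ^(2), and µ^(3) are passed explicitly, and the resulting (β̄, λ̄, θ̄) are then plugged into the exact exponential case formula of Section 15.5.1 via (15.27) and (15.28).

```python
def de_vylder_params(lam, theta, mu, mu2, mu3):
    """Fit the exponential model of Section 15.5.1 by matching the first three
    raw moments (mu, mu2, mu3) of the claim size distribution."""
    beta_bar = 3.0 * mu2 / mu3
    lam_bar = 9.0 * lam * mu2 ** 3 / (2.0 * mu3 ** 2)
    theta_bar = 2.0 * mu * mu3 / (3.0 * mu2 ** 2) * theta
    return beta_bar, lam_bar, theta_bar
```

A quick consistency check: for exponential claims with parameter β the raw moments are 1/β, 2/β², and 6/β³, and the fit returns the original parameters unchanged, so the method is exact in the exponential case.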


Obviously, the method gives the exact result in the exponential case. For other claim distributions, the first three moments have to exist in order to apply the approximation. Table 15.22 shows the results of the finite time De Vylder approximation for the mixture of two exponentials claims with parameters β₁, β₂, and a with respect to the initial capital u and the time horizon T. The relative safety loading θ equals 30%. We see that the approximation gives even better results than the corrected diffusion one, cf. the reference values presented in Table 15.18.

15.6.6 Summary of the Approximations

Table 15.23 shows which approximations can be used for each claim size distribution, together with the necessary assumptions on the distribution parameters.

Table 15.23: Survey of the approximations with an indication of when they can be applied.

  Method            Exp.   Gamma   Weibull   Mix. Exp.   Lognormal   Pareto   Burr
  Monte Carlo        +       +       +          +            +         +        +
  Segerdahl          +       +       –          +            –         –        –
  Diffusion          +       +       +          +            +       α > 2   ατ > 2
  Corr. diff.        +       +       –          +            –         –        –
  Fin. De Vylder     +       +       +          +            +       α > 3   ατ > 3

15.7 Numerical Comparison of the Finite Time Approximations

Now, we illustrate all five approximations presented in Section 15.6. As in the infinite time case we consider three claim amount distributions which were best fitted to the catastrophe data in Chapter 13, namely the mixture of two exponentials (a running example in Sections 15.3 and 15.6), log-normal, and Pareto distributions. The parameters of the distributions are: β₁ = 3.5900 · 10⁻¹⁰, β₂ = 7.5088 · 10⁻⁹, a = 0.0584 (mixture), µ = 18.3806, σ = 1.1052

Figure 15.6: The exact ruin probability obtained via Monte Carlo simulations (left panel) and the relative error of the approximations (right panel): the Segerdahl (short-dashed blue line), diffusion (dotted red line), corrected diffusion (solid black line), and finite time De Vylder (long-dashed green line) approximations. The mixture of two exponentials case with T fixed and u varying. STFruin26.xpl

(log-normal), and α = 3.4081, λ = 4.4767 · 10⁸ (Pareto). The ruin probability will be depicted as a function of u, ranging from USD 0 to 30 billion, with T = 10 fixed, or with u = 20 billion USD fixed and T varying from 0 to 20 years. The relative safety loading is set to 30%. All figures have the same form of output: the left panel presents the exact ruin probability values obtained via Monte Carlo simulations, and the right panel the relative error with respect to these exact values. We also note that for the purposes of the Monte Carlo method we generated 50 x 10000 simulations.

First, we consider the mixture of two exponentials case. As we can see in Figures 15.6 and 15.7, the diffusion approximation gives highly inaccurate results for almost all values of u and T. The Segerdahl and corrected diffusion approximations yield similar errors, which visibly decrease as the time horizon grows. The finite time De Vylder method is the clear winner, always keeping the error below 10%.

Figure 15.7: The exact ruin probability obtained via Monte Carlo simulations (left panel) and the relative error of the approximations (right panel): the Segerdahl (short-dashed blue line), diffusion (dotted red line), corrected diffusion (solid black line), and finite time De Vylder (long-dashed green line) approximations. The mixture of two exponentials case with u fixed and T varying. STFruin27.xpl

In the case of log-normally distributed claims, we can only apply two approximations: the diffusion and the finite time De Vylder, cf. Table 15.23. Figures 15.8 and 15.9 depict the exact ruin probability values obtained via Monte Carlo simulations and the relative error with respect to the exact values. Again, the finite time De Vylder approximation works much better than the diffusion one.

Finally, we consider the Pareto claim size distribution. Figures 15.10 and 15.11 depict the exact ruin probability values and the relative error with respect to the exact values for the diffusion and finite time De Vylder approximations. We see that now we cannot say which approximation is better. The error strongly depends on the values of u and T. We may only suspect that a combination of the two methods could give interesting results.

Figure 15.8: The exact ruin probability obtained via Monte Carlo simulations (left panel) and the relative error of the approximations (right panel): the diffusion (dotted red line) and finite time De Vylder (long-dashed green line) approximations. The log-normal case with T fixed and u varying. STFruin28.xpl

Figure 15.9: The exact ruin probability obtained via Monte Carlo simulations (left panel) and the relative error of the approximations (right panel): the diffusion (dotted red line) and finite time De Vylder (long-dashed green line) approximations. The log-normal case with u fixed and T varying. STFruin29.xpl

Figure 15.10: The exact ruin probability obtained via Monte Carlo simulations (left panel) and the relative error of the approximations (right panel): the diffusion (dotted red line) and finite time De Vylder (long-dashed green line) approximations. The Pareto case with T fixed and u varying. STFruin30.xpl

Figure 15.11: The exact ruin probability obtained via Monte Carlo simulations (left panel) and the relative error of the approximations (right panel): the diffusion (dotted red line) and finite time De Vylder (long-dashed green line) approximations. The Pareto case with u fixed and T varying. STFruin31.xpl

Bibliography

Asmussen, S. (2000). Ruin Probabilities, World Scientific, Singapore.

Burnecki, K., Miśta, P., and Weron, A. (2003). A New De Vylder Type Approximation of the Ruin Probability in Infinite Time, Research Report HSC/03/05.

Burnecki, K., Miśta, P., and Weron, A. (2005). What is the Best Approximation of Ruin Probability in Infinite Time?, Appl. Math. (Warsaw) 32.

De Vylder, F.E. (1978). A Practical Solution to the Problem of Ultimate Ruin Probability, Scand. Actuar. J.: 114–119.

De Vylder, F.E. (1996). Advanced Risk Theory. A Self-Contained Introduction, Editions de l'Université de Bruxelles and Swiss Association of Actuaries.

Embrechts, P., Kaufmann, R., and Samorodnitsky, G. (2004). Ruin Theory Revisited: Stochastic Models for Operational Risk, in C. Bernadell et al. (eds.), Risk Management for Central Bank Foreign Reserves, European Central Bank, Frankfurt a.M., 243–261.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Furrer, H., Michna, Z., and Weron, A. (1997). Stable Lévy Motion Approximation in Collective Risk Theory, Insurance Math. Econom. 20: 97–114.

Grandell, J. and Segerdahl, C.-O. (1971). A Comparison of Some Approximations of Ruin Probabilities, Skand. Aktuarietidskr.: 144–158.

Grandell, J. (1991). Aspects of Risk Theory, Springer, New York.

Grandell, J. (2000). Simple Approximations of Ruin Probability, Insurance Math. Econom. 26: 157–173.

Panjer, H.H. and Willmot, G.E. (1992). Insurance Risk Models, Society of Actuaries, Schaumburg.

Segerdahl, C.-O. (1955). When Does Ruin Occur in the Collective Theory of Risk?, Skand. Aktuarietidskr. 38: 22–36.

Wikstad, N. (1971). Exemplification of Ruin Probabilities, Astin Bulletin 6: 147–152.

16 Stable Diffusion Approximation of the Risk Process

Hansjörg Furrer, Zbigniew Michna, and Aleksander Weron

16.1 Introduction

Collective risk theory is concerned with random fluctuations of the total net assets – the capital – of an insurance company. Consider a company which only writes ordinary insurance policies, such as accident, disability, health, and whole life. The policyholders pay premiums regularly and at certain random times make claims to the company. A policyholder's premium, the gross risk premium, is a positive amount composed of two components. The net risk premium is the component calculated to cover the payments of claims on average, while the security risk premium, or safety loading, is the component which protects the company from large deviations of claims from the average and also allows an accumulation of capital. The risk process thus has the Cramér–Lundberg form:

R(t) = u + ct − Σ_{k=1}^{N(t)} Y_k,

where u > 0 is the initial capital (in some cases interpreted as the initial risk reserve) of the company, and the policyholders pay a gross risk premium of c > 0 per unit time, see also Chapter 14. The successive claims {Y_k} are assumed to form a sequence of i.i.d. random variables with mean EY_k = µ, and claims occur at jumps of a point process N(t), t ≥ 0. The ruin time T is defined as the first time the company has a negative capital, see Chapter 15. One of the key problems of collective risk theory concerns calculating the ultimate ruin probability Ψ = P(T < ∞ | R(0) = u), i.e. the probability that the risk process ever becomes negative. On the other hand, an insurance company will typically be interested in the probability that ruin occurs before time t, i.e. Ψ(t) = P(T < t | R(0) = u). However, many of the results available in the literature are in the form of complicated analytic expressions (for a comprehensive treatment of the theory see e.g. Asmussen, 2000; Embrechts, Klüppelberg, and Mikosch, 1997; Rolski et al., 1999). Hence, some authors have proposed to approximate the risk process by a Brownian diffusion, see Iglehart (1969) and Schmidli (1994). The idea is to let the number of claims grow in a unit time interval and to make the claim sizes smaller in such a way that the risk process converges weakly to the diffusion. In this chapter we present weak convergence theory applied to approximate the risk process by Brownian motion and α-stable Lévy motion. We investigate two different approximations. The first one assumes that the distribution of claim sizes belongs to the domain of attraction of the normal law, i.e. claims are small. In the second model we consider claim sizes belonging to the domain of attraction of the α-stable law (1 < α < 2), i.e. large claims. The latter approximation is particularly relevant whenever the claim experience allows for heavy-tailed distributions. As the empirical results presented in Chapter 13 show, at least for the catastrophic losses the assumption of heavy-tailed severities is statistically justified. While the classical theory of Brownian diffusion approximation requires short-tailed claims, this assumption can be dropped in our approach, hence allowing for extremal events. Furthermore, employing approximations of risk processes by Brownian motion and α-stable Lévy motion, we obtain formulas for ruin probabilities in finite and infinite time horizons.

16.2 Brownian Motion and the Risk Model for Small Claims

This section will be devoted to the Brownian motion approximation in risk theory and will be based on the work of Iglehart (1969). We assume that the distribution of the claim sizes belongs to the domain of attraction of the normal law. Thus, such claims attain big values with small probabilities. This assumption will cover many practical situations in which the claim size distribution possesses a ﬁnite second moment and claims constitute an i.i.d. sequence. The claims counting process does not have to be independent of the sequence of claim sizes as it is assumed in many risk models and, in general, can be a renewal process constructed from random variables having a ﬁnite ﬁrst moment.

16.2.1 Weak Convergence of Risk Processes to Brownian Motion

Let us consider a sequence of risk processes R_n(t) defined in the following way:

R_n(t) = u_n + c_n t − Σ_{k=1}^{N(nt)} Y_k^{(n)},   (16.1)

where u_n is the initial capital, c_n is the premium paid by the policyholders, and the sequence {Y_k^{(n)} : k ∈ N} describes the consecutive claim sizes. Assume also that EY_k^{(n)} = µ_n and Var Y_k^{(n)} = σ_n². The point process N = {N(t) : t ≥ 0} counts the claims appearing up to time t, that is:

N(t) = max{ k : Σ_{i=1}^k T_i ≤ t },   (16.2)

where {T_k : k ∈ N} is an i.i.d. sequence of non-negative random variables describing the times between arriving claims, with ET_k = 1/λ > 0. Recall that if the T_k are exponentially distributed, then N(t) is a Poisson process with intensity λ. To approximate the risk process by Brownian motion, we assume n^{−1/2} u_n → u, n^{−1/2} c_n → c, n^{1/2} µ_n → µ, σ_n² → σ², and E|Y_k^{(n)}|^{2+ε} ≤ M for some ε > 0, where M is independent of n. Then:

n^{−1/2} R_n(t) →_L u + (c − µλ)t + σλ^{1/2} B(t)   (16.3)

weakly in the topology U (uniform convergence on compact sets). Let us denote by R_B(t) the limit process of the above approximation, i.e.:

R_B(t) = u + (c − µλ)t + σλ^{1/2} B(t).   (16.4)

Property (16.3) lets us approximate the risk process by R_B(t), for which it is possible to derive exact formulas for ruin probabilities in finite and infinite time horizons.

16.2.2 Ruin Probability for the Limit Process

Weak convergence of stochastic processes does not, in general, imply convergence of the corresponding ruin probabilities. Thus, to take advantage of the Brownian motion approximation it is necessary to show that the finite and infinite time ruin probabilities of the risk processes converge to the ruin probabilities of the Brownian motion. Let us define the ruin time:

T(R) = inf{t > 0 : R(t) < 0},   (16.5)

if the set is non-empty, and T = ∞ otherwise. Then T(R_n) → T(R_B) almost surely if R_n → R_B almost surely as n → ∞, and P{T(R_n) < ∞} → P{T(R_B) < ∞}. Thus we need to find formulas for the ruin probabilities of the process R_B. Let R_B be the Brownian motion with linear drift defined in (16.4). Then

P{T(R_B) < ∞} = exp{ −2u(c − λµ)/(σ²λ) }   (16.6)

and

P{T(R_B) ≤ t} = 1 − Φ( {u + (c − λµ)t} / {σ(λt)^{1/2}} )
              + exp{ −2u(c − λµ)/(σ²λ) } [ 1 − Φ( {u − (c − λµ)t} / {σ(λt)^{1/2}} ) ].   (16.7)

It is also possible to determine the density of the ruin time. Let T(R_B) be the ruin time of the process (16.4). Then the density f_T of the random variable T(R_B) has the following form:

f_T(t) = β e^{αβ} (2π)^{−1/2} t^{−3/2} exp{ −(β² t^{−1} + α² t)/2 },   t > 0,

where α = (c − λµ)/(σλ^{1/2}) and β = u/(σλ^{1/2}). The Brownian model is an approximation of the risk process in the case when the distribution of the claim sizes belongs to the domain of attraction of the normal law, and the assumptions imposed on the risk process indicate that, from the point of view of an insurance company, the number of claims is large and the sizes of claims are small.

16.2.3 Examples

Let us consider a risk model where the distribution of claim sizes belongs to the domain of attraction of the normal law and the process counting the number of


Table 16.1: Ruin probabilities for the Brownian motion approximation. Parameters µ = 20, σ = 10, and t = 10 are fixed.

   u    c    λ    Ψ(t)         Ψ
  25   50    2    8.0842e-02   8.2085e-02
  25   60    2    6.7379e-03   6.7379e-03
  30   60    2    2.4787e-03   2.4787e-03
  35   60    2    9.1185e-04   9.1188e-04
  40   60    2    3.3544e-04   3.3546e-04
  40   70    3    6.5282e-02   6.9483e-02

STFdiff01.xpl

claims is a renewal counting process constructed from i.i.d. random variables with a finite first moment. Let R(t) be the following risk process:

R(t) = u + ct − Σ_{k=1}^{N(t)} Y_k,   (16.8)

where u is the initial capital, c is the premium income per unit time, and {Y_k : k ∈ N} are i.i.d. random variables belonging to the domain of attraction of the normal law. Moreover, EY_k = µ, Var Y_k = σ², and the intensity of arriving claims is λ (the reciprocal of the expected claim inter-arrival time). Thus, we obtain:

P{T(R) ≤ t} ≈ P{T(R_B) ≤ t}   (16.9)

and

P{T(R) < ∞} ≈ P{T(R_B) < ∞},   (16.10)

where R_B(t) = u + (c − µλ)t + σλ^{1/2} B(t) and B(t) is the standard Brownian motion. Using the formulas for the ruin probabilities in infinite and finite time horizons given in (16.6) and (16.7), we compute approximate values of ruin probabilities for different levels of the initial capital, premium, intensity of claims, and expectation and variance of the claims, see Table 16.1. A sample path of the process R_B(t) is depicted in Figure 16.1.
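Formulas (16.6) and (16.7) are elementary to implement. The sketch below is our own (function names are assumptions); it reproduces the entries of Table 16.1 directly from the stated parameters.

```python
import math
from scipy.stats import norm

def ruin_bm_inf(u, c, lam, mu, sigma):
    """Infinite time ruin probability (16.6) for the Brownian limit process R_B."""
    return math.exp(-2.0 * u * (c - lam * mu) / (sigma ** 2 * lam))

def ruin_bm_fin(u, c, lam, mu, sigma, t):
    """Finite time ruin probability (16.7) for the Brownian limit process R_B."""
    drift = c - lam * mu
    scale = sigma * math.sqrt(lam * t)
    return (1.0 - norm.cdf((u + drift * t) / scale)
            + ruin_bm_inf(u, c, lam, mu, sigma)
            * (1.0 - norm.cdf((u - drift * t) / scale)))
```

For instance, with u = 25, c = 50, λ = 2, µ = 20, σ = 10, and t = 10, the two functions return the values in the first row of Table 16.1.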

Figure 16.1: A sample path of the process R_B for u = 40, c = 100, µ = 20, σ = 10, and λ = 3. STFdiff02.xpl
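A path of R_B can be simulated from exact Gaussian increments. The sketch below is ours (not the book's STFdiff02.xpl quantlet); the default grid of 1000 steps on [0, 1] is an assumption for illustration.

```python
import numpy as np

def simulate_rb_path(u, c, lam, mu, sigma, t_max=1.0, n_steps=1000, seed=None):
    """Simulate a discretized path of R_B(t) = u + (c - mu*lam)*t + sigma*sqrt(lam)*B(t)
    on [0, t_max] using exact Gaussian increments, cf. Figure 16.1."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    incr = ((c - mu * lam) * dt
            + sigma * np.sqrt(lam) * rng.normal(0.0, np.sqrt(dt), n_steps))
    return np.concatenate(([u], u + np.cumsum(incr)))
```

Since Brownian increments over disjoint intervals are independent Gaussians, no time-discretization error is introduced at the grid points; only the behavior between grid points is interpolated.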

16.3 Stable Lévy Motion and the Risk Model for Large Claims

In this section we present approximations of the risk process by α-stable Lévy motion. We assume that claims are large, i.e. that the distribution of their sizes is heavy-tailed. More precisely, we let the claim size distribution belong to the domain of attraction of the α-stable law with 1 < α < 2, see Weron (2001) and Chapter 1. This is an extension of the Brownian motion approximation approach. Note, however, that the methods and theory presented here are quite different from those used in the previous section (Weron, 1984). We assume that claim sizes constitute an i.i.d. sequence; the claim counting process does not have to be independent of the sequence of the claim sizes and, in general, can be a renewal counting process constructed from random variables having a finite second moment. This model can be applied when claims are caused by earthquakes, floods, tornadoes, and other natural disasters. In fact, the catastrophic losses dataset studied in Chapter 13 reveals a very heavy-tailed nature of the severity distribution. The best fit was obtained for a Burr law with α = 0.4801 and τ = 2.1524, which indicates a power-law decay of order ατ = 1.0334 of the claim size distribution. Naturally, such a distribution belongs to the domain of attraction of the α-stable law with 1 < α < 2.

16.3.1 Weak Convergence of Risk Processes to α-stable Lévy Motion

We construct a sequence of risk processes converging weakly to the α-stable Lévy motion. Let R_n(t) be a sequence of risk processes defined as follows:

R_n(t) = u_n + c_n t − Σ_{k=1}^{N^{(n)}(t)} Y_k^{(n)},   (16.11)

where u_n is the initial capital, c_n is the premium rate, {Y_k^{(n)} : k ∈ N} is a sequence describing the sizes of the consecutive claims, and N^{(n)}(t), for every n ∈ N, is a point process counting the number of claims. Moreover, we assume that the random variables representing the claim sizes are of the following form:

Y_k^{(n)} = Y_k / ϕ(n),   (16.12)

where {Y_k : k ∈ N} is a sequence of i.i.d. random variables with distribution F and expectation EY_k = µ. The normalizing function is ϕ(n) = n^{1/α} L(n), where L is a slowly varying function at infinity. As before, it is not necessary to assume that the random variables Y_k are non-negative; however, this time we assume that they belong to the domain of attraction of an α-stable law, that is:

(1/ϕ(n)) Σ_{k=1}^{n} (Y_k − µ) →_L Z_{α,β}(1),   (16.13)

where Z_{α,β}(t) is the α-stable Lévy motion with scale parameter σ′, skewness parameter β, and 1 < α < 2. For details see Janicki and Weron (1994) and Samorodnitsky and Taqqu (1994).


Let R_α(t) be the α-stable Lévy motion with a linear drift:

R_α(t) = u + ct − λ^{1/α} Z_{α,β}(t),   (16.14)

where u, c, and λ are positive constants. Let {Y_k} be the sequence of random variables defined above and {N^{(n)}} a sequence of point processes satisfying

{N^{(n)}(t) − λnt} / ϕ(n) →_L 0,   (16.15)

where →_L denotes weak convergence in the Skorokhod topology and λ is a positive constant. Moreover, we assume that

lim_{n→∞} { c_n − λn µ/ϕ(n) } = c   (16.16)

and

lim_{n→∞} u_n = u.   (16.17)

Then

R_n(t) = u_n + c_n t − (1/ϕ(n)) Σ_{k=1}^{N^{(n)}(t)} Y_k →_L R_α(t) = u + ct − λ^{1/α} Z_{α,β}(t)   (16.18)

as n → ∞; for details see Furrer, Michna, and Weron (1997). Assumption (16.15) is satisfied for a wide class of point processes, for example, if the times between consecutive claims constitute an i.i.d. sequence with a distribution possessing a finite second moment. We should also notice that the skewness parameter β equals 1 for the process R_α(t) if the random variables {Y_k} are non-negative.

16.3.2 Ruin Probability in the Limit Risk Model for Large Claims

As in the Brownian motion approximation, it can be shown that the finite and infinite time ruin probabilities converge to the ruin probabilities of the limit process. Thus it remains to derive ruin probabilities for the process R_α(t) defined in (16.18). We present the asymptotic behavior of the ruin probabilities in finite and infinite time horizons and an exact formula for the infinite time ruin probability. An upper bound for the finite time ruin probability will also be given.

16.3

Stable L´evy Motion and the Risk Model for Large Claims

389

First, we derive the asymptotic ruin probability for the finite time horizon. Let T be the ruin time defined in (16.5) and Z_{α,β}(t) the α-stable Lévy motion with 0 < α < 2, −1 < β ≤ 1, and scale parameter σ′. Then:

lim_{u→∞} P{T(u + cs − λ^{1/α} Z_{α,β}(s)) ≤ t} / P{λ^{1/α} Z_{α,β}(t) > u + ct} = 1,   (16.19)

see Furrer, Michna, and Weron (1997) and Willekens (1987). Using the asymptotic behavior of the probability P{λ^{1/α} Z_{α,β}(t) > u + ct} as u → ∞ for 1 < α < 2, we get (Samorodnitsky and Taqqu, 1994, Prop. 1.2.15) that

P{T(u + cs − λ^{1/α} Z_{α,β}(s)) ≤ t} ≈ C_α {(1 + β)/2} λ(σ′)^α t (u + ct)^{−α},   (16.20)

where

C_α = (1 − α) / {Γ(2 − α) cos(πα/2)}.   (16.21)

The asymptotic ruin probability in the finite time horizon is a lower bound for the finite time ruin probability. Let Z_{α,β}(t) be the α-stable Lévy motion with α ≠ 1 and |β| ≤ 1, or α = 1 and β = 0. Then for positive u, c, and λ:

P{T(u + cs − λ^{1/α} Z_{α,β}(s)) ≤ t} ≤ P{λ^{1/α} Z_{α,β}(t) > u + ct} / P{λ^{1/α} Z_{α,β}(t) > ct}.   (16.22)

Now, we consider the infinite time ruin probability for the α-stable Lévy motion. It turns out that for β = 1 it is possible to give an exact formula for the ruin probability in the infinite time horizon. If Z_{α,β}(t) is the α-stable Lévy motion with 1 < α < 2, β = 1, and scale parameter σ′, then for positive u, c, and λ, Furrer (1998) showed that

    P{T(u + cs − λ^{1/α} Z_{α,β}(s)) < ∞} = Σ_{n=0}^{∞} (−a)^n u^{(α−1)n} / Γ{1 + (α − 1)n},   (16.23)

where a = cλ^{−1}(σ′)^{−α} cos{π(α − 2)/2}. In general, for an arbitrary β we can obtain the asymptotic behavior of the infinite time ruin probability when the initial capital tends to infinity. Now, let Z_{α,β}(t) be the α-stable Lévy motion with 1 < α < 2, −1 < β ≤ 1, and scale parameter σ′. Then for positive u, c, and λ we have (Port, 1989, Theorem 9):

    P{T(u + cs − λ^{1/α} Z_{α,β}(s)) < ∞} = {A(α, β) λ(σ′)^α / (α(α − 1)c)} u^{−α+1} + o(u^{−α+1})   (16.24)
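The series in (16.23) is straightforward to evaluate numerically. For α = 3/2 it is the Mittag-Leffler function E_{1/2}(−a√u), which has the closed form exp(a²u) erfc(a√u) and so provides a convenient cross-check. A minimal sketch (the parameter values below are illustrative, not taken from the text):

```python
import math

def psi_inf(u, a, alpha, tol=1e-16, max_terms=500):
    """Infinite time ruin probability (16.23):
    psi(u) = sum_{n>=0} (-a)^n u^((alpha-1)n) / Gamma(1 + (alpha-1)n)."""
    x = -a * u ** (alpha - 1.0)
    total, n = 0.0, 0
    while n < max_terms:
        try:
            term = x ** n / math.gamma(1.0 + (alpha - 1.0) * n)
        except OverflowError:   # Gamma left float range; remaining tail is negligible
            break
        total += term
        if abs(term) < tol:
            break
        n += 1
    return total

# Illustrative parameters: premium c, intensity lam, scale sigp, alpha = 3/2.
c, lam, sigp, alpha = 10.0, 2.0, 7.07, 1.5
a = c / lam * sigp ** (-alpha) * math.cos(math.pi * (alpha - 2.0) / 2.0)

# For alpha = 3/2 the series equals exp(a^2 u) * erfc(a * sqrt(u)).
for u in (1.0, 25.0, 100.0):
    closed = math.exp(a * a * u) * math.erfc(a * math.sqrt(u))
    print(u, psi_inf(u, a, alpha), closed)
```

The ruin probability starts at 1 for zero initial capital and decays only polynomially fast in u, in line with (16.24).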


when u → ∞, where

    A(α, β) = {Γ(1 + α)/π} √{1 + β² tan²(πα/2)} sin{ πα/2 + arctan(β tan(πα/2)) }.

For completeness it remains to consider the case β = −1, which is quite different because the right tail of the distribution of the α-stable law with β = −1 does not behave like a power function but like an exponential function (i.e. it is not a heavy tail). Let Z_{α,β}(t) be the α-stable Lévy motion with 1 < α < 2, β = −1, and scale parameter σ′. Then for positive u, c, and λ:

    P{T(u + cs − λ^{1/α} Z_{α,β}(s)) < ∞} = exp{−a^{1/(α−1)} u},                  (16.25)

where a is as above.

16.3.3 Examples

Let us assume that the sequence of claims is i.i.d. and their distribution belongs to the domain of attraction of the α-stable law with 1 < α < 2. Let R(t) be the following risk process

    R(t) = u + ct − Σ_{k=1}^{N(t)} Y_k,                                           (16.26)

where u is the initial capital, c is the premium rate paid by the policyholders, and {Y_k : k ∈ N} is an i.i.d. sequence with the distribution belonging to the domain of attraction of the α-stable law with 1 < α < 2, that is, fulfilling (16.13). Moreover, let EY_k = µ and the claim intensity be λ. Similarly as in the Brownian motion approximation we obtain:

    P{T(R) ≤ t} ≈ P{T(R_α) ≤ t},                                                  (16.27)
    P{T(R) < ∞} ≈ P{T(R_α) < ∞},                                                  (16.28)

where R_α(t) = u + (c − λµ)t − λ^{1/α} Z_α(t), and Z_α(t) is the α-stable Lévy motion with β = 1 and scale parameter σ′. The scale parameter can be calibrated using the asymptotic results of Mijnheer (1975), see also Samorodnitsky and Taqqu (1994, p. 50).


Table 16.2: Ruin probabilities for α = 1.0334 and fixed µ = 20, σ = 10, and t = 10.

     u    c   λ      Ψ(t)        Ψ
    25   50   2   0.45896   0.94780
    25   60   2   0.25002   0.90076
    30   60   2   0.24440   0.90022
    35   60   2   0.23903   0.89976
    40   60   2   0.23389   0.89935
    40   70   3   0.61235   0.96404

                                        STFdiff03.xpl

Table 16.3: Ruin probabilities for α = 1.5 and fixed µ = 20, σ = 10, and t = 10.

     u    c   λ        Ψ(t)         Ψ
    25   50   2   9.0273e-02   0.39735
    25   60   2   3.7381e-02   0.23231
    30   60   2   3.6168e-02   0.21461
    35   60   2   3.5020e-02   0.20046
    40   60   2   3.3932e-02   0.18880
    40   70   3   1.1424e-01   0.44372

                                        STFdiff04.xpl

For α = 2, the standard deviation satisfies σ = √2 σ′. Hence, it is reasonable to put σ′ = 2^{−1/α} σ in the general case. In this way we can compare the results for the two approximations. Using (16.20) and (16.23) we compute the finite and infinite time ruin probabilities for different levels of initial capital, premium, intensity of claims, expectation of claims, and their scale parameter, see Tables 16.2 and 16.3. A sample path of the process R_α is depicted in Figure 16.2. The results in the tables show the effects of the heaviness of the claim size distribution tails on the crucial parameter for insurance companies – the ruin probability. It is clearly visible that a decrease of α increases the ruin probability. The tables also illustrate the relationship between the ruin probability and the initial capital u, premium c, intensity of claims λ, expectation of claims µ, and their scale parameter σ′. For the heavy-tailed claim distributions the ruin
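Finite time ruin probabilities of this kind can also be checked by crude Monte Carlo: simulate the totally skewed (β = 1) stable increments with the Chambers–Mallows–Stuck method and record how often the discretized path of R_α drops below zero. A rough sketch (the grid discretization slightly underestimates ruin, and all parameter values are illustrative):

```python
import math, random

def stable_cms(alpha, beta, rng):
    """One standard alpha-stable variate (Chambers-Mallows-Stuck, alpha != 1)."""
    v = rng.uniform(-math.pi / 2.0, math.pi / 2.0)
    w = rng.expovariate(1.0)
    t = beta * math.tan(math.pi * alpha / 2.0)
    b = math.atan(t) / alpha
    s = (1.0 + t * t) ** (1.0 / (2.0 * alpha))
    return (s * math.sin(alpha * (v + b)) / math.cos(v) ** (1.0 / alpha)
            * (math.cos(v - alpha * (v + b)) / w) ** ((1.0 - alpha) / alpha))

def ruin_prob_mc(u, c, lam, mu, sigp, alpha, t=10.0, steps=100, paths=1000, seed=7):
    """Estimate P{T(R_alpha) <= t} for R_alpha(s) = u + (c - lam*mu)s - lam^(1/alpha)*sigp*Z(s)."""
    rng = random.Random(seed)
    dt = t / steps
    drift = (c - lam * mu) * dt
    scale = lam ** (1.0 / alpha) * sigp * dt ** (1.0 / alpha)
    ruined = 0
    for _ in range(paths):
        r = u
        for _ in range(steps):
            r += drift - scale * stable_cms(alpha, 1.0, rng)
            if r < 0.0:
                ruined += 1
                break
    return ruined / paths

# Illustrative: u = 25, c = 50, lam = 2, mu = 20, sigma' = 2**(-1/alpha)*10.
sp = 2.0 ** (-1.0 / 1.5) * 10.0
print(ruin_prob_mc(25.0, 50.0, 2.0, 20.0, sp, 1.5))
```

Increasing the initial capital u lowers the estimate, in agreement with the pattern shown in Table 16.3.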


Figure 16.2: A sample path of the process Rα for α = 1.5, u = 40, c = 100, µ = 20, σ = 10, and λ = 3. STFdiff05.xpl

probability is considerably higher than for the light-tailed claim distributions. Thus the estimation of the stability parameter α from real data is crucial for the choice of the premium c.


Bibliography

Asmussen, S. (2000). Ruin Probabilities, World Scientific, Singapore.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Furrer, H. (1998). Risk processes perturbed by α-stable Lévy motion, Scandinavian Actuarial Journal 10: 23–35.

Furrer, H., Michna, Z. and Weron, A. (1997). Stable Lévy motion approximation in collective risk theory, Insurance: Mathematics and Economics 20: 97–114.

Iglehart, D. L. (1969). Diffusion approximations in collective risk theory, Journal of Applied Probability 6: 285–292.

Janicki, A. and Weron, A. (1994). Simulation and Chaotic Behavior of α-Stable Stochastic Processes, Marcel Dekker, New York.

Mijnheer, J. L. (1975). Sample path properties of stable processes, Mathematical Centre Tracts 59, Mathematisch Centrum, Amsterdam.

Port, S. C. (1989). Stable processes with drift on the line, Trans. Amer. Math. Soc. 313: 201–212.

Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (1999). Stochastic Processes for Insurance and Finance, John Wiley and Sons, New York.

Samorodnitsky, G. and Taqqu, M. (1994). Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Chapman and Hall, London.

Schmidli, H. (1994). Diffusion approximations for a risk process with the possibility of borrowing and investment, Stochastic Models 10: 365–388.

Weron, A. (1984). Stable processes and measures: A survey, in D. Szynal and A. Weron (eds.), Probability Theory on Vector Spaces III, Lecture Notes in Mathematics 1080: 306–364.

Weron, R. (2001). Lévy-stable distributions revisited: tail index > 2 does not exclude the Lévy-stable regime, International Journal of Modern Physics C 12: 200–223.

Willekens, E. (1987). On the supremum of an infinitely divisible process, Stoch. Proc. Appl. 26: 173–175.

17 Risk Model of Good and Bad Periods

Zbigniew Michna

17.1 Introduction

Classical insurance risk models rely on independent increments of the corresponding risk process. However, this assumption can be very restrictive in modeling natural events. For example, Müller and Pflug (2001) found a significant correlation of claims related to tornados in the USA. To cope with these observations we present here a risk model producing positively correlated claims. In recent years such models have been extensively investigated by Gerber (1981; 1982), Promislow (1991), Michna (1998), Nyrhinen (1998; 1999a; 1999b), Asmussen (1999), and Müller and Pflug (2001). We consider a model where the time of the year influences claims. For example, seasonal weather fluctuations affect the size and number of damages in car accidents, and intensive rains can cause abnormal damage to households. We assume the existence of good and bad periods for the insurance company in the sense of different expected values for claim sizes. This structure of good and bad periods produces a dependence of claims such that the resulting risk process can be approximated by fractional Brownian motion with a linear drift. Explicit asymptotic formulas and numerical results can be derived for different levels of the dependence structure. As we will see, the dependence of claims affects a crucial parameter for the risk exposure of the insurance company – the ruin probability. Recall that the ruin time T is defined as the first time the company has a negative capital. One of the key problems of collective risk theory concerns calculating the ultimate ruin probability Ψ = P(T < ∞), i.e. the probability


that the risk process ever becomes negative. On the other hand, the insurance company will typically be interested in the probability that ruin occurs before time t, that is Ψ(t) = P(T ≤ t). In the next section we present basic definitions and assumptions imposed on the model, and results which allow us to approximate the risk process by fractional Brownian motion. Section 17.3 deals with bounds and asymptotic formulas for ruin probabilities. The last section is devoted to numerical results.

17.2 Fractional Brownian Motion and the Risk Model of Good and Bad Periods

In this section we describe the fractional Brownian motion approximation in risk theory. We show that under suitable assumptions the risk process constructed from claims appearing in good and bad periods can be approximated by fractional Brownian motion with a linear drift. Hence, we first introduce the definition of fractional Brownian motion and then construct the model. A process B_H is called fractional Brownian motion if for some 0 < H ≤ 1:

1. B_H(0) = 0 almost surely.

2. B_H has strictly stationary increments, that is, the random function M_h(t) = B_H(t + h) − B_H(t), h ≥ 0, is strictly stationary.

3. B_H is self-similar of order H (denoted H–ss), that is, L{B_H(ct)} = L{c^H B_H(t)} in the sense of finite-dimensional distributions.

4. Finite-dimensional distributions of B_H are Gaussian with EB_H(t) = 0.

5. B_H is almost surely continuous.

If not stated otherwise explicitly, we let the parameter of self-similarity satisfy 1/2 < H < 1. The concept of semi-stability was introduced by Lamperti (1962) and recently discussed in Embrechts and Maejima (2002). Mandelbrot and Van Ness (1968) call it self-similarity when appearing in conjunction with stationary increments, as it does here. When we observe arriving claims we assume that we have good and bad periods (e.g. periods of good weather and periods of bad weather). These two periods alternate. Let {T_n^G, n ∈ N} be i.i.d. non-negative random variables representing


good periods; similarly, let {S^B, S_n^B, n ∈ N} be i.i.d. non-negative random variables representing bad periods. The T's are assumed independent of the S's, the common distribution of good periods is F^G, and the distribution of bad periods is F^B. We assume that both F^G and F^B have finite means ν_G and ν_B, respectively, and we set ν = ν_G + ν_B.

Consider the pure renewal sequence initiated by a good period {0, Σ_{i=1}^n (T_i^G + S_i^B), n ∈ N}. The inter-arrival distribution is F^G ∗ F^B and the mean inter-arrival time is ν. This pure renewal process has a stationary version {D, D + Σ_{i=1}^n (T_i^G + S_i^B), n ∈ N}, where D is a delay random variable (Asmussen, 1987). However, by defining the initial delay interval of length D this way, the interval does not decompose into a good and a bad period the way subsequent inter-arrival intervals do. Consequently, we turn to an alternative construction of the stationary renewal process to decompose the delay random variable D into a good and a bad period. Define three independent random variables B, T_0^G, and S_0^B, which are independent of (S^B, T_n^G, S_n^B, n ∈ N), as follows: B is a Bernoulli random variable with values in {0, 1} and mass function

    P(B = 1) = ν_G/ν = 1 − P(B = 0)

and

    P(T_0^G > x) = (1/ν_G) ∫_x^∞ {1 − F^G(s)} ds  =:  1 − F_0^G(x),
    P(S_0^B > x) = (1/ν_B) ∫_x^∞ {1 − F^B(s)} ds  =:  1 − F_0^B(x),

for x > 0. Define a delay random variable D_0 by

    D_0 = (T_0^G + S^B)B + (1 − B)S_0^B

and a delayed renewal sequence by

    {S_n, n ≥ 0}  :=  { D_0, D_0 + Σ_{i=1}^n (T_i^G + S_i^B), n ≥ 0 }.

One can verify that this delayed renewal sequence is stationary (Heath, Resnick, and Samorodnitsky, 1998). We now define L(t) to be 1 if t falls in a good period, and L(t) = 0 if t is in a bad period. More precisely, the process {L(t), t ≥ 0} is defined in terms of {S_n, n ≥ 0} as follows:

    L(t) = B I(0 ≤ t < T_0^G) + Σ_{n=0}^∞ I(S_n ≤ t < S_n + T_{n+1}^G).            (17.1)


The process {L(t), t ≥ 0} is strictly stationary and

    P{L(t) = 1} = EL(t) = ν_G/ν.

Let {Y_n^G, n ≥ 0} be i.i.d. random variables representing claims appearing in good periods (e.g. Y_n^G describes a claim which may appear at the n-th moment in a good period). Similarly, let {Y_n^B, n ≥ 0} be i.i.d. random variables representing claims appearing in bad periods (e.g. Y_n^B describes a claim which may appear at the n-th moment in a bad period). We assume that {Y_n^G, n ≥ 0}, {Y_n^B, n ≥ 0} and {L(t), t ≥ 0} are independent, E(Y_0^G) = g < E(Y_0^B) = b, and the second moments of Y_0^G and Y_0^B exist. Then the claim Y_n appearing at the n-th moment is

    Y_n = L(n)Y_n^G + {1 − L(n)}Y_n^B,   n ≥ 0.                                    (17.2)
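The positive dependence this construction induces in {Y_n} is easy to see empirically. The sketch below alternates heavy-tailed good periods with exponential bad periods and checks that the sample autocovariance of the claim sequence is positive at small lags (for simplicity the chain starts in a fresh good period rather than with the stationary delay D_0; all distributions and parameters are illustrative):

```python
import random

def simulate_claims(n, g=1.0, b=5.0, seed=11):
    """Claims on integer times 0..n-1: Exp(mean g) in good periods, Exp(mean b)
    in bad ones. Good period lengths are Pareto-type with tail index 1.5
    (so C = 0.5 in the sense of (17.3)); bad periods are Exp(1)."""
    rng = random.Random(seed)
    claims = []
    good = True
    remaining = (1.0 - rng.random()) ** (-1.0 / 1.5)  # length of first good period
    for _ in range(n):
        while remaining <= 0.0:                       # move into the next period
            good = not good
            if good:
                remaining += (1.0 - rng.random()) ** (-1.0 / 1.5)
            else:
                remaining += rng.expovariate(1.0)
        claims.append(rng.expovariate(1.0 / (g if good else b)))
        remaining -= 1.0
    return claims

def autocov(x, lag):
    m = sum(x) / len(x)
    return sum((a - c) * 0.0 + (a - m) * (c2 - m)
               for a, c2, c in zip(x, x[lag:], x[lag:])) / (len(x) - lag)

y = simulate_claims(50000)
print(autocov(y, 1), autocov(y, 5))  # expected to come out positive
```

Because good and bad means differ (g < b) and periods persist over many integer times, neighboring claims tend to share the same regime, which is exactly the source of the long-range dependence quantified in (17.5).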

Furthermore, the sequence {Y_n, n ≥ 0} is stationary. Assume that

    1 − F^G(t) = t^{−(C+1)} K(t),                                                  (17.3)

for t → ∞, 0 < C < 1, where K is slowly varying at infinity. Moreover, assume that

    1 − F^B(t) = O{1 − F^G(t)},                                                    (17.4)

as t → ∞, and that there exists an n ≥ 1 such that (F^G ∗ F^B)^{∗n} is nonsingular. Then

    Cov(Y_0, Y_n) ∼ {ν_B² (b − g)² / (Cν³)} n^{−C} K(n)                            (17.5)

when n → ∞ (Heath, Resnick, and Samorodnitsky, 1998).

We assumed that the good period dominates the bad period, but one can approach the problem the other way round (i.e. the bad period can dominate the good period) because of the symmetry of the good and bad period characteristics in the covariance function. Assume that EY_n = µ and ϕ(n) = n^H K(n), where K is a slowly varying function at infinity. Let the sequence {Y_k : k ∈ N} be as above and let {N^{(n)} : n ∈ N} be a sequence of point processes such that

    {N^{(n)}(t) − λnt} / ϕ(n)  →^L  0                                              (17.6)


weakly in the Skorokhod topology (Jacod and Shiryaev, 1987) for some positive constant λ. Assume also that

    lim_{n→∞} { c^{(n)} − λn µ / ϕ(n) } = c                                        (17.7)

and

    lim_{n→∞} u^{(n)} = u.                                                         (17.8)

Then

    u^{(n)} + c^{(n)} t − (1/ϕ(n)) Σ_{k=1}^{N^{(n)}(t)} Y_k  →^L  u + ct − λ^H B_H(t)   (17.9)

in the Skorokhod topology as n → ∞. Condition (17.6) is satisfied for a wide class of point processes; for example, it holds if the times between consecutive claims constitute an i.i.d. sequence with the distribution possessing a finite second moment.
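The limit in (17.9) involves fractional Brownian motion, which (unlike the stable case) can be simulated exactly on a finite grid from its covariance function Cov{B_H(s), B_H(t)} = (1/2)(s^{2H} + t^{2H} − |t − s|^{2H}) for σ = 1. A minimal Cholesky-based sketch (for long grids faster methods such as circulant embedding are preferable):

```python
import math, random

def fbm_path(n, H, T=1.0, seed=3):
    """Exact fractional Brownian motion on t_i = i*T/n, i = 1..n (sigma = 1),
    via the Cholesky factor of the covariance matrix. Returns the grid, one
    sample path, and the factor/covariance for inspection."""
    t = [(i + 1) * T / n for i in range(n)]
    cov = [[0.5 * (t[i] ** (2 * H) + t[j] ** (2 * H) - abs(t[i] - t[j]) ** (2 * H))
            for j in range(n)] for i in range(n)]
    # Cholesky decomposition cov = L L^T (cov is positive definite for 0 < H < 1)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    path = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
    return t, path, L, cov

t, path, L, cov = fbm_path(50, H=0.7)
```

Multiplying the path by λ^H σ and adding the linear drift u + ct then gives grid values of the limit process in (17.9).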

17.3 Ruin Probability in the Limit Risk Model of Good and Bad Periods

Let us define

    R_H(t) = u + ct − λ^H B_H(t),                                                  (17.10)

where u, c, and λ are positive constants, and the ruin time:

    T(R_H) = inf{t > 0 : R_H(t) < 0},                                              (17.11)

if the set is non-empty and T(R_H) = ∞ otherwise. The ruin probability of the process (17.10) is bounded by (Michna, 1998):

    P{T(R_H) ≤ t} ≤ 1 − Φ{(u + ct)/(σ(λt)^H)}
                    + exp{−2uct/(σ²(λt)^{2H})} [ 1 − Φ{(u − ct)/(σ(λt)^H)} ],      (17.12)

where the functional T is given in (17.11) and σ² = E{B_H²(1)}. The next result enables us to approximate the ruin probability of the process R_H(t) for a sufficiently large initial capital. For every t > 0:

    lim_{u→∞} P{T(R_H) ≤ t} / P{λ^H B_H(t) > u + ct} = 1,                          (17.13)
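The bound (17.12) is elementary to evaluate with the standard normal distribution function. A sketch (for H = 1/2 and λ = 1 it reduces to the exact Brownian finite time ruin probability; the parameter values below are illustrative, with c understood as the net drift of R_H):

```python
import math

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ruin_bound(u, c, lam, sigma, H, t):
    """Upper bound (17.12) for P{T(R_H) <= t}."""
    d = sigma * (lam * t) ** H
    return (1.0 - Phi((u + c * t) / d)
            + math.exp(-2.0 * u * c * t / (sigma ** 2 * (lam * t) ** (2 * H)))
            * (1.0 - Phi((u - c * t) / d)))

for u in (25.0, 30.0, 40.0):
    print(u, ruin_bound(u, c=10.0, lam=2.0, sigma=10.0, H=0.7, t=10.0))
```

The bound decreases in the initial capital u, as it must.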


where the functional T is given in (17.11). Now, let us consider the infinite time ruin probability. The lower and upper bounds for the ruin probability are given by:

    P{T(R_H) < ∞} ≥ 1 − Φ[ u^{1−H} c^H / {σ(λH)^H (1 − H)^{1−H}} ],                (17.14)

and

    P{T(R_H) < ∞} ≤ {2c / √(8π(1 − H))} ∫_0^∞ exp{ −(1/2) λ^{−2H} σ^{−2} (u x^{−H} + c x^{1−H})² } dx.   (17.15)

See Norros (1994) for the lower bound and Dębicki, Michna, and Rolski (1998) for the upper bound analysis.

The next result gives the asymptotic behavior of the infinite time ruin probability. Let the Hurst parameter satisfy 0 < H < 1. Then (Hüsler and Piterbarg, 1999):

    P{T(R_H) < ∞} = [ P_H √π c^{1−H} H^{H−3/2} u^{(1−H)(1/H−1)} ]
                    / [ 2^{1/(2H)−1/2} (1 − H)^{H+1/H−3/2} λ^{1−H} σ^{1/H−1} ]
                    × [ 1 − Φ( u^{1−H} c^H / {H^H (1 − H)^{1−H} λ^H σ} ) ] {1 + o(1)},   (17.16)

as u → ∞, where P_H is the Pickands constant, Piterbarg (1996). The value of the Pickands constant is known only for H = 0.5 and H = 1. Some approximations of its value can be found in Burnecki and Michna (2002) and Dębicki, Michna, and Rolski (2003). The above result permits us to approximate the infinite time ruin probability in the model of good and bad periods for large values of the initial capital. For an arbitrary value of the initial capital there exists a simulation method for the infinite time ruin probability based on a Girsanov-type theorem. To present this method we introduce the stopping time

    τ_a(u) = inf{t > 0 : B_H(t) + at > u},                                         (17.17)

where a ≥ 0, and the function

    w(t, s) = c_1 s^{1/2−H} (t − s)^{1/2−H},   s ∈ (0, t),
    w(t, s) = 0,                               s ∉ (0, t),                          (17.18)

where 1/2 < H < 1,

    c_1 = { H(2H − 1) B(3/2 − H, H − 1/2) }^{−1},                                  (17.19)

and B denotes the beta function. Note that τ_a < ∞ almost surely for a ≥ 0. According to Norros, Valkeila, and Virtamo (1999) the following centered Gaussian process

    M(t) = ∫_0^t w(t, s) dB_H(s),                                                  (17.20)

possesses independent increments and its variance is

    EM²(t) = c_2² t^{2−2H},                                                        (17.21)

where

    c_2 = { H(2H − 1)(2 − 2H) B(H − 1/2, 2 − 2H) }^{−1/2}.

In particular, M(t) is a martingale. For all a > 0 we have

    P{T(R_H) < ∞} = E exp{ −{(c + a)/(λ^H σ²)} ∫_0^{τ_a} w(τ_a, s) dB_H(s)
                           − {c_2²(c + a)²/(2λ^{2H} σ²)} τ_a^{2−2H} }.

The above formula enables us to simulate the infinite time ruin probability for an arbitrary value of the initial capital. Using the structure of the common distribution of (M(t), B_H(t)) we get the following estimator of the ruin probability, valid for 0 < H < 1:

    P{T(R_H) < ∞} = E exp{ −{(c + a)/(λ^{2H} σ²)} τ_a^{1−2H} u
                           + {(a² − c²)/(2λ^{2H} σ²)} τ_a^{2−2H} }.                (17.22)

Let us note that putting a = c in (17.22) we obtain a simple formula

    P{T(R_H) < ∞} = E exp{ −2c τ_c^{1−2H} u / (λ^{2H} σ²) }.                       (17.23)

For similar methods of simulation based on the change of measure technique applied to fluid models see Dębicki, Michna, and Rolski (2003).


Table 17.1: Ruin probabilities for H = 0.7 and fixed µ = 20, σ = 10, and t = 10.

     u    c   λ       Ψ(t)         Ψ
    25   50   2   8.1257e-2   0.28307
    25   60   2   1.3516e-2   0.03932
    30   60   2   6.6638e-3   0.02685
    35   60   2   3.6826e-3   0.01889
    40   60   2   2.2994e-3   0.01363
    40   70   3   1.0363e-1   0.38016

                                        STFgood01.xpl

Table 17.2: Ruin probabilities for H = 0.8 and fixed µ = 20, σ = 10, and t = 10.

     u    c   λ      Ψ(t)        Ψ
    25   50   2   0.22240   0.40728
    25   60   2   0.09890   0.08029
    30   60   2   0.06570   0.06583
    35   60   2   0.04496   0.05471
    40   60   2   0.03183   0.04646
    40   70   3   0.23622   0.55505

                                        STFgood02.xpl

17.4 Examples

Let us assume that claims appear in good and bad periods. According to (17.9) we are able to approximate the risk process by:

    R_H(t) = u + (c − λµ)t + λ^H B_H(t),

where B_H(t) is a fractional Brownian motion, c is the premium rate, µ is the expected value of claims, σ² = E B_H²(1) is their variance, λ is the claim intensity, and u is the initial capital. We can compute finite and infinite time ruin probabilities for different levels of the initial capital, premium, intensity of claims, expectation of claims and


Figure 17.1: Sample paths of the process RH for H = 0.7, u = 40, c = 100, µ = 20, σ = 10, and λ = 3. STFgood03.xpl

their variance (see Tables 17.1 and 17.2). We approximate the finite time ruin probabilities by formula (17.12) and the infinite time ruin probabilities using the estimator given in (17.23). Sample paths of the process R_H are depicted in Figure 17.1. The results in the tables show the effects of dependence structures between claims on the crucial parameter for insurance companies – the ruin probability. Numerical simulations are performed for different values of the parameter of self-similarity H, which defines the level of the dependence between claims. It is clearly visible that an increase of H increases the ruin probability. The tables also illustrate the relationship between the ruin probability and the initial capital u, premium c, intensity of claims λ, expectation of claims µ, and their standard deviation σ. It is shown that for dependent damage occurrences the ruin probability is considerably higher than for independent events. Thus ignoring


possible dependence (existence of good and bad periods) and its level might lead to wrong choices of the premium c.


Bibliography

Asmussen, S. (1987). Applied Probability and Queues, John Wiley and Sons, New York.

Asmussen, S. (1999). On the ruin problems for some adapted premium rules, MaPhySto Research Report No. 5, University of Aarhus, Denmark.

Burnecki, K. and Michna, Z. (2002). Simulation of Pickands constants, Probability and Mathematical Statistics 22: 193–199.

Dębicki, K., Michna, Z. and Rolski, T. (1998). On the supremum from Gaussian processes over infinite horizon, Probability and Mathematical Statistics 18: 83–100.

Dębicki, K., Michna, Z. and Rolski, T. (2003). Simulation of the asymptotic constant in some fluid models, Stochastic Models 19: 407–423.

Embrechts, P. and Maejima, M. (2002). Selfsimilar Processes, Princeton University Press, Princeton and Oxford.

Gerber, H. U. (1981). On the probability of ruin in an autoregressive model, Mitteilung der Vereinigung Schweiz. Versicherungsmathematiker 2: 213–219.

Gerber, H. U. (1982). Ruin theory in a linear model, Insurance: Mathematics and Economics 1: 177–184.

Heath, D., Resnick, S. and Samorodnitsky, G. (1998). Heavy tails and long range dependence in on/off processes and associated fluid models, Mathematics of Operations Research 23: 145–165.

Hüsler, J. and Piterbarg, V. (1999). Extremes of a certain class of Gaussian processes, Stochastic Processes and their Applications 83: 338–357.

Jacod, J. and Shiryaev, A. N. (1987). Limit Theorems for Stochastic Processes, Springer, Berlin Heidelberg.

Lamperti, J. (1962). Semi-stable stochastic processes, Transactions of the American Mathematical Society 104: 62–78.

Mandelbrot, B. B. and Van Ness, J. W. (1968). Fractional Brownian motions, fractional noises and applications, SIAM Review 10: 422–437.


Michna, Z. (1998). Self-similar processes in collective risk theory, Journal of Applied Mathematics and Stochastic Analysis 11: 429–448.

Müller, A. and Pflug, G. (2001). Asymptotic ruin probabilities for risk processes with dependent increments, Insurance: Mathematics and Economics 28: 381–392.

Norros, I. (1994). A storage model with self-similar input, Queueing Systems 16: 387–396.

Norros, I., Valkeila, E. and Virtamo, J. (1999). A Girsanov type theorem for the fractional Brownian motion, Bernoulli 5: 571–587.

Nyrhinen, H. (1998). Rough descriptions of ruin for a general class of surplus processes, Adv. Appl. Probab. 30: 107–119.

Nyrhinen, H. (1999a). On the ruin probabilities in a general economic environment, Stoch. Proc. Appl. 83: 319–330.

Nyrhinen, H. (1999b). Large deviations for the time of ruin, J. Appl. Probab. 36: 733–746.

Piterbarg, V. I. (1996). Asymptotic Methods in the Theory of Gaussian Processes and Fields, Translations of Mathematical Monographs 148, AMS, Providence.

Promislow, S. D. (1991). The probability of ruin in a process with dependent increments, Insurance: Mathematics and Economics 10: 99–107.

18 Premiums in the Individual and Collective Risk Models

Jan Iwanik and Joanna Nowicka-Zagrajek

The premium is the price for the good "insurance" sold by an insurance company. The right pricing is vital: too low a price level results in a loss, while with too high prices a company can price itself out of the market. It is the actuary's task to find methods of premium calculation (also called premium calculation principles), i.e. rules saying what premium should be assigned to a given risk. We present the most important types of premiums in Section 18.1; for further premium calculation principles, not considered here, see Straub (1988) and Young (2004). We focus on the monetary payout made by the insurer in connection with insurable losses and we ignore premium loading for expenses and profit. The goal of insurance modeling is to develop a probability distribution for the total amount paid in benefits. This allows the insurance company to manage its capital account and honor its commitments. Therefore, we describe two standard models: the individual risk model in Section 18.2 and the collective risk model in Section 18.3. In both cases, we determine the expectation and variance of the portfolio, consider the approximation of the distribution of the aggregate claims, and present formulae for the considered premiums. It is worth mentioning here that the collective risk model can also be applied to quantifying regulatory capital for operational risk, for example to model a yearly operational risk variable (Embrechts, Furrer, and Kaufmann, 2003).

18.1 Premium Calculation Principles

Let X denote a non-negative random variable describing the size of a claim (risk, loss) with the distribution function F_X(t). Moreover, we assume that the expected value E(X), the variance Var(X), and the moment generating function M_X(z) = E(e^{zX}) exist. The simplest premium (calculation principle) is called the pure risk premium and it is equal to the expectation of the claim size variable:

    P = E(X).                                                                      (18.1)

This premium is often applied in life and some mass lines of business in non-life insurance. As is known from ruin theory, the pure risk premium without any kind of loading is insufficient since, in the long run, ruin is inevitable even in the case of substantial (though finite) initial reserves. Nevertheless, the pure risk premium can be – and still is – of practical use because, for one thing, in practice the planning horizon is always limited, and for another, there are indirect ways of loading a premium, e.g. by neglecting interest earnings (Straub, 1988). The future claims cost X may be different from its expected value E(X), and the estimator of E(X) drawn from past data may be different from the true E(X). To reflect this fact, the insurer can impose a risk loading on the pure risk premium. The pure risk premium with safety (security) loading, given by

    P_SL(θ) = (1 + θ) E(X),   θ ≥ 0,                                               (18.2)

where θ and θE(X) are the relative and total safety loadings, respectively, is very popular in practical applications. This premium is an increasing linear function of θ and it is equal to the pure risk premium for θ = 0.

The pure risk premium and the premium with safety loading are sometimes criticised because they do not depend on the degree of fluctuation of X. Thus, two other rules have been proposed. The first one, denoted here by P_V(a) and given by

    P_V(a) = E(X) + a Var(X),   a ≥ 0,                                             (18.3)

is called the σ²-loading principle or the variance principle. In this case the premium depends not only on the expectation but also on the variance of the


loss. The premium given by (18.3) is an increasing linear function of a and it is obvious that for a = 0 it is equal to the pure risk premium. The other one, denoted here by P_SD(b) and given by

    P_SD(b) = E(X) + b √Var(X),   b ≥ 0,                                           (18.4)

is called the σ-loading principle or the standard deviation principle. In this case the premium depends on the expectation and also on the standard deviation of the loss. The premium given by (18.4) is an increasing linear function of b and clearly for b = 0 it reduces to the pure risk premium. Both the σ²- and σ-loading principles are widely used in practice, but there is a discussion as to which one is better. If we consider two risks X₁ and X₂, the σ-loading is additive (and the σ²-loading is not) when X₁ and X₂ are totally dependent, whereas the contrary is true for independent risks X₁ and X₂. Although in many cases additivity is required from premium calculation principles, there are also strong arguments against additivity, based on the idea that the price of insurance ought to be the lower the larger the number of risk carriers sharing the risk.

The rules described so far are sometimes called "empirical" or "pragmatic". Another approach employs the notion of utility (Straub, 1988). The so-called zero utility principle states that the premium P_U for a risk X should be calculated such that the expected utility is (at least) equal to the zero utility. This principle yields a technical minimum premium in the sense that the risk X should not be accepted at a premium below P_U. In the trivial case the zero utility premium equals the pure risk premium. A more interesting case is the exponential utility, which leads to a premium, denoted here by P_E(c) and called the exponential premium, given by

    P_E(c) = ln M_X(c)/c = ln E(e^{cX})/c,   c > 0.                                (18.5)

This premium is an increasing function of the parameter c, which measures the risk aversion, and lim_{c→0} P_E(c) = E(X). It is worth noticing that the zero utility principle yields additive premiums only in the trivial and the exponential utility cases (Gerber, 1980). As the trivial utility is just a special case of exponential utility corresponding to the limit c → 0, additivity characterizes the exponential utility. Another interesting approach to the problem of premium calculation is the quantile premium, denoted here by P_Q(ε) and given by

    P_Q(ε) = F_X^{−1}(1 − ε),                                                      (18.6)


where ε ∈ (0, 1) is small enough. As can be easily seen, it is just the quantile of order (1 − ε) of the loss distribution; this means that the insurer wants a premium that covers (1 − ε) · 100% of the possible loss. A reasonable range of the parameter ε is usually from 1% to 5%.
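The six principles above can be computed side by side from an empirical claim sample, replacing E(X), Var(X), and F_X^{−1} by their sample counterparts. A minimal sketch (the claim sample below is made up purely for illustration):

```python
import math
import statistics

claims = [0.5, 1.2, 0.8, 3.1, 0.9, 2.4, 1.7, 0.6, 4.2, 1.1]  # illustrative losses

mean = statistics.fmean(claims)
var = statistics.pvariance(claims)

def pure():
    return mean                                                # (18.1)

def safety(theta):
    return (1.0 + theta) * mean                                # (18.2)

def variance_pr(a):
    return mean + a * var                                      # (18.3)

def std_dev(b):
    return mean + b * math.sqrt(var)                           # (18.4)

def exponential(c):
    # (18.5): ln E exp(cX) / c, with the expectation taken empirically
    return math.log(statistics.fmean(math.exp(c * x) for x in claims)) / c

def quantile(eps):
    # (18.6): empirical quantile of order 1 - eps
    s = sorted(claims)
    return s[min(len(s) - 1, math.ceil((1.0 - eps) * len(s)) - 1)]

print(pure(), safety(0.1), variance_pr(0.05), std_dev(0.5),
      exponential(0.2), quantile(0.05))
```

Note that the exponential premium always exceeds the pure premium for a non-degenerate sample (Jensen's inequality), reflecting the risk aversion built into (18.5).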

18.2 Individual Risk Model

We consider here a certain portfolio of insurance policies and the total amount of claims arising from it during a given period (usually a year). Our aim is to determine the joint premium for the whole portfolio that will cover the accumulated risk connected with all policies. In the individual risk model, which is widely used in applications, especially in life and health insurance, we assume that the portfolio consists of n insurance policies and the claim made in respect of policy k is denoted by X_k. Then the total, or aggregate, amount of claims is

    S = X_1 + X_2 + ... + X_n,                                                     (18.7)

where X_k is the loss on insured unit k and n is the number of risk units insured (known and fixed at the beginning of the period). The X_k's are usually postulated to be independent random variables (but not necessarily identically distributed), so we will make such an assumption in this section. Moreover, the individual risk model discussed here will not recognize the time value of money because we consider only models for short periods. The claim amount variable X_k for each policy is usually presented as

    X_k = I_k B_k,                                                                 (18.8)

where the random variables I_1, ..., I_n, B_1, ..., B_n are independent. The random variable I_k indicates whether or not the kth policy produced a payment. If a claim has occurred, then I_k = 1; if there has not been any claim, I_k = 0. We denote q_k = P(I_k = 1) and 1 − q_k = P(I_k = 0). The random variable B_k can have an arbitrary distribution and represents the amount of the payment in respect of the kth policy given that a payment was made. In Section 18.2.1 we present general formulae for the premiums introduced in Section 18.1. In Section 18.2.2 we apply the normal approximation to obtain closed-form formulae for both the exponential and quantile premiums. Finally, in Section 18.2.3, we illustrate the behavior of these premiums on real-life data describing losses resulting from catastrophic events in the USA.

18.2.1 General Premium Formulae

In order to find formulae for the "pragmatic" premiums, let us assume that the expectations and variances of the B_k's exist and denote µ_k = E(B_k) and σ_k² = Var(B_k), k = 1, 2, ..., n. Then

    E(X_k) = µ_k q_k,                                                              (18.9)

and the mean of the total loss in the individual risk model is given by

    E(S) = Σ_{k=1}^n µ_k q_k.                                                      (18.10)

The variance of X_k can be calculated as follows:

    Var(X_k) = Var{E(X_k|I_k)} + E{Var(X_k|I_k)}
             = Var{I_k E(B_k)} + E{I_k Var(B_k)}
             = {E(B_k)}² Var(I_k) + Var(B_k) E(I_k)
             = µ_k² q_k (1 − q_k) + σ_k² q_k.                                      (18.11)

Applying the assumption of independent X_k's, the variance of S is of the form:

    Var(S) = Σ_{k=1}^n {µ_k² q_k (1 − q_k) + σ_k² q_k}.                            (18.12)

Now we can easily obtain the following formulae for the individual risk model:

• pure risk premium

  P = Σ_{k=1}^{n} µ_k q_k,    (18.13)

• premium with safety loading

  P_{SL}(θ) = (1 + θ) Σ_{k=1}^{n} µ_k q_k,  θ ≥ 0,    (18.14)

• premium with variance loading

  P_V(a) = Σ_{k=1}^{n} µ_k q_k + a Σ_{k=1}^{n} {µ_k² q_k (1 − q_k) + σ_k² q_k},  a ≥ 0,    (18.15)

18 Premiums in the Individual and Collective Risk Models

• premium with standard deviation loading

  P_{SD}(b) = Σ_{k=1}^{n} µ_k q_k + b √[ Σ_{k=1}^{n} {µ_k² q_k (1 − q_k) + σ_k² q_k} ],  b ≥ 0.    (18.16)
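The loading formulae (18.13)–(18.16) translate directly into code. A minimal sketch in Python (NumPy assumed; the function name and the homogeneous example portfolio are illustrative, not from the chapter):

```python
import numpy as np

def individual_model_premiums(mu, sigma, q, theta=0.0, a=0.0, b=0.0):
    """Premiums (18.13)-(18.16) for a portfolio of n policies X_k = I_k B_k."""
    mu, sigma, q = map(np.asarray, (mu, sigma, q))
    mean_S = np.sum(mu * q)                                # E(S), eq. (18.10)
    var_S = np.sum(mu**2 * q * (1 - q) + sigma**2 * q)     # Var(S), eq. (18.12)
    return {
        "pure": mean_S,                                    # (18.13)
        "safety_loading": (1 + theta) * mean_S,            # (18.14)
        "variance_loading": mean_S + a * var_S,            # (18.15)
        "std_dev_loading": mean_S + b * np.sqrt(var_S),    # (18.16)
    }

# Homogeneous portfolio: 500 policies, claim probability 5%,
# claim-size mean 1000 and standard deviation 250 (illustrative numbers).
prem = individual_model_premiums(
    mu=[1000.0] * 500, sigma=[250.0] * 500, q=[0.05] * 500, theta=0.1, b=1.645
)
```

With b = 1.645 the standard-deviation loading mirrors the 95% quantile premium under the normal approximation discussed below.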

If we assume that for each k = 1, 2, . . . , n the moment generating function M_{B_k}(t) exists, then

  M_{X_k}(t) = 1 − q_k + q_k M_{B_k}(t),    (18.17)

and hence

  M_S(t) = Π_{k=1}^{n} {1 − q_k + q_k M_{B_k}(t)}.    (18.18)

This leads to the following formula for the exponential premium:

  P_E(c) = (1/c) Σ_{k=1}^{n} ln{1 − q_k + q_k M_{B_k}(c)},  c > 0.    (18.19)

In the individual risk model, claims of an insurance company are modeled as a sum of the claims of many insured individuals. Therefore, in order to find the quantile premium given by

  P_Q(ε) = F_S^{-1}(1 − ε),  ε ∈ (0, 1),    (18.20)

the distribution of the sum of independent random variables has to be determined. There are methods to solve this problem, see Bowers et al. (1997) and Panjer and Willmot (1992). For example, one can use the convolution of the probability distributions of X_1, X_2, . . . , X_n. However, in practice this can be a very complex task involving numerous calculations, and in many cases the result cannot be represented by a simple formula. Therefore, approximations for the distribution of the sum are often used.

18.2.2 Premiums in the Case of the Normal Approximation

The distribution of the total claim in the individual risk model can be approximated by means of the central limit theorem (Bowers et al., 1997). In that case it is sufficient to evaluate the means and variances of the individual loss random variables, sum them to obtain the mean and variance of the aggregate loss of the insurer, and apply the normal approximation. However, it is important to remember that the quality of this approximation depends not only on the size of the portfolio, but also on its homogeneity.

The normal approximation of the distribution of the total loss S in the individual risk model can be applied to find a simple expression for the quantile premium. If the distribution of S is approximated by a normal distribution with mean E(S) and variance Var(S), the quantile premium can be written as

  P_Q(ε) = Σ_{k=1}^{n} µ_k q_k + Φ^{-1}(1 − ε) √[ Σ_{k=1}^{n} {µ_k² q_k (1 − q_k) + σ_k² q_k} ],    (18.21)

where ε ∈ (0, 1) and Φ(·) denotes the standard normal distribution function. This is the same premium as the premium with standard deviation loading with b = Φ^{-1}(1 − ε). Moreover, in the case of this approximation, it is possible to express the exponential premium as

  P_E(c) = Σ_{k=1}^{n} µ_k q_k + (c/2) Σ_{k=1}^{n} {µ_k² q_k (1 − q_k) + σ_k² q_k},  c > 0,    (18.22)

and it is easy to notice that this premium is equal to the premium with variance loading with a = c/2. Since the distribution of S is approximated by the normal distribution with the same mean value and variance, premiums defined in terms of the expected value of the aggregate claims are given by the same formulae as in Section 18.2.1.
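Both closed-form approximations can be sketched in a few lines, using only NumPy and the standard library's NormalDist for Φ^{-1} (function name and the example portfolio are illustrative assumptions):

```python
import numpy as np
from statistics import NormalDist

def normal_approx_premiums(mu, sigma, q, eps=0.05, c=1e-4):
    """Quantile (18.21) and exponential (18.22) premiums under the normal
    approximation of the aggregate loss S in the individual risk model."""
    mu, sigma, q = map(np.asarray, (mu, sigma, q))
    mean_S = np.sum(mu * q)
    var_S = np.sum(mu**2 * q * (1 - q) + sigma**2 * q)
    p_quantile = mean_S + NormalDist().inv_cdf(1 - eps) * np.sqrt(var_S)  # (18.21)
    p_exponential = mean_S + 0.5 * c * var_S                              # (18.22)
    return p_quantile, p_exponential

# Same illustrative homogeneous portfolio of 500 policies as above.
pq, pe = normal_approx_premiums(mu=[1000.0] * 500, sigma=[250.0] * 500,
                                q=[0.05] * 500, eps=0.05, c=1e-4)
```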

18.2.3 Examples

Quantile premium for the individual risk model with Bk ’s log-normally distributed. The insurance company holds n = 500 policies Xk . The claims arising from policies can be represented as independent identically distributed random variables. The actuary estimates that each policy generates a claim with probability qk = 0.05 and the claim size, given that the claim happens, is log-normally distributed. The parameters of the log-normal distribution correspond to the real-life data describing losses resulting from catastrophic events in the USA, i.e. µk = 18.3806 and σk = 1.1052 (see Chapter 13).


As the company wants to ensure that the probability of losing any money is less than a specific value ε, the actuary is asked to calculate the quantile premium. The actuary wants to compare the quantile premium given by the general formula (18.20) with the one (18.21) obtained from the normal approximation of the aggregate claims. The distribution of the total claim in this model can be approximated by the normal distribution with mean 4.4236 · 10⁹ and variance 2.6160 · 10¹⁸. Figure 18.1 shows the quantile premium in the individual risk model framework for ε ∈ (0.01, 0.1). The exact premium is drawn with the solid blue line, whereas the premium based on the normal approximation is marked with the dashed red line. Because of the complexity of the analytical formulae, the exact quantile premium for the total claim amount was obtained using numerical simulations; this is the reason for the line being jagged. Better smoothness can be achieved by performing a larger number of Monte Carlo simulations (here we performed 10000 simulations). We can observe that the approximation fits well for larger ε and worse for small ε. This is specific to the quantile premium: even if two distribution functions F₁(x), F₂(x) are very close to each other, their inverse functions F₁⁻¹(y), F₂⁻¹(y) may differ significantly for y close to 1.

Exponential premium for the individual risk model with B_k's gamma distributed. Because the company has a specific risk strategy described by the exponential utility function, the actuary is asked to determine the premium for the same portfolio of 500 independent policies once again, but now with respect to the risk aversion parameter c. The actuary is also asked to use a method of calculation that provides direct results and does not require Monte Carlo simulations. This time the actuary has decided to describe the claim size, given that the claim happens, by the gamma distribution with α = 0.9185 and β = 5.6870 · 10⁻⁹, see Chapter 13. The choice of the gamma distribution guarantees a simple analytical form of the premium, namely

  P_E(c) = (1/c) Σ_{k=1}^{n} ln{ 1 − q_k + q_k (β/(β − c))^α },  c > 0.    (18.23)

Figure 18.1: Quantile premium for the individual risk model with B_k's log-normally distributed. The exact premium (solid blue line) and the premium resulting from the normal approximation of the aggregate claims (dashed red line). Axes: quantile parameter ε (x), quantile premium in USD billion (y). STFprem01.xpl
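The exact curve in Figure 18.1 was obtained by Monte Carlo simulation of the aggregate loss. A possible sketch of such a simulation (since the policies in this example are i.i.d., the number of claims can be drawn from a binomial distribution; the seed and function name are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_quantile_premium(n=500, q=0.05, mu=18.3806, sigma=1.1052,
                              eps=0.05, n_sim=10_000):
    """Monte Carlo estimate of the exact quantile premium (18.20) for a
    portfolio of n i.i.d. policies with log-normal claim severities."""
    # Each simulation run: draw the number of claims, then claim sizes, sum to S.
    counts = rng.binomial(n, q, size=n_sim)
    totals = np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])
    return np.quantile(totals, 1 - eps)     # empirical F_S^{-1}(1 - eps)

est = simulate_quantile_premium()
```

Re-running with different seeds shows the jaggedness the text describes; increasing `n_sim` smooths the estimated curve.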

On the other hand, the actuary can use formula (18.22) applying the normal approximation of the aggregate claims with mean 4.0377 · 109 and variance 1.3295 · 1018 . Figure 18.2 shows the exponential premiums resulting from both approaches with respect to the risk aversion parameter c. A simple pattern can be observed – the more risk averse the customer is, the more he or she is willing to pay for the risk protection. Moreover, the normal approximation gives better results for smaller values of c.

Figure 18.2: Exponential premium for the individual risk model with B_k's generated from the gamma distribution. The exact premium (solid blue line) and the premium resulting from the normal approximation of the aggregate claims (dashed red line). Axes: risk aversion parameter c ×10⁻¹⁰ (x), exponential utility premium in USD billion (y). STFprem02.xpl

18.3 Collective Risk Model

We consider now an alternative model describing the total claim amount in a fixed period in a portfolio of insurance contracts. Let N denote the number of claims arising from policies in a given time period. Let X_1 denote the amount of the first claim, X_2 the amount of the second claim, and so on. In the collective risk model, the random sum

  S = X_1 + X_2 + . . . + X_N    (18.24)

represents the aggregate claims generated by the portfolio for the period under study. The number of claims N is a random variable and is associated with the frequency of claims. The individual claims X_1, X_2, . . . are also random variables and are said to measure the severity of claims. There are two fundamental assumptions that we will make in this section: X_1, X_2, . . . are identically distributed random variables, and the random variables N, X_1, X_2, . . . are mutually independent.

In Section 18.3.1 we present formulae for the considered premiums in the collective risk model. In Section 18.3.2 we apply the normal and translated gamma approximations to obtain closed-form formulae for premiums. Since a Poisson or a negative binomial distribution is often selected for the number of claims N, we discuss these cases in detail in Sections 18.3.3 and 18.3.4, respectively. Finally, we illustrate the behavior of the premiums with examples in Section 18.3.5.

18.3.1 General Premium Formulae

In order to find formulae for premiums based on the expected value of the total claim, let us assume that E(X), E(N), Var(X), and Var(N) exist. For the collective risk model, the expected value of aggregate claims is the product of the expected individual claim amount and the expected number of claims,

  E(S) = E(N) E(X),    (18.25)

while the variance of aggregate claims is the sum of two components, where the first is attributed to the variability of individual claim amounts and the other to the variability of the number of claims:

  Var(S) = E(N) Var(X) + {E(X)}² Var(N).    (18.26)

Thus it is easy to obtain the following premium formulae in the collective risk model:

• pure risk premium

  P = E(N) E(X),    (18.27)

• premium with safety loading

  P_{SL}(θ) = (1 + θ) E(N) E(X),  θ ≥ 0,    (18.28)

• premium with variance loading

  P_V(a) = E(N) E(X) + a [E(N) Var(X) + {E(X)}² Var(N)],  a ≥ 0,    (18.29)

• premium with standard deviation loading

  P_{SD}(b) = E(N) E(X) + b √[E(N) Var(X) + {E(X)}² Var(N)],  b ≥ 0.    (18.30)

If we assume that M_N(t) and M_X(t) exist, the moment generating function of S can be derived as

  M_S(t) = M_N{ln M_X(t)},    (18.31)

and thus the exponential premium is of the form

  P_E(c) = ln[M_N{ln M_X(c)}] / c,  c > 0.    (18.32)

It is often difficult to determine the distribution of the aggregate claims, and this fact causes problems with calculating the quantile premium given by

  P_Q(ε) = F_S^{-1}(1 − ε),  ε ∈ (0, 1).    (18.33)

Although the distribution function of S can be expressed by means of the distribution of N and the convolution of the claim amount distribution, this is too complicated for practical applications, see e.g. Klugman, Panjer, and Willmot (1998). Therefore, approximations for the distribution of the aggregate claims are usually considered.

18.3.2 Premiums in the Case of the Normal and Translated Gamma Approximations

In Section 18.2.2 the normal approximation was employed as an approximation for the distribution of aggregate claims in the individual risk model. This approach can also be used in the case of the collective model when the expected number of claims is large (Bowers et al., 1997; Daykin, Pentikainen, and Pesonen, 1994). The normal approximation simplifies the calculations. If the distribution of S can be approximated by a normal distribution with mean E(S) and variance Var(S), the quantile premium is given by the formula

  P_Q(ε) = E(N) E(X) + Φ^{-1}(1 − ε) √[E(N) Var(X) + {E(X)}² Var(N)],    (18.34)

where ε ∈ (0, 1) and Φ(·) denotes the standard normal distribution function. It is easy to notice that this premium is equal to the standard deviation-loaded premium with b = Φ^{-1}(1 − ε). Moreover, in the case of the normal approximation, it is possible to express the exponential premium as

  P_E(c) = E(N) E(X) + (c/2) [E(N) Var(X) + {E(X)}² Var(N)],  c > 0,    (18.35)

which is the same premium as the one resulting from the variance principle with a = c/2. Let us also mention that since the mean and variance in the case of the normal approximation are the same as for the distribution of S, the premiums based on the expected value are given by the general formulae presented in Section 18.3.1.

Unfortunately, the normal approximation is usually not sufficiently accurate. The disadvantage of this approximation lies in the fact that the skewness of the normal distribution is always zero, as it has a symmetric probability density function. Since the distribution of aggregate claims is often skewed, another approximation of the distribution of aggregate claims, one that accommodates skewness, is required. In this section we describe the translated gamma approximation. For more approaches and a discussion of their applicability see, for example, Daykin, Pentikainen, and Pesonen (1994).

The distribution function of the translated (shifted) gamma distribution is given by

  G_{tr}(x; α, β, x_0) = F(x − x_0; α, β),  x, α, β > 0,    (18.36)

where F(x; α, β) denotes the distribution function of the gamma distribution (described in Chapter 13) with parameters α and β:

  F(x; α, β) = ∫_0^x {β^α / Γ(α)} t^{α−1} e^{−βt} dt,  x, α, β > 0.    (18.37)

To apply the approximation, the parameters α, β, and x_0 have to be selected so that the first, second, and third central moments of S equal the corresponding moments of the translated gamma distribution. This procedure leads to the following result:

  α = 4 {Var(S)}³ / (E[{S − E(S)}³])²,    (18.38)

  β = 2 Var(S) / E[{S − E(S)}³],    (18.39)

  x_0 = E(S) − 2 {Var(S)}² / E[{S − E(S)}³].    (18.40)

In the case of the translated gamma distribution it is impossible to give a simple analytical formula for the quantile premium, so a numerical approximation must be used to find it. However, it is worth noticing that the exponential premium can be presented as

  P_E(c) = x_0 + (α/c) ln{β / (β − c)},  c > 0,    (18.41)

while the premiums given in terms of the expected value of the aggregate claims are the same as given in Section 18.3.1 (since the distribution of S is approximated by the translated gamma distribution with the same mean value and variance).
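The moment matching (18.38)–(18.40) and the resulting exponential premium (18.41) combine into a few lines. A sketch (function name is an illustrative assumption; note that (18.41) requires 0 < c < β):

```python
import math

def translated_gamma_premium(mean_S, var_S, third_central_S, c):
    """Fit the translated gamma approximation by moment matching,
    eqs. (18.38)-(18.40), and return the exponential premium (18.41)."""
    alpha = 4 * var_S**3 / third_central_S**2          # (18.38)
    beta = 2 * var_S / third_central_S                 # (18.39)
    x0 = mean_S - 2 * var_S**2 / third_central_S       # (18.40)
    if not 0 < c < beta:
        raise ValueError("exponential premium requires 0 < c < beta")
    return x0 + (alpha / c) * math.log(beta / (beta - c))   # (18.41)

# Check: moments of a plain gamma(shape=2, rate=1) distribution
# (mean 2, variance 2, third central moment 4) should give x0 = 0
# and premium (2/c) ln{1/(1-c)}.
p = translated_gamma_premium(2.0, 2.0, 4.0, c=0.5)
```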

18.3.3 Compound Poisson Distribution

In many applications, the number of claims N is assumed to follow the Poisson distribution, with probability function given by

  P(N = n) = λⁿ e^{−λ} / n!,  n = 0, 1, 2, . . . ,    (18.42)

where λ > 0. With this choice of the distribution of N, the distribution of S is called a compound Poisson distribution. The compound Poisson distribution has a number of useful properties. The formulae for the exponential premium and for the premiums based on the expectation of the aggregate claims simplify because E(N) = Var(N) = λ and M_N(t) = exp{λ(e^t − 1)}. Moreover, for large λ, the compound Poisson distribution can be approximated by a normal distribution with mean λ E(X) and variance λ E(X²); the quantile premium is then given by

  P_Q(ε) = λ E(X) + Φ^{-1}(1 − ε) √{λ E(X²)},  ε ∈ (0, 1),    (18.43)

and the exponential premium is of the form

  P_E(c) = λ E(X) + (c/2) λ E(X²),  c > 0.    (18.44)

If the first three central moments of the individual claim distribution exist, the compound Poisson distribution can be approximated by the translated gamma distribution with the following parameters:

  α = 4λ {E(X²)}³ / {E(X³)}²,    (18.45)

  β = 2 E(X²) / E(X³),    (18.46)

  x_0 = λ E(X) − 2λ {E(X²)}² / E(X³).    (18.47)

Substituting these parameters in (18.41), one can obtain the formula for the exponential premium. It is worth mentioning that the compound Poisson distribution has many attractive features (Bowers et al., 1997; Panjer and Willmot, 1992); for example, the combination of a number of portfolios, each of which has a compound Poisson distribution of aggregate claims, also has a compound Poisson distribution of aggregate claims. Moreover, this distribution can be used to approximate the distribution of total claims in the individual model. Although the compound Poisson distribution is normally appropriate in life insurance modeling, it sometimes does not provide an adequate fit to insurance data in other coverages (Willmot, 2001).
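A sketch combining (18.43)–(18.44) with the translated gamma route (18.45)–(18.47) substituted into (18.41); the helper name and the exponential-claims moments in the usage line are illustrative assumptions:

```python
import math
from statistics import NormalDist

def compound_poisson_premiums(lam, ex, ex2, ex3, eps=0.05, c=1e-10):
    """Quantile (18.43) and exponential (18.44) premiums under the normal
    approximation, plus the translated-gamma exponential premium via
    (18.45)-(18.47) substituted into (18.41). Requires 0 < c < beta."""
    z = NormalDist().inv_cdf(1 - eps)
    p_quantile = lam * ex + z * math.sqrt(lam * ex2)            # (18.43)
    p_exp_normal = lam * ex + 0.5 * c * lam * ex2               # (18.44)
    alpha = 4 * lam * ex2**3 / ex3**2                           # (18.45)
    beta = 2 * ex2 / ex3                                        # (18.46)
    x0 = lam * ex - 2 * lam * ex2**2 / ex3                      # (18.47)
    p_exp_tgamma = x0 + (alpha / c) * math.log(beta / (beta - c))  # (18.41)
    return p_quantile, p_exp_normal, p_exp_tgamma

# Illustrative: lambda = 10, unit-mean exponential claims
# (E(X)=1, E(X^2)=2, E(X^3)=6), small risk aversion c.
pq_cp, pe_norm, pe_tg = compound_poisson_premiums(10, 1.0, 2.0, 6.0,
                                                  eps=0.05, c=0.01)
```

For small c the two exponential premiums nearly coincide, consistent with both approximations matching the mean and variance of S.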

18.3.4 Compound Negative Binomial Distribution

When the variance of the number of claims exceeds its mean, the Poisson distribution is not appropriate. In this situation the negative binomial distribution, with probability function given by

  P(N = n) = C(r + n − 1, n) p^r q^n,  n = 0, 1, 2, . . . ,    (18.48)

where C(·, ·) denotes the binomial coefficient, r > 0, 0 < p < 1, and q = 1 − p, is suggested instead. In many cases it provides a significantly improved fit compared to the Poisson distribution. When a negative binomial distribution is selected for N, the distribution of S is called a compound negative binomial distribution. Since for the negative binomial distribution we have

  E(N) = rq/p,  Var(N) = rq/p²,    (18.49)

and

  M_N(t) = {p / (1 − q e^t)}^r,    (18.50)

the formulae for the exponential premium and for the premiums based on the expectation of the aggregate claims simplify. For large r, the compound negative binomial distribution can be approximated by a normal distribution with mean (rq/p) E(X) and variance (rq/p) Var(X) + (rq/p²) {E(X)}². In this case the quantile premium is given by

  P_Q(ε) = (rq/p) E(X) + Φ^{-1}(1 − ε) √[(rq/p) Var(X) + (rq/p²) {E(X)}²],  ε ∈ (0, 1),    (18.51)

and the exponential premium is of the form

  P_E(c) = (rq/p) E(X) + (c/2) [(rq/p) Var(X) + (rq/p²) {E(X)}²],  c > 0.    (18.52)

It is worth mentioning that the negative binomial distribution arises as a mixed Poisson variate. More precisely, various distributions for the number of claims can be generated by assuming that the Poisson parameter Λ is a random variable with probability density function u(λ), λ > 0, and that the conditional distribution of N, given Λ = λ, is Poisson with parameter λ. In such a case the distribution of S is called a compound mixed Poisson distribution, see also Chapter 14. This choice might be useful, for example, when we consider a population of insureds in which various classes generate numbers of claims according to the Poisson distribution, but the Poisson parameter may differ across classes. The negative binomial distribution is obtained in this fashion when u(λ) is the gamma probability density function.
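The normal-approximation premiums (18.51)–(18.52) can be sketched analogously to the compound Poisson case (the helper name and the usage-line parameters are illustrative assumptions):

```python
import math
from statistics import NormalDist

def compound_negbin_premiums(r, p, ex, varx, eps=0.05, c=1e-10):
    """Quantile (18.51) and exponential (18.52) premiums under the normal
    approximation of the compound negative binomial model."""
    q = 1 - p
    mean_S = (r * q / p) * ex                                   # E(N) E(X)
    var_S = (r * q / p) * varx + (r * q / p**2) * ex**2         # eq. (18.26)
    z = NormalDist().inv_cdf(1 - eps)
    return mean_S + z * math.sqrt(var_S), mean_S + 0.5 * c * var_S

# Illustrative: r = 10, p = 0.5, degenerate unit claims (E(X)=1, Var(X)=0),
# so that E(N) = 10 and Var(N) = 20 > E(N), i.e. overdispersed counts.
pq_nb, pe_nb = compound_negbin_premiums(10, 0.5, 1.0, 0.0, eps=0.05, c=0.1)
```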

18.3.5 Examples

Quantile premium for the collective risk model with log-normal claim distribution. As the number of policies sold by the insurance company grows, the actuary has decided to try to fit a collective risk model to the portfolio. The log-normal distribution with parameters µ = 18.3806 and σ = 1.1052 (again estimated from the real-life data describing losses resulting from catastrophic events in the USA, see Chapter 13) is chosen to describe the amount of claims. The number of claims is assumed to be Poisson distributed with parameter λ = 34.2. Moreover, the claim amounts and the number of claims are believed to be independent.

The actuary wants to compare the behavior of the quantile premium for the whole portfolio of policies given by the general formula (18.33) with the one obtained in the case of the translated gamma approximation. Figure 18.3 illustrates how the premium based on the translated gamma approximation (dashed red line) fits the premium determined by the exact compound Poisson distribution (solid blue line). The premium for the original compound distribution has to be determined by numerical simulations, which is why the line is jagged; better smoothness can be achieved by performing a larger number of Monte Carlo simulations (here we again performed 10000 simulations). The actuary notices that the approximation fits better for larger values of ε and worse for smaller ones. In fact, the distribution functions of the original distribution and of its translated gamma approximation lie close to each other, but both are increasing and tend to one at infinity. This explains why the quantile premiums – understood as inverse functions of the distribution functions – differ so much for ε close to zero.

Figure 18.3: Quantile premium for the log-normal claim distribution and its translated gamma approximation in the collective risk model. The exact premium (solid blue line) and the premium in the case of the approximation (dashed red line) are plotted. Axes: quantile parameter ε (x), quantile premium in USD billion (y). STFprem03.xpl

Exponential premium for the collective risk model with gamma claim distribution. The actuary considers again the collective risk model in which the number of claims is described by the Poisson distribution with parameter λ = 34.2, i.e. the compound Poisson model. This time, however, the claims are described by the gamma distribution with parameters α = 0.9185 and β = 5.6870 · 10⁻⁹ (based on the same catastrophic data as in the previous example). The actuary now considers the exponential premium for the aggregate claims in this model. The exponential premium in the case of the translated gamma approximation (dashed red line) and the exact premium (solid blue line) are plotted in Figure 18.4. Both premiums – for the original and for the approximating distribution – are calculated analytically, since the computations are straightforward in this case. Both functions increase with the risk aversion parameter. We see that the translated gamma approximation can be a useful and precise tool for calculating premiums in the collective risk model.

Figure 18.4: Exponential premium for the gamma claim distribution in the collective risk model. The exact premium (solid blue line) and the translated gamma approximation premium (dashed red line) are plotted. Axes: risk aversion parameter c ×10⁻⁹ (x), exponential utility premium in USD billion (y). STFprem04.xpl
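The simulation-based exact quantile premium behind Figure 18.3 can be sketched in the same way as in the individual model, now drawing the number of claims from a Poisson distribution (seed and function name are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_poisson_quantile(lam=34.2, mu=18.3806, sigma=1.1052,
                              eps=0.05, n_sim=10_000):
    """Monte Carlo estimate of the exact quantile premium (18.33) for the
    compound Poisson model with log-normal claim severities."""
    counts = rng.poisson(lam, size=n_sim)
    totals = np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])
    return np.quantile(totals, 1 - eps)

est_cp = compound_poisson_quantile()
```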

Bibliography

Bowers, N. L., Gerber, H. U., Hickman, J. C., Jones, D. A. and Nesbitt, C. J. (1997). Actuarial Mathematics, 2nd edition, The Society of Actuaries, Schaumburg.

Daykin, C. D., Pentikainen, T. and Pesonen, M. (1994). Practical Risk Theory for Actuaries, Chapman & Hall, London.

Embrechts, P., Furrer, H. and Kaufmann, R. (2003). Quantifying regulatory capital for operational risk, Trading & Regulation 9(3): 217–233.

Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Gerber, H. U. (1980). An Introduction to Mathematical Risk Theory, Huebner, Philadelphia.

Klugman, S. A., Panjer, H. H. and Willmot, G. E. (1998). Loss Models: From Data to Decisions, Wiley, New York.

Panjer, H. H. and Willmot, G. E. (1992). Insurance Risk Models, Society of Actuaries, Schaumburg.

Straub, E. (1988). Non-Life Insurance Mathematics, Springer, Berlin.

Willmot, G. E. (2001). The nature of modelling insurance losses, Inaugural Lecture, Munich Reinsurance, Toronto.

Young, V. R. (2004). Premium calculation principles, to appear in Encyclopedia of Actuarial Science, J. L. Teugels and B. Sundt, eds., Wiley, Chichester.

19 Pure Risk Premiums under Deductibles

Krzysztof Burnecki, Joanna Nowicka-Zagrajek, and Agnieszka Wylomańska

19.1 Introduction

It is a common practice in most insurance lines for the coverage to be restricted by a deductible. For example, deductibles are often incorporated in motor, health, disability, life, and business insurance. The main idea of a deductible is, firstly, to reduce claim handling costs by excluding coverage for the often numerous small claims and, secondly, to provide some motivation to the insured to prevent claims through a limited degree of participation in claim costs (Daykin, Pentikainen, and Pesonen, 1994; Sundt, 1994; Klugman, Panjer, and Willmot, 1998). We mention the following properties of a deductible:

(i) loss prevention – as the compensation is reduced by a deductible, the retention of the insured is positive, which makes a good case for avoiding the loss;

(ii) loss reduction – the fact that a deductible puts the policyholder at risk of obtaining only partial compensation provides an economic incentive to reduce the extent of the damage;

(iii) avoidance of small claims where administration costs are dominant – for small losses, the administration costs will often exceed the loss itself, and hence the insurance company would want the policyholder to pay it himself;

(iv) premium reduction – premium reduction can be an important aspect for policyholders; they may prefer to take a higher deductible to get a lower premium.


There are two types of deductibles: an annual deductible and a per occurrence deductible, the latter being more common. We quote now an example from the American market. Blue Shield of California, an independent member of the Blue Shield Association, is California's second largest not-for-profit health care company, with 2 million members and USD 3 billion annual revenue. Blue Shield of California offers four Preferred Provider Organization (PPO) plans, each offering similar levels of benefits with a different deductible option: USD 500, 750, 1500, and 2000, respectively. For example, the Blue Shield USD 500 Deductible PPO Plan has a USD 500 annual deductible for most covered expenses. This is just the case of the fixed amount deductible, which is exploited in Section 19.2.2. The annual deductible does not apply to office visits or prescription medications. Office visits and most lab and x-ray services are provided at a USD 30 copayment. This is also the case of the fixed amount deductible. For other covered services, after the annual deductible has been met, you pay 25% up to an annual maximum of USD 3500. This is a case of the limited proportional deductible, which is examined in Section 19.2.4.

In Section 19.2 we present formulae for pure risk premiums under franchise, fixed amount, proportional, limited proportional, and disappearing deductibles in terms of the limited expected value function (levf), which was introduced and exploited in Chapter 13. Using the specific form of the levf for different loss distributions, we present in Section 19.3 formulae for pure risk premiums under the deductibles for the log-normal, Pareto, Burr, Weibull, gamma, and mixture of two exponential distributions. The formulae can be used to obtain annual pure risk premiums under the deductibles in the individual and collective risk model framework analysed in Chapter 18. We illustrate graphically the influence of the parameters of the discussed deductibles on the premiums considering the Danish fire loss example, which was studied in Chapter 13. This gives an insight into the important issue of choosing an optimal deductible and its level for a potential insured, and of a proper pricing of the accepted risk for an insurer.

19.2 General Formulae for Premiums Under Deductibles

Let X denote a non-negative continuous random variable describing the size of a claim (risk, loss), F(t) and f(t) its distribution and probability density functions, respectively, and h(x) the payment function corresponding to a deductible. We consider here the simplest premium, which is called the pure risk premium, see Chapter 18. The pure risk premium P (as we consider only the pure risk premium, we will henceforth use the term premium to mean pure risk premium) is equal to the expectation, i.e.

  P = E(X),    (19.1)

and we assume that the expected value E(X) exists.

In the case of no deductible the payment function is obviously of the form h(x) = x. This means that if the loss is equal to x, the insurer pays the whole claim amount and P = E(X). We express formulae for premiums under deductibles in terms of the so-called limited expected value function (levf), namely

  L(x) = E{min(X, x)} = ∫_0^x y f(y) dy + x {1 − F(x)},  x > 0.    (19.2)
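The levf can be evaluated numerically straight from its definition (19.2). A sketch assuming SciPy's `quad` (the function name is an illustrative assumption):

```python
import math
from scipy.integrate import quad

def levf(pdf, x):
    """Numerical limited expected value function, eq. (19.2):
    L(x) = integral_0^x y f(y) dy + x {1 - F(x)}, for a density on (0, inf)."""
    integral, _ = quad(lambda y: y * pdf(y), 0, x)   # ∫_0^x y f(y) dy
    Fx, _ = quad(pdf, 0, x)                          # F(x)
    return integral + x * (1 - Fx)

# Sanity check against a known case: for Exp(1) losses, L(x) = 1 - e^{-x}.
L2 = levf(lambda y: math.exp(-y), 2.0)
```

For the distributions treated in Chapter 13 the levf has closed forms, which should be preferred when available.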

19.2.1 Franchise Deductible

One of the deductibles that can be incorporated in the contract is the so-called franchise deductible. In this case the insurer pays the whole claim if the agreed deductible amount is exceeded. More precisely, under the franchise deductible of a, if the loss is less than a the insurer pays nothing, but if the loss equals or exceeds a, the claim is paid in full. This means that the payment function can be described as (Figure 19.1)

  h_{FD(a)}(x) = 0 for x < a, and h_{FD(a)}(x) = x otherwise.    (19.3)

Figure 19.1: The payment function under the franchise deductible (solid blue line) and no deductible (dashed red line); the jump occurs at the deductible level a. STFded01.xpl

It is worth noticing that the franchise deductible satisfies properties (i), (iii), and (iv), but not property (ii). This deductible can even work against property (ii), since if a loss occurs, the policyholder would prefer it to be greater than or equal to the deductible. The pure risk premium under the franchise deductible can be expressed in terms of the premium in the case of no deductible and the corresponding limited expected value function:

  P_{FD(a)} = P − L(a) + a {1 − F(a)}.    (19.4)

It can be easily noticed that this premium is a decreasing function of a. When a = 0 the premium is equal to the no-deductible case, and as a tends to infinity the premium tends to zero.

Figure 19.2: The payment function under the fixed amount deductible (solid blue line) and no deductible (dashed red line). STFded02.xpl

19.2.2 Fixed Amount Deductible

An agreement between the insured and the insurer incorporating a deductible b means that the insurer pays only the part of the claim which exceeds the amount b. If the size of the claim falls below this amount, the claim is not covered by the contract and the insured receives no indemnification. The payment function is thus given by

  h_{FAD(b)}(x) = max(0, x − b),    (19.5)

see Figure 19.2. The fixed amount deductible satisfies all the properties (i)–(iv).

The premium in the case of the fixed amount deductible has the following form in terms of the premium under the franchise deductible:

  P_{FAD(b)} = P − L(b) = P_{FD(b)} − b {1 − F(b)}.    (19.6)

As previously, this premium is a decreasing function of b; for b = 0 it gives the premium in the case of no deductible, and as b tends to infinity it tends to zero.
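For log-normal losses the levf has the standard closed form L(d) = e^{µ+σ²/2} Φ((ln d − µ − σ²)/σ) + d {1 − Φ((ln d − µ)/σ)} (cf. Chapter 13), so premiums (19.4) and (19.6) can be computed directly. A sketch (helper names are illustrative assumptions):

```python
import math

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def premiums_under_deductibles(mu, sigma, d):
    """Franchise (19.4) and fixed amount (19.6) premiums for log-normal losses."""
    P = math.exp(mu + 0.5 * sigma**2)            # E(X), premium with no deductible
    z = (math.log(d) - mu) / sigma
    L = P * Phi(z - sigma) + d * (1 - Phi(z))    # levf L(d), closed form
    p_franchise = P - L + d * (1 - Phi(z))       # (19.4), with F(d) = Phi(z)
    p_fixed = P - L                              # (19.6)
    return p_franchise, p_fixed

# Illustrative parameters: standard log-normal losses, deductible at 1.
p_fr, p_fa = premiums_under_deductibles(0.0, 1.0, 1.0)
```

As (19.6) states, the two premiums differ by exactly d {1 − F(d)}, so the franchise premium is always the larger of the two.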

19.2.3 Proportional Deductible

In the case of the proportional deductible with c ∈ (0, 1), each payment is reduced by c · 100% (the insurer pays 100%(1 − c) of the claim). Consequently, the payment function is given by (Figure 19.3)

  h_{PD(c)}(x) = (1 − c) x.    (19.7)

The proportional deductible satisfies properties (i), (ii), and (iv), but not property (iii), as it implies some compensation even for very small claims. The relation between the premium under the proportional deductible and the premium in the case of no deductible has the following form:

  P_{PD(c)} = (1 − c) E(X) = (1 − c) P.    (19.8)

Clearly, the premium is a decreasing function of c, with P_{PD(0)} = P and P_{PD(1)} = 0.

19.2.4 Limited Proportional Deductible

The proportional deductible is usually combined with a minimum amount deductible, so that the insurer does not need to handle small claims, and with a maximum amount deductible, to limit the retention of the insured. For the limited proportional deductible of c with a minimum amount m_1 and maximum amount m_2 (0 ≤ m_1 < m_2) the payment function is given by

  h_{LPD(c,m_1,m_2)}(x) =
    0,            x ≤ m_1,
    x − m_1,      m_1 < x ≤ m_1/c,
    (1 − c) x,    m_1/c < x ≤ m_2/c,
    x − m_2,      otherwise,    (19.9)

see Figure 19.4. The limited proportional deductible satisfies all the properties (i)–(iv).

19.2 General Formulae for Premiums Under Deductibles

Figure 19.3: The payment function under the proportional deductible (solid blue line) and no deductible (dashed red line). STFded03.xpl

The following formula expresses the premium under the limited proportional deductible in terms of the premium in the case of no deductible and the corresponding limited expected value function:

P_{LPD(c,m1,m2)} = P − L(m1) + c {L(m1/c) − L(m2/c)}.    (19.10)

Sometimes only one limitation is incorporated in the contract, i.e. m1 = 0 or m2 = ∞. It is easy to check that the limited proportional deductible with m1 = 0 and m2 = ∞ reduces to the proportional deductible.


Figure 19.4: The payment function under the limited proportional deductible (solid blue line) and no deductible (dashed red line). STFded04.xpl

19.2.5 Disappearing Deductible

There is another type of deductible that is a compromise between the franchise and the fixed amount deductible. In the case of the disappearing deductible the payment depends on the loss in the following way: if the loss is less than the amount d1 > 0, the insurer pays nothing; if the loss exceeds the amount d2 (d2 > d1), the insurer pays the loss in full; if the loss is between d1 and d2, the deductible is reduced linearly between d1 and d2. Therefore, the larger the claim, the smaller the part of the deductible that remains the responsibility of the policyholder.


Figure 19.5: The payment function under the disappearing deductible (solid blue line) and no deductible (dashed red line). STFded05.xpl

The payment function is given by (Figure 19.5)

h_{DD(d1,d2)}(x) =
  0,                        x ≤ d1,
  d2(x − d1)/(d2 − d1),     d1 < x ≤ d2,    (19.11)
  x,                        otherwise.

This kind of deductible satisfies properties (i), (iii), and (iv), but similarly to the franchise deductible it works against (ii). The following formula shows the premium under the disappearing deductible in terms of the premium in the case of no deductible and the corresponding limited expected value function:

P_{DD(d1,d2)} = P + (d1/(d2 − d1)) L(d2) − (d2/(d2 − d1)) L(d1).    (19.12)

If d1 = 0, the premium does not depend on d2 and it becomes the premium in the case of no deductible. If d2 tends to infinity, the disappearing deductible reduces to the fixed amount deductible of d1.
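The relations above are easy to check numerically. The following Python sketch (our own illustration; the function names are ours, not from the book) implements the payment functions (19.5), (19.7), (19.9), and (19.11) and verifies (19.6), (19.10), and (19.12) for an exponential loss distribution, for which L(t) = (1 − e^{−βt})/β is available in closed form.

```python
import math

# Payment functions of Section 19.2 (illustrative helper names).

def h_fixed(x, b):              # fixed amount deductible, Eq. (19.5)
    return max(0.0, x - b)

def h_franchise(x, a):          # franchise: pays the full claim once it exceeds a
    return x if x > a else 0.0

def h_prop(x, c):               # proportional deductible, Eq. (19.7)
    return (1.0 - c) * x

def h_lim_prop(x, c, m1, m2):   # limited proportional deductible, Eq. (19.9)
    if x <= m1:
        return 0.0
    if x <= m1 / c:
        return x - m1
    if x <= m2 / c:
        return (1.0 - c) * x
    return x - m2

def h_disappearing(x, d1, d2):  # disappearing deductible, Eq. (19.11)
    if x <= d1:
        return 0.0
    if x <= d2:
        return d2 * (x - d1) / (d2 - d1)
    return x

def premium(h, pdf, upper, n=100_000):
    """Pure premium E h(X), midpoint rule on (0, upper)."""
    dx = upper / n
    return sum(h((i + 0.5) * dx) * pdf((i + 0.5) * dx) for i in range(n)) * dx

# Exponential loss with rate beta: P = 1/beta and L(t) = (1 - e^{-beta t})/beta.
beta = 1.0
pdf = lambda x: beta * math.exp(-beta * x)
L = lambda t: (1 - math.exp(-beta * t)) / beta
P = 1 / beta

# Eq. (19.6): P_FAD(b) = P - L(b)
b = 0.5
assert abs(premium(lambda x: h_fixed(x, b), pdf, 50) - (P - L(b))) < 1e-4

# Eq. (19.10): P_LPD = P - L(m1) + c {L(m1/c) - L(m2/c)}
c, m1, m2 = 0.25, 0.2, 2.0
assert abs(premium(lambda x: h_lim_prop(x, c, m1, m2), pdf, 50)
           - (P - L(m1) + c * (L(m1 / c) - L(m2 / c)))) < 1e-4

# Eq. (19.12): P_DD = P + d1/(d2-d1) L(d2) - d2/(d2-d1) L(d1)
d1, d2 = 0.2, 1.0
assert abs(premium(lambda x: h_disappearing(x, d1, d2), pdf, 50)
           - (P + d1 / (d2 - d1) * L(d2) - d2 / (d2 - d1) * L(d1))) < 1e-4
```

The same check works for any loss distribution once its limited expected value function is known.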

19.3 Premiums Under Deductibles for Given Loss Distributions

In the preceding section we showed a relation between the pure risk premium under several deductibles and the limited expected value function. Now, we use this relation to present formulae for premiums under the deductibles for a number of loss distributions often used in non-life actuarial practice, see Burnecki, Nowicka-Zagrajek, and Weron (2004). To this end we apply the formulae for the levf for different distributions given in Chapter 13. The log-normal, Pareto, Burr, Weibull, gamma, and mixture of two exponential distributions are typical candidates when looking for a suitable analytic distribution which fits the observed data well, see Aebi, Embrechts, and Mikosch (1992), Burnecki, Kukla, and Weron (2000), Embrechts, Klüppelberg, and Mikosch (1997), Mikosch (1997), Panjer and Willmot (1992), and Chapter 13. In the log-normal and Burr case the premium formulae will be illustrated on a real-life example, namely on the fire loss data already analysed in Chapter 13.

For illustrative purposes, we assume that the total amount of risk X simply follows one of the fitted distributions, whereas in practice, in the individual and collective risk model framework (see Chapter 18), in order to obtain an annual premium under a per occurrence deductible we would have to multiply the premium by the number of policies and the mean number of losses per year, respectively, since in the individual risk model

E{ Σ_{k=1}^{n} h(X_k) } = n E{h(X_k)},

provided that the claim amount variables are identically distributed, and in the collective risk model

E{ Σ_{k=1}^{N} h(X_k) } = E(N) E{h(X_k)}.

19.3.1 Log-normal Loss Distribution

Consider a random variable Z which has the normal distribution and let X = e^Z. The distribution of X is called the log-normal distribution and its distribution function is given by

F(t) = Φ((ln t − μ)/σ) = ∫_0^t (1/(√(2π) σ y)) exp{−(1/2) ((ln y − μ)/σ)²} dy,

where t, σ > 0, μ ∈ R, and Φ(·) is the standard normal distribution function, see Chapter 13. For the log-normal distribution the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = exp(μ + σ²/2) {1 − Φ((ln a − μ − σ²)/σ)},

(b) fixed amount deductible premium

P_{FAD(b)} = exp(μ + σ²/2) {1 − Φ((ln b − μ − σ²)/σ)} − b {1 − Φ((ln b − μ)/σ)},

(c) proportional deductible premium

P_{PD(c)} = (1 − c) exp(μ + σ²/2),

(d) limited proportional deductible premium

P_{LPD(c,m1,m2)} = exp(μ + σ²/2) {1 − Φ((ln m1 − μ − σ²)/σ)}
  + m1 {Φ((ln m1 − μ)/σ) − Φ((ln(m1/c) − μ)/σ)}
  + c exp(μ + σ²/2) {Φ((ln(m1/c) − μ − σ²)/σ) − Φ((ln(m2/c) − μ − σ²)/σ)}
  + m2 {Φ((ln(m2/c) − μ)/σ) − 1},

(e) disappearing deductible premium

P_{DD(d1,d2)} = (exp(μ + σ²/2)/(d2 − d1)) {d2 − d1 + d1 Φ((ln d2 − μ − σ²)/σ) − d2 Φ((ln d1 − μ − σ²)/σ)}
  + (d1 d2/(d2 − d1)) {Φ((ln d1 − μ)/σ) − Φ((ln d2 − μ)/σ)}.

We now illustrate the above formulae using the Danish fire loss data. We study the log-normal loss distribution with parameters μ = 12.6645 and σ = 1.3981, which best fitted the data. Figure 19.6 depicts the premium under the franchise and fixed amount deductibles in the log-normal case. Figure 19.7 shows the effect of the parameters c, m1, and m2 of the limited proportional deductible. Clearly, P_{LPD(c,m1,m2)} is a decreasing function of these parameters. Finally, Figure 19.8 depicts the influence of the parameters d1 and d2 of the disappearing deductible. Markedly, P_{DD(d1,d2)} is a decreasing function of the parameters and we can observe that the effect of increasing d2 is rather minor.
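Formulas (a) and (b) are easy to evaluate with the standard normal cdf. The sketch below (our own illustration, with hypothetical function names) uses the parameters fitted to the Danish fire loss data and checks relation (19.6), P_FAD(b) = P_FD(b) − b{1 − F(b)}.

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal distribution function

def lognormal_P_FD(a, mu, sigma):
    """Franchise deductible premium, formula (a)."""
    return math.exp(mu + sigma ** 2 / 2) * (1 - Phi((math.log(a) - mu - sigma ** 2) / sigma))

def lognormal_P_FAD(b, mu, sigma):
    """Fixed amount deductible premium, formula (b)."""
    return (math.exp(mu + sigma ** 2 / 2) * (1 - Phi((math.log(b) - mu - sigma ** 2) / sigma))
            - b * (1 - Phi((math.log(b) - mu) / sigma)))

mu, sigma = 12.6645, 1.3981        # parameters fitted to the Danish fire loss data
b = 1e6                            # deductible of DKK 1 million
F_b = Phi((math.log(b) - mu) / sigma)

# relation (19.6): P_FAD(b) = P_FD(b) - b {1 - F(b)}
assert abs(lognormal_P_FAD(b, mu, sigma)
           - (lognormal_P_FD(b, mu, sigma) - b * (1 - F_b))) < 1e-6
# the premium decreases with the deductible level, as in Figure 19.6
assert lognormal_P_FAD(2e6, mu, sigma) < lognormal_P_FAD(1e6, mu, sigma)
```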

19.3.2 Pareto Loss Distribution

The Pareto distribution function is defined by

F(t) = 1 − (λ/(λ + t))^α,

where t, α, λ > 0, see Chapter 13. The expectation of the Pareto distribution exists only for α > 1. For the Pareto distribution with α > 1 the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = ((aα + λ)/(α − 1)) (λ/(a + λ))^α,


Figure 19.6: The premium under the franchise deductible (thick blue line) and ﬁxed amount deductible (thin red line). The log-normal case. STFded06.xpl

(b) fixed amount deductible premium

P_{FAD(b)} = ((b + λ)/(α − 1)) (λ/(b + λ))^α,

(c) proportional deductible premium

P_{PD(c)} = (1 − c) λ/(α − 1),


Figure 19.7: The premium under the limited proportional deductible with respect to the parameter m2 . The thick blue solid line represents the premium for c = 0.2 and m1 = 100 000 DKK, the thin blue solid line for c = 0.4 and m1 = 100 000 DKK, the dashed red line for c = 0.2 and m1 = 1 million DKK, and the dotted red line for c = 0.4 and m1 = 1 million DKK. The log-normal case. STFded07.xpl

(d) limited proportional deductible premium

P_{LPD(c,m1,m2)} = ((m1 + λ)/(α − 1)) (λ/(m1 + λ))^α
  + (c/(α − 1)) {(m2/c + λ) (λ/(m2/c + λ))^α − (m1/c + λ) (λ/(m1/c + λ))^α},


Figure 19.8: The premium under the disappearing deductible with respect to the parameter d2 . The thick blue line represents the premium for d1 = 100 000 DKK and the thin red line the premium for d1 = 500 000 DKK. The log-normal case. STFded08.xpl

(e) disappearing deductible premium

P_{DD(d1,d2)} = (1/((α − 1)(d2 − d1))) {d2 (d1 + λ) (λ/(d1 + λ))^α − d1 (d2 + λ) (λ/(d2 + λ))^α}.
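As a quick sanity check (our own sketch, not part of the book), formula (b) can be verified against numerical integration of the survival function, since P_FAD(b) = E(X − b)_+ = ∫_b^∞ {1 − F(t)} dt.

```python
import math

def pareto_P_FAD(b, alpha, lam):
    """Formula (b): P_FAD(b) = (b + λ)/(α − 1) · (λ/(b + λ))^α, valid for α > 1."""
    return (b + lam) / (alpha - 1) * (lam / (b + lam)) ** alpha

def tail_integral(b, alpha, lam, upper=1e3, n=100_000):
    """E(X − b)_+ = ∫_b^∞ (λ/(λ + t))^α dt: midpoint rule up to `upper`,
    plus the exact closed-form remainder of the truncated tail."""
    dt = (upper - b) / n
    s = sum((lam / (lam + b + (i + 0.5) * dt)) ** alpha for i in range(n)) * dt
    s += lam ** alpha * (lam + upper) ** (1 - alpha) / (alpha - 1)
    return s

alpha, lam = 2.5, 3.0
assert abs(pareto_P_FAD(0.0, alpha, lam) - lam / (alpha - 1)) < 1e-12  # b = 0 recovers P
assert abs(pareto_P_FAD(1.0, alpha, lam) - tail_integral(1.0, alpha, lam)) < 1e-4
```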

19.3.3 Burr Loss Distribution

Experience has shown that the Pareto formula is often an appropriate model for the claim size distribution, particularly where exceptionally large claims may occur. However, there is sometimes a need to find heavy-tailed distributions which offer greater flexibility than the Pareto law. Such flexibility is provided by the Burr distribution, whose distribution function is given by

F(t) = 1 − (λ/(λ + t^τ))^α,

where t, α, λ, τ > 0, see Chapter 13. Its mean exists only for ατ > 1. For the Burr distribution with ατ > 1 the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = (λ^{1/τ} Γ(α − 1/τ) Γ(1 + 1/τ)/Γ(α)) {1 − B(1 + 1/τ, α − 1/τ, a^τ/(λ + a^τ))},

(b) fixed amount deductible premium

P_{FAD(b)} = (λ^{1/τ} Γ(α − 1/τ) Γ(1 + 1/τ)/Γ(α)) {1 − B(1 + 1/τ, α − 1/τ, b^τ/(λ + b^τ))} − b (λ/(λ + b^τ))^α,

(c) proportional deductible premium

P_{PD(c)} = (1 − c) λ^{1/τ} Γ(α − 1/τ) Γ(1 + 1/τ)/Γ(α),


(d) limited proportional deductible premium

P_{LPD(c,m1,m2)} = (λ^{1/τ} Γ(α − 1/τ) Γ(1 + 1/τ)/Γ(α)) {1 − B(1 + 1/τ, α − 1/τ, m1^τ/(λ + m1^τ))
  + c B(1 + 1/τ, α − 1/τ, (m1/c)^τ/(λ + (m1/c)^τ)) − c B(1 + 1/τ, α − 1/τ, (m2/c)^τ/(λ + (m2/c)^τ))}
  + m1 {(λ/(λ + (m1/c)^τ))^α − (λ/(λ + m1^τ))^α} − m2 (λ/(λ + (m2/c)^τ))^α,

(e) disappearing deductible premium

P_{DD(d1,d2)} = (λ^{1/τ} Γ(α − 1/τ) Γ(1 + 1/τ)/Γ(α)) · {d2 − d1 + d1 B(1 + 1/τ, α − 1/τ, d2^τ/(λ + d2^τ)) − d2 B(1 + 1/τ, α − 1/τ, d1^τ/(λ + d1^τ))}/(d2 − d1)
  + (d1 d2/(d2 − d1)) {(λ/(λ + d2^τ))^α − (λ/(λ + d1^τ))^α},

where the functions Γ(·) and B(·, ·, ·) are defined as

Γ(a) = ∫_0^∞ y^{a−1} e^{−y} dy   and   B(a, b, x) = (Γ(a + b)/(Γ(a)Γ(b))) ∫_0^x y^{a−1} (1 − y)^{b−1} dy.
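The gamma functions involved are available in the Python standard library, which makes the mean (and hence the proportional deductible premium (c)) easy to evaluate. The sketch below (our own illustration) checks the closed-form mean against numerical integration of the survival function, and evaluates it for the Burr fit reported below; the numerical parameter choices are ours.

```python
import math

def burr_mean(alpha, lam, tau):
    """E X = λ^{1/τ} Γ(α − 1/τ) Γ(1 + 1/τ) / Γ(α); requires ατ > 1."""
    return (lam ** (1 / tau) * math.gamma(alpha - 1 / tau)
            * math.gamma(1 + 1 / tau) / math.gamma(alpha))

def burr_mean_numeric(alpha, lam, tau, upper=2000.0, n=200_000):
    """E X = ∫_0^∞ (λ/(λ + t^τ))^α dt (integral of the survival function)."""
    dt = upper / n
    return sum((lam / (lam + ((i + 0.5) * dt) ** tau)) ** alpha for i in range(n)) * dt

alpha, lam, tau = 2.0, 4.0, 1.5       # illustrative values with ατ = 3 > 1
P = burr_mean(alpha, lam, tau)
assert abs(P - burr_mean_numeric(alpha, lam, tau)) < 1e-3

# proportional deductible, formula (c): P_PD(c) = (1 − c) P
P_PD = (1 - 0.25) * P

# mean for the Burr fit to the fire loss data (α = 0.8804, λ = 8.4202e6, τ = 1.2749):
# ατ ≈ 1.12 > 1, so the pure premium is finite (roughly DKK 2.3 million)
assert 1.5e6 < burr_mean(0.8804, 8.4202e6, 1.2749) < 3.5e6
```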

In order to illustrate the preceding formulae we consider the fire loss data analysed in Chapter 13. The analysis showed that the losses can be well modelled by the Burr distribution with parameters α = 0.8804, λ = 8.4202 · 10^6, and τ = 1.2749. Figure 19.9 depicts the premium under the franchise and fixed amount deductibles for the Burr loss distribution. In Figure 19.10 the influence of the parameters c, m1, and m2 of the limited proportional deductible is illustrated. Figure 19.11 shows the effect of the parameters d1 and d2 of the disappearing deductible.

Figure 19.9: The premium under the franchise deductible (thick blue line) and fixed amount deductible (thin red line). The Burr case. STFded09.xpl


Figure 19.10: The premium under the limited proportional deductible with respect to the parameter m2 . The thick solid blue line represents the premium for c = 0.2 and m1 = 100 000 DKK, the thin solid blue line for c = 0.4 and m1 = 100 000 DKK, the dashed red line for c = 0.2 and m1 = 1 million DKK, and the dotted red line for c = 0.4 and m1 = 1 million DKK. The Burr case. STFded10.xpl

19.3.4 Weibull Loss Distribution

Another frequently used analytic claim size distribution is the Weibull distribution, whose distribution function is given by

F(t) = 1 − exp(−βt^τ),

where t, τ, β > 0, see Chapter 13.


Figure 19.11: The premium under the disappearing deductible with respect to the parameter d2 . The thick blue line represents the premium for d1 = 100 000 DKK and the thin red line the premium for d1 = 500 000 DKK. The Burr case. STFded11.xpl

For the Weibull distribution the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = (Γ(1 + 1/τ)/β^{1/τ}) {1 − Γ(1 + 1/τ, βa^τ)},

(b) fixed amount deductible premium

P_{FAD(b)} = (Γ(1 + 1/τ)/β^{1/τ}) {1 − Γ(1 + 1/τ, βb^τ)} − b exp(−βb^τ),


(c) proportional deductible premium

P_{PD(c)} = (1 − c) Γ(1 + 1/τ)/β^{1/τ},

(d) limited proportional deductible premium

P_{LPD(c,m1,m2)} = (Γ(1 + 1/τ)/β^{1/τ}) {1 − Γ(1 + 1/τ, βm1^τ)}
  + (c Γ(1 + 1/τ)/β^{1/τ}) Γ(1 + 1/τ, β(m1/c)^τ)
  − (c Γ(1 + 1/τ)/β^{1/τ}) Γ(1 + 1/τ, β(m2/c)^τ)
  − m1 exp(−βm1^τ) + m1 exp(−β(m1/c)^τ) − m2 exp(−β(m2/c)^τ),

(e) disappearing deductible premium

P_{DD(d1,d2)} = (Γ(1 + 1/τ)/(β^{1/τ}(d2 − d1))) {d2 − d1 + d1 Γ(1 + 1/τ, βd2^τ) − d2 Γ(1 + 1/τ, βd1^τ)}
  + (d1 d2/(d2 − d1)) {exp(−βd2^τ) − exp(−βd1^τ)},

where the incomplete gamma function Γ(·, ·) is defined as

Γ(a, x) = (1/Γ(a)) ∫_0^x y^{a−1} e^{−y} dy.

19.3.5 Gamma Loss Distribution

All four distributions presented above suffer from some mathematical drawbacks, such as the lack of a closed form representation for the Laplace transform and the nonexistence of the moment generating function. The gamma distribution, given by

F(t) = F(t, α, β) = (β^α/Γ(α)) ∫_0^t y^{α−1} e^{−βy} dy,

for t, α, β > 0, does not have these drawbacks, see Chapter 13. For the gamma distribution the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = (α/β) {1 − F(a, α + 1, β)},

(b) fixed amount deductible premium

P_{FAD(b)} = (α/β) {1 − F(b, α + 1, β)} − b {1 − F(b, α, β)},

(c) proportional deductible premium

P_{PD(c)} = (1 − c) α/β,

(d) limited proportional deductible premium

P_{LPD(c,m1,m2)} = (α/β) {1 − F(m1, α + 1, β)}
  + (cα/β) {F(m1/c, α + 1, β) − F(m2/c, α + 1, β)}
  + m1 {F(m1, α, β) − F(m1/c, α, β)} − m2 {1 − F(m2/c, α, β)},

(e) disappearing deductible premium

P_{DD(d1,d2)} = (α/(β(d2 − d1))) [d2 {1 − F(d1, α + 1, β)} − d1 {1 − F(d2, α + 1, β)}]
  + (d1 d2/(d2 − d1)) {F(d1, α, β) − F(d2, α, β)}.
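Since F(t, α, β) is just a regularized incomplete gamma function, these formulas are short to code. The sketch below is our own illustration; the hand-rolled series helper is an assumption (adequate for moderate arguments), not book code.

```python
import math

def reg_inc_gamma(a, x, terms=200):
    """Regularized lower incomplete gamma P(a, x) via its power series
    (a hand-rolled helper for moderate x, not a library call)."""
    if x <= 0:
        return 0.0
    total, term = 0.0, 1.0 / a
    for n in range(1, terms):
        total += term
        term *= x / (a + n)
    return total * x ** a * math.exp(-x) / math.gamma(a)

def gamma_cdf(t, alpha, beta):
    """F(t, α, β) of the text: gamma cdf with shape α and rate β."""
    return reg_inc_gamma(alpha, beta * t)

def gamma_P_FAD(b, alpha, beta):
    """Formula (b): (α/β){1 − F(b, α+1, β)} − b{1 − F(b, α, β)}."""
    return (alpha / beta * (1 - gamma_cdf(b, alpha + 1, beta))
            - b * (1 - gamma_cdf(b, alpha, beta)))

alpha, beta = 2.0, 0.5
assert abs(gamma_P_FAD(0.0, alpha, beta) - alpha / beta) < 1e-12   # b = 0 gives P = α/β
# for shape 2 the cdf has the closed form 1 − e^{−βt}(1 + βt)
t = 3.0
assert abs(gamma_cdf(t, 2.0, beta) - (1 - math.exp(-beta * t) * (1 + beta * t))) < 1e-9
# the premium decreases with the deductible level
assert gamma_P_FAD(2.0, alpha, beta) < gamma_P_FAD(1.0, alpha, beta)
```

In production one would rather call a vetted special-function routine than this series.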

19.3.6 Mixture of Two Exponentials Loss Distribution

The mixture of two exponentials distribution function is defined by

F(t) = 1 − a exp(−β1 t) − (1 − a) exp(−β2 t),

where 0 ≤ a ≤ 1 and β1, β2 > 0, see Chapter 13. For the mixture of exponentials distribution the following formulae hold:

(a) franchise deductible premium

P_{FD(c)} = (a/β1) exp(−β1 c) + ((1 − a)/β2) exp(−β2 c) + c {a exp(−β1 c) + (1 − a) exp(−β2 c)},

(b) fixed amount deductible premium

P_{FAD(b)} = (a/β1) exp(−β1 b) + ((1 − a)/β2) exp(−β2 b),

(c) proportional deductible premium

P_{PD(c)} = (1 − c) (a/β1 + (1 − a)/β2),

(d) limited proportional deductible premium

P_{LPD(c,m1,m2)} = (a/β1) exp(−β1 m1) + ((1 − a)/β2) exp(−β2 m1)
  + (ca/β1) {exp(−β1 m2/c) − exp(−β1 m1/c)}
  + (c(1 − a)/β2) {exp(−β2 m2/c) − exp(−β2 m1/c)},

(e) disappearing deductible premium

P_{DD(d1,d2)} = (a/β1) {(d2/(d2 − d1)) exp(−β1 d1) − (d1/(d2 − d1)) exp(−β1 d2)}
  + ((1 − a)/β2) {(d2/(d2 − d1)) exp(−β2 d1) − (d1/(d2 − d1)) exp(−β2 d2)}.
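These expressions are elementary to implement. The sketch below (our own, with illustrative parameter values) codes formulas (a) and (b) and checks that the franchise premium never falls below the fixed amount premium at the same level, since the franchise contract additionally pays the deductible part t whenever the claim exceeds t.

```python
import math

def mixexp_P_FAD(b, a, beta1, beta2):
    """Formula (b): a e^{−β1 b}/β1 + (1 − a) e^{−β2 b}/β2."""
    return a / beta1 * math.exp(-beta1 * b) + (1 - a) / beta2 * math.exp(-beta2 * b)

def mixexp_P_FD(t, a, beta1, beta2):
    """Formula (a): fixed amount premium plus t times the survival function 1 − F(t)."""
    surv = a * math.exp(-beta1 * t) + (1 - a) * math.exp(-beta2 * t)
    return mixexp_P_FAD(t, a, beta1, beta2) + t * surv

a, beta1, beta2 = 0.3, 2.0, 0.5
P = a / beta1 + (1 - a) / beta2        # mean of the mixture, premium with no deductible
assert abs(mixexp_P_FAD(0.0, a, beta1, beta2) - P) < 1e-12
# P >= P_FD >= P_FAD at every deductible level
for t in (0.1, 1.0, 5.0):
    assert P >= mixexp_P_FD(t, a, beta1, beta2) >= mixexp_P_FAD(t, a, beta1, beta2)
```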

19.4 Final Remarks

Let us first concentrate on the franchise and fixed amount deductibles. Figures 19.6 and 19.9 depict the comparison of the two corresponding premiums and the effect of increasing the parameters a and b. Evidently, P ≥ P_{FD} ≥ P_{FAD}. Moreover, we can see that a deductible of about DKK 2 million in the log-normal case and DKK 40 million in the Burr case reduces P_{FAD} by half. The figures corresponding to the two loss distributions are similar; however, we note that the differences do not lie in shifting or scaling. The same is true for the rest of the considered deductibles. We also note that the premiums under no deductible for the log-normal and Burr loss distributions do not tally, because the parameters were estimated via the Anderson-Darling statistic minimization procedure, which in general does not yield the same moments, cf. Chapter 13. For the considered distributions the mean, and consequently the pure risk premium, is about three times larger in the Burr case.

The proportional deductible influences the premium in an obvious manner, that is pro rata (e.g. c = 0.25 results in cutting the premium by a quarter). Figures 19.7 and 19.10 show the effect of the parameters c, m1, and m2 of the limited proportional deductible. It is easy to see that P_{LPD(c,m1,m2)} is a decreasing function of these parameters. Figures 19.8 and 19.11 depict the influence of the parameters d1 and d2 of the disappearing deductible. Clearly, P_{DD(d1,d2)} is a decreasing function of the parameters and we can observe that the effect of increasing d2 is rather minor.

It is clear that the choice of a distribution and a deductible has a great impact on the pure risk premium. For an insurer the choice can be crucial for reasonable quoting of a given risk. A potential insured should take into account insurance options arising from appropriate types and levels of self-insurance (deductibles). Insurance premiums decrease with increasing levels of deductibles. With adequate loss protection, a property owner can take some risk and accept a large deductible, which might reduce the total cost of insurance.

We presented here a general approach to calculating pure risk premiums under deductibles. In Section 19.2 we presented a link between the pure risk premium under several deductibles and the limited expected value function. We used this link in Section 19.3 to calculate the pure risk premium in the case of the deductibles for different claim amount distributions. The results can be applied to derive annual premiums in the individual and collective risk model on a per occurrence deductible basis.

The approach can be easily extended to other distributions; one has only to calculate the levf for a particular distribution. This also includes the case of right-truncated distributions, which would reflect the maximum limit of liability set in a contract. Moreover, the idea can be extended to other deductibles. Once we express the pure risk premium in terms of the limited expected value function, it is enough to apply the form of the levf for a specific distribution. Finally, one can also use the formulae to obtain the premium with safety loading, which is discussed in Chapter 18.


Bibliography

Aebi, M., Embrechts, P., and Mikosch, T. (1992). A large claim index, Mitteilungen SVVM: 143–156.

Burnecki, K., Kukla, G., and Weron, R. (2000). Property insurance loss distributions, Physica A 287: 269–278.

Burnecki, K., Nowicka-Zagrajek, J., and Weron, A. (2004). Pure risk premiums under deductibles. A quantitative management in actuarial practice, Research Report HSC/04/5, Hugo Steinhaus Center, Wroclaw University of Technology.

Daykin, C.D., Pentikainen, T., and Pesonen, M. (1994). Practical Risk Theory for Actuaries, Chapman & Hall, London.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Klugman, S.A., Panjer, H.H., and Willmot, G.E. (1998). Loss Models: From Data to Decisions, Wiley, New York.

Mikosch, T. (1997). Heavy-tailed modelling in insurance, Commun. Statist. Stochastic Models 13: 799–815.

Panjer, H.H. and Willmot, G.E. (1992). Insurance Risk Models, Society of Actuaries, Schaumburg.

Sundt, B. (1994). An Introduction to Non-Life Insurance Mathematics (3rd ed.), Verlag Versicherungswirtschaft e.V., Karlsruhe.

20 Premiums, Investments, and Reinsurance

Paweł Miśta and Wojciech Otto

20.1 Introduction

In this chapter, setting the appropriate level of the insurance premium is considered in a broader context of business decisions, concerning also risk transfer through reinsurance and the rate of return on capital required to ensure solvency. Furthermore, the long term dividend policy, i.e. the rule of subdividing the financial result between the company and shareholders, is analyzed. The problem considered throughout this chapter can be illustrated by a simple example.

Example 1. Let us consider the following model of a risk process describing the capital of an insurer:

R_t = u + (c − du)t − S_t,   t ≥ 0,

where R_t denotes the current capital at time t, u = R_0 stands for the initial capital, c is the intensity of premium inflow, and S_t is the aggregate loss process – the amount of claim outlays over the period (0, t]. The term du represents the intensity of the outflow of dividends paid to shareholders, with d being the dividend rate. Let us assume that the increments S_{t+h} − S_t of the amount of claims process are for any t, h > 0 normally distributed N(μh, σ²h) and mutually independent. Below we consider premium calculation in two cases.

First case: d = 0. In this case the probability of ruin is an exponential function of the initial capital:

ψ(u) = exp(−Ru),   u ≥ 0,


where the adjustment coefficient R exists for c > μ and then equals 2(c − μ)σ^{−2}. The above formula can be easily inverted to render the intensity of premium c for a given capital u and a predetermined level ψ of the ruin probability:

c = μ + (−log(ψ)/(2u)) σ².

Given the safety standard ψ, the larger the initial capital u of the company is, the more competitive it is (since it can offer the insurance cover at a lower price c). However, a more realistic result is obtained when we assume a positive cost of capital.

Second case: d > 0. Now the problem of competitiveness is reduced to the problem of minimizing the premium by choosing the optimal level of capital backing the insurance risk:

c = μ + (−log(ψ)/(2u)) σ² + du.

The solution reads:

u_opt = σ √(−log(ψ)/(2d)),
c_opt = μ + σ √(−2d log(ψ)),

where exactly one half of the loading (c_opt − μ) serves to finance dividends and the other half serves as a safety loading (retained in the company).

Having already calculated the total premium, we face the problem of decomposing it into premiums for individual risks. In order to do that we should first identify the random variable W = S_{t+1} − S_t as a sum of independent risks X_1, . . . , X_n, and the intensity of premium c as a whole-portfolio premium Π(W), which has to be decomposed into individual premiums Π(X_i). The decomposition is straightforward when the total premium is calculated as in the first case above:

Π(X_i) = E(X_i) + (−log(ψ)/(2u)) σ²(X_i),

which is due to the additivity of variance for independent risks. The premium formula in the second case contains the safety loading proportional to the standard deviation and thus is no longer additive. This does not mean that reasonable decomposition rules do not exist – rather that their derivation is not so straightforward.
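The two-case computation above is easy to reproduce numerically. The following sketch (our own, with illustrative parameter values not taken from the book) verifies that u_opt minimizes the premium intensity and that exactly half of the loading finances dividends.

```python
import math

def premium_intensity(u, mu, sigma, psi, d):
    """c(u) = μ − log(ψ) σ² / (2u) + d u: premium intensity for capital level u."""
    return mu - math.log(psi) * sigma ** 2 / (2 * u) + d * u

mu, sigma, psi, d = 10.0, 4.0, 0.01, 0.05   # illustrative numbers

u_opt = sigma * math.sqrt(-math.log(psi) / (2 * d))
c_opt = mu + sigma * math.sqrt(-2 * d * math.log(psi))

# c_opt is attained at u_opt ...
assert abs(premium_intensity(u_opt, mu, sigma, psi, d) - c_opt) < 1e-9
# ... and no other capital level does better (the function is convex in u)
for u in (0.5 * u_opt, 0.9 * u_opt, 1.1 * u_opt, 2 * u_opt):
    assert premium_intensity(u, mu, sigma, psi, d) > c_opt
# exactly half of the loading c_opt − μ goes to dividends d · u_opt
assert abs(d * u_opt - (c_opt - mu) / 2) < 1e-9
```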


In this chapter, various generalizations of the basic problem presented in Example 1 are considered. These generalizations make the basic problem more complex on the one hand, but closer to real-life situations on the other. Additionally, these generalizations do not yield analytical results and, therefore, we demonstrate in several examples how to obtain numerical solutions. First of all, Example 1 assumes that the safety standard is expressed in terms of an acceptable level of ruin probability. In contrast, Sections 2, 3, and 4 are devoted to the approach based on the distribution of the single-year loss function. Section 2 presents the basic problem of joint decisions on premium and capital needed to ensure safety in terms of the shareholder's choice of the level of expected rate of return and risk. Section 3 presents in more detail the problem of decomposition of the whole-portfolio premium into individual risks premiums. Section 4 presents the problem extended by allowing for reinsurance, where competitiveness is a result of the simultaneous choice of the amount of capital and the retention level. This problem has not been illustrated in Example 1, as in the case of the normal distribution of the aggregate loss and usual market conditions there is no room to improve competitiveness through reinsurance. Sections 5, 6, and 7 are devoted again to the approach based on ruin probability. However, Section 5 departs from the simplistic assumptions of Example 1 concerning the risk process. It is shown there how to invert various approximate formulas for the ruin probability in order to calculate the premium for the whole portfolio as well as to decompose it into individual risks. Section 6 exploits the results of Section 5 in the context of a positive cost of capital. In that section a kind of flexible dividend policy is also considered, and the possibility to improve competitiveness this way is studied.
Finally, Section 7 presents an extension of the decision problem by allowing for reinsurance cession. Throughout this chapter we assume that we typically have at our disposal incomplete information on the distribution of the aggregate loss, and this incomplete information set consists of cumulants of order 1, 2, 3, and possibly 4. The rationale is that sensible empirical investigation of frequency and severity distributions could be done only separately for sub-portfolios of homogeneous risks. Cumulants for the whole portfolio are then obtained just by summing up ﬁgures over the collection of sub-portfolios, provided that sub-portfolios are mutually independent. The existence of cumulants of higher orders is assured by the common practice of issuing policies with limited cover exclusively (which in many countries is even enforced by law). Consequences of the assumption are that both the quantile of the current year loss and the probability of ruin in the long run will be approximated by formulas based on cumulants of the one-year aggregate loss W .


The chapter is based on Otto (2004), a book on non-life insurance mathematics. However, general ideas are heavily borrowed from the seminal paper of Bühlmann (1985).

20.2 Single-period Criterion and the Rate of Return on Capital

In this section the problem of joint decisions on premium and required capital is considered in terms of the shareholder's choice of the level of expected rate of return and risk. It is assumed that typically the single-year loss (when it happens) is covered by the insurance company through a reduction of its own assets. This assumption can be justified by the fact that in most developed countries state supervision agencies efficiently prevent companies from undertaking too risky insurance business without own assets being large enough. As shareholders are unable to externalize the loss, they are forced to balance the required expected rate of return with the possible size of the loss. The risk based capital (RBC) concept formalizes the assumption that the premium loading results from the required expected rate of return on the capital invested by shareholders and the admitted level of risk.

20.2.1 Risk Based Capital Concept

Let us denote by RBC the amount of capital backing the risk borne by the insurance portfolio. It is assumed that the capital has the form of assets invested in securities. Shareholders will accept the risk borne by the insurance portfolio provided it yields an expected rate of return larger than the rate of return on riskless investments offered by the financial market. Let us denote by r the required expected rate of return, and by r_f the riskless rate. The following equality holds:

Π(W) − E(W) = (r − r_f) · RBC.    (20.1)

For simplicity it is assumed that all assets are invested in riskless securities. This means that we neglect shareholder's capital locked up in fixed assets necessary to run the insurance operations of the company, and we also assume a prudent investment policy, at least with respect to those assets which are devoted to backing the insurance risk. It is also assumed that all amounts are expressed in terms of their value at the end of the year (accumulated when spent or received earlier, discounted when spent or received after the year end).


Let us also assume that the company management is convinced that the rate of return r is large enough to admit the risk of a technical loss in the amount of, let us say, ηRBC, η ∈ (0, 1), with a presumed small probability ε. The total loss of capital amounts then to (η − r_f) RBC. The assumption could be expressed in the following form:

F_W^{−1}(1 − ε) = Π(W) + ηRBC,    (20.2)

where F_W denotes the cdf of the random variable W. Combining equations (20.1) and (20.2), one obtains the desired amount of capital backing the risk of the insurance portfolio:

RBC = (F_W^{−1}(1 − ε) − E(W))/(r − r_f + η),    (20.3)

and the corresponding premium:

Π_RBC(W) = E(W) + ((r − r_f)/(r − r_f + η)) {F_W^{−1}(1 − ε) − E(W)}.    (20.4)

In both formulas, only the difference r − r_f is relevant. We denote it by r*. The obtained premium formula is just a simple generalization of the well-known quantile formula based on the one-year loss criterion. This standard formula is obtained by replacing the coefficient r*/(r* + η) by one. Now it is clear that the standard formula could be interpreted as a result of the assumption η = 0, so that shareholders are not ready to suffer a technical loss at all (at least with probability higher than ε).
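For a normally distributed aggregate loss, equations (20.3) and (20.4) take two lines of code. The sketch below (our own, with illustrative numbers) also confirms consistency with (20.1) and (20.2) and the η = 0 limit.

```python
from statistics import NormalDist

def rbc_and_premium(mean_w, sd_w, eps, eta, r_star):
    """Equations (20.3)-(20.4) for a normally distributed aggregate loss W."""
    q = NormalDist(mean_w, sd_w).inv_cdf(1 - eps)    # F_W^{-1}(1 - ε)
    rbc = (q - mean_w) / (r_star + eta)
    premium = mean_w + r_star / (r_star + eta) * (q - mean_w)
    return rbc, premium

mean_w, sd_w = 100.0, 20.0
eps, eta, r_star = 0.01, 0.25, 0.10
rbc, premium = rbc_and_premium(mean_w, sd_w, eps, eta, r_star)

# consistency with (20.1): loading = (r - r_f) · RBC
assert abs((premium - mean_w) - r_star * rbc) < 1e-9
# consistency with (20.2): the (1 - ε)-quantile equals Π(W) + η·RBC
q = NormalDist(mean_w, sd_w).inv_cdf(1 - eps)
assert abs(q - (premium + eta * rbc)) < 1e-9
# with η = 0 the standard quantile formula is recovered
_, p0 = rbc_and_premium(mean_w, sd_w, eps, 0.0, r_star)
assert abs(p0 - q) < 1e-9
```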

20.2.2 How to Choose Parameter Values?

Parameters r*, η, and ε of the formula are subject to managerial decision. However, an actuary could help reduce the number of redundant decision parameters. This is because the parameters reflect not only subjective factors (shareholder's attitude to risk), but also objective factors (the rate of substitution between expected return and risk offered by the capital market). The latter could be deduced from capital market quotations. In terms of the Capital Asset Pricing Model (CAPM), the relationship between the expectation E(ΔR) and the standard deviation σ(ΔR) of the excess ΔR of the rate of return over the riskless rate is reflected by the so-called capital market line (CML). The slope coefficient E(ΔR)σ^{−1}(ΔR) of the CML represents just a risk premium (in terms of an increase in expectation) per unit increase of standard deviation. Let us denote the reciprocal of the slope coefficient by s = {E(ΔR)}^{−1} σ(ΔR).

We will now consider the shareholder's choice between two alternatives: investment of the amount RBC in a well diversified portfolio of equities and bonds versus investment in the insurance company's capital. In the second case the total loss W − Π(W) − r_f RBC exceeds the amount (η − r_f) RBC with probability ε. The equally probable loss in the first case equals:

{u_ε σ(ΔR) − E(ΔR) − r_f} RBC,

where u_ε denotes the quantile of order (1 − ε) of the standard normal variable. This is justified by the fact that the CAPM is based on the assumption of normality of the fluctuations of rates of return. The shareholder is indifferent when the following equation holds:

η − r_f = u_ε σ(ΔR) − E(ΔR) − r_f,

provided that the expected rates of return in both cases are the same: r = r_f + E(ΔR). Making use of our knowledge of the substitution rate s and putting the above results together we obtain:

η = r*(u_ε s − 1).

In the real world the required rate of return could depart (ceteris paribus) from the above equation. On the one hand, the required expected rate of return could be larger, because direct investments in strategic portions of the insurance company capital are not as liquid as investments in securities traded on the stock exchange. On the other hand, there is empirical evidence that fluctuations in profits in the insurance industry are uncorrelated with the business cycle. This means that having a portion of insurance company shares in the portfolio improves the diversification of the risk to which a portfolio investor is exposed. Hence, there are reasons to require a smaller risk premium. The reasonable range of the parameter ε is from 1% to 5%. The rate of return depends on the shareholder's attitude to risk and market conditions, but it is customary to assume that the range of the risk premium r* is from 5% to 15%.
A reference point for setting the parameter η could also be deduced from regulatory requirements, as a situation when the capital falls below the solvency margin requires undertaking troublesome actions enforced by the supervision authority that could be harmful for company managers. A good summary of the CAPM and related models is given in Panjer et al. (1998), Chapters 4 and 8.

20.3 The Top-down Approach to Individual Risks Pricing

As pointed out in the introduction, some premium calculation formulas are additive for independent risks, and then the decomposition of the whole-portfolio premium into individual risk premiums is straightforward. However, sometimes a non-additive formula for pricing the whole portfolio is well justified, and then the decomposition is no longer trivial. This is exactly the case for the RBC formula (and also other quantile-based formulas) derived in the previous section. This section is devoted to showing the range, interpretation, and applications of some solutions to this problem.

20.3.1 Approximations of Quantiles

In the case of the RBC formula, decomposition means answering the question what the share of a particular risk is in the demand for capital backing the portfolio risk, which in turn entails the premium. In order to solve the problem one can make use of approximations of the quantile by normal power expansions. The most general version used in practice of the normal power formula for the quantile wε of order (1 − ε) of the variable W reads:

wε ≈ µW + σW { uε + (uε^2 − 1)/6 · γW + (uε^3 − 3uε)/24 · γ2,W − (2uε^3 − 5uε)/36 · γW^2 },

where µW, σW, γW, γ2,W denote the expectation, standard deviation, skewness, and kurtosis of the variable W, and uε is the quantile of order (1 − ε) of a N(0, 1) variable. Now the premium can be expressed by:

ΠRBC(W) = µW + σW (a0 + a1 γW + a2 γ2,W − a3 γW^2),   (20.5)

where the coefficients a0, a1, a2, a3 are simple functions of the parameters ε, η, r*, and the quantile uε of the standard normal variable. The above formula was proposed by Fisher and Cornish, see Hill and Davis (1968), so it will be referred to as FC20.5. The formula reduced by neglecting the last two components (by taking a2 = a3 = 0) will be referred to as FC20.6:

ΠRBC(W) = µW + σW (a0 + a1 γW),   (20.6)

and the formula neglecting also the skewness component as the normal approximation:

ΠRBC(W) = µW + a0 σW.   (20.7)
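For concreteness, the FC20.5 expansion can be evaluated numerically. The Python sketch below (illustrative, not the original STF code) computes the normal power quantile using the characteristics of the variable W that appear later in Table 20.1, with ε = 2%:

```python
from statistics import NormalDist

def np_quantile(mu, sigma, gamma, gamma2, eps):
    # normal power (Cornish-Fisher) approximation of the (1 - eps) quantile of W
    u = NormalDist().inv_cdf(1.0 - eps)
    return mu + sigma * (u
                         + (u**2 - 1.0) / 6.0 * gamma
                         + (u**3 - 3.0 * u) / 24.0 * gamma2
                         - (2.0 * u**3 - 5.0 * u) / 36.0 * gamma**2)

# characteristics of W from Table 20.1, eps = 2%
w_eps = np_quantile(999.8, 74.2, 0.779, 2.654, 0.02)
print(round(w_eps, 1))  # 1194.9
```

Setting γW = γ2,W = 0 recovers the plain normal quantile, i.e. the normal approximation (20.7).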


20 Premiums, Investments, and Reinsurance

More details on the normal power approximation can be found in Kendall and Stuart (1977).

20.3.2 Marginal Cost Basis for Individual Risk Pricing

The premium for the individual risk X could be set on the basis of marginal cost. This means that we look for such a price at which the insurer is indifferent whether to accept the risk or not. Calculation of the marginal cost can be based on standard differential calculus. In order to do that, we should first write the formula explicitly as a function of the cumulants of the first four orders:

Π(µ, σ^2, µ3, c4) := µ + a0 σ + a1 µ3/σ^2 + a2 c4/σ^3 − a3 µ3^2/σ^5.

This allows expressing the increment ∆Π(W) = Π(W + X) − Π(W), due to extending the basic portfolio W by the additional risk X, in terms of a linear approximation:

∆Π(W) ≈ (∂Π/∂µ)(W) ∆µW + (∂Π/∂σ^2)(W) ∆σW^2 + (∂Π/∂µ3)(W) ∆µ3,W + (∂Π/∂c4)(W) ∆c4,W,

where (∂Π/∂µ)(W), (∂Π/∂σ^2)(W), (∂Π/∂µ3)(W), (∂Π/∂c4)(W) denote the partial derivatives of the function Π(µ, σ^2, µ3, c4) calculated at the point (µW, σW^2, µ3,W, c4,W). By virtue of the additivity of cumulants for independent random variables we replace the increments ∆µW, ∆σW^2, ∆µ3,W, ∆c4,W by the cumulants of the additional risk: µX, σX^2, µ3,X, c4,X. As a result the following formula is obtained:

ΠM(X) = (∂Π/∂µ)(W) µX + (∂Π/∂σ^2)(W) σX^2 + (∂Π/∂µ3)(W) µ3,X + (∂Π/∂c4)(W) c4,X.

The respective calculations lead to the marginal premium formula:

ΠM(X) = µX + a0 σX^2/(2σW) + σW a1 γW (µ3,X/µ3,W − σX^2/σW^2)
        + σW { a2 γ2,W (c4,X/c4,W − 3σX^2/(2σW^2)) − a3 γW^2 (2µ3,X/µ3,W − 5σX^2/(2σW^2)) }.

The first two components coincide with the result obtained when the whole premium is based on the normal approximation. Allowing additionally a1 ≠ 0 we obtain the premium for the case when the skewness of the portfolio is non-negligible (making use of the FC20.6 approximation); including the last two components means that we also regard portfolio kurtosis (approximation based on formula FC20.5).

20.3.3 Balancing Problem

For each component the problem of balancing the premium on the whole-portfolio level arises. Should all risks composing the portfolio W = X1 + X2 + ... + Xn be charged their marginal premiums, the portfolio premium amounts to:

Σ_{i=1}^n ΠM(Xi) = µW + σW { (1/2) a0 − (1/2) a2 γ2,W + (1/2) a3 γW^2 },

which evidently underestimates Π(W) by:

Π(W) − Σ_{i=1}^n ΠM(Xi) = σW { (1/2) a0 + a1 γW + (3/2) a2 γ2,W − (3/2) a3 γW^2 }.

The last figure represents the diversification effect obtained by composing the portfolio of a large number of individual risks, which could also be treated as an example of "positive returns to scale". A balancing correction made so as to preserve the sensitivity of the premium to cumulants of order 1, 3, and 4 leads to the formula for the basic premium:

ΠB(X) = µX + σW a0 σX^2/σW^2 + σW { a1 γW µ3,X/µ3,W + a2 γ2,W c4,X/c4,W − a3 γW^2 (2µ3,X/µ3,W − σX^2/σW^2) }.
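The marginal and basic formulas can be checked numerically. The Python sketch below uses an arbitrary toy portfolio and illustrative coefficients a0, ..., a3 (these values are assumptions, not derived from ε, η, r*); it verifies that the basic premiums add up exactly to the whole-portfolio premium (20.5), while the marginal premiums fall short of it:

```python
import math

def pi_portfolio(cum, a):
    # RBC premium for the whole portfolio from its cumulants (formula 20.5)
    mu, var, mu3, c4 = cum
    a0, a1, a2, a3 = a
    sig = math.sqrt(var)
    gam, gam2 = mu3 / sig**3, c4 / sig**4
    return mu + sig * (a0 + a1 * gam + a2 * gam2 - a3 * gam**2)

def pi_marginal(x, w, a):
    # marginal premium of risk X against portfolio W
    muX, varX, mu3X, c4X = x
    muW, varW, mu3W, c4W = w
    a0, a1, a2, a3 = a
    sigW = math.sqrt(varW)
    gamW, gam2W = mu3W / sigW**3, c4W / sigW**4
    return (muX + a0 * varX / (2 * sigW)
            + sigW * a1 * gamW * (mu3X / mu3W - varX / varW)
            + sigW * (a2 * gam2W * (c4X / c4W - 1.5 * varX / varW)
                      - a3 * gamW**2 * (2 * mu3X / mu3W - 2.5 * varX / varW)))

def pi_basic(x, w, a):
    # basic premium: balancing correction preserving cumulants of order 1, 3, 4
    muX, varX, mu3X, c4X = x
    muW, varW, mu3W, c4W = w
    a0, a1, a2, a3 = a
    sigW = math.sqrt(varW)
    gamW, gam2W = mu3W / sigW**3, c4W / sigW**4
    return (muX + sigW * a0 * varX / varW
            + sigW * (a1 * gamW * mu3X / mu3W
                      + a2 * gam2W * c4X / c4W
                      - a3 * gamW**2 * (2 * mu3X / mu3W - varX / varW)))

# toy portfolio of three independent risks, cumulants (mu, var, mu3, c4)
risks = [(10.0, 4.0, 1.0, 2.0), (20.0, 9.0, 3.0, 5.0), (5.0, 1.0, 0.5, 0.8)]
W = tuple(map(sum, zip(*risks)))
a = (2.05, 0.55, 0.10, 0.20)  # illustrative coefficients a0..a3, not derived here

total_basic = sum(pi_basic(x, W, a) for x in risks)
print(abs(total_basic - pi_portfolio(W, a)) < 1e-9)  # True: basic premiums balance
```

The balance holds exactly by construction, for any portfolio and any coefficients, because the correction preserves additivity of the cumulant shares.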

Obviously, several alternative correction rules exist. For example, in the case of the kurtosis component any expression of the form:

a2 σW γ2,W { c4,X/c4,W + δ (c4,X/c4,W − σX^2/σW^2) }

satisfies the requirement of balancing the whole-portfolio premium for an arbitrary number δ. In fact, any particular choice is more or less arbitrary. Some common sense can be expressed by the requirement that a basic premium formula should not produce smaller figures than the marginal formula for any risk in the portfolio. Of course this requirement is insufficient to point out a unique solution. Here, the balancing problem results from the lack of additivity of the RBC formula, as it is a nonlinear function of cumulants.

20.3.4 A Solution for the Balancing Problem

It seems that only in the case of the variance component a0 σX^2/(2σW) can some more or less heuristic argument for the correction be found. The essence of the basic premium for individual risks is that it is the basis of an open market offer. Once the cover is offered to the public, clients decide whether to buy the cover or not. Thus the price should not depend on how many risks out of the portfolio W have been insured before, and how many after, the risk in question. Let us imagine a particular ordering of the basic set of n risks amended by the additional risk X, in the form of a sequence {X1, ..., Xj, X, Xj+1, ..., Xn}. Given this ordering, the respective component of the marginal cost of risk X takes the form:

a0 { sqrt( Σ_{k=1}^j σ^2(Xk) + σX^2 ) − sqrt( Σ_{k=1}^j σ^2(Xk) ) }.

We can now consider the expected value of this component, provided that each of the (n + 1)! orderings is equally probable (as proposed by Shapley (1953)). However, calculations are much simpler if we assume that the share U of the aggregated variance of all risks preceding the risk X in the total aggregate variance σW^2 is a random variable uniformly distributed over the interval (0, 1). The error of this simplification is negligible, as the share of each individual risk in the total variance is small. The result:

a0 E{ sqrt(U σW^2 + σX^2) − sqrt(U σW^2) } = a0 ∫_0^1 { sqrt(u σW^2 + σX^2) − sqrt(u σW^2) } du ≈ 2 a0 σW { sqrt(1 + σX^2/σW^2) − 1 } ≈ a0 σX^2/σW

is exactly what we need to balance the premium on the portfolio level. The reader easily verifies that the analogous argumentation no longer works for the higher-order components of the premium formula.
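The integral above is easy to check numerically. The Python sketch below (illustrative values for σW and σX, assumed here) compares a midpoint Riemann sum with the balancing value σX^2/σW, taking a0 = 1:

```python
import math

def variance_component(sig_w, sig_x, n=100_000):
    # E{ sqrt(U sig_w^2 + sig_x^2) - sqrt(U sig_w^2) } for U ~ Uniform(0, 1),
    # evaluated by a midpoint Riemann sum
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += (math.sqrt(u * sig_w**2 + sig_x**2) - math.sqrt(u * sig_w**2)) * h
    return total

sig_w, sig_x = 74.2, 3.0            # portfolio std and one small risk's std (assumed)
exact = variance_component(sig_w, sig_x)
approx = sig_x**2 / sig_w           # the balancing value (taking a0 = 1)
print(abs(exact - approx) / approx < 0.05)  # True: within a few percent
```

As expected, the agreement improves further as the individual risk's share of the total variance shrinks.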

20.3.5 Applications

The results presented in this section have three possible fields of application. The first is just passive premium calculation for the whole portfolio. In this respect several more accurate formulas exist, especially when our information on the distribution of the variable W extends beyond its first four cumulants.


The second application concerns pricing individual risks. In this respect it is hard to find a better approach (apart from those based on long-run solvability criteria, which are a matter of consideration in the next sections) that consistently links the risk relevant to the company (on the whole-portfolio level) with the risk borne by an individual policy. Of course an open market offer should be based on the basic valuation ΠB(·), whereas the marginal cost valuation ΠM(·) could serve as a lower bound for contracts negotiated individually. The third field of applications opens when a portfolio characterized by substantial skewness and kurtosis is inspected in order to localize those risks (or groups of risks) that distort the distribution of the whole portfolio. A too high (noncompetitive) general premium level could be caused by just a few influential risks. Such localization could help in decisions concerning underwriting limits and the reinsurance program. Applying these measures could help "normalize" the distribution of the variable W. Thus in the preliminary stage, when the basis for the underwriting policy and reinsurance is considered, the extended pricing formulas (involving higher-order cumulants) should be used. Paradoxically, once a prudent underwriting and ceding policy has been elaborated, the simple normal approximation suffices to price the portfolio as well as individual risks. Clearly, such prices concern only retained portions of risk, and should be complemented by reinsurance costs.

20.4 Rate of Return and Reinsurance Under the Short Term Criterion

This section is devoted to extending the decision problem considered in the previous sections by allowing for reinsurance. Then the pricing obeys the form:

Π(W) = ΠI(WI) + ΠR(WR),

where the whole aggregate loss W is subdivided into the share WI of the insurer and the share WR of the reinsurer. ΠI(·) denotes the premium formula applied by the insurer to price his share, set in accordance with the RBC concept. ΠR(·) symbolizes the pricing formula used by the reinsurer. Provided the formula ΠR(·) is accurate enough to reflect the existing offer of the reinsurance market, we could compare various variants of the subdivision of the variable W into the components WI and WR, looking for the subdivision which optimizes some objective function.

20.4.1 General Considerations

No matter which particular objective function is chosen, the space of possible subdivisions of the variable W has to be reduced somehow. One of the most important cases is when the variable W has a compound Poisson distribution and excess of loss reinsurance is chosen. Denoting by N the number of claims, we could define for each claim amount Yi, i = 1, 2, ..., N, its subdivision into the truncated loss Y_{M,i} = min{Yi, M} and the excess of loss Y^{M,i} = max{Yi − M, 0}, and then define the variables representing the subdivision of the whole portfolio:

WI = Y_{M,1} + ... + Y_{M,N},    WR = Y^{M,1} + ... + Y^{M,N},

both having compound Poisson distributions too, with characteristics being functions of the subdivision parameter M. Assuming that the capital of the insurer is not flexible, and that the current amount u of capital is smaller than the amount RBC(W) necessary to accept the whole portfolio alone, we could simply find such a value of M for which RBC(WI) = u. In the case when the current amount of capital is in excess, it is still relevant to assess the portion of the capital which should serve as a protection for insurance operations. The excess of capital over this amount can be treated separately, as being free of prudence requirements when investment decisions are undertaken. It is more interesting to assume that the amount of capital is flexible, and to choose the retention limit M so as to minimize the total premium Π(W) given the parameters r*, s, and ε. This objective function reflects the aim of maximizing the competitiveness of the company. If the resulting premium (after being charged with the respective cost loadings) is lower than that acceptable by the market, we can revise the assumptions. The revised problem could consist in maximizing the expected rate of return given the premium level and the parameters η and ε. This would mean getting a higher risk premium than that offered by the capital market.
Reasonable solutions could be expected in the case when the reinsurance premium formula ΠR(·) contains loadings proportional primarily to the expected value, and its sensitivity to the variance (even more so to skewness and kurtosis) is small. This could be expected as a result of transaction costs on the one hand, and the larger capital assets of reinsurers on the other. Also the possibility to diversify risk on a world-wide scale works in the same direction, increasing transaction costs and at the same time reducing the reinsurer's exposure to risk.

20.4.2 Illustrative Example

Example 2 The aggregate loss W has a compound Poisson distribution with a truncated-Pareto severity distribution, with cdf given for y ≥ 0 by the formula:

FY(y) = 1 − (1 + y/λ)^(−α)   when y < M0,
FY(y) = 1                    when y ≥ M0.

The variable W is subdivided into the retained part W_M and the ceded part W^M, which for a given subdivision parameter M ∈ (0, M0] have the form:

W_M = Y_{M,1} + ... + Y_{M,N},    W^M = Y^{M,1} + ... + Y^{M,N}.

We assume that the reinsurance pricing rule can be reflected by the formula:

ΠR(W^M) = (1 + re0) E(W^M) + re1 Var(W^M),

and that the insurer's own pricing formula is:

ΠI(W_M) = E(W_M) + r*/(r* + η) { F_{W_M}^(−1)(1 − ε) − E(W_M) },

with a respective approximation of the quantile of the variable W_M. For expository purposes we take the following values of the parameters:

(i) Parameters of the Pareto distribution (α, λ) = (5/2, 3/2), with truncation point M0 = 500;

(ii) Expected value of the number of claims E(N) = λP = 1000;

(iii) Substitution rate s = 2;

(iv) Remaining parameters (in the basic variant of the problem) ε = 2%, r* = 10%, re0 = 100%, re1 = 0.5%.

The problem consists in choosing the retention limit M ∈ (0, M0] that minimizes the total premium Π(W) = ΠI(W_M) + ΠR(W^M).


Solution. The first step is to express the moments of the first four orders of the variables Y_M and Y^M as functions of the parameters (α, λ, M0) and the real variable M. The expected value of the truncated-Pareto variable with parameters (α, λ, M) equals by definition:

m1 = ∫_0^M y αλ^α (λ + y)^(−α−1) dy + M {1 − F(M)}
   = αλ ∫_1^(1+M/λ) (x − 1) x^(−α−1) dx + M (1 + M/λ)^(−α),

where the substitution x = 1 + y/λ has been used. After integration and reordering of components this produces the following formula:

m1 = λ/(α − 1) { 1 − (1 + M/λ)^(1−α) }.

Similar calculations made for moments of higher order yield the recursive equation:

m_{k,α} = λ/(α − 1) { α m_{k−1,α−1} − (α − 1) m_{k−1,α} − M^(k−1) (1 + M/λ)^(1−α) },   k = 2, 3, ...,

where the symbol m_{K,A} means, for A > 0, just the moment of order K of the truncated-Pareto variable with parameters (A, λ, M). No matter whether A is positive or not, in order to start the recursion we take:

m_{1,A} = λ/(A − 1) { 1 − (1 + M/λ)^(1−A) }   when A ≠ 1,
m_{1,A} = λ ln(1 + M/λ)                        when A = 1.

The above formulas could serve to calculate the raw moments of the variable Y_M as well as of the variable Y, provided we replace M by M0. Having already calculated the moments of both variables Y_M and Y, we make use of the relation:

E(Y^k) = Σ_{j=0}^k C(k, j) E( Y_M^(k−j) (Y^M)^j ),   (20.8)

where C(k, j) denotes the binomial coefficient,


to calculate the moments of the variable Y^M. In formula (20.8) we read Y_M^0 and (Y^M)^0 as equal to one with probability one. The mixed moments appearing on the RHS of formula (20.8) can be calculated easily, as positive values of the variable Y^M happen only when Y_M = M. So the mixed moments equal simply:

E( Y_M^n (Y^M)^m ) = M^n E( (Y^M)^m )

for arbitrary m, n > 0. The second step is to express the cumulants of both variables W_M and W^M as products of the parameter λP and the respective raw moments of the variables Y_M and Y^M. Finally, both components ΠI(W_M) and ΠR(W^M) of the total premium are expressed as functions of the parameters (λP, α, λ, M0, ε, r*, s, re0, re1) and the decision parameter M ∈ (0, M0]. Now the search for the value of M that minimizes the total premium Π(W) is a quite feasible numerical task. The optimal retention level and the related minimal premium entail the optimal amount of capital u_opt = (r*)^(−1) {Π(WI) − E(WI)}.
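The moment recursion is straightforward to implement. The Python sketch below (an illustrative implementation, not the STF/XploRe code) reproduces the characteristics of the variable W quoted in Table 20.1 from the parameters of Example 2:

```python
import math

def m1(A, lam, M):
    # first moment of the Pareto(A, lam) variable truncated at M
    if abs(A - 1.0) < 1e-12:
        return lam * math.log(1.0 + M / lam)
    return lam / (A - 1.0) * (1.0 - (1.0 + M / lam) ** (1.0 - A))

def moments(alpha, lam, M, kmax=4):
    # raw moments m_1..m_kmax of the truncated-Pareto(alpha, lam, M) variable via
    # m_{k,a} = lam/(a-1) {a m_{k-1,a-1} - (a-1) m_{k-1,a} - M^{k-1}(1+M/lam)^{1-a}}
    prev = {j: m1(alpha - j, lam, M) for j in range(kmax)}   # m_{1, alpha-j}
    out = [prev[0]]
    for k in range(2, kmax + 1):
        cur = {}
        for j in range(kmax - k + 1):
            a = alpha - j
            cur[j] = lam / (a - 1.0) * (a * prev[j + 1] - (a - 1.0) * prev[j]
                                        - M ** (k - 1) * (1.0 + M / lam) ** (1.0 - a))
        prev = cur
        out.append(prev[0])
    return out

# Example 2: (alpha, lam, M0) = (5/2, 3/2, 500), E(N) = lamP = 1000
lamP = 1000.0
kap = [lamP * mk for mk in moments(2.5, 1.5, 500.0)]  # compound Poisson cumulants
sig = math.sqrt(kap[1])
print(round(kap[0], 1), round(sig, 1),   # E(W) = 999.8, sigma(W) = 74.2
      round(kap[2] / sig**3, 3),         # gamma(W) = 0.779
      round(kap[3] / sig**4, 3))         # gamma2(W) = 2.654
```

Here the compound Poisson cumulant of order k is simply λP times the k-th raw severity moment, which is what the "second step" above exploits.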

20.4.3 Interpretation of Numerical Calculations in Example 2

The problem described in Example 2 has been solved under several different variants of assumptions on the parameters. Variants 1–5 consist in minimization of the total premium; in variant 1 the parameters are (s, ε, r*, re0, re1) = (2, 2%, 10%, 100%, 0.5%). In variants 2, 3, 4, and 5 the value of one of the parameters (ε, r*, re0, re1) is modified, and in variant 6 there is no reinsurance, with (s, ε, r*) as in variant 1. Variant 7 consists in maximization of r*, where (ε, η, re0, re1) are as in variant 1 and the premium loading equals 4.47%. The results are presented in Table 20.1. Reinsurance reduces the required level of RBC, which coincides either with a premium reduction (compare variants 1 and 6) or with an increase of the expected rate of return (compare variants 7 and 6). Reinsurance also reduces the difference between the results obtained on the basis of the two different approximation methods (FC20.6 and FC20.5). In variant 6 (no reinsurance) the difference is quite large, which is caused by the fairly long right tail of the distribution of the variable Y. Comparison of variants 2 and 1 confirms that the choice of a smaller expected rate of return (given the substitution rate) automatically raises the need for capital, leaving the premium level unchanged (and therefore also the optimal retention level).


Table 20.1: Optimal choice of the retention limit M. Basic characteristics of the variable W: E(W) = 999.8, σ(W) = 74.2, γ(W) = 0.779, γ2(W) = 2.654

Optimization variant           Quantile approx.   Retention    RBC     Loading
                               method for W_M     limit M              Π(W)/E(W) − 1
V.1: (basic)                   FC20.6             114.5        386.6   4.11%
                               FC20.5             106.5        385.2   4.13%
V.2: r* = 8%                   FC20.6             114.5        483.3   4.11%
                               FC20.5             106.5        481.5   4.13%
V.3: ε = 4%                    FC20.6             129.7        382.3   4.03%
                               FC20.5             134.7        382.1   4.01%
V.4: re0 = 50%                 FC20.6             79.8         373.3   4.03%
                               FC20.5             76.3         372.1   4.03%
V.5: re1 = 0.25%               FC20.6             95.5         380.0   4.05%
                               FC20.5             90.9         379.0   4.06%
V.6: (no reinsurance)          FC20.6             500.0        446.6   4.47%
                               FC20.5             500.0        475.1   4.75%
V.7: r* = 11.27%               FC20.6             106.0        372.3   4.47%
     r* = 11.22%               FC20.5             99.6         371.5   4.47%

STFrein01.xpl

Comparison of variants 3 and 1 shows that admission of a greater loss probability ε causes a reduction of the premium, which coincides with a substantial reduction of the need for reinsurance cover and a slight reduction in the need for capital. It is worth noticing that replacement of ε = 2% by ε = 4% reverses the relation between the results obtained by the two approximation methods. Formula FC20.5 leads to smaller retention limits when the safety standard is high (small ε), and to larger retention limits when the safety standard is relaxed (large ε). Comparison of variants 4 and 5 with variant 1 illustrates the obvious rule that it pays off to reduce retention limits when reinsurance is cheap, and to increase them when reinsurance is expensive. It could happen in practice that the pricing rules applied by reinsurers differ by lines of business. When the portfolio W = W1 + ... + Wn consists of n business lines, for which the market offers reinsurance cover priced on the basis of different formulas Π1,R(·), ..., Πn,R(·), the natural generalization of the problem lies in minimizing the premium (or maximizing the rate r*) by choosing n retention limits M1, ..., Mn, one for each business line separately. Separation of business lines makes it feasible to assume different severity distributions, too.

20.5 Ruin Probability Criterion when the Initial Capital is Given

Presuming a long-run horizon for premium calculation, we turn back to ruin theory. Our aim is now to obtain such a level of premium for the portfolio yielding each year the aggregate loss W as results from a presumed level of ruin probability ψ and initial capital u. This is done by inverting various approximate formulas for the probability of ruin. The information requirements of the different methods are emphasized. Special attention is also paid to the problem of decomposition of the whole-portfolio premium.

20.5.1 Approximation Based on Lundberg Inequality

This is the simplest (and crudest) approximation method, which simply assumes replacement of the true function ψ(u) by:

ψLi(u) = e^(−Ru).

At first we obtain the approximation R(Li) of the desired level of the adjustment coefficient R:

R(Li) = −ln ψ / u.

In the next step we make use of the definition of the adjustment coefficient for the portfolio:

E e^(RW) = e^(RΠ(W)),

to obtain directly the premium formula:

Π(W) = R^(−1) ln E e^(RW) = R^(−1) CW(R),

where CW denotes the cumulant generating function. The result is well known as the exponential premium formula. It possesses several desirable properties, not only derivability from ruin theory. First of all, by virtue of the properties of the cumulant generating function, it is additive for independent risks. So there is no need to distinguish between marginal and basic premiums for individual risks. For the same reason the formula does not reflect the cross-sectional diversification effect when the portfolio is composed of a large number of risks, each of them being small. The formula can be practically applied once we replace the adjustment coefficient R by its approximation R(Li).


Under certain conditions we could rely on truncating higher-order terms in the expansion of the cumulant generating function:

Π(W) = R^(−1) CW(R) = µW + (1/2!) R σW^2 + (1/3!) R^2 µ3,W + (1/4!) R^3 c4,W + ...,   (20.9)

and use for the purpose of individual risk pricing the formula (where higher-order terms are truncated as well):

Π(X) = R^(−1) CX(R) = µX + (1/2!) R σX^2 + (1/3!) R^2 µ3,X + (1/4!) R^3 c4,X + ...   (20.10)

Some insight into the nature of the long-run criteria for premium calculation could be gained by re-arranging formula (20.9). At first we could express the initial capital in units of the standard deviation of the aggregate loss: U = u σW^(−1). Now the adjustment coefficient could be expressed as:

R = −ln ψ / (U σW),

and premium formula (20.9) as:

Π(W) = µW + σW { (1/2!) (−ln ψ / U) + (1/3!) (−ln ψ / U)^2 γW + (1/4!) (−ln ψ / U)^3 γ2,W + ... },   (20.11)

where in the brackets only unit-less figures appear, which together form the pricing formula for the standardized risk (W − µW) σW^(−1). Let us notice that the contribution of the higher-order terms in the expansion is negligible when the initial capital is large enough. This phenomenon could be interpreted as a result of risk diversification in time (as opposed to cross-sectional risk diversification). Provided the initial capital is large, the ruin (if it happens at all) will rather appear as a result of the aggregation of poor results over many periods of time. However, given the skewness and kurtosis of the one-year increment of the risk process, the sum of increments over n periods has skewness of order n^(−1/2), kurtosis of order n^(−1), etc. Hence the larger the initial capital, the smaller the importance of the difference between the distribution of the yearly increment and the normal distribution. In a way this is how the diversification of risk in time works. In the case of cross-sectional diversification the assumption of mutual independence of risks plays the crucial role. Analogously, diversification of risk in time works effectively when subsequent increments of the risk process are independent.

20.5.2 "Zero" Approximation

The "zero" approximation is a kind of naive approximation, which assumes replacement of the function ψ(u) by:

ψ0(u) = (1 + θ)^(−1) exp(−Ru),

where θ denotes the relative security loading, which means that (1 + θ) = Π(W)/E(W). The "zero" approximation is applicable to the case of Poisson claim arrivals (as opposed to the Lundberg inequality, which is applicable under more general assumptions). Relying on the "zero" approximation leads to the system of two equations:

Π(W) = R^(−1) CW(R),
R = (1/u) ln { E(W) / (ψ Π(W)) }.

The system could be solved by assuming at first:

R(0) = −ln ψ / u,

and next by executing the iterations:

Π(n)(W) = {R(n−1)}^(−1) CW(R(n−1)),
R(n) = (1/u) ln { E(W) / (ψ Π(n)(W)) },

which under reasonable circumstances converge quite quickly to the solution of the system. This allows applying formula (20.9) for the whole portfolio and formula (20.10) for individual risks, provided the coefficient R is replaced by the limit of the iterations.
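The iteration is easy to carry out in practice. The Python sketch below assumes a compound Poisson portfolio with exponential severities of mean µY, an assumption made here only so that the cumulant generating function has the closed form CW(R) = λP µY R/(1 − µY R); all parameter values are illustrative:

```python
import math

def zero_approx(lamP, mu_y, u, psi, tol=1e-12, itmax=100):
    # fixed-point iteration of the "zero" approximation for a compound Poisson
    # portfolio with exponential severities of mean mu_y
    EW = lamP * mu_y
    R = -math.log(psi) / u                     # Lundberg-type starting point
    for _ in range(itmax):
        prem = lamP * mu_y / (1.0 - mu_y * R)  # C_W(R) / R for exponential severity
        R_new = math.log(EW / (psi * prem)) / u
        if abs(R_new - R) < tol:
            return R_new, prem
        R = R_new
    return R, prem

R, prem = zero_approx(lamP=1000.0, mu_y=1.0, u=400.0, psi=0.01)
# at the fixed point, psi_0(u) = (1 + theta)^{-1} exp(-R u) reproduces psi
check = (1000.0 / prem) * math.exp(-R * 400.0)
print(abs(check - 0.01) < 1e-9)  # True
```

The loop typically settles in a handful of iterations, confirming the "quite quick" convergence claimed above.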

20.5.3 Cramér–Lundberg Approximation

Premium calculation could also be based on the Cramér–Lundberg approximation. In this case the problem can be reduced to a system of equations as well (three of them this time):

Π(W) = R^(−1) CW(R),
R = (1/u) [ −ln ψ + ln { θ µY / ( M'Y(R) − µY (1 + θ) ) } ],
(1 + θ) = Π(W) / E(W),

where M'Y(·) and µY denote respectively the first-order derivative of the moment generating function and the expectation of the severity distribution. Solution of the system with respect to the unknowns Π(W), θ, and R now requires a bit more complex calculations. The obtained result R(CL) could then be used to replace R in formulas (20.9) and (20.10). The method is applicable to the case of Poisson claim arrivals. Moreover, the severity distribution has to be known in this case. It can be expected that the method will produce accurate results for large u.

20.5.4 Beekman–Bowers Approximation

This method is often recommended as one producing relatively accurate approximations, especially for moderate amounts of initial capital. The problem consists in solving the system of three equations:

ψ = (1 + θ)^(−1) {1 − Gα,β(u)},
α/β = (1 + θ) m2,Y / (2θ m1,Y),
α(α + 1)/β^2 = (1 + θ) [ m3,Y / (3θ m1,Y) + 2 { m2,Y / (2θ m1,Y) }^2 ],

where Gα,β denotes the cdf of the gamma distribution with parameters (α, β), and mk,Y denotes the raw moment of order k of the severity distribution. The last two equations arise from equating the moments of the gamma distribution to the conditional moments of the maximal loss distribution (provided the maximal loss is positive). Solving this system of equations is a bit cumbersome, as it involves multiple numerical evaluations of the cdf of the gamma distribution. An admissible solution exists provided m3,Y m1,Y > m2,Y^2, which is always satisfied for an arbitrary severity distribution with support on the positive part of the axis. Denoting the solution for the unknown θ by θBB, we can write the latter as


a function: θBB = θBB(u, ψ, m1,Y, m2,Y, m3,Y), and obtain the whole-portfolio premium from the equation: ΠBB(W) = (1 + θBB) E(W). Formally, application of the method only requires the moments of the first three orders of the severity distribution to be finite. However, a problem arises when we wish to price individual risks. Then we have to know the moment generating function of the severity distribution, and it should obey the conditions for the adjustment coefficient to exist. If this is the case, we can replace the coefficient θ in the equation:

MY(r) = 1 + (1 + θ) m1,Y r

by its approximation θBB, and thus obtain the approximation R(BB) of the adjustment coefficient R. It allows calculating premiums according to formulas (20.9) and (20.10). It is easy to verify that there is no danger of contradiction, as both formulas for the premium ΠBB(W) produce the same result: (1 + θBB) E(W) = {R(BB)}^(−1) CW(R(BB)).

20.5.5 Diffusion Approximation

This approximation method requires the scarcest information. It suffices to know the first two moments of the increment of the risk process to invert the formula:

ψD(u) = exp(−R(D) u),

where:

R(D) = 2 {Π(W) − µW} σW^(−2),

in order to obtain the premium:

ΠD(W) = µW + (σW^2 / 2) (−log ψ / u),

which again is easily decomposable for individual risks. The formula is equivalent to the exponential formula (20.9) with all terms except the first two omitted.

20.5.6 De Vylder Approximation

The method requires information on the moments of the first three orders of the increment of the risk process. According to the method, the ruin probability could be expressed as:

ψdV(u) = {1 + R(D) ρ}^(−1) exp{ −R(D) u / (1 + R(D) ρ) },

where for simplicity the abbreviated notation ρ := (1/3) σW γW is used. Setting ψdV(u) equal to ψ and rearranging the equation, we obtain another form of it:

{ −log ψ − log(1 + R(D) ρ) } (1 + R(D) ρ) = R(D) u,

which can be solved numerically with respect to R(D), to yield as a result the premium formula:

ΠdV(W) = µW + (σW^2 / 2) R(D),

which again is directly decomposable. When an analytic solution is needed, we can make some further simplifications. Namely, the term entangling the unknown coefficient R(D) could be transformed on the basis of the following approximation:

(1 + R(D) ρ) log(1 + R(D) ρ) = (1 + R(D) ρ) { R(D) ρ − (1/2)(R(D) ρ)^2 + (1/3)(R(D) ρ)^3 − ... } ≈ R(D) ρ.

Provided the error of the omission of higher-order terms is small, we obtain the approximation:

R(D) ≈ −log ψ / { u + ρ (log ψ + 1) }.

The error of the above solution is small provided the initial capital u is several times greater than the product ρ |log ψ + 1|. Under this condition we obtain the explicit (approximated) premium formula:

ΠdV*(W) = µW + (σW^2 / 2) · (−log ψ) / { u + ρ (log ψ + 1) },


where the star symbolizes the simplification made. Applying now the method of linear approximation of the marginal cost ΠdV*(W + X) − ΠdV*(W) presented in Section 20.3 yields the result:

ΠdV*(X) = µX + [ −log ψ {u + 2ρ (log ψ + 1)} / ( 2 {u + ρ (log ψ + 1)}^2 ) ] σX^2
             + [ log ψ (log ψ + 1) / ( 6 {u + ρ (log ψ + 1)}^2 ) ] µ3,X.

The reader can verify that the formula ΠdV*(·) is additive for independent risks, and so it can serve for marginal as well as for basic valuation.
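The transcendental equation for R(D) and its explicit simplification can be compared numerically. The Python sketch below (illustrative only; the values of u and ψ are assumptions, while the moments of W are those from Table 20.1) solves the exact De Vylder equation by bisection and checks that the explicit formula is close when u is several times greater than ρ |log ψ + 1|:

```python
import math

def devylder_R(sig, gam, u, psi):
    # solve {-log(psi) - log(1 + R*rho)} (1 + R*rho) = R*u by bisection
    rho = sig * gam / 3.0
    f = lambda R: (-math.log(psi) - math.log(1.0 + R * rho)) * (1.0 + R * rho) - R * u
    lo, hi = 1e-9, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def devylder_R_star(sig, gam, u, psi):
    # explicit simplification R ~ -log(psi) / {u + rho (log(psi) + 1)}
    rho = sig * gam / 3.0
    return -math.log(psi) / (u + rho * (math.log(psi) + 1.0))

# W as in Table 20.1; psi = 1% and u = 400 are illustrative assumptions
mu, sig, gam = 999.8, 74.2, 0.779
R = devylder_R(sig, gam, u=400.0, psi=0.01)
R_star = devylder_R_star(sig, gam, u=400.0, psi=0.01)
premium = mu + sig**2 / 2.0 * R
print(round(R, 5), round(R_star, 5), round(premium, 1))
```

For these values the explicit formula agrees with the numerical root to within about one percent, as the condition on u suggests.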

20.5.7 Subexponential Approximation

This method applies to the classical model (Poisson claim arrivals) with a thick-tailed severity distribution. More precisely, when the severity cdf FY possesses a finite expectation µY, the cdf FL1 of the integrated tail distribution (interpreted as the cdf of the variable L1, the "ladder height" of the claim surplus process) is defined as follows:

1 − FL1(x) = (1/µY) ∫_x^∞ {1 − FY(y)} dy.

Assuming now that the latter distribution is subexponential (see Chapter 15), we could obtain (applying the Pollaczek–Khinchin formula) the approximation, which should work for large values of initial capital:

ΠS(W) = µW [ 1 + (1/ψ) {1 − FL1(u)} ].

An extended study of the consequences of thick-tailed severity distributions can be found in Embrechts, Klüppelberg, and Mikosch (1997).

20.5.8 Panjer Approximation

The Pollaczek–Khinchin formula could also be used in combination with the Panjer recursion algorithm to produce quite accurate answers (at the cost of time-consuming calculations) in the case of the classical model (Poisson claim arrivals). The method consists of two basic steps. In the first step the integrated tail distribution FL1(x) is calculated and discretized. Once this step


is executed, we have the distribution of a variable $\tilde{L}_1$ (the discretized version of the "ladder height" $L_1$):

$$f_j = P(\tilde{L}_1 = jh), \qquad j = 0, 1, 2, \ldots$$

The second step is based on the fact that the maximal loss $L = L_1 + \cdots + L_N$ has a compound geometric distribution. Thus the distribution of the discretized version $\tilde{L}$ of the variable $L$ is obtained by making use of the Panjer recursion formula:

$$P(\tilde{L} = 0) = (1 - q) \sum_{j=0}^{\infty} (q f_0)^j = \frac{1 - q}{1 - q f_0},$$

and for $k = 1, 2, \ldots$:

$$P(\tilde{L} = kh) = \frac{q}{1 - q f_0} \sum_{j=1}^{k} f_j \, P\{\tilde{L} = (k - j)h\},$$

where $q \stackrel{\mathrm{def}}{=} (1 + \theta)^{-1}$. Iterations should be stopped when for some $k_\psi$ the cumulated probability $F_{\tilde{L}}(k_\psi h)$ exceeds for the first time the predetermined value $1 - \psi$. The approximate value of the capital $u$ at which the ruin probability attains the value $\psi$ can then be set by interpolation, taking into account that the ruin probability function is approximately exponential:

$$u_\psi \stackrel{\mathrm{def}}{=} k_\psi h - h \, \frac{\log \psi - \log\{1 - F_{\tilde{L}}(k_\psi h)\}}{\log\{1 - F_{\tilde{L}}(k_\psi h - h)\} - \log\{1 - F_{\tilde{L}}(k_\psi h)\}}.$$

Calculations should be repeated for different values of $\theta$ in order to find the value $\theta_{Panjer}(\psi, u)$ at which the resulting capital $u_\psi$ approaches the predetermined value of capital $u$. The resulting premium is then given by the formula:

$$\Pi_{Panjer}(W) = (1 + \theta_{Panjer})\, \mu_W.$$

It should be noted that only the second step of the calculations has to be repeated under the search procedure, as the distribution of the variable $\tilde{L}_1$ remains the same for the various values of $\theta$ being tested. The advantage of the method is that the approximation error is under control, being a simple consequence of the width of the discretization interval $h$ and the discretization method used. The disadvantage, already mentioned, is the time-consuming algorithm. Moreover, the method produces only numerical results and therefore provides no rule for decomposing the whole portfolio premium into individual risk premiums. Nevertheless, the method can be used to obtain quite accurate approximations, and thus a reference point for estimating the approximation errors produced by simpler methods.


All approximation methods presented in this section are more or less standard, and more detailed information on them can be found in any actuarial textbook, for example in Actuarial Mathematics by Bowers et al. (1986, 1997). A more advanced analysis can be found in the book Ruin Probabilities by Asmussen (2000), and numerical comparisons of these and other approximations are given in Chapter 15.

20.6 Ruin Probability Criterion and the Rate of Return

This section considers the problem of balancing profitability and solvency requirements. A similar problem has already been studied in Section 20.2; however, return on capital was considered there on a single-period basis, so neither the allocation of returns (losses) nor the long-run consequences of the decision rules applied in this respect were considered. The problem was already illustrated in Example 1. Section 20.6.1 presents the same problem under more general assumptions about the risk process, making use of some of the approximations presented in Section 20.5. Section 20.6.2 presents another generalization, where a more flexible dividend policy allows risk to be shared between the company and the shareholders.

20.6.1 Fixed Dividends

First we consider a reinterpretation of the model presented in Example 1. Now the discrete-time version of the model is assumed:

$$R_n = u + (c - du)\, n - (W_1 + \cdots + W_n), \qquad n = 0, 1, 2, \ldots$$

where all events are assumed to be observed once a year, and the notation is adapted accordingly. The question is the same: to choose the optimal level of initial capital $u$ that minimizes the premium $c$, given the ruin probability $\psi$ and the dividend rate $d$. The solution depends on how much information we have on the distribution of the variable $W$, and on how precise a result is required. Provided our information is restricted to the expectation and variance of $W$, we can use the diffusion approximation. This produces exactly the same results as in Example 1, although now we interpret them as an approximate solution. Recall that the resulting premium formula reads:

$$\Pi(W) = \mu_W + \sigma_W \sqrt{-2d \log \psi},$$


with the accompanying result for the optimal level of capital:

$$u_{opt} = \sigma_W \sqrt{-\log \psi \, (2d)^{-1}}.$$

Despite the fact that the premium formula is not additive, we can follow the arguments presented in Section 20.3.4 to propose the individual basic premium formula:

$$\Pi_B(X) = \mu_X + \sigma_X^2 \sigma_W^{-1} \sqrt{-2d \log \psi},$$

and obviously the marginal premium, containing a loading half as large as the basic one. The basic idea presented above can be generalized to cases when richer information on the distribution of the variable $W$ allows for more sophisticated methods. For illustrative purposes only the method of De Vylder (in a simplified version) is considered.

Example 3 Our information encompasses also the skewness (which is positive), so the premium is calculated on the basis of the De Vylder approximation. Allowing for the simplification proposed in the previous section, we obtain the minimized function:

$$c = \mu_W + \frac{-\ln \psi \; \sigma_W^2}{2 \{u + \rho (\ln \psi + 1)\}} + du.$$

Almost as simply as in Example 1 we get the solutions:

$$u_{opt} = \sigma_W \sqrt{\frac{-\ln \psi}{2d}} - \rho (\ln \psi + 1),$$

$$c_{opt} = \mu_W + \sigma_W \sqrt{-2d \ln \psi} - \tfrac{1}{3}\, d\, (\ln \psi + 1)\, \gamma_W \sigma_W,$$

where again the safety loading amounts to $\frac{1}{2} \sigma_W \sqrt{-2d \ln \psi}$. However, in this case the safety loading is smaller than half of the total premium loading. This time the capital (and so the dividend loading) is larger, because of the component proportional to $\sigma_W \gamma_W$. This also complicates the pricing of individual risks, as (analogously to the formulas considered in Section 20.3.3) the basic premium in respect of this component has to be set arbitrarily.

Comparing the problems presented above with those considered in Section 20.5, we can conclude that premium calculations based on ruin theory are easily decomposable as long as the capital backing the risk is considered fixed. Once the cost of capital is explicitly taken into account, we obtain premium calculation formulas much more similar to those derived on the basis of one-year considerations, which leads to similar obstacles when the decomposition problem is considered.
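The closed-form solutions of Example 3 are easy to evaluate. The following sketch (Python; the function names are mine) checks that the De Vylder-corrected pair $(u_{opt}, c_{opt})$ indeed minimizes the objective $c(u)$ of Example 3, using the simplified choice $\rho = \gamma_W \sigma_W / 3$:

```python
import math

def diffusion_premium(mu_w, sigma_w, d, psi):
    """Diffusion approximation: optimal capital and premium (Example 1)."""
    u = sigma_w * math.sqrt(-math.log(psi) / (2.0 * d))
    c = mu_w + sigma_w * math.sqrt(-2.0 * d * math.log(psi))
    return u, c

def de_vylder_premium(mu_w, sigma_w, gamma_w, d, psi):
    """Simplified De Vylder correction (Example 3), rho = gamma_w*sigma_w/3."""
    lp = math.log(psi)
    rho = gamma_w * sigma_w / 3.0
    u = sigma_w * math.sqrt(-lp / (2.0 * d)) - rho * (lp + 1.0)
    c = mu_w + sigma_w * math.sqrt(-2.0 * d * lp) - d * (lp + 1.0) * rho
    return u, c

# Characteristics of W as quoted in Table 20.2, with d = psi = 5%.
mu_w, sigma_w, gamma_w, d, psi = 999.8, 74.2, 0.779, 0.05, 0.05
u_opt, c_opt = de_vylder_premium(mu_w, sigma_w, gamma_w, d, psi)

def objective(u):
    """The minimized function c(u) of Example 3."""
    lp = math.log(psi)
    rho = gamma_w * sigma_w / 3.0
    return mu_w - lp * sigma_w**2 / (2.0 * (u + rho * (lp + 1.0))) + d * u
```

For these inputs the premium loading $(c_{opt} - \mu_W)/\mu_W$ comes out near the no-reinsurance figures of Table 20.2, even though the table was computed with the full (not simplified) approximation methods.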


20.6.2 Flexible Dividends

So far we have assumed that shareholders are paid a fixed dividend irrespective of the current performance of the company. This is not necessarily the case, as shareholders would accept some share in the risk provided they get a suitable risk premium in exchange. The more general model, which encompasses the previous examples as well as the case of risk sharing, can be formulated as follows:

$$R_n = u + cn - (W_1 + \cdots + W_n) - (D_1 + \cdots + D_n),$$

where $D_n$ is the dividend due for year $n$, defined as a function of the variable $W_n$ such that $E(D_n) = du$. As the dividend is a function of the current year's result, it preserves the independence of the increments of the risk process. Of course, only those definitions of $D_n$ are sensible which in effect reduce the range of fluctuations of the risk process. The example presented below assumes one of the possible (and sensible) choices in this respect.

Example 4 Let us assume that $W_n$ has a gamma$(\alpha, \beta)$ distribution, and the dividend is defined as:

$$D_n = \max\{0, \delta (c - W_n)\}, \qquad \delta \in (0, 1),$$

which means that the shareholders' share in profits amounts to $\delta \cdot 100\%$, but they do not participate in losses. The problem is to choose a value of the parameter $\delta$ and an amount of capital $u$ so as to minimize the premium $c$, under the restriction $E(D_n) = du$ and given parameters $(\alpha, \beta, d, \psi)$. The problem can be reformulated so as to solve it numerically, making use of the De Vylder approximation.

Solution. Let us write the state of the process after $n$ periods in the form:

$$R_n = u - (V_1 + \cdots + V_n),$$

with increment $-V_n$. The variable $V_n$ can then be defined as:

$$V_n = \begin{cases} W_n - c & \text{when } W_n > c, \\ (1 - \delta)(W_n - c) & \text{when } W_n \le c. \end{cases}$$

According to the De Vylder method the ruin probability is approximated by:

$$\psi_{dV}(u) = \{1 + R^{(D)} \rho\}^{-1} \exp\left[-R^{(D)} u \{1 + R^{(D)} \rho\}^{-1}\right],$$


where $R^{(D)} = -2 E(V) \sigma^{-2}(V)$ and $\rho = \frac{1}{3} \mu_3(V) \sigma^{-2}(V)$; for simplicity, the year index $n$ has been omitted. In order to minimize the premium under the restrictions:

$$\psi_{dV}(u) = \psi, \qquad E(D) = du, \qquad \delta \in (0, 1), \qquad u > 0,$$

and under predetermined values of $(\alpha, \beta, d, \psi)$, it suffices to express the expectation of the variable $D$ and the cumulants of order 1, 2, and 3 of the variable $V$ as functions of these parameters and variables. First we derive the raw moments of order 1, 2, and 3 of the variable $D$. From its definition we obtain:

$$E(D^k) = \delta^k \int_0^c (c - x)^k \, dF_W(x),$$

which (after some calculations) leads to the following results:

$$E(D) = \delta \left\{ c F_{\alpha,\beta}(c) - \frac{\alpha}{\beta} F_{\alpha+1,\beta}(c) \right\},$$

$$E(D^2) = \delta^2 \left\{ c^2 F_{\alpha,\beta}(c) - 2c \frac{\alpha}{\beta} F_{\alpha+1,\beta}(c) + \frac{\alpha(\alpha+1)}{\beta^2} F_{\alpha+2,\beta}(c) \right\},$$

$$E(D^3) = \delta^3 \left\{ c^3 F_{\alpha,\beta}(c) - 3c^2 \frac{\alpha}{\beta} F_{\alpha+1,\beta}(c) + 3c \frac{\alpha(\alpha+1)}{\beta^2} F_{\alpha+2,\beta}(c) - \frac{\alpha(\alpha+1)(\alpha+2)}{\beta^3} F_{\alpha+3,\beta}(c) \right\},$$

where $F_{\alpha+j,\beta}$ denotes the cdf of the gamma distribution with parameters $(\alpha + j, \beta)$. Using the relation $V - D = W - c$, and taking into account that:

$$E\{D^m (-V)^n\} = \delta^m (1 - \delta)^n \int_0^c (c - x)^{m+n} \, dF_W(x) = \left( \frac{1 - \delta}{\delta} \right)^n E(D^{m+n}),$$

we easily obtain the raw moments of the variable $V$:

$$E(V) = \frac{\alpha}{\beta} - c + E(D),$$

$$E(V^2) = \frac{\alpha}{\beta^2} + \left( \frac{\alpha}{\beta} - c \right)^2 - \left\{ 1 + 2\, \frac{1 - \delta}{\delta} \right\} E(D^2),$$

$$E(V^3) = \left( \frac{\alpha}{\beta} - c \right)^3 + 3\, \frac{\alpha}{\beta^2} \left( \frac{\alpha}{\beta} - c \right) + \frac{2\alpha}{\beta^3} + \left\{ 1 + 3\, \frac{1 - \delta}{\delta} + 3 \left( \frac{1 - \delta}{\delta} \right)^2 \right\} E(D^3),$$


and hence the cumulants of this variable follow as well. Provided we are able to evaluate numerically the cdf of the gamma distribution, all elements needed to construct the numerical procedure solving the problem are complete.

In Example 4 a specific rule for sharing risk between the shareholders and the company has been applied. On the other hand, the assumption on the distribution of the variable $W$ is of some general advantage, as the shifted gamma distribution is often used to approximate the distribution of the aggregate loss. We will make use of it in Example 6, presented in the next section.
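The closed-form moment formulas of Example 4 can be cross-checked numerically. The sketch below (Python, standard library only; the series-based gamma cdf and the parameter values are my own illustrative choices) compares $E(D) = \delta\{c F_{\alpha,\beta}(c) - (\alpha/\beta) F_{\alpha+1,\beta}(c)\}$ with direct integration of the density, and verifies $E(V) = \alpha/\beta - c + E(D)$:

```python
import math

def gamma_cdf(x, a, b):
    """Regularized lower incomplete gamma P(a, b*x), via its power series."""
    t = b * x
    term = math.exp(-t + a * math.log(t) - math.lgamma(a + 1.0))
    s = term
    k = 0
    while term > 1e-16 * s:
        k += 1
        term *= t / (a + k)
        s += term
    return s

def gamma_pdf(x, a, b):
    return math.exp(a * math.log(b) + (a - 1.0) * math.log(x)
                    - b * x - math.lgamma(a))

# Illustrative parameters: W ~ gamma(4, 0.5), premium c, sharing delta.
a, b, c, delta = 4.0, 0.5, 10.0, 0.4

# E(D) and E(V) by trapezoidal integration, with D = delta*max(0, c - W)
# and V = W - c + D; the gamma tail is cut off far out.
upper, n = a / b + 12.0 * math.sqrt(a) / b, 200_000
h = upper / n
eD = eV = 0.0
for i in range(1, n):
    x = i * h
    w = gamma_pdf(x, a, b)
    dval = delta * max(0.0, c - x)
    eD += w * dval
    eV += w * (x - c + dval)
eD *= h
eV *= h

# Closed form from the text.
eD_closed = delta * (c * gamma_cdf(c, a, b)
                     - (a / b) * gamma_cdf(c, a + 1.0, b))
```

The same pattern extends to $E(D^2)$ and $E(D^3)$, using the cdfs $F_{\alpha+2,\beta}$ and $F_{\alpha+3,\beta}$.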

20.7 Ruin Probability, Rate of Return and Reinsurance

In this section premium calculation is considered under a predetermined ruin probability and a predetermined rate of dividend, with reinsurance included. First, an example involving a fixed dividend is presented.

20.7.1 Fixed Dividends

Example 5 We assume (as in Example 2) that the aggregate loss $W$ has a compound Poisson distribution with expected number of claims $\lambda_P = 1000$, and with the severity distribution being a truncated-Pareto distribution with parameters $(\alpha, \lambda, M_0) = (2.5, 1.5, 500)$. We assume also that the excess of each loss over the limit $M \in (0, M_0]$ is ceded to the reinsurer using the same pricing formula:

$$\Pi_R(\bar{W}_M) = (1 + re_0)\, E(\bar{W}_M) + re_1\, \mathrm{VAR}(\bar{W}_M).$$

The problem lies in choosing the value of the retention limit $M$ and the initial capital $u$ which minimize the total premium paid by policyholders, under predetermined values of the parameters $(d, \psi, re_0, re_1)$. The problem can be solved by applying the De Vylder and Beekman–Bowers approximation methods. As allowing for reinsurance leads to numerical solutions anyway, there is no reason any more to apply the simplified version of the De Vylder method, as in Example 3.


Solution. The risk process can now be written as:

$$R_n = u + \{c - du - \Pi_R(\bar{W}_M)\}\, n - (W_{M,1} + \cdots + W_{M,n}).$$

The problem takes the form of minimization of the premium $c$ under restrictions which, in the case of the De Vylder method, take the form:

$$\psi = \{1 + R^{(D)} \rho\}^{-1} \exp\left[-R^{(D)} u \{1 + R^{(D)} \rho\}^{-1}\right],$$
$$R^{(D)} = 2 \{c - du - \Pi_R(\bar{W}_M) - E(W_M)\}\, \sigma^{-2}(W_M),$$
$$\rho = \tfrac{1}{3} \mu_3(W_M)\, \sigma^{-2}(W_M),$$

and in the version based on the Beekman–Bowers approximation method take the form:

$$c - du - \Pi_R(\bar{W}_M) = (1 + \theta)\, E(W_M),$$
$$\psi = (1 + \theta)^{-1} \{1 - G_{\alpha,\beta}(u)\},$$
$$\alpha \beta^{-1} = (1 + \theta)\, E(Y_M^2)\, \{2\theta E(Y_M)\}^{-1},$$
$$\alpha (\alpha + 1) \beta^{-2} = (1 + \theta) \left[ \frac{E(Y_M^3)}{3\theta E(Y_M)} + 2 \left\{ \frac{E(Y_M^2)}{2\theta E(Y_M)} \right\}^2 \right].$$

Moments of the first three orders of the variable $Y_M$, as well as cumulants of the variables $W_M$ and $\bar{W}_M$, are calculated in the same way as in Example 2. All these characteristics are functions of the parameters $(\alpha, \lambda, \lambda_P)$ and the decision variable $M$.
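The Beekman–Bowers step of fitting the gamma cdf $G_{\alpha,\beta}$ to the two moment conditions above reduces to two lines of algebra. A small sketch (Python; the function names are mine): for exponential severities, with $E(Y) = \mu$, $E(Y^2) = 2\mu^2$, $E(Y^3) = 6\mu^3$, the fitted $G$ comes out exponential and the approximation reproduces the exact Cramér–Lundberg ruin probability, which makes a convenient self-check:

```python
def bb_gamma_moments(e1, e2, e3, theta):
    """First two raw moments of the Beekman-Bowers gamma G, from the first
    three raw moments e1, e2, e3 of the severity and the loading theta."""
    m1 = (1.0 + theta) * e2 / (2.0 * theta * e1)
    m2 = (1.0 + theta) * (e3 / (3.0 * theta * e1)
                          + 2.0 * (e2 / (2.0 * theta * e1)) ** 2)
    return m1, m2

def gamma_from_moments(m1, m2):
    """Solve alpha/beta = m1 and alpha*(alpha+1)/beta^2 = m2; subtracting
    m1^2 from the second equation gives the variance alpha/beta^2."""
    beta = m1 / (m2 - m1 * m1)
    return m1 * beta, beta

# Exponential severity with mean mu: the fit should give alpha = 1 and
# beta = theta / {mu * (1 + theta)}, i.e. psi(u) reduces to the exact
# exponential-claims formula (1+theta)^{-1} exp{-theta*u/(mu*(1+theta))}.
theta, mu = 0.25, 1.0
m1, m2 = bb_gamma_moments(mu, 2.0 * mu**2, 6.0 * mu**3, theta)
alpha, beta = gamma_from_moments(m1, m2)
```

In Example 5 the inputs $E(Y_M^k)$ are of course the moments of the truncated-Pareto severity limited at $M$, not the exponential ones used in this check.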

20.7.2 Interpretation of Solutions Obtained in Example 5

The results of the numerical optimization are reported in Table 20.2. In the basic variant of the problem, the parameters have been set at the level $(d, \psi, re_0, re_1) = (5\%, 5\%, 100\%, 0.5\%)$. In variant 6 the value $M = M_0$ is assumed, so this variant represents the lack of reinsurance. Variants 2, 3, 4 and 5 differ from the basic variant by the value of one of the parameters $(d, \psi, re_0, re_1)$. In variant 2 the dividend rate $d$ has been increased so as to obtain the same level of premium as is obtained in variant 6. The results can be summarized as follows:


Table 20.2: Minimization of the premium c with respect to the choice of capital u and retention limit M. Basic characteristics of the variable W: μW = 999.8, σW = 74.2, γW = 0.779, γ2,W = 2.654.

Variant of the problem     Method   Retention    Initial      Loading
                                    limit M      capital u    (c − μW)/μW
V.1: (basic)               BB       184.2        416.6        4.17%
                           dV       185.2        416.3        4.16%
V.2: d = 5.2%              BB       179.5        408.2        4.25%
                           dV       180.5        407.9        4.25%
V.3: ψ = 2.5%              BB       150.1        463.3        4.65%
                           dV       156.3        461.7        4.63%
V.4: re0 = 50%             BB       126.1        406.2        4.13%
                           dV       127.1        406.0        4.13%
V.5: re1 = 0.25%           BB       139.7        409.0        4.13%
                           dV       140.5        408.8        4.13%
V.6: (no reinsurance)      BB       500.0        442.9        4.25%
                           dV       500.0        442.7        4.25%

(BB and dV denote the Beekman–Bowers and De Vylder methods of approximation of the ruin probability.)

STFrein02.xpl

(i) Reinsurance results either in a premium reduction under an unchanged rate of dividend (compare variant 6 with variant 1), or in an increase of the rate of dividend under the same premium level (compare variant 2 with variant 1). In both cases the need for capital is also reduced. If we wish to obtain a reduction of the premium as a result of the introduced reinsurance, the reduction of capital is slightly smaller than in the case when reinsurance serves to enlarge the rate of dividend.

(ii) Comparison of variants 3 and 1 shows that increasing safety (reduction of the parameter ψ from 5% to 2.5%) results in a significant growth of the premium. This effect is caused both by the increase of capital (which burdens the premium through a larger cost of dividends) and by the increase of the costs of reinsurance, due to the reduced retention limit. It is also worth noticing that predetermining ψ = 2.5% results in a significant divergence of the results obtained by the two methods of approximation; in the case when ψ = 5% the difference is negligible.

(iii) The results obtained in variants 4 and 5 show that the optimal level of reinsurance is quite sensitive to changes of the parameters reflecting the costs of reinsurance.

20.7.3 Flexible Dividends

In the next example the assumptions are almost the same as in Example 5, except that the fixed dividend is replaced by a dividend dependent on the financial result in the same manner as in Example 4.

Example 6 Assumptions on the aggregate loss $W$ are the same as in Example 5: a compound Poisson distribution with truncated-Pareto severity, with parameters $(\lambda_P, \alpha, \lambda, M_0)$. Assumptions concerning the available reinsurance (excess of each loss over $M \in (0, M_0]$, pricing formula characterized by the parameters $re_0$ and $re_1$) are also the same. The dividend is defined as in Example 4, with a suitable correction due to the reinsurance allowed:

$$D_n = \max\left\{0, \delta \left(c - W_{M,n} - \Pi_R(\bar{W}_M)\right)\right\}, \qquad \delta \in (0, 1).$$

Now the problem lies in choosing the capital $u$, the risk-sharing parameter $\delta$ and the retention limit $M$ so as to minimize the premium $c$ under the restriction $E(D_n) = du$, and predetermined values of the parameters characterizing the distribution $(\lambda_P, \alpha, \lambda, M_0)$, the parameters characterizing reinsurance costs $(re_0, re_1)$ and the parameters characterizing profitability and safety $(d, \psi)$.

Solution. Under the predetermined values of the decision variables $(u, \delta, M)$ and the remaining parameters, the risk process has the form:

$$R_n = u - (V_1 + \cdots + V_n),$$

with increment $-V_n$, where the variable $V_n$ is defined as:

$$V_n = \begin{cases} W_{M,n} - c + \Pi_R(\bar{W}_M) & \text{when } W_{M,n} > c - \Pi_R(\bar{W}_M), \\ (1 - \delta) \left\{ W_{M,n} - c + \Pi_R(\bar{W}_M) \right\} & \text{when } W_{M,n} \le c - \Pi_R(\bar{W}_M). \end{cases}$$

The problem differs from that presented in Example 4 by two factors: the variable $W_M$ is not gamma distributed, and the premium $c$ is now replaced by the constant $c - \Pi_R(\bar{W}_M)$. However, the variable $W_M$ can be approximated by a shifted gamma distribution with parameters $(x_0, \alpha_0, \beta_0)$ chosen so as to match the moments of order 1, 2, and 3 of the original variable $W_M$. Suitable calculations lead to the definition of the variable $\tilde{V}$ that approximates the original variable $V_n$:

$$\tilde{V} = \begin{cases} X - c^* & \text{when } X > c^*, \\ (1 - \delta)(X - c^*) & \text{when } X \le c^*, \end{cases}$$


where the variable $X$ has a gamma$(\alpha_0, \beta_0)$ distribution, and the constant $c^*$ equals $c - \Pi_R(\bar{W}_M) - x_0$. Thus we can express the moments of the variable $\tilde{V}$ as functions of the parameters $(\alpha_0, \beta_0, c^*, \delta)$ in exactly the same way as was done for the variable $V$ and the parameters $(\alpha, \beta, c, \delta)$ in Example 4. It suffices in turn to approximate the ruin probability with the De Vylder method:

$$\psi_{dV}(u) = \{1 + R^{(D)} \rho\}^{-1} \exp\left[-R^{(D)} u \{1 + R^{(D)} \rho\}^{-1}\right],$$

where $R^{(D)} = -2 E(\tilde{V}) \sigma^{-2}(\tilde{V})$ and $\rho = \frac{1}{3} \mu_3(\tilde{V}) \sigma^{-2}(\tilde{V})$, and where the expected value of the dividend $E(D)$ satisfies the restriction:

$$c - \Pi_R(\bar{W}_M) - E(W_M) - E(D) = -E(\tilde{V}).$$

Hence it is clear that the problem of minimization of the premium under the restrictions $\psi_{dV}(u) = \psi$, $E(D) = du$, $\delta \in (0, 1)$, $u > 0$, $M \in (0, M_0]$ and predetermined values of the parameters $(\lambda_P, \alpha, \lambda, re_0, re_1, d, \psi, M_0)$ is in essence analogous to the problem presented in Example 4, and differs only in details. The set of decision variables $(u, \delta)$ of Example 4 is now extended by the additional variable $M$, and the variable $W_M$ is only approximately gamma distributed.
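The moment matching of the shifted gamma $(x_0, \alpha_0, \beta_0)$ used above has a simple closed form: for Gamma$(\alpha_0, \beta_0)$ the variance is $\alpha_0/\beta_0^2$ and the third central moment is $2\alpha_0/\beta_0^3$, so both parameters follow from the variance and third central moment, and $x_0$ absorbs the mean. A sketch (Python; the function name and the test values are mine):

```python
def shifted_gamma_fit(mean, var, mu3):
    """Fit x0 + Gamma(a0, b0) to a given mean, variance and (positive)
    third central moment mu3, using mu3/var = 2/b0 and var = a0/b0**2."""
    b0 = 2.0 * var / mu3
    a0 = var * b0 * b0
    x0 = mean - a0 / b0
    return x0, a0, b0

# Round trip: the shifted gamma with x0 = -2, a0 = 3, b0 = 0.5 has
# mean = x0 + a0/b0 = 4, var = a0/b0^2 = 12, mu3 = 2*a0/b0^3 = 48,
# so the fit should recover the original parameters exactly.
x0, a0, b0 = shifted_gamma_fit(4.0, 12.0, 48.0)
```

In Example 6 the inputs would be the first three cumulants of the compound Poisson variable $W_M$, computed as in Example 2.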

20.7.4 Interpretation of Solutions Obtained in Example 6

The results are presented in Table 20.3. In all variants the predetermined values of the parameters $(\lambda_P, \alpha, \lambda, M_0, re_0, re_1) = (1000, 2.5, 1.5, 500, 100\%, 0.5\%)$ are the same. In variant 1 (basic) the ruin probability $\psi = 5\%$ is assumed, and reinsurance is allowed. Variant 2 differs from the basic one by a higher safety standard ($\psi = 2.5\%$), whereas variant 3 differs by the lack of reinsurance.

In each variant three slightly different versions of the problem have been solved. Version A is a simplified one, assuming a fixed dividend rate $d = 5\%$, so that $D_n = du$; consequently the minimization of the premium is conducted with respect to $(u, M)$ only. In fact, the results from Table 20.2 are quoted for this version. Versions B and C assume minimization with respect to $(u, M, \delta)$. Version B plays the role of the basic version, where the premium $c$ is minimized under the expected rate of dividend $d = 5\%$. In version C the rate of dividend $d$ has been chosen so that it leads (through minimization) to the same premium level as obtained previously in version A. Thus two alternative effects of the shareholders' consent to participate in risk can be observed. The effect in terms of a reduction of the premium (the expected rate of dividend remaining unchanged) is observed


Table 20.3: Minimization of the premium c under three variants of assumptions and three versions of the problem.

Variant of assumptions     Version   d       M       u       (c−μW)/μW   δ       σD/u
V.1: ψ = 5%, reins.        A         5%      185.2   416.3   4.16%       –       0
                           B         5%      189.3   406.0   3.35%       41.7%   5.02%
                           C         8.54%   143.6   305.2   4.16%       48.6%   8.14%
V.2: ψ = 2.5%, reins.      A         5%      156.3   461.7   4.63%       –       0
                           B         5%      157.0   447.5   3.67%       44.4%   4.94%
                           C         8.96%   122.9   329.7   4.63%       52.3%   8.29%
V.3: ψ = 5%, no reins.     A         5%      500.0   442.7   4.25%       –       0
                           B         5%      500.0   429.8   3.45%       42.0%   4.70%
                           C         8.09%   500.0   340.0   4.25%       48.2%   7.15%

STFrein03.xpl

when we compare versions B and A. The effect in terms of an increase of the expected rate of dividend (the premium being fixed) is observed when versions C and A are compared.

The results can be summarized as follows. In each of the three variants, the shareholders' consent to risk participation allows for a substantial reduction of the premium (the loading is reduced by about 20%). It is interesting that the shareholders' consent to participate in risk allows for a much more radical reduction of the premium than reinsurance does. This results from the fact that the reinsurance costs have been explicitly involved in the optimization, whereas the "costs of the shareholders' consent to participate in risk" have not been accounted for. Comparison of versions C with versions A in each variant of the problem shows the outcome (an increase of the expected rate of dividend) of the shareholders' consent to share risk. In the last column of the table the (relative) standard deviation σD/u of the dividends is reported; it can serve as a measure of the "cost" at which this outcome, in terms of the increment of the expected dividend rate, is obtained.

Comparing versions B and C in variants 1 and 2 we can observe the effects of the increment in the expected rate of dividend. Apart from the obvious effect on the premium increase, a reduction of capital can also be observed (the cost of capital is higher), and at the same time the retention limits are reduced. Also the sharing parameter δ increases, as does the (relative) standard deviation of dividends σD/u.


Comparing variants 1 and 2 (in all versions A, B, and C) we notice a substantial increase of the premium as an effect of the higher safety standard (smaller ψ). Also the amount of capital needed increases and the retention limit is reduced. At the same time a slight increase of the sharing parameter δ is observed (versions B and C).

20.8 Final Remarks

It should be noted that all the models presented, including risk participation of reinsurers and shareholders, lead only to a modification of the distribution of the increment of the risk process; the mutual independence of subsequent increments and their identical distribution is still preserved. There are also models where decisions concerning premiums, reinsurance, and dividends depend on the current size of the capital. In general, models of this type require stochastic control techniques. Nevertheless, the models presented in this chapter preserve simplicity, and give insight into the long-run consequences of certain decision rules, provided they remain unchanged for a long time. This insight is worthwhile despite the fact that in reality decisions are made on the basis of the current situation, and no fixed strategy remains unchanged under changing conditions of the environment. On the other hand, it is always a good idea to have a reference point when the consequences of decisions motivated by current circumstances have to be evaluated.

Bibliography

Asmussen, S. (2000). Ruin Probabilities, World Scientific, Advanced Series on Statistical Science & Applied Probability, Vol. 2, Singapore.

Bowers, N.L., Gerber, H.U., Hickman, J.C., Jones, D.A., and Nesbitt, C.J. (1986, 1997). Actuarial Mathematics, Society of Actuaries, Itasca, Illinois.

Bühlmann, H. (1985). Premium calculation from top down, Astin Bulletin 15: 89–101.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Hill, G.W. and Davis, A.W. (1968). Generalized asymptotic expansions of Cornish-Fisher type, Ann. Math. Statist. 39: 1264–1273.

Kendall, M. and Stuart, A. (1977). The Advanced Theory of Statistics, 4th ed., MacMillan.

Otto, W. (2004). Nonlife Insurance – Part I – Theory of Risk, series "Mathematics in Insurance", WNT (in Polish).

Panjer, H.H. (ed.), Boyle, P.P., Cox, S.H., Dufresne, D., Gerber, H.U., Mueller, H.H., Pedersen, H.W., Pliska, S.R., Sherris, M., Shiu, E.S., and Tan, K.S. (1998). Financial Economics with Applications to Investments, Insurance and Pensions, The Actuarial Foundation, Schaumburg, Illinois.

Shapley, L.S. (1953). A Value for n-Person Games, in Kuhn, H.W. and Tucker, A.W. (eds.), Contributions to the Theory of Games II, Princeton University Press, 307–317.

Part III

General

21 Working with the XQC

Szymon Borak, Wolfgang Härdle, and Heiko Lehmann

21.1 Introduction

An enormous number of statistical methods have been developed in quantitative finance during the last decades. Nonparametric methods, bootstrapping time series, wavelets, and Markov chain Monte Carlo are now almost standard in statistical applications. To implement these new methods the method developer usually uses a programming environment he is familiar with. Thus, automatically, such methods are only available for preselected software packages, and not for widely used standard software packages like MS Excel. To apply these new methods to empirical data, a potential user faces a number of problems, or it may even be impossible for him to use the methods without rewriting them in a different programming language. Even someone who only wants to apply a newly developed method to simulated data in order to understand the methodology is confronted with the drawbacks described above.

A very similar problem occurs in teaching statistics at the undergraduate level. Since students (by definition!) have their preferred software and often do not have access to the same statistical software packages as their teacher, illustrating examples have to be executable with standard tools.

The delayed proliferation of new statistical technology over heterogeneous platforms and the evident student/teacher software gap are examples of inefficient distribution of quantitative methodology. This chapter describes the use of a platform-independent client that is the basis for e-books, transparencies, and other knowledge-based systems.

In general, two statisticians are on either side of the distribution process of newly implemented methods: the provider (inventor) of a new technique (algorithm) and the user who wants to apply (understand) the new technique. The aim of the XploRe Quantlet client/server architecture is to bring these statisticians closer to each other. The XploRe Quantlet Client (XQC) represents the


front end – the user interface (UI) of this architecture, allowing access to the XploRe server, its methods and data. The XQC is programmed entirely in Java and does not depend on a specific computer platform. It runs on Windows and Mac platforms as well as on Unix and Linux machines.

The following sections describe the components and functionalities the XQC offers. Section 21.2.1 gives a short overview of the possible configuration settings of the XQC, which allow influencing the behavior of the client. Section 21.2.2 explains how to connect the XQC to an XploRe Quantlet Server. A detailed description of the XQC's components – desktop, Quantlet editor, data editor and method tree – is given in Sections 21.3 to 21.3.3. Section 21.3.4 finally explains the graphical features offered by the XploRe Quantlet Client.

21.2 The XploRe Quantlet Client

The XploRe Quantlet Client can be started in two different ways, depending on whether the XQC is supposed to run as a standalone application or as an applet embedded within an HTML page. The XQC comes packed in a single Java Archive (JAR) file, which allows for easy use: the JAR file can be run as an application as well as an applet. Running the XQC as an application does not require any programming skills. Provided that a Java Runtime Environment is installed on the computer on which the XQC is supposed to be executed, the xqc.jar will automatically be recognized as an executable JAR file that opens with the program javaw. If the XQC is embedded in an HTML page it runs as an applet and starts right after the page is shown.

21.2.1 Configuration

Property files allow the XQC to be configured to meet the special needs of the user. These files can be used to manage the appearance and behavior of the XQC, and any text editor can be used to edit them. Generally, the use of all information is optional. In its current version, the XQC works with three different configuration files. The xqc.ini file contains important information about the basic setup of the XploRe Quantlet Client, such as the server and port the client is supposed to connect to. It also contains information


Figure 21.1: Manual input for server and port number.

about the size of the client. This information can be maintained either relative to the actual size of the screen, by using a factor, or by stating the exact width and height. If this information is missing, the XQC starts with its default values.

The xqc language.ini file allows for setting up the XQC's language. This file contains all texts used within the XQC. To localize the client, these texts have to be translated. If no language file can be found, the client starts with its default setup, showing all menus and messages in English.

The xqc methodtree.ini file finally contains information about the method tree that can be shown as part of the METHOD/DATA window, see Section 21.3.2. A detailed description of the setup of the method tree is part of Section 21.3.3.

21.2.2 Getting Connected

After starting, the XQC attempts to access and read information from the configuration files. If no configuration file is found, error messages pop up. If server and port information cannot be found, a pop-up appears that enables manual input of the server and port number, as displayed in Figure 21.1. The screenshot in Figure 21.2 shows the XQC after it has been started and connected to an XploRe server. A traffic light in the lower right corner of the screen indicates the actual status of the server. A green light means the client


Figure 21.2: XQC connected and ready to work.

has successfully connected to the server and the server is ready to work. If the server is busy, computing previously received XploRe code, the traﬃc light will be set to yellow. A red light indicates that the XQC is not connected to the server.

21.3 Desktop

If no further restrictions or features are set in the configuration file (e.g. not showing a certain window, or starting with the execution of a certain XploRe Quantlet), the XQC looks as shown in the screenshot. It opens with the two screen components CONSOLE and OUTPUT/RESULT window. The CONSOLE allows single-line XploRe commands to be sent to the server to be


executed immediately. It also offers a history of the last 20 commands sent to the server. To repeat a command from the history, all that is required is a mouse click on the command, and it will be copied to the command line. Pressing the 'Return' key on the keyboard executes the XploRe command.

Text output coming from the XploRe server is shown in the OUTPUT/RESULT window. Any displayed text can be selected and copied for use in other applications – e.g. for the presentation of results within a scientific article.

At the top of the screen the XQC offers additional functions via a menu bar. These functions are grouped into four categories. The XQC menu contains the features Connect, Disconnect, Reconnect and Quit. Depending on the actual server status, not every feature is enabled. For example, if the client is not connected (the server status is indicated by a red traffic light) it does not make sense to disconnect or reconnect; if the client is already connected (the server status equals a green light) the Connect feature is disabled.

21.3.1 XploRe Quantlet Editor

The Program menu contains the features New Program, Open Program (local). . . and Open Program (net). . . . New Program opens a new and empty text editor window. This window enables the user to construct his own XploRe Quantlets. The feature Open Program (local) offers the possibility of accessing XploRe Quantlets stored on the local hard disk drive; it is only available if the XQC is running as an application or a certified applet. Due to the Java sandbox restrictions, an XQC running as an unsigned applet cannot access local programs. If the user has access to the internet, the menu item Open Program (net) can be useful. This feature allows the opening of Quantlets stored on a remote web server; all it needs is the filename and the URL at which the file is located. Figure 21.3 shows a screenshot of the editor window containing a simple XploRe Quantlet. Two icons offer actions on the XploRe code:

• The first icon represents probably the most important feature – it sends the XploRe Quantlet to the server for execution.


Figure 21.3: XploRe Editor window.

• The second icon saves the XploRe Quantlet to the local computer (not possible if running the XQC as an unsigned applet).

The Quantlet shown in Figure 21.3 assigns two three-dimensional standard normal samples to the variables x and y. The generated data are given a certain color, shape and size using the command setmaskp. The result is finally shown in a single display.

21.3.2 Data Editor

The Data menu contains the features New Data..., Open Data (local)..., Open Data (net)..., Download DataSet from Server... and DataSets uploaded to Server. New Data generates a new, empty data window. Before the data frame opens, a pop-up window as shown in Figure 21.4 appears, asking for the desired dimension of the new data set – the number of rows and columns. The XQC needs this information to create the spreadsheet. This choice is not final; rows and columns can still be added and deleted later on.

21.3

Desktop

497

Figure 21.4: Dimension of the Data Set.

The menu item Open Data (local) enables the user to open data sets stored on the local hard disk. Again, access to local resources of the user's computer is only possible if the XQC is running as an application or a certified applet. The file is interpreted as a plain text file: line breaks are treated as new rows of the data set, and within a line the data belonging to different columns must be separated by either a ';' or a tab (separating the data by just a space will force the XQC to put the complete line into a single cell). Open Data (net) lets the user open a data set stored on a web server by specifying its URL.

The menu item Download DataSet from Server offers the possibility to download data from the server. The data are automatically opened in a new method and data window, so that all features of that window (e.g. applying methods, saving, ...) are available. A feature that is especially helpful for research purposes is the menu item DataSets uploaded to Server. It opens a window containing a list of the objects uploaded to the server via the data window or the console. Changes to these objects are documented in an object history. For performance reasons, only uploaded data and actions on data from the CONSOLE and the TABLE MODEL are recorded.

The appearance of the data window depends on the settings in the configuration file. If a method tree is defined and supposed to be shown, the window shows
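These separator rules can be mimicked with a short Python sketch (purely illustrative; `parse_xqc_data` is a hypothetical helper, not part of the XQC):

```python
def parse_xqc_data(text):
    """Split raw text into rows and columns following the rules above:
    each line becomes a row; within a line, cells are separated by ';'
    or a tab. A space is NOT a separator, so a space-separated line
    ends up in a single cell."""
    rows = []
    for line in text.splitlines():
        # Normalize ';' to a tab, then split on tabs only.
        cells = line.replace(";", "\t").split("\t")
        rows.append([cell.strip() for cell in cells])
    return rows

# Semicolon- or tab-separated lines yield two columns ...
print(parse_xqc_data("1.2;3.4\n5.6\t7.8"))  # → [['1.2', '3.4'], ['5.6', '7.8']]
# ... while a space-separated line stays in one cell.
print(parse_xqc_data("1.2 3.4"))            # → [['1.2 3.4']]
```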


Figure 21.5: Combined Data and Method Window.

the method tree on the left and the data spreadsheet on the right part of the frame. If no method tree has been defined, only the spreadsheet is shown. The method tree is discussed in more detail in Section 21.3.3. Figure 21.5 shows a screen shot of the combined data and method frame. Icons on the upper part of the data and method window offer additional functionalities:

• Uploads data to the server under a variable name to be specified – if columns or cells are selected, only this selection is uploaded; otherwise the entire data set.

• Saves the data to the local computer (not possible if running the XQC as an unsigned applet).

• Copy and paste.

• Switches the column or cell selection mode on and off. Selected columns/cells can be uploaded to the server, or methods can be executed on them.

The spreadsheet of the data and method window also offers a context menu containing the following items:

• Copy
• Paste
• No Selection Mode – switches OFF the column or cell selection mode.
• Column Selection Mode – switches ON the column selection mode.
• Cell Selection Mode – switches ON the cell selection mode.
• Set Row as Header Line
• Set column Header
• Delete single Row
• Insert single Row
• Add single Row
• Delete single Column
• Add single Column

Most of the context menu items are self-explanatory. Two items, however, are worth a closer look: 'Set Row as Header Line' and 'Set column Header'. The spreadsheet can hold a header for each column. This information can be used within XploRe Quantlets to label the axes of a plot, making it easier for the user to interpret the graphics. A more detailed description is given in Section 21.3.3. The default header values are COL1, COL2, ..., as shown in Figure 21.6. A single column can be named using the menu item 'Set column Header'; the name is entered in the pop-up window that appears right after choosing this menu item. The same item can also be used to change existing column headers. The spreadsheet also offers the possibility to set all column headers at once. If the


Figure 21.6: Working with the Data and Method Window.

data set already contains a row with header information – either coming from manual input or as part of an opened data set – this row can be set as the header using the menu item 'Set Row as Header Line'. The row containing the currently active cell will be cut out of the data set and pasted into the header line. Setting the header is also possible while opening a data set: after choosing the data, a pop-up asks whether or not the first row of the data set should be used as the header. The context menu features described above of course remain available, so headers can still be set or changed afterwards.

Working with the XQC's method and data window does not require any XploRe programming knowledge; all it requires is a pointing device such as a mouse. Applying, for example, the scatter-plot method to two columns only means to

• switch on the column selection mode,
• mark both columns,
• click on the method 'Scatter Plot'.


The result is a plot as shown in Figure 21.6. As stated above, the selected area can also be uploaded to the server using the upload icon for further investigation. The new variable can then be used within XploRe Quantlets written in the EDITOR window or manipulated via the CONSOLE.

21.3.3 Method Tree

The METHOD TREE is a tool for accessing statistical methods in an easy way. Its setup does not require any Java programming skills; all it needs is the maintenance of two configuration files. Settings in the xqc.ini file tell the XQC whether a method tree is to be shown and where to get the tree information from. The client also needs to know where the methods are stored; the MethodPath setting contains this information. Path statements can either be absolute or relative to the directory in which the XQC has been started. Relative path information must start with XQCROOT. The settings in the example below tell the client to generate a method tree from the file xqc_methodtree.ini, with the XploRe Quantlets stored in the relative subdirectory xqc_quantlets/.

ShowMethodTree = yes
MethodTreeIniFile = xqc_methodtree.ini
MethodPath = XQCROOT/xqc_quantlets/

The actual method tree is set up in a separate configuration file, given by the property MethodTreeIniFile. This file contains the systematic structure of the tree – nodes and children, the method to be executed and the description to be shown within the tree frame.

Node_1 = path name
Child_1.1 = method|description
Child_1.2 = method|description
Child_1.3 = method|description
Node_2 = path name
Node_2.1 = path name
Child_2.1.1 = method|description
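To make the node/child scheme concrete, the following Python sketch reads such entries into a dictionary (purely illustrative; `parse_method_tree` is a hypothetical helper, not part of the XQC):

```python
def parse_method_tree(lines):
    """Parse Node_i / Child_i.j entries into a flat dict keyed by the
    dotted index; each child stores (method, description), split on '|'."""
    tree = {}
    for line in lines:
        if "=" not in line:
            continue
        key, value = (part.strip() for part in line.split("=", 1))
        kind, index = key.split("_", 1)
        if kind == "Node":
            tree[index] = {"name": value, "children": {}}
        elif kind == "Child":
            method, description = (p.strip() for p in value.split("|", 1))
            parent = index.rsplit(".", 1)[0]  # Child_1.2 belongs to Node_1
            tree[parent]["children"][index] = (method, description)
    return tree

config = [
    "Node_1 = Estimation",
    "Child_1.1 = stabreg.xpl|Stabreg",
    "Child_1.2 = stabcull.xpl|Stabcull",
]
tree = parse_method_tree(config)
print(tree["1"]["name"])             # → Estimation
print(tree["1"]["children"]["1.1"])  # → ('stabreg.xpl', 'Stabreg')
```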


The name of the method has to be identical to the name of the XploRe program (Quantlet). The Quantlet itself must contain a procedure with the same name as the method; this procedure is called by the XQC when the method is executed from the method tree.

Example. The following example shows how to set up a simple method tree. First of all, we choose XploRe Quantlets used within this e-book that we want to be part of the method tree. The Quantlets are supposed to generate graphics from data selected in the data spreadsheet, or simply to produce text output. Before they can be used within the method tree, the Quantlets have to be 'wrapped' in a procedure whose name – in our case, for example, 'STFstab08MT' – equals the name of the saved XploRe file. Our example Quantlet STFstab08MT.xpl is based on the original Quantlet STFstab08.xpl used in Chapter 1. The procedure must further have two parameters:

• data – used for passing the selected data to the XploRe Quantlet;
• names – contains the names of the selected columns, taken from the header of the spreadsheet.

It might also be necessary to make some minor adjustments within the Quantlet in order to refer to the parameters handed over by the XQC. Those changes depend on the Quantlet itself.

library("graphic")
proc() = STFstab08MT(data, names)
  ...
endp

Figure 21.7: STFstab08MT.xpl.

The XploRe code within the procedure is not subject to any further restrictions. Once we have programmed the Quantlet, it needs to be integrated into a method tree. For this purpose we define our own configuration file – xqc_methodtree_STF.ini – with the content shown in Figure 21.8.


Node_1 = Stable Distribution
Node_1.1 = Estimation
Child_1.1.1 = stabreg.xpl|Stabreg
Child_1.1.2 = stabcull.xpl|Stabcull
Child_1.1.3 = stabmom.xpl|Stabmom
Node_1.2 = Examples
Child_1.2.1 = STFstab08.xpl|STFstab08
Child_1.2.2 = STFstab09.xpl|STFstab09
Child_1.2.3 = STFstab10.xpl|STFstab10

Figure 21.8: sample tree.ini

We create a node called 'Estimation'. Below this first node we place the Quantlets stabreg.xpl, stabcull.xpl and stabmom.xpl. A second node, 'Examples', contains the Quantlets STFstab08.xpl, STFstab09.xpl and STFstab10.xpl. The text stated to the right of each Quantlet (separated by the '|') is the text we would like to be shown in the method tree.

Now that we have programmed the XploRe Quantlet(s) and set up the method tree, we still need to tell the XQC to show our method tree upon opening data sets.

...
ShowMethodTree = yes
MethodTreeIniFile = xqc_methodtree_STF.ini
MethodPath = XQCROOT/xqc_quantlets/
...

Figure 21.9: Extract of the xqc.ini.

The settings shown in Figure 21.9 tell the XQC to show the method tree defined in our xqc_methodtree_STF.ini file and to use the XploRe Quantlets stored in a subdirectory of the XQC itself. Our method tree is now ready to be tested. Figure 21.10 shows a screenshot of the final result – the method tree set up above.

21.3.4 Graphical Output

The previous sections contain some examples of graphical output shown within a display. The XQC's displays do not only show the graphical results received


Figure 21.10: Final result of our tree example.

from the XploRe server. Besides the possibility to print the graphic, they offer additional features that can be helpful for investigating data, especially for three-dimensional plots. These features can be accessed via the display's context menu.

Figure 21.11 shows a three-dimensional plot of the 236 implied volatilities and the fitted implied volatility surface of the DAX from January 4, 1999. The red points in the plot represent the implied volatilities observed for 7 different maturities, T = 0.13, 0.21, 0.46, 0.71, 0.96, 1.47, 1.97. The plot shows that implied volatilities are observed in strings, and that there are more observations on the strings with small maturities than on those with larger maturities. The surface is obtained with the Nadaraya-Watson kernel estimator. For a more detailed inspection, three-dimensional plots can be rotated using a pointing device such as a mouse (with the left mouse button pressed) or the keyboard's arrow keys. Figure 21.12 shows the same plot as before, rotated by some degrees. Now one can see the implied volatility 'smiles' and 'smirks' and recognize the different curvature for different maturities. For further research it would be helpful to know which data point belongs to which string. Here the XQC's display offers a feature to show


Figure 21.11: Plot of the implied volatility surface from January 4, 1999

the point's coordinates. This feature can be accessed via the display's context menu. 'Showing coordinates' is not the only option: the user can also switch between the three dimensions – 'Show XY', 'Show XZ' and 'Show YZ'. Once 'Showing coordinates' has been chosen, all it needs is to point the mouse at a certain data point in order to get the information. The possibility to configure the XploRe Quantlet Client for special purposes, as well as its platform independence, are features that recommend it for integration into HTML and PDF contents for visualizing statistical and mathematical relationships, as already shown in this e-book.
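The Nadaraya-Watson kernel estimator behind the surface in Figure 21.11 computes a kernel-weighted average of the observed implied volatilities. As a reminder of the idea, here is a minimal one-dimensional sketch in Python (the Gaussian kernel and the bandwidth are illustrative assumptions; the chapter does not specify the settings used for the figure):

```python
import math

def nadaraya_watson(x_obs, y_obs, x, h):
    """Nadaraya-Watson estimate at x: a kernel-weighted average of the
    observed responses, here with an (assumed) Gaussian kernel of
    bandwidth h."""
    weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in x_obs]
    return sum(w * yi for w, yi in zip(weights, y_obs)) / sum(weights)

# On this symmetric, noiseless linear sample the estimate at x = 1.0
# equals the true value.
x_obs = [0.0, 0.5, 1.0, 1.5, 2.0]
y_obs = [0.0, 0.5, 1.0, 1.5, 2.0]
print(round(nadaraya_watson(x_obs, y_obs, 1.0, h=0.1), 3))  # → 1.0
```

For the surface itself the same weighting is applied analogously in two dimensions (moneyness and time to maturity).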


Figure 21.12: Rotating scatter plot showing the context menu.

Figure 21.13: Showing the coordinates of a data point.



Contents

Contributors

Preface

Part I: Finance

1  Stable Distributions
   Szymon Borak, Wolfgang Härdle, and Rafal Weron
   1.1  Introduction
   1.2  Definitions and Basic Characteristics
        1.2.1  Characteristic Function Representation
        1.2.2  Stable Density and Distribution Functions
   1.3  Simulation of α-stable Variables
   1.4  Estimation of Parameters
        1.4.1  Tail Exponent Estimation
        1.4.2  Quantile Estimation
        1.4.3  Characteristic Function Approaches
        1.4.4  Maximum Likelihood Method
   1.5  Financial Applications of Stable Laws

2  Extreme Value Analysis and Copulas
   Krzysztof Jajuga and Daniel Papla
   2.1  Introduction
        2.1.1  Analysis of Distribution of the Extremum
        2.1.2  Analysis of Conditional Excess Distribution
        2.1.3  Examples
   2.2  Multivariate Time Series
        2.2.1  Copula Approach
        2.2.2  Examples
        2.2.3  Multivariate Extreme Value Approach
        2.2.4  Examples
        2.2.5  Copula Analysis for Multivariate Time Series
        2.2.6  Examples

3  Tail Dependence
   Rafael Schmidt
   3.1  Introduction
   3.2  What is Tail Dependence?
   3.3  Calculation of the Tail-dependence Coefficient
        3.3.1  Archimedean Copulae
        3.3.2  Elliptically-contoured Distributions
        3.3.3  Other Copulae
   3.4  Estimating the Tail-dependence Coefficient
   3.5  Comparison of TDC Estimators
   3.6  Tail Dependence of Asset and FX Returns
   3.7  Value at Risk – a Simulation Study

4  Pricing of Catastrophe Bonds
   Krzysztof Burnecki, Grzegorz Kukla, and David Taylor
   4.1  Introduction
        4.1.1  The Emergence of CAT Bonds
        4.1.2  Insurance Securitization
        4.1.3  CAT Bond Pricing Methodology
   4.2  Compound Doubly Stochastic Poisson Pricing Model
   4.3  Calibration of the Pricing Model
   4.4  Dynamics of the CAT Bond Price

5  Common Functional IV Analysis
   Michal Benko and Wolfgang Härdle
   5.1  Introduction
   5.2  Implied Volatility Surface
   5.3  Functional Data Analysis
   5.4  Functional Principal Components
        5.4.1  Basis Expansion
   5.5  Smoothed Principal Components Analysis
        5.5.1  Basis Expansion
   5.6  Common Principal Components Model

6  Implied Trinomial Trees
   Pavel Čížek and Karel Komorád
   6.1  Option Pricing
   6.2  Trees and Implied Trees
   6.3  Implied Trinomial Trees
        6.3.1  Basic Insight
        6.3.2  State Space
        6.3.3  Transition Probabilities
        6.3.4  Possible Pitfalls
   6.4  Examples
        6.4.1  Pre-specified Implied Volatility
        6.4.2  German Stock Index

7  Heston's Model and the Smile
   Rafal Weron and Uwe Wystup
   7.1  Introduction
   7.2  Heston's Model
   7.3  Option Pricing
        7.3.1  Greeks
   7.4  Calibration
        7.4.1  Qualitative Effects of Changing Parameters
        7.4.2  Calibration Results

8  FFT-based Option Pricing
   Szymon Borak, Kai Detlefsen, and Wolfgang Härdle
   8.1  Introduction
   8.2  Modern Pricing Models
        8.2.1  Merton Model
        8.2.2  Heston Model
        8.2.3  Bates Model
   8.3  Option Pricing with FFT
   8.4  Applications

9  Valuation of Mortgage Backed Securities
   Nicolas Gaussel and Julien Tamine
   9.1  Introduction
   9.2  Optimally Prepaid Mortgage
        9.2.1  Financial Characteristics and Cash Flow Analysis
        9.2.2  Optimal Behavior and Price
   9.3  Valuation of Mortgage Backed Securities
        9.3.1  Generic Framework
        9.3.2  A Parametric Specification of the Prepayment Rate
        9.3.3  Sensitivity Analysis

10 Predicting Bankruptcy with Support Vector Machines
   Wolfgang Härdle, Rouslan Moro, and Dorothea Schäfer
   10.1 Bankruptcy Analysis Methodology
   10.2 Importance of Risk Classification in Practice
   10.3 Lagrangian Formulation of the SVM
   10.4 Description of Data
   10.5 Computational Results
   10.6 Conclusions

11 Modelling Indonesian Money Demand
   Noer Azam Achsani, Oliver Holtemöller, and Hizir Sofyan
   11.1 Specification of Money Demand Functions
   11.2 The Econometric Approach to Money Demand
        11.2.1 Econometric Estimation of Money Demand Functions
        11.2.2 Econometric Modelling of Indonesian Money Demand
   11.3 The Fuzzy Approach to Money Demand
        11.3.1 Fuzzy Clustering
        11.3.2 The Takagi-Sugeno Approach
        11.3.3 Model Identification
        11.3.4 Fuzzy Modelling of Indonesian Money Demand
   11.4 Conclusions

12 Nonparametric Productivity Analysis
   Wolfgang Härdle and Seok-Oh Jeong
   12.1 The Basic Concepts
   12.2 Nonparametric Hull Methods
        12.2.1 Data Envelopment Analysis
        12.2.2 Free Disposal Hull
   12.3 DEA in Practice: Insurance Agencies
   12.4 FDH in Practice: Manufacturing Industry

Part II: Insurance

13 Loss Distributions
   Krzysztof Burnecki, Adam Misiorek, and Rafal Weron
   13.1 Introduction
   13.2 Empirical Distribution Function
   13.3 Analytical Methods
        13.3.1 Log-normal Distribution
        13.3.2 Exponential Distribution
        13.3.3 Pareto Distribution
        13.3.4 Burr Distribution
        13.3.5 Weibull Distribution
        13.3.6 Gamma Distribution
        13.3.7 Mixture of Exponential Distributions
   13.4 Statistical Validation Techniques
        13.4.1 Mean Excess Function
        13.4.2 Tests Based on the Empirical Distribution Function
        13.4.3 Limited Expected Value Function
   13.5 Applications

14 Modeling of the Risk Process
   Krzysztof Burnecki and Rafal Weron
   14.1 Introduction
   14.2 Claim Arrival Processes
        14.2.1 Homogeneous Poisson Process
        14.2.2 Non-homogeneous Poisson Process
        14.2.3 Mixed Poisson Process
        14.2.4 Cox Process
        14.2.5 Renewal Process
   14.3 Simulation of Risk Processes
        14.3.1 Catastrophic Losses
        14.3.2 Danish Fire Losses

15 Ruin Probabilities in Finite and Infinite Time
   Krzysztof Burnecki, Pawel Miśta, and Aleksander Weron
   15.1 Introduction
        15.1.1 Light- and Heavy-tailed Distributions
   15.2 Exact Ruin Probabilities in Infinite Time
        15.2.1 No Initial Capital
        15.2.2 Exponential Claim Amounts
        15.2.3 Gamma Claim Amounts
        15.2.4 Mixture of Two Exponentials Claim Amounts
   15.3 Approximations of the Ruin Probability in Infinite Time
        15.3.1 Cramér–Lundberg Approximation
        15.3.2 Exponential Approximation
        15.3.3 Lundberg Approximation
        15.3.4 Beekman–Bowers Approximation
        15.3.5 Renyi Approximation
        15.3.6 De Vylder Approximation
        15.3.7 4-moment Gamma De Vylder Approximation
        15.3.8 Heavy Traffic Approximation
        15.3.9 Light Traffic Approximation
        15.3.10 Heavy-light Traffic Approximation
        15.3.11 Subexponential Approximation
        15.3.12 Computer Approximation via the Pollaczek-Khinchin Formula
        15.3.13 Summary of the Approximations
   15.4 Numerical Comparison of the Infinite Time Approximations
   15.5 Exact Ruin Probabilities in Finite Time
        15.5.1 Exponential Claim Amounts
   15.6 Approximations of the Ruin Probability in Finite Time
        15.6.1 Monte Carlo Method
        15.6.2 Segerdahl Normal Approximation
        15.6.3 Diffusion Approximation
        15.6.4 Corrected Diffusion Approximation
        15.6.5 Finite Time De Vylder Approximation
        15.6.6 Summary of the Approximations
   15.7 Numerical Comparison of the Finite Time Approximations

16 Stable Diffusion Approximation of the Risk Process
   Hansjörg Furrer, Zbigniew Michna, and Aleksander Weron
   16.1 Introduction
   16.2 Brownian Motion and the Risk Model for Small Claims
        16.2.1 Weak Convergence of Risk Processes to Brownian Motion
        16.2.2 Ruin Probability for the Limit Process
        16.2.3 Examples
   16.3 Stable Lévy Motion and the Risk Model for Large Claims
        16.3.1 Weak Convergence of Risk Processes to α-stable Lévy Motion
        16.3.2 Ruin Probability in Limit Risk Model for Large Claims
        16.3.3 Examples

17 Risk Model of Good and Bad Periods
   Zbigniew Michna
   17.1 Introduction
   17.2 Fractional Brownian Motion and Model of Good and Bad Periods
   17.3 Ruin Probability in Limit Risk Model of Good and Bad Periods
   17.4 Examples

18 Premiums in the Individual and Collective Risk Models
   Jan Iwanik and Joanna Nowicka-Zagrajek
   18.1 Premium Calculation Principles
   18.2 Individual Risk Model
        18.2.1 General Premium Formulae
        18.2.2 Premiums in the Case of the Normal Approximation
        18.2.3 Examples
   18.3 Collective Risk Model
        18.3.1 General Premium Formulae
        18.3.2 Premiums in the Case of the Normal and Translated Gamma Approximations
        18.3.3 Compound Poisson Distribution
        18.3.4 Compound Negative Binomial Distribution
        18.3.5 Examples

19 Pure Risk Premiums under Deductibles
   Krzysztof Burnecki, Joanna Nowicka-Zagrajek, and Agnieszka Wyłomańska
   19.1 Introduction
   19.2 General Formulae for Premiums Under Deductibles
        19.2.1 Franchise Deductible
        19.2.2 Fixed Amount Deductible
        19.2.3 Proportional Deductible
        19.2.4 Limited Proportional Deductible
        19.2.5 Disappearing Deductible
   19.3 Premiums Under Deductibles for Given Loss Distributions
        19.3.1 Log-normal Loss Distribution
        19.3.2 Pareto Loss Distribution
        19.3.3 Burr Loss Distribution
        19.3.4 Weibull Loss Distribution
        19.3.5 Gamma Loss Distribution
        19.3.6 Mixture of Two Exponentials Loss Distribution
   19.4 Final Remarks

20 Premiums, Investments, and Reinsurance
   Pawel Miśta and Wojciech Otto
   20.1 Introduction
   20.2 Single-Period Criterion and the Rate of Return on Capital
        20.2.1 Risk Based Capital Concept
        20.2.2 How To Choose Parameter Values?
   20.3 The Top-down Approach to Individual Risks Pricing
        20.3.1 Approximations of Quantiles
        20.3.2 Marginal Cost Basis for Individual Risk Pricing
        20.3.3 Balancing Problem
        20.3.4 A Solution for the Balancing Problem
        20.3.5 Applications
   20.4 Rate of Return and Reinsurance Under the Short Term Criterion
        20.4.1 General Considerations
        20.4.2 Illustrative Example
        20.4.3 Interpretation of Numerical Calculations in Example 2
   20.5 Ruin Probability Criterion when the Initial Capital is Given
        20.5.1 Approximation Based on Lundberg Inequality
        20.5.2 "Zero" Approximation
        20.5.3 Cramér–Lundberg Approximation
        20.5.4 Beekman–Bowers Approximation
        20.5.5 Diffusion Approximation
        20.5.6 De Vylder Approximation
        20.5.7 Subexponential Approximation
        20.5.8 Panjer Approximation
   20.6 Ruin Probability Criterion and the Rate of Return
        20.6.1 Fixed Dividends
        20.6.2 Flexible Dividends
   20.7 Ruin Probability, Rate of Return and Reinsurance
        20.7.1 Fixed Dividends
        20.7.2 Interpretation of Solutions Obtained in Example 5
        20.7.3 Flexible Dividends
        20.7.4 Interpretation of Solutions Obtained in Example 6
   20.8 Final remarks

Part III: General

21 Working with the XQC
   Szymon Borak, Wolfgang Härdle, and Heiko Lehmann
   21.1 Introduction
   21.2 The XploRe Quantlet Client
        21.2.1 Configuration
        21.2.2 Getting Connected
   21.3 Desktop
        21.3.1 XploRe Quantlet Editor
        21.3.2 Data Editor
        21.3.3 Method Tree
        21.3.4 Graphical Output

Index

Contributors

Noer Azam Achsani: Department of Economics, University of Potsdam
Michal Benko: Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Szymon Borak: Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Krzysztof Burnecki: Hugo Steinhaus Center for Stochastic Methods, Wroclaw University of Technology
Pavel Čížek: Center for Economic Research, Tilburg University
Kai Detlefsen: Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Hansjörg Furrer: Swiss Life, Zürich
Nicolas Gaussel: Société Générale Asset Management, Paris
Wolfgang Härdle: Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Oliver Holtemöller: Department of Economics, RWTH Aachen University
Jan Iwanik: Concordia Capital S.A., Poznań
Krzysztof Jajuga: Department of Financial Investments and Insurance, Wroclaw University of Economics
Seok-Oh Jeong: Institut de statistique, Université catholique de Louvain
Karel Komorád: Komerční Banka, Praha
Grzegorz Kukla: Towarzystwo Ubezpieczeniowe EUROPA S.A., Wroclaw
Heiko Lehmann: SAP AG, Walldorf
Zbigniew Michna: Department of Mathematics, Wroclaw University of Economics
Adam Misiorek: Institute of Power Systems Automation, Wroclaw
Pawel Miśta: Institute of Mathematics, Wroclaw University of Technology
Rouslan Moro: Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin
Joanna Nowicka-Zagrajek: Hugo Steinhaus Center for Stochastic Methods, Wroclaw University of Technology
Wojciech Otto: Faculty of Economic Sciences, Warsaw University
Daniel Papla: Department of Financial Investments and Insurance, Wroclaw University of Economics
Dorothea Schäfer: Deutsches Institut für Wirtschaftsforschung e.V., Berlin
Rafael Schmidt: Department of Statistics, London School of Economics
Hizir Sofyan: Mathematics Department, Syiah Kuala University
Julien Tamine: Société Générale Asset Management, Paris
David Taylor: School of Computational and Applied Mathematics, University of the Witwatersrand, Johannesburg
Aleksander Weron: Hugo Steinhaus Center for Stochastic Methods, Wroclaw University of Technology
Rafal Weron: Hugo Steinhaus Center for Stochastic Methods, Wroclaw University of Technology
Agnieszka Wyłomańska: Institute of Mathematics, Wroclaw University of Technology
Uwe Wystup: MathFinance AG, Waldems

Preface

This book is designed for students, researchers, and practitioners who want to be introduced to modern statistical tools applied in finance and insurance. It is the result of a joint effort of the Center for Economic Research (CentER), the Center for Applied Statistics and Economics (C.A.S.E.), and the Hugo Steinhaus Center for Stochastic Methods (HSC). All three institutions brought in their specific profiles and created with this book a wide-angle view on, and solutions to, up-to-date practical problems.

The text is comprehensible for a graduate student in financial engineering as well as for an inexperienced newcomer to quantitative finance and insurance who wants to get a grip on advanced statistical tools applied in these fields. An experienced reader with a sound knowledge of financial and actuarial mathematics will probably skip some sections but will hopefully enjoy the various computational tools. Finally, a practitioner might be familiar with some of the methods; however, the statistical techniques related to modern financial products, such as MBS or CAT bonds, will certainly be of interest.

"Statistical Tools for Finance and Insurance" naturally consists of two main parts, each containing chapters with a strong focus on practical applications. The book starts with an introduction to stable distributions, which are the standard model for heavy-tailed phenomena. Their numerical implementation is thoroughly discussed and applications to finance are given. The second chapter presents the ideas of extreme value and copula analysis as applied to multivariate financial data. This topic is extended in the subsequent chapter, which deals with tail dependence, a concept describing the limiting proportion in which one margin exceeds a certain threshold given that the other margin has already exceeded that threshold.

The fourth chapter reviews the market in catastrophe insurance risk, which emerged in order to facilitate the direct transfer of reinsurance risk associated with natural catastrophes from corporations, insurers, and reinsurers to capital market investors. The next contribution employs functional data analysis for the estimation of smooth implied volatility surfaces. These surfaces are the result of applying an oversimplified market benchmark model – the Black-Scholes formula – to real data. An attractive approach to overcoming this problem is discussed in chapter six, where implied trinomial trees are applied to modeling implied volatilities and the corresponding state-price densities.

An alternative route to tackling the implied volatility smile has led researchers to develop stochastic volatility models. The relative simplicity and the direct link of model parameters to the market make Heston's model very attractive to front office users. Its application to FX option markets is covered in chapter seven. The following chapter shows how the computational complexity of stochastic volatility models can be overcome with the help of the Fast Fourier Transform. In chapter nine the valuation of Mortgage Backed Securities is discussed; the optimal prepayment policy is obtained via optimal stopping techniques. It is followed by a very innovative topic: predicting corporate bankruptcy with Support Vector Machines. Chapter eleven presents a novel approach to money-demand modeling using fuzzy clustering techniques. The first part of the book closes with productivity analysis for cost and frontier estimation; the nonparametric Data Envelopment Analysis is applied to efficiency issues of insurance agencies.

The insurance part of the book starts with a chapter on loss distributions. The basic models for claim severities are introduced and their statistical properties are thoroughly explained. In chapter fourteen, the methods of simulating and visualizing the risk process are discussed. This topic is followed by an overview of approaches to approximating the ruin probability of an insurer; both finite and infinite time approximations are presented. Some of these methods are extended in chapters sixteen and seventeen, where classical and anomalous diffusion approximations to the ruin probability are discussed and extended to cases where the risk process exhibits good and bad periods.

The last three chapters are related to one of the most important aspects of the insurance business – premium calculation. Chapter eighteen introduces the basic concepts, including the pure risk premium and various safety loadings under different loss distributions. Calculation of a joint premium for a portfolio of insurance policies in the individual and collective risk models is discussed as well. The inclusion of deductibles into premium calculation is the topic of the following contribution. The last chapter of the insurance part deals with setting the appropriate level of insurance premium within a broader context of business decisions, including risk transfer through reinsurance and the rate of return on capital required to ensure solvency.

Our e-book offers a complete PDF version of this text and the corresponding HTML files with links to algorithms and quantlets. The reader of this book may therefore easily reconfigure and recalculate all the presented examples and methods via the enclosed XploRe Quantlet Server (XQS), which is also available from www.xplore-stat.de and www.quantlet.com. A tutorial chapter explaining how to set up and use XQS can be found in the third and final part of the book.

We gratefully acknowledge the support of the Deutsche Forschungsgemeinschaft (SFB 373 Quantifikation und Simulation Ökonomischer Prozesse, SFB 649 Ökonomisches Risiko) and Komitet Badań Naukowych (PBZ-KBN 016/P03/99 Mathematical models in analysis of financial instruments and markets in Poland). A book of this kind would not have been possible without the help of many friends, colleagues, and students. For the technical production of the e-book platform and quantlets we would like to thank Zdeněk Hlávka, Sigbert Klinke, Heiko Lehmann, Adam Misiorek, Piotr Uniejewski, Qingwei Wang, and Rodrigo Witzel. Special thanks for careful proofreading and supervision of the insurance part go to Krzysztof Burnecki.

Pavel Čížek, Wolfgang Härdle, and Rafal Weron
Tilburg, Berlin, and Wroclaw, February 2005

Part I

Finance

1 Stable Distributions

Szymon Borak, Wolfgang Härdle, and Rafal Weron

1.1 Introduction

Many of the concepts in theoretical and empirical finance developed over the past decades – including the classical portfolio theory, the Black-Scholes-Merton option pricing model, and the RiskMetrics variance-covariance approach to Value at Risk (VaR) – rest upon the assumption that asset returns follow a normal distribution. However, it has long been known that asset returns are not normally distributed. Rather, the empirical observations exhibit fat tails. This heavy-tailed or leptokurtic character of the distribution of price changes has been repeatedly observed in various markets and may be quantitatively measured by a kurtosis in excess of 3, the value obtained for the normal distribution (Bouchaud and Potters, 2000; Carr et al., 2002; Guillaume et al., 1997; Mantegna and Stanley, 1995; Rachev, 2003; Weron, 2004). It is often argued that financial asset returns are the cumulative outcome of a vast number of pieces of information and individual decisions arriving almost continuously in time (McCulloch, 1996; Rachev and Mittnik, 2000). As such, since the pioneering work of Louis Bachelier in 1900, they have been modeled by the Gaussian distribution. The strongest statistical argument for it is based on the Central Limit Theorem, which states that the sum of a large number of independent, identically distributed variables from a finite-variance distribution will tend to be normally distributed. However, as we have already mentioned, financial asset returns usually have heavier tails. In response to the empirical evidence, Mandelbrot (1963) and Fama (1965) proposed the stable distribution as an alternative model. Although there are other heavy-tailed alternatives to the Gaussian law – like Student's t, hyperbolic, normal inverse Gaussian, or truncated stable – there is at least one good reason


for modeling ﬁnancial variables using stable distributions. Namely, they are supported by the generalized Central Limit Theorem, which states that stable laws are the only possible limit distributions for properly normalized and centered sums of independent, identically distributed random variables. Since stable distributions can accommodate the fat tails and asymmetry, they often give a very good ﬁt to empirical data. In particular, they are valuable models for data sets covering extreme events, like market crashes or natural catastrophes. Even though they are not universal, they are a useful tool in the hands of an analyst working in ﬁnance or insurance. Hence, we devote this chapter to a thorough presentation of the computational aspects related to stable laws. In Section 1.2 we review the analytical concepts and basic characteristics. In the following two sections we discuss practical simulation and estimation approaches. Finally, in Section 1.5 we present ﬁnancial applications of stable laws.
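Practical simulation is the subject of Section 1.3, but as a foretaste, the classical Chambers–Mallows–Stuck construction for the symmetric case (β = 0, σ = 1, µ = 0) takes only a few lines. The sketch below is a plain-Python illustration, not the chapter's reference implementation; the built-in sanity check uses the fact that for α = 2 the construction yields a Gaussian with variance 2σ² = 2.

```python
import math
import random

def sym_stable(alpha, rng):
    """One draw from a symmetric alpha-stable law S(alpha, 0, 1, 0)
    via the Chambers-Mallows-Stuck construction:
    V ~ Uniform(-pi/2, pi/2), W ~ Exp(1), independent."""
    V = rng.uniform(-math.pi / 2, math.pi / 2)
    W = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(V)  # the Cauchy case
    s = math.sin(alpha * V) / math.cos(V) ** (1.0 / alpha)
    return s * (math.cos(V - alpha * V) / W) ** ((1.0 - alpha) / alpha)

rng = random.Random(7)
n = 200_000

# Sanity check: alpha = 2 reduces to 2*sin(V)*sqrt(W), i.e. N(0, 2).
xs = [sym_stable(2.0, rng) for _ in range(n)]
mean = sum(xs) / n
std = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
print(mean, std)  # mean near 0, std near sqrt(2)
```

For α < 2 the same generator produces the heavy-tailed draws whose asymptotic behavior is quantified by (1.1) below.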

1.2 Definitions and Basic Characteristics

Stable laws – also called α-stable, stable Paretian, or Lévy stable – were introduced by Lévy (1925) during his investigations of the behavior of sums of independent random variables. A sum of two independent random variables having an α-stable distribution with index α is again α-stable with the same index α. This invariance property, however, does not hold for different α's. The α-stable distribution requires four parameters for complete description: an index of stability α ∈ (0, 2], also called the tail index, tail exponent, or characteristic exponent; a skewness parameter β ∈ [−1, 1]; a scale parameter σ > 0; and a location parameter µ ∈ R. The tail exponent α determines the rate at which the tails of the distribution taper off, see the left panel in Figure 1.1. When α = 2, the Gaussian distribution results. When α < 2, the variance is infinite and the tails are asymptotically equivalent to a Pareto law, i.e. they exhibit power-law behavior. More precisely, using a central limit theorem type argument it can be shown that (Janicki and Weron, 1994; Samorodnitsky and Taqqu, 1994):

\lim_{x \to \infty} x^{\alpha} P(X > x) = C_{\alpha} (1 + \beta) \sigma^{\alpha},    (1.1)
\lim_{x \to \infty} x^{\alpha} P(X < -x) = C_{\alpha} (1 - \beta) \sigma^{\alpha},

Figure 1.1: Left panel : A semilog plot of symmetric (β = µ = 0) α-stable probability density functions (pdfs) for α = 2 (black solid line), 1.8 (red dotted line), 1.5 (blue dashed line) and 1 (green long-dashed line). The Gaussian (α = 2) density forms a parabola and is the only α-stable density with exponential tails. Right panel : Right tails of symmetric α-stable cumulative distribution functions (cdfs) for α = 2 (black solid line), 1.95 (red dotted line), 1.8 (blue dashed line) and 1.5 (green long-dashed line) on a double logarithmic paper. For α < 2 the tails form straight lines with slope −α. STFstab01.xpl

where

\[
C_{\alpha} = \left( 2 \int_{0}^{\infty} x^{-\alpha} \sin(x)\, dx \right)^{-1} = \frac{1}{\pi}\, \Gamma(\alpha) \sin \frac{\pi \alpha}{2}.
\]

The convergence to a power-law tail varies for diﬀerent α’s and, as can be seen in the right panel of Figure 1.1, is slower for larger values of the tail index. Moreover, the tails of α-stable distribution functions exhibit a crossover from an approximate power decay with exponent α > 2 to the true tail with exponent α. This phenomenon is more visible for large α’s (Weron, 2001). When α > 1, the mean of the distribution exists and is equal to µ. In general, the pth moment of a stable random variable is ﬁnite if and only if p < α. When the skewness parameter β is positive, the distribution is skewed to the right,

Figure 1.2: Left panel: Stable pdfs for α = 1.2 and β = 0 (black solid line), 0.5 (red dotted line), 0.8 (blue dashed line) and 1 (green long-dashed line). Right panel: Closed form formulas for densities are known only for three distributions – Gaussian (α = 2; black solid line), Cauchy (α = 1; red dotted line) and Lévy (α = 0.5, β = 1; blue dashed line). The latter is a totally skewed distribution, i.e. its support is R_+. In general, for α < 1 and β = 1 (−1) the distribution is totally skewed to the right (left). STFstab02.xpl

i.e. the right tail is thicker, see the left panel of Figure 1.2. When it is negative, it is skewed to the left. When β = 0, the distribution is symmetric about µ. As α approaches 2, β loses its eﬀect and the distribution approaches the Gaussian distribution regardless of β. The last two parameters, σ and µ, are the usual scale and location parameters, i.e. σ determines the width and µ the shift of the mode (the peak) of the density. For σ = 1 and µ = 0 the distribution is called standard stable.

1.2.1 Characteristic Function Representation

Due to the lack of closed form formulas for densities for all but three distributions (see the right panel in Figure 1.2), the α-stable law can be most


Figure 1.3: Comparison of S and S^0 parameterizations: α-stable pdfs for β = 0.5 and α = 0.5 (black solid line), 0.75 (red dotted line), 1 (blue short-dashed line), 1.25 (green dashed line) and 1.5 (cyan long-dashed line). STFstab03.xpl

conveniently described by its characteristic function φ(t) – the inverse Fourier transform of the probability density function. However, there are multiple parameterizations for α-stable laws and much confusion has been caused by these different representations, see Figure 1.3. The variety of formulas is caused by a combination of historical evolution and the numerous problems that have been analyzed using specialized forms of the stable distributions. The most popular parameterization of the characteristic function of X ∼ S_α(σ, β, µ), i.e. an α-stable random variable with parameters α, σ, β, and µ, is given by (Samorodnitsky and Taqqu, 1994; Weron, 2004):

\[
\ln \phi(t) =
\begin{cases}
-\sigma^{\alpha} |t|^{\alpha} \left\{ 1 - i \beta\, \mathrm{sign}(t) \tan \frac{\pi \alpha}{2} \right\} + i \mu t, & \alpha \neq 1, \\[4pt]
-\sigma |t| \left\{ 1 + i \beta\, \mathrm{sign}(t) \frac{2}{\pi} \ln |t| \right\} + i \mu t, & \alpha = 1.
\end{cases} \tag{1.2}
\]


For numerical purposes, it is often advisable to use Nolan's (1997) parameterization:

\[
\ln \phi_0(t) =
\begin{cases}
-\sigma^{\alpha} |t|^{\alpha} \left\{ 1 + i \beta\, \mathrm{sign}(t) \tan \frac{\pi \alpha}{2} \left[ (\sigma |t|)^{1-\alpha} - 1 \right] \right\} + i \mu_0 t, & \alpha \neq 1, \\[4pt]
-\sigma |t| \left\{ 1 + i \beta\, \mathrm{sign}(t) \frac{2}{\pi} \ln(\sigma |t|) \right\} + i \mu_0 t, & \alpha = 1.
\end{cases} \tag{1.3}
\]

The S^0_α(σ, β, µ_0) parameterization is a variant of Zolotarev's (M)-parameterization (Zolotarev, 1986), with the characteristic function and hence the density and the distribution function jointly continuous in all four parameters, see the right panel in Figure 1.3. In particular, percentiles and convergence to the power-law tail vary in a continuous way as α and β vary. The location parameters of the two representations are related by µ = µ_0 − βσ tan(πα/2) for α ≠ 1 and µ = µ_0 − βσ(2/π) ln σ for α = 1. Note also, that the traditional scale parameter σ_G of the Gaussian distribution defined by:

\[
f_G(x) = \frac{1}{\sqrt{2\pi}\, \sigma_G} \exp\left\{ -\frac{(x - \mu)^2}{2\sigma_G^2} \right\}, \tag{1.4}
\]

is not the same as σ in formulas (1.2) or (1.3). Namely, σ_G = √2 σ.
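For a quick numerical cross-check, formula (1.2) is easy to evaluate directly. Below is a minimal Python sketch (our own helper, not one of the book's XploRe quantlets); for α = 2 the tan(πα/2) term vanishes and the function collapses to the Gaussian characteristic function exp(−σ²t² + iµt), consistent with σ_G = √2 σ.

```python
import numpy as np

def stable_chf(t, alpha, sigma=1.0, beta=0.0, mu=0.0):
    """Characteristic function of S_alpha(sigma, beta, mu), eq. (1.2)."""
    t = np.asarray(t, dtype=float)
    if alpha != 1:
        ln_phi = (-sigma**alpha * np.abs(t)**alpha
                  * (1 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2))
                  + 1j * mu * t)
    else:
        # |t| log|t| -> 0 as t -> 0, so define the log term as 0 at t = 0
        at = np.abs(t)
        log_term = np.where(at > 0, np.log(np.where(at > 0, at, 1.0)), 0.0)
        ln_phi = (-sigma * at * (1 + 1j * beta * np.sign(t) * (2 / np.pi) * log_term)
                  + 1j * mu * t)
    return np.exp(ln_phi)
```

For α = 1, β = 0 this reduces to the Cauchy characteristic function exp(−σ|t| + iµt), another easy sanity check.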

1.2.2 Stable Density and Distribution Functions

The lack of closed form formulas for most stable densities and distribution functions has negative consequences. For example, during maximum likelihood estimation computationally burdensome numerical approximations have to be used. There generally are two approaches to this problem. Either the fast Fourier transform (FFT) has to be applied to the characteristic function (Mittnik, Doganoglu, and Chenyao, 1999) or direct numerical integration has to be utilized (Nolan, 1997, 1999). For data points falling between the equally spaced FFT grid nodes an interpolation technique has to be used. Taking a larger number of grid points increases accuracy, however, at the expense of higher computational burden. The FFT based approach is faster for large samples, whereas the direct integration method favors small data sets since it can be computed at any arbitrarily chosen point. Mittnik, Doganoglu, and Chenyao (1999) report that for N = 2^{13} grid points the FFT based method is faster for samples exceeding 100 observations and slower for smaller data sets. Moreover, the FFT based approach is less universal – it is efficient only for large α's and only for pdf calculations. When


computing the cdf the density must be numerically integrated. In contrast, in the direct integration method Zolotarev's (1986) formulas either for the density or the distribution function are numerically integrated.

Set ζ = −β tan(πα/2). Then the density f(x; α, β) of a standard α-stable random variable in representation S^0, i.e. X ∼ S^0_α(1, β, 0), can be expressed as (note, that Zolotarev (1986, Section 2.2) used yet another parameterization):

• when α ≠ 1 and x > ζ:

\[
f(x; \alpha, \beta) = \frac{\alpha (x - \zeta)^{\frac{1}{\alpha - 1}}}{\pi |\alpha - 1|} \int_{-\xi}^{\frac{\pi}{2}} V(\theta; \alpha, \beta) \exp\left\{ -(x - \zeta)^{\frac{\alpha}{\alpha - 1}} V(\theta; \alpha, \beta) \right\} d\theta, \tag{1.5}
\]

• when α ≠ 1 and x = ζ:

\[
f(x; \alpha, \beta) = \frac{\Gamma\left(1 + \frac{1}{\alpha}\right) \cos(\xi)}{\pi (1 + \zeta^2)^{\frac{1}{2\alpha}}},
\]

• when α ≠ 1 and x < ζ:

\[
f(x; \alpha, \beta) = f(-x; \alpha, -\beta),
\]

• when α = 1:

\[
f(x; 1, \beta) =
\begin{cases}
\dfrac{e^{-\frac{\pi x}{2\beta}}}{2|\beta|} \displaystyle\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} V(\theta; 1, \beta) \exp\left\{ -e^{-\frac{\pi x}{2\beta}} V(\theta; 1, \beta) \right\} d\theta, & \beta \neq 0, \\[8pt]
\dfrac{1}{\pi (1 + x^2)}, & \beta = 0,
\end{cases}
\]

where

\[
\xi =
\begin{cases}
\frac{1}{\alpha} \arctan(-\zeta), & \alpha \neq 1, \\[4pt]
\frac{\pi}{2}, & \alpha = 1,
\end{cases}
\]

and

\[
V(\theta; \alpha, \beta) =
\begin{cases}
(\cos \alpha\xi)^{\frac{1}{\alpha - 1}} \left( \dfrac{\cos \theta}{\sin \alpha(\xi + \theta)} \right)^{\frac{\alpha}{\alpha - 1}} \dfrac{\cos\{\alpha\xi + (\alpha - 1)\theta\}}{\cos \theta}, & \alpha \neq 1, \\[8pt]
\dfrac{2}{\pi} \left( \dfrac{\frac{\pi}{2} + \beta\theta}{\cos \theta} \right) \exp\left\{ \dfrac{1}{\beta} \left( \dfrac{\pi}{2} + \beta\theta \right) \tan \theta \right\}, & \alpha = 1,\ \beta \neq 0.
\end{cases}
\]

The distribution F(x; α, β) of a standard α-stable random variable in representation S^0 can be expressed as:

• when α ≠ 1 and x > ζ:

\[
F(x; \alpha, \beta) = c_1(\alpha, \beta) + \frac{\mathrm{sign}(1 - \alpha)}{\pi} \int_{-\xi}^{\frac{\pi}{2}} \exp\left\{ -(x - \zeta)^{\frac{\alpha}{\alpha - 1}} V(\theta; \alpha, \beta) \right\} d\theta,
\]

where

\[
c_1(\alpha, \beta) =
\begin{cases}
\frac{1}{\pi} \left( \frac{\pi}{2} - \xi \right), & \alpha < 1, \\[4pt]
1, & \alpha > 1,
\end{cases}
\]

• when α ≠ 1 and x = ζ:

\[
F(x; \alpha, \beta) = \frac{1}{\pi} \left( \frac{\pi}{2} - \xi \right),
\]

• when α ≠ 1 and x < ζ:

\[
F(x; \alpha, \beta) = 1 - F(-x; \alpha, -\beta),
\]

• when α = 1:

\[
F(x; 1, \beta) =
\begin{cases}
\dfrac{1}{\pi} \displaystyle\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \exp\left\{ -e^{-\frac{\pi x}{2\beta}} V(\theta; 1, \beta) \right\} d\theta, & \beta > 0, \\[8pt]
\dfrac{1}{2} + \dfrac{1}{\pi} \arctan x, & \beta = 0, \\[8pt]
1 - F(x; 1, -\beta), & \beta < 0.
\end{cases}
\]

Formula (1.5) requires numerical integration of the function g(·) exp{−g(·)}, where g(θ; x, α, β) = (x − ζ)^{α/(α−1)} V(θ; α, β). The integrand is 0 at −ξ, increases monotonically to a maximum of 1/e at the point θ* for which g(θ*; x, α, β) = 1, and then decreases monotonically to 0 at π/2 (Nolan, 1997). However, in some cases the integrand becomes very peaked and numerical algorithms can miss the spike and underestimate the integral. To avoid this problem we need to find the argument θ* of the peak numerically and compute the integral as a sum of two integrals: one from −ξ to θ* and the other from θ* to π/2.
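The direct integration approach can be sketched as follows. This is a Python illustration of ours using scipy's quad, not the book's XploRe routine, and it omits the peak-splitting refinement just described, so it can lose accuracy where the integrand is very peaked; it also requires α ≠ 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def stable_pdf_S0(x, alpha, beta):
    """Density of a standard alpha-stable law in the S^0 parameterization,
    by direct numerical integration of formula (1.5); requires alpha != 1."""
    zeta = -beta * np.tan(np.pi * alpha / 2)
    xi = np.arctan(-zeta) / alpha
    if x < zeta:                        # reflection: f(x; a, b) = f(-x; a, -b)
        return stable_pdf_S0(-x, alpha, -beta)
    if np.isclose(x, zeta):             # value at the "center" x = zeta
        return gamma(1 + 1/alpha) * np.cos(xi) / (np.pi * (1 + zeta**2)**(1/(2*alpha)))

    def V(theta):
        return ((np.cos(alpha*xi))**(1/(alpha-1))
                * (np.cos(theta) / np.sin(alpha*(xi + theta)))**(alpha/(alpha-1))
                * np.cos(alpha*xi + (alpha-1)*theta) / np.cos(theta))

    c = (x - zeta)**(alpha/(alpha-1))
    integral, _ = quad(lambda th: V(th) * np.exp(-c * V(th)), -xi, np.pi/2)
    return alpha * (x - zeta)**(1/(alpha-1)) / (np.pi * abs(alpha - 1)) * integral
```

For α = 2 this reproduces the Gaussian density with σ_G = √2, i.e. f(x) = exp(−x²/4)/(2√π), which gives a convenient closed-form check.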

1.3 Simulation of α-stable Variables

The complexity of the problem of simulating sequences of α-stable random variables results from the fact that there are no analytic expressions for the


inverse F^{-1} of the cumulative distribution function. The first breakthrough was made by Kanter (1975), who gave a direct method for simulating S_α(1, 1, 0) random variables, for α < 1. It turned out that this method could be easily adapted to the general case. Chambers, Mallows, and Stuck (1976) were the first to give the formulas. The algorithm for constructing a standard stable random variable X ∼ S_α(1, β, 0), in representation (1.2), is the following (Weron, 1996):

• generate a random variable V uniformly distributed on (−π/2, π/2) and an independent exponential random variable W with mean 1;

• for α ≠ 1 compute:

\[
X = S_{\alpha,\beta} \cdot \frac{\sin\{\alpha(V + B_{\alpha,\beta})\}}{\{\cos(V)\}^{1/\alpha}} \cdot \left[ \frac{\cos\{V - \alpha(V + B_{\alpha,\beta})\}}{W} \right]^{\frac{1-\alpha}{\alpha}}, \tag{1.6}
\]

where

\[
B_{\alpha,\beta} = \frac{\arctan\left(\beta \tan \frac{\pi\alpha}{2}\right)}{\alpha}, \qquad
S_{\alpha,\beta} = \left[ 1 + \beta^2 \tan^2 \frac{\pi\alpha}{2} \right]^{\frac{1}{2\alpha}};
\]

• for α = 1 compute:

\[
X = \frac{2}{\pi} \left[ \left( \frac{\pi}{2} + \beta V \right) \tan V - \beta \ln \left( \frac{\frac{\pi}{2} W \cos V}{\frac{\pi}{2} + \beta V} \right) \right]. \tag{1.7}
\]

Given the formulas for simulation of a standard α-stable random variable, we can easily simulate a stable random variable for all admissible values of the parameters α, σ, β and µ using the following property: if X ∼ S_α(1, β, 0), then

\[
Y =
\begin{cases}
\sigma X + \mu, & \alpha \neq 1, \\[4pt]
\sigma X + \frac{2}{\pi} \beta \sigma \ln \sigma + \mu, & \alpha = 1,
\end{cases} \tag{1.8}
\]

is S_α(σ, β, µ). It is interesting to note that for α = 2 (and β = 0) the Chambers-Mallows-Stuck method reduces to the well-known Box-Muller algorithm for generating Gaussian random variables (Janicki and Weron, 1994). Although many other approaches have been proposed in the literature, this method is regarded as the fastest and the most accurate (Weron, 2004).
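The Chambers-Mallows-Stuck recipe above translates directly into code. The following Python sketch (our own helper, not the book's STFstab routines) implements (1.6)-(1.8):

```python
import numpy as np

def sim_stable(alpha, sigma=1.0, beta=0.0, mu=0.0, size=1, rng=None):
    """Simulate S_alpha(sigma, beta, mu) variates via the
    Chambers-Mallows-Stuck method, eqs. (1.6)-(1.8)."""
    rng = np.random.default_rng(rng)
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform on (-pi/2, pi/2)
    W = rng.exponential(1.0, size)                 # exponential with mean 1
    if alpha != 1:
        B = np.arctan(beta * np.tan(np.pi * alpha / 2)) / alpha
        S = (1 + beta**2 * np.tan(np.pi * alpha / 2)**2) ** (1 / (2 * alpha))
        X = (S * np.sin(alpha * (V + B)) / np.cos(V) ** (1 / alpha)
             * (np.cos(V - alpha * (V + B)) / W) ** ((1 - alpha) / alpha))
        return sigma * X + mu                      # property (1.8), alpha != 1
    X = (2 / np.pi) * ((np.pi / 2 + beta * V) * np.tan(V)
                       - beta * np.log((np.pi / 2 * W * np.cos(V))
                                       / (np.pi / 2 + beta * V)))
    return sigma * X + (2 / np.pi) * beta * sigma * np.log(sigma) + mu
```

A quick check of the Box-Muller connection: for α = 2, β = 0 the formula simplifies to X = 2 sin(V)√W, a Gaussian variate with standard deviation √2; for α = 1, β = 0 it reduces to X = tan(V), a standard Cauchy variate.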


Figure 1.4: A double logarithmic plot of the right tail of an empirical symmetric 1.9-stable distribution function for a sample of size N = 10^4 (left panel) and N = 10^6 (right panel). Thick red lines represent the linear regression fit. The tail index estimate (α̂ = 3.7320) obtained for the smaller sample is close to the initial power-law-like decay of the larger sample (α̂ = 3.7881). The far tail estimate α̂ = 1.9309 is close to the true value of α. STFstab04.xpl

1.4 Estimation of Parameters

Like simulation, the estimation of stable law parameters is in general severely hampered by the lack of known closed-form density functions for all but a few members of the stable family. Either the pdf has to be numerically integrated (see the previous section) or the estimation technique has to be based on a different characteristic of stable laws. All presented methods work quite well assuming that the sample under consideration is indeed α-stable. However, if the data comes from a different distribution, these procedures may mislead more than the Hill and direct tail estimation methods. Since the formal tests for assessing the α-stability of a sample are very time consuming, we suggest first applying the "visual inspection" tests to see whether the empirical densities resemble those of α-stable laws.

1.4.1 Tail Exponent Estimation

The simplest and most straightforward method of estimating the tail index is to plot the right tail of the empirical cdf on a double logarithmic paper. The slope of the linear regression for large values of x yields the estimate of the tail index α, through the relation α = −slope. This method is very sensitive to the size of the sample and the choice of the number of observations used in the regression. For example, a slope of about −3.7 may indicate a non-α-stable power-law decay in the tails or the contrary – an α-stable distribution with α ≈ 1.9. This is illustrated in Figure 1.4. In the left panel a power-law fit to the tail of a sample of N = 10^4 standard symmetric (β = µ = 0, σ = 1) α-stable distributed variables with α = 1.9 yields an estimate of α̂ = 3.732. However, when the sample size is increased to N = 10^6 the power-law fit to the extreme tail observations yields α̂ = 1.9309, which is fairly close to the original value of α.

The true tail behavior (1.1) is observed only for very large (also for very small, i.e. the negative tail) observations, after a crossover from a temporary power-like decay (which surprisingly indicates α ≈ 3.7). Moreover, the obtained estimates still have a slight positive bias, which suggests that perhaps even larger samples than 10^6 observations should be used. In Figure 1.4 we used only the upper 0.15% of the records to estimate the true tail exponent. In general, the choice of the observations used in the regression is subjective and can yield large estimation errors, a fact which is often neglected in the literature.

A well-known method for estimating the tail index that does not assume a parametric form for the entire distribution function, but focuses only on the tail behavior, was proposed by Hill (1975). The Hill estimator is used to estimate the tail index α when the upper (or lower) tail of the distribution is of the form 1 − F(x) = Cx^{−α}, see Figure 1.5.
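The log-log regression approach described above can be sketched in a few lines of Python (an illustration of ours, with the caveats just discussed: the result depends heavily on the sample size and on the chosen tail fraction; the 0.15% default mimics the cutoff used for Figure 1.4):

```python
import numpy as np

def tail_exponent_regression(sample, tail_fraction=0.0015):
    """Estimate the tail index alpha as minus the slope of a linear regression
    of log(1 - ECDF) on log(x) over the largest tail_fraction of the sample."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    k = max(int(tail_fraction * n), 10)            # number of upper order statistics
    xs = x[-k:]                                    # must be positive for the log
    surv = 1.0 - (np.arange(n - k, n) + 0.5) / n   # 1 - F_n at each order statistic
    slope, _ = np.polyfit(np.log(xs), np.log(surv), 1)
    return -slope
```

On an exact Pareto sample the estimator recovers the true exponent; on a near-Gaussian stable sample it exhibits precisely the bias discussed above.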
Like the log-log regression method, the Hill estimator tends to overestimate the tail exponent of the stable distribution if α is close to two and the sample size is not very large. For a review of the extreme value theory and the Hill estimator see Härdle, Klinke, and Müller (2000, Chapter 13) or Embrechts, Klüppelberg, and Mikosch (1997). These examples clearly illustrate that the true tail behavior of α-stable laws is visible only for extremely large data sets. In practice, this means that in order to estimate α we must use high-frequency data and restrict ourselves to the most "outlying" observations. Otherwise, inference of the tail index may be strongly misleading and rejection of the α-stable regime unfounded.
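The Hill statistic itself is a one-liner. A minimal Python sketch (the helper name is ours; the sensitivity to the choice of k is exactly what Figure 1.5 illustrates):

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimate of the tail index from the k largest order statistics:
    alpha_hat = [ (1/k) * sum_{i=1..k} ln X_(i) - ln X_(k+1) ]^{-1},
    where X_(1) >= X_(2) >= ... are the descending order statistics."""
    x = np.sort(np.asarray(sample))[::-1]     # descending order
    if x[k] <= 0:
        raise ValueError("the k+1 largest observations must be positive")
    return 1.0 / (np.mean(np.log(x[:k])) - np.log(x[k]))
```

For data with an exact Pareto tail the estimate is close to the true exponent; for stable samples with α near 2 it overestimates, as discussed above.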

Figure 1.5: Plots of the Hill statistics α̂_{n,k} vs. the maximum order statistic k for 1.8-stable samples of size N = 10^4 (top panel) and N = 10^6 (left and right panels). Red horizontal lines represent the true value of α. For better exposition, the right panel is a magnification of the left panel for small k. A close estimate is obtained only for k = 500, ..., 1300 (i.e. for k < 0.13% of the sample size). STFstab05.xpl

1.4

Estimation of Parameters

33

We now turn to the problem of parameter estimation. We start the discussion with the simplest, fastest and ... least accurate quantile methods, then develop the slower, yet much more accurate sample characteristic function methods and, finally, conclude with the slowest but most accurate maximum likelihood approach. Given a sample x_1, ..., x_n of independent and identically distributed S_α(σ, β, µ) observations, in what follows we provide estimates α̂, σ̂, β̂, and µ̂ of all four stable law parameters.

1.4.2 Quantile Estimation

Already in 1971 Fama and Roll provided very simple estimates for parameters of symmetric (β = 0, µ = 0) stable laws when α > 1. McCulloch (1986) generalized and improved their method. He analyzed stable law quantiles and provided consistent estimators of all four stable parameters, with the restriction α ≥ 0.6, while retaining the computational simplicity of Fama and Roll's method. Following McCulloch, define:

\[
v_{\alpha} = \frac{x_{0.95} - x_{0.05}}{x_{0.75} - x_{0.25}}, \tag{1.9}
\]

which is independent of both σ and µ. In the above formula x_f denotes the f-th population quantile, so that S_α(σ, β, µ)(x_f) = f. Let v̂_α be the corresponding sample value. It is a consistent estimator of v_α. Now, define:

\[
v_{\beta} = \frac{x_{0.95} + x_{0.05} - 2 x_{0.50}}{x_{0.95} - x_{0.05}}, \tag{1.10}
\]

and let v̂_β be the corresponding sample value. v_β is also independent of both σ and µ. As a function of α and β it is strictly increasing in β for each α. The statistic v̂_β is a consistent estimator of v_β.

The statistics v_α and v_β are functions of α and β. This relationship may be inverted and the parameters α and β may be viewed as functions of v_α and v_β:

\[
\alpha = \psi_1(v_{\alpha}, v_{\beta}), \qquad \beta = \psi_2(v_{\alpha}, v_{\beta}). \tag{1.11}
\]

Substituting v_α and v_β by their sample values and applying linear interpolation between values found in tables provided by McCulloch (1986) yields estimators α̂ and β̂. Scale and location parameters, σ and µ, can be estimated in a similar way. However, due to the discontinuity of the characteristic function for α = 1 and β ≠ 0 in representation (1.2), this procedure is much more complicated. We refer the interested reader to the original work of McCulloch (1986).
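The quantile statistics (1.9)-(1.10) are trivial to compute; only the inversion via ψ_1, ψ_2 requires McCulloch's tabulated values, which we do not reproduce here. A Python sketch of the first step (helper name is ours):

```python
import numpy as np

def mcculloch_statistics(sample):
    """Sample counterparts of v_alpha (1.9) and v_beta (1.10); inverting them
    to (alpha, beta) requires interpolation in McCulloch's (1986) tables."""
    q05, q25, q50, q75, q95 = np.quantile(sample, [0.05, 0.25, 0.50, 0.75, 0.95])
    v_alpha = (q95 - q05) / (q75 - q25)              # independent of sigma and mu
    v_beta = (q95 + q05 - 2.0 * q50) / (q95 - q05)   # skewness measure
    return v_alpha, v_beta
```

As a check, for a Gaussian sample (α = 2) the population value of v_α is (2 · 1.6449)/(2 · 0.6745) ≈ 2.439 and v_β ≈ 0.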

1.4.3 Characteristic Function Approaches

Given a sample x_1, ..., x_n of independent and identically distributed (i.i.d.) random variables, define the sample characteristic function by

\[
\hat{\phi}(t) = \frac{1}{n} \sum_{j=1}^{n} e^{i t x_j}. \tag{1.12}
\]

Since |φ̂(t)| is bounded by unity, all moments of φ̂(t) are finite and, for any fixed t, it is the sample average of i.i.d. random variables exp(itx_j). Hence, by the law of large numbers, φ̂(t) is a consistent estimator of the characteristic function φ(t).

Press (1972) proposed a simple estimation method, called the method of moments, based on transformations of the characteristic function. The obtained estimators are consistent since they are based upon estimators of φ(t), Im{φ(t)} and Re{φ(t)}, which are known to be consistent. However, convergence to the population values depends on the choice of four points at which the above functions are evaluated. The optimal selection of these values is problematic and still an open question. The obtained estimates are of poor quality and the method is not recommended for more than preliminary estimation.

Koutrouvelis (1980) presented a regression-type method which starts with an initial estimate of the parameters and proceeds iteratively until some prespecified convergence criterion is satisfied. Each iteration consists of two weighted regression runs. The number of points to be used in these regressions depends on the sample size and starting values of α. Typically no more than two or three iterations are needed. The speed of the convergence, however, depends on the initial estimates and the convergence criterion.

The regression method is based on the following observations concerning the characteristic function φ(t). First, from (1.2) we can easily derive:

\[
\ln(-\ln |\phi(t)|^2) = \ln(2\sigma^{\alpha}) + \alpha \ln |t|. \tag{1.13}
\]

The real and imaginary parts of φ(t) are for α ≠ 1 given by

\[
\Re\{\phi(t)\} = \exp(-|\sigma t|^{\alpha}) \cos\left[ \mu t + |\sigma t|^{\alpha} \beta\, \mathrm{sign}(t) \tan \frac{\pi\alpha}{2} \right],
\]

and

\[
\Im\{\phi(t)\} = \exp(-|\sigma t|^{\alpha}) \sin\left[ \mu t + |\sigma t|^{\alpha} \beta\, \mathrm{sign}(t) \tan \frac{\pi\alpha}{2} \right].
\]


The last two equations lead, apart from considerations of principal values, to

\[
\arctan \frac{\Im\{\phi(t)\}}{\Re\{\phi(t)\}} = \mu t + \beta \sigma^{\alpha} \tan \frac{\pi\alpha}{2}\, \mathrm{sign}(t)\, |t|^{\alpha}. \tag{1.14}
\]

Equation (1.13) depends only on α and σ and suggests that we estimate these parameters by regressing y = ln(−ln|φ_n(t)|²) on w = ln|t| in the model:

\[
y_k = m + \alpha w_k + \epsilon_k, \qquad k = 1, 2, ..., K, \tag{1.15}
\]

where t_k is an appropriate set of real numbers, m = ln(2σ^α), and ε_k denotes an error term. Koutrouvelis (1980) proposed to use t_k = πk/25, k = 1, 2, ..., K, with K ranging between 9 and 134 for different estimates of α and sample sizes.

where tk is an appropriate set of real numbers, m = ln(2σ α ), and k denotes an error term. Koutrouvelis (1980) proposed to use tk = πk 25 , k = 1, 2, ..., K; with K ranging between 9 and 134 for diﬀerent estimates of α and sample sizes. Once α ˆ and σ ˆ have been obtained and α and σ have been ﬁxed at these values, estimates of β and µ can be obtained using (1.14). Next, the regressions are repeated with α ˆ, σ ˆ , βˆ and µ ˆ as the initial parameters. The iterations continue until a prespeciﬁed convergence criterion is satisﬁed. Kogon and Williams (1998) eliminated this iteration procedure and simpliﬁed the regression method. For initial estimation they applied McCulloch’s (1986) method, worked with the continuous representation (1.3) of the characteristic function instead of the classical one (1.2) and used a ﬁxed set of only 10 equally spaced frequency points tk . In terms of computational speed their method compares favorably to the original method of Koutrouvelis (1980). It has a signiﬁcantly better performance near α = 1 and β = 0 due to the elimination of discontinuity of the characteristic function. However, it returns slightly worse results for very small α.

1.4.4 Maximum Likelihood Method

The maximum likelihood (ML) estimation scheme for α-stable distributions does not differ from that for other laws, at least as far as the theory is concerned. For a vector of observations x = (x_1, ..., x_n), the ML estimate of the parameter vector θ = (α, σ, β, µ) is obtained by maximizing the log-likelihood function:

\[
L_{\theta}(x) = \sum_{i=1}^{n} \ln \tilde{f}(x_i; \theta), \tag{1.16}
\]

where f̃(·; θ) is the stable pdf. The tilde denotes the fact that, in general, we do not know the explicit form of the density and have to approximate it


numerically. The ML methods proposed in the literature diﬀer in the choice of the approximating algorithm. However, all of them have an appealing common feature – under certain regularity conditions the maximum likelihood estimator is asymptotically normal. Modern ML estimation techniques either utilize the FFT-based approach for approximating the stable pdf (Mittnik et al., 1999) or use the direct integration method (Nolan, 2001). Both approaches are comparable in terms of eﬃciency. The diﬀerences in performance result from diﬀerent approximation algorithms, see Section 1.2.2. Simulation studies suggest that out of the ﬁve described techniques the method of moments yields the worst estimates, well outside any admissible error range (Stoyanov and Racheva-Iotova, 2004; Weron, 2004). McCulloch’s method comes in next with acceptable results and computational time signiﬁcantly lower than the regression approaches. On the other hand, both the Koutrouvelis and the Kogon-Williams implementations yield good estimators with the latter performing considerably faster, but slightly less accurate. Finally, the ML estimates are almost always the most accurate, in particular, with respect to the skewness parameter. However, as we have already said, maximum likelihood estimation techniques are certainly the slowest of all the discussed methods. For example, ML estimation for a sample of a few thousand observations using a gradient search routine which utilizes the direct integration method is slower by 4 orders of magnitude than the Kogon-Williams algorithm, i.e. a few minutes compared to a few hundredths of a second on a fast PC! Clearly, the higher accuracy does not justify the application of ML estimation in many real life problems, especially when calculations are to be performed on-line.
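As an illustration of the ML scheme (1.16), here is a Python sketch of ours for the one symmetric stable case where the density is available in closed form, the Cauchy law (α = 1, β = 0, density 1/(πσ(1 + ((x−µ)/σ)²))); for general α the same code structure applies with the density replaced by an FFT- or integration-based approximation, which is precisely what makes it so slow.

```python
import numpy as np
from scipy.optimize import minimize

def cauchy_ml(sample):
    """ML estimation of (sigma, mu) for the Cauchy (symmetric 1-stable) law,
    maximizing (1.16) with the closed-form density."""
    sample = np.asarray(sample)

    def neg_loglik(theta):
        sigma, mu = theta
        if sigma <= 0:
            return np.inf
        z = (sample - mu) / sigma
        return np.sum(np.log(np.pi * sigma * (1.0 + z * z)))

    # quantile-based starting values: for a Cauchy law, IQR = 2*sigma
    q25, q50, q75 = np.percentile(sample, [25, 50, 75])
    res = minimize(neg_loglik, x0=[(q75 - q25) / 2.0, q50], method="Nelder-Mead")
    return res.x                                  # (sigma_hat, mu_hat)
```

The quantile starting values mirror the role McCulloch's method plays as an initializer for the full stable ML problem.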

1.5 Financial Applications of Stable Laws

Many techniques in modern ﬁnance rely heavily on the assumption that the random variables under investigation follow a Gaussian distribution. However, time series observed in ﬁnance – but also in other applications – often deviate from the Gaussian model, in that their marginal distributions are heavy-tailed and, possibly, asymmetric. In such situations, the appropriateness of the commonly adopted normal assumption is highly questionable. It is often argued that ﬁnancial asset returns are the cumulative outcome of a vast number of pieces of information and individual decisions arriving almost continuously in time. Hence, in the presence of heavy-tails it is natural

Table 1.1: Fits to 2000 Dow Jones Industrial Average (DJIA) index returns from the period February 2, 1987 – December 29, 1994. Test statistics and the corresponding p-values based on 1000 simulated samples (in parentheses) are also given.

Parameters:      α         σ         β          µ
α-stable fit    1.6411    0.0050    -0.0126    0.0005
Gaussian fit              0.0111               0.0003

Tests:          Anderson-Darling    Kolmogorov
α-stable fit    0.6441 (0.020)      0.5583 (0.500)
Gaussian fit    +∞ (<0.005)         4.6353 (<0.005)

STFstab06.xpl

to assume that they are approximately governed by a stable non-Gaussian distribution. Other leptokurtic distributions, including Student's t, Weibull, and hyperbolic, lack the attractive central limit property. Stable distributions have been successfully fit to stock returns, excess bond returns, foreign exchange rates, commodity price returns and real estate returns (McCulloch, 1996; Rachev and Mittnik, 2000). In recent years, however, several studies have found what appears to be strong evidence against the stable model (Gopikrishnan et al., 1999; McCulloch, 1997). These studies have estimated the tail exponent directly from the tail observations and commonly have found α that appears to be significantly greater than 2, well outside the stable domain. Recall, however, that in Section 1.4.1 we have shown that estimating α only from the tail observations may be strongly misleading and for samples of typical size the rejection of the α-stable regime unfounded.

Let us see for ourselves how well the stable law describes financial asset returns. In this section we want to apply the discussed techniques to financial data. Due to limited space we chose only one estimation method – the regression approach of Koutrouvelis (1980), as it offers high accuracy at moderate computational time. We start the empirical analysis with the most prominent example – the Dow Jones Industrial Average (DJIA) index, see Table 1.1. The data set covers the period February 2, 1987 – December 29, 1994 and comprises 2000

Figure 1.6: Stable (cyan) and Gaussian (dashed red) ﬁts to the DJIA returns (black circles) empirical cdf from the period February 2, 1987 – December 29, 1994. Right panel is a magniﬁcation of the left tail ﬁt on a double logarithmic scale clearly showing the superiority of the 1.64-stable law. STFstab06.xpl

daily returns. Recall that it includes the largest crash in Wall Street history – the Black Monday of October 19, 1987. Clearly the 1.64-stable law offers a much better fit to the DJIA returns than the Gaussian distribution. Its superiority, especially in the tails of the distribution, is even better visible in Figure 1.6.

To make our statistical analysis more sound, we also compare both fits through the Anderson-Darling and Kolmogorov test statistics (D'Agostino and Stephens, 1986). The former may be treated as a weighted Kolmogorov statistic which puts more weight on the differences in the tails of the distributions. Although no asymptotic results are known for the stable laws, approximate p-values for these goodness-of-fit tests can be obtained via the Monte Carlo technique, for details see Chapter 13. First the parameter vector is estimated for a given sample of size n, yielding θ̂, and the test statistic is calculated assuming that the sample is distributed according to F(x; θ̂), returning a value of d. Next, a sample of size n of F(x; θ̂)-distributed variates is generated. The parameter



Figure 1.7: Stable (cyan) and Gaussian (dashed red) ﬁts to the Boeing stock returns (black circles) empirical cdf from the period July 1, 1997 – December 31, 2003. Right panel is a magniﬁcation of the left tail ﬁt on a double logarithmic scale clearly showing the superiority of the 1.78-stable law. STFstab07.xpl

vector is estimated for this simulated sample, yielding θ̂_1, and the test statistic is calculated assuming that the sample is distributed according to F(x; θ̂_1). The simulation is repeated as many times as required to achieve a certain level of accuracy. The estimate of the p-value is obtained as the proportion of times that the test quantity is at least as large as d.

For the α-stable fit of the DJIA returns the values of the Anderson-Darling and Kolmogorov statistics are 0.6441 and 0.5583, respectively. The corresponding approximate p-values based on 1000 simulated samples are 0.02 and 0.5, allowing us to accept the α-stable law as a model of DJIA returns. The values of the test statistics for the Gaussian fit yield p-values of less than 0.005, forcing us to reject the Gaussian law, see Table 1.1.

Next, we apply the same technique to 1635 daily returns of Boeing stock prices from the period July 1, 1997 – December 31, 2003. The 1.78-stable distribution fits the data very well, yielding small values of the Anderson-Darling (0.3756) and Kolmogorov (0.4522) test statistics, see Figure 1.7 and Table 1.2. The


Table 1.2: Fits to 1635 Boeing stock price returns from the period July 1, 1997 – December 31, 2003. Test statistics and the corresponding p-values based on 1000 simulated samples (in parentheses) are also given.

Parameters:      α         σ         β         µ
α-stable fit    1.7811    0.0141    0.2834    0.0009
Gaussian fit              0.0244              0.0001

Tests:          Anderson-Darling    Kolmogorov
α-stable fit    0.3756 (0.18)       0.4522 (0.80)
Gaussian fit    9.6606 (<0.005)     2.1361 (<0.005)

STFstab07.xpl

approximate p-values based, as in the previous example, on 1000 simulated samples are 0.18 and 0.8, respectively, allowing us to accept the α-stable law as a model of Boeing returns. On the other hand, the values of the test statistics for the Gaussian fit yield p-values of less than 0.005, forcing us to reject the Gaussian distribution.

The stable law seems to be tailor-cut for the DJIA index and Boeing stock price returns. But does it fit other asset returns as well? Unfortunately, not. Although for most asset returns it does provide a better fit than the Gaussian law, in many cases the test statistics and p-values suggest that the fit is not as good as for these two data sets. This can be seen in Figure 1.8 and Table 1.3, where the calibration results for 4444 daily returns of the Japanese yen against the US dollar (JPY/USD) exchange rate from December 1, 1978 to January 31, 1991 are presented. The empirical distribution does not exhibit power-law tails and the extreme tails are largely overestimated by the stable distribution. For a risk manager who likes to play safe this may not be a bad idea, as the stable law overestimates the risks and thus provides an upper limit of losses. However, from a calibration perspective other distributions, like the hyperbolic or truncated stable, may be more appropriate for many data sets (Weron, 2004).
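The Monte Carlo goodness-of-fit procedure used throughout this section can be sketched as follows. For speed, this Python illustration of ours uses a Gaussian fit and the Kolmogorov statistic via scipy; for the α-stable fit one would plug in a stable estimator and the Chambers-Mallows-Stuck simulator instead. Helper names are ours.

```python
import numpy as np
from scipy import stats

def mc_gof_pvalue(sample, n_sim=200, seed=None):
    """Approximate p-value of the Kolmogorov statistic for a fitted Gaussian,
    via the parametric Monte Carlo recipe described in the text: re-estimate
    the parameters on each simulated sample and count exceedances of d."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    mu, s = np.mean(sample), np.std(sample, ddof=1)
    d = stats.kstest(sample, "norm", args=(mu, s)).statistic

    exceed = 0
    for _ in range(n_sim):
        sim = rng.normal(mu, s, n)
        mu_i, s_i = np.mean(sim), np.std(sim, ddof=1)   # re-estimate each time
        d_i = stats.kstest(sim, "norm", args=(mu_i, s_i)).statistic
        if d_i >= d:
            exceed += 1
    return exceed / n_sim
```

Re-estimating the parameters on each simulated sample is essential: comparing against the fixed-parameter Kolmogorov distribution would make the test far too conservative.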


Table 1.3: Fits to 4444 JPY/USD exchange rate returns from the period December 1, 1978 – January 31, 1991. Test statistics and the corresponding p-values (in parentheses) are also given.

Parameters:      α         σ         β          µ
α-stable fit    1.3274    0.0020    -0.1393    -0.0003
Gaussian fit              0.0049               -0.0001

Tests:          Anderson-Darling    Kolmogorov
α-stable fit    4.7833 (<0.005)     1.4520 (<0.005)
Gaussian fit    91.7226 (<0.005)    6.7574 (<0.005)

STFstab08.xpl


Figure 1.8: Stable (cyan) and Gaussian (dashed red) ﬁts to the JPY/USD exchange rate returns (black circles) empirical cdf from the period December 1, 1978 – January 31, 1991. Right panel is a magniﬁcation of the left tail ﬁt on a double logarithmic scale. The extreme returns are largely overestimated by the stable law. STFstab08.xpl


Bibliography

Bouchaud, J.-P. and Potters, M. (2000). Theory of Financial Risk, Cambridge University Press, Cambridge.

Carr, P., Geman, H., Madan, D. B., and Yor, M. (2002). The fine structure of asset returns: an empirical investigation, Journal of Business 75: 305–332.

Chambers, J. M., Mallows, C. L., and Stuck, B. W. (1976). A method for simulating stable random variables, Journal of the American Statistical Association 71: 340–344.

D'Agostino, R. B. and Stephens, M. A. (1986). Goodness-of-Fit Techniques, Marcel Dekker, New York.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer.

Fama, E. F. (1965). The behavior of stock market prices, Journal of Business 38: 34–105.

Fama, E. F. and Roll, R. (1971). Parameter estimates for symmetric stable distributions, Journal of the American Statistical Association 66: 331–338.

Gopikrishnan, P., Plerou, V., Amaral, L. A. N., Meyer, M., and Stanley, H. E. (1999). Scaling of the distribution of fluctuations of financial market indices, Physical Review E 60(5): 5305–5316.

Guillaume, D. M., Dacorogna, M. M., Dave, R. R., Müller, U. A., Olsen, R. B., and Pictet, O. V. (1997). From the bird's eye to the microscope: A survey of new stylized facts of the intra-daily foreign exchange markets, Finance & Stochastics 1: 95–129.

Härdle, W., Klinke, S., and Müller, M. (2000). XploRe Learning Guide, Springer.

Hill, B. M. (1975). A simple general approach to inference about the tail of a distribution, Annals of Statistics 3: 1163–1174.

Janicki, A. and Weron, A. (1994). Simulation and Chaotic Behavior of α-Stable Stochastic Processes, Marcel Dekker.


Kanter, M. (1975). Stable densities under change of scale and total variation inequalities, Annals of Probability 3: 697–707.

Koutrouvelis, I. A. (1980). Regression-type estimation of the parameters of stable laws, Journal of the American Statistical Association 75: 918–928.

Kogon, S. M. and Williams, D. B. (1998). Characteristic function based estimation of stable parameters, in R. Adler, R. Feldman, M. Taqqu (eds.), A Practical Guide to Heavy Tails, Birkhäuser, pp. 311–335.

Lévy, P. (1925). Calcul des Probabilités, Gauthier-Villars.

Mandelbrot, B. B. (1963). The variation of certain speculative prices, Journal of Business 36: 394–419.

Mantegna, R. N. and Stanley, H. E. (1995). Scaling behavior in the dynamics of an economic index, Nature 376: 46–49.

McCulloch, J. H. (1986). Simple consistent estimators of stable distribution parameters, Communications in Statistics – Simulations 15: 1109–1136.

McCulloch, J. H. (1996). Financial applications of stable distributions, in G. S. Maddala, C. R. Rao (eds.), Handbook of Statistics, Vol. 14, Elsevier, pp. 393–425.

McCulloch, J. H. (1997). Measuring tail thickness to estimate the stable index α: A critique, Journal of Business & Economic Statistics 15: 74–81.

Mittnik, S., Doganoglu, T., and Chenyao, D. (1999). Computing the probability density function of the stable Paretian distribution, Mathematical and Computer Modelling 29: 235–240.

Mittnik, S., Rachev, S. T., Doganoglu, T., and Chenyao, D. (1999). Maximum likelihood estimation of stable Paretian models, Mathematical and Computer Modelling 29: 275–293.

Nolan, J. P. (1997). Numerical calculation of stable densities and distribution functions, Communications in Statistics – Stochastic Models 13: 759–774.

Nolan, J. P. (1999). An algorithm for evaluating stable densities in Zolotarev's (M) parametrization, Mathematical and Computer Modelling 29: 229–233.

Nolan, J. P. (2001). Maximum likelihood estimation and diagnostics for stable distributions, in O. E. Barndorff-Nielsen, T. Mikosch, S. Resnick (eds.), Lévy Processes, Birkhäuser, Boston.


Press, S. J. (1972). Estimation in univariate and multivariate stable distribution, Journal of the American Statistical Association 67: 842–846.

Rachev, S., ed. (2003). Handbook of Heavy-tailed Distributions in Finance, North Holland.

Rachev, S. and Mittnik, S. (2000). Stable Paretian Models in Finance, Wiley.

Samorodnitsky, G. and Taqqu, M. S. (1994). Stable Non-Gaussian Random Processes, Chapman & Hall.

Stoyanov, S. and Racheva-Iotova, B. (2004). Univariate stable laws in the field of finance – parameter estimation, Journal of Concrete and Applicable Mathematics 2(4), in print.

Weron, R. (1996). On the Chambers-Mallows-Stuck method for simulating skewed stable random variables, Statistics and Probability Letters 28: 165–171. See also R. Weron, Correction to: On the Chambers-Mallows-Stuck method for simulating skewed stable random variables, Research Report HSC/96/1, Wroclaw University of Technology, 1996, http://www.im.pwr.wroc.pl/~hugo/Publications.html.

Weron, R. (2001). Levy-stable distributions revisited: Tail index > 2 does not exclude the Levy-stable regime, International Journal of Modern Physics C 12: 209–223.

Weron, R. (2004). Computationally intensive Value at Risk calculations, in J. E. Gentle, W. Härdle, Y. Mori (eds.), Handbook of Computational Statistics, Springer, Berlin, 911–950.

Zolotarev, V. M. (1986). One-Dimensional Stable Distributions, American Mathematical Society.

2 Extreme Value Analysis and Copulas

Krzysztof Jajuga and Daniel Papla

2.1 Introduction

The analysis of financial data, usually given in the form of financial time series, has recently received a lot of attention from researchers and finance practitioners, in such areas as the valuation of derivative instruments, the forecasting of financial prices, and risk analysis (particularly market risk analysis).

From the practical point of view, a multivariate analysis of financial data may be more appropriate than a univariate one. Most market participants hold portfolios containing more than one financial instrument, so they should perform the analysis for all components of a portfolio. Moreover, there are more and more financial instruments whose payoffs depend on several underlyings (e.g. rainbow options); to value them one should use multivariate models of the underlying vectors of indices. Finally, risk analysis is strongly based on the correlation, or more generally the dependence, between the returns (or prices) of the components of a portfolio, and multivariate analysis is the appropriate tool to detect these relations.

One of the most important applications of financial time series models is risk analysis, including risk measurement. A significant tendency observed in the market is the occurrence of rare events, which very often lead to exceptionally high losses. This has caused a growing interest in the evaluation of so-called extreme risk. There are two groups of models applied to financial time series: "mean-oriented" models, aiming at modeling the mean (expected value) and the variance of the distribution, and "extreme value" models, aiming at modeling the tails (including the maximum and minimum) of the distribution.


In this chapter we present some methods for the analysis of both univariate and multivariate financial time series. Attention is focused on two approaches: extreme value analysis and copula analysis. The presented methods are illustrated by examples coming from the Polish financial market.

2.1.1 Analysis of Distribution of the Extremum

The analysis of the distribution of the extremum is simply the analysis of the random variable defined as the maximum (or minimum) of a set of random variables. For simplicity we concentrate only on the distribution of the maximum. The most important result is the Fisher-Tippett theorem (Embrechts, Klüppelberg, and Mikosch, 1997). In this theorem one considers the limiting distribution of the normalized maximum:

$$\lim_{n\to\infty} P\left(\frac{X_{n:n} - b_n}{a_n} \le x\right) = G(x), \qquad (2.1)$$

where $X_{n:n} = \max(X_1, X_2, \ldots, X_n)$. It can be proved that this limiting distribution belongs to the family of the so-called Generalized Extreme Value (GEV) distributions, whose distribution function is given as:

$$G(x) = \exp\left[-\left\{1 + \xi\,\frac{x - \mu}{\sigma}\right\}^{-1/\xi}\right], \qquad 1 + \xi\sigma^{-1}(x - \mu) > 0. \qquad (2.2)$$

The GEV distribution has three parameters (Reiss and Thomas, 2000): the location parameter µ, the scale parameter σ, and the shape parameter ξ, which reflects the fatness of the tails of the distribution (the higher the value of this parameter, the fatter the tails). The family of GEV distributions contains three subclasses: the Fréchet distribution (ξ > 0), the Weibull distribution (ξ < 0), and the Gumbel distribution (ξ → 0). In financial problems one usually encounters the Fréchet distribution; in this case the underlying observations come from a fat-tailed distribution, such as the Pareto distribution, a stable distribution (including the Cauchy), etc.

One of the most common methods to estimate the parameters of GEV distributions is maximum likelihood. The method is applied to block maxima, obtained by dividing the set of observations into subsets, called blocks, and taking the maximum of each block.
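The block-maxima estimation can be sketched as follows (an illustration in Python, not the book's XploRe code; note that scipy parametrizes the GEV shape as c = −ξ, and the block size of 21 observations, roughly one trading month, is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.stats import genextreme

def fit_gev_block_maxima(returns, block_size=21):
    """Fit a GEV distribution to block maxima by maximum likelihood.

    The sample is divided into consecutive blocks and the maximum of each
    block is taken, as described in the text. scipy's shape parameter c
    equals -xi in the book's notation.
    """
    n_blocks = len(returns) // block_size
    blocks = np.reshape(returns[: n_blocks * block_size],
                        (n_blocks, block_size))
    maxima = blocks.max(axis=1)
    c, mu, sigma = genextreme.fit(maxima)
    return {"xi": -c, "mu": mu, "sigma": sigma}

# Simulated fat-tailed returns (Student t, 4 df) typically give xi > 0,
# i.e. a Frechet-type limit, as discussed in the text.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=2100) * 0.01
params = fit_gev_block_maxima(returns)
print(params)
```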


The main weakness of this approach comes from the fact that the maxima for some blocks may not correspond to rare events. On the other hand, in some blocks there may be more than one observation corresponding to rare events. Therefore this approach can be biased by the selection of the blocks.

2.1.2 Analysis of Conditional Excess Distribution

To analyze rare events, another approach can be used. Consider the so-called conditional excess distribution:

$$F_u(y) = P(X - u \le y \mid X > u) = \frac{F(u + y) - F(u)}{1 - F(u)}, \qquad (2.3)$$

where 0 ≤ y < x₀ − u and x₀ = sup{x : F(x) < 1}. This distribution (also called the conditional tail distribution) is simply the distribution conditional on the underlying random variable taking a value from the tail. Of course, this distribution depends on the choice of the threshold u. It can be proved (Embrechts, Klüppelberg, and Mikosch, 1997) that the conditional excess distribution can be approximated by the so-called Generalized Pareto distribution (GPD), which is linked by one parameter to the GEV distribution. The following property is important: the larger the threshold (the further one goes in the direction of the tail), the better the approximation. The distribution function of the GPD is given by (Franke, Härdle, and Hafner, 2004; Reiss and Thomas, 2000):

$$F_u(y) = 1 - (1 + \xi y/\beta)^{-1/\xi}, \qquad (2.4)$$

where β = σ + ξ(u − µ). The shape parameter ξ plays the same role as in GEV distributions. The generalized parameter β depends on all three parameters of the GEV distribution, as well as on the threshold u. The family of GPDs contains three types of distributions: the Pareto distribution (ξ > 0), the Pareto type II distribution (ξ < 0), and the exponential distribution (ξ → 0). The mean of the conditional excess distribution can be characterized as a linear function of the threshold and of the parameters of the GPD:

$$E(X - u \mid X > u) = \frac{\beta_u}{1 - \xi} + \frac{\xi}{1 - \xi}\,u, \qquad (2.5)$$

for ξ < 1.

One of the most common methods of estimating the parameters of the GPD is maximum likelihood. However, the GPD depends on the choice of the threshold u. The higher the threshold, the better the approximation of the tail by the GPD – a desired property; on the other hand, one then has fewer observations to perform maximum likelihood estimation, which weakens the quality of the estimates. To choose the threshold, one can use a procedure based on the fact that for the GPD the mean of the conditional excess distribution is a linear function of the threshold. One can therefore use the following function, which is just the arithmetic average of the excesses over the threshold:

$$\hat{e}(u) = \frac{\sum_{i=1}^{n} \max\{(x_i - u), 0\}}{\sum_{i=1}^{n} I(x_i > u)}. \qquad (2.6)$$

We know that for the observations higher than the threshold this relation should be a linear function. Therefore a graphical procedure can be applied: the value of ê(u) is calculated for different values of the threshold u, and then a value is selected such that above it an approximately linear relation can be observed.
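The threshold-selection procedure of equation (2.6) can be sketched as follows (an illustration on simulated GPD data, not the book's implementation):

```python
import numpy as np
from scipy.stats import genpareto

def mean_excess(x, u):
    """Empirical mean excess function e^(u) of equation (2.6): the average
    excess x_i - u over the observations exceeding the threshold u."""
    exceed = x[x > u]
    return float(np.mean(exceed - u)) if exceed.size else float("nan")

rng = np.random.default_rng(0)
x = genpareto.rvs(c=0.3, scale=0.01, size=5000, random_state=rng)

# Graphical procedure (here tabulated): e^(u) over a grid of thresholds;
# for GPD-like tails it grows roughly linearly in u.
grid = np.quantile(x, [0.50, 0.75, 0.90, 0.95])
for u in grid:
    print(f"u = {u:.4f}   e(u) = {mean_excess(x, u):.4f}")

# After choosing a threshold, fit the GPD to the excesses by ML
# (floc=0 because the excesses start at zero by construction).
u = grid[2]
xi, _, beta = genpareto.fit(x[x > u] - u, floc=0)
print(f"xi = {xi:.3f}, beta = {beta:.4f}")
```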

2.1.3 Examples

Consider the logarithmic rates of return for the following stock market indices:

• four indices of the Warsaw Stock Exchange (WSE): WIG (the index of the most traded stocks on this exchange), WIG20 (the index of the 20 stocks with the largest capitalization), MIDWIG (the index of 40 mid-cap stocks), and TECHWIG (the index of high-technology stocks);

• two US market indices: DJIA and S&P 500;

• two EU market indices: DAX and FT-SE100.

In addition, we studied the logarithmic rates of return for the following exchange rates: USD/PLN, EUR/PLN, and EUR/USD. The financial time series of the logarithmic rates of return come from the period January 2, 1995 – October 3, 2003, except for the exchange rates EUR/PLN and EUR/USD, where the period January 1, 1999 – October 3, 2003 was taken into account. Figures 2.1–2.3 show histograms of these time series.


Figure 2.1: Histograms of the logarithmic rates of return for the WSE indices: WIG, WIG20, MIDWIG, and TECHWIG. STFeva01.xpl

The most common application of the analysis of the extremum is the estimation of the maximum loss of a portfolio. It can be treated as a more conservative measure of risk than the well-known Value at Risk, which is defined through a quantile of the loss distribution (rather than of the distribution of the maximal loss). The limiting distribution of the maximum loss is the GEV distribution. This, of course, requires a rather large sample of observations coming from the same underlying distribution. Since most financial data are in the form of time series, the required procedure calls for at least a check of the hypothesis of stationarity of the time series, using a unit root test, e.g. the Dickey-Fuller test (Dickey and Fuller, 1979). The hypothesis of stationarity states that the process has no unit roots. With the Dickey-Fuller test we test the null hypothesis of a unit root, that is, that there is a unit root in the characteristic equation of the AR(1) process; the alternative hypothesis is that the time series is stationary. To verify the stationarity hypotheses for each of the considered time series, the augmented Dickey-Fuller test was used. The hypotheses of a unit root were rejected at significance levels lower than 1%, so all time series in question are stationary.

Figure 2.2: Histograms of the logarithmic rates of return for world indices: DJIA, S&P 500, DAX, and FT-SE100. STFeva01.xpl

Figure 2.3: Histograms of the logarithmic rates of return for exchange rates: USD/PLN, EUR/PLN, and EUR/USD. STFeva01.xpl

One of the most important applications of the analysis of the conditional excess distribution is the risk measure called Expected Shortfall (ES), also known as conditional Value at Risk or expected tail loss. It is defined as:

$$ES = E(X - u \mid X > u). \qquad (2.7)$$

So ES is the expected value of the conditional excess distribution, and therefore the GPD can be used to determine ES.

For each time series the parameters of the GEV distributions were estimated using the maximum likelihood method. The results of the estimation are presented in Table 2.1 (for the stock indices) and in Table 2.2 (for the exchange rates).

Table 2.1: The estimates of the parameters of GEV distributions, for the stock indices.

  Data          ξ        µ        σ
  WIG         0.374    0.040    0.012
  WIG20       0.450    0.037    0.022
  MIDWIG      0.604    0.033    0.011
  TECHWIG     0.147    0.066    0.012
  DJIA        0.519    0.027    0.006
  S&P 500     0.244    0.027    0.007
  FT-SE 100  -0.048    0.031    0.006
  DAX        -0.084    0.041    0.011

STFeva02.xpl

The analysis of the results for the stock indices leads to the following conclusions. In most cases we obtained the Fréchet distribution (the estimate of the shape parameter is positive), which suggests that the underlying observations are characterized by a fat-tailed distribution. For the FTSE-100 and DAX indices the estimate of ξ is negative but close to zero, which may suggest either a Weibull or a Gumbel distribution. In the majority of cases, the WSE indices exhibit fatter tails than the other indices. They also have larger estimates of location (related to mean return) and larger estimates of the scale parameter (related to volatility).


Table 2.2: The estimates of the parameters of GEV distributions, for the exchange rates.

  Data        ξ        µ        σ
  USD/PLN    0.046    0.014    0.005
  EUR/PLN    0.384    0.015    0.005
  EUR/USD   -0.213    0.014    0.004

STFeva03.xpl

The analysis of the results for the exchange rates leads to the following conclusions. Three different distributions were obtained: for USD/PLN a Gumbel distribution, for EUR/PLN a Fréchet distribution, and for EUR/USD a Weibull distribution. This suggests very different behavior of the underlying observations. The location and scale parameters are almost the same, and the scale parameters are considerably lower for the exchange rates than for the stock indices.

2.2 Multivariate Time Series

2.2.1 Copula Approach

In this section we present the so-called copula approach. It proceeds in two steps. In the first step one analyzes the marginal (univariate) distributions. In the second step one analyzes the dependence between the components of the random vector. The analysis of dependence is therefore "independent" of the analysis of the marginal distributions. This idea differs from the classical approach, where the multivariate analysis is performed "jointly" for the marginal distributions and the dependence structure by considering the complete covariance matrix (whose off-diagonal elements mix information about scatter and dependence), as, e.g., in the MGARCH approach. One can think of it as analyzing only the main diagonal (scatter measures) and then the structure of dependence "not contaminated" by the scatter parameters.

The fundamental concept of copulas is made clear by Sklar's theorem (Sklar, 1959). The multivariate joint distribution function is represented as a copula function linking the univariate marginal distribution functions:

$$H(x_1, \ldots, x_n) = C\{F_1(x_1), \ldots, F_n(x_n)\}, \qquad (2.8)$$

where H is the multivariate distribution function, Fᵢ is the distribution function of the i-th marginal distribution, and C is a copula. The copula describes the dependence between the components of a random vector. It is worth mentioning some properties of copulas important for modeling dependence:

• for independent variables we have: C(u₁, ..., uₙ) = C⊥(u₁, ..., uₙ) = u₁u₂...uₙ;

• the lower limit for a copula function is: C⁻(u₁, ..., uₙ) = max{u₁ + ... + uₙ − n + 1; 0};

• the upper limit for a copula function is: C⁺(u₁, ..., uₙ) = min(u₁, ..., uₙ).

The lower and upper limits for the copula function have important consequences for modeling dependence, which can be explained in the simplest, bivariate case. Suppose there are two variables X and Y and there exists a function (not necessarily a linear one) which links these two variables. One speaks of so-called total positive dependence between X and Y when Y = T(X) and T is an increasing function. Similarly, one speaks of so-called total negative dependence between X and Y when Y = T(X) and T is a decreasing function. Then:

• in the case of total positive dependence the following relation holds: C(u₁, u₂) = C⁺(u₁, u₂) = min(u₁, u₂);

• in the case of total negative dependence the following relation holds: C(u₁, u₂) = C⁻(u₁, u₂) = max{u₁ + u₂ − 1; 0}.


The introduction of the copula leads to a natural ordering of multivariate distributions with respect to the strength and the direction of dependence. This ordering is given as C₁(u₁, ..., uₙ) ≤ C₂(u₁, ..., uₙ), and we have:

C⁻(u₁, ..., uₙ) ≤ C⊥(u₁, ..., uₙ) ≤ C⁺(u₁, ..., uₙ).

The presented properties are valid for any type of dependence, not just linear dependence. More facts about copulas are given in Franke, Härdle, and Hafner (2004), Rank and Siegl (2002), and Kiesel and Kleinow (2002).

There are very many possible copulas. A popular family contains the so-called Archimedean copulas, defined on the basis of a strictly decreasing and convex function ψ, called the generator. In the bivariate case the copula is given as:

$$C(u_1, u_2) = \psi^{-1}\{\psi(u_1) + \psi(u_2)\}, \qquad (2.9)$$

where ψ : [0, 1] → [0, ∞) and ψ(1) = 0. The most popular and well-studied Archimedean copulas are:

1. The Clayton copula:

$$\psi(t) = \begin{cases} (t^{-\theta} - 1)/\theta, & \theta \ge -1,\ \theta \ne 0, \\ -\log(t), & \theta = 0, \end{cases} \qquad \theta \in [-1, \infty). \qquad (2.10)$$

2. The Frank copula:

$$\psi(t) = \begin{cases} -\log\dfrac{\exp(-\theta t) - 1}{\exp(-\theta) - 1}, & \theta \ne 0, \\ -\log(t), & \theta = 0. \end{cases} \qquad (2.11)$$

3. The Ali-Mikhail-Haq copula:

$$\psi(t) = \log\frac{1 - \theta(1 - t)}{t}, \qquad \theta \in [-1, 1]. \qquad (2.12)$$

Among copulas which do not belong to the Archimedean family, it is worth mentioning the Farlie-Gumbel-Morgenstern copula, given in the bivariate case as:

$$C_\theta(u, v) = uv + \theta uv(1 - u)(1 - v), \qquad \theta \in [-1, 1]. \qquad (2.13)$$
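The Archimedean construction of equation (2.9) can be illustrated with the Clayton generator (2.10); the following sketch (our own illustration, for θ ≠ 0) confirms that it reproduces the known closed form of the Clayton copula:

```python
import numpy as np

def psi_clayton(t, theta):
    """Clayton generator of equation (2.10), for theta != 0."""
    return (t ** -theta - 1.0) / theta

def psi_clayton_inv(s, theta):
    """Inverse of the Clayton generator."""
    return (1.0 + theta * s) ** (-1.0 / theta)

def clayton(u, v, theta):
    """Archimedean construction C = psi^{-1}{psi(u) + psi(v)} of (2.9)."""
    return psi_clayton_inv(psi_clayton(u, theta) + psi_clayton(v, theta),
                           theta)

# The construction reproduces the closed form
# C(u, v) = (u^{-theta} + v^{-theta} - 1)^{-1/theta}:
u, v, theta = 0.3, 0.7, 2.0
closed_form = (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)
print(clayton(u, v, theta), closed_form)
```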


In all these copulas there is one parameter, which can be interpreted as a dependence parameter. Here dependence has the more general meaning presented above, described by a monotonic function.

An often-used copula function is the so-called normal (Gaussian) copula, which links the distribution function of the multivariate normal distribution with the distribution functions of the univariate normal distributions:

$$C(u_1, \ldots, u_n) = \Phi^n_R\{\Phi^{-1}(u_1), \ldots, \Phi^{-1}(u_n)\}. \qquad (2.14)$$

Another commonly used example is the Gumbel copula, which in the bivariate case is given as:

$$C(u_1, u_2) = \exp[-\{(-\log u_1)^{\delta} + (-\log u_2)^{\delta}\}^{1/\delta}]. \qquad (2.15)$$

Figure 2.4 presents an example of the shape of the copula function, in this case the Frank copula (see (2.11)), with the parameters θ taken from the results presented in Section 2.2.2. The estimation of the copula parameters can be performed by maximum likelihood, given the distribution functions of the marginals. As the simplest approach, one can take the empirical distribution function of each marginal.
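As an illustration of equation (2.14) (a sketch using scipy, not the book's code), the bivariate normal copula can be evaluated directly from the multivariate normal cdf:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula(u, v, rho):
    """Bivariate normal copula of equation (2.14):
    C(u, v) = Phi_rho{Phi^{-1}(u), Phi^{-1}(v)}."""
    point = norm.ppf([u, v])
    cov = [[1.0, rho], [rho, 1.0]]
    return float(multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(point))

print(gaussian_copula(0.3, 0.7, 0.0))   # ~0.21 = u*v (independence)
print(gaussian_copula(0.3, 0.7, 0.99))  # close to min(u, v) = 0.3
```

For ρ = 0 the construction reduces to the independence copula, and as ρ → 1 it approaches the upper bound C⁺, in line with the copula ordering discussed above.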

2.2.2 Examples

Consider different pairs of the stock market indices and exchange rates studied in Section 2.1.3. For each pair we fitted a bivariate copula, namely the Clayton, Frank, Ali-Mikhail-Haq, and Farlie-Gumbel-Morgenstern copulas. We present here the results obtained for the Frank copula: Table 2.3 presents selected results for pairs of exchange rates and Table 2.4 for pairs of stock indices.

The important conclusion to be drawn from Table 2.3 is that one pair, namely USD/PLN and EUR/USD, shows negative dependence, whereas the other two show positive dependence. This is particularly important for entities that are exposed to exchange rate risk and want to decrease it by appropriate management of assets and liabilities. There is positive extreme dependence between all stock indices. As could have been expected, there is strong dependence between the indices of the WSE and much lower dependence between the WSE and the other exchanges, with weaker dependence between the WSE and the NYSE than between the WSE and the large European exchanges. The copula approach can also be applied to the so-called tail dependence coefficients; a detailed description of tail dependence is given in Chapter 3.


Figure 2.4: Plot of C(u, v) for the Frank copula with θ = −2.563 in the left panel and θ = 11.462 in the right panel. STFeva04.xpl STFeva05.xpl

Table 2.3: The estimates of the Frank copula for exchange rates.

  Bivariate data             θ
  USD/PLN and EUR/PLN      2.730
  USD/PLN and EUR/USD     -2.563
  EUR/PLN and EUR/USD      3.409

STFeva06.xpl

2.2.3 Multivariate Extreme Value Approach

The copula approach also makes it possible to analyze extreme values in the general multivariate case, by linking it to univariate extreme value analysis. To this end, we concentrate on the multivariate distribution of extrema, where the extremum is taken for each component of a random vector.


Table 2.4: The estimates of the Frank copula for stock indices.

  Bivariate data         θ
  WIG and WIG20       11.462
  WIG and DJIA         0.943
  WIG and FTSE-100     2.021
  WIG and DAX          2.086

STFeva07.xpl

The main result in multivariate extreme value analysis concerns the limiting distribution of the normalized maxima:

$$\lim_{n\to\infty} P\left(\frac{X^1_{n:n} - b^1_n}{a^1_n} \le x^1, \ldots, \frac{X^m_{n:n} - b^m_n}{a^m_n} \le x^m\right) = G(x^1, \ldots, x^m). \qquad (2.16)$$

It was shown by Galambos (1978) that this limiting distribution can be presented in the following form:

$$G(x^1, \ldots, x^m) = C_G\{G_1(x^1), \ldots, G_m(x^m)\}, \qquad (2.17)$$

where C_G is the so-called Extreme Value Copula (EVC). This is the representation of the multivariate distribution of maxima, called here the Multivariate Extreme Value (MEV) distribution, in the form given by Sklar's theorem. It is composed of two parts, each with a special meaning: the univariate distributions belong to the family of GEV distributions, so they are Fréchet, Weibull, or Gumbel distributions. Therefore, to obtain an MEV distribution one applies an EVC to univariate GEV distributions (Fréchet, Weibull, or Gumbel). Since there are many possible extreme value copulas, we get many possible multivariate extreme value distributions.

An EVC is a copula satisfying the following relation:

$$C(u^t_1, \ldots, u^t_n) = C^t(u_1, \ldots, u_n) \quad \text{for } t > 0. \qquad (2.18)$$

It can be shown that a bivariate extreme value copula can be represented in the following form:

$$C(u_1, u_2) = \exp\left[\log(u_1 u_2)\, A\left\{\frac{\log(u_1)}{\log(u_1 u_2)}\right\}\right]. \qquad (2.19)$$


Here A is a convex function satisfying the following relations:

$$A(0) = A(1) = 1, \qquad \max(w, 1 - w) \le A(w) \le 1. \qquad (2.20)$$

The most common extreme value copulas are:

1. The Gumbel copula:

$$C(u_1, u_2) = \exp[-\{(-\log u_1)^{\theta} + (-\log u_2)^{\theta}\}^{1/\theta}], \qquad (2.21)$$

with A(w) = {wᶿ + (1 − w)ᶿ}^{1/θ} and θ ∈ [1, ∞).

2. The Gumbel II copula:

$$C(u_1, u_2) = u_1 u_2 \exp\{\theta (\log u_1 \log u_2)/(\log u_1 + \log u_2)\}, \qquad (2.22)$$

with A(w) = θw² − θw + 1 and θ ∈ [0, 1].

3. The Galambos copula:

$$C(u_1, u_2) = u_1 u_2 \exp[\{(-\log u_1)^{-\theta} + (-\log u_2)^{-\theta}\}^{-1/\theta}], \qquad (2.23)$$

with A(w) = 1 − {w^{−θ} + (1 − w)^{−θ}}^{−1/θ} and θ ∈ [0, ∞).

All three presented copulas are one-parameter functions, and the parameter can be interpreted as a dependence parameter. An important property is that for these copulas, as well as for other possible extreme value copulas, there is positive dependence between the two components of the random vector.

The main application of the multivariate extreme value approach is the estimation of the maximum loss of each component of a portfolio. One then obtains the limiting distribution of the vector of maximal losses: the limiting distributions of the components are univariate GEV distributions and the relation between the maxima is reflected through an extreme value copula.


Table 2.5: The estimates of the Galambos copula for exchange rates.

  Bivariate data             θ
  USD/PLN and EUR/PLN     34.767
  USD/PLN and EUR/USD      2.478
  EUR/PLN and EUR/USD      2.973

STFeva08.xpl

2.2.4 Examples

As in Section 2.2.2, we consider different pairs of stock market indices and exchange rates. In the first step we analyze the separate components of each pair to get estimates of the generalized extreme value distributions. In the second step, we use the empirical distribution functions obtained in the first step and estimate three copulas belonging to the EVC family: Gumbel, Gumbel II, and Galambos. We present here the results obtained for the Galambos copula (Table 2.5) and the Gumbel copula (Table 2.6).

It turns out that in the case of the exchange rates we obtained the best fit for the Galambos copula, see Table 2.5. In the case of the stock indices the best fit was obtained for different copulas; for comparison, we present the results obtained for the Gumbel copula, see Table 2.6.

The dependence parameter of the Galambos copula takes only non-negative values. The higher the value of this parameter, the stronger the dependence between the maximal losses of the respective variables. We see that there is strong extreme dependence between the exchange rates USD/PLN and EUR/PLN and rather weak dependence between EUR/PLN and EUR/USD, as well as between USD/PLN and EUR/USD.

The dependence parameter of the Gumbel copula takes values higher than or equal to 1; again, the higher the value of this parameter, the stronger the dependence between the maximal losses of the respective variables. The results in Table 2.6 indicate strong dependence (as could have been expected) between the stock indices of the Warsaw Stock Exchange. They also show stronger extreme dependence between the WSE and the NYSE than between the WSE and the two large European exchanges.


Table 2.6: The estimates of the Gumbel copula for stock indices.

  Bivariate data         θ
  WIG and WIG20       21.345
  WIG and DJIA        14.862
  WIG and FTSE-100     2.275
  WIG and DAX          5.562

STFeva09.xpl

2.2.5 Copula Analysis for Multivariate Time Series

One of the basic models applied in the classical (mean-oriented) approach to the analysis of multivariate time series is the multivariate GARCH (MGARCH) model, aimed at modeling the conditional covariance matrix. A disadvantage of this approach is the joint modeling of volatilities and correlations, as well as the reliance on the correlation coefficient as a measure of dependence. In this section we present another approach, where volatilities and dependences in multivariate time series, both conditional, are modeled separately. This is possible due to the application of the copula approach directly to the univariate time series that are the components of the multivariate time series. Our presentation is based on the idea of Jondeau and Rockinger (2002), which combines univariate time series modeling by GARCH-type models for volatility with copula analysis. The proposed model is given as:

$$\log(\theta_t) = \sum_{j=1}^{16} d_j\, I\{(u_{t-1}, v_{t-1}) \in A_j\}, \qquad (2.24)$$

where A_j is the j-th element of the unit-square grid. To each parameter d_j an area A_j is associated. For instance, A₁ = [0, p₁] × [0, q₁] and A₂ = [p₁, p₂] × [0, q₁], where p₁ = q₁ = 0.15, p₂ = q₂ = 0.5, and p₃ = q₃ = 0.85. The choice of 16 subintervals is, according to Jondeau and Rockinger (2002), somewhat arbitrary. The dependence parameter is thus conditioned on the lagged values of the univariate distribution functions, where the 16 possible sets of pairs of values are taken into account. The larger the value of the parameter d_j, the stronger the dependence on the past values.
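The bookkeeping behind equation (2.24) reduces to mapping each pair (u_{t−1}, v_{t−1}) to one of the 16 areas; the sketch below is our own illustration, with the cells numbered row by row, which is one possible convention (the boundary handling at the breakpoints is likewise an arbitrary choice):

```python
import numpy as np

# Unit-square grid of equation (2.24): breakpoints 0.15, 0.5, 0.85 cut
# each margin into 4 intervals, giving 16 areas A_1, ..., A_16.
BREAKS = np.array([0.15, 0.5, 0.85])

def area_index(u, v):
    """Return j in 1..16 with (u, v) in A_j; cells are numbered row by
    row, so that A_1 = [0, 0.15) x [0, 0.15), A_2 = [0.15, 0.5) x [0, 0.15),
    and so on."""
    i = int(np.searchsorted(BREAKS, u, side="right"))  # column 0..3
    k = int(np.searchsorted(BREAKS, v, side="right"))  # row 0..3
    return 4 * k + i + 1

print(area_index(0.05, 0.05))  # 1
print(area_index(0.30, 0.05))  # 2
print(area_index(0.90, 0.90))  # 16
```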


Table 2.7: Conditional dependence parameter for time series WIG, WIG20.

               [0, 0.15)  [0.15, 0.5)  [0.5, 0.85)  [0.85, 1]
  [0, 0.15)      15.951       4.426        5.010       1.213
  [0.15, 0.5)     6.000      18.307        8.704       1.524
  [0.5, 0.85)    -0.286       8.409       19.507       5.133
  [0.85, 1]       0.000       2.578        1.942      19.202

STFeva10.xpl

We now describe the method used in the empirical example, for the case of a bivariate time series. The proposed procedure consists of two steps. In the first step, univariate models are built for both component time series; here a combination of ARIMA models for the conditional mean and GARCH models for the conditional variance was used. In the second step, the values of the distribution function of the residuals obtained from the univariate models are subjected to copula analysis.
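The second step requires distribution-function values of the residuals; a minimal sketch of this probability integral transform using the empirical distribution function is given below (our own illustration; the AR-GARCH filtering of the first step is omitted):

```python
import numpy as np

def empirical_pit(residuals):
    """Probability integral transform via the empirical distribution
    function: maps residuals to approximately uniform values in (0, 1).
    Dividing by n + 1 keeps the values strictly inside the unit interval."""
    r = np.asarray(residuals)
    ranks = r.argsort().argsort() + 1      # ranks 1..n (no ties assumed)
    return ranks / (len(r) + 1.0)

# The pairs (u_t, v_t) built this way from the two residual series are
# the input of the copula estimation in the second step.
rng = np.random.default_rng(0)
residuals = rng.standard_t(df=5, size=1000)
u = empirical_pit(residuals)
print(u.min(), u.max())  # strictly inside (0, 1)
```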

2.2.6  Examples

In this example we study three pairs of time series: WIG and WIG20, WIG and DJIA, and USD/PLN and EUR/PLN. First, to get the best fit, an AR(10)-GARCH(1,1) model was built for each component of the bivariate time series. Then the described procedure of fitting a copula and obtaining the conditional dependence parameter was applied. To this end, the interval [0, 1] of values of the univariate distribution function was divided into 4 subintervals: [0, 0.15), [0.15, 0.5), [0.5, 0.85), and [0.85, 1]. This selection of subintervals allows us to concentrate on the tails of the distributions. We thus obtained 16 disjoint areas, and for each area the conditional dependence parameter was estimated using a different copula function. For the purpose of comparison, we present the results obtained for the Frank copula. They are given in Tables 2.7–2.9.

Table 2.8: Conditional dependence parameter for time series WIG, DJIA.

              [0, 0.15)   [0.15, 0.5)   [0.5, 0.85)   [0.85, 1]
[0, 0.15)        2.182        1.868         1.454       -0.207
[0.15, 0.5)      1.169        0.532         1.246        0.493
[0.5, 0.85)      0.809        0.954         0.806        1.301
[0.85, 1]        2.675        2.845         0.666        1.202

STFeva11.xpl

Table 2.9: Conditional dependence parameter for time series USD/PLN, EUR/PLN.

              [0, 0.15)   [0.15, 0.5)   [0.5, 0.85)   [0.85, 1]
[0, 0.15)        3.012        3.887         2.432        7.175
[0.15, 0.5)      2.114        2.817         3.432        3.750
[0.5, 0.85)      2.421        2.824         2.526        4.534
[0.85, 1]        0.127        5.399         3.424        4.616

STFeva12.xpl

The values on "the main diagonal" of these tables correspond to identical subintervals of the two univariate distribution functions. The values for the lowest interval (upper left corner of the table) and the highest interval (lower right corner of the table) therefore correspond to the notions of lower and upper tail dependence, respectively. Moreover, the more the values concentrate along "the main diagonal", the stronger the conditional dependence. From the results presented in Tables 2.7–2.9 we can see that there is a strong conditional dependence between the returns on WIG and WIG20: the values of the conditional dependence parameter decrease monotonically with the distance from "the main diagonal". This property is not observed in the other two tables, where no significant regular patterns can be identified.

We have presented here only some selected non-classical methods for the analysis of financial time series, which proved useful for real data. A plausible future direction of research would be the integration of econometric methods, aimed at studying the dynamic properties, with statistical methods, aimed at studying the distributional properties.
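The copula step of the two-step procedure can be sketched for the Frank copula used in Tables 2.7–2.9. The conditional-inverse sampler, the golden-section search, and all numerical settings below are assumptions of this illustration, not the authors' XploRe code:

```python
import math, random

def frank_sample(m, theta, rng):
    """Draw m pairs from a Frank copula via the conditional inverse method."""
    out = []
    g1 = math.exp(-theta) - 1.0
    for _ in range(m):
        u, t = rng.random(), rng.random()
        gu = math.exp(-theta * u) - 1.0
        # Invert the conditional distribution C(v | u) = t in closed form.
        gv = t * g1 / (math.exp(-theta * u) - t * gu)
        out.append((u, -math.log(1.0 + gv) / theta))
    return out

def frank_loglik(theta, uv):
    """Log-likelihood of the Frank copula density at dependence parameter theta."""
    a = 1.0 - math.exp(-theta)
    ll = 0.0
    for u, v in uv:
        den = a - (1.0 - math.exp(-theta * u)) * (1.0 - math.exp(-theta * v))
        ll += math.log(theta * a) - theta * (u + v) - 2.0 * math.log(abs(den))
    return ll

def frank_mle(uv, lo=1e-3, hi=30.0, iters=80):
    """Golden-section search for the maximum-likelihood theta on [lo, hi]."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if frank_loglik(c, uv) > frank_loglik(d, uv):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

rng = random.Random(42)
sample = frank_sample(2000, 5.0, rng)   # synthetic PIT pairs, true theta = 5
theta_hat = frank_mle(sample)           # MLE should land near 5
```

In the chapter's procedure the input pairs would be the distribution-function values of the ARIMA-GARCH residuals rather than simulated uniforms.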

Bibliography

Dickey, D. and Fuller, W. (1979). Distribution of the estimators for autoregressive time series with a unit root, Journal of the American Statistical Association, 74: 427–431.

Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer.

Franke, J., Härdle, W. and Hafner, C. (2004). Statistics of Financial Markets, Springer.

Galambos, J. (1978). The Asymptotic Theory of Extreme Order Statistics, Krieger Publishing.

Jondeau, E. and Rockinger, M. (2002). Conditional Dependency of Financial Series: The Copula-GARCH Model, FAME, working paper.

Kiesel, R. and Kleinow, T. (2002). Sensitivity analysis of credit portfolio models, in W. Härdle, T. Kleinow and G. Stahl (eds.), Applied Quantitative Finance, Springer.

Rank, J. and Siegl, T. (2002). Applications of Copulas for the Calculation of Value-at-Risk, in W. Härdle, T. Kleinow and G. Stahl (eds.), Applied Quantitative Finance, Springer.

Reiss, R.-D. and Thomas, M. (2000). Extreme Value Analysis, in W. Härdle, S. Klinke and M. Müller (eds.), XploRe Learning Guide, Springer.

Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges, Publications de l'Institut de Statistique de l'Université de Paris, 8: 229–231.

3  Tail Dependence

Rafael Schmidt

3.1  Introduction

Tail dependence describes the amount of dependence in the tails of a bivariate distribution, i.e. the degree of dependence in the lower-left or upper-right corner of its support. Recently, the concept of tail dependence has been discussed in financial applications related to market or credit risk, see Hauksson et al. (2001) and Embrechts et al. (2003). In particular, tail-dependent distributions are of interest in the context of Value at Risk (VaR) estimation for asset portfolios, since these distributions can model the dependence of large loss events (default events) between different assets. Obviously, the portfolio's VaR is determined by the risk behavior of each single asset in the portfolio. On the other hand, the general dependence structure, and especially the dependence structure of extreme events, also strongly influences the VaR calculation. However, for those unfamiliar with extreme value theory it is often unclear how to measure and model the dependence of, for example, large loss events. In particular, the correlation coefficient, the most common dependence measure in financial applications, is often insufficient to describe and estimate the dependence structure of large loss events, and therefore frequently leads to inaccurate VaR estimates, see Embrechts et al. (1999). The main aim of this chapter is to introduce and discuss the so-called tail-dependence coefficient as a simple measure of dependence of large loss events. Kiesel and Kleinow (2002) show empirically that a precise VaR estimation for asset portfolios depends heavily on the proper specification of the tail-dependence structure of the underlying asset-return vector. In their setting,


different choices of the portfolio's dependence structure, which is modelled by a copula function, determine the degree of dependence of large loss events. Motivated by their empirical observations, this chapter defines and explores the concept of tail dependence in more detail. First, we define and calculate tail dependence for several classes of distributions and copulae. In our context, tail dependence is characterized by the so-called tail-dependence coefficient (TDC) and is embedded into the general framework of copulae. Second, a parametric and two nonparametric estimators for the TDC are discussed. Finally, we investigate some empirical properties of the implemented TDC estimators and present an empirical study showing one application of the concept of tail dependence to VaR estimation.

3.2  What is Tail Dependence?

Definitions of tail dependence for multivariate random vectors are mostly related to their bivariate marginal distribution functions. Loosely speaking, tail dependence describes the limiting proportion with which one margin exceeds a certain threshold given that the other margin has already exceeded that threshold. The following approach, as provided in the monograph of Joe (1997), represents one of many possible definitions of tail dependence.

Let X = (X_1, X_2) be a two-dimensional random vector. We say that X is (bivariate) upper tail-dependent if:

    λ_U := lim_{v↑1} P{X_1 > F_1^{-1}(v) | X_2 > F_2^{-1}(v)} > 0,        (3.1)

in case the limit exists. F_1^{-1} and F_2^{-1} denote the generalized inverse distribution functions of X_1 and X_2, respectively. Consequently, we say X = (X_1, X_2) is upper tail-independent if λ_U equals 0. Further, we call λ_U the upper tail-dependence coefficient (upper TDC). Similarly, we define the lower tail-dependence coefficient, if it exists, by:

    λ_L := lim_{v↓0} P{X_1 ≤ F_1^{-1}(v) | X_2 ≤ F_2^{-1}(v)}.        (3.2)

In case X = (X_1, X_2) is standard normally or t-distributed, formula (3.1) simplifies to:

    λ_U = lim_{v↑1} λ_U(v) = lim_{v↑1} 2 · P{X_1 > F_1^{-1}(v) | X_2 = F_2^{-1}(v)}.        (3.3)
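For the bivariate normal case, (3.3) can be evaluated in closed form: given X_2 = z, X_1 is N(ρz, 1 − ρ²), so λ_U(v) = 2[1 − Φ(z √((1 − ρ)/(1 + ρ)))] with z = Φ^{-1}(v), which tends to 0 as v ↑ 1 for |ρ| < 1 (this is the curve shown in Figure 3.1). A small sketch, with the normal quantile obtained by bisection:

```python
import math

def norm_cdf(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_quantile(p, tol=1e-12):
    """Standard normal quantile by bisection on [-40, 40]."""
    lo, hi = -40.0, 40.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lambda_u(v, rho):
    """lambda_U(v) = 2 * P(X1 > F^{-1}(v) | X2 = F^{-1}(v)) for a bivariate
    normal vector with correlation rho; tends to 0 as v -> 1 when |rho| < 1."""
    z = norm_quantile(v)
    return 2.0 * (1.0 - norm_cdf(z * math.sqrt((1.0 - rho) / (1.0 + rho))))
```

The decay of lambda_u towards 0 is exactly the tail independence of the bivariate normal distribution discussed below.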

Figure 3.1: The function λU (v) = 2 · P{X1 > F1−1 (v) | X2 = F2−1 (v)} for a bivariate normal distribution with correlation coeﬃcients ρ = −0.8, −0.6, . . . , 0.6, 0.8. Note that λU = 0 for all ρ ∈ (−1, 1). STFtail01.xpl

A generalization of bivariate tail dependence, as defined above, to multivariate tail dependence can be found in Schmidt and Stadtmüller (2003).

Figures 3.1 and 3.2 illustrate tail dependence for a bivariate normal and a bivariate t-distribution. Irrespective of the correlation coefficient ρ, the bivariate normal distribution is (upper) tail independent. In contrast, the bivariate t-distribution exhibits (upper) tail dependence, and the degree of tail dependence is affected by the correlation coefficient ρ.

The concept of tail dependence can be embedded within copula theory. An n-dimensional distribution function C : [0, 1]^n → [0, 1] is called a copula if it has one-dimensional margins which are uniformly distributed on the interval [0, 1]. Copulae are functions that join or “couple” an n-dimensional distribution function F to its corresponding one-dimensional marginal distribution functions

Figure 3.2: The function λU (v) = 2 · P{X1 > F1−1 (v) | X2 = F2−1 (v)} for a bivariate t-distribution with correlation coeﬃcients ρ = −0.8, −0.6, . . . , 0.6, 0.8. STFtail02.xpl

F_i, i = 1, . . . , n, in the following way:

    F(x_1, . . . , x_n) = C{F_1(x_1), . . . , F_n(x_n)}.

We refer the reader to Joe (1997), Nelsen (1999) or Härdle, Kleinow, and Stahl (2002) for more information on copulae.

The following representation shows that tail dependence is a copula property. Thus, many copula features transfer to the tail-dependence coefficient, such as invariance under strictly increasing transformations of the margins. If X is a continuous bivariate random vector, then a straightforward calculation yields:

    λ_U = lim_{v↑1} [1 − 2v + C(v, v)] / (1 − v),        (3.4)

where C denotes the copula of X. Analogously, λ_L = lim_{v↓0} C(v, v)/v holds for the lower tail-dependence coefficient.
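As a quick numerical sanity check of (3.4), consider the Gumbel-Hougaard copula with θ = 2, for which λ_U = 2 − 2^{1/θ} = 2 − √2 (cf. Table 3.2). A sketch:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v) = exp(-[(-log u)^t + (-log v)^t]^(1/t))."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def upper_tdc_numeric(copula, eps=1e-6, **kw):
    """Finite-v approximation of (3.4): (1 - 2v + C(v, v)) / (1 - v)."""
    v = 1.0 - eps
    return (1.0 - 2.0 * v + copula(v, v, **kw)) / (1.0 - v)

lam_numeric = upper_tdc_numeric(gumbel_copula, theta=2.0)
lam_closed = 2.0 - 2.0 ** 0.5   # 2 - sqrt(2), the closed-form upper TDC
```

Since C(v, v) = v^{2^{1/θ}} for this copula, the ratio in (3.4) converges at rate O(1 − v), so eps = 1e-6 already gives several correct digits.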

3.3  Calculation of the Tail-dependence Coefficient

3.3.1  Archimedean Copulae

Archimedean copulae form an important class of copulae which are easy to construct and have good analytical properties. A bivariate Archimedean copula has the form C(u, v) = ψ^{[−1]}{ψ(u) + ψ(v)} for some continuous, strictly decreasing, and convex generator function ψ : [0, 1] → [0, ∞] with ψ(1) = 0, where the pseudo-inverse function ψ^{[−1]} is defined by:

    ψ^{[−1]}(t) = ψ^{−1}(t) for 0 ≤ t ≤ ψ(0),   and   ψ^{[−1]}(t) = 0 for ψ(0) < t ≤ ∞.

We call ψ strict if ψ(0) = ∞; in that case ψ^{[−1]} = ψ^{−1}.

Within the framework of tail dependence for Archimedean copulae, the following result can be shown (Schmidt, 2003). Note that the one-sided derivatives of ψ exist, as ψ is a convex function; in particular, ψ′(1) and ψ′(0) denote the one-sided derivatives at the boundary of the domain of ψ. Then:

  i) upper tail-dependence implies ψ′(1) = 0 and λ_U = 2 − (ψ^{−1} ∘ 2ψ)′(1),
 ii) ψ′(1) < 0 implies upper tail-independence,
iii) ψ′(0) > −∞ or a non-strict ψ implies lower tail-independence,
 iv) lower tail-dependence implies ψ′(0) = −∞, a strict ψ, and λ_L = (ψ^{−1} ∘ 2ψ)′(0).

Tables 3.1 and 3.2 list various Archimedean copulae in the same ordering as provided in Nelsen (1999, Table 4.1, p. 94) and in Härdle, Kleinow, and Stahl (2002, Table 2.1, p. 42), together with the corresponding upper and lower tail-dependence coefficients (TDCs).


Table 3.1: Various selected Archimedean copulae. The numbers in the first column correspond to the numbers of Table 4.1 in Nelsen (1999), p. 94.

Number & Type          C(u, v)                                                    Parameters
(1)  Clayton           max{(u^{-θ} + v^{-θ} - 1)^{-1/θ}, 0}                       θ ∈ [-1, ∞)\{0}
(2)                    max{1 - [(1-u)^θ + (1-v)^θ]^{1/θ}, 0}                      θ ∈ [1, ∞)
(3)  Ali-Mikhail-Haq   uv / {1 - θ(1-u)(1-v)}                                     θ ∈ [-1, 1)
(4)  Gumbel-Hougaard   exp(-[(-log u)^θ + (-log v)^θ]^{1/θ})                      θ ∈ [1, ∞)
(12)                   {1 + [(u^{-1} - 1)^θ + (v^{-1} - 1)^θ]^{1/θ}}^{-1}         θ ∈ [1, ∞)
(14)                   {1 + [(u^{-1/θ} - 1)^θ + (v^{-1/θ} - 1)^θ]^{1/θ}}^{-θ}     θ ∈ [1, ∞)
(19)                   θ / log(e^{θ/u} + e^{θ/v} - e^θ)                           θ ∈ (0, ∞)

Table 3.2: Tail-dependence coefficients (TDCs) and generators ψ_θ for various selected Archimedean copulae. The numbers in the first column correspond to the numbers of Table 4.1 in Nelsen (1999), p. 94.

Number & Type          ψ_θ(t)                  Parameter θ      Upper-TDC      Lower-TDC
(1)  Pareto            t^{-θ} - 1              [-1, ∞)\{0}      0 for θ > 0    2^{-1/θ} for θ > 0
(2)                    (1 - t)^θ               [1, ∞)           2 - 2^{1/θ}    0
(3)  Ali-Mikhail-Haq   log{(1 - θ(1-t))/t}     [-1, 1)          0              0
(4)  Gumbel-Hougaard   (-log t)^θ              [1, ∞)           2 - 2^{1/θ}    0
(12)                   (1/t - 1)^θ             [1, ∞)           2 - 2^{1/θ}    2^{-1/θ}
(14)                   (t^{-1/θ} - 1)^θ        [1, ∞)           2 - 2^{1/θ}    1/2
(19)                   e^{θ/t} - e^θ           (0, ∞)           0              1

3.3.2  Elliptically-contoured Distributions

In this section, we calculate the tail-dependence coefficient for elliptically-contoured distributions (briefly: elliptical distributions). Well-known elliptical distributions are the multivariate normal distribution, the multivariate t-distribution, the multivariate logistic distribution, the multivariate symmetric stable distribution, and the multivariate symmetric generalized-hyperbolic distribution.

Elliptical distributions are defined as follows: Let X be an n-dimensional random vector and Σ ∈ R^{n×n} be a symmetric positive semi-definite matrix. If X − µ, for some µ ∈ R^n, possesses a characteristic function of the form φ_{X−µ}(t) = Ψ(t⊤Σt) for some function Ψ : R_0^+ → R, then X is said to be elliptically distributed with parameters µ (location), Σ (dispersion), and Ψ. Let E_n(µ, Σ, Ψ) denote the class of elliptically-contoured distributions with the latter parameters. We call Ψ the characteristic generator. The density function, if it exists, of an elliptically-contoured distribution has the following form:

    f(x) = |Σ|^{−1/2} g{(x − µ)⊤ Σ^{−1}(x − µ)},   x ∈ R^n,        (3.5)

for some function g : R_0^+ → R_0^+, which we call the density generator.

Observe that the name “elliptically-contoured distribution” is related to the elliptical contours of the latter density. For a more detailed treatment of elliptical distributions see the monograph of Fang, Kotz, and Ng (1990) or Cambanis, Huang, and Simon (1981).


In connection with financial applications, Bingham and Kiesel (2002) and Bingham, Kiesel, and Schmidt (2002) propose a semi-parametric approach for elliptical distributions by estimating the parametric component (µ, Σ) separately from the density generator g. In their setting, the density generator is estimated by means of nonparametric statistics.

Schmidt (2002b) shows that bivariate elliptically-contoured distributions are upper and lower tail-dependent if the tail of their density generator is regularly varying, i.e. if the tail behaves asymptotically like a power function. Further, a necessary condition for tail dependence is given which is more general than regular variation of the latter tail: more precisely, the tail must be O-regularly varying (see Bingham, Goldie, and Teugels (1987) for the definition of O-regular variation). Although the equivalence of tail dependence and a regularly-varying density generator has not been shown, all density generators of well-known elliptical distributions possess either a regularly-varying tail or a not O-regularly-varying tail. This justifies a restriction to the class of elliptical distributions with regularly-varying density generator if tail dependence is required. In particular, tail dependence is solely determined by the tail behavior of the density generator (except for completely correlated random variables, which are always tail dependent).

The following closed-form expression exists (Schmidt, 2002b) for the upper and lower tail-dependence coefficient of an elliptically-contoured random vector (X_1, X_2) ∈ E_2(µ, Σ, Ψ) with positive-definite matrix

    Σ = ( σ_11  σ_12 ; σ_12  σ_22 ),

having a regularly-varying density generator g with regular variation index −α/2 − 1 < 0:

    λ := λ_U = λ_L = [ ∫_0^{h(ρ)} u^α / √(1 − u²) du ] / [ ∫_0^1 u^α / √(1 − u²) du ],        (3.6)

where ρ = σ_12 / √(σ_11 σ_22) and h(ρ) = {1 + (1 − ρ)² / (1 − ρ²)}^{−1/2}. Note that ρ corresponds to the “correlation” coefficient when it exists (Fang, Kotz, and Ng, 1990).
Moreover, the upper tail-dependence coeﬃcient λU coincides with the lower tail-dependence coeﬃcient λL and depends only on the “correlation” coeﬃcient ρ and the regular variation index α, see Figure 3.3.
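Formula (3.6) is straightforward to evaluate numerically. Substituting u = sin t turns both integrals into integrals of sin^α t; for ρ = 0 and α = 2 this recovers λ = 0.1817, the value of the bivariate t-distribution with two degrees of freedom used again in Section 3.5. A sketch with a simple midpoint rule:

```python
import math

def _int_sin_pow(alpha, upper, n=200000):
    """Midpoint-rule value of the integral of sin(t)^alpha over [0, upper]."""
    h = upper / n
    return h * sum(math.sin((i + 0.5) * h) ** alpha for i in range(n))

def elliptical_tdc(alpha, rho):
    """TDC (3.6) for an elliptical distribution with regular variation
    index alpha and 'correlation' rho, via the substitution u = sin t
    (which removes the 1/sqrt(1 - u^2) singularity at u = 1)."""
    h_rho = (1.0 + (1.0 - rho) ** 2 / (1.0 - rho ** 2)) ** -0.5
    return (_int_sin_pow(alpha, math.asin(h_rho))
            / _int_sin_pow(alpha, math.pi / 2.0))

lam = elliptical_tdc(2.0, 0.0)   # bivariate t with theta = 2, rho = 0
```

As Figure 3.3 shows, the TDC increases with ρ and decreases with α; both monotonicities can be confirmed by calling elliptical_tdc on a grid.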

Figure 3.3: Tail-dependence coeﬃcient λ versus regular variation index α for “correlation” coeﬃcients ρ = 0.5, 0.3, 0.1. STFtail03.xpl

Table 3.3 lists various elliptical distributions, the corresponding density generators (here cn denotes a normalizing constant depending only on the dimension n) and the associated regular variation index α from which one easily derives the tail-dependence coeﬃcient using formula (3.6).


Table 3.3: Tail index α for various density generators g of multivariate elliptical distributions. K_ν denotes the modified Bessel function of the third kind (or Macdonald function).

Number & Type                    Density generator g or characteristic generator Ψ         Parameters          α for n = 2
(23)  Normal                     g(u) = c_n exp(−u/2)                                      —                   ∞
(24)  t                          g(u) = c_n (1 + u/θ)^{−(n+θ)/2}                           θ > 0               θ
(25)  Symmetric gen. hyperbolic  g(u) = c_n K_{λ−n/2}{√(ς(χ + u))} / (√(χ + u))^{n/2−λ}    ς, χ > 0, λ ∈ R     ∞
(26)  Symmetric θ-stable         Ψ(u) = exp{−(u/2)^{θ/2}}                                  θ ∈ (0, 2]          θ
(27)  logistic                   g(u) = c_n exp(−u) / {1 + exp(−u)}²                       —                   ∞

3.3.3  Other Copulae

For many other copulae given in closed form, the tail-dependence coefficient can be derived explicitly. Tables 3.4 and 3.5 list some well-known copula functions and the corresponding lower and upper TDCs.

Table 3.4: Various copulae. Copulae BBx are provided in Joe (1997).

Number & Type        C(u, v)                                                         Parameters
(28)  Raftery        g{min(u, v), max(u, v); θ} with                                 θ ∈ [0, 1]
                     g{x, y; θ} = x − [(1−θ)/(1+θ)] x^{1/(1−θ)}
                                  · [y^{−θ/(1−θ)} − y^{1/(1−θ)}]
(29)  BB1            {1 + [(u^{−θ} − 1)^δ + (v^{−θ} − 1)^δ]^{1/δ}}^{−1/θ}            θ ∈ (0, ∞), δ ∈ [1, ∞)
(30)  BB4            {u^{−θ} + v^{−θ} − 1                                            θ ∈ [0, ∞), δ ∈ (0, ∞)
                      − [(u^{−θ} − 1)^{−δ} + (v^{−θ} − 1)^{−δ}]^{−1/δ}}^{−1/θ}
(31)  BB7            1 − (1 − {[1 − (1 − u)^θ]^{−δ}                                  θ ∈ [1, ∞), δ ∈ (0, ∞)
                      + [1 − (1 − v)^θ]^{−δ} − 1}^{−1/δ})^{1/θ}
(32)  BB8            δ^{−1} [1 − {1 − [1 − (1 − δ)^θ]^{−1}                           θ ∈ [1, ∞), δ ∈ [0, 1]
                      · [1 − (1 − δu)^θ] [1 − (1 − δv)^θ]}^{1/θ}]
(33)  BB11           θ min(u, v) + (1 − θ) uv                                        θ ∈ [0, 1]
(34)  C_Ω in         β C̄_{(θ̄,δ̄)}(u, v) + (1 − β) C_{(θ,δ)}(u, v) with the          θ, θ̄ ∈ R\{0},
      Junker and     Archimedean generator                                           δ, δ̄ ≥ 1,
      May (2002)     ψ_{(θ,δ)}(t) = {− log[(e^{−θt} − 1)/(e^{−θ} − 1)]}^δ;           β ∈ [0, 1]
                     C̄_{(θ̄,δ̄)} is the survival copula with parameters (θ̄, δ̄)

3.4  Estimating the Tail-dependence Coefficient

Suppose X, X^{(1)}, . . . , X^{(m)} are i.i.d. bivariate random vectors with distribution function F and copula C. We assume continuous marginal distribution functions F_i, i = 1, 2. Tests for tail dependence or tail independence are given for example in Ledford and Tawn (1996) or Draisma et al. (2004).

Table 3.5: Tail-dependence coefficients (TDCs) for various copulae. Copulae BBx are provided in Joe (1997).

Number & Type                       Parameters                            Upper-TDC              Lower-TDC
(28)  Raftery                       θ ∈ [0, 1]                            0                      2θ/(1 + θ)
(29)  BB1                           θ ∈ (0, ∞), δ ∈ [1, ∞)                2 − 2^{1/δ}            2^{−1/(θδ)}
(30)  BB4                           θ ∈ [0, ∞), δ ∈ (0, ∞)                2^{−1/δ}               (2 − 2^{−1/δ})^{−1/θ}
(31)  BB7                           θ ∈ [1, ∞), δ ∈ (0, ∞)                2 − 2^{1/θ}            2^{−1/δ}
(32)  BB8                           θ ∈ [1, ∞), δ ∈ [0, 1]                2 − 2(1 − δ)^{θ−1}     0
(33)  BB11                          θ ∈ [0, 1]                            θ                      θ
(34)  C_Ω in Junker and May (2002)  θ, θ̄ ∈ R\{0}, δ, δ̄ ≥ 1, β ∈ [0, 1]   (1 − β)(2 − 2^{1/δ})   β(2 − 2^{1/δ̄})

We consider the following three (non-)parametric estimators for the lower and upper tail-dependence coefficients λ_U and λ_L. These estimators have been discussed in Huang (1992) and Schmidt and Stadtmüller (2003). Let C_m be the empirical copula, defined by:

    C_m(u, v) = F_m(F_{1m}^{−1}(u), F_{2m}^{−1}(v)),        (3.7)

with F_m and F_{im} denoting the empirical distribution functions corresponding to F and F_i, i = 1, 2, respectively. Let R_{m1}^{(j)} and R_{m2}^{(j)} be the ranks of X_1^{(j)} and X_2^{(j)}, j = 1, . . . , m, respectively. The first estimators are based on formulas (3.1) and (3.2):

    λ̂_{U,m}^{(1)} = (m/k) C_m((1 − k/m, 1] × (1 − k/m, 1])
                   = (1/k) Σ_{j=1}^m I(R_{m1}^{(j)} > m − k, R_{m2}^{(j)} > m − k)        (3.8)

and

    λ̂_{L,m}^{(1)} = (m/k) C_m(k/m, k/m) = (1/k) Σ_{j=1}^m I(R_{m1}^{(j)} ≤ k, R_{m2}^{(j)} ≤ k),        (3.9)

where k = k(m) → ∞ and k/m → 0 as m → ∞, and the first expression in (3.8) has to be understood as the empirical copula measure of the interval (1 − k/m, 1] × (1 − k/m, 1]. The second type of estimator is already well known in multivariate extreme-value theory (Huang, 1992). We only provide the estimator for the upper TDC:

    λ̂_{U,m}^{(2)} = 2 − (m/k) {1 − C_m(1 − k/m, 1 − k/m)}
                   = 2 − (1/k) Σ_{j=1}^m I(R_{m1}^{(j)} > m − k or R_{m2}^{(j)} > m − k),        (3.10)

with k = k(m) → ∞ and k/m → 0 as m → ∞. The optimal choice of k is related to the usual variance-bias problem, and we refer the reader to Peng (1998) for more details. Strong consistency and asymptotic normality for both types of nonparametric estimators are also addressed in the latter three references.

Now we focus on an elliptically-contoured bivariate random vector X. In the presence of tail dependence, the previous arguments justify restricting attention to elliptical distributions having a regularly-varying density generator with regular variation index α. This implies that the distribution function of ||X||_2 also has a regularly-varying tail with index α. Formula (3.6) shows that the upper and lower tail-dependence coefficients λ_U and λ_L depend only on the regular variation index α and the “correlation” coefficient ρ. Hence, we propose the following parametric estimator for λ_U and λ_L:

    λ̂_{U,m}^{(3)} = λ̂_{L,m}^{(3)} = λ_U^{(3)}(α̂_m, ρ̂_m).        (3.11)
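The extreme-value-theory estimator (3.10) admits the same rank-based sketch, again as an illustration rather than the chapter's own code; comonotone and countermonotone data give the boundary values 1 and 0:

```python
def ranks(x):
    """Ranks 1..m of the entries of x (ties are not handled)."""
    order = sorted(range(len(x)), key=lambda j: x[j])
    r = [0] * len(x)
    for pos, j in enumerate(order):
        r[j] = pos + 1
    return r

def tdc_upper_2(x1, x2, k):
    """Estimator (3.10): 2 - (1/k) * #{j : R1_j > m - k or R2_j > m - k}."""
    m = len(x1)
    r1, r2 = ranks(x1), ranks(x2)
    joint = sum(1 for a, b in zip(r1, r2) if a > m - k or b > m - k)
    return 2.0 - joint / k

x = [j / 1000.0 for j in range(1000)]
y = list(reversed(x))
lam_co = tdc_upper_2(x, x, 50)    # comonotone: 2 - k/k = 1
lam_anti = tdc_upper_2(x, y, 50)  # countermonotone: 2 - 2k/k = 0
```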


Several robust estimators ρ̂_m for ρ are provided in the literature, such as estimators based on techniques of multivariate trimming (Hahn, Mason, and Weiner, 1991), minimum-volume ellipsoid estimators (Rousseeuw and van Zomeren, 1990), and least-squares estimators (Frahm et al., 2002). For more details regarding the relationship between the regular variation index α, the density generator, and the random variable ||X||_2, we refer to Schmidt (2002b). Observe that even though the estimator for the regular variation index α might be unbiased, the TDC estimator λ̂_{U,m}^{(3)} is biased due to the integral transform.

3.5  Comparison of TDC Estimators

In this section we investigate the finite-sample properties of the introduced TDC estimators. One thousand independent copies of m = 500, 1000, and 2000 i.i.d. random vectors (m denotes the sample length) of a bivariate standard t-distribution with θ = 1.5, 2, and 3 degrees of freedom are generated, and the upper TDCs are estimated. Note that the parameter θ equals the regular variation index α which we discussed in the context of elliptically-contoured distributions. The empirical bias and root-mean-squared error (RMSE) for all three introduced TDC estimation methods are derived and presented in Tables 3.6, 3.7, and 3.8, respectively.

Table 3.6: Bias and RMSE for the nonparametric upper TDC estimator λ̂_U^{(1)} (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5           θ = 2             θ = 3
              λ_U = 0.2296      λ_U = 0.1817      λ_U = 0.1161
              Bias (RMSE)       Bias (RMSE)       Bias (RMSE)
m = 500       25.5 (60.7)       43.4 (72.8)       71.8 (92.6)
m = 1000      15.1 (47.2)       28.7 (55.3)       51.8 (68.3)
m = 2000       8.2 (38.6)       19.1 (41.1)       36.9 (52.0)


Table 3.7: Bias and RMSE for the nonparametric upper TDC estimator λ̂_U^{(2)} (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5           θ = 2             θ = 3
              λ_U = 0.2296      λ_U = 0.1817      λ_U = 0.1161
              Bias (RMSE)       Bias (RMSE)       Bias (RMSE)
m = 500       53.9 (75.1)       70.3 (88.1)      103.1 (116.4)
m = 1000      33.3 (54.9)       49.1 (66.1)       74.8 (86.3)
m = 2000      22.4 (41.6)       32.9 (47.7)       56.9 (66.0)

Table 3.8: Bias and RMSE for the parametric upper TDC estimator λ̂_U^{(3)} (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5           θ = 2             θ = 3
              λ_U = 0.2296      λ_U = 0.1817      λ_U = 0.1161
              Bias (RMSE)       Bias (RMSE)       Bias (RMSE)
m = 500        1.6 (30.5)        3.5 (30.8)       16.2 (33.9)
m = 1000       2.4 (22.4)        5.8 (23.9)       15.4 (27.6)
m = 2000       2.4 (15.5)        5.4 (17.0)       12.4 (21.4)

Regarding the parametric approach, we apply the procedure introduced in Section 3.4 and estimate ρ by a trimmed empirical correlation coefficient with trimming proportion 0.05% and α (= θ) by a Hill estimator. For the latter we choose the optimal threshold value k according to Drees and Kaufmann (1998). The empirical bias and RMSE corresponding to the estimation of α and ρ are provided in Tables 3.9 and 3.10, respectively. Observe that Pearson's correlation coefficient ρ does not exist for θ < 2. In this case, ρ denotes some dependence parameter and a more robust estimation procedure should be used (Frahm et al., 2002).
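The Hill estimator used here for α admits a compact sketch; the threshold k below is simply fixed rather than chosen by the Drees-Kaufmann rule, and the Pareto test sample is an assumption of this illustration:

```python
import math, random

def hill(data, k):
    """Hill estimator of the regular variation index alpha from the k
    largest observations: alpha_hat = k / sum_i log(x_(i) / x_(k+1))."""
    xs = sorted(data, reverse=True)
    logs = [math.log(xs[i] / xs[k]) for i in range(k)]
    return k / sum(logs)

rng = random.Random(7)
# Pareto(alpha = 2) sample: X = U^(-1/2) has a regularly-varying tail.
sample = [rng.random() ** -0.5 for _ in range(5000)]
alpha_hat = hill(sample, k=500)   # should be close to 2
```

The log-spacings of a Pareto tail are approximately exponential with mean 1/α, which is what the reciprocal of the averaged logs estimates; the variance-bias trade-off in k parallels the one for the TDC estimators.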


Table 3.9: Bias and RMSE for the estimator of the regular variation index α (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5            θ = 2              θ = 3
              α = 1.5            α = 2              α = 3
              Bias (RMSE)        Bias (RMSE)        Bias (RMSE)
m = 500        2.2 (211.9)      −19.8 (322.8)      −221.9 (543.7)
m = 1000     −14.7 (153.4)      −48.5 (235.6)      −242.2 (447.7)
m = 2000     −15.7 (101.1)      −60.6 (173.0)      −217.5 (359.4)

Table 3.10: Bias and RMSE for the “correlation” coefficient estimator ρ̂ (multiplied by 10³). The sample length is denoted by m.

              θ = 1.5           θ = 2             θ = 3
              ρ = 0             ρ = 0             ρ = 0
              Bias (RMSE)       Bias (RMSE)       Bias (RMSE)
m = 500        0.02 (61.6)      −2.6 (58.2)        2.1 (56.5)
m = 1000      −0.32 (44.9)       1.0 (42.1)        0.6 (39.5)
m = 2000       0.72 (32.1)      −1.2 (29.3)       −1.8 (27.2)

Finally, Figures 3.4 and 3.5 illustrate the (non-)parametric estimation results of the upper TDC estimators λ̂_U^{(i)}, i = 1, 2, 3. Presented are 3 × 1000 TDC estimations with sample lengths m = 500, 1000, and 2000. The plots visualize the decreasing empirical bias and variance for increasing sample length.

Figure 3.4: Nonparametric upper TDC estimates λ̂_U^{(1)} (left panel) and λ̂_U^{(2)} (right panel) for 3 × 1000 i.i.d. samples of size m = 500, 1000, 2000 from a bivariate t-distribution with parameters θ = 2, ρ = 0, and λ_U^{(1)} = λ_U^{(2)} = 0.1817. STFtail04.xpl

The empirical study shows that the TDC estimator λ̂_U^{(3)} outperforms the other two estimators. For m = 2000, the bias (RMSE) of λ̂_U^{(1)} is three (two and a half) times larger than the bias (RMSE) of λ̂_U^{(3)}, whereas the bias (RMSE) of λ̂_U^{(2)} is two (ten percent) times larger than the bias (RMSE) of λ̂_U^{(1)}. More empirical and statistical results regarding the estimators λ̂_U^{(1)} and λ̂_U^{(2)} are given in Schmidt and Stadtmüller (2003). However, note that the estimator λ̂_U^{(3)} was especially developed for bivariate elliptically-contoured distributions. Thus, the estimator λ̂_U^{(1)} is recommended for TDC estimations of non-elliptical or unknown bivariate distributions.

3.6  Tail Dependence of Asset and FX Returns

Tail dependence is indeed often found in financial data series. Consider two scatter plots of daily negative log-returns of a tuple of financial securities and the corresponding upper TDC estimate λ̂_U^{(1)} for various k (for notational convenience we drop the index m).


Figure 3.5: Parametric upper TDC estimates λ̂_U^{(3)} for 3 × 1000 i.i.d. samples of size m = 500, 1000, 2000 from a bivariate t-distribution with parameters θ = 2, ρ = 0, and λ_U^{(3)} = 0.1817. STFtail05.xpl

The first data set (D1) contains negative daily stock log-returns of BMW and Deutsche Bank for the time period 1992–2001. The second data set (D2) consists of negative daily exchange-rate log-returns of DEM/USD and JPY/USD (so-called FX returns) for the time period 1989–2001. For modelling reasons we assume that the daily log-returns are i.i.d. observations. Figures 3.6 and 3.7 show the presence of tail dependence and the order of magnitude of the tail-dependence coefficient. Tail dependence is present if the plot of the TDC estimates λ̂_U^{(1)} against the thresholds k shows a characteristic plateau for small k. The existence of this plateau for tail-dependent distributions is justified by a regular variation property of the tail distribution; we refer the reader to Peng (1998) or Schmidt and Stadtmüller (2003) for more details. By contrast, the characteristic plateau is not observable if the distribution is tail independent.
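The plateau heuristic can be automated in a simple way: scan the curve k ↦ λ̂_U^{(1)}(k) with a moving window and select the window in which the estimates fluctuate least. The window width and the synthetic test curve below are assumptions of this sketch, not prescriptions from the text:

```python
def plateau_k(tdc_by_k, width=20):
    """Return the index at the centre of the moving window of given width
    over which the TDC estimates vary least (smallest max - min spread)."""
    best_k, best_spread = None, float("inf")
    for start in range(len(tdc_by_k) - width + 1):
        window = tdc_by_k[start:start + width]
        spread = max(window) - min(window)
        if spread < best_spread:
            best_spread, best_k = spread, start + width // 2
    return best_k

# Synthetic curve: erratic for small k, flat plateau, then rising bias.
curve = ([0.5 - 0.01 * k for k in range(30)]        # high-variance start
         + [0.30] * 40                               # plateau at 0.30
         + [0.30 + 0.02 * k for k in range(30)])     # bias kicks in
k_star = plateau_k(curve)                            # lands inside the plateau
```

On real data one would apply this to the estimates from (3.8) over a range of k, mimicking the visual choice of k made for the data sets D1 and D2 below.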


Figure 3.6: Scatter plot of BMW versus Deutsche Bank negative daily stock log-returns (2347 data points) and the corresponding TDC estimate λ̂_U^{(1)} for various thresholds k. STFtail06.xpl

The typical variance-bias problem for various thresholds k can also be observed in Figures 3.6 and 3.7. In particular, a small k comes along with a large variance of the TDC estimator, whereas an increasing k results in a strong bias. In the presence of tail dependence, k is chosen such that the TDC estimate λ̂_U^{(1)} lies on the plateau between the decreasing variance and the increasing bias. Thus for the data set D1 we take k between 80 and 110, which provides a TDC estimate of λ̂_{U,D1}^{(1)} = 0.31, whereas for D2 we choose k between 40 and 90, which yields λ̂_{U,D2}^{(1)} = 0.17.

The importance of the detection and the estimation of tail dependence becomes clear in the next section. In particular, we show that the Value at Risk estimation of a portfolio is closely related to the concept of tail dependence. A proper analysis of tail dependence results in an adequate choice of the portfolio’s loss distribution and leads to a more precise assessment of the Value at Risk.

Figure 3.7: Scatter plot of DEM/USD versus JPY/USD negative daily exchange-rate log-returns (3126 data points) and the corresponding TDC estimate λ̂_U^{(1)} for various thresholds k. STFtail07.xpl

3.7  Value at Risk – a Simulation Study

Value at Risk (VaR) estimations refer to the estimation of high target quantiles of single asset or portfolio loss distributions. Thus, VaR estimations are very sensitive towards the tail behavior of the underlying distribution model. On the one hand, the VaR of a portfolio is aﬀected by the tail distribution of each single asset. On the other hand, the general dependence structure and especially the tail-dependence structure among all assets have a strong impact on the portfolio’s VaR, too. With the concept of tail dependence, we supply a methodology for measuring and modelling one particular type of dependence of extreme events. What follows, provides empirical justiﬁcation that the portfolio’s VaR estimation depends heavily on a proper speciﬁcation of the (tail-)dependence structure of the underlying asset-return vector. To illustrate our assertion we consider three ﬁnancial data sets: The ﬁrst two data sets D1 and D2 refer again to the daily stock log-returns of BMW and Deutsche Bank for the time period 1992– 2001 and the daily exchange rate log-returns of DEM/USD and JPY/USD

85

0 0.02 Log-returns FFR/USD

0.04

0

Simulated log-returns DEM/USD

-0.02

-0.02

0

-0.02

Log-returns DEM/USD

0.02

Value at Risk – a Simulation Study

0.02

3.7

-0.02

0 0.02 Simulated log-returns FFR/USD

0.04

Figure 3.8: Scatter plot of foreign exchange data (left panel) and simulated normal pseudo-random variables (right panel) of FFR/USD versus DEM/USD negative daily exchange rate log-returns (5189 data points). STFtail08.xpl

for the time period 1989–2001, respectively. The third data set (D3) contains exchange rate log-returns of FFR/USD and DEM/USD for the time period 1984–2002. Typically, in practice, either a multivariate normal distribution or a multivariate t-distribution is fitted to the data in order to describe the random behavior (market riskiness) of asset returns. Multivariate t-distributions in particular have recently attracted the attention of practitioners due to their ability to model heavy tails while retaining the advantage of belonging to the class of elliptically contoured distributions. Recall that the multivariate normal distribution has thin-tailed marginals which exhibit no tail dependence, whereas the t-distribution possesses heavy-tailed marginals which are tail dependent (see Section 3.3.2). Due to this different tail behavior, one might pick one of the latter two distribution classes if the data are elliptically contoured. However, elliptically contoured distributions require a very strong symmetry of the data and might not be appropriate in many circumstances. For example, the scatter plot of the data set D3 in Figure 3.8 reveals that its distributional structure does not seem to be elliptically contoured at all.


To circumvent this problem, one could fit a distribution from a broader class, such as the generalized hyperbolic distributions (Eberlein and Keller, 1995; Bingham and Kiesel, 2002). Alternatively, splitting the dependence structure from the marginal distribution functions via the theory of copulae (as described in Section 3.2) is also attractive. This split exploits the fact that statistical (calibration) methods are well established for one-dimensional distribution functions. For the data sets D1, D2, and D3, one-dimensional t-distributions are utilized to model the marginal distributions. The choice of an appropriate copula function turns out to be delicate. Two structural features are important for the choice of the copula in the context of VaR estimation. First, the general structure (symmetry) of the chosen copula should coincide with the dependence structure of the real data. We visualize the dependence structure of the sample data via the respective empirical copula (Figure 3.9), i.e. the marginal distributions are standardized by the corresponding empirical distribution functions. Second, if the data show tail dependence, then one must utilize a copula which comprises tail dependence. VaR estimates at small confidence levels are especially sensitive to tail dependence. Figure 3.9 indicates that the FX data set D3 has significantly more dependence in the lower tail than the simulated data from a fitted bivariate normal copula. The data clustering in the lower left corner of the scatter plot of the empirical copula is a strong indication of tail dependence. Based on the latter findings, we use a t-copula (which allows for tail dependence, see Section 3.3.2) and t-distributed marginals (which are heavy tailed).
Note that the resulting joint distribution is elliptically contoured only if the degrees of freedom of the t-copula and the t-margins coincide, since in this case the joint distribution corresponds to a multivariate t-distribution. The parameters of the marginals and the copula are estimated separately in two consecutive steps via maximum likelihood. For the statistical properties of this procedure, which is called the Inference Functions for Margins (IFM) method, we refer to Joe and Xu (1996). Tables 3.11, 3.12, and 3.13 compare the historical VaR estimates of the data sets D1, D2, and D3 with the average of 100 VaR estimates simulated from different distributions. The fitted distribution is either a bivariate normal, a bivariate t-distribution, or a bivariate distribution with t-copula and t-marginals. The respective standard deviations of the VaR estimates are provided in parentheses. For better exposition, we have multiplied all numbers by 10^5.
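The two-step IFM idea can be sketched as follows. For brevity, step two recovers only the t-copula's correlation parameter via Kendall's tau inversion rather than by full copula maximum likelihood, so this is a simplified stand-in for the procedure of Joe and Xu (1996):

```python
import numpy as np
from scipy import stats

def ifm_fit(x, y):
    """Step 1: fit t marginals by maximum likelihood.
    Step 2: map the data to uniform margins and estimate the
    elliptical copula's correlation by tau inversion."""
    params_x = stats.t.fit(x)              # (df, loc, scale)
    params_y = stats.t.fit(y)
    u = stats.t.cdf(x, *params_x)          # pseudo-observations
    v = stats.t.cdf(y, *params_y)
    tau, _ = stats.kendalltau(u, v)
    rho = np.sin(np.pi * tau / 2)          # tau -> rho for elliptical copulas
    return params_x, params_y, rho
```

Estimating the copula's degrees of freedom would additionally require maximising the t-copula density over the pseudo-observations, which is omitted here.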


Figure 3.9: Lower left corner of the empirical copula density plots of real data (left panel) and simulated normal pseudo-random variables (right panel) of FFR/USD versus DEM/USD negative daily exchange rate log-returns (5189 data points). STFtail09.xpl

Table 3.11: Mean and standard deviation of 100 VaR estimations (multiplied by 10^5) from simulated data following different distributions which are fitted to the data set D1.

Quantile   Historical VaR   Normal distribution   t-distribution    t-copula & t-marginals
                            Mean (Std)            Mean (Std)        Mean (Std)
0.01       489.93           397.66 (13.68)        464.66 (39.91)    515.98 (36.54)
0.025      347.42           335.28 (9.67)         326.04 (18.27)    357.40 (18.67)
0.05       270.41           280.69 (7.20)         242.57 (10.35)    260.27 (11.47)

The results in these tables clearly show that the fitted bivariate normal distribution does not yield an overall satisfying estimation of the VaR for all data sets D1, D2, and D3. The poor estimation results for the 0.01- and 0.025-quantile VaR (i.e. the means of the VaR estimates deviate strongly from the historical VaR estimate) are mainly caused by the thin tails of the normal


Table 3.12: Mean and standard deviation of 100 VaR estimations (multiplied by 10^5) from simulated data following different distributions which are fitted to the data set D2.

Quantile   Historical VaR   Normal distribution   t-distribution    t-copula & t-marginals
                            Mean (Std)            Mean (Std)        Mean (Std)
0.01       155.15           138.22 (4.47)         155.01 (8.64)     158.25 (8.24)
0.025      126.63           116.30 (2.88)         118.28 (4.83)     120.08 (4.87)
0.05       98.27            97.56 (2.26)          92.35 (2.83)      94.14 (3.12)

Table 3.13: Mean and standard deviation of 100 VaR estimations (multiplied by 10^5) from simulated data following different distributions which are fitted to the data set D3.

Quantile   Historical VaR   Normal distribution   t-distribution    t-copula & t-marginals
                            Mean (Std)            Mean (Std)        Mean (Std)
0.01       183.95           156.62 (3.65)         179.18 (9.75)     179.41 (6.17)
0.025      141.22           131.54 (2.41)         124.49 (4.43)     135.21 (3.69)
0.05       109.94           110.08 (2.05)         91.74 (2.55)      105.67 (2.59)

distribution. By contrast, the bivariate t-distribution provides good estimates of the historical VaR for the data sets D1 and D2 over all quantiles. However, both data sets are approximately elliptically contoured, since the estimated parameters of the copula and the marginals are almost equal. For example, for the data set D1 the estimated degree of freedom of the t-copula is 3.05, whereas the estimated degrees of freedom of the t-marginals are 2.99 and 3.03, respectively. We have already discussed that the distribution of the data set D3 is not elliptically contoured. Indeed, the VaR estimation improves when the copula and the marginals are split: the corresponding estimated degree of freedom of the t-copula is 1.11, whereas the estimated degrees of freedom of the t-marginals are 4.63 and 5.15. Finally, note that the empirical standard deviations differ significantly between the VaR estimates based on the multivariate t-distribution and on the t-copula, respectively.


Bibliography

Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1987). Regular Variation, Cambridge University Press, Cambridge.

Bingham, N. H. and Kiesel, R. (2002). Semi-parametric modelling in Finance: Theoretical foundation, Quantitative Finance 2: 241–250.

Bingham, N. H., Kiesel, R. and Schmidt, R. (2002). Semi-parametric modelling in Finance: Econometric applications, Quantitative Finance 3 (6): 426–441.

Cambanis, S., Huang, S. and Simons, G. (1981). On the theory of elliptically contoured distributions, Journal of Multivariate Analysis 11: 368–385.

Draisma, G., Drees, H., Ferreira, A. and de Haan, L. (2004). Bivariate tail estimation: dependence in asymptotic independence, Bernoulli 10 (2): 251–280.

Drees, H. and Kaufmann, E. (1998). Selecting the optimal sample fraction in univariate extreme value estimation, Stochastic Processes and their Applications 75: 149–172.

Eberlein, E. and Keller, U. (1995). Hyperbolic distributions in finance, Bernoulli 1: 281–299.

Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events, Springer Verlag, Berlin.

Embrechts, P., Lindskog, F. and McNeil, A. (2001). Modelling Dependence with Copulas and Applications to Risk Management, in S. Rachev (Ed.) Handbook of Heavy Tailed Distributions in Finance, Elsevier: 329–384.

Embrechts, P., McNeil, A. and Straumann, D. (1999). Correlation and Dependency in Risk Management: Properties and Pitfalls, in M.A.H. Dempster (Ed.) Risk Management: Value at Risk and Beyond, Cambridge University Press, Cambridge: 176–223.

Fang, K., Kotz, S. and Ng, K. (1990). Symmetric Multivariate and Related Distributions, Chapman and Hall, London.

Frahm, G., Junker, M. and Schmidt, R. (2002). Estimating the Tail Dependence Coefficient, CAESAR Center Bonn, Technical Report 38, http://stats.lse.ac.uk/schmidt.


Härdle, W., Kleinow, T. and Stahl, G. (2002). Applied Quantitative Finance – Theory and Computational Tools, Springer Verlag, Berlin.

Hahn, M.G., Mason, D.M. and Weiner, D.C. (1991). Sums, trimmed sums and extremes, Birkhäuser, Boston.

Hauksson, H., Dacorogna, M., Domenig, T., Mueller, U. and Samorodnitsky, G. (2001). Multivariate Extremes, Aggregation and Risk Estimation, Quantitative Finance 1: 79–95.

Huang, X. (1992). Statistics of Bivariate Extreme Values, Thesis Publishers and Tinbergen Institute.

Joe, H. (1997). Multivariate Models and Dependence Concepts, Chapman and Hall, London.

Joe, H. and Xu, J. J. (1996). The Estimation Method of Inference Function for Margins for Multivariate Models, British Columbia, Dept. of Statistics, Technical Report 166.

Junker, M. and May, A. (2002). Measurement of aggregate risk with copulas, Research Center CAESAR Bonn, Dept. of Quantitative Finance, Technical Report 2.

Kiesel, R. and Kleinow, T. (2002). Sensitivity analysis of credit portfolio models, in W. Härdle, T. Kleinow and G. Stahl (Eds.) Applied Quantitative Finance, Springer Verlag, New York.

Ledford, A. and Tawn, J. (1996). Statistics for Near Independence in Multivariate Extreme Values, Biometrika 83: 169–187.

Nelsen, R. (1999). An Introduction to Copulas, Springer Verlag, New York.

Peng, L. (1998). Second Order Condition and Extreme Value Theory, Tinbergen Institute Research Series 178, Thesis Publishers and Tinbergen Institute.

Rousseeuw, P.J. and van Zomeren, B.C. (1990). Unmasking multivariate outliers and leverage points, Journal of the American Statistical Association 85: 633–639.

Schmidt, R. (2002a). Credit Risk Modelling and Estimation via Elliptical Copulae, in G. Bohl, G. Nakhaeizadeh, S.T. Rachev, T. Ridder and K.H. Vollmer (Eds.) Credit Risk: Measurement, Evaluation and Management, Physica Verlag, Heidelberg.


Schmidt, R. (2002b). Tail Dependence for Elliptically Contoured Distributions, Math. Methods of Operations Research 55 (2): 301–327.

Schmidt, R. (2003). Dependencies of Extreme Events in Finance, Dissertation, University of Ulm, http://stats.lse.ac.uk/schmidt.

Schmidt, R. and Stadtmüller, U. (2002). Nonparametric Estimation of Tail Dependence, The London School of Economics, Department of Statistics, Research Report 101, http://stats.lse.ac.uk/schmidt.

4 Pricing of Catastrophe Bonds

Krzysztof Burnecki, Grzegorz Kukla, and David Taylor

4.1 Introduction

Catastrophe (CAT) bonds are one of the more recent financial derivatives to be traded on the world markets. In the mid-1990s a market in catastrophe insurance risk emerged in order to facilitate the direct transfer of reinsurance risk associated with natural catastrophes from corporations, insurers and reinsurers to capital market investors. The primary instrument developed for this purpose was the CAT bond. CAT bonds are more specifically referred to as insurance-linked securities (ILS). The distinguishing feature of these bonds is that the ultimate repayment of principal depends on the outcome of an insured event. The basic CAT bond structure can be summarized as follows (Lane, 2004):

1. The sponsor establishes a special purpose vehicle (SPV) as an issuer of bonds and as a source of reinsurance protection.

2. The issuer sells bonds to investors. The proceeds from the sale are invested in a collateral account.

3. The sponsor pays a premium to the issuer; this and the investment of bond proceeds are a source of interest paid to investors.

4. If the specified catastrophic risk is triggered, the funds are withdrawn from the collateral account and paid to the sponsor; at maturity, the remaining principal – or if there is no event, 100% of principal – is paid to investors.
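Step 4 of this structure determines the investor's terminal cash flow. A minimal sketch of the principal repayment rule (the simple drawdown netting is an illustrative assumption; actual deals specify their own loss allocation):

```python
def cat_bond_principal(principal, triggered, sponsor_drawdown=0.0):
    """Principal repaid to investors at maturity: 100% if the
    catastrophic trigger never fired, otherwise whatever remains
    in the collateral account after the sponsor's drawdown."""
    if not triggered:
        return principal
    return max(principal - sponsor_drawdown, 0.0)
```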


There are three types of ILS triggers: indemnity, index and parametric. An indemnity trigger involves the actual losses of the bond-issuing insurer. For example, the event may be the insurer's losses from an earthquake in a certain area of a given country over the period of the bond. An industry index trigger involves, in the US for example, an index created from Property Claim Services (PCS) loss estimates. A parametric trigger is based on, for example, Richter scale readings of the magnitude of an earthquake at specified data stations. In this chapter we address the issue of pricing CAT bonds with indemnity and index triggers.

4.1.1 The Emergence of CAT Bonds

Until fairly recently, property reinsurance was a relatively well understood market with efficient pricing. However, naturally occurring catastrophes, such as earthquakes and hurricanes, are beginning to have a dominating impact on the industry. In part, this is due to the rapidly changing, heterogeneous distribution of high-value property in vulnerable areas. A consequence of this has been an increased need for a primary and secondary market in catastrophe-related insurance derivatives. The creation of CAT bonds, along with allied financial products such as catastrophe insurance options, was motivated in part by the need to cover the massive property insurance industry payouts of the early- to mid-1990s. They also represent a "new asset class" in that they provide a mechanism for hedging against natural disasters, a risk which is essentially uncorrelated with the capital market indices (Doherty, 1997). Subsequent to the development of the CAT bond, the class of disaster referenced has grown considerably. As yet, there is almost no secondary market for CAT bonds, which hampers the use of arbitrage-free pricing models for the derivative.

Property insurance claims of approximately USD 60 billion between 1990 and 1996 (Canter, Cole, and Sandor, 1996) caused great concern to the insurance industry and resulted in the insolvency of a number of firms. These bankruptcies were brought on in the wake of hurricanes Andrew (Florida and Louisiana affected, 1992), Opal (Florida and Alabama, 1995) and Fran (North Carolina, 1996), which caused combined damage totalling USD 19.7 billion (Canter, Cole, and Sandor, 1996). These, along with the Northridge earthquake (1994) and similar disasters (for an illustration of the US natural catastrophe data see Figure 4.1), led to an interest in alternative means for underwriting insurance. In 1995, when the CAT bond market was born, the primary and secondary (or reinsurance) industries had access to approximately USD 240 billion in capital (Canter, Cole, and Sandor, 1996; Cummins and Danzon, 1997). Given the capital level constraints necessary for the reinsuring of property losses and the potential for single-event losses in excess of USD 100 billion, this was clearly insufficient. The international capital markets provided a potential source of security for the (re-)insurance market: with an estimated capitalisation at that time of about USD 19 trillion, they underwent an average daily fluctuation of approximately 70 basis points, or USD 133 billion (Sigma, 1996).

The undercapitalisation of the reinsurance industry (and its consequential default risk) meant that there was a tendency for CAT reinsurance prices to be highly volatile. This was reflected in the traditional insurance market, with rates on line being significantly higher in the years following catastrophes and dropping off in the intervening years (Froot and O'Connell, 1997; Sigma, 1997). This heterogeneity in pricing has a very strong damping effect, forcing many reinsurers to leave the market, which in turn has adverse consequences for the primary insurers. A number of reasons for this volatility have been advanced (Winter, 1994; Cummins and Danzon, 1997). CAT bonds and allied catastrophe-related derivatives are an attempt to address these problems by providing effective hedging instruments which reflect long-term views and can be priced according to the statistical characteristics of the dominant underlying process(es). Their impact, since a period of standardisation between 1997 and 2003, has been substantial. As a consequence, the rise in prices associated with the uppermost segments of the CAT reinsurance programs has been dampened. The primary market has developed and both issuers and investors are now well-educated and technically adept. In the years 2000 to 2003, the average total issue exceeded USD 1 billion per annum (McGhee, 2004).
The catastrophe bond market witnessed yet another record year in 2003, with total issuance of USD 1.73 billion, an impressive 42 percent year-on-year increase from 2002's record of USD 1.22 billion. During the year, a total of eight transactions were completed, with three originating from first-time issuers. The year also featured the first European corporate-sponsored transaction (and only the third by any non-insurance company): Électricité de France, the largest electric utility in Europe, sponsored a transaction to address a portion of the risks facing its properties from French windstorms. Since 1997, when the market began in earnest, 54 catastrophe bond issues have been completed with total risk limits of almost USD 8 billion. It is interesting to note that very few of the issued bonds receive better than "non-investment grade" BB ratings and that almost no CAT bonds have been triggered, despite an increased reliance on parametric or index-based payout triggers.

Figure 4.1: Graph of the adjusted PCS catastrophe claims data (USD billion), 1990–1999. STFcat01.xpl

4.1.2 Insurance Securitization

Capitalisation of insurance, and the consequent spreading of risk through share issues, is well established, and the majority of primary and secondary insurers are public companies. Investors in these companies are thus de facto bearers of risk for the industry. This, however, relies on the idea of risk pooling through the law of large numbers, whereby the loss borne by each investor becomes highly predictable. In the case of catastrophic natural disasters this may not be possible, as the losses incurred by different insurers tend to be correlated. In this situation a different approach to hedging the risk is necessary. A number of such products, which realize innovative methods of risk spreading, already exist and are traded (Litzenberger, Beaglehole, and Reynolds, 1996; Cummins and Danzon, 1997; Aase, 1999; Sigma, 2003). They are roughly divided into reinsurance share related derivatives, including Post-loss Equity Issues and Catastrophe Equity Puts, and asset–liability hedges such as Catastrophe Futures, Options and CAT Bonds.


In 1992, the Chicago Board of Trade (CBOT) introduced the CAT futures. In 1995, the CAT future was replaced by the PCS option. This option was based on a loss index provided by PCS. The underlying index represented the development of specified catastrophe damages, was published daily, and eliminated the problems of the earlier ISO index. The options traded better, especially the call option spreads, where insurers would appear on both sides of the transaction, i.e. as buyer and seller. However, they also ceased trading in 2000. Much work in the reinsurance industry concentrated on pricing these futures and options and on modelling the process driving their underlying indices (Canter, Cole, and Sandor, 1996; Embrechts and Meister, 1997; Aase, 1999). CAT bonds are allied but separate instruments which seek to ensure that capital requirements are met in the specific instance of a catastrophic event.

4.1.3 CAT Bond Pricing Methodology

In this chapter we investigate the pricing of CAT bonds. The methodology developed here can be extended to most other catastrophe-related instruments. However, we are concerned here only with CAT-specific instruments, e.g. California Earthquake Bonds (Sigma, 1996; Sigma, 1997; Sigma, 2003; McGhee, 2004), and not with reinsurance shares or their related derivatives. In the early market for CAT bonds, the pricing of the bonds was in the hands of the issuer and was affected only by the equilibrium between supply and demand. Consequently there was a tendency for the market to resemble the traditional reinsurance market. However, as CAT bonds become more popular, it is reasonable to expect that their price will begin to reflect the fair or arbitrage-free price of the bond, although recent discussions of alternative pricing methodologies have contradicted this expectation (Lane, 2003). Our pricing approach assumes that this market already exists. Some of the traditional assumptions of derivative security pricing are not correct when applied to these instruments due to the properties of the underlying contingent stochastic processes. There is evidence that certain catastrophic natural events have (partial) power-law distributions associated with their loss statistics (Barton and Nishenko, 1994). This overturns the traditional log-normal assumption of derivative pricing models. There are also well-known statistical difficulties associated with the moments of power-law distributions, rendering it impossible to employ traditional pooling methods and, consequently, the central limit theorem. Given that heavy-tailed or large deviation results assume, in general, that at least the first moment of the distribution


exists, there will be diﬃculties with applying extreme value theory to this problem (Embrechts, Resnick, and Samorodnitsky, 1999). It would seem that these characteristics may render traditional actuarial or derivatives pricing approaches ineﬀective. There are additional features to modelling the CAT bond price which are not to be found in models of ordinary corporate or government issue (although there is some similarity with pricing defaultable bonds). In particular, the trigger event underlying CAT bond pricing is dependent on both the frequency and severity of natural disasters. In the model described here, we attempt to reduce to a minimum any assumptions about the underlying distribution functions. This is in the interests of generality of application. The numerical examples will have to make some distributional assumptions and will reference some real data. We will also assume that loss levels are instantaneously measurable and updatable. It is straightforward to adjust the underlying process to accommodate a development period. There is a natural similarity between the pricing of catastrophe bonds and the pricing of defaultable bonds. Defaultable bonds, by deﬁnition, must contain within their pricing model a mechanism that accounts for the potential (partial or complete) loss of their principal value. Defaultable bonds yield higher returns, in part, because of this potential defaultability. Similarly, CAT bonds are oﬀered at high yields because of the unpredictable nature of the catastrophe process. With this characteristic in mind, a number of pricing models for defaultable bonds have been advanced (Jarrow and Turnbull, 1995; Zhou, 1997; Duﬃe and Singleton, 1999). The trigger event for the default process has similar statistical characteristics to that of the equivalent catastrophic event pertaining to CAT bonds. 
In an allied application to mortgage insurance, the similarity between catastrophe and default in the log-normal context has been commented on (Kau and Keenan, 1996). With this in mind, we have modelled the catastrophe process as a compound doubly stochastic Poisson process. The underlying assumption is that there is a Poisson point process (of some intensity, in general varying over time) of potentially catastrophic events. However, these events may or may not result in economic losses. We assume the economic losses associated with each of the potentially catastrophic events to be independent and to have a certain common probability distribution. This is justiﬁable for the Property Claim Loss indices used as the triggers for the CAT bonds. Within this model, the threshold time can be seen as a point of a Poisson point process with a stochastic intensity depending on the instantaneous index position. We make this model precise later in the chapter.


Baryshnikov, Mayo, and Taylor (1998) presented an arbitrage-free solution to the pricing of CAT bonds under conditions of continuous trading. They modelled the stochastic process underlying the CAT bond as a compound doubly stochastic Poisson process. Burnecki and Kukla (2003) applied their results in order to determine no-arbitrage prices of zero-coupon and coupon-bearing CAT bonds. In Section 4.2 we present the doubly stochastic Poisson pricing model. In Section 4.3 we study 10-year catastrophe loss data provided by Property Claim Services. We find a distribution function which fits the observed claims in a satisfactory manner and estimate the intensity of the non-homogeneous Poisson process governing the flow of the natural events. In Section 4.4 we illustrate the values of different CAT bonds associated with this loss data with respect to the threshold level and maturity time. To this end we apply Monte Carlo simulations.

4.2 Compound Doubly Stochastic Poisson Pricing Model

The CAT bond we are interested in is described by specifying the region, type of events, type of insured properties, etc. More abstractly, it is described by the aggregate loss process L_s and by the threshold loss D. Fix a probability space (Ω, F, (F_t), ν) with an increasing filtration F_t ⊂ F, t ∈ [0, T]. This leads to the following assumptions:

• There exists a doubly stochastic Poisson process (Bremaud, 1981) M_s describing the flow of (potentially catastrophic) natural events of a given type in the region specified in the bond contract. The intensity of this Poisson point process is assumed to be a predictable bounded process m_s. This process describes the estimates based on statistical analysis and scientific knowledge about the nature of the catastrophe causes. We denote the instants of these potentially catastrophic natural events by 0 ≤ t_1 ≤ ... ≤ t_i ≤ ... ≤ T.

• The losses incurred by each event in the flow {t_i} are assumed to be independent, identically distributed random variables {X_i} with distribution function F(x) = P(X_i < x).

• There is a progressive process of discounting rates r. Following traditional practice, we assume the process is continuous almost everywhere. This process describes the value at time s of USD 1 paid at time t > s by

exp{−R(s, t)} = exp{−∫_s^t r(ξ) dξ}.

Therefore, one has

L_t = Σ_{t_i ≤ t} X_i = Σ_{i=1}^{M_t} X_i.

The definition of the process implies that L is left-continuous and predictable. We assume that the threshold event is the time when the accumulated losses exceed the threshold level D, that is τ = inf{t : L_t ≥ D}. Now define a new process N_t = I(L_t ≥ D). Baryshnikov et al. (1998) show that this is also a doubly stochastic Poisson process with the intensity

λ_s = m_s {1 − F(D − L_s)} I(L_s < D).    (4.1)

In Figure 4.2 we see a sample trajectory of the aggregate loss process L_t (0 ≤ t ≤ T = 10 years) generated under the assumption of log-normal loss amounts with µ = 18.3806 and σ = 1.1052 and a non-homogeneous Poisson process M_t with the intensity function m1_s = 35.32 + 2.32 · 2π sin{2π(s − 0.20)}, together with a real-life catastrophe loss trajectory (which will be analysed in detail in Section 4.3), the mean function of the process L_t, and two sample 0.05- and 0.95-quantile lines based on 5000 trajectories of the aggregate loss process; see Chapter 14 and Burnecki, Härdle, and Weron (2004). It is evident that in the studied log-normal case the historical trajectory falls outside even the 0.05-quantile line. This may suggest that "more heavy-tailed" distributions such as the Pareto or Burr distributions would be better for modelling the "real" aggregate loss process. In Figure 4.2 the black horizontal line represents a threshold level of D = 60 billion USD.
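Trajectories like these can be generated by the standard thinning method for non-homogeneous Poisson processes: simulate candidate event times at the intensity's upper bound, accept each with probability m(s)/m_max, and attach an i.i.d. log-normal loss to every accepted event. The parameters below come from the text; the thinning construction itself is a generic simulation device, not necessarily the chapter's implementation:

```python
import numpy as np

def simulate_aggregate_losses(T=10.0, mu=18.3806, sigma=1.1052, seed=0):
    """One trajectory of L_t under NHP1 with log-normal losses."""
    rng = np.random.default_rng(seed)
    amp = 2.32 * 2 * np.pi
    m = lambda s: 35.32 + amp * np.sin(2 * np.pi * (s - 0.20))
    m_max = 35.32 + amp                           # upper bound on m(s)
    n = rng.poisson(m_max * T)                    # candidate events
    cand = np.sort(rng.uniform(0.0, T, n))
    keep = rng.uniform(0.0, m_max, n) < m(cand)   # thinning step
    times = cand[keep]
    losses = rng.lognormal(mu, sigma, times.size)
    return times, np.cumsum(losses)               # event times, L_t values
```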

4.3 Calibration of the Pricing Model

We conducted empirical studies for the PCS data obtained from Property Claim Services. ISO’s Property Claim Services unit is the internationally recognized authority on insured property losses from catastrophes in the United States, Puerto Rico and the U.S. Virgin Islands. PCS investigates reported disasters

Figure 4.2: A sample trajectory of the aggregate loss process L_t (thin blue solid line), a real-life catastrophe loss trajectory (thick green solid line), the analytical mean of the process L_t (red dashed line) and two sample 0.05- and 0.95-quantile lines (brown dotted line). The black horizontal line represents the threshold level D = 60 billion USD. STFcat02.xpl

and determines the extent and type of damage, dates of occurrence and geographic areas affected (Burnecki, Kukla, and Weron, 2000). The data, see Figure 4.1, concern the US market's loss amounts in USD which occurred between 1990 and 1999, adjusted for inflation using the Consumer Price Index provided by the U.S. Department of Labor. Only natural perils like hurricane, tropical storm, wind, flooding, hail, tornado, snow, freezing, fire, ice and earthquake were taken into consideration. We note that the peaks in Figure 4.1 mark the occurrence of Hurricane Andrew (24 August 1992) and the Northridge Earthquake (17 January 1994).


In order to calibrate the pricing model we have to fit both the distribution function F of the incurred losses and the process M_t governing the flow of natural events. The claim size distributions, especially those describing property losses, are usually heavy-tailed. In the actuarial literature, continuous distributions (with domain R+) are often proposed for describing such claims; see Chapter 13. The choice of the distribution is very important because it influences the bond price. In Chapter 14 claim amount distributions were fitted to the PCS data depicted in Figure 4.1. The log-normal, exponential, gamma, Weibull, mixture of two exponentials, Pareto and Burr distributions were analysed. The parameters were estimated via the Anderson-Darling statistic minimisation procedure. The goodness-of-fit was checked with the help of the Kolmogorov-Smirnov, Kuiper, Cramér-von Mises and Anderson-Darling non-parametric tests. The test statistics were compared with critical values obtained through Monte Carlo simulations. The Burr distribution with parameters α = 0.4801, λ = 3.9495 · 10^16 and τ = 2.1524 passed all tests. The log-normal distribution with parameters µ = 18.3806 and σ = 1.1052 was the next best fit. A doubly stochastic Poisson process governing the occurrence times of the losses was fitted by Burnecki and Kukla (2003). The simplest case, with the intensity m_s equal to a nonnegative constant m, was considered first. Studies of the quarterly number of losses and the inter-occurrence times of the catastrophes led to the conclusion that the flow of the events may be described by a Poisson process with an annual intensity of m = 34.2. The claim arrival process is also analysed in Chapter 14, where statistical tests applied to the annual waiting times suggested a renewal process. Finally, the rate function m1_s = 35.32 + 2.32 · 2π sin{2π(s − 0.20)} was fitted and the claim arrival process was treated as a non-homogeneous Poisson process.
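A minimal version of this fit-and-test step, using log-scale maximum likelihood for the log-normal and scipy's built-in Kolmogorov-Smirnov test. Because the parameters are estimated from the same sample, the standard KS p-value is only approximate, which is why the chapter obtains critical values by Monte Carlo instead:

```python
import numpy as np
from scipy import stats

def fit_and_test_lognormal(losses):
    """ML fit of a log-normal (mu, sigma on the log scale) and a
    KS goodness-of-fit test against the fitted distribution."""
    logs = np.log(losses)
    mu, sigma = logs.mean(), logs.std(ddof=1)
    # scipy parameterisation: shape s = sigma, scale = exp(mu)
    ks_stat, p_value = stats.kstest(losses, 'lognorm',
                                    args=(sigma, 0.0, np.exp(mu)))
    return (mu, sigma), ks_stat, p_value
```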
Such a choice of the intensity function allows modelling of the annual seasonality present in the natural catastrophe data. Baryshnikov, Mayo, and Taylor (1998) proposed an intensity function of the form m2s = a + b sin²{2π(s + S)}. Using the least squares procedure (Ross, 2001), we fitted the cumulative intensity function (mean value function), given by E(Ms) = ∫_0^s mz dz, to the accumulated quarterly number of PCS losses, and obtained a = 35.22, b = 0.224, and S = −0.16. This choice of the rate function allows the incorporation of both an annual cyclic component and a trend which is sometimes observed in natural catastrophe data.
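The least-squares fit of the cumulative intensity can be sketched as follows. This is a Python illustration (not the XploRe code accompanying the chapter), fitted to synthetic stand-in counts rather than the original PCS data; the closed-form mean value function follows by integrating the NHP2 rate a + b sin²{2π(z + S)}, and a simple grid search replaces the procedure of Ross (2001).

```python
import numpy as np

# Mean value function of the NHP2 model:
# E(M_s) = integral_0^s [a + b*sin^2{2*pi*(z + S)}] dz
#        = a*s + (b/2)*s - b/(8*pi) * [sin{4*pi*(s + S)} - sin(4*pi*S)]
def cum_intensity(s, a, b, S):
    return (a * s + 0.5 * b * s
            - b / (8 * np.pi) * (np.sin(4 * np.pi * (s + S)) - np.sin(4 * np.pi * S)))

# quarterly grid over ten years and synthetic accumulated counts (assumption:
# the real PCS counts are not available here)
s_grid = np.arange(0.25, 10.25, 0.25)
rng = np.random.default_rng(42)
counts = cum_intensity(s_grid, 35.22, 0.224, -0.16) + rng.normal(0, 2, s_grid.size)

# brute-force least squares over a parameter grid
best = None
for a in np.linspace(30, 40, 41):
    for b in np.linspace(0, 3, 31):
        for S in np.linspace(-0.5, 0.5, 41):
            sse = np.sum((counts - cum_intensity(s_grid, a, b, S)) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, b, S)

sse, a_hat, b_hat, S_hat = best
print(f"a = {a_hat:.2f}, b = {b_hat:.3f}, S = {S_hat:.3f}, SSE = {sse:.1f}")
```

Note that a and b/2 enter the slope of E(Ms) almost collinearly, so with noisy counts only their sum is sharply identified; with the real quarterly data a longer record and the seasonal oscillation pin the parameters down.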

4.3 Calibration of the Pricing Model

[Figure 4.3 appears here: x-axis "Years" (4 to 6), y-axis "Aggregate number of losses / Mean value function" (140 to 210).]

Figure 4.3: The aggregate quarterly number of PCS losses (blue solid line) together with the mean value functions E(Mt ) corresponding to the HP (red dotted line), NHP1 (black dashed line) and NHP2 (green dashed-dotted line) cases. STFcat03.xpl

It appears that both the mean squared error (MSE) and the mean absolute error (MAE) favour the rate function m1s. In this case MSE = 13.68 and MAE = 2.89, whereas m2s yields MSE = 15.12 and MAE = 3.22. Finally, the homogeneous Poisson process with constant intensity gives MSE = 55.86 and MAE = 6.1. All three choices of the intensity function ms are illustrated in Figure 4.3, where the accumulated quarterly number of PCS losses and the mean value functions on the interval [4, 6] years are depicted. This interval was chosen to best illustrate the differences.
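The fitted intensities can be used directly for simulation. Below is a minimal Python sketch (not the book's XploRe code) of generating the claim arrival times under NHP1 by the standard thinning (acceptance-rejection) method; the constant lam_max = 35.32 + 2.32·2π dominates the rate function everywhere, and all other settings (seed, horizon) are illustrative.

```python
import numpy as np

def nhp1_rate(s):
    """Fitted NHP1 rate function from the text."""
    return 35.32 + 2.32 * 2 * np.pi * np.sin(2 * np.pi * (s - 0.20))

def simulate_nhpp(rate, rate_max, T, rng):
    """Thinning: draw homogeneous Poisson(rate_max) candidate arrivals on [0, T]
    and keep each candidate t with probability rate(t)/rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > T:
            break
        if rng.uniform() < rate(t) / rate_max:
            times.append(t)
    return np.array(times)

rng = np.random.default_rng(0)
lam_max = 35.32 + 2.32 * 2 * np.pi   # upper bound on nhp1_rate
arrivals = simulate_nhpp(nhp1_rate, lam_max, T=1.0, rng=rng)
print(len(arrivals))                 # number of simulated events in one year
```

Since the sine term integrates to zero over a full year, the expected annual number of events is 35.32, close to the HP intensity 34.2 fitted earlier.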

4.4 Dynamics of the CAT Bond Price

In this section we present prices for different CAT bonds, focusing on the influence of the choice of the loss amount distribution and of the claim arrival process on the bond price. We analyse cases using the Burr distribution with parameters α = 0.4801, λ = 3.9495 · 10^16, and τ = 2.1524, and the log-normal distribution with parameters µ = 18.3806 and σ = 1.1052. We also analyse the homogeneous Poisson process with an annual intensity m = 34.2 (HP) and the non-homogeneous Poisson processes with the rate functions m1s = 35.32 + 2.32 · 2π sin {2π(s − 0.20)} (NHP1) and m2s = 35.22 + 0.224 sin² {2π(s − 0.16)} (NHP2).

Consider a zero-coupon CAT bond defined by the payment of an amount Z at maturity T, contingent on the threshold time τ > T. Define the process Zs = E(Z|Fs). We require that Zs is a predictable process. This can be interpreted as the independence of the payment at maturity from the occurrence and timing of the threshold. The amount Z can be the principal plus interest, usually defined as a fixed percentage over the London Inter-Bank Offered Rate (LIBOR). The no-arbitrage price of the zero-coupon CAT bond associated with a threshold D, catastrophic flow Ms, a distribution function of incurred losses F, and paying Z at maturity is given by Burnecki and Kukla (2003):

    V_t^1 = E[ Z exp{−R(t, T)} (1 − N_T) | F_t ]
          = E[ Z exp{−R(t, T)} (1 − ∫_t^T m_s {1 − F(D − L_s)} I(L_s < D) ds) | F_t ].    (4.2)

We evaluate this CAT bond price at t = 0 and apply appropriate Monte Carlo simulations. For the purposes of illustration we assume that the annual continuously-compounded discount rate r = ln(1.025) is constant and corresponds to LIBOR, T ∈ [1/4, 2] years, and D ∈ [2.54, 30] billion USD (ranging from the quarterly average loss to three times the annual average loss). Furthermore, in the case of the zero-coupon CAT bond we assume that Z = 1.06 USD. Hence, the bond is priced at 3.5% over LIBOR when T = 1 year. Figure 4.4 illustrates the zero-coupon CAT bond values (4.2) with respect


Figure 4.4: The zero-coupon CAT bond price with respect to the threshold level (left axis) and time to expiry (right axis) in the Burr and NHP1 case. STFcat04.xpl

to the threshold level and time to expiry in the Burr and NHP1 case. We can see that as the time to expiry increases, the price of the CAT bond decreases. Increasing the threshold level leads to higher bond prices. When T is a quarter and D = 30 billion USD, the CAT bond price approaches the value 1.06 exp {− ln(1.025)/4} ≈ 1.05 USD. This corresponds to the situation when the threshold time exceeds the maturity (τ > T) with probability one.

Consider now a CAT bond which has only coupon payments Cs, which terminate at the threshold time τ. The no-arbitrage price of the CAT bond associated with a threshold D, catastrophic flow Ms, a distribution function of incurred losses F, and coupon payments Cs which terminate at time τ is


given by Burnecki and Kukla (2003):

    V_t^2 = E[ ∫_t^T exp{−R(t, s)} C_s (1 − N_s) ds | F_t ]
          = E[ ∫_t^T exp{−R(t, s)} C_s (1 − ∫_t^s m_ξ {1 − F(D − L_ξ)} I(L_ξ < D) dξ) ds | F_t ].    (4.3)

We evaluate this CAT bond price at t = 0 and assume that Ct ≡ 0.06. The value of V_0^2 as a function of time to maturity (expiry) and threshold level in the Burr and NHP1 case is illustrated by Figure 4.5. We clearly see that the situation is different from that of the zero-coupon case. The price increases with both time to expiry and threshold level. When D = 30 billion USD and T = 2 years, the CAT bond price approaches the value 0.06 ∫_0^2 exp {− ln(1.025)t} dt ≈ 0.12 USD. This corresponds to the situation when the threshold time exceeds the maturity (τ > T) with probability one.

Finally, we consider the case of the coupon-bearing CAT bond. Fashioned as floating rate notes, such bonds pay a fixed spread over LIBOR. Loosely speaking, the fixed spread may be analogous to the premium paid for the underlying insured event, and the floating rate, LIBOR, is the payment for having invested cash in the bond to provide payment against the insured event, should a payment to the insured be necessary. We combine (4.2), with Z equal to the par value (PV), and (4.3) to obtain the price of the coupon-bearing CAT bond. The no-arbitrage price of the CAT bond associated with a threshold D, catastrophic flow Ms, a distribution function of incurred losses F, paying PV at maturity, and with coupon payments Cs which cease at the threshold time τ is



Figure 4.5: The CAT bond price, for the bond paying only coupons, with respect to the threshold level (left axis) and time to expiry (right axis) in the Burr and NHP1 case. STFcat05.xpl

given by Burnecki and Kukla (2003):

    V_t^3 = E[ PV exp{−R(t, T)} (1 − N_T) + ∫_t^T exp{−R(t, s)} C_s (1 − N_s) ds | F_t ]
          = E[ PV exp{−R(t, T)}
               + ∫_t^T exp{−R(t, s)} { C_s (1 − ∫_t^s m_ξ {1 − F(D − L_ξ)} I(L_ξ < D) dξ)
               − PV exp{−R(s, T)} m_s {1 − F(D − L_s)} I(L_s < D) } ds | F_t ].    (4.4)



Figure 4.6: The coupon-bearing CAT bond price with respect to the threshold level (left axis) and time to expiry (right axis) in the Burr and NHP1 case. STFcat06.xpl

We evaluate this CAT bond price at t = 0 and assume that PV = 1 USD and again Ct ≡ 0.06. Figure 4.6 illustrates this CAT bond price in the Burr and NHP1 case. The influence of the threshold level D on the bond value is clear, but the effect of increasing the time to expiry is not immediately obvious. As T increases, the possibility of receiving more coupons increases, but so does the possibility of losing the principal of the bond. In this example (see Figure 4.6) the price decreases with respect to the time to expiry, but this is not always true. We also notice that the bond prices in Figure 4.6 are lower than the corresponding ones in Figure 4.4. Recall, however, that in the former case the payment at maturity was Z = 1.06 USD, whereas here PV = 1 USD. The choice of the fitted loss distribution affects the price of the bond. Figure 4.7 illustrates the difference between the zero-coupon CAT bond prices calculated under the two assumptions of Burr and log-normal loss sizes in the NHP1 case.
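The Monte Carlo evaluation of the zero-coupon price (4.2) at t = 0 can be sketched as follows. This is a hedged Python illustration, not the book's XploRe code: for brevity it uses the HP arrival model (annual intensity m = 34.2) rather than NHP1, together with the fitted Burr loss distribution sampled by inverse transform. Under a constant discount rate the price reduces to Z e^{−rT} times the probability that the aggregate loss up to T stays below D; all settings besides the fitted parameters are illustrative.

```python
import numpy as np

alpha, lam, tau = 0.4801, 3.9495e16, 2.1524   # fitted Burr parameters
m, r, Z = 34.2, np.log(1.025), 1.06           # HP intensity, discount rate, payoff

def burr_sample(size, rng):
    """Inverse-transform sampling from F(x) = 1 - {lam/(lam + x^tau)}^alpha."""
    u = rng.uniform(size=size)
    return (lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)) ** (1.0 / tau)

def cat_bond_price(T, D, n_sim=20000, seed=1):
    """Monte Carlo estimate of V_0^1 = Z * exp(-r*T) * P(aggregate loss < D)."""
    rng = np.random.default_rng(seed)
    n_claims = rng.poisson(m * T, size=n_sim)
    survived = 0
    for n in n_claims:
        if burr_sample(n, rng).sum() < D:
            survived += 1
    return Z * np.exp(-r * T) * survived / n_sim

# threshold D = 30 billion USD, one quarter to expiry; the estimate should be
# close to the upper bound Z*exp(-r/4) ~ 1.05 discussed in the text
price = cat_bond_price(T=0.25, D=30e9)
print(round(price, 3))
```

Replacing the Poisson draw of the claim count by a thinning simulation of NHP1 (and, for coupon bonds, discounting the coupon stream up to the simulated threshold time) extends the same scheme to (4.3) and (4.4).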



Figure 4.7: The diﬀerence between zero-coupon CAT bond prices in the Burr and log-normal cases with respect to the threshold level (left axis) and time to expiry (right axis) under the NHP1 assumption. STFcat07.xpl

It is clear that taking into account heavier tails (the Burr distribution), which can be more appropriate when considering catastrophic losses, leads to higher prices (the maximum diﬀerence in this example reaches 50% of the principal). Figures 4.8 and 4.9 show how the choice of the ﬁtted Poisson point process inﬂuences the CAT bond value. Figure 4.8 illustrates the diﬀerence between the zero-coupon CAT bond prices calculated in the NHP1 and HP cases under the assumption of the Burr loss distribution. We see that the diﬀerences vary from −14% to 3% of the principal. Finally, Figure 4.9 illustrates the diﬀerence between the zero-coupon CAT bond prices calculated in the NHP1 and NHP2 cases under the assumption of the Burr loss distribution. The diﬀerence is always below 12%.



Figure 4.8: The diﬀerence between zero-coupon CAT bond prices in the NHP1 and HP cases with respect to the threshold level (left axis) and time to expiry (right axis) under the Burr assumption. STFcat08.xpl

In our examples, equations (4.2) and (4.4), we have assumed that in the case of a trigger event the bond principal is completely lost. However, if we wish to incorporate a partial loss into the contract, it suffices to multiply PV by the appropriate constant.



Figure 4.9: The diﬀerence between zero-coupon CAT bond prices in the NHP1 and NHP2 cases with respect to the threshold level (left axis) and time to expiry (right axis) under the Burr assumption. STFcat09.xpl


Bibliography

Aase, K. (1999). An Equilibrium Model of Catastrophe Insurance Futures and Spreads, The Geneva Papers on Risk and Insurance Theory 24: 69–96.

Barton, C. and Nishenko, S. (1994). Natural Disasters – Forecasting Economic and Life Losses, USGS special report.

Baryshnikov, Yu., Mayo, A. and Taylor, D. R. (1998). Pricing of CAT bonds, preprint.

Brémaud, P. (1981). Point Processes and Queues: Martingale Dynamics, Springer, New York.

Burnecki, K., Kukla, G. and Weron, R. (2000). Property Insurance Loss Distributions, Phys. A 287: 269–278.

Burnecki, K. and Kukla, G. (2003). Pricing of Zero-Coupon and Coupon Cat Bonds, Appl. Math. (Warsaw) 30(3): 315–324.

Burnecki, K., Härdle, W. and Weron, R. (2004). Simulation of Risk Processes, in J. L. Teugels and B. Sundt (eds.), Encyclopedia of Actuarial Science, Wiley, Chichester.

Canter, M. S., Cole, J. B. and Sandor, R. L. (1996). Insurance Derivatives: A New Asset Class for the Capital Markets and a New Hedging Tool for the Insurance Industry, Journal of Derivatives 4(2): 89–104.

Cummins, J. D. and Danzon, P. M. (1997). Price Shocks and Capital Flows in Liability Insurance, Journal of Financial Intermediation 6: 3–38.

Cummins, J. D., Lo, A. and Doherty, N. A. (1999). Can Insurers Pay for the "Big One"? Measuring the Capacity of an Insurance Market to Respond to Catastrophic Losses, preprint, Wharton School, University of Pennsylvania.

D'Agostino, R. B. and Stephens, M. A. (1986). Goodness-of-Fit Techniques, Marcel Dekker, New York.

Doherty, N. A. (1997). Innovations in Managing Catastrophe Risk, Journal of Risk & Insurance 64(4): 713–718.

Duffie, D. and Singleton, K. J. (1999). Modelling Term Structures of Defaultable Bonds, The Review of Financial Studies 12(4): 687–720.


Embrechts, P., Resnick, S. I. and Samorodnitsky, G. (1999). Extreme Value Theory as a Risk Management Tool, North American Actuarial Journal 3(2): 30–41.

Embrechts, P. and Meister, S. (1997). Pricing Insurance Derivatives: The Case of Cat-Futures, in Securitization of Insurance Risk, 1995 Bowles Symposium, SOA Monograph M-F197-1: 15–26.

Froot, K. and O'Connell, P. (1997). On the Pricing of Intermediated Risks: Theory and Application to Catastrophe Reinsurance, NBER Working Paper No. w6011.

Jarrow, R. A. and Turnbull, S. (1995). Pricing Options on Financial Securities Subject to Default Risk, Journal of Finance 50: 53–86.

Kau, J. B. and Keenan, D. C. (1996). An Option-Theoretic Model of Catastrophes Applied to Mortgage Insurance, Journal of Risk and Insurance 63(4): 639–656.

Lane, M. N. (2001). Rationale and Results with the LFC CAT Bond Pricing Model, Lane Financial L.L.C.

Lane, M. N. (2004). The Viability and Likely Pricing of "CAT Bonds" for Developing Countries, Lane Financial L.L.C.

Litzenberger, R. H., Beaglehole, D. R. and Reynolds, C. E. (1996). Assessing Catastrophe Reinsurance-Linked Securities as a New Asset Class, Journal of Portfolio Management (December): 76–86.

McGhee, C. (2004). Market Update: The Catastrophe Bond Market at Year-End 2003, Guy Carpenter & Company, Inc.

Ross, S. (2001). Simulation, 3rd ed., Academic Press, Boston.

Sigma (1996). Insurance Derivatives and Securitization: New Hedging Perspectives for the US Catastrophe Insurance Market?, Report Number 5, Swiss Re.

Sigma (1997). Too Little Reinsurance of Natural Disasters in Many Markets, Report Number 7, Swiss Re.

Sigma (2003). The Picture of ART, Report Number 1, Swiss Re.

Winter, R. A. (1994). The dynamics of competitive insurance markets, Journal of Financial Intermediation 3: 379–415.


Zhou, C. (1994). A Jump Diffusion Approach to Modelling Credit Risk and Valuing Defaultable Securities, preprint, Federal Reserve Board.

5 Common Functional Implied Volatility Analysis

Michal Benko and Wolfgang Härdle

5.1 Introduction

Trading, hedging, and risk analysis of complex option portfolios depend on accurate pricing models. The modelling of implied volatilities (IV) plays an important role, since volatility is the crucial parameter in the Black-Scholes (BS) pricing formula. It is well known from empirical studies that the volatilities implied by observed market prices exhibit patterns known as volatility smiles or smirks that contradict the assumption of constant volatility in the BS pricing model. On the other hand, the IV is a function of two parameters, the strike price and the time to maturity, and it is desirable in practice to reduce the dimension of this object and characterize the IV surface through a small number of factors. Clearly, a dimension-reduced pricing model that reflects the dynamics of the IV surface needs to contain factors and factor loadings that characterize the IV surface itself and its movements across time.

A popular dimension reduction technique is principal components analysis (PCA), employed for example by Fengler, Härdle, and Schmidt (2002) in the IV surface analysis. The discretization of the strike dimension and application of PCA yield suitable factors (weight vectors) in the multivariate framework. Noting that the IVs of fixed maturity can also be viewed as random functions, we propose to use the functional analogue of PCA. We apply the truncated functional basis expansion described in Ramsay and Silverman (1997) to the IVs of European options on the German stock index (DAX). The standard functional PCA, however, yields weight functions that are too rough, hence a smoothed version of functional PCA is proposed here.


Like Fengler, Härdle, and Villa (2003) we discover similarities of the resulting weight functions across maturity groups. Thus we propose an estimation procedure based on the Flury-Gautschi algorithm, Flury (1988), for the simultaneous estimation of the weight functions for two different maturities. This procedure yields common weight functions with the level, slope, and curvature interpretation known from the financial literature. The resulting common factors of the IV surface are the basic elements to be used in applications, such as simulation-based pricing, and deliver a substantial dimension reduction.

The chapter is organized as follows. In Section 5.2 the basic financial framework is presented, while in Section 5.3 we introduce the notation of functional data analysis. In the following three sections we analyze the IV functions using functional principal components, smoothed functional principal components, and common estimation of principal components, respectively.

5.2 Implied Volatility Surface

Implied volatilities are derived from the BS pricing formula for European options. Recall that European call and put options are derivatives written on an underlying asset S driven by the price process St, which yield the pay-offs max(ST − K, 0) and max(K − ST, 0), respectively, at a given expiry time T and for a prespecified strike price K. The difference τ = T − t between the day of trade and the day of expiration (maturity) is called time to maturity. The pricing formula for call options, Black and Scholes (1973), is:

    C_t(S_t, K, τ, r, σ) = S_t Φ(d_1) − K e^{−rτ} Φ(d_2),    (5.1)
    d_1 = {ln(S_t/K) + (r + σ²/2)τ} / (σ√τ),  d_2 = d_1 − σ√τ,

where Φ(·) is the cumulative distribution function of the standard normal distribution, r is the riskless interest rate, and σ is the (unknown and constant) volatility parameter. The put option price Pt can be obtained from the put-call parity Pt = Ct − St + e^{−rτ}K.

For a European option, the implied volatility σ̂ is defined as the volatility σ which makes the BS price Ct equal to the price C̃t observed on the market. For a single asset, we obtain at each time point t a two-dimensional function – the IV surface σ̂t(K, τ). In order to standardize the volatility functions in time, one


[Figure 5.1 appears here: "Volatility Surface"; axes: moneyness (0.80 to 1.10), time to maturity (0.13 to 0.63), implied volatility (0.19 to 0.31).]

Figure 5.1: Implied volatility surface of ODAX on May 24, 2001. STFfda01.xpl

may rescale the strike dimension by dividing K by the future price Ft (τ ) of the underlying asset with the same maturity. This yields the so-called moneyness κ = K/Ft (τ ). Note that some authors deﬁne moneyness simply as κ = K/St . In contrast to the BS assumptions, empirical studies show that IV surfaces are signiﬁcantly curved, especially across the strikes. This phenomenon is called a volatility smirk or smile. Smiles stand for U-shaped volatility functions and smirks for decreasing volatility functions. We focus on the European options on the German stock index (ODAX). Figure 5.1 displays the ODAX implied volatilities computed from the BS formula (red points) and the IV surface on May 24, 2001 estimated using a local polynomial


estimator for τ ∈ [0, 0.6] and κ ∈ [0.8, 1.2]. We can clearly observe the "strings" of the original data on the maturity grid τ ∈ {0.06111, 0.23611, 0.33333, 0.58611}, which corresponds to 22, 85, 120, and 211 days to maturity. This maturity grid is structured by market conventions and changes over time. The fact that the number of transactions with short maturity is much higher than the number with longer maturity is also typical for the IVs observed on the market.

The IV surface is a high-dimensional object – for every time point t we have to analyze a two-dimensional function. Our goal is to reduce the dimension of this problem and to characterize the IV surface through a small number of factors. These factors can be used in practice for risk management, e.g. with vega strategies.

The analyzed data, taken from MD*Base, contain EUREX intra-day transaction data for DAX options and DAX futures (FDAX) from January 2 to June 29, 2001. The IVs are calculated by the Newton-Raphson iterative method. The correction of Hafner and Wallmeier (2001) is applied to avoid the influence of the tax scheme in the DAX. For robustness, we exclude contracts with time to maturity of less than 7 days and maturity strings with less than 100 observations. The approximation of the "riskless" interest rate for a given maturity is obtained on a daily basis by linear interpolation of the 1, 3, 6, and 12 month EURIBOR interest rates (obtained from Datastream).

The resulting data set is analyzed using the functional data analysis framework. One advantage of this approach, as we will see later in this chapter, is the possibility of introducing smoothness in the functional sense and using it for regularization. The notation of functional data analysis is rather complex, therefore the theoretical motivation and the basic notation will be introduced in the next section.
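The Newton-Raphson computation of the IVs mentioned above can be sketched in a few lines; this is a self-contained Python illustration of the standard scheme (the function names are ours, not from the book), using the BS call price (5.1) and its vega as the derivative in the Newton step.

```python
import math

def bs_call(S, K, tau, r, sigma):
    """Black-Scholes call price (5.1)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * Phi(d1) - K * math.exp(-r * tau) * Phi(d2)

def implied_vol(C_obs, S, K, tau, r, sigma0=0.2, tol=1e-8, max_iter=50):
    """Newton-Raphson: iterate sigma <- sigma - (C_BS(sigma) - C_obs)/vega."""
    sigma = sigma0
    for _ in range(max_iter):
        d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
        vega = S * math.sqrt(tau) * math.exp(-0.5 * d1 ** 2) / math.sqrt(2.0 * math.pi)
        diff = bs_call(S, K, tau, r, sigma) - C_obs
        if abs(diff) < tol:
            break
        sigma -= diff / vega
    return sigma

# round-trip check: price an ATM call at sigma = 0.25, then recover the volatility
C = bs_call(6000.0, 6000.0, 0.25, 0.03, 0.25)
print(round(implied_vol(C, 6000.0, 6000.0, 0.25, 0.03), 6))  # -> 0.25
```

Because the vega is large near the money, the iteration typically converges in a handful of steps; far out of the money a bisection fallback is often added in practice.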

5.3 Functional Data Analysis

In the functional data framework, the objects are usually modelled as realizations of a stochastic process X(t), t ∈ J, where J is a bounded interval in R. Thus, the set of functions xi (t), i = 1, 2, . . . n, t ∈ J, represents the data set. We assume the existence of the mean, variance, and covariance functions of the process X(t) and denote these by EX(t), Var(t) and Cov(s, t) respectively.


For the functional sample we can define the sample counterparts of EX(t), Var(t), and Cov(s, t) in a straightforward way:

    X̄(t) = (1/n) Σ_{i=1}^n x_i(t),
    V̂ar(t) = {1/(n−1)} Σ_{i=1}^n {x_i(t) − X̄(t)}²,
    Ĉov(s, t) = {1/(n−1)} Σ_{i=1}^n {x_i(s) − X̄(s)}{x_i(t) − X̄(t)}.

In practice, we observe the function values X = {x_i(t_{i1}), x_i(t_{i2}), . . . , x_i(t_{ip_i}); i = 1, . . . , n} only on a discrete grid {t_{i1}, t_{i2}, . . . , t_{ip_i}} ∈ J, where p_i is the number of grid points for the ith observation. One may estimate the functions x_1, . . . , x_n via standard nonparametric regression methods, Härdle (1990). Another popular way is to use a truncated functional basis expansion. More precisely, let us denote a functional basis on the interval J by {Θ_1, Θ_2, . . .} and assume that the functions x_i are approximated by the first L basis functions Θ_l, l = 1, 2, . . . , L:

    x_i(t) = Σ_{l=1}^L c_{il} Θ_l(t) = c_i⊤ Θ(t),    (5.2)

where Θ = (Θ_1, . . . , Θ_L)⊤ and c_i = (c_{i1}, . . . , c_{iL})⊤. The number of basis functions L determines the tradeoff between data fidelity and smoothness. The analysis of the functional objects will be implemented through the coefficient matrix C = {c_{il}, i = 1, . . . , n, l = 1, . . . , L}. The mean, variance, and covariance functions are calculated by:

    X̄(t) = c̄⊤ Θ(t),
    V̂ar(t) = Θ(t)⊤ Cov(C) Θ(t),
    Ĉov(s, t) = Θ(s)⊤ Cov(C) Θ(t),

where c̄_l = (1/n) Σ_{i=1}^n c_{il}, l = 1, . . . , L, and Cov(C) = {1/(n−1)} Σ_{i=1}^n (c_i − c̄)(c_i − c̄)⊤. The scalar product in the functional space is defined by:

    ⟨x_i, x_j⟩ = ∫_J x_i(t) x_j(t) dt = c_i⊤ W c_j,

where

    W = ∫_J Θ(t) Θ(t)⊤ dt.    (5.3)

In practice, the coefficient matrix C needs to be estimated from the data set X. An example of a functional basis is the Fourier basis, defined on J by:

    Θ_l(t) = 1           for l = 0,
             sin(rωt)    for l = 2r − 1,
             cos(rωt)    for l = 2r,

where the frequency ω determines the period and the length of the interval, |J| = 2π/ω. The Fourier basis defined above can easily be transformed to an orthonormal basis, hence the scalar-product matrix in (5.3) is simply the identity matrix.

Our aim is to estimate the IV functions for fixed τ = 1 month (1M) and 2 months (2M) from the daily-specific grid of maturities. We estimate the Fourier coefficients on the moneyness range κ ∈ [0.9, 1.1] for the maturities observed on a particular day i. For τ* = 1M, 2M we calculate σ̂_i(κ, τ*) by linear interpolation of the closest observable IV strings with τ ≤ τ*, denoted σ̂_i(κ, τ*_{i−}), and with τ ≥ τ*, denoted σ̂_i(κ, τ*_{i+}):

    σ̂_i(κ, τ*) = σ̂_i(κ, τ*_{i−}) {1 − (τ* − τ*_{i−})/(τ*_{i+} − τ*_{i−})} + σ̂_i(κ, τ*_{i+}) (τ* − τ*_{i−})/(τ*_{i+} − τ*_{i−}),

for those i for which τ*_{i−} and τ*_{i+} exist. In Figure 5.2 we show the situation for τ* = 1M on May 30, 2001. The blue points and the blue finely dashed curve correspond to the transactions with τ*_− = 16 days, the green points and the green dashed curve to the transactions with τ*_+ = 51 days. The solid black line is the linear interpolation at τ* = 30 days.

The choice of L = 9 delivers a good tradeoff between flexibility and smoothness of the strings. At this stage we exclude from the analysis those days where this procedure cannot be performed due to the complete absence of the needed maturities, as well as strings with poorly estimated coefficients, due to the small number of contracts in a particular string or the presence of strong outliers. Using this procedure we obtain 77 "functional" observations x^{1M}_{i1}(κ) = σ̂_{i1}(κ, 1M), i1 = 1, . . . , 77, for the 1M maturity and 66 observations x^{2M}_{i2}(κ) = σ̂_{i2}(κ, 2M), i2 = 1, . . . , 66, for the 2M maturity, as displayed in Figure 5.3.
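The coefficient estimation in (5.2) can be sketched as follows: build the Fourier design matrix with L = 9 on J = [0.9, 1.1] and fit one IV string by ordinary least squares. The IV observations below are synthetic stand-ins for a real string (a mild smile plus noise), not the ODAX data.

```python
import numpy as np

def fourier_basis(t, L, d=0.9, u=1.1):
    """Design matrix with columns 1, sin(r*w*t), cos(r*w*t), ... (L columns),
    where w = 2*pi/|J| so the basis has period |J| = u - d."""
    w = 2 * np.pi / (u - d)
    cols = [np.ones_like(t)]
    r = 1
    while len(cols) < L:
        cols.append(np.sin(r * w * t))
        if len(cols) < L:
            cols.append(np.cos(r * w * t))
        r += 1
    return np.column_stack(cols)

rng = np.random.default_rng(0)
kappa = np.sort(rng.uniform(0.9, 1.1, 60))                       # observed moneyness
iv = 0.2 + 0.5 * (kappa - 1.0) ** 2 + rng.normal(0, 0.002, 60)   # smile + noise

Theta = fourier_basis(kappa, L=9)
c, *_ = np.linalg.lstsq(Theta, iv, rcond=None)   # coefficient vector c_i in (5.2)
fitted = Theta @ c
print(f"max abs fit error: {np.max(np.abs(fitted - iv)):.4f}")
```

Stacking the fitted coefficient vectors of all days row-wise produces the matrix C on which the subsequent (functional) PCA operates.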


[Figure 5.2 appears here: "IVs and IV strings"; x-axis moneyness 0.95 to 1.15 (ATM marked), y-axis IV 0.20 to 0.25.]

Figure 5.2: Linear interpolation of IV strings on May 30, 2001 with L = 9. STFfda02.xpl

5.4 Functional Principal Components

Principal components analysis yields dimension reduction in the multivariate framework. The idea is to find normalized weight vectors γ_m ∈ R^p for which the linear transformations of a p-dimensional random vector x, with E[x] = 0:

    f_m = γ_m⊤ x = ⟨γ_m, x⟩, m = 1, . . . , p,    (5.4)

have maximal variance subject to:

    γ_l⊤ γ_m = ⟨γ_l, γ_m⟩ = I(l = m) for l ≤ m,

where I denotes the indicator function. The solution is the Jordan spectral decomposition of the covariance matrix, Härdle and Simar (2003).


[Figure 5.3 appears here: panels "IV-strings, 1M-Group" and "IV-strings, 2M-Group"; x-axis moneyness 0.95 to 1.05 (ATM marked), y-axis IV 0.20 to 0.25.]

Figure 5.3: Functional observations estimated using the Fourier basis with L = 9: σ̂_{i1}(κ, 1M), i1 = 1, . . . , 77, in the left panel, σ̂_{i2}(κ, 2M), i2 = 1, . . . , 66, in the right panel. STFfda03.xpl

In functional principal components analysis (FPCA) the dimension reduction can be achieved via the same route, i.e. by finding orthonormal weight functions γ_1, γ_2, . . ., such that the variance of the linear transformation is maximal. In order to keep the notation simple we assume EX(t) = 0. The weight functions satisfy:

    ||γ_m||² = ∫ γ_m(t)² dt = 1,
    ⟨γ_l, γ_m⟩ = ∫ γ_l(t) γ_m(t) dt = 0, l ≠ m.

The linear transformation is:

    f_m = ⟨γ_m, X⟩ = ∫ γ_m(t) X(t) dt,

and the desired weight functions solve:

    arg max_{⟨γ_l, γ_m⟩ = I(l=m), l ≤ m} Var⟨γ_m, X⟩,    (5.5)


or equivalently:

    arg max_{⟨γ_l, γ_m⟩ = I(l=m), l ≤ m} ∫∫ γ_m(s) Cov(s, t) γ_m(t) ds dt.

The solution is obtained by solving the Fredholm functional eigenequation

    ∫ Cov(s, t) γ(t) dt = λ γ(s).    (5.6)

The eigenfunctions γ_1, γ_2, . . ., sorted with respect to the corresponding eigenvalues λ_1 ≥ λ_2 ≥ . . ., solve the FPCA problem (5.5). The following link between eigenvalues and eigenfunctions holds:

    λ_m = Var(f_m) = Var( ∫ γ_m(t) X(t) dt ) = ∫∫ γ_m(s) Cov(s, t) γ_m(t) ds dt.

In the sampling problem, the unknown covariance function Cov(s, t) needs to be replaced by the sample covariance function Ĉov(s, t). Dauxois, Pousse, and Romain (1982) show that the resulting sample eigenfunctions and eigenvalues are consistent estimators of γ_m and λ_m and derive some asymptotic results for these estimators.

5.4.1 Basis Expansion

Suppose that the weight function γ has the expansion

    γ(t) = Σ_{l=1}^L b_l Θ_l(t) = Θ(t)⊤ b.

Using this notation we can rewrite the left-hand side of the eigenequation (5.6):

    ∫ Cov(s, t) γ(t) dt = Θ(s)⊤ Cov(C) { ∫ Θ(t) Θ(t)⊤ dt } b = Θ(s)⊤ Cov(C) W b,

so that:

    Cov(C) W b = λ b.

The functional scalar product ⟨γ_l, γ_k⟩ corresponds to b_l⊤ W b_k in the truncated basis framework, in the sense that if two functions γ_l and γ_k are orthogonal, the corresponding coefficient vectors b_l, b_k satisfy b_l⊤ W b_k = 0. The matrix W is


[Figure 5.4 appears here: panels "Weight functions, 1M-Group" and "Weight functions, 2M-Group"; x-axis moneyness 0.95 to 1.05 (ATM marked), y-axis −5 to 4.]

Figure 5.4: Weight functions for the 1M and 2M maturity groups. Blue solid lines, γ̂_1^{1M} and γ̂_1^{2M}, are the first eigenfunctions, green finely dashed lines, γ̂_2^{1M} and γ̂_2^{2M}, are the second eigenfunctions, and cyan dashed lines, γ̂_3^{1M} and γ̂_3^{2M}, are the third eigenfunctions. STFfda04.xpl

symmetric by definition. Thus, defining u = W^{1/2} b, one finally needs to solve a symmetric eigenvalue problem:

    W^{1/2} Cov(C) W^{1/2} u = λ u,

and to compute the inverse transformation b = W^{−1/2} u. For an orthonormal functional basis (hence also for the Fourier basis) W = I, i.e. the FPCA problem reduces to the multivariate PCA performed on the matrix C.

Using the FPCA method on the IV strings for the 1M and 2M maturities we obtain the eigenfunctions plotted in Figure 5.4. It can be seen that the eigenfunctions are too rough. Intuitively, this roughness is caused by the flexibility of the functional basis. In the next section we present a way of incorporating smoothing directly into the PCA problem.
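The reduction of FPCA to a matrix eigenproblem can be sketched numerically; for an orthonormal basis W = I, so it suffices to diagonalize Cov(C). Each eigenvector b_m then gives a weight function γ_m(t) = Θ(t)⊤b_m. The coefficient matrix below is simulated from three latent factors rather than taken from the option data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 77, 9                                   # n curves, L basis coefficients
scores = rng.normal(size=(n, 3)) * np.array([1.0, 0.5, 0.2])   # 3 latent factors
loadings = np.linalg.qr(rng.normal(size=(L, 3)))[0]            # orthonormal loadings
C = scores @ loadings.T + rng.normal(0, 0.01, (n, L))          # coefficient matrix

covC = np.cov(C, rowvar=False)                 # Cov(C), an L x L matrix
eigvals, eigvecs = np.linalg.eigh(covC)        # symmetric eigendecomposition
order = np.argsort(eigvals)[::-1]              # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()            # variance shares of the components
print(np.round(explained[:3], 3))
```

With three true factors, the first three eigenvalues carry essentially all the variance; on the real coefficient matrix the same decomposition yields the (rough) eigenfunctions of Figure 5.4.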

5.5 Smoothed Principal Components Analysis

As we can see in Figure 5.4, the resulting eigenfunctions are often very rough. Smoothing them could result in a more natural interpretation of the obtained weight functions. Here we apply a popular approach known as the roughness penalty. The downside of this technique is that we lose orthogonality in the L² sense.

Assume that the underlying eigenfunctions of the covariance operator have continuous and square-integrable second derivatives. Let Dγ = γ′(t) denote the first derivative operator and define the roughness penalty by Ψ(γ) = ||D²γ||². Moreover, suppose that γ_m has square-integrable derivatives up to degree four and that the second and third derivatives satisfy one of the following conditions:

1. D²γ and D³γ are zero at the ends of the interval J;

2. γ, Dγ, D²γ, and D³γ satisfy periodicity boundary conditions on J.

Then, integrating by parts twice, we can rewrite the roughness penalty in the following way:

    ||D²γ||² = ∫ D²γ(s) D²γ(s) ds
             = Dγ(u)D²γ(u) − Dγ(d)D²γ(d) − ∫ Dγ(s) D³γ(s) ds    (5.7)
             = Dγ(u)D²γ(u) − Dγ(d)D²γ(d) − γ(u)D³γ(u) + γ(d)D³γ(d) + ∫ γ(s) D⁴γ(s) ds    (5.8)
             = ⟨γ, D⁴γ⟩,    (5.9)

where d and u are the boundaries of the interval J and the boundary terms in (5.7) and (5.8) vanish under either of the conditions mentioned above.

Given an eigenfunction γ with norm ||γ||² = 1, we can penalize the sample variance of the principal component by dividing it by 1 + α⟨γ, D⁴γ⟩:

    PCAPV = ∫∫ γ(s) Ĉov(s, t) γ(t) ds dt / ∫ γ(t) (I + αD⁴) γ(t) dt,    (5.10)

where I denotes the identity operator. The maximum of the penalized sample variance (PCAPV) is an eigenfunction γ corresponding to the largest eigenvalue of the generalized eigenequation:

    ∫ Ĉov(s, t) γ(t) dt = λ (I + αD⁴) γ(s).    (5.11)


As already mentioned above, the resulting weight functions (eigenfunctions) are no longer orthonormal in the L² sense. Since the weight functions are used as smoothed estimators of the principal components functions, we need to rescale them to satisfy ||γ_l||² = 1. The weight functions γ_l can also be interpreted as orthogonal with respect to a modified scalar product of the Sobolev type,

    (f, g) = ⟨f, g⟩ + α⟨D²f, D²g⟩.

A more extended theoretical discussion can be found in Silverman (1991).

5.5.1 Basis Expansion

Define K to be the matrix whose elements are ⟨D²Θ_j, D²Θ_k⟩. Then the generalized eigenequation (5.11) can be transformed to:

    W Cov(C) W u = λ (W + αK) u.    (5.12)

Using the Cholesky factorization LL⊤ = W + αK and defining S = L^{−1}, we can rewrite (5.12) as:

    {S W Cov(C) W S⊤} (L⊤u) = λ L⊤u.

Applying smoothed functional PCA (SPCA) to the IV strings, we get the smoothed eigenfunctions plotted in Figure 5.5. We use α = 10^{−7}; the aim is to use a rather small degree of smoothing, in order to remove only the high-frequency fluctuations. Some popular methods for choosing α, like cross-validation, could be employed as well, Ramsay and Silverman (1997).

The interpretation of the weight functions displayed in Figure 5.5 is as follows. The first weight function (solid blue) clearly represents the level of the volatility – the weights are almost constant and positive. The second weight function (finely dashed green) changes sign near the at-the-money point, i.e. it can be interpreted as the in-the-money/out-of-the-money identification factor, or slope. The third weight function (dashed cyan) may play the part of a measure for a deep in-the-money or out-of-the-money factor, or curvature. It can be seen that the weight functions for the 1M maturity (γ̃_1^{1M}, γ̃_2^{1M}, γ̃_3^{1M}) and the 2M maturity (γ̃_1^{2M}, γ̃_2^{2M}, γ̃_3^{2M}) have a similar structure. From a practical point of view it is interesting to obtain common estimated eigenfunctions (factors in the further analysis) for both groups. In the next section we introduce an estimation procedure motivated by the Common Principal Components model.
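The Cholesky route to (5.12) can be sketched numerically as follows. W, K, and C below are synthetic placeholders (for the Fourier basis K would contain the scalar products of second derivatives of the basis functions); the sketch verifies that the symmetric problem reproduces the generalized eigenrelation.

```python
import numpy as np

rng = np.random.default_rng(2)
Lbasis, n, alpha = 9, 77, 1e-7
W = np.eye(Lbasis)                                    # orthonormal basis: W = I
K = np.diag(np.arange(Lbasis, dtype=float) ** 4)      # stand-in roughness penalty matrix
C = rng.normal(size=(n, Lbasis))                      # stand-in coefficient matrix
covC = np.cov(C, rowvar=False)

Lchol = np.linalg.cholesky(W + alpha * K)             # L with L L' = W + alpha*K
S = np.linalg.inv(Lchol)
M = S @ W @ covC @ W @ S.T                            # symmetric matrix of (5.12)
lam, V = np.linalg.eigh(M)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]
U = S.T @ V                                           # back-transform: u = S' v

# check the generalized eigenrelation W Cov(C) W u = lambda (W + alpha*K) u
resid = W @ covC @ W @ U[:, 0] - lam[0] * (W + alpha * K) @ U[:, 0]
print(np.max(np.abs(resid)))
```

The columns of U are the penalized eigenvectors; after rescaling to ||γ_l||² = 1 they yield the smoothed weight functions of Figure 5.5.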


Figure 5.5: Smoothed weight functions with α = 10^{-7}. Blue solid lines, γ̂1^{1M} and γ̂1^{2M}, are the first eigenfunctions, green finely dashed lines, γ̂2^{1M} and γ̂2^{2M}, are the second eigenfunctions, and cyan dashed lines, γ̂3^{1M} and γ̂3^{2M}, are the third eigenfunctions. STFfda05.xpl

5.6

Common Principal Components Model

The Common Principal Components (CPC) model in the multivariate setting can be motivated as a model for the similarity of the covariance matrices in the k-sample problem, Flury (1988). Given k random vectors x(1), x(2), . . . , x(k) ∈ R^p, the CPC model can be written as:

Ψj def= Cov(x(j)) = Γ Λj Γ⊤,

where Γ is an orthogonal matrix and Λj = diag(λj1, . . . , λjp). This means that the eigenvectors are the same across samples, and only the eigenvalues – the variances of the principal component scores (5.4) – differ. Under the normality assumption, the sample covariance matrices Sj, j = 1, . . . , k, are Wishart-distributed: Sj ∼ Wp(nj, Ψj/nj),


and the CPC model can be estimated by maximum likelihood with the likelihood function:

L(Ψ1, Ψ2, . . . , Ψk) = C ∏_{j=1}^{k} exp{tr(−(nj/2) Ψj⁻¹ Sj)} (det Ψj)^{−nj/2}.

Here C is a factor that does not depend on the parameters and nj is the number of observations in group j. The maximization of this likelihood function is equivalent to the minimization of:

∏_{j=1}^{k} { det diag(Γ⊤Sj Γ) / det(Γ⊤Sj Γ) }^{nj},    (5.13)

and the minimization of this criterion is performed by the so-called Flury-Gautschi (FG) algorithm, Flury (1988). As shown in Section 5.4, using the functional basis expansion, the FPCA and SPCA are essentially implemented via the spectral decomposition of the "weighted" covariance matrix of the coefficients. In view of the minimization property of the FG algorithm, the diagonalization procedure optimizing the criterion (5.13) can be employed here as well. However, the obtained estimates may not be maximum likelihood estimates. Using this procedure for the IV-strings of 1M and 2M maturity, we obtain "common" smoothed eigenfunctions. The first three common eigenfunctions (γ̃1^c, γ̃2^c, γ̃3^c) are displayed in Figures 5.6–5.8. The solid blue curve represents the estimated eigenfunction for the 1M maturity, the finely dashed green curve that for the 2M maturity, and the dashed black curve is the common eigenfunction estimated by the FG algorithm. Assuming that the σ̂i(κ, τ) are centered for τ = 1M and 2M (we subtract the sample mean of the corresponding group from the estimated functions), we may use the obtained weight functions in the following factor model of the IV dynamics:

σ̃i(κ, τ) = Σ_{j=1}^{R} γ̃j^c(κ) ⟨γ̃j^c(κ), σ̂i(κ, τ)⟩,    (5.14)

where τ ∈ {1M, 2M} and R is the number of factors. Thus σ̃i is an alternative estimator of σi. This factor model can be used in simulation applications such as Monte Carlo VaR. In particular, the use of the common principal components γ̃j^c(κ) reduces the high-dimensional IV-surface problem to a small number of functional factors.
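On a discrete moneyness grid, the factor model (5.14) amounts to projecting each centered IV-string onto the common eigenfunctions. A sketch under assumed inputs – the grid, the two toy "factors", and the curve below are illustrative stand-ins for the estimated quantities:

```python
import numpy as np

def l2_inner(f, g, x):
    """Trapezoidal approximation of the L2 inner product <f, g>."""
    h = f * g
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(x)))

def factor_reconstruction(sigma, gammas, kappa):
    """Approximate a centered IV-string by R common factors:
    sum_j <gamma_j, sigma> gamma_j, cf. (5.14)."""
    scores = [l2_inner(g, sigma, kappa) for g in gammas]
    recon = sum(f * g for f, g in zip(scores, gammas))
    return np.array(scores), recon

# toy setup (assumed): two L2-orthonormal factors on a moneyness grid
kappa = np.linspace(0.8, 1.2, 401)
g1 = np.ones_like(kappa) / np.sqrt(kappa[-1] - kappa[0])  # level factor
g2 = kappa - kappa.mean()                                  # slope factor
g2 = g2 / np.sqrt(l2_inner(g2, g2, kappa))
sigma = 0.7 * g1 - 0.3 * g2        # a curve lying in the factor span
scores, recon = factor_reconstruction(sigma, [g1, g2], kappa)
```

For a curve lying in the span of the factors the reconstruction is exact; for real IV-strings the residual shrinks as R grows.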


Figure 5.6: First weight functions, α = 10^{-7}; the solid blue line is the weight function of the 1M maturity group (γ̂1^{1M}), the finely dashed green line that of the 2M maturity group (γ̂1^{2M}), and the dashed black line is the common eigenfunction (γ̃1^c), estimated from both groups.

In addition, an econometric approach successfully employed by Fengler, Härdle, and Mammen (2004) can be applied. It consists of fitting an appropriate time series model to the estimated principal component scores, f̃ij^c(τ) = ⟨γ̃j^c(κ), σ̂i(κ, τ)⟩, as displayed in Figure 5.9. Note that the σ̂i(κ, τ) are centered again (the sample means are zero). The fitted time series model can then be used for forecasting future IV functions. There are still some open questions related to this topic. First of all, a practitioner would be interested in a good automated choice of the parameters of our method (the dimension L of the truncated functional basis and the smoothing parameter α). The application of Fourier coefficients in this framework seems reasonable for volatility smiles (U-shaped strings); however, for volatility smirks (typically monotonically decreasing strings) the performance


Figure 5.7: Second eigenfunctions, α = 10^{-7}; the solid blue line is the weight function of the 1M maturity group (γ̂2^{1M}), the finely dashed green line that of the 2M maturity group (γ̂2^{2M}), and the dashed black line is the common eigenfunction (γ̃2^c), estimated from both groups.

is rather bad. In particular, the variance of the functional objects and the shape of the weight functions at the boundaries are affected. The application of regression splines in this setting seems promising, but it increases the number of smoothing parameters by the number and choice of the knots – problems which are generally not easy to deal with. The next natural question, which is still open, concerns the statistical properties of the technique and a testing procedure for the functional common PCA model. Finally, using data for a longer time period, one may also analyze longer maturities such as 3 or 6 months.


Figure 5.8: Third eigenfunctions, α = 10^{-7}; the solid blue line is the weight function of the 1M maturity group (γ̂3^{1M}), the finely dashed green line that of the 2M maturity group (γ̂3^{2M}), and the dashed black line is the common eigenfunction (γ̃3^c), estimated from both groups.


Figure 5.9: Estimated principal component scores f̃i1^c(1M), f̃i2^c(1M), and f̃i3^c(1M) for the 1M maturity – first row, and f̃i1^c(2M), f̃i2^c(2M), and f̃i3^c(2M) for the 2M maturity – second row; α = 10^{-7}.

Bibliography


Black, F. and Scholes, M. (1973). The Pricing of Options and Corporate Liabilities, Journal of Political Economy 81: 637–654.

Dauxois, J., Pousse, A., and Romain, Y. (1982). Asymptotic Theory for the Principal Component Analysis of a Vector Random Function: Some Applications to Statistical Inference, Journal of Multivariate Analysis 12: 136–154.

Flury, B. (1988). Common Principal Components and Related Models, Wiley, New York.

Fengler, M., Härdle, W., and Schmidt, P. (2002). Common Factors Governing VDAX Movements and the Maximum Loss, Journal of Financial Markets and Portfolio Management 16(1): 16–29.

Fengler, M., Härdle, W., and Villa, P. (2003). The Dynamics of Implied Volatilities: A Common Principal Components Approach, Review of Derivatives Research 6: 179–202.

Fengler, M., Härdle, W., and Mammen, E. (2004). Implied Volatility String Dynamics, CASE Discussion Paper, http://www.case.hu-berlin.de.

Föllmer, H. and Schied, A. (2002). Stochastic Finance, Walter de Gruyter.

Härdle, W. (1990). Applied Nonparametric Regression, Cambridge University Press.

Hafner, R. and Wallmeier, M. (2001). The Dynamics of DAX Implied Volatilities, International Quarterly Journal of Finance 1(1): 1–27.

Härdle, W. and Simar, L. (2003). Applied Multivariate Statistical Analysis, Springer-Verlag, Berlin Heidelberg.

Kneip, A. and Utikal, K. (2001). Inference for Density Families Using Functional Principal Components Analysis, Journal of the American Statistical Association 96: 519–531.

Ramsay, J. and Silverman, B. (1997). Functional Data Analysis, Springer, New York.

Rice, J. and Silverman, B. (1991). Estimating the Mean and Covariance Structure Nonparametrically when the Data are Curves, Journal of the Royal Statistical Society, Series B 53: 233–243.


Silverman, B. (1996). Smoothed Functional Principal Components Analysis by Choice of Norm, Annals of Statistics 24: 1–24.

6 Implied Trinomial Trees

Pavel Čížek and Karel Komorád

Options are financial derivatives that, conditional on the price of an underlying asset, constitute a right to transfer the ownership of this underlying. More specifically, European call and put options give their owner the right to buy or sell, respectively, the underlying at a fixed strike price at a given date. Options are important financial instruments used for hedging since they can be included in a portfolio to reduce risk. Corporate securities (e.g., bonds or stocks) may include option features as well. Last, but not least, some new financing techniques, such as contingent value rights, are straightforward applications of options. Thus, option pricing has become one of the basic techniques in finance. The boom in research on the use of options started after Black and Scholes (1973) published an option-pricing formula based on geometric Brownian motion. However, option prices computed by the Black-Scholes formula and the market prices of options exhibit a discrepancy: whereas the implied volatility of market option prices varies with the strike price (or moneyness) – a dependency referred to as the volatility smile – the Black-Scholes model is based on the assumption of a constant volatility. Therefore, many new approaches have been proposed to model option prices consistently with the market. Probably the most commonly used, and rather intuitive, procedure for option pricing is based on binomial trees, which represent a discrete form of the Black-Scholes model. To fit the market data, Derman and Kani (1994) proposed an extension of binomial trees: the so-called implied binomial trees, which are able to model the market volatility smile. Implied trinomial trees (ITTs) present an analogous extension of trinomial trees proposed by Derman, Kani, and Chriss (1996). Like their binomial counterparts, they can fit the market volatility smile and actually converge to the same continuous limit as binomial trees.
In addition, they allow for a free choice of the underlying prices at each node of a tree, the so-called state space.

[Figure 6.1 about here: two panels – the skew structure (implied volatility [%] against strike price [DM]) and the term structure (implied volatility [%] against time to maturity [days]).]
Figure 6.1: Implied volatilities of DAX put options on January 29, 1999.

This feature of ITTs makes it possible to improve the fit of the volatility smile under some circumstances, such as inconsistent, arbitrage-violating, or other market prices leading to implausible or degenerated probability distributions in binomial trees. We introduce ITTs in several steps. We first review the main concepts regarding option pricing (Section 6.1) and implied models (Section 6.2). Later, we discuss the construction of ITTs (Section 6.3) and provide some illustrative examples (Section 6.4).

6.1

Option Pricing

The option-pricing model of Black and Scholes (1973) is based on the assumption that the underlying asset follows a geometric Brownian motion with a constant volatility σ:

dSt/St = µ dt + σ dWt,    (6.1)

where St denotes the underlying-price process, µ is the expected return, and Wt stands for the standard Wiener process. As a consequence, the distribution of St is lognormal. More importantly, the volatility σ is the only parameter of the Black-Scholes formula which is not explicitly observable in the market. Thus,



Figure 6.2: Two levels of a CRR binomial tree.

we infer σ by matching the observed option prices. A solution σI, "implied" by option prices, is called the implied volatility (or Black-Scholes equivalent). In general, implied volatilities vary both with respect to the exercise price (the skew structure) and the expiration time (the term structure). Both dependencies are illustrated in Figure 6.1, with the first one representing the volatility smile. Let us add that the implied volatility of an option is the market's estimate of the average future underlying volatility during the life of that option. We refer to the market's estimate of the underlying volatility at a particular time and price point as the local volatility. Binomial trees, as a discretization of the Black-Scholes model, can be constructed in several alternative ways. Here we recall the classic Cox, Ross, and Rubinstein (1979) scheme (CRR), which has a constant logarithmic spacing between nodes on the same level (this spacing represents the future price volatility). A standard CRR tree is depicted in Figure 6.2. Starting at a node S, the price of the underlying asset can either increase to Su with probability p or decrease to Sd with probability 1 − p:

Su = S e^{σ√∆t},    (6.2)
Sd = S e^{−σ√∆t},    (6.3)
p = (F − Sd)/(Su − Sd),    (6.4)


where ∆t refers to the time step and σ is the (constant) volatility. The forward price F = e^{r∆t}S in the node S is determined by the continuous interest rate r (for the sake of simplicity, we assume that the dividend yield equals zero; see Cox, Ross, and Rubinstein, 1979, for the treatment of dividends). A binomial tree corresponding to the risk-neutral evolution of the underlying is the same for all options on this asset, no matter what the strike price or time to expiration is. There are many extensions of the original Black-Scholes approach that try to capture the volatility variation and to price options consistently with the market prices (that is, to account for the volatility smile). Some extensions incorporate a stochastic volatility factor or discontinuous jumps in the underlying price; see for instance Franke, Härdle, and Hafner (2004) and Chapters 5 and 7. In the next section, we discuss an extension of the Black-Scholes model developed by Derman and Kani (1994) – the implied trees.
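A single CRR step, equations (6.2)–(6.4) together with the forward price F = e^{r∆t}S, can be sketched in a few lines; the numerical parameters are arbitrary illustrations, not values from the text:

```python
import math

def crr_step(S, sigma, r, dt):
    """One CRR step: up/down nodes (6.2)-(6.3) and the risk-neutral
    up-probability (6.4) with forward price F = exp(r*dt) * S."""
    Su = S * math.exp(sigma * math.sqrt(dt))    # (6.2)
    Sd = S * math.exp(-sigma * math.sqrt(dt))   # (6.3)
    F = math.exp(r * dt) * S                    # forward price
    p = (F - Sd) / (Su - Sd)                    # (6.4)
    return Su, Sd, p

# illustrative parameters (assumed)
Su, Sd, p = crr_step(S=100.0, sigma=0.2, r=0.05, dt=0.25)
```

By construction, p·Su + (1 − p)·Sd equals the forward price exactly, which is the risk-neutrality of the step.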

6.2

Trees and Implied Trees

While the Black-Scholes model assumes that the underlying asset follows a geometric Brownian motion (6.1) with a constant volatility, more complex models assume that the underlying follows a process with a price- and time-varying volatility σ(S, t); see Dupire (1994) and Fengler, Härdle, and Villa (2003) for details and related evidence. Such a process can be expressed by the following stochastic differential equation:

dSt/St = µ dt + σ(S, t) dWt.    (6.5)

This approach ensures that the valuation of an option remains preference-free, that is, all uncertainty is in the spot price, and thus, we can hedge options using the underlying. Derman and Kani (1994) show that it is possible to determine σ(S, t) directly from the market prices of liquidly traded options. Further, they use this volatility σ(S, t) to construct an implied binomial tree (IBT), which is a natural discrete representation of a non-lognormal evolution process of the underlying prices. In general, we can use – instead of an IBT – any (higher-order) multinomial tree for the discretization of process (6.5). Nevertheless, as the time step tends towards zero, all of them converge to the same continuous process (Hull and White, 1990). Thus, IBTs are among all implied multinomial trees minimal in the sense that they have only one degree of freedom – the arbitrary



Figure 6.3: Computing the Arrow-Debreu price in a binomial tree. The bold lines with arrows depict all (three) possible paths from the root of the tree to point A.

choice of the central node at each level of the tree. Although one may feel now that binomial trees are sufficient, some higher-order trees could be more useful because they allow for a more flexible discretization in the sense that transition probabilities and probability distributions can vary as smoothly as possible across a tree. This is especially important when the market option prices are inaccurate because of inefficiency, market frictions, and so on. At the end of this section, let us recall the concept of Arrow-Debreu prices, which is closely related to multinomial trees and becomes very useful in subsequent derivations (Section 6.3). Let (n, i) denote the ith (highest) node in the nth time level of a tree. The Arrow-Debreu price λn,i at node (n, i) of a tree is computed as the sum of the products of the risklessly discounted transition probabilities over all paths starting in the root of the tree and leading to node (n, i). Hence, the Arrow-Debreu price of the root is equal to one, and the Arrow-Debreu prices at the final level of a (multinomial) tree form a discrete approximation of the state price density. Notice that these prices are discounted, and thus, the risk-neutral probability corresponding to each node (at the final level) should be calculated as the product of the Arrow-Debreu price and the capitalizing factor e^{rT}.

6.3

Implied Trinomial Trees

6.3.1

Basic Insight

A trinomial tree with N levels is a set of nodes sn,i (representing the underlying price), where n = 1, . . . , N is the level number and i = 1, . . . , 2n − 1 indexes the nodes within a level. Being at a node sn,i, one can move to one of three nodes (see Figure 6.4, left panel): (i) to the upper node with value sn+1,i with probability pi; (ii) to the lower node with value sn+1,i+2 with probability qi; and (iii) to the middle node with value sn+1,i+1 with probability 1 − pi − qi. For the sake of brevity, we omit the level index n from the transition probabilities unless they refer to a specific level; that is, we write pi and qi instead of pn,i and qn,i unless the level has to be specified. Similarly, let us denote the nodes in the new level by capital letters: Si (= sn+1,i), Si+1 (= sn+1,i+1), and Si+2 (= sn+1,i+2), respectively (see Figure 6.4, right panel). Starting from a node sn,i at time tn, there are five unknown parameters: the two transition probabilities pi and qi and the three prices Si, Si+1, and Si+2 of the new nodes. To determine them, we first introduce some notation and the main requirements a tree should satisfy. Let Fi denote the known forward price of the spot price sn,i and λn,i the known Arrow-Debreu price at node (n, i). The Arrow-Debreu prices for a trinomial tree can be obtained by the following iterative formulas:

λ1,1 = 1,    (6.6)
λn+1,1 = e^{−r∆t} λn,1 p1,    (6.7)
λn+1,2 = e^{−r∆t} {λn,1 (1 − p1 − q1) + λn,2 p2},    (6.8)
λn+1,i+1 = e^{−r∆t} {λn,i−1 qi−1 + λn,i (1 − pi − qi) + λn,i+1 pi+1},    (6.9)
λn+1,2n = e^{−r∆t} {λn,2n−1 (1 − p2n−1 − q2n−1) + λn,2n−2 q2n−2},    (6.10)
λn+1,2n+1 = e^{−r∆t} λn,2n−1 q2n−1.    (6.11)
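The recursion (6.6)–(6.11) simply propagates discounted probability mass through the tree: each child node collects e^{−r∆t} times the branch probability times the parent's Arrow-Debreu price from every parent reaching it. A compact sketch (the probabilities in the call are taken from the worked example of Section 6.4; the level indexing here is 0-based):

```python
import math

def arrow_debreu_levels(p, q, r, dt, n_levels):
    """Arrow-Debreu prices via (6.6)-(6.11) for a trinomial tree whose
    level n has 2n-1 nodes; p[n][i], q[n][i] are the up/down transition
    probabilities at the i-th node of level n (0-based lists)."""
    disc = math.exp(-r * dt)
    levels = [[1.0]]                          # lambda_{1,1} = 1  (6.6)
    for n in range(1, n_levels):
        prev = levels[-1]
        cur = [0.0] * (2 * n + 1)
        for i, lam in enumerate(prev):        # each node has 3 children
            pi, qi = p[n - 1][i], q[n - 1][i]
            cur[i] += disc * lam * pi                  # upper child
            cur[i + 1] += disc * lam * (1 - pi - qi)   # middle child
            cur[i + 2] += disc * lam * qi              # lower child
        levels.append(cur)
    return levels

# one step with the root probabilities of the example in Section 6.4
r_cont = math.log(1.12)
lv = arrow_debreu_levels(p=[[0.523]], q=[[0.077]], r=r_cont, dt=1.0,
                         n_levels=2)
```

The prices of any level sum to the corresponding discount factor, reflecting that they discretize a discounted probability density.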

An implied tree provides a discrete representation of the evolution process of underlying prices. To capture and model the underlying price correctly, we desire that an implied tree: 1. reproduces correctly the volatility smile, 2. is risk-neutral, 3. uses transition probabilities from interval (0, 1).



Figure 6.4: Nodes in a trinomial tree. Left panel: a single node with its branches. Right panel: the nodes of two consecutive levels n − 1 and n.

To fulfill the risk-neutrality condition, the expected value of the underlying price in the following time period tn+1 has to equal its known forward price:

E sn+1,i = pi Si + (1 − pi − qi) Si+1 + qi Si+2 = Fi = e^{r∆t} sn,i,    (6.12)

where r denotes the continuous interest rate and ∆t is the time step from tn to tn+1. Additionally, one can specify such a condition also for the second moments of sn,i and Fi. Hence, one obtains a second constraint on the node prices and transition probabilities:

pi (Si − Fi)² + (1 − pi − qi)(Si+1 − Fi)² + qi (Si+2 − Fi)² = Fi² σi² ∆t + O(∆t),    (6.13)

where σi is the stock or index price volatility during the time period.


Consequently, we have two constraints (6.12) and (6.13) for five unknown parameters, and therefore, there is no unique implied trinomial tree. On the other hand, all trees satisfying these constraints are equivalent in the sense that, as the time spacing ∆t tends to zero, all of them converge to the same continuous process. A common method for constructing an ITT is to choose the underlying prices freely first and then to solve equations (6.12) and (6.13) for the transition probabilities pi and qi. Afterwards one only has to ensure that these probabilities do not violate the above-mentioned Condition 3. Apparently, using an ITT instead of an IBT gives us additional degrees of freedom. This allows us to better fit the volatility smile, especially when inconsistent or arbitrage-violating market option prices make a consistent tree impossible. Note, however, that even though the constructed tree is consistent, other difficulties can arise when its local volatility and probability distributions are jagged and "implausible."
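For a fixed state space, equations (6.12) and (6.13) (dropping the O(∆t) term) are two linear equations in pi and qi. A sketch with illustrative node values resembling the example of Section 6.4:

```python
import numpy as np

def solve_transition_probs(Si, Si1, Si2, F, sigma, dt):
    """Solve the risk-neutrality condition (6.12) and the second-moment
    condition (6.13), with O(dt) ignored, for (p_i, q_i)."""
    A = np.array([
        [Si - Si1, Si2 - Si1],                       # from (6.12)
        [(Si - F) ** 2 - (Si1 - F) ** 2,
         (Si2 - F) ** 2 - (Si1 - F) ** 2],           # from (6.13)
    ])
    b = np.array([F - Si1,
                  F ** 2 * sigma ** 2 * dt - (Si1 - F) ** 2])
    p, q = np.linalg.solve(A, b)
    return float(p), float(q)

# node values as in the constant-volatility example (assumed inputs)
p, q = solve_transition_probs(Si=116.83, Si1=100.0, Si2=85.59,
                              F=107.69, sigma=0.095, dt=1.0)
```

The resulting probabilities must still be checked against Condition 3; for badly chosen state spaces the linear solution can leave the interval (0, 1).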

6.3.2

State Space

There are several methods we can use to construct an initial state space. Let us first discuss the construction of a constant-volatility trinomial tree, which forms the base for an implied trinomial tree. As already mentioned, the binomial and trinomial discretizations of the constant-volatility Black-Scholes model have the same continuous limit, and therefore, are equivalent. Hence, we can start from a constant-volatility CRR binomial tree and then combine two steps of this tree into a single step of a new trinomial tree. This is illustrated in Figure 6.5, where the thin lines correspond to the original binomial tree and the thick lines to the constructed trinomial tree. Consequently, using formulas (6.2) and (6.3), we can derive the following expressions for the nodes of the constructed trinomial tree:

Si = sn+1,i = sn,i e^{σ√(2∆t)},    (6.14)
Si+1 = sn+1,i+1 = sn,i,    (6.15)
Si+2 = sn+1,i+2 = sn,i e^{−σ√(2∆t)},    (6.16)

where σ is a constant volatility (e.g., an estimate of the at-the-money volatility at maturity T ). Next, summing the transition probabilities in the binomial tree given in (6.4), we can also derive the up and down transition probabilities in


Figure 6.5: Constructing a constant-volatility trinomial tree (thick lines) by combining two steps of a CRR binomial tree (thin lines).

the trinomial tree (the "middle" transition probability is equal to 1 − pi − qi):

pi = { (e^{r∆t/2} − e^{−σ√(∆t/2)}) / (e^{σ√(∆t/2)} − e^{−σ√(∆t/2)}) }²,
qi = { (e^{σ√(∆t/2)} − e^{r∆t/2}) / (e^{σ√(∆t/2)} − e^{−σ√(∆t/2)}) }².

Note that there are more methods for building a constant-volatility trinomial tree, such as combining two steps of a Jarrow and Rudd (1983) binomial tree; see Derman, Kani, and Chriss (1996) for more details. When the implied volatility varies only slowly with strike and expiration, the regular state space with a uniform mesh size, as described above, is adequate for constructing ITT models. On the other hand, if the volatility varies significantly with strike or time to maturity, we should choose a state space reflecting these properties. Assuming that the volatility is separable in time and stock price, σ(S, t) = σ(S)σ(t), an ITT state space with a proper skew and term structure can be constructed in four steps.
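The constant-volatility construction above – nodes (6.14)–(6.16) plus the squared half-step probabilities – can be sketched as follows; the parameters are chosen to match the example of Section 6.4, where s = 100 and σ = 11% give daughter nodes 116.83, 100.00, and 85.59:

```python
import math

def trinomial_step(s, sigma, r, dt):
    """Daughter nodes (6.14)-(6.16) and transition probabilities of a
    constant-volatility trinomial tree built from two CRR half-steps."""
    up = s * math.exp(sigma * math.sqrt(2.0 * dt))     # (6.14)
    mid = s                                            # (6.15)
    down = s * math.exp(-sigma * math.sqrt(2.0 * dt))  # (6.16)
    u = math.exp(sigma * math.sqrt(dt / 2.0))          # half-step up factor
    den = u - 1.0 / u                                  # e^{x} - e^{-x} term
    p = ((math.exp(r * dt / 2.0) - 1.0 / u) / den) ** 2
    q = ((u - math.exp(r * dt / 2.0)) / den) ** 2
    return (up, mid, down), (p, 1.0 - p - q, q)

# zero rates as in the four-step construction (assumed)
nodes, probs = trinomial_step(s=100.0, sigma=0.11, r=0.0, dt=1.0)
```

Because p and q are squares of the binomial half-step probabilities, they automatically lie in (0, 1) for any reasonable parameters.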


First, we build a regular trinomial lattice with a constant time spacing ∆t and a constant price spacing ∆S as described above. Additionally, we assume that all interest rates and dividends are equal to zero. Second, we modify ∆t at different time points. Let us denote the original equally spaced time points by t0 = 0, t1, . . . , tn = T. We can then find the unknown scaled times t̃0 = 0, t̃1, . . . , t̃n = T by solving the following set of non-linear equations:

t̃k Σ_{i=1}^{n−1} 1/σ²(t̃i) = T/σ²(T) + t̃k Σ_{i=1}^{k} 1/σ²(t̃i),    k = 1, . . . , n − 1.    (6.17)

Next, we change ∆S at different levels. Denoting by S1, . . . , S2n+1 the original (known) underlying prices, we solve for the rescaled underlying prices S̃1, . . . , S̃2n+1 using

S̃k/S̃k−1 = exp{ (c/σ(Sk)) ln(Sk/Sk−1) },    k = 2, . . . , 2n + 1,    (6.18)

where c is a constant. It is recommended to set c to an estimate of the local volatility. Since there are 2n equations for 2n + 1 unknown parameters, an additional equation is needed. Here we always suppose that the new central node equals the original central node: S̃n+1 = Sn+1. See Derman, Kani, and Chriss (1996) for a more elaborate explanation of the theory behind equations (6.17) and (6.18). Finally, one can increase all node prices by a sufficiently large growth factor, which removes forward-price violations; see Section 6.3.4. Multiplying all zero-rate node prices at time t̃i by e^{rt̃i} should always be sufficient.

6.3.3

Transition Probabilities

Once the state space of an ITT is fixed, we can compute the transition probabilities for all nodes (n, i) at each tree level n. Let C(K, tn+1) and P(K, tn+1) denote today's price of a standard European call and put option, respectively, struck at K and expiring at tn+1. These values can be obtained by interpolating the smile surface at various strike and time points. The values of these options given by the trinomial tree are the discounted expectations of the pay-off functions: max(Sj − K, 0) = (Sj − K)+ for the call option and max(K − Sj, 0) for the put option at the node (n + 1, j). The expectation


is taken with respect to the probabilities of reaching each node, that is, with respect to the transition probabilities:

C(K, tn+1) = e^{−r∆t} Σ_j {pj λn,j + (1 − pj−1 − qj−1) λn,j−1 + qj−2 λn,j−2} (Sj − K)+,    (6.19)

P(K, tn+1) = e^{−r∆t} Σ_j {pj λn,j + (1 − pj−1 − qj−1) λn,j−1 + qj−2 λn,j−2} (K − Sj)+.    (6.20)

If we set the strike price K to Si+1 (the stock price at node (n + 1, i + 1)), rearrange the terms in the sum, and use equation (6.12), we can express the transition probabilities pi and qi for all nodes above the central node from formula (6.19):

pi = { e^{r∆t} C(Si+1, tn+1) − Σ_{j=1}^{i−1} λn+1,j (Fj − Si+1) } / { λn+1,i (Si − Si+1) },    (6.21)

qi = { Fi − pi (Si − Si+1) − Si+1 } / ( Si+2 − Si+1 ).    (6.22)

Similarly, we compute from formula (6.20) the transition probabilities for all nodes below (and including) the central node (n + 1, n) at time tn:

qi = { e^{r∆t} P(Si+1, tn+1) − Σ_{j=i+1}^{2n−1} λn+1,j (Si+1 − Fj) } / { λn+1,i (Si+1 − Si+2) },    (6.23)

pi = { Fi − qi (Si+2 − Si+1) − Si+1 } / ( Si − Si+1 ).    (6.24)

A detailed derivation of these formulas can be found in Komorád (2002). Finally, the implied local volatilities are approximated from equation (6.13):

σi² ≈ { pi (Si − Fi)² + (1 − pi − qi)(Si+1 − Fi)² + qi (Si+2 − Fi)² } / ( Fi² ∆t ).    (6.25)

6.3.4

Possible Pitfalls

Formulas (6.21)–(6.24) can unfortunately result in transition probabilities which are negative or greater than one. This is inconsistent with rational option prices



Figure 6.6: Two kinds of the forward price violation. Left panel: forward price outside the range of its daughter nodes. Right panel: sharp increase in option prices leading to an extreme local volatility.

and allows arbitrage. We actually have to face two forms of this problem; see Figure 6.6 for examples of such trees. First, we have to check that no forward price Fn,i at node (n, i) falls outside the range of its daughter nodes at the level n + 1: Fn,i ∈ (sn+1,i+2, sn+1,i). This inconsistency is not difficult to overcome since we are free to choose the state space. Thus, we can overwrite the nodes causing this problem. Second, extremely small or large values of option prices, which would imply an extreme value of the local volatility, can also result in probabilities that are negative or larger than one. In such a case, we have to overwrite the option prices which led to the unacceptable probabilities. Fortunately, the transition probabilities can always be corrected provided that the corresponding state space does not violate the forward price condition Fn,i ∈ (sn+1,i+2, sn+1,i). Derman, Kani, and Chriss (1996) proposed to reduce the troublesome nodes to binomial ones, or to set

pi = (1/2) { (Fi − Si+1)/(Si − Si+1) + (Fi − Si+2)/(Si − Si+2) },    qi = (1/2) (Si − Fi)/(Si − Si+2),    (6.26)

for Fi ∈ (Si+1, Si), and

pi = (1/2) (Fi − Si+2)/(Si − Si+2),    qi = (1/2) { (Si+1 − Fi)/(Si+1 − Si+2) + (Si − Fi)/(Si − Si+2) },    (6.27)

for Fi ∈ (Si+2, Si+1). In both cases, the "middle" transition probability is equal to 1 − pi − qi.
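A sketch of this overwrite rule: if (pi, qi) leave the unit simplex, fall back to (6.26) or (6.27) according to where the forward price lies, assuming it satisfies Fi ∈ (Si+2, Si). The invalid pair in the call below is an illustrative assumption:

```python
def fix_probabilities(p, q, Si, Si1, Si2, F):
    """Replace invalid transition probabilities by the Derman-Kani-Chriss
    overwrites (6.26)/(6.27); requires F in (Si2, Si)."""
    ok = 0.0 < p < 1.0 and 0.0 < q < 1.0 and 0.0 < 1.0 - p - q < 1.0
    if ok:
        return p, q
    if Si1 < F < Si:                                   # overwrite (6.26)
        p = 0.5 * ((F - Si1) / (Si - Si1) + (F - Si2) / (Si - Si2))
        q = 0.5 * (Si - F) / (Si - Si2)
    elif Si2 < F <= Si1:                               # overwrite (6.27)
        p = 0.5 * (F - Si2) / (Si - Si2)
        q = 0.5 * ((Si1 - F) / (Si1 - Si2) + (Si - F) / (Si - Si2))
    else:
        raise ValueError("forward price violates F in (Si2, Si)")
    return p, q

# an invalid pair (assumed for illustration) gets overwritten
p, q = fix_probabilities(1.3, -0.2, Si=116.83, Si1=100.0, Si2=85.59,
                         F=107.69)
```

One can check algebraically that both overwrites preserve the risk-neutrality condition (6.12) exactly, which is why they are admissible corrections.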

6.4

Examples

To illustrate the construction of an implied trinomial tree and its use, we consider here ITTs for two artificial implied-volatility functions and an implied-volatility function constructed from real data.

6.4.1

Pre-speciﬁed Implied Volatility

Let us consider a case where the volatility varies only slowly with respect to the strike price and time to expiration (maturity). Assume that the current index level is 100 points, the annual riskless interest rate is r = 12%, and the dividend yield equals δ = 4%. The annualized Black-Scholes implied volatility is assumed to be σ = 11%, and additionally, it increases (decreases) linearly by 10 basis points (i.e., 0.1%) with every 10 unit drop (rise) in the strike price K; that is, σI = 0.11 − ∆K · 0.001. To keep the example simple, we consider three one-year steps. First, we construct the state space: a constant-volatility trinomial tree as described in Section 6.3.2. The first node at time t0 = 0, labeled A in Figure 6.7, has the value sA = 100, today's spot price. The next three nodes, at time t1, are computed from equations (6.14)–(6.16) and take the values S1 = 116.83, S2 = 100.00, and S3 = 85.59, respectively. In order to determine the transition probabilities, we need to know the price P(S2, t1) of a put option struck at S2 = 100 and expiring one year from now. Since the implied volatility of this option is 11%, we calculate its price using a constant-volatility trinomial tree with the same state space and find it to be 0.987 index points. Further, the forward price corresponding to node A is FA = S e^{(r*−δ*)∆t} = 107.69, where r* = log(1 + r) denotes the continuous interest rate and δ* = log(1 + δ) the continuous dividend rate. Hence, the transition probability of a down movement



Figure 6.7: The state space of a trinomial tree with constant volatility σ = 11%. Nodes A and B are reference points for which we demonstrate the construction of an ITT and the estimation of the implied local volatility. STFitt01.xpl

computed from equation (6.23) is

qA = { e^{log(1+0.12)·1} · 0.987 − Σ } / { 1 · (100.00 − 85.59) } = 0.077,

where the summation term Σ in the numerator is zero because there are no nodes with a price lower than S3 at time t1. Similarly, the transition probability of an upward movement pA, computed from equation (6.24), is

pA = { 107.69 + 0.077 · (100.00 − 85.59) − 100 } / ( 116.83 − 100.00 ) = 0.523.
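These two computations, together with the local-volatility estimate from (6.25), can be reproduced in a few lines; the put price 0.987 and the forward 107.69 are taken from the text, and the summation term is zero here:

```python
import math

# inputs from the example
S1, S2, S3 = 116.83, 100.00, 85.59    # daughter nodes at t1
put_price = 0.987                     # P(S2, t1), from the text
F_A = 107.69                          # forward price of node A
r_cont = math.log(1 + 0.12)           # continuous interest rate
lam_A = 1.0                           # Arrow-Debreu price of the root

# down probability from (6.23); the summation term is zero
q_A = (math.exp(r_cont * 1.0) * put_price - 0.0) / (lam_A * (S2 - S3))
# up probability from (6.24)
p_A = (F_A + q_A * (S2 - S3) - S2) / (S1 - S2)
mid_A = 1.0 - p_A - q_A

# implied local volatility at node A from (6.25), with dt = 1
var_A = (p_A * (S1 - F_A) ** 2 + mid_A * (S2 - F_A) ** 2
         + q_A * (S3 - F_A) ** 2) / (F_A ** 2 * 1.0)
sigma_A = math.sqrt(var_A)
```

The computed values match the rounded figures in the text (0.077, 0.523, and σA ≈ 9.5%).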


Figure 6.8: Transition probabilities (upper, middle, and lower) for σI = 0.11 − ∆K · 0.001. STFitt02.xpl

Finally, the middle transition probability equals 1 − pA − qA = 0.4. As one can see from equations (6.6)–(6.11), the Arrow-Debreu prices turn out to be just discounted transition probabilities: λ1,1 = e^{−log(1+0.12)·1} · 0.523 = 0.467, λ1,2 = 0.358, and λ1,3 = 0.069. We can now estimate the value of the implied local volatility at node A from equation (6.25), obtaining σA = 9.5%.

Let us demonstrate the computation of one further node. Starting from node B in year t2 = 2 of Figure 6.7, the index level at this node is sB = 116.83 and its forward price one year later is FB = e^{(r∗ − δ∗)·1} · 116.83 = 125.82. From this node, the underlying can move to one of three future nodes at time t3 = 3, with prices s3,2 = 136.50, s3,3 = 116.83, and s3,4 = 100.00. The value of a call option struck at 116.83 and expiring at time t3 = 3 is C(s3,3, t3) = 8.87, corresponding to the implied volatility of 10.83% interpolated from the smile. The Arrow-Debreu price computed from equation (6.8) is λ2,2 = e^{−log(1+0.12)·1} {0.467 · (1 − 0.517 − 0.070) + 0.358 · 0.523} = 0.339. The numerical values used here are already known from the previous level at time t1. Now, using equations (6.21) and (6.22) we can find the transition


Figure 6.9: Arrow-Debreu prices for σI = 0.11 − ∆K · 0.001. STFitt03.xpl

probabilities:

    p2,2 = {e^{log(1+0.12)·1} · 8.87 − Σ} / {0.339 · (136.50 − 116.83)} = 0.515,

    q2,2 = {125.82 − 0.515 · (136.50 − 116.83) − 116.83} / (100 − 116.83) = 0.068,

where Σ contributes only one term, 0.215 · (147 − 116.83); that is, there is one single node above sB whose forward price is equal to 147. Finally, employing (6.25) again, we find that the implied local volatility at this node is σB = 9.3%. The complete trees of transition probabilities, Arrow-Debreu prices, and local volatilities for this example are shown in Figures 6.8–6.10.

As already mentioned in Section 6.3.4, the transition probabilities may fall out of the interval (0, 1). For example, let us slightly modify our previous example and assume that the Black-Scholes volatility increases (decreases) linearly by 0.5


Figure 6.10: Implied local volatilities for σI = 0.11 − ∆K · 0.001. STFitt04.xpl

Figure 6.11: Transition probabilities for σI = 0.11 − ∆K · 0.005. Nodes C and D had inadmissible transition probabilities (6.21)–(6.24). STFitt05.xpl

Figure 6.12: Arrow-Debreu prices for σI = 0.11 − ∆K · 0.005. Nodes C and D had inadmissible transition probabilities (6.21)–(6.24). STFitt06.xpl

percentage points with every 10 unit drop (rise) in the strike price K; that is, σI = 0.11 − ∆K · 0.005. In other words, the volatility smile is now five times steeper than before. Using the same state space as in the previous example, we find inadmissible transition probabilities at nodes C and D, see Figures 6.11–6.13. To overwrite them with plausible values, we used the strategy described by (6.26) and (6.27) and obtained reasonable results in the sense of the three conditions stated on page 140.

6.4.2 German Stock Index

Following the artificial examples, let us now demonstrate the ITT modeling for a real data set, which consists of strike prices for DAX options with maturities from two weeks to two months on January 4, 1999. Given such data, we can


Figure 6.13: Implied local volatilities for σI = 0.11 − ∆K · 0.005. Nodes C and D had inadmissible transition probabilities (6.21)–(6.24). STFitt07.xpl

firstly compute from the Black-Scholes equation (6.1) the implied volatilities at various combinations of prices and maturities, that is, we can construct the volatility smile. Next, we build and calibrate an ITT so that it fits this smile. The procedure is analogous to the examples described above – the only difference lies in replacing an artificial function σI(K, t) by an estimate of the implied volatility σI at each point (K, t).

For the purpose of demonstration, we build a three-level ITT with time step ∆t of two weeks. First, we construct the state space (Section 6.3.2) starting at time t0 = 0 with the spot price S = 5290 and riskless interest rate r = 4%, see Figure 6.14. Further, we have to compute the transition probabilities. Because option contracts are not available for each combination of price and maturity, we use a nonparametric smoothing procedure to model the whole volatility surface σI(K, t), as employed by Aït-Sahalia, Wang, and Yared (2001) and Fengler, Härdle, and Villa (2003), for instance. Given the data, some transition probabilities fall outside the interval (0, 1); they are depicted by dashed lines in Figure 6.14. Such probabilities have to be corrected as described in Section 6.3.4


Figure 6.14: The state space of the ITT constructed for DAX on January 4, 1999. Dashed lines mark the transitions with originally inadmissible transition probabilities. STFitt08.xpl

(there are no forward price violations). The resulting local volatilities, which reflect the volatility skew, are shown in Figure 6.15.

Probably the main result of this ITT model can be summarized by the state price density (the left panel of Figure 6.16). This density describes the price distribution given by the constructed ITT, smoothed by the Nadaraya-Watson estimator. Apparently, the estimated density is rather rough because we used just three steps in our tree. To get a smoother state-price density estimate, we doubled the number of steps; that is, we used six one-week steps instead of three two-week steps (see the right panel of Figure 6.16).
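The Nadaraya-Watson smoothing step can be sketched generically as follows (a Gaussian kernel; the bandwidth and the toy numbers are our own illustration, not the DAX estimates):

```python
import math

def nadaraya_watson(x0, xs, ys, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.
    The bandwidth h is a free smoothing choice."""
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# smooth a discrete state-price "histogram" (illustrative values only)
prices = [4391.4, 4819.8, 5290.0, 5806.1, 6372.5]
density = [0.05, 0.20, 0.45, 0.22, 0.08]
print(nadaraya_watson(5290.0, prices, density, h=400.0))
```

Evaluating the smoother on a fine grid of prices yields curves like those in Figure 6.16; the estimate is always a weighted average of the observed values, so it stays between their minimum and maximum.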


Figure 6.15: Implied local volatilities computed from an ITT for DAX on January 4, 1999. STFitt08.xpl

Finally, it is possible to use the constructed ITT to evaluate various DAX options. For example, a European knock-out option gives the owner the same rights as a standard European option as long as the index price S does not exceed or fall below some barrier B for the entire life of the knock-out option; see Härdle, Kleinow, and Stahl (2002) for details. So, let us compute the price of a knock-out call DAX option with maturity T = 6 weeks, strike price K = 5200, and barrier B = 4800. The option price at time tj (t0 = 0, t1 = 2, t2 = 4, and t3 = 6 weeks) and stock price sj,i will be denoted Vj,i. At maturity t = T = 6, the price is known: V3,i = max{0, s3,i − K}, i = 1, ..., 7. Thus, V3,1 = max{0, 4001.07 − 5200} = 0 and V3,5 = max{0, 5806.07 − 5200} = 606.07, for instance. To compute the option price at tj < T, one just has to discount the conditional expectation of the option price at time tj+1:

    Vj,i = e^{−r∗ ∆t} {pj,i Vj+1,i+2 + (1 − pj,i − qj,i) Vj+1,i+1 + qj,i Vj+1,i},    (6.28)

Figure 6.16: State price density estimated from an ITT for DAX on January 4, 1999. The dashed line depicts the corresponding Black-Scholes density. Left panel: state price density for a three-level tree. Right panel: state price density for a six-level tree. STFitt08.xpl STFitt09.xpl

provided that sj,i ≥ B, otherwise Vj,i = 0. Hence, at time t2 = 4 one obtains V2,1 = 0 because s2,1 = 4391.40 < 4800 = B, and V2,3 = e^{−log(1+0.04)·2/52} (0.22 · 606.07 + 0.55 · 90 + 0.23 · 0) = 184.33 (see Figure 6.17). We can continue further and compute the option prices at times t1 = 2 and t0 = 0 just using the standard formula (6.28), since prices no longer lie below the barrier B (see Figure 6.14). Thus, one computes V1,1 = 79.7, V1,2 = 251.7, V1,3 = 639.8, and finally, the option price at time t0 = 0 and stock price S = 5290 equals V0,1 = e^{−log(1+0.04)·2/52} (0.25 · 639.8 + 0.50 · 251.7 + 0.25 · 79.7) = 303.28.
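One step of the backward induction (6.28) with the knock-out condition can be sketched as follows. Note that the transition probabilities shown in Figure 6.17 are rounded to two decimals, so the result only approximates the 184.33 obtained in the text with unrounded probabilities:

```python
import math

r, dt = 0.04, 2 / 52                          # two-week step
disc = math.exp(-math.log(1 + r) * dt)        # e^{-r* Δt}

def step_back(p, m, q, V_up, V_mid, V_down, s, B=4800.0):
    """Value one node via eq. (6.28); the option is worthless below the barrier."""
    if s < B:
        return 0.0
    return disc * (p * V_up + m * V_mid + q * V_down)

# payoffs at maturity t3 for the three nodes reachable from (t2, s = 5290)
V3_up, V3_mid, V3_down = 606.07, 90.0, 0.0    # max(0, s - 5200)

# probabilities read off Figure 6.17 (two-decimal rounding)
V23 = step_back(0.22, 0.55, 0.23, V3_up, V3_mid, V3_down, s=5290.0)
V21 = step_back(0.17, 0.65, 0.17, 0.0, 0.0, 0.0, s=4391.40)  # below the barrier

print(round(V23, 1), V21)   # → 182.6 0.0
```

The two-decimal probabilities give about 182.6 instead of 184.33, which illustrates how sensitive the discounted expectation is to rounding of the inputs; the knocked-out node is exactly zero regardless of its probabilities.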


Figure 6.17: Transition probabilities of the ITT constructed for DAX on January 4, 1999. STFitt10.xpl

Bibliography

Aït-Sahalia, Y., Wang, Y., and Yared, F. (2001). Do options markets correctly price the probabilities of movement of the underlying asset? Journal of Econometrics 102: 67–110.

Black, F. and Scholes, M. (1973). The Pricing of Options and Corporate Liabilities. Journal of Political Economy 81: 637–654.

Cox, J. C., Ross, S. A., and Rubinstein, M. (1979). Option Pricing: A Simplified Approach. Journal of Financial Economics 7: 229–263.

Derman, E. and Kani, I. (1994). The Volatility Smile and Its Implied Tree. RISK 7(2): 32–39.

Derman, E., Kani, I., and Chriss, N. (1996). Implied Trinomial Trees of the Volatility Smile. The Journal of Derivatives 3(4): 7–22.

Dupire, B. (1994). Pricing with a smile. RISK 7(1): 18–20.

Fengler, M. R., Härdle, W., and Villa, C. (2003). The dynamics of implied volatilities: a common principal components approach. Review of Derivatives Research 6: 179–202.

Franke, J., Härdle, W., and Hafner, C. M. (2004). Statistics of Financial Markets, Springer, Heidelberg.

Härdle, W., Kleinow, T., and Stahl, G. (2002). Applied Quantitative Finance. Springer-Verlag, Berlin.

Hull, J. (1989). Options, Futures and Other Derivatives. Prentice-Hall, Englewood Cliffs, New Jersey.

Hull, J. and White, A. (1990). Valuing derivative securities using the explicit finite difference method. Journal of Financial and Quantitative Analysis 25: 87–100.

Jarrow, R. and Rudd, A. (1983). Option Pricing, Dow Jones-Irwin Publishing, Homewood, Illinois.

Komorád, K. (2002). Implied Trinomial Trees and Their Implementation with XploRe. Bachelor Thesis, HU Berlin; http://appel.rz.hu-berlin.de/Zope/ise stat/wiwi/ise/stat/forschung/dmbarbeiten/.

Ross, S., Westerfield, R., and Jaffe, J. (2002). Corporate Finance. McGraw-Hill.

7 Heston's Model and the Smile

Rafał Weron and Uwe Wystup

7.1 Introduction

The Black-Scholes formula, based on the assumption of a log-normal stock diffusion with constant volatility, is the universal benchmark for option pricing. But, as all market participants are keenly aware, it is flawed: the model-implied volatilities for different strikes and maturities of options are not constant and tend to be smile shaped. Over the last two decades researchers have tried to find extensions of the model in order to explain this empirical fact.

A very natural approach, suggested already by Merton (1973), allows the volatilities to be a deterministic function of time. While it explains the different implied volatility levels for different maturities, it still does not explain the smile shape for different strikes. Dupire (1994), Derman and Kani (1994), and Rubinstein (1994) came up with the idea of allowing not only time but also state dependence of the volatility coefficient, see Fengler (2005) and Chapter 6. This local (deterministic) volatility approach yields a complete market model and lets the local volatility surface be fitted, but it cannot explain the persistent smile shape which does not vanish as time passes.

The next step beyond the local volatility approach was to allow the volatility coefficient in the Black-Scholes diffusion equation to be random. The pioneering work of Hull and White (1987), Stein and Stein (1991), and Heston (1993) led to the development of stochastic volatility models. These are two-factor models with one of the factors being responsible for the dynamics of the volatility coefficient. Different driving mechanisms for the volatility process have been proposed, including geometric Brownian motion and mean-reverting Ornstein-Uhlenbeck type processes.


Heston's model stands out from this class mainly for two reasons: (i) the process for the volatility is non-negative and mean-reverting, which is what we observe in the markets, and (ii) there exists a closed-form solution for vanilla options. It was also one of the first models able to explain the smile and simultaneously allow a front-office implementation and a market-consistent valuation of many exotics. Hence, we concentrate in this chapter on Heston's model. First, in Section 7.2 we discuss the properties of the model, including marginal distributions and tail behavior. In Section 7.3 we adapt the original work of Heston (1993) to a foreign exchange (FX) setting. We do this because the model is particularly useful in explaining the volatility smile found in FX markets; in equity markets the typical volatility structure is an asymmetric skew (also called a smirk or grimace), and calibrating Heston's model to such a structure leads to very high, unrealistic values of the correlation coefficient. Finally, in Section 7.4 we show that the smile of vanilla options can be reproduced by suitably calibrating the model parameters.

However, we do have to say that Heston's model is not a panacea. One criticism that we might put forward is that its market consistency could potentially be based simply on a large number of market participants using it. Furthermore, while trying to calibrate short-term smiles, the volatility of volatility often seems to explode along with the speed of mean reversion. This is a strong indication that the process "wants" to jump, which of course it is not allowed to do. This observation, together with market crashes, has led researchers to consider models with jumps. Interestingly, jump-diffusion models were investigated already in the mid-seventies (Merton, 1976), long before the advent of stochastic volatility. Jump-diffusion models are, in general, more challenging to handle numerically than stochastic volatility models.

Like the latter, they result in an incomplete market. But, whereas stochastic volatility models can be made complete by the introduction of one (or a few) traded options, a jump-diffusion model typically requires the existence of a continuum of options for the market to be complete. Recent research by Bates (1996) and Bakshi, Cao, and Chen (1997) suggests using a combination of jumps and stochastic volatility. This approach allows for an even better fit to market data, but has so many parameters that it is hard to believe there is enough information in the market to calibrate them. Andersen and Andreasen (2000) let the stock dynamics be described by a jump-diffusion process with local volatility. This method combines ease of modeling steep short-term volatility skews (jumps) and accurate fitting to quoted option prices (deterministic volatility function). Other alternative approaches utilize Lévy processes (Barndorff-Nielsen, Mikosch, and Resnick, 2001; Eberlein,


Kallsen, and Kristen, 2003) or mixing unconditional disturbances (Tompkins and D’Ecclesia, 2004), but it is still an open question how to price and hedge exotics using such models.

7.2 Heston's Model

Heston (1993) assumed that the spot price follows the diffusion:

    dSt = St (µ dt + √vt dWt(1)),    (7.1)

i.e. a process resembling geometric Brownian motion (GBM) with a non-constant instantaneous variance vt. Furthermore, he proposed that the variance be driven by a mean-reverting stochastic process of the form:

    dvt = κ(θ − vt) dt + σ √vt dWt(2),    (7.2)

and allowed the two Wiener processes to be correlated with each other:

    dWt(1) dWt(2) = ρ dt.

The variance process (7.2) was originally used by Cox, Ingersoll, and Ross (1985) for modeling the short-term interest rate. It is defined by three parameters: θ, κ, and σ. In the context of stochastic volatility models they can be interpreted as the long-term variance, the rate of mean reversion to the long-term variance, and the volatility of variance (often called the vol of vol), respectively.

Surprisingly, the introduction of stochastic volatility does not change the properties of the spot price process in a way that could be noticed just by a visual inspection of its realizations. In Figure 7.1 we plot sample paths of a geometric Brownian motion and the spot process (7.1) in Heston's model. To make the comparison more objective, both trajectories were obtained with the same set of random numbers. Clearly, they are indistinguishable by mere eye. In both cases the initial spot rate S0 = 0.84 and the domestic and foreign interest rates are 5% and 3%, respectively, yielding a drift of µ = 2%. The volatility in the GBM is constant, √vt = √4% = 20%, while in Heston's model it is driven by the mean-reverting process (7.2) with the initial variance v0 = 4%, the long-term variance θ = 4%, the speed of mean reversion κ = 2, and the vol of vol σ = 30%. The correlation is set to ρ = −0.05.
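Such a pair of trajectories is easy to generate. The chapter does not prescribe a discretization scheme, so the sketch below uses our own choices: a log-Euler step for the spot (7.1) and an Euler step with full truncation for the variance (7.2), feeding the same Gaussian draws to both models as in Figure 7.1:

```python
import math, random

def heston_and_gbm_paths(S0, mu, v0, kappa, theta, sigma, rho, T=1.0, n=252, seed=7):
    """Simulate (7.1)-(7.2) and a GBM driven by the same Gaussian draws.
    Discretization (log-Euler spot, fully truncated Euler variance) is our choice."""
    rng = random.Random(seed)
    dt = T / n
    S_h, S_g, v = S0, S0, v0
    heston, gbm = [S_h], [S_g]
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)  # corr(z1, z2) = rho
        vp = max(v, 0.0)                    # full truncation keeps sqrt(v) real
        S_h *= math.exp((mu - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
        S_g *= math.exp((mu - 0.5 * v0) * dt + math.sqrt(v0 * dt) * z1)
        v += kappa * (theta - vp) * dt + sigma * math.sqrt(vp * dt) * z2
        heston.append(S_h)
        gbm.append(S_g)
    return heston, gbm

# the parameters of Figure 7.1
heston, gbm = heston_and_gbm_paths(0.84, 0.02, 0.04, 2.0, 0.04, 0.3, -0.05)
print(len(heston), len(gbm))   # → 253 253
```

Plotting the two lists against time reproduces the visual point of Figure 7.1: the paths stay close and are hard to tell apart by eye, even though their instantaneous variances differ.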

Figure 7.1: Sample paths of a geometric Brownian motion (dotted red line) and the spot process (7.1) in Heston's model (solid blue line) obtained with the same set of random numbers (left panel). Despite the fact that the volatility in the GBM is constant, while in Heston's model it is driven by a mean-reverting process (right panel), the sample paths are indistinguishable by mere eye. STFhes01.xpl

A closer inspection of Heston's model does, however, reveal some important differences with respect to GBM. For example, the probability density functions of (log-)returns have heavier tails – exponential compared to Gaussian, see Figure 7.2. In this respect they are similar to hyperbolic distributions (Weron, 2004), i.e. in the log-linear scale they resemble hyperbolas (rather than parabolas).

Equations (7.1) and (7.2) define a two-dimensional stochastic process for the variables St and vt. By setting xt = log(St/S0) − µt, we can express it in terms of the centered (log-)return xt and vt. The process is then characterized by the transition probability Pt(x, v | v0) to have (log-)return x and variance v at time t given the initial return x = 0 and variance v0 at time t = 0. The time evolution of Pt(x, v | v0) is governed by the following Fokker-Planck (or forward

Figure 7.2: The marginal probability density function in Heston's model (solid blue line) and the Gaussian PDF (dotted red line) for the same set of parameters as in Figure 7.1 (left panel). The tails of Heston's marginals are exponential, which is clearly visible in the right panel, where the corresponding log-densities are plotted. STFhes02.xpl

Kolmogorov) equation:

    ∂P/∂t = κ ∂/∂v {(v − θ)P} + (1/2) ∂/∂x (vP) + ρσ ∂²/(∂x∂v) (vP) + (1/2) ∂²/∂x² (vP) + (σ²/2) ∂²/∂v² (vP).    (7.3)

Solving this equation yields the following analytical formula for the density of centered returns x, given a time lag t of the price changes (Dragulescu and Yakovenko, 2002):

    Pt(x) = (1/2π) ∫_{−∞}^{+∞} e^{iξx + Ft(ξ)} dξ,    (7.4)

with

    Ft(ξ) = (κθ/σ²) γt − (2κθ/σ²) log[ cosh(Ωt/2) + {(Ω² − γ² + 2κγ)/(2κΩ)} sinh(Ωt/2) ],

where γ = κ + iρσξ and Ω = √{γ² + σ²(ξ² − iξ)}.


A sample marginal probability density function in Heston’s model is illustrated in Figure 7.2. The parameters are the same as in Figure 7.1, i.e. θ = 4%, κ = 2, σ = 30%, and ρ = −0.05. The time lag is set to t = 1.
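For small experiments, formula (7.4) can be evaluated by direct numerical integration. The sketch below uses the fact that the integrand at −ξ is the complex conjugate of its value at ξ, so only ξ ≥ 0 is needed; the integration cutoff and grid size are ad-hoc choices of ours:

```python
import cmath, math

def heston_density(x, t, kappa, theta, sigma, rho, xi_max=80.0, n=800):
    """Marginal density of centered log-returns, eq. (7.4), via the trapezoidal rule.
    Since F_t(-xi) is the conjugate of F_t(xi), we integrate over xi >= 0 only."""
    h = xi_max / n
    total = 0.0
    for k in range(n + 1):
        xi = k * h
        gamma = kappa + 1j * rho * sigma * xi
        omega = cmath.sqrt(gamma ** 2 + sigma ** 2 * (xi ** 2 - 1j * xi))
        F = (kappa * theta / sigma ** 2) * (
            gamma * t - 2 * cmath.log(cmath.cosh(omega * t / 2)
                + (omega ** 2 - gamma ** 2 + 2 * kappa * gamma)
                / (2 * kappa * omega) * cmath.sinh(omega * t / 2)))
        w = 0.5 if k in (0, n) else 1.0               # trapezoid end-point weights
        total += w * (cmath.exp(1j * xi * x + F)).real
    return total * h / math.pi

# peak of the return density for the parameters of Figure 7.2 (t = 1)
p0 = heston_density(0.0, 1.0, kappa=2.0, theta=0.04, sigma=0.3, rho=-0.05)
print(round(p0, 3))
```

A useful sanity check is that the density integrates to one over a sufficiently wide return range, and that its peak exceeds that of a Gaussian with the same variance, reflecting the heavier tails.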

7.3 Option Pricing

Consider the value function of a general contingent claim U(t, v, S) paying g(S) = U(T, v, S) at time T. We want to replicate it with a self-financing portfolio. Due to the fact that in Heston's model we have two sources of uncertainty (the Wiener processes W(1) and W(2)), the portfolio must include the possibility to trade in the money market, the underlying, and another derivative security with value function V(t, v, S). We start with an initial wealth X0 which evolves according to:

    dX = ∆ dS + Γ dV + rd (X − ΓV) dt − (rd − rf) ∆S dt,    (7.5)

where ∆ is the number of units of the underlying held at time t and Γ is the number of derivative securities V held at time t. Since we are operating in a foreign exchange setup, we let rd and rf denote the domestic and foreign interest rates, respectively. The goal is to find ∆ and Γ such that Xt = U(t, vt, St) for all t ∈ [0, T]. The standard approach to achieve this is to compare the differentials of U and X obtained via Itô's formula. After some algebra we arrive at the partial differential equation which U must satisfy:

    (1/2) vS² ∂²U/∂S² + ρσvS ∂²U/(∂S∂v) + (1/2) σ²v ∂²U/∂v² + (rd − rf) S ∂U/∂S + {κ(θ − v) − λ(t, v, S)} ∂U/∂v − rd U + ∂U/∂t = 0.    (7.6)

For details on the derivation in the foreign exchange setting see Hakala and Wystup (2002). The term λ(t, v, S) is called the market price of volatility risk. Without loss of generality its functional form can be reduced to λ(t, v, S) = λv, see Heston (1993). We obtain a solution to (7.6) by specifying appropriate boundary conditions. For a European vanilla option these are:

    U(T, v, S) = max{φ(S − K), 0},    (7.7)

    U(t, v, 0) = {(1 − φ)/2} K e^{−rd τ},    (7.8)

    ∂U/∂S (t, v, ∞) = {(1 + φ)/2} e^{−rf τ},    (7.9)

    (rd − rf) S ∂U/∂S (t, 0, S) + κθ ∂U/∂v (t, 0, S) + ∂U/∂t (t, 0, S) = rd U(t, 0, S),    (7.10)

    U(t, ∞, S) = S e^{−rf τ} for φ = +1,   and   U(t, ∞, S) = K e^{−rd τ} for φ = −1,    (7.11)

where φ is a binary variable taking value +1 for call options and −1 for put options, K is the strike in units of the domestic currency, τ = T − t, T is the expiration time in years, and t is the current time.

In this case, PDE (7.6) can be solved analytically using the method of characteristic functions (Heston, 1993). The price of a European vanilla option is hence given by:

    h(t) = HestonVanilla(κ, θ, σ, ρ, λ, rd, rf, v0, S0, K, τ, φ)
         = φ {e^{−rf τ} St P+(φ) − K e^{−rd τ} P−(φ)},    (7.12)

where a = κθ, u1 = 1/2, u2 = −1/2, b1 = κ + λ − ρσ, b2 = κ + λ, x = log St,

    dj = √{(ρσϕi − bj)² − σ²(2uj ϕi − ϕ²)},
    gj = (bj − ρσϕi + dj) / (bj − ρσϕi − dj),

and

    Dj(τ, ϕ) = {(bj − ρσϕi + dj)/σ²} · {(1 − e^{dj τ}) / (1 − gj e^{dj τ})},    (7.13)

    Cj(τ, ϕ) = (rd − rf) ϕiτ + (a/σ²) [ (bj − ρσϕi + dj) τ − 2 log{(1 − gj e^{dj τ}) / (1 − gj)} ],    (7.14)

    fj(x, v, t, ϕ) = exp{Cj(τ, ϕ) + Dj(τ, ϕ) v + iϕx},    (7.15)

    Pj(x, v, τ, y) = 1/2 + (1/π) ∫₀^∞ Re[ e^{−iϕy} fj(x, v, τ, ϕ) / (iϕ) ] dϕ,    (7.16)

    pj(x, v, τ, y) = (1/π) ∫₀^∞ Re[ e^{−iϕy} fj(x, v, τ, ϕ) ] dϕ.    (7.17)

The functions Pj are the cumulative distribution functions (in the variable y) of the log-spot price after time τ = T − t starting at x for some drift µ. The functions pj are the respective densities. The integration in (7.16)–(7.17) can be done with the Gauss-Legendre algorithm using 100 for ∞ and 100 abscissas. It is best to let the Gauss-Legendre algorithm compute the abscissas and weights once and reuse them as constants for all integrations. Finally:

    P+(φ) = (1 − φ)/2 + φ P1(log St, vt, τ, log K),    (7.18)

    P−(φ) = (1 − φ)/2 + φ P2(log St, vt, τ, log K).    (7.19)
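Formulas (7.12)–(7.19) are straightforward to implement. Below is a sketch in Python (not the book's XploRe); as a simplification it uses a composite Simpson rule on (0, 100] instead of the Gauss-Legendre quadrature mentioned above, and the example parameters are the ones used throughout this chapter:

```python
import cmath, math

def heston_vanilla(kappa, theta, sigma, rho, lam, rd, rf, v0, S0, K, tau, phi):
    """European vanilla FX option in Heston's model, eq. (7.12)-(7.19).
    Simpson's rule on (0, 100] replaces the Gauss-Legendre quadrature."""
    x, y = math.log(S0), math.log(K)
    a = kappa * theta
    u = (0.5, -0.5)
    b = (kappa + lam - rho * sigma, kappa + lam)

    def f(j, p):                                   # characteristic function f_j
        pi = 1j * p
        d = cmath.sqrt((rho * sigma * pi - b[j]) ** 2
                       - sigma ** 2 * (2 * u[j] * pi - p ** 2))
        g = (b[j] - rho * sigma * pi + d) / (b[j] - rho * sigma * pi - d)
        e = cmath.exp(d * tau)
        D = (b[j] - rho * sigma * pi + d) / sigma ** 2 * (1 - e) / (1 - g * e)
        C = (rd - rf) * pi * tau + a / sigma ** 2 * (
            (b[j] - rho * sigma * pi + d) * tau
            - 2 * cmath.log((1 - g * e) / (1 - g)))
        return cmath.exp(C + D * v0 + pi * x)

    def P(j, n=2000, upper=100.0):                 # eq. (7.16)
        h = upper / n
        vals = [(cmath.exp(-1j * (1e-8 + k * h) * y) * f(j, 1e-8 + k * h)
                 / (1j * (1e-8 + k * h))).real for k in range(n + 1)]
        integral = h / 3 * (vals[0] + vals[-1]
                            + 4 * sum(vals[1:-1:2]) + 2 * sum(vals[2:-1:2]))
        return 0.5 + integral / math.pi

    P1, P2 = P(0), P(1)
    P_plus = (1 - phi) / 2 + phi * P1              # eq. (7.18)
    P_minus = (1 - phi) / 2 + phi * P2             # eq. (7.19)
    return phi * (math.exp(-rf * tau) * S0 * P_plus
                  - K * math.exp(-rd * tau) * P_minus)

# the chapter's parameters: kappa = 2, theta = 4%, vol of vol = 30%, rho = -0.05
call = heston_vanilla(2.0, 0.04, 0.3, -0.05, 0.0, 0.05, 0.03, 0.04, 100.0, 100.0, 0.5, +1)
put = heston_vanilla(2.0, 0.04, 0.3, -0.05, 0.0, 0.05, 0.03, 0.04, 100.0, 100.0, 0.5, -1)
print(round(call, 4), round(put, 4))
```

Two useful sanity checks: put-call parity, h_call − h_put = S0 e^{−rf τ} − K e^{−rd τ}, holds by construction for any parameter set, and in the limit of a vanishing vol of vol with v0 = θ the price collapses to the Garman-Kohlhagen (Black-Scholes) value.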

Apart from the above closed-form solution for vanilla options, alternative approaches can be utilized. These include finite difference and finite element methods. The former must be used with care since high precision is required when inverting sparse matrices. The Crank-Nicolson, ADI (Alternating Direction Implicit), and Hopscotch schemes can be used; however, ADI is not suitable for handling nonzero correlation. Boundary conditions must also be set appropriately. For details see Kluge (2002). Finite element methods can be applied to price both vanillas and exotics, as explained for example in Apel, Winkler, and Wystup (2002).

7.3.1 Greeks

The Greeks can be evaluated by taking the appropriate derivatives or by exploiting homogeneity properties of financial markets (Reiss and Wystup, 2001). In Heston's model the spot delta and the so-called dual delta are given by:

    ∆ = ∂h(t)/∂St = φ e^{−rf τ} P+(φ)   and   ∂h(t)/∂K = −φ e^{−rd τ} P−(φ),    (7.20)

respectively. Gamma, which measures the sensitivity of delta to the underlying, has the form:

    Γ = ∂∆/∂St = (e^{−rf τ}/St) p1(log St, vt, τ, log K).    (7.21)

Theta = ∂h(t)/∂t can be computed from (7.6). The formulas for rho are the following:

    ∂h(t)/∂rd = φ K e^{−rd τ} τ P−(φ),    (7.22)

    ∂h(t)/∂rf = −φ St e^{−rf τ} τ P+(φ).    (7.23)

Note that in a foreign exchange setting there are two rhos – one is the derivative of the option price with respect to the domestic interest rate and the other is the derivative with respect to the foreign interest rate.


The notions of vega and volga usually refer to the first and second derivative with respect to volatility. In Heston's model we use them for the first and second derivative with respect to the initial variance:

    ∂h(t)/∂vt = e^{−rf τ} St ∂P1/∂vt (log St, vt, τ, log K) − K e^{−rd τ} ∂P2/∂vt (log St, vt, τ, log K),    (7.24)

    ∂²h(t)/∂vt² = e^{−rf τ} St ∂²P1/∂vt² (log St, vt, τ, log K) − K e^{−rd τ} ∂²P2/∂vt² (log St, vt, τ, log K),    (7.25)

where

    ∂Pj/∂vt (x, vt, τ, y) = (1/π) ∫₀^∞ Re[ Dj(τ, ϕ) e^{−iϕy} fj(x, vt, τ, ϕ) / (iϕ) ] dϕ,    (7.26)

    ∂²Pj/∂vt² (x, vt, τ, y) = (1/π) ∫₀^∞ Re[ Dj²(τ, ϕ) e^{−iϕy} fj(x, vt, τ, ϕ) / (iϕ) ] dϕ.    (7.27)

7.4 Calibration

Calibration of stochastic volatility models can be done in two conceptually different ways. One way is to look at a time series of historical data. Estimation methods such as Generalized, Simulated, and Efficient Methods of Moments (respectively GMM, SMM, and EMM), as well as Bayesian MCMC, have been extensively applied; for a review see Chernov and Ghysels (2000). In the Heston model we could also try to fit empirical distributions of returns to the marginal distributions specified in (7.4) via a minimization scheme. Unfortunately, all historical approaches have one common flaw: they do not allow for estimation of the market price of volatility risk λ(t, v, S). However, multiple studies find evidence of a nonzero volatility risk premium, see e.g. Bates (1996). This implies in turn that one needs some extra input to make the transition from the physical to the risk-neutral world. Observing only the underlying spot price and estimating stochastic volatility models with this information will not deliver correct derivative security prices.

This leads us to the second estimation approach: instead of using the spot data, we calibrate the model to derivative prices.


We follow the latter approach and take the smile of the current vanilla options market as a given starting point. As a preliminary step, we have to retrieve the strikes, since the smile in foreign exchange markets is specified as a function of the deltas. Comparing the Black-Scholes type formulas (in the foreign exchange market setting we have to use the Garman and Kohlhagen (1983) specification) for delta and the option premium yields the relation for the strikes Ki. From a computational point of view, this stage requires only an inversion of the cumulative normal distribution.

Next, we fit the five parameters: initial variance v0, volatility of variance σ, long-run variance θ, mean reversion κ, and correlation ρ for a fixed time to maturity and a given vector of market Black-Scholes implied volatilities {σ̂i}_{i=1}^n for a given set of delta pillars {∆i}_{i=1}^n. Since we are calibrating the model to derivative prices, we do not need to worry about estimating the market price of volatility risk as it is already embedded in the market smile. Furthermore, it can easily be verified that the value function (7.12) satisfies:

    HestonVanilla(κ, θ, σ, ρ, λ, rd, rf, v0, S0, K, τ, φ) = HestonVanilla(κ + λ, κθ/(κ + λ), σ, ρ, 0, rd, rf, v0, S0, K, τ, φ),    (7.28)

which means that we can set λ = 0 by default and just determine the remaining five parameters.

After fitting the parameters, we compute the option prices in Heston's model using (7.12) and retrieve the corresponding Black-Scholes model implied volatilities {σi}_{i=1}^n via a standard bisection method (a Newton-Raphson method could be used as well). The next step is to define an objective function, which we choose to be the Sum of Squared Errors (SSE):

    SSE(κ, θ, σ, ρ, v0) = Σ_{i=1}^{n} {σ̂i − σi(κ, θ, σ, ρ, v0)}².    (7.29)

We compare volatilities (rather than prices) because they are all of comparable magnitude. In addition, one could introduce weights for all the summands to favor at-the-money (ATM) or out-of-the-money (OTM) fits. Finally, we minimize over this objective function using a simplex search routine to find the optimal set of parameters.
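The strike-retrieval step can be sketched as follows. We assume here the plain spot-delta convention Δ = φ e^{−rf τ} N(φ d1); other FX delta conventions (forward or premium-adjusted deltas) would modify the formula:

```python
import math
from statistics import NormalDist

N = NormalDist()

def strike_from_delta(delta, S, rd, rf, sigma, tau, phi=+1):
    """Strike for a given Garman-Kohlhagen spot delta, by inverting N(.).
    Assumes the unadjusted spot-delta convention."""
    d1 = phi * N.inv_cdf(phi * delta * math.exp(rf * tau))
    return S * math.exp(-d1 * sigma * math.sqrt(tau)
                        + (rd - rf + 0.5 * sigma ** 2) * tau)

def spot_delta(K, S, rd, rf, sigma, tau, phi=+1):
    d1 = (math.log(S / K) + (rd - rf + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    return phi * math.exp(-rf * tau) * N.cdf(phi * d1)

# round trip: the 25-delta call strike maps back to a delta of 0.25
K25 = strike_from_delta(0.25, S=0.84, rd=0.05, rf=0.03, sigma=0.11, tau=0.5)
print(round(spot_delta(K25, 0.84, 0.05, 0.03, 0.11, 0.5), 4))   # → 0.25
```

With the strikes Ki in hand, the SSE objective (7.29) is a plain sum of squared volatility differences that any derivative-free minimizer (such as a simplex search) can handle.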


Figure 7.3: Left panel: Effect of changing the volatility of variance (vol of vol) on the shape of the smile. For the red dashed "smile" with triangles σ = 0.01, and for the blue dotted smile with squares σ = 0.6. Right panel: Effect of changing the initial variance on the shape of the smile. For the red dashed smile with triangles v0 = 0.008 and for the blue dotted smile with squares v0 = 0.012. STFhes03.xpl

7.4.1 Qualitative Effects of Changing Parameters

Before calibrating the model to market data we will show how changing the input parameters affects the shape of the fitted smile curve. This analysis will help in reducing the dimensionality of the problem. In all plots of this subsection the solid black curve with circles is the smile obtained for v0 = 0.01, σ = 0.25, κ = 1.5, θ = 0.015, and ρ = 0.05.

First, let us take a look at the volatility of variance (vol of vol), see the left panel of Figure 7.3. Clearly, setting σ equal to zero produces a deterministic process for the variance, and hence a volatility which does not admit any smile: the resulting fit is a constant curve. On the other hand, increasing the volatility of variance increases the convexity of the fit. The initial variance has a different impact on the smile. Changing v0 allows adjustments in the height of the smile curve rather than the shape. This is illustrated in the right panel of Figure 7.3.

Figure 7.4: Left panel: Effect of changing the long-run variance on the shape of the smile. For the red dashed smile with triangles θ = 0.01, and for the blue dotted smile with squares θ = 0.02. Right panel: Effect of changing the mean reversion on the shape of the smile. For the red dashed smile with triangles κ = 0.01, and for the blue dotted smile with squares κ = 3. STFhes04.xpl

Effects of changing the long-run variance θ are similar to those observed when changing the initial variance, see the left panel of Figure 7.4. This requires some attention in the calibration process. It seems promising to choose the initial variance a priori and only let the long-run variance vary. In particular, a different initial variance for different maturities would be inconsistent. Changing the mean reversion κ affects the ATM part more than the extreme wings of the smile curve. The low deltas remain almost unchanged, whereas increasing the mean reversion lifts the center. This is illustrated in the right panel of Figure 7.4. Moreover, the influence of mean reversion is often compensated by a stronger volatility of variance. This suggests fixing the mean reversion parameter and only calibrating the remaining parameters.

Finally, let us look at the influence of correlation. The uncorrelated case produces a fit that looks like a symmetric smile curve centered at-the-money. However, it is not exactly symmetric. Changing ρ changes the degree of symmetry. In particular, positive correlation makes calls more expensive, negative correlation makes puts more expensive. This is illustrated in Figure 7.5. Note that for the model to yield a volatility skew, a typically observed volatility structure in equity markets, the correlation must be set to an unrealistically high absolute value.

Figure 7.5: Left panel: Effect of changing the correlation on the shape of the smile. For the red dashed smile with triangles ρ = 0, for the blue dashed smile with squares ρ = −0.15, and for the green dotted smile with rhombs ρ = 0.15. Right panel: In order for the model to yield a volatility skew, a typically observed volatility structure in equity markets, the correlation must be set to an unrealistically high absolute value (here ρ = −0.5). STFhes05.xpl

7.4.2 Calibration Results

We are now ready to calibrate Heston's model to market data. We take the EUR/USD volatility surface on July 1, 2004 and fit the parameters in Heston's model according to the calibration scheme discussed earlier. The results are shown in Figures 7.6–7.8. Note that the fit is very good for maturities between three and eighteen months. Unfortunately, Heston's model does not perform satisfactorily for short maturities and extremely long maturities. For the former we recommend using a jump-diffusion model (Cont and Tankov, 2003; Martinez and Senge, 2002), for the latter a suitable long-term FX model (Andreasen, 1997).

Figure 7.6: The market smile (solid black line with circles) on July 1, 2004 and the fit obtained with Heston's model (dotted red line with squares) for τ = 1 week (top left), 1 month (top right), 2 months (bottom left), and 3 months (bottom right). STFhes06.xpl

Figure 7.7: The market smile (solid black line with circles) on July 1, 2004 and the ﬁt obtained with Heston’s model (dotted red line with squares) for τ = 6 months (top left), 1 year (top right), 18 months (bottom left), and 2 years (bottom right). STFhes06.xpl

Figure 7.8: Term structure of the vol of vol (left panel ) and correlation (right panel ) in the Heston model calibrated to the EUR/USD surface as observed on July 1, 2004. STFhes06.xpl

Performing calibrations for different time slices of the volatility matrix produces different values of the parameters. This suggests a term structure of some parameters in Heston's model. Therefore, we need to generalize the Cox-Ingersoll-Ross process to the case of time-dependent parameters, i.e. we consider the process:

    dv_t = κ(t){θ(t) − v_t} dt + σ(t)√v_t dW_t    (7.30)

for some nonnegative deterministic parameter functions σ(t), κ(t), and θ(t). The formula for the mean turns out to be:

    E(v_t) = g(t) = v_0 e^{−K(t)} + ∫_0^t κ(s)θ(s) e^{K(s)−K(t)} ds,    (7.31)

with K(t) = ∫_0^t κ(s) ds. The result for the second moment is:

    E(v_t²) = v_0² e^{−2K(t)} + ∫_0^t {2κ(s)θ(s) + σ²(s)} g(s) e^{2K(s)−2K(t)} ds,    (7.32)


and hence for the variance (after some algebra):

    Var(v_t) = ∫_0^t σ²(s) g(s) e^{2K(s)−2K(t)} ds.    (7.33)
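Formulas (7.31)–(7.33) can be sanity-checked numerically. The sketch below (pure Python, hypothetical parameter values) specializes them to constant parameters, where K(t) = κt and the classical CIR moments are available in closed form:

```python
import math

# Hypothetical constant parameters (any positive values would do)
kappa, theta, sigma, v0, t = 1.5, 0.015, 0.25, 0.01, 2.0

# With constant parameters, (7.31) reduces to the classical CIR mean
# g(t) = theta + (v0 - theta) * exp(-kappa * t).
def g(s):
    return theta + (v0 - theta) * math.exp(-kappa * s)

n = 20000
h = t / n
# Mean via the integral in (7.31), with K(t) = kappa * t (midpoint rule)
mean_quad = (v0 * math.exp(-kappa * t)
             + sum(kappa * theta * math.exp(kappa * ((i + 0.5) * h - t)) * h
                   for i in range(n)))
# Variance via (7.33)
var_quad = sum(sigma**2 * g((i + 0.5) * h)
               * math.exp(2 * kappa * ((i + 0.5) * h - t)) * h
               for i in range(n))

# Known closed-form CIR moments for constant parameters
mean_exact = theta + (v0 - theta) * math.exp(-kappa * t)
var_exact = (sigma**2 * v0 / kappa
             * (math.exp(-kappa * t) - math.exp(-2 * kappa * t))
             + sigma**2 * theta / (2 * kappa) * (1 - math.exp(-kappa * t))**2)
```

The quadrature values agree with the closed forms to many digits, which is a quick way to catch sign errors in the exponents of (7.31)–(7.33).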

The formula for the variance allows us to compute forward volatilities of variance explicitly. Assuming known values σ_{T1} and σ_{T2} for some times 0 < T1 < T2, we want to determine the forward volatility of variance σ_{T1,T2} which matches the corresponding variances, i.e.

    σ²_{T2} ∫_0^{T2} g(s) e^{2κ(s−T2)} ds = σ²_{T1} ∫_0^{T1} g(s) e^{2κ(s−T2)} ds + σ²_{T1,T2} ∫_{T1}^{T2} g(s) e^{2κ(s−T2)} ds.    (7.34)

The resulting forward volatility of variance is thus:

    σ²_{T1,T2} = {σ²_{T2} H(T2) − σ²_{T1} H(T1)} / {H(T2) − H(T1)},    (7.35)

where

    H(t) = ∫_0^t g(s) e^{2κs} ds = θ/(2κ) e^{2κt} + (1/κ)(v_0 − θ) e^{κt} + (1/κ)(θ/2 − v_0).    (7.36)
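A small sketch of this computation, using hypothetical values for σ_{T1} and σ_{T2} and checking the closed form (7.36) for H(t) against direct quadrature:

```python
import math

# Hypothetical constant parameters and spot volatilities of variance
kappa, theta, v0 = 1.5, 0.015, 0.01
T1, T2, s1, s2 = 0.5, 1.5, 0.25, 0.30     # sigma_T1, sigma_T2 assumed known

def g(s):                                  # E(v_s) for constant parameters
    return theta + (v0 - theta) * math.exp(-kappa * s)

def H(t):                                  # closed form (7.36)
    return (theta / (2 * kappa) * math.exp(2 * kappa * t)
            + (v0 - theta) / kappa * math.exp(kappa * t)
            + (theta / 2 - v0) / kappa)

# Check (7.36) against direct quadrature of int_0^t g(s) exp(2*kappa*s) ds
t, n = T2, 20000
h = t / n
H_quad = sum(g((i + 0.5) * h) * math.exp(2 * kappa * (i + 0.5) * h) * h
             for i in range(n))

# Forward volatility of variance (7.35)
fwd_var = (s2**2 * H(T2) - s1**2 * H(T1)) / (H(T2) - H(T1))
fwd_vol = math.sqrt(fwd_var)
```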

Assuming known values ρ_{T1} and ρ_{T2} for some times 0 < T1 < T2, we want to determine the forward correlation coefficient ρ_{T1,T2} to be active between times T1 and T2 such that the covariance between the Brownian motions of the variance process and the exchange rate process agrees with the given values ρ_{T1} and ρ_{T2}. This problem has a simple answer, namely:

    ρ_{T1,T2} = ρ_{T2},    T1 ≤ t ≤ T2.    (7.37)

This can be seen by writing the Heston model in the form:

    dS_t = S_t(µ dt + √v_t dW_t^{(1)}),    (7.38)
    dv_t = κ(θ − v_t) dt + ρσ√v_t dW_t^{(1)} + √(1 − ρ²) σ√v_t dW_t^{(2)},    (7.39)

for a pair of independent Brownian motions W^{(1)} and W^{(2)}. Observe that choosing the forward correlation coefficient as stated does not conflict with the computed forward volatility.


As we have seen, Heston’s model can be successfully applied to modeling the volatility smile of vanilla currency options. There are essentially three parameters to ﬁt, namely the long-run variance, which corresponds to the at-the-money level of the market smile, the vol of vol, which corresponds to the convexity of the smile (in the market often quoted as butterﬂies), and the correlation, which corresponds to the skew of the smile (in the market often quoted as risk reversals). It is this direct link of the model parameters to the market that makes the Heston model so attractive to front oﬃce users. The key application of the model is to calibrate it to vanilla options and afterward employ it for pricing exotics, like one-touch options, in either a ﬁnite diﬀerence grid or a Monte Carlo simulation (Hakala and Wystup, 2002; Wystup, 2003). Surprisingly, the results often coincide with the traders’ rule of thumb pricing method. This might also simply mean that a lot of traders are using the same model. After all, it is a matter of belief which model reﬂects the reality most suitably.

Bibliography


Andersen, L. and Andreasen, J. (2000). Jump-Diffusion Processes: Volatility Smile Fitting and Numerical Methods for Option Pricing, Review of Derivatives Research 4: 231–262.

Andreasen, J. (1997). A Gaussian Exchange Rate and Term Structure Model, Essays on Contingent Claim Pricing 97/2, PhD thesis.

Apel, T., Winkler, G., and Wystup, U. (2002). Valuation of options in Heston's stochastic volatility model using finite element methods, in J. Hakala, U. Wystup (eds.) Foreign Exchange Risk, Risk Books, London.

Bakshi, G., Cao, C. and Chen, Z. (1997). Empirical Performance of Alternative Option Pricing Models, Journal of Finance 52: 2003–2049.

Barndorff-Nielsen, O. E., Mikosch, T., and Resnick, S. (2001). Lévy Processes: Theory and Applications, Birkhäuser.

Bates, D. (1996). Jumps and Stochastic Volatility: Exchange Rate Processes Implicit in Deutsche Mark Options, Review of Financial Studies 9: 69–107.

Chernov, M. and Ghysels, E. (2000). Estimation of the Stochastic Volatility Models for the Purpose of Options Valuation, in Y. S. Abu-Mostafa, B. LeBaron, A. W. Lo, and A. S. Weigend (eds.) Computational Finance – Proceedings of the Sixth International Conference, MIT Press, Cambridge.

Cont, R. and Tankov, P. (2003). Financial Modelling with Jump Processes, Chapman & Hall/CRC.

Cox, J. C., Ingersoll, J. E. and Ross, S. A. (1985). A Theory of the Term Structure of Interest Rates, Econometrica 53: 385–407.

Derman, E. and Kani, I. (1994). Riding on a Smile, RISK 7(2): 32–39.

Dragulescu, A. A. and Yakovenko, V. M. (2002). Probability distribution of returns in the Heston model with stochastic volatility, Quantitative Finance 2: 443–453.

Dupire, B. (1994). Pricing with a Smile, RISK 7(1): 18–20.

Eberlein, E., Kallsen, J., and Kristen, J. (2003). Risk Management Based on Stochastic Volatility, Journal of Risk 5(2): 19–44.


Fengler, M. (2005). Semiparametric Modelling of Implied Volatility, Springer.

Garman, M. B. and Kohlhagen, S. W. (1983). Foreign currency option values, Journal of International Money and Finance 2: 231–237.

Hakala, J. and Wystup, U. (2002). Heston's Stochastic Volatility Model Applied to Foreign Exchange Options, in J. Hakala, U. Wystup (eds.) Foreign Exchange Risk, Risk Books, London.

Heston, S. (1993). A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options, Review of Financial Studies 6: 327–343.

Hull, J. and White, A. (1987). The Pricing of Options with Stochastic Volatilities, Journal of Finance 42: 281–300.

Kluge, T. (2002). Pricing derivatives in stochastic volatility models using the finite difference method, Diploma thesis, Chemnitz Technical University.

Martinez, M. and Senge, T. (2002). A Jump-Diffusion Model Applied to Foreign Exchange Markets, in J. Hakala, U. Wystup (eds.) Foreign Exchange Risk, Risk Books, London.

Merton, R. (1973). The Theory of Rational Option Pricing, Bell Journal of Economics and Management Science 4: 141–183.

Merton, R. (1976). Option Pricing when Underlying Stock Returns are Discontinuous, Journal of Financial Economics 3: 125–144.

Reiss, O. and Wystup, U. (2001). Computing Option Price Sensitivities Using Homogeneity, Journal of Derivatives 9(2): 41–53.

Rubinstein, M. (1994). Implied Binomial Trees, Journal of Finance 49: 771–818.

Stein, E. and Stein, J. (1991). Stock Price Distributions with Stochastic Volatility: An Analytic Approach, Review of Financial Studies 4(4): 727–752.

Tompkins, R. G. and D'Ecclesia, R. L. (2004). Unconditional Return Disturbances: A Non-Parametric Simulation Approach, Journal of Banking and Finance, to appear.

Weron, R. (2004). Computationally intensive Value at Risk calculations, in J. E. Gentle, W. Härdle, Y. Mori (eds.) Handbook of Computational Statistics, Springer.


Wystup, U. (2003). The market price of one-touch options in foreign exchange markets, Derivatives Week, 12(13).

8 FFT-based Option Pricing

Szymon Borak, Kai Detlefsen, and Wolfgang Härdle

8.1 Introduction

The Black-Scholes formula, one of the major breakthroughs of modern finance, allows for an easy and fast computation of option prices. But some of its assumptions, like constant volatility or log-normal distribution of asset prices, do not find justification in the markets. More complex models, which take into account the empirical facts, often lead to more computations, and this time burden can become a severe problem when the computation of many option prices is required, e.g. in calibration of the implied volatility surface. To overcome this problem Carr and Madan (1999) developed a fast method to compute option prices for a whole range of strikes. This method and its application are the theme of this chapter.

In Section 8.2, we briefly discuss the Merton, Heston, and Bates models, concentrating on aspects relevant for the option pricing method. In the following section, we present the method of Carr and Madan, which is based on the fast Fourier transform (FFT) and can be applied to a variety of models. We also consider briefly some further developments and give a short introduction to the FFT algorithm. In the last section, we apply the method to the three analyzed models, check the results by Monte Carlo simulations, and comment on some numerical issues.

8.2 Modern Pricing Models

The geometric Brownian motion (GBM) is the building block of modern finance. In particular, in the Black-Scholes model the underlying stock price is assumed to follow the GBM dynamics:

    dS_t = r S_t dt + σ S_t dW_t,    (8.1)

which, applying Itô's lemma, can be written as:

    S_t = S_0 exp{(r − σ²/2) t + σ W_t}.    (8.2)

The empirical facts, however, do not confirm the model assumptions. Financial returns exhibit much fatter tails than the Black-Scholes model postulates, see Chapter 1. Returns larger than six standard deviations, which are common in the data, should appear less than once in a million years if the Black-Scholes framework were accurate. Squared returns, as a measure of volatility, display positive autocorrelation over several days, which contradicts the constant volatility assumption. Non-constant volatility can be observed as well in the option markets, where "smiles" and "skews" in implied volatility occur. These properties of financial time series lead to more refined models. We introduce three such models in the following paragraphs.

8.2.1 Merton Model

If an important piece of information about the company becomes public it may cause a sudden change in the company's stock price. The information usually comes at a random time and the size of its impact on the stock price may be treated as a random variable. To cope with these observations Merton (1976) proposed a model that allows discontinuous trajectories of asset prices. The model extends (8.1) by adding jumps to the stock price dynamics:

    dS_t/S_t = r dt + σ dW_t + dZ_t,    (8.3)

where Zt is a compound Poisson process with a log-normal distribution of jump sizes. The jumps follow a (homogeneous) Poisson process Nt with intensity λ (see Chapter 14), which is independent of Wt . The log-jump sizes Yi ∼ N (µ, δ 2 ) are i.i.d random variables with mean µ and variance δ 2 , which are independent of both Nt and Wt .


The model becomes incomplete, which means that there are many possible ways to choose a risk-neutral measure such that the discounted price process is a martingale. Merton proposed to change the drift of the Wiener process and to leave the other ingredients unchanged. The asset price dynamics is then given by:

    S_t = S_0 exp( µ_M t + σ W_t + Σ_{i=1}^{N_t} Y_i ),

where µ_M = r − σ²/2 − λ{exp(µ + δ²/2) − 1}. Jump components add mass to the tails of the returns distribution. Increasing δ adds mass to both tails, while a negative/positive µ implies relatively more mass in the left/right tail. For the purpose of Section 8.4 it is necessary to introduce the characteristic function (cf) of X_t = ln(S_t/S_0):

    φ_{X_t}(z) = exp[ t{ −σ²z²/2 + iµ_M z + λ(e^{−δ²z²/2 + iµz} − 1) } ],    (8.4)

where X_t = µ_M t + σ W_t + Σ_{i=1}^{N_t} Y_i.

8.2.2 Heston Model

Another possible modification of (8.1) is to substitute the constant volatility parameter σ with a stochastic process. This leads to the so-called "stochastic volatility" models, where the price dynamics is driven by:

    dS_t/S_t = r dt + √v_t dW_t,

where v_t is another unobservable stochastic process. There are many possible ways of choosing the variance process v_t. Hull and White (1987) proposed to use geometric Brownian motion:

    dv_t/v_t = c_1 dt + c_2 dW_t.    (8.5)

However, geometric Brownian motion tends to increase exponentially, which is an undesirable property for volatility. Volatility rather exhibits mean-reverting behavior. Therefore a model based on an Ornstein-Uhlenbeck-type process:

    dv_t = κ(θ − v_t) dt + β dW_t,    (8.6)

was suggested by Stein and Stein (1991). This process, however, admits negative values of the variance v_t. These deficiencies were eliminated in a stochastic volatility model introduced by Heston (1993):

    dS_t/S_t = r dt + √v_t dW_t^{(1)},
    dv_t = κ(θ − v_t) dt + σ√v_t dW_t^{(2)},    (8.7)

where the two Brownian components W_t^{(1)} and W_t^{(2)} are correlated with rate ρ:

    Cov(dW_t^{(1)}, dW_t^{(2)}) = ρ dt,    (8.8)

for details see Chapter 7. The term √v_t in equation (8.7) simply ensures positive volatility: when the process touches the zero bound, the stochastic part becomes zero and the non-stochastic part pushes it up. Parameter κ measures the speed of mean reversion, θ is the average level of volatility, and σ is the volatility of volatility. In (8.8) the correlation ρ is typically negative, which is consistent with empirical observations (Cont, 2001). This negative dependence between returns and volatility is known in the market as the "leverage effect." The risk-neutral dynamics is given in a similar way as in the Black-Scholes model. For the logarithm of the asset price process X_t = ln(S_t/S_0) one obtains the equation:

    dX_t = (r − v_t/2) dt + √v_t dW_t^{(1)}.

The cf is given by:

    φ_{X_t}(z) = exp{κθt(κ − iρσz)/σ² + iztr + izx_0} / {cosh(γt/2) + ((κ − iρσz)/γ) sinh(γt/2)}^{2κθ/σ²} · exp{−(z² + iz)v_0 / (γ coth(γt/2) + κ − iρσz)},    (8.9)

where γ = √{σ²(z² + iz) + (κ − iρσz)²}, and x_0 and v_0 are the initial values for the log-price process and the volatility process, respectively.
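As a sketch, the cf (8.9) can be implemented as follows. The code uses an algebraically equivalent formulation (the branch-cut-safe variant popularized by Albrecher et al.) rather than the cosh/sinh form, since raising a complex number to the real power 2κθ/σ² is numerically delicate; the parameter values are hypothetical (the demonstration values used later in Section 8.4):

```python
import cmath, math

# Demonstration parameters, cf. Section 8.4
S0, r, T = 100.0, 0.02, 1.0
kappa, theta, sigma, rho, v0 = 10.0, 0.2, 0.7, -0.5, 0.2
x0 = math.log(S0)

def heston_cf(u):
    """cf of X_t = ln S_t, algebraically equivalent to (8.9) but written in a
    form (d chosen with positive real part) that avoids branch-cut problems."""
    iu = 1j * u
    beta = kappa - rho * sigma * iu
    d = cmath.sqrt(beta * beta + sigma**2 * (iu + u * u))   # this is gamma
    g = (beta - d) / (beta + d)
    e = cmath.exp(-d * T)
    C = kappa * theta / sigma**2 * ((beta - d) * T
                                    - 2 * cmath.log((1 - g * e) / (1 - g)))
    D = (beta - d) / sigma**2 * (1 - e) / (1 - g * e)
    return cmath.exp(iu * (x0 + r * T) + C + D * v0)

# Sanity checks: phi(0) = 1 and the martingale condition phi(-i) = S0 * e^{rT}
norm = heston_cf(0.0)
mart = heston_cf(-1j)
```

The two checks are useful in practice: any sign slip in (8.9) typically breaks the martingale condition immediately.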

8.2.3 Bates Model

The Merton and Heston approaches were combined by Bates (1996), who proposed a model with stochastic volatility and jumps:

    dS_t/S_t = r dt + √v_t dW_t^{(1)} + dZ_t,    (8.10)
    dv_t = κ(θ − v_t) dt + σ√v_t dW_t^{(2)},
    Cov(dW_t^{(1)}, dW_t^{(2)}) = ρ dt.

As in (8.3), Z_t is a compound Poisson process with intensity λ and log-normal distribution of jump sizes independent of W_t^{(1)} (and W_t^{(2)}). If J denotes the jump size then ln(1 + J) ∼ N(ln(1 + k̄) − δ²/2, δ²) for some k̄. Under the risk neutral probability one obtains the equation for the logarithm of the asset price:

    dX_t = (r − λk̄ − v_t/2) dt + √v_t dW_t^{(1)} + dZ̃_t,

where Z̃_t is a compound Poisson process with normal distribution of jump magnitudes. Since the jumps are independent of the diffusion part in (8.10), the characteristic function for the log-price process can be obtained as:

    φ_{X_t}(z) = φ^D_{X_t}(z) φ^J_{X_t}(z),

where:

    φ^D_{X_t}(z) = exp{κθt(κ − iρσz)/σ² + izt(r − λk̄) + izx_0} / {cosh(γt/2) + ((κ − iρσz)/γ) sinh(γt/2)}^{2κθ/σ²} · exp{−(z² + iz)v_0 / (γ coth(γt/2) + κ − iρσz)}    (8.11)

is the diffusion part cf and

    φ^J_{X_t}(z) = exp{tλ(e^{−δ²z²/2 + i(ln(1+k̄) − δ²/2)z} − 1)}    (8.12)

is the jump part cf. Note that (8.9) and (8.11) are very similar. The difference lies in the shift λk̄ (risk-neutral correction). Formula (8.12) has a structure similar to the jump part in (8.4); however, µ is substituted with ln(1 + k̄) − δ²/2.
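The product structure φ = φ^D φ^J makes the Bates cf straightforward to assemble. A sketch, with the diffusion part again in a branch-cut-safe form equivalent to (8.11); the model parameters are the calibration estimates quoted at the end of this chapter, while S0, r, and T are hypothetical:

```python
import cmath, math

# Hypothetical spot/rate/maturity; remaining values are the DAX calibration
# estimates reported in Section 8.4.
S0, r, T = 100.0, 0.02, 1.0
lam, delta, kbar = 0.13, 0.0004, -0.03
kappa, theta, sigma, rho, v0 = 4.23, 0.17, 1.39, -0.55, 0.10
x0 = math.log(S0)

def bates_cf(u):
    """cf of X_t as the product (8.11) x (8.12): a Heston-type diffusion part
    with drift r - lam*kbar, times the compound-Poisson jump part."""
    iu = 1j * u
    beta = kappa - rho * sigma * iu
    d = cmath.sqrt(beta * beta + sigma**2 * (iu + u * u))
    g = (beta - d) / (beta + d)
    e = cmath.exp(-d * T)
    C = kappa * theta / sigma**2 * ((beta - d) * T
                                    - 2 * cmath.log((1 - g * e) / (1 - g)))
    D = (beta - d) / sigma**2 * (1 - e) / (1 - g * e)
    diffusion = cmath.exp(iu * (x0 + (r - lam * kbar) * T) + C + D * v0)
    jump = cmath.exp(T * lam * (cmath.exp(-delta**2 * u * u / 2
                     + iu * (math.log(1 + kbar) - delta**2 / 2)) - 1))
    return diffusion * jump

norm = bates_cf(0.0)                 # should equal 1
mart = bates_cf(-1j)                 # martingale check: S0 * exp(r*T)
```

Because the jump compensator λk̄ sits in the diffusion drift, the martingale condition holds for the product even though neither factor is a martingale cf on its own.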

8.3 Option Pricing with FFT

In the last section, three asset price models and their characteristic functions were presented. In this section, we describe a numerical approach for pricing options which utilizes the characteristic function of the underlying instrument’s price process. The approach has been introduced by Carr and Madan (1999) and is based on the FFT. The use of the FFT is motivated by two reasons. On the one hand, the algorithm oﬀers a speed advantage. This eﬀect is even boosted by the possibility of the pricing algorithm to calculate prices for a whole range of strikes. On the other hand, the cf of the log price is known and has a simple form for many models considered in literature, while the density is often not known in closed form. The approach assumes that the cf of the log-price is given analytically. The basic idea of the method is to develop an analytic expression for the Fourier transform of the option price and to get the price by Fourier inversion. As the Fourier transform and its inversion work for square-integrable functions (see Plancherel’s theorem, e.g. in Rudin, 1991) we do not consider directly the option price but a modiﬁcation of it.


Let C_T(k) denote the price of a European call option with maturity T and strike K = exp(k):

    C_T(k) = ∫_k^∞ e^{−rT}(e^s − e^k) q_T(s) ds,

where q_T is the risk-neutral density of s_T = log S_T. The function C_T is not square-integrable because C_T(k) converges to S_0 for k → −∞. Hence, we consider a modified function:

    c_T(k) = exp(αk) C_T(k),    (8.13)

which is square-integrable for a suitable α > 0. The choice of α may depend on the model for S_t. The Fourier transform of c_T is defined by:

    ψ_T(v) = ∫_{−∞}^∞ e^{ivk} c_T(k) dk.

The expression for ψ_T can be computed directly after an interchange of integrals:

    ψ_T(v) = ∫_{−∞}^∞ e^{ivk} ∫_k^∞ e^{αk} e^{−rT}(e^s − e^k) q_T(s) ds dk
           = ∫_{−∞}^∞ e^{−rT} q_T(s) ∫_{−∞}^s (e^{αk+s} − e^{(α+1)k}) e^{ivk} dk ds
           = ∫_{−∞}^∞ e^{−rT} q_T(s) { e^{(α+1+iv)s}/(α+iv) − e^{(α+1+iv)s}/(α+1+iv) } ds
           = e^{−rT} φ_T(v − (α+1)i) / {α² + α − v² + i(2α+1)v},

where φ_T is the Fourier transform of q_T. A sufficient condition for c_T to be square-integrable is given by ψ_T(0) being finite. This is equivalent to E(S_T^{α+1}) < ∞. A value α = 0.75 fulfills this condition for the models of Section 8.2. With this choice, we follow Schoutens et al. (2003), who found in an empirical study that this value leads to stable algorithms, i.e. the prices are well replicated for many model parameters. Now, we get the desired option price in terms of ψ_T using Fourier inversion:

    C_T(k) = exp(−αk)/π ∫_0^∞ e^{−ivk} ψ(v) dv.
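The damping/inversion argument can be verified in the Black-Scholes special case, where φ_T(z) = exp{iz(ln S_0 + (r − σ²/2)T) − σ²z²T/2} and the call price is available in closed form. A sketch with hypothetical market data, inverting by a plain midpoint rule (no FFT yet):

```python
import cmath, math

# Hypothetical Black-Scholes market data
S0, K, r, sigma, T, alpha = 100.0, 100.0, 0.02, 0.25, 1.0, 0.75

def phi_bs(z):
    """cf of ln S_T in the Black-Scholes model (valid for complex z)."""
    mu = math.log(S0) + (r - 0.5 * sigma**2) * T
    return cmath.exp(1j * z * mu - 0.5 * sigma**2 * z * z * T)

def psi(v):
    """Fourier transform of the damped call price c_T."""
    return (cmath.exp(-r * T) * phi_bs(v - (alpha + 1) * 1j)
            / (alpha**2 + alpha - v * v + 1j * (2 * alpha + 1) * v))

# Inversion C_T(k) = e^{-alpha*k}/pi * int_0^inf e^{-ivk} psi(v) dv,
# truncated at eta*n and discretised with a midpoint rule
k = math.log(K)
eta, n = 0.01, 20000
price = (math.exp(-alpha * k) / math.pi
         * sum((cmath.exp(-1j * (j + 0.5) * eta * k)
                * psi((j + 0.5) * eta)).real * eta for j in range(n)))

# Closed-form Black-Scholes call for comparison
Phi = lambda y: 0.5 * (1 + math.erf(y / math.sqrt(2)))
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)
```

The quadrature price closely reproduces the closed-form value, which confirms both the expression for ψ_T and the inversion formula.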


This integral can be computed numerically as:

    C_T(k) ≈ exp(−αk)/π Σ_{j=0}^{N−1} e^{−iv_j k} ψ(v_j) η,    (8.14)

where v_j = ηj, j = 0, …, N−1, and η > 0 is the distance between the points of the integration grid. Lee (2004) has developed bounds for the sampling and truncation errors of this approximation. Formula (8.14) suggests to calculate the prices using the FFT, which is an efficient algorithm for computing the sums:

    w_u = Σ_{j=0}^{N−1} e^{−i(2π/N)ju} x_j,   for u = 0, …, N−1.    (8.15)
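Formula (8.15) is exactly the discrete Fourier transform. A sketch contrasting the naive O(N²) evaluation with a minimal recursive radix-2 Cooley-Tukey implementation:

```python
import cmath, math, random

def dft_naive(x):
    """Direct evaluation of (8.15): O(N^2) complex multiplications."""
    N = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * j * u / N) for j in range(N))
            for u in range(N)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT, O(N log N); N must be a power of 2."""
    N = len(x)
    if N == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    # X[u] = E[u mod N/2] + W^u * O[u mod N/2], with W = exp(-2*pi*i/N)
    return [even[u % (N // 2)]
            + cmath.exp(-2j * math.pi * u / N) * odd[u % (N // 2)]
            for u in range(N)]

random.seed(0)
x = [complex(random.random(), random.random()) for _ in range(64)]
w1, w2 = dft_naive(x), fft(x)
err = max(abs(a - b) for a, b in zip(w1, w2))   # of the order of machine precision
```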

To see why this is the case see Example 1 below, which illustrates the basic idea of the FFT. In general, the strikes near the spot price are of interest because such options are traded most frequently. We thus consider an equidistant spacing of the log-strikes around the log spot price s_0:

    k_u = −(1/2)Nζ + ζu + s_0,   for u = 0, …, N−1,    (8.16)

where ζ > 0 denotes the distance between the log-strikes. Substituting these log-strikes yields, for u = 0, …, N−1:

    C_T(k_u) ≈ exp(−αk_u)/π Σ_{j=0}^{N−1} e^{−iζηju} e^{i{(1/2)Nζ − s_0}v_j} ψ(v_j) η.

Now, the FFT can be applied to:

    x_j = e^{i{(1/2)Nζ − s_0}v_j} ψ(v_j),   for j = 0, …, N−1,

provided that

    ζη = 2π/N.    (8.17)

This constraint leads, however, to the following trade-oﬀ: the parameter N controls the computation time and thus is often determined by the computational setup. Hence the right hand side may be regarded as given or ﬁxed.


One would like to choose a small ζ in order to get many prices for strikes near the spot price. But the constraint then implies a big η, giving a coarse grid for integration. So we face a trade-off between accuracy and the number of interesting strikes.

Example 1 The FFT is an algorithm for computing (8.15). Its popularity stems from its remarkable speed: while a naive computation needs N² operations, the FFT requires only N log(N) steps. The algorithm was first published by Cooley and Tukey (1965) and has since been continuously refined. We illustrate the original FFT algorithm for N = 4. Writing u and j as binary numbers:

    u = 2u_1 + u_0,   j = 2j_1 + j_0,   with u_1, u_0, j_1, j_0 ∈ {0, 1},   u = (u_1, u_0),   j = (j_1, j_0),

formula (8.15) is given as:

    w_{(u_1,u_0)} = Σ_{j_0=0}^{1} Σ_{j_1=0}^{1} x_{(j_1,j_0)} W^{(2u_1+u_0)(2j_1+j_0)},

where W = e^{−2πi/N}. Because W^{(2u_1+u_0)(2j_1+j_0)} = W^{2u_0 j_1} W^{(2u_1+u_0) j_0}, we get:

    w_{(u_1,u_0)} = Σ_{j_0=0}^{1} { Σ_{j_1=0}^{1} x_{(j_1,j_0)} W^{2u_0 j_1} } W^{(2u_1+u_0) j_0}.

Now, the FFT can be described by the following three steps:

    w^1_{(u_0,j_0)} = Σ_{j_1=0}^{1} x_{(j_1,j_0)} W^{2u_0 j_1},
    w^2_{(u_0,u_1)} = Σ_{j_0=0}^{1} w^1_{(u_0,j_0)} W^{(2u_1+u_0) j_0},
    w_{(u_1,u_0)} = w^2_{(u_0,u_1)}.

While a naive computation of (8.15) requires 4² = 16 complex multiplications, the FFT needs only 4 log(4) = 8. This explains the speed of the FFT, because complex multiplications are the most time-consuming operations in this context.
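The three steps above can be transcribed literally for N = 4 and checked against the direct evaluation of (8.15):

```python
import cmath, math

N = 4
W = cmath.exp(-2j * math.pi / N)
x = [1 + 2j, -0.5 + 1j, 3 - 1j, 0.25 + 0.5j]   # arbitrary input values

def idx(j1, j0):            # j = (j1, j0) in binary, i.e. j = 2*j1 + j0
    return 2 * j1 + j0

# Step 1: w1[(u0, j0)] = sum_{j1} x[(j1, j0)] * W^(2*u0*j1)
w1 = {(u0, j0): sum(x[idx(j1, j0)] * W**(2 * u0 * j1) for j1 in (0, 1))
      for u0 in (0, 1) for j0 in (0, 1)}
# Step 2: w2[(u0, u1)] = sum_{j0} w1[(u0, j0)] * W^((2*u1 + u0)*j0)
w2 = {(u0, u1): sum(w1[(u0, j0)] * W**((2 * u1 + u0) * j0) for j0 in (0, 1))
      for u0 in (0, 1) for u1 in (0, 1)}
# Step 3: reorder, w[(u1, u0)] = w2[(u0, u1)], listed for u = 0, 1, 2, 3
w = [w2[(u0, u1)] for (u1, u0) in ((0, 0), (0, 1), (1, 0), (1, 1))]

# Direct O(N^2) evaluation of (8.15) for comparison
w_direct = [sum(x[j] * W**(j * u) for j in range(N)) for u in range(N)]
err = max(abs(a - b) for a, b in zip(w, w_direct))
```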


Figure 8.1: Implied volatility surface of DAX options on January 4, 1999. STFfft01.xpl

8.4 Applications

In this section, we apply the FFT option pricing algorithm of Section 8.3 to the models described in Section 8.2. Our aim is to demonstrate the remarkable speed of the FFT algorithm by comparing it to Monte Carlo simulations. Moreover, we present an application of the fast option pricing algorithm to the calibration of implied volatility (IV) surfaces. In Figure 8.1 we present the IV surface of DAX options on January 4, 1999 where the red points are the observed implied volatilities and the surface is ﬁtted with the Nadaraya-Watson kernel estimator. For analysis of IV surfaces consult Fengler et al. (2002) and Chapter 5. In order to apply the FFT-based algorithm we need to know the characteristic function of the risk neutral density which has been described in Section 8.2 for the Merton, Heston, and Bates models. Moreover, we have to decide on


the parameters α, N , and η of the algorithm. Schoutens et al. (2003) used α = 0.75 in a calibration procedure for the Eurostoxx 50 index data. We follow their approach and set α to this value. The computation time depends on the parameter N which we set to 512. As the number of grid points of the numerical integration is also given by N , this parameter in addition determines the accuracy of the prices. For parameter η, which determines the distance of the points of the integration grid, we use 0.25. A limited simulation study showed that the FFT algorithm is not sensitive to the choice of η, i.e. small changes in η gave similar results. In Section 8.3, we have already discussed the relation between these parameters. For comparison, we computed the option prices also by Monte Carlo simulations with 500 time steps and 5000 repetitions. Such simulations are a convenient way to check the results of the FFT-based algorithm. The calculations are based on the following parameters: the price of the underlying asset is S0 = 100, time to maturity T = 1, and the interest rate r = 0.02. For demonstration we choose the Heston model with parameters: κ = 10, θ = 0.2, σ = 0.7, ρ = −0.5, and v0 = 0.2. To make our comparison more sound we also calculate prices with the analytic formula given in Chapter 7. In the left panel of Figure 8.2 we show the prices of European call options as a function of the strike price K. As the prices obtained with the analytical formula are close to the prices obtained with the FFT-based method and the Monte Carlo prices oscillate around them, this ﬁgure conﬁrms that the pricing algorithm works correctly. The diﬀerent values of the Monte Carlo prices are mainly due to the random nature of this technique. One needs to use even more time steps and repetitions to get better results. 
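Putting the pieces together, the following sketch prices a strip of European calls in the Heston model with the demonstration parameters above (α = 0.75, N = 512, η = 0.25). The cf is implemented in a branch-cut-safe form equivalent to (8.9), the FFT is a minimal recursive radix-2 version, and the trapezoid half-weight on the first grid point is a small accuracy tweak not spelled out in the text:

```python
import cmath, math

# Demonstration parameters from the text
S0, r, T = 100.0, 0.02, 1.0
kappa, theta, sigma, rho, v0 = 10.0, 0.2, 0.7, -0.5, 0.2
alpha, N, eta = 0.75, 512, 0.25
zeta = 2 * math.pi / (N * eta)            # grid spacing from constraint (8.17)
s0 = math.log(S0)

def heston_cf(u):
    # cf of ln S_T, equivalent to (8.9) but numerically stable for large |u|
    iu = 1j * u
    beta = kappa - rho * sigma * iu
    d = cmath.sqrt(beta * beta + sigma**2 * (iu + u * u))
    g = (beta - d) / (beta + d)
    e = cmath.exp(-d * T)
    C = kappa * theta / sigma**2 * ((beta - d) * T
                                    - 2 * cmath.log((1 - g * e) / (1 - g)))
    D = (beta - d) / sigma**2 * (1 - e) / (1 - g * e)
    return cmath.exp(iu * (s0 + r * T) + C + D * v0)

def psi(v):                               # Fourier transform of the damped call
    return (cmath.exp(-r * T) * heston_cf(v - (alpha + 1) * 1j)
            / (alpha**2 + alpha - v * v + 1j * (2 * alpha + 1) * v))

def fft(x):                               # recursive radix-2 Cooley-Tukey FFT
    n = len(x)
    if n == 1:
        return x[:]
    ev, od = fft(x[0::2]), fft(x[1::2])
    return [ev[u % (n // 2)]
            + cmath.exp(-2j * math.pi * u / n) * od[u % (n // 2)]
            for u in range(n)]

# FFT input x_j (the substitution before (8.17)); the 1/2 weight on the first
# point is the trapezoid correction at the left integration boundary
x = [cmath.exp(1j * (N * zeta / 2 - s0) * j * eta) * psi(j * eta)
     * (0.5 if j == 0 else 1.0) for j in range(N)]
w = fft(x)

# Log-strike grid (8.16) and call prices (8.14); u = N/2 is the ATM strike
strikes = [math.exp(-N * zeta / 2 + zeta * u + s0) for u in range(N)]
calls = [math.exp(-alpha * (-N * zeta / 2 + zeta * u + s0)) / math.pi
         * w[u].real * eta for u in range(N)]
atm = calls[N // 2]
```

One FFT of length 512 yields all 512 strike prices at once, which is exactly the property exploited in the calibration below.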
The minor differences between the analytical and FFT-based prices come from the fact that the latter method gives exact values only on the grid (8.16); between the grid points one has to use some interpolation method to approximate the price of the option. This problem can be observed more clearly in the right panel of Figure 8.2, where percentage differences between the analytical and FFT prices are presented. In order to preserve the great speed of the algorithm we simply use linear interpolation between the grid points. This approach, however, slightly overestimates the true prices, since the call option price is a convex function of the strike. It can be clearly seen that near the grid points the prices obtained by both methods coincide, while between the grid points the FFT-based algorithm generates higher prices than the analytical solution.

194

8

(Analytical - FFT)/Analytical [%]

-0.1

MAPE

20

-0.2

15

option price

25

0

Option prices in the Heston model

FFT-based Option Pricing

80

90

100 strike price

110

80

120

90

100 strike price

110

120

Figure 8.2: Left panel: European call option prices obtained by Monte Carlo simulations (ﬁlled circles), analytical formula (crosses) and the FFT method (solid line) for the Heston model. Right panel: Percentage diﬀerences between analytical and FFT prices. STFfft02.xpl

Table 8.1: The computation times in seconds for the FFT method and the Monte Carlo method for three diﬀerent models. Monte Carlo prices were calculated for 20 diﬀerent strikes, with 500 time steps and 5000 repetitions. Model Merton Heston Bates

FFT 0.01 0.01 0.01

MC 31.25 34.41 37.53

strikes for each of the three models. The speed superiority of the FFT-based method is clearly visible. It is more than 3000 times faster than the Monte Carlo approach.


As an application of the fast pricing algorithm we consider the problem of model calibration. Given option prices observed in the market, we look for model parameters that can reproduce the data well. Normally, the market prices are given by an implied volatility surface which represents the implied volatility of option prices for different strikes and maturities. The calibration can then be done for the implied volatilities or for the option prices. This decision depends on the problem considered. As a measure of fit one can use the Mean Squared Error (MSE):

    MSE = (1/number of options) Σ_{options} (market price − model price)² / (market price)²,    (8.18)

but other choices like the Mean Absolute Percentage Error (MAPE) or Mean Absolute Error (MAE) are also possible:

    MAPE = (1/number of options) Σ_{options} |market price − model price| / market price,

    MAE = (1/number of options) Σ_{options} |market price − model price|.
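The three error measures are simple averages over the option set; a direct transcription with hypothetical prices:

```python
# Hypothetical market and model prices for four options
market = [10.0, 5.0, 2.5, 1.25]
model = [10.5, 4.8, 2.5, 1.30]

n = len(market)
# MSE as in (8.18): relative squared pricing errors, averaged
mse = sum((mkt - mdl) ** 2 / mkt ** 2 for mkt, mdl in zip(market, model)) / n
# MAPE: relative absolute errors, averaged
mape = sum(abs(mkt - mdl) / mkt for mkt, mdl in zip(market, model)) / n
# MAE: absolute errors, averaged
mae = sum(abs(mkt - mdl) for mkt, mdl in zip(market, model)) / n
```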

Moreover, the error function can be modified by weights if some regions of the implied volatility surface are more important or some observations should be ignored completely. The calibration results in a minimization problem of the error function MSE. This optimization can be carried out by different algorithms like simulated annealing, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Nelder-Mead simplex algorithm or Markov Chain Monte Carlo methods. An overview of optimization methods can be found in Čížková (2003). As minimization algorithms normally have to compute the function to be minimized many times, an efficient algorithm for the option prices is essential. The FFT-based algorithm is fairly efficient, as is shown in Table 8.1. Moreover, it returns prices for a whole range of strikes at one maturity. This is an additional advantage, because for the calibration of an implied volatility surface one needs to calculate prices for many different strikes and maturities. As an example we present the results for the Bates model calibrated to the IV surface of DAX options on January 4, 1999. The data set, which can be found in MD*Base, contains 236 option prices for 7 maturities (for each maturity there is a different number of strikes). We minimize (8.18) with respect to 8 parameters of the Bates model: λ, δ, k̄, κ, θ, σ, ρ, v0. Since the function (8.18)

Figure 8.3: The observed implied volatilities of DAX options on January 4, 1999 (circles) and the fitted Bates model (line) for 4 different maturity strings (times to maturity T = 0.2110, 0.4603, 0.7096, 0.9589; implied volatility plotted against strike). STFfft03.xpl

has many local minima, we use the simulated annealing minimization method, which offers the advantage of searching for a global minimum, combined with the Nelder-Mead simplex algorithm. As a result we obtain the following estimates for the model parameters: λ̂ = 0.13, δ̂ = 0.0004, k̂ = −0.03, κ̂ = 4.23, θ̂ = 0.17, σ̂ = 1.39, ρ̂ = −0.55, v̂0 = 0.10, with an MSE value of 0.00381. In Figure 8.3 we show the resulting fits of the Bates model to the data for 4 different maturities. The red circles are implied volatilities observed in the market at times to maturity T = 0.21, 0.46, 0.71, 0.96 and the blue lines are implied volatilities calculated from the Bates model with the calibrated parameters. In the calibration we used all data points. As the FFT-based algorithm computes prices for the whole range of strikes, the number of maturities used has the biggest impact on the speed of calibration, while the total number of observations has only a minor influence. On the one hand, the Carr-Madan algorithm offers a great speed advantage, but on the other hand its application is restricted to European options. The Monte Carlo approach, in contrast, works for a wider class of derivatives, including path-dependent options. The FFT-based approach has therefore been modified in different ways. The accuracy can be improved by using better integration rules. Carr and Madan (1999) also considered the Simpson rule, which leads – taking (8.17) into account – to the following formula for the option prices:

C_T(k_u) ≈ (exp(−αk_u)/π) Σ_{j=0}^{N−1} e^{−iζηju} e^{i(½Nζ − s_0)v_j} ψ(v_j) (η/3) {3 + (−1)^j − I(j = 0)}.

This representation again allows a direct application of the FFT to compute the sum. An alternative to the original Carr-Madan approach is to consider, instead of (8.13), other modifications of the call prices. For example, Cont and Tankov (2004) used the (modified) time value of the options:

c̃_T(k) = C_T(k) − max(1 − e^{k−rT}, 0).

Although this method also requires the existence of an α satisfying E(S_T^{α+1}) < ∞, the parameter does not enter into the final pricing formula, so it is not necessary to choose any particular value for α. This freedom of choice makes the approach easier to implement. On the other hand, option price surfaces obtained with this method often have a peak for small maturities and strikes near the spot. This special form differs from the surfaces typically observed in the market. The peak results from the non-differentiability of the intrinsic value at the spot. Hence, other modifications of the option prices have been considered that make the modified option prices differentiable (Cont and Tankov, 2004).


The calculation of option prices by the FFT-based algorithm is subject to different errors. The truncation error results from replacing the infinite upper integration limit with a finite number. The sampling error comes from evaluating the integrand only at grid points. Lee (2004) gives bounds for these errors and discusses error minimization strategies. Moreover, he presents and unifies extensions of the original Carr-Madan approach to other payoff classes. Besides the truncation and the sampling error, the implementation of the algorithm often leads to severe roundoff errors because of the complex form of the characteristic function for some models. To avoid this problem, which often occurs for long maturities, it is necessary to transform the characteristic function. In conclusion, the FFT-based option pricing method is a technique that can be used whenever time constraints are important. However, in order to avoid severe pricing errors, its application requires careful decisions regarding the choice of the parameters and the particular algorithm steps used.


Bibliography

Bates, D. (1996). Jumps and Stochastic Volatility: Exchange Rate Processes Implicit in Deutsche Mark Options, Review of Financial Studies 9: 69–107.

Carr, P. and Madan, D. (1999). Option valuation using the fast Fourier transform, Journal of Computational Finance 2: 61–73.

Čížková, L. (2003). Numerical Optimization Methods in Econometrics, in J. M. Rodriguez Poo (ed.), Computer-Aided Introduction to Econometrics, Springer-Verlag, Berlin.

Cooley, J. and Tukey, J. (1965). An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation 19: 297–301.

Cont, R. (2001). Empirical properties of asset returns: Stylized facts and statistical issues, Quantitative Finance 1: 1–14.

Cont, R. and Tankov, P. (2004). Financial Modelling With Jump Processes, Chapman & Hall/CRC.

Fengler, M., Härdle, W. and Schmidt, P. (2002). The Analysis of Implied Volatilities, in W. Härdle, T. Kleinow and G. Stahl (eds.), Applied Quantitative Finance, Springer-Verlag, Berlin.

Heston, S. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options, Review of Financial Studies 6: 327–343.

Hull, J. and White, A. (1987). The Pricing of Options on Assets with Stochastic Volatilities, Journal of Finance 42: 281–300.

Lee, R. (2004). Option pricing by transform methods: extensions, unification and error control, Journal of Computational Finance 7.

Merton, R. (1976). Option pricing when underlying stock returns are discontinuous, Journal of Financial Economics 3: 125–144.

Rudin, W. (1991). Functional Analysis, McGraw-Hill.

Schoutens, W., Simons, E. and Tistaert, J. (2003). A Perfect Calibration! Now What?, UCS Technical Report, Catholic University Leuven.


Stein, E. and Stein, J. (1991). Stock price distributions with stochastic volatility: An analytic approach, Review of Financial Studies 4: 727–752.

9 Valuation of Mortgage Backed Securities: from Optimality to Reality

Nicolas Gaussel and Julien Tamine

9.1 Introduction

Mortgage backed securities (MBS) are financial assets backed by a pool of mortgages. Investors buy a part of the pool's principal and receive the corresponding mortgage cash flows. The pooled mortgages generally offer the borrower the opportunity to prepay part or all of the remaining principal before maturity. This prepayment policy is the key point for pricing and hedging MBS. In the existing literature, two broad directions have been explored. On the one hand, the mainstream approach relies on statistical inference: the observed prepayment policy is statistically explained by the level of interest rates and some parameters of the underlying mortgage portfolio, see Schwartz and Torous (1989), Boudhouk et al. (1997). Dedicated to pricing and hedging, these approaches do not address the rationality behind the observed prepayment policy. On the other hand, authors like Nielsen and Poulsen (2002) directly address the problem of optimal prepayment within consumption-based models. This normative approach gives insights into the determinants of prepayment and relies on macro-economic variables. However, it appears to be of limited practical use due to the numerous economic variables involved. In this chapter, we propose a third way. The optimality problem is addressed from an unconstrained, financial point of view. Using arguments similar to those for the early exercise of American derivatives, we identify the optimal interest rate level for prepayment. Building on this frontier, we construct a family of


prepayment policies based on the spread between interest rates and the optimal prepayment level. The MBS are then priced as the expected value of their forthcoming discounted cash flows, which is in line with the classical methodology for flow product valuation.

Mortgage-specific characteristics

Mortgage cash flows differ from those of a classical bond since their coupon is made partly of interest and partly of principal refunding. Despite this difference in cash flow structure, the prepayment option embedded in the mortgage is very similar to the callability feature of a bond. Under classical assumptions on the bond market, an optimal time of early exercise can be exhibited, depending on the term structure and on the volatility of interest rates. Such models predict a rise in exercise probability during low interest rate periods, increasing the value of the callability option attached to the bond. These conclusions are supported by empirical evidence. Historical values of the market price of a non-callable and a callable General Electric bond with the same maturity and coupon are displayed in Figure 9.1. The 10-year US government rate is displayed on the secondary axis. During this period of sharply decreasing interest rates, the value of the non-callable bond rose much more than the value of the callable one. It may be tempting to adapt the callable-bond pricing framework to mortgages. Nevertheless, statistical results prevent such a direct extrapolation. Though most mortgagors prepay at low interest rate levels, a significant percentage choose to go on refunding their loan, no matter how attractive the refinancing conditions are. This phenomenon is often called burnout, Schwartz and Torous (1989). Conversely, some mortgagors choose to exercise their prepayment right at high interest rate levels. Such observations reveal that mortgagors are individuals whose behavior is in part determined by exogenous factors.
Economic studies suggest that the major motivations for early prepayment can be classified into three broad categories, Hayre (1999):

• structural motivations, accounting for prepayments during high interest rate periods: an unexpected inheritance; a professional move involving a house sale (if residential mortgages are considered); insurance prepayment after the mortgagor's death;


Figure 9.1: Historical prices of the 10-year US government bond (solid line, right axis) and of a non-callable (dotted line, left axis) and a callable (dashed line, left axis) General Electric bond. STFmbs01.xpl

• specific characteristics explaining burnout: lack of access to interest rate information;

• refunding motivations, in accordance with classical financial theory.

Based on these considerations, the subsequent analysis is divided into three parts. Section 2 is concerned with the determination of the optimal time for prepaying a mortgage in an ideal market where interest rates are the only decision variable. This section sheds light on the influence of interest rates on the refinancing incentive. In Section 3, the MBS price is expressed as the expected value of its future cash flows under some prepayment policy. A numerical procedure based on the resolution of a two-dimensional partial differential equation is put forward. The insights provided by our approach are illustrated through numerical examples.

9.2 Optimally Prepaid Mortgage

9.2.1 Financial Characteristics and Cash Flow Analysis

For the sake of simplicity, all cash flows are assumed to be paid continuously in time. Given a maturity T, the mortgage is defined by a fixed actuarial coupon rate c and a principal N. If the mortgagor chooses not to prepay, he refunds a continuous flow φ dt, related to the maturity T and the coupon rate c through the initial parity condition

N = ∫_0^T φ exp(−cs) ds,   (9.1)

where

φ = N c / {1 − exp(−cT)}.

As opposed to in fine bonds, where intermediary cash flows are only made of interest and the principal is fully redeemed at maturity, this flow includes payments of both interest and principal. At time t ∈ [0, T] the remaining principal K_t is contractually defined as the forthcoming cash flows discounted at the initial actuarial coupon rate:

K_t := ∫_t^T φ exp{−c(s − t)} ds = (φ/c) [1 − exp{−c(T − t)}] = N [1 − exp{−c(T − t)}] / {1 − exp(−cT)}.

Early prepayment at date t means paying K_t to the bank. In financial terms, the mortgagor owns an American prepayment option with strike K_t. The varying proportion between interest and capital in the flow φ is displayed in Figure 9.2.

9.2.2 Optimal Behavior and Price

The financial model

Given its callability feature, the mortgage is a fixed income derivative product. Its valuation must therefore be grounded in the definition of a particular


interest rate model. Since many models can be seen as good candidates, we need to specify some additional features. First, the model should be arbitrage free and consistent with the observed forward term structure. This amounts to selecting a standard Heath-Jarrow-Morton (HJM) type approach. Second, we specify an additional Markovian structure for tractability purposes. While our theoretical analysis is valid for any Markovian HJM model, all (numerical) results will be presented, for simplicity, using a one-factor enhanced Vasicek model (Priaulet, 2000); see Martellini and Priaulet (2000) for practical uses or Björk (1998) for more details on theoretical grounds. Let us quickly recap its characteristics.

Figure 9.2: The proportion between interest and principal varying in time.

Assumption A The short rate process r_t is defined via an Ornstein-Uhlenbeck process:

dr_t = λ{θ(t) − r_t} dt + σ dW_t,   (9.2)

with

θ(t) = (1/λ) ∂f(0,t)/∂t + f(0,t) + σ² {1 − exp(−2λt)}/(2λ²),

and f(0,t) being the initial instantaneous forward term curve. The parameters σ and λ control the volatility σ(τ) of forward rates of maturity τ:

σ(τ) = (σ/λ) {1 − exp(−λτ)},

and allow for a rough calibration to derivative prices. Note that in this enhanced Vasicek framework, all bond prices can be written in closed form, Martellini and Priaulet (2000).

The optimal stopping problem

The theory of optimal stopping is well known, Pham (2003). It is widely used in mathematical finance for the valuation of American contracts, Musiela and Rutkowski (1997). In the sequel, the optimally prepaid mortgage price is explicitly calculated as the solution of an optimal stopping problem. Let τ ∈ [t, T] be the stopping time at which mortgagors choose to prepay. Cash flows are of two kinds. Up to min(τ, T), mortgagors keep on paying the continuous flow φ dt, with discounted (random) value equal to

∫_t^{min(τ,T)} φ exp(−∫_t^s r_u du) ds.

At date τ, if τ < T, the remaining capital K_τ must be paid, implying a discounted cash flow equal to

I(τ < T) exp(−∫_t^τ r_u du) K_τ.

The mortgagor will choose his prepayment time τ in order to minimize the risk-neutral expected value of these future discounted cash flows. The value of the optimally prepaid mortgage is then obtained as

V_t = inf_{τ>t} E[ ∫_t^{min(τ,T)} φ exp(−∫_t^s r_u du) ds + I(τ < T) exp(−∫_t^τ r_u du) K_τ | F_t ],   (9.3)

where F_t is the relevant filtration. Since r_t is Markovian, V_t can be expressed as a function of the current level of the state variables and reduces to V(t, r_t). The


problem in (9.3) is therefore a standard Markovian optimal stopping problem (Pham, 2003). At time t, the mortgagor's decision whether or not to prepay rests on the following arbitrage: the cost of prepaying immediately (τ = t) is equal to the current value of the remaining mortgage principal K_t. This cost has to be compared with the expected cost V(t, r_t) of going on refunding the continuous flow φ dt while keeping the option to prepay until later. Obviously, the optimal mortgagor should opt for prepayment if

V(t, r_t) ≥ K_t.   (9.4)

Conversely, within the non-prepayment region, the mortgage can be sold or bought: its price must be the solution of the standard Black-Scholes partial differential equation. The following proposition sums up these intuitions. Its proof uses the link between conditional expectations and partial differential equations known as the Feynman-Kac analysis.

PROPOSITION 9.1 Under Assumption A, V(t, r_t) is a solution of the partial differential equation

max{ ∂V(t,r)/∂t + µ(t,r) ∂V(t,r)/∂r + (1/2) σ² ∂²V(t,r)/∂r² − rV(t,r) + φ,  V(t,r) − K_t } = 0,   (9.5)

V(T, r) = 0,   (9.6)

where µ(t,r) := λ{θ(t) − r} and σ are fixed by Assumption A.

Proof: We only give a sketch for constructing a solution. The optimal stopping problem at time t is given by

V_t = inf_τ E[ ∫_t^{min(τ,T)} φ exp(−∫_t^s r_u du) ds + I(τ < T) exp(−∫_t^τ r_u du) K_τ | F_t ].   (9.7)

The Markovian property allows replacing the conditioning on F_t by a conditioning on r_t. Thus, V_t is a function of (t, r_t). If the mortgagor does not prepay


during the time interval [t, t+h], h > 0, the discounted cash flows refunded over [t, t+h] equal

∫_t^{t+h} exp(−∫_t^s r_u du) φ ds.

The value at time t+h of the remaining cash flows to be paid by the mortgagor is V(t+h, r_{t+h}). Its discounted value at time t is

exp(−∫_t^{t+h} r_u du) V(t+h, r_{t+h}).

Finally, the expected value of the cash flows to be paid for a mortgage not prepaid on the interval [t, t+h] equals

E[ ∫_t^{t+h} exp(−∫_t^s r_u du) φ ds + exp(−∫_t^{t+h} r_u du) V(t+h, r_{t+h}) | r_t ].

Not prepaying on the time interval [t, t+h] may not be optimal, so that

V(t, r_t) ≤ E[ ∫_t^{t+h} exp(−∫_t^s r_u du) φ ds + exp(−∫_t^{t+h} r_u du) V(t+h, r_{t+h}) | r_t ].

Assuming regularity conditions on V, a classical Taylor expansion yields

0 ≤ ∂V(t,r_t)/∂t + µ(t,r) ∂V(t,r_t)/∂r + (1/2) σ² ∂²V(t,r_t)/∂r² − rV(t,r_t) + φ.   (9.9)

Furthermore, using the definition (9.7), the inequality V(t, r_t) ≤ K_t is satisfied. Assuming this inequality to be strictly satisfied, the stopping time τ is defined by

τ = inf{ s ≥ t : V(s, r_s) = K_s }.

On the time interval [t, min{t+h, τ}], the non-prepayment strategy is optimal since V(s, r_s) < K_s. As a consequence:

V(t, r_t) = E[ ∫_t^{t+h} exp(−∫_t^s r_u du) φ ds + exp(−∫_t^{t+h} r_u du) V(t+h, r_{t+h}) | r_t ].


Figure 9.3: The sensitivity of the optimal prepayment frontier to the forward-rate slope: a steeper forward-rate curve leads to the dotted frontier, a less steep one to the solid frontier.

Letting h → 0 and applying Itô's lemma as previously yields

0 = ∂V(t,r_t)/∂t + µ(t,r) ∂V(t,r_t)/∂r + (1/2) σ² ∂²V(t,r_t)/∂r² − rV(t,r_t) + φ   (9.10)

as long as V(t, r_t) < K_t. Formula (9.9) combined with (9.10) implies

max{ ∂V_t/∂t + µ(t,r) ∂V_t/∂r + (1/2) σ² ∂²V_t/∂r² − rV_t + φ,  V_t − K_t } = 0.
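The short-rate model of Assumption A that underlies this computation is easy to simulate, which is useful for sanity checks; a minimal Euler sketch, assuming for illustration a flat initial forward curve f(0, t) = f0 (all parameter values here are made up):

```python
import numpy as np

def simulate_short_rate(r0, f0, lam, sigma, T, n_steps, n_paths, seed=0):
    """Euler scheme for dr = lam*(theta(t) - r) dt + sigma dW under the
    simplifying assumption of a flat initial forward curve f(0, t) = f0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0, dtype=float)
    for i in range(n_steps):
        t = i * dt
        # theta(t) for a flat forward curve: the df/dt term vanishes
        theta = f0 + sigma**2 * (1.0 - np.exp(-2.0 * lam * t)) / (2.0 * lam**2)
        r += lam * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return r

rT = simulate_short_rate(r0=0.023, f0=0.05, lam=0.3, sigma=0.008,
                         T=10.0, n_steps=500, n_paths=20000)
```

With mean reversion at speed λ, the simulated rates pull from the initial 2.3% toward the 5% level of the (assumed) forward curve.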


Figure 9.4: The sensitivity of the optimal prepayment frontier to interest-rate volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.

Discussion and visualization

In this one-dimensional framework, the prepayment condition (9.4) defines a two-dimensional no-prepayment region D = {(t, r) : V_t < K_t}. In particular, it includes the set {(t, r) : r_t ≥ c}. Optimal stopping theory provides a characterization of D, Pham (2003). In fact, there exists an optimal, time-dependent stopping frontier r_t^opt such that D = {(t, r) : r_t > r_t^opt}. The price V_t and the optimal frontier r_t^opt are jointly determined: this is a so-called free boundary problem. It can only be calculated via a standard


Figure 9.5: The sensitivity of the time value of the embedded option to interest-rate volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.

finite difference approach, Wilmott (2000). An example is displayed in Figure 9.3. Interestingly enough, the optimal frontier depends heavily on the time to maturity and may lie far away from the mortgage coupon c. Both its shape and its level r_t^opt strongly depend on market conditions. Figure 9.3 illustrates the positive impact of the slope of the forward curve on the slope of the optimal frontier. The influence of implied market volatility on the optimal prepayment frontier is displayed in Figure 9.4. As expected, the more randomness σ there is around future rates, the stronger the incentive for mortgagors to delay their prepayment. In the language of derivatives, the time value of the embedded option increases, see Figure 9.5. All these effects are summed up in one key indicator: the duration of the optimally prepaid mortgage. Defined as the sensitivity of the price to variations of interest rates, this indicator has two interesting interpretations. From an actuarial point of view, it represents the average expected maturity of the future discounted cash flows. From a hedging point of view, the duration may be interpreted as the "delta" of the mortgage with respect to interest rates.
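A compact explicit scheme for the variational inequality (9.5) can serve as a sketch: step the continuation PDE backward from V(T, r) = 0, cap V at K_t, and read the frontier off the grid. The flat initial forward curve, the upwind treatment of the drift, and all grid settings are illustrative assumptions, not the chapter's actual implementation:

```python
import numpy as np

def optimal_mortgage_fd(c=0.05, T=15.0, lam=0.3, sigma=0.008, f0=0.05,
                        r_min=-0.05, r_max=0.25, nr=61, dt=0.005):
    """Explicit finite-difference scheme for the variational inequality (9.5),
    assuming a flat initial forward curve f(0, t) = f0 (illustrative)."""
    r = np.linspace(r_min, r_max, nr)
    dr = r[1] - r[0]
    phi = c / (1.0 - np.exp(-c * T))                # refunding flow for N = 1
    nt = int(round(T / dt))
    V = np.zeros(nr)                                # terminal condition V(T, r) = 0
    frontier = []
    for n in range(nt, 0, -1):
        t = (n - 1) * dt
        K = (1.0 - np.exp(-c * (T - t))) / (1.0 - np.exp(-c * T))
        theta = f0 + sigma**2 * (1.0 - np.exp(-2.0 * lam * t)) / (2.0 * lam**2)
        a = lam * (theta - r[1:-1])                 # drift, discretized upwind
        Vr = np.where(a >= 0.0,
                      (V[2:] - V[1:-1]) / dr,
                      (V[1:-1] - V[:-2]) / dr)
        Vrr = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dr**2
        cont = V[1:-1] + dt * (0.5 * sigma**2 * Vrr + a * Vr
                               - r[1:-1] * V[1:-1] + phi)
        V[1:-1] = np.minimum(cont, K)               # the mortgagor minimizes cost
        V[0], V[-1] = V[1], V[-2]                   # crude Neumann boundaries
        hit = r[np.isclose(V, K)]                   # where prepaying is optimal
        frontier.append(hit.max() if hit.size else np.nan)
    return r, V, frontier[::-1]                     # frontier[0] is the t = 0 level

r_grid, V0, frontier = optimal_mortgage_fd()
```

The min with K implements the obstacle: wherever continuing would cost more than the remaining principal, the value is capped and the point belongs to the prepayment region, whose upper edge approximates r_t^opt.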


Figure 9.6: The sensitivity of the duration to interest-rate volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.

If the interest rate is deep inside the continuation region, the expected time before prepayment is large and the duration increases. As displayed in Figure 9.6, the higher the volatility, the higher the duration. The preceding discussion indicates that the optimally prepaid mortgage can be understood as a standard interest rate derivative, allowing one to get an asymmetric exposure to future interest rate shifts.

9.3 Valuation of Mortgage Backed Securities

As confirmed by empirical evidence, mortgagors do not prepay optimally, Hayre (1999). Nielsen and Poulsen (2002) provide important insights into the constraints and information asymmetries faced by mortgagors. Although bound by these constraints, individuals aim at minimizing their expected future cash flows. Thus, it is natural to root their prepayment policy in the


optimal one. Let d_t := r_t^opt − r_t be the distance between the interest rate and the optimal prepayment frontier. The optimal policy leads to a 100% prepayment of the mortgage if d_t > 0 and a 0% prepayment otherwise: it can thus be seen as a Heaviside function of d_t. When the determinants of mortgagors' behavior cannot be observed, this behavior can be modelled as a noisy version of the optimal one. It is thus natural to look for the effective prepayment policy in the form of a characteristic distribution function of d_t, which introduces dispersion around the optimal frontier.

9.3.1 Generic Framework

A pool of mortgages with similar financial characteristics is now considered. This homogeneity assumption is in accordance with market practice: for ease of monitoring and investors' analysis, mortgages with the same coupon rate and the same maturity are chosen for pooling. Without loss of generality, the MBS can be assimilated to a single loan with coupon c and maturity T, issued with principal N normalized to 1. Let F_t be the proportion of unprepaid shares at date t. In the optimal approach, the prepayment policy follows an "all or nothing" strategy, F_t being worth 0 or 1. When practical policies are involved, F_t is a positive process decreasing in time from 1 to 0. One can look for

F_t := exp(−Π_t),  F_0 = 1,

where, in probabilistic terms, Π_t is the hazard process associated with the refunding dynamics. The size of the underlying mortgage pool gives incentives to model Π_t as an absolutely continuous process. In mathematical terms, this amounts to assuming the existence of an intensity process π_t such that dΠ_t = π_t dt, or equivalently

F_t = exp(−∫_0^t π_u du).   (9.11)
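Equation (9.11) translates directly into code; a small sketch with a constant, purely illustrative intensity:

```python
import numpy as np

def unprepaid_fraction(pi, dt):
    """Proportion of unprepaid shares F_t = exp(-int_0^t pi_u du), eq. (9.11),
    approximated on a time grid with intensity values pi and step dt."""
    Pi = np.concatenate(([0.0], np.cumsum(pi) * dt))   # hazard process Pi_t
    return np.exp(-Pi)

# constant intensity of 6% per year over 10 years (a PSA-style plateau)
dt = 0.01
pi = np.full(1000, 0.06)
F = unprepaid_fraction(pi, dt)
```

F starts at 1 and decreases monotonically, as required of the outstanding fraction.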

In this framework, the main point lies in the functional form of the refunding intensity π_t. As will be made precise in the next subsection, π_t must be seen as


a function of d_t rather than directly of r_t. The valuation consists in discounting a continuous sequence of cash flows. Given the prepayment policy π_t, the MBS cash flows during [t, T] can be divided into two parts. Firstly,

∫_t^T exp(−∫_t^s r_u du) F_s φ ds

is the discounted value of the continuous flows φ refunded on the outstanding MBS principal F_s. Secondly,

∫_t^T exp(−∫_t^s r_u du) π_s F_s K(s) ds

is the discounted value of the principal prepaid at time s. The MBS value equals the risk-neutral expectation of these cash flows:

P(t, r_t, F_t) = E[ ∫_t^T exp(−∫_t^s r_u du) {F_s φ + π_s F_s K(s)} ds ].   (9.12)

Because π_t is chosen as a function of d_t, the explicit computation of P involves the knowledge of r_t^opt. As opposed to the classical approach, a simple Monte Carlo technique cannot do the job. P can be characterized as the solution of a standard two-dimensional partial differential equation. In our one-dimensional framework, this means:

PROPOSITION 9.2 Under Assumption A, the MBS price P(t, r_t, F_t) solves the partial differential equation

∂P(t,r,F)/∂t + µ(t,r) ∂P(t,r,F)/∂r − π(t,r) F ∂P(t,r,F)/∂F + (1/2) σ² ∂²P(t,r,F)/∂r² + F {φ + π(t,r) K(t)} − rP(t,r,F) = 0,   (9.13)

P(T, r, F) = 0,

where µ(t,r) := λ{θ(t) − r} and σ are fixed by Assumption A, and π_t has to be properly determined.

9.3.2 A Parametric Specification of the Prepayment Rate

We now come to a particular specification of π_t. For simplicity, we choose an ad hoc parametric form for π in order to analyze its main sensitivities. In accordance with stylized facts on prepayment, the prepayment rate π_t is split into two distinct components, π_t = π_t^S + π_t^R, where π_t^S represents the structural component of prepayment and π_t^R, as a function of d_t, accounts for both the refunding decision and burnout.

Structural prepayment

Structural prepayment can involve many different reasons for prepaying, including:

• professional changes,
• natural disasters followed by insurance prepayment,
• death or default of the mortgagor, also followed by insurance prepayment.

Such prepayment characteristics appear to be stationary in time, Hayre (1999). Their average effect can be captured reasonably well by a deterministic model. The Public Securities Association (PSA) recommends the use of a piecewise linear structural prepayment rate:

π_t^S = k {a t I(0 ≤ t ≤ 30 months) + b I(t > 30 months)}.   (9.14)

This piecewise linear specification takes into account the influence of the age of the mortgage on prepayment. According to the PSA, the mean annualized values of a and b are 2% and 6%, respectively. This implies that the prepayment rate starts at 0% at the issuance date of the mortgage, grows by 0.2% per month during the first 30 months, and equals 6% afterwards. This curve is accepted by market practice as the benchmark structural prepayment rate, see Figure 9.7. It is known as the 100% PSA curve. The parameter k sets the desired translation level of this benchmark curve. The PSA regularly publishes statistics on the level of k according to the geographical region of mortgage issuance in the US.
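The PSA ramp (9.14) can be sketched as follows (here the ramp slope a is expressed as the 0.2% per-month increase, so that the curve reaches the 6% plateau at 30 months):

```python
def psa_structural_rate(t_months, k=1.0):
    """Structural prepayment intensity pi_S under the PSA convention (9.14):
    linear ramp to a 6% annualized rate over the first 30 months, flat after;
    k = 1 corresponds to the 100% PSA curve."""
    a = 0.002  # 0.2% increase per month during the ramp (annualized rate)
    b = 0.06   # 6% annualized plateau
    if t_months <= 30:
        return k * a * t_months
    return k * b

ramp_end = psa_structural_rate(30)            # end of the 30-month ramp
plateau = psa_structural_rate(120)            # seasoned pool
fast_pool = psa_structural_rate(120, k=2.0)   # a "200% PSA" pool
```

Scaling k up or down reproduces the translated benchmark curves published by the PSA.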


Figure 9.7: The 100% PSA curve.

Refinancing prepayment

The refinancing prepayment rate has to account for both the effect of interest rates and individual characteristics such as burnout. Refinancing incentives linked to the interest rate level can be captured through the optimal prepayment framework of Section 2. This framework implies a 1-to-0 rule for the MBS principal evolution, depending on the optimal short-term interest rate level for prepaying, r_t^opt. As soon as d_t > 0, if the mortgagors were optimal, the whole MBS principal would be prepaid. In order to reflect the effect of individual characteristics causing dispersion around the optimal level d_t = 0, we introduce the standard Weibull cumulative distribution function:

π_t^R = π̄ [1 − exp{−(d_t/d)^α}].   (9.15)

We do not claim that this parametric form is better than others found in the literature. Its main advantage comes from the easy determination of parameters


Figure 9.8: Prepayment policy.

thanks to an analytic inversion of its quantile function. In fact, as suggested by Figure 9.8, the determination of quantiles ensures that this parametric specification can easily be interpreted. Parameter d is a scale parameter. In this form, being far into the prepayment zone means that d_t/d is large, so that π_t^R ≈ π̄. Parameter π̄ directly accounts for the magnitude of the burnout effect since it governs the instantaneous fraction of mortgagors who choose not to prepay even for very low values of r_t. More precisely, if r_t were to stay very low during a time period [0, h] and if refinancing prepayment were the only prepayment component considered, then using expression (9.11) the proportion of unprepaid shares at date h would be equal to F_h = exp(−π̄h). This proportion is the burnout rate over the time horizon h. Parameter α controls the speed at which prepayment is made, linking the PSA regime to the burnout regime.
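A sketch of the refinancing rate (9.15) and of the quantile inversion mentioned above; the parameter values are hypothetical, and clipping negative distances to zero is an assumption reflecting that rate-driven prepayment only occurs inside the prepayment zone d_t > 0:

```python
import numpy as np

def refinancing_rate(d_t, pi_bar, d_scale, alpha):
    """Refinancing intensity pi_R, eq. (9.15): a Weibull cdf of the distance
    d_t to the optimal frontier, saturating at pi_bar deep in the money."""
    d_t = np.maximum(d_t, 0.0)   # assumption: no rate-driven prepayment below the frontier
    return pi_bar * (1.0 - np.exp(-(d_t / d_scale) ** alpha))

def distance_for_quantile(q, d_scale, alpha):
    """Analytic inversion of the quantile function: the distance at which
    pi_R reaches the fraction q of its saturation level pi_bar."""
    return d_scale * (-np.log(1.0 - q)) ** (1.0 / alpha)

pi_bar, d_scale, alpha = 0.8, 0.01, 2.0   # hypothetical parameter values
d_half = distance_for_quantile(0.5, d_scale, alpha)
rate_half = refinancing_rate(d_half, pi_bar, d_scale, alpha)
```

The closed-form inversion is what makes fitting d and α to quantile conditions, as done in the sensitivity analysis below, straightforward.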


Figure 9.9: The relation between MBS and prepayment policy: MBS without prepayment (solid line), mortgage with prepayment (dashed line), and MBS (dotted line).

9.3.3 Sensitivity Analysis

In order to analyze the main effects of our model, we choose the 100% PSA curve for the structural prepayment rate; the burnout is set to 20%. This means that, whatever the market conditions, 20% of the mortgagors will never prepay their loan. The time horizon h for this burnout effect is fixed at 2 years. Parameters d and α are calibrated in such a way that when d_t = 0, ten percent of the mortgagors prepay their loan over the horizon h, and half of the mortgage is prepaid if half the distance to the optimal prepayment rate is reached. Market conditions are set as of December 2003 in the EUR zone. The short rate equals 2.3% and the long-term rate is 5%. The volatility of the short rate σ is taken equal to 0.8% and λ is such that the volatility of the 10-year forward rate equals 0.5%. The facial coupon of the pool of mortgages is c = 5%, its remaining maturity is set to T = 15 years, and no prepayment has been made (F_0 = 1).


Figure 9.10: Embedded option price in MBS for a steeper forward-rate curve (dotted line) and a less steep forward-rate curve (solid line).

With such parameters, the price of the MBS is displayed in Figure 9.9 as a function of interest rates, together with the optimally prepaid mortgage (OPM) and the mortgage without callability feature (NPM). When interest rates go down, the behavior of the MBS is intermediate between those of the OPM and the NPM. The value at r_t = 0 is controlled by the burnout level. The transition part is controlled by the parameters d and α. When interest rates increase, the MBS price is higher than the NPM's due to the PSA effect. In fact, by prepaying in the optimal region, mortgagors offer the holder of the MBS a positive NPV. This appears clearly when displaying the value of the option embedded in the MBS. Recall that in the case of the optimally prepaid mortgage, this value was always positive (Figure 9.5). This is no longer the case for the MBS, as indicated in Figure 9.10. As a consequence, the sensitivity of the MBS to interest rate moves is reduced. The duration is computed in Figure 9.11. It is always less than the underlying pool duration. Its behavior resembles a smoothed version of the optimally prepaid one.


Figure 9.11: Duration of the MBS: mortgage without prepayment (solid line), mortgage with prepayment (dashed line), and MBS (dotted line).

Let us now increase the implied volatility of the underlying derivatives market. The embedded option value increases, reflecting the negative sensitivity of the MBS price to market volatilities, see Figure 9.12. In hedging terms, MBS are "vega negative": a long position in MBS is "short volatility". This is also well reflected in the variation of duration. Figure 9.13 shows how higher volatility increases the duration when the MBS is "in the money" (low interest rates) and decreases it for "out of the money" MBS. This is not surprising when one thinks of the duration as the "delta" of the MBS with respect to interest rates. The effect of volatility on the delta of a standard vanilla put option is likewise known to differ in sign depending on the moneyness of the option.
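The duration used here can be approximated numerically as the negative, relative central difference of the price with respect to the rate level. A small sketch, with a plain annual-coupon bond standing in for the MBS pricing function (the 5% coupon and 15-year maturity mirror the pool above, but the pricing function itself is our stand-in, not the chapter's model):

```python
def effective_duration(price, r, dr=1e-4):
    # duration as the "delta" of the price with respect to the rate level,
    # via a central difference: D = -(P(r+dr) - P(r-dr)) / (2 * dr * P(r))
    return -(price(r + dr) - price(r - dr)) / (2 * dr * price(r))

def bond_price(r, coupon=0.05, maturity=15):
    # plain annual-coupon bullet bond under a flat yield r -- a simple
    # stand-in for the chapter's MBS pricing function
    disc = [(1 + r) ** -t for t in range(1, maturity + 1)]
    return coupon * sum(disc) + disc[-1]

# at a 5% flat yield the 5%-coupon bond trades at par and has
# a duration of about 10.4 years
print(round(effective_duration(bond_price, 0.05), 2))  # 10.38
```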


Figure 9.12: The sensitivity of the MBS price to interest-rates volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.


Figure 9.13: The sensitivity of the MBS duration to interest-rates volatility: volatilities of the 1-year and 10-year bonds are 90 bps and 37 bps (solid line) and 135 bps and 55 bps (dotted line), respectively.

Bibliography


Björk, T. (1998). Arbitrage Theory in Continuous Time, Oxford University Press.

Boudoukh, J., Richardson, M., Stanton, R., and Whitelaw, R. (1997). Pricing Mortgage-Backed Securities in a Multifactor Interest Rate Environment: A Multivariate Density Estimation Approach, Review of Financial Studies 10: 405–446.

Hayre, L. (1999). Guide to Mortgage-Backed Securities, Technical report, Salomon Smith Barney Fixed Income Research.

Longstaff, F. A. and Schwartz, E. S. (2001). Valuing American Options by Simulation: A Simple Least-Squares Approach, Review of Financial Studies 14: 113–147.

Martellini, L. and Priaulet, P. (2000). Fixed Income Securities: Dynamic Methods for Interest Rate Risk Pricing and Hedging, Wiley.

Musiela, M. and Rutkowski, M. (1997). Martingale Methods in Financial Modelling, Springer.

Nielsen, S. and Poulsen, R. (2002). A Two-Factor Stochastic Programming Model of Danish Mortgage-Backed Securities, Technical report, University of Copenhagen, Department of Statistics and Operations Research.

Pham, H. (2003). Contrôle Optimal Stochastique et Applications en Finance, Lecture Notes.

Schwartz, E. S. and Torous, W. N. (1989). Prepayment and the Valuation of Mortgage-Backed Securities, Journal of Finance 44: 375–392.

Schwartz, E. S. and Torous, W. N. (1993). Mortgage Prepayment and Default Decisions, Journal of the American Real Estate and Urban Economics Association 21: 431–449.

Wilmott, P. (2000). Paul Wilmott on Quantitative Finance, Wiley.

10 Predicting Bankruptcy with Support Vector Machines

Wolfgang Härdle, Rouslan Moro, and Dorothea Schäfer

The purpose of this work is to introduce one of the most promising recently developed statistical techniques – the support vector machine (SVM) – to corporate bankruptcy analysis. An SVM is implemented for analysing predictors such as financial ratios, a method of adapting it to default probability estimation is proposed, and a survey of practically applied methods is given. This work shows that support vector machines are capable of extracting useful information from financial data, although extensive data sets are required in order to fully utilize their classification power. The support vector machine is a classification method based on statistical learning theory. It has already been successfully applied to optical character recognition, early medical diagnostics, and text classification. One application where SVMs outperformed other methods is electric load prediction (EUNITE, 2001); another is optical character recognition (Vapnik, 1995). SVMs produce better classification results than parametric methods and also than neural networks, a popular and widely used nonparametric technique deemed to be among the most accurate. In contrast to neural networks, SVMs have very attractive properties: they give a single solution, characterized by the global minimum of the optimized functional, rather than the multiple solutions associated with local minima. Moreover, SVMs do not rely so heavily on heuristics, i.e. an arbitrary choice of the model, and have a more flexible structure.

10.1 Bankruptcy Analysis Methodology

Although the early works in bankruptcy analysis were published as early as the 19th century (Dev, 1974), statistical techniques were not introduced to it until the publications of Beaver (1966) and Altman (1968). Demand from financial institutions for investment risk estimation stimulated subsequent research. However, despite substantial interest, the accuracy of corporate default predictions was much lower than in the private loan sector, largely due to the small number of corporate bankruptcies. Meanwhile, the situation in bankruptcy analysis has changed dramatically. Larger data sets, with the median number of failing companies exceeding 1000, have become available. Twenty years ago the median was around 40 companies, and statistically significant inferences often could not be reached. The spread of computer technologies and advances in statistical learning techniques have allowed the identification of more complex data structures; basic methods are no longer adequate for analysing the expanded data sets. Demand for advanced methods of controlling and measuring default risks has rapidly increased in anticipation of the adoption of the New Basel Capital Accord (BCBS, 2003). The Accord emphasises the importance of risk management and encourages improvements in financial institutions' risk assessment capabilities. In order to estimate investment risks one needs to evaluate the default probability (PD) for a company. Each company is described by a set of variables (predictors) x, such as financial ratios, and its class y, which can be either y = −1 ('successful') or y = 1 ('bankrupt'). Initially, an unknown classifier function f : x → y is estimated on a training set of companies (x_i, y_i), i = 1, ..., n. The training set represents the data for companies which are known to have survived or gone bankrupt. Finally, f is applied to compute default probabilities (PD), which can be uniquely translated into a company rating.
The importance of ﬁnancial ratios for company analysis has been known for more than a century. Among the ﬁrst researchers applying ﬁnancial ratios for bankruptcy prediction were Ramser (1931), Fitzpatrick (1932) and Winakor and Smith (1935). However, it was not until the publications of Beaver (1966) and Altman (1968) and the introduction of univariate and multivariate discriminant analysis that the systematic application of statistics to bankruptcy analysis began. Altman’s linear Z-score model became the standard for a decade to come and is still widely used today due to its simplicity. However, its assumption of equal normal distributions for both failing and successful companies with the same covariance matrix has been justly criticized. This approach was further developed by Deakin (1972) and Altman et al. (1977).

Later on, the center of research shifted towards logit and probit models. The original works of Martin (1977) and Ohlson (1980) were followed by Wiginton (1980), Zavgren (1983) and Zmijewski (1984). Among the other statistical methods applied to bankruptcy analysis are the gambler's ruin model (Wilcox, 1971), option pricing theory (Merton, 1974), recursive partitioning (Frydman et al., 1985), neural networks (Tam and Kiang, 1992), and rough sets (Dimitras et al., 1999), to name a few. There are three main types of models used in bankruptcy analysis. The first type comprises structural, or parametric, models, e.g. the option pricing model, logit and probit regressions, and discriminant analysis. These assume that the relationship between the input and output parameters can be described a priori; besides their fixed structure, they are fully determined by a set of parameters, and the solution requires the estimation of these parameters on a training set. Although structural models provide a very clear interpretation of the modelled processes, they have a rigid structure and are not flexible enough to capture all the information in the data. The non-structural, or nonparametric, models (e.g. neural networks or genetic algorithms) are more flexible in describing data: they do not impose very strict limitations on the classifier function, but they usually do not provide a clear interpretation either. Between the structural and non-structural models lies the class of semi-parametric models. These models, like the RiskCalc private company rating model developed by Moody's, are based on an underlying structural model, but all or some predictors enter this structural model after a nonparametric transformation. In recent years research has shifted towards non-structural and semi-parametric models, since they are more flexible and better suited for practical purposes than purely structural ones. Statistical models for corporate default prediction are of practical importance.
For example, corporate bond ratings published regularly by rating agencies such as Moody's or S&P correspond strictly to company default probabilities that are estimated to a great extent statistically. Moody's RiskCalc model is basically a probit regression estimation of the cumulative default probability over a number of years using a linear combination of non-parametrically transformed predictors (Falkenstein, 2000). These non-linear transformations f_1, f_2, ..., f_d are estimated on univariate models. As a result, the original probit model:

E[y_{i,t} | x_{i,t}] = Φ(β_1 x_{i1,t} + β_2 x_{i2,t} + ... + β_d x_{id,t}),    (10.1)


is converted into:

E[y_{i,t} | x_{i,t}] = Φ{β_1 f_1(x_{i1,t}) + β_2 f_2(x_{i2,t}) + ... + β_d f_d(x_{id,t})},    (10.2)

where y_{i,t} is the cumulative default probability within the prediction horizon for company i at time t. Although modifications of traditional methods like probit analysis extend their applicability, it is more desirable to base our methodology on general ideas of statistical learning theory without making many restrictive assumptions. The ideal classification machine applying a classifying function f from the available set of functions F is based on the so-called expected risk minimization principle. The expected risk

R(f) = ∫ (1/2) |f(x) − y| dP(x, y),    (10.3)

is estimated under the distribution P(x, y), which is assumed to be known. This is, however, never true in practical applications, and the distribution should also be estimated from the training set (x_i, y_i), i = 1, 2, ..., n, leading to an ill-posed problem (Tikhonov and Arsenin, 1977). In most methods applied today in statistical packages this problem is solved by implementing another principle, namely the principle of empirical risk minimization, i.e. risk minimization over the training set of companies, even when the training set is not representative. The empirical risk, defined as:

R̂(f) = (1/n) Σ_{i=1}^{n} (1/2) |f(x_i) − y_i|,    (10.4)

is nothing else but an average value of loss over the training set, while the expected risk is the expected value of loss under the true probability measure. The loss for i.i.d. observations is given by:

(1/2) |f(x) − y| = 0, if classification is correct; 1, if classification is wrong.

The solutions to the problems of expected and empirical risk minimization:

f_opt = arg min_{f ∈ F} R(f),    (10.5)

f̂_n = arg min_{f ∈ F} R̂(f),    (10.6)



ˆ Figure 10.1: The minima fopt and fˆn of the expected (R) and empirical (R) risk functions generally do not coincide.

generally do not coincide (Figure 10.1), although they converge to each other as n → ∞ if F is not too large. We cannot minimize the expected risk directly since the distribution P(x, y) is unknown. However, according to statistical learning theory (Vapnik, 1995), it is possible to estimate the Vapnik-Chervonenkis (VC) bound that holds with a certain probability 1 − η:

R(f) ≤ R̂(f) + φ(h/n, ln(η)/n).    (10.7)

For a linear indicator function g(x) = sign(x⊤w + b):

φ(h/n, ln(η)/n) = √{ [h(ln(2n/h) + 1) − ln(η/4)] / n },    (10.8)

where h is the VC dimension. The VC dimension of the function set F in a d-dimensional space is h if some function f ∈ F can shatter h objects x_i ∈ R^d, i = 1, ..., h, in all 2^h possible configurations, and no set x_j ∈ R^d, j = 1, ..., q, exists where q > h that satisfies this property. For example, three points on a plane (d = 2) can be shattered by linear indicator functions in 2^h = 2^3 = 8 ways, whereas 4 points cannot be


Figure 10.2: Eight possible ways of shattering 3 points on the plane with a linear indicator function.

shattered in 2^q = 2^4 = 16 ways. Thus, the VC dimension of the set of linear indicator functions in a two-dimensional space is three, see Figure 10.2. The expression for the VC bound (10.7) is a regularized functional where the VC dimension h is a parameter controlling the complexity of the classifier function. The term φ(h/n, ln(η)/n) introduces a penalty for the excessive complexity of a classifier function. There is a trade-off between the number of classification errors on the training set and the complexity of the classifier function. If the complexity were not controlled, it would be possible to find a classifier function that makes no classification errors on the training set, no matter how low its generalization ability might be.
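The shattering argument can be verified numerically: for each of the 2^h labelings, check whether some linear indicator function realizes it. The sketch below (ours, not the book's) uses the perceptron rule as a separability test — it converges if and only if the labeling is linearly separable, with the epoch cap serving as a practical cutoff — and reproduces the counts: 8 of 8 labelings for three points, but only 14 of 16 for four points in general position.

```python
from itertools import product

def separable(points, labels, epochs=2000):
    # Perceptron rule as a separability test: it converges iff the
    # labeling is linearly separable (the epoch cap is a heuristic).
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        updated = False
        for (x1, x2), y in zip(points, labels):
            if y * (w1 * x1 + w2 * x2 + b) <= 0:   # wrong side (or on it)
                w1 += y * x1
                w2 += y * x2
                b += y
                updated = True
        if not updated:
            return True          # all points strictly separated
    return False

three = [(0, 0), (1, 0), (0, 1)]
ok3 = sum(separable(three, lab) for lab in product([-1, 1], repeat=3))
print(ok3)   # 8: all 2^3 labelings are realizable, so h >= 3

four = [(0, 0), (1, 1), (1, 0), (0, 1)]
ok4 = sum(separable(four, lab) for lab in product([-1, 1], repeat=4))
print(ok4)   # 14: the two XOR-type labelings of the square fail
```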

10.2 Importance of Risk Classification in Practice

In most countries only a small percentage of firms has been rated to date. The lack of rated firms is mainly due to two factors. First, an external rating is an extremely costly procedure. Second, until the recent past most banks decided on loans to small and medium-sized enterprises (SMEs) without asking for the client's rating or applying a rating procedure of their own to estimate the client's default risk. At best, banks based their decisions on rough scoring models; at worst, the credit decision was left entirely to the loan officer.


Table 10.1: Rating grades and risk premia. Source: Damodaran (2002) and Füser (2002).

    Rating Class (S&P)   One-year PD (%)   Risk Premium (%)
    AAA                   0.01                  0.75
    AA                    0.02 – 0.04           1.00
    A+                    0.05                  1.50
    A                     0.08                  1.80
    A-                    0.11                  2.00
    BBB                   0.15 – 0.40           2.25
    BB                    0.65 – 1.95           3.50
    B+                    3.20                  4.75
    B                     7.00                  6.50
    B-                   13.00                  8.00
    CCC                  > 13                  10.00
    CC                                         11.50
    C                                          12.70
    D                                          14.00

Since learning its own risk is costly and, until recently, the lending procedures of banks failed to set the right incentives, small and medium-sized firms shied away from rating. However, the regulations are about to change the environment for borrowing and lending decisions. With the implementation of the New Basel Capital Accord (Basel II), scheduled for the end of 2006, not only firms that issue debt securities on the market will need a rating, but also any ordinary firm that applies for a bank loan. If no external rating is available, banks have to employ an internal rating system and deduce each client's specific risk class. Moreover, Basel II puts pressure on firms and banks from two sides. First, banks have to demand risk premia in accordance with the specific borrower's default probability. Table 10.1 presents an example of how individual risk classes map into risk premia (Damodaran, 2002; Füser, 2002). For small US firms a one-year default probability of 0.11% results in a spread of 2%. Of course, the mapping used by lenders will differ with the firm type or the country in which the bank is located. In any case, however, future loan pricing has to follow the basic rule: the higher the firm's default risk, the higher the risk premium the bank has to charge.
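As a sketch, the mapping of Table 10.1 can be coded directly (premia transcribed from the table; the 3% base rate is our arbitrary illustration, not a figure from the text):

```python
# risk premia (%) by S&P grade, transcribed from Table 10.1
RISK_PREMIUM = {
    "AAA": 0.75, "AA": 1.00, "A+": 1.50, "A": 1.80, "A-": 2.00,
    "BBB": 2.25, "BB": 3.50, "B+": 4.75, "B": 6.50, "B-": 8.00,
    "CCC": 10.00, "CC": 11.50, "C": 12.70, "D": 14.00,
}

def loan_rate(base_rate, grade):
    # the basic pricing rule: the riskier the borrower, the larger the spread
    return base_rate + RISK_PREMIUM[grade]

print(loan_rate(3.0, "A-"))   # 5.0: a 0.11% one-year PD costs a 2% spread
```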


Table 10.2: Rating grades and capital requirements. Source: Damodaran (2002) and Füser (2002). The figures in the last column were estimated by the authors for a loan to an SME with a turnover of 5 million euros and a maturity of 2.5 years, using the data from column 2 and the recommendations of the Basel Committee on Banking Supervision (BCBS, 2003).

    Rating Class   One-year      Capital Requirements   Capital Requirements
    (S&P)          PD (%)        (Basel I) (%)          (Basel II) (%)
    AAA             0.01         8.00                    0.63
    AA              0.02 – 0.04  8.00                    0.93 – 1.40
    A+              0.05         8.00                    1.60
    A               0.08         8.00                    2.12
    A-              0.11         8.00                    2.55
    BBB             0.15 – 0.40  8.00                    3.05 – 5.17
    BB              0.65 – 1.95  8.00                    6.50 – 9.97
    B+              3.20         8.00                   11.90
    B               7.00         8.00                   16.70
    B-             13.00         8.00                   22.89
    CCC            > 13          8.00                   > 22.89
    CC                           8.00
    C                            8.00
    D                            8.00

Second, Basel II requires banks to hold client-specific equity buffers. The magnitudes of these buffers are determined by a risk-weight function defined by the Basel Committee and a solvability coefficient (8%). The function maps default probabilities into risk weights. Table 10.2 illustrates the change in the capital requirements per unit of a loan induced by switching from Basel I to Basel II. Apart from the basic risk determinants, such as the default probability (PD), maturity, and loss given default (LGD), the risk weights also depend on the type of the loan (retail loan, loan to an SME, mortgage, etc.) and the annual turnover. Table 10.2 refers to an SME loan and assumes that the borrower's annual turnover is 5 million EUR (BCBS, 2003). Since the lock-in of the bank's equity affects the provision costs of the loan, it is likely that these costs will be passed on directly to the individual borrower.


Basel II will affect any firm that is in need of external finance. As both the risk premium and the credit costs are determined by the default risk, firms' ratings will have a deeper economic impact on banks as well as on the firms themselves than ever before. Thus, in the wake of Basel II, the choice of the right rating method is of crucial importance. To avoid large frictions, the employed method must meet certain conditions. On the one hand, the rating procedure must keep the number of misclassifications as low as possible. On the other, it must be as simple as possible and, if employed by the borrower, also provide some guidance on how to improve its own rating. SVMs have the potential to satisfy both demands. First, the procedure is easy to implement, so that any firm could generate its own rating information. Second, the method is suitable for estimating a unique default probability for each firm. Third, the rating estimation done by an SVM is transparent and does not depend on heuristics or expert judgements. This property implies objectivity and a high degree of robustness against user changes. Moreover, an appropriately trained SVM enables the firm to detect the specific impact of each rating determinant on the overall classification. The firm could thus find out, prior to negotiations, what its drawbacks are and how to overcome them. Overall, SVMs employed in the internal rating systems of banks will improve the transparency and accuracy of the system. Both improvements may help firms and banks to adapt to the Basel II framework more easily.

10.3 Lagrangian Formulation of the SVM

Having introduced some elements of statistical learning and demonstrated the potential of SVMs for company rating, we can now give a Lagrangian formulation of an SVM for the linear classification problem and generalize this approach to a nonlinear case. In the linear case the following inequalities hold for all n points of the training set:

x_i⊤w + b ≥ 1 − ξ_i    for y_i = 1,
x_i⊤w + b ≤ −1 + ξ_i   for y_i = −1,
ξ_i ≥ 0,


Figure 10.3: The separating hyperplane x⊤w + b = 0 and the margin in a non-separable case.

which can be combined into two constraints:

y_i(x_i⊤w + b) ≥ 1 − ξ_i,    (10.9)
ξ_i ≥ 0.    (10.10)

The basic idea of SVM classification is to find a separating hyperplane corresponding to the largest possible margin between the points of different classes, see Figure 10.3. Some penalty for misclassification must also be introduced. The classification error ξ_i is related to the distance from a misclassified point x_i to the canonical hyperplane bounding its class. If ξ_i > 0, an error in separating the two sets occurs. The objective function corresponding to penalized margin maximization is formulated as:

(1/2) ||w||² + C (Σ_{i=1}^{n} ξ_i)^υ,    (10.11)


where the parameter C characterizes the generalization ability of the machine and υ ≥ 1 is a positive integer controlling the sensitivity of the machine to outliers. The conditional minimization of the objective function with constraints (10.9) and (10.10) provides the highest possible margin in the case when classification errors are inevitable due to the linearity of the separating hyperplane. Under such a formulation the problem is convex. One can show that margin maximization reduces the VC dimension. The Lagrange functional for the primal problem for υ = 1 is:

L_P = (1/2) ||w||² + C Σ_{i=1}^{n} ξ_i − Σ_{i=1}^{n} α_i {y_i(x_i⊤w + b) − 1 + ξ_i} − Σ_{i=1}^{n} µ_i ξ_i,    (10.12)

where α_i ≥ 0 and µ_i ≥ 0 are Lagrange multipliers. The primal problem is formulated as:

min_{w_k, b, ξ_i} max_{α_i} L_P.

After substituting the Karush-Kuhn-Tucker conditions (Gale et al., 1951) into the primal Lagrangian, we derive the dual Lagrangian as:

L_D = Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j x_i⊤x_j,    (10.13)

and the dual problem is posed as:

max_{α_i} L_D,

subject to:

0 ≤ α_i ≤ C,
Σ_{i=1}^{n} α_i y_i = 0.

Those points i for which the equation y_i(x_i⊤w + b) ≤ 1 holds are called support vectors. After training the support vector machine and deriving the Lagrange multipliers (they are equal to 0 for non-support vectors), one can classify a company described by the vector of parameters x using the classification rule:

g(x) = sign(x⊤w + b),    (10.14)


where w = Σ_{i=1}^{n} α_i y_i x_i and b = −(1/2)(x_{+1} + x_{−1})⊤w; here x_{+1} and x_{−1} are two support vectors belonging to different classes for which y(x⊤w + b) = 1. The value of the classification function (the score of a company) can be computed as:

f(x) = x⊤w + b.    (10.15)
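A toy end-to-end illustration in Python (the chapter's own implementation uses XploRe): rather than solving the dual QP above, this sketch minimizes the equivalent primal objective (10.11) with υ = 1, i.e. the hinge-loss form, by plain subgradient descent, and then applies the classification rule (10.14). The data are invented and linearly separable.

```python
def train_linear_svm(data, C=10.0, lr=0.01, epochs=500):
    # Subgradient descent on the primal objective (10.11) with upsilon = 1:
    #   0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i * (w.x_i + b))
    # A didactic sketch, not a production QP solver.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        gw = list(w)             # gradient of the regularizer 0.5*||w||^2
        gb = 0.0
        for (x1, x2), y in data:
            if y * (w[0] * x1 + w[1] * x2 + b) < 1:   # margin violated
                gw[0] -= C * y * x1
                gw[1] -= C * y * x2
                gb -= C * y
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

# invented, linearly separable 2D training data
data = [((2.0, 2.0), 1), ((2.5, 1.5), 1), ((0.0, 0.5), -1), ((0.5, 0.0), -1)]
w, b = train_linear_svm(data)

# classification rule (10.14): g(x) = sign(w.x + b)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for (x1, x2), _ in data]
print(preds)   # [1, 1, -1, -1]
```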

Each value of f(x) uniquely corresponds to a default probability (PD). SVMs can also be easily generalized to the nonlinear case. It is worth noting that all the training vectors appear in the dual Lagrangian formulation only as scalar products. This means that we can apply kernels to transform all the data into a high-dimensional Hilbert feature space and use linear algorithms there:

Ψ : R^d → H.    (10.16)

If a kernel function K exists such that K(x_i, x_j) = Ψ(x_i)⊤Ψ(x_j), then it can be used without knowing the transformation Ψ explicitly. A necessary and sufficient condition for a symmetric function K(x_i, x_j) to be a kernel is given by Mercer's (1909) theorem. It requires positive definiteness, i.e. for any data set x_1, ..., x_n and any real numbers λ_1, ..., λ_n the function K must satisfy:

Σ_{i=1}^{n} Σ_{j=1}^{n} λ_i λ_j K(x_i, x_j) ≥ 0.    (10.17)

Some examples of kernel functions are:

• K(x_i, x_j) = exp(−||x_i − x_j||² / 2σ²) – the isotropic Gaussian kernel;

• K(x_i, x_j) = exp{−(x_i − x_j)⊤ r^{−2} Σ^{−1} (x_i − x_j) / 2} – the stationary Gaussian kernel with an anisotropic radial basis; we will apply this kernel in our study, taking Σ equal to the variance matrix of the training set; r is a constant;

• K(x_i, x_j) = (x_i⊤x_j + 1)^P – the polynomial kernel;

• K(x_i, x_j) = tanh(k x_i⊤x_j − δ) – the hyperbolic tangent kernel.
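Mercer's condition (10.17) can be spot-checked numerically: for a valid kernel, the quadratic form is non-negative for any choice of points and real weights. A small Python sketch (ours) for the isotropic Gaussian and polynomial kernels, with arbitrary sample points and weights:

```python
import math

def gauss_kernel(x, y, sigma=1.0):
    # isotropic Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def poly_kernel(x, y, p=3):
    # polynomial kernel (x.y + 1)^p
    return (sum(a * b for a, b in zip(x, y)) + 1.0) ** p

def mercer_form(kernel, xs, lam):
    # the quadratic form of (10.17); non-negative for a valid kernel
    return sum(li * lj * kernel(xi, xj)
               for li, xi in zip(lam, xs)
               for lj, xj in zip(lam, xs))

xs = [(0.0, 0.0), (1.0, 0.5), (-0.3, 2.0), (1.5, -1.0)]
lam = [0.7, -1.2, 0.4, 2.0]           # arbitrary real weights
print(mercer_form(gauss_kernel, xs, lam) >= 0)   # True
print(mercer_form(poly_kernel, xs, lam) >= 0)    # True
```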

10.4 Description of Data

For our study we selected the largest bankrupt companies, with a capitalization of no less than $1 billion, that filed for protection against creditors under


Chapter 11 of the US Bankruptcy Code in 2001–2002, after the stock market crash of 2000. We excluded a few companies due to incomplete data, leaving us with 42 companies. They were matched with 42 surviving companies with the closest capitalizations and the same US industry classification codes, available through the Division of Corporate Finance of the Securities and Exchange Commission (SEC, 2004). Of the selected 84 companies, 28 belonged to various manufacturing industries, 20 to telecom and IT industries, 8 to energy industries, 4 to retail industries, 6 to air transportation industries, 6 to miscellaneous service industries, 6 to food production and processing industries, and 6 to construction and construction materials industries. For each company the following information was collected from the annual reports for 1998–1999, i.e. three years prior to the defaults of the bankrupt companies (SEC, 2004): (i) S – sales; (ii) COGS – cost of goods sold; (iii) EBIT – earnings before interest and taxes, in most cases equal to the operating income; (iv) Int – interest payments; (v) NI – net income (loss); (vi) Cash – cash and cash equivalents; (vii) Inv – inventories; (viii) CA – current assets; (ix) TA – total assets; (x) CL – current liabilities; (xi) STD – current maturities of the long-term debt; (xii) TD – total debt; (xiii) TL – total liabilities; (xiv) Bankr – bankruptcy (1 if a company went bankrupt, −1 otherwise). The information about the industry was summarized in the following dummy variables: (i) Indprod – manufacturing industries; (ii) Indtelc – telecom and IT industries; (iii) Indenerg – energy industries; (iv) Indret – retail industries; (v) Indair – air transportation industries; (vi) Indserv – miscellaneous service industries; (vii) Indfood – food production and processing industries; (viii) Indconst – construction and construction materials industries.
Based on these ﬁnancial indicators the following four groups of ﬁnancial ratios were constructed and used in our study: (i) proﬁt measures: EBIT/TA, NI/TA, EBIT/S; (ii) leverage ratios: EBIT/Int, TD/TA, TL/TA; (iii) liquidity ratios: QA/CL, Cash/TA, WC/TA, CA/CL and STD/TD, where QA is quick assets and WC is working capital; (iv) activity or turnover ratios: S/TA, Inv/COGS.
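Computing the four groups of ratios from the raw indicators is mechanical; a Python sketch with made-up figures (in billions of dollars) for a hypothetical company:

```python
def financial_ratios(d):
    # d maps indicator names (as in Section 10.4) to values
    QA = d["CA"] - d["Inv"]          # quick assets
    WC = d["CA"] - d["CL"]           # working capital
    return {
        # (i) profit measures
        "EBIT/TA": d["EBIT"] / d["TA"],
        "NI/TA":   d["NI"] / d["TA"],
        "EBIT/S":  d["EBIT"] / d["S"],
        # (ii) leverage ratios
        "EBIT/Int": d["EBIT"] / d["Int"],
        "TD/TA":    d["TD"] / d["TA"],
        "TL/TA":    d["TL"] / d["TA"],
        # (iii) liquidity ratios
        "QA/CL":   QA / d["CL"],
        "Cash/TA": d["Cash"] / d["TA"],
        "WC/TA":   WC / d["TA"],
        "CA/CL":   d["CA"] / d["CL"],
        "STD/TD":  d["STD"] / d["TD"],
        # (iv) activity / turnover ratios
        "S/TA":     d["S"] / d["TA"],
        "Inv/COGS": d["Inv"] / d["COGS"],
    }

# hypothetical company, figures invented for illustration
company = {"S": 5.0, "COGS": 3.5, "EBIT": 0.8, "Int": 0.15, "NI": 0.2,
           "Cash": 0.2, "Inv": 0.5, "CA": 1.7, "TA": 8.0, "CL": 1.6,
           "STD": 0.2, "TD": 2.7, "TL": 4.9}
r = financial_ratios(company)
print(round(r["TL/TA"], 4))   # 0.6125
```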

10.5 Computational Results

The most significant predictors suggested by the discriminant analysis belong to the profit and leverage ratios. To demonstrate the ability of an SVM to extract information from the data, we will choose two ratios from these groups: NI/TA from the profitability ratios and TL/TA from the leverage ratios. The SVMs,


Table 10.3: Descriptive statistics for the companies. All data except SIZE = log(TA) and the ratios are given in billions of dollars.

    Variable     Min       Max       Mean     Std. Dev.
    TA            0.367    91.072     8.122    13.602
    CA            0.051    10.324     1.657     1.887
    CL            0.000    17.209     1.599     2.562
    TL            0.115    36.437     4.880     6.537
    CASH          0.000     1.714     0.192     0.333
    INVENT        0.000     7.101     0.533     1.114
    LTD           0.000    13.128     1.826     2.516
    STD           0.000     5.015     0.198     0.641
    SALES         0.036    37.120     5.016     7.141
    COGS          0.028    26.381     3.486     4.771
    EBIT         -2.214    29.128     0.822     3.346
    INT          -0.137     0.966     0.144     0.185
    NI           -2.022     4.013     0.161     0.628
    EBIT/TA      -0.493     1.157     0.072     0.002
    NI/TA        -0.599     0.186    -0.003     0.110
    EBIT/S       -2.464    36.186     0.435     3.978
    EBIT/INT    -16.897   486.945    15.094    68.968
    TD/TA         0.000     1.123     0.338     0.236
    TL/TA         0.270     1.463     0.706     0.214
    SIZE         12.813    18.327    15.070     1.257
    QA/CL        -4.003   259.814     4.209    28.433
    CASH/TA       0.000     0.203     0.034     0.041
    WC/TA        -0.258     0.540     0.093     0.132
    CA/CL         0.041  2001.963    25.729   219.568
    STD/TD        0.000     0.874     0.082     0.129
    S/TA          0.002     5.559     1.008     0.914
    INV/COGS      0.000   252.687     3.253    27.555

besides their Lagrangian formulation, can diﬀer in two aspects: (i) their capacity that is controlled by the coeﬃcient C in (10.12) and (ii) the complexity of classiﬁer functions controlled in our case by the anisotropic radial basis in the Gaussian kernel transformation.

10.5 Computational Results

239

Triangles and squares in Figures 10.4–10.7 represent successful and failing companies from the training set, respectively. The intensity of the gray background corresponds to different score values f: the darker the area, the higher the score and the greater the probability of default. Most successful companies lying in the bright area have positive profitability and a reasonable leverage TL/TA of around 0.4, which makes economic sense. Figure 10.4 presents the classification results for an SVM using locally near-linear classifier functions (the anisotropic radial basis is 100Σ^{1/2}) with the capacity fixed at C = 1. The discriminating rule in this case can be approximated by a linear combination of predictors and is similar to that suggested by discriminant analysis, although the coefficients of the predictors may differ. If the complexity of the classifying functions increases (the radial basis goes down to 2Σ^{1/2}), as illustrated in Figure 10.5, we get a more detailed picture: the areas of successful and failing companies become localized. If the radial basis is decreased further, down to 0.5Σ^{1/2} (Figure 10.6), the SVM tries to track each observation; the complexity in this case is too high for the given data set. Figure 10.7 demonstrates the effects of a high capacity (C = 300) on the classification results. As the capacity grows, the SVM localizes only one cluster of successful companies, and the area outside this cluster is associated with approximately equally high score values. Thus, besides estimating the scores for companies, the SVM also managed to learn that there always exists a cluster of successful companies, while the cluster of bankrupt companies vanishes when the capacity is high: a company must possess certain characteristics in order to be successful, whereas failing companies can be located anywhere else. This result was obtained without using any additional knowledge beyond that contained in the training set.
The calibration of the model, i.e. the estimation of the mapping f → PD, can be illustrated by the following example (the SVM with the radial basis 2Σ^{1/2} and capacity C = 1 is applied). We can set three rating grades – safe, neutral, and risky – corresponding to the score values f < −0.0115, −0.0115 < f < 0.0115, and f > 0.0115, respectively, and calculate the total number of companies and the number of failing companies in each of the three groups. If the training set were representative of the whole population of companies, the ratio of failing to all companies in a group would give the estimated probability of default. Figure 10.8 shows the power (Lorenz) curve (Lorenz, 1905) – the cumulative default rate as a function of the percentile


Figure 10.4: Ratings of companies in two dimensions; the case of a low complexity of classifier functions, the radial basis is 100Σ^{1/2}, the capacity is fixed at C = 1. STFsvm01.xpl

of companies sorted according to their score – for the training set of companies. For the above-mentioned three rating grades we derive PD_safe = 0.24, PD_neutral = 0.50, and PD_risky = 0.76. If a sufficient number of observations is available, the model can also be calibrated for finer rating grades, such as AAA or BB, by adjusting the score values separating the groups of companies so that the estimated default probabilities within each group equal those of the corresponding rating grades. Note that we are calibrating the model on the grid determined by grad(f) = 0 or grad(P̂D) = 0, and not on the orthogonal grid as in Moody's RiskCalc model. In other words, we do not make the restrictive assumption of an independent influence of the predictors, as the latter model does. This can be important since,


Figure 10.5: Ratings of companies in two dimensions; the case of an average complexity of classifier functions, the radial basis is 2Σ^{1/2}, the capacity is fixed at C = 1. STFsvm02.xpl

for example, the same decrease in profitability will have different consequences for high- and low-leverage firms. For multidimensional classification the results cannot be easily visualized. In this case we will use the cross-validation technique to compute the percentage of correct classifications and compare it with that of discriminant analysis (DA). Note that the two most widely used methods – discriminant analysis and logit regression – choose only one predictor that is significant at the 5% level (NI/TA) when forward selection is used. Cross-validation has the following stages. One company is taken out of the sample, and the SVM is trained on the remaining companies. Then the class of the out-of-sample company is evaluated by the SVM. This procedure is repeated for all companies, and the percentage of correct classifications is calculated.
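The leave-one-out procedure just described can be sketched as follows. This is an illustrative stand-in: the two-dimensional data are hypothetical, and a simple nearest-class-mean classifier is used in place of the SVM (any classifier with a fit/predict step could be substituted).

```python
# Leave-one-out cross-validation: each company is held out once, the
# classifier is fit on the rest, and the held-out company is classified.
# Toy data; a nearest-class-mean rule stands in for the SVM.

def nearest_mean_predict(train, labels, x):
    """Assign x to the class whose training mean is closest (Euclidean)."""
    means = {}
    for cls in set(labels):
        pts = [p for p, l in zip(train, labels) if l == cls]
        means[cls] = [sum(col) / len(pts) for col in zip(*pts)]

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return min(means, key=lambda cls: dist2(means[cls], x))

def loo_accuracy(data, labels):
    """Fraction of correct classifications under leave-one-out CV."""
    correct = 0
    for k in range(len(data)):
        train = data[:k] + data[k + 1:]
        train_labels = labels[:k] + labels[k + 1:]
        if nearest_mean_predict(train, train_labels, data[k]) == labels[k]:
            correct += 1
    return correct / len(data)

# Hypothetical (NI/TA, TL/TA) pairs: class -1 = successful, 1 = failing
data = [(0.08, 0.55), (0.07, 0.50), (0.09, 0.60),
        (-0.03, 0.75), (-0.02, 0.80), (-0.04, 0.70)]
labels = [-1, -1, -1, 1, 1, 1]
print(loo_accuracy(data, labels))
```

On this cleanly separated toy sample every held-out company is classified correctly; on real ratios the reported accuracies are of course much lower (62% vs. 60% in the text).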


Figure 10.6: Ratings of companies in two dimensions; the case of an excessively high complexity of classiﬁer functions, the radial basis is 0.5Σ1/2 , the capacity is ﬁxed at C = 1. STFsvm03.xpl

The best percentage of correctly cross-validated companies (all available ratios were used as predictors) is higher for the SVM than for discriminant analysis (62% vs. 60%). However, the difference is not significant at the 5% level. This indicates that a linear function might be considered an optimal classifier for the number of observations in our data set. As for the direction vector of the separating hyperplane, it can be estimated differently by the SVM and DA without much affecting the accuracy, since the correlation of the underlying predictors is high. Cluster center locations, as estimated using cluster analysis, are presented in Table 10.4. The results of the cluster analysis indicate that the two clusters are likely to correspond to successful and failing companies. Note the substantial differences in the interest coverage ratio, NI/TA, EBIT/TA and TL/TA between the clusters.

Figure 10.7: Ratings of companies in two dimensions; the case of a high capacity (C = 300). The radial basis is fixed at 2Σ^{1/2}. STFsvm04.xpl

10.6 Conclusions

As we have shown, SVMs are capable of extracting information from real-life economic data. Moreover, they make it possible to obtain results that are not obvious at first glance. They are easily adjusted, with only a few parameters. This makes them particularly well suited as an underlying technique for the company rating and investment risk assessment methods applied by financial institutions.


Figure 10.8: Power (Lorenz) curve (Lorenz, 1905) – the cumulative default rate as a function of the percentile of companies sorted according to their score – for the training set of companies. An SVM is applied with the radial basis 2Σ^{1/2} and capacity C = 1. STFsvm05.xpl
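The grade-wise calibration described above can be sketched in a few lines. The scores and default indicators below are hypothetical; only the thresholds ±0.0115 and the grade names follow the text.

```python
# Calibration sketch: map SVM scores f to default probabilities by rating
# grade. Scores and default flags are hypothetical illustrations.

def grade(f):
    if f < -0.0115:
        return "safe"
    if f > 0.0115:
        return "risky"
    return "neutral"

def pd_by_grade(scores, defaulted):
    """Estimated PD per grade = failing companies / all companies in grade."""
    total, failed = {}, {}
    for f, d in zip(scores, defaulted):
        g = grade(f)
        total[g] = total.get(g, 0) + 1
        failed[g] = failed.get(g, 0) + d
    return {g: failed[g] / total[g] for g in total}

scores    = [-0.03, -0.02, -0.02, -0.01, 0.0, 0.005, 0.01, 0.02, 0.03, 0.04]
defaulted = [0, 0, 0, 0, 1, 0, 1, 1, 1, 0]   # 1 = company failed
print(pd_by_grade(scores, defaulted))
```

Calibrating finer grades amounts to moving the thresholds until the per-group ratios match the target PDs of the corresponding rating classes.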

SVMs also rest on very few restrictive assumptions and can reveal effects overlooked by many other methods. They have been able to produce accurate classification results in other areas and can become a method of choice for company rating. However, in order to create a practically valuable methodology, one needs to combine an SVM with an extensive data set of companies and turn to alternative formulations of SVMs better suited to processing large data sets. Overall, we have a valuable tool for company rating that can answer the requirements of the new capital regulations.


Table 10.4: Cluster centre locations. There are 19 members in class {−1} – successful companies – and 65 members in class {1} – failing companies.

Cluster      {−1}      {1}
EBIT/TA     0.263    0.015
NI/TA       0.078   −0.027
EBIT/S      0.313   −0.040
EBIT/INT   13.223    1.012
TD/TA       0.200    0.379
TL/TA       0.549    0.752
SIZE       15.104   15.059
QA/CL       1.108    1.361
CASH/TA     0.047    0.030
WC/TA       0.126    0.083
CA/CL       1.879    1.813
STD/TD      0.144    0.061
S/TA        1.178    0.959
INV/COGS    0.173    0.155


Bibliography

Altman, E. (1968). Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy, The Journal of Finance, September: 589-609.

Altman, E., Haldeman, R. and Narayanan, P. (1977). ZETA Analysis: a New Model to Identify Bankruptcy Risk of Corporations, Journal of Banking and Finance, June: 29-54.

Basel Committee on Banking Supervision (2003). The New Basel Capital Accord, third consultative paper, http://www.bis.org/bcbs/cp3full.pdf.

Beaver, W. (1966). Financial Ratios as Predictors of Failures. Empirical Research in Accounting: Selected Studies, Journal of Accounting Research, supplement to vol. 5: 71-111.

Damodaran, A. (2002). Investment Valuation, second ed., John Wiley & Sons, New York, NY.

Deakin, E. (1972). A Discriminant Analysis of Predictors of Business Failure, Journal of Accounting Research, Spring: 167-179.

Dev, S. (1974). Ratio Analysis and the Prediction of Company Failure, in Debits, Credits, Finance and Profits, ed. H.C. Edey and B.S. Yamey, Sweet and Maxwell, London: 61-74.

Dimitras, A., Slowinski, R., Susmaga, R. and Zopounidis, C. (1999). Business Failure Prediction Using Rough Sets, European Journal of Operational Research, number 114: 263-280.

EUNITE (2001). Electricity load forecast competition of the EUropean Network on Intelligent TEchnologies for Smart Adaptive Systems, http://neuron.tuke.sk/competition/.

Falkenstein, E. (2000). RiskCalc for Private Companies: Moody's Default Model, Moody's Investors Service.

Fitzpatrick, P. (2000). A Comparison of the Ratios of Successful Industrial Enterprises with Those of Failed Companies, The Accounting Publishing Company.

Frydman, H., Altman, E. and Kao, D.-L. (1985). Introducing Recursive Partitioning for Financial Classification: The Case of Financial Distress, The Journal of Finance, 40: 269-291.


Füser, K. (2002). Basel II – was muß der Mittelstand tun?, http://www.ey.com/global/download.nsf/Germany/Mittelstandsrating/$file/Mittelstandsrating.pdf.

Gale, D., Kuhn, H.W. and Tucker, A.W. (1951). Linear Programming and the Theory of Games, in Activity Analysis of Production and Allocation, ed. T.C. Koopmans, John Wiley & Sons, New York, NY: 317-329.

Härdle, W. and Simar, L. (2003). Applied Multivariate Statistical Analysis, Springer Verlag.

Lorenz, M.O. (1905). Methods for Measuring the Concentration of Wealth, Journal of American Statistical Association, 9: 209-219.

Martin, D. (1977). Early Warning of Bank Failure: A Logit Regression Approach, Journal of Banking and Finance, number 1: 249-276.

Mercer, J. (1909). Functions of Positive and Negative Type and Their Connection with the Theory of Integral Equations, Philosophical Transactions of the Royal Society of London, 209: 415-446.

Merton, R. (1974). On the Pricing of Corporate Debt: The Risk Structure of Interest Rates, The Journal of Finance, 29: 449-470.

Ohlson, J. (1980). Financial Ratios and the Probabilistic Prediction of Bankruptcy, Journal of Accounting Research, Spring: 109-131.

Ramser, J. and Foster, L. (1931). A Demonstration of Ratio Analysis. Bulletin No. 40, University of Illinois, Bureau of Business Research, Urbana, Illinois.

Division of Corporate Finance of the Securities and Exchange Commission (2004). Standard industrial classification (SIC) code list, http://www.sec.gov/info/edgar/siccodes.htm.

Securities and Exchange Commission (2004). Archive of historical documents, http://www.sec.gov/cgi-bin/srch-edgar.

Tam, K. and Kiang, M. (1992). Managerial Application of Neural Networks: the Case of Bank Failure Prediction, Management Science, 38: 926-947.

Tikhonov, A.N. and Arsenin, V.Y. (1977). Solution of Ill-posed Problems, W.H. Winston, Washington, DC.


Vapnik, V. (1995). The Nature of Statistical Learning Theory, Springer Verlag, New York, NY.

Wiginton, J. (1980). A Note on the Comparison of Logit and Discriminant Models of Consumer Credit Behaviour, Journal of Financial and Quantitative Analysis, 15: 757-770.

Wilcox, A. (1971). A Simple Theory of Financial Ratios as Predictors of Failure, Journal of Accounting Research: 389-395.

Winakor, A. and Smith, R. (1935). Changes in the Financial Structure of Unsuccessful Industrial Corporations. Bulletin No. 51, University of Illinois, Bureau of Business Research, Urbana, Illinois.

Zavgren, C. (1983). The Prediction of Corporate Failure: The State of the Art, Journal of Accounting Literature, number 2: 1-38.

Zmijewski, M. (1984). Methodological Issues Related to the Estimation of Financial Distress Prediction Models, Journal of Accounting Research, 20: 59-82.

11 Econometric and Fuzzy Modelling of Indonesian Money Demand

Noer Azam Achsani, Oliver Holtemöller, and Hizir Sofyan

Money demand is an important element of monetary policy analysis. Inflation is supposed to be a monetary phenomenon in the long run, and the empirical relation between money and prices is usually discussed in a money demand framework. The main purpose of money demand studies is to analyze whether a stable money demand function exists in a specific country, especially when a major structural change has taken place. Examples of such structural changes are the monetary union of West Germany and the former German Democratic Republic in 1990 and the introduction of the Euro in 1999. There is broad evidence that money demand has been quite stable both in Germany and in the Euro area, see for example Wolters, Teräsvirta and Lütkepohl (1998) and Holtemöller (2004a).

In this chapter, we explore the M2 money demand function for Indonesia in the period 1990:1–2002:3. This period is dominated by the Asian crisis, which started in 1997. In the aftermath of the crisis, a number of immense financial and economic problems emerged in Indonesia. The price level increased by about 16 percent in 1997 compared to the previous year. In the same period, the call money rate increased temporarily from 12.85 percent to 57.10 percent and the money stock increased by about 54 percent. Additionally, Indonesia faced a sharp decrease in real economic activity: GNP decreased by about 11 percent. Given these extraordinary economic developments, a stable money demand function may not be expected to have existed during that period.

The main contribution of this chapter is twofold. Firstly, we provide a careful analysis of money demand in Indonesia, an emerging market economy for which only very few money demand studies exist. Secondly, we apply not only the standard econometric methods but also the fuzzy Takagi-Sugeno model, which allows for locally different functional relationships, for example during the Asian crisis. This is interesting and important because the assessment of the monetary policy stance as well as monetary policy decisions depend on the relationship between money and other macroeconomic variables. Hence, a stable money demand function should be supported by various empirical methodologies.

In Section 11.1 we discuss the specification of money demand functions generally, and in Section 11.2 we estimate a money demand function and the corresponding error-correction model for Indonesia using standard regression techniques. In Section 11.3, we present the fuzzy approach and its application to money demand. Section 11.4 presents conclusions and a comparison of the two approaches.

11.1 Specification of Money Demand Functions

Major central banks stress the importance of money growth analysis and of a stable money demand function for monetary policy purposes. The Deutsche Bundesbank, for example, followed an explicit monetary targeting strategy from 1975 to 1998, and the analysis of monetary aggregates is one of the two pillars of the European Central Bank's (ECB) monetary policy strategy. Details about these central banks' monetary policy strategies, a comparison, and further references can be found in Holtemöller (2002).

The research on the existence and stability of a money demand function is motivated inter alia by the following two observations: (i) Money growth is highly correlated with inflation, see McCandless and Weber (1995) for international empirical evidence. Therefore, monetary policy makers use money growth as one indicator for future risks to price stability. The information content of monetary aggregates for future inflation assessment is based on a stable relationship between money, prices and other observable macroeconomic variables. This relationship is usually analyzed in a money demand framework. (ii) The monetary policy transmission process is still a "black box", see Mishkin (1995) and Bernanke and Gertler (1995). If we are able to specify a stable money demand function, an important element of the monetary transmission mechanism is revealed, which may help us to learn more about monetary policy transmission.

There is a huge amount of literature on money demand. The majority of the studies are concerned with industrial countries. Examples are Hafer and Jansen (1991), Miller (1991), McNown and Wallace (1992) and Mehra (1993) for the USA; Lütkepohl and Wolters (1999), Coenen and Vega (1999), Brand and Cassola (2000) and Holtemöller (2004b) for the Euro area; Arize and Shwiff (1993), Miyao (1996) and Bahmani-Oskooee (2001) for Japan; Drake and Chrystal (1994) for the UK; Haug and Lucas (1996) for Canada; Lim (1993) for Australia; and Orden and Fisher (1993) for New Zealand. There is also a growing number of studies analyzing money demand in developing and emerging countries, primarily triggered by the concern among central bankers and researchers around the world about the impact of moving toward flexible exchange rate regimes, the globalization of capital markets, ongoing financial liberalization, innovation in domestic markets, and country-specific events on the demand for money (Sriram, 1999). Examples are Hafer and Kutan (1994) and Tseng (1994) for China; Moosa (1992) for India; Arize (1994) for Singapore; and Deckle and Pradhan (1997) for ASEAN countries.

For Indonesia, a couple of studies have applied the cointegration and error-correction framework to money demand. Price and Insukindro (1994) use quarterly data from the period 1969:1 to 1987:4. Their results are based on different methods of testing for cointegration. The two-step Engle and Granger (1987) procedure delivers weak evidence for one cointegration relationship, while the Johansen likelihood ratio statistic supports up to two cointegrating vectors. In contrast, Deckle and Pradhan (1997), who use annual data, do not find any cointegrating relationship that can be interpreted as a money demand function.

The starting point of empirical money demand analysis is the choice of variables to be included in the money demand function. It is common practice to assume that the desired level of nominal money demand depends on the price level, a transaction (or scaling) variable, and a vector of opportunity costs (e.g., Goldfeld and Sichel, 1990; Ericsson, 1999):

M*/P = f(Y, R_1, R_2, ...),   (11.1)

where M* is nominal money demand, P is the price level, Y is real income (the transaction variable), and the R_i are the elements of the vector of opportunity costs, which possibly also includes the inflation rate. A money demand function of this type is not only the result of traditional money demand theories but also of modern micro-founded dynamic stochastic general equilibrium models (Walsh, 1998). An empirical standard specification of the money demand function is the partial adjustment model (PAM). Goldfeld and Sichel (1990) show that a desired level of real money holdings MR_t* = M_t*/P_t:

ln MR_t* = φ_0 + φ_1 ln Y_t + φ_2 R_t + φ_3 π_t,   (11.2)


where R_t represents one or more interest rates and π_t = ln(P_t/P_{t−1}) is the inflation rate, and an adjustment cost function:

C = α_1 (ln M_t* − ln M_t)² + α_2 {(ln M_t − ln M_{t−1}) + δ (ln P_t − ln P_{t−1})}²   (11.3)

yield the following reduced form:

ln MR_t = µφ_0 + µφ_1 ln Y_t + µφ_2 R_t + (1 − µ) ln MR_{t−1} + γπ_t,   (11.4)

where:

µ = α_1/(α_1 + α_2)   and   γ = µφ_3 + (1 − µ)(δ − 1).   (11.5)

The parameter δ controls whether nominal money (δ = 0) or real money (δ = −1) adjusts. Intermediate cases are also possible. Notice that the coefficient on the inflation rate depends on the value of φ_3 and on the parameters of the adjustment cost function. The imposition of price homogeneity, that is, restricting the price level coefficient in a nominal money demand function to one, is rationalized by economic theory, and Goldfeld and Sichel (1990) propose that an empirical rejection of a unit price level coefficient be interpreted as an indicator of misspecification. The reduced form can also be augmented by lagged independent and further lagged dependent variables in order to allow for a more general adjustment process. Rearranging (11.4) yields:

∆ln MR_t = µφ_0 + µφ_1 ∆ln Y_t + µφ_1 ln Y_{t−1} + µφ_2 ∆R_t + µφ_2 R_{t−1} − µ ln MR_{t−1} + γ∆π_t + γπ_{t−1}
         = µφ_0 − µ (ln MR_{t−1} − φ_1 ln Y_{t−1} − φ_2 R_{t−1} − (γ/µ) π_{t−1})
           + µφ_1 ∆ln Y_t + µφ_2 ∆R_t + γ∆π_t.   (11.6)

Accordingly, the PAM can also be represented by an error-correction model like (11.6).
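That the reduced form (11.4) and the error-correction representation (11.6) describe the same dynamics can be checked numerically; all parameter and data values below are arbitrary illustrations.

```python
# Numerical check that the PAM reduced form (11.4) and its error-correction
# representation (11.6) give the same change in real balances.

mu, phi0, phi1, phi2, gamma = 0.4, 0.1, 1.0, -0.5, 0.2   # illustrative values
mr_lag, y_lag, r_lag, pi_lag = 2.0, 3.0, 0.08, 0.05      # lagged values
y, r, pi = 3.1, 0.07, 0.06                               # current values

# (11.4): ln MR_t = mu*phi0 + mu*phi1*ln Y_t + mu*phi2*R_t
#                   + (1 - mu)*ln MR_{t-1} + gamma*pi_t
mr = mu * phi0 + mu * phi1 * y + mu * phi2 * r + (1 - mu) * mr_lag + gamma * pi
d_mr_reduced = mr - mr_lag

# (11.6): Delta ln MR_t = mu*phi0
#   - mu*(ln MR_{t-1} - phi1*ln Y_{t-1} - phi2*R_{t-1} - (gamma/mu)*pi_{t-1})
#   + mu*phi1*Delta ln Y_t + mu*phi2*Delta R_t + gamma*Delta pi_t
d_mr_ecm = (mu * phi0
            - mu * (mr_lag - phi1 * y_lag - phi2 * r_lag
                    - (gamma / mu) * pi_lag)
            + mu * phi1 * (y - y_lag) + mu * phi2 * (r - r_lag)
            + gamma * (pi - pi_lag))

print(abs(d_mr_reduced - d_mr_ecm) < 1e-12)
```

The two expressions agree term by term after substituting X_t = ∆X_t + X_{t−1} for Y, R, and π.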

11.2 The Econometric Approach to Money Demand

11.2.1 Econometric Estimation of Money Demand Functions

Since the seminal works of Nelson and Plosser (1982), who have shown that relevant macroeconomic variables exhibit stochastic trends and are only stationary after differencing, and of Engle and Granger (1987), who introduced the concept of cointegration, the (vector) error correction model, (V)ECM, is the dominant econometric framework for money demand analysis. If a certain set of conditions about the number of cointegration relations and exogeneity properties is met, the following single-equation error correction model (SE-ECM) can be used to estimate money demand functions:

∆ln MR_t = c_t + α (ln MR_{t−1} − β_2 ln Y_{t−1} − β_3 R_{t−1} − β_4 π_{t−1})
         + Σ_{i=1}^{k} γ_{1i} ∆ln MR_{t−i} + Σ_{i=0}^{k} γ_{2i} ∆ln Y_{t−i}
         + Σ_{i=0}^{k} γ_{3i} ∆R_{t−i} + Σ_{i=0}^{k} γ_{4i} ∆π_{t−i},   (11.7)

where the expression in parentheses following α is the error correction term.

It can immediately be seen that (11.6) is a special case of the error correction model (11.7). In other words, the PAM corresponds to an SE-ECM with certain parameter restrictions. The SE-ECM can be interpreted as a partial adjustment model with β_2 as the long-run income elasticity of money demand, β_3 as the long-run interest rate semi-elasticity of money demand, and less restrictive short-run dynamics. The coefficient β_4, however, cannot be interpreted directly.

In practice, the number of cointegration relations and the exogeneity of certain variables cannot be considered as known. Therefore, the VECM is often applied. In this framework, all variables are assumed to be endogenous a priori, and the imposition of a certain cointegration rank can be justified by statistical tests. The standard VECM is obtained from a vector autoregressive (VAR) model:

x_t = µ_t + Σ_{i=1}^{k} A_i x_{t−i} + u_t,   (11.8)

where x_t is an (n × 1)-dimensional vector of endogenous variables, µ_t contains deterministic terms like a constant and a time trend, the A_i are (n × n)-dimensional coefficient matrices, and u_t ∼ N(0, Σ_u) is a serially uncorrelated error term. Subtracting x_{t−1} and rearranging terms yields the VECM:

∆x_t = µ_t + Π x_{t−1} + Σ_{i=1}^{k−1} Γ_i ∆x_{t−i} + u_t,   (11.9)

where Π and the Γ_i are functions of the A_i. The matrix Π can be decomposed into two (n × r)-dimensional matrices α and β: Π = αβ′, where α is called the adjustment matrix, β comprises the cointegration vectors, and r is the number of linearly independent cointegration vectors (the cointegration rank). Following Engle and Granger (1987), a variable is integrated of order d, or I(d), if it has to be differenced d times to become stationary. A vector x_t is integrated of order d if the maximum order of integration of the variables in x_t is d. A vector x_t is cointegrated, or CI(d, b), if there exists a linear combination β′x_t that is integrated of a lower order (d − b) than x_t. The cointegration framework is only appropriate if the relevant variables are actually integrated. This can be tested using unit root tests. When no unit roots are found, traditional econometric methods can be applied.
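For k = 2 the mapping from the VAR (11.8) to the VECM (11.9) is Π = A_1 + A_2 − I and Γ_1 = −A_2. A small numerical check with arbitrary coefficient matrices:

```python
# VAR-to-VECM conversion sketch for k = 2 (arbitrary coefficient matrices):
# x_t = A1 x_{t-1} + A2 x_{t-2} + u_t  implies
# Delta x_t = Pi x_{t-1} + Gamma1 Delta x_{t-1} + u_t
# with Pi = A1 + A2 - I and Gamma1 = -A2.
import numpy as np

A1 = np.array([[0.5, 0.1], [0.0, 0.3]])
A2 = np.array([[0.2, 0.0], [0.1, 0.4]])

Pi = A1 + A2 - np.eye(2)
Gamma1 = -A2

# check the identity on arbitrary lagged values (intercept and error omitted)
x1 = np.array([1.0, 2.0])   # x_{t-1}
x2 = np.array([0.5, 1.5])   # x_{t-2}
x_var = A1 @ x1 + A2 @ x2                # VAR form
dx_vecm = Pi @ x1 + Gamma1 @ (x1 - x2)   # VECM form
print(np.allclose(x_var - x1, dx_vecm))  # True
```

The same algebra generalizes to arbitrary k with Π = Σ A_i − I and Γ_i = −Σ_{j>i} A_j.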

11.2.2 Modelling Indonesian Money Demand with Econometric Techniques

We use quarterly data from 1990:1 until 2002:3 for our empirical analysis. The data is not seasonally adjusted and is taken from Datastream (gross national product at 1993 prices, Y, and long-term interest rate, R) and from Bank Indonesia (money stock M2, M, and consumer price index, P). In the following, logarithms of the respective variables are indicated by small letters, and mr = ln M − ln P denotes logarithmic real balances. The data is depicted in Figure 11.1.

Figure 11.1: Time series plots of logarithms of real balances, GNP, interest rate, and CPI. STFmon01.xpl

In the first step, we analyze the stochastic properties of the variables. Table 11.1 presents the results of unit root tests for logarithmic real balances mr, logarithmic real GNP y, logarithmic price level p, and the logarithmic long-term interest rate r. Note that the log interest rate is used here, while in the previous section the level of the interest rate was used. Whether interest rates should be included in logarithms or in levels is mainly an empirical question. Because the time series graphs show that there seem to be structural breaks in real money, GNP and the price level, we allow for the possibility of a mean shift and a change in the slope of a linear trend in the augmented Dickey-Fuller test regression. This corresponds to model (c) in Perron (1989), where the critical values for this type of test are tabulated. In the unit root test for the interest rate, only a constant is considered. According to the test results, real money, real GNP and the price level are trend-stationary, that is, they do not exhibit a unit root, and the interest rate is also stationary. These results are quite stable with respect to the lag length specification. The result of trend-stationarity is also supported by visual inspection of a fitted trend and the corresponding

trend deviations, see Figure 11.2. In the case of real money, the change in the slope of the linear trend is not significant.

Table 11.1: Unit Root Tests

Variable   Deterministic terms    Lags   Test stat.   1/5/10% CV
mr         c, t, s, P89c (98:3)   2      −4.55**      −4.75 / −4.44 / −4.18
y          c, t, s, P89c (98:1)   0      −9.40***     −4.75 / −4.44 / −4.18
p          c, t, s, P89c (98:1)   2      −9.46***     −4.75 / −4.44 / −4.18
r          c, s                   2      −4.72***     −3.57 / −2.92 / −2.60

Note: Unit root test results for the variables indicated in the first column. The second column describes the deterministic terms included in the test regression: constant c, seasonal dummies s, linear trend t, and shift and impulse dummies P89c according to model (c) in Perron (1989), allowing for a change in the mean and slope of a linear trend. Break points are given in parentheses. Lags denotes the number of lags included in the test regression. Column CV contains critical values. Three (two) asterisks denote significance at the 1% (5%) level.

Now, let us denote centered seasonal dummies by si_t, a step dummy switching from zero to one in the respective quarter by ds, and an impulse dummy having value one only in the respective quarter by di. Indonesian money demand is then estimated by OLS using the reduced form equation (11.4) (t- and p-values are in round and square parentheses, respectively):

mr_t = 0.531 mr_{t−1} + 0.470 y_t − 0.127 r_t − 0.438 − 0.029 s1_t − 0.034 s2_t − 0.036 s3_t
      (6.79)           (4.87)     (−6.15)   (−0.84) (−2.11)    (−2.57)    (−2.77)
     + 0.174 di9802_t + 0.217 di9801_t + 0.112 ds9803_t + u_t
      (3.54)           (5.98)          (5.02)

T = 50 (1990:2 − 2002:3)    RESET(1) = 0.006 [0.941]
R2 = 0.987                  LM(4)    = 0.479 [0.751]
                            JB       = 0.196 [0.906]
                            ARCH(4)  = 0.970 [0.434]

Figure 11.2: Fitted trends for real money and real GNP. STFmon02.xpl STFmon03.xpl

Here JB refers to the Jarque-Bera test for non-normality, RESET is the usual test for general nonlinearity and misspecification, LM(4) denotes a Lagrange-Multiplier test for autocorrelation up to order 4, and ARCH(4) is a Lagrange-Multiplier test for autoregressive conditional heteroskedasticity up to order 4. Given these diagnostic statistics, the regression seems to be well specified. There is a mean shift in 1998:3, and the impulse dummies capture the fact that the structural change in GNP occurs two quarters before the change in real money. The inflation rate is not significant and is therefore not included in the equation.
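The construction of the dummy variables and the OLS step can be sketched as follows. The series are simulated stand-ins, not the actual Indonesian data, and the break positions in the sample are arbitrary illustrations.

```python
# Sketch of the OLS step with dummy variables: centered seasonal dummies,
# an impulse dummy (one in a single quarter) and a step dummy (one from a
# given quarter on). All series are simulated, not the actual data.
import numpy as np

rng = np.random.default_rng(0)
T = 50                                    # sample length, as in 1990:2-2002:3
quarter = (np.arange(T) % 4) + 1          # quarter index 1,2,3,4,1,...

def seasonal(q):
    """Centered seasonal dummy: 3/4 in quarter q, -1/4 in other quarters."""
    return (quarter == q).astype(float) - 0.25

impulse = np.zeros(T); impulse[30] = 1.0  # one in a single (hypothetical) quarter
step = np.zeros(T); step[32:] = 1.0       # one from a (hypothetical) break on

y_t = rng.normal(size=T).cumsum()         # simulated trending regressor
X = np.column_stack([np.ones(T), y_t, seasonal(1), seasonal(2), seasonal(3),
                     impulse, step])
true_beta = np.array([0.1, 0.5, -0.03, -0.03, -0.04, 0.2, 0.1])
mr = X @ true_beta + 0.05 * rng.normal(size=T)   # simulated dependent variable

beta, *_ = np.linalg.lstsq(X, mr, rcond=None)
print(np.round(beta, 2))                  # estimates close to true_beta
```

In the chapter, the regressors additionally include lagged real balances and the interest rate; the dummy construction is the part illustrated here.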


The implied income elasticity of money demand is 0.47/(1 − 0.53) = 1 and the interest rate elasticity is −0.13/(1 − 0.53) = −0.28. These are quite reasonable magnitudes. The estimated equation can be transformed into the following error correction representation:

∆mr_t = −0.47 (mr_{t−1} − y_{t−1} + 0.28 r_{t−1}) + 0.47 ∆y_t − 0.13 ∆r_t + deterministic terms + u_t.   (11.10)
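The implied long-run elasticities follow directly from the estimated coefficients; a minimal computation using the rounded estimates quoted above:

```python
# Long-run elasticities implied by the partial adjustment estimates:
# theta = (short-run coefficient) / (1 - coefficient of lagged real money).
b_lag, b_y, b_r = 0.53, 0.47, -0.13   # rounded estimates from the text

mu = 1 - b_lag                        # adjustment coefficient
income_elasticity = b_y / mu
interest_elasticity = b_r / mu
print(round(mu, 2), round(income_elasticity, 2), round(interest_elasticity, 2))
```

This reproduces the adjustment coefficient of 0.47, the unit income elasticity, and the interest rate elasticity of −0.28 stated in the text.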

Stability tests for the real money demand equation (11.10) are depicted in Figure 11.3. The CUSUM of squares test indicates some instability at the time of the Asian crisis, and the coefficients of lagged real money and GNP seem to change slightly after the crisis. A possibility to allow for a change in these coefficients from 1998 on is to introduce two additional right-hand-side variables: lagged real money multiplied by the step dummy ds9803 and GNP multiplied by ds9803. Initially, we also included a corresponding term for the interest rate. Its coefficient is negative (−0.04) but not significant (p-value: 0.29), such that we excluded the term from the regression equation. The respective coefficients for the period 1998:3–2002:3 can be obtained by summing the coefficients of lagged real money and lagged real money times step dummy, and of GNP and GNP times step dummy, respectively. This reveals that the income elasticity stays approximately constant, 0.28/(1 − 0.70) = 0.93 until 1998:2 and (0.28 + 0.29)/(1 − 0.70 + 0.32) = 0.92 from 1998:3 to 2002:3, and that the interest rate elasticity declines in the second half of the sample from −0.13/(1 − 0.70) = −0.43 to −0.13/(1 − 0.70 + 0.32) = −0.21:

mr_t = 0.697 mr_{t−1} + 0.281 y_t − 0.133 r_t − 0.322 mr_{t−1}·ds9803_t + 0.288 y_t·ds9803_t
      (7.09)           (2.39)     (−6.81)    (−2.54)                    (2.63)
     + 0.133 − 0.032 s1_t − 0.041 s2_t − 0.034 s3_t + 0.110 di9802_t + 0.194 di9801_t + u_t.
      (0.25)  (−2.49)      (−3.18)     (−2.76)       (2.04)           (5.50)

11.2 The Econometric Approach to Money Demand

0.4 Y 0.2 0

0.2

0.4

Y

0.6

0.6

Recursive Coefficient GNP

0.8

Recursive Coefficient Real Balances

259

0

10

20 Time: 1993:1-2002:3

30

0

20 Time: 1993:1-2002:3

30

CUSUM of Square Test (5%)

Y

0.4 -0.2

0

-0.2

0.2

Y

-0.15

0.6

0.8

-0.1

1

1.2

-0.05

Recursive Coefficient Long-term Interest Rate

10

0

10

20 Time: 1993:1-2002:3

30

0

10

20 Time: 1993:1-2002:3

30

40

Figure 11.3: Stability test for the real money demand equation (11.10). STFmon04.xpl

T = 50 (1990:2 − 2002:3)    RESET(1) = 4.108 [0.050]
R2 = 0.989                  LM(4)    = 0.619 [0.652]
                            JB       = 0.428 [0.807]
                            ARCH(4)  = 0.408 [0.802]


Accordingly, the absolute adjustment coefficient µ in the error correction representation increases from 0.30 to 0.62. It can be concluded that Indonesian money demand has been surprisingly stable throughout and after the Asian crisis, given that the CUSUM of squares test indicates only minor stability problems. A shift in the constant term and two impulse dummies that correct for the different break points in real money and real output are sufficient to yield a relatively stable money demand function with an income elasticity of one and an interest rate elasticity of −0.28. However, a more flexible specification shows that the adjustment coefficient µ increases and that the interest rate elasticity decreases in absolute value after the Asian crisis. In the next section, we analyze whether these results are supported by a fuzzy clustering technique.

11.3 The Fuzzy Approach to Money Demand

11.3.1 Fuzzy Clustering

Ruspini (1969) introduced the fuzzy partition to describe the cluster structure of a data set and suggested an algorithm to compute the optimum fuzzy partition. Dunn (1973) generalized the minimum-variance clustering procedure to a Fuzzy ISODATA clustering technique. Bezdek (1981) used Dunn's (1973) approach to obtain an infinite family of algorithms known as the Fuzzy C-Means (FCM) algorithm. He generalized the fuzzy objective function by introducing the weighting exponent m, 1 ≤ m < ∞:

J_m(U, V) = Σ_{k=1}^{n} Σ_{i=1}^{c} (u_{ik})^m d²(x_k, v_i),   (11.11)

where X = {x_1, x_2, ..., x_n} ⊂ R^p is a subset of the real p-dimensional vector space R^p consisting of n observations, U is a random fuzzy partition matrix of X into c parts, the v_i are the cluster centers in R^p, and d(x_k, v_i) = ||x_k − v_i|| = {(x_k − v_i)′(x_k − v_i)}^{1/2} is an inner-product induced norm on R^p. Finally, u_{ik} refers to the degree of membership of point x_k in the ith cluster. This degree of membership, which can be seen as the probability of x_k belonging to cluster i, satisfies the following constraints:

0 ≤ u_{ik} ≤ 1,  for 1 ≤ i ≤ c, 1 ≤ k ≤ n,   (11.12)
0 < Σ_{k=1}^{n} u_{ik} < n,  for 1 ≤ i ≤ c,   (11.13)
Σ_{i=1}^{c} u_{ik} = 1,  for 1 ≤ k ≤ n.   (11.14)

The FCM uses an iterative optimization of the objective function, based on the weighted similarity measure between xk and the cluster center vi . More details on the FCM algorithm can be found in Mucha and Sofyan (2000). In practical applications, a validation method to measure the quality of a clustering result is needed. Its quality depends on many factors, such as the method of initialization, the choice of the number of clusters c, and the clustering method. The initialization requires a good estimate of the clusters and the cluster validity problem can be reduced to the choice of an optimal number of clusters c. Several cluster validity measures have been developed in the past by Bezdek and Pal (1992).
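The alternating updates that minimize (11.11) can be sketched compactly; the update formulas below are the standard FCM ones (not taken from the text), shown with m = 2 on toy one-dimensional data.

```python
# Fuzzy C-Means sketch minimizing (11.11): alternate the standard updates
# for memberships u_ik and centers v_i (m = 2, toy 1-D data).
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    u = rng.random((c, n))
    u /= u.sum(axis=0)                           # columns sum to one (11.14)
    for _ in range(iters):
        um = u ** m
        v = um @ x / um.sum(axis=1)              # weighted cluster centers
        d = np.abs(x[None, :] - v[:, None]) + 1e-12   # distances d(x_k, v_i)
        u = d ** (-2.0 / (m - 1))                # inverse-distance memberships
        u /= u.sum(axis=0)                       # renormalize to (11.14)
    return u, v

x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
u, v = fcm(x)
print(np.sort(np.round(v, 1)))                   # centers near 0.1 and 5.1
```

For this well-separated sample the memberships are close to crisp; with overlapping groups they vary smoothly between 0 and 1, which is what the money demand application exploits.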

11.3.2 The Takagi-Sugeno Approach

Takagi and Sugeno (1985) proposed a fuzzy clustering approach using the membership function µ_A(x) : X → [0, 1], which defines a degree of membership of x ∈ X in a fuzzy set A. In this context, all the fuzzy sets are associated with piecewise linear membership functions. Based on the fuzzy-set concept, the affine Takagi-Sugeno (TS) fuzzy model consists of a set of rules R_i, i = 1, ..., r, which have the following structure:

IF x is A_i, THEN y_i = a_i′x + b_i.

This structure consists of two parts, namely the antecedent part "x is A_i" and the consequent part "y_i = a_i′x + b_i," where x ∈ X ⊂ R^p is a crisp input vector, A_i is a (multidimensional) fuzzy set defined by the membership function µ_{A_i}(x) : X → [0, 1], and y_i ∈ R is the output of the ith rule, depending on a parameter vector a_i ∈ R^p and a scalar b_i.


Given a set of r rules and their outputs (consequents) y_i, the global output y of the Takagi-Sugeno model is defined by the fuzzy mean formula:

y = Σ_{i=1}^{r} µ_{A_i}(x) y_i / Σ_{i=1}^{r} µ_{A_i}(x).   (11.15)

It is usually difficult to implement multidimensional fuzzy sets. Therefore, the antecedent part is commonly represented as a combination of equations for the elements of x = (x_1, ..., x_p)′, each having a corresponding one-dimensional fuzzy set A_{i,j}, j = 1, ..., p. Using the conjunctive form, the rules can be formulated as:

IF x_1 is A_{i,1} AND ... AND x_p is A_{i,p}, THEN y_i = a_i′x + b_i,

with the degree of membership µ_{A_i}(x) = µ_{A_i,1}(x_1) · µ_{A_i,2}(x_2) ··· µ_{A_i,p}(x_p). This elementwise clustering approach is also referred to as product space clustering. Note that, after normalizing, this degree of membership (of the antecedent part) is:

φ_i(x) = µ_{A_i}(x) / Σ_{j=1}^{r} µ_{A_j}(x).   (11.16)

We can also interpret the affine Takagi-Sugeno model as a quasilinear model with a dependent input parameter (Wolkenhauer, 2001):

y = {Σ_{i=1}^{r} φ_i(x) a_i′} x + Σ_{i=1}^{r} φ_i(x) b_i = a′(x) x + b(x).   (11.17)
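The global output (11.15) can be evaluated directly once rule memberships and consequents are specified; a sketch with two hypothetical rules and Gaussian one-dimensional membership functions (all numbers are illustrative):

```python
# Takagi-Sugeno output (11.15): weighted mean of local linear consequents,
# with weights given by the rule memberships. Two hypothetical rules with
# Gaussian membership functions; all parameters are illustrative.
import math

rules = [
    # (center, width, a_i, b_i): IF x is near center THEN y_i = a_i*x + b_i
    (0.0, 1.0, 2.0, 0.0),
    (4.0, 1.0, -1.0, 10.0),
]

def membership(x, center, width):
    return math.exp(-((x - center) / width) ** 2)

def ts_output(x):
    weights = [membership(x, c, w) for c, w, _, _ in rules]
    outputs = [a * x + b for _, _, a, b in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

print(ts_output(0.0))   # dominated by rule 1: close to 2*0 + 0 = 0
print(ts_output(4.0))   # dominated by rule 2: close to -1*4 + 10 = 6
```

Between the rule centers the output blends the two local lines smoothly, which is exactly the quasilinear interpretation in (11.17).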

11.3.3

Model Identification

The basic principle of model identification by product space clustering is to approximate a nonlinear regression problem by decomposing it into several local linear sub-problems described by IF-THEN rules. A comprehensive discussion can be found in Giles and Draeseke (2001). Let us now discuss identification and estimation of the fuzzy model in the case of multivariate data. Suppose

    y = f(x1, x2, . . . , xp) + ε,                                     (11.18)

where the error term ε is assumed to be independently, identically, and normally distributed around zero. The fuzzy function f represents the conditional mean of the output variable y. In the rest of the chapter, we use a linear form of f and the least squares criterion for its estimation. The algorithm is as follows.

11.3 The Fuzzy Approach to Money Demand


Step 1: For each pair xr and y, separately partition the n observations of the sample into cr fuzzy clusters by using fuzzy clustering (where r = 1, . . . , p).

Step 2: Consider all possible combinations of fuzzy clusters given the number of input variables p, where c = Π_{r=1}^p cr.

Step 3: Form a model by using data taken from each fuzzy cluster:

    yij = βi0 + βi1 x1ij + βi2 x2ij + . . . + βip xpij + εij,          (11.19)

where the observation index is j = 1, . . . , n and the cluster index is i = 1, . . . , c.

Step 4: Predict the conditional mean of y by using:

    ŷk = Σ_{i=1}^c (bi0 + bi1 x1k + . . . + bip xpk) wik / Σ_{i=1}^c wik,   k = 1, . . . , n,   (11.20)

where wik = Π_{r=1}^p δij µrj(xk), i = 1, . . . , c, and δij is an indicator equal to one if the jth cluster is associated with the ith observation.

The fuzzy predictor of the conditional mean y is a weighted average of linear predictors based on the fuzzy partitions of explanatory variables, with a membership value varying continuously through the sample observations. The effect of this condition is that the nonlinear system can be effectively modelled. The modelling technique based on fuzzy sets can be understood as a local method: it uses partitions of a domain process into a number of fuzzy regions. In each region of the input space, a rule is defined which transforms input variables into output. The rules can be interpreted as local sub-models of the system. This approach is very similar to the inclusion of dummy variables in an econometric model. By allowing interaction of dummy variables and independent variables, we also specify local sub-models. While the number and location of the sub-periods are determined endogenously by the data in the fuzzy approach, they have been imposed exogenously after visual data inspection in our econometric model. However, this is not a fundamental difference because the number and location of the sub-periods could also be determined automatically by using econometric techniques.
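The weighted-average prediction in Step 4 is straightforward to express with matrices. Below is a small sketch (our own helper, not code from the chapter) that combines c local linear models with given membership weights:

```python
import numpy as np

def fuzzy_predict(X, betas, W):
    """X: (n, p) inputs; betas: (c, p+1) local OLS coefficients, intercept first;
    W: (c, n) membership weights. Returns the weighted prediction as in (11.20)."""
    Z = np.hstack([np.ones((X.shape[0], 1)), X])     # add intercept column
    local = betas @ Z.T                               # (c, n) local linear predictions
    return (W * local).sum(axis=0) / W.sum(axis=0)   # membership-weighted average
```

With a single cluster and unit weights this reduces to an ordinary linear prediction; with 0/1 weights it selects one local sub-model per observation, exactly as dummy variables would.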

11.3.4

Modelling Indonesian Money Demand with Fuzzy Techniques

In this section, we model the M2 money demand in Indonesia using the approach of fuzzy model identiﬁcation and the same data as in Section 11.2. Like


Table 11.2: Four clusters of Indonesian money demand data

Cluster   Observations   β0 (t-value)       β1 (yt) (t-value)   β2 (rt) (t-value)
1         1-15           3.9452 (3.402)     0.5479 (5.441)      -0.2047 (-4.195)
2         16-31          1.2913 (0.328)     0.7123 (1.846)      0.1493 (0.638)
3         34-39          28.7063 (1.757)    -1.5480 (-1.085)    -0.3177 (-2.377)
4         40-51          -0.2389 (-0.053)   0.8678 (2.183)      0.1357 (0.901)

in the econometric approach, logarithmic real money demand (mrt) depends on logarithmic GNP (yt) and the logarithmic long-term interest rate (rt):

    mrt = β0 + β1 ln Yt + β2 rt.                                       (11.21)

The results of the fuzzy clustering algorithm are far from unambiguous. Fuzzy clustering with real money and output yields three clusters. However, the real money and output clusters overlap, such that it is difficult to identify three common clusters. Hence, we arrange them as four clusters. On the other hand, clustering with real money and the interest rate leads to two clusters. The intersection of both clustering results gives four different clusters. The four local models are presented in Table 11.2. In the first cluster, which covers the period 1990:1–1993:3, GNP has a positive effect on money demand, and the interest rate effect is negative. The output elasticity is substantially below one, but increases in the second cluster (1993:4–1997:3). The interest rate has no significant impact on real money in the second period. The third cluster, from 1997:4 to 1998:4, covers the Asian crisis. In this period, the relationship between real money and output breaks down while the interest rate effect is stronger than before. The last cluster covers the period 1999:4–2002:3, in which the situation in Indonesia was slowly brought under control after a new government was elected in October 1999. The elasticity of GNP returned approximately to its pre-crisis level. However, the effect of the interest rate is not significant.


Figure 11.4: Indonesian money demand (log scale), 1990:1–2002:3. Fitted money demand (dotted line): econometric model (dashed line) and fuzzy model (solid line). STFmon05.xpl

The ﬁt of the local sub-models is not as good as the ﬁt of the econometric model (Figure 11.4). The main reasons for this result are that autocorrelation and seasonality of the data have not been considered in the fuzzy approach, mainly for computational reasons. Additionally, the determination of the number of diﬀerent clusters turned out to be rather diﬃcult. Therefore, the fuzzy model for Indonesian money demand described here should be interpreted as an illustrative example for the robustness analysis of econometric models. More research is necessary to ﬁnd a fuzzy speciﬁcation that describes the data as well as the econometric model.


11.4


Conclusions

In this chapter, we have analyzed money demand in Indonesia in a period in which major instabilities in basic economic relations due to the Asian crisis may be expected. In addition to an econometric approach, we have applied fuzzy clustering in order to analyze the robustness of the econometric results. Both the econometric and the fuzzy clustering approach divide the period from 1990 to 2002 into separate sub-periods. In the econometric approach this is accomplished by the inclusion of dummy variables in the regression model; in the fuzzy clustering approach, different clusters are identified in which local regression models are valid. Both approaches reveal that there have been structural changes in Indonesian money demand during the late 1990s. A common result is that the income elasticity of money demand is quite stable before and after the crisis: the econometric estimate of the income elasticity after the crisis is about 0.93 and the fuzzy estimate is 0.87. The interest rate elasticity differs between the two approaches: the econometric model indicates a smaller (in absolute value) but still significant negative interest rate elasticity after the crisis, while the fuzzy approach yields an insignificant interest rate elasticity after the crisis. A further difference is that the fuzzy approach suggests a higher number of sub-periods, namely four clusters, while the econometric model is based on only two sub-periods. However, it might well be that the results of the two approaches become even more similar when the fit of the fuzzy model is improved. Our main conclusions are as follows. Firstly, Indonesian money demand has been surprisingly stable in a troubled and difficult time. Secondly, the fuzzy clustering approach provides a framework for the robustness analysis of economic relationships. This framework can be especially useful if the number and location of sub-periods exhibiting structural differences in the economic relationships is not known ex ante.
Thirdly, our analysis also reveals why previous studies of Indonesian money demand delivered unstable results. These studies applied cointegration techniques. However, we show that the relevant Indonesian time series are trend-stationary, such that the cointegration framework is not appropriate.

Bibliography


Arize, A. C. (1994). A Re-examination of the Demand for Money in Small Developing Economies, Applied Economics 26: 217–228.
Arize, A. C. and Shwiff, S. S. (1993). Cointegration, Real Exchange Rate and Modelling the Demand for Broad Money in Japan, Applied Economics 25(6): 717–726.
Bahmani-Oskooee, M. (2001). How Stable is M2 Money Demand Function in Japan?, Japan and the World Economy 13: 455–461.
Bernanke, B. S. and Gertler, M. (1995). Inside the Black Box: the Credit Channel of Monetary Policy Transmission, Journal of Economic Perspectives 9: 27–48.
Bezdek, J. C. (1981). Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York.
Bezdek, J. C. and Pal, S. K. (1992). Fuzzy Models for Pattern Recognition, IEEE Press, New York.
Brand, C. and Cassola, N. (2000). A Money Demand System for Euro Area M3, ECB Working Paper 39.
Coenen, G. and Vega, J. L. (1999). The Demand for M3 in the Euro Area, ECB Working Paper 6.
Deckle, P. and Pradhan, M. (1997). Financial Liberalization and Money Demand in ASEAN Countries: Implications for Monetary Policy, IMF Working Paper WP/97/36.
Drake, L. and Chrystal, K. A. (1994). Company-Sector Money Demand: New Evidence on the Existence of a Stable Long-run Relationship for the UK, Journal of Money, Credit, and Banking 26: 479–494.
Dunn, J. C. (1973). A Fuzzy Relative of the ISODATA Process and its Use in Detecting Compact Well-Separated Clusters, Journal of Cybernetics 3: 32–57.
Engle, R. F. and Granger, C. W. J. (1987). Co-integration and Error Correction: Representation, Estimation and Testing, Econometrica 55: 251–276.


Ericsson, N. R. (1999). Empirical Modeling of Money Demand, in Lütkepohl, H. and Wolters, J. (Eds), Money Demand in Europe, Physica, Heidelberg, 29–49.
Giles, D. E. A. and Draeseke, R. (2001). Econometric Modelling Using Pattern Recognition via the Fuzzy c-Means Algorithm, in Giles, D. E. A. (Ed.), Computer-Aided Econometrics, Marcel Dekker, New York.
Goldfeld, S. M. and Sichel, D. E. (1990). The Demand for Money, in Friedman, B. and Hahn, F. H. (Eds), Handbook of Monetary Economics, Elsevier, Amsterdam, 299–356.
Hafer, R. W. and Jansen, D. W. (1991). The Demand for Money in the United States: Evidence from Cointegration Tests, Journal of Money, Credit, and Banking 23: 155–168.
Hafer, R. W. and Kutan, A. M. (1994). Economic Reforms and Long-Run Money Demand in China: Implication for Monetary Policy, Southern Economic Journal 60(4): 936–945.
Haug, A. A. and Lucas, R. F. (1996). Long-Term Money Demand in Canada: In Search of Stability, Review of Economics and Statistics 78: 345–348.
Holtemöller, O. (2002). Vector Autoregressive Analysis and Monetary Policy. Three Essays, Shaker, Aachen.
Holtemöller, O. (2004a). Aggregation of National Data and Stability of Euro Area Money Demand, in Dreger, Chr. and Hansen, G. (Eds), Advances in Macroeconometric Modeling, Papers and Proceedings of the 3rd IWH Workshop in Macroeconometrics, Nomos, Baden-Baden, 181–203.
Holtemöller, O. (2004b). A Monetary Vector Error Correction Model of the Euro Area and Implications for Monetary Policy, Empirical Economics, forthcoming.
Lim, G. C. (1993). The Demand for the Components of Broad Money: Error Correction and Generalized Asset Adjustment Systems, Applied Economics 25(8): 995–1004.
Lütkepohl, H. and Wolters, J. (Eds) (1999). Money Demand in Europe, Physica, Heidelberg.
McCandless, G. T. and Weber, W. E. (1995). Some Monetary Facts, Federal Reserve Bank of Minneapolis Quarterly Review 19: 2–11.


McNown, R. and Wallace, M. S. (1992). Cointegration Tests of a Long-Run Relation between Money Demand and the Effective Exchange Rate, Journal of International Money and Finance 11(1): 107–114.
Mehra, Y. P. (1993). The Stability of the M2 Money Demand Function: Evidence from an Error-Correction Model, Journal of Money, Credit, and Banking 25: 455–460.
Miller, S. M. (1991). Monetary Dynamics: An Application of Cointegration and Error-Correction Modelling, Journal of Money, Credit, and Banking 23: 139–168.
Mishkin, F. S. (1995). Symposium on the Monetary Transmission Mechanism, Journal of Economic Perspectives 9: 3–10.
Miyao, R. (1996). Does a Cointegrating M2 Demand Relation Really Exist in Japan?, Journal of the Japanese and International Economies 10: 169–180.
Moosa, I. A. (1992). The Demand for Money in India: A Cointegration Approach, The Indian Economic Journal 40(1): 101–115.
Mucha, H. J. and Sofyan, H. (2000). Cluster Analysis, in Härdle, W., Klinke, S. and Hlavka, Z. (Eds), XploRe Application Guide, Springer, Heidelberg.
Nelson, C. R. and Plosser, C. I. (1982). Trends and Random Walks in Macroeconomic Time Series, Journal of Monetary Economics 10: 139–162.
Orden, D. and Fisher, L. A. (1993). Financial Deregulation and the Dynamics of Money, Prices and Output in New Zealand and Australia, Journal of Money, Credit, and Banking 25: 273–292.
Perron, P. (1989). The Great Crash, the Oil Price Shock, and the Unit Root Hypothesis, Econometrica 57: 1361–1401.
Price, S. and Insukindro (1994). The Demand for Indonesian Narrow Money: Long-run Equilibrium, Error Correction and Forward-looking Behaviour, The Journal of International Trade and Economic Development 3(2): 147–163.
Ruspini, E. H. (1969). A New Approach to Clustering, Information Control 15: 22–32.
Sriram, S. S. (1999). Demand for M2 in an Emerging-Market Economy: An Error-Correction Model for Malaysia, IMF Working Paper WP/99/173.


Takagi, T. and Sugeno, M. (1985). Fuzzy Identification of Systems and its Application to Modelling and Control, IEEE Transactions on Systems, Man and Cybernetics 15(1): 116–132.
Tseng, W. (1994). Economic Reform in China: A New Phase, IMF Occasional Paper 114.
Walsh, C. E. (1998). Monetary Theory and Policy, MIT Press, Cambridge.
Wolkenhauer, O. (2001). Data Engineering: Fuzzy Mathematics in System Theory and Data Analysis, Wiley, New York.
Wolters, J., Teräsvirta, T. and Lütkepohl, H. (1998). Modeling the Demand for M3 in the Unified Germany, The Review of Economics and Statistics 90: 309–409.

12 Nonparametric Productivity Analysis Wolfgang Härdle and Seok-Oh Jeong

How can we measure and compare the relative performance of production units? If input and output variables are one-dimensional, then the simplest way is to compute efficiency by calculating and comparing the ratio of output to input for each production unit. This idea is inappropriate, though, when multiple inputs or multiple outputs are observed. Consider a bank, for example, with three branches A, B, and C. The branches take the number of staff as the input, and measure outputs such as the number of transactions on personal and business accounts. Assume that the following statistics are observed:

• Branch A: 60000 personal transactions, 50000 business transactions, 25 people on staff,

• Branch B: 50000 personal transactions, 25000 business transactions, 15 people on staff,

• Branch C: 45000 personal transactions, 15000 business transactions, 10 people on staff.

We observe that Branch C performed best in terms of personal transactions per staff, whereas Branch A has the highest ratio of business transactions per staff. By contrast, Branch B performed better than Branch A in terms of personal transactions per staff, and better than Branch C in terms of business transactions per staff. How can we compare these business units in a fair way? Moreover, can we possibly create a virtual branch that reflects the input/output mechanism and thus creates a scale for the real branches? Productivity analysis provides a systematic approach to these problems. We review the basic concepts of productivity analysis and two popular methods
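A quick computation of the per-staff ratios confirms that no single branch dominates on both outputs (a minimal sketch using the figures listed above):

```python
# Output-per-staff ratios for the three branches listed above.
branches = {
    "A": {"personal": 60000, "business": 50000, "staff": 25},
    "B": {"personal": 50000, "business": 25000, "staff": 15},
    "C": {"personal": 45000, "business": 15000, "staff": 10},
}
for name, d in branches.items():
    print(name, d["personal"] / d["staff"], d["business"] / d["staff"])
# C leads in personal transactions per staff (4500), A in business (2000),
# and B lies between the other two on both ratios.
```

Because the two rankings disagree, any fair comparison must aggregate several inputs and outputs at once, which is exactly what productivity analysis formalizes.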


DEA and FDH, which are given in Sections 12.1 and 12.2, respectively. Sections 12.3 and 12.4 contain illustrative examples with real data.

12.1

The Basic Concepts

The activity of production units such as banks, universities, governments, administrations, and hospitals may be described and formalized by the production set:

    Ψ = {(x, y) ∈ R^p_+ × R^q_+ | x can produce y},

where x is a vector of inputs and y is a vector of outputs. This set is usually assumed to be free disposable, i.e. for given (x, y) ∈ Ψ all (x′, y′) with x′ ≥ x and y′ ≤ y belong to Ψ, where the inequalities between vectors are understood componentwise. When y is one-dimensional, Ψ can be characterized by a function g called the frontier function or the production function:

    Ψ = {(x, y) ∈ R^p_+ × R_+ | y ≤ g(x)}.

Under the free disposability condition the frontier function g is monotone nondecreasing in x. See Figure 12.1 for an illustration of the production set and the frontier function in the case of p = q = 1. The black curve represents the frontier function, and the production set is the region below the curve. Suppose the point A represents the input and output pair of a production unit. The performance of the unit can be evaluated by referring to the points B and C on the frontier. One sees that with less input x one could have produced the same output y (point B). One also sees that with the input of A one could have produced C. In the following we describe a systematic way to measure the efficiency of any production unit compared to the peers of the production set in a multi-dimensional setup. The production set Ψ can be described by its sections. The input (requirement) set X(y) is defined by:

    X(y) = {x ∈ R^p_+ | (x, y) ∈ Ψ},

which is the set of all input vectors x ∈ R^p_+ that yield at least the output vector y. See Figure 12.2 for a graphical illustration for the case of p = 2. The region over the smooth curve represents X(y) for a given level y. On the other hand, the output (correspondence) set Y(x) is defined by:

    Y(x) = {y ∈ R^q_+ | (x, y) ∈ Ψ},

the set of all output vectors y ∈ R^q_+ that are obtainable from the input vector x. Figure 12.3 illustrates Y(x) for the case of q = 2. The region below the smooth curve is Y(x) for a given input level x.

Figure 12.1: The production set and the frontier function, p = q = 1.

In productivity analysis one is interested in the input and output isoquants or efficient boundaries, denoted by ∂X(y) and ∂Y(x) respectively. They consist of the attainable boundary in a radial sense:

    ∂X(y) = {x | x ∈ X(y), θx ∉ X(y) for all 0 < θ < 1}   if y ≠ 0,
    ∂X(y) = {0}                                            if y = 0,

and

    ∂Y(x) = {y | y ∈ Y(x), λy ∉ Y(x) for all λ > 1}   if Y(x) ≠ {0},
    ∂Y(x) = {0}                                        if Y(x) = {0}.

Given a production set Ψ with the scalar output y, the production function g can also be deﬁned for x ∈ Rp+ : g(x) = sup{y | (x, y) ∈ Ψ}.


Figure 12.2: Input requirement set, p = 2.

It may be defined via the input set and the output set as well:

    g(x) = sup{y | x ∈ X(y)} = sup{y | y ∈ Y(x)}.

For a given input-output point (x0, y0), its input efficiency is defined as

    θIN(x0, y0) = inf{θ | θx0 ∈ X(y0)}.

The efficient level of input corresponding to the output level y0 is then given by

    x∂(y0) = θIN(x0, y0) x0.                                           (12.1)

Note that x∂(y0) is the intersection of ∂X(y0) and the ray θx0, θ > 0, see Figure 12.2. Suppose that the point A in Figure 12.2 represents the input used by a production unit. The point B is its efficient input level and the input efficiency score of the unit is given by OB/OA. The output efficiency score θOUT(x0, y0) can be defined similarly:

    θOUT(x0, y0) = sup{θ | θy0 ∈ Y(x0)}.                               (12.2)


Figure 12.3: Output corresponding set, q = 2.

The efficient level of output corresponding to the input level x0 is given by

    y∂(x0) = θOUT(x0, y0) y0.

In Figure 12.3, let the point A be the output produced by a unit. Then the point B is the efficient output level and the output efficiency score of the unit is given by OB/OA. Note that, by definition,

    θIN(x0, y0) = inf{θ | (θx0, y0) ∈ Ψ},
    θOUT(x0, y0) = sup{θ | (x0, θy0) ∈ Ψ}.                             (12.3)

Returns to scale is a characteristic of the surface of the production set. The production set exhibits constant returns to scale (CRS) if, for α ≥ 0 and P ∈ Ψ, αP ∈ Ψ; it exhibits non-increasing returns to scale (NIRS) if, for 0 ≤ α ≤ 1 and P ∈ Ψ, αP ∈ Ψ; it exhibits non-decreasing returns to scale (NDRS) if, for α ≥ 1 and P ∈ Ψ, αP ∈ Ψ. In particular, a convex production set exhibits non-increasing returns to scale. Note, however, that the converse is not true.


For more details on the theory and method for productivity analysis, see Shephard (1970), Färe, Grosskopf, and Lovell (1985), and Färe, Grosskopf, and Lovell (1994).

12.2

Nonparametric Hull Methods

The production set Ψ and the production function g are usually unknown; instead, a sample of production units or decision making units (DMUs) is available: X = {(xi, yi), i = 1, . . . , n}. The aim of productivity analysis is to estimate Ψ or g from the data X. Here we consider only the deterministic frontier model, i.e. no noise in the observations and hence X ⊂ Ψ with probability 1. For example, when q = 1 the structure of X can be expressed as:

    yi = g(xi) − ui,  i = 1, . . . , n,

or

    yi = g(xi) vi,  i = 1, . . . , n,

where g is the frontier function, and ui ≥ 0 and vi ≤ 1 are the random terms for inefficiency of the observed pair (xi, yi) for i = 1, . . . , n. The most popular nonparametric method is Data Envelopment Analysis (DEA), which assumes that the production set is convex and free disposable. This model is an extension of Farrell's (1957) idea and was popularized by Charnes, Cooper, and Rhodes (1978). Deprins, Simar, and Tulkens (1984), assuming only free disposability of the production set, proposed a more flexible model, the so-called Free Disposal Hull (FDH) model. Statistical properties of these hull methods have been studied in the literature. Park (2001) and Simar and Wilson (2000) provide reviews on the statistical inference of existing nonparametric frontier models. For the nonparametric frontier models in the presence of noise, the so-called nonparametric stochastic frontier models, we refer to Simar (2003), Kumbhakar, Park, Simar, and Tsionas (2004) and references therein.


12.2.1


Data Envelopment Analysis

The Data Envelopment Analysis (DEA) of the observed sample X is defined as the smallest free disposable and convex set containing X:

    Ψ̂_DEA = {(x, y) ∈ R^p_+ × R^q_+ | x ≥ Σ_{i=1}^n γi xi, y ≤ Σ_{i=1}^n γi yi,
              for some (γ1, . . . , γn) such that Σ_{i=1}^n γi = 1, γi ≥ 0 ∀i = 1, . . . , n}.

The DEA efficiency scores for a given input-output level (x0, y0) are obtained via (12.3):

    θ̂IN(x0, y0) = min{θ > 0 | (θx0, y0) ∈ Ψ̂_DEA},
    θ̂OUT(x0, y0) = max{θ > 0 | (x0, θy0) ∈ Ψ̂_DEA}.

The DEA efficient levels for a given level (x0, y0) are given by (12.1) and (12.2) as:

    x̂∂(y0) = θ̂IN(x0, y0) x0;   ŷ∂(x0) = θ̂OUT(x0, y0) y0.

Figure 12.4 depicts 50 simulated production units and the frontier built by DEA efficient input levels. The simulated model is as follows:

    xi ~ Uniform[0, 1],  yi = g(xi) e^{−zi},  g(x) = 1 + √x,  zi ~ Exp(3),

for i = 1, . . . , 50, where Exp(ν) denotes the exponential distribution with mean 1/ν. Note that E[e^{−zi}] = 0.75. The scenario with an exponential distribution for the logarithm of the inefficiency term and 0.75 as an average of inefficiency is reasonable in the productivity analysis literature (Gijbels, Mammen, Park, and Simar, 1999).

The DEA estimate is always downward biased in the sense that Ψ̂_DEA ⊂ Ψ. So an asymptotic analysis quantifying the discrepancy between the true frontier and the DEA estimate would be appreciated. The consistency and the convergence rate of DEA efficiency scores with multidimensional inputs and outputs were established analytically by Kneip, Park, and Simar (1998). For p = 1 and q = 1, Gijbels, Mammen, Park, and Simar (1999) obtained its limit distribution depending on the curvature of the frontier and the density at the boundary. Jeong and Park (2004) and Kneip, Simar, and Wilson (2003) extended this result to higher dimensions.
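In practice the DEA input efficiency score is the optimum of a small linear program in the variables (θ, γ1, . . . , γn). The following sketch uses scipy's linprog; the function name is our own, and the formulation follows the variable-returns-to-scale definition of the DEA hull with the convexity constraint Σγi = 1:

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_score(x0, y0, X, Y):
    """Input-oriented DEA efficiency score (VRS).
    X: (n, p) observed inputs, Y: (n, q) observed outputs.
    Decision variables: (theta, gamma_1, ..., gamma_n)."""
    n, p = X.shape
    q = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    A_in = np.hstack([-x0[:, None], X.T])        # sum_i gamma_i x_i <= theta * x0
    A_out = np.hstack([np.zeros((q, 1)), -Y.T])  # sum_i gamma_i y_i >= y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(p), -y0],
                  A_eq=np.r_[0.0, np.ones(n)][None, :], b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.fun
```

One such LP is solved per production unit; an efficient unit (a vertex of the hull) obtains a score of one, and dominated units obtain scores below one.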


Figure 12.4: 50 simulated production units (circles), the frontier of the DEA estimate (solid line), and the true frontier function g(x) = 1 + √x (dotted line). STFnpa01.xpl

12.2.2

Free Disposal Hull

The Free Disposal Hull (FDH) of the observed sample X is defined as the smallest free disposable set containing X:

    Ψ̂_FDH = {(x, y) ∈ R^p_+ × R^q_+ | x ≥ xi, y ≤ yi for some i = 1, . . . , n}.

We can obtain the FDH estimates of efficiency scores for a given input-output level (x0, y0) by substituting Ψ̂_FDH for Ψ̂_DEA in the definition of the DEA efficiency scores. Note that, unlike DEA estimates, their closed forms can be


derived by a straightforward calculation:

    θ̂IN(x0, y0) = min_{i: y0 ≤ yi} max_{1 ≤ j ≤ p} (xi^j / x0^j),
    θ̂OUT(x0, y0) = max_{i: xi ≤ x0} min_{1 ≤ k ≤ q} (yi^k / y0^k),

where v^j is the jth component of a vector v. The efficient levels for a given level (x0, y0) are obtained in the same way as those for DEA. See Figure 12.5 for an illustration by a simulated example:

    xi ~ Uniform[1, 2],  yi = g(xi) e^{−zi},  g(x) = 3(x − 1.5)^3 + 0.25x + 1.125,  zi ~ Exp(3),

for i = 1, . . . , 50. Park, Simar, and Weiner (1999) showed that the limit distribution of the FDH estimator in a multivariate setup is a Weibull distribution depending on the slope of the frontier and the density at the boundary.

12.3

DEA in Practice: Insurance Agencies

In order to illustrate a practical application of DEA we consider an example from the empirical study of Scheel (1999). This concrete data analysis is about the efficiency of 63 agencies of a German insurance company, see Table 12.1. The input X ∈ R^4_+ and output Y ∈ R^2_+ variables were as follows:

X1: Number of clients of Type A,
X2: Number of clients of Type B,
X3: Number of clients of Type C,
X4: Potential new premiums in EURO,
Y1: Number of new contracts,
Y2: Sum of new premiums in EURO.

Clients of an insurance company are those who are currently served by the agencies of the company. They are classified into several types which reflect, for example, the insurance coverage. Agencies should sell to the clients as many contracts with as many premiums as possible. Hence the numbers of clients (X1, X2, X3) are included as input variables, and the number of new contracts (Y1)


Figure 12.5: 50 simulated production units (circles), the frontier of the FDH estimate (solid line), and the true frontier function g(x) = 3(x − 1.5)^3 + 0.25x + 1.125 (dotted line). STFnpa02.xpl

and the sum of new premiums (Y2) are included as output variables. The potential new premiums (X4) is included as an input variable, since it depends on the clients' current coverage. Summary statistics for these data are given in Table 12.2. The DEA efficiency scores and the DEA efficient levels of inputs for the agencies are given in Tables 12.3 and 12.4, respectively. The input efficiency score for each agency provides a gauge for evaluating its activity, and the efficient level of inputs can be interpreted as a 'goal' input. For example, agency 1 should have been able to yield its activity outputs (Y1 = 7, Y2 = 1754) with only 38% of its inputs, i.e., X1 = 53, X2 = 93, X3 = 4, and X4 = 108960. By contrast, agency 63, whose efficiency score is equal to 1, turned out to have used its resources 100% efficiently.
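The 'goal' input is just the observed input scaled by the efficiency score. A two-line check for agency 1, using its observed inputs from Table 12.1 and its score from Table 12.3 (values rounded as in the tables):

```python
# Efficient ('goal') input of agency 1 = DEA input score * observed inputs.
score = 0.38392                         # DEA input efficiency score of agency 1
inputs = [138, 242, 10, 283816.7]       # observed X1..X4 of agency 1
goal = [score * x for x in inputs]
print(goal)  # approximately [52.98, 92.91, 3.84, 108963] -- cf. Table 12.4
```

The small discrepancies against Table 12.4 come only from rounding of the reported score.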


Table 12.1: Activities of 63 agencies of a German insurance company

          inputs                              outputs
Agency    X1     X2     X3    X4              Y1     Y2
1         138    242    10    283816.7        7      1754
2         166    124    5     156727.5        8      2413
3         152    84     3     111128.9        15     2531
...       ...    ...    ...   ...             ...    ...
62        83     109    2     139831.4        11     4439
63        108    257    0     299905.3        45     30545

Table 12.2: Summary statistics for 63 agencies of a German insurance company

      Minimum   Maximum   Mean      Median    Std.Error
X1    42        572       225.54    197       131.73
X2    55        481       184.44    141       110.28
X3    0         140       19.762    10        26.012
X4    73756     693820    258670    206170    160150
Y1    2         70        22.762    16        16.608
Y2    696       33075     7886.7    6038      7208

12.4

FDH in Practice: Manufacturing Industry

In order to illustrate how FDH works, the Manufacturing Industry Productivity Database from the National Bureau of Economic Research (NBER), USA, is considered. This database is downloadable from the NBER website [http://www.nber.org]. It contains annual industry-level data on output, employment, payroll, and other input costs, investment, capital stocks, and various industry-specific price indices from 1958 onward for hundreds of manufacturing industries (indexed by 4-digit codes) in the United States. We selected data from the year 1996 (458 industries) with the following 4 input variables, p = 4, and 1 output variable, q = 1 (summary statistics are given in Table 12.5):


Table 12.3: DEA efficiency scores of the 63 agencies

Agency    Efficiency score
1         0.38392
2         0.49063
3         0.86449
...       ...
62        0.79892
63        1

STFnpa03.xpl

Table 12.4: DEA efficiency level of the 63 agencies

          Efficient level of inputs
Agency    X1       X2       X3       X4
1         52.981   92.909   3.8392   108960
2         81.444   60.838   2.4531   76895
3         131.4    72.617   2.5935   96070
...       ...      ...      ...      ...
62        66.311   87.083   1.5978   111710
63        108      257      0        299910

STFnpa03.xpl

X1: Total employment,
X2: Total cost of material,
X3: Cost of electricity and fuel,
X4: Total real capital stock,
Y: Total value added.

12.4 FDH in Practice: Manufacturing Industry


Table 12.5: Summary statistics for Manufacturing Industry Productivity Database (NBER, USA)

      Minimum   Maximum   Mean     Median   Std.Error
X1    0.8       500.5     37.833   21       54.929
X2    18.5      145130    4313     1957.2   10771
X3    0.5       3807.8    139.96   49.7     362
X4    15.8      64590     2962.8   1234.7   6271.1
Y     34.1      56311     3820.2   1858.5   6392

Table 12.6 summarizes the results of the analysis of US manufacturing industries in 1996. The industry indexed by 2015 was efficient in both input and output orientation. This means that it is one of the vertices of the free disposal hull generated by the 458 observations. On the other hand, the industry 2298 performed fairly well in terms of input efficiency (0.96) but somewhat badly (0.47) in terms of output efficiency. We can obtain the efficient level of inputs (or outputs) by multiplying (or dividing) each corresponding observation by the efficiency score. For example, consider the industry 2013, which used inputs X1 = 88.1, X2 = 14925, X3 = 250, and X4 = 4365.1 to yield the output Y = 5954.2. Since its FDH input efficiency score was 0.64, this industry should have used the inputs X1 = 56.667, X2 = 9600, X3 = 160.8, and X4 = 2807.7 to produce the observed output Y = 5954.2. On the other hand, taking into account that the FDH output efficiency score was 0.70, this industry should have increased its output up to Y = 4183.1 with the observed level of inputs.


Table 12.6: FDH efficiency scores of 458 US industries in 1996

        Industry    Efficiency scores
                    input      output
1       2011        0.88724    0.94203
2       2013        0.79505    0.80701
3       2015        0.66933    0.62707
4       2021        1          1
...     ...         ...        ...
75      2298        0.80078    0.7439
...     ...         ...        ...
458     3999        0.50809    0.47585

STFnpa04.xpl

Bibliography


Bibliography

Charnes, A., Cooper, W. W., and Rhodes, E. (1978). Measuring the Inefficiency of Decision Making Units, European Journal of Operational Research 2: 429–444.
Deprins, D., Simar, L., and Tulkens, H. (1984). Measuring Labor Inefficiency in Post Offices, in Marchand, M., Pestieau, P. and Tulkens, H. (Eds), The Performance of Public Enterprises: Concepts and Measurements, 243–267.
Färe, R., Grosskopf, S., and Lovell, C. A. K. (1985). The Measurement of Efficiency of Production, Kluwer-Nijhoff.
Färe, R., Grosskopf, S., and Lovell, C. A. K. (1994). Production Frontiers, Cambridge University Press.
Farrell, M. J. (1957). The Measurement of Productive Efficiency, Journal of the Royal Statistical Society, Ser. A 120: 253–281.
Gijbels, I., Mammen, E., Park, B. U., and Simar, L. (1999). On Estimation of Monotone and Concave Frontier Functions, Journal of the American Statistical Association 94: 220–228.
Jeong, S. and Park, B. U. (2002). Limit Distributions of Convex Hull Estimators of Boundaries, Discussion Paper #0439, CASE (Center for Applied Statistics and Economics), Humboldt-Universität zu Berlin, Germany.
Kneip, A., Park, B. U., and Simar, L. (1998). A Note on the Convergence of Nonparametric DEA Efficiency Measures, Econometric Theory 14: 783–793.
Kneip, A., Simar, L., and Wilson, P. (2003). Asymptotics for DEA Estimators in Non-parametric Frontier Models, Discussion Paper #0317, Institut de Statistique, Université catholique de Louvain, Louvain-la-Neuve, Belgium.
Kumbhakar, S. C., Park, B. U., Simar, L., and Tsionas, E. G. (2004). Nonparametric Stochastic Frontiers: A Local Maximum Likelihood Approach, Discussion Paper #0417, Institut de Statistique, Université catholique de Louvain, Louvain-la-Neuve, Belgium.
Park, B. U. (2001). On Nonparametric Estimation of Data Edges, Journal of the Korean Statistical Society 30(2): 265–280.

286

Bibliography

Park, B. U., Simar, L., and Weiner, Ch. (1999). The FDH Estimator for Productivity Eﬃciency Scores: Asymptotic Properties, Econometric Theory 16, 855–877. Scheel, H. (1999). Continuity of the BCC eﬃciency measure, in: Westermann (ed.), Data Envelopment Analysis in the public and private service sector, Gabler, Wiesbaden. Shephard, R. W. (1970). Theory of Cost and Production Function, Princeton University Press. Simar, L. (2003 ). How to improve the performances of DEA/FDH estimators in the presence of noise?, Discussion Paper # 0323, Institut de statistique, Universit´e catholique de Louvain, Louvain-la-Neuve, Belgium. Simar, L. and Wilson, P. (2000 ). Statistical Inference in Nonparametric Frontier Models: The State of the Art, Journal of Productivity Analysis 13, 49–78.

Part II

Insurance

13 Loss Distributions

Krzysztof Burnecki, Adam Misiorek, and Rafał Weron

13.1 Introduction

The derivation of loss distributions from insurance data is not an easy task. Insurers normally keep data files containing detailed information about policies and claims, which are used for accounting and rate-making purposes. However, claim size distributions and other data needed for risk-theoretical analyses can usually be obtained only after tedious data preprocessing. Moreover, the claim statistics are often limited. Data files containing detailed information about some policies and claims may be missing or corrupted. There may also be situations where prior data or experience are not available at all, e.g. when a new type of insurance is introduced or when very large special risks are insured. Then the distribution has to be based on knowledge of similar risks or on extrapolation of lesser risks.

There are three basic approaches to deriving the loss distribution: empirical, analytical, and moment based. The empirical method, presented in Section 13.2, can be used only when large data sets are available. In such cases a sufficiently smooth and accurate estimate of the cumulative distribution function (cdf) is obtained. Sometimes the application of curve fitting techniques, used to smooth the empirical distribution function, can be beneficial. If the curve can be described by a function with a tractable analytical form, then this approach becomes computationally efficient and similar to the second method.

The analytical approach is probably the most often used in practice and certainly the most frequently adopted in the actuarial literature. It reduces to finding a suitable analytical expression which fits the observed data well and which is easy to handle. Basic characteristics and estimation issues for the most popular and useful loss distributions are discussed in Section 13.3. Note that


sometimes it may be helpful to subdivide the range of the claim size distribution into intervals for which different methods are employed. For example, the small and medium size claims could be described by the empirical claim size distribution, while the large claims, for which the scarcity of data eliminates the use of the empirical approach, by an analytical loss distribution.

In some applications the exact shape of the loss distribution is not required. We may then use the moment based approach, which consists of estimating only the lowest characteristics (moments) of the distribution, like the mean and variance. However, it should be kept in mind that even the lowest three or four moments do not fully define the shape of a distribution, and therefore the fit to the observed data may be poor. Further details on the moment based approach can be found e.g. in Daykin, Pentikainen, and Pesonen (1994).

Having a large collection of distributions to choose from, we need to narrow our selection to a single model and a unique parameter estimate. The type of the objective loss distribution can be easily selected by comparing the shapes of the empirical and theoretical mean excess functions. Goodness-of-fit can be verified by plotting the corresponding limited expected value functions. Finally, the hypothesis that the modeled random event is governed by a certain loss distribution can be statistically tested. In Section 13.4 these statistical issues are thoroughly discussed.

In Section 13.5 we apply the presented tools to modeling real-world insurance data. The analysis is conducted for two datasets: (i) the PCS (Property Claim Services) dataset, covering losses resulting from catastrophic events in the USA that occurred between 1990 and 1999, and (ii) the Danish fire losses dataset, which concerns major fire losses that occurred between 1980 and 1990 and were recorded by Copenhagen Re.

13.2 Empirical Distribution Function

A natural estimate for the loss distribution is the observed (empirical) claim size distribution. However, if there have been changes in monetary values during the observation period, inflation corrected data should be used. For a sample of observations {x_1, ..., x_n} the empirical distribution function (edf) is defined as:

F_n(x) = \frac{1}{n} \#\{i : x_i \le x\},   (13.1)


Figure 13.1: Left panel: Empirical distribution function (edf) of a 10-element log-normally distributed sample with parameters µ = 0.5 and σ = 0.5, see Section 13.3.1. Right panel: Approximation of the edf by a continuous, piecewise linear function (black solid line) and the theoretical distribution function (red dotted line). STFloss01.xpl

i.e. it is a piecewise constant function with jumps of size 1/n at the points x_i. Very often, especially if the sample is large, the edf is approximated by a continuous, piecewise linear function obtained by connecting the "jump points", see Figure 13.1.

The empirical distribution function approach is appropriate only when there is a sufficiently large volume of claim data. This is rarely the case for the tail of the distribution, especially in situations where exceptionally large claims are possible. It is often advisable to divide the range of relevant values of claims into two parts, treating the claim sizes up to some limit on a discrete basis, while the tail is replaced by an analytical cdf.
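As a small sketch (in Python, with illustrative function names of our own; the book's companion code is in XploRe), the edf (13.1) and its piecewise linear approximation can be computed as follows:

```python
import numpy as np

def edf(sample, x):
    """Empirical distribution function F_n(x) = #{i : x_i <= x} / n."""
    sample = np.asarray(sample)
    return np.mean(sample <= x)

def edf_linear(sample, x):
    """Continuous, piecewise linear approximation of the edf,
    obtained by connecting the jump points of F_n."""
    xs = np.sort(np.asarray(sample))
    # np.interp linearly interpolates between the jump points
    return np.interp(x, xs, np.arange(1, len(xs) + 1) / len(xs))

sample = [0.2, 0.5, 1.1, 2.3]
print(edf(sample, 1.0))  # 0.5
```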

13.3 Analytical Methods

It is often desirable to ﬁnd an explicit analytical expression for a loss distribution. This is particularly the case if the claim statistics are too sparse to use the empirical approach. It should be stressed, however, that many standard models in statistics – like the Gaussian distribution – are unsuitable for ﬁtting the claim size distribution. The main reason for this is the strongly skewed nature of loss distributions. The log-normal, Pareto, Burr, Weibull, and gamma distributions are typical candidates for claim size distributions to be considered in applications.

13.3.1 Log-normal Distribution

Consider a random variable X which has the normal distribution with density

f_N(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}, \qquad -\infty < x < \infty.   (13.2)

Let Y = e^X, so that X = \log Y. Then the probability density function of Y is given by:

f(y) = \frac{1}{y} f_N(\log y) = \frac{1}{\sqrt{2\pi}\,\sigma y} \exp\left\{-\frac{(\log y - \mu)^2}{2\sigma^2}\right\}, \qquad y > 0,   (13.3)

where σ > 0 is the scale and −∞ < µ < ∞ is the location parameter. The distribution of Y is termed log-normal; sometimes it is also called the Cobb-Douglas law, especially when applied to econometric data. The log-normal cdf is given by:

F(y) = \Phi\left(\frac{\log y - \mu}{\sigma}\right), \qquad y > 0,   (13.4)

where Φ(·) is the standard normal (with mean 0 and variance 1) distribution function. The k-th raw moment m_k of the log-normal variate can be easily derived using results for normal random variables:

m_k = E(Y^k) = E(e^{kX}) = M_X(k) = \exp\left(\mu k + \frac{\sigma^2 k^2}{2}\right),   (13.5)


where M_X(z) is the moment generating function of the normal distribution. In particular, the mean and variance are

E(Y) = \exp\left(\mu + \frac{\sigma^2}{2}\right),   (13.6)

Var(Y) = \left\{\exp(\sigma^2) - 1\right\} \exp(2\mu + \sigma^2),   (13.7)

respectively. For both standard parameter estimation techniques the estimators are known in closed form. The method of moments estimators are given by:

\hat\mu = 2\log\left(\frac{1}{n}\sum_{i=1}^n x_i\right) - \frac{1}{2}\log\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right),   (13.8)

\hat\sigma^2 = \log\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) - 2\log\left(\frac{1}{n}\sum_{i=1}^n x_i\right),   (13.9)

while the maximum likelihood estimators are given by:

\hat\mu = \frac{1}{n}\sum_{i=1}^n \log x_i,   (13.10)

\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n \left(\log x_i - \hat\mu\right)^2.   (13.11)

Finally, the generation of a log-normal variate is straightforward: we simply take the exponent of a normal variate.

The log-normal distribution is very useful in modeling claim sizes. It is right-skewed, has a thick tail and fits many situations well. For small σ it resembles a normal distribution (see the left panel in Figure 13.2), although this is not always desirable. It is infinitely divisible and closed under scale and power transformations. However, it also suffers from some drawbacks. Most notably, the Laplace transform does not have a closed form representation and the moment generating function does not exist.
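The closed form estimators (13.8)-(13.11) translate directly into code. The following Python sketch (function names are ours; illustrative only) compares both estimators on a simulated sample:

```python
import numpy as np

def lognormal_mom(x):
    """Method of moments estimators (13.8)-(13.9) for the log-normal law."""
    x = np.asarray(x)
    m1 = x.mean()            # first sample raw moment
    m2 = (x ** 2).mean()     # second sample raw moment
    mu = 2.0 * np.log(m1) - 0.5 * np.log(m2)
    sigma2 = np.log(m2) - 2.0 * np.log(m1)
    return mu, sigma2

def lognormal_mle(x):
    """Maximum likelihood estimators (13.10)-(13.11)."""
    logs = np.log(x)
    mu = logs.mean()
    sigma2 = ((logs - mu) ** 2).mean()
    return mu, sigma2

# generating a log-normal variate: exponentiate a normal variate
rng = np.random.default_rng(42)
sample = np.exp(rng.normal(0.5, 0.5, size=100_000))
print(lognormal_mle(sample))   # close to (mu, sigma^2) = (0.5, 0.25)
```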

13.3.2 Exponential Distribution

Consider the random variable with the following density and distribution functions, respectively:

f(x) = \beta e^{-\beta x}, \qquad x > 0,   (13.12)

F(x) = 1 - e^{-\beta x}, \qquad x > 0.   (13.13)

Figure 13.2: Left panel: Log-normal probability density functions (pdfs) with parameters µ = 2 and σ = 1 (black solid line), µ = 2 and σ = 0.1 (red dotted line), and µ = 0.5 and σ = 2 (blue dashed line). Right panel: Exponential pdfs with parameter β = 0.5 (black solid line), β = 1 (red dotted line), and β = 5 (blue dashed line). STFloss02.xpl

This distribution is termed an exponential distribution with parameter (or intensity) β > 0. The Laplace transform of (13.12) is

L(t) \stackrel{def}{=} \int_0^\infty e^{-tx} f(x)\, dx = \frac{\beta}{\beta + t}, \qquad t > -\beta,   (13.14)

yielding the general formula for the k-th raw moment

m_k \stackrel{def}{=} (-1)^k \left.\frac{\partial^k L(t)}{\partial t^k}\right|_{t=0} = \frac{k!}{\beta^k}.   (13.15)

The mean and variance are thus β^{-1} and β^{-2}, respectively. The maximum likelihood estimator (equal to the method of moments estimator) for β is given by:

\hat\beta = \frac{1}{\hat m_1},   (13.16)


where

\hat m_k = \frac{1}{n}\sum_{i=1}^n x_i^k   (13.17)

is the sample k-th raw moment.

To generate an exponential random variable X with intensity β we can use the inverse transform method (L'Ecuyer, 2004; Ross, 2002). The method consists of taking a random number U distributed uniformly on the interval (0,1) and setting X = F^{-1}(U), where F^{-1}(x) = -\frac{1}{\beta}\log(1-x) is the inverse of the exponential cdf (13.13). In fact we can set X = -\frac{1}{\beta}\log U, since 1 - U has the same distribution as U.

The exponential distribution has many interesting features. For example, it has the memoryless property, i.e. P(X > x + y | X > y) = P(X > x). It also arises as the distribution of the inter-occurrence times of the events in a Poisson process, see Chapter 14. The n-th root of the Laplace transform (13.14),

L(t)^{1/n} = \left(\frac{\beta}{\beta + t}\right)^{1/n},   (13.18)

is the Laplace transform of a gamma variate (see Section 13.3.6). Thus the exponential distribution is infinitely divisible.

The exponential distribution is often used in developing models of insurance risks. This usefulness stems in large part from its many and varied tractable mathematical properties. However, a disadvantage of the exponential distribution is that its density is monotone decreasing (see the right panel in Figure 13.2), a situation which may not be appropriate in some practical situations.

13.3.3 Pareto Distribution

Suppose that a variate X has (conditional on β) an exponential distribution with mean β^{-1}. Further, suppose that β itself has a gamma distribution (see Section 13.3.6). The unconditional distribution of X is a mixture and is called the Pareto distribution. Moreover, it can be shown that if X is an exponential random variable and Y is a gamma random variable, then X/Y is a Pareto random variable.


The density and distribution functions of a Pareto variate are given by:

f(x) = \frac{\alpha\lambda^\alpha}{(\lambda + x)^{\alpha+1}}, \qquad x > 0,   (13.19)

F(x) = 1 - \left(\frac{\lambda}{\lambda + x}\right)^\alpha, \qquad x > 0,   (13.20)

respectively. Clearly, the shape parameter α and the scale parameter λ are both positive. The k-th raw moment:

m_k = \lambda^k k! \frac{\Gamma(\alpha - k)}{\Gamma(\alpha)}   (13.21)

exists only for k < α. In the above formula

\Gamma(a) \stackrel{def}{=} \int_0^\infty y^{a-1} e^{-y}\, dy   (13.22)

is the standard gamma function. The mean and variance are thus:

E(X) = \frac{\lambda}{\alpha - 1},   (13.23)

Var(X) = \frac{\alpha\lambda^2}{(\alpha - 1)^2(\alpha - 2)},   (13.24)

respectively. Note that the mean exists only for α > 1 and the variance only for α > 2. Hence, the Pareto distribution has very thick (or heavy) tails, see Figure 13.3. The method of moments estimators are given by:

\hat\alpha = \frac{2(\hat m_2 - \hat m_1^2)}{\hat m_2 - 2\hat m_1^2},   (13.25)

\hat\lambda = \frac{\hat m_1 \hat m_2}{\hat m_2 - 2\hat m_1^2},   (13.26)

where, as before, \hat m_k is the sample k-th raw moment (13.17). Note that the estimators are well defined only when \hat m_2 - 2\hat m_1^2 > 0. Unfortunately, there are no closed form expressions for the maximum likelihood estimators and they can only be evaluated numerically.

Like for many other distributions, the simulation of a Pareto variate X can be conducted via the inverse transform method. The inverse of the cdf (13.20) has a simple analytical form F^{-1}(x) = \lambda\left\{(1 - x)^{-1/\alpha} - 1\right\}. Hence, we can


0

-6

0.5

-4

1

PDF(x)

log(PDF(x))

-2

1.5

2

0

Pareto densities

0

2

4 x

6

8

-2

-1

0 log(x)

1

2

Figure 13.3: Left panel: Pareto pdfs with parameters α = 0.5 and λ = 2 (black solid line), α = 2 and λ = 0.5 (red dotted line), and α = 2 and λ = 1 (blue dashed line). Right panel: The same Pareto densities on a double logarithmic plot. The thick power-law tails of the Pareto distribution are clearly visible. STFloss03.xpl

set X = \lambda\left(U^{-1/\alpha} - 1\right), where U is distributed uniformly on the unit interval. We have to be cautious, however, when α is larger than but very close to one. The theoretical mean exists, but the right tail is very heavy. The sample mean will, in general, be significantly lower than E(X).

The Pareto law is very useful in modeling claim sizes in insurance, due in large part to its extremely thick tail. Its main drawback lies in its lack of mathematical tractability in some situations. Like for the log-normal distribution, the Laplace transform does not have a closed form representation and the moment generating function does not exist. Moreover, like the exponential pdf, the Pareto density (13.19) is monotone decreasing, which may not be adequate in some practical situations.
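The inverse transform simulation and the moment estimators (13.25)-(13.26) can be sketched in Python as follows (illustrative code; function names are ours):

```python
import numpy as np

def rpareto(alpha, lam, size, rng):
    """Pareto variates via X = lambda * (U^(-1/alpha) - 1)."""
    u = rng.random(size)
    return lam * (u ** (-1.0 / alpha) - 1.0)

def pareto_mom(x):
    """Method of moments estimators (13.25)-(13.26); well defined
    only when m2 - 2*m1^2 > 0, i.e. for alpha > 2."""
    x = np.asarray(x)
    m1, m2 = x.mean(), (x ** 2).mean()
    denom = m2 - 2.0 * m1 ** 2
    return 2.0 * (m2 - m1 ** 2) / denom, m1 * m2 / denom

rng = np.random.default_rng(1)
x = rpareto(alpha=6.0, lam=2.0, size=500_000, rng=rng)
print(pareto_mom(x))   # close to (alpha, lambda) = (6, 2)
```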

13.3.4 Burr Distribution

Experience has shown that the Pareto formula is often an appropriate model for the claim size distribution, particularly where exceptionally large claims may occur. However, there is sometimes a need to find heavy tailed distributions which offer greater flexibility than the Pareto law, including a non-monotone pdf. Such flexibility is provided by the Burr distribution and its additional shape parameter τ > 0. If Y has the Pareto distribution, then the distribution of X = Y^{1/\tau} is known as the Burr distribution, see the left panel in Figure 13.4. Its density and distribution functions are given by:

f(x) = \tau\alpha\lambda^\alpha \frac{x^{\tau-1}}{(\lambda + x^\tau)^{\alpha+1}}, \qquad x > 0,   (13.27)

F(x) = 1 - \left(\frac{\lambda}{\lambda + x^\tau}\right)^\alpha, \qquad x > 0,   (13.28)

respectively. The k-th raw moment

m_k = \frac{1}{\Gamma(\alpha)} \lambda^{k/\tau} \Gamma\left(1 + \frac{k}{\tau}\right) \Gamma\left(\alpha - \frac{k}{\tau}\right)   (13.29)

exists only for k < τα. Naturally, the Laplace transform does not exist in closed form and the distribution has no moment generating function, as was the case with the Pareto distribution. The maximum likelihood and method of moments estimators for the Burr distribution can only be evaluated numerically.

A Burr variate X can be generated using the inverse transform method. The inverse of the cdf (13.28) has a simple analytical form F^{-1}(x) = \left[\lambda\left\{(1 - x)^{-1/\alpha} - 1\right\}\right]^{1/\tau}. Hence, we can set X = \left\{\lambda\left(U^{-1/\alpha} - 1\right)\right\}^{1/\tau}, where U is distributed uniformly on the unit interval. Like in the Pareto case, we have to be cautious when τα is larger than but very close to one. The theoretical mean exists, but the right tail is very heavy. The sample mean will, in general, be significantly lower than E(X).

13.3.5 Weibull Distribution

If V is an exponential variate, then the distribution of X = V^{1/\tau}, τ > 0, is called the Weibull (or Fréchet) distribution. Its density and distribution


Figure 13.4: Left panel: Burr pdfs with parameters α = 0.5, λ = 2 and τ = 1.5 (black solid line), α = 0.5, λ = 0.5 and τ = 5 (red dotted line), and α = 2, λ = 1 and τ = 0.5 (blue dashed line). Right panel: Weibull pdfs with parameters β = 1 and τ = 0.5 (black solid line), β = 1 and τ = 2 (red dotted line), and β = 0.01 and τ = 6 (blue dashed line). STFloss04.xpl

functions are given by:

f(x) = \tau\beta x^{\tau-1} e^{-\beta x^\tau}, \qquad x > 0,   (13.30)

F(x) = 1 - e^{-\beta x^\tau}, \qquad x > 0,   (13.31)

respectively. The Weibull distribution is roughly symmetrical for the shape parameter τ ≈ 3.6. When τ is smaller the distribution is right-skewed, when τ is larger it is left-skewed, see the right panel in Figure 13.4. The k-th raw moment can be shown to be

m_k = \beta^{-k/\tau} \Gamma\left(1 + \frac{k}{\tau}\right).   (13.32)

Like for the Burr distribution, the maximum likelihood and method of moments estimators can only be evaluated numerically. Similarly, Weibull variates can be generated using the inverse transform method.

13.3.6 Gamma Distribution

The probability law with density and distribution functions given by:

f(x) = \beta(\beta x)^{\alpha-1} \frac{e^{-\beta x}}{\Gamma(\alpha)}, \qquad x > 0,   (13.33)

F(x) = \int_0^x \beta(\beta s)^{\alpha-1} \frac{e^{-\beta s}}{\Gamma(\alpha)}\, ds, \qquad x > 0,   (13.34)

where α and β are non-negative, is known as a gamma (or a Pearson's Type III) distribution, see the left panel in Figure 13.5. Moreover, for β = 1 the integral in (13.34):

\Gamma(\alpha, x) \stackrel{def}{=} \frac{1}{\Gamma(\alpha)} \int_0^x s^{\alpha-1} e^{-s}\, ds   (13.35)

is called the incomplete gamma function. If the shape parameter α = 1, the exponential distribution results. If α is a positive integer, the distribution is termed an Erlang law. If β = 1/2 and α = ν/2 then it is termed a chi-squared (χ²) distribution with ν degrees of freedom. Moreover, a mixed Poisson distribution with gamma mixing distribution is negative binomial, see Chapter 18.

The Laplace transform of the gamma distribution is given by:

L(t) = \left(\frac{\beta}{\beta + t}\right)^\alpha, \qquad t > -\beta.   (13.36)

The k-th raw moment can be easily derived from the Laplace transform:

m_k = \frac{\Gamma(\alpha + k)}{\Gamma(\alpha)\beta^k}.   (13.37)

Hence, the mean and variance are

E(X) = \frac{\alpha}{\beta},   (13.38)

Var(X) = \frac{\alpha}{\beta^2}.   (13.39)

Finally, the method of moments estimators for the gamma distribution parameters have closed form expressions:

\hat\alpha = \frac{\hat m_1^2}{\hat m_2 - \hat m_1^2},   (13.40)

\hat\beta = \frac{\hat m_1}{\hat m_2 - \hat m_1^2},   (13.41)


Figure 13.5: Left panel: Gamma pdfs with parameters α = 1 and β = 2 (black solid line), α = 2 and β = 1 (red dotted line), and α = 3 and β = 0.5 (blue dashed line). Right panel: Densities of two exponential distributions with parameters β1 = 0.5 (red dotted line) and β2 = 0.1 (blue dashed line) and of their mixture with the mixing parameter a = 0.5 (black solid line). STFloss05.xpl

but the maximum likelihood estimators can only be evaluated numerically.

Simulation of gamma variates is not as straightforward as for the distributions presented above. For α < 1 a simple but slow algorithm due to Jöhnk (1964) can be used, while for α > 1 the rejection method is more efficient (Bratley, Fox, and Schrage, 1987; Devroye, 1986).

The gamma distribution is closed under convolution, i.e. a sum of independent gamma variates with the same parameter β is again gamma distributed with this β. Hence, it is infinitely divisible. Moreover, it is right-skewed and approaches a normal distribution in the limit as α goes to infinity.

The gamma law is one of the most important distributions for modeling because it has very tractable mathematical properties. As we have seen above, it is also very useful in creating other distributions, but by itself it is rarely a reasonable model for insurance claim sizes.
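The closed form estimators (13.40)-(13.41) can be sketched as follows (illustrative Python; note that numpy's gamma sampler takes a shape α and a scale equal to 1/β):

```python
import numpy as np

def gamma_mom(x):
    """Method of moments estimators (13.40)-(13.41) for the gamma law."""
    x = np.asarray(x)
    m1, m2 = x.mean(), (x ** 2).mean()
    return m1 ** 2 / (m2 - m1 ** 2), m1 / (m2 - m1 ** 2)

rng = np.random.default_rng(3)
# numpy parametrizes the gamma law by shape alpha and scale 1/beta
x = rng.gamma(shape=3.0, scale=1.0 / 0.5, size=300_000)
print(gamma_mom(x))   # close to (alpha, beta) = (3, 0.5)
```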

13.3.7 Mixture of Exponential Distributions

Let a_1, a_2, ..., a_n denote a series of non-negative weights satisfying \sum_{i=1}^n a_i = 1. Let F_1(x), F_2(x), ..., F_n(x) denote an arbitrary sequence of exponential distribution functions given by the parameters β_1, β_2, ..., β_n, respectively. Then, the distribution function:

F(x) = \sum_{i=1}^n a_i F_i(x) = \sum_{i=1}^n a_i \left\{1 - \exp(-\beta_i x)\right\}   (13.42)

is called a mixture of n exponential distributions (exponentials). The density function of the constructed distribution is

f(x) = \sum_{i=1}^n a_i f_i(x) = \sum_{i=1}^n a_i \beta_i \exp(-\beta_i x),   (13.43)

where f_1(x), f_2(x), ..., f_n(x) denote the density functions of the input exponential distributions. Note that the mixing procedure can be applied to arbitrary distributions.

Using the technique of mixing, one can construct a wide class of distributions. The most commonly used in applications is a mixture of two exponentials, see Chapter 15. In the right panel of Figure 13.5 a pdf of a mixture of two exponentials is plotted together with the pdfs of the mixing laws.

The Laplace transform of (13.43) is

L(t) = \sum_{i=1}^n a_i \frac{\beta_i}{\beta_i + t}, \qquad t > -\min_{i=1,...,n}\{\beta_i\},   (13.44)

yielding the general formula for the k-th raw moment

m_k = \sum_{i=1}^n a_i \frac{k!}{\beta_i^k}.   (13.45)

The mean is thus \sum_{i=1}^n a_i \beta_i^{-1}. The maximum likelihood and method of moments estimators for the mixture of n (n ≥ 2) exponential distributions can only be evaluated numerically.

Simulation of variates defined by (13.42) can be performed using the composition approach (Ross, 2002). First generate a random variable I, equal to i with probability a_i, i = 1, ..., n. Then simulate an exponential variate with intensity β_I. Note that the method is general in the sense that it can be used for any set of distributions F_i.
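The composition approach can be sketched in a few lines of Python (illustrative code; the function name is ours):

```python
import numpy as np

def rmixexp(weights, betas, size, rng):
    """Composition method: first draw the component index I with
    P(I = i) = a_i, then an exponential variate with intensity beta_I."""
    weights = np.asarray(weights, dtype=float)
    betas = np.asarray(betas, dtype=float)
    comp = rng.choice(len(weights), size=size, p=weights)
    return rng.exponential(scale=1.0 / betas[comp])

rng = np.random.default_rng(4)
x = rmixexp([0.5, 0.5], [0.5, 0.1], size=300_000, rng=rng)
# the mean should approach sum(a_i / beta_i) = 0.5/0.5 + 0.5/0.1 = 6
```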

13.4 Statistical Validation Techniques

Having a large collection of distributions to choose from, we need to narrow our selection to a single model and a unique parameter estimate. The type of the objective loss distribution can be easily selected by comparing the shapes of the empirical and theoretical mean excess functions. The mean excess function, presented in Section 13.4.1, is based on the idea of conditioning a random variable given that it exceeds a certain level.

Once the distribution class is selected and the parameters are estimated using one of the available methods, the goodness-of-fit has to be tested. Probably the most natural approach consists of measuring the distance between the empirical and the fitted analytical distribution function. A group of statistics and tests based on this idea is discussed in Section 13.4.2. However, when using these tests we face the problem of comparing a discontinuous step function with a continuous non-decreasing curve. The two functions will always differ from each other in the vicinity of a step by at least half the size of the step. This problem can be overcome by integrating both distributions once, which leads to the so-called limited expected value function introduced in Section 13.4.3.

13.4.1 Mean Excess Function

For a claim amount random variable X, the mean excess function or mean residual life function is the expected payment per claim on a policy with a fixed amount deductible of x, where claims with amounts less than or equal to x are completely ignored:

e(x) = E(X - x \mid X > x) = \frac{\int_x^\infty \{1 - F(u)\}\, du}{1 - F(x)}.   (13.46)

In practice, the mean excess function e is estimated by \hat e_n based on a representative sample x_1, ..., x_n:

\hat e_n(x) = \frac{\sum_{x_i > x} x_i}{\#\{i : x_i > x\}} - x.   (13.47)

Note that in a financial risk management context, switching from the right tail to the left tail, e(x) is referred to as the expected shortfall (Weron, 2004).

When considering the shapes of mean excess functions, the exponential distribution plays a central role. It has the memoryless property, meaning that

∞ {1 − F (u)} du e(x) = E(X − x|X > x) = x . (13.46) 1 − F (x) In practice, the mean excess function e is estimated by eˆn based on a representative sample x1 , . . . , xn : xi >x xi eˆn (x) = − x. (13.47) #{i : xi > x} Note, that in a ﬁnancial risk management context, switching from the right tail to the left tail, e(x) is referred to as the expected shortfall (Weron, 2004). When considering the shapes of mean excess functions, the exponential distribution plays a central role. It has the memoryless property, meaning that


whether the information X > x is given or not, the expected value of X − x is the same as if one started at x = 0 and calculated E(X). The mean excess function for the exponential distribution is therefore constant: one in fact easily calculates that in this case e(x) = 1/β for all x > 0.

If the distribution of X is heavier-tailed than the exponential distribution, the mean excess function ultimately increases; when it is lighter-tailed, e(x) ultimately decreases. Hence, the shape of e(x) provides important information on the sub-exponential or super-exponential nature of the tail of the distribution at hand. Mean excess functions and first order approximations to the tail for the distributions discussed in Section 13.3 are given by the following formulas:

• log-normal distribution:

e(x) = \frac{\exp\left(\mu + \frac{\sigma^2}{2}\right)\left\{1 - \Phi\left(\frac{\ln x - \mu - \sigma^2}{\sigma}\right)\right\}}{1 - \Phi\left(\frac{\ln x - \mu}{\sigma}\right)} - x = \frac{\sigma^2 x}{\ln x - \mu}\{1 + o(1)\},

where o(1) stands for a term which tends to zero as x → ∞;

• exponential distribution:

e(x) = \frac{1}{\beta};

• Pareto distribution:

e(x) = \frac{\lambda + x}{\alpha - 1}, \qquad \alpha > 1;

• Burr distribution:

e(x) = \frac{\lambda^{1/\tau}\,\Gamma\left(\alpha - \frac{1}{\tau}\right)\Gamma\left(1 + \frac{1}{\tau}\right)}{\Gamma(\alpha)} \left(\frac{\lambda}{\lambda + x^\tau}\right)^{-\alpha} \left\{1 - B\left(1 + \frac{1}{\tau},\, \alpha - \frac{1}{\tau},\, \frac{x^\tau}{\lambda + x^\tau}\right)\right\} - x = \frac{x}{\alpha\tau - 1}\{1 + o(1)\}, \qquad \alpha\tau > 1,

where Γ(·) is the standard gamma function (13.22) and

B(a, b, x) \stackrel{def}{=} \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} \int_0^x y^{a-1}(1 - y)^{b-1}\, dy   (13.48)

is the beta function;

• Weibull distribution:

e(x) = \frac{\Gamma\left(1 + \frac{1}{\tau}\right)}{\beta^{1/\tau}} \exp(\beta x^\tau) \left\{1 - \Gamma\left(1 + \frac{1}{\tau}, \beta x^\tau\right)\right\} - x = \frac{x^{1-\tau}}{\beta\tau}\{1 + o(1)\},

where Γ(·,·) is the incomplete gamma function (13.35);

• gamma distribution:

e(x) = \frac{\alpha}{\beta}\cdot\frac{1 - F(x, \alpha + 1, \beta)}{1 - F(x, \alpha, \beta)} - x = \beta^{-1}\{1 + o(1)\},

where F(x, α, β) is the gamma distribution function (13.34);

• mixture of two exponential distributions with mixing parameter a:

e(x) = \frac{\frac{a}{\beta_1}\exp(-\beta_1 x) + \frac{1-a}{\beta_2}\exp(-\beta_2 x)}{a\exp(-\beta_1 x) + (1 - a)\exp(-\beta_2 x)}.

Selected shapes are also sketched in Figure 13.6.
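The empirical estimator (13.47) is a one-liner; the following sketch (illustrative Python, with our own function name) also illustrates the constant mean excess of the exponential law:

```python
import numpy as np

def mean_excess(sample, x):
    """Empirical mean excess function (13.47): the average of the
    claims exceeding x, minus x."""
    sample = np.asarray(sample)
    exceed = sample[sample > x]
    return exceed.mean() - x

# for exponential claims e(x) is constant and equal to 1/beta
rng = np.random.default_rng(5)
claims = rng.exponential(scale=2.0, size=400_000)   # beta = 0.5
print(mean_excess(claims, 1.0))   # close to 1/beta = 2
```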

13.4.2 Tests Based on the Empirical Distribution Function

A statistic measuring the difference between the empirical F_n(x) and the fitted F(x) distribution function, called an edf statistic, is based on the vertical difference between the distributions. This distance is usually measured either by a supremum or a quadratic norm (D'Agostino and Stephens, 1986).

Figure 13.6: Left panel: Shapes of the mean excess function e(x) for the log-normal (green dashed line), gamma with α < 1 (red dotted line), gamma with α > 1 (black solid line) and a mixture of two exponential distributions (blue long-dashed line). Right panel: Shapes of the mean excess function e(x) for the Pareto (green dashed line), Burr (blue long-dashed line), Weibull with τ < 1 (black solid line) and Weibull with τ > 1 (red dotted line) distributions. STFloss06.xpl

The most well-known supremum statistic:

D = \sup_x |F_n(x) - F(x)|   (13.49)

is known as the Kolmogorov or Kolmogorov-Smirnov statistic. It can also be written in terms of two supremum statistics:

D^+ = \sup_x \{F_n(x) - F(x)\} \quad \text{and} \quad D^- = \sup_x \{F(x) - F_n(x)\},

where the former is the largest vertical difference when F_n(x) is larger than F(x) and the latter is the largest vertical difference when it is smaller. The Kolmogorov statistic is then given by D = \max(D^+, D^-). A closely related statistic proposed by Kuiper is simply the sum of the two differences, i.e. V = D^+ + D^-.


The second class of measures of discrepancy is given by the Cramér-von Mises family

Q = n \int_{-\infty}^{\infty} \{F_n(x) - F(x)\}^2 \psi(x)\, dF(x),   (13.50)

where ψ(x) is a suitable function which gives weights to the squared difference \{F_n(x) - F(x)\}^2. When ψ(x) = 1 we obtain the W² statistic of Cramér-von Mises. When \psi(x) = [F(x)\{1 - F(x)\}]^{-1}, formula (13.50) yields the A² statistic of Anderson and Darling.

From the definitions of the statistics given above, suitable computing formulas must be found. This can be done by utilizing the transformation Z = F(X). When F(x) is the true distribution function of X, the random variable Z is uniformly distributed on the unit interval. Suppose that a sample x_1, ..., x_n gives values z_i = F(x_i), i = 1, ..., n. It can be easily shown that, for values z and x related by z = F(x), the corresponding vertical differences in the edf diagrams for X and for Z are equal. Consequently, edf statistics calculated from the empirical distribution function of the z_i's compared with the uniform distribution will take the same values as if they were calculated from the empirical distribution function of the x_i's, compared with F(x). This leads to the following formulas given in terms of the order statistics z_{(1)} < z_{(2)} < ... < z_{(n)}:

D^+ = \max_{1 \le i \le n} \left\{\frac{i}{n} - z_{(i)}\right\},   (13.51)

D^- = \max_{1 \le i \le n} \left\{z_{(i)} - \frac{i - 1}{n}\right\},   (13.52)

D = \max(D^+, D^-),   (13.53)

V = D^+ + D^-,   (13.54)

W^2 = \sum_{i=1}^n \left\{z_{(i)} - \frac{2i - 1}{2n}\right\}^2 + \frac{1}{12n},   (13.55)

A^2 = -n - \frac{1}{n}\sum_{i=1}^n (2i - 1)\left\{\log z_{(i)} + \log(1 - z_{(n+1-i)})\right\}   (13.56)

\phantom{A^2} = -n - \frac{1}{n}\sum_{i=1}^n \left\{(2i - 1)\log z_{(i)} + (2n + 1 - 2i)\log(1 - z_{(i)})\right\}.   (13.57)


The general test of fit is structured as follows. The null hypothesis is that a specific distribution is acceptable, whereas the alternative is that it is not:

H_0: F_n(x) = F(x; \theta), \qquad H_1: F_n(x) \neq F(x; \theta),

where θ is a vector of known parameters. Small values of the test statistic T are evidence in favor of the null hypothesis, large ones indicate its falsity. To see how unlikely such a large outcome would be if the null hypothesis were true, we calculate the p-value:

p\text{-value} = P(T \ge t),   (13.58)

where t is the test value for a given sample. It is typical to reject the null hypothesis when a small p-value is obtained.

However, we are in a situation where we want to test the hypothesis that the sample has a common distribution function F(x; θ) with unknown θ. To employ any of the edf tests we first need to estimate the parameters. It is important to recognize, however, that when the parameters are estimated from the data, the critical values for the tests of the uniform distribution (or equivalently of a fully specified distribution) must be reduced. In other words, if the value of the test statistic T is d, then the p-value is overestimated by P_U(T ≥ d). Here P_U indicates that the probability is computed under the assumption of a uniformly distributed sample. Hence, if P_U(T ≥ d) is small, then the p-value will be even smaller and the hypothesis will be rejected. However, if it is large then we have to obtain a more accurate estimate of the p-value.

Ross (2002) advocates the use of Monte Carlo simulations in this context. First the parameter vector is estimated for a given sample of size n, yielding \hat\theta, and the edf test statistic is calculated assuming that the sample is distributed according to F(x; \hat\theta), returning a value of d. Next, a sample of size n of F(x; \hat\theta)-distributed variates is generated. The parameter vector is estimated for this simulated sample, yielding \hat\theta_1, and the edf test statistic is calculated assuming that the sample is distributed according to F(x; \hat\theta_1). The simulation is repeated as many times as required to achieve a certain level of accuracy. The estimate of the p-value is obtained as the proportion of times that the test quantity is at least as large as d.

An alternative solution to the problem of unknown parameters was proposed by Stephens (1978). The half-sample approach consists of using only half the data to estimate the parameters, but then using the entire data set to conduct the
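The Monte Carlo procedure can be sketched for the exponential law with the Kolmogorov statistic (an illustrative Python sketch; the choice of distribution and statistic is ours, and the function names are hypothetical):

```python
import numpy as np

def ks_stat(sample, cdf):
    """Kolmogorov statistic D computed via the values z_i = F(x_i)."""
    z = np.sort(cdf(np.asarray(sample)))
    n = len(z)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - z), np.max(z - (i - 1) / n))

def mc_pvalue(sample, n_sim=999, rng=None):
    """Monte Carlo p-value for an exponential fit; beta is re-estimated
    on every simulated sample, as the procedure requires."""
    rng = np.random.default_rng() if rng is None else rng
    sample = np.asarray(sample)
    n = len(sample)
    beta_hat = 1.0 / sample.mean()                      # MLE (13.16)
    d = ks_stat(sample, lambda x: 1 - np.exp(-beta_hat * x))
    exceed = 0
    for _ in range(n_sim):
        sim = rng.exponential(scale=1.0 / beta_hat, size=n)
        b = 1.0 / sim.mean()                            # re-estimate
        if ks_stat(sim, lambda x, b=b: 1 - np.exp(-b * x)) >= d:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)

rng = np.random.default_rng(8)
data = rng.exponential(scale=3.0, size=100)
print(mc_pvalue(data, n_sim=199, rng=rng))
```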


test. In this case, the critical values for the uniform distribution can be applied, at least asymptotically. The quadratic edf tests seem to converge fairly rapidly to their asymptotic distributions (D'Agostino and Stephens, 1986). Although the method is much faster than the Monte Carlo approach, it is not invariant: depending on the choice of the half-sample, different test values will be obtained, and there is no way of increasing the accuracy.

As a by-product, the edf tests supply us with a natural technique of estimating the parameter vector θ. We can simply find the θ̂* that minimizes a selected edf statistic. Out of the four presented statistics, A² is the most powerful when the fitted distribution departs from the true distribution in the tails (D'Agostino and Stephens, 1986). Since the fit in the tails is of crucial importance in most actuarial applications, A² is the recommended statistic for the estimation scheme.
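The Monte Carlo estimate of the p-value described above can be sketched in a few lines of Python. The Kolmogorov–Smirnov statistic D and an exponential model are our illustrative choices here, as are all function names; any of the edf statistics and any of the distributions of Section 13.3 can be substituted:

```python
import numpy as np

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov statistic D = sup_x |F_n(x) - F(x)|."""
    x = np.sort(sample)
    n = len(x)
    u = cdf(x)
    return max(np.max(np.arange(1, n + 1) / n - u),
               np.max(u - np.arange(n) / n))

def mc_pvalue(sample, fit, cdf, rvs, n_sim=1000, seed=None):
    """Monte Carlo p-value for an edf test with estimated parameters:
    re-estimate theta on every simulated sample and count how often the
    simulated statistic is at least as large as the observed value d."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    theta = fit(sample)
    d = ks_statistic(sample, lambda x: cdf(x, theta))
    count = 0
    for _ in range(n_sim):
        sim = rvs(n, theta, rng)
        theta_i = fit(sim)               # parameters re-estimated every time
        count += ks_statistic(sim, lambda x: cdf(x, theta_i)) >= d
    return count / n_sim

# exponential model: theta is the intensity beta, F(x) = 1 - exp(-beta x)
fit = lambda s: 1.0 / np.mean(s)
cdf = lambda x, b: 1.0 - np.exp(-b * x)
rvs = lambda n, b, rng: rng.exponential(1.0 / b, n)

sample = np.random.default_rng(42).exponential(0.5, 200)
p = mc_pvalue(sample, fit, cdf, rvs, n_sim=200, seed=7)
```

For a sample actually drawn from the hypothesized law the procedure should typically return a moderate to large p-value.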

13.4.3 Limited Expected Value Function

The limited expected value function L of a claim size variable X, or of the corresponding cdf F(x), is defined by

L(x) = E{min(X, x)} = ∫_0^x y dF(y) + x {1 − F(x)},   x > 0.   (13.59)

The value of the function L at point x is equal to the expectation of the cdf F(x) truncated at this point. In other words, it represents the expected amount per claim retained by the insured on a policy with a fixed amount deductible of x. The empirical estimate is defined as follows:

L̂_n(x) = (1/n) { Σ_{x_j < x} x_j + Σ_{x_j ≥ x} x }.   (13.60)

In order to fit the limited expected value function L of an analytical distribution to the observed data, the estimate L̂_n is first constructed. Thereafter one tries to find a suitable analytical cdf F, such that the corresponding limited expected value function L is as close to the observed L̂_n as possible.

The limited expected value function has the following important properties:

1. the graph of L is concave, continuous and increasing;


2. L(x) → E(X), as x → ∞;

3. F(x) = 1 − L′(x), where L′(x) is the derivative of the function L at point x; if F is discontinuous at x, then the equality holds true for the right-hand derivative L′(x+).

A reason why the limited expected value function is a particularly suitable tool for our purposes is that it represents the claim size distribution in the monetary dimension. For example, we have L(∞) = E(X) if it exists. The cdf F, on the other hand, operates on the probability scale, i.e. takes values between 0 and 1. Therefore, it is usually difficult to see, by looking only at F(x), how sensitive the price for the insurance (the premium) is to changes in the values of F, while the limited expected value function shows immediately how different parts of the claim size cdf contribute to the premium (see Chapter 19 for information on various premium calculation principles). Apart from curve-fitting purposes, the function L will turn out to be a very useful concept in dealing with deductibles in Chapter 19.

It is also worth mentioning that there exists a connection between the limited expected value function and the mean excess function:

E(X) = L(x) + P(X > x) e(x).   (13.61)

The limited expected value functions for all distributions considered in this chapter are given by:

• log-normal distribution:

L(x) = exp(μ + σ²/2) Φ((ln x − μ − σ²)/σ) + x {1 − Φ((ln x − μ)/σ)};

• exponential distribution:

L(x) = (1/β) {1 − exp(−βx)};

• Pareto distribution:

L(x) = {λ − λ^α (λ + x)^(1−α)} / (α − 1);

• Burr distribution:

L(x) = (λ^(1/τ) Γ(α − 1/τ) Γ(1 + 1/τ) / Γ(α)) B(1 + 1/τ, α − 1/τ; x^τ/(λ + x^τ)) + x {λ/(λ + x^τ)}^α;

• Weibull distribution:

L(x) = (Γ(1 + 1/τ) / β^(1/τ)) Γ(1 + 1/τ, βx^τ) + x exp(−βx^τ);

• gamma distribution:

L(x) = (α/β) F(x; α + 1, β) + x {1 − F(x; α, β)};

• mixture of two exponential distributions:

L(x) = (a/β_1) {1 − exp(−β_1 x)} + ((1 − a)/β_2) {1 − exp(−β_2 x)}.

From the curve-fitting point of view the use of the limited expected value function has the advantage, compared with the use of the cdfs, that both the analytical function L and the corresponding observed function L̂_n, based on the observed discrete cdf, are continuous and concave, whereas the observed claim size cdf F_n is a discontinuous step function. Property (3) implies that the limited expected value function determines the corresponding cdf uniquely. When the limited expected value functions of two distributions are close to each other, not only are the mean values of the distributions close to each other, but the whole distributions as well.
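The estimator (13.60) is straightforward to compute. A minimal Python sketch, using the exponential law of Section 13.3 as the analytical benchmark (all names and parameter values are our illustrative choices):

```python
import numpy as np

def empirical_levf(data, grid):
    """Empirical limited expected value function (13.60):
    L_n(x) = (1/n) * {sum of claims below x + x * number of claims >= x}."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    return np.array([(data[data < x].sum() + x * (data >= x).sum()) / n
                     for x in np.atleast_1d(grid)])

def exponential_levf(grid, beta):
    """Analytical counterpart for the exponential law: (1/beta)(1 - exp(-beta x))."""
    return (1.0 - np.exp(-beta * np.asarray(grid, dtype=float))) / beta

rng = np.random.default_rng(0)
claims = rng.exponential(2.0, 100_000)          # exponential claims, beta = 0.5
grid = np.linspace(0.1, 10.0, 50)
levf = empirical_levf(claims, grid)
# both curves are continuous, concave and increasing, and should be close
err = np.max(np.abs(levf - exponential_levf(grid, 0.5)))
```

Both curves increase towards E(X) = 1/β = 2, in line with property 2.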

13.5 Applications

In this section we illustrate some of the methods described earlier in the chapter. We conduct the analysis for two datasets. The first is the PCS (Property Claim Services; see the Insurance Services Office Inc. (ISO) web site: www.iso.com/products/2800/prod2801.html) dataset covering losses resulting from natural catastrophic events in the USA that occurred between 1990 and 1999.


The second is the Danish fire losses dataset, which concerns major fire losses in Danish Krone (DKK) that occurred between 1980 and 1990 and were recorded by Copenhagen Re. Here we consider only losses in profits. The overall fire losses were analyzed by Embrechts, Klüppelberg, and Mikosch (1997). The Danish fire losses dataset has already been adjusted for inflation. However, the PCS dataset consists of raw data. Since the data have been collected over a considerable period of time, it is important to bring the values onto a common basis by means of a suitably chosen index. The choice of the index depends on the line of insurance. For example, an index of the cost of construction prices may be suitable for fire and other property insurance, an earnings index for life and accident insurance, and a general price index may be appropriate when a single index is required for several lines or for the whole portfolio. Here we adjust the PCS dataset using the Consumer Price Index provided by the U.S. Department of Labor. Note that the same raw catastrophe data, adjusted instead using the discount window borrowing rate, i.e. the simple interest rate at which depository institutions borrow from the Federal Reserve Bank of New York, was analyzed by Burnecki, Härdle, and Weron (2004). A related dataset containing the national and regional PCS indices for losses resulting from catastrophic events in the USA was studied by Burnecki, Kukla, and Weron (2000).

As suggested in the preceding section, we first look for the appropriate shape of the distribution. To this end we plot the empirical mean excess functions for the analyzed datasets, see Figure 13.7. Both in the case of the PCS natural catastrophe losses and the Danish fire losses the data show a super-exponential pattern, suggesting a log-normal, Pareto or Burr distribution as most adequate for modeling. Hence, in the sequel we calibrate these three distributions.
We apply two estimation schemes: maximum likelihood and A² statistic minimization. Out of the three fitted distributions only the log-normal has closed-form expressions for the maximum likelihood estimators. Parameter calibration for the remaining distributions and the A² minimization scheme is carried out via a simplex numerical optimization routine. A limited simulation study suggests that the A² minimization scheme tends to return lower values of all edf test statistics than maximum likelihood estimation. Hence, it is used exclusively in the further analysis. The results of parameter estimation and hypothesis testing for the PCS loss amounts are presented in Table 13.1. The Burr distribution with parameters α = 0.4801, λ = 3.9495 · 10^16, and τ = 2.1524 yields the best results and passes all tests at the 2.5% level. The log-normal distribution with parameters


Figure 13.7: The empirical mean excess function ê_n(x) for the PCS catastrophe data (left panel) and the Danish fire data (right panel). STFloss07.xpl

µ = 18.3806 and σ = 1.1052 comes in second, however, with an unacceptable fit as tested by the Anderson-Darling statistic. As expected, the remaining distributions presented in Section 13.3 return even worse fits. Thus we suggest choosing the Burr distribution as a model for the PCS loss amounts. In the left panel of Figure 13.8 we present the empirical and analytical limited expected value functions for the three fitted distributions. The plot justifies the choice of the Burr distribution.

The results of parameter estimation and hypothesis testing for the Danish fire loss amounts are presented in Table 13.2. The log-normal distribution with parameters µ = 12.6645 and σ = 1.3981 returns the best results. It is the only distribution that passes any of the four applied tests (D, V, W², and A²) at a reasonable level. The Burr and Pareto laws yield worse fits as the tails of the edf are lighter than power-law tails. As expected, the remaining distributions presented in Section 13.3 return even worse fits. In the right panel of Figure 13.8 we depict the empirical and analytical limited expected value functions for the three fitted distributions. Unfortunately, no definitive conclusions can be drawn regarding the choice of the distribution. Hence, we suggest using the log-normal distribution as a model for the Danish fire loss amounts.


Figure 13.8: The empirical (black solid line) and analytical limited expected value functions (LEVFs) for the log-normal (green dashed line), Pareto (blue dotted line), and Burr (red long-dashed line) distributions for the PCS catastrophe data (left panel) and the Danish fire data (right panel). STFloss08.xpl

Table 13.1: Parameter estimates obtained via the A² minimization scheme and test statistics for the catastrophe loss amounts. The corresponding p-values, based on 1000 simulated samples, are given in parentheses.

              log-normal        Pareto               Burr
Parameters:   µ = 18.3806       α = 3.4081           α = 0.4801
              σ = 1.1052        λ = 4.4767·10^8      λ = 3.9495·10^16
                                                     τ = 2.1524
Tests:
  D           0.0440 (0.033)    0.1049 (<0.005)      0.0366 (0.077)
  V           0.0786 (0.022)    0.1692 (<0.005)      0.0703 (0.038)
  W²          0.1353 (0.006)    0.7042 (<0.005)      0.0626 (0.059)
  A²          1.8606 (<0.005)   6.1160 (<0.005)      0.5097 (0.027)

STFloss09.xpl


Table 13.2: Parameter estimates obtained via the A² minimization scheme and test statistics for the fire loss amounts. The corresponding p-values, based on 1000 simulated samples, are given in parentheses.

              log-normal        Pareto               Burr
Parameters:   µ = 12.6645       α = 1.7439           α = 0.8804
              σ = 1.3981        λ = 6.7522·10^5      λ = 8.4202·10^6
                                                     τ = 1.2749
Tests:
  D           0.0381 (0.008)    0.0471 (<0.005)      0.0387 (<0.005)
  V           0.0676 (0.005)    0.0779 (<0.005)      0.0724 (<0.005)
  W²          0.0921 (0.049)    0.2119 (<0.005)      0.1117 (0.007)
  A²          0.7567 (0.024)    1.9097 (<0.005)      0.6999 (0.005)

STFloss10.xpl

Bibliography

Bratley, P., Fox, B. L., and Schrage, L. E. (1987). A Guide to Simulation, Springer-Verlag, New York.

Burnecki, K., Härdle, W., and Weron, R. (2004). Simulation of risk processes, in J. Teugels, B. Sundt (eds.) Encyclopedia of Actuarial Science, Wiley, Chichester.

Burnecki, K., Kukla, G., and Weron, R. (2000). Property insurance loss distributions, Physica A 287: 269-278.

D'Agostino, R. B. and Stephens, M. A. (1986). Goodness-of-Fit Techniques, Marcel Dekker, New York.

Daykin, C. D., Pentikainen, T., and Pesonen, M. (1994). Practical Risk Theory for Actuaries, Chapman, London.

Devroye, L. (1986). Non-Uniform Random Variate Generation, Springer-Verlag, New York.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer.

Hogg, R. and Klugman, S. A. (1984). Loss Distributions, Wiley, New York.

Jöhnk, M. D. (1964). Erzeugung von Betaverteilten und Gammaverteilten Zufallszahlen, Metrika 8: 5-15.

Klugman, S. A., Panjer, H. H., and Willmot, G. E. (1998). Loss Models: From Data to Decisions, Wiley, New York.

L'Ecuyer, P. (2004). Random Number Generation, in J. E. Gentle, W. Härdle, Y. Mori (eds.) Handbook of Computational Statistics, Springer, Berlin, 35-70.

Panjer, H. H. and Willmot, G. E. (1992). Insurance Risk Models, Society of Actuaries, Chicago.

Ross, S. (2002). Simulation, Academic Press, San Diego.

Stephens, M. A. (1978). On the half-sample method for goodness-of-fit, Journal of the Royal Statistical Society B 40: 64-70.


Weron, R. (2004). Computationally Intensive Value at Risk Calculations, in J. E. Gentle, W. Härdle, Y. Mori (eds.) Handbook of Computational Statistics, Springer, Berlin, 911-950.

14 Modeling of the Risk Process

Krzysztof Burnecki and Rafał Weron

14.1 Introduction

An actuarial risk model is a mathematical description of the behavior of a collection of risks generated by an insurance portfolio. It is not intended to replace sound actuarial judgment. In fact, a well formulated model is consistent with and adds to intuition, but cannot and should not replace experience and insight (Willmot, 2001). Even though we cannot hope to identify all influential factors relevant to future claims, we can try to specify the most important ones.

A typical model for insurance risk, the so-called collective risk model, has two main components: one characterizing the frequency (or incidence) of events and another describing the severity (or size or amount) of gain or loss resulting from the occurrence of an event, see also Chapter 18. The collective risk model is often used in health insurance and in general insurance, whenever the main risk components are the number of insurance claims and the amount of the claims. It can also be used for modeling other non-insurance product risks, such as credit and operational risk (Embrechts, Kaufmann, and Samorodnitsky, 2004). In the former, for example, the main risk components are the number of credit events (either defaults or downgrades) and the amount lost as a result of the credit event.

The stochastic nature of both the incidence and severity of claims is a fundamental component of a realistic model. Hence, in its classical form the model for insurance risk is defined as follows (Embrechts, Klüppelberg, and Mikosch, 1997; Grandell, 1991). If (Ω, F, P) is a probability space carrying (i) a point process {N_t}_{t≥0}, i.e. an integer-valued stochastic process with N_0 = 0 a.s., N_t < ∞ for each t < ∞ and nondecreasing realizations, and (ii) an independent sequence {X_k}_{k=1}^∞ of positive independent and identically distributed


(i.i.d.) random variables, then the risk process {R_t}_{t≥0} is given by

R_t = u + c(t) − Σ_{i=1}^{N_t} X_i.   (14.1)

The non-negative constant u stands for the initial capital of the insurance company. The company sells insurance policies and receives a premium according to c(t). In the classical model c is constant, but in a more general setup it can be a deterministic or even a stochastic function of time. Claims form the aggregate claim loss {Σ_{i=1}^{N_t} X_i}. The claim severities are described by the random sequence {X_k} and the number of claims in the interval (0, t] is modeled by the point process N_t, often called the claim arrival process.

The modeling of the aggregate loss process consists of modeling the point process {N_t} and the claim size sequence {X_k}. Both processes are usually assumed to be independent, hence they can be treated independently of each other. The modeling of claim severities was covered in detail in Chapter 13. The focus of this chapter is therefore on modeling the claim arrival point process {N_t}.

The simplicity of the risk process (14.1) is only illusory. In most cases no analytical conclusions regarding the time evolution of the process can be drawn. However, it is this evolution that is important for practitioners, who have to calculate functionals of the risk process like the expected time to ruin and the ruin probability, see Chapter 15. All this calls for numerical simulation schemes (Burnecki, Härdle, and Weron, 2004).

In Section 14.2 we present efficient algorithms for five classes of claim arrival point processes. Next, in Section 14.3 we apply some of them to modeling real-world risk processes. The analysis is conducted for the same two datasets as in Chapter 13: (i) the PCS (Property Claim Services) dataset covering losses resulting from catastrophic events in the USA that occurred between 1990 and 1999 and (ii) the Danish fire losses dataset, which concerns major fire losses in profits that occurred between 1980 and 1990 and were recorded by Copenhagen Re.
It is important to note that the choice of the model influences both the ruin probability (see Chapter 15) and the reinsurance strategy of the company (see Chapter 20), hence the selection has to be made with great care.

14.2 Claim Arrival Processes


In this section we focus on efficient simulation of the claim arrival point process {N_t}. This process can be simulated either via the arrival times {T_i}, i.e. the moments when the ith claim occurs, or via the inter-arrival times (or waiting times) W_i = T_i − T_{i−1}, i.e. the time periods between successive claims. Note that in terms of the W_i's the claim arrival point process is given by N_t = Σ_{n=1}^∞ I(T_n ≤ t). In what follows we discuss five prominent examples of {N_t}, namely the classical (homogeneous) Poisson process, the non-homogeneous Poisson process, the mixed Poisson process, the Cox process (also called the doubly stochastic Poisson process), and the renewal process.

14.2.1 Homogeneous Poisson Process

The most common and best known claim arrival point process is the homogeneous Poisson process (HPP), with stationary and independent increments and the number of claims in a given time interval governed by the Poisson law. While this process is normally appropriate in connection with life insurance modeling, it often suffers from the disadvantage of providing an inadequate fit to insurance data in other coverages. In particular, it tends to understate the true variability inherent in these situations.

Formally, a continuous-time stochastic process {N_t : t ≥ 0} is a (homogeneous) Poisson process with intensity (or rate) λ > 0 if (i) {N_t} is a point process, and (ii) the waiting times W_i are independent and identically distributed and follow an exponential law with intensity λ, i.e. with mean 1/λ (see Chapter 13, where the properties and simulation scheme for the exponential distribution were discussed). This definition naturally leads to a simulation scheme for the successive arrival times T_1, T_2, ..., T_n of the Poisson process:

Algorithm HPP1
Step 1: set T_0 = 0
Step 2: for i = 1, 2, ..., n do
  Step 2a: generate an exponential random variable E with intensity λ
  Step 2b: set T_i = T_{i−1} + E


Alternatively, the homogeneous Poisson process can be simulated by applying the following property (Rolski et al., 1999): given that N_t = n, the n occurrence times T_1, T_2, ..., T_n have the same distribution as the order statistics corresponding to n i.i.d. random variables uniformly distributed on the interval (0, t]. Hence, the arrival times of the HPP on the interval (0, t] can be generated as follows:

Algorithm HPP2
Step 1: generate a Poisson random variable N with intensity λt
Step 2: generate N random variables U_i distributed uniformly on (0, 1), i.e. U_i ∼ U(0, 1), i = 1, 2, ..., N
Step 3: set (T_1, T_2, ..., T_N) = t · sort{U_1, U_2, ..., U_N}

In general, this algorithm will run faster than the previous one as it does not involve a loop. The only two inherent numerical difficulties involve generating a Poisson random variable and sorting the vector of occurrence times. Whereas the latter problem can be solved via the standard quicksort algorithm, the former requires more attention. A simple algorithm for generating a Poisson random variable with mean λ would take N = min{n : U_1 · ... · U_n < exp(−λ)} − 1, which is a consequence of the properties of the Poisson process (for a derivation see Ross, 2002). However, for large λ this method can become slow. Faster, but more complicated, methods have been proposed in the literature. Ahrens and Dieter (1982) suggested a generator which utilizes acceptance-complement with truncated normal variates whenever λ > 10 and reverts to table-aided inversion otherwise. Stadlober (1989) adapted the ratio of uniforms method for λ > 5 and classical inversion for small λ's. Hörmann (1993) advocated the transformed rejection method, which is a combination of the inversion and rejection algorithms.

Sample trajectories of homogeneous and non-homogeneous Poisson processes are plotted in Figure 14.1. The dotted green line is a HPP with intensity λ = 1 (left panel) and λ = 10 (right panel). Clearly the latter jumps more often.
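Both algorithms can be sketched in a few lines of Python; function names are ours, and HPP2 exploits the conditional-uniform property through a single vectorized sort:

```python
import numpy as np

def hpp_arrival_times(lam, n, seed=None):
    """Algorithm HPP1: arrival times as cumulative sums of Exp(lam) waiting times."""
    rng = np.random.default_rng(seed)
    return np.cumsum(rng.exponential(1.0 / lam, n))

def hpp_on_interval(lam, t, seed=None):
    """Algorithm HPP2: N ~ Poisson(lam * t) arrivals on (0, t] as sorted, scaled uniforms."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam * t)
    return t * np.sort(rng.uniform(size=n))

T1 = hpp_arrival_times(2.0, 1000, seed=1)    # first 1000 arrivals, intensity 2
T2 = hpp_on_interval(10.0, 100.0, seed=2)    # all arrivals on (0, 100]
```

As noted above, the second routine avoids an explicit loop and is typically faster.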
Since for the HPP the expected value is E(N_t) = λt, it is natural to define the premium function in this case as c(t) = ct, where c = (1 + θ)µλ, µ = E(X_k), and θ > 0 is the relative safety loading which "guarantees" survival of the insurance company. With such a choice of the premium function we obtain the classical form of the risk process.
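Putting the pieces together, one trajectory of the classical risk process (14.1) with HPP claim arrivals can be sketched as follows; the exponential claim law and all parameter values are our illustrative choices:

```python
import numpy as np

def risk_process(u, c_rate, claims, arrivals, t):
    """R_t = u + c*t - sum of claims arriving up to time t, cf. (14.1)."""
    return u + c_rate * t - claims[arrivals <= t].sum()

rng = np.random.default_rng(0)
lam, mu, theta = 10.0, 1.0, 0.3              # intensity, mean claim, safety loading
u, horizon = 20.0, 100.0                     # initial capital, time horizon
c_rate = (1.0 + theta) * mu * lam            # classical premium rate c

n = rng.poisson(lam * horizon)               # HPP2: number of claims on (0, horizon]
arrivals = horizon * np.sort(rng.uniform(size=n))
claims = rng.exponential(mu, n)              # exponential claim severities
r_end = risk_process(u, c_rate, claims, arrivals, horizon)
```

With θ > 0 the process drifts upward on average, since E(R_t) = u + θµλt.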


Figure 14.1: Left panel: Sample trajectories of a NHPP with linear intensity λ(t) = a + b·t for a = 1 and b = 1 (solid blue line), b = 0.1 (dashed red line), and b = 0 (dotted green line). Note that the latter is in fact a HPP. Right panel: Sample trajectories of a NHPP with periodic intensity λ(t) = a + b·cos(2πt) for a = 10 and b = 10 (solid blue line), b = 1 (dashed red line), and b = 0 (dotted green line). Again, the latter is a HPP. STFrisk01.xpl

14.2.2 Non-homogeneous Poisson Process

The choice of a homogeneous Poisson process implies that the size of the portfolio cannot increase or decrease. In addition, it cannot describe situations, like in motor insurance, where claim occurrence epochs are likely to depend on the time of the year or of the week. For modeling such phenomena the non-homogeneous Poisson process (NHPP) is much better suited than the homogeneous one. The NHPP can be thought of as a Poisson process with a variable intensity defined by the deterministic intensity (rate) function λ(t). Note that the increments of a NHPP do not have to be stationary. In the special case when λ(t) takes the constant value λ, the NHPP reduces to the homogeneous Poisson process with intensity λ.


The simulation of the process in the non-homogeneous case is slightly more complicated than in the homogeneous one. The first approach, known as the thinning or rejection method, is based on the following fact (Bratley, Fox, and Schrage, 1987; Ross, 2002). Suppose that there exists a constant λ such that λ(t) ≤ λ for all t. Let T*_1, T*_2, T*_3, ... be the successive arrival times of a homogeneous Poisson process with intensity λ. If we accept the ith arrival time T*_i with probability λ(T*_i)/λ, independently of all other arrivals, then the sequence T_1, T_2, ... of the accepted arrival times (in ascending order) forms a sequence of the arrival times of a non-homogeneous Poisson process with the rate function λ(t). The resulting algorithm reads as follows:

Algorithm NHPP1 (Thinning)
Step 1: set T_0 = 0 and T* = 0
Step 2: for i = 1, 2, ..., n do
  Step 2a: generate an exponential random variable E with intensity λ
  Step 2b: set T* = T* + E
  Step 2c: generate a random variable U distributed uniformly on (0, 1)
  Step 2d: if U > λ(T*)/λ then return to step 2a (→ reject the arrival time)
           else set T_i = T* (→ accept the arrival time)

As mentioned in the previous section, the inter-arrival times of a homogeneous Poisson process have an exponential distribution. Therefore steps 2a–2b generate the next arrival time of a homogeneous Poisson process with intensity λ. Steps 2c–2d amount to rejecting (hence the name of the method) or accepting a particular arrival as part of the thinned process (hence the alternative name). Note that in the above algorithm we generate a HPP with intensity λ employing the HPP1 algorithm. We can also generate it using the HPP2 algorithm, which is in general much faster.

The second approach is based on the observation (Grandell, 1991) that for a NHPP with rate function λ(t) the increment N_t − N_s, 0 < s < t, is distributed as a Poisson random variable with intensity λ̃ = ∫_s^t λ(u) du. Hence, the cumulative distribution function F_s of the waiting time W_s is given by

F_s(t) = P(W_s ≤ t) = 1 − P(W_s > t) = 1 − P(N_{s+t} − N_s = 0)
       = 1 − exp(−∫_s^{s+t} λ(u) du) = 1 − exp(−∫_0^t λ(s + v) dv).


If the function λ(t) is such that we can find a formula for the inverse F_s^{−1} for each s, we can generate a random quantity X with the distribution F_s by using the inverse transform method. The algorithm, often called the integration method, can be summarized as follows:

Algorithm NHPP2 (Integration)
Step 1: set T_0 = 0
Step 2: for i = 1, 2, ..., n do
  Step 2a: generate a random variable U distributed uniformly on (0, 1)
  Step 2b: set T_i = T_{i−1} + F_s^{−1}(U), where s = T_{i−1}

The third approach utilizes a generalization of the property used in the HPP2 algorithm. Given that N_t = n, the n occurrence times T_1, T_2, ..., T_n of the non-homogeneous Poisson process have the same distribution as the order statistics corresponding to n independent random variables distributed on the interval (0, t], each with the common density function f(v) = λ(v)/∫_0^t λ(u) du, where v ∈ (0, t]. Hence, the arrival times of the NHPP on the interval (0, t] can be generated as follows:

Algorithm NHPP3
Step 1: generate a Poisson random variable N with intensity ∫_0^t λ(u) du
Step 2: generate N random variables V_i, i = 1, 2, ..., N, with density f(v) = λ(v)/∫_0^t λ(u) du
Step 3: set (T_1, T_2, ..., T_N) = sort{V_1, V_2, ..., V_N}

The performance of the algorithm is highly dependent on the efficiency of the computer generator of random variables with density f(v). Moreover, like in the homogeneous case, this algorithm has the advantage of not invoking a loop. Hence, it performs faster than the former two methods if λ(u) is a nicely integrable function.

Sample trajectories of non-homogeneous Poisson processes are plotted in Figure 14.1. In the left panel realizations of a NHPP with linear intensity λ(t) = a + b·t are presented for the same value of parameter a. Note that the higher the value of parameter b, the more pronounced is the increase in the intensity of


the process. In the right panel realizations of a NHPP with periodic intensity λ(t) = a + b·cos(2πt) are illustrated, again for the same value of parameter a. This time, for high values of parameter b the events exhibit a seasonal behavior. The process has periods of high activity (grouped around natural values of t) and periods of low activity, where almost no jumps take place. Finally, we note that since in the non-homogeneous case the expected value is E(N_t) = ∫_0^t λ(s) ds, it is natural to define the premium function as c(t) = (1 + θ)µ ∫_0^t λ(s) ds.
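The thinning scheme NHPP1 can be sketched directly in Python; for NHPP2 we use the linear intensity λ(t) = a + b·t of Figure 14.1, for which the inverse F_s^{−1} is available in closed form (the derivation and all names are ours):

```python
import numpy as np

def nhpp_thinning(rate, rate_max, t_max, seed=None):
    """Algorithm NHPP1: thin a HPP with intensity rate_max >= rate(t) for all t."""
    rng = np.random.default_rng(seed)
    arrivals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)     # candidate HPP arrival (steps 2a-2b)
        if t > t_max:
            return np.array(arrivals)
        if rng.uniform() <= rate(t) / rate_max:  # steps 2c-2d: accept w.p. rate(t)/rate_max
            arrivals.append(t)

def nhpp_integration_linear(a, b, n, seed=None):
    """Algorithm NHPP2 for lambda(t) = a + b*t: the waiting time w solves
    a*w + b*s*w + b*w**2/2 = -log(1 - U), a quadratic in w."""
    rng = np.random.default_rng(seed)
    t, s = np.empty(n), 0.0
    for i in range(n):
        e = rng.exponential()                    # -log(1 - U) is Exp(1)
        c = a + b * s
        s += (np.sqrt(c * c + 2.0 * b * e) - c) / b if b > 0 else e / a
        t[i] = s
    return t

# periodic intensity of Figure 14.1 with a = 10, b = 1, bounded by rate_max = 11
arr = nhpp_thinning(lambda t: 10.0 + np.cos(2 * np.pi * t), 11.0, 10.0, seed=0)
lin = nhpp_integration_linear(1.0, 1.0, 500, seed=1)
```

The tighter the bound rate_max, the fewer candidate arrivals are rejected, so a sharp bound on λ(t) pays off.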

14.2.3 Mixed Poisson Process

In many situations the portfolio of an insurance company is diversified in the sense that the risks associated with different groups of policy holders are significantly different. For example, in motor insurance we might want to distinguish between male and female drivers or between drivers of different ages. We would then assume that the claims come from a heterogeneous group of clients, each one of them generating claims according to a Poisson distribution with the intensity varying from one group to another. Another practical reason for considering yet another generalization of the classical Poisson process is the following. If we measure the volatility of risk processes, expressed in terms of the index of dispersion Var(N_t)/E(N_t), then very often we obtain estimates in excess of one, the value obtained for the homogeneous and the non-homogeneous cases. These empirical observations led to the introduction of the mixed Poisson process (Ammeter, 1948).

In the mixed Poisson process the distribution of {N_t} is given by a mixture of Poisson processes (Rolski et al., 1999). This means that, conditioning on an extrinsic random variable Λ (called a structure variable), the process {N_t} behaves like a homogeneous Poisson process. Since for each t the claim numbers {N_t} up to time t are Poisson variates with intensity Λt, it is now reasonable to consider the premium function of the form c(t) = (1 + θ)µΛt. The process can be generated in the following way: first a realization of a non-negative random variable Λ is generated and, conditioned upon its realization, {N_t} is constructed as a homogeneous Poisson process with that realization as its intensity. Both the HPP1 and the HPP2 algorithm can be utilized. Making use of the former we can write:

Algorithm MPP1
Step 1: generate a realization λ of the random intensity Λ


Step 2: set T_0 = 0
Step 3: for i = 1, 2, ..., n do
  Step 3a: generate an exponential random variable E with intensity λ
  Step 3b: set T_i = T_{i−1} + E
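A sketch of MPP1 in Python; the gamma-distributed structure variable is our illustrative choice (with a gamma mixing law the claim counts N_t follow a negative binomial distribution, whose index of dispersion exceeds one):

```python
import numpy as np

def mixed_poisson_arrival_times(structure_rvs, n, seed=None):
    """Algorithm MPP1: draw the structure variable Lambda once, then run
    a homogeneous Poisson process with that realization as its intensity."""
    rng = np.random.default_rng(seed)
    lam = structure_rvs(rng)                 # step 1: realization of Lambda
    return np.cumsum(rng.exponential(1.0 / lam, n)), lam

# gamma structure variable with shape 2 and scale 5, i.e. mean intensity 10
times, lam = mixed_poisson_arrival_times(lambda rng: rng.gamma(2.0, 5.0), 200, seed=0)
```

Note that Λ is drawn only once per trajectory; conditionally on that draw the path is an ordinary HPP.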

14.2.4 Cox Process

The Cox process, or doubly stochastic Poisson process, provides ﬂexibility by letting the intensity not only depend on time but also by allowing it to be a stochastic process. Therefore, the doubly stochastic Poisson process can be viewed as a two-step randomization procedure. An intensity process {Λ(t)} is used to generate another process {Nt } by acting as its intensity. That is, {Nt } is a Poisson process conditional on {Λ(t)} which itself is a stochastic process. If {Λ(t)} is deterministic, then {Nt } is a non-homogeneous Poisson process. If Λ(t) = Λ for some positive random variable Λ, then {Nt } is a mixed Poisson process. In the doubly stochastic case the premium function is a generalization of the former functions, in line with the generalization of the claim arrival

t process. Hence, it takes the form c(t) = (1 + θ)µ 0 Λ(s)ds. The deﬁnition of the Cox process suggests that it can be generated in the following way: ﬁrst a realization of a non-negative stochastic process {Λ(t)} is generated and, conditioned upon its realization, {Nt } as a non-homogeneous Poisson process with that realization as its intensity is constructed. Out of the three methods of generating a non-homogeneous Poisson process the NHPP1 algorithm is the most general and, hence, the most suitable for adaptation. We can write: Algorithm CP1 Step 1: generate a realization λ(t) of the intensity process {Λ(t)} for a suﬃciently large time period Step 2: set λ = max {λ(t)} Step 3: set T0 = 0 and T ∗ = 0 Step 4: for i = 1, 2, . . . , n do Step 4a: generate an exponential random variable E with intensity λ


  Step 4b: set T* = T* + E
  Step 4c: generate a random variable U distributed uniformly on (0, 1)
  Step 4d: if U > λ(T*)/λ then return to step 4a (→ reject the arrival time)
           else set T_i = T* (→ accept the arrival time)
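A sketch of CP1 on a discretized realization of the intensity process; the particular intensity path and all names are our illustrative choices:

```python
import numpy as np

def cox_arrival_times(path, dt, seed=None):
    """Algorithm CP1 on a discretized intensity path: thin a HPP whose
    intensity is max(path); path[int(t/dt)] plays the role of lambda(T*)."""
    rng = np.random.default_rng(seed)
    t_max = len(path) * dt
    lam_max = float(np.max(path))            # step 2
    arrivals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)  # candidate arrival (steps 4a-4b)
        if t >= t_max:
            return np.array(arrivals)
        if rng.uniform() <= path[int(t / dt)] / lam_max:
            arrivals.append(t)               # steps 4c-4d: accept

# one realization of a simple positive intensity process on [0, 10], dt = 0.01
rng = np.random.default_rng(1)
path = 10.0 + np.abs(np.cumsum(rng.normal(0.0, 0.5, 1000)))
arr = cox_arrival_times(path, dt=0.01, seed=2)
```

Conditionally on the simulated path, this is exactly the thinning algorithm NHPP1.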

14.2.5 Renewal Process

Generalizing the homogeneous Poisson process, we come to the point where, instead of making λ non-constant, we can make a variety of different distributional assumptions on the sequence of waiting times {W_1, W_2, ...} of the claim arrival point process {N_t}. In some particular cases it might be useful to assume that the sequence is generated by a renewal process, i.e. that the random variables W_i are i.i.d. and positive. Note that the homogeneous Poisson process is a renewal process with exponentially distributed inter-arrival times. This observation lets us write the following algorithm for the generation of the arrival times of a renewal process:

Algorithm RP1
Step 1: set T_0 = 0
Step 2: for i = 1, 2, ..., n do
  Step 2a: generate a random variable X with an assumed distribution function F
  Step 2b: set T_i = T_{i−1} + X

An important point in the previous generalizations of the Poisson process was the possibility to compensate risk and size fluctuations by the premiums. Thus, the premium rate had to be constantly adapted to the development of the claims. For renewal claim arrival processes, a constant premium rate allows for a constant safety loading (Embrechts and Klüppelberg, 1993). Let {N_t} be a renewal process and assume that W_1 has finite mean 1/λ. Then the premium function is defined in a natural way as c(t) = (1 + θ)µλt, like for the homogeneous Poisson process.
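A sketch of RP1; the log-normal waiting-time law anticipates the fit obtained for the PCS data in Section 14.3 (function names are ours):

```python
import numpy as np

def renewal_arrival_times(waiting_rvs, n, seed=None):
    """Algorithm RP1: arrival times as cumulative sums of i.i.d. positive waiting times."""
    rng = np.random.default_rng(seed)
    return np.cumsum(waiting_rvs(n, rng))

# log-normal waiting times with the parameters fitted in Section 14.3
times = renewal_arrival_times(lambda n, rng: rng.lognormal(-3.91, 0.9051, n), 100, seed=0)
```

Any generator of positive i.i.d. variates from Chapter 13 can be plugged in as waiting_rvs.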


Figure 14.2: Left panel: The empirical mean excess function ê_n(x) for the PCS waiting times. Right panel: Shapes of the mean excess function e(x) for the log-normal (solid green line), Burr (dashed blue line), and exponential (dotted red line) distributions. STFrisk02.xpl

14.3 Simulation of Risk Processes

14.3.1 Catastrophic Losses

In this section we apply some of the models described earlier to the PCS dataset. The Property Claim Services dataset covers losses resulting from natural catastrophic events in the USA that occurred between 1990 and 1999. It is adjusted for inflation using the Consumer Price Index provided by the U.S. Department of Labor. See Chapters 4 and 13, where this dataset was analyzed in the context of CAT bonds and loss distributions, respectively. Note that the same raw catastrophe data, adjusted instead using the discount window borrowing rate (the simple interest rate at which depository institutions borrow from the Federal Reserve Bank of New York), was analyzed by Burnecki, Härdle, and Weron (2004).

14 Modeling of the Risk Process

Table 14.1: Parameter estimates obtained via the A2 minimization scheme and test statistics for the PCS waiting times. The corresponding p-values based on 1000 simulated samples are given in parentheses.

                   log-normal         Burr                exponential
Parameters:        µ = −3.91          α = 1.3051          β = 33.187
                   σ = 0.9051         λ = 1.6 · 10^−3
                                      τ = 1.7448
Tests:       D     0.0589 (<0.005)    0.0492 (<0.005)     0.1193 (<0.005)
             V     0.0973 (<0.005)    0.0938 (<0.005)     0.1969 (<0.005)
             W2    0.1281 (0.013)     0.1120 (<0.005)     0.9130 (<0.005)
             A2    1.3681 (<0.005)    0.8690 (<0.005)     5.8998 (<0.005)

STFrisk03.xpl

Now, we study the claim arrival process and the distribution of waiting times. As suggested in Chapter 13, we first look for the appropriate shape of the approximating distribution. To this end we plot the empirical mean excess function for the waiting time data (given in years), see Figure 14.2. The initially decreasing, later increasing pattern suggests the log-normal or Burr distribution as most adequate for modeling. The empirical distribution seems, however, to have lighter tails than both: e(x) does not increase for very large x. The overall impression might be of a highly volatile but constant function, like that of the exponential distribution. Hence, we fit the log-normal, Burr, and exponential distributions using the A2 minimization scheme and check the goodness of fit with test statistics. In terms of the values of the test statistics the Burr distribution seems to give the best fit. However, it does not pass any of the tests even at the very low level of 0.5% (see Chapter 13 for test definitions). The only distribution that passes any of the four applied tests, although at a very low level, is the log-normal law with parameters µ = −3.91 and σ = 0.9051, see Table 14.1. Thus, if we wanted to model the claim arrival process by a renewal process, the log-normal distribution would describe the waiting times best.
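The empirical mean excess function used above, ê_n(x) = mean of (X_i − x) over the observations exceeding x, is straightforward to compute; a sketch in Python, on a small toy sample standing in for the PCS waiting times (not the real data):

```python
def mean_excess(data, x):
    """Empirical mean excess function e_n(x): the average exceedance of x
    among all observations strictly greater than x."""
    exceedances = [v - x for v in data if v > x]
    return sum(exceedances) / len(exceedances) if exceedances else float("nan")

# Toy waiting times in years (illustrative values only).
sample = [0.01, 0.02, 0.02, 0.03, 0.05, 0.08, 0.13, 0.21]
curve = [(x, mean_excess(sample, x)) for x in (0.0, 0.02, 0.05)]
```

An increasing curve points to heavy tails (Burr, log-normal for large x), while a roughly constant one is the exponential signature, which is exactly the diagnostic applied to Figure 14.2.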


Figure 14.3: Left panel: The quarterly number of losses for the PCS data. Right panel: Periodogram of the PCS quarterly number of losses. A distinct peak is visible at frequency ω = 0.25, implying a period of 1/ω = 4 quarters, i.e. one year. STFrisk04.xpl

If, on the other hand, we wanted to model the claim arrival process by a HPP, then the studies of the quarterly numbers of losses would lead us to the conclusion that the best HPP is given by the annual intensity λ1 = 34.2. This value is obtained by taking the mean of the quarterly numbers of losses and multiplying it by four. Note that the value of the intensity is significantly different from the parameter β = 32.427 of the calibrated exponential distribution, see Table 14.1. This, together with the very bad fit of the exponential law to the waiting times, indicates that the HPP is not a good model for the claim arrival process. Further analysis reveals the periodicity of the data. The time series of the quarterly number of losses does not exhibit any trend, but an annual seasonality can be very well observed using the periodogram, see Figure 14.3. This suggests that calibrating a NHPP with a sinusoidal rate function would give a good model. We estimate the parameters by fitting the cumulative intensity function, i.e. the mean value function E(Nt), to the accumulated number of PCS losses. The least squares algorithm yields λ2(t) = 35.32 + 2.32 · 2π · sin{2π(t − 0.20)}. This choice of λ(t) gives a reasonably good fit, see also Chapter 4.
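The periodogram diagnostic behind Figure 14.3 can be sketched as follows; the quarterly count series below is synthetic, with an exact annual (4-quarter) cycle standing in for the PCS data:

```python
import cmath
import math

def periodogram(series):
    """Periodogram I(w_k) = |sum_t x_t exp(-2*pi*i*w_k*t)|^2 / n at the
    Fourier frequencies w_k = k/n, k = 1, ..., n//2 (mean removed first)."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    out = []
    for k in range(1, n // 2 + 1):
        w = k / n
        s = sum(xt * cmath.exp(-2j * math.pi * w * t) for t, xt in enumerate(x))
        out.append((w, abs(s) ** 2 / n))
    return out

# Ten years of synthetic quarterly counts with period 4 => peak at w = 1/4.
counts = [10 + 5 * math.sin(2 * math.pi * t / 4) for t in range(40)]
peak_freq = max(periodogram(counts), key=lambda p: p[1])[0]
```

With an annual cycle in quarterly data, the periodogram peaks at frequency 0.25, exactly the feature read off the right panel of Figure 14.3.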


Figure 14.4: The PCS data simulation results for a NHPP with Burr claim sizes (left panel), a NHPP with log-normal claim sizes (right panel), and a NHPP with claims generated from the edf (bottom panel). The dotted lines are the sample 0.001, 0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99, 0.999-quantile lines based on 3000 trajectories of the risk process. STFrisk05.xpl


To study the evolution of the risk process we simulate sample trajectories. We consider a hypothetical scenario where the insurance company insures losses resulting from catastrophic events in the United States. The company's initial capital is assumed to be u = 100 billion USD and the relative safety loading used is θ = 0.5. We choose different models of the risk process whose application is most justified by the statistical results described above. The results are presented in Figure 14.4. In all subplots the thick solid blue line is the "real" risk process, i.e. a trajectory constructed from the historical arrival times and values of the losses. The different shapes of the "real" risk process in the subplots are due to the different forms of the premium function c(t). Recall that the function has to be chosen according to the type of the claim arrival process. The dashed red line is a sample trajectory. The dotted lines are the sample 0.001, 0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99, 0.999-quantile lines based on 3000 trajectories of the risk process. The function x̂_p(t) is called a sample p-quantile line if for each t ∈ [t0, T], x̂_p(t) is the sample p-quantile, i.e. if it satisfies Fn(xp−) ≤ p ≤ Fn(xp), where Fn is the edf. Quantile lines are a very helpful tool in the analysis of stochastic processes. For example, they can provide a simple justification of the stationarity (or the lack of it) of a process, see Janicki and Weron (1994). In Figure 14.4 they visualize the evolution of the density of the risk process. The periodic pattern is due to the sinusoidal intensity function λ2(t). We also note that in the simulations we assumed that if the capital of the insurance company drops below zero, the company goes bankrupt, so the capital is set to zero and remains at this level thereafter. This is in agreement with Chapter 15. The claim severity distribution of the PCS dataset was studied in Chapter 13.
The Burr distribution with parameters α = 0.4801, λ = 3.9495 · 10^16, and τ = 2.1524 yielded the best fit. Unfortunately, such a choice of the parameters leads to an undesired feature of the claim size distribution: very heavy tails of order x^−ατ ≈ x^−1.03. Although the expected value exists, the sample mean is, in general, significantly below the theoretical value. As a consequence, the premium function c(t) cannot include the factor µ = E(Xk), or the risk process trajectories will exhibit a highly positive drift. To cope with this problem, in the simulations we substitute the original factor µ with µ̃, equal to the empirical mean of the simulated claims over all trajectories. Despite this change the trajectories possess a positive drift due to the large value of the relative safety loading θ. They are also highly volatile, leading to a large number of ruins: the 0.05-quantile line drops to zero after five years, see the left panel in Figure 14.4. It seems that the Burr distribution overestimates the PCS losses.
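The trajectory-plus-quantile-line machinery above can be sketched in Python. For brevity this sketch uses a plain HPP with exponential claims and toy parameters (not the PCS fit or the NHPP of the text), and a simple order-statistic definition of the sample p-quantile:

```python
import random

def risk_trajectory(u, theta, lam, mu, claim_sampler, grid, rng):
    # Claims arrive as a HPP with rate lam; premium c(t) = (1+theta)*lam*mu*t.
    # After ruin the capital is absorbed at zero, as assumed in the text.
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > grid[-1]:
            break
        arrivals.append((t, claim_sampler()))
    traj, paid, i, ruined = [], 0.0, 0, False
    for s in grid:
        while i < len(arrivals) and arrivals[i][0] <= s:
            paid += arrivals[i][1]
            i += 1
        r = u + (1 + theta) * lam * mu * s - paid
        ruined = ruined or r < 0
        traj.append(0.0 if ruined else r)
    return traj

def quantile_line(paths, p):
    # Sample p-quantile of the simulated capital at every grid point.
    return [sorted(col)[int(p * (len(col) - 1))] for col in zip(*paths)]

rng = random.Random(7)
u, theta, lam, mu = 10.0, 0.5, 20.0, 0.5        # toy parameters
grid = [0.1 * k for k in range(101)]
paths = [risk_trajectory(u, theta, lam, mu, lambda: rng.expovariate(1 / mu),
                         grid, rng)
         for _ in range(500)]
q05, q50, q95 = (quantile_line(paths, p) for p in (0.05, 0.5, 0.95))
```

The quantile lines are ordered pointwise and fan out over time, which is the visual effect exploited in Figure 14.4.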


In our second attempt we simulate the NHPP with log-normal claims with µ = 18.3806 and σ = 1.1052, as the log-normal law was found in Chapter 13 to yield a relatively good fit to the data. The results, shown in the right panel of Figure 14.4, are not satisfactory. This time the analytical distribution largely underestimates the loss data. The "real" risk process is well outside the 0.001-quantile line. This leads us to the conclusion that none of the analytical loss distributions describes the data well enough. We either overestimate the risk using the Burr distribution or underestimate it with the log-normal law. Hence, in our next attempt we simulate the NHPP with claims generated from the edf, see the bottom panel in Figure 14.4. The factor µ in the premium function c(t) is set to the empirical mean. This time the "real" risk process lies close to the median and does not cross the lower and upper quantile lines. This approach seems to give the best results. However, we do have to remember that it has its shortcomings. For example, the model is tailor-made for the dataset at hand but is not universal. As the dataset is expanded by including new losses, the model may change substantially. An analytic model would, in general, be less susceptible to such modifications. Hence, it might be preferable to use the Burr distribution after all.
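Generating claims "from the edf" is simply resampling with replacement from the historical losses; a minimal sketch, with stand-in values rather than the actual PCS losses:

```python
import random

def sample_from_edf(losses, n, rng):
    # Drawing from the empirical distribution function amounts to
    # uniform sampling, with replacement, from the observed losses.
    return [rng.choice(losses) for _ in range(n)]

rng = random.Random(1)
losses = [1.2, 3.4, 0.7, 9.9, 2.5]      # stand-in values, not the PCS data
simulated = sample_from_edf(losses, 1000, rng)
empirical_mean = sum(simulated) / len(simulated)
```

Every simulated claim is one of the observed values, and the mean of a large resample is close to the historical mean, which is why the factor µ in c(t) is set to the empirical mean in this model.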

14.3.2 Danish Fire Losses

We conduct empirical studies for Danish fire losses recorded by Copenhagen Re. The data concern major Danish fire losses in Danish Krone (DKK) that occurred between 1980 and 1990 and are adjusted for inflation. Only losses of profits connected with the fires are taken into consideration, see Chapter 13 and Burnecki and Weron (2004), where this dataset was also analyzed.

We start the analysis with a HPP with a constant intensity λ3. Studies of the quarterly numbers of losses and the inter-occurrence times of the fires lead us to the conclusion that the HPP with the annual intensity λ3 = 57.72 gives the best fit. However, as we can see in the right panel of Figure 14.5, the fit is not very good, suggesting that the HPP is too simplistic and forcing us to consider the NHPP. In fact, a renewal process would also give unsatisfactory results, as the data reveals a clear increasing trend in the number of quarterly losses, see the left panel of Figure 14.5. We tested different exponential and polynomial functional forms, but a simple linear intensity function λ4(s) = c + ds gives the best fit. Applying the least squares procedure we arrive at the following values of the parameters: c = 13.97 and d = 7.57. Processes with both choices of the intensity function, λ3 and λ4(s), are illustrated in the right panel of Figure 14.5, where the accumulated number of fire losses and the mean value functions for all 11 years of data are depicted.

Figure 14.5: Left panel: The quarterly number of losses for the Danish fire data. Right panel: The aggregate quarterly number of losses of the Danish fire data (dashed blue line) together with the mean value function E(Nt) of the calibrated HPP (solid black line) and the NHPP (dotted red line). Clearly the latter model gives a better fit to the empirical data. STFrisk06.xpl

After describing the claim arrival process we have to find an appropriate model for the loss amounts. In Chapter 13 a number of distributions were fitted to the loss sizes. The log-normal distribution with parameters µ = 12.6645 and σ = 1.3981 produced the best results. The Burr distribution with α = 0.8804, λ = 8.4202 · 10^6, and τ = 1.2749 overestimated the tails of the empirical distribution; nevertheless, it gave the next best fit.

The simulation results are presented in Figure 14.6. We consider a hypothetical scenario where the insurance company insures losses resulting from fire damage. The company's initial capital is assumed to be u = 400 million DKK and the relative safety loading used is θ = 0.5. We choose two models of the risk process whose application is most justified by the statistical results described above:

a NHPP with log-normal claim sizes and a NHPP with Burr claim sizes. For comparison we also present the results of a model incorporating the empirical distribution function. Recall that in this model the factor µ in the premium function c(t) is set to the empirical mean. In all panels of Figure 14.6 the thick solid blue line is the "real" risk process, i.e. a trajectory constructed from the historical arrival times and values of the losses. The different shapes of the "real" risk process in the subplots are due to the different forms of the premium function c(t), which has to be chosen according to the type of the claim arrival process. The dashed red line is a sample trajectory. The dotted lines are the sample 0.001, 0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99, 0.999-quantile lines based on 3000 trajectories of the risk process. As in the PCS data case, we assume that if the capital of the insurance company drops below zero, the company goes bankrupt, so the capital is set to zero and remains at this level thereafter. Clearly, if claim severities are Burr distributed then extreme events are more probable than in the log-normal case, for which the historical trajectory falls outside the 0.001-quantile line. The overall picture is, in fact, similar to the one obtained for the PCS data. We either overestimate the risk using the Burr distribution or underestimate it with the log-normal law. The empirical approach yields a "real" risk process which lies close to the median and does not cross the very low or very high quantile lines. However, as stated previously, the empirical approach has its shortcomings. Since this time we only slightly undervalue the risk with the log-normal law, it might be advisable to use it for further modeling.

Figure 14.6: The Danish fire data simulation results for a NHPP with log-normal claim sizes (left panel), a NHPP with Burr claim sizes (right panel), and a NHPP with claims generated from the edf (bottom panel). The dotted lines are the sample 0.001, 0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99, 0.999-quantile lines based on 3000 trajectories of the risk process. STFrisk07.xpl
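The least-squares calibration of the linear intensity λ4(s) = c + ds used above fits the mean value function E(Nt) = ct + dt²/2 to the accumulated quarterly counts. A sketch with closed-form normal equations follows; the counts below are noise-free synthetic data generated from the Danish fire estimates c = 13.97, d = 7.57, so the fit recovers them exactly (real data would of course be noisy):

```python
def fit_linear_intensity(times, counts):
    """Least-squares fit of E(N_t) = c*t + d*t^2/2, the mean value
    function of a NHPP with linear intensity lambda(s) = c + d*s."""
    # Normal equations for the basis f1(t) = t, f2(t) = t^2 / 2.
    a11 = sum(t * t for t in times)
    a12 = sum(t ** 3 / 2 for t in times)
    a22 = sum(t ** 4 / 4 for t in times)
    b1 = sum(t * n for t, n in zip(times, counts))
    b2 = sum(t * t / 2 * n for t, n in zip(times, counts))
    det = a11 * a22 - a12 * a12
    c = (b1 * a22 - b2 * a12) / det
    d = (a11 * b2 - a12 * b1) / det
    return c, d

grid = [0.25 * k for k in range(1, 45)]              # 11 years of quarters
counts = [13.97 * t + 7.57 * t * t / 2 for t in grid]
c_hat, d_hat = fit_linear_intensity(grid, counts)
```

Since the synthetic counts lie exactly in the span of the two basis functions, the estimates match the generating parameters up to floating-point error.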


Bibliography

Ahrens, J. H. and Dieter, U. (1982). Computer generation of Poisson deviates from modified normal distributions, ACM Trans. Math. Software 8: 163–179.

Ammeter, H. (1948). A generalization of the collective theory of risk in regard to fluctuating basic probabilities, Skand. Aktuarietidskr. 31: 171–198.

Bratley, P., Fox, B. L., and Schrage, L. E. (1987). A Guide to Simulation, Springer-Verlag, New York.

Burnecki, K. and Weron, R. (2004). Modeling the risk process in the XploRe computing environment, Lecture Notes in Computer Science 3039: 868–875.

Burnecki, K., Härdle, W., and Weron, R. (2004). Simulation of risk processes, in J. Teugels, B. Sundt (eds.) Encyclopedia of Actuarial Science, Wiley, Chichester.

Embrechts, P., Kaufmann, R., and Samorodnitsky, G. (2002). Ruin theory revisited: stochastic models for operational risk, in C. Bernadell et al. (eds.) Risk Management for Central Bank Foreign Reserves, European Central Bank, Frankfurt a.M., 243–261.

Embrechts, P. and Klüppelberg, C. (1993). Some aspects of insurance mathematics, Theory Probab. Appl. 38: 262–295.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer.

Grandell, J. (1991). Aspects of Risk Theory, Springer, New York.

Hörmann, W. (1993). The transformed rejection method for generating Poisson random variables, Insurance: Mathematics and Economics 12: 39–45.

Janicki, A. and Weron, A. (1994). Simulation and Chaotic Behavior of α-Stable Stochastic Processes, Marcel Dekker.

L'Ecuyer, P. (2004). Random Number Generation, in J. E. Gentle, W. Härdle, Y. Mori (eds.) Handbook of Computational Statistics, Springer, Berlin, 35–70.


Rolski, T., Schmidli, H., Schmidt, V., and Teugels, J. L. (1999). Stochastic Processes for Insurance and Finance, Wiley, Chichester.

Ross, S. (2002). Simulation, Academic Press, San Diego.

Stadlober, E. (1989). Sampling from Poisson, binomial and hypergeometric distributions: ratio of uniforms as a simple and fast alternative, Math. Statist. Sektion 303, Forschungsgesellschaft Joanneum Graz.

Willmot, G. E. (2001). The nature of modelling insurance losses, The Munich Re Inaugural Lecture, December 5, 2001, Toronto.

15 Ruin Probabilities in Finite and Infinite Time

Krzysztof Burnecki, Paweł Miśta, and Aleksander Weron

15.1 Introduction

In examining the nature of the risk associated with a portfolio of business, it is often of interest to assess how the portfolio may be expected to perform over an extended period of time. One approach concerns the use of ruin theory (Panjer and Willmot, 1992). Ruin theory is concerned with the excess of the income (with respect to a portfolio of business) over the outgo, or claims paid. This quantity, referred to as the insurer's surplus, varies in time. Specifically, ruin is said to occur if the insurer's surplus reaches a specified lower bound, e.g. minus the initial capital. One measure of risk is the probability of such an event, clearly reflecting the volatility inherent in the business. In addition, it can serve as a useful tool in long range planning for the use of the insurer's funds.

We recall now the definition of the standard mathematical model for the insurance risk, see Grandell (1991) and Chapter 14. The initial capital of the insurance company is denoted by u, the Poisson process Nt with intensity (rate) λ describes the number of claims in the interval (0, t], and the claim severities are random, given by an i.i.d. non-negative sequence {Xk} with mean value µ and variance σ², independent of Nt. The insurance company receives a premium at a constant rate c per unit time, where c = (1 + θ)λµ and θ > 0 is called the relative safety loading. The classical risk process {Rt}t≥0 is given by

Rt = u + ct − Σ_{i=1}^{Nt} Xi.


We define the claim surplus process {St}t≥0 as

St = u − Rt = Σ_{i=1}^{Nt} Xi − ct.

The time to ruin is defined as τ(u) = inf{t ≥ 0 : Rt < 0} = inf{t ≥ 0 : St > u}. Let L = sup_{0≤t<∞} {St} and LT = sup_{0≤t≤T} {St}. The ruin probability in infinite time is then given by

ψ(u) = P{τ(u) < ∞} = P(L > u).    (15.1)

We note that the above definition implies that the relative safety loading θ has to be positive; otherwise c would be less than λµ and thus with probability 1 the risk business would become negative in infinite time. The ruin probability in finite time T is given by

ψ(u, T) = P{τ(u) ≤ T} = P(LT > u).    (15.2)

We also note that obviously ψ(u, T) < ψ(u). However, the infinite time ruin probability may sometimes also be relevant for the finite time case. From a practical point of view, ψ(u, T), where T is related to the planning horizon of the company, may perhaps sometimes be regarded as more interesting than ψ(u). Most insurance managers will closely follow the development of the risk business and increase the premium if the risk business behaves badly. The planning horizon may be thought of as the sum of the following: the time until the risk business is found to behave "badly", the time until the management reacts, and the time until a decision of a premium increase takes effect. Therefore, in non-life insurance, it may be natural to regard T equal to four or five years as reasonable (Grandell, 1991). We also note that the situation in infinite time is markedly different from the finite horizon case, as the ruin probability in finite time can always be computed directly using Monte Carlo simulations. We also remark that generalizations of the classical risk process, which are studied in Chapter 14, where the occurrence of the claims is described by point processes other than the Poisson process (i.e. non-homogeneous, mixed Poisson and Cox processes), do not alter the ruin probability in infinite time. This stems from the following fact. Consider a risk process R̃t driven by a Cox process Ñt with the intensity process λ̃(t), namely

R̃t = u + (1 + θ)µ ∫₀ᵗ λ̃(s) ds − Σ_{i=1}^{Ñt} Xi.

Define now Λt = ∫₀ᵗ λ̃(s) ds and Rt = R̃(Λt⁻¹). Then the point process Nt = Ñ(Λt⁻¹) is a standard Poisson process with intensity 1, and therefore ψ̃(u) = P(inf_{t≥0} {R̃t} < 0) = P(inf_{t≥0} {Rt} < 0) = ψ(u). The time scale defined by Λt⁻¹ is called the operational time scale. It naturally affects the time to ruin, and hence the finite time ruin probability, but not the ultimate ruin probability.

The ruin probabilities in infinite and finite time can only be calculated for a few special cases of the claim amount distribution. Thus, finding a reliable approximation, especially in the ultimate case, when the Monte Carlo method cannot be utilized, is really important from a practical point of view. In Section 15.2 we present a general formula, called the Pollaczek–Khinchin formula, for the ruin probability in infinite time, which leads to exact ruin probabilities in special cases of the claim size distribution. Section 15.3 is devoted to various approximations of the infinite time ruin probability. In Section 15.4 we compare the 12 different well-known and not so well-known approximations. The finite-time case is studied in Sections 15.5, 15.6, and 15.7. The exact ruin probabilities in finite time are discussed in Section 15.5. The most important approximations of the finite time ruin probability are presented in Section 15.6. They are illustrated in Section 15.7. To illustrate and compare the approximations we use the PCS (Property Claim Services) catastrophe data example introduced in Chapter 13. The data describe losses resulting from natural catastrophic events in the USA that occurred between 1990 and 1999. This dataset was used to obtain the parameters of the discussed distributions. We note that ruin theory has also recently been employed as an interesting tool in operational risk. In view of the data already available on operational risk, ruin-type estimates may become useful (Embrechts, Kaufmann, and Samorodnitsky, 2004).
We ﬁnally note that all presented explicit solutions and approximations are implemented in the Insurance library of XploRe. All ﬁgures and tables were created with the help of this library.

15.1.1 Light- and Heavy-tailed Distributions

We distinguish here between light- and heavy-tailed distributions. A distribution FX(x) is said to be light-tailed if there exist constants a > 0, b > 0 such that F̄X(x) = 1 − FX(x) ≤ a e^{−bx} or, equivalently, if there exists z > 0 such that MX(z) < ∞, where MX(z) is the moment generating function, see Chapter 13. A distribution FX(x) is said to be heavy-tailed if, for all a > 0,


Table 15.1: Typical claim size distributions. In all cases x ≥ 0.

Light-tailed distributions
Name          Parameters                    pdf
Exponential   β > 0                         fX(x) = β exp(−βx)
Gamma         α > 0, β > 0                  fX(x) = β^α/Γ(α) x^{α−1} exp(−βx)
Weibull       β > 0, τ ≥ 1                  fX(x) = βτ x^{τ−1} exp(−βx^τ)
Mixed exp's   βi > 0, Σ_{i=1}^n ai = 1      fX(x) = Σ_{i=1}^n ai βi exp(−βi x)

Heavy-tailed distributions
Name          Parameters                    pdf
Weibull       β > 0, 0 < τ < 1              fX(x) = βτ x^{τ−1} exp(−βx^τ)
Log-normal    µ ∈ R, σ > 0                  fX(x) = 1/(√(2π) σx) exp{−(ln x − µ)²/(2σ²)}
Pareto        α > 0, λ > 0                  fX(x) = α/(λ + x) · {λ/(λ + x)}^α
Burr          α > 0, λ > 0, τ > 0           fX(x) = ατ λ^α x^{τ−1}/(λ + x^τ)^{α+1}
b > 0: F̄X(x) > a e^{−bx} or, equivalently, if MX(z) = ∞ for all z > 0. We study here the claim size distributions listed in Table 15.1.

In the case of light-tailed claims the adjustment coefficient (also called the Lundberg exponent) plays a key role in calculating the ruin probability. Let γ = sup{z : MX(z) < ∞} and let R be a positive solution of the equation

1 + (1 + θ)µR = MX(R),    R < γ.    (15.3)

If there exists a non-zero solution R to the above equation, we call it an adjustment coefficient. Clearly, R = 0 satisfies equation (15.3), but there may exist a positive solution as well; this requires that X has a moment generating function, thus excluding distributions such as the Pareto and the log-normal. To see the plausibility of this result, note that MX(0) = 1, that the slope M′X(0) = µ is smaller than the slope (1 + θ)µ of the line, and that M″X(z) > 0. Hence, the convex curve y = MX(z) and the line y = 1 + (1 + θ)µz may intersect, as shown in Figure 15.1.
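The light/heavy dichotomy of Section 15.1.1 can be checked numerically: F̄X(x) e^{bx} stays bounded (indeed tends to zero) for a light-tailed law whenever b is below the exponential rate, while it diverges for a heavy-tailed law for every b > 0. A sketch with illustrative parameter values (assumptions of this example, not taken from the text):

```python
import math

beta, b = 6.0, 1.0        # exponential rate and a test constant 0 < b < beta
alpha, lam = 1.5, 2.0     # illustrative Pareto parameters

def tail_exp(x):
    return math.exp(-beta * x)            # survival function of Exp(beta)

def tail_pareto(x):
    return (lam / (lam + x)) ** alpha     # survival function of Pareto(alpha, lam)

xs = [1.0, 10.0, 50.0]
ratio_exp = [tail_exp(x) * math.exp(b * x) for x in xs]        # decays
ratio_pareto = [tail_pareto(x) * math.exp(b * x) for x in xs]  # explodes
```

The exponential ratio is e^{−(β−b)x}, vanishing geometrically, while the Pareto ratio grows without bound, so no bound a e^{−bx} can dominate the Pareto tail.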

An analytical solution to equation (15.3) exists only for a few claim distributions. However, it is quite easy to obtain a numerical solution. The coefficient R


Figure 15.1: Illustration of the existence of the adjustment coeﬃcient. The solid blue line represents the curve y = 1 + (1 + θ)µz and the dotted red one y = MX (z). STFruin01.xpl

satisfies the inequality

R < 2θµ / µ^(2),    (15.4)

where µ^(2) = E(Xi²), see Asmussen (2000). Let D(z) = 1 + (1 + θ)µz − MX(z). Thus, the adjustment coefficient R > 0 satisfies the equation D(R) = 0. In order to get the solution one may use the Newton–Raphson formula

Rj+1 = Rj − D(Rj) / D′(Rj),    (15.5)

with the initial condition R0 = 2θµ/µ^(2), where D′(z) = (1 + θ)µ − M′X(z).


Moreover, if it is possible to calculate the third raw moment µ^(3), we can obtain a sharper bound than (15.4) (Panjer and Willmot, 1992):

R < 12µθ / [3µ^(2) + √{9(µ^(2))² + 24µµ^(3)θ}],

and use it as the initial condition in (15.5).
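The Newton–Raphson scheme (15.5) is easy to sketch in Python. For exponential claims with rate β the adjustment coefficient is known in closed form, R = θβ/(1 + θ), which gives a convenient check (β and θ below are illustrative values):

```python
def adjustment_coefficient(mgf, mgf_prime, mu, mu2, theta, tol=1e-12):
    """Newton-Raphson on D(R) = 1 + (1+theta)*mu*R - M_X(R) = 0,
    started from the bound R0 = 2*theta*mu/mu2 of (15.4)."""
    r = 2 * theta * mu / mu2
    for _ in range(100):
        d = 1 + (1 + theta) * mu * r - mgf(r)
        d_prime = (1 + theta) * mu - mgf_prime(r)
        step = d / d_prime
        r -= step
        if abs(step) < tol:
            break
    return r

# Exponential claims: M_X(z) = beta/(beta - z), M_X'(z) = beta/(beta - z)^2,
# mu = 1/beta, mu2 = 2/beta^2; exact R = theta*beta/(1 + theta).
beta, theta = 2.0, 0.3
mu, mu2 = 1 / beta, 2 / beta ** 2
R = adjustment_coefficient(lambda z: beta / (beta - z),
                           lambda z: beta / (beta - z) ** 2,
                           mu, mu2, theta)
```

Starting from the upper bound 2θµ/µ^(2), the iterates decrease toward the positive root of D, here 0.6/1.3 ≈ 0.4615.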

15.2 Exact Ruin Probabilities in Infinite Time

In order to present a ruin probability formula we first use the relation (15.1) and express L as a sum of so-called ladder heights. Let L1 be the value that the process {St} reaches for the first time above the zero level. Next, let L2 be the value which is obtained for the first time above the level L1; L3, L4, ... are defined in the same way. The values Lk are called ladder heights. Since the process {St} has stationary and independent increments, {Lk} is a sequence of independent and identically distributed variables with the density

fL1(x) = F̄X(x)/µ.    (15.6)

One may also show that the number of ladder heights K is given by the geometric distribution with the parameter q = θ/(1 + θ). Thus, the random variable L may be expressed as

L = Σ_{i=1}^{K} Li    (15.7)

and it has a compound geometric distribution. The above fact leads to the Pollaczek–Khinchin formula for the ruin probability:

ψ(u) = 1 − P(L ≤ u) = 1 − θ/(1 + θ) Σ_{n=0}^{∞} (1 + θ)^{−n} F^{*n}_{L1}(u),    (15.8)

where F^{*n}_{L1}(u) denotes the n-th convolution of the distribution function FL1. One can use it to derive explicit solutions for a variety of claim amount distributions, particularly those whose Laplace transform is a rational function. These cases will be discussed in this section. Unfortunately, heavy-tailed distributions, e.g. the log-normal or Pareto, are not included. In such cases various approximations can be applied, or one can calculate the ruin probability directly via the Pollaczek–Khinchin formula using Monte Carlo simulations. This will be studied in Section 15.3.
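The Monte Carlo use of the Pollaczek–Khinchin formula just mentioned can be sketched directly: sample K from the geometric law with parameter q = θ/(1 + θ), sum K ladder heights, and estimate P(L > u). For exponential claims with rate β the ladder-height density F̄X(x)/µ is again exponential with rate β, and (15.9) below gives the exact answer to compare against (parameter values are illustrative):

```python
import math
import random

def ruin_prob_pollaczek_khinchin(u, theta, ladder_sampler, n_sim, rng):
    """Monte Carlo on psi(u) = P(L > u), L = sum of K i.i.d. ladder heights,
    K geometric with P(K = k) = q*(1-q)^k, q = theta/(1+theta)."""
    q, hits = theta / (1 + theta), 0
    for _ in range(n_sim):
        total = 0.0
        while rng.random() > q:     # one more ladder height w.p. 1 - q
            total += ladder_sampler()
            if total > u:           # early exit: the sum can only grow
                break
        if total > u:
            hits += 1
    return hits / n_sim

beta, theta, u = 2.0, 0.3, 1.0
rng = random.Random(123)
mc = ruin_prob_pollaczek_khinchin(u, theta,
                                  lambda: rng.expovariate(beta), 200_000, rng)
exact = math.exp(-theta * beta * u / (1 + theta)) / (1 + theta)
```

With 200,000 replications the Monte Carlo estimate agrees with the exact exponential-claims value to about three decimal places.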


We shall now, in Sections 15.2.1–15.2.4, brieﬂy present a collection of basic exact results on the ruin probability in inﬁnite time. The ruin probability ψ(u) is always considered as a function of the initial capital u.

15.2.1 No Initial Capital

When u = 0 it is easy to obtain the exact formula:

ψ(u) = 1/(1 + θ).

Notice that the formula depends only on θ, regardless of the claim frequency rate λ and the claim size distribution: the ruin probability decreases as the relative safety loading grows.

15.2.2 Exponential Claim Amounts

One of the historically first results on the ruin probability is the explicit formula for exponential claims with the parameter β, namely

ψ(u) = 1/(1 + θ) exp{−θβu/(1 + θ)}.    (15.9)

In Table 15.2 we present the ruin probability values for exponential claims with β = 6.3789 · 10^−9 (see Chapter 13) and the relative safety loading θ = 30% with respect to the initial capital u. We can observe that the ruin probability decreases as the capital grows. When u = 1 billion USD the ruin probability amounts to 18%, whereas u = 5 billion USD reduces the probability to almost zero.

15.2.3 Gamma Claim Amounts

Grandell and Segerdahl (1971) showed that for the gamma claim amount distribution with mean 1 and α ≤ 1 the exact value of the ruin probability can be


Table 15.2: The ruin probability for exponential claims with β = 6.3789 · 10^−9 and θ = 0.3 (u in USD billion).

u       0          1          2          3          4          5
ψ(u)    0.769231   0.176503   0.040499   0.009293   0.002132   0.000489

STFruin02.xpl
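Formula (15.9) is a one-liner; the sketch below (in Python rather than the book's XploRe) reproduces the entries of Table 15.2 from the PCS-calibrated parameters:

```python
import math

def psi_exponential(u, beta, theta):
    """Exact infinite-time ruin probability (15.9) for exponential claims."""
    return math.exp(-theta * beta * u / (1 + theta)) / (1 + theta)

beta, theta = 6.3789e-9, 0.3            # PCS estimates, u in USD
table = [psi_exponential(u * 1e9, beta, theta) for u in range(6)]
```

Evaluating at u = 0, 1, ..., 5 billion USD recovers 0.769231, 0.176503, ..., 0.000489 up to the rounding of Table 15.2.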

computed via the formula:

ψ(u) = θ(1 − R/α) exp(−Ru) / {1 + (1 + θ)R − (1 + θ)(1 − R/α)} + αθ sin(απ)/π · I,    (15.10)

where

I = ∫₀^∞ x^α exp{−(x + 1)αu} / ([x^α {1 + α(1 + θ)(x + 1)} − cos(απ)]² + sin²(απ)) dx.    (15.11)

The integral I has to be calculated numerically. We also notice that the assumption on the mean is not restrictive, since for claims X with arbitrary mean µ we have ψX(u) = ψX/µ(u/µ). As the gamma distribution is closed under scale changes, we obtain ψG(α,β)(u) = ψG(α,α)(βu/α). This correspondence enables us to calculate the exact ruin probability via equation (15.10) for gamma claims with arbitrary mean. Table 15.3 shows the ruin probability values for gamma claims with α = 0.9185, β = 6.1662 · 10^−9 (see Chapter 13) and the relative safety loading θ = 30% with respect to the initial capital u. Naturally, the ruin probability decreases as the capital grows. Moreover, the probability takes values similar to the exponential case, but a closer look reveals that the values in the exponential case are always slightly larger. When u = 1 billion USD the difference is about 1%. It suggests that the choice of the fitted distribution function may have an impact on actuarial decisions.


Table 15.3: The ruin probability for gamma claims with α = 0.9185, β = 6.1662 · 10^−9 and θ = 0.3 (u in USD billion).

u       0          1          2          3          4          5
ψ(u)    0.769229   0.174729   0.039857   0.009092   0.002074   0.000473

STFruin03.xpl

15.2.4 Mixture of Two Exponentials Claim Amounts

For the claim size distribution being a mixture of two exponentials with the parameters β1 , β2 and weights a, 1 − a, one may obtain an explicit formula by using the Laplace transform inversion (Panjer and Willmot, 1992):

ψ(u) =

1 {(ρ − r1 ) exp(−r1 u) + (r2 − ρ) exp(−r2 u)} , (15.12) (1 + θ)(r2 − r1 )

where

r1 =

r2 =

1/2 2 ρ + θ(β1 + β2 ) − {ρ + θ(β1 + β2 )} − 4β1 β2 θ(1 + θ) 2(1 + θ) 1/2 2 ρ + θ(β1 + β2 ) + {ρ + θ(β1 + β2 )} − 4β1 β2 θ(1 + θ) 2(1 + θ)

and p=

aβ1−1

aβ1−1 , + (1 − a)β2−1

,

,

ρ = β1 (1 − p) + β2 p.

Table 15.4 shows the ruin probability values for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 (see Chapter 13) and the relative safety loading θ = 30% with respect to the initial capital u. As before, the ruin probability decreases as the capital grows. Moreover, the increase in the ruin probability values with respect to previous cases is dramatic. When u = 1 billion USD the diﬀerence between the mixture of two exponentials and exponential cases reaches 240%! As the same underlying

15 Ruin Probabilities in Finite and Infinite Time

Table 15.4: The ruin probability for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψ(u)

0 0.769231

1 0.587919

5 0.359660

10 0.194858

20 0.057197

50 0.001447

STFruin04.xpl

data set was used in all cases to estimate the parameters of the distributions, this supports the thesis that the choice of the fitted distribution function and a check of the goodness of fit are of paramount importance.

Finally, note that it is possible to derive explicit formulae for a mixture of n (n ≥ 3) exponentials (Wikstad, 1971; Panjer and Willmot, 1992). They are not presented here since the complexity of the formulae grows as n increases, and such mixtures are of little practical importance due to the increasing number of parameters.

15.3 Approximations of the Ruin Probability in Infinite Time

When the claim size distribution is exponential (or closely related to it), simple analytic results for the ruin probability in inﬁnite time exist, see Section 15.2. For more general claim amount distributions, e.g. heavy-tailed, the Laplace transform technique does not work and one needs some estimates. In this section, we present 12 diﬀerent well-known and not so well-known approximations. We illustrate them on a common claim size distribution example, namely the mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 and a = 0.0584 (see Chapter 13). Numerical comparison of the approximations is given in Section 15.4.


Table 15.5: The Cramér–Lundberg approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψCL (u)

0 0.663843

1 0.587260

5 0.359660

10 0.194858

20 0.057197

50 0.001447

STFruin05.xpl

15.3.1 Cramér–Lundberg Approximation

Cramér–Lundberg's asymptotic ruin formula for ψ(u) for large u is given by

\psi_{CL}(u) = C e^{-Ru},  (15.13)

where C = \theta\mu / \{M_X'(R) - \mu(1+\theta)\} and R denotes the adjustment coefficient. For the proof we refer to Grandell (1991). The classical Cramér–Lundberg approximation yields quite accurate results; however, we must remember that it requires the adjustment coefficient to exist, so only light-tailed distributions can be taken into consideration. For exponentially distributed claims, formula (15.13) yields the exact result.
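For the running mixture-of-two-exponentials example, R solves M_X(R) = 1 + (1 + θ)µR and can be located by bisection, since M_X explodes at the smaller rate β1. A Python sketch (names ours):

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu = a / beta1 + (1 - a) / beta2                 # mean claim size

def mgf(r):    # moment generating function of the mixture
    return a * beta1 / (beta1 - r) + (1 - a) * beta2 / (beta2 - r)

def mgf_d(r):  # its first derivative
    return a * beta1 / (beta1 - r) ** 2 + (1 - a) * beta2 / (beta2 - r) ** 2

# Adjustment coefficient: positive root of mgf(r) = 1 + (1+theta)*mu*r in (0, beta1)
lo, hi = 1e-16, beta1 * (1 - 1e-12)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mgf(mid) - 1 - (1 + theta) * mu * mid < 0:
        lo = mid
    else:
        hi = mid
R = 0.5 * (lo + hi)
C = theta * mu / (mgf_d(R) - mu * (1 + theta))

def psi_CL(u):
    """Cramer-Lundberg approximation (15.13)."""
    return C * math.exp(-R * u)
```

The constant C ≈ 0.6638 is exactly the u = 0 entry of Table 15.5 below.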

In Table 15.5 the Cramér–Lundberg approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u is given. We see that the Cramér–Lundberg approximation underestimates the ruin probability. Nevertheless, the results coincide quite closely with the exact values shown in Table 15.4. When the initial capital is zero, the relative error is the biggest and exceeds 13%.


Table 15.6: The exponential approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψE (u)

0 0.747418

1 0.656048

5 0.389424

10 0.202900

20 0.055081

50 0.001102

STFruin06.xpl

15.3.2 Exponential Approximation

This approximation was proposed and derived by De Vylder (1996). It requires the ﬁrst three moments to be ﬁnite.

\psi_E(u) = \exp\left\{-1 - \frac{2\mu\theta u - \mu^{(2)}}{\left[(\mu^{(2)})^2 + (4/3)\,\theta\mu\mu^{(3)}\right]^{1/2}}\right\}.  (15.14)
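Only the first three raw moments of the claim distribution enter (15.14). A minimal Python sketch for the running example (names ours):

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
# First three raw moments of the mixture of two exponentials
mu  = a / beta1 + (1 - a) / beta2
mu2 = 2 * (a / beta1**2 + (1 - a) / beta2**2)
mu3 = 6 * (a / beta1**3 + (1 - a) / beta2**3)

def psi_E(u):
    """De Vylder's exponential approximation (15.14)."""
    return math.exp(-1 - (2 * mu * theta * u - mu2)
                    / math.sqrt(mu2**2 + (4 / 3) * theta * mu * mu3))
```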

Table 15.6 shows the results of the exponential approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u. Comparing them with the exact values presented in Table 15.4, we see that the exponential approximation works reasonably well in the studied case. When the initial capital is USD 50 billion, the relative error is the biggest and reaches 24%.

15.3.3 Lundberg Approximation

The following formula, called the Lundberg approximation, comes from Grandell (2000). It requires the ﬁrst three moments to be ﬁnite.

\psi_L(u) = \left\{1 + \left(\theta u - \frac{\mu^{(2)}}{2\mu}\right)\frac{4\theta\mu^2\mu^{(3)}}{3(\mu^{(2)})^3}\right\}\exp\left(-\frac{2\mu\theta u}{\mu^{(2)}}\right).  (15.15)
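A Python sketch of (15.15) for the running example (names ours); the values below match Table 15.7:

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / beta1 + (1 - a) / beta2
mu2 = 2 * (a / beta1**2 + (1 - a) / beta2**2)
mu3 = 6 * (a / beta1**3 + (1 - a) / beta2**3)

def psi_L(u):
    """Lundberg approximation (15.15): exponential term with a polynomial correction."""
    corr = 1 + (theta * u - mu2 / (2 * mu)) * 4 * theta * mu**2 * mu3 / (3 * mu2**3)
    return corr * math.exp(-2 * mu * theta * u / mu2)
```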

In Table 15.7 the Lundberg approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to


Table 15.7: The Lundberg approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψL (u)

0 0.504967

1 0.495882

5 0.382790

10 0.224942

20 0.058739

50 0.000513

STFruin07.xpl

the initial capital u is given. We see that the Lundberg approximation works worse than the exponential one. When the initial capital is USD 50 billion, the relative error exceeds 60%.

15.3.4 Beekman–Bowers Approximation

The Beekman–Bowers approximation uses the following representation of the ruin probability:

\psi(u) = P(L > u) = P(L > 0)\,P(L > u \mid L > 0).  (15.16)

The idea of the approximation is to replace the conditional distribution function 1 − P(L > u | L > 0) with a gamma distribution function G(u) by fitting its first two moments (Grandell, 2000). This leads to:

\psi_{BB}(u) = \frac{1}{1+\theta}\,\{1 - G(u)\},  (15.17)

where the parameters α, β of G are given by

\alpha = \frac{1+\theta}{1 + \left\{\frac{4\mu\mu^{(3)}}{3(\mu^{(2)})^2} - 1\right\}\theta}, \qquad \beta = \frac{2\mu\theta}{\mu^{(2)} + \left\{\frac{4\mu\mu^{(3)}}{3\mu^{(2)}} - \mu^{(2)}\right\}\theta}.

The Beekman–Bowers approximation gives rather accurate results, see Burnecki, Miśta, and Weron (2004). In the exponential case it becomes the exact formula. It can be used only for distributions with finite first three moments.
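A Python sketch for the running example (names ours). To stay dependency-free, the gamma distribution function is computed via the power series of the regularized lower incomplete gamma function; `scipy.stats.gamma.cdf` would serve equally well.

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / beta1 + (1 - a) / beta2
mu2 = 2 * (a / beta1**2 + (1 - a) / beta2**2)
mu3 = 6 * (a / beta1**3 + (1 - a) / beta2**3)

r = 4 * mu * mu3 / (3 * mu2**2)
alpha_g = (1 + theta) / (1 + (r - 1) * theta)
beta_g  = 2 * mu * theta / (mu2 + (4 * mu * mu3 / (3 * mu2) - mu2) * theta)

def gamma_cdf(x, shape, rate):
    """Regularized lower incomplete gamma via its power series (fine for moderate rate*x)."""
    if x <= 0:
        return 0.0
    z = rate * x
    term = total = 1.0 / shape
    for n in range(1, 500):
        term *= z / (shape + n)
        total += term
        if term < 1e-16 * total:
            break
    return total * z**shape * math.exp(-z) / math.gamma(shape)

def psi_BB(u):
    """Beekman-Bowers approximation (15.17)."""
    return (1 - gamma_cdf(u, alpha_g, beta_g)) / (1 + theta)
```

With these parameters the Table 15.8 entries are reproduced, e.g. ψBB(0) = 1/(1 + θ).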


Table 15.8: The Beekman–Bowers approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψBB (u)

0 0.769231

1 0.624902

5 0.352177

10 0.186582

20 0.056260

50 0.001810

STFruin08.xpl

Table 15.8 shows the results of the Beekman–Bowers approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u. The results confirm that the approximation yields quite accurate results, but when the initial capital is USD 50 billion, the relative error is unacceptable, reaching 25%, cf. the exact values in Table 15.4.

15.3.5 Renyi Approximation

The Renyi approximation (Grandell, 2000) may be derived from (15.17) when we replace the gamma distribution function G with an exponential one, matching only the first moment. Hence, it can be regarded as a simplified version of the Beekman–Bowers approximation. It requires the first two moments to be finite:

\psi_R(u) = \frac{1}{1+\theta}\exp\left\{-\frac{2\mu\theta u}{\mu^{(2)}(1+\theta)}\right\}.  (15.18)
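A Python sketch of (15.18) for the running example (names ours):

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / beta1 + (1 - a) / beta2
mu2 = 2 * (a / beta1**2 + (1 - a) / beta2**2)

def psi_R(u):
    """Renyi approximation (15.18): exponential fit of the conditional tail of L."""
    return math.exp(-2 * mu * theta * u / (mu2 * (1 + theta))) / (1 + theta)
```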

In Table 15.9 the Renyi approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u is given. We see that, compared with the exact values presented in Table 15.4, the results are quite accurate. The accuracy of the approximation is similar to that of the Beekman–Bowers approximation, but when the initial capital is USD 50 billion, the relative error exceeds 50%.


Table 15.9: The Renyi approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψR (u)

0 0.769231

1 0.667738

5 0.379145

10 0.186876

20 0.045400

50 0.000651

STFruin09.xpl

15.3.6 De Vylder Approximation

The idea of this approximation is to replace the claim surplus process St with a claim surplus process S̄t with exponentially distributed claims such that the first three moments of the processes coincide, namely E(S_t^k) = E(S̄_t^k) for k = 1, 2, 3, see De Vylder (1978). The process S̄t is determined by the three parameters (λ̄, θ̄, β̄), which must satisfy:

\bar\lambda = \frac{9\lambda(\mu^{(2)})^3}{2(\mu^{(3)})^2}, \qquad \bar\theta = \frac{2\mu\mu^{(3)}}{3(\mu^{(2)})^2}\,\theta, \qquad \bar\beta = \frac{3\mu^{(2)}}{\mu^{(3)}}.

Then De Vylder's approximation is given by:

\psi_{DV}(u) = \frac{1}{1+\bar\theta}\exp\left(-\frac{\bar\theta\bar\beta u}{1+\bar\theta}\right).  (15.19)
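A Python sketch of (15.19) for the running example (names ours); note that λ̄ is not needed, since it cancels from the ruin probability:

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / beta1 + (1 - a) / beta2
mu2 = 2 * (a / beta1**2 + (1 - a) / beta2**2)
mu3 = 6 * (a / beta1**3 + (1 - a) / beta2**3)

theta_b = 2 * mu * mu3 * theta / (3 * mu2**2)   # fitted relative safety loading
beta_b  = 3 * mu2 / mu3                         # fitted exponential claim rate

def psi_DV(u):
    """De Vylder approximation (15.19): exact exponential formula with fitted parameters."""
    return math.exp(-theta_b * beta_b * u / (1 + theta_b)) / (1 + theta_b)
```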

Obviously, in the exponential case the method gives the exact result. For other claim amount distributions, in order to apply the approximation, the ﬁrst three moments have to exist. Table 15.10 shows the results of the De Vylder approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u. The approximation gives surprisingly good results. In the considered case the relative error is the biggest when the initial capital is zero and amounts to about 13%, cf. Table 15.4.


Table 15.10: The De Vylder approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψDV (u)

0 0.668881

1 0.591446

5 0.361560

10 0.195439

20 0.057105

50 0.001424

STFruin10.xpl

15.3.7 4-moment Gamma De Vylder Approximation

The 4-moment gamma De Vylder approximation, proposed by Burnecki, Miśta, and Weron (2003), is based on De Vylder's idea to replace the claim surplus process St with another one S̄t for which the expression for ψ(u) is explicit. This time we calculate the parameters of the new process with gamma distributed claims and apply the exact formula (15.10) for the ruin probability. Let us note that the claim surplus process S̄t with gamma claims is determined by the four parameters (λ̄, θ̄, µ̄, µ̄⁽²⁾), so we have to match the first four moments of St and S̄t. We also need to assume that µ⁽²⁾µ⁽⁴⁾ < (3/2)(µ⁽³⁾)² to ensure that µ̄, µ̄⁽²⁾ > 0 and µ̄⁽²⁾ > µ̄², which is true for the gamma distribution. Then

\bar\lambda = \frac{\lambda(\mu^{(3)})^2(\mu^{(2)})^3}{\{\mu^{(2)}\mu^{(4)} - 2(\mu^{(3)})^2\}\{2\mu^{(2)}\mu^{(4)} - 3(\mu^{(3)})^2\}},

\bar\theta = \frac{\theta\mu\{2(\mu^{(3)})^2 - \mu^{(2)}\mu^{(4)}\}}{(\mu^{(2)})^2\mu^{(3)}},

\bar\mu = \frac{3(\mu^{(3)})^2 - 2\mu^{(2)}\mu^{(4)}}{\mu^{(2)}\mu^{(3)}},

\bar\mu^{(2)} = \frac{\{\mu^{(2)}\mu^{(4)} - 2(\mu^{(3)})^2\}\{2\mu^{(2)}\mu^{(4)} - 3(\mu^{(3)})^2\}}{(\mu^{(2)}\mu^{(3)})^2}.

When this assumption cannot be fulfilled, a simpler three-moment fit leads to

\bar\lambda = \frac{2\lambda(\mu^{(2)})^2}{\mu(\mu^{(3)} + \mu^{(2)}\mu)}, \qquad \bar\theta = \frac{\theta\mu(\mu^{(3)} + \mu^{(2)}\mu)}{2(\mu^{(2)})^2}, \qquad \bar\mu = \mu, \qquad \bar\mu^{(2)} = \frac{\mu(\mu^{(3)} + \mu^{(2)}\mu)}{2\mu^{(2)}}.


Table 15.11: The 4-moment gamma De Vylder approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψ4M GDV (u)

0 0.683946

1 0.595457

5 0.359879

10 0.194589

20 0.057150

50 0.001450

STFruin11.xpl

All in all, the 4-moment gamma De Vylder approximation is given by

\psi_{4MGDV}(u) = \frac{\bar\theta(1 - R/\bar\alpha)\exp(-\bar\beta R u/\bar\alpha)}{1 + (1+\bar\theta)R - (1+\bar\theta)(1 - R/\bar\alpha)} + \frac{\bar\alpha\bar\theta\sin(\bar\alpha\pi)}{\pi}\cdot I,  (15.20)

where

I = \int_0^\infty \frac{x^{\bar\alpha}\exp\{-(x+1)\bar\beta u\}}{\left[x^{\bar\alpha}\left\{1 + \bar\alpha(1+\bar\theta)(x+1)\right\} - \cos(\bar\alpha\pi)\right]^2 + \sin^2(\bar\alpha\pi)}\,dx,

and \bar\alpha = \bar\mu^2/(\bar\mu^{(2)} - \bar\mu^2), \bar\beta = \bar\mu/(\bar\mu^{(2)} - \bar\mu^2).

In the exponential and gamma case this method gives the exact result. For other claim distributions, in order to apply the approximation, the first four (or three in the simpler case) moments have to exist. Burnecki, Miśta, and Weron (2003) showed numerically that the method gives a slight correction to the De Vylder approximation, which is often regarded as the best among "simple" approximations. In Table 15.11 the 4-moment gamma De Vylder approximation for mixture of two exponentials claims with β1 = 3.5900·10−10 , β2 = 7.5088·10−9 , a = 0.0584 (see Chapter 13) and the relative safety loading θ = 30% with respect to the initial capital u is given. The most striking impression of Table 15.11 is certainly the extremely good accuracy of the simple 4-moment gamma De Vylder approximation for reasonable choices of the initial capital u. The relative error with respect to the exact values presented in Table 15.4 is the biggest for u = 0 and equals 11%.
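The moment matching behind the 4-moment fit can be checked mechanically: the fitted process must reproduce λµ⁽²⁾ (second moment) and θλµ (loaded drift) of the original one. A Python sketch for the running example (names ours; λ is arbitrary here since it cancels from ψ):

```python
beta1, beta2, a, theta, lam = 3.5900e-10, 7.5088e-9, 0.0584, 0.3, 1.0
mu  = a / beta1 + (1 - a) / beta2
mu2 = 2  * (a / beta1**2 + (1 - a) / beta2**2)
mu3 = 6  * (a / beta1**3 + (1 - a) / beta2**3)
mu4 = 24 * (a / beta1**4 + (1 - a) / beta2**4)

A = mu2 * mu4 - 2 * mu3**2      # both factors are negative when the
B = 2 * mu2 * mu4 - 3 * mu3**2  # assumption mu2*mu4 < (3/2)*mu3**2 holds

lam_b   = lam * mu3**2 * mu2**3 / (A * B)
theta_b = theta * mu * (2 * mu3**2 - mu2 * mu4) / (mu2**2 * mu3)
mu_b    = (3 * mu3**2 - 2 * mu2 * mu4) / (mu2 * mu3)
mu2_b   = A * B / (mu2 * mu3)**2

# Gamma claim parameters implied by the fitted first two claim moments
alpha_b = mu_b**2 / (mu2_b - mu_b**2)
beta_b  = mu_b / (mu2_b - mu_b**2)
```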


Table 15.12: The heavy traffic approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψHT (u)

0 1.000000

1 0.831983

5 0.398633

10 0.158908

20 0.025252

50 0.000101

STFruin12.xpl

15.3.8 Heavy Traffic Approximation

The term “heavy traﬃc” comes from queuing theory. In risk theory it means that, on the average, the premiums exceed only slightly the expected claims. It implies that the relative safety loading θ is positive and small. Asmussen (2000) suggests the following approximation.

\psi_{HT}(u) = \exp\left(-\frac{2\theta\mu u}{\mu^{(2)}}\right).  (15.21)
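A Python sketch of (15.21) for the running example (names ours):

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / beta1 + (1 - a) / beta2
mu2 = 2 * (a / beta1**2 + (1 - a) / beta2**2)

def psi_HT(u):
    """Heavy traffic approximation (15.21)."""
    return math.exp(-2 * theta * mu * u / mu2)
```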

This method requires the existence of the first two moments of the claim size distribution. Numerical evidence shows that the approximation is reasonable for a relative safety loading of 10–20% and u small or moderate, while the approximation may be far off for large u. Note that the approximation given by (15.21) is also known as the diffusion approximation and is further analysed and generalised to the stable case in Chapter 16, see also Furrer, Michna, and Weron (1997). Table 15.12 shows the results of the heavy traffic approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u. It is clear that the accuracy of the approximation in the considered case is extremely poor. When the initial capital is USD 50 billion, the relative error reaches 93%, cf. Table 15.4.


Table 15.13: The light traffic approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψLT (u)

0 0.769231

1 0.303545

5 0.072163

10 0.011988

20 0.000331

50 0.000000

STFruin13.xpl

15.3.9 Light Traffic Approximation

As for heavy traffic, the term "light traffic" comes from queueing theory, but it has an obvious interpretation in risk theory as well: on the average, the premiums are much larger than the expected claims, or in other words, claims appear less frequently than expected. This implies that the relative safety loading θ is positive and large. We may obtain the following asymptotic formula:

\psi_{LT}(u) = \frac{1}{(1+\theta)\mu}\int_u^\infty \bar F_X(x)\,dx.  (15.22)

In risk theory, heavy traffic is most often argued to be the typical case rather than light traffic. However, light traffic is of some interest as a complement to heavy traffic, and it is needed for the interpolation approximation studied in the next subsection. It is worth noticing that this method gives accurate results merely for huge values of the relative safety loading, see Asmussen (2000). In Table 15.13 the light traffic approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u is given. The results are even worse than in the heavy traffic case; only for moderate u is the situation better. The relative error dramatically increases with the initial capital.
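For the mixture of two exponentials, the tail integral in (15.22) is available in closed form, which gives a one-line Python sketch (names ours):

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu = a / beta1 + (1 - a) / beta2

def psi_LT(u):
    """Light traffic approximation (15.22); for the mixture,
    int_u^inf Fbar(x) dx = sum_i (a_i/beta_i) * exp(-beta_i * u)."""
    tail_int = a / beta1 * math.exp(-beta1 * u) + (1 - a) / beta2 * math.exp(-beta2 * u)
    return tail_int / ((1 + theta) * mu)
```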


Table 15.14: The heavy-light traffic approximation for mixture of two exponentials claims with β1 = 3.5900·10−10 , β2 = 7.5088·10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψHLT (u)

0 0.769231

1 0.598231

5 0.302136

10 0.137806

20 0.034061

50 0.001652

STFruin14.xpl

15.3.10 Heavy-light Traffic Approximation

The crude idea of this approximation is to interpolate between the heavy and light traffic approximations (Asmussen, 2000):

\psi_{HLT}(u) = \frac{\theta}{1+\theta}\,\psi_{LT}\!\left(\frac{\theta u}{1+\theta}\right) + \frac{1}{(1+\theta)^2}\,\psi_{HT}(u).  (15.23)
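A Python sketch of (15.23) for the running example, combining the two previous approximations (names ours):

```python
import math

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu  = a / beta1 + (1 - a) / beta2
mu2 = 2 * (a / beta1**2 + (1 - a) / beta2**2)

def psi_HT(u):
    return math.exp(-2 * theta * mu * u / mu2)

def psi_LT(u):
    tail_int = a / beta1 * math.exp(-beta1 * u) + (1 - a) / beta2 * math.exp(-beta2 * u)
    return tail_int / ((1 + theta) * mu)

def psi_HLT(u):
    """Heavy-light traffic interpolation (15.23)."""
    return (theta / (1 + theta) * psi_LT(theta * u / (1 + theta))
            + psi_HT(u) / (1 + theta) ** 2)
```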

A particular feature of this approximation is that it is exact for the exponential distribution and asymptotically correct both in light and heavy traffic. Table 15.14 shows the results of the heavy-light traffic approximation for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u. Comparing the results with Table 15.12 (heavy traffic), Table 15.13 (light traffic) and the exact values given in Table 15.4, we see that the interpolation is promising. In the considered case the relative error is the biggest when the initial capital is USD 20 billion, where it exceeds 40%, but usually the error is acceptable.

15.3.11 Subexponential Approximation

First, let us introduce the class of subexponential distributions S (Embrechts, Klüppelberg, and Mikosch, 1997), namely

S = \left\{F : \lim_{x\to\infty}\frac{\overline{F^{*2}}(x)}{\bar F(x)} = 2\right\}.  (15.24)


Here \overline{F^{*2}}(x) denotes the tail of the convolution square. In terms of random variables, (15.24) means P(X1 + X2 > x) ∼ 2P(X1 > x) as x → ∞, where X1 , X2 are independent random variables with distribution F. The class contains the log-normal and Weibull (for τ < 1) distributions. Moreover, all distributions with a regularly varying tail (e.g. the Pareto and Burr distributions) are subexponential. For subexponential distributions we can formulate the following approximation of the ruin probability. If F ∈ S, then the asymptotic formula for large u is given by

\psi_S(u) = \frac{1}{\theta\mu}\left(\mu - \int_0^u \bar F(x)\,dx\right),  (15.25)

see Asmussen (2000). The approximation is considered to be inaccurate: the problem is the very slow rate of convergence as u → ∞. Even though the approximation is asymptotically correct in the tail, one may have to go out to values of ψ(u) which are unrealistically small before the fit is reasonable. However, we will show in Section 15.4 that this is not always the case. As the mixture of exponentials does not belong to the subexponential class, we do not present a numerical example as in the previously discussed approximations.
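As an illustration with a subexponential member instead, consider the Pareto fit used in Section 15.4 (α = 3.4081, λ = 4.4767·10⁸, with tail F̄(x) = {λ/(λ+x)}^α). Since µ = λ/(α − 1), the integral in (15.25) is available in closed form and (15.25) reduces to ψS(u) = {λ/(λ+u)}^{α−1}/θ. A Python sketch (names ours):

```python
alpha_p, lam_p, theta = 3.4081, 4.4767e8, 0.3   # Pareto fit from Chapter 13

def psi_S(u):
    """Subexponential approximation (15.25) for Pareto claims, closed form.
    Meaningful only in the tail: psi_S(0) = 1/theta exceeds one."""
    return (lam_p / (lam_p + u)) ** (alpha_p - 1) / theta
```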

15.3.12 Computer Approximation via the Pollaczek-Khinchin Formula

One can use the Pollaczek-Khinchin formula (15.8) to derive explicit closed-form solutions for the claim amount distributions presented in Section 15.2, see Panjer and Willmot (1992). For the other distributions studied here, in order to calculate the ruin probability, the Monte Carlo method can be applied to (15.1) and (15.7). The main problem is to simulate random variables from the density fL1 (x). Only four of the considered distributions lead to a known density: (i) for exponential claims, fL1 (x) is the density of the same exponential distribution; (ii) for mixture of exponentials claims, fL1 (x) is the density of a mixture of the same exponential distributions with the weights a_i β_i^{-1} / \sum_{j=1}^n a_j β_j^{-1}, i = 1, …, n; (iii) for Pareto claims, fL1 (x) is the density of the Pareto distribution with the parameters α − 1 and λ; (iv) for Burr claims, fL1 (x) is the density of the transformed beta distribution.


Table 15.15: The Pollaczek-Khinchin approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψP K (u)

0 0.769209

1 0.587917

5 0.359705

10 0.194822

20 0.057173

50 0.001445

STFruin15.xpl

For the other distributions studied here we use formula (15.6) and controlled numerical integration to generate the random variables Lk (for the Weibull distribution, fL1 (x) does not even have a closed form). We note that the methodology based on the Pollaczek-Khinchin formula works for all considered claim distributions. The computer approximation via the Pollaczek-Khinchin formula will be called in short the Pollaczek-Khinchin approximation. Burnecki, Miśta, and Weron (2004) showed that this approximation can be chosen as the reference method for calculating the ruin probability in infinite time, see also Table 15.15, where the results of the Pollaczek-Khinchin approximation are presented for mixture of two exponentials claims with β1 , β2 , a and the relative safety loading θ = 30% with respect to the initial capital u. For the Monte Carlo method purposes we generated 100 blocks of 500 000 simulations.
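For case (ii) above, the Monte Carlo evaluation of (15.8) is particularly simple: L is a geometric sum of ladder heights drawn from the reweighted mixture, and ruin occurs when L exceeds u. A hedged Python sketch (names and sample sizes ours, smaller than the 100 × 500 000 runs used for Table 15.15):

```python
import random

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu = a / beta1 + (1 - a) / beta2
w1 = (a / beta1) / mu            # ladder-height weight of the first component

def psi_PK_mc(u, n=200_000, seed=42):
    """Monte Carlo Pollaczek-Khinchin: psi(u) = P(L > u), with L a geometric
    sum of ladder heights drawn from f_L1."""
    rng = random.Random(seed)
    q = 1 / (1 + theta)          # probability of one more ladder epoch
    hits = 0
    for _ in range(n):
        s = 0.0
        while rng.random() < q:
            rate = beta1 if rng.random() < w1 else beta2
            s += rng.expovariate(rate)
            if s > u:            # ruin level crossed; stop early
                hits += 1
                break
    return hits / n
```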

15.3.13 Summary of the Approximations

Table 15.16 shows which approximation can be used for a particular choice of a claim size distribution. Moreover, the necessary assumptions on the distribution parameters are presented.

Table 15.16: Survey of approximations with an indication when they can be applied.

  Method                 Exp.   Gamma   Mix. Exp.   Lognormal   Pareto   Burr     Weibull
  Cramér–Lundberg        +      +       +           –           –        –        –
  Exponential            +      +       +           +           α>3      ατ>3     +
  Lundberg               +      +       +           +           α>3      ατ>3     +
  Beekman–Bowers         +      +       +           +           α>3      ατ>3     +
  Renyi                  +      +       +           +           α>2      ατ>2     +
  De Vylder              +      +       +           +           α>3      ατ>3     +
  4M Gamma De Vylder     +      +       +           +           α>3      ατ>3     +
  Heavy Traffic          +      +       +           +           α>2      ατ>2     +
  Light Traffic          +      +       +           +           +        +        +
  Heavy-Light Traffic    +      +       +           +           α>2      ατ>2     +
  Subexponential         –      –       –           +           +        +        0<τ<1
  Pollaczek-Khinchin     +      +       +           +           +        +        +

15.4 Numerical Comparison of the Infinite Time Approximations

In this section we will illustrate all 12 approximations presented in Section 15.3. To this end we consider three claim amount distributions which were fitted to the PCS catastrophe data in Chapter 13, namely the mixture of two exponentials (a running example in Section 15.3) with β1 = 3.5900·10−10 , β2 = 7.5088·10−9 and a = 0.0584, log-normal with µ = 18.3806 and σ = 1.1052, and Pareto with α = 3.4081 and λ = 4.4767 · 108 . The logarithm of the ruin probability as a function of the initial capital u ranging from USD 0 to 50 billion for the three distributions is depicted in Figure 15.2. In the case of the log-normal and Pareto distributions the reference Pollaczek-Khinchin approximation is used. We see that the ruin probability values for the mixture of exponentials distribution are much higher than for the log-normal and Pareto distributions. It stems from the fact that the estimated parameters of the mixture result in a mean equal to 2.88 · 108 , whereas the mean of the fitted log-normal distribution amounts to 1.77 · 108 and that of the Pareto distribution to 1.86 · 108 .


Figure 15.2: The logarithm of the exact value of the ruin probability. The mixture of two exponentials (dashed blue line), log-normal (dotted red line), and Pareto (solid black line) claim size distributions. STFruin16.xpl

Figures 15.3–15.5 depict the relative error of the 11 approximations from Sections 15.3.1–15.3.11 with respect to the exact ruin probability values in the mixture of two exponentials case, and with respect to values obtained via the Pollaczek-Khinchin approximation in the log-normal and Pareto cases. The relative safety loading is set to 30%. We note that for the Monte Carlo method purposes in the Pollaczek-Khinchin approximation we generate 500 blocks of 100 000 simulations. First, we consider the mixture of two exponentials case already analysed in Section 15.3. Only the subexponential approximation cannot be used for such a claim amount distribution, see Table 15.16. As we can clearly see in Figure 15.3, the Cramér–Lundberg, De Vylder and 4-moment gamma De Vylder approximations work extremely well. Furthermore, the heavy traffic, light traffic, Renyi,

Figure 15.3: The relative error of the approximations. More effective methods (left panel): the Cramér–Lundberg (solid blue line), exponential (short-dashed brown line), Beekman–Bowers (dotted red line), De Vylder (medium-dashed black line) and 4-moment gamma De Vylder (long-dashed green line) approximations. Less effective methods (right panel): Lundberg (short-dashed red line), Renyi (dotted blue line), heavy traffic (solid magenta line), light traffic (long-dashed green line) and heavy-light traffic (medium-dashed brown line) approximations. The mixture of two exponentials case. STFruin17.xpl

and Lundberg approximations show a total lack of accuracy and the rest of the methods are only acceptable.

In the case of log-normally distributed claims, the situation is different, see Figure 15.4. Only the results obtained via the Beekman–Bowers, De Vylder and 4-moment gamma De Vylder approximations are acceptable. The rest of the approximations are well off target. We also note that all 11 approximations can be employed in the log-normal case except the Cramér–Lundberg one.

Figure 15.4: The relative error of the approximations. More effective methods (left panel): the exponential (dotted blue line), Beekman–Bowers (short-dashed brown line), heavy-light traffic (solid red line), De Vylder (medium-dashed black line) and 4-moment gamma De Vylder (long-dashed green line) approximations. Less effective methods (right panel): Lundberg (short-dashed red line), heavy traffic (solid magenta line), light traffic (long-dashed green line), Renyi (medium-dashed brown line) and subexponential (dotted blue line) approximations. The log-normal case. STFruin18.xpl

Finally, we take into consideration the Pareto claim size distribution. Figure 15.5 depicts the relative error for 9 approximations. Only the Cramér–Lundberg and 4-moment gamma De Vylder approximations have to be excluded, as the moment generating function does not exist and the fourth moment is infinite for the Pareto distribution with α = 3.4081. As we see in Figure 15.5, the relative errors of all approximations cannot be neglected. There is no unanimous winner among the approximations, but we may claim that the exponential approximation gives the most accurate results.

Figure 15.5: The relative error of the approximations. More effective methods (left panel): the exponential (dotted blue line), Beekman–Bowers (short-dashed brown line), heavy-light traffic (solid red line) and De Vylder (medium-dashed black line) approximations. Less effective methods (right panel): Lundberg (short-dashed red line), heavy traffic (solid magenta line), light traffic (long-dashed green line), Renyi (medium-dashed brown line) and subexponential (dotted blue line) approximations. The Pareto case. STFruin19.xpl

15.5 Exact Ruin Probabilities in Finite Time

We are now interested in the probability that the insurer's capital as defined by (15.1) remains non-negative for a finite period T rather than permanently. We assume that the number of claims process Nt is a Poisson process with rate λ, and consequently, the aggregate loss process is a compound Poisson process. Premiums are payable at rate c per unit time. We recall that the intensity of the process Nt is irrelevant in the infinite time case provided that it is compensated by the premium, see the discussion at the end of Section 15.1. In contrast to the infinite time case, there is no general formula for the ruin probability like the Pollaczek-Khinchin one given by (15.8). In the literature one can only find a partial integro-differential equation which is satisfied by the probability of non-ruin, see Panjer and Willmot (1992). An explicit result is known merely for exponential claims, and even in this case a numerical integration is needed (Asmussen, 2000).

15.5.1 Exponential Claim Amounts

First, in order to simplify the formulae, let us assume that claims have the exponential distribution with β = 1 and the amount of premium is c = 1. Then

\psi(u, T) = \lambda\exp\{-(1-\lambda)u\} - \frac{1}{\pi}\int_0^\pi \frac{f_1(x)f_2(x)}{f_3(x)}\,dx,  (15.26)

where f_1(x) = \lambda\exp\left\{2\sqrt{\lambda}\,T\cos x - (1+\lambda)T + u(\sqrt{\lambda}\cos x - 1)\right\}, f_2(x) = \cos(u\sqrt{\lambda}\sin x) - \cos(u\sqrt{\lambda}\sin x + 2x), and f_3(x) = 1 + \lambda - 2\sqrt{\lambda}\cos x. Now, notice that the case β ≠ 1 is easily reduced to β = 1 using the formula:

\psi_{\lambda,\beta}(u, T) = \psi_{\lambda/\beta,\,1}(\beta u, \beta T).  (15.27)

Moreover, the assumption c = 1 is not restrictive since we have

\psi_{\lambda,c}(u, T) = \psi_{\lambda/c,\,1}(u, cT).  (15.28)

(15.28)

Table 15.17 shows the exact values of the ruin probability for exponential claims with β = 6.3789 · 10−9 (see Chapter 13) with respect to the initial capital u and the time horizon T . The relative safety loading θ equals 30%. We see that the values converge to those calculated in the infinite time case as T gets larger, cf. Table 15.2. The speed of convergence decreases as the initial capital u grows.

15.6 Approximations of the Ruin Probability in Finite Time

In this section, we present 5 diﬀerent approximations. We illustrate them on a common claim size distribution example, namely the mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 and a = 0.0584 (see Chapter 13). Their numerical comparison is given in Section 15.7.


Table 15.17: The ruin probability for exponential claims with β = 6.3789 · 10−9 and θ = 0.3 (u in USD billion).

    u    ψ(u, 1)    ψ(u, 2)    ψ(u, 5)    ψ(u, 10)    ψ(u, 20)

0 0.757164 0.766264 0.769098 0.769229 0.769231

1 0.147954 0.168728 0.176127 0.176497 0.176503

2 0.025005 0.035478 0.040220 0.040495 0.040499

3 0.003605 0.007012 0.009138 0.009290 0.009293

4 0.000443 0.001288 0.002060 0.002131 0.002132

5 0.000047 0.000218 0.000459 0.000489 0.000489

STFruin20.xpl

15.6.1 Monte Carlo Method

The ruin probability in finite time can always be approximated by means of Monte Carlo simulations. Table 15.18 shows the output for mixture of two exponentials claims with β1 , β2 , a with respect to the initial capital u and the time horizon T . The relative safety loading θ is set to 30%. For the Monte Carlo method purposes we generated 50 × 10 000 simulations. We see that the values approach those calculated in the infinite time case as T increases, cf. Table 15.4. We note that the Monte Carlo method will be used as a reference method when comparing different finite time approximations in Section 15.7.
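A crude path simulation only needs to check the surplus at claim instants, since the surplus increases between claims. A Python sketch (names, sample sizes and the intensity λ = 34.2 are our assumptions; λ is not stated in this section):

```python
import random

beta1, beta2, a, theta = 3.5900e-10, 7.5088e-9, 0.0584, 0.3
mu = a / beta1 + (1 - a) / beta2
lam = 34.2                        # assumed annual claim intensity (illustrative)
c = (1 + theta) * lam * mu        # premium rate per year

def ruin_prob_mc(u, T, n=4000, seed=7):
    """Monte Carlo estimate of psi(u, T) for compound Poisson claims."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n):
        t, s = 0.0, u
        while True:
            w = rng.expovariate(lam)          # waiting time to the next claim
            if t + w > T:
                break                         # horizon reached without ruin
            t += w
            s += c * w                        # premium income since the last claim
            s -= rng.expovariate(beta1 if rng.random() < a else beta2)
            if s < 0:
                ruined += 1
                break
    return ruined / n
```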

15.6.2 Segerdahl Normal Approximation

The following result, due to Segerdahl (1955), is said to be a time-dependent version of the Cramér–Lundberg approximation given by (15.13). Under the assumption that c = 1, cf. relation (15.28), we have

\psi_S(u, T) = C\exp(-Ru)\,\Phi\!\left(\frac{T - u m_L}{\omega_L\sqrt{u}}\right),  (15.29)

where C = \theta\mu/\{M_X'(R) - \mu(1+\theta)\}, m_L = \{\lambda M_X'(R) - 1\}^{-1} and \omega_L^2 = \lambda M_X''(R)\,m_L^3.


Table 15.18: Monte Carlo results (50 × 10 000 simulations) for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψ(u, 1)    ψ(u, 2)    ψ(u, 5)    ψ(u, 10)    ψ(u, 20)

0 0.672550 0.718254 0.753696 0.765412 0.769364

1 0.428150 0.501066 0.560426 0.580786 0.587826

5 0.188930 0.256266 0.323848 0.350084 0.359778

10 0.063938 0.105022 0.159034 0.184438 0.194262

20 0.006164 0.015388 0.035828 0.049828 0.056466

50 0.000002 0.000030 0.000230 0.000726 0.001244

STFruin21.xpl

Table 15.19: The Segerdahl approximation for mixture of two exponentials claims with β1 = 3.5900 · 10−10 , β2 = 7.5088 · 10−9 , a = 0.0584 and θ = 0.3 (u in USD billion).

    u    ψ(u, 1)    ψ(u, 2)    ψ(u, 5)    ψ(u, 10)    ψ(u, 20)

0 0.663843 0.663843 0.663843 0.663843 0.663843

1 0.444333 0.554585 0.587255 0.587260 0.587260

5 0.172753 0.229282 0.338098 0.359593 0.359660

10 0.070517 0.092009 0.152503 0.192144 0.194858

20 0.013833 0.017651 0.030919 0.049495 0.057143

50 0.000141 0.000175 0.000311 0.000634 0.001254

STFruin22.xpl

This method requires the existence of the adjustment coefficient, which implies that only light-tailed distributions can be used. Numerical evidence shows that the Segerdahl approximation gives the best results for large values of the initial capital u, see Asmussen (2000).

In Table 15.19, the results of the Segerdahl approximation for the mixture of two exponentials claims with parameters β1, β2, and a are presented with respect to the initial capital u and the time horizon T. The relative safety loading θ = 30%. We see that in the considered case the approximation yields quite accurate results for moderate u, cf. Table 15.18.

Table 15.20: The diffusion approximation for mixture of two exponentials claims with β1 = 3.5900·10^{-10}, β2 = 7.5088·10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

 u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
 0    1.000000   1.000000   1.000000   1.000000   1.000000
 1    0.770917   0.801611   0.823343   0.829877   0.831744
 5    0.223423   0.304099   0.370177   0.391556   0.397816
10    0.028147   0.072061   0.128106   0.150708   0.157924
20    0.000059   0.001610   0.011629   0.020604   0.024603
50    0.000000   0.000000   0.000000   0.000017   0.000073

STFruin23.xpl

15.6.3 Diffusion Approximation

The idea of the diffusion approximation is first to approximate the claim surplus process S_t by a Brownian motion with drift (arithmetic Brownian motion) by matching the first two moments, and next to note that such an approximation implies that the first passage probabilities are close as well. The first passage probability serves as the ruin probability. The diffusion approximation is given by:

ψ_D(u, T) = IG( Tμ_c²/σ_c² ; −1 ; u|μ_c|/σ_c² ),    (15.30)

where μ_c = −λθμ, σ_c² = λμ^(2), and IG(x; ζ; u) denotes the distribution function of the passage time of a Brownian motion with unit variance and drift ζ from the level 0 to the level u > 0 (often referred to as the inverse Gaussian distribution function), namely

IG(x; ζ; u) = 1 − Φ( u/x^{1/2} − ζx^{1/2} ) + exp(2ζu) Φ( −u/x^{1/2} − ζx^{1/2} ),

see Asmussen (2000).
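Formula (15.30) translates directly into code once the inverse Gaussian distribution function IG is available; the sketch below is a transcription with function names of our own choosing.

```python
from math import exp, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def ig_cdf(x, zeta, level):
    """IG(x; zeta; level): distribution function of the passage time of a
    unit-variance Brownian motion with drift zeta from 0 to level > 0."""
    if level <= 0:
        return 1.0          # the barrier is hit immediately
    return (1.0 - Phi(level / sqrt(x) - zeta * sqrt(x))
            + exp(2.0 * zeta * level) * Phi(-level / sqrt(x) - zeta * sqrt(x)))

def psi_diffusion(u, T, lam, theta, mu, mu2):
    """Diffusion approximation (15.30); mu2 is the second raw moment mu^(2)."""
    mu_c = -lam * theta * mu
    sigma2_c = lam * mu2
    return ig_cdf(T * mu_c**2 / sigma2_c, -1.0, u * abs(mu_c) / sigma2_c)
```

Note that ψ_D(0, T) = 1 for any T, which explains the first row of Table 15.20, and ψ_D(u, T) → exp(−2u|μ_c|/σ_c²) as T → ∞.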


Table 15.21: The corrected diffusion approximation for mixture of two exponentials claims with β1 = 3.5900·10^{-10}, β2 = 7.5088·10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

 u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
 0    0.521465   0.587784   0.638306   0.655251   0.660958
 1    0.426840   0.499238   0.557463   0.577547   0.584386
 5    0.187718   0.254253   0.321230   0.347505   0.356922
10    0.065264   0.104967   0.157827   0.182727   0.192446
20    0.007525   0.016173   0.035499   0.049056   0.055610
50    0.000010   0.000039   0.000251   0.000724   0.001243

STFruin24.xpl

We also note that in order to apply this approximation the second moment of the claim size distribution must exist. Table 15.20 shows the results of the diffusion approximation for the mixture of two exponentials claims with parameters β1, β2, and a with respect to the initial capital u and the time horizon T. The relative safety loading θ equals 30%. The results lead to the conclusion that the approximation does not produce accurate results for this choice of the claim size distribution. Only for u = 5 billion USD are the results acceptable, cf. the reference values in Table 15.18.

15.6.4 Corrected Diffusion Approximation

The diffusion approximation presented above ignores the presence of jumps in the risk process (the Brownian motion with drift is skip-free) and the overshoot S_{τ(u)} − u at the moment of ruin. The corrected diffusion approximation takes these and other deficiencies into consideration (Asmussen, 2000). Under the assumption that c = 1, cf. relation (15.28), we have

ψ_CD(u, T) = IG( Tδ₁/u² + δ₂/u ; −Ru/2 ; 1 + δ₂/u ),    (15.31)

where R is the adjustment coefficient, δ₁ = λM″_X(γ₀), δ₂ = M‴_X(γ₀) / {3M″_X(γ₀)}, and γ₀ satisfies the equation κ′(γ₀) = 0, where κ(s) = λ{M_X(s) − 1} − s.
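A sketch of (15.31) for Exp(1) claims, where γ₀ and R have closed forms, may look as follows; the inverse Gaussian function is the same as in the plain diffusion approximation, and all helper names and parameter choices are our own illustrative assumptions.

```python
from math import exp, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def ig_cdf(x, zeta, level):
    """Inverse Gaussian distribution function, as in (15.30)."""
    return (1.0 - Phi(level / sqrt(x) - zeta * sqrt(x))
            + exp(2.0 * zeta * level) * Phi(-level / sqrt(x) - zeta * sqrt(x)))

def psi_corrected_diffusion(u, T, lam, R, gamma0, d2MX, d3MX):
    """Corrected diffusion approximation (15.31), premium rate c = 1."""
    delta1 = lam * d2MX(gamma0)
    delta2 = d3MX(gamma0) / (3.0 * d2MX(gamma0))
    return ig_cdf(T * delta1 / u**2 + delta2 / u, -R * u / 2.0, 1.0 + delta2 / u)

# Exponential claims Exp(1) with c = 1 and theta = 0.3 (so lam = 1/1.3):
# kappa'(gamma0) = lam*M'(gamma0) - 1 = 0 gives gamma0 = 1 - sqrt(lam),
# and the adjustment coefficient is R = 1 - lam.
lam = 1 / 1.3
gamma0 = 1.0 - sqrt(lam)
psi = psi_corrected_diffusion(5.0, 10.0, lam, 1.0 - lam, gamma0,
                              d2MX=lambda s: 2/(1-s)**3, d3MX=lambda s: 6/(1-s)**4)
```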


Table 15.22: The finite time De Vylder approximation for mixture of two exponentials claims with β1 = 3.5900·10^{-10}, β2 = 7.5088·10^{-9}, a = 0.0584 and θ = 0.3 (u in USD billion).

 u    ψ(u,1)     ψ(u,2)     ψ(u,5)     ψ(u,10)    ψ(u,20)
 0    0.528431   0.594915   0.645282   0.662159   0.667863
 1    0.433119   0.505300   0.563302   0.583353   0.590214
 5    0.189379   0.256745   0.323909   0.350278   0.359799
10    0.063412   0.104811   0.158525   0.183669   0.193528
20    0.006114   0.015180   0.035142   0.048960   0.055637
50    0.000003   0.000021   0.000215   0.000690   0.001218

STFruin25.xpl

Similarly to the Segerdahl approximation, this method requires the existence of the moment generating function, so it can only be used for light-tailed distributions. In Table 15.21 the results of the corrected diffusion approximation for the mixture of two exponentials claims with parameters β1, β2, and a are given with respect to the initial capital u and the time horizon T. The relative safety loading θ is set to 30%. It turns out that the corrected diffusion method gives surprisingly good results and is vastly superior to the ordinary diffusion approximation, cf. the reference values in Table 15.18.

15.6.5 Finite Time De Vylder Approximation

Let us recall the idea of the De Vylder approximation in infinite time: we replace the claim surplus process with one with θ = θ̄, λ = λ̄ and exponential claims with parameter β̄, fitting the first three moments, see Section 15.3.6. Here, the idea is the same. First, we compute

β̄ = 3μ^(2)/μ^(3),   λ̄ = 9λ(μ^(2))³ / {2(μ^(3))²},   and   θ̄ = 2μμ^(3)θ / {3(μ^(2))²}.

Next, we employ relations (15.27) and (15.28) and finally use the exact, exponential-case formula presented in Section 15.5.1.
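The three-moment fit above is a one-liner per parameter; the sketch below computes the substitute parameters and checks the obvious sanity property that exponential claims are mapped to themselves (function name ours).

```python
def de_vylder_fit(lam, theta, mu, mu2, mu3):
    """Three-moment De Vylder fit of Section 15.6.5: returns the parameters
    (beta_bar, lam_bar, theta_bar) of the exponential substitute process,
    where mu, mu2, mu3 are the first three raw moments of the claim law."""
    beta_bar = 3.0 * mu2 / mu3
    lam_bar = 9.0 * lam * mu2**3 / (2.0 * mu3**2)
    theta_bar = 2.0 * mu * mu3 * theta / (3.0 * mu2**2)
    return beta_bar, lam_bar, theta_bar

# Sanity check: for Exp(beta) claims mu = 1/beta, mu2 = 2/beta^2, mu3 = 6/beta^3,
# and the fit must return the original parameters unchanged.
beta = 2.0
fit = de_vylder_fit(1.0, 0.3, 1/beta, 2/beta**2, 6/beta**3)
```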


Obviously, the method gives the exact result in the exponential case. For other claim distributions, the first three moments have to exist for the approximation to apply. Table 15.22 shows the results of the finite time De Vylder approximation for the mixture of two exponentials claims with parameters β1, β2, and a with respect to the initial capital u and the time horizon T. The relative safety loading θ = 30%. We see that the approximation gives even better results than the corrected diffusion one, cf. the reference values presented in Table 15.18.

15.6.6 Summary of the Approximations

Table 15.23 shows which approximation can be used for each claim size distribution. Moreover, the necessary assumptions on the distribution parameters are presented.

Table 15.23: Survey of the approximations with an indication when they can be applied.

Method           Exp.   Gamma   Weibull   Mix. Exp.   Lognormal   Pareto   Burr
Monte Carlo       +       +        +          +           +          +        +
Segerdahl         +       +        –          +           –          –        –
Diffusion         +       +        +          +           +        α > 2   ατ > 2
Corr. diff.       +       +        –          +           –          –        –
Fin. De Vylder    +       +        +          +           +        α > 3   ατ > 3

15.7 Numerical Comparison of the Finite Time Approximations

Now we illustrate all five approximations presented in Section 15.6. As in the infinite time case, we consider three claim amount distributions which were best fitted to the catastrophe data in Chapter 13, namely the mixture of two exponentials (a running example in Sections 15.3 and 15.6), log-normal, and Pareto distributions. The parameters of the distributions are: β1 = 3.5900·10^{-10}, β2 = 7.5088·10^{-9}, a = 0.0584 (mixture); μ = 18.3806, σ = 1.1052 (log-normal); and α = 3.4081, λ = 4.4767·10^{8} (Pareto). The ruin probability will be depicted as a function of u, ranging from USD 0 to 30 billion, with fixed T = 10, or with a fixed value of u = 20 billion USD and T varying from 0 to 20 years. The relative safety loading is set to 30%. All figures have the same form of output. In the left panel, the exact ruin probability values obtained via Monte Carlo simulations are presented. The right panel describes the relative error with respect to the exact values. For the Monte Carlo method we again generated 50 × 10000 simulations.

Figure 15.6: The exact ruin probability obtained via Monte Carlo simulations (left panel) and the relative error of the approximations (right panel): the Segerdahl (short-dashed blue line), diffusion (dotted red line), corrected diffusion (solid black line) and finite time De Vylder (long-dashed green line) approximations. The mixture of two exponentials case with T fixed and u varying. STFruin26.xpl

First, we consider the mixture of two exponentials case. As we can see in Figures 15.6 and 15.7, the diffusion approximation gives highly inaccurate results for almost all values of u and T. The Segerdahl and corrected diffusion approximations yield similar errors, which visibly decrease as the time horizon grows. The finite time De Vylder method is the clear winner, always keeping the error below 10%.


Figure 15.7: The exact ruin probability obtained via Monte Carlo simulations (left panel), the relative error of the approximations (right panel). The Segerdahl (short-dashed blue line), diﬀusion (dotted red line), corrected diﬀusion (solid black line) and ﬁnite time De Vylder (long-dashed green line) approximations. The mixture of two exponentials case with u ﬁxed and T varying. STFruin27.xpl

In the case of log-normally distributed claims, we can only apply two approximations: diﬀusion and ﬁnite time De Vylder ones, cf. Table 15.23. Figures 15.8 and 15.9 depict the exact ruin probability values obtained via Monte Carlo simulations and the relative error with respect to the exact values. Again, the ﬁnite time De Vylder approximation works much better than the diﬀusion one.

Finally, we take into consideration the Pareto claim size distribution. Figures 15.10 and 15.11 depict the exact ruin probability values and the relative error with respect to the exact values for the diﬀusion and ﬁnite time De Vylder approximations. We see that now we cannot claim which approximation is better. The error strongly depends on the values of u and T . We may only suspect that a combination of the two methods could give interesting results.


Figure 15.8: The exact ruin probability obtained via Monte Carlo simulations (left panel), the relative error of the approximations (right panel). Diﬀusion (dotted red line) and ﬁnite time De Vylder (long-dashed green line) approximations. The log-normal case with T ﬁxed and u varying. STFruin28.xpl


Figure 15.9: The exact ruin probability obtained via Monte Carlo simulations (left panel), the relative error of the approximations (right panel). Diﬀusion (dotted red line) and ﬁnite time De Vylder (long-dashed green line) approximations. The log-normal case with u ﬁxed and T varying. STFruin29.xpl


Figure 15.10: The exact ruin probability obtained via Monte Carlo simulations (left panel), the relative error of the approximations (right panel). Diﬀusion (dotted red line) and ﬁnite time De Vylder (long-dashed green line) approximations. The Pareto case with T ﬁxed and u varying. STFruin30.xpl


Figure 15.11: The exact ruin probability obtained via Monte Carlo simulations (left panel), the relative error of the approximations (right panel). Diﬀusion (dotted red line) and ﬁnite time De Vylder (long-dashed green line) approximations. The Pareto case with u ﬁxed and T varying. STFruin31.xpl

Bibliography

Asmussen, S. (2000). Ruin Probabilities, World Scientific, Singapore.

Burnecki, K., Miśta, P., and Weron, A. (2003). A New De Vylder Type Approximation of the Ruin Probability in Infinite Time, Research Report HSC/03/05.

Burnecki, K., Miśta, P., and Weron, A. (2005). What is the Best Approximation of Ruin Probability in Infinite Time?, Appl. Math. (Warsaw) 32.

De Vylder, F.E. (1978). A Practical Solution to the Problem of Ultimate Ruin Probability, Scand. Actuar. J.: 114–119.

De Vylder, F.E. (1996). Advanced Risk Theory. A Self-Contained Introduction, Editions de l'Université de Bruxelles and Swiss Association of Actuaries.

Embrechts, P., Kaufmann, R., and Samorodnitsky, G. (2004). Ruin Theory Revisited: Stochastic Models for Operational Risk, in C. Bernadell et al. (eds.), Risk Management for Central Bank Foreign Reserves, European Central Bank, Frankfurt a.M., 243–261.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Furrer, H., Michna, Z., and Weron, A. (1997). Stable Lévy Motion Approximation in Collective Risk Theory, Insurance Math. Econom. 20: 97–114.

Grandell, J. and Segerdahl, C.-O. (1971). A Comparison of Some Approximations of Ruin Probabilities, Skand. Aktuarietidskr.: 144–158.

Grandell, J. (1991). Aspects of Risk Theory, Springer, New York.

Grandell, J. (2000). Simple Approximations of Ruin Probability, Insurance Math. Econom. 26: 157–173.

Panjer, H.H. and Willmot, G.E. (1992). Insurance Risk Models, Society of Actuaries, Schaumburg.

Segerdahl, C.-O. (1955). When Does Ruin Occur in the Collective Theory of Risk?, Skand. Aktuarietidskr. 38: 22–36.

Wikstad, N. (1971). Exemplification of Ruin Probabilities, Astin Bulletin 6: 147–152.

16 Stable Diffusion Approximation of the Risk Process

Hansjörg Furrer, Zbigniew Michna, and Aleksander Weron

16.1 Introduction

Collective risk theory is concerned with random fluctuations of the total net assets – the capital of an insurance company. Consider a company which only writes ordinary insurance policies such as accident, disability, health and whole life. The policyholders pay premiums regularly and at certain random times make claims to the company. A policyholder's premium, the gross risk premium, is a positive amount composed of two components. The net risk premium is the component calculated to cover the payments of claims on the average, while the security risk premium, or safety loading, is the component which protects the company from large deviations of claims from the average and also allows an accumulation of capital. So the risk process has the Cramér–Lundberg form:

R(t) = u + ct − Σ_{k=1}^{N(t)} Y_k,

where u > 0 is the initial capital (in some cases interpreted as the initial risk reserve) of the company and the policyholders pay a gross risk premium of c > 0 per unit time, see also Chapter 14. The successive claims {Yk} are assumed to form a sequence of i.i.d. random variables with mean EYk = μ, and claims occur at jumps of a point process N(t), t ≥ 0. The ruin time T is defined as the first time the company has a negative capital, see Chapter 15. One of the key problems of collective risk theory concerns calculating the ultimate ruin probability Ψ = P(T < ∞ | R(0) = u), i.e. the probability that the risk process ever becomes negative. On the other hand, an insurance company will typically be interested in the probability that ruin occurs before time t, i.e. Ψ(t) = P(T < t | R(0) = u). However, many of the results available in the literature are in the form of complicated analytic expressions (for a comprehensive treatment of the theory see e.g. Asmussen, 2000; Embrechts, Klüppelberg, and Mikosch, 1997; Rolski et al., 1999). Hence, some authors have proposed to approximate the risk process by a Brownian diffusion, see Iglehart (1969) and Schmidli (1994). The idea is to let the number of claims grow in a unit time interval and to make the claim sizes smaller in such a way that the risk process converges weakly to the diffusion.

In this chapter we present weak convergence theory applied to approximate the risk process by Brownian motion and α-stable Lévy motion. We investigate two different approximations. The first one assumes that the distribution of claim sizes belongs to the domain of attraction of the normal law, i.e. claims are small. In the second model we consider claim sizes belonging to the domain of attraction of the α-stable law (1 < α < 2), i.e. large claims. The latter approximation is particularly relevant whenever the claim experience allows for heavy-tailed distributions. As the empirical results presented in Chapter 13 show, at least for the catastrophic losses the assumption of heavy-tailed severities is statistically justified. While the classical theory of Brownian diffusion approximation requires short-tailed claims, this assumption can be dropped in our approach, hence allowing for extremal events. Furthermore, employing approximations of risk processes by Brownian motion and α-stable Lévy motion we obtain formulas for ruin probabilities in finite and infinite time horizons.

16.2

Brownian Motion and the Risk Model for Small Claims

This section will be devoted to the Brownian motion approximation in risk theory and will be based on the work of Iglehart (1969). We assume that the distribution of the claim sizes belongs to the domain of attraction of the normal law. Thus, such claims attain big values with small probabilities. This assumption will cover many practical situations in which the claim size distribution possesses a ﬁnite second moment and claims constitute an i.i.d. sequence. The claims counting process does not have to be independent of the sequence of claim sizes as it is assumed in many risk models and, in general, can be a renewal process constructed from random variables having a ﬁnite ﬁrst moment.

16.2.1 Weak Convergence of Risk Processes to Brownian Motion

Let us consider a sequence of risk processes Rn(t) defined in the following way:

Rn(t) = un + cn t − Σ_{k=1}^{N(nt)} Y_k^{(n)},    (16.1)

where un is the initial capital, cn is the premium paid by the policyholders, and the sequence {Y_k^{(n)} : k ∈ N} describes the consecutive claim sizes. Assume also that EY_k^{(n)} = μn and Var Y_k^{(n)} = σn². The point process N = {N(t) : t ≥ 0} counts the claims appearing up to time t, that is:

N(t) = max{ k : Σ_{i=1}^{k} T_i ≤ t },    (16.2)

where {Tk : k ∈ N} is an i.i.d. sequence of nonnegative random variables describing the times between arriving claims, with ETk = 1/λ > 0. Recall that if the Tk are exponentially distributed then N(t) is a Poisson process with intensity λ. To approximate the risk process by Brownian motion, we assume n^{−1/2} un → u, n^{−1/2} cn → c, n^{1/2} μn → μ, σn² → σ², and E|Y_k^{(n)}|^{2+ε} ≤ M for some ε > 0, where M is independent of n. Then:

n^{−1/2} Rn(t) →^L u + (c − μλ)t + σλ^{1/2} B(t)    (16.3)

weakly in the topology U (uniform convergence on compact sets). Let us denote by RB(t) the limit process from the above approximation, i.e.:

RB(t) = u + (c − μλ)t + σλ^{1/2} B(t).    (16.4)

Property (16.3) lets us approximate the risk process by RB(t), for which it is possible to derive exact formulas for ruin probabilities in finite and infinite time horizons.

16.2.2 Ruin Probability for the Limit Process

Weak convergence of stochastic processes does not imply the convergence of ruin probabilities in general. Thus, to take advantage of the Brownian motion approximation it is necessary to show that the ruin probabilities in finite and infinite time horizons of the risk processes converge to the ruin probabilities of Brownian motion. Let us define the ruin time:

T(R) = inf{t > 0 : R(t) < 0},    (16.5)

if the set is non-empty, and T = ∞ otherwise. Then T(Rn) → T(RB) almost surely if Rn → RB almost surely as n → ∞, and P{T(Rn) < ∞} → P{T(RB) < ∞}. Thus we need to find formulas for the ruin probabilities of the process RB. Let RB be the Brownian motion with linear drift defined in (16.4). Then

P{T(RB) < ∞} = exp{ −2u(c − λμ) / (σ²λ) }    (16.6)

and

P{T(RB) ≤ t} = 1 − Φ( {u + (c − λμ)t} / {σ(λt)^{1/2}} )
             + exp{ −2u(c − λμ) / (σ²λ) } [ 1 − Φ( {u − (c − λμ)t} / {σ(λt)^{1/2}} ) ].    (16.7)

It is also possible to determine the density of the ruin time. Let T(RB) be the ruin time of the process (16.4). Then the density fT of the random variable T(RB) has the following form:

fT(t) = {β e^{αβ} / (2π)^{1/2}} t^{−3/2} exp{ −(β² t^{−1} + α² t)/2 },    t > 0,

where α = (c − λμ)/(σλ^{1/2}) and β = u/(σλ^{1/2}).

The Brownian model is an approximation of the risk process in the case when the distribution of claim sizes belongs to the domain of attraction of the normal law; the assumptions imposed on the risk process indicate that, from the point of view of an insurance company, the number of claims is large and the sizes of claims are small.
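Formulas (16.6) and (16.7) translate directly into code; as a numerical cross-check, with u = 25, c = 50, λ = 2, μ = 20, σ = 10 and t = 10 they reproduce the first row of Table 16.1 (function names are ours).

```python
from math import exp, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def psi_bm_infinite(u, c, lam, mu, sigma):
    """Eq. (16.6): infinite time ruin probability for R_B."""
    return exp(-2.0 * u * (c - lam * mu) / (sigma**2 * lam))

def psi_bm_finite(u, c, lam, mu, sigma, t):
    """Eq. (16.7): probability that R_B is ruined before time t."""
    m = c - lam * mu                 # drift of R_B
    s = sigma * sqrt(lam * t)
    return (1.0 - Phi((u + m * t) / s)
            + psi_bm_infinite(u, c, lam, mu, sigma) * (1.0 - Phi((u - m * t) / s)))
```

For example, psi_bm_infinite(25, 50, 2, 20, 10) = exp(−2.5) ≈ 8.2085e-02, matching Table 16.1.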

16.2.3 Examples

Let us consider a risk model where the distribution of claim sizes belongs to the domain of attraction of the normal law and the process counting the number of claims is a renewal counting process constructed from i.i.d. random variables with a finite first moment. Let R(t) be the following risk process:

R(t) = u + ct − Σ_{k=1}^{N(t)} Y_k,    (16.8)

where u is the initial capital, c is the premium income in the unit time interval, and {Yk : k ∈ N} are i.i.d. random variables belonging to the domain of attraction of the normal law. Moreover, EYk = μ, Var Yk = σ², and the intensity of arriving claims is λ (the reciprocal of the expected claim inter-arrival time). Thus, we obtain:

P{T(R) ≤ t} ≈ P{T(RB) ≤ t}    (16.9)

and

P{T(R) < ∞} ≈ P{T(RB) < ∞},    (16.10)

where RB(t) = u + (c − μλ)t + σλ^{1/2} B(t) and B(t) is the standard Brownian motion. Using the formulas for ruin probabilities in finite and infinite time horizons given in (16.6) and (16.7), we compute approximate values of the ruin probability for different levels of the initial capital, premium, claim intensity, and claim mean and variance, see Table 16.1. A sample path of the process RB(t) is depicted in Figure 16.1.

Table 16.1: Ruin probabilities for the Brownian motion approximation. Parameters μ = 20, σ = 10, and t = 10 are fixed.

 u    c    λ    Ψ(t)         Ψ
25   50    2    8.0842e-02   8.2085e-02
25   60    2    6.7379e-03   6.7379e-03
30   60    2    2.4787e-03   2.4787e-03
35   60    2    9.1185e-04   9.1188e-04
40   60    2    3.3544e-04   3.3546e-04
40   70    3    6.5282e-02   6.9483e-02

STFdiff01.xpl


Figure 16.1: A sample path of the process RB for u = 40, c = 100, µ = 20, σ = 10, and λ = 3. STFdiff02.xpl

16.3 Stable Lévy Motion and the Risk Model for Large Claims

In this section we present approximations of the risk process by α-stable Lévy motion. We assume that claims are large, i.e. that the distribution of their sizes is heavy-tailed. More precisely, we let the claim size distribution belong to the domain of attraction of the α-stable law with 1 < α < 2, see Weron (2001) and Chapter 1. This is an extension of the Brownian motion approximation approach. Note, however, that the methods and theory presented here are quite different from those used in the previous section (Weron, 1984).

We assume that claim sizes constitute an i.i.d. sequence and that the claim counting process does not have to be independent of the sequence of claim sizes and, in general, can be a renewal counting process constructed from random variables having a finite second moment. This model can be applied when claims are caused by earthquakes, floods, tornadoes, and other natural disasters. In fact, the catastrophic losses dataset studied in Chapter 13 reveals a very heavy-tailed nature of the severity distribution. The best fit was obtained for a Burr law with α = 0.4801 and τ = 2.1524, which indicates a power-law decay of order ατ = 1.0334 of the claim size distribution. Naturally, such a distribution belongs to the domain of attraction of the α-stable law with 1 < α < 2.

16.3.1 Weak Convergence of Risk Processes to α-stable Lévy Motion

We construct a sequence of risk processes converging weakly to the α-stable Lévy motion. Let Rn(t) be a sequence of risk processes defined as follows:

Rn(t) = un + cn t − Σ_{k=1}^{N^{(n)}(t)} Y_k^{(n)},    (16.11)

where un is the initial capital, cn is the premium rate, {Y_k^{(n)} : k ∈ N} is a sequence describing the sizes of the consecutive claims, and N^{(n)}(t), for every n ∈ N, is a point process counting the number of claims. Moreover, we assume that the random variables representing the claim sizes are of the following form:

Y_k^{(n)} = Y_k / ϕ(n),    (16.12)

where {Yk : k ∈ N} is a sequence of i.i.d. random variables with distribution F and expectation EYk = μ. The normalizing function is ϕ(n) = n^{1/α} L(n), where L is a function slowly varying at infinity. As before, it is not necessary to assume that the random variables Yk are non-negative; however, this time we assume that they belong to the domain of attraction of an α-stable law, that is:

{1/ϕ(n)} Σ_{k=1}^{n} (Y_k − μ) →^L Z_{α,β}(1),    (16.13)

where Z_{α,β}(t) is the α-stable Lévy motion with scale parameter σ′, skewness parameter β, and 1 < α < 2. For details see Janicki and Weron (1994) and Samorodnitsky and Taqqu (1994).


Let Rα(t) be the α-stable Lévy motion with a linear drift:

Rα(t) = u + ct − λ^{1/α} Z_{α,β}(t),    (16.14)

where u, c, and λ are positive constants. Let {Yk} be the sequence of random variables defined above and {N^{(n)}} be a sequence of point processes satisfying

{N^{(n)}(t) − λnt} / ϕ(n) →^L 0,    (16.15)

where →^L denotes weak convergence in the Skorokhod topology and λ is a positive constant. Moreover, we assume

lim_{n→∞} { cn − λn μ/ϕ(n) } = c    (16.16)

and

lim_{n→∞} un = u.    (16.17)

Then

Rn(t) = un + cn t − {1/ϕ(n)} Σ_{k=1}^{N^{(n)}(t)} Y_k →^L Rα(t) = u + ct − λ^{1/α} Z_{α,β}(t)    (16.18)

as n → ∞; for details see Furrer, Michna, and Weron (1997). Assumption (16.15) is satisfied for a wide class of point processes, for example, if the times between consecutive claims constitute an i.i.d. sequence with a distribution possessing a finite second moment. We should also notice that the skewness parameter β equals 1 for the process Rα(t) if the random variables {Yk} are non-negative.
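A path of the limit process Rα can be simulated from the self-similarity of Z_{α,β}. The sketch below draws standard α-stable variates with the Chambers–Mallows–Stuck method (α ≠ 1) — a standard recipe, though not spelled out in this chapter — and all function names and parameter values are our own illustrative assumptions.

```python
import numpy as np

def sample_stable(alpha, beta, size, rng):
    """Chambers-Mallows-Stuck sampler for a standard alpha-stable random
    variable (unit scale, zero shift), valid for alpha != 1."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    t = beta * np.tan(np.pi * alpha / 2)
    B = np.arctan(t) / alpha
    S = (1 + t**2) ** (1 / (2 * alpha))
    return (S * np.sin(alpha * (V + B)) / np.cos(V) ** (1 / alpha)
            * (np.cos(V - alpha * (V + B)) / W) ** ((1 - alpha) / alpha))

def simulate_R_alpha(u, c, lam, alpha, beta, sigma, T=1.0, n=1000, rng=None):
    """Grid approximation of R_alpha(t) = u + c t - lam^{1/alpha} Z_{alpha,beta}(t),
    using that increments of Z over dt have scale sigma * dt^{1/alpha}."""
    rng = np.random.default_rng(rng)
    dt = T / n
    dZ = sigma * dt ** (1 / alpha) * sample_stable(alpha, beta, n, rng)
    t = np.linspace(0.0, T, n + 1)
    Z = np.concatenate(([0.0], np.cumsum(dZ)))
    return t, u + c * t - lam ** (1 / alpha) * Z
```

Plotting such a path for α = 1.5 and β = 1 reproduces the qualitative behavior of Figure 16.2: long stretches of upward drift interrupted by large downward jumps.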

16.3.2 Ruin Probability in the Limit Risk Model for Large Claims

As in the Brownian motion approximation, it can be shown that the finite and infinite time ruin probabilities converge to the ruin probabilities of the limit process. Thus it remains to derive ruin probabilities for the process Rα(t) defined in (16.18). We present the asymptotic behavior of the ruin probabilities in finite and infinite time horizons and an exact formula for the infinite time ruin probability. An upper bound for the finite time ruin probability will also be given.


First, we derive the asymptotic ruin probability for the finite time horizon. Let T be the ruin time defined in (16.5) and Z_{α,β}(t) be the α-stable Lévy motion with 0 < α < 2, −1 < β ≤ 1, and scale parameter σ′. Then:

lim_{u→∞} P{T(u + cs − λ^{1/α} Z_{α,β}(s)) ≤ t} / P{λ^{1/α} Z_{α,β}(t) > u + ct} = 1,    (16.19)

see Furrer, Michna, and Weron (1997) and Willekens (1987). Using the asymptotic behavior of the probability P{λ^{1/α} Z_{α,β}(t) > u + ct} as u → ∞ for 1 < α < 2, we get (Samorodnitsky and Taqqu, 1994, Prop. 1.2.15) that

P{T(u + cs − λ^{1/α} Z_{α,β}(s)) ≤ t} ≈ C_α {(1 + β)/2} λ(σ′)^α t (u + ct)^{−α},    (16.20)

where

C_α = (1 − α) / {Γ(2 − α) cos(πα/2)}.    (16.21)

The asymptotic ruin probability in the finite time horizon is a lower bound for the finite time ruin probability. An upper bound is also available. Let Z_{α,β}(t) be the α-stable Lévy motion with α ≠ 1 and |β| ≤ 1, or α = 1 and β = 0. Then for positive u, c, and λ:

P{T(u + cs − λ^{1/α} Z_{α,β}(s)) ≤ t} ≤ P{λ^{1/α} Z_{α,β}(t) > u + ct} / P{λ^{1/α} Z_{α,β}(t) > ct}.    (16.22)
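Formulas (16.20)–(16.21) are a one-liner to evaluate; the sketch below exposes C_α separately (for 1 < α < 2 both the numerator 1 − α and cos(πα/2) are negative, so C_α > 0). Function names are ours.

```python
from math import gamma, cos, pi

def C_alpha(alpha):
    """Constant (16.21); positive for 1 < alpha < 2."""
    return (1.0 - alpha) / (gamma(2.0 - alpha) * cos(pi * alpha / 2))

def psi_stable_finite_asympt(u, t, c, lam, alpha, beta, sigma):
    """Large-u asymptotics (16.20) of the finite time ruin probability."""
    return (C_alpha(alpha) * (1.0 + beta) / 2.0
            * lam * sigma**alpha * t * (u + c * t) ** (-alpha))
```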

Now, we consider the infinite time ruin probability for the α-stable Lévy motion. It turns out that for β = 1 it is possible to give an exact formula for the ruin probability in the infinite time horizon. If Z_{α,β}(t) is the α-stable Lévy motion with 1 < α < 2, β = 1, and scale parameter σ′, then for positive u, c, and λ, Furrer (1998) showed that

P{T(u + cs − λ^{1/α} Z_{α,β}(s)) < ∞} = Σ_{n=0}^{∞} (−a)^n u^{(α−1)n} / Γ{1 + (α − 1)n},    (16.23)

where a = cλ^{−1}(σ′)^{−α} cos{π(α − 2)/2}. In general, for an arbitrary β we can obtain the asymptotic behavior of the infinite time ruin probability as the initial capital tends to infinity. Now, let Z_{α,β}(t) be the α-stable Lévy motion with 1 < α < 2, −1 < β ≤ 1, and scale parameter σ′. Then for positive u, c, and λ we have (Port, 1989, Theorem 9):

P{T(u + cs − λ^{1/α} Z_{α,β}(s)) < ∞} = {A(α, β) λ(σ′)^α / (α(α − 1)c)} u^{−α+1} + O(u^{−α+1})    (16.24)
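The series (16.23) converges rapidly thanks to the gamma function in the denominator, so a direct partial-sum evaluation works well; the truncation level below is our own choice. For α = 3/2 the series is the Mittag-Leffler function E_{1/2}(−a u^{1/2}) = exp(a²u) erfc(a u^{1/2}), which gives an exact cross-check.

```python
from math import gamma, exp, erfc, sqrt

def psi_stable_infinite(u, a, alpha, n_terms=100):
    """Partial sum of the series (16.23) for the infinite time ruin
    probability when beta = 1; a is the constant defined after (16.23)."""
    return sum((-a) ** n * u ** ((alpha - 1.0) * n) / gamma(1.0 + (alpha - 1.0) * n)
               for n in range(n_terms))

# Cross-check for alpha = 1.5: the series equals exp(a^2 u) * erfc(a * sqrt(u)).
a, u = 0.5, 1.0
psi = psi_stable_infinite(u, a, 1.5)
```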


when u → ∞, where

A(α, β) = {Γ(1 + α)/π} {1 + β² tan²(πα/2)}^{1/2} sin{ πα/2 + arctan(β tan(πα/2)) }.

For completeness it remains to consider the case β = −1, which is quite different because the right tail of the distribution of the α-stable law with β = −1 does not behave like a power function but like an exponential function (i.e. it is not a heavy tail). Let Z_{α,β}(t) be the α-stable Lévy motion with 1 < α < 2, β = −1, and scale parameter σ′. Then for positive u, c, and λ:

P{T(u + cs − λ^{1/α} Z_{α,β}(s)) < ∞} = exp{ −a^{1/(α−1)} u },    (16.25)

where a is as above.

16.3.3 Examples

Let us assume that the sequence of claims is i.i.d. and that their distribution belongs to the domain of attraction of the α-stable law with 1 < α < 2. Let R(t) be the following risk process:

R(t) = u + ct − Σ_{k=1}^{N(t)} Y_k,    (16.26)

where u is the initial capital, c is the premium rate paid by the policyholders, and {Yk : k ∈ N} is an i.i.d. sequence with distribution belonging to the domain of attraction of the α-stable law with 1 < α < 2, that is, fulfilling (16.13). Moreover, let EYk = μ and the claim intensity be λ. Similarly as in the Brownian motion approximation we obtain:

P{T(R) ≤ t} ≈ P{T(Rα) ≤ t}    (16.27)

and

P{T(R) < ∞} ≈ P{T(Rα) < ∞},    (16.28)

where Rα(t) = u + (c − λμ)t − λ^{1/α} Zα(t) and Zα(t) is the α-stable Lévy motion with β = 1 and scale parameter σ′. The scale parameter can be calibrated using the asymptotic results of Mijnheer (1975), see also Samorodnitsky and Taqqu (1994, p. 50).

Table 16.2: Ruin probabilities for α = 1.0334 and fixed μ = 20, σ = 10, and t = 10.

 u    c    λ    Ψ(t)      Ψ
25   50    2    0.45896   0.94780
25   60    2    0.25002   0.90076
30   60    2    0.24440   0.90022
35   60    2    0.23903   0.89976
40   60    2    0.23389   0.89935
40   70    3    0.61235   0.96404

STFdiff03.xpl

Table 16.3: Ruin probabilities for α = 1.5 and fixed μ = 20, σ = 10, and t = 10.

 u    c    λ    Ψ(t)         Ψ
25   50    2    9.0273e-02   0.39735
25   60    2    3.7381e-02   0.23231
30   60    2    3.6168e-02   0.21461
35   60    2    3.5020e-02   0.20046
40   60    2    3.3932e-02   0.18880
40   70    3    1.1424e-01   0.44372

STFdiff04.xpl

√ For α = 2, the standard deviation σ = 2σ . Hence, it is reasonable to put σ = 2−1/α σ in the general case. In this way we can compare the results for the two approximations. Using (16.20) and (16.23) we compute the ﬁnite and inﬁnite time ruin probabilities for diﬀerent levels of initial capital, premium, intensity of claims, expectation of claims and their scale parameter, see Tables 16.2 and 16.3. A sample path of the process Rα is depicted in Figure 16.2. The results in the tables show the eﬀects of the heaviness of the claim size distribution tails on the crucial parameter for insurance companies – the ruin probability. It is clearly visible that a decrease of α increases the ruin probability. The tables also illustrate the relationship between the ruin probability and the initial capital u, premium c, intensity of claims λ, expectation of claims µ and their scale parameter σ . For the heavy-tailed claim distributions the ruin


Figure 16.2: A sample path of the process Rα for α = 1.5, u = 40, c = 100, µ = 20, σ = 10, and λ = 3. STFdiff05.xpl

probability is considerably higher than for the light-tailed claim distributions. Thus the estimation of the stability parameter α from real data is crucial for the choice of the premium c.


Bibliography

Asmussen, S. (2000). Ruin Probabilities, World Scientific, Singapore.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Furrer, H. (1998). Risk processes perturbed by α-stable Lévy motion, Scandinavian Actuarial Journal 10: 23–35.

Furrer, H., Michna, Z. and Weron, A. (1997). Stable Lévy motion approximation in collective risk theory, Insurance: Mathematics and Economics 20: 97–114.

Iglehart, D. L. (1969). Diffusion approximations in collective risk theory, Journal of Applied Probability 6: 285–292.

Janicki, A. and Weron, A. (1994). Simulation and Chaotic Behavior of α-Stable Stochastic Processes, Marcel Dekker, New York.

Mijnheer, J. L. (1975). Sample path properties of stable processes, Mathematical Centre Tracts 59, Mathematisch Centrum, Amsterdam.

Port, S. C. (1989). Stable processes with drift on the line, Trans. Amer. Math. Soc. 313: 201–212.

Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (1999). Stochastic Processes for Insurance and Finance, John Wiley and Sons, New York.

Samorodnitsky, G. and Taqqu, M. (1994). Non-Gaussian Stable Processes: Stochastic Models with Infinite Variance, Chapman and Hall, London.

Schmidli, H. (1994). Diffusion approximations for a risk process with the possibility of borrowing and investment, Stochastic Models 10: 365–388.

Weron, A. (1984). Stable processes and measures: A survey, in D. Szynal, A. Weron (eds.), Probability Theory on Vector Spaces III, Lecture Notes in Mathematics 1080: 306–364.

Weron, R. (2001). Lévy-stable distributions revisited: tail index > 2 does not exclude the Lévy-stable regime, International Journal of Modern Physics C 12: 200–223.

Willekens, E. (1987). On the supremum of an infinitely divisible process, Stoch. Proc. Appl. 26: 173–175.

17 Risk Model of Good and Bad Periods

Zbigniew Michna

17.1 Introduction

Classical insurance risk models rely on independent increments of the corresponding risk process. However, this assumption can be very restrictive in modeling natural events. For example, Müller and Pflug (2001) found a significant correlation of claims related to tornados in the USA. To cope with such observations we present here a risk model producing positively correlated claims. In recent years such models have been extensively investigated by Gerber (1981; 1982), Promislow (1991), Michna (1998), Nyrhinen (1998; 1999a; 1999b), Asmussen (1999), and Müller and Pflug (2001).

We consider a model where the time of the year influences claims. For example, seasonal weather fluctuations affect the size and number of damages in car accidents, and intense rains can cause abnormal damage to households. We assume the existence of good and bad periods for the insurance company, in the sense of different expected values for claim sizes. This structure of good and bad periods produces a dependence of claims such that the resulting risk process can be approximated by fractional Brownian motion with a linear drift. Explicit asymptotic formulas and numerical results can be derived for different levels of the dependence structure. As we will see, the dependence of claims affects a crucial parameter for the risk exposure of the insurance company – the ruin probability.

Recall that the ruin time T is defined as the first time the company has a negative capital. One of the key problems of collective risk theory concerns calculating the ultimate ruin probability Ψ = P(T < ∞), i.e. the probability


that the risk process ever becomes negative. On the other hand, the insurance company will typically be interested in the probability that ruin occurs before time t, that is, Ψ(t) = P(T ≤ t). In the next section we present basic definitions and assumptions imposed on the model, together with results which allow us to approximate the risk process by fractional Brownian motion. Section 17.3 deals with bounds and asymptotic formulas for ruin probabilities. The last section is devoted to numerical results.

17.2 Fractional Brownian Motion and the Risk Model of Good and Bad Periods

In this section we describe the fractional Brownian motion approximation in risk theory. We show that under suitable assumptions the risk process constructed from claims appearing in good and bad periods can be approximated by fractional Brownian motion with a linear drift. Hence, we first introduce the definition of fractional Brownian motion and then construct the model.

A process B_H is called fractional Brownian motion if for some 0 < H ≤ 1:

1. B_H(0) = 0 almost surely.
2. B_H has strictly stationary increments, that is, the random function M_h(t) = B_H(t + h) − B_H(t), h ≥ 0, is strictly stationary.
3. B_H is self-similar of order H (denoted H-ss), that is, L{B_H(ct)} = L{c^H B_H(t)} in the sense of finite-dimensional distributions.
4. Finite-dimensional distributions of B_H are Gaussian with EB_H(t) = 0.
5. B_H is almost surely continuous.

If not stated otherwise explicitly, we let the parameter of self-similarity satisfy 1/2 < H < 1. The concept of semi-stability was introduced by Lamperti (1962) and recently discussed in Embrechts and Maejima (2002). Mandelbrot and Van Ness (1968) call it self-similarity when appearing in conjunction with stationary increments, as it does here.

When we observe arriving claims we assume that we have good and bad periods (e.g. periods of good weather and periods of bad weather). These two periods alternate. Let {T_n^G, n ∈ N} be i.i.d. non-negative random variables representing


good periods; similarly, let {S^B, S_n^B, n ∈ N} be i.i.d. non-negative random variables representing bad periods. The T's are assumed independent of the S's, the common distribution of good periods is F^G, and the distribution of bad periods is F^B. We assume that both F^G and F^B have finite means ν_G and ν_B, respectively, and we set ν = ν_G + ν_B.

Consider the pure renewal sequence initiated by a good period, {0, Σ_{i=1}^n (T_i^G + S_i^B), n ∈ N}. The inter-arrival distribution is F^G ∗ F^B and the mean inter-arrival time is ν. This pure renewal process has a stationary version {D, D + Σ_{i=1}^n (T_i^G + S_i^B), n ∈ N}, where D is a delay random variable (Asmussen, 1987). However, by defining the initial delay interval of length D this way, the interval does not decompose into a good and a bad period the way subsequent inter-arrival intervals do. Consequently, we turn to an alternative construction of the stationary renewal process which decomposes the delay random variable D into a good and a bad period. Define three independent random variables B, T_0^G, and S_0^B, which are independent of (S^B, T_n^G, S_n^B, n ∈ N), as follows: B is a Bernoulli random variable with values in {0, 1} and mass function

P(B = 1) = ν_G / ν = 1 − P(B = 0),

and

P(T_0^G > x) = ∫_x^∞ {1 − F^G(s)} / ν_G ds =: 1 − F_0^G(x),
P(S_0^B > x) = ∫_x^∞ {1 − F^B(s)} / ν_B ds =: 1 − F_0^B(x),

for x > 0. Define a delay random variable D_0 by

D_0 = (T_0^G + S^B) B + (1 − B) S_0^B

and a delayed renewal sequence by

{S_n, n ≥ 0} := { D_0, D_0 + Σ_{i=1}^n (T_i^G + S_i^B), n ≥ 0 }.

One can verify that this delayed renewal sequence is stationary (Heath, Resnick, and Samorodnitsky, 1998). We now define L(t) to be 1 if t falls in a good period, and L(t) = 0 if t is in a bad period. More precisely, the process {L(t), t ≥ 0} is defined in terms of {S_n, n ≥ 0} as follows:

L(t) = B I(0 ≤ t < T_0^G) + Σ_{n=0}^∞ I(S_n ≤ t < S_n + T_{n+1}^G).   (17.1)


The process {L(t), t ≥ 0} is strictly stationary and

P{L(t) = 1} = EL(t) = ν_G / ν.

Let {Y_n^G, n ≥ 0} be i.i.d. random variables representing claims appearing in good periods (e.g. Y_n^G describes a claim which may appear at the n-th moment in a good period), and similarly let {Y_n^B, n ≥ 0} be i.i.d. random variables representing claims appearing in bad periods. We assume that {Y_n^G, n ≥ 0}, {Y_n^B, n ≥ 0} and {L(t), t ≥ 0} are independent, E(Y_0^G) = g < E(Y_0^B) = b, and that the second moments of Y_0^G and Y_0^B exist. Then the claim Y_n appearing at the n-th moment is

Y_n = L(n) Y_n^G + {1 − L(n)} Y_n^B,  n ≥ 0.   (17.2)

Furthermore, the sequence {Y_n, n ≥ 0} is stationary. Assume that

1 − F^G(t) = t^{−(C+1)} K(t),   (17.3)

for t → ∞, 0 < C < 1, where K is slowly varying at infinity. Moreover, assume that

1 − F^B(t) = O{1 − F^G(t)},   (17.4)

as t → ∞, and that there exists an n ≥ 1 such that (F^G ∗ F^B)^{∗n} is nonsingular. Then

Cov(Y_0, Y_n) ~ { ν_B² (b − g)² / (C ν³) } n^{−C} K(n),   (17.5)

when n → ∞ (Heath, Resnick, and Samorodnitsky, 1998).
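The positive dependence predicted by (17.5) is easy to see in a simulation of the on/off mechanism of (17.2). The sketch below is a deliberately simplified illustration: it skips the stationary-delay construction, starts with a good period, and uses Pareto-type period lengths and Gaussian claim noise; all parameter values are ours.

```python
import numpy as np

def on_off_claims(n, g=1.0, b=10.0, tail=1.5, rng=None):
    """Claims Y_n = L(n)*Y_n^G + (1-L(n))*Y_n^B with alternating good/bad
    periods whose lengths are heavy-tailed (Pareto with tail index C+1 = tail).
    The stationary-delay construction of the text is deliberately skipped."""
    if rng is None:
        rng = np.random.default_rng(0)
    state = np.empty(n, dtype=bool)          # True = good period
    i, good = 0, True
    while i < n:
        length = int(np.ceil(rng.pareto(tail) + 1))  # period length >= 1
        state[i:i + length] = good
        i += length
        good = not good
    mean = np.where(state, g, b)             # E(Y^G) = g < E(Y^B) = b
    return mean + rng.standard_normal(n)     # add light claim noise

Y = on_off_claims(200_000)
lag1_cov = np.cov(Y[:-1], Y[1:])[0, 1]       # positive, as (17.5) predicts
```

The sample lag-1 covariance comes out clearly positive because neighbouring times tend to fall in the same (good or bad) period, and the heavy period-length tails make the covariance decay slowly in the lag.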

We assumed that the good period dominates the bad period, but one can also treat the reverse case (the bad period dominating the good period) because the covariance function is symmetric in the good- and bad-period characteristics.

Assume that EY_n = µ and φ(n) = n^H K(n), where K is a slowly varying function at infinity. Let the sequence {Y_k : k ∈ N} be as above and let {N^{(n)} : n ∈ N} be a sequence of point processes such that

{N^{(n)}(t) − λnt} / φ(n)  →^L  0   (17.6)


weakly in the Skorokhod topology (Jacod and Shiryaev, 1987) for some positive constant λ. Assume also that

lim_{n→∞} { c^{(n)} − λn µ / φ(n) } = c   (17.7)

and

lim_{n→∞} u^{(n)} = u.   (17.8)

Then

u^{(n)} + c^{(n)} t − (1/φ(n)) Σ_{k=1}^{N^{(n)}(t)} Y_k  →^L  u + ct − λ^H B_H(t)   (17.9)

in the Skorokhod topology as n → ∞. Condition (17.6) is satisfied for a wide class of point processes; for example, it holds if the times between consecutive claims constitute an i.i.d. sequence whose distribution possesses a finite second moment.

17.3 Ruin Probability in the Limit Risk Model of Good and Bad Periods

Let us define

R_H(t) = u + ct − λ^H B_H(t),   (17.10)

where u, c, and λ are positive constants, and the ruin time

T(R_H) = inf{t > 0 : R_H(t) < 0},   (17.11)

if the set is non-empty and T(R_H) = ∞ otherwise. The ruin probability of the process (17.10) satisfies (Michna, 1998):

P{T(R_H) ≤ t} ≤ 1 − Φ{(u + ct) / (σ(λt)^H)} + exp{−2uct / (σ²(λt)^{2H})} [1 − Φ{(u − ct) / (σ(λt)^H)}],   (17.12)

where the functional T is given in (17.11) and σ² = E{B_H²(1)}. The next result enables us to approximate the ruin probability of the process R_H(t) for a sufficiently large initial capital. For every t > 0:

lim_{u→∞} P{T(R_H) ≤ t} / P{λ^H B_H(t) > u + ct} = 1,   (17.13)


where the functional T is given in (17.11). Now, let us consider the infinite time ruin probability. The lower and upper bounds for the ruin probability are given by:

P{T(R_H) < ∞} ≥ 1 − Φ{ u^{1−H} c^H / (σ (λH)^H (1 − H)^{1−H}) },   (17.14)

and

P{T(R_H) < ∞} ≤ { 2c / (√(8π) (1 − H)) } ∫_0^∞ exp{ −(1/2) λ^{−2H} σ^{−2} (u x^{−H/(1−H)} + cx)² } dx.   (17.15)

See Norros (1994) for the lower bound and Dębicki, Michna, and Rolski (1998) for the upper bound analysis.

The next property shows the asymptotic behavior of the infinite time ruin probability. Let the Hurst parameter satisfy 0 < H < 1. Then (Hüsler and Piterbarg, 1999):

P{T(R_H) < ∞} = { P_H √π c^{1−H} H^{H−3/2} u^{(1−H)(1/H−1)} } / { 2^{1/(2H)−1/2} (1 − H)^{H+1/H−3/2} λ^{1−H} σ^{1/H−1} } · [ 1 − Φ{ u^{1−H} c^H / (H^H (1 − H)^{1−H} λ^H σ) } ] {1 + o(1)},   (17.16)

as u → ∞, where P_H is the Pickands constant, see Piterbarg (1996). The value of the Pickands constant is known only for H = 0.5 and H = 1. Some approximations of its value can be found in Burnecki and Michna (2002) and Dębicki, Michna, and Rolski (2003).

The above result permits approximation of the infinite time ruin probability in the model of good and bad periods for large values of the initial capital. For an arbitrary value of the initial capital there exists a simulation method for the infinite time ruin probability based on a Girsanov-type theorem. To present this method we introduce the stopping time

τ_a(u) = inf{t > 0 : B_H(t) + at > u},   (17.17)

where a ≥ 0, and the function

w(t, s) = c_1 s^{1/2−H} (t − s)^{1/2−H} for s ∈ (0, t), and w(t, s) = 0 for s ∉ (0, t),   (17.18)

where 1/2 < H < 1,

c_1 = { H(2H − 1) B(3/2 − H, H − 1/2) }^{−1},   (17.19)

and B(·, ·) denotes the beta function. Note that τ_a < ∞ almost surely for a ≥ 0. According to Norros, Valkeila, and Virtamo (1999) the following centered Gaussian process

M(t) = ∫_0^t w(t, s) dB_H(s),   (17.20)

possesses independent increments and its variance is

EM²(t) = c_2² t^{2−2H},   (17.21)

where

c_2 = { H(2H − 1)(2 − 2H) B(H − 1/2, 2 − 2H) }^{−1/2}.

In particular, M(t) is a martingale. For all a > 0 we have

P{T(R_H) < ∞} = E exp{ −[(c + a)/(λ^H σ)] ∫_0^{τ_a} w(τ_a, s) dB_H(s) − [c_2² (c + a)² / (2λ^{2H} σ²)] τ_a^{2−2H} }.

The above formula enables us to simulate the infinite time ruin probability for an arbitrary value of the initial capital. Using the structure of the common distribution of (M(t), B_H(t)) we get the following estimator of the ruin probability, valid for 0 < H < 1:

P{T(R_H) < ∞} = E exp{ −[ (c + a) τ_a^{1−2H} u / (λ^{2H} σ²) + (a² − c²) τ_a^{2−2H} / (2λ^{2H} σ²) ] }.   (17.22)

Let us note that putting a = c in (17.22) we obtain a simple formula

P{T(R_H) < ∞} = E exp{ −2cu τ_c^{1−2H} / (λ^{2H} σ²) }.   (17.23)

For similar methods of simulation based on the change of measure technique applied to ﬂuid models see D¸ebicki, Michna, and Rolski (2003).
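Both the finite-time bound (17.12) and the estimator (17.23) are straightforward to put into code. The sketch below is our Python illustration, not the book's XploRe implementation: it simulates fractional Brownian motion via a Cholesky factorization of its covariance and takes τ_c as the first grid time at which the scaled path crosses the boundary, so grid, horizon, and path count are all discretization choices of ours.

```python
import numpy as np
from math import exp
from statistics import NormalDist

Phi = NormalDist().cdf

def ruin_bound(u, c, lam, sigma, H, t):
    """Upper bound (17.12) for P{T(R_H) <= t}, R_H(t) = u + c*t - lam^H B_H(t)."""
    s = sigma * (lam * t) ** H
    return (1 - Phi((u + c * t) / s)
            + exp(-2 * u * c * t / s**2) * (1 - Phi((u - c * t) / s)))

def ruin_prob_mc(u, c, lam, sigma, H, n_paths=2000, n=400, T=50.0, seed=0):
    """Monte Carlo version of (17.23): average of
    exp{-2*c*u*tau_c^(1-2H) / (lam^(2H) sigma^2)} over simulated fBm paths,
    with tau_c approximated on a discrete grid and E B_H(1)^2 = sigma^2."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    tt, ss = np.meshgrid(t, t)
    cov = 0.5 * (tt**(2*H) + ss**(2*H) - np.abs(tt - ss)**(2*H))
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(n))
    B = sigma * (L @ rng.standard_normal((n, n_paths)))   # n x n_paths fBm paths
    hit = B + c * t[:, None] > u                          # boundary crossing
    any_hit = hit.any(axis=0)
    tau = t[hit.argmax(axis=0)]                           # first-passage times
    w = np.exp(-2 * c * u * tau**(1 - 2*H) / (lam**(2*H) * sigma**2))
    return np.where(any_hit, w, 0.0).mean()               # no hit in [0,T]: weight ~ 0
```

Paths that do not cross within the horizon are given weight zero, which slightly biases the estimate downward; a longer horizon reduces this effect.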


Table 17.1: Ruin probabilities for H = 0.7 and fixed µ = 20, σ = 10, and t = 10.

   u    c   λ      Ψ(t)         Ψ
  25   50   2   8.1257e-2   0.28307
  25   60   2   1.3516e-2   0.03932
  30   60   2   6.6638e-3   0.02685
  35   60   2   3.6826e-3   0.01889
  40   60   2   2.2994e-3   0.01363
  40   70   3   1.0363e-1   0.38016

  STFgood01.xpl

Table 17.2: Ruin probabilities for H = 0.8 and fixed µ = 20, σ = 10, and t = 10.

   u    c   λ     Ψ(t)        Ψ
  25   50   2   0.22240   0.40728
  25   60   2   0.09890   0.08029
  30   60   2   0.06570   0.06583
  35   60   2   0.04496   0.05471
  40   60   2   0.03183   0.04646
  40   70   3   0.23622   0.55505

  STFgood02.xpl

17.4 Examples

Let us assume that claims appear in good and bad periods. According to (17.9) we can approximate the risk process by

R_H(t) = u + (c − λµ)t − λ^H B_H(t),

where B_H(t) is a fractional Brownian motion, c is the premium rate, µ is the expected value of claims, σ² = E{B_H²(1)}, λ is the claim intensity, and u is the initial capital.

We can compute finite and infinite time ruin probabilities for different levels of the initial capital, premium, intensity of claims, expectation of claims and


Figure 17.1: Sample paths of the process RH for H = 0.7, u = 40, c = 100, µ = 20, σ = 10, and λ = 3. STFgood03.xpl

their variance (see Tables 17.1 and 17.2). We approximate the ﬁnite time ruin probabilities by formula (17.12) and the inﬁnite time ruin probabilities using the estimator given in (17.23). Sample paths of the process RH are depicted in Figure 17.1. The results in the tables show the eﬀects of dependence structures between claims on the crucial parameter for insurance companies – the ruin probability. Numerical simulations are performed for diﬀerent values of the parameter of self-similarity H which deﬁnes the level of the dependence between claims. It is clearly visible that an increase of H increases the ruin probability. The tables also illustrate the relationship between the ruin probability and the initial capital u, premium c, intensity of claims λ, expectation of claims µ and their variance σ. It is shown that for dependent damage occurrences the ruin probability is considerably higher than for independent events. Thus ignoring


possible dependence (existence of good and bad periods) and its level might lead to wrong choices of the premium c.


Bibliography

Asmussen, S. (1987). Applied Probability and Queues, John Wiley and Sons, New York.

Asmussen, S. (1999). On the ruin problems for some adapted premium rules, MaPhySto Research Report No. 5, University of Aarhus, Denmark.

Burnecki, K. and Michna, Z. (2002). Simulation of Pickands constants, Probability and Mathematical Statistics 22: 193–199.

Dębicki, K., Michna, Z. and Rolski, T. (1998). On the supremum from Gaussian processes over infinite horizon, Probability and Mathematical Statistics 18: 83–100.

Dębicki, K., Michna, Z. and Rolski, T. (2003). Simulation of the asymptotic constant in some fluid models, Stochastic Models 19: 407–423.

Embrechts, P. and Maejima, M. (2002). Selfsimilar Processes, Princeton University Press, Princeton and Oxford.

Gerber, H. U. (1981). On the probability of ruin in an autoregressive model, Mitteilung der Vereinigung Schweiz. Versicherungsmathematiker 2: 213–219.

Gerber, H. U. (1982). Ruin theory in a linear model, Insurance: Mathematics and Economics 1: 177–184.

Heath, D., Resnick, S. and Samorodnitsky, G. (1998). Heavy tails and long range dependence in on/off processes and associated fluid models, Mathematics of Operations Research 23: 145–165.

Hüsler, J. and Piterbarg, V. (1999). Extremes of a certain class of Gaussian processes, Stochastic Processes and their Applications 83: 338–357.

Jacod, J. and Shiryaev, A. N. (1987). Limit Theorems for Stochastic Processes, Springer, Berlin Heidelberg.

Lamperti, J. (1962). Semi-stable stochastic processes, Transactions of the American Mathematical Society 104: 62–78.

Mandelbrot, B. B. and Van Ness, J. W. (1968). Fractional Brownian motions, fractional noises and applications, SIAM Review 10: 422–437.


Michna, Z. (1998). Self-similar processes in collective risk theory, Journal of Applied Mathematics and Stochastic Analysis 11: 429–448.

Müller, A. and Pflug, G. (2001). Asymptotic ruin probabilities for risk processes with dependent increments, Insurance: Mathematics and Economics 28: 381–392.

Norros, I. (1994). A storage model with self-similar input, Queueing Systems 16: 387–396.

Norros, I., Valkeila, E. and Virtamo, J. (1999). A Girsanov type theorem for the fractional Brownian motion, Bernoulli 5: 571–587.

Nyrhinen, H. (1998). Rough description of ruin for general class of surplus process, Adv. Appl. Probab. 30: 107–119.

Nyrhinen, H. (1999a). On the ruin probabilities in a general economic environment, Stoch. Proc. Appl. 83: 319–330.

Nyrhinen, H. (1999b). Large deviations for the time of ruin, J. Appl. Probab. 36: 733–746.

Piterbarg, V. I. (1996). Asymptotic Methods in the Theory of Gaussian Processes and Fields, Translations of Mathematical Monographs 148, AMS, Providence.

Promislow, S. D. (1991). The probability of ruin in a process with dependent increments, Insurance: Mathematics and Economics 10: 99–107.

18 Premiums in the Individual and Collective Risk Models

Jan Iwanik and Joanna Nowicka-Zagrajek

The premium is the price for the good "insurance" sold by an insurance company. The right pricing is vital, since too low a price level results in a loss, while with too high prices a company can price itself out of the market. It is the actuary's task to find methods of premium calculation (also called premium calculation principles), i.e. rules saying what premium should be assigned to a given risk. We present the most important types of premiums in Section 18.1; for premium calculation principles not considered here, see Straub (1988) and Young (2004). We focus on the monetary payout made by the insurer in connection with insurable losses and ignore premium loading for expenses and profit.

The goal of insurance modeling is to develop a probability distribution for the total amount paid in benefits. This allows the insurance company to manage its capital account and honor its commitments. Therefore, we describe two standard models: the individual risk model in Section 18.2 and the collective risk model in Section 18.3. In both cases, we determine the expectation and variance of the portfolio, consider the approximation of the distribution of the aggregate claims, and present formulae for the considered premiums. It is worth mentioning here that the collective risk model can also be applied to quantifying regulatory capital for operational risk, for example to model a yearly operational risk variable (Embrechts, Furrer, and Kaufmann, 2003).

18.1 Premium Calculation Principles

Let X denote a non-negative random variable describing the size of claim (risk, loss), with distribution function F_X(t). Moreover, we assume that the expected value E(X), the variance Var(X), and the moment generating function M_X(z) = E(e^{zX}) exist.

The simplest premium (calculation principle) is called the pure risk premium and is equal to the expectation of the claim size variable:

P = E(X).   (18.1)

This premium is often applied in life and some mass lines of business in non-life insurance. As is known from ruin theory, the pure risk premium without any kind of loading is insufficient since, in the long run, ruin is inevitable even in the case of substantial (though finite) initial reserves. Nevertheless, the pure risk premium can be – and still is – of practical use because, for one thing, in practice the planning horizon is always limited, and for another, there are indirect ways of loading a premium, e.g. by neglecting interest earnings (Straub, 1988).

The future claims cost X may be different from its expected value E(X), and the estimator of E(X) drawn from past data may be different from the true E(X). To reflect this fact, the insurer can impose a risk loading on the pure risk premium. The pure risk premium with safety (security) loading, given by

P_SL(θ) = (1 + θ) E(X),  θ ≥ 0,   (18.2)

where θ and θE(X) are the relative and total safety loadings, respectively, is very popular in practical applications. This premium is an increasing linear function of θ and it is equal to the pure risk premium for θ = 0.

The pure risk premium and the premium with safety loading are sometimes criticised because they do not depend on the degree of fluctuation of X. Thus, two other rules have been proposed. The first one, denoted here by P_V(a) and given by

P_V(a) = E(X) + a Var(X),  a ≥ 0,   (18.3)

is called the σ²-loading principle or the variance principle. In this case the premium depends not only on the expectation but also on the variance of the


loss. The premium given by (18.3) is an increasing linear function of a, and it is obvious that for a = 0 it is equal to the pure risk premium. The other one, denoted here by P_SD(b) and given by

P_SD(b) = E(X) + b √Var(X),  b ≥ 0,   (18.4)

is called the σ-loading principle or the standard deviation principle. In this case the premium depends on the expectation and also on the standard deviation of the loss. The premium given by (18.4) is an increasing linear function of b, and clearly for b = 0 it reduces to the pure risk premium.

Both the σ²- and σ-loading principles are widely used in practice, but there is a discussion about which one is better. If we consider two risks X_1 and X_2, the σ-loading is additive while the σ²-loading is not when X_1 and X_2 are totally dependent, whereas the contrary is true for independent risks X_1 and X_2. Although in many cases additivity is required from premium calculation principles, there are also strong arguments against additivity, based on the idea that the price of insurance ought to be the lower, the larger the number of risk carriers sharing the risk.

The rules described so far are sometimes called "empirical" or "pragmatic". Another approach employs the notion of utility (Straub, 1988). The so-called zero utility principle states that the premium P_U for a risk X should be calculated so that the expected utility is (at least) equal to the zero utility. This principle yields a technical minimum premium in the sense that the risk X should not be accepted at a premium below P_U. In the trivial case the zero utility premium equals the pure risk premium. A more interesting case is the exponential utility, which leads to a premium, denoted here by P_E(c) and called the exponential premium, given by

P_E(c) = ln M_X(c) / c = ln E(e^{cX}) / c,  c > 0.   (18.5)

This premium is an increasing function of the parameter c, which measures the risk aversion, and lim_{c→0} P_E(c) = E(X). It is worth noticing that the zero utility principle yields additive premiums only in the trivial and the exponential utility cases (Gerber, 1980). As the trivial utility is just a special case of exponential utility corresponding to the limit c → 0, additivity characterizes the exponential utility.

Another interesting approach to the problem of premium calculation is the quantile premium, denoted here by P_Q(ε) and given by

P_Q(ε) = F_X^{−1}(1 − ε),   (18.6)


where ε ∈ (0, 1) is small enough. As can be easily seen, it is just the quantile of order (1 − ε) of the loss distribution, which means that the insurer wants a premium covering (1 − ε) · 100% of the possible loss. A reasonable range for the parameter ε is usually from 1% to 5%.
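For a concrete loss distribution the principles (18.1)-(18.4) and (18.6) reduce to closed forms. A sketch for a single log-normal risk (the parameter values are the catastrophe-loss estimates used later in Section 18.2.3); note that the exponential premium (18.5) is not available here, since the log-normal distribution has no moment generating function:

```python
from math import exp, sqrt
from statistics import NormalDist

mu, s = 18.3806, 1.1052                        # log-normal parameters of X
EX   = exp(mu + s**2 / 2)                      # E(X)
VarX = (exp(s**2) - 1) * exp(2 * mu + s**2)    # Var(X)

P    = EX                                                    # pure risk premium (18.1)
P_SL = lambda theta: (1 + theta) * EX                        # safety loading (18.2)
P_V  = lambda a: EX + a * VarX                               # variance principle (18.3)
P_SD = lambda b: EX + b * sqrt(VarX)                         # std deviation principle (18.4)
P_Q  = lambda eps: exp(mu + s * NormalDist().inv_cdf(1 - eps))  # quantile premium (18.6)
```

All four loaded premiums collapse to the pure risk premium when their loading parameter is zero, and the quantile premium grows as ε shrinks, in line with the discussion above.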

18.2 Individual Risk Model

We consider here a certain portfolio of insurance policies and the total amount of claims arising from it during a given period (usually a year). Our aim is to determine the joint premium for the whole portfolio that will cover the accumulated risk connected with all policies.

In the individual risk model, which is widely used in applications, especially in life and health insurance, we assume that the portfolio consists of n insurance policies and the claim made in respect of policy k is denoted by X_k. Then the total, or aggregate, amount of claims is

S = X_1 + X_2 + . . . + X_n,   (18.7)

where X_k is the loss on insured unit k and n is the number of risk units insured (known and fixed at the beginning of the period). The X_k's are usually postulated to be independent random variables (but not necessarily identically distributed), so we will make such an assumption in this section. Moreover, the individual risk model discussed here will not recognize the time value of money, because we consider only models for short periods.

The claim amount variable X_k for each policy is usually presented as

X_k = I_k B_k,   (18.8)

where the random variables I_1, . . . , I_n, B_1, . . . , B_n are independent. The random variable I_k indicates whether or not the k-th policy produced a payment: if a claim has occurred, then I_k = 1; if there has been no claim, I_k = 0. We denote q_k = P(I_k = 1) and 1 − q_k = P(I_k = 0). The random variable B_k can have an arbitrary distribution and represents the amount of the payment in respect of the k-th policy given that a payment was made.

In Section 18.2.1 we present general formulae for the premiums introduced in Section 18.1. In Section 18.2.2 we apply the normal approximation to obtain closed-form formulae for both the exponential and quantile premiums. Finally, in Section 18.2.3, we illustrate the behavior of these premiums on real-life data describing losses resulting from catastrophic events in the USA.

18.2.1 General Premium Formulae

In order to find formulae for the "pragmatic" premiums, let us assume that the expectations and variances of the B_k's exist, and denote µ_k = E(B_k) and σ_k² = Var(B_k), k = 1, 2, . . . , n. Then

E(X_k) = µ_k q_k,   (18.9)

and the mean of the total loss in the individual risk model is given by

E(S) = Σ_{k=1}^n µ_k q_k.   (18.10)

The variance of X_k can be calculated as follows:

Var(X_k) = Var{E(X_k | I_k)} + E{Var(X_k | I_k)}
         = Var{I_k E(B_k)} + E{I_k Var(B_k)}
         = {E(B_k)}² Var(I_k) + Var(B_k) E(I_k)
         = µ_k² q_k (1 − q_k) + σ_k² q_k.   (18.11)

Applying the assumption of independent X_k's, the variance of S is of the form:

Var(S) = Σ_{k=1}^n {µ_k² q_k (1 − q_k) + σ_k² q_k}.   (18.12)

Now we can easily obtain the following formulae for the individual risk model:

• pure risk premium

  P = Σ_{k=1}^n µ_k q_k,   (18.13)

• premium with safety loading

  P_SL(θ) = (1 + θ) Σ_{k=1}^n µ_k q_k,  θ ≥ 0,   (18.14)

• premium with variance loading

  P_V(a) = Σ_{k=1}^n µ_k q_k + a Σ_{k=1}^n {µ_k² q_k (1 − q_k) + σ_k² q_k},  a ≥ 0,   (18.15)


• premium with standard deviation loading

  P_SD(b) = Σ_{k=1}^n µ_k q_k + b √( Σ_{k=1}^n {µ_k² q_k (1 − q_k) + σ_k² q_k} ),  b ≥ 0.   (18.16)

If we assume that for each k = 1, 2, . . . , n the moment generating function M_{B_k}(t) exists, then

M_{X_k}(t) = 1 − q_k + q_k M_{B_k}(t),   (18.17)

and hence

M_S(t) = Π_{k=1}^n {1 − q_k + q_k M_{B_k}(t)}.   (18.18)

This leads to the following formula for the exponential premium:

P_E(c) = (1/c) Σ_{k=1}^n ln{1 − q_k + q_k M_{B_k}(c)},  c > 0.   (18.19)

In the individual risk model, claims of an insurance company are modeled as a sum of the claims of many insured individuals. Therefore, in order to find the quantile premium given by

P_Q(ε) = F_S^{−1}(1 − ε),  ε ∈ (0, 1),   (18.20)

the distribution of the sum of independent random variables has to be determined. There are methods to solve this problem, see Bowers et al. (1997) and Panjer and Willmot (1992). For example, one can use the convolution of the probability distributions of X1 , X2 , . . . , Xn . However in practice it can be a very complex task that involves numerous calculations. In many cases the result cannot be represented by a simple formula. Therefore, approximations for the distribution of the sum are often used.
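For a homogeneous portfolio the sums in (18.10)-(18.16) collapse to n times a single term. The sketch below uses the portfolio of Section 18.2.3 (n = 500 policies, claim probability q_k = 0.05, log-normal B_k) and, as a check, reproduces the aggregate mean and variance quoted there:

```python
from math import exp, sqrt

n, q = 500, 0.05                     # portfolio size and claim probability
mu_ln, s_ln = 18.3806, 1.1052        # log-normal severity parameters
m = exp(mu_ln + s_ln**2 / 2)                         # mu_k = E(B_k)
v = (exp(s_ln**2) - 1) * exp(2 * mu_ln + s_ln**2)    # sigma_k^2 = Var(B_k)

ES   = n * m * q                                     # E(S), formula (18.10)
VarS = n * (m**2 * q * (1 - q) + v * q)              # Var(S), formula (18.12)

P    = ES                                            # pure risk premium (18.13)
P_SL = lambda theta: (1 + theta) * ES                # safety loading (18.14)
P_V  = lambda a: ES + a * VarS                       # variance loading (18.15)
P_SD = lambda b: ES + b * sqrt(VarS)                 # std deviation loading (18.16)
```

With these parameters ES and VarS agree, up to rounding, with the values 4.4236 · 10^9 and 2.6160 · 10^18 used for the normal approximation in Section 18.2.3.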

18.2.2 Premiums in the Case of the Normal Approximation

The distribution of the total claim in the individual risk model can be approximated by means of the central limit theorem (Bowers et al., 1997). In such case it is suﬃcient to evaluate means and variances of the individual loss random variables, sum them to obtain the mean and variance of the aggregate loss of


the insurer, and apply the normal approximation. However, it is important to remember that the quality of this approximation depends not only on the size of the portfolio, but also on its homogeneity.

The approximation of the distribution of the total loss S in the individual risk model can be applied to find a simple expression for the quantile premium. If the distribution of S is approximated by a normal distribution with mean E(S) and variance Var(S), the quantile premium can be written as

P_Q(ε) = Σ_{k=1}^n µ_k q_k + Φ^{−1}(1 − ε) √( Σ_{k=1}^n {µ_k² q_k (1 − q_k) + σ_k² q_k} ),   (18.21)

where ε ∈ (0, 1) and Φ(·) denotes the standard normal distribution function. It is the same premium as the premium with standard deviation loading with b = Φ^{−1}(1 − ε). Moreover, in the case of this approximation, it is possible to express the exponential premium as

P_E(c) = Σ_{k=1}^n µ_k q_k + (c/2) Σ_{k=1}^n {µ_k² q_k (1 − q_k) + σ_k² q_k},  c > 0,   (18.22)

and it is easy to notice that this premium is equal to the premium with variance loading with a = c/2. Since the distribution of S is approximated by the normal distribution with the same mean value and variance, premiums defined in terms of the expected value of the aggregate claims are given by the same formulae as in Section 18.2.1.
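Under the normal approximation, (18.21) and (18.22) need only the two aggregate moments. A short sketch, reusing the portfolio moments quoted in the next section:

```python
from math import sqrt
from statistics import NormalDist

ES, VarS = 4.4236e9, 2.6160e18   # E(S), Var(S) from the example of Section 18.2.3

def P_Q(eps):
    """Quantile premium (18.21): std deviation loading with b = Phi^{-1}(1-eps)."""
    return ES + NormalDist().inv_cdf(1 - eps) * sqrt(VarS)

def P_E(c):
    """Exponential premium (18.22): variance loading with a = c/2."""
    return ES + c / 2 * VarS
```

As the text notes, both premiums are just the σ- and σ²-loading principles with particular loading parameters, so they inherit the monotonicity of those principles.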

18.2.3 Examples

Quantile premium for the individual risk model with Bk ’s log-normally distributed. The insurance company holds n = 500 policies Xk . The claims arising from policies can be represented as independent identically distributed random variables. The actuary estimates that each policy generates a claim with probability qk = 0.05 and the claim size, given that the claim happens, is log-normally distributed. The parameters of the log-normal distribution correspond to the real-life data describing losses resulting from catastrophic events in the USA, i.e. µk = 18.3806 and σk = 1.1052 (see Chapter 13).


As the company wants to ensure that the probability of losing any money is less than a specific value ε, the actuary is asked to calculate the quantile premium. The actuary wants to compare the quantile premium given by the general formula (18.20) with the one (18.21) obtained from the approximation of the aggregate claims. The distribution of the total claim in this model can be approximated by the normal distribution with mean 4.4236 · 10^9 and variance 2.6160 · 10^18.

Figure 18.1 shows the quantile premium in the individual risk model framework for ε ∈ (0.01, 0.1). The exact premium is drawn with the solid blue line, whereas the premium calculated on the basis of the normal approximation is marked with the dashed red line. Because of the complexity of the analytical formulae, the exact quantile premium for the total claim amount was obtained by numerical simulation, which is why the line is jagged; a smoother curve can be obtained by performing a larger number of Monte Carlo simulations (here we performed 10000 simulations). We can observe that the approximation fits well for larger ε and worse for small ε. This is typical of the quantile premium: even if two distribution functions F1(x), F2(x) are very close to each other, their inverse functions F1⁻¹(y), F2⁻¹(y) may differ significantly for y close to 1.

Exponential premium for the individual risk model with Bk's gamma distributed. Because the company has a specific risk strategy described by the exponential utility function, the actuary is asked to determine the premium for the same portfolio of 500 independent policies once again, but now with respect to the risk aversion parameter c. The actuary is also asked to use a method of calculation that provides direct results and does not require Monte Carlo simulations.
This time the actuary has decided to describe the claim size, given that the claim happens, by the gamma distribution with α = 0.9185 and β = 5.6870 · 10⁻⁹, see Chapter 13. The choice of the gamma distribution guarantees a simple analytical form of the premium, namely

$$ P_E(c) = \frac{1}{c}\sum_{k=1}^{n} \ln\left\{1 - q_k + q_k\left(\frac{\beta}{\beta - c}\right)^{\alpha}\right\}, \qquad c > 0. \qquad (18.23) $$
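Formula (18.23) can indeed be evaluated directly, without simulation. A hedged Python sketch (an illustration with a function name of our choosing, not the book's code):

```python
from math import log

def exponential_premium_gamma(n, q, alpha, beta, c):
    # Exponential premium (18.23): n i.i.d. policies with claim probability q
    # and gamma(alpha, beta) claim amounts; valid only for 0 < c < beta.
    if not 0.0 < c < beta:
        raise ValueError("need 0 < c < beta")
    mgf_claim = (beta / (beta - c)) ** alpha      # gamma mgf evaluated at c
    return (n / c) * log(1.0 - q + q * mgf_claim)
```

As c → 0 the premium tends to the pure premium n q α/β, which for the portfolio parameters above is approximately 4.0377 · 10^9 USD, the mean that also enters the normal approximation.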

Figure 18.1: Quantile premium for the individual risk model with Bk's lognormally distributed. The exact premium (solid blue line) and the premium resulting from the normal approximation of the aggregate claims (dashed red line); the premium (USD billion) is plotted against the quantile parameter ε ∈ (0.01, 0.1). STFprem01.xpl
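The simulation behind the solid line in Figure 18.1 can be sketched as follows. This is a Python illustration, not the original STFprem01.xpl routine, and a reduced number of simulations is used by default to keep the sketch fast:

```python
import random

def mc_quantile_premium(n, q, mu, sigma, eps, n_sim=2000, seed=42):
    # Monte Carlo estimate of P_Q(eps) = F_S^{-1}(1 - eps) for the individual
    # risk model with log-normal claim amounts (the text used 10000 runs).
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sim):
        s = 0.0
        for _ in range(n):                    # loop over the n policies
            if rng.random() < q:              # a claim occurs with probability q
                s += rng.lognormvariate(mu, sigma)
        totals.append(s)
    totals.sort()
    return totals[int((1 - eps) * n_sim)]     # empirical (1 - eps)-quantile
```

With the example's parameters (n = 500, q = 0.05, µ = 18.3806, σ = 1.1052) the estimates reproduce the qualitative behavior of Figure 18.1: the premium decreases in ε and lies above E(S).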

On the other hand, the actuary can use formula (18.22), applying the normal approximation of the aggregate claims with mean 4.0377 · 10^9 and variance 1.3295 · 10^18. Figure 18.2 shows the exponential premiums resulting from both approaches with respect to the risk aversion parameter c. A simple pattern can be observed: the more risk averse the customer is, the more he or she is willing to pay for the risk protection. Moreover, the normal approximation gives better results for smaller values of c.

Figure 18.2: Exponential premium for the individual risk model with Bk's generated from the gamma distribution. The exact premium (solid blue line) and the premium resulting from the normal approximation of the aggregate claims (dashed red line); the premium (USD billion) is plotted against the risk aversion parameter (in units of 10⁻¹⁰). STFprem02.xpl

18.3 Collective Risk Model

We consider now an alternative model describing the total claim amount in a fixed period in a portfolio of insurance contracts. Let N denote the number of claims arising from policies in a given time period. Let X1 denote the amount of the first claim, X2 the amount of the second claim, and so on. In the collective risk model, the random sum

$$ S = X_1 + X_2 + \ldots + X_N \qquad (18.24) $$

represents the aggregate claims generated by the portfolio for the period under study. The number of claims N is a random variable and is associated with the


frequency of claims. The individual claims X1, X2, . . . are also random variables and are said to measure the severity of claims. There are two fundamental assumptions that we will make in this section: X1, X2, . . . are identically distributed random variables, and the random variables N, X1, X2, . . . are mutually independent.

In Section 18.3.1 we present formulae for the considered premiums in the collective risk model. In Section 18.3.2 we apply the normal and translated gamma approximations to obtain closed formulae for premiums. Since a Poisson or a negative binomial distribution is often selected for the number of claims N, we discuss these cases in detail in Sections 18.3.3 and 18.3.4, respectively. Finally, we illustrate the behavior of the premiums with examples in Section 18.3.5.

18.3.1 General Premium Formulae

In order to find formulae for premiums based on the expected value of the total claim, let us assume that E(X), E(N), Var(X), and Var(N) exist. For the collective risk model, the expected value of aggregate claims is the product of the expected individual claim amount and the expected number of claims,

$$ \mathrm{E}(S) = \mathrm{E}(N)\,\mathrm{E}(X), \qquad (18.25) $$

while the variance of aggregate claims is the sum of two components, where the first is attributed to the variability of individual claim amounts and the other to the variability of the number of claims:

$$ \mathrm{Var}(S) = \mathrm{E}(N)\,\mathrm{Var}(X) + \{\mathrm{E}(X)\}^2\,\mathrm{Var}(N). \qquad (18.26) $$

Thus it is easy to obtain the following premium formulae in the collective risk model:

• pure risk premium
$$ P = \mathrm{E}(N)\,\mathrm{E}(X), \qquad (18.27) $$

• premium with safety loading
$$ P_{SL}(\theta) = (1+\theta)\,\mathrm{E}(N)\,\mathrm{E}(X), \qquad \theta \ge 0, \qquad (18.28) $$

• premium with variance loading
$$ P_V(a) = \mathrm{E}(N)\,\mathrm{E}(X) + a\left[\mathrm{E}(N)\,\mathrm{Var}(X) + \{\mathrm{E}(X)\}^2\,\mathrm{Var}(N)\right], \qquad a \ge 0, \qquad (18.29) $$

• premium with standard deviation loading
$$ P_{SD}(b) = \mathrm{E}(N)\,\mathrm{E}(X) + b\,\sqrt{\mathrm{E}(N)\,\mathrm{Var}(X) + \{\mathrm{E}(X)\}^2\,\mathrm{Var}(N)}, \qquad b \ge 0. \qquad (18.30) $$
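The four premium principles (18.27)-(18.30) depend on the claim number and claim size distributions only through their first two moments, so they can be computed generically. A minimal sketch (function and key names are ours):

```python
from math import sqrt

def collective_premiums(e_n, var_n, e_x, var_x, theta=0.0, a=0.0, b=0.0):
    # e_n, var_n: E(N), Var(N); e_x, var_x: E(X), Var(X)
    e_s = e_n * e_x                                  # (18.25)
    var_s = e_n * var_x + e_x**2 * var_n             # (18.26)
    return {
        "pure": e_s,                                 # (18.27)
        "safety_loading": (1 + theta) * e_s,         # (18.28)
        "variance_loading": e_s + a * var_s,         # (18.29)
        "std_dev_loading": e_s + b * sqrt(var_s),    # (18.30)
    }
```

For example, a Poisson claim count with λ = 2 has E(N) = Var(N) = 2, and combined with E(X) = 10, Var(X) = 4 the aggregate moments are E(S) = 20 and Var(S) = 208.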

If we assume that MN(t) and MX(t) exist, the moment generating function of S can be derived as

$$ M_S(t) = M_N\{\ln M_X(t)\}, \qquad (18.31) $$

and thus the exponential premium is of the form

$$ P_E(c) = \frac{\ln\left[M_N\{\ln M_X(c)\}\right]}{c}, \qquad c > 0. \qquad (18.32) $$

It is often difficult to determine the distribution of the aggregate claims, and this fact causes problems with calculating the quantile premium given by

$$ P_Q(\varepsilon) = F_S^{-1}(1-\varepsilon), \qquad \varepsilon \in (0,1). \qquad (18.33) $$

Although the distribution function of S can be expressed by means of the distribution of N and the convolution of the claim amount distribution, this is too complicated in practical applications, see e.g. Klugman, Panjer, and Willmot (1998). Therefore, approximations for the distribution of the aggregate claims are usually considered.

18.3.2 Premiums in the Case of the Normal and Translated Gamma Approximations

In Section 18.2.2 the normal approximation was employed as an approximation for the distribution of aggregate claims in the individual risk model. This approach can also be used in the case of the collective model when the expected number of claims is large (Bowers et al., 1997; Daykin, Pentikainen, and Pesonen, 1994). The normal approximation simplifies the calculations. If the distribution of S can be approximated by a normal distribution with mean E(S) and variance Var(S), the quantile premium is given by the formula

$$ P_Q(\varepsilon) = \mathrm{E}(N)\,\mathrm{E}(X) + \Phi^{-1}(1-\varepsilon)\sqrt{\mathrm{E}(N)\,\mathrm{Var}(X) + \{\mathrm{E}(X)\}^2\,\mathrm{Var}(N)}, \qquad (18.34) $$


where ε ∈ (0, 1) and Φ(·) denotes the standard normal distribution function. It is easy to notice that this premium is equal to the standard deviation-loaded premium with b = Φ⁻¹(1 − ε). Moreover, in the case of the normal approximation, it is possible to express the exponential premium as

$$ P_E(c) = \mathrm{E}(N)\,\mathrm{E}(X) + \frac{c}{2}\left[\mathrm{E}(N)\,\mathrm{Var}(X) + \{\mathrm{E}(X)\}^2\,\mathrm{Var}(N)\right], \qquad c > 0, \qquad (18.35) $$

which is the same premium as the one resulting from the variance principle with a = c/2. Let us also mention that since the mean and variance in the case of the normal approximation are the same as for the distribution of S, the premiums based on the expected value are given by the general formulae presented in Section 18.3.1.

Unfortunately, the normal approximation is usually not sufficiently accurate. The disadvantage of this approximation lies in the fact that the skewness of the normal distribution is always zero, as it has a symmetric probability density function. Since the distribution of aggregate claims is often skewed, another approximation of the distribution of aggregate claims that accommodates skewness is required. In this section we describe the translated gamma approximation. For more approaches and a discussion of their applicability see, for example, Daykin, Pentikainen, and Pesonen (1994).

The distribution function of the translated (shifted) gamma distribution is given by

$$ G_{tr}(x; \alpha, \beta, x_0) = F(x - x_0; \alpha, \beta), \qquad x, \alpha, \beta > 0, \qquad (18.36) $$

where F(x; α, β) denotes the distribution function of the gamma distribution (described in Chapter 13) with parameters α and β:

$$ F(x; \alpha, \beta) = \int_0^x \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, t^{\alpha-1} e^{-\beta t}\, dt, \qquad x, \alpha, \beta > 0. \qquad (18.37) $$

To apply the approximation, the parameters α, β, and x0 have to be selected so that the first, second, and third central moments of S equal the corresponding moments of the translated gamma distribution. This procedure leads to the following result:

$$ \alpha = 4\,\frac{\{\mathrm{Var}(S)\}^3}{\left(\mathrm{E}[\{S-\mathrm{E}(S)\}^3]\right)^2}, \qquad (18.38) $$

$$ \beta = 2\,\frac{\mathrm{Var}(S)}{\mathrm{E}[\{S-\mathrm{E}(S)\}^3]}, \qquad (18.39) $$

$$ x_0 = \mathrm{E}(S) - 2\,\frac{\{\mathrm{Var}(S)\}^2}{\mathrm{E}[\{S-\mathrm{E}(S)\}^3]}. \qquad (18.40) $$

In the case of the translated gamma distribution, it is impossible to give a simple analytical formula for the quantile premium; in order to find this premium, a numerical approximation must be used. However, it is worth noticing that the exponential premium can be presented as

$$ P_E(c) = x_0 + \frac{\alpha}{c}\,\ln\left(\frac{\beta}{\beta - c}\right), \qquad c > 0, \qquad (18.41) $$

while the premiums given in terms of the expected value of the aggregate claims are the same as given in Section 18.3.1 (since the distribution of S is approximated by the translated gamma distribution with the same mean value and variance).
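The moment matching (18.38)-(18.40) and the exponential premium (18.41) can be sketched as follows (an illustration with names of our choosing). A useful self-check is that a translated gamma variable has mean x0 + α/β, variance α/β², and third central moment 2α/β³, so the matched parameters must reproduce the input moments:

```python
from math import log

def translated_gamma_params(mean_s, var_s, third_central):
    # third_central = E[{S - E(S)}^3], assumed positive (right skew)
    alpha = 4.0 * var_s**3 / third_central**2      # (18.38)
    beta = 2.0 * var_s / third_central             # (18.39)
    x0 = mean_s - 2.0 * var_s**2 / third_central   # (18.40)
    return alpha, beta, x0

def exponential_premium_translated_gamma(alpha, beta, x0, c):
    # Exponential premium (18.41); valid only for 0 < c < beta.
    return x0 + (alpha / c) * log(beta / (beta - c))
```

As c → 0 the premium (18.41) tends to x0 + α/β, i.e. to the pure premium E(S).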

18.3.3 Compound Poisson Distribution

In many applications, the number of claims N is assumed to be described by the Poisson distribution with the probability function given by

$$ \mathrm{P}(N = n) = \frac{\lambda^n e^{-\lambda}}{n!}, \qquad n = 0, 1, 2, \ldots, \qquad (18.42) $$

where λ > 0. With this choice of the distribution of N, the distribution of S is called a compound Poisson distribution. The compound Poisson distribution has a number of useful properties. Formulae for the exponential premium and for the premiums based on the expectation of the aggregate claims simplify because E(N) = Var(N) = λ and MN(t) = exp{λ(eᵗ − 1)}. Moreover, for large λ, the compound Poisson distribution can be approximated by a normal distribution with mean λE(X) and variance λE(X²), and the quantile premium is given by

$$ P_Q(\varepsilon) = \lambda\,\mathrm{E}(X) + \Phi^{-1}(1-\varepsilon)\sqrt{\lambda\,\mathrm{E}(X^2)}, \qquad \varepsilon \in (0,1), \qquad (18.43) $$


and the exponential premium is of the form

$$ P_E(c) = \lambda\,\mathrm{E}(X) + \frac{c}{2}\,\lambda\,\mathrm{E}(X^2), \qquad c > 0. \qquad (18.44) $$

If the first three moments of the individual claim distribution exist, the compound Poisson distribution can be approximated by the translated gamma distribution with the following parameters:

$$ \alpha = 4\lambda\,\frac{\{\mathrm{E}(X^2)\}^3}{\{\mathrm{E}(X^3)\}^2}, \qquad (18.45) $$

$$ \beta = 2\,\frac{\mathrm{E}(X^2)}{\mathrm{E}(X^3)}, \qquad (18.46) $$

$$ x_0 = \lambda\,\mathrm{E}(X) - 2\lambda\,\frac{\{\mathrm{E}(X^2)\}^2}{\mathrm{E}(X^3)}. \qquad (18.47) $$

Substituting these parameters into (18.41), one can obtain the formula for the exponential premium. It is worth mentioning that the compound Poisson distribution has many attractive features (Bowers et al., 1997; Panjer and Willmot, 1992); for example, the combination of a number of portfolios, each of which has a compound Poisson distribution of aggregate claims, also has a compound Poisson distribution of aggregate claims. Moreover, this distribution can be used to approximate the distribution of total claims in the individual model. Although the compound Poisson distribution is usually appropriate in life insurance modeling, it sometimes does not provide an adequate fit to insurance data in other coverages (Willmot, 2001).
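In the compound Poisson case all the quantities above are driven by λ and the raw moments E(X), E(X²), E(X³). A compact sketch (names ours):

```python
from math import sqrt
from statistics import NormalDist

def compound_poisson_premiums(lam, m1, m2, m3, eps, c):
    # m1, m2, m3: raw moments E(X), E(X^2), E(X^3) of the claim size
    p_q = lam * m1 + NormalDist().inv_cdf(1 - eps) * sqrt(lam * m2)  # (18.43)
    p_e = lam * m1 + 0.5 * c * lam * m2                              # (18.44)
    alpha = 4.0 * lam * m2**3 / m3**2                                # (18.45)
    beta = 2.0 * m2 / m3                                             # (18.46)
    x0 = lam * m1 - 2.0 * lam * m2**2 / m3                           # (18.47)
    return p_q, p_e, (alpha, beta, x0)
```

A convenient sanity check uses degenerate claims X ≡ 1, for which all raw moments equal one and S reduces to the Poisson count itself.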

18.3.4 Compound Negative Binomial Distribution

When the variance of the number of claims exceeds its mean, the Poisson distribution is not appropriate; in this situation the use of the negative binomial distribution, with the probability function given by

$$ \mathrm{P}(N = n) = \binom{r+n-1}{n}\, p^r q^n, \qquad n = 0, 1, 2, \ldots, \qquad (18.48) $$

where r > 0, 0 < p < 1, and q = 1 − p, is suggested. In many cases it provides a significantly better fit than the Poisson distribution. When


a negative binomial distribution is selected for N, the distribution of S is called a compound negative binomial distribution. Since for the negative binomial distribution we have

$$ \mathrm{E}(N) = \frac{rq}{p}, \qquad \mathrm{Var}(N) = \frac{rq}{p^2}, \qquad (18.49) $$

and

$$ M_N(t) = \left(\frac{p}{1 - q e^t}\right)^r, \qquad (18.50) $$
the formulae for the exponential premium and for the premiums based on the expectation of the aggregate claims simplify. For large r, the compound negative binomial distribution can be approximated by a normal distribution with mean (rq/p) E(X) and variance (rq/p) Var(X) + (rq/p²){E(X)}². In this case the quantile premium is given by

$$ P_Q(\varepsilon) = \frac{rq}{p}\,\mathrm{E}(X) + \Phi^{-1}(1-\varepsilon)\sqrt{\frac{rq}{p}\,\mathrm{Var}(X) + \frac{rq}{p^2}\,\{\mathrm{E}(X)\}^2}, \qquad \varepsilon \in (0,1), \qquad (18.51) $$

and the exponential premium is of the form

$$ P_E(c) = \frac{rq}{p}\,\mathrm{E}(X) + \frac{c}{2}\left[\frac{rq}{p}\,\mathrm{Var}(X) + \frac{rq}{p^2}\,\{\mathrm{E}(X)\}^2\right], \qquad c > 0. \qquad (18.52) $$
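A sketch of (18.51) and (18.52), with the moments of N taken from (18.49) (function name ours):

```python
from math import sqrt
from statistics import NormalDist

def compound_negbin_premiums(r, p, e_x, var_x, eps, c):
    # r, p: negative binomial parameters; e_x, var_x: E(X), Var(X)
    q = 1.0 - p
    e_s = (r * q / p) * e_x
    var_s = (r * q / p) * var_x + (r * q / p**2) * e_x**2
    p_quantile = e_s + NormalDist().inv_cdf(1 - eps) * sqrt(var_s)   # (18.51)
    p_exponential = e_s + 0.5 * c * var_s                            # (18.52)
    return p_quantile, p_exponential
```

For r = 3 and p = 0.5 we get E(N) = 3 and Var(N) = 6, so with E(X) = 10 and Var(X) = 4 the aggregate moments are E(S) = 30 and Var(S) = 612.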

It is worth mentioning that the negative binomial distribution arises as a mixed Poisson variate. More precisely, various distributions for the number of claims can be generated by assuming that the Poisson parameter Λ is a random variable with probability density function u(λ), λ > 0, and that the conditional distribution of N, given Λ = λ, is Poisson with parameter λ. In such a case the distribution of S is called a compound mixed Poisson distribution, see also Chapter 14. This choice might be useful, for example, when we consider a population of insureds where various classes of insureds within the population generate numbers of claims according to the Poisson distribution, but the Poisson parameters may be different for the various classes. The negative binomial distribution can be derived in this fashion when u(λ) is the gamma probability density function.
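The mixing construction can be checked numerically: integrating the Poisson probabilities against a gamma density with shape r and rate θ reproduces the negative binomial probabilities (18.48) with p = θ/(1 + θ). A hedged sketch using simple midpoint quadrature (names, grid, and truncation point are our choices):

```python
from math import comb, exp, gamma

def negbin_pmf(n, r, p):
    # negative binomial probability (18.48), integer r
    return comb(r + n - 1, n) * p**r * (1 - p)**n

def gamma_mixed_poisson_pmf(n, r, rate, grid=20000, lam_max=60.0):
    # P(N = n) when Lambda ~ Gamma(shape r, rate) and N | Lambda = lam
    # is Poisson(lam); midpoint quadrature over the mixing density u(lam)
    h = lam_max / grid
    total = 0.0
    for i in range(grid):
        lam = (i + 0.5) * h
        u = rate**r * lam**(r - 1) * exp(-rate * lam) / gamma(r)   # gamma pdf
        total += u * exp(-lam) * lam**n / gamma(n + 1) * h         # times Poisson pmf
    return total
```

With r = 3 and rate θ = 1 the mixed probabilities agree with the negative binomial pmf for p = 0.5 up to quadrature error.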

18.3.5 Examples

Quantile premium for the collective risk model with log-normal claim distribution. As the number of policies sold by the insurance company grows, the actuary has decided to try to fit a collective risk model to the portfolio. The log-normal distribution with parameters µ = 18.3806 and σ = 1.1052 (these parameters are again estimated on the basis of the real-life data describing losses resulting from catastrophic events in the USA, see Chapter 13) is chosen to describe the amount of claims. The number of claims is assumed to be Poisson distributed with parameter λ = 34.2. Moreover, the claim amounts and the number of claims are believed to be independent.

The actuary wants to compare the behavior of the quantile premium for the whole portfolio of policies given by the general formula (18.34) and in the case of the translated gamma approximation. Figure 18.3 illustrates how the premium based on the translated gamma approximation (dashed red line) fits the premium determined by the exact compound Poisson distribution (solid blue line). The premium for the original compound distribution has to be determined by numerical simulation, which is why the line is jagged; a smoother curve can be obtained by performing a larger number of Monte Carlo simulations (here we again performed 10000 simulations). The actuary notices that the approximation fits better for larger values of ε and worse for smaller values. In fact, the distribution functions of the original distribution and of its translated gamma approximation lie close to each other, but both are increasing and tend to one at infinity. This explains why the quantile premiums, understood as inverse functions of the distribution functions, differ so much for ε close to zero.

Exponential premium for the collective risk model with gamma claim distribution.
The actuary considers again the collective risk model where the number of claims is described by the Poisson distribution with parameter λ = 34.2, i.e. the compound Poisson model. But this time the claims are described by the gamma distribution with the parameters α = 0.9185 and β = 5.6870 · 10−9 (parameters are based on the same catastrophic data as in the previous example). Now the actuary considers the exponential premium for the aggregate claims in this model. The exponential premium in the case of the translated gamma

Figure 18.3: Quantile premium for the log-normal claim distribution and its translated gamma approximation in the collective risk model. The exact premium (solid blue line) and the premium in the case of the approximation (dashed red line); the premium (USD billion) is plotted against the quantile parameter ε ∈ (0.01, 0.1). STFprem03.xpl
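The simulation behind the solid line in Figure 18.3 can be sketched as follows (a Python illustration, not the original STFprem03.xpl routine):

```python
import random
from math import exp

def mc_compound_poisson_quantile(lam, mu, sigma, eps, n_sim=3000, seed=7):
    # Monte Carlo quantile premium for a compound Poisson with
    # log-normal claim sizes
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sim):
        # sample N ~ Poisson(lam) by inverting the cdf
        n, term, cdf, u = 0, exp(-lam), exp(-lam), rng.random()
        while u > cdf and term > 0.0:
            n += 1
            term *= lam / n
            cdf += term
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n)))
    totals.sort()
    return totals[int((1 - eps) * n_sim)]
```

With λ = 34.2, µ = 18.3806, σ = 1.1052 the estimated quantile premiums decrease in ε and lie above E(S) = λ exp(µ + σ²/2) ≈ 6.05 · 10^9.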

approximation (dashed red line) and the exact premium (solid blue line) are plotted in Figure 18.4. Both premiums, for the original and for the approximating distribution, are calculated analytically, since the calculations are straightforward in this case. Both functions increase with the risk aversion parameter. We see that the translated gamma approximation can be a useful and precise tool for calculating premiums in the collective risk model.

Figure 18.4: Exponential premium for the gamma claim distribution in the collective risk model. The exact premium (solid blue line) and the translated gamma approximation premium (dashed red line); the premium (USD billion) is plotted against the risk aversion parameter (in units of 10⁻⁹). STFprem04.xpl

Bibliography

Bowers, N. L. Jr., Gerber, H. U., Hickman, J. C., Jones, D. A., and Nesbitt, C. J. (1997). Actuarial Mathematics, 2nd edition, The Society of Actuaries, Schaumburg.

Daykin, C. D., Pentikainen, T., and Pesonen, M. (1994). Practical Risk Theory for Actuaries, Chapman & Hall, London.

Embrechts, P., Furrer, H., and Kaufmann, R. (2003). Quantifying regulatory capital for operational risk, Trading & Regulation 9(3): 217-233.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Gerber, H. U. (1980). An Introduction to Mathematical Risk Theory, Huebner, Philadelphia.

Klugman, S. A., Panjer, H. H., and Willmot, G. E. (1998). Loss Models: From Data to Decisions, Wiley, New York.

Panjer, H. H. and Willmot, G. E. (1992). Insurance Risk Models, Society of Actuaries, Schaumburg.

Straub, E. (1988). Non-Life Insurance Mathematics, Springer, Berlin.

Willmot, G. E. (2001). The nature of modelling insurance losses, Inaugural Lecture, Munich Reinsurance, Toronto.

Young, V. R. (2004). Premium calculation principles, to appear in Encyclopedia of Actuarial Science, J. L. Teugels and B. Sundt (eds.), Wiley, Chichester.

19 Pure Risk Premiums under Deductibles

Krzysztof Burnecki, Joanna Nowicka-Zagrajek, and Agnieszka Wyłomańska

19.1 Introduction

It is a common practice in most insurance lines for the coverage to be restricted by a deductible. For example, deductibles are often incorporated in motor, health, disability, life, and business insurance. The main idea of a deductible is, firstly, to reduce claim handling costs by excluding coverage for the often numerous small claims and, secondly, to provide some motivation to the insured to prevent claims through a limited degree of participation in claim costs (Daykin, Pentikainen, and Pesonen, 1994; Sundt, 1994; Klugman, Panjer, and Willmot, 1998). We mention the following properties of a deductible:

(i) loss prevention – as the compensation is reduced by a deductible, the retention of the insured is positive; this makes a good case for avoiding the loss;

(ii) loss reduction – the fact that a deductible puts the policyholder at risk of obtaining only partial compensation provides an economic incentive to reduce the extent of the damage;

(iii) avoidance of small claims where administration costs are dominant – for small losses, the administration costs will often exceed the loss itself, and hence the insurance company would want the policyholder to pay it himself;

(iv) premium reduction – premium reduction can be an important aspect for the policyholders; they may prefer to take a higher deductible to get a lower premium.


There are two types of deductibles: an annual deductible and a per occurrence deductible, the latter being more common. We quote now an example from the American market. Blue Shield of California, an independent member of the Blue Shield Association, is California's second largest not-for-profit health care company, with 2 million members and USD 3 billion annual revenue. Blue Shield of California offers four Preferred Provider Organization (PPO) plans, each offering similar levels of benefits with a different deductible option: USD 500, 750, 1500, and 2000, respectively. For example, the Blue Shield USD 500 Deductible PPO Plan has a USD 500 annual deductible for most covered expenses. This is just the case of the fixed amount deductible, which is exploited in Section 19.2.2. The annual deductible does not apply to office visits or prescription medications; office visits and most lab and x-ray services are provided at a USD 30 copayment. This is also the case of the fixed amount deductible. For other covered services, after the annual deductible has been met, the insured pays 25% up to an annual maximum of USD 3500. This is a case of the limited proportional deductible, which is examined in Section 19.2.4.

In Section 19.2 we present formulae for pure risk premiums under franchise, fixed amount, proportional, limited proportional, and disappearing deductibles in terms of the limited expected value function (levf), which was introduced and exploited in Chapter 13. Using the specific form of the levf for different loss distributions, we present in Section 19.3 formulae for pure risk premiums under these deductibles for the log-normal, Pareto, Burr, Weibull, gamma, and mixture of two exponential distributions. The formulae can be used to obtain annual pure risk premiums under the deductibles in the individual and collective risk model framework analysed in Chapter 18.
We illustrate graphically the influence of the parameters of the discussed deductibles on the premiums, considering the Danish fire loss example studied in Chapter 13. This gives insight into the important issue of choosing an optimal deductible and its level for a potential insured, and of properly pricing the accepted risk for the insurer.

19.2 General Formulae for Premiums Under Deductibles

Let X denote a non-negative continuous random variable describing the size of a claim (risk, loss), F(t) and f(t) its distribution and probability density functions, respectively, and h(x) the payment function corresponding to a deductible. We consider here the simplest premium, the pure risk premium, see Chapter 18. The pure risk premium P (as we consider only the pure risk premium, we will henceforth use the term premium to mean pure risk premium) is equal to the expectation, i.e.

$$ P = \mathrm{E}(X), \qquad (19.1) $$

and we assume that the expected value E(X) exists. In the case of no deductible the payment function is obviously of the form h(x) = x. This means that if the loss is equal to x, the insurer pays the whole claim amount and P = E(X).

We express formulae for premiums under deductibles in terms of the so-called limited expected value function (levf), namely

$$ L(x) = \mathrm{E}\{\min(X, x)\} = \int_0^x y\,f(y)\,dy + x\,\{1 - F(x)\}, \qquad x > 0. \qquad (19.2) $$

The value of this function at a point x is equal to the expected value of the random variable X truncated at the point x. The function is a very useful tool for testing the goodness of fit of an analytic distribution function to the observed claim size distribution function and was already discussed in Chapter 13. In the following sections we illustrate premium formulae for the most important types of deductibles. All examples were created with the insurance library of XploRe.
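For a data set, the sample analogue of (19.2) is immediate; a one-line sketch (name ours):

```python
def empirical_levf(sample, x):
    # empirical counterpart of (19.2): the sample mean of min(X_i, x)
    return sum(min(v, x) for v in sample) / len(sample)
```

By construction the empirical levf is non-decreasing in x and tends to the sample mean as x grows.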

19.2.1 Franchise Deductible

One of the deductibles that can be incorporated in the contract is the so-called franchise deductible. In this case the insurer pays the whole claim if the agreed deductible amount is exceeded. More precisely, under the franchise deductible of a, if the loss is less than a the insurer pays nothing, but if the loss equals or exceeds a the claim is paid in full. This means that the payment function can be described as (Figure 19.1)

$$ h_{FD(a)}(x) = \begin{cases} 0, & x < a, \\ x, & \text{otherwise}. \end{cases} \qquad (19.3) $$

It is worth noticing that the franchise deductible satisfies properties (i), (iii), and (iv), but not property (ii). This deductible can even work against property

Figure 19.1: The payment function under the franchise deductible (solid blue line) and no deductible (dashed red line); the jump occurs at the deductible level a. STFded01.xpl

(ii), since if a loss occurs, the policyholder would prefer it to be greater than or equal to the deductible. The pure risk premium under the franchise deductible can be expressed in terms of the premium in the case of no deductible and the corresponding limited expected value function:

$$ P_{FD(a)} = P - L(a) + a\,\{1 - F(a)\}. \qquad (19.4) $$

It can easily be noticed that this premium is a decreasing function of a. When a = 0, the premium is equal to the no-deductible case, and as a tends to infinity the premium tends to zero.

Figure 19.2: The payment function under the fixed amount deductible (solid blue line) and no deductible (dashed red line); the payment starts at the deductible level b. STFded02.xpl

19.2.2 Fixed Amount Deductible

An agreement between the insured and the insurer incorporating a deductible b means that the insurer pays only the part of the claim which exceeds the amount b. If the size of the claim falls below this amount, the claim is not covered by the contract and the insured receives no indemnification. The payment function is thus given by

$$ h_{FAD(b)}(x) = \max(0, x - b), \qquad (19.5) $$

see Figure 19.2. The fixed amount deductible satisfies all the properties (i)-(iv).


The premium in the case of the fixed amount deductible has the following form in terms of the premium under the franchise deductible:

$$ P_{FAD(b)} = P - L(b) = P_{FD(b)} - b\,\{1 - F(b)\}. \qquad (19.6) $$

As previously, this premium is a decreasing function of b; for b = 0 it gives the premium in the case of no deductible, and as b tends to infinity it tends to zero.

19.2.3 Proportional Deductible

In the case of the proportional deductible with c ∈ (0, 1), each payment is reduced by c · 100% (the insurer pays 100%(1 − c) of the claim). Consequently, the payment function is given by (Figure 19.3)

$$ h_{PD(c)}(x) = (1 - c)\,x. \qquad (19.7) $$

The proportional deductible satisfies properties (i), (ii), and (iv), but not property (iii), as it implies some compensation for even very small claims. The relation between the premium under the proportional deductible and the premium in the case of no deductible has the following form:

$$ P_{PD(c)} = (1 - c)\,\mathrm{E}(X) = (1 - c)\,P. \qquad (19.8) $$

Clearly, the premium is a decreasing function of c, with P_{PD(0)} = P and P_{PD(1)} = 0.

19.2.4 Limited Proportional Deductible

The proportional deductible is usually combined with a minimum amount deductible, so that the insurer does not need to handle small claims, and with a maximum amount deductible, to limit the retention of the insured. For the limited proportional deductible of c with a minimum amount m1 and maximum amount m2 (0 ≤ m1 < m2) the payment function is given by

$$ h_{LPD(c, m_1, m_2)}(x) = \begin{cases} 0, & x \le m_1, \\ x - m_1, & m_1 < x \le m_1/c, \\ (1-c)\,x, & m_1/c < x \le m_2/c, \\ x - m_2, & \text{otherwise}, \end{cases} \qquad (19.9) $$

see Figure 19.4. The limited proportional deductible satisfies all the properties (i)-(iv).

Figure 19.3: The payment function under the proportional deductible (solid blue line) and no deductible (dashed red line). STFded03.xpl

The following formula expresses the premium under the limited proportional deductible in terms of the premium in the case of no deductible and the corresponding limited expected value function:

$$ P_{LPD(c, m_1, m_2)} = P - L(m_1) + c\left\{L\!\left(\frac{m_1}{c}\right) - L\!\left(\frac{m_2}{c}\right)\right\}. \qquad (19.10) $$

Sometimes only one limitation is incorporated in the contract, i.e. m1 = 0 or m2 = ∞. It is easy to check that the limited proportional deductible with m1 = 0 and m2 = ∞ reduces to the proportional deductible.

Figure 19.4: The payment function under the limited proportional deductible (solid blue line) and no deductible (dashed red line); the kinks occur at m1, m1/c, and m2/c. STFded04.xpl

19.2.5 Disappearing Deductible

There is another type of deductible, a compromise between the franchise and the fixed amount deductible. In the case of the disappearing deductible the payment depends on the loss in the following way: if the loss is less than an amount d1 > 0, the insurer pays nothing; if the loss exceeds the amount d2 (d2 > d1), the insurer pays the loss in full; if the loss is between d1 and d2, the deductible is reduced linearly between d1 and d2. Therefore, the larger the claim, the less of the deductible becomes the responsibility of the policyholder.

Figure 19.5: The payment function under the disappearing deductible (solid blue line) and no deductible (dashed red line); the kinks occur at d1 and d2. STFded05.xpl

The payment function is given by (Figure 19.5)

$$ h_{DD(d_1, d_2)}(x) = \begin{cases} 0, & x \le d_1, \\ \dfrac{d_2\,(x - d_1)}{d_2 - d_1}, & d_1 < x \le d_2, \\ x, & \text{otherwise}. \end{cases} \qquad (19.11) $$

This kind of deductible satisfies properties (i), (iii), and (iv), but, similarly to the franchise deductible, it works against (ii). The following formula gives the premium under the disappearing deductible in terms of the premium in the case of no deductible and the corresponding limited expected value function:

$$ P_{DD(d_1, d_2)} = P + \frac{d_1}{d_2 - d_1}\,L(d_2) - \frac{d_2}{d_2 - d_1}\,L(d_1). \qquad (19.12) $$

If d1 = 0, the premium does not depend on d2 and it becomes the premium in the case of no deductible. If d2 tends to infinity, the disappearing deductible reduces to the fixed amount deductible of d1.
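All five premium formulae of this section depend on the loss distribution only through P, L(·), and F(·), so they can be implemented once and reused for any loss model. A hedged sketch (names ours; c ∈ (0, 1) is assumed, and the unit exponential used in the usage below is only a convenient test case with L(x) = F(x) = 1 − e⁻ˣ and P = 1):

```python
from math import exp

def deductible_premiums(P, L, F, a, b, c, m1, m2, d1, d2):
    # pure premiums under the five deductibles, via the levf L and cdf F
    return {
        "franchise": P - L(a) + a * (1.0 - F(a)),                         # (19.4)
        "fixed_amount": P - L(b),                                         # (19.6)
        "proportional": (1.0 - c) * P,                                    # (19.8)
        "limited_proportional": P - L(m1) + c * (L(m1 / c) - L(m2 / c)),  # (19.10)
        "disappearing": P + (d1 * L(d2) - d2 * L(d1)) / (d2 - d1),        # (19.12)
    }

# sanity check with a unit exponential loss
L = F = lambda x: 1.0 - exp(-x)
out = deductible_premiums(1.0, L, F, a=1.0, b=1.0, c=0.25, m1=0.0, m2=50.0, d1=0.5, d2=2.0)
```

For the exponential case the closed forms are easy to verify by hand: the franchise premium equals (1 + a)e⁻ᵃ, the fixed amount premium equals e⁻ᵇ, and with m1 = 0 and m2 large the limited proportional premium collapses to the proportional one, (1 − c)P.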

19.3 Premiums Under Deductibles for Given Loss Distributions

In the preceding section we showed a relation between the pure risk premium under several deductibles and a limited expected value function. Now, we use the relation to present formulae for premiums in the case of deductibles for a number of loss distributions often used in non-life actuarial practice, see Burnecki, Nowicka-Zagrajek, and Weron (2004). To this end we apply the formulae for levf for diﬀerent distributions given in Chapter 13. The log-normal, Pareto, Burr, Weibull, gamma, and mixture of two exponential distributions are typical candidates when looking for a suitable analytic distribution, which ﬁts the observed data well, see Aebi, Embrechts, and Mikosch (1992), Burnecki, Kukla, and Weron (2000), Embrechts, Kl¨ uppelberg, and Mikosch (1997), Mikosch (1997), Panjer and Willmot (1992), and Chapter 13. In the log-normal and Burr case the premium formulae will be illustrated on a real-life example, namely on the ﬁre loss data, already analysed in Chapter 13. For illustrative purposes, we assume that the total amount of risk X simply follows one of the ﬁtted distributions, whereas in practice, in the individual and collective risk model framework (see Chapter 18), in order to obtain an annual premium under a per occurrence deductible we would have to multiply the premium by a number of policies and mean number of losses per year, respectively, since in the individual risk model n E h (Xk ) = n E {h (Xk )} , k=1

provided that the claim amount variables are identically distributed, and in the collective risk model

E\left\{\sum_{k=1}^{N} h(X_k)\right\} = E(N)\,E\{h(X_k)\}.
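The collective risk model relation above can be checked by simulation. The sketch below is an illustration added here (not part of the original text): it uses a Poisson claim count, exponential claim sizes, and a fixed amount deductible b, for which E{h(X)} = θ e^{−b/θ} is available in closed form; all parameter values are arbitrary.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm for sampling a Poisson random variate
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def annual_payout(lam, theta, b, rng):
    # one year's aggregate payment under a fixed amount deductible b
    n = poisson(lam, rng)
    return sum(max(rng.expovariate(1 / theta) - b, 0.0) for _ in range(n))

rng = random.Random(42)
lam, theta, b = 3.0, 1.0, 0.5                 # E(N), E(X), deductible
exact = lam * theta * math.exp(-b / theta)    # E(N) * E{h(X)} for Exp(1/theta) losses
mc = sum(annual_payout(lam, theta, b, rng) for _ in range(100_000)) / 100_000
```

With a large number of simulated years, the Monte Carlo mean of the aggregate payment agrees with E(N) E{h(X)}, as the relation asserts.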

19.3.1 Log-normal Loss Distribution

Consider a random variable Z which has the normal distribution. Let X = e^Z. The distribution of X is called the log-normal distribution and its distribution function is given by

F(t) = \Phi\left(\frac{\ln t - \mu}{\sigma}\right) = \int_0^t \frac{1}{\sqrt{2\pi}\,\sigma y} \exp\left\{-\frac{1}{2}\left(\frac{\ln y - \mu}{\sigma}\right)^2\right\} dy,

where t, σ > 0, µ ∈ R, and Φ(·) is the standard normal distribution function, see Chapter 13. For the log-normal distribution the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = \exp\left(\mu + \frac{\sigma^2}{2}\right) \left\{1 - \Phi\left(\frac{\ln a - \mu - \sigma^2}{\sigma}\right)\right\},

(b) fixed amount deductible premium

P_{FAD(b)} = \exp\left(\mu + \frac{\sigma^2}{2}\right) \left\{1 - \Phi\left(\frac{\ln b - \mu - \sigma^2}{\sigma}\right)\right\} - b\left\{1 - \Phi\left(\frac{\ln b - \mu}{\sigma}\right)\right\},

(c) proportional deductible premium

P_{PD(c)} = (1-c) \exp\left(\mu + \frac{\sigma^2}{2}\right),

(d) limited proportional deductible premium

P_{LPD(c,m_1,m_2)} = \exp\left(\mu + \frac{\sigma^2}{2}\right) \left\{1 - \Phi\left(\frac{\ln m_1 - \mu - \sigma^2}{\sigma}\right)\right\} + m_1 \left\{\Phi\left(\frac{\ln m_1 - \mu}{\sigma}\right) - \Phi\left(\frac{\ln(m_1/c) - \mu}{\sigma}\right)\right\}
+ c \exp\left(\mu + \frac{\sigma^2}{2}\right) \left\{\Phi\left(\frac{\ln(m_1/c) - \mu - \sigma^2}{\sigma}\right) - \Phi\left(\frac{\ln(m_2/c) - \mu - \sigma^2}{\sigma}\right)\right\} + m_2 \left\{\Phi\left(\frac{\ln(m_2/c) - \mu}{\sigma}\right) - 1\right\},

(e) disappearing deductible premium

P_{DD(d_1,d_2)} = \frac{\exp(\mu + \sigma^2/2)}{d_2 - d_1} \left\{d_2 - d_1 + d_1 \Phi\left(\frac{\ln d_2 - \mu - \sigma^2}{\sigma}\right) - d_2 \Phi\left(\frac{\ln d_1 - \mu - \sigma^2}{\sigma}\right)\right\}
+ \frac{d_1 d_2}{d_2 - d_1} \left\{\Phi\left(\frac{\ln d_1 - \mu}{\sigma}\right) - \Phi\left(\frac{\ln d_2 - \mu}{\sigma}\right)\right\}.

We now illustrate the above formulae using the Danish fire loss data. We study the log-normal loss distribution with parameters µ = 12.6645 and σ = 1.3981, which best fitted the data. Figure 19.6 depicts the premium under the franchise and fixed amount deductibles in the log-normal case. Figure 19.7 shows the effect of the parameters c, m1, and m2 of the limited proportional deductible. Clearly, P_{LPD(c,m1,m2)} is a decreasing function of these parameters. Finally, Figure 19.8 depicts the influence of the parameters d1 and d2 of the disappearing deductible. Markedly, P_{DD(d1,d2)} is a decreasing function of the parameters and we can observe that the effect of increasing d2 is rather minor.
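The log-normal formulae are straightforward to evaluate numerically. The sketch below is an added illustration (not part of the original text): it builds Φ from `math.erf` and computes the franchise and fixed amount deductible premiums for the fitted parameters µ = 12.6645, σ = 1.3981, exploiting the identity P_FAD(b) = P_FD(b) − b{1 − F(b)}.

```python
import math

MU, SIGMA = 12.6645, 1.3981   # fitted log-normal parameters (Danish fire loss data)

def Phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_fd(a):
    # franchise deductible premium, log-normal case
    return math.exp(MU + SIGMA ** 2 / 2) * (1 - Phi((math.log(a) - MU - SIGMA ** 2) / SIGMA))

def p_fad(b):
    # fixed amount deductible premium: franchise premium minus b * P(X > b)
    return p_fd(b) - b * (1 - Phi((math.log(b) - MU) / SIGMA))
```

As expected, P_FD dominates P_FAD, both decrease in the deductible, and for a negligible deductible P_FAD approaches the premium with no deductible, exp(µ + σ²/2).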

19.3.2 Pareto Loss Distribution

The Pareto distribution function is defined by

F(t) = 1 - \left(\frac{\lambda}{\lambda + t}\right)^{\alpha},

where t, α, λ > 0, see Chapter 13. The expectation of the Pareto distribution exists only for α > 1. For the Pareto distribution with α > 1 the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = \frac{a\alpha + \lambda}{\alpha - 1} \left(\frac{\lambda}{a + \lambda}\right)^{\alpha},

Figure 19.6: The premium under the franchise deductible (thick blue line) and fixed amount deductible (thin red line), in DKK million, as functions of the deductible (DKK million). The log-normal case. STFded06.xpl

(b) fixed amount deductible premium

P_{FAD(b)} = \frac{b + \lambda}{\alpha - 1} \left(\frac{\lambda}{b + \lambda}\right)^{\alpha},

(c) proportional deductible premium

P_{PD(c)} = (1-c)\,\frac{\lambda}{\alpha - 1},

Figure 19.7: The premium under the limited proportional deductible with respect to the parameter m2 (DKK million). The thick blue solid line represents the premium for c = 0.2 and m1 = 100 000 DKK, the thin blue solid line for c = 0.4 and m1 = 100 000 DKK, the dashed red line for c = 0.2 and m1 = 1 million DKK, and the dotted red line for c = 0.4 and m1 = 1 million DKK. The log-normal case. STFded07.xpl

(d) limited proportional deductible premium

P_{LPD(c,m_1,m_2)} = \frac{1}{\alpha - 1}\left\{(m_1 + \lambda)\left(\frac{\lambda}{m_1 + \lambda}\right)^{\alpha} + c\left(\frac{m_2}{c} + \lambda\right)\left(\frac{\lambda}{m_2/c + \lambda}\right)^{\alpha} - c\left(\frac{m_1}{c} + \lambda\right)\left(\frac{\lambda}{m_1/c + \lambda}\right)^{\alpha}\right\},

Figure 19.8: The premium under the disappearing deductible with respect to the parameter d2 (DKK million). The thick blue line represents the premium for d1 = 100 000 DKK and the thin red line the premium for d1 = 500 000 DKK. The log-normal case. STFded08.xpl

(e) disappearing deductible premium

P_{DD(d_1,d_2)} = \frac{1}{(\alpha - 1)(d_2 - d_1)}\left\{d_2 (d_1 + \lambda)\left(\frac{\lambda}{d_1 + \lambda}\right)^{\alpha} - d_1 (d_2 + \lambda)\left(\frac{\lambda}{d_2 + \lambda}\right)^{\alpha}\right\}.
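Since the Pareto premiums are available in closed form, they are easy to evaluate and cross-check. The sketch below is an added illustration with arbitrary parameters (α = 2.5, λ = 3 · 10^6 are not fitted values): it verifies that P_FAD(0) equals the premium with no deductible λ/(α − 1), that P_FD − P_FAD = a{1 − F(a)}, and that the disappearing deductible premium tends to P_FAD(d1) as d2 grows.

```python
# Pure risk premiums under deductibles for a Pareto(alpha, lam) loss, alpha > 1.
ALPHA, LAM = 2.5, 3.0e6   # illustrative parameters, not fitted values

def pareto_p_fd(a):
    # franchise deductible premium
    return (a * ALPHA + LAM) / (ALPHA - 1) * (LAM / (a + LAM)) ** ALPHA

def pareto_p_fad(b):
    # fixed amount deductible premium
    return (b + LAM) / (ALPHA - 1) * (LAM / (b + LAM)) ** ALPHA

def pareto_p_dd(d1, d2):
    # disappearing deductible premium
    s1 = d2 * (d1 + LAM) * (LAM / (d1 + LAM)) ** ALPHA
    s2 = d1 * (d2 + LAM) * (LAM / (d2 + LAM)) ** ALPHA
    return (s1 - s2) / ((ALPHA - 1) * (d2 - d1))

mean = LAM / (ALPHA - 1)   # premium in the case of no deductible
```

The identity P_FD(a) − P_FAD(a) = a{1 − F(a)} holds for any loss distribution: under the franchise deductible the insurer additionally pays the deductible itself whenever the loss exceeds it.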

19.3.3 Burr Loss Distribution

Experience has shown that the Pareto formula is often an appropriate model for the claim size distribution, particularly where exceptionally large claims may occur. However, there is sometimes a need to find heavy-tailed distributions which offer greater flexibility than the Pareto law. Such flexibility is provided by the Burr distribution, whose distribution function is given by

F(t) = 1 - \left(\frac{\lambda}{\lambda + t^{\tau}}\right)^{\alpha},

where t, α, λ, τ > 0, see Chapter 13. Its mean exists only for ατ > 1. For the Burr distribution with ατ > 1 the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = \frac{\lambda^{1/\tau}\,\Gamma(\alpha - 1/\tau)\,\Gamma(1 + 1/\tau)}{\Gamma(\alpha)} \left\{1 - B\left(1 + \frac{1}{\tau}, \alpha - \frac{1}{\tau}, \frac{a^{\tau}}{\lambda + a^{\tau}}\right)\right\},

(b) fixed amount deductible premium

P_{FAD(b)} = \frac{\lambda^{1/\tau}\,\Gamma(\alpha - 1/\tau)\,\Gamma(1 + 1/\tau)}{\Gamma(\alpha)} \left\{1 - B\left(1 + \frac{1}{\tau}, \alpha - \frac{1}{\tau}, \frac{b^{\tau}}{\lambda + b^{\tau}}\right)\right\} - b\left(\frac{\lambda}{\lambda + b^{\tau}}\right)^{\alpha},

(c) proportional deductible premium

P_{PD(c)} = (1-c)\,\frac{\lambda^{1/\tau}\,\Gamma(\alpha - 1/\tau)\,\Gamma(1 + 1/\tau)}{\Gamma(\alpha)},


(d) limited proportional deductible premium

P_{LPD(c,m_1,m_2)} = \frac{\lambda^{1/\tau}\,\Gamma(\alpha - 1/\tau)\,\Gamma(1 + 1/\tau)}{\Gamma(\alpha)} \left\{1 - B\left(1 + \frac{1}{\tau}, \alpha - \frac{1}{\tau}, \frac{m_1^{\tau}}{\lambda + m_1^{\tau}}\right) + c\,B\left(1 + \frac{1}{\tau}, \alpha - \frac{1}{\tau}, \frac{(m_1/c)^{\tau}}{\lambda + (m_1/c)^{\tau}}\right) - c\,B\left(1 + \frac{1}{\tau}, \alpha - \frac{1}{\tau}, \frac{(m_2/c)^{\tau}}{\lambda + (m_2/c)^{\tau}}\right)\right\}
- m_1\left\{\left(\frac{\lambda}{\lambda + m_1^{\tau}}\right)^{\alpha} - \left(\frac{\lambda}{\lambda + (m_1/c)^{\tau}}\right)^{\alpha}\right\} - m_2\left(\frac{\lambda}{\lambda + (m_2/c)^{\tau}}\right)^{\alpha},

(e) disappearing deductible premium

P_{DD(d_1,d_2)} = \frac{\lambda^{1/\tau}\,\Gamma(\alpha - 1/\tau)\,\Gamma(1 + 1/\tau)}{\Gamma(\alpha)\,(d_2 - d_1)} \left\{d_2 - d_1 + d_1 B\left(1 + \frac{1}{\tau}, \alpha - \frac{1}{\tau}, \frac{d_2^{\tau}}{\lambda + d_2^{\tau}}\right) - d_2 B\left(1 + \frac{1}{\tau}, \alpha - \frac{1}{\tau}, \frac{d_1^{\tau}}{\lambda + d_1^{\tau}}\right)\right\}
+ \frac{d_1 d_2}{d_2 - d_1}\left\{\left(\frac{\lambda}{\lambda + d_2^{\tau}}\right)^{\alpha} - \left(\frac{\lambda}{\lambda + d_1^{\tau}}\right)^{\alpha}\right\},

where the functions Γ(·) and B(·,·,·) are defined as

\Gamma(a) = \int_0^{\infty} y^{a-1} e^{-y}\,dy \quad \text{and} \quad B(a, b, x) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)} \int_0^x y^{a-1} (1-y)^{b-1}\,dy.
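The only non-elementary ingredient in the Burr formulae is the regularized incomplete beta function B(a, b, x). The sketch below is an added illustration that evaluates it by composite Simpson integration using only the standard library (`scipy.special.betainc` would do the same job), and computes the franchise and fixed amount deductible premiums for the fitted parameters.

```python
import math

def inc_beta(a, b, x, n=4000):
    # regularized incomplete beta B(a, b, x) by composite Simpson's rule on [0, x]
    if x <= 0.0:
        return 0.0
    h = x / n
    def f(y):
        return y ** (a - 1) * (1 - y) ** (b - 1) if 0.0 < y < 1.0 else 0.0
    s = (f(0.0) + f(x)
         + 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
         + 2 * sum(f(2 * i * h) for i in range(1, n // 2)))
    integral = s * h / 3
    return math.gamma(a + b) / (math.gamma(a) * math.gamma(b)) * integral

ALPHA, LAM, TAU = 0.8804, 8.4202e6, 1.2749   # fitted Burr parameters (fire loss data)
# premium in the case of no deductible, i.e. the Burr mean (requires ALPHA * TAU > 1)
C = LAM ** (1 / TAU) * math.gamma(ALPHA - 1 / TAU) * math.gamma(1 + 1 / TAU) / math.gamma(ALPHA)

def burr_p_fd(a):
    # franchise deductible premium
    return C * (1 - inc_beta(1 + 1 / TAU, ALPHA - 1 / TAU, a ** TAU / (LAM + a ** TAU)))

def burr_p_fad(b):
    # fixed amount deductible premium
    return burr_p_fd(b) - b * (LAM / (LAM + b ** TAU)) ** ALPHA
```

For a vanishing deductible P_FAD approaches the mean C, and both premiums decrease as the deductible grows.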

In order to illustrate the preceding formulae we consider the fire loss data analysed in Chapter 13. The analysis showed that the losses can be well modelled by the Burr distribution with parameters α = 0.8804, λ = 8.4202 · 10^6, and τ = 1.2749. Figure 19.9 depicts the premium under the franchise and fixed amount deductibles for the Burr loss distribution. In Figure 19.10 the influence of the parameters c, m1, and m2 of the limited proportional deductible is illustrated. Figure 19.11 shows the effect of the parameters d1 and d2 of the disappearing deductible.

Figure 19.9: The premium under the franchise deductible (thick blue line) and fixed amount deductible (thin red line), in DKK million, as functions of the deductible (DKK million). The Burr case. STFded09.xpl

Figure 19.10: The premium under the limited proportional deductible with respect to the parameter m2 (DKK million). The thick solid blue line represents the premium for c = 0.2 and m1 = 100 000 DKK, the thin solid blue line for c = 0.4 and m1 = 100 000 DKK, the dashed red line for c = 0.2 and m1 = 1 million DKK, and the dotted red line for c = 0.4 and m1 = 1 million DKK. The Burr case. STFded10.xpl

19.3.4 Weibull Loss Distribution

Another frequently used analytic claim size distribution is the Weibull distribution, which is defined by

F(t) = 1 - \exp(-\beta t^{\tau}),

where t, τ, β > 0, see Chapter 13.

Figure 19.11: The premium under the disappearing deductible with respect to the parameter d2 (DKK million). The thick blue line represents the premium for d1 = 100 000 DKK and the thin red line the premium for d1 = 500 000 DKK. The Burr case. STFded11.xpl

For the Weibull distribution the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = \frac{\Gamma(1 + 1/\tau)}{\beta^{1/\tau}} \left\{1 - \Gamma\left(1 + \frac{1}{\tau}, \beta a^{\tau}\right)\right\},

(b) fixed amount deductible premium

P_{FAD(b)} = \frac{\Gamma(1 + 1/\tau)}{\beta^{1/\tau}} \left\{1 - \Gamma\left(1 + \frac{1}{\tau}, \beta b^{\tau}\right)\right\} - b \exp(-\beta b^{\tau}),


(c) proportional deductible premium

P_{PD(c)} = (1-c)\,\frac{\Gamma(1 + 1/\tau)}{\beta^{1/\tau}},

(d) limited proportional deductible premium

P_{LPD(c,m_1,m_2)} = \frac{\Gamma(1 + 1/\tau)}{\beta^{1/\tau}} \left\{1 - \Gamma\left(1 + \frac{1}{\tau}, \beta m_1^{\tau}\right)\right\} + \frac{c\,\Gamma(1 + 1/\tau)}{\beta^{1/\tau}}\,\Gamma\left(1 + \frac{1}{\tau}, \beta \left(\frac{m_1}{c}\right)^{\tau}\right) - \frac{c\,\Gamma(1 + 1/\tau)}{\beta^{1/\tau}}\,\Gamma\left(1 + \frac{1}{\tau}, \beta \left(\frac{m_2}{c}\right)^{\tau}\right)
- m_1 \exp(-\beta m_1^{\tau}) + m_1 \exp\left\{-\beta \left(\frac{m_1}{c}\right)^{\tau}\right\} - m_2 \exp\left\{-\beta \left(\frac{m_2}{c}\right)^{\tau}\right\},

(e) disappearing deductible premium

P_{DD(d_1,d_2)} = \frac{\Gamma(1 + 1/\tau)}{\beta^{1/\tau}(d_2 - d_1)} \left\{d_2 - d_1 + d_1 \Gamma\left(1 + \frac{1}{\tau}, \beta d_2^{\tau}\right) - d_2 \Gamma\left(1 + \frac{1}{\tau}, \beta d_1^{\tau}\right)\right\} + \frac{d_1 d_2}{d_2 - d_1} \left\{\exp(-\beta d_2^{\tau}) - \exp(-\beta d_1^{\tau})\right\},

where the incomplete gamma function Γ(·,·) is defined as

\Gamma(a, x) = \frac{1}{\Gamma(a)} \int_0^x y^{a-1} e^{-y}\,dy.
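The normalized incomplete gamma function above can likewise be evaluated with the standard library alone. The sketch below is an added illustration with arbitrary Weibull parameters (β = 2 · 10^{-4}, τ = 0.75 are not fitted values); `scipy.special.gammainc` would replace the hand-rolled integration in production code.

```python
import math

def inc_gamma(a, x, n=4000):
    # normalized lower incomplete gamma Gamma(a, x) by composite Simpson's rule
    if x <= 0.0:
        return 0.0
    h = x / n
    def f(y):
        return y ** (a - 1) * math.exp(-y) if y > 0.0 else 0.0
    s = (f(0.0) + f(x)
         + 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
         + 2 * sum(f(2 * i * h) for i in range(1, n // 2)))
    return s * h / 3 / math.gamma(a)

BETA, TAU = 2.0e-4, 0.75   # illustrative Weibull parameters, not fitted values
MEAN = math.gamma(1 + 1 / TAU) / BETA ** (1 / TAU)   # premium with no deductible

def weib_p_fd(a):
    # franchise deductible premium
    return MEAN * (1 - inc_gamma(1 + 1 / TAU, BETA * a ** TAU))

def weib_p_fad(b):
    # fixed amount deductible premium
    return weib_p_fd(b) - b * math.exp(-BETA * b ** TAU)
```

At b = 0 the fixed amount deductible premium reduces exactly to the mean Γ(1 + 1/τ)/β^{1/τ}.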

19.3.5 Gamma Loss Distribution

All four distributions presented above suffer from some mathematical drawbacks, such as the lack of a closed-form representation for the Laplace transform


and the nonexistence of the moment generating function. The gamma distribution, given by

F(t) = F(t, \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \int_0^t y^{\alpha-1} e^{-\beta y}\,dy,

for t, α, β > 0, does not have these drawbacks, see Chapter 13. For the gamma distribution the following formulae hold:

(a) franchise deductible premium

P_{FD(a)} = \frac{\alpha}{\beta} \left\{1 - F(a, \alpha+1, \beta)\right\},

(b) fixed amount deductible premium

P_{FAD(b)} = \frac{\alpha}{\beta} \left\{1 - F(b, \alpha+1, \beta)\right\} - b \left\{1 - F(b, \alpha, \beta)\right\},

(c) proportional deductible premium

P_{PD(c)} = (1-c)\,\frac{\alpha}{\beta},

(d) limited proportional deductible premium

P_{LPD(c,m_1,m_2)} = \frac{\alpha}{\beta} \left\{1 - F(m_1, \alpha+1, \beta)\right\} + \frac{c\alpha}{\beta} \left\{F\left(\frac{m_1}{c}, \alpha+1, \beta\right) - F\left(\frac{m_2}{c}, \alpha+1, \beta\right)\right\} + m_1 \left\{F(m_1, \alpha, \beta) - F\left(\frac{m_1}{c}, \alpha, \beta\right)\right\} - m_2 \left\{1 - F\left(\frac{m_2}{c}, \alpha, \beta\right)\right\},

(e) disappearing deductible premium

P_{DD(d_1,d_2)} = \frac{\alpha}{\beta(d_2 - d_1)} \left[d_2 \left\{1 - F(d_1, \alpha+1, \beta)\right\} - d_1 \left\{1 - F(d_2, \alpha+1, \beta)\right\}\right] + \frac{d_1 d_2}{d_2 - d_1} \left\{F(d_1, \alpha, \beta) - F(d_2, \alpha, \beta)\right\}.
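For the gamma case the franchise deductible formula can be cross-checked directly against its definition P_FD(a) = E{X 1(X > a)}, because the standard library can both evaluate the gamma cdf numerically and sample gamma variates. The sketch below is an added illustration with arbitrary parameters (α = 2, β = 10^{-5}).

```python
import math
import random

ALPHA, BETA = 2.0, 1.0e-5   # illustrative gamma parameters (scale = 1/BETA)

def gamma_cdf(t, a, b, n=2000):
    # F(t, a, b) via composite Simpson's rule on b^a / Gamma(a) * y^(a-1) e^(-b y)
    if t <= 0.0:
        return 0.0
    h = t / n
    def f(y):
        return y ** (a - 1) * math.exp(-b * y) if y > 0.0 else 0.0
    s = (f(0.0) + f(t)
         + 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
         + 2 * sum(f(2 * i * h) for i in range(1, n // 2)))
    return b ** a / math.gamma(a) * s * h / 3

def gamma_p_fd(a):
    # franchise deductible premium
    return ALPHA / BETA * (1 - gamma_cdf(a, ALPHA + 1, BETA))

def gamma_p_fad(b):
    # fixed amount deductible premium
    return gamma_p_fd(b) - b * (1 - gamma_cdf(b, ALPHA, BETA))

# Monte Carlo check of P_FD(a) = E{X 1(X > a)}
rng = random.Random(7)
a0 = 1.5e5
mc = sum(x for x in (rng.gammavariate(ALPHA, 1 / BETA) for _ in range(200_000)) if x > a0) / 200_000
```

With 200 000 simulated losses, the Monte Carlo estimate of E{X 1(X > a)} agrees with the analytic franchise premium to within sampling error.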

19.3.6 Mixture of Two Exponentials Loss Distribution

The mixture of two exponentials distribution function is defined by

F(t) = 1 - a \exp(-\beta_1 t) - (1-a) \exp(-\beta_2 t),

where 0 ≤ a ≤ 1 and β1, β2 > 0, see Chapter 13. For the mixture of two exponentials distribution the following formulae hold:

(a) franchise deductible premium

P_{FD(c)} = \frac{a}{\beta_1} \exp(-\beta_1 c) + \frac{1-a}{\beta_2} \exp(-\beta_2 c) + c \left\{a \exp(-\beta_1 c) + (1-a) \exp(-\beta_2 c)\right\},

(b) fixed amount deductible premium

P_{FAD(b)} = \frac{a}{\beta_1} \exp(-\beta_1 b) + \frac{1-a}{\beta_2} \exp(-\beta_2 b),

(c) proportional deductible premium

P_{PD(c)} = (1-c) \left(\frac{a}{\beta_1} + \frac{1-a}{\beta_2}\right),

(d) limited proportional deductible premium

P_{LPD(c,m_1,m_2)} = \frac{a}{\beta_1} \exp(-\beta_1 m_1) + \frac{1-a}{\beta_2} \exp(-\beta_2 m_1) + \frac{ca}{\beta_1} \left\{\exp\left(-\beta_1 \frac{m_2}{c}\right) - \exp\left(-\beta_1 \frac{m_1}{c}\right)\right\} + \frac{c(1-a)}{\beta_2} \left\{\exp\left(-\beta_2 \frac{m_2}{c}\right) - \exp\left(-\beta_2 \frac{m_1}{c}\right)\right\},

(e) disappearing deductible premium

P_{DD(d_1,d_2)} = \frac{a}{\beta_1} \left\{\frac{d_2}{d_2 - d_1} \exp(-\beta_1 d_1) - \frac{d_1}{d_2 - d_1} \exp(-\beta_1 d_2)\right\} + \frac{1-a}{\beta_2} \left\{\frac{d_2}{d_2 - d_1} \exp(-\beta_2 d_1) - \frac{d_1}{d_2 - d_1} \exp(-\beta_2 d_2)\right\}.
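The mixture-of-exponentials premiums are entirely elementary. The sketch below is an added illustration with arbitrary parameters: it checks that P_FAD(0) equals the mean a/β1 + (1 − a)/β2, that the franchise premium exceeds the fixed amount premium by b{1 − F(b)}, and that the disappearing deductible premium tends to P_FAD(d1) for large d2.

```python
import math

A, B1, B2 = 0.3, 1.0e-5, 5.0e-5   # illustrative mixture parameters, not fitted values

def mix_sf(t):
    # survival function 1 - F(t)
    return A * math.exp(-B1 * t) + (1 - A) * math.exp(-B2 * t)

def mix_p_fad(b):
    # fixed amount deductible premium
    return A / B1 * math.exp(-B1 * b) + (1 - A) / B2 * math.exp(-B2 * b)

def mix_p_fd(c):
    # franchise deductible premium: fixed amount premium plus c * P(X > c)
    return mix_p_fad(c) + c * mix_sf(c)

def mix_p_dd(d1, d2):
    # disappearing deductible premium
    w2, w1 = d2 / (d2 - d1), d1 / (d2 - d1)
    return (A / B1 * (w2 * math.exp(-B1 * d1) - w1 * math.exp(-B1 * d2))
            + (1 - A) / B2 * (w2 * math.exp(-B2 * d1) - w1 * math.exp(-B2 * d2)))
```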

19.4 Final Remarks

Let us first concentrate on the franchise and fixed amount deductibles. Figures 19.6 and 19.9 depict the comparison of the two corresponding premiums and the effect of increasing the parameters a and b. Evidently P ≥ P_FD ≥ P_FAD. Moreover, we can see that a deductible of about DKK 2 million in the log-normal case and DKK 40 million in the Burr case reduces P_FAD by half. The figures corresponding to the two loss distributions are similar; however, we note that the differences do not lie in shifting or scaling. The same is true for the rest of the considered deductibles. We also note that the premiums under no deductible for the log-normal and Burr loss distributions do not tally, because the parameters were estimated via the Anderson–Darling statistic minimization procedure, which in general does not yield the same moments, cf. Chapter 13. For the considered distributions the mean, and consequently the pure risk premium, is as much as three times larger in the Burr case.

The proportional deductible influences the premium in an obvious manner, that is pro rata (e.g. c = 0.25 results in cutting the premium by a quarter). Figures 19.7 and 19.10 show the effect of the parameters c, m1, and m2 of the limited proportional deductible. It is easy to see that P_LPD(c,m1,m2) is a decreasing function of these parameters. Figures 19.8 and 19.11 depict the influence of the parameters d1 and d2 of the disappearing deductible. Clearly, P_DD(d1,d2) is a decreasing function of the parameters and we can observe that the effect of increasing d2 is rather minor.

It is clear that the choice of a distribution and a deductible has a great impact on the pure risk premium. For an insurer the choice can be crucial for the reasonable quoting of a given risk. A potential insured should take into account the insurance options arising from appropriate types and levels of self-insurance (deductibles). Insurance premiums decrease with increasing levels of deductibles. With adequate loss protection, a property owner can take some risk and accept a large deductible, which might reduce the total cost of insurance.

We presented here a general approach to calculating pure risk premiums under deductibles. In Section 19.2 we presented a link between the pure risk premium under several deductibles and the limited expected value function. We used this link in Section 19.3 to calculate the pure risk premium in the case of the deductibles for different claim amount distributions. The results can be applied to derive annual premiums in the individual and collective risk model on a per occurrence deductible basis.


The approach can be easily extended to other distributions; one has only to calculate the levf for a particular distribution. This also includes the case of right-truncated distributions, which would reflect the maximum limit of liability set in a contract. Moreover, the idea can be extended to other deductibles. Once we express the pure risk premium in terms of the limited expected value function, it is enough to apply the form of the levf for a specific distribution. Finally, one can also use the formulae to obtain the premium with a safety loading, which is discussed in Chapter 18.

Bibliography

Aebi, M., Embrechts, P., and Mikosch, T. (1992). A large claim index, Mitteilungen SVVM: 143–156.

Burnecki, K., Kukla, G., and Weron, R. (2000). Property insurance loss distributions, Physica A 287: 269–278.

Burnecki, K., Nowicka-Zagrajek, J., and Weron, A. (2004). Pure risk premiums under deductibles. A quantitative management in actuarial practice, Research Report HSC/04/5, Hugo Steinhaus Center, Wrocław University of Technology.

Daykin, C. D., Pentikäinen, T., and Pesonen, M. (1994). Practical Risk Theory for Actuaries, Chapman & Hall, London.

Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance, Springer, Berlin.

Klugman, S. A., Panjer, H. H., and Willmot, G. E. (1998). Loss Models: From Data to Decisions, Wiley, New York.

Mikosch, T. (1997). Heavy-tailed modelling in insurance, Commun. Statist. – Stochastic Models 13: 799–815.

Panjer, H. H. and Willmot, G. E. (1992). Insurance Risk Models, Society of Actuaries, Schaumburg.

Sundt, B. (1994). An Introduction to Non-Life Insurance Mathematics (3rd ed.), Verlag Versicherungswirtschaft e.V., Karlsruhe.

20 Premiums, Investments, and Reinsurance

Paweł Miśta and Wojciech Otto

20.1 Introduction

In this chapter, setting the appropriate level of the insurance premium is considered in a broader context of business decisions, concerning also risk transfer through reinsurance and the rate of return on capital required to ensure solvency. Furthermore, the long-term dividend policy, i.e. the rule of subdividing the financial result between the company and the shareholders, is analyzed. The problem considered throughout this chapter can be illustrated by a simple example.

Example 1. Let us consider the following model of a risk process describing the capital of an insurer:

R_t = u + (c - du)t - S_t, \quad t \geq 0,

where R_t denotes the current capital at time t, u = R_0 stands for the initial capital, c is the intensity of premium inflow, and S_t is the aggregate loss process – the amount of claim outlays over the period (0, t]. The term du represents the intensity of the outflow of dividends paid to shareholders, with d being the dividend rate. Let us assume that the increments S_{t+h} - S_t of the amount of claims process are, for any t, h > 0, normally distributed N(µh, σ²h) and mutually independent. Below we consider premium calculation in two cases.

First case: d = 0. In this case the probability of ruin is an exponential function of the initial capital:

\psi(u) = \exp(-Ru), \quad u \geq 0,


where the adjustment coefficient R exists for c > µ and then equals 2(c − µ)σ^{−2}. The above formula can be easily inverted to render the intensity of premium c for a given capital u and a predetermined level ψ of the ruin probability:

c = \mu + \frac{-\log(\psi)}{2u}\,\sigma^2.

Given the safety standard ψ, the larger the initial capital u of the company is, the more competitive it is (since it can offer the insurance cover at a lower price c). However, a more realistic result is obtained when we assume a positive cost of capital.

Second case: d > 0. Now the problem of competitiveness reduces to the problem of minimizing the premium by choosing the optimal level of capital backing the insurance risk:

c = \mu + \frac{-\log(\psi)}{2u}\,\sigma^2 + du.

The solution reads:

u_{opt} = \sigma \sqrt{\frac{-\log(\psi)}{2d}}, \qquad c_{opt} = \mu + \sigma\sqrt{-2d \log(\psi)},

where exactly one half of the loading (c_{opt} − µ) serves to finance dividends and the other half serves as a safety loading (retained in the company).

Having already calculated the total premium, we face the problem of decomposing it into premiums for individual risks. In order to do that we should first identify the random variable W = S_{t+1} − S_t as a sum of independent risks X_1, ..., X_n, and the intensity of premium c as a whole-portfolio premium Π(W), which has to be decomposed into individual premiums Π(X_i). The decomposition is straightforward when the total premium is calculated as in the first case above:

\Pi(X_i) = E(X_i) + \frac{-\log(\psi)}{2u}\,\sigma^2(X_i),

which is due to the additivity of the variance for independent risks. The premium formula in the second case contains a safety loading proportional to the standard deviation and thus is no longer additive. This does not mean that reasonable decomposition rules do not exist – rather that their derivation is not so straightforward.
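The optimization in the second case can be verified numerically. The sketch below is an added illustration with arbitrary parameter values: it evaluates the premium intensity c(u), compares it with the closed-form optimum, and checks the half-loading property d·u_opt = (c_opt − µ)/2.

```python
import math

MU, SIGMA, PSI, D = 100.0, 30.0, 0.01, 0.10   # illustrative parameter values

def premium(u):
    # premium intensity c(u) for initial capital u, ruin level PSI, dividend rate D
    return MU - math.log(PSI) / (2 * u) * SIGMA ** 2 + D * u

u_opt = SIGMA * math.sqrt(-math.log(PSI) / (2 * D))   # optimal capital
c_opt = MU + SIGMA * math.sqrt(-2 * D * math.log(PSI))   # minimal premium intensity
```

Perturbing u in either direction away from u_opt raises the premium, confirming that the stated solution is indeed the minimizer.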


In this chapter, various generalizations of the basic problem presented in Example 1 are considered. These generalizations make the basic problem more complex on the one hand, but closer to real-life situations on the other. Additionally, these generalizations do not yield analytical results and, therefore, we demonstrate in several examples how to obtain numerical solutions. First of all, Example 1 assumes that the safety standard is expressed in terms of an acceptable level of ruin probability. On the contrary, Sections 2, 3, and 4 are devoted to the approach based on the distribution of the single-year loss function. Section 2 presents the basic problem of joint decisions on the premium and the capital needed to ensure safety in terms of the shareholders' choice of the level of expected rate of return and risk. Section 3 presents in more detail the problem of decomposition of the whole-portfolio premium into individual risks premiums. Section 4 presents the problem extended by allowing for reinsurance, where competitiveness is a result of a simultaneous choice of the amount of capital and the retention level. This problem has not been illustrated in Example 1, as in the case of the normal distribution of the aggregate loss and usual market conditions there is no room to improve competitiveness through reinsurance. Sections 5, 6, and 7 are devoted again to the approach based on ruin probability. However, Section 5 departs from the simplistic assumptions of Example 1 concerning the risk process. It is shown there how to invert various approximate formulas for the ruin probability in order to calculate the premium for the whole portfolio as well as to decompose it into individual risks. Section 6 exploits the results of Section 5 in the context of a positive cost of capital. In that section a kind of flexible dividend policy is also considered, and the possibility of improving competitiveness in this way is studied. Finally, Section 7 presents an extension of the decision problem by allowing for reinsurance cession.

Throughout this chapter we assume that we typically have at our disposal incomplete information on the distribution of the aggregate loss, and this incomplete information set consists of cumulants of order 1, 2, 3, and possibly 4. The rationale is that a sensible empirical investigation of frequency and severity distributions can be done only separately for sub-portfolios of homogeneous risks. Cumulants for the whole portfolio are then obtained just by summing up figures over the collection of sub-portfolios, provided that the sub-portfolios are mutually independent. The existence of cumulants of higher orders is assured by the common practice of issuing policies with limited cover exclusively (which in many countries is even enforced by law). A consequence of this assumption is that both the quantile of the current year loss and the probability of ruin in the long run will be approximated by formulas based on the cumulants of the one-year aggregate loss W.


The chapter is based on Otto (2004), a book on non-life insurance mathematics. However, general ideas are heavily borrowed from the seminal paper of Bühlmann (1985).

20.2 Single-period Criterion and the Rate of Return on Capital

In this section the problem of joint decisions on the premium and the required capital is considered in terms of the shareholders' choice of the level of expected rate of return and risk. It is assumed that typically the single-year loss (when it happens) is covered by the insurance company through a reduction of its own assets. This assumption can be justified by the fact that in most developed countries state supervision agencies efficiently prevent companies from undertaking too risky insurance business without sufficiently large own assets. As shareholders are unable to externalize the loss, they are forced to balance the required expected rate of return with the possible size of the loss. The risk based capital (RBC) concept formalizes the assumption that the premium loading results from the required expected rate of return on the capital invested by shareholders and the admitted level of risk.

20.2.1 Risk Based Capital Concept

Let us denote by RBC the amount of capital backing the risk borne by the insurance portfolio. It is assumed that the capital has the form of assets invested in securities. Shareholders will accept the risk borne by the insurance portfolio provided it yields an expected rate of return larger than the rate of return on riskless investments offered by the financial market. Let us denote by r the required expected rate of return, and by r_f the riskless rate. The following equality holds:

\Pi(W) - E(W) = (r - r_f) \cdot RBC.   (20.1)

For simplicity it is assumed that all assets are invested in riskless securities. This means that we neglect the shareholders' capital locked up in the fixed assets necessary to run the insurance operations of the company, and we also assume a prudent investment policy, at least with respect to those assets which are devoted to backing the insurance risk. It is also assumed that all amounts are expressed in terms of their value at the end of the year (accumulated when spent or received earlier, discounted when spent or received after the year end).


Let us also assume that the company management is convinced that the rate of return r is large enough to admit the risk of a technical loss in the amount of, say, ηRBC, η ∈ (0, 1), with a presumed small probability ε. The total loss of capital then amounts to (η − r_f)RBC. The assumption can be expressed in the following form:

F_W^{-1}(1 - \varepsilon) = \Pi(W) + \eta\,RBC,   (20.2)

where F_W denotes the cdf of the random variable W. Combining equations (20.1) and (20.2), one obtains the desired amount of capital backing the risk of the insurance portfolio:

RBC = \frac{F_W^{-1}(1 - \varepsilon) - E(W)}{r - r_f + \eta},   (20.3)

and the corresponding premium:

\Pi_{RBC}(W) = E(W) + \frac{r - r_f}{r - r_f + \eta} \left\{F_W^{-1}(1 - \varepsilon) - E(W)\right\}.   (20.4)

In both formulas only the difference r − r_f is relevant; we denote it by r^*. The obtained premium formula is just a simple generalization of the well-known quantile formula based on the one-year loss criterion. This standard formula is obtained by replacing the coefficient r^*(r^* + η)^{-1} by one. Now it is clear that the standard formula can be interpreted as a result of the assumption η = 0, so that shareholders are not ready to suffer a technical loss at all (at least with probability higher than ε).
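Formulas (20.3) and (20.4) are easy to evaluate once the quantile of W is available. The sketch below is an added illustration for a normally distributed aggregate loss, using `statistics.NormalDist` for the quantile; all numeric inputs are arbitrary.

```python
from statistics import NormalDist

# Illustrative figures: W ~ N(mu_w, sigma_w^2) aggregate loss
mu_w, sigma_w = 100.0, 15.0
r_star, eta, eps = 0.10, 0.25, 0.01   # r* = r - rf, admitted loss fraction, tail prob

w_quantile = NormalDist(mu_w, sigma_w).inv_cdf(1 - eps)   # F_W^{-1}(1 - eps)
rbc = (w_quantile - mu_w) / (r_star + eta)                 # formula (20.3)
premium_rbc = mu_w + r_star / (r_star + eta) * (w_quantile - mu_w)   # formula (20.4)
```

By construction the loading Π_RBC(W) − E(W) equals r* · RBC, i.e. formula (20.1) is satisfied.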

20.2.2 How to Choose Parameter Values?

The parameters r^*, η, and ε of the formula are subject to managerial decision. However, an actuary can help reduce the number of redundant decision parameters. This is because the parameters reflect not only subjective factors (the shareholders' attitude to risk), but also objective factors (the rate of substitution between expected return and risk offered by the capital market). The latter can be deduced from capital market quotations. In terms of the Capital Asset Pricing Model (CAPM), the relationship between the expectation E(∆R) and the standard deviation σ(∆R) of the excess ∆R of the rate of return over the riskless rate is reflected by the so-called capital market line (CML). The slope coefficient E(∆R)σ^{-1}(∆R) of the CML represents just a risk premium (in


terms of an increase in expectation) per unit increase of standard deviation. Let us denote the reciprocal of the slope coefficient by

s \stackrel{def}{=} \{E(\Delta R)\}^{-1} \sigma(\Delta R).

We will now consider the shareholder's choice between two alternatives: investment of the amount RBC in a well diversified portfolio of equities and bonds versus investment in the insurance company's capital. In the second case the total loss W − Π(W) − r_f RBC exceeds the amount (η − r_f)RBC with probability ε. The equally probable loss in the first case equals:

\{u_{\varepsilon}\,\sigma(\Delta R) - E(\Delta R) - r_f\}\,RBC,

where u_ε denotes the quantile of order (1 − ε) of the standard normal variable. This is justified by the fact that the CAPM is based on the assumption of normality of the fluctuations of rates of return. The shareholder is indifferent when the following equation holds:

\eta - r_f = u_{\varepsilon}\,\sigma(\Delta R) - E(\Delta R) - r_f,

provided that the expected rates of return in both cases are the same: r = r_f + E(∆R). Making use of our knowledge of the substitution rate s and putting the above results together we obtain:

\eta = r^*(u_{\varepsilon} s - 1).

In the real world the required rate of return could depart (ceteris paribus) from the above equation. On the one hand, the required expected rate of return could be larger, because direct investments in strategic portions of the insurance company capital are not as liquid as investments in securities traded on the stock exchange. On the other hand, there is empirical evidence that fluctuations in profits in the insurance industry are uncorrelated with the business cycle. This means that having a portion of insurance company shares in the portfolio improves the diversification of the risk to which a portfolio investor is exposed. Hence, there are reasons to require a smaller risk premium.

The reasonable range of the parameter ε is from 1% to 5%. The rate of return depends on the shareholders' attitude to risk and market conditions, but it is customary to assume that the range of the risk premium r^* is from 5% to 15%. A reference point for setting the parameter η can also be deduced from regulatory requirements, as the situation when the capital falls below the solvency margin requires undertaking troublesome actions enforced by the supervision authority that could be harmful for company managers. A good summary of the CAPM and related models is given in Panjer et al. (1998), Chapters 4 and 8.

20.3 The Top-down Approach to Individual Risks Pricing

As has been pointed out in the introduction, some premium calculation formulas are additive for independent risks, and then the decomposition of the whole-portfolio premium into individual risks premiums is straightforward. However, sometimes a non-additive formula for pricing the whole portfolio is well justified, and then the decomposition is no longer trivial. This is exactly the case of the RBC formula (and also other quantile-based formulas) derived in the previous section. This section is devoted to showing the range, interpretation, and applications of some solutions to this problem.

20.3.1 Approximations of Quantiles

In the case of the RBC formula, decomposition means answering the question what the share of a particular risk is in the demand for capital backing the portfolio risk, which in turn entails the premium. In order to solve the problem one can make use of approximations of the quantile by normal power expansions. The most general version of the normal power formula used in practice for the quantile w_ε of order (1 − ε) of the variable W reads:

w_{\varepsilon} \approx \mu_W + \sigma_W \left\{u_{\varepsilon} + \frac{u_{\varepsilon}^2 - 1}{6}\,\gamma_W + \frac{u_{\varepsilon}^3 - 3u_{\varepsilon}}{24}\,\gamma_{2,W} - \frac{2u_{\varepsilon}^3 - 5u_{\varepsilon}}{36}\,\gamma_W^2\right\},

where µ_W, σ_W, γ_W, γ_{2,W} denote the expectation, standard deviation, skewness, and kurtosis of the variable W, and u_ε is the quantile of order (1 − ε) of an N(0, 1) variable. Now the premium can be expressed by:

\Pi_{RBC}(W) = \mu_W + \sigma_W \left(a_0 + a_1 \gamma_W + a_2 \gamma_{2,W} - a_3 \gamma_W^2\right),   (20.5)

where the coefficients a_0, a_1, a_2, a_3 are simple functions of the parameters ε, η, r^*, and the quantile u_ε of the standard normal variable. The above formula was proposed by Fisher and Cornish, see Hill and Davis (1968), so it will be referred to as FC20.5. The formula reduced by neglecting the last two components (by taking a_2 = a_3 = 0) will be referred to as FC20.6:

\Pi_{RBC}(W) = \mu_W + \sigma_W (a_0 + a_1 \gamma_W),   (20.6)

and the formula neglecting also the skewness component as the normal approximation:

\Pi_{RBC}(W) = \mu_W + a_0 \sigma_W.   (20.7)

More details on normal power approximation can be found in Kendall and Stuart (1977).

20.3.2 Marginal Cost Basis for Individual Risk Pricing

The premium for the individual risk X can be set on the basis of marginal cost. This means that we look for such a price at which the insurer is indifferent whether to accept the risk or not. Calculation of the marginal cost can be based on standard differential calculus. In order to do that, we should first write the formula explicitly as a function of the cumulants of the first four orders:

\Pi(\mu, \sigma^2, \mu_3, c_4) \stackrel{def}{=} \mu + a_0 \sigma + a_1 \frac{\mu_3}{\sigma^2} + a_2 \frac{c_4}{\sigma^3} - a_3 \frac{\mu_3^2}{\sigma^5}.

This allows expressing the increment ∆Π(W) = Π(W + X) − Π(W), due to extending the basic portfolio W by the additional risk X, in terms of a linear approximation:

\Delta\Pi(W) \approx \frac{\partial \Pi}{\partial \mu}(W)\,\Delta\mu_W + \frac{\partial \Pi}{\partial \sigma^2}(W)\,\Delta\sigma_W^2 + \frac{\partial \Pi}{\partial \mu_3}(W)\,\Delta\mu_{3,W} + \frac{\partial \Pi}{\partial c_4}(W)\,\Delta c_{4,W},

where ∂Π/∂µ(W), ∂Π/∂σ²(W), ∂Π/∂µ₃(W), ∂Π/∂c₄(W) denote the partial derivatives of the function Π(µ, σ², µ₃, c₄) calculated at the point (µ_W, σ_W², µ_{3,W}, c_{4,W}). By virtue of the additivity of cumulants for independent random variables we replace the increments ∆µ_W, ∆σ_W², ∆µ_{3,W}, ∆c_{4,W} by the cumulants of the additional risk: µ_X, σ_X², µ_{3,X}, c_{4,X}. As a result the following formula is obtained:

\Pi_M(X) = \frac{\partial \Pi}{\partial \mu}(W)\,\mu_X + \frac{\partial \Pi}{\partial \sigma^2}(W)\,\sigma_X^2 + \frac{\partial \Pi}{\partial \mu_3}(W)\,\mu_{3,X} + \frac{\partial \Pi}{\partial c_4}(W)\,c_{4,X}.

The respective calculations lead to the marginal premium formula:

\Pi_M(X) = \mu_X + a_0 \frac{\sigma_X^2}{2\sigma_W} + \sigma_W a_1 \gamma_W \left(\frac{\mu_{3,X}}{\mu_{3,W}} - \frac{\sigma_X^2}{\sigma_W^2}\right) + \sigma_W \left\{a_2 \gamma_{2,W} \left(\frac{c_{4,X}}{c_{4,W}} - \frac{3\sigma_X^2}{2\sigma_W^2}\right) - a_3 \gamma_W^2 \left(\frac{2\mu_{3,X}}{\mu_{3,W}} - \frac{5\sigma_X^2}{2\sigma_W^2}\right)\right\}.
The first two components coincide with the result obtained when the whole premium is based on the normal approximation. Allowing additionally a1 ≠ 0, we obtain the premium for the case when the skewness of the portfolio is non-negligible (making use of the FC20.6 approximation); including the last two components means that we also regard the portfolio kurtosis (approximation based on formula FC20.5).
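The marginal formula is easy to check numerically. Below is a minimal sketch in Python; the coefficients a0, ..., a3 and all cumulant values are hypothetical, chosen only for illustration:

```python
import math

# Marginal premium Pi_M(X) for a risk X priced top-down from the portfolio W.
# Each tuple packs the first four cumulants (mu, sigma^2, mu3, c4).
def marginal_premium(risk, portfolio, a0, a1, a2, a3):
    muX, s2X, m3X, c4X = risk
    muW, s2W, m3W, c4W = portfolio
    sW = math.sqrt(s2W)
    gW, g2W = m3W / sW**3, c4W / sW**4   # standardized 3rd and 4th cumulants of W
    return (muX + a0 * s2X / (2 * sW)
            + sW * a1 * gW * (m3X / m3W - s2X / s2W)
            + sW * (a2 * g2W * (c4X / c4W - 1.5 * s2X / s2W)
                    - a3 * gW**2 * (2 * m3X / m3W - 2.5 * s2X / s2W)))
```

Because cumulants of independent risks add, charging every risk its marginal premium collects, for n identical independent risks, only µW + σW(a0/2 − a2 γ2,W/2 + a3 γ²W/2) in total.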

20.3.3 Balancing Problem

For each component the problem of balancing the premium on the whole-portfolio level arises. Should all risks composing the portfolio W = X1 + X2 + ... + Xn be charged their marginal premiums, the portfolio premium amounts to:

    Σ_{i=1}^{n} ΠM(Xi) = µW + σW { (1/2) a0 − (1/2) a2 γ2,W + (1/2) a3 γ²W },

which is evidently underestimated by:

    Π(W) − Σ_{i=1}^{n} ΠM(Xi) = σW { (1/2) a0 + a1 γW + (3/2) a2 γ2,W − (3/2) a3 γ²W }.

The last figure represents the diversification effect obtained by composing the portfolio of a large number of individual risks, which could also be treated as an example of "positive returns to scale". A balancing correction made so as to preserve the sensitivity of the premium to the cumulants of order 1, 3, and 4 leads to the formula for the basic premium:

    ΠB(X) = µX + σW a0 σ²X/σ²W
            + σW { a1 γW µ3,X/µ3,W + a2 γ2,W c4,X/c4,W − a3 γ²W ( 2 µ3,X/µ3,W − σ²X/σ²W ) }.

Obviously, several alternative correction rules exist. For example, in the case of the kurtosis component any expression of the form:

    a2 σW γ2,W { c4,X/c4,W + δ ( c4,X/c4,W − σ²X/σ²W ) }

satisfies the requirement of balancing the whole-portfolio premium for an arbitrary number δ. In fact, any particular choice is more or less arbitrary. Some common sense can be expressed by the requirement that a basic premium formula should not produce smaller figures than the marginal formula for any risk in the portfolio. Of course this requirement is insufficient to point out a unique solution. Here, the balancing problem results from the lack of additivity of the RBC formula, as it is a nonlinear function of cumulants.
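Under the reconstruction above, the balancing property of the basic premium can be verified directly: summing ΠB over the risks of the portfolio returns exactly the whole-portfolio RBC premium. A sketch with hypothetical coefficients and cumulants:

```python
import math

# Basic premium Pi_B(X) and whole-portfolio premium Pi(W); by construction
# the basic premiums of independent risks add up exactly to Pi(W).
def basic_premium(risk, portfolio, a0, a1, a2, a3):
    muX, s2X, m3X, c4X = risk
    muW, s2W, m3W, c4W = portfolio
    sW = math.sqrt(s2W)
    gW, g2W = m3W / sW**3, c4W / sW**4
    return (muX + sW * a0 * s2X / s2W
            + sW * (a1 * gW * m3X / m3W + a2 * g2W * c4X / c4W
                    - a3 * gW**2 * (2 * m3X / m3W - s2X / s2W)))

def portfolio_premium(portfolio, a0, a1, a2, a3):
    mu, s2, m3, c4 = portfolio
    s = math.sqrt(s2)
    return mu + a0 * s + a1 * m3 / s2 + a2 * c4 / s**3 - a3 * m3**2 / s**5
```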

20.3.4 A Solution for the Balancing Problem

It seems that only in the case of the variance component a0 σ²X/(2σW) can some more or less heuristic argument for the correction be found. The essence of the basic premium for individual risks is that it is the basis of an open market offer. Once the cover is offered to the public, clients decide whether to buy the cover or not. Thus the price should not depend on how many risks out of the portfolio W have been insured before, and how many after, the risk in question. Let us imagine a particular ordering of the basic set of n risks amended by the additional risk X, in the form of a sequence {X1, ..., Xj, X, Xj+1, ..., Xn}. Given this ordering, the respective component of the marginal cost of risk X takes the form:

    a0 { √( Σ_{k=1}^{j} σ²(Xk) + σ²X ) − √( Σ_{k=1}^{j} σ²(Xk) ) }.

We can now consider the expected value of this component, provided that each of the (n + 1)! orderings is equally probable (as proposed by Shapley (1953)). However, calculations are much simpler if we assume that the share U of the aggregated variance of all risks preceding the risk X in the total aggregate variance σ²W is a random variable uniformly distributed over the interval (0, 1). The error of this simplification is negligible, as the share of each individual risk in the total variance is small. The result:

    a0 E{ √(U σ²W + σ²X) − √(U σ²W) } = a0 ∫₀¹ { √(u σ²W + σ²X) − √(u σ²W) } du
        = (2/3) a0 σW { (1 + σ²X/σ²W)^(3/2) − (σX/σW)³ − 1 } ≈ a0 σ²X/σW

is exactly what we need to balance the premium on the portfolio level. The reader can easily verify that the analogous argumentation does not work any more in the case of the higher-order components of the premium formula.
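The uniform-share argument can be confirmed numerically: a midpoint-rule evaluation of the integral matches the closed form, and for a small individual risk both are close to a0 σ²X/σW. The values of σW and σX below are arbitrary illustrations (a0 cancels and is omitted):

```python
import math

# E{sqrt(U*sW^2 + sX^2) - sqrt(U*sW^2)} for U ~ Uniform(0,1), midpoint rule.
def expected_increment(sW, sX, n=200000):
    h = 1.0 / n
    return h * sum(math.sqrt((i + 0.5) * h * sW**2 + sX**2)
                   - math.sqrt((i + 0.5) * h * sW**2) for i in range(n))

sW, sX = 100.0, 5.0
r = sX / sW
# exact value of the integral: (2/3)*sW*{(1 + r^2)^(3/2) - r^3 - 1}
closed = (2.0 / 3.0) * sW * ((1.0 + r * r) ** 1.5 - r**3 - 1.0)
```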

20.3.5 Applications

Results presented in this section have three possible fields of application. The first is just passive premium calculation for the whole portfolio. In this respect several more accurate formulas exist, especially when our information on the distribution of the variable W extends beyond its first four cumulants.


The second application concerns pricing individual risks. In this respect it is hard to find a better approach (apart from those based on long-run solvability criteria, which are a matter of consideration in the next sections) that consistently links the risk relevant to the company (on the whole-portfolio level) with the risk borne by an individual policy. Of course the open market offer should be based on the basic valuation ΠB(·), whereas the marginal cost valuation ΠM(·) could serve as a lower bound for contracts negotiated individually. The third field of application opens when a portfolio, characterized by substantial skewness and kurtosis, is inspected in order to localize those risks (or groups of risks) that distort the distribution of the whole portfolio. A too high (noncompetitive) general premium level could be caused by just a few influential risks. Such localization could help in decisions concerning underwriting limits and the reinsurance program. Applying these measures could help "normalize" the distribution of the variable W. Thus in the preliminary stage, when the basis for underwriting policy and reinsurance is considered, the extended pricing formulas (involving higher-order cumulants) should be used. Paradoxically, once a prudent underwriting and ceding policy has been elaborated, the simple normal approximation suffices to price both the portfolio and individual risks. Clearly, such prices concern only the retained portions of risk, and should be complemented by reinsurance costs.

20.4 Rate of Return and Reinsurance Under the Short Term Criterion

This section is devoted to extending the decision problem considered in previous sections by allowing for reinsurance. The pricing then takes the form:

    Π(W) = ΠI(WI) + ΠR(WR),

where the whole aggregate loss W is subdivided into the share WI of the insurer and the share WR of the reinsurer. ΠI(·) denotes the premium formula applied by the insurer to price his share, set in accordance with the RBC concept. ΠR(·) symbolizes the pricing formula used by the reinsurer. Provided the formula ΠR(·) is accurate enough to reflect the existing offer of the reinsurance market, we could compare various variants of the subdivision of the variable W into components WI and WR, looking for the subdivision which optimizes some objective function.

20.4.1 General Considerations

No matter which particular objective function is chosen, the space of possible subdivisions of the variable W has to be reduced somehow. One of the most important cases is when the variable W has a compound Poisson distribution, and excess of loss reinsurance is chosen. Denoting by N the number of claims, we could define for each claim amount Yi, i = 1, 2, ..., N its subdivision into the truncated loss Y_{M,i} = min{Yi, M} and the excess of loss Y^{M,i} = max{Yi − M, 0}, and then define the variables representing the subdivision of the whole portfolio:

    WI = Y_{M,1} + ... + Y_{M,N},    WR = Y^{M,1} + ... + Y^{M,N},

both having compound Poisson distributions too, with characteristics being functions of the subdivision parameter M. Assuming that the capital of the insurer is not flexible, and that the current amount u of capital is smaller than the amount RBC(W) necessary to accept the whole portfolio alone, we could simply find the value of M for which RBC(WI) = u. In the case when the current amount of capital is in excess, it is still relevant to assess the portion of the capital which should serve as a protection for insurance operations. The excess of capital over this amount can be treated separately, as being free of prudence requirements when investment decisions are undertaken. It is more interesting to assume that the amount of capital is flexible, and to choose the retention limit M so as to minimize the total premium Π(W) given the parameters r∗, s, and ε. This objective function reflects the aim of maximizing the competitiveness of the company. If the resulting premium (after being charged with the respective cost loadings) is lower than that acceptable by the market, we can revise the assumptions. The revised problem could consist in maximizing the expected rate of return given the premium level and the parameters η and ε. This would mean getting a higher risk premium than that offered by the capital market.
Reasonable solutions could be expected in the case when the reinsurance premium formula ΠR(·) contains loadings proportional primarily to the expected value, and its sensitivity to the variance (and even more so to skewness and kurtosis) is small. This could be expected as a result of transaction costs on the one hand, and the larger capital assets of reinsurers on the other. Also the possibility to diversify risk on a world-wide scale works in the same direction, increasing transaction costs and at the same time reducing the reinsurer's exposure to risk.

20.4.2 Illustrative Example

Example 2 Aggregate loss W has a compound Poisson distribution with truncated-Pareto severity distribution, whose cdf is given for y ≥ 0 by the formula:

    FY(y) = 1 − (1 + y/λ)^(−α)    when y < M0,
    FY(y) = 1                     when y ≥ M0.

The variable W is subdivided into the retained part W_M and the ceded part W^M, which, given the subdivision parameter M ∈ (0, M0], have the form:

    W_M = Y_{M,1} + ... + Y_{M,N},    W^M = Y^{M,1} + ... + Y^{M,N}.

We assume that the reinsurance pricing rule can be reflected by the formula:

    ΠR(W^M) = (1 + re0) E(W^M) + re1 Var(W^M),

and that the insurer's own pricing formula is:

    ΠI(W_M) = E(W_M) + r∗/(r∗ + η) { F_{W_M}^(−1)(1 − ε) − E(W_M) },

with a respective approximation of the quantile of the variable W_M. For expository purposes we take the following values of the parameters:

(i) Parameters of the Pareto distribution (α, λ) = (5/2, 3/2), with truncation point M0 = 500;

(ii) Expected value of the number of claims E(N) = λP = 1000;

(iii) Substitution rate s = 2;

(iv) Remaining parameters (in the basic variant of the problem) ε = 2%, r∗ = 10%, re0 = 100%, re1 = 0.5%.

The problem consists in choosing the retention limit M ∈ (0, M0] that minimizes the total premium Π(W) = ΠI(W_M) + ΠR(W^M).

Solution. The first step is to express the moments of the first four orders of the variables Y_M and Y^M as functions of the parameters (α, λ, M0) and the real variable M. The expected value of the truncated-Pareto variable with parameters (α, λ, M) equals by definition:

    ∫₀^M y αλ^α (λ + y)^(−α−1) dy + M {1 − F(M)}
        = αλ ∫₁^(1+M/λ) (x − 1) x^(−α−1) dx + M (1 + M/λ)^(−α),

which, after integration and reordering of components, produces the following formula:

    m1 = λ/(α − 1) { 1 − (1 + M/λ)^(1−α) }.

Similar calculations made for moments of higher order yield the recursive equation:

    m_{k,α} = λ/(α − 1) { α m_{k−1,α−1} − (α − 1) m_{k−1,α} − M^(k−1) (1 + M/λ)^(1−α) },    k = 2, 3, ...,

where the symbol m_{K,A} means, for A > 0, just the moment of order K of the truncated-Pareto variable with parameters (A, λ, M). No matter whether A is positive or not, in order to start the recursion we take:

    m_{1,A} = λ/(A − 1) { 1 − (1 + M/λ)^(1−A) }    when A ≠ 1,
    m_{1,A} = λ ln(1 + M/λ)                        when A = 1.

The above formulas could serve to calculate the raw moments of the variable Y_M as well as of the variable Y, provided we replace M by M0. Having calculated the moments for both variables Y_M and Y already, we make use of the relation:

    E(Y^k) = Σ_{j=0}^{k} (k choose j) E{ Y_M^(k−j) (Y^M)^j },    (20.8)

to calculate the moments of the variable Y^M. In the above formula we read Y_M^0 and (Y^M)^0 as equal to one with probability one. The mixed moments appearing on the RHS of formula (20.8) can be calculated easily, as positive values of the variable Y^M happen only when Y_M = M. So the mixed moments equal simply:

    E{ Y_M^n (Y^M)^m } = M^n E{ (Y^M)^m }

for arbitrary m, n > 0. The second step is to express the cumulants of both variables W_M and W^M as products of the parameter λP and the respective raw moments of the variables Y_M and Y^M. Finally, both components ΠI(W_M) and ΠR(W^M) of the total premium are expressed as functions of the parameters (λP, α, λ, M0, ε, r∗, s, re0, re1) and the decision parameter M ∈ (0, M0]. Now the search for the value of M that minimizes the total premium Π(W) is a quite feasible numerical task. The optimal retention level and the related minimal premium entail the optimal amount of capital uopt = (r∗)^(−1) {Π(WI) − E(WI)}.
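The moment machinery of the solution can be transcribed directly; with the parameters of Example 2 it reproduces the characteristics of W quoted in Table 20.1 (E(W) = 999.8, σ(W) = 74.2, γ(W) = 0.779). A sketch:

```python
import math

# Raw moments m_{k,A} of the truncated-Pareto variable with parameters
# (A, lambda, M), computed by the recursion derived above.
def m(k, A, lam, M):
    X = (1.0 + M / lam) ** (1.0 - A)
    if k == 1:
        if abs(A - 1.0) < 1e-12:
            return lam * math.log(1.0 + M / lam)
        return lam / (A - 1.0) * (1.0 - X)
    return lam / (A - 1.0) * (A * m(k - 1, A - 1.0, lam, M)
                              - (A - 1.0) * m(k - 1, A, lam, M)
                              - M ** (k - 1) * X)

# Example 2 parameters: alpha = 5/2, lambda = 3/2, M0 = 500, E(N) = lamP = 1000.
alpha, lam, M0, lamP = 2.5, 1.5, 500.0, 1000.0
# For a compound Poisson W the cumulants are kappa_k = lamP * E(Y^k):
mean = lamP * m(1, alpha, lam, M0)
sd = math.sqrt(lamP * m(2, alpha, lam, M0))
skew = lamP * m(3, alpha, lam, M0) / sd**3
```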

20.4.3 Interpretation of Numerical Calculations in Example 2

The problem described in Example 2 has been solved under several different variants of assumptions on the parameters. Variants 1–5 consist in minimization of the total premium; in variant 1 the parameters are (s, ε, r∗, re0, re1) = (2, 2%, 10%, 100%, 0.5%). In variants 2, 3, 4, and 5 the value of one of the parameters (ε, r∗, re0, re1) is modified, and in variant 6 there is no reinsurance, with (s, ε, r∗) as in variant 1. Variant 7 consists in maximization of r∗, where (ε, η, re0, re1) are as in variant 1 and the premium loading equals 4.47%. Results are presented in Table 20.1. Reinsurance reduces the required level of RBC, which coincides either with a premium reduction (compare variants 1 and 6) or with an increase of the expected rate of return (compare variants 7 and 6). Reinsurance also reduces the difference between the results obtained on the basis of the two different approximation methods (FC20.6 and FC20.5). In variant 6 (no reinsurance) the difference is quite large, which is caused by the fairly long right tail of the distribution of the variable Y. Comparison of variants 2 and 1 confirms that the choice of a smaller expected rate of return (given the substitution rate) automatically raises the need for capital, leaving the premium level unchanged (and therefore also the optimal retention level).

Table 20.1: Optimal choice of the retention limit M. Basic characteristics of the variable W: E(W) = 999.8, σ(W) = 74.2, γ(W) = 0.779, γ2(W) = 2.654.

    Optimization variant        Quantile approx.    Retention    RBC      Loading
                                method for W_M      limit M               Π(W)/E(W) − 1
    V.1: (basic)                FC20.6              114.5        386.6    4.11%
                                FC20.5              106.5        385.2    4.13%
    V.2: r∗ = 8%                FC20.6              114.5        483.3    4.11%
                                FC20.5              106.5        481.5    4.13%
    V.3: ε = 4%                 FC20.6              129.7        382.3    4.03%
                                FC20.5              134.7        382.1    4.01%
    V.4: re0 = 50%              FC20.6              79.8         373.3    4.03%
                                FC20.5              76.3         372.1    4.03%
    V.5: re1 = 0.25%            FC20.6              95.5         380.0    4.05%
                                FC20.5              90.9         379.0    4.06%
    V.6: (no reinsurance)       FC20.6              500.0        446.6    4.47%
                                FC20.5              500.0        475.1    4.75%
    V.7: r∗ = 11.27%            FC20.6              106.0        372.3    4.47%
         r∗ = 11.22%            FC20.5              99.6         371.5    4.47%

    STFrein01.xpl

Comparison of variants 3 and 1 shows that admitting a greater loss probability ε causes a reduction of the premium, which coincides with a substantial reduction of the need for reinsurance cover and a slight reduction in the need for capital. It is worthwhile to notice that replacing ε = 2% by ε = 4% entails reversing the relation between the results obtained by the two approximation methods. Formula FC20.5 leads to smaller retention limits when the safety standard is high (small ε), and to larger retention limits when the safety standard is relaxed (large ε). Comparison of variants 4 and 5 with variant 1 illustrates the obvious rule that it pays off to reduce retention limits when reinsurance is cheap, and to increase them when reinsurance is expensive. It could happen in practice that the pricing rules applied by reinsurers differ by lines of business. When the portfolio W = W1 + ... + Wn consists of n business lines, for which the market offers reinsurance cover priced on the basis of different formulas Π1,R(·), ..., Πn,R(·), the natural generalization of the problem lies in minimizing the premium (or maximizing the rate r∗) by choosing n retention limits M1, ..., Mn, one for each business line separately. Separation of business lines makes it feasible to assume different severity distributions, too.

20.5 Ruin Probability Criterion when the Initial Capital is Given

Presuming a long-run horizon for premium calculation, we turn back to ruin theory. Our aim is now to obtain such a level of premium for the portfolio yielding each year the aggregate loss W as results from a presumed level of ruin probability ψ and initial capital u. This is done by inverting various approximate formulas for the probability of ruin. The information requirements of the different methods are emphasized. Special attention is also paid to the problem of decomposition of the whole-portfolio premium.

20.5.1 Approximation Based on Lundberg Inequality

This is the simplest (and crude) approximation method, simply assuming replacement of the true function ψ(u) by:

    ψLi(u) = e^(−Ru).

At first we obtain the approximation R^(Li) of the desired level of the adjustment coefficient R:

    R^(Li) = −ln ψ / u.

In the next step we make use of the definition of the adjustment coefficient for the portfolio:

    E(e^(RW)) = e^(RΠ(W)),

to obtain directly the premium formula:

    Π(W) = R^(−1) ln E(e^(RW)) = R^(−1) CW(R),

where CW denotes the cumulant generating function. The result is well known as the exponential premium formula. It possesses several desirable properties beyond being derivable from ruin theory. First of all, by virtue of the properties of the cumulant generating function, it is additive for independent risks. So there is no need to distinguish between marginal and basic premiums for individual risks. For the same reason the formula does not reflect the cross-sectional diversification effect when the portfolio is composed of a large number of risks, each of them being small. The formula can be practically applied once we replace the adjustment coefficient R by its approximation R^(Li).

Under certain conditions we could rely on truncating the higher-order terms in the expansion of the cumulant generating function:

    Π(W) = (1/R) CW(R) = µW + (1/2!) R σ²W + (1/3!) R² µ3,W + (1/4!) R³ c4,W + ...,    (20.9)

and use for the purpose of individual risk pricing the formula (where higher-order terms are truncated as well):

    Π(X) = (1/R) CX(R) = µX + (1/2!) R σ²X + (1/3!) R² µ3,X + (1/4!) R³ c4,X + ...    (20.10)

Some insight into the nature of the long-run criteria for premium calculation could be gained by rearranging formula (20.9). At first we could express the initial capital in units of the standard deviation of the aggregate loss: U = u σW^(−1). Now the adjustment coefficient could be expressed as:

    R = −ln ψ / (U σW),

and premium formula (20.9) as:

    Π(W) = µW + σW { (1/2!) (−ln ψ/U) + (1/3!) (−ln ψ/U)² γW + (1/4!) (−ln ψ/U)³ γ2,W + ... },    (20.11)

where in the brackets only unit-less figures appear, which together form the pricing formula for the standardized risk (W − µW) σW^(−1). Let us notice that the contribution of the higher-order terms in the expansion is negligible when the initial capital is large enough. The above phenomenon could be interpreted as a result of risk diversification in time (as opposed to cross-sectional risk diversification). Provided the initial capital is large, ruin (if it happens at all) will rather appear as a result of the aggregation of poor results over many periods of time. However, given the skewness and kurtosis of the one-year increment of the risk process, the sum of increments over n periods has skewness of order n^(−1/2), kurtosis of order n^(−1), etc. Hence the larger the initial capital, the smaller the importance of the difference between the distribution of the yearly increment and the normal distribution. In a way this is how the diversification of risk in time works (as opposed to cross-sectional diversification). In the case of cross-sectional diversification the assumption of mutual independence of risks plays the crucial role. Analogously, diversification of risk in time works effectively when subsequent increments of the risk process are independent.
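The additivity of the exponential formula is immediate once (20.9) is coded with a common coefficient R; the cumulant values below are hypothetical:

```python
import math

# Exponential premium via the truncated expansion (20.9), with R replaced
# by the Lundberg-inequality approximation R = -ln(psi)/u.
def exponential_premium(cum, psi, u):
    mu, s2, m3, c4 = cum                 # first four cumulants of the risk
    R = -math.log(psi) / u
    return mu + R * s2 / 2 + R**2 * m3 / 6 + R**3 * c4 / 24
```

Since the truncated formula is linear in the cumulants for a fixed R, pricing two independent risks separately or pricing their sum gives the same total, so no marginal/basic distinction arises.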

20.5.2 "Zero" Approximation

The "zero" approximation is a kind of naive approximation, assuming replacement of the function ψ(u) by:

    ψ0(u) = (1 + θ)^(−1) exp(−Ru),

where θ denotes the relative security loading, which means that (1 + θ) = Π(W)/E(W). The "zero" approximation is applicable to the case of Poisson claim arrivals (as opposed to the Lundberg inequality, which is applicable under more general assumptions). Relying on the "zero" approximation leads to the system of two equations:

    Π(W) = R^(−1) CW(R),
    R = (1/u) ln { E(W) / (ψ Π(W)) }.

The system could be solved by assuming at first:

    R^(0) = −ln ψ / u,

and next executing the iterations:

    Π^(n)(W) = { R^(n−1) }^(−1) CW( R^(n−1) ),
    R^(n) = (1/u) ln { E(W) / (ψ Π^(n)(W)) },

which under reasonable circumstances converge quite quickly to the solution R_(0) = lim_{n→∞} R^(n). This allows applying formula (20.9) for the whole portfolio and formula (20.10) for individual risks, provided the coefficient R is replaced by its approximation R_(0).
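The iteration converges in a handful of steps in typical cases; a sketch with the cumulant generating function truncated after the fourth cumulant (an assumption) and hypothetical inputs:

```python
import math

# "Zero" approximation: iterate Pi = C_W(R)/R and R = ln{E(W)/(psi*Pi)}/u.
def zero_approximation(cum, psi, u, iters=100):
    mu, s2, m3, c4 = cum
    C = lambda R: mu * R + s2 * R**2 / 2 + m3 * R**3 / 6 + c4 * R**4 / 24
    R = -math.log(psi) / u                    # starting value
    for _ in range(iters):
        Pi = C(R) / R
        R = math.log(mu / (psi * Pi)) / u
    return R, Pi                              # at convergence solves both equations
```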

20.5.3 Cramér–Lundberg Approximation

Premium calculation could also be based on the Cramér–Lundberg approximation. In this case the problem can again be reduced to a system of equations (three this time):

    Π(W) = R^(−1) CW(R),
    R = (1/u) [ −ln ψ + ln { θ µY / ( M'Y(R) − µY (1 + θ) ) } ],
    (1 + θ) = Π(W)/E(W),

where M'Y(·) and µY denote respectively the first-order derivative of the moment generating function and the expectation of the severity distribution. Solving the system with respect to the unknowns Π(W), θ, and R now requires a bit more complex calculations. The obtained result R^(CL) could then be used to replace R in formulas (20.9) and (20.10). The method is applicable to the case of Poisson claim arrivals. Moreover, the severity distribution has to be known in this case. It can be expected that the method will produce accurate results for large u.

20.5.4 Beekman–Bowers Approximation

This method is often recommended as one which produces relatively accurate approximations, especially for moderate amounts of initial capital. The problem consists in solving the system of three equations:

    ψ = (1 + θ)^(−1) { 1 − Gα,β(u) },
    α/β = (1 + θ) m2,Y / (2θ m1,Y),
    α(α + 1)/β² = (1 + θ) [ m3,Y / (3θ m1,Y) + 2 { m2,Y / (2θ m1,Y) }² ],

where Gα,β denotes the cdf of the gamma distribution with parameters (α, β), and mk,Y denotes the raw moment of order k of the severity distribution. The last two equations arise from equating the moments of the gamma distribution to the conditional moments of the maximal loss distribution (provided the maximal loss is positive). Solving this system of equations is a bit cumbersome, as it involves multiple numerical evaluations of the cdf of the gamma distribution. An admissible solution exists provided m3,Y m1,Y > m²2,Y, which is always satisfied for an arbitrary severity distribution with support on the positive part of the axis. Denoting the solution for the unknown θ by θBB, we can write the latter as


a function: θBB = θBB(u, ψ, m1,Y, m2,Y, m3,Y), and obtain the whole-portfolio premium from the equation:

    ΠBB(W) = (1 + θBB) E(W).

Formally, application of the method requires only the moments of the first three orders of the severity distribution to be finite. However, a problem arises when we wish to price individual risks. Then we have to know the moment generating function of the severity distribution, and it should obey the conditions for the adjustment coefficient to exist. If this is the case, we can replace the coefficient θ in the equation:

    MY(r) = 1 + (1 + θ) m1,Y r

by its approximation θBB, and thus obtain the approximation R^(BB) of the adjustment coefficient R. It allows calculating premiums according to formulas (20.9) and (20.10). It is easy to verify that there is no danger of contradiction, as both formulas for the premium ΠBB(W) produce the same result: (1 + θBB) E(W) = { R^(BB) }^(−1) CW( R^(BB) ).

20.5.5 Diffusion Approximation

This approximation method requires the scarcest information. It suffices to know the first two moments of the increment of the risk process to invert the formula:

    ψD(u) = exp( −R^(D) u ),

where:

    R^(D) = 2 { Π(W) − µW } σW^(−2),

in order to obtain the premium:

    ΠD(W) = µW + (σ²W/2) (−log ψ)/u,

which again is easily decomposable for individual risks. The formula is equivalent to the exponential formula (20.9) with all terms except the first two omitted.

20.5.6 De Vylder Approximation

The method requires information on the moments of the first three orders of the increment of the risk process. According to the method, the ruin probability could be expressed as:

    ψdV(u) = { 1 / (1 + R^(D) ρ) } exp{ −R^(D) u / (1 + R^(D) ρ) },

where for simplicity the abbreviated notation ρ = (1/3) σW γW is used. Setting ψdV(u) equal to ψ and rearranging the equation, we obtain another form of it:

    { −log ψ − log(1 + R^(D) ρ) } (1 + R^(D) ρ) = R^(D) u,

which can be solved numerically with respect to R^(D), to yield as a result the premium formula:

    ΠdV(W) = µW + (σ²W/2) R^(D),

which again is directly decomposable. When an analytic solution is needed, we can make some further simplifications. Namely, the equation entangling the unknown coefficient R^(D) could be transformed to a simplified form on the basis of the following approximation:

    (1 + R^(D) ρ) log(1 + R^(D) ρ) = (1 + R^(D) ρ) { R^(D) ρ − (R^(D) ρ)²/2 + (R^(D) ρ)³/3 − ... } ≈ R^(D) ρ.

Provided the error of omitting the higher-order terms is small, we obtain the approximation:

    R^(D) ≈ −log ψ / { u + ρ (log ψ + 1) }.

The error of the above solution is small provided the initial capital u is several times greater than the product ρ |log ψ + 1|. Under this condition we obtain the explicit (approximate) premium formula:

    ΠdV∗(W) = µW + (σ²W/2) (−log ψ) / { u + ρ (log ψ + 1) },


where the star symbolizes the simplification made. Applying now the method of linear approximation of the marginal cost ΠdV∗(W + X) − ΠdV∗(W), presented in Section 20.3, yields the result:

    ΠdV∗(X) = µX + [ (−log ψ) { u + 2ρ (log ψ + 1) } / ( 2 { u + ρ (log ψ + 1) }² ) ] σ²X
              + [ log ψ (log ψ + 1) / ( 6 { u + ρ (log ψ + 1) }² ) ] µ3,X.

The reader can verify that the formula ΠdV∗(·) is additive for independent risks, and so it can serve for marginal as well as for basic valuation.
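The exact De Vylder equation is well behaved and can be solved by bisection; comparing with the simplified root shows the size of the simplification error. The values of σW, γW, ψ, and u below are arbitrary illustrations:

```python
import math

# Solve {-log(psi) - log(1 + R*rho)}*(1 + R*rho) = R*u for R by bisection.
def de_vylder_R(psi, u, rho, tol=1e-12):
    f = lambda R: (-math.log(psi) - math.log(1.0 + R * rho)) * (1.0 + R * rho) - R * u
    lo, hi = 1e-12, 1.0        # f(lo) > 0 and f(hi) < 0 for these parameters
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

psi, u, sW, gW = 0.01, 300.0, 50.0, 0.3
rho = sW * gW / 3.0
R_exact = de_vylder_R(psi, u, rho)
R_simple = -math.log(psi) / (u + rho * (math.log(psi) + 1.0))
```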

20.5.7 Subexponential Approximation

This method applies to the classical model (Poisson claim arrivals) with a thick-tailed severity distribution. More precisely, when the severity cdf FY possesses a finite expectation µY, then the integrated tail distribution cdf FL1 (interpreted as the cdf of the variable L1, being the "ladder height" of the claim surplus process) is defined as follows:

    1 − FL1(x) = (1/µY) ∫_x^∞ { 1 − FY(y) } dy.

Assuming now that the latter distribution is subexponential (see Chapter 15), we could obtain (applying the Pollaczek–Khinchin formula) the approximation, which should work for large values of initial capital:

    ΠS(W) = µW [ 1 + ψ^(−1) { 1 − FL1(u) } ].

The extended study of the consequences of thick-tailed severity distributions can be found in Embrechts, Klüppelberg, and Mikosch (1997).
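For a (non-truncated) Pareto severity with tail 1 − FY(y) = (1 + y/λ)^(−α), the integrated tail has the closed form 1 − FL1(x) = (1 + x/λ)^(1−α), which a direct quadrature confirms; the parameter values below are illustrative:

```python
import math

alpha, lam = 2.5, 1.5
muY = lam / (alpha - 1.0)                 # mean of the Pareto severity

def tail_closed(x):                       # 1 - F_L1(x) in closed form
    return (1.0 + x / lam) ** (1.0 - alpha)

def tail_numeric(x, upper=2.0e5, n=200000):   # trapezoid check of the integral
    h = (upper - x) / n
    g = lambda y: (1.0 + y / lam) ** (-alpha)
    s = 0.5 * (g(x) + g(upper)) + sum(g(x + i * h) for i in range(1, n))
    return s * h / muY

def premium_subexp(muW, psi, u):          # Pi_S(W) = muW * {1 + (1 - F_L1(u))/psi}
    return muW * (1.0 + tail_closed(u) / psi)
```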

20.5.8 Panjer Approximation

The Pollaczek–Khinchin formula could also be used in combination with the Panjer recursion algorithm to produce quite accurate (at the cost of time-consuming calculations) answers in the case of the classical model (Poisson claim arrivals). The method consists of two basic steps. In the first step the integrated tail distribution FL1(x) is calculated and discretized. Once this step is executed, we have the distribution of a variable L̃1 (a discretized version of the "ladder height" L1):

    fj = P( L̃1 = jh ),    j = 0, 1, 2, ...

The second step is based on the fact that the maximal loss L = L1 + ... + LN has a compound geometric distribution. Thus the distribution of the discretized version L̃ of the variable L is obtained by making use of the Panjer recursion formula:

    P( L̃ = 0 ) = (1 − q) Σ_{j=0}^{∞} (q f0)^j,

and for k = 1, 2, ...:

    P( L̃ = kh ) = { q / (1 − q f0) } Σ_{j=1}^{k} fj P( L̃ = (k − j)h ),

where q = (1 + θ)^(−1). Iterations should be stopped when for some kψ the cumulated probability F_L̃(kψ h) exceeds for the first time the predetermined value 1 − ψ. The approximate value of the capital u at which the ruin probability attains the value ψ could then be set on the basis of interpolation, taking into account that the ruin probability function is approximately exponential:

    uψ = kψ h − h [ log ψ − log { 1 − F_L̃(kψ h) } ] / [ log { 1 − F_L̃(kψ h − h) } − log { 1 − F_L̃(kψ h) } ].

Calculations should be repeated for different values of θ in order to find the value θPanjer(ψ, u) at which the resulting capital uψ approaches the predetermined value of capital u. Then the resulting premium is given by the formula:

    ΠPanjer(W) = (1 + θPanjer) µW.

It should be noted that only the second step of the calculations has to be repeated many times under the search procedure, as the distribution of the variable L̃1 remains the same for the various values of θ being tested. The advantage of the method is that the range of the approximation error is under control, as it is a simple consequence of the width of the discretization interval h and the discretization method used. The disadvantage, already mentioned, is the time-consuming algorithm. Moreover, the method produces only numerical results, and therefore provides no rule for decomposing the whole-portfolio premium into individual risk premiums. Nevertheless, the method could be used to obtain quite accurate approximations, and thus a reference point for estimating the approximation errors produced by simpler methods.
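For exponential severity the integrated tail distribution is again exponential and the exact ruin probability is known in closed form, which makes the two-step procedure easy to check end to end; the parameter values and the "rounding" discretization are illustrative choices:

```python
import math

# Pollaczek-Khinchin + Panjer recursion for the capital u_psi at which the
# ruin probability equals psi, for exponential(mu) severity.
def ruin_capital(psi, theta, mu=1.0, h=0.02, kmax=4000):
    q = 1.0 / (1.0 + theta)
    F = lambda x: 1.0 - math.exp(-x / mu)          # cdf of the ladder height L1
    f = [F(0.5 * h)] + [F((j + 0.5) * h) - F((j - 0.5) * h)
                        for j in range(1, kmax + 1)]   # rounding discretization
    p = [(1.0 - q) / (1.0 - q * f[0])]             # P(L~ = 0)
    cum, k = p[0], 0
    while cum <= 1.0 - psi and k < kmax:           # stop once F(kh) > 1 - psi
        k += 1
        p.append(q * sum(f[j] * p[k - j] for j in range(1, k + 1)) / (1.0 - q * f[0]))
        cum += p[k]
    tail_k, tail_k1 = 1.0 - cum, 1.0 - cum + p[k]  # 1 - F at kh and kh - h
    return k * h - h * (math.log(psi) - math.log(tail_k)) \
                     / (math.log(tail_k1) - math.log(tail_k))
```

For the classical model with exponential(µ) claims the exact answer is uψ = −µ(1 + θ)/θ · ln{ψ(1 + θ)}, which the recursion reproduces to within the discretization error.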


All approximation methods presented in this section are more or less standard, and more detailed information on them can be found in any actuarial textbook, for example in "Actuarial Mathematics" by Bowers et al. (1986, 1997). More advanced analysis can be found in the book "Ruin Probabilities" by Asmussen (2000), and a numerical comparison of these and other approximations is given in Chapter 15.

20.6 Ruin Probability Criterion and the Rate of Return

This section is devoted to the problem of balancing profitability and solvency requirements. In Section 20.2 a similar problem has already been studied. However, return on capital was considered there on a single-period basis. Therefore neither the allocation of returns (losses) nor the long-run consequences of the decision rules applied in this respect were considered. The problem was already illustrated in Example 1. Section 20.6.1 is devoted to presenting the same problem under more general assumptions about the risk process, making use of some of the approximations presented in Section 20.5. Section 20.6.2 is devoted to another generalization, where a more flexible dividend policy allows for sharing risk between the company and the shareholders.

20.6.1 Fixed Dividends

First we consider a reinterpretation of the model presented in Example 1. Now the discrete-time version of the model is assumed:

    Rn = u + (c − du) n − (W1 + ... + Wn),    n = 0, 1, 2, ...,

where all events are assumed to be observed once a year, and the notation is adapted accordingly. The question is the same: to choose the optimal level of initial capital u that minimizes the premium c given the ruin probability ψ and the dividend rate d. The solution depends on how much information we have on the distribution of the variable W, and how precise a result is required. Provided our information is restricted to the expectation and variance of W, we can use the diffusion approximation. This produces exactly the same results as in Example 1, although now we interpret them as an approximate solution. Let us recall that the resulting premium formula reads:

    Π(W) = µW + σW √(−2d log ψ),

with the accompanying result for the optimal level of capital: uopt = σ − log ψ(2d)−1 . Despite the fact that the premium formula is not additive, we can follow arguments presented in Section 20.3.4, to propose the individual basic premium formula: 2 −1 ΠB (X) = µX + σX σW −2d log ψ, and obviously the marginal premium containing loading twice as small as the basic one. The basic idea presented above can be generalized to cases when richer information on the distribution of the variable W allows for more sophisticated methods. For illustrative purposes only the method of De Vylder (in a simpliﬁed version) is considered. Example 3 Our information encompasses also skewness (which is positive), so premium is calculated on the basis of the De Vylder approximation. Allowing for simpliﬁcation proposed in the previous section, we obtain the minimized function:

c = μ_W + (−ln ψ) σ_W² / [2 {u + ρ (ln ψ + 1)}] + du.

Almost as simply as in Example 1 we get the solutions:

u_opt = σ_W √{−ln ψ / (2d)} − ρ (ln ψ + 1),
c_opt = μ_W + σ_W √(−2d ln ψ) − (1/3) d (ln ψ + 1) γ_W σ_W,

where again the safety loading amounts to (1/2) σ_W √(−2d ln ψ). However, in this case the safety loading is smaller than half of the total premium loading. This time the capital (and so the dividend loading) is larger, because of the component proportional to σ_W γ_W. This also complicates the pricing of individual risks, as (analogously to the formulas considered in Section 20.3.3) the basic premium in respect of this component has to be set arbitrarily.

Comparing the problems presented above with those considered in Section 20.5, we can conclude that premium calculations based on ruin theory are easily decomposable as long as the capital backing risk is considered fixed. Once the cost of capital is explicitly taken into account, we obtain premium calculation formulas much more similar to those derived on the basis of one-year considerations, which leads to similar obstacles when the decomposition problem is considered.
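Both optima above are in closed form and easy to evaluate. The chapter's own listings are XploRe quantlets; the following Python sketch is an illustration only (the function names and the sample inputs, taken from the characteristics of W quoted later in Table 20.2, are my choices):

```python
import math

def diffusion_premium(mu_w, sigma_w, d, psi):
    """Optimal capital and premium when only E(W) and Var(W) are known
    (diffusion approximation); d = dividend rate, psi = target ruin prob."""
    u_opt = sigma_w * math.sqrt(-math.log(psi) / (2.0 * d))
    premium = mu_w + sigma_w * math.sqrt(-2.0 * d * math.log(psi))
    return premium, u_opt

def de_vylder_premium(mu_w, sigma_w, gamma_w, d, psi):
    """Optimal capital and premium when the skewness gamma_w is also known
    (simplified De Vylder approximation); rho = mu3 / (3 sigma^2)."""
    rho = gamma_w * sigma_w / 3.0
    log_psi = math.log(psi)
    u_opt = sigma_w * math.sqrt(-log_psi / (2.0 * d)) - rho * (log_psi + 1.0)
    c_opt = (mu_w + sigma_w * math.sqrt(-2.0 * d * log_psi)
             - d * (log_psi + 1.0) * gamma_w * sigma_w / 3.0)
    return c_opt, u_opt

# Illustration with the characteristics of W reported in Table 20.2
c_diff, u_diff = diffusion_premium(999.8, 74.2, 0.05, 0.05)
c_dv, u_dv = de_vylder_premium(999.8, 74.2, 0.779, 0.05, 0.05)
```

Under the diffusion approximation the total loading σ_W √(−2d ln ψ) splits exactly in half between the safety loading and the dividend loading d·u_opt; under the De Vylder correction (for ψ < e⁻¹) the optimal capital is strictly larger.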

20.6.2

Flexible Dividends

So far we have assumed that shareholders are paid a fixed dividend irrespective of the current performance of the company. This is not necessarily the case, as shareholders would accept some share in the risk provided they get a suitable risk premium in exchange. The more general model, which encompasses the previous examples as well as the case of risk sharing, can be formulated as follows:

R_n = u + cn − (W_1 + ... + W_n) − (D_1 + ... + D_n),

where D_n is the dividend due for the year n, defined as such a function of the variable W_n that E(D_n) = du. As the dividend is a function of the current year's result, it preserves the independence of the increments of the risk process. Of course, only those definitions of D_n which in effect reduce the range of fluctuations of the risk process are sensible. The example presented below assumes one of the possible (and sensible) choices in this respect.

Example 4 Let us assume that W_n has a gamma(α, β) distribution, and the dividend is defined as:

D_n = max {0, δ (c − W_n)},    δ ∈ (0, 1),

which means that the shareholders' share in profits amounts to δ·100%, but they do not participate in losses. The problem is to choose a value of the parameter δ and an amount of capital u so as to minimize the premium c, under the restriction E(D_n) = du, and given the parameters (α, β, d, ψ). The problem can be reformulated so as to solve it numerically, making use of the De Vylder approximation.

Solution. Let us write the state of the process after n periods in the form:

R_n = u − (V_1 + ... + V_n),

with increment equal to −V_n. The variable V_n can then be defined as:

V_n = W_n − c               when W_n > c,
V_n = (1 − δ)(W_n − c)      when W_n ≤ c.

According to the De Vylder method the ruin probability is approximated by:

ψ_dV(u) = {1 + R(D) ρ}^(−1) exp[−R(D) u {1 + R(D) ρ}^(−1)],

where R(D) = −2 E(V) σ^(−2)(V) and ρ = (1/3) μ₃(V) σ^(−2)(V); for simplicity, the index of the year n has been omitted. In order to minimize the premium under the restrictions:

ψ_dV(u) = ψ,    E(D) = du,    δ ∈ (0, 1),    u > 0,

and under predetermined values of (α, β, d, ψ), it suffices to express the expectation of the variable D and the cumulants of order 1, 2, and 3 of the variable V as functions of these parameters and variables. First we derive the raw moments of order 1, 2, and 3 of the variable D. From its definition we obtain:

E(D^k) = δ^k ∫₀^c (c − x)^k dF_W(x),

which (after some calculations) leads to the following results:

E(D)  = δ {c F_{α,β}(c) − (α/β) F_{α+1,β}(c)},
E(D²) = δ² {c² F_{α,β}(c) − 2c (α/β) F_{α+1,β}(c) + (α(α+1)/β²) F_{α+2,β}(c)},
E(D³) = δ³ {c³ F_{α,β}(c) − 3c² (α/β) F_{α+1,β}(c) + 3c (α(α+1)/β²) F_{α+2,β}(c) − (α(α+1)(α+2)/β³) F_{α+3,β}(c)},

where F_{α+j,β} denotes the cdf of the gamma distribution with parameters (α + j, β). Making use of the relation V − D = W − c, and taking into account that:

E{D^m (−V)^n} = δ^m (1 − δ)^n ∫₀^c (c − x)^(m+n) dF_W(x) = {(1 − δ)/δ}^n E(D^(m+n)),

we easily obtain the raw moments of the variable V:

E(V)  = α/β − c + E(D),
E(V²) = α/β² + (α/β − c)² − {1 + 2(1 − δ)/δ} E(D²),
E(V³) = 2α/β³ + 3(α/β²)(α/β − c) + (α/β − c)³ + {1 + 3(1 − δ)/δ + 3((1 − δ)/δ)²} E(D³),

and hence the cumulants of this variable, too. Provided we are able to evaluate numerically the cdf of the gamma distribution, we have all the elements needed to construct a numerical procedure solving the problem.

In Example 4 a specific rule of sharing risk between the shareholders and the company has been applied. In contrast, the assumption on the distribution of the variable W is of some general advantage, as the shifted gamma distribution is often used to approximate the distribution of the aggregate loss. We will make use of it in Example 6, presented in the next section.
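The closed-form moment E(D) can be cross-checked against direct numerical integration of its defining integral, and the same run confirms the identity E(V) = α/β − c + E(D). The following self-contained Python sketch is an illustration only (the gamma cdf is evaluated through the power series of the regularized incomplete gamma function; function names are mine):

```python
import math

def gamma_cdf(x, a, rate):
    """Cdf of the gamma(a, rate) distribution, via the power series of the
    regularized lower incomplete gamma function."""
    t = rate * x
    if t <= 0.0:
        return 0.0
    term = math.exp(a * math.log(t) - t - math.lgamma(a + 1.0))
    total, k = term, 0
    while term > 1e-16 * total:
        k += 1
        term *= t / (a + k)
        total += term
    return min(total, 1.0)

def gamma_pdf(x, a, rate):
    return math.exp(a * math.log(rate) + (a - 1.0) * math.log(x)
                    - rate * x - math.lgamma(a))

def e_dividend(c, delta, a, rate):
    """Closed form: E(D) = delta * {c F_{a,b}(c) - (a/b) F_{a+1,b}(c)}."""
    return delta * (c * gamma_cdf(c, a, rate)
                    - (a / rate) * gamma_cdf(c, a + 1.0, rate))

def e_dividend_direct(c, delta, a, rate, n=20000):
    """Direct numerical evaluation of delta * int_0^c (c - x) f_W(x) dx;
    the integrand vanishes at both endpoints for a > 1."""
    h = c / n
    return delta * h * sum((c - i * h) * gamma_pdf(i * h, a, rate)
                           for i in range(1, n))

def e_v_direct(c, delta, a, rate, upper, n=40000):
    """Direct numerical evaluation of E(V) for
    V = W - c (W > c) and V = (1 - delta)(W - c) (W <= c)."""
    h = upper / n
    s = 0.0
    for i in range(1, n):
        x = i * h
        v = (x - c) if x > c else (1.0 - delta) * (x - c)
        s += v * gamma_pdf(x, a, rate)
    return s * h
```

The two evaluations of E(D) agree to quadrature accuracy, which is a simple sanity check on the moment formulas above.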

20.7

Ruin Probability, Rate of Return and Reinsurance

In this section premium calculation is considered under a predetermined ruin probability and a predetermined rate of dividend, with reinsurance included. First, an example involving a fixed dividend is presented.

20.7.1

Fixed Dividends

Example 5 We assume (as in Example 2) that the aggregate loss W has a compound Poisson distribution with expected number of claims λ_P = 1000, and with the severity distribution being a truncated-Pareto distribution with parameters (α, λ, M₀) = (2.5, 1.5, 500). We assume also that the excess of each loss over the limit M ∈ (0, M₀] is ceded to the reinsurer, using the same pricing formula:

Π_R(W^M) = (1 + re₀) E(W^M) + re₁ VAR(W^M).

The problem lies in choosing such a value of the retention limit M and of the initial capital u as minimizes the total premium paid by policyholders, under predetermined values of the parameters (d, ψ, re₀, re₁). The problem can be solved with the application of the De Vylder and Beekman–Bowers approximation methods. As allowing for reinsurance leads to numerical solutions anyway, there is no longer any reason to apply the simplified version of the De Vylder method, as in Example 3.
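The reinsurer's pricing formula is straightforward to evaluate once the moments of the ceded amount are available. The sketch below is illustrative Python (not the chapter's XploRe code); it assumes the truncated-Pareto severity means a Pareto(α, λ) loss capped at M₀ (the precise convention is the one fixed earlier in the chapter) and uses the compound Poisson identity VAR(W^M) = λ_P E(Z²) for the per-claim ceded amount Z:

```python
def pareto_survival(x, alpha, lam):
    """Survival function of a Pareto(alpha, lambda) loss: (lam/(lam+x))^alpha."""
    return (lam / (lam + x)) ** alpha

def ceded_moments(M, alpha, lam, M0, n=20000):
    """First two moments of the per-claim ceded amount Z = (min(X, M0) - M)_+,
    via E[Z] = int_M^M0 S(x) dx and E[Z^2] = int_M^M0 2 (x - M) S(x) dx
    (trapezoidal rule; S vanishes beyond the cap M0)."""
    h = (M0 - M) / n
    m1 = m2 = 0.0
    for i in range(n + 1):
        x = M + i * h
        w = 0.5 if i in (0, n) else 1.0
        s = pareto_survival(x, alpha, lam)
        m1 += w * s
        m2 += w * 2.0 * (x - M) * s
    return m1 * h, m2 * h

def reinsurance_premium(M, lam_p, alpha, lam, M0, re0, re1):
    """Pi_R = (1 + re0) E(W^M) + re1 VAR(W^M) for a compound Poisson cedent."""
    m1, m2 = ceded_moments(M, alpha, lam, M0)
    return (1.0 + re0) * lam_p * m1 + re1 * lam_p * m2
```

With the chapter's parameters the premium vanishes at M = M₀ (nothing is ceded) and decreases as the retention limit grows.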

Solution. The risk process can now be written as:

R_n = u + {c − du − Π_R(W^M)} n − (W_{M,1} + ... + W_{M,n}).

The problem takes the form of minimization of the premium c under restrictions which, in the case of the De Vylder method, take the form:

ψ = {1 + R(D) ρ}^(−1) exp[−R(D) u {1 + R(D) ρ}^(−1)],
R(D) = 2 {c − du − Π_R(W^M)} σ^(−2)(W_M),
ρ = (1/3) μ₃(W_M) σ^(−2)(W_M),

and in the version based on the Beekman–Bowers approximation method take the form:

c − du − Π_R(W^M) = (1 + θ) E(W_M),
ψ = (1 + θ)^(−1) {1 − G_{α,β}(u)},
α β^(−1) = (1 + θ) E(Y_M²) {2θ E(Y_M)}^(−1),
α(α + 1) β^(−2) = (1 + θ) [E(Y_M³) {3θ E(Y_M)}^(−1) + 2 {E(Y_M²)/(2θ E(Y_M))}²].

Moments of the first three orders of the variable Y_M, as well as cumulants of the variables W_M and W^M, are calculated in the same way as in Example 2. All these characteristics are functions of the parameters (α, λ, λ_P) and the decision variable M.
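The Beekman–Bowers recipe — fit a gamma cdf G from the two moment conditions, then set ψ(u) = (1 + θ)⁻¹ {1 − G(u)} — can be sketched as follows (illustrative Python; the function names are mine):

```python
import math

def gamma_cdf(x, a, rate):
    """Cdf of the gamma(a, rate) distribution (power-series evaluation of the
    regularized lower incomplete gamma function)."""
    t = rate * x
    if t <= 0.0:
        return 0.0
    term = math.exp(a * math.log(t) - t - math.lgamma(a + 1.0))
    total, k = term, 0
    while term > 1e-16 * total:
        k += 1
        term *= t / (a + k)
        total += term
    return min(total, 1.0)

def beekman_bowers_psi(u, theta, m1, m2, m3):
    """Beekman-Bowers approximation psi(u) = (1+theta)^-1 {1 - G_{a,b}(u)},
    with (a, b) fitted from
        a/b        = (1+theta) m2 / (2 theta m1),
        a(a+1)/b^2 = (1+theta) { m3/(3 theta m1) + 2 (m2/(2 theta m1))^2 },
    where m1, m2, m3 are the first three raw moments of the claim severity."""
    g1 = (1.0 + theta) * m2 / (2.0 * theta * m1)
    g2 = (1.0 + theta) * (m3 / (3.0 * theta * m1)
                          + 2.0 * (m2 / (2.0 * theta * m1)) ** 2)
    b = g1 / (g2 - g1 * g1)      # since a(a+1)/b^2 - (a/b)^2 = a/b^2
    a = g1 * b
    return (1.0 - gamma_cdf(u, a, b)) / (1.0 + theta)
```

For exponential severities with mean m, i.e. (m₁, m₂, m₃) = (m, 2m², 6m³), the fitted G is itself exponential and the approximation reproduces the exact ruin probability (1 + θ)⁻¹ exp{−θu/((1 + θ)m)}.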

20.7.2

Interpretation of Solutions Obtained in Example 5

Results of the numerical optimization are reported in Table 20.2. In the basic variant of the problem, the parameters have been set at the level (d, ψ, re₀, re₁) = (5%, 5%, 100%, 0.5%). In variant 6 the value M = M₀ is assumed, so this variant represents the lack of reinsurance. Variants 2, 3, 4 and 5 differ from the basic variant by the value of one of the parameters (d, ψ, re₀, re₁). In variant 2 the dividend rate d has been increased so as to obtain the same level of premium as is obtained in variant 6. The results can be summarized as follows:

Table 20.2: Minimization of premium c with respect to the choice of capital u and retention limit M. Basic characteristics of the variable W: μ_W = 999.8, σ_W = 74.2, γ_W = 0.779, γ_{2,W} = 2.654. Approximation of the ruin probability: BB = Beekman–Bowers, dV = De Vylder.

Variant of the              Approx.   Retention   Initial     Loading
minimization problem        method    limit M     capital u   (c − μ_W)/μ_W
V.1: (basic)                BB        184.2       416.6       4.17%
                            dV        185.2       416.3       4.16%
V.2: d = 5.2%               BB        179.5       408.2       4.25%
                            dV        180.5       407.9       4.25%
V.3: ψ = 2.5%               BB        150.1       463.3       4.65%
                            dV        156.3       461.7       4.63%
V.4: re₀ = 50%              BB        126.1       406.2       4.13%
                            dV        127.1       406.0       4.13%
V.5: re₁ = 0.25%            BB        139.7       409.0       4.13%
                            dV        140.5       408.8       4.13%
V.6: (no reinsurance)       BB        500.0       442.9       4.25%
                            dV        500.0       442.7       4.25%

STFrein02.xpl

(i) Reinsurance results either in a premium reduction under an unchanged rate of dividend (compare variant 6 with variant 1), or in an increase of the rate of dividend under the same premium level (compare variant 2 with variant 1). In both cases the need for capital is also reduced. If we wish to obtain a reduction of the premium as a result of the introduced reinsurance, then the reduction of capital is slightly smaller than in the case when reinsurance serves to enlarge the rate of dividend.

(ii) Comparison of variants 3 and 1 shows that increasing safety (reduction of the parameter ψ from 5% to 2.5%) results in a significant growth of the premium. This effect is caused both by the increase of capital (which burdens the premium through a larger cost of dividends) and by the increase of the cost of reinsurance, because of the reduced retention limit. It is also worthwhile to notice that predetermining ψ = 2.5% results in a significant divergence of the results obtained by the two methods of approximation. In the case when ψ = 5% the difference is negligible.

(iii) The results obtained in variants 4 and 5 show that the optimal level of reinsurance is quite sensitive to changes of the parameters reflecting the costs of reinsurance.

20.7.3

Flexible Dividends

In the next example the assumptions are almost the same as in Example 5, except that the fixed dividend is replaced by a dividend dependent on the financial result in the same manner as in Example 4.

Example 6 Assumptions on the aggregate loss W are the same as in Example 5: compound Poisson with truncated-Pareto severity, with parameters (λ_P, α, λ, M₀). Assumptions concerning the available reinsurance (excess of loss over M ∈ (0, M₀], pricing formulas characterized by the parameters re₀ and re₁) are also the same. The dividend is defined as in Example 4, with a suitable correction due to the reinsurance allowed for:

D_n = max {0, δ (c − W_{M,n} − Π_R(W^M))},    δ ∈ (0, 1).

Now the problem lies in choosing the capital u, the risk-sharing parameter δ and the retention limit M so as to minimize the premium c under the restriction E(D_n) = du, and predetermined values of the parameters characterizing the distribution (λ_P, α, λ, M₀), the parameters characterizing reinsurance costs (re₀, re₁) and the parameters characterizing profitability and safety (d, ψ).

Solution. Under the predetermined values of the decision variables (u, δ, M) and the remaining parameters the risk process has the form:

R_n = u − (V_1 + ... + V_n),

with increment −V_n, where the variable V_n is defined as:

V_n = W_{M,n} − c + Π_R(W^M)             when W_{M,n} > c − Π_R(W^M),
V_n = (1 − δ){W_{M,n} − c + Π_R(W^M)}    when W_{M,n} ≤ c − Π_R(W^M).

The problem differs from that presented in Example 4 by two factors: the variable W_M is not gamma distributed, and the premium c is now replaced by the constant c − Π_R(W^M). However, the variable W_M can be approximated by the shifted gamma distribution with parameters (x₀, α₀, β₀) chosen so as to match the moments of order 1, 2, and 3 of the original variable W_M. Suitable calculations lead to the definition of the variable Ṽ, which approximates the original variable V_n:

Ṽ = X − c*             when X > c*,
Ṽ = (1 − δ)(X − c*)    when X ≤ c*,

where the variable X has a gamma(α₀, β₀) distribution, and the constant c* equals c − Π_R(W^M) − x₀. Thus we can express the moments of the variable Ṽ as functions of the parameters (α₀, β₀, c*, δ) in exactly the same way as is done for the variable V and the parameters (α, β, c, δ) in Example 4. It suffices in turn to approximate the ruin probability with the De Vylder method:

ψ_dV(u) = {1 + R(D) ρ}^(−1) exp[−R(D) u {1 + R(D) ρ}^(−1)],

where R(D) = −2 E(Ṽ) σ^(−2)(Ṽ) and ρ = (1/3) μ₃(Ṽ) σ^(−2)(Ṽ), and where the expected value of the dividend E(D) satisfies the restriction:

c − Π_R(W^M) − E(W_M) − E(D) = −E(Ṽ).
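For a fixed increment distribution the De Vylder formula can be inverted in closed form for the capital that meets a target ruin probability, which is convenient inside the numerical search. A minimal sketch (the function names are mine):

```python
import math

def psi_de_vylder(u, R, rho):
    """De Vylder approximation of the ruin probability:
    psi(u) = (1 + R*rho)^-1 * exp(-R*u / (1 + R*rho))."""
    a = 1.0 + R * rho
    return math.exp(-R * u / a) / a

def capital_for_ruin_prob(psi, R, rho):
    """Invert psi_dV(u) = psi for u:
    u = -(1 + R*rho)/R * ln(psi * (1 + R*rho)); requires psi*(1+R*rho) < 1."""
    a = 1.0 + R * rho
    return -a * math.log(psi * a) / R
```

The round trip psi_de_vylder(capital_for_ruin_prob(ψ, R, ρ), R, ρ) = ψ holds by construction.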

Hence it is clear that the problem of minimization of the premium under the restrictions ψ_dV(u) = ψ, E(D) = du, δ ∈ (0, 1), u > 0, M ∈ (0, M₀] and predetermined values of the parameters (λ_P, α, λ, re₀, re₁, d, ψ, M₀) is in essence analogous to the problem presented in Example 4, and differs only in details. The set of decision variables (u, δ) from Example 4 is now extended by the additional variable M, and the variable W_M is only approximately gamma distributed.
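The shifted gamma fit used here follows the standard cumulant-matching formulas α₀ = 4/γ², β₀ = 2/(γσ), x₀ = μ − 2σ/γ, valid for positive skewness γ. A minimal sketch (illustrative only):

```python
import math

def shifted_gamma_fit(mean, std, skew):
    """Match the first three cumulants of a shifted gamma x0 + Gamma(a0, rate b0):
    skewness = 2/sqrt(a0), variance = a0/b0^2, mean = x0 + a0/b0.
    Requires skew > 0."""
    a0 = 4.0 / skew ** 2
    b0 = 2.0 / (skew * std)
    x0 = mean - a0 / b0
    return x0, a0, b0
```

By construction the fitted distribution reproduces the given mean, variance and skewness exactly.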

20.7.4

Interpretation of Solutions Obtained in Example 6

Results are presented in Table 20.3. In all variants the predetermined values of the parameters (λ_P, α, λ, M₀, re₀, re₁) = (1000, 2.5, 1.5, 500, 100%, 0.5%) are the same. In variant 1 (basic) the ruin probability ψ = 5% is assumed, and reinsurance is allowed. Variant 2 differs from the basic one by a higher safety standard (ψ = 2.5%), whereas variant 3 differs by the lack of reinsurance. In each variant three slightly different versions of the problem have been solved. Version A is a simplified one, assuming a fixed dividend rate d = 5%, so that D_n = du. Consequently the minimization of the premium is conducted with respect to (u, M) only. In fact, the results from Table 20.2 are quoted for this version. Versions B and C assume minimization with respect to (u, M, δ). Version B plays the role of a basic version, where the premium c is minimized under the expected rate of dividend d = 5%. In version C such a dividend rate d has been chosen as leads (through minimization) to the same premium level as obtained previously in version A. Thus two alternative effects of the consent of shareholders to participate in risk can be observed. The effect in terms of a reduction of the premium (the expected rate of dividend remaining unchanged) is observed

Table 20.3: Minimization of premium c under three variants of assumptions and three versions of the problem.

Variant of                 Version     d       M       u     (c − μ_W)/μ_W     δ      σ_D/u
assumptions
V.1: ψ = 5%, reins.        A          5%     185.2   416.3      4.16%          –        0
                           B          5%     189.3   406.0      3.35%        41.7%    5.02%
                           C       8.54%     143.6   305.2      4.16%        48.6%    8.14%
V.2: ψ = 2.5%, reins.      A          5%     156.3   461.7      4.63%          –        0
                           B          5%     157.0   447.5      3.67%        44.4%    4.94%
                           C       8.96%     122.9   329.7      4.63%        52.3%    8.29%
V.3: ψ = 5%, no reins.     A          5%     500.0   442.7      4.25%          –        0
                           B          5%     500.0   429.8      3.45%        42.0%    4.70%
                           C       8.09%     500.0   340.0      4.25%        48.2%    7.15%

STFrein03.xpl

when we compare versions B and A. The effect in terms of an increase of the expected rate of dividend (the premium being fixed) is observed when versions C and A are compared. The results can be summarized as follows. In each of the three variants, the consent of shareholders to risk participation allows for a substantial reduction of the premium (the loading is reduced by about 20%). It is interesting that the shareholders' consent to participate in risk allows for a much more radical reduction of the premium than reinsurance does. This results from the fact that reinsurance costs have been explicitly involved in the optimization, whereas the "costs of the shareholders' consent to participate in risk" have not been accounted for. Comparison of versions C with versions A in each variant of the problem allows us to see the outcome (an increase of the expected rate of dividend) of the shareholders' consent to share risk. In the last column of the table the (relative) standard deviation σ_D/u of dividends is reported; it can serve as a measure of the "cost" at which the outcome, in terms of the increment of the expected dividend rate, is obtained.

Comparing versions B and C in variants 1 and 2, we can observe the effects of the increment in the expected rate of dividend. Apart from the obvious effect on the premium increase, a reduction of capital can also be observed (the cost of capital is higher), and at the same time the retention limits are reduced. The sharing parameter δ increases as well, as does the (relative) standard deviation of dividends σ_D/u.

Comparing variants 1 and 2 (in all versions A, B, and C), we notice the substantial increase of the premium as an effect of the higher safety standard (smaller ψ). The amount of capital needed also increases, and the retention limit is reduced. At the same time a slight increase of the sharing parameter δ is observed (versions B and C).

20.8

Final Remarks

It should be noted that all the presented models, including risk participation of reinsurers and shareholders, lead only to a modification of the distribution of the increment of the risk process. The mutual independence of subsequent increments and their identical distribution are still preserved. There are also models where decisions concerning premiums, reinsurance, and dividends depend on the current size of the capital. In general, models of this type require stochastic control techniques to be applied. Nevertheless, the models presented in this chapter preserve simplicity, and give insight into the long-run consequences of some decision rules, provided they remain unchanged for a long time. This insight is worthwhile despite the fact that in reality decisions are made on the basis of the current situation, and no fixed strategy remains unchanged under changing conditions of the environment. On the other hand, it is always a good idea to have some reference point when consequences of decisions motivated by current circumstances have to be evaluated.

Part III

General

21 Working with the XQC Szymon Borak, Wolfgang Härdle, and Heiko Lehmann

21.1

Introduction

An enormous number of statistical methods have been developed in quantitative finance during the last decades. Nonparametric methods, bootstrapping time series, wavelets, and Markov Chain Monte Carlo are now almost standard in statistical applications. To implement these new methods, the method developer usually uses a programming environment he is familiar with. Thus, such methods are automatically only available for preselected software packages, but not for widely used standard software packages like MS Excel. To apply these new methods to empirical data, a potential user faces a number of problems, or it may even be impossible for him to use the methods without rewriting them in a different programming language. Even if one wants to apply a newly developed method to simulated data in order to understand the methodology, one is confronted with the drawbacks described above. A very similar problem occurs in teaching statistics at the undergraduate level. Since students (by definition!) have their preferred software and often do not have access to the same statistical software packages as their teacher, illustrating examples have to be executable with standard tools. The delayed proliferation of new statistical technology over heterogeneous platforms and the evident student/teacher software gap are examples of inefficient distribution of quantitative methodology. This chapter describes the use of a platform-independent client that is the basis for e-books, transparencies and other knowledge-based systems. In general, two statisticians are on either side of the distribution process of newly implemented methods: the provider (inventor) of a new technique (algorithm) and the user who wants to apply (understand) the new technique. The aim of the XploRe Quantlet client/server architecture is to bring these statisticians closer to each other. The XploRe Quantlet Client (XQC) represents the

front end – the user interface (UI) of this architecture – allowing access to the XploRe server and its methods and data. The XQC is fully programmed in Java and does not depend on a specific computer platform. It runs on Windows and Mac platforms as well as on Unix and Linux machines. The following sections contain a description of the components and functionalities the XQC offers. Section 21.2.1 gives a short overview of the possible configuration settings of the XQC, which allow influencing the behavior of the client. Section 21.2.2 explains how to connect the XQC to an XploRe Quantlet Server. A detailed description of the XQC's components – desktop, Quantlet editor, data editor and method tree – is given in Sections 21.3 to 21.3.3. Section 21.3.4 finally explains the graphical features offered by the XploRe Quantlet Client.

21.2

The XploRe Quantlet Client

The XploRe Quantlet Client can be initiated in two different ways, depending on whether the XQC is supposed to run as a standalone application or as an applet embedded within an HTML page. The XQC comes packed in a single Java Archive (JAR) file, which allows easy usage. This JAR file allows for running the XQC as an application as well as running it as an applet. Running the XQC as an application does not require any programming skills. Provided that a Java Runtime Environment is installed on the computer the XQC is supposed to be executed on, the xqc.jar will automatically be recognized as an executable JAR file that opens with the program javaw. If the XQC is embedded in an HTML page, it runs as an applet and can be started right after showing the page.

21.2.1

Conﬁguration

Property files allow configuring the XQC to meet the special needs of the user. These files can be used to manage the appearance and behavior of the XQC. Any text editor can be used for editing the configuration files. Generally, the use of all information is optional. In its current version, the XQC works with three different configuration files. The xqc.ini file contains important information about the basic setup of the XploRe Quantlet Client, such as the server and port information the client is supposed to connect to. It also contains information

Figure 21.1: Manual input for server and port number.

about the size of the client. This information can be maintained either relative to the actual size of the screen, by using a factor, or by stating the exact width and height. If this information is missing, the XQC starts with its default values. The xqc_language.ini file allows for setting up the XQC's language. This file contains all texts used within the XQC. To localize the client, these texts have to be translated. If no language file can be found, the client starts with its default setup, showing all menus and messages in English. The xqc_methodtree.ini file finally contains information about the method tree that can be shown as part of the METHOD/DATA window, see Section 21.3.2. A detailed description of the setup of the method tree will be part of Section 21.3.3.
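The chapter does not list an xqc.ini itself. Purely as a hypothetical illustration of the kind of file described above (the key names below are guesses, not the actual XQC key set — consult the client's own documentation for the real ones):

```
; hypothetical xqc.ini -- all key names are illustrative only
Server = xplore.example.org   ; XploRe server to connect to
Port = 8888                   ; port the server listens on
SizeFactor = 0.8              ; client size relative to the screen, or instead:
; Width = 1024
; Height = 768
```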

21.2.2

Getting Connected

After starting, the XQC attempts to access and read information from the configuration files. If no configuration file is found, error messages will pop up. If the server and port information cannot be found, a pop-up appears and enables a manual input of the server and port number, as displayed in Figure 21.1. The screenshot in Figure 21.2 shows the XQC after it has been started and connected to an XploRe server. A traffic light in the lower right corner of the screen indicates the actual status of the server. A green light means the client

Figure 21.2: XQC connected and ready to work.

has successfully connected to the server and the server is ready to work. If the server is busy, computing previously received XploRe code, the traﬃc light will be set to yellow. A red light indicates that the XQC is not connected to the server.

21.3

Desktop

If no further restrictions or features are set in the configuration file (e.g. not showing a certain window, or starting with the execution of a certain XploRe Quantlet), the XQC should look as shown in the screenshot. It opens with the two screen components CONSOLE and OUTPUT/RESULT window. The CONSOLE allows single-line XploRe commands to be sent to the server to be

executed immediately. It also offers a history of the last 20 commands sent to the server. To repeat a command from the history, all that is required is a mouse click on the command, and it will be copied to the command line. Pressing the 'Return' key on the keyboard executes the XploRe command. Text output coming from the XploRe server is shown in the OUTPUT/RESULT window. Any text that is displayed can be selected and copied for use in other applications – e.g. for the presentation of results within a scientific article. At the top of the screen the XQC offers additional functions via a menu bar. These functions are grouped into four categories. The XQC menu contains the features Connect, Disconnect, Reconnect and Quit. Depending on the actual server status, not every feature is enabled: if the client is not connected (the server status is indicated by a red traffic light), it does not make sense to disconnect or reconnect; if the client is already connected (the server status equals a green light), the connect feature is disabled.

21.3.1

XploRe Quantlet Editor

The Program menu contains the features New Program, Open Program (local). . . and Open Program (net). . . . New Program opens a new and empty text editor window. This window enables the user to construct his or her own XploRe Quantlets. The feature Open Program (local) offers the possibility of accessing XploRe Quantlets stored on the local hard disk drive. It is only available if the XQC is running as an application or a certified applet. Due to the Java sandbox restrictions, it is not possible to access local programs when running the XQC as an unsigned applet. If the user has access to the internet, the menu item Open Program (net) can be useful. This feature allows the opening of Quantlets stored on a remote Web server. All it needs is the filename and the URL at which the file is located. Figure 21.3 shows a screenshot of the editor window containing a simple XploRe Quantlet. Two icons offer actions on the XploRe code:

• Represents probably the most important feature – it sends the XploRe Quantlet to the server for execution.

Figure 21.3: XploRe Editor window.

• Saves the XploRe Quantlet to your local computer (not possible if running the XQC as an unsigned applet).

The Quantlet shown in Figure 21.3 assigns two three-dimensional standard normally distributed samples to the variables x and y. The generated data are formatted with a certain color, shape and size using the command setmaskp. The result is finally shown in a single display.

21.3.2

Data Editor

The Data menu contains the features New Data. . . , Open Data (local). . . , Open Data (net). . . , Download DataSet from Server. . . and DataSets uploaded to Server. New Data can be used to generate a new and empty data window. Before the data frame opens, a pop-up window as shown in Figure 21.4 appears, asking for the desired dimension – the number of rows and columns – of the new data set. The XQC needs this information to create the spreadsheet. This definition does not have to be the exact and final decision; it is possible to add and delete rows and columns later on.

Figure 21.4: Dimension of the Data Set.

The menu item Open Data (local) enables the user to open data sets stored on the local hard disk. Again, access to the local resources of the user's computer is only possible if the XQC is running as an application or a certified applet. The file will be interpreted as a common text-format file. Line breaks within the file are considered as new rows for the data set. To recognize data belonging to a certain column, the single data in one line must be separated either by a ";" or a tab (separating the data by just a space will force the XQC to open the complete line in just one cell). Open Data (net) lets the user open a data set that is stored on a web server by specifying the URL. The menu item Download DataSet from Server offers the possibility to download data from the server. The data will automatically be opened in a new method and data window, offering all the features of the method and data window (e.g. applying methods, saving, . . . ). A helpful feature, especially for research purposes, is presented with the menu item DataSets uploaded to Server. This item opens a window that contains a list of objects uploaded to the server using the data window or the console. Changes to these objects are documented in an object history. For performance reasons, only uploaded data and actions on data from the CONSOLE and the TABLE MODEL are recorded. The appearance of the data window depends on the settings in the configuration file. If a method tree is defined and supposed to be shown, the window shows

Figure 21.5: Combined Data and Method Window.

the method tree on the left part and the data spreadsheet on the right part of the frame. If no method tree has been defined, only the spreadsheet will be shown. The method tree will be discussed in more detail in Section 21.3.3. Figure 21.5 shows a screenshot of the combined data and method frame. Icons on the upper part of the data and method window offer additional functionalities:

• If columns or cells are selected, this specific selection – otherwise the entire data set – can be uploaded to the server after specifying a variable name.

• Saves the data to your local computer (not possible if running the XQC as an unsigned applet).

• Copy and paste.

• Switches the column or cell selection mode on and off. Selected columns/cells can be uploaded to the server, or methods can be executed on them.

The spreadsheet of the data and method window also offers a context menu containing the following items:

• Copy
• Paste
• No Selection Mode – Switches OFF the column or cell selection mode.
• Column Selection Mode – Switches ON the column selection mode.
• Cell Selection Mode – Switches ON the cell selection mode.
• Set Row as Header Line
• Set column Header
• Delete single Row
• Insert single Row
• Add single Row
• Delete single Column
• Add single Column

Most of the context menu items are self-explanatory. However, two items are worth a closer look – 'Set Row as Header Line' and 'Set column Header'. The spreadsheet has the capability to specify a header for each column. This information can be used within XploRe Quantlets to name the axes within a plot, making it easier for the user to interpret graphics. A more detailed description is included in Section 21.3.3. Default values for the headers are COL1, COL2, . . . as shown in Figure 21.6. Naming a single column can be performed using the menu item 'Set column Header'. The name has to be entered in the pop-up window that appears right after choosing this menu item. It can also be used to change existing column headers. The spreadsheet also offers the possibility to set all column headers at once. If the

Figure 21.6: Working with the Data and Method Window.

If the data set already contains a row with header information – either coming from manual input or as part of an opened data set – this row can be set as the header using the menu item 'Set Row as Header Line'. The row containing the currently active cell is cut out of the data set and pasted into the header line. Setting the header is also possible while opening a data set: after choosing the data, a pop-up asks whether or not the first row of the data set should be used as the header. The context menu features described above remain available, enabling the user to set or change headers afterwards.

Working with the XQC's method and data window does not require any XploRe programming knowledge. All it requires is a pointing device such as a mouse. Applying, for example, the scatter plot method to two columns only means:

• switching on the column selection mode,
• marking both columns,
• clicking on the method "Scatter Plot".

The result is a plot as shown in Figure 21.6. As stated above, the selected area can also be uploaded to the server, using the upload icon, for further investigation. The new variable can then be used within XploRe Quantlets written in the EDITOR window or manipulated via the CONSOLE.

21.3.3

Method Tree

The METHOD TREE is a tool for accessing statistical methods in an easy way. Setting it up does not require any Java programming skills; all it needs is the maintenance of two configuration files. Settings maintained within the xqc.ini file tell the XQC whether a method tree is to be shown and where to get the tree information from. The client also needs to know where the methods are stored; the MethodPath contains this information. Path statements can either be absolute or relative to the directory the XQC has been started in. Relative path information must start with XQCROOT. The settings in the example below tell the client to generate a method tree from the file xqc_methodtree.ini, with the XploRe Quantlets stored in the relative subdirectory xqc_quantlets/:

ShowMethodTree = yes
MethodTreeIniFile = xqc_methodtree.ini
MethodPath = XQCROOT/xqc_quantlets/

The actual method tree is set up in the separate configuration file given by the property MethodTreeIniFile. This file contains the systematic structure of the tree – nodes and children, the method to be executed, and the description to be shown within the tree frame:

Node_1 = path name
Child_1.1 = method|description
Child_1.2 = method|description
Child_1.3 = method|description
Node_2 = path name
Node_2.1 = path name
Child_2.1.1 = method|description
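The tree configuration format above is simple enough to parse with a few lines of code. As an illustration only – the XQC itself is a Java applet, and this helper is not part of it – the following Python sketch reads `Node_…` and `Child_…` lines into a nested structure keyed by the dotted index:

```python
def parse_method_tree(lines):
    """Parse Node_x[.y] = name and Child_x.y = method|description lines.

    Returns a dict mapping each node index (e.g. "1", "1.1") to its name
    and its children, where each child stores the Quantlet file name and
    the description shown in the tree frame.
    """
    tree = {}
    for line in lines:
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        kind, _, index = key.strip().partition("_")
        if kind == "Node":
            tree[index] = {"name": value.strip(), "children": {}}
        elif kind == "Child":
            method, _, description = value.strip().partition("|")
            parent = index.rsplit(".", 1)[0]  # "1.1.1" -> node "1.1"
            tree.setdefault(parent, {"name": "", "children": {}})
            tree[parent]["children"][index] = {
                "method": method.strip(),
                "description": description.strip(),
            }
    return tree
```

Error handling and the exact semantics of nested `Node_` entries are simplified here; the real client may treat malformed lines differently.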


The name of the method has to be identical to the name of the XploRe program (Quantlet). The Quantlet itself has to have a procedure with the same name as the method. This procedure is called by the XQC on execution within the method tree.

Example

The following example shows how to set up a simple method tree. First of all, we choose XploRe Quantlets used within this e-book that we want to be part of the method tree. The aim of each Quantlet is to generate graphics from selected data of the data spreadsheet, or simply to generate text output. Before the Quantlets can be used within the method tree, they have to be 'wrapped' in a procedure. The name of the procedure – in our case, for example, 'STFstab08MT' – has to equal the name of the saved XploRe file. Our example Quantlet STFstab08MT.xpl is based on the original Quantlet STFstab08.xpl used in Chapter 1. The procedure must further have two parameters:

• data – used for passing the selected data to the XploRe Quantlet;
• names – contains the names of the selected columns, taken from the header of the spreadsheet.

It might also be necessary to make some minor adjustments within the Quantlet in order to refer to the parameters handed over by the XQC. Those changes depend on the Quantlet itself.

library("graphic")
proc() = STFstab08MT(data, names)
  ...
endp

Figure 21.7: STFstab08MT.xpl.

The XploRe code within the procedure statement is not subject to any further restrictions. Once we have programmed the Quantlet, it needs to be integrated into a method tree. For this purpose we define our own configuration file – xqc_methodtree_STF.ini – with the content shown in Figure 21.8.


Node_1 = Stable Distribution
Node_1.1 = Estimation
Child_1.1.1 = stabreg.xpl|Stabreg
Child_1.1.2 = stabcull.xpl|Stabcull
Child_1.1.3 = stabmom.xpl|Stabmom
Node_1.2 = Examples
Child_1.2.1 = STFstab08.xpl|STFstab08
Child_1.2.2 = STFstab09.xpl|STFstab09
Child_1.2.3 = STFstab10.xpl|STFstab10

Figure 21.8: sample_tree.ini

We create a node called 'Estimation'. Below this first node we set up the Quantlets stabreg.xpl, stabcull.xpl and stabmom.xpl. A second node – 'Examples' – contains the Quantlets STFstab08.xpl, STFstab09.xpl and STFstab10.xpl. The text stated right beside each Quantlet (separated by the '|') is the text we would like to be shown in the method tree. Now that we have programmed the XploRe Quantlets and set up the method tree, we still need to tell the XQC to show our method tree upon opening data sets:

...
ShowMethodTree = yes
MethodTreeIniFile = xqc_methodtree_STF.ini
MethodPath = XQCROOT/xqc_quantlets/
...

Figure 21.9: Extract of the xqc.ini.

The settings shown in Figure 21.9 tell the XQC to show the method tree that is set up in our xqc_methodtree_STF.ini file and to use our XploRe Quantlets stored in a subdirectory of the XQC itself. Our method tree is now ready to be tested. Figure 21.10 shows a screenshot of the final result – the method tree set up above.
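The XQCROOT convention for relative paths can be illustrated with a small helper. This is a hypothetical sketch, not code from the XQC: it resolves a MethodPath value against the directory the client was started in when the path begins with XQCROOT, and passes other (absolute) paths through unchanged.

```python
import os


def resolve_method_path(method_path, start_dir):
    """Resolve a MethodPath setting from xqc.ini.

    Paths starting with XQCROOT are taken relative to start_dir, the
    directory the XQC has been started in; all other paths are treated
    as absolute statements and returned as given.
    """
    prefix = "XQCROOT"
    if method_path.startswith(prefix):
        relative = method_path[len(prefix):].lstrip("/\\")
        return os.path.join(start_dir, relative)
    return method_path
```

With the example settings above, `resolve_method_path("XQCROOT/xqc_quantlets/", start_dir)` yields the `xqc_quantlets/` subdirectory of the XQC's start directory.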

21.3.4

Graphical Output

Figure 21.10: Final result of our tree example.

The previous sections contain some examples of graphical output shown within a display. The XQC's displays do not only show the graphical results received from the XploRe server. Besides the possibility to print the graphic, they offer additional features that can be helpful for investigating data, especially for three-dimensional plots. These features can be accessed via the display's context menu.

Figure 21.11 shows a three-dimensional plot of the 236 implied volatilities and the fitted implied volatility surface of the DAX from January 4, 1999. The red points in the plot represent implied volatilities observed at 7 different maturities, T = 0.13, 0.21, 0.46, 0.71, 0.96, 1.47, 1.97. The plot shows that implied volatilities are observed in strings, and that there are more observations on the strings with small maturities than on the strings with larger maturities. The surface is obtained with the Nadaraya-Watson kernel estimator.

Figure 21.11: Plot of the implied volatility surface from January 4, 1999.

For a more detailed inspection, three-dimensional plots can be rotated using a pointing device such as a mouse (with the left mouse button pressed) or the keyboard's arrow keys. Figure 21.12 shows the same plot as before – it has just been rotated by some degrees. Now one can see the implied volatility "smiles" and "smirks" and recognize the different curvature for different maturities. For further research it would be helpful to know which data point belongs to which string. Here the XQC's display offers a feature to show the point's coordinates, again accessible via the display's context menu. 'Showing coordinates' is not the only option; the user can also switch between the three dimensions – 'Show XY', 'Show XZ' and 'Show YZ'. After 'Showing coordinates' has been chosen, all it takes is to point the mouse arrow at a certain data point in order to get the information.

The possibility to configure the XploRe Quantlet Client for special purposes, as well as its platform independence, makes it well suited for integration into HTML and PDF contents for visualizing statistical and mathematical relationships, as already shown in this e-book.


Figure 21.12: Rotating scatter plot showing the context menu.

Figure 21.13: Showing the coordinates of a data point.

