An Introduction To Copulas 2nd Ed Roger B Nelsen
Springer Series in Statistics
Advisors:
P. Bickel, P. Diggle, S. Fienberg, U. Gather,
I. Olkin, S. Zeger
Alho/Spencer: Statistical Demography and Forecasting.
Andersen/Borgan/Gill/Keiding: Statistical Models Based on Counting Processes.
Atkinson/Riani: Robust Diagnostic Regression Analysis.
Atkinson/Riani/Cerioli: Exploring Multivariate Data with the Forward Search.
Berger: Statistical Decision Theory and Bayesian Analysis, 2nd edition.
Borg/Groenen: Modern Multidimensional Scaling: Theory and Applications,
2nd edition.
Brockwell/Davis: Time Series: Theory and Methods, 2nd edition.
Bucklew: Introduction to Rare Event Simulation.
Cappé/Moulines/Rydén: Inference in Hidden Markov Models.
Chan/Tong: Chaos: A Statistical Perspective.
Chen/Shao/Ibrahim: Monte Carlo Methods in Bayesian Computation.
Coles: An Introduction to Statistical Modeling of Extreme Values.
David/Edwards: Annotated Readings in the History of Statistics.
Devroye/Lugosi: Combinatorial Methods in Density Estimation.
Efromovich: Nonparametric Curve Estimation: Methods, Theory, and Applications.
Eggermont/LaRiccia: Maximum Penalized Likelihood Estimation, Volume I: Density
Estimation.
Fahrmeir/Tutz: Multivariate Statistical Modelling Based on Generalized Linear
Models, 2nd edition.
Fan/Yao: Nonlinear Time Series: Nonparametric and Parametric Methods.
Farebrother: Fitting Linear Relationships: A History of the Calculus of Observations
1750-1900.
Federer: Statistical Design and Analysis for Intercropping Experiments, Volume I:
Two Crops.
Federer: Statistical Design and Analysis for Intercropping Experiments, Volume II:
Three or More Crops.
Ghosh/Ramamoorthi: Bayesian Nonparametrics.
Glaz/Naus/Wallenstein: Scan Statistics.
Good: Permutation Tests: A Practical Guide to Resampling Methods for Testing
Hypotheses, 2nd edition.
Good: Permutation Tests: Parametric and Bootstrap Tests of Hypotheses, 3rd edition.
Gouriéroux: ARCH Models and Financial Applications.
Gu: Smoothing Spline ANOVA Models.
Györfi/Kohler/Krzyżak/Walk: A Distribution-Free Theory of Nonparametric Regression.
Haberman: Advanced Statistics, Volume I: Description of Populations.
Hall: The Bootstrap and Edgeworth Expansion.
Härdle: Smoothing Techniques: With Implementation in S.
Harrell: Regression Modeling Strategies: With Applications to Linear Models, Logistic
Regression, and Survival Analysis.
Hart: Nonparametric Smoothing and Lack-of-Fit Tests.
Hastie/Tibshirani/Friedman: The Elements of Statistical Learning: Data Mining,
Inference, and Prediction.
Hedayat/Sloane/Stufken: Orthogonal Arrays: Theory and Applications.
Heyde: Quasi-Likelihood and its Application: A General Approach to Optimal
Parameter Estimation.
(continued after index)
Roger B. Nelsen
An Introduction
to Copulas
Second Edition
Roger B. Nelsen
Department of Mathematical Sciences
Lewis & Clark College, MSC 110
Portland, OR 97219-7899
USA
nelsen@lclark.edu
Library of Congress Control Number: 2005933254
ISBN-10: 0-387-28659-4
ISBN-13: 978-0387-28659-4
© 2006 Springer Science+Business Media, Inc.
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer Science+Business Media, Inc., 233 Springer
Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or
scholarly analysis. Use in connection with any form of information storage and retrieval, elec-
tronic adaptation, computer software, or by similar or dissimilar methodology now known or
hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even
if they are not identified as such, is not to be taken as an expression of opinion as to whether
or not they are subject to proprietary rights.
Printed in the United States of America. (SBA)
9 8 7 6 5 4 3 2 1
springeronline.com
To the memory of my parents
Ann Bain Nelsen
and
Howard Ernest Nelsen
Preface to the First Edition
In November of 1995, I was at the University of Massachusetts in
Amherst for a few days to attend a symposium held, in part, to celebrate
Professor Berthold Schweizer’s retirement from classroom teaching.
During one afternoon break, a small group of us were having coffee
following several talks in which copulas were mentioned. Someone
asked what one should read to learn the basics about copulas. We men-
tioned several references, mostly research papers and conference pro-
ceedings. I then suggested that perhaps the time was ripe for “some-
one” to write an introductory-level monograph on the subject. A
colleague, I forget who, responded somewhat mischievously, “Good
idea, Roger—why don’t you write it?”
Although flattered by the suggestion, I let it lie until the following
September, when I was in Prague to attend an international conference
on distributions with fixed marginals and moment problems. In Prague,
I asked Giorgio Dall’Aglio, Ingram Olkin, and Abe Sklar if they
thought that there might indeed be interest in the statistical community
for such a book. Encouraged by their responses and knowing that I
would soon be eligible for a sabbatical, I began to give serious thought
to writing an introduction to copulas.
This book is intended for students and practitioners in statistics and
probability—at almost any level. The only prerequisite is a good upper-
level undergraduate course in probability and mathematical statistics,
although some background in nonparametric statistics would be benefi-
cial. Knowledge of measure-theoretic probability is not required.
The book begins with the basic properties of copulas and then pro-
ceeds to present methods for constructing copulas and to discuss the
role played by copulas in modeling and in the study of dependence.
The focus is on bivariate copulas, although most chapters conclude with
a discussion of the multivariate case. As an introduction to copulas, it is
not an encyclopedic reference, and thus it is necessarily incom-
plete—many topics that could have been included are omitted. The
reader seeking additional material on families of continuous bivariate
distributions and their applications should see (Hutchinson and Lai
1990); and the reader interested in learning more about multivariate
copulas and dependence should consult (Joe 1997).
There are about 150 exercises in the book. Although it is certainly
not necessary to do all (or indeed any) of them, the reader is encour-
aged to read through the statements of the exercises before proceeding
to the next section or chapter. Although some exercises do not add
viii Preface to the First Edition
anything to the exposition (e.g., “Prove Theorem 1.1.1”), many pre-
sent examples, counterexamples, and supplementary topics that are of-
ten referenced in subsequent sections.
I would like to thank Lewis & Clark College for granting me a sab-
batical leave in order to write this book; and my colleagues in the De-
partment of Mathematics, Statistics, and Computer Science at Mount
Holyoke College for graciously inviting me to spend the sabbatical year
with them. Thanks, too, to Ingram Olkin for suggesting and encourag-
ing that I consider publication with Springer’s Lecture Notes in Statis-
tics; and to John Kimmel, the executive editor for statistics at Springer,
for his valuable assistance in the publication of this book.
Finally, I would like to express my gratitude and appreciation to all
those with whom I have had the pleasure of working on problems re-
lated to copulas and their applications: Claudi Alsina, Jerry Frank, Greg
Fredricks, Juan Quesada Molina, José Antonio Rodríguez Lallena, Carlo
Sempi, Abe Sklar, and Manuel Úbeda Flores. But most of all I want to
thank my good friend and mentor Berthold Schweizer, who not only
introduced me to the subject but also has consistently and unselfishly
aided me in the years since and who inspired me to write this book. I
also want to thank Bert for his careful and critical reading of earlier
drafts of the manuscript and his invaluable advice on matters mathe-
matical and stylistic. However, it goes without saying that any and all
remaining errors in the book are mine alone.
Roger B. Nelsen
Portland, Oregon
July 1998
Preface to the Second Edition
In preparing a new edition of An Introduction to Copulas, my goals in-
cluded adding some topics omitted from the first edition while keeping
the book at a level appropriate for self-study or for a graduate-level
seminar. The major additions in the second edition are sections on:
• a copula transformation method;
• extreme value copulas;
• copulas with certain analytic or functional properties;
• tail dependence; and
• quasi-copulas.
There are also a number of new examples and exercises and new fig-
ures, including scatterplots of simulations from many of the families of
copulas presented in the text. Typographical errors in the first edition
have been corrected, and the references have been updated.
Thanks again to Lewis & Clark College for granting me a sabbatical
leave in order to prepare this second edition; and to the Department of
Mathematics and Statistics at Mount Holyoke College for again inviting
me to spend the sabbatical year with them. Finally, I would like to thank
readers of the first edition who found numerous typographical errors and
sent me suggestions for this edition.
Roger B. Nelsen
Portland, Oregon
October 2005
Contents

Preface to the First Edition
Preface to the Second Edition
1 Introduction
2 Definitions and Basic Properties
  2.1 Preliminaries
  2.2 Copulas
    Exercises 2.1-2.11
  2.3 Sklar’s Theorem
  2.4 Copulas and Random Variables
    Exercises 2.12-2.17
  2.5 The Fréchet-Hoeffding Bounds for Joint Distribution Functions of Random Variables
  2.6 Survival Copulas
    Exercises 2.18-2.26
  2.7 Symmetry
  2.8 Order
    Exercises 2.27-2.33
  2.9 Random Variate Generation
  2.10 Multivariate Copulas
    Exercises 2.34-2.37
3 Methods of Constructing Copulas
  3.1 The Inversion Method
    3.1.1 The Marshall-Olkin Bivariate Exponential Distribution
    3.1.2 The Circular Uniform Distribution
    Exercises 3.1-3.6
  3.2 Geometric Methods
    3.2.1 Singular Copulas with Prescribed Support
    3.2.2 Ordinal Sums
    Exercises 3.7-3.13
    3.2.3 Shuffles of M
    3.2.4 Convex Sums
    Exercises 3.14-3.20
    3.2.5 Copulas with Prescribed Horizontal or Vertical Sections
    3.2.6 Copulas with Prescribed Diagonal Sections
    Exercises 3.21-3.34
  3.3 Algebraic Methods
    3.3.1 Plackett Distributions
    3.3.2 Ali-Mikhail-Haq Distributions
    3.3.3 A Copula Transformation Method
    3.3.4 Extreme Value Copulas
    Exercises 3.35-3.42
  3.4 Copulas with Specified Properties
    3.4.1 Harmonic Copulas
    3.4.2 Homogeneous Copulas
    3.4.3 Concave and Convex Copulas
  3.5 Constructing Multivariate Copulas
4 Archimedean Copulas
  4.1 Definitions
  4.2 One-parameter Families
  4.3 Fundamental Properties
    Exercises 4.1-4.17
  4.4 Order and Limiting Cases
  4.5 Two-parameter Families
    4.5.1 Families of Generators
    4.5.2 Rational Archimedean Copulas
    Exercises 4.18-4.23
  4.6 Multivariate Archimedean Copulas
    Exercises 4.24-4.25
5 Dependence
  5.1 Concordance
    5.1.1 Kendall’s tau
    Exercises 5.1-5.5
    5.1.2 Spearman’s rho
    Exercises 5.6-5.15
    5.1.3 The Relationship between Kendall’s tau and Spearman’s rho
    5.1.4 Other Concordance Measures
    Exercises 5.16-5.21
  5.2 Dependence Properties
    5.2.1 Quadrant Dependence
    Exercises 5.22-5.29
    5.2.2 Tail Monotonicity
    5.2.3 Stochastic Monotonicity, Corner Set Monotonicity, and Likelihood Ratio Dependence
    Exercises 5.30-5.39
  5.3 Other Measures of Association
    5.3.1 Measures of Dependence
    5.3.2 Measures Based on Gini’s Coefficient
    Exercises 5.40-5.46
  5.4 Tail Dependence
    Exercises 5.47-5.50
  5.5 Median Regression
  5.6 Empirical Copulas
  5.7 Multivariate Dependence
6 Additional Topics
  6.1 Distributions with Fixed Margins
    Exercises 6.1-6.5
  6.2 Quasi-copulas
    Exercises 6.6-6.8
  6.3 Operations on Distribution Functions
  6.4 Markov Processes
    Exercises 6.9-6.13
References
List of Symbols
Index
1 Introduction
The study of copulas and their applications in statistics is a rather mod-
ern phenomenon. Until quite recently, it was difficult to even locate the
word “copula” in the statistical literature. There is no entry for “copula”
in the nine-volume Encyclopedia of Statistical Sciences, nor in the
supplement volume. However, the first update volume, published in
1997, does have such an entry (Fisher 1997). The first reference in the
Current Index to Statistics to a paper using “copula” in the title or as a
keyword is in Volume 7 (1981) [the paper is (Schweizer and Wolff
1981)]—indeed, in the first eighteen volumes (1975-1992) of the Cur-
rent Index to Statistics there are only eleven references to papers men-
tioning copulas. There are, however, 71 references in the next ten vol-
umes (1993-2002).
Further evidence of the growing interest in copulas and their applica-
tions in statistics and probability in the past fifteen years is afforded by
five international conferences devoted to these ideas: the “Symposium
on Distributions with Given Marginals (Fréchet Classes)” in Rome in
1990; the conference on “Distributions with Fixed Marginals, Doubly
Stochastic Measures, and Markov Operators” in Seattle in 1993; the
conference on “Distributions with Given Marginals and Moment Prob-
lems” in Prague in 1996; the conference on “Distributions with Given
Marginals and Statistical Modelling” in Barcelona in 2000; and the
conference on “Dependence Modelling: Statistical Theory and Appli-
cations in Finance and Insurance” in Québec in 2004. As the titles of
these conferences indicate, copulas are intimately related to the study of
distributions with “fixed” or “given” marginal distributions. The
published proceedings of the first four conferences (Dall’Aglio et al.
1991; Rüschendorf et al. 1996; Beneš and Štěpán 1997; Cuadras et al.
2002) are among the most accessible resources for the study of copulas
and their applications.
What are copulas? From one point of view, copulas are functions that
join or “couple” multivariate distribution functions to their one-
dimensional marginal distribution functions. Alternatively, copulas are
multivariate distribution functions whose one-dimensional margins are
uniform on the interval (0,1). Chapter 2 will be devoted to presenting a
complete answer to this question.
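Both viewpoints can be made concrete with a minimal sketch in Python. The margins below (Exp(1) and Exp(2)) are chosen purely for illustration: the product copula Π(u,v) = uv is itself a distribution function on the unit square with uniform margins, and composing it with two one-dimensional distribution functions yields a joint distribution function.

```python
import math

# Viewpoint 2: the product ("independence") copula is a bivariate
# distribution function on [0,1]^2 with uniform margins.
def pi_copula(u, v):
    return u * v

# Viewpoint 1: coupling two one-dimensional margins (hypothetical
# choices, for illustration only: Exp(1) and Exp(2)) gives a joint
# distribution function H(x,y) = C(F(x), G(y)).
def F(x):
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def G(y):
    return 1.0 - math.exp(-2.0 * y) if y > 0 else 0.0

def H(x, y):
    return pi_copula(F(x), G(y))

# Uniform margins of the copula: C(u,1) = u and C(1,v) = v.
print(pi_copula(0.3, 1.0))   # 0.3
# With the product copula the joint df factors as F(x)G(y),
# i.e., X and Y are independent.
print(abs(H(1.0, 2.0) - F(1.0) * G(2.0)) < 1e-12)  # True
```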
Why are copulas of interest to students of probability and statistics?
As Fisher (1997) answers in his article in the first update volume of the
Encyclopedia of Statistical Sciences, “Copulas [are] of interest to statis-
ticians for two main reasons: Firstly, as a way of studying scale-free
measures of dependence; and secondly, as a starting point for con-
structing families of bivariate distributions, sometimes with a view to
simulation.” These topics are explored and developed in Chapters 3, 4,
and 5.
The remainder of this chapter will be devoted to a brief history of the
development and study of copulas. Readers interested in first-hand ac-
counts by some of those who participated in the evolution of the subject
should see the papers by Dall’Aglio (1991) and Schweizer (1991) in
the proceedings of the Rome conference and the paper by Sklar (1996)
in the proceedings of the Seattle conference.
The word copula is a Latin noun that means “a link, tie, bond”
(Cassell’s Latin Dictionary) and is used in grammar and logic to de-
scribe “that part of a proposition which connects the subject and predi-
cate” (Oxford English Dictionary). The word copula was first employed
in a mathematical or statistical sense by Abe Sklar (1959) in the theo-
rem (which now bears his name) describing the functions that “join to-
gether” one-dimensional distribution functions to form multivariate
distribution functions (see Theorems 2.3.3 and 2.10.9). In (Sklar 1996)
we have the following account of the events leading to this use of the
term copula:
Féron (1956), in studying three-dimensional distributions, had introduced
auxiliary functions, defined on the unit cube, that connected such distribu-
tions with their one-dimensional margins. I saw that similar functions could
be defined on the unit n-cube for all n ≥ 2 and would similarly serve to link
n-dimensional distributions to their one-dimensional margins. Having
worked out the basic properties of these functions, I wrote about them to
Fréchet, in English. He asked me to write a note about them in French.
While writing this, I decided I needed a name for these functions. Knowing
the word “copula” as a grammatical term for a word or expression that links
a subject and predicate, I felt that this would make an appropriate name for a
function that links a multidimensional distribution to its one-dimensional
margins, and used it as such. Fréchet received my note, corrected one
mathematical statement, made some minor corrections to my French, and
had the note published by the Statistical Institute of the University of Paris
as Sklar (1959).
But as Sklar notes, the functions themselves predate the use of the
term copula. They appear in the work of Fréchet, Dall’Aglio, Féron,
and many others in the study of multivariate distributions with fixed
univariate marginal distributions. Indeed, many of the basic results
about copulas can be traced to the early work of Wassily Hoeffding. In
(Hoeffding 1940, 1941) one finds bivariate “standardized distributions”
whose support is contained in the square [−1/2, 1/2]² and whose
margins are uniform on the interval [−1/2, 1/2]. (As Schweizer (1991)
opines, “had Hoeffding chosen the unit square [0,1]² instead of
[−1/2, 1/2]² for his normalization, he would have discovered copulas.”)
Hoeffding also obtained the basic best-possible bounds inequality for
these functions, characterized the distributions (“functional depend-
ence”) corresponding to those bounds, and studied measures of de-
pendence that are “scale-invariant,” i.e., invariant under strictly in-
creasing transformations. Unfortunately, until recently this work did not
receive the attention it deserved, due primarily to the fact that the papers
were published in relatively obscure German journals at the outbreak of
the Second World War. However, they have recently been translated into
English and are among Hoeffding’s collected papers, recently pub-
lished by Fisher and Sen (1994). Unaware of Hoeffding’s work, Fréchet
(1951) independently obtained many of the same results, which has led
to terms such as “Fréchet bounds” and “Fréchet classes.” In rec-
ognition of the shared responsibility for these important ideas, we will
refer to “Fréchet-Hoeffding bounds” and “Fréchet-Hoeffding
classes.” After Hoeffding, Fréchet, and Sklar, the functions now known
as copulas were rediscovered by several other authors. Kimeldorf and
Sampson (1975b) referred to them as uniform representations, and
Galambos (1978) and Deheuvels (1978) called them dependence func-
tions.
At the time that Sklar wrote his 1959 paper with the term “copula,”
he was collaborating with Berthold Schweizer in the development of the
theory of probabilistic metric spaces, or PM spaces. During the period
from 1958 through 1976, most of the important results concerning
copulas were obtained in the course of the study of PM spaces. Recall
that (informally) a metric space consists of a set S and a metric d that
measures “distances” between points, say p and q, in S. In a probabil-
istic metric space, we replace the distance d(p,q) by a distribution func-
tion Fpq , whose value Fpq (x) for any real x is the probability that the
distance between p and q is less than x. The first difficulty in the con-
struction of probabilistic metric spaces comes when one tries to find a
“probabilistic” analog of the triangle inequality d(p,r) ≤ d(p,q) +
d(q,r)—what is the corresponding relationship among the distribution
functions Fpr, Fpq, and Fqr for all p, q, and r in S? Karl Menger (1942)
proposed Fpr(x+y) ≥ T(Fpq(x), Fqr(y)), where T is a triangle norm or t-
norm. Like a copula, a t-norm maps [0,1]² to [0,1], and joins distribution
functions. Some t-norms are copulas, and conversely, some copulas
are t-norms. So, in a sense, it was inevitable that copulas would arise in
the study of PM spaces. For a thorough treatment of the theory of PM
spaces and the history of its development, see (Schweizer and Sklar
1983; Schweizer 1991).
Among the most important results in PM spaces—for the statisti-
cian—is the class of Archimedean t-norms, those t-norms T that satisfy
T(u,u) < u for all u in (0,1). Archimedean t-norms that are also copulas
are called Archimedean copulas. Because of their simple forms, the ease
with which they can be constructed, and their many nice properties, Ar-
chimedean copulas frequently appear in discussions of multivariate dis-
tributions—see, for example, (Genest and MacKay 1986a,b; Marshall
and Olkin 1988; Joe 1993, 1997). This important class of copulas is the
subject of Chapter 4.
We now turn our attention to copulas and dependence. The earliest
paper explicitly relating copulas to the study of dependence among
random variables appears to be (Schweizer and Wolff 1981). In that pa-
per, Schweizer and Wolff discussed and modified Rényi’s (1959) crite-
ria for measures of dependence between pairs of random variables, pre-
sented the basic invariance properties of copulas under strictly
monotone transformations of random variables (see Theorems 2.4.3
and 2.4.4), and introduced the measure of dependence now known as
Schweizer and Wolff’s σ (see Section 5.3.1). In their words, since
... under almost surely increasing transformations of (the random vari-
ables), the copula is invariant while the margins may be changed at will, it
follows that it is precisely the copula which captures those properties of the
joint distribution which are invariant under almost surely strictly increasing
transformations. Hence the study of rank statistics—insofar as it is the
study of properties invariant under such transformations—may be character-
ized as the study of copulas and copula-invariant properties.
Of course, copulas appear implicitly in earlier work on dependence
by many other authors, too many to list here, so we will mention only
two. Foremost is Hoeffding. In addition to studying the basic properties
of “standardized distributions” (i.e., copulas), Hoeffding (1940, 1941)
used them to study nonparametric measures of association such as
Spearman’s rho and his “dependence index” Φ² (see Section 5.3.1).
Deheuvels (1979, 1981a,b,c) used “empirical dependence functions”
(i.e., empirical copulas, the sample analogs of copulas—see Section 5.6)
to estimate the population copula and to construct various nonparamet-
ric tests of independence. Chapter 5 is devoted to an introduction to the
role played by copulas in the study of dependence.
Although this book concentrates on the two applications of copulas
mentioned by Fisher (1997)—the construction of families of multivari-
ate distributions and the study of dependence—copulas are being ex-
ploited in other ways. We mention but one, which we discuss in the final
chapter. Through an ingenious definition of a “product” * of copulas,
Darsow, Nguyen, and Olsen (1992) have shown that the Chapman-
Kolmogorov equations for the transition probabilities in a real stochas-
tic process can be expressed succinctly in terms of the *-product of
copulas. This new approach to the theory of Markov processes may well
be the key to “capturing the Markov property of such processes in a
framework as simple and perspicuous as the conventional framework
for analyzing Markov chains” (Schweizer 1991).
The study of copulas and the role they play in probability, statistics,
and stochastic processes is a subject still in its infancy. There are many
open problems and much work to be done.
2 Definitions and Basic Properties
In the Introduction, we referred to copulas as “functions that join or
couple multivariate distribution functions to their one-dimensional
marginal distribution functions” and as “distribution functions whose
one-dimensional margins are uniform.” But neither of these statements
is a definition—hence we will devote this chapter to giving a precise
definition of copulas and to examining some of their elementary prop-
erties.
But first we present a glimpse of where we are headed. Consider for a
moment a pair of random variables X and Y, with distribution functions
F(x) = P[X ≤ x] and G(y) = P[Y ≤ y], respectively, and a joint distribution
function H(x,y) = P[X ≤ x, Y ≤ y] (we will review definitions of
random variables, distribution functions, and other important topics as
needed in the course of this chapter). To each pair of real numbers (x,y)
we can associate three numbers: F(x), G(y), and H(x,y). Note that each
of these numbers lies in the interval [0,1]. In other words, each pair (x,y)
of real numbers leads to a point (F(x), G(y)) in the unit square
[0,1]×[0,1], and this ordered pair in turn corresponds to a number
H(x,y) in [0,1]. We will show that this correspondence, which assigns the
value of the joint distribution function to each ordered pair of values of
the individual distribution functions, is indeed a function. Such func-
tions are copulas.
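This correspondence can be explored numerically. The sketch below assumes the function H of Example 2.3 in Sect. 2.1, whose margins are F(x) = (x+1)/2 and G(y) = 1 − e^{−y}; evaluating H at the points (F⁻¹(u), G⁻¹(v)) traces out the underlying copula, which one can verify algebraically equals uv/(u + v − uv) in this case.

```python
import math

# H from Example 2.3 in Sect. 2.1, on [-1,1] x [0, inf).
def H(x, y):
    return (x + 1.0) * (math.exp(y) - 1.0) / (x + 2.0 * math.exp(y) - 1.0)

# Margins: F(x) = (x+1)/2 and G(y) = 1 - e^{-y}, so
# F^{-1}(u) = 2u - 1 and G^{-1}(v) = -ln(1 - v).  Evaluating H at
# (F^{-1}(u), G^{-1}(v)) recovers the copula C with H = C(F, G).
def C(u, v):
    return H(2.0 * u - 1.0, -math.log(1.0 - v))

# One can check algebraically that here C(u,v) = uv/(u + v - uv).
for u, v in [(0.5, 0.5), (0.2, 0.9), (0.75, 0.1)]:
    assert abs(C(u, v) - u * v / (u + v - u * v)) < 1e-9
print(C(0.5, 0.5))  # ≈ 1/3
```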
To accomplish what we have outlined above, we need to generalize
the notion of “nondecreasing” for univariate functions to a concept
applicable to multivariate functions. We begin with some notation and
definitions. In Sects. 2.1-2.9, we confine ourselves to the two-
dimensional case; in Sect. 2.10, we consider n dimensions.
2.1 Preliminaries
The focus of this section is the notion of a “2-increasing” function—a
two-dimensional analog of a nondecreasing function of one variable.
But first we need to introduce some notation. We will let R denote the
ordinary real line (−∞,∞), R̄ denote the extended real line [−∞,∞], and
R̄² denote the extended real plane R̄ × R̄. A rectangle in R̄² is the
Cartesian product B of two closed intervals: B = [x1,x2] × [y1,y2]. The
vertices of a rectangle B are the points (x1,y1), (x1,y2), (x2,y1), and
(x2,y2). The unit square I² is the product I × I where I = [0,1]. A 2-place
real function H is a function whose domain, DomH, is a subset of R̄²
and whose range, RanH, is a subset of R.
Definition 2.1.1. Let S1 and S2 be nonempty subsets of R̄, and let H be
a two-place real function such that DomH = S1 × S2. Let B =
[x1,x2] × [y1,y2] be a rectangle all of whose vertices are in DomH. Then
the H-volume of B is given by

V_H(B) = H(x2,y2) − H(x2,y1) − H(x1,y2) + H(x1,y1).   (2.1.1)
Note that if we define the first order differences of H on the rectangle
B as
Δ_{x1}^{x2} H(x,y) = H(x2,y) − H(x1,y) and Δ_{y1}^{y2} H(x,y) = H(x,y2) − H(x,y1),

then the H-volume of a rectangle B is the second order difference of H
on B,

V_H(B) = Δ_{y1}^{y2} Δ_{x1}^{x2} H(x,y).
Definition 2.1.2. A 2-place real function H is 2-increasing if V_H(B) ≥ 0
for all rectangles B whose vertices lie in DomH.
When H is 2-increasing, we will occasionally refer to the H-volume of
a rectangle B as the H-measure of B. Some authors refer to 2-increasing
functions as quasi-monotone.
We note here that the statement “H is 2-increasing” neither implies
nor is implied by the statement “H is nondecreasing in each argu-
ment,” as the following two examples illustrate. The verifications are
elementary, and are left as exercises.
Example 2.1. Let H be the function defined on I² by H(x,y) =
max(x,y). Then H is a nondecreasing function of x and of y; however,
V_H(I²) = −1, so that H is not 2-increasing. ■

Example 2.2. Let H be the function defined on I² by H(x,y) =
(2x−1)(2y−1). Then H is 2-increasing; however, it is a decreasing
function of x for each y in (0,1/2) and a decreasing function of y for
each x in (0,1/2). ■
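Both examples are easy to check mechanically. The sketch below (with hypothetical helper names) computes H-volumes via Eq. (2.1.1):

```python
# H-volume of a rectangle B = [x1,x2] x [y1,y2], per Eq. (2.1.1).
def h_volume(H, x1, x2, y1, y2):
    return H(x2, y2) - H(x2, y1) - H(x1, y2) + H(x1, y1)

# Example 2.1: H(x,y) = max(x,y) is nondecreasing in each argument,
# yet V_H(I^2) = 1 - 1 - 1 + 0 = -1, so H is not 2-increasing.
print(h_volume(max, 0.0, 1.0, 0.0, 1.0))  # -1.0

# Example 2.2: H(x,y) = (2x-1)(2y-1) has volume 4(x2-x1)(y2-y1) >= 0
# on every rectangle, yet it decreases in x whenever y < 1/2.
H2 = lambda x, y: (2 * x - 1) * (2 * y - 1)
print(h_volume(H2, 0.1, 0.4, 0.2, 0.3))   # nonnegative
print(H2(0.1, 0.25) > H2(0.4, 0.25))      # True: decreasing in x
```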
The following lemmas will be very useful in the next section in es-
tablishing the continuity of subcopulas and copulas. The first is a direct
consequence of Definitions 2.1.1 and 2.1.2.
Lemma 2.1.3. Let S1 and S2 be nonempty subsets of R̄, and let H be a
2-increasing function with domain S1 × S2. Let x1, x2 be in S1 with
x1 ≤ x2, and let y1, y2 be in S2 with y1 ≤ y2. Then the function
t ↦ H(t,y2) − H(t,y1) is nondecreasing on S1, and the function
t ↦ H(x2,t) − H(x1,t) is nondecreasing on S2.
As an immediate application of this lemma, we can show that with an
additional hypothesis, a 2-increasing function H is nondecreasing in
each argument. Suppose S1 has a least element a1 and that S2 has a
least element a2. We say that a function H from S1 × S2 into R is
grounded if H(x,a2) = 0 = H(a1,y) for all (x,y) in S1 × S2. Hence we
have
Lemma 2.1.4. Let S1 and S2 be nonempty subsets of R̄, and let H be a
grounded 2-increasing function with domain S1 × S2. Then H is
nondecreasing in each argument.
Proof. Let a1,a2 denote the least elements of S1,S2, respectively, and
set x1 = a1, y1 = a2 in Lemma 2.1.3. ■
Now suppose that S1 has a greatest element b1 and that S2 has a
greatest element b2. We then say that a function H from S1 × S2 into R
has margins, and that the margins of H are the functions F and G given
by:
DomF = S1, and F x H x b
( ) ( , )
= 2 for all x in S1;
DomG = S2, and G y H b y
( ) ( , )
= 1 for all y in S2.
Example 2.3. Let H be the function with domain [–1,1]¥[0,•] given by
H x y
x e
x e
y
y
( , )
( )( )
=
+ -
+ -
1 1
2 1
.
Then H is grounded because H(x,0) = 0 and H(–1,y) = 0; and H has
margins F(x) and G(y) given by
F x H x x
( ) ( , ) ( )
= • = +1 2 and G y H y e y
( ) ( , )
= = - -
1 1 . 
We close this section with an important lemma concerning grounded
2-increasing functions with margins.
Lemma 2.1.5. Let S1 and S2 be nonempty subsets of R, and let H be a grounded 2-increasing function, with margins, whose domain is S1 × S2. Let (x1,y1) and (x2,y2) be any points in S1 × S2. Then
|H(x2,y2) – H(x1,y1)| ≤ |F(x2) – F(x1)| + |G(y2) – G(y1)|.
Proof. From the triangle inequality, we have
|H(x2,y2) – H(x1,y1)| ≤ |H(x2,y2) – H(x1,y2)| + |H(x1,y2) – H(x1,y1)|.
Now assume x1 ≤ x2. Because H is grounded, 2-increasing, and has margins, Lemmas 2.1.3 and 2.1.4 yield 0 ≤ H(x2,y2) – H(x1,y2) ≤ F(x2) – F(x1). An analogous inequality holds when x2 ≤ x1; hence it follows that for any x1, x2 in S1, |H(x2,y2) – H(x1,y2)| ≤ |F(x2) – F(x1)|. Similarly, for any y1, y2 in S2, |H(x1,y2) – H(x1,y1)| ≤ |G(y2) – G(y1)|, which completes the proof. ■
2.2 Copulas
We are now in a position to define the functions—copulas—that are the
subject of this book. To do so, we first define subcopulas as a certain
class of grounded 2-increasing functions with margins; then we define
copulas as subcopulas with domain I².
Definition 2.2.1. A two-dimensional subcopula (or 2-subcopula, or briefly, a subcopula) is a function C′ with the following properties:
1. DomC′ = S1 × S2, where S1 and S2 are subsets of I containing 0 and 1;
2. C′ is grounded and 2-increasing;
3. For every u in S1 and every v in S2,
C′(u,1) = u and C′(1,v) = v. (2.2.1)
Note that for every (u,v) in DomC′, 0 ≤ C′(u,v) ≤ 1, so that RanC′ is also a subset of I.
Definition 2.2.2. A two-dimensional copula (or 2-copula, or briefly, a copula) is a 2-subcopula C whose domain is I².
Equivalently, a copula is a function C from I² to I with the following properties:
1. For every u, v in I,
C(u,0) = 0 = C(0,v) (2.2.2a)
and
C(u,1) = u and C(1,v) = v; (2.2.2b)
2. For every u1, u2, v1, v2 in I such that u1 ≤ u2 and v1 ≤ v2,
C(u2,v2) – C(u2,v1) – C(u1,v2) + C(u1,v1) ≥ 0. (2.2.3)
Because C(u,v) = V_C([0,u] × [0,v]), one can think of C(u,v) as an assignment of a number in I to the rectangle [0,u] × [0,v]. Thus (2.2.3) gives an “inclusion-exclusion” type formula for the number assigned by C to each rectangle [u1,u2] × [v1,v2] in I² and states that the number so assigned must be nonnegative.
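Conditions (2.2.2a), (2.2.2b), and (2.2.3) lend themselves to a direct numerical check. A minimal sketch (our own helper, not from the text; a finite grid can only refute the conditions, never prove them):

```python
# Check the copula conditions (2.2.2a), (2.2.2b), (2.2.3) on an n x n grid.
def looks_like_copula(C, n=50, tol=1e-12):
    grid = [i / n for i in range(n + 1)]
    for t in grid:
        if abs(C(t, 0)) > tol or abs(C(0, t)) > tol:          # (2.2.2a)
            return False
        if abs(C(t, 1) - t) > tol or abs(C(1, t) - t) > tol:  # (2.2.2b)
            return False
    for i in range(n):
        for j in range(n):
            u1, u2 = grid[i], grid[i + 1]
            v1, v2 = grid[j], grid[j + 1]
            # Rectangle volume (2.2.3) must be nonnegative.
            if C(u2, v2) - C(u2, v1) - C(u1, v2) + C(u1, v1) < -tol:
                return False
    return True

M = lambda u, v: min(u, v)
W = lambda u, v: max(u + v - 1, 0)
P = lambda u, v: u * v
print(all(looks_like_copula(C) for C in (M, W, P)))  # True
print(looks_like_copula(lambda u, v: max(u, v)))     # False: fails (2.2.2b)
```

The last line is Example 2.1 again: max(u,v) fails the boundary conditions (and is not 2-increasing), so the check rejects it.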
The distinction between a subcopula and a copula (the domain) may
appear to be a minor one, but it will be rather important in the next sec-
tion when we discuss Sklar’s theorem. In addition, many of the impor-
tant properties of copulas are actually properties of subcopulas.
Theorem 2.2.3. Let C′ be a subcopula. Then for every (u,v) in DomC′,
max(u + v – 1, 0) ≤ C′(u,v) ≤ min(u,v). (2.2.4)
Proof. Let (u,v) be an arbitrary point in DomC′. Now C′(u,v) ≤ C′(u,1) = u and C′(u,v) ≤ C′(1,v) = v yield C′(u,v) ≤ min(u,v). Furthermore, V_C′([u,1] × [v,1]) ≥ 0 implies C′(u,v) ≥ u + v – 1, which when combined with C′(u,v) ≥ 0 yields C′(u,v) ≥ max(u + v – 1, 0). ■
Because every copula is a subcopula, the inequality in the above theorem holds for copulas. Indeed, the bounds in (2.2.4) are themselves copulas (see Exercise 2.2) and are commonly denoted by M(u,v) = min(u,v) and W(u,v) = max(u + v – 1, 0). Thus for every copula C and every (u,v) in I²,
W(u,v) ≤ C(u,v) ≤ M(u,v). (2.2.5)
Inequality (2.2.5) is the copula version of the Fréchet-Hoeffding
bounds inequality, which we shall encounter later in terms of distribu-
tion functions. We refer to M as the Fréchet-Hoeffding upper bound
and W as the Fréchet-Hoeffding lower bound. A third important copula
that we will frequently encounter is the product copula P(u,v) = uv.
The following theorem, which follows directly from Lemma 2.1.5, establishes the continuity of subcopulas—and hence of copulas—via a Lipschitz condition on I².
Theorem 2.2.4. Let C′ be a subcopula. Then for every (u1,v1), (u2,v2) in DomC′,
|C′(u2,v2) – C′(u1,v1)| ≤ |u2 – u1| + |v2 – v1|. (2.2.6)
Hence C′ is uniformly continuous on its domain.
The sections of a copula will be employed in the construction of
copulas in the next chapter, and will be used in Chapter 5 to provide
interpretations of certain dependence properties:
Definition 2.2.5. Let C be a copula, and let a be any number in I. The horizontal section of C at a is the function from I to I given by t ↦ C(t,a); the vertical section of C at a is the function from I to I given by t ↦ C(a,t); and the diagonal section of C is the function δC from I to I defined by δC(t) = C(t,t).
The following corollary is an immediate consequence of Lemma
2.1.4 and Theorem 2.2.4.
Corollary 2.2.6. The horizontal, vertical, and diagonal sections of a
copula C are all nondecreasing and uniformly continuous on I.
Various applications of copulas that we will encounter in later chap-
ters involve the shape of the graph of a copula, i.e., the surface z =
C(u,v). It follows from Definition 2.2.2 and Theorem 2.2.4 that the
graph of any copula is a continuous surface within the unit cube I³
whose boundary is the skew quadrilateral with vertices (0,0,0), (1,0,0),
(1,1,1), and (0,1,0); and from Theorem 2.2.3 that this graph lies be-
tween the graphs of the Fréchet-Hoeffding bounds, i.e., the surfaces z =
M(u,v) and z = W(u,v). In Fig. 2.1 we present the graphs of the copulas
M and W, as well as the graph of P, a portion of the hyperbolic
paraboloid z = uv.
Fig. 2.1. Graphs of the copulas M, P, and W
A simple but useful way to present the graph of a copula is with a contour diagram (Conway 1979), that is, with graphs of its level sets—the sets in I² given by C(u,v) = a constant, for selected constants in I. In Fig. 2.2 we present the contour diagrams of the copulas M, P, and W. Note that the points (t,1) and (1,t) are each members of the level set corresponding to the constant t. Hence we do not need to label the level sets in the diagram, as the boundary conditions C(1,t) = t = C(t,1) readily provide the constant for each level set.
Fig. 2.2. Contour diagrams of the copulas M, P, and W
Also note that, given any copula C, it follows from (2.2.5) that for a given t in I the graph of the level set {(u,v) ∈ I² | C(u,v) = t} must lie in the shaded triangle in Fig. 2.3, whose boundaries are the level sets determined by M(u,v) = t and W(u,v) = t.
Fig. 2.3. The region that contains the level set {(u,v) ∈ I² | C(u,v) = t}
We conclude this section with two theorems concerning the partial derivatives of copulas. The word “almost” is used in the sense of Lebesgue measure.
Theorem 2.2.7. Let C be a copula. For any v in I, the partial derivative ∂C(u,v)/∂u exists for almost all u, and for such v and u,
0 ≤ ∂C(u,v)/∂u ≤ 1. (2.2.7)
Similarly, for any u in I, the partial derivative ∂C(u,v)/∂v exists for almost all v, and for such u and v,
0 ≤ ∂C(u,v)/∂v ≤ 1. (2.2.8)
Furthermore, the functions u ↦ ∂C(u,v)/∂v and v ↦ ∂C(u,v)/∂u are defined and nondecreasing almost everywhere on I.
Proof. The existence of the partial derivatives ∂C(u,v)/∂u and ∂C(u,v)/∂v is immediate because monotone functions (here the horizontal and vertical sections of the copula) are differentiable almost everywhere. Inequalities (2.2.7) and (2.2.8) follow from (2.2.6) by setting v1 = v2 and u1 = u2, respectively. If v1 ≤ v2, then, from Lemma 2.1.3, the function u ↦ C(u,v2) – C(u,v1) is nondecreasing. Hence ∂(C(u,v2) – C(u,v1))/∂u is defined and nonnegative almost everywhere on I, from which it follows that v ↦ ∂C(u,v)/∂u is defined and nondecreasing almost everywhere on I. A similar result holds for u ↦ ∂C(u,v)/∂v. ■
Theorem 2.2.8. Let C be a copula. If ∂C(u,v)/∂v and ∂²C(u,v)/∂u∂v are continuous on I² and ∂C(u,v)/∂u exists for all u ∈ (0,1) when v = 0, then ∂C(u,v)/∂u and ∂²C(u,v)/∂v∂u exist in (0,1)² and ∂²C(u,v)/∂u∂v = ∂²C(u,v)/∂v∂u.
Proof. See (Seeley 1961). ■
Exercises
2.1 Verify the statements in Examples 2.1 and 2.2.
2.2 Show that M(u,v) = min(u,v), W(u,v) = max(u + v – 1, 0), and P(u,v) = uv are indeed copulas.
2.3 (a) Let C0 and C1 be copulas, and let θ be any number in I. Show that the weighted arithmetic mean (1 – θ)C0 + θC1 is also a copula. Hence conclude that any convex linear combination of copulas is a copula.
(b) Show that the geometric mean of two copulas may fail to be a copula. [Hint: Let C be the geometric mean of P and W, and show that the C-volume of the rectangle [1/2,3/4] × [1/2,3/4] is negative.]
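The hinted computation in part (b) is easy to carry out numerically. A sketch (assuming, as the hint suggests, that the geometric mean is taken pointwise):

```python
import math

# Pointwise geometric mean of the copulas P and W (Exercise 2.3(b)).
P = lambda u, v: u * v
W = lambda u, v: max(u + v - 1, 0)
G = lambda u, v: math.sqrt(P(u, v) * W(u, v))

# C-volume of a rectangle [u1,u2] x [v1,v2].
def c_volume(C, u1, u2, v1, v2):
    return C(u2, v2) - C(u2, v1) - C(u1, v2) + C(u1, v1)

vol = c_volume(G, 0.5, 0.75, 0.5, 0.75)
print(vol < 0)   # True: G violates (2.2.3), so it is not a copula
```

The volume works out to roughly –0.082, so a single rectangle already refutes 2-increasingness.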
2.4 The Fréchet and Mardia families of copulas.
(a) Let α, β be in I with α + β ≤ 1. Set
Cα,β(u,v) = αM(u,v) + (1 – α – β)P(u,v) + βW(u,v).
Show that Cα,β is a copula. A family of copulas that includes M, P, and W is called comprehensive. This two-parameter comprehensive family is due to Fréchet (1958).
(b) Let θ be in [–1,1], and set
Cθ(u,v) = (θ²(1 + θ)/2)M(u,v) + (1 – θ²)P(u,v) + (θ²(1 – θ)/2)W(u,v). (2.2.9)
Show that Cθ is a copula. This one-parameter comprehensive family is due to Mardia (1970).
2.5 The Cuadras-Augé family of copulas. Let θ be in I, and set
Cθ(u,v) = [min(u,v)]^θ [uv]^(1–θ) = u·v^(1–θ) if u ≤ v, and u^(1–θ)·v if u ≥ v. (2.2.10)
Show that Cθ is a copula. Note that C0 = P and C1 = M. This family (weighted geometric means of M and P) is due to Cuadras and Augé (1981).
2.6 Let C be a copula, and let (a,b) be any point in I². For (u,v) in I², define
Ka,b(u,v) = V_C([a(1 – u), u + a(1 – u)] × [b(1 – v), v + b(1 – v)]).
Show that Ka,b is a copula. Note that K0,0(u,v) = C(u,v). Several special cases will be of interest in Sects. 2.4, 2.7, and 6.4, namely:
K0,1(u,v) = u – C(u,1 – v),
K1,0(u,v) = v – C(1 – u,v), and
K1,1(u,v) = u + v – 1 + C(1 – u,1 – v).
2.7 Let f be a function from I² into I that is nondecreasing in each variable and has margins given by f(t,1) = t = f(1,t) for all t in I. Prove that f is grounded.
2.8 (a) Show that for any copula C, max(2t – 1, 0) ≤ δC(t) ≤ t for all t in I.
(b) Show that δC(t) = δM(t) for all t in I implies C = M.
(c) Show that δC(t) = δW(t) for all t in I does not imply that C = W.
2.9 The secondary diagonal section of C is given by C(t,1 – t). Show that C(t,1 – t) = 0 for all t in I implies C = W.
2.10 Let t be in [0,1), and let Ct be the function from I² into I given by
Ct(u,v) = max(u + v – 1, t) if (u,v) ∈ [t,1]², and min(u,v) otherwise.
(a) Show that Ct is a copula.
(b) Show that the level set {(u,v) ∈ I² | Ct(u,v) = t} is the set of points in the triangle with vertices (t,1), (1,t), and (t,t), that is, the shaded region in Fig. 2.3. The copula in this exercise illustrates why the term “level set” is preferable to “level curve” for some copulas.
2.11 This exercise shows that the 2-increasing condition (2.2.3) for copulas is not a consequence of simpler properties. Let Q be the function from I² into I given by
Q(u,v) = min(u, v, 1/3, u + v – 2/3) if 2/3 ≤ u + v ≤ 4/3, and max(u + v – 1, 0) otherwise;
that is, Q has the values given in Fig. 2.4 in the various parts of I².
(a) Show that for every u,v in I, Q(u,0) = 0 = Q(0,v), Q(u,1) = u and Q(1,v) = v; W(u,v) ≤ Q(u,v) ≤ M(u,v); and that Q is continuous, satisfies the Lipschitz condition (2.2.6), and is nondecreasing in each variable.
(b) Show that Q fails to be 2-increasing, and hence is not a copula. [Hint: consider the Q-volume of the rectangle [1/3,2/3]².]
[Figure shows the values 0, u + v – (2/3), 1/3, u, v, and u + v – 1 of Q on the corresponding regions of I².]
Fig. 2.4. The function Q in Exercise 2.11
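The hint in Exercise 2.11(b) is easy to carry out numerically; a sketch (our own encoding of Q):

```python
# Q from Exercise 2.11, and the Q-volume of the rectangle [1/3, 2/3]^2.
def Q(u, v):
    if 2/3 <= u + v <= 4/3:
        return min(u, v, 1/3, u + v - 2/3)
    return max(u + v - 1, 0)

a, b = 1/3, 2/3
vol = Q(b, b) - Q(b, a) - Q(a, b) + Q(a, a)
print(vol < 0)   # True: Q is not 2-increasing, hence not a copula
```

The volume is –1/3, even though Q satisfies every other property listed in part (a).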
2.3 Sklar’s Theorem
The theorem in the title of this section is central to the theory of copulas
and is the foundation of many, if not most, of the applications of that
theory to statistics. Sklar’s theorem elucidates the role that copulas play
in the relationship between multivariate distribution functions and their
univariate margins. Thus we begin this section with a short discussion of
distribution functions.
Definition 2.3.1. A distribution function is a function F with domain R
such that
1. F is nondecreasing,
2. F(–∞) = 0 and F(∞) = 1.
Example 2.4. For any number a in R, the unit step at a is the distribution function εa given by
εa(x) = 0 for x in [–∞,a), and εa(x) = 1 for x in [a,∞];
and for any numbers a, b in R with a < b, the uniform distribution on [a,b] is the distribution function Uab given by
Uab(x) = 0 for x in [–∞,a); Uab(x) = (x – a)/(b – a) for x in [a,b]; and Uab(x) = 1 for x in (b,∞]. ■
Definition 2.3.2. A joint distribution function is a function H with domain R² such that
1. H is 2-increasing,
2. H(x,–∞) = H(–∞,y) = 0, and H(∞,∞) = 1.
Thus H is grounded, and because DomH = R², H has margins F and G given by F(x) = H(x,∞) and G(y) = H(∞,y). By virtue of Corollary 2.2.6, F and G are distribution functions.
Example 2.5. Let H be the function with domain R² given by
H(x,y) = (x + 1)(e^y – 1)/(x + 2e^y – 1) for (x,y) in [–1,1] × [0,∞];
H(x,y) = 1 – e^(–y) for (x,y) in (1,∞] × [0,∞];
and H(x,y) = 0 elsewhere.
It is tedious but elementary to verify that H is 2-increasing and grounded, and that H(∞,∞) = 1. Hence H is a joint distribution function. The margins of H are the distribution functions F and G given by
F = U–1,1, and G(y) = 0 for y in [–∞,0), G(y) = 1 – e^(–y) for y in [0,∞].
[Cf. Examples 2.3 and 2.4.] ■
Note that there is nothing “probabilistic” in these definitions of dis-
tribution functions. Random variables are not mentioned, nor is left-
continuity or right-continuity. All the distribution functions of one or
of two random variables usually encountered in statistics satisfy either
the first or the second of the above definitions. Hence any results we de-
rive for such distribution functions will hold when we discuss random
variables, regardless of any additional restrictions that may be imposed.
Theorem 2.3.3. Sklar’s theorem. Let H be a joint distribution function
with margins F and G. Then there exists a copula C such that for all x,y
in R,
H(x,y) = C(F(x),G(y)). (2.3.1)
If F and G are continuous, then C is unique; otherwise, C is uniquely
determined on RanF × RanG. Conversely, if C is a copula and F and G
are distribution functions, then the function H defined by (2.3.1) is a
joint distribution function with margins F and G.
This theorem first appeared in (Sklar 1959). The name “copula”
was chosen to emphasize the manner in which a copula “couples” a
joint distribution function to its univariate margins. The argument that
we give below is essentially the same as in (Schweizer and Sklar 1974).
It requires two lemmas.
Lemma 2.3.4. Let H be a joint distribution function with margins F and G. Then there exists a unique subcopula C′ such that
1. DomC′ = RanF × RanG,
2. For all x,y in R, H(x,y) = C′(F(x),G(y)).
Proof. The joint distribution H satisfies the hypotheses of Lemma 2.1.5 with S1 = S2 = R. Hence for any points (x1,y1) and (x2,y2) in R²,
|H(x2,y2) – H(x1,y1)| ≤ |F(x2) – F(x1)| + |G(y2) – G(y1)|.
It follows that if F(x1) = F(x2) and G(y1) = G(y2), then H(x1,y1) = H(x2,y2). Thus the set of ordered pairs
{((F(x),G(y)), H(x,y)) | x,y in R}
defines a 2-place real function C′ whose domain is RanF × RanG. That this function is a subcopula follows directly from the properties of H. For instance, to show that (2.2.1) holds, we first note that for each u in RanF, there is an x in R such that F(x) = u. Thus C′(u,1) = C′(F(x),G(∞)) = H(x,∞) = F(x) = u. Verifications of the other conditions in Definition 2.2.1 are similar. ■
Lemma 2.3.5. Let C′ be a subcopula. Then there exists a copula C such that C(u,v) = C′(u,v) for all (u,v) in DomC′; i.e., any subcopula can be extended to a copula. The extension is generally non-unique.
Proof. Let DomC′ = S1 × S2. Using Theorem 2.2.4 and the fact that C′ is nondecreasing in each place, we can extend C′ by continuity to a function C″ with domain S̄1 × S̄2, where S̄1 is the closure of S1 and S̄2 is the closure of S2. Clearly C″ is also a subcopula. We next extend C″ to a function C with domain I². To this end, let (a,b) be any point in I², let a1 and a2 be, respectively, the greatest and least elements of S̄1 that satisfy a1 ≤ a ≤ a2; and let b1 and b2 be, respectively, the greatest and least elements of S̄2 that satisfy b1 ≤ b ≤ b2. Note that if a is in S̄1, then a1 = a = a2; and if b is in S̄2, then b1 = b = b2. Now let
λ1 = (a – a1)/(a2 – a1) if a1 < a2, and λ1 = 1 if a1 = a2;
μ1 = (b – b1)/(b2 – b1) if b1 < b2, and μ1 = 1 if b1 = b2;
and define
C(a,b) = (1 – λ1)(1 – μ1)C″(a1,b1) + (1 – λ1)μ1C″(a1,b2)
+ λ1(1 – μ1)C″(a2,b1) + λ1μ1C″(a2,b2). (2.3.2)
Notice that the interpolation defined in (2.3.2) is linear in each place (what we call bilinear interpolation) because λ1 and μ1 are linear in a and b, respectively.
It is obvious that DomC = I², that C(a,b) = C″(a,b) for any (a,b) in DomC″; and that C satisfies (2.2.2a) and (2.2.2b). Hence we only must show that C satisfies (2.2.3). To accomplish this, let (c,d) be another point in I² such that c ≥ a and d ≥ b, and let c1, d1, c2, d2, λ2, μ2 be related to c and d as a1, b1, a2, b2, λ1, μ1 are related to a and b. In evaluating V_C(B) for the rectangle B = [a,c] × [b,d], there will be several cases to consider, depending upon whether or not there is a point in S̄1 strictly between a and c, and whether or not there is a point in S̄2 strictly between b and d. In the simplest of these cases, there is no point in S̄1 strictly between a and c, and no point in S̄2 strictly between b and d, so that c1 = a1, c2 = a2, d1 = b1, and d2 = b2. Substituting (2.3.2) and the corresponding terms for C(a,d), C(c,b) and C(c,d) into the expression given by (2.1.1) for V_C(B) and simplifying yields
V_C(B) = V_C([a,c] × [b,d]) = (λ2 – λ1)(μ2 – μ1)V_C″([a1,a2] × [b1,b2]),
from which it follows that V_C(B) ≥ 0 in this case, as c ≥ a and d ≥ b imply λ2 ≥ λ1 and μ2 ≥ μ1.
Fig. 2.5. The least simple case in the proof of Lemma 2.3.5
At the other extreme, the least simple case occurs when there is at least one point in S̄1 strictly between a and c, and at least one point in S̄2 strictly between b and d, so that a < a2 ≤ c1 < c and b < b2 ≤ d1 < d. In this case—which is illustrated in Fig. 2.5—substituting (2.3.2) and the corresponding terms for C(a,d), C(c,b) and C(c,d) into the expression given by (2.1.1) for V_C(B) and rearranging the terms yields
V_C(B) = (1 – λ1)μ2V_C″([a1,a2] × [d1,d2]) + μ2V_C″([a2,c1] × [d1,d2])
+ λ2μ2V_C″([c1,c2] × [d1,d2]) + (1 – λ1)V_C″([a1,a2] × [b2,d1])
+ V_C″([a2,c1] × [b2,d1]) + λ2V_C″([c1,c2] × [b2,d1])
+ (1 – λ1)(1 – μ1)V_C″([a1,a2] × [b1,b2]) + (1 – μ1)V_C″([a2,c1] × [b1,b2])
+ λ2(1 – μ1)V_C″([c1,c2] × [b1,b2]).
The right-hand side of the above expression is a combination of nine nonnegative quantities (the C″-volumes of the nine rectangles determined by the dashed lines in Fig. 2.5) with nonnegative coefficients, and hence is nonnegative. The remaining cases are similar, which completes the proof. ■
Example 2.6. Let (a,b) be any point in R², and consider the following distribution function H:
H(x,y) = 0 if x < a or y < b, and H(x,y) = 1 if x ≥ a and y ≥ b.
The margins of H are the unit step functions εa and εb. Applying Lemma 2.3.4 yields the subcopula C′ with domain {0,1} × {0,1} such that C′(0,0) = C′(0,1) = C′(1,0) = 0 and C′(1,1) = 1. The extension of C′ to a copula C via Lemma 2.3.5 is the copula C = P, i.e., C(u,v) = uv. Notice however, that every copula agrees with C′ on its domain, and thus is an extension of this C′. ■
We are now ready to prove Sklar’s theorem, which we restate here for
convenience.
Theorem 2.3.3. Sklar’s theorem. Let H be a joint distribution function
with margins F and G. Then there exists a copula C such that for all x,y
in R,
H(x,y) = C(F(x),G(y)). (2.3.1)
If F and G are continuous, then C is unique; otherwise, C is uniquely
determined on RanF × RanG. Conversely, if C is a copula and F and G
are distribution functions, then the function H defined by (2.3.1) is a
joint distribution function with margins F and G.
Proof. The existence of a copula C such that (2.3.1) holds for all x,y
in R follows from Lemmas 2.3.4 and 2.3.5. If F and G are continuous,
then RanF = RanG = I, so that the unique subcopula in Lemma 2.3.4 is
a copula. The converse is a matter of straightforward verification. ■
Equation (2.3.1) gives an expression for joint distribution functions
in terms of a copula and two univariate distribution functions. But
(2.3.1) can be inverted to express copulas in terms of a joint distribu-
tion function and the “inverses” of the two margins. However, if a
margin is not strictly increasing, then it does not possess an inverse in
the usual sense. Thus we first need to define “quasi-inverses” of distri-
bution functions (recall Definition 2.3.1).
Definition 2.3.6. Let F be a distribution function. Then a quasi-inverse of F is any function F^(–1) with domain I such that
1. if t is in RanF, then F^(–1)(t) is any number x in R such that F(x) = t, i.e., for all t in RanF,
F(F^(–1)(t)) = t;
2. if t is not in RanF, then
F^(–1)(t) = inf{x | F(x) ≥ t} = sup{x | F(x) ≤ t}.
If F is strictly increasing, then it has but a single quasi-inverse, which is of course the ordinary inverse, for which we use the customary notation F^(–1).
Example 2.7. The quasi-inverses of εa, the unit step at a (see Example 2.4), are the functions given by
εa^(–1)(t) = a0 for t = 0; εa^(–1)(t) = a for t in (0,1); and εa^(–1)(t) = a1 for t = 1;
where a0 and a1 are any numbers in R such that a0 < a ≤ a1. ■
Using quasi-inverses of distribution functions, we now have the fol-
lowing corollary to Lemma 2.3.4.
Corollary 2.3.7. Let H, F, G, and C′ be as in Lemma 2.3.4, and let F^(–1) and G^(–1) be quasi-inverses of F and G, respectively. Then for any (u,v) in DomC′,
C′(u,v) = H(F^(–1)(u),G^(–1)(v)). (2.3.3)
When F and G are continuous, the above result holds for copulas as well and provides a method of constructing copulas from joint distribution functions. We will exploit Corollary 2.3.7 in the next chapter to construct families of copulas, but for now the following examples will serve to illustrate the procedure.
Example 2.8. Recall the distribution function H from Example 2.5:
H(x,y) = (x + 1)(e^y – 1)/(x + 2e^y – 1) for (x,y) in [–1,1] × [0,∞];
H(x,y) = 1 – e^(–y) for (x,y) in (1,∞] × [0,∞];
and H(x,y) = 0 elsewhere;
with margins F and G given by
F(x) = 0 for x < –1, F(x) = (x + 1)/2 for x in [–1,1], F(x) = 1 for x > 1;
and G(y) = 0 for y < 0, G(y) = 1 – e^(–y) for y ≥ 0.
Quasi-inverses of F and G are given by F^(–1)(u) = 2u – 1 and G^(–1)(v) = –ln(1 – v) for u,v in I. Because RanF = RanG = I, (2.3.3) yields the copula C given by
C(u,v) = uv/(u + v – uv). (2.3.4) ■
Example 2.9. Gumbel’s bivariate exponential distribution (Gumbel 1960a). Let Hθ be the joint distribution function given by
Hθ(x,y) = 1 – e^(–x) – e^(–y) + e^(–(x + y + θxy)) for x ≥ 0, y ≥ 0, and Hθ(x,y) = 0 otherwise;
where θ is a parameter in [0,1]. Then the marginal distribution functions are exponentials, with quasi-inverses F^(–1)(u) = –ln(1 – u) and G^(–1)(v) = –ln(1 – v) for u,v in I. Hence the corresponding copula is
Cθ(u,v) = u + v – 1 + (1 – u)(1 – v)e^(–θ ln(1 – u)ln(1 – v)). (2.3.5) ■
Example 2.10. It is an exercise in many mathematical statistics texts to find an example of a bivariate distribution with standard normal margins that is not the standard bivariate normal with parameters μx = μy = 0, σx² = σy² = 1, and Pearson’s product-moment correlation coefficient ρ. With Sklar’s theorem and Corollary 2.3.7 this becomes trivial—let C be a copula such as one in either of the preceding examples, and use standard normal margins in (2.3.1). Indeed, if Φ denotes the standard (univariate) normal distribution function and Nρ denotes the standard bivariate normal distribution function (with Pearson’s product-moment correlation coefficient ρ), then any copula except one of the form
C(u,v) = Nρ(Φ^(–1)(u),Φ^(–1)(v))
= ∫ from –∞ to Φ^(–1)(u) ∫ from –∞ to Φ^(–1)(v) [1/(2π√(1 – ρ²))] exp[–(s² – 2ρst + t²)/(2(1 – ρ²))] dt ds (2.3.6)
(with ρ ≠ –1, 0, or 1) will suffice. Explicit constructions using the copulas in Exercises 2.4, 2.12, and 3.11, Example 3.12, and Sect. 3.3.1 can be found in (Kowalski 1973), and one using the copula C1/2 from Exercise 2.10 in (Vitale 1978). ■
We close this section with one final observation. With an appropriate extension of its domain to R², every copula is a joint distribution function with margins that are uniform on I. To be precise, let C be a copula, and define the function HC on R² via
HC(x,y) = 0 if x < 0 or y < 0;
HC(x,y) = C(x,y) if (x,y) ∈ I²;
HC(x,y) = x if y > 1 and x ∈ I;
HC(x,y) = y if x > 1 and y ∈ I;
HC(x,y) = 1 if x > 1 and y > 1.
Then HC is a distribution function both of whose margins are readily seen to be U01. Indeed, it is often quite useful to think of copulas as restrictions to I² of joint distribution functions whose margins are U01.
2.4 Copulas and Random Variables
In this book, we will use the term “random variable” in the statistical
rather than the probabilistic sense; that is, a random variable is a quan-
tity whose values are described by a (known or unknown) probability
distribution function. Of course, all of the results to follow remain valid
when a random variable is defined in terms of measure theory, i.e., as a
measurable function on a given probability space. But for our purposes
it suffices to adopt the descriptions of Wald (1947), “a variable x is
called a random variable if for any given value c a definite probability
can be ascribed to the event that x will take a value less than c”; and of
Gnedenko (1962), “a random variable is a variable quantity whose val-
ues depend on chance and for which there exists a distribution func-
tion.” For a detailed discussion of this point of view, see (Menger
1956).
In what follows, we will use capital letters, such as X and Y, to repre-
sent random variables, and lowercase letters x, y to represent their values.
We will say that F is the distribution function of the random variable X when for all x in R, F(x) = P[X ≤ x]. We are defining distribution functions of random variables to be right-continuous—but that is simply a matter of custom and convenience. Left-continuous distribution functions would serve equally well. A random variable is continuous if its distribution function is continuous.
When we discuss two or more random variables, we adopt the same
convention—two or more random variables are the components of a
quantity (now a vector) whose values are described by a joint distribu-
tion function. As a consequence, we always assume that the collection of
random variables under discussion can be defined on a common prob-
ability space.
We are now in a position to restate Sklar’s theorem in terms of ran-
dom variables and their distribution functions:
Theorem 2.4.1. Let X and Y be random variables with distribution
functions F and G, respectively, and joint distribution function H. Then
2.4 Copulas and Random Variables 25
there exists a copula C such that (2.3.1) holds. If F and G are continu-
ous, C is unique. Otherwise, C is uniquely determined on RanF × RanG.
The copula C in Theorem 2.4.1 will be called the copula of X and Y,
and denoted CXY when its identification with the random variables X
and Y is advantageous.
The following theorem shows that the product copula P(u,v) = uv characterizes independent random variables when the distribution functions are continuous. Its proof follows from Theorem 2.4.1 and the observation that X and Y are independent if and only if H(x,y) = F(x)G(y) for all x,y in R.
Theorem 2.4.2. Let X and Y be continuous random variables. Then X and Y are independent if and only if CXY = P.
Much of the usefulness of copulas in the study of nonparametric statistics derives from the fact that for strictly monotone transformations of the random variables, copulas are either invariant or change in predictable ways. Recall that if the distribution function of a random variable X is continuous, and if α is a strictly monotone function whose domain contains RanX, then the distribution function of the random variable α(X) is also continuous. We treat the case of strictly increasing transformations first.
Theorem 2.4.3. Let X and Y be continuous random variables with copula CXY. If α and β are strictly increasing on RanX and RanY, respectively, then Cα(X)β(Y) = CXY. Thus CXY is invariant under strictly increasing transformations of X and Y.
Proof. Let F1, G1, F2, and G2 denote the distribution functions of X, Y, α(X), and β(Y), respectively. Because α and β are strictly increasing, F2(x) = P[α(X) ≤ x] = P[X ≤ α^(–1)(x)] = F1(α^(–1)(x)), and likewise G2(y) = G1(β^(–1)(y)). Thus, for any x,y in R,
Cα(X)β(Y)(F2(x),G2(y)) = P[α(X) ≤ x, β(Y) ≤ y]
= P[X ≤ α^(–1)(x), Y ≤ β^(–1)(y)]
= CXY(F1(α^(–1)(x)),G1(β^(–1)(y)))
= CXY(F2(x),G2(y)).
Because X and Y are continuous, RanF2 = RanG2 = I, whence it follows that Cα(X)β(Y) = CXY on I². ■
When at least one of a and b is strictly decreasing, we obtain results
in which the copula of the random variables a(X) and b(Y) is a simple
transformation of CXY . Specifically, we have:
Theorem 2.4.4. Let X and Y be continuous random variables with cop-
ula CXY . Let a and b be strictly monotone on RanX and RanY, respec-
tively.
1. If a is strictly increasing and b is strictly decreasing, then
C u v u C u v
X Y XY
a b
( ) ( )( , ) ( , )
= - -
1 .
2. If a is strictly decreasing and b is strictly increasing, then
C u v v C u v
X Y XY
a b
( ) ( )( , ) ( , )
= - -
1 .
3. If a and b are both strictly decreasing, then
C u v u v C u v
X Y XY
a b
( ) ( )( , ) ( , )
= + - + - -
1 1 1 .
The proof of Theorem 2.4.4 is left as an exercise. Note that in each case the form of the copula is independent of the particular choices of $a$ and $b$, and note further that the three forms for $C_{a(X)b(Y)}$ that appear in this theorem were first encountered in Exercise 2.6. [Remark: We could be somewhat more general in the preceding two theorems by replacing phrases such as “strictly increasing” by “almost surely strictly increasing”—to allow for subsets of Lebesgue measure zero where the property may fail to hold.]
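Theorems 2.4.3 and 2.4.4 can be illustrated numerically with sample ranks. The sketch below is ours, not the book's: the helper `emp_copula`, the sample, and the grid point are arbitrary choices. Since a strictly increasing transform preserves ranks exactly and a strictly decreasing one reverses them, the empirical copula satisfies the stated identities exactly at grid points of the form $k/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = x + rng.normal(size=n)          # a dependent pair (X, Y)

def emp_copula(a, b, u, v):
    """Empirical copula C_n(u, v) built from the sample ranks."""
    ra = (np.argsort(np.argsort(a)) + 1) / len(a)   # empirical F(a_i)
    rb = (np.argsort(np.argsort(b)) + 1) / len(b)   # empirical G(b_i)
    return np.mean((ra <= u) & (rb <= v))

u, v = 0.3, 0.6   # grid point chosen so that n*u and n*v are integers

# Theorem 2.4.3: strictly increasing transforms leave the copula unchanged
c_xy  = emp_copula(x, y, u, v)
c_inc = emp_copula(np.exp(x), y**3, u, v)   # exp and t -> t^3 are increasing
assert abs(c_xy - c_inc) < 1e-12

# Theorem 2.4.4, case 2: a decreasing, b increasing => C'(u,v) = v - C(1-u, v)
c_dec = emp_copula(-x, y, u, v)
assert abs(c_dec - (v - emp_copula(x, y, 1 - u, v))) < 1e-12
```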
Although we have chosen to avoid measure theory in our definition
of random variables, we will nevertheless need some terminology and
results from measure theory in the remaining sections of this chapter
and in chapters to come. Each joint distribution function H induces a probability measure on $\mathbb{R}^2$ via $V_H\big((-\infty,x] \times (-\infty,y]\big) = H(x,y)$ and a standard extension to Borel subsets of $\mathbb{R}^2$ using measure-theoretic techniques. Because copulas are joint distribution functions (with uniform (0,1) margins), each copula C induces a probability measure on $\mathbf{I}^2$ via $V_C\big([0,u] \times [0,v]\big) = C(u,v)$ in a similar fashion—that is, the C-measure of a set is its C-volume $V_C$. Hence, at an intuitive level, the C-measure of a subset of $\mathbf{I}^2$ is the probability that two uniform (0,1) random variables U and V with joint distribution function C assume values in that subset. C-measures are often called doubly stochastic measures, as for any measurable subset S of $\mathbf{I}$, $V_C(S \times \mathbf{I}) = V_C(\mathbf{I} \times S) = \lambda(S)$, where $\lambda$ denotes ordinary Lebesgue measure on $\mathbf{I}$. The term “doubly stochastic” is taken from matrix theory, where doubly stochastic matrices have nonnegative entries and all row sums and column sums are 1.
For any copula C, let
$$C(u,v) = A_C(u,v) + S_C(u,v),$$
where
$$A_C(u,v) = \int_0^u\!\!\int_0^v \frac{\partial^2}{\partial s\,\partial t}\,C(s,t)\,dt\,ds \quad\text{and}\quad S_C(u,v) = C(u,v) - A_C(u,v). \tag{2.4.1}$$
Unlike bivariate distributions in general, the margins of a copula are continuous, hence a copula has no “atoms” (individual points in $\mathbf{I}^2$ whose C-measure is positive).

If $C \equiv A_C$ on $\mathbf{I}^2$—that is, if, considered as a joint distribution function, C has a joint density given by $\partial^2 C(u,v)/\partial u\,\partial v$—then C is absolutely continuous, whereas if $C \equiv S_C$ on $\mathbf{I}^2$—that is, if $\partial^2 C(u,v)/\partial u\,\partial v = 0$ almost everywhere in $\mathbf{I}^2$—then C is singular. Otherwise, C has an absolutely continuous component $A_C$ and a singular component $S_C$. In this case neither $A_C$ nor $S_C$ is a copula, because neither has uniform (0,1) margins. In addition, the C-measure of the absolutely continuous component is $A_C(1,1)$, and the C-measure of the singular component is $S_C(1,1)$.
Just as the support of a joint distribution function H is the complement of the union of all open subsets of $\mathbb{R}^2$ with H-measure zero, the support of a copula is the complement of the union of all open subsets of $\mathbf{I}^2$ with C-measure zero. When the support of C is $\mathbf{I}^2$, we say C has “full support.” When C is singular, its support has Lebesgue measure zero (and conversely). However, many copulas that have full support have both an absolutely continuous and a singular component.
Example 2.11. The support of the Fréchet-Hoeffding upper bound M is the main diagonal of $\mathbf{I}^2$, i.e., the graph of $v = u$ for $u$ in $\mathbf{I}$, so that M is singular. This follows from the fact that the M-measure of any open rectangle that lies entirely above or below the main diagonal is zero. Also note that $\partial^2 M/\partial u\,\partial v = 0$ everywhere in $\mathbf{I}^2$ except on the main diagonal. Similarly, the support of the Fréchet-Hoeffding lower bound W is the secondary diagonal of $\mathbf{I}^2$, i.e., the graph of $v = 1 - u$ for $u$ in $\mathbf{I}$, and thus W is singular as well. ∎
Example 2.12. The product copula $\Pi(u,v) = uv$ is absolutely continuous, because for all $(u,v)$ in $\mathbf{I}^2$,
$$A_\Pi(u,v) = \int_0^u\!\!\int_0^v \frac{\partial^2}{\partial s\,\partial t}\,\Pi(s,t)\,dt\,ds = \int_0^u\!\!\int_0^v 1\,dt\,ds = uv = \Pi(u,v).$$
In Sect. 3.1.1 we will illustrate a general procedure for decomposing
a copula into the sum of its absolutely continuous and singular compo-
nents and for finding the probability mass (i.e., C-measure) of each
component.
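The absolutely continuous/singular distinction above can be seen numerically by summing C-volumes over a grid. The sketch below is ours (the helper name and grid size are arbitrary): for $\Pi$ the mass spreads over all of $\mathbf{I}^2$, while for the singular M every grid cell not straddling the diagonal $v = u$ has C-volume exactly zero.

```python
import numpy as np

def c_volume_cells(C, n):
    """V_C of every cell of an n-by-n grid on I^2, via the mixed second difference."""
    g = np.linspace(0.0, 1.0, n + 1)
    U, V = np.meshgrid(g, g, indexing="ij")
    H = C(U, V)
    return H[1:, 1:] - H[1:, :-1] - H[:-1, 1:] + H[:-1, :-1]

Pi = lambda u, v: u * v              # product copula: absolutely continuous
M  = lambda u, v: np.minimum(u, v)   # upper bound: singular

n = 200
vol_pi, vol_m = c_volume_cells(Pi, n), c_volume_cells(M, n)

# Total C-volume of I^2 is 1 for any copula (the sum telescopes to C(1,1)).
assert abs(vol_pi.sum() - 1) < 1e-9 and abs(vol_m.sum() - 1) < 1e-9

# Mass away from the diagonal: drop the n cells straddling v = u.
off_pi, off_m = vol_pi.copy(), vol_m.copy()
np.fill_diagonal(off_pi, 0.0)
np.fill_diagonal(off_m, 0.0)
assert off_pi.sum() > 1 - 2 / n      # Pi keeps essentially all its mass off-diagonal
assert abs(off_m.sum()) < 1e-12      # all of M's mass sits on v = u
```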
Exercises
2.12 Gumbel’s bivariate logistic distribution (Gumbel 1961). Let X and Y be random variables with a joint distribution function given by
$$H(x,y) = (1 + e^{-x} + e^{-y})^{-1}$$
for all $x,y$ in $\mathbb{R}$.
(a) Show that X and Y have standard (univariate) logistic distributions, i.e.,
$$F(x) = (1 + e^{-x})^{-1} \quad\text{and}\quad G(y) = (1 + e^{-y})^{-1}.$$
(b) Show that the copula of X and Y is the copula given by (2.3.4)
in Example 2.8.
2.13 Type B bivariate extreme value distributions (Johnson and Kotz 1972). Let X and Y be random variables with a joint distribution function given by
$$H(x,y) = \exp\!\big[{-\big(e^{-\theta x} + e^{-\theta y}\big)^{1/\theta}}\big]$$
for all $x,y$ in $\mathbb{R}$, where $\theta \ge 1$. Show that the copula of X and Y is given by
$$C_\theta(u,v) = \exp\!\Big({-\big[(-\ln u)^\theta + (-\ln v)^\theta\big]^{1/\theta}}\Big). \tag{2.4.2}$$
This parametric family of copulas is known as the Gumbel-Hougaard family (Hutchinson and Lai 1990), which we shall see again in Chapter 4.
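The relation asserted in Exercise 2.13 can be spot-checked numerically. This is our sketch (parameter value and test points are arbitrary): both margins of $H$ equal the Gumbel distribution function $F(x) = \exp(-e^{-x})$, and plugging them into (2.4.2) should reproduce $H$.

```python
import math

theta = 2.5   # any theta >= 1

def H(x, y):
    """Joint distribution function of Exercise 2.13."""
    return math.exp(-(math.exp(-theta * x) + math.exp(-theta * y)) ** (1 / theta))

def F(x):
    """Common margin: H(x, infinity) = exp(-e^{-x})."""
    return math.exp(-math.exp(-x))

def C(u, v):
    """Gumbel-Hougaard copula (2.4.2)."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1 / theta))

for x, y in [(-0.5, 1.2), (0.3, 0.3), (2.0, -1.0)]:
    assert abs(H(x, y) - C(F(x), F(y))) < 1e-12   # Sklar: H = C(F, G)
```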
2.14 Conway (1979) and Hutchinson and Lai (1990) note that Gumbel’s bivariate logistic distribution (Exercise 2.12) suffers from the defect that it lacks a parameter, which limits its usefulness in applications. This can be corrected in a number of ways, one of which (Ali et al. 1978) is to define $H_\theta$ as
$$H_\theta(x,y) = \big(1 + e^{-x} + e^{-y} + (1-\theta)\,e^{-x-y}\big)^{-1}$$
for all $x,y$ in $\mathbb{R}$, where $\theta$ lies in [–1,1]. Show that
(a) the margins are standard logistic distributions;
(b) when $\theta = 1$, we have Gumbel’s bivariate logistic distribution;
(c) when $\theta = 0$, X and Y are independent; and
(d) the copula of X and Y is given by
$$C_\theta(u,v) = \frac{uv}{1 - \theta(1-u)(1-v)}. \tag{2.4.3}$$
This is the Ali-Mikhail-Haq family of copulas (Hutchinson and Lai 1990), which we will encounter again in Chapters 3 and 4.
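Parts (b) and (c) of Exercise 2.14 can be checked directly from (2.4.3); a small sketch (test points are our choice):

```python
def amh(u, v, theta):
    """Ali-Mikhail-Haq copula (2.4.3)."""
    return u * v / (1 - theta * (1 - u) * (1 - v))

pts = [(0.2, 0.7), (0.5, 0.5), (0.9, 0.1)]

# theta = 0 gives independence: C(u,v) = uv
assert all(abs(amh(u, v, 0) - u * v) < 1e-12 for u, v in pts)

# theta = 1 gives the copula (2.3.4) of Gumbel's bivariate logistic:
# C(u,v) = uv / (u + v - uv)
assert all(abs(amh(u, v, 1) - u * v / (u + v - u * v)) < 1e-12 for u, v in pts)
```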
2.15 Let $X_1$ and $Y_1$ be random variables with continuous distribution functions $F_1$ and $G_1$, respectively, and copula C. Let $F_2$ and $G_2$ be another pair of continuous distribution functions, and set $X_2 = F_2^{(-1)}(F_1(X_1))$ and $Y_2 = G_2^{(-1)}(G_1(Y_1))$. Prove that
(a) the distribution functions of $X_2$ and $Y_2$ are $F_2$ and $G_2$, respectively; and
(b) the copula of $X_2$ and $Y_2$ is C.
2.16 (a) Let X and Y be continuous random variables with copula C and univariate distribution functions F and G, respectively. The random variables max(X,Y) and min(X,Y) are the order statistics for X and Y. Prove that the distribution functions of the order statistics are given by
$$P[\max(X,Y) \le t] = C(F(t),G(t))$$
and
$$P[\min(X,Y) \le t] = F(t) + G(t) - C(F(t),G(t)),$$
so that when $F = G$,
$$P[\max(X,Y) \le t] = \delta_C(F(t)) \quad\text{and}\quad P[\min(X,Y) \le t] = 2F(t) - \delta_C(F(t)).$$
(b) Show that bounds on the distribution functions of the order statistics are given by
$$\max(F(t) + G(t) - 1,\ 0) \le P[\max(X,Y) \le t] \le \min(F(t),G(t))$$
and
$$\max(F(t),G(t)) \le P[\min(X,Y) \le t] \le \min(F(t) + G(t),\ 1).$$
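The order-statistic formulas of Exercise 2.16(a) are easy to confirm by simulation in the independent case, where the copula is $\Pi$ and $C(F(t),G(t)) = F(t)G(t)$. A sketch under our choice of sample size, seed, and test point:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
u = rng.random(n)
v = rng.random(n)          # independent uniforms, so the copula is Pi

t = 0.7                     # margins are uniform (0,1), so F(t) = G(t) = t
p_max = np.mean(np.maximum(u, v) <= t)
p_min = np.mean(np.minimum(u, v) <= t)

assert abs(p_max - t * t) < 0.01            # C(F(t), G(t)) = t^2
assert abs(p_min - (2 * t - t * t)) < 0.01  # F + G - C = 2t - t^2
```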
2.17 Prove Theorem 2.4.4.
2.5 The Fréchet-Hoeffding Bounds for Joint Distribution
Functions
In Sect. 2.2 we encountered the Fréchet-Hoeffding bounds as universal bounds for copulas, i.e., for any copula C and for all $u,v$ in $\mathbf{I}$,
$$W(u,v) = \max(u + v - 1,\ 0) \le C(u,v) \le \min(u,v) = M(u,v).$$
As a consequence of Sklar’s theorem, if X and Y are random variables with a joint distribution function H and margins F and G, respectively, then for all $x,y$ in $\mathbb{R}$,
$$\max(F(x) + G(y) - 1,\ 0) \le H(x,y) \le \min(F(x),G(y)). \tag{2.5.1}$$
Because M and W are copulas, the above bounds are joint distribution
functions and are called the Fréchet-Hoeffding bounds for joint distri-
bution functions H with margins F and G. Of interest in this section is
the following question: What can we say about the random variables X
and Y when their joint distribution function H is equal to one of its Fré-
chet-Hoeffding bounds?
To answer this question, we first need to introduce the notions of nondecreasing and nonincreasing sets in $\mathbb{R}^2$.

Definition 2.5.1. A subset S of $\mathbb{R}^2$ is nondecreasing if for any $(x,y)$ and $(u,v)$ in S, $x < u$ implies $y \le v$. Similarly, a subset S of $\mathbb{R}^2$ is nonincreasing if for any $(x,y)$ and $(u,v)$ in S, $x < u$ implies $y \ge v$.
Fig. 2.6 illustrates a simple nondecreasing set.
Fig. 2.6. The graph of a nondecreasing set
We will now prove that the joint distribution function H for a pair (X,Y) of random variables is the Fréchet-Hoeffding upper bound (i.e., the copula is M) if and only if the support of H lies in a nondecreasing set. The following proof is based on the one that appears in (Mikusiński, Sherwood and Taylor 1991-1992). But first, we need two lemmas:
Lemma 2.5.2. Let S be a subset of $\mathbb{R}^2$. Then S is nondecreasing if and only if for each $(x,y)$ in $\mathbb{R}^2$, either
1. for all $(u,v)$ in S, $u \le x$ implies $v \le y$; or (2.5.2)
2. for all $(u,v)$ in S, $v \le y$ implies $u \le x$. (2.5.3)

Proof. First assume that S is nondecreasing, and that neither (2.5.2) nor (2.5.3) holds. Then there exist points $(a,b)$ and $(c,d)$ in S such that $a \le x$, $b > y$, $d \le y$, and $c > x$. Hence $a < c$ and $b > d$; a contradiction. In the opposite direction, assume that S is not nondecreasing. Then there exist points $(a,b)$ and $(c,d)$ in S with $a < c$ and $b > d$. For $(x,y) = \big(\tfrac{a+c}{2}, \tfrac{b+d}{2}\big)$, neither (2.5.2) nor (2.5.3) holds. ∎
Lemma 2.5.3. Let X and Y be random variables with joint distribution function H. Then H is equal to its Fréchet-Hoeffding upper bound if and only if for every $(x,y)$ in $\mathbb{R}^2$, either $P[X > x, Y \le y] = 0$ or $P[X \le x, Y > y] = 0$.

Proof: As usual, let F and G denote the margins of H. Then
$$F(x) = P[X \le x] = P[X \le x, Y \le y] + P[X \le x, Y > y] = H(x,y) + P[X \le x, Y > y],$$
and
$$G(y) = P[Y \le y] = P[X \le x, Y \le y] + P[X > x, Y \le y] = H(x,y) + P[X > x, Y \le y].$$
Hence $H(x,y) = M(F(x),G(y))$ if and only if $\min\big(P[X \le x, Y > y],\ P[X > x, Y \le y]\big) = 0$, from which the desired conclusion follows. ∎
We are now ready to prove
Theorem 2.5.4. Let X and Y be random variables with joint distribution function H. Then H is identically equal to its Fréchet-Hoeffding upper bound if and only if the support of H is a nondecreasing subset of $\mathbb{R}^2$.

Proof. Let S denote the support of H, and let $(x,y)$ be any point in $\mathbb{R}^2$. Then (2.5.2) holds if and only if $\{(u,v) \mid u \le x \text{ and } v > y\} \cap S = \varnothing$; or equivalently, if and only if $P[X \le x, Y > y] = 0$. Similarly, (2.5.3) holds if and only if $\{(u,v) \mid u > x \text{ and } v \le y\} \cap S = \varnothing$; or equivalently, if and only if $P[X > x, Y \le y] = 0$. The theorem now follows from Lemmas 2.5.2 and 2.5.3. ∎
Of course, there is an analogous result for the Fréchet-Hoeffding lower bound—its proof is outlined in Exercises 2.18 through 2.20:

Theorem 2.5.5. Let X and Y be random variables with joint distribution function H. Then H is identically equal to its Fréchet-Hoeffding lower bound if and only if the support of H is a nonincreasing subset of $\mathbb{R}^2$.
When X and Y are continuous, the support of H can have no hori-
zontal or vertical line segments, and in this case it is common to say that
“Y is almost surely an increasing function of X” if and only if the cop-
ula of X and Y is M; and “Y is almost surely a decreasing function of
X” if and only if the copula of X and Y is W. If U and V are uniform
(0,1) random variables whose joint distribution function is the copula M,
then P[U = V] = 1; and if the copula is W, then P[U + V = 1] = 1.
Random variables with copula M are often called comonotonic, and
random variables with copula W are often called countermonotonic.
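The comonotonic and countermonotonic cases above can be seen empirically: for a sample of (U,U) the empirical copula is close to M, and for (U, 1−U) it is close to W. The helper name, sample size, and test points below are our choices.

```python
import numpy as np

rng = np.random.default_rng(7)
u = rng.random(50_000)

def emp_C(a, b, s, t):
    """Empirical copula of (a_i, b_i) at (s, t); margins are already uniform."""
    return np.mean((a <= s) & (b <= t))

for s, t in [(0.2, 0.6), (0.5, 0.5), (0.8, 0.3)]:
    # comonotonic pair (U, U): copula M(s,t) = min(s,t)
    assert abs(emp_C(u, u, s, t) - min(s, t)) < 0.01
    # countermonotonic pair (U, 1-U): copula W(s,t) = max(s+t-1, 0)
    assert abs(emp_C(u, 1 - u, s, t) - max(s + t - 1, 0)) < 0.01
```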
2.6 Survival Copulas
In many applications, the random variables of interest represent the lifetimes of individuals or objects in some population. The probability of an individual living or surviving beyond time x is given by the survival function (or survivor function, or reliability function) $\overline F(x) = P[X > x] = 1 - F(x)$, where, as before, F denotes the distribution function of X. When dealing with lifetimes, the natural range of a random variable is often $[0,\infty)$; however, we will use the term “survival function” for $P[X > x]$ even when the range is $\mathbb{R}$.
For a pair (X,Y) of random variables with joint distribution function H, the joint survival function is given by $\overline H(x,y) = P[X > x,\, Y > y]$. The margins of $\overline H$ are the functions $\overline H(x,-\infty)$ and $\overline H(-\infty,y)$, which are the univariate survival functions $\overline F$ and $\overline G$, respectively. A natural question is the following: Is there a relationship between univariate and joint survival functions analogous to the one between univariate and joint distribution functions, as embodied in Sklar’s theorem? To answer this question, suppose that the copula of X and Y is C. Then we have
$$\begin{aligned}
\overline H(x,y) &= 1 - F(x) - G(y) + H(x,y) \\
&= \overline F(x) + \overline G(y) - 1 + C(F(x),G(y)) \\
&= \overline F(x) + \overline G(y) - 1 + C\big(1 - \overline F(x),\, 1 - \overline G(y)\big),
\end{aligned}$$
so that if we define a function $\hat C$ from $\mathbf{I}^2$ into $\mathbf{I}$ by
$$\hat C(u,v) = u + v - 1 + C(1-u,\, 1-v), \tag{2.6.1}$$
we have
$$\overline H(x,y) = \hat C\big(\overline F(x), \overline G(y)\big). \tag{2.6.2}$$
First note that, as a consequence of Exercise 2.6, the function $\hat C$ in (2.6.1) is a copula (see also part 3 of Theorem 2.4.4). We refer to $\hat C$ as the survival copula of X and Y. Secondly, notice that $\hat C$ “couples” the joint survival function to its univariate margins in a manner completely analogous to the way in which a copula connects the joint distribution function to its margins.
Care should be taken not to confuse the survival copula $\hat C$ with the joint survival function $\overline C$ for two uniform (0,1) random variables whose joint distribution function is the copula C. Note that $\overline C(u,v) = P[U > u,\, V > v] = 1 - u - v + C(u,v) = \hat C(1-u,\, 1-v)$.
Example 2.13. In Example 2.9, we obtained the copula $C_\theta$ in (2.3.5) for Gumbel’s bivariate exponential distribution: for $\theta$ in [0,1],
$$C_\theta(u,v) = u + v - 1 + (1-u)(1-v)\,e^{-\theta \ln(1-u)\ln(1-v)}.$$
Just as the survival function for univariate exponentially distributed random variables is functionally simpler than the distribution function, the same is often true in the bivariate case. Employing (2.6.1), we have
$$\hat C_\theta(u,v) = uv\,e^{-\theta \ln u \ln v}. \qquad\blacksquare$$
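The algebra behind Example 2.13 reduces to applying (2.6.1) and watching the linear terms cancel; a quick numerical confirmation (parameter and test points are our choices):

```python
import math

theta = 0.4   # theta in [0, 1]

def C(u, v):
    """Copula (2.3.5) of Gumbel's bivariate exponential distribution."""
    return (u + v - 1
            + (1 - u) * (1 - v) * math.exp(-theta * math.log(1 - u) * math.log(1 - v)))

def C_hat(u, v):
    """Survival copula claimed in Example 2.13."""
    return u * v * math.exp(-theta * math.log(u) * math.log(v))

# (2.6.1): C_hat(u, v) = u + v - 1 + C(1-u, 1-v)
for u, v in [(0.15, 0.4), (0.5, 0.9), (0.77, 0.2)]:
    assert abs(C_hat(u, v) - (u + v - 1 + C(1 - u, 1 - v))) < 1e-12
```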
Example 2.14. A bivariate Pareto distribution (Hutchinson and Lai 1990). Let X and Y be random variables whose joint survival function is given by
$$\overline H(x,y) =
\begin{cases}
(1 + x + y)^{-\theta}, & x \ge 0,\ y \ge 0,\\
(1 + x)^{-\theta}, & x \ge 0,\ y < 0,\\
(1 + y)^{-\theta}, & x < 0,\ y \ge 0,\\
1, & x < 0,\ y < 0;
\end{cases}$$
where $\theta > 0$. Then the marginal survival functions $\overline F$ and $\overline G$ are
$$\overline F(x) =
\begin{cases}
(1 + x)^{-\theta}, & x \ge 0,\\
1, & x < 0,
\end{cases}
\qquad\text{and}\qquad
\overline G(y) =
\begin{cases}
(1 + y)^{-\theta}, & y \ge 0,\\
1, & y < 0,
\end{cases}$$
so that X and Y have identical Pareto distributions. Inverting the survival functions and employing the survival version of Corollary 2.3.7 (see Exercise 2.26) yields the survival copula
$$\hat C_\theta(u,v) = \big(u^{-1/\theta} + v^{-1/\theta} - 1\big)^{-\theta}. \tag{2.6.3}$$
We shall encounter this family again in Chapter 4. ∎
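The inversion step in Example 2.14 can be retraced numerically: $\overline F^{-1}(u) = u^{-1/\theta} - 1$ on $u \in (0,1]$, and substituting into $\overline H$ gives (2.6.3). A sketch (parameter and test points are ours):

```python
import math

theta = 1.5   # theta > 0

def H_bar(x, y):
    """Joint survival function of Example 2.14, on the region x, y >= 0."""
    return (1 + x + y) ** (-theta)

def F_bar_inv(u):
    """Inverse of the marginal survival function (1 + x)^(-theta), u in (0, 1]."""
    return u ** (-1 / theta) - 1

def C_hat(u, v):
    """Survival copula (2.6.3)."""
    return (u ** (-1 / theta) + v ** (-1 / theta) - 1) ** (-theta)

# Survival version of Corollary 2.3.7: C_hat(u,v) = H_bar(F_bar_inv(u), F_bar_inv(v))
for u, v in [(0.1, 0.8), (0.45, 0.45), (0.9, 0.25)]:
    assert abs(C_hat(u, v) - H_bar(F_bar_inv(u), F_bar_inv(v))) < 1e-12
```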
Two other functions closely related to copulas—and survival copulas—are the dual of a copula and the co-copula (Schweizer and Sklar 1983). The dual of a copula C is the function $\tilde C$ defined by $\tilde C(u,v) = u + v - C(u,v)$; and the co-copula is the function $C^*$ defined by $C^*(u,v) = 1 - C(1-u,\, 1-v)$. Neither of these is a copula, but when C is the copula of a pair of random variables X and Y, the dual of the copula and the co-copula each express a probability of an event involving X and Y. Just as
$$P[X \le x,\, Y \le y] = C(F(x),G(y)) \quad\text{and}\quad P[X > x,\, Y > y] = \hat C(\overline F(x), \overline G(y)),$$
we have
$$P[X \le x \text{ or } Y \le y] = \tilde C(F(x),G(y)), \tag{2.6.4}$$
and
$$P[X > x \text{ or } Y > y] = C^*(\overline F(x), \overline G(y)). \tag{2.6.5}$$
Other relationships among C, $\hat C$, $\tilde C$, and $C^*$ are explored in Exercises 2.24 and 2.25.
Exercises
2.18 Prove the “Fréchet-Hoeffding lower bound” version of Lemma 2.5.2: Let S be a subset of $\mathbb{R}^2$. Then S is nonincreasing if and only if for each $(x,y)$ in $\mathbb{R}^2$, either
1. for all $(u,v)$ in S, $u \le x$ implies $v > y$; or
2. for all $(u,v)$ in S, $v > y$ implies $u \le x$.

2.19 Prove the “Fréchet-Hoeffding lower bound” version of Lemma 2.5.3: Let X and Y be random variables whose joint distribution function H is equal to its Fréchet-Hoeffding lower bound. Then for every $(x,y)$ in $\mathbb{R}^2$, either $P[X > x,\, Y > y] = 0$ or $P[X \le x,\, Y \le y] = 0$.
2.20 Prove Theorem 2.5.5.
2.21 Let X and Y be nonnegative random variables whose survival function is $\overline H(x,y) = (e^x + e^y - 1)^{-1}$ for $x,y \ge 0$.
(a) Show that X and Y are standard exponential random variables.
(b) Show that the survival copula of X and Y is the copula given by (2.3.4) in Example 2.8 [cf. Exercise 2.12].
2.22 Let X and Y be continuous random variables whose joint distribu-
tion function is given by C(F(x),G(y)), where C is the copula of X
and Y, and F and G are the distribution functions of X and Y re-
spectively. Verify that (2.6.4) and (2.6.5) hold.
2.23 Let $X_1$, $Y_1$, $F_1$, $G_1$, $F_2$, $G_2$, and C be as in Exercise 2.15. Set $X_2 = F_2^{(-1)}(1 - F_1(X_1))$ and $Y_2 = G_2^{(-1)}(1 - G_1(Y_1))$. Prove that
(a) the distribution functions of $X_2$ and $Y_2$ are $F_2$ and $G_2$, respectively; and
(b) the copula of $X_2$ and $Y_2$ is $\hat C$.
2.24 Let X and Y be continuous random variables with copula C and a common univariate distribution function F. Show that the distribution and survival functions of the order statistics (see Exercise 2.16) are given by

Order statistic   Distribution function   Survival function
max(X,Y)          $\delta(F(t))$          $\delta^*(\overline F(t))$
min(X,Y)          $\tilde\delta(F(t))$    $\hat\delta(\overline F(t))$

where $\delta$, $\hat\delta$, $\tilde\delta$, and $\delta^*$ denote the diagonal sections of C, $\hat C$, $\tilde C$, and $C^*$, respectively.
2.25 Show that under composition $\circ$, the set of operations of forming the survival copula, the dual of a copula, and the co-copula of a given copula, along with the identity (i.e., “$\wedge$”, “$\sim$”, “$*$”, and “$i$”) yields the dihedral group (e.g., $C^{**} = C$, so $* \circ * = i$; $\widehat{C^*} = \tilde C$, so $\wedge \circ * = \sim$, etc.):

$\circ$    | $i$       $\wedge$  $\sim$    $*$
$i$        | $i$       $\wedge$  $\sim$    $*$
$\wedge$   | $\wedge$  $i$       $*$       $\sim$
$\sim$     | $\sim$    $*$       $i$       $\wedge$
$*$        | $*$       $\sim$    $\wedge$  $i$
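The group relations of Exercise 2.25 can be verified mechanically by treating each operation as a map on functions; this sketch (function names and the sample copula are our choices) checks a few entries of the table at random points:

```python
import random

def op_hat(C):    # survival:  u + v - 1 + C(1-u, 1-v)
    return lambda u, v: u + v - 1 + C(1 - u, 1 - v)

def op_tilde(C):  # dual:      u + v - C(u, v)
    return lambda u, v: u + v - C(u, v)

def op_star(C):   # co-copula: 1 - C(1-u, 1-v)
    return lambda u, v: 1 - C(1 - u, 1 - v)

C = lambda u, v: u * v / (u + v - u * v)   # sample copula, (2.3.4)

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(20)]
close = lambda A, B: all(abs(A(u, v) - B(u, v)) < 1e-12 for u, v in pts)

assert close(op_star(op_star(C)), C)            # * o * = i
assert close(op_hat(op_hat(C)), C)              # ^ o ^ = i
assert close(op_hat(op_star(C)), op_tilde(C))   # ^ o * = ~
assert close(op_tilde(op_hat(C)), op_star(C))   # ~ o ^ = *
assert close(op_hat(op_tilde(C)), op_star(C))   # ^ o ~ = *
```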
2.26 Prove the following “survival” version of Corollary 2.3.7: Let $\overline H$, $\overline F$, $\overline G$, and $\hat C$ be as in (2.6.2), and let $\overline F^{(-1)}$ and $\overline G^{(-1)}$ be quasi-inverses of $\overline F$ and $\overline G$, respectively. Then for any $(u,v)$ in $\mathbf{I}^2$,
$$\hat C(u,v) = \overline H\big(\overline F^{(-1)}(u),\ \overline G^{(-1)}(v)\big).$$
2.7 Symmetry
If X is a random variable and a is a real number, we say that X is symmetric about a if the distribution functions of the random variables $X - a$ and $a - X$ are the same, that is, if for any x in $\mathbb{R}$, $P[X - a \le x] = P[a - X \le x]$. When X is continuous with distribution function F, this is equivalent to
$$F(a + x) = \overline F(a - x) \tag{2.7.1}$$
[when F is discontinuous, (2.7.1) holds only at the points of continuity of F].
Now consider the bivariate situation. What does it mean to say that a
pair (X,Y) of random variables is “symmetric” about a point (a,b)?
There are a number of ways to answer this question, and each answer
leads to a different type of bivariate symmetry.
Definition 2.7.1. Let X and Y be random variables and let (a,b) be a point in $\mathbb{R}^2$.
1. (X,Y) is marginally symmetric about (a,b) if X and Y are symmetric about a and b, respectively.
2. (X,Y) is radially symmetric about (a,b) if the joint distribution function of $X - a$ and $Y - b$ is the same as the joint distribution function of $a - X$ and $b - Y$.
3. (X,Y) is jointly symmetric about (a,b) if the following four pairs of random variables have a common joint distribution: $(X - a,\, Y - b)$, $(X - a,\, b - Y)$, $(a - X,\, Y - b)$, and $(a - X,\, b - Y)$.
When X and Y are continuous, we can express the condition for radial
symmetry in terms of the joint distribution and survival functions of X
and Y in a manner analogous to the relationship in (2.7.1) between uni-
variate distribution and survival functions:
Theorem 2.7.2. Let X and Y be continuous random variables with joint distribution function H and margins F and G, respectively. Let (a,b) be a point in $\mathbb{R}^2$. Then (X,Y) is radially symmetric about (a,b) if and only if
$$H(a + x,\ b + y) = \overline H(a - x,\ b - y) \quad\text{for all } (x,y) \text{ in } \mathbb{R}^2. \tag{2.7.2}$$
The term “radial” comes from the fact that the points $(a + x,\, b + y)$ and $(a - x,\, b - y)$ that appear in (2.7.2) lie on rays emanating in opposite directions from (a,b). Graphically, Theorem 2.7.2 states that regions such as those shaded in Fig. 2.7(a) always have equal H-volume.
Example 2.15. The bivariate normal distribution with parameters $\mu_x$, $\mu_y$, $\sigma_x^2$, $\sigma_y^2$, and $\rho$ is radially symmetric about the point $(\mu_x, \mu_y)$. The proof is straightforward (but tedious)—evaluate double integrals of the joint density over the shaded regions in Fig. 2.7(a).
Fig. 2.7. Regions of equal probability for radially symmetric random variables
Example 2.16. The bivariate normal is a member of the family of ellip-
tically contoured distributions. The densities for such distributions have
contours that are concentric ellipses with constant eccentricity. Well-
known members of this family, in addition to the bivariate normal, are
bivariate Pearson type II and type VII distributions (the latter including
bivariate t and Cauchy distributions as special cases). Like the bivariate
normal, elliptically contoured distributions are radially symmetric. 
It is immediate that joint symmetry implies radial symmetry and easy to see that radial symmetry implies marginal symmetry (setting $x = \infty$ in (2.7.2) yields (2.7.1); similarly for $y = \infty$). Indeed, joint symmetry is a very strong condition—it is easy to show that jointly symmetric random variables must be uncorrelated when the requisite second-order moments exist (Randles and Wolfe 1979). Consequently, we will focus on radial symmetry, rather than joint symmetry, for bivariate distributions.
Because the condition for radial symmetry in (2.7.2) involves both
the joint distribution and survival functions, it is natural to ask if copulas
and survival copulas play a role in radial symmetry. The answer is pro-
vided by the next theorem.
Theorem 2.7.3. Let X and Y be continuous random variables with joint distribution function H, marginal distribution functions F and G, respectively, and copula C. Further suppose that X and Y are symmetric about a and b, respectively. Then (X,Y) is radially symmetric about (a,b), i.e., H satisfies (2.7.2), if and only if $C = \hat C$, i.e., if and only if C satisfies the functional equation
$$C(u,v) = u + v - 1 + C(1-u,\ 1-v) \quad\text{for all } (u,v) \text{ in } \mathbf{I}^2. \tag{2.7.3}$$
Proof. Employing (2.6.2) and (2.7.1), the theorem follows from the following chain of equivalent statements:
$$\begin{aligned}
&H(a+x,\ b+y) = \overline H(a-x,\ b-y) &&\text{for all } (x,y) \text{ in } \mathbb{R}^2\\
\Leftrightarrow\ & C(F(a+x), G(b+y)) = \hat C(\overline F(a-x), \overline G(b-y)) &&\text{for all } (x,y) \text{ in } \mathbb{R}^2\\
\Leftrightarrow\ & C(F(a+x), G(b+y)) = \hat C(F(a+x), G(b+y)) &&\text{for all } (x,y) \text{ in } \mathbb{R}^2\\
\Leftrightarrow\ & C(u,v) = \hat C(u,v) &&\text{for all } (u,v) \text{ in } \mathbf{I}^2. \qquad\blacksquare
\end{aligned}$$
Geometrically, (2.7.3) states that for any $(u,v)$ in $\mathbf{I}^2$, the rectangles $[0,u] \times [0,v]$ and $[1-u,\,1] \times [1-v,\,1]$ have equal C-volume, as illustrated in Fig. 2.7(b).
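The functional equation (2.7.3) is easy to test for a given copula; a sketch (test points are our choice) showing that $\Pi$ satisfies it while the copula (2.3.4) of Gumbel's bivariate logistic does not:

```python
def survival_copula(C):
    """Apply (2.6.1): C_hat(u,v) = u + v - 1 + C(1-u, 1-v)."""
    return lambda u, v: u + v - 1 + C(1 - u, 1 - v)

Pi  = lambda u, v: u * v                    # radially symmetric: Pi = Pi_hat
gbl = lambda u, v: u * v / (u + v - u * v)  # copula (2.3.4): not radially symmetric

pts = [(0.3, 0.3), (0.1, 0.7), (0.6, 0.45)]

# Pi satisfies (2.7.3): u + v - 1 + (1-u)(1-v) = uv
assert all(abs(Pi(u, v) - survival_copula(Pi)(u, v)) < 1e-12 for u, v in pts)

# (2.3.4) violates it at some points, so C != C_hat
assert any(abs(gbl(u, v) - survival_copula(gbl)(u, v)) > 1e-3 for u, v in pts)
```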
Another form of symmetry is exchangeability—random variables X and Y are exchangeable if the vectors (X,Y) and (Y,X) are identically distributed. Hence if the joint distribution function of X and Y is H, then $H(x,y) = H(y,x)$ for all $(x,y)$ in $\mathbb{R}^2$. Clearly exchangeable random variables must be identically distributed, i.e., have a common univariate distribution function. For identically distributed random variables, exchangeability is equivalent to the symmetry of their copula as expressed in the following theorem, whose proof is straightforward.
Theorem 2.7.4. Let X and Y be continuous random variables with joint distribution function H, margins F and G, respectively, and copula C. Then X and Y are exchangeable if and only if $F = G$ and $C(u,v) = C(v,u)$ for all $(u,v)$ in $\mathbf{I}^2$.

When $C(u,v) = C(v,u)$ for all $(u,v)$ in $\mathbf{I}^2$, we will say simply that C is symmetric.
Example 2.17. Although identically distributed independent random variables must be exchangeable (because the copula $\Pi$ is symmetric), the converse is of course not true—identically distributed exchangeable random variables need not be independent. To show this, simply choose for the copula of X and Y any symmetric copula except $\Pi$, such as one from Example 2.8, 2.9 (or 2.13), or from one of the families in Exercises 2.4 and 2.5. ∎
There are other bivariate symmetry concepts. See (Nelsen 1993) for
details.
2.8 Order
The Fréchet-Hoeffding bounds inequality—$W(u,v) \le C(u,v) \le M(u,v)$ for every copula C and all $u,v$ in $\mathbf{I}$—suggests a partial order on the set of copulas:
Definition 2.8.1. If $C_1$ and $C_2$ are copulas, we say that $C_1$ is smaller than $C_2$ (or $C_2$ is larger than $C_1$), and write $C_1 \prec C_2$ (or $C_2 \succ C_1$) if $C_1(u,v) \le C_2(u,v)$ for all $u,v$ in $\mathbf{I}$.
In other words, the Fréchet-Hoeffding lower bound copula W is
smaller than every copula, and the Fréchet-Hoeffding upper bound
copula M is larger than every copula. This point-wise partial ordering of
the set of copulas is called the concordance ordering and will be im-
portant in Chapter 5 when we discuss the relationship between copulas
and dependence properties for random variables (at which time the rea-
son for the name of the ordering will become apparent). It is a partial
order rather than a total order because not every pair of copulas is
comparable.
Example 2.18. The product copula $\Pi$ and the copula obtained by averaging the Fréchet-Hoeffding bounds are not comparable. If we let $C(u,v) = [W(u,v) + M(u,v)]/2$, then $C(1/4,1/4) = 1/8 > 1/16 = \Pi(1/4,1/4)$ and $C(1/4,3/4) = 1/8 < 3/16 = \Pi(1/4,3/4)$, so that neither $C \prec \Pi$ nor $\Pi \prec C$ holds. ∎
However, there are families of copulas that are totally ordered. We will call a totally ordered parametric family $\{C_\theta\}$ of copulas positively ordered if $C_\alpha \prec C_\beta$ whenever $\alpha \le \beta$; and negatively ordered if $C_\alpha \succ C_\beta$ whenever $\alpha \le \beta$.
Example 2.19. The Cuadras-Augé family of copulas (2.2.10), introduced in Exercise 2.5, is positively ordered, as for $0 \le \alpha \le \beta \le 1$ and $u,v$ in (0,1),
$$\frac{C_\alpha(u,v)}{C_\beta(u,v)} = \left(\frac{uv}{\min(u,v)}\right)^{\beta - \alpha} \le 1,$$
and hence $C_\alpha \prec C_\beta$.
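Example 2.19 can be confirmed pointwise on a grid, assuming the family's standard form $C_\alpha(u,v) = [\min(u,v)]^\alpha\,(uv)^{1-\alpha}$ from Exercise 2.5 (the grid resolution below is our choice):

```python
import numpy as np

def cuadras_auge(u, v, alpha):
    """Cuadras-Auge copula (2.2.10): a weighted geometric mean of M and Pi."""
    return np.minimum(u, v) ** alpha * (u * v) ** (1 - alpha)

g = np.linspace(0.01, 0.99, 50)
U, V = np.meshgrid(g, g)

# positively ordered: alpha <= beta implies C_alpha <= C_beta pointwise
for a, b in [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]:
    assert np.all(cuadras_auge(U, V, a) <= cuadras_auge(U, V, b) + 1e-12)
```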
Exercises
2.27 Let X and Y be continuous random variables symmetric about a and b with marginal distribution functions F and G, respectively, and with copula C. Is (X,Y) radially symmetric (or jointly symmetric) about (a,b) if C is
(a) a member of the Fréchet family in Exercise 2.4?
(b) a member of the Cuadras-Augé family in Exercise 2.5?
2.28. Suppose X and Y are identically distributed continuous random
variables, each symmetric about a. Show that “exchangeability”
does not imply “radial symmetry,” nor does “radial symmetry”
imply “exchangeability.”
edition.
Most people start at our website which has the main PG search
facility: www.gutenberg.org.
This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.
An Introduction To Copulas 2nd Ed Roger B Nelsen

Springer Series in Statistics

Advisors: P. Bickel, P. Diggle, S. Fienberg, U. Gather, I. Olkin, S. Zeger
Springer Series in Statistics

Alho/Spencer: Statistical Demography and Forecasting.
Andersen/Borgan/Gill/Keiding: Statistical Models Based on Counting Processes.
Atkinson/Riani: Robust Diagnostic Regression Analysis.
Atkinson/Riani/Cerioli: Exploring Multivariate Data with the Forward Search.
Berger: Statistical Decision Theory and Bayesian Analysis, 2nd edition.
Borg/Groenen: Modern Multidimensional Scaling: Theory and Applications, 2nd edition.
Brockwell/Davis: Time Series: Theory and Methods, 2nd edition.
Bucklew: Introduction to Rare Event Simulation.
Cappé/Moulines/Rydén: Inference in Hidden Markov Models.
Chan/Tong: Chaos: A Statistical Perspective.
Chen/Shao/Ibrahim: Monte Carlo Methods in Bayesian Computation.
Coles: An Introduction to Statistical Modeling of Extreme Values.
David/Edwards: Annotated Readings in the History of Statistics.
Devroye/Lugosi: Combinatorial Methods in Density Estimation.
Efromovich: Nonparametric Curve Estimation: Methods, Theory, and Applications.
Eggermont/LaRiccia: Maximum Penalized Likelihood Estimation, Volume I: Density Estimation.
Fahrmeir/Tutz: Multivariate Statistical Modelling Based on Generalized Linear Models, 2nd edition.
Fan/Yao: Nonlinear Time Series: Nonparametric and Parametric Methods.
Farebrother: Fitting Linear Relationships: A History of the Calculus of Observations 1750-1900.
Federer: Statistical Design and Analysis for Intercropping Experiments, Volume I: Two Crops.
Federer: Statistical Design and Analysis for Intercropping Experiments, Volume II: Three or More Crops.
Ghosh/Ramamoorthi: Bayesian Nonparametrics.
Glaz/Naus/Wallenstein: Scan Statistics.
Good: Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses, 2nd edition.
Good: Permutation Tests: Parametric and Bootstrap Tests of Hypotheses, 3rd edition.
Gouriéroux: ARCH Models and Financial Applications.
Gu: Smoothing Spline ANOVA Models.
Györfi/Kohler/Krzyżak/Walk: A Distribution-Free Theory of Nonparametric Regression.
Haberman: Advanced Statistics, Volume I: Description of Populations.
Hall: The Bootstrap and Edgeworth Expansion.
Härdle: Smoothing Techniques: With Implementation in S.
Harrell: Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis.
Hart: Nonparametric Smoothing and Lack-of-Fit Tests.
Hastie/Tibshirani/Friedman: The Elements of Statistical Learning: Data Mining, Inference, and Prediction.
Hedayat/Sloane/Stufken: Orthogonal Arrays: Theory and Applications.
Heyde: Quasi-Likelihood and its Application: A General Approach to Optimal Parameter Estimation.
(continued after index)
Roger B. Nelsen
An Introduction to Copulas
Second Edition
Roger B. Nelsen
Department of Mathematical Sciences
Lewis & Clark College, MSC 110
Portland, OR 97219-7899
USA
nelsen@lclark.edu

Library of Congress Control Number: 2005933254

ISBN-10: 0-387-28659-4
ISBN-13: 978-0387-28659-4

© 2006 Springer Science+Business Media, Inc.

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, Inc., 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed in the United States of America. (SBA)

9 8 7 6 5 4 3 2 1

springeronline.com
To the memory of my parents
Ann Bain Nelsen and Howard Ernest Nelsen
Preface to the First Edition

In November of 1995, I was at the University of Massachusetts in Amherst for a few days to attend a symposium held, in part, to celebrate Professor Berthold Schweizer’s retirement from classroom teaching. During one afternoon break, a small group of us were having coffee following several talks in which copulas were mentioned. Someone asked what one should read to learn the basics about copulas. We mentioned several references, mostly research papers and conference proceedings. I then suggested that perhaps the time was ripe for “someone” to write an introductory-level monograph on the subject. A colleague, I forget who, responded somewhat mischievously, “Good idea, Roger—why don’t you write it?”

Although flattered by the suggestion, I let it lie until the following September, when I was in Prague to attend an international conference on distributions with fixed marginals and moment problems. In Prague, I asked Giorgio Dall’Aglio, Ingram Olkin, and Abe Sklar if they thought that there might indeed be interest in the statistical community for such a book. Encouraged by their responses and knowing that I would soon be eligible for a sabbatical, I began to give serious thought to writing an introduction to copulas.

This book is intended for students and practitioners in statistics and probability—at almost any level. The only prerequisite is a good upper-level undergraduate course in probability and mathematical statistics, although some background in nonparametric statistics would be beneficial. Knowledge of measure-theoretic probability is not required.

The book begins with the basic properties of copulas and then proceeds to present methods for constructing copulas and to discuss the role played by copulas in modeling and in the study of dependence. The focus is on bivariate copulas, although most chapters conclude with a discussion of the multivariate case.
As an introduction to copulas, it is not an encyclopedic reference, and thus it is necessarily incomplete—many topics that could have been included are omitted. The reader seeking additional material on families of continuous bivariate distributions and their applications should see (Hutchinson and Lai 1990); and the reader interested in learning more about multivariate copulas and dependence should consult (Joe 1997).

There are about 150 exercises in the book. Although it is certainly not necessary to do all (or indeed any) of them, the reader is encouraged to read through the statements of the exercises before proceeding to the next section or chapter. Although some exercises do not add
anything to the exposition (e.g., “Prove Theorem 1.1.1”), many present examples, counterexamples, and supplementary topics that are often referenced in subsequent sections.

I would like to thank Lewis & Clark College for granting me a sabbatical leave in order to write this book; and my colleagues in the Department of Mathematics, Statistics, and Computer Science at Mount Holyoke College for graciously inviting me to spend the sabbatical year with them. Thanks, too, to Ingram Olkin for suggesting and encouraging that I consider publication with Springer’s Lecture Notes in Statistics; and to John Kimmel, the executive editor for statistics at Springer, for his valuable assistance in the publication of this book.

Finally, I would like to express my gratitude and appreciation to all those with whom I have had the pleasure of working on problems related to copulas and their applications: Claudi Alsina, Jerry Frank, Greg Fredricks, Juan Quesada Molina, José Antonio Rodríguez Lallena, Carlo Sempi, Abe Sklar, and Manuel Úbeda Flores. But most of all I want to thank my good friend and mentor Berthold Schweizer, who not only introduced me to the subject but also has consistently and unselfishly aided me in the years since and who inspired me to write this book. I also want to thank Bert for his careful and critical reading of earlier drafts of the manuscript and his invaluable advice on matters mathematical and stylistic. However, it goes without saying that any and all remaining errors in the book are mine alone.

Roger B. Nelsen
Portland, Oregon
July 1998
Preface to the Second Edition

In preparing a new edition of An Introduction to Copulas, my goals included adding some topics omitted from the first edition while keeping the book at a level appropriate for self-study or for a graduate-level seminar. The major additions in the second edition are sections on:
• a copula transformation method;
• extreme value copulas;
• copulas with certain analytic or functional properties;
• tail dependence; and
• quasi-copulas.

There are also a number of new examples and exercises and new figures, including scatterplots of simulations from many of the families of copulas presented in the text. Typographical errors in the first edition have been corrected, and the references have been updated.

Thanks again to Lewis & Clark College for granting me a sabbatical leave in order to prepare this second edition; and to the Department of Mathematics and Statistics at Mount Holyoke College for again inviting me to spend the sabbatical year with them. Finally, I would like to thank the readers of the first edition who found numerous typographical errors and sent me suggestions for this edition.

Roger B. Nelsen
Portland, Oregon
October 2005
Contents

Preface to the First Edition
Preface to the Second Edition

1 Introduction

2 Definitions and Basic Properties
  2.1 Preliminaries
  2.2 Copulas
      Exercises 2.1-2.11
  2.3 Sklar’s Theorem
  2.4 Copulas and Random Variables
      Exercises 2.12-2.17
  2.5 The Fréchet-Hoeffding Bounds for Joint Distribution Functions of Random Variables
  2.6 Survival Copulas
      Exercises 2.18-2.26
  2.7 Symmetry
  2.8 Order
      Exercises 2.27-2.33
  2.9 Random Variate Generation
  2.10 Multivariate Copulas
      Exercises 2.34-2.37

3 Methods of Constructing Copulas
  3.1 The Inversion Method
    3.1.1 The Marshall-Olkin Bivariate Exponential Distribution
    3.1.2 The Circular Uniform Distribution
      Exercises 3.1-3.6
  3.2 Geometric Methods
    3.2.1 Singular Copulas with Prescribed Support
    3.2.2 Ordinal Sums
      Exercises 3.7-3.13
    3.2.3 Shuffles of M
    3.2.4 Convex Sums
      Exercises 3.14-3.20
    3.2.5 Copulas with Prescribed Horizontal or Vertical Sections
    3.2.6 Copulas with Prescribed Diagonal Sections
      Exercises 3.21-3.34
  3.3 Algebraic Methods
    3.3.1 Plackett Distributions
    3.3.2 Ali-Mikhail-Haq Distributions
    3.3.3 A Copula Transformation Method
    3.3.4 Extreme Value Copulas
      Exercises 3.35-3.42
  3.4 Copulas with Specified Properties
    3.4.1 Harmonic Copulas
    3.4.2 Homogeneous Copulas
    3.4.3 Concave and Convex Copulas
  3.5 Constructing Multivariate Copulas

4 Archimedean Copulas
  4.1 Definitions
  4.2 One-parameter Families
  4.3 Fundamental Properties
      Exercises 4.1-4.17
  4.4 Order and Limiting Cases
  4.5 Two-parameter Families
    4.5.1 Families of Generators
    4.5.2 Rational Archimedean Copulas
      Exercises 4.18-4.23
  4.6 Multivariate Archimedean Copulas
      Exercises 4.24-4.25

5 Dependence
  5.1 Concordance
    5.1.1 Kendall’s tau
      Exercises 5.1-5.5
    5.1.2 Spearman’s rho
      Exercises 5.6-5.15
    5.1.3 The Relationship between Kendall’s tau and Spearman’s rho
    5.1.4 Other Concordance Measures
      Exercises 5.16-5.21
  5.2 Dependence Properties
    5.2.1 Quadrant Dependence
      Exercises 5.22-5.29
    5.2.2 Tail Monotonicity
    5.2.3 Stochastic Monotonicity, Corner Set Monotonicity, and Likelihood Ratio Dependence
      Exercises 5.30-5.39
  5.3 Other Measures of Association
    5.3.1 Measures of Dependence
    5.3.2 Measures Based on Gini’s Coefficient
      Exercises 5.40-5.46
  5.4 Tail Dependence
      Exercises 5.47-5.50
  5.5 Median Regression
  5.6 Empirical Copulas
  5.7 Multivariate Dependence

6 Additional Topics
  6.1 Distributions with Fixed Margins
      Exercises 6.1-6.5
  6.2 Quasi-copulas
      Exercises 6.6-6.8
  6.3 Operations on Distribution Functions
  6.4 Markov Processes
      Exercises 6.9-6.13

References
List of Symbols
Index
1 Introduction

The study of copulas and their applications in statistics is a rather modern phenomenon. Until quite recently, it was difficult to even locate the word “copula” in the statistical literature. There is no entry for “copula” in the nine-volume Encyclopedia of Statistical Sciences, nor in the supplement volume. However, the first update volume, published in 1997, does have such an entry (Fisher 1997). The first reference in the Current Index to Statistics to a paper using “copula” in the title or as a keyword is in Volume 7 (1981) [the paper is (Schweizer and Wolff 1981)]—indeed, in the first eighteen volumes (1975-1992) of the Current Index to Statistics there are only eleven references to papers mentioning copulas. There are, however, 71 references in the next ten volumes (1993-2002).

Further evidence of the growing interest in copulas and their applications in statistics and probability in the past fifteen years is afforded by five international conferences devoted to these ideas: the “Symposium on Distributions with Given Marginals (Fréchet Classes)” in Rome in 1990; the conference on “Distributions with Fixed Marginals, Doubly Stochastic Measures, and Markov Operators” in Seattle in 1993; the conference on “Distributions with Given Marginals and Moment Problems” in Prague in 1996; the conference on “Distributions with Given Marginals and Statistical Modelling” in Barcelona in 2000; and the conference on “Dependence Modelling: Statistical Theory and Applications in Finance and Insurance” in Québec in 2004. As the titles of these conferences indicate, copulas are intimately related to the study of distributions with “fixed” or “given” marginal distributions. The published proceedings of the first four conferences (Dall’Aglio et al. 1991; Rüschendorf et al. 1996; Beneš and Štěpán 1997; Cuadras et al. 2002) are among the most accessible resources for the study of copulas and their applications.

What are copulas?
From one point of view, copulas are functions that join or “couple” multivariate distribution functions to their one-dimensional marginal distribution functions. Alternatively, copulas are multivariate distribution functions whose one-dimensional margins are uniform on the interval (0,1). Chapter 2 will be devoted to presenting a complete answer to this question.

Why are copulas of interest to students of probability and statistics? As Fisher (1997) answers in his article in the first update volume of the Encyclopedia of Statistical Sciences, “Copulas [are] of interest to statisticians for two main reasons: Firstly, as a way of studying scale-free
measures of dependence; and secondly, as a starting point for constructing families of bivariate distributions, sometimes with a view to simulation.” These topics are explored and developed in Chapters 3, 4, and 5.

The remainder of this chapter will be devoted to a brief history of the development and study of copulas. Readers interested in first-hand accounts by some of those who participated in the evolution of the subject should see the papers by Dall’Aglio (1991) and Schweizer (1991) in the proceedings of the Rome conference and the paper by Sklar (1996) in the proceedings of the Seattle conference.

The word copula is a Latin noun that means “a link, tie, bond” (Cassell’s Latin Dictionary) and is used in grammar and logic to describe “that part of a proposition which connects the subject and predicate” (Oxford English Dictionary). The word copula was first employed in a mathematical or statistical sense by Abe Sklar (1959) in the theorem (which now bears his name) describing the functions that “join together” one-dimensional distribution functions to form multivariate distribution functions (see Theorems 2.3.3 and 2.10.9). In (Sklar 1996) we have the following account of the events leading to this use of the term copula:

Féron (1956), in studying three-dimensional distributions, had introduced auxiliary functions, defined on the unit cube, that connected such distributions with their one-dimensional margins. I saw that similar functions could be defined on the unit n-cube for all n ≥ 2 and would similarly serve to link n-dimensional distributions to their one-dimensional margins. Having worked out the basic properties of these functions, I wrote about them to Fréchet, in English. He asked me to write a note about them in French. While writing this, I decided I needed a name for these functions.
Knowing the word “copula” as a grammatical term for a word or expression that links a subject and predicate, I felt that this would make an appropriate name for a function that links a multidimensional distribution to its one-dimensional margins, and used it as such. Fréchet received my note, corrected one mathematical statement, made some minor corrections to my French, and had the note published by the Statistical Institute of the University of Paris as Sklar (1959).

But as Sklar notes, the functions themselves predate the use of the term copula. They appear in the work of Fréchet, Dall’Aglio, Féron, and many others in the study of multivariate distributions with fixed univariate marginal distributions. Indeed, many of the basic results about copulas can be traced to the early work of Wassily Hoeffding. In (Hoeffding 1940, 1941) one finds bivariate “standardized distributions” whose support is contained in the square [−1/2, 1/2]² and whose margins are uniform on the interval [−1/2, 1/2]. (As Schweizer (1991) opines, “had Hoeffding chosen the unit square [0,1]² instead of [−1/2, 1/2]² for his normalization, he would have discovered copulas.”)
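The “joining together” in Sklar’s theorem can be sketched numerically in its simplest instance. The sketch below assumes independent Exp(1) margins (an illustrative choice, not an example from the text), for which the joint distribution function factors and the coupling function is the product copula C(u,v) = uv:

```python
import math

# Sklar's theorem in its simplest instance: for independent X and Y the
# joint distribution function factors, H(x,y) = F(x)G(y), so the copula
# joining H to its margins is the product copula C(u,v) = u*v.
# The Exp(1) margins are an illustrative assumption.

def F(x):
    """Exp(1) distribution function."""
    return 1.0 - math.exp(-x)

def H(x, y):
    """Joint distribution function of two independent Exp(1) variables."""
    return F(x) * F(y)

def C(u, v):
    """Product (independence) copula."""
    return u * v

# H(x, y) = C(F(x), F(y)) on a small grid -- the content of Sklar's theorem.
for x in (0.1, 0.5, 1.0, 2.0):
    for y in (0.1, 0.5, 1.0, 2.0):
        assert abs(H(x, y) - C(F(x), F(y))) < 1e-12

# C has uniform margins: C(u,1) = u and C(1,v) = v.
for u in (0.1, 0.4, 0.9):
    assert abs(C(u, 1.0) - u) < 1e-12 and abs(C(1.0, u) - u) < 1e-12
```

The same identity H(x,y) = C(F(x), G(y)) holds for dependent pairs as well, with C then carrying all of the dependence information.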
Hoeffding also obtained the basic best-possible bounds inequality for these functions, characterized the distributions (“functional dependence”) corresponding to those bounds, and studied measures of dependence that are “scale-invariant,” i.e., invariant under strictly increasing transformations. Unfortunately, until recently this work did not receive the attention it deserved, due primarily to the fact that the papers were published in relatively obscure German journals at the outbreak of the Second World War. However, they have recently been translated into English and are among Hoeffding’s collected papers, recently published by Fisher and Sen (1994). Unaware of Hoeffding’s work, Fréchet (1951) independently obtained many of the same results, which has led to terms such as “Fréchet bounds” and “Fréchet classes.” In recognition of the shared responsibility for these important ideas, we will refer to “Fréchet-Hoeffding bounds” and “Fréchet-Hoeffding classes.”

After Hoeffding, Fréchet, and Sklar, the functions now known as copulas were rediscovered by several other authors. Kimeldorf and Sampson (1975b) referred to them as uniform representations, and Galambos (1978) and Deheuvels (1978) called them dependence functions.

At the time that Sklar wrote his 1959 paper with the term “copula,” he was collaborating with Berthold Schweizer in the development of the theory of probabilistic metric spaces, or PM spaces. During the period from 1958 through 1976, most of the important results concerning copulas were obtained in the course of the study of PM spaces. Recall that (informally) a metric space consists of a set S and a metric d that measures “distances” between points, say p and q, in S. In a probabilistic metric space, we replace the distance d(p,q) by a distribution function F_pq, whose value F_pq(x) for any real x is the probability that the distance between p and q is less than x.
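Before following the PM-space thread further, the best-possible bounds mentioned above can be checked numerically. In copula form these are the Fréchet-Hoeffding bounds W(u,v) = max(u+v−1, 0) ≤ C(u,v) ≤ min(u,v) = M(u,v); the explicit forms of W and M used below are the standard ones (they are derived in Section 2.5), and the copula in the middle is the product copula, an illustrative choice:

```python
# Grid check of the Fréchet-Hoeffding bounds W <= C <= M for one
# concrete copula C, the product (independence) copula P(u,v) = u*v.
# W and M are themselves copulas, so both bounds are attained.

def W(u, v):  # lower Fréchet-Hoeffding bound
    return max(u + v - 1.0, 0.0)

def M(u, v):  # upper Fréchet-Hoeffding bound
    return min(u, v)

def P(u, v):  # product copula
    return u * v

grid = [i / 20.0 for i in range(21)]
for u in grid:
    for v in grid:
        assert W(u, v) <= P(u, v) + 1e-12
        assert P(u, v) <= M(u, v) + 1e-12
```

The inequalities are elementary here: uv − (u+v−1) = (1−u)(1−v) ≥ 0 gives the lower bound, and uv ≤ u·1 and uv ≤ 1·v give the upper one.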
The first difficulty in the construction of probabilistic metric spaces comes when one tries to find a “probabilistic” analog of the triangle inequality d(p,r) ≤ d(p,q) + d(q,r)—what is the corresponding relationship among the distribution functions F_pr, F_pq, and F_qr for all p, q, and r in S? Karl Menger (1942) proposed F_pr(x+y) ≥ T(F_pq(x), F_qr(y)), where T is a triangle norm or t-norm. Like a copula, a t-norm maps [0,1]² to [0,1], and joins distribution functions. Some t-norms are copulas, and conversely, some copulas are t-norms. So, in a sense, it was inevitable that copulas would arise in the study of PM spaces. For a thorough treatment of the theory of PM spaces and the history of its development, see (Schweizer and Sklar 1983; Schweizer 1991).

Among the most important results in PM spaces—for the statistician—is the class of Archimedean t-norms, those t-norms T that satisfy T(u,u) < u for all u in (0,1). Archimedean t-norms that are also copulas are called Archimedean copulas. Because of their simple forms, the ease with which they can be constructed, and their many nice properties, Archimedean copulas frequently appear in discussions of multivariate distributions—see, for example, (Genest and MacKay 1986a,b; Marshall and Olkin 1988; Joe 1993, 1997). This important class of copulas is the subject of Chapter 4.

We now turn our attention to copulas and dependence. The earliest paper explicitly relating copulas to the study of dependence among random variables appears to be (Schweizer and Wolff 1981). In that paper, Schweizer and Wolff discussed and modified Rényi’s (1959) criteria for measures of dependence between pairs of random variables, presented the basic invariance properties of copulas under strictly monotone transformations of random variables (see Theorems 2.4.3 and 2.4.4), and introduced the measure of dependence now known as Schweizer and Wolff’s σ (see Section 5.3.1). In their words, since

    ... under almost surely increasing transformations of (the random variables), the copula is invariant while the margins may be changed at will, it follows that it is precisely the copula which captures those properties of the joint distribution which are invariant under almost surely strictly increasing transformations. Hence the study of rank statistics—insofar as it is the study of properties invariant under such transformations—may be characterized as the study of copulas and copula-invariant properties.

Of course, copulas appear implicitly in earlier work on dependence by many other authors, too many to list here, so we will mention only two. Foremost is Hoeffding. In addition to studying the basic properties of “standardized distributions” (i.e., copulas), Hoeffding (1940, 1941) used them to study nonparametric measures of association such as Spearman’s rho and his “dependence index” Φ² (see Section 5.3.1).
Deheuvels (1979, 1981a,b,c) used "empirical dependence functions" (i.e., empirical copulas, the sample analogs of copulas—see Section 5.5) to estimate the population copula and to construct various nonparametric tests of independence. Chapter 5 is devoted to an introduction to the role played by copulas in the study of dependence.

Although this book concentrates on the two applications of copulas mentioned by Fisher (1997)—the construction of families of multivariate distributions and the study of dependence—copulas are being exploited in other ways. We mention but one, which we discuss in the final chapter. Through an ingenious definition of a "product" * of copulas, Darsow, Nguyen, and Olsen (1992) have shown that the Chapman-Kolmogorov equations for the transition probabilities in a real stochastic process can be expressed succinctly in terms of the *-product of copulas. This new approach to the theory of Markov processes may well be the key to "capturing the Markov property of such processes in a framework as simple and perspicuous as the conventional framework for analyzing Markov chains" (Schweizer 1991).
The study of copulas and the role they play in probability, statistics, and stochastic processes is a subject still in its infancy. There are many open problems and much work to be done.
2 Definitions and Basic Properties

In the Introduction, we referred to copulas as "functions that join or couple multivariate distribution functions to their one-dimensional marginal distribution functions" and as "distribution functions whose one-dimensional margins are uniform." But neither of these statements is a definition—hence we will devote this chapter to giving a precise definition of copulas and to examining some of their elementary properties.

But first we present a glimpse of where we are headed. Consider for a moment a pair of random variables X and Y, with distribution functions $F(x) = P[X \le x]$ and $G(y) = P[Y \le y]$, respectively, and a joint distribution function $H(x,y) = P[X \le x, Y \le y]$ (we will review definitions of random variables, distribution functions, and other important topics as needed in the course of this chapter). To each pair of real numbers (x,y) we can associate three numbers: F(x), G(y), and H(x,y). Note that each of these numbers lies in the interval [0,1]. In other words, each pair (x,y) of real numbers leads to a point $(F(x), G(y))$ in the unit square $[0,1] \times [0,1]$, and this ordered pair in turn corresponds to a number H(x,y) in [0,1]. We will show that this correspondence, which assigns the value of the joint distribution function to each ordered pair of values of the individual distribution functions, is indeed a function. Such functions are copulas.

To accomplish what we have outlined above, we need to generalize the notion of "nondecreasing" for univariate functions to a concept applicable to multivariate functions. We begin with some notation and definitions. In Sects. 2.1-2.9, we confine ourselves to the two-dimensional case; in Sect. 2.10, we consider n dimensions.

2.1 Preliminaries

The focus of this section is the notion of a "2-increasing" function—a two-dimensional analog of a nondecreasing function of one variable. But first we need to introduce some notation.
We will let $\mathbf{R}$ denote the ordinary real line $(-\infty,\infty)$, $\overline{\mathbf{R}}$ denote the extended real line $[-\infty,\infty]$, and $\overline{\mathbf{R}}^2$ denote the extended real plane $\overline{\mathbf{R}} \times \overline{\mathbf{R}}$. A rectangle in $\overline{\mathbf{R}}^2$ is the
Cartesian product B of two closed intervals: $B = [x_1,x_2] \times [y_1,y_2]$. The vertices of a rectangle B are the points $(x_1,y_1)$, $(x_1,y_2)$, $(x_2,y_1)$, and $(x_2,y_2)$. The unit square $\mathbf{I}^2$ is the product $\mathbf{I} \times \mathbf{I}$, where $\mathbf{I} = [0,1]$. A 2-place real function H is a function whose domain, DomH, is a subset of $\overline{\mathbf{R}}^2$ and whose range, RanH, is a subset of $\mathbf{R}$.

Definition 2.1.1. Let $S_1$ and $S_2$ be nonempty subsets of $\overline{\mathbf{R}}$, and let H be a two-place real function such that DomH = $S_1 \times S_2$. Let $B = [x_1,x_2] \times [y_1,y_2]$ be a rectangle all of whose vertices are in DomH. Then the H-volume of B is given by
$$V_H(B) = H(x_2,y_2) - H(x_2,y_1) - H(x_1,y_2) + H(x_1,y_1). \tag{2.1.1}$$
Note that if we define the first order differences of H on the rectangle B as
$$\Delta_{x_1}^{x_2} H(x,y) = H(x_2,y) - H(x_1,y) \quad\text{and}\quad \Delta_{y_1}^{y_2} H(x,y) = H(x,y_2) - H(x,y_1),$$
then the H-volume of a rectangle B is the second order difference of H on B,
$$V_H(B) = \Delta_{y_1}^{y_2} \Delta_{x_1}^{x_2} H(x,y).$$

Definition 2.1.2. A 2-place real function H is 2-increasing if $V_H(B) \ge 0$ for all rectangles B whose vertices lie in DomH.

When H is 2-increasing, we will occasionally refer to the H-volume of a rectangle B as the H-measure of B. Some authors refer to 2-increasing functions as quasi-monotone. We note here that the statement "H is 2-increasing" neither implies nor is implied by the statement "H is nondecreasing in each argument," as the following two examples illustrate. The verifications are elementary, and are left as exercises.

Example 2.1. Let H be the function defined on $\mathbf{I}^2$ by H(x,y) = max(x,y). Then H is a nondecreasing function of x and of y; however, $V_H(\mathbf{I}^2) = -1$, so that H is not 2-increasing.

Example 2.2. Let H be the function defined on $\mathbf{I}^2$ by $H(x,y) = (2x-1)(2y-1)$. Then H is 2-increasing; however, it is a decreasing function of x for each y in (0,1/2) and a decreasing function of y for each x in (0,1/2).
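The H-volume (2.1.1) is easy to compute directly; the following sketch (our own illustration, not from the text) checks the claims of Examples 2.1 and 2.2 numerically.

```python
# H-volume of B = [x1,x2] x [y1,y2] as the second-order difference (2.1.1).
def h_volume(H, x1, x2, y1, y2):
    return H(x2, y2) - H(x2, y1) - H(x1, y2) + H(x1, y1)

# Example 2.1: H(x,y) = max(x,y) is nondecreasing in each argument,
# yet V_H(I^2) = -1, so H is not 2-increasing.
H1 = lambda x, y: max(x, y)
print(h_volume(H1, 0, 1, 0, 1))  # -1

# Example 2.2: H(x,y) = (2x-1)(2y-1) is 2-increasing (its volume on any
# rectangle is 4(x2-x1)(y2-y1) >= 0), yet decreasing in x for y < 1/2.
H2 = lambda x, y: (2 * x - 1) * (2 * y - 1)
print(h_volume(H2, 0.1, 0.4, 0.2, 0.3))  # positive
print(H2(0.1, 0.2) > H2(0.4, 0.2))       # True: decreasing in x at y = 0.2
```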
The following lemmas will be very useful in the next section in establishing the continuity of subcopulas and copulas. The first is a direct consequence of Definitions 2.1.1 and 2.1.2.
Lemma 2.1.3. Let $S_1$ and $S_2$ be nonempty subsets of $\overline{\mathbf{R}}$, and let H be a 2-increasing function with domain $S_1 \times S_2$. Let $x_1$, $x_2$ be in $S_1$ with $x_1 \le x_2$, and let $y_1$, $y_2$ be in $S_2$ with $y_1 \le y_2$. Then the function $t \mapsto H(t,y_2) - H(t,y_1)$ is nondecreasing on $S_1$, and the function $t \mapsto H(x_2,t) - H(x_1,t)$ is nondecreasing on $S_2$.

As an immediate application of this lemma, we can show that with an additional hypothesis, a 2-increasing function H is nondecreasing in each argument. Suppose $S_1$ has a least element $a_1$ and that $S_2$ has a least element $a_2$. We say that a function H from $S_1 \times S_2$ into $\mathbf{R}$ is grounded if $H(x,a_2) = 0 = H(a_1,y)$ for all (x,y) in $S_1 \times S_2$. Hence we have

Lemma 2.1.4. Let $S_1$ and $S_2$ be nonempty subsets of $\overline{\mathbf{R}}$, and let H be a grounded 2-increasing function with domain $S_1 \times S_2$. Then H is nondecreasing in each argument.

Proof. Let $a_1$, $a_2$ denote the least elements of $S_1$, $S_2$, respectively, and set $x_1 = a_1$, $y_1 = a_2$ in Lemma 2.1.3.

Now suppose that $S_1$ has a greatest element $b_1$ and that $S_2$ has a greatest element $b_2$. We then say that a function H from $S_1 \times S_2$ into $\mathbf{R}$ has margins, and that the margins of H are the functions F and G given by:

DomF = $S_1$, and $F(x) = H(x,b_2)$ for all x in $S_1$;
DomG = $S_2$, and $G(y) = H(b_1,y)$ for all y in $S_2$.

Example 2.3. Let H be the function with domain $[-1,1] \times [0,\infty]$ given by
$$H(x,y) = \frac{(x+1)(e^y - 1)}{x + 2e^y - 1}.$$
Then H is grounded because H(x,0) = 0 and H(-1,y) = 0; and H has margins F(x) and G(y) given by $F(x) = H(x,\infty) = (x+1)/2$ and $G(y) = H(1,y) = 1 - e^{-y}$.

We close this section with an important lemma concerning grounded 2-increasing functions with margins.

Lemma 2.1.5. Let $S_1$ and $S_2$ be nonempty subsets of $\overline{\mathbf{R}}$, and let H be a grounded 2-increasing function, with margins, whose domain is $S_1 \times S_2$. Let $(x_1,y_1)$ and $(x_2,y_2)$ be any points in $S_1 \times S_2$. Then
$$|H(x_2,y_2) - H(x_1,y_1)| \le |F(x_2) - F(x_1)| + |G(y_2) - G(y_1)|.$$
Proof. From the triangle inequality, we have
$$|H(x_2,y_2) - H(x_1,y_1)| \le |H(x_2,y_2) - H(x_1,y_2)| + |H(x_1,y_2) - H(x_1,y_1)|.$$
Now assume $x_1 \le x_2$. Because H is grounded, 2-increasing, and has margins, Lemmas 2.1.3 and 2.1.4 yield $0 \le H(x_2,y_2) - H(x_1,y_2) \le F(x_2) - F(x_1)$. An analogous inequality holds when $x_2 \le x_1$, hence it follows that for any $x_1$, $x_2$ in $S_1$, $|H(x_2,y_2) - H(x_1,y_2)| \le |F(x_2) - F(x_1)|$. Similarly for any $y_1$, $y_2$ in $S_2$, $|H(x_1,y_2) - H(x_1,y_1)| \le |G(y_2) - G(y_1)|$, which completes the proof.

2.2 Copulas

We are now in a position to define the functions—copulas—that are the subject of this book. To do so, we first define subcopulas as a certain class of grounded 2-increasing functions with margins; then we define copulas as subcopulas with domain $\mathbf{I}^2$.

Definition 2.2.1. A two-dimensional subcopula (or 2-subcopula, or briefly, a subcopula) is a function $C'$ with the following properties:
1. Dom$C'$ = $S_1 \times S_2$, where $S_1$ and $S_2$ are subsets of $\mathbf{I}$ containing 0 and 1;
2. $C'$ is grounded and 2-increasing;
3. For every u in $S_1$ and every v in $S_2$,
$$C'(u,1) = u \quad\text{and}\quad C'(1,v) = v. \tag{2.2.1}$$
Note that for every (u,v) in Dom$C'$, $0 \le C'(u,v) \le 1$, so that Ran$C'$ is also a subset of $\mathbf{I}$.

Definition 2.2.2. A two-dimensional copula (or 2-copula, or briefly, a copula) is a 2-subcopula C whose domain is $\mathbf{I}^2$. Equivalently, a copula is a function C from $\mathbf{I}^2$ to $\mathbf{I}$ with the following properties:
1. For every u, v in $\mathbf{I}$,
$$C(u,0) = 0 = C(0,v) \tag{2.2.2a}$$
and
$$C(u,1) = u \quad\text{and}\quad C(1,v) = v; \tag{2.2.2b}$$
2. For every $u_1$, $u_2$, $v_1$, $v_2$ in $\mathbf{I}$ such that $u_1 \le u_2$ and $v_1 \le v_2$,
$$C(u_2,v_2) - C(u_2,v_1) - C(u_1,v_2) + C(u_1,v_1) \ge 0. \tag{2.2.3}$$
Because $C(u,v) = V_C([0,u] \times [0,v])$, one can think of C(u,v) as an assignment of a number in $\mathbf{I}$ to the rectangle $[0,u] \times [0,v]$. Thus (2.2.3) gives an "inclusion-exclusion" type formula for the number assigned by C to each rectangle $[u_1,u_2] \times [v_1,v_2]$ in $\mathbf{I}^2$ and states that the number so assigned must be nonnegative.

The distinction between a subcopula and a copula (the domain) may appear to be a minor one, but it will be rather important in the next section when we discuss Sklar's theorem. In addition, many of the important properties of copulas are actually properties of subcopulas.

Theorem 2.2.3. Let $C'$ be a subcopula. Then for every (u,v) in Dom$C'$,
$$\max(u+v-1, 0) \le C'(u,v) \le \min(u,v). \tag{2.2.4}$$
Proof. Let (u,v) be an arbitrary point in Dom$C'$. Now $C'(u,v) \le C'(u,1) = u$ and $C'(u,v) \le C'(1,v) = v$ yield $C'(u,v) \le \min(u,v)$. Furthermore, $V_{C'}([u,1] \times [v,1]) \ge 0$ implies $C'(u,v) \ge u+v-1$, which when combined with $C'(u,v) \ge 0$ yields $C'(u,v) \ge \max(u+v-1, 0)$.

Because every copula is a subcopula, the inequality in the above theorem holds for copulas. Indeed, the bounds in (2.2.4) are themselves copulas (see Exercise 2.2) and are commonly denoted by M(u,v) = min(u,v) and W(u,v) = max(u+v-1, 0). Thus for every copula C and every (u,v) in $\mathbf{I}^2$,
$$W(u,v) \le C(u,v) \le M(u,v). \tag{2.2.5}$$
Inequality (2.2.5) is the copula version of the Fréchet-Hoeffding bounds inequality, which we shall encounter later in terms of distribution functions. We refer to M as the Fréchet-Hoeffding upper bound and W as the Fréchet-Hoeffding lower bound. A third important copula that we will frequently encounter is the product copula $\Pi(u,v) = uv$.

The following theorem, which follows directly from Lemma 2.1.5, establishes the continuity of subcopulas—and hence of copulas—via a Lipschitz condition on $\mathbf{I}^2$.

Theorem 2.2.4.
Let $C'$ be a subcopula. Then for every $(u_1,v_1)$, $(u_2,v_2)$ in Dom$C'$,
$$|C'(u_2,v_2) - C'(u_1,v_1)| \le |u_2 - u_1| + |v_2 - v_1|. \tag{2.2.6}$$
Hence $C'$ is uniformly continuous on its domain.

The sections of a copula will be employed in the construction of copulas in the next chapter, and will be used in Chapter 5 to provide interpretations of certain dependence properties:

Definition 2.2.5. Let C be a copula, and let a be any number in $\mathbf{I}$. The horizontal section of C at a is the function from $\mathbf{I}$ to $\mathbf{I}$ given by
$t \mapsto C(t,a)$; the vertical section of C at a is the function from $\mathbf{I}$ to $\mathbf{I}$ given by $t \mapsto C(a,t)$; and the diagonal section of C is the function $\delta_C$ from $\mathbf{I}$ to $\mathbf{I}$ defined by $\delta_C(t) = C(t,t)$.

The following corollary is an immediate consequence of Lemma 2.1.4 and Theorem 2.2.4.

Corollary 2.2.6. The horizontal, vertical, and diagonal sections of a copula C are all nondecreasing and uniformly continuous on $\mathbf{I}$.

Various applications of copulas that we will encounter in later chapters involve the shape of the graph of a copula, i.e., the surface z = C(u,v). It follows from Definition 2.2.2 and Theorem 2.2.4 that the graph of any copula is a continuous surface within the unit cube $\mathbf{I}^3$ whose boundary is the skew quadrilateral with vertices (0,0,0), (1,0,0), (1,1,1), and (0,1,0); and from Theorem 2.2.3 that this graph lies between the graphs of the Fréchet-Hoeffding bounds, i.e., the surfaces z = M(u,v) and z = W(u,v). In Fig. 2.1 we present the graphs of the copulas M and W, as well as the graph of $\Pi$, a portion of the hyperbolic paraboloid z = uv.

Fig. 2.1. Graphs of the copulas M, $\Pi$, and W

A simple but useful way to present the graph of a copula is with a contour diagram (Conway 1979), that is, with graphs of its level sets—the sets in $\mathbf{I}^2$ given by C(u,v) = a constant, for selected constants in $\mathbf{I}$. In Fig. 2.2 we present the contour diagrams of the copulas M, $\Pi$,
and W. Note that the points (t,1) and (1,t) are each members of the level set corresponding to the constant t. Hence we do not need to label the level sets in the diagram, as the boundary conditions C(1,t) = t = C(t,1) readily provide the constant for each level set.

Fig. 2.2. Contour diagrams of the copulas M, $\Pi$, and W

Also note that, given any copula C, it follows from (2.2.5) that for a given t in $\mathbf{I}$ the graph of the level set $\{(u,v) \in \mathbf{I}^2 \mid C(u,v) = t\}$ must lie in the shaded triangle in Fig. 2.3, whose boundaries are the level sets determined by M(u,v) = t and W(u,v) = t.

Fig. 2.3. The region that contains the level set $\{(u,v) \in \mathbf{I}^2 \mid C(u,v) = t\}$

We conclude this section with two theorems concerning the partial derivatives of copulas. The word "almost" is used in the sense of Lebesgue measure.

Theorem 2.2.7. Let C be a copula. For any v in $\mathbf{I}$, the partial derivative $\partial C(u,v)/\partial u$ exists for almost all u, and for such v and u,
$$0 \le \frac{\partial}{\partial u} C(u,v) \le 1. \tag{2.2.7}$$
Similarly, for any u in $\mathbf{I}$, the partial derivative $\partial C(u,v)/\partial v$ exists for almost all v, and for such u and v,
$$0 \le \frac{\partial}{\partial v} C(u,v) \le 1. \tag{2.2.8}$$
Furthermore, the functions $u \mapsto \partial C(u,v)/\partial v$ and $v \mapsto \partial C(u,v)/\partial u$ are defined and nondecreasing almost everywhere on $\mathbf{I}$.

Proof. The existence of the partial derivatives $\partial C(u,v)/\partial u$ and $\partial C(u,v)/\partial v$ is immediate because monotone functions (here the horizontal and vertical sections of the copula) are differentiable almost everywhere. Inequalities (2.2.7) and (2.2.8) follow from (2.2.6) by setting $v_1 = v_2$ and $u_1 = u_2$, respectively. If $v_1 \le v_2$, then, from Lemma 2.1.3, the function $u \mapsto C(u,v_2) - C(u,v_1)$ is nondecreasing. Hence $\partial\big(C(u,v_2) - C(u,v_1)\big)/\partial u$ is defined and nonnegative almost everywhere on $\mathbf{I}$, from which it follows that $v \mapsto \partial C(u,v)/\partial u$ is defined and nondecreasing almost everywhere on $\mathbf{I}$. A similar result holds for $u \mapsto \partial C(u,v)/\partial v$.

Theorem 2.2.8. Let C be a copula. If $\partial C(u,v)/\partial v$ and $\partial^2 C(u,v)/\partial u\,\partial v$ are continuous on $\mathbf{I}^2$ and $\partial C(u,v)/\partial u$ exists for all $u \in (0,1)$ when v = 0, then $\partial C(u,v)/\partial u$ and $\partial^2 C(u,v)/\partial v\,\partial u$ exist in $(0,1)^2$ and $\partial^2 C(u,v)/\partial u\,\partial v = \partial^2 C(u,v)/\partial v\,\partial u$.

Proof. See (Seeley 1961).

Exercises

2.1 Verify the statements in Examples 2.1 and 2.2.

2.2 Show that $M(u,v) = \min(u,v)$, $W(u,v) = \max(u+v-1, 0)$, and $\Pi(u,v) = uv$ are indeed copulas.

2.3 (a) Let $C_0$ and $C_1$ be copulas, and let $\theta$ be any number in $\mathbf{I}$. Show that the weighted arithmetic mean $(1-\theta)C_0 + \theta C_1$ is also a copula. Hence conclude that any convex linear combination of copulas is a copula.
(b) Show that the geometric mean of two copulas may fail to be a copula. [Hint: Let C be the geometric mean of $\Pi$ and W, and show that the C-volume of the rectangle $[1/2, 3/4] \times [1/2, 3/4]$ is negative.]

2.4 The Fréchet and Mardia families of copulas.
(a) Let $\alpha$, $\beta$ be in $\mathbf{I}$ with $\alpha + \beta \le 1$. Set
$$C_{\alpha,\beta}(u,v) = \alpha M(u,v) + (1 - \alpha - \beta)\Pi(u,v) + \beta W(u,v).$$
Show that $C_{\alpha,\beta}$ is a copula. A family of copulas that includes M, $\Pi$, and W is called comprehensive. This two-parameter comprehensive family is due to Fréchet (1958).
(b) Let $\theta$ be in [-1,1], and set
$$C_\theta(u,v) = \frac{\theta^2(1+\theta)}{2} M(u,v) + (1-\theta^2)\Pi(u,v) + \frac{\theta^2(1-\theta)}{2} W(u,v). \tag{2.2.9}$$
Show that $C_\theta$ is a copula. This one-parameter comprehensive family is due to Mardia (1970).

2.5 The Cuadras-Augé family of copulas. Let $\theta$ be in $\mathbf{I}$, and set
$$C_\theta(u,v) = [\min(u,v)]^\theta [uv]^{1-\theta} = \begin{cases} u v^{1-\theta}, & u \le v, \\ u^{1-\theta} v, & u \ge v. \end{cases} \tag{2.2.10}$$
Show that $C_\theta$ is a copula. Note that $C_0 = \Pi$ and $C_1 = M$. This family (weighted geometric means of M and $\Pi$) is due to Cuadras and Augé (1981).

2.6 Let C be a copula, and let (a,b) be any point in $\mathbf{I}^2$. For (u,v) in $\mathbf{I}^2$, define
$$K_{a,b}(u,v) = V_C\big([a(1-u),\, u + a(1-u)] \times [b(1-v),\, v + b(1-v)]\big).$$
Show that $K_{a,b}$ is a copula. Note that $K_{0,0}(u,v) = C(u,v)$. Several special cases will be of interest in Sects. 2.4, 2.7, and 6.4, namely: $K_{0,1}(u,v) = u - C(u,1-v)$, $K_{1,0}(u,v) = v - C(1-u,v)$, and $K_{1,1}(u,v) = u + v - 1 + C(1-u,1-v)$.

2.7 Let f be a function from $\mathbf{I}^2$ into $\mathbf{I}$ which is nondecreasing in each variable and has margins given by f(t,1) = t = f(1,t) for all t in $\mathbf{I}$. Prove that f is grounded.

2.8 (a) Show that for any copula C, $\max(2t-1, 0) \le \delta_C(t) \le t$ for all t in $\mathbf{I}$.
(b) Show that $\delta_C(t) = \delta_M(t)$ for all t in $\mathbf{I}$ implies C = M.
(c) Show that $\delta_C(t) = \delta_W(t)$ for all t in $\mathbf{I}$ does not imply that C = W.
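A numerical companion to Exercises 2.4 and 2.5 (our own sketch, not from the text): each member of the Cuadras-Augé family interpolates between $\Pi$ (at $\theta = 0$) and M (at $\theta = 1$), and every value stays within the Fréchet-Hoeffding bounds (2.2.5), as it must for any copula.

```python
# Grid check that Cuadras-Auge copulas (2.2.10) lie between W and M.
M = lambda u, v: min(u, v)
W = lambda u, v: max(u + v - 1, 0)

def cuadras_auge(theta):
    # Weighted geometric mean of M and Pi.
    return lambda u, v: min(u, v) ** theta * (u * v) ** (1 - theta)

grid = [i / 20 for i in range(21)]
# Small tolerances absorb floating-point rounding at points of equality.
ok = all(
    W(u, v) - 1e-12 <= cuadras_auge(t)(u, v) <= M(u, v) + 1e-12
    for t in (0.0, 0.5, 1.0) for u in grid for v in grid
)
print(ok)  # True
```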
2.9 The secondary diagonal section of C is given by $C(t, 1-t)$. Show that $C(t,1-t) = 0$ for all t in $\mathbf{I}$ implies C = W.

2.10 Let t be in [0,1), and let $C_t$ be the function from $\mathbf{I}^2$ into $\mathbf{I}$ given by
$$C_t(u,v) = \begin{cases} \max(u+v-1, t), & (u,v) \in [t,1]^2, \\ \min(u,v), & \text{otherwise}. \end{cases}$$
(a) Show that $C_t$ is a copula.
(b) Show that the level set $\{(u,v) \in \mathbf{I}^2 \mid C_t(u,v) = t\}$ is the set of points in the triangle with vertices (t,1), (1,t), and (t,t), that is, the shaded region in Fig. 2.3. The copula in this exercise illustrates why the term "level set" is preferable to "level curve" for some copulas.

2.11 This exercise shows that the 2-increasing condition (2.2.3) for copulas is not a consequence of simpler properties. Let Q be the function from $\mathbf{I}^2$ into $\mathbf{I}$ given by
$$Q(u,v) = \begin{cases} \min\left(u, v, \tfrac{1}{3}, u+v-\tfrac{2}{3}\right), & \tfrac{2}{3} \le u+v \le \tfrac{4}{3}, \\ \max(u+v-1, 0), & \text{otherwise}; \end{cases}$$
that is, Q has the values given in Fig. 2.4 in the various parts of $\mathbf{I}^2$.
(a) Show that for every u,v in $\mathbf{I}$, Q(u,0) = Q(0,v) = 0, Q(u,1) = u and Q(1,v) = v; $W(u,v) \le Q(u,v) \le M(u,v)$; and that Q is continuous, satisfies the Lipschitz condition (2.2.6), and is nondecreasing in each variable.
(b) Show that Q fails to be 2-increasing, and hence is not a copula. [Hint: consider the Q-volume of the rectangle $[1/3, 2/3]^2$.]

Fig. 2.4. The function Q in Exercise 2.11
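The hint in Exercise 2.11(b) can be checked directly (our own sketch): Q meets the boundary, bounds, and continuity conditions of part (a), yet its Q-volume on $[1/3, 2/3]^2$ is $-1/3$, so Q fails (2.2.3).

```python
# The function Q of Exercise 2.11 and its volume on [1/3, 2/3]^2.
def Q(u, v):
    if 2/3 <= u + v <= 4/3:
        return min(u, v, 1/3, u + v - 2/3)
    return max(u + v - 1, 0)

vol = Q(2/3, 2/3) - Q(2/3, 1/3) - Q(1/3, 2/3) + Q(1/3, 1/3)
print(vol)  # approximately -1/3, so Q is not 2-increasing
```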
2.3 Sklar's Theorem

The theorem in the title of this section is central to the theory of copulas and is the foundation of many, if not most, of the applications of that theory to statistics. Sklar's theorem elucidates the role that copulas play in the relationship between multivariate distribution functions and their univariate margins. Thus we begin this section with a short discussion of distribution functions.

Definition 2.3.1. A distribution function is a function F with domain $\overline{\mathbf{R}}$ such that
1. F is nondecreasing,
2. $F(-\infty) = 0$ and $F(\infty) = 1$.

Example 2.4. For any number a in $\mathbf{R}$, the unit step at a is the distribution function $\varepsilon_a$ given by
$$\varepsilon_a(x) = \begin{cases} 0, & x \in [-\infty, a), \\ 1, & x \in [a, \infty]; \end{cases}$$
and for any numbers a,b in $\mathbf{R}$ with a < b, the uniform distribution on [a,b] is the distribution function $U_{ab}$ given by
$$U_{ab}(x) = \begin{cases} 0, & x \in [-\infty, a), \\ \dfrac{x-a}{b-a}, & x \in [a,b], \\ 1, & x \in (b, \infty]. \end{cases}$$

Definition 2.3.2. A joint distribution function is a function H with domain $\overline{\mathbf{R}}^2$ such that
1. H is 2-increasing,
2. $H(x,-\infty) = H(-\infty,y) = 0$, and $H(\infty,\infty) = 1$.

Thus H is grounded, and because DomH = $\overline{\mathbf{R}}^2$, H has margins F and G given by $F(x) = H(x,\infty)$ and $G(y) = H(\infty,y)$. By virtue of Corollary 2.2.6, F and G are distribution functions.

Example 2.5. Let H be the function with domain $\overline{\mathbf{R}}^2$ given by
$$H(x,y) = \begin{cases} \dfrac{(x+1)(e^y-1)}{x+2e^y-1}, & (x,y) \in [-1,1] \times [0,\infty], \\ 1 - e^{-y}, & (x,y) \in (1,\infty] \times [0,\infty], \\ 0, & \text{elsewhere}. \end{cases}$$
It is tedious but elementary to verify that H is 2-increasing and grounded, and that $H(\infty,\infty) = 1$. Hence H is a joint distribution function. The margins of H are the distribution functions F and G given by $F = U_{-1,1}$ and
$$G(y) = \begin{cases} 0, & y \in [-\infty, 0), \\ 1 - e^{-y}, & y \in [0, \infty]. \end{cases}$$
[Cf. Examples 2.3 and 2.4.]

Note that there is nothing "probabilistic" in these definitions of distribution functions. Random variables are not mentioned, nor is left-continuity or right-continuity. All the distribution functions of one or of two random variables usually encountered in statistics satisfy either the first or the second of the above definitions. Hence any results we derive for such distribution functions will hold when we discuss random variables, regardless of any additional restrictions that may be imposed.

Theorem 2.3.3. Sklar's theorem. Let H be a joint distribution function with margins F and G. Then there exists a copula C such that for all x,y in $\overline{\mathbf{R}}$,
$$H(x,y) = C(F(x), G(y)). \tag{2.3.1}$$
If F and G are continuous, then C is unique; otherwise, C is uniquely determined on RanF × RanG. Conversely, if C is a copula and F and G are distribution functions, then the function H defined by (2.3.1) is a joint distribution function with margins F and G.

This theorem first appeared in (Sklar 1959). The name "copula" was chosen to emphasize the manner in which a copula "couples" a joint distribution function to its univariate margins. The argument that we give below is essentially the same as in (Schweizer and Sklar 1974). It requires two lemmas.

Lemma 2.3.4. Let H be a joint distribution function with margins F and G. Then there exists a unique subcopula $C'$ such that
1. Dom$C'$ = RanF × RanG,
2. For all x,y in $\overline{\mathbf{R}}$, $H(x,y) = C'(F(x), G(y))$.

Proof. The joint distribution H satisfies the hypotheses of Lemma 2.1.5 with $S_1 = S_2 = \overline{\mathbf{R}}$.
Hence for any points $(x_1,y_1)$ and $(x_2,y_2)$ in $\overline{\mathbf{R}}^2$,
$$|H(x_2,y_2) - H(x_1,y_1)| \le |F(x_2) - F(x_1)| + |G(y_2) - G(y_1)|.$$
It follows that if $F(x_1) = F(x_2)$ and $G(y_1) = G(y_2)$, then $H(x_1,y_1) = H(x_2,y_2)$. Thus the set of ordered pairs
$$\big\{ \big( (F(x), G(y)),\, H(x,y) \big) \mid x,y \in \overline{\mathbf{R}} \big\}$$
defines a 2-place real function $C'$ whose domain is RanF × RanG. That this function is a subcopula follows directly from the properties of H. For instance, to show that (2.2.2) holds, we first note that for each u in RanF, there is an x in $\overline{\mathbf{R}}$ such that F(x) = u. Thus $C'(u,1) = C'(F(x), G(\infty)) = H(x,\infty) = F(x) = u$. Verifications of the other conditions in Definition 2.2.1 are similar.

Lemma 2.3.5. Let $C'$ be a subcopula. Then there exists a copula C such that $C(u,v) = C'(u,v)$ for all (u,v) in Dom$C'$; i.e., any subcopula can be extended to a copula. The extension is generally non-unique.

Proof. Let Dom$C'$ = $S_1 \times S_2$. Using Theorem 2.2.4 and the fact that $C'$ is nondecreasing in each place, we can extend $C'$ by continuity to a function $C''$ with domain $\overline{S}_1 \times \overline{S}_2$, where $\overline{S}_1$ is the closure of $S_1$ and $\overline{S}_2$ is the closure of $S_2$. Clearly $C''$ is also a subcopula. We next extend $C''$ to a function C with domain $\mathbf{I}^2$. To this end, let (a,b) be any point in $\mathbf{I}^2$, let $a_1$ and $a_2$ be, respectively, the greatest and least elements of $\overline{S}_1$ that satisfy $a_1 \le a \le a_2$; and let $b_1$ and $b_2$ be, respectively, the greatest and least elements of $\overline{S}_2$ that satisfy $b_1 \le b \le b_2$. Note that if a is in $\overline{S}_1$, then $a_1 = a = a_2$; and if b is in $\overline{S}_2$, then $b_1 = b = b_2$. Now let
$$\lambda_1 = \begin{cases} (a - a_1)/(a_2 - a_1), & a_1 < a_2, \\ 1, & a_1 = a_2; \end{cases} \qquad \mu_1 = \begin{cases} (b - b_1)/(b_2 - b_1), & b_1 < b_2, \\ 1, & b_1 = b_2; \end{cases}$$
and define
$$\begin{aligned} C(a,b) = {}& (1-\lambda_1)(1-\mu_1)C''(a_1,b_1) + (1-\lambda_1)\mu_1 C''(a_1,b_2) \\ & + \lambda_1(1-\mu_1)C''(a_2,b_1) + \lambda_1\mu_1 C''(a_2,b_2). \end{aligned} \tag{2.3.2}$$
Notice that the interpolation defined in (2.3.2) is linear in each place (what we call bilinear interpolation) because $\lambda_1$ and $\mu_1$ are linear in a and b, respectively. It is obvious that DomC = $\mathbf{I}^2$, that $C(a,b) = C''(a,b)$ for any (a,b) in Dom$C''$, and that C satisfies (2.2.2a) and (2.2.2b). Hence we only must show that C satisfies (2.2.3).
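The bilinear interpolation (2.3.2) is simple to implement; the sketch below (our own illustration, not from the text) extends a four-corner subcopula, with corner values as in Example 2.6 later in this section, and reproduces the product copula on the interior.

```python
# Bilinear interpolation (2.3.2): extend a subcopula C'' from the corners
# of a grid cell [a1,a2] x [b1,b2] to an interior point (a,b).
def bilinear(Cpp, a1, a2, b1, b2, a, b):
    lam = (a - a1) / (a2 - a1) if a1 < a2 else 1.0
    mu = (b - b1) / (b2 - b1) if b1 < b2 else 1.0
    return ((1 - lam) * (1 - mu) * Cpp(a1, b1) + (1 - lam) * mu * Cpp(a1, b2)
            + lam * (1 - mu) * Cpp(a2, b1) + lam * mu * Cpp(a2, b2))

# A subcopula on {0,1} x {0,1} with C'(0,0) = C'(0,1) = C'(1,0) = 0 and
# C'(1,1) = 1; its bilinear extension is lam * mu * 1 = u * v, i.e., Pi.
corners = lambda u, v: 1.0 if (u, v) == (1, 1) else 0.0
print(bilinear(corners, 0, 1, 0, 1, 0.3, 0.6))  # 0.3 * 0.6
```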
To accomplish this, let (c,d) be another point in $\mathbf{I}^2$ such that $c \ge a$ and $d \ge b$, and let $c_1$, $d_1$, $c_2$, $d_2$, $\lambda_2$, $\mu_2$ be related to c and d as $a_1$, $b_1$, $a_2$, $b_2$, $\lambda_1$, $\mu_1$ are related to a and b. In evaluating $V_C(B)$ for the rectangle B = [a,c] × [b,d], there will be several cases to consider, depending upon whether or not there is a point in $\overline{S}_1$ strictly between a and c, and whether or not there is a point in $\overline{S}_2$
strictly between b and d. In the simplest of these cases, there is no point in $\overline{S}_1$ strictly between a and c, and no point in $\overline{S}_2$ strictly between b and d, so that $c_1 = a_1$, $c_2 = a_2$, $d_1 = b_1$, and $d_2 = b_2$. Substituting (2.3.2) and the corresponding terms for C(a,d), C(c,b) and C(c,d) into the expression given by (2.1.1) for $V_C(B)$ and simplifying yields
$$V_C(B) = V_C([a,c] \times [b,d]) = (\lambda_2 - \lambda_1)(\mu_2 - \mu_1) V_{C''}([a_1,a_2] \times [b_1,b_2]),$$
from which it follows that $V_C(B) \ge 0$ in this case, as $c \ge a$ and $d \ge b$ imply $\lambda_2 \ge \lambda_1$ and $\mu_2 \ge \mu_1$.

Fig. 2.5. The least simple case in the proof of Lemma 2.3.5

At the other extreme, the least simple case occurs when there is at least one point in $\overline{S}_1$ strictly between a and c, and at least one point in $\overline{S}_2$ strictly between b and d, so that $a < a_2 \le c_1 < c$ and $b < b_2 \le d_1 < d$. In this case—which is illustrated in Fig. 2.5—substituting (2.3.2) and the corresponding terms for C(a,d), C(c,b) and C(c,d) into the expression given by (2.1.1) for $V_C(B)$ and rearranging the terms yields
$$\begin{aligned} V_C(B) = {}& (1-\lambda_1)\mu_2 V_{C''}([a_1,a_2] \times [d_1,d_2]) + \mu_2 V_{C''}([a_2,c_1] \times [d_1,d_2]) \\ & + \lambda_2\mu_2 V_{C''}([c_1,c_2] \times [d_1,d_2]) + (1-\lambda_1) V_{C''}([a_1,a_2] \times [b_2,d_1]) \\ & + V_{C''}([a_2,c_1] \times [b_2,d_1]) + \lambda_2 V_{C''}([c_1,c_2] \times [b_2,d_1]) \\ & + (1-\lambda_1)(1-\mu_1) V_{C''}([a_1,a_2] \times [b_1,b_2]) + (1-\mu_1) V_{C''}([a_2,c_1] \times [b_1,b_2]) \\ & + \lambda_2(1-\mu_1) V_{C''}([c_1,c_2] \times [b_1,b_2]). \end{aligned}$$
The right-hand side of the above expression is a combination of nine nonnegative quantities (the $C''$-volumes of the nine rectangles determined by the dashed lines in Fig. 2.5) with nonnegative coefficients, and hence is nonnegative. The remaining cases are similar, which completes the proof.

Example 2.6. Let (a,b) be any point in $\overline{\mathbf{R}}^2$, and consider the following distribution function H:
$$H(x,y) = \begin{cases} 0, & x < a \text{ or } y < b, \\ 1, & x \ge a \text{ and } y \ge b. \end{cases}$$
The margins of H are the unit step functions $\varepsilon_a$ and $\varepsilon_b$. Applying Lemma 2.3.4 yields the subcopula $C'$ with domain $\{0,1\} \times \{0,1\}$ such that $C'(0,0) = C'(0,1) = C'(1,0) = 0$ and $C'(1,1) = 1$. The extension of $C'$ to a copula C via Lemma 2.3.5 is the copula C = $\Pi$, i.e., C(u,v) = uv. Notice, however, that every copula agrees with $C'$ on its domain, and thus is an extension of this $C'$.

We are now ready to prove Sklar's theorem, which we restate here for convenience.

Theorem 2.3.3. Sklar's theorem. Let H be a joint distribution function with margins F and G. Then there exists a copula C such that for all x,y in $\overline{\mathbf{R}}$,
$$H(x,y) = C(F(x), G(y)). \tag{2.3.1}$$
If F and G are continuous, then C is unique; otherwise, C is uniquely determined on RanF × RanG. Conversely, if C is a copula and F and G are distribution functions, then the function H defined by (2.3.1) is a joint distribution function with margins F and G.

Proof. The existence of a copula C such that (2.3.1) holds for all x,y in $\overline{\mathbf{R}}$ follows from Lemmas 2.3.4 and 2.3.5. If F and G are continuous, then RanF = RanG = $\mathbf{I}$, so that the unique subcopula in Lemma 2.3.4 is a copula. The converse is a matter of straightforward verification.

Equation (2.3.1) gives an expression for joint distribution functions in terms of a copula and two univariate distribution functions. But (2.3.1) can be inverted to express copulas in terms of a joint distribution function and the "inverses" of the two margins. However, if a margin is not strictly increasing, then it does not possess an inverse in the usual sense.
Thus we first need to define "quasi-inverses" of distribution functions (recall Definition 2.3.1).

Definition 2.3.6. Let F be a distribution function. Then a quasi-inverse of F is any function $F^{(-1)}$ with domain $\mathbf{I}$ such that
1. if t is in RanF, then $F^{(-1)}(t)$ is any number x in $\overline{\mathbf{R}}$ such that F(x) = t, i.e., for all t in RanF, $F(F^{(-1)}(t)) = t$;
2. if t is not in RanF, then
$$F^{(-1)}(t) = \inf\{x \mid F(x) \ge t\} = \sup\{x \mid F(x) \le t\}.$$
If F is strictly increasing, then it has but a single quasi-inverse, which is of course the ordinary inverse, for which we use the customary notation $F^{-1}$.

Example 2.7. The quasi-inverses of $\varepsilon_a$, the unit step at a (see Example 2.4), are the functions given by
$$\varepsilon_a^{(-1)}(t) = \begin{cases} a_0, & t = 0, \\ a, & t \in (0,1), \\ a_1, & t = 1, \end{cases}$$
where $a_0$ and $a_1$ are any numbers in $\overline{\mathbf{R}}$ such that $a_0 < a \le a_1$.

Using quasi-inverses of distribution functions, we now have the following corollary to Lemma 2.3.4.

Corollary 2.3.7. Let H, F, G, and $C'$ be as in Lemma 2.3.4, and let $F^{(-1)}$ and $G^{(-1)}$ be quasi-inverses of F and G, respectively. Then for any (u,v) in Dom$C'$,
$$C'(u,v) = H(F^{(-1)}(u), G^{(-1)}(v)). \tag{2.3.3}$$
When F and G are continuous, the above result holds for copulas as well and provides a method of constructing copulas from joint distribution functions. We will exploit Corollary 2.3.7 in the next chapter to construct families of copulas, but for now the following examples will serve to illustrate the procedure.

Example 2.8. Recall the distribution function H from Example 2.5:
$$H(x,y) = \begin{cases} \dfrac{(x+1)(e^y-1)}{x+2e^y-1}, & (x,y) \in [-1,1] \times [0,\infty], \\ 1 - e^{-y}, & (x,y) \in (1,\infty] \times [0,\infty], \\ 0, & \text{elsewhere}, \end{cases}$$
with margins F and G given by
$$F(x) = \begin{cases} 0, & x < -1, \\ (x+1)/2, & x \in [-1,1], \\ 1, & x > 1, \end{cases} \qquad G(y) = \begin{cases} 0, & y < 0, \\ 1 - e^{-y}, & y \ge 0. \end{cases}$$
Quasi-inverses of F and G are given by $F^{(-1)}(u) = 2u - 1$ and $G^{(-1)}(v) = -\ln(1-v)$ for u,v in $\mathbf{I}$. Because RanF = RanG = $\mathbf{I}$, (2.3.3) yields the copula C given by
\[ C(u,v) = \frac{uv}{u+v-uv}. \tag{2.3.4} \]

Example 2.9. Gumbel's bivariate exponential distribution (Gumbel 1960a). Let $H_\theta$ be the joint distribution function given by
\[ H_\theta(x,y) = \begin{cases} 1 - e^{-x} - e^{-y} + e^{-(x+y+\theta xy)}, & x \ge 0,\ y \ge 0; \\ 0, & \text{otherwise}; \end{cases} \]
where $\theta$ is a parameter in [0,1]. Then the marginal distribution functions are exponentials, with quasi-inverses $F^{(-1)}(u) = -\ln(1-u)$ and $G^{(-1)}(v) = -\ln(1-v)$ for $u,v$ in $\mathbf{I}$. Hence the corresponding copula is
\[ C_\theta(u,v) = u + v - 1 + (1-u)(1-v)\,e^{-\theta\ln(1-u)\ln(1-v)}. \tag{2.3.5} \]

Example 2.10. It is an exercise in many mathematical statistics texts to find an example of a bivariate distribution with standard normal margins that is not the standard bivariate normal with parameters $\mu_x = \mu_y = 0$, $\sigma_x^2 = \sigma_y^2 = 1$, and Pearson's product-moment correlation coefficient $\rho$. With Sklar's theorem and Corollary 2.3.7 this becomes trivial: let C be a copula such as one in either of the preceding examples, and use standard normal margins in (2.3.1). Indeed, if $\Phi$ denotes the standard (univariate) normal distribution function and $N_\rho$ denotes the standard bivariate normal distribution function (with Pearson's product-moment correlation coefficient $\rho$), then any copula except one of the form
\[ C(u,v) = N_\rho\bigl(\Phi^{-1}(u), \Phi^{-1}(v)\bigr) = \frac{1}{2\pi\sqrt{1-\rho^2}} \int_{-\infty}^{\Phi^{-1}(u)} \!\! \int_{-\infty}^{\Phi^{-1}(v)} \exp\!\left[\frac{-(s^2 - 2\rho s t + t^2)}{2(1-\rho^2)}\right] dt\, ds \tag{2.3.6} \]
(with $\rho \ne -1$, 0, or 1) will suffice. Explicit constructions using the copulas in Exercises 2.4, 2.12, and 3.11, Example 3.12, and Sect. 3.3.1 can be found in (Kowalski 1973), and one using the copula $C_{1/2}$ from Exercise 2.10 in (Vitale 1978).

We close this section with one final observation. With an appropriate extension of its domain to $\overline{\mathbf{R}}^2$, every copula is a joint distribution function with margins that are uniform on $\mathbf{I}$. To be precise, let C be a copula, and define the function $H_C$ on $\overline{\mathbf{R}}^2$ via
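As a concrete check of Corollary 2.3.7, the construction in Example 2.8 can be carried out numerically. The sketch below (plain Python; the function names are ours, not the book's) evaluates $H(F^{(-1)}(u), G^{(-1)}(v))$ at a few interior points and compares it with the closed form (2.3.4).

```python
import math

def H(x, y):
    """Joint distribution function of Example 2.5, on [-1,1] x [0, inf)."""
    return (x + 1) * (math.exp(y) - 1) / (x + 2 * math.exp(y) - 1)

def F_qinv(u):
    """Quasi-inverse of F(x) = (x+1)/2 on [-1,1]."""
    return 2 * u - 1

def G_qinv(v):
    """Quasi-inverse of G(y) = 1 - exp(-y)."""
    return -math.log(1 - v)

def C_from_H(u, v):
    """Copula built from H via Eq. (2.3.3)."""
    return H(F_qinv(u), G_qinv(v))

def C_closed(u, v):
    """Closed form (2.3.4)."""
    return u * v / (u + v - u * v)

for u in (0.1, 0.5, 0.9):
    for v in (0.2, 0.7):
        assert abs(C_from_H(u, v) - C_closed(u, v)) < 1e-12
```

The agreement is exact up to floating-point error, since algebraic simplification of $H(2u-1, -\ln(1-v))$ gives $uv/(u+v-uv)$.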
\[ H_C(x,y) = \begin{cases} 0, & x < 0 \text{ or } y < 0, \\ C(x,y), & (x,y) \in \mathbf{I}^2, \\ x, & y > 1,\ x \in \mathbf{I}, \\ y, & x > 1,\ y \in \mathbf{I}, \\ 1, & x > 1 \text{ and } y > 1. \end{cases} \]

Then $H_C$ is a distribution function both of whose margins are readily seen to be uniform on (0,1). Indeed, it is often quite useful to think of copulas as restrictions to $\mathbf{I}^2$ of joint distribution functions whose margins are uniform on (0,1).

2.4 Copulas and Random Variables

In this book, we will use the term "random variable" in the statistical rather than the probabilistic sense; that is, a random variable is a quantity whose values are described by a (known or unknown) probability distribution function. Of course, all of the results to follow remain valid when a random variable is defined in terms of measure theory, i.e., as a measurable function on a given probability space. But for our purposes it suffices to adopt the descriptions of Wald (1947), "a variable x is called a random variable if for any given value c a definite probability can be ascribed to the event that x will take a value less than c"; and of Gnedenko (1962), "a random variable is a variable quantity whose values depend on chance and for which there exists a distribution function." For a detailed discussion of this point of view, see (Menger 1956).

In what follows, we will use capital letters, such as X and Y, to represent random variables, and lowercase letters x, y to represent their values. We will say that F is the distribution function of the random variable X when for all x in $\overline{\mathbf{R}}$, $F(x) = P[X \le x]$. We are defining distribution functions of random variables to be right-continuous, but that is simply a matter of custom and convenience; left-continuous distribution functions would serve equally well. A random variable is continuous if its distribution function is continuous.
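The extension $H_C$ can be written down directly. A minimal sketch (our own helper, assuming the copula C is given as a Python function): clamping the arguments to [0,1] reproduces all five cases, because a copula satisfies $C(u,1) = u$ and $C(1,v) = v$.

```python
def make_H_C(C):
    """Extend a copula C on I^2 to a joint distribution function H_C on R^2."""
    def H_C(x, y):
        if x < 0 or y < 0:
            return 0.0
        # For x > 1 or y > 1, clamping uses the margin conditions
        # C(u, 1) = u and C(1, v) = v to recover the remaining cases.
        return C(min(x, 1.0), min(y, 1.0))
    return H_C

M = lambda u, v: min(u, v)          # Frechet-Hoeffding upper bound
H = make_H_C(M)

# Margins of H_C are uniform on (0,1): H(x, +inf) = x for x in I, etc.
assert H(0.3, 10) == 0.3 and H(10, 0.8) == 0.8
assert H(-1, 0.5) == 0.0 and H(2, 2) == 1.0
```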
When we discuss two or more random variables, we adopt the same convention: two or more random variables are the components of a quantity (now a vector) whose values are described by a joint distribution function. As a consequence, we always assume that the collection of random variables under discussion can be defined on a common probability space.

We are now in a position to restate Sklar's theorem in terms of random variables and their distribution functions:

Theorem 2.4.1. Let X and Y be random variables with distribution functions F and G, respectively, and joint distribution function H. Then
there exists a copula C such that (2.3.1) holds. If F and G are continuous, C is unique. Otherwise, C is uniquely determined on $\operatorname{Ran}F \times \operatorname{Ran}G$.

The copula C in Theorem 2.4.1 will be called the copula of X and Y, and denoted $C_{XY}$ when its identification with the random variables X and Y is advantageous.

The following theorem shows that the product copula $\Pi(u,v) = uv$ characterizes independent random variables when the distribution functions are continuous. Its proof follows from Theorem 2.4.1 and the observation that X and Y are independent if and only if $H(x,y) = F(x)G(y)$ for all $x,y$ in $\overline{\mathbf{R}}$.

Theorem 2.4.2. Let X and Y be continuous random variables. Then X and Y are independent if and only if $C_{XY} = \Pi$.

Much of the usefulness of copulas in the study of nonparametric statistics derives from the fact that for strictly monotone transformations of the random variables, copulas are either invariant or change in predictable ways. Recall that if the distribution function of a random variable X is continuous, and if $\alpha$ is a strictly monotone function whose domain contains $\operatorname{Ran}X$, then the distribution function of the random variable $\alpha(X)$ is also continuous. We treat the case of strictly increasing transformations first.

Theorem 2.4.3. Let X and Y be continuous random variables with copula $C_{XY}$. If $\alpha$ and $\beta$ are strictly increasing on $\operatorname{Ran}X$ and $\operatorname{Ran}Y$, respectively, then $C_{\alpha(X)\beta(Y)} = C_{XY}$. Thus $C_{XY}$ is invariant under strictly increasing transformations of X and Y.

Proof. Let $F_1$, $G_1$, $F_2$, and $G_2$ denote the distribution functions of X, Y, $\alpha(X)$, and $\beta(Y)$, respectively. Because $\alpha$ and $\beta$ are strictly increasing, $F_2(x) = P[\alpha(X) \le x] = P[X \le \alpha^{-1}(x)] = F_1(\alpha^{-1}(x))$, and likewise $G_2(y) = G_1(\beta^{-1}(y))$. Thus, for any $x,y$ in $\overline{\mathbf{R}}$,
\[
\begin{aligned}
C_{\alpha(X)\beta(Y)}\bigl(F_2(x), G_2(y)\bigr) &= P[\alpha(X) \le x,\ \beta(Y) \le y] \\
&= P[X \le \alpha^{-1}(x),\ Y \le \beta^{-1}(y)] \\
&= C_{XY}\bigl(F_1(\alpha^{-1}(x)), G_1(\beta^{-1}(y))\bigr) \\
&= C_{XY}\bigl(F_2(x), G_2(y)\bigr).
\end{aligned}
\]
Because X and Y are continuous, $\operatorname{Ran}F_2 = \operatorname{Ran}G_2 = \mathbf{I}$, whence it follows that $C_{\alpha(X)\beta(Y)} = C_{XY}$ on $\mathbf{I}^2$.
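Theorem 2.4.3 has a finite-sample counterpart: the empirical copula of a sample depends only on the ranks of the observations, and strictly increasing transformations leave ranks unchanged. A quick illustration (plain Python; the seed, sample size, and particular transformations are our arbitrary choices):

```python
import math
import random

def ranks(xs):
    """Rank of each observation (1 = smallest); ties occur with probability zero here."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

random.seed(1)
x = [random.gauss(0, 1) for _ in range(200)]
y = [random.gauss(0, 1) for _ in range(200)]

ax = [math.exp(t) for t in x]   # alpha(t) = e^t, strictly increasing
by = [t ** 3 for t in y]        # beta(t) = t^3, strictly increasing on R

# Ranks, hence the empirical copula, are invariant:
assert ranks(ax) == ranks(x) and ranks(by) == ranks(y)
```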
When at least one of $\alpha$ and $\beta$ is strictly decreasing, we obtain results in which the copula of the random variables $\alpha(X)$ and $\beta(Y)$ is a simple transformation of $C_{XY}$. Specifically, we have:

Theorem 2.4.4. Let X and Y be continuous random variables with copula $C_{XY}$. Let $\alpha$ and $\beta$ be strictly monotone on $\operatorname{Ran}X$ and $\operatorname{Ran}Y$, respectively.

1. If $\alpha$ is strictly increasing and $\beta$ is strictly decreasing, then
\[ C_{\alpha(X)\beta(Y)}(u,v) = u - C_{XY}(u,\, 1-v). \]
2. If $\alpha$ is strictly decreasing and $\beta$ is strictly increasing, then
\[ C_{\alpha(X)\beta(Y)}(u,v) = v - C_{XY}(1-u,\, v). \]
3. If $\alpha$ and $\beta$ are both strictly decreasing, then
\[ C_{\alpha(X)\beta(Y)}(u,v) = u + v - 1 + C_{XY}(1-u,\, 1-v). \]

The proof of Theorem 2.4.4 is left as an exercise. Note that in each case the form of the copula is independent of the particular choices of $\alpha$ and $\beta$, and note further that the three forms for $C_{\alpha(X)\beta(Y)}$ that appear in this theorem were first encountered in Exercise 2.6. [Remark: We could be somewhat more general in the preceding two theorems by replacing phrases such as "strictly increasing" by "almost surely strictly increasing," to allow for subsets of Lebesgue measure zero where the property may fail to hold.]

Although we have chosen to avoid measure theory in our definition of random variables, we will nevertheless need some terminology and results from measure theory in the remaining sections of this chapter and in chapters to come. Each joint distribution function H induces a probability measure on $\overline{\mathbf{R}}^2$ via $V_H\bigl((-\infty,x] \times (-\infty,y]\bigr) = H(x,y)$ and a standard extension to Borel subsets of $\overline{\mathbf{R}}^2$ using measure-theoretic techniques. Because copulas are joint distribution functions (with uniform (0,1) margins), each copula C induces a probability measure on $\mathbf{I}^2$ via $V_C\bigl([0,u] \times [0,v]\bigr) = C(u,v)$ in a similar fashion; that is, the C-measure of a set is its C-volume $V_C$.
Hence, at an intuitive level, the C-measure of a subset of $\mathbf{I}^2$ is the probability that two uniform (0,1) random variables U and V with joint distribution function C assume values in that subset. C-measures are often called doubly stochastic measures, as for any measurable subset S of $\mathbf{I}$, $V_C(S \times \mathbf{I}) = V_C(\mathbf{I} \times S) = \lambda(S)$, where $\lambda$ denotes ordinary Lebesgue measure on $\mathbf{I}$. The term "doubly stochastic" is taken from matrix theory, where doubly stochastic matrices have nonnegative entries and all row sums and column sums are 1.
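The doubly stochastic property can be tested numerically from C-volumes alone. A sketch (helper names are ours): $V_C$ of a rectangle is computed by inclusion-exclusion, and $V_C(S\times\mathbf{I}) = V_C(\mathbf{I}\times S) = \lambda(S)$ is checked for interval sets S and the copulas M, W, and $\Pi$.

```python
def volume(C, u1, u2, v1, v2):
    """C-volume of the rectangle [u1,u2] x [v1,v2], by inclusion-exclusion."""
    return C(u2, v2) - C(u2, v1) - C(u1, v2) + C(u1, v1)

M = lambda u, v: min(u, v)              # upper Frechet-Hoeffding bound
W = lambda u, v: max(u + v - 1.0, 0.0)  # lower Frechet-Hoeffding bound
P = lambda u, v: u * v                  # product copula Pi

for C in (M, W, P):
    for a, b in ((0.0, 0.25), (0.1, 0.6), (0.4, 1.0)):
        # V_C(S x I) = V_C(I x S) = lambda(S) = b - a for S = [a,b]
        assert abs(volume(C, a, b, 0.0, 1.0) - (b - a)) < 1e-12
        assert abs(volume(C, 0.0, 1.0, a, b) - (b - a)) < 1e-12
```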
For any copula C, let $C(u,v) = A_C(u,v) + S_C(u,v)$, where
\[ A_C(u,v) = \int_0^u \!\! \int_0^v \frac{\partial^2}{\partial s\,\partial t} C(s,t)\, dt\, ds \quad\text{and}\quad S_C(u,v) = C(u,v) - A_C(u,v). \tag{2.4.1} \]
Unlike bivariate distributions in general, the margins of a copula are continuous, hence a copula has no "atoms" (individual points in $\mathbf{I}^2$ whose C-measure is positive).

If $C \equiv A_C$ on $\mathbf{I}^2$ (that is, if, considered as a joint distribution function, C has a joint density given by $\partial^2 C(u,v)/\partial u\,\partial v$), then C is absolutely continuous; whereas if $C \equiv S_C$ on $\mathbf{I}^2$ (that is, if $\partial^2 C(u,v)/\partial u\,\partial v = 0$ almost everywhere in $\mathbf{I}^2$), then C is singular. Otherwise, C has an absolutely continuous component $A_C$ and a singular component $S_C$. In this case neither $A_C$ nor $S_C$ is a copula, because neither has uniform (0,1) margins. In addition, the C-measure of the absolutely continuous component is $A_C(1,1)$, and the C-measure of the singular component is $S_C(1,1)$.

Just as the support of a joint distribution function H is the complement of the union of all open subsets of $\overline{\mathbf{R}}^2$ with H-measure zero, the support of a copula is the complement of the union of all open subsets of $\mathbf{I}^2$ with C-measure zero. When the support of C is $\mathbf{I}^2$, we say C has "full support." When C is singular, its support has Lebesgue measure zero (and conversely). However, many copulas that have full support have both an absolutely continuous and a singular component.

Example 2.11. The support of the Fréchet-Hoeffding upper bound M is the main diagonal of $\mathbf{I}^2$, i.e., the graph of $v = u$ for u in $\mathbf{I}$, so that M is singular. This follows from the fact that the M-measure of any open rectangle that lies entirely above or below the main diagonal is zero. Also note that $\partial^2 M/\partial u\,\partial v = 0$ everywhere in $\mathbf{I}^2$ except on the main diagonal. Similarly, the support of the Fréchet-Hoeffding lower bound W is the secondary diagonal of $\mathbf{I}^2$, i.e., the graph of $v = 1-u$ for u in $\mathbf{I}$, and thus W is singular as well.

Example 2.12.
The product copula $\Pi(u,v) = uv$ is absolutely continuous, because for all $(u,v)$ in $\mathbf{I}^2$,
\[ A_\Pi(u,v) = \int_0^u \!\! \int_0^v \frac{\partial^2}{\partial s\,\partial t} \Pi(s,t)\, dt\, ds = \int_0^u \!\! \int_0^v 1\, dt\, ds = uv = \Pi(u,v). \]
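The computation in Example 2.12 can be mimicked numerically: the density of $\Pi$ is identically 1, and a midpoint Riemann sum for $A_\Pi(u,v)$ returns $uv$. A sketch (our code; the grid size n is an arbitrary choice):

```python
def density_Pi(s, t):
    """Mixed second partial of Pi(u,v) = u*v: identically 1 on I^2."""
    return 1.0

def A(density, u, v, n=200):
    """Midpoint Riemann sum for A_C(u,v) = int_0^u int_0^v density(s,t) dt ds."""
    hs, ht = u / n, v / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * hs
        for j in range(n):
            t = (j + 0.5) * ht
            total += density(s, t)
    return total * hs * ht

for u, v in ((0.3, 0.8), (0.5, 0.5), (1.0, 1.0)):
    assert abs(A(density_Pi, u, v) - u * v) < 1e-9   # A_Pi(u,v) = uv
```

In particular $A_\Pi(1,1) = 1$: all of the probability mass is in the absolutely continuous component, so $S_\Pi \equiv 0$. For M the same integral is 0, since its mixed partial vanishes off the diagonal.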
In Sect. 3.1.1 we will illustrate a general procedure for decomposing a copula into the sum of its absolutely continuous and singular components and for finding the probability mass (i.e., C-measure) of each component.

Exercises

2.12 Gumbel's bivariate logistic distribution (Gumbel 1961). Let X and Y be random variables with a joint distribution function given by
\[ H(x,y) = \bigl(1 + e^{-x} + e^{-y}\bigr)^{-1} \]
for all $x,y$ in $\overline{\mathbf{R}}$.
(a) Show that X and Y have standard (univariate) logistic distributions, i.e., $F(x) = (1+e^{-x})^{-1}$ and $G(y) = (1+e^{-y})^{-1}$.
(b) Show that the copula of X and Y is the copula given by (2.3.4) in Example 2.8.

2.13 Type B bivariate extreme value distributions (Johnson and Kotz 1972). Let X and Y be random variables with a joint distribution function given by
\[ H_\theta(x,y) = \exp\!\left[-\bigl(e^{-\theta x} + e^{-\theta y}\bigr)^{1/\theta}\right] \]
for all $x,y$ in $\overline{\mathbf{R}}$, where $\theta \ge 1$. Show that the copula of X and Y is given by
\[ C_\theta(u,v) = \exp\!\left(-\bigl[(-\ln u)^\theta + (-\ln v)^\theta\bigr]^{1/\theta}\right). \tag{2.4.2} \]
This parametric family of copulas is known as the Gumbel-Hougaard family (Hutchinson and Lai 1990), which we shall see again in Chapter 4.

2.14 Conway (1979) and Hutchinson and Lai (1990) note that Gumbel's bivariate logistic distribution (Exercise 2.12) suffers from the defect that it lacks a parameter, which limits its usefulness in applications. This can be corrected in a number of ways, one of which (Ali et al. 1978) is to define $H_\theta$ as
\[ H_\theta(x,y) = \bigl(1 + e^{-x} + e^{-y} + (1-\theta)e^{-x-y}\bigr)^{-1} \]
for all $x,y$ in $\overline{\mathbf{R}}$, where $\theta$ lies in [-1,1]. Show that
(a) the margins are standard logistic distributions;
(b) when $\theta = 1$, we have Gumbel's bivariate logistic distribution;
(c) when $\theta = 0$, X and Y are independent; and
(d) the copula of X and Y is given by
\[ C_\theta(u,v) = \frac{uv}{1 - \theta(1-u)(1-v)}. \tag{2.4.3} \]
This is the Ali-Mikhail-Haq family of copulas (Hutchinson and Lai 1990), which we will encounter again in Chapters 3 and 4.

2.15 Let $X_1$ and $Y_1$ be random variables with continuous distribution functions $F_1$ and $G_1$, respectively, and copula C. Let $F_2$ and $G_2$ be another pair of continuous distribution functions, and set $X_2 = F_2^{(-1)}(F_1(X_1))$ and $Y_2 = G_2^{(-1)}(G_1(Y_1))$. Prove that
(a) the distribution functions of $X_2$ and $Y_2$ are $F_2$ and $G_2$, respectively; and
(b) the copula of $X_2$ and $Y_2$ is C.

2.16 (a) Let X and Y be continuous random variables with copula C and univariate distribution functions F and G, respectively. The random variables max(X,Y) and min(X,Y) are the order statistics for X and Y. Prove that the distribution functions of the order statistics are given by
\[ P[\max(X,Y) \le t] = C\bigl(F(t), G(t)\bigr) \]
and
\[ P[\min(X,Y) \le t] = F(t) + G(t) - C\bigl(F(t), G(t)\bigr), \]
so that when F = G,
\[ P[\max(X,Y) \le t] = \delta_C\bigl(F(t)\bigr) \quad\text{and}\quad P[\min(X,Y) \le t] = 2F(t) - \delta_C\bigl(F(t)\bigr), \]
where $\delta_C$ denotes the diagonal section of C.
(b) Show that bounds on the distribution functions of the order statistics are given by
\[ \max\bigl(F(t) + G(t) - 1,\, 0\bigr) \le P[\max(X,Y) \le t] \le \min\bigl(F(t), G(t)\bigr) \]
and
\[ \max\bigl(F(t), G(t)\bigr) \le P[\min(X,Y) \le t] \le \min\bigl(F(t) + G(t),\, 1\bigr). \]

2.17 Prove Theorem 2.4.4.
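The order-statistic formulas of Exercise 2.16 are easy to check by simulation in the independent case, where $C = \Pi$, so that $P[\max(X,Y) \le t] = F(t)G(t)$ and $P[\min(X,Y) \le t] = F(t) + G(t) - F(t)G(t)$. A Monte Carlo sketch with standard normal margins (standard library only; the sample size, seed, and test point are our choices):

```python
import random
from statistics import NormalDist

random.seed(7)
n = 100_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]   # independent, so C = Pi

Phi = NormalDist().cdf
t = 0.5
p_max = sum(max(a, b) <= t for a, b in zip(x, y)) / n
p_min = sum(min(a, b) <= t for a, b in zip(x, y)) / n

# P[max <= t] = C(F(t), G(t)) = Phi(t)**2 when C = Pi and F = G = Phi
assert abs(p_max - Phi(t) ** 2) < 0.01
# P[min <= t] = F(t) + G(t) - C(F(t), G(t)) = 2*Phi(t) - Phi(t)**2
assert abs(p_min - (2 * Phi(t) - Phi(t) ** 2)) < 0.01
```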
2.5 The Fréchet-Hoeffding Bounds for Joint Distribution Functions

In Sect. 2.2 we encountered the Fréchet-Hoeffding bounds as universal bounds for copulas, i.e., for any copula C and for all $u,v$ in $\mathbf{I}$,
\[ W(u,v) = \max(u+v-1,\, 0) \le C(u,v) \le \min(u,v) = M(u,v). \]
As a consequence of Sklar's theorem, if X and Y are random variables with a joint distribution function H and margins F and G, respectively, then for all $x,y$ in $\overline{\mathbf{R}}$,
\[ \max\bigl(F(x) + G(y) - 1,\, 0\bigr) \le H(x,y) \le \min\bigl(F(x), G(y)\bigr). \tag{2.5.1} \]
Because M and W are copulas, the above bounds are joint distribution functions and are called the Fréchet-Hoeffding bounds for joint distribution functions H with margins F and G. Of interest in this section is the following question: What can we say about the random variables X and Y when their joint distribution function H is equal to one of its Fréchet-Hoeffding bounds?

To answer this question, we first need to introduce the notions of nondecreasing and nonincreasing sets in $\overline{\mathbf{R}}^2$.

Definition 2.5.1. A subset S of $\overline{\mathbf{R}}^2$ is nondecreasing if for any (x,y) and (u,v) in S, $x < u$ implies $y \le v$. Similarly, a subset S of $\overline{\mathbf{R}}^2$ is nonincreasing if for any (x,y) and (u,v) in S, $x < u$ implies $y \ge v$.

Fig. 2.6 illustrates a simple nondecreasing set.

Fig. 2.6. The graph of a nondecreasing set

We will now prove that the joint distribution function H for a pair (X,Y) of random variables is the Fréchet-Hoeffding upper bound (i.e., the copula is M) if and only if the support of H lies in a nondecreasing set. The following proof is based on the one that appears in (Mikusiński, Sherwood and Taylor 1991-1992). But first, we need two lemmas:
Lemma 2.5.2. Let S be a subset of $\overline{\mathbf{R}}^2$. Then S is nondecreasing if and only if for each (x,y) in $\overline{\mathbf{R}}^2$, either
1. for all (u,v) in S, $u \le x$ implies $v \le y$; or (2.5.2)
2. for all (u,v) in S, $v \le y$ implies $u \le x$. (2.5.3)

Proof. First assume that S is nondecreasing, and that neither (2.5.2) nor (2.5.3) holds. Then there exist points (a,b) and (c,d) in S such that $a \le x$, $b > y$, $d \le y$, and $c > x$. Hence $a < c$ and $b > d$; a contradiction. In the opposite direction, assume that S is not nondecreasing. Then there exist points (a,b) and (c,d) in S with $a < c$ and $b > d$. For $(x,y) = \bigl((a+c)/2,\, (b+d)/2\bigr)$, neither (2.5.2) nor (2.5.3) holds.

Lemma 2.5.3. Let X and Y be random variables with joint distribution function H. Then H is equal to its Fréchet-Hoeffding upper bound if and only if for every (x,y) in $\overline{\mathbf{R}}^2$, either $P[X > x, Y \le y] = 0$ or $P[X \le x, Y > y] = 0$.

Proof. As usual, let F and G denote the margins of H. Then
\[ F(x) = P[X \le x] = P[X \le x, Y \le y] + P[X \le x, Y > y] = H(x,y) + P[X \le x, Y > y], \]
and
\[ G(y) = P[Y \le y] = P[X \le x, Y \le y] + P[X > x, Y \le y] = H(x,y) + P[X > x, Y \le y]. \]
Hence $H(x,y) = M(F(x), G(y))$ if and only if $\min\bigl(P[X > x, Y \le y],\, P[X \le x, Y > y]\bigr) = 0$, from which the desired conclusion follows.

We are now ready to prove

Theorem 2.5.4. Let X and Y be random variables with joint distribution function H. Then H is identically equal to its Fréchet-Hoeffding upper bound if and only if the support of H is a nondecreasing subset of $\overline{\mathbf{R}}^2$.

Proof. Let S denote the support of H, and let (x,y) be any point in $\overline{\mathbf{R}}^2$. Then (2.5.2) holds if and only if $S \cap \{(u,v) \mid u \le x \text{ and } v > y\} = \varnothing$; or equivalently, if and only if $P[X \le x, Y > y] = 0$. Similarly, (2.5.3) holds if and only if $S \cap \{(u,v) \mid u > x \text{ and } v \le y\} = \varnothing$; or equivalently, if and only if $P[X > x, Y \le y] = 0$. The theorem now follows from Lemmas 2.5.2 and 2.5.3.
Of course, there is an analogous result for the Fréchet-Hoeffding lower bound; its proof is outlined in Exercises 2.18 through 2.20:

Theorem 2.5.5. Let X and Y be random variables with joint distribution function H. Then H is identically equal to its Fréchet-Hoeffding lower bound if and only if the support of H is a nonincreasing subset of $\overline{\mathbf{R}}^2$.
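Theorem 2.5.4 can be illustrated with $Y = X^3$ for X uniform on (0,1): the support $\{(t, t^3)\}$ is a nondecreasing set, so $H(x,y) = \min(F(x), G(y))$. A sketch (the functions and Monte Carlo check are ours):

```python
import random

def F(x):
    """Distribution function of X ~ U(0,1)."""
    return min(max(x, 0.0), 1.0)

def G(y):
    """Distribution function of Y = X**3, i.e., G(y) = y**(1/3) on [0,1]."""
    return min(max(y, 0.0), 1.0) ** (1 / 3)

def H(x, y):
    """Joint df of (X, X**3): the Frechet-Hoeffding upper bound M(F(x), G(y))."""
    return min(F(x), G(y))

# Monte Carlo comparison with the empirical joint distribution function:
random.seed(3)
xs = [random.random() for _ in range(50_000)]
x0, y0 = 0.7, 0.2
emp = sum(t <= x0 and t ** 3 <= y0 for t in xs) / len(xs)
assert abs(emp - H(x0, y0)) < 0.01
```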
When X and Y are continuous, the support of H can have no horizontal or vertical line segments, and in this case it is common to say that "Y is almost surely an increasing function of X" if and only if the copula of X and Y is M; and "Y is almost surely a decreasing function of X" if and only if the copula of X and Y is W. If U and V are uniform (0,1) random variables whose joint distribution function is the copula M, then P[U = V] = 1; and if the copula is W, then P[U + V = 1] = 1. Random variables with copula M are often called comonotonic, and random variables with copula W are often called countermonotonic.

2.6 Survival Copulas

In many applications, the random variables of interest represent the lifetimes of individuals or objects in some population. The probability of an individual living or surviving beyond time x is given by the survival function (or survivor function, or reliability function) $\overline{F}(x) = P[X > x] = 1 - F(x)$, where, as before, F denotes the distribution function of X. When dealing with lifetimes, the natural range of a random variable is often $[0,\infty)$; however, we will use the term "survival function" for $P[X > x]$ even when the range is $\overline{\mathbf{R}}$.

For a pair (X,Y) of random variables with joint distribution function H, the joint survival function is given by $\overline{H}(x,y) = P[X > x, Y > y]$. The margins of $\overline{H}$ are the functions $\overline{H}(x,-\infty)$ and $\overline{H}(-\infty,y)$, which are the univariate survival functions $\overline{F}$ and $\overline{G}$, respectively. A natural question is the following: Is there a relationship between univariate and joint survival functions analogous to the one between univariate and joint distribution functions, as embodied in Sklar's theorem? To answer this question, suppose that the copula of X and Y is C.
Then we have
\[
\begin{aligned}
\overline{H}(x,y) &= 1 - F(x) - G(y) + H(x,y) \\
&= \overline{F}(x) + \overline{G}(y) - 1 + C\bigl(F(x), G(y)\bigr) \\
&= \overline{F}(x) + \overline{G}(y) - 1 + C\bigl(1 - \overline{F}(x),\ 1 - \overline{G}(y)\bigr),
\end{aligned}
\]
so that if we define a function $\hat{C}$ from $\mathbf{I}^2$ into $\mathbf{I}$ by
\[ \hat{C}(u,v) = u + v - 1 + C(1-u,\, 1-v), \tag{2.6.1} \]
we have
\[ \overline{H}(x,y) = \hat{C}\bigl(\overline{F}(x), \overline{G}(y)\bigr). \tag{2.6.2} \]
First note that, as a consequence of Exercise 2.6, the function $\hat{C}$ in (2.6.1) is a copula (see also part 3 of Theorem 2.4.4). We refer to $\hat{C}$ as the survival copula of X and Y. Secondly, notice that $\hat{C}$ "couples" the
joint survival function to its univariate margins in a manner completely analogous to the way in which a copula connects the joint distribution function to its margins. Care should be taken not to confuse the survival copula $\hat{C}$ with the joint survival function $\overline{C}$ for two uniform (0,1) random variables whose joint distribution function is the copula C. Note that $\overline{C}(u,v) = P[U > u, V > v] = 1 - u - v + C(u,v) = \hat{C}(1-u,\, 1-v)$.

Example 2.13. In Example 2.9, we obtained the copula $C_\theta$ in (2.3.5) for Gumbel's bivariate exponential distribution: for $\theta$ in [0,1],
\[ C_\theta(u,v) = u + v - 1 + (1-u)(1-v)\,e^{-\theta\ln(1-u)\ln(1-v)}. \]
Just as the survival function for univariate exponentially distributed random variables is functionally simpler than the distribution function, the same is often true in the bivariate case. Employing (2.6.1), we have
\[ \hat{C}_\theta(u,v) = uv\,e^{-\theta\ln u\,\ln v}. \]

Example 2.14. A bivariate Pareto distribution (Hutchinson and Lai 1990). Let X and Y be random variables whose joint survival function is given by
\[ \overline{H}_\theta(x,y) = \begin{cases} (1+x+y)^{-\theta}, & x \ge 0,\ y \ge 0, \\ (1+x)^{-\theta}, & x \ge 0,\ y < 0, \\ (1+y)^{-\theta}, & x < 0,\ y \ge 0, \\ 1, & x < 0,\ y < 0; \end{cases} \]
where $\theta > 0$. Then the marginal survival functions $\overline{F}$ and $\overline{G}$ are
\[ \overline{F}(x) = \begin{cases} (1+x)^{-\theta}, & x \ge 0, \\ 1, & x < 0, \end{cases} \qquad \overline{G}(y) = \begin{cases} (1+y)^{-\theta}, & y \ge 0, \\ 1, & y < 0, \end{cases} \]
so that X and Y have identical Pareto distributions. Inverting the survival functions and employing the survival version of Corollary 2.3.7 (see Exercise 2.26) yields the survival copula
\[ \hat{C}_\theta(u,v) = \bigl(u^{-1/\theta} + v^{-1/\theta} - 1\bigr)^{-\theta}. \tag{2.6.3} \]
We shall encounter this family again in Chapter 4.

Two other functions closely related to copulas and survival copulas are the dual of a copula and the co-copula (Schweizer and Sklar 1983). The dual of a copula C is the function $\tilde{C}$ defined by $\tilde{C}(u,v) = u + v - C(u,v)$; and the co-copula is the function $C^*$ defined by $C^*(u,v) = 1 - C(1-u,\, 1-v)$.
Neither of these is a copula, but when C
is the copula of a pair of random variables X and Y, the dual of the copula and the co-copula each express a probability of an event involving X and Y. Just as
\[ P[X \le x, Y \le y] = C\bigl(F(x), G(y)\bigr) \quad\text{and}\quad P[X > x, Y > y] = \hat{C}\bigl(\overline{F}(x), \overline{G}(y)\bigr), \]
we have
\[ P[X \le x \text{ or } Y \le y] = \tilde{C}\bigl(F(x), G(y)\bigr), \tag{2.6.4} \]
and
\[ P[X > x \text{ or } Y > y] = C^*\bigl(\overline{F}(x), \overline{G}(y)\bigr). \tag{2.6.5} \]
Other relationships among C, $\hat{C}$, $\tilde{C}$, and $C^*$ are explored in Exercises 2.24 and 2.25.

Exercises

2.18 Prove the "Fréchet-Hoeffding lower bound" version of Lemma 2.5.2: Let S be a subset of $\overline{\mathbf{R}}^2$. Then S is nonincreasing if and only if for each (x,y) in $\overline{\mathbf{R}}^2$, either
1. for all (u,v) in S, $u \le x$ implies $v \ge y$; or
2. for all (u,v) in S, $v \ge y$ implies $u \le x$.

2.19 Prove the "Fréchet-Hoeffding lower bound" version of Lemma 2.5.3: Let X and Y be random variables whose joint distribution function H is equal to its Fréchet-Hoeffding lower bound. Then for every (x,y) in $\overline{\mathbf{R}}^2$, either $P[X > x, Y > y] = 0$ or $P[X \le x, Y \le y] = 0$.

2.20 Prove Theorem 2.5.5.

2.21 Let X and Y be nonnegative random variables whose joint survival function is $\overline{H}(x,y) = (e^{x} + e^{y} - 1)^{-1}$ for $x,y \ge 0$.
(a) Show that X and Y are standard exponential random variables.
(b) Show that the survival copula of X and Y is the copula given by (2.3.4) in Example 2.8 [cf. Exercise 2.12].

2.22 Let X and Y be continuous random variables whose joint distribution function is given by C(F(x),G(y)), where C is the copula of X and Y, and F and G are the distribution functions of X and Y, respectively. Verify that (2.6.4) and (2.6.5) hold.
2.23 Let $X_1$, $Y_1$, $F_1$, $G_1$, $F_2$, $G_2$, and C be as in Exercise 2.15. Set $X_2 = F_2^{(-1)}\bigl(1 - F_1(X_1)\bigr)$ and $Y_2 = G_2^{(-1)}\bigl(1 - G_1(Y_1)\bigr)$. Prove that
(a) the distribution functions of $X_2$ and $Y_2$ are $F_2$ and $G_2$, respectively; and
(b) the copula of $X_2$ and $Y_2$ is $\hat{C}$.

2.24 Let X and Y be continuous random variables with copula C and a common univariate distribution function F. Show that the distribution and survival functions of the order statistics (see Exercise 2.16) are given by

Order statistic    Distribution function       Survival function
max(X,Y)           $\delta(F(t))$              $\delta^*(\overline{F}(t))$
min(X,Y)           $\tilde{\delta}(F(t))$      $\hat{\delta}(\overline{F}(t))$

where $\delta$, $\hat{\delta}$, $\tilde{\delta}$, and $\delta^*$ denote the diagonal sections of C, $\hat{C}$, $\tilde{C}$, and $C^*$, respectively.

2.25 Show that under composition $\circ$, the set of operations of forming the survival copula, the dual of a copula, and the co-copula of a given copula, along with the identity (i.e., "^", "~", "*", and "i") yields the dihedral group (e.g., $C^{**} = C$, so $* \circ * = i$; $\widehat{C^{*}} = \tilde{C}$, so $\wedge \circ * = \sim$, etc.):

$\circ$ | i  ^  ~  *
--------+------------
   i    | i  ^  ~  *
   ^    | ^  i  *  ~
   ~    | ~  *  i  ^
   *    | *  ~  ^  i

2.26 Prove the following "survival" version of Corollary 2.3.7: Let $\overline{H}$, $\overline{F}$, $\overline{G}$, and $\hat{C}$ be as in (2.6.2), and let $\overline{F}^{(-1)}$ and $\overline{G}^{(-1)}$ be quasi-inverses of $\overline{F}$ and $\overline{G}$, respectively. Then for any (u,v) in $\mathbf{I}^2$,
\[ \hat{C}(u,v) = \overline{H}\bigl(\overline{F}^{(-1)}(u),\, \overline{G}^{(-1)}(v)\bigr). \]
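Identity (2.6.1) and the closed form of Example 2.13 can be checked against each other numerically (plain Python; the function names and the value of $\theta$ are our choices):

```python
import math

def C_theta(u, v, th):
    """Gumbel bivariate exponential copula, Eq. (2.3.5)."""
    return (u + v - 1
            + (1 - u) * (1 - v) * math.exp(-th * math.log(1 - u) * math.log(1 - v)))

def C_hat(C, u, v):
    """Survival copula from Eq. (2.6.1): u + v - 1 + C(1-u, 1-v)."""
    return u + v - 1 + C(1 - u, 1 - v)

def C_hat_closed(u, v, th):
    """Closed form of Example 2.13: uv * exp(-th * ln(u) * ln(v))."""
    return u * v * math.exp(-th * math.log(u) * math.log(v))

th = 0.6
for u in (0.2, 0.5, 0.9):
    for v in (0.3, 0.7):
        lhs = C_hat(lambda a, b: C_theta(a, b, th), u, v)
        assert abs(lhs - C_hat_closed(u, v, th)) < 1e-12
```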
2.7 Symmetry

If X is a random variable and a is a real number, we say that X is symmetric about a if the distribution functions of the random variables $X - a$ and $a - X$ are the same, that is, if for any x in $\overline{\mathbf{R}}$, $P[X - a \le x] = P[a - X \le x]$. When X is continuous with distribution function F, this is equivalent to
\[ F(a+x) = \overline{F}(a-x) \tag{2.7.1} \]
[when F is discontinuous, (2.7.1) holds only at the points of continuity of F].

Now consider the bivariate situation. What does it mean to say that a pair (X,Y) of random variables is "symmetric" about a point (a,b)? There are a number of ways to answer this question, and each answer leads to a different type of bivariate symmetry.

Definition 2.7.1. Let X and Y be random variables and let (a,b) be a point in $\overline{\mathbf{R}}^2$.
1. (X,Y) is marginally symmetric about (a,b) if X and Y are symmetric about a and b, respectively.
2. (X,Y) is radially symmetric about (a,b) if the joint distribution function of $X - a$ and $Y - b$ is the same as the joint distribution function of $a - X$ and $b - Y$.
3. (X,Y) is jointly symmetric about (a,b) if the following four pairs of random variables have a common joint distribution: $(X-a,\, Y-b)$, $(X-a,\, b-Y)$, $(a-X,\, Y-b)$, and $(a-X,\, b-Y)$.

When X and Y are continuous, we can express the condition for radial symmetry in terms of the joint distribution and survival functions of X and Y in a manner analogous to the relationship in (2.7.1) between univariate distribution and survival functions:

Theorem 2.7.2. Let X and Y be continuous random variables with joint distribution function H and margins F and G, respectively. Let (a,b) be a point in $\overline{\mathbf{R}}^2$. Then (X,Y) is radially symmetric about (a,b) if and only if
\[ H(a+x,\, b+y) = \overline{H}(a-x,\, b-y) \quad \text{for all } (x,y) \text{ in } \overline{\mathbf{R}}^2. \tag{2.7.2} \]

The term "radial" comes from the fact that the points $(a+x,\, b+y)$ and $(a-x,\, b-y)$ that appear in (2.7.2) lie on rays emanating in opposite directions from (a,b). Graphically, Theorem 2.7.2 states that regions such as those shaded in Fig. 2.7(a) always have equal H-volume.

Example 2.15. The bivariate normal distribution with parameters $\mu_x$, $\mu_y$, $\sigma_x^2$, $\sigma_y^2$, and $\rho$ is radially symmetric about the point $(\mu_x, \mu_y)$. The proof is straightforward (but tedious): evaluate double integrals of the joint density over the shaded regions in Fig. 2.7(a).
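Example 2.15 can also be checked by simulation: sample from a bivariate normal (built from two independent standard normals) and compare the empirical estimates of $H(a+x,\, b+y)$ and $\overline{H}(a-x,\, b-y)$. A sketch (standard library only; $\rho$, the test displacement, and the sample size are our choices):

```python
import random

random.seed(11)
rho = 0.6
n = 200_000
pts = []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    # (X, Y) standard bivariate normal with correlation rho
    pts.append((z1, rho * z1 + (1 - rho ** 2) ** 0.5 * z2))

a = b = 0.0            # centre of symmetry (mu_x, mu_y)
x, y = 0.8, -0.3       # an arbitrary displacement
H_plus = sum(s <= a + x and t <= b + y for s, t in pts) / n    # H(a+x, b+y)
Hbar_minus = sum(s > a - x and t > b - y for s, t in pts) / n  # Hbar(a-x, b-y)
assert abs(H_plus - Hbar_minus) < 0.01
```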
Fig. 2.7. Regions of equal probability for radially symmetric random variables

Example 2.16. The bivariate normal is a member of the family of elliptically contoured distributions. The densities for such distributions have contours that are concentric ellipses with constant eccentricity. Well-known members of this family, in addition to the bivariate normal, are bivariate Pearson type II and type VII distributions (the latter including bivariate t and Cauchy distributions as special cases). Like the bivariate normal, elliptically contoured distributions are radially symmetric.

It is immediate that joint symmetry implies radial symmetry, and easy to see that radial symmetry implies marginal symmetry (setting $x = \infty$ in (2.7.2) yields (2.7.1); similarly for $y = \infty$). Indeed, joint symmetry is a very strong condition: it is easy to show that jointly symmetric random variables must be uncorrelated when the requisite second-order moments exist (Randles and Wolfe 1979). Consequently, we will focus on radial symmetry, rather than joint symmetry, for bivariate distributions.

Because the condition for radial symmetry in (2.7.2) involves both the joint distribution and survival functions, it is natural to ask if copulas and survival copulas play a role in radial symmetry. The answer is provided by the next theorem.

Theorem 2.7.3. Let X and Y be continuous random variables with joint distribution function H, marginal distribution functions F and G, respectively, and copula C. Further suppose that X and Y are symmetric about a and b, respectively. Then (X,Y) is radially symmetric about (a,b), i.e., H satisfies (2.7.2), if and only if $C = \hat{C}$, i.e., if and only if C satisfies the functional equation
\[ C(u,v) = u + v - 1 + C(1-u,\, 1-v) \quad \text{for all } (u,v) \text{ in } \mathbf{I}^2. \tag{2.7.3} \]

Proof. Employing (2.6.2) and (2.7.1), the theorem follows from the following chain of equivalent statements:
\[
\begin{aligned}
H(a+x,\, b+y) &= \overline{H}(a-x,\, b-y) && \text{for all } (x,y) \text{ in } \overline{\mathbf{R}}^2 \\
\iff C\bigl(F(a+x), G(b+y)\bigr) &= \hat{C}\bigl(\overline{F}(a-x), \overline{G}(b-y)\bigr) && \text{for all } (x,y) \text{ in } \overline{\mathbf{R}}^2 \\
\iff C\bigl(F(a+x), G(b+y)\bigr) &= \hat{C}\bigl(F(a+x), G(b+y)\bigr) && \text{for all } (x,y) \text{ in } \overline{\mathbf{R}}^2 \\
\iff C(u,v) &= \hat{C}(u,v) && \text{for all } (u,v) \text{ in } \mathbf{I}^2.
\end{aligned}
\]

Geometrically, (2.7.3) states that for any (u,v) in $\mathbf{I}^2$, the rectangles $[0,u]\times[0,v]$ and $[1-u,1]\times[1-v,1]$ have equal C-volume, as illustrated in Fig. 2.7(b).

Another form of symmetry is exchangeability: random variables X and Y are exchangeable if the vectors (X,Y) and (Y,X) are identically distributed. Hence if the joint distribution function of X and Y is H, then H(x,y) = H(y,x) for all (x,y) in $\overline{\mathbf{R}}^2$. Clearly exchangeable random variables must be identically distributed, i.e., have a common univariate distribution function. For identically distributed random variables, exchangeability is equivalent to the symmetry of their copula as expressed in the following theorem, whose proof is straightforward.

Theorem 2.7.4. Let X and Y be continuous random variables with joint distribution function H, margins F and G, respectively, and copula C. Then X and Y are exchangeable if and only if F = G and C(u,v) = C(v,u) for all (u,v) in $\mathbf{I}^2$.

When C(u,v) = C(v,u) for all (u,v) in $\mathbf{I}^2$, we will say simply that C is symmetric.

Example 2.17. Although identically distributed independent random variables must be exchangeable (because the copula $\Pi$ is symmetric), the converse is of course not true: identically distributed exchangeable random variables need not be independent. To show this, simply choose for the copula of X and Y any symmetric copula except $\Pi$, such as one from Example 2.8, 2.9 (or 2.13), or from one of the families in Exercises 2.4 and 2.5.

There are other bivariate symmetry concepts. See (Nelsen 1993) for details.
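Condition (2.7.3) is straightforward to test on a grid. The sketch below (our helper) confirms that $\Pi$ and M satisfy it, while the copula (2.3.4) from Example 2.8 does not; so random variables coupled by (2.3.4), even with symmetric margins, are not radially symmetric.

```python
def radially_symmetric(C, tol=1e-9, grid=10):
    """Check the functional equation (2.7.3), C(u,v) = u + v - 1 + C(1-u, 1-v),
    on an interior grid of I^2."""
    for i in range(1, grid):
        for j in range(1, grid):
            u, v = i / grid, j / grid
            if abs(C(u, v) - (u + v - 1 + C(1 - u, 1 - v))) > tol:
                return False
    return True

Pi = lambda u, v: u * v
M = lambda u, v: min(u, v)
C_234 = lambda u, v: u * v / (u + v - u * v)   # copula (2.3.4)

assert radially_symmetric(Pi) and radially_symmetric(M)
assert not radially_symmetric(C_234)
```

For instance, at (u,v) = (0.2, 0.3) the left side of (2.7.3) for the copula (2.3.4) is about 0.136 while the right side is about 0.096.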
2.8 Order

The Fréchet-Hoeffding bounds inequality—W(u,v) ≤ C(u,v) ≤ M(u,v) for every copula C and all u,v in I—suggests a partial order on the set of copulas:
Definition 2.8.1. If C1 and C2 are copulas, we say that C1 is smaller than C2 (or C2 is larger than C1), and write C1 ≺ C2 (or C2 ≻ C1), if C1(u,v) ≤ C2(u,v) for all u,v in I.

In other words, the Fréchet-Hoeffding lower bound copula W is smaller than every copula, and the Fréchet-Hoeffding upper bound copula M is larger than every copula. This pointwise partial ordering of the set of copulas is called the concordance ordering and will be important in Chapter 5 when we discuss the relationship between copulas and dependence properties for random variables (at which time the reason for the name of the ordering will become apparent). It is a partial order rather than a total order because not every pair of copulas is comparable.

Example 2.18. The product copula Π and the copula obtained by averaging the Fréchet-Hoeffding bounds are not comparable. If we let C(u,v) = [W(u,v)+M(u,v)]/2, then C(1/4,1/4) > Π(1/4,1/4) and C(1/4,3/4) < Π(1/4,3/4), so that neither C ≺ Π nor Π ≺ C holds.

However, there are families of copulas that are totally ordered. We will call a totally ordered parametric family {Cθ} of copulas positively ordered if Cα ≺ Cβ whenever α ≤ β, and negatively ordered if Cα ≻ Cβ whenever α ≤ β.

Example 2.19. The Cuadras-Augé family of copulas (2.2.10), introduced in Exercise 2.5, is positively ordered, as for 0 ≤ α ≤ β ≤ 1 and u,v in (0,1),

  Cα(u,v) / Cβ(u,v) = (uv / min(u,v))^(β−α) ≤ 1,

and hence Cα ≺ Cβ.

Exercises

2.27 Let X and Y be continuous random variables symmetric about a and b with marginal distribution functions F and G, respectively, and with copula C. Is (X,Y) radially symmetric (or jointly symmetric) about (a,b) if C is
(a) a member of the Fréchet family in Exercise 2.4?
(b) a member of the Cuadras-Augé family in Exercise 2.5?

2.28 Suppose X and Y are identically distributed continuous random variables, each symmetric about a.
Show that “exchangeability” does not imply “radial symmetry,” nor does “radial symmetry” imply “exchangeability.”
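As a hedged numerical sketch (not part of the text), Examples 2.18 and 2.19 can be illustrated in a few lines: the averaged-bounds copula and Π cross each other, while the Cuadras-Augé family Cθ(u,v) = min(u,v)^θ (uv)^(1−θ) is pointwise increasing in θ. The grid size and parameter values below are arbitrary choices for the demonstration.

```python
# Numerical illustration of Examples 2.18 and 2.19 (a sketch, not a proof).

def M(u, v):  return min(u, v)            # upper Frechet-Hoeffding bound
def W(u, v):  return max(u + v - 1, 0)    # lower Frechet-Hoeffding bound
def Pi(u, v): return u * v                # product copula

def C_avg(u, v):
    # Average of the Frechet-Hoeffding bounds, as in Example 2.18.
    return (W(u, v) + M(u, v)) / 2

# Non-comparability: C_avg exceeds Pi at one point, falls below it at another.
print(C_avg(0.25, 0.25), Pi(0.25, 0.25))  # 0.125 > 0.0625
print(C_avg(0.25, 0.75), Pi(0.25, 0.75))  # 0.125 < 0.1875

def cuadras_auge(theta):
    # C_theta(u,v) = min(u,v)^theta * (uv)^(1-theta), Exercise 2.5 / (2.2.10).
    return lambda u, v: min(u, v) ** theta * (u * v) ** (1 - theta)

# Positive ordering: C_alpha(u,v) <= C_beta(u,v) whenever alpha <= beta.
grid = [(i / 10, j / 10) for i in range(1, 10) for j in range(1, 10)]
alpha, beta = 0.3, 0.8                    # arbitrary parameters with alpha <= beta
C_a, C_b = cuadras_auge(alpha), cuadras_auge(beta)
print(all(C_a(u, v) <= C_b(u, v) for u, v in grid))  # True
```

The ordering check follows directly from Example 2.19: since uv ≤ min(u,v) on (0,1)², the ratio Cα/Cβ = (uv/min(u,v))^(β−α) never exceeds 1.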