Article

Matrix-Tilted Archimedean Copulas

1 Department of Statistics and Actuarial Science, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada
2 Institute of Mathematical Statistics and Actuarial Science, University of Bern, Alpeneggstrasse 22, 3012 Bern, Switzerland
* Author to whom correspondence should be addressed.
Risks 2021, 9(4), 68; https://doi.org/10.3390/risks9040068
Submission received: 5 January 2021 / Revised: 11 March 2021 / Accepted: 24 March 2021 / Published: 6 April 2021
(This article belongs to the Special Issue Risks: Feature Papers 2021)

Abstract

The new class of matrix-tilted Archimedean copulas is introduced. It combines properties of Archimedean and elliptical copulas by introducing a tilting matrix in the stochastic representation of Archimedean copulas, similar to the Cholesky factor for elliptical copulas. Basic properties of this copula construction are discussed and a further extension is outlined.

1. Introduction

Elliptical distributions are among the most important multivariate distributions and are widely used, for example, in finance, insurance, and risk management to model multivariate risk-factor changes. A d-dimensional random vector X is said to be elliptically distributed with location vector 0 if it admits the stochastic representation
X = R A U,   (1)
where the (random) radial part R ≥ 0 is independent of the random vector U, which follows a uniform distribution on the Euclidean unit sphere S^{k−1}, and A ∈ ℝ^{d×k} is a matrix of rank k (see Cambanis et al. (1981)); in what follows, we assume R ~ F_R with F_R(0) = 0 and k = d. Elliptical copulas are the copulas arising from elliptical distributions via Sklar's Theorem; we assume the reader is familiar with the notion of copulas and related concepts of measuring dependence between the components of a random vector (see Nelsen 2006 for an introduction). Despite their popularity (partly due to the comparably simple simulation approach based on the stochastic representation (1)), elliptical copulas suffer from well-known limitations such as radial symmetry, which, in particular, implies that the lower and upper tail dependence coefficients are equal. This is often an unrealistic assumption from a practical point of view.
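The simulation approach via (1) can be sketched as follows (a minimal illustration, not part of the paper): for the bivariate Gaussian copula, R² follows a χ² distribution with 2 degrees of freedom and A is the Cholesky factor of the correlation matrix; the function name and the parameter rho are our own choices.

```python
import math
import random

random.seed(42)

def sample_gaussian_copula(rho, n):
    """Sample n points from the bivariate Gaussian copula with correlation
    rho via the stochastic representation X = R*A*U of an elliptical
    distribution: R = sqrt of a chi^2_2 variate, U uniform on the unit
    circle, A the lower Cholesky factor of [[1, rho], [rho, 1]]."""
    a21, a22 = rho, math.sqrt(1.0 - rho * rho)
    norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    sample = []
    for _ in range(n):
        r = math.sqrt(-2.0 * math.log(random.random()))  # chi^2_2 is Exp(1/2)
        theta = 2.0 * math.pi * random.random()
        u1, u2 = math.cos(theta), math.sin(theta)        # U ~ U(S^1)
        x1, x2 = r * u1, r * (a21 * u1 + a22 * u2)       # bivariate normal
        sample.append((norm_cdf(x1), norm_cdf(x2)))      # map margins to (0,1)
    return sample
```

The pair (x1, x2) constructed inside the loop is standard bivariate normal with correlation rho, so applying the standard normal distribution function componentwise yields a draw from the Gaussian copula.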
Another popular class of copulas is the class of Archimedean copulas, that is, copulas admitting the representation
C(u) = ψ(ψ^{−1}(u_1) + ⋯ + ψ^{−1}(u_d)), u ∈ [0, 1]^d,
for some ψ : [0, ∞) → [0, 1] known as the (Archimedean) generator. McNeil and Nešlehová (2009) showed that a stochastic representation similar to (1) can be derived for Archimedean copulas: Archimedean copulas arise as survival copulas of random vectors X with stochastic representation
X = R U,   (2)
where the radial part R is as in (1), independent of U, but U is now uniformly distributed on the standard simplex Δ_d = {x = (x_1, …, x_d) ∈ ℝ^d : x_j ≥ 0, j = 1, …, d, x_1 + ⋯ + x_d = 1}, that is, U ~ U(Δ_d). The Archimedean generator ψ corresponds to the Williamson d-transform W_d[F_R] of the distribution F_R of the radial part R and is given by
ψ(t) = W_d[F_R](t) = ∫_{(t,∞)} (1 − t/r)^{d−1} dF_R(r), t ∈ [0, ∞).   (3)
In contrast to elliptical copulas, Archimedean copulas are not restricted to radial symmetry. They are often given explicitly and properties can typically be described in terms of the generator ψ . However, one major drawback is that they are exchangeable, which implies, for example, that all pairs of random variables share the same dependence structure. This might be unrealistic from a practical point of view (see Embrechts and Hofert 2011 for a discussion).
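To illustrate the simplex representation (a sketch under an assumed radial distribution, not from the paper): taking R ~ Erlang(2, 1) gives, by direct integration of (3), ψ(t) = e^{−t}, whose Archimedean copula for d = 2 is the independence copula. Since P(X_1 > x) = ψ(x), applying ψ componentwise to X = RU yields a copula sample; the helper names below are our own.

```python
import math
import random

random.seed(7)

def psi(t):
    # Williamson 2-transform of R ~ Erlang(2,1):
    # psi(t) = E[(1 - t/R)_+] = int_t^inf (1 - t/r) r e^{-r} dr = exp(-t)
    return math.exp(-t)

def sample_archimedean(n):
    """Sample from the survival copula of X = R*U, cf. (2): R ~ Erlang(2,1),
    U ~ U(Delta_2). Then (psi(X1), psi(X2)) follows the Archimedean copula
    with generator psi (here: the independence copula)."""
    out = []
    for _ in range(n):
        r = -math.log(random.random()) - math.log(random.random())  # Erlang(2,1)
        e1 = -math.log(random.random())
        e2 = -math.log(random.random())
        u1 = e1 / (e1 + e2)               # (u1, 1 - u1) ~ U(Delta_2)
        out.append((psi(r * u1), psi(r * (1.0 - u1))))
    return out
```

Swapping in a heavier-tailed radial distribution changes the generator and hence the dependence structure, while the sampling scheme stays the same.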
Analogously to the stochastic representation (1), the main contribution of this paper is to introduce a matrix A in the stochastic representation (2), which allows one to create asymmetric copulas that are limited to neither exchangeability nor radial symmetry. The construction of this new class of copulas, referred to as matrix-tilted Archimedean copulas, is given in Section 2. Section 3 discusses properties of matrix-tilted Archimedean copulas. Further extensions are mentioned in Section 4, while Section 5 concludes. All proofs are provided in Appendix A. Other novel copula constructions not further discussed here include, for example, those in (Genest et al. 2018; Krupskii and Joe 2013, 2015; Quessy and Durocher 2019; Quessy et al. 2016).

2. The Construction

We are interested in the survival copulas of multivariate random vectors of the form
X = R A U,   (4)
where R and U ~ U(Δ_d) are as above, and A = (a_ij)_{i,j=1,…,d} is a (d × d)-matrix with real entries a_ij ∈ ℝ. The intuition behind the resulting dependence structure is the following. Naturally, our construction allows for asymmetric copulas; the degree of asymmetry could be quantified, for example, by the (scaled) supremum distance between the survival copula and the copula of (4) (see Rosco and Joe 2013). For positive entries in A, mass gets squeezed towards the diagonal. For general entries, it is also possible to move mass towards the secondary diagonal. Depending on the different conditions imposed on A, the dependence structure of X can vary substantially, which can make this construction interesting for scenario generation in risk applications. For example, Figure 1 and Figure 2 show samples of the empirical survival copula of X in (4) based on the same 1000 realizations of R and U but tilted with different matrices A; note that all scatter plots of samples of this type in this paper were generated in this way. The points are colored according to the quadrant the corresponding realization of X lies in, and the boundary curves of the support of the copula density are depicted.
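In code, the sampling just described can be sketched as follows (a minimal illustration, not the authors' implementation; the Erlang(2, 1) radial part and the tilting entries a_12 = a_21 = 0.5 are arbitrary choices). The sample of X = R A U is mapped to the unit square by componentwise empirical survival ranks, as one would do for such scatter plots:

```python
import math
import random

random.seed(1)

def sample_tilted_pseudo(n, a12, a21, sample_radial):
    """Sample X = R*A*U as in (4) with A = [[1, a12], [a21, 1]] and map each
    component to (0,1) via its empirical survival rank (pseudo-observations)."""
    xs = []
    for _ in range(n):
        r = sample_radial()
        e1 = -math.log(random.random())
        e2 = -math.log(random.random())
        u1 = e1 / (e1 + e2)               # (u1, 1 - u1) ~ U(Delta_2)
        u2 = 1.0 - u1
        xs.append((r * (u1 + a12 * u2), r * (a21 * u1 + u2)))
    def surv_ranks(values):
        order = sorted(range(n), key=lambda i: values[i])
        rank = [0] * n
        for k, i in enumerate(order):
            rank[i] = k + 1
        return [1.0 - rk / (n + 1.0) for rk in rank]  # empirical survival scale
    p1 = surv_ranks([x[0] for x in xs])
    p2 = surv_ranks([x[1] for x in xs])
    return list(zip(p1, p2))

# positive tilting entries squeeze probability mass towards the diagonal
erlang = lambda: -math.log(random.random()) - math.log(random.random())
pts = sample_tilted_pseudo(5000, 0.5, 0.5, erlang)
```

Changing a12 and a21 (including to negative values) reproduces the qualitatively different shapes discussed around Figure 1 and Figure 2.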
In what follows, we discuss a number of interesting cases and properties of matrix-tilted Archimedean copulas.

3. Properties

For the sake of tractability of the calculations and the fact that properties of multivariate copulas are often studied for bivariate margins (measures of association, tail dependence, symmetries, etc.), we focus on the bivariate case, that is, d = 2, and assume that
a_11 = a_22 = 1, a_12 ≤ 1, a_21 ≤ 1, a_12 a_21 < 1.   (CPD)
Section 3.1 addresses why Assumption (CPD) is natural. It often turns out to be useful to consider the random point
Ũ = A U,
that is, the image of U under A. All such images are denoted by their respective notation with a tilde. If A = I_2, that is, the identity matrix in the bivariate case, then Ũ = A U = U. The columns of A are e_1, e_2 in this case (the canonical basis vectors of ℝ²) and our construction simply leads to the class of Archimedean copulas. In the general case, A transforms e_1, e_2 to ẽ_1, ẽ_2. Figure 3 and Figure 4 depict the space (randomly) spanned by (2) and (4), corresponding to Figure 1 and Figure 2.

3.1. Special Cases

In this section, we briefly address various special cases of our construction and explain why Assumption (CPD) is natural.
First note that copulas are invariant under strictly increasing transformations of the margins. We can thus always multiply each component of X by positive constants without losing flexibility of our dependence model; the same scaling is commonly used for elliptical copulas, where one can assume A to be the Cholesky factor of a correlation (instead of a more general covariance) matrix. Therefore, we can equally well assume that a_11 = a_22 = 1, the first part of Assumption (CPD). The convex cone between the half-lines spanned by ẽ_j, j ∈ {1, 2}, the columns of A, does not depend on the order of the columns in A. We can thus fix the determinant to be positive, the last part of Assumption (CPD). The assumptions a_12 ≤ 1 and a_21 ≤ 1 are for computational convenience. In the limiting case, when a_12 = a_21 = 1, we obtain det A = 0 and the corresponding matrix-tilted Archimedean copula is the upper Fréchet–Hoeffding bound M.

3.2. The Multivariate Survival Function

We can now provide the survival function of X = R A U under Assumption (CPD).
Theorem 1.
Under Assumption (CPD), the joint survival function of Model (4) at (x_1, x_2) is given as follows.
(i) If 0 ≤ a_12 < 1 and 0 ≤ a_21 < 1, then
  1, if a_21 x_1 ≥ x_2 and a_12 x_2 ≥ x_1;
  ((1 − a_12 a_21)/((1 − a_12)(1 − a_21))) ψ(((1 − a_21) x_1 + (1 − a_12) x_2)/(1 − a_12 a_21)) − (a_12/(1 − a_12)) ψ(x_1/a_12) − (a_21/(1 − a_21)) ψ(x_2/a_21), if a_21 x_1 < x_2 and a_12 x_2 < x_1;
  𝟙{x_2 ≤ 0} + (1/(1 − a_21)) (ψ(x_2) − a_21 ψ(x_2/a_21)) 𝟙{x_2 > 0}, if a_21 x_1 < x_2 and a_12 x_2 ≥ x_1;
  𝟙{x_1 ≤ 0} + (1/(1 − a_12)) (ψ(x_1) − a_12 ψ(x_1/a_12)) 𝟙{x_1 > 0}, if a_21 x_1 ≥ x_2 and a_12 x_2 < x_1.
(ii) If a_12 < 0 and a_21 < 0, then
  𝟙{x_2 > 0} ((a_12/(1 − a_12)) ψ(x_1/a_12) + (1/(1 − a_21)) ψ(x_2)) + 𝟙{x_1 > 0} ((a_21/(1 − a_21)) ψ(x_2/a_21) + (1/(1 − a_12)) ψ(x_1)) + 𝟙{x_1 ≤ 0} 𝟙{x_2 ≤ 0} ((a_12/(1 − a_12)) ψ(x_1/a_12) + (a_21/(1 − a_21)) ψ(x_2/a_21) + 1), if a_21 x_1 ≥ x_2 and a_12 x_2 ≥ x_1;
  ((1 − a_12 a_21)/((1 − a_12)(1 − a_21))) ψ(((1 − a_21) x_1 + (1 − a_12) x_2)/(1 − a_12 a_21)), if a_21 x_1 < x_2 and a_12 x_2 < x_1;
  (1/(1 − a_21)) ψ(x_2) + (a_12/(1 − a_12)) ψ(x_1/a_12), if a_21 x_1 < x_2 and a_12 x_2 ≥ x_1;
  (1/(1 − a_12)) ψ(x_1) + (a_21/(1 − a_21)) ψ(x_2/a_21), if a_21 x_1 ≥ x_2 and a_12 x_2 < x_1.
(iii) If 0 ≤ a_12 < 1 and a_21 < 0, then
  𝟙{x_2 > 0} (1/(1 − a_21)) ψ(x_2) + 𝟙{x_2 ≤ 0} (1 + (a_21/(1 − a_21)) ψ(x_2/a_21)), if a_21 x_1 ≥ x_2 and a_12 x_2 ≥ x_1;
  ((1 − a_12 a_21)/((1 − a_12)(1 − a_21))) ψ(((1 − a_21) x_1 + (1 − a_12) x_2)/(1 − a_12 a_21)) − (a_12/(1 − a_12)) ψ(x_1/a_12), if a_21 x_1 < x_2 and a_12 x_2 < x_1;
  (1/(1 − a_21)) ψ(x_2), if a_21 x_1 < x_2 and a_12 x_2 ≥ x_1;
  (a_21/(1 − a_21)) ψ(x_2/a_21) + 𝟙{x_1 ≤ 0} + 𝟙{x_1 > 0} (1/(1 − a_12)) (ψ(x_1) − a_12 ψ(x_1/a_12)), if a_21 x_1 ≥ x_2 and a_12 x_2 < x_1.
(iv) If a_12 = 1 and 0 ≤ a_21 < 1, then
  1, if a_21 x_1 ≥ x_2 and x_2 ≥ x_1;
  (1/(1 − a_21)) ((1 − x_2/x_1) F̄_R(x_1) + (x_2/x_1) ψ(x_1)) − (a_21/(1 − a_21)) ψ(x_2/a_21), if a_21 x_1 < x_2 < x_1;
  (1/(1 − a_21)) (ψ(x_2) − a_21 ψ(x_2/a_21)), if a_21 x_1 < x_2 and x_2 ≥ x_1;
  F̄_R(max{0, x_1}), if a_21 x_1 ≥ x_2 and x_2 < x_1.
(v) If a_12 = 1 and a_21 < 0, then
  𝟙{x_2 ≤ 0} (1 + (a_21/(1 − a_21)) ψ(x_2/a_21)) + 𝟙{x_2 > 0} (1/(1 − a_21)) ψ(x_2), if a_21 x_1 ≥ x_2 and x_2 ≥ x_1;
  (1/(1 − a_21)) ((1 − x_2/x_1) F̄_R(x_1) + (x_2/x_1) ψ(x_1)), if a_21 x_1 < x_2 < x_1;
  (1/(1 − a_21)) ψ(x_2), if a_21 x_1 < x_2 and x_2 ≥ x_1;
  F̄_R(max{0, x_1}) + (a_21/(1 − a_21)) ψ(x_2/a_21), if a_21 x_1 ≥ x_2 and x_2 < x_1.
Here, terms of the form (a_12/(1 − a_12)) ψ(x_1/a_12) are understood as 0 if a_12 = 0 (and analogously for a_21).
The remaining cases can be obtained by exchanging the roles of a 12 , a 21 and x 1 , x 2 .
The following corollary considers two interesting special cases for the parameters a 12 and a 21 .
Corollary 1.
Suppose Assumption (CPD) holds. Then, the joint survival function of Model (4) is given as follows.
(i) If a_21 = 0 and 0 ≤ a_12 < 1, then
  1, if x_2 ≤ 0 and a_12 x_2 ≥ x_1;
  (1/(1 − a_12)) ψ(x_1 + (1 − a_12) x_2) − (a_12/(1 − a_12)) ψ(x_1/a_12), if x_2 > 0 and a_12 x_2 < x_1;
  ψ(x_2), if x_2 > 0 and a_12 x_2 ≥ x_1;
  𝟙{x_1 ≤ 0} + 𝟙{x_1 > 0} (1/(1 − a_12)) (ψ(x_1) − a_12 ψ(x_1/a_12)), if x_2 ≤ 0 and a_12 x_2 < x_1.
(ii) If a_12 = a_21 = a, then, for 0 ≤ a < 1, we have
  1, if a x_1 ≥ x_2 and a x_2 ≥ x_1;
  (1/(1 − a)) ((1 + a) ψ((x_1 + x_2)/(1 + a)) − a ψ(x_1/a) − a ψ(x_2/a)), if a x_1 < x_2 and a x_2 < x_1;
  𝟙{x_2 ≤ 0} + (1/(1 − a)) (ψ(x_2) − a ψ(x_2/a)) 𝟙{x_2 > 0}, if a x_1 < x_2 and a x_2 ≥ x_1;
  𝟙{x_1 ≤ 0} + (1/(1 − a)) (ψ(x_1) − a ψ(x_1/a)) 𝟙{x_1 > 0}, if a x_1 ≥ x_2 and a x_2 < x_1,
and, for a ∈ (−1, 0), we obtain that
  𝟙{x_2 > 0} ((a/(1 − a)) ψ(x_1/a) + (1/(1 − a)) ψ(x_2)) + 𝟙{x_1 > 0} ((a/(1 − a)) ψ(x_2/a) + (1/(1 − a)) ψ(x_1)) + 𝟙{x_1 ≤ 0} 𝟙{x_2 ≤ 0} ((a/(1 − a)) ψ(x_1/a) + (a/(1 − a)) ψ(x_2/a) + 1), if a x_1 ≥ x_2 and a x_2 ≥ x_1;
  ((1 + a)/(1 − a)) ψ((x_1 + x_2)/(1 + a)), if a x_1 < x_2 and a x_2 < x_1;
  (1/(1 − a)) ψ(x_2) + (a/(1 − a)) ψ(x_1/a), if a x_1 < x_2 and a x_2 ≥ x_1;
  (1/(1 − a)) ψ(x_1) + (a/(1 − a)) ψ(x_2/a), if a x_1 ≥ x_2 and a x_2 < x_1.
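As a plausibility check of the second branch of Corollary 1(ii) in the case 0 ≤ a < 1, the following numerical sketch (ours, not from the paper) takes ψ(t) = e^{−t} (the Williamson 2-transform of R ~ Erlang(2, 1)) with a = 1/2 and compares the closed form at (x_1, x_2) = (0.3, 0.3) with a Monte Carlo estimate of the survival function:

```python
import math
import random

random.seed(2021)

a, x1, x2 = 0.5, 0.3, 0.3        # a*x1 < x2 and a*x2 < x1: second branch
psi = lambda t: math.exp(-t)     # generator induced by R ~ Erlang(2, 1)

# closed form of the second branch for 0 <= a < 1
closed = ((1 + a) * psi((x1 + x2) / (1 + a))
          - a * psi(x1 / a) - a * psi(x2 / a)) / (1 - a)

# Monte Carlo estimate of P(X1 > x1, X2 > x2) for X = R*A*U
n, hits = 200_000, 0
for _ in range(n):
    r = -math.log(random.random()) - math.log(random.random())  # Erlang(2, 1)
    e1 = -math.log(random.random())
    e2 = -math.log(random.random())
    u1 = e1 / (e1 + e2)
    u2 = 1.0 - u1
    if r * (u1 + a * u2) > x1 and r * (a * u1 + u2) > x2:
        hits += 1
print(closed, hits / n)  # both approximately 0.913
```

The agreement of the two values illustrates the branch at one point; the specific parameter and evaluation-point choices are, of course, only illustrative.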

3.3. The Boundary of the Support of the Copula Density

The following proposition provides almost sure boundaries for X_2, the second component of X in (4). The statements are directly clear from Figure 3 and Figure 4. The boundaries carry over to boundaries in the unit square after mapping with the respective marginal survival functions (see also Figure 1 and Figure 2).
Proposition 1.
Under Assumption (CPD), the following inequalities hold almost surely for the component X 2 of X = ( X 1 , X 2 ) of Model (4).
(1) If a_12 < 0, then X_2 ≥ X_1/a_12 whenever X_1 < 0, and X_2 ≥ a_21 X_1 whenever X_1 ≥ 0.
(2) If a_12 = 0, then X_2 ≥ a_21 X_1 (and X_1 ≥ 0 almost surely).
(3) If 0 < a_12 ≤ 1, then a_21 X_1 ≤ X_2 ≤ X_1/a_12 (and X_1 ≥ 0 almost surely).

3.4. Tail Dependence

In this section, we address the notion of tail dependence for the multivariate model as given in (4). Tail dependence is a copula property, so the coefficients of tail dependence only depend on the copula of (4). For the following result, note that a function h : (0, ∞) → ℝ is called regularly varying at x_0 ∈ {0, ∞} of index α ∈ ℝ (denoted by h ∈ RV_α) if
lim_{x→x_0} h(tx)/h(x) = t^α, t > 0.   (5)
Proposition 2.
Let Assumption (CPD) hold and suppose that a_12 = a_21 = a. Furthermore, let X_1 and X_2 have the same marginal distribution, denoted by F = F_1 = F_2, with survival function F̄.
(1) Let a ∈ (0, 1). If F̄^{−1}(0) < ∞ (that is, F has a finite right endpoint), then λ_L = 0. If F̄^{−1}(0) = ∞ and ψ ∈ RV_{−α} at ∞, then
λ_L = (2^{−α} (1 + a)^{α+1} − 2 a^{α+1}) / (1 − a^{α+1}).
If 1 − ψ ∈ RV_α at 0, then
λ_U = (2 − 2^α (1 + a)^{1−α}) / (1 − a^{1−α}).
(2) Let a ∈ (−1, 0). If F̄^{−1}(0) < ∞, then λ_L = 0. If F̄^{−1}(0) = ∞ and ψ ∈ RV_{−α} at ∞, then
λ_L = 2 ((1 + a)/2)^{α+1}.
Furthermore, λ U = 0 .
Figure 5 shows λ L and λ U of Part (1) of Proposition 2 as functions of the parameter a for various α.
Assessing λ_L and λ_U in the case F_1 ≠ F_2 is more difficult. At least, the symmetric case a_12 = a_21 can provide bounds. For example, if F̄_1^{−1}(u) ≤ F̄_2^{−1}(u) for all u sufficiently small, then (compare with the proof of Proposition 2)
λ_L = lim_{u↓0} C(u, u)/u = lim_{u↓0} P(X_1 > F̄_1^{−1}(u), X_2 > F̄_2^{−1}(u)) / P(X_1 > F̄_1^{−1}(u)) ≤ lim_{u↓0} P(X_1 > F̄_1^{−1}(u), X_2 > F̄_1^{−1}(u)) / P(X_1 > F̄_1^{−1}(u))   (6)
and one can proceed as in the proof of Proposition 2 to evaluate (6); if F̄_1^{−1}(u) ≥ F̄_2^{−1}(u) for all u sufficiently small, the reversed inequality leads to a lower bound for λ_L.

4. Randomization and Other Extensions

As shown above, tilting an Archimedean copula through a matrix A makes the model more flexible by introducing different kinds of asymmetry. It is also possible to modify tail dependence properties through A. However, some parameter choices reduce the support of the copula to strict subsets of the unit square. This might be a desired feature in certain applications (a classical example is the copula of (X_1, X_2) with X_1 ≤ X_2 almost surely), but may equally well be unnatural in others. A possible remedy is to randomize the tilting matrix A; theoretical properties can then be derived by conditioning on A. While such random-matrix-tilted Archimedean copulas allow for very flexible models, inference for them is an open question.
As a first example, we revisit the right-hand side of Figure 1 and randomize the entry a 21 = 0.5 of A with a standard uniform distribution. The result is shown on the left-hand side of Figure 6. Note that, although hardly visible here, the lower boundary as depicted on the right-hand side of Figure 1 still exists but the upper one is “washed out”. If we randomize both a 12 and a 21 , then both boundaries disappear. The right-hand side of Figure 6 shows such a case, where a 12 and a 21 are (independently) randomized by Beta ( 1 / 10 , 1 ) and Beta ( 2 , 1 ) distributions, respectively. Note that both of these randomizations satisfy Assumption (CPD).
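In code, randomizing the tilt simply means drawing a fresh matrix entry per realization. The sketch below mirrors the uniform randomization of a_21 described above; the fixed a_12 = 0.5 and the Erlang(2, 1) radial part are our own illustrative choices, not the ones behind Figure 1.

```python
import math
import random

random.seed(3)

def sample_random_tilt(n):
    """Randomized tilting: draw a fresh a21 ~ U(0, 1) for every realization
    (a12 = 0.5 fixed); note a12 * a21 < 1, so Assumption (CPD) holds.
    The radial part R ~ Erlang(2, 1) is illustrative."""
    out = []
    for _ in range(n):
        a12, a21 = 0.5, random.random()   # random tilt entry
        r = -math.log(random.random()) - math.log(random.random())
        e1 = -math.log(random.random())
        e2 = -math.log(random.random())
        u1 = e1 / (e1 + e2)
        u2 = 1.0 - u1
        out.append((r * (u1 + a12 * u2), r * (a21 * u1 + u2)))
    return out
```

Because a different boundary line applies for each realized a_21, the support boundaries of the fixed-matrix model get averaged out, which is the "washing out" effect visible in Figure 6.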
As another example, let us consider the right-hand side of Figure 2 and randomize the entry a_21 = −0.05 of A via a uniform distribution on [−10, 0] (see the left-hand side of Figure 7) or via a_21 = −0.1 min{E, 20} with E being standard exponential (see the right-hand side of Figure 7). As above, both randomizations satisfy Assumption (CPD).
Another option not discussed here would be to make the randomized a 12 , a 21 dependent by another copula.
A different approach to tilting (2) by a matrix A that we do not explore is the following. Consider a random vector
X = R A U,   (7)
where R is a radial part as in (1), U is uniformly distributed on the ℓ_1-sphere {x = (x_1, …, x_d) ∈ ℝ^d : |x_1| + ⋯ + |x_d| = 1} independently of R, and A is a (d × d)-matrix. Then, one could consider the copula of the componentwise modulus |X| of X. Figure 8 and Figure 9 show the copulas of |X| for R and A as chosen in Figure 2 and Figure 7, respectively, and X given by (7). One observes that, for a deterministic tilting matrix A (see Figure 8), the construction does not seem to yield anything visually much different from a Gumbel copula. We have no proof of this fact, but we have observed this behavior in all simulation examples.
The situation is more promising in the case of a randomized tilting matrix A (see Figure 9, especially the left-hand side).
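A sketch (ours) of sampling the componentwise modulus |X| in (7): a uniform draw on the ℓ_1-sphere is obtained by attaching independent random signs to a point uniform on the simplex; the tilting entries and the Erlang(2, 1) radial part below are illustrative assumptions.

```python
import math
import random

random.seed(4)

def sample_l1_modulus(n, a12, a21):
    """Sample |X| for X = R*A*U as in (7), with U uniform on the l1-sphere
    {|u1| + |u2| = 1} in R^2 and R ~ Erlang(2, 1) (illustrative choice)."""
    out = []
    for _ in range(n):
        e1 = -math.log(random.random())
        e2 = -math.log(random.random())
        s1 = 1.0 if random.random() < 0.5 else -1.0
        s2 = 1.0 if random.random() < 0.5 else -1.0
        u1 = s1 * e1 / (e1 + e2)          # random signs on a simplex point
        u2 = s2 * e2 / (e1 + e2)          # => uniform on the l1-sphere
        r = -math.log(random.random()) - math.log(random.random())
        out.append((abs(r * (u1 + a12 * u2)), abs(r * (a21 * u1 + u2))))
    return out
```

By symmetry, each of the four faces of the ℓ_1-sphere is a signed copy of the simplex, which is why the sign trick produces a uniform draw.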

5. Conclusions

Inspired by the construction of elliptical copulas, we present a new class of matrix-tilted Archimedean copulas, which generalize Archimedean copulas. Matrix-tilted Archimedean copulas are easy to simulate from and allow asymmetries to be modeled. In the bivariate case, the joint survival function and boundaries of the support of the multivariate model which gives rise to matrix-tilted Archimedean copulas can be given, and, in the case of equal margins and symmetric tilting matrix A, the tail dependence coefficients can be derived. Randomized tilting matrices A and the componentwise modulus of the model provide interesting extensions. Although their theoretical treatment is not straightforward, matrix-tilted Archimedean copulas are especially easy to incorporate into simulation-based applications.
The statistical estimation of matrix-tilted Archimedean copulas is an open problem for future research. A natural idea seems to be an iterative estimation procedure that alternates between finding an optimal Archimedean generator for the data given a specific tilting matrix and finding a best-fitting tilting matrix given the Archimedean generator of the previous step. In addition, random-matrix-tilted Archimedean copulas that do not limit the support to a strict subset of the unit hypercube may be of interest, too, as they may allow for likelihood optimization. Another avenue for future research is to introduce matrix tilting to generalizations of Archimedean copulas. One immediate generalization is to replace U U ( Δ d ) in (4) by a Dirichlet distribution which would then lead to matrix-tilted Liouville copulas.

Author Contributions

Conceptualization, M.H. and J.F.Z.; methodology, M.H. and J.F.Z.; formal analysis, M.H. and J.F.Z.; investigation, M.H. and J.F.Z.; writing—original draft preparation, M.H. and J.F.Z.; writing—review and editing, M.H. and J.F.Z.; visualization, M.H. and J.F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The first author’s research was funded by NSERC under Discovery Grant RGPIN-5010-2015.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The following proposition provides the survival function of A U , a result which is used in the proof of Theorem 1 below.
Proposition A1.
Let U = (U_1, U_2) be uniformly distributed on Δ_2. Under Assumption (CPD), we obtain that P(a_11 U_1 + a_12 U_2 > y_1, a_21 U_1 + a_22 U_2 > y_2) equals
  ((1 − max{y_2, a_21})/(1 − a_21)) 𝟙{y_1 < 1} 𝟙{y_2 < 1}, if a_12 = 1, a_21 < 1;
  ((1 − max{y_1, a_12})/(1 − a_12)) 𝟙{y_1 < 1} 𝟙{y_2 < 1}, if a_12 < 1, a_21 = 1;
  ((1 − a_12 a_21 − (1 − a_21) y_1 − (1 − a_12) y_2)/((1 − a_12)(1 − a_21))) 𝟙{y_1 < 1} 𝟙{y_2 < 1} 𝟙{(1 − a_12)(1 − y_2) − (1 − a_21)(y_1 − a_12) > 0} − ((a_12 − y_1)/(1 − a_12)) 𝟙{y_1 ≤ a_12} 𝟙{y_2 < 1} − ((a_21 − y_2)/(1 − a_21)) 𝟙{y_1 < 1} 𝟙{y_2 ≤ a_21}, if a_12 < 1, a_21 < 1.
Proof. 
First, note that, for some U ~ U(0, 1), we can write (U_1, U_2) = (U, 1 − U), so that
P(U_1 + a_12 U_2 > y_1, a_21 U_1 + U_2 > y_2) = P(U + a_12(1 − U) > y_1, a_21 U + 1 − U > y_2) = P((1 − a_12) U > y_1 − a_12, (a_21 − 1) U > y_2 − 1).
If a_12 = 1 and a_21 < 1, then
P(U_1 + a_12 U_2 > y_1, a_21 U_1 + U_2 > y_2) = 𝟙{y_1 < 1} P(U < (1 − y_2)/(1 − a_21)) = 𝟙{y_1 < 1} 𝟙{y_2 < 1} min{1, (1 − y_2)/(1 − a_21)} = 𝟙{y_1 < 1} 𝟙{y_2 < 1} (1 − max{y_2, a_21})/(1 − a_21).
Similarly, if a_12 < 1 and a_21 = 1, we can interchange the roles of y_1, y_2 and a_12, a_21 and obtain
P(U_1 + a_12 U_2 > y_1, a_21 U_1 + U_2 > y_2) = 𝟙{y_1 < 1} 𝟙{y_2 < 1} (1 − max{y_1, a_12})/(1 − a_12),
which is as stated in the claim. From now on, assume a_12 < 1 and a_21 < 1, and let t = (1 − a_12)(1 − y_2) − (1 − a_21)(y_1 − a_12). Then,
P(U_1 + a_12 U_2 > y_1, a_21 U_1 + U_2 > y_2) = P((y_1 − a_12)/(1 − a_12) < U < (1 − y_2)/(1 − a_21))
 = 𝟙{(y_1 − a_12)/(1 − a_12) < 1} 𝟙{(1 − y_2)/(1 − a_21) > 0} 𝟙{(y_1 − a_12)/(1 − a_12) < (1 − y_2)/(1 − a_21)} (min{1, (1 − y_2)/(1 − a_21)} − max{0, (y_1 − a_12)/(1 − a_12)})
 = 𝟙{y_1 < 1} 𝟙{y_2 < 1} 𝟙{t > 0} ((1 − max{y_2, a_21})/(1 − a_21) − (max{y_1, a_12} − a_12)/(1 − a_12)).
In the remaining part of the proof, we rewrite this expression to obtain the form as stated. Using 𝟙{y_1 < 1} = 𝟙{y_1 ≤ a_12} + 𝟙{a_12 < y_1 < 1} and 𝟙{y_2 < 1} = 𝟙{y_2 ≤ a_21} + 𝟙{a_21 < y_2 < 1}, multiplying out the terms, and noting that t > 0 for all except the last term D (which easily follows from the bounds in the other indicators), we obtain that
P(U_1 + a_12 U_2 > y_1, a_21 U_1 + U_2 > y_2) = A + B + C + D,
where
A = 𝟙{y_1 ≤ a_12} 𝟙{y_2 ≤ a_21},
B = ((1 − y_1)/(1 − a_12)) 𝟙{a_12 < y_1 < 1} 𝟙{y_2 ≤ a_21},
C = ((1 − y_2)/(1 − a_21)) 𝟙{y_1 ≤ a_12} 𝟙{a_21 < y_2 < 1},
D = 𝟙{a_12 < y_1 < 1} 𝟙{a_21 < y_2 < 1} 𝟙{t > 0} (1 − a_12 a_21 − (1 − a_21) y_1 − (1 − a_12) y_2)/((1 − a_12)(1 − a_21)).
Let
a = (1 − a_12 a_21)/((1 − a_12)(1 − a_21)), b = ((1 − a_21) y_1 + (1 − a_12) y_2)/(1 − a_12 a_21), x = (1 − a_21)(a_12 − y_1)/(1 − a_12 a_21), x̃ = (1 − a_12)(a_21 − y_2)/(1 − a_12 a_21).
Basic calculations show that
(1 − y_1)/(1 − a_12) = a(1 − b − x̃) = a(1 − b) − a x̃, (1 − y_2)/(1 − a_21) = a(1 − b − x) = a(1 − b) − a x,
so that B = B_1 − B_2 with
B_1 = a(1 − b) 𝟙{a_12 < y_1 < 1} 𝟙{y_2 ≤ a_21}, B_2 = a x̃ 𝟙{a_12 < y_1 < 1} 𝟙{y_2 ≤ a_21}.
In addition, 𝟙{a_21 < y_2 < 1} = 𝟙{y_2 < 1} − 𝟙{y_2 ≤ a_21} implies that C = C_1 − C_2, with
C_1 = a(1 − b − x) 𝟙{y_1 ≤ a_12} 𝟙{y_2 < 1}, C_2 = a(1 − b − x) 𝟙{y_1 ≤ a_12} 𝟙{y_2 ≤ a_21}.
As for B, we can decompose C_1 into C_11 − C_12 with
C_11 = a(1 − b) 𝟙{y_1 ≤ a_12} 𝟙{y_2 < 1}, C_12 = a x 𝟙{y_1 ≤ a_12} 𝟙{y_2 < 1}.
Furthermore, C_2 can be written as C_22 + A, where
C_22 = ((a_21 − y_2)/(1 − a_21)) 𝟙{y_1 ≤ a_12} 𝟙{y_2 ≤ a_21}.
We thus obtain that
P(U_1 + a_12 U_2 > y_1, a_21 U_1 + U_2 > y_2) = A + B + C + D = A + B_1 − B_2 + C_11 − C_12 − C_22 − A + D = (B_1 + C_11 + D) − C_12 − (B_2 + C_22).
It is easy to see that B_2 + C_22 = ((a_21 − y_2)/(1 − a_21)) 𝟙{y_1 < 1} 𝟙{y_2 ≤ a_21}. Finally, we can multiply with indicators of t > 0 and obtain that
B_1 + C_11 + D = a(1 − b) (𝟙{a_12 < y_1 < 1} 𝟙{y_2 ≤ a_21} + 𝟙{y_1 ≤ a_12} 𝟙{y_2 < 1} + 𝟙{a_12 < y_1 < 1} 𝟙{a_21 < y_2 < 1} 𝟙{t > 0})
 = a(1 − b) 𝟙{t > 0} (𝟙{a_12 < y_1 < 1} 𝟙{y_2 ≤ a_21} + 𝟙{y_1 ≤ a_12} 𝟙{y_2 < 1} + 𝟙{a_12 < y_1 < 1} 𝟙{a_21 < y_2 < 1}).
By first taking out 𝟙{a_12 < y_1 < 1} and consecutively pulling together the terms, we obtain that
B_1 + C_11 + D = a(1 − b) 𝟙{y_1 < 1} 𝟙{y_2 < 1} 𝟙{t > 0},
which, by a basic calculation, leads to the claim as stated. □
The following lemma is an auxiliary result, which is also used in the proof of Theorem 1.
Lemma A1.
Let ψ(t) = W_2[F_R](t) = ∫_{(t,∞)} (1 − t/r) dF_R(r) and a ≤ b. Then,
∫_{(t,∞)} 𝟙{a < r ≤ b} (1 − t/r) dF_R(r) = (1 − min{t, a}/a) F̄_R(max{0, a}) + min{t/max{0, a}, 1} ψ(max{a, t}) − (1 − min{t, b}/b) F̄_R(max{0, b}) − min{t/max{0, b}, 1} ψ(max{b, t}),
with the conventions that min{t/max{0, a}, 1} = 1 and min{t, a}/a = 1 for a ≤ 0 (and similarly for the terms involving b).
Proof. 
First, assume 0 < a ≤ b. Writing 𝟙{r ≤ b} = 1 − 𝟙{r > b}, we obtain that
∫_{(t,∞)} 𝟙{a < r ≤ b} (1 − t/r) dF_R(r) = ∫_{(t,∞)} 𝟙{r > a} (1 − t/r) dF_R(r) − ∫_{(t,∞)} 𝟙{r > max{a, b}} (1 − t/r) dF_R(r),
so it suffices to consider the first integral. For t ≥ a, this gives ψ(t). If t < a, we have
∫_{(a,∞)} (1 − t/r) dF_R(r) = ∫_{(a,∞)} (1 − t/a + (t/a)(1 − a/r)) dF_R(r) = (1 − t/a) F̄_R(a) + (t/a) ψ(a).
Hence,
∫_{(t,∞)} 𝟙{r > a} (1 − t/r) dF_R(r) = (1 − min{t, a}/a) F̄_R(a) + min{t/a, 1} ψ(max{a, t}).
Applying the same idea with a replaced by max{a, b} and putting the pieces together, we obtain that
∫_{(t,∞)} 𝟙{a < r ≤ b} (1 − t/r) dF_R(r) = (1 − min{t, a}/a) F̄_R(a) + min{t/a, 1} ψ(max{a, t}) − (1 − min{t, b}/b) F̄_R(b) − min{t/b, 1} ψ(max{b, t}).
Now, assume a ≤ 0 ≤ b. Then,
∫_{(t,∞)} 𝟙{a < r ≤ b} (1 − t/r) dF_R(r) = ∫_{(t,∞)} 𝟙{0 < r ≤ b} (1 − t/r) dF_R(r),
so that we can apply the formula for the case 0 < a ≤ b in the limit a ↓ 0 here. This implies that
∫_{(t,∞)} 𝟙{a < r ≤ b} (1 − t/r) dF_R(r) = 0 · 1 + 1 · ψ(t) − (1 − min{t, b}/b) F̄_R(b) − min{t/b, 1} ψ(max{b, t}),
which is of the form as stated in this case. Finally, if a ≤ b ≤ 0, then the integral is 0. Again, it is easy to see that the result as stated is indeed 0 in this case. □
We can now address the proof of Theorem 1.
Proof of Theorem 1.
We consider several cases.
Case 1: a 12 = 1 and a 21 < 1 .
By Proposition A1,
P(R U_1 + a_12 R U_2 > x_1, a_21 R U_1 + R U_2 > x_2) = ∫_{(0,∞)} P(U_1 + a_12 U_2 > x_1/r, a_21 U_1 + U_2 > x_2/r) dF_R(r) = ∫_{(0,∞)} ((1 − max{x_2/r, a_21})/(1 − a_21)) 𝟙{r > max{x_1, x_2}} dF_R(r).
The fraction in the integrand can be written as 𝟙{r a_21 ≥ x_2} + ((1 − x_2/r)/(1 − a_21)) 𝟙{r a_21 < x_2} and thus the whole integral as E + F/(1 − a_21), with
E = ∫_{(0,∞)} 𝟙{r a_21 ≥ x_2} 𝟙{r > max{x_1, x_2}} dF_R(r), F = ∫_{(0,∞)} 𝟙{r a_21 < x_2} 𝟙{r > max{x_1, x_2}} (1 − x_2/r) dF_R(r).
By distinguishing the cases a_21 < 0, a_21 = 0, and a_21 > 0, we obtain that
E = ∫_{(0,∞)} 𝟙{r ≤ x_2/a_21} 𝟙{r > max{x_1, x_2}} dF_R(r) 𝟙{x_2 < 0} 𝟙{a_21 < 0} + ∫_{(0,∞)} 𝟙{r > x_1} dF_R(r) 𝟙{x_2 ≤ 0} 𝟙{a_21 = 0} + ∫_{(0,∞)} 𝟙{r ≥ x_2/a_21} 𝟙{r > x_1} dF_R(r) 𝟙{a_21 > 0}.
Note that, if x_2 ≤ 0, then 𝟙{r > max{x_1, x_2}} = 𝟙{r > max{0, x_1, x_2}} = 𝟙{r > max{0, x_1}}. This implies that
E = ∫_{(0,∞)} 𝟙{max{0, x_1} < r ≤ x_2/a_21} dF_R(r) 𝟙{x_2 < 0} 𝟙{a_21 < 0} + F̄_R(max{0, x_1}) 𝟙{x_2 ≤ 0} 𝟙{a_21 = 0} + F̄_R(max{0, x_1, x_2/a_21}) 𝟙{a_21 > 0}
 = (F̄_R(max{0, x_1}) − F̄_R(x_2/a_21)) 𝟙{x_2/a_21 > max{0, x_1}} 𝟙{x_2 < 0} 𝟙{a_21 < 0} + F̄_R(max{0, x_1}) 𝟙{x_2 ≤ 0} 𝟙{a_21 = 0} + F̄_R(max{0, x_1, x_2/a_21}) 𝟙{a_21 > 0}.
By splitting up F in a similar fashion, we obtain that
F = ∫_{(0,∞)} 𝟙{r > max{0, x_1, x_2, x_2/a_21}} (1 − x_2/r) dF_R(r) 𝟙{a_21 < 0} + ∫_{(0,∞)} 𝟙{r > max{x_1, x_2}} (1 − x_2/r) dF_R(r) 𝟙{x_2 > 0} 𝟙{a_21 = 0} + ∫_{(0,∞)} 𝟙{max{x_1, x_2} < r ≤ x_2/a_21} (1 − x_2/r) dF_R(r) 𝟙{x_2 > 0} 𝟙{a_21 > 0}.
Distinguishing the cases of different sign of x_2 in the first integral leads to
F = ∫_{(x_2,∞)} 𝟙{r > x_1} (1 − x_2/r) dF_R(r) 𝟙{x_2 > 0} 𝟙{a_21 < 0}
 + ∫_{(x_2/a_21,∞)} 𝟙{r > x_1} (1 − x_2/r) dF_R(r) 𝟙{x_2 ≤ 0} 𝟙{a_21 < 0}
 + ∫_{(x_2,∞)} 𝟙{r > x_1} (1 − x_2/r) dF_R(r) 𝟙{x_2 > 0} 𝟙{a_21 = 0}
 + ∫_{(x_2,∞)} 𝟙{max{0, x_1} < r ≤ x_2/a_21} (1 − x_2/r) dF_R(r) 𝟙{x_2 > 0} 𝟙{a_21 > 0} 𝟙{max{0, x_1} < x_2/a_21}.
Note that the first and third integrals on the right-hand side of the last equation can be pulled together. In addition, note that the second integral can be written as
a_21 (∫_{(x_2/a_21,∞)} (1/a_21 − 1) 𝟙{r > x_1} dF_R(r) + ∫_{(x_2/a_21,∞)} 𝟙{r > x_1} (1 − (x_2/a_21)/r) dF_R(r)) 𝟙{x_2 ≤ 0} 𝟙{a_21 < 0}.
Applying Lemma A1 leads to
F = ((1 − min{x_1, x_2}/x_1) F̄_R(max{0, x_1}) + min{x_2/max{0, x_1}, 1} ψ(max{x_1, x_2})) 𝟙{x_2 > 0} 𝟙{a_21 ≤ 0}
 + ((1 − min{max{0, x_1}, x_2}/max{0, x_1}) F̄_R(max{0, x_1}) + min{x_2/max{0, x_1}, 1} ψ(max{0, x_1, x_2}) − (1 − min{x_2, x_2/a_21}/(x_2/a_21)) F̄_R(max{0, x_2/a_21}) − min{x_2/max{0, x_2/a_21}, 1} ψ(max{x_2, x_2/a_21})) 𝟙{x_2 > 0} 𝟙{a_21 > 0} 𝟙{max{0, x_1} < x_2/a_21}
 + a_21 ((1/a_21 − 1) F̄_R(max{x_1, x_2/a_21}) + (1 − min{x_1, x_2/a_21}/x_1) F̄_R(max{0, x_1}) + min{(x_2/a_21)/max{0, x_1}, 1} ψ(max{x_1, x_2/a_21})) 𝟙{x_2 ≤ 0} 𝟙{a_21 < 0}
 = ((1 − min{x_1, x_2}/x_1) F̄_R(max{0, x_1}) + min{x_2/max{0, x_1}, 1} ψ(max{x_1, x_2})) 𝟙{x_2 > 0}
 − ((1 − a_21) F̄_R(x_2/a_21) + a_21 ψ(x_2/a_21)) 𝟙{x_2 > 0} 𝟙{a_21 > 0} 𝟙{x_1 < x_2/a_21}
 + a_21 ((1/a_21 − 1) F̄_R(max{x_1, x_2/a_21}) + (1 − min{x_1, x_2/a_21}/x_1) F̄_R(max{0, x_1}) + min{(x_2/a_21)/max{0, x_1}, 1} ψ(max{x_1, x_2/a_21})) 𝟙{x_2 ≤ 0} 𝟙{a_21 < 0}.
Recalling that the result is E + F/(1 − a_21), we obtain the form as claimed.
Case 2: a 12 < 1 and a 21 = 1 .
This case directly follows from Case 1 by interchanging a 12 and a 21 , as well as x 1 and x 2 .
Case 3: a 12 < 1 and a 21 < 1 .
Since P(R U_1 + a_12 R U_2 > x_1, a_21 R U_1 + R U_2 > x_2) = ∫_{(0,∞)} P(U_1 + a_12 U_2 > x_1/r, a_21 U_1 + U_2 > x_2/r) dF_R(r), let us first consider the integrand. By Proposition A1, we have that
P(U_1 + a_12 U_2 > x_1/r, a_21 U_1 + U_2 > x_2/r) = A − B − C,
where
A = ((1 − a_12 a_21 − (1 − a_21) x_1/r − (1 − a_12) x_2/r)/((1 − a_12)(1 − a_21))) 𝟙{x_1 < r} 𝟙{x_2 < r} 𝟙{(x_1(1 − a_21) + x_2(1 − a_12))/(1 − a_12 a_21) < r},
B = ((a_12 − x_1/r)/(1 − a_12)) 𝟙{x_1 ≤ r a_12} 𝟙{x_2 < r},
C = ((a_21 − x_2/r)/(1 − a_21)) 𝟙{x_1 < r} 𝟙{x_2 ≤ r a_21}.
It is straightforward to check that
a_21 x_1 ≥ x_2 ⟺ (x_1(1 − a_21) + x_2(1 − a_12))/(1 − a_12 a_21) ≤ x_1.   (A1)
We now consider four different cases.
Case 3.1: a 21 x 1 x 2 and a 12 x 2 x 1 . Condition (A1) and the last equivalence imply that
𝟙 { x 1 < r } 𝟙 x 1 ( 1 a 21 ) + x 2 ( 1 a 12 ) 1 a 12 a 21 < r = 𝟙 { x 1 < r } ,
so that
A = 1 a 12 a 21 ( 1 a 21 ) x 1 r ( 1 a 12 ) x 2 r ( 1 a 12 ) ( 1 a 21 ) 𝟙 { x 1 < r } 𝟙 { x 2 < r } .
This can be rewritten as
A = 1 + 1 1 a 12 a 12 x 1 r + 1 1 a 21 a 21 x 2 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } = 𝟙 { x 1 < r } 𝟙 { x 2 < r } = + 1 1 a 12 a 12 x 1 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } 𝟙 { a 12 0 } = + 1 1 a 12 a 12 x 1 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } 𝟙 { a 12 < 0 } = + 1 1 a 21 a 21 x 2 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } 𝟙 { a 21 0 } = + 1 1 a 21 a 21 x 2 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } 𝟙 { a 21 < 0 } .
Furthermore, x 1 a 12 x 2 implies that
𝟙 { a 12 0 } 𝟙 { x 2 < r } 𝟙 { x 1 < r a 12 } = 𝟙 { a 12 0 } 𝟙 { x 2 < r } ,
so that
B = 1 1 a 12 a 12 x 1 r 𝟙 { a 12 0 } 𝟙 { x 1 r a 12 } + 𝟙 { a 12 < 0 } 𝟙 { x 1 r a 12 } 𝟙 { x 2 < r } = 1 1 a 12 a 12 x 1 r 𝟙 { a 12 0 } 𝟙 { x 2 < r } + 𝟙 { a 12 < 0 } 𝟙 x 2 < r x 1 a 12 .
Similarly, we obtain that
C = 1 1 a 21 a 21 x 2 r 𝟙 { a 21 0 } 𝟙 { x 1 < r } + 𝟙 { a 21 < 0 } 𝟙 x 1 < r x 2 a 21 .
Summarizing the terms leads to
A B C = 𝟙 { x 1 < r } 𝟙 { x 2 < r } = + 𝟙 { a 12 0 } 1 1 a 12 a 12 x 1 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } 𝟙 { x 2 < r } = + 𝟙 { a 12 < 0 } 1 1 a 12 a 12 x 1 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } 𝟙 x 2 < r x 1 a 12 = + 𝟙 { a 21 0 } 1 1 a 21 a 21 x 2 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } 𝟙 { x 1 < r } = + 𝟙 { a 21 < 0 } 1 1 a 21 a 21 x 2 r 𝟙 { x 1 < r } 𝟙 { x 2 < r } 𝟙 x 1 < r x 2 a 21 .
Since $\mathbb{1}\{x_2 < r \le \frac{x_1}{a_{12}}\} = \mathbb{1}\{x_2 < r\}\,\mathbb{1}\{r \le \frac{x_1}{a_{12}}\}$ and $\mathbb{1}\{x_1 < r \le \frac{x_2}{a_{21}}\} = \mathbb{1}\{x_1 < r\}\,\mathbb{1}\{r \le \frac{x_2}{a_{21}}\}$, we have
$$\begin{aligned}
A - B - C &= \mathbb{1}\{x_1 < r\}\,\mathbb{1}\{x_2 < r\}\\
&\quad+ \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathbb{1}\{x_2 < r\}\Bigl(\mathbb{1}\{a_{12} < 0\}\Bigl(\mathbb{1}\{x_1 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr) - \mathbb{1}\{a_{12} \ge 0\}\,\mathbb{1}\{x_1 \ge r\}\Bigr)\\
&\quad+ \frac{1}{1 - a_{21}}\Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathbb{1}\{x_1 < r\}\Bigl(\mathbb{1}\{a_{21} < 0\}\Bigl(\mathbb{1}\{x_2 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_2}{a_{21}}\Bigr\}\Bigr) - \mathbb{1}\{a_{21} \ge 0\}\,\mathbb{1}\{x_2 \ge r\}\Bigr).
\end{aligned}$$
Consider the case $a_{12} \ge 0$. If $a_{12} = 0$, then $x_1 \le a_{12}x_2 = 0$, so that $\mathbb{1}\{x_1 \ge r\} = 0$. If $a_{12} > 0$ and $x_1 > 0$, then our above assumptions imply that $x_1 > a_{12}a_{21}x_1 \ge a_{12}x_2 \ge x_1$, a contradiction. Thus, $x_1 \le 0$ if $a_{12} \ge 0$, and again $\mathbb{1}\{x_1 \ge r\} = 0$. Similarly, $x_2 \le 0$ if $a_{21} \ge 0$. This implies that
$$\begin{aligned}
A - B - C &= \mathbb{1}\{x_1 < r\}\,\mathbb{1}\{x_2 < r\}\\
&\quad+ \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathbb{1}\{a_{12} < 0\}\Bigl(\mathbb{1}\{x_1 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr)\mathbb{1}\{x_2 < r\}\\
&\quad+ \frac{1}{1 - a_{21}}\Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathbb{1}\{a_{21} < 0\}\Bigl(\mathbb{1}\{x_2 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_2}{a_{21}}\Bigr\}\Bigr)\mathbb{1}\{x_1 < r\}.
\end{aligned}$$
For $a_{12} < 0$, we have $x_1 \le a_{12}x_2 \le 0$ if $x_2 \ge 0$. Thus, for $a_{12} < 0$, we have that
$$\begin{aligned}
&\Bigl(\mathbb{1}\{x_1 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr)\mathbb{1}\{x_2 < r\}\\
&= \Bigl(\mathbb{1}\{x_1 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr)\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{x_1 > 0\}\,\mathbb{1}\{x_2 \ge 0\}
+ \Bigl(\mathbb{1}\{x_1 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr)\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{x_1 \le 0\}\,\mathbb{1}\{x_2 \ge 0\}\\
&\quad+ \Bigl(\mathbb{1}\{x_1 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr)\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{x_1 > 0\}\,\mathbb{1}\{x_2 < 0\}
+ \Bigl(\mathbb{1}\{x_1 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr)\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{x_1 \le 0\}\,\mathbb{1}\{x_2 < 0\}\\
&= 0 + \Bigl(1 - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr)\cdot 1 \cdot 1 \cdot \mathbb{1}\{x_2 \ge 0\}
+ (\mathbb{1}\{x_1 < r\} - 0)\cdot 1 \cdot \mathbb{1}\{x_1 > 0\}\,\mathbb{1}\{x_2 < 0\}
+ \Bigl(1 - \mathbb{1}\Bigl\{r \le \frac{x_1}{a_{12}}\Bigr\}\Bigr)\cdot 1 \cdot \mathbb{1}\{x_1 \le 0\}\,\mathbb{1}\{x_2 < 0\}\\
&= \mathbb{1}\Bigl\{r > \frac{x_1}{a_{12}}\Bigr\}\,\mathbb{1}\{x_2 \ge 0\} + \mathbb{1}\{x_1 < r\}\,\mathbb{1}\{x_1 > 0\}\,\mathbb{1}\{x_2 < 0\}
+ \mathbb{1}\Bigl\{r > \frac{x_1}{a_{12}}\Bigr\}\,\mathbb{1}\{x_1 \le 0\}\,\mathbb{1}\{x_2 < 0\}.
\end{aligned}$$
Similarly,
$$\Bigl(\mathbb{1}\{x_2 < r\} - \mathbb{1}\Bigl\{r \le \frac{x_2}{a_{21}}\Bigr\}\Bigr)\mathbb{1}\{x_1 < r\} = \mathbb{1}\Bigl\{r > \frac{x_2}{a_{21}}\Bigr\}\,\mathbb{1}\{x_1 \ge 0\} + \mathbb{1}\{x_2 < r\}\,\mathbb{1}\{x_1 < 0\}\,\mathbb{1}\{x_2 > 0\} + \mathbb{1}\Bigl\{r > \frac{x_2}{a_{21}}\Bigr\}\,\mathbb{1}\{x_1 < 0\}\,\mathbb{1}\{x_2 \le 0\}.$$
Integrating the terms now leads to
$$\begin{aligned}
\int_{(0,\infty)} (A - B - C)\,\mathrm{d}F_R(r)
&= \int_{(0,\infty)} \mathbb{1}\{r > \max\{x_1, x_2, 0\}\}\,\mathrm{d}F_R(r)\\
&\quad+ \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_2 \ge 0\}\,\frac{1}{1 - a_{12}}\int_{(0,\infty)} \mathbb{1}\Bigl\{r > \frac{x_1}{a_{12}}\Bigr\}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathrm{d}F_R(r)\\
&\quad+ \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_1 > 0\}\,\mathbb{1}\{x_2 < 0\}\,\frac{1}{1 - a_{12}}\int_{(0,\infty)} \mathbb{1}\{r > x_1\}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathrm{d}F_R(r)\\
&\quad+ \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_1 \le 0\}\,\mathbb{1}\{x_2 < 0\}\,\frac{1}{1 - a_{12}}\int_{(0,\infty)} \mathbb{1}\Bigl\{r > \frac{x_1}{a_{12}}\Bigr\}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathrm{d}F_R(r)\\
&\quad+ \mathbb{1}\{a_{21} < 0\}\,\mathbb{1}\{x_1 \ge 0\}\,\frac{1}{1 - a_{21}}\int_{(0,\infty)} \mathbb{1}\Bigl\{r > \frac{x_2}{a_{21}}\Bigr\}\Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathrm{d}F_R(r)\\
&\quad+ \mathbb{1}\{a_{21} < 0\}\,\mathbb{1}\{x_1 < 0\}\,\mathbb{1}\{x_2 > 0\}\,\frac{1}{1 - a_{21}}\int_{(0,\infty)} \mathbb{1}\{r > x_2\}\Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathrm{d}F_R(r)\\
&\quad+ \mathbb{1}\{a_{21} < 0\}\,\mathbb{1}\{x_1 < 0\}\,\mathbb{1}\{x_2 \le 0\}\,\frac{1}{1 - a_{21}}\int_{(0,\infty)} \mathbb{1}\Bigl\{r > \frac{x_2}{a_{21}}\Bigr\}\Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathrm{d}F_R(r)\\
&= \bar F_R(\max\{x_1, x_2, 0\}) + \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_2 \ge 0\}\,\frac{a_{12}}{1 - a_{12}}\,\psi(x_1/a_{12})\\
&\quad+ \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_1 > 0\}\,\mathbb{1}\{x_2 < 0\}\Bigl(\frac{1}{1 - a_{12}}\,\psi(x_1) - \bar F_R(\max\{x_1, 0\})\Bigr)\\
&\quad+ \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_1 \le 0\}\,\mathbb{1}\{x_2 < 0\}\,\frac{a_{12}}{1 - a_{12}}\,\psi(x_1/a_{12})\\
&\quad+ \mathbb{1}\{a_{21} < 0\}\,\mathbb{1}\{x_1 \ge 0\}\,\frac{a_{21}}{1 - a_{21}}\,\psi(x_2/a_{21})\\
&\quad+ \mathbb{1}\{a_{21} < 0\}\,\mathbb{1}\{x_1 < 0\}\,\mathbb{1}\{x_2 > 0\}\Bigl(\frac{1}{1 - a_{21}}\,\psi(x_2) - \bar F_R(\max\{x_2, 0\})\Bigr)\\
&\quad+ \mathbb{1}\{a_{21} < 0\}\,\mathbb{1}\{x_1 < 0\}\,\mathbb{1}\{x_2 \le 0\}\,\frac{a_{21}}{1 - a_{21}}\,\psi(x_2/a_{21}).
\end{aligned}$$
Case 3.2: $a_{21}x_1 < x_2$ and $a_{12}x_2 < x_1$.
It is straightforward to check that, in this case,
$$\frac{x_1(1 - a_{21}) + x_2(1 - a_{12})}{1 - a_{12}a_{21}} > \max\{x_1, x_2\},$$
so that
$$A = \frac{1 - a_{12}a_{21} - (1 - a_{21})\frac{x_1}{r} - (1 - a_{12})\frac{x_2}{r}}{(1 - a_{12})(1 - a_{21})}\,\mathbb{1}\Bigl\{\frac{x_1(1 - a_{21}) + x_2(1 - a_{12})}{1 - a_{12}a_{21}} < r\Bigr\} = \frac{1 - a_{12}a_{21}}{(1 - a_{12})(1 - a_{21})}\Bigl(1 - \frac{(1 - a_{21})x_1 + (1 - a_{12})x_2}{r(1 - a_{12}a_{21})}\Bigr)\mathbb{1}\Bigl\{\frac{x_1(1 - a_{21}) + x_2(1 - a_{12})}{1 - a_{12}a_{21}} < r\Bigr\}.$$
For $B$, note that $a_{12} < 0$ and $x_1 \le r a_{12}$ imply that $x_2 > \frac{x_1}{a_{12}} \ge r$. For $a_{12} = 0$, $x_1 > a_{12}x_2 = 0$. Furthermore, $a_{12} > 0$ and $x_1 \le r a_{12}$ imply that $x_2 < \frac{x_1}{a_{12}} \le r$. Therefore,
$$\begin{aligned}
\mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{a_{12} < 0\} &= 0,\\
\mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{a_{12} = 0\} &= \mathbb{1}\{x_1 \le 0\}\,\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{a_{12} = 0\} = 0,\\
\mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{a_{12} > 0\} &= \mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{a_{12} > 0\}.
\end{aligned}$$
Thus,
$$B = \frac{a_{12} - \frac{x_1}{r}}{1 - a_{12}}\,\mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{x_2 < r\} = \frac{a_{12} - \frac{x_1}{r}}{1 - a_{12}}\,\mathbb{1}\Bigl\{\frac{x_1}{a_{12}} \le r\Bigr\}\,\mathbb{1}\{a_{12} > 0\}.$$
Similarly,
$$C = \frac{a_{21} - \frac{x_2}{r}}{1 - a_{21}}\,\mathbb{1}\Bigl\{\frac{x_2}{a_{21}} \le r\Bigr\}\,\mathbb{1}\{a_{21} > 0\}.$$
Note that, in our case here, $(1 - a_{21})x_1 + (1 - a_{12})x_2 \ge 0$ because
$$0 > (1 - a_{21})x_1 + (1 - a_{12})x_2 \iff x_1 + x_2 < a_{21}x_1 + a_{12}x_2,$$
but the right-hand side is strictly less than $x_1 + x_2$ by the case assumptions. In addition, if $a_{12} > 0$, we have that $x_1 \ge 0$, because, otherwise ($x_1 < 0$), we have $x_2 < \frac{x_1}{a_{12}} < 0$, which then leads to $(1 - a_{21})x_1 + (1 - a_{12})x_2 < 0$, a contradiction (note that $a_{12}a_{21} < 1$). Similarly, if $a_{21} > 0$, then $x_2 \ge 0$. This implies that
$$\int_{(0,\infty)} (A - B - C)\,\mathrm{d}F_R(r) = \frac{1 - a_{12}a_{21}}{(1 - a_{12})(1 - a_{21})}\,\psi\Bigl(\frac{(1 - a_{21})x_1 + (1 - a_{12})x_2}{1 - a_{12}a_{21}}\Bigr) - \frac{a_{12}}{1 - a_{12}}\,\mathbb{1}\{a_{12} > 0\}\,\mathbb{1}\{x_1 \ge 0\}\,\psi\Bigl(\frac{x_1}{a_{12}}\Bigr) - \frac{a_{21}}{1 - a_{21}}\,\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_2 \ge 0\}\,\psi\Bigl(\frac{x_2}{a_{21}}\Bigr).$$
Case 3.3: $a_{21}x_1 < x_2$ and $a_{12}x_2 \ge x_1$.
Let $x_2 < r$. If $a_{12} \ge 0$, then $x_1 \le a_{12}x_2 \le a_{12}r < r$, so $\mathbb{1}\{x_1 < r\} = 1$. If $a_{12} < 0$, then $a_{21}x_1 < x_2 \le \frac{x_1}{a_{12}}$, thus $(a_{21} - 1/a_{12})x_1 < 0$ and therefore $(a_{12}a_{21} - 1)x_1 > 0$. This implies that $x_1 < 0$ and thus that $\mathbb{1}\{x_1 < r\} = 1$, so this indicator drops out of the term $A$. In addition, note that $a_{12}x_2 \ge x_1$ implies $x_1(1 - a_{21}) + x_2(1 - a_{12}) \le x_2(1 - a_{12}a_{21}) < r(1 - a_{12}a_{21})$ for $x_2 < r$. Hence, the corresponding indicator in the term $A$ is also always one. We therefore obtain that
$$A = \frac{1 - a_{12}a_{21} - (1 - a_{21})\frac{x_1}{r} - (1 - a_{12})\frac{x_2}{r}}{(1 - a_{12})(1 - a_{21})}\,\mathbb{1}\{x_2 < r\}.$$
By considering the cases of different signs for $a_{12}$, we can write the term $B$ as
$$B = \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{x_2 < r\} = \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\bigl(\mathbb{1}\{a_{12} \ge 0\}\,\mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{x_2 < r\} + \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{x_2 < r\}\bigr).$$
If $x_2 < r$ and $a_{12} \ge 0$, then $x_1 \le a_{12}x_2 \le a_{12}r$, so $\mathbb{1}\{a_{12} \ge 0\}\,\mathbb{1}\{x_1 \le r a_{12}\}\,\mathbb{1}\{x_2 < r\} = \mathbb{1}\{a_{12} \ge 0\}\,\mathbb{1}\{x_2 < r\} = (1 - \mathbb{1}\{a_{12} < 0\})\,\mathbb{1}\{x_2 < r\}$. By summarizing the indicators, this yields
$$B = \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\bigl(\mathbb{1}\{x_2 < r\} - \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_2 < r\}\,\mathbb{1}\{x_1 > r a_{12}\}\bigr).$$
Since $x_2 \le \frac{x_1}{a_{12}} < r$ if $a_{12} < 0$ and $r a_{12} < x_1$, it follows that
$$B = \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\Bigl(\mathbb{1}\{x_2 < r\} - \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\Bigl\{\frac{x_1}{a_{12}} < r\Bigr\}\Bigr).$$
Similarly, for $C$,
$$C = \frac{1}{1 - a_{21}}\Bigl(a_{21} - \frac{x_2}{r}\Bigr)\bigl(\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_1 < r\}\,\mathbb{1}\{x_2 \le r a_{21}\} + \mathbb{1}\{a_{21} \le 0\}\,\mathbb{1}\{x_1 < r\}\,\mathbb{1}\{x_2 \le r a_{21}\}\bigr).$$
Note that the indicator function $\mathbb{1}\{x_1 < r\}$ drops out, since, if $a_{21} > 0$ and $x_2 \le r a_{21}$, then $x_1 < \frac{x_2}{a_{21}} \le r$. The second summand of indicators completely vanishes, since, if $a_{21} < 0$ and $x_2 \le r a_{21}$, then $a_{21}x_1 < x_2 \le r a_{21}$ implies $x_1 > r$. If $a_{21} = 0$, the contradiction $0 < x_2 \le 0$ follows. We thus obtain that
$$C = \frac{1}{1 - a_{21}}\Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_2 \le r a_{21}\}.$$
Pulling the terms together, we obtain that
$$A - B - C = \frac{1}{1 - a_{21}}\Bigl(1 - \frac{x_2}{r}\Bigr)\mathbb{1}\{x_2 < r\} + \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\Bigl\{\frac{x_1}{a_{12}} < r\Bigr\} - \frac{1}{1 - a_{21}}\Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_2 \le r a_{21}\}.$$
Distinguishing the cases $x_2 \le 0$ and $x_2 > 0$, we obtain that
$$\begin{aligned}
A - B - C &= \frac{1}{1 - a_{21}}\Bigl(\Bigl(1 - \frac{x_2}{r}\Bigr)\mathbb{1}\{x_2 \le 0\}\,\mathbb{1}\{x_2 < r\} - \Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathbb{1}\{x_2 \le 0\}\,\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_2 \le r a_{21}\}\\
&\qquad\quad+ \mathbb{1}\{x_2 > 0\}\Bigl(\Bigl(1 - \frac{x_2}{r}\Bigr)\mathbb{1}\{x_2 < r\} - \Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_2 \le r a_{21}\}\Bigr)\Bigr)\\
&\quad+ \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\Bigl\{\frac{x_1}{a_{12}} < r\Bigr\}.
\end{aligned}$$
Note that $\mathbb{1}\{x_2 \le 0\}\,\mathbb{1}\{x_2 < r\} = \mathbb{1}\{x_2 \le 0\}$ and $\mathbb{1}\{x_2 \le 0\}\,\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_2 \le r a_{21}\} = \mathbb{1}\{x_2 \le 0\}\,\mathbb{1}\{a_{21} > 0\} = \mathbb{1}\{x_2 \le 0\}(1 - \mathbb{1}\{a_{21} \le 0\})$. We show next that $\mathbb{1}\{x_2 \le 0\}\,\mathbb{1}\{a_{21} \le 0\} = 0$. To this end, let $x_2 \le 0$. If $a_{21} < 0$, then $a_{21}x_1 < x_2 \le 0$ implies that $x_1 > 0$ and thus $a_{12}x_2 \ge x_1 > 0$. If $x_2 = 0$, then a contradiction follows immediately. Otherwise, we see that $a_{12} < 0$. This yields $a_{12}a_{21} > 1$ in contradiction to the assumption that $1 - a_{12}a_{21} > 0$. If $a_{21} = 0$, then $0 = a_{21}x_1 < x_2 \le 0$ yields a contradiction immediately. Therefore, $\mathbb{1}\{x_2 \le 0\}\,\mathbb{1}\{a_{21} \le 0\} = 0$ and hence $\mathbb{1}\{x_2 \le 0\}\,\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_2 \le r a_{21}\} = \mathbb{1}\{x_2 \le 0\}$. Similarly, if $a_{12} < 0$ and $\frac{x_1}{a_{12}} < r$, we show that $x_1 < 0$. To this end, let $x_1 \ge 0$. Then, $a_{12}x_2 \ge x_1 \ge 0$, so that $x_2 \le 0$, and thus $a_{21}x_1 < x_2 \le 0$. The case $x_1 = 0$ directly leads to a contradiction. If $x_1 > 0$, then $a_{21} < 0$, which implies that $a_{12}x_2 \ge x_1 > \frac{x_2}{a_{21}}$. As above, this leads to a contradiction to the assumption that $1 - a_{12}a_{21} > 0$. Thus, $\mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{\frac{x_1}{a_{12}} < r\} = \mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{\frac{x_1}{a_{12}} < r\}\,\mathbb{1}\{x_1 < 0\}$. Plugging in these relations and summarizing the terms, we obtain that
$$A - B - C = \mathbb{1}\{x_2 \le 0\} + \frac{\mathbb{1}\{x_2 > 0\}}{1 - a_{21}}\Bigl(\Bigl(1 - \frac{x_2}{r}\Bigr)\mathbb{1}\{r > x_2\} - \Bigl(a_{21} - \frac{x_2}{r}\Bigr)\mathbb{1}\{a_{21} > 0\}\,\mathbb{1}\{x_2 \le r a_{21}\}\Bigr) + \frac{1}{1 - a_{12}}\Bigl(a_{12} - \frac{x_1}{r}\Bigr)\mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\Bigl\{\frac{x_1}{a_{12}} < r\Bigr\}\,\mathbb{1}\{x_1 < 0\}.$$
Integrating the terms now leads to
$$\int_{(0,\infty)} (A - B - C)\,\mathrm{d}F_R(r) = \mathbb{1}\{x_2 \le 0\} + \Bigl(\frac{1}{1 - a_{21}}\,\psi(x_2) - \frac{a_{21}}{1 - a_{21}}\,\psi\Bigl(\frac{x_2}{a_{21}}\Bigr)\mathbb{1}\{a_{21} > 0\}\Bigr)\mathbb{1}\{x_2 > 0\} + \frac{a_{12}}{1 - a_{12}}\,\psi\Bigl(\frac{x_1}{a_{12}}\Bigr)\mathbb{1}\{a_{12} < 0\}\,\mathbb{1}\{x_1 < 0\}.$$
Case 3.4: $a_{21}x_1 \ge x_2$ and $a_{12}x_2 < x_1$.
Interchanging the roles of $a_{21}, a_{12}$ and $x_1, x_2$, we obtain that
$$\int_{(0,\infty)} (A - B - C)\,\mathrm{d}F_R(r) = \mathbb{1}\{x_1 \le 0\} + \Bigl(\frac{1}{1 - a_{12}}\,\psi(x_1) - \frac{a_{12}}{1 - a_{12}}\,\psi\Bigl(\frac{x_1}{a_{12}}\Bigr)\mathbb{1}\{a_{12} > 0\}\Bigr)\mathbb{1}\{x_1 > 0\} + \frac{a_{21}}{1 - a_{21}}\,\psi\Bigl(\frac{x_2}{a_{21}}\Bigr)\mathbb{1}\{a_{21} < 0\}\,\mathbb{1}\{x_2 < 0\}.$$
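The closed-form case results above can be sanity-checked numerically against the integral representation from the beginning of the proof. The sketch below is not part of the original proof: it assumes, for concreteness, a radial part $R \sim \mathrm{Exp}(1)$ (so $F_R(0) = 0$ and $\psi(x) = \mathbb{E}[(1 - x/R)_+] = e^{-x} - x E_1(x)$ for $x > 0$, with $E_1$ the exponential integral) and the bivariate simplex representation $U = (V, 1 - V)$, $V \sim \mathrm{U}(0,1)$; all function names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def psi(x):
    """Williamson transform E[(1 - x/R)_+] for R ~ Exp(1), x > 0."""
    return np.exp(-x) - x * exp1(x)

def surv(x1, x2, a12, a21):
    """P(X1 > x1, X2 > x2) for X = R*(U1 + a12*U2, a21*U1 + U2),
    computed by integrating the interval length in V = U1 against dF_R."""
    def inner(r):
        lo = (x1 / r - a12) / (1 - a12)  # X1 > x1  <=>  V > lo
        hi = (1 - x2 / r) / (1 - a21)    # X2 > x2  <=>  V < hi
        return max(0.0, min(hi, 1.0) - max(lo, 0.0))
    return quad(lambda r: inner(r) * np.exp(-r), 0, np.inf, limit=200)[0]

# Case 3.1 with a12, a21 < 0 and x1 <= 0, x2 < 0: only two psi-terms remain.
case31 = 1 + (-0.5)/1.5 * psi(-0.2/-0.5) + (-0.3)/1.3 * psi(-0.1/-0.3)
lhs31 = surv(-0.2, -0.1, -0.5, -0.3)

# Case 3.2 with a12, a21 > 0 and x1, x2 >= 0.
a12, a21, x1, x2 = 0.5, 0.3, 0.4, 0.6
m = ((1 - a21)*x1 + (1 - a12)*x2) / (1 - a12*a21)
case32 = ((1 - a12*a21) / ((1 - a12)*(1 - a21)) * psi(m)
          - a12/(1 - a12) * psi(x1/a12) - a21/(1 - a21) * psi(x2/a21))
lhs32 = surv(0.4, 0.6, 0.5, 0.3)

# Case 3.3 with a12 < 0 < a21 and x1 < 0 < x2.
case33 = (1/(1 - 0.6) * psi(0.9) - 0.6/(1 - 0.6) * psi(0.9/0.6)
          + (-0.4)/(1 - (-0.4)) * psi(-0.5/-0.4))
lhs33 = surv(-0.5, 0.9, -0.4, 0.6)
```

In each case the direct numerical integral agrees with the corresponding closed-form expression up to quadrature error.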
Finally, here is the proof of Proposition 2.
Proof of Proposition 2.
(1) Recall that $U \stackrel{\mathrm{d}}{=} (\bar F_1(X_1), \bar F_2(X_2))$ for $X = (X_1, X_2)$ as in (4). Then,
$$\frac{C(u, u)}{u} = \frac{P(\bar F_1(X_1) \le u,\ \bar F_2(X_2) \le u)}{P(\bar F_1(X_1) \le u)} = \frac{P(X_1 > \bar F_1^{-1}(u),\ X_2 > \bar F_2^{-1}(u))}{P(X_1 > \bar F_1^{-1}(u))}.$$
Under the given assumptions, note that the distribution of $(X_1, X_2)$ is supported in the first quadrant, the marginal survival functions are equal (denoted by $\bar F$), and, for $x \ge 0$, we have by Corollary 1 that
$$P(X_1 > x, X_2 > x) = \frac{2}{1 - a}\Bigl(\frac{1 + a}{2}\,\psi\Bigl(\frac{2}{1 + a}x\Bigr) - a\,\psi\Bigl(\frac{x}{a}\Bigr)\Bigr), \tag{A2}$$
$$\bar F(x) = P(X_1 > x) = P(X_2 > x) = \frac{1}{1 - a}\Bigl(\psi(x) - a\,\psi\Bigl(\frac{x}{a}\Bigr)\Bigr). \tag{A3}$$
Therefore, we obtain that
$$\lambda_L = \lim_{u \to 0+} \frac{C(u, u)}{u} = 2 \lim_{x \to \bar F^{-1}(0)} \frac{\frac{1 + a}{2}\,\psi\bigl(\frac{2}{1 + a}x\bigr) - a\,\psi\bigl(\frac{x}{a}\bigr)}{\psi(x) - a\,\psi\bigl(\frac{x}{a}\bigr)}. \tag{A4}$$
Let $\bar F^{-1}(0) < \infty$.
Since $(1 + a)/2 \in (1/2, 1)$ and $\psi$ is non-increasing, $\frac{1 + a}{2}\,\psi\bigl(\frac{2}{1 + a}x\bigr) < \psi(x)$ for all $x \in [0, \bar F^{-1}(0))$. Furthermore, there exists an $\tilde x \in [0, \bar F^{-1}(0))$ such that the numerator of (A4) is zero but the denominator is greater than zero for all $x \in [\tilde x, \bar F^{-1}(0))$. Hence, $\lambda_L = 0$.
If $\bar F^{-1}(0) = \infty$, applying l'Hôpital's Rule leads to
$$\lambda_L = 2 \lim_{x \to \infty} \frac{\psi'\bigl(\frac{2x}{1 + a}\bigr) - \psi'\bigl(\frac{x}{a}\bigr)}{\psi'(x) - \psi'\bigl(\frac{x}{a}\bigr)}.$$
Dividing by $\psi'(x)$ and using regular variation, we obtain that
$$\lambda_L = 2 \lim_{x \to \infty} \frac{\frac{\psi'(x \cdot 2/(1 + a))}{\psi'(x)} - \frac{\psi'(x/a)}{\psi'(x)}}{1 - \frac{\psi'(x/a)}{\psi'(x)}} = 2\,\frac{\bigl(\frac{1 + a}{2}\bigr)^{\alpha} - a^{\alpha}}{1 - a^{\alpha}}.$$
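To illustrate the last limit numerically (an illustration of ours, not part of the proof), take the exact power function $\varphi(t) = t^{-\alpha}$ as a hypothetical stand-in for $\psi'$ up to sign; for this choice the ratio is constant in $x$, so it already equals its limit and can be compared with the closed form:

```python
# Hypothetical stand-in phi(t) = t**(-alpha) for psi' up to sign (ratios of
# psi' are unaffected by the sign); for this phi the ratio below is constant
# in x and equals the stated limit.
a, alpha = 0.4, 1.7
phi = lambda t: t ** (-alpha)

x = 1e6  # any x > 0 gives the same value for this exact power function
ratio = 2 * (phi(2*x/(1 + a)) - phi(x/a)) / (phi(x) - phi(x/a))
closed_form = 2 * (((1 + a)/2)**alpha - a**alpha) / (1 - a**alpha)
```

For a general regularly varying $\psi'$ the two quantities agree only in the limit $x \to \infty$, which is exactly what the display asserts.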
Now, consider $\lambda_U$ and note that
$$\lambda_U = \lim_{u \to 1-} \frac{1 - 2u + C(u, u)}{1 - u} = 2 - \lim_{u \to 1-} \frac{1 - C(u, u)}{1 - u} = 2 - \lim_{u \to 1-} \frac{1 - P(X_1 > \bar F^{-1}(u),\ X_2 > \bar F^{-1}(u))}{1 - P(X_1 > \bar F^{-1}(u))} = 2 - \lim_{x \to 0+} \frac{1 - P(X_1 > x, X_2 > x)}{1 - P(X_1 > x)}.$$
Using (A2) and (A3), and proceeding similarly as above (with regular variation at 0), one obtains λ U as stated.
(2) Under the given assumptions, using Corollary 1, we obtain that
$$P(X_1 > x, X_2 > x) = \begin{cases} \dfrac{2a}{1 - a}\,\psi\bigl(\frac{x}{a}\bigr) + 1, & \text{if } x \le 0,\\[2mm] \dfrac{1 + a}{1 - a}\,\psi\bigl(\frac{2x}{1 + a}\bigr), & \text{if } x > 0, \end{cases} \qquad
\bar F(x) = P(X_1 > x) = P(X_2 > x) = \begin{cases} \dfrac{a}{1 - a}\,\psi\bigl(\frac{x}{a}\bigr) + 1, & \text{if } x \le 0,\\[2mm] \dfrac{1}{1 - a}\,\psi(x), & \text{if } x > 0. \end{cases}$$
This yields
$$\lambda_L = \lim_{u \to 0+} \frac{C(u, u)}{u} = \lim_{x \to \bar F^{-1}(0)} (1 + a)\,\frac{\psi\bigl(\frac{2x}{1 + a}\bigr)}{\psi(x)}.$$
If $\bar F^{-1}(0) < \infty$, then $\lambda_L = 0$. If $\bar F^{-1}(0) = \infty$, then l'Hôpital's Rule and regular variation imply that
$$\lambda_L = 2 \lim_{x \to \infty} \frac{\psi'\bigl(\frac{2}{1 + a}x\bigr)}{\psi'(x)} = 2\Bigl(\frac{2}{1 + a}\Bigr)^{-\alpha}.$$
Now, consider $\lambda_U$ and note that the margins live on $\mathbb{R}$. Similarly to above, we obtain that
$$\lambda_U = 2 - \lim_{x \to -\infty} \frac{1 - P(X_1 > x, X_2 > x)}{1 - P(X_1 > x)},$$
which is easily seen to be 0.
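The piecewise formulas used in part (2) can also be checked numerically. The sketch below is ours, not part of the proof: it assumes, as the piecewise form suggests, a common tilting entry $a_{12} = a_{21} = a \in (-1, 0)$ (here $a = -1/2$) and $R \sim \mathrm{Exp}(1)$, for which $\psi(x) = e^{-x} - x E_1(x)$ for $x > 0$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def psi(x):
    """Williamson transform E[(1 - x/R)_+] for R ~ Exp(1), x > 0."""
    return np.exp(-x) - x * exp1(x)

def surv(x, a):
    """P(X1 > x, X2 > x) for X = R*(U1 + a*U2, a*U1 + U2), U = (V, 1 - V)."""
    def inner(r):
        lo = (x / r - a) / (1 - a)  # X1 > x  <=>  V > lo
        hi = (1 - x / r) / (1 - a)  # X2 > x  <=>  V < hi
        return max(0.0, min(hi, 1.0) - max(lo, 0.0))
    return quad(lambda r: inner(r) * np.exp(-r), 0, np.inf, limit=200)[0]

a = -0.5
lhs_neg, lhs_pos = surv(-0.3, a), surv(0.4, a)
rhs_neg = 2*a/(1 - a) * psi(-0.3/a) + 1         # branch x <= 0, at x = -0.3
rhs_pos = (1 + a)/(1 - a) * psi(2*0.4/(1 + a))  # branch x > 0,  at x = 0.4
```

Both branches of the joint survival function match the direct numerical integral, and the ratio appearing in the $\lambda_U$ computation tends to $2$ as $x \to -\infty$, consistent with $\lambda_U = 0$.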

References

1. Cambanis, Stamatis, Steel Huang, and Gordon Simons. 1981. On the theory of elliptically contoured distributions. Journal of Multivariate Analysis 11: 368–85.
2. Embrechts, Paul, and Marius Hofert. 2011. Comments on: Inference in multivariate Archimedean copula models. TEST 20: 263–70.
3. Genest, Christian, Johanna Nešlehová, and Louis-Paul Rivest. 2018. The class of multivariate max-id copulas with ℓ1-norm symmetric exponent measure. Bernoulli 24: 3751–90.
4. Krupskii, Pavel, and Harry Joe. 2013. Factor copula models for multivariate data. Journal of Multivariate Analysis 120: 85–101.
5. Krupskii, Pavel, and Harry Joe. 2015. Structured factor copula models: Theory, inference and computation. Journal of Multivariate Analysis 138: 53–73.
6. McNeil, Alexander J., and Johanna Nešlehová. 2009. Multivariate Archimedean copulas, d-monotone functions and ℓ1-norm symmetric distributions. The Annals of Statistics 37: 3059–97.
7. Nelsen, Roger B. 2006. An Introduction to Copulas. Berlin/Heidelberg: Springer.
8. Quessy, Jean-François, and Martin Durocher. 2019. The class of copulas arising from squared distributions: Properties and inference. Econometrics and Statistics 12: 148–66.
9. Quessy, Jean-François, Louis-Paul Rivest, and Marie-Hélène Toupin. 2016. On the family of multivariate chi-square copulas. Journal of Multivariate Analysis 152: 40–60.
10. Rosco, J. F., and Harry Joe. 2013. Measures of tail asymmetry for bivariate copulas. Statistical Papers 54: 709–26.
Figure 1. Scatter plots of 1000 samples from matrix-tilted Archimedean copulas with different tilting matrices A but otherwise equal realizations of R and U . The plot on the left simply corresponds to a Gumbel copula with parameter θ = 2 (Kendall’s tau 0.5). The plot on the right also displays the boundary curves (spanned by e ˜ 1 , e ˜ 2 ; see Proposition 1 below).
Figure 2. Similar to Figure 1 with other tilting matrices A.
Figure 3. Depicting how (5) maps the vectors e 1 = ( 1 , 0 ) and e 2 = ( 0 , 1 ) to e ˜ 1 = ( 1 , a 21 = 1 / 2 ) and e ˜ 2 = ( a 12 = 1 / 20 , 1 ) , respectively, corresponding to Figure 1.
Figure 4. Similar to Figure 3 but with vectors e ˜ 1 , e ˜ 2 corresponding to Figure 2.
Figure 5. λ L and λ U of Part (1) of Proposition 2 as functions of the parameter a for various α.
Figure 6. Scatter plots of 1000 samples from random-matrix-tilted Archimedean copulas with randomized entry a 21 U ( 0 , 1 ) (left) and independent a 12 Beta ( 1 / 10 , 1 ) , a 21 Beta ( 2 , 1 ) (right) based on the example displayed on the right-hand side of Figure 1.
Figure 7. Similar to Figure 6 based on the right-hand side of Figure 2, with a 21 U ( 10 , 0 ) (left) and a 21 = 0.1 min { E , 20 } with E being standard exponential (right).
Figure 8. Scatter plots of the copulas of | X | corresponding to Figure 2.
Figure 9. Scatter plots of the copulas of | X | corresponding to Figure 7.