Proceeding Paper

Dynamic Asymmetric Causality Tests with an Application †

by
Abdulnasser Hatemi-J
College of Business and Economics, UAE University, Al Ain P.O. Box 1551, United Arab Emirates
Presented at the 8th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 27–30 June 2022.
Eng. Proc. 2022, 18(1), 41; https://doi.org/10.3390/engproc2022018041
Published: 2 August 2022
(This article belongs to the Proceedings of The 8th International Conference on Time Series and Forecasting)

Abstract
Testing for causation—defined as the predictive impact of the past value(s) of one variable on the current value of another when all other pertinent information is accounted for—is increasingly utilized in empirical research based on time-series data in different scientific disciplines. A relatively recent extension of this approach allows for potential asymmetric impacts, since in many cases this is consistent with the way reality operates. The current paper maintains that it is also important to account for potential changes in the parameters when asymmetric causality tests are conducted, since there are several reasons why the potential causal connection between the variables may change across time. Therefore, the current paper extends the static asymmetric causality tests by making them dynamic via the use of subsamples. An application is also provided, consistent with measurable definitions of economic or financial good and bad news and their potential causal interactions across time.
JEL Classification:
C32; C51; D82; G15; E17

1. Introduction

From cradle to grave, one of the most prevalent and persistent questions in life is figuring out what is the cause and what is the effect when certain pertinent events are observed. This subject must have been one of the most inspirational and important issues since the dawn of humankind. Throughout history, many philosophers have devoted their pondering to causality as an abstraction. Yet there is no common definition of causality, and above all, there is no universally accepted approach for detecting or measuring it. Since the pioneering notion of [1] and the seminal contribution of [2], testing for the predictive impact of one variable on another has increasingly gained popularity and practical usefulness in different fields when the variables are quantified across time. This approach is known as Granger causality in the literature, and it describes a situation in which the past values of one variable (i.e., the cause variable) are statistically significant in an autoregressive model of another variable (i.e., the effect variable) when all other relevant information is also accounted for. The null hypothesis is defined as zero restrictions imposed on the parameters of the cause variable in the autoregressive model in which the dependent variable is the potential effect variable. If the null hypothesis is rejected empirically, this is taken as empirical evidence for causality according to the Wiener–Granger method (alternative designs exist, such as those detailed by [3,4]). There have been several extensions of this method, especially since the discovery of unit roots and stochastic trends, which are a common property of many time-series variables that quantify economic or financial processes across time. Refs. [5,6,7] suggested testing for causality via an error correction model if the variables are integrated. Ref. [8] proposed a modified [9] test statistic that takes into account the impact of unit roots when causality tests are conducted within the vector autoregressive (VAR) model by adding additional unrestricted lags. Refs. [10,11] suggested bootstrapping with leverage adjustments in order to generate accurate critical values for the modified Wald test, since the asymptotic values are not precise when the desirable statistical assumptions for a good model are not satisfied, according to the Monte Carlo simulations conducted by the authors. The bootstrap-corrected tests appear to have better size and power properties than the asymptotic tests, especially in small samples.
However, there are numerous reasons for the potential causal connection between the variables to have an asymmetric structure. It is commonly agreed in the literature that markets with asymmetric information prevail (based on the seminal contributions of [12,13,14]). People frequently react more strongly to a negative change than to a comparable positive one (according to [15,16,17,18], among others, investors in the financial markets indeed behave asymmetrically, since they have a tendency to respond more strongly to negative news than to positive news). There are also natural restrictions that can lead to the asymmetric causation phenomenon. For instance, there is a limit on the potential price decrease of any normal asset or commodity, since the price cannot drop below zero. However, there is no restriction on the amount of the potential price increase. In fact, if the price decreases by a given percentage and then increases again by the same percentage, it will not end up at the initial price level, but at a lower level. This is true even if the process occurs in the reverse order. There are also moral and/or legal limitations that can lead to asymmetric behavior. For example, if a company manages to increase its profit by P% in a given period, it is feasible and easy to expand the business by that percentage. However, if the company experiences a loss of P%, it is not that easy to implement an immediate contraction of the business operation by P%. The contraction is usually less than P%, and it can take longer to realize than the corresponding expansion. It is naturally easier for a company to hire people than to fire them. In the markets for many commodities, it can also be clearly observed that there is an inertia effect for price decreases compared to price increases. Among others, the fuel market can be mentioned.
When the oil price increases, there seems to be an almost immediate increase in fuel prices, by the same proportion if not more. However, when the oil price decreases, there is a lag in the decrease in fuel prices, and the adjustment might not be fully implemented. This indicates that fuel price adjustments are asymmetric with regard to oil price changes under the ceteris paribus condition. In order to account for this kind of potential asymmetric causation in empirical research based on time-series data, ref. [19] suggests implementing asymmetric causality tests, which the author introduces. However, these asymmetric causality tests are static by nature.
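The price-path asymmetry mentioned above is simple arithmetic: a fall of p percent followed by a rise of p percent (in either order) leaves the price below its starting level. A minimal numerical check:

```python
price = 100.0
p = 0.10                                   # a 10% move

down_then_up = price * (1 - p) * (1 + p)   # fall 10%, then rise 10%
up_then_down = price * (1 + p) * (1 - p)   # same moves in reverse order

print(down_then_up, up_then_down)          # both 99.0, below the initial 100.0
```

Since (1 − p)(1 + p) = 1 − p², the end price is below the starting level for any nonzero p, regardless of the order of the moves.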
The objective of the current paper is to extend these asymmetric causality tests to a dynamic context by allowing the underlying causal parameters to vary across time, which can be achieved by using subsamples. There are several advantages to using this dynamic parameter approach. One is that it takes into account the well-known [20] critique, which is an essential issue from the policy-making point of view. People's preferences can change across time, resulting in a change in their behavior, thereby changing the economic or financial process. Ground-breaking technological innovations and progress happen with time. Major organizational restructuring can take place. Unexpected major events, such as the COVID-19 pandemic, can occur. All of these events can result in a change in the potential causal connection between the underlying variables in a model. Thus, a dynamic parameter model can be more informative, and it can better represent the way things operate in reality. Moreover, from a correct model specification perspective, the dynamic parameter approach can be preferred to the constant parameter approach. Since dynamic testing of the potential causal connection is more informative than the static approach, it can also shed light on the extent of the phenomenon known in the financial literature as correlation risk. According to [21], correlation risk is defined as the potential risk that the strength of the relationship between financial assets varies unfavorably across time. This issue can have crucial ramifications for investors, institutions, and policy makers.
The rest of the paper is organized as follows. Section 2 introduces the methodology of dynamic asymmetric causality testing. Section 3 provides an application of the potential causal impact of oil prices on the world’s largest stock market, accounting for rising and falling prices, using both the static and the dynamic asymmetric causality tests. Conclusions are offered in the final section.

2. Dynamic Asymmetric Causality Testing

The subsequent definitions are utilized in this paper.
Definition 1.
An n-dimensional stochastic process \((x_t)_{t=1,\dots,T}\) measured across time is integrated of degree one, denoted I(1), if it must be differenced once to become stationary. That is, \(x_t \sim I(1)\) if \(\Delta x_t \sim I(0)\), where \(\Delta\) denotes the first-difference operator.
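Definition 1 can be illustrated numerically: a random walk built from stationary shocks is I(1), and differencing it once recovers the I(0) shocks (a minimal sketch; in practice the order of integration is checked with a formal unit root test such as the Phillips–Perron test used in Section 3):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.normal(size=200)          # stationary I(0) shocks
x = np.cumsum(eps)                  # random walk: an I(1) process

dx = np.diff(x)                     # first difference, Delta x_t
assert np.allclose(dx, eps[1:])     # differencing once recovers the I(0) shocks
```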
Definition 2.
Define \((\varepsilon_t)_{t=1,\dots,T}\) as an n-dimensional stochastic process. Then, for any time period \(t \in \{1,\dots,T\}\), the positive and negative shocks of the random variable \(\varepsilon_t\) (i.e., \(\varepsilon_t^{+}\) and \(\varepsilon_t^{-}\)) are defined as the following:

\varepsilon_t^{+} \equiv \max(\varepsilon_t, 0) \equiv (\max(\varepsilon_{1t}, 0), \dots, \max(\varepsilon_{nt}, 0))    (1)

and

\varepsilon_t^{-} \equiv \min(\varepsilon_t, 0) \equiv (\min(\varepsilon_{1t}, 0), \dots, \min(\varepsilon_{nt}, 0))    (2)
The definition of the positive and negative shocks was suggested by [22] for testing for hidden cointegration.
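The shock decomposition in Equations (1) and (2) is elementwise and exact; a minimal sketch (the sample values are arbitrary illustrations):

```python
import numpy as np

eps = np.array([0.7, -1.2, 0.0, 2.3, -0.4])   # one component of eps_t over time
eps_plus = np.maximum(eps, 0.0)               # positive shocks, Equation (1)
eps_minus = np.minimum(eps, 0.0)              # negative shocks, Equation (2)

# The two components always sum back to the original shocks.
assert np.allclose(eps, eps_plus + eps_minus)
```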
The implementation of the causality tests in the sense of the Wiener–Granger method is operational within the vector autoregressive (VAR) model of [23]. The asymmetric version of this test method is introduced by [19] (Ref. [24] extends the test to the frequency domain). Consider the following two I(1) variables with deterministic trend parts (For the simplicity of expression, we assume that n = 2. However, it is straightforward to generalize the results):
x_{1t} = a + bt + x_{1,t-1} + \varepsilon_{1t}    (3)

and

x_{2t} = c + dt + x_{2,t-1} + \varepsilon_{2t}    (4)
where a, b, c, and d are parametric constants and t is the deterministic trend term. The positive and negative partial sums of the two variables can be recursively defined as the following, based on the definitions of shocks presented in Equations (1) and (2):
x_{1t}^{+} := at + \left[\frac{t(t+1)}{2}\right]b + \frac{x_{10}}{2} + \sum_{i=1}^{t} \varepsilon_{1i}^{+}    (5)

x_{1t}^{-} := at + \left[\frac{t(t+1)}{2}\right]b + \frac{x_{10}}{2} + \sum_{i=1}^{t} \varepsilon_{1i}^{-}    (6)

x_{2t}^{+} := ct + \left[\frac{t(t+1)}{2}\right]d + \frac{x_{20}}{2} + \sum_{i=1}^{t} \varepsilon_{2i}^{+}    (7)

x_{2t}^{-} := ct + \left[\frac{t(t+1)}{2}\right]d + \frac{x_{20}}{2} + \sum_{i=1}^{t} \varepsilon_{2i}^{-}    (8)
where x10 and x20 are the initial values. Note that the required conditions \(x_{1t} = x_{1t}^{+} + x_{1t}^{-}\) and \(x_{2t} = x_{2t}^{+} + x_{2t}^{-}\) are fulfilled (for the proof of these results, and for the transformation of I(2) and I(3) variables into cumulative partial sums of negative and positive components, see [25]). Interestingly, the values expressed in Equations (5)–(8) also have economic or financial interpretations as measures of the good or bad news that can affect the markets. Whether to include deterministic trends in the data generating process of a given variable is an empirical question. In some cases, both a drift and a trend may be needed; in other cases, a drift without a trend may be sufficient. It is also possible to have no drift and no trend. For the selection of the deterministic trend components, the procedure suggested by [26] can be useful.
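The construction of the cumulative partial sums in Equations (5)–(8) can be sketched numerically. In this illustration (variable names and parameter values are mine), the deterministic part and the initial value are split equally between the two components so that the identity x_t = x_t⁺ + x_t⁻ holds exactly; the paper's Equations (5)–(8) give the formal definitions:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
a, b, x0 = 0.2, 0.05, 1.0           # drift, trend slope, initial value

eps = rng.normal(size=T)
t = np.arange(1, T + 1)
# Cumulating Equation (3): x_t = x0 + a*t + b*t(t+1)/2 + sum of shocks.
x = x0 + a * t + b * t * (t + 1) / 2 + np.cumsum(eps)

# Split: deterministic part and initial value divided equally between
# the components, plus the cumulative positive/negative shocks.
det_half = (x0 + a * t + b * t * (t + 1) / 2) / 2
x_plus = det_half + np.cumsum(np.maximum(eps, 0.0))
x_minus = det_half + np.cumsum(np.minimum(eps, 0.0))

assert np.allclose(x, x_plus + x_minus)   # x = x+ + x- holds exactly
```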
The asymmetric causality tests can be implemented via the vector autoregressive model of order p, as originally introduced by [22], i.e., the VAR(p). Let us consider testing for the potential causality between the positive components of these two variables. Then, the vector consisting of the dependent variables is defined as \(x_t^{+} = (x_{1t}^{+}, x_{2t}^{+})'\), and the following VAR(p) can be estimated based on this vector:
x_t^{+} = B_0^{+} + B_1^{+} x_{t-1}^{+} + \dots + B_p^{+} x_{t-p}^{+} + B_{p+1}^{+} x_{t-p-1}^{+} + v_t^{+}    (9)
where \(B_0^{+}\) is the 2 × 1 vector of intercepts, \(B_r^{+}\) is a 2 × 2 matrix of parameters to be estimated for lag length r (r = 1, …, p + 1), and \(v_t^{+}\) is a 2 × 1 vector of error terms. An important consideration before using the VAR(p) for drawing inference is determining the optimal lag order p. This can be achieved, among other methods, by minimizing the information criterion suggested by [27], which is expressed as the following:
HJC = \ln\left(\left|\hat{\Pi}_p^{+}\right|\right) + p\left(\frac{n^2 \ln T + 2n^2 \ln(\ln T)}{2T}\right), \quad p = 1, \dots, p_{\max}    (10)
where \(|\hat{\Pi}_p^{+}|\) is the determinant of the variance–covariance matrix of the error terms in the VAR model estimated with lag length p, ln is the natural logarithm, n is the number of time series included in the VAR model, and T is the full sample size used for estimating the parameters of that model (the Monte Carlo simulations conducted by [28] demonstrate clearly that the information criterion expressed in Equation (10) is successful in selecting the optimal lag order when the VAR model is used for forecasting purposes. In addition, the simulations show that this information criterion is robust to ARCH effects and performs well when the variables in the VAR model are integrated. See also [29] for more information on this criterion). The lag order that results in the minimum value of the information criterion is selected as the optimal lag order. It is also important that the off-diagonal elements in the variance–covariance matrix are zero. Therefore, tests for multivariate autocorrelation need to be performed in order to verify this. The null hypothesis that the jth element of \(x_t^{+}\) does not cause the kth element of \(x_t^{+}\) can be tested via a [9] test statistic (an additional unrestricted lag has been added to the VAR model in order to take into account the impact of one unit root, consistent with the results of [8]). The null hypothesis of non-causality can be formulated as the following:
H_0: \text{the row } k, \text{ column } j \text{ element in } B_r^{+} \text{ equals zero for } r = 1, \dots, p    (11)
For a compact representation of the Wald test statistic, we need the following notation (it should be pointed out that this formulation requires that the p initial values of each variable in the VAR model are accessible; for the particulars on this requirement, see [30]):
\(X^{+} := (x_1^{+}, \dots, x_T^{+})\), an (n × T) matrix; \(D^{+} := (B_0^{+}, B_1^{+}, \dots, B_{p+1}^{+})\), an (n × (1 + n(p + 1))) matrix; \(Z_t^{+} := [1, x_t^{+\prime}, x_{t-1}^{+\prime}, \dots, x_{t-p}^{+\prime}]'\), a ((1 + n(p + 1)) × 1) vector; \(Z^{+} := (Z_0^{+}, \dots, Z_{T-1}^{+})\), a ((1 + n(p + 1)) × T) matrix; and \(V^{+} := (v_1^{+}, \dots, v_T^{+})\), an (n × T) matrix. With this notation, the VAR model and the Wald test statistic can be expressed compactly as the following:
X^{+} = D^{+} Z^{+} + V^{+}    (12)
\mathrm{Wald}^{+} = (C\beta^{+})' \left[ C \left( (Z^{+} Z^{+\prime})^{-1} \otimes \hat{\Pi}_u^{+} \right) C' \right]^{-1} (C\beta^{+})    (13)
The parameter matrix D + is estimated via the multivariate least squares as the following:
\hat{D}^{+} = X^{+} Z^{+\prime} (Z^{+} Z^{+\prime})^{-1}    (14)
Note that \(\beta^{+} = \mathrm{vec}(\hat{D}^{+})\), where vec is the column-stacking operator. That is,
\beta^{+} = \mathrm{vec}(\hat{D}^{+}) = \left( (Z^{+} Z^{+\prime})^{-1} Z^{+} \otimes I_n \right) \mathrm{vec}(X^{+})    (15)
The symbol \(\otimes\) denotes the Kronecker product operator, and C is a p × (n(1 + n(p + 1))) indicator matrix consisting of ones and zeros. Elements of one define the restricted parameters, and elements of zero define the unrestricted parameters under the null hypothesis. \(I_n\) is an (n × n) identity matrix. \(\hat{\Pi}_u^{+}\) represents the variance–covariance matrix of the unrestricted VAR model expressed in Equation (12), which can be estimated as the following:
\hat{\Pi}_u^{+} = \frac{\hat{V}_u^{+} \hat{V}_u^{+\prime}}{T - q}    (16)
Note that the constant q represents the number of parameters estimated in each equation of the VAR model. Using the notation presented above, the null hypothesis of no causation can also be formulated as the following expression:
H_0: C\beta^{+} = 0    (17)
The Wald test statistic expressed in (13) that is used for testing the null hypothesis of non-causality, as defined in (11) based on the estimated VAR model in Equation (12), has the following distribution, asymptotically:
\mathrm{Wald}^{+} \xrightarrow{\ d\ } \chi_p^2    (18)
This is the case if the assumption of normality is fulfilled. Thus, the Wald test statistic for testing potential asymmetric causal impacts has a \(\chi^2\) distribution, with degrees of freedom equal to the number of restrictions under the null hypothesis of non-causality, which is equal to p in this particular case. This result also holds for a corresponding VAR model for the negative components, or any other combination. For the proof, see Proposition 1 in [25].
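As a rough illustration of how the Wald statistic in Equation (13) can be assembled, the following sketch estimates a bivariate VAR(1) by multivariate least squares and tests a single zero restriction. It is a simplification of the paper's procedure: it omits the extra unrestricted lag of the [8]-type augmentation and all bootstrapping, the data are simulated, and all variable names are mine:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 200

# Simulate a bivariate VAR(1) in which x2 does NOT cause x1 (the null is true).
x = np.zeros((T + 1, n))
for t in range(1, T + 1):
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.normal()
    x[t, 1] = 0.3 * x[t - 1, 0] + 0.4 * x[t - 1, 1] + rng.normal()

# Compact form of Equation (12): X = D Z + V, with X (n x T) and Z (k x T).
X = x[1:].T
Z = np.vstack([np.ones(T), x[:-1].T])        # intercept plus one lag
k = Z.shape[0]

D_hat = X @ Z.T @ np.linalg.inv(Z @ Z.T)     # multivariate LS, Equation (14)
V_hat = X - D_hat @ Z
Pi_hat = V_hat @ V_hat.T / (T - k)           # residual covariance, Equation (16)

beta = np.ravel(D_hat, order="F")            # vec(D_hat), column stacking
cov_beta = np.kron(np.linalg.inv(Z @ Z.T), Pi_hat)

# C selects the coefficient of x1 on lagged x2 (row 0, column 2 of D).
C = np.zeros((1, n * k))
C[0, 2 * n + 0] = 1.0

r = C @ beta
wald = float(r @ np.linalg.inv(C @ cov_beta @ C.T) @ r)   # Equation (13)
p_value = math.erfc(math.sqrt(wald / 2))     # chi-square(1) survival function
print(wald, p_value)
```

With p restrictions, the statistic would be compared against a χ² distribution with p degrees of freedom, as in Equation (18).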
However, if the assumption of normally distributed data is not fulfilled, the asymptotic critical values are not accurate, and bootstrap simulations need to be performed in order to obtain accurate critical values. If the variance is not constant, or if ARCH effects prevail, then the bootstrap simulations need to be conducted with leverage adjustments. The size and power properties of the test statistics based on the bootstrap approach with leverage corrections have been investigated by [10,11] via Monte Carlo simulations. The simulation results provided by these authors show that the causality test statistic based on leveraged bootstrapping exhibits correct size and higher power compared to a causality test based on asymptotic distributions, especially when the sample size is small or when the assumptions of normally distributed error terms and constant variance are not fulfilled.
The bootstrap simulations can be conducted as follows. First, estimate the restricted model based on regression Equation (12); the restricted model imposes the restrictions under the null hypothesis of non-causality. Second, generate the bootstrap data, i.e., \(X^{+*}\), via the estimated parameters from the regression, the original data, and the bootstrapped residuals. This means generating \(X^{+*} = \hat{D}^{+} Z^{+} + V^{+*}\). Note that the bootstrapped residuals (i.e., \(V^{+*}\)) are created by T random draws, with replacement, from the modified residuals of the regression; each draw has the same probability, 1/T. The bootstrapped residuals need to be mean-adjusted in order to ensure that the residuals have zero expected value in each bootstrap sample. This is accomplished by subtracting the mean value of the bootstrap sample from each residual in that sample. The residuals also need to be adjusted by leverages in order to make sure that the variance is constant in each bootstrap sample. Next, repeat the bootstrap simulations 10,000 times and estimate the Wald test each time (for more information on the leverage adjustments in univariate cases, see [31], and in multivariate cases, see [10]). Use these test values to generate the bootstrap distribution of the test. The critical value at the α significance level via bootstrapping (denoted by \(c_\alpha^{*}\)) can be acquired by taking the (α)th upper quantile of the bootstrap distribution of the Wald test. The final step is to estimate the Wald test value based on the original data and compare it to the bootstrap critical value at the α level of significance. The null hypothesis of non-causation is rejected at the α significance level if the estimated Wald test value is higher than \(c_\alpha^{*}\) (i.e., the bootstrap critical value at the α level).
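The steps above can be sketched as follows. This is a deliberately simplified fixed-design illustration with toy data (all names are mine): it omits the leverage adjustment, uses far fewer than 10,000 draws, and reuses the observed regressor matrix rather than regenerating the series recursively, as a full implementation would:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 2, 100

# Toy data: bivariate white noise, so the non-causality null holds by design.
x = rng.normal(size=(n, T + 1))
X = x[:, 1:]                                  # dependent observations
Z = np.vstack([np.ones(T), x[:, :-1]])        # intercept + one lag of each series

def wald_stat(Xm, Zm):
    """Wald test that the coefficient of series 0 on lagged series 1 is zero."""
    D = Xm @ Zm.T @ np.linalg.inv(Zm @ Zm.T)
    V = Xm - D @ Zm
    Sig = V @ V.T / (Xm.shape[1] - Zm.shape[0])
    cov = np.kron(np.linalg.inv(Zm @ Zm.T), Sig)
    i = 2 * 2 + 0                             # vec index of row 0, column 2 of D
    beta = np.ravel(D, order="F")
    return beta[i] ** 2 / cov[i, i]

# Step 1: restricted estimate under H0 (equation 0 without the cause regressor).
Zr = Z[[0, 1], :]
D0 = np.zeros((n, 3))
D0[0, [0, 1]] = np.linalg.lstsq(Zr.T, X[0], rcond=None)[0]
D0[1, :] = np.linalg.lstsq(Z.T, X[1], rcond=None)[0]
resid = X - D0 @ Z

# Steps 2-3: resample residuals, regenerate data under H0, recompute the Wald.
stats = []
for _ in range(499):                          # the paper uses 10,000 draws
    idx = rng.integers(0, T, size=T)          # T draws, probability 1/T each
    Vs = resid[:, idx]
    Vs = Vs - Vs.mean(axis=1, keepdims=True)  # mean-adjust each bootstrap sample
    stats.append(wald_stat(D0 @ Z + Vs, Z))

c_alpha = float(np.quantile(stats, 0.95))     # bootstrap 5% critical value
reject = wald_stat(X, Z) > c_alpha            # compare the original-data Wald
print(c_alpha, reject)
```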
In order to account for the possibility of the potential change in the asymmetric causal connection between the variables, these tests can be conducted using subsamples. A crucial issue within this context is to determine the minimum subsample size that is required for testing for the dynamic asymmetric causality. The following formula, developed by [32], can be used for determining the smallest subsample size (S):
S = \left\lceil T \left( 0.01 + \frac{1.8}{\sqrt{T}} \right) \right\rceil    (19)
where T is the original full sample size. Note that S needs to be rounded up.
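The formula in Equation (19) is a one-liner to compute; the T = 10 case below matches the worked example later in this section:

```python
import math

def min_subsample_size(T: int) -> int:
    """Smallest subsample size S = T*(0.01 + 1.8/sqrt(T)), rounded up, Eq. (19)."""
    return math.ceil(T * (0.01 + 1.8 / math.sqrt(T)))

print(min_subsample_size(10))   # -> 6
```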
Two different approaches regarding the subsamples can be implemented for this purpose. The first one is the fixed rolling window approach, which is based on repeated estimation of the model with a subsample of size S, where the window is moved forward by one observation each time. That is, we need to estimate the time-varying causality for the following subsamples, where each number represents a point in time:

1, 2, 3, …, S
2, 3, 4, …, S + 1
3, 4, 5, …, S + 2
4, 5, 6, …, S + 3
⋮
T − S + 1, T − S + 2, T − S + 3, …, T
This means that the first subsample covers observation 1 through observation S. The next subsample drops the first observation and adds the observation following S. The process continues until the full range is covered. For example, assume that T = 10; then S = 6, based on Equation (19), when S is rounded up (obviously, the sample size normally needs to be larger than 10 observations in an empirical analysis; a very small sample size is assumed here for simplicity of exposition). Thus, we have the following subsamples (where each number represents the corresponding time):
1, 2, 3, …, S = 1, 2, 3, 4, 5, 6
2, 3, 4, …, S + 1 = 2, 3, 4, 5, 6, 7
3, 4, 5, …, S + 2 = 3, 4, 5, 6, 7, 8
4, 5, 6, …, S + 3 = 4, 5, 6, 7, 8, 9
T − S + 1, T − S + 2, …, T = 5, 6, 7, 8, 9, 10
The graphical illustration of this approach for the current example is depicted in Figure 1.
The second method for determining multiple subsamples is to start with S and recursively add one observation at a time to obtain the next subsample, without removing any observations from the beginning. In this approach, the sample size increases by one observation with each subsample until the full range is covered. That is, the size of the first subsample is equal to S and the size of the last one is equal to T. This is a recursive rolling window approach that is anchored at the start. The graphical illustration of this method for the mentioned example is shown in Figure 2.
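Both subsampling schemes can be generated mechanically; a minimal sketch (the function names are mine), reproducing the T = 10, S = 6 example above:

```python
def rolling_windows(T: int, S: int):
    """Fixed-size windows: (1..S), (2..S+1), ..., (T-S+1..T)."""
    return [list(range(start, start + S)) for start in range(1, T - S + 2)]

def recursive_windows(T: int, S: int):
    """Expanding windows anchored at the start: (1..S), (1..S+1), ..., (1..T)."""
    return [list(range(1, end + 1)) for end in range(S, T + 1)]

print(rolling_windows(10, 6))    # first window [1..6], last window [5..10]
print(recursive_windows(10, 6))  # first window [1..6], last window [1..10]
```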
The next step in order to implement the dynamic asymmetric causality tests is to calculate the Wald test statistic for each subsample and produce its bootstrap critical value at a given significance level. Then, the following ratio can be calculated for each subsample:
\mathrm{TVpCV} = \frac{\text{Wald test value based on the given subsample}}{\text{bootstrap critical value at the given significance level and subsample}}    (20)
where TVpCV signifies the test value per the critical value at a given significance level using a particular subsample. If this ratio is higher than one, it implies that the null hypothesis of no causality is rejected at the given significance level for that subsample. The 5% and the 10% significance levels can be considered. A graphical illustration of (20) for different subsamples can be informative to the investigator in order to detect the potential change of the asymmetric causal connection between the underlying variables in the model.
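As an illustration of the decision rule in Equation (20) — the Wald values and critical values below are hypothetical numbers, not results from the paper:

```python
# Hypothetical per-subsample Wald values and bootstrap 5% critical values.
wald_values = [3.1, 4.8, 7.2, 6.5]
critical_values = [5.9, 6.1, 6.0, 6.3]

ratios = [w / c for w, c in zip(wald_values, critical_values)]
rejections = [r > 1 for r in ratios]   # True -> non-causality rejected
print(rejections)                      # [False, False, True, True]
```

Plotting `ratios` against the subsample end dates gives exactly the kind of graphical diagnostic the text describes.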
An alternative method for estimating and testing the time-varying asymmetric causality tests is to make use of the [33] filter within a multivariate setting. However, this method might not be operational if the dimension of the model is rather high and/or the lag order is large.

3. An Application

An application is provided for detecting the change in the potential causal impact of oil prices on the world's largest financial market. Two indexes are used for this purpose. The first is the total share price index for all shares in the US. The second is the global price of Brent crude in USD per barrel. The frequency of the data is yearly, covering the period 1990–2020. The source of the data is the FRED database, provided by the Federal Reserve Bank of St. Louis. Let us denote the stock price index by S and the oil price index by O. The aim of this application is to investigate whether the potential impact of rising or falling oil prices on the performance of the world's largest stock market is time dependent or not. Interestingly, the US not only has the largest and most valuable financial market in the world, but it is also the biggest oil producer in the world. This combination might make the empirical results of this pragmatic application more useful and general.
The variables are expressed in natural logarithms. The results of the conducted Phillips–Perron unit root tests confirm that each variable is integrated of the first order (the unit root test results are not reported, but they are available on request). First, the following linear regression is estimated via the OLS technique for the stock price index (i.e., \(x_{1t} = \ln S_t\)):
(\ln S_t - \ln S_{t-1}) = a + bt + \varepsilon_{1t}
Next, the residuals of the above regression are estimated. That is
\hat{\varepsilon}_{1t} = (\ln S_t - \ln S_{t-1}) - \hat{a} - \hat{b}t
Note that the circumflex implies the estimated value. The positive and negative shocks are measured as the following, based on the definitions presented in Equations (1) and (2):
\hat{\varepsilon}_{1t}^{+} \equiv \max(\hat{\varepsilon}_{1t}, 0) \quad \text{and} \quad \hat{\varepsilon}_{1t}^{-} \equiv \min(\hat{\varepsilon}_{1t}, 0)
The positive and negative partial sums for this variable are defined as the following, based on the results presented in Equations (5) and (6):
(\ln S_t)^{+} := \hat{a}t + \left[\frac{t(t+1)}{2}\right]\hat{b} + \frac{\ln S_0}{2} + \sum_{i=1}^{t} \hat{\varepsilon}_{1i}^{+}    (22)

(\ln S_t)^{-} := \hat{a}t + \left[\frac{t(t+1)}{2}\right]\hat{b} + \frac{\ln S_0}{2} + \sum_{i=1}^{t} \hat{\varepsilon}_{1i}^{-}    (23)
where \(\ln S_0\) signifies the initial value of the stock price index in logarithmic form, which is assumed to be zero in this case. Note that the equivalency condition \(\ln S_t = (\ln S_t)^{+} + (\ln S_t)^{-}\) is fulfilled. It should be mentioned that the value expressed by Equation (22) represents good news with regard to the stock market, while the value expressed by Equation (23) signifies bad news pertaining to the same market. The oil price index can also be transformed into cumulative partial sums of positive and negative components in an analogous way. Note that a drift and a trend were included in the equation of each variable, since they appear to be needed based on the graphs presented in Figure 3.
The dataset can be transformed using a number of user-friendly statistical software packages. Prior to the causality tests, diagnostic tests were implemented. The results, presented in Table A1, indicate that the assumption of normality is not fulfilled and that the conditional variance is not constant in most cases. Thus, bootstrap simulations with leverage adjustments are necessary in order to produce reliable critical values. This is particularly the case for subsamples, since the degrees of freedom are lower.
Both the symmetric and the asymmetric causality tests are implemented in a dynamic setting using the statistical software module created by [34] in Gauss. The results of the symmetric causality tests are presented in Table A2 and Table A3, based on the 5% and 10% significance levels. Based on these results, it can be inferred that the oil price does not cause the stock market price index, even at the 10% significance level.
The results are also robust regarding the choice of the subsamples because the same results are obtained with all subsamples. An implication of this empirical finding is that the market is informationally efficient in the semi-strong form with regard to oil prices, as defined by [35]. However, when the tests for dynamic asymmetric causality are implemented, the results show that an oil price decrease does not cause a decrease in the stock market price index, and these results are the same across subsamples, even at the 10% significance level (see Table A4, Table A5 and Table A6). Conversely, the null hypothesis that an oil price increase does not cause an increase in the stock market price index is rejected during four subsamples.
It is interesting that by using only three fewer observations, the null hypothesis of non-causality would be rejected at the 10% significance level, in contrast to the result for the entire sample period, which does not reject the underlying null hypothesis (see Figure 4 and Table A7).

4. Conclusions

Tests for causality in the Wiener–Granger sense are regularly used in empirical research based on time-series data in different scientific disciplines. A popular extension of this approach is asymmetric causality testing, as developed by Hatemi-J (2012). However, this approach is static by nature. A pertinent issue within this context is whether the potential asymmetric causal impacts between the underlying variables in a model are stable over the selected time span. In order to shed light on this issue, the current paper suggests implementing asymmetric causality tests across time to see whether the potential asymmetric causal impact is time dependent or not. This can be achieved by using subsamples via two different approaches.
An application is provided in order to investigate the potential causal connection of oil prices with the stock prices of the world’s largest financial market within a time-varying setting. The results of the dynamic symmetric causality tests show that the oil prices do not cause the level of the market price index, regardless of the subsample size. However, when the dynamic asymmetric causality tests are implemented, the results show that positive oil price changes cause a positive price change in the stock market using certain subsamples. In fact, if only three fewer observations are used compared to the full sample size, the results show that there is causality from the oil price increase to the stock market price increase. Conversely, if the full sample size is used (i.e., only three more degrees of freedom), no causality is found. This shows that it can indeed be important to make use of the dynamic causality tests in order to see whether or not the causality result is robust across time.
It should be pointed out that an alternative method for estimating and testing the time-varying asymmetric causality tests is to make use of the [33] filter within a multivariate setting. However, this method might not be operational if the dimension of the VAR model is rather high and/or the lag order is large.
The time-varying asymmetric causality tests results can shed light on whether the causal connection between the variables of interest is general or time dependent. This has important practical implications. If the causal connection changes across time, then the decision or policy based on this causal impact needs to be time dependent as well. This is the case because a static strategy is likely to be inefficient within a dynamic environment.
Finally, the following English proverb, quoted by Wiener (1956) [1], says it all.
“For want of a nail, the shoe was lost;
For want of a shoe, the horse was lost;
For want of a horse, the rider was lost;
For want of a rider, the battle was lost;
For want of a battle, the kingdom was lost!”

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

FRED database, https://fred.stlouisfed.org (accessed on 26 June 2021).

Acknowledgments

This paper was presented at the 8th International Conference on Time Series and Forecasting (ITISE 2022), Spain. The author would like to thank the participants for their comments. The usual disclaimer applies, nevertheless.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Table A1. Test results for multivariate normality and multivariate ARCH in the VAR model.
Variables in the Model | p-Values of the Multivariate Normality Tests | p-Values of the Multivariate ARCH Tests
[lnSt, lnOt] | 0.0814 | 0.3743
[(lnSt)+, (lnOt)+] | 0.5555 | 0.0028
[(lnSt)−, (lnOt)−] | 0.0087 | 0.0150
Notes: (1) lnOt signifies the oil price index, and lnSt indicates the US stock price index for all shares. The vector [(lnSt)+, (lnOt)+] denotes the cumulative partial sums of the positive changes, and the vector [(lnSt)−, (lnOt)−] indicates the cumulative partial sums of the negative changes. (2) The multivariate test of [36] was implemented for testing the null hypothesis of multivariate normality of the residuals in each VAR model. (3) The multivariate test of [37] was conducted for testing the null hypothesis of no multivariate ARCH(1).
Table A2. Dynamic symmetric causality test results at the 5% significance level. (H0: The oil price does not cause the stock market price index.)
SSP | Test Value | 5% Bootstrap CV | TVpCV
1 | 1.788 | 572.528 | 0
2 | 1.431 | 52.787 | 0.034
3 | 0.832 | 20.977 | 0.068
4 | 0.863 | 13.312 | 0.062
5 | 0.112 | 10.484 | 0.082
6 | 0.317 | 4.592 | 0.024
7 | 0.365 | 5.385 | 0.059
8 | 0.429 | 5.135 | 0.071
9 | 0.239 | 4.678 | 0.092
10 | 0.525 | 4.502 | 0.053
11 | 0.5 | 4.76 | 0.11
12 | 0.538 | 4.054 | 0.123
13 | 0.642 | 4.561 | 0.118
14 | 0.757 | 4.606 | 0.139
15 | 0.591 | 4.859 | 0.156
16 | 0.281 | 4.211 | 0.14
17 | 0.623 | 3.68 | 0.076
18 | 0.635 | 4.571 | 0.136
19 | 0.638 | 4.252 | 0.149
20 | 0.627 | 4.403 | 0.145
21 | 0.627 | 4.129 | 0.152
DENOTATIONS: SSP: the subsample period; CV: the critical value; TVpCV: the test value per the critical value; TVpCV = (test value)/(bootstrap critical value at the given significance level). If TVpCV > 1, the null hypothesis of no causality is rejected at the given significance level.
Table A3. Dynamic symmetric causality test results at the 10% significance level. (H0: The oil price does not cause the stock market price index.)
SSP | Test Value | 10% Bootstrap CV | TVpCV
1 | 0.187 | 139.42 | 0.001
2 | 1.788 | 22.105 | 0.081
3 | 1.431 | 10.186 | 0.14
4 | 0.832 | 8.851 | 0.094
5 | 0.863 | 7.811 | 0.11
6 | 0.112 | 2.957 | 0.038
7 | 0.317 | 3.574 | 0.089
8 | 0.365 | 3.231 | 0.113
9 | 0.429 | 3.188 | 0.135
10 | 0.239 | 3.043 | 0.079
11 | 0.525 | 3.011 | 0.174
12 | 0.5 | 2.876 | 0.174
13 | 0.538 | 3.083 | 0.174
14 | 0.642 | 2.95 | 0.218
15 | 0.757 | 3.218 | 0.235
16 | 0.591 | 2.835 | 0.209
17 | 0.281 | 2.623 | 0.107
18 | 0.623 | 3.04 | 0.205
19 | 0.635 | 2.894 | 0.219
20 | 0.638 | 2.981 | 0.214
21 | 0.627 | 3.05 | 0.206
DENOTATIONS: SSP: the subsample period; CV: the critical value; TVpCV: the test value per the critical value; TVpCV = (test value)/(bootstrap critical value at the given significance level). If TVpCV > 1, the null hypothesis of no causality is rejected at the given significance level.
Table A4. Dynamic asymmetric causality test results at the 10% significance level. (H0: An oil price increase does not cause an increase in the stock market price index.)
SSP | Test Value | 10% Bootstrap CV | TVpCV
1 | 0 | 0.104 | 0
2 | 0 | 0 | 0.001
3 | 0.003 | 0.01 | 0.287
4 | 0 | 10.351 | 0
5 | 5.292 | 8.517 | 0.621
6 | 0.838 | 7.86 | 0.107
7 | 2.89 | 5.562 | 0.52
8 | 1.052 | 3.335 | 0.315
9 | 1.124 | 2.905 | 0.387
10 | 10.014 | 5.873 | 1.705
11 | 2.309 | 3.245 | 0.712
12 | 3.688 | 5.594 | 0.659
13 | 6.353 | 5.749 | 1.105
14 | 9.888 | 5.188 | 1.906
15 | 11.778 | 5.392 | 2.184
16 | 2.607 | 3.074 | 0.848
17 | 7.435 | 5.378 | 1.382
18 | 2.136 | 3.2 | 0.667
19 | 1.279 | 2.965 | 0.432
20 | 1.288 | 3.085 | 0.417
DENOTATIONS: SSP: the subsample period; CV: the critical value; TVpCV: the test value per the critical value; TVpCV = (test value)/(bootstrap critical value at the given significance level). If TVpCV > 1, the null hypothesis of no causality is rejected at the given significance level.
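The rejection rule applied throughout Tables A2–A7 reduces to comparing the test value with the bootstrap critical value; a one-line Python sketch (the helper name is illustrative):

```python
def rejects_null(test_value, bootstrap_cv):
    """Reject the null of no causality when
    TVpCV = test value / bootstrap CV exceeds one."""
    return test_value / bootstrap_cv > 1.0

# Subsample 10 in Table A4: test value 10.014 vs. 10% bootstrap CV 5.873
print(rejects_null(10.014, 5.873))  # → True  (TVpCV = 1.705)
```

Because the critical values are bootstrapped separately for each subsample, the comparison must be made row by row rather than against a single asymptotic critical value.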
Table A5. Dynamic asymmetric causality test results at the 5% significance level. (H0: An oil price decrease does not cause a decrease in the stock market price index.)
SSP | Test Value | 5% Bootstrap CV | TVpCV
1 | 40.802 | 415.432 | 0.098
2 | 1.021 | 34.228 | 0.030
3 | 1.001 | 19.468 | 0.051
4 | 1.245 | 14.706 | 0.085
5 | 0.44 | 11.328 | 0.039
6 | 0.525 | 9.4 | 0.056
7 | 0.616 | 10.372 | 0.059
8 | 0.036 | 5.526 | 0.007
9 | 0 | 5.221 | 0
10 | 0.353 | 4.948 | 0.071
11 | 0.362 | 4.86 | 0.075
12 | 0.373 | 4.692 | 0.080
13 | 0.382 | 4.387 | 0.087
14 | 0.387 | 4.717 | 0.082
15 | 0.372 | 4.664 | 0.080
16 | 0.081 | 4.374 | 0.019
17 | 0.085 | 4.338 | 0.020
18 | 0.087 | 5.029 | 0.017
19 | 0.084 | 4.757 | 0.018
20 | 0.078 | 4.499 | 0.017
DENOTATIONS: SSP: the subsample period; CV: the critical value; TVpCV: the test value per the critical value; TVpCV = (test value)/(bootstrap critical value at the given significance level). If TVpCV > 1, the null hypothesis of no causality is rejected at the given significance level.
Table A6. Dynamic asymmetric causality test results at the 10% significance level. (H0: An oil price decrease does not cause a decrease in the stock market price index.)
SSP | Test Value | 10% Bootstrap CV | TVpCV
1 | 40.802 | 93.964 | 0.434
2 | 1.021 | 19.269 | 0.053
3 | 1.001 | 10.783 | 0.093
4 | 1.245 | 9.204 | 0.135
5 | 0.44 | 7.935 | 0.055
6 | 0.525 | 6.316 | 0.083
7 | 0.616 | 6.836 | 0.09
8 | 0.036 | 3.601 | 0.01
9 | 0 | 3.339 | 0
10 | 0.353 | 3.275 | 0.108
11 | 0.362 | 3.048 | 0.119
12 | 0.373 | 3.248 | 0.115
13 | 0.382 | 3.076 | 0.124
14 | 0.387 | 3.199 | 0.121
15 | 0.372 | 2.938 | 0.127
16 | 0.081 | 2.862 | 0.028
17 | 0.085 | 2.784 | 0.031
18 | 0.087 | 3.205 | 0.027
19 | 0.084 | 2.884 | 0.029
20 | 0.078 | 3.066 | 0.025
DENOTATIONS: SSP: the subsample period; CV: the critical value; TVpCV: the test value per the critical value; TVpCV = (test value)/(bootstrap critical value at the given significance level). If TVpCV > 1, the null hypothesis of no causality is rejected at the given significance level.
Table A7. Dynamic asymmetric causality test results at the 5% significance level. (H0: An oil price increase does not cause an increase in the stock market price index.)
SSP | Test Value | 5% Bootstrap CV | TVpCV
1 | 0 | 0.157 | 0
2 | 0 | 0 | 0
3 | 0.003 | 0.039 | 0.072
4 | 0 | 16.548 | 0
5 | 5.292 | 12.61 | 0.42
6 | 0.838 | 11.836 | 0.071
7 | 2.89 | 8.299 | 0.348
8 | 1.052 | 5.009 | 0.21
9 | 1.124 | 4.334 | 0.259
10 | 10.014 | 8.446 | 1.186
11 | 2.309 | 4.543 | 0.508
12 | 3.688 | 7.523 | 0.49
13 | 6.353 | 7.967 | 0.797
14 | 9.888 | 7.387 | 1.339
15 | 11.778 | 7.529 | 1.564
16 | 2.607 | 5.282 | 0.494
17 | 7.435 | 7.514 | 0.989
18 | 2.136 | 4.334 | 0.493
19 | 1.279 | 4.311 | 0.297
20 | 1.288 | 4.536 | 0.284
DENOTATIONS: SSP: the subsample period; CV: the critical value; TVpCV: the test value per the critical value; TVpCV = (test value)/(bootstrap critical value at the given significance level). If TVpCV > 1, the null hypothesis of no causality is rejected at the given significance level.

References

1. Wiener, N. The Theory of Prediction. In Modern Mathematics for Engineers; Beckenbach, E.F., Ed.; McGraw-Hill: New York, NY, USA, 1956; Volume 1.
2. Granger, C.W. Investigating Causal Relations by Econometric Models and Cross-Spectral Methods. Econometrica 1969, 37, 424–439.
3. Sims, C.A. Money, Income and Causality. Am. Econ. Rev. 1972, 62, 540–552.
4. Geweke, J. Measurement of Linear Dependence and Feedback between Multiple Time Series. J. Am. Stat. Assoc. 1982, 77, 304–324.
5. Granger, C.W. Developments in the Study of Cointegrated Economic Variables. Oxf. Bull. Econ. Stat. 1986, 48, 213–228.
6. Granger, C.W. Some Recent Development in a Concept of Causality. J. Econom. 1988, 39, 199–211.
7. Engle, R.F.; Granger, C.W. Co-integration and Error Correction: Representation, Estimation, and Testing. Econometrica 1987, 55, 251–276.
8. Toda, H.Y.; Yamamoto, T. Statistical Inference in Vector Autoregressions with Possibly Integrated Processes. J. Econom. 1995, 66, 225–250.
9. Wald, A. Contributions to the Theory of Statistical Estimation and Testing Hypotheses. Ann. Math. Stat. 1939, 10, 299–326.
10. Hacker, S.; Hatemi-J, A. Tests for Causality between Integrated Variables Using Asymptotic and Bootstrap Distributions: Theory and Application. Appl. Econ. 2006, 38, 1489–1500.
11. Hacker, S.; Hatemi-J, A. A Bootstrap Test for Causality with Endogenous Lag Length Choice: Theory and Application in Finance. J. Econ. Stud. 2012, 39, 144–160.
12. Akerlof, G. The Market for Lemons: Quality Uncertainty and the Market Mechanism. Q. J. Econ. 1970, 84, 488–500.
13. Spence, M. Job Market Signalling. Q. J. Econ. 1973, 87, 355–374.
14. Stiglitz, J. Incentives and Risk Sharing in Sharecropping. Rev. Econ. Stud. 1974, 41, 219–255.
15. Longin, F.; Solnik, B. Extreme Correlation of International Equity Markets. J. Financ. 2001, 56, 649–676.
16. Ang, A.; Chen, J. Asymmetric Correlations of Equity Portfolios. J. Financ. Econ. 2002, 63, 443–494.
17. Hong, Y.; Zhou, G. Asymmetries in Stock Returns: Statistical Test and Economic Evaluation. Rev. Financ. Stud. 2008, 20, 1547–1581.
18. Alvarez-Ramirez, J.; Rodriguez, E.; Echeverria, J.C. A DFA Approach for Assessing Asymmetric Correlations. Phys. A Stat. Mech. Its Appl. 2009, 388, 2263–2270.
19. Hatemi-J, A. Asymmetric Causality Tests with an Application. Empir. Econ. 2012, 43, 447–456.
20. Lucas, R.E. Econometric Policy Evaluation: A Critique. In The Phillips Curve and Labor Markets; Carnegie-Rochester Conference Series on Public Policy; Brunner, K., Meltzer, A., Eds.; Elsevier: New York, NY, USA, 1976; Volume 1, pp. 19–46.
21. Meissner, G. Correlation Risk Modelling and Management; Wiley Financial Series; Wiley: Hoboken, NJ, USA, 2014.
22. Granger, C.W.; Yoon, G. Hidden Cointegration; Department of Economics Working Paper; University of California: San Diego, CA, USA, 2002.
23. Sims, C.A. Macroeconomics and Reality. Econometrica 1980, 48, 1–48.
24. Bahmani-Oskooee, M.; Chang, T.; Ranjbar, O. Asymmetric Causality Using Frequency Domain and Time-Frequency Domain (Wavelet) Approaches. Econ. Model. 2016, 56, 66–78.
25. Hatemi-J, A.; El-Khatib, Y. An Extension of the Asymmetric Causality Tests for Dealing with Deterministic Trend Components. Appl. Econ. 2016, 48, 4033–4041.
26. Hacker, S.; Hatemi-J, A. The Properties of Procedures Dealing with Uncertainty about Intercept and Deterministic Trend in Unit Root Testing; Working Paper Series in Economics and Institutions of Innovation 214; Royal Institute of Technology, CESIS—Centre of Excellence for Science and Innovation Studies: Stockholm, Sweden, 2010.
27. Hatemi-J, A. A New Method to Choose Optimal Lag Order in Stable and Unstable VAR Models. Appl. Econ. Lett. 2003, 10, 135–137.
28. Hatemi-J, A. Forecasting Properties of a New Method to Choose Optimal Lag Order in Stable and Unstable VAR Models. Appl. Econ. Lett. 2008, 15, 239–243.
29. Mustafa, A.; Hatemi-J, A. A VBA Module Simulation for Finding Optimal Lag Order in Time Series Models and Its Use on Teaching Financial Data Computation. Appl. Comput. Inform. 2022, 18, 208–220.
30. Lutkepohl, H. New Introduction to Multiple Time Series Analysis; Springer: Berlin/Heidelberg, Germany, 2005.
31. Davison, A.C.; Hinkley, D.V. Bootstrap Methods and Their Application; Cambridge University Press: Cambridge, UK, 1999.
32. Phillips, P.C.; Shi, S.; Yu, J. Testing for Multiple Bubbles: Historical Episodes of Exuberance and Collapse in the S&P 500. Int. Econ. Rev. 2015, 56, 1043–1078.
33. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45.
34. Hatemi-J, A.; Mustafa, A. DASCT01: Gauss Module for Estimating Dynamic Asymmetric and Symmetric Causality Tests; Statistical Software Components D00001; Boston College Department of Economics, 2021. Available online: https://ideas.repec.org/c/boc/bocode/d00001.html (accessed on 11 May 2021).
35. Fama, E.F. Efficient Capital Markets: A Review of Theory and Empirical Work. J. Financ. 1970, 25, 383–417.
36. Doornik, J.A.; Hansen, H. An Omnibus Test for Univariate and Multivariate Normality. Oxf. Bull. Econ. Stat. 2008, 70, 927–939.
37. Hacker, S.; Hatemi-J, A. A Multivariate Test for ARCH Effects. Appl. Econ. Lett. 2005, 12, 411–417.
Figure 1. The illustration of the subsamples compared to the entire sample based on the fixed rolling window approach. Notes: T represents the full sample size, and Si represents subsample i (for i = 1, …, 5), in this case.
Figure 2. The graphical presentation of the subsamples based on the recursive rolling window approach. Notes: Si represents subsample i (for i = 1, …, 5), in this example. Note that S5 = T, in this case.
Figure 3. The time plot of the variables, along with the cumulative partial components for positive and negative parts. Notes: The notation lnOt represents the oil price index, and lnSt represents the US stock market price index for all shares. The corresponding sign indicates the positive and negative components.
Figure 4. The time plot of the causality test results for the positive components at the 10% significance level.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
