Article

Performance Assessment of Bias Correction Methods for Precipitation and Temperature from CMIP5 Model Simulation

by Digambar S. Londhe 1, Yashwant B. Katpatal 1 and Neeraj Dhanraj Bokde 2,*
1 Department of Civil Engineering, Visvesvaraya National Institute of Technology, Nagpur 440010, India
2 Center for Quantitative Genetics and Genomics, Aarhus University, 8000 Aarhus, Denmark
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(16), 9142; https://doi.org/10.3390/app13169142
Submission received: 12 June 2023 / Revised: 1 August 2023 / Accepted: 3 August 2023 / Published: 10 August 2023
(This article belongs to the Special Issue Climate Change on Water Resource)

Abstract

Hydrological modeling relies on inputs from General Circulation Model (GCM) data, which allow researchers to investigate the effects of climate change on water resources. However, climate projections carry high uncertainty across ensembles and variables, so bias correction is essential before the impacts of climate change can be analyzed at a regional level. This study evaluates the performance of bias correction methods for precipitation, maximum temperature, and minimum temperature in the Upper Bhima sub-basin. Four bias correction methods are applied to precipitation, viz. linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), and distribution mapping (DM), and three bias correction methods are applied to temperature, viz. linear scaling (LS), variance scaling (VS), and distribution mapping (DM). The results of these bias correction methods are evaluated using the Kolmogorov–Smirnov non-parametric test. The results indicate that bias correction methods reduce biases in the model-simulated data and thereby improve their reliability. The distribution mapping method proves the most effective for precipitation, maximum temperature, and minimum temperature data from the CMIP5 simulations.

1. Introduction

The variability in precipitation has a significant impact on the agriculture-based Indian economy [1]. The impact of climate change on hydrological processes at a regional or watershed scale has been noted by the Intergovernmental Panel on Climate Change (IPCC) [2]. General Circulation Models (GCMs) are widely used to develop climate change scenarios, provide climate forecasts, and deal with adverse climate variability. The atmospheric and meteorological datasets provided by GCM simulations are useful for such studies to some extent, but their coarse spatial resolution restricts their application at the regional level [3]. Hence, reliable climate predictions are essential for adaptation, mitigation, and policymaking for agriculture, the environment, and water resources. Variations in precipitation and temperature are considered significant driving parameters due to their pronounced influence on regional hydrological processes [4,5]. Historical and future changes in meteorological variables at global and regional scales have been increasingly studied using daily weather datasets and GCM simulations [6,7,8,9,10].
Recently, various bias correction methods have been developed and are extensively used when downscaling coarse-resolution climate projection data to the regional level [11]. Biases such as the overestimation or underestimation of rainfall and too many rainy days with low rainfall intensities are present in the simulated datasets. Simulated hydro-meteorological variables such as temperature, precipitation, and runoff carry these biases into climatological and hydrological analyses when used directly for impact studies [12]. Hence, to obtain reliable climate projections from GCM outputs, it is necessary to perform bias correction on them. Bias correction is the statistical adjustment of GCM outputs towards the actual climatology. Bias correction methods commonly assume that the statistical relationship between the models' simulated raw data and the observed data will continue to hold in the future [13,14]. Although bias correction must be performed when GCM outputs are used as inputs to hydrological models, uncertainty remains in the bias-corrected results because different bias correction techniques yield different outcomes [15,16,17].
Bias correction methods are developed to reduce biases and rectify the simulated data produced by GCMs. Some of these techniques are simple scaling techniques, while others are more refined, such as distribution mapping [11]. The scaling techniques generally use a linear or nonlinear relationship of the climatic variables to adjust the differences between observed and GCM data. They include the linear scaling (LS) method [18,19], local intensity scaling (LOCI) [20], and the power transformation (PT) method [21]. The distribution mapping (DM) technique involves distribution-based and distribution-free mapping methods that map the statistical distribution of the GCM-simulated data onto the distribution of the observations. The DM technique assumes that the GCM data follow a specific distribution, such as the gamma or Gaussian distribution [22,23]. Most of the Upper Bhima sub-basin lies in a rain shadow/water scarcity zone; hence, an evaluation and performance assessment of the bias correction methods is essential here in order to study the impacts of climate change on hydro-meteorological variables.
The objective of the study is to analyze and evaluate bias correction methods applied to the precipitation, maximum temperature, and minimum temperature of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ACCESS1-3 historical model output of the ARC Centre of Excellence for Climate System Science in the Upper Bhima sub-basin in Maharashtra, India. Although data from 34 simulation models are available, the ACCESS1-3 model of CMIP5 is selected for the detailed analysis. Four bias correction methods are used for precipitation, viz. linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), and distribution mapping (DM), and three bias correction methods are used for temperature, viz. linear scaling (LS), variance scaling (VS), and distribution mapping (DM). The evaluation of the bias correction methods is performed using the Kolmogorov–Smirnov (K–S) non-parametric statistical test, which assesses the level of discrepancy between the cumulative distributions of the parameters. The novelty of the study lies in evaluating bias correction methods in the Upper Bhima sub-basin, which has a unique geographical setting containing diverse agro-climatic zones such as the Western Ghat Zone, Transition Zones, the Water Scarcity Zone, and the Assured Rainfall Zone. The largest share of the area is occupied by the Water Scarcity Zone, which is a rain shadow zone. Hence, it is very important to assess whether the CMIP5 model data are useful in analyzing the impacts of climate change at the regional level.
The objectives of the present study are: to perform bias correction on the precipitation, maximum temperature, and minimum temperature data of the CMIP5-simulated data, of the ACCESS1-3 historical model output data, in the Upper Bhima sub-basin in Maharashtra, India; and to evaluate the bias correction methods applied on the CMIP5 model-simulated precipitation, maximum temperature, and minimum temperature data with the help of observed meteorological data using the K–S non-parametric test.
The structure of the article is as follows: Section 2 provides an overview of the study area, the selected GCM-simulated data, and the observed reanalysis dataset used for the study. Section 3 presents the overall methodology, including the bias correction methods and the statistical test used for evaluation, i.e., the K–S test. The results and their discussion are presented in Section 4. Section 5 concludes the paper with the main findings and concluding remarks.

2. Study Area and Datasets Used

2.1. Study Area

The Bhima River is one of the main tributaries of the Krishna River. The Upper Bhima sub-basin covers 46,066 km2, which is 17.6% of the area of the Krishna basin. The majority of the sub-basin, approximately 98.4% of its area, is situated in Maharashtra, while the remaining 1.6% lies in the state of Karnataka. The study area extends between latitudes 17.18° N and 19.24° N and longitudes 73.20° E and 76.15° E. The elevation within the watershed ranges from 160 m in the eastern region to 1472 m in the Western Ghat Mountains region of the sub-basin. The location map and Digital Elevation Model (DEM), providing information on the topography of the study area, are shown in Figure 1.
The Upper Bhima sub-basin has a very diverse climate, both spatially and temporally. The eastern part of the Western Ghats receives more than 4000 mm of annual precipitation, whereas the Deccan Plateau plains receive less than 500 mm. The annual average rainfall is 872 mm/year for the whole sub-basin [24]. A total of 80–90% of the yearly precipitation occurs between June and September, i.e., in the monsoon season. From 2002 to 2013, the minimum temperature observed was 5 °C and the maximum temperature was 46 °C.

2.2. Datasets

Data from 12 of the 34 available CMIP5 models have been used in the analysis. Detailed information on the models used is given in Table 1. The simulation is a historical experiment that provides climate data from 1850 to 2005 using historical forcings such as solar variation, volcanic eruptions, stratospheric and anthropogenic aerosol emissions, and greenhouse gas concentrations [25].
Observed precipitation and temperature data are necessary for bias correction. Since observed station data have problems such as missing records, an inadequate number of weather stations, and discontinuity, global reanalysis weather data have been used instead. The reanalysis data used in this study are obtained from the Climate Forecast System Reanalysis (CFSR), a comprehensive, high-resolution global dataset produced with a coupled atmosphere, ocean, land surface, and sea ice system that provides accurate estimates of meteorological data [26]. CFSR data are available from 1979 to the present and comprise gridded weather data with a spatial resolution of 0.35°.

3. Methodology

Four bias correction methods are applied for precipitation, viz. LS, LOCI, PT, and DM, and three bias correction methods are applied for temperature, viz. LS, VS, and DM. All these bias correction methods are applied to daily precipitation and maximum and minimum temperature data from 1979 to 2005. The bias-corrected outputs of precipitation and maximum and minimum temperature are then evaluated using the K–S non-parametric test. The overall methodology used in this study is shown in Figure 2.
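Conceptually, each correction is fitted and applied separately for every calendar month of the daily 1979–2005 series. The following is a minimal Python sketch of this month-by-month workflow; the function and variable names are illustrative assumptions rather than the code used in this study, and `correct_month` stands for any of the per-month corrections described in Section 3.1.

```python
# Hypothetical sketch (not the authors' code) of the month-by-month workflow:
# each correction is fitted and applied separately for every calendar month of
# the daily 1979-2005 series. Names and signatures are illustrative assumptions.
import pandas as pd

def apply_monthly(correct_month, raw: pd.Series, obs: pd.Series) -> pd.Series:
    """Apply correct_month(raw_month, obs_month) per calendar month and reassemble."""
    corrected = raw.astype(float).copy()
    for month in range(1, 13):
        sel = raw.index.month == month
        corrected[sel] = correct_month(raw[sel].to_numpy(),
                                       obs[obs.index.month == month].to_numpy())
    return corrected
```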

3.1. Bias Correction Methods

The bias correction methods correct the mean, variance, and distribution of the modeled variable by applying a transfer function h, as given in (1) [27,28]:
$$P_{\text{Observed}} = h\left(P_{\text{Modeled}}\right) \tag{1}$$
The aim is that the bias-corrected outputs of the variable parameter should match the observed data more closely in comparison to the modeled parameter.

3.1.1. Linear Scaling (LS) of Precipitation and Temperature

The LS method is a simple bias correction method that is widely used to adjust precipitation and temperature from GCMs [29]. It reduces biases by matching the monthly mean of the corrected values with that of the observed values [21]. The corrected values are calculated from the differences between the observed and GCM-simulated data. Precipitation is rectified with a multiplicative term, where the simulated precipitation data are multiplied by a scaling factor, whereas temperature is corrected with an additive term, where an offset is added to the simulated temperature data. The LS equations for precipitation and temperature are given in (2) and (3):
For precipitation,
$$P_{cor,m,d} = P_{raw,m,d} \times \frac{\mu\left(P_{obs,m}\right)}{\mu\left(P_{raw,m}\right)} \tag{2}$$
For temperature,
$$T_{cor,m,d} = T_{raw,m,d} + \mu\left(T_{obs,m}\right) - \mu\left(T_{raw,m}\right) \tag{3}$$
where P_cor,m,d and T_cor,m,d are the corrected precipitation and temperature on the dth day of the mth month, respectively; P_raw,m,d and T_raw,m,d are the raw precipitation and temperature on the dth day of the mth month, respectively; μ(P_obs,m) and μ(T_obs,m) are the mean values of observed precipitation and temperature in month m, respectively; and μ(P_raw,m) and μ(T_raw,m) are the mean values of raw precipitation and temperature in month m, respectively.
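As a minimal illustration (not the authors' implementation), Equations (2) and (3) can be coded for a single calendar month as follows, assuming NumPy arrays of daily values; all names are hypothetical.

```python
# A minimal sketch of Equations (2) and (3) for one calendar month,
# assuming NumPy arrays of daily values; all names are hypothetical.
import numpy as np

def ls_correct_precip_month(p_raw_m: np.ndarray, p_obs_m: np.ndarray) -> np.ndarray:
    """Multiplicative LS, Eq. (2): scale raw precipitation by the ratio of monthly means."""
    return p_raw_m * (p_obs_m.mean() / p_raw_m.mean())

def ls_correct_temp_month(t_raw_m: np.ndarray, t_obs_m: np.ndarray) -> np.ndarray:
    """Additive LS, Eq. (3): shift raw temperature by the difference of monthly means."""
    return t_raw_m + (t_obs_m.mean() - t_raw_m.mean())
```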

3.1.2. Local Intensity Scaling (LOCI) of Precipitation

The LOCI method [20] adjusts biases in both the frequency and the intensity of precipitation, which prevents the model-simulated raw data from containing an excessively large number of drizzle days. It is a two-step process: first, a wet-day threshold for month m, P_threshold,m, is determined from the time series of model-simulated raw precipitation data (P_raw,m,d) such that its wet-day frequency matches that of the observed data (P_obs,m,d); second, the scaling factor S_m is calculated using (4):
$$S_m = \frac{\mu\left(P_{obs,m,d} \mid P_{obs,m,d} > 0\right)}{\mu\left(P_{raw,m,d} \mid P_{raw,m,d} > P_{threshold,m}\right)} \tag{4}$$
The scaling factor in (4) ensures that the mean of the corrected model precipitation equals that of the observed precipitation. The corrected precipitation is then calculated using (5):
$$P_{cor,m,d} = \begin{cases} 0, & \text{if } P_{raw,m,d} < P_{threshold,m} \\ P_{raw,m,d} \times S_m, & \text{otherwise} \end{cases} \tag{5}$$
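A hedged sketch of the two-step LOCI procedure for one calendar month is given below, taking an observed wet day as P > 0 as in Equation (4); the names and the quantile-based threshold search are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the two-step LOCI procedure for one calendar month, Eqs. (4)-(5).
# Names and the quantile-based threshold search are illustrative assumptions.
import numpy as np

def loci_correct_month(p_raw_m: np.ndarray, p_obs_m: np.ndarray) -> np.ndarray:
    # Step 1: choose the raw-data threshold whose exceedance frequency matches
    # the observed wet-day frequency (an observed wet day is taken as P > 0).
    obs_wet_fraction = np.mean(p_obs_m > 0)
    threshold = np.quantile(p_raw_m, 1.0 - obs_wet_fraction)

    # Step 2: scaling factor so corrected and observed wet-day means agree, Eq. (4).
    s_m = p_obs_m[p_obs_m > 0].mean() / p_raw_m[p_raw_m > threshold].mean()

    # Eq. (5): days below the threshold become dry; the remaining days are rescaled.
    return np.where(p_raw_m < threshold, 0.0, p_raw_m * s_m)
```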

3.1.3. Power Transformation (PT) of Precipitation

The LS and LOCI methods remove biases in the precipitation data without taking the variance into account, so the PT method additionally adjusts the variance of the time series using an exponential (power) form. In the PT method, the exponent b_m is first estimated by minimizing f(b_m) in (6):
$$f\left(b_m\right) = \frac{\sigma\left(P_{obs,m}\right)}{\mu\left(P_{obs,m}\right)} - \frac{\sigma\left(P_{LOCI,m}^{\,b_m}\right)}{\mu\left(P_{LOCI,m}^{\,b_m}\right)} \tag{6}$$
where bm is the exponent for the mth month; σ represents the standard deviation operator; and PLOCI,m is the LOCI corrected precipitation in the mth month.
If b_m > 1, the LOCI-corrected precipitation data underestimate the coefficient of variation in month m. Once the optimum value of b_m is found, the scaling factor is determined using (7):
$$S_m = \frac{\mu\left(P_{obs,m}\right)}{\mu\left(P_{LOCI,m}\right)} \tag{7}$$
The scaling factor S_m ensures that the mean of the corrected values matches the mean of the observed values. The corrected precipitation data are then calculated from the LOCI-corrected precipitation data, P_LOCI,m,d, using (8):
$$P_{cor,m,d} = S_m \times P_{LOCI,m,d}^{\,b_m} \tag{8}$$
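One possible implementation of Equations (6)–(8) for a single month is sketched below with SciPy; the choice of optimizer, the search bounds, and the use of the transformed series when computing the scaling factor (so that the corrected monthly mean matches the observed mean, as stated above) are assumptions, not the authors' code.

```python
# Illustrative sketch of Equations (6)-(8) for one calendar month; optimizer,
# bounds, and the scaling on the transformed series are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def pt_correct_month(p_loci_m: np.ndarray, p_obs_m: np.ndarray) -> np.ndarray:
    cv_obs = p_obs_m.std() / p_obs_m.mean()

    def cv_gap(b):
        x = p_loci_m ** b
        return abs(x.std() / x.mean() - cv_obs)   # |f(b_m)| from Eq. (6)

    # Estimate the exponent b_m by minimizing the coefficient-of-variation gap.
    b_m = minimize_scalar(cv_gap, bounds=(0.1, 5.0), method="bounded").x

    transformed = p_loci_m ** b_m
    s_m = p_obs_m.mean() / transformed.mean()     # scaling factor, cf. Eq. (7)
    return s_m * transformed                      # Eq. (8)
```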

3.1.4. Variance Scaling (VS) of Temperature

The PT method is suitable for correcting the mean and variance of precipitation, but it is not appropriate for the bias correction of temperature, because temperature is known to follow an approximately normal distribution [30]. The variance scaling (VS) method was developed to correct both the mean and variance of normally distributed variables such as temperature [4,30]. Hence, temperature data are corrected with the VS method using (9):
$$T_{cor,m,d} = \left[T_{raw,m,d} - \mu\left(T_{raw,m}\right)\right] \times \frac{\sigma\left(T_{obs,m}\right)}{\sigma\left(T_{raw,m}\right)} + \mu\left(T_{obs,m}\right) \tag{9}$$
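Equation (9) amounts to standardizing the raw temperature and rescaling it to the observed mean and standard deviation; a minimal sketch for one calendar month, with illustrative names, is:

```python
# Minimal sketch of variance scaling, Eq. (9), for one calendar month; names are illustrative.
import numpy as np

def vs_correct_month(t_raw_m: np.ndarray, t_obs_m: np.ndarray) -> np.ndarray:
    # Centre the raw series, rescale its spread to the observed standard deviation,
    # then shift it to the observed mean.
    return ((t_raw_m - t_raw_m.mean())
            * (t_obs_m.std() / t_raw_m.std())
            + t_obs_m.mean())
```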

3.1.5. Distribution Mapping (DM) of Precipitation and Temperature

In the DM method, the distribution function of the GCM-simulated data is matched to the distribution function of the observed data. The DM method adjusts the mean, standard deviation, and quantiles, and it retains the extreme data values [31]. The method assumes that the observed data and the model-simulated raw data follow the same type of distribution function, an assumption that can itself introduce new biases.
For precipitation, the gamma distribution [32] with shape parameter α and scale parameter β, given in (10), is used and has been verified to be effective [31,32]:
$$f_\gamma\left(x \mid \alpha, \beta\right) = x^{\alpha-1} \cdot \frac{1}{\beta^{\alpha} \cdot \Gamma\left(\alpha\right)} \cdot e^{-x/\beta}; \quad x \geq 0, \ \alpha, \beta > 0 \tag{10}$$
where x is the observed variable; Γ(·) is the gamma function; α is the shape parameter; and β is the scale parameter.
As previously discussed for the LOCI method (Section 3.1.2), a precise threshold value is used to define a wet day, because the large number of drizzle days in the raw GCM-simulated precipitation distorts the distribution of the raw data. The bias correction is therefore applied to the LOCI-corrected precipitation data, P_LOCI,m,d, using (11):
$$P_{cor,m,d} = F_\gamma^{-1}\left(F_\gamma\left(P_{LOCI,m,d} \mid \alpha_{LOCI,m}, \beta_{LOCI,m}\right) \mid \alpha_{obs,m}, \beta_{obs,m}\right) \tag{11}$$
where F_γ and F_γ^(−1) are the gamma CDF (cumulative distribution function) and its inverse; α_LOCI,m and β_LOCI,m are the fitted gamma parameters for the LOCI-corrected precipitation in a given month m; and α_obs,m and β_obs,m are the fitted gamma parameters for the observed data.
For temperature, the Gaussian (normal) distribution with mean µ and standard deviation σ, shown in (12), is assumed to fit temperature best [4]:
$$f_N\left(x \mid \mu, \sigma\right) = \frac{1}{\sigma\sqrt{2\pi}} \cdot e^{-\frac{\left(x-\mu\right)^2}{2\sigma^2}}; \quad x \in \mathbb{R} \tag{12}$$
Similarly, the corrected temperature can be estimated using (13):
$$T_{cor,m,d} = F_N^{-1}\left(F_N\left(T_{raw,m,d} \mid \mu_{raw,m}, \sigma_{raw,m}\right) \mid \mu_{obs,m}, \sigma_{obs,m}\right) \tag{13}$$
where F_N and F_N^(−1) are the Gaussian CDF and its inverse; µ_raw,m and µ_obs,m are the means of the raw and observed temperature data for a given month m; and σ_raw,m and σ_obs,m are the standard deviations of the raw and observed temperature time series for a given month m.
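A possible SciPy-based sketch of Equations (11) and (13) for one calendar month is shown below. The maximum-likelihood gamma fits with the location fixed at zero are an assumption, and all names are illustrative rather than the authors' implementation.

```python
# Illustrative SciPy sketch of distribution mapping, Eqs. (11) and (13), for one month.
import numpy as np
from scipy import stats

def dm_correct_precip_month(p_loci_m: np.ndarray, p_obs_m: np.ndarray) -> np.ndarray:
    """Map LOCI-corrected wet-day precipitation onto the observed gamma distribution, Eq. (11)."""
    a_mod, _, b_mod = stats.gamma.fit(p_loci_m[p_loci_m > 0], floc=0)  # fitted shape/scale
    a_obs, _, b_obs = stats.gamma.fit(p_obs_m[p_obs_m > 0], floc=0)
    cdf = stats.gamma.cdf(p_loci_m, a_mod, scale=b_mod)
    return stats.gamma.ppf(cdf, a_obs, scale=b_obs)

def dm_correct_temp_month(t_raw_m: np.ndarray, t_obs_m: np.ndarray) -> np.ndarray:
    """Map raw temperature onto the observed normal distribution, Eq. (13)."""
    cdf = stats.norm.cdf(t_raw_m, loc=t_raw_m.mean(), scale=t_raw_m.std())
    return stats.norm.ppf(cdf, loc=t_obs_m.mean(), scale=t_obs_m.std())
```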

3.2. Kolmogorov–Smirnov Non-Parametric Test

The K–S test [33,34] is a non-parametric test used to assess whether two samples come from the same distribution (two-sample K–S test). The two-sample K–S test is a widely used non-parametric method that compares two samples through the maximum vertical distance between their empirical distribution functions. Previous studies show that the K–S test is highly applicable to hydro-meteorological studies [35] and to the evaluation of bias correction techniques [36]. Thus, this test is used here to evaluate the bias correction methods.
Different datasets provide different cumulative distribution functions. Every cumulative distribution function starts with the lowest value and extends up to the highest value of that series. The K–S test estimates the statistic D (14), which is the maximum distance between the cumulative distribution function of the observed time series and the cumulative distribution function of the bias-corrected data series.
Here, the null hypothesis is H0: SN1(x) = SN2(x), i.e., the two distribution functions are equal, and the alternative hypothesis is H1: SN1(x) ≠ SN2(x), i.e., the two distribution functions are not equal. The comparison of the two cumulative distribution functions SN1(x) and SN2(x) using the K–S test is given in (14):
$$D = \max_{-\infty < x < \infty} \left| S_{N_1}\left(x\right) - S_{N_2}\left(x\right) \right| \tag{14}$$
In the K–S test, the calculated p-value represents the level of significance at which the hypothesis that SN1(x) and SN2(x) have the same distribution cannot be rejected. As n1, n2 → ∞, the p-value for this statistic is given by (15):
$$p = Q\left(D\sqrt{\frac{n_1 n_2}{n_1 + n_2}}\right) \tag{15}$$
where n1 and n2 are the sizes of the two samples, and $Q\left(z\right) = 2\sum_{k=1}^{\infty} \left(-1\right)^{k-1} e^{-2k^{2}z^{2}}$.
The K–S test is more sensitive to variations in both location and form of the empirical cumulative distribution functions of the two samples than the Mann–Whitney U test and Wilcoxon Signed Rank test [37].
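For reference, the two-sample K–S statistic D and its p-value, Equations (14) and (15), can be obtained directly from scipy.stats.ks_2samp; the wrapper below is a hypothetical sketch of how each bias-corrected monthly series could be compared against the observations, with illustrative names.

```python
# Hypothetical wrapper: compare a bias-corrected series against observations
# with the two-sample K-S test (Eqs. (14)-(15)); names are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def evaluate_correction(observed: np.ndarray, corrected: np.ndarray):
    """Return the K-S statistic D (maximum CDF distance) and its p-value."""
    result = ks_2samp(observed, corrected)
    return result.statistic, result.pvalue

# Usage idea: a smaller D and a p-value close to 1 indicate that the corrected
# series is statistically closer to the observed distribution.
```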

4. Results and Discussion

4.1. Evaluation of CMIP5 Global Climate Models Used in the Analysis

The mean bias error (MBE), mean absolute error (MAE), root mean square error (RMSE), Nash–Sutcliffe efficiency (NSE), and correlation coefficient for each model used in the analysis have been calculated for precipitation, maximum temperature, and minimum temperature.
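A compact sketch of how these five skill metrics could be computed from paired simulated and observed series is given below; the function name and dictionary layout are illustrative assumptions.

```python
# Illustrative sketch of the five skill metrics reported in Tables 2-4.
import numpy as np

def model_skill(sim: np.ndarray, obs: np.ndarray) -> dict:
    err = sim - obs
    return {
        "MBE": err.mean(),                                    # mean bias error
        "MAE": np.abs(err).mean(),                            # mean absolute error
        "RMSE": np.sqrt((err ** 2).mean()),                   # root mean square error
        "NSE": 1.0 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum(),  # Nash-Sutcliffe efficiency
        "r": np.corrcoef(sim, obs)[0, 1],                     # correlation coefficient
    }
```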

4.1.1. Precipitation

An analytical exploration into the diverse climate models for precipitation offers an intriguing narrative, as summarized in Table 2. These models, evaluated against several statistical measures, display a range of outcomes that echo the multifaceted reality of precipitation forecasting.
Examining the Mean Bias Error (MBE), it becomes evident that models such as ACCESS1.3 and CMCC-CESM exhibit a stark divergence, with MBE values of 24.894 and 19.867, respectively. This difference underscores the inherent disparities in the models' forecast tendencies, from potential overestimation to underestimation. The Mean Absolute Error (MAE) metric adds another layer of discrepancy: the HadCM3 model, with an MAE of 117.517, contrasts significantly with the MPI-ESM-P model, which has a lower MAE of 70.709. This difference reflects the diverse magnitude of absolute differences between the models' predictions and the observations. The Root Mean Square Error (RMSE) values further delineate the differences in the models' ability to predict precipitation accurately; GFDL-CM3 and HadCM3 show RMSE values of 196.425 and 247.197, respectively, reflecting varying levels of prediction accuracy. The Nash–Sutcliffe Efficiency (NSE) values provide a more discerning evaluation, as models like HadCM3 fall into negative territory (−0.351), indicating a level of prediction accuracy below the mean of the observed data. The correlation coefficients quantify the association between model predictions and observations: a higher correlation, as seen for the GFDL-CM3 model (0.421), indicates closer agreement with the observations than the lower correlation of the HadCM3 model (0.231).
In conclusion, the performance of these diverse climate models for precipitation varies significantly, underlining the complexity of precipitation modeling. These variances, as revealed through the different statistical metrics, contribute to our comprehensive understanding of the climate model performance for precipitation forecasting.

4.1.2. Maximum and Minimum Temperature

An analytical exploration of the diverse climate models for maximum and minimum temperature offers an intriguing narrative, as summarized in Table 3 and Table 4. These models, evaluated against several statistical measures, display a range of outcomes that echo the multifaceted reality of temperature forecasting.

Maximum Temperature

Considering MBE values, the MIROC5 and CMCC-CESM models denote the lowest and highest biases with 0.926 and 1.092, respectively. This disparity reiterates the inherent differences in the models’ ability to minimize systematic errors. However, it is noteworthy that models such as ACCESS1.3 and CNRM-CM5 also present relatively low bias errors of 0.959 and 1.018, respectively, showcasing the varied degree of tendency towards over or underprediction among models.
The MAE values exhibit a similar trend. Here, the MPI-ESM-P and HADGEM2-ES models demonstrate the minimum and maximum absolute deviations, with 2.277 and 2.976, respectively. This distinction provides insight into the absolute magnitude of errors in the predictions. For instance, the GFDL-CM3 model scores 2.353, only slightly higher than the minimum observed for the MPI-ESM-P model. At the other end of the spectrum, HadCM3, with 2.982, stands close to the HADGEM2-ES model, reflecting a higher degree of absolute error.
For RMSE, the values range from 7.659 (INM-CM4 model) to 8.022 (HADGEM2-ES model), indicating diverse degrees of prediction accuracy. In this context, the CNRM-CM5 and MRI-ESM1 models present values of 7.768 and 7.791, respectively, situating them close to the most accurate INM-CM4 model. In terms of NSE, the INM-CM4 model excels with 0.253, while the HADGEM2-ES model lags with 0.181. Additionally, the MRI-ESM1 model's value stands at 0.227, close to the MPI-ESM-P model's performance, suggesting that its prediction accuracy relative to the mean of the observed data is commendable.
Finally, the correlation coefficients showcase the varying degrees of linear relationship between the predicted and observed values. The INM-CM4 model exhibits the strongest correlation (0.515), contrasting with the relatively lower correlation (0.45) of the HADGEM2-ES model. Similarly, the NorESM1_M model exhibits a correlation of 0.481, placing it midway in the spectrum of correlation coefficients among the models.
This analysis underscores the inherent diversity of performance across climate models when forecasting maximum temperature, emphasizing the multifaceted nature of climate modeling and prediction. Through this detailed comparison of climate models, we further appreciate the spectrum of performance characteristics across models, all of which contribute to our overall understanding of climate prediction mechanisms.
Table 3. Statistical summary of the climate models for maximum temperature.

Models | MBE | MAE | RMSE | NSE | Correlation Coefficient
ACCESS1.3 | 0.959 | 2.358 | 7.806 | 0.224 | 0.486
CMCC-CESM | 1.092 | 2.498 | 7.783 | 0.229 | 0.494
CNRM-CM5 | 1.018 | 2.428 | 7.768 | 0.232 | 0.495
GFDL-CM3 | 0.984 | 2.353 | 7.759 | 0.234 | 0.496
HadCM3 | 1.013 | 2.982 | 7.941 | 0.197 | 0.466
HADGEM2-ES | 1.005 | 2.976 | 8.022 | 0.181 | 0.45
INM-CM4 | 0.993 | 2.308 | 7.659 | 0.253 | 0.515
MIROC5 | 0.926 | 2.425 | 7.772 | 0.231 | 0.492
MPI-ESM-P | 1.029 | 2.277 | 7.715 | 0.242 | 0.506
MRI-CGCM3 | 1.034 | 2.64 | 7.855 | 0.215 | 0.479
MRI-ESM1 | 0.978 | 2.506 | 7.791 | 0.227 | 0.49
NorESM1_M | 0.968 | 2.485 | 7.834 | 0.219 | 0.481

Minimum Temperature

Starting with the MBE, we see a span from 0.773 (MIROC5) to 0.914 (CMCC-CESM), an indication of the diverse ability of models to control systematic discrepancies. Further inspection shows models such as ACCESS1.3 and CNRM-CM5 as being not far behind, with MBEs of 0.87 and 0.846, respectively, illustrating the differing propensity towards over or underestimation across models. As for MAE, the variation extends from 1.757 (INM-CM4) to 2.181 (HadCM3), revealing the wide spectrum of absolute errors across the models. In light of this, the GFDL-CM3 model scores an MAE of 1.792, only slightly more than the best-performing INM-CM4 model, whereas the HadCM3 model lags behind, showing higher absolute errors.
Concerning RMSE, the values fluctuate between 6.771 (MRI-CGCM3) and 7.074 (HADGEM2-ES), suggesting disparate levels of model accuracy. Notably, the CNRM-CM5 and MRI-ESM1 models have RMSEs of 6.895 and 6.944, respectively, aligning them more closely with the most accurate model, MRI-CGCM3.
The NSE, another telling metric, sees the highest performance from the MRI-CGCM3 model at 0.229, with the lowest at 0.159 from the HADGEM2-ES model. Here, the MRI-ESM1 model performs admirably with an NSE of 0.189, trailing behind the MPI-ESM-P model, which shows a noticeably higher value. Lastly, the correlation coefficients range from 0.417 (HADGEM2-ES) to 0.497 (MRI-CGCM3), reflecting the varying strengths of linear relationships between the predicted and observed values. Here, the INM-CM4 model shows a strong correlation of 0.462, while the NorESM1_M model shows a somewhat weaker correlation of 0.444.
This assessment elucidates the nuanced performance characteristics across different climate models in the context of minimum temperature prediction. By highlighting these disparities, we enhance our understanding of the unique strengths and limitations of each model, thereby aiding in the more effective utilization of climate prediction tools.
Table 4. Statistical summary of the climate models for minimum temperature.

Models | MBE | MAE | RMSE | NSE | Correlation Coefficient
ACCESS1.3 | 0.87 | 1.693 | 6.862 | 0.208 | 0.472
CMCC-CESM | 0.914 | 1.924 | 6.914 | 0.196 | 0.459
CNRM-CM5 | 0.846 | 1.762 | 6.895 | 0.201 | 0.462
GFDL-CM3 | 0.89 | 1.792 | 6.93 | 0.193 | 0.454
HadCM3 | 0.905 | 2.181 | 7.005 | 0.175 | 0.435
HADGEM2-ES | 0.893 | 2.135 | 7.074 | 0.159 | 0.417
INM-CM4 | 0.893 | 1.757 | 6.9 | 0.2 | 0.462
MIROC5 | 0.773 | 1.895 | 6.908 | 0.198 | 0.456
MPI-ESM-P | 0.847 | 1.818 | 6.819 | 0.218 | 0.482
MRI-CGCM3 | 0.907 | 1.83 | 6.771 | 0.229 | 0.497
MRI-ESM1 | 0.862 | 1.854 | 6.944 | 0.189 | 0.449
NorESM1_M | 0.793 | 1.958 | 6.956 | 0.186 | 0.444

4.2. Statistical Analysis of Kolmogorov–Smirnov Test

The Kolmogorov–Smirnov (K–S) test is a statistical test used to compare the distributions of two datasets and determine if they are significantly different. In the context of analysis for the month of July and for different bias correction methods, the K–S test is being used to assess the effectiveness of bias correction on the simulated climate data, compared to the observed data, for precipitation and maximum and minimum temperature. The results of the K–S test (cumulative distribution plots and K–S test statistics D) for all the CMIP5 models mentioned in Table 1 are given in the Supplementary Data while, for instance, the results of the ACCESS 1.3 model are detailed and explained in this section.

4.2.1. Precipitation

The four bias correction methods were applied to correct the GCM-simulated raw daily precipitation data. To check their performance, the K–S test is used and the bias-corrected outputs are compared with the observed data. Figure 3 illustrates the K–S test applied in July for the observed series with the model-simulated raw data (Figure 3a), the data corrected using LS (Figure 3b), the data corrected by LOCI (Figure 3c), the data corrected by PT (Figure 3d), and the data corrected by DM (Figure 3e). Figure 3 plots the cumulative frequency distributions on the same graph along with the maximum difference D and the p-value.
Table 5 synthesizes the results of D (14) and the level of significance of the cumulative distribution function (p-value) of the observed precipitation data, the model-simulated raw data, and the data corrected using the LS, LOCI, PT, and DM methods.
Table 5 indicates that bias correction improves the model-simulated raw data, reducing the K–S test statistic D and increasing its level of significance, i.e., the p-values. For example, a K–S test statistic D = 0.3871 with a p-value of 0.013 means that the two distribution functions of the observed data and the model-simulated data are different at p = 0.013, i.e., at a 98.7% confidence level. Under the null hypothesis, the two cumulative distribution functions are equal. The K–S test statistic D decreases after the application of the bias correction methods, which shows that they give better results than the raw GCM model data. Table 5 illustrates that most of the D values are improved, i.e., decreased, after applying the bias correction methods.
In the LS bias correction method, all months except January show decreased D values with respect to the uncorrected data, although the p-values remain below 5%. Similarly, after LOCI correction, all months except April show decreased D values. Compared with the LS and LOCI methods, the PT and DM methods give better results. In the PT method, the D value is reduced in all months except April, in which it increases. The DM method is very effective, significantly reducing the D value in all months throughout the year. Its p-values are also near 1, which signifies that the cumulative distributions of the observed data and the DM-corrected data are essentially equal.
The K–S test assessment index D represents the maximum difference between the cumulative distribution functions of the model-simulated data and the observed data in each month, as shown in Figure 4. The line chart (Figure 4) shows that, in each month, D is largest for the model-simulated uncorrected data series with respect to the observed data and that the K–S test results improve after applying all four correction techniques. Figure 4 also reveals that, of the four methods, the DM method gives the best results, as its D values are the lowest.
In general, all four bias correction methods succeeded in reducing the D value relative to the model-simulated raw data. However, the DM method is the most appropriate, as it improves the significance (p-value) to 1 or near 1 in almost all months, which is in agreement with the results and findings of the authors of reference [36].

4.2.2. Maximum and Minimum Temperature (Tmax and Tmin)

Similarly, three bias correction methods were applied to correct the GCM-simulated raw maximum and minimum temperature data and the K–S test is applied to check the performance of the bias-corrected outputs with respect to the observed data. Plots of both the cumulative frequency distributions on the same graph, along with the maximum difference D and p-values, are illustrated in Figure 5 (maximum temperature) and Figure 6 (minimum temperature).
Table 6 gives information on the results of the D value and its level of significance (p-value) regarding the cumulative distribution function of the observed maximum temperature data, the model-simulated data without correction, and the data corrected using the LS, VS, and DM methods.
Table 6 lists the K–S statistic D and the p-values. For the simulated data without correction, 4 out of 12 months give a maximum difference value of 1, and the uncorrected data series shows zero significance throughout the year. After the bias correction methods are applied, the D values in all months decrease and the p-values increase to some extent. In the LS method, although all 12 months have lower D values than the uncorrected data, only 6 months show a significant p-value of more than 0.9. The VS method gives better results than the LS method, improving the p-value to more than 0.9 in 8 months, the exceptions being July, August, November, and December; the months of January and February even reach 100% significance. The DM method gives the best results of all: the D values in all months of the year decrease, and 11 months show a significance value of more than 0.9, the exception being August, with a significance value of 0.778.
In the case of the minimum temperature (Table 7), the months of April and May show larger values than the other months, even without correction. The plot in Figure 6a shows both cumulative frequency distributions on the same graph along with the maximum difference D = 0.0968 and the significance p-value = 0.998. Out of the 12 months, 7 show a significance value of zero in the uncorrected data. In the LS and VS methods, all months show enhanced results, with decreased D values and increased p-values of more than 0.9. The DM method performs better than the LS and VS methods, as 4 months out of 12 show 100% significance, along with lower D values than the uncorrected data.
Similarly, for maximum and minimum temperature, all three correction techniques improved the K–S test assessment index D and significance p-value. The K–S test assessment index D for maximum temperature and minimum temperature are shown in Figure 7 and Figure 8, respectively. Also in this case, based on the statistics estimated by the K–S test, the DM method gives better results in comparison to the LS and VS methods.

4.3. Spatial Variation of Precipitation, Maximum Temperature, and Minimum Temperature

4.3.1. Precipitation

Precipitation shows considerable variation across the Upper Bhima sub-basin. The spatial distribution of average annual precipitation over the study area, for uncorrected output from the ACCESS1-3 CMIP5 GCM model and after applying various bias correction methods, is compared against the observed precipitation data, as shown in Figure 9. Figure 9a–f shows the spatial variation of the model-simulated raw precipitation data, observed precipitation data, LS-corrected, LOCI-corrected, PT-corrected, and DM-corrected data, respectively.
There is a discrepancy in the pattern shown in Figure 9a because of the biases in the model-simulated raw precipitation data, which clearly underestimate precipitation over the whole Upper Bhima sub-basin. The spatial variation of the model-simulated raw data shows only two classes over the watershed, ranging from 200 to 500 mm per year in the Western Ghat region and from 0 to 200 mm per year in the remaining area, which is significantly less than the observed data. After the bias correction methods are applied, there is a remarkable improvement in the model output, as its magnitude approaches the observed values. Figure 9b shows the spatial variation of the observed precipitation data over the Upper Bhima sub-basin, ranging from 200 to 500 mm per year in the central region, which receives the lowest rainfall, and increasing to 2500 to 3000 mm per year in the Western Ghat region.
All the bias correction methods show a significant and similar ability to bring the model output closer to the observed climatological distribution of annual average precipitation over the Upper Bhima sub-basin. Figure 9c–e show the spatial variations of the LS-, LOCI-, and PT-corrected data, which exhibit a similar spatial pattern of precipitation over the Upper Bhima sub-basin. However, Figure 9c–e do not reproduce the lowest rainfall class of 200 to 500 mm per year in the central part of the study area that is seen in the observed data (Figure 9b). The best improvement is seen in Figure 9f, which shows the spatial variation of the DM-corrected precipitation data and largely matches the pattern of the observed data, including all classes throughout the study area. Some central and eastern parts of the study area show the lowest rainfall, which can also be seen in the spatial variation of the observed data (Figure 9b).

4.3.2. Maximum Temperature

The spatial distribution of the average daily maximum temperature over the Upper Bhima sub-basin for the uncorrected output from the ACCESS1-3 CMIP5 GCM model and its corrected datasets, after applying the various bias correction methods, is compared against the observed maximum temperature data, as shown in Figure 10. Figure 10a–e shows the spatial variation of the model-simulated raw maximum temperature data, the observed maximum temperature data, and the LS-corrected, VS-corrected, and DM-corrected datasets, respectively.
Just as the raw precipitation data show a discrepancy in the spatial pattern, the maximum temperature also shows dissimilarity between the spatial patterns of the uncorrected data and the observed data. The only difference is that the model underestimates precipitation, whereas it overestimates temperature throughout the study area.
In the spatial variation of the model-simulated raw data, shown in Figure 10a, there are only three classes in the spatial pattern, ranging from the lowest temperature of 34 °C in the Western Ghat region to the highest maximum temperature of 40 to 42 °C in the eastern part of the study area, which is very high in comparison to the observed data shown in Figure 10b.
After the application of bias correction, there is a comparative improvement in the model output. Figure 10c–e show the spatial variation of the LS-, VS-, and DM-corrected data, which exhibit similar spatial patterns of maximum temperature over the Upper Bhima sub-basin. The overall spatial pattern of the maximum temperature distribution is the same for all three bias correction methods over the Upper Bhima sub-basin.

4.3.3. Minimum Temperature

Likewise, the results of the average daily minimum temperature show a similar pattern and change to the maximum temperature. The spatial distributions of the average daily minimum temperature over the study area for the uncorrected output GCM model are shown in Figure 11. The corrected datasets, after applying various bias correction methods, are compared against the observed minimum temperature. Figure 11a–e shows the spatial variation of the model-simulated raw minimum temperature data, observed minimum temperature data, LS-corrected, VS-corrected, and DM-corrected datasets, respectively.
Similar to the raw precipitation and maximum temperature, the minimum temperature also shows a difference in the spatial patterns of the uncorrected data and observed data. In the case of the spatial variation of the minimum temperature uncorrected data (Figure 11a), this shows conspicuous high-temperature values ranging from 23 to 24 °C in the Western Ghat region. Since these are uncorrected raw model data, they may have some biases and, hence, they show such unacceptable results. After bias correction, the results of the corrected minimum temperature match the pattern of the spatial variation of the observed data. The overall spatial pattern of the minimum temperature distribution is the same for all three bias correction methods over the Upper Bhima sub-basin.

5. Conclusions

The bias correction of GCM-simulated output data is an important step before they can be used to investigate the impacts of climate change at the regional scale, because of the biases present in the model-simulated data. In this study, four bias correction techniques for precipitation, i.e., the Linear Scaling, Local Intensity Scaling, Power Transformation, and Distribution Mapping methods, and three bias correction techniques for temperature, i.e., the Linear Scaling, Variance Scaling, and Distribution Mapping methods, are applied to the CMIP5 ACCESS1-3 model-simulated daily precipitation and temperature data from 1979 to 2005.
For precipitation, the K–S test assessment index D value in each month is maximum in the model-simulated uncorrected data series with respect to the observed data, and the results of the K–S test are improved after applying all four correction techniques. Among all four methods, the DM method gives better results, as the D value is lowest in the DM method each month compared to the other methods.
Spatial variation analysis also reveals that the uncorrected simulated data are not sufficiently accurate for use in climate change studies at the watershed scale because of their coarse spatial resolution and the presence of biases. The spatial variation maps show that the uncorrected model data underestimate the annual average precipitation, as they show the lowest precipitation values compared with the observed and bias-corrected data. The spatial variation patterns of all the methods are similar, but only the DM method matches the observed data. In general, the DM method resulted in greater improvements to the D and p-values in comparison to the other methods. Hence, we conclude that the DM method is a better bias correction method for daily precipitation data than the other methods.
Similarly, for maximum and minimum temperature, all three correction techniques improved the K–S test assessment index D and significance p-value. In this case, based on the statistics estimated by the K–S test, the DM method also gives better results in comparison to the LS and VS methods.
The spatial variation of the corrected maximum and minimum temperature shows a similar pattern of variation to all the methods with the observed data. But the case of the uncorrected simulated data of the minimum temperature shows conspicuous results in the Western Ghat region in the study area. Hence, after overall statistical analysis and study, we conclude that the DM method is a better bias correction method for daily maximum and minimum temperature data in comparison to the LS, LOCI, PT, and VS methods.
Although this work evaluated various bias correction approaches for correcting the CMIP5-model-simulated daily precipitation and temperature data, it did not focus on the efficacy of a specific bias correction method in replicating hydrological extremes. Hence, the future scope of this work could include an assessment of the effectiveness of the bias correction methods for extreme precipitation and temperature events. Future work will also concentrate on the various distribution-based and spatial disaggregation bias correction techniques, which are critical for developing the most effective climate information system for the sub-basin. New bias correction methods may be developed by addressing extreme events, spatial heterogeneity and non-stationarity in biases, ensemble-based bias correction, and multivariate bias correction that considers the interaction between precipitation and temperature, which influences regional climate conditions. The bias correction methods should be combined with a downscaling process in order to improve the spatial resolution and applicability of the CMIP5 model datasets. This would help to improve the reliability of assessments of the impact of climate change at the regional and sub-basin levels.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app13169142/s1, Supplementary data: For precipitation: (a) simulated data without correction, (b) LS corrected, (c) LOCI corrected, (d) PT corrected and (e) DM corrected precipitation data and for maximum and minimum temperature: (a) simulated data without correction, (b) LS corrected, (c) VS corrected and (d) DM corrected temperature data.

Author Contributions

Conceptualization, D.S.L., Y.B.K. and N.D.B.; methodology, D.S.L. and Y.B.K.; software, D.S.L.; validation, D.S.L., Y.B.K. and N.D.B.; formal analysis, D.S.L.; investigation, D.S.L. and Y.B.K.; resources, Y.B.K. and N.D.B.; data curation, D.S.L.; writing—original draft preparation, D.S.L., Y.B.K. and N.D.B.; writing—review and editing, D.S.L., Y.B.K. and N.D.B.; visualization, D.S.L. and N.D.B.; supervision, Y.B.K. and N.D.B.; project administration, D.S.L., Y.B.K. and N.D.B.; funding acquisition, Y.B.K. and N.D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Revadekar, J.V.; Preethi, B. Statistical analysis of the relationship between summer monsoon precipitation extremes and foodgrain yield over India. Int. J. Climatol. 2012, 32, 419–429. [Google Scholar] [CrossRef]
  2. Mpelasoka, F.S.; Chiew, F.H. Influence of rainfall scenario construction methods on runoff projections. J. Hydrometeorol. 2009, 10, 1168–1183. [Google Scholar] [CrossRef]
  3. Kumar, K.R.; Sahai, A.K.; Kumar, K.K.; Patwardhan, S.K.; Mishra, P.K.; Revadekar, J.V.; Kamala, K.; Pant, G.B. High-resolution climate change scenarios for India for the 21st century. Curr. Sci. 2006, 90, 334–345. [Google Scholar]
  4. Teutschbein, C.; Seibert, J. Regional climate models for hydrological impact studies at the catchment scale: A review of recent modeling strategies. Geogr. Compass 2010, 4, 834–860. [Google Scholar]
  5. Luo, M.; Liu, T.; Meng, F.; Duan, Y.; Frankl, A.; Bao, A.; De Maeyer, P. Comparing bias correction methods used in downscaling precipitation and temperature from regional climate models: A case study from the Kaidu River Basin in Western China. Water 2018, 10, 1046. [Google Scholar]
  6. Liu, C.; Allan, R.P. Observed and simulated precipitation responses in wet and dry regions 1850–2100. Environ. Res. Lett. 2013, 8, 034002. [Google Scholar] [CrossRef]
  7. Chadwick, R.; Good, P.; Martin, G.; Rowell, D.P. Large rainfall changes consistently projected over substantial areas of tropical land. Nat. Clim. Change 2016, 6, 177–181. [Google Scholar] [CrossRef]
  8. Daksiya, V.; Mandapaka, P.; Lo, E.Y. A comparative frequency analysis of maximum daily rainfall for a SE Asian region under current and future climate conditions. Adv. Meteorol. 2017, 2017, 1–16. [Google Scholar] [CrossRef] [Green Version]
  9. Valappil, V.K.; Temimi, M.; Weston, M.; Fonseca, R.; Nelli, N.R.; Thota, M.; Kumar, K.N. Assessing Bias correction methods in support of operational weather forecast in arid environment. Asia-Pac. J. Atmos. Sci. 2020, 56, 333–347. [Google Scholar] [CrossRef]
  10. Shin, S.W.; Kim, T.J.; Kim, J.U.; Goo, T.Y.; Byun, Y.H. Application of bias-and variance-corrected SST on wintertime precipitation simulation of regional climate model over East Asian Region. Asia-Pac. J. Atmos. Sci. 2021, 57, 387–404. [Google Scholar] [CrossRef]
  11. Teutschbein, C.; Seibert, J. Bias correction of regional climate model simulations for hydrological climate-change impact studies: Review and evaluation of different methods. J. Hydrol. 2012, 456, 12–29. [Google Scholar] [CrossRef]
  12. Ghimire, U.; Srinivasan, G.; Agarwal, A. Assessment of rainfall bias correction techniques for improved hydrological simulation. Int. J. Climatol. 2019, 39, 2386–2399. [Google Scholar] [CrossRef]
  13. Wood, A.W.; Leung, L.R.; Sridhar, V.; Lettenmaier, D.P. Hydrologic implications of dynamical and statistical approaches to downscaling climate model outputs. Clim. Change 2004, 62, 189–216. [Google Scholar] [CrossRef]
  14. Parry, M.L.; Canziani, O.; Palutikof, J.; Van der Linden, P.; Hanson, C. (Eds.) Climate Change 2007—Impacts, Adaptation and Vulnerability: Working Group II Contribution to the Fourth Assessment Report of the IPCC; Cambridge University Press: Cambridge, UK, 2007; Volume 4. [Google Scholar]
  15. Hawkins, E.; Osborne, T.M.; Ho, C.K.; Challinor, A.J. Calibration and bias correction of climate projections for crop modelling: An idealised case study over Europe. Agric. For. Meteorol. 2013, 170, 19–31. [Google Scholar] [CrossRef]
  16. Seaby, L.P.; Refsgaard, J.C.; Sonnenborg, T.O.; Højberg, A.L. Spatial uncertainty in bias corrected climate change projections and hydrogeological impacts. Hydrol. Process. 2015, 29, 4514–4532. [Google Scholar] [CrossRef]
  17. Iizumi, T.; Takikawa, H.; Hirabayashi, Y.; Hanasaki, N.; Nishimori, M. Contributions of different bias-correction methods and reference meteorological forcing data sets to uncertainty in projected temperature and precipitation extremes. J. Geophys. Res. Atmos. 2017, 122, 7800–7819. [Google Scholar] [CrossRef] [Green Version]
  18. Zollo, A.L.; Rianna, G.; Mercogliano, P.; Tommasi, P.; Comegna, L. Validation of a simulation chain to assess climate change impact on precipitation induced landslides. In Landslide Science for a Safer Geoenvironment: Vol. 1: The International Programme on Landslides (IPL); Springer International Publishing: New York, NY, USA, 2014; pp. 287–292. [Google Scholar]
  19. Crochemore, L.; Ramos, M.H.; Pappenberger, F. Bias correcting precipitation forecasts to improve the skill of seasonal streamflow forecasts. Hydrol. Earth Syst. Sci. 2016, 20, 3601–3618. [Google Scholar] [CrossRef] [Green Version]
  20. Schmidli, J.; Frei, C.; Vidale, P.L. Downscaling from GCM precipitation: A benchmark for dynamical and statistical downscaling methods. Int. J. Climatol. J. R. Meteorol. Soc. 2006, 26, 679–689. [Google Scholar] [CrossRef]
  21. Lenderink, G.; Buishand, A.; Van Deursen, W. Estimates of future discharges of the river Rhine using two scenario methodologies: Direct versus delta approach. Hydrol. Earth Syst. Sci. 2007, 11, 1145–1159. [Google Scholar] [CrossRef]
  22. Chen, J.; Brissette, F.P.; Chaumont, D.; Braun, M. Performance and uncertainty evaluation of empirical downscaling methods in quantifying the climate change impacts on hydrology over two North American river basins. J. Hydrol. 2013, 479, 200–214. [Google Scholar] [CrossRef]
  23. White, R.H.; Toumi, R. The limitations of bias correcting regional climate model inputs. Geophys. Res. Lett. 2013, 40, 2907–2912. [Google Scholar] [CrossRef]
  24. Pavelic, P.; Patankar, U.; Acharya, S.; Jella, K.; Gumma, M.K. Role of groundwater in buffering irrigation production against climate variability at the basin scale in South-West India. Agric. Water Manag. 2012, 103, 78–87. [Google Scholar] [CrossRef]
  25. Trenberth, K.E.; Jones, P.D.; Ambenje, P.; Bojariu, R.; Easterling, D.; Klein Tank, A.; Rusticucci, M. Climate change 2007: The physical science basis. Clim. Change 2007, 6, 235–336. [Google Scholar]
  26. Fuka, D.R.; Walter, M.T.; MacAlister, C.; Degaetano, A.T.; Steenhuis, T.S.; Easton, Z.M. Using the Climate Forecast System Reanalysis as weather input data for watershed models. Hydrol. Process. 2014, 28, 5613–5623. [Google Scholar] [CrossRef]
  27. Piani, C.; Weedon, G.P.; Best, M.; Gomes, S.M.; Viterbo, P.; Hagemann, S.; Haerter, J.O. Statistical bias correction of global simulated daily precipitation and temperature for the application of hydrological models. J. Hydrol. 2010, 395, 199–215. [Google Scholar] [CrossRef]
  28. Piani, C.; Haerter, J.O.; Coppola, E. Statistical bias correction for daily precipitation in regional climate models over Europe. Theor. Appl. Climatol. 2010, 99, 187–192. [Google Scholar] [CrossRef] [Green Version]
  29. Maraun, D.; Wetterhall, F.; Ireson, A.M.; Chandler, R.E.; Kendon, E.J.; Widmann, M.; Brienen, S.; Rust, H.W.; Sauter, T.; Themeßl, M.; et al. Precipitation downscaling under climate change: Recent developments to bridge the gap between dynamical models and the end user. Rev. Geophys. 2010, 48. [Google Scholar] [CrossRef] [Green Version]
  30. Terink, W.; Hurkmans, R.T.W.L.; Torfs, P.J.J.F.; Uijlenhoet, R. Evaluation of a bias correction method applied to downscaled precipitation and temperature reanalysis data for the Rhine basin. Hydrol. Earth Syst. Sci. 2010, 14, 687–703. [Google Scholar] [CrossRef] [Green Version]
  31. Themeßl, M.J.; Gobiet, A.; Heinrich, G. Empirical-statistical downscaling and error correction of regional climate models and its impact on the climate change signal. Clim. Change 2012, 112, 449–468. [Google Scholar] [CrossRef]
  32. Thom, H.C. A note on the gamma distribution. Mon. Weather Rev. 1958, 86, 117–122. [Google Scholar] [CrossRef]
  33. Kim, P.J. On the exact and approximate sampling distribution of the two sample Kolmogorov-Smirnov criterion dmn, m ≤ n. J. Am. Stat. Assoc. 1969, 64, 1625–1637. [Google Scholar] [CrossRef]
  34. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical recipes in C; Cambridge University: Cambridge, UK, 1992. [Google Scholar]
  35. Rishma, C.; Katpatal, Y.B. ENSO modulated groundwater variations in a river basin of Central India. Hydrol. Res. 2019, 50, 793–806. [Google Scholar] [CrossRef] [Green Version]
  36. Tschöke, G.V.; Kruk, N.S.; de Queiroz, P.I.B.; Chou, S.C.; de Sousa Junior, W.C. Comparison of two bias correction methods for precipitation simulated with a regional climate model. Theor. Appl. Climatol. 2017, 127, 841–852. [Google Scholar] [CrossRef]
  37. Huang, X. A Statistical, Data-Driven Assessment of Climate Extremes and Trends for the Continental US; Louisiana State University and Agricultural & Mechanical College: Baton Rouge, LA, USA, 2016. [Google Scholar]
Figure 1. Location map (left) and DEM (right) of the Upper Bhima sub-basin.
Figure 2. Methodology adopted for the study.
Figure 3. Illustration of the K–S test for the month of July for the (a) simulated data without correction, and the (b) LS-corrected, (c) LOCI-corrected, (d) PT-corrected, and (e) DM-corrected precipitation data.
Figure 4. K–S test statistic D for bias correction methods with respect to observed precipitation data.
Figure 4. K–S test statistic D for bias correction methods with respect to observed precipitation data.
Applsci 13 09142 g004
Figure 5. Illustration of the K–S test for the month of July for the (a) simulated data without correction, (b) LS-corrected, (c) VS-corrected, and (d) DM-corrected maximum temperature data.
Figure 5. Illustration of the K–S test for the month of July for the (a) simulated data without correction, (b) LS-corrected, (c) VS-corrected, and (d) DM-corrected maximum temperature data.
Applsci 13 09142 g005
Figure 6. Illustration of the K–S test for the month of July for (a) simulated data without correction, (b) LS-corrected, (c) VS-corrected, and (d) DM-corrected minimum temperature data.
Figure 6. Illustration of the K–S test for the month of July for (a) simulated data without correction, (b) LS-corrected, (c) VS-corrected, and (d) DM-corrected minimum temperature data.
Applsci 13 09142 g006
Figure 7. K–S test statistic D for bias correction methods with respect to observed daily maximum temperature data.
Figure 7. K–S test statistic D for bias correction methods with respect to observed daily maximum temperature data.
Applsci 13 09142 g007
Figure 8. K–S test statistic D for bias correction methods with respect to observed daily minimum temperature data.
Figure 8. K–S test statistic D for bias correction methods with respect to observed daily minimum temperature data.
Applsci 13 09142 g008
Figure 9. Spatial variation of annual average precipitation (mm/year) for the period from 1979 to 2005: (a) model-simulated raw data, (b) observed data, (c) LS-corrected, (d) LOCI-corrected, (e) PT method, and (f) DM method.
Figure 9. Spatial variation of annual average precipitation (mm/year) for the period from 1979 to 2005: (a) model-simulated raw data, (b) observed data, (c) LS-corrected, (d) LOCI-corrected, (e) PT method, and (f) DM method.
Applsci 13 09142 g009
Figure 10. Spatial variation of average daily maximum temperature (°C) for (a) model-simulated raw data, (b) observed data, (c) LS-corrected, (d) VS-corrected, and (e) DM method.
Figure 10. Spatial variation of average daily maximum temperature (°C) for (a) model-simulated raw data, (b) observed data, (c) LS-corrected, (d) VS-corrected, and (e) DM method.
Applsci 13 09142 g010
Figure 11. Spatial variation of average daily minimum temperature (°C) for (a) model-simulated raw data, (b) observed data, (c) LS-corrected, (d) VS-corrected, and (e) DM method.
Figure 11. Spatial variation of average daily minimum temperature (°C) for (a) model-simulated raw data, (b) observed data, (c) LS-corrected, (d) VS-corrected, and (e) DM method.
Applsci 13 09142 g011
Table 1. Detailed information on the 12 CMIP5 model datasets used.
Model Name | Country | Modeling Center | Resolution
ACCESS1.3 | Australia | Australian Community Climate and Earth System Simulator Coupled Model | 1.875° × 1.25°
CMCC-CESM | Italy | Euro-Mediterranean Center on Climate Change | 1.875° × 2.5°
CNRM-CM5 | France | Centre National de Recherches Météorologiques | 1.4° × 1.4°
GFDL-CM3 | United States | Geophysical Fluid Dynamics Laboratory | 2.0° × 2.5°
HadCM3 | United Kingdom | Met Office Hadley Centre | 3.75° × 2.5°
HADGEM2-ES | United Kingdom | Met Office Hadley Centre and the Climatic Research Unit | 1.875° × 1.875°
INM-CM4 | Russia | Institute of Numerical Mathematics (INM) of the Russian Academy of Sciences | 2.8° × 2.8°
MIROC5 | Japan | Japan Agency for Marine-Earth Science and Technology (JAMSTEC), the National Institute for Environmental Studies (NIES), and the University of Tokyo | 1.4° × 1.4°
MPI-ESM-P | Germany | Max Planck Institute for Meteorology (MPI-M) | 1.9° × 1.9°
MRI-CGCM3 | Japan | Meteorological Research Institute (MRI), under the Japan Meteorological Agency (JMA) | 1.125° × 1.125°
MRI-ESM1 | Japan | Meteorological Research Institute (MRI), under the Japan Meteorological Agency (JMA) | 1.125° × 1.875°
NorESM1_M | Norway | Norwegian Climate Centre, a collaboration including the Norwegian Meteorological Institute and the Bjerknes Centre for Climate Research | 1.9° × 2.5°
Table 2. Statistical summary of the climate models for precipitation.
Models | MBE | MAE | RMSE | NSE | Correlation Coefficient
ACCESS1.3 | 24.894 | 81.325 | 204.212 | 0.078 | 0.372
CMCC-CESM | 19.867 | 82.018 | 207.444 | 0.048 | 0.340
CNRM-CM5 | 21.924 | 75.804 | 203.435 | 0.085 | 0.352
GFDL-CM3 | 24.625 | 74.052 | 196.425 | 0.147 | 0.421
HadCM3 | 23.190 | 117.517 | 247.197 | −0.351 | 0.231
HADGEM2-ES | 23.149 | 105.925 | 235.032 | −0.222 | 0.253
INM-CM4 | 24.282 | 81.326 | 207.357 | 0.049 | 0.360
MIROC5 | 23.725 | 84.115 | 206.778 | 0.105 | 0.345
MPI-ESM-P | 22.116 | 70.709 | 196.853 | 0.143 | 0.408
MRI-CGCM3 | 22.397 | 82.868 | 203.073 | 0.088 | 0.379
MRI-ESM1 | 24.207 | 89.007 | 223.448 | −0.104 | 0.237
NorESM1_M | 22.194 | 78.137 | 201.475 | 0.102 | 0.385
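For context, the error measures summarised in Table 2 follow their conventional definitions: MBE is the mean of the simulated-minus-observed differences, MAE their mean absolute value, RMSE the root of their mean square, NSE the Nash–Sutcliffe efficiency, and the correlation coefficient is Pearson's r. The Python sketch below is an illustration only; it is not the authors' code, and obs and sim are placeholder arrays of paired daily precipitation values.

import numpy as np

def summary_stats(obs, sim):
    # obs, sim: paired 1-D arrays of observed and model-simulated values
    # (placeholder names; pairing and units follow the source data)
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    err = sim - obs
    mbe = err.mean()                                  # mean bias error
    mae = np.abs(err).mean()                          # mean absolute error
    rmse = np.sqrt((err ** 2).mean())                 # root mean square error
    nse = 1.0 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()  # Nash-Sutcliffe efficiency
    r = np.corrcoef(obs, sim)[0, 1]                   # Pearson correlation coefficient
    return {"MBE": mbe, "MAE": mae, "RMSE": rmse, "NSE": nse, "r": r}

# Illustration with synthetic series only (not the study data)
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=1000)                  # stand-in observed series
sim = obs + rng.normal(2.0, 5.0, size=1000)           # stand-in raw model series
print(summary_stats(obs, sim))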
Table 5. The K–S test statistic D and its level of significance p for the cumulative distribution functions of the model-simulated data without correction and the data corrected using the LS, LOCI, PT, and DM methods, with respect to the observed precipitation data.
Month | No Correction (D, p) | Linear Scaling (D, p) | Local Intensity Scaling (D, p) | Power Transformation (D, p) | Distribution Mapping (D, p)
January | 0.3871, 0.013 | 0.4839, 0.001 | 0.2258, 0.363 | 0.1613, 0.778 | 0.1290, 0.944
February | 0.4286, 0.008 | 0.2857, 0.169 | 0.1071, 0.995 | 0.1429, 0.917 | 0.1429, 0.917
March | 0.1613, 0.778 | 0.1290, 0.944 | 0.0968, 0.998 | 0.1290, 0.944 | 0.1290, 0.944
April | 0.2000, 0.537 | 0.2333, 0.342 | 0.2333, 0.342 | 0.2333, 0.342 | 0.1000, 0.997
May | 0.3548, 0.030 | 0.1613, 0.778 | 0.1613, 0.778 | 0.1613, 0.778 | 0.0968, 0.998
June | 0.9000, 0 | 0.2000, 0.537 | 0.2000, 0.537 | 0.3000, 0.109 | 0.0667, 1
July | 0.7419, 0 | 0.2258, 0.363 | 0.2258, 0.363 | 0.1290, 0.944 | 0.0968, 0.998
August | 0.5806, 0 | 0.1613, 0.772 | 0.1613, 0.778 | 0.0968, 0.998 | 0.1290, 0.944
September | 0.8000, 0 | 0.4000, 0.011 | 0.4000, 0.011 | 0.4000, 0.010 | 0.1000, 0.997
October | 0.4839, 0.001 | 0.1935, 0.559 | 0.1613, 0.778 | 0.1613, 0.778 | 0.0645, 1
November | 0.3333, 0.055 | 0.1667, 0.760 | 0.1667, 0.760 | 0.2000, 0.537 | 0.1333, 0.936
December | 0.3548, 0.030 | 0.2581, 0.216 | 0.1935, 0.559 | 0.0968, 0.998 | 0.1429, 0.917
Table 6. The K–S test statistic D and level of significance for the cumulative distribution functions of the model-simulated data without correction and data corrected using the LS, VS, and DM methods with respect to the observed daily maximum temperature data.
Month | No Correction (D, p) | Linear Scaling (D, p) | Variance Scaling (D, p) | Distribution Mapping (D, p)
January | 0.6774, 0.0 | 0.1613, 0.7780 | 0.0645, 1.0000 | 0.0968, 0.9980
February | 0.6071, 0.0 | 0.1786, 0.7200 | 0.0714, 1.0000 | 0.1071, 0.9950
March | 0.6129, 0.0 | 0.0968, 0.9980 | 0.0968, 0.9980 | 0.0968, 0.9980
April | 0.7000, 0.0 | 0.1333, 0.9360 | 0.1000, 0.9970 | 0.1000, 0.9970
May | 0.6129, 0.0 | 0.0968, 0.9980 | 0.1290, 0.9440 | 0.1290, 0.9440
June | 0.8000, 0.0 | 0.3000, 0.1090 | 0.1333, 0.9360 | 0.1000, 0.9970
July | 1.0000, 0.0 | 0.1290, 0.9440 | 0.1935, 0.5590 | 0.1290, 0.9440
August | 1.0000, 0.0 | 0.1935, 0.5590 | 0.1935, 0.5590 | 0.1613, 0.7780
September | 1.0000, 0.0 | 0.1000, 0.9970 | 0.1000, 0.9970 | 0.1000, 0.9970
October | 1.0000, 0.0 | 0.0968, 0.9980 | 0.1290, 0.9440 | 0.0968, 0.9980
November | 0.9667, 0.0 | 0.1667, 0.7600 | 0.2000, 0.5370 | 0.1000, 0.9970
December | 0.9032, 0.0 | 0.2258, 0.3630 | 0.1935, 0.5590 | 0.0968, 0.9980
Table 7. The K–S test statistic D and level of significance for the cumulative distribution functions of the model-simulated data without correction and data corrected using the LS, VS, and DM methods with respect to the observed daily minimum temperature data.
Month | No Correction (D, p) | Linear Scaling (D, p) | Variance Scaling (D, p) | Distribution Mapping (D, p)
January | 0.5161, 0 | 0.1290, 0.9440 | 0.1935, 0.5590 | 0.1290, 0.9440
February | 0.4286, 0.008 | 0.1429, 0.9170 | 0.1429, 0.9170 | 0.1071, 0.9950
March | 0.2581, 0.216 | 0.0645, 1.0000 | 0.1290, 0.9440 | 0.0645, 1.0000
April | 0.0667, 1 | 0.1000, 0.9970 | 0.1000, 0.9970 | 0.1000, 0.9970
May | 0.0968, 0.998 | 0.0968, 0.9980 | 0.0968, 0.9980 | 0.0645, 1.0000
June | 0.3667, 0.026 | 0.1000, 0.9970 | 0.1333, 0.9360 | 0.0667, 1.0000
July | 1.0000, 0 | 0.0645, 1.0000 | 0.0968, 0.9980 | 0.0645, 1.0000
August | 1.0000, 0 | 0.1290, 0.9440 | 0.1290, 0.9440 | 0.1613, 0.7780
September | 1.0000, 0 | 0.1333, 0.9360 | 0.1333, 0.9360 | 0.1000, 0.9970
October | 0.8710, 0 | 0.1290, 0.9440 | 0.1613, 0.7780 | 0.1290, 0.9440
November | 0.8000, 0 | 0.1333, 0.9360 | 0.1333, 0.9360 | 0.1333, 0.9360
December | 0.7097, 0 | 0.0968, 0.9980 | 0.2581, 0.2160 | 0.0968, 0.9980
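Tables 5–7 report the two-sample Kolmogorov–Smirnov statistic D and its significance level p, evaluated month by month between the observed series and the uncorrected or bias-corrected series. The following Python sketch shows one way such a monthly comparison could be computed with SciPy's ks_2samp; the grouping by calendar month, the variable names, and the synthetic data are assumptions for illustration, not the authors' routine or the study data.

import numpy as np
from scipy.stats import ks_2samp

def monthly_ks(obs, corrected, month_index, month):
    # Two-sample K-S test between observed and corrected values for one
    # calendar month; month_index holds the month (1-12) of each entry.
    mask = month_index == month
    result = ks_2samp(obs[mask], corrected[mask])
    return result.statistic, result.pvalue

# Illustration with synthetic July data (assumed structure only)
rng = np.random.default_rng(1)
month_index = np.full(31, 7)                      # 31 daily entries for July
obs = rng.gamma(2.0, 8.0, size=31)                # stand-in observed precipitation
corrected = obs * rng.normal(1.0, 0.1, size=31)   # stand-in bias-corrected series
D, p = monthly_ks(obs, corrected, month_index, month=7)
print(f"July: D = {D:.4f}, p = {p:.3f}")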