Article

How Efficient Is Model-to-Model Data Assimilation at Mitigating Atmospheric Forcing Errors in a Regional Ocean Model?

by Georgy I. Shapiro 1,* and Mohammed Salim 2
1 School of Biological and Marine Sciences, University of Plymouth, Drake Circus, Plymouth PL4 8AA, UK
2 School of Ocean Sciences, Bangor University, Bangor LL57 2DG, UK
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2023, 11(5), 935; https://doi.org/10.3390/jmse11050935
Submission received: 31 March 2023 / Revised: 25 April 2023 / Accepted: 26 April 2023 / Published: 27 April 2023
(This article belongs to the Special Issue Frontiers in Physical Oceanography)

Abstract

This paper examines the efficiency of the recently developed Nesting with Data Assimilation (NDA) method at mitigating errors in the heat and momentum fluxes at the ocean surface coming from external forcing. The analysis uses a set of 19 numerical simulations, all using the same ocean model and exactly the same NDA process. One simulation (the reference) uses the original atmospheric data, and the other eighteen simulations are performed with intentionally introduced perturbations in the atmospheric forcing. The NDA algorithm uses model-to-model data assimilation instead of assimilating observations directly; it therefore requires a good quality, albeit coarser resolution, data-assimilating parent model. All experiments are carried out in the South East Arabian Sea. The variables under study are sea surface temperature, kinetic energy, relative vorticity, and enstrophy. The results show significant improvements in bias, root-mean-square error, and correlation coefficient between the reference and the perturbed models when they are run in the data-assimilating configuration. Residual post-assimilation uncertainties are similar to or lower than the uncertainties of satellite-based observations. Varying the length of the DA cycle within the range from 1 to 8 days has little effect on the accuracy of the results.

1. Introduction

In recent years, improvements in ocean models and the use of more powerful computers have allowed ocean predictive models to be run at sub-mesoscale resolutions and to better represent smaller-scale ocean features such as eddies, fronts, and filaments, see, e.g., [1] and references therein. Regional ocean forecasting models are usually driven by a continuous supply of time-varying external data, such as atmospheric forcing and lateral boundary conditions. These external data are produced by mathematical models and are available only with a certain degree of uncertainty [2,3,4,5]. In addition to uncertainties in the time-varying external data, errors in ocean models come from a range of other sources, such as the parameterisation of sub-grid oceanic processes, approximations used in numerical schemes, initial conditions, and bathymetry.
The downward solar radiation at the sea surface is an essential component of the total heat exchange between the atmosphere and the ocean. Zhang et al. [2] evaluated errors in the shortwave radiation (SWR) from various reanalysis products using satellite and in situ observations. They found that all the reanalyses tend to overestimate the SWR, with global (including both land and ocean) mean biases between the reanalysis and surface measurements ranging from 11.25 W/m2 to 49.80 W/m2. The best performance was demonstrated by ERA-Interim, with a yearly mean bias over the global ocean of −2.85 W/m2 and a root-mean-squared error (RMSE) of 21.98 W/m2 [2]. The RMSE is quite large relative to the yearly and globally averaged SWR over the ocean of 180.0 W/m2. The correlation coefficient between ERA-Interim and surface observations was R = 0.97, showing that the spatial pattern provided by the reanalysis is close to the observations. The uncertainties in the wind data from 15 global and regional reanalysis products were evaluated by [5]. The wind data are shown to be sufficiently reliable at offshore locations, with a median bias (MB) of −0.1 m/s and a median correlation coefficient CORR = 0.89. However, the RMSE ranged from 1 m/s to 3.2 m/s, with a median value of 1.35 m/s.
The effect of uncertainties coming from the lateral boundaries was studied by [6]. They showed that a data assimilating open boundary condition could reduce the system bias by at least 50%, although this was not enough to constrain the mesoscale circulation in the study area. Ocean models are particularly sensitive to the wind stress and heat fluxes at the ocean surface, as well as to the horizontal resolution of the atmospheric data [7,8]. The uncertainties in ocean model output arising from errors in the atmospheric forcing have been investigated in a number of studies, e.g., [9,10], typically by perturbing the surface atmospheric fluxes in an ensemble of experiments.
While the study of errors in the atmospheric forcing and of the sensitivity of ocean model output to uncertainty in the external input data is important for understanding the propagation of errors in ocean predictions, a practical ocean forecast would benefit from an efficient method of mitigating such errors. The usual way to improve an ocean model forecast is to use one of the many DA methods, which vary in complexity, effectiveness, and computational cost [11,12,13]. The DA methods have both advantages and disadvantages. For example, the assimilation of only SST in an operational model degrades the temperature/salinity profiles [14]. Data assimilation is deemed to be unsuitable for the study of derived quantities that are not constrained by observations, i.e., quantities for which observations are not available [15]. Recent research shows that ‘when only SST was assimilated, it had a negative effect on the subsurface layer of most regions, except for some regions in high-latitude regions’ [16]. Therefore, the improvement provided by a specific DA scheme cannot be taken for granted and should be assessed for each DA method.
This paper examines the efficiency of a recently developed model-to-model DA method in mitigating errors in the heat and momentum fluxes at the ocean surface. The analysis uses a set of 19 numerical simulations, all using the same ocean model and exactly the same DA process. One simulation (the reference) uses the best available atmospheric data, and the other eighteen simulations are performed with various perturbations in the atmospheric forcing. A perfect DA would rectify all the consequences of errors in the atmospheric forcing, so that all models would produce identical results. The study evaluates how close the NDA algorithm comes to this ideal case. The variables under study are temperature, kinetic energy, relative vorticity, and enstrophy at the sea surface. The surface layer is selected as it is the most strongly influenced by the atmospheric forcing. The simulations are carried out in the dynamically active part of the South East Arabian Sea, also known as the Lakshadweep Sea.

2. Materials and Methods

The numerical experiments are conducted in the Lakshadweep Sea within the area 7.5–14.5° N, 68–78° E; see Figure 1. This part of the South East Arabian Sea provides an important supply of protein to more than 34 million people of the Indian state of Kerala. An accurate description of the physical environment in the area is therefore important for supporting the livelihoods of the local population.
All experiments are based on different configurations of the same regional ocean circulation model, LD20. The details of the LD20 model design, configuration, set-up, operation, DA, and validation against satellite and in situ observations are given in [17]. A summary of the main characteristics of the model is given below. The LD20 model uses the NEMO v3.6 codebase [18] and is driven by atmospheric forcing produced by the UK Met Office Unified Model [19]. The lateral open boundary conditions are taken from the global ocean model available via the Copernicus Marine Service product GLOBAL_REANALYSIS_PHY_001_030-TDS [20], with the addition of tidal components from the global tidal model TPXO 7.2 [21]. The parent global model has a horizontal resolution of 1/12 degree and 50 depth levels, and it assimilates observations of sea surface temperature (SST), sea surface height, and in situ temperature and salinity profiles. The LD20 model has a horizontal resolution of 1/20 degree and 50 depth levels, which differ from the levels of the parent model in order to better represent the dynamics of the study area. The bathymetry for LD20 is adopted from GEBCO [22].
The LD20 model is run in two configurations, with and without DA. The DA configuration employs a model-to-model DA process based on the Nesting with Data Assimilation (NDA) algorithm [23]. This algorithm assimilates observations indirectly, by taking 3D data sets from the same parent model that is used to provide the 2D lateral boundary conditions. The coarser resolution parent model therefore acts as an intermediate processing layer in the NDA algorithm, which spreads actual observations across the coarse grid. The data taken from the parent model are the profiles of temperature, salinity, and the u- and v-components of velocity at each parent model grid point. Commonly used DA processes are known to introduce some imbalance between the density and velocity fields, so an additional geostrophic balancing step is required (see, for example, [24]). The data taken from the parent model are constrained by the equations of motion and therefore do not require dynamic balancing. The water column data from the parent model are also more abundant than observations, and, hence, the disparity between the number of ‘observational’ and model data points is reduced in the NDA algorithm. In the data assimilating configurations, the DA is carried out every 5 days at 00Z.
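The assimilation step can be thought of as relaxing the high-resolution background field towards these parent-model ‘pseudo-observations’. The sketch below is a minimal conceptual illustration of that idea only: the Gaussian distance weighting, the scalar gain, and all variable names are assumptions made for this example and do not reproduce the actual NDA stochastic downscaling of [23].

```python
# Conceptual sketch only: nudge a child-model field towards pseudo-observations taken
# from a coarser, data-assimilating parent model. The Gaussian weights and the scalar
# gain are illustrative assumptions, not the weights of the NDA downscaling step [23].
import numpy as np

def model_to_model_update(child_field, child_lon, child_lat,
                          parent_vals, parent_lon, parent_lat,
                          radius_deg=0.25, gain=0.5):
    """Relax each child grid node towards a distance-weighted parent-model value."""
    analysis = child_field.copy()
    for j, lat in enumerate(child_lat):
        for i, lon in enumerate(child_lon):
            d2 = (parent_lon - lon) ** 2 + (parent_lat - lat) ** 2   # squared distance (deg^2)
            w = np.exp(-d2 / (2.0 * radius_deg ** 2))                # illustrative Gaussian weights
            pseudo_obs = np.sum(w * parent_vals) / np.sum(w)
            analysis[j, i] += gain * (pseudo_obs - child_field[j, i])
    return analysis

# toy 1-degree patch: a 1/20-degree child grid nested in a 1/12-degree parent grid
child_lon = np.arange(69.0, 70.0, 1 / 20)
child_lat = np.arange(8.0, 9.0, 1 / 20)
plon, plat = np.meshgrid(np.arange(69.0, 70.0, 1 / 12), np.arange(8.0, 9.0, 1 / 12))
parent_sst = 28.0 + 0.5 * np.sin(plon) * np.cos(plat)          # synthetic parent SST
background = np.full((child_lat.size, child_lon.size), 27.5)   # synthetic child SST
analysis = model_to_model_update(background, child_lon, child_lat,
                                 parent_sst.ravel(), plon.ravel(), plat.ravel())
```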
The base model LD20_DA was run in the DA configuration with undisturbed forcing from 1 January 2015 to 30 December 2016. The atmospheric forcing is provided at 1 h intervals for wind and 3 h intervals for SWR. The models with disturbed forcing were run for one year, from 1 January 2016 to 30 December 2016, in both DA and noDA configurations. It took approximately 2–3 weeks for the models with disturbed forcing to spin up; hence, the model results for January were excluded from further analysis.
The set of perturbed atmospheric forcing consists of the nine versions listed in Table 1. The SWR is modified in versions v1–v5 and v9, while the components of the wind at 10 m above the surface are modified in versions v6–v8. The amplitudes of the disturbances for both solar radiation and wind speed were taken to be within the range of the actual uncertainty of these variables, as estimated by Zhang et al. [2] and Gualtieri [5]. The solar radiation is modified by a multiplicative factor F, which is uniform across the domain but may vary randomly with time. The wind components are modified by adding a term A to the u-component and a term B to the v-component; these terms are uniform across the domain but vary randomly in time.
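For illustration, the following sketch generates daily perturbation parameters of the forms listed in Table 1; the function and variable names are our own, and the handling of the actual forcing files (done in the model pre-processing) is omitted.

```python
# Illustrative generation of daily perturbation parameters following Table 1:
# F multiplies the downward shortwave radiation; A and B (m/s) are added to the
# 10 m wind u- and v-components. For the wind versions (v6-v8) the SWR factor is
# left at 1 here, since the text states that SWR is modified only in v1-v5 and v9.
import numpy as np

rng = np.random.default_rng(0)

def perturbation_series(version, ndays=365):
    F = np.ones(ndays)     # multiplicative SWR factor
    A = np.zeros(ndays)    # additive u-wind term (m/s)
    B = np.zeros(ndays)    # additive v-wind term (m/s)
    if version == "v1":
        F[:] = 1.10
    elif version == "v2":
        F = 1.05 + 0.05 * rng.random(ndays)          # persistent part + uniform random part
    elif version == "v3":
        F[:] = 1.05
    elif version == "v4":
        F[:] = 0.95
    elif version == "v5":
        F[:] = 1.20
    elif version == "v6":
        A = 2.0 * rng.standard_normal(ndays)         # normal random offset, new value each day
    elif version == "v7":
        B = 2.0 * rng.standard_normal(ndays)
    elif version == "v8":
        A = 2.0 * rng.standard_normal(ndays)
        B = 2.0 * rng.standard_normal(ndays)
    elif version == "v9":
        F = 1.0 + 0.20 * rng.standard_normal(ndays)  # symmetric random SWR factor
    return F, A, B

# the daily values are applied uniformly over the domain, e.g.
#   SWR'(t, x, y) = F(t) * SWR(t, x, y);  u10'(t, x, y) = u10(t, x, y) + A(t)
```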
The errors in the model output in versions v1–v9 without DA and their reduction by the DA process are assessed by comparison with the validated LD20_DA model run with the original undisturbed forcing. The field variables computed at the sea surface and used for assessing the errors are temperature, kinetic energy, relative vorticity, and enstrophy. The quality of the models in representing large-scale ocean features is assessed by calculating area mean deviations (i.e., biases) and their evolution throughout the period of study. Higher resolution ocean models are known to exhibit spatial shifts in representing smaller scale features, leading to the so-called ‘double penalty effect’ [25,26]. The model skill in representing meso- and sub-mesoscale structures is assessed by computing the RMSE and the pixel-to-pixel correlation coefficient (CORR) for each day. The usefulness of RMSE and CORR for assessing the double penalty effect can be illustrated by the following simple example.
Let us consider a street of cyclonic and anticyclonic eddies (Figure 2a) represented by the equation:
Q(x,y) = sin(x) × sin(y)
where Q(x,y) is any scalar property, e.g., temperature, salinity, etc., distributed over the horizontal plane.
Let the pattern be shifted in the x-direction due to the double penalty effect (Figure 2b):
Qs(x,y) = sin(x + shift) × sin(y)
The RMSE between Q and Qs varies from 0 to 1, and CORR varies from 1 to −1 as the shift increases from 0 to π as shown in Figure 2c. Therefore, both RMSE and CORR are good estimators for the spatial shift of ocean features and the ‘double penalty effect’.
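The behaviour shown in Figure 2c can be reproduced with a few lines of code; the grid size below is an arbitrary choice for this illustration.

```python
# Reproduces the idealised example of Figure 2: RMSE and CORR between the eddy
# street Q = sin(x)*sin(y) and its shifted copy Qs as the shift grows from 0 to pi.
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)
y = np.linspace(0, 2 * np.pi, 200)
X, Y = np.meshgrid(x, y)
Q = np.sin(X) * np.sin(Y)

for shift in np.linspace(0, np.pi, 5):
    Qs = np.sin(X + shift) * np.sin(Y)
    rmse = np.sqrt(np.mean((Qs - Q) ** 2))
    corr = np.corrcoef(Q.ravel(), Qs.ravel())[0, 1]
    print(f"shift = {shift:4.2f}  RMSE = {rmse:4.2f}  CORR = {corr:+5.2f}")
# RMSE grows from 0 towards 1 and CORR falls from +1 to -1 as the shift approaches pi
```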

3. Results and Analysis

3.1. Sea Surface Temperature

The seasonal variability of the area averaged SST in the Lakshadweep Sea is presented in Figure 3 as a summary time series showing the outputs from the reference and ensemble models. The purpose of this figure is to show a ‘cloud’ of curves produced by different experiments to demonstrate a visual spread of results. The separate curves for individual experiments are shown below in Figure 4.
Deviations of individual members of the ensemble from the reference model are shown in Figure 4. For clarity of presentation, the curves in all of the figures below are smoothed with a 5-day moving average.
As expected, the noDA simulations with persistently enhanced SWR (Figure 4(v1–v3,v5)) show an increase in SST throughout most of the year. The simulation with a symmetric spread of errors (Figure 4(v9)) shows both positive and negative biases. In some cases (see version 5) the bias is as high as 1 °C. Such a level of error may be unacceptable for some applications, as the range of area averaged SST seasonal variability in the Lakshadweep Sea is only about 3 °C (see Figure 4(v5)). The largest errors are recorded at different times for different ensemble members. In experiments Figure 4(v1,v3,v5), the largest errors occur in March and November. These experiments are driven by persistent positive perturbations in the SWR, which are strongly correlated in time. Experiment Figure 4(v4) is driven by negative persistent perturbations and shows the largest errors in June and December. The forcing error in SWR for experiment Figure 4(v2) is partly persistent and partly uncorrelated. The distribution of errors over time is similar to experiments Figure 4(v1,v3,v5), demonstrating that a small random component in the forcing does not change the overall behaviour of the model response. Experiment Figure 4(v9) has the largest random perturbations in the SWR within the ensemble, and it shows the largest positive errors in SST similar to, but slightly earlier than, Figure 4(v1,v3,v5). It also shows large negative errors in June and December similar to experiment Figure 4(v4), but it has an additional large error in May that is not seen in the other experiments.
NoDA simulations with randomly perturbed wind Figure 4(v6–v8) consistently show a cooling of the surface. This is due to the nonlinear effect of the wind speed on the latent heat loss, which causes stronger evaporation when the wind perturbation is in the direction of the original wind. The largest errors are common for versions Figure 4(v6–v8) and are recorded in November. The timings for other large errors do not seem to be correlated between the versions.
The models running in the DA mode have their biases significantly reduced, down to 0.2 °C or better, for all members of the ensemble. The improvement is greatest for versions Figure 4(v1–v4), with perturbations of SWR not exceeding 10%. The improvement for Figure 4(v5) (driven by forcing with the largest, 20%, perturbations in SWR) is significant but not as spectacular as for the other experiments.
The summary of time series of area averaged SST errors produced by the models with and without DA is shown in Figure 5.
The ability of the NDA algorithm to reduce the errors in the spatial distribution of the field variables is demonstrated in Figure 6. The RMSE and CORR time series are calculated using the equations:
$$\mathrm{RMSE}(t)=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\left[SST(t,n)-SST_{ref}(t,n)\right]^{2}}$$

$$\mathrm{CORR}(t)=\frac{1}{N}\sum_{n=1}^{N}\frac{SST(t,n)-\overline{SST}(t)}{\sigma_{SST}(t)}\cdot\frac{SST_{ref}(t,n)-\overline{SST}_{ref}(t)}{\sigma_{SST_{ref}}(t)}$$

where n = 1, …, N is the model grid node index, N is the total number of wet grid nodes at the sea surface (N = 19,147 for the LD20 model), SST(t,n) and SSTref(t,n) are the values of SST at time point t and grid node n for the ensemble member and the reference model, respectively, the overbars denote the corresponding area means, and σSST(t) and σSSTref(t) are the standard deviations of SST and SSTref at time point t. Similar formulae are used for the other variables.
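A short sketch of how these daily statistics can be computed from model output arrays is given below; the (time, nodes) array layout, the toy array sizes, and the variable names are illustrative assumptions.

```python
# Sketch of the daily RMSE(t) and CORR(t) defined above, computed over the wet surface
# grid nodes of an ensemble member (sst) and the reference run (sst_ref).
import numpy as np

def daily_rmse(sst, sst_ref):
    return np.sqrt(np.mean((sst - sst_ref) ** 2, axis=1))

def daily_corr(sst, sst_ref):
    a = sst - sst.mean(axis=1, keepdims=True)
    b = sst_ref - sst_ref.mean(axis=1, keepdims=True)
    return np.mean(a * b, axis=1) / (sst.std(axis=1) * sst_ref.std(axis=1))

rng = np.random.default_rng(1)
sst_ref = 28.0 + rng.standard_normal((330, 2000))        # 330 days, 2000 nodes (toy size)
sst = sst_ref + 0.1 * rng.standard_normal((330, 2000))   # surrogate for a perturbed run
print(daily_rmse(sst, sst_ref)[:3])
print(daily_corr(sst, sst_ref)[:3])
```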
The DA configurations of the experiments are clearly clustered at lower values of RMSE and higher values of CORR than their noDA counterparts, demonstrating the improvement introduced by the DA algorithm in reducing the ‘double penalty effect’. The quantitative parameters showing the errors in SST and their reduction due to DA are given in Table 2.
The numbers in Table 2 are the time averages of the time series presented in Figure 5 and Figure 6. The improvement ratios shown in the last three columns of Table 2 are calculated as follows: I_Bias = Bias_noDA/Bias_DA; I_RMSE = RMSE_noDA/RMSE_DA; I_CORR = CORR_DA/CORR_noDA, and similarly for the other variables in the following sections. The bias improvement ratio is 3 to 5.3 for all experiments except version 9, where the bias is slightly worse after DA, although the absolute values are very small (less than 0.1 °C) in most DA configurations. An improvement in RMSE is recorded in all cases, and it is slightly better for experiments with persistent errors in the forcing. The worst residual (after DA) RMSE is 0.18 °C, which is still better than the uncertainty of satellite derived SST (Donlon et al., 2012 [27]). The correlation coefficient is systematically higher in the DA simulations.
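As a minimal sketch, the improvement ratios can be reproduced directly from the tabulated time-mean statistics; the numbers below are the v1 row of Table 2, and small differences arise because the published ratios are computed from unrounded values.

```python
# Improvement ratios as defined above, using the v1 row of Table 2 as sample input.
bias_noda, bias_da = 0.25, 0.08
rmse_noda, rmse_da = 0.37, 0.10
corr_noda, corr_da = 0.85, 0.99

i_bias = bias_noda / bias_da   # ~3.1
i_rmse = rmse_noda / rmse_da   # ~3.7 (3.8 in Table 2, where unrounded values are used)
i_corr = corr_da / corr_noda   # ~1.2
```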
The improvement of representing the spatial structure is illustrated in Figure 7 for two typical cases: v1 (persistent perturbation in SWR) and v8 (random wind). A qualitative assessment demonstrates that the configurations with DA (rightmost panels in Figure 7) are nearly identical to the reference case (leftmost panels).
It has been reported in the literature that some DA schemes are not appropriate for improving the results for derived variables (see, for example, [15]). In the next subsections, we assess how efficient the model-to-model DA is at mitigating atmospheric forcing errors in the derived variables: kinetic energy, vorticity, and enstrophy.

3.2. Kinetic Energy

The errors in the atmospheric forcing translate into errors in the currents. The variability of the currents and the associated errors are more convenient to analyse using scalar rather than vector variables. This sub-section examines the area averaged surface kinetic energy (KE) per unit mass, which is calculated as:

$$KE=\frac{1}{2}\left(u^{2}+v^{2}\right)$$
Figure 8 shows a summary of time series KE for configurations with and without DA.
The time series of KE bias is presented in Figure 9. The DA curves for experiments with perturbed SWR are clearly clustered at small bias values (Figure 9a), while results generated by perturbed wind show a more scattered pattern (Figure 9b).
The ability of the DA process to reduce the shift in the distribution of surface KE is illustrated in Figure 10. For the experiments with perturbed SWR, the CORR is very close to 1 in the DA configurations all year round. For the DA experiments with perturbed wind, the CORR is close to 0.9 for most of the year but drops to 0.5–0.6 during the transitional periods at the start and end of the south-west monsoon (May and September to October).
Table 3 shows the time averages of the uncertainties in surface kinetic energy and their reduction by the DA process. The improvement ratios for KE are calculated in the same way as for SST.
The improvements in bias and RMSE for experiments with perturbed SWR are consistently better than those for perturbed wind, with RMSE improvement ratios between 4.3 and 4.9. Even in the worst performing experiment, v8 (strongly perturbed wind, see Table 1), the RMSE was reduced by half. An improvement in CORR of more than 50% is achieved in all experiments.

3.3. Vorticity

Relative vorticity (VORT) is an important characteristic of ocean currents, allowing a clear identification of cyclonic (VORT > 0) and anticyclonic (VORT < 0) eddies. Figure 11 shows the evolution of RMSE and CORR for vorticity throughout the year. The calculation and analysis of the vorticity bias is excluded because the area averaged vorticity is equal to the contour integral of the currents along the boundary (Stokes’s theorem for vector fields). As all experiments use the same boundary conditions, the area averaged vorticities must be equal, and they are, to a high level of accuracy.
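As an illustration of how these diagnostics can be obtained from the surface velocity fields, the sketch below uses a centred finite-difference approximation on a regular latitude-longitude grid; this is not necessarily the discretisation used on the NEMO grid, and the synthetic velocities are for demonstration only.

```python
# Sketch of relative vorticity, enstrophy, and kinetic energy diagnostics from surface
# currents on a regular lat-lon grid, using centred finite differences (illustrative).
import numpy as np

R_EARTH = 6.371e6  # m

def relative_vorticity(u, v, lon, lat):
    """VORT = dv/dx - du/dy (1/s) for u, v in m/s on a (lat, lon) grid."""
    lat_rad = np.deg2rad(lat)
    dx = np.deg2rad(np.gradient(lon)) * R_EARTH * np.cos(lat_rad)[:, None]  # metres per grid step
    dy = np.deg2rad(np.gradient(lat)) * R_EARTH
    dvdx = np.gradient(v, axis=1) / dx
    dudy = np.gradient(u, axis=0) / dy[:, None]
    return dvdx - dudy

lon = np.arange(68.0, 78.0, 0.05)          # 1/20-degree grid as in LD20
lat = np.arange(7.5, 14.5, 0.05)
u = 0.2 * np.sin(np.deg2rad(lat))[:, None] * np.ones(lon.size)    # synthetic currents
v = 0.1 * np.cos(np.deg2rad(lon))[None, :] * np.ones((lat.size, 1))
vort = relative_vorticity(u, v, lon, lat)
enstrophy = vort ** 2                       # square of relative vorticity (1/s^2)
kinetic_energy = 0.5 * (u ** 2 + v ** 2)    # per unit mass (m^2/s^2)
```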
The time averages of the curves presented in Figure 11 are shown in Table 4.
The units for RMSE are the same as for vorticity, i.e., 1/s. The coefficient of improvement in CORR for vorticity is greater than for KE, but the CORR value itself is slightly lower in both the noDA and DA configurations. The CORR is well improved by the DA process at the onset of the monsoon, but it reaches its lowest values in the middle of the monsoon period for all experiments. The experiments with perturbed SWR show systematically higher values of CORR.
Spatial distributions of VORT for two experiments, one with perturbed SWR and the other with perturbed wind, are shown in Figure 12.

3.4. Enstrophy

The enstrophy (square of relative vorticity) is another useful scalar characteristic of ocean currents and is often used to study the transfer of kinetic energy between larger and smaller scale motions (see, for example, [28]). It is sensitive even to small disturbances of the flow field and, therefore, is a good means to test the model skill. Figure 13 shows the time series of enstrophy bias, RMSE, and CORR.
The BIAS and RMSE for the noDA experiments are large in May and September, similar to the seasonal variability of these parameters for kinetic energy. Data assimilation removes the bias very efficiently all year round, but the RMSE remains relatively large in the winter months: February to March and November to December. The CORR for enstrophy is improved by DA all year round, but it is still lower than for SST. There is a tendency for CORR to increase towards the end of the year. The time averages of the curves presented in Figure 13 are shown in Table 5.
The DA reduces the bias by a large factor, ranging from about 15 to more than 500. The improvement in RMSE across the experiments is more uniform. DA performs slightly better in the experiments with perturbed SWR than in those with perturbed wind.
As with other variables presented above, the DA configurations (rightmost panels in Figure 14) produce maps that are difficult to differentiate qualitatively from the reference case (leftmost panels).

4. Discussion

Data assimilation is intended to correct errors in ocean models coming from a range of sources. Different DA methods have different efficiencies in doing this [11,27]. In order to study the ability of model-to-model DA to rectify errors coming from only one source, namely the atmospheric forcing, the models in all 18 experiments with perturbed forcing (both with and without DA) are identical to the reference model LD20_DA, except for the atmospheric forcing used to drive them. Therefore, a perfect data assimilation would completely eliminate the effects of errors in the atmospheric forcing and would give identical results for the reference model (LD20_DA) and for all models with perturbed forcing. In other words, the BIAS and RMSE between the perturbed and reference models would be zero, and CORR would be equal to one.
Obviously, no DA method is perfect. The analysis in this section is intended to assess the efficiency of model-to-model DA by comparing the results from the models with perturbed and original (unperturbed) forcing. For further discussion, the experiments are divided into two groups: experiments with forcing versions v1–v5 and v9 have perturbations only in the downward short wave radiation (group SOLAR), and experiments v6–v8 have perturbations only in the wind components (group WIND). The configurations with model-to-model DA show improvements for all experiments and all estimators of model skill.
The actual residual errors for the SOLAR and WIND groups are evaluated as follows. Let us introduce ‘typical values’ of the parameters used in the previous section. The typical value for a variable in question (e.g., SST, KE, VORT, and enstrophy) is the time and area average of the output generated by the reference model. The typical value for a model uncertainty is defined as the group average of the model skill parameters in the relevant columns of Table 2, Table 3, Table 4 and Table 5, calculated separately for the SOLAR and WIND groups for configurations with DA. The typical values and their units of measurement are shown in Table 6. The correlation coefficient is, as usual, non-dimensional. The uncertainties are calculated against the reference model LD20_DA.
For SST, typical biases for both SOLAR and WIND groups as well as RMSE for the SOLAR group are all below 0.1 °C. The RMSE for the WIND group is slightly higher at 0.14 °C. In all cases, these values of uncertainty are much lower (i.e., better) than those coming from satellite measurements. For example, the accuracy of Operational Sea Surface Temperature and Sea Ice Analysis is around 0.5 °C [29]. The correlation coefficients are high at CORR = 0.97–0.99 and close to the correlation coefficient CORR = 0.97 between SWR from ERA-Interim and surface observations [2]. High correlation indicates a good representation of smaller-scale structures in the temperature field.
The uncertainty in the representation of currents is more convenient to discuss in terms of current speed rather than kinetic energy, using the link $KE = \frac{1}{2}U^{2}$, where U is the magnitude of the current velocity. From Table 6, it follows that the typical value of U is Utyp = 0.25 m/s, and the typical biases for the SOLAR and WIND groups are 0.02 m/s and 0.06 m/s, respectively. Typical RMSEs are calculated using a formula for uncertainty propagation [30]:

$$\mathrm{RMSE}(U)=\frac{\mathrm{RMSE}(KE)}{U_{typ}}$$
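This relation follows from first-order error propagation applied to $KE = \frac{1}{2}U^{2}$, linearised about the typical speed (a one-line sketch):

$$\delta KE \approx \left.\frac{dKE}{dU}\right|_{U_{typ}}\delta U = U_{typ}\,\delta U \quad\Rightarrow\quad \mathrm{RMSE}(U)\approx\frac{\mathrm{RMSE}(KE)}{U_{typ}}$$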
For the SOLAR and WIND groups, RMSE(U) is 0.025 m/s and 0.073 m/s, respectively. Relative to the typical speed Utyp, these numbers translate into uncertainties of 10% and 29%. The uncertainty for the SOLAR group is well within the average accuracy of satellite measurements of currents of 27% [31], and the WIND group is close to this value. The correlation coefficients are also better for the SOLAR group than for the WIND group.
Vorticity and enstrophy are based on the spatial derivatives of current velocities and they are therefore very sensitive to the spatial shift of the circulation patterns. The correlation coefficients are higher for the SOLAR group (0.85–0.89) than for the WIND group (0.71–0.78). Both groups show lower correlations in vorticity and enstrophy fields than in temperature and kinetic energy. Nevertheless, the correlations are sufficiently high to confirm the consistency of current patterns between the reference model and all the experiments with perturbed atmospheric forcing in DA configurations. A visual qualitative analysis of vorticity and enstrophy maps supports this conclusion.
As shown in the analysis above, the efficiency of DA for experiments with perturbations in the wind forcing is not as high as for experiments with perturbations in solar radiation. A potential reason for this is the higher temporal variability of wind compared with solar radiation. Below, we test the hypothesis that the efficiency of model-to-model DA depends on the frequency of the DA cycle. In order to test this hypothesis, an additional set of experiments with the WIND group of atmospheric forcing perturbations was carried out with DA cycle lengths of 1, 2, 3 and 8 days. Note that the length of the DA cycle in all previous sections was 5 days. In total, this section analyses the residual uncertainties using the following statistics: BIAS, RMSE, and CORR for 15 DA experiments. All statistics are calculated daily using all wet grid nodes (except the flow relaxation rim) and then averaged from 1 February to 31 December 2016. The results for the kinetic energy of surface currents are shown in Table 7. In addition, the RMSE for the velocity magnitude is calculated using Equation (6).
For comparison, the last column in Table 7 shows the same statistics of uncertainty but for the free runs (without DA). The BIAS is very small for all assimilation lengths. The RMSE and CORR are marginally sensitive to the length of the assimilation cycle. The 5-day cycle shows slightly better performance than both shorter and longer cycles.
As an example of the time evolution of the Pearson correlation coefficient (CORR), the time series for the five DA cycle lengths for each experiment within the WIND group (v6–v8) are shown in Figure 15.
While the time averaged CORR lies within a comfortable range of 0.76 to 0.84 for all DA cycle lengths, it is systematically lower in May and in September–October. This drop in CORR may be of little practical significance, as the magnitudes of the deviations from the reference model during these periods are close to their minima, as shown in Figure 16a–c.
The time averaged correlation coefficient is the lowest (worst) for the DA cycle length of 1 day and the highest (best) at 5 days. Therefore, the hypothesis that a reduction in the DA cycle length can further improve the skill of model-to-model DA does not hold. The reason for such behaviour is not clear.
The efficiency of DA in mitigating errors in the atmospheric forcing is best assessed by comparison with the same model run with ‘error-free’ forcing, defined here as the best available forcing. However, for practical applications, it is useful to assess the overall skill of DA in reducing uncertainties arising from all possible sources. Below, the model results are compared with two reputable datasets: (i) OSTIA [27], which is largely based on satellite measurements but involves heavy interpolation, particularly at the time of the SW monsoon; and (ii) the global ocean reanalysis [20], which, in turn, was validated against in situ observations collected in the World Ocean Database [32].
As shown in Figure 17, the model-to-model DA improves the overall quality of all numerical experiments both against CMEMS reanalysis and OSTIA observations, despite some misfits between these external datasets. The correlation coefficient CORR for experiments without DA is in the range 0.60–0.63 with reference to OSTIA and 0.69–0.73 with reference to CMEMS. In the experiments with DA, its value is higher in the range 0.69–0.74 for OSTIA and 0.90–0.92 for CMEMS.
The temperature and salinity profiles from the model experiments were compared with 399 ARGO float profiles available from [33]. The calculations were performed as follows. The model profiles were bi-linearly interpolated in the horizontal to the locations of the ARGO profiles on the same day. The difference in time within the same day was ignored. The ARGO profile data have a higher vertical resolution than the LD20 model and were therefore binned in the vertical according to the extent of the model grid cells centred at the corresponding T-levels of NEMO (see [18]). ARGO data within each bin were averaged, and the differences between observations and models were computed and used to calculate biases and RMSE at each model depth level. The results were then averaged in the vertical to show the typical remaining uncertainties after DA. The RMSE and bias for temperature were in the ranges 0.21–0.24 °C and 0.08–0.12 °C, respectively, for all experiments. The RMSE and bias for salinity were 0.49 and from −0.13 to −0.14, respectively.
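A schematic of the vertical binning and statistics step is sketched below; the bin edges, array names, and helper functions are illustrative assumptions and do not reproduce the exact NEMO T-level geometry.

```python
# Sketch of the ARGO comparison procedure described above: high-resolution float data
# are averaged within model grid-cell depth bins and differenced against the model
# profile already interpolated to the float location (interpolation not shown).
import numpy as np

def bin_argo_to_model_levels(argo_depth, argo_temp, level_edges):
    """Average ARGO data within each model grid-cell depth bin."""
    binned = np.full(len(level_edges) - 1, np.nan)
    for k in range(len(level_edges) - 1):
        in_bin = (argo_depth >= level_edges[k]) & (argo_depth < level_edges[k + 1])
        if np.any(in_bin):
            binned[k] = argo_temp[in_bin].mean()
    return binned

def profile_stats(model_profile, argo_binned):
    """Bias and RMSE over the levels where both model and binned ARGO data exist."""
    ok = ~np.isnan(argo_binned)
    diff = model_profile[ok] - argo_binned[ok]
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

# toy example with hypothetical level edges (m) and a synthetic float profile
level_edges = np.array([0, 5, 10, 20, 40, 80, 160, 320])
argo_depth = np.arange(0.5, 320, 0.5)
argo_temp = 29.0 - 0.03 * argo_depth
model_profile = 28.9 - 0.03 * 0.5 * (level_edges[:-1] + level_edges[1:])
bias, rmse = profile_stats(model_profile,
                           bin_argo_to_model_levels(argo_depth, argo_temp, level_edges))
```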

5. Conclusions

A set of 19 ocean modelling experiments is carried out to examine the effectiveness of the recently developed Nesting with Data Assimilation (NDA) algorithm in reducing errors in ocean model output that are generated by errors in the atmospheric forcing. The NDA algorithm requires that the parent coarse model, from which the data for model-to-model DA are taken, uses observation-based DA in the first place. The NDA algorithm is computationally very efficient, as the processing of large matrices required to calculate the weights at the stochastic downscaling step of the method is carried out only once for a given model configuration. The NDA method does not need an additional step of dynamic balancing of the temperature, salinity, and velocity provided by the measurements, as these variables are already balanced in the parent model. Another benefit of the NDA method is that it reduces the so-called ‘double penalty effect’ [17].
When applied to the models driven by disturbed (i.e., containing errors in solar radiation and wind speed) atmospheric forcing, the NDA algorithm significantly reduces biases and RMSE while improving correlation coefficients between the reference and the disturbed models for all of the examined variables: sea surface temperature, kinetic energy, vorticity, and enstrophy.
The residual uncertainties are similar to or lower than the uncertainties of satellite observations. The data assimilation works best at compensating for errors in shortwave solar radiation, and it is slightly less efficient at mitigating errors in the wind forcing. An additional set of 15 experiments was run with the length of the data assimilation cycle varying from 1 to 8 days. The model results are shown to be insensitive to the length of the DA cycle in this range.

Author Contributions

G.I.S.: Conceptualization, Methodology, Software, Analysis, Original draft, Reviewing and editing; M.S.: Software, Analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

OSTIA and CMEMS data are publicly available from their respective websites. Other data will be made available on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tonani, M.; Sykes, P.; King, R.R.; McConnell, N.; Péquignet, A.-C.; O’Dea, E.; Graham, J.A.; Polton, J.; Siddorn, J. The impact of a new high-resolution ocean model on the Met Office North-West European Shelf forecasting system. Ocean Sci. 2019, 15, 1133–1158. [Google Scholar] [CrossRef]
  2. Zhang, X.; Liang, S.; Wang, G.; Yao, Y.; Jiang, B.; Cheng, J. Evaluation of the Reanalysis Surface Incident Shortwave Radiation Products from NCEP, ECMWF, GSFC, and JMA Using Satellite and Surface Observations. Remote Sens. 2016, 8, 225. [Google Scholar] [CrossRef]
  3. Tuononen, M.; O’Connor, E.J.; Sinclair, V.A. Evaluating solar radiation forecast uncertainty. Atmos. Chem. Phys. 2019, 19, 1985–2000. [Google Scholar] [CrossRef]
  4. Dussin, R.; Barnier, B.; Brodeau, L.; Molines, J.M. Drakkar Forcing Set DFS5. DRAKKAR /MyOcean Report 05-10-14. 2016. Available online: https://www.drakkar-ocean.eu/publications/reports/report_DFS5v3_April2016.pdf (accessed on 15 October 2022).
  5. Gualtieri, G. Analysing the uncertainties of reanalysis data used for wind resource assessment: A critical review. Renew. Sustain. Energy Rev. 2022, 167, 112741. Available online: https://www.sciencedirect.com/science/article/pii/S1364032122006293 (accessed on 15 October 2022). [CrossRef]
  6. Sandery, P.A.; Sakov, P.; Majewski, L. The impact of open boundary forcing on forecasting the East Australian Current using ensemble data assimilation. Ocean Model. 2014, 84, 1–11. [Google Scholar] [CrossRef]
  7. Chen, F.; Shapiro, G.; Thain, R. Sensitivity of Sea Surface Temperature Simulation by an Ocean Model to the Resolution of the Meteorological Forcing. Int. Sch. Res. Not. 2013, 2013, 215715. [Google Scholar] [CrossRef]
  8. Jung, T.; Serrar, S.; Wang, Q. The oceanic response to mesoscale atmospheric forcing. Geophys. Res. Lett. 2014, 41, 1255–1260. [Google Scholar] [CrossRef]
  9. Penny, S.G.; Kalnay, E.; Carton, J.A.; Hunt, B.R.; Ide, K.; Miyoshi, T.; Chepurin, G.A. The local ensemble transform Kalman filter and the running-in-place algorithm applied to a global ocean general circulation model. Nonlinear Process. Geophys. 2013, 20, 1031–1046. [Google Scholar] [CrossRef]
  10. Lima, L.N.; Pezzi, L.P.; Penny, S.G.; Tanajura, C.A.S. An investigation of ocean model uncertainties through ensemble forecast experiments in the Southwest Atlantic Ocean. J. Geophys. Res. Ocean. 2019, 124, 432–452. [Google Scholar] [CrossRef]
  11. Carrassi, A.; Bocquet, M.; Bertino, L.; Evensen, G. Data assimilation in the geosciences: An overview of methods, issues, and perspectives. Wiley Interdiscip. Rev. Clim. Chang. 2018, 9, e535. [Google Scholar] [CrossRef]
  12. Skákala, J.; Ford, D.; Brewin, R.J.; McEwan, R.; Kay, S.; Taylor, B.; de Mora, L.; Ciavatta, S. The assimilation of phytoplankton functional types for operational forecasting in the northwest European shelf. J. Geophys. Res. Ocean. 2018, 123, 5230–5247. [Google Scholar] [CrossRef]
  13. Moore, A.M.; Martin, M.J.; Akella, S.; Arango, H.G.; Balmaseda, M.; Bertino, L.; Ciavatta, S.; Cornuelle, B.; Cummings, J.; Frolov, S.; et al. Synthesis of ocean observations using data assimilation for operational, real-time and reanalysis systems: A more complete picture of the state of the ocean. Front. Mar. Sci. 2019, 6, 90. [Google Scholar] [CrossRef]
  14. King, R.; While, J.; Martin, M.; Lea, D.; Lemieux, B.; Waters, J.; O’Dea, E. Profile, Altimeter and SST Assimilation in an Operational Shelf-Seas Model—FOAM-Shelf v9. 2017. Available online: https://www.godae.org/~godae-data/OceanView/Events/DA-OSEval-TT-2017/2.8-King_UKMO_NWSassim.pdf (accessed on 1 February 2023).
  15. Bannister, R.N. Applications of and Problems with Data Assimilation. Available online: http://www.met.reading.ac.uk/~ross/Documents/Bannister_Lec2PtB.pdf (accessed on 14 February 2023).
  16. Chang, I.; Kim, Y.H.; Jin, H.; Park, Y.-G.; Pak, G.; Chang, Y.-S. Impact of satellite and regional in-situ profile data assimilation on a high-resolution ocean prediction system in the Northwest Pacific. Front. Mar. Sci. 2023, 10, 1085542. [Google Scholar] [CrossRef]
  17. Shapiro, G.I.; Gonzalez-Ondina, J.M.; Salim, M.; Tu, J.; Asif, M. Crisis Ocean Modelling with a Relocatable Operational Forecasting System and Its Application to the Lakshadweep Sea (Indian Ocean). J. Mar. Sci. Eng. 2022, 10, 1579. [Google Scholar] [CrossRef]
  18. Madec, G.; The NEMO Team. NEMO Ocean Engine. 2016. Available online: https://zenodo.org/record/3248739#.Yx8zIdfMKUk (accessed on 10 September 2022).
  19. MetOffice. 20 Years of UM and NWP at the Interdisciplinary Centre for Mathematical and Computational Modelling. 2017. Available online: https://www.metoffice.gov.uk/research/news/2017/20-years-of-um-use-at-icm (accessed on 10 February 2022).
  20. CMEMS. 2022. Available online: https://marine.copernicus.eu/ (accessed on 1 May 2022).
  21. Egbert, G.D.; Erofeeva, S.Y. Efficient inverse modeling of barotropic ocean tides. J. Atmos. Ocean. Technol. 2002, 19, 183–204. [Google Scholar] [CrossRef]
  22. GEBCO. 2014: The GEBCO_2014 Grid, Version 20150318. Available online: https://www.gebco.net/ (accessed on 6 July 2020).
  23. Shapiro, G.I.; Gonzalez-Ondina, J.M. An Efficient Method for Nested High-Resolution Ocean Modelling Incorporating a Data Assimilation Technique. J. Mar. Sci. Eng. 2022, 10, 432. [Google Scholar] [CrossRef]
  24. De Azevedo, H.B.; De Gonçalves, L.G.G.; Kalnay, E.; Wespetal, M. Dynamically weighted hybrid gain data assimilation: Perfect model testing. Tellus A Dyn. Meteorol. Oceanogr. 2020, 72, 1835310. [Google Scholar] [CrossRef]
  25. Zingerle, C.; Nurmi, P. Monitoring and verifying cloud forecasts originating from operational numerical models. Meteorol. Appl. 2008, 15, 325–330. [Google Scholar] [CrossRef]
  26. Crocker, R.; Maksymczuk, J.; Mittermaier, M.; Tonani, M.; Pequignet, C. An approach to the verification of high-resolution ocean models using spatial methods. Ocean Sci. 2020, 16, 831–845. [Google Scholar] [CrossRef]
  27. Donlon, C.J.; Martin, M.; Stark, J.; Roberts-Jones, J.; Fiedler, E.; Wimmer, W. The operational sea surface temperature and sea ice analysis (OSTIA) system. Remote Sens. Environ. 2012, 116, 140–158. [Google Scholar] [CrossRef]
  28. Khatri, H.; Sukhatme, J.; Kumar, A.; Verma, M.K. Surface ocean enstrophy, kinetic energy fluxes, and spectra from satellite altimetry. J. Geophys. Res. Ocean. 2018, 123, 3875–3892. [Google Scholar] [CrossRef]
  29. Bell, M.J.; Lefebvre, M.; Le Traon, P.Y.; Smith, N.; Wilmer-Becker, K. GODAE: The global ocean data assimilation experiment. Oceanography 2009, 22, 14–21. [Google Scholar] [CrossRef]
  30. Taylor, J. Introduction to Error Analysis, the Study of Uncertainties in Physical Measurements; University Science Books: New York, NY, USA, 1997; 270p. [Google Scholar]
  31. Hart-Davis, M.G.; Backeberg, B.C.; Halo, I.; van Sebille, E.; Johannessen, J.A. Assessing the accuracy of satellite derived ocean currents by comparing observed and virtual buoys in the Greater Agulhas Region. Remote Sens. Environ. 2018, 216, 735–746. [Google Scholar] [CrossRef]
  32. World Ocean Database 2013. Available online: https://www.nodc.noaa.gov/OC5/WOD13 (accessed on 22 March 2023).
  33. Argo. 2022. Available online: https://www.aoml.noaa.gov/phod/argo/ (accessed on 10 February 2022).
Figure 1. A bathymetric chart of the study area adopted from (GEBCO, 2014).
Figure 2. (a) The original pattern of vortices, (b) the same pattern but shifted by Δx = π, (c) RMSE (blue) and CORR (red) as functions of the shift.
Figure 3. Time series of area averaged SST: (a) ensemble models without DA, (b) ensemble models with DA. The reference model output is shown in the thick blue line for comparison. This Figure shows the spread of the curves, individual experiments are shown in Figure 4.
Figure 4. Time series of mean biases between members of the ensemble and the reference model for atmospheric perturbations, versions from v1 to v9.
Figure 5. Time series of area averaged difference between the ensemble members and the reference model: (a) experiments without DA, (b) experiments incorporating the Nesting with Data Assimilation algorithm. This Figure shows the spread of the curves, individual experiments are shown in Figure 4.
Figure 6. The summary plots of time series of (a) RMSE and (b) CORR for all ensemble members. Experiments without and with DA are shown in thin and thick lines respectively. This Figure shows the spread of the curves, individual experiments are shown in Figure 4.
Figure 7. Maps of daily averaged SST on 10 November 2016. Upper row: (a) the reference model, (b) experiment v1 noDA, and (c) experiment v1 DA. Lower row: (d) experiment v8 noDA, and (e) experiment v8 DA.
Figure 8. Time series of surface KE: (a) experiments v1–v9 in noDA configurations, (b) same experiments in DA configuration. For comparison, the thick blue line shows the results from the reference model. This Figure shows the spread of the curves, individual experiments are shown in Figure 4.
Figure 9. Time series of area-averaged bias of surface KE for experiments with (a) perturbed SWR, and (b) perturbed wind. The results from DA configurations are shown in thick lines. This Figure shows the spread of the curves, individual experiments are shown in Figure 4.
Figure 10. Time series of RMSE (a,b) and CORR (c,d) presenting surface KE for experiments with perturbed SWR (a,c) and wind (b,d). Results from DA configurations are shown in thick lines. This Figure shows the spread of the curves, individual experiments are shown in Figure 4.
Figure 11. The summary plots of time series of vorticity RMSE (a) and CORR (b) for all experiments. Experiments without and with DA are shown in the thin and thick lines, respectively. This Figure shows the spread of the curves, individual experiments are shown in Figure 4.
Figure 12. Maps of vorticity on 10 November 2016. Upper row: (a) the reference model, (b) experiment v5 noDA, and (c) experiment v5 DA. Lower row: (d) the reference model, (e) experiment v8 noDA, and (f) experiment v8 DA.
Figure 13. The summary plots of time series of enstrophy (a) bias, (b) RMSE, and (c) CORR for all experiments. Experiments without and with DA are shown in the thin and thick lines, respectively. This Figure shows the spread of the curves, individual experiments are shown in Figure 4.
Figure 14. Maps of enstrophy on 10 November 2016. Upper row: (a) the reference model, (b) experiment v2 noDA, and (c) experiment v2 DA. Lower row: (d) the reference model, (e) experiment v7 noDA, and (f) experiment v7 DA.
Figure 15. Time series of the Pearson correlation coefficient for experiments with perturbations in the wind forcing and different lengths of DA cycle: (a) version 6, (b) version 7, and (c) version 8. The best performing set-up (5-day assimilation cycle) is shown in the bold line.
Figure 16. Time series of daily RMSE for experiments with perturbations in the wind forcing and different lengths of DA cycle: (a) version 6, (b) version 7, and (c) version 8. The best performing set-up (5-day assimilation cycle) is shown in the bold line.
Figure 17. Validation of model results using area and time averaged statistics: (a) BIAS vs CMEMS reanalysis, (b) RMSE vs CMEMS reanalysis, (c) BIAS vs OSTIA, and (d) RMSE vs OSTIA. For comparison, the first two bars in (c,d) show the misfit between CMEMS and OSTIA.
Table 1. Description of the perturbed atmospheric forcing.

| Version No | Description |
| --- | --- |
| V0 | Undisturbed forcing |
| V1 | F = 1.10 (constant), A = B = 0 |
| V2 | F = 1.05 + 0.05 × rand(time), A = B = 0 |
| V3 | F = 1.05 (constant), A = B = 0 |
| V4 | F = 0.95 (constant), A = B = 0 |
| V5 | F = 1.20 (constant), A = B = 0 |
| V6 | A = 2.0 × randn(time), B = 0, units = m/s, F = 0 |
| V7 | A = 0, B = 2.0 × randn(time), units = m/s, F = 0 |
| V8 | A = 2.0 × randn(time), B = 2.0 × randn(time), units = m/s, F = 0 |
| V9 | F = 1 + 0.20 × randn(time), A = B = 0 |

Here, rand(time) is a uniformly distributed random number in the interval (0,1), which changes once a day at 00:01Z; randn(time) is a random number drawn from the normal distribution with standard deviation std = 1, which changes once a day at 00:01Z. The perturbations cover approximately the range of uncertainties reported in [2,5].
Table 2. Uncertainties in SST for experiments with and without DA.

| Ver No | Bias_noDA | Bias_DA | RMSE_noDA | RMSE_DA | CORR_noDA | CORR_DA | I_Bias | I_RMSE | I_CORR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.25 | 0.08 | 0.37 | 0.10 | 0.85 | 0.99 | 3.1 | 3.8 | 1.2 |
| 2 | 0.18 | 0.06 | 0.32 | 0.08 | 0.85 | 0.99 | 3.1 | 4.2 | 1.2 |
| 3 | 0.12 | 0.04 | 0.28 | 0.06 | 0.85 | 0.99 | 3.0 | 4.7 | 1.2 |
| 4 | −0.13 | −0.04 | 0.29 | 0.06 | 0.84 | 0.99 | 3.7 | 4.8 | 1.2 |
| 5 | 0.50 | 0.16 | 0.59 | 0.18 | 0.82 | 0.98 | 3.1 | 3.2 | 1.2 |
| 6 | −0.16 | −0.03 | 0.32 | 0.12 | 0.83 | 0.97 | 5.3 | 2.6 | 1.2 |
| 7 | −0.12 | −0.03 | 0.31 | 0.12 | 0.82 | 0.97 | 4.4 | 2.5 | 1.2 |
| 8 | −0.26 | −0.05 | 0.39 | 0.17 | 0.79 | 0.95 | 5.0 | 2.4 | 1.2 |
| 9 | 0.00 | 0.01 | 0.30 | 0.11 | 0.84 | 0.99 | 0.5 | 2.8 | 1.2 |
Table 3. Uncertainties in surface kinetic energy for experiments with and without DA.

| Ver No | Bias_noDA | Bias_DA | RMSE_noDA | RMSE_DA | CORR_noDA | CORR_DA | I_Bias | I_RMSE | I_CORR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.0029 | 0.0007 | 0.0316 | 0.0065 | 0.62 | 0.98 | 4.1 | 4.9 | 1.6 |
| 2 | 0.0041 | 0.0005 | 0.0353 | 0.0074 | 0.61 | 0.98 | 8.1 | 4.8 | 1.6 |
| 3 | 0.0041 | 0.0003 | 0.0355 | 0.0077 | 0.60 | 0.97 | 13.2 | 4.6 | 1.6 |
| 4 | 0.0037 | −0.0005 | 0.0357 | 0.0077 | 0.60 | 0.97 | −7.3 | 4.6 | 1.6 |
| 5 | 0.0051 | 0.0016 | 0.0372 | 0.0086 | 0.59 | 0.97 | 3.2 | 4.3 | 1.6 |
| 6 | 0.0090 | 0.0034 | 0.0398 | 0.0182 | 0.57 | 0.89 | 2.6 | 2.2 | 1.6 |
| 7 | 0.0064 | 0.0020 | 0.0393 | 0.0169 | 0.55 | 0.90 | 3.1 | 2.3 | 1.6 |
| 8 | 0.0107 | 0.0054 | 0.0425 | 0.0226 | 0.54 | 0.84 | 2.0 | 1.9 | 1.5 |
| 9 | 0.0047 | 0.0001 | 0.0363 | 0.0085 | 0.60 | 0.97 | 46.2 | 4.3 | 1.6 |
Table 4. Uncertainties in surface vorticity for experiments with and without DA.

| Ver No | RMSE_noDA | RMSE_DA | CORR_noDA | CORR_DA | I_RMSE | I_CORR |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 7.69 × 10−6 | 2.5 × 10−6 | 0.35 | 0.89 | 3.0 | 2.4 |
| 2 | 7.82 × 10−6 | 2.5 × 10−6 | 0.34 | 0.90 | 2.9 | 2.6 |
| 3 | 7.84 × 10−6 | 2.5 × 10−6 | 0.34 | 0.89 | 2.9 | 2.6 |
| 4 | 8.03 × 10−6 | 2.5 × 10−6 | 0.33 | 0.89 | 2.9 | 2.6 |
| 5 | 7.80 × 10−6 | 2.6 × 10−6 | 0.34 | 0.89 | 2.7 | 2.5 |
| 6 | 8.06 × 10−6 | 3.7 × 10−6 | 0.32 | 0.79 | 2.1 | 2.4 |
| 7 | 8.19 × 10−6 | 3.6 × 10−6 | 0.31 | 0.81 | 2.2 | 2.6 |
| 8 | 8.14 × 10−6 | 4.2 × 10−6 | 0.32 | 0.75 | 1.9 | 2.3 |
| 9 | 7.91 × 10−6 | 2.6 × 10−6 | 0.34 | 0.89 | 2.8 | 2.5 |
Table 5. Uncertainties in surface enstrophy for experiments with and without DA.

| Ver No | Bias_noDA | Bias_DA | RMSE_noDA | RMSE_DA | CORR_noDA | CORR_DA | I_Bias | I_RMSE | I_CORR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1.9 × 10−11 | −5.0 × 10−15 | 1.27 × 10−10 | 4.3 × 10−11 | 0.37 | 0.85 | −101.5 | 2.7 | 2.2 |
| 2 | 1.9 × 10−11 | −4.0 × 10−14 | 1.29 × 10−10 | 4.2 × 10−11 | 0.35 | 0.85 | −28.1 | 2.5 | 2.4 |
| 3 | 2.0 × 10−11 | −6.1 × 10−14 | 1.32 × 10−10 | 4.3 × 10−11 | 0.34 | 0.85 | −14.6 | 2.5 | 2.5 |
| 4 | 2.3 × 10−11 | −1.5 × 10−13 | 1.40 × 10−10 | 4.3 × 10−11 | 0.33 | 0.85 | −16.1 | 2.6 | 2.6 |
| 5 | 1.8 × 10−11 | 5.3 × 10−15 | 1.27 × 10−10 | 4.5 × 10−11 | 0.35 | 0.85 | −13.8 | 2.3 | 2.4 |
| 6 | 2.3 × 10−11 | 9.7 × 10−13 | 1.38 × 10−10 | 6.6 × 10−11 | 0.32 | 0.73 | −52.2 | 1.9 | 2.3 |
| 7 | 2.4 × 10−11 | 5.9 × 10−13 | 1.41 × 10−10 | 6.4 × 10−11 | 0.30 | 0.74 | −20.3 | 2.0 | 2.5 |
| 8 | 2.5 × 10−11 | 1.5 × 10−12 | 1.46 × 10−10 | 7.6 × 10−11 | 0.31 | 0.66 | −563.9 | 1.8 | 2.1 |
| 9 | 2.2 × 10−11 | −1.4 × 10−13 | 1.33 × 10−10 | 4.5 × 10−11 | 0.34 | 0.85 | −15.3 | 2.4 | 2.5 |
Table 6. Typical values and their uncertainties for perturbed models with DA.

| Variable | Typical Value | Typical Bias SOLAR | Typical Bias WIND | Typical RMSE SOLAR | Typical RMSE WIND | Typical CORR SOLAR | Typical CORR WIND |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SST (°C) | 29.2 | 0.051 | −0.037 | 0.098 | 0.14 | 0.99 | 0.97 |
| KE (m2/s2) | 3.08 × 10−2 | 4.52 × 10−4 | 3.64 × 10−3 | 6.28 × 10−3 | 1.80 × 10−2 | 0.98 | 0.88 |
| VORT (1/s) | N/A | N/A | N/A | 2.51 × 10−6 | 3.82 × 10−6 | 0.89 | 0.78 |
| Enstrophy (1/s2) | 4.31 × 10−11 | −6.63 × 10−14 | 1.03 × 10−12 | 4.36 × 10−11 | 6.91 × 10−11 | 0.85 | 0.71 |
Table 7. Uncertainties in Kinetic Energy against the reference model LD20_DA.

| Statistic | Atmospheric Forcing Perturbation Version No | DA Cycle 1 Day | DA Cycle 2 Days | DA Cycle 3 Days | DA Cycle 5 Days | DA Cycle 8 Days | No DA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BIAS (m2/s2) | 6 (U wind only) | 0.0042 | 0.0035 | 0.0033 | 0.0034 | 0.0039 | 0.0090 |
| BIAS (m2/s2) | 7 (V wind only) | 0.0039 | 0.0022 | 0.0021 | 0.0020 | 0.0020 | 0.0064 |
| BIAS (m2/s2) | 8 (both U and V wind) | 0.0061 | 0.0055 | 0.0051 | 0.0054 | 0.0058 | 0.0107 |
| RMSE(KE) (m2/s2) / RMSE(U) (m/s) | 6 (U wind only) | 0.0214/0.086 | 0.0188/0.075 | 0.0177/0.071 | 0.0169/0.068 | 0.0194/0.078 | 0.0398/0.159 |
| RMSE(KE) (m2/s2) / RMSE(U) (m/s) | 7 (V wind only) | 0.0220/0.088 | 0.0177/0.071 | 0.0167/0.067 | 0.0156/0.062 | 0.0180/0.072 | 0.0393/0.157 |
| RMSE(KE) (m2/s2) / RMSE(U) (m/s) | 8 (both U and V wind) | 0.0248/0.099 | 0.0225/0.090 | 0.0216/0.086 | 0.0215/0.086 | 0.0234/0.094 | 0.0425/0.198 |
| CORR | 6 (U wind only) | 0.81 | 0.86 | 0.87 | 0.89 | 0.86 | 0.57 |
| CORR | 7 (V wind only) | 0.79 | 0.87 | 0.88 | 0.90 | 0.87 | 0.55 |
| CORR | 8 (both U and V wind) | 0.76 | 0.81 | 0.83 | 0.84 | 0.81 | 0.54 |