Article
Peer-Review Record

Understanding the El Niño Southern Oscillation Effect on Cut-Off Lows as Simulated in Forced SST and Fully Coupled Experiments

Atmosphere 2022, 13(8), 1167; https://doi.org/10.3390/atmos13081167
by Henri R. Pinheiro 1,*, Tercio Ambrizzi 1, Kevin I. Hodges 2 and Manoel A. Gan 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 2 June 2022 / Revised: 20 July 2022 / Accepted: 21 July 2022 / Published: 23 July 2022
(This article belongs to the Special Issue Coupled Climate System Modeling)

Round 1

Reviewer 1 Report

Review of: Understanding the El-Niño Southern Oscillation effect on Cut-off Lows as simulated in forced SST and fully coupled experiments

Paper Summary: This manuscript offers ENSO composite anomalies of Cut-Off Low (COL) activity based on a NINO3.4 SSTA definition of the warm (El Nino) and cool (La Nina) ENSO phases for the 1979-2014 study period. Global results are presented, aggregated across all seasons, using reanalysis (ERA5), prescribed-SST atmospheric model (AMIP) and coupled model (CMIP6) data archives. Global SSTA and upper-tropospheric zonal wind composite anomalies are also presented, and the possible causal relationships between them are discussed. The main conclusions are that: 1. the results support a link between increased (decreased) upper-tropospheric zonal winds and decreased (increased) COL activity; 2. model biases in COL activity may be related to biases in tropical Pacific SSTAs; and 3. there may be a basis for seasonal prediction of COL activity given, presumably, future knowledge of the ENSO state.

 

Review. The lack of a statistically significant ENSO association with COL anomalies over land, combined with previous demonstrations of statistically significant ENSO associations with seasonal precipitation over land, calls into question choices made in the design of this analysis. The estimation of the statistical significance of the presented results is incomplete. These points are discussed in more detail below.

 

1.  The authors cite societal impacts (e.g. heavy precipitation events and flooding, mentioned in the 1st paragraph of the introduction) as a reason we need to know more about the seasonal variability and predictability of COLs, yet there is not much in the way of statistically significant COL anomalies over land in the observation-based results presented in this manuscript. For example, very little statistically significant anomaly is apparent over North America, despite previous results (e.g. the seminal work of Ropelewski and Halpert) establishing sub-regions of North America as being among those most strongly affected by ENSO in terms of seasonal precipitation anomalies. The same can be said for parts of South America and Africa shown, over the same study period, to have statistically significant precipitation anomalies associated with El Nino and La Nina.

This motivates the question of whether precipitation over land is really not significantly affected by COLs, or if choices made in the analysis approach make the results insensitive to this possibility.

For example, Chiodi and Harrison (2015; http://dx.doi.org/10.1175/JCLI-D-14-00387.1) demonstrated, over basically the same study period, that there are statistically significant ENSO associations with precipitation anomalies over many affected land regions, but they are mainly contributed by the subset of ENSO events distinguished by their signature in tropical Pacific OLR. The link between seasonal precipitation anomalies and NINO3.4 SSTA is weaker. These associations are also specific to certain seasons. Defining the ENSO state with NINO3.4 SSTA and aggregating across all seasons, as is done in this analysis, would blur the ENSO-precipitation association over the study period. One wonders if this may also be the case for COL activity.

 

2. This manuscript considers the local statistical significance of many different regions at once but does not offer an estimation of global significance.  It should.  Otherwise, the hypothesis that the results presented are consistent with the statement that the ENSO influence on COL activity is not statistically different from zero  remains untested within the present work.  For example, at the 90% confidence level, having 10% of the area reach statistical significance is well within this null hypothesis.

3. The manuscript also offers a globally averaged result (line 321; 10% global decrease associated with El Nino) without estimating the statistical significance of this result. The uncertainty associated with globally averaged values should also be estimated and reported.

4. The conclusion that COL density is related to changes in zonal wind strength is not supported by a quantified analysis. The paper could be strengthened substantially if a successful demonstration and quantification of this result were offered.

If paper length is an issue, the wavelet analysis results might be fruitfully omitted in favor of presenting results from the analyses I suggest above.  The temporal variability of central Pacific SSTA has already been well documented.

 

Lines 180-186 do not really say what I think the authors mean to say.

 

Lines 198-199 claim that statistical significance is reached in south Texas and northeast Mexico, but cross hatching is not evident in these regions in the ERA5 results.  Please clarify or retract statement as appropriate.

 

Line 193 in Figure 1 caption uses the word "bias". Do the authors mean "anomaly"?

 

Page 9, Figure 4: Tropical Pacific results are hard to discern given the contour/colorbar intervals. Presentation of global SSTA fields may not be the best choice here given the focus, in this case, specifically on the tropical Pacific.

 

Figures on pages 9 and 10 are both labeled "4".

 

The 2nd "Figure 4" shows stippling that is not defined in the Figure caption.

 

Line 380.  Do you mean to say that the model underestimates ERA5 results? For example "...the model generally underestimates the variance seen in the ERA5 results"

 

The term "negative feedback" is used to describe the claimed tendency for, for example, a strengthened 250mb zonal wind to correspond with fewer COLs.  Feedback connotes two-way effects from the COLs on the zonal flow and from the zonal flow on the COLs.  If this is what is meant then the feedback from the COLs to zonal flow should be supported with further evidence.  Otherwise, this link should be described differently.

 

 

 

Author Response

The authors express their appreciation to the reviewers and the editor for their constructive comments that have helped to improve the paper. The reviewers’ suggestions have been incorporated in the new version of the manuscript. In this document, we have addressed the reviewers’ comments below, which are highlighted in brown colour.

Paper Summary: This manuscript offers ENSO composite anomalies of Cut-Off Low (COL) activity based on a NINO3.4 SSTA definition of the warm (El Nino) and cool (La Nina) ENSO phases for the 1979-2014 study period. Global results are presented, aggregated across all seasons, using reanalysis (ERA5), prescribed-SST atmospheric model (AMIP) and coupled model (CMIP6) data archives. Global SSTA and upper-tropospheric zonal wind composite anomalies are also presented, and the possible causal relationships between them are discussed. The main conclusions are that: 1. the results support a link between increased (decreased) upper-tropospheric zonal winds and decreased (increased) COL activity; 2. model biases in COL activity may be related to biases in tropical Pacific SSTAs; and 3. there may be a basis for seasonal prediction of COL activity given, presumably, future knowledge of the ENSO state.

Review: The lack of a statistically significant ENSO association with COL anomalies over land, combined with previous demonstrations of statistically significant ENSO associations with seasonal precipitation over land, calls into question choices made in the design of this analysis. The estimation of the statistical significance of the presented results is incomplete. These points are discussed in more detail below.

1. The authors cite societal impacts (e.g. heavy precipitation events and flooding, mentioned in the 1st paragraph of the introduction) as a reason we need to know more about the seasonal variability and predictability of COLs, yet there is not much in the way of statistically significant COL anomalies over land in the observation-based results presented in this manuscript. For example, very little statistically significant anomaly is apparent over North America, despite previous results (e.g. the seminal work of Ropelewski and Halpert) establishing sub-regions of North America as being among those most strongly affected by ENSO in terms of seasonal precipitation anomalies. The same can be said for parts of South America and Africa shown, over the same study period, to have statistically significant precipitation anomalies associated with El Nino and La Nina.

This motivates the question of whether precipitation over land is really not significantly affected by COLs, or if choices made in the analysis approach make the results insensitive to this possibility.

For example, Chiodi and Harrison (2015; http://dx.doi.org/10.1175/JCLI-D-14-00387.1) demonstrated, over basically the same study period, that there are statistically significant ENSO associations with precipitation anomalies over many affected land regions, but they are mainly contributed by the subset of ENSO events distinguished by their signature in tropical Pacific OLR. The link between seasonal precipitation anomalies and NINO3.4 SSTA is weaker. These associations are also specific to certain seasons. Defining the ENSO state with NINO3.4 SSTA and aggregating across all seasons, as is done in this analysis, would blur the ENSO-precipitation association over the study period. One wonders if this may also be the case for COL activity.

This is indeed an important point. There are a couple of things to note in response. First, there are large inland areas where the changes in COL track density (shown in Fig. 1 in the manuscript) are significant at the 90% confidence level, such as southern France, Italy, northern Africa, the Middle East and the southwestern United States during La Nina, and eastern Asia and northeast Italy during El Nino; statistically significant anomalies also occur elsewhere in the Southern Hemisphere (Fig. 4 in the manuscript).

The other point is that we have not examined COL-associated precipitation in this study, so one possibility is that the precipitation signals found in other studies are not due to COL-associated precipitation but to other systems that we have not looked at here. The impact of ENSO on precipitation can evidently be modulated by different systems, as can be seen in Fig. 17 of Chiodi and Harrison, which shows enhanced precipitation over southeast South America during El Nino, whereas our composites suggest otherwise, with a significant reduction of COLs for the same region and ENSO phase. It would be interesting to look at the precipitation associated with COLs, but that requires further work.

We have now included a discussion of the point raised by the reviewer and the possible reasons for differences between our results and previous studies. In addition, the track density composites of both NH and SH COLs for each season and ENSO phase have been added as supplementary material (Figures S1 and S2) to show the seasonal differences in the COL response to ENSO.

2. This manuscript considers the local statistical significance of many different regions at once but does not offer an estimation of global significance. It should. Otherwise, the hypothesis that the results presented are consistent with the statement that the ENSO influence on COL activity is not statistically different from zero remains untested within the present work. For example, at the 90% confidence level, having 10% of the area reach statistical significance is well within this null hypothesis.

We feel that it may not make much sense to show the significance for the whole hemisphere, although we have calculated it as suggested. Considering the whole Northern Hemisphere, there is an overall decrease in the average number of COLs during ENSO years of about 11.6% in El Nino and 0.4% in La Nina, which are statistically significant at the 95% confidence level. For the Southern Hemisphere, there is a decrease (increase) in the number of COLs of 9.8% (1.9%) during El Nino (La Nina) periods. We have not focused much attention on these numbers because there are distinct and often opposite effects between regions, with some regions showing enhanced COL activity and others showing suppressed activity; these features are observed during both the warm and cold phases. However, we strongly agree with the reviewer that there are seasonal differences which affect both hemispheres. A table including the differences in the number of COLs with respect to the seasonal averages has been included and is discussed in the manuscript.

 

Table 2 (shown in the main manuscript). Difference in the number of COL tracks identified in each season and ENSO phase with respect to the seasonal average. The corresponding percentage of tracks is given in parentheses.

Period    Northern Hemisphere           Southern Hemisphere
          Nino           Nina           Nino            Nina
DJF       0.0 (0.0)      9.8 (11.2)     -8.9 (-6.1)     9.9 (6.8)
MAM       -13.3 (-10.9)  6.0 (4.9)      -7.0 (-5.4)     -7.7 (-5.9)
JJA       -8.0 (-4.0)    4.5 (2.2)      -8.8 (-12.9)    0.2 (0.2)
SON       -12.0 (-7.2)   -8.5 (-5.1)    -14.0 (-15.0)   -2.3 (-2.4)

3. The manuscript also offers a globally averaged result (line 321; 10% global decrease associated with El Nino) without estimating the statistical significance of this result. The uncertainty associated with globally averaged values should also be estimated and reported.

Agreed; the change has been applied as suggested.

4. The conclusion that COL density is related to changes in zonal wind strength is not supported by a quantified analysis. The paper could be strengthened substantially if a successful demonstration and quantification of this result were offered.

Thank you for pointing out this gap in the study. We have calculated the correlations between the 250-hPa zonal wind anomalies and the COL track density anomalies for each season and hemisphere, and these are discussed accordingly in the paper. In addition, we have added the anomalous zonal wind fields for each season as supplementary material (Figure S3). Stippling indicates where the zonal wind and track density anomalies have opposite signs, which summarizes the relationship between the two variables.
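For illustration, a minimal sketch of this kind of diagnostic is given below: it computes grid-point correlations between zonal-wind anomalies and COL track density anomalies over the year dimension and builds an opposite-sign mask of the sort used for stippling. The array names, shapes and placeholder data are hypothetical assumptions, not the processing chain actually used for Figure S3.

```python
import numpy as np

# Hypothetical inputs: seasonal-mean anomaly fields with shape (years, nlat, nlon).
# u250_anom   : 250-hPa zonal wind anomalies
# coldens_anom: COL track density anomalies on the same grid
def pointwise_correlation(u250_anom, coldens_anom):
    """Pearson correlation at each grid point over the year dimension."""
    u = u250_anom - u250_anom.mean(axis=0)
    c = coldens_anom - coldens_anom.mean(axis=0)
    cov = (u * c).mean(axis=0)
    return cov / (u.std(axis=0) * c.std(axis=0))

def opposite_sign_mask(u250_comp, coldens_comp):
    """True where the composite anomalies of the two fields have opposite signs
    (the kind of mask that could be drawn as stippling)."""
    return np.sign(u250_comp) * np.sign(coldens_comp) < 0

# Example with random placeholder data (36 years on a 2.5-degree-like grid)
rng = np.random.default_rng(0)
u250 = rng.standard_normal((36, 73, 144))
cols = -0.5 * u250 + rng.standard_normal((36, 73, 144))  # anti-correlated by construction
r = pointwise_correlation(u250, cols)
stipple = opposite_sign_mask(u250.mean(axis=0), cols.mean(axis=0))
print(r.shape, float(np.nanmean(r)))
```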

If paper length is an issue, the wavelet analysis results might be fruitfully omitted in favour of presenting results from the analyses I suggest above.  The temporal variability of central Pacific SSTA has already been well documented.

Thank you for this suggestion. We have, however, decided to keep this within the paper, as we feel the wavelet analysis is important to give a qualitative measure of changes in COL variance with ENSO on different time scales. Without the SST analysis, it would be difficult to discuss the problems in the performance of the coupled models that arise from systematic biases in the predicted SSTs, as the results suggest that the erroneous representation of COLs is partly caused by inaccurate SSTs. If the paper length exceeds the maximum limit, we can try to reduce the number of plots.

Lines 180-186 do not really say what I think the authors mean to say.

We have reworded this sentence in order to clarify what we mean about the COL response to different spatial patterns of ENSO.

Lines 198-199 claim that statistical significance is reached in south Texas and northeast Mexico, but cross hatching is not evident in these regions in the ERA5 results.  Please clarify or retract statement as appropriate.

We have adjusted the width of the generated tile in the plot (Figure 1 in the main manuscript) so that cross hatching is now evident over the mentioned region.

Line 193 in Figure 1 caption uses the word "bias". Do the authors mean "anomaly"?

We agree with the reviewer; the change has been applied as suggested.

Page 9, Figure 4: Tropical Pacific results are hard to discern given the contour/colorbar intervals. Presentation of global SSTA fields may not be the best choice here given the focus, in this case, specifically on the tropical Pacific.

We are not sure which figure in the paper the reviewer is referring to; Figure 4 does not show an SSTA field. If the reviewer is referring to the 2nd "Figure 4" (Figure 5) on page 10, could you please specify which panel is unclear?

 Figures on pages 9 and 10 are both labeled "4".

This has been corrected.

 The 2nd "Figure 4" shows stippling that is not defined in the Figure caption.

This information has been added to the figure caption.

 Line 380.  Do you mean to say that the model underestimates ERA5 results? For example "...the model generally underestimates the variance seen in the ERA5 results"

Yes, change made as per suggestion.

 The term "negative feedback" is used to describe the claimed tendency for, for example, a strengthened 250mb zonal wind to correspond with fewer COLs.  Feedback connotes two-way effects from the COLs on the zonal flow and from the zonal flow on the COLs.  If this is what is meant then the feedback from the COLs to zonal flow should be supported with further evidence.  Otherwise, this link should be described differently.

We understand the concern here. There does seem to be a relationship between COL activity and changes in the zonal wind patterns, though we agree that this does not characterize a two-way feedback between the two variables. The two sentences that mentioned the term "feedback" have been rewritten to avoid confusion.

Author Response File: Author Response.pdf

Reviewer 2 Report

I think that the manuscript is well written; in particular, the results of Fig. 1 and Fig. 4 will provide valuable information for the climate research community. On the other hand, I have a question about why the AMIP/CMIP simulated results in the SH (Fig. 4) are poorer than those in the NH (Fig. 1). Considering this, I would like to accept this paper after minor revision.

 

Comments:

- As mentioned above, I have a question about why the AMIP/CMIP simulated results in the SH (Fig. 4) are poorer than those in the NH (Fig. 1). Is it possible to discuss which factor (SST bias, model resolution in the AGCM, topographic effects, tropical rainfall and teleconnections in the GCM, etc.) seems most important for the poorer simulation skill for COL frequency in the SH?

Author Response

Comments and Suggestions for Authors (Reviewer 2)

I think that the manuscript is well written; in particular, the results of Fig. 1 and Fig. 4 will provide valuable information for the climate research community. On the other hand, I have a question about why the AMIP/CMIP simulated results in the SH (Fig. 4) are poorer than those in the NH (Fig. 1). Considering this, I would like to accept this paper after minor revision.

Comments:

- As mentioned above, I have a question about why the AMIP/CMIP simulated results in the SH (Fig. 4) are poorer than those in the NH (Fig. 1). Is it possible to discuss which factor (SST bias, model resolution in the AGCM, topographic effects, tropical rainfall and teleconnections in the GCM, etc.) seems most important for the poorer simulation skill for COL frequency in the SH?

We understand the concern with respect to the factors that may affect the predictability of COLs. Our results show that the CMIP6 multi-model performs better in simulating COLs in the NH than in the SH, particularly during El Niño; for the AMIP6 simulations, however, the skill is comparable between the hemispheres, with slightly better performance for the SH than for the NH.

In our recent work (https://link.springer.com/article/10.1007/s00382-022-06200-9), we compared high- and standard-resolution models and found that improvements were less obvious in the SH than in the NH, suggesting that biases in the simulation of SH COLs are mainly due to inaccurate model physics, attributed to air-sea coupling interactions, rather than to insufficient horizontal resolution. Our findings suggest that the predictive skill for SH COLs appears to be more influenced by the ENSO-associated bias, possibly due to the large extent of oceanic areas. This question requires a deeper investigation of the physical mechanisms in future work. We have added a comment about this in the Conclusions.

Author Response File: Author Response.pdf

Reviewer 3 Report

 

This is an interesting and comprehensive article, investigating the ENSO effect on global COLs using AMIP6 and CMIP6. The manuscript is well arranged and the results are interesting for readers. I will accept the manuscript provided that revision is made to ensure that the citations within the article are formatted properly.

Author Response

Comments and Suggestions for Authors (Reviewer 3)

This is an interesting and comprehensive article, investigating the ENSO effect on global COLs using AMIP6 and CMIP6. The manuscript is well arranged and the results are interesting for readers. I will accept the manuscript provided that revision is made to ensure that the citations within the article are formatted properly.

Thank you for your concern. We have tried to make sure that all the citations in the manuscript are in the correct format.

Author Response File: Author Response.pdf

Reviewer 4 Report

Overall, this manuscript is in very good shape. I have three comments to address before it is published. 

1. The authors should explicitly state the source of the COL and ENSO data. The reviewer and readers need to know how these data were obtained: are they taken directly from a dataset, or did the authors perform some calculation? In particular, the authors should show how the COL and ENSO strength are quantified, even though this may be well known.

2. Regarding the second objective of this paper (Lines 95-97), the authors should rewrite it, because it is not closely related to the topic. Please highlight why the differences between the AMIP6 and CMIP6 simulations are important to contrast.

3. Evaluation of the model performance is indeed needed, but the authors are expected to put more effort into identifying the reasons for the modelling biases in ENSO, COLs and other related parameters. This will be very helpful for model developers.

Author Response

Comments and Suggestions for Authors (Reviewer 4)

Overall, this manuscript is in very good shape. I have three comments to address before it is published. 

1. The authors should explicitly state the source of the COL and ENSO data. The reviewer and readers need to know how these data were obtained: are they taken directly from a dataset, or did the authors perform some calculation? In particular, the authors should show how the COL and ENSO strength are quantified, even though this may be well known.

The data used in this study were obtained directly from the WGCM and ECMWF databases, as detailed in the section "Data Availability Statement". The only pre-processing applied to the data is to retain the synoptic-scale features by spectrally truncating the vorticity data to T42. We believe that the technique used for identifying COLs is described in detail in the methodology and adequately referenced.

ENSO events are defined using a ±1.0 °C Niño-3.4 SST anomaly threshold applied to the 3-month running average. This is stated in the submitted manuscript (lines 135-143). We have not specifically looked at different strengths of ENSO events, though this could be investigated further in focused studies.
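For readers who want to reproduce the event selection, the sketch below applies the stated ±1.0 °C threshold to a 3-month running mean of a monthly Niño-3.4 SST anomaly series. The requirement of several consecutive months above the threshold is an assumed extra rule (similar to the ONI convention) and may not match the authors' exact definition; the synthetic input series is purely illustrative.

```python
import numpy as np

def classify_enso(nino34_anom_monthly, threshold=1.0, min_consecutive=5):
    """Classify months as El Nino (+1), La Nina (-1) or neutral (0) from a
    monthly Nino-3.4 SST anomaly series (degC), using a 3-month running mean.

    The +/-1.0 degC threshold follows the definition quoted in the response;
    requiring `min_consecutive` overlapping months is an assumed extra rule.
    """
    x = np.asarray(nino34_anom_monthly, dtype=float)
    # Centred 3-month running mean; series edges are treated as neutral.
    run = np.full_like(x, np.nan)
    run[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    run = np.nan_to_num(run, nan=0.0)

    state = np.zeros(x.shape, dtype=int)
    for sign in (+1, -1):
        hit = sign * run >= threshold          # months exceeding the threshold
        count = 0
        for i, h in enumerate(hit):
            count = count + 1 if h else 0
            if count >= min_consecutive:       # mark the whole qualifying run
                state[i - count + 1 : i + 1] = sign
    return state

# Example: a synthetic anomaly series containing warm and cold events
months = np.arange(120)
series = 1.5 * np.sin(2 * np.pi * months / 48.0)
print(classify_enso(series)[:24])
```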

2. Regarding the second objective of this paper (Lines 95-97), the authors should rewrite it, because it is not closely related to the topic. Please highlight why the differences between the AMIP6 and CMIP6 simulations are important to contrast.

We have tried to clarify this by adding a brief explanation: “Both AMIP6 and CMIP6 are used as there is interest in documenting possible errors in the coupled model predicted SSTs and their effect on the representation of COLs and how the results compare with simulations forced with observed SSTs.”

We believe the sentence above fits more appropriately in the Introduction, as the reviewer suggests, than in the Data and Analysis Procedures section, where it appeared in the previous version of the manuscript.

3. Evaluation of the model performance is indeed needed, but the authors are expected to put more effort into identifying the reasons for the modelling biases in ENSO, COLs and other related parameters. This will be very helpful for model developers.

We understand the concern here. We believe that the revised manuscript has improved after considering the comments given by the reviewers. This question is closely related to the main comment from Reviewer 2. We have attempted to deepen the discussion of the models' performance and suggested possible reasons for their deficiencies, in particular for the coupled models. One of these aspects concerns the linkage between the predictive skill of the zonal wind and COL activity, which is now quantified and discussed in Section 3.1. The results obtained in this study, together with a previously reported one, provide insights into the origin of the model bias, which appears to be more closely related to the model physics (e.g. air-sea coupling interactions) in the simulation of SH COLs, in particular for coupled models. These aspects are now discussed in Section 3 and emphasized in the Conclusions.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

1.     I’m still confused about why the text is claiming that certain areas reach statistical significance in COL track density anomaly even though I see no cross-hatching in those areas in Figure 1c or Figure 1d.  For example, the text still says:

 

“south Texas and northeast Mexico […] exhibit statistically significant decreasing densities in El Niño”.

 

and because Fig. 1's caption says “Rectangular cross-hatching regions in (c) and (d) are where statistical significance is > 90% confidence level.”  I expect to see cross hatching in Figure 1c, since (c) signifies the El Nino case.

 

But I see no cross hatching in the southern US (including Texas) or northeast Mexico in Fig. 1c (or Fig 1d).

 

 

2. The addition of the results in Table 2 does not address the original comment about not having evaluated the overall "field" or "global" statistical significance of the composite anomaly results. See the references below for an explanation of this issue and recommended treatments.

 

Livezey, R. E., and W. Y. Chen, 1983: Statistical field significance and its determination by Monte Carlo techniques. Mon. Wea. Rev., 111, 46–59.

 

Wilks, D.S. On “Field Significance” and the False Discovery Rate, Journal of Applied Meteorology and Climatology, 2006, 45, 1181-1189.

Author Response

The authors express again their appreciation to the reviewer for his/her constructive comments that have helped to improve the paper. We have addressed the reviewer’s comments below, which are highlighted in brown colour.

Comments and Suggestions for Authors

  1. I’m still confused about why the text is claiming that certain areas reach statistical significance in COL track density anomaly even though I see no cross-hatching in those areas in Figure 1c or Figure 1d.  For example, the text still says:

“south Texas and northeast Mexico […] exhibit statistically significant decreasing densities in El Niño”.

and because Fig. 1's caption says “Rectangular cross-hatching regions in (c) and (d) are where statistical significance is > 90% confidence level.”  I expect to see cross hatching in Figure 1c, since (c) signifies the El Nino case.

But I see no cross hatching in the southern US (including Texas) or northeast Mexico in Fig. 1c (or Fig 1d).

Sorry, we forgot to replace Figs. 1c and 1d with the modified plot in which the hatching is more visible over the southern US. We have now included them in the paper, as shown in the doc file.

We can also change the plot size to make the hatching more prominent; such a version is available only for the reviewer (shown in the doc file).

 

2. The addition of the results in Table 2 does not address the original comment about not having evaluated the overall "field" or "global" statistical significance of the composite anomaly results. See the references below for an explanation of this issue and recommended treatments.

Livezey, R. E., and W. Y. Chen, 1983: Statistical field significance and its determination by Monte Carlo techniques. Mon. Wea. Rev., 111, 46–59.

Wilks, D.S. On “Field Significance” and the False Discovery Rate, Journal of Applied Meteorology and Climatology, 2006, 45, 1181-1189.

 

The authors are grateful to the reviewer for raising some important concerns about the manuscript. Although field significance is an interesting point, addressing it would require a substantial amount of additional calculation, and we do not believe it would add to the paper in a significant way. We believe that the local significance tests are sufficient for the discussion of the results. We have added a few sentences to the discussion in the paper noting that we have used local significance tests, whereas some studies use field significance tests based on Monte Carlo methods (Livezey and Chen 1983; Cubasch et al. 1994) to provide a measure of global significance; this requires considerable additional computation, and it is not clear what the best approach is for this type of data. Whilst this would be a useful additional statistical test, it requires more analysis to determine the best approach, and we believe the local tests are sufficient for the analysis performed here. The application of field significance, and how to implement it for this type of data, will be left to future work.

Cubasch, U., Santer, B.D., Hellbach, A., Hegerl, G., Höck, H., Maier-Reimer, E., ... & Voss, R. Monte Carlo climate change forecasts with a global coupled ocean-atmosphere model. Climate Dynamics 1994 10, 1-19.
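For reference, a minimal sketch of the kind of Monte Carlo field-significance test discussed by Livezey and Chen is given below, assuming composites built from year-labelled anomaly maps. The local test (a Welch t statistic with a crude normal-approximation threshold), the variable names and the synthetic data are illustrative assumptions, not the authors' implementation or a recommendation of a specific design.

```python
import numpy as np

def field_significance(fields, event_years, n_perm=1000, alpha=0.10, seed=0):
    """Monte Carlo field-significance test in the spirit of Livezey & Chen (1983).

    fields      : array (n_years, nlat, nlon) of seasonal/annual anomaly maps
    event_years : boolean array (n_years,), True for e.g. El Nino years
    Returns the observed fraction of grid points that are locally significant
    (two-sided test at level alpha) and the p-value of that fraction under
    random relabelling of the years.
    """
    rng = np.random.default_rng(seed)

    def frac_locally_significant(mask):
        a, b = fields[mask], fields[~mask]
        na, nb = len(a), len(b)
        # Welch t statistic at every grid point
        t = (a.mean(0) - b.mean(0)) / np.sqrt(a.var(0, ddof=1) / na + b.var(0, ddof=1) / nb)
        # crude two-sided threshold; a proper test would use the t distribution
        tcrit = 1.645 if alpha == 0.10 else 1.96
        return np.mean(np.abs(t) > tcrit)

    observed = frac_locally_significant(event_years)
    null = np.empty(n_perm)
    for k in range(n_perm):
        null[k] = frac_locally_significant(rng.permutation(event_years))
    p_global = np.mean(null >= observed)
    return observed, p_global

# Example with synthetic data: 36 years, 9 of them "events"
rng = np.random.default_rng(1)
data = rng.standard_normal((36, 73, 144))
events = np.zeros(36, dtype=bool)
events[:9] = True
print(field_significance(data, events, n_perm=200))
```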

 

 

Author Response File: Author Response.docx
