Radar reflectivity is a critical quantity in weather nowcasting, serving as a reliable indicator of severe weather conditions. With its ability to describe local areas in detail at a resolution of approximately 1 km, radar reflectivity plays a significant role in assessing and predicting weather patterns.
However, the effectiveness of radar is hindered by its limited coverage and susceptibility to obstruction by mountains and other physical barriers. This limitation poses a challenge in obtaining comprehensive radar reflectivity data across larger regions. To overcome this issue, satellite data can be employed as a supplementary tool to radar observations. Satellites provide a broader view of the Earth’s atmosphere, allowing for a global assessment of weather conditions but at a coarser resolution of around 4 km.
In recent years, deep learning techniques have gained significant attention in satellite data processing and meteorology, offering innovative solutions to complex weather prediction and analysis tasks. Convolutional neural networks (CNNs) have been widely employed in weather forecasting, primarily for the analysis of meteorological images and satellite data. CNNs excel at capturing spatial dependencies, making them suitable for tasks such as meteorological forecasting [1], spatial downscaling [2,3], weather classification [4,5], and cloud classification [6]. Han et al. [7] decompose meteorological nowcasting into two stages, namely precipitation level classification and accurate precipitation regression; moreover, cross-channel 3D convolution is employed to fuse raw 3D Doppler radar data and to automatically extract effective multi-source information. Shi et al. exploit ConvLSTM [8] and TrajGRU [9] for precipitation nowcasting, and these models have become baselines for many spatio-temporal tasks. Analogously, Yu et al. [10] propose ATMConvGRU, a cascaded network that departs from the earlier parallel architectures and performs deeper nonlinear feature extraction. Though effective, such forecasting models are relatively difficult to train because of their recurrent mechanism and internal architecture. To address this problem, Ayzel et al. propose Dozhdya.Net [11], an all-convolutional neural network for radar-based precipitation nowcasting; training such a network is efficient, and experimental results show it to be valuable for early warning of hazardous events. Agrawal et al. [12] treat forecasting as an image-to-image translation problem and leverage the ubiquitous UNet [13] architecture; extensive experiments demonstrate that this method outperforms commonly used baselines such as the optical flow model, the persistence model, and NOAA numerical prediction. Similar contributions can be found in [14,15,16]. Klocek et al. [17] achieve 6 h precipitation nowcasting under an encoder-forecaster LSTM framework with radar mosaic sequences as input. The recently proposed MetNet [18] has also shown marked superiority over numerical weather prediction, providing a framework and a promising direction for forecasting up to 7 to 8 h ahead. One main advantage of MetNet originates from its integration of multi-source information such as satellite data, radar data, elevation, longitude, latitude, and time; compared with a typical ConvLSTM, MetNet adds an extra feature extraction module to learn more abstract spatio-temporal representations. Its advanced version, MetNet-2 [19], extends the forecasting range from 8 h to 12 h with a larger receptive field. The deep learning methods introduced above use radar to directly predict future rain rates, free of physical constraints. While they accurately predict low-intensity rainfall, their operational utility is limited: the lack of constraints produces blurry nowcasts at longer lead times and poor performance on rarer medium-to-heavy rain events. To address these challenges, Ravuri et al. [20] present a deep generative model for the probabilistic nowcasting of precipitation. On the other hand, Kuang et al. [21] impose the meteorological motion equation into TransUNet [22] for temperature forecasting, a further integration of meteorological priors and machine learning methods. Benefiting from these achievements, it can be concluded that another promising direction for meteorological downscaling is data-driven machine learning.
When addressing radar reflectivity and satellite data gaps in satellite-based radar reconstruction, Duan et al. [23] extended the UNet method to reconstruct radar reflectivity from Himawari-8, a weather satellite developed by the Japan Aerospace Exploration Agency (JAXA), manufactured by Mitsubishi Electric Corporation, launched from the Tanegashima Space Center in Kagoshima Prefecture, Japan, and operated from the Japan Meteorological Agency (JMA) headquarters in Tokyo, Japan. They made adjustments such as employing one convolution operation instead of two in each convolution block to reduce the risk of overfitting. Additionally, they removed skip connections, as their findings indicated that the high-resolution spatial information these provided was not always beneficial. In a similar vein, Zhu et al. [24] aimed to extract deep network representations by reconstructing radar reflectivity data from Numerical Weather Prediction (NWP) simulations and satellite observations; they subsequently examined the relationship between these representations and physical quantities such as NWP variables and satellite images. Their research, which also used Himawari-8 for radar reconstruction, highlighted the potential of data-driven deep learning models in bridging representation gaps across multiple scales and data sources. Meanwhile, Yang et al. [25] proposed a novel attention-based deep learning method to reconstruct radar reflectivity using observations from China's new-generation geostationary meteorological satellite, FengYun-4A; to account for complex surface effects, they incorporated topography data into their model.
While significant progress has been made with existing methods for radar reconstruction, most of them, such as UNet, were originally developed for natural or medical image segmentation and may not be well suited to radar reconstruction. Additionally, the unique properties of the atmosphere pose significant challenges for satellite-based radar reconstruction, further complicating the issue.
Considering the properties of the atmosphere, two key challenges remain in radar reconstruction. Firstly, the atmosphere is an integral, highly correlated system, in which a single local change can affect conditions globally. Secondly, extreme local weather events require specialized handling. However, existing methods frequently overlook these challenges. To overcome them and meet the need for improved radar reflectivity reconstruction, we introduce a novel satellite-based method. The contributions of this paper can be summarized as follows: