Article

CSIP-Net: Convolutional Satellite Image Prediction Network for Meteorological Satellite Infrared Observation Imaging

1 College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
2 Beijing Institute of Applied Meteorology, Beijing 100029, China
3 Qingdao Innovation and Development Center, Harbin Engineering University, Qingdao 266400, China
4 The College of Ocean and Atmosphere, Ocean University of China, Qingdao 266100, China
5 Key Laboratory of Physical Oceanography, MOE, Institute for Advanced Ocean Study, Frontiers Science Center for Deep Ocean Multispheres and Earth System (DOMES), Ocean University of China, Qingdao 266100, China
6 Ocean Dynamics and Climate Function Lab/Pilot National Laboratory for Marine Science and Technology (QNLM), Qingdao 266237, China
7 International Laboratory for High-Resolution Earth System Prediction (iHESP), Qingdao 266000, China
8 Qingdao Hatran Ocean Intelligence Technology Co., Ltd., Qingdao 266400, China
* Author to whom correspondence should be addressed.
Atmosphere 2023, 14(1), 25; https://doi.org/10.3390/atmos14010025
Submission received: 4 December 2022 / Revised: 14 December 2022 / Accepted: 20 December 2022 / Published: 23 December 2022
(This article belongs to the Section Meteorology)

Abstract

Geosynchronous satellite observation images offer a wide observation range and high temporal resolution, both critical for understanding atmospheric motion and change patterns. Predicting geosynchronous satellite observation images would provide significant support for short-term forecasting, including precipitation nowcasting. This paper proposes a deep learning method for predicting sequences of satellite observation images. In experiments predicting the observed images of Band 9 of the FY-4A satellite, the average mean square error of the network's 2-h prediction is 4.77 Kelvin, the best among the deep learning models compared. We also used the model to predict Bands 10–14 of the FY-4A satellite and combined the multi-band prediction results. To test the application potential of the network's predictions, we ran a precipitation area detection task on the multi-band prediction results. After 2 h of prediction, detection from the predicted satellite infrared images still achieved an accuracy of 0.855.

1. Introduction

Weather forecasting has an increasingly significant impact on the development of modern society, and accurate short-term nowcasting facilitates transportation, agricultural production, and energy production, among other areas [1,2]. Early forecasting of disastrous weather such as heavy precipitation and ice storms can greatly reduce economic losses and casualties [3,4,5]. To further improve the understanding of weather and climate change, countries are continually upgrading their meteorological observation systems to obtain more refined observation products. Compared to traditional meteorological observation methods, data from geosynchronous satellites offer wide coverage, high temporal resolution, and independence from local observation conditions [6]. Because of these advantages, geosynchronous satellites are widely used for monitoring extreme weather phenomena, precipitation estimation, and related applications [7,8].
Currently, the major weather forecasting tools still rely on traditional numerical models. With the steady development of numerical forecasting technology, numerical models have greatly improved the temporal and spatial resolution and the accuracy of forecasts [9]. However, this improvement also brings an enormous computational burden, making operational forecasting heavily dependent on supercomputers. Moreover, numerical models are built on the current knowledge of the physical laws governing atmospheric motion, and the present understanding of the physical and chemical interactions of the atmosphere is still limited, which results in biased forecasts; achieving a major theoretical breakthrough to resolve these problems is difficult [10]. Numerical models are also sensitive to the accuracy of the initial field and the boundary conditions, and many problems remain in short-term nowcasting [11,12]. Addressing these prevailing problems is therefore very important [13].
With the advancement of deep learning in recent years, short-term nowcasting has become more accurate, and researchers have begun applying deep learning techniques to the problems of short-term weather nowcasting [14,15,16,17,18]. Shi et al. (2015) proposed the ConvLSTM network for radar echo prediction [19]; Wang et al. (2017) proposed the PredRNN network to improve radar echo extrapolation [20]; Google proposed the MetNet network in 2020, which blends satellite and radar data to produce minute-level precipitation forecasts [21]; Trebing et al. proposed the SmaAt-UNet network, which achieved good results in prediction experiments on binary cloud-cover images and precipitation images [22]; and Gao et al. proposed the SimVP network based on convolutional neural networks, which performed well in many prediction settings [23]. According to how they are realized, these deep learning-based nowcasting schemes fall into two main types: those based on recurrent neural networks (RNNs) and those based on convolutional neural networks (CNNs). A prediction model must balance performance against computational cost; CNN models, with their simple structure and natural advantages in image tasks, can achieve good results in prediction tasks [23].
These applications of deep learning methods have all achieved good results and significantly contributed to the development of deep learning for short-term nowcasting. However, they generally rely on radar observations. Radar is a common observation tool with a significant advantage in observing cloud cover, but its observation range is small compared with satellites, and it is easily disturbed by surrounding electromagnetic waves, which degrades data quality [24,25]. In terms of coverage in particular, weather radar deployment is usually uneven, affecting the completeness of observations; effective observation is difficult in areas where radar density is low or deployment is complicated. These problems hinder radar-based forecasting and warning efforts [26]. Previous studies have used generative adversarial networks (GANs) to sharpen the images predicted by LSTM networks [27,28], but they only attempted to predict one frame of future satellite observations. The most significant advantage of deep learning methods in prediction is their fast computation and their ability to output predictions at high temporal resolution; relative to the observation interval of geosynchronous meteorological satellites, these earlier schemes are therefore limited in temporal resolution.
Geosynchronous meteorological satellite observations are well suited to capturing atmospheric motion patterns thanks to the high resolution of the observed images and the ability to observe a large area without interruption. With its continuous progress, infrared imaging technology has been widely adopted, which has also advanced deep learning-based infrared image processing [29]. Infrared imaging has likewise driven satellite observation technology, and the infrared bands of geosynchronous meteorological satellites are widely used in quantitative precipitation estimation and precipitation area segmentation tasks [30,31]. Hong et al. proposed an artificial neural network for quantitative precipitation estimation based on satellite observations [32]; Min et al. used Himawari-8 infrared observations and numerical weather prediction data for summer precipitation estimation [33]; and Wang et al. used satellite infrared observation bands for quantitative precipitation estimation [34,35]. These efforts demonstrate the value of satellite observation images in forecasting. Accurate forecasts of satellite observation images would further improve the accuracy of short-term nowcasts, and the high temporal resolution of such forecasts would significantly improve meteorological support in areas such as transportation and shipping, while also providing meteorological services to regions that conventional observations cannot easily cover.
To this end, we propose a convolutional satellite image prediction network (CSIP-Net) capable of making high-temporal-resolution short-term forecasts of satellite observation images. The network is built on a convolutional neural network (CNN) with an encoder-decoder structure [36]. The feature extraction capability of the encoder is enhanced by adding residual dense block (RDB) modules, and a convolutional block attention module (CBAM) is added to achieve better prediction performance. The network predicts satellite IR observation images at high temporal resolution, with an average MSE of 4.77 on Band 9 of the FY-4A satellite.
This paper is organized as follows: Section 2 presents the data we used and the construction of the dataset; Section 3 presents the construction of our prediction model, the prediction schemes, and the evaluation methods; Section 4 evaluates the prediction schemes and multiple prediction models using the evaluation metrics; and Section 5 evaluates the application capability of the network's prediction results.

2. Data

2.1. FY-4A Dataset

FY-4A is China's new-generation geosynchronous meteorological satellite, equipped with the Advanced Geosynchronous Radiation Imager (AGRI) [37]. It has more observation bands than the previous generation of geostationary satellites and has made significant advances in both observation frequency and resolution [38]. Its bands reach spatial resolutions of 500 m, 1000 m, 2000 m, and 4000 m, which greatly enhance the satellite's observation capability [39]. The infrared bands of the FY-4A satellite were used in this paper's satellite image prediction experiments: the infrared bands observe usable information throughout the day and are unaffected by solar illumination, whereas the visible bands yield no usable information for nearly two-thirds of the day, which is very disadvantageous for sustained forecasting capability. The AGRI band observation information of the FY-4A satellite is shown in Table 1, and the infrared band observation images of the FY-4A satellite are shown in Figure 1.

2.2. Data Preprocess

In this paper, FY-4A AGRI 4000 m resolution L1-level full-disk observations were selected as the training data. To obtain data for the experimental area, the rows and columns of the original data had to be converted to latitude and longitude. We transformed the FY-4A full-disk observations to an equal latitude-longitude projection and interpolated the converted data to a grid with a spatial resolution of 0.05° × 0.05°. The experimental area was 22–34.75° N, 110–122.75° E, and the cropped data contained 256 × 256 pixels. Finally, the preprocessed data were quality controlled, mainly to remove satellite observations with missing measurements or significant bias. After preprocessing, the observation images could be used to construct the dataset.
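To make the preprocessing concrete, the following Python sketch regrids a full-disk AGRI frame onto the 0.05° target grid and applies a simple quality-control check. The navigation arrays pix_lat and pix_lon, the nearest-neighbour interpolation, and the brightness-temperature bounds are our assumptions for illustration; the paper does not state the interpolation method or the QC thresholds.

```python
import numpy as np
from scipy.interpolate import griddata

# Target grid: 0.05° x 0.05° over 22-34.75° N, 110-122.75° E -> 256 x 256 points.
lats = np.arange(22.0, 34.75 + 1e-6, 0.05)    # 256 latitudes
lons = np.arange(110.0, 122.75 + 1e-6, 0.05)  # 256 longitudes
lon2d, lat2d = np.meshgrid(lons, lats)

def regrid_full_disk(bt, pix_lat, pix_lon):
    """Map full-disk brightness temperatures (bt) onto the regular grid.
    pix_lat / pix_lon are hypothetical per-pixel coordinates obtained from
    the AGRI row/column navigation."""
    valid = np.isfinite(bt) & np.isfinite(pix_lat) & np.isfinite(pix_lon)
    points = np.column_stack([pix_lon[valid], pix_lat[valid]])
    return griddata(points, bt[valid], (lon2d, lat2d), method="nearest")

def passes_qc(img, bt_min=150.0, bt_max=350.0):
    """Simple quality control (assumed bounds): reject frames with missing
    pixels or physically implausible brightness temperatures."""
    return bool(np.isfinite(img).all() and img.min() > bt_min and img.max() < bt_max)
```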
The preprocessed data were used to construct a time series dataset. Considering the observation interval of the FY-4A satellite, sequences were built at a 15-min interval, with each sequence containing 16 frames of observed images. When the 16 selected frames did not span the full 4-h window, the sequence was dropped, and the program extracted the next candidate sequence and repeated the check. The first eight frames were used as model input, and the last eight frames were used as labels for training or as ground truth in evaluation; given the time interval, this enables a 2-h prediction of satellite observation images. We divided the constructed dataset by time and, considering that precipitation events generally occur in summer, selected May–September of 2018–2019 as the training dataset and May–September 2020 as the testing dataset. The training dataset contains 21,936 sequences and the testing dataset contains 10,800 sequences. The preprocessing flow is shown in Figure 2.
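A minimal sketch of this sequence construction follows, assuming frames is a time-sorted list of (timestamp, 256 × 256 array) pairs; the one-frame sliding step and the strict 15-min spacing check are our reading of the paragraph above, not released code.

```python
import numpy as np
from datetime import timedelta

def build_sequences(frames):
    """Build (input, target) pairs of 8 + 8 frames at strict 15-min spacing;
    windows with time gaps are dropped, mirroring the 4-h window check."""
    step = timedelta(minutes=15)
    sequences = []
    for i in range(len(frames) - 15):
        window = frames[i:i + 16]
        if all(window[k + 1][0] - window[k][0] == step for k in range(15)):
            imgs = np.stack([img for _, img in window])  # (16, 256, 256)
            sequences.append((imgs[:8], imgs[8:]))       # 8 inputs, 8 labels
    return sequences
```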

3. Methods

In this section, a convolutional satellite image prediction network, CSIP-Net, is proposed. After training and optimization of its parameters, the network can predict satellite infrared observation images. This section describes the structure of CSIP-Net, the training method, and the evaluation metrics.

3.1. CSIP-Net

We propose an encoder-decoder prediction model for satellite infrared observation images. To address the difficulty CNNs have in capturing temporal change features in prediction tasks, we made a series of improvements to the U-Net [36]. We strengthened the encoder by adding a residual dense block (RDB) unit to the encoding process [40,41], which improves the feature extraction ability of the network in the encoding stage. To enhance the ability of the CNN to capture temporal features, we added a CBAM to the network [42], which allows the prediction results to focus more on temporal and spatial information. The structure of the proposed CSIP-Net is shown in Figure 3.
Satellite infrared observation images contain information at different scales, and the atmosphere changes very rapidly. A deeper network is usually an effective way to obtain stronger temporal feature extraction, but more convolutional layers tend to cause larger information loss, which inhibits this gain. With the dense connections proposed in DenseNet, features from earlier extraction steps can be reused and retained: the retained features are extracted further in the next layer, giving the network better feature extraction capability, which helps it capture temporal and spatial variation characteristics. The RDB unit contains three convolutional layers with dense connections. The convolution kernel size is 3, with zero padding on each edge so that the spatial shape of the input does not change, and the growth rate of the dense connections is 32. The densely connected features are fed into a convolutional layer with a kernel size of 1 to adjust the number of channels, keeping the network's parameters within a reasonable range and avoiding excessive computational cost. A residual connection is also added: the original image features are merged with the extracted features and passed to deeper layers to reduce the information loss of convolutional feature extraction. Compared with the double-layer convolution in the original U-Net, the RDB unit has clear advantages in extraction ability and information retention. The structure of the RDB unit is shown in Figure 4.
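Since the RDB unit is specified exactly (three 3 × 3 dense-connected convolutions with growth rate 32, a 1 × 1 fusion convolution, and a residual connection), a PyTorch sketch is given below; the ReLU activations are an assumption, as the paper does not name the nonlinearity.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block: three 3x3 convolutions with dense connections
    (growth rate 32), a 1x1 convolution to restore the channel count, and a
    residual connection around the whole block."""
    def __init__(self, channels, growth=32):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(3):
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),  # zero padding keeps H x W
                nn.ReLU(inplace=True),  # activation is assumed, not stated in the paper
            ))
            in_ch += growth  # dense connection: each layer sees all earlier features
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)  # 1x1 channel adjustment

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection
```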
CSIP-Net is a convolutional network with an encoder-decoder structure, using four downsampling layers and four upsampling layers. RDB units are included in each encoding layer and fuse information from different layers, giving the network a robust encoding structure and enhanced feature extraction capability. Residual units are used in the decoding process to keep the network's parameters in a suitable range. In addition, we inserted a CBAM module into each layer of the encoding and decoding process; the CBAM assigns different weights to different levels of information, giving the network attention in both temporal and spatial dimensions and further enhancing its ability to capture temporal characteristics. The pooling layers of the U-Net are replaced with convolutional layers with a stride of 2, and in the decoding process the network gradually restores the feature maps to their original size, with skip connections merging the original information into the decoding layers. The data finally pass through the output layer to produce the predicted image, realizing the satellite observation image prediction task. This structure lets the network extract spatial and temporal features more efficiently during encoding and decoding.
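The CBAM used in each encoding and decoding layer follows Woo et al. [42]; a standard reference implementation is sketched below. The reduction ratio and the 7 × 7 spatial kernel are the defaults of the original CBAM paper, not values reported here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Convolutional block attention module: channel attention followed by
    spatial attention, each applied as a multiplicative gate."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(  # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel attention over average- and max-pooled descriptors.
        attn = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                             self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * attn
        # Spatial attention over channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```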

3.2. Train

Two different prediction schemes were used in the experiments, and their performance in the satellite image prediction task was compared through the prediction results. The two schemes were as follows:
  • Input eight frames of images to directly predict the future eight frames of satellite images;
  • Input eight frames of images and predict one future frame; put the predicted frame back into the model as an input frame and repeat, predicting the future eight frames step by step.
Both input schemes feed the first eight frames of satellite observation images (15-min intervals) into the network. Scheme 1 outputs all eight forecast frames (15-min intervals) at once, which significantly speeds up forecasting and improves its efficiency. Scheme 2 outputs the eight forecast frames (15-min intervals) step by step: the network predicts only the next frame; after a frame is predicted, the result is appended to the end of the previous time series and the sequence is input into the network again, and this cycle repeats eight times (a sketch of this loop is given below). Scheme 2 borrows from the recurrent-network style of prediction, which preserves the consistency of temporal information. We designed these two schemes because the SmaAt-UNet authors focused on single-frame prediction, whereas we aim for higher temporal resolution and longer prediction results; comparing the two schemes lets us test how the CNN predicts best. The predictions of the two input schemes are shown in Figure 5.
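Scheme 1 is a single forward pass: the network maps an 8-channel input tensor to an 8-channel output tensor. A minimal sketch of the Scheme 2 loop follows; the sliding window that drops the oldest frame when the newest prediction is appended is our assumption (needed to keep the input at eight frames), and the single-frame model variant is hypothetical.

```python
import torch

@torch.no_grad()
def predict_scheme2(model, inputs, steps=8):
    """Scheme 2: roll a single-frame model forward one frame at a time.
    inputs: (batch, 8, 256, 256); model maps 8 frames to the next 1 frame."""
    frames = inputs
    outputs = []
    for _ in range(steps):
        nxt = model(frames)                               # (batch, 1, 256, 256)
        outputs.append(nxt)
        frames = torch.cat([frames[:, 1:], nxt], dim=1)   # slide the time window
    return torch.cat(outputs, dim=1)                      # (batch, 8, 256, 256)
```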
We constructed the input data with dimensions 8 × 256 × 256, with the channel dimension of the data serving as the time dimension. The Adam optimizer with a learning rate of 0.001 was used for training. The training data were fed into the network in shuffled mini-batches of size 64, and the loss function was the mean square error (MSE). The training process used a variable learning rate strategy that decreases the learning rate as the number of iterations increases. Our deep learning model was built with the PyTorch framework, and training was accelerated on an NVIDIA RTX 3090 graphics card.
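A training loop matching the stated configuration is sketched below; the paper does not give the epoch count or the exact decay schedule, so those values are placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, train_set, epochs=50, device="cuda"):
    """Adam at lr 1e-3, shuffled mini-batches of 64, MSE loss, and a
    decaying learning rate (StepLR here is a placeholder schedule)."""
    model.to(device)
    loader = DataLoader(train_set, batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:                  # x, y: (batch, 8, 256, 256)
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
        scheduler.step()
```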

3.3. Evaluation Methods

We used three metrics to evaluate the performance of the proposed model in satellite image prediction: the mean square error (MSE), mean absolute error (MAE), and coefficient of determination (R2). The MSE describes how strongly the predicted satellite images deviate from the actual observations; the MAE describes the average magnitude of the prediction error; and R2 describes the agreement between the predicted satellite images and the actual observations.
$$\mathrm{MSE} = \frac{1}{m}\sum_{i=1}^{m}\left(\mathrm{predict}_i - \mathrm{true}_i\right)^2 \qquad (1)$$

$$\mathrm{MAE} = \frac{1}{m}\sum_{i=1}^{m}\left|\mathrm{predict}_i - \mathrm{true}_i\right| \qquad (2)$$

$$R^2 = 1 - \frac{\sum_{i=1}^{m}\left(\mathrm{predict}_i - \mathrm{true}_i\right)^2}{\sum_{i=1}^{m}\left(\mathrm{true}_{\mathrm{mean}} - \mathrm{true}_i\right)^2} \qquad (3)$$
In Equations (1)–(3), m is the number of pixels in the image, "predict" refers to the satellite image predicted by the model, and "true" refers to the actual observation at the same time as the predicted image. We use these metrics in the evaluation sections to quantitatively assess model performance at different prediction times.
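Equations (1)–(3) translate directly into code; a sketch of the per-image evaluation follows.

```python
import numpy as np

def evaluate(predict, true):
    """MSE, MAE, and R2 over the m pixels of one image, per Equations (1)-(3)."""
    predict = np.asarray(predict, dtype=float).ravel()
    true = np.asarray(true, dtype=float).ravel()
    mse = np.mean((predict - true) ** 2)
    mae = np.mean(np.abs(predict - true))
    r2 = 1.0 - np.sum((predict - true) ** 2) / np.sum((true.mean() - true) ** 2)
    return mse, mae, r2
```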

4. Results and Discussion

4.1. Comparison of Two Prediction Schemes

In this section, we used the CSIP-Net network to compare and evaluate the two prediction schemes presented in Section 3.2, with observations from Band 9 of the FY-4A satellite as input. Both schemes predicted eight frames (120 min), and the evaluation used the metrics from Section 3.3. These comparison experiments allowed us to quantitatively evaluate the two prediction schemes and find the more suitable scheme for satellite image prediction.
The quantitative evaluation metrics of the Band 9 prediction results are shown in Figure 6. The quantitative comparison shows that Scheme 1 makes better predictions over longer periods: its error at longer prediction times is lower than Scheme 2's, and the coefficients of determination likewise show that Scheme 1 is better at longer horizons. Scheme 2, by contrast, obtains better results than Scheme 1 for the first two frames. In addition, we averaged the quantitative evaluation results of the two schemes to make the comparison more direct. The mean values are shown in Table 2, and time series comparisons of the two schemes' predictions are shown in Figure 7.
The first row of images in Figure 7 shows the actual infrared images from the AGRI of the FY-4A satellite at 15-min intervals. The second and third rows show the visualization results of Scheme 1 and Scheme 2, respectively: the first eight images in each row are the absolute errors between the predictions and the ground truth, arranged in prediction order, and describe the bias of the predictions relative to the actual observations; the last eight images are the predicted satellite images, arranged in time order and matching the observations in the first row. From Figure 7 it can be seen that Scheme 1 better captures long-term variability and gives a better description of cloud motion, while Scheme 2 generates relatively sharper images but does not capture long-term variability well. We attribute this to error accumulation: in Scheme 2, errors accumulate with every predicted frame, which degrades the long-term results. The results in Figure 6 show that Scheme 2 performs well for the first two predicted frames, but errors gradually accumulate as the prediction proceeds, and Scheme 2 cannot capture long-term change characteristics as well as Scheme 1, leading to poor long-term performance. The results in Table 2 also show that Scheme 1 achieves better overall prediction. We therefore used Scheme 1 as the model's prediction scheme in the experiments.

4.2. Comparison of Multiple Deep Learning Models

After the evaluation in Section 4.1, Scheme 1 was used as the prediction scheme for all models. We selected the U-Net [36], the SmaAt-UNet [22], and our proposed CSIP-Net for the prediction performance comparison. In addition, we removed the RDB units and the CBAM from CSIP-Net, respectively, as ablation experiments, to evaluate the contribution of each module to the network's prediction performance. The comparison experiments again used the MSE, MAE, and R2 as evaluation metrics. Figure 8 shows the prediction performance of the U-Net, SmaAt-UNet, CSIP-Net, U-Net + RDB, and U-Net + CBAM on Band 9.
We also averaged the quantitative evaluation results of the compared deep learning models to make the comparison more direct. The averages are shown in Table 3, and time series comparisons of the models' predictions are shown in Figure 9.
According to the results in Figure 9 and Table 3, CSIP-Net achieves better prediction results than the other models, making good predictions of cloud changes and achieving the best scores on all evaluation metrics. The sequence images in Figure 9 also show that CSIP-Net is more accurate in regions with stronger changes, and its overall prediction error is smaller. For example, in the middle part of the image, where several obvious cloud change trends appear, CSIP-Net captures those trends most accurately among the compared models. The comparison between CSIP-Net, U-Net, and SmaAt-UNet shows that a stronger encoder brings the prediction results closer to the actual observation: in the evaluation results in Figure 8, the SmaAt-UNet is much less effective than the U-Net and CSIP-Net in long-term prediction error, because the depthwise separable convolutions it uses weaken the model's feature extraction ability. The weaker encoder prevents the SmaAt-UNet from capturing small trends and making accurate predictions, whereas CSIP-Net achieves more refined predictions by improving the encoder's feature extraction. In addition, the comparison of CSIP-Net, U-Net + RDB, and U-Net + CBAM indicates that the CBAM, being attention-based, assigns higher weight to the regions that change in the prediction task; under its influence the network pays more attention to changing regions and predicts them better, but this weight assignment somewhat hurts predictive capability in regions where changes are not significant. The comparison in Figure 9 shows that the CBAM introduces a larger error in regions with insignificant variation [42]. This is why the three models have similar MAE values while the U-Net + CBAM trails the other two slightly in R2, which reflects the prediction performance in the less variable regions. Overall, CSIP-Net obtained the best prediction results in the multi-model comparison and was able to accomplish the satellite image prediction task.

4.3. Multi-Band Result Evaluation

The AGRI on the FY-4A satellite observes many different wavelengths, and each observation band has different main applications. We therefore conducted short-term prediction experiments for all six infrared bands, Band 9 to Band 14, and evaluated the prediction performance of each. These experiments used the same network and parameter settings as for Band 9, and the results were evaluated with the metrics presented in Section 3.3. The prediction evaluation results for Band 9 to Band 14 are shown in Figure 10.
According to the results in Figure 10, the prediction errors of the network are not consistent across bands, whereas the trend of the coefficient of determination over prediction time is highly consistent among bands. We further investigated the reason for this discrepancy: the degree of variation in brightness temperature over time differs widely among bands, which leads to significant inconsistencies when the predictions of different bands are evaluated with the MSE. To further evaluate the application potential of the model predictions, we combined the prediction results of these six bands in Section 5 and performed precipitation area detection experiments on the multi-band predictions.

5. Predictive Data Performance Evaluation

5.1. Predictive Data Evaluation Method

To objectively assess the validity of the predicted data, we combined the results of the multi-band predictions and performed precipitation area detection experiments on the combined data. The precipitation area detection experiment follows the precipitation cloud detection method proposed by Tao et al., in which a network identifies precipitation clouds from infrared observation images of the FY-4A satellite [43]. We conducted these experiments with CSIP-Net according to this method. The training inputs were Bands 9 to 14 of the FY-4A satellite plus the brightness temperature difference between Band 12 and Band 9. The training targets were taken from the Integrated Multi-satellitE Retrievals for GPM (GPM IMERG) product of the Global Precipitation Measurement mission [44]. We interpolated the GPM IMERG product to a spatial resolution of 0.05° × 0.05°; regions with rainfall rates greater than 0.01 mm/h were labeled as precipitation regions and the rest as no precipitation, yielding the precipitation area detection target. The preprocessed data from May–September 2018–2019 were used as the training dataset. The loss function combines binary cross entropy (BCE) with the Dice loss [45] (a sketch is given below), and Adam is the model's optimizer.
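A minimal sketch of the BCE + Dice loss follows; the equal weighting of the two terms and the smoothing constant are common defaults, not values reported in the paper.

```python
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    """BCE + Dice loss for the binary precipitation-area mask. The target is
    1 where the interpolated GPM IMERG rain rate exceeds 0.01 mm/h, else 0."""
    def __init__(self, smooth=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.smooth = smooth  # smoothing constant is an assumed default

    def forward(self, logits, target):
        prob = torch.sigmoid(logits)
        intersection = (prob * target).sum()
        dice = 1.0 - (2.0 * intersection + self.smooth) / (
            prob.sum() + target.sum() + self.smooth)
        return self.bce(logits, target) + dice
```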
We combined the multi-band predictions output by CSIP-Net and fed the combined data into the trained precipitation area detection model. The availability of the predicted images was further evaluated by comparing the detection results across prediction time frames. Since the temporal resolution of the model predictions is higher than that of the GPM IMERG product, we time-matched the predictions to GPM IMERG and fed only time-matched data into the model, inputting the predicted results into the detection network separately for each forecast frame. The segmentation network outputs a probability map through the softmax function, which is then converted to 0 (no precipitation) or 1 (precipitation occurred), giving the precipitation area detection results for the predicted satellite images. The detection results were evaluated using the pixel accuracy (PA), probability of detection (POD), false alarm ratio (FAR), and critical success index (CSI), defined in Equations (4)–(7). In these equations, H is the number of cases where precipitation was predicted and actually occurred (hits); F is the number of cases where precipitation was predicted but not observed (false alarms); M is the number of cases where precipitation occurred but was not predicted (misses); and Z is the number of cases with neither predicted nor observed precipitation (correct negatives).
$$\mathrm{PA} = \frac{H + Z}{H + F + M + Z} \qquad (4)$$

$$\mathrm{POD} = \frac{H}{H + M} \qquad (5)$$

$$\mathrm{FAR} = \frac{F}{H + F} \qquad (6)$$

$$\mathrm{CSI} = \frac{H}{H + F + M} \qquad (7)$$
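The four scores follow directly from the 2 × 2 contingency table; a sketch of their computation is shown below, with PA counting both hits and correct negatives as correctly classified pixels.

```python
import numpy as np

def detection_scores(pred_mask, true_mask):
    """PA, POD, FAR, and CSI per Equations (4)-(7).
    pred_mask / true_mask: boolean arrays, True = precipitation."""
    h = np.sum(pred_mask & true_mask)      # hits
    f = np.sum(pred_mask & ~true_mask)     # false alarms
    m = np.sum(~pred_mask & true_mask)     # misses
    z = np.sum(~pred_mask & ~true_mask)    # correct negatives
    pa = (h + z) / (h + f + m + z)
    pod = h / (h + m)
    far = f / (h + f)
    csi = h / (h + f + m)
    return pa, pod, far, csi
```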

5.2. Precipitation Area Detection Results

We input the predicted data into the trained precipitation detection model separately for each prediction step and obtained the detection results. The detection results were matched to the actual times of the GPM IMERG product and evaluated using Equations (4)–(7). The change in the detection results with prediction step is shown in Figure 11.
According to the precipitation area detection results in Figure 11, the PA, POD, and CSI decrease to some extent as the prediction time increases, and the FAR increases slightly over time. This behavior is consistent with the general characteristic of prediction tasks: performance degrades as the prediction horizon grows. Nevertheless, the PA of precipitation area detection still reached 0.855 after 2 h. In addition, when the original 2020 observations were fed directly into the model, the PA of the detection results reached 0.898; compared with the 2-h prediction, the PA decreased by only 0.043, which shows that the network's predictions can support the precipitation area detection task well. Since precipitation detection uses the brightness temperatures and brightness temperature differences of several bands, good cross-band consistency of the predictions is required for the trained network to identify precipitation areas accurately. In general, CSIP-Net accomplishes the satellite image prediction task well.

6. Conclusions

In this paper, a convolutional satellite image prediction network (CSIP-Net) is proposed, which obtains better image prediction capability by adding RDB and CBAM units in the encoding stage. CSIP-Net can predict FY-4A satellite infrared band observation images over a period of two hours at an image resolution of 0.05° × 0.05°. The MSE, MAE, and R2 were used as evaluation metrics for the network's predictions: the average MSE was 4.77 Kelvin, the average MAE was 1.25, and the average R2 was 0.86. The main conclusions of this paper are as follows.
In the evaluation, the impact of the data input scheme on the prediction performance of the network was assessed. Based on the comparison experiment, the chosen input scheme feeds eight frames of images into the network to directly predict the next eight frames. Using this scheme in all comparison experiments, CSIP-Net achieved the best prediction performance among the compared CNN-based deep learning networks: the quantitative results show that CSIP-Net attains the best satellite image prediction performance among the evaluated deep learning models, and the predicted images show that the network captures time-varying features and accurately predicts cloud formation and dissipation. The prediction performance of the different AGRI bands was also evaluated, and the results showed that CSIP-Net produced highly consistent predictions across the bands. Furthermore, to evaluate the relationship between the band-wise predictions and their application value, the predictions of all bands were combined and used to detect precipitation areas. The findings indicate that the network's predictions have high application value.
Building on this work, we will further improve the proposed algorithm, including extending the prediction horizon and making the network more lightweight while obtaining better predictive performance. We will also explore further applications of the prediction results and make further attempts to improve the algorithm toward more accurate precipitation prediction.

Author Contributions

Conceptualization, Y.J. and W.C.; methodology, Y.J. and W.C.; software, Y.J.; validation, Y.J., W.C. and F.G.; formal analysis, Y.J. and W.C.; investigation, Y.J. and W.C.; resources, F.G. and S.Z.; data curation, W.C. and C.L.; writing—original draft preparation, Y.J. and W.C.; writing—review and editing, Y.J., W.C. and J.S.; visualization, Y.J. and W.C.; supervision, S.Z.; project administration, S.Z.; funding acquisition, F.G. and S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China (No.2022YFE0106400), special funds from Laoshan Laboratory (Nos. LSKJ202202200; LSKJ202202202), the National Natural Science Foundation of China (Grant No. 41830964), and Shandong Province’s “Taishan” Scientist Program (ts201712017).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the China National Satellite Meteorological Center and the National Aeronautics and Space Administration for their data provision.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fernández, J.G.; Mehrkanoon, S. Broad-UNet: Multi-Scale Feature Learning for Nowcasting Tasks. Neural Netw. 2021, 144, 419–427.
  2. Cogato, A.; Meggio, F.; De Antoni Migliorati, M.; Marinello, F. Extreme Weather Events in Agriculture: A Systematic Review. Sustainability 2019, 11, 2547.
  3. Hwang, Y.; Clark, A.J.; Lakshmanan, V.; Koch, S.E. Improved Nowcasts by Blending Extrapolation and Model Forecasts. Weather Forecast. 2015, 30, 1201–1217.
  4. Pan, X.; Lu, Y.; Zhao, K.; Huang, H.; Wang, M.; Chen, H. Improving Nowcasting of Convective Development by Incorporating Polarimetric Radar Variables Into a Deep-Learning Model. Geophys. Res. Lett. 2021, 48, e2021GL095302.
  5. PC, S.; Misumi, R.; Nakatani, T.; Iwanami, K.; Maki, M.; Seed, A.W.; Hirano, K. Comparison of Rainfall Nowcasting Derived from the STEPS Model and JMA Precipitation Nowcasts. Hydrol. Res. Lett. 2015, 9, 54–60.
  6. Xian, D.; Zhang, P.; Gao, L.; Sun, R.; Zhang, H.; Jia, X. Fengyun Meteorological Satellite Products for Earth System Science Applications. Adv. Atmos. Sci. 2021, 38, 1267–1284.
  7. Mekonnen, K.; Melesse, A.M.; Woldesenbet, T.A. Spatial Evaluation of Satellite-Retrieved Extreme Rainfall Rates in the Upper Awash River Basin, Ethiopia. Atmos. Res. 2021, 249, 105297.
  8. Lagerquist, R.; Stewart, J.Q.; Ebert-Uphoff, I.; Kumler, C. Using Deep Learning to Nowcast the Spatial Coverage of Convection from Himawari-8 Satellite Data. Mon. Weather Rev. 2021, 149, 3897–3921.
  9. Bauer, P.; Thorpe, A.; Brunet, G. The Quiet Revolution of Numerical Weather Prediction. Nature 2015, 525, 47–55.
  10. Vannitsem, S.; Bremnes, J.B.; Demaeyer, J.; Evans, G.R.; Flowerdew, J.; Hemri, S.; Lerch, S.; Roberts, N.; Theis, S.; Atencia, A.; et al. Statistical Postprocessing for Weather Forecasts: Review, Challenges, and Avenues in a Big Data World. Bull. Am. Meteorol. Soc. 2021, 102, E681–E699.
  11. Nicolis, C.; Perdigao, R.A.P.; Vannitsem, S. Dynamics of Prediction Errors under the Combined Effect of Initial Condition and Model Errors. J. Atmos. Sci. 2009, 66, 766–778.
  12. Vannitsem, S. Predictability of Large-Scale Atmospheric Motions: Lyapunov Exponents and Error Dynamics. Chaos Interdiscip. J. Nonlinear Sci. 2017, 27, 032101.
  13. Han, L.; Chen, M.; Chen, K.; Chen, H.; Zhang, Y.; Lu, B.; Song, L.; Qin, R. A Deep Learning Method for Bias Correction of ECMWF 24–240 h Forecasts. Adv. Atmos. Sci. 2021, 38, 1444–1459.
  14. Han, L.; Liang, H.; Chen, H.; Zhang, W.; Ge, Y. Convective Precipitation Nowcasting Using U-Net Model. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–8.
  15. Chen, G.; Wang, W. Short-Term Precipitation Prediction for Contiguous United States Using Deep Learning. Geophys. Res. Lett. 2022, 49, e2022GL097904.
  16. Zhou, K.; Zheng, Y.; Dong, W.; Wang, T. A Deep Learning Network for Cloud-to-Ground Lightning Nowcasting with Multisource Data. J. Atmos. Ocean. Technol. 2020, 37, 927–942.
  17. Tran, Q.-K.; Song, S.-K. Multi-Channel Weather Radar Echo Extrapolation with Convolutional Recurrent Neural Networks. Remote Sens. 2019, 11, 2303.
  18. Ayzel, G.; Scheffer, T.; Heistermann, M. RainNet v1.0: A Convolutional Neural Network for Radar-Based Precipitation Nowcasting. Geosci. Model Dev. 2020, 13, 2631–2644.
  19. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.-Y.; Wong, W.; Woo, W. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; MIT Press: Cambridge, MA, USA, 2015; Volume 1, pp. 802–810.
  20. Wang, Y.; Wu, H.; Zhang, J.; Gao, Z.; Wang, J.; Yu, P.S.; Long, M. PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2022.
  21. Sønderby, C.K.; Espeholt, L.; Heek, J.; Dehghani, M.; Oliver, A.; Salimans, T.; Agrawal, S.; Hickey, J.; Kalchbrenner, N. MetNet: A Neural Weather Model for Precipitation Forecasting. arXiv 2020, arXiv:2003.12140.
  22. Trebing, K.; Stańczyk, T.; Mehrkanoon, S. SmaAt-UNet: Precipitation Nowcasting Using a Small Attention-UNet Architecture. Pattern Recognit. Lett. 2021, 145, 178–186.
  23. Gao, Z.; Tan, C.; Wu, L.; Li, S.Z. SimVP: Simpler Yet Better Video Prediction. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 3170–3180.
  24. Duan, M.; Xia, J.; Yan, Z.; Han, L.; Zhang, L.; Xia, H.; Yu, S. Reconstruction of the Radar Reflectivity of Convective Storms Based on Deep Learning and Himawari-8 Observations. Remote Sens. 2021, 13, 3330.
  25. Franch, G.; Maggio, V.; Coviello, L.; Pendesini, M.; Jurman, G.; Furlanello, C. TAASRAD19, a High-Resolution Weather Radar Reflectivity Dataset for Precipitation Nowcasting. Sci. Data 2020, 7, 234.
  26. Song, K.; Yang, G.; Wang, Q.; Xu, C.; Liu, J.; Liu, W.; Shi, C.; Wang, Y.; Zhang, G.; Yu, X.; et al. Deep Learning Prediction of Incoming Rainfalls: An Operational Service for the City of Beijing China. In Proceedings of the 2019 International Conference on Data Mining Workshops (ICDMW), Beijing, China, 8–11 November 2019; pp. 180–185.
  27. Lee, J.-H.; Lee, S.S.; Kim, H.G.; Song, S.-K.; Kim, S.; Ro, Y.M. MCSIP Net: Multichannel Satellite Image Prediction via Deep Neural Network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2212–2224.
  28. Xu, Z.; Du, J.; Wang, J.; Jiang, C.; Ren, Y. Satellite Image Prediction Relying on GAN and LSTM Neural Networks. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–6.
  29. Ma, J.; Tang, L.; Xu, M.; Zhang, H.; Xiao, G. STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection. IEEE Trans. Instrum. Meas. 2021, 70, 1–13.
  30. Hayatbini, N.; Kong, B.; Hsu, K.; Nguyen, P.; Sorooshian, S.; Stephens, G.; Fowlkes, C.; Nemani, R. Conditional Generative Adversarial Networks (CGANs) for Near Real-Time Precipitation Estimation from Multispectral GOES-16 Satellite Imageries—PERSIANN-CGAN. Remote Sens. 2019, 11, 2193.
  31. Gao, Y.; Guan, J.; Zhang, F.; Wang, X.; Long, Z. Attention-Unet-Based Near-Real-Time Precipitation Estimation from Fengyun-4A Satellite Imageries. Remote Sens. 2022, 14, 2925.
  32. Hong, Y.; Hsu, K.-L.; Sorooshian, S.; Gao, X. Precipitation Estimation from Remotely Sensed Imagery Using an Artificial Neural Network Cloud Classification System. J. Appl. Meteorol. 2004, 43, 1834–1853.
  33. Min, M.; Bai, C.; Guo, J.; Sun, F.; Liu, C.; Wang, F.; Xu, H.; Tang, S.; Li, B.; Di, D.; et al. Estimating Summertime Precipitation from Himawari-8 and Global Forecast System Based on Machine Learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2557–2570.
  34. Wang, C.; Tang, G.; Xiong, W.; Ma, Z.; Zhu, S. Infrared Precipitation Estimation Using Convolutional Neural Network for FengYun Satellites. J. Hydrol. 2021, 603, 127113.
  35. Wang, C.; Xu, J.; Tang, G.; Yang, Y.; Hong, Y. Infrared Precipitation Estimation Using Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8612–8625.
  36. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. ISBN 978-3-319-24573-7.
  37. Zhang, P.; Lu, Q.; Hu, X.; Gu, S.; Yang, L.; Min, M.; Chen, L.; Xu, N.; Sun, L.; Bai, W.; et al. Latest Progress of the Chinese Meteorological Satellite Program and Core Data Processing Technologies. Adv. Atmos. Sci. 2019, 36, 1027–1045.
  38. Hu, Y.; Zhang, Y.; Yan, L.; Li, X.-M.; Dou, C.; Jia, G.; Si, Y.; Zhang, L. Evaluation of the Radiometric Calibration of FY4A-AGRI Thermal Infrared Data Using Lake Qinghai. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8040–8050.
  39. Yang, J.; Zhang, Z.; Wei, C.; Lu, F.; Guo, Q. Introducing the New Generation of Chinese Geostationary Weather Satellites, Fengyun-4. Bull. Am. Meteorol. Soc. 2017, 98, 1637–1658.
  40. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2472–2481.
  41. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
  42. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Computer Vision—ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11211, pp. 3–19. ISBN 978-3-030-01233-5.
  43. Tao, R.; Zhang, Y.; Wang, L.; Cai, P.; Tan, H. Detection of Precipitation Cloud over the Tibet Based on the Improved U-Net. Comput. Mater. Contin. 2020, 65, 2455–2474.
  44. Pradhan, R.K.; Markonis, Y.; Vargas Godoy, M.R.; Villalba-Pradas, A.; Andreadis, K.M.; Nikolopoulos, E.I.; Papalexiou, S.M.; Rahim, A.; Tapiador, F.J.; Hanel, M. Review of GPM IMERG Performance: A Global Perspective. Remote Sens. Environ. 2022, 268, 112754.
  45. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Stoyanov, D., Taylor, Z., Carneiro, G., Syeda-Mahmood, T., Martel, A., Maier-Hein, L., Tavares, J.M.R.S., Bradley, A., Papa, J.P., Belagiannis, V., et al., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11045, pp. 3–11. ISBN 978-3-030-00888-8.
Figure 1. The infrared band observation images of the FY-4A satellite (example time: 3 June 2020).
Figure 2. The preprocessing flow of the data.
Figure 3. The structure of the CSIP-Net. The CSIP-Net is a fully convolutional network with an encoding-decoding structure, using four downsampling and four upsampling. A CBAM was added to the RDB unit and the Res Conv unit. The numbers in the figure represent the number of channels in each layer of the model. The input of the network is the satellite infrared observation image sequence data and the output result is the predicted satellite infrared image sequence for the future 2 h.
Figure 4. The structure of the RDB unit. The yellow, orange, and green lines in the figure represent the merged features in the channel dimension, and the red line represents the summation of the original features with the features after the convolution layer.
Figure 5. The predictions of the two input schemes.
Figure 6. The quantitative evaluation metrics of the Band 9 prediction results.
Figure 7. The time series comparison images of the prediction results of the two prediction schemes (example time: 4 May 2020).
Figure 8. Model prediction performance evaluation results.
Figure 9. The time series comparison images of the prediction results of the compared models (example time: 9 July 2020).
Figure 10. The prediction evaluation results for Band 9 to Band 14.
Figure 11. The change in the detection results with the prediction step.
Table 1. The AGRI band observation information of the FY-4A satellite 1.

| Spectral Coverage | Central Wavelength | Spectral Bandwidth | Spatial Resolution | Main Applications |
| Visible | 0.47 µm | 0.45–0.49 µm | 1 km | Aerosol |
| Visible | 0.65 µm | 0.55–0.75 µm | 0.5–1 km | Fog, cloud |
| Visible | 0.825 µm | 0.75–0.90 µm | 1 km | Vegetation |
| Short-wave infrared | 1.375 µm | 1.36–1.39 µm | 2 km | Cirrus |
| Short-wave infrared | 1.61 µm | 1.58–1.64 µm | 2 km | Cloud, snow |
| Short-wave infrared | 2.25 µm | 2.1–2.35 µm | 2–4 km | Cirrus, aerosol |
| Mid-wave infrared | 3.75 µm | 3.5–4.0 µm | 2 km | Fire |
| Mid-wave infrared | 3.75 µm | 3.5–4.0 µm | 2 km | Land surface |
| Water vapor | 6.25 µm | 5.8–6.7 µm | 4 km | Upper-level water vapor |
| Water vapor | 7.1 µm | 6.9–7.3 µm | 4 km | Mid-level water vapor |
| Long-wave infrared | 8.5 µm | 8.0–9.0 µm | 4 km | Volcanic ash, cloud-top phase |
| Long-wave infrared | 10.7 µm | 10.3–11.3 µm | 4 km | Sea surface temperature, land surface temperature |
| Long-wave infrared | 12.0 µm | 11.5–12.5 µm | 4 km | Clouds, low-level water vapor |
| Long-wave infrared | 13.5 µm | 13.2–13.8 µm | 4 km | Clouds, air temperature |
1 Available online: http://www.nsmc.org.cn/nsmc/en/instrument/AGRI.html (accessed on 4 December 2022).
Table 2. The mean values of the quantitative evaluation results of the two prediction schemes.

| | MSE | MAE | R2 |
| Scheme 1 | 4.77 | 1.25 | 0.86 |
| Scheme 2 | 5.35 | 1.30 | 0.84 |
Table 3. The average quantitative evaluation results of the compared deep learning models.

| | MSE | MAE | R2 |
| U-Net | 5.03 | 1.29 | 0.85 |
| SmaAt-UNet | 5.78 | 1.32 | 0.84 |
| U-Net + RDB | 4.93 | 1.25 | 0.86 |
| U-Net + CBAM | 4.87 | 1.25 | 0.85 |
| CSIP-Net | 4.77 | 1.25 | 0.86 |

Citation: Jiang, Y.; Cheng, W.; Gao, F.; Zhang, S.; Liu, C.; Sun, J. CSIP-Net: Convolutional Satellite Image Prediction Network for Meteorological Satellite Infrared Observation Imaging. Atmosphere 2023, 14, 25. https://doi.org/10.3390/atmos14010025
