Technical Note

NDFTC: A New Detection Framework of Tropical Cyclones from Meteorological Satellite Images with Deep Transfer Learning

Shanchen Pang, Pengfei Xie, Danya Xu, Fan Meng, Xixi Tao, Bowen Li, Ying Li and Tao Song
1 College of Computer Science and Technology, China University of Petroleum, Qingdao 266580, China
2 Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Zhuhai 519080, China
3 School of Geosciences, China University of Petroleum, Qingdao 266580, China
4 School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(9), 1860; https://doi.org/10.3390/rs13091860
Submission received: 29 March 2021 / Revised: 19 April 2021 / Accepted: 6 May 2021 / Published: 10 May 2021
(This article belongs to the Special Issue Deep Learning and Computer Vision in Remote Sensing)

Abstract

Accurate detection of tropical cyclones (TCs) is important to prevent and mitigate the natural disasters associated with them. Deep transfer learning methods have advantages in detection tasks because they can further improve the stability and accuracy of a detection model. Therefore, on the basis of deep transfer learning, we propose a new detection framework of tropical cyclones (NDFTC) from meteorological satellite images that combines deep convolutional generative adversarial networks (DCGAN) and the You Only Look Once (YOLO) v3 model. The algorithm process of NDFTC consists of three major steps: data augmentation, a pre-training phase, and transfer learning. First, to improve the utilization of finite data, DCGAN is used as the data augmentation method to generate images that simulate TCs. Second, to extract the salient characteristics of TCs, the generated images obtained from DCGAN are input into the detection model YOLOv3 in the pre-training phase. Third, following the network-based deep transfer learning approach, we train the detection model with real TC images, with its initial weights transferred from the YOLOv3 trained on generated images. Training with real images helps extract universal characteristics of TCs, and using the transferred weights as initial weights improves the stability and accuracy of the model. The experimental results show that the NDFTC has better performance, with an accuracy (ACC) of 97.78% and average precision (AP) of 81.39%, compared to YOLOv3, with an ACC of 93.96% and AP of 80.64%.


1. Introduction

A tropical cyclone (TC) is a catastrophic weather system with enormous destructive force [1,2]. TCs encompass hurricanes, typhoons, and their regional equivalents; they pose a serious threat to people’s lives and property and cause huge losses to agricultural production and transportation [3,4,5,6,7]. Therefore, accurate detection of TCs is the key to reducing these hazards [8,9].
Traditionally, the mainstream detection methods for TCs are numerical weather prediction (NWP) models, and a great deal of work has gone into developing forecast systems that provide guidance for TC prediction based on physics parameterizations and modeling techniques [10,11]. For example, the Met Office has in recent years provided objective real-time guidance for TC prediction and detection using its global numerical weather forecast model [12]. However, prediction error grows as numerical dynamical models simulate farther into the future, because of their dependence on initial values [13].
A significant advantage of machine learning (ML) methods over traditional NWP-based detection methods is that ML methods do not rely on physics-based assumptions [14]. Decision trees (DT) have been trained to classify different levels of TCs, achieving an accuracy of about 84.6% for TC prediction 24 h in advance [15]. In addition, a convective initiation algorithm was developed for the Communication, Ocean, and Meteorological Satellite Meteorological Imager based on DT, random forest (RF), and support vector machines (SVM) [16,17].
Recently, deep learning models, a subset of ML methods, have performed well in detection tasks [18,19,20,21]. For detection tasks in images, object detection models based on deep learning fall into two streams according to their processing stages: one-stage detection models and two-stage detection models. The YOLO series [22,23,24], SSD [25], and RetinaNet [26] are typical one-stage detection models, whereas R-CNN [27], Fast R-CNN [28], and Faster R-CNN [29] are classic two-stage detection models. Broadly speaking, two-stage detection models achieve high accuracy through region proposals at the cost of large-scale computing resources, whereas one-stage detection models perform better when computing resources are limited.
Additionally, deep learning models have been introduced into TC detection as well, for example, the use of deep neural networks (DNN) for existing TC detection [30], precursor detection of TCs [31], tropical and extratropical cyclone detection [32], TC track forecasting [33], and TC precursor detection in a cloud-resolving global nonhydrostatic atmospheric model [34]. However, deep learning models usually require a large number of training samples, because it is difficult to achieve high accuracy with limited training samples in computer vision and other fields [35,36,37]. Transfer learning can effectively alleviate this problem by transferring knowledge from a source domain to a target domain, further improving the accuracy of deep learning models [38,39,40,41].
Deep transfer learning studies how to make use of knowledge transferred from other fields by DNN [42]. Depending on the transfer technique, there are four main categories: instance-based, mapping-based, network-based, and adversarial-based deep transfer learning [42,43,44,45,46]. Instance-based deep transfer learning selects partial instances from the source domain for the training set in the target domain [43]. Mapping-based deep transfer learning maps partial instances from the source domain and target domain into a new data space [44]. Network-based deep transfer learning reuses a partial network and its connection parameters from the source domain, transferring them to become part of the DNN used in the target domain [45]. Adversarial-based deep transfer learning introduces adversarial technologies such as generative adversarial nets (GAN) to find transferable formulations that apply to both the source domain and the target domain [46]. It is also worth noting that GAN has advantages in image processing and few-shot learning [47,48,49].
In order to improve the accuracy of a TC detection model with limited training samples, we propose, on the basis of deep transfer learning, a new detection framework of tropical cyclones (NDFTC) from meteorological satellite images that combines deep convolutional generative adversarial networks (DCGAN) and the You Only Look Once (YOLO) v3 model.
The main contributions of this paper are as follows:
(1)
In view of the finite data volume and complex backgrounds encountered in meteorological satellite images, a new detection framework of tropical cyclones (NDFTC) is proposed for accurate TC detection. The algorithm process of NDFTC consists of three major steps: data augmentation, a pre-training phase, and transfer learning, which ensures the effectiveness of detecting different kinds of TCs in complex backgrounds with finite data volume.
(2)
We used DCGAN as the data augmentation method instead of traditional data augmentation methods such as flipping and cropping. DCGAN can generate images that simulate TCs by learning their salient characteristics, which improves the utilization of finite data.
(3)
We used the YOLOv3 model as the detection model in the pre-training phase. The detection model is trained with the generated images obtained from DCGAN, which can help the model to learn the salient characteristics of TCs.
(4)
In the transfer learning phase, YOLOv3 remains the detection model, and it is trained with real TC images. Most importantly, the initial weights of the model are the weights transferred from the model trained with generated images, which is a typical network-based deep transfer learning method. After that, the detection model can extract universal characteristics from real images of TCs and achieve high accuracy.

2. Materials and Methods

The flowchart of the NDFTC in this paper is illustrated in Figure 1. The framework can be summarized in the following steps: (1) a dataset based on meteorological satellite images of TCs is created; (2) the dataset is divided into three sub-datasets: training dataset 1, training dataset 2, and a test dataset; (3) DCGAN is used as the data augmentation method to generate images that simulate TCs; (4) the generated images obtained from DCGAN are input into the detection model YOLOv3 in the pre-training phase; and (5) the detection model is trained with real images of TCs, with its initial weights transferred from the YOLOv3 trained with generated images.

2.1. Deep Convolutional Generative Adversarial Networks

As one of the research hotspots of artificial intelligence, generative adversarial networks (GAN) have developed rapidly in recent years and are widely used in image generation [50], image repair [51], visual prediction of typhoon clouds [52], and other fields.
GAN contains a generator and a discriminator [50]. The purpose of the generator is to make the discriminator unable to distinguish between real and generated images, whereas the purpose of the discriminator is to distinguish real images from generated ones as reliably as possible. The generator takes an n-dimensional vector as input and outputs an image; it can be any model that produces images, such as a simple fully connected neural network. The discriminator takes an image as input and outputs its label; its structure can likewise be any suitable network, such as one containing convolutions.
Deep convolutional generative adversarial networks (DCGANs) are an improvement on the original GAN [53]. The improvements are empirical rather than derived from strict mathematical proof, and the main changes are as follows. Both the generator and discriminator use convolutional neural networks (CNN). Batch normalization is used in both the generator and the discriminator. Neither the generator nor the discriminator uses pooling layers: the generator replaces convolution layers with fractionally strided convolutions, while the discriminator retains a strided CNN structure. The generator uses ReLU as the activation function, except tanh for the output layer. All layers of the discriminator use Leaky ReLU as the activation function.
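To make these design rules concrete, the following is a minimal PyTorch sketch of a DCGAN generator and discriminator pair. All layer widths and the 32 × 32 single-channel image size are illustrative assumptions for brevity, not the authors' exact configuration.

```python
import torch.nn as nn

class Generator(nn.Module):
    """Maps an n-dimensional noise vector to an image, per the DCGAN rules."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # fractionally strided (transposed) convolutions instead of pooling
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),  # tanh only on the output
        )

    def forward(self, z):        # z: (N, z_dim, 1, 1)
        return self.net(z)       # image in [-1, 1], here 32 x 32

class Discriminator(nn.Module):
    """Maps an image to the probability that it is real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # strided convolutions instead of pooling; Leaky ReLU on every layer
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 4, 1, 0), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)
```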

2.2. You Only Look Once (YOLO) v3 Model

The detection model of NDFTC is the YOLOv3 model [24]. YOLOv3 is used as the detection model because its detection speed is at least two times faster than SSD, RetinaNet, and Faster R-CNN [24], which enables real-time detection of TCs and supports disaster prevention and mitigation. In addition, YOLOv3 draws on the idea of feature pyramid networks, which ensures accurate detection of both large and small objects.
The base network of YOLOv3 is Darknet-53, which uses successive 3 × 3 and 1 × 1 convolutional layers. It has 53 convolutional layers in total, as shown in Figure 1, which is why it is called Darknet-53. In addition, a large number of residual blocks are added to Darknet-53 to prevent gradient problems as the network deepens. In the model, batch normalization is placed before the Leaky ReLU activation function, which alleviates the vanishing gradient problem. It should be noted that concat is not a numerical addition of different feature maps but a direct concatenation: the feature maps are joined along the channel dimension.
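As a small illustration of this distinction, the PyTorch snippet below (illustrative, not taken from the paper's code) contrasts channel-wise concatenation with the element-wise addition used in residual blocks.

```python
import torch

shallow = torch.randn(1, 256, 16, 16)      # feature map from an earlier layer
deep = torch.randn(1, 512, 16, 16)         # up-sampled feature map from a deeper layer

fused = torch.cat([shallow, deep], dim=1)  # concat along channels -> (1, 768, 16, 16)
residual = shallow + shallow               # element-wise addition keeps the shape
print(fused.shape, residual.shape)
```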
As for the change in image size during TC detection, the input meteorological satellite images have a size of 512 × 512 pixels. The model outputs feature maps of three sizes. The first feature map is obtained by down-sampling 32 times, and its size is 16 × 16. The second is obtained by down-sampling 16 times, and its size is 32 × 32. The third is obtained by down-sampling 8 times, and its size is 64 × 64. This down-sampling scheme follows the standard YOLOv3 design by Redmon et al. and aims to obtain TC features at different scales, thereby improving the detection accuracy for different kinds of TCs. In addition, the third dimension of these three feature maps is 18: there are three anchor boxes, and each box has a 1-dimensional confidence value, 4-dimensional prediction values $(x^p, y^p, w^p, h^p)$, and a 1-dimensional object class number, so the depth is 3 × (4 + 1 + 1) = 18.
It is important to note that once the number of anchor boxes is determined, the confidence values, prediction values, and object class numbers are also determined [23]. An anchor box has a 1-dimensional confidence value, which is the IOU between the bounding box and the predicted box and reflects the detection quality of that anchor box [22]. An anchor box has 4-dimensional prediction values, reflecting its coordinate information [22]. An anchor box has only a 1-dimensional object class number, because our study detects only TCs and no other objects.
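The following short sketch reproduces this feature-map arithmetic for a 512 × 512 input; num_anchors and num_classes follow the text, while the loop itself is only a convenience of ours.

```python
input_size = 512
num_anchors = 3    # anchor boxes per grid cell
num_classes = 1    # only TCs are detected

depth = num_anchors * (4 + 1 + num_classes)   # 3 * (4 + 1 + 1) = 18
for stride in (32, 16, 8):
    grid = input_size // stride
    print(f"stride {stride:2d}: feature map {grid} x {grid} x {depth}")
# stride 32: 16 x 16 x 18; stride 16: 32 x 32 x 18; stride 8: 64 x 64 x 18
```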

2.3. Loss Function

The loss function measures the error between the predicted value and the real value and is one of the key factors determining detection performance. The loss of the NDFTC includes the loss of DCGAN and the loss of YOLOv3.

2.3.1. Loss Function of DCGAN

The loss function of DCGAN includes the loss function of generator G and the loss function of discriminator D. When the generator is trained, parameters of the discriminator are fixed. When training the discriminator, parameters of the generator are fixed.
The purpose of the generator is to make the discriminator unable to distinguish between real TC images and generated TC images. First, the adversarial loss is introduced: $G(X)$ denotes the TC images produced by the generator, $Y$ denotes the corresponding real images, and $D(\cdot)$ denotes the probability output by the discriminator. The adversarial loss is as follows:
$$ L_{G_{adv}} = \log\left(1 - D(G(X))\right) \qquad (1) $$
By minimizing Formula (1), the generator can fool the discriminator, meaning the discriminator cannot distinguish between real and generated images. Next, the $L_1$ loss function is introduced to measure the distance between generated images and real images:
$$ L_1 = \sum_{i=1}^{P_w} \sum_{j=1}^{P_h} \left\| G(X)(i,j) - Y(i,j) \right\|_1 \qquad (2) $$
where $(i, j)$ represents pixel coordinates, and $P_w$ and $P_h$ are the width and height of the TC images, respectively.
The generator’s total loss function is as follows:
$$ L_G = \lambda_1 L_{G_{adv}} + \lambda_2 L_1 \qquad (3) $$
where $\lambda_1$ and $\lambda_2$ are empirical weight parameters. The generator can generate high-quality images of TCs by minimizing Formula (3).
The purpose of the discriminator D is to distinguish between the real TC images and the generated TC images. To achieve this goal, the adversarial loss function of the discriminator is as follows:
$$ L_{D_{adv}} = -\log(D(Y)) - \log\left(1 - D(G(X))\right) \qquad (4) $$
For Formula (4), if a real image is wrongly judged as generated, or a generated image is wrongly judged as real, the corresponding log term diverges toward infinity, which means that the discriminator still needs optimization. If the value of Formula (4) decreases gradually, the discriminator is being trained better and better.
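The two losses can be summarized in code. The sketch below follows Formulas (1)-(4) for a PyTorch-style generator G and discriminator D (such as the ones sketched in Section 2.1); the epsilon stabilizer and the reduction choices are our assumptions.

```python
import torch
import torch.nn.functional as F

EPS = 1e-8  # numerical stabilizer for the logarithms

def generator_loss(D, G, x, y, lam1=1.0, lam2=1.0):
    """Formula (3): weighted sum of the adversarial loss (1) and L1 loss (2)."""
    fake = G(x)
    adv = torch.log(1.0 - D(fake) + EPS).mean()   # Formula (1)
    l1 = F.l1_loss(fake, y, reduction="sum")      # Formula (2), pixel-wise L1
    return lam1 * adv + lam2 * l1

def discriminator_loss(D, G, x, y):
    """Formula (4): -log D(Y) - log(1 - D(G(X))); G is frozen via detach()."""
    real = -torch.log(D(y) + EPS).mean()
    fake = -torch.log(1.0 - D(G(x).detach()) + EPS).mean()
    return real + fake
```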

2.3.2. Loss Function of YOLOv3

The loss function of YOLOv3 includes boundary box loss, confidence loss, and classification loss. The smaller the loss value, the better the performance of the model. The parameters involved in the loss function are introduced below.
The model divides the input image into an S × S grid. A grid cell is responsible for detecting a TC if the TC's center falls into that cell. Each grid cell predicts B bounding boxes and their confidence scores, which reflect how confident the model is that a box contains an object.
The first part of the total loss function is the boundary box loss, which is used to measure the difference between the real box and the predicted box, as follows:
$$ L_{box} = \sum_{i=1}^{S^2 \times B} \left[ (x_i^p - x_i^g)^2 + (y_i^p - y_i^g)^2 + (w_i^p - w_i^g)^2 + (h_i^p - h_i^g)^2 \right] \qquad (5) $$
where $i$ indexes the bounding boxes, and $(x_i^p, y_i^p, w_i^p, h_i^p)$ are the positional parameters of the predicted box: $x^p$ and $y^p$ are the center point coordinates, and $w^p$ and $h^p$ are the width and height, respectively. Similarly, $(x_i^g, y_i^g, w_i^g, h_i^g)$ are the parameters of the true box.
The second part of the total loss function is the confidence loss, which reflects how confident the model is that the box contains an object. The confidence loss is as follows:
$$ L_{conf} = -\sum_{i=1}^{S^2 \times B} \left[ h_i \ln c_i + (1 - h_i) \ln(1 - c_i) \right] \qquad (6) $$
where $c_i$ represents the probability of an object being present in anchor box $i$, and $h_i \in \{0, 1\}$ indicates whether the object is present in anchor box $i$, in which 1 means yes and 0 means no.
The third part of the total loss function is the classification loss as follows:
$$ L_{class} = -\sum_{i=1}^{S^2 \times B} \sum_{k \in \text{classes}} \left[ h_i^k \ln c_i^k \right] \qquad (7) $$
where $c_i^k$ represents the probability of an object of class $k$ in anchor box $i$, and $h_i^k \in \{0, 1\}$ indicates whether an object of class $k$ is present in anchor box $i$, in which 1 means yes and 0 means no. In this paper, there is only one kind of object, so $k = 1$.
To sum up, the total loss function of the YOLOv3 model is as follows:
$$ L_{total} = \lambda_1 L_{box} + \lambda_2 L_{conf} + \lambda_3 L_{class} \qquad (8) $$
where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are empirical weight parameters; in this paper, $\lambda_1 = \lambda_2 = \lambda_3 = 1$.
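A compact sketch of Formulas (5)-(8) is given below for the single-class case (k = 1); tensors are flattened over the S² × B predicted boxes, and the exact masking and reduction details are simplifying assumptions.

```python
import torch

def yolo_loss(pred_box, gt_box, conf, cls_prob, obj_mask, lam=(1.0, 1.0, 1.0)):
    """pred_box, gt_box: (N, 4) tensors of (x, y, w, h) over the S^2 * B boxes;
    conf, cls_prob, obj_mask: (N,) tensors, obj_mask = 1 where an object exists."""
    eps = 1e-8
    l_box = ((pred_box - gt_box) ** 2).sum()                          # Formula (5)
    l_conf = -(obj_mask * torch.log(conf + eps)
               + (1 - obj_mask) * torch.log(1 - conf + eps)).sum()    # Formula (6)
    l_class = -(obj_mask * torch.log(cls_prob + eps)).sum()           # Formula (7)
    return lam[0] * l_box + lam[1] * l_conf + lam[2] * l_class        # Formula (8)
```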

2.4. Algorithm Process

According to the above description, the specific algorithm process is shown below; a structural code sketch follows the listing.
Algorithm 1 The algorithm process of NDFTC.
Start
Input: 2400 meteorological satellite images of TCs, collected from 1979 to 2019 in the Southwest Pacific area.
A. Data Augmentation
(1) A total of 600 meteorological satellite images are input into the DCGAN model. The selection rule is to randomly select 18 images from the TCs occurring in each year (1979–2010), so that the sample contains the common characteristics of TCs over these years.
(2) A total of 1440 generated images with TC characteristics are obtained in the DCGAN model. These generated images are only used as training samples in the pre-training phase.
B. Pre-Training Phase
(3) The generated images obtained from step (2) are inputted into the YOLOv3 model.
(4) Feature extraction and preliminary detection of the generated images are completed.
(5) The weights obtained after 10,000 training iterations in step (4) are retained for the next phase.
C. Transfer Learning
(6) The 1800 meteorological satellite images remaining after step (1) are used in this phase, 80% of them as training samples. In other words, 1440 meteorological satellite images from 1979 to 2011 are used as training samples.
(7) The model is trained with the training samples of step (6), with the weights of step (5) as its initial weights, which is a typical network-based deep transfer learning method.
(8) A total of 360 meteorological satellite images from 2011 to 2019 are used as testing samples, on which the trained model is evaluated.
Output: detection results, accuracy, average precision.
End
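The listing above can be condensed into the structural sketch below. The helpers train_dcgan, train_yolov3, and evaluate are hypothetical stand-ins for the actual training and evaluation routines, so the sketch shows only the data flow of the three phases.

```python
def ndftc_pipeline(real_images, train_dcgan, train_yolov3, evaluate):
    # A. Data augmentation: 600 real images -> 1440 DCGAN-generated images
    dcgan_inputs = real_images[:600]
    generated = train_dcgan(dcgan_inputs, num_generated=1440)

    # B. Pre-training: YOLOv3 learns salient TC features from generated images only
    pretrained = train_yolov3(generated, init_weights=None, iterations=10_000)

    # C. Transfer learning: fine-tune on real images with the pre-trained weights
    #    as initial weights (network-based deep transfer learning)
    remaining = real_images[600:]     # 1800 real images
    train_set = remaining[:1440]      # 80%, 1979-2011
    test_set = remaining[1440:]       # 20%, 2011-2019 (360 images)
    model = train_yolov3(train_set, init_weights=pretrained, iterations=40_000)

    return evaluate(model, test_set)  # detection results, ACC, AP
```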

3. Experimental Results

3.1. Data Set

The data set we used includes meteorological satellite observation images of the Southwest Pacific area from 1979 to 2019. The images, provided by the National Institute of Informatics, have a size of 512 × 512 pixels. For more details on the meteorological satellite images used in this study [54], see the website: http://agora.ex.nii.ac.jp/digital-typhoon/search_date.html.en#id2 (accessed on 29 March 2021).
In this paper, a total of 2400 real TC images were used. Among them, 600 real images were input into the DCGAN model to produce 1440 generated images for training the detection model in the pre-training phase. Of the remaining 1800 real TC images, 80%, from 1979 to 2011, were used to train the model, and 20%, from 2011 to 2019, were used to test it.
In other words, in the transfer learning phase, the selection rule for training and test data was based on the time when the TC was captured by the meteorological satellite: the 80% of the data used for training were historical data from 1979 to 2011, whereas the 20% used for testing were recent data from 2011 to 2019. Such a selection method, training on historical data and testing on recent data, has proven effective in applications of deep learning in meteorology [55], so we adopted it as well.
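A minimal sketch of this chronological split is shown below; the "date" key is an assumed field on each image record, not part of the original dataset format.

```python
def chronological_split(images, train_frac=0.8):
    ordered = sorted(images, key=lambda im: im["date"])  # oldest first
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]                  # (train: older, test: recent)
```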

3.2. Experiment Setup

In order to show the superiority of NDFTC in the training process and detection results, a TC detection model for comparison was also trained; it was based only on YOLOv3 and did not use NDFTC. To train and test this comparison model, we again used the 2400 real TC images, 80% for training and 20% for testing.
For the sake of fairness, the total number of training iterations for both NDFTC and YOLOv3 was 50,000. NDFTC was trained for 10,000 iterations on generated TC images and then 40,000 iterations on real TC images, whereas the detection model based only on YOLOv3 was trained for 50,000 iterations on real TC images. The change in loss function values of both models during training is shown in Figure 2.
Figure 2 visualizes the change in loss function values of YOLOv3 and NDFTC during training. Compared with the TC detection model based only on YOLOv3, the NDFTC proposed in this paper had smaller loss function values and a more stable training process.
In order to show the stability of NDFTC during the training process from another perspective, the changes in region average IOU are also visualized in Figure 3. Region average IOU is the intersection over union (IOU) between the predicted box and the ground truth [22]. It is one of the most important indicators of model stability in the training process and is commonly used in deep learning models such as YOLOv1 [22], YOLOv2 [23], YOLOv3 [24], and YOLOv4 [56]. In general, the closer it is to 1, the better the model is trained.
In Figure 3, the region average IOU of both models generally increased during the training process. However, the region average IOU of YOLOv3 oscillated sharply as training reached its later stages. Compared with the TC detection model based only on YOLOv3, the NDFTC oscillated less throughout training, which means that the NDFTC converged faster and was more stable in the training process.
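For reference, region average IOU is built from the standard IOU between two boxes; the function below computes it for boxes in (x1, y1, x2, y2) corner format, and the value in Figure 3 is this quantity averaged over all predicted boxes.

```python
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)
```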

3.3. Results and Discussion

In order to evaluate the detection effect of the NDFTC proposed in this paper, ACC and AP were used as evaluation indexes.
ACC refers to accuracy, which is the proportion of TCs detected correctly by the model among all images. The definition of ACC is as follows:
$$ \text{Accuracy} = \frac{TP}{ALL} \qquad (9) $$
where TP refers to the number of TC images detected correctly by the model, and ALL refers to the total number of images.
AP refers to average precision, which accounts for cases such as false detections and missed detections; it is a common index for evaluating YOLO-series models such as YOLOv1, YOLOv2, and YOLOv3 by Redmon et al. [22,23,24]. AP is defined through precision and recall:
$$ \text{Precision} = \frac{TP}{TP + FP} \qquad (10) $$
$$ \text{Recall} = \frac{TP}{TP + FN} \qquad (11) $$
where TP refers to the number of TCs correctly recognized as TCs by the detection model, FP refers to the number of other objects recognized as TCs, and FN refers to the number of TCs recognized as other objects [57,58]. The P–R curve is then obtained by plotting the precision of TCs (y-coordinate) against their recall (x-coordinate) [59], and the area under this curve is the AP, the index used to evaluate the detection effectiveness of the NDFTC.
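To make the AP computation concrete, the sketch below accumulates precision and recall over score-ranked detections and integrates the P–R curve with simple rectangles; interpolation variants exist, so this is an assumption, not the paper's exact evaluation code.

```python
def average_precision(detections, num_gt):
    """detections: list of (score, is_tp) pairs; num_gt: number of ground-truth TCs."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in detections:
        tp, fp = tp + is_tp, fp + (not is_tp)
        precision = tp / (tp + fp)   # Formula (10)
        recall = tp / num_gt         # Formula (11)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```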
Figure 4 shows the ACC and AP of NDFTC and the other models on the test set at 10,000, 20,000, 30,000, 40,000, and 50,000 training iterations. It reflects that NDFTC performed better than YOLOv3 and the other models at the same number of iterations. Finally, the experimental results show that the NDFTC had better performance, with an ACC of 97.78% and AP of 81.39%, compared to YOLOv3, with an ACC of 93.96% and AP of 80.64%.
In order to evaluate the detection effect on different kinds of TCs, all TCs in the test set were divided into five categories. According to the National Standard for Tropical Cyclone Grade (GB/T 19201-2006), TC intensity classes include tropical storm (TS), severe tropical storm (STS), typhoon (TY), severe typhoon (STY), and super typhoon (SuperTY). The ACC performance of the NDFTC and the other models on the test set is shown in Table 1; the NDFTC generally had a higher ACC. The best result was from NDFTC for SuperTY detection, with an ACC of 98.59%.
Next, the AP performance of the NDFTC and the other models on the test set is shown in Table 2. The NDFTC generally had a higher AP as well; its best result was for STY detection, at 91.34%.
Last but not least, an example of TC detection results is shown in Figure 5 for super typhoon Marcus in 2018. The NDFTC produced a more precise detection result, as its prediction box fit Marcus better. More importantly, compared with the TC detection model based only on YOLOv3, the detection result of NDFTC was more consistent with the physical characteristics of TCs, because the spiral rainbands at the bottom of Marcus were also included in NDFTC's detection box.

4. Discussion

To begin with, the complexity of NDFTC is explained here. Compared to the complex network architecture and huge number of parameters of YOLOv3, the complexity of DCGAN, a relatively simple network, is negligible [60]. Therefore, the complexity of the NDFTC in this paper is approximately equal to that of the YOLOv3 model, given a finite data set and the same scale of computing resources. More importantly, compared with the YOLOv3 model, NDFTC further improved TC detection accuracy with almost no increase in complexity, which indicates that NDFTC maintains good generalization performance.
Then, the way the generated and real images are used in different phases needs to be emphasized again. In 2020, Hammami et al. proposed a combined CycleGAN and YOLO model for data augmentation in which generated data and real data are input into YOLO simultaneously for training [61]. In our study, by contrast, the detector was trained using only generated images in the pre-training phase and only real images in the transfer learning phase, which is a typical network-based deep transfer learning method. Additionally, the average IOU and loss function values during training are plotted in this paper to reflect the stability of NDFTC.
Furthermore, it is necessary to explain how the data set is partitioned. In NDFTC, the initial dataset is composed of meteorological satellite images of TCs and is divided into training dataset 1, training dataset 2, and a test dataset according to Algorithm 1. Training datasets 1 and 2 must both include real images of TCs, so that both contain TC features; this is a prerequisite for the adoption of NDFTC.
Finally, we explain why 80% of the real TC images were used for training and the rest for testing. For finite datasets that are not very large, such a training/testing ratio is common in deep learning [62,63]. It is generally believed that when a dataset reaches tens or even hundreds of thousands of images, the proportion of the training set can exceed 90% [63]. Since the TC dataset used in this paper has only thousands of images, 80% is appropriate. More importantly, for object detection tasks with finite datasets, a smaller training dataset usually leads to lower accuracy, so we chose the common ratio of 80%.

5. Conclusions

In this paper, on the basis of deep transfer learning, we propose a new detection framework of tropical cyclones (NDFTC) from meteorological satellite images by combining DCGAN and YOLOv3. The algorithm process of NDFTC consists of three major steps: data augmentation, a pre-training phase, and transfer learning, which ensures the effectiveness of detecting different kinds of TCs in complex backgrounds with finite data volume. We used DCGAN as the data augmentation method instead of traditional data augmentation methods because DCGAN can generate images that simulate TCs by learning their salient characteristics, which improves the utilization of finite data. In the pre-training phase, we used YOLOv3 as the detection model and trained it with the generated images obtained from DCGAN, which helped the model learn the salient characteristics of TCs. In the transfer learning phase, we trained the detection model with real images of TCs, with its initial weights transferred from the YOLOv3 trained on generated images, which is a typical network-based deep transfer learning method and improves the stability and accuracy of the model. The experimental results show that the NDFTC had better performance, with an ACC of 97.78% and AP of 81.39%, compared to YOLOv3, with an ACC of 93.96% and AP of 80.64%. On the basis of these conclusions, we believe that NDFTC, with its high accuracy, has promising potential for detecting different kinds of TCs and could benefit current TC detection tasks and similar detection tasks, especially those with finite data volume.

Author Contributions

Conceptualization, T.S. and P.X.; data curation, P.X. and Y.L.; formal analysis, P.X., F.M., X.T. and B.L.; funding acquisition, S.P., T.S. and D.X.; methodology, T.S. and P.X.; project administration, S.P., D.X., T.S. and F.M.; validation, P.X.; writing—original draft, P.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program (no. 2018YFC1406201) and the Natural Science Foundation of China (grant: U1811464). The project was supported by the Innovation Group Project of the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (no. 311020008), the Natural Science Foundation of Shandong Province (grant no. ZR2019MF012), and the Taishan Scholars Fund (grant no. ZX20190157).

Data Availability Statement

The data used in this study are openly available at the National Institute of Informatics (NII) at http://agora.ex.nii.ac.jp/digital-typhoon/search_date.html.en#id2 (accessed on 29 March 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TC: Tropical cyclone
TCs: Tropical cyclones
NDFTC: New detection framework of tropical cyclones
GAN: Generative adversarial nets
DCGAN: Deep convolutional generative adversarial networks
YOLO: You Only Look Once
NWP: Numerical weather prediction
ML: Machine learning
DT: Decision trees
RF: Random forest
SVM: Support vector machines
DNN: Deep neural networks
ReLU: Rectified linear unit
TP: True positive
TN: True negative
FP: False positive
FN: False negative
ACC: Accuracy
AP: Average precision
IOU: Intersection over union

References

  1. Khalil, G.M. Cyclones and storm surges in Bangladesh: Some mitigative measures. Nat. Hazards 1992, 6, 11–24. [Google Scholar] [CrossRef]
  2. Hunter, L.M. Migration and Environmental Hazards. Popul. Environ. 2005, 26, 273–302. [Google Scholar] [CrossRef] [PubMed]
  3. Mabry, C.M.; Hamburg, S.P.; Lin, T.-C.; Horng, F.-W.; King, H.-B.; Hsia, Y.-J. Typhoon Disturbance and Stand-level Damage Patterns at a Subtropical Forest in Taiwan1. Biotropica 1998, 30, 238–250. [Google Scholar] [CrossRef]
  4. Dale, V.H.; Joyce, L.A.; McNulty, S.; Neilson, R.P.; Ayres, M.P.; Flannigan, M.D.; Hanson, P.J.; Irland, L.C.; Lugo, A.E.; Peterson, C.J.; et al. Climate Change and Forest Disturbances. Bioscience 2001, 51, 723. [Google Scholar] [CrossRef] [Green Version]
  5. Pielke, R.A., Jr.; Gratz, J.; Landsea, C.W.; Collins, D.; Saunders, M.A.; Musulin, R. Normalized hurricane damage in the united states: 1900–2005. Nat. Hazards Rev. 2008, 9, 29–42. [Google Scholar] [CrossRef]
  6. Zhang, Q.; Liu, Q.; Wu, L. Tropical Cyclone Damages in China 1983–2006. Am. Meteorol. Soc. 2009, 90, 489–496. [Google Scholar] [CrossRef] [Green Version]
  7. Lian, Y.; Liu, Y.; Dong, X. Strategies for controlling false online information during natural disasters: The case of Typhoon Mangkhut in China. Technol. Soc. 2020, 62, 101265. [Google Scholar] [CrossRef]
  8. Kang, H.Y.; Kim, J.S.; Kim, S.Y.; Moon, Y.I. Changes in High- and Low-Flow Regimes: A Diagnostic Analysis of Tropical Cyclones in the Western North Pacific. Water Resour. Manag. 2017, 31, 3939–3951. [Google Scholar] [CrossRef]
  9. Kim, J.S.; Jain, S.; Kang, H.Y.; Moon, Y.I.; Lee, J.H. Inflow into Korea’s Soyang Dam: Hydrologic variability and links to typhoon impacts. J. Hydro Environ. Res. 2019, 22, 50–56. [Google Scholar] [CrossRef]
  10. Burton, D.; Bernardet, L.; Faure, G.; Herndon, D.; Knaff, J.; Li, Y.; Mayers, J.; Radjab, F.; Sampson, C.; Waqaicelua, A. Structure and intensity change: Operational guidance. In Proceedings of the 7th International Workshop on Tropical Cyclones, La Réunion, France, 15–20 November 2010. [Google Scholar]
  11. Halperin, D.J.; Fuelberg, H.E.; Hart, R.E.; Cossuth, J.H.; Sura, P.; Pasch, R.J. An Evaluation of Tropical Cyclone Genesis Forecasts from Global Numerical Models. Weather Forecast. 2013, 28, 1423–1445. [Google Scholar] [CrossRef]
  12. Heming, J.T. Tropical cyclone tracking and verification techniques for Met Office numerical weather prediction models. Meteorol. Appl. 2017, 26, 1–8. [Google Scholar] [CrossRef]
  13. Park, M.-S.; Elsberry, R.L. Latent Heating and Cooling Rates in Developing and Nondeveloping Tropical Disturbances during TCS-08: TRMM PR versus ELDORA Retrievals*. J. Atmos. Sci. 2013, 70, 15–35. [Google Scholar] [CrossRef] [Green Version]
  14. Rhee, J.; Im, J.; Carbone, G.J.; Jensen, J.R. Delineation of climate regions using in-situ and remotely-sensed data for the Carolinas. Remote Sens. Environ. 2008, 112, 3099–3111. [Google Scholar] [CrossRef]
  15. Zhang, W.; Fu, B.; Peng, M.S.; Li, T. Discriminating Developing versus Nondeveloping Tropical Disturbances in the Western North Pacific through Decision Tree Analysis. Weather Forecast. 2015, 30, 446–454. [Google Scholar] [CrossRef]
  16. Han, H.; Lee, S.; Im, J.; Kim, M.; Lee, M.-I.; Ahn, M.H.; Chung, S.-R. Detection of Convective Initiation Using Meteorological Imager Onboard Communication, Ocean, and Meteorological Satellite Based on Machine Learning Approaches. Remote Sens. 2015, 7, 9184–9204. [Google Scholar] [CrossRef] [Green Version]
  17. Kim, D.H.; Ahn, M.H. Introduction of the in-orbit test and its performance for the first meteorological imager of the Communication, Ocean, and Meteorological Satellite. Atmos. Meas. Tech. 2014, 7, 2471–2485. [Google Scholar] [CrossRef] [Green Version]
  18. Xu, Y.; Meng, X.; Li, Y.; Xu, X. Research on privacy disclosure detection method in social networks based on multi-dimensional deep learning. Comput. Mater. Contin. 2020, 62, 137–155. [Google Scholar] [CrossRef]
  19. Peng, H.; Li, Q. Research on the automatic extraction method of web data objects based on deep learning. Intell. Autom. Soft Comput. 2020, 26, 609–616. [Google Scholar] [CrossRef]
  20. He, S.; Li, Z.; Tang, Y.; Liao, Z.; Li, F.; Lim, S.-J. Parameters compressing in deep learning. Comput. Mater. Contin. 2020, 62, 321–336. [Google Scholar] [CrossRef]
  21. Courtrai, L.; Pham, M.-T.; Lefèvre, S. Small Object Detection in Remote Sensing Images Based on Super-Resolution with Auxiliary Generative Adversarial Networks. Remote Sens. 2020, 12, 3152. [Google Scholar] [CrossRef]
  22. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  23. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
  24. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  25. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In European Conference on Computer Vision; Springer: Amsterdam, The Netherlands, 2016; pp. 21–37. [Google Scholar]
  26. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  27. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the CVPR, Columbus, OH, USA, 24–27 June 2014. [Google Scholar]
  28. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 11–18 December 2015; pp. 1440–1448. [Google Scholar]
  29. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, QC, Canada, 7–12 December 2015; pp. 91–99. [Google Scholar]
  30. Liu, Y.; Racah, E.; Correa, J. Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv 2016, arXiv:1605.01156. [Google Scholar]
  31. Nakano, D.M.; Sugiyama, D. Detecting Precursors of Tropical Cyclone using Deep Neural Networks. In Proceedings of the 7th International Workshop on Climate Informatics, Boulder, CO, USA, 20–22 September 2017. [Google Scholar]
  32. Kumler-Bonfanti, C.; Stewart, J.; Hall, D. Tropical and Extratropical Cyclone Detection Using Deep Learning. J. Appl. Meteorol. Climatol. 2020, 59, 1971–1985. [Google Scholar] [CrossRef]
  33. Giffard-Roisin, S.; Yang, M.; Charpiat, G. Tropical cyclone track forecasting using fused deep learning from aligned reanalysis data. Front. Big Data 2020, 3, 1. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Matsuoka, D.; Nakano, M.; Sugiyama, D. Deep learning approach for detecting tropical cyclones and their precursors in the simulation by a cloud-resolving global nonhydrostatic atmospheric model. Prog. Earth Planet. Sci. 2018, 5, 1–16. [Google Scholar] [CrossRef] [Green Version]
  35. Cao, J.; Chen, Z.; Wang, B. Deep Convolutional networks with superpixel segmentation for hyperspectral image classification. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 3310–3313. [Google Scholar]
  36. Li, Z.; Guo, F.; Li, Q.; Ren, G.; Wang, L. An Encoder–Decoder Convolution Network with Fine-Grained Spatial Information for Hyperspectral Images Classification. IEEE Access 2020, 8, 33600. [Google Scholar] [CrossRef]
  37. Gorban, A.; Mirkes, E.; Tukin, I. How deep should be the depth of convolutional neural networks: A backyard dog case study. Cogn. Comput. 2020, 12, 388. [Google Scholar] [CrossRef] [Green Version]
  38. Pan, S.J.; Tsang, I.W.; Kwok, J.T.; Yang, Q. Domain Adaptation via Transfer Component Analysis. IEEE Trans. Neural Netw. 2011, 22, 199–210. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Yang, J.; Zhao, Y.; Chan, J. Learning and transferring deep joint spectral–spatial features for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742. [Google Scholar] [CrossRef]
  40. Liu, X.; Sun, Q.; Meng, Y.; Fu, M.; Bourennane, S. Hyperspectral image classification based on parameter-optimized 3D-CNNs combined with transfer learning and virtual samples. Remote Sens. 2018, 10, 1425. [Google Scholar] [CrossRef] [Green Version]
  41. Jiang, Y.; Li, Y.; Zhang, H. Hyperspectral image classification based on 3-D separable ResNet and transfer learning. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1949–1953. [Google Scholar] [CrossRef]
  42. Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A Survey on Deep Transfer Learning. arXiv 2018, arXiv:1808.01974. [Google Scholar]
  43. Liu, X.; Liu, Z.; Wang, G.; Cai, Z.; Zhang, H. Ensemble transfer learning algorithm. IEEE Access 2018, 6, 2389–2396. [Google Scholar] [CrossRef]
  44. Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; Darrell, T. Deep domain confusion: Maximizing for domain invariance. arXiv 2014, arXiv:1412.3474. [Google Scholar]
  45. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? arXiv 2014, arXiv:1411.1792. [Google Scholar]
  46. Long, M.; Cao, Z.; Wang, J.; Jordan, M.I. Domain adaptation with randomized multilinear adversarial networks. arXiv 2017, arXiv:1705.10667. [Google Scholar]
  47. Zhao, M.; Liu, X.; Yao, X. Better Visual Image Super-Resolution with Laplacian Pyramid of Generative Adversarial Networks. CMC Comput. Mater. Contin. 2020, 64, 1601–1614. [Google Scholar] [CrossRef]
  48. Fu, K.; Peng, J.; Zhang, H. Image super-resolution based on generative adversarial networks: A brief review. Comput. Mater. Contin. 2020, 64, 1977–1997. [Google Scholar] [CrossRef]
  49. Li, X.; Liang, Y.; Zhao, M. Few-shot learning with generative adversarial networks based on WOA13 data. Comput. Mater. Contin. 2019, 60, 1073–1085. [Google Scholar] [CrossRef] [Green Version]
  50. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [Google Scholar]
  51. Denton, E.; Gross, S.; Fergus, R. Semi-supervised learning with context-conditional generative adversarial networks. arXiv 2016, arXiv:1611.06430. [Google Scholar]
  52. Li, H.; Gao, S.; Liu, G.; Guo, D.L.; Grecos, C.; Ren, P. Visual Prediction of Typhoon Clouds With Hierarchical Generative Adversarial Networks. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1478–1482. [Google Scholar] [CrossRef]
  53. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  54. National Institute of Informatics. Digital Typhoon. 2009. Available online: http://agora.ex.nii.ac.jp/digital-typhoon/search_date.html.en#id2 (accessed on 29 March 2021).
  55. Ham, Y.; Kim, J.; Luo, J. Deep learning for multi-year ENSO forecasts. Nature 2019, 573, 568–572. [Google Scholar] [CrossRef]
  56. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  57. Padilla, R. Object Detection Metrics. 2018. Available online: https://github.com/rafaelpadilla/Object-Detection-Metrics (accessed on 22 June 2018).
  58. Everingham, M.; Van Gool, L.; Williams, C. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef] [Green Version]
  59. Davis, J.; Goadrich, M. The relationship between Precision-Recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 233–240. [Google Scholar]
  60. Neyshabur, B.; Bhojanapalli, S.; McAllester, D.; Srebro, N. Exploring generalization in deep learning. arXiv 2017, arXiv:1706.08947. [Google Scholar]
  61. Hammami, M.; Friboulet, D.; Kechichian, R. Cycle GAN-Based Data Augmentation for Multi-Organ Detection in CT Images Via Yolo. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 25–28 October 2020; pp. 390–393. [Google Scholar]
  62. Song, T.; Jiang, J.; Li, W. A deep learning method with merged LSTM Neural Networks for SSHA Prediction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2853–2860. [Google Scholar] [CrossRef]
  63. Song, T.; Wang, Z.; Xie, P. A novel dual path gated recurrent unit model for sea surface salinity prediction. J. Atmos. Ocean. Technol. 2020, 37, 317–325. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed new detection framework of tropical cyclones (NDFTC).
Figure 2. (a) The change in loss function values of YOLOv3 when training on real TC images; (b) the change in loss function values of NDFTC when training on real TC images.
Figure 3. (a) The change in region average IOU of YOLOv3 when training on real TC images; (b) the change in region average IOU of NDFTC when training on real TC images.
Figure 4. Performance of NDFTC and other models in ACC and AP: (a) ACC of NDFTC and other models; (b) AP of NDFTC and other models.
Figure 5. An example of TC detection results for super typhoon Marcus in 2018. (a) The detection result of YOLOv3; (b) the detection result of NDFTC.
Table 1. ACC performance (%) of the NDFTC and other models on the test set for five kinds of TCs.

Model | Typhoon Types | 10,000 Iterations | 20,000 Iterations | 30,000 Iterations | 40,000 Iterations | 50,000 Iterations
YOLOv3 | TS | 71.21 | 80.30 | 87.88 | 90.91 | 92.42
YOLOv3 | STS | 83.46 | 86.47 | 89.47 | 90.98 | 94.74
YOLOv3 | TY | 85.59 | 88.29 | 90.09 | 91.89 | 92.79
YOLOv3 | STY | 88.75 | 90.00 | 91.25 | 92.50 | 95.00
YOLOv3 | SuperTY | 88.89 | 91.11 | 93.33 | 93.33 | 94.44
NDFTC | TS | 87.50 | 92.50 | 92.50 | 95.00 | 97.50
NDFTC | STS | 88.46 | 91.35 | 92.31 | 93.27 | 98.07
NDFTC | TY | 89.41 | 92.94 | 94.12 | 95.29 | 96.47
NDFTC | STY | 91.67 | 93.33 | 95.00 | 96.67 | 98.33
NDFTC | SuperTY | 91.55 | 94.37 | 95.77 | 97.18 | 98.59
Table 2. AP performance (%) of the NDFTC and other models on the test set for five kinds of TCs.

Model | Typhoon Types | 10,000 Iterations | 20,000 Iterations | 30,000 Iterations | 40,000 Iterations | 50,000 Iterations
YOLOv3 | TS | 60.91 | 61.24 | 63.96 | 68.26 | 66.85
YOLOv3 | STS | 80.77 | 83.46 | 83.59 | 82.42 | 86.84
YOLOv3 | TY | 79.16 | 76.93 | 79.91 | 80.90 | 78.11
YOLOv3 | STY | 88.66 | 89.12 | 87.12 | 87.60 | 88.63
YOLOv3 | SuperTY | 82.82 | 81.14 | 83.23 | 81.43 | 79.81
NDFTC | TS | 67.16 | 69.12 | 63.55 | 67.96 | 63.89
NDFTC | STS | 78.13 | 74.64 | 84.15 | 81.40 | 82.22
NDFTC | TY | 79.76 | 83.60 | 81.57 | 86.70 | 83.04
NDFTC | STY | 89.23 | 86.97 | 89.79 | 84.89 | 91.34
NDFTC | SuperTY | 84.03 | 85.20 | 79.89 | 80.50 | 82.52
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
