Review

A Comprehensive Survey on SAR ATR in Deep-Learning Era

Naval Submarine Academy, Qingdao 266000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(5), 1454; https://doi.org/10.3390/rs15051454
Submission received: 4 February 2023 / Accepted: 2 March 2023 / Published: 5 March 2023
(This article belongs to the Special Issue Ship Detection and Maritime Monitoring Based on SAR Data)

Abstract:
Due to the advantages of Synthetic Aperture Radar (SAR), the study of Automatic Target Recognition (ATR) has become a hot topic. Deep learning, especially the Convolutional Neural Network (CNN), works in an end-to-end way and has powerful feature-extraction abilities. Thus, researchers in SAR ATR also seek solutions from deep learning. This paper reviews the related algorithms. We first introduce the commonly used datasets and the evaluation metrics. Then, we introduce the algorithms that preceded deep learning: template-matching-, machine-learning- and model-based methods. After that, we focus on the SAR ATR methods of the deep-learning era (after 2017), which form the core of this paper. The non-CNN and CNN models used in SAR ATR are summarized first; we find that researchers tend to design specialized CNNs for SAR ATR. Then, the methods proposed to address the limited-sample problem are reviewed: data augmentation, Generative Adversarial Networks (GAN), electromagnetic simulation, transfer learning, few-shot learning, semi-supervised learning, metric learning and domain knowledge. After that, the class-imbalance problem, real-time recognition, polarimetric SAR, complex-valued data and adversarial attacks are reviewed, together with their principles and open problems. Finally, future directions are discussed; we point out that datasets, CNN architecture design, knowledge-driven methods, real-time recognition, explainability and adversarial attacks should be considered in the future. This paper gives readers a quick overview of the current state of the field.

Graphical Abstract

1. Introduction

Compared with optical sensors, Synthetic Aperture Radar (SAR) can obtain high-resolution images day and night and in all weather conditions. Thus, SAR is widely used in military and civilian sectors. The purpose of SAR ATR is to automatically recognize important targets (vehicles, ships and aircraft), which is a key technology of reconnaissance [1,2]. Lincoln Laboratory proposed a three-level flow chart of SAR ATR [3], which includes detection, discrimination and classification, as shown in Figure 1.
Detection algorithms find Regions of Interest (RoI) containing potential targets [4]. CFAR (Constant False Alarm Rate) is a common detection method. It first determines a threshold from the input image and compares it with every pixel: if a pixel exceeds the threshold, it is regarded as target; otherwise, it is regarded as background. The core of the algorithm is describing images in terms of statistical characteristics; lognormal, Weibull and K distributions are usually used. When the background is clean, CFAR obtains good performance; however, when the background is complex or the target is weak, it produces false alarms [5,6].
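A minimal cell-averaging CFAR sketch of this threshold-and-compare idea follows. The function name, window sizes and the exponential clutter model behind the scaling factor are our own illustrative choices, not the paper's specification:

```python
# Cell-averaging CFAR sketch: estimate local clutter from training cells
# around each pixel and flag pixels exceeding a scaled clutter level.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(img, guard=2, train=8, pfa=1e-4):
    """Flag pixels that exceed a threshold estimated from local clutter."""
    full = uniform_filter(img, size=2 * (guard + train) + 1)   # mean, full window
    inner = uniform_filter(img, size=2 * guard + 1)            # mean, guard window
    n_full = (2 * (guard + train) + 1) ** 2
    n_inner = (2 * guard + 1) ** 2
    # Clutter estimate from the training ring only (guard cells excluded).
    clutter = (full * n_full - inner * n_inner) / (n_full - n_inner)
    # Scaling factor for an exponential clutter model at the desired Pfa.
    n_train = n_full - n_inner
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    return img > alpha * clutter

detections = ca_cfar(np.abs(np.random.randn(128, 128)) ** 2)
```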
The purpose of discrimination is to eliminate false alarms generated by natural and artificial clutter. Commonly used features include geometric features, centroid, aspect ratio, backscatter features, texture features, polarization features, etc. [7,8]. The purpose of classification is to determine the categories of the targets. Template-matching-, machine-learning- and model-based methods are the three commonly used approaches. Among them, machine learning yields better results and is widely used; it includes two key steps: feature design and classifier design.
Machine-learning-based SAR ATR has achieved great improvements over past algorithms. Its core consists of designing discriminative features and powerful classifiers. Shallow features such as aspect ratio, SURF (Speeded Up Robust Features) and LBP (Local Binary Pattern) are usually used. Neural networks, decision trees, SVM (Support Vector Machine) and random forests are the commonly used classifiers [9].
An SAR image is a combination of scattering units with electromagnetic-scattering features, exhibiting speckle, geometric distortion, shadow and other phenomena. SAR is vulnerable to changes in working conditions, for example, polarization mode, imaging angle and target scattering. Furthermore, samples are limited, and datasets have large intra-class differences and small inter-class differences, which bring difficulties to classification. Robust feature extraction and unbalanced class distributions render SAR ATR even more difficult.
Since the emergence of AlexNet in 2012, deep learning has shown advantages over traditional methods. Traditional feature extraction relies mainly on human experience and generalizes poorly [10]. Deep learning (especially the Convolutional Neural Network, CNN) automatically learns features from data; feature extraction and classification are performed jointly. Thus, it has strong high-level feature-learning ability and high classification accuracy. Due to these advantages, SAR ATR is also gradually adopting deep learning: CNN automatically learns effective features, avoiding the difficulty of designing features manually.
We collected statistics from relevant papers and report the numbers of papers on traditional and deep-learning-based algorithms in this area in recent years in Table 1. We can see that SAR ATR entered the era of deep learning in 2017; since then, most papers have adopted deep-learning methods.
SAR ATR faces the following difficulties in practical applications due to its large differences from optical imagery.
1. The number of SAR images is insufficient. This is the main factor restricting the application of deep learning in SAR ATR: it leads to serious over-fitting and low generalization. Thus, most papers on SAR ATR try to improve recognition on limited samples.
2. Some classes have many samples while others have few. Existing datasets generally suffer from imbalance among categories, which also limits performance.
3. SAR images obtained under different conditions have different characteristics, which makes it difficult for existing data-driven deep-learning methods to extract robust features.
4. The SAR scattering centers change with the target azimuth angle, so the recognition system produces different results at different azimuth angles, even for small azimuth increments.
Since 2017, a substantial number of achievements have been made in solving the above problems. However, no paper has systematically reviewed them, which is one of the motivations of this paper. We therefore selected the 197 most representative papers for review. The framework of this paper is shown in Figure 2.

2. Related Work

As far as we know, there are seven papers [9,11,12,13,14,15,16] which are related to our work to some extent. We divided them into three directions, as shown in Figure 3. They are as follows: (1) reviews on traditional methods; (2) reviews mainly on the traditional methods, while the deep learning methods are not reviewed thoroughly; (3) reviews mainly on the optical images, while SAR images are not reviewed thoroughly.
Li et al. [9] surveyed feature extraction for SAR images. Wu et al. [11] reviewed techniques for ship classification with SAR images over the past twenty years and gave comments and suggestions for this area. Wang et al. [12] summarized feature extractors from three directions. All three papers survey traditional algorithms; deep-learning-based algorithms are not analyzed.
Odysseas et al. [13] surveyed SAR ATR algorithms, specifically those trained and tested on MSTAR. The reflectivity-attributed, attributed-scattering-center, sparse-representation, hybrid reflectivity-attributed and compressive-sensing-based methods are introduced in that order, and the strengths and weaknesses of each technique are analyzed. The problems and future directions of the dataset are also highlighted. Darymli et al. [14] analyzed the challenges of SAR ATR. They divided SAR ATR into three steps: detection and low- and high-level classification; the methods are divided into model-, semi-model- and feature-based ones. These two papers reviewed many SAR ATR methods, but not mainly deep learning.
Song et al. [15] surveyed the advanced CNNs in classification. The specialized CNNs, public datasets and data augmentation methods are also introduced. The problems and challenges are also pointed out. John et al. [16] reviewed deep learning in view of theories, tools and challenges. The inadequate datasets, transfer learning, theoretical understanding and optimizing methods are analyzed in that order. These two papers give a systematic overview of deep-learning-based recognition, but they focus mainly on optical images, while SAR images do not constitute the core.
In summary, our work is different from the aforementioned, related work. It is the first paper that systematically reviews the deep-learning-based SAR ATR.

3. Datasets and Evaluation Metrics

3.1. Datasets

Datasets with labels are the basis of SAR ATR. Currently available datasets include MSTAR (Moving and Stationary Target Acquisition and Recognition) [17], OpenSARShip [18], OpenSARShip 2.0 [19], OpenSARUrban [20] and FUSAR-Ship [21], as shown in Figure 4.
MSTAR is the first public dataset, constructed by DARPA (Defense Advanced Research Projects Agency). It contains 10 categories of former Soviet military vehicles. The data are collected via X-band SAR; the imaging mode is spotlight; the polarization mode is HH; and the resolution is 0.3 m × 0.3 m. The azimuth angle covers 0 degrees to 360 degrees at intervals of about 3 degrees, giving roughly 120 slices per target, each of size 128 × 128. MSTAR includes SOC (Standard Operating Condition) and EOC (Extended Operating Condition) subsets. Under SOC, the training and test sets differ only slightly in elevation and azimuth angle; under EOC, there is a large difference between test and training sets, mainly large changes in elevation angle, configuration and variants of the same type.
OpenSARShip was constructed by Shanghai Jiaotong University; its information is shown in Table 2. It contains common types of civilian ships, derived from 41 SAR images acquired by Sentinel-1. During the production of the dataset, Automatic Identification System (AIS) information was used. The scenes include five ports in Shanghai, Shenzhen, Tianjin, Yokohama and Singapore. The dataset uses GRD (Ground Range Detected) and SLC (Single Look Complex) products. It includes 11 ship classes and 11,346 ship slices. Among them, Cargo constitutes the majority, accounting for 72.47%, and some categories have too few samples. OpenSARShip 2.0 is similar to OpenSARShip; it has 34,528 SAR chips with AIS information. Some of the ship chips in OpenSARShip 2.0 contain undesired effects such as interference.
OpenSARUrban is used for the interpretation of urban SAR images. It provides 33,358 patches covering 21 major cities. It can be used for urban target classification and content-based image retrieval.
FUSAR-Ship was constructed via SAR–AIS matchup. The data sources are 126 GF-3 SAR images. It has 5000 ship chips with AIS information, covering 15 ship categories and 98 sub-categories. It has the following characteristics: high resolution, consistency, diversity, extensibility and large scale. It can also be used for detection, wake tracking and semantic segmentation.
In addition to the above military vehicles and ships, aircraft recognition has also been studied, but no public dataset exists yet. An airplane has many scattering points. Due to its complex structure, different parts have different and variable scattering mechanisms. Therefore, the feature diversity of aircraft renders aircraft recognition difficult.
In addition to the above real SAR data, many papers also use simulation to generate datasets to train recognition algorithms [22].
We can find that the above SAR datasets are very small when compared with optical datasets. Thus, many researchers try to solve the problem raised by limited samples, and the methods are shown in Section 5.3.

3.2. Evaluation Metrics

There are many indicators used to evaluate recognition algorithms, and their calculation is based on the confusion matrix. Thus, we first introduce the confusion matrix, as shown in Figure 5.
From this figure, we can understand the concept of TP (True Positive), FP (False Positive), FN (False Negative) and TN (True Negative). Based on the confusion matrix, the false positive rate and the true positive rate are calculated as follows:
$$\text{FP rate} = \frac{FP}{N}$$
$$\text{TP rate} = \frac{TP}{P}$$
Accuracy is generally used to evaluate the global performance of a model. It is calculated as follows:
$$\text{accuracy} = \frac{TP + TN}{P + N}$$
Precision represents the ratio of ships that were correctly found in a positive detected result. It is calculated as follows:
$$\text{precision} = \frac{TP}{TP + FP}$$
Recall represents the ratio of ships that were correctly found in the ground truth. It is calculated as follows:
$$\text{recall} = \frac{TP}{TP + FN}$$
Precision and recall are often contradictory. In order to consider both at the same time, the F1-score is used. It is computed as follows:
$$\text{F1-score} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}$$
The P-R (Precision–Recall) and ROC (Receiver Operating Characteristic) curves are also commonly used for comprehensive evaluation of recognition algorithms. The horizontal axis of the P-R curve is recall, and the vertical axis is precision. The larger the area under the P-R curve, the better the classifier; likewise, the larger the area under the ROC curve, the better the classifier.
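As a compact illustration (our own toy numbers, not from any cited experiment), the following sketch computes all of the above metrics from the entries of a binary confusion matrix:

```python
# Compute the metrics above from a binary confusion matrix.
def binary_metrics(tp, fp, fn, tn):
    p, n = tp + fn, fp + tn            # actual positives / negatives
    tp_rate = tp / p                    # true positive rate (= recall)
    fp_rate = fp / n                    # false positive rate
    accuracy = (tp + tn) / (p + n)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return dict(tp_rate=tp_rate, fp_rate=fp_rate, accuracy=accuracy,
                precision=precision, recall=recall, f1=f1)

print(binary_metrics(tp=90, fp=10, fn=5, tn=95))
```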

4. The Traditional Methods

Generally, traditional SAR ATR comprises template-matching-, machine-learning- and model-based methods, as shown in Figure 6.

4.1. Template-Matching-Based Methods

The template-matching-based methods build a template library from a large number of samples. Similarity is computed under a criterion such as Mean Square Error (MSE) [23,24], and the category with the highest matching similarity is used as the prediction. It can be divided into direct-matching and correlation-filtering methods. Although template matching is simple in engineering, it has the following problems. It is not robust enough and can adapt only to recognition under restricted conditions; under unrestricted conditions, performance degrades seriously. Furthermore, it requires many templates, which are difficult to obtain. As the number of categories and samples increases, the template library grows, and real-time performance worsens. Therefore, in the era of artificial intelligence, the application of template matching is gradually shrinking.
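A minimal sketch of the direct-matching idea (nearest template under MSE; the toy data, labels and function name are entirely our own):

```python
# Nearest-template classifier: predict the label of the stored template
# with the lowest mean square error to the query chip.
import numpy as np

def match_template(query, templates, labels):
    errors = [np.mean((query - t) ** 2) for t in templates]
    return labels[int(np.argmin(errors))]

# Toy usage: three stored 64x64 templates, one noisy query chip.
rng = np.random.default_rng(0)
templates = [rng.random((64, 64)) for _ in range(3)]
labels = ["tank", "truck", "ship"]
query = templates[1] + 0.05 * rng.random((64, 64))
print(match_template(query, templates, labels))  # -> "truck"
```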

4.2. Machine-Learning-Based Methods

As pattern-recognition theory has progressed, machine learning has also been adopted in SAR ATR. It has two steps, as shown in Figure 7 [25,26]. Firstly, features that are helpful for recognition are extracted, and a combination of them is selected as the feature vector. Then, according to a certain similarity measure, a classifier that can distinguish targets is designed. The process has two stages: training and testing. In the training phase, SAR image features are extracted, and the classifier is optimized using the extracted features and labels; through the optimization algorithm and the samples in the dataset, the model converges. When new samples are input, it outputs the result. The machine-learning method has low storage requirements and high processing speed.
Whether robust features can be extracted is crucial to the final recognition accuracy. Unlike targets in optical images, which have complete contours, targets in SAR images have sparse scattering centers and are very sensitive to azimuth changes [27,28]. Thus, extracting useful features is difficult in SAR ATR. Geometric structure features such as perimeter, area and aspect ratio, and electromagnetic-scattering features such as peak value and scattering center, are usually used. Transform features such as the Fourier and wavelet transforms, local invariant features such as SIFT (Scale-Invariant Feature Transform) and generalized invariant moments are also common. Features with strong discriminative ability play an important role in recognition.
Designing an appropriate classifier is another important step. Typical methods include a support vector machine, neural network, adaptive boosting, sparse representation, K-Nearest Neighbor (KNN) and Bayes.
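To make the two-step pipeline concrete, here is a toy sketch, entirely our own illustration: simple intensity statistics as hand-crafted features, plus an SVM classifier:

```python
# Feature extraction + classifier training, in the spirit of Figure 7.
import numpy as np
from sklearn.svm import SVC

def extract_features(chip):
    """Hand-crafted features: mean, std, peak value, bright-pixel ratio."""
    return np.array([chip.mean(), chip.std(), chip.max(),
                     (chip > chip.mean() + 2 * chip.std()).mean()])

rng = np.random.default_rng(0)
X_train = np.stack([extract_features(rng.random((64, 64))) for _ in range(100)])
y_train = rng.integers(0, 3, size=100)          # three toy classes

clf = SVC(kernel="rbf").fit(X_train, y_train)   # training phase
print(clf.predict(X_train[:5]))                 # testing phase
```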
The deep-learning network has emerged in recent years. The most typical network is CNN. It adopts the strategy of automatic feature extraction such that it can extract robust features from many samples, which is more advantageous than the traditional method. A substantial amount of research shows that deep learning is an effective method for SAR ATR.

4.3. Model-Based Method

The model-based method mainly generates images under different conditions through a 3D electromagnetic-scattering model or a Computer-Aided Design (CAD) model [29]. Because the model can be processed and manipulated during computation, the electromagnetic-scattering features under different conditions can be flexibly simulated. Its core is the PEMS (Prediction, Extraction, Matching and Search) subsystem. However, this method has some shortcomings. First, the physical simulation is difficult to run in real time. Second, the data generated via simulation are not electromagnetic-scattering characteristics with a clear physical meaning. Third, when the structure of the target or its scenario changes, the overall calculation must be re-conducted. These shortcomings restrict the application of the physical model in practice.

5. The Deep-Learning-Based Methods

Since the success of AlexNet in the ILSVRC (ImageNet Large Scale Visual Recognition Challenge), the key to image classification has shifted from feature designing to CNN designing. Many CNNs, such as VGGNet [30], Inception [31], ResNet [32], ResNeXt [33] and DenseNet [34], have been proposed. Due to its huge advantages, CNN is also used in SAR ATR and shows good performance, and many papers have emerged. This paper summarizes the deep-learning-based SAR ATR algorithms along eight aspects, as shown in Figure 8.

5.1. The Non-CNN Models

Before the success of CNN, many non-CNN deep-learning models were used for feature representation, as shown in Figure 9: for example, the Restricted Boltzmann Machine (RBM) [35], the Deep Belief Network (DBN) [36] and the auto-encoder. An RBM consists of a visible layer and a hidden layer, fully connected to each other; it learns a probability model from the input data. A DBN is composed of multiple stacked RBMs. It uses a layer-by-layer unsupervised method to learn parameters, which mitigates the difficulty of optimizing models with many hidden layers; it can train deep networks and laid a foundation for the results of deep learning. An auto-encoder learns to reconstruct its input at its output. It is an unsupervised learning process mainly used for data dimension reduction or feature extraction.
Reference [37] proposed a discriminant deep belief network, which is used to learn high-level features of targets. A weak classifier is trained with pseudo-labels. Then, a specific SAR image block is represented by a set of projection vectors. Finally, projection vectors are input to produce discriminative features for classification. Reference [38] used the unsupervised learning method to build a pre-training model for feature extraction. This method can effectively use more samples. Reference [39] proposed compact convolutional auto-encoder for ATR. It produced a more discriminative feature representation by imposing compactness constraints on the encoder while minimizing the reconstruction loss. In reference [40], the deep network is divided into a convolutional auto-encoder network and a shallow neural network. The convolutional auto-encoder is trained via unsupervised learning as a feature extractor, and the shallow neural network containing a full connection layer is trained via supervised learning to predict the target category. Carlos et al. [41] also used a de-noising auto-encoder to build a pre-training network for feature extraction to classify ships in SAR images.
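The following is a minimal convolutional auto-encoder sketch in the spirit of references [39,40,41]; the architecture, channel counts and sizes are our own assumptions. Trained on unlabeled chips with a reconstruction loss, its encoder can then be reused as a feature extractor for a shallow classifier:

```python
# Convolutional auto-encoder: unsupervised feature extractor for SAR chips.
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # 1x64x64 -> 32x16x16
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(                 # 32x16x16 -> 1x64x64
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)            # features usable by a classifier
        return self.decoder(z), z

model = ConvAutoEncoder()
x = torch.rand(8, 1, 64, 64)           # stand-in for unlabeled SAR chips
recon, features = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction loss
```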
Compared with these non-CNN models, CNN is strongly supervised and, thus, has the advantage of high accuracy. Thus, in the deep-learning era, these non-CNN models are not mainstream; related research is also relatively minimal.

5.2. The CNN Models

The CNNs used in SAR ATR fall into four groups: the off-the-shelf CNN, the specialized CNN, the attention-based CNN and the capsule network, as shown in Figure 10.

5.2.1. The Off-the-Shelf CNN Borrowed from Computer Vision

In the early years, researchers preferred to adopt off-the-shelf CNN models in SAR ATR, because the effectiveness of CNN had not yet been proven in this field.
Shao et al. [42] compared the existing CNNs on SAR ATR in detail for the first time. The classical CNNs—for example, AlexNet, VGGNet, GoogLeNet, ResNet, DenseNet and SENet—were applied on MSTAR. The results showed that most of the CNNs can obtain an accuracy of 99% on MSTAR, demonstrating superior performance compared with traditional algorithms. The running speeds are also analyzed in the paper. Fu et al. [43] used ResNet to obtain good recognition performance on the small dataset. A dropout layer inside the building block is also used, and the center and softmax losses are adopted. It achieved an accuracy of 99.67% on MSTAR. Soldin et al. [44] used ResNet-18 on MSTAR to verify the effectiveness of deep-learning-based SAR ATR; it had 99% accuracy with 10 types of targets. Anas et al. [45] adopted VGG-16 to extract features. Parameters were first trained on ImageNet, and the last three convolutional layers were re-trained on MSTAR. It achieved an accuracy of 97.91% on 10 classes.
The above studies demonstrate the effectiveness of deep learning in SAR ATR. However, due to the differences between SAR and optical images, directly using computer-vision CNNs for SAR ATR is not always appropriate. Thus, researchers are more inclined to design specialized CNNs, as shown in the next section.

5.2.2. Specialized CNN for SAR ATR

Researchers tend to design specialized CNN for SAR ATR. The specialized CNN can be divided into shallow and deep forms.
a.
The shallow CNN
Morgan et al. [46] designed a new CNN for SAR ATR. It has three convolutional layers, two max-pooling layers and one fully connected layer. It achieved 92.3% accuracy on a 10-way MSTAR dataset. Chen et al. [47] proposed a fully convolutional network. It has five convolutional layers, three max-pooling layers and one softmax layer. Results showed that it can achieve 99% accuracy. Xu et al. [48] proposed SARNet for SAR ATR. It has two convolutional-pooling and two full-connected layers. It achieved 95.68% on an MSTAR dataset. Li et al. [49] proposed DeepSAR-Net for learning discriminative features without human intervention. It consists of four repeated convolutional, normalization and max-pooling layers and two repeated convolutional, normalization and ReLu layers. It achieved 98.36% accuracy on three-class MSTAR. Liu et al. [50] presented a new convolutional network for SAR ATR. It has six convolutional layers and one fully connected layer. Data augmentation is also used for overcoming the limited sample problem. It achieved 99.48% accuracy on five-class MSTAR. Qiao et al. [51] proposed an improved CNN called Q-Net based on the characteristics of SAR images. The experiments are conducted on MSTAR. Q-Net has only three convolutional layers, which are very shallow compared with the classical CNNs. It achieved 97.58% accuracy on three-class MSTAR and 97.32% accuracy on ten-class MSTAR. Zhou et al. [52] used large-margin softmax and batch-normalization based CNN to increase the separability of samples. It has only four convolutional layers. The experiments conducted on MSTAR showed the robustness of the classifier. It achieved 96.44% accuracy on 10-class MSTAR. Cho et al. [53] proposed a two-way feature additional CNN for considering the pose information of the target. The two-way features are aggregated and input into the fully-connected layers. The CNN that they used has seven convolutional layers. It achieved 94.38% accuracy on MSTAR. Zhao et al. [54] used multi-stream CNN for solving the problem of limited data. It has only four convolutional layers. The multiple views of the same target are input to MS-CNN. The experiments conducted on MSTAR SOC and EOC showed the superiority of the method. It achieved 99.92% accuracy on 10-class MSTAR under SOC. Lang et al. [55] presented LW-CMDANet. It designs a four-layer CNN model combined with hinge loss. It achieved 92.98% accuracy on 10-class MSTAR.
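As a reference point for the scale of these models, here is a sketch of a shallow network in the spirit of Morgan et al. [46] (three convolutional layers, two max-pooling layers, one fully connected layer); the exact channel counts and kernel sizes are our own assumptions:

```python
# Shallow SAR ATR CNN for 128x128 MSTAR chips (illustrative sizes).
import torch
import torch.nn as nn

class ShallowSARNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 18, 9), nn.ReLU(), nn.MaxPool2d(6),   # conv1 + pool
            nn.Conv2d(18, 36, 5), nn.ReLU(), nn.MaxPool2d(4),  # conv2 + pool
            nn.Conv2d(36, 120, 4), nn.ReLU())                  # conv3 -> 120x1x1
        self.classifier = nn.Linear(120, num_classes)          # single FC layer

    def forward(self, x):              # x: (B, 1, 128, 128)
        f = self.features(x).flatten(1)
        return self.classifier(f)

logits = ShallowSARNet()(torch.rand(4, 1, 128, 128))
print(logits.shape)                    # torch.Size([4, 10])
```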
The above-mentioned CNNs for SAR ATR appeared in the early stage. They are stacked from several convolutional and pooling layers and connected to a classifier at the end, and they have few layers. According to common knowledge in deep learning, the deeper the network, the stronger its feature-expression ability. Thus, their recognition abilities are weaker than those of the deep CNNs used in computer vision. Therefore, it is necessary to use deep CNNs for SAR ATR as well. A large number of related achievements have appeared; we review them in detail in the next section.
b.
The deep CNN
Zhai et al. [56] proposed MF-SarNet for SAR ATR. The fire module is used for extracting features with fewer parameters. MF-SarNet consists of eighteen convolutional layers, two fully connected layers and eight fire modules. Data augmentation based on clockwise rotation is used to expand the dataset 360 times. It achieved 98.53% accuracy on MSTAR. Xie et al. [57] presented a neural network named Umbrella. Umbrella has two blocks: one is a summation of three 3-layer paths, and the other is the concatenation of three 3-layer paths. The fusion of the six paths can extract rich features from different spatial scales. The designed CNN has five convolutional layers and two umbrella layers. It achieved 99% accuracy on 10-class MSTAR. Huang et al. [58] presented a new CNN called group squeeze–excitation sparsely connected convolutional networks. It conducts reweighting with fewer parameters and is more efficient than DenseNet. It achieved 99.79% accuracy on MSTAR. Dong et al. [59] proposed a global receptive field for building a special hierarchy of feed-forward neural networks. It has two feature-generation and refinement modules. Multiple receptive signals are used to extract features, and expert knowledge is transplanted into the neural network. It achieved 95.07% accuracy on MSTAR. Wang et al. [60] presented SSF-Net with a sparse data-feature extraction module; other layers are also used to improve efficiency. It has 99.55% accuracy. Wang et al. [61] proposed DNet, which can learn scale information. Special layers are added to render it more standardized and practical. It achieved 99% accuracy on 10-class MSTAR. Feng et al. [62] proposed a convolutional neural network to fully learn the feature information of SAR images. It first performs noise suppression on SAR images and consists of 7 convolutional layers and 7 pooling layers. It shows good results on 10-class MSTAR. Pei et al. [63] presented a feature extraction and fusion network for recognizing targets in multi-view SAR images. It is based on a multiple-input network with deformable convolution and squeeze-and-excitation. It achieved 99.31% accuracy on 10-class MSTAR. Wang et al. [64] proposed a multi-view CNN with deformable convolution for limited datasets. The deformable convolution can learn characteristics of the targets and capture more information from different views. Shang et al. [65] proposed M-Net for solving the over-fitting problem caused by limited datasets. M-Net uses information recorders to store spatial characteristics and uses spatial similarity to predict the labels of unknown samples. To optimize M-Net better, parameter-migration training is used: the first step trains the CNN inside M-Net to initialize parameters; the second step uses these initialization parameters in M-Net and trains on an MSTAR dataset. It achieved 99.71% accuracy on 10-class MSTAR, which shows the effectiveness of M-Net. Lin et al. [66] adopted a highway network to allow information to pass through each layer of a deep neural network without hindrance, effectively reducing the impact of the vanishing-gradient problem. The convolutional highway network is based on a gate mechanism with two basic structures: a conversion gate and a handling gate. One part of the input is converted through the conversion gate, and the other part is passed directly through the handling gate. It achieved 99.09% accuracy on 10-class MSTAR.
Due to the development of CNN, SAR ATR also gradually adopted the ideas from this development. These specialized CNNs take into account the specific features of SAR images, such as speckle noise, sensitivity to angle and limited samples.

5.2.3. Attention-Based CNN

The attention mechanism—for example, SENet and CBAM (Convolutional Block Attention Module)—can assign weights according to the importance of regions or channels. It can capture more valuable information while adding little computation. Thus, it is widely used in computer vision and SAR ATR. Wang et al. [67] highlighted that irrelevant information extracted by the CNN can disturb the classifier. Thus, they designed a novel network, ESENet, with an enhanced squeeze-and-excitation module. ESENet has four convolutional layers, three max-pooling layers and one fully-connected layer. The enhanced squeeze-and-excitation module uses a convolutional layer to extract more effective features. It achieved 97.32% accuracy on MSTAR. Shi et al. [68] presented a deep residual shrinkage network with an attention module. The experiments on MSTAR showed that it can reduce the number of parameters while maintaining accuracy. Zhang et al. [69] used an attention module for SAR ATR on limited samples. The CBAM is lightweight and effective; it sequentially applies channel and spatial attention to learn "what" and "where". The results on MSTAR showed that it achieved 99.35% on a ten-class dataset. Li et al. [70] proposed channel and spatial attention modules to refine and suppress features. Two lightweight layers are used to encode the weight map. Experiments on MSTAR showed good performance (112.54 K parameters with 99.51% accuracy on 10-class MSTAR). Su et al. [71] proposed a complete frequency channel attention network for recognizing noisy images. It uses the 2D discrete cosine transform to select the important channels, and the method is robust to noise. The experiments showed that it outperforms CBAM (97.65% versus 94.38% on the WHU-SAR6 dataset). Wang et al. [72] presented a multi-view attention network to learn features from different aspects. Spatial attention is used to find the important region, and an LSTM (Long Short-Term Memory) is used to fuse features from adjacent azimuths. It achieved 99.38% accuracy on 10-class MSTAR.
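A sketch of the squeeze-and-excitation (SE) channel-attention block behind the SENet/ESENet-style models above; this is the generic SE block, not the exact enhanced module of [67], and the sizes are our own choices:

```python
# Squeeze-and-excitation block: reweight channels by learned importance.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: channel weights
        return x * w                                # reweight the feature maps

out = SEBlock(32)(torch.rand(2, 32, 16, 16))
```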
Through the above papers, we can find that the attention mechanisms commonly used in SAR ATR are SENet and CBAM. They are borrowed mainly from computer vision. In the future, we should design an attention mechanism based on SAR images.

5.2.4. Capsule Network

A capsule network can be used to improve the interaction between features. Every capsule is a vector, and only the capsules activated by the target contribute to the prediction [73].
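The core non-linearity is the "squash" function commonly used in capsule networks [73]: it preserves a capsule vector's orientation while normalizing its length into [0, 1), so that length can act as an existence probability. A minimal sketch:

```python
# Capsule "squash": v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
import torch

def squash(s, dim=-1, eps=1e-8):
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

v = squash(torch.rand(8, 10, 16))    # 10 capsules, 16-dimensional each
print(v.norm(dim=-1).max() < 1.0)    # capsule lengths stay below 1
```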
Shah et al. [74] adopted a capsule network for SAR ATR. It has one convolutional layer and two capsule layers, and its demand for training data is small. It has an accuracy of 98.14% on MSTAR. Yang et al. [75] combined dilated convolution and a capsule network for SAR ATR; the combination is less hungry for training samples. It achieved 97.15% accuracy on 10-class MSTAR. Guo et al. [76] used a capsule network for high-accuracy recognition. It can connect every target in an SAR image and is learned through a vector-based fully connected operation. It shows superior robustness compared with CNN. Ren et al. [77] proposed a new capsule network for improving performance under EOC. Multiple dilated convolutions are adopted for extracting multi-sized features, feature refinement is used for extracting discriminative features, and a feature-pose-preserving layer is adopted for high accuracy. It achieved 99.18% accuracy on 10-class MSTAR.
A capsule network performs better than CNN in some cases. However, it has a large amount of computation, a narrow range of adaptation and little support for other tasks, so it is less used today.

5.2.5. Others

In addition to the above content, there are some other achievements in applying CNN to SAR ATR. Some examples include regularization, feature fusion, and so on.
Feng et al. [78] studied the influences of data augmentation, the L2 regularization term and dropout on MSTAR. They also selected AlexNet and ResNet to train the ATR model. Results showed that the AlexNet series with dropout is optimized better, L2 regularization terms can improve accuracy, and data augmentation is effective on small datasets, as deep-learning models are always data-hungry and SAR images are scarce compared with optical images. Kuang et al. [79] investigated the effect of the amount of training data; the experiments conducted on MSTAR identified how little training data can still yield a good result. Wang et al. [80] proposed multi-level feature fusion for SAR ATR. The features come from ResNet, and different-level features are fused to obtain good classification performance. Li et al. [81] proposed a multi-aspect SAR recognition method based on self-attention. It can find the relationships among targets in images. A convolutional auto-encoder is used to pre-train the network, which improves the anti-noise ability and reduces the dependence on a large dataset. Zhao et al. [82] proposed EfficientNet with a GRU (Gated Recurrent Unit), which is robust to the angle of incidence.

5.3. Methods to Solve the Problem Raised by Limited Samples

Due to the powerful feature-extracting ability of CNN, CNN shows great advantages on SAR ATR. However, the training of CNN depends heavily on labeled data. The performance will decrease dramatically when the labeled samples are insufficient. What is more, SAR images are not easily available. Therefore, it is absolutely necessary to improve the performance with limited samples. Common methods include data augmentation, transfer learning, generating new samples, few-shot learning, metric learning, semi-supervised learning and adding domain knowledge. They are shown in Figure 11.

5.3.1. Data Augmentation

Data augmentation is commonly used in deep learning to improve the performance of neural networks [83]. Due to the scattering characteristics of SAR, the same target may look quite different at different azimuth angles, so the rotation method commonly used for optical images is not suitable here. How to effectively expand SAR data therefore needs special consideration. Recently, researchers have studied this issue and made some progress; the methods fall into the categories shown in Figure 12.
Ding et al. [84] studied the results of translation, noise addition and sample synthesis. In the sample synthesis method, in order to generate a special azimuth angle sample, the combination of two closest images is taken as the composite sample. It achieved 93.16% accuracy on 10-class MSTAR. Ding et al. [85] used translation and random speckle noising to strengthen the invariance of CNN models. Hidetoshi et al. [86] discussed the translation invariance of CNN on SAR ATR. The data augmentation is conducted with the random cropped patches of 96 × 96 from the chips of 100 × 100 pixels in the training phase. They conducted the experiments on MSTAR before and after the data augmentation. The results showed that after data augmentation, it achieved 99.6% accuracy. Jiang et al. [87] presented Gabor-deep CNN for a limited SAR training dataset. The Gabor features were used at first. Experimentation on MSTAR proved the effectiveness of the method. It achieved 96.32% accuracy on 10-class MSTAR. Lei et al. [88] proposed clutter reconstruction for augmentation. The augmentation is conducted from the aspect of signal and noise. The variable convolution kernels are used to model the spatial correlation. Furthermore, the background reflectance was reconstructed via power-law transform. Experiments showed that this method is effective and universal. Zhang et al. [89] used existing training samples to build unknown training samples, so as to improve the robustness of CNN and improve its classification accuracy. Lv et al. [90] presented a data augmentation method based on ASC (Attribute Scattering Center). It uses sparse representation to extract ASC from a single image and selects some ASCs to rebuild the image. The rebuilt images have a function of de-noising. By conducting this step several times, new images can be produced as usable training data. CNN is designed for classification and trained through enhanced images. On MSTAR, the proposed method can classify 10 classes under SOC with an accuracy of 99.48%.
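Two of the augmentations named above, translation and speckle noising, can be sketched as follows; the multiplicative gamma speckle model and all parameters are our own illustrative choices:

```python
# SAR-appropriate augmentations: random translation + multiplicative speckle.
import numpy as np

def random_translate(chip, max_shift=8, rng=np.random.default_rng()):
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(chip, dy, axis=0), dx, axis=1)

def add_speckle(chip, looks=4, rng=np.random.default_rng()):
    # Gamma-distributed multiplicative noise, as in L-look SAR intensity.
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=chip.shape)
    return chip * noise

chip = np.random.rand(128, 128)
augmented = add_speckle(random_translate(chip))
```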
Data augmentation is widely used in deep learning, with many effective methods, for example, flipping, cropping, rotation, shifting, random erasing, mosaic, mixup, cutout and cutmix. In SAR ATR, data augmentation is relatively simple to use but less studied. Because SAR data acquisition is limited, it is necessary to focus on data augmentation. Beyond data augmentation, GAN and electromagnetic simulation can also be used to generate new samples.

5.3.2. GAN for Generating New Samples

A GAN has two adversarial networks: a generator and a discriminator. The task of the generator is to produce an image close to the real image; the task of the discriminator is to determine whether the produced image is real. After a substantial amount of training, the generator can produce near-real images [91]. Using GAN-produced samples to train the classifier is therefore promising. The methods fall into the categories shown in Figure 13.
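A minimal sketch of one adversarial training step, using toy fully connected networks on flattened 64 × 64 chips; all architectures and hyper-parameters here are placeholders of ours, not any cited model:

```python
# One GAN training step: discriminator learns real vs. fake,
# generator learns to fool the discriminator.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64), nn.Tanh())       # noise -> chip
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())          # chip -> real prob
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(16, 64 * 64)        # stand-in for real SAR chips
z = torch.randn(16, 100)

# Discriminator step: real -> 1, fake -> 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: make the discriminator output 1 on generated chips.
loss_g = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```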
Guo et al. [92] adopted GANs to produce SAR images and addressed the difficulty of model training caused by noise through clutter normalization. They compared SAR samples generated by various GAN models, including DCGAN (Deep Convolutional Generative Adversarial Networks) and WGAN (Wasserstein Generative Adversarial Networks) [93,94,95,96]. Cui et al. [97] used WGAN to produce extended data and proposed a data-selection method to select high-quality images with a specific azimuth angle. The performance of this method was demonstrated on the MSTAR classification dataset; it achieved 91.6% accuracy on 10-class MSTAR. Zhu et al. [98] adopted CycleGAN to convert simulated data to real data. CycleGAN is an unpaired domain-adaptive learning method which can realize image conversion between different domains. In the training phase, a generation network that converts simulation samples to real samples is built; in the test phase, the trained network converts simulation samples into samples closer to real ones. Such converted simulation samples are effective in improving the performance of the classifier [99]; results showed an approximately 10% increase in accuracy. Wagner et al. [100] generated samples through elastic deformation and affine transformation, achieving 99.5% accuracy on 10-class MSTAR. Hwang et al. [101] presented a triple-GAN to improve SAR ATR performance; another classifier was added to make the generator converge to the real data distribution. Luo et al. [102] proposed a method to generate samples of minority classes, expanded via automatic-search-based data augmentation. This method can produce good samples for small classes so as to alleviate the imbalance problem; it improved the accuracy of the minority class by 11.68%. Reference [103] proposed a translation network between optical and SAR images via an improved conditional GAN; it achieved 77.97% accuracy on the SPH4 dataset.
Through training, GAN can generate rich samples, which is a better way to expand data. However, due to the problems of training difficulty, lack of stability and collapse of GANs, it is a challenging task to improve the performance of the classifier via adversarial training. What is more, due to the diversity of SAR-imaging performance and the complexity of the mechanism, the image produced by GAN is still different from the actual image. This will lead to the poor migration ability of the trained model, and it is difficult for the model to adapt to the new samples with large changes.

5.3.3. Electromagnetic Simulation for Generating New Samples

Using electromagnetic simulation to generate new samples is another idea in SAR ATR, as shown in Figure 14. RaySAR is a typical tool; electromagnetic parameters related to the target must be set manually, and the quality of the simulated image depends on this setting.
In order to determine effective electromagnetic parameters, Niu et al. [104] used a neural network for regression prediction of electromagnetic-simulation parameters. In the training stage, a series of electromagnetic-simulation parameters were set according to experience, and simulation images were generated by RaySAR. The simulation images were used as input, and the electromagnetic-simulation parameters were used as output to train the model. In the test phase, the real SAR target is input into the trained model, and the output is the best electromagnetic-simulation parameters predicted by the network, which can be used to generate SAR simulation samples. Hansen et al. [105] studied transfer learning between simulated and real SAR images. The simulated dataset is obtained from electromagnetic reflection characteristics, so samples in the simulated dataset do not require geometric duplication. Experiments showed that pre-training on this simulated dataset can make the model converge faster. Cha et al. [106] designed an SAR-simulation data-adjustment method based on a deep residual network: the residual network learns the mapping from simulated data to real data, and this function is used to adjust the simulated images. Ahmadibeni et al. [107] proposed an SAR image electromagnetic-simulation system for ATR. First, 250 CAD models of different objects were prepared. The simulation process consists of four steps. Firstly, the electromagnetic backscatter reflectance of the target is captured. Secondly, simulated samples are generated using the noise modulation transfer function. Thirdly, the target shadow is projected from eight different perspective views. Finally, the surface regions producing high-intensity backscatter are highlighted to further enhance the realism of the simulated SAR images. Zhang et al. [108] studied accurate recognition using only simulated samples. Due to the distribution difference between simulated and real data, the recognition effect was poor. Therefore, they adopted a hierarchical identification method. Firstly, a pre-trained CNN is adopted to classify the image. Then, samples that are easy to misclassify are found and reclassified. For these samples, they proposed a multiple-similarity fusion classifier, which measures their relations and then reclassifies them.
Electromagnetic simulation can ease the limited-sample problem of SAR ATR and improve recognition accuracy. However, electromagnetically simulated SAR images still face a gap in authenticity compared with actual samples, so the realism of electromagnetic simulation must be continuously improved.

5.3.4. Transfer Learning

Transfer learning can exploit the common knowledge between a source task and a target task. It is widely used when training samples are limited. The methods fall into the categories shown in Figure 15.
Reference [109] used the CIFAR-10 dataset to pre-train the network; the intermediate layers were then used for TerraSAR classification. It achieved 64.64% accuracy on the TerraSAR dataset. Marmanis et al. [110] pointed out that, due to the great disparity between optical and SAR data, it is difficult to apply a network trained on optical data; even the low-level network features are difficult to transfer effectively. Lu et al. [111] used off-the-shelf pre-trained models such as ResNet-50 and VGG-16. The low-level neural layers share common features across different tasks, so they changed only the fully connected layers and classifiers. It achieved 98.57% accuracy on the TerraSAR-X dataset. Zhai et al. [112] presented an efficient transferred CNN for SAR ATR. They first initialized MS-CNN, then trained it on a source dataset with the shallow layers' (before conv4) parameters fixed, and finally trained it on the target dataset. It achieved 98.83% accuracy on 10-class MSTAR. Ying et al. [113] constructed a lightweight Atrous-Inception module for SAR ATR. To train it, several types of images were transferred to the SAR task, and classification performance was improved on limited datasets. It achieved 97.97% accuracy on 10-class MSTAR. Song et al. [114] proposed a data- and feature-level transfer learning method. CycleGAN was used to convert optical images into intermediate-domain SAR images; after that, a domain-transfer method realized recognition through domain adaptation between the intermediate domain and the target SAR domain. Experiments demonstrated that the two-level structure performs well on military and civilian classification tasks. Zhang et al. [115] trained CNNs on MSTAR and fine-tuned them on OpenSARShip, achieving 79.12% accuracy on OpenSARShip. Huang et al. [116] constructed a large land-cover SAR dataset with 150 classes and up to 0.1 million chips. CNN models trained on this dataset were used as pre-trained models and then retrained on MSTAR. The results showed an accuracy of 99.46%, which shows the benefit of pre-training on similar datasets. Zhang et al. [117] used many SAR images to train a GAN to learn common features; a pre-trained layer was then reused to transfer general features to SAR ATR tasks. The accuracy improved from 92.76% to 96.24% on MSTAR. Reference [118] proposed a task-driven domain-adaptation transfer learning method based on simulated SAR data. Reference [119] produced a large number of SAR images via simulation; the pre-trained weights were used as initial parameters and transferred to actual SAR images. Experiments on MSTAR showed an accuracy of 99.78%. Wang et al. [120] used transfer learning to narrow the gap between simulation and SAR images. They first pre-trained the model on a substantial amount of simulation data and minimal SAR data, then fine-tuned it on real SAR data. Experiments on MSTAR showed the superiority of the method, achieving 94.4% accuracy on 10-class MSTAR. Huang et al. [121] built a pre-training model through unsupervised learning using a stacked convolutional auto-encoding network and introduced a reconstruction bypass to provide regularization constraints. It achieved 96.62% accuracy on 10-class MSTAR. Borgwardt et al. [122] improved the network via a domain-adaptive learning method based on minimizing the difference between domains.
Domain-adaptive learning reduces the difference between the source and target data in the feature domain, and its addition can further improve performance after transfer. Huang et al. [123] discussed the transfer problem in SAR ATR from three aspects: which network, which layer and how to carry out effective transfer learning. The conclusions were that large networks have better transfer potential and that the closer the source is to the target, the better the effect.
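A fine-tuning sketch matching the recipe discussed above (freeze shallow layers, retrain the deepest block and a new classifier head); ResNet-18 pre-trained on ImageNet is an illustrative choice of ours, not the specific setup of any cited paper:

```python
# Transfer learning: freeze shared low-level layers, retrain the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained on optical data
for p in model.parameters():
    p.requires_grad = False                        # freeze shallow layers
for p in model.layer4.parameters():
    p.requires_grad = True                         # re-train the deepest block
model.fc = nn.Linear(model.fc.in_features, 10)     # new 10-class SAR head
# (Single-channel SAR chips can be replicated to 3 channels at input.)

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)   # fine-tune on SAR chips
```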
Transfer learning has achieved good results. However, its theoretical basis is that the target- and source-domain data have similar characteristics. SAR and optical images differ greatly in imaging mechanism and noise, so transfer learning needs to be reconsidered in SAR ATR.

5.3.5. Few-Shot Learning

Few-shot learning can fit an unseen category from limited samples after training on a large amount of data from other categories; the prior knowledge gained in training can be generalized and transferred to the new task. It can be seen as a special case of meta-learning. Few-shot learning is also used in SAR ATR, as shown in Figure 16.
Wang et al. [124] proposed a few-shot method based on a conv-BiLSTM prototypical network. Experiments on an MSTAR dataset with three types and five training samples showed that it can achieve an accuracy of 90%. Wang et al. [125] combined meta-learning with amortized variational inference. The global parameters of meta-learning were used as the extractor, and the task-specific parameters of the probability distribution could adapt to a task with a small number of samples. Experiments showed that it could obtain 97.3% accuracy on MSTAR. Wang et al. [126] proposed a hybrid inference network consisting of two stages: in the first stage, SAR images are mapped into an embedding space; in the second stage, the samples in the embedding space are classified by combining inductive and transductive reasoning, and the classification results are obtained by combining the two reasoning methods. They proposed an enhanced mixed loss to obtain better separability between classes. The results on MSTAR showed that it performs well in few-shot SAR classification. In order to transfer prior knowledge from simulation images to SAR images, Wang et al. [127] proposed a probabilistic-reasoning and meta-learning-based method. First, they used the features extracted from simulation data to learn the global parameters of the model. Secondly, new features were extracted from the real data. Finally, a prediction distribution was generated to represent the confidence level of the target class. The experimental results showed that the model is superior with limited training samples; it achieved 97.6% accuracy on 10-class MSTAR. In order to learn more discriminant features from labeled data, Wang et al. [128] proposed an attribute-guided multi-scale model. Complex-valued images were used for sub-band decomposition. The proposed model combines multi-scale features to improve the distinguishing ability. A priori binary attributes of the SAR target were used, and an additional classification task was added. Li et al. [129] combined graph neural networks with meta-learning and proposed a new graph meta-learning method. Simulated SAR data were first used to obtain meta-knowledge. The labeled and unlabeled data were embedded into vectors, represented by a fully connected graph. The graph was iteratively updated via neighborhood aggregation to obtain a new representation of nodes and their relationships. Finally, the prediction distribution of the target class was generated by combining the values of nodes and edges. Experiments showed its superior accuracy with minimal training data. Fu et al. [130] presented a meta-learning framework for SAR ATR. It can learn appropriate update strategies and achieve fast adaptation by training on images of new tasks. Three transfer-learning methods were adopted to overcome meta-learning problems. The results showed that meta-learning is a good method for SAR ATR with limited samples, achieving 1.7% and 2.3% improvements for one-shot and five-shot recognition on an NIST-SAR dataset.
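The prototypical-network idea used in methods such as [124] can be sketched as follows (the toy encoder and episode sizes are our own choices): each class is represented by the mean embedding of its support samples, and a query is assigned to the nearest prototype.

```python
# Prototypical-network classification step for a 3-way, 5-shot episode.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 64))  # toy embedder

def proto_classify(support, support_labels, query, n_classes):
    zs, zq = encoder(support), encoder(query)
    # One prototype per class: the mean embedding of its support samples.
    protos = torch.stack([zs[support_labels == c].mean(0)
                          for c in range(n_classes)])
    dists = torch.cdist(zq, protos)     # Euclidean distance to each prototype
    return (-dists).softmax(dim=1)      # nearest prototype = highest prob

support = torch.rand(15, 1, 64, 64)
labels = torch.arange(3).repeat_interleave(5)
query = torch.rand(6, 1, 64, 64)
print(proto_classify(support, labels, query, n_classes=3).shape)  # (6, 3)
```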
Few-shot learning is a solution in the case of insufficient SAR samples. Although its performance is poor compared with that of strong supervision, it still has certain research value. However, as SAR sensors become more common, and it becomes easier to collect large amounts of SAR data, the benefits of such methods will be further reduced.

5.3.6. Semi-Supervised Learning

Collecting and labeling SAR images require a substantial amount of work and are difficult to realize. Semi-supervised learning can utilize both labeled and unlabeled data and improve learning performance; thus, it attracts the attention of many researchers in SAR ATR. The methods fall into the categories shown in Figure 17.
GAN can effectively estimate the distribution of data from training samples, so it has been used for semi-supervised SAR ATR [131]. Similar to a standard GAN, a semi-supervised GAN also has a generator and a discriminator, but the architecture is more complex. At the beginning of training, the network can only generate noise-like samples; after a period of training, the generator produces more realistic samples, indicating that it has learned the data distribution. Gao et al. [132] proposed a deep convolutional GAN (DCGAN) for semi-supervised learning, in which two DCGAN discriminators were used for joint training. Experiments on MSTAR showed an accuracy of 98.14% with a 20% unlabeled rate. Zheng et al. [133] combined GAN with CNN to realize semi-supervised learning. They used the GAN to generate labeled images, and label-smoothing regularization was also applied. Experiments on MSTAR demonstrated the effectiveness of the method. Gao et al. [134] used more than one generator to stabilize semi-supervised GAN training. A multi-classifier was used, and the labeled images were utilized during training, sharing the underlying layers with the discriminator; the upper layers were then fine-tuned with a small number of labeled SAR images to construct the recognition network. It achieved 85.23%, 90.82% and 97.81% accuracy with 20%, 40% and 100% of the samples on MSTAR. El-Darymli et al. [135] proposed a teacher–student semi-supervised method to train the model on a limited dataset. Firstly, the dataset was divided into consistent and confident unlabeled samples. Then, the student was used to generate the pseudo-labels. Finally, the pseudo-labeled, unlabeled and labeled data were mixed to train the model. Wang et al. [136] proposed a semi-supervised learning algorithm based on a self-consistency augmentation rule, hybrid (mixing-based) learning and loss learning, which utilizes unlabeled data during training. The self-consistency rule forces differently augmented views of a sample to share the same label and balances the amounts of labeled and unlabeled data, so that the training approaches the effectiveness of supervised learning and the network obtains better performance. They then mixed labeled, unlabeled and augmented samples, so that the label information could better propagate into the mixed samples. The overall loss is a weighted sum of cross-entropy loss and mean-square-error loss. Experiments on the MSTAR and OpenSARShip datasets showed performance close to supervised learning. Gao et al. [137] proposed a semi-supervised classification algorithm based on dataset attention and bias-variance decomposition. The training set is represented by a dataset attention module, so that unlabeled data that contribute little or are hard to learn receive less attention. In the training phase, every unlabeled image is fed into the network, and its most likely class prediction is treated as its pseudo-label. It achieved 99.63% accuracy on 10-class MSTAR. Gao et al. [138] presented an active semi-supervised CNN algorithm. An active-learning method was used to select the most informative samples from the unlabeled dataset, and a new regularization term was added to the loss function; together, these operations maximize the likelihood of the unlabeled data. The accuracy is 95.7% with only 236 labeled samples. Zhang et al. [139] presented a semi-supervised SAR ATR method in which the labeled SAR images are first used to initialize the model, and the trained model then predicts the labels of unlabeled images. After repeating these steps, a robust model is obtained; the trained model produces predicted probabilities, and an EM-based method assigns the final predicted labels. It achieved 99.83% accuracy on 10-class MSTAR. Tian et al. [140] proposed a multi-block mixing method for semi-supervised SAR ATR, in which a multi-block hybrid operation produces new SAR images to improve accuracy. It achieved 99.67% accuracy with 80% labeled samples. Chen et al. [141] presented a semi-supervised algorithm based on a consistency criterion and domain adaptation, where weakly and strongly augmented unlabeled data are used to predict the pseudo-labels and to train the model, respectively.
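Several of the methods above ([135,138,139]) share the same pseudo-labeling skeleton: train on labeled data, then assign pseudo-labels to confident predictions on unlabeled data and retrain. The following minimal sketch illustrates that skeleton; the model, data loaders and confidence threshold are illustrative assumptions, not any cited author's implementation.

import torch
import torch.nn.functional as F

def pseudo_label_round(model, optimizer, labeled_loader, unlabeled_loader,
                       threshold=0.95):
    """One round of self-training: supervised step, then pseudo-labeling."""
    model.train()
    # 1. Supervised training on the labeled SAR chips.
    for x, y in labeled_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

    # 2. Predict on unlabeled chips; keep only confident predictions.
    model.eval()
    pseudo_x, pseudo_y = [], []
    with torch.no_grad():
        for x in unlabeled_loader:          # assumed to yield images only
            prob = F.softmax(model(x), dim=1)
            conf, label = prob.max(dim=1)
            keep = conf > threshold         # confidence filter
            pseudo_x.append(x[keep])
            pseudo_y.append(label[keep])
    return torch.cat(pseudo_x), torch.cat(pseudo_y)

The returned pseudo-labeled pairs would then be merged into the labeled pool for the next round, with the threshold controlling the trade-off between pseudo-label quantity and quality.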
Semi-supervised learning is a solution when SAR samples are insufficient. Although its performance is worse than that of fully supervised learning, it still has clear research value. However, as SAR sensors become more common and large amounts of SAR data become easier to collect, there will be less room for such methods.

5.3.7. Metric Learning

For an M-class classification problem containing K training samples, the metric-learning approach converts it into the binary task of deciding whether two samples belong to the same category. Two samples from the same class form a positive pair, and two samples from different classes form a negative pair. The total number of sample pairs is K(K − 1)/2, which is (K − 1)/2 times the size of the original dataset. Metric learning can therefore alleviate the problem of limited samples. The related methods are shown in Figure 18.
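To make the pair construction concrete, the generic sketch below (an illustration, not taken from any cited paper) enumerates all sample pairs and assigns each a same-class label; for K samples it yields exactly K(K − 1)/2 pairs.

from itertools import combinations

def build_pairs(samples, labels):
    """Turn K labeled samples into K*(K-1)/2 similarity-labeled pairs."""
    pairs = []
    for (i, j) in combinations(range(len(samples)), 2):
        same = int(labels[i] == labels[j])  # 1 = positive pair, 0 = negative
        pairs.append(((samples[i], samples[j]), same))
    return pairs

A Siamese network trained on such pairs then learns an embedding in which positive pairs are close and negative pairs are far apart.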
Xu et al. [142] comprehensively verified the performance of distance metric learning on SAR ATR. Four feature-representation methods and twenty distance-metric-learning algorithms were evaluated, and the results showed that both the feature representation and the distance-metric-learning algorithm are important for SAR ATR. Pan et al. [143] presented a Siamese convolutional network method for limited datasets. Firstly, features were extracted through a Siamese network; secondly, features were extracted from the single-branch network; finally, a classifier was constructed to recognize specific target types. It achieved 93.20% accuracy with 30 categories. Reference [144] used a positive and negative sample-pair strategy to expand the dataset. A Siamese CNN was designed to calculate similarity, and a weighted voting mechanism was applied to it. The results showed that it outperforms other methods on the MSTAR and OpenSARShip datasets. Li et al. [145] conducted SAR ATR via CNN embedding and metric learning; experiments on OpenSARShip and MSTAR verified the effectiveness of the method. Wang et al. [146] proposed contrastive learning with pseudo-labels to recognize targets under limited samples. They used a Siamese structure to learn semantic representations of objects, whose features reflect the similarity of SAR images, together with an iteratively varying loss function. It achieved 97.86% accuracy on 10-class MSTAR.
Metric learning has great potential in SAR ATR, but research achievements so far are few. Due to the particularities of SAR images, metric learning needs to be applied systematically, which is a direction that deserves further consideration.

5.3.8. Adding Domain Knowledge

The above work treats the SAR target as a simple category label; domain knowledge is ignored. In fact, domain knowledge is important information for recognition. Information carried by the target itself, such as its length, width and height, is such knowledge, as are radar scattering characteristics such as ASCs and amplitude and phase information. The related papers are shown in Figure 19.
Zhang et al. [147] pointed out the importance of domain knowledge in SAR ATR with limited samples. They took the aspect ratio and area of the SAR vehicle as domain knowledge and used this information to correct the output probability of a fully convolutional model. Domain knowledge greatly alleviates the over-fitting caused by a small amount of data. Experiments on MSTAR showed accuracies of 72.2% and 93.1% with ten and thirty targets per class, respectively. Aiming at domain-adaptive SAR ATR, [148] proposed a deep knowledge-integration framework in which deep knowledge transfer, multiple heterogeneous feature projection and online learning were used to improve performance.
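A minimal sketch of the general idea in [147], using a geometric prior such as aspect ratio to re-weight the network outputs, might look as follows; the compatibility scores and per-class ranges here are hypothetical, not the authors' exact correction rule.

import numpy as np

def correct_with_domain_knowledge(cnn_probs, measured_ratio, class_ratio_ranges):
    """Re-weight CNN class probabilities with a geometric prior (aspect ratio).

    cnn_probs:          (C,) softmax output of the network
    measured_ratio:     aspect ratio estimated from the detected target region
    class_ratio_ranges: per-class (low, high) plausible aspect-ratio ranges
    """
    # Classes whose plausible range contains the measurement keep full weight;
    # incompatible classes are down-weighted (0.1 is an assumed penalty).
    compat = np.array([1.0 if lo <= measured_ratio <= hi else 0.1
                       for (lo, hi) in class_ratio_ranges])
    corrected = cnn_probs * compat
    return corrected / corrected.sum()      # renormalize to a distribution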

5.4. Imbalance across Classes

Most datasets face the problem of imbalance across classes (also called long-tail distribution). When a CNN is trained on such datasets, the majority classes dominate the training, and the accuracy of existing models degrades.
The two ways to alleviate the problem operate at the data-sampling level and at the algorithm level. Data-sampling techniques make the overall training data tend toward balance. At the algorithm level, the phenomenon of "under-learning" is corrected by optimizing the loss function: a common solution is to increase the penalty for misclassifying minority-class samples and to reflect this cost in the loss function, so that more "attention" is paid to the classes with fewer samples. The methods can be divided into the categories shown in Figure 20.
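A simple and common instantiation of this cost-sensitive idea is inverse-frequency-weighted cross-entropy; the sketch below, with hypothetical class counts, is a generic example rather than the method of any single cited work.

import torch
import torch.nn as nn

# Hypothetical class counts for an imbalanced SAR dataset (head to tail).
class_counts = torch.tensor([5000., 1200., 300., 80.])

# Inverse-frequency weights: rare classes incur a larger misclassification cost.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)
# loss = criterion(logits, targets)  # used as usual in the training loop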
Shao et al. [149] proposed in-batch balanced sampling and model fine-tuning to solve the imbalance problem. Firstly, the training set with known data imbalance was used as the source domain, and targets were rearranged and selected via in-batch balanced sampling. Secondly, the model was trained on this set, and the sample-balanced weights were saved. Finally, the model was trained on the target dataset with unprocessed samples, fine-tuning the weights learned in the source domain. Cao et al. [150] proposed a cost-sensitive awareness-based recognition model, which improves performance and learns accurate decision boundaries at both the data level and the algorithm level. It achieved 90.4% accuracy on MSTAR. Zhang et al. [151] presented a class-imbalance loss, in which the imbalance degree serves as a decision-index factor. Yang et al. [152] proposed cascading expert branches and parallel expert branches to solve the imbalance problem. In the cascading expert branches, experts are routed sequentially, and each expert uses the entire dataset for training so as to make better predictions for the head classes; the parallel experts adopt a rebalancing method during training. It achieved 26.02% Top-1 accuracy on an NTIRE2021 SAR dataset. Zhang et al. [153] proposed dynamic sampling and a soft threshold, where dynamic weighted sampling renders the distribution of the dataset more reasonable. Experiments on OpenSARShip showed that it outperforms traditional resampling methods, obtaining 80.58% and 77.5% accuracy in the VH and VV channels, respectively. Li et al. [154] presented a two-level jitter network that decouples the process into representation learning and classifier learning.
The imbalance problem is very common in SAR ATR and seriously reduces the accuracy of classification algorithms. The best remedy is to spend substantial effort collecting more data; however, since this is difficult, data-level and algorithm-level methods will still need to be adopted to improve performance.

5.5. Real-Time Recognition

At present, the commonly used CNNs are highly accurate but involve large numbers of layers and parameters and a heavy storage footprint, which makes them difficult to deploy on FPGAs (Field Programmable Gate Arrays) or other embedded hardware. The problem can be addressed by designing lightweight models and by using model compression and acceleration, as shown in Figure 21. Model compression mainly includes network pruning, quantization, low-rank decomposition and knowledge distillation.
The related works fall into the categories shown in Figure 22.
Reference [155] decomposed the traditional convolution into a cascade of per-channel and per-pixel convolutions to reduce the computational burden. Yu et al. [156] proposed a lightweight network called ASIR-Net, which uses channel-attention, channel-shuffle and inverted-residual blocks to extract features with fewer parameters. Zhang et al. [157] presented a lightweight architecture in which the convolutional layers are pruned and the pruned network is then retrained via knowledge distillation; it reduced the model size by a factor of 344 and the computation by a factor of 18. Chen et al. [158] first used pruning and adaptive structure compression to accelerate training and inference, and then quantized and coded the weights to further compress the model, achieving a 40-fold reduction in model scale and a 15-fold reduction in computational load without loss of classification accuracy. Min et al. [159] presented a micro-CNN (MCNN) for real-time SAR classification. It has only two layers and was compressed from an 18-layer CNN via distillation, with the weights restricted to −1, 0 or 1. The teacher network was a DCNN and the student was the MCNN, and this gradual distillation showed better results than traditional knowledge distillation: the MCNN was compressed 177 times while retaining accuracy similar to the DCNN. Zhong et al. [160] realized real-time recognition via transfer learning and model compression. Newly appended convolutional and global pooling layers were trained on an SAR dataset, and filter pruning was applied to accelerate inference, achieving a 3.6-fold speed-up in testing with only a 1.42% decrease in accuracy. Wang et al. [161] designed a lightweight model and compressed it via pruning and knowledge distillation; convolution kernels with small attention values were pruned. It achieved 99.46% accuracy with only 10% of the parameters.
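The decomposition used in [155] is, in spirit, the standard depthwise-separable convolution; a minimal sketch (with assumed channel sizes) is given below. For a k × k kernel, the per-pixel multiply count drops from k²·C_in·C_out to k²·C_in + C_in·C_out.

import torch.nn as nn

class SeparableConv(nn.Module):
    """Standard conv factored into per-channel + per-pixel convolutions."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Per-channel (depthwise) convolution: one spatial filter per channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2,
                                   groups=in_ch, bias=False)
        # Per-pixel (pointwise) 1x1 convolution mixes the channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))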
As the accuracy of SAR ATR has gradually improved, researchers have paid more attention to realizing real-time target recognition on edge devices. Realizing real-time SAR ATR with model compression and acceleration technology is a key research direction for the future, and the considerable achievements in computer vision can provide a reference for its development.

5.6. Polarimetric SAR

Compared with single-channel SAR images, polarimetric SAR can capture more information through different polarization combinations. Researchers have therefore tried to use polarimetric SAR images for target recognition, as shown in Figure 23.
Zhou et al. [162] converted the polarization covariance matrix into a six-dimensional feature vector, which was then fed into a network with two cascaded convolutional layers for classification. Results on a PolSAR (Polarimetric SAR) dataset showed good performance: 92.46% accuracy on the 15-class Flevoland test site. Hou et al. [163] used multilayer auto-encoders and superpixels to classify polarimetric SAR images. Pauli decomposition was first used to generate superpixels that exploit spatial information, and a multilayer auto-encoder network then used both the pixel and spatial features of the PolSAR images. It achieved 93.11% accuracy on a Flevoland four-look polarimetric AIRSAR image. Gao et al. [164] proposed a dual-branch CNN for PolSAR classification with two CNNs, one responsible for extracting polarization features and the other for spatial features; a fully connected layer combines them. It achieved 95.82% accuracy on a RADARSAT-2 dataset. Adugna et al. [165] proposed a fully convolutional network that uses real-valued weight kernels to classify complex-valued images pixel by pixel. The results show that the method achieves higher accuracy than networks with the same structure. Hua et al. [166] designed a dual-channel CNN for PolSAR images, comprising two parallel CNN modules that use two multi-scale convolution structures to extract different features. It achieved 82.58% accuracy on a quad-polarized AIRSAR image. Li et al. [167] presented a complex multi-scale network for PolSAR classification. A complex CNN was defined for handling PolSAR images, and a multi-scale contourlet bank was used to extract multi-directional, multi-scale and multi-resolution discriminant features; the performance could be improved by substituting the convolution filters. Experiments on PolSAR images showed that it is comparable to the most advanced methods, achieving 97.78% accuracy on the dataset considered. Xi et al. [168] proposed a fusion Siamese network for dual-polarized SAR ship classification, in which a two-stream Siamese network combines the polarization channels and a fusion loss improves accuracy. The classification accuracy on the OpenSARShip dataset reached 87.04%. Shang et al. [169] proposed a densely connected, depthwise-separable CNN (DSNet). The separable convolutions learn features of every channel, and the combination of depthwise-separable convolutions and dense connections reduces the parameters (to less than 1/9) while improving accuracy. Zhang et al. [170] proposed an SE (squeeze-and-excitation) Laplacian pyramid network for dual-polarization SAR ATR with three parts: dual-polarization feature fusion, SE and a Laplacian pyramid network. SE models the channels and balances the contributions of the polarization characteristics, while the Laplacian pyramid enables multi-resolution analysis. It achieved 56.66% accuracy on a six-class OpenSARShip dataset. Zeng et al. [171] presented a new CNN for ship classification in dual-polarized SAR images. The network uses a mixed-channel feature loss and combines the features of the polarization channels. The results showed that it effectively improves classification performance, achieving 82.42% accuracy on the OpenSARShip dataset. Xiong et al. [172] proposed a dual-polarimetric SAR ship classification algorithm in which a dual-channel loss fuses features and renders the model better fitted to dual-polarized images. Results showed that it obtains 87.72% accuracy, 3.72% higher than the traditional method.
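As a simplified illustration of how a polarimetric covariance matrix can be flattened into a real-valued input vector, in the spirit of [162], one common choice keeps the three diagonal powers and the magnitudes of the three off-diagonal terms; the exact normalization used in [162] may differ, and the log scaling below is an assumed choice.

import numpy as np

def covariance_to_features(C):
    """Flatten a 3x3 Hermitian polarimetric covariance matrix to 6 real values."""
    feats = np.array([
        np.log(np.abs(C[0, 0]) + 1e-10),   # diagonal powers (log-scaled)
        np.log(np.abs(C[1, 1]) + 1e-10),
        np.log(np.abs(C[2, 2]) + 1e-10),
        np.abs(C[0, 1]),                   # off-diagonal correlation magnitudes
        np.abs(C[0, 2]),
        np.abs(C[1, 2]),
    ])
    return feats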
Polarimetric SAR data contain more information than amplitude data alone; how to use this information to improve ATR performance is a direction that deserves focus in the future.

5.7. Complex Data

For SAR sensors, more information is contained in the complex data: the phase information of an SAR image is unique and inaccessible to other sensors. However, most CNN-based methods handle only amplitude data and ignore the complex data. It is therefore necessary to develop accurate recognition algorithms that extract complex-valued features. The related papers are shown in Figure 24.
Zhang et al. [173] proposed a polarization fusion network with geometric feature embedding (PFGFE-Net), which achieves polarization fusion at the input-data, feature and decision levels; the geometric feature embedding incorporates expert experience. Results on OpenSARShip reveal PFGFE-Net's excellent performance. Scarnati et al. [174] reviewed complex neural-network techniques for SAR ATR, commenting on the merits and accuracy of each technique. Zhang et al. [175] proposed a complex-valued CNN that extends the CNN to the complex domain, with complex input–output, convolution and pooling layers. Taking complex data as input, each layer of the network can transmit phase information. Results on polarimetric SAR image classification showed better performance than conventional methods: 99% accuracy on a 14-class Flevoland dataset. Sun et al. [176] presented a complex-valued model that introduces complex-valued operations and uses an SE module to weight the feature maps. Results on MSTAR showed 98.97% accuracy, higher than real-valued CNN algorithms. Wang et al. [177] presented a complex-valued CNN in which the amplitude and phase information are fully utilized; experiments demonstrated that it outperforms traditional real-valued convolutional neural networks. Zeng et al. [178] proposed multi-stream complex-valued networks to exploit the phase of SAR images, constructing complex-valued operations such as complex convolution, complex batch normalization, complex activation, complex pooling and complex fully connected layers. Experiments on MSTAR showed that it obtains better results. Hou et al. [179] proposed a complex online-learning network, arguing that amplitude and phase are both important discriminators for recognition. They modeled SAR images with a complex Gaussian distribution in dictionary learning and learned a dictionary of the distribution model. Experiments on MSTAR showed an accuracy of 94.52% with 20% of the samples.
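Most of these works build complex layers from real-valued primitives. A minimal sketch of a complex convolution, following the common real/imaginary decomposition (a + jb)(w_r + jw_i) = (aw_r − bw_i) + j(aw_i + bw_r) rather than the exact layer of any cited paper, is:

import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution realized with four real convolutions."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)  # real weights
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)  # imag weights

    def forward(self, x_real, x_imag):
        # (a + jb) * (w_r + j w_i) expanded into real and imaginary parts.
        out_real = self.conv_r(x_real) - self.conv_i(x_imag)
        out_imag = self.conv_i(x_real) + self.conv_r(x_imag)
        return out_real, out_imag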
Complex data contain more information than amplitude data, but how to exploit this information requires more research in the future.

5.8. Others

Besides the above directions, four other directions have also been studied by researchers, as shown in Figure 25: the usage of attributed scattering centers, combining traditional features with CNN, explainability and adversarial attack. They are reviewed as follows.

5.8.1. The Usage of ASC

Most current SAR ATR algorithms operate on the amplitude of the SAR image (data-driven networks), so physical models are rarely utilized. As ASCs can describe the characteristics and physical structure of the target, it is natural to use them in ATR. When the radar works at high frequency, the scattering field of the target can be approximated as the accumulation of its scattering centers, and a series of parameters describes the characteristics of each center. These parameters contain rich physical and geometric properties that can accurately describe the real scattering mechanism of the target. The ASC parameter set takes the form of a point set, which a CNN is not suited to process directly. Many studies have therefore been performed to fuse CNNs and ASCs, as shown in Figure 26.
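For reference, a widely used parametric ASC model expresses the backscattered field of the i-th scattering center as a function of frequency $f$ and aspect angle $\phi$ (the notation below is one common convention from the ASC literature, not a formula taken from the cited papers):

$$E_i(f,\phi) = A_i \left( \frac{jf}{f_c} \right)^{\alpha_i} \exp\!\left( -\frac{j4\pi f}{c}\left( x_i \cos\phi + y_i \sin\phi \right) \right) \operatorname{sinc}\!\left( \frac{2\pi f}{c} L_i \sin(\phi - \bar{\phi}_i) \right) \exp\!\left( -2\pi f \gamma_i \sin\phi \right)$$

where $A_i$ is the amplitude, $\alpha_i$ the frequency-dependence factor, $(x_i, y_i)$ the position, $L_i$ the length, $\bar{\phi}_i$ the orientation angle and $\gamma_i$ the aspect-dependence factor of the i-th center, $f_c$ is the radar center frequency and $c$ the speed of light; the total field is the sum of $E_i$ over all centers.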
Feng et al. [180] integrated a parts model, computed via ASCs, with deep learning to render the method more interpretable and powerful; the local features came from the parts model. It achieved 99.79% accuracy on MSTAR SOC. Liu et al. [181] proposed SDF-Net to fuse physical knowledge, represented by ASC data, with deep features. Experiments on MSTAR showed its effectiveness and robustness. Li et al. [182] combined electromagnetic-scattering information with a graph convolutional network, modeling every scattering center and converting them into a graph that represents the structural features. Jiang et al. [183] also combined a CNN and ASCs: a test sample is first processed by the CNN, and if the output is not reliable, ASC matching further identifies it. It achieved 99.41% accuracy under MSTAR SOC. Li et al. [184] proposed an ASC-matching and discriminative dictionary-learning method with three steps: low-level local features, label-consistent discriminative dictionary learning and spatial-pyramid matching are used to make full use of the SAR images and ASCs. Zhang et al. [185] fused scattering-center features with CNN models: the ASCs are extracted from complex SAR data, a modified VGG-Net extracts deep features from the SAR images, and discrimination correlation analysis fuses the two. Zhang et al. [186] proposed a noise-robust recognition method based on attributed scattering-center matching, in which the ASCs are extracted via sparse representation and the Hungarian algorithm pairs the template ASC sets. It achieved 97.54% accuracy under MSTAR SOC.

5.8.2. Combining the Traditional Features with CNN

A CNN shows better accuracy than traditional hand-crafted features, but the traditional features were developed by experts, which supports their interpretability. Thus, many researchers seek to combine the two. Zhang et al. [187] injected traditional features into a CNN to improve SAR ATR performance, assuming that the traditional features can further improve classification. HOG, NGFs, LRCS and PAFs were used, injected at the convolutional, residual and dense blocks and at the FC layer, with the CNN main body unchanged; seven injection methods were examined. Results showed that accuracy improved by 6.75% after the injection. Zhang et al. [188] proposed another method to integrate traditional features into CNNs, using edge, Harris and HOG features together with classical CNNs. Experiments showed that the integrations yield substantial accuracy gains.
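A generic way to inject a hand-crafted descriptor at the FC layer, in the spirit of [187,188] but not their exact architecture, is to concatenate it with the CNN feature vector; the HOG input here is assumed to be computed offline (e.g., with skimage.feature.hog).

import torch
import torch.nn as nn

class FeatureInjectedNet(nn.Module):
    """Concatenate a hand-crafted feature vector with CNN features at the FC layer."""
    def __init__(self, backbone, cnn_dim, hog_dim, num_classes):
        super().__init__()
        self.backbone = backbone                 # any CNN feature extractor
        self.fc = nn.Linear(cnn_dim + hog_dim, num_classes)

    def forward(self, image, hog_feat):
        deep = self.backbone(image).flatten(1)   # (B, cnn_dim) deep features
        fused = torch.cat([deep, hog_feat], dim=1)
        return self.fc(fused)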

5.8.3. Explainability

A CNN mimics the human brain and can extract features automatically, and it has shown good results in SAR ATR. However, it works like a black box: its transparency is insufficient, which leads to security risks and reduces trust in the algorithms. Thus, many researchers try to explain CNNs in SAR ATR. Mandeep et al. [189] used an explainable artificial-intelligence system to verify a trained CNN model; it explains the test images by marking the decision boundary and is a transparent learning method. Guo et al. [190] explained SAR ATR via model understanding, diagnosis and related analyses. Feng et al. [191] proposed a method to visualize the SAR ATR model by assigning a pixel-wise weight matrix to different channels. Li et al. [192] proposed SAR-BagNet for SAR ATR, which can show a heat map reflecting the contribution of each image part. Research in this direction is necessary, as it can improve the trustworthiness of AI systems.
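A common visualization tool for this purpose is a class-discriminative heat map; the Grad-CAM-style sketch below is a generic example, not the specific method of [189,190,191,192].

import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, image, target_class):
    """Class-discriminative heat map from gradients of the target logit.

    Assumes a batch of one image; feature_layer is the conv layer to inspect.
    """
    acts, grads = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = feature_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    score = model(image)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
    cam = F.relu((weights * acts[0]).sum(dim=1))        # weighted activation sum
    return cam / (cam.max() + 1e-8)                     # normalized heat map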

5.8.4. Adversarial Attack

Although deep-learning-based SAR ATR methods show good performance, they are easily attacked by adversarial samples, which cause a CNN to output an intended wrong label after some perturbation is added. Some researchers have studied this problem in recent years. Huang et al. [193] used several methods to demonstrate that CNNs are easily attacked by adversarial examples. Sun et al. [194] conducted a detailed adversarial-robustness evaluation of CNN-based SAR ATR, using seven different adversarial perturbations to generate adversarial samples and the adversarial average recognition accuracy as the evaluation metric. Du et al. [195] built a UNet-based generative adversarial network to generate adversarial examples; the experiments showed that high-quality adversarial examples achieve good attack results. Zhang et al. [196] proposed an SAR-characteristic-based adversarial deception method whose perturbations outperform those of other methods. Peng et al. [197] proposed a speckle-variant attack method consisting of an iterative gradient-based generator and a region extractor, which easily generates good adversarial examples. The above work shows that the deep learning used in SAR ATR is very easy to attack, which is one of the disadvantages of deep learning; this problem should be considered when designing SAR ATR systems for real working conditions.
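The classic fast gradient sign method (FGSM) illustrates how a small perturbation can flip a prediction; the sketch below is a textbook version (with an assumed epsilon), not the specific attacks of [193,194,195,196,197].

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.02):
    """Fast Gradient Sign Method: a one-step adversarial perturbation."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()   # keep pixel values in a valid range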

6. Future Directions

Recently, deep learning has dominated tasks such as detection, recognition and segmentation. SAR ATR researchers have therefore adopted deep learning as well, and a large number of methods have emerged. However, compared with computer vision, deep-learning-based SAR ATR still faces many problems that need to be solved. They fall into the following classes, as shown in Figure 27.

6.1. The Dataset

Compared with optical images, SAR images are more sensitive to imaging parameters and observation attitude, so the same target exhibits greater diversity in them. Therefore, a recognition dataset larger than typical optical-image datasets is needed. However, SAR images are difficult to obtain, which results in small datasets. This contradiction between supply and demand renders SAR ATR more difficult.
In the future, researchers need to consider constructing such a dataset. They should recognize its difficulty and importance and be willing to cover the substantial cost of achieving it; the imbalance problem should also be considered in its design. Data augmentation can be used as a supplement where a large SAR ATR dataset is lacking. Beyond these measures, how to design weakly supervised or unsupervised learning algorithms for few samples should also be studied further.

6.2. CNN Architecture Designing

At present, the CNNs used in SAR ATR are partly specialized designs and partly borrowed from computer vision, each with advantages and disadvantages: specially designed CNNs can fully account for the particularities of SAR ATR, while CNNs borrowed from computer vision have strong feature-extraction ability. It is therefore necessary to explicitly unify the two ideas when designing CNN architectures. Moreover, the number of channels and the number of parameters should be considered, and the architecture should simultaneously maximize intra-class compactness and inter-class separation.

6.3. Knowledge-Driven Dataset

Most current SAR ATR work focuses on the image itself (i.e., it is data-driven), while knowledge such as motion features, geometric features and scattering features is ignored. In fact, such knowledge is also critical for recognition. We should therefore integrate the knowledge into the CNN to further improve recognition accuracy. The premise for research in this direction is establishing a knowledge dataset, which is relatively difficult to achieve.

6.4. Real-Time Recognition

With the maturing of CNN-based SAR ATR in recent years, the demand for real-time application deployment is becoming increasingly urgent. Lightweight CNN design, model compression and acceleration, and hardware deployment are the key technologies for real-time recognition and need to be the next focus. It should be noted that lightweight networks relying extensively on depthwise and pointwise convolution are not necessarily fast: these operations are poorly optimized on much hardware and should be used sparingly.

6.5. Explainability and Adversarial Attack

Although the CNN has shown great advantages in SAR ATR, its working mechanism is not transparent; it operates as a black box. Future work should aim to improve the interpretability of CNNs, helping people understand how the deep-learning model learns, what it learns from the data, why it makes a given decision for each input sample, and whether its decisions are reliable.
The CNN is also vulnerable to adversarial attack: if the input is slightly modified, the network can give a different result. The characteristics of a target in a radar image are affected by many factors, and if the robustness of deep learning is insufficient, it is difficult to apply it to real scenes. Thus, in the future, we need to focus on improving its resistance to adversarial attacks.

7. Conclusions

This paper gives a comprehensive survey of SAR ATR. The datasets and the evaluation metrics were introduced first, and the problems of limited samples and imbalanced distributions were pointed out. Secondly, the traditional ATR methods, including template-matching-based, machine-learning-based and model-based methods, were introduced in that order; the machine-learning-based methods are currently popular in this area. Thirdly, the deep-learning-based methods, the core of this paper, were introduced thoroughly. The non-CNN and CNN models were reviewed first; then, the methods for handling limited samples, including data augmentation, GAN, electromagnetic simulation, transfer learning, few-shot learning, semi-supervised learning, metric learning and domain knowledge, were surveyed in detail. After that, the imbalance problem, real-time recognition, polarimetric SAR, complex data, the attributed scattering center, the adversarial attack and explainability were surveyed thoroughly and in that order. Finally, the future directions of SAR ATR were discussed: we should construct massive datasets, design specialized CNNs, add knowledge to CNNs, realize real-time recognition, and improve explainability and robustness to adversarial attack. To the best of our knowledge, this work represents the first comprehensive review of deep-learning techniques for SAR ATR.

Author Contributions

Conceptualization, J.L. and Z.Y.; methodology, J.C. and L.Y.; investigation, C.C.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; supervision, Z.Y.; funding acquisition, P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created in this paper.

Acknowledgments

The authors would like to thank the editors and anonymous reviewers for their valuable comments, which greatly improved our manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43.
  2. Reigber, A.; Scheiber, R.; Jager, M.; Prats-Iraola, P.; Hajnsek, I.; Jagdhuber, T.; Papathanassiou, K.P.; Nannini, M.; Aguilera, E.; Baumgartner, S.; et al. Very-high-resolution airborne synthetic aperture radar imaging: Signal processing and applications. Proc. IEEE 2013, 101, 759–783.
  3. Ross, T.D.; Bradley, J.J.; Hudson, L.J. SAR ATR: So what’s the problem? An MSTAR perspective. In Proceedings of the SPIE 3721, Algorithms for Synthetic Aperture Radar Imagery VI, Orlando, FL, USA, 13 April 1999; SPIE: Bellingham, WA, USA, 1999; pp. 662–672.
  4. Li, H.C.; Hong, W.; Wu, Y.R.; Fan, P.Z. An efficient and flexible statistical model based on generalized Gamma distribution for amplitude SAR images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2711–2722.
  5. Achim, A.; Kuruoglu, E.E.; Zerubia, J. SAR image filtering based on the heavy-tailed Rayleigh model. IEEE Trans. Image Process. 2006, 15, 2686–2693.
  6. Kang, M.; Leng, X.; Lin, Z.; Ji, K. A modified faster R-CNN based on CFAR algorithm for SAR ship detection. In Proceedings of the 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 18–21 May 2017; IEEE: New York, NY, USA, 2017.
  7. Kreithen, D.E.; Halversen, S.D.; Owirka, G.J. Discriminating targets from clutter. Linc. Lab. J. 1993, 6, 25–52.
  8. Schwegmann, C.P.; Kleynhans, W.; Salmon, B.P.; Mdakane, L.W.; Meyer, R.G. Very deep learning for ship discrimination in Synthetic Aperture Radar imagery. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 104–107.
  9. Li, Y.; Chang, Z.; Ning, W. A survey on feature extraction of SAR Images. In Proceedings of the 2010 International Conference on Computer Application and System Modeling (ICCASM 2010), Taiyuan, China, 22–24 October 2010; IEEE: New York, NY, USA, 2010; pp. V1-312–V1-317.
  10. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Fei-Fei, L. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  11. Fan, W.; Chao, W.; Bo, Z.; Hong, Z.; Xiaojuan, T. Study on Vessel Classification in SAR Imagery: A Survey. Remote Sens. Technol. Appl. 2014, 29, 1–8.
  12. Wang, Z.; Xin, Z.; Huang, X.; Sun, Y.; Xuan, J. Overview of SAR Image Feature Extraction and Target Recognition. In 3D Imaging Technologies—Multi-Dimensional Signal Processing and Deep Learning; Jain, L.C., Kountchev, R., Shi, J., Eds.; Smart Innovation, Systems and Technologies; Springer: Singapore, 2021; Volume 234.
  13. Kechagias-Stamatis, O.; Aouf, N. Automatic target recognition on synthetic aperture radar imagery: A survey. IEEE Aerosp. Electron. Syst. Mag. 2021, 36, 56–81.
  14. El-Darymli, K.; Gill, E.W.; Mcguire, P.; Power, D.; Moloney, C. Automatic Target Recognition in Synthetic Aperture Radar Imagery: A State-of-the-Art Review. IEEE Access 2016, 4, 6014–6058.
  15. Song, J.; Gao, S.; Zhu, Y.; Ma, C. A survey of remote sensing image classification based on CNNs. Big Earth Data 2019, 3, 232–254.
  16. Ball, J.E.; Anderson, D.T.; Chan, C.S. Comprehensive survey of deep learning in remote sensing: Theories, tools, and challenges for the community. J. Appl. Remote Sens. 2017, 11, 042609.
  17. Keydel, E.R.; Lee, S.W.; Moore, J.T. MSTAR extended operating conditions: A tutorial. In Proceedings of the 3rd SPIE Conference Algorithms SAR Imagery, Orlando, FL, USA, 10 June 1996; SPIE: Bellingham, WA, USA, 1996; Volume 2757, pp. 228–242.
  18. Huang, L.; Liu, B.; Li, B.; Guo, W.; Yu, W.; Zhang, Z.; Yu, W. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 195–208.
  19. Li, B.; Liu, B.; Huang, L.; Guo, W.; Zhang, Z.; Yu, W. OpenSARShip 2.0: A large-volume dataset for deeper interpretation of ship targets in Sentinel-1 imagery. In Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 13–14 November 2017; IEEE: New York, NY, USA, 2017; pp. 1–5.
  20. Zhao, J.; Zhang, Z.; Yao, W.; Datcu, M.; Xiong, H.; Yu, W. OpenSARUrban: A Sentinel-1 SAR image dataset for urban interpretation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 187–203.
  21. Hou, X.; Ao, W.; Song, Q.; Lai, J.; Wang, H.; Xu, F. FUSAR-Ship: Building a high-resolution SAR-AIS matchup dataset of Gaofen-3 for ship detection and recognition. Sci. China Inf. Sci. 2020, 63, 140303.
  22. Sellers, S.R.; Collins, P.J.; Jackson, J.A. Augmenting simulations for SAR ATR neural network training. In Proceedings of the 2020 IEEE International Radar Conference (RADAR), Washington, DC, USA, 28–30 April 2020; pp. 309–314.
  23. Ikeuchi, K.; Shakunaga, T.; Wheeler, M.D.; Yamazaki, T. Invariant histograms and deformable template matching for SAR target recognition. In Proceedings of the CVPR IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 18–20 June 1996; IEEE: New York, NY, USA, 1996; pp. 100–105.
  24. Fu, K.; Dou, F.Z.; Li, H.C.; Diao, W.H.; Sun, X.; Xu, G.L. Aircraft recognition in SAR images based on scattering structure feature and template matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4206–4217.
  25. Meth, R.; Chellappa, R. Feature matching and target recognition in synthetic aperture radar imagery. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Phoenix, AZ, USA, 15–19 March 1999; IEEE: New York, NY, USA, 1999; pp. 3333–3336.
  26. Nicoli, L.P.; Anagnostopoulos, G.C. Shape-based recognition of targets in synthetic aperture radar images using elliptical Fourier descriptors. In Automatic Target Recognition XVIII; SPIE: Bellingham, WA, USA, 2008; Volume 6967, pp. 148–159.
  27. Park, J.I.; Park, S.H.; Kim, K.T. New discrimination features for SAR automatic target recognition. IEEE Geosci. Remote Sens. Lett. 2012, 10, 476–480.
  28. Sun, Y.; Liu, Z.; Todorovic, S.; Li, J. Adaptive boosting for SAR automatic target recognition. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 112–125.
  29. Diemunsch, J.R.; Wissinger, J. Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: Search technology for a robust ATR. In Proceedings of the SPIE 3370, Algorithms for Synthetic Aperture Radar Imagery V, Orlando, FL, USA, 14–17 April 1998; SPIE: Bellingham, WA, USA, 1998; pp. 481–492.
  30. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  31. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; IEEE: New York, NY, USA, 2015; pp. 1–9.
  32. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: New York, NY, USA, 2016; pp. 770–778.
  33. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017; pp. 1492–1500.
  34. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017; pp. 2261–2269.
  35. Larochelle, H.; Mandel, M.; Pascanu, R.; Bengio, Y. Learning algorithms for the classification restricted Boltzmann machine. J. Mach. Learn. Res. 2012, 13, 643–669.
  36. Hua, Y.; Guo, J.; Zhao, H. Deep belief networks and deep learning. In Proceedings of the 2015 International Conference on Intelligent Computing and Internet of Things, Harbin, China, 17–18 January 2015; IEEE: New York, NY, USA, 2015; pp. 1–4.
  37. Zhao, Z.; Jiao, L.; Zhao, J.; Gu, J.; Zhao, J. Discriminant deep belief network for high-resolution SAR image classification. Pattern Recognit. 2017, 61, 686–701.
  38. Sun, Z.; Xue, L.; Xu, Y. Recognition of SAR target based on multilayer auto-encoder and SNN. Int. J. Innov. Comput. Inf. Control 2013, 9, 4331–4341.
  39. Guo, J.; Wang, L.; Zhu, D.; Hu, C. Compact convolutional autoencoder for SAR target recognition. IET Radar Sonar Navig. 2020, 14, 967–972.
  40. Li, X.; Li, C.; Wang, P.; Men, Z.; Xu, H. SAR ATR based on dividing CNN into CAE and SNN. In Proceedings of the 5th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Singapore, 1–4 September 2015; IEEE: New York, NY, USA, 2015; pp. 676–679.
  41. Bentes, C.; Velotto, D.; Lehner, S. Target classification in oceanographic SAR images with deep neural networks: Architecture and initial results. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; IEEE: New York, NY, USA, 2015; pp. 3703–3706.
  42. Shao, J.; Qu, C.; Li, J. A performance analysis of convolutional neural network models in SAR target recognition. In Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 13–14 November 2017; IEEE: New York, NY, USA, 2017; pp. 1–6.
  43. Fu, Z.; Zhang, F.; Yin, Q.; Li, R.; Hu, W.; Li, W. Small Sample Learning Optimization for Resnet Based Sar Target Recognition. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: New York, NY, USA, 2018; pp. 2330–2333.
  44. Soldin, R.J. SAR Target Recognition with Deep Learning. In Proceedings of the 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 9–11 October 2018; IEEE: New York, NY, USA, 2018; pp. 1–8.
  45. Anas, H.; Majdoulayne, H.; Chaimae, A.; Nabil, S.M. Deep Learning for SAR Image Classification. In Intelligent Systems and Applications. IntelliSys 2019; Bi, Y., Bhatia, R., Kapoor, S., Eds.; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2019; Volume 1037.
  46. Morgan, D.A.E. Deep convolutional neural networks for ATR from SAR imagery. In Proceedings of the SPIE 9475, Algorithms for Synthetic Aperture Radar Imagery XXII, 94750F, Baltimore, MD, USA, 13 May 2015.
  47. Chen, S.; Wang, H.; Xu, F.; Jin, Y.Q. Target Classification Using the Deep Convolutional Networks for SAR Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817.
  48. Xu, Y.; Liu, K.; Ying, Z.; Shang, L.; Liu, J.; Zhai, Y.; Piuri, V.; Scotti, F. SAR Automatic Target Recognition Based on Deep Convolutional Neural Network. In Image and Graphics. ICIG 2017; Zhao, Y., Kong, X., Taubman, D., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10668.
  49. Li, Y.; Wang, J.; Xu, Y.; Li, H.; Miao, Z.; Zhang, Y. DeepSAR-Net: Deep convolutional neural networks for SAR target recognition. In Proceedings of the 2017 IEEE 2nd International Conference on Big Data Analysis (ICBDA), Beijing, China, 10–12 March 2017; IEEE: New York, NY, USA, 2017; pp. 740–743.
  50. Liu, Q.; Li, S.; Mei, S.; Jiang, R.; Li, J. Feature Learning for SAR Images Using Convolutional Neural Network. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: New York, NY, USA, 2018; pp. 7003–7006.
  51. Qiao, W.; Zhang, X.; Fen, G. An automatic target recognition algorithm for SAR image based on improved convolution neural network. In Proceedings of the 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), Siem Reap, Cambodia, 18–20 June 2017; IEEE: New York, NY, USA, 2017; pp. 551–555.
  52. Zhou, F.; Wang, L.; Bai, X.; Hui, Y. SAR ATR of Ground Vehicles Based on LM-BN-CNN. IEEE Trans. Geosci. Remote Sens. 2018, 56, 7282–7293.
  53. Cho, J.H.; Park, C.G. Additional feature CNN based automatic target recognition in SAR image. In Proceedings of the 2017 Fourth Asian Conference on Defence Technology—Japan (ACDT), Tokyo, Japan, 29 November–1 December 2017; IEEE: New York, NY, USA, 2017; pp. 1–4.
  54. Zhao, P.; Liu, K.; Zou, H.; Zhen, X. Multi-stream convolutional neural network for SAR automatic target recognition. Remote Sens. 2018, 10, 1473.
  55. Lang, P.; Fu, X.; Feng, C.; Dong, J.; Qin, R.; Martorella, M. LW-CMDANet: A Novel Attention Network for SAR Automatic Target Recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 6615–6630.
  56. Zhai, Y.; Ma, H.; Cao, H.; Deng, W.; Liu, J.; Zhang, Z.; Guan, H.; Zhi, Y.; Wang, J.; Zhou, J. MF-SarNet: Effective CNN with data augmentation for SAR automatic target recognition. J. Eng. 2019, 2019, 5813–5818.
  57. Xie, Y.; Dai, W.; Hu, Z.; Liu, Y.; Li, C.; Pu, X. A novel convolutional neural network architecture for SAR target recognition. J. Sens. 2019, 2019, 1246548.
  58. Huang, G.; Liu, X.; Hui, J.; Wang, Z.; Zhang, Z. A novel group squeeze excitation sparsely connected convolutional networks for SAR target classification. Int. J. Remote Sens. 2019, 40, 4346–4360.
  59. Dong, G.; Liu, H. Global Receptive-Based Neural Network for Target Recognition in SAR Images. IEEE Trans. Cybern. 2021, 51, 1954–1967.
  60. Wang, W.; Zhang, C.; Tian, J.; Ou, J.; Li, J. A SAR Image Target Recognition Approach via Novel SSF-Net Models. Comput. Intell. Neurosci. 2020, 2020, 8859172.
  61. Wang, B.; Jiang, Q.; Song, D.; Zhang, Q.; Sun, M.; Fu, X.; Wang, J. SAR vehicle recognition via scale-coupled Incep_Dense Network (IDNet). Int. J. Remote Sens. 2021, 42, 9109–9134.
  62. Feng, B.; Yang, H.; Zhang, C.; Wang, J.; Li, G.; Gao, Y. SAR Image Target Recognition Algorithm Based on Convolutional Neural Network. In Proceedings of the 2021 IEEE International Conference on Artificial Intelligence and Industrial Design (AIID), Guangzhou, China, 28–30 May 2021; IEEE: New York, NY, USA, 2021; pp. 364–368.
  63. Pei, J.; Wang, Z.; Sun, X.; Huo, W.; Zhang, Y.; Huang, Y.; Wu, J.; Yang, J. FEF-Net: A Deep Learning Approach to Multiview SAR Image Target Recognition. Remote Sens. 2021, 13, 3493.
  64. Wang, Z.; Wang, C.; Pei, J.; Huang, Y.; Zhang, Y.; Yang, H.; Xing, Z. Multi-View SAR Automatic Target Recognition Based on Deformable Convolutional Network. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; IEEE: New York, NY, USA, 2021; pp. 3585–3588.
  65. Shang, R.; Wang, J.; Jiao, L.; Stolkin, R.; Hou, B.; Li, Y. SAR Targets Classification Based on Deep Memory Convolution Neural Networks and Transfer Parameters. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2834–2846.
  66. Lin, Z.; Ji, K.; Kang, M.; Leng, X.; Zou, H. Deep convolutional highway unit network for SAR target classification with limited labeled training data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1091–1095.
  67. Wang, L.; Bai, X.; Zhou, F. SAR ATR of Ground Vehicles Based on ESENet. Remote Sens. 2019, 11, 1316.
  68. Shi, B.; Zhang, Q.; Wang, D.; Li, Y. Synthetic Aperture Radar SAR Image Target Recognition Algorithm Based on Attention Mechanism. IEEE Access 2021, 9, 140512–140524.
  69. Zhang, M.; An, J.; Yang, L.D.; Wu, L.; Lu, X.Q. Convolutional Neural Network with Attention Mechanism for SAR Automatic Target Recognition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  70. Li, R.; Wang, X.; Wang, J.; Song, Y.; Lei, L. SAR Target Recognition Based on Efficient Fully Convolutional Attention Block CNN. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  71. Su, B.; Liu, J.; Su, X.; Luo, B.; Wang, Q. CFCANet: A Complete Frequency Channel Attention Network for SAR Image Scene Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11750–11763.
  72. Wang, C.; Liu, X.; Pei, J.; Huang, Y.; Zhang, Y.; Yang, J. Multiview Attention CNN-LSTM Network for SAR Automatic Target Recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 12504–12513.
  73. Sabour, S.; Frosst, N.; Hinton, G.E. Dynamic routing between capsules. Adv. Neural Inf. Process. Syst. 2017, 30, 3859–3869.
  74. Shah, R.; Soni, A.; Mall, V.; Gadhiya, T.; Roy, A.K. Automatic Target Recognition from SAR Images Using Capsule Networks. In Pattern Recognition and Machine Intelligence. PReMI 2019; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11942.
  75. Yang, Z.; Jing, S. SAR image classification method based on improved capsule network. J. Phys. Conf. Ser. 2020, 1693, 012181.
  76. Guo, Y.; Pan, Z.; Wang, M.; Wang, J.; Yang, W. Learning Capsules for SAR Target Recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4663–4673.
  77. Ren, H.; Yu, X.; Zou, L.; Zhou, Y.; Wang, X.; Bruzzone, L. Extended convolutional capsule network with application on SAR automatic target recognition. Signal Process. 2021, 183, 108021.
  78. Feng, Q.; Peng, D.; Gu, Y. Research of regularization techniques for SAR target recognition using deep CNN models. In Proceedings of the SPIE 11069, Tenth International Conference on Graphics and Image Processing (ICGIP 2018), Chengdu, China, 6 May 2019; SPIE: Bellingham, WA, USA, 2019; 110693P.
  79. Kuang, W.; Dong, W.; Dong, L. The Effect of Training Dataset Size on SAR Automatic Target Recognition Using Deep Learning. In Proceedings of the 2022 IEEE 12th International Conference on Electronics Information and Emergency Communication (ICEIEC), Beijing, China, 15–17 July 2022; IEEE: New York, NY, USA, 2022; pp. 13–16.
  80. Wang, J.; Jiang, Y. A SAR Target Recognition Method via Combination of Multilevel Deep Features. Comput. Intell. Neurosci. 2021, 2021, 2392642.
  81. Li, S.; Pan, Z.; Hu, Y. Multi-Aspect Convolutional-Transformer Network for SAR Automatic Target Recognition. Remote Sens. 2022, 14, 3924.
  82. Zhao, P.; Huang, L. Multi-Aspect SAR Target Recognition Based on Efficientnet and GRU. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; IEEE: New York, NY, USA, 2020; pp. 1651–1654.
  83. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60.
  84. Ding, J.; Chen, B.; Liu, H.; Huang, M. Convolutional neural network with data augmentation for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368.
  85. Ding, B.; Wen, G.; Huang, X.; Ma, C.; Yang, X. Data Augmentation by Multilevel Reconstruction Using Attributed Scattering Center for SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2017, 14, 979–983.
  86. Furukawa, H. Deep learning for target classification from SAR imagery: Data augmentation and translation invariance. arXiv 2017, arXiv:1708.07920.
  87. Jiang, T.; Cui, Z.; Zhou, Z.; Cao, Z. Data Augmentation with Gabor Filter in Deep Convolutional Neural Networks for Sar Target Recognition. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: New York, NY, USA, 2018; pp. 689–692.
  88. Lei, Y.; Xia, W.; Liu, Z. Synthetic Images Augmentation for Robust SAR Target Recognition. In Proceedings of the 2021 The 5th International Conference on Video and Image Processing, Hayward, CA, USA, 22–25 December 2021; IEEE: New York, NY, USA, 2021; pp. 19–25.
  89. Ni, J.; Zhang, F.; Yin, Q.; Zhou, Y.; Li, H.C.; Hong, W. Random neighbor pixel-block-based deep recurrent learning for polarimetric SAR image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7557–7569.
  90. Lv, J.; Liu, Y. Data Augmentation Based on Attributed Scattering Centers to Train Robust CNN for SAR ATR. IEEE Access 2019, 7, 25459–25473.
  91. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; pp. 2672–2680.
  92. Guo, J.; Lei, B.; Ding, C.; Zhang, Y. Synthetic aperture radar image synthesis by using generative adversarial nets. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1111–1115.
  93. Bao, X.; Pan, Z.; Liu, L.; Lei, B. SAR image simulation by generative adversarial networks. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 9995–9998.
  94. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434.
  95. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875.
  96. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of Wasserstein GANs. arXiv 2017, arXiv:1704.00028.
  97. Cui, Z.; Zhang, M.; Cao, Z.; Cao, C. Image data augmentation for SAR sensor via generative adversarial nets. IEEE Access 2019, 7, 42255–42268.
  98. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017; pp. 2242–2251.
  99. Liu, L.; Pan, Z.; Qiu, X.; Peng, L. SAR target classification with CycleGAN transferred simulated samples. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: New York, NY, USA, 2018; pp. 4411–4414.
  100. Wagner, S.A. SAR ATR by a combination of convolutional neural network and support vector machines. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2861–2872.
  101. Hwang, J.; Shin, Y. Image Data Augmentation for SAR Automatic Target Recognition Using TripleGAN. In Proceedings of the 2021 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 20–22 October 2021; IEEE: New York, NY, USA, 2021; pp. 312–314.
  102. Luo, Z.; Jiang, X.; Liu, X. Synthetic minority class data by generative adversarial network for imbalanced SAR target recognition. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; IEEE: New York, NY, USA, 2020; pp. 2459–2462.
  103. Sun, Y.; Jiang, W.; Yang, J.; Li, W. SAR Target Recognition Using cGAN-Based SAR-to-Optical Image Translation. Remote Sens. 2022, 14, 1793.
  104. Niu, S.; Qiu, X.; Peng, L.; Lei, B. Parameter prediction method of SAR target simulation based on convolutional neural networks. In Proceedings of the 12th European Conference on Synthetic Aperture Radar, Aachen, Germany, 4–7 June 2018; IEEE: New York, NY, USA, 2018; pp. 106–110.
  105. Malmgren-Hansen, D.; Kusk, A.; Dall, J.; Nielsen, A.A.; Engholm, R.; Skriver, H. Improving SAR Automatic Target Recognition Models with Transfer Learning from Simulated Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1484–1488.
  106. Cha, M.; Majumdar, A.; Kung, H.T.; Barber, J. Improving Sar Automatic Target Recognition Using Simulated Images Under Deep Residual Refinements. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; IEEE: New York, NY, USA, 2018; pp. 2606–2610.
  107. Ahmadibeni, A.; Borooshak, L.; Jones, B.; Shirkhodaie, A. Aerial and ground vehicles synthetic SAR dataset generation for automatic target recognition. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XXVII, Online, 24 April 2020; SPIE: Bellingham, WA, USA, 2020; Volume 11393, pp. 96–107.
  108. Zhang, C.; Wang, Y.; Liu, H.; Sun, Y.; Hu, L. SAR Target Recognition Using Only Simulated Data for Training by Hierarchically Combining CNN and Image Similarity. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  109. Kang, C.; He, C. SAR image classification based on the multi-layer network and transfer learning of mid-level representations. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; IEEE: New York, NY, USA, 2016; pp. 1146–1149.
  110. Marmanis, D.; Yao, W.; Adam, F.; Datcu, M.; Reinartz, P.; Schindler, K.; Wegner, J.D.; Stilla, U. Artificial generation of big data for improving image classification: A generative adversarial network approach on SAR data. arXiv 2017, arXiv:1711.02010.
  111. Lu, C.; Li, W. Ship Classification in High-Resolution SAR Images via Transfer Learning with Small Training Dataset. Sensors 2019, 19, 63.
  112. Zhai, Y.; Deng, W.; Xu, Y.; Ke, Q.; Gan, J.; Sun, B.; Zeng, J.; Piuri, V. Robust SAR Automatic Target Recognition Based on Transferred MS-CNN with L2-Regularization. Comput. Intell. Neurosci. 2019, 2019, 9140167.
  113. Ying, Z.; Xuan, C.; Zhai, Y.; Sun, B.; Li, J.; Deng, W.; Mai, C.; Wang, F.; Labati, R.D.; Piuri, V.; et al. TAI-SARNET: Deep Transferred Atrous-Inception CNN for Small Samples SAR ATR. Sensors 2020, 20, 1724.
  114. Song, Y.; Li, J.; Gao, P.; Li, L.; Tian, T.; Tian, J. Two-Stage Cross-Modality Transfer Learning Method for Military-Civilian SAR Ship Recognition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  115. Zhang, D.; Liu, J.; Heng, W.; Ren, K.; Song, J. Transfer learning with convolutional neural networks for SAR ship recognition. IOP Conf. Ser. Mater. Sci. Eng. 2018, 322, 072001.
  116. Huang, Z.; Dumitru, C.O.; Pan, Z.; Lei, B.; Datcu, M. Classification of large-scale high-resolution SAR images with deep transfer learning. IEEE Geosci. Remote Sens. Lett. 2020, 18, 107–111.
  117. Zhang, W.; Zhu, Y.; Fu, Q. Deep Transfer Learning Based on Generative Adversarial Networks for SAR Target Recognition with Label Limitation. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; IEEE: New York, NY, USA, 2019; pp. 1–5.
  118. He, Q.; Zhao, L.; Ji, K.; Kuang, G. SAR target recognition based on task-driven domain adaptation using simulated data. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
  119. Wang, Z.L.; Xu, X.H.; Zhang, L. Study of deep transfer learning for SAR ATR based on simulated SAR images. J. Univ. Chin. Acad. Sci. 2020, 37, 516–524.
  120. Wang, K.; Zhang, G.; Leung, H. SAR Target Recognition Based on Cross-Domain and Cross-Task Transfer Learning. IEEE Access 2019, 7, 153391–153399.
  121. Huang, Z.; Pan, Z.; Lei, B. Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data. Remote Sens. 2017, 9, 907.
  122. Borgwardt, K.M.; Gretton, A.; Rasch, M.J.; Kriegel, H.P.; Schölkopf, B.; Smola, A.J. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics 2006, 22, e49–e57.
  123. Huang, Z.; Pan, Z.; Lei, B. What, where and how to transfer in SAR target recognition based on deep CNNs. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2324–2336.
  124. Wang, L.; Bai, X.; Zhou, F. Few-Shot SAR ATR Based on Conv-BiLSTM Prototypical Networks. In Proceedings of the 2019 6th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Xiamen, China, 26–29 November 2019; IEEE: New York, NY, USA, 2019; pp. 1–5.
  125. Wang, K.; Zhang, G. SAR Target Recognition via Meta-Learning and Amortized Variational Inference. Sensors 2020, 20, 5966.
  126. Wang, L.; Bai, X.; Gong, C.; Zhou, F. Hybrid Inference Network for Few-Shot SAR Automatic Target Recognition. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9257–9269.
  127. Wang, K.; Zhang, G.; Xu, Y.; Leung, H. SAR Target Recognition Based on Probabilistic Meta-Learning. IEEE Geosci. Remote Sens. Lett. 2021, 18, 682–686.
  128. Wang, S.; Wang, Y.; Liu, H.; Sun, Y. Attribute-Guided Multi-Scale Prototypical Network for Few-Shot SAR Target Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 12224–12245.
  129. Li, L.; Liu, J.; Su, L.; Ma, C.; Li, B.; Yu, Y. A Novel Graph Metalearning Method for SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  130. Fu, K.; Zhang, T.; Zhang, Y.; Wang, Z.; Sun, X. Few-Shot SAR Target Classification via Metalearning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  131. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved techniques for training GANs. In Proceedings of the NIPS, Barcelona, Spain, 5–10 December 2016; Curran Associates Inc.: Red Hook, NY, USA, 2016; pp. 2234–2242.
  132. Gao, F.; Yang, Y.; Wang, J.; Sun, J.; Yang, E.; Zhou, H. A Deep Convolutional Generative Adversarial Networks (DCGANs)-Based Semi-Supervised Method for Object Recognition in Synthetic Aperture Radar (SAR) Images. Remote Sens. 2018, 10, 846.
  133. Zheng, C.; Jiang, X.; Liu, X. Semi-Supervised SAR ATR via Multi-Discriminator Generative Adversarial Network. IEEE Sens. J. 2019, 19, 7525–7533.
  134. Gao, F.; Ma, F.; Wang, J.; Sun, J.; Yang, E.; Zhou, H. Semi-Supervised Generative Adversarial Nets with Multiple Generators for SAR Image Recognition. Sensors 2018, 18, 2706.
  135. El-Darymli, K.; McGuire, P.; Power, D.; Moloney, C. Target detection in synthetic aperture radar imagery: A state-of-the-art survey. J. Appl. Remote Sens. 2013, 7, 071598.
  136. Wang, C.; Shi, J.; Zhou, Y.; Yang, X.; Zhou, Z.; Wei, S.; Zhang, X. Semisupervised Learning-Based SAR ATR via Self-Consistent Augmentation. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4862–4873.
  137. Gao, F.; Shi, W.; Wang, J.; Hussain, A.; Zhou, H. A Semi-Supervised Synthetic Aperture Radar (SAR) Image Recognition Algorithm Based on an Attention Mechanism and Bias-Variance Decomposition. IEEE Access 2019, 7, 108617–108632.
  138. Gao, F.; Yue, Z.; Wang, J.; Sun, J.; Yang, E.; Zhou, H. A novel active semisupervised convolutional neural network algorithm for SAR image recognition. Comput. Intell. Neurosci. 2017, 2017, 3105053.
  139. Zhang, Y.; Guo, X.; Ren, H.; Li, L. Multi-view classification with semi-supervised learning for SAR target recognition. Signal Process. 2021, 183, 108030.
  140. Tian, Y.; Sun, J.; Qi, P.; Yin, G.; Zhang, L. Multi-Block Mixed Sample Semi-Supervised Learning for SAR Target Recognition. Remote Sens. 2021, 13, 361.
  141. Chen, K.; Pan, Z.; Huang, Z.; Hu, Y.; Ding, C. Learning From Reliable Unlabeled Samples for Semi-Supervised SAR ATR. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  142. Xu, Y.; Lang, H.; Chai, X.; Ma, L. Distance metric learning for ship classification in SAR images. In Proceedings of the SPIE 10789, Image and Signal Processing for Remote Sensing XXIV, 107891C, Berlin, Germany, 9 October 2018.
  143. Pan, Z.; Bao, X.; Zhang, Y.; Wang, B.; An, Q.; Lei, B. Siamese network based metric learning for SAR target classification. In Proceedings of the IGARSS, Yokohama, Japan, 28 July–2 August 2019; IEEE: New York, NY, USA, 2019; pp. 1342–1345.
  144. Wang, B.; Pan, Z.; Hu, Y.; Ma, W. SAR Target Recognition Based on Siamese CNN with Small Scale Dataset. Radar Sci. Technol. 2019, 17, 603–609.
  145. Li, Y.; Li, X.; Sun, Q.; Dong, Q. SAR Image Classification Using CNN Embeddings and Metric Learning. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  146. Wang, C.; Gu, H.; Su, W. SAR Image Classification Using Contrastive Learning and Pseudo-Labels with Limited Data. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  147. Zhang, L.; Leng, X.; Feng, S.; Ma, X.; Ji, K.; Kuang, G.; Liu, L. Domain Knowledge Powered Two-Stream Deep Network for Few-Shot SAR Vehicle Recognition. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
  148. Zhang, Y.; Guo, X.; Li, L.; Ansari, N. Deep knowledge integration of heterogeneous features for domain adaptive SAR target recognition. Pattern Recognit. 2022, 126, 108590.
  149. Shao, Q.; Qu, C.; Li, J.; Peng, S. CNN based ship target recognition of imbalanced SAR image. Electron. Opt. Control 2019, 26, 90–97.
  150. Cao, C.; Cui, Z.; Wang, L.; Wang, J.; Cao, Z.; Yang, J. Cost-Sensitive Awareness-Based SAR Automatic Target Recognition for Imbalanced Data. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16.
  151. Zhang, L.; Zhang, C.; Quan, S.; Xiao, H.; Kuang, G.; Liu, L. A Class Imbalance Loss for Imbalanced Object Recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2778–2792.
  152. Yang, C.Y.; Hsu, H.M.; Cai, J.; Hwang, J.N. Long-tailed recognition of SAR aerial view objects by cascading and paralleling experts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual Conference, 19–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 142–148.
  153. Zhang, Y.; Lei, Z.; Zhuang, L.; Yu, H. A CNN Based Method to Solve Class Imbalance Problem in SAR Image Ship Target Recognition. In Proceedings of the 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 March 2021; IEEE: New York, NY, USA, 2021; pp. 229–233.
  154. Li, G.; Pan, L.; Qiu, L.; Tan, Z.; Xie, F.; Zhang, H. A Two-Stage Shake-Shake Network for Long-Tailed Recognition of SAR Aerial View Objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 249–256.
  155. Shao, J.; Qu, C.; Li, J.; Peng, S. A lightweight convolutional neural network based on visual attention for SAR image target classification. Sensors 2018, 18, 3039.
  156. Yu, J.; Zhou, G.; Zhou, S.; Yin, J. A Lightweight Fully Convolutional Neural Network for SAR Automatic Target Recognition. Remote Sens. 2021, 13, 3029.
  157. Zhang, F.; Liu, Y.; Zhou, Y.; Yin, Q.; Li, H.C. A lossless lightweight CNN design for SAR target recognition. Remote Sens. Lett. 2020, 11, 485–494.
  158. Chen, H.; Zhang, F.; Tang, B.; Yin, Q.; Sun, X. Slim and efficient neural network design for resource-constrained SAR target recognition. Remote Sens. 2018, 10, 1618.
  159. Min, R.; Lan, H.; Cao, Z.; Cui, Z. A gradually distilled CNN for SAR target recognition. IEEE Access 2019, 7, 42190–42200.
  160. Zhong, C.; Mu, X.; He, X.; Wang, J.; Zhu, M. SAR Target Image Classification Based on Transfer Learning and Model Compression. IEEE Geosci. Remote Sens. Lett. 2019, 16, 412–416.
  161. Wang, Z.; Du, L.; Li, Y. Boosting Lightweight CNNs Through Network Pruning and Knowledge Distillation for SAR Target Recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8386–8397.
  162. Zhou, Y.; Wang, H.; Xu, F.; Jin, Y.Q. Polarimetric SAR image classification using deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1935–1939.
  163. Hou, B.; Kou, H.; Jiao, L. Classification of Polarimetric SAR Images Using Multilayer Autoencoders and Superpixels. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3072–3081.
  164. Liu, X.; Jiao, L.; Tang, X.; Sun, Q.; Zhang, D. Polarimetric Convolutional Network for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3040–3054.
  165. Mullissa, A.G.; Persello, C.; Stein, A. PolSARNet: A Deep Fully Convolutional Network for Polarimetric SAR Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 5300–5309.
  166. Hua, W.; Wang, S.; Xie, W.; Guo, Y.; Jin, X. Dual-Channel Convolutional Neural Network for Polarimetric SAR Images Classification. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: New York, NY, USA, 2019; pp. 3201–3204.
  167. Li, L.; Ma, L.; Jiao, L.; Liu, F.; Sun, Q.; Zhao, J. Complex Contourlet-CNN for Polarimetric SAR Image Classification. Pattern Recognit. 2020, 100, 107110.
  168. Xi, Y.; Xiong, G.; Yu, W. Feature-Loss Double Fusion Siamese Network for Dual-Polarized SAR Ship Classification. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; IEEE: New York, NY, USA, 2019; pp. 1–5.
  169. Shang, R.; He, J.; Wang, J.; Xu, K.; Jiao, L.; Stolkin, R. Dense connection and depthwise separable convolution based CNN for polarimetric SAR image classification. Knowl.-Based Syst. 2020, 194, 105542.
  170. Zhang, T.; Zhang, X. Squeeze-and-Excitation Laplacian Pyramid Network with Dual-Polarization Feature Fusion for Ship Classification in SAR Images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  171. Zeng, L.; Zhu, Q.; Lu, D.; Zhang, T.; Wang, H.; Yin, J.; Yang, J. Dual-Polarized SAR Ship Grained Classification Based on CNN With Hybrid Channel Feature Loss. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  172. Xiong, G.; Xi, Y.; Chen, D.; Yu, W. Dual-Polarization SAR Ship Target Recognition Based on Mini Hourglass Region Extraction and Dual-Channel Efficient Fusion Network. IEEE Access 2021, 9, 29078–29089.
  173. Zhang, T.; Zhang, X. A polarization fusion network with geometric feature embedding for SAR ship classification. Pattern Recognit. 2022, 123, 108365.
  174. Scarnati, T.; Lewis, B. Complex-Valued Neural Networks for Synthetic Aperture Radar Image Classification. In Proceedings of the 2021 IEEE Radar Conference (RadarConf21), Atlanta, GA, USA, 7–14 May 2021; IEEE: New York, NY, USA, 2021; pp. 1–6.
  175. Zhang, Z.; Wang, H.; Xu, F.; Jin, Y.Q. Complex-valued convolutional neural network and its application in polarimetric SAR image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188.
  176. Sun, Z.; Xu, X.; Pan, Z. SAR ATR Using Complex-Valued CNN. In Proceedings of the 2020 Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2020; IEEE: New York, NY, USA, 2020; pp. 125–128.
  177. Wang, R.; Wang, Z.; Xia, K.; Zou, H.; Li, J. Target Recognition in Single-Channel SAR Images Based on the Complex-Valued Convolutional Neural Network with Data Augmentation. IEEE Trans. Aerosp. Electron. Syst. 2022, 1–8.
  178. Zeng, Z.; Sun, J.; Han, Z.; Hong, W. SAR Automatic Target Recognition Method Based on Multi-Stream Complex-Valued Networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
  179. Hou, B.; Wang, L.; Wu, Q.; Han, Q.; Jiao, L. Complex Gaussian–Bayesian Online Dictionary Learning for SAR Target Recognition with Limited Labeled Samples. IEEE Access 2019, 7, 120626–120637.
  180. Feng, S.; Ji, K.; Zhang, L.; Ma, X.; Kuang, G. SAR Target Classification Based on Integration of ASC Parts Model and Deep Learning Algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10213–10225.
  181. Liu, Z.; Wang, L.; Wen, Z.; Li, K.; Pan, Q. Multilevel Scattering Center and Deep Feature Fusion Learning Framework for SAR Target Recognition. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  182. Li, C.; Du, L.; Li, Y.; Song, J. A Novel SAR Target Recognition Method Combining Electromagnetic Scattering Information and GCN. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  183. Jiang, C.; Zhou, Y. Hierarchical Fusion of Convolutional Neural Networks and Attributed Scattering Centers with Application to Robust SAR ATR. Remote Sens. 2018, 10, 819.
  184. Li, T.; Du, L. SAR Automatic Target Recognition Based on Attribute Scattering Center Model and Discriminative Dictionary Learning. IEEE Sens. J. 2019, 19, 4598–4611.
  185. Zhang, J.; Xing, M.; Xie, Y. FEC: A Feature Fusion Framework for SAR Target Recognition Based on Electromagnetic Scattering Features and Deep CNN Features. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2174–2187.
  186. Zhang, X. Noise-robust target recognition of SAR images based on attribute scattering center matching. Remote Sens. Lett. 2019, 10, 186–194.
  187. Zhang, T.; Zhang, X. Injection of Traditional Hand-Crafted Features into Modern CNN-Based Models for SAR Ship Classification: What, Why, Where, and How. Remote Sens. 2021, 13, 2091.
  188. Zhang, T.; Zhang, X. Integrate Traditional Hand-Crafted Features into Modern CNN-Based Models to Further Improve SAR Ship Classification Accuracy. In Proceedings of the 2021 7th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Virtual Conference, 1–3 November 2021; IEEE: New York, NY, USA, 2021; pp. 1–6.
  189. Pannu, H.S.; Malhi, A. Deep learning-based explainable target classification for synthetic aperture radar images. In Proceedings of the 2020 13th International Conference on Human System Interaction (HSI), Tokyo, Japan, 6–8 June 2020; IEEE: New York, NY, USA, 2020; pp. 34–39.
  190. Guo, W.; Zhang, Z.; Yu, W.; Sun, X. Perspective on explainable SAR target recognition. J. Radars 2020, 9, 462–476.
  191. Feng, Z.; Zhu, M.; Stanković, L.; Ji, H. Self-Matching CAM: A Novel Accurate Visual Explanation of CNNs for SAR Image Interpretation. Remote Sens. 2021, 13, 1772.
  192. Li, P.; Feng, C.; Hu, X.; Tang, Z. SAR-BagNet: An Ante-hoc Interpretable Recognition Model Based on Deep Network for SAR Image. Remote Sens. 2022, 14, 2150.
  193. Huang, T.; Zhang, Q.; Liu, J.; Hou, R.; Wang, X.; Li, Y. Adversarial attacks on deep-learning-based SAR image target recognition. J. Netw. Comput. Appl. 2020, 162, 102632.
  194. Sun, H.; Xu, Y.; Kuang, G.; Chen, J. Adversarial Robustness Evaluation of Deep Convolutional Neural Network Based SAR ATR Algorithm. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; IEEE: New York, NY, USA, 2021; pp. 5263–5266.
  195. Du, C.; Zhang, L. Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network. Remote Sens. 2021, 13, 4358.
  196. Zhang, F.; Meng, T.; Xiang, D.; Ma, F.; Sun, X.; Zhou, Y. Adversarial Deception Against SAR Target Recognition Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4507–4520.
  197. Peng, B.; Peng, B.; Zhou, J.; Xia, J.; Liu, L. Speckle-Variant Attack: Toward Transferable Adversarial Attack to SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
Figure 1. The process of SAR ATR.
Figure 2. The framework of the paper.
Figure 3. The related work [9,11,12,13,14,15,16].
Figure 4. The datasets used in SAR ATR.
Figure 5. The confusion matrix.
Figure 6. The traditional SAR ATR methods [23,24,25,26,27,28,29].
Figure 7. The process of machine-learning-based SAR ATR algorithms.
Figure 8. The deep-learning-based methods.
Figure 9. The non-CNN models [37,38,39,40,41].
Figure 10. The CNNs in SAR ATR [42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77].
Figure 11. Methods to solve the problem raised by limited samples.
Figure 12. Data augmentation methods in SAR ATR [84,85,86,87,88,89,90].
Figure 13. GANs for generating new samples in SAR ATR [92,93,94,95,96,97,98,99,100,101,102,103].
Figure 14. Electromagnetic simulation for generating new samples in SAR ATR [104,105,106,107,108].
Figure 15. Transfer learning used in SAR ATR [109,110,111,112,113,114,115,116,117,118,119,120,121,122,123].
Figure 16. Few-shot learning used in SAR ATR [124,125,126,127,128,129,130].
Figure 17. Semi-supervised learning used in SAR ATR [131,132,133,134,135,136,137,138,139,140,141].
Figure 18. Metric learning used in SAR ATR [142,143,144,145,146].
Figure 19. Adding domain knowledge in SAR ATR [147,148].
Figure 20. The imbalance across classes in SAR ATR [149,150,151,152,153,154].
Figure 21. Lightweight CNN design and model compression/acceleration are the two ways to implement real-time recognition.
Figure 22. Real-time recognition in SAR ATR [155,156,157,158,159,160].
Figure 23. Polarimetric SAR ATR [162,163,164,165,166,167,168,169,170,171,172,173].
Figure 24. SAR ATR with complex-valued data [174,175,176,177,178,179].
Figure 25. Other directions in SAR ATR [180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197].
Figure 26. The use of attributed scattering centers in SAR ATR [180,181,182,183,184,185,186].
Figure 27. The future directions.
Table 1. The number of traditional and deep-learning-based SAR ATR papers per year.

Years                              | Before 2016 | 2016  | 2017 | 2018  | 2019  | 2020  | 2021  | 2022
Traditional-method-based           | 48          | 14    | 21   | 17    | 18    | 12    | 14    | 2
Deep-learning-based                | 6           | 8     | 21   | 40    | 45    | 56    | 76    | 31
Percentage of deep-learning-based  | 11.1%       | 36.4% | 50%  | 70.2% | 71.4% | 82.4% | 84.4% | 93.9%
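The percentage row of Table 1 is derived directly from the two count rows, so it can be checked mechanically. The following minimal Python sketch is our own illustration (not code from any surveyed paper); the counts are copied from the table and the variable names are ours:

```python
# Per-year paper counts copied from Table 1.
years = ["Before 2016", "2016", "2017", "2018", "2019", "2020", "2021", "2022"]
traditional = [48, 14, 21, 17, 18, 12, 14, 2]
deep_learning = [6, 8, 21, 40, 45, 56, 76, 31]

# The percentage row is the deep-learning count divided by the total count.
for year, t, d in zip(years, traditional, deep_learning):
    share = 100.0 * d / (t + d)
    print(f"{year}: {share:.1f}% deep-learning-based")
# Output matches Table 1: 11.1%, 36.4%, 50.0%, 70.2%, 71.4%, 82.4%, 84.4%, 93.9%
```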
Table 2. The statistics of OpenSARShip and OpenSARShip 2.0.

Dataset          | SAR Ship Chips | Sentinel-1 Images
OpenSARShip      | 11,346         | 41
OpenSARShip 2.0  | 34,528         | 87
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
