Communication

SAR Image Reconstruction of Vehicle Targets Based on Tensor Decomposition

State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(18), 2859; https://doi.org/10.3390/electronics11182859
Submission received: 14 August 2022 / Revised: 31 August 2022 / Accepted: 5 September 2022 / Published: 9 September 2022

Abstract

Due to the imaging mechanism of Synthetic Aperture Radars (SARs), the target shape in an SAR image is sensitive to the radar incidence angle and the target azimuth, but there is strong correlation and redundancy between adjacent azimuth images of SAR targets. This paper studies multi-angle SAR image reconstruction based on non-negative Tucker decomposition: adjacent azimuth images and the image to be reconstructed are combined into a sparse tensor, and non-negative Tucker decomposition of this tensor yields a non-negative core tensor and factor matrices. The reconstructed tensor is obtained as the n-mode product of the core tensor and the factor matrices, realizing image reconstruction. The similarity between the original and reconstructed images is measured with the structural similarity index and the cosine of the angle between their feature vectors. Reconstruction results for three target types from MSTAR show that the reconstructed image is more than 95% similar to the original image in most cases, which can support target recognition under sparse observation to a certain extent.

1. Introduction

Synthetic Aperture Radars (SARs) can operate all day and in all weather conditions, giving them greater development potential than optical sensors. SARs are therefore widely used in surveying and mapping, geology, agriculture, the military and other fields, and the intelligent detection and recognition of targets in SAR images [1] has become a research hotspot. As deep learning has spread through computer vision, recognition tools represented by convolutional neural networks (CNNs) are now widely applied to SAR image target recognition, where they outperform traditional methods [2,3,4,5,6]. The distribution of SAR data can also be analyzed with the help of new mathematical and statistical tools [7,8]. Neural networks use a large number of parameters to fit the data distribution, but they require many SAR target images for training, and few public data sets support deep learning for SAR image target recognition. Neural network methods trained on the public SAR data sets therefore risk overfitting, so data amplification has become a focus of SAR research. Gao extracted the scattering center features of a target image, applied them to SAR target echo signal simulation and reconstructed an SAR image [9]. With the development of generative adversarial networks (GANs), many scholars have introduced them into the SAR image field. Guo used Conditional Generative Adversarial Networks (CGANs) to generate SAR images, but the speckle noise inherent in SAR images leads to model collapse and image blur during training [10]. To address the unstable training and poor image quality of CGANs for direction-controllable SAR target generation, Wang proposed a label coding method that improves the quality of the generated SAR images to a certain extent [11]. GANs perform well in natural image generation; however, the model collapses easily when trained on a few SAR image samples contaminated with severe speckle noise, so GANs are not well suited to SAR image generation.
Tensors can describe data such as images and videos. Tensor decomposition factors a high-order tensor into products of multiple low-order tensors while preserving the spatial structure of the high-dimensional data, so it is widely used in image processing and computer vision. Ji Liu et al. used the SiLRTC, LRTC and FaLRTC algorithms to fill in missing data in color and medical images, with small relative errors against the original images [11]. Non-negative tensor decomposition adds non-negativity constraints to the decomposition factors, which makes the factors highly interpretable and better at capturing the local characteristics of an image [12]. Zhang used a sparse non-negative Tucker decomposition algorithm to fill in missing image data, using several frames before and after the missing frame of a video to reconstruct the missing data, with good reconstruction results [13]. These tensor analysis methods first decompose and then reconstruct, achieving image restoration and missing-data reconstruction from the correlation between the data within an image or across multi-frame video; a high-quality reconstructed image can be obtained from a small amount of data.
Multi-angle SAR has the advantage of spatial diversity, and there is redundancy and correlation between the images at the various azimuth angles [14]. Tensor decomposition can mine the image information from these angles, which benefits SAR image reconstruction. In this paper, non-negative Tucker decomposition is introduced into SAR image processing: the existing azimuth images of a target and the azimuth image to be reconstructed form a sparse tensor; non-negative Tucker decomposition is performed on the sparse tensor, which is then reconstructed into a dense tensor; and the image at the target azimuth is read off from the reconstruction. Finally, the similarity of the reconstructed image to the original image is evaluated with two parameters, the structural similarity index and the cosine of the angle between the feature vectors, to test the effectiveness of the proposed method.
The remainder of this article is organized as follows: Section 2 introduces tensor decomposition and image prediction. Section 3 describes the proposed SAR image reconstruction method based on non-negative Tucker decomposition. Section 4 gives the experimental results and analysis. Finally, Section 5 concludes the article.

2. Tensor Decomposition and Image Prediction

2.1. Basic Concepts of Tensor Decomposition

A tensor is a multidimensional array: a scalar is a rank-0 tensor, a vector is a rank-1 tensor, and a matrix is a rank-2 tensor. For example, a grayscale image is a second-order tensor, a color image is a third-order tensor, and a color video is a fourth-order tensor. Multiplication of a tensor by a matrix is converted into matrix-matrix multiplication by matricizing the tensor, that is, unfolding it along mode $n$. An $N$-order tensor has $N$ modes, and unfolding along each mode generates a corresponding matrix. The mode-$n$ unfolding of a tensor $\mathcal{S} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_n \times \cdots \times J_N}$ is written $\mathbf{S}_{(n)}$. The $n$-mode product of $\mathcal{S}$ and a matrix $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times J_n}$, written $\mathcal{S} \times_n \mathbf{A}^{(n)}$, is defined elementwise as [15]:

$$\left( \mathcal{S} \times_n \mathbf{A}^{(n)} \right)_{j_1 \cdots j_{n-1}\, i_n\, j_{n+1} \cdots j_N} = \sum_{j_n = 1}^{J_n} s_{j_1 \cdots j_{n-1} j_n j_{n+1} \cdots j_N} \, a_{i_n j_n} \tag{1}$$

where $\mathcal{S} \times_n \mathbf{A}^{(n)} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times I_n \times \cdots \times J_N}$.
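For concreteness, the mode-$n$ unfolding and the $n$-mode product of Formula (1) can be sketched in a few lines of NumPy; the helper names `unfold`, `fold` and `mode_n_product` are illustrative rather than standard library functions.

```python
import numpy as np

def unfold(S, n):
    """Mode-n unfolding: the mode-n fibers of S become the columns of a matrix."""
    return np.moveaxis(S, n, 0).reshape(S.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: restore a mode-n unfolded matrix to a tensor of `shape`."""
    moved = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(moved), 0, n)

def mode_n_product(S, A, n):
    """S x_n A: contract mode n of S (size J_n) with A of shape (I_n, J_n)."""
    new_shape = list(S.shape)
    new_shape[n] = A.shape[0]
    return fold(A @ unfold(S, n), n, new_shape)

# A 3x4x5 tensor multiplied along mode 1 by a 2x4 matrix yields a 3x2x5 tensor.
S = np.random.rand(3, 4, 5)
A = np.random.rand(2, 4)
print(mode_n_product(S, A, 1).shape)  # (3, 2, 5)
```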
Tucker decomposition decomposes an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_n \times \cdots \times I_N}$ into the $n$-mode product of a core tensor $\mathcal{S} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_n \times \cdots \times J_N}$ and $N$ factor matrices $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times J_n}$:

$$\mathcal{X} \approx \mathcal{S} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)} \tag{2}$$

$$x_{i_1 i_2 \cdots i_N} \approx \sum_{j_1, j_2, \ldots, j_N} s_{j_1 j_2 \cdots j_N} \, a_{i_1 j_1}^{(1)} a_{i_2 j_2}^{(2)} \cdots a_{i_N j_N}^{(N)} \tag{3}$$
Non-negative Tucker decomposition adds non-negativity constraints to the core tensor $\mathcal{S}$ and the factor matrices $\mathbf{A}^{(n)}$; the tensor reconstructed from the $n$-mode product of the core tensor and the factor matrices is denoted $\hat{\mathcal{X}}$. The decomposition has many parameters, which are difficult to solve for directly, so the core tensor and factor matrices are generally solved alternately. The objective function for the alternating solution is either the least-squares error or the I-divergence [16]:

$$J_{LS} = \left\| \mathcal{X} - \hat{\mathcal{X}} \right\|_F^2 \tag{4}$$

$$J_I = \sum_{i_1, i_2, \ldots, i_N} \left( x_{i_1 i_2 \cdots i_N} \log \frac{x_{i_1 i_2 \cdots i_N}}{\hat{x}_{i_1 i_2 \cdots i_N}} - x_{i_1 i_2 \cdots i_N} + \hat{x}_{i_1 i_2 \cdots i_N} \right) \tag{5}$$
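As a minimal sketch, assuming the open-source TensorLy library (not used in the paper itself), the decomposition and the reconstruction by Formula (2) can be written as follows; `rank` plays the role of $\mathrm{Rank}(\mathcal{S})$ chosen later in Section 3.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

# A non-negative third-order tensor, e.g. a stack of five 100x100 grayscale images.
X = tl.tensor(np.random.rand(5, 100, 100))

# Alternately update the core tensor and factor matrices under non-negativity
# constraints, minimizing the least-squares objective J_LS of Formula (4).
core, factors = non_negative_tucker(X, rank=[5, 50, 50], n_iter_max=200)

# Reconstruct X_hat as the n-mode product of the core with all factor matrices.
X_hat = tl.tucker_to_tensor((core, factors))
print(float(tl.norm(X - X_hat) / tl.norm(X)))  # relative reconstruction error
```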

2.2. Tensor Decomposition Missing Value Imputation

A tensor is an extension of a matrix, and missing value imputation by tensor decomposition works in the same way as matrix imputation. The matrix filling process is illustrated by the example in Figure 1. In the matrix in the upper left corner, several elements are missing; their locations are marked with the string 'nan'. The values at those positions can be re-estimated by decomposition and data reconstruction and filled back into the original element positions, giving the matrix in the lower left corner, in which the missing elements have been filled.
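A minimal matrix analogue of Figure 1, assuming a simple truncate-and-refill scheme with a fixed rank (a stand-in for the tensor method used later, not the authors' algorithm):

```python
import numpy as np

def lowrank_impute(M, rank, n_iter=50):
    """Estimate the 'nan' entries of M as in Figure 1: alternately truncate to a
    rank-r SVD reconstruction and copy the observed entries back in."""
    missing = np.isnan(M)
    filled = np.where(missing, 0.0, M)                 # initialize missing entries at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r reconstruction
        filled = np.where(missing, approx, M)          # keep observed entries fixed
    return filled

# Two entries of a rank-1 matrix are missing and recovered from its structure.
M = np.array([[1.0, 2.0, 3.0], [2.0, np.nan, 6.0], [3.0, 6.0, np.nan]])
print(lowrank_impute(M, rank=1))
```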
Tucker decomposition, also known as higher-order PCA, can be used for feature subspace learning and representation. Multi-angle SAR images are imaging results of the same target at different azimuth or elevation angles; they share the same feature subspace [17] and are strongly correlated. The third-order tensor composed of multi-angle SAR grayscale images can therefore be represented by the mode-n product of a core tensor and three factor matrices, where each factor matrix represents the principal components of the tensor in the corresponding mode and the core tensor represents the correlations between the different components. After the non-negative tensor is decomposed and reconstructed, the reconstruction provides predicted values at the positions of the missing entries of the original sparse tensor.

3. SAR Image Reconstruction Based on Non-Negative Tucker Decomposition

3.1. SAR Image Reconstruction

Non-negative Tucker decomposition is used to reconstruct the azimuth image, and the structural similarity index (SSIM) and the cosine of the angle between feature vectors are used to measure the similarity between the original and reconstructed images.
The method flow of image reconstruction is as follows (a code sketch of the complete flow appears after the list):
(I)
The azimuth of the image to be reconstructed is $\beta$, and the azimuths $\gamma$ of the SAR images used to construct the tensor are given by Formula (6):

$$\gamma = \beta \pm \alpha \times interval, \quad \alpha = 1, 2, 3, \ldots, (frames-1)/2, \quad interval > 0, \quad frames = 2n+1,\ n \in \mathbb{N} \tag{6}$$

where $interval$ is the azimuth interval and $frames$ is the number of frontal slices of the sparse tensor (that is, the total number of azimuth images involved in the reconstruction, including the one to be reconstructed). The selected azimuth images are rotated to the azimuth $\beta$ to be reconstructed and cropped to the same size to form a third-order tensor $\mathcal{X}$; the image to be reconstructed is replaced by an all-zero matrix of the same size, as shown in Figure 2.
(II)
Set the rank of the core tensor $\mathcal{S}$. Let the sparse tensor be $\mathcal{X} \in \mathbb{R}^{frames \times width \times height}$, where $frames$ is the number of frontal slices of the tensor, $width$ is the width of a frontal slice, and $height$ is its height. If $\mathcal{X}(i,:,:) = 0$ for $i = 1, \ldots, k$ with $k < frames$, that is, $k$ slices correspond to images to be reconstructed, set $\mathcal{S} \in \mathbb{R}^{(frames-k) \times width \times height}$, i.e., $\mathrm{Rank}(\mathcal{S}) = [frames-k, width, height]$.
(III)
Perform non-negative Tucker decomposition on tensors to obtain core tensors and factor matrices.
(IV)
Use Formula (2) to reconstruct the third-order tensor $\hat{\mathcal{X}} \approx \mathcal{X}$.
Steps (II)–(IV) are shown in Figure 3.
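A compact sketch of steps (I)–(IV) under several assumptions: the images come from a hypothetical `images_by_azimuth` lookup, alignment uses SciPy's `rotate` (the sign of the rotation is an assumption), a single slice is missing ($k = 1$), and TensorLy performs the non-negative Tucker decomposition. It illustrates the flow of Figure 3 rather than reproducing the authors' code.

```python
import numpy as np
import tensorly as tl
from scipy.ndimage import rotate
from tensorly.decomposition import non_negative_tucker

def reconstruct_azimuth(images_by_azimuth, beta, frames=5, interval=1, size=100):
    """Steps (I)-(IV) for a single missing slice (k = 1)."""
    half = (frames - 1) // 2
    gammas = [beta + a * interval for a in range(-half, half + 1)]  # Formula (6)
    slices = []
    for g in gammas:
        if g == beta:
            slices.append(np.zeros((size, size)))        # slice to be reconstructed
        else:
            img = rotate(images_by_azimuth[g], angle=beta - g, reshape=False)
            img = np.clip(img, 0.0, None)                # keep the tensor non-negative
            slices.append(img[:size, :size])             # crop to a common size
    X = tl.tensor(np.stack(slices))                      # step (I): sparse tensor
    rank = [frames - 1, size, size]                      # step (II): Rank(S), k = 1
    core, factors = non_negative_tucker(X, rank=rank, n_iter_max=200)  # step (III)
    X_hat = tl.tucker_to_tensor((core, factors))         # step (IV): Formula (2)
    return np.asarray(X_hat)[half]                       # reconstructed azimuth image
```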

3.2. Evaluation of Reconstructed SAR Image

In order to evaluate the reconstructed images beyond the intuitive perception of the human eye, the SSIM index and a cosine similarity measure over feature vectors extracted by a convolutional neural network are introduced. The similarity between the reconstructed and original images is evaluated, and the performance of the proposed method is judged from these scores.
The SSIM index comprehensively considers three factors, namely, brightness, contrast and structure, to evaluate image similarity [18]. Its expression is as follows:
$$S(x, y) = \frac{\left( 2 \mu_x \mu_y + c_1 \right)\left( 2 \sigma_{xy} + c_2 \right)}{\left( \mu_x^2 + \mu_y^2 + c_1 \right)\left( \sigma_x^2 + \sigma_y^2 + c_2 \right)} \tag{7}$$

where $S(x, y)$ is the SSIM index, $\mu_x$ and $\mu_y$ are the means of $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ are their variances, and $\sigma_{xy}$ is the covariance of $x$ and $y$. $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$ are constants that maintain numerical stability, where $L$ is the pixel dynamic range and, following [18], $k_1 = 0.01$ and $k_2 = 0.03$. The value range of SSIM is [0, 1], and the larger the value, the higher the similarity between the two images.
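A minimal sketch of the SSIM computation, assuming scikit-image, whose default constants match $k_1 = 0.01$ and $k_2 = 0.03$ above; the two random images are placeholders for an original and reconstructed SAR image pair:

```python
import numpy as np
from skimage.metrics import structural_similarity

# Placeholder 8-bit grayscale images standing in for the original and
# reconstructed SAR images.
original = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
reconstructed = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

# data_range corresponds to the pixel dynamic range L in Formula (7).
score = structural_similarity(original, reconstructed, data_range=255)
print(score)  # larger values indicate more similar images
```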
The cosine of the feature vector angle measures feature similarity, as shown in Formula (8):

$$\mathrm{similarity} = \cos(\theta) = \frac{features_1 \cdot features_2}{\left\| features_1 \right\| \left\| features_2 \right\|} = \frac{\sum_{i=1}^{n} features_{1i} \times features_{2i}}{\sqrt{\sum_{i=1}^{n} features_{1i}^2} \sqrt{\sum_{i=1}^{n} features_{2i}^2}} \tag{8}$$
The cosine of the angle between feature vectors captures the direction of the features, and its value range is [−1, 1]: the more similar two feature vectors are, the closer the cosine is to 1, while a cosine of 0 means the vectors are perpendicular and not similar. Whether this measure gauges image similarity accurately and reliably depends on accurate extraction of the image features. Convolutional neural networks have powerful feature representation capabilities and can serve as feature extractors. First, the MSTAR data set is used to train a three-class classifier; in order for the extracted features to be robust and accurate, the recognition accuracy of the classifier is required to reach 95%. The feature extraction process is shown in Figure 4. The convolutional neural network consists of two parts: feature extraction (convolution, activation function and pooling) and classification (fully connected layers). Figure 4 presents the structure of a classical convolutional neural network for SAR vehicle target recognition. The feature extraction part contains four convolution layers with kernel sizes of 3 × 3, 5 × 5, 6 × 6 and 5 × 5, each followed by a nonlinear activation function and a pooling layer; it processes an SAR vehicle target image of size 100 × 100 × 3 into a 4 × 4 × 128 feature map, which is passed to the classification part. The classification part consists of two fully connected layers, which convert the flattened feature map into a feature vector of size 216 and then reduce it to a feature vector of size 3 for the subsequent three-class classification task.
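A minimal sketch of Formula (8); the two random vectors are placeholders for the 216-dimensional features that the trained CNN would extract from the original and reconstructed images:

```python
import numpy as np

def cosine_similarity(features1, features2):
    """Cosine of the angle between two feature vectors (Formula (8))."""
    return float(np.dot(features1, features2)
                 / (np.linalg.norm(features1) * np.linalg.norm(features2)))

# Random stand-ins for the 216-dimensional vectors from the CNN's first
# fully connected layer; in the paper both come from the trained network.
features1 = np.random.rand(216)
features2 = np.random.rand(216)
print(cosine_similarity(features1, features2))
```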
A flowchart of the method proposed in this section is shown in Figure 5.

4. Experiment and Analysis

The experiments use MSTAR data. The Moving and Stationary Target Acquisition and Recognition (MSTAR) project was sponsored by the U.S. Department of Defense and the Air Force Research Laboratory; the data were collected by Sandia National Laboratory's high-resolution X-band spotlight SAR sensor with a resolution of 0.3 m × 0.3 m [19]. The experiments in this paper mainly use three types of vehicle targets from the MSTAR database: T72, BTR70 and BMP2. Each target type includes SAR images at multiple elevation and azimuth angles; some optical and SAR images are shown in Figure 6.
Firstly, appropriate azimuth images and the azimuth image to be reconstructed are selected to form the sparse tensor, and non-negative Tucker decomposition yields the core tensor and factor matrices. The predicted values of the missing part are reconstructed to form the corresponding azimuth image. Finally, the similarity between the reconstructed and original images is calculated using the structural similarity index and the cosine of the feature vector angle. Because the latter requires a convolutional neural network to extract the image features, a network with a recognition rate above 95% is trained first. The main parameters of the non-negative Tucker reconstruction are $frames$ and $interval$; this section assigns different values to these two parameters, discussed further in (II) Vehicle target SAR image reconstruction results.
The experimental process and the corresponding results are as follows:
(I)
Convolutional neural network training results
The three-class network shown in Figure 4 is trained on the MSTAR data of the three vehicle target types, and the final recognition accuracy of the network is 95.91%, as shown in Figure 7. The recognition accuracy exceeds 95%, as shown in Figure 8, which meets the requirement for use as a feature extractor.
(II)
Vehicle target SAR image reconstruction results
Some azimuth SAR images of the three vehicle target types are reconstructed. If an image at an azimuth $\gamma$ required for tensor construction does not exist, the image at the nearest azimuth is used instead. The elevation angles of the images involved in the reconstruction are all 15°. Different values of the parameters $frames$ and $interval$ are set, and the structural similarity index and the cosine of the feature vector angle between the reconstructed and original images are calculated (for convenience, the latter is referred to as feature similarity, FSIM). The influence of the different parameter values on the reconstruction is discussed below.
Figure 9, Figure 10 and Figure 11 show the reconstruction results for some azimuth angles of the three military target types. The first row of each figure shows the original images; the second row shows the reconstruction results for $frames = 5$, $interval = 1$, followed by rows for $frames = 5$, $interval = 10$; $frames = 3$, $interval = 1$; and $frames = 3$, $interval = 10$.
Visually, the reconstructed images closely resemble the originals. Table 1, Table 2 and Table 3 show that, for a given azimuth image, the smaller the azimuth interval ($interval$) and the more images participating in the reconstruction (the larger $frames$), the higher the similarity between the reconstructed and original images. A smaller azimuth interval means higher similarity between the input images, so the principal components captured by the factor matrices of the tensor decomposition are more accurate and the reconstructed image is more similar to the original. When the azimuth interval increases, the similarity between the images drops sharply, and increasing the number of images then reduces the reconstruction accuracy. For the same $frames$, a smaller $interval$ gives a more similar reconstruction, because the method rests on the similarity and redundancy between multi-angle SAR images: the smaller the azimuth interval, the higher the similarity and the more accurate the reconstruction. The tables indicate that the main factor affecting multi-angle SAR image reconstruction is the azimuth interval; when it is too large, for example greater than 20°, the quality of the reconstructed image degrades severely and the method is no longer applicable.
The SSIM between the reconstructed and original images is relatively low because the missing slice is replaced by zeros when the sparse tensor is built; after reconstruction, the values at the corresponding positions remain near 0, and the reconstructed image is obtained after contrast stretching. The image brightness is therefore somewhat distorted, which degrades the SSIM score. In contrast, the cosine of the feature vector angle between the reconstructed and original images is close to 1, showing that the two are very similar at the feature level: the tensor decomposition result represents the feature subspace well, so the features of images reconstructed by non-negative Tucker decomposition are highly similar to those of the originals.

5. Conclusions

Starting from the correlation and redundancy of multi-angle SAR images, this paper proposes a vehicle target SAR image reconstruction method based on non-negative Tucker decomposition. The missing azimuth image is reconstructed from a tensor composed of adjacent azimuth images, and the similarity calculations confirm that the reconstruction method is effective. The cosine of the feature vector angle is introduced into the similarity calculation; combined with the visual results, it evaluates image similarity better than SSIM here. However, the reconstructed image exhibits brightness distortion. In future work, a regularization term and a bias could be added to the non-negative Tucker decomposition to address this problem. It is also possible to explore the effect of the azimuth interval on the similarity and to add weights to the tensor decomposition in order to obtain more accurate reconstructions.
The multi-angle SAR image reconstruction method based on non-negative Tucker decomposition can reconstruct an azimuth image from only a few adjacent azimuth images and requires no large training set. When sample data are scarce, the method can reconstruct images as a means of data expansion. For traditional target recognition methods such as template matching, the number of stored templates can be reduced: existing templates can be used to generate the corresponding azimuth images, which are then matched. Even when the azimuth images are 10° apart, the reconstructed image retains a high feature similarity to the original. This shows that the proposed method can support target recognition under sparse observation to a certain extent.
Future research will mainly consider two parts: first, reconstruction and verification will be extended to SAR image data of other vehicle targets; second, the reconstructed data will be used for vehicle target detection and recognition in SAR images to measure the method's practicability and robustness.

Author Contributions

Methodology, T.T.; project administration, T.T. and G.K.; writing—original draft, T.T.; writing—review and editing, T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Hunan province, China, under Project 2021JJ30780.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge the anonymous reviewers and editors for their efforts in providing valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, L.; Li, C.; Zhao, L.; Xiong, B.; Quan, S.; Kuang, G. A cascaded three-look network for aircraft detection in SAR images. Remote Sens. Lett. 2020, 11, 57–65.
2. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
3. Sun, Y.; Liang, D.; Wang, X.; Tang, X. DeepID3: Face recognition with very deep neural networks. arXiv 2015, arXiv:1502.00873.
4. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
5. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
6. Zhang, L.; Zhang, C.; Quan, S.; Xiao, H.; Kuang, G.; Liu, L. A class imbalance loss for imbalanced object recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2778–2792.
7. Kim, T.; Kim, D.S.; Lee, H.; Park, S.-H. Dimorphic properties of Bernoulli random variable. Filomat 2022, 36, 1711–1717.
8. Kim, T.; Kim, D.S. Degenerate zero-truncated Poisson random variables. Russ. J. Math. Phys. 2021, 28, 66–72.
9. Gao, B.; Xu, X.Y.; Tian, Q.X.; Li, X.; Zhou, B. SAR image reconstruction based on scattering center feature. Chin. J. Radio Sci. 2010, 25, 761–766. (In Chinese)
10. Guo, J.; Lei, B.; Ding, C.; Zhang, Y. Synthetic aperture radar image synthesis by using generative adversarial nets. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1111–1115.
11. Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 208–220.
12. Hazan, T.; Polak, S.; Shashua, A. Sparse image coding using a 3D non-negative tensor factorization. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05), Beijing, China, 17–21 October 2005.
13. Zhang, Z.W.; Ma, J.; Xia, K.W.; Li, Y.L. A sparse nonnegative Tucker decomposition for higher-order data inpainting. J. Optoelectron. Laser 2017, 28, 773–779. (In Chinese)
14. Walterscheid, I.; Brenner, A.R. Multistatic and multi-aspect SAR data acquisition to improve image interpretation. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013.
15. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
16. Kim, Y.D.; Choi, S. Nonnegative Tucker decomposition. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007.
17. Sun, S. A survey of multi-view machine learning. Neural Comput. Appl. 2013, 23, 2031–2038.
18. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
19. Keydel, E.R.; Lee, S.W.; Moore, J.T. MSTAR extended operating conditions: A tutorial. Algorithms Synth. Aperture Radar Imag. III 1996, 2757, 228–242.
Figure 1. Matrix missing value prediction.
Figure 2. Sparse tensor construction.
Figure 3. Tensor decomposition and reconstruction.
Figure 4. Feature extraction.
Figure 5. Flowchart of vehicle target SAR image reconstruction based on non-negative Tucker decomposition.
Figure 6. Multi-angle images of three types of vehicle targets.
Figure 7. Three classifier training results.
Figure 8. Zoomed in on the accuracy of the validation set (ACC: 0.9591).
Figure 9. BMP2 partial azimuth reconstruction results.
Figure 10. Partial azimuth reconstruction results of BTR70.
Figure 11. Partial azimuth reconstruction results of T72.
Table 1. Similarity calculation of BMP2 reconstruction results (each cell gives SSIM / FSIM).

| Parameter \ Reconstructed Azimuth (°) | 113 | 140 | 144 | 150 | 155 |
|---|---|---|---|---|---|
| frames = 5, interval = 1 | 0.651 / 0.977 | 0.636 / 0.971 | 0.668 / 0.987 | 0.564 / 0.983 | 0.624 / 0.971 |
| frames = 5, interval = 10 | 0.573 / 0.968 | 0.438 / 0.952 | 0.544 / 0.945 | 0.499 / 0.910 | 0.574 / 0.984 |
| frames = 3, interval = 1 | 0.601 / 0.960 | 0.577 / 0.965 | 0.674 / 0.972 | 0.553 / 0.835 | 0.612 / 0.945 |
| frames = 3, interval = 10 | 0.396 / 0.613 | 0.519 / 0.754 | 0.490 / 0.960 | 0.470 / 0.814 | 0.553 / 0.791 |
Table 2. Similarity calculation of BTR70 reconstruction results (each cell gives SSIM / FSIM).

| Parameter \ Reconstructed Azimuth (°) | 69 | 96 | 100 | 106 | 111 |
|---|---|---|---|---|---|
| frames = 5, interval = 1 | 0.558 / 0.925 | 0.685 / 0.998 | 0.627 / 0.992 | 0.704 / 0.989 | 0.554 / 0.959 |
| frames = 5, interval = 10 | 0.558 / 0.741 | 0.515 / 0.989 | 0.561 / 0.934 | 0.516 / 0.931 | 0.562 / 0.972 |
| frames = 3, interval = 1 | 0.558 / 0.947 | 0.612 / 0.968 | 0.677 / 0.984 | 0.607 / 0.980 | 0.591 / 0.987 |
| frames = 3, interval = 10 | 0.558 / 0.974 | 0.536 / 0.947 | 0.546 / 0.928 | 0.533 / 0.933 | 0.595 / 0.981 |
Table 3. Similarity calculation of T72 reconstruction results (each cell gives SSIM / FSIM).

| Parameter \ Reconstructed Azimuth (°) | 137 | 164 | 168 | 174 | 179 |
|---|---|---|---|---|---|
| frames = 5, interval = 1 | 0.658 / 0.967 | 0.575 / 0.996 | 0.584 / 0.987 | 0.618 / 0.941 | 0.663 / 0.989 |
| frames = 5, interval = 10 | 0.602 / 0.941 | 0.558 / 0.983 | 0.544 / 0.990 | 0.580 / 0.998 | 0.574 / 0.984 |
| frames = 3, interval = 1 | 0.644 / 0.994 | 0.590 / 0.996 | 0.675 / 0.998 | 0.547 / 0.963 | 0.635 / 0.994 |
| frames = 3, interval = 10 | 0.548 / 0.985 | 0.530 / 0.982 | 0.567 / 0.985 | 0.562 / 0.997 | 0.562 / 0.987 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
