

Advanced Artificial Intelligence Algorithm for the Analysis of Remote Sensing Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 38536

Special Issue Editors


Guest Editor
College of Electronic Science, National University of Defense Technology, Changsha 410073, China
Interests: remote sensing; SAR image processing; change detection; ground moving target indication; polarimetric SAR image classification
Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Interests: multitemporal SAR image processing; change detection; SAR image classification; object detection and tracking

Guest Editor
Senior Researcher, Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, Vas. Pavlou and I. Metaxa, 15236 Penteli, Greece
Interests: remote sensing; multispectral/hyperspectral imaging; imaging spectroscopy; optical/SAR sensors; image processing; geology; lithological and mineral mapping; terrestrial surface mapping

Special Issue Information

Dear Colleagues,

Remote sensing images record diverse physical properties of the ground surface. By analyzing and interpreting these images, the evolution of surface morphology and the natural environment can be evaluated, predicted, and planned for, in service of production and social activities. Given the massive volume of remote sensing data now acquired, manual interpretation can no longer meet the growing demands of remote sensing applications. How to interpret remote sensing images automatically, efficiently, and accurately is therefore a significant and difficult problem in remote sensing research and practice. In recent years, artificial intelligence, and deep learning in particular, has had a significant impact on the field of remote sensing, providing promising tools to overcome many challenging issues in the analysis of remote sensing images in terms of accuracy and reliability.

In this Special Issue, we intend to compile a series of papers that merge the analysis and use of remote sensing images with AI techniques. We expect new research to address practical problems in remote sensing image applications with the help of advanced AI methods.

Articles may address, but are not limited to, the following topics:

  • Advanced AI architectures for image classification;
  • Advanced AI-based target detection/recognition/tracking;
  • Change detection for remote sensing;
  • Semantic segmentation for remote sensing;
  • Multi-sensor data fusion / multi-modal data analysis;
  • Image super-resolution/restoration for remote sensing;
  • Unsupervised / weakly supervised learning for image processing;
  • Advanced AI techniques for remote sensing applications;
  • New datasets for remote sensing image classification with deep learning;
  • Clustering (including classic and more advanced tools, such as subspace clustering, clustering ensemble, etc.);
  • Spectral unmixing adopting either linear or non-linear models, using Bayesian or non-Bayesian approaches for parameter estimation;
  • Dimensionality reduction;
  • Data transformations.
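The spectral unmixing topic listed above can be illustrated with a minimal linear-mixing-model sketch. The endmember matrix and pixel spectrum below are invented toy values, and the clip-and-renormalize step is a crude stand-in for a properly constrained solver (e.g., fully constrained least squares); this is illustrative only.

```python
import numpy as np

def linear_unmix(endmembers, spectrum):
    """Estimate abundances under the linear mixing model y = E @ a.

    Solves unconstrained least squares, then clips negatives and
    renormalizes so the abundances sum to one -- a rough approximation
    of the nonnegativity and sum-to-one constraints.
    """
    a, *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

# Toy example: 3 spectral bands, 2 endmembers (invented values).
E = np.array([[1.0, 0.1],
              [0.2, 0.9],
              [0.6, 0.5]])
y = E @ np.array([0.3, 0.7])   # pixel that is a 30% / 70% mixture
print(linear_unmix(E, y))      # close to [0.3, 0.7]
```

Bayesian and nonlinear unmixing models, as mentioned in the topic list, replace this least-squares step with posterior inference or nonlinear mixing functions.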

Prof. Dr. Gangyao Kuang
Dr. Xin Su
Dr. Olga Sykioti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence
  • deep learning
  • image processing
  • target detection
  • change detection
  • data fusion
  • multispectral and hyperspectral images
  • synthetic aperture radar images
  • satellite video


Published Papers (19 papers)


Research


21 pages, 1901 KiB  
Article
Gradual Domain Adaptation with Pseudo-Label Denoising for SAR Target Recognition When Using Only Synthetic Data for Training
by Yuanshuang Sun, Yinghua Wang, Hongwei Liu, Liping Hu, Chen Zhang and Siyuan Wang
Remote Sens. 2023, 15(3), 708; https://doi.org/10.3390/rs15030708 - 25 Jan 2023
Cited by 3 | Viewed by 1799
Abstract
Because of the high cost of data acquisition in synthetic aperture radar (SAR) target recognition, the application of synthetic (simulated) SAR data is becoming increasingly popular. Our study explores the problems encountered when training solely on synthetic data and testing on measured (real) data, where the distribution gap between synthetic and measured SAR data degrades recognition performance. We propose a gradual domain adaptation recognition framework with pseudo-label denoising to solve this problem. As a warm-up, the feature alignment classification network is trained to learn a domain-invariant feature representation and obtain a relatively satisfactory recognition result. Then, we utilize self-training for further improvement. Some pseudo-labeled data are selected to fine-tune the network, narrowing the distribution difference between the training data and test data for each category. However, the pseudo-labels are inevitably noisy, and the wrong ones may deteriorate the classifier's performance during fine-tuning iterations. Thus, we conduct pseudo-label denoising to eliminate noisy pseudo-labels and improve the trained classifier's robustness. The denoising is based on image similarity, keeping labels consistent between the image and feature domains. We conduct extensive experiments on the newly published SAMPLE dataset, with two training scenarios designed to verify the proposed framework. For Training Scenario I, the framework matches the result of neural architecture search and achieves 96.46% average accuracy. For Training Scenario II, the framework outperforms other existing methods and achieves 97.36% average accuracy. These results illustrate the superiority of our framework, which reaches state-of-the-art recognition levels with good stability.
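The self-training step described in this abstract, keeping only confidently pseudo-labeled target samples for fine-tuning, can be sketched generically as follows. This is not the authors' implementation; the confidence threshold and softmax inputs are illustrative.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep unlabeled samples whose top softmax probability exceeds a
    threshold; return their indices and hard pseudo-labels.

    probs: (N, C) array of predicted class probabilities on
    target-domain data. Low-confidence samples are left unlabeled.
    """
    confidence = probs.max(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

probs = np.array([[0.95, 0.03, 0.02],   # confident -> kept
                  [0.40, 0.35, 0.25],   # ambiguous -> dropped
                  [0.05, 0.05, 0.90]])  # confident -> kept
idx, labels = select_pseudo_labels(probs, threshold=0.9)
print(idx, labels)  # [0 2] [0 2]
```

The paper's denoising step goes further by also rejecting pseudo-labels that are inconsistent with image-domain similarity, which a pure confidence filter cannot catch.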

24 pages, 10438 KiB  
Article
A Dual Neighborhood Hypergraph Neural Network for Change Detection in VHR Remote Sensing Images
by Junzheng Wu, Ruigang Fu, Qiang Liu, Weiping Ni, Kenan Cheng, Biao Li and Yuli Sun
Remote Sens. 2023, 15(3), 694; https://doi.org/10.3390/rs15030694 - 24 Jan 2023
Cited by 4 | Viewed by 1819
Abstract
Very high spatial resolution (VHR) remote sensing images are an extremely valuable source for monitoring changes occurring on the Earth's surface. However, precisely detecting relevant changes in VHR images remains a challenge due to the complexity of the relationships among ground objects. To address this limitation, a dual neighborhood hypergraph neural network is proposed in this article, which combines multiscale superpixel segmentation and hypergraph convolution to model and exploit these complex relationships. First, the bi-temporal image pairs are segmented at two scales and fed to a pre-trained U-net to obtain node features, treating each object at the fine scale as a node. The dual neighborhood is then defined using the father-child and adjacency relationships of the segmented objects to construct the hypergraph, which allows the model to represent higher-order structured information far more complex than conventional pairwise relationships. Hypergraph convolutions are conducted on the constructed hypergraph to propagate label information from a small number of labeled nodes to the unlabeled ones via the node-edge-node transformation. Moreover, to alleviate the problem of imbalanced sampling, the focal loss function is adopted to train the hypergraph neural network. The experimental results on optical, SAR, and heterogeneous optical/SAR data sets demonstrate that the proposed method offers better effectiveness and robustness compared to many state-of-the-art methods.
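The focal loss mentioned in this abstract down-weights well-classified samples so that training focuses on hard (often minority-class) nodes. A minimal NumPy version, with illustrative α and γ values rather than the paper's settings:

```python
import numpy as np

def focal_loss(p_true, gamma=2.0, alpha=1.0):
    """Focal loss FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t),
    where p_true is the predicted probability of the true class.
    gamma=0 recovers plain cross-entropy (scaled by alpha)."""
    p_true = np.clip(p_true, 1e-12, 1.0)
    return -alpha * (1.0 - p_true) ** gamma * np.log(p_true)

easy, hard = 0.95, 0.30          # well-classified vs. hard sample
ce = focal_loss(np.array([easy, hard]), gamma=0.0)  # cross-entropy
fl = focal_loss(np.array([easy, hard]), gamma=2.0)  # focal loss
# The easy sample's loss is suppressed far more than the hard one's,
# shifting the gradient budget toward hard samples.
print(fl / ce)
```

With γ = 2, the easy sample's loss is scaled by (1 − 0.95)² = 0.0025, while the hard sample's is scaled only by (1 − 0.30)² = 0.49.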

17 pages, 6750 KiB  
Article
RiDOP: A Rotation-Invariant Detector with Simple Oriented Proposals in Remote Sensing Images
by Chongyang Wei, Weiping Ni, Yao Qin, Junzheng Wu, Han Zhang, Qiang Liu, Kenan Cheng and Hui Bian
Remote Sens. 2023, 15(3), 594; https://doi.org/10.3390/rs15030594 - 19 Jan 2023
Cited by 2 | Viewed by 1580
Abstract
Compared with general object detection with horizontal bounding boxes in natural images, oriented object detection in remote sensing images is an active and challenging research topic, as objects are usually displayed in arbitrary orientations. To model the varying orientations of oriented objects, general CNN-based methods usually adopt more parameters or well-designed modules, which are often complex and inefficient. To address this issue, a detector requires two key components: (i) generating oriented proposals in a lightweight network to achieve effective representation of arbitrarily oriented objects; (ii) extracting a rotation-invariant feature map in both the spatial and orientation dimensions. In this paper, we propose a novel, lightweight rotated region proposal network that produces arbitrarily oriented proposals by sliding two vertexes only on adjacent sides, and adopt a simple yet effective representation to describe oriented objects. This decreases the complexity of modeling orientation information. Meanwhile, we adopt a rotation-equivariant backbone to generate a feature map with explicit orientation channel information and utilize spatial and orientation modules to obtain completely rotation-invariant features in both dimensions. Without tricks, extensive experiments performed on three challenging datasets, DOTA-v1.0, DOTA-v1.5, and HRSC2016, demonstrate that our proposed method reaches state-of-the-art accuracy while reducing the model size by 40% in comparison with the previous best method.

17 pages, 6459 KiB  
Article
Novel Spatial–Spectral Channel Attention Neural Network for Land Cover Change Detection with Remote Sensed Images
by Xu Yang, Zhiyong Lv, Jón Atli Benediktsson and Fengrui Chen
Remote Sens. 2023, 15(1), 87; https://doi.org/10.3390/rs15010087 - 23 Dec 2022
Cited by 5 | Viewed by 1771
Abstract
Land cover change detection (LCCD) with remote-sensed images plays an important role in observing Earth's surface changes. In recent years, the use of spatial-spectral channel attention mechanisms in information processing has gained interest. In this study, aiming to improve the performance of LCCD with remote-sensed images, a novel spatial-spectral channel attention neural network (SSCAN) is proposed. In SSCAN, a spatial channel attention module and a convolutional block attention module are employed to process pre- and post-event images, respectively. In contrast to traditional methods, the motivation of the proposed operation lies in amplifying the change magnitude in changed areas and minimizing it in unchanged areas. Moreover, a simple but effective batch-size dynamic adjustment strategy is proposed to train SSCAN, promoting convergence of the objective function. Results from comparative experiments with seven cognate and state-of-the-art methods demonstrate the superiority of the proposed network in accelerating network convergence, reinforcing learning efficiency, and improving LCCD performance. For example, the proposed SSCAN achieves an improvement of approximately 0.17–23.84% in OA on Dataset-A.

20 pages, 6573 KiB  
Article
A Lightweight Model for Ship Detection and Recognition in Complex-Scene SAR Images
by Boli Xiong, Zhongzhen Sun, Jin Wang, Xiangguang Leng and Kefeng Ji
Remote Sens. 2022, 14(23), 6053; https://doi.org/10.3390/rs14236053 - 29 Nov 2022
Cited by 17 | Viewed by 2710
Abstract
SAR ship detection and recognition are important components of SAR data interpretation, allowing for the continuous, reliable, and efficient monitoring of maritime ship targets. On the one hand, because of the lack of high-quality datasets, most existing research on SAR ships has focused on target detection, and there have been few studies on integrated ship detection and recognition in complex SAR images. On the other hand, the development of deep learning has promoted research on intelligent SAR image interpretation algorithms; however, most existing algorithms focus only on recognition performance and ignore model size and computational efficiency. To solve the above problems, a lightweight model for ship detection and recognition in complex-scene SAR images is proposed in this paper. Firstly, in order to comprehensively improve detection performance and deployment capability, we apply the YOLOv5-n lightweight model as the baseline algorithm. Secondly, we redesign and optimize the pyramid pooling structure to effectively enhance target feature extraction efficiency and improve the algorithm's operation speed. Meanwhile, to suppress the influence of complex background interference and ship distribution, we integrate different attention mechanisms into the target feature extraction layer. In addition, to improve the detection and recognition of densely packed parallel ships, we optimize the model's prediction layer by adding an angular classification module. Finally, we conducted extensive experiments on the newly released complex-scene SAR ship detection and recognition dataset, SRSDDv1.0. The experimental results show that the smallest version of the proposed model has only 1.92 M parameters and 4.52 MB of model memory, while achieving an F1-Score of 61.26 and an FPS of 68.02 on the SRSDDv1.0 dataset.

23 pages, 15950 KiB  
Article
MSSDet: Multi-Scale Ship-Detection Framework in Optical Remote-Sensing Images and New Benchmark
by Weiming Chen, Bing Han, Zheng Yang and Xinbo Gao
Remote Sens. 2022, 14(21), 5460; https://doi.org/10.3390/rs14215460 - 30 Oct 2022
Cited by 8 | Viewed by 2109
Abstract
Ships are the most important mode of ocean transportation. Thus, ship detection is one of the most critical technologies in ship monitoring, playing an essential role in maintaining marine safety. Optical remote-sensing images contain rich color and texture information, which is beneficial to ship detection. However, few optical remote-sensing datasets are publicly available due to data sensitivity and copyright issues, and only the HRSC2016 dataset has been built for the ship-detection task. Moreover, almost all general object detectors suffer from failures in multi-scale ship detection because of the diversity of spatial resolutions and ship sizes. In this paper, we re-annotate the HRSC2016 dataset and supplement it with 610 optical remote-sensing images to build a new open-source ship-detection benchmark with rich multi-scale ship objects, named the HRSC2016-MS dataset. In addition, we further explore the potential of a recursive mechanism in object detection and propose a novel multi-scale ship-detection framework (MSSDet) for optical remote-sensing images. The success of detecting multi-scale objects depends on the hierarchical pyramid structure of the object-detection framework. However, the inherent semantic and spatial gaps among hierarchical pyramid levels seriously affect detection performance. To alleviate this problem, we propose a joint recursive feature pyramid (JRFP), which can generate semantically strong and spatially refined multi-scale features. Extensive experiments were conducted on the HRSC2016-MS, HRSC2016, and DIOR datasets. Detailed ablation studies directly demonstrate the effectiveness of the proposed JRFP architecture and show that the proposed method has excellent generalizability. Comparisons with state-of-the-art methods show that the proposed method achieves competitive performance, i.e., 77.3%, 95.8%, and 73.3% mean average precision on the HRSC2016-MS, HRSC2016, and DIOR datasets, respectively.

22 pages, 5713 KiB  
Article
Ship Classification in Synthetic Aperture Radar Images Based on Multiple Classifiers Ensemble Learning and Automatic Identification System Data Transfer Learning
by Zhenguo Yan, Xin Song, Lei Yang and Yitao Wang
Remote Sens. 2022, 14(21), 5288; https://doi.org/10.3390/rs14215288 - 22 Oct 2022
Cited by 1 | Viewed by 2412
Abstract
With the continuous development of Earth observation technology, space-based synthetic aperture radar (SAR) has become an important source of information for maritime surveillance, and ship classification in SAR images has become an active research direction in maritime ship monitoring. In recent years, the remote sensing community has proposed several solutions to the problem of ship classification in SAR images. However, it is difficult to obtain an adequate amount of labeled SAR samples for training classifiers, which limits the application of machine learning, particularly deep learning, to SAR image ship classification. In contrast, as a real-time automatic tracking system for monitoring ships at sea, the automatic identification system (AIS) can provide a large number of relatively easy-to-obtain labeled ship samples. Therefore, to solve the problem of SAR image ship classification and improve classification performance with limited samples, we propose a SAR image ship classification method based on multiple-classifier ensemble learning (MCEL) and AIS data transfer learning. The core idea is to transfer the MCEL model trained on AIS data to SAR image ship classification, which involves three steps: first, we use acquired global space-based AIS data to build a dataset for training ship classification models; then, the ensemble learning model is constructed by combining multiple base classifiers; and finally, the trained classification model is transferred to SAR images for ship type prediction. Experiments show that the proposed method achieves a classification accuracy of 85.00% for SAR ship classification, which is better than the performance of each base classifier. This demonstrates that AIS data transfer learning can effectively address SAR ship classification with limited samples and has important application value in maritime surveillance.

24 pages, 6473 KiB  
Article
A Group-Wise Feature Enhancement-and-Fusion Network with Dual-Polarization Feature Enrichment for SAR Ship Detection
by Xiaowo Xu, Xiaoling Zhang, Zikang Shao, Jun Shi, Shunjun Wei, Tianwen Zhang and Tianjiao Zeng
Remote Sens. 2022, 14(20), 5276; https://doi.org/10.3390/rs14205276 - 21 Oct 2022
Cited by 32 | Viewed by 1733
Abstract
Ship detection in synthetic aperture radar (SAR) images is a significant and challenging task. However, most existing deep learning-based SAR ship detection approaches are confined to single-polarization SAR images and fail to leverage dual-polarization characteristics, which limits further improvement of detection performance. A key problem is how to make full use of dual-polarization characteristics and exploit polarization features within the ship detection network. To tackle this problem, we propose a group-wise feature enhancement-and-fusion network with dual-polarization feature enrichment (GWFEF-Net) for better dual-polarization SAR ship detection. GWFEF-Net offers four contributions: (1) dual-polarization feature enrichment (DFE) for enriching the feature library and suppressing clutter interference to facilitate feature extraction; (2) group-wise feature enhancement (GFE) for enhancing each polarization semantic feature to highlight each polarization feature region; (3) group-wise feature fusion (GFF) for fusing multi-scale polarization features to realize group-wise information interaction among polarization features; (4) hybrid pooling channel attention (HPCA) for channel modeling to balance each polarization feature's contribution. We conduct sufficient ablation studies to verify the effectiveness of each contribution. Extensive experiments on the Sentinel-1 dual-polarization SAR ship dataset demonstrate the superior performance of GWFEF-Net, with 94.18% average precision (AP), compared with ten other competitive methods. Specifically, GWFEF-Net yields a 2.51% AP improvement over the second-best method.

19 pages, 5890 KiB  
Article
Energy-Based Adversarial Example Detection for SAR Images
by Zhiwei Zhang, Xunzhang Gao, Shuowei Liu, Bowen Peng and Yufei Wang
Remote Sens. 2022, 14(20), 5168; https://doi.org/10.3390/rs14205168 - 15 Oct 2022
Cited by 4 | Viewed by 1557
Abstract
Adversarial examples (AEs) raise increasing concern about the security of deep-learning-based synthetic aperture radar (SAR) target recognition systems. SAR AEs with perturbation constrained to the vicinity of the target have recently been in the spotlight due to their prospects for physical realization. However, current adversarial detection methods generally suffer severe performance degradation against SAR AEs with region-constrained perturbation. To solve this problem, we treat SAR AEs as low-probability samples incompatible with the clean dataset. With the help of energy-based models, we capture an inherent energy gap between SAR AEs and clean samples that is robust to changes in the perturbation region. Inspired by this discovery, we propose an energy-based adversarial detector that requires no modification to a pretrained model. To better distinguish clean samples from AEs, energy regularization is adopted to fine-tune the pretrained model. Experiments demonstrate that the proposed method significantly boosts detection performance against SAR AEs with region-constrained perturbation.
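The energy score at the heart of such detectors can be computed from a classifier's logits alone; lower energy typically corresponds to more "in-distribution" inputs. A generic sketch of the standard energy score (the temperature and the toy logits below are illustrative, not taken from the paper):

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy score E(x) = -T * logsumexp(f(x) / T) over class logits.
    Uses the max-subtraction trick for numerical stability. Confident
    (clean-looking) inputs typically receive lower energy than
    adversarial or out-of-distribution ones."""
    z = np.asarray(logits, dtype=float) / T
    m = z.max()
    return -T * (m + np.log(np.exp(z - m).sum()))

clean_like = energy_score([9.0, 0.5, 0.3])  # confident prediction
odd_like = energy_score([1.1, 1.0, 0.9])    # near-uniform logits
print(clean_like < odd_like)  # True: lower energy for the confident input
```

A detector would threshold this score, flagging inputs whose energy falls on the wrong side of the gap between clean and adversarial samples.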

24 pages, 12495 KiB  
Article
Noise Parameter Estimation Two-Stage Network for Single Infrared Dim Small Target Image Destriping
by Teliang Wang, Qian Yin, Fanzhi Cao, Miao Li, Zaiping Lin and Wei An
Remote Sens. 2022, 14(19), 5056; https://doi.org/10.3390/rs14195056 - 10 Oct 2022
Cited by 4 | Viewed by 1441
Abstract
Existing nonuniformity correction methods generally suffer from image blur, artifacts, over-smoothing, and nonuniformity residuals, and struggle to meet the requirements of image enhancement in complex application scenarios. In particular, when applied to images of dim small targets, they may remove the targets as noise points due to over-smoothing. This paper draws on the idea of residual networks and proposes a two-stage learning network based on the imaging mechanism of an infrared line-scan system. We adopt a multi-scale feature extraction unit and design a gain-correction sub-network and an offset-correction sub-network. The two sub-networks are first pre-trained independently, then cascaded into a two-stage network and trained jointly. The experimental results show that the PSNR gain of our method can exceed 15 dB, achieving excellent performance across different backgrounds and intensities of nonuniform noise. Moreover, our method avoids losing texture details or dim small targets while effectively removing nonuniform noise.
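As a point of comparison for the learned gain/offset correction described above, a classical baseline for line-scan stripe noise is column-wise moment matching: rescale each detector column so its mean and standard deviation match the image-wide statistics. A sketch with simulated stripes (purely illustrative; not the paper's network):

```python
import numpy as np

def destripe_moment_match(img):
    """Column-wise moment matching: map each column's (mean, std) to
    the image-wide (mean, std), undoing per-column gain/offset stripes.
    Over-smoothing risk: genuine column-aligned structure is flattened
    too, which is exactly the failure mode for dim small targets."""
    mu, sd = img.mean(), img.std()
    col_mu = img.mean(axis=0, keepdims=True)
    col_sd = img.std(axis=0, keepdims=True)
    return (img - col_mu) / np.maximum(col_sd, 1e-12) * sd + mu

rng = np.random.default_rng(0)
clean = rng.normal(100.0, 10.0, size=(64, 64))
gain = rng.uniform(0.8, 1.2, size=(1, 64))     # per-column gain stripes
offset = rng.uniform(-5.0, 5.0, size=(1, 64))  # per-column offset stripes
striped = clean * gain + offset
fixed = destripe_moment_match(striped)
# Column means are equalized, so stripe-to-stripe variation collapses.
print(fixed.mean(axis=0).std() < striped.mean(axis=0).std())  # True
```

The paper's two-stage network learns the gain and offset corrections instead of assuming uniform column statistics, which is what lets it preserve dim small targets.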

19 pages, 4927 KiB  
Article
MAEANet: Multiscale Attention and Edge-Aware Siamese Network for Building Change Detection in High-Resolution Remote Sensing Images
by Bingjie Yang, Yuancheng Huang, Xin Su and Haonan Guo
Remote Sens. 2022, 14(19), 4895; https://doi.org/10.3390/rs14194895 - 30 Sep 2022
Cited by 6 | Viewed by 1770
Abstract
In recent years, deep learning has proven very efficient for large-area building change detection. However, current methods for pixel-wise building change detection still have limitations, such as a lack of robustness to false-positive changes and confusion about the boundaries of dense buildings. To address these problems, a novel deep learning method called the multiscale attention and edge-aware Siamese network (MAEANet) is proposed. The principal idea is to integrate both multiscale discriminative and edge structure information to improve the quality of prediction results. To effectively extract multiscale discriminative features, we design a contour channel attention module (CCAM) that highlights the edges of changed regions and combine it with the classical convolutional block attention module (CBAM) to construct a multiscale attention (MA) module, which contains channel, spatial, and contour attention mechanisms. Meanwhile, to account for the structural information of buildings, we introduce an edge-aware (EA) module, which combines discriminative features with edge structure features to alleviate edge confusion in dense buildings. We conducted experiments on the LEVIR-CD and BCDD datasets. The proposed MA and EA modules improve the F1-Score of the basic architecture by 1.13% on LEVIR-CD and by 1.39% on BCDD with acceptable computational overhead. The experimental results demonstrate that the proposed MAEANet is effective and outperforms other state-of-the-art methods in terms of both quantitative metrics and visual quality.

19 pages, 10057 KiB  
Article
Multi-Source Remote Sensing Pretraining Based on Contrastive Self-Supervised Learning
by Chenfang Liu, Hao Sun, Yanjie Xu and Gangyao Kuang
Remote Sens. 2022, 14(18), 4632; https://doi.org/10.3390/rs14184632 - 16 Sep 2022
Cited by 9 | Viewed by 1987
Abstract
SAR-optical images from different sensors can provide consistent information for scene classification. However, the utilization of unlabeled SAR-optical images in deep learning-based remote sensing image interpretation remains an open issue. In recent years, contrastive self-supervised learning (CSSL) methods have shown great potential for obtaining meaningful feature representations from massive amounts of unlabeled data. This paper investigates the effectiveness of CSSL-based pretraining models for SAR-optical remote sensing classification. Firstly, we analyze the contrastive strategies of single-source and multi-source SAR-optical data augmentation under different CSSL architectures. We find that a CSSL framework without explicit negative sample selection naturally fits the multi-source learning problem. Secondly, we find that registered SAR-optical image pairs can guide a negative-free Siamese self-supervised network to learn shared features, which is also why this framework outperforms the CSSL framework with negative samples. Finally, we apply the negative-free CSSL pretrained network, which learns the shared features of SAR-optical images, to the downstream domain adaptation task of transferring from optical to SAR images. We find that the choice of pretrained network is important for downstream tasks.
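The negative-free Siamese objective referred to above reduces, in BYOL/SimSiam-style formulations (our reading of "CSSL without negative samples", not necessarily the paper's exact formulation), to a negative cosine similarity between one branch's prediction and the other branch's target embedding:

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity loss between the online branch's
    prediction p and the target branch's embedding z. In a real
    SimSiam/BYOL setup, z is detached (stop-gradient); NumPy has no
    autograd, so the detach is implicit in this sketch."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(p @ z)

# Perfectly aligned SAR/optical embeddings give the minimum loss
# (close to -1); unrelated (orthogonal) embeddings give roughly 0.
print(neg_cosine(np.array([1.0, 2.0]), np.array([2.0, 4.0])))
print(neg_cosine(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```

With registered SAR-optical pairs, the two branches receive the two modalities of the same scene, so minimizing this loss pulls their embeddings toward the shared features the abstract describes.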

23 pages, 12068 KiB  
Article
Auto-Weighted Structured Graph-Based Regression Method for Heterogeneous Change Detection
by Lingjun Zhao, Yuli Sun, Lin Lei and Siqian Zhang
Remote Sens. 2022, 14(18), 4570; https://doi.org/10.3390/rs14184570 - 13 Sep 2022
Cited by 2 | Viewed by 1390
Abstract
Change detection using heterogeneous remote sensing images is an increasingly interesting and very challenging topic. To make the heterogeneous images comparable, some graph-based methods have been proposed, which first construct a graph for the image to capture the structure information and then use the graph to obtain the structural changes between images. Nonetheless, previous graph-based change detection approaches are insufficient in representing and exploiting the image structure. To address these issues, in this paper, we propose an auto-weighted structured graph (AWSG)-based regression method for heterogeneous change detection, which mainly consists of two processes: learning the AWSG to capture the image structure and using the AWSG to perform structure regression to detect changes. In the graph learning process, a self-conducted weighting strategy is employed to make the graph more robust, and local and global structure information is combined to make the graph more informative. In the structure regression process, we transform one image into the domain of the other image by using the learned AWSG, where the high-order neighbor information hidden in the graph is exploited to obtain a better regression image and change image. Experimental results and comparisons on four real datasets against seven state-of-the-art methods demonstrate the effectiveness of the proposed approach. Full article
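The core idea of graph-based structure regression can be sketched in a few lines. This is a deliberately simplified stand-in for the learned AWSG (scalar patch features, a plain k-NN graph with inverse-distance weights, and first-order neighbors only are all illustrative assumptions):

```python
def knn_weights(feats, k=2):
    # Row-normalized k-nearest-neighbour affinities over patch
    # features of one image (a toy stand-in for the learned AWSG).
    n = len(feats)
    rows = []
    for i in range(n):
        nearest = sorted(
            (abs(feats[i] - feats[j]), j) for j in range(n) if j != i
        )[:k]
        w = {j: 1.0 / (1.0 + d) for d, j in nearest}
        s = sum(w.values())
        rows.append({j: v / s for j, v in w.items()})
    return rows

def structure_regression(weights, other_pixels):
    # Transform image X into the domain of image Y: each regressed
    # pixel is the weighted average of Y's pixels over X's graph
    # neighbours; a large |regressed - actual| then flags a change.
    return [
        sum(w * other_pixels[j] for j, w in row.items())
        for row in weights
    ]
```

Because the graph comes from one image and the pixel values from the other, the regression output lives in the second image's domain, making the heterogeneous pair directly comparable.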

20 pages, 2336 KiB  
Article
An Empirical Study of Fully Black-Box and Universal Adversarial Attack for SAR Target Recognition
by Bowen Peng, Bo Peng, Shaowei Yong and Li Liu
Remote Sens. 2022, 14(16), 4017; https://doi.org/10.3390/rs14164017 - 18 Aug 2022
Cited by 3 | Viewed by 1801
Abstract
It has been demonstrated that deep neural network (DNN)-based synthetic aperture radar (SAR) automatic target recognition (ATR) techniques are extremely susceptible to adversarial intrusions, that is, malicious SAR images including deliberately generated perturbations that are imperceptible to the human eye but can deflect DNN inference. Attack algorithms in previous studies rely on direct access to an ATR model, such as its gradients or training data, to generate adversarial examples for a target SAR image, which is against the non-cooperative nature of ATR applications. In this article, we establish a fully black-box universal attack (FBUA) framework to craft one single universal adversarial perturbation (UAP) against a wide range of DNN architectures as well as a large fraction of target images. The fact that the UAP can be designed by an FBUA in advance, without any access to the victim DNN, is of high practical relevance for an attacker and poses a real risk to ATR systems. The proposed FBUA can be decomposed into three main phases: (1) SAR image simulation, (2) substitute model training, and (3) UAP generation. Comprehensive evaluations on the MSTAR and SARSIM datasets demonstrate the efficacy of the FBUA, i.e., it achieves an average fooling ratio of 64.6% on eight cutting-edge DNNs (when the magnitude of the UAP is set to 16/255). Furthermore, we empirically find that the black-box UAP mainly functions by activating spurious features which can effectively couple with clean features to force the ATR models to concentrate on several categories and exhibit a class-wise vulnerability. The proposed FBUA aligns with the non-cooperative nature and reveals the access-free adversarial vulnerability of DNN-based SAR ATR techniques, providing a foundation for future defense against black-box threats. Full article
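How a single UAP is applied and how the fooling ratio is scored can be sketched as follows. This is not the FBUA pipeline itself; the flat pixel lists, the toy classifier, and the clipping scheme are illustrative assumptions, with only the 16/255 magnitude bound taken from the abstract:

```python
def apply_uap(image, uap, eps=16 / 255):
    # Add one fixed universal perturbation (clipped to +/- eps per
    # pixel) to an image, then clip back to the valid [0, 1] range.
    return [min(1.0, max(0.0, p + min(eps, max(-eps, d))))
            for p, d in zip(image, uap)]

def fooling_ratio(model, images, uap):
    # Fraction of images whose predicted label changes once the SAME
    # perturbation is added -- the metric reported in the article.
    flipped = sum(
        model(apply_uap(img, uap)) != model(img) for img in images
    )
    return flipped / len(images)
```

The key property of a universal perturbation is visible here: `uap` is computed once and reused unchanged across every image, rather than being optimized per input.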

19 pages, 17809 KiB  
Article
Attention-Based Multi-Level Feature Fusion for Object Detection in Remote Sensing Images
by Xiaohu Dong, Yao Qin, Yinghui Gao, Ruigang Fu, Songlin Liu and Yuanxin Ye
Remote Sens. 2022, 14(15), 3735; https://doi.org/10.3390/rs14153735 - 04 Aug 2022
Cited by 19 | Viewed by 3497
Abstract
We study the problem of object detection in remote sensing images. As a simple but effective feature extractor, the Feature Pyramid Network (FPN) has been widely used in several generic vision tasks. However, it still faces some challenges when used for remote sensing object detection, as the objects in remote sensing images usually exhibit variable shapes, orientations, and sizes. To this end, we propose a dedicated object detector based on the FPN architecture to achieve accurate object detection in remote sensing images. Specifically, considering the variable shapes and orientations of remote sensing objects, we first replace the original lateral connections of FPN with Deformable Convolution Lateral Connection Modules (DCLCMs), each of which includes a 3×3 deformable convolution to generate feature maps with deformable receptive fields. Additionally, we introduce several Attention-based Multi-Level Feature Fusion Modules (A-MLFFMs) to integrate the multi-level outputs of FPN adaptively, further enabling multi-scale object detection. Extensive experimental results on the DIOR dataset demonstrate the state-of-the-art performance achieved by the proposed method, with the highest mean Average Precision (mAP) of 73.6%. Full article
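The adaptive multi-level fusion idea behind the A-MLFFMs can be illustrated with a scalar-weight sketch. This is not the module itself; the per-level scalar attention scores and the pre-resized flat feature maps are illustrative assumptions:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_fuse(level_feats, level_scores):
    # Adaptive fusion of multi-level FPN outputs: softmax the
    # per-level attention scores, then take the weighted sum of the
    # (already resized) feature maps, element by element.
    ws = softmax(level_scores)
    return [
        sum(w * feat[i] for w, feat in zip(ws, level_feats))
        for i in range(len(level_feats[0]))
    ]
```

Unlike direct element-wise addition, the learned scores let the network lean on whichever pyramid level best matches the object scale at hand.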

23 pages, 1970 KiB  
Article
Dual-Branch-AttentionNet: A Novel Deep-Learning-Based Spatial-Spectral Attention Methodology for Hyperspectral Data Analysis
by Bishwas Praveen and Vineetha Menon
Remote Sens. 2022, 14(15), 3644; https://doi.org/10.3390/rs14153644 - 29 Jul 2022
Cited by 4 | Viewed by 1548
Abstract
Recently, deep learning-based classification approaches have made great progress and now dominate a wide range of applications, thanks to their powerful discriminative feature learning ability. Despite their success, for hyperspectral data analysis these deep-learning-based techniques tend to suffer computationally as the magnitude of the data soars. This is mainly because hyperspectral imagery (HSI) data are multidimensional, and because equal importance is given to the large amounts of spectral and spatial information in the HSI data despite the redundancy of information in the spectral and spatial domains. Consequently, this equal emphasis has been shown in the literature to negatively affect classification efficacy in addition to increasing computational time. As a result, this paper proposes a novel dual-branch spatial-spectral attention based classification methodology that is computationally cheap and capable of selectively accentuating cardinal spatial and spectral features while suppressing less useful ones. Feature extraction with 3D convolutions, alongside a gated mechanism for feature weighting using bi-directional long short-term memory, serves as the spectral attention mechanism in this architecture. In addition, a union of a 3D convolutional neural network (3D-CNN) and a residual-network-oriented spatial window-based attention mechanism is proposed in this work. To validate the efficacy of the proposed technique, the features collected from these spatial and spectral attention pipelines are passed to a feed-forward neural network (FNN) for supervised pixel-wise classification of HSI data. Experimental results show that the suggested spatial-spectral attention based hyperspectral data analysis and image classification methodology outperforms other spatial-only, spectral-only, and spatial-spectral feature extraction based hyperspectral image classification methodologies. Full article
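The gated feature-weighting step at the heart of the spectral attention branch can be sketched without any deep-learning framework. This is not the paper's Bi-LSTM gate; a plain sigmoid gate over per-band logits is an illustrative assumption that shows the accentuate-and-suppress mechanism:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_weighting(features, gate_logits):
    # Gated feature weighting: a sigmoid gate in [0, 1] per spectral
    # band accentuates informative features and suppresses less
    # useful ones via element-wise multiplication.
    return [f * sigmoid(g) for f, g in zip(features, gate_logits)]
```

A strongly positive logit passes its band through almost unchanged, while a strongly negative one drives the band toward zero, which is exactly the selective emphasis the abstract describes.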

22 pages, 7365 KiB  
Article
A Dual-Generator Translation Network Fusing Texture and Structure Features for SAR and Optical Image Matching
by Han Nie, Zhitao Fu, Bo-Hui Tang, Ziqian Li, Sijing Chen and Leiguang Wang
Remote Sens. 2022, 14(12), 2946; https://doi.org/10.3390/rs14122946 - 20 Jun 2022
Cited by 4 | Viewed by 2786
Abstract
The matching problem for heterologous remote sensing images can be simplified to the matching problem for pseudo-homologous remote sensing images via image translation to improve the matching performance. Among such applications, the translation of synthetic aperture radar (SAR) and optical images is the current focus of research. However, the existing methods for SAR-to-optical translation have two main drawbacks. First, single generators usually sacrifice either structure or texture features to balance the model performance and complexity, which often results in textural or structural distortion; second, due to large nonlinear radiation distortions (NRDs) in SAR images, there are still visual differences between the pseudo-optical images generated by current generative adversarial networks (GANs) and real optical images. Therefore, we propose a dual-generator translation network that fuses structure and texture features. On the one hand, the proposed network has dual generators, a texture generator and a structure generator, with good cross-coupling to obtain high-accuracy structure and texture features; on the other hand, frequency-domain and spatial-domain loss functions are introduced to reduce the differences between pseudo-optical images and real optical images. Extensive quantitative and qualitative experiments show that our method achieves state-of-the-art performance on publicly available optical and SAR datasets. Our method improves the peak signal-to-noise ratio (PSNR) by 21.0%, the chromatic feature similarity (FSIMc) by 6.9%, and the structural similarity (SSIM) by 161.7% in terms of the average metric values on all test images compared with the next best results. In addition, we present a before-and-after translation comparison experiment to show that our method improves the average keypoint repeatability by approximately 111.7% and the matching accuracy by approximately 5.25%. Full article
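The headline PSNR metric used to compare pseudo-optical and real optical images is standard and easy to reproduce. A minimal sketch, assuming images flattened to lists of intensities in [0, 1]:

```python
import math

def psnr(img_a, img_b, peak=1.0):
    # Peak signal-to-noise ratio (in dB) between two images of equal
    # size, e.g. a generated pseudo-optical image and the real
    # optical reference; higher is better.
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)
```

A uniform pixel error of 0.1 on a [0, 1] scale gives an MSE of 0.01 and therefore a PSNR of exactly 20 dB, which is a useful sanity check.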

Other


12 pages, 2749 KiB  
Technical Note
Unsupervised SAR Image Change Type Recognition Using Regionally Restricted PCA-Kmean and Lightweight MobileNet
by Wei Liu, Zhikang Lin, Gui Gao, Chaoyang Niu and Wanjie Lu
Remote Sens. 2022, 14(24), 6362; https://doi.org/10.3390/rs14246362 - 15 Dec 2022
Viewed by 1307
Abstract
Change detection using synthetic aperture radar (SAR) multi-temporal images only detects the changed area and generates no further information, such as the change type, which limits its development. This study proposes a new unsupervised application of SAR images that can recognize the change type of an area. First, a regionally restricted principal component analysis k-means (RRPCA-Kmean) clustering algorithm, combining principal component analysis, k-means clustering, and mathematical morphology, was designed to obtain pre-classification results in combination with change type vectors. Second, a lightweight MobileNet was designed, based on the results of the first stage, to reclassify the pre-classification results and obtain the change recognition results for the changed regions. Experimental results on SAR datasets with different resolutions show that the method delivers change-type recognition results with good change detection correctness. Full article
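The clustering step that RRPCA-Kmean builds on is plain k-means, which fits in a short sketch. Scalar features (standing in for principal-component scores of difference-image patches) and Lloyd-style iteration are illustrative assumptions here, not the paper's regionally restricted variant:

```python
def kmeans_1d(values, centers, iters=20):
    # Plain k-means on scalar features: alternately assign each value
    # to its nearest center, then move each center to its group mean.
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        centers = [
            sum(g) / len(g) if g else c for g, c in zip(groups, centers)
        ]
    return centers

def assign(v, centers):
    # Pre-classification label: index of the nearest cluster center.
    return min(range(len(centers)), key=lambda i: abs(v - centers[i]))
```

In the two-cluster case the resulting labels separate "changed" from "unchanged" pixels, which the second-stage network then refines into change types.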

13 pages, 11526 KiB  
Technical Note
Gated Path Aggregation Feature Pyramid Network for Object Detection in Remote Sensing Images
by Yuchao Zheng, Xinxin Zhang, Rui Zhang and Dahan Wang
Remote Sens. 2022, 14(18), 4614; https://doi.org/10.3390/rs14184614 - 15 Sep 2022
Cited by 1 | Viewed by 1770
Abstract
Object detection in remote sensing images is challenging because remote sensing targets have characteristics such as small geometries, unfixed directions and multiple poses. Recent studies have shown that the accuracy of object detection can be improved using feature fusion. However, direct fusion methods regard each layer as being of equal importance and rarely consider the hierarchical structure of multiple convolutional layers, so redundant information is introduced and information that should be rejected is rarely filtered out during the fusion process. To address these issues, we propose a gated path aggregation (GPA) network that integrates path enhancement and information filtering into an end-to-end network. Specifically, we first quantitatively analyze the performance of different gating functions to select the most suitable one. Then, we explore embedding soft switchable atrous convolution (SSAC) in the topmost feature layer. Finally, we validate the proposed model through experiments on the public NWPU VHR-10 dataset. The experimental results show that the proposed GPAFPN structure improves significantly on the FPN structure and achieves state-of-the-art performance compared with mainstream networks. Full article
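The "select the most suitable gating function" step can be made concrete with a small sketch. The candidate set (sigmoid, tanh, ReLU), the scalar gate score, and the additive aggregation are illustrative assumptions rather than the GPA network's actual design:

```python
import math

def gate(x, kind="sigmoid"):
    # Candidate gating functions to compare; each maps an unbounded
    # score to a multiplicative gate value.
    if kind == "sigmoid":
        return 1.0 / (1.0 + math.exp(-x))
    if kind == "tanh":
        return math.tanh(x)
    if kind == "relu":
        return max(0.0, x)
    raise ValueError(kind)

def gated_aggregate(feat, skip, score, kind="sigmoid"):
    # Path aggregation with information filtering: the gate decides
    # how much of the skip path is let through before summation,
    # instead of fusing the two paths with equal importance.
    g = gate(score, kind)
    return [f + g * s for f, s in zip(feat, skip)]
```

Swapping `kind` while holding the rest of the pipeline fixed is one simple way to run the kind of quantitative gating-function comparison the abstract mentions.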
