Adversarial Attacks and Defenses for Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (1 May 2023)

Special Issue Editors

Dr. Yonghao Xu
Guest Editor
Institute of Advanced Research in Artificial Intelligence (IARAI), 1030 Vienna, Austria
Interests: remote sensing; computer vision; deep learning

Dr. Bo Du
Guest Editor
School of Computer, Centre for Quantum Computation and Intelligent Systems, Wuhan University, Wuhan, China
Interests: big data mining management and analysis; multimedia technology and big data analysis; multimedia signal processing; machine learning and intelligent interaction; computer vision; computer applications; pattern recognition; artificial intelligence; data mining and analysis; audio and video processing; intelligent computing

Dr. Pedram Ghamisi
Guest Editor
1. Machine Learning Group, Helmholtz Institute Freiberg for Resource Technology, Helmholtz-Zentrum Dresden-Rossendorf, 09599 Freiberg, Germany
2. Institute of Advanced Research in Artificial Intelligence (IARAI), 1030 Vienna, Austria
Interests: machine and deep learning; image and signal processing; hyperspectral image analysis; multisensor data fusion

Special Issue Information

Dear Colleagues,

Security and reliability are important factors in geoscience and remote sensing tasks. While artificial intelligence (AI) techniques, especially deep learning algorithms, have significantly improved the interpretation of remote sensing data in the past few years, recent research shows that these techniques are vulnerable to dedicated deception algorithms. Such algorithms, known as "adversarial attacks", generate subtle perturbations that are imperceptible to a human observer yet can mislead state-of-the-art deep learning models into making wrong predictions.
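
To make the threat model concrete, below is a minimal sketch of the classic fast gradient sign method (FGSM), one of the simplest ways such imperceptible perturbations are generated; the classifier interface and the epsilon budget are illustrative assumptions, not taken from any paper in this issue.

```python
# Minimal FGSM sketch (PyTorch). The classifier and eps budget are assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=4 / 255):
    """Perturb batch x within an L-inf ball of radius eps so that the
    model's cross-entropy loss on the true labels y increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient ascent step, then clamp to the valid image range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

A shift of a few intensity levels per pixel is typically invisible to a human observer, yet it is often enough to flip a model's prediction.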

To tackle this challenge and boost the development of secure AI algorithms in the remote sensing field, we invite you to contribute to this Special Issue, which gathers new insights and contributions to the study of Adversarial Attacks and Defenses for Remote Sensing Data. Original research articles and reviews are welcome. Topics may include, but are not limited to, the following:

  • Adversarial examples in hyperspectral/multispectral/RGB/LiDAR/synthetic aperture radar (SAR) data
  • Adversarial attacks for scene classification, object detection, and semantic segmentation of remote sensing data
  • Explainable adversarial examples in remote sensing data
  • Black-box and white-box adversarial attacks
  • Adversarial attacks in the physical world
  • Advanced deep learning architectures with high resistance to adversarial examples
  • Adversarial examples detection
  • Adversarial defenses

Dr. Yonghao Xu
Dr. Bo Du
Dr. Pedram Ghamisi 
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial attack
  • adversarial example
  • adversarial defense
  • deep learning
  • remote sensing

Published Papers (10 papers)


Research

19 pages, 6203 KiB  
Article
Attention-Enhanced One-Shot Attack against Single Object Tracking for Unmanned Aerial Vehicle Remote Sensing Images
by Yan Jiang and Guisheng Yin
Remote Sens. 2023, 15(18), 4514; https://doi.org/10.3390/rs15184514 - 14 Sep 2023
Abstract
Recent studies have shown that deep-learning-based models for processing Unmanned Aerial Vehicle (UAV) remote sensing images are vulnerable to artificially designed adversarial examples, which can lead deep models to make incorrect predictions. Previous adversarial attack methods have mainly focused on the classification and detection of UAV remote sensing images; research on adversarial attacks against object tracking in UAV video is still lacking. To address this challenge, we propose an attention-enhanced one-shot adversarial attack method for UAV remote sensing object tracking, which perturbs only the template frame and generates adversarial examples offline. First, we employ an attention feature loss to make the original frame's features dissimilar to those of the adversarial frame, and an attention confidence loss to either suppress or enhance different confidence scores. Additionally, a background distraction loss forces the tracker to concentrate on background information near the target, so that the template mismatches subsequent frames. Finally, we add a total variation loss to generate adversarial examples that appear natural to humans. We validate the effectiveness of our method against popular trackers such as SiamRPN, DaSiamRPN, and SiamRPN++ on the UAV123 remote sensing dataset. Experimental results verify the superior attack performance of the proposed method.
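
As a rough illustration of this style of composite objective, the sketch below combines a feature dissimilarity term with a total variation (TV) penalty on a template perturbation. The backbone interface, loss weighting, and L-inf budget are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a template-frame attack step: push adversarial features
# away from clean features while keeping the perturbation smooth via TV.
import torch
import torch.nn.functional as F

def tv_loss(delta):
    """Total variation of the perturbation; small TV keeps it natural-looking."""
    dh = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean()
    dw = (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()
    return dh + dw

def template_attack_step(backbone, template, delta, lr=1e-2, lam=0.1):
    """One offline optimization step on a template-frame perturbation:
    minimize cosine similarity between clean and adversarial features
    (i.e., make them dissimilar) plus a TV smoothness penalty."""
    delta = delta.clone().detach().requires_grad_(True)
    f_clean = backbone(template).detach().flatten(1)
    f_adv = backbone(template + delta).flatten(1)
    loss = F.cosine_similarity(f_clean, f_adv).mean() + lam * tv_loss(delta)
    loss.backward()
    # Gradient descent step, projected onto an assumed L-inf budget.
    return (delta - lr * delta.grad).clamp(-8 / 255, 8 / 255).detach()
```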

21 pages, 3477 KiB  
Article
Adversarial Examples in Visual Object Tracking in Satellite Videos: Cross-Frame Momentum Accumulation for Adversarial Examples Generation
by Yu Zhang, Lingfei Wang, Chenghao Zhang and Jin Li
Remote Sens. 2023, 15(13), 3240; https://doi.org/10.3390/rs15133240 - 23 Jun 2023
Abstract
Visual object tracking in remote sensing imagery has important applications in safety-critical areas such as national defense, homeland security, and intelligent transportation in smart cities. However, previous research demonstrates that adversarial examples pose a significant threat to remote sensing imagery. This article first explores the impact of adversarial examples on visual object tracking in remote sensing imagery. We design a classification- and regression-based loss function for the popular Siamese RPN series of visual object tracking models and use the gradient-based PGD attack method to generate adversarial examples. Additionally, we exploit the temporal consistency of video frames and design an adversarial attack method based on cross-frame momentum accumulation. We evaluate our method on the remote sensing visual object tracking datasets SatSOT and VISO and the traditional datasets OTB100 and UAV123. The experimental results show that our approach effectively reduces the performance of the tracker.
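
A minimal sketch of the cross-frame momentum idea follows: gradient momentum is carried from frame to frame so that temporally consistent attack directions accumulate. The tracker loss callable, step sizes, and the single update per frame are assumptions made to keep the sketch short.

```python
# Momentum-accumulated PGD across consecutive video frames (illustrative).
import torch

def momentum_pgd_over_frames(loss_fn, frames, eps=8 / 255, alpha=2 / 255, mu=0.9):
    """loss_fn(x) -> scalar tracker loss to maximize. Carries an
    exponentially accumulated gradient across frames so temporally
    consistent directions dominate the perturbation."""
    g = torch.zeros_like(frames[0])
    adv_frames = []
    for x in frames:
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(x_adv).backward()
        grad = x_adv.grad
        # Normalize the gradient, then blend with cross-frame momentum.
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
        x_adv = (x + alpha * g.sign()).clamp(0, 1)
        # Project back into the eps ball around the clean frame.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        adv_frames.append(x_adv.detach())
    return adv_frames
```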

25 pages, 8106 KiB  
Article
Defending against Poisoning Attacks in Aerial Image Semantic Segmentation with Robust Invariant Feature Enhancement
by Zhen Wang, Buhong Wang, Chuanlei Zhang, Yaohui Liu and Jianxin Guo
Remote Sens. 2023, 15(12), 3157; https://doi.org/10.3390/rs15123157 - 17 Jun 2023
Cited by 1
Abstract
The outstanding performance of deep neural networks (DNNs) in multiple computer vision tasks in recent years has promoted their widespread use in aerial image semantic segmentation. Nonetheless, prior research has demonstrated the high susceptibility of DNNs to adversarial attacks, which poses significant security risks when applying DNNs to safety-critical earth observation missions. As an essential means of attacking DNNs, data poisoning attacks destroy model performance by contaminating the training data, allowing attackers to control prediction results with carefully crafted poisoning samples. Toward building a more robust DNN-based aerial image semantic segmentation model, in this study we propose a robust invariant feature enhancement network (RIFENet) that can resist data poisoning attacks while delivering superior semantic segmentation performance. RIFENet improves resistance to poisoning attacks by extracting and enhancing robust invariant features. Specifically, it uses a texture feature enhancement module (T-FEM), a structural feature enhancement module (S-FEM), a global feature enhancement module (G-FEM), and a multi-resolution feature fusion module (MR-FFM) to enhance the representation of different robust features during feature extraction and thereby suppress the interference of poisoning samples. Experiments on several benchmark aerial image datasets demonstrate that the proposed method is more robust and generalizes better than other state-of-the-art methods.
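
For readers unfamiliar with this style of defense, here is a structural sketch of a multi-branch feature enhancement block with fusion; all layer choices are assumptions, and the paper's actual T-FEM/S-FEM/G-FEM/MR-FFM internals are not reproduced here.

```python
# Structural sketch of parallel enhancement branches fused into one feature map.
import torch
import torch.nn as nn

class MultiBranchEnhancer(nn.Module):
    def __init__(self, c_in=64):
        super().__init__()
        self.texture = nn.Conv2d(c_in, c_in, 3, padding=1)    # fine detail
        self.structure = nn.Conv2d(c_in, c_in, 5, padding=2)  # larger shapes
        self.global_ctx = nn.Sequential(                      # image-level context
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(c_in, c_in, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(3 * c_in, c_in, 1)

    def forward(self, f):
        g = self.global_ctx(f) * f  # reweight features by global context
        branches = [self.texture(f), self.structure(f), g]
        return self.fuse(torch.cat(branches, dim=1))
```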

26 pages, 21622 KiB  
Article
Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against Backdoor Attacks
by Zhen Wang, Buhong Wang, Chuanlei Zhang, Yaohui Liu and Jianxin Guo
Remote Sens. 2023, 15(10), 2580; https://doi.org/10.3390/rs15102580 - 15 May 2023
Cited by 1
Abstract
Profiting from the powerful feature extraction and representation capabilities of deep learning (DL), aerial image semantic segmentation based on deep neural networks (DNNs) has achieved remarkable success in recent years. Nevertheless, the security and robustness of DNNs deserve attention when dealing with safety-critical earth observation tasks. As a typical attack pattern in adversarial machine learning (AML), backdoor attacks embed hidden triggers in DNNs by poisoning the training data. The attacked DNNs behave normally on benign samples, but when the hidden trigger is activated, their predictions are changed to a specified target label. In this article, we systematically assess the threat of backdoor attacks to aerial image semantic segmentation tasks. To defend against backdoor attacks while maintaining good semantic segmentation accuracy, we construct a novel robust feature-guided generative adversarial network (RFGAN). Motivated by the sensitivity of the human visual system to global and edge information in images, RFGAN comprises a robust global feature extractor (RobGF) and a robust edge feature extractor (RobEF) that force DNNs to learn global and edge features. RFGAN then uses the robust global and edge features to guide its generator, which produces benign samples, while its discriminator produces the semantic segmentation results. Our method is the first attempt to address the backdoor threat to aerial image semantic segmentation through robust model architecture design. Extensive experiments on real-world aerial image benchmark datasets demonstrate that RFGAN effectively defends against backdoor attacks and achieves better semantic segmentation results than existing state-of-the-art methods.
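
A purify-then-segment pipeline of the general shape described above might look like the following sketch; every module here is a placeholder assumption standing in for RFGAN's RobGF/RobEF-guided generator and discriminator, not the paper's architecture.

```python
# Structural sketch: a generator reconstructs a benign image, and a
# segmentation head plays the discriminator's role (all internals assumed).
import torch
import torch.nn as nn

class PurifySegGAN(nn.Module):
    def __init__(self, ch=32, n_classes=6):
        super().__init__()
        self.purifier = nn.Sequential(   # generator: image -> benign image
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid())
        self.segmenter = nn.Sequential(  # "discriminator" -> class logits
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, n_classes, 1))

    def forward(self, x):
        x_benign = self.purifier(x)      # suppress trigger patterns first
        return self.segmenter(x_benign), x_benign
```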

23 pages, 10592 KiB  
Article
Defense against Adversarial Patch Attacks for Aerial Image Semantic Segmentation by Robust Feature Extraction
by Zhen Wang, Buhong Wang, Chuanlei Zhang and Yaohui Liu
Remote Sens. 2023, 15(6), 1690; https://doi.org/10.3390/rs15061690 - 21 Mar 2023
Cited by 2
Abstract
Deep learning (DL) models have recently been widely used in UAV aerial image semantic segmentation tasks and have achieved excellent performance. However, DL models are vulnerable to adversarial examples, which bring significant security risks to safety-critical systems. Existing research mainly addresses digital attacks on aerial image semantic segmentation, yet adversarial patches, which can be realized physically, are more threatening than digital attacks. In this article, we systematically evaluate the threat of adversarial patches to the aerial image semantic segmentation task for the first time. To defend against adversarial patch attacks and obtain accurate semantic segmentation results, we construct a novel robust feature extraction network (RFENet). Based on the characteristics of aerial images and adversarial patches, RFENet introduces a limited receptive field mechanism (LRFM), a spatial semantic enhancement module (SSEM), a boundary feature perception module (BFPM), and a global correlation encoder module (GCEM) to counter adversarial patch attacks at the level of DL model architecture design. We find that the semantic, shape, and global features contained in aerial images can significantly enhance the robustness of DL models against patch attacks. Extensive experiments on three aerial image benchmark datasets demonstrate that the proposed RFENet resists adversarial patch attacks much more strongly than existing state-of-the-art methods.

21 pages, 3265 KiB  
Article
An Adaptive Adversarial Patch-Generating Algorithm for Defending against the Intelligent Low, Slow, and Small Target
by Jarhinbek Rasol, Yuelei Xu, Zhaoxiang Zhang, Fan Zhang, Weijia Feng, Liheng Dong, Tian Hui and Chengyang Tao
Remote Sens. 2023, 15(5), 1439; https://doi.org/10.3390/rs15051439 - 03 Mar 2023
Cited by 5
Abstract
The “low, slow, and small” target (LSST) poses a significant threat to military ground units and is hard to defend against because it is invisible to many detection devices. With onboard deep-learning-based object detection, the intelligent LSST (ILSST) can find and detect ground units autonomously in a denied environment. This paper proposes an adversarial patch-based defense method that blinds the ILSST by attacking its onboard object detection network. First, an adversarial influence score is established to indicate the influence of adversarial noise on the objects. Based on this score, we use a least squares algorithm and bisection search to find the patch's optimal coordinates and size. Using these, an adaptive patch-generating network is constructed that automatically generates patches on ground units and hides them from deep-learning-based object detection networks. To evaluate the efficiency of our algorithm, a new LSST-view dataset was collected, and extensive attack experiments were carried out on it. The results demonstrate that our algorithm effectively attacks object detection networks, outperforms state-of-the-art adversarial patch-generating algorithms at hiding ground units, and transfers well across object detection networks.
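
The bisection step can be illustrated abstractly: given a (roughly) monotone relationship between patch size and attack success, binary search finds the smallest effective size. The `apply_patch` and `detect_conf` callables and the confidence threshold below are hypothetical stand-ins, not the paper's interface.

```python
# Illustrative bisection over patch size: smallest square patch that drops
# the detector's confidence on the target below a threshold.
def smallest_effective_patch(apply_patch, detect_conf, lo=4, hi=128, thr=0.3):
    """apply_patch(size) -> patched image; detect_conf(img) -> max target
    confidence. Assumes effectiveness is monotone in patch size."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if detect_conf(apply_patch(mid)) < thr:
            hi = mid  # a patch of this size already hides the target
        else:
            lo = mid  # too small; grow the patch
    return hi
```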

24 pages, 17916 KiB  
Article
Global Feature Attention Network: Addressing the Threat of Adversarial Attack for Aerial Image Semantic Segmentation
by Zhen Wang, Buhong Wang, Yaohui Liu and Jianxin Guo
Remote Sens. 2023, 15(5), 1325; https://doi.org/10.3390/rs15051325 - 27 Feb 2023
Cited by 7
Abstract
Aerial image semantic segmentation based on convolutional neural networks (CNNs) has made significant progress in recent years, yet its vulnerability to adversarial example attacks cannot be neglected. Existing studies typically focus on adversarial attacks against image classification, ignoring the negative effect of adversarial examples on semantic segmentation. In this article, we systematically assess and verify the influence of adversarial attacks on aerial image semantic segmentation. Building on the robustness of global features, we then construct a novel global feature attention network (GFANet) for aerial image semantic segmentation to counter the threat of adversarial attacks. GFANet uses a global context encoder (GCE) to capture the context dependencies of global features, introduces a global coordinate attention mechanism (GCAM) to enhance the global feature representation and suppress adversarial noise, and applies feature consistency alignment (FCA) for feature calibration. In addition, we construct a universal adversarial training strategy to improve the robustness of the semantic segmentation model against adversarial example attacks. Extensive experiments on three aerial image datasets demonstrate that GFANet is more robust against adversarial attacks than existing state-of-the-art semantic segmentation models.
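
As one plausible reading of an adversarial training strategy for segmentation (the paper's exact recipe may differ), the sketch below alternates PGD example generation with a training step; the optimizer, step count, and budget are assumptions.

```python
# Generic PGD adversarial-training step for a segmentation model (illustrative).
import torch
import torch.nn.functional as F

def adv_train_step(model, opt, x, y, eps=4 / 255, alpha=1 / 255, steps=3):
    # Inner maximization: craft a PGD example against the current model.
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta = delta.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)  # per-pixel CE for segmentation
        loss.backward()
        delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
    # Outer minimization: train the model on the adversarial example.
    opt.zero_grad()
    F.cross_entropy(model((x + delta.detach()).clamp(0, 1)), y).backward()
    opt.step()
```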

27 pages, 34311 KiB  
Article
ULAN: A Universal Local Adversarial Network for SAR Target Recognition Based on Layer-Wise Relevance Propagation
by Meng Du, Daping Bi, Mingyang Du, Xinsong Xu and Zilong Wu
Remote Sens. 2023, 15(1), 21; https://doi.org/10.3390/rs15010021 - 21 Dec 2022
Cited by 1
Abstract
Recent studies have proven that synthetic aperture radar (SAR) automatic target recognition (ATR) models based on deep neural networks (DNNs) are vulnerable to adversarial examples. However, existing attacks easily fail when adversarial perturbations cannot be fully fed to the victim model, a situation we call perturbation offset. Moreover, since background clutter occupies most of the area in SAR images and has low relevance to recognition results, fooling models with global perturbations is quite inefficient. This paper proposes a semi-white-box attack network called the Universal Local Adversarial Network (ULAN) to generate universal adversarial perturbations (UAPs) for the target regions of SAR images. In the proposed method, we calculate the model's attention heatmaps through layer-wise relevance propagation (LRP) and use them to locate the target regions of SAR images that are highly relevant to the recognition results. In particular, we use a U-Net-based generator to learn the mapping from noise to UAPs and craft adversarial examples by adding the generated local perturbations to the target regions. Experiments indicate that the proposed method effectively prevents perturbation offset and achieves attack performance comparable to that of conventional global UAPs while perturbing only a quarter or less of the SAR image area.
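
The masked local-perturbation idea can be sketched as follows: a relevance heatmap gates where the universal perturbation is applied, leaving low-relevance clutter untouched. The heatmap source, quantile threshold, and budget are illustrative assumptions.

```python
# Apply a universal perturbation only on high-relevance regions (illustrative).
import torch

def apply_local_uap(x, uap, heatmap, q=0.75, eps=8 / 255):
    """x: (N,C,H,W) images; uap: broadcastable perturbation; heatmap: (N,H,W)
    relevance map. Keeps the perturbation only on the top-(1-q) region so
    low-relevance background clutter is left untouched."""
    thr = torch.quantile(heatmap.flatten(1), q, dim=1).view(-1, 1, 1, 1)
    mask = (heatmap.unsqueeze(1) >= thr).float()  # 1 inside the target region
    return (x + mask * uap.clamp(-eps, eps)).clamp(0, 1)
```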

15 pages, 5006 KiB  
Article
Targeted Universal Adversarial Examples for Remote Sensing
by Tao Bai, Hao Wang and Bihan Wen
Remote Sens. 2022, 14(22), 5833; https://doi.org/10.3390/rs14225833 - 17 Nov 2022
Cited by 13
Abstract
Researchers are increasingly examining the vulnerabilities of deep learning models for remote sensing, and various attack methods have been proposed, including universal adversarial examples. Existing universal adversarial examples, however, are only designed to fool deep learning models rather than to pursue specific goals, i.e., targeted attacks. To this end, we propose two variants of universal adversarial examples: targeted universal adversarial examples and source-targeted universal adversarial examples. Extensive experiments on three popular datasets showed the strong attack capability of the two targeted variants. We hope such strong attacks can inspire and motivate research on defenses against adversarial examples in remote sensing.
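
Below is a minimal targeted universal perturbation loop, offered as a hedged sketch of the general technique rather than the authors' implementation: one shared perturbation is optimized so that many inputs are classified as a chosen target class. The data loader, model, and budget are assumptions.

```python
# Train one shared perturbation that pushes all inputs toward a target class.
import torch
import torch.nn.functional as F

def train_targeted_uap(model, loader, target_class, eps=8 / 255, lr=1e-2, epochs=1):
    delta = None
    for _ in range(epochs):
        for x, _ in loader:
            if delta is None:
                delta = torch.zeros_like(x[:1])  # one perturbation, broadcast over batch
            delta = delta.clone().detach().requires_grad_(True)
            y_t = torch.full((x.size(0),), target_class, dtype=torch.long)
            # Minimize CE toward the target class for every image in the batch.
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y_t)
            loss.backward()
            delta = (delta - lr * delta.grad.sign()).clamp(-eps, eps)
    return delta.detach()
```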

22 pages, 15201 KiB  
Article
A Cascade Defense Method for Multidomain Adversarial Attacks under Remote Sensing Detection
by Wei Xue, Zhiming Chen, Weiwei Tian, Yunhua Wu and Bing Hua
Remote Sens. 2022, 14(15), 3559; https://doi.org/10.3390/rs14153559 - 25 Jul 2022
Cited by 3
Abstract
Deep neural networks have been widely used in detection tasks based on optical remote sensing images. However, recent studies have shown that deep neural networks are vulnerable to adversarial examples, which are threatening in both the digital and physical domains; in particular, physically realizable adversarial examples make attacks on aerial remote sensing detection practical. To defend against such attacks, we propose a cascaded adversarial defense framework that locates the adversarial patch by its high-frequency and saliency information in the gradient domain and removes it directly. The original semantic and texture information of the image is then restored by an image inpainting method, and combining this with the random erasing algorithm further improves the robustness of detection. Our method is the first attempt to defend against adversarial examples in remote sensing detection. The experimental results show that our method is very effective against real-world adversarial attacks. In particular, when using the YOLOv3 and YOLOv4 algorithms for robust detection of single-class targets, the AP60 of YOLOv3 and YOLOv4 drops by only 2.11% and 2.17%, respectively, under adversarial attack.
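
The locate-remove-inpaint cascade can be sketched with standard OpenCV operations; the Laplacian-energy localization, threshold quantile, and fixed kernel sizes below are simplifying assumptions, not the paper's exact gradient-domain method.

```python
# Flag the most gradient-dense region as a suspected patch, mask it, inpaint it.
import cv2
import numpy as np

def remove_suspect_patch(img_bgr, thr_quantile=0.995):
    """img_bgr: uint8 BGR image. Returns the image with the highest
    high-frequency-energy region masked out and inpainted."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # High-frequency energy via Laplacian magnitude, smoothed into blobs.
    energy = cv2.GaussianBlur(np.abs(cv2.Laplacian(gray, cv2.CV_32F)), (31, 31), 0)
    mask = (energy >= np.quantile(energy, thr_quantile)).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((15, 15), np.uint8))
    # Restore semantics and texture under the mask with standard inpainting.
    return cv2.inpaint(img_bgr, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```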