Deep Transfer Learning for Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2019) | Viewed by 71247

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, University of British Columbia, 2366 Main Mall, Vancouver, BC V6T 1Z4, Canada
Interests: deep learning; machine learning; hyperspectral image classification; band selection

Guest Editor
Northwestern Polytechnical University, 127 West Youyi Road, Xi'an 710072, Shaanxi, China
Interests: machine learning; remote sensing; semantic segmentation; scene parsing; small-sample learning

Guest Editor
Department of Electrical and Computer Engineering, University of British Columbia, 2366 Main Mall, Vancouver, BC V6T 1Z4, Canada
Interests: reinforcement learning; deep learning; image classification

Guest Editor
University of Chinese Academy of Sciences, No. 19(A) Yuquan Road, Shijingshan District, Beijing, China
Interests: machine learning; remote sensing; target detection; hyperspectral anomaly detection; hyperspectral classification

Guest Editor
Xi'an Institute of Optics and Precision Mechanics, CAS, NO. 17 Xinxi Road, New Industrial Park, Xi'an Hi-Tech Industrial Development Zone, Xi'an, Shaanxi, China
Interests: machine learning; remote sensing; action recognition; video segmentation; hyperspectral classification

Special Issue Information

Dear Colleagues,

Recently, deep learning (DL) for remote sensing (RS) image processing has become a hot topic. Many deep learning models, including ResNet, AlexNet, and the newly proposed capsule network, have been shown to perform well on RS images when enough prior knowledge is available for training. One persistent problem is the scarcity of label information for newly collected RS data, which makes it much harder for DL models to process RS images. With the development of modern satellite sensors and easy access to new RS data, the problem of processing such large volumes of data becomes even more pressing. A straightforward approach is to resort to existing labeled RS data to help with the unknown new data. To this end, deep transfer learning frameworks that can overcome the semantic gap between different datasets have become a research frontier in RS data processing: the deep information in existing labeled data is exploited to predict the labels of newly collected RS data.

This Special Issue is devoted to exploring the potential of deep transfer learning frameworks in RS image processing. Owing to differences in acquisition conditions and sensors, the spectra observed in a new scene can differ considerably from those of an existing scene, even when they represent the same types of objects. This spectral difference creates a large semantic disparity between RS datasets. How to select, construct, and correlate deep networks by transfer learning across different RS datasets is therefore the major concern of this Special Issue.
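
As a concrete starting point, the sketch below shows the simplest deep transfer learning recipe for RS imagery: reuse an ImageNet-pretrained CNN as a frozen feature extractor and train only a new classification head on a small labeled scene dataset. This is a minimal illustration, not a method from any particular submission; the class count, learning rate, and training loop are assumptions.

```python
# Minimal transfer-learning sketch: freeze a pretrained backbone and
# train a new head on a small labeled remote sensing dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_RS_CLASSES = 21  # hypothetical, e.g. the 21 scene classes of UC Merced

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False            # freeze transferred weights
model.fc = nn.Linear(model.fc.in_features, NUM_RS_CLASSES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised step on a batch of labeled RS images [B, 3, H, W]."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The papers in this issue go beyond this basic recipe, typically replacing the frozen backbone with domain adaptation objectives that align source and target feature distributions.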

Topics of interest include, but are not limited to:

  • Theories for domain adaptation and generalization;
  • Auto-encoder-based transfer learning for remote sensing;
  • CNN-based transfer learning for remote sensing;
  • RNN-based transfer learning for remote sensing;
  • Capsule network-based transfer learning for remote sensing;
  • Domain generalization algorithms for visual problems;
  • Deep representation learning for domain adaptation and generalization.

Dr. Jianzhe Lin
Dr. Zhiyu Jiang
Dr. Sarbjit Sarkaria
Dr. Dandan Ma
Dr. Yang Zhao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep Transfer Learning
  • Domain Adaptation
  • Machine Learning
  • Convolutional Network
  • Remote Sensing Image

Published Papers (7 papers)


Research

24 pages, 6574 KiB  
Article
A Method for Vehicle Detection in High-Resolution Satellite Images that Uses a Region-Based Object Detector and Unsupervised Domain Adaptation
by Yohei Koga, Hiroyuki Miyazaki and Ryosuke Shibasaki
Remote Sens. 2020, 12(3), 575; https://doi.org/10.3390/rs12030575 - 09 Feb 2020
Cited by 51 | Viewed by 9246 | Correction
Abstract
Recently, object detectors based on deep learning have become widely used for vehicle detection and have contributed to drastic improvements in performance measures. However, deep learning requires much training data, and detection performance degrades notably when the target area of vehicle detection (the target domain) differs from the training data (the source domain). To address this problem, we propose an unsupervised domain adaptation (DA) method that does not require labeled training data and can thus maintain detection performance in the target domain at low cost. We applied Correlation Alignment (CORAL) DA and adversarial DA to our region-based vehicle detector and improved the detection accuracy by over 10% in the target domain. We further improved adversarial DA by utilizing a reconstruction loss to facilitate the learning of semantic features. Our proposed method achieved slightly better performance than the accuracy achieved with labeled training data of the target domain. We demonstrated that our improved DA method could achieve almost the same level of accuracy at a lower cost than non-DA methods with a sufficient amount of labeled training data of the target domain.
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
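
The CORAL alignment applied above has a compact closed form: match the covariances of source- and target-domain deep features. A minimal sketch of that loss (feature shapes are assumptions; the authors' full detector pipeline is not reproduced):

```python
# CORAL (Correlation Alignment) loss: penalize the Frobenius distance
# between source and target feature covariance matrices.
import torch

def coral_loss(source, target):
    """source: [n_s, d], target: [n_t, d] batches of deep features."""
    d = source.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)

    diff = covariance(source) - covariance(target)
    return (diff * diff).sum() / (4.0 * d * d)
```

In an adaptation loop, this term would typically be added to the detection loss so the backbone learns features whose second-order statistics are indistinguishable across domains.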

20 pages, 3821 KiB  
Article
Convolutional Neural Network for Remote-Sensing Scene Classification: Transfer Learning Analysis
by Rafael Pires de Lima and Kurt Marfurt
Remote Sens. 2020, 12(1), 86; https://doi.org/10.3390/rs12010086 - 25 Dec 2019
Cited by 179 | Viewed by 11979
Abstract
Remote-sensing image scene classification can provide significant value, ranging from forest fire monitoring to land-use and land-cover classification. From the first aerial photographs of the early 20th century to the satellite imagery of today, the amount of remote-sensing data has increased geometrically, with ever higher resolution. The need to analyze these modern digital data has motivated research to accelerate remote-sensing image classification. Fortunately, great advances have been made by the computer vision community in classifying natural images, or photographs taken with an ordinary camera. Natural image datasets can range up to millions of samples and are therefore amenable to deep-learning techniques. Many fields of science, remote sensing included, have been able to exploit the success of natural image classification by convolutional neural network models using a technique commonly called transfer learning. We provide a systematic review of the application of transfer learning to scene classification using different datasets and different deep-learning models. We evaluate how the specialization of convolutional neural network models affects the transfer learning process by splitting the original models at different points. As expected, we find that the choice of hyperparameters used to train the model has a significant influence on the final performance. Curiously, we find that transfer learning from models trained on larger, more generic natural image datasets outperformed transfer learning from models trained directly on smaller remotely sensed datasets. Nonetheless, the results show that transfer learning provides a powerful tool for remote-sensing scene classification.
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
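
The split-point analysis can be reproduced in outline by truncating a pretrained backbone at different depths and training a small head on the features at the cut. The sketch below is illustrative only; the backbone choice, layer indices, and class count are assumptions rather than the authors' exact protocol:

```python
# Truncate a pretrained CNN at a chosen depth and attach a fresh head,
# to compare transfer quality across split points.
import torch.nn as nn
from torchvision import models

def split_model(split_point, num_classes=45):  # e.g. 45 classes as in NWPU-RESISC45
    backbone = models.vgg16(pretrained=True).features
    truncated = nn.Sequential(*list(backbone.children())[:split_point])
    head = nn.Sequential(
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.LazyLinear(num_classes),  # infers the feature width at the cut
    )
    return nn.Sequential(truncated, head)

shallow = split_model(split_point=10)  # generic early features
deep = split_model(split_point=24)     # more specialized late features
```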

24 pages, 8456 KiB  
Article
Category-Sensitive Domain Adaptation for Land Cover Mapping in Aerial Scenes
by Bo Fang, Rong Kou, Li Pan and Pengfei Chen
Remote Sens. 2019, 11(22), 2631; https://doi.org/10.3390/rs11222631 - 11 Nov 2019
Cited by 29 | Viewed by 4125
Abstract
Since manually labeling aerial images for pixel-level classification is expensive and time-consuming, developing strategies for land cover mapping without reference labels is essential and meaningful. As an efficient solution to this issue, domain adaptation has been widely utilized in numerous semantic labeling applications. However, current approaches generally pursue marginal distribution alignment between the source and target features and ignore category-level alignment. Therefore, directly applying them to land cover mapping leads to unsatisfactory performance in the target domain. In our research, to address this problem, we embed a geometry-consistent generative adversarial network (GcGAN) into a co-training adversarial learning network (CtALN), and then develop a category-sensitive domain adaptation (CsDA) method for land cover mapping using very-high-resolution (VHR) optical aerial images. The GcGAN aims to eliminate the domain discrepancies between labeled and unlabeled images while retaining their intrinsic land cover information by translating the features of the labeled images from the source domain to the target domain. Meanwhile, the CtALN aims to learn a semantic labeling model in the target domain with the translated features and corresponding reference labels. By training this hybrid framework, our method learns to distill knowledge from the source domain and transfer it to the target domain, preserving not only global domain consistency but also category-level consistency between labeled and unlabeled images in the feature space. Experimental results on two airborne benchmark datasets and comparisons with other state-of-the-art methods verify the robustness and superiority of our proposed CsDA.
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
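
The geometry-consistency constraint at the heart of GcGAN can be stated in one line: a source-to-target translator G should commute with a predefined geometric transform T. A minimal sketch with a 90-degree rotation standing in for T (G is left abstract; this illustrates the constraint only, not the full CsDA framework):

```python
# Geometry-consistency loss: translating then rotating should match
# rotating then translating, i.e. T(G(x)) ~= G(T(x)).
import torch
import torch.nn.functional as F

def rotate90(x):
    """T: fixed geometric transform on image batches [B, C, H, W]."""
    return torch.rot90(x, k=1, dims=(2, 3))

def geometry_consistency_loss(G, x):
    """G: any image-to-image translation network; x: source images."""
    return F.l1_loss(rotate90(G(x)), G(rotate90(x)))
```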

19 pages, 522 KiB  
Article
Deep Transfer Learning for Few-Shot SAR Image Classification
by Mohammad Rostami, Soheil Kolouri, Eric Eaton and Kyungnam Kim
Remote Sens. 2019, 11(11), 1374; https://doi.org/10.3390/rs11111374 - 08 Jun 2019
Cited by 176 | Viewed by 10569
Abstract
The reemergence of Deep Neural Networks (DNNs) has led to high-performance supervised learning algorithms for classification and detection problems in the Electro-Optical (EO) domain. This success is possible because generating huge labeled datasets has become feasible using modern crowdsourcing labeling platforms, such as Amazon's Mechanical Turk, that recruit ordinary people to label data. Unlike the EO domain, labeling Synthetic Aperture Radar (SAR) data is much more challenging, and for various reasons, using crowdsourcing platforms is not feasible for labeling SAR data. As a result, training deep networks using supervised learning is more challenging in the SAR domain. In this paper, we present a new framework to train a deep neural network for classifying SAR images that eliminates the need for a huge labeled dataset. Our idea is based on transferring knowledge from a related EO domain problem, where labeled data are easy to obtain. We transfer knowledge from the EO domain by learning a shared, invariant cross-domain embedding space that is also discriminative for classification. To this end, we train two deep encoders that are coupled through their last layer to map data points from the EO and SAR domains to the shared embedding space such that the distance between the distributions of the two domains is minimized in the latent embedding space. We use the Sliced Wasserstein Distance (SWD) to measure and minimize the distance between these two distributions and use a limited number of labeled SAR data points to match the distributions class-conditionally. As a result of this training procedure, a classifier trained from the embedding space to the label space using mostly EO data generalizes well on the SAR domain. We provide a theoretical analysis to demonstrate why our approach is effective and validate our algorithm on the problem of ship classification in the SAR domain by comparing it against several competing learning approaches.
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
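
The SWD compares two distributions by projecting their samples onto random one-dimensional directions, where the Wasserstein distance reduces to a sort. A minimal sketch, assuming equal-sized batches of EO and SAR embeddings (the class-conditional matching described above is omitted):

```python
# Sliced Wasserstein Distance between two batches of embeddings.
import torch

def sliced_wasserstein(eo_emb, sar_emb, num_projections=50):
    """eo_emb, sar_emb: [n, d] embedding batches with equal n."""
    d = eo_emb.size(1)
    theta = torch.randn(d, num_projections)
    theta = theta / theta.norm(dim=0, keepdim=True)  # random unit directions
    proj_eo, _ = torch.sort(eo_emb @ theta, dim=0)   # sorted 1-D projections
    proj_sar, _ = torch.sort(sar_emb @ theta, dim=0)
    return ((proj_eo - proj_sar) ** 2).mean()
```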

13 pages, 14157 KiB  
Article
Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks
by Ben G. Weinstein, Sergio Marconi, Stephanie Bohlman, Alina Zare and Ethan White
Remote Sens. 2019, 11(11), 1309; https://doi.org/10.3390/rs11111309 - 01 Jun 2019
Cited by 197 | Viewed by 22975
Abstract
Remote sensing can transform the speed, scale, and cost of biodiversity and forestry surveys. Data acquisition currently outpaces the ability to identify individual organisms in high-resolution imagery. We outline an approach for identifying tree-crowns in RGB imagery using a semi-supervised deep learning detection network. Individual crown delineation has been a long-standing challenge in remote sensing, and available algorithms produce mixed results. We show that deep learning models can leverage existing Light Detection and Ranging (LIDAR)-based unsupervised delineation to generate trees that are used for training an initial RGB crown detection model. Despite limitations in the original unsupervised detection approach, this noisy training data may contain information from which the neural network can learn initial tree features. We then refine the initial model using a small number of higher-quality hand-annotated RGB images. We validate our proposed approach at an open-canopy site in the National Ecological Observation Network. Our results show that a model using 434,551 self-generated trees with the addition of 2848 hand-annotated trees yields accurate predictions in natural landscapes. Using an intersection-over-union threshold of 0.5, the full model had an average tree-crown recall of 0.69, with a precision of 0.61 for the visually annotated data. The model had an average tree detection rate of 0.82 for the field-collected stems. The addition of a small number of hand-annotated trees improved performance over the initial self-supervised model. This semi-supervised deep learning approach demonstrates that remote sensing can overcome a lack of labeled training data by generating noisy data for initial training using unsupervised methods and retraining the resulting models with high-quality labeled data.
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
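
The recall and precision figures above follow the standard detection criterion: a predicted crown counts as a hit when its intersection-over-union (IoU) with an annotated crown exceeds 0.5. A minimal sketch of that evaluation, assuming axis-aligned boxes in (xmin, ymin, xmax, ymax) format:

```python
# IoU-based matching of predicted tree-crown boxes to annotations.
def iou(a, b):
    """a, b: boxes as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def recall_at_iou(pred_boxes, true_boxes, threshold=0.5):
    """Fraction of annotated crowns matched by some prediction."""
    hits = sum(any(iou(t, p) > threshold for p in pred_boxes)
               for t in true_boxes)
    return hits / len(true_boxes)
```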

18 pages, 21004 KiB  
Article
Effective Airplane Detection in Remote Sensing Images Based on Multilayer Feature Fusion and Improved Nonmaximal Suppression Algorithm
by Mingming Zhu, Yuelei Xu, Shiping Ma, Shuai Li, Hongqiang Ma and Yongsai Han
Remote Sens. 2019, 11(9), 1062; https://doi.org/10.3390/rs11091062 - 05 May 2019
Cited by 29 | Viewed by 4159
Abstract
To address the insufficient representation of weak and small objects and the problem of overlapping detection boxes in airplane detection, we propose an effective airplane detection method for remote sensing images based on multilayer feature fusion and an improved nonmaximal suppression algorithm. First, exploiting the low-level visual features common to natural images and airport remote sensing images, region-based convolutional neural networks are chosen to conduct transfer learning for airplane images using a limited amount of data. Then, L2-norm normalization, feature concatenation, scale scaling, and feature dimension reduction are introduced to achieve effective fusion of low- and high-level features. Finally, a nonmaximal suppression method based on a soft decision function is proposed to solve the overlap problem of detection boxes. Experimental results show that the proposed method effectively improves the representation of weak and small objects and can quickly and accurately detect airplane objects in the airport area.
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
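
A soft decision function for suppression can be sketched generically: rather than discarding boxes that overlap a higher-scoring detection, decay their scores smoothly with the overlap, in the spirit of Soft-NMS. The Gaussian decay and parameter values below are assumptions; the paper's exact decision function may differ:

```python
# Soft non-maximal suppression with a Gaussian score-decay function.
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """boxes: [N, 4] array of (xmin, ymin, xmax, ymax); scores: [N]."""
    scores = scores.astype(float).copy()
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while scores.max() > score_thresh:
        i = int(scores.argmax())
        keep.append(i)
        x1 = np.maximum(boxes[i, 0], boxes[:, 0])   # IoU of box i with all
        y1 = np.maximum(boxes[i, 1], boxes[:, 1])
        x2 = np.minimum(boxes[i, 2], boxes[:, 2])
        y2 = np.minimum(boxes[i, 3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        ious = inter / (areas[i] + areas - inter)
        scores *= np.exp(-(ious ** 2) / sigma)      # decay overlapping boxes
        scores[i] = 0.0                             # never re-select box i
    return keep
```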

19 pages, 25910 KiB  
Article
Aerial Image Road Extraction Based on an Improved Generative Adversarial Network
by Xiangrong Zhang, Xiao Han, Chen Li, Xu Tang, Huiyu Zhou and Licheng Jiao
Remote Sens. 2019, 11(8), 930; https://doi.org/10.3390/rs11080930 - 17 Apr 2019
Cited by 48 | Viewed by 5997
Abstract
Aerial photographs and satellite images are among the resources used for Earth observation. In practice, automated detection of roads in aerial images is of significant value for applications such as car navigation, law enforcement, and fire services. In this paper, we present a novel road extraction method for aerial images based on an improved generative adversarial network, an end-to-end framework that requires only a few samples for training. Experimental results on the Massachusetts Roads Dataset show that the proposed method outperforms several state-of-the-art techniques in terms of detection accuracy, recall, precision, and F1-score.
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
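
For a binary road mask, the metrics reported above reduce to pixel-wise counts. A minimal sketch of the evaluation (array names and shapes are assumptions):

```python
# Pixel-wise precision, recall, and F1 for a predicted binary road mask.
import numpy as np

def road_metrics(pred, truth):
    """pred, truth: boolean arrays of identical shape (True = road)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```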
