Feature-Based Methods for Remote Sensing Image Classification

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 April 2021) | Viewed by 35230

Special Issue Editors


Dr. Carlos López-Martínez
Guest Editor
Signal Theory and Communications Department, Universitat Politècnica de Catalunya-BarcelonaTech (UPC), Campus Nord (D3-203), Jordi Girona, 1-3, 08034 Barcelona, Spain
Interests: remote sensing; synthetic aperture radar; polarimetry; interferometry; signal and image processing; quantitative information retrieval

Dr. Ramona-Maria Pelich
Guest Editor
Luxembourg Institute of Science and Technology, 41, rue du Brill, L-4422 Belvaux, Luxembourg
Interests: change detection; image classification; information fusion; parameter estimation; SAR imagery; target detection; disaster monitoring; maritime surveillance

Dr. Minh-Tan Pham
Guest Editor
Campus de Vannes, Université de Bretagne-Sud, 56000 Vannes, France
Interests: computer vision; remote sensing applications; deep learning; image analysis and processing; signal and image processing on graphs; mathematical morphology; object detection

Special Issue Information

Dear Colleagues,

Remote sensing (RS) currently occupies a privileged position in Earth observation (EO)-based studies and applications, as well as in Earth data science, allowing the monitoring, understanding and modelling of our environment and its evolution at a global scale and with an unprecedented spatio-temporal resolution. EO data have been employed to monitor croplands and forested areas, oceans and seas, urban settlements, mountainous areas, and climate-related processes and natural hazards. In all these cases, the extraction of features and information from EO data, either qualitative or quantitative, as well as of their temporal dynamics, is an essential step to better characterize and understand these different environments and processes and their interactions, and to disentangle anthropogenic effects.

For instance, land-cover classes and land-cover changes at the global scale can be regularly updated thanks to automatic classification methods that are applicable to several EO data sources, including multi-spectral, hyperspectral and synthetic aperture radar (SAR) images. In this context, the accurate extraction of features and the estimation of essential variables from EO data, together with their classification and analysis, are research topics of high interest to the scientific community, especially when considering global applications. Indeed, the transfer of locally trained classification techniques and the generalization of classification methodologies to a global scale remain open questions. At the same time, novel and highly relevant questions are emerging in these domains thanks to technological advances. New EO capabilities provided by recent satellite constellations and improved imaging technologies, together with developments in classification based on machine learning and deep learning, could allow precise classification at a large scale and with an unprecedented temporal sampling. In addition, the availability of different EO data sources with high spatio-temporal resolutions may give access to novel features, not accessible in low- or medium-resolution data, and trigger the development of new feature extraction techniques to better understand the information content of the data.

The main objective of this Special Issue is to address these emerging topics in feature-based methods for RS data processing and classification by gathering contributions on novel feature extraction techniques and advanced classification methodologies, as well as on algorithms able to improve and deepen the understanding of EO-based variables derived from large and varied collections of RS imagery.

Dr. Carlos López-Martínez
Dr. Ramona-Maria Pelich
Dr. Minh-Tan Pham
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Target and image classification
  • Land-use and land-cover change classification
  • Multi-temporal analysis
  • Machine learning
  • SAR-based features
  • Optical-based features
  • Thermal-based features
  • Multisource RS data fusion and classification
  • Quantitative information extraction

Published Papers (9 papers)


Research

18 pages, 2700 KiB  
Article
Multi-Resolution Supervision Network with an Adaptive Weighted Loss for Desert Segmentation
by Lexuan Wang, Liguo Weng, Min Xia, Jia Liu and Haifeng Lin
Remote Sens. 2021, 13(11), 2054; https://doi.org/10.3390/rs13112054 - 23 May 2021
Cited by 7 | Viewed by 2488
Abstract
Desert segmentation of remote sensing images is the basis for analyzing desert areas. Desert images are usually characterized by large image size, large scale variation, and irregular spatial distribution of surface objects. Multi-scale fusion is widely used in existing deep learning segmentation models to address these problems. Based on the idea of multi-scale feature extraction, this paper treats the segmentation result at each scale as an independent optimization task and proposes a multi-resolution supervision network (MrsSeg) to further improve the desert segmentation result. Because each branch task has a different optimization difficulty, we also propose an auxiliary adaptive weighted loss function (AWL) to automatically optimize the training process. MrsSeg first uses a lightweight backbone to extract features at different resolutions, then adopts a multi-resolution fusion module to fuse local and global information, and finally applies a multi-level fusion decoder that aggregates and merges the features at different levels into the desert segmentation result. In this method, each branch loss is treated as an independent task, and AWL calculates and adjusts the weight of each branch. By giving priority to the easy tasks, the improved loss function effectively accelerates model convergence and improves the desert segmentation result. The experimental results show that MrsSeg-AWL effectively improves the learning ability of the model and offers faster convergence, lower parameter complexity, and more accurate segmentation results.
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
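
The abstract does not spell out the exact form of AWL; as a loose illustration of an adaptive weighted multi-branch loss that gives priority to the easier (lower-loss) branches, a minimal PyTorch sketch might look like this (the inverse-loss weighting rule and the function name are assumptions, not the paper's definition):

import torch

def adaptive_weighted_loss(branch_losses, eps=1e-8):
    # branch_losses: list of scalar losses, one per resolution branch
    losses = torch.stack(branch_losses)
    # Favor the easier branches: weight each one by the inverse of its
    # current loss value, detached so the weights are not back-propagated.
    inv = 1.0 / (losses.detach() + eps)
    weights = inv / inv.sum()
    return (weights * losses).sum()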

24 pages, 7557 KiB  
Article
A Multi-Branch Feature Fusion Strategy Based on an Attention Mechanism for Remote Sensing Image Scene Classification
by Cuiping Shi, Xin Zhao and Liguo Wang
Remote Sens. 2021, 13(10), 1950; https://doi.org/10.3390/rs13101950 - 17 May 2021
Cited by 33 | Viewed by 3273
Abstract
In recent years, with the rapid development of computer vision, increasing attention has been paid to remote sensing image scene classification. To improve classification performance, many studies have increased the depth of convolutional neural networks (CNNs) and expanded the width of the network to extract more deep features, thereby increasing the complexity of the model. To solve this problem, in this paper we propose a lightweight convolutional neural network based on attention-oriented multi-branch feature fusion (AMB-CNN) for remote sensing image scene classification. Firstly, we propose two convolution combination modules for feature extraction, through which deep image features can be fully extracted by multiple cooperating convolutions. Then, feature weights are calculated, and the extracted deep features are passed to an attention mechanism for further feature extraction. Next, all of the extracted features are fused across multiple branches. Finally, depthwise separable convolution and asymmetric convolution are used to greatly reduce the number of parameters. The experimental results show that, compared with some state-of-the-art methods, the proposed method retains a clear advantage in classification accuracy while using very few parameters.
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
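
For readers unfamiliar with the two parameter-reduction devices named in the abstract, the following PyTorch sketch combines a depthwise separable convolution with an asymmetric (1 x k followed by k x 1) factorization; it illustrates the generic techniques only, not the actual AMB-CNN topology:

import torch.nn as nn

class LightweightBlock(nn.Module):
    # Depthwise separable convolution followed by an asymmetric factorization.
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        # A k x k kernel replaced by a 1 x k then a k x 1 convolution.
        self.asym = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, k // 2)),
            nn.Conv2d(out_ch, out_ch, (k, 1), padding=(k // 2, 0)),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.pointwise(self.depthwise(x)))
        return self.act(self.asym(x))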

22 pages, 9204 KiB  
Article
Sentinel-2 Image Scene Classification: A Comparison between Sen2Cor and a Machine Learning Approach
by Kashyap Raiyani, Teresa Gonçalves, Luís Rato, Pedro Salgueiro and José R. Marques da Silva
Remote Sens. 2021, 13(2), 300; https://doi.org/10.3390/rs13020300 - 16 Jan 2021
Cited by 23 | Viewed by 6202
Abstract
Given the continuous increase in the global population, food producers are driven either to intensify the use of cropland or to expand farmland, making the mapping of land-cover and land-use dynamics vital in remote sensing. In this regard, identifying and classifying scenes in high-resolution satellite imagery is a prime challenge. Several approaches have been proposed, using either static rule-based thresholds (limited in diversity) or neural networks (with data-dependent limitations). This paper adopts an inductive approach to learning from surface reflectances. A manually labeled Sentinel-2 dataset was used to build a Machine Learning (ML) model for scene classification, distinguishing six classes (Water, Shadow, Cirrus, Cloud, Snow, and Other). This model was assessed and compared to the European Space Agency (ESA) Sen2Cor package. The proposed ML model achieves a Micro-F1 value of 0.84, a considerable improvement over the corresponding Sen2Cor performance of 0.59. Focusing on the problem of optical satellite image scene classification, the main research contributions of this paper are: (a) an extended manually labeled Sentinel-2 database adding surface reflectance values to an existing dataset; (b) an ensemble-based and a neural-network-based ML model; (c) an evaluation of model sensitivity, bias, and ability to classify multiple classes over geographically diverse Sentinel-2 imagery; and, finally, (d) the benchmarking of the ML approach against the Sen2Cor package.
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
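
As a rough sketch of the evaluation protocol described above (a supervised classifier trained on per-pixel surface reflectances and scored with micro-F1), one might write the following with scikit-learn; the file names are placeholders and the random forest is a stand-in, not the paper's exact ensemble or neural network:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# X: Sentinel-2 surface reflectances (n_samples x n_bands); y: integer labels
# for {Water, Shadow, Cirrus, Cloud, Snow, Other}. Hypothetical files.
X, y = np.load("reflectances.npy"), np.load("labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1).fit(X_tr, y_tr)
print("Micro-F1:", f1_score(y_te, clf.predict(X_te), average="micro"))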

19 pages, 2444 KiB  
Article
Ensemble Learning Approaches Based on Covariance Pooling of CNN Features for High Resolution Remote Sensing Scene Classification
by Sara Akodad, Lionel Bombrun, Junshi Xia, Yannick Berthoumieu and Christian Germain
Remote Sens. 2020, 12(20), 3292; https://doi.org/10.3390/rs12203292 - 10 Oct 2020
Cited by 19 | Viewed by 3120
Abstract
Remote sensing image scene classification, which consists of labeling remote sensing images with a set of categories based on their content, has received remarkable attention for many applications such as land use mapping. Standard approaches are based on the multi-layer representation of first-order convolutional neural network (CNN) features. However, second-order CNNs have recently been shown to outperform traditional first-order CNNs for many computer vision tasks. Hence, the aim of this paper is to show the use of second-order statistics of CNN features for remote sensing scene classification. This takes the form of covariance matrices computed locally or globally on the output of a CNN. However, these data points do not lie in a Euclidean space but on a Riemannian manifold, so Euclidean tools are not suited to manipulating them; other metrics, such as the log-Euclidean metric, should be considered instead. This metric consists of projecting the set of covariance matrices onto a tangent space defined at a reference point. In this tangent plane, which is a vector space, conventional machine learning algorithms such as Fisher vector encoding or an SVM classifier can be applied. Based on this log-Euclidean framework, we propose a novel transfer learning approach composed of two hybrid architectures based on covariance pooling of CNN features, the first local and the second global. They rely on the extraction of features from models pre-trained on the ImageNet dataset, processed with machine learning algorithms. The first hybrid architecture is an ensemble learning approach with log-Euclidean Fisher vector encoding of region covariance matrices computed locally on the first layers of a CNN. The second is an ensemble learning approach based on covariance pooling of CNN features extracted globally from the deepest layers. These two ensemble learning approaches are then combined based on the strategy of the most diverse ensembles. For validation and comparison purposes, the proposed approach is tested on various challenging remote sensing datasets. Experimental results exhibit a significant gain of approximately 2% in overall accuracy for the proposed approach compared to a similar state-of-the-art method based on covariance pooling of CNN features (on the UC Merced dataset).
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
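
As a worked example of the log-Euclidean step described above, this NumPy/SciPy sketch computes the covariance matrix of a set of CNN activations, takes its matrix logarithm (a tangent-space projection at the identity, whereas the paper allows a general reference point), and vectorizes the result for a conventional classifier such as an SVM:

import numpy as np
from scipy.linalg import logm

def log_euclidean_covariance(features, eps=1e-6):
    # features: (n, d) array of CNN activations at n spatial positions.
    f = features - features.mean(axis=0, keepdims=True)
    cov = f.T @ f / (len(f) - 1) + eps * np.eye(f.shape[1])  # keep it SPD
    log_cov = logm(cov).real
    # Vectorize the symmetric log-covariance (upper triangle) as a feature.
    return log_cov[np.triu_indices_from(log_cov)]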

26 pages, 6262 KiB  
Article
Residual Group Channel and Space Attention Network for Hyperspectral Image Classification
by Peida Wu, Ziguan Cui, Zongliang Gan and Feng Liu
Remote Sens. 2020, 12(12), 2035; https://doi.org/10.3390/rs12122035 - 24 Jun 2020
Cited by 32 | Viewed by 3517
Abstract
Recently, deep learning methods based on three-dimensional (3-D) convolution have been widely used in hyperspectral image (HSI) classification tasks and have shown good classification performance. However, affected by the irregular distribution of the classes in HSI datasets, most previous 3-D convolutional neural network (CNN)-based models require more training samples to obtain better classification accuracies. In addition, as the network deepens, the spatial resolution of the feature maps gradually decreases, so much useful information may be lost during training. Therefore, ensuring efficient network training is key to HSI classification tasks. To address these issues, in this paper we propose a 3-D-CNN-based residual group channel and space attention network (RGCSA) for HSI classification. Firstly, the proposed bottom-up top-down attention structure with residual connections improves network training efficiency by optimizing channel-wise and spatial-wise features throughout the training process. Secondly, the proposed residual group channel-wise attention module reduces the possibility of losing useful information, and the novel spatial-wise attention module extracts context information to strengthen the spatial features. Furthermore, the proposed RGCSA network needs only a few training samples to achieve higher classification accuracies than previous 3-D-CNN-based networks. The experimental results on three commonly used HSI datasets demonstrate the superiority of the proposed network based on the attention mechanism and the effectiveness of the proposed channel-wise and spatial-wise attention modules for HSI classification. The code and configurations are released on GitHub.
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
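
The grouped residual attention modules of the paper are not reproduced here; as a generic illustration of channel-wise attention with a residual connection around 3-D feature maps, a squeeze-and-excitation-style PyTorch sketch might read:

import torch.nn as nn

class ChannelAttention3D(nn.Module):
    # Recalibrates channels of (B, C, D, H, W) feature maps and adds a
    # residual connection; the reduction ratio is an assumed hyperparameter.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1, 1)  # one weight per channel
        return x + x * w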

28 pages, 15704 KiB  
Article
Exploring TanDEM-X Interferometric Products for Crop-Type Mapping
by Mario Busquier, Juan M. Lopez-Sanchez, Alejandro Mestre-Quereda, Elena Navarro, María P. González-Dugo and Luciano Mateos
Remote Sens. 2020, 12(11), 1774; https://doi.org/10.3390/rs12111774 - 1 Jun 2020
Cited by 24 | Viewed by 3193
Abstract
The application of satellite single-pass interferometric data to crop-type mapping is demonstrated for the first time in this work. A set of nine TanDEM-X dual-pol pairs of images acquired during its science phase, from June to August 2015, is exploited for this purpose. An agricultural site located in Sevilla (Spain), composed of fields of 13 different crop species, is employed for validation. Sets of input features formed by polarimetric and interferometric observables are tested for crop classification, including single-pass coherence and repeat-pass coherence formed by consecutive images. The backscattering coefficient at the HH and VV channels and the correlation between channels form the set of polarimetric features employed as the reference set against which the added value of interferometric coherence is evaluated. Including single-pass coherence as a feature improves the overall accuracy (OA) by 2% with respect to the reference case, reaching 92%. More importantly, in single-pol configurations OA increases by 10% for the HH channel and by 8% for the VV channel, reaching 87% and 88%, respectively. Repeat-pass coherence also improves the classification performance, but with final scores slightly worse than those of single-pass coherence; nonetheless, it improves the individual performance of the backscattering coefficient by 6–7%. Furthermore, in products evaluated at field level, the dual-pol repeat-pass coherence features provide the same score as the single-pass coherence features (overall accuracy above 94%). Consequently, the contribution of interferometry, both single-pass and repeat-pass, to crop-type mapping is demonstrated.
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
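
The single-pass and repeat-pass coherence features above are both instances of the standard sample coherence, gamma = |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>), estimated over a local window. A NumPy sketch for two co-registered single-look complex images follows; the boxcar window size is an assumption:

import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=7):
    # Local averages <.> via a boxcar filter; the complex numerator is
    # handled by filtering the real and imaginary parts separately.
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                  * uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)  # coherence magnitude in [0, 1]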

18 pages, 13730 KiB  
Article
Urban Land-Cover Classification Using Side-View Information from Oblique Images
by Changlin Xiao, Rongjun Qin and Xiao Ling
Remote Sens. 2020, 12(3), 390; https://doi.org/10.3390/rs12030390 - 26 Jan 2020
Cited by 1 | Viewed by 2840
Abstract
Land-cover classification on very-high-resolution data (decimetre level) is a well-studied yet challenging problem in remote sensing data processing. Most existing works focus on using images with an orthographic view, or orthophotos with the associated digital surface models (DSMs). However, the use of the nowadays widely available oblique images to support such a task has not been sufficiently investigated. In the effort of identifying different land-cover classes, it is intuitive that side-view information obtained from oblique images can be of great help, yet achieving this technically is challenging due to the complex geometric association between the side and top views. We address these challenges in this paper by proposing a framework with enhanced classification results that leverages orthophotos, digital surface models, and oblique images. The proposed method follows the classic two steps of (1) feature extraction and (2) classification, in which the key contribution is a feature extraction algorithm that performs a simplified geometric association between top-view segments (from the orthophoto) and side-view planes (from the projected oblique images), together with joint statistical feature extraction. Our experiment on five test sites showed that the side-view information steadily improves the classification accuracy with both kinds of training samples (by 1.1% and 5.6% for evenly and non-evenly distributed samples, respectively). Additionally, when testing the classifier on a large, untrained site, adding side-view information yielded a 26.2% accuracy improvement for above-ground objects, which demonstrates the strong generalization ability of the side-view features.
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
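
The geometric association itself is the paper's contribution and is not reproduced here; once oblique pixels have been projected and assigned to top-view segments, the joint statistical feature extraction could be sketched as the per-segment aggregation below (the mean/standard-deviation statistics are an assumption):

import numpy as np

def side_view_features(segment_ids, side_values):
    # segment_ids: (n,) top-view segment label of each projected oblique pixel
    # side_values: (n, d) attributes observed for those pixels
    # Returns {segment id: feature vector of per-attribute mean and std}.
    feats = {}
    for sid in np.unique(segment_ids):
        v = side_values[segment_ids == sid]
        feats[sid] = np.concatenate([v.mean(axis=0), v.std(axis=0)])
    return feats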

20 pages, 4732 KiB  
Article
An End-to-End Local-Global-Fusion Feature Extraction Network for Remote Sensing Image Scene Classification
by Yafei Lv, Xiaohan Zhang, Wei Xiong, Yaqi Cui and Mi Cai
Remote Sens. 2019, 11(24), 3006; https://doi.org/10.3390/rs11243006 - 13 Dec 2019
Cited by 42 | Viewed by 4580
Abstract
Remote sensing image scene classification (RSISC) is an active task in the remote sensing community and has attracted great attention due to its wide applications. Recently, deep convolutional neural network (CNN)-based methods have achieved a remarkable breakthrough in the performance of remote sensing image scene classification. However, the feature representation is often not discriminative enough, mainly because of inter-class similarity and intra-class diversity. In this paper, we propose an efficient end-to-end local-global-fusion feature extraction (LGFFE) network for a more discriminative feature representation. Specifically, global and local features are extracted from the channel and spatial dimensions, respectively, based on a high-level feature map from a deep CNN. For the local features, a novel recurrent neural network (RNN)-based attention module is first proposed to capture the spatial layout and context information across different regions. Gated recurrent units (GRUs) are then exploited to generate an importance weight for each region, taking a sequence of features from image patches as input. A reweighted regional feature representation can be obtained by focusing on the key regions. The final feature representation is then acquired by fusing the local and global features. The whole process of feature extraction and feature fusion can be trained in an end-to-end manner. Finally, extensive experiments were conducted on four public and widely used datasets, and the results show that LGFFE outperforms baseline methods and achieves state-of-the-art results.
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
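
A simplified reading of the GRU-based region attention described above is sketched below in PyTorch: a GRU runs over the sequence of region features, a linear layer scores each region, and the softmax-normalized scores reweight the regions. Layer sizes are assumptions, and the exact LGFFE module may differ:

import torch
import torch.nn as nn

class GRURegionAttention(nn.Module):
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, regions):                   # regions: (B, N, feat_dim)
        h, _ = self.gru(regions)                  # context across regions
        w = torch.softmax(self.score(h), dim=1)   # (B, N, 1) region weights
        return (w * regions).sum(dim=1)           # reweighted local feature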

29 pages, 5362 KiB  
Article
Pixel-Wise PolSAR Image Classification via a Novel Complex-Valued Deep Fully Convolutional Network
by Yice Cao, Yan Wu, Peng Zhang, Wenkai Liang and Ming Li
Remote Sens. 2019, 11(22), 2653; https://doi.org/10.3390/rs11222653 - 13 Nov 2019
Cited by 43 | Viewed by 3481
Abstract
Although complex-valued (CV) neural networks have shown better classification results than their real-valued (RV) counterparts for polarimetric synthetic aperture radar (PolSAR) classification, the extension of pixel-level RV networks to the complex domain has not yet been thoroughly examined. This paper presents a novel complex-valued deep fully convolutional neural network (CV-FCN) designed for PolSAR image classification. Specifically, CV-FCN uses PolSAR CV data, which include the phase information, and adopts a deep FCN architecture that performs pixel-level labeling. The CV-FCN architecture is trained in an end-to-end scheme to extract discriminative polarimetric features, and the entire PolSAR image is then classified by the trained CV-FCN. Technically, to account for the particularities of PolSAR data, a dedicated complex-valued weight initialization scheme is proposed to initialize CV-FCN; it considers the distribution of polarimetric data so that CV-FCN can be trained from scratch in an efficient and fast manner. CV-FCN employs a complex downsampling-then-upsampling scheme to extract dense features. To enrich the discriminative information, multi-level CV features that retain more polarization information are extracted via the complex downsampling scheme. A complex upsampling scheme is then proposed to predict dense CV labeling; it employs complex max-unpooling layers to capture more spatial information for better robustness to speckle noise. The complex max-unpooling layers upsample the real and imaginary parts of the complex feature maps based on the max-location maps retained from the complex downsampling scheme. In addition, to achieve faster convergence and more precise classification results, a novel average cross-entropy loss function is derived for CV-FCN optimization. Experiments on real PolSAR datasets demonstrate that CV-FCN achieves better classification performance than other state-of-the-art methods.
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
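
A complex-valued convolution is commonly realized with two real-valued convolutions, following (a + ib) * (w_r + i w_i) = (a * w_r - b * w_i) + i(a * w_i + b * w_r). The PyTorch sketch below shows this generic construction only; the paper's dedicated initialization and complex max-unpooling are omitted:

import torch.nn as nn

class ComplexConv2d(nn.Module):
    # One pair of real kernels (w_r, w_i) shared across the four real products.
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, real, imag):
        out_r = self.conv_r(real) - self.conv_i(imag)
        out_i = self.conv_i(real) + self.conv_r(imag)
        return out_r, out_i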
