

Learning to Understand Remote Sensing Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2018) | Viewed by 320371

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Qi Wang
Collection Editor
School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, 127 West Youyi Road, Beilin District, P.O. Box 64, Xi'an 710072, China
Interests: remote sensing; image analysis; computer vision; pattern recognition; machine learning

Special Issue Information

Dear Colleagues,

With recent advances in remote sensing technologies for Earth observation, many different remote sensors are collecting data with distinctive properties. The data obtained are so large and complex that analyzing them manually is impractical or even impossible. For example, remote sensors routinely deliver multi-source/multi-temporal/multi-scale data; exploring them by hand to extract useful information would involve an overwhelming workload and yield unsatisfactory performance. Therefore, understanding remote sensing images effectively, in connection with physics, has become a primary concern of the remote sensing research community in recent years. For this purpose, machine learning is considered a promising technique because it enables a system to learn from data and improve itself. With this distinctive characteristic, algorithms become more adaptive, automatic, and intelligent.

In recent decades, this area has attracted considerable research interest, and significant progress has been made, particularly in the optical, hyperspectral, and microwave remote sensing communities. For instance, several tutorials at various conferences have addressed machine learning topics directly or indirectly, and numerous papers are published each year in the top journals of the remote sensing community. In particular, with the rise of deep learning and big data, research on data learning and mining paradigms has reached new heights. The success of machine learning techniques lies in their practical effectiveness: improving on existing methods and achieving state-of-the-art performance.

Nevertheless, problems remain to be solved. For example, how can deep-learning-based methods cope with the limited training samples available? How can the original machine learning prototypes be adapted to remote sensing applications, and in particular to the underlying physics? How should learning speed be balanced against effectiveness? Many other challenges remain in the remote sensing field, and they have fostered new efforts and developments to better understand remote sensing images via machine learning techniques.

This Collection on "Learning to Understand Remote Sensing Images" focuses on this topic. We invite original submissions reporting recent advances in machine learning approaches for analyzing and understanding remote sensing images, and we aim to foster increased interest in this field.

This Collection will emphasize the use of state-of-the-art machine learning techniques and statistical computing methods, such as deep learning, graphical models, sparse coding and kernel machines. 

Potential topics include, but are not limited to:

  • Optical, hyperspectral, microwave and other types of remote sensing data;

  • Feature learning for remote sensing images; 

  • Learning strategies for multi-source/multi-temporal/multi-scale image fusion; 

  • Novel machine learning and statistical computing methods for remote sensing; 

  • Learning metrics on benchmark databases;

  • Applications of learning approaches, such as classification, segmentation, unmixing, change detection, and semantic labelling.

Authors are requested to check and follow the specific Instructions to Authors, https://www.mdpi.com/journal/remotesensing/instructions.

We look forward to receiving your submissions.

Dr. Qi Wang
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (40 papers)


Research


24 pages, 24260 KiB  
Article
Nonlocal Tensor Sparse Representation and Low-Rank Regularization for Hyperspectral Image Compressive Sensing Reconstruction
by Jize Xue, Yongqiang Zhao, Wenzhi Liao and Jonathan Cheung-Wai Chan
Remote Sens. 2019, 11(2), 193; https://doi.org/10.3390/rs11020193 - 19 Jan 2019
Cited by 51 | Viewed by 5689
Abstract
Hyperspectral image compressive sensing reconstruction (HSI-CSR) is an important issue in remote sensing, and has recently been investigated increasingly by sparsity-prior-based approaches. However, most of the available HSI-CSR methods consider the sparsity prior in the spatial and spectral vector domains by vectorizing hyperspectral cubes along a certain dimension. Moreover, in most previous works, little attention has been paid to exploiting the underlying nonlocal structure in the spatial domain of the HSI. In this paper, we propose a nonlocal tensor sparse and low-rank regularization (NTSRLR) approach, which can encode the essential structured sparsity of an HSI and exploit its advantages for the HSI-CSR task. Specifically, we study how to reasonably utilize the ℓ1-based sparsity of the core tensor and the tensor nuclear norm function as tensor sparse and low-rank regularization, respectively, to describe the nonlocal spatial-spectral correlation hidden in an HSI. To solve the minimization problem of the proposed algorithm, we design a fast implementation strategy based on the alternating direction method of multipliers (ADMM). Experimental results on various HSI datasets verify that the proposed HSI-CSR algorithm can significantly outperform existing state-of-the-art CSR techniques for HSI recovery. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
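
To make the regularization pair concrete, the sketch below implements the two proximal operators such an ADMM solver alternates between (element-wise soft thresholding for the ℓ1 term, singular value thresholding for the nuclear norm) on a toy matrix. The names and setup are illustrative, not the authors' code.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (element-wise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: prox of tau * ||X||_* (low-rank prior)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

# One ADMM-style splitting step on a toy matrix: a low-rank part
# updated by SVT and a sparse residual updated by shrinkage.
rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 64))
Z = svt(Y, tau=1.0)             # low-rank surrogate
S = soft_threshold(Y - Z, 0.1)  # sparse residual
```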

15 pages, 2195 KiB  
Article
Online Hashing for Scalable Remote Sensing Image Retrieval
by Peng Li, Xiaoyu Zhang, Xiaobin Zhu and Peng Ren
Remote Sens. 2018, 10(5), 709; https://doi.org/10.3390/rs10050709 - 04 May 2018
Cited by 25 | Viewed by 4740
Abstract
Recently, hashing-based large-scale remote sensing (RS) image retrieval has attracted much attention. Many new hashing algorithms have been developed and successfully applied to fast RS image retrieval tasks. However, there is an important problem rarely addressed in the research literature on RS image hashing. In many real-world applications, RS images are produced in a streaming manner, which means the data distribution keeps changing over time. Most existing RS image hashing methods are batch-based models whose hash functions are learned once and for all and then kept fixed. Therefore, the pre-trained hash functions might not fit the ever-growing new RS images. Moreover, batch-based models have to load all the training images into memory for model learning, which consumes substantial computing and memory resources. To address these deficiencies, we propose a new online hashing method that learns and adapts its hashing functions with respect to newly incoming RS images via a novel online partial random learning scheme. Our hash model is updated in a sequential mode such that the representative power of the learned binary codes for RS images is improved accordingly. Moreover, benefiting from the online learning strategy, our proposed hashing approach is well suited to scalable real-world remote sensing image retrieval. Extensive experiments on two large-scale RS image databases under the online setting demonstrate the efficacy and effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
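
The sequential-update idea can be sketched with a plain gradient step on the quantization error of a linear projection; this is a generic stand-in for illustration, not the paper's partial random learning scheme.

```python
import numpy as np

d, bits, lr = 512, 32, 1e-3
rng = np.random.default_rng(0)
W = rng.standard_normal((d, bits)) * 0.01   # hash projection matrix

def update(W, X, lr=lr):
    """One sequential step on a streamed batch X of shape (n, d)."""
    H = X @ W                        # real-valued projections
    B = np.sign(H)                   # current binary codes
    grad = X.T @ (H - B) / len(X)    # gradient of 0.5 * ||XW - B||^2
    return W - lr * grad

for _ in range(100):                  # simulated RS image stream
    X = rng.standard_normal((64, d))  # batch of image features
    W = update(W, X)

codes = np.sign(rng.standard_normal((5, d)) @ W)  # codes for new queries
```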

18 pages, 87543 KiB  
Article
A Novel Affine and Contrast Invariant Descriptor for Infrared and Visible Image Registration
by Xiangzeng Liu, Yunfeng Ai, Juli Zhang and Zhuping Wang
Remote Sens. 2018, 10(4), 658; https://doi.org/10.3390/rs10040658 - 23 Apr 2018
Cited by 32 | Viewed by 5812
Abstract
Infrared and visible image registration is a very challenging task due to the large geometric changes and the significant contrast differences caused by inconsistent capture conditions. To address this problem, this paper proposes a novel affine and contrast invariant descriptor called maximally stable phase congruency (MSPC), which organically integrates affine-invariant region extraction with the structural features of images. First, to achieve contrast invariance and ensure the significance of features, we detect feature points using moment ranking analysis and extract structural features by merging phase congruency images in multiple orientations. Then, coarse neighborhoods centered on the feature points are obtained based on Log-Gabor filter responses over scales and orientations. Subsequently, the affine-invariant regions of feature points are determined using maximally stable extremal regions. Finally, structural descriptors are constructed from those regions, and registration can be implemented according to the correspondence of the descriptors. The proposed method has been tested on various infrared and visible pairs acquired by different platforms. Experimental results demonstrate that our method outperforms several state-of-the-art methods in terms of robustness and precision with different image data, and they also show its effectiveness in the application of trajectory tracking. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
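
The affine-invariant support regions in this pipeline come from maximally stable extremal regions (MSER), for which OpenCV ships a detector; the region-extraction step alone can be sketched in a few lines (the file name is a placeholder).

```python
import cv2

# Load one band of an infrared/visible pair (path is hypothetical).
gray = cv2.imread("infrared_band.png", cv2.IMREAD_GRAYSCALE)

mser = cv2.MSER_create()                    # maximally stable extremal regions
regions, bboxes = mser.detectRegions(gray)  # pixel lists and bounding boxes
print(f"{len(regions)} stable support regions detected")
```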

24 pages, 3900 KiB  
Article
Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery
by Xi Gong, Zhong Xie, Yuanyuan Liu, Xuguo Shi and Zhuo Zheng
Remote Sens. 2018, 10(3), 410; https://doi.org/10.3390/rs10030410 - 06 Mar 2018
Cited by 37 | Viewed by 5357
Abstract
Remote sensing (RS) scene classification is important for RS imagery semantic interpretation. Although tremendous strides have been made in RS scene classification, one of the remaining open challenges is recognizing RS scenes under quality degradations (e.g., various scales and noise levels). This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) method that effectively enhances and exploits high-level features for RS scene classification across different scales and noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced by saliency-guided DSF extraction, which conducts a patch-based visual saliency (PBVS) algorithm using "visual attention" mechanisms to guide pre-trained CNNs in producing discriminative high-level features. Then, an anti-noise network is proposed to learn and enhance the robust and anti-noise structure information of RS scenes by directly propagating the label information to the fully-connected layers. A joint loss, integrating an anti-noise constraint and a softmax classification loss, is used to train the anti-noise network. The proposed network architecture can be easily trained with a limited amount of training data. Experiments conducted on three RS scene datasets of different scales show that the DSFATN method achieves excellent performance and great robustness under different scales and noise conditions. It obtains classification accuracies of 98.25%, 98.46%, and 98.80%, respectively, on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, advancing the state of the art substantially. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

18 pages, 9907 KiB  
Article
Learning a Dilated Residual Network for SAR Image Despeckling
by Qiang Zhang, Qiangqiang Yuan, Jie Li, Zhen Yang and Xiaoshuang Ma
Remote Sens. 2018, 10(2), 196; https://doi.org/10.3390/rs10020196 - 29 Jan 2018
Cited by 160 | Viewed by 8422
Abstract
In this paper, to break the limit of the traditional linear models for synthetic aperture radar (SAR) image despeckling, we propose a novel deep learning approach by learning a non-linear end-to-end mapping between the noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is based on dilated convolutions, which can both enlarge the receptive field and maintain the filter size and layer depth with a lightweight structure. In addition, skip connections and a residual learning strategy are added to the despeckling model to maintain the image details and reduce the vanishing gradient problem. Compared with the traditional despeckling methods, the proposed method shows a superior performance over the state-of-the-art methods in both quantitative and visual assessments, especially for strong speckle noise. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
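
A minimal network in this spirit, dilated convolutions to widen the receptive field at a fixed filter size plus a global skip connection so the model predicts the speckle residual, might look like the PyTorch sketch below; layer counts and dilation rates are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TinyDRN(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        dilations = [1, 2, 3, 2, 1]          # widen, then narrow, the context
        layers, in_ch = [], 1
        for d in dilations:
            layers += [nn.Conv2d(in_ch, ch, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            in_ch = ch
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual learning: the body estimates the speckle component.
        return noisy - self.body(noisy)

x = torch.rand(2, 1, 64, 64)      # toy noisy SAR patches
print(TinyDRN()(x).shape)         # torch.Size([2, 1, 64, 64])
```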

18 pages, 4726 KiB  
Article
Comparative Analysis of Responses of Land Surface Temperature to Long-Term Land Use/Cover Changes between a Coastal and Inland City: A Case of Freetown and Bo Town in Sierra Leone
by Musa Tarawally, Wenbo Xu, Weiming Hou and Terence Darlington Mushore
Remote Sens. 2018, 10(1), 112; https://doi.org/10.3390/rs10010112 - 15 Jan 2018
Cited by 43 | Viewed by 8466
Abstract
Urban growth and its associated expansion of built-up areas are expected to continue through to the twenty-second century, and at a faster pace in developing countries. This has the potential to increase thermal discomfort and heat-related distress. There is thus a need to monitor growth patterns, especially in resource-constrained regions such as Africa, where few studies have so far been conducted. In view of this, this study compares urban growth and temperature response patterns in Freetown and Bo town in Sierra Leone. Multispectral Landsat images obtained in 1998, 2000, 2007, and 2015 are used to quantify growth and land surface temperature responses. The contribution index (CI) is used to explain how changes per land use and land cover (LULC) class contributed to average city surface temperatures. The population size of Freetown was about eight times that of Bo town. Landsat data mapped urban growth patterns with high accuracy (overall accuracy > 80%) for both cities. Significant changes in LULC were noted in Freetown, characterized by a 114 km2 decrease in agriculture area, a 23 km2 increase in dense vegetation, and a 77 km2 increase in built-up area. Between 1998 and 2015, built-up area increased by 16 km2, while dense vegetation area decreased by 14 km2 in Bo town. Average surface temperature increased from 23.7 to 25.5 °C in Freetown and from 24.9 to 28.2 °C in Bo town during the same period. Despite its larger population size, greater built-up extent, and faster expansion rate, Freetown was 2 °C cooler than Bo town in all periods. The lower temperatures are attributed to proximity to the sea and the very large proportion of vegetation surrounding the city. Even so, the built-up area had an elevated temperature compared to its surroundings. The findings are important for formulating heat mitigation strategies for both inland and coastal cities in developing countries. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
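
For readers unfamiliar with the contribution index, the sketch below computes one common form, the class's temperature anomaly relative to the city mean weighted by its area share, on synthetic rasters; the paper's exact formulation may differ.

```python
import numpy as np

# Synthetic stand-ins for a land surface temperature raster (deg C)
# and a co-registered land use/cover map with 4 classes.
lst = np.random.default_rng(0).uniform(22, 35, (100, 100))
lulc = np.random.default_rng(1).integers(0, 4, (100, 100))

city_mean = lst.mean()
for c in range(4):
    mask = lulc == c
    ci = (lst[mask].mean() - city_mean) * mask.mean()  # anomaly x area share
    print(f"class {c}: CI = {ci:+.3f} degC")
```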

6695 KiB  
Article
Automatic Counting of Large Mammals from Very High Resolution Panchromatic Satellite Imagery
by Yifei Xue, Tiejun Wang and Andrew K. Skidmore
Remote Sens. 2017, 9(9), 878; https://doi.org/10.3390/rs9090878 - 23 Aug 2017
Cited by 48 | Viewed by 9938
Abstract
Estimating animal populations by direct counting is an essential component of wildlife conservation and management. However, conventional approaches (i.e., ground survey and aerial survey) have intrinsic constraints. Advances in image data capture and processing provide new opportunities for using applied remote sensing to count animals. Previous studies have demonstrated the feasibility of using very high resolution multispectral satellite images for animal detection, but to date, the practicality of detecting animals from space using panchromatic imagery has not been proven. This study demonstrates that it is possible to detect and count large mammals (e.g., wildebeests and zebras) from a single, very high resolution GeoEye-1 panchromatic image in open savanna. A novel semi-supervised object-based method that combines a wavelet algorithm and a fuzzy neural network was developed. To discern large mammals from their surroundings and discriminate between animals and non-targets, we used the wavelet technique to highlight potential objects. To make full use of geometric attributes, we carefully trained the classifier, using the adaptive-network-based fuzzy inference system. Our proposed method (with an accuracy index of 0.79) significantly outperformed the traditional threshold-based method (with an accuracy index of 0.58) detecting large mammals in open savanna. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
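
The highlight-then-count idea can be sketched as follows: wavelet detail coefficients make small high-contrast targets stand out, and connected-component labelling counts the candidates. The ANFIS classification stage is omitted here and the input is synthetic.

```python
import numpy as np
import pywt
from scipy import ndimage

img = np.random.default_rng(0).random((256, 256))  # stand-in panchromatic tile

# One level of the 2-D discrete wavelet transform; the detail bands
# respond strongly to small, locally contrasting objects.
cA, (cH, cV, cD) = pywt.dwt2(img, "db2")
detail = np.sqrt(cH**2 + cV**2 + cD**2)            # local contrast energy

mask = detail > detail.mean() + 3 * detail.std()   # candidate objects
labels, n = ndimage.label(mask)                    # connected components
print(f"{n} candidate animals before classification")
```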

9026 KiB  
Article
Topic Modelling for Object-Based Unsupervised Classification of VHR Panchromatic Satellite Images Based on Multiscale Image Segmentation
by Li Shen, Linmei Wu, Yanshuai Dai, Wenfan Qiao and Ying Wang
Remote Sens. 2017, 9(8), 840; https://doi.org/10.3390/rs9080840 - 14 Aug 2017
Cited by 9 | Viewed by 6149
Abstract
Image segmentation is a key prerequisite for object-based classification. However, it is often difficult, or even impossible, to determine a unique optimal segmentation scale due to the fact that various geo-objects, and even an identical geo-object, present at multiple scales in very high resolution (VHR) satellite images. To address this problem, this paper presents a novel unsupervised object-based classification for VHR panchromatic satellite images using multiple segmentations via the latent Dirichlet allocation (LDA) model. Firstly, multiple segmentation maps of the original satellite image are produced by means of a common multiscale segmentation technique. Then, the LDA model is utilized to learn the grayscale histogram distribution for each geo-object and the mixture distribution of geo-objects within each segment. Thirdly, the histogram distribution of each segment is compared with that of each geo-object using the Kullback-Leibler (KL) divergence measure, which is weighted with a constraint specified by the mixture distribution of geo-objects. Each segment is allocated a geo-object category label with the minimum KL divergence. Finally, the final classification map is achieved by integrating the multiple classification results at different scales. Extensive experimental evaluations are designed to compare the performance of our method with those of some state-of-the-art methods for three different types of images. The experimental results over three different types of VHR panchromatic satellite images demonstrate the proposed method is able to achieve scale-adaptive classification results, and improve the ability to differentiate the geo-objects with spectral overlap, such as water and grass, and water and shadow, in terms of both spatial consistency and semantic consistency. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
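
The labelling step reduces to a minimum-KL assignment between a segment's histogram and each geo-object's learned histogram; a toy version, without the mixture-distribution weighting described above, is:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two histograms (normalized inside)."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy placeholders for LDA-learned grayscale distributions.
geo_objects = np.random.default_rng(0).random((3, 64))  # 3 classes x 64 bins
segment = np.random.default_rng(1).random(64)           # one segment histogram

label = int(np.argmin([kl(segment, g) for g in geo_objects]))
print("assigned geo-object:", label)
```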

2182 KiB  
Article
Learning-Based Sub-Pixel Change Detection Using Coarse Resolution Satellite Imagery
by Yong Xu, Lin Lin and Deyu Meng
Remote Sens. 2017, 9(7), 709; https://doi.org/10.3390/rs9070709 - 10 Jul 2017
Cited by 11 | Viewed by 6578
Abstract
Moderate Resolution Imaging Spectroradiometer (MODIS) data are effective and efficient for monitoring urban dynamics such as urban cover change and thermal anomalies, but the spatial resolution provided by MODIS data is 500 m (for most of its shorter spectral bands), which results in difficulty in detecting subtle spatial variations within a coarse pixel—especially for a fast-growing city. Given that the historical land use/cover products and satellite data at finer resolution are valuable to reflect the urban dynamics with more spatial details, finer spatial resolution images, as well as land cover products at previous times, are exploited in this study to improve the change detection capability of coarse resolution satellite data. The proposed approach involves two main steps. First, pairs of coarse and finer resolution satellite data at previous times are learned and then applied to generate synthetic satellite data with finer spatial resolution from coarse resolution satellite data. Second, a land cover map was produced at a finer spatial resolution and adjusted with the obtained synthetic satellite data and prior land cover maps. The approach was tested for generating finer resolution synthetic Landsat images using MODIS data from the Guangzhou study area. The finer resolution Landsat-like data were then applied to detect land cover changes with more spatial details. Test results show that the change detection accuracy using the proposed approach with the synthetic Landsat data is much better than the results using the original MODIS data or conventional spatial and temporal fusion-based approaches. The proposed approach is beneficial for detecting subtle urban land cover changes with more spatial details when multitemporal coarse satellite data are available. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
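
The first step, learning a coarse-to-fine mapping from historical image pairs and applying it to new coarse data, can be mimicked with any regression model; below is a deliberately simple ridge-regression stand-in on synthetic patch pairs.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
coarse = rng.random((500, 25))    # 5x5 MODIS-like patches, flattened
fine = coarse @ rng.random((25, 400)) + 0.01 * rng.random((500, 400))  # 20x20

# Learn the mapping from historical coarse/fine pairs, then apply it
# to new coarse observations to synthesize finer-resolution patches.
model = Ridge(alpha=1.0).fit(coarse, fine)
synthetic_fine = model.predict(rng.random((10, 25)))
print(synthetic_fine.shape)       # (10, 400): Landsat-like patches
```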

68400 KiB  
Article
Road Segmentation of Remotely-Sensed Images Using Deep Convolutional Neural Networks with Landscape Metrics and Conditional Random Fields
by Teerapong Panboonyuen, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern and Peerapon Vateekul
Remote Sens. 2017, 9(7), 680; https://doi.org/10.3390/rs9070680 - 01 Jul 2017
Cited by 104 | Viewed by 13099
Abstract
Object segmentation of remotely-sensed aerial (or very high resolution, VHR) images and satellite (or high-resolution, HR) images has been applied to many application domains, especially road extraction, in which the segmented objects serve as a mandatory layer in geospatial databases. Several attempts at applying deep convolutional neural networks (DCNNs) to extract roads from remote sensing images have been made; however, the accuracy is still limited. In this paper, we present an enhanced DCNN framework specifically tailored for road extraction from remote sensing images by applying landscape metrics (LMs) and conditional random fields (CRFs). To improve the DCNN, a modern activation function, the exponential linear unit (ELU), is employed in our network, resulting in more, and more accurate, extracted roads. To further reduce falsely classified road objects, a solution based on the adoption of LMs is proposed. Finally, to sharpen the extracted roads, a CRF method is added to our framework. The experiments were conducted on Massachusetts road aerial imagery as well as Thailand Earth Observation System (THEOS) satellite imagery data sets. The results show that our proposed framework outperformed SegNet, a state-of-the-art object segmentation technique, on both kinds of remote sensing imagery in most cases, in terms of precision, recall, and F1. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

2179 KiB  
Article
Nonlinear Classification of Multispectral Imagery Using Representation-Based Classifiers
by Yan Xu, Qian Du, Wei Li, Chen Chen and Nicolas H. Younan
Remote Sens. 2017, 9(7), 662; https://doi.org/10.3390/rs9070662 - 28 Jun 2017
Cited by 8 | Viewed by 4828
Abstract
This paper investigates representation-based classification for multispectral imagery. Due to the small spectral dimension, classification performance may be limited, and, in general, it is difficult to discriminate different classes in multispectral imagery. A nonlinear band generation method with explicit functions is proposed to provide additional spectral information for multispectral image classification. Specifically, we propose a simple band ratio function that can yield better performance than the nonlinear kernel method with an implicit mapping function. Two representation-based classifiers, the sparse representation classifier (SRC) and the nearest regularized subspace (NRS) method, are evaluated on the nonlinearly generated datasets. Experimental results demonstrate that this dimensionality-expansion approach can outperform the traditional kernel method in terms of high classification accuracy and low computational cost when classifying multispectral imagery. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
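
The dimensionality-expansion idea is easy to sketch: generate explicit band-ratio features for all band pairs and stack them onto the original bands before classification. The plain ratio below is one of several possible explicit functions.

```python
import numpy as np
from itertools import permutations

# Toy 4-band multispectral cube; the small offset avoids division by zero.
cube = np.random.default_rng(0).random((100, 100, 4)) + 0.1

# Explicit band ratios b_i / b_j for all ordered band pairs.
ratios = [cube[..., i] / cube[..., j] for i, j in permutations(range(4), 2)]
expanded = np.dstack([cube] + ratios)
print(expanded.shape)   # (100, 100, 16): 4 original + 12 ratio bands
```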

1515 KiB  
Article
Saliency Analysis via Hyperparameter Sparse Representation and Energy Distribution Optimization for Remote Sensing Images
by Libao Zhang, Xinran Lv and Xu Liang
Remote Sens. 2017, 9(6), 636; https://doi.org/10.3390/rs9060636 - 21 Jun 2017
Cited by 6 | Viewed by 5002
Abstract
In an effort to detect the region-of-interest (ROI) of remote sensing images with complex data distributions, sparse representation based on dictionary learning has been utilized, and has proved able to process high dimensional data adaptively and efficiently. In this paper, a visual attention model uniting hyperparameter sparse representation with energy distribution optimization is proposed for analyzing saliency and detecting ROIs in remote sensing images. A dictionary learning algorithm based on biological plausibility is adopted to generate the sparse feature space. This method only focuses on finite features, instead of various considerations of feature complexity and massive parameter tuning in other dictionary learning algorithms. In another portion of the model, aimed at obtaining the saliency map, the contribution of each feature is evaluated in a sparse feature space and the coding length of each feature is accumulated. Finally, we calculate the segmentation threshold using the saliency map and obtain the binary mask to separate the ROI from the original images. Experimental results show that the proposed model achieves better performance in saliency analysis and ROI detection for remote sensing images. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

20385 KiB  
Article
One-Dimensional Convolutional Neural Network Land-Cover Classification of Multi-Seasonal Hyperspectral Imagery in the San Francisco Bay Area, California
by Daniel Guidici and Matthew L. Clark
Remote Sens. 2017, 9(6), 629; https://doi.org/10.3390/rs9060629 - 20 Jun 2017
Cited by 82 | Viewed by 11373
Abstract
In this study, a 1-D Convolutional Neural Network (CNN) architecture was developed, trained and utilized to classify single (summer) and three seasons (spring, summer, fall) of hyperspectral imagery over the San Francisco Bay Area, California for the year 2015. For comparison, the Random Forests (RF) and Support Vector Machine (SVM) classifiers were trained and tested with the same data. In order to support space-based hyperspectral applications, all analyses were performed with simulated Hyperspectral Infrared Imager (HyspIRI) imagery. Three-season data improved classifier overall accuracy by 2.0% (SVM), 1.9% (CNN) to 3.5% (RF) over single-season data. The three-season CNN provided an overall classification accuracy of 89.9%, which was comparable to overall accuracy of 89.5% for SVM. Both three-season CNN and SVM outperformed RF by over 7% overall accuracy. Analysis and visualization of the inner products for the CNN provided insight to distinctive features within the spectral-temporal domain. A method for CNN kernel tuning was presented to assess the importance of learned features. We concluded that CNN is a promising candidate for hyperspectral remote sensing applications because of the high classification accuracy and interpretability of its inner products. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
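
A minimal 1-D CNN of the kind evaluated, convolving along the spectral axis of a single pixel, fits in a few lines of PyTorch; the band count, widths, and depth below are placeholders.

```python
import torch
import torch.nn as nn

n_bands, n_classes = 224, 12   # illustrative band and class counts
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=11, padding=5), nn.ReLU(),   # spectral conv
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_classes),
)

spectra = torch.rand(8, 1, n_bands)    # batch of 8 pixel spectra
print(net(spectra).shape)              # torch.Size([8, 12])
```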

1980 KiB  
Article
Convolutional Neural Networks Based Hyperspectral Image Classification Method with Adaptive Kernels
by Chen Ding, Ying Li, Yong Xia, Wei Wei, Lei Zhang and Yanning Zhang
Remote Sens. 2017, 9(6), 618; https://doi.org/10.3390/rs9060618 - 16 Jun 2017
Cited by 47 | Viewed by 7333
Abstract
Hyperspectral image (HSI) classification aims at assigning each pixel a pre-defined class label, which underpins many vision-related applications, such as remote sensing, mineral exploration, and ground object identification. Many classification methods have thus been proposed for better hyperspectral imagery interpretation. Witnessing the success of convolutional neural networks (CNNs) in traditional image classification tasks, plenty of efforts have been made to leverage CNNs for improved HSI classification. One advanced CNN architecture uses kernels generated by a clustering method; for example, a K-means network uses K-means to generate its kernels. However, such kernels are often obtained heuristically (e.g., the number of kernels must be assigned manually), and how to data-adaptively determine the number of convolutional kernels (i.e., filters), and thus generate kernels that better represent the data, has seldom been studied in existing CNN-based HSI classification methods. In this study, we propose a new CNN-based HSI classification method in which the convolutional kernels are automatically learned from the data through clustering, without knowing the cluster number. With these data-adaptive kernels, the proposed CNN method achieves better classification results. Experimental results on the datasets demonstrate the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
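
One way to obtain convolutional kernels by clustering without fixing the cluster count in advance, in the spirit of though not identical to the paper's method, is to run MeanShift on image patches and use the cluster centres as filters:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
img = rng.random((64, 64))                           # toy single-band image

# Collect 5x5 patches and remove their DC component.
patches = np.stack([img[i:i+5, j:j+5].ravel()
                    for i in range(0, 59, 3) for j in range(0, 59, 3)])
patches -= patches.mean(axis=1, keepdims=True)

# MeanShift chooses the number of clusters from the data itself.
centres = MeanShift(bandwidth=estimate_bandwidth(patches)).fit(patches).cluster_centers_
kernels = centres.reshape(-1, 5, 5)                  # data-adaptive filters
print(f"learned {len(kernels)} kernels")
```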

6024 KiB  
Article
Road Detection by Using a Generalized Hough Transform
by Weifeng Liu, Zhenqing Zhang, Shuying Li and Dapeng Tao
Remote Sens. 2017, 9(6), 590; https://doi.org/10.3390/rs9060590 - 10 Jun 2017
Cited by 39 | Viewed by 6984
Abstract
Road detection plays key roles for remote sensing image analytics. Hough transform (HT) is one very typical method for road detection, especially for straight line road detection. Although many variants of Hough transform have been reported, it is still a great challenge to develop a low computational complexity and time-saving Hough transform algorithm. In this paper, we propose a generalized Hough transform (i.e., Radon transform) implementation for road detection in remote sensing images. Specifically, we present a dictionary learning method to approximate the Radon transform. The proposed approximation method treats a Radon transform as a linear transform, which then facilitates parallel implementation of the Radon transform for multiple images. To evaluate the proposed algorithm, we conduct extensive experiments on the popular RSSCN7 database for straight road detection. The experimental results demonstrate that our method is superior to the traditional algorithms in terms of accuracy and computing complexity. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
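
The key observation is that the Radon transform is linear, so for a fixed image size it can be materialized once as a matrix and then applied to many images in a single, easily parallelized product. The tiny sketch below builds that matrix by transforming basis images with scikit-image.

```python
import numpy as np
from skimage.transform import radon

n = 16
theta = np.arange(0.0, 180.0, 15.0)        # projection angles in degrees

# Apply the transform to each basis image once; the stacked results
# form the Radon matrix R for this image size and angle set.
basis = np.eye(n * n)
cols = [radon(b.reshape(n, n), theta=theta, circle=False).ravel()
        for b in basis]
R = np.stack(cols, axis=1)

imgs = np.random.default_rng(0).random((8, n * n))   # 8 flattened images
sinograms = imgs @ R.T                     # all transforms in one product
print(sinograms.shape)
```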

1069 KiB  
Article
Geometry-Based Global Alignment for GSMS Remote Sensing Images
by Dan Zeng, Rui Fang, Shiming Ge, Shuying Li and Zhijiang Zhang
Remote Sens. 2017, 9(6), 587; https://doi.org/10.3390/rs9060587 - 10 Jun 2017
Cited by 4 | Viewed by 4756
Abstract
Alignment of latitude and longitude for all pixels is important for geo-stationary meteorological satellite (GSMS) images. To align landmarks and non-landmarks in GSMS images, we propose a geometry-based global alignment method. First, the Global Self-consistent, Hierarchical, High-resolution Geography (GSHHG) database and the GSMS images are expressed as feature maps by geometric coding. According to the geometric and gradient similarity of the feature maps, initial feature matching is obtained. Then, a local geometric refinement algorithm based on neighborhood spatial consistency is utilized to remove outliers. Since the Earth is not a standard sphere, polynomial fitting models are used to describe the global relationship between latitude, longitude, and the coordinates of all pixels in the GSMS images. Finally, with the registered landmarks and polynomial fitting models, the latitude and longitude of each pixel in the GSMS images can be calculated. Experimental results show that the proposed method globally aligns the GSMS images with high accuracy and recall and significantly lower computational complexity. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
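
The global model amounts to fitting polynomials that map pixel coordinates to latitude and longitude from the registered landmarks and evaluating them at every pixel. A second-order least-squares sketch follows; the paper's polynomial order may differ, and latitude stands in for both coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random(200), rng.random(200)    # registered landmark pixel coords
lat = 30 + 5*x + 2*y + 0.3*x*y + rng.normal(0, 1e-3, 200)  # toy ground truth

# Fit a 2nd-order bivariate polynomial by least squares.
A = np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, lat, rcond=None)

# Evaluate the model on a small pixel grid (do the same for longitude).
gx, gy = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
G = np.column_stack([np.ones(gx.size), gx.ravel(), gy.ravel(),
                     (gx*gy).ravel(), (gx**2).ravel(), (gy**2).ravel()])
print((G @ coef).reshape(4, 4))            # latitude for every grid pixel
```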

6795 KiB  
Article
Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images
by Nina Merkle, Wenjie Luo, Stefan Auer, Rupert Müller and Raquel Urtasun
Remote Sens. 2017, 9(6), 586; https://doi.org/10.3390/rs9060586 - 10 Jun 2017
Cited by 98 | Viewed by 10523
Abstract
Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks, such as monitoring via image time series or scene analysis after sudden events. These tasks require geo-referenced and precisely co-registered multi-sensor data. Images captured by the high resolution synthetic aperture radar (SAR) satellite TerraSAR-X exhibit an absolute geo-location accuracy within a few decimeters. These images therefore represent a reliable source for improving the geo-location accuracy of optical images, which is on the order of tens of meters. In this paper, a deep learning-based approach for improving the geo-localization accuracy of optical satellite images through SAR reference data is investigated. Image registration between SAR and optical images requires few, but accurate and reliable, matching points. These are derived from a Siamese neural network. The network is trained using TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe, in order to learn the two-dimensional spatial shifts between optical and SAR image patches. Results confirm that accurate and reliable matching points can be generated with higher matching accuracy and precision than state-of-the-art approaches. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
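
The matching-point generator is a Siamese network: two weight-sharing branches embed a SAR patch and an optical patch, and the embedding distance scores candidate alignments. A skeletal PyTorch version with placeholder architecture details:

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One embedding branch; weight sharing = reuse the same instance."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128),
        )
    def forward(self, x):
        return self.net(x)

branch = Branch()                  # shared weights: one branch, two inputs
sar = torch.rand(4, 1, 64, 64)     # SAR patches
opt = torch.rand(4, 1, 64, 64)     # optical patches
score = (branch(sar) - branch(opt)).pow(2).sum(dim=1)  # lower = better match
print(score)
```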

5189 KiB  
Article
Multiobjective Optimized Endmember Extraction for Hyperspectral Image
by Rong Liu, Bo Du and Liangpei Zhang
Remote Sens. 2017, 9(6), 558; https://doi.org/10.3390/rs9060558 - 03 Jun 2017
Cited by 15 | Viewed by 5016
Abstract
Endmember extraction (EE) is one of the most important issues in hyperspectral mixture analysis. It is also a challenging task due to the intrinsic complexity of remote sensing images and the lack of priori knowledge. In recent years, a number of EE methods have been developed, where several different optimization objectives have been proposed from different perspectives. In all of these methods, only one objective function has to be optimized, which represents a specific characteristic of endmembers. However, one single-objective function may not be able to express all the characteristics of endmembers from various aspects, which would not be powerful enough to provide satisfactory unmixing results because of the complexity of remote sensing images. In this paper, a multiobjective discrete particle swarm optimization algorithm (MODPSO) is utilized to tackle the problem of EE, where two objective functions, namely, volume maximization (VM) and root-mean-square error (RMSE) minimization are simultaneously optimized. Experimental results on two real hyperspectral images show the superiority of the proposed MODPSO with respect to the single objective D-PSO method, and MODPSO still needs further improvement on the optimization of the VM with respect to other approaches. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

8345 KiB  
Article
Optimized Kernel Minimum Noise Fraction Transformation for Hyperspectral Image Classification
by Lianru Gao, Bin Zhao, Xiuping Jia, Wenzhi Liao and Bing Zhang
Remote Sens. 2017, 9(6), 548; https://doi.org/10.3390/rs9060548 - 01 Jun 2017
Cited by 52 | Viewed by 6900
Abstract
This paper presents an optimized kernel minimum noise fraction transformation (OKMNF) for feature extraction from hyperspectral imagery. The proposed approach is based on the kernel minimum noise fraction (KMNF) transformation, a nonlinear dimensionality reduction method. KMNF can map the original data into a higher-dimensional feature space and provide a small number of quality features for classification and other post-processing. Noise estimation is an important component of KMNF. It is often estimated based on a strong relationship between adjacent pixels. However, hyperspectral images have limited spatial resolution and usually contain a large number of mixed pixels, which makes the spatial information less reliable for noise estimation. This is the main reason that KMNF generally shows unstable performance in feature extraction for classification. To overcome this problem, this paper exploits more accurate noise estimation to improve KMNF. We propose two new, more accurate noise estimation methods, as well as a framework for improved noise estimation in which both spectral and spatial de-correlation are exploited. Experimental results, obtained using a variety of hyperspectral images, indicate that the proposed OKMNF is superior to several other related dimensionality reduction methods in most cases. Compared to the conventional KMNF, the proposed OKMNF achieves significant improvements in overall classification accuracy. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

10872 KiB  
Article
Hourglass-ShapeNetwork Based Semantic Segmentation for High Resolution Aerial Imagery
by Yu Liu, Duc Minh Nguyen, Nikos Deligiannis, Wenrui Ding and Adrian Munteanu
Remote Sens. 2017, 9(6), 522; https://doi.org/10.3390/rs9060522 - 25 May 2017
Cited by 117 | Viewed by 13717
Abstract
A new convolutional neural network (CNN) architecture for semantic segmentation of high resolution aerial imagery is proposed in this paper. The proposed architecture follows an hourglass-shaped network (HSN) design, being structured into encoding and decoding stages. Taking advantage of recent advances in CNN design, we use composed inception modules to replace common convolutional layers, providing the network with multi-scale receptive areas and rich context. Additionally, in order to reduce spatial ambiguities in the up-sampling stage, skip connections with residual units are employed to feed encoding-stage information directly to the decoder. Moreover, overlap inference is employed to alleviate boundary effects that occur when high resolution images are inferred from small-sized patches. Finally, we also propose a post-processing method based on weighted belief propagation to visually enhance the classification results. Extensive experiments on the Vaihingen and Potsdam datasets demonstrate that the proposed architecture outperforms three reference state-of-the-art network designs both numerically and visually. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

2678 KiB  
Article
Hypergraph Embedding for Spatial-Spectral Joint Feature Extraction in Hyperspectral Images
by Yubao Sun, Sujuan Wang, Qingshan Liu, Renlong Hang and Guangcan Liu
Remote Sens. 2017, 9(5), 506; https://doi.org/10.3390/rs9050506 - 22 May 2017
Cited by 30 | Viewed by 7746
Abstract
The fusion of spatial and spectral information in hyperspectral images (HSIs) is useful for improving classification accuracy. However, this approach usually results in features of higher dimension, and the curse of dimensionality may arise from the small ratio between the number of training samples and the dimensionality of the features. To ease this problem, we propose a novel algorithm for spatial-spectral feature extraction based on hypergraph embedding. Firstly, each HSI pixel is regarded as a vertex, and the joint of extended morphological profiles (EMP) and spectral features is adopted as the feature associated with the vertex. A hypergraph is then constructed by the K-Nearest-Neighbor method, in which each pixel and its K most relevant pixels are linked as one hyperedge to represent the complex relationships between HSI pixels. Secondly, a hypergraph embedding model is designed to learn low-dimensional features while preserving the geometric structure of the HSI. An adaptive hyperedge weight estimation scheme is also introduced to preserve the prominent hyperedges through a regularization constraint on the weights. Finally, the learned low-dimensional features are fed to a support vector machine (SVM) for classification. Experimental results on three benchmark hyperspectral databases are presented. They highlight the importance of spatial-spectral joint feature embedding for the accurate classification of HSI data, and show that the adaptive weight estimation further improves the classification accuracy, verifying the proposed method. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
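
The hypergraph construction itself is simple to sketch: each pixel spawns one hyperedge linking it with its K nearest neighbours in feature space, recorded in a vertex-by-hyperedge incidence matrix (features here are random placeholders for the EMP-plus-spectral vectors).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

feats = np.random.default_rng(0).random((500, 30))   # per-pixel feature vectors
K = 5

nn = NearestNeighbors(n_neighbors=K + 1).fit(feats)
_, idx = nn.kneighbors(feats)            # each row: the pixel itself + K neighbours

H = np.zeros((500, 500))                 # vertices x hyperedges incidence matrix
for e, members in enumerate(idx):
    H[members, e] = 1.0                  # one hyperedge per pixel
print(H.sum(axis=0).mean())              # K+1 vertices per hyperedge
```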

42450 KiB  
Article
Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images
by Mi Zhang, Xiangyun Hu, Like Zhao, Ye Lv, Min Luo and Shiyan Pang
Remote Sens. 2017, 9(5), 500; https://doi.org/10.3390/rs9050500 - 19 May 2017
Cited by 43 | Viewed by 10203
Abstract
Semantic image segmentation has recently witnessed considerable progress through the training of deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aids, and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method to solve a linear objective energy function in a continuous domain in order to delineate visual objects and address these problems. We present a novel embedded single-stream optimization method based on the MR model that avoids approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as the dual multi-scale manifold ranking (DMSMR) network, that combines dilated, multi-scale strategies with the single-stream MR optimization method in a deep learning architecture to further improve performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aids in an end-to-end manner. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

10169 KiB  
Article
Classification for High Resolution Remote Sensing Imagery Using a Fully Convolutional Network
by Gang Fu, Changjun Liu, Rong Zhou, Tao Sun and Qijian Zhang
Remote Sens. 2017, 9(5), 498; https://doi.org/10.3390/rs9050498 - 18 May 2017
Cited by 285 | Viewed by 18880
Abstract
As a variant of Convolutional Neural Networks (CNNs) in Deep Learning, the Fully Convolutional Network (FCN) model achieved state-of-the-art performance for natural image semantic segmentation. In this paper, an accurate classification approach for high resolution remote sensing imagery based on the improved FCN model is proposed. Firstly, we improve the density of output class maps by introducing Atrous convolution, and secondly, we design a multi-scale network architecture by adding a skip-layer structure to make it capable for multi-resolution image classification. Finally, we further refine the output class map using Conditional Random Fields (CRFs) post-processing. Our classification model is trained on 70 GF-2 true color images, and tested on the other 4 GF-2 images and 3 IKONOS true color images. We also employ object-oriented classification, patch-based CNN classification, and the FCN-8s approach on the same images for comparison. The experiments show that compared with the existing approaches, our approach has an obvious improvement in accuracy. The average precision, recall, and Kappa coefficient of our approach are 0.81, 0.78, and 0.83, respectively. The experiments also prove that our approach has strong applicability for multi-resolution image classification. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

10421 KiB  
Article
Multi-Scale Analysis of Very High Resolution Satellite Images Using Unsupervised Techniques
by Jérémie Sublime, Andrés Troya-Galvis and Anne Puissant
Remote Sens. 2017, 9(5), 495; https://doi.org/10.3390/rs9050495 - 18 May 2017
Cited by 7 | Viewed by 6954
Abstract
This article is concerned with the use of unsupervised methods to process very high resolution satellite images with minimal or little human intervention. In a context where more and more complex and very high resolution satellite images are available, it has become increasingly difficult to propose learning sets for supervised algorithms to process such data and even more complicated to process them manually. Within this context, in this article we propose a fully unsupervised step by step method to process very high resolution images, making it possible to link clusters to the land cover classes of interest. For each step, we discuss the various challenges and state of the art algorithms to make the full process as efficient as possible. In particular, one of the main contributions of this article comes in the form of a multi-scale analysis clustering algorithm that we use during the processing of the image segments. Our proposed methods are tested on a very high resolution image (Pléiades) of the urban area around the French city of Strasbourg and show relevant results at each step of the process. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

4065 KiB  
Article
Cost-Effective Class-Imbalance Aware CNN for Vehicle Localization and Categorization in High Resolution Aerial Images
by Feimo Li, Shuxiao Li, Chengfei Zhu, Xiaosong Lan and Hongxing Chang
Remote Sens. 2017, 9(5), 494; https://doi.org/10.3390/rs9050494 - 18 May 2017
Cited by 21 | Viewed by 6933
Abstract
Joint vehicle localization and categorization in high resolution aerial images can provide useful information for applications such as traffic flow structure analysis. To retain sufficient features for recognizing small-scale vehicles, a regions with convolutional neural network features (R-CNN)-like detection structure is employed. In this setting, cascaded localization error can be averted by treating the negatives and the differently typed positives equally, as a multi-class classification task, but the problem of class imbalance remains. To address this issue, a cost-effective network extension scheme is proposed, in which the convolution and connection costs incurred during extension are reduced by feature map selection and bi-partite main-side network construction, realized with the assistance of a novel feature map class-importance measurement and a new class-imbalance-sensitive main-side loss function. Using an image classification dataset built from a set of true-color aerial images with 0.13 m ground sampling distance, taken from a height of 1000 m by an imaging system composed of non-metric cameras, the effectiveness of the proposed network extension is verified by comparison with similarly shaped strong counterparts. Experiments show equivalent or better performance, while requiring the smallest parameter and memory overheads. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
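
For flavor, here is a generic class-imbalance-aware loss in PyTorch: cross-entropy weighted by inverse class frequency, so rare vehicle categories are not swamped by negatives. This is a common baseline, not the paper's main-side loss; the class counts are hypothetical.

```python
import torch
import torch.nn as nn

counts = torch.tensor([50000., 800., 300., 150.])   # hypothetical class counts
weights = counts.sum() / (len(counts) * counts)     # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 4)                         # fake network outputs
targets = torch.randint(0, 4, (16,))
loss = criterion(logits, targets)                   # rare classes weigh more
```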

Article
Automatic Color Correction for Multisource Remote Sensing Images with Wasserstein CNN
by Jiayi Guo, Zongxu Pan, Bin Lei and Chibiao Ding
Remote Sens. 2017, 9(5), 483; https://doi.org/10.3390/rs9050483 - 15 May 2017
Cited by 19 | Viewed by 7336
Abstract
In this paper, a non-parametric model based on a Wasserstein CNN is proposed for color correction. It is suitable for large-scale remote sensing image preprocessing from multiple sources under various viewing conditions, including illumination variations, atmospheric disturbances, and differing sensor and aspect angles. Color correction aims to alter the color palette of an input image to match a standard reference that does not suffer from these disturbances. Most current methods depend heavily on the similarity between the inputs and the references, with respect to both content and conditions such as illumination and atmospheric state, and segmentation is usually necessary to alleviate color leakage at edges. Unlike previous studies, the proposed method matches the color distribution of the input dataset with the references in a probabilistic optimal transportation framework. Multi-scale features are extracted from the intermediate layers of a lightweight CNN model and are used to infer the undisturbed distribution. The Wasserstein distance is used in the cost function to measure the discrepancy between two color distributions. The advantage of the method is that no registration or segmentation is needed, benefiting from the local texture processing capability of CNN models. Experimental results demonstrate that the proposed method is effective when the input and reference images come from different sources, have different resolutions, and were acquired under different illumination and atmospheric conditions. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
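
The bare distance underlying the cost can be computed directly. Below is a minimal sketch (the paper embeds this in a CNN loss; here it is just the measure itself) of the 1-D Wasserstein distance between the per-channel color distributions of two images; the random images are stand-ins.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (512, 512, 3))   # input image (stand-in)
ref = rng.integers(0, 256, (512, 512, 3))   # undisturbed reference (stand-in)

# Sum of per-channel 1-D Wasserstein distances between pixel-value
# distributions; small cost means the palettes already match.
cost = sum(
    wasserstein_distance(img[..., c].ravel(), ref[..., c].ravel())
    for c in range(3)
)
```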

Article
Hyperspectral Target Detection via Adaptive Joint Sparse Representation and Multi-Task Learning with Locality Information
by Yuxiang Zhang, Ke Wu, Bo Du, Liangpei Zhang and Xiangyun Hu
Remote Sens. 2017, 9(5), 482; https://doi.org/10.3390/rs9050482 - 14 May 2017
Cited by 22 | Viewed by 6809
Abstract
Target detection in hyperspectral images is an important problem that poses the critical challenge of simultaneously reducing spectral redundancy and preserving discriminative information. Recently, the joint sparse representation and multi-task learning (JSR-MTL) approach was proposed to address this challenge. However, it does not fully explore the prior class label information of the training samples, or the difference between the target dictionary and the background dictionary, when constructing the model. Moreover, the minimization used to estimate the unknown coefficient matrix may introduce estimation bias, since it is usually inconsistent in variable selection. To address these problems, this paper proposes an adaptive joint sparse representation and multi-task learning detector with locality information (JSRMTL-ALI). The proposed method has the following capabilities: (1) it takes full advantage of the prior class label information to construct an adaptive joint sparse representation and multi-task learning model; (2) it exploits the great difference between the target dictionary and the background dictionary through different regularization strategies in order to better encode task relatedness; (3) it applies locality information by imposing an iterative weight on the coefficient matrix in order to reduce estimation bias. Extensive experiments were carried out on three hyperspectral images, and JSRMTL-ALI was found to generally show better detection performance than the other target detection methods. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
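
As background for readers, a hedged sketch of the classic sparse-representation detection idea that JSR-MTL builds on (not JSRMTL-ALI itself): a test spectrum is coded over a target dictionary and a background dictionary, and the residual difference serves as the detector output. Dictionaries here are random stand-ins.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

bands = 100
D_target = np.random.rand(bands, 10)       # target training spectra (stand-in)
D_background = np.random.rand(bands, 40)   # background training spectra (stand-in)
x = np.random.rand(bands)                  # test pixel spectrum

def residual(D, x, k=5):
    # Sparse code x over dictionary D with at most k atoms, return residual norm.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(D, x)
    return np.linalg.norm(x - D @ omp.coef_)

score = residual(D_background, x) - residual(D_target, x)   # large => likely target
```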

Article
Maritime Semantic Labeling of Optical Remote Sensing Images with Multi-Scale Fully Convolutional Network
by Haoning Lin, Zhenwei Shi and Zhengxia Zou
Remote Sens. 2017, 9(5), 480; https://doi.org/10.3390/rs9050480 - 14 May 2017
Cited by 70 | Viewed by 8245
Abstract
In the current remote sensing literature, the problems of sea-land segmentation and ship detection (including in-dock ships) are investigated separately despite the high correlation between them. This inhibits joint optimization and makes implementation of the methods highly complicated. In this paper, we propose a novel fully convolutional network that accomplishes the two tasks simultaneously, in a semantic labeling fashion, i.e., it labels every pixel of the image as one of three classes: sea, land, and ship. A multi-scale structure is proposed for the network to address the huge scale gap between the different classes of targets, i.e., sea/land versus ships. Conventional multi-scale structures use shortcuts to connect low-level, fine-scale feature maps to high-level ones so that the network can produce finer results. In contrast, our proposed multi-scale structure focuses on increasing the receptive field of the network while maintaining its sensitivity to fine-scale details. The multi-scale network accommodates the huge scale difference between sea-land and ships, provides comprehensive features, and accomplishes the tasks in an end-to-end manner that is easy to implement and amenable to joint optimization. In the network, the input forks into fine-scale and coarse-scale paths, which share the same convolution layers to minimize the increase in network parameters, and are then joined to produce the final result. The experiments show that the network tackles the semantic labeling problem with improved performance. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
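
A minimal PyTorch sketch of the fork/join idea, assuming the details not given in the abstract (depths, channel counts, pooling factor): the same convolution layers process the input at fine and coarse scales, and the two paths are merged for the three-class labeling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoScaleNet(nn.Module):
    def __init__(self, n_classes=3):                 # sea, land, ship
        super().__init__()
        # Shared layers: both scale paths reuse these weights, so the
        # coarse path adds receptive field without new parameters.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        fine = self.shared(x)                         # fine-scale path
        coarse = self.shared(F.avg_pool2d(x, 4))      # coarse-scale path
        coarse = F.interpolate(coarse, size=x.shape[2:],
                               mode="bilinear", align_corners=False)
        return self.head(torch.cat([fine, coarse], dim=1))   # join the paths
```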

Article
Hyperspectral Dimensionality Reduction by Tensor Sparse and Low-Rank Graph-Based Discriminant Analysis
by Lei Pan, Heng-Chao Li, Yang-Jun Deng, Fan Zhang, Xiang-Dong Chen and Qian Du
Remote Sens. 2017, 9(5), 452; https://doi.org/10.3390/rs9050452 - 06 May 2017
Cited by 46 | Viewed by 7610
Abstract
Recently, sparse and low-rank graph-based discriminant analysis (SLGDA) has yielded satisfactory results in hyperspectral image (HSI) dimensionality reduction (DR), for which sparsity and low-rankness are simultaneously imposed to capture both local and global structure of hyperspectral data. However, SLGDA fails to exploit the spatial information. To address this problem, a tensor sparse and low-rank graph-based discriminant analysis (TSLGDA) is proposed in this paper. By regarding the hyperspectral data cube as a third-order tensor, small local patches centered at the training samples are extracted for the TSLGDA framework to maintain the structural information, resulting in a more discriminative graph. Subsequently, dimensionality reduction is performed on the tensorial training and testing samples to reduce data redundancy. Experimental results of three real-world hyperspectral datasets demonstrate that the proposed TSLGDA algorithm greatly improves the classification performance in the low-dimensional space when compared to state-of-the-art DR methods. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
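
The patch-extraction step can be pictured with a few lines of NumPy: treat the HSI as a third-order tensor and cut a small spatial patch, keeping all bands, around each training pixel. The patch size is an assumed parameter and border handling is omitted for brevity; this is a sketch, not the TSLGDA implementation.

```python
import numpy as np

def extract_patch(cube, row, col, size=7):
    """cube: (H, W, B) hyperspectral tensor; returns a (size, size, B) patch."""
    r = size // 2
    return cube[row - r:row + r + 1, col - r:col + r + 1, :]

cube = np.random.rand(145, 145, 200)   # stand-in HSI cube (H, W, bands)
patch = extract_patch(cube, 72, 72)    # third-order tensor sample for one pixel
```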

Article
Gated Convolutional Neural Network for Semantic Segmentation in High-Resolution Images
by Hongzhen Wang, Ying Wang, Qian Zhang, Shiming Xiang and Chunhong Pan
Remote Sens. 2017, 9(5), 446; https://doi.org/10.3390/rs9050446 - 05 May 2017
Cited by 163 | Viewed by 14697
Abstract
Semantic segmentation is a fundamental task in remote sensing image processing. The large appearance variations of ground objects make this task quite challenging. Recently, deep convolutional neural networks (DCNNs) have shown outstanding performance in this task. A common strategy of these methods (e.g., SegNet) for performance improvement is to combine the feature maps learned at different DCNN layers. However, such a combination is usually implemented via feature map summation or concatenation, meaning that the features are treated indiscriminately. In fact, features at different positions contribute differently to the final performance, so it is advantageous to select adaptive features automatically when merging different-layer feature maps. To achieve this goal, we propose a gated convolutional neural network. Specifically, we explore the relationship between the information entropy of the feature maps and the label-error map, and embed a gate mechanism to integrate the feature maps more effectively. The gate is implemented by entropy maps, which assign adaptive weights to the different feature maps according to their relative importance. In general, the entropy maps, i.e., the gates, guide the network to focus on highly uncertain pixels, where detailed information from lower layers is required to improve separability. The selected features are finally combined and fed into the classifier layer, which predicts the semantic label of each pixel. The proposed method achieves competitive segmentation accuracy on the public ISPRS 2D Semantic Labeling benchmark, which is challenging for segmentation using only RGB images. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
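
A short sketch of the gating signal, under assumed shapes: the per-pixel Shannon entropy of the softmax class probabilities gives a weight map, so uncertain pixels can draw more on fine, low-level features. How the paper wires the gate into the network is not reproduced here.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 6, 64, 64)                  # (batch, classes, H, W), stand-in
p = F.softmax(logits, dim=1)
entropy = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1, keepdim=True)
gate = entropy / entropy.max()                      # normalized gate in [0, 1]
# Illustrative fusion (one plausible form, not the paper's exact rule):
# fused = gate * low_level_features + (1 - gate) * high_level_features
```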

Article
Image Registration and Fusion of Visible and Infrared Integrated Camera for Medium-Altitude Unmanned Aerial Vehicle Remote Sensing
by Hongguang Li, Wenrui Ding, Xianbin Cao and Chunlei Liu
Remote Sens. 2017, 9(5), 441; https://doi.org/10.3390/rs9050441 - 05 May 2017
Cited by 56 | Viewed by 9847
Abstract
This study proposes a novel method for image registration and fusion via the commonly used visible light and infrared integrated cameras mounted on medium-altitude unmanned aerial vehicles (UAVs). The innovation of the image registration lies in three aspects. First, it reveals how a complex perspective transformation can be reduced to a simple scale transformation plus a translation between the two sensor images under long-distance, parallel imaging conditions. Second, with the introduction of metadata, a scale calculation algorithm is designed according to spatial geometry, and a coarse translation estimation algorithm is presented based on coordinate transformation. Third, the problem of non-strictly aligned edges in precise translation estimation is solved via an edge-distance field transformation, with a search algorithm based on particle swarm optimization introduced to improve efficiency. Additionally, a new image fusion algorithm is designed based on a pulse coupled neural network and the nonsubsampled contourlet transform to meet the special UAV application requirements of preserving color information, adding infrared brightness information, improving spatial resolution, and highlighting target areas. A medium-altitude UAV is employed to collect datasets. The results are promising, especially for applications involving other medium- or high-altitude UAVs with similar system structures. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
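
A hedged sketch of the edge-distance-field idea (the paper's PSO search and metadata steps are not shown): edges are turned into a distance map, so a candidate translation can be scored smoothly even when the two edge sets do not align strictly. The Canny detector and the cost form are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage
from skimage import feature

def edge_distance_field(gray):
    edges = feature.canny(gray)                    # binary edge map
    # Distance from every pixel to its nearest edge pixel.
    return ndimage.distance_transform_edt(~edges)

def translation_cost(field_ir, edges_vis, dy, dx):
    # Shift the visible-light edges and read the IR distance field under
    # them; a low mean distance means the translation aligns the edges.
    shifted = np.roll(edges_vis, (dy, dx), axis=(0, 1))
    return field_ir[shifted].mean()
```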

Article
Quantifying Sub-Pixel Surface Water Coverage in Urban Environments Using Low-Albedo Fraction from Landsat Imagery
by Weiwei Sun, Bo Du and Shaolong Xiong
Remote Sens. 2017, 9(5), 428; https://doi.org/10.3390/rs9050428 - 01 May 2017
Cited by 30 | Viewed by 5776
Abstract
The problem of mixed pixels negatively affects the accurate delineation of surface water in Landsat imagery. Linear spectral unmixing has been demonstrated to be a powerful technique for extracting surface materials at a sub-pixel scale. Therefore, in this paper, we propose an innovative low albedo fraction (LAF) method based on unconstrained linear spectral unmixing. LAF builds on the "High Albedo-Low Albedo-Vegetation" model of spectral unmixing analysis in urban environments, and addresses the urban surface water extraction problem with the low albedo fraction map. Three experiments are carefully designed using Landsat TM/ETM+ images of the three metropolises of Wuhan, Shanghai, and Guangzhou in China, and per-pixel and sub-pixel accuracies are estimated. The results are compared against extraction accuracies from three popular water extraction methods: the normalized difference water index (NDWI), the modified normalized difference water index (MNDWI), and the automated water extraction index (AWEI). Experimental results show that LAF achieves better accuracy when extracting urban surface water than both MNDWI and AWEI do, especially for boundary mixed pixels. Moreover, LAF has the smallest threshold variations among the three methods, and a fraction threshold of 1 is a suitable choice for obtaining good extraction results. LAF is therefore a promising approach for extracting urban surface water coverage. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
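
The unmixing behind LAF reduces to ordinary least squares. Below is a minimal sketch with random stand-in endmember spectra: the three endmembers of the "High Albedo-Low Albedo-Vegetation" model give per-pixel fractions, and the low-albedo fraction is thresholded (the abstract reports that a threshold of 1 works well).

```python
import numpy as np

bands, n_pix = 6, 10000                        # e.g., Landsat TM reflective bands
E = np.random.rand(bands, 3)                   # columns: high-albedo, low-albedo, vegetation
X = np.random.rand(bands, n_pix)               # observed pixel spectra (stand-in)

# Unconstrained linear unmixing: solve E @ F = X in the least-squares sense.
F_, *_ = np.linalg.lstsq(E, X, rcond=None)     # fractions, shape (3, n_pix)
water_mask = F_[1] > 1.0                       # low-albedo fraction threshold of 1
```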

Article
Sea Ice Concentration Estimation during Freeze-Up from SAR Imagery Using a Convolutional Neural Network
by Lei Wang, K. Andrea Scott and David A. Clausi
Remote Sens. 2017, 9(5), 408; https://doi.org/10.3390/rs9050408 - 26 Apr 2017
Cited by 65 | Viewed by 8262
Abstract
In this study, a convolutional neural network (CNN) is used to estimate sea ice concentration from synthetic aperture radar (SAR) scenes acquired during freeze-up in the Gulf of St. Lawrence on the east coast of Canada. The ice concentration estimates from the CNN are compared to those from a neural network (a multi-layer perceptron, or MLP) that uses hand-crafted features as input and a single layer of hidden nodes. The CNN is found to be less sensitive to pixel-level details than the MLP and produces ice concentration estimates that are less noisy and in closer agreement with image analysis charts. This is due to the multi-layer (deep) structure of the CNN, which enables abstract image features to be learned. The CNN ice concentration is also compared with ice concentration estimated from passive microwave brightness temperature data using the ARTIST sea ice (ASI) algorithm. The bias and RMS of the difference between the ice concentration from the CNN and that from image analysis charts are reduced compared to those of either the MLP or the ASI algorithm. Additional results demonstrate the impact of varying the input patch size, varying the number of CNN layers, and including the incidence angle as an additional input. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
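
A minimal sketch of the setup described: a small CNN maps a SAR image patch to an ice concentration in [0, 1] via a sigmoid output. The patch size, depth, channel counts, and the assumption of two polarization channels are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class IceConcentrationCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 5), nn.ReLU(), nn.MaxPool2d(2),   # assumed dual-pol input
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(1), nn.Sigmoid())      # concentration in [0, 1]

    def forward(self, patch):
        return self.net(patch)

model = IceConcentrationCNN()
estimate = model(torch.randn(8, 2, 41, 41))    # a batch of 41x41 patches (stand-in)
```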

Article
Unassisted Quantitative Evaluation of Despeckling Filters
by Luis Gomez, Raydonal Ospina and Alejandro C. Frery
Remote Sens. 2017, 9(4), 389; https://doi.org/10.3390/rs9040389 - 20 Apr 2017
Cited by 68 | Viewed by 6786
Abstract
SAR (Synthetic Aperture Radar) imaging plays a central role in remote sensing due to, among other important features, its ability to provide high-resolution, day-and-night and almost weather-independent images. SAR images are affected by a granular contamination, speckle, that can be described by a multiplicative model. Many despeckling techniques have been proposed in the literature, as well as measures of the quality of the results they provide. Under the multiplicative model, the observed image Z is the product of two independent fields: the backscatter X and the speckle Y. The result of any speckle filter is X̂, an estimator of the backscatter X based solely on the observed data Z. An ideal estimator would be one for which the ratio of the observed image to the filtered one, I = Z/X̂, is pure speckle: a collection of independent, identically distributed samples from Gamma variates. We then assess the quality of a filter by how closely I adheres to the statistical properties of pure speckle. We analyze filters through the ratio image they produce, with regard to first- and second-order statistics: the former check marginal properties, while the latter verify the absence of structure. A new quantitative image-quality index is then defined and applied to state-of-the-art despeckling filters. This new measure provides results consistent with commonly used quality measures (equivalent number of looks, PSNR, MSSIM, β edge correlation, and preservation of the mean), and ranks the filters' results in agreement with their visual analysis. We conclude our study by showing that the proposed measure can be successfully used to optimize the (often many) parameters that define a speckle filter. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
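
The first-order part of the ratio-image test is easy to sketch: for L-look intensity speckle, the ratio I = Z/X̂ should have mean close to 1 and an equivalent number of looks (ENL = mean²/variance) close to L. This is only the marginal check, not the paper's full index.

```python
import numpy as np

def ratio_statistics(observed, filtered, eps=1e-12):
    """First-order check on the ratio image I = Z / X_hat."""
    ratio = observed / np.maximum(filtered, eps)
    mean = ratio.mean()
    enl = mean**2 / ratio.var()
    return mean, enl     # ideal filter: mean ~ 1, enl ~ nominal number of looks
```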

Article
A Fuzzy-GA Based Decision Making System for Detecting Damaged Buildings from High-Spatial Resolution Optical Images
by Milad Janalipour and Ali Mohammadzadeh
Remote Sens. 2017, 9(4), 349; https://doi.org/10.3390/rs9040349 - 20 Apr 2017
Cited by 27 | Viewed by 5736
Abstract
In this research, a semi-automated building damage detection system is developed for high-spatial-resolution remotely sensed images. The aim of this study was to build a semi-automated fuzzy decision making system tuned by a Genetic Algorithm (GA). Our proposed system contains four main stages. In the first stage, post-event optical images are pre-processed. In the second stage, textural features are extracted from the pre-processed post-event optical images using the Haralick texture extraction method. In the third stage, a semi-automated Fuzzy-GA (Fuzzy Genetic Algorithm) decision making system identifies damaged buildings from the extracted texture features. In the fourth stage, a comprehensive sensitivity analysis is performed to identify the GA parameters that lead to more accurate results. Finally, the accuracy of the results is assessed using check and test samples. The proposed system was tested on the 2010 Haiti earthquake (Area 1 and Area 2) and the 2003 Bam earthquake (Area 3), yielding overall accuracies of 76.88 ± 1.22%, 65.43 ± 0.29%, and 90.96 ± 0.15% over Area 1, Area 2, and Area 3, respectively. On the one hand, by the design of the proposed Fuzzy-GA decision making system, its level of automation is higher than that of existing systems. On the other hand, comparing its accuracy in detecting damaged buildings with four advanced machine learning techniques, i.e., bagging, boosting, random forests, and support vector machines, suggests that the proposed system is robust and efficient. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
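
The texture stage can be illustrated with scikit-image's gray-level co-occurrence (Haralick) features; the window size, offsets, and chosen properties below are assumptions, and the Fuzzy-GA stage is not shown.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# A stand-in 64x64 image window quantized to 8-bit gray levels.
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast")[0, 0]        # Haralick contrast
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]  # Haralick homogeneity
```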

Article
A New Spatial Attraction Model for Improving Subpixel Land Cover Classification
by Lizhen Lu, Yanlin Huang, Liping Di and Danwei Hang
Remote Sens. 2017, 9(4), 360; https://doi.org/10.3390/rs9040360 - 11 Apr 2017
Cited by 23 | Viewed by 4803
Abstract
Subpixel mapping (SPM) is a technique that produces hard classification maps, at a spatial resolution finer than that of the input images, when handling mixed pixels. Existing spatial attraction model (SAM) techniques have proven to be effective SPM methods. The techniques differ mostly in how they compute the spatial attraction: for example, from the surrounding pixels in the subpixel/pixel spatial attraction model (SPSAM), from the subpixels within the surrounding pixels in the modified SPSAM (MSPSAM), or from the subpixels within the surrounding pixels together with the touching subpixels within the central pixel in the mixed spatial attraction model (MSAM). However, they share a number of defects, such as neglecting the attraction from subpixels within the central pixel and treating attraction from surrounding subpixels at the same distance unequally. To overcome these defects, this study proposes an improved SAM (ISAM) for SPM. ISAM estimates the attraction value of the current subpixel at the center of a moving window from all subpixels within the window, and moves the window one subpixel per step. Experimental results on both Landsat and MODIS imagery show that ISAM, compared with the other SAMs, improves SPM accuracy and is more efficient than MSPSAM and MSAM. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
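
A hedged simplification of the spatial-attraction idea (not ISAM's exact formula): the attraction of the center subpixel toward each class is an inverse-distance-weighted sum over all subpixels in the moving window, which then slides one subpixel per step. The window size and weighting are assumptions.

```python
import numpy as np

def attraction(window_labels, n_classes, center):
    """window_labels: (size, size) current subpixel labels; center: size // 2."""
    rows, cols = np.indices(window_labels.shape)
    d = np.hypot(rows - center, cols - center)   # distance to the center subpixel
    d[center, center] = np.inf                   # exclude the subpixel itself
    scores = np.zeros(n_classes)
    for c in range(n_classes):
        scores[c] = (1.0 / d[window_labels == c]).sum()
    return scores   # the center subpixel would be assigned argmax(scores)
```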

Article
Supervised and Semi-Supervised Multi-View Canonical Correlation Analysis Ensemble for Heterogeneous Domain Adaptation in Remote Sensing Image Classification
by Alim Samat, Claudio Persello, Paolo Gamba, Sicong Liu, Jilili Abuduwaili and Erzhu Li
Remote Sens. 2017, 9(4), 337; https://doi.org/10.3390/rs9040337 - 01 Apr 2017
Cited by 24 | Viewed by 9792
Abstract
In this paper, we present the supervised multi-view canonical correlation analysis ensemble (SMVCCAE) and its semi-supervised version (SSMVCCAE), which are novel techniques designed to address heterogeneous domain adaptation problems, i.e., situations in which the data to be processed and recognized are collected from different heterogeneous domains. Specifically, the multi-view canonical correlation analysis scheme is utilized to extract multiple correlation subspaces that are useful for joint representations for data association across domains. This scheme makes homogeneous domain adaptation algorithms suitable for heterogeneous domain adaptation problems. Additionally, inspired by fusion methods such as Ensemble Learning (EL), this work proposes a weighted voting scheme based on canonical correlation coefficients to combine the classification results from the multiple correlation subspaces. Finally, the semi-supervised MVCCAE extends the original procedure by incorporating multiple speed-up spectral regression kernel discriminant analyses (SRKDA). To validate the performance of the proposed supervised procedure, single-view canonical correlation analysis (SVCCA) with the same base classifier (Random Forests) is used. Similarly, to evaluate the performance of the semi-supervised approach, comparisons are made with other techniques such as logistic label propagation (LLP) and the Laplacian support vector machine (LapSVM). All of the approaches are tested on two real hyperspectral images, considered the target domain, with a classifier trained on synthetic low-dimensional multispectral images, considered the original source domain. The experimental results confirm that multi-view canonical correlation can overcome the limitations of SVCCA. Both proposed procedures outperform those used in the comparison with respect to not only classification accuracy but also computational efficiency. Moreover, this research shows that canonical correlation weighted voting (CCWV) is a valid option with respect to other ensemble schemes, and that, because of their ability to balance diversity and accuracy, canonical views extracted using partially joint random view generation are more effective than those obtained by disjoint random view generation. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
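
The core correlation-subspace step can be sketched with scikit-learn's CCA: two views of the same samples are projected into a shared subspace where a classifier can associate them. The view dimensions and component count are assumptions; the ensemble, weighted voting, and SRKDA parts are not shown.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_source = rng.random((200, 30))   # e.g., synthetic multispectral features (stand-in)
X_target = rng.random((200, 50))   # e.g., hyperspectral features (stand-in)

cca = CCA(n_components=10).fit(X_source, X_target)
Zs, Zt = cca.transform(X_source, X_target)   # joint representations across domains
```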

Article
Urban Change Analysis with Multi-Sensor Multispectral Imagery
by Yuqi Tang and Liangpei Zhang
Remote Sens. 2017, 9(3), 252; https://doi.org/10.3390/rs9030252 - 09 Mar 2017
Cited by 30 | Viewed by 6553
Abstract
An object-based method is proposed in this paper for change detection in urban areas using multi-sensor multispectral (MS) images. The co-registered bi-temporal images are resampled to match each other. By mapping the segmentation of one image onto the other, a change map is generated that characterizes the change probability of image objects based on the proposed change feature analysis. The map is then used to separate changes from unchanged areas by two threshold selection methods and k-means clustering (k = 2). In order to account for the multi-scale characteristics of ground objects, multi-scale fusion is implemented. The experimental results obtained with QuickBird and IKONOS images show the superiority of the proposed method in detecting urban changes in multi-sensor MS images. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
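
The final decision step is simple enough to sketch: per-object change probabilities are split into "changed" and "unchanged" groups by two-class k-means, one of the separation strategies the abstract mentions. The scores below are stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

change_prob = np.random.rand(3000, 1)              # stand-in per-object change scores
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(change_prob)
changed_cluster = km.cluster_centers_.argmax()     # cluster with the higher mean score
changed = km.labels_ == changed_cluster            # boolean mask of changed objects
```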

Article
Refinement of Hyperspectral Image Classification with Segment-Tree Filtering
by Lu Li, Chengyi Wang, Jingbo Chen and Jianglin Ma
Remote Sens. 2017, 9(1), 69; https://doi.org/10.3390/rs9010069 - 16 Jan 2017
Cited by 6 | Viewed by 6301
Abstract
This paper proposes a novel segment-tree filtering method to improve the classification accuracy of hyperspectral images (HSIs). Segment-tree filtering is a versatile method that incorporates spatial information and has been widely applied in image preprocessing. However, to use this powerful framework for hyperspectral image classification, we must reduce the original feature dimensionality to avoid the Hughes problem; otherwise, the computational costs are high and the classification accuracy achieved with the original bands of the HSI is unsatisfactory. Therefore, feature extraction is adopted to produce new salient features. In this paper, the Semi-supervised Local Fisher (SELF) method of discriminant analysis is used to reduce HSI dimensionality. Then, a tree-structured filter that adaptively incorporates contextual information is constructed. Additionally, an initial classification map is generated using multi-class support vector machines (SVMs), and segment-tree filtering is conducted using this map. Finally, a simple Winner-Take-All (WTA) rule is applied to determine the class of each pixel in the HSI based on the maximum probability. The experimental results demonstrate that the proposed method can significantly improve HSI classification accuracy. Furthermore, a comparison with current state-of-the-art methods, such as Extended Morphological Profiles (EMPs), Guided Filtering (GF), and Markov Random Fields (MRFs), suggests that our method is both competitive and robust. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
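
The closing WTA rule is a one-liner worth making explicit: after the segment-tree filter smooths the per-class probability maps from the SVM, each pixel takes the class of maximum filtered probability. The tree filter itself is not reproduced; shapes are stand-ins.

```python
import numpy as np

filtered_probs = np.random.rand(5, 145, 145)   # (classes, H, W) filtered probability maps
class_map = filtered_probs.argmax(axis=0)      # Winner-Take-All over the class axis
```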

Other


Technical Note
Flood Inundation Mapping from Optical Satellite Images Using Spatiotemporal Context Learning and Modest AdaBoost
by Xiaoyi Liu, Hichem Sahli, Yu Meng, Qingqing Huang and Lei Lin
Remote Sens. 2017, 9(6), 617; https://doi.org/10.3390/rs9060617 - 16 Jun 2017
Cited by 18 | Viewed by 8577
Abstract
Due to its temporal and spatial coverage, remote sensing has emerged as a powerful tool for mapping inundation, and many methods have been applied effectively in remote sensing flood analysis. Generally, supervised methods achieve better precision than unsupervised ones, but the human intervention they require makes their results subjective and difficult to obtain automatically, which matters for rapid disaster response. In this work, we propose a novel procedure combining a spatiotemporal context learning method and the Modest AdaBoost classifier, which aims to extract inundation automatically and accurately. First, the context model is built with the images to calculate the confidence value of each pixel, which represents the probability of the pixel remaining unchanged. Then, the pixels with the highest probabilities, which we define as 'permanent pixels', are used as samples to train the Modest AdaBoost classifier. By applying the resulting strong classifier to the target scene, an inundation map can be obtained. The proposed procedure is validated on two flood cases captured by different sensors, HJ-1A CCD and GF-4 PMS. Qualitative and quantitative evaluation results show that the proposed procedure achieves accurate and robust mapping results. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
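
A hedged sketch of the supervised stage: pixels with the highest context-model confidence ("permanent pixels") serve as automatic training samples for a boosted classifier. scikit-learn's AdaBoost stands in here for Modest AdaBoost, which it does not implement, and all data below are stand-ins.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
features = rng.random((100000, 4))            # per-pixel spectral features (stand-in)
confidence = rng.random(100000)               # context-model confidence values (stand-in)
labels = (features[:, 0] > 0.5).astype(int)   # stand-in water / non-water labels

# Select the most confident pixels as automatic training samples.
permanent = confidence > np.quantile(confidence, 0.95)
clf = AdaBoostClassifier(n_estimators=50).fit(features[permanent], labels[permanent])
flood_map = clf.predict(features)             # strong classifier applied to the scene
```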
