
Image Processing and Spatial Neighbourhoods for Remote Sensing Data Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (20 August 2020) | Viewed by 45783

Special Issue Editors

Dr. Wenzhi Liao
Guest Editor
Lecturer, Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, UK
Interests: remote sensing; hyperspectral imaging; image processing; machine learning; data fusion
Dr. Pedram Ghamisi
Guest Editor
1. Helmholtz Institute Freiberg for Resource Technology, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), D-09599 Freiberg, Germany
2. Institute of Advanced Research in Artificial Intelligence (IARAI), 1030 Wien, Austria
Interests: hyperspectral image interpretation; multisensor and multitemporal data fusion
Dr. Lianru Gao
Guest Editor
Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, China
Interests: remote sensing; Earth observation; hyperspectral image processing; target detection
Prof. Jocelyn Chanussot
Guest Editor
Grenoble Institute of Technology, GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, CEDEX, F-38402 Saint Martin d'Hères, France
Interests: image processing; machine learning; mathematical morphology; hyperspectral imaging; data fusion

Special Issue Information

Dear Colleagues,

Recent advances in remote sensing technologies have led to the increased availability of a multitude of satellite and airborne data sources, with ever finer spatial, spectral, and temporal resolutions. Additionally, at lower altitudes, airplanes and unmanned aerial vehicles (UAVs) can deliver very high-resolution data from targeted locations. Remote sensing images of very high geometrical resolution provide a precise and detailed representation of the surveyed scene, so the spatial information contained in these images is fundamental for any application requiring image analysis.

In this Special Issue, we welcome methodological contributions in terms of novel spatial information extraction/modeling algorithms as well as their recent applications to relevant scenarios from remote sensing imagery. We invite you to submit the most recent advancements in (but not limited to) the following topics:

  • Mathematical morphology (e.g., morphological filters, attribute filters) for the analysis of high-resolution remote sensing images;
  • Image operations based on spatial neighbourhoods;
  • Textural, structural, and semantic feature extraction;
  • Operational methods for incorporating spatial information of high-resolution data;
  • Object-based image processing;
  • Semantic understanding and analysis.
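Several of these topics rest on the same primitive: filtering a pixel's spatial neighbourhood with structuring elements of growing size. As a minimal illustrative sketch (not drawn from any contributed paper), a basic morphological profile can be built by stacking greyscale openings and closings with SciPy:

```python
import numpy as np
from scipy import ndimage

def morphological_profile(image, sizes=(3, 5, 7)):
    """Stack greyscale openings and closings with growing
    structuring-element sizes (a basic morphological profile)."""
    layers = [image.astype(float)]
    for s in sizes:
        # opening removes bright structures smaller than the window;
        # closing removes dark structures smaller than the window
        layers.append(ndimage.grey_opening(image, size=(s, s)).astype(float))
        layers.append(ndimage.grey_closing(image, size=(s, s)).astype(float))
    return np.stack(layers, axis=0)  # shape: (1 + 2*len(sizes), H, W)

img = np.random.default_rng(0).integers(0, 255, (32, 32))
profile = morphological_profile(img)
print(profile.shape)  # (7, 32, 32)
```

Each layer responds to structures of a different size, so the stacked profile encodes the spatial neighbourhood at multiple scales; classifiers can then use it as a per-pixel feature vector.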

Dr. Wenzhi Liao
Dr. Pedram Ghamisi
Dr. Lianru Gao
Prof. Jocelyn Chanussot
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • very high resolution
  • remote sensing
  • spatial information extraction
  • mathematical morphology
  • image processing

Published Papers (11 papers)


Research

16 pages, 7698 KiB  
Article
SAFDet: A Semi-Anchor-Free Detector for Effective Detection of Oriented Objects in Aerial Images
Remote Sens. 2020, 12(19), 3225; https://doi.org/10.3390/rs12193225 - 03 Oct 2020
Cited by 8 | Viewed by 2604
Abstract
An oriented bounding box (OBB) is preferable over a horizontal bounding box (HBB) for accurate object detection. Most existing works utilize a two-stage detector for locating the HBB and OBB, respectively, and suffer from misaligned horizontal proposals and interference from complex backgrounds. To tackle these issues, region-of-interest transformer and attention models were proposed, yet they are extremely computationally intensive. To this end, we propose a semi-anchor-free detector (SAFDet) for object detection in aerial images, where a rotation-anchor-free branch (RAFB) is used to enhance the foreground features via precisely regressing the OBB. Meanwhile, a center-prediction module (CPM) is introduced for enhancing object localization and suppressing the background noise. Both RAFB and CPM are deployed only during training, avoiding any increase in the computational cost of inference. Evaluations on the DOTA and HRSC2016 datasets validate that our approach achieves a good balance between accuracy and computational cost.

18 pages, 4024 KiB  
Article
Object Tracking in Unmanned Aerial Vehicle Videos via Multifeature Discrimination and Instance-Aware Attention Network
Remote Sens. 2020, 12(16), 2646; https://doi.org/10.3390/rs12162646 - 17 Aug 2020
Cited by 16 | Viewed by 3530
Abstract
Visual object tracking in unmanned aerial vehicle (UAV) videos plays an important role in a variety of fields, such as traffic data collection, traffic monitoring, and film and television shooting. However, robustly tracking the target in UAV vision tasks remains challenging due to factors such as appearance variation, background clutter, and severe occlusion. In this paper, we propose a novel two-stage UAV tracking framework, which includes a target detection stage based on multifeature discrimination and a bounding-box estimation stage based on an instance-aware attention network. In the target detection stage, we explore a feature representation scheme for small targets that integrates handcrafted features, low-level deep features, and high-level deep features. A correlation filter is then used to roughly predict the target location. In the bounding-box estimation stage, an instance-aware intersection-over-union (IoU)-Net is integrated with an instance-aware attention network to estimate the target size from the bounding-box proposals generated in the target detection stage. Extensive experimental results on the UAV123 and UAVDT datasets show that our tracker, running at over 25 frames per second (FPS), outperforms state-of-the-art UAV visual tracking approaches.

23 pages, 7561 KiB  
Article
Meta-XGBoost for Hyperspectral Image Classification Using Extended MSER-Guided Morphological Profiles
Remote Sens. 2020, 12(12), 1973; https://doi.org/10.3390/rs12121973 - 19 Jun 2020
Cited by 58 | Viewed by 5221
Abstract
To investigate the performance of extreme gradient boosting (XGBoost) in remote sensing image classification tasks, XGBoost was first introduced and comparatively investigated for the spectral-spatial classification of hyperspectral imagery using the extended maximally-stable-extreme-region-guided morphological profiles (EMSER_MPs) proposed in this study. To overcome the potential issues of XGBoost, meta-XGBoost was proposed as an ensemble XGBoost method with classification and regression tree (CART), dropout-introduced multiple additive regression tree (DART), elastic-net regression and parallel-coordinate-descent-based linear regression (linear), and random forest (RaF) boosters. Moreover, to evaluate the performance of the introduced XGBoost approach with different boosters, meta-XGBoost, and EMSER_MPs, well-known and widely accepted classifiers, including support vector machine (SVM), bagging, adaptive boosting (AdaBoost), multi-class AdaBoost (MultiBoost), extremely randomized decision trees (ExtraTrees), RaF, classification via random forest regression (CVRFR), and ensemble of nested dichotomies with extremely randomized decision trees (END-ERDT), were considered in terms of classification accuracy and computational efficiency. The experimental results on two benchmark hyperspectral data sets confirm the superior performance of EMSER_MPs and EMSER_MPs with mean pixel values within regions (EMSER_MPsM) compared to morphological profiles (MPs), morphological profiles with partial reconstruction (MPPR), extended MPs (EMPs), extended MPPR (EMPPR), maximally-stable-extreme-region-guided morphological profiles (MSER_MPs), and MSER_MPs with mean pixel values within regions (MSER_MPsM). The proposed meta-XGBoost algorithm obtains better results than XGBoost with the CART, DART, linear, and RaF boosters, and it can serve as an alternative to the other considered classifiers for the classification of hyperspectral images using advanced spectral-spatial features, especially in terms of generalized classification accuracy and model training efficiency.

16 pages, 6867 KiB  
Article
Hyperspectral Image Denoising Based on Nonlocal Low-Rank and TV Regularization
Remote Sens. 2020, 12(12), 1956; https://doi.org/10.3390/rs12121956 - 17 Jun 2020
Cited by 17 | Viewed by 2723
Abstract
Hyperspectral image (HSI) acquisitions are degraded by various types of noise, among which additive Gaussian noise may be the worst case, as suggested by information theory. In this paper, we present a novel tensor-based HSI denoising approach that fully identifies the intrinsic structures of the clean HSI and the noise. Specifically, the HSI is first divided into local overlapping full-band patches (FBPs); nonlocal similar patches are then grouped, and the patches in each group are unfolded and stacked into a new third-order tensor. As this tensor shows a stronger low-rank property than the original degraded HSI, tensor weighted nuclear norm minimization (TWNNM) on the constructed tensor can effectively separate the low-rank clean HSI patches. In addition, a spatial-spectral total variation (SSTV) regularization strategy is utilized to ensure global smoothness in both the spatial and spectral domains. Our method thus models spatial-spectral nonlocal self-similarity and global spatial-spectral smoothness simultaneously. Experiments conducted on simulated and real datasets show the superiority of the proposed method.
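The nuclear-norm minimization at the core of such low-rank approaches reduces to soft-thresholding of singular values. A minimal NumPy sketch of this proximal step follows; the scalar threshold tau and the toy data are illustrative assumptions, not the weighted TWNNM scheme of the paper:

```python
import numpy as np

def svt(matrix, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear norm, which shrinks every singular value by tau."""
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # small values vanish -> lower rank
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))
noisy = low_rank + 0.1 * rng.standard_normal((20, 20))
denoised = svt(noisy, tau=1.0)
```

In a TWNNM-style method each singular value receives its own weight, so the scalar tau above would be replaced by a per-value threshold that penalizes small (noise-dominated) singular values more heavily.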

21 pages, 8268 KiB  
Article
Object Detection in Remote Sensing Images Based on Improved Bounding Box Regression and Multi-Level Features Fusion
Remote Sens. 2020, 12(1), 143; https://doi.org/10.3390/rs12010143 - 01 Jan 2020
Cited by 99 | Viewed by 6707
Abstract
The objective of detection in remote sensing images is to determine the location and category of all targets in these images. Anchor-based methods are the most prevalent deep-learning-based methods, yet they still have several problems. First, the existing metric (i.e., intersection over union (IoU)) cannot measure the distance between two bounding boxes when they are non-overlapping. Second, the existing bounding box regression loss cannot directly optimize the metric during training. Third, existing methods that adopt a hierarchical deep network choose only a single feature level for the feature extraction of region proposals and thus do not take full advantage of multi-level features. To resolve these problems, a novel object detection method for remote sensing images based on improved bounding box regression and multi-level features fusion is proposed in this paper. First, a new metric named generalized IoU is applied, which can quantify the distance between two bounding boxes regardless of whether they overlap. Second, a novel bounding box regression loss is proposed, which not only optimizes the new metric (i.e., generalized IoU) directly but also overcomes the problem that an existing bounding box regression loss based on the new metric cannot adaptively change its gradient with the metric value. Finally, a multi-level features fusion module is proposed and incorporated into the existing hierarchical deep network, making full use of the multi-level features for each region proposal. Quantitative comparisons between the proposed method and the baseline on the large-scale DIOR dataset demonstrate that incorporating the proposed bounding box regression loss, the multi-level features fusion module, and the combination of both into the baseline method yields absolute mAP gains of approximately 0.7%, 1.4%, and 2.2%, respectively. Comparison with state-of-the-art methods demonstrates that the proposed method achieves state-of-the-art performance. The average-precision curves at different thresholds show that the advantage of the proposed method is more evident when the generalized IoU (or IoU) threshold is relatively high, which means that the proposed method improves the precision of object localization. Similar conclusions hold on the NWPU VHR-10 dataset.
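The generalized IoU referred to in this abstract augments IoU with a term based on the smallest enclosing box, so it stays informative even when the boxes are disjoint. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form (a generic illustration of the metric, not the authors' implementation):

```python
def generalized_iou(a, b):
    """GIoU for two axis-aligned boxes (x1, y1, x2, y2).
    Equals IoU minus the fraction of the smallest enclosing box
    not covered by the union; ranges over (-1, 1]."""
    inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # smallest axis-aligned box enclosing both inputs
    enc_w = max(a[2], b[2]) - min(a[0], b[0])
    enc_h = max(a[3], b[3]) - min(a[1], b[1])
    enclose = enc_w * enc_h
    return iou - (enclose - union) / enclose

print(generalized_iou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 (identical boxes)
print(generalized_iou((0, 0, 1, 1), (2, 0, 3, 1)))  # negative: disjoint boxes
```

Because GIoU decreases smoothly as disjoint boxes move apart, the regression loss 1 - GIoU provides a useful gradient even where plain IoU is flat at zero.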

25 pages, 8869 KiB  
Article
An Object-Based Markov Random Field Model with Anisotropic Penalty for Semantic Segmentation of High Spatial Resolution Remote Sensing Imagery
Remote Sens. 2019, 11(23), 2878; https://doi.org/10.3390/rs11232878 - 03 Dec 2019
Cited by 5 | Viewed by 3717
Abstract
The Markov random field (MRF) model has attracted much attention in the field of remote sensing semantic segmentation. However, most MRF-based methods fail to capture the various interactions between different land classes because they use an isotropic potential function. To solve this problem, this paper proposes a new generalized probability inference with an anisotropic penalty for the object-based MRF model (OMRF-AP) that can distinguish the interactions between any two land classes. Specifically, an anisotropic penalty matrix is first developed to describe the relationships between different classes. Then, an expected value of the penalty information (EVPI) is developed as the inference criterion to integrate the anisotropic class-interaction information and the posterior distribution information of the OMRF model. Finally, by iteratively updating the EVPI terms of the different classes, segmentation results are obtained once the iteration converges. Experiments on texture images and different remote sensing images demonstrate that our method performs better than other state-of-the-art MRF-based methods; a post-processing scheme for the OMRF-AP model is also discussed in the experiments.

21 pages, 7738 KiB  
Article
Multiscale Spatial-Spectral Convolutional Network with Image-Based Framework for Hyperspectral Imagery Classification
Remote Sens. 2019, 11(19), 2220; https://doi.org/10.3390/rs11192220 - 23 Sep 2019
Cited by 29 | Viewed by 4030
Abstract
Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years due to their detailed representation of features. However, most CNN-based HSI classification methods use patches as classifier input, which limits the use of spatial neighborhood information and reduces processing efficiency in training and testing. To overcome this problem, we propose an image-based classification framework that is efficient and straightforward. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates multiple-receptive-field fused features and multiscale spatial features at different levels. The fused features are extracted using a lightweight block called the multiple receptive field feature block (MRFF), which contains various types of dilated convolution. By fusing multiple-receptive-field features and multiscale spatial features, HyMSCN obtains a comprehensive feature representation for classification. Experimental results on three real hyperspectral images demonstrate the efficiency of the proposed framework, and the proposed method achieves superior performance for HSI classification.

23 pages, 14467 KiB  
Article
Adaptive Contrast Enhancement for Infrared Images Based on the Neighborhood Conditional Histogram
Remote Sens. 2019, 11(11), 1381; https://doi.org/10.3390/rs11111381 - 10 Jun 2019
Cited by 19 | Viewed by 5678
Abstract
In this paper, an adaptive contrast enhancement method based on the neighborhood conditional histogram is proposed to improve the visual quality of thermal infrared images. Existing block-based local contrast enhancement methods usually suffer from over-enhancement of smooth regions or the loss of some details. To address these drawbacks, we first introduce a neighborhood conditional histogram to adaptively enhance the contrast and avoid the over-enhancement caused by the original histogram. The clip-redistributed histogram of contrast-limited adaptive histogram equalization (CLAHE) is then replaced by the neighborhood conditional histogram. In addition, the local mapping function of each sub-block is updated based on the global mapping function to further eliminate block artifacts. Lastly, an optimized local contrast enhancement process, which combines the global and local enhanced results, is employed to obtain the final enhanced result. Experiments are conducted to evaluate the performance of the proposed method, with five other methods introduced for comparison. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms the other block-based methods in local contrast enhancement, visual quality improvement, and noise suppression.
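The clip-and-redistribute step that the neighborhood conditional histogram replaces works roughly as follows: per-block histogram counts above a clip limit are trimmed, and the trimmed mass is spread uniformly over all bins, which caps the slope of the resulting equalization mapping. A generic CLAHE-style sketch (illustrative, not the authors' code):

```python
import numpy as np

def clip_redistribute(hist, clip_limit):
    """Clip a histogram at clip_limit and spread the excess counts
    uniformly over all bins (the redistribution step of CLAHE).
    Note: redistribution can push some bins slightly above the limit
    again; practical implementations may iterate this step."""
    excess = np.maximum(hist - clip_limit, 0).sum()
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess / hist.size

hist = np.array([50, 3, 2, 40, 5], dtype=float)
out = clip_redistribute(hist, clip_limit=10.0)
print(out)  # total count is preserved: out.sum() == hist.sum()
```

The equalization mapping is then derived from the cumulative sum of this flattened histogram, which is what limits contrast amplification in smooth regions.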

26 pages, 14336 KiB  
Article
A Novel Effectively Optimized One-Stage Network for Object Detection in Remote Sensing Imagery
Remote Sens. 2019, 11(11), 1376; https://doi.org/10.3390/rs11111376 - 09 Jun 2019
Cited by 20 | Viewed by 3912
Abstract
Detecting small and densely arranged objects in wide-scale remote sensing imagery, a task of great significance in military and civilian applications, remains challenging. To solve this problem, we propose a novel effectively optimized one-stage network (NEOON). As a fully convolutional network, NEOON consists of four parts: feature extraction, feature fusion, feature enhancement, and multi-scale detection. To extract effective features, the first part implements bottom-up and top-down coherent processing through successive down-sampling and up-sampling operations in conjunction with residual modules. The second part consolidates high-level and low-level features through concatenation and subsequent convolutional operations to explicitly yield strong feature representation and semantic information. The third part constructs a receptive field enhancement (RFE) module and incorporates it into the front part of the network, where the information of small objects resides. The final part comprises four detectors with different sensitivities, all operating in parallel on the fused features, enabling the network to make full use of objects at different scales. In addition, the focal loss replaces the standard cross-entropy loss for classification to address the severe class imbalance of one-stage methods, and Soft-NMS is introduced to preserve accurate bounding boxes in the post-processing stage, especially for densely arranged objects. Note that a split-and-merge strategy and a multi-scale training strategy are employed during training. Thorough experiments are performed on the ACS dataset constructed by us and the NWPU VHR-10 dataset to evaluate the performance of NEOON. Specifically, improvements of 4.77% in mAP and 5.50% in recall on the ACS dataset compared to YOLOv3 demonstrate that NEOON can effectively improve the detection accuracy of small objects in remote sensing imagery. In addition, extensive experiments and comprehensive evaluations on the 10-class NWPU VHR-10 dataset illustrate the superiority of NEOON in extracting spatial information from high-resolution remote sensing images.
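The focal loss mentioned in this abstract down-weights well-classified examples relative to plain cross-entropy, so the many easy background samples of a one-stage detector stop dominating the gradient. A minimal binary-case sketch (the alpha and gamma values are the commonly used defaults, assumed here rather than taken from the paper):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma,
    so easy examples (p_t near 1) contribute very little."""
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balance factor
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct prediction is penalized far less than a wrong one:
print(focal_loss(0.9, 1))  # small
print(focal_loss(0.1, 1))  # large
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) ordinary cross-entropy; increasing gamma sharpens the suppression of easy examples.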

21 pages, 2306 KiB  
Article
Hyperspectral Image Super-Resolution by Deep Spatial-Spectral Exploitation
Remote Sens. 2019, 11(10), 1229; https://doi.org/10.3390/rs11101229 - 23 May 2019
Cited by 18 | Viewed by 3616 | Correction
Abstract
Limited by existing imaging sensors, hyperspectral images are characterized by high spectral resolution but low spatial resolution. Super-resolution (SR), which aims at enhancing the spatial resolution of an input image, is a hot topic in computer vision. In this paper, we present a hyperspectral image (HSI) SR method based on a deep information distillation network (IDN) and an intra-fusion operation. Specifically, bands are first selected at a fixed spectral interval and super-resolved by an IDN, which employs distillation blocks to gradually extract abundant and efficient features for reconstructing the selected bands. Second, the unselected bands are obtained via spectral correlation, yielding a coarse high-resolution (HR) HSI. Finally, the spectrally interpolated coarse HR HSI is intra-fused with the input HSI to achieve a finer HR HSI, making further use of the spatial-spectral information the unselected bands convey. Unlike most existing fusion-based HSI SR methods, the proposed intra-fusion operation does not require any auxiliary co-registered image as input, which makes the method more practical. Moreover, whereas the performance of most single-image HSI SR methods degrades significantly as image quality worsens, the proposed method deeply utilizes the spatial-spectral information and the mapping knowledge provided by the IDN, achieving more robust performance. Experimental data and comparative analysis demonstrate the effectiveness of the method.

18 pages, 651 KiB  
Article
Kernel Joint Sparse Representation Based on Self-Paced Learning for Hyperspectral Image Classification
Remote Sens. 2019, 11(9), 1114; https://doi.org/10.3390/rs11091114 - 09 May 2019
Cited by 5 | Viewed by 2693
Abstract
By means of joint sparse representation (JSR) and kernel representation, kernel joint sparse representation (KJSR) models can effectively model the intrinsic nonlinear relations of hyperspectral data and better exploit spatial neighborhood structure to improve the classification performance for hyperspectral images. However, the performance of KJSR is greatly affected by noisy or inhomogeneous pixels around the central testing pixel in the spatial domain. Motivated by the idea of self-paced learning (SPL), this paper proposes a self-paced KJSR (SPKJSR) model to adaptively learn weights and sparse coefficient vectors for different neighboring pixels in the kernel-based feature space. The SPL strategy learns a weight indicating the difficulty of each pixel within a spatial neighborhood. By assigning small weights to unimportant or complex pixels, the negative effect of inhomogeneous or noisy neighboring pixels can be suppressed, making SPKJSR much more robust. Experimental results on the Indian Pines and Salinas hyperspectral data sets demonstrate that SPKJSR is much more effective than traditional JSR and KJSR models.
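The hard-threshold variant of self-paced learning assigns binary weights from per-sample losses: only samples the current model already fits well ("easy" samples) participate in the next update. A simplified sketch (the actual SPKJSR weighting in the kernel feature space is more involved):

```python
import numpy as np

def spl_weights(losses, lam):
    """Hard self-paced weights: w_i = 1 if loss_i < lam else 0,
    so easy samples (low loss) dominate the current round."""
    return (np.asarray(losses, dtype=float) < lam).astype(float)

losses = np.array([0.2, 1.5, 0.4, 3.0])
print(spl_weights(losses, lam=1.0))  # [1. 0. 1. 0.]
```

The "age" parameter lam is grown over iterations, so harder samples are admitted gradually; in a neighborhood-based classifier such as SPKJSR, this is what lets noisy or inhomogeneous neighboring pixels be down-weighted.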
