Topic Editors

Dr. Yue Wu, Department of Computer Science and Technology, Xidian University, Xi'an 710071, China
Prof. Dr. Kai Qin, Department of Computer Science and Software Engineering, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
Prof. Dr. Maoguo Gong, Key Laboratory of Intelligent Perception and Image Understanding, Xidian University, Xi'an 710071, China
Prof. Dr. Qiguang Miao, Department of Computer Science and Technology, Xidian University, Xi'an 710071, China

Computational Intelligence in Remote Sensing

Abstract submission deadline: closed (31 December 2022)
Manuscript submission deadline: closed (31 March 2023)
Viewed by 105376

A printed edition is available here.

Topic Information

Dear Colleagues,

With the development of Earth-observation techniques, huge volumes of remote sensing data with high spectral, spatial, and temporal resolution are captured continuously, and remote sensing data processing and analysis have been successfully applied in numerous fields, including geography, environmental monitoring, land survey, disaster management, and mineral exploration. They also have military, intelligence, commercial, economic, planning, and humanitarian applications, among others. The processing, analysis, and application of remote sensing data face many challenges, such as the sheer volume of data, complex data structures, limited labeled samples, and nonconvex optimization. Computational intelligence techniques, which are inspired by biological intelligent systems, can provide possible solutions to these problems.

Computational intelligence (CI) is the theory, design, application, and development of biologically and linguistically motivated computational paradigms. Traditionally, the three main pillars of CI have been neural networks, fuzzy systems, and evolutionary computation. Over time, however, many other nature-inspired computing paradigms have evolved. Thus, CI is an evolving field, and at present, in addition to these three main constituents, it encompasses computing paradigms such as ambient intelligence, artificial life, cultural learning, artificial endocrine networks, social reasoning, and artificial hormone networks. CI plays a major role in developing successful intelligent systems, including games and cognitive developmental systems. Over the last few years, there has been an explosion of research on deep learning, in particular deep convolutional neural networks, and deep learning has become a core method for artificial intelligence. In fact, some of today's most successful AI systems are based on CI. In the future, CI is expected to produce effective solutions to the challenges in remote sensing.

This Topic aims to provide a forum for disseminating the achievements related to the research and applications of computational intelligence techniques for remote sensing (e.g., multi-/hyper-spectral, SAR and LIDAR) analysis and applications, with topics including but not limited to:

  • Neural networks in remote sensing;
  • Evolutionary computation in remote sensing;
  • Fuzzy logic and systems in remote sensing;
  • Artificial intelligence in remote sensing;
  • Machine learning in remote sensing;
  • Deep learning in remote sensing;
  • Earth observation big data intelligence;
  • Remote sensing image analysis;
  • Remote sensing imagery.

Dr. Yue Wu
Prof. Dr. Kai Qin
Prof. Dr. Maoguo Gong
Prof. Dr. Qiguang Miao
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • neural networks
  • computer vision
  • image processing
  • remote sensing
  • multi-/hyper-spectral
  • synthetic aperture radar
  • LIDAR
  • evolutionary computation
  • fuzzy logic and systems
  • earth observation big data intelligence
  • remote sensing image analysis
  • remote sensing imagery

Participating Journals

Each participating journal is listed with its impact factor, CiteScore, launch year, median time to first decision, and article processing charge (APC):

  • Applied Sciences (applsci): Impact Factor 2.7, CiteScore 4.5, launched 2011, first decision 16.9 days, APC CHF 2400
  • Electronics (electronics): Impact Factor 2.9, CiteScore 4.7, launched 2012, first decision 15.6 days, APC CHF 2400
  • Mathematics (mathematics): Impact Factor 2.4, CiteScore 3.5, launched 2013, first decision 16.9 days, APC CHF 2600
  • Remote Sensing (remotesensing): Impact Factor 5.0, CiteScore 7.9, launched 2009, first decision 23 days, APC CHF 2700
  • Algorithms (algorithms): Impact Factor 2.3, CiteScore 3.7, launched 2008, first decision 15 days, APC CHF 1600
  • AI (ai): Impact Factor and CiteScore not yet available, launched 2020, first decision 20.8 days, APC CHF 1600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing research from the start and supporting authors throughout their research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the following benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas with a time-stamped public record of your work;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (45 papers)

3 pages, 187 KiB  
Editorial
Computational Intelligence in Remote Sensing
by Yue Wu, Maoguo Gong, Qiguang Miao and Kai Qin
Remote Sens. 2023, 15(22), 5325; https://doi.org/10.3390/rs15225325 - 12 Nov 2023
Viewed by 789
Abstract
With the development of Earth observation techniques, vast amounts of remote sensing data with a high spectral–spatial–temporal resolution are captured all the time, and remote sensing data processing and analysis have been successfully used in numerous fields, including geography, environmental monitoring, land survey, disaster management, mineral exploration and more [...] Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
16 pages, 5976 KiB  
Article
An Exploratory Verification Method for Validation of Sea Surface Radiance of HY-1C Satellite UVI Payload Based on SOA Algorithm
by Lei Li, Dayi Yin, Qingling Li, Quan Zhang and Zhihua Mao
Electronics 2023, 12(13), 2766; https://doi.org/10.3390/electronics12132766 - 21 Jun 2023
Cited by 1 | Viewed by 577
Abstract
To support the application of ocean surface radiance data from the ultraviolet imager (UVI) payload of the HY-1C oceanographic satellite and to improve the quantification level of ocean observation technology, a validation study of the UVI ocean surface radiance data was conducted to provide a basis for the quantitative application of its data products. The UVI payload compensates for the lack of ultraviolet-band detection capability in modern ocean remote sensing satellites. The Ultra-Violet Dual-band RadiAnce Measurement System (UVDRAMS) was used to verify the surface radiance data collected at 16 stations in the study area against the pupil radiance data collected by the UVI payload, to establish an effective radiative transfer model, and to identify the model parameters using the seeker optimization algorithm (SOA). The study shows that 97.2% of the incident pupil radiance of the UVI payload is contributed by atmospheric reflected radiance and only 2.8% originates from the real radiation of the water surface, while the high signal-to-noise ratio of the UVI payload can still effectively distinguish water-body reflectance and the standard deviation of the on-satellite radiation variation. This meets the observation requirements and provides a new approach and technical basis for further quantitative research. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

18 pages, 1457 KiB  
Article
High-Quality Instance Mining and Dynamic Label Assignment for Weakly Supervised Object Detection in Remote Sensing Images
by Li Zeng, Yu Huo, Xiaoliang Qian and Zhiwu Chen
Electronics 2023, 12(13), 2758; https://doi.org/10.3390/electronics12132758 - 21 Jun 2023
Cited by 2 | Viewed by 963
Abstract
Weakly supervised object detection (WSOD) in remote sensing images (RSIs) has attracted more and more attention because its training relies only on image-level category labels, which significantly reduces the cost of manual annotation. Research on WSOD has produced many promising results, but most WSOD methods still face two challenges. The first is that WSOD detections tend to cover only the most discriminative regions of an object rather than the whole object. The second is that the traditional pseudo-instance label assignment strategy cannot adapt to changes in the quality distribution of proposals during training, which hinders training a high-performance detector. To tackle the first challenge, a novel high-quality seed instance mining (HSIM) module is designed to mine high-quality seed instances. Specifically, the proposal comprehensive score (PCS), consisting of the traditional proposal score (PS) and the proposal space contribution score (PSCS), is designed as a new metric for mining seed instances, where the PS indicates the probability that a proposal belongs to a certain category and the PSCS, computed from the spatial correlation between top-scoring proposals, evaluates how completely a proposal covers an object. Consequently, a high PCS encourages the WSOD model to mine high-quality seed instances. To tackle the second challenge, a dynamic pseudo-instance label assignment (DPILA) strategy is developed that dynamically sets the label assignment threshold so that high-quality instances are used for training. The DPILA can thus better adapt to the changing distribution of proposals during training and further improve model performance. Ablation studies verify the validity of the proposed PCS and DPILA, and comparison experiments show that our method outperforms other advanced WSOD methods on two popular RSI datasets. Full article
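The abstract does not give the exact formula of the proposal comprehensive score, so the following is only a minimal sketch of the general idea: combine a proposal's class score with a term derived from its spatial overlap with the other top-scoring proposals. All function and variable names here are hypothetical, not the authors' implementation.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def proposal_comprehensive_score(boxes, scores, top_k=10):
    """Toy PCS: class score (PS) plus the average overlap with the other
    top-scoring proposals (a stand-in for the spatial-contribution term)."""
    scores = np.asarray(scores, dtype=float)
    top_idx = np.argsort(scores)[::-1][:top_k]
    pcs = np.zeros_like(scores)
    for i in top_idx:
        overlaps = [iou(boxes[i], boxes[j]) for j in top_idx if j != i]
        pscs = np.mean(overlaps) if overlaps else 0.0
        pcs[i] = scores[i] + pscs
    return pcs
```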
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

18 pages, 15554 KiB  
Article
CNTR-YOLO: Improved YOLOv5 Based on ConvNext and Transformer for Aircraft Detection in Remote Sensing Images
by Fengyun Zhou, Honggui Deng, Qiguo Xu and Xin Lan
Electronics 2023, 12(12), 2671; https://doi.org/10.3390/electronics12122671 - 14 Jun 2023
Cited by 6 | Viewed by 2027
Abstract
Aircraft detection in remote sensing images is an important branch of target detection due to the military value of aircraft. However, the diverse categories of aircraft and the intricate background of remote sensing images often lead to insufficient detection accuracy. Here, we present the CNTR-YOLO algorithm based on YOLOv5 as a solution to this issue. The CNTR-YOLO algorithm improves detection accuracy through three primary strategies. (1) We deploy DenseNet in the backbone to address the vanishing gradient problem during training and enhance the extraction of fundamental information. (2) The CBAM attention mechanism is integrated into the neck to minimize background noise interference. (3) The C3CNTR module is designed based on ConvNext and Transformer to clarify the target’s position in the feature map from both local and global perspectives. This module is applied before the prediction head to optimize the accuracy of prediction results. Our proposed algorithm is validated on the MAR20 and DOTA datasets. The results on the MAR20 dataset show that the mean average precision (mAP) of CNTR-YOLO reached 70.1%, which is a 3.3% improvement compared with YOLOv5l. On the DOTA dataset, the results indicate that the mAP of CNTR-YOLO reached 63.7%, which is 2.5% higher than YOLOv5l. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

27 pages, 1518 KiB  
Article
TPENAS: A Two-Phase Evolutionary Neural Architecture Search for Remote Sensing Image Classification
by Lei Ao, Kaiyuan Feng, Kai Sheng, Hongyu Zhao, Xin He and Zigang Chen
Remote Sens. 2023, 15(8), 2212; https://doi.org/10.3390/rs15082212 - 21 Apr 2023
Cited by 5 | Viewed by 1578
Abstract
The application of deep learning to remote sensing image classification has attracted increasing attention from industry and academia. However, manually designing remote sensing image classification models based on convolutional neural networks usually requires sophisticated expert knowledge, and it is notoriously difficult to design a model with both high classification accuracy and few parameters. Recently, neural architecture search (NAS) has emerged as an effective method that can greatly reduce the heavy burden of manually designing models. However, it remains a challenge to search a huge search space for a classification model with high classification accuracy and few parameters. To tackle this challenge, we propose TPENAS, a two-phase evolutionary neural architecture search framework, which optimizes the model using computational intelligence techniques in two search phases. In the first search phase, TPENAS searches for the optimal depth of the model. In the second search phase, TPENAS searches for the structure of the model from the perspective of the whole model. Experiments on three open benchmark datasets demonstrate that our proposed TPENAS outperforms state-of-the-art baselines in both classification accuracy and parameter reduction. Full article
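As an illustration of the kind of search the first phase performs, here is a minimal, generic evolutionary loop over a single integer hyperparameter (the network depth). It is not the authors' implementation; `fitness_fn`, the mutation scheme, and all parameter values are assumptions.

```python
import random

def evolve_depth(fitness_fn, depth_range=(3, 20), pop_size=8, generations=10):
    """Toy first-phase search: evolve a single integer (network depth) by
    truncation selection and mutation. fitness_fn(depth) should train or
    estimate a model of that depth and return its validation accuracy."""
    population = [random.randint(*depth_range) for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness_fn(d), d) for d in population]
        scored.sort(reverse=True)                        # higher accuracy first
        parents = [d for _, d in scored[: pop_size // 2]]
        children = [min(max(d + random.choice([-2, -1, 1, 2]), depth_range[0]),
                        depth_range[1]) for d in parents]
        population = parents + children
    return max(population, key=fitness_fn)
```

The second phase would then fix the depth returned here and evolve the internal structure, which requires an encoding of whole architectures rather than a single integer.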
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

16 pages, 8484 KiB  
Article
Remote Sensing Image Road Extraction Network Based on MSPFE-Net
by Zhiheng Wei and Zhenyu Zhang
Electronics 2023, 12(7), 1713; https://doi.org/10.3390/electronics12071713 - 04 Apr 2023
Cited by 4 | Viewed by 1243
Abstract
Road extraction is an active task in the field of remote sensing and has received wide attention and application, especially with deep learning methods. However, many convolutional neural network models ignore the attributes of roads, whose shapes are narrow, elongated, and discontinuous. In addition, the continuity and accuracy of road extraction are affected by narrow roads and roads occluded by trees. This paper designs a network (MSPFE-Net) based on multi-level strip pooling and feature enhancement. The overall architecture of MSPFE-Net is an encoder-decoder, and the network has two main modules. One is the multi-level strip pooling module, which aggregates long-range dependencies at different levels to ensure the connectivity of the road; the other is the feature enhancement module, which is used to enhance the clarity and local details of the road. We perform a series of experiments on the public Massachusetts Roads Dataset, and the experimental results show that the proposed model outperforms the comparison models. Full article
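For readers unfamiliar with strip pooling, a minimal PyTorch sketch of a single strip-pooling block (pooling separately along rows and columns and broadcasting the result back over the feature map) is shown below. It illustrates the general operation for elongated structures such as roads, not the exact multi-level module used in MSPFE-Net; the module and layer names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripPooling(nn.Module):
    """Minimal strip-pooling block: average-pool along rows and columns
    separately, refine each strip with a 1-D convolution, broadcast both
    strips back to the full map, and use the fused map to reweight x."""
    def __init__(self, channels):
        super().__init__()
        self.conv_col = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.conv_row = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        strip_h = self.conv_col(x.mean(dim=3, keepdim=True))   # (n, c, h, 1)
        strip_w = self.conv_row(x.mean(dim=2, keepdim=True))   # (n, c, 1, w)
        fused = F.relu(strip_h.expand(-1, -1, -1, w) + strip_w.expand(-1, -1, h, -1))
        return x * torch.sigmoid(self.fuse(fused))
```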
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

22 pages, 5138 KiB  
Article
Enhanced CNN Classification Capability for Small Rice Disease Datasets Using Progressive WGAN-GP: Algorithms and Applications
by Yang Lu, Xianpeng Tao, Nianyin Zeng, Jiaojiao Du and Rou Shang
Remote Sens. 2023, 15(7), 1789; https://doi.org/10.3390/rs15071789 - 27 Mar 2023
Cited by 3 | Viewed by 1575
Abstract
An enhancement generator model based on a progressive Wasserstein generative adversarial network with gradient penalty (PWGAN-GP) is proposed to solve the problem of low recognition accuracy caused by the lack of rice disease image samples for training CNNs. First, the generator uses a progressive training method to improve the resolution of the generated samples step by step, which reduces the difficulty of training. Second, to measure the similarity distance between samples accurately, a loss function is added to the discriminator that makes the generated samples more stable and realistic. Finally, the enhanced image datasets of three rice diseases are used for training and testing typical CNN models. The experimental results show that the proposed PWGAN-GP achieves the lowest FID score of 67.12 compared with WGAN, DCGAN, and WGAN-GP. When training VGG-16, GoogLeNet, and ResNet-50 with samples generated by PWGAN-GP, accuracy increased by 10.44%, 12.38%, and 13.19%, respectively, and PWGAN-GP improved accuracy by 4.29%, 4.61%, and 3.96% for the three CNN models over the traditional image data augmentation (TIDA) method. Through comparative analysis, the best model for identifying rice disease is ResNet-50 with PWGAN-GP at the ×2 enhancement intensity, with an average accuracy of 98.14%. These results prove that the PWGAN-GP method can effectively improve the classification ability of CNNs. Full article
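The "-GP" in WGAN-GP refers to the gradient penalty added to the discriminator loss. A standard PyTorch formulation of that term is sketched below; the paper's progressive training schedule and additional discriminator loss term are not reproduced here, and the function name is hypothetical.

```python
import torch

def gradient_penalty(discriminator, real, fake, device="cpu"):
    """WGAN-GP penalty: push the discriminator's gradient norm towards 1
    on random interpolations between real and generated image batches."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)   # per-sample mix
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_interp = discriminator(interp)
    grads = torch.autograd.grad(outputs=d_interp, inputs=interp,
                                grad_outputs=torch.ones_like(d_interp),
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()
```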
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

13 pages, 2548 KiB  
Article
Boundary-Aware Salient Object Detection in Optical Remote-Sensing Images
by Longxuan Yu, Xiaofei Zhou, Lingbo Wang and Jiyong Zhang
Electronics 2022, 11(24), 4200; https://doi.org/10.3390/electronics11244200 - 15 Dec 2022
Cited by 5 | Viewed by 1625
Abstract
Different from traditional natural scene images, optical remote-sensing images (RSIs) suffer from diverse imaging orientations, cluttered backgrounds, and various scene types. Therefore, salient object detection methods for optical RSIs require effective localization and segmentation to deal with complex scenarios, especially small targets, serious occlusion, and multiple targets. However, the results of existing models are incapable of separating salient objects from backgrounds with clear boundaries. To tackle this problem, we introduce boundary information to perform salient object detection in optical RSIs. Specifically, we first combine the encoder's low-level and high-level features (i.e., abundant local spatial and semantic information) via a feature-interaction operation, yielding boundary information. Then, the boundary cues are introduced into each decoder block, where the decoder features are directed to focus on the boundary details and objects simultaneously. In this way, we can generate high-quality saliency maps that highlight salient objects in optical RSIs completely and accurately. Extensive experiments are performed on a public dataset (i.e., the ORSSD dataset), and the experimental results demonstrate the effectiveness of our model compared with cutting-edge saliency models. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

19 pages, 2248 KiB  
Article
Self-Attention and Convolution Fusion Network for Land Cover Change Detection over a New Data Set in Wenzhou, China
by Yiqun Zhu, Guojian Jin, Tongfei Liu, Hanhong Zheng, Mingyang Zhang, Shuang Liang, Jieyi Liu and Linqi Li
Remote Sens. 2022, 14(23), 5969; https://doi.org/10.3390/rs14235969 - 25 Nov 2022
Cited by 3 | Viewed by 1394
Abstract
With increasing urbanization, obtaining urban change information through land cover change detection techniques is of great significance. However, existing methods still struggle to achieve convincing performance and are insufficient for practical applications. In this paper, we construct a new data set, named the Wenzhou data set, aiming to detect the land cover changes of Wenzhou City and thus update geographic data for the expanding urban area. Based on this data set, we propose a new self-attention and convolution fusion network (SCFNet) for land cover change detection on the Wenzhou data set. The SCFNet is composed of three modules: a backbone (the local–global pyramid feature extractor of SLGPNet), a self-attention and convolution fusion module (SCFM), and a residual refinement module (RRM). The SCFM combines the self-attention mechanism with convolutional layers to acquire a better feature representation, while the RRM exploits dilated convolutions with different dilation rates to refine more accurate and complete predictions over changed areas. In addition, to explore the performance of existing computational intelligence techniques in this application scenario, we selected six classical and advanced deep learning-based methods for systematic testing and comparison. Extensive experiments on the Wenzhou and Guangzhou data sets demonstrate that our SCFNet clearly outperforms existing methods. On the Wenzhou data set, the precision, recall, and F1-score of our SCFNet are all above 85%. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

17 pages, 5402 KiB  
Article
Improved One-Stage Detectors with Neck Attention Block for Object Detection in Remote Sensing
by Kaiqi Lang, Mingyu Yang, Hao Wang, Hanyu Wang, Zilong Wang, Jingzhong Zhang and Honghai Shen
Remote Sens. 2022, 14(22), 5805; https://doi.org/10.3390/rs14225805 - 17 Nov 2022
Cited by 7 | Viewed by 2414
Abstract
Object detection in remote sensing is becoming a conspicuous challenge with the rapidly increasing quantity and quality of remote sensing images. Although the application of Deep Learning has obtained remarkable performance in Computer Vision, detecting multi-scale targets in remote sensing images is still an unsolved problem, especially for small instances which possess limited features and intricate backgrounds. In this work, we managed to cope with this problem by designing a neck attention block (NAB), a simple and flexible module which combines the convolutional bottleneck structure and the attention mechanism, different from traditional attention mechanisms that focus on designing complicated attention branches. In addition, Vehicle in High-Resolution Aerial Imagery (VHRAI), a diverse, dense, and challenging dataset, was proposed for studying small object detection. To validate the effectiveness and generalization of NAB, we conducted experiments on a variety of datasets with the improved YOLOv3, YOLOv4-Tiny, and SSD. On VHRAI, the improved YOLOv3 and YOLOv4-Tiny surpassed the original models by 1.98% and 1.89% mAP, respectively. Similarly, they exceeded the original models by 1.12% and 3.72% mAP on TGRS-HRRSD, a large multi-scale dataset. Including SSD, these three models also showed excellent generalizability on PASCAL VOC. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

20 pages, 2634 KiB  
Article
Frequency Spectrum Intensity Attention Network for Building Detection from High-Resolution Imagery
by Dan Feng, Hongyun Chu and Ling Zheng
Remote Sens. 2022, 14(21), 5457; https://doi.org/10.3390/rs14215457 - 30 Oct 2022
Cited by 3 | Viewed by 1439
Abstract
Computational intelligence techniques, especially methods based on neural networks, have been widely used for automatic building detection from high-resolution remote sensing imagery. However, existing methods do not exploit the value of high-frequency and low-frequency information in the frequency domain for extracting building features from remote sensing images. To overcome these limitations, this paper proposes a frequency spectrum intensity attention network (FSIANet) with an encoder–decoder structure for automatic building detection. The proposed FSIANet involves two main innovations. First, a novel, plug-and-play frequency spectrum intensity attention (FSIA) mechanism is devised to enhance feature representation by evaluating the informative abundance of the feature maps; the FSIA is deployed after each convolutional block in the proposed FSIANet. Second, an atrous frequency spectrum attention pyramid (AFSAP) is constructed by introducing FSIA into the widely used atrous spatial pyramid pooling. The AFSAP selects features with a high response to building semantics at each scale and weakens features with a low response, thus enhancing the feature representation of buildings. The proposed FSIANet is evaluated on two large public datasets (East Asia and the Inria Aerial Image Dataset), which demonstrates that the proposed method achieves state-of-the-art performance in terms of F1-score and intersection-over-union. Full article
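The paper's exact FSIA formulation is not given in the abstract; the sketch below only illustrates the general idea of driving a channel-attention weight from the Fourier-spectrum intensity of each feature map. The module name, the MLP design, and the reduction ratio are all assumptions.

```python
import torch
import torch.nn as nn

class FrequencySpectrumAttention(nn.Module):
    """Toy channel attention driven by frequency content: weight each channel
    by an MLP applied to the mean magnitude of its 2-D Fourier spectrum,
    used here as a rough proxy for 'informative abundance'."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (n, c, h, w)
        spectrum = torch.fft.fft2(x, norm="ortho").abs()
        intensity = spectrum.mean(dim=(2, 3))          # (n, c) spectral intensity
        weights = self.mlp(intensity).unsqueeze(-1).unsqueeze(-1)
        return x * weights
```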
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

23 pages, 12075 KiB  
Article
ELCD: Efficient Lunar Crater Detection Based on Attention Mechanisms and Multiscale Feature Fusion Networks from Digital Elevation Models
by Lili Fan, Jiabin Yuan, Keke Zha and Xunan Wang
Remote Sens. 2022, 14(20), 5225; https://doi.org/10.3390/rs14205225 - 19 Oct 2022
Cited by 5 | Viewed by 1873
Abstract
The detection and counting of lunar impact craters are crucial for the selection of detector landing sites and the estimation of the age of the Moon. Traditional crater detection methods based on machine learning and image processing technologies are inefficient when crater distributions, overlaps, and sizes vary, and most of them focus mainly on detection accuracy while ignoring efficiency. In this paper, we propose an efficient lunar crater detection (ELCD) algorithm based on a novel crater edge segmentation network (AFNet) to detect lunar craters from digital elevation model (DEM) data. First, in AFNet, a lightweight attention mechanism module is introduced to enhance the feature extraction capability of the network, and a new multiscale feature fusion module is designed that fuses different multi-level feature maps to reduce the information loss of the output map. Then, considering the imbalance in the classification and distribution of the crater data, an efficient crater edge segmentation loss function (CESL) is designed to improve network optimization performance. Lastly, crater positions are obtained from the network output map by the crater edge extraction (CEA) algorithm. Experiments were conducted on the PyTorch platform using two lunar crater catalogs to evaluate the ELCD. The experimental results show that ELCD has superior detection accuracy and inference speed compared with other state-of-the-art crater detection algorithms. As with most crater detection models that use DEM data, some small craters may be treated as noise and cannot be detected. The proposed algorithm can be used to improve the accuracy and speed of detecting candidate landing sites for deep space probes, and the discovery of new craters can increase the size of the original data set. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

21 pages, 11614 KiB  
Article
MANet: A Network Architecture for Remote Sensing Spatiotemporal Fusion Based on Multiscale and Attention Mechanisms
by Huimin Cao, Xiaobo Luo, Yidong Peng and Tianshou Xie
Remote Sens. 2022, 14(18), 4600; https://doi.org/10.3390/rs14184600 - 15 Sep 2022
Cited by 6 | Viewed by 3587
Abstract
Obtaining high-spatial–high-temporal (HTHS) resolution remote sensing images from a single sensor remains a great challenge due to the cost and technical limitations. Spatiotemporal fusion (STF) technology breaks through the technical limitations of existing sensors and provides a convenient and economical solution for obtaining HTHS resolution images. At present, most STF methods use stacked convolutional layers to extract image features and then obtain fusion images by using a summation strategy. However, these convolution operations may lead to the loss of feature information, and the summation strategy results in poorly fused images due to a lack of consideration of global spatial feature information. To address these issues, this article proposes a STF network architecture based on multiscale and attention mechanisms (MANet). The multiscale mechanism module composed of dilated convolutions is used to extract the detailed features of low-spatial resolution remote sensing images at multiple scales. The channel attention mechanism adaptively adjusts the weights of the feature map channels to retain more temporal and spatial information in the upsampling process, while the non-local attention mechanism adjusts the initial fusion images to obtain more accurate predicted images by calculating the correlation between pixels. We use two datasets with different characteristics to conduct the experiments, and the results prove that the proposed MANet method with fewer parameters obtains better fusion results than the existing machine learning-based and deep learning-based fusion methods. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

19 pages, 5477 KiB  
Article
Fast Seismic Landslide Detection Based on Improved Mask R-CNN
by Rao Fu, Jing He, Gang Liu, Weile Li, Jiaqi Mao, Minhui He and Yuanyang Lin
Remote Sens. 2022, 14(16), 3928; https://doi.org/10.3390/rs14163928 - 12 Aug 2022
Cited by 29 | Viewed by 2812
Abstract
For emergency rescue and damage assessment after an earthquake, quick detection of seismic landslides in the affected areas is crucial. The purpose of this study is to quickly determine the extent and size of post-earthquake seismic landslides using a small amount of post-earthquake seismic landslide imagery data. This information will serve as a foundation for emergency rescue efforts, disaster estimation, and other actions. In this study, Unmanned Air Vehicle (UAV) remote sensing images acquired over Wenchuan County, Sichuan Province, China, after the 2008 earthquake are used as the data source. ResNet-50, ResNet-101, and Swin Transformer are used as the backbone networks of Mask R-CNN to train models and identify seismic landslides in post-quake UAV images. The training samples are augmented by data augmentation methods, and transfer learning is used to reduce the training time required and enhance the generalization of the model. Finally, transfer learning was used to apply the model to uncalibrated post-earthquake seismic landslide imagery from Haiti. With Precision and F1 scores of 0.9328 and 0.9025, respectively, the results demonstrate that Swin Transformer performs better as a backbone network than the original Mask R-CNN, YOLOv5, and Faster R-CNN. On Haiti's post-earthquake images, the improved model performs significantly better than the original model in terms of accuracy and recognition. The model for identifying post-earthquake seismic landslides developed in this paper has good generalizability and transferability as well as good application potential in emergency responses to earthquake disasters, offering strong support for post-earthquake emergency rescue and disaster assessment. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

21 pages, 6019 KiB  
Article
Pomelo Tree Detection Method Based on Attention Mechanism and Cross-Layer Feature Fusion
by Haotian Yuan, Kekun Huang, Chuanxian Ren, Yongzhu Xiong, Jieli Duan and Zhou Yang
Remote Sens. 2022, 14(16), 3902; https://doi.org/10.3390/rs14163902 - 11 Aug 2022
Cited by 7 | Viewed by 4031
Abstract
Deep learning is the subject of increasing research for fruit tree detection. Previously developed deep-learning-based models are either too large to perform real-time tasks or too small to extract good enough features. Moreover, there has been scarce research on the detection of pomelo trees. This paper proposes a pomelo tree-detection method that introduces the attention mechanism and a Ghost module into the lightweight model network, as well as a feature-fusion module to improve the feature-extraction ability and reduce computation. The proposed method was experimentally validated and showed better detection performance and fewer parameters than some state-of-the-art target-detection algorithms. The results indicate that our method is more suitable for pomelo tree detection. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

18 pages, 4414 KiB  
Article
Can Machine Learning Algorithms Successfully Predict Grassland Aboveground Biomass?
by Yue Wang, Rongzhu Qin, Huzi Cheng, Tiangang Liang, Kaiping Zhang, Ning Chai, Jinlong Gao, Qisheng Feng, Mengjing Hou, Jie Liu, Chenli Liu, Wenjuan Zhang, Yanjie Fang, Jie Huang and Feng Zhang
Remote Sens. 2022, 14(16), 3843; https://doi.org/10.3390/rs14163843 - 09 Aug 2022
Cited by 3 | Viewed by 2004
Abstract
The timely and accurate estimation of grassland aboveground biomass (AGB) is important. Machine learning (ML) has been widely used in the past few decades to deal with complex relationships. In this study, based on an 11-year period (2005–2015) of AGB data (1620 valid AGB measurements) for the Three-River Headwaters Region (TRHR), combined with remote sensing data, weather data, terrain data, and soil data, we compared the predictive performance of a linear statistical method and machine learning (ML) methods and evaluated their temporal and spatial scalability. The results show that machine learning can predict grassland biomass well, and the existence of an independent validation set helps to better understand the predictive performance of a model. Our findings show the following: (1) The random forest (RF) based on variables obtained through stepwise regression analysis (SRA) was the best model (R2vad = 0.60, RMSEvad = 1245.85 kg DW (dry matter weight)/ha, AIC = 5583.51, and BIC = 5631.10). It also had the best predictive capability for years with unknown areas (R2indep = 0.50, RMSEindep = 1332.59 kg DW/ha). (2) Variable screening improved the accuracy of all of the models. (3) The predictive accuracy of all models varied between 0.45 and 0.60, and the RMSE values were lower than 1457.26 kg DW/ha, indicating that the results were reliably accurate. Full article
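As a minimal illustration of the modelling setup described above (random forest regression of AGB on screened predictors, evaluated on a held-out set), a scikit-learn sketch with synthetic placeholder data might look like the following. The predictor matrix, split, and hyperparameters are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical design matrix X (remote sensing, weather, terrain, and soil
# variables after screening) and measured AGB vector y in kg DW/ha.
rng = np.random.default_rng(0)
X, y = rng.random((1620, 12)), rng.random(1620) * 4000

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_val)
print("R2:", r2_score(y_val, pred))
print("RMSE:", mean_squared_error(y_val, pred) ** 0.5)
```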
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

20 pages, 1376 KiB  
Article
Evolutionary Computational Intelligence-Based Multi-Objective Sensor Management for Multi-Target Tracking
by Shuang Liang, Yun Zhu, Hao Li and Junkun Yan
Remote Sens. 2022, 14(15), 3624; https://doi.org/10.3390/rs14153624 - 28 Jul 2022
Cited by 4 | Viewed by 1378
Abstract
In multi-sensor systems (MSSs), sensor selection is a critical technique for obtaining high-quality sensing data. However, when the number of sensors to be selected is unknown in advance, sensor selection is essentially non-deterministic polynomial-hard (NP-hard), and finding the optimal solution is computationally unacceptable. To alleviate these issues, we propose a novel sensor selection approach based on evolutionary computational intelligence for tracking multiple targets in the MSSs. The sensor selection problem is formulated in a partially observed Markov decision process framework by modeling multi-target states as labeled multi-Bernoulli random finite sets. Two conflicting task-driven objectives are considered: minimization of the uncertainty in posterior cardinality estimates and minimization of the number of selected sensors. By modeling sensor selection as a multi-objective optimization problem, we develop a binary constrained evolutionary multi-objective algorithm based on non-dominating sorting and dynamically select a subset of sensors at each time step. Numerical studies are used to evaluate the performance of the proposed approach, where the MSS tracks multiple moving targets with nonlinear/linear dynamic models and nonlinear measurements. The results show that our method not only significantly reduces the number of selected sensors but also provides superior tracking accuracy compared to generic sensor selection methods. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

11 pages, 3652 KiB  
Article
Recursive Least Squares for Near-Lossless Hyperspectral Data Compression
by Tie Zheng, Yuqi Dai, Changbin Xue and Li Zhou
Appl. Sci. 2022, 12(14), 7172; https://doi.org/10.3390/app12147172 - 16 Jul 2022
Cited by 5 | Viewed by 1376
Abstract
The hyperspectral image compression scheme is a trade-off between the limited hardware resources of the on-board platform and the ever-growing resolution of the optical instruments. Predictive coding attracts researchers due to its low computational complexity and moderate memory requirements. We propose a near-lossless prediction-based compression scheme that removes spatial and spectral redundant information, thereby significantly reducing the size of hyperspectral images. This scheme predicts the target pixel’s value via a linear combination of previous pixels. The weight matrix of the predictor is iteratively updated using a recursive least squares filter with a loop quantizer. The optimal number of bands for prediction was analyzed experimentally. The results indicate that the proposed scheme outperforms state-of-the-art compression methods in terms of the compression ratio and quality retrieval. Full article
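A toy version of the core recursive-least-squares predictor (predicting each pixel of a band from the co-located pixels of previous bands and updating the weights per pixel) is sketched below. The quantization loop and entropy coding of the residuals, which the actual near-lossless scheme requires, are omitted, and the function name and parameter values are assumptions.

```python
import numpy as np

def rls_predict_band(context, target, lam=0.999, delta=1e-2):
    """Toy RLS predictor for one spectral band.
    context: (n_pixels, n_prev) co-located pixels from previous bands.
    target:  (n_pixels,) pixels of the current band.
    Returns the prediction residuals that a coder would quantise and encode."""
    n_pixels, n_prev = context.shape
    w = np.zeros(n_prev)                 # predictor weights
    P = np.eye(n_prev) / delta           # inverse correlation matrix estimate
    residuals = np.empty(n_pixels)
    for i in range(n_pixels):
        x = context[i]
        residuals[i] = target[i] - w @ x         # a priori prediction error
        k = P @ x / (lam + x @ P @ x)            # gain vector
        w = w + k * residuals[i]                 # weight update
        P = (P - np.outer(k, x @ P)) / lam       # inverse correlation update
    return residuals
```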
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

18 pages, 3704 KiB  
Article
Multi-Parameter Inversion of AIEM by Using Bi-Directional Deep Neural Network
by Yu Wang, Zi He, Ying Yang, Dazhi Ding, Fan Ding and Xun-Wang Dang
Remote Sens. 2022, 14(14), 3302; https://doi.org/10.3390/rs14143302 - 08 Jul 2022
Cited by 3 | Viewed by 1343
Abstract
A novel multi-parameter inversion method is proposed for the Advanced Integral Equation Model (AIEM) using a bi-directional deep neural network. There is a very complex nonlinear relationship between the surface parameters (dielectric constant and roughness) and the radar backscattering coefficient. A traditional inverse neural network, constructed by using the backscattering coefficients as the input and the surface parameters as the output, leads to poor convergence and incorrect results, because many different sets of surface parameters can yield the same backscattering coefficient. Therefore, the proposed bi-directional deep neural network starts by building an AIEM-based forward deep neural network (AIEM-FDNN), whose inputs are the surface parameters and whose outputs are the backscattering coefficients. In this way, the weights and biases of the forward deep neural network can be optimized and then reused for the backward deep neural network (AIEM-BDNN). The multi-parameters are then updated by minimizing the loss between the output backscattering coefficients and the measured ones. By inserting a sigmoid function between the input and the first hidden layer, the input multi-parameters can be efficiently approximated and continuously updated. As a result, both the forward and backward deep neural networks can be built with these weights and biases, and by sharing them the training of a separate inverse network is avoided. The bi-directional deep neural network can not only predict the backscattering coefficient but also invert the surface parameters. Numerical results demonstrate that the RMSE of the backscattering coefficients calculated by the proposed bi-directional neural network can be reduced to 0.1%. The accuracy of the inverted parameters, including the real and imaginary parts of the dielectric constant, the root mean square height, and the correlation length, can be improved to 97.56%, 91.14%, 99.04%, and 98.45%, respectively. The bi-directional neural network also shows good accuracy for the inversion of the POLARSCAT measured data. Full article
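A minimal sketch of the general idea of reusing a trained forward network for inversion is shown below: freeze the parameter-to-backscatter network and optimise sigmoid-constrained inputs until its output matches the measurement. This is a simplified stand-in for illustration, not the AIEM-BDNN weight-sharing construction itself; all names and settings are hypothetical.

```python
import torch
import torch.nn as nn

def invert_surface_parameters(forward_net, measured_sigma, n_params=4,
                              steps=500, lr=0.05):
    """Toy 'backward' use of a trained forward network: freeze the network
    that maps surface parameters -> backscattering coefficients and optimise
    the (sigmoid-squashed) inputs so its output matches the measurement."""
    forward_net.eval()
    for p in forward_net.parameters():
        p.requires_grad_(False)
    z = torch.zeros(1, n_params, requires_grad=True)   # latent input parameters
    optimiser = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        params = torch.sigmoid(z)                       # keep inputs in (0, 1)
        loss = nn.functional.mse_loss(forward_net(params), measured_sigma)
        loss.backward()
        optimiser.step()
    return torch.sigmoid(z).detach()
```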
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

17 pages, 3548 KiB  
Article
Monitoring of Iron Ore Quality through Ultra-Spectral Data and Machine Learning Methods
by Ana Cristina Pinto Silva, Keyla Thayrinne Zoppi Coimbra, Levi Wellington Rezende Filho, Gustavo Pessin and Rosa Elvira Correa-Pabón
AI 2022, 3(2), 554-570; https://doi.org/10.3390/ai3020032 - 15 Jun 2022
Cited by 1 | Viewed by 3678
Abstract
Currently, most mining companies evaluate the quality of Fe ore through laboratory chemical analyses by X-ray fluorescence, where the focus is mainly on the Fe content and the presence of impurities. However, this type of analysis requires an investment of time and money, and the results are often available only after the ore has already been dispatched by the processing plant. Reflectance spectroscopy is an alternative method that can significantly contribute to this type of application, as it is a nondestructive analysis technique that does not require sample preparation and delivers results more rapidly. Among the challenges of working with reflectance spectroscopy is the large volume of data produced; one way to optimize this type of approach is to use machine learning techniques. Thus, the main objective of this study was the calibration and evaluation of models to analyze the quality of Fe from Sinter Feed collected from deposits in the Carajás Mineral Province, Brazil. To achieve this goal, machine learning models were tested using spectral libraries and X-ray fluorescence data from Sinter Feed samples. The most efficient models for estimating Fe were AdaBoost and the support vector machine. Our results highlight the possibility of analyzing samples without preparation and with an optimized analysis time, providing results in a timely manner to support decision-making in the production chain. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

10 pages, 1961 KiB  
Communication
Self-Supervised Pre-Training with Bridge Neural Network for SAR-Optical Matching
by Lixin Qian, Xiaochun Liu, Meiyu Huang and Xueshuang Xiang
Remote Sens. 2022, 14(12), 2749; https://doi.org/10.3390/rs14122749 - 08 Jun 2022
Cited by 1 | Viewed by 1666
Abstract
Due to the vast geometric and radiometric differences between SAR and optical images, SAR-optical image matching remains an intractable challenge. Despite the fact that the deep learning-based matching model has achieved great success, SAR feature embedding ability is not fully explored yet because of the lack of well-designed pre-training techniques. In this paper, we propose to employ the self-supervised learning method in the SAR-optical matching framework, in order to serve as a pre-training strategy for improving the representation learning ability of SAR images as well as optical images. We first use a state-of-the-art self-supervised learning method, Momentum Contrast (MoCo), to pre-train an optical feature encoder and an SAR feature encoder separately. Then, the pre-trained encoders are transferred to an advanced common representation learning model, Bridge Neural Network (BNN), to project the SAR and optical images into a more distinguishable common feature representation subspace, which leads to a high multi-modal image matching result. Experimental results on three SAR-optical matching benchmark datasets show that our proposed MoCo pre-training method achieves a high matching accuracy up to 0.873 even for the complex QXS-SAROPT SAR-optical matching dataset. BNN pre-trained with MoCo outperforms BNN with the most commonly used ImageNet pre-training, and achieves at most 4.4% gains in matching accuracy. Full article
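The central mechanism of the MoCo pre-training mentioned above is the momentum (exponential moving average) update of the key encoder; a standard PyTorch sketch of that single step is given below. The full contrastive loss, the negative-sample queue, and the BNN matching head are not shown, and the function name is hypothetical.

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """Core MoCo step: the key encoder is an exponential moving average of
    the query encoder rather than being updated by gradients."""
    for q_param, k_param in zip(query_encoder.parameters(),
                                key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1 - m)
```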
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

26 pages, 9524 KiB  
Article
Object Tracking and Geo-Localization from Street Images
by Daniel Wilson, Thayer Alshaabi, Colin Van Oort, Xiaohan Zhang, Jonathan Nelson and Safwan Wshah
Remote Sens. 2022, 14(11), 2575; https://doi.org/10.3390/rs14112575 - 27 May 2022
Cited by 5 | Viewed by 4201
Abstract
Object geo-localization from images is crucial to many applications such as land surveying, self-driving, and asset management. Current visual object geo-localization algorithms suffer from hardware limitations and impractical assumptions limiting their usability in real-world applications. Most of the current methods assume object sparsity, the presence of objects in at least two frames, and most importantly they only support a single class of objects. In this paper, we present a novel two-stage technique that detects and geo-localizes dense, multi-class objects such as traffic signs from street videos. Our algorithm is able to handle low frame rate inputs in which objects might be missing in one or more frames. We propose a detector that is not only able to detect objects in images, but also predicts a positional offset for each object relative to the camera GPS location. We also propose a novel tracker algorithm that is able to track a large number of multi-class objects. Many current geo-localization datasets require specialized hardware, suffer from idealized assumptions not representative of reality, and are often not publicly available. In this paper, we propose a public dataset called ARTSv2, which is an extension of ARTS dataset that covers a diverse set of roads in widely varying environments to ensure it is representative of real-world scenarios. Our dataset will both support future research and provide a crucial benchmark for the field. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

13 pages, 4220 KiB  
Technical Note
An Optimal Transport Based Global Similarity Index for Remote Sensing Products Comparison
by Yumin Tan, Yanzhe Shi, Le Xu, Kailei Zhou, Guifei Jing, Xiaolu Wang and Bingxin Bai
Remote Sens. 2022, 14(11), 2546; https://doi.org/10.3390/rs14112546 - 26 May 2022
Cited by 1 | Viewed by 1642
Abstract
Remote sensing products, such as land cover data products, are essential for a wide range of scientific studies and applications, and their quality evaluation and relative comparison have become a major issue that needs to be studied. Traditional methods, such as error matrices, are not effective in describing spatial distribution because they are based on a pixel-by-pixel comparison. In this paper, the relative quality comparison of two remote sensing products is turned into the difference measurement between the spatial distribution of pixels by proposing a max-sliced Wasserstein distance-based similarity index. According to optimal transport theory, the mathematical expression of the proposed similarity index is firstly clarified, and then its rationality is illustrated, and finally, experiments on three open land cover products (GLCFCS30, FROMGLC, CNLUCC) are conducted. Results show that based on this proposed similarity index-based relative quality comparison method, the spatial difference, including geometric shapes and spatial locations between two different remote sensing products in raster form, can be quantified. The method is particularly useful in cases where there exists misregistration between datasets, while pixel-based methods will lose their robustness. Full article
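To make the idea concrete, here is a small NumPy sketch that approximates a max-sliced Wasserstein-1 distance between two sets of 2-D pixel coordinates by maximising the sorted 1-D transport cost over random projection directions. The paper's exact index and normalisation may differ; the equal-size subsampling and the number of directions are assumptions.

```python
import numpy as np

def max_sliced_wasserstein(points_a, points_b, n_directions=256, seed=0):
    """Approximate max-sliced Wasserstein-1 distance between two point sets
    (e.g. coordinates of pixels labelled with one class in each product):
    project onto random directions, compute the sorted 1-D distance on each
    projection, and keep the largest value."""
    rng = np.random.default_rng(seed)
    points_a, points_b = np.asarray(points_a, float), np.asarray(points_b, float)
    n = min(len(points_a), len(points_b))
    best = 0.0
    for _ in range(n_directions):
        theta = rng.uniform(0, np.pi)
        direction = np.array([np.cos(theta), np.sin(theta)])
        proj_a = np.sort(points_a @ direction)
        proj_b = np.sort(points_b @ direction)
        # subsample both projections to equal size so the quantile pairing is defined
        idx_a = np.linspace(0, len(proj_a) - 1, n).astype(int)
        idx_b = np.linspace(0, len(proj_b) - 1, n).astype(int)
        best = max(best, float(np.abs(proj_a[idx_a] - proj_b[idx_b]).mean()))
    return best
```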
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

15 pages, 3879 KiB  
Article
MSGATN: A Superpixel-Based Multi-Scale Siamese Graph Attention Network for Change Detection in Remote Sensing Images
by Wenjing Shuai, Fenlong Jiang, Hanhong Zheng and Jianzhao Li
Appl. Sci. 2022, 12(10), 5158; https://doi.org/10.3390/app12105158 - 20 May 2022
Cited by 9 | Viewed by 1720
Abstract
With the rapid development of Earth observation technology, how to effectively and efficiently detect changes in multi-temporal images has become an important but challenging problem. Thanks to its high performance and robustness, object-based change detection (CD) has become increasingly popular. By analyzing the similarity of local pixels, object-based CD aggregates similar pixels into one object and takes it as the basic processing unit. However, object-based approaches often have difficulty capturing discriminative features, as irregular objects make processing difficult. To address this problem, in this paper, we propose a novel superpixel-based multi-scale Siamese graph attention network (MSGATN) which can process unstructured data natively and extract valuable features. First, a difference image (DI) is generated from the Euclidean distance between the bitemporal images. Second, superpixel segmentation is applied to the DI to divide each image into many homogeneous regions. Then, these superpixels are used to model the problem with graph theory, constructing a series of nodes and the adjacency between them. Subsequently, the multi-scale neighborhood features of the nodes are extracted by a graph convolutional network and concatenated through an attention mechanism. Finally, the binary change map is obtained by classifying each node with fully connected layers. The novel features of MSGATN can be summarized as follows: (1) Training on multi-scale constructed graphs improves the recognition of changed land cover of varied sizes and shapes. (2) Spectral and spatial self-attention mechanisms are exploited for better change detection performance. The experimental results on several real datasets show the effectiveness and superiority of the proposed method. In addition, compared to other recent methods, the proposed method demonstrates very high processing efficiency and greatly reduces the dependence on labeled training samples through a semisupervised training fashion. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
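The following sketch illustrates only the pre-processing chain described above (difference image, superpixel segmentation, superpixel adjacency); the Siamese graph attention network itself is omitted. It assumes a recent scikit-image (the `channel_axis` and `start_label` arguments of `slic`), and the function name `build_superpixel_graph` is ours, not the paper's.

```python
import numpy as np
from skimage.segmentation import slic

def build_superpixel_graph(img_t1, img_t2, n_segments=500):
    # Difference image: per-pixel Euclidean distance across spectral bands
    di = np.sqrt(np.sum((img_t1.astype(float) - img_t2.astype(float)) ** 2, axis=-1))
    di = (di - di.min()) / (np.ptp(di) + 1e-12)

    # Superpixel segmentation of the single-band difference image
    labels = slic(di, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, start_label=0)

    # Node features: mean DI value inside each superpixel
    ids = np.unique(labels)
    feats = np.array([di[labels == k].mean() for k in ids])

    # Edges: pairs of superpixels that touch horizontally or vertically
    pairs = np.concatenate([
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1),
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
    ])
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    edges = np.unique(np.sort(pairs, axis=1), axis=0)
    return feats, edges

# feats, edges = build_superpixel_graph(image_2015, image_2020)
```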

21 pages, 5994 KiB  
Article
EfficientUNet+: A Building Extraction Method for Emergency Shelters Based on Deep Learning
by Di You, Shixin Wang, Futao Wang, Yi Zhou, Zhenqing Wang, Jingming Wang and Yibing Xiong
Remote Sens. 2022, 14(9), 2207; https://doi.org/10.3390/rs14092207 - 05 May 2022
Cited by 8 | Viewed by 2397
Abstract
Quickly and accurately extracting buildings from remote sensing images is essential for urban planning, change detection, and disaster management applications. In particular, extracting buildings that cannot be sheltered in emergency shelters can help establish and improve a city’s overall disaster prevention system. However, small-building extraction often suffers from problems such as incomplete extraction, missed and false detections, and blurred boundaries. In this study, EfficientUNet+, an improved building extraction method from remote sensing images based on the UNet model, is proposed. This method uses EfficientNet-b0 as the encoder and embeds spatial and channel squeeze and excitation (scSE) blocks in the decoder to realize forward correction of features and improve the accuracy and speed of extraction. Next, to address blurred boundaries, we propose a joint loss function of building-boundary-weighted cross-entropy and Dice loss to enforce constraints on building boundaries. Finally, the model is pretrained on the WHU aerial building dataset, which provides a large amount of data, and transfer learning is used to achieve high-precision extraction of buildings with few training samples in specific scenarios. We created a Google building image dataset of emergency shelters within the Fifth Ring Road of Beijing and conducted experiments to verify the effectiveness of the method. The proposed method is compared with state-of-the-art methods, namely, DeepLabv3+, PSPNet, ResUNet, and HRNet. The results show that EfficientUNet+ is superior in terms of Precision, Recall, F1-Score, and mean intersection over union (mIoU), achieving the highest value for each metric: 93.01%, 89.17%, 91.05%, and 90.97%, respectively. This indicates that the proposed method can effectively extract buildings in emergency shelters and has important reference value for guiding urban emergency evacuation. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
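To make the joint loss idea concrete, here is a hedged PyTorch sketch of a boundary-weighted binary cross-entropy plus Dice loss. The boundary band is obtained with a simple morphological (max-pool) trick, and the weighting scheme and binary setup are our assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def boundary_weight_map(target, kernel=5, boundary_weight=5.0):
    """target: (N, 1, H, W) float mask in {0, 1}; returns per-pixel weights."""
    pad = kernel // 2
    dilated = F.max_pool2d(target, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-target, kernel, stride=1, padding=pad)
    boundary = (dilated - eroded).clamp(0, 1)          # 1 on a band around edges
    return 1.0 + (boundary_weight - 1.0) * boundary

def joint_loss(logits, target, eps=1e-6):
    """Boundary-weighted BCE plus Dice loss; logits and target are (N, 1, H, W)."""
    weights = boundary_weight_map(target)
    bce = F.binary_cross_entropy_with_logits(logits, target, weight=weights)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    dice = 1.0 - (2 * inter + eps) / (prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) + eps)
    return bce + dice.mean()

# typical training step: loss = joint_loss(model(images), masks.float())
```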

26 pages, 7037 KiB  
Article
Noise Robust High-Speed Motion Compensation for ISAR Imaging Based on Parametric Minimum Entropy Optimization
by Jiadong Wang, Yachao Li, Ming Song, Pingping Huang and Mengdao Xing
Remote Sens. 2022, 14(9), 2178; https://doi.org/10.3390/rs14092178 - 01 May 2022
Cited by 5 | Viewed by 1540
Abstract
When a target moves at high speed, its high-resolution range profile (HRRP) is stretched by the high-order phase error caused by the high velocity. In this case, the inverse synthetic aperture radar (ISAR) image is seriously blurred. To obtain a well-focused ISAR image, the phase error induced by the target velocity should be compensated. This article exploits the continuity of a high-speed moving target’s velocity variation and proposes a noise-robust high-speed motion compensation algorithm for ISAR imaging. The target’s velocity within a coherent processing interval (CPI) is modeled as a high-order polynomial, based on which a parametric high-speed motion compensation signal model is developed. The entropy of the ISAR image after high-speed motion compensation is treated as an evaluation metric, and a parametric minimum entropy optimization model is established to estimate the velocity and compensate for it simultaneously. A gradient-based solver is then adopted to find the optimal solution iteratively. Finally, the high-order phase error caused by the target’s high-speed motion can be iteratively compensated, and a well-focused ISAR image can be obtained. Extensive simulation experiments have verified the noise robustness and effectiveness of the proposed algorithm. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
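The sketch below shows only the optimization skeleton of such a method: a polynomial phase error along slow time parameterizes the compensation, the compensated data are focused by an FFT, and the image entropy is minimized numerically. This is a simplified stand-in for the paper's signal model (a real ISAR model also scales the phase with range frequency), and a derivative-free solver is used here instead of the gradient-based one described above.

```python
import numpy as np
from scipy.optimize import minimize

def image_entropy(img):
    """Shannon entropy of the normalized image intensity; lower = better focused."""
    p = np.abs(img) ** 2
    p = p / (p.sum() + 1e-12)
    return float(-(p * np.log(p + 1e-12)).sum())

def focus(raw, coeffs):
    """raw: (n_pulses, n_range) complex data; coeffs: polynomial phase parameters."""
    t = np.linspace(-0.5, 0.5, raw.shape[0])
    # second- and higher-order slow-time phase terms standing in for the velocity polynomial
    phase = sum(c * t ** (k + 2) for k, c in enumerate(coeffs))
    compensated = raw * np.exp(-1j * phase)[:, None]
    return np.fft.fftshift(np.fft.fft(compensated, axis=0), axes=0)

def estimate(raw, order=2):
    """Search the phase coefficients that minimize the focused-image entropy."""
    obj = lambda c: image_entropy(focus(raw, c))
    res = minimize(obj, x0=np.zeros(order), method="Nelder-Mead")
    return res.x, focus(raw, res.x)
```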

18 pages, 5144 KiB  
Article
LiDAR Filtering in 3D Object Detection Based on Improved RANSAC
by Bingxu Wang, Jinhui Lan and Jiangjiang Gao
Remote Sens. 2022, 14(9), 2110; https://doi.org/10.3390/rs14092110 - 28 Apr 2022
Cited by 14 | Viewed by 3871
Abstract
At present, LiDAR ground filtering technology is mature, yet it has seen few applications in 3D object detection due to limitations in filtering accuracy and efficiency. If the ground can be removed quickly and accurately, 3D object detection algorithms can detect objects more accurately and quickly. In order to meet the application requirements of 3D object detection, inspired by Universal-RANSAC, we analyze the detailed steps of RANSAC and propose a precise and efficient RANSAC-based ground filtering method. The principle of GroupSAC is analyzed, in which sampled points are grouped by attributes to make it easier to sample correct points; based on this principle, we devise a method for limiting sampled points that is applicable to point clouds. We also describe preemptive RANSAC in detail, whose breadth-first strategy is adopted to obtain the optimal plane without complex iterations. We use the International Society for Photogrammetry and Remote Sensing (ISPRS) datasets and the KITTI dataset for testing. Experiments show that our method has higher filtering accuracy and efficiency compared with the currently widely used methods. We further explore the application of ground filtering in 3D object detection, and the experimental results show that our method can improve object detection accuracy without affecting efficiency. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
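For reference, a plain RANSAC ground-plane fit over a LiDAR point cloud looks like the sketch below; the grouped sampling (GroupSAC) and preemptive, breadth-first scoring described above are refinements of this basic loop and are not reproduced here.

```python
import numpy as np

def ransac_ground(points, n_iter=200, threshold=0.2, seed=0):
    """points: (N, 3) array; returns a boolean mask of non-ground points."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return ~best_inliers                  # keep everything that is not ground

# usage: objects = cloud[ransac_ground(cloud)]
```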

16 pages, 9565 KiB  
Article
Use of a DNN-Based Image Translator with Edge Enhancement Technique to Estimate Correspondence between SAR and Optical Images
by Hisatoshi Toriya, Ashraf Dewan, Hajime Ikeda, Narihiro Owada, Mahdi Saadat, Fumiaki Inagaki, Youhei Kawamura and Itaru Kitahara
Appl. Sci. 2022, 12(9), 4159; https://doi.org/10.3390/app12094159 - 20 Apr 2022
Cited by 1 | Viewed by 1804
Abstract
In this paper, a method for estimating the local correspondence between synthetic aperture radar (SAR) images and optical images is proposed, based on an image feature-based keypoint-matching algorithm. To achieve accurate matching, common image features must be obtained at corresponding locations. Since SAR and optical images differ in appearance, it is difficult to find similar features for geometric correction. In this work, an image translator, built with a deep neural network (DNN) and trained by conditional generative adversarial networks (cGANs) with edge enhancement, was employed to find the corresponding locations between SAR and optical images. With conventional cGANs, the translated images contain considerable blur, which degrades keypoint-matching accuracy. Therefore, a novel method applying an edge enhancement filter within the cGAN structure was proposed to find corresponding points between SAR and optical images and accurately register images from different sensors. The results suggest that the proposed method can accurately estimate the corresponding points between SAR and optical images. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

21 pages, 476 KiB  
Article
Orbital Maneuver Optimization of Earth Observation Satellites Using an Adaptive Differential Evolution Algorithm
by Qizhang Luo, Wuxuan Peng, Guohua Wu and Yougang Xiao
Remote Sens. 2022, 14(9), 1966; https://doi.org/10.3390/rs14091966 - 19 Apr 2022
Cited by 9 | Viewed by 2906
Abstract
Earth observation satellite (EOS) systems often encounter emergency observation tasks oriented to sudden disasters (e.g., earthquakes, tsunamis, and mud-rock flows). However, EOS systems may not be able to provide feasible coverage time windows for emergencies, which requires an appropriately selected satellite to transfer its orbit for better observation. In this context, we investigate the orbit maneuver optimization problem. First, by analyzing the orbit coverage and dynamics, we construct three models for describing the orbit maneuver optimization problem. These models consider the response time, ground resolution, and fuel consumption, respectively, as optimization objectives to satisfy diverse user requirements. Second, we employ an adaptive differential evolution (DE) algorithm integrating ant colony optimization (ACO), named ACODE, to solve the optimization models. In ACODE, the key components of DE (i.e., genetic operations and control parameters) are organized into a directed acyclic graph, and ACO is embedded into the algorithm framework to find reasonable combinations of components from the graph. Third, we conduct extensive experimental studies to show the superiority of ACODE. Compared with three existing algorithms (i.e., EPSDE, CSO, and SLPSO), ACODE achieves the best performance in terms of response time, ground resolution, and fuel consumption, respectively. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
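A generic DE/rand/1/bin loop is shown below to make the optimization backbone concrete. The adaptive, ACO-driven selection of operators and control parameters and the real orbit-dynamics objectives are outside this sketch; the `response_time` function is a hypothetical stand-in objective, not the paper's model.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=30, F=0.5, CR=0.9, n_gen=200, seed=0):
    """Minimize obj over box bounds with the classic DE/rand/1/bin scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([obj(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(len(lo)) < CR                # binomial crossover mask
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial <= fit[i]:                           # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# hypothetical stand-in objective: a smooth "response time" surrogate
response_time = lambda x: np.sum((x - 0.3) ** 2)
x_best, f_best = differential_evolution(response_time, bounds=[(0, 1)] * 4)
```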

16 pages, 46002 KiB  
Article
A Method for Digital Terrain Reconstruction Using Longitudinal Control Lines and Sparse Measured Cross Sections
by Yunwen Pan, Junqiang Xia and Kejun Yang
Remote Sens. 2022, 14(8), 1841; https://doi.org/10.3390/rs14081841 - 11 Apr 2022
Cited by 4 | Viewed by 1599
Abstract
Using longitudinal control lines and sparse measured cross sections with large spacing, a new method for quickly reconstructing digital terrains in natural riverways is presented. The longitudinal control lines of a natural riverway, mainly including the river boundaries, the thalweg, the dividing lines between floodplains and the main channel, and the water edges, can be obtained by interpreting satellite images, remote sensing images or site surveys. These longitudinal control lines are introduced into quadrilateral grid generation as auxiliary lines that control the longitudinal riverway trend and reflect transverse terrain changes. Next, using the principle of equal cross-sectional area at the same water level, all measured cross sections are reasonably fitted. On this basis, using the fitted cross-sectional data and a distance-weighted method, terrain interpolation along the longitudinal grid lines is conducted to obtain the elevation of all grid nodes. Finally, the gridded digital terrain and its connectivity information are exported in text formats readable by MIKE21 and SMS, establishing data exchange channels and allowing the strengths of various software packages to be exploited for digital terrain visualization and further use. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
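As a toy illustration of the interpolation step, the sketch below fills in the elevation of grid nodes along one longitudinal grid line by inverse-distance weighting of the fitted cross sections. The control-line-guided grid generation and the cross-section fitting are assumed to have been done already, the exact distance-weighting scheme of the paper may differ, and the function name is ours.

```python
import numpy as np

def idw_profile(section_s, section_z, node_s, power=2.0):
    """section_s: streamwise coordinates of measured cross sections (1D),
    section_z: elevation of the same transverse node in each section,
    node_s:    streamwise coordinates of the grid nodes to fill in."""
    node_s = np.atleast_1d(node_s).astype(float)
    dist = np.abs(node_s[:, None] - np.asarray(section_s, float)[None, :])
    dist = np.maximum(dist, 1e-9)                 # avoid division by zero at a section
    w = 1.0 / dist ** power
    return (w * np.asarray(section_z, float)).sum(axis=1) / w.sum(axis=1)

# e.g. three measured sections at s = 0, 800, 2000 m; elevations of one transverse node
z = idw_profile([0, 800, 2000], [12.4, 11.1, 9.8], np.linspace(0, 2000, 11))
```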

22 pages, 4233 KiB  
Article
Optimized Spatial Gradient Transfer for Hyperspectral-LiDAR Data Classification
by Bing Tu, Yu Zhu, Chengle Zhou, Siyuan Chen and Antonio Plaza
Remote Sens. 2022, 14(8), 1814; https://doi.org/10.3390/rs14081814 - 09 Apr 2022
Cited by 2 | Viewed by 1888
Abstract
The classification accuracy of ground objects can be improved by jointly using data of the same scene collected by different sensors. We propose to fuse the spatial planar distribution and spectral information of hyperspectral images (HSIs) with the 3D spatial information of objects captured by light detection and ranging (LiDAR). In this paper, we use an optimized spatial gradient transfer method for data fusion, which effectively handles the strong heterogeneity involved in fusing heterogeneous data. The entropy rate superpixel segmentation algorithm over-segments the HSI and LiDAR data to extract local spatial and elevation information, and a Gaussian density-based regularization strategy normalizes this information. Then, the spatial gradient transfer model and l1-total variation minimization are introduced to fuse local multi-attribute features from different sources and to fully exploit the complementary information of the different features for describing ground objects. Finally, the fused local spatial features are reconstructed into a guided image, and guided filtering is applied to each band of the original HSI, so that the output retains the complete spectral information together with the detailed changes of the fused spatial features. It is worth mentioning that two extended versions of the proposed method were developed to further improve the joint use of multi-source data. Experimental results on two real datasets indicate that the features fused by the proposed method yield better ground object classification than mainstream stacking or cascade fusion methods. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

17 pages, 5087 KiB  
Article
Estimation of Ground PM2.5 Concentrations in Pakistan Using Convolutional Neural Network and Multi-Pollutant Satellite Images
by Maqsood Ahmed, Zemin Xiao and Yonglin Shen
Remote Sens. 2022, 14(7), 1735; https://doi.org/10.3390/rs14071735 - 04 Apr 2022
Cited by 14 | Viewed by 3834
Abstract
During the last few decades, worsening air quality has been observed in many cities around the world. The accurate prediction of air pollutants, particularly particulate matter 2.5 (PM2.5), is extremely important for environmental management. A Convolutional Neural Network (CNN) model, P-CNN, is presented in this paper, which uses seven different pollutant satellite images, namely aerosol index (AER AI), methane (CH4), carbon monoxide (CO), formaldehyde (HCHO), nitrogen dioxide (NO2), ozone (O3) and sulfur dioxide (SO2), as auxiliary variables to estimate daily average PM2.5 concentrations. This study estimates daily average PM2.5 concentrations in various cities of Pakistan (Islamabad, Lahore, Peshawar and Karachi) from satellite images. The dataset contains a total of 2562 images from May 2019 to April 2020. We compare and analyze AlexNet, VGG16, ResNet50 and P-CNN on every dataset. The accuracy of the models was evaluated with Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE). The results show that P-CNN is more accurate than the other approaches in estimating PM2.5 concentrations from satellite images. This study presents a robust model for estimating PM2.5 concentrations from satellite images. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
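Below is a hedged sketch of a small CNN regressor over 7-band pollutant patches, one band per satellite product listed above. The actual P-CNN architecture is not given in this listing, so the layer sizes and patch size here are purely illustrative.

```python
import torch
import torch.nn as nn

class PollutantCNN(nn.Module):
    """Toy CNN mapping a multi-band pollutant patch to a single PM2.5 value."""
    def __init__(self, in_channels=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)          # daily mean PM2.5 (e.g., ug/m3)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = PollutantCNN()
x = torch.randn(4, 7, 64, 64)                   # a batch of 7-band patches
pm25 = model(x)
dummy_labels = torch.zeros(4)                   # placeholder ground truth
mae = torch.mean(torch.abs(pm25 - dummy_labels))  # MAE, one of the metrics used above
```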

17 pages, 5345 KiB  
Article
Performance Evaluation of Feature Matching Techniques for Detecting Reinforced Soil Retaining Wall Displacement
by Yong-Soo Ha, Jeongki Lee and Yun-Tae Kim
Remote Sens. 2022, 14(7), 1697; https://doi.org/10.3390/rs14071697 - 31 Mar 2022
Cited by 4 | Viewed by 1640
Abstract
Image registration technology is widely applied in various matching methods. In this study, we aim to evaluate the feature matching performance and to find an optimal technique for detecting three types of behaviors—facing displacement, settlement, and combined displacement—in reinforced soil retaining walls (RSWs). For a single block with an artificial target and a multiblock structure with artificial and natural targets, five popular detectors and descriptors—KAZE, SURF, MinEigen, ORB, and BRISK—were used to evaluate the resolution performance. For comparison, the repeatability, matching score, and inlier matching features were analyzed based on the number of extracted and matched features. The axial registration error (ARE) was used to verify the accuracy of the methods by comparing the position between the estimated and real features. The results showed that the KAZE method was the best detector and descriptor for RSWs (block shape target), with the highest probability of successfully matching features. In the multiblock experiment, the block used as a natural target showed similar matching performance to that of the block with an artificial target attached. Therefore, the behaviors of RSW blocks can be analyzed using the KAZE method without installing an artificial target. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
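The matching pipeline evaluated above can be prototyped in a few lines with OpenCV: the sketch below detects KAZE keypoints in two monitoring images, filters matches with Lowe's ratio test, and returns per-feature displacements. The ratio threshold and the displacement summary are our own choices, not the study's settings.

```python
import cv2
import numpy as np

def kaze_displacements(img_before, img_after, ratio=0.75):
    """Match KAZE features between two grayscale images and return (x, y) offsets."""
    kaze = cv2.KAZE_create()
    kp1, des1 = kaze.detectAndCompute(img_before, None)
    kp2, des2 = kaze.detectAndCompute(img_after, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe's ratio test
    disp = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in good])
    return disp

# usage with two monitoring images of the wall face:
# disp = kaze_displacements(cv2.imread("t0.png", 0), cv2.imread("t1.png", 0))
# print(disp.mean(axis=0))   # mean block movement in pixels between the two epochs
```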

29 pages, 4139 KiB  
Article
SAR Image Segmentation by Efficient Fuzzy C-Means Framework with Adaptive Generalized Likelihood Ratio Nonlocal Spatial Information Embedded
by Jingxing Zhu, Feng Wang and Hongjian You
Remote Sens. 2022, 14(7), 1621; https://doi.org/10.3390/rs14071621 - 28 Mar 2022
Cited by 10 | Viewed by 2201
Abstract
The existence of multiplicative noise in synthetic aperture radar (SAR) images makes SAR segmentation by fuzzy c-means (FCM) a challenging task. To cope with speckle noise, we first propose an unsupervised FCM embedding log-transformed Bayesian non-local spatial information (LBNL_FCM). This non-local information is measured by a modified Bayesian similarity metric derived by applying the log-transformed SAR distribution to Bayesian theory. We then construct a patch similarity metric as the product of the corresponding pixel similarities measured by the generalized likelihood ratio (GLR), to avoid the undesirable characteristics of the log-transformed Bayesian similarity metric. An alternative unsupervised FCM framework named GLR_FCM is then proposed. In both frameworks, an adaptive factor based on the local intensity entropy is employed to balance the original and non-local spatial information. Additionally, membership degree smoothing and a majority-voting scheme are integrated as supplementary local information to optimize the segmentation. In experiments on simulated SAR images, both frameworks achieve segmentation accuracies of over 97%. On real SAR images, both unsupervised FCM segmentation frameworks perform well on homogeneous-region segmentation in terms of region consistency and edge preservation. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
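For orientation, the baseline fuzzy c-means iteration that both frameworks build on is sketched below in NumPy; the non-local Bayesian/GLR spatial terms, the entropy-based adaptive factor, and the membership smoothing are what the paper adds on top of this plain loop, which by itself has no speckle handling.

```python
import numpy as np

def fcm(values, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on pixel intensities; returns (memberships, centers)."""
    values = np.asarray(values, float)
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(n_clusters), size=len(values))   # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ values / um.sum(axis=0)                # weighted cluster means
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)            # standard FCM update
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return u, centers

# labels = fcm(sar_image.ravel())[0].argmax(axis=1).reshape(sar_image.shape)
```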

18 pages, 3031 KiB  
Article
A Supervoxel-Based Random Forest Method for Robust and Effective Airborne LiDAR Point Cloud Classification
by Lingfeng Liao, Shengjun Tang, Jianghai Liao, Xiaoming Li, Weixi Wang, Yaxin Li and Renzhong Guo
Remote Sens. 2022, 14(6), 1516; https://doi.org/10.3390/rs14061516 - 21 Mar 2022
Cited by 14 | Viewed by 3594
Abstract
As an essential part of point cloud processing, automatic classification is routinely applied to varied, complex scenes with irregular point distributions. State-of-the-art point cloud classification methods mostly process raw point clouds, using a single point as the basic unit and calculating point cloud features by searching local neighbors via the k-neighborhood method. Such methods tend to be computationally inefficient and have difficulty obtaining accurate feature descriptions due to inappropriate neighborhood selection. In this paper, we propose a robust and effective point cloud classification approach that integrates point cloud supervoxels and their locally convex connected patches into a random forest classifier, which effectively improves the accuracy of point cloud feature calculation and reduces the computational cost. Considering the different types of point cloud feature descriptions, we divide features into three categories (point-based, eigen-based, and grid-based) and accordingly design three distinct feature calculation strategies to improve feature reliability. Two International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests show that the proposed method achieves state-of-the-art performance, with average F1-scores of 89.16 and 83.58, respectively. The successful classification of point clouds with great variation in elevation also demonstrates the reliability of the proposed method in challenging scenes. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
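The classification stage reduces to one feature vector per supervoxel fed to a random forest; the sketch below illustrates this with a few simple point-based and eigen-based statistics. Supervoxel generation and the full three-family feature design are not reproduced, and the helper names are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def supervoxel_features(points, labels):
    """points: (N, 3) LiDAR coordinates; labels: (N,) supervoxel id per point."""
    feats = []
    for sv in np.unique(labels):
        pts = points[labels == sv]
        cov = np.cov(pts.T) if len(pts) > 2 else np.eye(3)
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]       # eigen-based descriptors
        feats.append(np.concatenate([pts.mean(axis=0),         # point-based: centroid
                                     [np.ptp(pts[:, 2])],      # height range
                                     eigvals]))
    return np.asarray(feats)

# X_train = supervoxel_features(train_points, train_sv); y_train = class per supervoxel
# clf = RandomForestClassifier(n_estimators=200, n_jobs=-1).fit(X_train, y_train)
# y_pred = clf.predict(supervoxel_features(test_points, test_sv))
```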

21 pages, 4848 KiB  
Article
MLCRNet: Multi-Level Context Refinement for Semantic Segmentation in Aerial Images
by Zhifeng Huang, Qian Zhang and Guixu Zhang
Remote Sens. 2022, 14(6), 1498; https://doi.org/10.3390/rs14061498 - 20 Mar 2022
Cited by 9 | Viewed by 2138
Abstract
In this paper, we focus on the problem of contextual aggregation in the semantic segmentation of aerial images. Current contextual aggregation methods only aggregate contextual information within specific regions to improve feature representation, which may yield contextual information with poor robustness. To address this problem, we propose a novel multi-level context refinement network (MLCRNet) that aggregates three levels of contextual information effectively and efficiently in an adaptive manner. First, we designed a local-level context aggregation module to capture local information around each pixel. Second, we integrate multiple levels of context, namely local-level, image-level, and semantic-level, to dynamically aggregate contextual information from a comprehensive perspective. Third, we propose an efficient multi-level context transform (EMCT) module to address feature redundancy and to improve the efficiency of our multi-level contexts. Finally, based on the EMCT module and the feature pyramid network (FPN) framework, we propose a multi-level context feature refinement (MLCR) module to enhance feature representation by leveraging multi-level contextual information. Extensive empirical evidence demonstrates that our MLCRNet achieves state-of-the-art performance on the ISPRS Potsdam and Vaihingen datasets. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

2 pages, 202 KiB  
Correction
Correction: Demattê et al. The Brazilian Soil Spectral Service (BraSpecS): A User-Friendly System for Global Soil Spectra Communication. Remote Sens. 2022, 14, 740
by José A. M. Demattê, Ariane Francine da Silveira Paiva, Raul Roberto Poppiel, Nícolas Augusto Rosin, Luis Fernando Chimelo Ruiz, Fellipe Alcantara de Oliveira Mello, Budiman Minasny, Sabine Grunwald, Yufeng Ge, Eyal Ben Dor, Asa Gholizadeh, Cecile Gomez, Sabine Chabrillat, Nicolas Francos, Shamsollah Ayoubi, Dian Fiantis, James Kobina Mensah Biney, Changkun Wang, Abdelaziz Belal, Salman Naimi, Najmeh Asgari Hafshejani, Henrique Bellinaso, Jean Michel Moura-Bueno and Nélida E. Q. Silvero
Remote Sens. 2022, 14(6), 1459; https://doi.org/10.3390/rs14061459 - 18 Mar 2022
Viewed by 1736
Abstract
There was an error in the original publication [...] Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
25 pages, 1737 KiB  
Article
Forecast of the Global TEC by Nearest Neighbour Technique
by Enric Monte-Moreno, Heng Yang and Manuel Hernández-Pajares
Remote Sens. 2022, 14(6), 1361; https://doi.org/10.3390/rs14061361 - 11 Mar 2022
Cited by 6 | Viewed by 2078
Abstract
We propose a method for forecasting Global Ionospheric Maps of Total Electron Content using the Nearest Neighbour technique. The assumption is that, in a database of global ionosphere maps spanning more than two solar cycles, one can select a set of past observations with geomagnetic conditions similar to those of the current map, so that the current ionospheric condition can be expressed as a linear combination of conditions seen in the past. Averaging these maps preserves the geomagnetic components they have in common and attenuates those not shared by several maps. The method searches the historical database for the dates of the maps closest to the current map and uses, as the prediction, the maps in the database shifted forward by the prediction horizon. In contrast to other machine learning methods, the implementation only requires a distance computation and does not need a prior step of model training and adjustment for each prediction horizon. It also provides confidence intervals for the forecast. The method has been evaluated over two full years (2015 and 2018) and on selected days of those years (two storm days and two non-storm days), and the performance of the system has been compared with CODE at 24- and 48-h forecast horizons. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
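A toy version of the nearest-neighbour forecast is given below: it finds the K archived maps most similar to the current one and averages the maps that followed them by the desired horizon. The real method also conditions on the geomagnetic context and derives proper confidence intervals; this sketch uses map distance only and the spread is a crude stand-in for the uncertainty.

```python
import numpy as np

def nn_forecast(history, current_map, horizon=1, k=10):
    """history: (T, H, W) archive of past global TEC maps on a regular time step,
    current_map: (H, W); horizon: number of time steps ahead to forecast."""
    candidates = history[: len(history) - horizon]          # each must have a "future" map
    dists = np.linalg.norm(candidates.reshape(len(candidates), -1)
                           - current_map.ravel(), axis=1)
    nearest = np.argsort(dists)[:k]                          # K closest analogue maps
    forecast = history[nearest + horizon].mean(axis=0)       # average of analogue futures
    spread = history[nearest + horizon].std(axis=0)          # crude per-cell uncertainty
    return forecast, spread
```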

21 pages, 5744 KiB  
Article
IoT Monitoring and Prediction Modeling of Honeybee Activity with Alarm
by Nebojša Andrijević, Vlada Urošević, Branko Arsić, Dejana Herceg and Branko Savić
Electronics 2022, 11(5), 783; https://doi.org/10.3390/electronics11050783 - 03 Mar 2022
Cited by 17 | Viewed by 5820
Abstract
A significant number of recent scientific papers have raised awareness of changes in the biological world of bees, problems with their extinction, and, as a consequence, their impact on humans and the environment. This work builds on precision beekeeping and scales up measurement and prediction using the system we developed, which was designed to cover the beehive ecosystem. It is equipped with an IoT modular base station that collects a wide range of parameters from sensors on the hive and a bee counter at the hive entrance. Data are sent to the cloud for storage, analysis, and alarm generation. A time-series forecasting model was devised that estimates the hourly volume of bee exits and entrances and models the dependence between environmental conditions and bee activity. The applied mathematical models, based on recurrent neural networks, exhibited high accuracy. A web application for monitoring and prediction displays parameters, measured values, and predictive and analytical alarms in real time. The predictive component utilizes artificial intelligence, applying advanced analytical methods to find correlations between sensor data and the behavioral patterns of bees and to raise alarms when deviations are detected. The analytical component raises an alarm when it detects measured values that lie outside the predetermined safety limits. Comparisons of the experimental data with the model showed that our model represents the observed processes well. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
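In the spirit of the recurrent model described above, the sketch below defines an LSTM that maps a window of hourly sensor readings to the next hour's bee entrance/exit volume. The feature set, window length and layer sizes are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

class BeeActivityLSTM(nn.Module):
    """Toy recurrent forecaster: sensor history -> next-hour bee traffic."""
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(1) # use the last hidden state only

model = BeeActivityLSTM()
window = torch.randn(8, 24, 5)                  # 24 h of (hypothetical) sensor history
pred = model(window)
# an "analytical" alarm on top of the prediction could be as simple as:
# alarm = (observed - pred).abs() > 3 * residual_std
```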

21 pages, 5135 KiB  
Article
A Mutual Teaching Framework with Momentum Correction for Unsupervised Hyperspectral Image Change Detection
by Jia Sun, Jia Liu, Ling Hu, Zhihui Wei and Liang Xiao
Remote Sens. 2022, 14(4), 1000; https://doi.org/10.3390/rs14041000 - 18 Feb 2022
Cited by 5 | Viewed by 1863
Abstract
Deep-learning methods rely on massive amounts of labeled data, which has become one of the main impediments in hyperspectral image change detection (HSI-CD). To resolve this problem, pseudo-labels generated by traditional methods are widely used to drive model learning. In this paper, we propose a mutual teaching approach with momentum correction for unsupervised HSI-CD to cope with the noise in pseudo-labels, which is harmful to model training. First, we adopt two structurally identical models simultaneously, allowing them to select high-confidence samples for each other to suppress self-confidence bias, and continuously update pseudo-labels during iterations to fine-tune the models. Furthermore, a new group-confidence-based sample filtering method is designed to obtain reliable training samples for HSI. This method considers both the quality and diversity of the selected samples by determining the confidence of each group rather than of single instances. Finally, to better extract the spatial–temporal spectral features of bitemporal HSIs, a 3D convolutional neural network (3DCNN) is designed as the HSI-CD classifier and the backbone network of our framework. Due to mutual teaching and dynamic label learning, pseudo-labels can be continuously updated and refined over the iterations, and thus the proposed method achieves better performance than approaches with fixed pseudo-labels. Experimental results on several HSI datasets demonstrate the effectiveness of our method. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
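A condensed sketch of the mutual-teaching idea follows: two identical classifiers each nominate the pseudo-labelled samples they are most confident about, and each network is trained only on the samples vouched for by the other, which suppresses self-confidence bias. Group-wise confidence filtering, momentum correction, and the 3DCNN backbone are omitted, and all names and the keep ratio are illustrative.

```python
import torch
import torch.nn.functional as F

def mutual_teaching_step(model_a, model_b, opt_a, opt_b, x, pseudo_y, keep_ratio=0.6):
    """One batch update in which each model trains on the other's confident samples."""
    with torch.no_grad():
        conf_a = F.softmax(model_a(x), dim=1).max(dim=1).values
        conf_b = F.softmax(model_b(x), dim=1).max(dim=1).values
    k = max(1, int(keep_ratio * len(x)))
    idx_for_b = conf_a.topk(k).indices       # A vouches for these samples -> train B
    idx_for_a = conf_b.topk(k).indices       # B vouches for these samples -> train A
    for model, opt, idx in ((model_a, opt_a, idx_for_a), (model_b, opt_b, idx_for_b)):
        opt.zero_grad()
        loss = F.cross_entropy(model(x[idx]), pseudo_y[idx])
        loss.backward()
        opt.step()
```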

20 pages, 7021 KiB  
Article
An Adaptive Surrogate-Assisted Endmember Extraction Framework Based on Intelligent Optimization Algorithms for Hyperspectral Remote Sensing Images
by Zhao Wang, Jianzhao Li, Yiting Liu, Fei Xie and Peng Li
Remote Sens. 2022, 14(4), 892; https://doi.org/10.3390/rs14040892 - 13 Feb 2022
Cited by 12 | Viewed by 2467
Abstract
As the foremost step of spectral unmixing, endmember extraction remains one of the most challenging tasks in spectral unmixing due to pixel mixing and the complexity of hyperspectral remote sensing images. Existing geometry-based endmember extraction algorithms achieve good results, but most of them perform poorly when the assumption of a simplex structure is not met. Recently, many intelligent optimization algorithms have been employed to solve the endmember extraction problem. Although they achieve better performance than geometry-based algorithms in various complex scenarios, they are also time-consuming. To alleviate these problems and balance the two key indicators of accuracy and running time, an adaptive surrogate-assisted endmember extraction (ASAEE) framework based on intelligent optimization algorithms is proposed for hyperspectral remote sensing images in this paper. In the proposed framework, a surrogate model is established to reduce the expensive time cost of the intelligent algorithms by approximating the fully constrained evaluation value with a low-cost estimate. In more detail, three commonly used intelligent algorithms, namely the genetic algorithm, particle swarm optimization and differential evolution, are implemented within the ASAEE framework to verify its effectiveness and robustness. In addition, an adaptive weight surrogate-assisted model selection strategy is proposed, which can automatically adjust the weights of different surrogate models according to the characteristics of different intelligent algorithms. Experimental results on three datasets (two simulated datasets and one real dataset) show the effectiveness and excellent performance of the proposed ASAEE framework. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

19 pages, 2390 KiB  
Article
Borrow from Source Models: Efficient Infrared Object Detection with Limited Examples
by Ruimin Chen, Shijian Liu, Jing Mu, Zhuang Miao and Fanming Li
Appl. Sci. 2022, 12(4), 1896; https://doi.org/10.3390/app12041896 - 11 Feb 2022
Cited by 4 | Viewed by 1700
Abstract
Recent deep models trained on large-scale RGB datasets have led to considerable achievements in visual detection tasks. However, the training examples are often limited for an infrared detection task, which may deteriorate the performance of deep detectors. In this paper, we propose a transfer approach, Source Model Guidance (SMG), in which we leverage a high-capacity RGB detection model as guidance to supervise the training of an infrared detection network. In SMG, the foreground soft label generated from the RGB model is introduced as source knowledge to provide guidance for cross-domain transfer. Additionally, we design a Background Suppression Module in the infrared network to receive this knowledge and enhance the foreground features. SMG is easily plugged into any modern detection framework, and we show two explicit instantiations of it, SMG-C and SMG-Y, based on CenterNet and YOLOv3, respectively. Extensive experiments on different benchmarks show that both SMG-C and SMG-Y achieve remarkable performance even when the training set is scarce. Compared to advanced detectors on the public FLIR dataset, SMG-Y outperforms the others in accuracy with 77.0% mAP, and SMG-C achieves real-time detection at a speed of 107 FPS. More importantly, SMG-Y trained on a quarter of the thermal dataset obtains 74.5% mAP, surpassing most state-of-the-art detectors trained with the full FLIR dataset. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

27 pages, 6566 KiB  
Article
The Brazilian Soil Spectral Service (BraSpecS): A User-Friendly System for Global Soil Spectra Communication
by José A. M. Demattê, Ariane Francine da Silveira Paiva, Raul Roberto Poppiel, Nícolas Augusto Rosin, Luis Fernando Chimelo Ruiz, Fellipe Alcantara de Oliveira Mello, Budiman Minasny, Sabine Grunwald, Yufeng Ge, Eyal Ben Dor, Asa Gholizadeh, Cecile Gomez, Sabine Chabrillat, Nicolas Francos, Shamsollah Ayoubi, Dian Fiantis, James Kobina Mensah Biney, Changkun Wang, Abdelaziz Belal, Salman Naimi, Najmeh Asgari Hafshejani, Henrique Bellinaso, Jean Michel Moura-Bueno and Nélida E. Q. Silvero
Remote Sens. 2022, 14(3), 740; https://doi.org/10.3390/rs14030740 - 05 Feb 2022
Cited by 9 | Viewed by 4652 | Correction
Abstract
Although many Soil Spectral Libraries (SSLs) have been created globally, these libraries have still not been operationalized for end-users. To address this limitation, this study created an online Brazilian Soil Spectral Service (BraSpecS). The system is based on the Brazilian Soil Spectral Library (BSSL), with samples collected in the visible–near-infrared–shortwave-infrared (vis–NIR–SWIR) and mid-infrared (MIR) ranges. The interactive platform allows users to find spectra, act as custodians of the data, and estimate several soil properties as well as soil classification. The system was tested by 500 Brazilian and 65 international users. Users accessed the platform (besbbr.com.br), uploaded their spectra, and received soil organic carbon (SOC) and clay content prediction results via email. The BraSpecS prediction provided good results for Brazilian data, but performed variably for other countries. Predictions for countries outside Brazil using local spectra (External Country Soil Spectral Libraries, ExCSSL) mostly performed better than BraSpecS. Clay R2 ranged from 0.5 (BraSpecS) to 0.8 (ExCSSL) in vis–NIR–SWIR, but BraSpecS MIR models were more accurate in most situations. The development of external models based on the fusion of local samples with the BSSL formed the Global Soil Spectral Library (GSSL). The GSSL models improved soil property prediction for different countries. Nevertheless, the proposed system needs to be continually updated with new spectra so that it can be applied broadly. Accordingly, the online system is dynamic: users can contribute their data, and the models will adapt to local information. Our community-driven web platform allows users to predict soil attributes without learning soil spectral modeling, which will invite end-users to utilize this powerful technique. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
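Predicting a soil property from library spectra is often prototyped with partial least squares regression; the sketch below shows such a baseline with scikit-learn on synthetic stand-in data. The service's actual models, preprocessing, and spectral ranges are not public in this listing, so this is only a generic illustration of the modeling task.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

# X: (n_samples, n_wavelengths) reflectance spectra; y: (n_samples,) SOC, e.g. in g/kg
rng = np.random.default_rng(0)
X = rng.random((120, 500))
y = X[:, 100] * 20 + rng.normal(0, 0.5, 120)      # synthetic stand-in data, not BSSL

pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel() # 5-fold cross-validated predictions
print("cross-validated R2:", round(r2_score(y, y_cv), 3))
```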

18 pages, 2795 KiB  
Article
An Efficient Algorithm for Ocean-Front Evolution Trend Recognition
by Yuting Yang, Kin-Man Lam, Xin Sun, Junyu Dong and Redouane Lguensat
Remote Sens. 2022, 14(2), 259; https://doi.org/10.3390/rs14020259 - 06 Jan 2022
Cited by 5 | Viewed by 1926
Abstract
Marine hydrological elements are of vital importance in marine surveys. The evolution of these elements can have a profound effect on the relationship between human activities and marine hydrology. Therefore, the detection and explanation of the evolution laws of marine hydrological elements are urgently needed. In this paper, a novel method named Evolution Trend Recognition (ETR) is proposed to recognize the trend of ocean fronts, the most important information in the ocean dynamic process; accordingly, we focus on the task of ocean-front trend classification. A novel classification algorithm is first proposed for recognizing the ocean-front trend in terms of the ocean-front scale and strength. Then, the GoogLeNet Inception network is trained to classify the ocean-front trend as enhancing or attenuating. The trend is thus classified both by the deep neural network and by a physics-informed classification algorithm, and the two classification results are combined to make the final decision. Furthermore, two novel databases were created for this research, and their generation method is described, to foster research in this direction. These two databases are called the Ocean-Front Tracking Dataset (OFTraD) and the Ocean-Front Trend Dataset (OFTreD). Moreover, experimental results show that the proposed method achieves a classification accuracy of 97.5% on OFTreD, higher than state-of-the-art networks. This demonstrates that the proposed ETR algorithm is highly promising for trend classification. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

16 pages, 4481 KiB  
Technical Note
Specific Windows Search for Multi-Ship and Multi-Scale Wake Detection in SAR Images
by Kaiyang Ding, Junfeng Yang, Zhao Wang, Kai Ni, Xiaohao Wang and Qian Zhou
Remote Sens. 2022, 14(1), 25; https://doi.org/10.3390/rs14010025 - 22 Dec 2021
Cited by 4 | Viewed by 2660
Abstract
Traditional ship identification systems have difficulty identifying illegal or broken ships, but the wakes generated by ships can be used as a major feature for identification. However, multi-ship and multi-scale wake detection remains a major challenge. This paper combines the geometric and pixel characteristics of ships and their wakes in Synthetic Aperture Radar (SAR) images and proposes a method for multi-ship and multi-scale wake detection. The method first detects the bright pixel areas in the image and then generates specific windows around their centroids, thereby detecting wakes of different sizes in different areas. In addition, all wake components can be completely located based on wake clustering, and the statistical features of the wake-axis pixels can be used to determine the visible length of the wake. Test results on Gaofen-3 SAR images show the strong potential of the method for wake detection. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
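The window-generation idea can be sketched as follows: threshold bright (ship) pixels in the SAR amplitude image, label connected regions, and open a search window around each centroid whose size grows with the region's area, so that large and small ships receive differently sized wake-search areas. The wake-line detection inside each window (for example, a Radon-transform search) is not shown, and the thresholds and scaling constants are assumptions.

```python
import numpy as np
from scipy import ndimage

def ship_windows(amplitude, k_sigma=3.0, scale=8.0, min_size=10):
    """Return (y0, y1, x0, x1) search windows around bright connected regions."""
    mask = amplitude > amplitude.mean() + k_sigma * amplitude.std()
    labels, n = ndimage.label(mask)
    windows = []
    for region in range(1, n + 1):
        area = int((labels == region).sum())
        if area < min_size:
            continue                                  # ignore isolated bright speckle
        cy, cx = ndimage.center_of_mass(labels == region)
        half = int(scale * np.sqrt(area))             # window grows with ship size
        y0, y1 = max(0, int(cy) - half), min(amplitude.shape[0], int(cy) + half)
        x0, x1 = max(0, int(cx) - half), min(amplitude.shape[1], int(cx) + half)
        windows.append((y0, y1, x0, x1))
    return windows
```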