
Advanced Machine Learning and Deep Learning Approaches for Remote Sensing II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 28667

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Gwanggil Jeon
Guest Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
Interests: remote sensing; deep learning; artificial intelligence; image processing; signal processing

Special Issue Information

Dear Colleagues,

Remote sensing is the acquisition of information about an object or phenomenon without making physical contact. Artificial intelligence techniques such as machine learning and deep learning have shown great potential for overcoming the challenges of remote sensing signal, image, and video processing, although they demand substantial computing power and are therefore typically run on GPUs. Recent advances in remote sensing have enabled high-resolution monitoring of the Earth on a global scale, providing a massive amount of Earth observation data. We believe that artificial intelligence, machine learning, and deep learning approaches will provide promising tools for addressing many remote sensing challenges with high accuracy, reliability, and speed.

This Special Issue is the third edition of “Advanced Machine Learning for Time Series Remote Sensing Data Analysis”. It aims to report the latest advances and trends concerning advanced machine learning and deep learning techniques for remote sensing data processing. Papers of both a theoretical and an applied nature, as well as contributions presenting new advanced artificial intelligence and data science techniques for the remote sensing research community, are welcome.

Both original research articles and review articles are welcome for submission.

This Special Issue is the second edition of the Special Issue “Advanced Machine Learning and Deep Learning Approaches for Remote Sensing”.

Dr. Gwanggil Jeon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • remote sensing
  • signal/image processing
  • deep learning
  • artificial intelligence
  • time series processing

Related Special Issue

Published Papers (17 papers)


Editorial


7 pages, 150 KiB  
Editorial
Advanced Machine Learning and Deep Learning Approaches for Remote Sensing II
by Gwanggil Jeon
Remote Sens. 2024, 16(8), 1353; https://doi.org/10.3390/rs16081353 - 12 Apr 2024
Viewed by 425
Abstract
This Special Issue marks the third edition of “Advanced Machine Learning and Deep Learning Approaches for Remote Sensing” [...] Full article

Research


24 pages, 8150 KiB  
Article
Simulation of the Ecological Service Value and Ecological Compensation in Arid Area: A Case Study of Ecologically Vulnerable Oasis
by Jiamin Liu, Xiutong Pei, Wanyang Zhu and Jizong Jiao
Remote Sens. 2023, 15(16), 3927; https://doi.org/10.3390/rs15163927 - 08 Aug 2023
Cited by 3 | Viewed by 948
Abstract
In recent years, the delicate balance between economic development and ecological environment protection in ecologically fragile arid areas has gradually become apparent. Although previous research has mainly focused on changes in ecological service value caused by land use, a comprehensive understanding of ecology–economy harmony and ecological compensation remains elusive. To address this, we employed a coupled deep learning model (convolutional neural network-gated recurrent unit) to simulate the ecological service value of the Wuwei arid oasis over the next 10 years. The ecology–economy harmony index was used to determine the priority range of ecological compensation, while the GeoDetector analyzed the potential impact of driving factors on ecological service value from 2000 to 2030. The results show the following: (1) The coupled model, which extracts spatial features in the neighborhood of historical data using a convolutional neural network and adaptively learns time features using the gated recurrent unit, achieved an overall accuracy of 0.9377, outperforming three other models (gated recurrent unit, convolutional neural network, and convolutional neural network—long short-term memory); (2) Ecological service value in the arid oasis area illustrated an overall increasing trend from 2000 to 2030, but urban expansion still caused a decrease in ecological service value; (3) Historical ecology–economy harmony was mainly characterized by low conflict and potential crisis, while future ecology–economy harmony will be characterized by potential crisis and high coordination. 
Minqin and Tianzhu in the north and south have relatively high coordination between the ecological environment and economic development, while Liangzhou and Gulang in the west and east exhibited relatively low coordination, indicating a greater urgency for ecological compensation; (4) Geomorphology, soil, and the digital elevation model emerged as the most influential natural factors affecting the spatial differentiation of ecological service value in the arid oasis area. This study is of great significance for balancing economic development and ecological protection and promoting sustainable development in arid areas. Full article
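The GRU component of the coupled model "adaptively learns time features"; at its core, a GRU is just three gated equations per time step. Below is a minimal NumPy sketch of a single GRU step, with biases omitted and weight names chosen for illustration only; it is not the paper's CNN-GRU architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: gates decide how much of the past state to keep."""
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde           # blend old state and candidate
```

In the paper's setting, the input `x` at each step would be the spatial feature vector extracted by the CNN from the neighborhood of the historical data.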

20 pages, 12166 KiB  
Article
Hybrid-Scale Hierarchical Transformer for Remote Sensing Image Super-Resolution
by Jianrun Shang, Mingliang Gao, Qilei Li, Jinfeng Pan, Guofeng Zou and Gwanggil Jeon
Remote Sens. 2023, 15(13), 3442; https://doi.org/10.3390/rs15133442 - 07 Jul 2023
Cited by 3 | Viewed by 1211
Abstract
Super-resolution (SR) technology plays a crucial role in improving the spatial resolution of remote sensing images so as to overcome the physical limitations of spaceborne imaging systems. Although deep convolutional neural networks have achieved promising results, most of them overlook the advantage of self-similarity information across different scales and high-dimensional features after the upsampling layers. To address the problem, we propose a hybrid-scale hierarchical transformer network (HSTNet) to achieve faithful remote sensing image SR. Specifically, we propose a hybrid-scale feature exploitation module to leverage the internal recursive information in single and cross scales within the images. To fully leverage the high-dimensional features and enhance discrimination, we designed a cross-scale enhancement transformer to capture long-range dependencies and efficiently calculate the relevance between high-dimension and low-dimension features. The proposed HSTNet achieves the best results in PSNR and SSIM on the UCMerced and AID datasets. Comparative experiments demonstrate the effectiveness of the proposed methods and prove that the HSTNet outperforms the state-of-the-art competitors in both quantitative and qualitative evaluations. Full article

13 pages, 2743 KiB  
Communication
Application of Data Sensor Fusion Using Extended Kalman Filter Algorithm for Identification and Tracking of Moving Targets from LiDAR–Radar Data
by Oscar Javier Montañez, Marco Javier Suarez and Eduardo Avendano Fernandez
Remote Sens. 2023, 15(13), 3396; https://doi.org/10.3390/rs15133396 - 04 Jul 2023
Cited by 6 | Viewed by 2590
Abstract
In surveillance and monitoring systems, the use of mobile vehicles or unmanned aerial vehicles (UAVs), like the drone type, provides advantages in terms of access to the environment with enhanced range, maneuverability, and safety due to the ability to move omnidirectionally to explore, identify, and perform some security tasks. These activities must be performed autonomously by capturing data from the environment; usually, the data present errors and uncertainties that impact the recognition and resolution in the detection and identification of objects. The resolution in the acquisition of data can be improved by integrating data sensor fusion systems to measure the same physical phenomenon from two or more sensors by retrieving information simultaneously. This paper uses the constant turn and rate velocity (CTRV) kinematic model of a drone but includes the angular velocity not considered in previous works as a complementary alternative in Lidar and Radar data sensor fusion retrieved using UAVs and applying the extended Kalman filter (EKF) for the detection of moving targets. The performance of the EKF is evaluated by using a dataset that jointly includes position data captured from a LiDAR and a Radar sensor for an object in movement following a trajectory with sudden changes. Additive white Gaussian noise is then introduced into the data to degrade the data. Then, the root mean square error (RMSE) versus the increase in noise power is evaluated, and the results show an improvement of 0.4 for object detection over other conventional kinematic models that do not consider significant trajectory changes. Full article
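As a concrete illustration of the predict–update cycle behind this kind of LiDAR–Radar fusion, the sketch below implements a generic EKF step in NumPy with a polar (range, bearing) radar measurement. It is a minimal stand-in, not the authors' CTRV model: the motion model here is plain constant velocity, and all matrices are illustrative. A LiDAR update would use the same `ekf_update` with a linear `h` and a constant Jacobian selecting the position components.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state and covariance with a linear(ized) motion model."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Fuse one measurement via the measurement model h, linearized as H at x."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def radar_h(x):
    """Polar (range, bearing) measurement of a planar state [px, py, vx, vy]."""
    px, py = x[0], x[1]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])

def radar_jacobian(x):
    """Jacobian of radar_h with respect to the state, evaluated at x."""
    px, py = x[0], x[1]
    r2 = px * px + py * py
    r = np.sqrt(r2)
    return np.array([
        [px / r,   py / r,  0.0, 0.0],
        [-py / r2, px / r2, 0.0, 0.0],
    ])
```

Each radar (or LiDAR) sample triggers one predict step followed by one update step; the update shrinks the state covariance in the measured directions.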

11 pages, 2729 KiB  
Article
A Prediction Method of Ionospheric hmF2 Based on Machine Learning
by Jian Wang, Qiao Yu, Yafei Shi and Cheng Yang
Remote Sens. 2023, 15(12), 3154; https://doi.org/10.3390/rs15123154 - 16 Jun 2023
Cited by 1 | Viewed by 1025
Abstract
The ionospheric F2 layer is the essential layer in the propagation of high-frequency radio waves, and the peak electron density height of the ionospheric F2 layer (hmF2) is one of its important parameters. To improve the prediction accuracy of hmF2, and thereby further improve HF skywave propagation prediction and communication frequency selection, we present an interpretable long-term prediction model of hmF2 using the statistical machine learning (SML) method. Taking Moscow station as an example, this method was tested using ionospheric observation data from August 2011 to October 2016. The model requires only the sunspot number, month, and universal time as inputs to predict hmF2 for the corresponding time. Finally, we compare the predicted results of the proposed model with those of the International Reference Ionosphere (IRI) model to verify its stability and reliability. The results show that, compared with the IRI model, the average statistical RMSE decreased by 5.20 km, and the RRMSE decreased by 1.78%. This method is expected to improve ionospheric parameter prediction accuracy on a global scale. Full article

19 pages, 17258 KiB  
Article
An Integrated Framework for Spatiotemporally Merging Multi-Sources Precipitation Based on F-SVD and ConvLSTM
by Sheng Sheng, Hua Chen, Kangling Lin, Nie Zhou, Bingru Tian and Chong-Yu Xu
Remote Sens. 2023, 15(12), 3135; https://doi.org/10.3390/rs15123135 - 15 Jun 2023
Cited by 1 | Viewed by 1121
Abstract
To improve the accuracy and reliability of precipitation estimation, numerous models based on machine learning technology have been developed for integrating data from multiple sources. However, little attention has been paid to extracting the spatiotemporal correlation patterns between satellite products and rain gauge observations during the merging process. This paper focuses on this issue by proposing an integrated framework to generate an accurate and reliable spatiotemporal estimation of precipitation. The proposed framework integrates Funk-Singular Value Decomposition (F-SVD) from recommender systems, to achieve an accurate spatial distribution of precipitation based on the spatiotemporal interpolation of rain gauge observations, and Convolutional Long Short-Term Memory (ConvLSTM), to merge precipitation data from the interpolation results and satellite observations by exploiting the spatiotemporal correlation pattern between them. The framework (FS-ConvLSTM) is utilized to obtain hourly precipitation merging data with a resolution of 0.1° in the Jianxi Basin, southeast China, from both rain gauge data and Global Precipitation Measurement (GPM) data from 2006 to 2018. LSTM and Inverse Distance Weighting (IDW) models are constructed for comparison purposes. The results demonstrate that the framework can not only provide a more accurate precipitation distribution but also achieve better stability and reliability. Compared with the other models, it performs better in describing the variation process and capturing rainfall, and the root mean square error (RMSE) and probability of detection (POD) are improved by 63.6% and 22.9% over the original GPM, respectively. In addition, the merged precipitation combines the strengths of different data while mitigating their weaknesses and shows good agreement with observed precipitation in terms of magnitude and spatial distribution.
Consequently, the proposed framework provides a valuable tool to improve the accuracy of precipitation estimation, which can have important implications for water resource management and natural disaster preparedness. Full article
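The F-SVD component of the framework is a matrix-factorization technique originally popularized for recommender systems. A minimal NumPy sketch of Funk-style SVD is shown below: it fills the unobserved cells of a sparse matrix (here standing in for a gauge-by-time precipitation grid) by stochastic gradient descent on the observed entries. The hyperparameters are illustrative, not the authors'.

```python
import numpy as np

def funk_svd(R, mask, k=2, lr=0.02, reg=0.01, epochs=800, seed=0):
    """Complete matrix R, observed where mask is True, as a rank-k product U @ V.T."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            e = R[i, j] - U[i] @ V[j]             # residual at an observed cell
            ui = U[i].copy()                       # keep old U[i] for the V step
            U[i] += lr * (e * V[j] - reg * ui)     # regularized SGD on both factors
            V[j] += lr * (e * ui - reg * V[j])
    return U @ V.T                                 # dense completed matrix
```

The completed matrix then plays the role of the spatially interpolated gauge field that the ConvLSTM stage merges with the satellite product.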

20 pages, 11759 KiB  
Article
Estimation of the Two-Dimensional Direction of Arrival for Low-Elevation and Non-Low-Elevation Targets Based on Dilated Convolutional Networks
by Guoping Hu, Fangzheng Zhao and Bingqi Liu
Remote Sens. 2023, 15(12), 3117; https://doi.org/10.3390/rs15123117 - 14 Jun 2023
Cited by 1 | Viewed by 902
Abstract
This paper addresses the problem of the two-dimensional direction-of-arrival (2D DOA) estimation of low-elevation or non-low-elevation targets using L-shaped uniform and sparse arrays by analyzing the signal models’ features and their mapping to 2D DOA. This paper proposes a 2D DOA estimation algorithm based on the dilated convolutional network model, which consists of two components: a dilated convolutional autoencoder and a dilated convolutional neural network. If there are targets at low elevation, the dilated convolutional autoencoder suppresses the multipath signal and outputs a new signal covariance matrix as the input of the dilated convolutional neural network to directly perform 2D DOA estimation in the absence of a low-elevation target. The algorithm employs 3D convolution to fully retain and extract features. The simulation experiments and the analysis of their results revealed that for both L-shaped uniform and L-shaped sparse arrays, the dilated convolutional autoencoder could effectively suppress the multipath signals without affecting the direct wave and non-low-elevation targets, whereas the dilated convolutional neural network could effectively achieve 2D DOA estimation with a matching rate and an effective ratio of pitch and azimuth angles close to 100% without the need for additional parameter matching. Under the condition of a low signal-to-noise ratio, the estimation accuracy of the proposed algorithm was significantly higher than that of the traditional DOA estimation. Full article
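The "dilated" convolutions this network is built on differ from ordinary convolutions only in that the kernel taps are spaced apart, which enlarges the receptive field without adding parameters. A minimal 1-D NumPy illustration follows (the paper uses 3-D convolutions; this sketch only demonstrates the dilation mechanism):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D convolution whose taps are spaced `dilation` samples apart."""
    k = len(w)
    span = (k - 1) * dilation + 1            # receptive field of one output sample
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out
```

With `dilation=1` this is an ordinary convolution; doubling the dilation at each layer grows the receptive field exponentially while the parameter count stays fixed.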

28 pages, 2703 KiB  
Article
Network Collaborative Pruning Method for Hyperspectral Image Classification Based on Evolutionary Multi-Task Optimization
by Yu Lei, Dayu Wang, Shenghui Yang, Jiao Shi, Dayong Tian and Lingtong Min
Remote Sens. 2023, 15(12), 3084; https://doi.org/10.3390/rs15123084 - 13 Jun 2023
Cited by 1 | Viewed by 1242
Abstract
Neural network models for hyperspectral images classification are complex and therefore difficult to deploy directly onto mobile platforms. Neural network model compression methods can effectively optimize the storage space and inference time of the model while maintaining the accuracy. Although automated pruning methods can avoid designing pruning rules, they face the problem of search efficiency when optimizing complex networks. In this paper, a network collaborative pruning method is proposed for hyperspectral image classification based on evolutionary multi-task optimization. The proposed method allows classification networks to perform the model pruning task on multiple hyperspectral images simultaneously. Knowledge (the important local sparse structure of the network) is automatically searched and updated by using knowledge transfer between different tasks. The self-adaptive knowledge transfer strategy based on historical information and dormancy mechanism is designed to avoid possible negative transfer and unnecessary consumption of computing resources. The pruned networks can achieve high classification accuracy on hyperspectral data with limited labeled samples. Experiments on multiple hyperspectral images show that the proposed method can effectively realize the compression of the network model and the classification of hyperspectral images. Full article

22 pages, 4060 KiB  
Article
Adversarial Robustness Enhancement of UAV-Oriented Automatic Image Recognition Based on Deep Ensemble Models
by Zihao Lu, Hao Sun and Yanjie Xu
Remote Sens. 2023, 15(12), 3007; https://doi.org/10.3390/rs15123007 - 08 Jun 2023
Viewed by 1479
Abstract
Deep neural networks (DNNs) have been widely utilized in automatic visual navigation and recognition on modern unmanned aerial vehicles (UAVs), achieving state-of-the-art performances. However, DNN-based visual recognition systems on UAVs show serious vulnerability to adversarial camouflage patterns on targets and well-designed imperceptible perturbations in real-time images, which poses a threat to safety-related applications. Considering a scenario in which a UAV is suffering from adversarial attack, in this paper, we investigate and construct two ensemble approaches with CNN and transformer for both proactive (i.e., generate robust models) and reactive (i.e., adversarial detection) adversarial defense. They are expected to be secure under attack and adapt to the resource-limited environment on UAVs. Specifically, the probability distributions of output layers from base DNN models in the ensemble are combined in the proactive defense, which mainly exploits the weak adversarial transferability between the CNN and transformer. For the reactive defense, we integrate the scoring functions of several adversarial detectors with the hidden features and average the output confidence scores from ResNets and ViTs as a second integration. To verify their effectiveness in the recognition task of remote sensing images, we conduct experiments on both optical and synthetic aperture radar (SAR) datasets. We find that the ensemble model in proactive defense performs as well as three popular counterparts, and both of the ensemble approaches can achieve much more satisfactory results than a single base model/detector, which effectively alleviates adversarial vulnerability without extra re-training. In addition, we establish a one-stop platform for conveniently evaluating adversarial robustness and performing defense on recognition models called AREP-RSIs, which is beneficial for the future research of the remote sensing field. Full article
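The proactive defense combines the output probability distributions of the base models. In its simplest form, probability-level fusion is a softmax average, as in this NumPy sketch; the logits here are stand-ins for the outputs of the ResNet and ViT base models:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_list):
    """Average the class-probability distributions of several base models."""
    probs = [softmax(l) for l in logits_list]
    return np.mean(probs, axis=0)
```

Because the CNN and transformer members exhibit weak adversarial transferability to each other, an attack that fools one base model tends to be outvoted in the averaged distribution.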

16 pages, 3883 KiB  
Article
Probabilistic Wildfire Segmentation Using Supervised Deep Generative Model from Satellite Imagery
by Ata Akbari Asanjan, Milad Memarzadeh, Paul Aaron Lott, Eleanor Rieffel and Shon Grabbe
Remote Sens. 2023, 15(11), 2718; https://doi.org/10.3390/rs15112718 - 24 May 2023
Cited by 2 | Viewed by 2170
Abstract
Wildfires are one of the major disasters among many and are responsible for more than 6 million acres burned in the United States alone every year. Accurate, insightful, and timely wildfire detection is needed to help authorities mitigate and prevent further destruction. Uncertainty quantification is always a crucial part of the detection of natural disasters, such as wildfires, and modeling products can be misinterpreted without proper uncertainty quantification. In this study, we propose a supervised deep generative machine-learning model that generates stochastic wildfire detection, allowing fast and comprehensive uncertainty quantification for individual and collective events. In the proposed approach, we also aim to address the patchy and discontinuous Moderate Resolution Imaging Spectroradiometer (MODIS) wildfire product by training the proposed model with MODIS raw and combined bands to detect fire. This approach allows us to generate diverse but plausible segmentations to represent the disagreements regarding the delineation of wildfire boundaries by subject matter experts. The proposed approach generates stochastic segmentation via two model streams in which one learns meaningful stochastic latent distributions, and the other learns the visual features. Two model branches join eventually to become a supervised stochastic image-to-image wildfire detection model. The model is compared to two baseline stochastic machine-learning models: (1) with permanent dropout in training and test phases and (2) with Stochastic ReLU activations. The visual and statistical metrics demonstrate better agreements between the ground truth and the proposed model segmentations. Furthermore, we used multiple scenarios to evaluate the model comprehension, and the proposed Probabilistic U-Net model demonstrates a better understanding of the underlying physical dynamics of wildfires compared to the baselines. Full article

32 pages, 24237 KiB  
Article
A Pattern Classification Distribution Method for Geostatistical Modeling Evaluation and Uncertainty Quantification
by Chen Zuo, Zhuo Li, Zhe Dai, Xuan Wang and Yue Wang
Remote Sens. 2023, 15(11), 2708; https://doi.org/10.3390/rs15112708 - 23 May 2023
Viewed by 1353
Abstract
Geological models are essential components in various applications. To generate reliable realizations, the geostatistical method focuses on reproducing spatial structures from training images (TIs). Moreover, uncertainty plays an important role in Earth systems. It is beneficial for creating an ensemble of stochastic realizations with high diversity. In this work, we applied a pattern classification distribution (PCD) method to quantitatively evaluate geostatistical modeling. First, we proposed a correlation-driven template method to capture geological patterns. According to the spatial dependency of the TI, region growing and elbow-point detection were launched to create an adaptive template. Second, a combination of clustering and classification was suggested to characterize geological realizations. Aiming at simplifying parameter specification, the program employed hierarchical clustering and decision tree to categorize geological structures. Third, we designed a stacking framework to develop the multi-grid analysis. The contribution of each grid was calculated based on the morphological characteristics of TI. Our program was extensively examined by a channel model, a 2D nonstationary flume system, 2D subglacial bed topographic models in Antarctica, and 3D sandstone models. We activated various geostatistical programs to produce realizations. The experimental results indicated that PCD is capable of addressing multiple geological categories, continuous variables, and high-dimensional structures. Full article

19 pages, 9122 KiB  
Article
Spectral-Swin Transformer with Spatial Feature Extraction Enhancement for Hyperspectral Image Classification
by Yinbin Peng, Jiansi Ren, Jiamei Wang and Meilin Shi
Remote Sens. 2023, 15(10), 2696; https://doi.org/10.3390/rs15102696 - 22 May 2023
Cited by 3 | Viewed by 2382
Abstract
Hyperspectral image (HSI) classification has rich applications in several fields. In the past few years, convolutional neural network (CNN)-based models have demonstrated great performance in HSI classification. However, CNNs are inadequate in capturing long-range dependencies, while the spectral dimension of an HSI can be regarded as long sequence information. More and more researchers are therefore focusing on the transformer, which is good at processing sequential data. In this paper, a spectral shifted-window self-attention-based transformer (SSWT) backbone network is proposed, which improves the extraction of local features compared to the classical transformer. In addition, a spatial feature extraction module (SFE) and spatial position encoding (SPE) are designed to enhance the spatial feature extraction of the transformer. The spatial feature extraction module addresses the transformer's deficiency in capturing spatial features, and the proposed spatial position encoding compensates for the loss of the spatial structure of HSI data after input to the transformer. On three public datasets, we ran extensive experiments and compared the proposed model with a number of powerful deep learning models. The outcomes demonstrate that the suggested approach is efficient and that the proposed model performs better than other advanced models. Full article

22 pages, 4224 KiB  
Article
Moving Point Target Detection Based on Temporal Transient Disturbance Learning in Low SNR
by Weihua Gao, Wenlong Niu, Pengcheng Wang, Yanzhao Li, Chunxu Ren, Xiaodong Peng and Zhen Yang
Remote Sens. 2023, 15(10), 2523; https://doi.org/10.3390/rs15102523 - 11 May 2023
Viewed by 1318
Abstract
Moving target detection in optical remote sensing is important for satellite surveillance and space target monitoring. Here, a new moving point target detection framework under a low signal-to-noise ratio (SNR) that uses an end-to-end network (1D-ResNet) to learn the distribution features of transient disturbances in the temporal profile (TP) formed by a target passing through a pixel is proposed. First, we converted the detection of the point target in the image into the detection of transient disturbance in the TP and established mathematical models of different TP types. Then, according to the established mathematical models of TP, we generated the simulation TP dataset to train the 1D-ResNet. In 1D-ResNet, the structure of CBR-1D (Conv1D, BatchNormalization, ReLU) was designed to extract the features of transient disturbance. As the transient disturbance is very weak, we used several skip connections to prevent the loss of features in the deep layers. After the backbone, two LBR (Linear, BatchNormalization, ReLU) modules were used for further feature extraction to classify TP and identify the locations of transient disturbances. A multitask weighted loss function to ensure training convergence was proposed. Sufficient experiments showed that this method effectively detects moving point targets with a low SNR and has the highest detection rate and the lowest false alarm rate compared to other benchmark methods. Our method also has the best detection efficiency. Full article

21 pages, 39487 KiB  
Article
SSANet: An Adaptive Spectral–Spatial Attention Autoencoder Network for Hyperspectral Unmixing
by Jie Wang, Jindong Xu, Qianpeng Chong, Zhaowei Liu, Weiqing Yan, Haihua Xing, Qianguo Xing and Mengying Ni
Remote Sens. 2023, 15(8), 2070; https://doi.org/10.3390/rs15082070 - 14 Apr 2023
Cited by 1 | Viewed by 1684
Abstract
Convolutional neural-network-based autoencoders, which integrate the spatial correlation between pixels well, have been widely used for hyperspectral unmixing and achieve excellent performance. Nevertheless, their performance is limited by the fact that they treat all spectral bands and all spatial information equally during unmixing. In this article, we propose an adaptive spectral–spatial attention autoencoder network, called SSANet, to address the mixed-pixel problem in hyperspectral images. First, we design an adaptive spectral–spatial attention module that refines spectral–spatial features by sequentially applying a spectral attention module, which selects useful spectral bands, and a spatial attention module, which filters spatial information. Second, SSANet exploits the geometric properties of endmembers in the hyperspectral image while accounting for abundance sparsity: introducing minimum-volume and sparsity regularization terms into the loss function significantly improves the endmember and abundance estimates. We evaluate SSANet on one synthetic dataset and four real hyperspectral scenes, i.e., Samson, Jasper Ridge, Houston, and Urban. The results indicate that SSANet achieves competitive unmixing performance compared with several conventional and advanced approaches in terms of root mean square error and spectral angle distance.
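The sequential spectral-then-spatial gating described here can be illustrated with a minimal NumPy sketch. The squeeze-and-excitation-style spectral gate and the mean/max-pooled spatial gate are common designs assumed for illustration; the paper's exact module layout may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spectral_attention(x, w1, w2):
    """Reweight the spectral bands of x (B bands, H, W): pool each band,
    pass through a small bottleneck, and gate bands with weights in (0, 1)."""
    s = x.mean(axis=(1, 2))            # squeeze: one statistic per band, (B,)
    h = np.maximum(w1 @ s, 0.0)        # excitation hidden layer (ReLU)
    a = sigmoid(w2 @ h)                # per-band attention weights
    return x * a[:, None, None], a

def spatial_attention(x, w):
    """Reweight pixels with a map built from band-wise mean and max features."""
    feats = np.stack([x.mean(axis=0), x.max(axis=0)])      # (2, H, W)
    m = sigmoid(np.tensordot(w, feats, axes=([0], [0])))   # (H, W) gate
    return x * m[None, :, :], m

def spectral_spatial_attention(x, w1, w2, w):
    """Apply the spectral gate first, then the spatial gate, as in the abstract."""
    y, _ = spectral_attention(x, w1, w2)
    z, _ = spatial_attention(y, w)
    return z
```

Because both gates are sigmoids, each band and each pixel is scaled by a factor in (0, 1), so uninformative bands and noisy pixels are attenuated rather than hard-masked.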

18 pages, 26871 KiB  
Article
3D-UNet-LSTM: A Deep Learning-Based Radar Echo Extrapolation Model for Convective Nowcasting
by Shiqing Guo, Nengli Sun, Yanle Pei and Qian Li
Remote Sens. 2023, 15(6), 1529; https://doi.org/10.3390/rs15061529 - 10 Mar 2023
Cited by 8 | Viewed by 3871
Abstract
Radar echo extrapolation is a commonly used approach to convective nowcasting: the evolution of convective systems over the very short term can be foreseen from the extrapolated reflectivity images. Recently, deep neural networks have been widely applied to radar echo extrapolation and have achieved better forecasting performance than traditional approaches. However, existing methods struggle to combine predictive flexibility with the ability to capture temporal dependencies. To retain the advantages of previous networks while avoiding these limitations, this paper proposes a 3D-UNet-LSTM model with an extractor–forecaster architecture. The extractor adopts 3D-UNet to extract comprehensive spatiotemporal features from the input radar images. In the forecaster, a newly designed Seq2Seq network exploits the extracted features and uses different convolutional long short-term memory (ConvLSTM) layers to iteratively generate hidden states for successive future timestamps; a convolutional layer then transforms the hidden states into predicted radar images. We conduct 0–1 h convective nowcasting experiments on the public MeteoNet dataset. Quantitative evaluations demonstrate the effectiveness of the 3D-UNet extractor, the newly designed forecaster, and their combination, and case studies qualitatively show that the proposed model better captures the complex nonlinear evolution of convective echoes.
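The forecaster's iterative rollout of hidden states can be sketched with a plain LSTM update on flattened features. This is a deliberately simplified stand-in: the paper uses convolutional (ConvLSTM) operations on 2-D feature maps, whereas the matrix multiplies below are fully-connected surrogates chosen only to show the recurrence.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM update; W maps [x; h] to the four stacked gates (i, f, o, g)."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell state update
    h = sigmoid(o) * np.tanh(c)                   # new hidden state
    return h, c

def forecast(features, W_out, W, steps):
    """Roll the recurrent state forward to emit one prediction per future timestamp."""
    d = features.shape[0]
    h, c = np.zeros(d), np.zeros(d)
    x = features                     # extractor output seeds the forecaster
    frames = []
    for _ in range(steps):
        h, c = lstm_step(x, h, c, W)
        frames.append(W_out @ h)     # project hidden state to a predicted frame
        x = h                        # feed the state back for the next step
    return np.stack(frames)
```

The key idea the sketch preserves is that each future timestamp gets its own hidden state, generated iteratively from the extractor's features rather than decoded all at once.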

21 pages, 3709 KiB  
Article
Center-Ness and Repulsion: Constraints to Improve Remote Sensing Object Detection via RepPoints
by Lei Gao, Hui Gao, Yuhan Wang, Dong Liu and Biffon Manyura Momanyi
Remote Sens. 2023, 15(6), 1479; https://doi.org/10.3390/rs15061479 - 07 Mar 2023
Cited by 4 | Viewed by 1574
Abstract
Remote sensing object detection is a basic yet challenging task in remote sensing image understanding. Unlike the horizontal objects typical of natural images, remote sensing objects are commonly densely packed, arbitrarily oriented, and embedded in highly complex backgrounds, and existing detection methods lack an effective mechanism to exploit these characteristics and distinguish the various targets. Whereas mainstream approaches ignore the spatial interaction among targets, this paper proposes a shape-adaptive repulsion constraint on point representations to capture the geometric information of densely distributed, arbitrarily oriented remote sensing objects. Specifically, (1) we introduce a shape-adaptive center-ness quality assessment strategy that penalizes bounding boxes with a large shift from the center point, and (2) we design a novel oriented repulsion regression loss that separates densely packed targets by pulling each prediction closer to its assigned target and pushing it farther from surrounding objects. Experimental results on four challenging datasets, DOTA, HRSC2016, UCAS-AOD, and WHU-RSONE-OBB, demonstrate the effectiveness of the proposed approach.
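The attraction-plus-repulsion idea behind the regression loss can be sketched as follows. For simplicity the sketch uses axis-aligned IoU on (x1, y1, x2, y2) boxes, not the oriented boxes of the paper, and the weight `alpha` is an illustrative assumption.

```python
import numpy as np

def iou(a, b):
    """IoU of axis-aligned boxes (x1, y1, x2, y2) — a stand-in for the oriented case."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def repulsion_loss(pred, target, neighbors, alpha=0.5):
    """Attraction to the assigned target plus repulsion from surrounding objects:
    low when pred overlaps its target well and avoids its target's neighbors."""
    attract = 1.0 - iou(pred, target)                # pull toward the assigned box
    repel = sum(iou(pred, nb) for nb in neighbors)   # penalize overlap with others
    return attract + alpha * repel
```

A prediction that matches its target exactly and touches no neighbor incurs zero loss; any overlap with a surrounding object adds a penalty, which is what keeps densely packed detections from collapsing onto one another.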

20 pages, 5210 KiB  
Article
SquconvNet: Deep Sequencer Convolutional Network for Hyperspectral Image Classification
by Bing Li, Qi-Wen Wang, Jia-Hong Liang, En-Ze Zhu and Rong-Qian Zhou
Remote Sens. 2023, 15(4), 983; https://doi.org/10.3390/rs15040983 - 10 Feb 2023
Cited by 5 | Viewed by 1559
Abstract
The application of Transformer in computer vision has been among the most influential deep learning developments of the past five years. Alongside the exceptional performance of convolutional neural networks (CNNs) in hyperspectral image (HSI) classification, Transformer has begun to be applied to HSI classification, but it has not yet produced satisfactory results there. Recently, in the field of image classification, the creators of Sequencer proposed a structure that substitutes a BiLSTM2D layer for the Transformer self-attention layer and achieves satisfactory results. Accordingly, this paper proposes a network called SquconvNet that combines CNN with the Sequencer block to improve hyperspectral classification. We conducted rigorous HSI classification experiments on three standard benchmark datasets to evaluate the proposed method. The experimental results show that it has clear advantages in terms of classification accuracy and stability.
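The Sequencer idea of replacing self-attention with recurrent mixing over the two spatial axes can be illustrated with a toy sketch. A single-direction tanh RNN stands in for the BiLSTM2D layer here, purely to show the row-then-column sweep; the real block is bidirectional and learned.

```python
import numpy as np

def rnn_sweep(x, Wx, Wh):
    """Sweep a tanh RNN along the first axis of x (L, D); return all hidden states."""
    h = np.zeros(Wh.shape[0])
    out = []
    for t in range(x.shape[0]):
        h = np.tanh(Wx @ x[t] + Wh @ h)
        out.append(h)
    return np.stack(out)

def sequencer_mix(x, Wx, Wh):
    """Mix an (H, W, D) feature map along rows, then columns — the recurrent
    substitute for self-attention's global spatial mixing."""
    H, W, D = x.shape
    rows = np.stack([rnn_sweep(x[i], Wx, Wh) for i in range(H)])         # along width
    cols = np.stack([rnn_sweep(rows[:, j], Wx, Wh) for j in range(W)])   # along height
    return cols.transpose(1, 0, 2)   # back to (H, W, D)
```

After both sweeps, every output position has seen information from its entire row and column, which is how the recurrent block approximates attention's long-range spatial context at lower cost.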
