

Remote Sensing Data Fusion and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 9573

Special Issue Editors

Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
Interests: satellite image processing; satellite image analysis; remote sensing; image registration; image reconstruction; image restoration; cloud cover; missing data analysis; image mosaic; image fusion; image inpainting; multitemporal analysis

Guest Editor
School of Computer Science, China University of Geosciences, Wuhan 430074, China
Interests: remote sensing information processing and applications; quality improvement of remote sensing images; data fusion; regional and global environmental changes

Guest Editor
School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
Interests: remote sensing; data fusion; terrain modelling and analysis

Special Issue Information

Dear Colleagues,

Remote sensing technology is widely used because it is one of the most effective ways to observe the Earth. A variety of remote sensing platforms exist, including ground-based, airborne, and spaceborne platforms, which carry optical, infrared, radar, and lidar sensors. The processing methods and practical applications of remote sensing data have drawn increasing interest from the remote sensing community. The abundance of remote sensing data also presents new opportunities and challenges for researchers. Remote sensing data fusion from multiple sensors has greatly benefited many applications that require more extensive temporal, spatial, or spectral information than any individual sensor can provide.

This Special Issue derives its title and origin from the 8th Youth Geosciences Forum, held on 5–8 May 2023 (http://www.qndxlt.com/index.html). However, it is not limited to attendees of the conference and is open to all contributors with relevant work to submit. This Special Issue aims to collect the most recent research and developments on remote sensing data fusion from sensors with different spatial and temporal resolutions. It will include, but is not limited to, novel image fusion algorithms based on transform-domain methods, machine learning, and other theoretical approaches. Applications in Earth observation are also welcome.

Dr. Xinghua Li
Prof. Dr. Qing Cheng
Dr. Linwei Yue
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • spatio-temporal data fusion
  • multimodal data fusion
  • multitemporal data fusion
  • multi-sensor data fusion
  • deep learning for data fusion
  • image enhancement and restoration

Published Papers (10 papers)


Research

32 pages, 11156 KiB  
Article
A No-Reference Quality Assessment Method for Hyperspectral Sharpened Images via Benford’s Law
by Xiankun Hao, Xu Li, Jingying Wu, Baoguo Wei, Yujuan Song and Bo Li
Remote Sens. 2024, 16(7), 1167; https://doi.org/10.3390/rs16071167 - 27 Mar 2024
Viewed by 444
Abstract
In recent years, hyperspectral (HS) sharpening technology has received considerable attention and HS sharpened images have been widely applied. However, the quality assessment of HS sharpened images has not been well addressed and is still limited to full-reference evaluation. In this paper, a novel no-reference quality assessment method based on Benford’s law for HS sharpened images is proposed. Without a reference image, the proposed method detects fusion distortion by computing the first-digit distribution of three quality perception features in HS sharpened images, using the standard Benford’s law as a benchmark. The experiment evaluates 10 HS fusion methods on three HS datasets and selects four full-reference metrics and four no-reference metrics to compare with the proposed method. The experimental results demonstrate the superior performance of the proposed method.
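As a rough sketch of the statistic this approach builds on (not the authors' exact features or pipeline), an image's first-digit distribution can be compared against Benford's law. The choice of raw nonzero magnitudes as the feature and the chi-square-style deviation below are assumptions made for illustration:

```python
import numpy as np

def first_digit_distribution(values):
    """Empirical distribution of the leading digits 1-9 of nonzero magnitudes."""
    v = np.abs(values[values != 0]).astype(float)
    # leading digit d of x is floor(x / 10**floor(log10(x)))
    digits = (v / 10 ** np.floor(np.log10(v))).astype(int)
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

# Benford's law: P(d) = log10(1 + 1/d) for d = 1..9
benford = np.log10(1 + 1 / np.arange(1, 10))

def benford_divergence(values):
    """Chi-square-style deviation of the empirical first digits from Benford's law."""
    p = first_digit_distribution(values)
    return np.sum((p - benford) ** 2 / benford)
```

A fused image whose coefficients drift away from the Benford distribution would score a larger divergence, which is the kind of no-reference cue a method like this can exploit.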
(This article belongs to the Special Issue Remote Sensing Data Fusion and Applications)

21 pages, 8330 KiB  
Article
Deep Learning-Based Spatiotemporal Fusion Architecture of Landsat 8 and Sentinel-2 Data for 10 m Series Imagery
by Qing Cheng, Ruixiang Xie, Jingan Wu and Fan Ye
Remote Sens. 2024, 16(6), 1033; https://doi.org/10.3390/rs16061033 - 14 Mar 2024
Viewed by 700
Abstract
Medium- to high-resolution imagery is indispensable for various applications. Combining images from Landsat 8 and Sentinel-2 can improve the accuracy of observing dynamic changes on the Earth’s surface. Many researchers use Sentinel-2 10 m resolution data in conjunction with Landsat 8 30 m resolution data to generate 10 m resolution data series. However, current fusion techniques have some algorithmic weaknesses, such as simple processing of coarse or fine images, which fail to extract image features to the fullest extent, especially in rapidly changing land cover areas. To address these limitations, we propose a multiscale and attention mechanism-based residual spatiotemporal fusion network (MARSTFN) that utilizes Sentinel-2 10 m resolution data and Landsat 8 15 m resolution data as auxiliary data to upgrade Landsat 8 30 m resolution data to 10 m resolution. In this network, we utilized multiscale and attention mechanisms to extract features from coarse and fine images separately. Subsequently, the features outputted from all input branches are combined and further feature information is extracted through residual networks and skip connections. Finally, the features obtained from the residual network are merged with the feature information of the coarsely processed images from the multiscale mechanism to generate accurate prediction images. To assess the efficacy of our model, we compared it with existing models on two datasets. Results demonstrated that our fusion model outperformed baseline methods across various evaluation indicators, highlighting its ability to integrate Sentinel-2 and Landsat 8 data to produce 10 m resolution data.

16 pages, 10346 KiB  
Article
GSA-SiamNet: A Siamese Network with Gradient-Based Spatial Attention for Pan-Sharpening of Multi-Spectral Images
by Yi Gao, Mengjiao Qin, Sensen Wu, Feng Zhang and Zhenhong Du
Remote Sens. 2024, 16(4), 616; https://doi.org/10.3390/rs16040616 - 07 Feb 2024
Viewed by 641
Abstract
Pan-sharpening is a fusion process that combines a low-spatial resolution, multi-spectral image that has rich spectral characteristics with a high-spatial resolution panchromatic (PAN) image that lacks spectral characteristics. Most previous learning-based approaches rely on the scale-shift assumption, which may not be applicable in the full-resolution domain. To solve this issue, we regard pan-sharpening as a multi-task problem and propose a Siamese network with Gradient-based Spatial Attention (GSA-SiamNet). GSA-SiamNet consists of four modules: a two-stream feature extraction module, a feature fusion module, a gradient-based spatial attention (GSA) module, and a progressive up-sampling module. In the GSA module, we use Laplacian and Sobel operators to extract gradient information from PAN images. Spatial attention factors, learned from the gradient prior, are multiplied during the feature fusion, up-sampling, and reconstruction stages. These factors help to keep high-frequency information on the feature map as well as suppress redundant information. We also design a multi-resolution loss function that guides the training process under the constraints of both reduced- and full-resolution domains. The experimental results on WorldView-3 satellite images obtained in Moscow and San Juan demonstrate that our proposed GSA-SiamNet is superior to traditional and other deep learning-based methods.
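The gradient prior can be illustrated with a hand-crafted stand-in: the Sobel gradient magnitude of a PAN band normalised into a [0, 1] spatial attention map. The paper learns these factors inside a network, so this fixed map is only a sketch:

```python
import numpy as np

def gradient_attention(pan):
    """Toy spatial attention map from Sobel gradients of a PAN band.

    Strong edges get weights near 1, flat regions near 0; a learned
    network would replace this fixed normalisation.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    pad = np.pad(pan, 1, mode="edge")
    H, W = pan.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(3):                       # correlate with the 3x3 kernels
        for j in range(3):
            patch = pad[i:i + H, j:j + W]
            gx += kx[i, j] * patch
            gy += kx.T[i, j] * patch         # Sobel y is the transpose
    mag = np.hypot(gx, gy)
    m = mag.max()
    return mag / m if m > 0 else mag
```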

22 pages, 52500 KiB  
Article
Multi-View Data-Based Layover Information Compensation Method for SAR Image Mosaic
by Rui Liu, Feng Wang, Niangang Jiao, Hongjian You, Yuxin Hu, Guangyao Zhou and Yao Chen
Remote Sens. 2024, 16(3), 564; https://doi.org/10.3390/rs16030564 - 31 Jan 2024
Viewed by 574
Abstract
Currently, massive Synthetic Aperture Radar (SAR) images acquired from numerous SAR satellites have been widely utilized in various fields, and image mosaicking technology provides important support and assistance for these applications. The traditional mosaic method selects specific SAR images that can cover the region of interest (ROI) from redundant data to produce “One Map”. However, an SAR image suffers from severe geometric distortion, especially in mountainous areas, which inevitably reduces the utilization of the mosaic image. Therefore, a multi-view data-based layover information compensation (MDLIC) method for SAR image mosaic is proposed, aiming to take full advantage of multi-view data to compensate for the missing information in the layover area of the SAR image. This is performed to improve the information content of the mosaic image and realize efficient thematic information extraction and analysis. First, the calculation of the object-space extent of all images and the division of the object-space grid are completed on the basis of geometric and radiometric preprocessing. Then, according to the transformation relationship between the object-space and the image-space, the sampling rate image of each image corresponding to the object-space grid is generated, which determines the layover area and the layover degree in each image. Finally, the information compensation strategy is implemented in accordance with the sampling rate image to realize the compensation of the layover information. The feasibility and effectiveness of the MDLIC method are verified by using multiple SAR images from the Chinese Gaofen-3 01 satellite as datasets for experiments. The experimental results indicate that the MDLIC method can obtain mosaic images with richer information compared with the traditional method, while still providing satisfactory results.

21 pages, 106299 KiB  
Article
Sparse Mix-Attention Transformer for Multispectral Image and Hyperspectral Image Fusion
by Shihai Yu, Xu Zhang and Huihui Song
Remote Sens. 2024, 16(1), 144; https://doi.org/10.3390/rs16010144 - 29 Dec 2023
Viewed by 746
Abstract
Multispectral image (MSI) and hyperspectral image (HSI) fusion (MHIF) aims to address the challenge of acquiring high-resolution (HR) HSI images. This field combines a low-resolution (LR) HSI with an HR-MSI to reconstruct HR-HSIs. Existing methods directly utilize transformers to perform feature extraction and fusion. Despite the demonstrated success, there exist two limitations: (1) Employing the entire transformer model for feature extraction and fusion fails to fully harness the potential of the transformer in integrating the spectral information of the HSI and spatial information of the MSI. (2) HSIs have a strong spectral correlation and exhibit sparsity in the spatial domain. Existing transformer-based models do not exploit this physical property, which makes their methods prone to spectral distortion. To address these issues, this paper introduces a novel framework for MHIF called a Sparse Mix-Attention Transformer (SMAformer). Specifically, to fully harness the advantages of the transformer architecture, we propose a Spectral Mix-Attention Block (SMAB), which concatenates the keys and values extracted from LR-HSIs and HR-MSIs to create a new multihead attention module. This design facilitates the extraction of detailed long-range information across spatial and spectral dimensions. Additionally, to address the spatial sparsity inherent in HSIs, we incorporated a sparse mechanism within the core of the SMAB called the Sparse Spectral Mix-Attention Block (SSMAB). In the SSMAB, we compute attention maps from queries and keys and select the K most highly correlated values as the sparse-attention map. This approach enables us to achieve a sparse representation of spatial information while eliminating spatially disruptive noise. Extensive experiments conducted on three synthetic benchmark datasets, namely CAVE, Harvard, and Pavia Center, demonstrate that the SMAformer method outperforms state-of-the-art methods.
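The top-K selection inside the SSMAB can be sketched as single-head attention in plain NumPy, with scores outside each query's K best masked out before the softmax (the learned projections and multihead structure of the paper's module are omitted):

```python
import numpy as np

def sparse_topk_attention(Q, K, V, k):
    """Attention where each query attends only to its k highest-scoring keys.

    Q: (n_q, d), K: (n_k, d), V: (n_k, d_v). Scores outside the top-k are
    set to -inf before the softmax, giving an exactly sparse attention map.
    """
    scores = Q @ K.T / np.sqrt(Q.shape[1])            # (n_q, n_k)
    # column indices of the k largest scores in each row
    topk = np.argpartition(scores, -k, axis=1)[:, -k:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, topk, 0.0, axis=1)
    masked = scores + mask                             # -inf off the top-k
    weights = np.exp(masked - masked.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ V, weights
```

Because exp(-inf) is exactly zero, each row of the attention map has precisely k nonzero entries, which is the sparsity the block relies on.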

21 pages, 5231 KiB  
Article
Improving Geological Remote Sensing Interpretation via Optimal Transport-Based Point–Surface Data Fusion
by Jiahao Wu, Wei Han, Jia Chen and Sheng Wang
Remote Sens. 2024, 16(1), 53; https://doi.org/10.3390/rs16010053 - 22 Dec 2023
Viewed by 752
Abstract
High-quality geological remote sensing interpretation (GRSI) products play a vital role in a wide range of fields, including the military, meteorology, agriculture, the environment, mapping, etc. Due to the importance of GRSI products, this research aimed to improve their accuracy. Although deep-learning (DL)-based GRSI has reduced dependence on manual interpretation, the limited accuracy of multiple geological element interpretation still poses a challenge. This issue can be attributed to small inter-class differences, the uneven distribution of geological elements, sensor limitations, and the complexity of the environment. Therefore, this paper proposes a point–surface data optimal fusion method (PSDOF) to improve the accuracy of GRSI products based on optimal transport (OT) theory. PSDOF combines geological survey data (which has spatial location and geological element information, called point data) with a geological remote sensing DL interpretation product (which has limited accuracy and is called surface data) to improve the quality of the resulting output. The method performs several steps to enhance accuracy. First, it calculates the gray-scale correlation feature information for the pixels adjacent to the geological survey points. Next, it determines the distribution of the feature information for geological elements in the vicinity of the point data. Finally, it incorporates complementary information from the survey points into the geological elements’ interpretation boundary, as well as calculates the optimal energy loss for point–surface fusion, thus resulting in an optimal boundary. The experiments conducted in this study demonstrated the superiority of the proposed model in addressing the problem of the limited accuracy of GRSI products.

17 pages, 9435 KiB  
Article
Spatial and Spectral Translation of Landsat 8 to Sentinel-2 Using Conditional Generative Adversarial Networks
by Rohit Mukherjee and Desheng Liu
Remote Sens. 2023, 15(23), 5502; https://doi.org/10.3390/rs15235502 - 25 Nov 2023
Viewed by 940
Abstract
Satellite sensors like Landsat 8 OLI (L8) and Sentinel-2 MSI (S2) provide valuable multispectral Earth observations that differ in spatial resolution and spectral bands, limiting synergistic use. L8 has a 30 m resolution and a lower revisit frequency, while S2 offers up to a 10 m resolution and more spectral bands, such as red edge bands. Translating observations from L8 to S2 can increase data availability by combining their images to leverage the unique strengths of each product. In this study, a conditional generative adversarial network (CGAN) is developed to perform sensor-specific domain translation focused on green, near-infrared (NIR), and red edge bands. The models were trained on pairs of co-located L8–S2 imagery from multiple locations. The CGAN aims to downscale 30 m L8 bands to 10 m S2-like green and 20 m S2-like NIR and red edge bands. Two translation methodologies are employed: direct single-step translation from L8 to S2 and indirect multistep translation. The direct approach involves predicting the S2-like bands in a single step from L8 bands. The multistep approach uses two steps: the initial model predicts the corresponding S2-like band that is available in L8, and then the final model predicts the unavailable S2-like red edge bands from the S2-like band predicted in the first step. Quantitative evaluation reveals that both approaches result in lower spectral distortion and higher spatial correlation compared to native L8 bands. Qualitative analysis supports the superior fidelity and robustness achieved through multistep translation. By translating L8 bands to higher spatial and spectral S2-like imagery, this work increases data availability for improved Earth monitoring. The results validate CGANs for cross-sensor domain adaptation and provide a reusable computational framework for satellite image translation.

20 pages, 10884 KiB  
Article
SRTM DEM Correction Using Ensemble Machine Learning Algorithm
by Zidu Ouyang, Cui Zhou, Jian Xie, Jianjun Zhu, Gui Zhang and Minsi Ao
Remote Sens. 2023, 15(16), 3946; https://doi.org/10.3390/rs15163946 - 09 Aug 2023
Cited by 3 | Viewed by 1215
Abstract
The Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) is a widely utilized product for geological, climatic, oceanic, and ecological applications. However, the accuracy of the SRTM DEM is constrained by topography and vegetation. Using machine learning models to correct SRTM DEM with high-accuracy reference elevation observations has been proven to be useful. However, most of the reference observation-aided approaches rely on either parametric or non-parametric regression (e.g., a single machine learning model), which may lead to overfitting or underfitting and limit improvements in the accuracy of SRTM DEM products. In this study, we presented an algorithm for correcting SRTM DEM using a stacking ensemble machine learning algorithm. The proposed algorithm is capable of learning how to optimally combine the predictions from multiple well-performing machine learning models, resulting in superior performance compared to any individual model within the ensemble. The proposed approach was tested under varying relief and vegetation conditions in Hunan Province, China. The results indicate that the accuracy of the SRTM DEM products improved by approximately 46% using the presented algorithm with respect to the original SRTM DEM. In comparison to two conventional algorithms, namely linear regression and artificial neural network models, the presented algorithm demonstrated a reduction in root-mean-square errors of SRTM DEM by 28% and 12%, respectively. The approach provides a more robust tool for correcting SRTM DEM or other similar DEM products over a wide area.
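A minimal sketch of the stacking idea, with two hypothetical base learners (linear and quadratic least squares) combined by a linear meta-learner trained on out-of-fold predictions; the features and base models used in the paper differ:

```python
import numpy as np

def fit_lstsq(X, y):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    A = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def predict_lstsq(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def stack_fit_predict(X, y, X_new, n_folds=5):
    """Stack a linear and a quadratic base learner under a linear meta-learner.

    The meta-learner is trained on out-of-fold base predictions, the usual
    trick to keep the stack from overfitting the base models' training error.
    """
    quad = lambda Z: np.column_stack([Z, Z ** 2])     # quadratic feature map
    folds = np.array_split(np.arange(len(X)), n_folds)
    oof = np.zeros((len(X), 2))                        # out-of-fold predictions
    for idx in folds:
        tr = np.setdiff1d(np.arange(len(X)), idx)
        oof[idx, 0] = predict_lstsq(fit_lstsq(X[tr], y[tr]), X[idx])
        oof[idx, 1] = predict_lstsq(fit_lstsq(quad(X[tr]), y[tr]), quad(X[idx]))
    meta = fit_lstsq(oof, y)                           # combine base predictions
    base1 = predict_lstsq(fit_lstsq(X, y), X_new)
    base2 = predict_lstsq(fit_lstsq(quad(X), y), quad(X_new))
    return predict_lstsq(meta, np.column_stack([base1, base2]))
```

The out-of-fold step is what distinguishes stacking from simply averaging: the meta-learner never sees predictions a base model made on its own training data.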

17 pages, 5422 KiB  
Article
Improving the Accuracy of TanDEM-X Digital Elevation Model Using Least Squares Collocation Method
by Xingdong Shen, Cui Zhou and Jianjun Zhu
Remote Sens. 2023, 15(14), 3695; https://doi.org/10.3390/rs15143695 - 24 Jul 2023
Cited by 1 | Viewed by 1081
Abstract
The TanDEM-X Digital Elevation Model (DEM) is limited by the radar side-view imaging mode, which still has gaps and anomalies that directly affect the application potential of the data. Many methods have been used to improve the accuracy of TanDEM-X DEM, but these algorithms primarily focus on eliminating systematic errors trending over a large area in the DEM, rather than random errors. Therefore, this paper presents the least-squares collocation-based error correction algorithm (LSC-TXC) for TanDEM-X DEM, which effectively eliminates both systematic and random errors, to enhance the accuracy of TanDEM-X DEM. The experimental results demonstrate that TanDEM-X DEM corrected by the LSC-TXC algorithm reduces the root mean square error (RMSE) from 6.141 m to 3.851 m, resulting in a significant improvement in accuracy (by 37.3%). Compared to three conventional algorithms, namely Random Forest, Height Difference Fitting Neural Network and Back-Propagation Neural Network, the presented algorithm demonstrates a reduction in the RMSEs of the corrected TanDEM-X DEMs by 6.5%, 7.6%, and 18.1%, respectively. This algorithm provides an efficient tool for correcting DEMs such as TanDEM-X for a wide range of areas.
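The collocation prediction s_hat = C_ps (C_ss + C_nn)^(-1) l can be sketched with a Gaussian covariance model. The covariance function and its parameters below are placeholders; in practice they would be fitted to the empirical covariance of the DEM residuals at the reference points:

```python
import numpy as np

def lsc_predict(coords_obs, resid_obs, coords_new,
                corr_len=1000.0, signal_var=1.0, noise_var=0.1):
    """Least-squares collocation of DEM residuals at new locations.

    coords_obs: (n, 2) reference-point coordinates, resid_obs: (n,) observed
    residuals, coords_new: (m, 2) prediction points. Returns (m,) predictions
    s_hat = C_ps (C_ss + C_nn)^(-1) l under a Gaussian covariance model.
    """
    def cov(a, b):
        # squared distances between all pairs of points in a and b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-d2 / corr_len ** 2)

    C_ss = cov(coords_obs, coords_obs) + noise_var * np.eye(len(coords_obs))
    C_ps = cov(coords_new, coords_obs)
    return C_ps @ np.linalg.solve(C_ss, resid_obs)
```

Far from every reference point the prediction decays to zero, i.e. the correction reverts to the original DEM, which is the behaviour that lets collocation handle random (spatially correlated) errors rather than only large-area trends.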

22 pages, 61465 KiB  
Article
Unsupervised Low-Light Image Enhancement via Virtual Diffraction Information in Frequency Domain
by Xupei Zhang, Hanlin Qin, Yue Yu, Xiang Yan, Shanglin Yang and Guanghao Wang
Remote Sens. 2023, 15(14), 3580; https://doi.org/10.3390/rs15143580 - 17 Jul 2023
Viewed by 1327
Abstract
With the advent of deep learning, significant progress has been made in low-light image enhancement methods. However, deep learning requires enormous paired training data, which is challenging to capture in real-world scenarios. To address this limitation, this paper presents a novel unsupervised low-light image enhancement method, which first introduces the frequency-domain features of images in low-light image enhancement tasks. Our work is inspired by imagining a digital image as a spatially varying metaphoric “field of light”, then subjecting the influence of physical processes such as diffraction and coherent detection back onto the original image space via a frequency-domain to spatial-domain transformation (inverse Fourier transform). However, the mathematical model created by this physical process still requires complex manual tuning of the parameters for different scene conditions to achieve the best adjustment. Therefore, we proposed a dual-branch convolution network to estimate pixel-wise and high-order spatial interactions for dynamic range adjustment of the frequency feature of the given low-light image. Guided by the frequency feature from the “field of light” and parameter estimation networks, our method enables dynamic enhancement of low-light images. Extensive experiments have shown that our method performs well compared to state-of-the-art unsupervised methods, and its performance approximates the level of the state-of-the-art supervised methods qualitatively and quantitatively. At the same time, the lightweight network structure allows the proposed method to have extremely fast inference speed (near 150 FPS on an NVIDIA 3090 Ti GPU for an image of size 600×400×3). Furthermore, the potential benefits of our method to object detection in the dark are discussed.
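The frequency-domain round trip the abstract alludes to can be illustrated with a toy amplitude adjustment, where a fixed exponent on the Fourier amplitude stands in for the paper's learned, pixel-wise parameter estimation:

```python
import numpy as np

def frequency_domain_adjust(img, gamma=0.8):
    """Round-trip an image through the frequency domain, reshaping amplitude.

    The Fourier phase is kept and only the amplitude is modified;
    gamma < 1 compresses the amplitude range (relatively boosting weak
    frequencies), while gamma = 1 reproduces the input exactly.
    """
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    F_adj = (amp ** gamma) * np.exp(1j * phase)   # same phase, new amplitude
    return np.real(np.fft.ifft2(F_adj))
```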
