

Multi-Source Data with Remote Sensing Techniques

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (1 April 2024) | Viewed by 19968

Special Issue Editors


Guest Editor
The Royal Institute of Technology (KTH), 114 28 Stockholm, Sweden
Interests: change detection; remote sensing; multi-temporal analysis; data fusion; wildfire monitoring; machine learning; time-series analysis
College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Interests: image processing; remote sensing; information fusion; sparse representation; compressive sensing; pattern recognition; machine learning

Special Issue Information

Dear Colleagues,

A new era has begun with open access to medium-resolution data from the Landsat, Sentinel, and GaoFen satellite series, which makes it possible to monitor Earth surface dynamics at higher spatial and temporal resolution by fusing multi-source satellite data such as SAR, multi-/hyperspectral, and LiDAR data. Recently, deep learning has achieved tremendous success in various remote sensing applications, including image classification, semantic segmentation, change detection, and time series modeling. Advances in geospatial cloud computing platforms (such as Google Earth Engine, Euro Data Cube, and Microsoft Planetary Computer) have significantly accelerated development in both the remote sensing and machine learning communities, making it possible to implement large-scale and even global applications with Earth observation big data and advanced machine learning techniques.

The special issue will focus on exploiting machine learning techniques and multi-source satellite big data to improve Earth observation applications, such as land cover mapping, change detection, urban extraction, crop mapping, vegetation and biomass dynamics, forest disturbance, and carbon release and sequestration.

Dr. Puzhao Zhang
Dr. Shutao Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can go to the submission form on that site. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data fusion
  • land cover mapping
  • change detection
  • biomass and carbon
  • urban extraction
  • forest disturbance

Published Papers (8 papers)


Research

22 pages, 5885 KiB  
Article
EGMT-CD: Edge-Guided Multimodal Transformers Change Detection from Satellite and Aerial Images
by Yunfan Xiang, Xiangyu Tian, Yue Xu, Xiaokun Guan and Zhengchao Chen
Remote Sens. 2024, 16(1), 86; https://doi.org/10.3390/rs16010086 - 25 Dec 2023
Viewed by 773
Abstract
Change detection from heterogeneous satellite and aerial images plays an increasingly important role in many fields, including disaster assessment, urban construction, and land use monitoring. Researchers have so far devoted most of their attention to change detection using homologous image pairs and achieved many remarkable results. However, heterogeneous images are sometimes necessary in practical scenarios because of missing images, emergency situations, and cloud or fog occlusion. Heterogeneous change detection still faces great challenges, especially with satellite and aerial images, where the main difficulties are the resolution gap and blurred edges. Previous studies used interpolation or shallow feature alignment before traditional homologous change detection methods, ignoring high-level feature interaction and edge information. We therefore propose a new heterogeneous change detection model based on multimodal transformers combined with edge guidance. To alleviate the resolution gap between satellite and aerial images, we design an improved spatially aligned transformer (SP-T) with a sub-pixel module that aligns the satellite features to the same size as the aerial ones, supervised by a token loss. Moreover, we introduce an edge detection branch that guides the change features with object edges through an auxiliary edge-change loss. Finally, we conduct extensive experiments to verify the effectiveness and superiority of the proposed model (EGMT-CD) on a new satellite-aerial heterogeneous change dataset, named SACD. The experiments show that EGMT-CD outperforms many previous change detection methods and demonstrates its potential for heterogeneous change detection from satellite-aerial images.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)
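
The abstract's sub-pixel alignment step can be illustrated with the standard pixel-shuffle rearrangement, which trades channels for spatial resolution. This is a generic sketch of that building block, not the paper's actual SP-T implementation, and the array shapes are hypothetical:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r).

    This is the 'sub-pixel' upsampling idea: channels are traded for
    spatial resolution, so low-resolution satellite features can be
    brought onto the grid of the higher-resolution aerial features.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0
    c = c2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    out = x.reshape(c, r, r, h, w).transpose(0, 3, 1, 4, 2)
    return out.reshape(c, h * r, w * r)

# toy feature map: 4 channels of 2x2 become 1 channel of 4x4
feat = np.arange(16, dtype=float).reshape(4, 2, 2)
up = pixel_shuffle(feat, 2)
print(up.shape)
```

In the paper's setting, a learned convolution would first expand the satellite features to `C*r^2` channels before this rearrangement.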

21 pages, 9471 KiB  
Article
Assessment and Data Fusion of Satellite-Based Precipitation Estimation Products over Ungauged Areas Based on Triple Collocation without In Situ Observations
by Xiaoqing Wu, Jialiang Zhu and Chengguang Lai
Remote Sens. 2023, 15(17), 4210; https://doi.org/10.3390/rs15174210 - 27 Aug 2023
Cited by 1 | Viewed by 894
Abstract
Reliable assessment of satellite-based precipitation estimation (SPE) products, and the production of more accurate precipitation data by fusing them, is typically challenging in sparsely gauged and ungauged areas. Triple collocation (TC) is a novel assessment approach that does not require gauge observations and therefore offers a feasible solution to this problem. This study comprehensively validates TC for assessing SPEs and fuses multiple SPEs using the TC-based merging (TCM) approach. The study area is the Tibetan Plateau (TP), a typical area lacking gauge observations. Three widely used SPEs are examined: the Integrated Multi-satellite Retrievals for Global Precipitation Measurement (IMERG) "early run" product (IMERG-E), the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) Dynamic Infrared product (PDIR), and the Climate Prediction Center (CPC) morphing technique (CMORPH). Validation shows that TC can effectively assess the SPEs' accuracy, derive their spatial accuracy patterns, and reveal their accuracy ranking; it can also detect accuracy patterns that are difficult to obtain with traditional approaches. The data fusion results show that TCM incorporates the regional advantages of the individual SPEs and provides more accurate precipitation data than the original SPEs, demonstrating that data fusion is reasonable and reliable in ungauged areas. In general, the TC approach performs well for both the assessment and the fusion of SPEs and, because it does not rely on gauge observations, is more applicable than other methods in the TP and other areas lacking gauge data.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)
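
The covariance form of triple collocation, which the paper applies to precipitation products, can be sketched with synthetic data. The product names in the comments are only placeholders mirroring the abstract, and the inverse-error-variance merge is a minimal stand-in for the TCM procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "truth" precipitation and three products with independent errors
truth = rng.gamma(2.0, 2.0, size=20000)
x = truth + rng.normal(0, 1.0, truth.size)   # e.g. an IMERG-E-like product
y = truth + rng.normal(0, 2.0, truth.size)   # e.g. a PDIR-like product
z = truth + rng.normal(0, 3.0, truth.size)   # e.g. a CMORPH-like product

def tc_error_variances(x, y, z):
    """Covariance-based triple collocation error variances.

    Assumes the three products observe the same truth with mutually
    independent, zero-mean errors; no gauge data is needed.
    """
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez

ex, ey, ez = tc_error_variances(x, y, z)

# merge: weight each product by the inverse of its estimated error variance
w = np.array([1 / ex, 1 / ey, 1 / ez])
w /= w.sum()
merged = w[0] * x + w[1] * y + w[2] * z
```

The merged series has a lower error variance than any single input, which is the effect the abstract reports for TCM over the original SPEs.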

20 pages, 10642 KiB  
Article
Flood Analysis Using Multi-Scale Remote Sensing Observations in Laos
by Phonekham Hansana, Xin Guo, Shuo Zhang, Xudong Kang and Shutao Li
Remote Sens. 2023, 15(12), 3166; https://doi.org/10.3390/rs15123166 - 18 Jun 2023
Cited by 1 | Viewed by 2691
Abstract
Heavy rains regularly hit Laos countrywide and cause serious floods that affect local agriculture, households, and the economy. It is therefore crucial to monitor flooding in Laos to better understand flood patterns and characteristics. This paper analyzes the influence of flooding in Laos with multi-source data, e.g., Synthetic Aperture Radar (SAR), optical multi-spectral images, and geographic information system data. First, the flood areas in Laos from 2018 to 2022 are detected using a decision fusion method. Based on the detected flood areas and the global Land Use/Land Cover (LULC) product, the countrywide impact of the floods is analyzed at the macro scale. Second, taking Vientiane Capital as a case study area, a flood forecasting method is applied to estimate the risk of flooding. Finally, optical images before and after a flood event are extracted for a close-up comparison at the micro scale. Based on this multi-scale analysis, floods in Laos are found to be predominantly concentrated in the flat areas near the Mekong River, with a decreasing trend over time, which could be helpful for flood management and mitigation strategies. The validation results across the five-year period show an average mIoU of 0.7782, an F1 score of 0.7255, and an overall accuracy of 0.9854.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)
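
The decision fusion step mentioned in the abstract is not specified in detail here; a minimal majority-vote fusion of binary flood masks, together with an IoU metric of the kind reported (mIoU), might look like this, with entirely synthetic toy masks:

```python
import numpy as np

def majority_fusion(masks):
    """Fuse binary flood masks from several detectors by majority vote:
    a pixel is flooded if more than half of the detectors flag it."""
    stack = np.stack(masks)
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

def iou(pred, ref):
    """Intersection over union between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0

# three hypothetical per-sensor flood maps over a tiny 3x3 tile
m1 = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
m2 = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
m3 = np.array([[1, 0, 0], [1, 0, 0], [0, 0, 1]])
fused = majority_fusion([m1, m2, m3])
```

Voting suppresses isolated disagreements (the lone `1` in `m3`'s corner is dropped), which is the usual motivation for fusing SAR- and optical-derived flood maps.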

16 pages, 5353 KiB  
Article
Land Use and Land Cover Mapping with VHR and Multi-Temporal Sentinel-2 Imagery
by Suzanna Cuypers, Andrea Nascetti and Maarten Vergauwen
Remote Sens. 2023, 15(10), 2501; https://doi.org/10.3390/rs15102501 - 10 May 2023
Cited by 3 | Viewed by 6124
Abstract
Land Use/Land Cover (LULC) mapping is the first step in monitoring urban sprawl and its environmental, economic, and societal impacts. While satellite imagery and vegetation indices are commonly used for LULC mapping, the limited resolution of these images can hamper object recognition for Geographic Object-Based Image Analysis (GEOBIA). In this study, we utilize very high-resolution (VHR) optical imagery with a resolution of 50 cm to improve object recognition for GEOBIA LULC classification. We focus on the city of Nice, France, and identify ten LULC classes using a Random Forest classifier in Google Earth Engine. We investigate the impact of adding Gray-Level Co-occurrence Matrix (GLCM) texture information and spectral indices with their temporal components, such as maximum value, standard deviation, phase, and amplitude, derived from the multi-spectral and multi-temporal Sentinel-2 imagery, with the aim of identifying which input features yield the largest increase in accuracy. The results show that adding a single VHR image improves the classification accuracy from 62.62% to 67.05%, especially when the spectral indices and temporal analysis are not included. The impact of the GLCM is similar but smaller than that of the VHR image. Overall, the inclusion of temporal analysis improves the classification accuracy to 74.30%. The blue band of the VHR image had the largest impact on the classification, followed by the amplitude of the green-red vegetation index and the phase of the normalized multi-band drought index.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)
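
GLCM texture features such as those added in this study can be computed as below. This is a generic textbook GLCM for a single pixel offset plus the contrast statistic, not the Earth Engine implementation the authors used, and the toy image is hypothetical:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one pixel offset (dx, dy):
    counts how often gray level i co-occurs with gray level j, then
    normalizes the counts to probabilities."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over i, j of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

# toy 3-level image; horizontal neighbors only
img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]], dtype=int)
p = glcm(img, levels=3)
print(contrast(p))
```

In practice several offsets and statistics (contrast, entropy, correlation, etc.) are computed per band and stacked as extra classifier features.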

29 pages, 20897 KiB  
Article
Effect of the Synergetic Use of Sentinel-1, Sentinel-2, LiDAR and Derived Data in Land Cover Classification of a Semiarid Mediterranean Area Using Machine Learning Algorithms
by Carmen Valdivieso-Ros, Francisco Alonso-Sarria and Francisco Gomariz-Castillo
Remote Sens. 2023, 15(2), 312; https://doi.org/10.3390/rs15020312 - 05 Jan 2023
Cited by 8 | Viewed by 1883
Abstract
Land cover classification in semiarid areas is a difficult task that has been tackled with different strategies, such as the use of normalized indices, texture metrics, and the combination of images from different dates or different sensors. In this paper we present the results of an experiment using three sensors (Sentinel-1 SAR, Sentinel-2 MSI, and LiDAR), four dates, and several normalized indices and texture metrics to classify a semiarid area. Three machine learning algorithms were used: Random Forest, Support Vector Machines, and Multilayer Perceptron; Maximum Likelihood served as a baseline classifier. The synergetic use of all these sources resulted in a significant increase in accuracy, with Random Forest reaching the highest accuracy. However, the large number of features (126) calls for feature selection to reduce this figure. After applying the Variance Inflation Factor and Random Forest feature importance, the number of features was reduced to 62. The final overall accuracy was 0.91 ± 0.005 (α = 0.05) with a kappa index of 0.898 ± 0.006 (α = 0.05). Most of the observed confusions are easily explicable and do not represent a significant difference in agronomic terms.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)
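
The Variance Inflation Factor used above for feature selection can be sketched directly from its definition, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing feature j on the remaining features; the data here are synthetic, with one deliberately collinear column:

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor per column of X: regress each feature
    (with intercept) on the remaining features and return 1 / (1 - R^2).
    Large values flag redundant, collinear features."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(1)
a = rng.normal(size=500)
b = rng.normal(size=500)
# column 2 is nearly a copy of column 0, so both get inflated VIFs
X = np.column_stack([a, b, a + 0.05 * rng.normal(size=500)])
v = vif(X)
```

A common rule of thumb drops features with VIF above 5 or 10 before training, which is the kind of reduction (126 to 62 features) the paper performs.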

16 pages, 2482 KiB  
Article
Using Deep Learning to Model Elevation Differences between Radar and Laser Altimetry
by Alex Horton, Martin Ewart, Noel Gourmelen, Xavier Fettweis and Amos Storkey
Remote Sens. 2022, 14(24), 6210; https://doi.org/10.3390/rs14246210 - 08 Dec 2022
Viewed by 1662
Abstract
Satellite and airborne observations of surface elevation are critical to understanding climatic and glaciological processes and quantifying their impact on changes in ice masses and sea level contribution. With the growing number of dedicated airborne campaigns and experimental and operational satellite missions, the science community has access to unprecedented and ever-increasing data. Combining elevation datasets offers potentially greater spatio-temporal coverage and improved accuracy; however, combining data from different sensor types and acquisition modes is complicated by differences in intrinsic sensor properties and processing methods. This study focuses on combining elevation measurements derived from the ICESat-2 and Operation IceBridge LiDAR instruments and from CryoSat-2's novel interferometric radar altimeter over Greenland. We develop a deep neural network based on sub-waveform information from CryoSat-2, elevation differences between radar and LiDAR, and additional inputs representing local geophysical information. A time series of maps is created showing observed LiDAR-radar differences and neural network model predictions. The neural network recreates the mean LiDAR vs. interferometric radar adjustments and their broad spatial and temporal trends. It also predicts radar-LiDAR differences with respect to waveform parameters better than a simple linear model; however, it underestimates point-level adjustments and the magnitudes of the spatial and temporal trends.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)

24 pages, 3785 KiB  
Article
GOES-R Time Series for Early Detection of Wildfires with Deep GRU-Network
by Yu Zhao and Yifang Ban
Remote Sens. 2022, 14(17), 4347; https://doi.org/10.3390/rs14174347 - 01 Sep 2022
Cited by 4 | Viewed by 2392
Abstract
Early detection of wildfires from sun-synchronous orbit satellites has been limited by their low temporal resolution and by the fast spread of wildfires in the early stage. The Advanced Baseline Imager (ABI) on NOAA's geostationary weather satellites GOES-R can acquire images every 15 min at 2 km spatial resolution and has been used for early fire detection. However, advanced processing algorithms are needed to provide timely and reliable detection of wildfires. In this research, a deep learning framework based on Gated Recurrent Units (GRU) is proposed to detect wildfires at an early stage using GOES-R dense time series data. The GRU model maintains good temporal-modelling performance while keeping a simple architecture, which makes it suitable for efficiently processing time-series data. Thirty-six wildfires in North and South America under the coverage of the GOES-R satellites were selected to assess the effectiveness of the GRU method. The detection times based on GOES-R are compared with those of the 375 m resolution VIIRS active fire products in NASA's Fire Information for Resource Management System (FIRMS). The results show that the GRU-based GOES-R detections are earlier than those of the VIIRS active fire products in most of the study areas. The proposed method also locates the active fire at an early stage more precisely than the GOES-R Active Fire Product in mid- and low-latitude regions.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)
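
The GRU recurrence at the core of the proposed framework can be sketched in a few lines. This is the standard GRU cell run over a toy brightness time series with hypothetical dimensions and random weights, not the authors' trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(x, h, W, U, b):
    """One GRU step; W, U, b stack the update (z), reset (r) and
    candidate (n) parameters along their first axis."""
    def sig(a):
        return 1.0 / (1.0 + np.exp(-a))
    z = sig(W[0] @ x + U[0] @ h + b[0])            # update gate
    r = sig(W[1] @ x + U[1] @ h + b[1])            # reset gate
    n = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])  # candidate state
    return (1 - z) * h + z * n                     # blend old and new

d_in, d_h, T = 4, 8, 15                  # e.g. ABI channels, hidden size, steps
W = rng.normal(0, 0.3, (3, d_h, d_in))
U = rng.normal(0, 0.3, (3, d_h, d_h))
b = np.zeros((3, d_h))

series = rng.normal(size=(T, d_in))      # one pixel's brightness time series
h = np.zeros(d_h)
for x in series:
    h = gru_step(x, h, W, U, b)
# h now summarizes the sequence; a linear head on h would produce the
# fire / no-fire score for this pixel
```

The gating lets the model retain pre-fire context while reacting quickly to the brightness jump of an ignition, which is why a recurrent model suits dense 15 min time series.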

19 pages, 7240 KiB  
Article
Fast Superpixel-Based Non-Window CFAR Ship Detector for SAR Imagery
by Liang Zhang, Zhijun Zhang, Shengtao Lu, Deliang Xiang and Yi Su
Remote Sens. 2022, 14(9), 2092; https://doi.org/10.3390/rs14092092 - 27 Apr 2022
Cited by 11 | Viewed by 2040
Abstract
Ship detection in high-resolution synthetic aperture radar (SAR) images has attracted great attention, and constant false alarm rate (CFAR) detection algorithms are widely used. However, the detection performance of CFAR is easily affected by speckle noise. Moreover, the sliding window technique cannot effectively differentiate between clutter and target pixels and easily leads to a high computational load. In this paper, we propose a new superpixel-based non-window CFAR ship detection method for SAR images, which introduces superpixels into CFAR detection to resolve these drawbacks. First, our previously proposed fast density-based spatial clustering of applications with noise (DBSCAN) superpixel generation method is used to produce superpixels for SAR images. Under the assumption that SAR data obey a gamma distribution, a superpixel dissimilarity is defined. Superpixels can then be used to accurately estimate the clutter parameters for the tested pixel, even in multi-target situations, avoiding the drawbacks of the sliding window in traditional CFAR. Moreover, a local superpixel contrast is proposed to optimize CFAR detection, which eliminates numerous clutter false alarms such as man-made urban areas and low bushes. Experimental results on real SAR images indicate that the proposed method achieves ship detection with higher speed and accuracy than other state-of-the-art methods.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)
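
A CFAR threshold under the gamma clutter assumption stated in the abstract can be sketched as follows. The method-of-moments fit and Monte Carlo quantile are simplifications (the paper's superpixel dissimilarity and parameter estimation are more involved), and all sample values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_cfar_threshold(clutter, pfa, rng):
    """CFAR threshold for gamma-distributed clutter.

    Shape and scale are fitted by the method of moments from clutter
    (e.g. neighboring-superpixel) samples; the (1 - pfa) quantile is
    approximated by Monte Carlo to avoid a scipy dependency here.
    """
    mean, var = clutter.mean(), clutter.var()
    shape, scale = mean**2 / var, var / mean
    sim = rng.gamma(shape, scale, size=200_000)
    return np.quantile(sim, 1.0 - pfa)

# simulated sea-clutter intensities standing in for superpixel samples
clutter = rng.gamma(2.0, 1.0, size=5000)
thr = gamma_cfar_threshold(clutter, pfa=1e-3, rng=rng)

# pixels brighter than the threshold become ship candidates
pixels = np.array([1.2, 3.5, 14.0])
detections = pixels > thr
```

Because the threshold comes from the clutter statistics themselves, the false alarm rate stays fixed at `pfa` regardless of the local clutter level, which is the defining CFAR property.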
