Earth Observation Using Satellite Global Images of Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: 15 June 2024 | Viewed by 12820

Special Issue Editors

DICEAM Department, University "Mediterranea" of Reggio Calabria, 89122 Reggio Calabria, Italy
Interests: remote sensing; earth observation; object based image analysis; image processing

Special Issue Information

Dear Colleagues,

Satellite remote sensing imagery is becoming increasingly important as more and more imagery is made available from various sensors, often free of charge, and as new constellations of satellites are put into orbit to enable global analyses. Its applications are continuously expanding: today they include land cover and land use mapping (in particular, the detection of urbanization, also at the global level, and the monitoring of cultivated land to detect water stress and desertification and to guide crop operations), as well as the detection and monitoring of air, land, and sea pollution and the fighting of forest wildfires.

We are pleased to invite you to contribute to this Special Issue on subjects related to the journal's scope. This Special Issue aims to explore recent progress in the field of satellite remote sensing and possible further developments.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Image analysis applications to remote sensing;
  • Image segmentation;
  • Image classification;
  • Image edge detection;
  • Change detection;
  • Image processing and pattern recognition;
  • Mathematical morphology;
  • Object Based Image Analysis;
  • Feature extraction;
  • Remote sensing applications.

We look forward to receiving your contributions.

Dr. Giuliana Bilotta
Prof. Dr. Jon Atli Benediktsson
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image analysis applications to remote sensing
  • Image segmentation
  • Image classification
  • Image edge detection
  • Change detection
  • Image processing and pattern recognition
  • Mathematical morphology
  • Object Based Image Analysis
  • Feature extraction
  • Remote sensing applications

Published Papers (5 papers)

Research

22 pages, 24632 KiB  
Article
A Transformer-Based Neural Network with Improved Pyramid Pooling Module for Change Detection in Ecological Redline Monitoring
by Yunjia Zou, Ting Shen, Zhengchao Chen, Pan Chen, Xuan Yang and Luyang Zan
Remote Sens. 2023, 15(3), 588; https://doi.org/10.3390/rs15030588 - 18 Jan 2023
Cited by 2 | Viewed by 2225
Abstract
The ecological redline defines areas where industrialization and urbanization should be prohibited. Its purpose is to establish the most stringent environmental protection system in order to meet the urgent needs of ecological function guarantee and environmental safety. Deep learning methods are now widely used in change detection tasks based on remote sensing images and can be applied directly to monitoring the ecological redline. Because convolution-based neural networks make limited use of global information, we devise a transformer-based Siamese network for change detection. We also use a transformer to design a pyramid pooling module that helps the network retain more features. Moreover, we construct a self-supervised network based on a contrastive method to obtain a pre-trained model tailored to remote sensing images, aiming to achieve better results. As the study area and data source, we chose Hebei Province, where environmental pressure is severe, and used its GF-1 satellite images. Ablation and comparison experiments show that our method has significant advantages in terms of accuracy and efficiency. We also predict large-scale areas and calculate the intersection recall rate, which confirms that our method has practical value.
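
As a rough illustration of the weight-shared (Siamese) pattern that bi-temporal change detection relies on, the sketch below encodes two co-registered acquisitions with one shared encoder and maps their feature difference to a per-pixel change map. The tiny CNN encoder, channel sizes, and input shapes are assumptions for illustration only; the paper's transformer backbone and improved pyramid pooling module are not reproduced here.

```python
# Minimal Siamese change-detection sketch (illustrative assumptions, not the
# authors' architecture): one shared encoder, per-pixel change logits.
import torch
import torch.nn as nn

class SiameseChangeDetector(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        # One encoder applied to both dates (weight tying is the Siamese idea).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Head maps the feature difference to a per-pixel change logit.
        self.head = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        return self.head(torch.abs(f1 - f2))  # change-map logits

# Two co-registered 256 x 256 RGB tiles from different dates (synthetic here).
t1 = torch.rand(1, 3, 256, 256)
t2 = torch.rand(1, 3, 256, 256)
change_logits = SiameseChangeDetector()(t1, t2)  # shape (1, 1, 256, 256)
```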

18 pages, 3856 KiB  
Article
AUnet: A Deep Learning Framework for Surface Water Channel Mapping Using Large-Coverage Remote Sensing Images and Sparse Scribble Annotations from OSM Data
by Sarah Mazhar, Guangmin Sun, Anas Bilal, Bilal Hassan, Yu Li, Junjie Zhang, Yinyi Lin, Ali Khan, Ramsha Ahmed and Taimur Hassan
Remote Sens. 2022, 14(14), 3283; https://doi.org/10.3390/rs14143283 - 08 Jul 2022
Cited by 6 | Viewed by 2227
Abstract
Water is a vital component of life that exists in a variety of forms, including oceans, rivers, ponds, streams, and canals. Automated methods for detecting, segmenting, and mapping surface water have improved significantly with advances in satellite imagery and remote sensing. Many strategies and techniques for segmenting water resources have been presented in the past. However, due to their varying width and complex appearance, water channels remain challenging to segment. Moreover, traditional supervised deep learning frameworks have been restricted by the scarcity of water channel datasets with precise water annotations. With this in mind, this research presents three main contributions. First, we curated a new dataset for water channel mapping in the Pakistani region. Instead of employing pixel-level water channel annotations, we used a weakly supervised method to extract water channels from VHR imagery, relying only on OpenStreetMap (OSM) waterways to create sparse scribble annotations. Second, we benchmarked the dataset on state-of-the-art semantic segmentation frameworks. We also proposed AUnet, an atrous-convolution-inspired deep learning network for precise water channel segmentation. The experimental results demonstrate the superior performance of the proposed AUnet model when trained with weakly supervised labels: it achieved a mean intersection over union score of 0.8791 and outperformed state-of-the-art approaches by 5.90% for the extraction of water channels.
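
The mean intersection over union (mIoU) score of 0.8791 quoted above is the standard segmentation metric; a minimal sketch of how it is computed for a two-class water/background mask follows. The class count and array shapes are assumptions, not details of the AUnet pipeline.

```python
# Illustrative mIoU computation for a binary water/background segmentation.
import numpy as np

def mean_iou(pred, label, num_classes=2):
    """Average per-class intersection-over-union, skipping absent classes."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 2, (512, 512))   # predicted water mask (synthetic)
label = np.random.randint(0, 2, (512, 512))  # reference mask (synthetic)
print(mean_iou(pred, label))
```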

21 pages, 6204 KiB  
Article
Using Deep Learning and Very-High-Resolution Imagery to Map Smallholder Field Boundaries
by Weiye Mei, Haoyu Wang, David Fouhey, Weiqi Zhou, Isabella Hinks, Josh M. Gray, Derek Van Berkel and Meha Jain
Remote Sens. 2022, 14(13), 3046; https://doi.org/10.3390/rs14133046 - 25 Jun 2022
Cited by 6 | Viewed by 3018
Abstract
The mapping of field boundaries can provide important information for increasing food production and security in agricultural systems across the globe. Remote sensing can provide a viable way to map field boundaries across large geographic extents, yet few studies have used satellite imagery to map boundaries in systems where field sizes are small, heterogeneous, and irregularly shaped. Here we used very-high-resolution WorldView-3 satellite imagery (0.5 m) and a mask region-based convolutional neural network (Mask R-CNN) to delineate smallholder field boundaries in Northeast India. We found that our models had overall moderate accuracy, with average precision values greater than 0.67 and F1 Scores greater than 0.72. We also found that our model performed equally well when applied to another site in India for which no data were used in the calibration step, suggesting that Mask R-CNN may be a generalizable way to map field boundaries at scale. Our results highlight the ability of Mask R-CNN and very-high-resolution imagery to accurately map field boundaries in smallholder systems.
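
For readers unfamiliar with Mask R-CNN, the sketch below is a generic torchvision inference example (torchvision ≥ 0.13 assumed) showing how per-instance masks are obtained from a single VHR tile. The label scheme, tile size, score threshold, and (untrained) weights are assumptions and do not reflect the calibration described in the paper.

```python
# Generic Mask R-CNN inference sketch; not the paper's trained model.
import torch
import torchvision

# Randomly initialized here; in practice one would load trained weights.
# Assumed label scheme: background + one "field" class.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

tile = torch.rand(3, 512, 512)  # a 0.5 m VHR tile scaled to [0, 1], channels-first
with torch.no_grad():
    out = model([tile])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

field_masks = out["masks"][out["scores"] > 0.5]  # keep confident field instances
```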

20 pages, 4164 KiB  
Article
Design of CGAN Models for Multispectral Reconstruction in Remote Sensing
by Brais Rodríguez-Suárez, Pablo Quesada-Barriuso and Francisco Argüello
Remote Sens. 2022, 14(4), 816; https://doi.org/10.3390/rs14040816 - 09 Feb 2022
Cited by 4 | Viewed by 2638
Abstract
Multispectral imaging methods typically require cameras with dedicated sensors that make them expensive. In some cases, these sensors are not available or existing images are RGB, so the advantages of multispectral processing cannot be exploited. To overcome this drawback, several techniques have been proposed to reconstruct the spectral reflectance of a scene from a single RGB image captured by a camera. Deep learning methods can already solve this problem with good spectral accuracy. Recently, a new type of deep learning network, the Conditional Generative Adversarial Network (CGAN), has been proposed. It is a deep learning architecture that simultaneously trains two networks (generator and discriminator), with the additional feature that both networks are conditioned on some sort of auxiliary information. This paper focuses on the use of CGANs for the reconstruction of multispectral images from RGB images. Different regression network models (convolutional neural networks, U-Net, and ResNet) have been adapted and integrated as generators in the CGAN and compared in performance for multispectral reconstruction. Experiments with the BigEarthNet database show that a CGAN with ResNet as the generator provides better results than the other deep learning networks, with a root mean square error of 316 measured over a range from 0 to 16,384.
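
The figure of merit quoted above is a root mean square error of 316 over the 0 to 16,384 digital-number range; a minimal sketch of that RMSE computation on synthetic data follows. The band count, tile size, and noise level are assumptions, not properties of the BigEarthNet experiments.

```python
# Illustrative RMSE between a reconstructed and a reference multispectral cube.
import numpy as np

def rmse(reconstructed, reference):
    diff = reconstructed.astype(np.float64) - reference.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

ref = np.random.randint(0, 16385, (12, 120, 120))  # Sentinel-2-like bands (synthetic)
rec = ref + np.random.normal(0, 300, ref.shape)     # a hypothetical reconstruction
print(rmse(rec, ref))  # roughly 300 for this synthetic noise level
```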

Other

15 pages, 3421 KiB  
Technical Note
A Year of Volcanic Hot-Spot Detection over Mediterranean Europe Using SEVIRI/MSG
by Catarina Alonso, Rita Durão and Célia M. Gouveia
Remote Sens. 2023, 15(21), 5219; https://doi.org/10.3390/rs15215219 - 03 Nov 2023
Viewed by 727
Abstract
Identifying and monitoring volcanic eruptions is crucial to better understanding volcano dynamics, in particular the near real-time identification of an eruption's start, end, and duration. Eruption monitoring supports hazard assessment, eruption forecasting and warnings, and risk mitigation during periods of unrest, enhancing public safety and reducing losses from volcanic events. The near real-time fire radiative power (FRP) product retrieved from the SEVIRI sensor onboard the Meteosat Second Generation (MSG) satellite is used to identify and follow volcanic activity at the pan-European level, namely the Mount Etna and Cumbre Vieja eruptions that occurred during 2021. The FRP product is designed to record the location, timing, and fire radiative power output of wildfires. Measuring FRP from SEVIRI/MSG and integrating it over the lifetime of a fire provides an estimate of the total Fire Radiative Energy (FRE) released. Together with FRP data analysis, SO2 data from the Copernicus Atmosphere Monitoring Service (CAMS) are used to assess the relationship between daily emitted SO2 concentrations and the radiative energy released during volcanic eruptions. Results show that the FRE data allow us to evaluate the amount of energy released and are related to the pollutant concentrations from volcanic emissions during the considered events. Good agreement between FRP detections and SO2 atmospheric concentrations was found for the considered eruptions. Owing to its simplicity and near real-time availability, the adopted methodology shows potential as a management tool to help authorities monitor and manage resources during ongoing volcanic events.
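
The FRP-to-FRE relationship described above is a time integral, FRE = ∫ FRP dt; the sketch below approximates it with the trapezoidal rule over SEVIRI's 15-minute sampling interval. The FRP values are invented for illustration and are not measurements from the reported eruptions.

```python
# Trapezoidal integration of a synthetic FRP time series into total FRE.
import numpy as np

frp_mw = np.array([120.0, 450.0, 800.0, 650.0, 300.0, 90.0])  # FRP in megawatts
dt_s = 15 * 60                                                 # seconds between SEVIRI slots

# Trapezoidal rule: sum of interval-average FRP times the interval length.
# Units: MW * s = MJ.
fre_mj = float(np.sum((frp_mw[:-1] + frp_mw[1:]) / 2.0) * dt_s)
print(f"FRE = {fre_mj / 1e3:.1f} GJ")  # about 2,075 GJ for this synthetic series
```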
