Remote Sensing for Emergency Management: Algorithms, Methods and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: closed (15 August 2023) | Viewed by 18047

Special Issue Editors


Dr. Daniela Carrion
Guest Editor
Geodesy and Geomatics - Department of Civil and Environmental Engineering, Politecnico di Milano, Piazza L. da Vinci, 32, 20133 Milano, Italy
Interests: remote sensing; crisis mapping; geographic information systems; spatial analysis; citizen science; geodesy

Dr. Fabio Giulio Tonolo
Guest Editor
Department of Architecture and Design, Polytechnic University of Turin, 10125 Torino, Italy
Interests: geomatics; satellite remote sensing; unmanned aerial systems; GIS

Special Issue Information

Dear Colleagues,

Geospatial information acquired with Earth observation techniques can effectively support emergency management activities in the different phases of the emergency management cycle, i.e., preparedness, response, recovery, and mitigation. These data can be exploited to monitor, map, and model phenomena relevant to different types of emergencies, including those directly affected by climate change. Remote sensing plays a crucial role in the acquisition and processing of multisensor and multiscale data from satellite, aerial, UAV, and terrestrial platforms.

This Special Issue aims to gather the main outcomes of current applied research supporting emergency management activities.

Authors are encouraged to focus on algorithms, methods, and applications related to the different steps of the remote sensing workflow in this specific domain, i.e., data acquisition, data processing and integration, and information extraction.

Contributions are expected to highlight the growing role of Artificial Intelligence in the information extraction step, as well as that of citizen science (including volunteered geographic information) in the data acquisition step.

Dr. Daniela Carrion
Dr. Fabio Giulio Tonolo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Emergency management
  • Earth observation
  • UAS
  • Artificial Intelligence
  • Response
  • Preparedness
  • Recovery
  • Mitigation
  • Crisis mapping
  • RPAS
  • Climate change
  • Citizen science
  • VGI

Published Papers (6 papers)

Research

24 pages, 6499 KiB  
Article
Forest Fire Monitoring Method Based on UAV Visual and Infrared Image Fusion
by Yuqi Liu, Change Zheng, Xiaodong Liu, Ye Tian, Jianzhong Zhang and Wenbin Cui
Remote Sens. 2023, 15(12), 3173; https://doi.org/10.3390/rs15123173 - 18 Jun 2023
Cited by 3 | Viewed by 2192
Abstract
Forest fires have become a significant global threat, with many negative impacts on human habitats and forest ecosystems. This study proposed a forest fire identification method that fuses visual and infrared images, addressing the high false alarm and missed alarm rates of forest fire monitoring based on single-spectrum imagery. A dataset suitable for image fusion was created using UAV aerial photography. An improved image fusion network model, FF-Net, incorporating an attention mechanism, was proposed. The YOLOv5 network was used for target detection, and the results showed that using fused images achieved higher accuracy, with a false alarm rate of 0.49% and a missed alarm rate of 0.21%. As such, using fused images is of great significance for the early warning of forest fires.
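
The workflow summarized above pairs pixel-level fusion of visible and infrared frames with a YOLOv5 detector. The minimal sketch below illustrates that general idea only: a simple weighted blend stands in for the paper's attention-based FF-Net, and the file names, blend weight, and pretrained checkpoint are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch: fuse an aligned RGB frame with a thermal frame, then run detection.
# A weighted blend is a placeholder for the learned FF-Net fusion; paths, the blend
# weight, and the generic COCO-pretrained checkpoint are assumptions (a fire/smoke
# checkpoint would be needed in practice).
import cv2
import torch

rgb = cv2.imread("frame_rgb.jpg")                       # visible-band UAV frame (H x W x 3)
ir = cv2.imread("frame_ir.jpg", cv2.IMREAD_GRAYSCALE)   # co-registered thermal frame (H x W)
ir = cv2.resize(ir, (rgb.shape[1], rgb.shape[0]))
ir3 = cv2.cvtColor(ir, cv2.COLOR_GRAY2BGR)

# Pixel-level weighted fusion (placeholder for an attention-based fusion network).
alpha = 0.6
fused = cv2.addWeighted(rgb, alpha, ir3, 1.0 - alpha, 0.0)

# Off-the-shelf YOLOv5 detector loaded from torch.hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model(fused[:, :, ::-1])   # BGR -> RGB
print(results.pandas().xyxy[0])      # bounding boxes with class and confidence
```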

26 pages, 14425 KiB  
Article
A Spatiotemporal Drought Analysis Application Implemented in the Google Earth Engine and Applied to Iran as a Case Study
by Adel Taheri Qazvini and Daniela Carrion
Remote Sens. 2023, 15(9), 2218; https://doi.org/10.3390/rs15092218 - 22 Apr 2023
Viewed by 2439
Abstract
Drought is a major problem worldwide and has become more severe in recent decades, especially in arid and semi-arid regions. In this study, a Google Earth Engine (GEE) app has been implemented to monitor spatiotemporal drought conditions over different climatic regions. The app allows any user to perform the analysis over a region and period of their choice, benefiting from the huge GEE catalogue of free and open data as well as from its fast cloud-based computation. The app implements the scaled drought condition index (SDCI), which is a combination of three indices: the vegetation condition index (VCI), the temperature condition index (TCI), and the precipitation condition index (PCI), derived or calculated from satellite imagery through the Google Earth Engine platform. The De Martonne climate classification index has been used to derive the climate regions; within each region, the indices have been computed separately. The test case area is Iran, a territory with high climate variability, where drought has been explored over a period of 11 years (from 2010 to 2021), allowing us to cover a reasonable time series with the data available in the Google Earth Engine. The developed tool allowed the singling out of drought events over each climate region, offering both a spatial and a temporal representation of the phenomenon and confirming results found in local and global reports.
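
The SDCI described above is a weighted blend of three scaled condition indices. The sketch below shows how such a combination might look in the GEE Python API; the dataset IDs, period, simplified per-pixel scaling, and 0.5/0.25/0.25 weights are assumptions for illustration and are not necessarily those implemented in the app.

```python
# Minimal SDCI-style sketch in the Google Earth Engine Python API (assumptions noted above).
import ee
ee.Initialize()

start, end = "2010-01-01", "2021-12-31"

ndvi = ee.ImageCollection("MODIS/061/MOD13A2").filterDate(start, end).select("NDVI")
lst = ee.ImageCollection("MODIS/061/MOD11A2").filterDate(start, end).select("LST_Day_1km")
prec = ee.ImageCollection("UCSB-CHG/CHIRPS/PENTAD").filterDate(start, end).select("precipitation")

def scaled(col, invert=False):
    # Simplification: scale the period mean against the per-pixel period min/max,
    # rather than computing the index epoch by epoch.
    lo, hi, mean = col.min(), col.max(), col.mean()
    s = mean.subtract(lo).divide(hi.subtract(lo))
    return ee.Image(1).subtract(s) if invert else s

vci = scaled(ndvi)               # vegetation condition index
tci = scaled(lst, invert=True)   # temperature condition index (hotter = drier)
pci = scaled(prec)               # precipitation condition index

# Weighted combination; 0.5 / 0.25 / 0.25 is one common choice in the literature.
sdci = pci.multiply(0.5).add(tci.multiply(0.25)).add(vci.multiply(0.25)).rename("SDCI")
```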

19 pages, 7729 KiB  
Article
A Deep Learning-Based Method for the Semi-Automatic Identification of Built-Up Areas within Risk Zones Using Aerial Imagery and Multi-Source GIS Data: An Application for Landslide Risk
by Mauro Francini, Carolina Salvo, Antonio Viscomi and Alessandro Vitale
Remote Sens. 2022, 14(17), 4279; https://doi.org/10.3390/rs14174279 - 30 Aug 2022
Cited by 4 | Viewed by 2473
Abstract
Natural disasters have a significant impact on urban areas, resulting in the loss of lives and urban services. Using satellite and aerial imagery, the rapid and automatic assessment of buildings located in at-risk areas can improve the overall disaster management system of urban areas. To do this, the definition and implementation of models with strong generalization capability is very important. Starting from these assumptions, the authors proposed a deep learning approach based on the U-Net model to map buildings that fall within mapped landslide risk areas. The U-Net model is trained and validated using the Dubai Satellite Imagery Dataset. The transferability of the model is tested in three different urban areas within the Calabria region, southern Italy, using natural color orthoimages and multi-source GIS data. The results show that the proposed methodology can detect and predict buildings that fall within landslide risk zones, with an appreciable transferability capability. This tool can support decision-makers and planners during the prevention phase of emergency planning, through the rapid identification of buildings located within risk areas, and during the post-event phase, by assessing urban system conditions after a hazard occurs.
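
Once the segmentation output has been vectorized to building polygons, flagging the buildings that fall within mapped landslide-risk zones reduces to a GIS overlay. The GeoPandas sketch below illustrates that step under stated assumptions (hypothetical file names and layers); it is not the authors' implementation.

```python
# Minimal sketch: intersect predicted building footprints with landslide-risk polygons.
import geopandas as gpd

buildings = gpd.read_file("predicted_buildings.gpkg")    # polygons vectorized from the U-Net mask (assumed)
risk = gpd.read_file("landslide_risk_zones.gpkg")        # multi-source GIS risk layer (assumed)

risk = risk.to_crs(buildings.crs)                        # align coordinate reference systems

# Keep every building whose footprint intersects at least one risk polygon.
at_risk = gpd.sjoin(buildings, risk[["geometry"]], how="inner", predicate="intersects")
at_risk = at_risk[~at_risk.index.duplicated()]           # a building may hit several risk polygons

print(f"{len(at_risk)} of {len(buildings)} predicted buildings fall within risk zones")
at_risk.to_file("buildings_at_risk.gpkg", driver="GPKG")
```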

19 pages, 8295 KiB  
Article
Training a Disaster Victim Detection Network for UAV Search and Rescue Using Harmonious Composite Images
by Ning Zhang, Francesco Nex, George Vosselman and Norman Kerle
Remote Sens. 2022, 14(13), 2977; https://doi.org/10.3390/rs14132977 - 22 Jun 2022
Cited by 16 | Viewed by 3398
Abstract
Human detection in images using deep learning has been a popular research topic in recent years and has achieved remarkable performance. Training a human detection network is useful for first responders to search for trapped victims in debris after a disaster. In this paper, we focus on the detection of such victims using deep learning, and we find that state-of-the-art detection models pre-trained on the well-known COCO dataset fail to detect victims. This is because all the people in the training set are shown in photos of daily life or sports activities, while people in the debris after a disaster usually only have parts of their bodies exposed. In addition, because of the dust, the colors of their clothes or body parts are similar to those of the surrounding debris. Compared with collecting images of common objects, images of disaster victims are extremely difficult to obtain for training. Therefore, we propose a framework to generate harmonious composite images for training. We first paste body parts onto a debris background to generate composite victim images and then use a deep harmonization network to make the composite images look more harmonious. We select YOLOv5l as the most suitable model, and experiments show that using composite images for training improves the AP (average precision) by 19.4% (15.3% → 34.7%). Furthermore, using the harmonized images is of great benefit for training a better victim detector, and the AP is further improved by 10.2% (34.7% → 44.9%). This research is part of the EU project INGENIOUS. Our composite images and code are publicly available on our website.
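
The compositing step described above (pasting body-part crops onto debris backgrounds) can be sketched with a simple alpha-aware paste; the deep harmonization network is not reproduced here, and the file names, random placement, and YOLO-style label writing are illustrative assumptions rather than the released code.

```python
# Minimal sketch: paste a masked body-part crop onto a debris background and write a label.
import random
from PIL import Image

background = Image.open("debris_background.jpg").convert("RGB")
foreground = Image.open("body_part_crop.png").convert("RGBA")   # alpha channel acts as the paste mask

# Random placement that keeps the crop fully inside the background (crop assumed smaller).
x = random.randint(0, background.width - foreground.width)
y = random.randint(0, background.height - foreground.height)

composite = background.copy()
composite.paste(foreground, (x, y), mask=foreground)            # alpha-aware paste
composite.save("composite_victim.jpg")                          # harmonization step omitted here

# YOLO-format label for the pasted region (class 0 = person), normalized to [0, 1].
cx = (x + foreground.width / 2) / background.width
cy = (y + foreground.height / 2) / background.height
w, h = foreground.width / background.width, foreground.height / background.height
with open("composite_victim.txt", "w") as f:
    f.write(f"0 {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")
```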

24 pages, 9655 KiB  
Article
Sentinel-1 Spatiotemporal Simulation Using Convolutional LSTM for Flood Mapping
by Noel Ivan Ulloa, Sang-Ho Yun, Shou-Hao Chiang and Ryoichi Furuta
Remote Sens. 2022, 14(2), 246; https://doi.org/10.3390/rs14020246 - 6 Jan 2022
Cited by 6 | Viewed by 3861
Abstract
Synthetic aperture radar (SAR) imagery has been widely applied to flood mapping based on change detection approaches. However, errors in the mapping results are expected since not all land-cover changes are flood-induced, and SAR data are sensitive to such changes, e.g., crop growth or harvest over agricultural lands, clearance of forested areas, and/or modifications of the urban landscape. This study therefore incorporated historical SAR images to boost the detection of flood-induced changes during extreme weather events, using the Long Short-Term Memory (LSTM) method. Additionally, to incorporate spatial signatures into the change detection, we applied a deep learning-based spatiotemporal simulation framework, Convolutional Long Short-Term Memory (ConvLSTM), to simulate a synthetic image from a Sentinel-1 intensity time series. This synthetic image is prepared in advance of flood events and can then be used to detect flooded areas by change detection once the post-event image is available. In practice, a significant divergence between the synthetic image and the post-event image is expected over inundated zones, which can be mapped by applying thresholds to the Delta image (synthetic image minus post-event image). We trained and tested our model on three events from Australia, Brazil, and Mozambique. The generated Flood Proxy Maps were compared against reference data derived from Sentinel-2 and Planet Labs optical data. To corroborate the effectiveness of the proposed methods, we also generated Delta products for two baseline models (closest post-image minus pre-image and historical mean minus post-image) and two LSTM architectures: a normal LSTM and ConvLSTM. Results show that thresholding the ConvLSTM Delta yielded the highest Cohen's Kappa coefficients in all study cases: 0.92 for Australia, 0.78 for Mozambique, and 0.68 for Brazil. The lower Kappa values obtained in the Mozambique case may be attributed to the topographic effect on SAR imagery. These results confirm the benefits, in terms of classification accuracy, that convolutional operations provide in the time series analysis of satellite data by employing spatially correlated information in a deep learning framework.
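
The core change-detection step, thresholding the Delta image and scoring the resulting flood mask with Cohen's Kappa, can be sketched as follows; the file names, sign convention, and threshold value are assumptions for illustration, not values taken from the paper.

```python
# Minimal sketch: Delta = synthetic (ConvLSTM-simulated) image minus post-event image,
# thresholded to a flood mask and compared with a reference mask.
import rasterio
from sklearn.metrics import cohen_kappa_score

with rasterio.open("synthetic_pre_event.tif") as src:
    synthetic = src.read(1).astype("float32")   # simulated pre-event backscatter (dB)
with rasterio.open("post_event.tif") as src:
    post = src.read(1).astype("float32")        # observed post-event backscatter (dB)
with rasterio.open("reference_flood_mask.tif") as src:
    reference = src.read(1)                     # 1 = flooded, 0 = not flooded

delta = synthetic - post                        # open-water flooding darkens SAR backscatter
flood_mask = (delta > 3.0).astype("uint8")      # threshold in dB (assumed value)

kappa = cohen_kappa_score(reference.ravel(), flood_mask.ravel())
print(f"Cohen's Kappa vs. reference: {kappa:.2f}")
```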

Other

15 pages, 3895 KiB  
Technical Note
ExtractEO, a Pipeline for Disaster Extent Mapping in the Context of Emergency Management
by Jérôme Maxant, Rémi Braun, Mathilde Caspard and Stephen Clandillon
Remote Sens. 2022, 14(20), 5253; https://doi.org/10.3390/rs14205253 - 20 Oct 2022
Cited by 2 | Viewed by 1993
Abstract
Rapid mapping of disasters using any kind of satellite imagery is a challenge. The faster the response, the better the service for the end users who are managing the emergency activities. Indeed, production speed is crucial whatever satellite data are used as input. However, the speed of delivery must not come at the expense of crisis information quality. The automated flood and fire extraction pipelines presented in this technical note make it possible to take full advantage of advanced algorithms in short timeframes and leave enough time for an expert operator to validate the results and correct any unmanaged thematic errors. Although automated algorithms are not flawless, they greatly facilitate and accelerate the detection and mapping of crisis information, especially for floods and fires. ExtractEO is a pipeline developed by SERTIT and dedicated to disaster mapping. It brings together automatic data download and pre-processing, along with highly accurate flood and fire detection chains. The thematic quality assessment revealed F1-score values of 0.91 and 0.88 for burnt area and flooded area detection, respectively, from various kinds of high- and very-high-resolution data (optical and SAR).
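
For reference, the F1-scores quoted above are the harmonic mean of precision and recall, here computed from a pixel-wise comparison of a detected extent against a reference extent; the function below is a generic sketch with hypothetical array names, not part of ExtractEO.

```python
# Generic F1-score for binary extent masks (1 = burnt/flooded, 0 = background).
import numpy as np

def f1_score(detected: np.ndarray, reference: np.ndarray) -> float:
    tp = np.logical_and(detected == 1, reference == 1).sum()
    fp = np.logical_and(detected == 1, reference == 0).sum()
    fn = np.logical_and(detected == 0, reference == 1).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```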
