
Intelligent Damage Assessment Systems Using Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 20467

Special Issue Editors


Guest Editor
RIKEN Center for Advanced Intelligence Project, Goal-Oriented Technology Research Group, Disaster Resilience Science Team, Tokyo 103-0027, Japan
Interests: machine learning; remote sensing and GIS; image processing; environmental modelling; object detection

Guest Editor
Department of Multimedia, Faculty of Computer Science & Information Technology, Universiti Putra Malaysia, Serdang 43400, Malaysia
Interests: applied machine learning; computer vision; image processing

Special Issue Information

Dear Colleagues,

Earthquakes are among the most serious natural disasters faced by countries around the world. They can occur with little or no warning, and their reported frequency has increased over time. Earthquakes cause massive destruction to the environment, infrastructure, and buildings in affected areas, and human life is at risk during high-intensity events, especially in densely populated regions. Since earthquakes cannot be prevented, innovative pre-emptive technologies should be researched and developed to predict occurrences. Such technologies not only have the potential to facilitate evacuation and other safety measures, but can also improve disaster response. Efficient and timely response allows affected areas to receive assistance faster, especially in the distribution of relief resources (e.g., food, medicine, and shelter).

Recently, remote sensing (RS) technologies have been investigated as a means of supporting disaster response. RS-based tools have been developed for damage detection and emergency response (e.g., post-earthquake), and other efforts have estimated post-earthquake building damage, with estimation accuracy varying according to the type of data used. Data types range from optical sensors and LiDAR point clouds to synthetic aperture radar (SAR) and aerial and unmanned aerial vehicle (UAV) imagery. In recent years, intelligent methods based on machine learning and deep learning have become popular for post-earthquake RS-based analysis. Given the potential of these technologies, this Special Issue invites scholars to share their recently developed innovations and advances in post-earthquake building damage assessment using remote sensing data and computer vision, contributing to improved disaster resilience.

Dr. Bahareh Kalantar
Dr. Alfian Abdul Halin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Building
  • Damage
  • Remote sensing
  • Earthquake
  • Machine learning
  • Artificial intelligence
  • Computer vision

Published Papers (6 papers)


Research

20 pages, 44971 KiB  
Article
BDD-Net: An End-to-End Multiscale Residual CNN for Earthquake-Induced Building Damage Detection
by Seyd Teymoor Seydi, Heidar Rastiveis, Bahareh Kalantar, Alfian Abdul Halin and Naonori Ueda
Remote Sens. 2022, 14(9), 2214; https://doi.org/10.3390/rs14092214 - 05 May 2022
Cited by 7 | Viewed by 2061
Abstract
Building damage maps can be generated from either optical or Light Detection and Ranging (Lidar) datasets. In the wake of a disaster such as an earthquake, a timely and detailed map is a critical reference for disaster teams in order to plan and perform rescue and evacuation missions. Recent studies have shown that, instead of being used individually, optical and Lidar data can potentially be fused to obtain greater detail. In this study, we explore this fusion potential, which incorporates deep learning. The overall framework involves a novel End-to-End convolutional neural network (CNN) that performs building damage detection. Specifically, our building damage detection network (BDD-Net) utilizes three deep feature streams (through a multi-scale residual depth-wise convolution block) that are fused at different levels of the network. This is unlike other fusion networks that only perform fusion at the first and the last levels. The performance of BDD-Net is evaluated under three different phases, using optical and Lidar datasets for the 2010 Haiti Earthquake. The three main phases are: (1) data preprocessing and building footprint extraction based on building vector maps, (2) sample data preparation and data augmentation, and (3) model optimization and building damage map generation. The results of building damage detection in two scenarios show that fusing the optical and Lidar datasets significantly improves building damage map generation, with an overall accuracy (OA) greater than 88%. Full article
(This article belongs to the Special Issue Intelligent Damage Assessment Systems Using Remote Sensing Data)
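To make the multi-level fusion idea concrete, the minimal PyTorch sketch below shows a two-stream network in which optical and LiDAR features are merged at more than one depth, rather than only at the input or output. All layer sizes, block depths, fusion points, and class names are illustrative assumptions; this is not the BDD-Net architecture from the paper.

```python
# Minimal sketch of multi-level optical/LiDAR fusion, assuming 3-band optical
# patches and 1-band LiDAR (e.g., nDSM) patches; sizes are illustrative only.
import torch
import torch.nn as nn

class TwoStreamFusionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        # Separate encoders for the optical and LiDAR streams.
        self.opt1, self.opt2 = block(3, 32), block(32, 64)
        self.lid1, self.lid2 = block(1, 32), block(32, 64)
        # Fusion happens at two depths, not only at the end.
        self.fuse1 = nn.Conv2d(64, 32, 1)
        self.fuse2 = nn.Conv2d(128, 64, 1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, optical, lidar):
        o1, l1 = self.opt1(optical), self.lid1(lidar)
        f1 = self.fuse1(torch.cat([o1, l1], dim=1))      # early fusion
        o2, l2 = self.opt2(o1 + f1), self.lid2(l1 + f1)  # inject fused features
        f2 = self.fuse2(torch.cat([o2, l2], dim=1))      # deeper fusion
        return self.head(f2)

# Example: classify a 64x64 patch as damaged / undamaged.
net = TwoStreamFusionNet()
logits = net(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
```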

23 pages, 10610 KiB  
Article
A Precision Efficient Method for Collapsed Building Detection in Post-Earthquake UAV Images Based on the Improved NMS Algorithm and Faster R-CNN
by Jiujie Ding, Jiahuan Zhang, Zongqian Zhan, Xiaofang Tang and Xin Wang
Remote Sens. 2022, 14(3), 663; https://doi.org/10.3390/rs14030663 - 29 Jan 2022
Cited by 23 | Viewed by 3528
Abstract
The results of collapsed building detection act as an important reference for damage assessment after an earthquake, which is crucial for governments in order to efficiently determine the affected area and execute emergency rescue. For this task, unmanned aerial vehicle (UAV) images are often used as the data sources due to the advantages of high flexibility regarding data acquisition time and flying requirements and high resolution. However, collapsed buildings are typically distributed in both connected and independent pieces and with arbitrary shapes, and these are generally more obvious in the UAV images with high resolution; therefore, the corresponding detection is restricted by using conventional convolutional neural networks (CNN) and the detection results are difficult to evaluate. In this work, based on faster region-based convolutional neural network (Faster R-CNN), deformable convolution was used to improve the adaptability to the arbitrarily shaped collapsed buildings. In addition, inspired by the idea of pixelwise semantic segmentation, in contrast to the intersection over union (IoU), a new method which estimates the intersected proportion of objects (IPO) is proposed to describe the degree of the intersection of bounding boxes, leading to two improvements: first, the traditional non-maximum suppression (NMS) algorithm is improved by integration with the IPO to effectively suppress the redundant bounding boxes; second, the IPO is utilized as a new indicator to determine positive and negative bounding boxes, and is introduced as a new strategy for precision and recall estimation, which can be considered a more reasonable measurement of the degree of similarity between the detected bounding boxes and ground truth bounding boxes. Experiments show that compared with other models, our work can obtain better precision and recall for detecting collapsed buildings for which an F1 score of 0.787 was achieved, and the evaluation results from the suggested IPO are qualitatively closer to the ground truth. In conclusion, the improved NMS with the IPO and Faster R-CNN in this paper is feasible and efficient for the detection of collapsed buildings in UAV images, and the suggested IPO strategy is more suitable for the corresponding detection result’s evaluation. Full article
(This article belongs to the Special Issue Intelligent Damage Assessment Systems Using Remote Sensing Data)
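As a rough illustration of the idea of replacing IoU with an object-intersection proportion during suppression, the short NumPy sketch below normalises the overlap by the smaller box's area and uses it inside a greedy NMS loop. The exact IPO definition, threshold, and function names here are assumptions for illustration, not the formulation from the paper.

```python
# Minimal sketch of NMS driven by an "intersected proportion"-style overlap
# measure instead of IoU (intersection divided by the smaller box's area is an
# assumed definition for illustration).
import numpy as np

def ipo(box, boxes):
    """Overlap of `box` with each row of `boxes`, normalised by the smaller area."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / np.minimum(area_a, area_b)

def nms_ipo(boxes, scores, thresh=0.5):
    """Keep the highest-scoring boxes, suppressing those whose IPO exceeds `thresh`."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[ipo(boxes[i], boxes[rest]) <= thresh]
    return keep

# Example: a detection fully nested inside a larger one is suppressed.
boxes = np.array([[0, 0, 100, 100], [10, 10, 60, 60]], dtype=float)
print(nms_ipo(boxes, np.array([0.9, 0.8])))  # -> [0]
```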

22 pages, 9176 KiB  
Article
Automatic Extraction of Damaged Houses by Earthquake Based on Improved YOLOv5: A Case Study in Yangbi
by Yafei Jing, Yuhuan Ren, Yalan Liu, Dacheng Wang and Linjun Yu
Remote Sens. 2022, 14(2), 382; https://doi.org/10.3390/rs14020382 - 14 Jan 2022
Cited by 31 | Viewed by 4151
Abstract
Efficiently and automatically acquiring information on earthquake damage through remote sensing has posed great challenges because the classical methods of detecting houses damaged by destructive earthquakes are often both time-consuming and low in accuracy. A series of deep-learning-based techniques have been developed and recent studies have demonstrated their high intelligence for automatic target extraction for natural and remote sensing images. For the detection of small artificial targets, current studies show that You Only Look Once (YOLO) has a good performance in aerial and Unmanned Aerial Vehicle (UAV) images. However, less work has been conducted on the extraction of damaged houses. In this study, we propose a YOLOv5s-ViT-BiFPN-based neural network for the detection of rural houses. Specifically, to enhance the feature information of damaged houses from the global information of the feature map, we introduce the Vision Transformer into the feature extraction network. Furthermore, regarding the scale differences for damaged houses in UAV images due to the changes in flying height, we apply the Bi-Directional Feature Pyramid Network (BiFPN) for multi-scale feature fusion to aggregate features with different resolutions and test the model. We took the 2021 Yangbi earthquake with a surface wave magnitude (Ms) of 6.4 in Yunnan, China, as an example; the results show that the proposed model presents a better performance, with the average precision (AP) being increased by 9.31% and 1.23% compared to YOLOv3 and YOLOv5s, respectively, and a detection speed of 80 FPS, which is 2.96 times faster than YOLOv3. In addition, the transferability test for five other areas showed that the average accuracy was 91.23% and the total processing time was 4 min, while 100 min were needed for professional visual interpreters. The experimental results demonstrate that the YOLOv5s-ViT-BiFPN model can automatically detect damaged rural houses due to destructive earthquakes in UAV images with a good performance in terms of accuracy and timeliness, as well as being robust and transferable. Full article
(This article belongs to the Special Issue Intelligent Damage Assessment Systems Using Remote Sensing Data)
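For readers unfamiliar with BiFPN-style fusion, the PyTorch sketch below shows the basic weighted-fusion step: two pyramid levels are combined with learnable, normalised non-negative weights before a convolution. The channel counts, class name, and parameters are illustrative assumptions, not the YOLOv5s-ViT-BiFPN implementation from the paper.

```python
# Minimal sketch of BiFPN-style "fast normalised" weighted feature fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    def __init__(self, channels: int = 64, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))   # one learnable weight per input map
        self.eps = eps
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, shallow, deep):
        # Upsample the deeper (coarser) map to the shallow map's resolution.
        deep = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
        w = F.relu(self.w)
        w = w / (w.sum() + self.eps)            # non-negative, normalised weights
        return self.conv(w[0] * shallow + w[1] * deep)

# Example: fuse a 64x64 shallow map with a 32x32 deep map.
fuse = WeightedFusion()
out = fuse(torch.randn(1, 64, 64, 64), torch.randn(1, 64, 32, 32))
```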

20 pages, 6145 KiB  
Article
Detection of Collapsed Bridges from Multi-Temporal SAR Intensity Images by Machine Learning Techniques
by Wen Liu, Yoshihisa Maruyama and Fumio Yamazaki
Remote Sens. 2021, 13(17), 3508; https://doi.org/10.3390/rs13173508 - 03 Sep 2021
Cited by 2 | Viewed by 3136
Abstract
Bridges are an important part of road networks in an emergency period, as well as in ordinary times. Bridge collapses have occurred as a result of many recent disasters. Synthetic aperture radar (SAR), which can acquire images under any weather or sunlight conditions, has been shown to be effective in assessing the damage situation of structures in the emergency response phase. We investigate the backscattering characteristics of washed-away or collapsed bridges from the multi-temporal high-resolution SAR intensity imagery introduced in our previous studies. In this study, we address the challenge of building a model to identify collapsed bridges using five change features obtained from multi-temporal SAR intensity images. Forty-four bridges affected by the 2011 Tohoku-oki earthquake, in Japan, and forty-four bridges affected by the 2020 July floods, also in Japan, including a total of 21 collapsed bridges, were divided into training, test, and validation sets. Twelve models were trained, using different numbers of features as input in random forest and logistic regression methods. Comparing the accuracies of the validation sets, the random forest model trained with the two mixed events using all the features showed the highest capability to extract collapsed bridges. After improvement by introducing an oversampling technique, the F-score for collapsed bridges was 0.87 and the kappa coefficient was 0.82, showing highly accurate agreement. Full article
(This article belongs to the Special Issue Intelligent Damage Assessment Systems Using Remote Sensing Data)
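A minimal scikit-learn sketch of the workflow described above: bridges are classified as collapsed or not from a handful of change features with a random forest, after naive minority-class oversampling. The synthetic features, sample counts, and oversampling-by-duplication choice are assumptions for illustration, not the paper's data or exact procedure.

```python
# Sketch: random forest on five SAR change features with simple oversampling.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 5))               # 88 bridges x 5 change features (synthetic)
y = (rng.random(88) < 0.24).astype(int)    # roughly 21 "collapsed" labels (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Oversample the minority (collapsed) class by duplication until classes balance.
pos = np.flatnonzero(y_tr == 1)
extra = rng.choice(pos, size=max(0, (y_tr == 0).sum() - pos.size), replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
pred = clf.predict(X_te)
print("F1:", f1_score(y_te, pred), "kappa:", cohen_kappa_score(y_te, pred))
```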

24 pages, 16967 KiB  
Article
On the Generalization Ability of a Global Model for Rapid Building Mapping from Heterogeneous Satellite Images of Multiple Natural Disaster Scenarios
by Yijiang Hu and Hong Tang
Remote Sens. 2021, 13(5), 984; https://doi.org/10.3390/rs13050984 - 05 Mar 2021
Cited by 10 | Viewed by 2438
Abstract
Post-classification comparison using pre- and post-event remote-sensing images is a common way to quickly assess the impacts of a natural disaster on buildings. Both the effectiveness and efficiency of post-classification comparison heavily depend on the classifier’s precision and generalization abilities. In practice, practitioners used to train a novel image classifier for an unexpected disaster from scratch in order to evaluate building damage. Recently, it has become feasible to train a deep learning model to recognize buildings from very high-resolution images from all over the world. In this paper, we first evaluate the generalization ability of a global model trained on aerial images using post-disaster satellite images. Then, we systematically analyse three kinds of methods to promote its generalization ability for post-disaster satellite images, i.e., fine-tuning the model using very few training samples randomly selected from each disaster, transferring the style of post-disaster satellite images using the CycleGAN, and performing feature transformation using domain adversarial training. The xBD satellite images used in our experiment consist of 14 different events from six kinds of frequently occurring disaster types around the world, i.e., hurricanes, tornadoes, earthquakes, tsunamis, floods and wildfires. The experimental results show that the three methods can significantly promote the accuracy of the global model in terms of building mapping, and it is promising to conduct post-classification comparison using an existing global model coupled with an advanced transfer-learning method to quickly extract the damage information of buildings. Full article
(This article belongs to the Special Issue Intelligent Damage Assessment Systems Using Remote Sensing Data)
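Of the three transfer strategies discussed, the simplest to illustrate is fine-tuning with very few samples. The PyTorch sketch below freezes the backbone of a generic torchvision segmentation model and updates only the head on a handful of tiles; the model choice, layer freezing, and hyperparameters are assumptions rather than the paper's global model or training setup, and the constructor keywords assume a recent torchvision release.

```python
# Sketch: few-sample fine-tuning by freezing the backbone of a segmentation model.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Stand-in for a pretrained global building-mapping model (no weights downloaded here).
model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)

# Freeze the backbone; only the classification head stays trainable.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# A few randomly selected post-disaster tiles with building masks (synthetic here).
images = torch.randn(2, 3, 128, 128)
masks = torch.randint(0, 2, (2, 128, 128))

model.train()
for _ in range(3):                       # short fine-tuning loop on the few samples
    optimizer.zero_grad()
    out = model(images)["out"]           # per-pixel building / background logits
    loss = criterion(out, masks)
    loss.backward()
    optimizer.step()
```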

22 pages, 4040 KiB  
Article
Assessment of Convolutional Neural Network Architectures for Earthquake-Induced Building Damage Detection based on Pre- and Post-Event Orthophoto Images
by Bahareh Kalantar, Naonori Ueda, Husam A. H. Al-Najjar and Alfian Abdul Halin
Remote Sens. 2020, 12(21), 3529; https://doi.org/10.3390/rs12213529 - 28 Oct 2020
Cited by 32 | Viewed by 3621
Abstract
In recent years, remote-sensing (RS) technologies have been used together with image processing and traditional techniques in various disaster-related works. Among these is detecting building damage from orthophoto imagery that was inflicted by earthquakes. Automatic and visual techniques are considered as typical methods to produce building damage maps using RS images. The visual technique, however, is time-consuming due to manual sampling. The automatic method is able to detect the damaged building by extracting the defect features. However, various design methods and widely changing real-world conditions, such as shadow and light changes, pose challenges to the widespread adoption of automatic methods. As a potential solution for such challenges, this research proposes the adaptation of deep learning (DL), specifically convolutional neural networks (CNN), which have a high ability to learn features automatically, to identify damaged buildings from pre- and post-event RS imagery. Since RS data revolve around imagery, CNNs can arguably be most effective at automatically discovering relevant features, avoiding the need for feature engineering based on expert knowledge. In this work, we focus on orthophoto imagery for damaged-building detection, specifically for (i) background, (ii) no damage, (iii) minor damage, and (iv) debris classifications. The gist is to uncover the CNN architecture that will work best for this purpose. To this end, three CNN models, namely the twin model, fusion model, and composite model, are applied to the pre- and post-event orthophoto imagery collected from the 2016 Kumamoto earthquake, Japan. The robustness of the models was evaluated using four evaluation metrics, namely overall accuracy (OA), producer accuracy (PA), user accuracy (UA), and F1 score. According to the obtained results, the twin model achieved higher accuracy (OA = 76.86%; F1 score = 0.761) compared to the fusion model (OA = 72.27%; F1 score = 0.714) and the composite model (OA = 69.24%; F1 score = 0.682). Full article
(This article belongs to the Special Issue Intelligent Damage Assessment Systems Using Remote Sensing Data)
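A minimal sketch of the "twin" idea described above: one weight-shared encoder embeds the pre- and post-event patches, and the concatenated features feed a 4-class head (background, no damage, minor damage, debris). Layer sizes and the classifier head are illustrative assumptions, not the architecture evaluated in the paper.

```python
# Sketch: weight-shared twin encoder over pre/post patches, 4-class prediction.
import torch
import torch.nn as nn

class TwinDamageClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64 * 2, num_classes)

    def forward(self, pre, post):
        # The same encoder (shared weights) embeds both epochs before concatenation.
        return self.classifier(
            torch.cat([self.encoder(pre), self.encoder(post)], dim=1)
        )

# Example: classify a 64x64 pre/post patch pair into
# background / no damage / minor damage / debris.
model = TwinDamageClassifier()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```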
