Communication

War Related Building Damage Assessment in Kyiv, Ukraine, Using Sentinel-1 Radar and Sentinel-2 Optical Images

1 Center for Remote Sensing, Earth and Environment Department, Boston University, Boston, MA 02215, USA
2 Department of Civil and Environmental Engineering, Tufts University, Medford, MA 02155, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(24), 6239; https://doi.org/10.3390/rs14246239
Submission received: 23 November 2022 / Accepted: 6 December 2022 / Published: 9 December 2022
(This article belongs to the Collection Feature Papers for Section Environmental Remote Sensing)

Abstract: Natural and anthropogenic disasters can cause significant damage to urban infrastructure and landscapes and loss of human life. Satellite-based remote sensing plays a key role in rapid damage assessment, post-disaster reconnaissance, and recovery. In this study, we assess the performance of Sentinel-1 and Sentinel-2 data for building damage assessment in Kyiv, the capital city of Ukraine, due to the ongoing war with Russia. For damage assessment, we employ a simple and robust SAR log ratio of intensity for Sentinel-1 and a texture analysis for Sentinel-2. To suppress changes from features and landcover types not related to urban areas, we construct masks of the built-up area using the OpenStreetMap building footprints and the World Settlement Footprint (WSF), respectively. As it is difficult to obtain ground truth data in an active war zone, we conducted a qualitative accuracy assessment with very high-resolution optical images and a quantitative assessment against the United Nations Satellite Center (UNOSAT) damage assessment map. The results indicate that the damaged buildings are mainly concentrated in the northwestern part of the study area, where Irpin and the neighboring towns of Bucha and Hostomel are located. The detected building damage shows a good match with the reference WorldView images. Compared with the damage assessment map by UNOSAT, 58% of the damaged buildings were correctly classified. The results of this study highlight the potential of publicly available medium resolution satellite imagery for rapid damage mapping to provide initial reference data immediately after a disaster.

Graphical Abstract

1. Introduction

Remotely sensed earth observations (EO) have long played a key role in communicating pertinent damage information after natural disasters, especially information related to surface effects and damage to infrastructure. Medium resolution optical images (e.g., Landsat, ASTER, and Sentinel-2) have provided data for a more general interpretation of damaged areas in disaster zones [1,2,3]. The new generation of high-resolution optical images (e.g., IKONOS, QuickBird, GeoEye, and the WorldView series) has enabled building-scale damage detection post disaster using pixel- or object-based change detection techniques [4,5,6]. Advances in deep learning algorithms, such as deep Convolutional Neural Networks (CNNs), have also been instrumental in damage recognition [7,8,9]. For instance, the release of xBD [10] alongside the xView2 Challenge [11], which provide large-scale satellite imagery, building polygons, and ordinal labels of damage level for various natural disasters, has been a great contribution to the architectures and training strategies of building damage mapping [12,13,14]. However, these optical sensors can only provide usable scenes of the intended areas during daytime imaging, and only when the scene is cloud-free. Microwave Synthetic Aperture Radar (SAR) provides an alternative that is devoid of these constraints [15]. Thus, various SAR sensors (L-, C-, and X-band SAR images provided by ALOS-2 PALSAR-2, RADARSAT-2, Sentinel-1, TerraSAR-X, and the COSMO-SkyMed constellation) are being incorporated in remote sensing disaster response analyses [16,17,18,19]. Furthermore, optical and SAR data are increasingly being employed in tandem to test different methods that augment the benefits of each sensor type [20]. Putri et al. integrated Sentinel-1 (SAR) and Sentinel-2 (optical) data using a random forest classifier to map the damaged buildings following the catastrophic 2018 Lombok Earthquake in Indonesia [21].
Adriano et al. used multi-source data fusion of optical (Sentinel-2 and PlanetScope) and SAR (Sentinel-1 and ALOS-2/PALSAR-2) images for building damage mapping of the 2018 Sulawesi Earthquake and Tsunami, also in Indonesia [22]. As the war in Ukraine progresses, satellite imagery can be used in comparable ways to provide timely damage assessments and facilitate relief efforts by various humanitarian organizations. The locations and density of the damage contained within different study areas, especially the number of damaged buildings, can serve as a useful proxy to estimate the effects of the war [23,24] in real time. Such information will help to expedite the recovery phase once the war comes to an end [25].
Natural and anthropogenic disasters, accidental or intentional, have adverse impacts on human populations and on organic and inorganic ecosystems—specifically, the built environment [26]. War and other such human-induced disasters, in particular, have the dangerous potential to cause irreparable damage to the course of human history and to alter the socioeconomic and geopolitical landscape of a country. Wars have direct and indirect effects. Direct war effects are those that are a consequence of combat, such as casualties resulting from a building that collapsed as a result of an attack [27]. Indirect war effects arise from the inability to access essential infrastructure, which subsequently initiates a mass exodus of the people of the impacted region and leaves them even more vulnerable to the elements of war [27,28]. In most instances of war, indirect war fatalities tend to surpass the number of direct war fatalities, especially where the infrastructure is heavily damaged or destroyed [28]. The social and physical, direct and indirect impacts of war have historically been studied and itemized after the cessation of the conflict. However, with the help of modern technology and the advances in global satellite imaging—wide area coverage, relatively low cost, and regular revisit time—journalists, scientists, researchers, and engineers now have access to near-real-time imagery for rapid reconnaissance of impacted regions and can quantify damage to infrastructure. As of 10 May 2022, the United Nations (UN) Human Rights Monitoring Mission in Ukraine had already confirmed more than 3380 civilian casualties, over 3680 injuries [29], and $100 billion in infrastructure damage and counting [30] since the start of Russia’s invasion of Ukraine on 24 February 2022.
The United Nations Satellite Center (UNOSAT), a division of the United Nations Institute for Training and Research (UNITAR) whose mission is to provide rapid mapping analysis during humanitarian crises, conducted building damage assessments for various regions, including the cities of Bucha, Mariupol, and Kharkiv in Ukraine, using visual analyses of very high-resolution satellite imagery [31,32]. In this study, we aim to test the use of medium resolution imagery to assess the initial damage in Kyiv and its surrounding suburbs using a semi-automated framework. We chose medium resolution imagery because these data are publicly available and provide systematic global coverage, a significant advantage over commercially available imagery with non-systematic and thus limited global coverage. The specific objectives of this study are to: (1) perform damage assessment of the regions around Kyiv, Ukraine’s capital city, at the onset of the war of 2022; (2) explore the advantages and limitations of Sentinel-1 and Sentinel-2 data in damage assessment; and (3) evaluate alternative building masks to separate non-building-related changes from war-induced building damage by testing and comparing two datasets: the OpenStreetMap (OSM) building footprints and the World Settlement Footprint (WSF).

2. Study Area and Data Sets

Our analysis focuses on multiple cities in Kyiv Oblast located in northern and central Ukraine (Figure 1). In Figure 1, the red rectangle shows the extent of the study area; the blue rectangles encompass the region imaged by SAR satellite Sentinel-1 and the green rectangle contains the area imaged by optical satellite Sentinel-2. The data used for the analysis presented herein were acquired at various times since the start of the war—refer to Table 1 for a more specific description of the data. Throughout this paper, pre-event refers to imagery acquired before 24 February 2022, and post-event refers to imagery acquired after the withdrawal of Russian forces from around Kyiv (at the end of March).

2.1. Sentinel-1 and Sentinel-2 Data

Sentinel-1A is a C-band (λ = 5.6 cm) SAR satellite launched by the European Space Agency (ESA) with a revisit frequency of 12 days. It has four imaging modes: Stripmap (SM), Interferometric Wide Swath (IW), Extra-Wide Swath (EW), and Wave (WV). As the focus of this research is on SAR intensity analysis, the Level-1 Ground Range Detected (GRD) products acquired in IW mode were downloaded from the Alaska Satellite Facility at https://search.asf.alaska.edu (accessed on 5 December 2022).
Sentinel-2 is an EO mission funded by ESA to provide information for environmental monitoring, vegetation, and land cover applications. It consists of twin polar-orbiting, sun-synchronous satellites that are phased at 180° to each other [33]. Sentinel-2A and Sentinel-2B each cover large islands and inland and coastal waters with a revisit frequency of 10 days, and together they cover all of earth’s land surfaces at a combined constellation revisit time of five days. The Sentinel-2 satellites use a Multispectral Imager (MSI) sensor with 13 bands, including visible, near-infrared (NIR), and shortwave infrared (SWIR) bands, at three spatial resolutions: 10 m, 20 m, and 60 m [34]. For our analysis, Level-2A images with little or no cloud coverage were selected and downloaded from the Copernicus Open Access Hub at https://scihub.copernicus.eu/dhus (accessed on 5 December 2022).

2.2. Reference Data

The rapid building damage assessment published by UNOSAT for the city of Bucha was used to validate the methodology presented herein. Figure 2 shows the UNOSAT survey area (dots) and the AOI (orange box) denoting the area used for optimal threshold selection (Section 3.2 for SAR and Section 3.3 for optical). UNOSAT generated a point-based inventory of damaged buildings based on a visual survey of WV-3 and WV-2 satellite imagery acquired between 20 February 2022 and 31 March 2022. The data are published through an open-source geodatabase and were, at the time of publication, the only accepted data source of damages across Ukraine. Each point is geolocated to match a building and is labeled with one of four damage attributes: “destroyed”, “severe damage”, “moderate damage”, and “possible damage”. Their findings are preliminary and have not yet been field verified [32].
The satellite-based analysis results include changes not only to buildings but also to other landcover types due to the war or seasonal changes. Thus, our analysis made use of ancillary data to suppress changes from features and landcover types not related to urban areas. We extracted the OpenStreetMap (OSM) building footprints (acquired on 26 April 2022) through QGIS to mask the buildings. OSM is an open-source global geographic database that provides street-level data, like building footprints and roads. The features are outlined by a community of registered contributors using the OSM base map. Because the database is maintained by volunteer contributors, features in some areas are outdated. Additionally, the OSM data oftentimes do not align properly with superimposed satellite imagery. The information is often incomplete, and small-scale buildings are frequently not outlined, as was the case with the OSM building footprint acquired for this study; therefore, in the analysis stage, the footprint dataset was augmented to include the small-scale buildings that were identified in the UNOSAT damage assessment report. We supplemented the OSM data by digitizing building footprints for an additional 100 buildings in the UNOSAT surveyed area (details are given in Section 4.3). The Google Open Buildings [35] and the Microsoft Building Footprints [36] are comparable datasets that use deep learning algorithms to delineate building footprints from high resolution satellite imagery and may yield more accurate and complete building footprints, but they were not used in this study because they are not globally available. In the future, they may provide more accurate and updated building footprint information for natural disaster relief and humanitarian efforts.
In the absence of a building footprint layer, data sources that provide settlement/built-up information from medium resolution images (e.g., World Settlement Footprint (WSF) [37], Global Human Settlement Layer (GHSL) [38], ESRI land use/land cover timeseries [39], and the ESA WorldCover map [40]) can also be used. The accuracy and level of detail of each product vary across different regions. Figure 3 shows the comparison of these built-up layers. As the WSF provides better detailed information in distinguishing built-up from other land cover types, we compare the use of WSF to the OSM building footprint as an alternative mask input.
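The masking step described above can be sketched in a simplified form: rasterize a building footprint polygon onto the change-raster grid and suppress all pixels that fall outside it. The footprint coordinates, grid geometry, and change values below are toy inputs for illustration only; in practice, libraries such as rasterio and geopandas would handle projected OSM or WSF layers.

```python
import numpy as np

def rasterize_footprint(poly, transform, shape):
    """Rasterize one building polygon (list of (x, y) vertices) onto a
    pixel grid using an even-odd ray-casting test. `transform` is a toy
    (x0, y0, pixel_size) tuple mapping pixels to map coordinates, with
    y0 the top edge of a north-up grid."""
    x0, y0, px = transform
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    xs = x0 + (cols + 0.5) * px        # pixel-center x coordinates
    ys = y0 - (rows + 0.5) * px        # y decreases with row (north-up)
    inside = np.zeros(shape, dtype=bool)
    n = len(poly)
    for i in range(n):                 # even-odd rule over polygon edges
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        crosses = (ys < y1) != (ys < y2)
        xi = x1 + (ys - y1) / (y2 - y1 + 1e-12) * (x2 - x1)
        inside ^= crosses & (xs < xi)
    return inside

# toy example: one 20 m x 20 m footprint on a 10 m grid
footprint = [(20.0, 80.0), (40.0, 80.0), (40.0, 60.0), (20.0, 60.0)]
mask = rasterize_footprint(footprint, transform=(0.0, 100.0, 10.0), shape=(10, 10))
change = np.random.default_rng(0).normal(size=(10, 10))
masked_change = np.where(mask, change, np.nan)   # suppress non-building pixels
print(int(mask.sum()))                           # number of built-up pixels
```

The same masking logic applies whether the mask comes from individual OSM polygons or from a coarser built-up layer such as the WSF; only the source of the boolean grid changes.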
DigitalGlobe’s WV-2 and WV-3 satellite imagery were used for qualitative validation. WV-2 and WV-3 follow a sun-synchronous orbit and are two of the highest resolution commercial satellites in the world. WV-2 provides panchromatic imagery with 0.46 m resolution and eight-band multispectral imagery with 1.84 m resolution. WV-3 collects panchromatic imagery with 0.31 m resolution and eight-band multispectral imagery with 1.24 m resolution [41]. The images were acquired as Level 3 orthorectified, radiometrically corrected products, and both WV-2 and WV-3 images were subsequently pansharpened to the same spatial resolution (0.5 m) using the Gram–Schmidt spectral sharpening algorithm available in ENVI software (release 5.6.1).

3. Methods

In this work, the damaged buildings were mapped using Sentinel-1 and Sentinel-2 data. The workflow is divided into four phases (Figure 4). In the data selection and pre-processing phase, after screening all available images, suitable pre- and post-event image pairs are selected and pre-processing steps are performed. In the change detection phase, we conducted change detection analysis using two well-known methods, the log ratio of intensity [42,43] and the Gray Level Co-occurrence Matrix (GLCM) [44,45,46,47], on the pre-processed Sentinel-1 and Sentinel-2 images, respectively. In the threshold selection phase, an Area of Interest (AOI) was selected and the optimal threshold was determined as the one yielding the highest F1 score. The changed and unchanged areas are then separated by applying the selected threshold to the whole study area together with a mask of the built-up area derived from the OSM and global built-up layers, respectively. In the last phase, the results are validated using the high-resolution WorldView images and the UNOSAT damage assessment map.

3.1. Image Pre-Processing

To generate SAR intensity images, we performed identical preprocessing operations on the ascending and descending Sentinel-1 Level-1 GRD images of the study area using SNAP 8.0 software. First, the two images that cover the study area were combined using the slice assembly tool. Then, we applied a radiometric calibration to obtain the SAR backscattering coefficient and co-registered the pre- and post-event images. Next, we used a Lee Sigma filter with a window size of 7 × 7 pixels to reduce the speckle effect. Finally, to correct the SAR geometric distortions, the Range Doppler Terrain Correction operator, which implements the Range Doppler orthorectification method with the 30 m SRTM 1 Sec HGT reference DEM, was applied, and the result was projected to geographic coordinates (WGS84). The details of the preprocessing workflow for Sentinel-1 Level-1 Ground Range Detected (GRD) images can be found in the tutorial by ESA [48].
The ESA provides Sentinel-2 Level-2A products as Bottom of Atmosphere (BOA) reflectance images derived from the associated Level-1C products. No further pre-processing was done, except resampling and layer stacking all spectral bands to a 10 m spatial resolution and creating a subset of the image for the region of study.

3.2. SAR Intensity Analysis

SAR backscattering intensity depends on the imaging geometry of the SAR sensor (e.g., orbit direction (ascending/descending), incidence angle, and polarization) as well as on the orientation, size, shape, electrical characteristics, surface roughness, and moisture content of the target on the ground [49]. Figure 5 shows a schematic of building backscattering for an ascending-orbit (right-looking, flying from south towards the north pole) SAR sensor. Due to the side-looking viewing geometry of the SAR sensor, an intact building shows regular layover and shadow zones in the SAR image. In the layover areas (facing the radar sensor), a strong backscattering signal can be observed (bright zone) owing to double-bounce scattering at the dihedral corner reflectors and the overlaying of reflections from the roof, wall, and ground (Figure 5(a-1,a-2,c-1,c-2)). On the opposite side of the building, a shadow zone may appear as the building occludes the ground and no signal returns to the sensor (dark zone). However, the shadow area was not evident in the medium resolution Sentinel-1 image, even for a 9-story building (Figure 5(a-1,a-2)). The layover and shadow effects can be observed in high-resolution SAR images such as TerraSAR-X images [50,51]. In general, the backscattering in VV is stronger than in VH, and the backscatter mean values increase steadily with increasing building height for both VV and VH channels [52].
As shown in Figure 5, a collapsed building has a different backscatter structure than an intact building. The strong backscattering from a building usually disappears or decreases when the building collapses due to a disaster. In the case of the partially damaged building (Figure 5c), where parts of the wall together with the roof collapsed, the remaining walls, debris, and ground may have formed a corner reflector and hence triggered strong double-bounce effects and an overall increased backscattering intensity (Figure 5(c-2)) [53,54]. Therefore, damaged and nondamaged buildings can be identified using SAR backscattering intensity change detection. In this study, the simple but robust SAR log ratio of intensity was implemented for this purpose.
When two SAR images are obtained before and after the event, the log ratio of intensity is calculated as:
I_ratio = 10 log10(I_N / I_(N+1))        (1)
where I_N is the pre-event SAR intensity image and I_(N+1) is the post-event SAR intensity image. Damage detection using the log ratio value is based on the hypothesis that the disaster event, such as an earthquake or explosion, changes the backscattering characteristics of the target area. Depending on the backscattering property of the object and its changes before and after the disaster (e.g., earthquake, tsunami, or anthropogenic causes such as war), the log ratio can be either positive or negative [42,43]. An increase in backscatter (negative log ratio value) may occur for a low-story damaged building (Figure 5(c-2)), while a reduced backscatter (positive log ratio value) may occur for a high-rise building whose damage is concentrated in the sidewalls and lower stories or which has collapsed [50]. The phenomenon is also evident in congested building areas [15]. The log ratio can be calculated with either the band math or the change detection tool in SNAP 8.0 software.
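As a minimal sketch of Equation (1) and the two-sided thresholding it feeds, the log ratio and a change mask can be computed with NumPy. The intensity arrays below are toy values in linear power units, and the VH thresholds of −0.6 and 0.9 are the values selected later in this section:

```python
import numpy as np

def log_ratio(pre_intensity, post_intensity, eps=1e-10):
    """Log ratio of SAR intensity (Equation (1)): positive where
    backscatter decreased after the event, negative where it increased.
    `eps` guards against division by or log of zero."""
    return 10.0 * np.log10((pre_intensity + eps) / (post_intensity + eps))

# toy pre- and post-event intensity images
pre = np.array([[1.0, 1.0], [4.0, 0.5]])
post = np.array([[1.0, 0.1], [4.0, 2.0]])
ratio = log_ratio(pre, post)

# two-sided thresholding: pixels outside (t_neg, t_pos) flagged as changed
t_neg, t_pos = -0.6, 0.9
changed = (ratio <= t_neg) | (ratio >= t_pos)
print(np.round(ratio, 2))
print(changed)
```

A drop in backscatter by a factor of ten yields a log ratio of +10 dB, while a fourfold increase yields about −6 dB; both would be flagged as changed under these thresholds.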
The log ratio was initially calculated on the ascending VH polarization images with initial threshold values of −1 and +1 to test the efficacy of the method. Each pixel in the SAR intensity change raster was classified as damaged if the log ratio value was either less than or equal to −1 or greater than or equal to +1. To determine an optimal threshold value, we selected an area of interest (AOI) containing 54 (32 intact and 22 damaged) buildings from the WV-2 image (orange box in Figure 2). Then, we calculated the precision, recall, and F1 score under different threshold values for the VH log ratio image. Precision is defined as the true positives divided by all samples classified as positive (Equation (2)). Recall is defined as the true positives divided by all actual positives (Equation (3)). F1 score is the harmonic mean of precision and recall (Equation (4)) [55]. Optimal thresholds were then determined by seeking the threshold that gives the highest F1 score. The selected threshold values were −0.6 and 0.9 for the VH log ratio. The same process was applied to the ascending VV and descending VH/VV polarization images. The selected thresholds were (−0.7, 0.9) for the ascending VV, and (−0.8, 1) and (−0.65, 0.9) for the descending VH and VV polarizations, respectively (Table 2). Finally, the selected thresholds were applied to the whole study area with a mask for the built-up areas using either the OSM or the global built-up layers, respectively. The final classification results are further discussed in Section 4.
Precision = TP / (TP + FP)        (2)
Recall = TP / (TP + FN)        (3)
F1 score = (2 · Precision · Recall) / (Precision + Recall)        (4)
where TP, TN, FP, and FN are the true positive, true negative, false positive, and false negative, respectively.
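The threshold selection over the AOI can be sketched as follows: sweep candidate thresholds, score each against reference damage labels using Equations (2)–(4), and keep the threshold with the highest F1 score. The arrays below are toy values, and only the positive-side threshold is swept for brevity:

```python
import numpy as np

def precision_recall_f1(pred, truth):
    """Equations (2)-(4) from boolean prediction and reference arrays."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def best_threshold(ratio, truth, candidates):
    """Pick the positive-side threshold that maximizes F1 over the AOI."""
    scores = [(precision_recall_f1(ratio >= t, truth)[2], t) for t in candidates]
    return max(scores)  # (best_f1, best_threshold)

# toy AOI: per-building log-ratio values and reference damage labels
ratio = np.array([1.2, 0.3, 2.0, 0.8, -0.2, 1.5])
truth = np.array([True, False, True, True, False, True])
f1, t = best_threshold(ratio, truth, candidates=[0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
print(t, round(float(f1), 2))
```

In this toy case every threshold up to 0.8 separates the classes perfectly (F1 = 1.0), and the tie is broken in favor of the largest such threshold; the negative-side threshold would be selected the same way with the inequality reversed.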

3.3. Optical Texture Analysis

“Texture” in an optical image describes the spatial variation of the brightness intensity of neighboring pixels, and it provides information that is independent of spectral reflectance values [56]. One of the most widely used statistical techniques for texture analysis is Haralick’s Gray Level Co-occurrence Matrix (GLCM) [44]. It is a second-order matrix that counts the occurrences of each gray-level relationship between a pixel and its specified neighbor. The GLCM texture features have proven successful in building damage detection [45,46,47]. In this study, the GLCM Mean was chosen as the texture index for differentiating a damaged building from an undamaged building. The GLCM Mean is a useful feature that may contribute to distinguishing individual class signatures for landscape classification, especially in building damage classification [21]. It is an interior texture, defined as a metric yielding high values for a neighborhood that contains few coherent edges but many subtle and irregular variations [56]. We computed the statistics with a window size of 3 × 3 using the co-occurrence measuring tool available in ENVI software (release 5.6.1). First, the average of the pre-event images taken on 28 March 2021 and 2 January 2022 was calculated to create a single pre-event texture image. Then, the texture difference between the newly generated pre-event texture and the post-event texture was computed. The procedure used for the SAR intensity analysis was repeated to select the optimal threshold value for the texture difference analysis. The selected threshold value was 0.4; each pixel in the texture change raster was classified as damaged if the pixel value was smaller than the threshold value. Figure 6 shows the texture difference and the corresponding post-event optical images. The results showed that the GLCM Mean has the potential to distinguish damaged buildings from undamaged buildings (Figure 6).
However, the brightness changes on other non-building objects on the optical images may also be misinterpreted as building change and may be misclassified as damaged (Figure 6d).
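For illustration, the GLCM Mean of a single window can be sketched in pure NumPy. This uses one common configuration (a horizontal one-pixel offset and 16 gray levels); ENVI’s co-occurrence tool exposes several offsets, window sizes, and quantization settings, so the values here are not expected to reproduce its output exactly:

```python
import numpy as np

def glcm_mean(window, levels=16):
    """GLCM Mean of one image window: quantize to `levels` gray levels,
    build the co-occurrence matrix for a horizontal (dx = 1) offset,
    normalize it, and return sum_i i * P(i) over the reference-pixel
    marginal distribution."""
    w = np.asarray(window, dtype=float)
    q = np.zeros_like(w, dtype=int)
    if w.max() > w.min():              # homogeneous windows map to level 0
        q = np.floor((w - w.min()) / (w.max() - w.min()) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i in range(q.shape[0]):        # count horizontal neighbor pairs
        for j in range(q.shape[1] - 1):
            glcm[q[i, j], q[i, j + 1]] += 1.0
    glcm /= glcm.sum()
    return float((np.arange(levels) * glcm.sum(axis=1)).sum())

# homogeneous window vs. a window with many irregular variations
flat = np.full((3, 3), 100.0)
rough = np.array([[100.0, 10.0, 90.0], [5.0, 80.0, 15.0], [95.0, 20.0, 100.0]])
print(glcm_mean(flat), glcm_mean(rough))
```

Sliding this function over a 3 × 3 window at every pixel of the pre- and post-event images and differencing the two results gives a texture-change raster of the kind thresholded above.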

4. Results

4.1. Damage Assessment Using Sentinel-1

The damage assessment result from the Sentinel-1 intensity analysis is shown in Figure 7 for the entire area. Prior to applying the built-up mask, the log ratio of intensity result shows an area of significant flooding along the Irpin River in the northwestern part of the study region (Figure 7a). Even though the flooding can be considered an indirect effect of the war because of the circumstances surrounding the flood [57], and it may have caused certain damages to the affected area, we omitted the analysis of the flooded region as we are primarily assessing building/infrastructure damage related to the direct effects of the war. We selected four representative locations that match the high-resolution WorldView images to perform a visual assessment. In each of the four areas, we compare the results using the OSM mask versus the WSF built-up layer mask. The log ratio of intensity result shows a good match with the reference WorldView images. Per our assessment, the damaged buildings are concentrated mainly in the northwestern part of Kyiv Oblast, most notably in the regions of Irpin, Bucha, and Hostomel, consistent with the deliberate Russian plan to encircle and capture Kyiv, the capital city of Ukraine. Using the augmented OSM polygons, the damaged building statistics calculated with the ArcGIS Desktop software (version 10.8.1) showed that the detected building areas ranged between 85 m2 and 74,156 m2. Most of the buildings detected by the SAR intensity analysis are medium to large-scale buildings (e.g., multi-story apartment towers, government facilities, supermarkets, warehouses, and crop storage facilities). The damaged residential buildings that are surrounded by vegetation or are of smaller scale, like one-story houses, were not well detected by the SAR analysis regardless of the mask used.
This may be related to the medium spatial resolution of Sentinel-1, the limited penetration ability of the C-band SAR signal, and the disruption from vegetation change. Although the high-rise buildings (shown in Figure 7e) were detected as damaged in our analysis, our visual assessment of the regions using the WorldView imagery did not corroborate this finding. This may be the result of backscattering changes of the high-rise buildings on the SAR images induced by differing incidence angles or by the dielectric properties of the targets at the times of acquisition (February and April). Furthermore, compared to the OSM mask result, the WSF mask showed more false positives, as it classifies large continuous areas as built-up, whereas OSM provides individual building polygons. On the other hand, the incomplete building footprints from the OSM data excluded certain medium to small-scale damaged buildings that were retained by the WSF mask results (Figure 7b–d).

4.2. Damage Assessment Using Sentinel-2

The damage assessment result from the Sentinel-2 texture analysis is summarized in Figure 8. The texture analysis was able to detect the same flooded area along the Irpin River as the SAR intensity analysis (Figure 7a). However, the extent of the flooded region in this analysis is smaller than in the SAR intensity result. We selected four representative areas that match the high-resolution WorldView images: three in the northwestern region of the study area (Figure 8b–d), at the same locations referenced in Section 4.1, and one in the southern region (Figure 8e). In each of the four areas, we compare the results using the OSM mask versus the WSF built-up layer mask. The effects of using the OSM and WSF building masks are less significant than for the SAR intensity analysis, indicating that there are fewer false positives in the unmasked damage labels. The comparison with the high-resolution WorldView images showed that damage to large buildings with clear boundaries was mainly detected by the Sentinel-2 texture analysis (Figure 8b–d). However, as shown in Figure 10, buildings with dark roofs were not detected as damaged buildings. This may be related to the fact that the brightness intensity of such damaged buildings is similar in the pre- and post-event optical images (lower in both images). Moreover, smaller damaged buildings that do not have distinct texture features compared to their surroundings were also not captured in the texture analysis (Figure 8c). Furthermore, the changes in intensity values of bright objects, representing either bare ground (Figure 6d) or tall structures that show different reflectance at different times (Figure 8e), were misclassified as damaged buildings. Overall, the texture difference highlighted mostly large, damaged buildings with fewer false positives.

4.3. Quantitative Comparison between the UNOSAT Damage Assessment and Sentinel-1 Intensity Results

Figure 9 compares the Sentinel-1 SAR intensity-based results with the UNOSAT building damage assessment report for Bucha. The UNOSAT damage assessment was performed visually on WorldView images over the region of interest (the UNOSAT Bucha grid). Most buildings in the region are small to medium scale. Their results (Table 3) show 145 damaged buildings with an average building area of 867 m2; the smallest building area is 149 m2 (destroyed), and the largest measures 1280 m2 (severe damage). As the Sentinel-2 texture results only highlighted large-scale damaged buildings, the quantitative validation was carried out on the Sentinel-1 intensity results. The OSM building footprint was used to locate the UNOSAT damaged (point-based) building labels, with 45 building labels falling within an OSM building footprint. For the remaining 100 buildings without an OSM building footprint, we created footprints by visually comparing them with the WorldView image acquired on 28 February 2022. Because our augmented OSM footprint focused on the buildings with UNOSAT labels, we are only able to quantify the true positive rate. The false positive, true negative, and false negative counts will be biased due to the incomplete nature of the OSM footprint layer.
The UNOSAT damage labels included four classes: destroyed, severe damage, moderate damage, and possible damage. Merging all four classes into a single damaged class, the comparison of the SAR intensity-based results with the UNOSAT report showed that 58% of the damaged buildings (84 out of 145 buildings) were correctly classified. The comparison by UNOSAT building damage level showed that the severe damage class had the highest match, with an accuracy of 64%, while the possible damage and destroyed classes had the lowest match, with accuracies of 50% and 47%, respectively. The lower accuracy rates may result from a combination of the applied method and the medium resolution of the imagery. A majority of the destroyed class were small buildings with an average area of 149 m2, which may be difficult to detect given the 10 m image resolution. If we ignore all buildings with footprints smaller than 300 m2, the true positive rate increases from 58% to 76%. The SAR intensity change results more favorably highlighted significant backscatter changes (severe damage) paired with larger building footprints than small changes (moderate and possible damage). This is consistent with previous research results showing that it is challenging to detect lower damage grades from medium resolution images [3,21,51]. Additionally, there were 23 damaged buildings, shown by the green polygons in Figure 9, that were detected by the SAR intensity analysis but were not included in the UNOSAT damage assessment report. From inspection, we expect that some of these false positives may be true positives because of missing labels in the UNOSAT dataset. The missing labels may be the result of the different dates between the image used by UNOSAT to assess the damage (31 March) and the SAR images used in this study (5/8 April).
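The validation tally above can be sketched as a simple per-class count: for each UNOSAT-labeled building, record whether the SAR analysis flagged it, then compute overall and per-class true positive rates. The records below are toy entries for illustration, not the actual Bucha inventory:

```python
# toy label/detection table: (UNOSAT damage class, detected by SAR analysis)
records = [
    ("destroyed", True), ("destroyed", False),
    ("severe damage", True), ("severe damage", True), ("severe damage", False),
    ("moderate damage", True), ("moderate damage", False),
    ("possible damage", False), ("possible damage", True),
]

def match_rates(records):
    """Overall and per-class true positive rate of the detections."""
    classes = {}
    for cls, detected in records:
        hit, total = classes.get(cls, (0, 0))
        classes[cls] = (hit + int(detected), total + 1)
    overall = sum(h for h, _ in classes.values()) / len(records)
    per_class = {c: h / t for c, (h, t) in classes.items()}
    return overall, per_class

overall, per_class = match_rates(records)
print(round(overall, 2))                         # fraction of labeled buildings detected
print({c: round(r, 2) for c, r in per_class.items()})
```

Merging the four classes into one, as done above, corresponds to taking the overall rate; the per-class rates reproduce the breakdown by damage level.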

5. Discussion

Our study investigated the potential of Sentinel-1 SAR and Sentinel-2 optical images for damage assessment in the region of Kyiv, Ukraine. Optical and SAR imagery have their own advantages with respect to damage recognition capabilities and outputs. Which data type contributes more to damage classification is debatable, because it depends on many factors, such as data availability and quality, satellite acquisition parameters, and the geo-environmental characteristics of specific areas. Our comparison showed that the SAR intensity-based analysis detected small to large scale damaged buildings, while the optical texture-based analysis mainly detected large-scale damaged buildings. Our comparison also demonstrates that roof color can influence the accuracy of the optical texture-based analysis, as shown in Figure 10, where both white and dark roof damaged buildings were identified by the SAR intensity analysis, whereas only the white roof damaged building was identified by the optical texture analysis. Texture in optical images describes the spatial variation of brightness among neighboring pixels; a dark roof and its surroundings both have low brightness, so the change is not significant in either the pre- or post-event optical image. Previous researchers also found that damage classification accuracy is compromised when roof materials are diverse [21]. Further research is needed to evaluate the two sensors' performance in different areas, especially because published results are contradictory. Consistent with our findings, Adriano et al. [22] concluded that SAR-derived features outperformed optical-derived features for mapping damaged buildings following the Sulawesi earthquake and tsunami sequence in Indonesia. However, Putri et al. [21] found that Sentinel-2 performed better than Sentinel-1 in classifying damaged buildings following the 2018 Lombok Earthquake.
Lubin and Saleem successfully mapped the destruction of Aleppo during the Syrian Civil War using GLCM texture measures extracted from Landsat imagery alone [47].
Changes in SAR backscattering intensity and coherence have been widely used for building damage mapping. Coherence is more sensitive to minor ground changes than intensity [51]. One successful method, developed by NASA's Advanced Rapid Imaging and Analysis (ARIA) project, uses coherence to produce damage proxy maps of urban areas following natural hazards [16,58]. We initially explored SAR coherence differencing for detecting damaged buildings, but the results were unsatisfactory, as they mostly highlighted changes in soil moisture (flooded areas) and vegetation (seasonal status). We therefore reconsidered our initial approach and focused on the SAR intensity analysis. Other researchers have combined SAR coherence with intensity and polarization to detect damaged buildings [59,60]. It is commonly acknowledged that, due to speckle, single-pixel damage classification from medium resolution SAR images yields unsatisfactory results. Satisfactory results may be achieved if damage is assessed at the block level [21], whereas detailed single-building damage mapping requires high-resolution SAR images (e.g., TerraSAR-X and COSMO-SkyMed) [61]. Nevertheless, larger damaged buildings are detectable at the building scale even with medium resolution imagery.
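The intensity log ratio at the core of our change detection can be sketched in a few lines. The function names, the eps guard, and the example threshold are illustrative; in practice the threshold is tuned on an AOI with known damage, as in the workflow of Figure 4.

```python
import numpy as np

def log_ratio(pre: np.ndarray, post: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Log ratio of SAR intensity in dB; large |values| indicate change.
    eps guards against division by zero in dark pixels (an assumption here)."""
    return 10.0 * np.log10((post + eps) / (pre + eps))

def damage_candidates(pre: np.ndarray, post: np.ndarray,
                      threshold_db: float) -> np.ndarray:
    """Binary change map: pixels whose backscatter changed by more than
    threshold_db between the pre- and post-event acquisitions."""
    return np.abs(log_ratio(pre, post)) > threshold_db
```

For example, a pixel whose linear intensity rises tenfold shows a 10 dB log ratio and is flagged at any reasonable threshold, while unchanged pixels sit near 0 dB.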
Our results also highlight that it is necessary to use ancillary data to suppress changes from other features and landcover types not related to urban areas. The freely available OSM building footprints have been widely used by researchers. It is important to note that the OSM data may be incomplete (especially in developing countries) or poorly aligned with the imagery under analysis. Therefore, researchers can choose to use either original, shifted, or buffered building footprint areas to detect the damaged buildings [62,63], or augment existing OSM building footprints with additional digitization. In this study, we applied the original building footprints to mask building areas and augmented the OSM building footprints in the UNOSAT surveyed area. As shown in Figure 11, we noticed during the validation with the high resolution optical image that the building footprint layer was incomplete, which led to the exclusion of certain damaged buildings from the result. A visual assessment of the area by two experts referencing the pre- and post-event WorldView images showed 122 damaged buildings. We tested the WSF built-up layer as a mask and were able to detect more damaged buildings (77), some of which (39) had been excluded by the OSM building mask. Considering the amount of superfluous land cover excluded by the masks, both layers provide useful building information (e.g., location and geometry) that can accelerate data processing in disaster response situations. However, improvements can be made to increase the number of buildings provided in each of the original datasets.
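Attributing change pixels to individual buildings through a footprint mask can be sketched as follows. The rasterized id layer is a hypothetical stand-in for the OSM or WSF masks used in this study, and the function name is ours.

```python
import numpy as np

def damaged_building_ids(change: np.ndarray, footprint_ids: np.ndarray) -> set:
    """Ids of building footprints containing at least one changed pixel.
    footprint_ids: raster of building ids (e.g., rasterized OSM polygons),
    with 0 marking background. Change pixels that fall outside every
    footprint (vegetation, bare soil, water) are suppressed by the mask."""
    hits = footprint_ids[change & (footprint_ids > 0)]
    return set(np.unique(hits).tolist())
```

A change pixel over background (id 0) contributes nothing, which is exactly how the masks remove the vegetation- and moisture-driven false positives discussed above.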

6. Conclusions

In this study, we leveraged publicly accessible Sentinel-1 and Sentinel-2 data to perform a damage assessment of the regions around Kyiv, the capital city of Ukraine, at the onset of the 2022 war. We applied a log ratio of intensity algorithm to the Sentinel-1 SAR imagery and a GLCM mean texture analysis to the Sentinel-2 optical data to detect damaged buildings. The intensity analysis revealed that most damaged buildings are concentrated in the northwestern part of Kyiv Oblast, especially in the cities of Bucha, Irpin, and Hostomel. In a small region with UNOSAT visually verified damaged buildings, our analysis detected 58% of the damaged buildings reported by UNOSAT across all labels. The severe damage label had the highest accuracy at 64%. When only buildings with footprints larger than 300 m2 are evaluated, the accuracy across all labels increases to 76%.
The qualitative comparison with the WorldView image using the OSM and WSF masks highlighted building damage and reduced false positives in vegetated regions. The incomplete nature of the OSM building footprints led to the exclusion of certain damaged buildings. The WSF mask is less accurate in terms of building footprints but has better coverage of buildings across a region and is therefore preferred where the OSM data are incomplete. The texture analysis was less accurate for identifying building damage, especially for small buildings, but accurately identified large damaged buildings and produced fewer false positives. In the future, we will extend the framework to detect other war related damage, such as damage to agriculture and to roads and other non-building infrastructure.

Author Contributions

Conceptualization, Y.A., C.S., M.K., L.G.B., B.M.; methodology, Y.A.; software, Y.A.; validation, Y.A. and C.S.; formal analysis, Y.A.; investigation, C.S.; resources, M.K.; data curation, C.S.; writing—original draft preparation, Y.A. and C.S.; writing—review and editing, M.K., L.G.B., B.M.; visualization, Y.A.; supervision and funding acquisition, M.K., L.G.B., B.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Geospatial Intelligence Agency, grant number HM04762010006.

Data Availability Statement

Not applicable.

Acknowledgments

The Sentinel-1 images are publicly available datasets owned and provided by the European Space Agency (ESA). The WorldView images are courtesy of MAXAR.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yusuf, Y.; Matsuoka, M.; Yamazaki, F. Damage assessment after 2001 Gujarat earthquake using Landsat-7 satellite images. J. Indian Soc. Remote Sens. 2001, 29, 17–22.
2. Kohiyama, M.; Yamazaki, F. Damage Detection for 2003 Bam, Iran, Earthquake Using Terra-ASTER Satellite Imagery. Earthq. Spectra 2005, 21, 267–274.
3. Moya, L.; Muhari, A.; Adriano, B.; Koshimura, S.; Mas, E.; Marval-Perez, L.R.; Yokoya, N. Detecting urban changes using phase correlation and ℓ1-based sparse model for early disaster response: A case study of the 2018 Sulawesi Indonesia earthquake-tsunami. Remote Sens. Environ. 2020, 242, 111743.
4. Tong, X.; Hong, Z.; Liu, S.; Zhang, X.; Xie, H.; Li, Z.; Yang, S.; Wang, W.; Bao, F. Building-damage detection using pre- and post-seismic high-resolution satellite stereo imagery: A case study of the May 2008 Wenchuan earthquake. ISPRS J. Photogramm. Remote Sens. 2012, 68, 13–27.
5. Kaya, G.T.; Musaoglu, N.; Ersoy, O.K. Damage Assessment of 2010 Haiti Earthquake with Post-Earthquake Satellite Image by Support Vector Selection and Adaptation. Photogramm. Eng. Remote Sens. 2011, 77, 1025–1035.
6. Omarzadeh, D.; Karimzadeh, S.; Matsuoka, M.; Feizizadeh, B. Earthquake Aftermath from Very High-Resolution WorldView-2 Image and Semi-Automated Object-Based Image Analysis (Case Study: Kermanshah, Sarpol-e Zahab, Iran). Remote Sens. 2021, 13, 4272.
7. Miura, H.; Aridome, T.; Matsuoka, M. Deep Learning-Based Identification of Collapsed, Non-Collapsed and Blue Tarp-Covered Buildings from Post-Disaster Aerial Images. Remote Sens. 2020, 12, 1924.
8. Ma, H.; Liu, Y.; Ren, Y.; Yu, J. Detection of Collapsed Buildings in Post-Earthquake Remote Sensing Images Based on the Improved YOLOv3. Remote Sens. 2019, 12, 44.
9. Adriano, B.; Yokoya, N.; Xia, J.; Miura, H.; Liu, W.; Matsuoka, M.; Koshimura, S. Learning from multimodal and multitemporal earth observation data for building damage mapping. ISPRS J. Photogramm. Remote Sens. 2021, 175, 132–143.
10. Gupta, R.; Goodman, B.; Patel, N.; Hosfelt, R.; Sajeev, S.; Heim, E.; Doshi, J.; Lucas, K.; Choset, H.; Gaston, M. Creating XBD: A Dataset for Assessing Building Damage from Satellite Imagery. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
11. Lam, D.; Kuzma, R.; McGee, K.; Dooley, S.; Laielli, M.; Klaric, M.K.; Bulatov, Y.; McCord, B. XView: Objects in Context in Overhead Imagery. arXiv 2018, arXiv:1802.07856.
12. Wu, C.; Zhang, F.; Xia, J.; Xu, Y.; Li, G.; Xie, J.; Du, Z.; Liu, R. Building Damage Detection Using U-Net with Attention Mechanism from Pre- and Post-Disaster Remote Sensing Datasets. Remote Sens. 2021, 13, 905.
13. Berezina, P.; Liu, D. Hurricane damage assessment using coupled convolutional neural networks: A case study of hurricane Michael. Geomat. Nat. Hazards Risk 2022, 13, 414–431.
14. Tilon, S.; Nex, F.; Kerle, N.; Vosselman, G. Post-Disaster Building Damage Detection from Earth Observation Imagery using Unsupervised and Transferable Anomaly Detecting Generative Adversarial Networks. Remote Sens. 2020, 12, 4193.
15. Uprety, P.; Yamazaki, F.; Dell’Acqua, F. Damage Detection Using High-Resolution SAR Imagery in the 2009 L’Aquila, Italy, Earthquake. Earthq. Spectra 2013, 29, 1521–1535.
16. Yun, S.-H.; Hudnut, K.; Owen, S.; Webb, F.; Simons, M.; Sacco, P.; Gurrola, E.; Manipon, G.; Liang, C.; Fielding, E.; et al. Rapid Damage Mapping for the 2015 Mw 7.8 Gorkha Earthquake Using Synthetic Aperture Radar Data from COSMO–SkyMed and ALOS-2 Satellites. Seism. Res. Lett. 2015, 86, 1549–1556.
17. Park, S.-E.; Jung, Y.T. Detection of Earthquake-Induced Building Damages Using Polarimetric SAR Data. Remote Sens. 2020, 12, 137.
18. Chen, Q.; Yang, H.; Li, L.; Liu, X. A Novel Statistical Texture Feature for SAR Building Damage Assessment in Different Polarization Modes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 13, 154–165.
19. Endo, Y.; Adriano, B.; Mas, E.; Koshimura, S. New Insights into Multiclass Damage Classification of Tsunami-Induced Building Damage from SAR Images. Remote Sens. 2018, 10, 2059.
20. Brunner, D.; Lemoine, G.; Bruzzone, L. Earthquake Damage Assessment of Buildings Using VHR Optical and SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2403–2420.
21. Putri, A.F.S.; Widyatmanti, W.; Umarhadi, D.A. Sentinel-1 and Sentinel-2 data fusion to distinguish building damage level of the 2018 Lombok Earthquake. Remote Sens. Appl. Soc. Environ. 2022, 26, 100724.
22. Adriano, B.; Xia, J.; Baier, G.; Yokoya, N.; Koshimura, S. Multi-Source Data Fusion Based on Ensemble Learning for Rapid Building Damage Mapping during the 2018 Sulawesi Earthquake and Tsunami in Palu, Indonesia. Remote Sens. 2019, 11, 886.
23. Dell’Acqua, F.; Gamba, P. Remote Sensing and Earthquake Damage Assessment: Experiences, Limits, and Perspectives. Proc. IEEE 2012, 100, 2876–2890.
24. Lee, J.; Xu, J.Z.; Sohn, K.; Lu, W.; Berthelot, D.; Gur, I.; Khaitan, P.; Huang, K.; Koupparis, K.M.; Kowatsch, B. Assessing Post-Disaster Damage from Satellite Imagery Using Semi-Supervised Learning Techniques. arXiv 2020, arXiv:2011.14004.
25. Boloorani, A.; Darvishi, M.; Weng, Q.; Liu, X. Post-War Urban Damage Mapping Using InSAR: The Case of Mosul City in Iraq. ISPRS Int. J. Geo-Inf. 2021, 10, 140.
26. Filippi, A.M. Remote Sensing-Based Damage Assessment for Homeland Security. GeoJ. Libr. 2008, 94, 125–169.
27. ICRC. Urban Services during Protracted Armed Conflict: A Call for a Better Approach to Assisting Affected People; International Committee of the Red Cross: Geneva, Switzerland, 2015.
28. Crawford, N.C. Reliable Death Tolls from the Ukraine War Are Hard to Come by—The Result of Undercounts and Manipulation. The Conversation, 4 April 2022.
29. Ukraine: UN Rights Office Probe Spotlights Harrowing Plight of Civilians|UN News. Available online: https://news.un.org/en/story/2022/05/1117902 (accessed on 12 May 2022).
30. Ukraine War: $100 Billion in Infrastructure Damage, and Counting|UN News. Available online: https://news.un.org/en/story/2022/03/1114022 (accessed on 12 May 2022).
31. United Nations Satellite Centre UNOSAT|UNITAR. Available online: https://www.unitar.org/sustainable-development-goals/united-nations-satellite-centre-UNOSAT (accessed on 11 May 2022).
32. Bucha Rapid Damage Assessment Overview Map—Humanitarian Data Exchange. Available online: https://data.humdata.org/dataset/bucha-rapid-damage-assessment-overview-map (accessed on 11 May 2022).
33. USGS EROS Archive—Sentinel-2|U.S Geological Survey. Available online: https://www.usgs.gov/centers/eros/science/usgs-eros-archive-sentinel-2?qt-science_center_objects=0#qt-science_center_objects (accessed on 14 June 2022).
34. European Space Agency Sentinel-2 User Handbook. 2015. Available online: https://sentinels.copernicus.eu/web/sentinel/user-guides/document-library/-/asset_publisher/xlslt4309D5h/content/sentinel-2-user-handbook (accessed on 5 December 2022).
35. Sirko, W.; Kashubin, S.; Ritter, M.; Annkah, A.; Salah, Y.; Bouchareb, E.; Dauphin, Y.; Keysers, D.; Neumann, M.; Cisse, M.; et al. Continental-Scale Building Detection from High Resolution Satellite Imagery. arXiv 2021, arXiv:2107.12283.
36. Building Footprints—Bing Maps. Available online: https://www.microsoft.com/en-us/maps/building-footprints (accessed on 5 July 2022).
37. EOC Geoservice Maps—World Settlement Footprint (WSF)—Sentinel-1/Sentinel-2—Global. 2019. Available online: https://geoservice.dlr.de/web/maps/eoc:wsf2019 (accessed on 26 June 2022).
38. European Commission. Global Human Settlement. Available online: https://ghsl.jrc.ec.europa.eu/datasets.php (accessed on 15 July 2018).
39. Sentinel-2 10 m Land Use/Land Cover Timeseries—Overview. Available online: https://www.arcgis.com/home/item.html?id=d3da5dd386d140cf93fc9ecbf8da5e31 (accessed on 26 June 2022).
40. WorldCover|WORLDCOVER. Available online: https://esa-worldcover.org/en (accessed on 26 June 2022).
41. WorldView Series—Earth Online. Available online: https://earth.esa.int/eogateway/missions/worldview (accessed on 14 June 2022).
42. Mondini, A.C.; Santangelo, M.; Rocchetti, M.; Rossetto, E.; Manconi, A.; Monserrat, O. Sentinel-1 SAR Amplitude Imagery for Rapid Landslide Detection. Remote Sens. 2019, 11, 760.
43. Jung, J.; Yun, S.-H. Evaluation of Coherent and Incoherent Landslide Detection Methods Based on Synthetic Aperture Radar for Rapid Response: A Case Study for the 2018 Hokkaido Landslides. Remote Sens. 2020, 12, 265.
44. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
45. Moya, L.; Zakeri, H.; Yamazaki, F.; Liu, W.; Mas, E.; Koshimura, S. 3D gray level co-occurrence matrix and its application to identifying collapsed buildings. ISPRS J. Photogramm. Remote Sens. 2019, 149, 14–28.
46. Sonobe, M. Characteristics of Texture Index of Damaged Buildings Using Time-Series High-Resolution Optical Satellite Images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1709–1714.
47. Lubin, A.; Saleem, A. Remote sensing-based mapping of the destruction to Aleppo during the Syrian Civil War between 2011 and 2017. Appl. Geogr. 2019, 108, 30–38.
48. Braun, A. Sentinel-1 Toolbox Tutorial: SAR-Based Landcover Classification with Sentinel-1 GRD Products SAR-Based Land Cover Classification 2020. Available online: http://step.esa.int/docs/tutorials/S1TBX%20Landcover%20classification%20with%20Sentinel-1%20GRD.pdf (accessed on 5 December 2022).
49. Plank, S. Rapid Damage Assessment by Means of Multi-Temporal SAR—A Comprehensive Review and Outlook to Sentinel-1. Remote Sens. 2014, 6, 4870–4906.
50. Yamazaki, F.; Iwasaki, Y.; Liu, W.; Nonaka, T.; Sasagawa, T. Detection of Damage to Building Side-Walls in the 2011 Tohoku, Japan Earthquake Using High-Resolution TerraSAR-X Images. In Proceedings of the Image and Signal Processing for Remote Sensing XIX-SPIE, Gainesville, FL, USA, 25–28 June 2013; Volume 8892, pp. 299–307.
51. Ge, P.; Gokon, H.; Meguro, K. A review on synthetic aperture radar-based building damage assessment in disasters. Remote Sens. Environ. 2020, 240, 111693.
52. Koppel, K.; Zalite, K.; Voormansik, K.; Jagdhuber, T. Sensitivity of Sentinel-1 backscatter to characteristics of buildings. Int. J. Remote Sens. 2017, 38, 6298–6318.
53. Matsuoka, M.; Yamazaki, F. Use of Satellite SAR Intensity Imagery for Detecting Building Areas Damaged Due to Earthquakes. Earthq. Spectra 2004, 20, 975–994.
54. Matsuoka, M.; Yamazaki, F.; Ohkura, H. Damage Mapping for the 2004 Niigata-Ken Chuetsu Earthquake Using Radarsat Images. In Proceedings of the 2007 Urban Remote Sensing Joint Event, Paris, France, 11–13 April 2007.
55. Powers, D.M.W. Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness and Correlation. arXiv 2020, arXiv:2010.16061.
56. Hall-Beyer, M. Practical guidelines for choosing GLCM textures to use in landscape classification tasks over a range of moderate spatial scales. Int. J. Remote Sens. 2017, 38, 1312–1338.
57. Flood Saves Ukrainian Village from Russian Occupation|Reuters. Available online: https://www.reuters.com/world/europe/flood-saves-ukrainian-village-russian-occupation-2022-05-15/ (accessed on 19 July 2022).
58. ARIA|Home. Available online: https://aria.jpl.nasa.gov/ (accessed on 1 June 2022).
59. ElGharbawi, T.; Zarzoura, F. Damage detection using SAR coherence statistical analysis, application to Beirut, Lebanon. ISPRS J. Photogramm. Remote Sens. 2021, 173, 1–9.
60. Watanabe, M.; Thapa, R.B.; Ohsumi, T.; Fujiwara, H.; Yonezawa, C.; Tomii, N.; Suzuki, S. Detection of damaged urban areas using interferometric SAR coherence change with PALSAR-2. Earth Planets Space 2016, 68, 131.
61. Mazzanti, P.; Scancella, S.; Virelli, M.; Frittelli, S.; Nocente, V.; Lombardo, F. Assessing the Performance of Multi-Resolution Satellite SAR Images for Post-Earthquake Damage Detection and Mapping Aimed at Emergency Response Management. Remote Sens. 2022, 14, 2210.
62. Ge, P.; Gokon, H.; Meguro, K. Building Damage Assessment Using Intensity SAR Data with Different Incidence Angles and Longtime Interval. J. Disaster Res. 2019, 14, 456–465.
63. Miura, H.; Midorikawa, S.; Matsuoka, M. Building Damage Assessment Using High-Resolution Satellite SAR Images of the 2010 Haiti Earthquake. Earthq. Spectra 2016, 32, 591–610.
Figure 1. Location of the study area and footprints of the Sentinel-1 and Sentinel-2 images.
Figure 2. UNOSAT damage assessment map of Bucha city. The background image is from Sentinel-2 acquired on 2 January 2022. The red rectangular box denotes the area shown in Figure 3, and the orange rectangular box shows the AOI for threshold selection.
Figure 3. The comparison of (a) OSM building footprint with the (b) ESRI land use/land cover, (c) GHSL, (d) WSF, and (e) ESA WorldCover map. The location of the area is shown in Figure 2.
Figure 4. The processing workflow for building damage mapping using Sentinel-1 and Sentinel-2 images. Note: dif, AOI, and Tr refer to the difference image, the area selected for threshold determination, and the threshold, respectively.
Figure 5. Schematic diagram of backscatter intensity from (a) intact buildings, (b) destroyed buildings, and (c) partially damaged buildings in synthetic aperture radar (SAR) images; (a-1,a-2,b-1,b-2,c-1,c-2) are pre- and post-event Sentinel-1 images (19 February 2022 & 8 April 2022); (a-3,b-3,c-3) are corresponding WorldView images (25 & 31 March 2022).
Figure 6. The GLCM texture difference and the corresponding optical images. (a–d) are selected examples that have significant texture changes, and those locations are shown on the WorldView-2 image.
Figure 7. Comparison of Sentinel-1 damaged building results using two different masks (OSM & WSF). (a) shows the SAR intensity-based results for the entire area. The locations (b–e) were selected as representative areas for comparison with the high-resolution WorldView image. The columns from left to right indicate the WorldView image (31 March 2022), OSM and WSF mask results of the damaged buildings. The scale shown in (b) is the same for (c–e).
Figure 8. The result of Sentinel-2 texture-based analysis for study region (a). (b–e) are comparisons of Sentinel-2 damaged building results using two different masks (OSM & WSF). The locations (b–e) are referenced in (a) and were selected as representative areas for comparison with the high-resolution WorldView image. The columns from left to right indicate the WorldView image (25 March 2022 & 31 March 2022), OSM and WSF mask results of the damaged buildings. The scale shown in (b) is the same for (c–e).
Figure 9. The comparison of the Sentinel-1 SAR intensity-based results with the UNOSAT’s building damage assessment report.
Figure 10. The Sentinel-1 SAR intensity versus Sentinel-2 texture (c,d) and the corresponding WorldView imagery taken on 28 February 2022 and 9 May 2022 (a,b). The blue polygons are the OSM building footprints overlaid on the images for better comparison.
Figure 11. WSF vs. OSM building footprint mask on the SAR intensity results. (a,b) are pre- and post-event WorldView images taken on 28 February 2022 and 31 March 2022; (c–e) are the results after applying the OSM building mask, the WSF building mask, and no mask, respectively.
Table 1. Description of satellite images used in this study.
Sensor (Mode) | Acquisition Date | Bands/Polarization | Relative Orbit Number | Incidence/Sun Azimuth | Resolution
Sentinel-1 (Desc *) | 16 February & 5 April 2022 | VV and VH | 36 | 30.16° | 5 × 20 m
Sentinel-1 (Asc *) | 19 February & 8 April 2022 | VV and VH | 87 | 30.15° | 5 × 20 m
Sentinel-2 | 2 January & 7 April 2022; 28 March 2021 | b2–b8A and b11, b12 | 07 | 166.8° | 10/20 m
WorldView-2 | 28 February & 25 March 2022 | Red, green, and blue | — | 154.7° | 0.5 m
WorldView-3 | 31 March & 9 May 2022 | Red, green, and blue | — | 157.7° | 0.3 m
* Desc refers to the Descending orbit, Asc refers to the Ascending orbit.
Table 2. The results of precision, recall, and F1 for the chosen thresholds in the AOI.
Result | TP | TN | FP | FN | Precision (%) | Recall (%) | F1 (%)
Ascending VH | 16 | 23 | 9 | 6 | 64 | 73 | 68
Ascending VV | 15 | 21 | 11 | 7 | 58 | 68 | 63
Descending VH | 17 | 22 | 10 | 5 | 63 | 77 | 69
Descending VV | 16 | 20 | 12 | 6 | 57 | 73 | 64
S2 Texture | 13 | 30 | 2 | 8 | 87 | 62 | 72
Table 3. The statistics of UNOSAT damage levels and SAR intensity-based damaged buildings.
Damage class | Destroyed | Severe Damage | Moderate Damage | Possible Damage | Total
UNOSAT buildings | 19 | 72 | 26 | 28 | 145
Average area (m2) | 149 | 1280 | 708 | 441 | 867
SAR damaged | 9 | 46 | 15 | 14 | 84
Percentage (%) | 47 | 64 | 58 | 50 | 58
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Aimaiti, Y.; Sanon, C.; Koch, M.; Baise, L.G.; Moaveni, B. War Related Building Damage Assessment in Kyiv, Ukraine, Using Sentinel-1 Radar and Sentinel-2 Optical Images. Remote Sens. 2022, 14, 6239. https://doi.org/10.3390/rs14246239