Remote Sensing and SAR for Building Monitoring

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 30 September 2024 | Viewed by 6757

Special Issue Editors


Guest Editor
Institute for Energy Technology, Høgskolen i Østfold, Halden, Norway
Interests: UAV; building monitoring

Guest Editor
Graduate School of Engineering, Chiba University, Chiba 263-8522, Japan
Interests: SAR; optical images; LiDAR data; disasters; building damage; bridge damage

Special Issue Information

Dear Colleagues,

Building monitoring using remote sensing data plays an important role in urban planning, disaster management, navigation, the updating of geographic databases, and other geospatial applications. The rapid development of image processing techniques, together with the easy availability of very-high-resolution multispectral, hyperspectral, LiDAR, SAR, and UAS imagery, has further boosted research on building extraction. Notably, in recent years many research institutes and associations have released open-source datasets and annotated training data to meet the demands of advanced artificial-intelligence models, creating new opportunities for developing advanced approaches to building extraction and monitoring.

In addition to building masks, increasing research has emerged on the automatic generation of 2D/3D building models from remote sensing data. This Special Issue aims to feature cutting-edge methodologies and applications, as well as reviews, related to one or more of the following topics:

  • Advanced UAV/UAS for building detection and extraction;
  • Semantic remote sensing image segmentation;
  • 2D/3D change detection;
  • Disaster monitoring;
  • Rooftop modelling from remotely sensed data;
  • 3D point-cloud segmentation;
  • Building boundary extraction and vectorization;
  • Large-scale urban growth monitoring;
  • Weakly supervised urban classification;
  • Time-series remote sensing data analysis;
  • Urban object (vehicle, road, etc.) detection;
  • Multi-sensor, multi-resolution, and multi-modality data fusion;
  • Climate adaptation of smart cities;
  • Sustainable development.

Dr. Yonas Zewdu Ayele
Dr. Wen Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • building extraction
  • 3D urban modelling
  • urban classification
  • UAV/UAS
  • building change detection
  • disaster monitoring
  • optical stereo imagery
  • SAR
  • data fusion
  • time-series monitoring
  • sustainable development

Published Papers (5 papers)


Research

24 pages, 13702 KiB  
Article
Dual-Channel Semi-Supervised Adversarial Network for Building Segmentation from UAV-Captured Images
by Wenzheng Zhang, Changyue Wu, Weidong Man and Mingyue Liu
Remote Sens. 2023, 15(23), 5608; https://doi.org/10.3390/rs15235608 - 02 Dec 2023
Viewed by 1010
Abstract
Accurate building extraction holds paramount importance in various applications such as urbanization rate calculations, urban planning, and resource allocation. In response to the escalating demand for precise low-altitude unmanned aerial vehicle (UAV) building segmentation in intricate scenarios, this study introduces a semi-supervised methodology to alleviate the labor-intensive process of procuring pixel-level annotations. Within the framework of adversarial networks, we employ a dual-channel parallel generator strategy that amalgamates the morphology-driven optical flow estimation channel with an enhanced multilayer sensing Deeplabv3+ module. This approach aims to comprehensively capture both the morphological attributes and textural intricacies of buildings while mitigating the dependency on annotated data. To further enhance the network’s capability to discern building features, we introduce an adaptive attention mechanism via a feature fusion module. Additionally, we implement a composite loss function to augment the model’s sensitivity to building structures. Across two distinct low-altitude UAV datasets within the domain of UAV-based building segmentation, our proposed method achieves average mean pixel intersection-over-union (mIoU) ratios of 82.69% and 79.37%, respectively, with unlabeled data constituting 70% of the overall dataset. These outcomes signify noteworthy advancements compared with contemporaneous networks, underscoring the robustness of our approach in tackling intricate building segmentation challenges in the domain of UAV-based architectural analysis.
(This article belongs to the Special Issue Remote Sensing and SAR for Building Monitoring)
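The mIoU figures quoted in the abstract above are a standard segmentation metric. A minimal NumPy sketch of how mIoU is computed from predicted and reference label maps (illustrative names and toy data, not the authors' code):

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean intersection-over-union over classes.

    pred, target: integer label maps of identical shape.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent in both maps: skip it
            continue
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 example: the prediction labels one extra column as building.
pred = np.array([[1, 1, 0, 0]] * 4)
target = np.array([[1, 0, 0, 0]] * 4)
print(round(mean_iou(pred, target), 4))  # → 0.5833
```

Per-class IoUs here are 4/8 for buildings and 8/12 for background, averaging to 7/12.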

19 pages, 7732 KiB  
Article
SCA-Net: Multiscale Contextual Information Network for Building Extraction Based on High-Resolution Remote Sensing Images
by Yuanzhi Wang, Qingzhan Zhao, Yuzhen Wu, Wenzhong Tian and Guoshun Zhang
Remote Sens. 2023, 15(18), 4466; https://doi.org/10.3390/rs15184466 - 11 Sep 2023
Cited by 5 | Viewed by 1122
Abstract
Accurately extracting buildings is essential for urbanization rate statistics, urban planning, resource allocation, etc. High-resolution remote sensing images contain rich building information, which provides an important data source for building extraction. However, the extreme abundance of building types with large differences in size, as well as the extreme complexity of the background environment, mean that the accurate extraction of the spatial details of multi-scale buildings remains a difficult problem worth studying. To this end, this study selects the representative Xinjiang Tumxuk urban area as the study area. A building extraction network (SCA-Net) with feature highlighting, multi-scale sensing, and multi-level feature fusion is proposed, which includes Selective kernel spatial Feature Extraction (SFE), Contextual Information Aggregation (CIA), and Attentional Feature Fusion (AFF) modules. First, Selective kernel spatial Feature Extraction modules are used for cascading composition, highlighting information representation of features, and improving the feature extraction capability. Adding a Contextual Information Aggregation module enables the acquisition of multi-scale contextual information. The Attentional Feature Fusion module bridges the semantic gap between high-level and low-level features to achieve effective fusion between cross-level features. The classical U-Net, Segnet, Deeplab v3+, and HRNet v2 semantic segmentation models are compared on the self-built Tmsk and WHU building datasets. The experimental results show that the algorithm proposed in this paper can effectively extract multi-scale buildings in complex backgrounds with IoUs of 85.98% and 89.90% on the two datasets, respectively. SCA-Net is a suitable method for building extraction from high-resolution remote sensing images with good usability and generalization.
(This article belongs to the Special Issue Remote Sensing and SAR for Building Monitoring)

19 pages, 12106 KiB  
Article
Enhancing Building Segmentation in Remote Sensing Images: Advanced Multi-Scale Boundary Refinement with MBR-HRNet
by Geding Yan, Haitao Jing, Hui Li, Huanchao Guo and Shi He
Remote Sens. 2023, 15(15), 3766; https://doi.org/10.3390/rs15153766 - 28 Jul 2023
Cited by 4 | Viewed by 1384
Abstract
Deep learning algorithms offer an effective solution to the inefficiencies and poor results of traditional methods for building footprint extraction from high-resolution remote sensing imagery. However, the heterogeneous shapes and sizes of buildings render local extraction vulnerable to the influence of intricate backgrounds or scenes, culminating in intra-class inconsistency and inaccurate segmentation outcomes. Moreover, current methods for extracting buildings from very high-resolution (VHR) images often lose spatial texture information during down-sampling, leading to problems such as blurry image boundaries or object sticking. To solve these problems, we propose the multi-scale boundary-refined HRNet (MBR-HRNet) model, which preserves detailed boundary features for accurate building segmentation. The boundary refinement module (BRM) enhances the accuracy of small buildings and boundary extraction in the building segmentation network by integrating edge information learning into a separate branch. Additionally, the multi-scale context fusion module integrates feature information of different scales, enhancing the accuracy of the final predicted image. Experiments on the WHU and Massachusetts building datasets have shown that MBR-HRNet outperforms other advanced semantic segmentation models, achieving the highest intersection over union results of 91.31% and 70.97%, respectively.
(This article belongs to the Special Issue Remote Sensing and SAR for Building Monitoring)
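Edge branches like the BRM described above are typically supervised with boundary labels derived from the building mask itself, commonly via a morphological gradient. A minimal NumPy sketch of that derivation (an illustrative assumption, not the MBR-HRNet implementation):

```python
import numpy as np

def boundary_map(mask):
    """Extract a 1-pixel boundary from a binary building mask.

    A foreground pixel is on the boundary when at least one of its
    4-neighbours is background (a morphological gradient computed
    with plain array shifts, no SciPy required).
    """
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # A pixel is interior when all four neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior

mask = np.zeros((6, 6), dtype=int)
mask[1:5, 1:5] = 1                 # a 4x4 building footprint
edge = boundary_map(mask)
print(edge.sum())                  # → 12 boundary pixels around a 2x2 interior
```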

27 pages, 8276 KiB  
Article
Investigation of Temperature Effects into Long-Span Bridges via Hybrid Sensing and Supervised Regression Models
by Bahareh Behkamal, Alireza Entezami, Carlo De Michele and Ali Nadir Arslan
Remote Sens. 2023, 15(14), 3503; https://doi.org/10.3390/rs15143503 - 12 Jul 2023
Cited by 5 | Viewed by 908
Abstract
Temperature is an important environmental factor for long-span bridges because it induces thermal loads on structural components that cause considerable displacements, stresses, and structural damage. Hence, it is critical to acquire up-to-date information on the status, sustainability, and serviceability of long-span bridges under daily and seasonal temperature fluctuations. This paper intends to investigate the effects of temperature variability on structural displacements obtained from remote sensing and represent their relationship using supervised regression models. In contrast to other studies in this field, one of the contributions of this paper is to leverage hybrid sensing as a combination of contact and non-contact sensors for measuring temperature data and structural responses. Apart from temperature, other unmeasured environmental and operational conditions may affect structural displacements of long-span bridges separately or simultaneously. For this issue, this paper incorporates a correlation analysis between the measured predictor (temperature) and response (displacement) data using a linear correlation measure, the Pearson correlation coefficient, as well as nonlinear correlation measures, namely the Spearman and Kendall correlation coefficients and the maximal information criterion, to determine whether the measured environmental factor is dominant or other unmeasured conditions affect structural responses. Finally, three supervised regression techniques based on a linear regression model, Gaussian process regression, and support vector regression are considered to model the relationship between temperature and structural displacements and to conduct the prediction process. Temperature and limited displacement data related to three long-span bridges are used to demonstrate the results of this research. The aim of this research is to determine whether contact-based sensors installed in a bridge structure for measuring environmental and/or operational factors are sufficient, or whether further sensors and investigations are necessary.
(This article belongs to the Special Issue Remote Sensing and SAR for Building Monitoring)
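The correlation screening and linear modelling described in the abstract above can be sketched with NumPy alone on synthetic data: Pearson and Spearman coefficients plus a least-squares temperature-displacement fit. The Kendall coefficient, maximal information criterion, Gaussian process regression, and support vector regression would typically come from SciPy/scikit-learn and are omitted here; all data and names are illustrative, not the authors' measurements:

```python
import numpy as np

def pearson(x, y):
    """Linear correlation between predictor and response."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def spearman(x, y):
    """Rank (monotonic) correlation: Pearson on the ranks.

    Ties are ignored, which is fine for continuous sensor data.
    """
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

# Synthetic daily record: displacement grows roughly linearly with temperature.
rng = np.random.default_rng(0)
temp = np.linspace(-5.0, 30.0, 200)               # deg C
disp = 0.8 * temp + rng.normal(0.0, 1.0, 200)     # mm, with measurement noise

# Least-squares fit disp ~ a*temp + b — the linear member of the three
# regression families named in the abstract.
a, b = np.polyfit(temp, disp, 1)
print(round(pearson(temp, disp), 3), round(a, 2))
```

A Pearson coefficient near 1 here indicates temperature dominates the displacement; a much lower value would suggest unmeasured factors, which is the screening logic the paper applies before regression.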

20 pages, 5912 KiB  
Article
Progressive Context-Aware Aggregation Network Combining Multi-Scale and Multi-Level Dense Reconstruction for Building Change Detection
by Chuan Xu, Zhaoyi Ye, Liye Mei, Wei Yang, Yingying Hou, Sen Shen, Wei Ouyang and Zhiwei Ye
Remote Sens. 2023, 15(8), 1958; https://doi.org/10.3390/rs15081958 - 07 Apr 2023
Cited by 6 | Viewed by 1617
Abstract
Building change detection (BCD) using high-resolution remote sensing images aims to identify change areas during different time periods, which is a significant research focus in urbanization. Deep learning methods are capable of yielding impressive BCD results by correctly extracting change features. However, due to the heterogeneous appearance and large individual differences of buildings, mainstream methods cannot further extract and reconstruct hierarchical and rich feature information. To overcome this problem, we propose a progressive context-aware aggregation network combining multi-scale and multi-level dense reconstruction to identify detailed texture-rich building change information. We design the progressive context-aware aggregation module with a Siamese structure to capture both local and global features. Specifically, we first use deep convolution to obtain superficial local change information of buildings, and then utilize self-attention to further extract global features with high-level semantics based on the local features progressively, which ensures capability of the context awareness of our feature representations. Furthermore, our multi-scale and multi-level dense reconstruction module groups extracted feature information according to pre- and post-temporal sequences. By using multi-level dense reconstruction, the following groups are able to directly learn feature information from the previous groups, enhancing the network’s robustness to pseudo changes. The proposed method outperforms eight state-of-the-art methods on four common BCD datasets, including LEVIR-CD, SYSU-CD, WHU-CD, and S2Looking-CD, both in terms of visual comparison and objective evaluation metrics.
(This article belongs to the Special Issue Remote Sensing and SAR for Building Monitoring)
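For contrast with the learned Siamese approach described above, the classical change-detection baseline it improves on is a fixed per-pixel magnitude rule over bitemporal features. A minimal NumPy sketch (illustrative names, threshold, and toy data, not the paper's method):

```python
import numpy as np

def change_map(feat_t1, feat_t2, threshold=0.5):
    """Baseline bitemporal change detection.

    feat_t1, feat_t2: (H, W, C) feature (or band) stacks for the two
    dates. A pixel is flagged as changed when the Euclidean distance
    between its two feature vectors exceeds the threshold. The paper's
    Siamese network replaces this fixed rule with learned, context-
    aware features; this is only the classical magnitude baseline.
    """
    dist = np.linalg.norm(feat_t1 - feat_t2, axis=-1)
    return dist > threshold

t1 = np.zeros((4, 4, 3))
t2 = np.zeros((4, 4, 3))
t2[1:3, 1:3] = 1.0          # a new 2x2 "building" appears
cm = change_map(t1, t2)
print(cm.sum())             # → 4 changed pixels
```

Fixed magnitude rules like this are exactly what produces the "pseudo changes" (illumination, season, registration noise) that the paper's dense reconstruction is designed to suppress.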
