
Applications of Remote Sensing Imagery for Urban Areas

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 September 2021) | Viewed by 51268

Special Issue Editors

School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
Interests: satellite image processing; satellite image analysis; remote sensing; image registration; image reconstruction; image restoration; cloud cover; missing data analysis; image mosaic; image fusion; image inpainting; multitemporal analysis
Special Issues, Collections and Topics in MDPI journals
Huaiyin Institute of Technology, Huai'an 223003, China
Interests: point cloud data processing; remote sensing image processing; object detection; object segmentation; deep learning

Guest Editor
1. School of Resource and Environmental Sciences, Wuhan University, No. 129 Luoyu Road, Wuhan 430079, China
2. Department of Geography and Planning, University of Toronto, No. 100 St. George St., Toronto, ON M5S 3G3, Canada
Interests: remote sensing; urban vegetation; vegetation index; spatial-temporal reconstruction; ecosystem carbon cycle; climate change

Guest Editor
School of Geography and Tourism, Shaanxi Normal University, No.620 West Chang'an Avenue, Xi'an 710119, Shaanxi, China
Interests: remote sensing; image processing; geometric correction; image stitching

Special Issue Information

Dear Colleagues,

Urban areas are the centers of human settlement, with intensive anthropic activities and dense built-up infrastructure. They have undergone, and are still undergoing, great evolution in population shift, land-use change, high-rise construction, industrial production, and so on. Urbanization-induced environmental pollution, climate change, and ecosystem degradation are research hotspots that not only relate closely to human lives but also constitute a main force of global change. In addition, urban planning, public health management, and human security policy are crucial research subjects worldwide that are significant to sustainable development.

Remote sensing imagery provides essential information for these applications in urban areas. In particular, continually improving spatial resolution makes it possible to describe the complex urban geographical system. Data from different platforms (drone, airborne, and spaceborne) and different sensors (optical, thermal, SAR, and LiDAR) have distinct characteristics and spatiotemporal resolutions, and are therefore applicable to numerous natural and anthropogenic issues in urban areas at different scales. Furthermore, developments in big data mining, machine learning, and cloud computing also advance the applications of remote sensing data and present new opportunities and challenges.

As a result, this special issue aims to present recent thematic outcomes and advances in urban applications based on remote sensing imagery. Topics of interest include, but are not limited to:

  • Phenomena and evolution of urban ecosystem and environments: urban climate, atmosphere, soil, water bodies, vegetation, thermal environment;
  • Main applications on urban monitoring: urban sprawl, urban planning, spatial configuration, anthropic activities, public health, and emergency management;
  • Urban visualization and 3D/4D urban modeling from remote sensing datasets;
  • Urban classification and object-analysis, including the identification of damaged infrastructures, land subsidence, pollution, and garbage;
  • Applications of new generations of sensors and high-resolution remote sensing data in urban areas;
  • Urban remote sensing data-processing: image registration, mosaic, data fusion, quality improvements, machine learning, cloud computing, and data mining.

The Special Issue “Applications of Remote Sensing Imagery for Urban Areas” is jointly organized by the journals Remote Sensing and Earth. Contributors are required to check the websites below and follow the specific instructions for authors:
https://www.mdpi.com/journal/remotesensing/instructions
https://www.mdpi.com/journal/earth/instructions

The companion special issue can be found at: https://www.mdpi.com/journal/earth/special_issues/urban_image. You may choose to publish your paper in Earth, which offers substantial discounts or full waivers for papers based on peer-review results.

Dr. Xinghua Li
Dr. Yongtao Yu
Dr. Xiaobin Guan
Ms. Ruitao Feng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • urban climate
  • urban pollution
  • urban thermal environment
  • urban ecosystem
  • land-use change
  • urban object-analysis
  • urban planning
  • urban spatial configuration

Published Papers (18 papers)


Editorial


4 pages, 170 KiB  
Editorial
Overview of the Special Issue on Applications of Remote Sensing Imagery for Urban Areas
by Xinghua Li, Yongtao Yu, Xiaobin Guan and Ruitao Feng
Remote Sens. 2022, 14(5), 1204; https://doi.org/10.3390/rs14051204 - 01 Mar 2022
Cited by 1 | Viewed by 1663
Abstract
Urban areas are the center of human settlement with intensive anthropic activities and dense built-up infrastructures, undergoing significant evolution in population shift, land-use change, industrial production, and so on [...] Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

Research


19 pages, 5290 KiB  
Article
Attention-Guided Multispectral and Panchromatic Image Classification
by Cheng Shi, Yenan Dang, Li Fang, Zhiyong Lv and Huifang Shen
Remote Sens. 2021, 13(23), 4823; https://doi.org/10.3390/rs13234823 - 27 Nov 2021
Cited by 4 | Viewed by 1789
Abstract
Multi-sensor images can provide supplementary information, usually leading to better performance in classification tasks. However, general deep neural network-based multi-sensor classification methods learn each sensor image separately, followed by stacked concatenation for feature fusion. This approach requires a large time cost for network training and may cause insufficient feature fusion. Considering efficient multi-sensor feature extraction and fusion with a lightweight network, this paper proposes an attention-guided classification method (AGCNet), especially for multispectral (MS) and panchromatic (PAN) image classification. In the proposed method, a share-split network (SSNet) including a shared branch and multiple split branches performs feature extraction for each sensor image, where the shared branch learns basis features of MS and PAN images with fewer learnable parameters, and the split branch extracts the privileged features of each sensor image via multiple task-specific attention units. Furthermore, a selective classification network (SCNet) with a selective kernel unit is used for adaptive feature fusion. The proposed AGCNet can be trained in an end-to-end fashion without manual intervention. The experimental results are reported on four MS and PAN datasets and compared with state-of-the-art methods. The classification maps and accuracies show the superiority of the proposed AGCNet model. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

19 pages, 4734 KiB  
Article
Improving Urban Land Cover Classification in Cloud-Prone Areas with Polarimetric SAR Images
by Jing Ling, Hongsheng Zhang and Yinyi Lin
Remote Sens. 2021, 13(22), 4708; https://doi.org/10.3390/rs13224708 - 21 Nov 2021
Cited by 16 | Viewed by 2439
Abstract
Urban land cover (ULC) serves as fundamental environmental information for urban studies, while accurate and timely ULC mapping remains challenging due to cloud contamination in tropical and subtropical areas. Synthetic aperture radar (SAR) has excellent all-weather working capability to overcome the challenge, while optical–SAR data fusion is often required due to the limited land surface information provided by SAR. However, the mechanism by which SAR can compensate for optical images, given the occurrence of clouds, in order to improve the ULC mapping, remains unexplored. To address the issue, this study proposes a framework, through various sampling strategies and three typical supervised classification methods, to quantify the ULC classification accuracy using optical and SAR data with various cloud levels. The land cover confusions were investigated in detail to understand the role of SAR in distinguishing land cover under different types of cloud coverage. Several interesting experimental results were found. First, 50% cloud coverage over the optical images decreased the overall accuracy by 10–20%, while the incorporation of SAR images was able to improve the overall accuracy by approximately 4%, by increasing the recognition of cloud-covered ULC information, particularly the water bodies. Second, if all the training samples were not contaminated by clouds, the cloud coverage had a higher impact with a reduction of 35% in the overall accuracy, whereas the incorporation of SAR data contributed to an increase of approximately 5%. Third, the thickness of clouds also brought about different impacts on the results, with an approximately 10% higher reduction from thick clouds compared with that from thin clouds, indicating that certain spectral information might still be available in the areas covered by thin clouds.
These findings provide useful references for the accurate monitoring of ULC over cloud-prone areas, such as tropical and subtropical cities, where cloud contamination is often unavoidable. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

21 pages, 4323 KiB  
Article
MRA-SNet: Siamese Networks of Multiscale Residual and Attention for Change Detection in High-Resolution Remote Sensing Images
by Xin Yang, Lei Hu, Yongmei Zhang and Yunqing Li
Remote Sens. 2021, 13(22), 4528; https://doi.org/10.3390/rs13224528 - 11 Nov 2021
Cited by 17 | Viewed by 2740
Abstract
Remote sensing image change detection (CD) is an important task in remote sensing image analysis and is essential for an accurate understanding of changes in the Earth’s surface. The technology of deep learning (DL) is becoming increasingly popular in solving CD tasks for remote sensing images. Most existing CD methods based on DL tend to use ordinary convolutional blocks to extract and compare remote sensing image features, which cannot fully extract the rich features of high-resolution (HR) remote sensing images. In addition, most of the existing methods lack robustness to pseudochange information processing. To overcome the above problems, in this article, we propose a new method, namely MRA-SNet, for CD in remote sensing images. Utilizing the UNet network as the basic network, the method uses the Siamese network to extract the features of bitemporal images in the encoder separately and perform the difference connection to better generate difference maps. Meanwhile, we replace the ordinary convolution blocks with Multi-Res blocks to extract spatial and spectral features of different scales in remote sensing images. Residual connections are used to extract additional detailed features. To better highlight the change region features and suppress the irrelevant region features, we introduced the Attention Gates module before the skip connection between the encoder and the decoder. Experimental results on a public dataset of remote sensing image CD show that our proposed method outperforms other state-of-the-art (SOTA) CD methods in terms of evaluation metrics and performance. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

22 pages, 36167 KiB  
Article
An Urban Flooding Index for Unsupervised Inundated Urban Area Detection Using Sentinel-1 Polarimetric SAR Images
by Hui Zhang, Zhixin Qi, Xia Li, Yimin Chen, Xianwei Wang and Yingqing He
Remote Sens. 2021, 13(22), 4511; https://doi.org/10.3390/rs13224511 - 10 Nov 2021
Cited by 10 | Viewed by 3235
Abstract
Urban flooding causes a variation in radar return from urban areas. However, such variation has not been thoroughly examined for different polarizations because of the lack of polarimetric SAR (PolSAR) images and ground truth data simultaneously collected over flooded urban areas. This condition hinders not only the understanding of the effect mechanism of urban flooding under different polarizations but also the development of advanced methods that could improve the accuracy of inundated urban area detection. Using Sentinel-1 PolSAR and Jilin-1 high-resolution optical images acquired on the same day over flooded urban areas in Golestan, Iran, this study investigated the characteristics and mechanisms of the radar return changes induced by urban flooding under different polarizations and proposed a new method for unsupervised inundated urban area detection. This study found that urban flooding caused a backscattering coefficient increase (BCI) and interferometric coherence decrease (ICD) in VV and VH polarizations. Furthermore, VV polarization was more sensitive to the BCI and ICD than VH polarization. In light of these findings, the ratio between the BCI and ICD was defined as an urban flooding index (UFI), and the UFI in VV polarization was used for the unsupervised detection of flooded urban areas. The overall accuracy, detection accuracy, and false alarm rate attained by the UFI-based method were 96.93%, 91.09%, and 0.95%, respectively. Compared with the conventional unsupervised method based on the ICD and that based on the fusion of backscattering coefficients and interferometric coherences (FBI), the UFI-based method achieved higher overall accuracy. The performance of VV was evaluated and compared to that of VH in the flooded urban area detection using the UFI-, ICD-, and FBI-based methods, respectively. VV polarization produced higher overall accuracy than VH polarization in all the methods, especially in the UFI-based method. 
By using VV instead of VH polarization, the UFI-based method improved the detection accuracy by 38.16%. These results indicated that the UFI-based method improved flooded urban area detection by synergizing the BCI and ICD in VV polarization. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

17 pages, 9120 KiB  
Article
Opposite Spatiotemporal Patterns for Surface Urban Heat Island of Two “Stove Cities” in China: Wuhan and Nanchang
by Yao Shen, Chao Zeng, Qing Cheng and Huanfeng Shen
Remote Sens. 2021, 13(21), 4447; https://doi.org/10.3390/rs13214447 - 05 Nov 2021
Cited by 9 | Viewed by 1872
Abstract
Under the circumstances of global climate change, the evolution of thermal environments has attracted increasing attention, and the surface urban heat island (SUHI) is one of the major concerns. In this research, we focused on the spatiotemporal patterns of two “stove cities” in China, i.e., Wuhan and Nanchang, based on long-term (1984–2018) and fine-scale (Landsat-like) series of satellite images. The results showed opposite spatiotemporal patterns for the two cities, even though both are widely regarded as among the hottest cities. No matter which definition of surface urban heat island intensity (SUHII) was selected, Nanchang presented a higher and more fluctuating SUHII than Wuhan, with a relatively higher land surface temperature (LST) in the urban area and a lower LST in the rural area of Nanchang, especially in recent years. For the spatial pattern, the highest LST center (i.e., the SUHI) has expanded obviously over the past 35 years in Nanchang. For Wuhan, the LST in the SUHI first increased relatively and then decreased. For the temporal pattern, an increasing trend of LST could be detected in Nanchang, whereas the LST in Wuhan presented a slightly decreasing trend. Moreover, the SUHII in Nanchang decreased at first and then increased, while Wuhan showed a slight increase at first, followed by a decrease. In addition, different SUHII definitions did not affect the spatial pattern or temporal trend of the SUHI, but only controlled the exact SUHII value, especially in years with extreme weather. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

22 pages, 23424 KiB  
Article
Rotation-Invariant and Relation-Aware Cross-Domain Adaptation Object Detection Network for Optical Remote Sensing Images
by Ying Chen, Qi Liu, Teng Wang, Bin Wang and Xiaoliang Meng
Remote Sens. 2021, 13(21), 4386; https://doi.org/10.3390/rs13214386 - 30 Oct 2021
Cited by 10 | Viewed by 2417
Abstract
In recent years, object detection has shown excellent results on a large number of annotated data, but when there is a discrepancy between the annotated data and the real test data, the performance of the trained object detection model is often degraded when it is directly transferred to the real test dataset. Compared with natural images, remote sensing images have great differences in appearance and quality. Traditional methods need to re-label all image data before interpretation, which will consume a lot of manpower and time. Therefore, it is of practical significance to study the Cross-Domain Adaptation Object Detection (CDAOD) of remote sensing images. To solve the above problems, our paper proposes a Rotation-Invariant and Relation-Aware (RIRA) CDAOD network. We trained the network at the image-level and the prototype-level based on a relation aware graph to align the feature distribution and added the rotation-invariant regularizer to deal with the rotation diversity. The Faster R-CNN network was adopted as the backbone framework of the network. We conducted experiments on two typical remote sensing building detection datasets, and set three domain adaptation scenarios: WHU 2012 → WHU 2016, Inria (Chicago) → Inria (Austin), and WHU 2012 → Inria (Austin). The results show that our method can effectively improve the detection effect in the target domain, and outperform competing methods by obtaining optimal results in all three scenarios. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

26 pages, 3765 KiB  
Article
Analysis of Temporal and Spatial Characteristics of Urban Expansion in Xiaonan District from 1990 to 2020 Using Time Series Landsat Imagery
by Yan Liu, Renguang Zuo and Yanni Dong
Remote Sens. 2021, 13(21), 4299; https://doi.org/10.3390/rs13214299 - 26 Oct 2021
Cited by 14 | Viewed by 2088
Abstract
With the rapid development of the global economy and technology, urbanization has accelerated. It is important to characterize urban expansion and determine its driving forces. In this study, we used the Xiaonan District in Hubei Province, China, as an example to map and quantify the spatiotemporal dynamics of urban expansion from the two perspectives of built-up area and urban land in 1990–2020 by using remote sensing images. The location of rivers was found to be a primary limiting factor for the spatial patterns and expansion of the built-up area. The transfer of the city center and the main direction of expansion generally corresponded well to the topography, policies, and development strategies. The built-up area expanded faster than the urban population in 1995–2020, which caused a waste of land resources. The results showed that urban expansion first decreased and then increased during the research period. The increase in the proportion of the secondary industry was the main driving force of urban expansion. Based on the characteristics of urban expansion in the past three decades, we conclude that the urban land of Xiaonan District will expand quickly in the future and will occupy vast agricultural land. The government must deploy control measures to balance the benefits and costs of urban expansion. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

16 pages, 10262 KiB  
Article
Multi-Feature Enhanced Building Change Detection Based on Semantic Information Guidance
by Junkang Xue, Hao Xu, Hui Yang, Biao Wang, Penghai Wu, Jaewan Choi, Lixiao Cai and Yanlan Wu
Remote Sens. 2021, 13(20), 4171; https://doi.org/10.3390/rs13204171 - 18 Oct 2021
Cited by 12 | Viewed by 2071
Abstract
Building change detection has always been an important research focus in production and urbanization. In recent years, deep learning methods have demonstrated a powerful ability in the field of detecting remote sensing changes. However, due to the heterogeneity of remote sensing and the characteristics of buildings, the current methods do not present an effective means to perceive building changes or the ability to fuse multi-temporal remote sensing features, which leads to fragmented and incomplete results. In this article, we propose a multi-branched network structure to fuse the semantic information of the building changes at different levels. In this model, two accessory branches were used to guide the buildings’ semantic information under different time sequences, and the main branch can merge the change information. In addition, we also designed a feature enhancement layer to further strengthen the integration of the main and accessory branch information. We conducted ablation experiments on the above optimization process, and compared the proposed MDEFNET with a typical deep learning model and recent deep learning change detection methods. Experimentation with the WHU Building Change Detection Dataset showed that the method in this paper obtained accuracies of 0.8526, 0.9418, and 0.9204 in Intersection over Union (IoU), Recall, and F1 Score, respectively, which could assess building change areas with complete boundaries and accurate results. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

16 pages, 2627 KiB  
Article
Expansion and Evolution of a Typical Resource-Based Mining City in Transition Using the Google Earth Engine: A Case Study of Datong, China
by Minghui Xue, Xiaoxiang Zhang, Xuan Sun, Tao Sun and Yanfei Yang
Remote Sens. 2021, 13(20), 4045; https://doi.org/10.3390/rs13204045 - 10 Oct 2021
Cited by 10 | Viewed by 2038
Abstract
China’s resource-based cities have made tremendous contributions to national and local economic growth and urban development over the last seven decades. Recently, such cities have been in transition from resource-centered development towards human-oriented urbanization to meet the requirements of long-term sustainability for the natural environment and human society. A good understanding of urban expansion and evolution as a consequence of urbanization has important implications for future urban and regional planning. Using a series of remote sensing (RS) images and geographical information system (GIS)-based spatial analyses, this research explores how a typical resource-based mining city, Datong, has expanded and evolved over the last two decades (2000–2018), with a reflection on the role of urban planning and development policies in driving the spatial transformation of Datong. The RS images were provided and processed by the Google Earth Engine (GEE) platform. Spatial cluster analysis approaches were employed to examine the spatial patterns of urban expansion. The results indicate that the area of urban construction land increased by 132.6% during the study period, mainly along with the Chengqu District, the Mining Area, and in the southeast of the Nanjiao District, where most new towns are located. Reflection on the factors that influence urban expansion shows that terrain, urban planning policies, and social economy are driving Datong’s urban development. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

27 pages, 4399 KiB  
Article
Evaluating the Ability to Use Contextual Features Derived from Multi-Scale Satellite Imagery to Map Spatial Patterns of Urban Attributes and Population Distributions
by Steven Chao, Ryan Engstrom, Michael Mann and Adane Bedada
Remote Sens. 2021, 13(19), 3962; https://doi.org/10.3390/rs13193962 - 03 Oct 2021
Cited by 5 | Viewed by 1977
Abstract
With an increasing global population, accurate and timely population counts are essential for urban planning and disaster management. Previous research using contextual features, based mainly on very-high-spatial-resolution imagery (<2 m spatial resolution) at subnational to city scales, has found strong correlations with population and poverty. Contextual features can be defined as the statistical quantification of edge patterns, pixel groups, gaps, textures, and the raw spectral signatures calculated over groups of pixels or neighborhoods. While they correlated with population and poverty, which components of the human-modified landscape were captured by the contextual features has not been investigated. Additionally, previous research has focused on more costly, less frequently acquired very-high-spatial-resolution imagery. Therefore, contextual features from both very-high-spatial-resolution imagery and lower-spatial-resolution Sentinel-2 (10 m pixels) imagery in Sri Lanka, Belize, and Accra, Ghana were calculated, and those outputs were correlated with OpenStreetMap building and road metrics. These relationships were compared to determine what components of the human-modified landscape the features capture, and how spatial resolution and location impact the predictive power of these relationships. The results suggest that contextual features can map urban attributes well, with out-of-sample R2 values up to 93%. Moreover, the degradation of spatial resolution did not significantly reduce the results, and for some urban attributes, the results actually improved. Based on these results, the ability of the lower-resolution Sentinel-2 data to predict the population density of the smallest census units available was then assessed. The findings indicate that Sentinel-2 contextual features explained up to 84% of the out-of-sample variation in population density. Full article
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)

22 pages, 15679 KiB  
Article
Camouflaged Target Detection Based on Snapshot Multispectral Imaging
by Ying Shen, Jie Li, Wenfu Lin, Liqiong Chen, Feng Huang and Shu Wang
Remote Sens. 2021, 13(19), 3949; https://doi.org/10.3390/rs13193949 - 02 Oct 2021
Cited by 12 | Viewed by 3478
Abstract
The spectral information contained in hyperspectral images (HSI) distinguishes the intrinsic properties of a target from the background, which is widely exploited in remote sensing. However, the low imaging speed and high data redundancy caused by the high spectral resolution of imaging spectrometers limit their application in scenarios with real-time requirements. In this work, we achieve precise detection of camouflaged targets in urban-related scenes based on snapshot multispectral imaging technology and band selection methods. Specifically, the camouflaged target detection algorithm combines the constrained energy minimization (CEM) algorithm with an improved maximum between-class variance (OTSU) algorithm (t-OTSU), proposed to obtain initial target detection results and adaptively segment the target region. Moreover, an object region extraction (ORE) algorithm is proposed to obtain a complete target contour, improving the target detection capability of multispectral images (MSI). The experimental results show that the proposed algorithm can detect different camouflaged targets using only four bands. The detection accuracy is above 99%, and the false alarm rate is below 0.2%. The research achieves effective detection of camouflaged targets and has the potential to provide a new means for real-time multispectral sensing in complex scenes.
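The two core ingredients named in the abstract, CEM detection followed by Otsu-style thresholding, can be sketched as follows. This is the textbook CEM filter, w = R⁻¹d / (dᵀR⁻¹d), plus plain maximum between-class variance thresholding; the paper's t-OTSU refinement and ORE contour step are not reproduced here.

```python
import numpy as np

def cem_detector(pixels, target):
    """Constrained energy minimization: w = R^-1 d / (d^T R^-1 d),
    where R is the sample correlation matrix of the scene pixels
    (bands x N) and d the target spectral signature. Returns one
    detection score per pixel, with w @ d constrained to 1."""
    R = pixels @ pixels.T / pixels.shape[1]
    Rinv = np.linalg.inv(R)
    w = Rinv @ target / (target @ Rinv @ target)
    return w @ pixels

def otsu_threshold(scores, bins=256):
    """Maximum between-class variance (Otsu) threshold on 1-D scores."""
    hist, edges = np.histogram(scores, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    mu = np.cumsum(p * centers)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]
```

With a four-band snapshot sensor, `pixels` would be a 4 × N matrix and `target` the camouflage signature selected by the band-selection step.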
22 pages, 9802 KiB  
Article
A Stacking Ensemble Deep Learning Model for Building Extraction from Remote Sensing Images
by Duanguang Cao, Hanfa Xing, Man Sing Wong, Mei-Po Kwan, Huaqiao Xing and Yuan Meng
Remote Sens. 2021, 13(19), 3898; https://doi.org/10.3390/rs13193898 - 29 Sep 2021
Cited by 21 | Viewed by 3141
Abstract
Automatically extracting buildings from remote sensing images with deep learning is of great significance to urban planning, disaster prevention, change detection, and other applications. Various deep learning models have been proposed to extract building information, each showing strengths and weaknesses in capturing the complex spectral and spatial characteristics of buildings in remote sensing images. To integrate the strengths of individual models and obtain fine-scale spatial and spectral building information, this study proposes a stacking ensemble deep learning model. First, an optimization method for the base models' predictions is proposed based on fully connected conditional random fields (CRFs). On this basis, a stacking ensemble model (SENet) based on a sparse autoencoder integrating U-Net, SegNet, and FCN-8s is proposed to combine the features of the optimized base-model predictions. Using several cities in Hebei Province, China as a case study, a building dataset containing attribute labels is established to assess the performance of the proposed model. The proposed SENet is compared with the three individual models (U-Net, SegNet, and FCN-8s), and the results show that the accuracy of SENet is 0.954, approximately 6.7%, 6.1%, and 9.8% higher than the U-Net, SegNet, and FCN-8s models, respectively. The identification of building features, including colors, sizes, shapes, and shadows, is also evaluated, showing that the accuracy, recall, F1 score, and intersection over union (IoU) of the SENet model are higher than those of the three individual models. This suggests that the proposed ensemble model can effectively depict the different features of buildings and provides an alternative approach to building extraction with higher accuracy.
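The stacking idea, fitting a meta-learner over the per-pixel probability maps of several base segmentation models, can be sketched with a linear meta-learner. This is deliberately simpler than the paper's sparse-autoencoder meta-model and CRF refinement; `stack_predictions` and `ensemble_map` are illustrative names.

```python
import numpy as np

def stack_predictions(base_probs, labels):
    """Fit linear meta-weights over base-model probability maps
    (a least-squares stand-in for the paper's sparse-autoencoder
    meta-learner). base_probs: (n_models, n_pixels); labels: (n_pixels,)
    with values in {0, 1}."""
    A = np.vstack([base_probs, np.ones(base_probs.shape[1])]).T
    w, *_ = np.linalg.lstsq(A, labels, rcond=None)
    return w

def ensemble_map(base_probs, w):
    """Fuse the base probability maps with the learned weights,
    clipped back to a valid probability range."""
    A = np.vstack([base_probs, np.ones(base_probs.shape[1])]).T
    return np.clip(A @ w, 0.0, 1.0)
```

In practice the meta-learner is trained on held-out pixels so the ensemble weights are not fitted to the base models' training noise.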
22 pages, 9230 KiB  
Article
Urban Building Extraction and Modeling Using GF-7 DLC and MUX Images
by Heng Luo, Biao He, Renzhong Guo, Weixi Wang, Xi Kuai, Bilu Xia, Yuan Wan, Ding Ma and Linfu Xie
Remote Sens. 2021, 13(17), 3414; https://doi.org/10.3390/rs13173414 - 27 Aug 2021
Cited by 10 | Viewed by 2518
Abstract
Urban modeling and visualization are highly useful in the development of smart cities. Buildings are the most prominent features in the urban environment and are necessary for urban decision support; thus, buildings should be modeled effectively and efficiently in three dimensions (3D). In this study, with the help of Gaofen-7 (GF-7) high-resolution stereo mapping satellite double-line camera (DLC) images and multispectral (MUX) images, building boundaries are segmented via a multilevel features fusion network (MFFN). A digital surface model (DSM) is generated to obtain the elevation of buildings. The building vectors with height information are processed in a 3D modeling tool to create white building models. The building models, DSM, and multispectral fused image are then imported into Unreal Engine 4 (UE4) to complete the urban scene, vividly rendered with environmental effects for urban visualization. The results of this study show that a high accuracy of 95.29% is achieved in building extraction using the proposed method. Based on the extracted building vectors and the elevation information from the DSM, 3D building models can be efficiently created at Level of Details 1 (LOD1). Finally, the urban scene is produced for realistic 3D visualization. This study shows that high-resolution stereo mapping satellite images are useful for 3D modeling of urban buildings and can support the generation and visualization of urban scenes over a large area for different applications.
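The key numeric step for LOD1 extrusion, deriving a single building height from the DSM and the segmented footprint, can be sketched as roof elevation minus local ground elevation. The percentile choice and ring-based ground estimate below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def dilate(mask):
    """4-neighbour binary dilation (avoids a SciPy dependency)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def building_height(dsm, footprint_mask, ground_percentile=5):
    """LOD1-style height estimate: roof elevation (median DSM inside
    the footprint) minus local ground elevation (a low percentile of
    the DSM in a one-pixel ring just outside the footprint)."""
    roof = np.median(dsm[footprint_mask])
    ring = dilate(footprint_mask) & ~footprint_mask
    ground = np.percentile(dsm[ring], ground_percentile)
    return roof - ground
```

Each footprint polygon, extruded by this height, yields the flat-roofed "white model" that UE4 then renders.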
17 pages, 3663 KiB  
Article
Estimation and Analysis of the Nighttime PM2.5 Concentration Based on LJ1-01 Images: A Case Study in the Pearl River Delta Urban Agglomeration of China
by Yanjun Wang, Mengjie Wang, Bo Huang, Shaochun Li and Yunhao Lin
Remote Sens. 2021, 13(17), 3405; https://doi.org/10.3390/rs13173405 - 27 Aug 2021
Cited by 15 | Viewed by 2495
Abstract
At present, fine particulate matter (PM2.5) has become a major air pollutant that seriously harms the ecological environment and human health. In the face of increasingly serious PM2.5 pollution, feasible large-scale, spatially continuous PM2.5 concentration monitoring offers great practical value and potential. Based on radiative transfer theory, a correlation model between nighttime light radiance and ground PM2.5 concentration is established. A multiple linear regression model is proposed, with light radiance, meteorological elements (temperature, relative humidity, and wind speed), and terrain elements (elevation, slope, and terrain relief) as variables, to estimate the ground PM2.5 concentration at 56 air quality monitoring stations in the Pearl River Delta (PRD) urban agglomeration from 2018 to 2019, and the estimation accuracy of the model is tested. The results indicate that the R2 value between the model-estimated and measured values is 0.82 in the PRD region, and the model attains a high estimation accuracy. Moreover, the estimation accuracy of the model exhibits notable temporal and spatial heterogeneity. This study, to a certain extent, mitigates the shortcomings of traditional ground PM2.5 monitoring methods, namely high cost and low spatial resolution, and complements satellite remote sensing technology. It extends the use of LJ1-01 nighttime light remote sensing images to estimate nighttime PM2.5 concentrations, which has practical value and potential for nighttime ground PM2.5 concentration inversion.
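The model form stated above, a multiple linear regression of station PM2.5 on nighttime-light radiance plus meteorological and terrain variables, can be sketched directly. The function name and the synthetic calibration are illustrative; the paper's fitted coefficients are not reproduced here.

```python
import numpy as np

def fit_pm25_model(radiance, temp, rh, wind, elev, slope, relief, pm25):
    """Multiple linear regression of station PM2.5 on nighttime-light
    radiance, meteorological elements (temperature, relative humidity,
    wind speed), and terrain elements (elevation, slope, relief).
    Returns the coefficient vector (last entry = intercept) and R^2."""
    X = np.column_stack([radiance, temp, rh, wind, elev, slope, relief,
                         np.ones(len(pm25))])
    coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
    fitted = X @ coef
    r2 = 1 - ((pm25 - fitted) ** 2).sum() / ((pm25 - pm25.mean()) ** 2).sum()
    return coef, r2
```

Once calibrated at the 56 stations, the same linear form is evaluated per pixel of the LJ1-01 radiance image to produce a spatially continuous nighttime PM2.5 map.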
18 pages, 9293 KiB  
Article
Cloud-to-Ground Lightning Response to Aerosol over Air-Polluted Urban Areas in China
by Haichao Wang, Zheng Shi, Xuejuan Wang, Yongbo Tan, Honglei Wang, Luying Li and Xiaotong Lin
Remote Sens. 2021, 13(13), 2600; https://doi.org/10.3390/rs13132600 - 02 Jul 2021
Cited by 5 | Viewed by 2094
Abstract
The effect of aerosols on lightning has been noted in many studies, but much less is known about the long-term impacts in air-polluted urban areas of China. In this paper, 9-year data sets of cloud-to-ground (CG) lightning, aerosol optical depth (AOD), convective available potential energy (CAPE), and surface relative humidity (SRH) from ground-based observation and model reanalysis are analyzed over three air-polluted urban areas of China. Decreasing trends are found in the interannual variations of CG lightning density (unit: flashes km−2 day−1) and total AOD over the three study regions during the study period. An apparent enhancement in CG lightning density is found under conditions with high AOD in the seasonal cycles over the three study regions. The joint effects of total AOD and thermodynamic factors (CAPE and SRH) on CG lightning density and the percentage of positive CG flashes (+CG flashes/total CG flashes × 100; PPCG; unit: %) are further analyzed. Results show that CG lightning density is higher under conditions with high total AOD, while PPCG is lower under conditions with low total AOD. CG lightning density is more sensitive to CAPE under conditions with high total AOD.
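The PPCG definition given in the abstract, and the high/low-AOD stratification used throughout the analysis, amount to two small computations. The quantile split below is an illustrative assumption; the paper may stratify AOD differently.

```python
import numpy as np

def ppcg(pos_cg, total_cg):
    """Percentage of positive CG flashes: +CG / total CG x 100."""
    return pos_cg / total_cg * 100.0

def conditional_means(values, aod, q=0.5):
    """Mean of `values` (e.g. CG lightning density or PPCG) on samples
    with AOD above vs. at-or-below its q-quantile, i.e. a simple
    high-AOD / low-AOD stratification."""
    thr = np.quantile(aod, q)
    return values[aod > thr].mean(), values[aod <= thr].mean()
```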
27 pages, 26254 KiB  
Article
Self-Attention in Reconstruction Bias U-Net for Semantic Segmentation of Building Rooftops in Optical Remote Sensing Images
by Ziyi Chen, Dilong Li, Wentao Fan, Haiyan Guan, Cheng Wang and Jonathan Li
Remote Sens. 2021, 13(13), 2524; https://doi.org/10.3390/rs13132524 - 28 Jun 2021
Cited by 55 | Viewed by 7063
Abstract
Deep learning models have brought great breakthroughs in building extraction from high-resolution optical remote sensing images. In recent research, the self-attention module has taken many fields by storm, including building extraction. However, most current deep learning models equipped with the self-attention module still lose sight of the effectiveness of reconstruction bias. By tipping the balance between the encoding and decoding abilities, i.e., making the decoding network much more complex than the encoding network, the semantic segmentation ability is reinforced. To remedy the research weakness in combining self-attention and reconstruction-bias modules for building extraction, this paper presents a U-Net architecture that combines the two. In the encoding part, a self-attention module is added to learn attention weights over the inputs, so that the network pays more attention to positions where salient regions may occur. In the decoding part, multiple large convolutional up-sampling operations are used to increase the reconstruction ability. We test our model on two openly available datasets: the WHU and Massachusetts Building datasets. We achieve IoU scores of 89.39% and 73.49% on the WHU and Massachusetts Building datasets, respectively. Compared with several recent well-known semantic segmentation methods and representative building extraction methods, our method's results are satisfactory.
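Two pieces of the abstract lend themselves to a short sketch: the generic scaled dot-product self-attention that the encoder module builds on, and the IoU metric reported for both benchmarks. The attention function below is the standard formulation over flattened spatial positions, not the paper's exact module; the weight matrices `Wq`, `Wk`, `Wv` are assumed inputs.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over flattened feature
    positions: softmax(Q K^T / sqrt(d)) V, with x of shape (N, C)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # rows sum to 1
    return attn @ v

def iou(pred, target):
    """Intersection over union for binary masks, the metric reported
    on the WHU and Massachusetts Building datasets."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = (pred | target).sum()
    return (pred & target).sum() / union if union else 1.0
```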
18 pages, 9963 KiB  
Article
Region-by-Region Registration Combining Feature-Based and Optical Flow Methods for Remote Sensing Images
by Ruitao Feng, Qingyun Du, Huanfeng Shen and Xinghua Li
Remote Sens. 2021, 13(8), 1475; https://doi.org/10.3390/rs13081475 - 11 Apr 2021
Cited by 13 | Viewed by 2496
Abstract
While geometric registration has been studied in the remote sensing community for many decades, methods that register images while allowing for locally inconsistent deformation caused by topographic relief remain rare. Toward this end, a region-by-region registration combining feature-based and optical flow methods is proposed. The proposed framework is built on the calculation of pixel-wise displacements and the mosaicking of displacement fields. Concretely, the initial displacement fields for a pair of images are calculated by the block-weighted projective model and Brox optical flow estimation in the flat- and complex-terrain regions, respectively. Abnormal displacements, resulting from the sensitivity of optical flow to land use or land cover changes, are adaptively detected and corrected by a weighted Taylor expansion. Subsequently, the displacement fields are mosaicked seamlessly for the following steps. Experimental results show that the proposed method outperforms comparative algorithms, achieving the highest registration accuracy both qualitatively and quantitatively.
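The framework's two central operations, merging the per-region displacement fields and resampling an image through the merged field, can be sketched as follows. The hard-mask mosaic here is a simplification of the paper's seamless mosaicking, and the nearest-neighbour warp stands in for a higher-order resampler; both function names are illustrative.

```python
import numpy as np

def mosaic_displacement(flat_field, complex_field, complex_mask):
    """Mosaic two per-pixel displacement fields of shape (H, W, 2):
    the block-weighted projective field in flat-terrain regions and
    the optical-flow field where `complex_mask` marks complex terrain
    (a hard-mask simplification of the paper's seamless mosaicking)."""
    out = flat_field.copy()
    out[complex_mask] = complex_field[complex_mask]
    return out

def warp_nearest(img, disp):
    """Apply a per-pixel displacement field (dy, dx) to a 2-D image by
    nearest-neighbour resampling, clipping at the image border."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ys = np.clip(np.rint(yy + disp[..., 0]).astype(int), 0, h - 1)
    xs = np.clip(np.rint(xx + disp[..., 1]).astype(int), 0, w - 1)
    return img[ys, xs]
```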