3D City Modelling and Remote Sensing: Advances, Challenges, and New Technologies

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (8 September 2023) | Viewed by 16505

Special Issue Editors


Guest Editor
Department of Computer and Systems Sciences (DSV), Stockholm University, SE-106 91 Stockholm, Sweden
Interests: geographic information systems; business–IT alignment; university–industry collaboration; project management

Guest Editor
1. Urban Planning Engineering Department, An-Najah National University, Nablus P.O. Box 7, Palestine
2. Chair of Geoinformatics, TUM Department of Aerospace and Geodesy, Technical University of Munich, Munich, Germany
Interests: BIM/GIS integration; GIS for built environments; information architecture; urban dynamics

Special Issue Information

Dear Colleagues,

Today, digital societies depend heavily on information, and many tasks in urban and architectural design are undertaken in a geospatial context. Building Information Models (BIM) and geospatial technologies offer 3D city models that provide information about buildings and their surrounding environment. A 3D city model is generally defined as the digital representation of the Earth’s surface and the built environment within a city. Using such a model, a variety of applications can be created, covering the whole city or focusing on a specific building. As models become more detailed, the relationships between spatial objects also have to be modelled.

Recent developments in technology, especially in the acquisition and storage of data, have brought several advantages for constructing more detailed city models that can be used in different applications. As a result, the BIM and GIS domains are moving closer to each other, with easier integration and interoperability between them. BIM-GIS integration provides a unified view of geospatial information and is seen as the future direction of urban planning and smart city applications.

As a result of the increasing demand for integrated views and data standards in urban planning, unified applications have received considerable attention at both the national and international levels. At the EU level, initiatives such as the Infrastructure for Spatial Information in the European Community (INSPIRE) directive (European Commission – INSPIRE, 2007–2021) have proposed building common geospatial applications for EU countries based on BIM-GIS integration. This has contributed to smart city applications spanning smart planning, end-to-end solutions, services, management, sustainable practices, policy making, emergency response, and security, among others. In a smart city ecosystem, the geospatial structure can serve any or all of these functions.

Research and development in the above-described areas is sought for this Special Issue on “3D City Modelling and Remote Sensing: Advances, Challenges, and New Technologies”. Potential topics include, but are in no way limited to:

  • Three-dimensional city modelling;
  • BIM-GIS integration;
  • Urbanization and settlements;
  • Sustainable development of cities;
  • Smart cities and regions;
  • Different applications of 3D city modelling (e.g., 3D cadastre, crisis management, etc.).

Prof. Dr. Mohamed El Mekawy
Dr. Ihab Hamzi Hijazi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • three-dimensional city models
  • BIM-GIS integration
  • urbanization
  • smart cities
  • crisis management

Published Papers (9 papers)


Research


26 pages, 37177 KiB  
Article
An Integrated Approach for 3D Solar Potential Assessment at the City Scale
by Hassan Waqas, Yuhong Jiang, Jianga Shang, Iqra Munir and Fahad Ullah Khan
Remote Sens. 2023, 15(23), 5616; https://doi.org/10.3390/rs15235616 - 03 Dec 2023
Cited by 1 | Viewed by 1499
Abstract
The use of solar energy has shown the fastest global growth of all renewable energy sources. Efforts towards careful evaluation are required to select optimal locations for the installation of photovoltaics (PV) because their effectiveness is strongly reliant on exposure to solar irradiation. Assessing the shadows cast by nearby buildings and vegetation is essential, especially at the city scale. Due to urban complexity, conventional methods using Digital Surface Models (DSM) overestimate solar irradiation in dense urban environments. To provide further insights into this dilemma, a new modeling technique was developed for integrated 3D city modeling and solar potential assessment on building roofs using light detection and ranging (LiDAR) data. The methodology used hotspot analysis to validate the workflow in both site and without-site contexts (e.g., trees that shield small buildings). Field testing was conducted, covering a total area of 4975 square miles and 10,489 existing buildings. The results demonstrate a considerable impact of large, dense trees on the solar irradiation received by smaller buildings. Considering the site’s context, a mean annual solar estimate of 99.97 kWh/m2/year was determined. Without considering the site context, this value increased by 9.3% (as a percentage of total rooftops) to 109.17 kWh/m2/year, with a peak in July and troughs in December and January. The study suggests that both factors have a substantial impact on solar potential estimations, emphasizing the importance of carefully considering the shadowing effect during PV panel installation. The research findings reveal that 1517 buildings in the downtown area of Austin have high estimated radiation ranging from 4.7 to 6.9 kWh/m2/day, providing valuable insights for the identification of optimal locations highly suitable for PV installation. Additionally, this methodology can be generalized to other cities, addressing the broader demand for renewable energy solutions. Full article
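
To make the shadow-casting step above concrete, the following minimal sketch (not the authors' workflow; the function name, ray-marching distance, and grid parameters are illustrative) flags a DSM cell as shadowed when a higher neighbouring cell, such as a building or tree crown, blocks the ray towards the sun:

```python
import numpy as np

def shadow_mask(dsm, cell_size, sun_azimuth_deg, sun_altitude_deg, max_dist=200.0):
    """Boolean mask of DSM cells shadowed by neighbouring geometry.

    dsm       : 2D array of surface elevations (terrain + buildings + vegetation), metres
    cell_size : raster resolution, metres
    """
    az, alt = np.radians(sun_azimuth_deg), np.radians(sun_altitude_deg)
    # Horizontal step towards the sun (azimuth from north, clockwise); row index grows southwards.
    dcol, drow = np.sin(az), -np.cos(az)
    rise_per_m = np.tan(alt)                  # height the sun ray gains per metre travelled

    rows, cols = dsm.shape
    shadowed = np.zeros_like(dsm, dtype=bool)
    n_steps = int(max_dist / cell_size)

    # Brute-force loop for clarity; a real implementation would vectorise this.
    for r in range(rows):
        for c in range(cols):
            for s in range(1, n_steps + 1):
                rr, cc = int(round(r + drow * s)), int(round(c + dcol * s))
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break
                ray_z = dsm[r, c] + rise_per_m * s * cell_size
                if dsm[rr, cc] > ray_z:       # blocked by a taller neighbour
                    shadowed[r, c] = True
                    break
    return shadowed
```

Summing irradiation only over unshadowed rooftop cells for each sun position is the kind of comparison that separates the with-site and without-site estimates reported above.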

20 pages, 19204 KiB  
Article
An Accurate and Efficient Supervoxel Re-Segmentation Approach for Large-Scale Point Clouds Using Plane Constraints
by Baokang Lai, Yingtao Yuan, Yueqiang Zhang, Biao Hu and Qifeng Yu
Remote Sens. 2023, 15(16), 3973; https://doi.org/10.3390/rs15163973 - 10 Aug 2023
Cited by 1 | Viewed by 929
Abstract
The accurate and efficient segmentation of large-scale urban point clouds is crucial for many higher-level tasks, such as boundary line extraction, point cloud registration, and deformation measurement. In this paper, we propose a novel supervoxel segmentation approach to address the problem of under-segmentation in local regions of point clouds at various resolutions. Our approach introduces distance constraints from boundary points to supervoxel planes in the merging stage to enhance boundary segmentation accuracy between non-coplanar supervoxels. Additionally, supervoxels with roughness above a threshold are re-segmented using random sample consensus (RANSAC) to address multi-planar coupling within local areas of the point clouds. We tested the proposed method on two publicly available large-scale point cloud datasets. The results show that the new method outperforms two classical methods in terms of boundary recall, under-segmentation error, and average entropy in urban scenes. Full article
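
As an illustration of the plane constraint described above (a generic sketch, not the paper's exact merging criterion; names and the distance threshold are placeholders), two supervoxels are merged only when their shared boundary points lie close to the plane fitted to each of them:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set; returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centred points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return vt[-1], centroid

def can_merge(supervoxel_a, supervoxel_b, boundary_points, dist_thresh=0.05):
    """Allow a merge only if the boundary points are close to both fitted planes,
    i.e. the two supervoxels are locally coplanar."""
    for sv in (supervoxel_a, supervoxel_b):
        normal, centroid = fit_plane(sv)
        dists = np.abs((boundary_points - centroid) @ normal)
        if dists.mean() > dist_thresh:
            return False
    return True
```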

17 pages, 24200 KiB  
Article
Remote Sensing Neural Radiance Fields for Multi-View Satellite Photogrammetry
by Songlin Xie, Lei Zhang, Gwanggil Jeon and Xiaomin Yang
Remote Sens. 2023, 15(15), 3808; https://doi.org/10.3390/rs15153808 - 31 Jul 2023
Cited by 1 | Viewed by 2342
Abstract
Neural radiance fields (NeRFs), which combine machine learning with differentiable rendering, have emerged as one of the most promising approaches for novel view synthesis and depth estimation. However, NeRFs only apply to close-range static imagery, and training a model takes several hours. Satellites orbit hundreds of kilometers from the Earth, their multi-view images are usually captured over several years, and the imaged scenes change in the wild. Multi-view satellite photogrammetry is therefore far beyond the capabilities of standard NeRFs. In this paper, we present a new method for multi-view satellite photogrammetry in Earth observation called remote sensing neural radiance fields (RS-NeRFs), which aims to generate novel view images and accurate elevation predictions quickly. For each scene, we train an RS-NeRF on high-resolution optical images without labels or geometric priors and apply image reconstruction losses for self-supervised learning. Multi-date images exhibit significant changes in appearance, mainly due to cars and varying shadows, which poses challenges for satellite photogrammetry; robustness to these changes is achieved by providing the solar ray direction as an input and by a vehicle removal method. Standard NeRFs are also prohibitively slow, requiring a very long time to train even a simple scene. To significantly reduce the training time of RS-NeRFs, we build a tiny network with HashEncoder and adopt a new sampling technique with custom CUDA kernels. Compared with previous work, our method performs better on novel view synthesis and elevation estimates while training in only a few minutes. Full article
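
The hash-encoder idea mentioned above can be sketched generically (this is not the RS-NeRF implementation; the class name, table size, and level count are illustrative): each 3D sample is looked up in several hashed feature grids of increasing resolution, and the trilinearly interpolated features are concatenated.

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

class HashEncoder:
    """Multi-resolution hash encoding (Instant-NGP style) for points in [0, 1]^3."""

    def __init__(self, n_levels=8, table_size=2**14, n_features=2,
                 base_res=16, growth=1.5, seed=0):
        rng = np.random.default_rng(seed)
        self.tables = rng.normal(0.0, 1e-2, (n_levels, table_size, n_features))
        self.res = (base_res * growth ** np.arange(n_levels)).astype(int)
        self.table_size = table_size

    def _hash(self, ijk):
        # XOR of integer grid coordinates multiplied by large primes, modulo table size.
        h = ijk.astype(np.uint64) * PRIMES
        return (h[..., 0] ^ h[..., 1] ^ h[..., 2]) % self.table_size

    def encode(self, xyz):
        """xyz: (N, 3) points in [0, 1]^3 -> (N, n_levels * n_features) features."""
        feats = []
        for level, res in enumerate(self.res):
            scaled = xyz * res
            lo = np.floor(scaled).astype(np.int64)
            frac = scaled - lo
            acc = 0.0
            # Trilinear interpolation over the 8 surrounding grid corners.
            for corner in range(8):
                offset = np.array([(corner >> d) & 1 for d in range(3)])
                weight = np.prod(np.where(offset, frac, 1.0 - frac),
                                 axis=-1, keepdims=True)
                acc = acc + weight * self.tables[level][self._hash(lo + offset)]
            feats.append(acc)
        return np.concatenate(feats, axis=-1)
```

In practice, such an encoder feeds a very small MLP, which is what makes training times of minutes rather than hours plausible.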

12 pages, 13941 KiB  
Communication
Accurate and Serialized Dense Point Cloud Reconstruction for Aerial Video Sequences
by Shibiao Xu, Bingbing Pan, Jiguang Zhang and Xiaopeng Zhang
Remote Sens. 2023, 15(6), 1625; https://doi.org/10.3390/rs15061625 - 17 Mar 2023
Cited by 1 | Viewed by 1567
Abstract
Traditional multi-view stereo (MVS) is not well suited to point cloud reconstruction from serialized video frames: exhaustive feature extraction and matching across all prepared frames is time-consuming, and the search scope must cover all the key frames. In this paper, we propose a novel serialized reconstruction method to address these issues. Specifically, a covisibility cluster generation strategy based on joint feature descriptors is designed to accelerate feature matching and improve pose estimation. A serialized structure-from-motion (SfM) and dense point cloud reconstruction framework is then designed to achieve highly efficient reconstruction with competitive precision for serialized frames. To demonstrate the advantages of our method, we collected a public aerial sequence dataset with reference ground truth for evaluating dense point cloud reconstruction. A time-complexity analysis and experimental validation on this dataset show that the overall performance of our algorithm exceeds that of the other compared state-of-the-art methods. Full article
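
The efficiency gain of serialized matching over exhaustive all-pairs matching can be illustrated with a short sketch that matches each grayscale frame only against a small window of preceding frames; stock ORB/BFMatcher from OpenCV stand in for the paper's joint feature descriptors, and the window size and match threshold are placeholders.

```python
import cv2

def serialized_matching(frames, window=5, n_features=2000, min_matches=50):
    """Match each frame only against the previous `window` frames instead of all pairs.

    frames : list of grayscale uint8 images in capture order
    Returns per-frame keypoints and a dict {(j, i): matches} of covisible pairs.
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    keypoints, descriptors, pair_matches = [], [], {}
    for i, frame in enumerate(frames):
        kp, des = orb.detectAndCompute(frame, None)
        keypoints.append(kp)
        descriptors.append(des)
        for j in range(max(0, i - window), i):
            if des is None or descriptors[j] is None:
                continue
            matches = matcher.match(descriptors[j], des)
            if len(matches) >= min_matches:    # loose covisibility test
                pair_matches[(j, i)] = matches
    return keypoints, pair_matches
```

For n frames this costs O(n · window) matching calls rather than O(n²), which is where the time saving over traditional MVS pipelines comes from.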

24 pages, 20344 KiB  
Article
High-Precision Single Building Model Reconstruction Based on the Registration between OSM and DSM from Satellite Stereos
by Yong He, Wenting Liao, Hao Hong and Xu Huang
Remote Sens. 2023, 15(5), 1443; https://doi.org/10.3390/rs15051443 - 04 Mar 2023
Viewed by 1698
Abstract
For large-scale 3D building reconstruction, there have been several approaches to utilizing multi-view satellite imagery to produce a digital surface model (DSM) for height information and extracting building footprints for contour information. However, limited by satellite resolutions and viewing angles, the corresponding DSM and building footprints are sometimes of a low accuracy, thus generating low-accuracy building models. Though some recent studies have added GIS data to refine the contour of the building footprints, the registration errors between the GIS data and satellite images are not considered. Since OpenStreetMap (OSM) provides a high level of precision and complete building polygons in most cities worldwide, this paper proposes an automatic single building reconstruction method that utilizes a DSM from high-resolution satellite stereos, as well as building footprints from OSM. The core algorithm accurately registers the building polygons from OSM with the rasterized height information from the DSM. To achieve this goal, this paper proposes a two-step “coarse-to-fine registration” algorithm, with both steps being formulated into the optimization of energy functions. The coarse registration is optimized by separately moving the OSM polygons at fixed steps with the constraints of a boundary gradient, an interior elevation mean, and variance. Given the initial solution of the coarse registration, the fine registration is optimized by a genetic algorithm to compute the accurate translations and rotations between the DSM and OSM. Experiments performed in the Beijing/Shanghai region show that the proposed method can significantly improve the IoU (intersection over union) of the registration results by 69.8%/26.2%, the precision by 41.0%/15.5%, the recall by 41.0%/16.0%, and the F1-score by 42.7%/15.8%. For the registration, the method can reduce the translation errors by 4.656 m/2.815 m, as well as the rotation errors by 0.538°/0.228°, which indicates its great potential in smart 3D applications. Full article
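
A minimal sketch of the coarse-registration idea, assuming the OSM footprint is already expressed in DSM pixel coordinates: candidate translations are scored by the gradient magnitude along the polygon boundary and penalised by the interior elevation variance (the weight, search range, and step size are placeholders, and the genetic fine registration is omitted).

```python
import numpy as np
from skimage.draw import polygon, polygon_perimeter

def coarse_register(dsm, footprint, search=20, step=2, w_var=0.1):
    """Grid-search integer-pixel translations of an OSM footprint over a DSM.

    dsm       : 2D elevation raster
    footprint : (N, 2) polygon vertices in (row, col) pixel coordinates
    Returns the (drow, dcol) translation with the best energy.
    """
    gy, gx = np.gradient(dsm.astype(float))
    grad_mag = np.hypot(gy, gx)

    best_energy, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1, step):
        for dc in range(-search, search + 1, step):
            rows, cols = footprint[:, 0] + dr, footprint[:, 1] + dc
            if (rows.min() < 0 or cols.min() < 0 or
                    rows.max() >= dsm.shape[0] or cols.max() >= dsm.shape[1]):
                continue
            pr, pc = polygon_perimeter(rows, cols, shape=dsm.shape)
            ir, ic = polygon(rows, cols, shape=dsm.shape)
            # High gradients along the outline and a flat interior indicate good alignment.
            energy = grad_mag[pr, pc].mean() - w_var * dsm[ir, ic].var()
            if energy > best_energy:
                best_energy, best_shift = energy, (dr, dc)
    return best_shift
```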

25 pages, 12762 KiB  
Article
A Method Based on Improved iForest for Trunk Extraction and Denoising of Individual Street Trees
by Zhiyuan Li, Jian Wang, Zhenyu Zhang, Fengxiang Jin, Juntao Yang, Wenxiao Sun and Yi Cao
Remote Sens. 2023, 15(1), 115; https://doi.org/10.3390/rs15010115 - 25 Dec 2022
Cited by 2 | Viewed by 2015
Abstract
Street tree resource surveys using mobile laser scanning (MLS) are currently a research hot spot worldwide. Refined trunk extraction is an essential step in the 3D reconstruction of street trees. However, due to scanning errors and occlusion by various types of features in the urban environment, street tree point cloud data suffer from excessive noise. For noise points close to the tree trunk, which are difficult to remove with statistical methods, we propose an adaptive trunk extraction and denoising method for street trees based on an improved iForest (Isolation Forest) algorithm. First, to extract the individual tree trunk points, the trunk and the crown are separated from the individual tree point cloud through point cloud slicing. Next, the iForest algorithm is improved by automatically calculating the contamination and is then used to denoise the tree trunk point cloud. Finally, the method is validated on five datasets of different scenes. The results indicate that our method is robust and effective in extracting and denoising tree trunks. Compared with the traditional Statistical Outlier Removal (SOR) filter and Radius filter denoising methods, the denoising accuracy of the proposed method is improved by approximately 30% for noise points close to tree trunks. Compared to the original iForest, the proposed method automatically calculates the contamination, improving the automation of the algorithm. Our method can provide more precise trunk point clouds for the 3D reconstruction of street trees. Full article
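
A rough sketch of the denoising step, using scikit-learn's stock IsolationForest in place of the improved iForest; the automatic contamination estimate below is a crude placeholder based on radial distances, not the authors' calculation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def denoise_trunk(trunk_points, contamination=None, random_state=0):
    """Remove noise points around a trunk slice with an Isolation Forest.

    trunk_points : (N, 3) XYZ coordinates of one trunk slice
    """
    if contamination is None:
        # Placeholder estimate: treat points far outside the inter-quartile
        # radial range (around the trunk axis) as likely noise.
        centre = np.median(trunk_points[:, :2], axis=0)
        radial = np.linalg.norm(trunk_points[:, :2] - centre, axis=1)
        q1, q3 = np.percentile(radial, [25, 75])
        outlier_frac = np.mean(radial > q3 + 1.5 * (q3 - q1))
        contamination = float(np.clip(outlier_frac, 0.01, 0.5))

    labels = IsolationForest(contamination=contamination,
                             random_state=random_state).fit_predict(trunk_points)
    return trunk_points[labels == 1]          # +1 = inlier, -1 = outlier
```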

26 pages, 21643 KiB  
Article
True2 Orthoimage Map Generation
by Guoqing Zhou, Qingyang Wang, Yongsheng Huang, Jin Tian, Haoran Li and Yuefeng Wang
Remote Sens. 2022, 14(17), 4396; https://doi.org/10.3390/rs14174396 - 04 Sep 2022
Cited by 19 | Viewed by 2089
Abstract
Digital/true orthoimage maps (D/TOMs) are one of the most important forms of national spatial data infrastructure (NSDI). Traditionally, a D/TOM is generated by orthorectifying an aerial image into its upright, correct position by removing displacements and distortions in the imagery. As a result, the generated D/TOM carries no building façade texture when it is superimposed on the digital building model (DBM), which is no longer acceptable for certain applications, such as micro-climate investigation. For this reason, this paper presents the generation of a true2 orthoimage map (T2OM), which is radically different from the traditional D/TOM. The basic idea for generating the T2OM of a single building is to orthorectify the DBM-based building roof from top to bottom and the building façades from front to back, back to front, left to right, and right to left, and to complete a digital terrain model (DTM)-based T2OM, in which a superpixel is proposed to store the building ID, texture ID, per-pixel elevation, and gray information. Two study areas are used to verify the method. The experimental results demonstrate that the T2OM not only maintains the traditional characteristics of a D/TOM but also displays building façade texture and three-dimensional (3D) coordinates (XYZ) measurable at any point, and the accuracy of 3D measurement on a T2OM can reach 0.025 m (0.3 pixel). Full article
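
One plausible way to realise the "superpixel" record described above is a structured raster in which each pixel stores a building ID, a texture ID, an elevation, and a gray value; the field names and types below are illustrative, not the paper's data structure.

```python
import numpy as np

# One record per T2OM pixel (field names are illustrative).
superpixel_dtype = np.dtype([
    ("building_id", np.int32),
    ("texture_id",  np.int32),
    ("elevation",   np.float32),   # metres
    ("gray",        np.uint8),
])

def empty_t2om(rows, cols):
    """Allocate an empty T2OM raster; -1 marks terrain with no building."""
    raster = np.zeros((rows, cols), dtype=superpixel_dtype)
    raster["building_id"] = -1
    raster["texture_id"] = -1
    return raster

t2om = empty_t2om(1024, 1024)
t2om[500, 300] = (42, 7, 153.2, 188)   # pixel belonging to building 42, texture patch 7
```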

Review


27 pages, 1447 KiB  
Review
UAVs and 3D City Modeling to Aid Urban Planning and Historic Preservation: A Systematic Review
by Dingkun Hu and Jennifer Minner
Remote Sens. 2023, 15(23), 5507; https://doi.org/10.3390/rs15235507 - 26 Nov 2023
Viewed by 1677
Abstract
Drone imagery has the potential to enrich urban planning and historic preservation, especially where it converges with the growing creation and use of 3D models in the context of cities and metro regions. Nevertheless, the widespread adoption of drones in these fields faces limitations, and there is a shortage of research addressing this issue. Therefore, we have conducted a systematic literature review of articles published between 2002 and 2022 drawing from reputable academic repositories, including Science Direct, Web of Science, and China National Knowledge Infrastructure (CNKI), to identify current gaps in the existing research on the application of UAVs to the creation of 3D models in the contexts of urban planning and historic preservation. Our findings indicate five research shortcomings for 3D city modeling: limited participation of planning experts, research focus imbalance, lack of usage for special scenarios, lack of integration with smart city planning, and limited interdisciplinary collaboration. In addition, this study acknowledges current limitations around UAV applications and discusses possible countermeasures along with future prospects. Full article

Other


20 pages, 54440 KiB  
Technical Note
Leveraging CNNs for Panoramic Image Matching Based on Improved Cube Projection Model
by Tian Gao, Chaozhen Lan, Longhao Wang, Wenjun Huang, Fushan Yao and Zijun Wei
Remote Sens. 2023, 15(13), 3411; https://doi.org/10.3390/rs15133411 - 05 Jul 2023
Viewed by 1328
Abstract
Three-dimensional (3D) scene reconstruction plays an important role in digital cities, virtual reality, and simultaneous localization and mapping (SLAM). In contrast to perspective images, a single panoramic image can contain the complete scene information because of the wide field of view. The extraction and matching of image feature points is a critical and difficult part of 3D scene reconstruction using panoramic images. We attempted to solve this problem using convolutional neural networks (CNNs). Compared with traditional feature extraction and matching algorithms, the SuperPoint (SP) and SuperGlue (SG) algorithms have advantages for handling images with distortions. However, the rich content of panoramic images leads to a significant disadvantage of these algorithms with regard to time loss. To address this problem, we introduce the Improved Cube Projection Model: First, the panoramic image is projected into split-frame perspective images with significant overlap in six directions. Second, the SP and SG algorithms are used to process the six split-frame images in parallel for feature extraction and matching. Finally, matching points are mapped back to the panoramic image through coordinate inverse mapping. Experimental results in multiple environments indicated that the algorithm can not only guarantee the number of feature points extracted and the accuracy of feature point extraction but can also significantly reduce the computation time compared to other commonly used algorithms. Full article
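
The cube projection can be sketched as follows (an illustrative reimplementation, not the paper's Improved Cube Projection Model): each cube face is rendered from the equirectangular panorama by casting face pixels onto the sphere, with a fov_scale greater than 1 enlarging each face frustum so that neighbouring faces overlap; face orientations and sizes are a matter of convention.

```python
import cv2
import numpy as np

# Unit view direction of each cube-face pixel, given face-plane coords (a, b) in [-1, 1]
# (x forward, y right, z up; the exact face orientations are conventions).
FACES = {
    "front": lambda a, b: (np.ones_like(a),   a,               -b),
    "right": lambda a, b: (-a,                np.ones_like(a), -b),
    "back":  lambda a, b: (-np.ones_like(a), -a,               -b),
    "left":  lambda a, b: (a,                -np.ones_like(a), -b),
    "up":    lambda a, b: (b,                 a,                np.ones_like(a)),
    "down":  lambda a, b: (-b,                a,               -np.ones_like(a)),
}

def pano_to_cube_face(pano, face, face_size=1024, fov_scale=1.2):
    """Reproject one cube face of an equirectangular panorama to a perspective image."""
    h, w = pano.shape[:2]
    grid = ((np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0) * fov_scale
    b, a = np.meshgrid(grid, grid, indexing="ij")
    x, y, z = FACES[face](a, b)

    lon = np.arctan2(y, x)                     # [-pi, pi]
    lat = np.arctan2(z, np.hypot(x, y))        # [-pi/2, pi/2]
    map_x = ((lon / (2 * np.pi) + 0.5) * w).astype(np.float32)
    map_y = ((0.5 - lat / np.pi) * h).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)

# Six overlapping split-frame images, ready for feature extraction and matching:
# faces = {name: pano_to_cube_face(pano, name) for name in FACES}
```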
