Advances in Understanding and 3D Semantic Modeling of Large-Scale Urban Scenes from Point Clouds II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: 30 November 2024

Special Issue Editors

Guest Editor
Department of Geomatics Engineering, College of Civil Engineering, Nanjing Forestry University, Nanjing 210037, China
Interests: image- and LiDAR-based segmentation and reconstruction; full-waveform LiDAR data processing; related remote sensing applications in the field of forest ecosystems

Guest Editor
Department of Geomatics Engineering, College of Civil Engineering, Nanjing Forestry University, Nanjing 210037, China
Interests: UAV photogrammetry; quality analysis of geographic information systems; remote sensing image processing; GeOBIA algorithm development

Guest Editor
Department of Mathematics and Computing Science, Faculty of Science, Saint Mary’s University, Halifax, NS B3H 3C2, Canada
Interests: computer graphics; 3D computer vision; geometric deep learning; related applications including motion capture for VR/AR and LiDAR-based urban modeling

Guest Editor
Research Group of Photogrammetry, Department of Geodesy and Geoinformation, TU Wien, 1040 Vienna, Austria
Interests: orientation and calibration; surface-imaging sensors (cameras, laser scanners) for terrestrial, airborne, and satellite-based earth observation; 3D modeling and classification of recorded scenes; application of the created models in topography, hydrology, forestry, and ecology; processing of very large amounts of data; creation of scientific software

Special Issue Information

Dear Colleagues,

Following the success of our previous Special Issue, “Advances in Understanding and 3D Semantic Modeling of Large-Scale Urban Scenes from Point Clouds”, we are happy to announce that a new one has been created.

Driven by a growing range of applications and by improvements in 3D data acquisition technology, the computer vision and remote sensing communities are now focusing on deep learning-based and knowledge-based algorithms to tackle the challenges in the understanding and 3D semantic modeling of large-scale urban scenes. The physical modeling of scenes from point clouds includes the enhancement and segmentation of the scene at the object and part level, shape recognition, indoor and outdoor abstraction, reconstruction, and an optional simplification to make the 3D model web- and/or mobile-compatible.

Although recent deep-learning algorithms have exhibited powerful performance on low-level recognition tasks such as classification and segmentation, scant attention has been given to deep learning for large-scale 3D urban modeling, owing to a lack of available training data and benchmark repositories. Other challenges include detailed modeling from imperfect (occluded or noisy) scans, free-form building modeling, lightweight modeling for web/mobile compatibility, flexible modeling that generates multiple levels of detail (LoDs) on the fly, and automated reconstruction from large-scale urban point clouds, to name a few. We position our Special Issue to support the ongoing efforts in the 3D scanning and modeling industry by focusing on applications of LiDAR/RGB-D/photogrammetric point clouds. The topics addressed within this Special Issue may encompass a wide array of subjects, including but not limited to:

  • The enhancement, registration, and filtering of point clouds;
  • Semantic, instance, panoptic, and part-level segmentation;
  • Large-scale outdoor scene and indoor scene reconstruction;
  • Detail synthesis and implicit modeling of urban scenes;
  • The 3D modeling of buildings, bridges, roads, trees, and utilities;
  • The rendering and visualization of urban scenes;
  • Polyhedral meshes, procedural models, and model simplification;
  • Deep learning-based reconstruction and point-based neural radiance fields;
  • Innovative applications in smart cities, VR/AR, autonomous driving, indoor navigation, etc.

Dr. Dong Chen
Dr. Jiaming Na
Dr. Jiju Poovvancheri
Prof. Dr. Norbert Pfeifer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • point cloud enhancement
  • semantic segmentation
  • instance segmentation
  • panoptic segmentation
  • outdoor reconstruction
  • indoor reconstruction
  • urban utility modeling
  • efficient data structures
  • deep learning-based reconstruction
  • model simplification
  • intelligent applications

Published Papers (4 papers)

Research

24 pages, 4466 KiB  
Article
Position-Feature Attention Network-Based Approach for Semantic Segmentation of Urban Building Point Clouds from Airborne Array Interferometric SAR
by Minan Shi, Fubo Zhang, Longyong Chen, Shuo Liu, Ling Yang and Chengwei Zhang
Remote Sens. 2024, 16(7), 1141; https://doi.org/10.3390/rs16071141 - 25 Mar 2024
Abstract
Airborne array-interferometric synthetic aperture radar (array-InSAR), one of the implementation methods of tomographic SAR (TomoSAR), offers all-time, all-weather operation, high consistency, and exceptional timeliness. As urbanization continues to develop, the utilization of array-InSAR data for building detection holds significant application value. Existing methods, however, face challenges in terms of automation and detection accuracy, which can impact the subsequent accuracy and quality of building modeling. Deep learning methods, on the other hand, are still in their infancy in SAR point cloud processing and do not adapt well to this problem. Therefore, we propose a Position-Feature Attention Network (PFA-Net), which seamlessly integrates positional encoding with a point transformer for building-target segmentation in SAR point clouds. Experimental results show that the proposed network is better suited to handle the inherent characteristics of SAR point clouds, including high noise levels and multiple scattering artifacts, and that it achieves more accurate segmentation results while maintaining computational efficiency and avoiding the errors associated with manual labeling. The experiments also investigate the role of multidimensional features in SAR point cloud data. This work provides valuable insights and references for future research at the intersection of SAR point clouds and deep learning.
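
To make the mechanism concrete, here is a minimal, hedged sketch of how a learned positional encoding can be fused with point-transformer-style attention, which is the general idea the abstract describes. The module name, single-head design, and dimensions are illustrative assumptions, not the authors' PFA-Net implementation; the sketch uses PyTorch.

    # Illustrative position-feature attention layer (a sketch, not PFA-Net itself).
    import torch
    import torch.nn as nn

    class PositionFeatureAttention(nn.Module):
        """One attention layer in which a learned encoding of relative point
        positions modulates both the attention scores and the aggregated values."""
        def __init__(self, dim: int = 64):
            super().__init__()
            self.q = nn.Linear(dim, dim)
            self.k = nn.Linear(dim, dim)
            self.v = nn.Linear(dim, dim)
            # MLP that turns relative xyz offsets into feature-space encodings.
            self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

        def forward(self, feats: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
            # feats: (N, dim) per-point features; xyz: (N, 3) point coordinates.
            rel = xyz.unsqueeze(1) - xyz.unsqueeze(0)                   # (N, N, 3)
            pos = self.pos_mlp(rel)                                     # (N, N, dim)
            q, k, v = self.q(feats), self.k(feats), self.v(feats)
            scores = (q.unsqueeze(1) * (k.unsqueeze(0) + pos)).sum(-1)  # (N, N)
            attn = torch.softmax(scores / feats.shape[-1] ** 0.5, dim=-1)
            # Aggregate neighbour features together with their positional encodings.
            return attn @ v + (attn.unsqueeze(-1) * pos).sum(dim=1)

    # Toy usage: a random 128-point cloud with 64-dimensional features.
    layer = PositionFeatureAttention(64)
    out = layer(torch.randn(128, 64), torch.randn(128, 3))
    print(out.shape)  # torch.Size([128, 64])

Note that the dense (N, N) attention above is quadratic in the number of points and only workable for small blocks; practical point transformers restrict attention to local neighbourhoods.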

27 pages, 9573 KiB  
Article
Iterative Low-Poly Building Model Reconstruction from Mesh Soups Based on Contour
by Xiao Xiao, Yuhang Liu and Yanci Zhang
Remote Sens. 2024, 16(4), 695; https://doi.org/10.3390/rs16040695 - 16 Feb 2024
Abstract
Existing contour-based building-reconstruction methods face the challenge of producing low-poly results. In this study, we introduce a novel iterative contour-based method to reconstruct low-poly meshes, with only the essential details, from mesh soups. Our method focuses on two primary targets that determine the quality of the results: reducing the total number of contours and generating compact surfaces between contours. Specifically, we implemented an iterative pipeline that gradually extracts vital contours based on loss and topological variance; potentially redundant contours are then removed in a post-processing step. Based on these vital contours, we extracted the planar primitives of buildings as references for contour refinement to obtain compact contours. The connection relationships between these contours are recovered for surface generation through a contour graph, which is constructed from multiple bipartite graphs. A low-poly mesh can then be generated from the contour graph using our contour-interpolation algorithm based on polyline splitting. The experiments demonstrated that our method produces satisfactory results and outperforms previous methods.
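
One ingredient of such a pipeline can be sketched compactly: reducing an extracted building contour to a low-poly polyline. The sketch below uses Ramer-Douglas-Peucker simplification as a generic stand-in, with an invented tolerance and a synthetic test contour; it is not the authors' loss- and primitive-driven contour refinement.

    # Simplify a noisy building contour to a few corner vertices (illustrative).
    import numpy as np

    def rdp(points: np.ndarray, eps: float) -> np.ndarray:
        """Ramer-Douglas-Peucker: drop vertices closer than eps to the chord."""
        if len(points) < 3:
            return points
        start, end = points[0], points[-1]
        chord = end - start
        norm = np.linalg.norm(chord)
        if norm == 0:  # degenerate closed loop: fall back to distance from start
            dists = np.linalg.norm(points - start, axis=1)
        else:          # perpendicular distance of each vertex to the chord
            dists = np.abs(chord[0] * (points[:, 1] - start[1])
                           - chord[1] * (points[:, 0] - start[0])) / norm
        i = int(np.argmax(dists))
        if dists[i] > eps:  # keep the farthest vertex and recurse on both halves
            return np.vstack([rdp(points[:i + 1], eps)[:-1], rdp(points[i:], eps)])
        return np.array([start, end])

    # Toy contour: a noisy square outline sampled with 200 vertices.
    t = np.linspace(0, 2 * np.pi, 200)
    c = np.c_[np.cos(t), np.sin(t)]
    c /= np.maximum(np.abs(c[:, :1]), np.abs(c[:, 1:]))  # map circle onto square
    c += np.random.normal(0, 0.005, c.shape)
    print(len(rdp(c, eps=0.05)))  # a handful of vertices instead of 200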

28 pages, 11516 KiB  
Article
Segmentation of Individual Tree Points by Combining Marker-Controlled Watershed Segmentation and Spectral Clustering Optimization
by Yuchan Liu, Dong Chen, Shihan Fu, Panagiotis Takis Mathiopoulos, Mingming Sui, Jiaming Na and Jiju Peethambaran
Remote Sens. 2024, 16(4), 610; https://doi.org/10.3390/rs16040610 - 06 Feb 2024
Abstract
Accurate identification and segmentation of individual tree points are crucial for assessing forest spatial distribution, understanding tree growth and structure, and managing forest resources. Traditional methods based on Canopy Height Models (CHMs) are simple yet prone to over- and/or under-segmentation. To deal with this problem, this paper introduces a novel approach that combines marker-controlled watershed segmentation with a spectral clustering algorithm. Initially, we determined the local maxima within a series of variable windows, sized according to the lower bound of the prediction interval of the regression equation between tree crown radius and tree height, to preliminarily segment individual trees. Subsequently, under-segmented trees were identified using a geometric shape analysis method. For these trees, vertical tree-crown profile analysis was performed in multiple directions to detect potential treetops, which were then used as inputs for spectral clustering optimization. Our experiments across six plots showed that our method markedly surpasses traditional approaches, achieving an average Recall of 0.854, a Precision of 0.937, and an F1-score of 0.892.
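
The variable-window treetop detection step can be illustrated on a rasterized canopy height model (CHM): a cell counts as a treetop candidate only if it is the maximum within a window whose radius follows a height-dependent crown-radius model. The sketch below is a simplification under stated assumptions; the coefficients a and b are placeholders, not the fitted prediction-interval bound used in the paper.

    # Variable-window local-maxima treetop detection on a toy CHM (illustrative).
    import numpy as np
    from scipy import ndimage

    def detect_treetops(chm: np.ndarray, cell: float = 0.5,
                        a: float = 0.5, b: float = 0.1) -> np.ndarray:
        """Return (row, col) treetop candidates using height-dependent windows."""
        # Coarse pass: local maxima in a fixed 3x3 window, above a 2 m ground cut-off.
        coarse = (chm == ndimage.maximum_filter(chm, size=3)) & (chm > 2.0)
        tops = []
        for r, c in zip(*np.nonzero(coarse)):
            radius_m = a + b * chm[r, c]           # predicted crown radius from height
            w = max(1, int(round(radius_m / cell)))
            r0, r1 = max(0, r - w), min(chm.shape[0], r + w + 1)
            c0, c1 = max(0, c - w), min(chm.shape[1], c + w + 1)
            if chm[r, c] >= chm[r0:r1, c0:c1].max():  # maximum within its own window
                tops.append((r, c))
        return np.array(tops)

    # Toy CHM: two Gaussian crowns on a grid of 0.5 m cells.
    yy, xx = np.mgrid[0:80, 0:80]
    chm = (15 * np.exp(-((yy - 25) ** 2 + (xx - 25) ** 2) / 60)
           + 12 * np.exp(-((yy - 55) ** 2 + (xx - 55) ** 2) / 40))
    print(detect_treetops(chm))  # approximately [[25 25], [55 55]]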

18 pages, 8485 KiB  
Article
Robust 3D Semantic Segmentation Method Based on Multi-Modal Collaborative Learning
by Peizhou Ni, Xu Li, Wang Xu, Xiaojing Zhou, Tao Jiang and Weiming Hu
Remote Sens. 2024, 16(3), 453; https://doi.org/10.3390/rs16030453 - 24 Jan 2024
Abstract
Since camera and LiDAR sensors provide complementary information for the 3D semantic segmentation of intelligent vehicles, extensive efforts have been invested in fusing information from multi-modal data. Despite considerable advantages, fusion-based methods still have inevitable limitations: a field-of-view disparity between the two modal inputs, a demand for precisely paired data in both the training and inference stages, and higher resource consumption. These limitations pose significant obstacles to the practical application of fusion-based methods in real-world scenarios. Therefore, we propose a robust 3D semantic segmentation method based on multi-modal collaborative learning, aiming to enhance feature extraction and segmentation performance for point clouds. In practice, an attention-based cross-modal knowledge distillation module is proposed to effectively acquire comprehensive information from multi-modal data and guide the pure point cloud network; then, a confidence-map-driven late fusion strategy is proposed to dynamically fuse the results of the two modalities at the pixel level to complement their advantages and further optimize segmentation results. The proposed method is evaluated on two public datasets (the urban dataset SemanticKITTI and the off-road dataset RELLIS-3D) and our unstructured test set. The experimental results demonstrate performance competitive with state-of-the-art methods in diverse scenarios and robustness to sensor faults.
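
The collaborative-learning idea can be sketched as a distillation objective: a fused camera+LiDAR teacher guides a LiDAR-only student so that only the point cloud branch is needed at inference time. The temperature, loss weighting, and toy tensor shapes below are illustrative assumptions in PyTorch, not the paper's attention-based distillation module or its confidence-map fusion.

    # Cross-modal knowledge distillation loss (a sketch, not the paper's module).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          tau: float = 2.0, alpha: float = 0.5):
        """Cross-entropy on labels plus temperature-scaled KL to the teacher."""
        ce = F.cross_entropy(student_logits, labels)
        kl = F.kl_div(
            F.log_softmax(student_logits / tau, dim=-1),
            F.log_softmax(teacher_logits / tau, dim=-1),
            reduction="batchmean", log_target=True,
        ) * tau ** 2
        return alpha * ce + (1 - alpha) * kl

    # Toy call: 1024 points, 20 semantic classes.
    s = torch.randn(1024, 20, requires_grad=True)  # student (point-only) logits
    t = torch.randn(1024, 20)                      # teacher (fusion-branch) logits
    y = torch.randint(0, 20, (1024,))
    loss = distillation_loss(s, t, y)
    loss.backward()
    print(float(loss))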
