Intelligent Point Cloud Processing, Sensing and Understanding

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (20 February 2023) | Viewed by 34351

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
School of Information Engineering, Shenzhen University, Shenzhen 518052, China
Interests: computer vision and machine learning; image/video processing

Guest Editor
School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen 518037, China
Interests: image processing; medical image analysis; computer vision

Guest Editor
School of Communications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China
Interests: video/image/point cloud processing; computer vision

Guest Editor
School of Mechanical Engineering, Shandong University, Jinan, China
Interests: medical image processing; deep learning; computer graphics and visualization

Special Issue Information

Dear Colleagues,

Point clouds are one of the foundational pillars of representing the 3D digital world, despite the irregular topology among their discrete points. Recently, advancements in sensor technologies that acquire point cloud data for flexible and scalable geometric representation have paved the way for new ideas, methodologies, and solutions in countless remote sensing applications. State-of-the-art sensors are capable of capturing and describing objects in a scene using dense point clouds from various platforms (satellite, aerial, UAV, vehicle-borne, backpack, handheld, and static terrestrial), perspectives (nadir, oblique, and side view), spectra (multispectral), and granularities (point density and completeness). Meanwhile, the ever-expanding application areas of point cloud processing now cover not only conventional geospatial analysis but also manufacturing, civil engineering, construction, transportation, ecology, forestry, mechanical engineering, and more.

This Special Issue invites contributions that focus on processing and utilizing point cloud data acquired from laser scanners and other 3D imaging systems. We are particularly interested in original papers that address innovative techniques for generating, handling, and analyzing point cloud data; challenges in dealing with point cloud data in emerging remote sensing applications; and new applications for point cloud data.

Dr. Miaohui Wang
Dr. Guanghui Yue
Dr. Jian Xiong
Dr. Sukun Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • point cloud acquisition from laser scanners, stereo vision, panoramas, camera phone images, and oblique as well as satellite imagery
  • deep learning for point cloud processing
  • point cloud registration, segmentation, object detection, semantic labelling, compression, and quality assessment
  • fusion of multimodal point clouds
  • modeling of LiDAR/image-based point cloud processing
  • industrial applications with large-scale point clouds
  • high-performance computing for large-scale point clouds

Published Papers (11 papers)

Editorial

Jump to: Research, Review

5 pages, 188 KiB  
Editorial
Intelligent Point Cloud Processing, Sensing, and Understanding
by Miaohui Wang, Guanghui Yue, Jian Xiong and Sukun Tian
Sensors 2024, 24(1), 283; https://doi.org/10.3390/s24010283 - 03 Jan 2024
Viewed by 835
Abstract
Point clouds are considered one of the fundamental pillars for representing the 3D digital landscape [...] Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)

Research

Jump to: Editorial, Review

22 pages, 12783 KiB  
Article
Discrete Geodesic Distribution-Based Graph Kernel for 3D Point Clouds
by Mehmet Ali Balcı, Ömer Akgüller, Larissa M. Batrancea and Lucian Gaban
Sensors 2023, 23(5), 2398; https://doi.org/10.3390/s23052398 - 21 Feb 2023
Viewed by 1178
Abstract
In the structural analysis of discrete geometric data, graph kernels have a strong track record of performance. Using graph kernel functions provides two significant advantages. First, a graph kernel is capable of preserving the graph’s topological structures by describing graph properties in a high-dimensional space. Second, graph kernels allow vector-based machine learning methods to be applied to data that are naturally represented as graphs. In this paper, we formulate a novel kernel function for the similarity determination of point cloud data structures, which are crucial for several applications. This function is determined by the proximity of the geodesic path distributions in graphs reflecting the discrete geometry underlying the point cloud. This research demonstrates the efficiency of this kernel for similarity measures and the categorization of point clouds. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)
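As a rough, hypothetical sketch of the idea behind such a kernel (not the authors' exact formulation — function names, the k-NN graph construction, and the RBF comparison are all illustrative assumptions), one can histogram the graph geodesic distances of each cloud and compare the resulting distributions:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def geodesic_histogram(points, k=6, bins=20):
    """Histogram of graph geodesic (shortest-path) distances on a k-NN graph
    built over the point cloud, approximating its discrete geometry."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)  # first neighbour is the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    vals = dist[:, 1:].ravel()
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    geo = shortest_path(graph, directed=False)
    finite = geo[np.isfinite(geo) & (geo > 0)]  # drop self- and unreachable pairs
    hist, _ = np.histogram(finite, bins=bins, range=(0.0, finite.max()), density=True)
    return hist

def rbf_kernel(h1, h2, gamma=1.0):
    """Kernel value from the squared distance between two geodesic distributions."""
    return float(np.exp(-gamma * np.sum((h1 - h2) ** 2)))
```

Two clouds with similar intrinsic geometry yield similar geodesic distributions and hence a kernel value near 1, which can then feed any kernel-based classifier.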

20 pages, 35574 KiB  
Article
Point Cloud Instance Segmentation with Inaccurate Bounding-Box Annotations
by Yinyin Peng, Hui Feng, Tao Chen and Bo Hu
Sensors 2023, 23(4), 2343; https://doi.org/10.3390/s23042343 - 20 Feb 2023
Viewed by 1987
Abstract
Most existing point cloud instance segmentation methods require accurate and dense point-level annotations, which are extremely laborious to collect. While incomplete and inexact supervision has been exploited to reduce labeling efforts, inaccurate supervision remains under-explored. This kind of supervision is almost inevitable in practice, especially in complex 3D point clouds, and it severely degrades the generalization performance of deep networks. To this end, we propose the first weakly supervised point cloud instance segmentation framework with inaccurate box-level labels. A novel self-distillation architecture is presented to boost the generalization ability while leveraging the cheap but noisy bounding-box annotations. Specifically, we employ consistency regularization to distill self-knowledge from data perturbation and historical predictions, which prevents the deep network from overfitting the noisy labels. Moreover, we progressively select reliable samples and correct their labels based on the historical consistency. Extensive experiments on the ScanNet-v2 dataset were used to validate the effectiveness and robustness of our method in dealing with inexact and inaccurate annotations. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)
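As a minimal, framework-free illustration of the consistency-regularization idea (the paper's actual networks and losses are more elaborate, and these function names are our own), disagreement between a prediction and a fixed self-distillation target can be penalized, with historical predictions tracked by an exponential moving average:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(student_logits, teacher_logits):
    """MSE between class distributions; the teacher branch (e.g. the prediction
    on a perturbed copy, or a historical prediction) is treated as a fixed target."""
    return float(np.mean((softmax(student_logits) - softmax(teacher_logits)) ** 2))

def ema_update(history, current, momentum=0.9):
    """Exponential moving average of predictions over training, usable both as a
    distillation target and for selecting samples with consistent histories."""
    return momentum * history + (1.0 - momentum) * current
```

Samples whose EMA-tracked predictions stay stable can be deemed reliable and have their noisy box-level labels corrected, in the spirit of the progressive selection the abstract describes.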

19 pages, 6307 KiB  
Article
Automatic Registration of Homogeneous and Cross-Source TomoSAR Point Clouds in Urban Areas
by Lei Pang, Dayuan Liu, Conghua Li and Fengli Zhang
Sensors 2023, 23(2), 852; https://doi.org/10.3390/s23020852 - 11 Jan 2023
Cited by 1 | Viewed by 1356
Abstract
Building reconstruction using high-resolution satellite-based synthetic aperture radar tomography (TomoSAR) is of great importance in urban planning and city modeling applications. However, since SAR is a side-looking imaging mode, the TomoSAR point cloud of a single orbit cannot achieve a complete observation of buildings. It is difficult for existing methods to extract common features or to exploit the overlap rate to register homogeneous and cross-source TomoSAR point clouds. Therefore, this paper proposes a robust registration method for TomoSAR point clouds in urban areas. First, noise points and outliers are filtered by statistical filtering, and density of projection point (DoPP)-based projection is used to extract TomoSAR building point clouds and obtain the facade points for subsequent calculations based on density clustering. Subsequently, coarse alignment of the source and target point clouds is performed using principal component analysis (PCA). Lastly, the rotation and translation coefficients are calculated using the angle between the normal vectors of the opposite facades of the building and the distance between the outer ends of the facade projections. The experimental results verify the feasibility and robustness of the proposed method. For the homogeneous TomoSAR point cloud, the experimental results show that the average rotation error of the proposed method was less than 0.1°, and the average translation error was less than 0.25 m. The registration accuracy of the cross-source TomoSAR point cloud was evaluated using the defined angle and distance, whose values were less than 0.2° and 0.25 m. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)
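The PCA-based coarse alignment step can be sketched roughly as follows. This is an illustrative simplification: real TomoSAR clouds require the paper's filtering and facade extraction first, and PCA axes carry a 180° sign ambiguity that a robust method must resolve separately.

```python
import numpy as np

def pca_coarse_align(source, target):
    """Coarsely align `source` to `target` by matching centroids and
    principal axes (eigenvectors of each cloud's covariance matrix)."""
    def frame(pts):
        c = pts.mean(axis=0)
        cov = np.cov((pts - c).T)
        _, vecs = np.linalg.eigh(cov)
        axes = vecs[:, ::-1]            # principal axes, descending variance
        if np.linalg.det(axes) < 0:     # keep a right-handed frame
            axes[:, -1] *= -1
        return c, axes

    c_s, a_s = frame(source)
    c_t, a_t = frame(target)
    R = a_t @ a_s.T                     # rotate source axes onto target axes
    t = c_t - R @ c_s
    return (R @ source.T).T + t, R, t
```

The coarse result then seeds the fine stage, which in the paper uses facade normal angles and projected facade endpoints rather than generic point correspondences.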

27 pages, 117724 KiB  
Article
A Sequential Color Correction Approach for Texture Mapping of 3D Meshes
by Lucas Dal’Col, Daniel Coelho, Tiago Madeira, Paulo Dias and Miguel Oliveira
Sensors 2023, 23(2), 607; https://doi.org/10.3390/s23020607 - 05 Jan 2023
Cited by 1 | Viewed by 1705
Abstract
Texture mapping can be defined as the colorization of a 3D mesh using one or multiple images. In the case of multiple images, this process often results in textured meshes with unappealing visual artifacts, known as texture seams, caused by the lack of color similarity between the images. The main goal of this work is to create textured meshes free of texture seams by color correcting all the images used. We propose a novel color-correction approach, called sequential pairwise color correction, capable of color correcting multiple images from the same scene, using a pairwise-based method. This approach consists of sequentially color correcting each image of the set with respect to a reference image, following color-correction paths computed from a weighted graph. The color-correction algorithm is integrated with a texture-mapping pipeline that receives uncorrected images, a 3D mesh, and point clouds as inputs, producing color-corrected images and a textured mesh as outputs. Results show that the proposed approach outperforms several state-of-the-art color-correction algorithms, both in qualitative and quantitative evaluations. The approach eliminates most texture seams, significantly increasing the visual quality of the textured meshes. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)
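A heavily simplified sketch of sequential pairwise correction: the paper computes corrections from image overlaps along paths in a weighted graph, whereas here, hypothetically, each image is matched to its predecessor with a per-channel gain/offset, assuming float RGB images in [0, 1]:

```python
import numpy as np

def pairwise_gain_offset(src, ref):
    """Per-channel affine transform mapping src's color statistics onto ref's."""
    gain = ref.std(axis=(0, 1)) / (src.std(axis=(0, 1)) + 1e-8)
    offset = ref.mean(axis=(0, 1)) - gain * src.mean(axis=(0, 1))
    return gain, offset

def correct_along_path(images, path):
    """Sequentially correct each image towards the one before it on the path,
    so all images end up color-consistent with the reference image path[0]."""
    out = {path[0]: images[path[0]]}
    for prev, cur in zip(path, path[1:]):
        g, o = pairwise_gain_offset(images[cur], out[prev])
        out[cur] = np.clip(images[cur] * g + o, 0.0, 1.0)
    return out
```

Chaining corrections along a path means each image only ever needs a pairwise model with its neighbour, which is what makes a graph-based ordering of the image set natural.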

13 pages, 5627 KiB  
Article
Real-Time LiDAR Point-Cloud Moving Object Segmentation for Autonomous Driving
by Xing Xie, Haowen Wei and Yongjie Yang
Sensors 2023, 23(1), 547; https://doi.org/10.3390/s23010547 - 03 Jan 2023
Cited by 4 | Viewed by 4463
Abstract
The key to autonomous navigation in unmanned systems is the ability to recognize static and moving objects in the environment, supporting the tasks of predicting the future state of the environment, avoiding collisions, and planning. However, because existing 3D LiDAR point-cloud moving object segmentation (MOS) convolutional neural network (CNN) models are very complex and carry a large computational burden, it is difficult to perform real-time processing on embedded platforms. In this paper, we propose a lightweight MOS network structure based on LiDAR point-cloud sequence range images with only 2.3 M parameters, which is 66% less than the state-of-the-art network. When running on an RTX 3090 GPU, the processing time is 35.82 ms per frame, and it achieves an intersection-over-union (IoU) score of 51.3% on the SemanticKITTI dataset. In addition, the proposed CNN successfully runs on an FPGA platform using an NVDLA-like hardware architecture, and the system achieves efficient and accurate moving-object segmentation of LiDAR point clouds at a speed of 32 fps, meeting the real-time requirements of autonomous vehicles. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)
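The range-image representation such networks consume can be sketched as a spherical projection of the cloud. This is a generic illustration: the 64 x 1024 resolution and the vertical field of view are assumptions matching a typical 64-beam sensor, not the paper's exact settings.

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR cloud into an (h, w) range image.
    Rows index elevation (pitch), columns index azimuth (yaw)."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / np.maximum(r, 1e-8))
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (pitch - fd) / (fu - fd)) * h).astype(int).clip(0, h - 1)
    v = ((0.5 * (1.0 - yaw / np.pi)) * w).astype(int).clip(0, w - 1)
    img = np.full((h, w), -1.0)   # -1 marks empty pixels
    order = np.argsort(-r)        # write far points first so the nearest survives
    img[u[order], v[order]] = r[order]
    return img
```

Rendering the cloud this way lets an ordinary 2D CNN process LiDAR data, which is the main source of the efficiency gains the abstract reports.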

24 pages, 6392 KiB  
Article
3D Reality-Based Survey and Retopology for Structural Analysis of Cultural Heritage
by Sara Gonizzi Barsanti, Mario Guagliano and Adriana Rossi
Sensors 2022, 22(24), 9593; https://doi.org/10.3390/s22249593 - 07 Dec 2022
Cited by 3 | Viewed by 1428
Abstract
Cultural heritage’s structural changes and damages can influence the mechanical behaviour of artefacts and buildings. Finite element methods (FEM) are widely used to model stress behaviour in mechanical analysis. The workflow involves CAD 3D models and non-uniform rational B-spline (NURBS) surfaces. For cultural heritage objects, altered by the time elapsed since their creation, the representation created with a CAD model may introduce an extreme level of approximation, leading to wrong simulation results. The focus of this work is to present an alternative method intended to generate the most accurate 3D representation of a real artefact from highly accurate 3D reality-based models, simplifying the original models to make them suitable for finite element analysis (FEA) software. The proposed approach, tested on three different case studies, is based on the intelligent use of retopology procedures to create a simplified model that can be converted to a mathematical one made of NURBS surfaces, which is also suitable for the volumetric meshing typically embedded in standard FEM packages. This allowed us to obtain FEA results closer to the actual mechanical behaviour of the analysed heritage asset. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)

14 pages, 8468 KiB  
Article
PU-MFA: Point Cloud Up-Sampling via Multi-Scale Features Attention
by Hyungjun Lee and Sejoon Lim
Sensors 2022, 22(23), 9308; https://doi.org/10.3390/s22239308 - 29 Nov 2022
Viewed by 1549
Abstract
Recently, research using point clouds has been increasing with the development of 3D scanner technology. Following this trend, the demand for high-quality point clouds is growing, but obtaining them remains costly. Therefore, with the recent remarkable development of deep learning, point cloud up-sampling research, which uses deep learning to generate high-quality point clouds from low-quality point clouds, is attracting considerable attention. This paper proposes a new point cloud up-sampling method called Point cloud Up-sampling via Multi-scale Features Attention (PU-MFA). Inspired by prior studies that reported good performance in generating high-quality dense point sets using multi-scale features or attention mechanisms, PU-MFA merges the two through a U-Net structure. In addition, PU-MFA adaptively uses multi-scale features to refine the global features effectively. PU-MFA was compared with other state-of-the-art methods on various evaluation metrics in experiments using the PU-GAN dataset, a synthetic point cloud dataset, and the KITTI dataset, a real-scanned point cloud dataset. PU-MFA showed superior performance in generating high-quality dense point sets in both quantitative and qualitative evaluations compared to other state-of-the-art methods, proving the effectiveness of the proposed method. The attention map of PU-MFA was also visualized to show the effect of multi-scale features. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)

14 pages, 3774 KiB  
Article
MASPC_Transform: A Plant Point Cloud Segmentation Network Based on Multi-Head Attention Separation and Position Code
by Bin Li and Chenhua Guo
Sensors 2022, 22(23), 9225; https://doi.org/10.3390/s22239225 - 27 Nov 2022
Cited by 3 | Viewed by 1634
Abstract
Plant point cloud segmentation is an important step in 3D plant phenotype research. Because the stems, leaves, flowers, and other organs of plants are often intertwined and small in size, plant point cloud segmentation is more challenging than other segmentation tasks. In this paper, we propose MASPC_Transform, a novel plant point cloud segmentation network based on multi-head attention separation and position code. The proposed MASPC_Transform establishes connections for similar point clouds scattered in different areas of the point cloud space through multiple attention heads. In order to avoid the aggregation of multiple attention heads, we propose a multi-head attention separation loss based on spatial similarity, so that the attention positions of different attention heads can be dispersed as much as possible. In order to reduce the impact of point cloud disorder and irregularity on feature extraction, we propose a new point cloud position coding method and use a position coding network based on this method in the local and global feature extraction modules of MASPC_Transform. We evaluate MASPC_Transform on the ROSE_X dataset. Compared with state-of-the-art approaches, the proposed MASPC_Transform achieves better segmentation results. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)

13 pages, 2112 KiB  
Article
A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet
by Bin Li, Shiao Zhu and Yi Lu
Sensors 2022, 22(21), 8235; https://doi.org/10.3390/s22218235 - 27 Oct 2022
Cited by 7 | Viewed by 2563
Abstract
Inferring objects with reasonable shape and appearance from a single picture is a challenging problem. Existing research often pays more attention to the structure of the point cloud generation network while ignoring 2D image feature extraction and the reduction of loss during feature propagation in the network. In this paper, a single-stage, single-view 3D point cloud reconstruction network, 3D-SSRecNet, is proposed. The proposed 3D-SSRecNet is a simple single-stage network composed of a 2D image feature extraction network and a point cloud prediction network. The single-stage network structure reduces the loss of the extracted 2D image features. The 2D image feature extraction network takes DetNet as its backbone, as DetNet can extract more details from 2D images. In order to generate point clouds with better shape and appearance, the point cloud prediction network uses the exponential linear unit (ELU) as the activation function, and the joint function of chamfer distance (CD) and Earth mover’s distance (EMD) is used as the loss function of 3D-SSRecNet. In order to verify the effectiveness of 3D-SSRecNet, we conducted a series of experiments on the ShapeNet and Pix3D datasets. The experimental results, measured by CD and EMD, show that 3D-SSRecNet outperforms the state-of-the-art reconstruction methods. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)
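The joint CD + EMD objective can be illustrated with a small NumPy/SciPy sketch. This is an evaluation-style version that computes EMD via an exact optimal assignment; training implementations use differentiable GPU approximations, and the equal weighting here is an assumption, not the paper's setting.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def chamfer_distance(a, b):
    """Symmetric chamfer distance between two (N, 3) / (M, 3) point sets:
    each point is matched to its nearest neighbour in the other set."""
    d = cdist(a, b)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def earth_movers_distance(a, b):
    """Exact EMD for equally sized sets via a one-to-one optimal assignment."""
    d = cdist(a, b)
    rows, cols = linear_sum_assignment(d)
    return d[rows, cols].mean()

def joint_loss(pred, gt, alpha=1.0, beta=1.0):
    """Weighted combination of CD and EMD, in the spirit of the joint loss."""
    return alpha * chamfer_distance(pred, gt) + beta * earth_movers_distance(pred, gt)
```

CD is cheap but tolerates clumped predictions; EMD's bijective matching penalizes uneven point distributions, which is why combining the two tends to improve both shape and appearance.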

Review

Jump to: Editorial, Research

27 pages, 1105 KiB  
Review
A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving
by Simegnew Yihunie Alaba and John E. Ball
Sensors 2022, 22(24), 9577; https://doi.org/10.3390/s22249577 - 07 Dec 2022
Cited by 27 | Viewed by 13991
Abstract
LiDAR is a commonly used sensor in autonomous driving, enabling accurate, robust, and fast decision-making. The sensor is used in the perception system, especially object detection, to understand the driving environment. Although 2D object detection has succeeded during the deep-learning era, the lack of depth information limits understanding of the driving environment and object location. Three-dimensional sensors, such as LiDAR, provide 3D information about the surrounding environment, which is essential for a 3D perception system. Despite the attention the computer vision community has paid to 3D object detection due to its multiple applications in robotics and autonomous driving, challenges remain, such as scale change, sparsity, uneven distribution of LiDAR data, and occlusion. Different representations of LiDAR data and methods to minimize the effect of its sparsity have been proposed. This survey presents LiDAR-based 3D object detection and feature-extraction techniques for LiDAR data. Because the 3D coordinate systems differ between camera- and LiDAR-based datasets and methods, the commonly used 3D coordinate systems are summarized. Then, state-of-the-art LiDAR-based 3D object-detection methods are reviewed with a selected comparison among methods. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)
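One common LiDAR representation discussed in such surveys, voxelization, can be sketched as grouping points into a regular grid (a hypothetical illustration: the bounds, voxel size, and function name are arbitrary assumptions, and real detectors additionally encode per-voxel features):

```python
import numpy as np

def voxelize(points, voxel_size=0.2, bounds=((-40, 40), (-40, 40), (-3, 1))):
    """Assign each in-bounds point to a voxel; return the occupied voxel
    indices and per-voxel point counts, the first step of voxel-based detectors."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    mask = np.all((points >= lo) & (points < hi), axis=1)
    idx = np.floor((points[mask] - lo) / voxel_size).astype(int)
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    return voxels, counts
```

Discretizing this way trades some spatial resolution for a dense, regular structure that mitigates the sparsity and uneven distribution of raw LiDAR returns.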
