
Intelligent Point Cloud Processing, Sensing and Understanding (Volume II)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 20 June 2024 | Viewed by 4096

Special Issue Editors


Dr. Miaohui Wang
Guest Editor
School of Information Engineering, Shenzhen University, Shenzhen 518052, China
Interests: computer vision and machine learning; image/video processing

Dr. Sukun Tian
Guest Editor
School of Mechanical Engineering, Shandong University, Jinan, China
Interests: medical image processing; deep learning; computer graphics and visualization

Special Issue Information

Dear Colleagues,

Following the success of the previous Special Issue “Intelligent Point Cloud Processing, Sensing and Understanding” (https://www.mdpi.com/journal/sensors/special_issues/IX18KRFUQ1), we are pleased to announce the next in the series, entitled “Intelligent Point Cloud Processing, Sensing and Understanding (Volume II)”.

Point clouds are one of the foundational representations of the 3D digital world, despite the irregular topology of their discrete points. Recent advances in sensor technologies that acquire point cloud data for flexible and scalable geometric representation have paved the way for new ideas, methodologies, and solutions in countless remote sensing applications. State-of-the-art sensors can capture and describe objects in a scene using dense point clouds from various platforms (satellite, aerial, UAV, vehicle-borne, backpack, handheld, and static terrestrial), perspectives (nadir, oblique, and side view), spectra (multispectral), and granularities (point density and completeness). Meanwhile, the ever-expanding application areas of point cloud processing now cover not only conventional geospatial analysis but also manufacturing, civil engineering, construction, transportation, ecology, forestry, mechanical engineering, and more.

This Special Issue aims to include contributions that focus on processing and utilizing point cloud data acquired from laser scanners and other 3D imaging systems. We are particularly interested in original papers that address innovative techniques for generating, handling, and analyzing point cloud data; challenges posed by point cloud data in emerging remote sensing applications; and the development of new applications for point cloud data.

Dr. Miaohui Wang
Dr. Sukun Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • point cloud acquisition from laser scanners, stereo vision, panoramas, camera-phone images, and oblique and satellite imagery
  • deep learning for point cloud processing
  • point cloud registration, segmentation, object detection, semantic labelling, compression and quality assessment
  • fusion of multimodal point clouds
  • modeling of LiDAR/image-based point cloud processing
  • industrial applications with large-scale point clouds
  • high-performance computing for large-scale point clouds

Published Papers (6 papers)


Research

18 pages, 12430 KiB  
Article
Comparison of Point Cloud Registration Techniques on Scanned Physical Objects
by Menthy Denayer, Joris De Winter, Evandro Bernardes, Bram Vanderborght and Tom Verstraten
Sensors 2024, 24(7), 2142; https://doi.org/10.3390/s24072142 - 27 Mar 2024
Viewed by 443
Abstract
This paper presents a comparative analysis of six prominent registration techniques for solving CAD model alignment problems. Unlike the typical approach of assessing registration algorithms with synthetic datasets, our study utilizes point clouds generated from the Cranfield benchmark. Point clouds are sampled from existing CAD models and 3D scans of physical objects, introducing real-world complexities such as noise and outliers. The acquired point cloud scans, including ground-truth transformations, are made publicly available. This dataset includes several cleaned-up scans of nine 3D-printed objects. Our main contribution lies in assessing the performance of three classical (GO-ICP, RANSAC, FGR) and three learning-based (PointNetLK, RPMNet, ROPNet) methods on real-world scans, using a wide range of metrics. These include recall, accuracy and computation time. Our comparison shows a high accuracy for GO-ICP, as well as PointNetLK, RANSAC and RPMNet combined with ICP refinement. However, apart from GO-ICP, all methods show a significant number of failure cases when applied to scans containing more noise or requiring larger transformations. FGR and RANSAC are among the quickest methods, while GO-ICP takes several seconds to solve. Finally, while learning-based methods demonstrate good performance and low computation times, they have difficulties in training and generalizing. Our results can aid novice researchers in the field in selecting a suitable registration method for their application, based on quantitative metrics. Furthermore, our code can be used by others to evaluate novel methods. Full article
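Several of the methods compared above (RANSAC, RPMNet, PointNetLK) perform best when combined with ICP refinement. As a purely illustrative sketch, and not the authors' implementation, a minimal point-to-point ICP alternates nearest-neighbour matching with a closed-form SVD (Kabsch) solve for the rigid transform:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=20):
    """Point-to-point ICP: match each src point to its nearest dst point,
    solve for the best rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours; fine for small toy clouds
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

A production pipeline would replace the O(N²) distance matrix with a k-d tree query (e.g. `scipy.spatial.cKDTree`), and, as the abstract notes, ICP alone only converges for sufficiently small initial misalignments.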

28 pages, 15051 KiB  
Article
Point Cloud Registration Method Based on Geometric Constraint and Transformation Evaluation
by Chuanli Kang, Chongming Geng, Zitao Lin, Sai Zhang, Siyao Zhang and Shiwei Wang
Sensors 2024, 24(6), 1853; https://doi.org/10.3390/s24061853 - 14 Mar 2024
Viewed by 474
Abstract
Existing point-to-point registration methods often suffer from inaccuracies caused by erroneous matches and noisy correspondences, leading to significant decreases in registration accuracy and efficiency. To address these challenges, this paper presents a new coarse registration method based on a geometric constraint and a matrix evaluation. Compared to traditional registration methods that require a minimum of three correspondences to complete the registration, the proposed method only requires two correspondences to generate a transformation matrix. Additionally, by using geometric constraints to select out high-quality correspondences and evaluating the matrix, we greatly increase the likelihood of finding the optimal result. In the proposed method, we first employ a combination of descriptors and keypoint detection techniques to generate initial correspondences. Next, we utilize the nearest neighbor similarity ratio (NNSR) to select high-quality correspondences. Subsequently, we evaluate the quality of these correspondences using rigidity constraints and salient points’ distance constraints, favoring higher-scoring correspondences. For each selected correspondence pair, we compute the rotation and translation matrix based on their centroids and local reference frames. With the transformation matrices of the source and target point clouds known, we deduce the transformation matrix of the source point cloud in reverse. To identify the best-transformed point cloud, we propose an evaluation method based on the overlap ratio and inliers points. Through parameter experiments, we investigate the performance of the proposed method under various parameter settings. By conducting comparative experiments, we verified that the proposed method’s geometric constraints, evaluation methods, and transformation matrix computation consistently outperformed other methods in terms of root mean square error (RMSE) values. 
Additionally, we validated that our chosen combination for generating initial correspondences outperforms other descriptor and keypoint detection combinations in terms of the registration result accuracy. Furthermore, we compared our method with several feature-matching registration methods, and the results demonstrate the superior accuracy of our approach. Ultimately, by testing the proposed method on various types of point cloud datasets, we convincingly established its effectiveness. Based on the evaluation and selection of correspondences and the registration result’s quality, our proposed method offers a solution with fewer iterations and higher accuracy. Full article
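The rigidity constraint used above to score correspondences rests on a simple fact: a rigid motion preserves pairwise distances, so for two correct correspondences (p_i, q_i) and (p_j, q_j), the distance |p_i − p_j| must match |q_i − q_j|. A minimal sketch of such a filter (illustrative only; the paper additionally uses salient-point distance constraints and local reference frames):

```python
import numpy as np

def rigidity_filter(src_pts, dst_pts, pairs, tol=0.05):
    """Keep pairs (i, j) of correspondence indices whose point-to-point
    distance is preserved by a rigid motion, within tolerance tol.
    Correspondence k maps src_pts[k] onto dst_pts[k]."""
    kept = []
    for i, j in pairs:
        d_src = np.linalg.norm(src_pts[i] - src_pts[j])
        d_dst = np.linalg.norm(dst_pts[i] - dst_pts[j])
        if abs(d_src - d_dst) <= tol:
            kept.append((i, j))
    return kept
```

Pairs that survive this test are the natural candidates for the two-correspondence transform estimation described in the abstract.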

30 pages, 5973 KiB  
Article
LiDAR Dynamic Target Detection Based on Multidimensional Features
by Aigong Xu, Jiaxin Gao, Xin Sui, Changqiang Wang and Zhengxu Shi
Sensors 2024, 24(5), 1369; https://doi.org/10.3390/s24051369 - 20 Feb 2024
Viewed by 569
Abstract
To address the limitations of LiDAR dynamic target detection methods, which often require heuristic thresholding, indirect computational assistance, supplementary sensor data, or postdetection, we propose an innovative method based on multidimensional features. Using the differences between the positions and geometric structures of point cloud clusters scanned by the same target in adjacent frame point clouds, the motion states of the point cloud clusters are comprehensively evaluated. To enable the automatic precision pairing of point cloud clusters from adjacent frames of the same target, a double registration algorithm is proposed for point cloud cluster centroids. The iterative closest point (ICP) algorithm is employed for approximate interframe pose estimation during coarse registration. The random sample consensus (RANSAC) and four-parameter transformation algorithms are employed to obtain precise interframe pose relations during fine registration. These processes standardize the coordinate systems of adjacent point clouds and facilitate the association of point cloud clusters from the same target. Based on the paired point cloud cluster, a classification feature system is used to construct the XGBoost decision tree. To enhance the XGBoost training efficiency, a Spearman’s rank correlation coefficient-bidirectional search for a dimensionality reduction algorithm is proposed to expedite the optimal classification feature subset construction. After preliminary outcomes are generated by XGBoost, a double Boyer–Moore voting-sliding window algorithm is proposed to refine the final LiDAR dynamic target detection accuracy. To validate the efficacy and efficiency of our method in LiDAR dynamic target detection, an experimental platform is established. Real-world data are collected and pertinent experiments are designed. The experimental results illustrate the soundness of our method. 
The LiDAR dynamic target correct detection rate is 92.41%, the static target error detection rate is 1.43%, and the detection efficiency is 0.0299 s. Our method exhibits notable advantages over open-source comparative methods, achieving highly efficient and precise LiDAR dynamic target detection. Full article
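The dimensionality-reduction step above ranks candidate features by Spearman's rank correlation coefficient, which is simply the Pearson correlation computed on ranks rather than raw values, so it captures any monotone relationship. A minimal NumPy version (no tie handling; `scipy.stats.spearmanr` is the robust alternative, and this is not the paper's bidirectional-search code):

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Assumes no ties; argsort of argsort yields each value's rank."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

A feature-selection pass can then drop one feature from each pair whose |ρ| exceeds a redundancy threshold before training the XGBoost classifier.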

14 pages, 1951 KiB  
Article
SAE3D: Set Abstraction Enhancement Network for 3D Object Detection Based Distance Features
by Zheng Zhang, Zhiping Bao, Qing Tian and Zhuoyang Lyu
Sensors 2024, 24(1), 26; https://doi.org/10.3390/s24010026 - 20 Dec 2023
Viewed by 555
Abstract
With the increasing demand from unmanned driving and robotics, more attention has been paid to point-cloud-based 3D object accurate detection technology. However, due to the sparseness and irregularity of the point cloud, the most critical problem is how to utilize the relevant features more efficiently. In this paper, we proposed a point-based object detection enhancement network to improve the detection accuracy in the 3D scenes understanding based on the distance features. Firstly, the distance features are extracted from the raw point sets and fused with the raw features regarding reflectivity of the point cloud to maximize the use of information in the point cloud. Secondly, we enhanced the distance features and raw features, which we collectively refer to as self-features of the key points, in set abstraction (SA) layers with the self-attention mechanism, so that the foreground points can be better distinguished from the background points. Finally, we revised the group aggregation module in SA layers to enhance the feature aggregation effect of key points. We conducted experiments on the KITTI dataset and nuScenes dataset and the results show that the enhancement method proposed in this paper has excellent performance. Full article
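The first step above, deriving distance features from the raw point set and fusing them with the reflectivity channel, amounts to appending each point's range from the sensor origin as an extra feature channel. A hypothetical minimal version of that fusion:

```python
import numpy as np

def add_distance_feature(points, feats):
    """Append each point's Euclidean distance from the sensor origin as an
    extra channel alongside its raw features (e.g. reflectivity).
    points: (N, 3) xyz coordinates; feats: (N, C) raw per-point features."""
    dist = np.linalg.norm(points, axis=1, keepdims=True)   # (N, 1)
    return np.concatenate([feats, dist], axis=1)           # (N, C + 1)
```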

14 pages, 3083 KiB  
Article
RRGA-Net: Robust Point Cloud Registration Based on Graph Convolutional Attention
by Jian Qian and Dewen Tang
Sensors 2023, 23(24), 9651; https://doi.org/10.3390/s23249651 - 6 Dec 2023
Cited by 1 | Viewed by 813
Abstract
The problem of registering point clouds in scenarios with low overlap is explored in this study. Previous methodologies depended on having a sufficient number of repeatable keypoints to extract correspondences, making them less effective in partially overlapping environments. In this paper, a novel learning network is proposed to optimize correspondences in sparse keypoints. Firstly, a multi-layer channel sampling mechanism is suggested to enhance the information in point clouds, and keypoints were filtered and fused at multi-layer resolutions to form patches through feature weight filtering. Moreover, a template matching module is devised, comprising a self-attention mapping convolutional neural network and a cross-attention network. This module aims to match contextual features and refine the correspondence in overlapping areas of patches, ultimately enhancing correspondence accuracy. Experimental results demonstrate the robustness of our model across various datasets, including ModelNet40, 3DMatch, 3DLoMatch, and KITTI. Notably, our method excels in low-overlap scenarios, showcasing superior performance. Full article
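The cross-attention network in the template matching module lets features of one patch query features of the other. The core operation is scaled dot-product attention; the sketch below is illustrative only and omits the learned query/key/value projections and multi-head structure a real module would have:

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention: each query row produces a convex
    combination of the value rows, weighted by feature similarity.
    q_feats: (Nq, d) queries; kv_feats: (Nk, d) keys, reused here as values."""
    d = q_feats.shape[1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)        # (Nq, Nk) similarities
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)                 # softmax over keys
    return w @ kv_feats
```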

16 pages, 7096 KiB  
Article
Dimensioning Cuboid and Cylindrical Objects Using Only Noisy and Partially Observed Time-of-Flight Data
by Bryan Rodriguez, Prasanna Rangarajan, Xinxiang Zhang and Dinesh Rajan
Sensors 2023, 23(21), 8673; https://doi.org/10.3390/s23218673 - 24 Oct 2023
Viewed by 700
Abstract
One of the challenges of using Time-of-Flight (ToF) sensors for dimensioning objects is that the depth information suffers from issues such as low resolution, self-occlusions, noise, and multipath interference, which distort the shape and size of objects. In this work, we successfully apply a superquadric fitting framework for dimensioning cuboid and cylindrical objects from point cloud data generated using a ToF sensor. Our work demonstrates that an average error of less than 1 cm is possible for a box with the largest dimension of about 30 cm and a cylinder with the largest dimension of about 20 cm that are each placed 1.5 m from a ToF sensor. We also quantify the performance of dimensioning objects using various object orientations, ground plane surfaces, and model fitting methods. For cuboid objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 4% and 9% using the bounding technique and between 8% and 15% using the mirroring technique across all tested surfaces. For cylindrical objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 2.97% and 6.61% when the object is in a horizontal orientation and between 8.01% and 13.13% when the object is in a vertical orientation using the bounding technique across all tested surfaces. Full article
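The superquadric fitting framework above models both shape classes with one implicit surface, the inside-outside function F: F = 1 on the surface, F < 1 inside, F > 1 outside. Shape exponents ε1 = ε2 = 1 give an ellipsoid, exponents near 0 flatten it toward a cuboid, and small ε1 with ε2 = 1 approximates a cylinder; fitting then minimizes a residual such as Σ(F^ε1 − 1)² over sizes and pose. A sketch of the function itself, assuming an origin-centred, axis-aligned shape (pose estimation omitted):

```python
import numpy as np

def superquadric_F(pts, a1, a2, a3, e1, e2):
    """Inside-outside function of an origin-centred, axis-aligned superquadric
    with semi-axes a1, a2, a3 and shape exponents e1, e2:
    F == 1 on the surface, F < 1 inside, F > 1 outside."""
    x, y, z = np.abs(pts).T                   # symmetric in every octant
    return ((x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2.0 / e1)
```

Given a segmented point cloud, the recovered semi-axes a1, a2, a3 directly yield the object's dimensions.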
