Optical 3D Sensing Systems

A special issue of Photonics (ISSN 2304-6732).

Deadline for manuscript submissions: closed (16 May 2022) | Viewed by 23078

Special Issue Editors


Guest Editor
College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
Interests: optical metrology; 3D imaging; computer vision; structured light; phase retrieval
Department of Mechanical Engineering, Iowa State University, Ames, IA 50011, USA
Interests: superfast 3D optical sensing; multi-scale 3D optical metrology; machine/computer vision; in-situ manufacturing inspection and quality control
College of Engineering, Anhui Agricultural University, Hefei 230036, China
Interests: machine vision; optical measurement; smart agriculture

Special Issue Information

Dear colleagues,

Optical 3D sensing, which acquires surface geometry without physically touching the measured objects, plays an increasingly critical role in numerous fields such as industry, agriculture, medicine, and entertainment. Advances in electronic sensors, computational power, and deep learning have greatly promoted the development of optical 3D sensing techniques. This Special Issue focuses on optical 3D sensing techniques and their applications. Researchers have developed a variety of 3D sensing systems based on technologies such as structured light, stereo vision, and time-of-flight (TOF), along with dedicated hardware and software for high-speed, accurate, compact, convenient, and intelligent sensing. The topics of this Special Issue include, but are not limited to: novel and advanced optical systems, information processing methods, and applications of optical 3D sensing.

Prof. Dr. Yajun Wang
Prof. Dr. Beiwen Li
Prof. Dr. Yuwei Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Photonics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • optical metrology
  • 3D shape measurement
  • surface characterization
  • photomechanics testing
  • fringe analysis
  • vision calibration
  • phase retrieval
  • point cloud processing
  • image processing
  • deep learning

Published Papers (13 papers)


Research

12 pages, 5169 KiB  
Article
Half-Period Gray-Level Coding Strategy for Absolute Phase Retrieval
by Zipeng Ran, Bo Tao, Liangcai Zeng and Xiangcheng Chen
Photonics 2022, 9(7), 492; https://doi.org/10.3390/photonics9070492 - 14 Jul 2022
Cited by 5 | Viewed by 1242
Abstract
N-ary gray-level (nGL) coding is an effective strategy for absolute phase retrieval in the fringe projection technique. However, the conventional nGL method produces many unwrapping errors at the boundaries of codewords, and the number of codewords available in a single pattern is limited. This paper therefore proposes a new gray-level coding method based on half-period coding, which addresses both deficiencies. Specifically, we embed every period with a 2-bit codeword instead of a 1-bit codeword. Special correction and decoding methods are then proposed to correct the codewords and calculate the fringe orders, respectively. The proposed method can generate n² codewords with n gray levels in one pattern. Moreover, the method is insensitive to moderate image blurring. Various experiments demonstrate the robustness and effectiveness of the proposed strategy.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
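The codeword arithmetic behind the half-period scheme can be sketched in a few lines (an illustrative simplification, not the authors' correction and decoding pipeline): two half-period gray-level codes combine into one fringe order, which converts the wrapped phase into absolute phase.

```python
import math

def fringe_order(c1, c2, n):
    """Two half-period gray-level codes (each in 0..n-1) combine into
    one fringe order, giving n**2 orders from only n gray levels."""
    return c1 * n + c2

def absolute_phase(wrapped, order):
    """Unwrap by adding 2*pi times the decoded fringe order."""
    return wrapped + 2 * math.pi * order

k = fringe_order(2, 3, n=4)       # one of 16 orders from 4 gray levels
phi = absolute_phase(0.5, k)
```

With four gray levels this yields 16 distinguishable periods, versus 4 for a conventional one-code-per-period scheme.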

13 pages, 2495 KiB  
Article
Depth Estimation Using Feature Pyramid U-Net and Polarized Self-Attention for Road Scenes
by Bo Tao, Yunfei Shen, Xiliang Tong, Du Jiang and Baojia Chen
Photonics 2022, 9(7), 468; https://doi.org/10.3390/photonics9070468 - 04 Jul 2022
Cited by 2 | Viewed by 1771
Abstract
Studies have shown that observed image texture details and semantic information are of great significance for depth estimation in road scenes. However, previous methods produce ambiguous and inaccurate boundary information for observed objects. We therefore designed a new depth estimation method that achieves higher accuracy and more accurate object boundaries. Based on polarized self-attention (PSA) and a feature pyramid U-net, we propose a new self-supervised monocular depth estimation model that extracts more accurate texture details and semantic information. First, we add a PSA module at the end of the depth encoder and pose encoder so that the network can extract more accurate semantic information. Then, based on the U-net, we feed the multi-scale images obtained by the FPN (Feature Pyramid Network) object detection module directly into the decoder, guiding the model to learn semantic information and enhancing image boundaries. We evaluated our method on the KITTI 2015 and Make3D datasets, where our model achieved better results than previous studies. To verify the generalization of the model, we conducted monocular, stereo, and monocular-plus-stereo experiments; the results show that our model achieves better scores on several main evaluation metrics and produces clearer boundary information. Ablation experiments comparing different forms of the PSA mechanism show that adding the PSA module improves the evaluation metrics. We also found that our model performs better with monocular training than with stereo or monocular-plus-stereo training.
(This article belongs to the Special Issue Optical 3D Sensing Systems)

13 pages, 6179 KiB  
Article
Deformation Measurements of Helicopter Rotor Blades Using a Photogrammetric System
by Chenglin Zuo, Jun Ma, Chunhua Wei, Tingrui Yue and Jin Song
Photonics 2022, 9(7), 466; https://doi.org/10.3390/photonics9070466 - 03 Jul 2022
Cited by 1 | Viewed by 1599
Abstract
As an important part of the helicopter, the rotor directly affects flight safety and flight quality. Knowledge of rotor dynamic behavior is significant for validating and optimizing the performance of the helicopter rotor system. In this study, a photogrammetric system based on 3D point tracking and stereo photogrammetry is presented to recover the full-field dynamic motion and deformation parameters of rotating blades by identifying retro-reflective targets arranged on the rotor. The system is demonstrated in wind tunnel tests of a 2 m-diameter model rotor, conducted at the 5.5 m × 4 m Aeroacoustic Wind Tunnel of the China Aerodynamics Research and Development Center (CARDC). With targets attached to a special hat installed directly over the rotor hub, a unified rotor coordinate system that is stationary with respect to the rotor can be established at any measuring instant. Therefore, by transforming the 3D coordinates of all measured targets into the rotor coordinate system, the blade displacements and deformations at different test conditions can be calculated consistently. Experimental results from the current study agree well with simulation results calculated by the comprehensive analytical model of rotorcraft aerodynamics and dynamics (CAMRAD).
(This article belongs to the Special Issue Optical 3D Sensing Systems)
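The core coordinate transformation described above — expressing measured targets in a rotor-fixed frame — can be sketched as follows (a simplified model assuming a known rotation angle about the hub's z-axis; the actual system derives the frame from the hub-mounted targets at each instant):

```python
import math

def to_rotor_frame(points, theta, origin):
    """Express camera-frame 3D points in a rotor-fixed frame that has
    rotated by angle theta about the hub's z-axis (illustrative model)."""
    ox, oy, oz = origin
    c, s = math.cos(-theta), math.sin(-theta)   # apply the inverse rotation
    out = []
    for x, y, z in points:
        x, y, z = x - ox, y - oy, z - oz        # translate to the hub
        out.append((c * x - s * y, s * x + c * y, z))
    return out

# a target seen at 90 degrees of rotation maps back to the blade's rest azimuth
pt = to_rotor_frame([(0.0, 1.0, 0.2)], math.pi / 2, (0.0, 0.0, 0.0))[0]
```

In the rotor frame, a rigid blade point stays fixed across frames, so any residual displacement is deformation.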

20 pages, 17024 KiB  
Article
SCDeep: Single-Channel Depth Encoding for 3D-Range Geometry Compression Utilizing Deep-Learning Techniques
by Matthew G. Finley, Broderick S. Schwartz, Jacob Y. Nishimura, Bernice Kubicek and Tyler Bell
Photonics 2022, 9(7), 449; https://doi.org/10.3390/photonics9070449 - 27 Jun 2022
Cited by 1 | Viewed by 1612
Abstract
Recent advances in optics and computing technologies have encouraged many applications to adopt the use of three-dimensional (3D) data for the measurement and visualization of the world around us. Modern 3D-range scanning systems have become much faster than real-time and are able to capture data with incredible precision. However, increasingly fast acquisition speeds and high fidelity data come with increased storage and transmission costs. In order to enable applications that wish to utilize these technologies, efforts must be made to compress the raw data into more manageable formats. One common approach to compressing 3D-range geometry is to encode its depth information within the three color channels of a traditional 24-bit RGB image. To further reduce file sizes, this paper evaluates two novel approaches to the recovery of floating-point 3D range data from only a single-channel 8-bit image using machine learning techniques. Specifically, the recovery of depth data from a single channel is enabled through the use of both semantic image segmentation and end-to-end depth synthesis. These two distinct approaches show that machine learning techniques can be utilized to enable significant file size reduction while maintaining reconstruction accuracy suitable for many applications. For example, a complex set of depth data encoded using the proposed method, stored in the JPG 20 format, and recovered using semantic segmentation techniques was able to achieve an average RMS reconstruction accuracy of 99.18% while achieving an average compression ratio of 106:1 when compared to the raw floating-point data. When end-to-end synthesis techniques were applied to the same encoded dataset, an average reconstruction accuracy of 99.59% was experimentally demonstrated for the same average compression ratio.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
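As a baseline for the single-channel idea, depth can be packed into one 8-bit channel by uniform quantization; the paper's contribution is recovering finer precision from such an encoding with learned models. A minimal sketch of the naive encode/decode round trip (illustrative only, not the SCDeep codec):

```python
def encode_depth(depth, zmin, zmax):
    """Uniformly quantize floating-point depth into one 8-bit channel."""
    return [round(255 * (z - zmin) / (zmax - zmin)) for z in depth]

def decode_depth(codes, zmin, zmax):
    """Recover approximate depth from the 8-bit codes."""
    return [zmin + (zmax - zmin) * c / 255 for c in codes]

depth = [0.10, 0.55, 0.90]
rec = decode_depth(encode_depth(depth, 0.0, 1.0), 0.0, 1.0)
rms = (sum((a - b) ** 2 for a, b in zip(depth, rec)) / len(depth)) ** 0.5
```

The quantization error of this baseline is bounded by half a code step (here 0.5/255 of the depth range), which is the floor the learned recovery methods aim to beat at much higher compression ratios.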

12 pages, 3277 KiB  
Article
Self-Supervised Monocular Depth Estimation Based on Channel Attention
by Bo Tao, Xinbo Chen, Xiliang Tong, Du Jiang and Baojia Chen
Photonics 2022, 9(6), 434; https://doi.org/10.3390/photonics9060434 - 20 Jun 2022
Cited by 4 | Viewed by 1946
Abstract
Scene structure and local details are important factors in producing high-quality depth estimations so as to solve fuzzy artifacts in depth prediction results. We propose a new network structure that combines two channel attention modules in a deep prediction network. The structure perception module (spm) uses a frequency channel attention network. We use frequencies from different perspectives to analyze the channel representation as a compression process. This enhances the perception of the scene structure and obtains more feature information. The detail emphasis module (dem) adopts the global attention mechanism. It improves the performance of deep neural networks by reducing irrelevant information and magnifying global interactive representations. Emphasizing important details effectively fuses features at different scales to achieve more accurate and clearer depth predictions. Experiments show that our network produces clearer depth estimations, and our accuracy rate on the KITTI benchmark has improved from 98.1% to 98.3% in the δ < 1.25³ metric.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
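The general mechanism of channel attention — pool each channel to a scalar, derive per-channel weights, and rescale — can be sketched as follows (a toy squeeze-and-excitation-style example, not the paper's spm/dem modules):

```python
import math

def channel_attention(channels):
    """Toy channel attention: channels is a list of 2D feature maps.
    Squeeze each map to its global mean, turn the means into softmax
    weights, and rescale every channel by its weight."""
    means = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
             for ch in channels]
    exps = [math.exp(m) for m in means]
    weights = [e / sum(exps) for e in exps]
    return [[[w * v for v in row] for row in ch]
            for w, ch in zip(weights, channels)]

# a strongly activated channel keeps more of its response than a weak one
out = channel_attention([[[1.0, 1.0]], [[0.0, 0.0]]])
```

Real modules replace the softmax with a small learned bottleneck (and, in the paper, frequency-domain and global variants), but the rescale-by-channel principle is the same.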

9 pages, 3019 KiB  
Communication
Elimination of Scintillation Noise Caused by External Environment Disturbances in Open Space
by Qi-Xing Tang, Hua Gao, Yu-Jun Zhang and Dong Chen
Photonics 2022, 9(6), 415; https://doi.org/10.3390/photonics9060415 - 15 Jun 2022
Cited by 2 | Viewed by 1272
Abstract
External environment disturbances in open space cause scintillation noise in tunable diode laser absorption spectroscopy (TDLAS), which is used to detect the concentration of gases in air. However, most gases analyzed by TDLAS are present in trace amounts in air. Thus, useful information is typically submerged in strong noise, thereby reducing the detection accuracy. Herein, a method is proposed to eliminate the scintillation noise caused by external environment disturbances in open space. First, the submerged signal is detected via fast coarse-tuning filtering. Then, scintillation noise is eliminated through the extraction and reconstruction of the main feature information. Thereafter, the background signal is obtained by unequal precision. Furthermore, adaptive iterative fitting is performed. Finally, an experimental setup is established for atmospheric detection in an open optical path. The experimental results show that the COD and RSS fitted using the traditional method are 0.87859 and 1.5772 × 10⁻⁵, respectively, and those fitted using the proposed method are 0.91448 and 8.81639 × 10⁻⁶, respectively. The field results imply that the proposed method has improved accuracy for detecting trace gases in open space and can be employed for practical engineering applications.
(This article belongs to the Special Issue Optical 3D Sensing Systems)

13 pages, 3346 KiB  
Article
Fast Point Cloud Registration Algorithm Based on 3DNPFH Descriptor
by Bo You, Hongyu Chen, Jiayu Li, Changfeng Li and Hui Chen
Photonics 2022, 9(6), 414; https://doi.org/10.3390/photonics9060414 - 15 Jun 2022
Cited by 5 | Viewed by 2234
Abstract
Although researchers have investigated a variety of approaches to the development of three-dimensional (3D) point cloud matching algorithms, the results have been limited by low accuracy and slow speed when registering large numbers of point cloud data. To address this problem, a new fast point cloud registration algorithm based on a 3D neighborhood point feature histogram (3DNPFH) descriptor is proposed. With a 3DNPFH, the 3D key-point locations are first transformed into a new 3D coordinate system, so that key points generated from similar 3D surfaces lie close to each other in the newly generated space. Subsequently, a neighborhood point feature histogram (NPFH) is designed to encode neighborhood information by combining the normal vectors, curvature, and distance features of a point cloud, thus forming the 3DNPFH (3D + NPFH). The descriptor searches radially for 3D key-point locations in the new coordinate system, reducing the search space for corresponding point pairs. The point clouds are then coarsely aligned using the random sample consensus (RANSAC) algorithm. Experimental results show that the algorithm is fast and maintains high alignment accuracy on several popular benchmark datasets, as well as on our own data.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
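The RANSAC-style coarse alignment step can be illustrated with a deterministic variant that tries each tentative correspondence as a minimal sample (a simplified 2D-translation model for illustration; the paper estimates a full rigid transform from descriptor matches):

```python
def coarse_align(src, dst, tol=0.05):
    """RANSAC-style coarse alignment: each tentative correspondence
    proposes a translation; keep the proposal with the most inliers.
    (Illustrative 2D-translation model, not a full rigid estimate.)"""
    best_t, best_count = None, -1
    for (sx, sy), (dx, dy) in zip(src, dst):
        t = (dx - sx, dy - sy)
        count = sum(1 for (ax, ay), (bx, by) in zip(src, dst)
                    if abs(ax + t[0] - bx) < tol and abs(ay + t[1] - by) < tol)
        if count > best_count:
            best_t, best_count = t, count
    return best_t

src = [(0, 0), (1, 0), (0, 1), (2, 2)]
dst = [(1, 1), (2, 1), (1, 2), (9, 9)]   # last correspondence is an outlier
t = coarse_align(src, dst)
```

The inlier vote makes the estimate robust to the wrong match, which is exactly why RANSAC suits descriptor-based correspondences.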

13 pages, 2458 KiB  
Article
A Lightweight Semantic Segmentation Model of Wucai Seedlings Based on Attention Mechanism
by Wen Li, Chao Liu, Minhui Chen, Dequan Zhu, Xia Chen and Juan Liao
Photonics 2022, 9(6), 393; https://doi.org/10.3390/photonics9060393 - 02 Jun 2022
Cited by 3 | Viewed by 1431
Abstract
Accurate wucai seedling segmentation is of great significance for growth detection, seedling location, and phenotype detection. To segment wucai seedlings accurately in a natural environment, this paper presents a lightweight segmentation model in which U-Net is used as the backbone network. Specifically, to improve the model's ability to extract features from wucai seedlings of different sizes, a multi-branch convolution block based on the inception structure is proposed and used to design the encoder. The expectation-maximization attention module is added to enhance the model's attention to the segmentation object. In addition, because a large number of parameters increases both the difficulty of network training and the computational cost, depth-wise separable convolution is applied in place of the original convolution in the decoding stage to lighten the model. The experimental results show that the precision, recall, MIOU, and F1-score of the proposed model on the self-built wucai seedling dataset are 0.992, 0.973, 0.961, and 0.982, respectively, and the average recognition time of a single frame image is 0.0066 s. Compared with several state-of-the-art models, the proposed model achieves better segmentation performance with a smaller parameter scale and higher real-time performance. It can therefore achieve a good segmentation effect for wucai seedlings in a natural environment, providing an important basis for target spraying, growth recognition, and other applications.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
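The parameter savings from depth-wise separable convolution are easy to quantify: a standard k×k convolution costs k·k·C_in·C_out weights, while the separable version costs k·k·C_in (depthwise) plus C_in·C_out (pointwise). A quick check with hypothetical layer sizes:

```python
def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Depthwise k x k per input channel, then 1 x 1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

# e.g. a 3x3 layer taking 64 channels to 128 (hypothetical sizes)
standard = conv_params(3, 64, 128)
separable = separable_params(3, 64, 128)
ratio = standard / separable
```

For this layer the separable form needs roughly an eighth of the weights, which is the kind of reduction that lightens a decoder without changing its receptive field.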

12 pages, 4164 KiB  
Article
Nonlinear Error Correction for Color Phase-Shifting Profilometry with Histogram Equalization
by Bolin Cai, Haojie Zhu, Chenen Tong and Lu Liu
Photonics 2022, 9(6), 385; https://doi.org/10.3390/photonics9060385 - 30 May 2022
Cited by 1 | Viewed by 1459
Abstract
Because color patterns with multiple channels can carry more information than gray patterns with only one channel, color phase-shifting profilometry (CPSP) has been widely used for high-speed, three-dimensional (3D) shape measurement. However, the accuracy of CPSP suffers from nonlinear errors caused by color crosstalk. This paper presents an effective nonlinear error correction method for CPSP based on histogram equalization. The two main steps of the proposed method are eliminating nonlinear errors with histogram equalization and optimizing the results using a spline fitting algorithm. Compared with other compensation methods, the proposed approach does not require any precalibration information or additional patterns, which are very time-consuming. The simulations and experiments indicate that the proposed method has a promising performance for nonlinear error elimination.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
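Classic histogram equalization, the first step of the method above, maps each gray level through the normalized cumulative histogram. A minimal 8-bit sketch (generic textbook form; the paper applies it to CPSP fringe channels and then spline-fits the result):

```python
def equalize(pixels, levels=256):
    """Map each gray level through the normalized cumulative histogram,
    flattening the intensity distribution (classic histogram equalization)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total, n = [], 0, len(pixels)
    for h in hist:
        total += h
        cdf.append(total / n)
    return [round((levels - 1) * cdf[p]) for p in pixels]

# a gamma-compressed ramp is stretched back toward uniform spacing
out = equalize([10, 10, 40, 90, 160, 250, 250, 250])
```

Because a gamma distortion is monotonic, equalizing the distorted fringe intensities pushes them back toward the distribution an ideal sinusoid would produce, without any precalibration.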

12 pages, 3816 KiB  
Article
Stem and Calyx Identification of 3D Apples Using Multi-Threshold Segmentation and 2D Convex Hull
by Man Xia, Haojie Zhu, Yuwei Wang, Jiaxu Cai and Lu Liu
Photonics 2022, 9(5), 346; https://doi.org/10.3390/photonics9050346 - 15 May 2022
Cited by 3 | Viewed by 1870
Abstract
Traditional machine vision is widely used to assess apple quality, but it has difficulty distinguishing the apple stem and calyx from defects. To address this, we designed a new method to identify the stem and calyx of apples based on their concave shape. The method applies fringe projection in a 3D-reconstruction computer vision system, followed by multi-threshold segmentation and a 2D convex hull technique to identify the stem and calyx. A camera and projector were used to reconstruct the 3D surface of the front half of an inspected apple, with the height of each pixel recovered by fringe projection and mathematical transformation. The 3D reconstruction was then subjected to multi-threshold segmentation, whose results contain a concave feature in the curved line representing the concave stem and calyx. Applying a 2D convex hull technique to the segmentation results allows the stem and calyx to be identified. Evaluated on four groups of apples, the proposed method identifies the stem and calyx with 98.93% accuracy.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
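The concavity test at the heart of the method can be sketched with a monotone-chain upper hull over a height profile: points that fall below the hull mark a depression, i.e. candidate stem/calyx locations (an illustrative 2D simplification of the paper's pipeline):

```python
def upper_hull(points):
    """Upper convex hull of 2D points (monotone chain, left to right).
    Pop only on strict left turns so collinear points stay on the hull."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            cross = (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox)
            if cross > 0:      # left turn: the last point dips below the hull
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def concave_points(profile):
    """Points missing from the upper hull, i.e. lying in a depression."""
    hull = set(upper_hull(profile))
    return [p for p in profile if p not in hull]

# a dip in an otherwise flat height profile is flagged as concave
dip = concave_points([(0, 1.0), (1, 1.0), (2, 0.3), (3, 1.0), (4, 1.0)])
```

On a convex fruit surface the hull hugs the profile everywhere except at the stem/calyx pit, which is what makes the hull deviation a reliable discriminator against surface defects.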

13 pages, 1371 KiB  
Article
Accelerated Phase Deviation Elimination for Measuring Moving Object Shape with Phase-Shifting-Profilometry
by Wei Liu, Xi Wang, Zhipeng Chen, Yi Ding and Lei Lu
Photonics 2022, 9(5), 295; https://doi.org/10.3390/photonics9050295 - 27 Apr 2022
Cited by 2 | Viewed by 1424
Abstract
Eliminating the phase deviation caused by object motion plays a vital role in obtaining a precise phase map for recovering object shape with phase-shifting profilometry. Pixel-by-pixel phase retrieval using the least-squares algorithm has been widely employed to eliminate the phase deviation caused by a moving object. However, pixel-level operation can only eliminate phase deviation within a limited range and brings a high computational burden. In this paper, we propose an image-level phase compensation method with a stochastic gradient descent (SGD) algorithm to accelerate the elimination of phase deviation. Since the iterative calculation is implemented at the image level, the proposed method accelerates convergence significantly. Furthermore, since the proposed algorithm is able to correct phase deviations within (−π, π), it can tolerate a greater motion range. In addition to simulation experiments, we consider 2D motion of the object and conduct a series of comparative experiments to validate the effectiveness of the proposed method over a larger motion range.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
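The image-level idea can be illustrated with a toy gradient descent that estimates a single phase-shift deviation shared by all pixels (plain gradient descent on simulated four-step fringes; a stand-in for the paper's SGD scheme, not its implementation):

```python
import math

def estimate_shift(observed, A, B, phi, lr=0.05, iters=500):
    """Toy image-level correction: gradient descent on one unknown
    phase-shift deviation eps shared by every sample (illustrative only)."""
    eps, N = 0.0, len(observed)
    for _ in range(iters):
        grad = 0.0
        for n, I_obs in enumerate(observed):
            delta = 2 * math.pi * n / N
            model = A + B * math.cos(phi - delta - eps)
            # d(model)/d(eps) = B * sin(phi - delta - eps)
            grad += 2.0 * (model - I_obs) * B * math.sin(phi - delta - eps)
        eps -= lr * grad
    return eps

# four-step fringes distorted by a 0.3 rad motion-induced shift deviation
obs = [2 + math.cos(1.0 - 2 * math.pi * n / 4 - 0.3) for n in range(4)]
eps = estimate_shift(obs, A=2, B=1, phi=1.0)
```

Because the deviation is estimated once per image rather than per pixel, each iteration is cheap, which is the source of the acceleration the abstract describes.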

13 pages, 3095 KiB  
Article
Real-Time Phase Retrieval Based on Cube-Corner Prisms Single Exposure
by Hong Cheng, Xiaotian Zhu, Ju Li and Zhengguang Tian
Photonics 2022, 9(4), 230; https://doi.org/10.3390/photonics9040230 - 01 Apr 2022
Viewed by 1656
Abstract
Phase retrieval methods based on the Transport of Intensity Equation need to record intensity information on two or more planes perpendicular to the optical axis. Usually, a single CCD camera is moved back and forth for recording, which not only introduces mechanical errors but also leaves a time difference between the collected intensity images, so the real-time requirement cannot be met. In this paper, a single-exposure phase retrieval technique based on cube-corner prisms is proposed. This method can collect the required intensity images simultaneously in a single exposure and then calculate the phase after registration and repair, obtaining high-precision results. Exploiting the parallel-reflection characteristics of cube-corner prisms, the experimental system is designed so that the two beams separated by the beam splitter are staggered while the upper and lower propagation distances of a single beam remain equal. Finally, the accuracy and effectiveness of the proposed method are fully verified by simulations and experimental measurements.
(This article belongs to the Special Issue Optical 3D Sensing Systems)

11 pages, 5275 KiB  
Article
Intensity-Averaged Double Three-Step Phase-Shifting Algorithm with Color-Encoded Fringe Projection
by Yuwei Wang, Haojie Zhu, Jiaxu Cai and Yajun Wang
Photonics 2022, 9(3), 173; https://doi.org/10.3390/photonics9030173 - 10 Mar 2022
Cited by 2 | Viewed by 2046
Abstract
Fringe projection profilometry (FPP) has been broadly employed for three-dimensional shape measurement. However, the measurement accuracy suffers from gamma nonlinearity. This paper proposes an intensity-averaged double three-step phase-shifting (IDTP) algorithm using color-encoded fringe projection, which requires neither complex calibration processes nor extra fringe patterns. Specifically, two phase maps with a π/2 phase shift are encoded into the red and blue channels of the color fringe patterns. The average of the red- and blue-channel fringe patterns is approximately sinusoidal with little harmonic content and can therefore be used directly for accurate phase recovery. Additionally, an adaptive weight is estimated for the averaging operation to suppress the effect of color crosstalk. Both simulations and experiments demonstrate that the proposed IDTP algorithm can effectively eliminate nonlinear phase errors.
(This article belongs to the Special Issue Optical 3D Sensing Systems)
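The phase-shifting arithmetic underlying IDTP is the standard N-step formula, which recovers the wrapped phase from equally shifted sinusoidal fringes (the generic formula, not the paper's averaging and weighting scheme):

```python
import math

def wrapped_phase(intensities):
    """Wrapped phase from N equally shifted fringe intensities
    I_n = A + B*cos(phi - 2*pi*n/N), the standard N-step formula."""
    N = len(intensities)
    s = sum(I * math.sin(2 * math.pi * n / N) for n, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * n / N) for n, I in enumerate(intensities))
    return math.atan2(s, c)

# four-step example with A = 2, B = 1, and a true phase of 1.0 rad
I = [2 + math.cos(1.0 - 2 * math.pi * n / 4) for n in range(4)]
phi = wrapped_phase(I)
```

Gamma nonlinearity adds harmonics to I_n that break this formula's exactness; averaging the π/2-shifted red and blue channels suppresses those harmonics before the formula is applied.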
