Article

Optimal Coherent Point Selection for 3D Quality Inspection from Silhouette-Based Reconstructions

by
Javier Pérez Soler
*,
Jose-Luis Guardiola
,
Alberto Perez Jimenez
,
Pau Garrigues Carbó
,
Nicolás García Sastre
and
Juan-Carlos Perez-Cortes
Instituto Tecnológico de Informática (ITI), Universitat Politècnica de València, 46022 Valencia, Spain
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(21), 4419; https://doi.org/10.3390/math11214419
Submission received: 14 September 2023 / Revised: 11 October 2023 / Accepted: 18 October 2023 / Published: 25 October 2023

Abstract

3D geometric quality inspection involves assessing a reconstructed object against a predefined reference model or design that defines its expected volume. Achieving precise 3D object geometry reconstruction from multiple views can be challenging. In this research, we propose a camera-coherent point selection method to measure differences with the reference. The result is a point cloud, extracted from the reconstruction, that represents the best-case scenario, ensuring that any deviation from the reference is represented as seen from the cameras. The algorithm has been tested in both simulated and real conditions, reducing reconstruction errors to as little as one fifth of their value compared with traditional 3D reconstruction methodologies. Furthermore, this strategy ensures that any reported difference with the reference actually exists, since it is measured under a best-case assumption. It offers a fast and robust pipeline for comprehensive 3D geometric quality assurance, contributing significantly to advancements in the field of 3D object inspection.

1. Introduction

Quality inspection is an integral aspect of industrial production, essential for ensuring the manufacture of high-quality items that can be seamlessly integrated into fully functional products, free from defects. However, the majority of quality inspection processes continue to rely on manual labor or employ specialized systems tailored for a specific product or design.
For these reasons, quality inspection is an expensive process that significantly increases final product costs and production times. This research contributes to closing the gap towards a quality inspection system capable of inspecting a wide range of items with minimal adjustments. Such a technological solution holds the potential to enhance final product quality, diminish defects, minimize waste, and ultimately reduce production costs.
Since the geometry of the inspected objects is not known a priori, a generic 3D inspection approach becomes necessary to ensure the quality of inspection. However, conventional inspection systems employed in industry are typically limited to 2D or 2.5D capabilities, which may result in unexamined hidden regions. 3D systems capable of capturing a larger portion of the object’s surface often involve time-consuming processes that employ sequential operations, such as stereoscopy, laser beams, or structured light [1,2,3]. Some alternative systems utilize robotic arms to manipulate the object in front of a sensor, as demonstrated by Fei et al. [4] and Brosed et al. [5]. Another approach involves obtaining 3D reconstructions from multiple viewpoints, exemplified by Perez-Cortes et al. [6]. In the work of Bi et al. [7], a comprehensive survey of non-contact 3D scanners, along with a classification of their technologies from a manufacturing perspective, is presented. More recent techniques include deep learning from a single image [8], the use of RGB-D cameras [9], or employing histology sensors [10].
This study centers on the enhancement of silhouette-based reconstruction methods, such as the one presented in [6]. This method has been chosen due to its notable speed, rendering it suitable for integration into a production line, and its economic feasibility. The primary outcome of this reconstruction approach is the generation of a 3D representation, which, in the most favorable scenario, approximates the visual hull of the actual object, as elucidated by [11].
This reconstruction is essentially the result of intersecting the volumes generated by expanding the object silhouette from each camera’s perspective. In other words, each camera constrains the reconstruction to conform to the silhouette it observes. When this concept is extended to encompass all available cameras, it yields a 3D reconstruction that encapsulates the actual object. However, this process gives rise to synthetic bulges, the dimensions and positions of which are contingent upon the object’s geometry, as well as the quantity and placement of the cameras.
Figure 1 illustrates various instances of 2D carving reconstructions (depicted in blue shadowed areas) involving different objects (represented by green lines) and camera configurations. It is evident that the reconstructed area exhibits significant disparities when compared to the actual object, a variation influenced by various factors.
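To make the carving idea concrete, the following minimal 2D sketch (an illustration written for this text, not the authors' implementation) intersects the silhouettes of several orthographic views on a pixel grid; the camera model, grid size, and function names are assumptions.

```python
# Minimal 2D illustration of silhouette carving (assumption: orthographic views).
import numpy as np

def carve_2d(object_mask, view_angles, n_bins=256):
    """Intersect the silhouettes of several orthographic 2D views.

    object_mask : boolean HxW grid of the true object.
    view_angles : viewing directions of the cameras, in radians.
    Returns the carved reconstruction, a superset of the object.
    """
    h, w = object_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel() - w / 2.0, ys.ravel() - h / 2.0], axis=1)

    carved = np.ones(h * w, dtype=bool)                   # start fully occupied
    for theta in view_angles:
        perp = np.array([-np.sin(theta), np.cos(theta)])  # silhouette coordinate axis
        coord = pts @ perp
        span = np.ptp(coord) + 1e-9
        bins = ((coord - coord.min()) / span * (n_bins - 1)).astype(int)
        silhouette = np.zeros(n_bins, dtype=bool)
        silhouette[bins[object_mask.ravel()]] = True      # bins covered by the object
        carved &= silhouette[bins]                        # keep cells inside every silhouette
    return carved.reshape(h, w)

# Three views of a square leave "synthetic bulges": carved area > object area.
obj = np.zeros((128, 128), dtype=bool)
obj[40:88, 40:88] = True
rec = carve_2d(obj, np.linspace(0.0, np.pi, 3, endpoint=False))
print("object pixels:", obj.sum(), " carved pixels:", rec.sum())
```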
To mitigate this issue, we propose an optimal point selection technique: Best Case Point Selection (BCPS). BCPS involves identifying the nearest points to an expected reference model, which is the best case from a quality assurance point of view, while preserving coherence with the camera images.
To the best of the authors’ knowledge, there is no recent research addressing this problem by introducing additional information about the expected shape. Some approaches, such as [12,13], try to infer points but assume a predefined shape and extract only a few points from each camera, which is insufficient from a quality assessment point of view. Other approaches rely on neural network reconstructions, such as works derived from MVS (see [14]), which do not reliably find unexpected defects and deviations from known shapes, since their training specifically aims to generalize from training examples. Even more recent approaches rely on NeRFs, introduced in [15], which have been used for multiple 3D scene problems (see [16] for a review), or on Gaussian splatting, presented in [17], a promising technique obtaining impressive results in 3D scene synthesis. Unfortunately, both techniques require training with multiple views, which keeps them far from the real-time performance required in quality assessment. Furthermore, BCPS could be used as a postprocess on these reconstructions.
In the course of this study, we demonstrate that the BCPS result offers a superior representation of the actual object compared to the 3D carving reconstruction methods in the literature, reducing errors to one fifth of their value in the best-case scenario and to one half in the worst-case scenario. The method extracts points, for measuring differences with the expected design, that are closer to the real object than those obtained by previous methods in the literature.
The use of BCPS allows a reconstructed object to be easily compared with an expected reference model, establishing differences that a quality inspection pipeline may consider defects. Consequently, it is a step towards a quality system capable of automatically inspecting a wide range of items without adjustments.
The work is structured as follows. Section 2 describes the current state of the art in 3D reconstruction as well as related work enhancing 3D reconstruction for quality assessment. The BCPS algorithm for optimal point selection and the experimental setup are described in Section 3. Section 4 shows how BCPS enhances classical reconstructions by introducing expected shape information in simulated and real cases. Finally, conclusions and future work are presented in Section 5.

2. State of the Art

2.1. 3D Reconstruction Techniques

3D reconstruction stands as a pivotal task within the domain of 3D computer vision, boasting a broad spectrum of applications across various fields, including automated driving [18], augmented reality [19], and quality assessment [20]. Consequently, there has been a revolutionary transformation in 3D acquisition technology, with depth sensors becoming increasingly reliable, lightweight, and cost-effective. Nonetheless, these devices often fall short in terms of capturing intricate details, exhibiting restricted depth ranges, or having limited applicability. Thus, the pursuit of 3D reconstruction from multiple images remains an active area of research, as comprehensively reviewed in [21].
As elucidated in [21], deep learning has demonstrated remarkable performance gains over traditional techniques in the realm of multi-view reconstruction. Following the initial breakthrough achieved by MVSNet as documented in [22], several alternative deep learning methodologies have emerged, including those presented in [23,24,25]. While these algorithms yield commendable outcomes in generating 3D point clouds, they frequently encounter challenges when striving for robust 3D reconstructions. Additionally, they exhibit a high demand for GPU resources.
Furthermore, there are instances where computer vision remains a superior choice compared to machine learning, particularly for well-established problems or when interpretability is imperative, as expounded upon in [26]. Depending on the specific scenario, a synergistic combination of both techniques can be employed to attain superior outcomes.
For an extensive survey of classical multiview reconstruction methods, readers are directed to [14]. These algorithms enhance the visual hull, which represents the maximal shape congruent with the object’s silhouette from any vantage point, by incorporating various priors to achieve a more precise approximation of the actual object.
While classic algorithms may not attain the level of precision achievable by deep learning techniques, their well-understood limitations, shortcomings, and advantages enable the development of robust and replicable processing pipelines. Such pipelines are highly desirable, particularly in domains like industrial inspection.
These algorithms predominantly rely on photoconsistency principles, as exemplified by works such as [27,28,29]. However, it is noteworthy that many of them operate under specific scene assumptions, including conditions such as nearly Lambertian objects or the presence of adequate texture. These constraints can, in turn, restrict their applicability in broader contexts.
In recent years, two promising deep learning techniques have obtained even better results. Neural Radiance Fields (NeRFs), introduced in [15], have been used for multiple 3D scene problems; see [16] for a review. These networks are view synthesis methods that use a volume rendering representation via Multi-Layer Perceptrons (MLPs). However, each scene must be trained using only its own images, which makes them slow for real-time applications. Furthermore, their 3D representation is inefficient for rendering once trained.
Gaussian splatting addresses this issue by storing the 3D information in an efficient manner. Presented in [17], it is a promising technique that is obtaining impressive results in 3D scene synthesis while allowing fast rendering. However, it still requires a per-scene training stage, which makes it difficult to use in a production line.

2.2. 3D Reconstruction Enhancement for Quality Assurance

The choice of the optimal algorithm hinges on the specific application context, which may encompass scenarios like a single background with a differentiable object, interior scenes, open-air imagery, or reflective objects. This study’s primary focus is directed towards images featuring solitary objects, characterized by an absence of prior texture knowledge but a rudimentary understanding of their shape. Consequently, the emphasis lies in leveraging geometric cues to discern discrepancies relative to a reference geometry.
It is worth noting that while the proposed method can function effectively as a final refinement stage within any of the previously mentioned algorithms, for the sake of simplicity, its utility will be primarily illustrated in conjunction with the classic visual hull, as detailed in [11].
To the best of the authors’ knowledge, there is currently no recent research addressing the refinement of geometric points derived solely from silhouette reconstructions to enhance the quality of 3D reconstructions. The contemporary state of the art predominantly revolves around the identification of the rim contour, representing the optical ray originating from the pinhole camera that grazes the object. In [12], the estimation of the rim contour is achieved through the use of frontier points, which are the intersections of rim contours conforming to the 2-view epipolar tangency constraint. However, it is essential to note that this approach necessitates certain object characteristics, such as T-junctions or zero genus, to ensure precision, and it typically yields only a limited number of points per camera pair.
In [13], the authors introduced photoconsistency constraints as a means to enhance the precision of their results. Their approach involves simultaneous enforcement of color consistency and silhouette coherence, achieved by minimizing an energy function. This optimization process yields photoconsistently smooth surfaces coinciding with the locations of the rim curves. However, a notable limitation of this method is its reliance on certain assumed shape characteristics, which restricts the applicability of this technique to objects meeting those specific criteria.
In this paper, the approach deviates from the conventional practice of relying solely on silhouette information to determine the rim mesh. Instead, it leverages preliminary knowledge regarding the expected shape of the model to derive a rim contour that closely approximates it. The expected shape is frequently available in quality control processes, where the primary objective of object reconstruction is the comparison with a well-defined reference model.
Furthermore, the points selected through this method constitute the minimal error discernible from the silhouettes, a highly desirable attribute for accurately quantifying discrepancies with the reference. This approach offers several advantages when compared to prior research efforts. Notably, it imposes no restrictions on the reference shape, as long as the silhouette can be extracted from the images, and it facilitates the generation of as many points as required along the rim contour of the object in each camera.
Lastly, this process can be extended to incorporate additional analyses that examine texture, colors, or any other pertinent features as part of a comprehensive assessment.

3. Materials and Methods

The BCPS algorithm presented in this study is predicated upon the extraction of a point subset from a 3D reconstruction derived from multi-camera silhouettes. This selected subset of points retains the characteristics necessary to form a silhouette-compatible object, all the while minimizing the spatial disparity with respect to a reference object. In the context of this research, the reference object represents the anticipated surface configuration of the object under examination. For instance, in an industrial inspection scenario, it corresponds to the engineering design of the manufactured component. This supplementary information concerning the object’s shape serves as the basis for obtaining key sampling points essential for establishing a meaningful comparison with the reference model.
While this paper elucidates the details of the optimal point selection process, which will be expounded upon in subsequent subsections, it also provides a brief overview of the proposed pipeline for industrial inspection to ensure a comprehensive understanding.
  • 3D reconstruction.
  • Object classification.
  • Alignment.
  • Optimal point selection.
  • Quality evaluation.
For the 3D reconstruction step, a reconstruction from multiple camera silhouettes is assumed, as described in [30], or any other 3D reconstruction from multiple views. This kind of reconstruction introduces the synthetic bulges described previously (see Figure 1), which this work tries to avoid.
Object classification and alignment are classical problems in 3D computer vision, addressing the identification and localization of the 3D object; any solution from the literature can be used for these steps, such as [31] or [32] for classification and [33,34,35] or [36] for alignment.
The optimal point selection algorithm is explained in detail in the following section, as it is the main contribution of this work, while quality evaluation is discussed in the experimental setup subsection, as the two are closely related.
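For clarity, the following skeleton (hypothetical code, not the authors' implementation) shows how the five stages could be chained; every function name is an assumed placeholder for a method from the cited literature (e.g., [30] for reconstruction, [33] for alignment).

```python
# Hypothetical orchestration of the inspection pipeline; all callables are
# injected so any algorithm from the bibliography can be plugged in.
from dataclasses import dataclass
import numpy as np

@dataclass
class InspectionResult:
    label: str                 # object class decided in the classification step
    deviations: np.ndarray     # signed distance of each selected point to the reference

def inspect(images, cameras, references,
            reconstruct, classify, align, select_points, evaluate):
    """Run the five pipeline stages on one capture."""
    surface = reconstruct(images, cameras)           # 1. 3D reconstruction (visual hull)
    label = classify(surface, references)            # 2. object classification
    reference = align(surface, references[label])    # 3. alignment of the reference model
    points = select_points(surface, reference,       # 4. optimal point selection (BCPS)
                           images, cameras)
    deviations = evaluate(points, reference)         # 5. quality evaluation
    return InspectionResult(label, deviations)
```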

3.1. Optimal Point Selection (BCPS)

Optimal point selection is based on the observation that, while achieving precise object geometry reconstruction solely through silhouette information may be unattainable, nearly every defect encountered in industrial production lines can be identified through this means. Consequently, this procedure centers on the detection of deviations from an expected object, also referred to as the reference, rather than the exact reconstruction of the inspected object.
The BCPS algorithm steps or pseudocode for optimal point selection can be summarized as follows:
  • Create an empty set of selected points, BCPS.
  • For each pair of camera position o and silhouette image S:
    (a) For each contour point p in the silhouette image S:
      i. Compute the 3D epipolar line E from camera o through point p.
      ii. Compute the candidate points Cand on the 3D reconstruction surface that are grazed by the epipolar line E.
      iii. Add to the set BCPS the point from Cand with minimal distance to the 3D object reference.
As can be seen in the pseudocode, BCPS selects the best-case point for each contour point of each camera; each contour point corresponds to a 3D line (an epipolar line) in geometric space. As shown in Figure 1 and Figure 2, at least one point on each blue epipolar line belongs to the real object, so BCPS assumes a best case when establishing differences with the expected design. To find this best case, BCPS introduces an aligned 3D reference model in its final step in order to measure distances. Employing this geometric reference enables the computation of the optimal point for each epipolar line.
In this context, the optimal point is defined as the one with the shortest distance to the reference, considering that points within the reference model are assigned negative values. This represents the ideal scenario for the epipolar line selection, as it identifies the point with the smallest deviation from the expected volume, taking into account that the carving reconstruction exclusively introduces positive errors, while ensuring that at least one point corresponds to the actual object.
$$
J \;=\; \min_{q}
\begin{cases}
\hphantom{-}\min_{r} \lVert q_p - r_p \rVert, & \text{if } (r_p - q_p) \cdot r_n \le 0\\[2pt]
-\min_{r} \lVert q_p - r_p \rVert, & \text{otherwise}
\end{cases}
\tag{1a}
$$
subject to:
$$ q \in S_{rec} \tag{1b} $$
$$ r \in S_{ref} \tag{1c} $$
$$ \text{points } o, p \text{ and } q \text{ are collinear} \tag{1d} $$
The proposed optimization procedure is expressed in Equation (1) and is calculated for each contour point p of the object in every camera with an optical center denoted as o. In this equation, $S_{rec}$ represents the set of reconstruction surface points, where each point q possesses a position $q_p$ and an associated surface normal $q_n$. Similarly, $S_{ref}$ denotes the set of reference surface points, where each point r is characterized by its position and normal.
The collinearity constraint enforces silhouette coherence, thereby confining the optimization exclusively to contour points. Lastly, the distance from each reconstruction point to the reference in Equation (1a) is a minimization applied individually over the reference points. The result of this operation is positive if $q$ lies outside $S_{ref}$ and negative otherwise; this condition is evaluated through the dot product with the surface normal.
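As an illustration only, the following sketch implements the selection of Equation (1) for one camera on point-sampled surfaces; the candidate search, tolerance, and data layout are assumptions rather than the authors' implementation.

```python
# Illustrative BCPS selection for one camera, assuming point-sampled surfaces
# with outward normals (assumed data layout, not the paper's code).
import numpy as np
from scipy.spatial import cKDTree

def signed_distance(q, ref_pts, ref_normals, ref_tree):
    """Distance from q to the reference, negative if q lies inside (Eq. (1a))."""
    d, i = ref_tree.query(q)
    outside = np.dot(ref_pts[i] - q, ref_normals[i]) <= 0.0
    return d if outside else -d

def bcps_camera(center, contour_rays, rec_pts, ref_pts, ref_normals,
                graze_tol=1e-3):
    """For each contour ray of one camera, select the reconstruction point
    grazed by the epipolar line that is closest to the reference."""
    ref_tree = cKDTree(ref_pts)
    selected = []
    for ray in contour_rays:                       # one unit ray per contour pixel p
        # Candidates Cand: reconstruction points within graze_tol of the line o + t*ray.
        v = rec_pts - center
        t = v @ ray
        dist_to_line = np.linalg.norm(v - np.outer(t, ray), axis=1)
        cand = rec_pts[(dist_to_line < graze_tol) & (t > 0)]
        if len(cand) == 0:
            continue
        # Best case: the candidate with minimal signed distance to the reference.
        scores = [signed_distance(q, ref_pts, ref_normals, ref_tree) for q in cand]
        selected.append(cand[int(np.argmin(scores))])
    return np.asarray(selected)
```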
It is important to note that the points selected through this procedure may not necessarily be the most accurate from a strict reconstruction veracity perspective, as they may exhibit a non-negligible deviation from the actual object. However, these points are deemed a best-case from the standpoint of quality assessment, as they represent the minimal error observed along each epipolar line. Consequently, any selected point with a distance relative to the reference greater than 0 effectively indicates a disparity between the real object and the expected one.
In cases where the real and expected objects are identical, the scenario is straightforward. Figure 2 illustrates situations in which the real object is larger (left) or smaller (right). In the top cases, it is evident that the selected points correspond precisely to the real object. Consequently, BCPS not only identifies the optimal points for quantifying differences between the real object and the reference but also pinpoints the most accurate points within the reconstruction. However, in the bottom cases, when there are significant geometric disparities between the expected object and the real one, the selected points may exhibit slight deviations from the most precise reconstruction points. To circumvent these situations, it is possible to prevent inspection when the reconstructed object substantially deviates from the reference.
Even in these extreme scenarios, the point selection approach effectively identifies potential points along the epipolar line that would minimize the difference relative to the reference. This 2D approach can be seamlessly extended to the 3D case. Moreover, while the 2D scenario yields only two points per camera and object, the 3D case obtains one point per point of the silhouette border, which could theoretically be infinite. However, in practical implementation, this is constrained to one point per pixel along the border direction.
Figure 3 presents an illustration of 3D optimal point selection in comparison to a direct carving 3D reconstruction and the reference surface. As evident, a significant number of 3D points are accurately generated, faithfully representing the actual object. In contrast, the direct carving method results in the creation of numerous synthetic bulges within the spring’s interior, making any quality assessment process unfeasible, while BCPS successfully circumvents this issue.

3.2. Experimental Setup

The experimental apparatus employed for this study is the ZG3D industrial inspection system, which has been comprehensively described in [6]. This device utilizes a configuration of multiple cameras distributed in a spherical arrangement, with the present setup incorporating 16 cameras. This arrangement enables the simultaneous capture of an object from all angles, ensuring that no surfaces are concealed. Subsequently, these acquired images are utilized for defect detection on the inspected object. Consequently, the BCPS methodology contributes to the identification of defects, thereby mitigating false positives stemming from synthetic bulges.
To assess the effectiveness of the BCPS algorithm, a comparison is made with the raw 3D reconstruction represented as point clouds. This evaluation aims to ascertain whether the process enhances accuracy by eliminating synthetic bulges that are absent in the actual object. While this is an uncomplicated task when the real object is exactly equal to the reference, discrepancies between these two volumes necessitate an examination of whether the optimal point selection genuinely improves reconstruction accuracy or whether it chooses reference points that deviate significantly from reality.
To evaluate the results obtained, we have employed a metric that calculates the mean and maximum distances of the points selected by the algorithm from the real object, which may exhibit slight differences compared to the reference. Although the proposed metric is not tailored to maximize reconstruction accuracy, it holds paramount importance that the resulting points faithfully represent the actual object rather than strictly adhering to the reference. This emphasis on accuracy is crucial because the ultimate goal is to identify these discrepancies. Nevertheless, it is essential to acknowledge a limitation of this evaluation: the real object must be measurable and known.
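A minimal sketch of this veracity metric, assuming the real object is available as a densely sampled point cloud (an assumption of this example), could look as follows.

```python
# Sketch of the veracity metric: mean and maximum distance from the selected
# points to the known real object, here assumed to be densely point-sampled.
import numpy as np
from scipy.spatial import cKDTree

def veracity_errors(selected_points, real_object_points):
    """Return (mean, max) distance from each selected point to the real surface."""
    d, _ = cKDTree(real_object_points).query(selected_points)
    return float(np.mean(d)), float(np.max(d))
```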
To address this constraint, we conducted a comprehensive series of simulated experiments using simple geometric shapes, specifically spheres and cubes. The simulated setup faithfully replicates the ZG3D device, utilizing the same number of cameras, distribution, and characteristics. We even incorporated Gaussian and impulsive noise into the images to enhance the realism of the results. Defects were introduced through scaling and the addition or subtraction of material. These geometric figures serve as extreme cases, with spheres yielding minimal or even negligible synthetic bulges, while cubes, or any shape with prominent planar surfaces, can give rise to substantial synthetic bulges, resulting in “star-shaped” reconstructions.
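For illustration, the image noise model could be sketched as below; the specific noise levels are assumptions, not the values used in the experiments.

```python
# Illustrative noise model for the simulated captures: additive Gaussian noise
# plus salt-and-pepper (impulsive) noise; parameter values are assumed.
import numpy as np

def add_camera_noise(image, sigma=2.0, impulse_prob=0.001, rng=None):
    """image: uint8 grayscale array. Returns a noisy copy."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.astype(np.float32) + rng.normal(0.0, sigma, image.shape)
    impulses = rng.random(image.shape) < impulse_prob
    noisy[impulses] = rng.choice([0.0, 255.0], size=int(impulses.sum()))
    return np.clip(noisy, 0, 255).astype(np.uint8)
```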
Two distinct types of defects have been simulated: scale defects and the local addition or subtraction of material. Scale defects represent the optimal scenario for BCPS, as they preserve the object’s shape, and the defects are consistently visible in the silhouette. In contrast, the introduction or removal of material from the object presents a more challenging situation. Depending on the viewing angle, these defects may become partially or entirely concealed within the silhouette.
Finally, the BCPS algorithm underwent validation through real experiments, utilizing a calibrated gauge with known and precise dimensions that served as ground truth. Additionally, two realistic objects were employed, fabricated using a high-precision 3D printer with a resolution of 0.05 mm, surpassing the precision of the inspection device.
Regarding the calibrated gauge, it was not feasible to physically modify the real object to simulate size changes. Instead, a meticulous adjustment of the camera calibration was performed to induce object scaling. This calibration alteration was verified through simulations, yielding results consistent with directly scaling the object. Figure 4 displays the dimensions and tolerances of this calibrated gauge.
In the case of the 3D printed objects, defective versions were designed and printed to enable the evaluation of actual defects. The designed 3D objects resemble possible engineering parts suitable for quality assessment and can be seen in Figure 5. In order to compare the results, the size of these objects is similar to that of the calibration gauge and the rest of the simulated objects.

4. Results and Discussion

The point selection method has been applied to various objects and defects, encompassing differences between the reference and the actual object. Typically, the selected points are concentrated along the edges and external regions of the objects, as these areas correspond to the silhouette borders, which crucially define the object’s shape.
Figure 6 presents a visual representation of the results obtained by the BCPS algorithm. Each column depicts a different object, with the first three columns showcasing instances where the reference corresponds precisely to the ground truth object with no defects. The fourth and fifth columns illustrate cases with extra and missing material, respectively, while the last column features a real, non-simulated object. For each object, multiple surfaces are displayed: from top to bottom, the ground truth object, the raw reconstruction, and the selected points overlaid onto the object.
Upon examination, it is evident that when the captured object matches the ground truth (the first three columns), the selected points align perfectly, effectively avoiding any synthetic bulges. In the fourth column, where extra material is present, it is almost entirely encompassed by synthetic bulges, resulting in a partial oversight by BCPS, which prioritizes points with a smaller error relative to the reference and maintains silhouette coherence. Conversely, in the case of missing material, the selected points are highly accurate in covering the defect. Lastly, in the non-simulated case, the results closely resemble those of the first cases.
These results unveil several characteristics of BCPS. Firstly, selected points tend to cluster around edges, with less emphasis on large planar areas that cannot be effectively inspected via silhouette analysis. Secondly, the algorithm demonstrates greater proficiency in detecting missing material, provided that this defect is discernible in a silhouette. This observation stems from the reconstruction method, which may generate additional synthetic bulges where extra material is concealed, whereas metrics for missing material are reliable. Lastly, the algorithm proves adept at identifying disparities from a known reference, as long as these defects are visible in the image silhouettes.
To empirically substantiate the performance of the BCPS algorithm, three experiments were conducted, consisting of two entirely simulated scenarios and one utilizing real objects. Each experiment quantifies the accuracy of the base reconstruction and of BCPS, thereby demonstrating the improvement achieved.

4.1. Case 1: Scaling Simulated Objects

In this experiment, the sphere and cube objects are simulated at various scales within the ZG3D 16-camera environment while maintaining a constant reference size. This simulation mimics errors that can occur in a production line, where manufactured objects may deviate slightly in size, either larger or smaller.
Figure 7 illustrates the results of the mean and maximum error for a simulated 30 mm cube. The errors were computed within a 95% confidence interval, as the size of synthetic bulges may vary depending on the object’s rotation. As evident, both mean and maximum reconstruction errors (represented by the orange and red lines, respectively) increase as the size of the object grows. This relationship can be attributed to the fact that the size of synthetic bulges is directly linked to the object’s size, resulting in larger errors for larger objects.
However, BCPS algorithm exhibits a different behavior. The mean error, depicted in blue, remains within the pixel error band (indicated by the grey zone) and experiences only negligible increases when the object surpasses the reference size. Similarly, the maximum error, shown in green, is close to the pixel size and also sees an uptick when the object exceeds the reference size.
These results align with expectations, as it is generally easier to detect the absence of material, given that the reconstruction technique synthetically adds material but does not subtract it. The behavior of BCPS reflects this pattern, as it may occasionally misinterpret extra material as a synthetic bulge, leading to the selection of slightly inaccurate points. Nevertheless, even in such situations, BCPS consistently delivers exceptionally precise results by correctly identifying the real object points and disregarding the bulges.
The case of the cube represents an almost ideal scenario for BCPS, as it generates substantial synthetic bulges in relation to the object’s size. To provide a contrasting perspective, Figure 8 presents the same experiment conducted with a sphere, known for producing smaller bulges relative to its size.
As depicted in the graph, even in this context, the optimal point selection offers significant advantages, entirely circumventing the presence of bulges. The maximum error in the selected points is nearly zero, rendering them virtually perfect, while the maximum reconstruction error hovers around 0.4 mm.

4.2. Case 2: Impulse Random Simulated Noise

While scaling serves as an insightful test showcasing the capabilities of BCPS, it is crucial to acknowledge that defects encountered in industrial settings encompass various forms of volumetric deformations beyond mere scaling. This second experiment specifically aims to simulate random measurable defects by adding and subtracting volume from the objects. The objective is to assess the accuracy of the optimal point selection in handling such deformations. It is worth noting that both transformations were constrained to create “convex” objects, ensuring they can be accurately reconstructed using a convex-hull technique, with a maximum defect distance of 3 mm.
Figure 9 presents the results obtained from the base reconstruction and BCPS when material is either added or subtracted from a 30 mm cube. In this scenario, instead of uniformly scaling the entire object, small bumps are randomly introduced or removed from the object’s surface. Depending on their positions, these errors may be partially or completely obscured in the silhouettes, making detection challenging.
As depicted in the graph, BCPS effectively reduces the error. Similar to the previous experiment, missing material is detected more accurately, as it cannot be easily concealed within synthetic bulges. Notably, in this case, the error exhibits a higher standard deviation (indicated by the shaded region) due to its strong dependence on the error’s location.
In any case, the raw reconstruction consistently yields nearly constant error, primarily because the object’s overall size does not undergo significant changes, resulting in similar synthetic bulges. The error experiences a slight increase when extra material is added, as it leads to larger synthetic bulges. However, in instances of missing material, there is no discernible difference.
The disparities between both algorithms become particularly pronounced when dealing with objects prone to synthetic bulges, as exemplified by the cube used in this experiment. As observed in the previous example, while the raw reconstruction can effectively reconstruct a sphere, BCPS further reduces the error.
The best-case results for carving reconstruction applied to a sphere are depicted in Figure 10. Even under these optimal conditions, BCPS effectively diminishes the error further. The outcomes consistently reveal that, in all cases, BCPS achieves results closer to the real object compared to the original reconstruction.

4.3. Case 3: Real Use Cases

Previous experiments have already demonstrated the potential of BCPS in simulated scenarios. However, real-world experiments often present unique challenges. To address this, experiments were conducted using a real calibrated gauge and two 3D-printed objects. Given the difficulties associated with obtaining precisely calibrated objects with small differences in scale, adjustments were made by altering camera positions.
In the context of reconstructing objects from silhouettes, small coherent translations in the look-at vector of each camera have no significant consequences other than causing a change in the scale of the reconstructed object. This situation is a common challenge in camera position calibration, as the scale of the system must be incorporated as a parameter in some way (such as the size of an object or the distance between objects). This characteristic can be leveraged to deliberately adjust the scale of the reconstructed object, producing both smaller and larger objects for experimentation.
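A minimal sketch of this scale adjustment, under the assumption that each calibrated camera is described by its optical centre and a unit look-at vector, is given below; it is illustrative rather than the calibration procedure actually used.

```python
# Illustrative sketch (assumed camera model: optical centre + unit look-at
# vector): a coherent shift of every calibrated centre along its look-at
# direction only rescales the silhouette intersection, which is how the
# gauge scaling experiments were emulated.
import numpy as np

def shift_rig_along_look_at(camera_centers, look_at_dirs, delta):
    """Shift every calibrated camera centre by `delta` scene units along its look-at vector.

    camera_centers : (N, 3) array of optical centres.
    look_at_dirs   : (N, 3) array of unit vectors pointing towards the inspected object.
    """
    return np.asarray(camera_centers) + delta * np.asarray(look_at_dirs)
```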
Figure 11 presents the results of the real calibrated gauge experiment. It is evident that reconstruction errors increase as the size of the real object grows, corresponding to the enlarged synthetic bulges. In contrast, BCPS exhibits significantly higher precision, maintaining the mean error at a level similar to that of a pixel-sized error, with the maximum error slightly exceeding this threshold.
Notably, these errors are notably higher than those observed in the simulated cube case, despite their similar shape. This discrepancy can be attributed to the greater complexity involved in calibrating, segmenting, aligning, and other real-world factors. Nevertheless, the point selection process remains highly precise, substantially reducing reconstruction errors and providing a reliable point cloud that can be leveraged in subsequent processes.
Furthermore, additional experiments were conducted using real objects, but instead of relying on traditional calibrated gauges, 3D resin-printed objects were utilized. This approach offers a more practical assessment of the algorithm’s performance and allows for the use of objects for which measurement errors are known, as these errors are intentionally incorporated.
One of the objects used comprises two interconnected orthohedra. A sphere has been subtracted from one of the orthohedra, which has also been subjected to twisting. The primary aim of this object is to replicate the geometry of a turbine blade. In this context, two defective objects were generated, one with excess material and the other with residual material.
Figure 12 displays the results of the base reconstruction and BCPS when scaling both the object and the reference object without any defects. The graph vividly illustrates the substantial reduction in error achieved by BCPS. Since the object is neither a synthetic object nor a highly precise calibrated gauge, both the reconstruction error and the BCPS error increased: although high-precision 3D printers produce high-fidelity objects, the printing process introduces new errors. Nevertheless, the error associated with BCPS still remains significantly lower than the reconstruction error.
Figure 13 and Figure 14 depict the results for the object with an excess of material and the one with leftover material, respectively. In addition to these defects, simulated scaling errors have been introduced. Even in this scenario, the error obtained with the optimally selected points remains significantly lower than the error resulting from the raw reconstruction algorithm.
As in simulation, missing material is easier to detect due to the nature of the 3D reconstruction technique, while excess material may hide in synthetic bulges, making it difficult to distinguish from real defects. In any case, BCPS enhances the quality of the selected points.
The second object used comprises a cube connected to a sphere through a cylinder. Similar to the previous case, two defective objects have been generated, one with excess material and another with leftover material. These defects were introduced to simulate production errors in an industrial setting.
As with the simulated cube, the reconstruction errors are higher due to the large planes that produce “star-shaped” reconstructions, so BCPS brings greater benefits in this situation. Although these synthetic bulges may hide real errors, the results show that BCPS is worthwhile, reducing the veracity error to close to zero.
Figure 15 displays the results of the object without defects at different scales, comparing the base reconstruction with BCPS. Once again, the graph demonstrates the significant reduction in error achieved through the implementation of the suggested algorithm, maintaining the error close to zero.
Figure 16 and Figure 17 illustrate the results of objects with an excess of material or missing material, respectively. In this scenario, the capture process tends to obscure excess or missing material, resulting in the generation of more noise compared to the two joined orthohedrons. In the case of the orthohedrons, excess or leftover material was clearly visible in the majority of instances, making it an easier situation for BCPS.
Even in this situation, the error of BCPS remains well below the reconstruction error. This demonstrates that pieces with defects still exhibit greater accuracy when evaluated using this approach compared to the reconstruction algorithm.

4.4. Results Discussion

As can be seen in the previous section, the BCPS improvement depends strongly on the object geometry and on the difference with the expected reference. Objects that do not produce large synthetic bulges, such as spheres, do not obtain large improvements because their 3D reconstruction was already good. However, objects with large planes or concavities are greatly improved by BCPS, which avoids uncertain zones and keeps measuring points away from synthetic bulges.
Table 1 summarizes these results in terms of improvement. As can be seen, the minimum and maximum improvement ranges are wide for objects that produce large synthetic bulges, while spheres have small improvement ranges. The improvement in real cases is smaller due to other factors, such as segmentation, calibration, or alignment errors, that are easily avoided in simulation.

5. Conclusions and Future Work

The BCPS algorithm for optimal point selection from silhouette-based reconstruction has been presented in this study. BCPS proved to accurately extract a subset of silhouette-coherent points from the reconstruction with minimal distance to an expected shape, which is a best case in terms of quality assurance.
The obtained points reduce reconstruction errors compared with classic reconstruction techniques to one fifth of their value in the best-case scenarios and to one half in the most challenging scenarios. Furthermore, in scenarios where the expected shape closely resembles the real object, such as in industrial production lines, the selected points not only serve as a precise representation of the real object itself but also as an optimal means to measure the differences between the reference and the real object, which is the final purpose of quality assurance.
These results have been achieved through simulated experiments as well as experiments with real objects, involving a real calibrated gauge and two 3D printed objects. Deformations and defects were introduced in simulated as well as real objects, supporting the validity of the experiments conducted.
Finally, there are several research lines to further improve 3D reconstruction for quality assurance, such as:
  • Fusing texture information with geometric information to obtain a higher point density in planar or concave areas.
  • Computing a probabilistic confidence for each produced point depending on the length of the epipolar line segment that grazes the 3D reconstruction, which would attach an error estimation to each selected point.
  • Using fast-training versions of recent approaches, such as NeRFs or Gaussian splatting, for the 3D reconstruction.

Author Contributions

Conceptualization, J.-L.G., A.P.J. and J.-C.P.-C.; Data curation, P.G.C. and N.G.S.; Formal analysis, J.-C.P.-C.; Methodology, J.-L.G. and A.P.J.; Resources, P.G.C. and N.G.S.; Software, J.P.S. and J.-L.G.; Supervision, J.-C.P.-C.; Validation, J.P.S., P.G.C. and N.G.S.; Writing—original draft, J.P.S.; Writing—review & editing, A.P.J. and N.G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially funded by Generalitat Valenciana through IVACE (Valencian Institute of Business Competitiveness) distributed nominatively to Valencian technological innovation centres under project expedient IMAMCA/2022/11. It was also funded by the Cervera Network for R+D+I Leadership in Applied Artificial Intelligence (CEL.IA), co-funded by the Centre for Industrial and Technological Development, E.P.E. (CDTI) and by the European Union through the Next Generation EU Fund, within the Cervera Aids program for Technological Centres, with the expedient number CER-20211022.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, J.; Mai, F.; Hung, Y.S.; Chesi, G. 3d model reconstruction from turntable sequence with multiple-view triangulation. In Proceedings of the International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany, 2009; pp. 470–479. [Google Scholar]
  2. Fremont, V.; Chellali, R. Turntable-based 3D object reconstruction. In Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, The Hague, The Netherlands, 10–13 October 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 2, pp. 1277–1282. [Google Scholar]
  3. Kazó, C.; Hajder, L. High-quality structured-light scanning of 3D objects using turntable. In Proceedings of the 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), Kosice, Slovakia, 2–5 December 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 553–557. [Google Scholar]
  4. Fei, Z.; Zhou, X.; Gao, X.; Zhang, G. A flexible 3D laser scanning system using a robotic arm. In Proceedings of the Optical Measurement Systems for Industrial Inspection X; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10329, p. 103294U. [Google Scholar]
  5. Brosed, F.J.; Aguilar, J.J.; Guillomía, D.; Santolaria, J. 3D geometrical inspection of complex geometry parts using a novel laser triangulation sensor and a robot. Sensors 2011, 11, 90–110. [Google Scholar] [CrossRef] [PubMed]
  6. Perez-Cortes, J.C.; Perez, A.; Saez-Barona, S.; Guardiola, J.L.; Salvador, I. A System for In-Line 3D Inspection without Hidden Surfaces. Sensors 2018, 18, 2993. [Google Scholar] [CrossRef] [PubMed]
  7. Bi, Z.; Wang, L. Advances in 3D data acquisition and processing for industrial applications. Robot. Comput.-Integr. Manuf. 2010, 26, 403–413. [Google Scholar] [CrossRef]
  8. Fu, K.; Peng, J.; He, Q.; Zhang, H. Single image 3D object reconstruction based on deep learning: A review. In Multimedia Tools and Applications; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–36. [Google Scholar]
  9. Zollhöfer, M.; Stotko, P.; Görlitz, A.; Theobalt, C.; Nießner, M.; Klein, R.; Kolb, A. State of the Art on 3D Reconstruction with RGB-D Cameras. In Proceedings of the Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2018; Volume 37, pp. 625–652. [Google Scholar]
  10. Pichat, J.; Iglesias, J.E.; Yousry, T.; Ourselin, S.; Modat, M. A survey of methods for 3D histology reconstruction. Med. Image Anal. 2018, 46, 73–105. [Google Scholar] [CrossRef] [PubMed]
  11. Laurentini, A. The visual hull concept for silhouette-based image understanding. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 150–162. [Google Scholar] [CrossRef]
  12. Lazebnik, S.; Boyer, E.; Ponce, J. On computing exact visual hulls of solids bounded by smooth surfaces. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 1, p. I-I. [Google Scholar]
  13. Sinha, S.N.; Pollefeys, M. Multi-view reconstruction using photo-consistency and exact silhouette constraints: A maximum-flow formulation. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–20 October 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 349–356. [Google Scholar]
  14. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; IEEE: Piscataway, NJ, USA, 2006; Volume 1, pp. 519–528. [Google Scholar]
  15. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. Nerf: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106. [Google Scholar] [CrossRef]
  16. Gao, K.; Gao, Y.; He, H.; Lu, D.; Xu, L.; Li, J. Nerf: Neural radiance field in 3d vision, a comprehensive review. arXiv 2022, arXiv:2210.00379. [Google Scholar]
  17. Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Trans. Graph. 2023, 42, 1–14. [Google Scholar] [CrossRef]
  18. Heng, L.; Choi, B.; Cui, Z.; Geppert, M.; Hu, S.; Kuan, B.; Liu, P.; Nguyen, R.; Yeo, Y.C.; Geiger, A.; et al. Project autovision: Localization and 3d scene perception for an autonomous vehicle with a multi-camera system. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 4695–4702. [Google Scholar]
  19. Yang, M.D.; Chao, C.F.; Huang, K.S.; Lu, L.Y.; Chen, Y.P. Image-based 3D scene reconstruction and exploration in augmented reality. Autom. Constr. 2013, 33, 48–60. [Google Scholar] [CrossRef]
  20. Rodríguez-Gonzálvez, P.; Rodríguez-Martín, M.; Ramos, L.F.; González-Aguilera, D. 3D reconstruction methods and quality assessment for visual inspection of welds. Autom. Constr. 2017, 79, 49–58. [Google Scholar] [CrossRef]
  21. Wang, X.; Wang, C.; Liu, B.; Zhou, X.; Zhang, L.; Zheng, J.; Bai, X. Multi-view stereo in the Deep Learning Era: A comprehensive review. Displays 2021, 70, 102102. [Google Scholar] [CrossRef]
  22. Yao, Y.; Luo, Z.; Li, S.; Fang, T.; Quan, L. Mvsnet: Depth inference for unstructured multi-view stereo. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 767–783. [Google Scholar]
  23. Yao, Y.; Luo, Z.; Li, S.; Shen, T.; Fang, T.; Quan, L. Recurrent mvsnet for high-resolution multi-view stereo depth inference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5525–5534. [Google Scholar]
  24. Yang, J.; Mao, W.; Alvarez, J.M.; Liu, M. Cost volume pyramid based depth inference for multi-view stereo. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4877–4886. [Google Scholar]
  25. Yu, A.; Guo, W.; Liu, B.; Chen, X.; Wang, X.; Cao, X.; Jiang, B. Attention aware cost volume pyramid based multi-view stereo network for 3d reconstruction. ISPRS J. Photogramm. Remote Sens. 2021, 175, 448–460. [Google Scholar] [CrossRef]
  26. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Proceedings of the Science and Information Conference; Springer: Berlin/Heidelberg, Germany, 2019; pp. 128–144. [Google Scholar]
  27. Yang, R. Dealing with textureless regions and specular highlights-a progressive space carving scheme using a novel photo-consistency measure. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; IEEE: Piscataway, NJ, USA, 2003; pp. 576–584. [Google Scholar]
  28. Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 1362–1376. [Google Scholar] [CrossRef] [PubMed]
  29. Esteban, C.H.; Schmitt, F. Silhouette and stereo fusion for 3D object modeling. Comput. Vis. Image Underst. 2004, 96, 367–392. [Google Scholar] [CrossRef]
  30. Dyer, C.R. Volumetric scene reconstruction from multiple views. In Foundations of Image Understanding; Springer: Berlin/Heidelberg, Germany, 2001; pp. 469–489. [Google Scholar]
  31. Mahmmod, B.M.; Abdulhussain, S.H.; Naser, M.A.; Alsabah, M.; Hussain, A.; Al-Jumeily, D. 3D Object Recognition Using Fast Overlapped Block Processing Technique. Sensors 2022, 22, 9209. [Google Scholar] [CrossRef] [PubMed]
  32. Rivera-Lopez, J.S.; Camacho-Bello, C.; Gutiérrez-Lazcano, L.; Papakostas, G. Computation of 2D and 3D High-order Discrete Orthogonal Moments. In Recent Progress in Image Moments and Moment Invariants; Science Gate Publishing: Mountain View, CA, USA, 2021; Volume 7, pp. 53–74. [Google Scholar]
  33. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures; International Society for Optics and Photonics: Bellingham, WA, USA, 1992; Volume 1611, pp. 586–606. [Google Scholar]
  34. Myronenko, A.; Song, X. Point set registration: Coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2262–2275. [Google Scholar] [CrossRef] [PubMed]
  35. Tam, G.K.; Cheng, Z.Q.; Lai, Y.K.; Langbein, F.C.; Liu, Y.; Marshall, D.; Martin, R.R.; Sun, X.F.; Rosin, P.L. Registration of 3D point clouds and meshes: A survey from rigid to nonrigid. IEEE Trans. Vis. Comput. Graph. 2012, 19, 1199–1217. [Google Scholar] [CrossRef] [PubMed]
  36. Wang, C.; Xu, D.; Zhu, Y.; Martín-Martín, R.; Lu, C.; Fei-Fei, L.; Savarese, S. Densefusion: 6d object pose estimation by iterative dense fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3343–3352. [Google Scholar]
Figure 1. Examples of 2D carving reconstruction (blue shadowed area), of different objects (green). Synthetic bulges errors are the difference between blue shadowed areas and green objects.
Figure 2. Examples of 2D optimal point selection (blue dots) on an epipolar line (blue line) for a square real object (green square) and different references (dotted red line).
Figure 3. Example of 3D optimal point selection of a spring (orange point cloud, middle), compared to the 3D reconstruction (left) and the reference surface (right).
Figure 4. Dimensions and tolerances of the calibrated gauge used as groundtruth.
Figure 5. Designed 3D objects to evaluate BCPS: turbine blade and cube-sphere. From left to right: no defect, missing material, and extra material versions.
Figure 6. Visual results of the BCPS algorithm for different shapes. From top to bottom: ground truth object, reconstruction, and selected points overlaid on the ground truth.
Figure 7. Mean and maximum error in reconstruction and BCPS for a simulated 30 mm cube with scale variations.
Figure 8. Mean and maximum error in reconstruction and BCPS for a simulated 30 mm sphere with scale variations.
Figure 9. Mean and maximum error in reconstruction and BCPS for a simulated 30 mm cube with impulse noise.
Figure 10. Mean and maximum error in reconstruction and BCPS for a simulated 30 mm sphere with impulse noise.
Figure 11. Mean and maximum error in reconstruction and BCPS for a real calibrated gauge with scale variations.
Figure 12. Mean and maximum error in reconstruction and BCPS for the 3D-printed joined orthohedra with scale variations.
Figure 13. Mean and maximum error in reconstruction and BCPS for the 3D-printed joined orthohedra with excess material, with scale variations.
Figure 14. Mean and maximum error in reconstruction and BCPS for the 3D-printed joined orthohedra with leftover material, with scale variations.
Figure 15. Mean and maximum error in reconstruction and BCPS for the 3D-printed cube joined with a sphere, with scale variations.
Figure 16. Mean and maximum error in reconstruction and BCPS for the 3D-printed cube joined with a sphere with excess material, with scale variations.
Figure 17. Mean and maximum error in reconstruction and BCPS for the 3D-printed cube joined with a sphere with leftover material, with scale variations.
Table 1. Summary of the minimum and maximum improvement of BCPS for each use case compared with the classic 3D reconstruction.

|            | Sphere Scale | Cube Scale | Sphere Noise | Cube Noise | Real Gauge | Real Blade | Real Cube-Sphere |
| Max error  | [4.74–4.77]  | [6.5–21]   | [1.1–5]      | [1.55–23]  | [2–3.9]    | [1.5–4.2]  | [1.21–8]         |
| Mean error | [3.5–3.5]    | [5–45]     | [1.5–2]      | [4.8–8]    | [2.3–5]    | [2.1–3.3]  | [2.1–4.28]       |