Article

Interlaboratory Empirical Reproducibility Study Based on a GD&T Benchmark

by Ali Aidibe *, Souheil Antoine Tahan and Mojtaba Kamali Nejad
Mechanical Engineering Department, École de Technologie Supérieure (ÉTS), Montreal, QC H3C 1K3, Canada
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(14), 4704; https://doi.org/10.3390/app10144704
Submission received: 2 June 2020 / Revised: 6 July 2020 / Accepted: 6 July 2020 / Published: 8 July 2020
(This article belongs to the Special Issue Manufacturing Metrology)

Abstract:
The ASME Y14.5 geometric dimensioning and tolerancing (GD&T) and ISO geometrical product specifications (ISO-GPS) standards define tolerances that can be applied to components to achieve the required functionality and performance. Each tolerance defines the zone within which the corresponding feature must lie. Measurement processes, including planning, programming, data collection (with or without contact), and data processing, verify the compliance of the part with these specifications (tolerances). Over the last two decades, the metrology community has carried out many studies investigating the accuracy, the measuring methods, and, specifically, the measurement errors of fixed and portable coordinate measuring machines (CMMs). A review of the literature shows steady progress in CMM accuracy and repeatability. However, discrepancies were observed between measurements made with different CMMs or by different operators. This paper proposes a GD&T-based benchmark for evaluating the performance of different CMM operators in computer-aided inspection (CAI), considering different criteria related to dimensional and geometrical features. An artifact was designed using basic geometries (cylinder and plane) and free-form surfaces. The results of the interlaboratory comparison study showed significant performance variability for complex GD&T, such as the composite profile and localization tolerances. This, in turn, emphasizes the importance of GD&T training and certification to ensure a uniform understanding among operators, combined with a fully automated inspection code generator for GD&T purposes.

1. Introduction

Geometric dimensioning and tolerancing (GD&T) (or geometrical product specifications (GPS)) is a symbolic language widely used in engineering drawings and computer-generated models to describe, communicate, and define the permissible deviations of feature geometry. GD&T is an efficient and unambiguous way of communicating the measurement conditions and specifications of a part. This language accompanies the entire process chain and helps communicate part intent and function through design, manufacture, and inspection. It also provides a more precise depiction of part features and focuses on the feature-to-feature relationships.
Standards such as ASME Y14.5-2009 [1] and ISO-GPS [2,3,4,5,6,7,8] comprise a library of symbols, definitions, rules, and conventions that describe a part in terms of tolerances based on size, form, orientation, and location. Deriving GD&T results begins with the nominal information that describes a specific feature. The manufactured part is inspected using a measurement device (such as a coordinate measuring machine (CMM)) and compared with the nominal definition (e.g., a computer-aided design (CAD) file) in order to verify the dimensional and geometric feature specifications (the actual size and tolerance). The deviations are then computed and displayed. Figure 1 presents the inspection process definition model.

1.1. Measurement Uncertainties—Overview

In metrology, the true values (ideal quantities) may never be known, and all measurements carry some degree of uncertainty, which is often a function of several variables (sources). The difference between the true and measured values is known as the error. Uncertainty, as defined by the Guide to the Expression of Uncertainty in Measurement (GUM) [9], can be considered as a ‘parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand’. Thus, the estimated value y of the measurand Y is generally calculated using the relationship presented in Equation (1):
$y = f(x_1, x_2, \ldots, x_n)$ (1)
where xi is the estimate of each input variable Xi that could have a significant influence on the measurement result y. The function f may be known and explicit. However, in some cases, the measurement function is unknown or very complex, and no analytic expression is available.
If the function f is explicit and the input quantities are not correlated, the law of propagation of uncertainties given in [9] expresses the combined standard uncertainty of the estimated value, u(y), as:
$u(y) = \sqrt{\sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i)}$ (2)
where u(xi) is the standard uncertainty of each input variable xi.
In practice, the expanded uncertainty U(y) corresponds to the combined standard uncertainty multiplied by a coverage factor k, where k is chosen, for a desired confidence level (1 − α), as the critical value t(1−α/2, ν) of Student's t-distribution with ν degrees of freedom (Section 6, [9]).
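For illustration, the minimal sketch below (not from the paper) propagates the standard uncertainties of a hypothetical two-input measurement function through Equation (2) using numerical partial derivatives, and then expands the result with a Student-t coverage factor; the function, input values, and degrees of freedom are all assumptions.

```python
# Minimal sketch of GUM-style propagation (Equation (2)) for an assumed measurand.
import numpy as np
from scipy import stats

def f(x):
    # Hypothetical measurand (assumption): a length derived from two probed coordinates.
    return np.hypot(x[0], x[1])

x_hat = np.array([30.0, 40.0])   # estimates of the input quantities (mm, assumed)
u_x = np.array([0.004, 0.006])   # standard uncertainties of the inputs (mm, assumed)

# Numerical partial derivatives (central differences), then Equation (2).
eps = 1e-6
grad = np.array([(f(x_hat + eps * e) - f(x_hat - eps * e)) / (2 * eps)
                 for e in np.eye(len(x_hat))])
u_y = np.sqrt(np.sum((grad * u_x) ** 2))   # combined standard uncertainty

# Expanded uncertainty with a Student-t coverage factor (assumed nu = 10, 95% level).
nu = 10
k = stats.t.ppf(1 - 0.05 / 2, nu)
print(f"y = {f(x_hat):.4f} mm, u(y) = {u_y:.4f} mm, U = {k * u_y:.4f} mm (k = {k:.2f})")
```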
Monte Carlo simulations are typically used to approximate the statistical behavior of the measured value in situations where the measurement function cannot be found directly. To determine the output, the input variables are generated randomly for each simulation within their respective uncertainty ranges. The output probability density function (PDF) is then used for evaluating the uncertainty [10]. Finally, in cases where the measurement function is very complex (or unknown), an empirical estimation can be established using certain assumptions and simplification hypotheses, as proposed in the Measurement System Analysis (MSA) guide from the Automotive Industry Action Group [11].
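A minimal Monte Carlo sketch of the idea described in [10] follows, assuming normally distributed inputs and the same hypothetical measurand as above; the distributions and sample size are illustrative, not values from the paper.

```python
# Monte Carlo propagation sketch: sample the inputs, evaluate the output PDF empirically.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

# Assumed input distributions (normal, same means and standard uncertainties as above).
x1 = rng.normal(30.0, 0.004, n_trials)
x2 = rng.normal(40.0, 0.006, n_trials)

y = np.hypot(x1, x2)                    # hypothetical measurand
u_mc = y.std(ddof=1)                    # standard uncertainty from the output PDF
lo, hi = np.percentile(y, [2.5, 97.5])  # 95% coverage interval
print(f"u(y) by Monte Carlo ≈ {u_mc:.4f} mm, 95% interval ≈ [{lo:.4f}, {hi:.4f}] mm")
```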
The MSA consists of a specifically designed experiment aimed at determining the components of variation in the measurement (e.g., the reproducibility, repeatability, bias, etc.). Indeed, the process of obtaining measurements (and defect level estimation) may have variations and produce uncertainty. The analysis tools proposed by the MSA (e.g., the gage repeatability and reproducibility (R&R)) evaluate the uncertainty on a direct measure (f(x) = x), such as the thickness measurement from a micrometer. The aim of the whole process is to guarantee the integrity of the data used for quality analysis and to consider the consequences of a measurement error for decisions taken on the product. The reader is referred to [11] for more details.
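The sketch below illustrates, in simplified form, the kind of variance decomposition behind a gage R&R study; it is not the AIAG procedure verbatim [11], and the crossed operator × part × trial data, the variance-component estimators, and all numerical values are assumptions for illustration only.

```python
# Simplified gage R&R sketch: estimate repeatability (EV) and reproducibility (AV)
# from a crossed operator x part x trial study using basic variance components.
import numpy as np

rng = np.random.default_rng(1)
n_ops, n_parts, n_trials = 3, 5, 2

# Synthetic data: true part values + operator bias + repeatability noise (all assumed).
part_true = rng.normal(20.0, 0.05, n_parts)
op_bias = rng.normal(0.0, 0.010, n_ops)
data = (part_true[None, :, None]
        + op_bias[:, None, None]
        + rng.normal(0.0, 0.005, (n_ops, n_parts, n_trials)))

# Repeatability (EV): pooled within-cell variance across operator/part cells.
var_ev = data.var(axis=2, ddof=1).mean()

# Reproducibility (AV): variance of operator means, corrected for repeatability.
op_means = data.mean(axis=(1, 2))
var_av = max(op_means.var(ddof=1) - var_ev / (n_parts * n_trials), 0.0)

print(f"EV (repeatability std)   ≈ {np.sqrt(var_ev) * 1000:.1f} µm")
print(f"AV (reproducibility std) ≈ {np.sqrt(var_av) * 1000:.1f} µm")
print(f"Gage R&R std             ≈ {np.sqrt(var_ev + var_av) * 1000:.1f} µm")
```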

1.2. Measurement Uncertainties Associated with Dimensional and Geometric Measurement Using CMM

During the last three decades, coordinate measuring machines (CMMs) have seen progress in terms of accuracy and repeatability, which, in turn, has resulted in productivity improvements. Currently, CMMs play a major role in implementing GD&T standards, such as [1,2,3,4,5,6,7,8], as the crucial measuring equipment needed for manufacturing quality control [12]. Notwithstanding such improvements, however, uncertainty can be induced not only by the equipment used, but also by the algorithmic choices and the measurement methodology adopted [13,14,15,16].
Measurement uncertainty evaluation (quantification) is a crucial step in characterizing and certifying the consistency of the inspection results [17,18]. Measurement uncertainty evaluation must be carried out to ensure advances in measurement science. CMM measurement uncertainty evaluation has become a key focus area for research by many institutions around the world. The Physikalisch-Technische Bundesanstalt (PTB) in Germany, for instance, suggested an expert system scheme for CMM uncertainty evaluation and investigated the impact of the measurement strategy on the overall CMM uncertainty [19].
The National Physical Laboratory (NPL) in the UK standardized the measurement strategies for CMM in order to ensure that the measurement results are reliable [20]. A few authors have employed the design of experiment techniques to estimate the CMM measurement uncertainty. The factorial design of experiments was applied by Feng et al. [21] to study the measurement uncertainty of the position of a hole measured by CMM. They analyzed the effect of variables and their interactions on the uncertainty, while complying with the fundamental rules of the Guide to the Expression of Uncertainty (GUM) [9].
Kritikos et al. [22] designed and implemented a random factorial design of experiments in order to analyze and quantify the influence of different factors (stylus diameter, step width, and speed) and their interactions on the uncertainty of CMM measurements of parallelism, angularity, roundness, diameter, and distance. Other authors, such as Kruth et al. [23] and Sladek et al. [24], proposed methods to determine uncertainties using the Monte Carlo method for feature measurements on CMMs. Hongli et al. [25] proposed the Simplified Virtual Coordinate Measuring Machine (SVCMM) method, which makes full use of the CMM acceptance or reinspection report and the Monte Carlo simulation method.
For dimensional metrology with CMM measurements, a task-specific uncertainty estimation was suggested by Haitjema [26], which can be extended beyond linear dimensions to other measurement types, such as form (flatness, cylindricity, etc.) and roughness. Beaman and Morse [27] performed an experimental evaluation of the software estimation of the task-specific measurement uncertainty for CMMs. Jakubiec et al. [28] addressed this topic and proposed an evaluation of CMM uncertainty, not by studying each axis of the machine, but by proceeding directly from key specifications expressed in the GD&T standard. Jbira et al. [29] suggested a benchmark including several geometrical and dimensional features for comparing the algorithm efficiency of different computer-aided inspection (CAI) software applications. A comprehensive review of different methods, techniques, and various artifacts for monitoring CMM performance can be found in the research work conducted by [30,31,32,33].
In coordinate metrology, Weckenmann et al. [34] identified the main contributors to uncertainty, which they grouped as follows: measuring devices, environment, workpieces, software, operators, and measurement strategy. A great deal of work has been carried out by the metrology community in terms of investigating the measuring devices, environment, and workpiece components. Although no common understanding of software validation procedures currently exists, the reader is referred to [35], as well as to the European Metrology Research Project (EMRP) under the denomination ‘Traceability for computationally-intensive metrology (TraCIM)’ [36,37,38], for research performed on software validation in the field of metrology.
In this paper, we aimed to analyze the measurement uncertainty from an empirical (experimental) perspective. A review of the literature on the subject shows that the collective impact of the operator (training, skills, certification, GD&T decoding and interpretation, etc.), the measurement strategy (amount of data, samples, number of measurements, etc.), and the software employed (algorithms used, filtering or removal of outliers, optimization of the stability of the algorithm, layout handling, etc.) has been surprisingly overlooked. We propose a new GD&T-based benchmark (test artifact) for evaluating (comparing) the performance of measurement systems in different measurement organizations (e.g., industry, schools, and metrology service companies) by considering the uncertainty that can be induced by the operator, the measurement strategy, and the software used.
Under the conditions recommended by the equipment manufacturers, current hardware is accurate enough to capture the actual position of a measured point in 3D space. In other words, the uncertainty induced by the measuring device is significantly less than that induced by the operator's choices, the software options, and the measurement strategies. This means that the performance of a measurement system represents an estimation of the combined variation of the measurement errors (systematic and random), which include equipment (hardware) errors, algorithmic (software) errors, and operator errors. In this paper, software and operator errors were combined into one, as they can be strongly correlated. According to the MSA approach, this was strictly a reproducibility study [11].
The basic concepts of metrology and related terms that conform to the International Vocabulary of Metrology (VIM) [39] were employed in the present work. According to VIM, reproducibility is the ‘closeness of the agreement between the results of measurements of the same quantity, where the individual measurements are made: by different methods, with different measuring instruments, by different observers, in different laboratories, after intervals of time quite long compared with the duration of the single measurement, under different normal conditions of use of the instruments employed’ [39]. According to the Automotive Industry Action Group (AIAG) Measurement System Analysis (MSA) reference manual [11], reproducibility is traditionally referred to as the ‘between appraisers’ variability. Typically, the term is defined as the average of measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic of the same part.
For the remainder of this paper, MSA terminology will be used [11]. EV stands for Equipment Variation, which is the variation due to repeatability, and AV stands for Appraiser Variation, which is the variation due to reproducibility.
To allow validation of this hypothesis, a GD&T-based artifact was designed using common geometric features (plane, cylinder, etc.) and free-form surfaces. A total of five parts were created, one without any intentional defect (part #1), and four others with a predefined number of intentional dimensional and geometrical defects (parts #2 to #5). The artifacts were intended for use in assessing the performance of many measurement institutes (interlaboratory comparison) in accordance with the dimensional and geometrical tolerance criteria.
The remainder of this paper is structured as follows: In Section 2, we outline the proposed test artifact model, followed by an experimental procedure. A comprehensive metrological and statistical analysis, followed by a general discussion of the results, is presented in Section 3 and Section 4. Finally, a summary is provided and future works are described.

2. Materials and Methods

A new GD&T-based test artifact is presented in this section. The model is designed for interlaboratory comparisons of CMMs. Figure 2 provides a visual representation and a description of the proposed artifact, as well as its sub-elements. To ensure the measurement of different shapes and geometrical tolerances, the artifact included basic geometric features (primitives), such as rectangular, planar, cylindrical, and conical surfaces; bore and hole patterns; and free-form surfaces.
As shown in Figure 2 and Table 1, a total of ten different features (items) were selected in the artifact, and five main categories related to GD&T were proposed to be characterized and controlled based on ASME Y14.5 (2009) [1]:
  • The size tolerances on slab features and cylinder bore/hole diameters.
  • The form tolerances, which control the shape of surfaces, such as the flatness of datum plane A, and the cylindricity of cylinder bores.
  • The orientation tolerances, which control the tilt of the surfaces and axes for size and non-size features, such as the perpendicularity of datum plane B related to datum plane A.
  • The location tolerances presented by the position-locating zones of the bores/holes and the position-relating zone tolerances of the hole patterns. This category locates the center points, axes, and median planes for size features. This category also controls the orientations.
  • The profile tolerances, which locate and control the size, form, and orientation of surfaces based on datum references. This is presented by the composite profile-locating, profile-orienting, and profile-form zones.
The overall dimensions of the artifacts were 138 × 90 × 50 mm. They were conveniently transportable and could fit into small CMM metrology systems. The artifacts were made from aluminum: part #1, with no intentional defects, and parts #2 to #5, with some predefined and intentional geometrical imperfections. The geometrical imperfections were introduced in accordance with the procedure described in Table 2.
Table 1 presents the predefined defects, which were considered as the reference values (nominal defects). Their respective amplitudes were of approximately the same order of magnitude as the tolerances. The final real geometry of the part ‘as manufactured’ was unknown, and the actual values were calculated from the measurement points.
The artifact parts were manufactured on three-axis CNC milling machines at the École de technologie supérieure’s Products, Processes, and Systems Engineering Laboratory (P2SEL). Figure 3 presents one of the five manufactured artifacts.
A total of 15 fixed and portable CMMs from different and independent industrial and academic collaborators in North America (Canada and the USA) were included in this investigation, and are presented in Figure 4. The CMMs used were designated according to ISO 10360 [40]. The accuracy of the CMMs used in this study (equipment variation) typically ranged between 0.7 and 45 µm (±2σ level). All the induced defects for artifacts #2 to #5 were significantly higher than the aforementioned accuracies (Table 1).
The measurements were performed from November 2013 to December 2017. Different institutes and industrial partners (eight industries, three schools, and three companies in the field of dimensional metrology) were asked to measure artifacts #2 to #5 without any particular focus. The aim was to analyze the ordinary measurement performance of each institute. Each artifact received a unique code for each partner (only the coordinator maintained the part–operator–equipment traceability). The artifacts were circulated in a round-robin manner, with the evaluation kit forwarded to the next participant and the results sent to the coordinator. Each partner carried out the measurements with their own CMM system, which included calibrated equipment, specific software, and an appraiser.
The data were collected through a dynamic PDF form. In this form, each inspection item was listed and the operator was asked to: (1) accept or reject the item and (2) report the measured value. An online database was connected to this form for fully automatic and secure data collection.
Based on [9,18], the general mathematical model for determining the CMM task-oriented uncertainty is presented in Equation (3):
$U_c = \pm k \sqrt{u_E^2 + u_{EV}^2 + u_{AV}^2}$ (3)
where uE is the uncertainty caused by bias and linearity (the equipment variation, as provided by the manufacturers); uEV is the uncertainty caused by repeatability, as defined in the MSA [11]; uAV is the uncertainty caused by reproducibility, as defined in the MSA [11] (this includes the software used and the measurement strategy); and Uc is the expanded combined uncertainty with a coverage factor k (obtained from the Student's t critical value table) and represents the total error of the inspection process.
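As a numerical illustration of Equation (3), the short sketch below combines the three contributions and expands them with a coverage factor; the uEV and uAV values are assumed for illustration only (not results from this study), chosen so that reproducibility dominates.

```python
# Illustrative evaluation of Equation (3) with assumed contributions.
import math

u_E  = 0.005   # equipment bias/linearity uncertainty, mm (typical value cited for conventional CMMs)
u_EV = 0.003   # repeatability contribution, mm (assumed)
u_AV = 0.040   # reproducibility contribution, mm (assumed, dominant term)
k = 1.96       # coverage factor for ~95% confidence, infinite degrees of freedom

U_c = k * math.sqrt(u_E**2 + u_EV**2 + u_AV**2)
print(f"U_c ≈ ±{U_c:.3f} mm")   # ≈ ±0.079 mm; reproducibility dominates, so U_c ≈ ±k·u_AV
```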
Some assumptions were made:
(1) All measurements were done in a controlled environment (metrology laboratory or facilities). Therefore, compared with the defect amplitudes, the uncertainty induced by environmental conditions could be considered negligible in this study.
(2) The uncertainty of the equipment (hardware), including the bias, linearity (uE), and repeatability (uEV), was much smaller than the amplitude of the induced defects (uE and uEV << defect). Likewise, the uncertainty of the equipment was much smaller than that resulting from the reproducibility uAV (the variation due to operator–software–measurement strategy errors: uEV << uAV). In practice, the equipment uncertainty (uE) in this study was typically 5 µm for the conventional CMMs (Figure 4). Given the preceding, Equation (3) can be simplified, and the measurement variation in this paper can be considered equal to Uc ≈ AV = ±k·uAV.
(3) The reproducibility in this study (uAV) represented variations due to algorithmic error, mainly caused by programming choices (e.g., least squares or minimum zone Chebyshev fit [41]), the measurement strategy, or the use of the computer programs (how the operator uses the software options, the density and distribution of the measurement points, etc.).
(4) All industrial and academic participants in this study guaranteed a temperature of 20 ± 1 °C in their laboratories. The artifacts, made of aluminum, had overall dimensions of 138 × 90 × 50 mm. The resulting thermal expansion of ±3.4 μm was well below the observed variations (a short numerical check is sketched after this list).
(5) A 95% confidence interval (type I error α = 0.05) was used for the study (we assume k ≈ 1.96), corresponding to an infinite number of degrees of freedom. Outliers and missing data were not included in the calculations.
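The quick check below reproduces the thermal expansion figure quoted in assumption (4), using an assumed handbook value for the linear thermal expansion coefficient of aluminum.

```python
# Thermal expansion of the aluminum artifact over its longest dimension for ΔT = ±1 °C.
alpha = 24e-6   # linear thermal expansion coefficient of aluminum, 1/°C (assumed handbook value)
L = 138.0       # longest artifact dimension, mm
dT = 1.0        # temperature tolerance, °C
dL_um = alpha * L * dT * 1000.0   # expansion in µm
print(f"Thermal expansion ≈ ±{dL_um:.1f} µm")   # ≈ ±3.3 µm, consistent with the ±3.4 µm cited above
```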

3. Results

Table 3 presents the [minimum, median, maximum] geometric and dimensional deviations for items #1 to #10.2. As illustrated in Figure 5, the results of the investigation are presented on individual value plots with error (interval) bars. For geometrical tolerances with zero target values, the measurements for each part (#2–#5) are shown directly. For dimensional tolerances (where the target is the nominal CAD value), the deviations between the digitized parts (measurements) and the nominal part (CAD) are presented. Figure 6 presents the plots for size tolerances; Figure 7 presents the plots for form tolerances (items #1 and #4.2) and orientation tolerances (items #3 and #6); Figure 8 presents the plots for location tolerances (items #5.1, #5.2, #7.1, #7.2, #8.1, and #10.2); and Figure 9 presents the plots for profile tolerances (items #9 and #9.1).
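The short sketch below illustrates this reporting convention with invented numbers: geometrical tolerance results are summarized directly, while dimensional results are first converted to deviations from the nominal CAD value before the [min, median, max] summary.

```python
# Illustrative summary of interlaboratory results into [min, median, max] (invented data).
import numpy as np

nominal_diameter = 40.0                                    # item #4.0 nominal, mm
measured = np.array([39.84, 39.88, 39.90, 39.91, 39.94])   # hypothetical laboratory results, mm
deviation = measured - nominal_diameter                    # dimensional results as deviations

flatness = np.array([0.002, 0.010, 0.014, 0.025, 0.031])   # hypothetical item #1.0 results, mm

for name, vals in [("diameter deviation", deviation), ("flatness", flatness)]:
    print(f"{name}: [min, median, max] = "
          f"[{vals.min():.3f}, {np.median(vals):.3f}, {vals.max():.3f}] mm")
```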

4. Discussion

This investigation revealed the presence of varying degrees of uncertainty in measurement reproducibility while operating CMMs in different laboratories and institutions. Differing amounts of appraiser variation (AV) were present when identical parts were measured by different operators on different CMMs of approximately similar design. Based on the results of the different analyses:
  • A lower level of measurement uncertainty was observed on non-defective parts. This observation seems trivial, but deserves to be underlined. Indeed, in the absence of form error, the algorithmic error factor and the number of measurement points had no impact on the ‘measurand’, except for the perpendicularity tolerance (Table 3 and Figure 7c, parts #3 and #5) and the angularity (Table 3 and Figure 7d, parts #2 and #5).
  • For simple requirements (e.g., the flatness (Figure 7) and diameter tolerance (Table 3 and Figure 6)), the range of variation was relatively small and very close to the inherent variation of the equipment.
  • On the other hand, a greater presence of measurement variation was observed for more sophisticated and complex GD&Ts (e.g., in the composite profile (Table 3 and Figure 9) and localization tolerances (Figure 8 and Figure 10a)). The combination of different factors, such as the logistics and measurement strategy, the operator type, the set-up type, the size of the point clouds, the choice of the inspection algorithm, etc., appeared to be the source of this overall high measurement uncertainty.
  • For the composite profile tolerance (Figure 9), many partners gave the same result for different features. In the specific instance of profile tolerance with all degrees of freedom (Table 3 and Figure 10b), the range of variation was significantly larger than in other cases. In addition to the inadequate choices that operators can make during inspection operations, the registration (best fit) introduced an additional source of uncertainty.
  • Although not generalized, several coaxial tolerance results (Figure 8d) (with coaxial tolerance being a specific case of position tolerance) clearly indicated an interpretation error (or manipulation) because the values were larger than in the pattern-locating tolerance zone framework (PLTZF) located in the ABC datum reference frame (Table 3 and Figure 8c).
Overall, items without induced defects presented low variability (measurement uncertainty), while those with complex GD&T (e.g., composite features), as well as those recently added to the standard, presented high variability, again attributable to the combination of factors noted above (logistics and measurement strategy, operator type, set-up type, point cloud size, choice of inspection algorithm, etc.).
These experimental findings may be applied to technical industrial practice to ensure the quality of the measurement results. They may also serve as an inspiration for proposing solutions to reduce the measurement uncertainty. These solutions may include GD&T training and certification to recognize proficiency in the application and understanding of the GD&T principles expressed in the standards. This would ensure a uniform understanding of the drawings prepared using the GD&T language by different operators, as well as a uniform selection and application of geometric controls to drawings. Another such solution could be in the form of innovative combinations of applied methods, such as a fully automated inspection code generator for GD&T purposes.

Author Contributions

Conceptualization, S.A.T.; methodology, A.A., S.A.T. and M.K.N.; formal analysis, A.A., S.A.T. and M.K.N.; investigation, S.A.T. and M.K.N.; data curation, A.A.; writing—original draft preparation, A.A.; writing—review and editing, A.A., S.A.T. and M.K.N.; visualization, A.A. and S.A.T.; supervision, S.A.T.; project administration, S.A.T.; funding acquisition, S.A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), grant number RGPIN-2015-05995.

Acknowledgments

The authors would like to thank École de technologie supérieure (Montreal, QC, Canada), the Natural Sciences and Engineering Research Council of Canada (NSERC), as well as all industrial and academic participants in North America for their support and contributions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ASME. ASME Y14.5-2009 Dimensioning and Tolerancing: Engineering Drawing and Related Documentation Practices; American Society of Mechanical Engineers: New York, NY, USA, 2009. [Google Scholar]
  2. ISO. ISO 1101:2012 Geometrical Product Specification (GPS)—Geometrical Tolerancing—Tolerances of Form, Orientation, Location, and Runout; International Organization for Standardization: Geneva, Switzerland, 2012. [Google Scholar]
  3. ISO. ISO 12781:2011 Geometrical Product Specification (GPS)—Flatness; International Organization for Standardization: Geneva, Switzerland, 2011. [Google Scholar]
  4. ISO. ISO 12181:2011 Geometrical Product Specification (GPS)—Roundness; International Organization for Standardization: Geneva, Switzerland, 2011. [Google Scholar]
  5. ISO. ISO 5458:1998 Geometrical Product Specification (GPS)—Geometrical Tolerancing—Positional Tolerancing; International Organization for Standardization: Geneva, Switzerland, 1998. [Google Scholar]
  6. ISO. ISO 14405-1:2010 Geometrical Product Specification (GPS)—Dimensional Tolerancing—Part 1: Linear Sizes; International Organization for Standardization: Geneva, Switzerland, 2010. [Google Scholar]
  7. ISO. ISO 14405-2:2011 Geometrical Product Specification (GPS)—Dimensional Tolerancing—Part 2: Dimensions Other than Linear Sizes; International Organization for Standardization: Geneva, Switzerland, 2011. [Google Scholar]
  8. ISO. ISO 14253-1:1998 Geometrical Product Specifications (GPS)—Inspection by Measurement of Workpieces and Measuring Equipment—Part 1: Decision Rules for Proving Conformance or Non-Conformance with Specifications; International Organization for Standardization: Geneva, Switzerland, 1998. [Google Scholar]
  9. JCGM, BIPM. JCGM 100:2008 Guide to the Expression of Uncertainty in Measurement, GUM 1995 with Minor Corrections; The Joint Committee for Guides in Metrology (JCGM) and the Bureau International des Poids et Mesures (BIPM): Paris, France, 2008. [Google Scholar]
  10. Schwenke, H.; Siebert, B.R.L.; Wäldele, F.; Kunzmann, H. Assessment of Uncertainties in Dimensional Metrology by Monte Carlo Simulation: Proposal of a Modular and Visual Software. CIRP Ann. Manuf. Technol. 2000, 49, 395–398. [Google Scholar] [CrossRef]
  11. Automotive Industry Action Group. Measurement Systems Analysis Reference Manual, 4th ed.; Daimler Chrysler Corporation, Ford Motor Company, General Motors Corporation: Southfield, MI, USA, 2010. [Google Scholar]
  12. Thalmann, R.; Meli, F.; Küng, A. State of the art of tactile micro coordinate metrology. Appl. Sci. 2016, 6, 150. [Google Scholar] [CrossRef]
  13. Savio, E. Uncertainty in testing the metrological performances of coordinate measuring machines. CIRP Ann. Manuf. Technol. 2006, 55, 535–538. [Google Scholar] [CrossRef]
  14. Trapet, E.; Savio, E.; De Chiffre, L. New advances in traceability of CMMs for almost the entire range of industrial dimensional metrology needs. CIRP Ann. Manuf. Technol. 2004, 53, 433–438. [Google Scholar] [CrossRef]
  15. Vrba, I.; Palencar, R.; Hadzistevic, M.; Strbac, B.; Spasic-Jokic, V.; Hodolic, J. Different Approaches in Uncertainty Evaluation for Measurement of Complex Surfaces Using Coordinate Measuring Machine. Meas. Sci. Rev. 2015, 15, 111–118. [Google Scholar] [CrossRef] [Green Version]
  16. Gąska, P.; Gąska, A.; Sładek, J.; Jędrzejewski, J. Simulation model for uncertainty estimation of measurements performed on five-axis measuring systems. Int. J. Adv. Manuf. Technol. 2019, 104, 4685–4696. [Google Scholar] [CrossRef] [Green Version]
  17. Bich, W. Revision of the ‘Guide to the Expression of Uncertainty in Measurement’. Why and how. Metrologia 2014, 51, 155–158. [Google Scholar] [CrossRef]
  18. Yinbao, C.; Zhongyu, W.; Xiaohuai, C.; Yaru, L.; Hongyang, L.; Hongli, L.; Hanbin, W. Evaluation and Optimization of Task-oriented Measurement Uncertainty for Coordinate Measuring Machines Based on Geometrical Product Specifications. Appl. Sci. 2019, 9, 6. [Google Scholar]
  19. Weckenmann, A.; Knauer, M.; Kunzmann, H. The Influence of Measurement Strategy on the Uncertainty of CMM-Measurements. CIRP Ann. Manuf. Technol. 1998, 47, 451–454. [Google Scholar] [CrossRef]
  20. AMT. BS 7172:1989 Guide to Assessment of Position, Size and Departure from Nominal Form of Geometric Features; Advanced Manufacturing Technology Standards Policy Committee of Britain (AMT): London, UK, 2010. [Google Scholar]
  21. Feng, C.X.J.; Saal, A.L.; Salsbury, J.G.; Ness, A.R.; Lin, G.C.S. Design and analysis of experiments in CMM measurement uncertainty study. Precis. Eng. 2007, 31, 94–101. [Google Scholar] [CrossRef]
  22. Kritikos, M.; Concepción Maure, L.; Leyva Céspedes, A.A.; Delgado Sobrino, D.R.; Hrušecký, R. A Random Factorial Design of Experiments Study on the Influence of Key Factors and Their Interactions on the Measurement Uncertainty: A Case Study Using the ZEISS CenterMax. Appl. Sci. 2020, 10, 37. [Google Scholar] [CrossRef] [Green Version]
  23. Kruth, J.P.; Gestel, N.V.; Bleys, P.; Welkenhuyzen, F. Uncertainty determination for CMMs by Monte Carlo simulation integrating feature form deviations. CIRP Ann. Manuf. Technol. 2009, 58, 463–466. [Google Scholar] [CrossRef]
  24. Sładek, J.; Gąska, A. Evaluation of coordinate measurement uncertainty with use of virtual machine model based on Monte Carlo method. Measurement 2012, 45, 1564–1575. [Google Scholar] [CrossRef]
  25. Li, H.L.; Chen, X.H.; Cheng, Y.B.; Liu, H.D.; Wang, H.B.; Cheng, Z.Y.; Wang, H.T. Uncertainty modeling and evaluation of CMM task oriented measurement based on SVCMM. Meas. Sci. Rev. 2017, 17, 226–231. [Google Scholar] [CrossRef] [Green Version]
  26. Haitjema, H. Task specific uncertainty estimation in dimensional metrology. Int. J. Precis. Technol. 2011, 2, 226–245. [Google Scholar] [CrossRef] [Green Version]
  27. Beaman, J.; Morse, E. Experimental evaluation of software estimates of task specific measurement uncertainty for CMMs. Precis. Eng. 2010, 34, 28–33. [Google Scholar] [CrossRef]
  28. Jakubiec, W.; Płowucha, W. First Coordinate Measurements Uncertainty Evaluation Software Fully Consistent with the GPS Philosophy. Procedia CIRP 2013, 10, 317–322. [Google Scholar] [CrossRef] [Green Version]
  29. Jbira, I.; Tahan, A.; Bonsaint, S.; Mahjoub, M.A. Reproducibility Experimentation among Computer-Aided Inspection Software from a Single Point Cloud. J. Control Sci. Eng. 2019, 2019, 9140702. [Google Scholar] [CrossRef]
  30. ISO. ISO 10360-2:2009 Geometrical Product Specifications (GPS)—Acceptance and Reverification Tests for Coordinate Measuring Machines (CMM)—Part 2: CMMs Used for Measuring Linear Dimensions; International Organization for Standardization: Geneva, Switzerland, 2009. [Google Scholar]
  31. ISO. ISO 17450-2:2012 Geometrical Product Specifications (GPS)—General Concepts—Part 2: Basic Tenets, Specifications, Operators, Uncertainties and Ambiguities; International Organization for Standardization: Geneva, Switzerland, 2012. [Google Scholar]
  32. ISO. ISO/TS 15530-4:2008 Geometrical Product Specifications (GPS)—Coordinate Measuring Machines (CMM): Technique for Determining the Uncertainty of Measurement—Part 4: Evaluating Task-Specific Measurement Uncertainty Using Simulation; International Organization for Standardization: Geneva, Switzerland, 2008. [Google Scholar]
  33. Hammad Mian, S.; Al-Ahmari, A. New developments in coordinate measuring machines for manufacturing industries. Int. J. Metrol. Qual. Eng. 2014, 5, 101. [Google Scholar] [CrossRef] [Green Version]
  34. Weckenmann, A.; Knauer, M.; Killmaier, T. Uncertainty of coordinate measurements on sheet-metal parts in the automotive industry. J. Mater. Process. Technol. 2001, 115, 9–13. [Google Scholar] [CrossRef]
  35. Greif, N.; Schrepf, H.; Richter, D. Software validation in metrology: A case study for a GUM-supporting software. Measurement 2006, 39, 849–855. [Google Scholar] [CrossRef]
  36. EURAMET. Report: Traceability for Computationally-Intensive Metrology. Available online: https://www.euramet.org/research-innovation/search-research-projects/details/project/traceability-for-computationally-intensive-metrology/ (accessed on 6 June 2020).
  37. Forbes, A.B.; Smith, I.M.; Härtig, F.; Wendt, K. Overview of EMRP Joint Research Project NEW06 “Traceability for computationally-intensive metrology”. In Advanced Mathematical and Computational Tools in Metrology and Testing X; World Scientific Publishing Co Pte Ltd.: Singapore, 2015; pp. 164–170. [Google Scholar]
  38. Müller, B. Repeatable and Traceable Software Verification for 3D Coordinate Measuring Machines. In Proceedings of the 18th World Multi-Conference on Systemics, Cybernetics and Informatics, Orlando, FL, USA, 15–18 June 2014. [Google Scholar]
  39. JCGM, BIPM. JCGM 200:2012 International Vocabulary of Metrology: Basic and General Concepts and Associated Terms (VIM); The Joint Committee for Guides in Metrology and The Bureau International des Poids et Mesures: Paris, France, 2012. [Google Scholar]
  40. ISO. ISO 10360:2016 Geometrical Product Specification (GPS)—Geometrical Tolerancing—Acceptance and Reverification Tests for Coordinate Measuring Systems (CMS); International Organization for Standardization: Geneva, Switzerland, 2016. [Google Scholar]
  41. Vemulapalli, P.; Shah, J.J.; Davidson, J.K. Reconciling the differences between tolerance specification and measurement methods. In Proceedings of the ASME 2013 International Manufacturing Science and Engineering Conference, MSEC2013, Madison, WI, USA, 10–14 June 2013. [Google Scholar]
Figure 1. The inspection process definition activity model.
Figure 2. Description of the proposed geometric dimensioning and tolerancing (GD&T)-based artifact.
Figure 3. The proposed GD&T-based artifacts.
Figure 4. The uncertainty caused by bias and linearity uE (equipment variation) of the participating fixed and portable coordinate measuring machines (CMMs) (in mm).
Figure 5. General representation of the results: (a) geometric and (b) dimensional tolerances.
Figure 6. Results of the size tolerance items (a) #5 and (b) #8.
Figure 7. Results of the form tolerance items (a) #1 and (b) #4.2 and orientation tolerance items (c) #3 and (d) #6.
Figure 8. Results of the location tolerance items (a) #5.1, (b) #5.2, (c) #7.1, (d) #7.2, (e) #8.1, and (f) #10.2.
Figure 9. Results of the profile tolerance items (a) #9 and (b) #9.1.
Figure 10. Boxplots of the variation (|measured − nominal| amplitude) for the different GD&T categories: (a) form, location, orientation, and size tolerances; (b) profile tolerances.
Table 1. Predefined computer-aided design (CAD) geometrical defects (all dimensions are in mm).
Defect per CAD Part
Item | Sub Item | Description | #1 | #2 | #3 | #4 | #5
1 | 1.0 | Applsci 10 04704 i001 | 0 | 0 | 0 | 0 | 0
2 | 2.0 | Applsci 10 04704 i002 | 0 | 0.12 | 0 | 0.13 | 0
3 | 3.0 | Applsci 10 04704 i003 | 0 | 0.20 | 0 | 0.25 | 0
4 | 4.0 | Ø40 ± 0.05 | Ø40 | 40.00 | 40.05 | 40.00 | 40.14
  | 4.1 | Applsci 10 04704 i004 | 0 | 0.25 | 0.11 | 0.25 | 0.19
  | 4.2 | Applsci 10 04704 i005 | 0 | 0 | 0.17 | 0 | 0.18
5 | 5.0.a | Ø8 ± 0.05 | Ø8 | 8.00 | 8.00 | 8.00 | 8.00
  | 5.0.b | | Ø8 | 8.00 | 8.00 | 8.00 | 8.00
  | 5.0.c | | Ø8 | 8.00 | 8.00 | 8.00 | 8.00
  | 5.0.d | | Ø8 | 8.00 | 8.00 | 8.00 | 8.00
  | 5.1.a | Applsci 10 04704 i006 | 0 | 0.45 | 0 | 0.10 | 0
  | 5.1.b | | 0 | 0.45 | 0 | 0.09 | 0
  | 5.1.c | | 0 | 0.45 | 0 | 0.16 | 0
  | 5.1.d | | 0 | 0.45 | 0 | 0.11 | 0
  | 5.2.a | Applsci 10 04704 i007 | 0 | 0 | 0 | 0.10 | 0
  | 5.2.b | | 0 | 0 | 0 | 0.09 | 0
  | 5.2.c | | 0 | 0 | 0 | 0.16 | 0
  | 5.2.d | | 0 | 0 | 0 | 0.11 | 0
6 | 6.0 | Applsci 10 04704 i008 | 0 | 0 | 0.20 | 0.21 | 0
7 | 7.0.a | Ø20 ± 0.05 | Ø20 | 20.00 | 20.00 | 20.00 | 20.00
  | 7.0.b | | Ø20 | 20.00 | 20.00 | 20.00 | 20.00
  | 7.1.a | Applsci 10 04704 i009 | 0 | 0.03 | 0 | 0.25 | 0.10
  | 7.1.b | | 0 | 0.03 | 0.18 | 0.25 | 0.10
  | 7.2.a | Applsci 10 04704 i010 | 0 | 0 | 0 | 0 | 0
  | 7.2.b | | 0 | 0 | 0 | 0 | 0
8 | 8.0 | 20 ± 0.25 | 20 | 20.00 | 20.26 | 20.00 | 20.00
  | 8.1 | Applsci 10 04704 i011 | 0 | 0.03 | 0.43 | 0.25 | 0.60
9 | 9.0 | Applsci 10 04704 i012 | 0 | 1.95 | 2.15 | 1.79 | 2.04
  | 9.1 | Applsci 10 04704 i013 | 0 | 1.09 | 0.90 | 1.09 | 2.04
  | 9.2 | Applsci 10 04704 i014 | 0 | 0 | 0 | 0 | 0
10 | 10.0.a | Ø14–14.10 | Ø14 | 14.00 | 14.00 | 14.00 | 14.00
   | 10.0.b | | Ø14 | 14.00 | 14.00 | 14.00 | 14.00
   | 10.0.c | | Ø14 | 14.00 | 14.00 | 14.00 | 14.00
   | 10.1.a | Applsci 10 04704 i015 | 0 | 0.50 | 0 | 0.25 | 0
   | 10.1.b | | 0 | 0.25 | 0 | 0.25 | 0
   | 10.1.c | | 0 | 0.25 | 0 | 0.25 | 0
   | 10.2.a | Applsci 10 04704 i016 | 0 | 0 | 0 | 0 | 0
   | 10.2.b | | 0 | 0 | 0 | 0 | 0
   | 10.2.c | | 0 | 0 | 0 | 0 | 0
Table 2. Creation of the defects (all dimensions are in mm).
Type | Description
Size | In the case of item #4.0 (Ø40), the hole was machined by slightly modifying the circular path. In the case of items #5.0 (Ø8), #7.0 (Ø20), and #10.0 (Ø14), a drill bit was used for drilling the holes. For the slab feature (item #8.0), size defects were created in CATIA® V5 by slightly modifying the distance between the corresponding parallel planes.
Form | No flatness and straightness defects were created for plane surfaces (item #1.0) and cylindrical features. In the case of item #4.0 (Ø40), cylindricity defects were created while machining the hole by slightly modifying the circular path.
Orientation | Orientation defects were created in CATIA® V5 by rotating the plane surfaces (items #2.0 and #3.0).
Location | Location defects were created in CATIA® V5 by imposing translations and rotations of the axes of cylindrical features (e.g., items #4.1, 5.1, 5.2, 7.1, 7.2, 10.1, and 10.2) and slab features (item #8.1).
Profile | Profile defects were created in CATIA® V5 by imposing translations and rotations along the three axes of the surface (items #9.0 and #9.1). For item #9.2, no profile defects were created.
Table 3. Results (all dimensions are in mm).
Results [Minimum, Median, Maximum]
Sub Item | Description | #2 | #3 | #4 | #5
1.0 Applsci 10 04704 i017[0.002, 0.014, 0.031][0.002, 0.010, 0.029][0.001, 0.015, 0.025][0.004, 0.0163, 0.025]
2.0 Applsci 10 04704 i018[0.015, 0.025, 0.146][0.003, 0.030, 0.110][0.002, 0.071, 0.158][0.011, 0.0386, 0.140]
3.0 Applsci 10 04704 i019[0.016, 0.034, 0.148][0.005, 0.028, 0.170][0.008, 0.025, 0.279][0.009, 0.0381, 0.303]
4.0Ø40 ± 0.05[39.84, 39.90, 39.94][39.84, 39.91, 40.04][39.84, 39.92, 39.96][39.84, 39.95, 40.10]
4.1 Applsci 10 04704 i020[0.053, 0.198, 0.360][0.063, 0.154, 0.340][0.028, 0.200, 0.427][0.097, 0.195, 0.454]
4.2 Applsci 10 04704 i021[0.020, 0.040, 0.213][0.012, 0.102, 0.163][0.017, 0.049, 0.170][0.001, 0.1162, 0.216]
5.0.aØ8 ± 0.05[7.845, 7.863, 7.888][7.840, 7.910, 7.955][7.781, 7.831, 7.941][7.789, 7.842, 7.885]
5.0.b[7.840, 7.853, 7.889][7.836, 7.908, 7.960][7.786, 7.831, 7.944][7.788, 7.840, 7.884]
5.0.c[7.841, 7.860, 7.894][7.840, 7.910, 7.960][7.781, 7.829, 7.941][7.785, 7.837, 7.886]
5.0.d[7.833, 7.856, 7.889][7.722, 7.888, 7.959][7.785, 7.830, 7.947][7.785, 7.840, 7.884]
5.1.a Applsci 10 04704 i022[0.013, 0.442, 0.600][0.012, 0.149, 0.451][0.018, 0.089, 0.532][0.006, 0.186, 0.300]
5.1.b[0.013, 0.461, 0.699][0.008, 0.122, 0.470][0.018, 0.077, 0.587][0.015, 0.097, 0.380]
5.1.c[0.020, 0.103, 0.476][0.016, 0.150, 0.460][0.032, 0.152, 0.767][0.016, 0.095, 0.560]
5.1.d[0.008, 0.266, 0.633][0.013, 0.151, 0.464][0.031, 0.112, 0.592][0.017, 0.106, 0.399]
5.2.a Applsci 10 04704 i023[0.006, 0.020, 0.456][0.009, 0.120, 0.451][0.012, 0.086, 0.527][0.014, 0.116, 0.290]
5.2.b[0.002, 0.014, 0.452][0.006, 0.120, 0.470][0.018, 0.087, 0.462][0.011, 0.127, 0.316]
5.2.c[0.011, 0.032, 0.452][0.010, 0.120, 0.460][0.030, 0.149, 0.764][0.004, 0.053, 0.513]
5.2.d[0.005, 0.032, 0.487][0.013, 0.120, 0.464][0.026, 0.110, 0.391][0.0058, 0.060, 0.399]
6.0 Applsci 10 04704 i024[0.001, 0.014, 0.024][0.0001, 0.163, 0.215][0.022, 0.169, 0.218][0.001, 0.0151, 0.180]
7.0.aØ20 ± 0.05[19.89, 19.94, 20.04][19.90, 19.94, 20.01][19.93, 19.97, 19.99][19.97, 19.99, 20.01]
7.0.b[19.92, 19.96, 20.03][19.93, 19.94, 19.99][19.94, 19.98, 20.03][19.97, 19.99, 20.02]
7.1.a Applsci 10 04704 i025[0.106, 0.279, 2.777][0.072, 0.216, 2.633][0.082, 0.238, 2.991][0.095, 0.122, 2.667]
7.1.b[0.096, 0.185, 0.499][0.033, 0.225, 0.444][0.084, 0.248, 0.734][0.038, 0.246, 0.404]
7.2.a Applsci 10 04704 i026[0.042, 0.142, 0.405][0.026, 0.072, 0.346][0.021, 0.187, 0.368][0.017, 0.087, 0.514]
7.2.b[0.042, 0.142, 0.405][0.020, 0.046, 0.246][0.021, 0.144, 0.367][0.019, 0.072, 0.404]
8.020 ± 0.25[19.92, 19.98, 20.01][19.93, 19.95, 20.23][19.82, 19.97, 20.02][19.71, 19.97, 20.34]
8.1 Applsci 10 04704 i027[0.175, 0.391, 0.735][0.16, 0.4657, 0.608][0.029, 0.392, 0.539][0.025, 0.528, 0.806]
9.0 Applsci 10 04704 i028[0.333, 1.966, 3.430][0.196, 1.464, 3.558][0.217, 1.279, 2.369][0.36, 2.4498, 3.454]
9.1 Applsci 10 04704 i029[0.218, 0.348, 3.362][0.201, 0.299, 1.007][0.175, 0.261, 0.369][0.202, 1.956, 2.369]
9.2 Applsci 10 04704 i030[0.089, 0.481, 3.284][0.074, 0.196, 1.521][0.065, 0.099, 0.426][0.062, 2.341, 2.797]
10.0.aØ14–14.10[13.86, 13.98, 14.03][13.82, 13.99, 14.11][13.86, 13.95, 14.01][13.87, 13.97, 14.12]
10.0.b[13.87, 13.88, 13.99][13.86, 13.99, 14.01][13.95, 13.97, 14.01][13.95, 13.97, 14.00]
10.0.c[13.83, 13.88, 13.99][13.82, 13.99, 14.01][13.94, 13.97, 14.01][13.94, 13.98, 14.00]
10.1.a Applsci 10 04704 i031[0.113, 0.297, 0.708][0.034, 0.077, 0.860][0.049, 0.171, 0.745][0.016, 0.112, 0.508]
10.1.b[0.042, 0.221, 0.400][0.037, 0.058, 0.388][0.066, 0.146, 0.434][0.012, 0.072, 0.393]
10.1.c[0.040, 0.195, 0.588][0.049, 0.067, 0.388][0.062, 0.157, 0.444][0.035, 0.082, 0.408]
10.2.a Applsci 10 04704 i032[0.020, 0.247, 0.583][0.002, 0.089, 0.560][0.019, 0.171, 0.723][0.005, 0.155, 0.418]
10.2.b[0.001, 0.027, 0.351][0.004, 0.021, 0.332][0.003, 0.022, 0.360][0.001, 0.038, 0.393]
10.2.c[0.004, 0.024, 0.684][0.002, 0.020, 0.310][0.001, 0.020, 0.667][0.001, 0.041, 0.408]
