Article

Development and Integration of a Workpiece-Based Calibration Method for an Optical Assistance System

by Julian Koch *,†, Christopher Büchse † and Thorsten Schüppstuhl
Hamburg University of Technology, Institute of Aircraft Production Technology, 21073 Hamburg, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2023, 13(13), 7369; https://doi.org/10.3390/app13137369
Submission received: 21 April 2023 / Revised: 14 June 2023 / Accepted: 15 June 2023 / Published: 21 June 2023
(This article belongs to the Special Issue Smart Manufacturing Systems in Industry 4.0)

Abstract: Assistance systems utilize a broad range of technologies to provide information and guidance to workers in manufacturing. The use of light projectors has so far seldom been catalogued in the relevant literature, and implementations are yet to be found in production environments. However, light projectors may offer a cost-effective enhancement for production processes, especially in the context of large-scale workpieces. Of the pertinent literature, only one calibration algorithm is currently considered applicable, which motivates this paper. A novel calibration algorithm based on Newton's method is presented and validated, together with a proof-of-concept demonstration of the resulting accuracy and its integration into an interface based on Node-RED, with MQTT as the main protocol.

1. Introduction

1.1. Motivation

Growing product variance stemming from an increase in customer requirements raises the demands on both the flexibility of production systems and the skills of employees on the shop floor [1]. Whereas Industry 4.0 approaches focus on the efficient automation and interconnection of systems, providing the technical foundations for adaptive production, its successor, Industry 5.0, has evolved around notions that place the human at the center. The envisioned result of both trends is highly connected and human-oriented factories, in which the technologies are tailored to the needs of the people who work in them, and the latter, as the central decision-makers, can control production in unison with a multitude of autonomous systems [2].
The basic enablers of this vision are Human–Machine Interfaces (HMIs), which allow the interweaving of humans and cyber-physical systems. Individual interfaces provide a communication channel to transmit information with the help of a specific modality [3]. The modality of information transmission is characterized by basic human abilities to absorb information, whereby visual, acoustic and haptic exchange are the most frequently used. Based on these interfaces, numerous research projects in the context of smart production systems have dealt with the support of humans in complex activities under the keyword “Cognitive Assistance Systems” (CASs) [4].
The relevance of such systems increases in principle with the proportion of manual processes and with the product variance within a domain. This applies in particular to aircraft manufacturing, where the proportion of manual processes is substantial, with lot sizes within certain subsectors (e.g., cabin interior) approaching 1 [5]. Therefore, supporting manufacturing personnel in process execution is of central importance for time-efficient, robust and error-free aircraft production.
The domain of aircraft manufacturing poses special requirements that can also be found in the manufacturing of other large-scale products—namely, the vast quantity of assembly components that constitute an aircraft, as well as the pertaining range of dimensions of said components. From this, specific requirements can be derived for the coverage of the working areas of CASs. In addition, various processes are carried out simultaneously in a large workspace, occasionally in ergonomically unfavorable positions. For projection systems, this can lead to occlusions caused by the employees and tools on the shop floor, thus hindering information transmission in the assisted processes. Overall, the working environment can be described as challenging for the use of CASs.

1.2. Research Gap

However, most of the research on CASs focuses on small-scale production environments (e.g., assembly workbenches) [6], hindering the direct applicability of current approaches. Nevertheless, some of the core approaches of CASs with visual information output are generally applicable. These include augmented reality glasses, lasers, and video and light projectors (moving head spots). In view of the requirements described above, light projectors may be a cost-effective solution and are fundamentally suitable for use in aircraft production (note: more information about the pricing of selected devices can be found in Appendix B). Nevertheless, light projectors are scarcely covered in the literature, as they are mainly used in event technology, and no applications in the manufacturing industry are known. Only a few papers touch on this technology's potential as an information channel for a CAS [7,8]. So far, the integration of the moving head spot in an industrial laboratory environment has been demonstrated in principle. The commissioning, which includes the calibration of the device, has been solved using a frame, enabling projection within a previously measured two-dimensional plane. Thus, there is a lack of a calibration methodology that works in three-dimensional space and that does not require additional mechanical calibration jigs. To overcome this deficit, this paper presents a workpiece-based calibration method for moving head spots, which calculates the transformation between the moving head spot and a given reference coordinate system using Newton's method. This not only simplifies the calibration process, but also facilitates work in three-dimensional space, enabling the illumination of curved objects such as those found in aircraft production.

1.3. Outline of This Work

Following the introduction, in Section 2 of this paper, the current state of the art is analyzed, underlining the framework of motivation. Section 3 explains the fundamentals of controlling a moving head spot and introduces the aircraft workpiece used for development and testing. Section 4 introduces the proposed calibration algorithm, which constitutes the core of this work. Following the calibration algorithm, the different aspects of the overall calibration process are validated (Section 5) and discussed (Section 6). The last two parts consist of Section 7, which summarizes the main contributions of this work, and Section 8, which motivates further research into the varying aspects of using a moving head spot as a CAS.

2. State of the Art and Related Work

This section provides an overview of pertinent state-of-the-art assistance systems in the field of manufacturing (Section 2.1), as well as related work on light projector commissioning (Section 2.2) and referencing based on component information (Section 2.3).

2.1. Assistance Systems

2.1.1. Classification of Assistance Systems in Manufacturing

Figure 1 serves as an overview of current research approaches in the field of assistance systems in manufacturing. The classification is based on the three criteria shown: the type of assistance (1), the modality used (2), and the technology leveraged in the form of the output device (3). The focus lies on cognitive assistance systems with visual information transfer, and these are marked in blue. Further detailed considerations of other types of assistance systems or alternative modalities within cognitive assistance systems are omitted, as they are not relevant to the present work (marked in gray to situate this work within the overall research field).
The illustrated classification method for assistance systems represents a combination of criteria commonly used within the pertinent literature, and is explained in more detail here. On the first level, a distinction is made regarding the type of assistance. More on this basic classification can be found in the work of [4,9]. The general aim of CASs is to support employees in their decision-making, or otherwise, in the execution of work processes with the aid of information, instructions, or feedback, thus imparting relief from mental fatigue. A detailed analysis of this type of assistance system in the context of manufacturing will be part of Section 2.1.2. Sensorial Assistance is intended to assist the worker by extending their capabilities of either acquiring information or focusing their attention on a specific piece of information [10]. Relevant instances often work with systems (mostly cameras) that record and document conditions such as product status [11], or track employee motions [12]. It should be noted that within the relevant literature, the terms “cognitive” and “sensorial” assistance are not always used in mutual distinction; in many systems, information acquisition, its processing, and its presentation to the human accompany each other, often rendering stringent distinction unnecessary. In several publications, therefore, the terms “Digital Assistance Systems” [13] or “Informational Assistance Systems” [14] can be found, in which the goal of mental relief is achieved by combining both approaches. Dissimilar to the first two types, Physical Assistance Systems provide not mental, but physical relief. This is realized, for example, by exoskeletons [15] that support specific body regions as a wearable suit, or by cobots that assume particularly unergonomic tasks in a common workflow [16].
The second level of classification relates only to cognitive assistance systems and is based on the modalities of information transfer in the work of [17,18]. They are derived from the human senses that allow us to perceive information. The most common modalities in which information can be transferred are visual, auditory and haptic. In this work, we focus on cognitive assistance systems with visual information output. An example of the auditory modality can be found in [19], in which the worker is provided the corresponding steps by voice output on augmented reality glasses, enabling hands-free work execution. Haptic information outputs are often realized in manufacturing by using the vibration functionalities of a smart watch; e.g., [20] leverages this as a means to convey direct feedback about the quality of the conducted processes.
The third classification level pertains to the type of output devices selected and is also proposed as a classification criterion in similar approaches by [9,21]. In this context, it should be noted that some devices are not only capable of transmitting visual information, but are also capable of serving other modalities. These connecting lines are not drawn in the figure for simplicity. As depicted, the devices currently used in the literature range from fixed projection-based solutions to wearable devices. Based on the individual requirements of the use case, the developed assistance systems may consist of single or multiple components for information provision [21]. In the following subsection, we will examine the recent research in the field of CASs with visual output.

2.1.2. Cognitive Assistance Systems

With the primary objective of mental relief for the employee, the field of CASs in manufacturing is primarily concerned with the context-sensitive provision of process-relevant information during the assembly or disassembly of variant-rich products down to batch size 1 [22]. Within this context, CASs are often combined with physical assistance systems (e.g., cobots in human–robot collaboration [5,12]), or sensorial assistance systems, as mentioned in Section 2.1.1. In addition to assembly, research approaches in the field of maintenance [23], repair [24], overhaul [25] and logistics [26] can also be found. From an economic perspective, CASs may help to ensure an error-free and efficient process execution, thus saving time and additional rework costs.
Examining the possible output devices shown in Figure 1, different stationary projection-based approaches utilizing light [7], laser [11,27], and video projectors [22,28,29] as a component within their assistance systems can be found in the respectively cited literature.
A shared characteristic of these approaches is their limited scope of application due to the coverage of the devices, rendering them not directly transferable to components with larger dimensions. Given this constraint, these applications are often installed at spatially limited workstations (such as assembly tables). In order to cover larger work areas, additional devices must be purchased, increasing cost and control effort. In addition, projection-based systems are always susceptible to occlusions. Therefore, the devices are typically ceiling-mounted directly over the component so that employees do not cover the projection surface. However, if the position of the projector cannot be oriented in such a way that occlusions are avoided, this problem can also be countered with multiple devices at varying positions and projection angles.
Solutions based on mobile devices such as augmented reality glasses, with the HoloLens as a market-ready solution, allow the coverage of larger workspaces as well as a more immersive display of information due to their location flexibility, as shown in [30,31]. Nevertheless, localizing the AR glasses within space is a challenging task. Equipping the manufacturing and assembly environment with stationary reference markers (ArUco markers, April tags) for localization is infeasible within some domains, such as aircraft manufacturing, due to the requirement for installation, calibration and residue-free removal. Here, model-based (CAD) and feature-based (SLAM) methods represent an alternative for localization. However, the latter two methods have limitations in terms of accuracy in large-scale work environments [32]. Another as yet unsolved problem of Augmented Reality devices is “Simulator Sickness”, which describes a form of motion sickness during the prolonged use of immersive applications [33].
Since the requirements for CASs can vary widely across production scenarios, their design is frequently individualized. In addition to the selection of a suitable modality and an appropriate output device, factors such as the degree of support, adaptability to human needs (for example, the employee's level of qualification), and the differing production environments play roles that should not be neglected [34,35]. In summary, CASs may serve the paradigms of the efficient, networked factory as well as that of human-centered production, enabling flexible processes that can fundamentally accommodate product and process variations while ensuring high quality.
In this paper, we focus on stationary projection systems, more precisely light projection systems. For challenging work environments, such as those described in Section 1.1 using the example of aircraft production, the strengths of these systems come to fruition. Individual light projectors can cover wide areas, as implemented in the event industry for large rooms, while remaining relatively inexpensive to procure due to their availability on the consumer market. Due to this low cost, the systems can be deployed multiple times within a single workspace, mitigating potential occlusions caused by employees. Our goal is to provide a contribution that improves their applicability in CASs, especially for large products. In the following, we will look at the preliminary work related to commissioning, which in this case mainly refers to the calibration process.

2.2. Commissioning of Visual Light Projectors

The utilization of moving head light projectors has been demonstrated previously as part of the “Assist-By-X” assistance system [7]. The calibration process of the moving head is based on a precise calibration fixture with four reference points that define the workspace coordinate system. Based on the known dimensions of the fixture, the position and orientation can be calculated, resulting in a numerically stable solution that is expressed in the Denavit–Hartenberg notation [36], commonly used in robotics. The control of the moving head calibration process is aided by a Node-RED dashboard with a Python script in the back-end. The movement is controlled by MQTT through the Node-RED flows.
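For illustration, a minimal Python sketch of such an MQTT-based command path is shown below; the broker address, topic layout and payload format are hypothetical placeholders rather than the interface of the original system.

```python
# Minimal sketch of sending a pan/tilt command to a moving head via MQTT.
# Broker address and topic are hypothetical; Node-RED flows would translate
# such a message into DMX channel values for the device.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect("broker.local", 1883)  # hypothetical broker address

def move_to(pan_deg: float, tilt_deg: float) -> None:
    """Publish a target pose for the moving head."""
    payload = json.dumps({"pan": pan_deg, "tilt": tilt_deg})
    client.publish("movinghead/cmd/pose", payload)  # hypothetical topic

move_to(270.0, 45.0)
```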
In an experiment prior to this paper, based on the proposed method, it became clear that the fixture itself must be positioned and aligned relative to a workpiece or reference coordinate system, which can pose further challenges and may require additional measurement equipment. This adds another step in the calibration process, including time and work effort, and is a potential source for uncertainty, motivating an approach to avoid this separate fixture.

2.3. Workpiece-Based Referencing

An approach for avoiding a separate calibration fixture by utilizing reference points on the workpiece itself for pose estimation and orientation correction has been demonstrated for aircraft wing assembly [37]. The specified goal was to increase the precision and speed of the wing-fuselage connection process in aircraft assembly. This was achieved by the continuous measuring of reference points on the aircraft wing with a laser tracker while its pose and position were manipulated by computer-controlled actors. The wing’s CAD model was used to find the workpiece coordinate system relative to the reference points. For the pose estimation, the Newton–Euler method, in combination with quaternions, was used.
Unlike the highly precise but expensive laser trackers used in that paper, a system with low-cost moving head spots loses depth information, so the light beam is represented by a vector of unknown length from the moving head to the reference point. In order to use the remaining information, a different mathematical approach is necessary. This approach will be presented in the following sections after an overview of the moving head control.

3. Fundamentals: Workpiece and Moving Head Kinematics

The large structure workpiece used for the development and verification of the calibration algorithm is a tail cone from aircraft production; a schematic representation is shown in Figure 2. As is typical in aircraft assembly, the component is constructed from metal sheets that are riveted together, leaving the rivet heads protruding from the surface. Rivets in characteristic spots, such as corners or intersections of rivet series, are chosen as reference points. These reference points are located on the lower half of the side of the workpiece, while the moving head is positioned on the ground, looking up at the reference points. The workpiece has a major diameter of about 2 m, a minor diameter of about 0.75 m and a length of about 3.5 m. It is held in a fixture, allowing for occlusion-free access to the aforementioned reference point area. For reasons of confidentiality, no pictures or renderings of the setup can be published in this paper.
The following section is divided into four subsections. The first subsection introduces the movement principle of the moving head and the relevant axes and parameters. This leads to the second subsection, which explains the control of the moving head with Cartesian coordinates. The third subsection introduces the coordinate systems and transformations used for the case of large structure workpieces. The last subsection gives an introduction to Newton's method and the multivariable variant used for the iterative calculations in Section 4.

3.1. Axis and Control of the Moving Head

The moving head has two principal rotational movement axes called pan and tilt, which control the position of the light beam. Furthermore, it has zoom and focus parameters that control the size and sharpness of the light spot, as shown in Figure 2. These parameters are comparable to the intrinsic parameters in camera calibration, which will be further explained in the calibration section, Section 4. For effects, moving heads are typically equipped with a color filter wheel, one or more so-called “Gobo wheels” with custom shapes, and a prism.

3.1.1. Pan and Tilt

The movement axes pan and tilt are the only axes that control the actual position of the light spot. For finer control, the finepan and finetilt controls exist, which are further explained in Appendix B.1. The movement ranges are typically $pan = [0°, 360°]$ or $[0°, 540°]$ and $tilt = [0°, 180°]$ or $[0°, 270°]$. The moving head is a serial kinematic chain, where $pan$ describes the rotation around the base, and $tilt$ describes the rotation of the actual projection head. This movement principle is similar to the definition of a spherical coordinate system and leads to the derivation of the Cartesian control in Section 3.2.

3.1.2. Focus

The moving head offers the possibility to adjust the light spot focus with its built-in optics. The correct focus setting depends on the distance between the moving head and the illuminated surface and is especially important when using differently shaped light spots. A properly focused spot reduces uncertainty regarding which point or area is intended to be highlighted by the assistance system and which is not. Overall, the aim of this assistance system is to provide helpful guidance throughout a given workflow, which requires unambiguity. A detailed derivation of the relationship between the distance and the correct focus value for the moving head used in this paper can be found in Appendix B.2.
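As an illustration of how such a distance-to-focus relation can be applied at runtime, the sketch below interpolates a focus value from a small calibrated lookup table; the sample values are hypothetical, and the actual relation for the device used in this paper is derived in Appendix B.2.

```python
# Sketch of a distance-to-focus mapping via linear interpolation over a few
# calibrated samples. The sample values are hypothetical placeholders.
import numpy as np

# (distance in m, focus channel value) -- hypothetical calibration samples
FOCUS_SAMPLES = np.array([[1.0, 210.0], [2.0, 150.0], [4.0, 95.0], [8.0, 60.0]])

def focus_for_distance(d: float) -> int:
    """Interpolate the focus channel value for a given beam length d."""
    return int(round(np.interp(d, FOCUS_SAMPLES[:, 0], FOCUS_SAMPLES[:, 1])))
```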

3.1.3. Gobo Wheels

The moving head used in this paper is equipped with two Gobo wheels, one of which is rotatable, while the other has static elements. The static Gobo wheel is shown in Figure 3. In the original configuration, the Gobo wheels have artistic symbols and shapes for show usage, which offer only limited benefit for assistance applications. Consequently, a new Gobo wheel was designed and laser cut from sheet metal with smaller apertures to reduce the spot size. This measure increases the specificity of the spot when the moving head is supposed to illuminate an individual point. Other shapes, such as arrows, crosses, circles, or logos, are conceivable if one would like to enrich the illuminated position with additional information and/or instructions.

3.2. Control with Cartesian Coordinates

For interoperability of the moving head with other systems and equipment, it is necessary to control it with three-dimensional Cartesian coordinates instead of directly controlling the angles of each movement axis. The general approach follows the transformation equations from Cartesian coordinates to spherical coordinates. The equations for the $pan$ and $tilt$ angles are noted in Equations (1) and (2) and are generalized for a point $P_n = [x_n, y_n, z_n]^T$, as shown in Figure 2 and Figure 4.
$$ pan_n = \mathrm{sgn}(y_n)^{MH} \cdot \arccos\left( \frac{x_n}{\sqrt{x_n^2 + y_n^2}} \right)^{MH} + 360° \qquad (1) $$
$$ tilt_n = \arcsin\left( \frac{z_n}{\lVert P_n \rVert} \right)^{MH} \qquad (2) $$
Some changes and simplifications of the spherical coordinate transformation equations were made to adapt to the specific use case of a moving head: The $pan$ equation (Equation (1)) omits points with $z_n < 0$, since the $tilt$ movement is software limited to the positive half space. Furthermore, the full $0°$–$540°$ movement range of the $pan$ axis is not used; instead, to avoid a full turnaround of the moving head between certain orientations, the $pan$ axis is offset by $360°$ to a range of $180°$–$540°$. The $tilt$ equation (Equation (2)) is defined such that $0°$ and $180°$ lie on the $xy$ plane and $90°$ is parallel to the $z$ axis, which deviates from the typical definitions of spherical coordinates.
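As a concrete illustration, the following sketch implements Equations (1) and (2) with numpy. It assumes the target point is already expressed in moving head coordinates and ignores the software limits and the fine channels of the device.

```python
# Direct implementation of Equations (1) and (2): converting a Cartesian
# target point (in moving head coordinates) into pan/tilt angles, including
# the +360 deg pan offset described above.
import numpy as np

def cartesian_to_pan_tilt(p: np.ndarray) -> tuple[float, float]:
    x, y, z = p
    # Equation (1): pan in the offset range ]180 deg, 540 deg]
    pan = np.sign(y) * np.degrees(np.arccos(x / np.hypot(x, y))) + 360.0
    # Equation (2): tilt measured from the xy plane, z >= 0 assumed
    tilt = np.degrees(np.arcsin(z / np.linalg.norm(p)))
    return pan, tilt

pan, tilt = cartesian_to_pan_tilt(np.array([1.0, 1.0, 1.0]))
# -> pan = 405.0 (45 deg plus the 360 deg offset), tilt ~ 35.26 deg
```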
With the Cartesian control of the moving head established, it is important to define the notations and relations between the Cartesian coordinate system of the moving head and the world or reference coordinate system, from which the control inputs for the moving head will be provided.

3.3. Coordinate Systems and Transformations

As with every light projection system, a relationship between the moving head projector and the illuminated object, or more specifically, between the moving head coordinate system ($MH$) and the object coordinate system, called the reference coordinate system ($Ref$), has to be defined. This relationship, indicated by the red arrow in Figure 4, is expressed by the $4 \times 4$ homogeneous coordinate transformation matrix ${}^{MH}_{Ref}T$ in Equation (3), which generally consists of a scale, zero to three rotations around the coordinate axes, and a translation vector. The scale is 1 in this case, but can be adjusted for coordinate systems with different length units. The rotation part is expressed by the $3 \times 3$ rotation matrix ${}^{MH}_{Ref}R_{zx}$, where the right-side subscript indicates the order of the rotations performed around the designated axes. The translation is expressed by the translation vector $[t_x, t_y, t_z]^T$.
$$ {}^{MH}_{Ref}T = \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix} = \left( \begin{array}{ccc|c} & & & t_x \\ & {}^{MH}_{Ref}R_{zx} & & t_y \\ & & & t_z \\ \hline 0 & 0 & 0 & 1 \end{array} \right) \qquad (3) $$
The left side sub- and superscripts of the matrix symbols in Equation (3) follow the tensor notation, where the subscript designates the source and the superscript the destination coordinate system.
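For illustration, such a homogeneous transformation can be assembled and applied with a few lines of numpy; this is a generic sketch rather than code from the presented system.

```python
# Assembling the homogeneous transformation matrix of Equation (3) from a
# rotation matrix and a translation vector, and applying it to a 3D point.
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build the 4x4 matrix T with rotation R (3x3), translation t (3,),
    and scale 1, following the structure of Equation (3)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_point(T: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Apply T to a 3D point using homogeneous coordinates."""
    return (T @ np.append(p, 1.0))[:3]
```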
Calculating the transformation between the reference and the moving head coordinate system will be the core task of the calibration process explained in Section 4. Furthermore, it will be required for an assistance application in order to calculate the control inputs for the moving head from any given coordinate input in the reference coordinate system.

3.4. Usage of Non-Linear, Multidimensional Newton’s Method

The core of the calibration process in Section 4 is formed by the previously derived Equations (1)–(3) from Section 3.2 and Section 3.3. These equations contain sine and cosine terms, making them non-linear, and the equations derived from them in the calibration process will be non-linear as well, ruling out the use of common linear solving algorithms. Although it would be possible to manually linearize the equations around the expected result, this would require prior knowledge of the region in which the result lies, which is often not available. Furthermore, the solving algorithm must tolerate a certain degree of inaccuracy in the input values obtained from the manual reference point capturing, explained in Section 5.1.
A promising solving algorithm that fulfills the aforementioned constraints is the iterative Newton’s method, which performs automatic linearization at each iteration step. It can be adapted for multidimensional functions as well, and combined with its limited complexity, it was chosen for this paper. The general idea of Newton’s method is finding zeros of a function f ( x ) (as defined in Equation (4)) where the conventional analytical approach is either not possible or infeasible [38].
$$ f(x) \overset{!}{=} 0 \qquad (4) $$
The first step in the iteration process is linearizing $f(x)$ at an initial value $x^{(0)}$, which can be chosen arbitrarily or as an estimate close to the expected result, accelerating the convergence. In the second step, shown in Equation (5), the zero of this linearization is calculated, leading to the next value $x^{(1)}$. The distance between the two values is called the iteration step size $\Delta x^{(0)}$.
$$ x^{(1)} = x^{(0)} - \frac{f(x^{(0)})}{f'(x^{(0)})} \overset{!}{=} x^{(0)} - \Delta x^{(0)} \qquad (5) $$
This process is repeated until one of three stopping criteria is met:
1.
The zero has been found with sufficient accuracy:
$$ |f(x^{(k)})| \le \epsilon_f \quad \text{with} \quad \epsilon_f \ge 0 $$
This condition does not guarantee convergence but can be used if convergence is not a requirement.
2.
The difference between two x values fell below a specified threshold:
$$ |\Delta x^{(k)}| \le \epsilon_x \quad \text{with} \quad \epsilon_x \ge 0 $$
This condition signifies convergence but does not guarantee the zero has been found accurately.
3.
The maximum iteration step count K has been reached without fulfilling one of the other criteria. This usually means the iteration did not converge, or that it oscillates around the zero.
The one-dimensional approach can be extended to multiple variables and functions utilizing a vector notation [39]. Since the algorithm solves for all variables simultaneously, a good trade-off between the individual equations is typically reached, and the results require little to no further evaluation. The value $x$ is extended to a vector $\mathbf{x}$, which contains all iteration variables. As shown in Equation (6), the function from Equation (4) is extended to a function vector $\mathbf{f}$ with the vector $\mathbf{x}$ as input.
$$ \mathbf{f}(\mathbf{x}) = \mathbf{0} \qquad (6) $$
Equation (5) for calculating the next $\mathbf{x}$ value is extended accordingly. The partial derivatives of $\mathbf{f}$ are combined in the Jacobian matrix $\mathbf{J}$, and a damping factor $\alpha$ is introduced, leading to Equation (7). Depending on the function equations, unwanted oscillations around the result can occur. To dampen these oscillations and help with convergence, $\alpha$ limits the iteration step size. The damping is automatically increased as soon as the iteration step size no longer shows a significant change.
$$ \mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \alpha \cdot \Delta\mathbf{x}^{(k)} = \mathbf{x}^{(k)} - \alpha \left( \mathbf{J}(\mathbf{x}^{(k)}) \right)^{-1} \cdot \mathbf{f}(\mathbf{x}^{(k)}) \qquad (7) $$
Since determining the inverse of $\mathbf{J}$ at each iteration step is numerically unfavorable, the linear Equation (7) is rearranged and solved for $\Delta\mathbf{x}^{(k)}$ instead, resulting in Equation (8).
$$ \mathbf{J}(\mathbf{x}^{(k)}) \cdot \Delta\mathbf{x}^{(k+1)} = \mathbf{f}(\mathbf{x}^{(k)}) \qquad (8) $$
As shown in Equation (9), for the multiple-variable iteration, the second stopping criterion can be adapted by checking the norm of the iteration step size $\Delta\mathbf{x}$.
$$ \lVert \Delta\mathbf{x}^{(k+1)} \rVert < \epsilon_x \quad \text{with} \quad \epsilon_x \ge 0 \qquad (9) $$
All necessary calculations within the iteration steps can be performed automatically and usually do not require manual tuning for different sets of input values. The value of $\alpha$ can be changed if oscillations occur. With the outlined Newton's method, it is now possible to derive the calibration process and define the function equations $\mathbf{f}(\mathbf{x})$ from the core Equations (1)–(3). The next section will delve into further detail on this derivation.
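A compact sketch of this damped, multidimensional Newton iteration is given below. It approximates the Jacobian by forward differences and solves the linear system of Equation (8) in a least-squares sense, which also covers the overdetermined case arising later in the calibration (more equations than unknowns); the automatic damping adjustment mentioned above is omitted for brevity, and the routine is an illustration rather than the authors' implementation.

```python
# Damped multidimensional Newton's method, Equations (6)-(9), with a
# forward-difference Jacobian. Illustrative sketch, not the original code.
import numpy as np

def newton(f, x0, alpha=1.0, eps_x=1e-10, max_iter=100, h=1e-7):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        # Forward-difference approximation of the Jacobian J(x)
        J = np.empty((fx.size, x.size))
        for j in range(x.size):
            xh = x.copy()
            xh[j] += h
            J[:, j] = (f(xh) - fx) / h
        # Equation (8): solve J * dx = f(x) instead of inverting J;
        # least squares also handles more equations than unknowns
        dx, *_ = np.linalg.lstsq(J, fx, rcond=None)
        x = x - alpha * dx                      # Equation (7)
        if np.linalg.norm(alpha * dx) < eps_x:  # Equation (9)
            return x
    raise RuntimeError("no convergence within max_iter steps")
```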

4. Moving Head Calibration

The moving head alone is not (and cannot be) aware of any coordinate system other than its own, as it was not intended as a precision light spot projection system with Cartesian coordinate control. Additionally, it does not possess the necessary provisions to measure its position and orientation externally. As discussed in Section 2, the calculation of the moving head position and orientation relative to a reference coordinate system has been performed with a calibration frame by [7], and the utilization of known points on a large structure workpiece was conducted by [37]. The scope of the following sections encompasses the combination of these ideas: the calculation of the position and orientation of the moving head relative to the workpiece coordinate system, denominated the reference coordinate system. Figure 5 gives an overview of the main steps in the calibration process as well as references to the sections describing each step.
The first phase of the calibration process is executed by illuminating distinctive points on the workpiece whose exact coordinates in the reference coordinate system are known from the CAD construction data or, as shown later in the validation in Section 5.1, can be measured with external metrology equipment. The axis positions of the moving head are saved and used to calculate the moving head position and orientation relative to the reference coordinate system in an iterative process. The term “calibration” is not used here in the sense of improving the accuracy of the system, as is common for measurement systems, but as an analogy to camera calibration in computer vision. The derived transformation is comparable to the extrinsic camera parameters of camera calibration, which determine the position and orientation of the camera in a reference coordinate system.

4.1. Determining the Moving Head Position

After illuminating the reference points and saving their distinct $pan$ and $tilt$ values in the first phase of the calibration process, the projector position is calculated in the second phase. The general steps of this phase are shown in Figure 6, together with the input data and the result of the calculation.
The moving head does not have a depth sensor; therefore, the distance between the moving head and the workpiece along the light beam is not directly known. Since the rotation of the moving head relative to the reference coordinate system is also unknown, the absolute rotation values of the $pan$ and $tilt$ axes are not directly usable to calculate the moving head position. Therefore, the reference points are divided into $\frac{N(N-1)}{2}$ duplicate-free pairs, from which the angle differences $\Delta pan$ and $\Delta tilt$ between the two points of a pair can be calculated. These angle differences determine how far each axis has to move to get from one point to the other and no longer include the rotation of the moving head relative to the reference coordinate system. Furthermore, each point has a connecting vector in the reference coordinate system from the moving head to the reference point. The angle between the two vectors of each point pair can be decomposed into two angles that are equal to the $\Delta pan$ and $\Delta tilt$ angles from the moving head control. These vectors are not yet determined, as the projector position is unknown, but they can be determined iteratively. The unknown projector position $P_{MH}^{Ref}$ in the reference coordinate system will be used as the iteration variables in the $\mathbf{x}$ vector, as defined in Equation (10). The iteration variables can be considered symbolic variables, so they can be used in the subsequent calculation without being known yet.
$$ P_{MH}^{Ref} = \begin{pmatrix} x_{MH} \\ y_{MH} \\ z_{MH} \end{pmatrix}^{Ref} \overset{!}{=} \mathbf{x} \qquad (10) $$
The initial value $\mathbf{x}^{(0)}$ should be chosen close to the actual projector position, since the Euler angles allow for more than one solution to which the iteration algorithm can converge. The minimum requirement for this estimate is that the initial position be placed on the correct side of the workpiece. A second, non-optimal solution exists, which is mirrored with respect to a plane formed by the reference points. Any measurement that offers results in the cm range is usually sufficient for a fixed or repeating setup. Previously calculated positions can be used as well. An overview of the points, angles and vectors for the following calculations is given in Figure 7.
The individual points in each point pair are indexed by $n$ and $m$, satisfying the condition $n, m \le N$ and $n < m$, with $N$ representing the total number of points. The angle differences in each point pair are calculated as shown in Equations (11) and (12).
$$ \Delta pan_{n,m} = pan_m - pan_n \qquad (11) $$
$$ \Delta tilt_{n,m} = tilt_m - tilt_n \qquad (12) $$
The connecting vectors between each reference point and the moving head position are defined in Equation (13) with $i = n, m$. For readability, the index $i$ will be used when an equation applies to both $n$ and $m$, and the $MH$ subscript will be omitted.
$$ \mathbf{v}_{MH,i}^{Ref} = (P_i - P_{MH})^{Ref} = \begin{pmatrix} x_i - x_{MH} \\ y_i - y_{MH} \\ z_i - z_{MH} \end{pmatrix}^{Ref} \overset{!}{=} \mathbf{v}_i^{Ref} \qquad (13) $$
Using the direct angle between $\mathbf{v}_n^{Ref}$ and $\mathbf{v}_m^{Ref}$ is not preferable in 3D space; instead, it is split into two components equivalent to $pan$ and $tilt$. Otherwise, if one component of the combined angle were much larger than the other, it would conceal the smaller part. In order to reduce the complexity of this decomposition, the assumption is made that the $xy^{Ref}$ and $xy^{MH}$ planes are parallel or anti-parallel. This is due to the fact that rotations around three axes are not unique, meaning that different solutions exist for the same change in orientation, precipitating unfavorable results and problems in the iteration. Since the workpiece is placed in a solid, alignable fixture, and the moving head can be easily aligned as well, the assumption can be considered valid. Therefore, both point pair vectors can be projected onto the $xy^{Ref}$ plane to obtain the $\hat{\mathbf{v}}_n^{Ref}$ and $\hat{\mathbf{v}}_m^{Ref}$ vectors in Equation (14), which are independent of $tilt_i$ for $tilt_i \in [0°, 90°[$.
$$ \hat{\mathbf{v}}_i^{Ref} = P \cdot \mathbf{v}_i^{Ref} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \cdot \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix}^{Ref} = \begin{pmatrix} x_i \\ y_i \\ 0 \end{pmatrix}^{Ref} \qquad (14) $$
Therefore, the angle between $\hat{\mathbf{v}}_n^{Ref}$ and $\hat{\mathbf{v}}_m^{Ref}$ can be equated to $\Delta pan_{n,m}$ from Equation (11), yielding Equation (15). Reshaping Equation (15) into a zero equation yields the first set of function equations (Equation (16)) for the iteration.
$$ \Delta pan_{n,m} \overset{!}{=} \arccos\left( \frac{\hat{\mathbf{v}}_n \cdot \hat{\mathbf{v}}_m}{\lVert \hat{\mathbf{v}}_n \rVert \cdot \lVert \hat{\mathbf{v}}_m \rVert} \right)^{Ref} \qquad (15) $$
$$ f_I: \quad 0 = \arccos\left( \frac{\hat{\mathbf{v}}_n \cdot \hat{\mathbf{v}}_m}{\lVert \hat{\mathbf{v}}_n \rVert \cdot \lVert \hat{\mathbf{v}}_m \rVert} \right)^{Ref} - \Delta pan_{n,m} \qquad (16) $$
For deriving the $\Delta tilt_{n,m}$ angle, both vectors must lie on a plane orthogonal to the $xy^{Ref}$ plane, which makes them independent of the $\Delta pan_{n,m}$ angle. This condition can be satisfied by rotating the $\mathbf{v}_m^{Ref}$ vector around the $z$ axis by $\Delta pan_{n,m}$, as shown in Figure 7 and Equation (17). In conjunction with the resulting vector $\tilde{\mathbf{v}}_m^{Ref}$, the accompanying point $P_m^{Ref}$ is rotated as well. Contrary to the simplified figure, the rotated point $\tilde{P}_m^{Ref}$ no longer lies on the workpiece surface, as the length of the connecting vector did not change ($\lVert \tilde{\mathbf{v}}_m \rVert = \lVert \mathbf{v}_m \rVert$). This does not impact the further calculations, as only the angle between the vectors is relevant, and the vectors are normalized in Equation (18).
$$ \tilde{\mathbf{v}}_m^{Ref} \overset{!}{=} R_z(\Delta pan_{n,m})^T \cdot \mathbf{v}_m^{Ref} \qquad (17) $$
Analogously to Equation (15) for $\Delta pan_{n,m}$, Equation (18) is derived for $\Delta tilt_{n,m}$. The second set of function equations (Equation (19)) for the iteration is then equivalent to Equation (16), now with the rotated second vector instead of the projected vector.
$$ \Delta tilt_{n,m} \overset{!}{=} \arccos\left( \frac{\mathbf{v}_n \cdot \tilde{\mathbf{v}}_m}{\lVert \mathbf{v}_n \rVert \cdot \lVert \tilde{\mathbf{v}}_m \rVert} \right)^{Ref} \qquad (18) $$
$$ f_{II}: \quad 0 = \arccos\left( \frac{\mathbf{v}_n \cdot \tilde{\mathbf{v}}_m}{\lVert \mathbf{v}_n \rVert \cdot \lVert \tilde{\mathbf{v}}_m \rVert} \right)^{Ref} - \Delta tilt_{n,m} \qquad (19) $$
With the two sets of function equations, Equations (16) and (19), the moving head position, which was defined and initialized in Equation (10), can be calculated iteratively.
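To make the structure of this first iteration concrete, the sketch below assembles the residuals of Equations (16) and (19) over all duplicate-free point pairs; it can be passed to a Newton routine such as the one sketched in Section 3.4. This is an illustrative reconstruction, not the authors' code: the helper names are invented, and the unsigned arccos angles are compared against absolute angle differences as a simplification of the sign handling.

```python
# Residual function for the position estimation (Equations (11)-(19)).
import numpy as np
from itertools import combinations

def angle_between(a: np.ndarray, b: np.ndarray) -> float:
    """Unsigned angle between two vectors, in degrees."""
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def position_residuals(x_mh, P_ref, pan, tilt):
    """x_mh: candidate moving head position (3,) in Ref coordinates.
    P_ref: (N, 3) reference points; pan, tilt: (N,) captured angles in deg."""
    res = []
    for n, m in combinations(range(len(P_ref)), 2):
        v_n = P_ref[n] - x_mh          # connecting vectors, Equation (13)
        v_m = P_ref[m] - x_mh
        dpan = pan[m] - pan[n]         # Equation (11)
        dtilt = tilt[m] - tilt[n]      # Equation (12)
        # f_I, Equation (16): angle between the xy projections vs. |dpan|
        res.append(angle_between(v_n[:2], v_m[:2]) - abs(dpan))
        # f_II, Equation (19): rotate v_m back around z (Equation (17)),
        # then compare the remaining angle to |dtilt|
        a = np.radians(dpan)
        Rz_T = np.array([[np.cos(a), np.sin(a), 0.0],
                         [-np.sin(a), np.cos(a), 0.0],
                         [0.0, 0.0, 1.0]])   # R_z(dpan)^T
        res.append(angle_between(v_n, Rz_T @ v_m) - abs(dtilt))
    return np.array(res)
```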

4.2. Determining the Moving Head Orientation

The third and final phase of the calibration is the calculation of the moving head orientation relative to the reference coordinate system. The steps of this phase are shown in Figure 8.
To complete the final transformation matrix between the reference and moving head coordinate systems introduced in Section 3.3 (Equation (3)), the $3 \times 3$ rotation matrix ${}^{MH}_{Ref}R$ must be calculated. Since the rotation angles are yet to be determined, it is constructed with the symbolic variables $\alpha$ for the rotation around the $x$ axis and $\gamma$ for the rotation around the $z$ axis. The rotation around the $y$ axis is omitted, as it is again assumed that the $xy^{MH}$ and $xy^{Ref}$ planes are parallel or anti-parallel, which leads to the rotation matrix shown in Equation (20). The values of $\alpha$ are limited to $\alpha \in \{0°, 180°\}$. The angle $\gamma$, however, can take any value in the range $\gamma \in [-180°, 180°[$. The $\mathbf{x}$ vector for the iteration is therefore $\mathbf{x} = [\alpha, \gamma]^T$. Estimating the initial values $\mathbf{x}^{(0)}$ is trivial for $\alpha$ due to the discrete values; for $\gamma$, a qualitative estimate is usually sufficient.
$$ {}^{MH}_{Ref}R_{zx} = R_z(\gamma)^T \cdot R_x(\alpha)^T = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}^T \cdot \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}^T \qquad (20) $$
With this symbolic rotation matrix, Equation (21) for the translation vector $[t_x, t_y, t_z]^T$ of the transformation matrix can be determined from the moving head position calculated in Section 4.1.
$$ \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix} \overset{!}{=} - {}^{MH}_{Ref}R_{zx} \cdot P_{MH}^{Ref} \qquad (21) $$
With the rotation matrix and the translation vector, the transformation matrix ${}^{MH}_{Ref}T$ is completely defined and can be used to transform the reference points from the reference coordinate system $Ref$ to the moving head coordinate system $MH$, according to Figure 4. The transformation step in Equation (22) yields the $[x_i, y_i, z_i]^T$ coordinates for the next calculation step.
$$ P_i^{MH} = {}^{MH}_{Ref}T \cdot P_i^{Ref} \qquad (22) $$
The transformed reference points can now be used with the $pan$ and $tilt$ values captured at the reference points, in combination with Equations (1) and (2), to derive the set of function equations (Equations (23) and (24)) for the second iteration. This results in $N(N-1)$ equations for $N$ points.
$$ f_{III}: \quad 0 = \mathrm{sgn}(y_i)^{MH} \cdot \arccos\left( \frac{x_i}{\sqrt{x_i^2 + y_i^2}} \right)^{MH} - pan_i \qquad (23) $$
$$ f_{IV}: \quad 0 = \arcsin\left( \frac{z_i}{\lVert P_i \rVert} \right)^{MH} - tilt_i \qquad (24) $$
With this second set of equations and the initial estimate of the orientation angles $\mathbf{x}^{(0)}$, the moving head orientation relative to the reference coordinate system origin can be calculated, leading to the final transformation between the moving head and the reference coordinate system and thus completing the calibration process. The final transformation matrix can be found in Equation (A6). In order to verify the accuracy and applicability of the obtained transformation, and of the system in general, the following section discusses the validation.
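The orientation phase can be sketched analogously (again an illustrative reconstruction with invented names): the rotation of Equation (20) is built for candidate angles α and γ, the translation follows from Equation (21), the reference points are transformed per Equation (22), and the residuals of Equations (23) and (24) are evaluated. Whether the 360° pan offset of Equation (1) has to be applied inside f_III depends on the convention in which the pan values were stored; it is included here for consistency with Section 3.2.

```python
# Residual function for the orientation estimation (Equations (20)-(24)).
import numpy as np

def rot_x(alpha_deg: float) -> np.ndarray:
    a = np.radians(alpha_deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a), np.cos(a)]])

def rot_z(gamma_deg: float) -> np.ndarray:
    g = np.radians(gamma_deg)
    return np.array([[np.cos(g), -np.sin(g), 0.0],
                     [np.sin(g), np.cos(g), 0.0],
                     [0.0, 0.0, 1.0]])

def orientation_residuals(x, P_ref, pan, tilt, p_mh_ref):
    """x = [alpha, gamma] in degrees; p_mh_ref: position from Section 4.1."""
    alpha, gamma = x
    R = rot_z(gamma).T @ rot_x(alpha).T            # Equation (20)
    t = -R @ p_mh_ref                              # Equation (21)
    res = []
    for i in range(len(P_ref)):
        xi, yi, zi = R @ P_ref[i] + t              # Equation (22)
        # f_III, Equation (23); +360 deg offset as in Equation (1)
        pan_i = np.sign(yi) * np.degrees(np.arccos(xi / np.hypot(xi, yi))) + 360.0
        res.append(pan_i - pan[i])
        # f_IV, Equation (24)
        tilt_i = np.degrees(np.arcsin(zi / np.linalg.norm([xi, yi, zi])))
        res.append(tilt_i - tilt[i])
    return np.array(res)
```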

5. Validation

After establishing the control and calibration of the moving head in Section 3 and Section 4, the performance of the overall system is evaluated. This section has two main parts: First, the possible error sources introduced by the reference point capturing, the control and mechanical limits of the moving head, and the algorithmic accuracy are discussed in Section 5.1, Section 5.2 and Section 5.3. Since each of these steps influences the following one, it is important to assess each possible contribution to the overall error and evaluate its respective significance. Second, the practical validation process is described and evaluated in Section 5.4. The setup for this is identical to the setup of the calibration process described in Section 3, with the moving head located on the ground pointing at the reference points on the tail cone, as illustrated in Figure 9. The theoretical limits can then be set in relation to real-world accuracy.

5.1. Capturing Reference Points

For the workpiece used in this paper, no CAD data were available from which to extract the reference point coordinates. For this reason, a Leica LTD800 laser tracker was used, which offers an absolute measurement precision of less than 50 μm in the range used in this paper [40]. This precision far exceeds the expected accuracy of the moving head and is therefore assumed to be sufficient as a reference. This approach can likewise be applied if CAD data are available but, for example, the workpiece is still flexible before final assembly, which is typical for large, flat workpieces such as side panels that possess little structural integrity on their own.
The reference points chosen were rivets in distinctive positions, such as on an intersection of a horizontal and a vertical rivet series. As described in Section 3, the rivet heads stand proud of the surface, rendering them easy to distinguish from the surface itself. Since the light spot, even with the smallest aperture, is much larger than a rivet head, a target printed on paper was placed over the rivets to aid with centering the light spot, similar to Figure 10. The $pan$, $finepan$, $tilt$ and $finetilt$ angles of each reference point were saved and used for the calibration process (see Appendix A.2).
It should be noted that this procedure offers only limited accuracy, since the light spot on the curved surface of the workpiece deviates distinctly from a perfect circle in most instances. On a flat surface that is not perpendicular to the light beam, the light spot would become a non-symmetric ellipse; on the curved surface, however, the resulting shape is no longer easily parametrizable. The effect is shown in Figure 9. Therefore, the $pan$ and $tilt$ values of the reference points should only be regarded as best-effort values, with an uncertainty in the mm range.

5.2. Theoretical and Mechanical Limits

From the derivation of the $pan$ and $tilt$ axis control in Appendix B.1, the limits of the control input accuracy can be derived. For the moving head used in this paper, these are $pan_{step} = 0.008272°/\mathrm{dig}$ and $tilt_{step} = 0.002747°/\mathrm{dig}$. Testing showed that the moving head still performs a movement for single-digit ($\mathrm{dig}$) increments and decrements; however, the resulting accuracy of these small movements was not further evaluated.
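Assuming the 16-bit coarse/fine channel layout described in Appendix B.1, a target angle can be converted into the coarse and fine channel values roughly as follows; the channel assignment and clamping are device-specific and shown only as a sketch.

```python
# Sketch: convert a pan angle into the 16-bit coarse/fine channel pair,
# using the step size stated above. Channel layout is device-specific.
PAN_STEP = 0.008272  # degrees per digit, per the derivation above

def pan_to_dmx(pan_deg: float) -> tuple[int, int]:
    dig = round(pan_deg / PAN_STEP)   # total 16-bit control value
    dig = max(0, min(dig, 0xFFFF))    # clamp to the valid range
    return dig >> 8, dig & 0xFF       # (pan channel, finepan channel)
```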
Comparable to industrial robot systems, the moving head can be characterized in terms of pose accuracy and pose repeatability [41]. The pose accuracy determines how precisely an arbitrary given point will be hit by the light spot. The pose repeatability determines how precisely the same point will be hit from different starting points and with different movement velocities between the start and end point. The pose accuracy is generally harder to determine for light projection systems, as the light beam does not have an end point until it contacts a surface. The pose repeatability, however, can be assessed without complication for isolated cases. From those results, a general idea of the system's accuracy can be derived.
To gain an initial estimation of the pose repeatability, a simple experiment was conceptualized, as shown in Figure 11. A target, as shown in Figure 10, is aligned to an arbitrary center point. The moving head is moved to a point on a concentric circle around this center point and then back to the center. The deviation from the actual center point is measured and saved. Since the movement speed depends on the distance, and since the movement path chosen by the moving head is not always the shortest or most direct path, different circle diameters and positions on the circle must be tested. It is posited that the initial orientation of the axes, especially the $tilt$ axis, has a significant influence on the pose repeatability due to the backlash within the moving head mechanism.
The described pose repeatability experiment was carried out by mounting the target shown in Figure 10 onto the workpiece. This way, realistic and relevant measurements can be generated, as the actual distance and orientation are used. A variety of starting points was selected, from far apart (>1 m), to achieve a high movement velocity, to short distances (5 mm), to achieve a very slow movement velocity. The starting points included single-axis movements and combined movements. However, for none of the starting points could a measurable deviation from the marked center point be observed. The achieved pose repeatability thus exceeded expectations.
When manually agitating the moving head to forcefully alter its $pan$ and $tilt$ angles, it moves back precisely to the original position, indicating that the axes are servo controlled or have position feedback, which is a possible explanation for the high level of accuracy observed.
In order to gain more than a qualitative understanding of the pose repeatability, a laser tracker was used to directly measure the position at the light exit window. A retro reflector was mounted on the moving head, and the same experiment as described previously was conducted. After each return to the center of the target, the position of the retro reflector was measured with the laser tracker. The measurements and results are listed in Table A3; the resulting standard deviations are $\sigma_x = 79.79\ \mu m$, $\sigma_y = 19.30\ \mu m$ and $\sigma_z = 107.88\ \mu m$, while the standard deviation reported by the laser tracker for its own position measurements is an order of magnitude lower, indicating that the measurements are sufficiently precise. While these values do not directly characterize the pose repeatability of the light spot, they can be used to determine whether the accuracy is sufficient for the application.
Altogether, the pose repeatability has a much smaller influence on the overall system accuracy than the spot distortion discussed previously.

5.3. Algorithmic Accuracy

With the high mechanical accuracy determined in the pose repeatability experiment, the focus is now set on evaluating potential error magnitudes and sources on the control input side of the projection system. Iterative algorithms operating on error-afflicted measurements cannot deliver a perfect solution, but they can satisfy the given metrics as adequately as possible (provided convergence is reached). The metrics of the presented algorithm are Equations (16), (19), (23) and (24). As described previously, these equations should be as close as possible to 0, and all of them express angles. The errors are assumed to be small, enabling the non-linear characteristics of the errors to be neglected and allowing the mean values to be observed for possible systematic errors and the standard deviation for potential outliers. Since the mean value and standard deviation of the position estimation in Equations (16) and (19) originate from the angle difference between two vectors, it is impossible to directly correlate them to the accuracy of the position estimation. However, it is possible to assess the quality of the result in terms of moving head steps. Ideally, the error should be smaller than one step, as this would keep the algorithmic error below the mechanical limits.
For the analysis of the transformation estimation error, it is assumed that the position error is small and that the light beam contacts a flat surface perpendicularly, eliminating spot distortion. Since the exact surface is neglected and no distance information can be derived from the angle error, it is not possible to reconstruct meaningful Cartesian coordinates. Therefore, it is expedient to combine the angle errors of the $pan$ and $tilt$ angles of each point into a single angle error, as shown in Equation (25).
$$ \Delta_{combined} = \arccos\left( \cos(\Delta pan) \cdot \cos(\Delta tilt) \right) \qquad (25) $$
With this combined angle error $\Delta_{combined}$, the point position error $e$ can be calculated (Equation (26)) as a circle around the true center of a point $P_n$, as seen in Figure 12. The distance $d$ is calculated from the estimated projector position and the Cartesian coordinates of the reference point $P_n$. From the individual point position errors, a mean value and standard deviation can be calculated to evaluate the quality of the final result.
$$ e = d \cdot \sin(\Delta_{combined}) \qquad (26) $$
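Expressed in code, the error metric of Equations (25) and (26) reads as follows; the example values are purely illustrative.

```python
# Direct computation of Equations (25) and (26).
import numpy as np

def point_position_error(dpan_deg: float, dtilt_deg: float, d: float) -> float:
    """Combine the pan/tilt angle errors (Eq. (25)) and map them to a lateral
    error e (Eq. (26)) at beam length d, assuming a perpendicular flat surface."""
    dp, dt = np.radians(dpan_deg), np.radians(dtilt_deg)
    combined = np.arccos(np.cos(dp) * np.cos(dt))   # Equation (25)
    return d * np.sin(combined)                      # Equation (26)

# e.g. a 0.05 deg error in each axis at 3 m distance -> ~3.7 mm
err = point_position_error(0.05, 0.05, 3.0)
```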
The mean values, standard deviations, and equivalent step counts for the position and transformation estimation, as well as the point position error, are listed in Table 1. The values of the position estimation suggest an adequate result, with the $pan$ angle being more accurate than the $tilt$ angle. After the transformation estimation, a large error in the mean value of the $tilt$ angle can be observed, while the $pan$ angle error remains of the same magnitude. This deviation propagates to the point position error, where the mean value is much larger than the standard deviation.
The low error at the position estimation stage suggests that a result with an adequate fit for the given input values was found, which, however, deviates from the true projector position and therefore led to the mean value deviation in the $tilt$ angle. After manually tuning the result of the estimated moving head position, it was found that the $z$ component was the cause of the inaccuracy. A hypothesis for the source of this inaccuracy is the much wider $pan$ angle range of the reference points compared to the $tilt$ angle range (see Table A2), since the $x$ and $y$ components are accurate and correlate more strongly with the $pan$ angle. This hypothesis is supported by the fact that the accuracy of the result increases when selecting reference points with an overall smaller $pan$ range, even though the overall number of points is decreased. Since this was the result of a qualitative, manual tuning process, the results are not described in detail on account of their limited generality. Overall, special consideration should be taken when selecting reference points to avoid the adverse effects of data bias. The effect of oscillations around a result was reduced with the damping factor $\alpha$ introduced in Equation (7) in Section 3.4. Convergence to local minima can be avoided by selecting the starting values of the iteration in the same general area as the expected result.

5.4. Practical Validation

The point position errors calculated previously offer only qualitative significance for the given tail cone due to the simplifying assumptions made regarding the surface, which do not correspond well to the distinctly curved surface of the actual part, thus motivating a real-world validation. For the validation, the reference points are overlaid with a target similar to Figure 10, and the outlines of the light spots of the manually jogged position and the calculated position are manually traced.
This manual process is necessary due to the aforementioned distortion of the light spot; especially on the curved surface of the workpiece, the center point is difficult to reconstruct. As the spot size of the two ellipses is identical, there will always be two intersection points and two points of maximum distance, as shown in Figure 13. Due to the angled surface, one spot is elongated more than the other, resulting in two distances, named $e_l$ for the lower and $e_u$ for the upper distance (comparable to the $e$ value from the algorithmic accuracy evaluation). The measurements are listed in Table A2, along with the mean values and standard deviations.
The combined mean value is 4.64 mm, and the combined standard deviation is 0.83 mm. These values are significantly lower than the predicted theoretical values, implying that the simplifications made in Section 5.3 may not be applicable to all cases. Particularly, the angle between the light beam and the surface in the real world experiment is much shallower than the perpendicular angle assumed in the simplification. Depending on the general orientation between moving head and workpiece, the real-world errors may become more or less favorable compared to the theoretical values.

6. Discussion

After completing the validation of each aspect of the calibration process, this section evaluates the individual results and puts them into relation with each other, following the order of the previous Section 5. The first paragraph covers the theoretical validation of the reference points and the moving head. The second paragraph focuses on the calibration algorithm and the aspect of using Newton's method. The third and final paragraph discusses the results of the practical validation.
Capturing the reference points with a laser tracker adequately functioned in this case, delivering valid and usable results. This method can be applied to most large-structure workpieces or work areas if no CAD data is available, or if the expected data accuracy is insufficient.
The described spot distortion posed significant issues and was a limiting factor for the accuracy when jogging the light spot to the reference points. An unclear but potentially significant amount of uncertainty was introduced at this stage, owing to the difficulty of determining the exact spot center. In all probability, this outweighed the other error sources discussed.
In the validation of the movement control, it was shown that the step granularity of the control input allows for precise movement inputs. Similarly, the results of the qualitative and laser-tracker-based pose repeatability experiment indicated that the moving head used in this paper is capable of achieving a precise mechanical accuracy in practice as well. It should be noted, however, that mechanical and control accuracy vary for different devices and application scenarios. For example, at greater distances between the moving head and the workpiece, small angular errors have a larger effect.
Obtaining meaningful measures for the algorithmic accuracy proved challenging due to both the nature of iterative algorithms and the non-trivial surface of the workpiece. The proposed point position error metric offered only limited relevance and predicted a low overall accuracy. Furthermore, a discrepancy between the two steps of the algorithm was discovered, where a suboptimal result of the first step can perpetuate a high error in the second step; currently, no feedback from the second to the first algorithm step is in place to counteract such effects. The distribution of the reference points has a direct influence on the accuracy of the result components if certain input characteristics outweigh others.
None of these effects can be attributed to the use of Newton's method for the calibration; however, in the current implementation, the control authority over the results is limited. While Newton's method reliably converged to a result, it was not necessarily the correct or desired result, attributable to the multiple local minima that can occur in the multidimensional function equations. The calculation time of the calibration algorithm increases significantly with the point count N, since the number of function equations scales with O(N²), leading to larger Jacobian matrices and more substitution steps; further information regarding relevant performance metrics can be found in Appendix C.3. Improved solving algorithms exist that are less susceptible to local minima and offer higher performance, but they entail increased implementation complexity. Overall, Newton's method offered an adequate compromise between performance and complexity and was sufficient for the proof-of-concept.
The practical validation demonstrated a higher-than-predicted point position accuracy, revealing a discrepancy between the algorithmic accuracy analysis and the real-world example. It was shown that the proposed method is applicable to large-structure workpieces with non-trivial surfaces (as intended) with reasonable accuracy. The exact accuracy, however, is difficult to quantify in view of the manual tracing and judgement involved in handling the aforementioned spot distortion, which potentially introduces further unidentified uncertainty.

7. Summary

Throughout the course of this paper, the relevance and applicability of a moving head as a Cognitive Assistance System for large-structure workpieces was established. The control of the moving head was explained, and methods for the characterization and modification of the device were presented, improving its usability as an assistance system. A detailed decomposition of the mathematical relations between the moving head and the reference coordinate systems was derived, providing a generalized foundation for further applications; this includes moving heads and similar light projection systems based on angular control inputs. The core of this paper is the novel, iterative calibration algorithm based on Newton's method, enabling a workpiece-based calibration without additional external reference points or fixtures. In the literature on light projection systems, the presented calibration approach is currently unique and superior to the only previously published calibration method, which relies on four precisely placed reference points on a calibration frame [7]. It can be suspected that laser projection systems use similar calibration algorithms; however, no public literature or documentation about the internal functionality of those algorithms has been released, so a direct comparison to the proposed algorithm is not possible. The validation uncovered the various challenges and constraints of the system and the proposed algorithm, identifying the point distortion as a significant source of measurement uncertainty. In summation, a reasonable accuracy could be demonstrated, validating the moving head spot as a useful, low-cost assistance system with control via MQTT for ease of integration into an industrial environment, as established previously by [7]. Due to the low unit price, occlusion problems may be addressed by scaling the assistance system to multiple moving head spots, mitigating information loss.

8. Outlook and Future Work

This paper opens a comprehensive range of future research prospects, which can be broadly categorized into three distinct clusters. The first, over-arching cluster involves the evaluation of cognitive assistance applications with a moving head; a special focus can be placed on workflow integration as well as further research regarding worker acceptance. Another focus within this cluster is the scaling of the assistance system to multiple moving heads sharing a common workspace, together with other assistance systems, in an effort to avoid occlusion by objects, tools and workers. Different shapes for the Gobo wheels can be explored to enrich the light spot with simple information.
The second cluster entails improvements to the iterative algorithm. Several approaches are possible, including the use of other non-linear solvers for the derived equations, or the merging of the two iteration steps to resolve the aforementioned discrepancy between them. While Newton's method is simple to implement, it requires a reasonably good initial estimate of the result to avoid local minima; a logical next step is therefore the evaluation of different iteration algorithms for improved performance and robustness. Weighting factors could be introduced to tune the results in the case of a heterogeneous distribution of angle ranges over the reference points, as well as methods to discard reference points that offer little to no additional accuracy for the end result, thus simultaneously increasing performance. Manual plausibility checks could be introduced for the case of multiple solutions arising from Euler angles; to mitigate this problem, especially in applications where the assumptions made about the coordinate system do not hold, the use of quaternions instead of Euler angles can be examined. An additional focus can rest on quality measures of the calculated result in order to improve the validity assessment and, potentially, to feed such metrics back into the algorithm.
The third and final cluster addresses the characterization of the moving head itself. One aspect is further research into the accuracy of each mechanical and optical axis with the methods described in this paper; using a laser tracker, it is possible to determine pose accuracy and repeatability. With a mathematical description of the mechanical zoom, in conjunction with the focus and either custom Gobo wheels or a separate iris, a variable and controllable spot size at the workpiece is achievable. Custom Gobo wheels can also be used for a precompensation of the aforementioned spot distortion, which introduces considerable uncertainty; for this precompensation, a mathematical description of the distortion in combination with different workpiece surfaces is necessary.
The presented features and capabilities of a moving head spot, in conjunction with further research, situate the moving head as a viable low-cost alternative to established light projection systems, incentivizing its deployment in the production environments of large-scale products.

Author Contributions

J.K.: Conceptualization, methodology, software, validation, supervision, project administration, writing—original draft preparation, writing—review and editing; C.B.: conceptualization, methodology, software, validation, formal analysis, investigation, writing—original draft preparation, writing—review and editing; T.S.: funding acquisition, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) (Program LuFo VI-1; Project "digitaleQM"). Publishing fees were supported by the Funding Programme Open Access Publishing of Hamburg University of Technology (TUHH).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available in Appendix C.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Control of the Moving Head with Node-RED

Appendix A.1. Data Flow and Communication Overview

Not all functionality of the presented calibration algorithm can be sensibly implemented in Node-RED and its JavaScript function nodes. Therefore, the core algorithm presented in Section 4.1 and Section 4.2 was outsourced into a Python script. The data flow between the Node-RED dashboard, the Python scripts, and the external network communication is shown in Figure A1. To simplify the communication between the Node-RED flows and the Python script, the data strings are JSON-formatted, since JSON is easily created and parsed in both languages. For the mathematical parts of the calibration algorithm, the numpy and sympy libraries were used, offering powerful and easy-to-use mathematical functionality in Python. Furthermore, the network interface to the ENTTEC DMX Ethernet adapter (see Appendix B) was outsourced into a second Python script, giving lower-level access to the computer's network interfaces.
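As an illustration of this exchange, a calibration request could be serialized as follows; the field names are hypothetical and merely stand in for the actual interface between the Node-RED flows and the Python script:

    import json

    # Hypothetical request from the Node-RED flow to the calibration script;
    # angles in degrees, CAD coordinates in metres (values from Table A2).
    request = json.dumps({
        "command": "calibrate",
        "referencePoints": [
            {"pan": 408.26, "tilt": 32.00, "cad": [-1.7443, 0.2477, -0.3072]},
            {"pan": 319.52, "tilt": 40.97, "cad": [-1.2126, 1.9556, 0.1109]},
        ],
    })

    # The Python side parses the string back into native data structures.
    points = json.loads(request)["referencePoints"]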
Figure A1. Flowchart of data exchange and communication between the individual subsystems in the proposed calibration process. Red: functionality implemented in Node-RED. Blue: functionality implemented in Python. Green: network and hardware devices.

Appendix A.2. The Node-RED Dashboard

In order to make the calibration process accessible and straightforward, a Node-RED dashboard (see Figure A2) was created with three main components. The upper-left section (Jogging) is dedicated to the direct control of the moving head; all control elements correlate directly to the respective DMX channels. The control elements are also updated by movement commands from other parts of the software, which aids debugging and gives a general overview of the current moving head state. The upper-right section (XY Jogging) is dedicated to the linear movement of the moving head. Aiming the light beam at a target with the basic angular pan and tilt controls is usually non-intuitive and therefore difficult to accomplish successfully; instead, a generic transformation with a configurable base direction can be used for linear movements. The coordinates of the generic transformation do not represent any specific coordinate system but can be used to find the correct pan and tilt values for the calibration process. After a successful calibration, the final transformation can be used for linear movement in the correct coordinate system. The lower section (Flex Calibration Points) is dedicated to reference point management: the currently illuminated point can be saved together with the corresponding CAD coordinates, and saved points can be manipulated and deleted. This section also provides file management options to save the reference points and the calculated calibration results.
The main purpose of the dashboard is to aid the calibration process; the integration into an assistance application can be conducted via MQTT, as sketched below.
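A minimal sketch of such an integration, assuming the common paho-mqtt client library; the broker address, topic name and payload format are hypothetical, only the use of MQTT itself is established in this paper:

    import json
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("broker.local", 1883)  # hypothetical broker address

    # Command the calibrated moving head to highlight a workpiece coordinate
    # (reference coordinate system, values in metres); topic name is illustrative.
    client.publish(
        "assistance/movinghead/target",
        json.dumps({"x": -1.5049, "y": 1.0239, "z": -0.1801}),
    )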
Figure A2. Screenshot of the Node-RED Dashboard for the moving head calibration process.

Appendix B. Special Remarks on the Moving Head

The moving head used in this paper is a ShowTec Phantom 130 Spot with a standard DMX512 interface. It features a 130 W LED white light source, which can be colored with a filter wheel and is shaped by two so-called Gobo wheels featuring different shapes and patterns [42]. Details about the movement and focus axes follow in Appendix B.1, Appendix B.2 and Appendix B.3. The DMX-to-Ethernet interface is an ENTTEC ODE Mk2 controller, which allows bidirectional control of the DMX bus via Ethernet through different protocols [43]. In the network communication script, ENTTEC's ESP protocol was implemented for simplicity; however, more sophisticated protocols such as ArtNet can be implemented for future applications as well.
At the time of writing, the ShowTec Phantom 130 Spot was available for EUR 1200 [44]. The price range for simple moving heads starts at around EUR 200, and high-end/high-power devices cost up to EUR 3000. An industrial laser projection system from Z-Laser costs between EUR 7000 and 20,000 [45]; comparable products from competitors are in a similar price range, though additional software licensing fees are not always included.

Appendix B.1. Control of the Pan and Tilt Axis

The initial commissioning of the device showed that the tilt axis has an even larger angular range than documented; however, since the exact limits were not specified, movements beyond the documented range were inaccurate. Therefore, the range was software-limited to 0–180°, which proved to be accurate. Furthermore, the full range of the 16-bit DMX channels is not used: the largest value that still results in movement is 65,535 − 255 = 65,280, and any value larger than 65,280 does not change the position. The right-side superscript denotes the format in which a value is given (° for degrees, DMX for the DMX channel notation, and no superscript for decimal values).
pan° = pan · 540 / 65280        tilt° = tilt · 180 / 65280        (A1)

pan = pan° · 65280 / 540        tilt = tilt° · 65280 / 180        (A2)
The resulting 16-bit values from Equations (A1) and (A2) have to be split into two 8-bit values for their respective DMX channels, called pan^DMX, finepan^DMX, tilt^DMX and finetilt^DMX, as shown in Equations (A3) and (A4).
pan^DMX = (pan & 0xFF00) >> 8        finepan^DMX = pan & 0x00FF        (A3)

tilt^DMX = (tilt & 0xFF00) >> 8        finetilt^DMX = tilt & 0x00FF        (A4)
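A minimal sketch of this conversion, transcribing Equations (A1)–(A4) directly (the function and key names are ours):

    def pan_tilt_to_dmx(pan_deg: float, tilt_deg: float) -> dict:
        # Degrees to 16-bit channel values, Equation (A2)
        pan = round(pan_deg * 65280 / 540)
        tilt = round(tilt_deg * 65280 / 180)
        # Split each 16-bit value into coarse and fine 8-bit DMX channels,
        # Equations (A3) and (A4)
        return {
            "pan": (pan & 0xFF00) >> 8,
            "finepan": pan & 0x00FF,
            "tilt": (tilt & 0xFF00) >> 8,
            "finetilt": tilt & 0x00FF,
        }

    # Example: pan_tilt_to_dmx(408.26, 32.00) yields pan/finepan = 192/202,
    # matching point 1 in Table A2 (fine channels may differ by +/-1 from rounding).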

Appendix B.2. Control of the Focus

The light spot focus is controlled by a single DMX channel. To determine the relationship between the focus value and the projection distance, an experiment was conducted, as shown in Figure A3. A movable screen was placed in the light beam path of the moving head at distances of 0.5–8.0 m in 0.5 m increments, measured from the center of the pan and tilt axes. For each distance, the value range that resulted in a visually focused light spot was determined by eye.
From this data, a polynomial relation (Equation (A5)) was fitted in the range of 1.5–6.0 m; outside this range, the focus value is at its minimum or maximum. When using the static Gobo wheel, the moving head limits the range of focus values, which results in an effective range of 80–255.
focus^DMX = 255                                                             for d ≤ 1.5 m
focus^DMX = 80                                                              for d > 6.0 m
focus^DMX = −1.271173 · d³ + 20.391608 · d² − 132.271950 · d + 413.084848   otherwise        (A5)
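Equation (A5) transcribes directly into a small helper (a sketch; the clamping to the effective range 80–255 follows the remark above):

    def focus_dmx(d: float) -> int:
        # Focus channel value for a projection distance d in metres, Equation (A5)
        if d <= 1.5:
            return 255
        if d > 6.0:
            return 80
        value = -1.271173 * d**3 + 20.391608 * d**2 - 132.271950 * d + 413.084848
        return max(80, min(255, round(value)))  # clamp to the effective DMX range

    # Example: focus_dmx(3.0) returns 165, close to the measured optimum
    # of 162.5 in Table A1.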
Figure A3. Experimental setup for determining the relationship between focus value and distance between the stationary moving head and a movable projection screen.
Figure A4. Polynomial fit of the manually determined focus value, depending on the distance.
Table A1. Measurement values from the experiment. The optimum values are the average of the minimum and maximum values and were used for determining the polynomial fit.
Distance in m    Minimum    Optimum    Maximum
≤1.0             255        255        255
1.5              255        255        255
2.0              219        223        227
2.5              186        189.5      193
3.0              158        162.5      167
3.5              142        146.5      151
4.0              125        129        133
4.5              111        115.5      120
5.0              99         104.5      110
5.5              83         88         93
≥6.0             80         80         80

Appendix B.3. Behaviour of the Gobo Wheel

It was observed that the elements on the static Gobo wheel are not equidistantly arranged, indicating that the individual circles are not perfectly concentric. This must be taken into consideration when using the Gobo wheel as an aperture.

Appendix C. Validation Data

Appendix C.1. Reference Points and Calibration Results

The lowest pan angle is 319.52° at point 5, and the highest pan angle is 408.26° at point 1, resulting in a maximum range of 88.74° between these two points. The lowest tilt angle is 24.71° at point 10, and the highest tilt angle is 46.41° at point 4, resulting in a maximum range of 21.70°.
T_Ref^MH =
[ 0.91289  −0.40821  0  0.88606 ]
[ 0.40821   0.91289  0  0.42273 ]
[ 0         0        1  1.08992 ]
[ 0         0        0  1       ]
Due to the assumption that the rotation angle α can only take the two values α ∈ {0°, 180°}, the transformation matrix is composed of only the z rotation matrix and the translation vector.
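For illustration, a matrix of this structure can be composed and applied with numpy as follows (a sketch; the helper name is ours, and the rotation follows the standard right-handed z rotation convention):

    import numpy as np

    def transform_z(alpha_rad: float, translation) -> np.ndarray:
        # Homogeneous 4x4 transform composed of a z rotation and a translation,
        # matching the structure of the calibration result above.
        c, s = np.cos(alpha_rad), np.sin(alpha_rad)
        T = np.eye(4)
        T[:2, :2] = [[c, -s], [s, c]]
        T[:3, 3] = translation
        return T

    # A reference point in homogeneous coordinates is then mapped with
    # T @ np.array([x, y, z, 1.0]).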
Table A2. Reference points in moving head DMX channel values, measured coordinates, and lower and upper position errors between jogged and calculated reference points.

Index   pan   finepan   tilt   finetilt   X in m    Y in m    Z in m     e_l in mm   e_u in mm
1       192   202       45     84         −1.7443   0.2477    −0.3072    5           5
2       185   246       55     235        −1.6257   0.5788    −0.2365    3           6
3       173   29        62     0          −1.5049   1.0239    −0.1801    3           4
4       160   209       65     190        −1.3521   1.4272    −0.0178    3           3
5       150   226       58     10         −1.2126   1.9556    0.1109     4           6
6       153   73        50     73         −1.3299   1.9238    −0.0744    3           6
7       160   36        52     166        −1.4477   1.5623    −0.1810    6           6
8       172   22        50     241        −1.6119   1.1009    −0.3073    5           6
9       180   236       44     190        −1.7302   0.7689    −0.3858    4           4
10      189   52        35     0          −1.8702   0.3832    −0.4739    5           5
11      160   170       45     225        −1.5453   1.6221    −0.2528    5           5

μ: e_l = 4.18, e_u = 5.09, combined = 4.64
σ: e_l = 1.03, e_u = 1.00, combined = 0.83

Appendix C.2. Pose Repeatability Measurements

For the pose repeatability experiment, the moving head was moved to eight points placed concentrically around a center point at a radius of 0.5 m. After returning from each of these points, the position at the center point was measured and saved.
Table A3. Measurements of the moving head pose repeatability. The coordinates are given in the laser tracker coordinate system. σ_LT is the measurement uncertainty reported by the laser tracker.

Index   X in mm       Y in mm        Z in mm       σ_LT in mm
1       −306.4154     −2032.1464     −972.1941     0.00188634
2       −306.4316     −2032.1124     −972.2038     0.00220966
3       −306.2175     −2032.1691     −971.9397     0.00263319
4       −306.2847     −2032.1653     −971.9680     0.00199971
5       −306.2746     −2032.1472     −971.9647     0.00221153
6       −306.2349     −2032.1175     −971.9473     0.00211342
7       −306.2486     −2032.1570     −971.9434     0.00228444
8       −306.2201     −2032.1406     −971.9422     0.00236737
μ       −306.290925   −2032.144438   −972.012900   0.002213208
σ       0.079785928   0.019298765    0.107880628   0.000214366
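The μ and σ rows follow directly from the eight measurements; numpy's default (population) standard deviation reproduces the listed values:

    import numpy as np

    # Returned center positions from Table A3 (laser tracker coordinates, in mm)
    points = np.array([
        [-306.4154, -2032.1464, -972.1941],
        [-306.4316, -2032.1124, -972.2038],
        [-306.2175, -2032.1691, -971.9397],
        [-306.2847, -2032.1653, -971.9680],
        [-306.2746, -2032.1472, -971.9647],
        [-306.2349, -2032.1175, -971.9473],
        [-306.2486, -2032.1570, -971.9434],
        [-306.2201, -2032.1406, -971.9422],
    ])
    print(points.mean(axis=0))  # [-306.290925, -2032.144438, -972.0129]
    print(points.std(axis=0))   # [0.0798, 0.0193, 0.1079]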

Appendix C.3. Remarks on the Performance of Newton’s Method

This basic performance analysis of the proposed calibration algorithm, in conjunction with the use of Newton's method as a non-linear solver, is conducted using the first iteration step (outlined in Section 4.1) as an example. The computational effort of the second step is one-third smaller due to one fewer iteration variable.
Figure A5. Comparison between the complexity of individual calibration algorithm steps. (Note: the number of SVD calculation steps has been scaled down by a factor of 1000 to plot all functions in a single diagram).
The calibration process in Section 4 starts with N reference points. From these N reference points, N_pairs duplicate-free pairs are created.

N_pairs = N(N − 1) / 2
From each pair, two equations (Equations (11) and (12)) are created, which together form the function vector (Equations (16) and (19)). Equation (A8) shows the dimension of the function vector f.
dim(f) = 2 · N_pairs = N(N − 1)        (A8)
Since the Jacobian matrix is created from the function vector, the dimension of the function vector and the major dimension of the Jacobian are identical; the minor dimension is equal to the number of iteration variables (three for the first iteration step and two for the second). Equation (A9) shows the number of elements N_J in the Jacobian matrix of the first step.
N_J = 3 · dim(f) = 3 · N(N − 1)        (A9)
The number of elements in the Jacobian matrix is relevant for the substitution of the symbolic values in the sympy implementation; the substitution is performed in every iteration step, with a computational complexity of O(N_J) = O(N²). Solving Equation (8) from Section 3.4 is done with the pinv_solve function from sympy, which is based on the Moore–Penrose pseudoinverse. The pseudoinverse itself is calculated with Singular Value Decomposition (SVD), with a worst-case computational complexity of O(m³) [46], where m correlates to the size of the matrix. The overall computational complexity can therefore be approximated by Equation (A10).
O(m³) = O((N_pairs)³) = O((N(N − 1) / 2)³) = O(N⁶)        (A10)
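The following sketch illustrates these mechanics on a small generic system (the equations are illustrative, not the calibration equations themselves); every iteration substitutes the current estimate into f and its Jacobian and solves the linearized system with pinv_solve:

    import sympy as sp

    x, y, z = sp.symbols("x y z")
    variables = sp.Matrix([x, y, z])
    # Illustrative non-linear system standing in for the function vector f
    f = sp.Matrix([x**2 + y - 3, y**2 + z - 5, x + z**2 - 4])
    J = f.jacobian(variables)        # symbolic Jacobian, built once

    estimate = sp.Matrix([1.0, 1.0, 1.0])
    for _ in range(20):
        subs = dict(zip(variables, estimate))
        f_val = f.subs(subs)         # substitution step, O(N_J) per iteration
        if f_val.norm() < 1e-10:
            break
        J_val = J.subs(subs)
        # Newton update via the Moore-Penrose pseudoinverse (SVD-based)
        estimate = estimate + J_val.pinv_solve(-f_val)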
This analysis is only intended to give an overview of the worst-case computational complexity of the proposed algorithm. Since the focus was not on performance but on the proof-of-concept of the three-dimensional calibration algorithm, the high computational complexity remains acceptable given the typically low number of calibration points.

References

1. Froschauer, R.; Kurschl, W.; Wolfartsberger, J.; Pimminger, S.; Lindorfer, R.; Blattner, J. A Human-Centered Assembly Workplace For Industry: Challenges and Lessons Learned. Procedia Comput. Sci. 2021, 180, 290–300.
2. Maddikunta, P.K.R.; Pham, Q.V.; B, P.; Deepa, N.; Dev, K.; Gadekallu, T.R.; Ruby, R.; Liyanage, M. Industry 5.0: A survey on enabling technologies and potential applications. J. Ind. Inf. Integr. 2022, 26, 100257.
3. Bornewasser, M.; Hinrichsen, S. (Eds.) Informatorische Assistenzsysteme in der Variantenreichen Montage: Theorie und Praxis; Springer: Berlin/Heidelberg, Germany, 2020.
4. Mark, B.G.; Rauch, E.; Matt, D.T. Industrial Assistance Systems to Enhance Human–Machine Interaction and Operator's Capabilities in Assembly. In Implementing Industry 4.0 in SMEs: Concepts, Examples and Applications; Matt, D.T., Modrák, V., Zsifkovits, H., Eds.; Springer: Cham, Switzerland, 2021; pp. 129–161.
5. Kalscheuer, F.; Eschen, H.; Schüppstuhl, T. Towards Semi Automated Pre-assembly for Aircraft Interior Production. In Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021; Schüppstuhl, T., Ed.; Springer: Cham, Switzerland, 2022; pp. 203–213.
6. Mark, B.G.; Rauch, E.; Matt, D.T. Worker assistance systems in manufacturing: A review of the state of the art and future directions. J. Manuf. Syst. 2021, 59, 228–250.
7. Müller, R.; Hörauf, L.; Vette-Steinkamp, M.; Kanso, A.; Koch, J. The Assist-By-X system: Calibration and application of a modular production equipment for visual assistance. Procedia CIRP 2019, 86, 179–184.
8. Schoepflin, D.; Koch, J.; Gomse, M.; Schüppstuhl, T. Smart Material Delivery Unit for the Production Supplying Logistics of Aircraft. Procedia Manuf. 2021, 55, 455–462.
9. Hinrichsen, S.; Adrian, B.; Bornewasser, M. Information Management Strategies in Manual Assembly. In Human Interaction, Emerging Technologies and Future Applications II, Proceedings of the 2nd International Conference on Human Interaction and Emerging Technologies: Future Applications (IHIET—AI 2020), Lausanne, Switzerland, 23–25 April 2020; Ahram, T., Taiar, R., Gremeaux-Bader, V., Aminian, K., Eds.; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; Volume 1152, pp. 520–525.
10. Romero, D.; Bernus, P.; Noran, O.; Stahre, J.; Fast-Berglund, Å. The Operator 4.0: Human Cyber-Physical Systems & Adaptive Automation Towards Human-Automation Symbiosis Work Systems. In Advances in Production Management Systems. Initiatives for a Sustainable World, Proceedings of the IFIP WG 5.7 International Conference, APMS 2016, Iguassu Falls, Brazil, 3–7 September 2016; Nääs, I.A., Vendrametto, O., Mendes Reis, J., Gonçalves, R.F., Terra Silva, M., von Cieminski, G., Kiritsis, D., Eds.; IFIP Advances in Information and Communication Technology; Springer: Cham, Switzerland, 2016; Volume 488, pp. 677–686.
11. Müller, R.; Vette-Steinkamp, M.; Hörauf, L.; Speicher, C.; Bashir, A. Worker centered cognitive assistance for dynamically created repairing jobs in rework area. Procedia CIRP 2018, 72, 141–146.
12. Koch, J.; Büsch, L.; Gomse, M.; Schüppstuhl, T. A Methods-Time-Measurement based Approach to enable Action Recognition for Multi-Variant Assembly in Human-Robot Collaboration. Procedia CIRP 2022, 106, 233–238.
13. Keller, T.; Bayer, C.; Bausch, P.; Metternich, J. Benefit evaluation of digital assistance systems for assembly workstations. Procedia CIRP 2019, 81, 441–446.
14. Petzoldt, C.; Keiser, D.; Beinke, T.; Freitag, M. Functionalities and Implementation of Future Informational Assistance Systems for Manual Assembly. In Subject-Oriented Business Process Management. The Digital Workplace—Nucleus of Transformation, Proceedings of the 12th International Conference, S-BPM ONE 2020, Bremen, Germany, 2–3 December 2020; Freitag, M., Kinra, A., Kotzab, H., Kreowski, H.J., Thoben, K.D., Eds.; Springer: Cham, Switzerland, 2020; Volume 1278, pp. 88–109.
15. Weidner, R.; Karafillidis, A.; Wulfsberg, J.P. Individual Support in Industrial Production—Outline of a Theory of Support-Systems. In Proceedings of the 49th Annual Hawaii International Conference on System Sciences (HICSS), Koloa, HI, USA, 5–8 January 2016; Bui, T.X., Sprague, R.H., Eds.; IEEE: Piscataway, NJ, USA, 2016; pp. 569–578.
16. Gualtieri, L.; Palomba, I.; Wehrle, E.J.; Vidoni, R. The Opportunities and Challenges of SME Manufacturing Automation: Safety and Ergonomics in Human–Robot Collaboration. In Industry 4.0 for SMEs: Challenges, Opportunities and Requirements; Matt, D., Modrak, V., Zsifkovits, H.E., Eds.; Palgrave Macmillan: Cham, Switzerland, 2020; pp. 105–144.
17. Masiak, T. Entwicklung eines Mensch-Roboter-kollaborationsfähigen Nietprozesses unter Verwendung von KI-Algorithmen und Blockchain-Technologien: Unter Randbedingungen der Flugzeugstrukturmontage. Ph.D. Thesis, Saarland University, Saarbrücken, Germany, 2020.
18. Neßelrath, R. SiAM-dp: An open development platform for massively multimodal dialogue systems in cyber-physical environments. Ph.D. Thesis, Saarland University, Saarbrücken, Germany, 2015.
19. Neumann, A.; Strenge, B.; Uhlich, J.C.; Schlicher, K.D.; Maier, G.W.; Schalkwijk, L.; Waßmuth, J.; Essig, K.; Schack, T. AVIKOM—Towards a Mobile Audiovisual Cognitive Assistance System for Modern Manufacturing and Logistics. In Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 30 June–3 July 2020; Makedon, F., Ed.; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–8.
20. Borisov, N.; Weyers, B.; Kluge, A. Designing a Human Machine Interface for Quality Assurance in Car Manufacturing: An Attempt to Address the "Functionality versus User Experience Contradiction" in Professional Production Environments. Adv. Hum.-Comput. Interact. 2018, 2018, 1–18.
21. Hold, P.; Erol, S.; Reisinger, G.; Sihn, W. Planning and Evaluation of Digital Assistance Systems. Procedia Manuf. 2017, 9, 143–150.
22. Sochor, R.; Kraus, L.; Merkel, L.; Braunreuther, S.; Reinhart, G. Approach to Increase Worker Acceptance of Cognitive Assistance Systems in Manual Assembly. Procedia CIRP 2019, 81, 926–931.
23. Hinrichsen, S.; Riediger, D.; Unrau, A. Development of a projection-based assistance system for maintaining injection molding tools. In Proceedings of the 2017 IEEE International Conference on Industrial Engineering & Engineering Management (IEEM), Singapore, 10–13 December 2017; pp. 1571–1575.
24. Eversberg, L.; Ebrahimi, P.; Pape, M.; Lambrecht, J. A cognitive assistance system with augmented reality for manual repair tasks with high variability based on the digital twin. Manuf. Lett. 2022, 34, 49–52.
25. Deneke, C.; Moenck, K.; Schueppstuhl, T. Augmented Reality Based Data Improvement for the Planning of Aircraft Cabin Conversions. In Proceedings of the 2021 The 8th International Conference on Industrial Engineering and Applications (Europe), Barcelona, Spain, 8–11 January 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 37–45.
26. Sidiropoulos, V.; Bechtsis, D.; Vlachos, D. An Augmented Reality Symbiosis Software Tool for Sustainable Logistics Activities. Sustainability 2021, 13, 10929.
27. Dolgov, O.S.; Kolosov, A.I.; Safoklov, B.B. Study of the Effectiveness of the Introduction of Laser Projection System in the Process of Technological Preparation of the Production of Aircraft Structures From Polymer Composite Materials. In Proceedings of the 2021 International Ural Conference on Electrical Power Engineering (UralCon), Magnitogorsk, Russia, 24–26 September 2021; pp. 639–643.
28. Rupprecht, P.; Kueffner-Mccauley, H.; Schlund, S. Information provision utilizing a dynamic projection system in industrial site assembly. Procedia CIRP 2020, 93, 1182–1187.
29. Bertram, P. Entwicklung Eines Kontextsensitiven, Modularen Assistenzsystems für Manuelle Tätigkeiten, 1st ed.; Mensch-Maschine-Systeme Series; VDI Verlag: Düsseldorf, Germany, 2020; Volume 40.
30. Eschen, H.; Kötter, T.; Rodeck, R.; Harnisch, M.; Schüppstuhl, T. Augmented and Virtual Reality for Inspection and Maintenance Processes in the Aviation Industry. Procedia Manuf. 2018, 19, 156–163.
31. Wang, X.; Ong, S.K.; Nee, A.Y.C. A comprehensive survey of augmented reality assembly research. Adv. Manuf. 2016, 4, 1–22.
32. Martin, P.; Marchand, E.; Houlier, P.; Marchal, I. Mapping and re-localization for mobile augmented reality. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP 2014), Paris, France, 27–30 October 2014; pp. 3352–3356.
33. Vovk, A.; Wild, F.; Guest, W.; Kuula, T. Simulator Sickness in Augmented Reality Training Using the Microsoft HoloLens. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Mandryk, R., Ed.; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–9.
34. Hinrichsen, S.; Bornewasser, M. How to Design Assembly Assistance Systems. In Intelligent Human Systems Integration 2019, Proceedings of the 2nd International Conference on Intelligent Human Systems Integration (IHSI 2019): Integrating People and Intelligent Systems, San Diego, CA, USA, 7–10 February 2019; Advances in Intelligent Systems and Computing; Karwowski, W., Ahram, T., Eds.; Springer: Cham, Switzerland, 2019; Volume 903, pp. 286–292.
35. Pokorni, B.; Popescu, D.; Constantinescu, C. Design of Cognitive Assistance Systems in Manual Assembly Based on Quality Function Deployment. Appl. Sci. 2022, 12, 3887.
36. Denavit, J.; Hartenberg, R.S. A Kinematic Notation for Lower-Pair Mechanisms Based on Matrices. J. Appl. Mech. 1955, 22, 215–221.
37. Yongguo, Z.; Xiang, H.; Wei, F.; Shuanggao, L. Trajectory Planning Algorithm Based on Quaternion for 6-DOF Aircraft Wing Automatic Position and Pose Adjustment Method. Chin. J. Aeronaut. 2010, 23, 707–714.
38. Sutton, B. Chapter 31: Newton's method. In Numerical Analysis: Theory and Experiments; Sutton, B., Ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2019; pp. 347–359.
39. Higham, N.J. Chapter 25: Nonlinear Systems and Newton's Method. In Accuracy and Stability of Numerical Algorithms; Higham, N.J., Ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2002; pp. 459–469.
40. Leica Geosystems AG. Leica Laser Tracker for Hand-Tools: Superior by Any Measure—LT(D)800: Document 731 982—III. 2003. Available online: https://www.sigma3d.de/fileadmin/Webseiten-Daten/Dokumente/VermietungProduktPDFs/Vermietung_Lasertracker_Leica_LT_D_800.pdf (accessed on 20 April 2023).
41. ISO 9283:1998; Manipulating Industrial Robots: Performance Criteria and Related Test Methods. International Organization for Standardization: Geneva, Switzerland, 1998.
42. Highlite International B.V. Showtec Phantom 130 Spot V1 Manual. Available online: https://www.highlite.com/media/attachments/MANUAL/40073_MANUAL_GB_V1.pdf (accessed on 4 June 2023).
43. ENTTEC Ltd. Open DMX Ethernet Mk2—RDM Compliant DMX Over Ethernet Gateway. Available online: https://support.enttec.com/helpdesk/attachments/101026597782 (accessed on 6 June 2023).
44. Thomann GmbH. Showtec Phantom 130 Spot. Available online: https://www.thomann.de/intl/showtec_phantom_130_spot.htm (accessed on 4 June 2023).
45. Sensor Partners, B.V. Laser Projectors. Available online: https://sensorpartners.com/en/products/laser/laserprojectoren/ (accessed on 4 June 2023).
46. Trefethen, L.N.; Bau, D., III. Numerical Linear Algebra; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1997; pp. 346–348.
Figure 1. Overview of assistance systems in manufacturing. Focused areas in blue, grey boxes are out of scope.
Figure 2. Pan and tilt axes and focus of the moving head in relation to the workpiece, with connecting vectors to the reference points P_1, P_2 and P_n.
Figure 3. Left: Original static Gobo wheel. Right: New static Gobo wheel with various aperture sizes.
Figure 4. Definition of the Reference and Moving Head coordinate systems and transformation between them.
Figure 5. Overall flowchart for the calibration process with the relevant subsections for the calculation steps. The sections related to the main steps are given in the brackets.
Figure 6. Flowchart for determining the moving head position from the reference points relative to the reference coordinate system.
Figure 7. Visualization of a point pair P_n, P_m, their connecting vector v_{n,m}, and the angle differences Δpan_{n,m} and Δtilt_{n,m}. P̂_m was rotated by R_z(Δpan_{n,m}) into a plane orthogonal to the xy plane. All points and vectors are given in the Ref coordinate system.
Figure 8. Flowchart for determining the moving head orientation from the reference points and the calculated position relative to the reference coordinate system orientation.
Figure 9. Schematic representation of the moving head spot distortion due to the varying angle between light beam and surface normal.
Figure 10. Light spot on the target (light grey circles: 1 mm, black circles: 5 mm); red circle marks the initial position of the light spot. Inaccuracies result from an uneven background.
Figure 11. Experimental setup for determining the Pose Repeatability.
Figure 12. Relation of the equivalent position.
Figure 13. Lower (e_l) and upper (e_u) measured point position errors of a light spot at a given reference point P_n.
Table 1. Results from the algorithmic accuracy analysis of the position and transformation estimation and the combined point position error.
Equation      Angle      μ            σ           μ in steps   σ in steps
(16): f_I     Δpan       −0.01492°    0.1557°     1.80         18.82
(19): f_II    Δtilt      0.01748°     0.2026°     6.36         73.75
(23): f_III   pan        0.05552°     0.1056°     6.71         12.77
(24): f_IV    tilt       −1.0343°     0.1406°     −376.52      51.18
(26): e       Combined   0.02640 m    0.00482 m   –            –