Article

Machine Learning Sequential Methodology for Robot Inverse Kinematic Modelling

by Franco Luis Tagliani, Nicola Pellegrini and Francesco Aggogeri *
Department of Mechanical and Industrial Engineering, University of Brescia, Via Branze, 38, 25123 Brescia, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9417; https://doi.org/10.3390/app12199417
Submission received: 27 August 2022 / Revised: 18 September 2022 / Accepted: 19 September 2022 / Published: 20 September 2022
(This article belongs to the Special Issue Recent Advances in Automated Machine Learning)

Abstract

The application of robots is growing in most countries, occupying a relevant place in everyday environments. Robots are still affected by errors due to their limitations, which may compromise the final performance. Accurate trajectories and positioning are strict requirements that robots have to satisfy and may be studied through the inverse kinematic (IK) formulation. Conventional numerical IK techniques are computationally intensive procedures that solve for all the robot joint values simultaneously, increasing the complexity of the solution identification. In this scenario, machine learning strategies may be adopted to achieve an effective and robust manipulator IK formulation, owing to their computational efficiency and learning ability. This work proposes a machine learning (ML) sequential methodology for robot inverse kinematics modeling, iterating the model prediction at each joint. The method implements an automatic Denavit-Hartenberg (D-H) parameter formulation code to obtain the forward kinematic (FK) equations required to produce the robot dataset. Moreover, the artificial neural network (ANN) architecture is selected as the model structure, and the number of layers in combination with the hidden neurons per layer is defined for each joint by an offline optimization phase based on the genetic algorithm (GA) technique. The ANN is implemented with the following settings: scaled conjugate gradient as the training function and the mean squared error as the loss function. Different network architectures are examined to validate the IK procedure, ranging from global to sequential and considering the computation direction (from the end-effector or from the base). The method is validated in simulated and experimental laboratory environments, considering articulated robots. The sequential method exhibits a reduction of the mean squared error index of 42.7–56.7% compared to the global scheme. The results show the outstanding performance of the IK model in robot joint space prediction, with a residual mean absolute error of 0.370–0.699 mm in following 150.0–200.0 mm trajectories applied to a real robot.

1. Introduction

Robot configurations are based on several actuated joints that link the rigid elements in a serial, parallel, or hybrid arrangement for a general-purpose manipulation task. For precise motion tracking, the focus is on the relation between the base frame, located at the robot ground, and the end-effector frame, which defines the physical interaction with the external environment.
Multi degree-of-freedom (DOF) robots exhibit a working volume in which a pose can be reached with different configurations of the kinematic chain, except at singular positions. The robot's kinematics formulates the transfer functions between the joint actuator values and the corresponding Cartesian coordinates of the end-effector with respect to the base frame. The forward kinematics (FK) equations describe the robotic manipulator from the joints to the Cartesian coordinates, computing the position and orientation of an element (e.g., the tool center point, TCP) for a given motion of the robot actuators. The function is unique: a given joint configuration corresponds to a specific Cartesian position and orientation. The robot control and the trajectory planning task require the robot joint space information.
In the literature, a significant amount of research work has been presented dealing with this set of equations through matrix multiplications and mathematical procedures [1,2]. Most industrial robots are developed with simple-form kinematics to obtain analytic closed-form solutions. Nevertheless, several factors, such as manufacturing tolerances, wear, transmission errors, and set-up errors, compromise the internal design model used in the robot controller, affecting the transformation accuracy and causing deviations from the desired locations.
For these reasons, the inverse kinematic (IK) formulation may be an alternative approach to obtain the joint variables for any Cartesian coordinates of a required point. The inverse transformation may be a computationally intensive procedure for general-form multi-link manipulators, depending on the modeling, the formulation used for the kinematics, the robot structure, and the workspace. In IK, for every pose within the workspace volume there exist n potential joint configurations, where n is the redundancy degree. Robots capable of reaching a point in Cartesian space with multiple configurations are known as redundant robots, since they have more degrees of freedom than required to perform the point-reaching task [3].
The closed-form analytical IK solution becomes more complex as the DOF increases, requiring optimizations and constraints [4,5,6]. There is a set of conventional techniques to solve IK problems, such as algebraic, geometric, and iterative methods [1,7,8]. Nevertheless, these techniques show several issues for robots with complex structures, such as the lack of guaranteed closed-form solutions, high computational costs, or results that depend on the starting point. For these types of manipulators, optimization methods, such as particle swarm optimization (PSO) [9,10,11] or the genetic algorithm (GA) [12,13], have been adopted to avoid high-order polynomials. Nevertheless, the impact of the initial setting and inadequate convergence are constraints that need to be regulated [14].
Recently, machine learning (ML) procedures have been used to model the IK, as in other industrial applications [15,16,17,18,19,20,21]. Compared with numerical and optimization methods, predictions are obtained promptly, exploiting data from the offline training phase.
One of the most recognized ML schemes in the literature is based on artificial neural networks (ANNs). A planar ANN scheme was proposed by Duka in [22], creating a dataset from the FK of a 3-DOF planar robot and using it to train a shallow network with 100 neurons in its hidden layer. The results showed a mean square error (MSE) equal to 0.0054387 rad in the joint space. Habibkhah and Mayorga, in [23], implemented a cosine-based orientation function for a shallow network with 12 neurons, obtaining a validation MSE equal to 0.00038523 rad. Varedi-Koulaei et al. deployed a 5-layer ANN for an identical planar manipulator. The proposed scheme was designed with the PSO algorithm, obtaining a final validation MSE close to 0.00013 rad in the joint space [24]. However, the inverse kinematic computation was simplified in the planar motion configuration due to the dimensional reduction.
Li et al. presented an interesting study in [25], increasing the analysis complexity. Compared to previous works, the authors modeled an industrial robot with 6 DOFs (RS10N), using a shallow single-hidden-layer back-propagation neural network with 30 hidden neurons, applied in a constrained workspace and validated on a rectangular trajectory. The obtained results showed an error lower than ±0.05° with a maximum deviation of 1.00 mm from the horizontal axis. A further method was developed by Narayan et al. [26], proposing the ANFIS (adaptive neuro-fuzzy inference system) technique to predict the IK of a 4-DOF SCARA robot in a reduced workspace. The maximum observed deviation of the end-effector was 0.3775 mm in the X-direction, 0.4135 mm in the Y-direction, and 0.0027 mm in the Z-direction. Nevertheless, these applications considered limited workspaces with planar or highly constrained robot motions, facilitating the corresponding inverse kinematic computation and reducing the effect of the estimated errors. To avoid the workspace limitations, an alternative approach was presented by Shah et al. [27], dividing the spatial workspace into quadrants and applying a 5-layer ANN scheme. The study aimed to model a 5-DOF robot. The application was validated for a single quadrant, obtaining a joint error within a range of ±0.003 rad. Almusawi et al. [28] suggested a further method to address the spatial complexity in modeling the inverse kinematics of a 6-DOF robot (Denso VP6242). Inner feedback of the current joint values was included as an extra input to the 10-hidden-layer ANN. The process validation was performed using a spring-curve trajectory, demonstrating a maximum error in the Z-direction lower than 0.35 mm. Despite the promising results, the inclusion of the joint values may complicate the generalization of the scheme, since it requires the trajectory itself as an ANN input. Singh et al. [29] proposed a long short-term memory (LSTM) recurrent neural network model, with two hidden layers of 10 LSTM neurons, to represent the inverse kinematics of the left arm of the Baxter robot (7 DOFs). The study showed a final MSE lower than 0.02 on the test dataset. A tree-based algorithm was implemented by Thomas et al. [30] to model the inverse kinematics of a 6-DOF parallel manipulator. The dataset was generated by recording the end-effector poses with a camera and the corresponding joint values with ultrasonic sensors. The random forest (RF) algorithm showed the best training results, with a determination coefficient equal to 94.61% and a root-mean-square error (RMSE) close to 0.036 m.
Köker et al. [31] proposed a committee-machine scheme as an alternative solution to represent a 6-DOF redundant robot, the Hitachi M6100. The concept was based on 10 parallel shallow networks, later simplified to 6, with variable numbers of hidden neurons. The scheme output was selected from the parallel networks by minimizing the Cartesian error, obtained using the forward kinematics. Compared to a global ANN approach, the proposed method showed a significant reduction of the final error range, from 5.76–13.41 mm to 0.39–0.74 mm. Nevertheless, training multiple models and the larger amount of data required entailed a significant computational cost due to the high complexity.
To improve the model accuracy, further methodologies combine ML models and optimization algorithms in cascade. In [32], Köker et al. implemented a hybrid simulated annealing and GA approach to reduce the error of a shallow network (46 neurons in the hidden layer) to 1.0 µm for a 4-DOF manipulator. Similarly, a neuro-annealing approach based on Elman recurrent networks was applied to the Stanford and PUMA 560 robotic manipulators in [33]. This study proposed a committee system with Elman networks, as in [31], optimizing the final output using simulated annealing and reducing the MSE from 2.10 to 7.9 × 10−6 in the joint space. Zhou et al. [34] applied an extreme learning machine (ELM) combined with a sequential mutation genetic algorithm (SGA). This work focused on a 6-DOF Stanford MT-ARM robot, reporting a model error equal to 0.0225 mm with a standard deviation of 0.0081 mm.
Although the literature shows promising methods and significant results in addressing ML strategies to achieve an effective and robust manipulator IK formulation, further studies are required to evaluate novel techniques for robot inverse kinematic modeling.
In particular, the state of the art shows that several ML methods are based on global models that compute all the joint values simultaneously, increasing the computational cost and complexity. In this context, the development of a sequential methodology focused on the single-joint calculation may represent a novel paradigm [35,36].
This paper aims to investigate and deploy an ML model using a sequential procedure for an accurate IK formulation, addressing the model prediction with a data-driven approach. At each joint, the input vector includes the previously obtained actuator values and the Cartesian coordinates with orientation. Moreover, the ANN architecture is defined through an optimization phase based on the GA technique.
The main contribution of this study is to validate a complete sequential ML model methodology, assessing the results in terms of joint and Cartesian error. Additionally, the results are compared with the global approach. A further contribution is to integrate the Genetic Algorithm optimization into the IK procedure to define the network structure: the hidden layers and hidden neurons per layer. The GA score function depends on the normalized root mean square error, the maximum error, and the coefficient of determination. The novel sequential methodology is validated in a simulated environment (IRB140) and with a real physical robot (LR-Mate 200iC).
The paper is organized as follows: Section 2 presents the novel methodology to model the robot inverse kinematics, describing the sequential procedure. Following the proposed methodology, a simulated case study applied to an articulated robot (IRB140) is described in Section 3, where the results are discussed and compared with other studies. Section 4 presents the experimental campaign performed on a 6-DOF robot (FANUC LR-Mate 200iC), evaluating the obtained results and validating the proposed sequential methodology for robot inverse kinematic modeling. Finally, Section 5 presents the conclusions, discussing the study findings and future prospects.

2. Methodology Structure

The IK function defines the joint configuration (Q) required to reach the target frame, given the Cartesian coordinates (S) of the end-effector. This study models the IK procedure adopting a sequential set of ANNs. The flow diagram of the proposed methodology is presented in Figure 1. Three macro-steps can be identified. In the initial phase ("IK boundary definition and pre-processing"), the robot is chosen and the dataset is produced with the FK equations, obtained automatically through an iterative procedure based on the D-H parameters or by manual setting. The corresponding modeling is evaluated with the Mean Euclidean Error (MEE) on a set of predefined trajectories in order to verify the quality of the dataset fed to the machine learning model. Once the FK is validated, the dataset is produced. The following step defines the ANN structure ("Network Architecture Definition") for the IK model. Two different approaches may be selected: a global approach, where all the joints are computed together, or the novel sequential approach, where the joints are computed sequentially from the TCP or from the base, integrating each previously modeled joint as an extra input for the following one. To avoid a manual inference of the model structure, a genetic algorithm (GA) optimization with a predefined tunable cost function is included. Finally, the produced IK model is validated using the same trajectories employed for the FK validation in the initial step.
Table 1 summarizes the methodology in terms of input and output elements of the proposed IK procedure.

2.1. Dataset Generation

This phase consists of the robot selection and the D-H parameter estimation for the creation of the FK dataset with a pseudo-random generator. The FK equations compute the Cartesian coordinates from the joint values using the D-H parameters. The X, Y, and Z axes are defined from the origin of each joint, positioned at the actuator centre: Z is the direction of the joint axis; X is the axis perpendicular to both Zi and Zi−1, pointing away from the latter; Y is derived from the Z and X axes. The D-H parameters based on this axes definition are [37]:
  • Joint angle θi: rotation angle about the Zi−1 axis, measured from Xi−1 to Xi. The angle corresponds to the joint value if the joint has a rotary actuator.
  • Joint distance di: distance along Zi−1, from the origin of the (i − 1)th frame to the intersection of Zi−1 with Xi.
  • Link length ai: distance along the Xi axis, from the intersection between the Zi and Zi−1 axes to the origin of the ith coordinate frame.
  • Link twist αi: rotation about Xi, measured from the Zi−1 to the Zi axis.
The 4 × 4 D-H transformation matrices are obtained in Equations (1) and (2), where k is the joint number of the robotic manipulator. The transformation matrices are computed to obtain the relative modeling between consecutive joints, since the multiplication of the complete set of matrices generates the model between the base frame and the final joint frame. The homogeneous transformation matrix A is defined from the D-H parameters as follows:
$$A_k^{k-1} = R(z,\theta_k)\, T(z,d_k)\, T(x,a_k)\, R(x,\alpha_k) \tag{1}$$

$$A_k^{k-1} = \begin{bmatrix} c\theta_k & -s\theta_k\, c\alpha_k & s\theta_k\, s\alpha_k & a_k\, c\theta_k \\ s\theta_k & c\theta_k\, c\alpha_k & -c\theta_k\, s\alpha_k & a_k\, s\theta_k \\ 0 & s\alpha_k & c\alpha_k & d_k \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2}$$
where $c\theta_k$ and $s\theta_k$ correspond to the cosine and sine of $\theta_k$; $c\alpha_k$ and $s\alpha_k$ correspond to the cosine and sine of $\alpha_k$.
As stated, the transformation matrix from the end-effector to the base frame may be obtained as a product of matrix multiplication, as:
$$T_n^0 = \begin{bmatrix} n_x & s_x & a_x & p_x \\ n_y & s_y & a_y & p_y \\ n_z & s_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{3}$$

$$T_n^0 = \prod_{k=1}^{n} A_k^{k-1} \tag{4}$$
where the vectors n, s, and a define the manipulator orientation and the vector p represents the position in Cartesian coordinates. The resulting matrix corresponds to the FK equations. Nevertheless, the manual definition of these parameters is a time-consuming activity; therefore, the proposed methodology implements an automatic procedure, as presented in [38]. This is offered to the user as an alternative: a manual approach may also be pursued if the user prefers to insert the D-H parameters or the FK equations directly.
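As an illustration, Equations (1)–(4) translate directly into code; the following MATLAB function is a minimal sketch (an assumed implementation for revolute joints, not the authors' code) that returns the end-effector transform for a joint vector q and a D-H table dh.

```matlab
function T = forwardKinematics(q, dh)
% FORWARDKINEMATICS  FK chain of Equations (1)-(4), revolute joints assumed.
%   q  : n-by-1 joint values [rad]
%   dh : n-by-4 matrix, one row [theta_offset d a alpha] per joint
T = eye(4);
for k = 1:size(dh, 1)
    th = q(k) + dh(k, 1);                  % joint variable plus constant offset
    d  = dh(k, 2);  a = dh(k, 3);  al = dh(k, 4);
    A  = [cos(th), -sin(th)*cos(al),  sin(th)*sin(al), a*cos(th);
          sin(th),  cos(th)*cos(al), -cos(th)*sin(al), a*sin(th);
          0,        sin(al),          cos(al),         d;
          0,        0,                0,               1];     % Equation (2)
    T = T * A;                             % running product of Equation (4)
end
end
```

The position vector p of Equation (3) is then T(1:3,4), and the columns of T(1:3,1:3) give the orientation vectors n, s, and a.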
The FK model validation is a critical point in the proposed methodology, due to the data-driven nature of the IK model. The employed dataset needs to be consistent with reality to prevent misleading results. The FK validation is therefore carried out with a set of four different trajectories, shown in Figure 2a–d, with the base reference frame fixed at Zero-World [0; 0; 0; 0; 0; 0].
Furthermore, the robustness of the direct model is assessed by varying the main trajectory parameter on two different levels: reducing (low level) and expanding (high level) the initial path by 20%. This arbitrarily selected perturbation permits the evaluation of the direct model's effectiveness under multiple conditions, recreating a set of trajectories in the workspace. The corresponding paths are:
  • Circle in the Y-Z plane:
Perturbed parameter: radius $r_{circle}$
$$x = 0, \qquad y = r_{circle}\cos(2\pi t), \qquad z = r_{circle}\sin(2\pi t) \tag{5}$$
$$r_{circle} = 60.0\ \text{mm}, \qquad t = 0,\ldots,57\ \text{s} \tag{6}$$
  • Spiral X-Y along Z direction:
Perturbed parameter: radius $r_{spiral}$
$$x = r_{spiral}\cos(2\pi t), \qquad y = r_{spiral}\sin(2\pi t), \qquad z = c\, t \tag{7}$$
$$r_{spiral} = 60.0\ \text{mm}, \qquad t = (0,\ldots,57)\cdot 2\ \text{s}, \qquad c = \text{pitch} = 0.7\ \text{mm/s} \tag{8}$$
  • S-shape in the X-Y plane:
Perturbed parameter: radius $r_{S\text{-}shape}$
$$x = r_{S\text{-}shape}\sin(2\pi t_1), \qquad y = r_{S\text{-}shape}\cos(2\pi t_2), \qquad z = 0 \tag{9}$$
$$r_{S\text{-}shape} = 60.0\ \text{mm}, \qquad t_1 = 0,\ldots,28\ \text{s}, \qquad t_2 = 29,\ldots,57\ \text{s} \tag{10}$$
  • Rectangle in the X-Y plane:
Perturbed parameter: length $b$
$$x = h = 150.0\ \text{mm}, \qquad y = \pm b/2 = \pm 200.0\ \text{mm}, \qquad z = 0\ \text{mm} \tag{11}$$
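As an example of how discrete target points are generated from these parametric definitions, the circular path of Equations (5)–(6) can be sampled as follows; the number of samples N and the normalized parameter u, used here in place of the time variable, are assumptions, since the original sampling step is not reported.

```matlab
% Discrete target points for the circular validation path in the Y-Z plane.
N = 200;                                          % samples per path (assumption)
u = linspace(0, 1, N);                            % normalized path parameter
r = 60.0;                                         % mm; scaled by -20% / +20% for the Low / High levels
P = [zeros(1, N); r*cos(2*pi*u); r*sin(2*pi*u)];  % 3-by-N matrix of [x; y; z] targets
```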
Considering the robot workspace, the Cartesian error should be evaluated in terms of Euclidean distance since it is more representative than the single-axis evaluation. Therefore, the MEE index is employed to assess the FK performances, as a measure of the mean 3D distance error. The MEE is obtained as defined in Equation (12):
$$MEE = \frac{1}{n}\sum_{i=1}^{n}\sqrt{E_{x,i}^2 + E_{y,i}^2 + E_{z,i}^2} \tag{12}$$
where $E_x$, $E_y$, and $E_z$ describe the errors along the X-, Y-, and Z-axes, respectively, and n is the number of samples.
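In code, Equation (12) reduces to a few lines; the matrices below are illustrative n-by-3 arrays of reference and modeled TCP positions.

```matlab
P_ref = [100.0 0.0 50.0; 100.0 10.0 50.0];        % reference TCP positions [mm] (illustrative)
P_mod = [100.2 0.1 49.9;  99.8 10.2 50.1];        % FK-model positions [mm] (illustrative)
E   = P_ref - P_mod;                              % per-sample errors [Ex Ey Ez]
MEE = mean(sqrt(sum(E.^2, 2)));                   % Equation (12): mean 3D distance error
```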
As shown in the flow diagram presented in Figure 1, if the obtained MEE is greater than the chosen threshold, the program requests the manual input of the D-H parameters (or the FK equations) if the automatic procedure has been initially selected, or their correction in case of the manual path setting.
Once the performance is validated, the FK equations are used to produce the robot dataset, considering the unique mapping between the input joint values and the resulting Cartesian coordinates. A pseudo-random generator is used to obtain an $\mathbb{R}^{n\times m}$ matrix within the minimum and maximum joint values, where n is the number of DOFs of the selected robot and m is the number of joint-space data points (10,000 elements by default). In the IK methodology, these data correspond to the output (target) dataset for the ML model; the input dataset is obtained by computing the Cartesian coordinates, position and orientation, through the FK equations.
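A sketch of this step is given below; forwardKinematics is the FK routine sketched earlier in this section, dh stands for the D-H table of the selected robot, the joint limits are placeholders, and the use of Euler angles (rotm2eul, Robotics System Toolbox) for the orientation part of the input vector is an assumption.

```matlab
% Pseudo-random dataset generation: m joint configurations within the limits
% are mapped through the FK to build the ML input (pose) / output (joint) pair.
m    = 10000;                                              % default dataset size
qMin = [-pi/2, 0, -pi/2];   qMax = [pi/2, pi/2, 5*pi/18];  % placeholder joint limits [rad]
n    = numel(qMin);                                        % number of modeled DOFs
Q    = qMin + rand(m, n) .* (qMax - qMin);                 % output dataset: joint values [rad]
S    = zeros(m, 6);                                        % input dataset: [x y z, orientation]
for i = 1:m
    T = forwardKinematics(Q(i, :)', dh);                   % dh: D-H table of the selected robot
    S(i, 1:3) = T(1:3, 4)';                                % position p
    S(i, 4:6) = rotm2eul(T(1:3, 1:3));                     % orientation (Euler angles, assumption)
end
```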

2.2. Model Architecture Definition

The second step of the sequential methodology (Figure 1) is the "network architecture definition". The procedure begins with the dataset division: 70% for training, 15% for testing, and 15% for validation. This is a conventional approach for ML applications [39,40,41]. The validation dataset is generally employed for feature selection or model fine-tuning, and it only marginally influences the ML model. The test dataset is considered an unbiased dataset for the performance evaluation. An alternative method that avoids the three-part partition is k-fold cross-validation, which can be used to tune the model hyperparameters [42,43,44].
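With MATLAB's shallow-network workflow, the 70/15/15 partition can be imposed directly on the network object, as in the following sketch (the layer sizes here are placeholders):

```matlab
net = fitnet([30 30 30], 'trainscg');      % placeholder hidden layers, SCG training function
net.divideFcn              = 'dividerand'; % random division of the samples
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net.performFcn             = 'mse';        % loss function used in this work
```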
The network structure is determined by the number of hidden layers, the number of neurons per hidden layer, and the activation function for each layer. The depth of the network determines the capability of the architecture to model non-linearities and complex structures. The first two parameters are tuned with the GA, a well-established stochastic search optimization algorithm inspired by evolutionary theory, while the activation function is fixed to the tan-sigmoid (tansig) to reduce the required application time. The number of hidden layers is limited to between 3 and 7, and the number of neurons per layer is defined in the range from 10 to 80, with a resolution of 5 neurons. The GA iteration score is determined by Equation (13), based on training and validation performance measurements obtained by Equations (14)–(16):
$$y = \alpha \cdot NRMSE_v + \beta \cdot NRMSE_{tr} + MAXE_v \cdot \left(1 - R_v^2\right) \tag{13}$$

$$NRMSE = \frac{\sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(y(i) - \tilde{y}(i)\right)^2}}{y_{MAX} - y_{MIN}} \tag{14}$$

$$MAXE = \max_i\left|y(i) - \tilde{y}(i)\right| \tag{15}$$

$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y(i) - \tilde{y}(i)\right)^2}{\sum_{i=1}^{n}\left(y(i) - \frac{1}{n}\sum_{j=1}^{n} y(j)\right)^2} \tag{16}$$
where $y(i)$ is the target value; $y_{MAX}$ and $y_{MIN}$ are the maximum and minimum of the target element; $\tilde{y}(i)$ is the predicted value; n is the number of samples; NRMSE is the normalized root mean square error; MAXE is the maximum absolute error; and $R^2$ is the coefficient of determination. The subscripts v and tr correspond to the validation and training datasets, respectively. The α and β coefficients need to be tuned by the user to apply the optimization algorithm. Considering the sequential network deployment, the error coefficients and the final score are expressed in terms of the joint-space error [rad].
The cost function defined for this method is based on three performance coefficients: NRMSE, MAXE, and $R^2$. The first is a normalized coefficient correlated with the error range and independent of the error magnitude. This is a critical point, since the robot dimensions and the corresponding workspace influence the non-normalized error.
Furthermore, the maximum error and the coefficient of determination are employed together, as shown in Equation (13). In particular, the maximum error, corresponding to the maximum Euclidean distance, is used as an indicator of the error distribution. Its value is typically higher than the normalized coefficient; thus, a reduction factor based on the $R^2$ value, the coefficient associated with the explained variability, is included. Since performance measurements are available for both the training and the validation datasets, the constant coefficients α and β are used to weight their contributions.
The validation dataset is crucial for evaluating the model performance with respect to the training dataset. For this reason, this application sets α equal to 0.8 and β equal to 0.2.
Finally, during the GA optimization iterations, the validation $R^2$ threshold (70%) is verified, assigning high cost-function values to non-compliant models, depicted as "ANN design Rsq < 70%" in Figure 1.
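A possible encoding of the architecture search and of the score in Equations (13)–(16) is sketched below; the chromosome layout, the penalty value for rejected models, and the training calls are assumptions consistent with the text (each evaluation trains a candidate network, which explains the offline cost reported in Section 3.2).

```matlab
function y = gaScore(x, S, Q, alpha, beta)
% GASCORE  Cost of one candidate architecture, Equations (13)-(16).
%   x : integer chromosome [nLayers, g1 ... g7]; the first nLayers genes,
%       multiplied by 5, give the hidden-neuron counts (10-80, step 5).
%   S : inputs (features x samples);  Q : single-joint targets (1 x samples) [rad]
hidden = x(2:1 + x(1)) * 5;
net = fitnet(hidden, 'trainscg');
net.performFcn = 'mse';
[net, tr] = train(net, S, Q);
E = Q - net(S);                                          % joint-space errors [rad]
rngQ  = max(Q) - min(Q);
nrmse = @(idx) sqrt(mean(E(idx).^2)) / rngQ;             % Equation (14)
Ev = E(tr.valInd);   Qv = Q(tr.valInd);
maxe = max(abs(Ev));                                     % Equation (15)
r2   = 1 - sum(Ev.^2) / sum((Qv - mean(Qv)).^2);         % Equation (16)
if r2 < 0.70, y = 1e6; return; end                       % "ANN design Rsq < 70%" rejection
y = alpha*nrmse(tr.valInd) + beta*nrmse(tr.trainInd) + maxe*(1 - r2);   % Equation (13)
end
```

The search over depth and width then becomes a single ga call, for instance:

```matlab
lb = [3, 2*ones(1, 7)];   ub = [7, 16*ones(1, 7)];       % 3-7 layers, 10-80 neurons (gene x 5)
opts = optimoptions('ga', 'UseParallel', true);          % Matlab parallel computation
best = ga(@(x) gaScore(x, S, Q, 0.8, 0.2), 8, ...
          [], [], [], [], lb, ub, [], 1:8, opts);        % all 8 genes are integer variables
```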
Two different approaches are examined for the model comparison and validation: the traditional global approach, where the Cartesian coordinates are given as input and all the joints are computed simultaneously, and the sequential approach, derived from the single-joint computation. The sequential concept mirrors the way the FK is obtained through the ordered matrix multiplication, from the base to the end-effector. Similarly, the deployed networks learn the IK of one joint at a time: the first network takes the Cartesian coordinates and orientation as input and outputs the final joint value; each following network has the same input, augmented with the previously predicted joint values. This approach has two main advantages: (1) it increases the information provided to the ML model; (2) it allows a parallel training scheme, since each joint model can be trained and optimized separately. Furthermore, the modularity of the presented procedure allows the model to be evaluated for each joint, implementing multiple models for the same robot and considering the complete mechanism as a set of simpler elements.
The GA is used to optimize each joint model of the sequential scheme. A further variability term is incorporated in the model deployment to change the direction of the sequential approach. In fact, the system can find the optimized structure starting from the tool center point (TCP), as in the FK computation, meaning that the first ANN computes the last joint and its prediction is used as an input for the following joints up to the base; alternatively, the computation can start from the base and proceed in the opposite direction. This implementation increases the required optimization time; however, it provides a further degree of freedom in the model deployment, reducing the resulting error. An example of the sequential approach from the TCP to the base is shown in Figure 3.
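The sequential scheme from the TCP to the base can be sketched as follows for the three modeled joints J1–J3 (J3 being the joint nearest to the wrist); the layer sizes are placeholders, S and Q are the pose/joint datasets of Section 2.1, and the choice of appending the known joint values during training (so that the per-joint models can be trained in parallel) while chaining the predictions at inference time is an interpretation of the text.

```matlab
% Sequential IK, TCP-to-base direction: each network receives the 6-D pose
% plus the joints already resolved along the chain.
order  = [3 2 1];                                % J3 first, then J2, finally J1
layers = {[40 40 20], [40 40 20], [40 40 20]};   % placeholder hidden-layer sizes
X = S';                                          % 6 x m pose inputs
nets = cell(1, numel(order));
for s = 1:numel(order)
    j = order(s);
    net = fitnet(layers{s}, 'trainscg');
    net.performFcn = 'mse';
    nets{s} = train(net, X, Q(:, j)');           % target: single joint value [rad]
    X = [X; Q(:, j)'];                           % append the known joint as an extra input
end

% Inference for a query pose: chain the predictions instead.
x = S(1, :)';                                    % example query pose (first dataset sample)
qhat = zeros(3, 1);
for s = 1:numel(order)
    net_s = nets{s};
    qhat(order(s)) = net_s(x);                   % predict joint order(s)
    x = [x; qhat(order(s))];                     % feed the prediction to the next network
end
```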

2.3. Model Validation

The final step of the proposed methodology is the “network architecture validation” for the global and sequential configurations, providing the IK solution with the associated error. The prediction accuracy for the network is measured using the mean square error (MSE), mean absolute error (MAE), root mean squared error (RMSE), and the maximum error (MAXE). Furthermore, the trajectories employed for the FK validations, presented in Figure 2a–d, are used to validate the IK model performance.
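For reference, the four indicators reduce to one-line expressions over the per-sample error vector (joint-space [rad] or Cartesian [mm]); e below is an illustrative error vector.

```matlab
e    = [0.2, -0.5, 0.1, 0.4];   % illustrative per-sample errors
MAE  = mean(abs(e));
MSE  = mean(e.^2);
RMSE = sqrt(MSE);
MAXE = max(abs(e));
```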

3. Simulation Validation

In order to obtain a preliminary evaluation of the IK procedure, a set of simulations was performed considering a 6-DOF robot (IRB140) and using the ABB RobotStudio software. Figure 4a,b shows the robot structure and the D-H axis representation. The proposed methodology is executed to obtain and validate the IK model of the selected robot.

3.1. Dataset Generation

In the virtual environment, the robot executes a series of controlled-position movements defined in the joint space. The sequence of motions is performed for all the joints, from the initial joint position to the end position, representing the two allowed mechanical limit positions. The pose information is acquired during the virtual robot motion with a constant number of elements (400) for each recorded trajectory. As defined, the collected data are employed to produce the D-H parameters, listed in Table 2, with the automatic procedure described in [38].
Based on the obtained FK model, the validation trajectories are executed, and the MEE performance is computed for the IRB140 robot. The impact of the trajectory parameter variation on two different levels (Low and High) is evaluated in the corresponding virtual simulation, as reported in Table 3. The rectangle contour has the highest deviation, with a final error (MEE) lower than 0.02 µm, when comparing the simulation in ABB RobotStudio and the D-H model. This effect is attributed to the software's capability to simulate the low-angle slope trajectory, and it is reduced in real applications, as demonstrated in the experimental laboratory tests (Section 4).
Based on the MEE performances of the IRB140 with the four trajectories and the corresponding disturbances, the FK model is validated, and the 10,000-point dataset for the IK modeling is produced.

3.2. Network Architecture Definition

The 3D workspace of the IRB140 robot is shown in Figure 5, where the black scatter pattern is the J2 space, the red dots represent the J3 workspace, and the blue region describes the J4 space. It is noted that the complete workspace is reduced, since the J1 axis is constrained within the frontal region, in the [−π/2; π/2] range from the home pose.
Following the proposed methodology, the ANN architectures are defined with the GA, using Matlab parallel computation to reduce the computational time. The GA optimization includes a set of hyperparameters for the ANN creation and training: "scaled conjugate gradient" as the training function and the MSE as the performance coefficient. Table 4 presents the main inputs for the generated models. In particular, the adopted technique optimizes the mixed network, obtaining the best configuration of the number of layers and neurons for each joint.
The mentioned settings show an offline computational time close to 70 h in comparison with the global approach that requires ~30 h. The computational time influence is neglected in the present work due to its offline nature. The obtained network architecture from the robot base frame to the TCP is expressed as:
J1: {65, 65, 45, 45, 15}    J2: {65, 65, 60, 55, 15}    J3: {35, 50, 70, 25, 35, 25}
The network architecture from the robot TCP to the base frame is defined as:
J1: {70, 65, 65, 35, 40, 20}    J2: {75, 75, 10}    J3: {45, 50, 55, 55, 35, 50, 30}
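As an example, the J1 network of the base-to-TCP scheme listed first above can be instantiated with the stated settings (scaled conjugate gradient, MSE loss, tansig hidden layers); S and Q denote the pose inputs and joint targets of the generated dataset, and the explicit tansig assignment is shown only for clarity, since it is MATLAB's default for fitnet.

```matlab
net = fitnet([65 65 45 45 15], 'trainscg');   % GA-selected hidden layers for J1 (base-to-TCP)
net.performFcn = 'mse';                       % loss function
for k = 1:numel(net.layers) - 1               % tan-sigmoid activation on the hidden layers
    net.layers{k}.transferFcn = 'tansig';
end
net = train(net, S', Q(:, 1)');               % pose inputs and J1 targets [rad]
```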

3.3. Model Validation

The trajectories defined in Equations (5)–(11) and used in the FK validation step are employed to evaluate the performance of the IK model obtained with the optimized structure. Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 show the results of the sequential IK procedure prediction for the simulated trajectories. To assess the model, the trajectories are selected in critical workspace zones, such as the extreme reaching regions of the robot, where small errors in the joint space may create significant errors in the TCP positioning. Although this effect is particularly relevant, the model deployment keeps the MAE (mean absolute error) in 3D space lower than 0.654 mm.
Table 5 summarizes the MAE, MSE, RMSE, MAXE, and R2 obtained comparing simulations and the developed models using the global approach and the sequential method. R2 coefficients are greater than 99.99% for all the deployed models, confirming the goodness of the correlation. Comparing the adopted methods, the results highlight that the global approach (7 hidden layers) has a higher error (MAE) than the sequential method by 42.7–56.7%. The sequential method has been obtained using [5; 5; 6] layers and [5; 3; 7] layers, starting from the robot TCP or from the base, respectively.
Moreover, the maximum Euclidean error is close to 1.3 mm using the global approach, while the sequential approach shows lower error (MAXE) values, reaching a minimum of 0.634 mm for the S-shape path. Considering the spiral trajectory, the sequential network starting from the base has a MAXE equal to 0.8 mm, which is 35.3% lower than the global approach. Similar results are obtained for the rectangle trajectory. These deviations are calculated in the extreme reaching positions of the robot, where small errors in the joint space are amplified.
Table 6 lists the main error indicators in the joint space related to the sequential method datasets (train, validation, test), underlining the capability to model data points throughout the robot workspace.
Finally, the MSE and RMSE are compared to the literature applications [22,23,24,27]. The literature MSE results are close to 1.0 × 10−4 rad², while the proposed sequential methodology reduces the mean squared error to 1.73 × 10−6 rad², starting from the robot base, and to 1.95 × 10−6 rad², starting from the robot TCP. In the same way, the obtained RMSE results are promising, since the maximum is lower than 1.60 × 10−2 rad.

4. Experimental Campaign and Discussion

In light of the simulated case study results, an experimental campaign was executed. The laboratory validation follows the proposed procedure (Figure 1), using a 6-DOF robot (FANUC LR Mate 200iC) to test the model robustness on a robot similar to the IRB140. Figure 11 shows the robot structure and the D-H representation. The axes definition is based on the joint locations, and the corresponding directions are obtained using the D-H approach. For each movement, a PC-based acquisition system collects the actuator positions and torques with a 10.0 ms sampling period, comparing the dataset quality with the robot internal information, accessible via the teach pendant.

4.1. Dataset Generation

This study is limited to J1–J3 due to their significant influence on the final TCP error. The joint-jog rules are defined as follows: joint motion with a range greater than 45% of the total mechanical feasibility, TCP speed lower than 10 mm/s, and a number of elements greater than 400. The D-H code estimates the unknown D-H parameters, as shown in Table 7. The experimental tests are performed in laboratory conditions, assuming a constant temperature (20 °C) and calibrating the robot before the tests.
Table 3 lists the estimated MEE obtained from the FANUC LR Mate 200-iC, validating the FK model. Consequently, the pseudo-random generator creates 10,000 poses within the range of motion of the robot axes.

4.2. Network Architecture Definition

The constrained 3D workspace of the robot is shown in Figure 12, where the blue three-sided mesh region describes the J4 space. The corresponding volume is obtained by limiting the J1 axis within a [−π/2; π/2] range from home-pose, J2 axis within [0; π/2], and J3 within [−π/2; 5π/18]. Subsequently, data is processed and analyzed using Matlab software.
The following step is the optimization using the GA as presented in Section 2, providing the best network for the developed sequential scheme based on the cost function defined in Equation (13). The PC used for the model computation has an Intel Core i5-8500 processor with 16GB RAM.
The GA technique is applied with the following parameters for the ANN generation: the "scaled conjugate gradient" is used as the training function; the loss function corresponds to the "mean squared error"; and the calculations are executed with Matlab parallel computation to reduce the required optimization time. Table 8 presents the main inputs for the generated models. In particular, the adopted technique optimizes the mixed network, providing the best configuration of the number of layers and neurons for each joint.
The final network architecture of the studied robot is expressed from the base to the TCP positioning as follows:
J1: {50, 45, 50, 30, 45, 40, 15}    J2: {55, 60, 35, 35}    J3: {50, 40, 10}
Conversely, considering the network architecture from the TCP reference to the base frame, the ANN structure is:
J1: {50, 45, 55, 15}    J2: {35, 45, 70, 35, 65, 25, 20}    J3: {60, 60, 30, 15}

4.3. Model Validation

Finally, the optimized models obtained from the three configurations (sequential from the base, sequential from the TCP, and global) are compared. The validation trajectories are defined within the generated workspace, as shown in Figure 13a–d, each of them with more than 100 discrete target points. Furthermore, Figure 14, Figure 15, Figure 16 and Figure 17 present the error comparison between the laboratory robot (FANUC LR Mate 200iC) and the IK model for the X-, Y-, and Z-axes.
Comparing the different models, the circle trajectory (Y-Z plane) shows that the maximum error is equal to 1.113 mm for the global approach, 0.675 mm for the sequential method starting from the TCP, and 0.777 mm for the sequential method starting from the base frame. The developed trajectories have different numbers of elements, such as 150 in the S-shape and 320 in the spiral. Nevertheless, it is noted that the number of points does not influence the robustness of the final model capabilities; for example, the 150-point trajectory has a mean absolute Euclidean error, expressed as a vector [global, sequential from TCP, sequential from base], equal to [0.73 mm, 0.60 mm, 0.56 mm], which is comparable to the results obtained with the 320-point trajectory, [0.87 mm, 0.48 mm, 0.69 mm].
The study also shows a residual MAE of 0.370–0.699 mm in following trajectories with 120.0–200.0 mm features, as shown in Table 9. The relative MAE is close to 0.582% without any knowledge of the robot kinematics, equipment calibration, or set-up. The required procedure is performed offline to preserve the robot's availability. The IK complexity has thus been addressed and the approach proved through the numerical and laboratory campaigns.
Table 10 states the training, testing, and validation errors in terms of the joint prediction for the different models (global approach and sequential method). The results show the capability of the ANN structure to model any path or data point within the workspace.
The trajectories are studied and considered as a set of individual, independent points. Thus, the performance index results obtained for the complete dataset reflect the IK modeling for any trajectory within the workspace. The difference between the results obtained for the three datasets (train, test, validation) is studied to detect possible overfitting. For this application, similar errors confirm that no ML overfitting is present.
As stated, the literature applications [22,23,24,27] show results with MSE close to 1.0 × 10−4 rad². Using the sequential method, the proposed study reduces the MSE to 1.90 × 10−6 rad² and 2.78 × 10−6 rad² for the two approaches (from the base and from the TCP), respectively. In the same way, the maximum error obtained modeling from the robot base frame to the TCP reference is close to 2.6 × 10−2 rad, demonstrating the robustness of the methodology.

5. Conclusions

This paper proposes a procedure to generate an ML model using a sequential scheme for an accurate IK formulation. Two anthropomorphic robots are studied, evaluating the spatial trajectory error obtained from a data-driven model in the simulated (ABB IRB140) and the laboratory experimental (FANUC LR Mate 200iC) environments. A 3-step procedure is proposed: (i) IK boundary definition and pre-processing, (ii) network architecture definition based on the GA, and (iii) network validation. The models show promising results in the validation of the trajectories, executed in critical workspace positions of the robot. The validity of the IK methodology is confirmed by evaluating the errors in the joint space and the trajectories in Cartesian space.
In the simulation, the sequential procedure shows a reduction of the mean squared error index of 42.7–56.7% compared to the global scheme. In particular, the most effective approach is obtained by modeling the robot from the base to the TCP, with a final mean squared error equal to 1.73 × 10−6 rad² in the joint space. Comparable results are collected from the experimental campaign, with a MAE lower than 0.60 mm using the sequential method. As in the simulated environment, the global approach exhibits a greater error, close to 0.72 mm.
Compared with other studies in the literature [23,31,32,33,34,35], the proposed methodology offers several contributions. The first is the use of simulated data points, based on FK equations obtained through an automatic procedure with a reduced experimental dataset, to train the ML model. Usually, a significant computation of the FK model or of the D-H parameters is required for dataset creation [35]. The proposed approach allows the creation of large datasets without unnecessary experimental measurements, reducing the application time. Furthermore, the proposed automatic procedure does not require user time, since the data generation and the ANN configuration are performed by the deployed methodology. In addition, if the robot does not allow continuous data sampling for producing the corresponding D-H parameters, the user may manually implement the direct FK equations within the program.
A further advantage of this methodology is the inclusion of the genetic algorithm to define and optimize the network structure. This step permits the user to avoid any kind of ANN parameter tuning; however, the limits and discretization of the number of hidden layers and of the hidden neurons per layer have to be defined. This fine-tuning requirement may represent a limitation of the complete methodology, and it may directly influence the results obtained during the model deployment and the computational training time.
The methodology also provides robust modeling in dealing with the multiple IK solutions, compared to the global approach or the pure individual joints method [22,23,24,27,35]. In particular, the proposed case studies showed a significant reduction of the final error with respect to the global approach [35], validating the model capabilities.
Finally, the capability of the sequential scheme to set and define a given joint value allows a specific robot configuration to be chosen, especially when the initial joints are selected. This point will be explored in future applications focused on trajectory optimization schemes, minimizing the joint movement and the redundant solutions. It is a significant advantage compared to global approaches [22,23,24], in which the individual joints are neglected, and to single-joint modeling [35], which considers each joint as an independent element.
In this scenario, the future advances of this study will focus on dealing with highly redundant schemes and non-industrial robots, such as non-rigid robots, which are complex to model analytically. In the same way, the methodology may be improved to handle configuration limitations due to workspace constraints from external objects, to reduce the motion, or to include power-consumption optimization schemes.

Author Contributions

Conceptualization, F.A., N.P. and F.L.T.; methodology, F.A., N.P. and F.L.T.; validation, F.A., N.P. and F.L.T.; writing—original draft, F.A., N.P. and F.L.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Manocha, D.; Canny, J.F. Efficient Inverse Kinematics for General 6R Manipulators. IEEE Trans. Robot. Autom. 1994, 10, 648–657. [Google Scholar] [CrossRef]
  2. Zaplana, I.; Basanez, L. A Novel Closed-Form Solution for the Inverse Kinematics of Redundant Manipulators through Workspace Analysis. Mech. Mach. Theory 2018, 121, 829–843. [Google Scholar] [CrossRef]
  3. Chiaverini, S.; Oriolo, G.; Maciejewski, A.A. Redundant Robots. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer: Cham, Switzerland, 2016. [Google Scholar]
  4. Dou, R.; Yu, S.; Li, W.; Chen, P.; Xia, P.; Zhai, F.; Yokoi, H.; Jiang, Y. Inverse Kinematics for a 7-DOF Humanoid Robotic Arm with Joint Limit and End Pose Coupling. Mech. Mach. Theory 2022, 169, 104637. [Google Scholar] [CrossRef]
  5. Shimizu, M.; Kakuya, H.; Yoon, W.K.; Kitagaki, K.; Kosuge, K. Analytical Inverse Kinematic Computation for 7-DOF Redundant Manipulators with Joint Limits and Its Application to Redundancy Resolution. IEEE Trans. Robot. 2008, 24, 1131–1142. [Google Scholar] [CrossRef]
  6. Gong, M.; Li, X.; Zhang, L. Analytical Inverse Kinematics and Self-Motion Application for 7-DOF Redundant Manipulator. IEEE Access 2019, 7, 18662–18674. [Google Scholar] [CrossRef]
  7. Fu, K.S.; Gonzalez, R.C.; Lee, C.S.G. Robotics: Control, Sensing, Vision, and Intelligence; McGraw-Hill: New York, NY, USA, 1987. [Google Scholar]
  8. Featherstone, R. Position and velocity transformation between robot end-effector coordinate and joint angle. Int. J. Robot. Res. 1983, 2, 33–45. [Google Scholar] [CrossRef]
  9. Deng, H.; Xie, C. An Improved Particle Swarm Optimization Algorithm for Inverse Kinematics Solution of Multi-DOF Serial Robotic Manipulators. Soft Comput. 2021, 25, 13695–13708. [Google Scholar] [CrossRef]
  10. Rokbani, N.; Slim, M.; Alimi, A.M. The Beta Distributed PSO, β-PSO, with Application to Inverse Kinematics. In Proceedings of the 2021 National Computing Colleges Conference (NCCC), Taif, Saudi Arabia, 27–28 March 2021; pp. 1–6. [Google Scholar]
  11. Liu, Y.; Xi, J.; Bai, H.; Wang, Z.; Sun, L. A General Robot Inverse Kinematics Solution Method Based on Improved PSO Algorithm. IEEE Access 2021, 9, 32341–32350. [Google Scholar]
  12. Lee, C.-T.; Chang, J.-Y. A Workspace-Analysis-Based Genetic Algorithm for Solving Inverse Kinematics of a Multi-Fingered Anthropomorphic Hand. Appl. Sci. 2021, 11, 2668. [Google Scholar] [CrossRef]
  13. Zeng, Z.; Chen, Z.; Shu, G.; Chen, Q. Optimization of Analytical Inverse Kinematic Solution for Redundant Manipulators Using GA-PSO Algorithm. In Proceedings of the 2018 IEEE Industrial Cyber-Physical Systems (ICPS), St. Petersburg, Russia, 15–18 May 2018; pp. 446–451. [Google Scholar]
  14. Chiddarwar, S.S.; Babu, N.R. Comparison of RBF and MLP neural networks to solve inverse kinematic problem for 6R serial robot by a fusion approach. Eng. Appl. Artif. Intell. 2010, 23, 1083–1092. [Google Scholar] [CrossRef]
  15. Aggogeri, F.; Pellegrini, N.; Tagliani, F.L. Recent Advances on Machine Learning Applications in Machining Processes. Appl. Sci. 2021, 11, 8764. [Google Scholar] [CrossRef]
  16. AlAlaween, W.; Abueed, O.; Gharaibeh, B.; Alalawin, A.; Mahfouf, M.; Alsoussi, A.; Albashabsheh, N. The Development of a Radial Based Integrated Network for the Modelling of 3D Fused Deposition. Rapid Prototyp. J. 2022; ahead-of-print. [Google Scholar] [CrossRef]
  17. Qin, J.; Hu, F.; Liu, Y.; Witherell, P.; Wang, C.C.L.; Rosen, D.W.; Simpson, T.W.; Lu, Y.; Tang, Q. Research and Application of Machine Learning for Additive Manufacturing. Addit. Manuf. 2022, 52, 102691. [Google Scholar] [CrossRef]
  18. Khorasani, M.; Loy, J.; Ghasemi, A.H.; Sharabian, E.; Leary, M.; Mirafzal, H.; Cochrane, P.; Rolfe, B.; Gibson, I. A Review of Industry 4.0 and Additive Manufacturing Synergy. Rapid Prototyp. J. 2022, 28, 1462–1475. [Google Scholar] [CrossRef]
  19. Hu, Y.; Li, J.; Hong, M.; Ren, J.; Man, Y. Industrial Artificial Intelligence Based Energy Management System: Integrated Framework for Electricity Load Forecasting and Fault Prediction. Energy 2022, 244, 123195. [Google Scholar] [CrossRef]
  20. Li, W.; Huang, R.; Li, J.; Liao, Y.; Chen, Z.; He, G.; Yan, R.; Gryllias, K. A Perspective Survey on Deep Transfer Learning for Fault Diagnosis in Industrial Scenarios: Theories, Applications and Challenges. Mech. Syst. Signal Process. 2022, 167, 108487. [Google Scholar] [CrossRef]
  21. Cao, Q.; Zanni-Merk, C.; Samet, A.; Reich, C.; Beuvron, F.D.B.D.; Beckmann, A.; Giannetti, C. KSPMI: A Knowledge-Based System for Predictive Maintenance in Industry 4.0. Robot. Comput. Integr. Manuf. 2022, 74, 102281. [Google Scholar] [CrossRef]
  22. Duka, A.-V. Neural Network based Inverse Kinematics Solution for Trajectory Tracking of a Robotic Arm. Procedia Technol. 2014, 12, 20–27. [Google Scholar] [CrossRef]
  23. Habibkhah, S.; Mayorga, R.V. The computation of the inverse kinematics of a 3 DOF redundant manipulator via an ann approach and a virtual function. In Proceedings of the ICINCO 2020—the 17th International Conference on Informatics in Control, Automation and Robotics, (Icinco), Online, 7–9 July 2020; pp. 471–477. [Google Scholar]
  24. Varedi-Koulaei, S.M.; Mokhtari, M. Trajectory Tracking Solution of a Robotic Arm Based on Optimized ANN. In Proceedings of the 6th RSI International Conference on Robotics and Mechatronics, IcRoM, Tehran, Iran, 23–25 October 2018; pp. 76–81. [Google Scholar]
  25. Li, H.; Song, Z.; Jiang, Z.; Mo, Y.; Ni, W. Solving Inverse Kinematics of Industrial Robot Based on BP Neural Network. In Proceedings of the 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems, CYBER 2017, Honolulu, HI, USA, 31 July–4 August 2017; pp. 1167–1171. [Google Scholar]
  26. Narayan, J.; Singla, A. ANFIS based kinematic analysis of a 4-DOFs SCARA robot. In Proceedings of the 4th IEEE International Conference on Signal Processing, Computing and Control, ISPCC 2017, Honolulu, HI, USA, 31 July–4 August 2017; pp. 205–211. [Google Scholar]
  27. Shah, S.K.; Mishra, R.; Ray, L.S. Solution and validation of inverse kinematics using Deep artificial neural network. Mater. Today Proc. 2020, 26, 1250–1254. [Google Scholar] [CrossRef]
  28. Almusawi, A.R.J.; Dülger, L.C.; Kapucu, S. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242). Comput. Intell. Neurosci. 2016, 2016, 5720163. [Google Scholar] [CrossRef]
  29. Singh, A.; Pandey, P.; Nandi, G.C. Effectiveness of multi-gated sequence model for the learning of kinematics and dynamics of an industrial robot. Ind. Robot. Int. J. Robot. 2021, 48, 62–70. [Google Scholar] [CrossRef]
  30. Thomas, M.; Sanjeev, M.M.; Sudheer, A.; Joy, M.L. Comparative study of various machine learning algorithms and Denavit-Hartenberg approach for the inverse kinematic solutions in a 3-PPSS parallel manipulator. Ind. Robot. 2020, 47, 683–695. [Google Scholar] [CrossRef]
  31. Köker, R.; Çakar, T.; Sari, Y. A neural-network committee machine approach to the inverse kinematics problem solution of robotic manipulators. Eng. Comput. 2014, 30, 641–649. [Google Scholar] [CrossRef]
  32. Köker, R.; Çakar, T. A neuro-genetic-simulated annealing approach to the inverse kinematics solution of robots: A simulation based study. Eng. Comput. 2016, 32, 553–565. [Google Scholar] [CrossRef]
  33. Köker, R. A neuro-simulated annealing approach to the inverse kinematics solution of redundant robotic manipulators. Eng. Comput. 2013, 29, 507–515. [Google Scholar] [CrossRef]
  34. Zhou, Z.; Guo, H.; Wang, Y.; Zhu, Z.; Wu, J.; Liu, X. Inverse kinematics solution for robotic manipulator based on extreme learning machine and sequential mutation genetic algorithm. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418792992. [Google Scholar] [CrossRef]
  35. Demby's, J.; Gao, Y.; DeSouza, G.N. A Study on Solving the Inverse Kinematics of Serial Robots using Artificial Neural Network and Fuzzy Neural Network. In Proceedings of the IEEE International Conference on Fuzzy Systems, New Orleans, LA, USA, 23–26 June 2019. [Google Scholar] [CrossRef]
  36. Aggogeri, F.; Pellegrini, N.; Taesi, C.; Tagliani, F.L. Inverse kinematic solver based on machine learning sequential procedure for robotic applications. In Proceedings of the 2021 International Symposium on Intelligent Robotics and Systems (ISoIRS 2021), Online, 7–9 November 2022; Volume 2234. [Google Scholar] [CrossRef]
  37. Zhang, T.; Song, Y.; Wu, H.; Wang, Q. A novel method to identify D-H parameters of the rigid serial-link robot based on a geometry model. Ind. Robot. 2021, 48, 157–167. [Google Scholar] [CrossRef]
  38. Faria, C.; Vilaca, J.L.; Monteiro, S.; Erlhagen, W.; Bicho, E. Automatic Denavit-Hartenberg Parameter Identification for Serial Manipulators. In Proceedings of the IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal, 14–17 October 2019; Volume 2019, pp. 610–617. [Google Scholar]
  39. Reinhart, R.F.; Shareef, Z.; Steil, J.J. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control. Sensors 2017, 17, 311. [Google Scholar] [CrossRef]
  40. Cursi, F.; Mylonas, G.P.; Kormushev, P. Adaptive Kinematic Modelling for Multiobjective Control of a Redundant Surgical Robotic Tool. Robotics 2020, 9, 68. [Google Scholar] [CrossRef]
  41. Csiszar, A.; Eilers, J.; Verl, A. On solving the inverse kinematics problem using neural networks. In Proceedings of the 2017 24th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Auckland, New Zealand, 21–23 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
  42. Alonso-Martín, F.; Gamboa-Montero, J.J.; Castillo, J.C.; Castro-González, Á.; Salichs, M. Detecting and Classifying Human Touches in a Social Robot Through Acoustic Sensing and Machine Learning. Sensors 2017, 17, 1138. [Google Scholar] [CrossRef]
  43. Šegota, S.B.; Anđelić, N.; Mrzljak, V.; Lorencin, I.; Kuric, I.; Car, Z. Utilization of Multilayer Perceptron for Determining the Inverse Kinematics of an Industrial Robotic Manipulator. Int. J. Adv. Robot. Syst. 2021, 18, 172988142092528. [Google Scholar] [CrossRef]
  44. Baressi Šegota, S.; Anđelić, N.; Šercer, M.; Meštrić, H. Dynamics Modeling of Industrial Robotic Manipulators: A Machine Learning Approach Based on Synthetic Data. Mathematics 2022, 10, 1174. [Google Scholar] [CrossRef]
Figure 1. Flow diagram of the IK procedure.
Figure 2. Implemented validation trajectories: Circle in the Y-Z plane (a), Spiral X-Y along Z direction (b), Rectangle in the X-Y plane (c), S-shape in the X-Y plane (d).
Figure 3. The sequential approach from the robot TCP to the base.
Figure 4. The simulated case study: IRB140 robot (a) and D-H representation (b).
Figure 5. Workspace for the IRB140, J4—blue; J3—red; J2—black.
Figure 6. Robot (blue) vs. IK procedure (red) for the implemented trajectories: (a) circle in the Y-Z plane; (b) spiral X-Y along Z-axis; (c) rectangle in the X-Y plane (d) S-shape in the X-Y plane.
Figure 7. Circle in the Y-Z plane: X-, Y-, Z- axes Virtual IRB140 vs. IK procedure.
Figure 8. Spiral X-Y along Z-direction: X-, Y-, Z- axes Virtual IRB140 vs. IK procedure.
Figure 9. Rectangle in the X-Y plane: X-, Y-, Z- axes Virtual IRB140 vs. IK procedure.
Figure 10. S-shape: X-, Y-, Z- axes Virtual IRB140 vs. IK procedure.
Figure 11. (a) FANUC LRMate200-iC and (b) D-H representation.
Figure 12. Constrained workspace for the FANUC LR Mate 200iC.
Figure 13. Robot (blue) vs. IK procedure (red) for the implemented trajectories: (a) circle in the Y-Z plane; (b) spiral X-Y along Z-axis; (c) rectangle in the X-Y plane (d) S-shape in the X-Y plane.
Figure 14. Circle in the Y-Z plane: X-, Y-, Z-axes Robot vs. IK procedure.
Figure 15. Spiral along Z-axis: X-, Y-, Z-axes Robot vs. IK procedure.
Figure 16. Rectangle in the X-Y plane: X-, Y-, Z-axes Robot vs. IK procedure.
Figure 17. S-shape in the X-Y plane: X-, Y-, Z-axes Robot vs. IK procedure.
Table 1. Input/Output IK procedure.

IK Boundary Definition and Pre-Processing
Input:
- Select the robot
- Joint geometrical dimensions/position
- Use 70-15-15 technique to identify the training-testing-validation portion
Output:
- Estimate D-H parameters
- FK equations, based on D-H parameters, validated with the mean Euclidean error index

Network Architecture Definition
Input:
- Dataset for the defined D-H
- Define the number of joints
- Check algorithm constraints
- Activation functions
- Feature selection and filtering
Output:
- Hidden layers (HL)
- Hidden neurons (HN)
- Quantify model robustness
- Comparison between methods, dataset training, and validation

Network Architecture Validation
Input:
- Appraisal of the testing dataset
- Select optimization leverages and ranges
- ANN optimized architecture
- Select adequate deployment duration
Output:
- Algorithm performance
- Estimate feature prediction error
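As a concrete illustration of the 70-15-15 splitting step listed in Table 1, the sketch below shuffles the FK-generated samples and partitions them into training, testing, and validation subsets. It is an illustrative Python/NumPy reconstruction, not the authors' implementation; the array names `poses` and `joints` are placeholders.

```python
import numpy as np

def split_dataset(poses, joints, seed=0):
    """Shuffle the FK-generated samples and split them 70-15-15 into
    training, testing, and validation portions (cf. Table 1)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(poses))
    n_train = int(0.70 * len(idx))             # first 70%: training
    n_test = n_train + int(0.15 * len(idx))    # next 15%: testing; remainder: validation
    splits = (idx[:n_train], idx[n_train:n_test], idx[n_test:])
    return [(poses[s], joints[s]) for s in splits]
```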
Table 2. D-H parameters for the IRB140.

IRB140
Joint | θi [rad] | di [mm] | ai [mm] | αi [rad]
1 | 0.0 | 3.52 × 10² | 7.00 × 10¹ | π/2
2 | π/2 | 2.18 × 10⁻⁸ | 3.60 × 10² | 0.0
3 | 0.0 | 2.95 × 10⁻⁷ | 8.94 × 10⁻⁷ | π/2
4 | 0.0 | 3.80 × 10² | −3.51 × 10⁻⁶ | −π/2
5 | 0.0 | −4.50 × 10⁻⁷ | 2.34 × 10⁻⁶ | π/2
6 | 0.0 | 6.50 × 10¹ | −4.45 × 10⁻⁷ | 0.0
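The FK equations needed to generate the dataset follow directly from the D-H rows of Table 2. The sketch below is a minimal Python/NumPy implementation of the classic D-H transform chain; it is an illustrative assumption of how the table can be used (not the authors' code), with the numerically negligible entries of the order of 10⁻⁶–10⁻⁸ mm rounded to zero.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of a single link (classic D-H convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(q, dh_rows):
    """Chain the link transforms; q holds the joint variables added to theta_i."""
    T = np.eye(4)
    for qi, (theta, d, a, alpha) in zip(q, dh_rows):
        T = T @ dh_transform(theta + qi, d, a, alpha)
    return T  # rotation in T[:3, :3], TCP position in T[:3, 3] (mm)

# D-H rows (theta_i, d_i [mm], a_i [mm], alpha_i [rad]) from Table 2.
IRB140_DH = [
    (0.0,       352.0, 70.0,  np.pi / 2),
    (np.pi / 2,   0.0, 360.0, 0.0),
    (0.0,         0.0, 0.0,   np.pi / 2),
    (0.0,       380.0, 0.0,  -np.pi / 2),
    (0.0,         0.0, 0.0,   np.pi / 2),
    (0.0,        65.0, 0.0,   0.0),
]

# Example: TCP position at the home configuration (all joint variables zero).
print(forward_kinematics(np.zeros(6), IRB140_DH)[:3, 3])
```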
Table 3. FK error (MEE) in perturbed conditions.

Robot | Case | MEE [µm] | Low | High
IRB140 | Circle | 0.015 | +9.0% | +2.0%
IRB140 | Spiral | 0.017 | +1.0% | −8.0%
IRB140 | S-Shape | 0.014 | −2.0% | −5.0%
IRB140 | Rectangle | 0.013 | +7.0% | +12.0%
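Table 3 reports the mean Euclidean error (MEE) of the FK model under perturbed conditions. For reference, a minimal sketch of how such an index can be computed between two N × 3 Cartesian point sets is given below, assuming the MEE is the mean of the pointwise Euclidean distances (an illustrative definition, not necessarily the exact one used by the authors).

```python
import numpy as np

def mean_euclidean_error(p_ref, p_pred):
    """Mean Euclidean error between two N x 3 Cartesian trajectories."""
    diff = np.asarray(p_ref, dtype=float) - np.asarray(p_pred, dtype=float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```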
Table 4. GA/ANN optimization inputs for IRB140.

Requirements | Sequential | Global
Number of joints | J1–J3 | J1–J3
Algorithm limitations | Mix | <8 Layers
Activation Functions | Tansig | Tansig
Feature filtering | J4–J6 | J4–J6
Epochs | 1000 | 1000
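Table 4 summarizes the inputs of the GA-based offline search for the number of hidden layers and neurons per layer. The toy sketch below illustrates one possible encoding of such a search; it is not the authors' GA, and `train_and_score` is a hypothetical callback that trains a candidate ANN (e.g., with the 1000 epochs listed above) and returns its validation MSE.

```python
import random

def ga_architecture_search(train_and_score, max_layers=7, neuron_range=(5, 100),
                           pop_size=20, generations=30, seed=0):
    """Toy genetic search over (number of hidden layers, neurons per layer)."""
    rng = random.Random(seed)

    def random_individual():
        return [rng.randint(*neuron_range) for _ in range(rng.randint(1, max_layers))]

    def mutate(parent):
        child = parent[:]
        child[rng.randrange(len(child))] = rng.randint(*neuron_range)  # perturb one layer width
        return child

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=train_and_score)   # lower validation MSE is better
        parents = ranked[: pop_size // 2]                   # truncation selection
        children = [mutate(rng.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=train_and_score)
```

A full implementation would also include crossover and a mutation of the network depth; those details, like the candidate training itself, would live inside `train_and_score`.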
Table 5. Global IK vs. sequential IK trajectory error results in the virtual environment: MAE, MSE, RMSE, MAXE, and R².

Global 7 Layers
Trajectory | MAE [mm] | MSE [mm²] | RMSE [mm] | MAXE [mm] | R² [-]
Circle in Y-Z | 1.07 × 10⁰ | 1.17 × 10⁰ | 1.08 × 10⁰ | 1.32 × 10⁰ | 99.967%
Spiral X-Y | 7.46 × 10⁻¹ | 6.42 × 10⁻¹ | 8.01 × 10⁻¹ | 1.33 × 10⁰ | 99.984%
Rectangle X-Y | 7.96 × 10⁻¹ | 7.14 × 10⁻¹ | 8.45 × 10⁻¹ | 1.27 × 10⁰ | 99.993%
S-shape in X-Y | 6.86 × 10⁻¹ | 5.41 × 10⁻¹ | 7.36 × 10⁻¹ | 1.25 × 10⁰ | 99.993%

Sequential From TCP
Trajectory | MAE [mm] | MSE [mm²] | RMSE [mm] | MAXE [mm] | R² [-]
Circle in Y-Z | 6.54 × 10⁻¹ | 4.78 × 10⁻¹ | 6.92 × 10⁻¹ | 9.49 × 10⁻¹ | 99.987%
Spiral X-Y | 6.39 × 10⁻¹ | 4.69 × 10⁻¹ | 6.85 × 10⁻¹ | 1.19 × 10⁰ | 99.989%
Rectangle X-Y | 5.13 × 10⁻¹ | 3.26 × 10⁻¹ | 5.71 × 10⁻¹ | 8.66 × 10⁻¹ | 99.997%
S-shape in X-Y | 5.02 × 10⁻¹ | 2.89 × 10⁻¹ | 5.38 × 10⁻¹ | 8.52 × 10⁻¹ | 99.996%

Sequential From Base
Trajectory | MAE [mm] | MSE [mm²] | RMSE [mm] | MAXE [mm] | R² [-]
Circle in Y-Z | 4.24 × 10⁻¹ | 2.40 × 10⁻¹ | 4.90 × 10⁻¹ | 8.77 × 10⁻¹ | 99.993%
Spiral X-Y | 4.48 × 10⁻¹ | 2.47 × 10⁻¹ | 4.97 × 10⁻¹ | 8.66 × 10⁻¹ | 99.994%
Rectangle X-Y | 5.03 × 10⁻¹ | 3.21 × 10⁻¹ | 5.67 × 10⁻¹ | 1.15 × 10⁰ | 99.997%
S-shape in X-Y | 4.27 × 10⁻¹ | 2.04 × 10⁻¹ | 4.51 × 10⁻¹ | 6.34 × 10⁻¹ | 99.997%
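The indices reported in Table 5 (and later in Table 9) can be reproduced from the commanded and reconstructed Cartesian paths. The sketch below is one plausible formulation, assuming MAE, MSE, RMSE, and MAXE are evaluated on the pointwise Euclidean deviation and R² on the raw coordinates; the exact definitions adopted by the authors may differ.

```python
import numpy as np

def trajectory_error_indices(p_ref, p_pred):
    """MAE, MSE, RMSE, MAXE and R^2 for a followed path (N x 3 arrays, mm)."""
    p_ref = np.asarray(p_ref, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    e = np.linalg.norm(p_ref - p_pred, axis=1)          # pointwise Euclidean deviation [mm]
    mse = float(np.mean(e ** 2))
    ss_res = float(np.sum((p_ref - p_pred) ** 2))
    ss_tot = float(np.sum((p_ref - p_ref.mean(axis=0)) ** 2))
    return {
        "MAE": float(np.mean(e)),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "MAXE": float(np.max(e)),
        "R2": 1.0 - ss_res / ss_tot,
    }
```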
Table 6. Train, validation, and test performance for the sequential models.

Train | MSE [rad²] | RMSE [rad] | MAXE [rad]
Sequential From Base | 1.23 × 10⁻⁶ | 1.11 × 10⁻³ | 9.68 × 10⁻³
Sequential From TCP | 9.23 × 10⁻⁷ | 9.61 × 10⁻⁴ | 1.07 × 10⁻²

Validation | MSE [rad²] | RMSE [rad] | MAXE [rad]
Sequential From Base | 1.50 × 10⁻⁶ | 1.22 × 10⁻³ | 8.51 × 10⁻³
Sequential From TCP | 1.40 × 10⁻⁶ | 1.18 × 10⁻³ | 2.15 × 10⁻²

Test | MSE [rad²] | RMSE [rad] | MAXE [rad]
Sequential From Base | 1.73 × 10⁻⁶ | 1.32 × 10⁻³ | 1.60 × 10⁻²
Sequential From TCP | 1.95 × 10⁻⁶ | 1.40 × 10⁻³ | 4.87 × 10⁻²
Table 7. D-H parameters for the FANUC LR Mate 200iC.

FANUC LR Mate 200iC
Joint | θi [rad] | di [mm] | ai [mm] | αi [rad]
1 | 0.0 | 3.30 × 10² | 7.50 × 10¹ | π/2
2 | π/2 | 0.00 × 10⁰ | 3.00 × 10² | 0.0
3 | 0.0 | 0.00 × 10⁰ | 7.50 × 10¹ | π/2
4 | 0.0 | 3.20 × 10² | 0.00 × 10⁰ | −π/2
5 | 0.0 | 0.00 × 10⁰ | 0.00 × 10⁰ | π/2
6 | 0.0 | 1.40 × 10² | 0.00 × 10⁰ | 0.0
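The same `dh_transform`/`forward_kinematics` sketch given after Table 2 applies unchanged to the laboratory robot; only the D-H rows differ. One possible encoding of Table 7 (again an illustrative assumption, not the authors' code) is shown below.

```python
import numpy as np

# D-H rows (theta_i, d_i [mm], a_i [mm], alpha_i [rad]) for the FANUC LR Mate 200iC (Table 7).
FANUC_LRMATE200IC_DH = [
    (0.0,       330.0, 75.0,  np.pi / 2),
    (np.pi / 2,   0.0, 300.0, 0.0),
    (0.0,         0.0, 75.0,  np.pi / 2),
    (0.0,       320.0, 0.0,  -np.pi / 2),
    (0.0,         0.0, 0.0,   np.pi / 2),
    (0.0,       140.0, 0.0,   0.0),
]

# Example (reusing forward_kinematics from the earlier sketch):
# print(forward_kinematics(np.zeros(6), FANUC_LRMATE200IC_DH)[:3, 3])
```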
Table 8. GA/ANN optimization inputs for the FANUC LR Mate 200iC.

Requirements | Sequential | Global
Number of joints | J1–J3 | J1–J3
Algorithm limitations | Mix | <8 Layers
Activation Functions | Tansig | Tansig
Feature filtering | J4–J6 | J4–J6
Epochs | 1000 | 1000
Table 9. Global IK vs. sequential IK procedure trajectory errors in the laboratory environment.

Global 6 Layers
Trajectory | MAE [mm] | MSE [mm²] | RMSE [mm] | MAXE [mm] | R² [-]
Circle in Y-Z | 8.19 × 10⁻¹ | 7.25 × 10⁻¹ | 8.52 × 10⁻¹ | 1.11 × 10⁰ | 99.975%
Spiral X-Y | 8.67 × 10⁻¹ | 8.60 × 10⁻¹ | 9.27 × 10⁻¹ | 1.51 × 10⁰ | 99.971%
Rectangle X-Y | 8.67 × 10⁻¹ | 9.13 × 10⁻¹ | 9.55 × 10⁻¹ | 1.77 × 10⁰ | 99.988%
S-shape in X-Y | 7.26 × 10⁻¹ | 5.80 × 10⁻¹ | 7.62 × 10⁻¹ | 1.11 × 10⁰ | 99.987%

Sequential From TCP
Trajectory | MAE [mm] | MSE [mm²] | RMSE [mm] | MAXE [mm] | R² [-]
Circle in Y-Z | 4.06 × 10⁻¹ | 2.00 × 10⁻¹ | 4.47 × 10⁻¹ | 6.75 × 10⁻¹ | 99.993%
Spiral X-Y | 4.82 × 10⁻¹ | 2.79 × 10⁻¹ | 5.28 × 10⁻¹ | 9.71 × 10⁻¹ | 99.990%
Rectangle X-Y | 5.66 × 10⁻¹ | 4.38 × 10⁻¹ | 6.62 × 10⁻¹ | 1.26 × 10⁰ | 99.994%
S-shape in X-Y | 6.00 × 10⁻¹ | 4.43 × 10⁻¹ | 6.65 × 10⁻¹ | 1.06 × 10⁰ | 99.990%

Sequential From Base
Trajectory | MAE [mm] | MSE [mm²] | RMSE [mm] | MAXE [mm] | R² [-]
Circle in Y-Z | 5.13 × 10⁻¹ | 2.90 × 10⁻¹ | 5.39 × 10⁻¹ | 7.77 × 10⁻¹ | 99.990%
Spiral X-Y | 6.99 × 10⁻¹ | 5.26 × 10⁻¹ | 7.25 × 10⁻¹ | 1.10 × 10⁰ | 99.982%
Rectangle X-Y | 3.70 × 10⁻¹ | 1.64 × 10⁻¹ | 4.06 × 10⁻¹ | 7.33 × 10⁻¹ | 99.998%
S-shape in X-Y | 5.57 × 10⁻¹ | 3.37 × 10⁻¹ | 5.80 × 10⁻¹ | 8.38 × 10⁻¹ | 99.993%
Table 10. Sequential IK procedure error results for the training, validation, and testing phases.

Train | MSE [rad²] | RMSE [rad] | MAXE [rad]
Sequential From Base | 1.25 × 10⁻⁶ | 1.12 × 10⁻³ | 9.70 × 10⁻³
Sequential From TCP | 1.18 × 10⁻⁶ | 1.09 × 10⁻³ | 1.41 × 10⁻²

Validation | MSE [rad²] | RMSE [rad] | MAXE [rad]
Sequential From Base | 1.62 × 10⁻⁶ | 1.27 × 10⁻³ | 1.33 × 10⁻²
Sequential From TCP | 1.83 × 10⁻⁶ | 1.35 × 10⁻³ | 3.51 × 10⁻²

Test | MSE [rad²] | RMSE [rad] | MAXE [rad]
Sequential From Base | 1.90 × 10⁻⁶ | 1.38 × 10⁻³ | 2.56 × 10⁻²
Sequential From TCP | 2.78 × 10⁻⁶ | 1.67 × 10⁻³ | 6.29 × 10⁻²
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
