Article

A General Approach Based on Newton’s Method and Cyclic Coordinate Descent Method for Solving the Inverse Kinematics

1 School of Mechanical Engineering, Beijing Institute of Technology, No. 5 Zhongguancun South Street, Haidian District, Beijing 100081, China
2 School of Computer Science and Technology, Beijing Institute of Technology, No. 5 Zhongguancun South Street, Haidian District, Beijing 100081, China
3 School of Mechatronical Engineering, Beijing Institute of Technology, No. 5 Zhongguancun South Street, Haidian District, Beijing 100081, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(24), 5461; https://doi.org/10.3390/app9245461
Submission received: 22 October 2019 / Revised: 26 November 2019 / Accepted: 9 December 2019 / Published: 12 December 2019
(This article belongs to the Special Issue Soft Robotics: New Design, Control, and Application)

Featured Application

The new approach can be used to solve the inverse kinematics of robots with revolute and prismatic joints, with any number of degrees of freedom and any configuration.

Abstract

The inverse kinematics of robot manipulators is a crucial problem with respect to automatically controlling robots. In this work, a Newton-improved cyclic coordinate descent (NICCD) method is proposed, which is suitable for robots with revolute or prismatic joints and any number of degrees of freedom. Firstly, the inverse kinematics problem is transformed into an objective function optimization problem, which is based on the least-squares form of the angle error and the position error expressed by the product-of-exponentials formula. Thereafter, the optimization problem is solved by combining Newton's method with the improved cyclic coordinate descent (ICCD) method. The difference between the proposed ICCD method and the traditional cyclic coordinate descent method is that the former treats consecutive prismatic joints and consecutive parallel revolute joints as a whole for the purposes of optimization. The ICCD algorithm has a convenient iterative formula for these two cases. In order to illustrate the performance of the NICCD method, its simulation results are compared with those of the well-known Newton–Raphson method using six different robot manipulators. The results suggest that, overall, the NICCD method is effective, accurate, robust, and generalizable. Moreover, it has advantages for the inverse kinematics calculations of continuous trajectories.

1. Introduction

Solving typical robotics problems, such as trajectory planning [1] and motion control [2], requires that forward and inverse kinematics problems be addressed. The former involves calculating the position and orientation of a robot’s end-effector frame from its joint values, which can easily be solved by using the matrix method of analysis [3]. The latter involves determining the joint variables corresponding to a given end-effector position and orientation. Indeed, the inverse kinematics problem is more complex than the forward kinematics problem because of the existence of nonlinear and multiple solutions [4]. Generally, two types of solutions exist: analytical solutions and numerical solutions. Iterative numerical methods can be applied if the inverse kinematics problem in question does not have any closed-form solutions and if it does not satisfy the Pieper criterion [5]. Even in cases where closed-form solutions do exist, iterative numerical methods are often used to improve the accuracy of these solutions.
From the 1980s, the inverse kinematics problem has attracted the attention of a large number of experts and researchers, all of whom have made remarkable achievements with respect to providing an adequate solution. Tsai and Morgan equated the kinematics problem of the general six-degree-of-freedom robot manipulators to that of a system of eight second-degree equations with eight unknowns solved numerically by polynomial continuation [6]. The authors in [6] conjectured that this problem has at most 16 solutions, the first conclusive proof of which was given in [7]. In order to improve the real-time performance of the solution, Manocha and Canny presented an algorithm for efficient inverse kinematics for a general six revolute (6R) robot manipulator, which performs both symbolic preprocessing and matrix computations and reduces the problem in question to computing the eigendecomposition of a matrix [8]. The generality of this algorithm, however, is limited since it can only be applied to 6R robot manipulators. In order to improve the generality of the inverse kinematics algorithm, some general approaches are proposed. The representative algorithms are the Newton–Raphson (NR) method [9] and the damped least-squares approach [10,11], which are all based on the inverse Jacobian matrix. The common drawback of this kind of algorithm is that it is sensitive to the initial solution. In order to solve this problem, Goldenberg et al. proposed a combined optimization method, which is based on a combination of two nonlinear programming techniques [12]. More specifically, the proposed method uses the cyclic coordinate descent (CCD) method [13] in order to rapidly find a feasible point approximate to the true solution and then the Broyden–Fletcher–Shanno variable metric method [13] in order to obtain a solution accurate to the desired degree of precision. Recently, some novel algorithms have been proposed to solve the inverse kinematics problem of robot manipulators. Kelemen et al. presented an algorithm for the control of kinematically-redundant manipulators considering three secondary tasks: a joint limit avoidance task, a kinematic singularities avoidance task, and an obstacle avoidance task [14]. Indeed, this approach is practical and results in a significant decrease in computing time. Zaplana and Basanez proposed an approach for a closed-form solution for the inverse kinematics of redundant manipulators [15]. This approach involves transforming redundant manipulators into non-redundant ones by selecting a set of joints, identifying the redundant ones, and parametrizing the joint variables. Thereafter, several closed-form methods previously developed for non-redundant manipulators can be applied in order to obtain the requisite solutions. This approach, however, is only suitable for manipulators that conform to the Pieper criterion. Qiao et al. [16] and Menini et al. [17] solved the inverse kinematics problem using double quaternions and Lie symmetries, respectively. In addition, other methods based on heuristic search techniques have been developed to solve the inverse kinematics problem, such as neural networks [18,19], genetic algorithms [20,21], and particle swarm optimization [22], as well as a combination of these methods [23,24].
Prior research rarely offers a method for solving inverse kinematics that is both general and effective. Therefore, this paper transforms the inverse kinematics problem into an objective function optimization problem, which is based on the least-squares form of the angle error and the position error expressed by the product-of-exponentials (PoE) formula. A new general approach is then presented that is suitable for the inverse kinematics of robots with revolute and prismatic joints, with any number of degrees of freedom and any configuration.
The rest of the paper is organized in the following manner. Section 2 provides basic and associated knowledge, including the kinematics description of robot manipulators, Newton’s method and the CCD method. Section 3 provides a definition of the objective function with respect to inverse kinematics and then the necessary formulas for Newton’s method and the iterative procedure for the improved cyclic coordinate descent (ICCD) method. At the end of this part, the Newton-improved cyclic coordinate descent (NICCD) method is presented to solve the inverse kinematics problem. Section 4 provides the simulation results, followed by the discussion. Finally, Section 5 provides the conclusions and future work.

2. Preliminaries

Generally, this section introduces the preliminary knowledge used in this article. Firstly, the kinematics description of the robot manipulator is introduced, which plays an important role in deriving the formulas. The iteration process for Newton’s method and the CCD method are then described, thereby laying the foundation for the proposed NICCD method.

2.1. Kinematics Description of Robot Manipulator

Generally speaking, two methods of describing the forward kinematics of open chains currently exist. The first method relies on the Denavit–Hartenberg parameters and the second relies on the PoE formula. Indeed, the latter method has advantages over the former (e.g., no link frames are necessary) [25], and it is, therefore, the preferred choice with respect to representing forward kinematics.
The PoE formula used to describe the forward kinematics ( T ( Θ ) , Θ ∈ R^n ) of an open chain with n degrees of freedom is given by Equation (1):
T(Θ) = \begin{bmatrix} R(Θ) & P(Θ) \\ 0 & 1 \end{bmatrix} = e^{S_1 θ_1} ⋯ e^{S_{n−1} θ_{n−1}} e^{S_n θ_n} M.    (1)
Specifically, Equation (1) is the space form of the PoE formula, referring to the fact that the screw axes are expressed in the fixed space frame; R(Θ) ∈ SO(3) (the special orthogonal group), P(Θ) ∈ R^{3×1}, and 0 denotes the (1 × 3) null vector.
To calculate the homogeneous transformation matrix ( T ( Θ ) ), which represents both the position and orientation of the end-effector with respect to the base frame, the elements outlined below are required.
(a) The end-effector configuration M ∈ SE(3) (the special Euclidean group) is defined as
M = \begin{bmatrix} R_m & P_m \\ 0 & 1 \end{bmatrix},    (2)
when the robot manipulator is at its home position.
(b) The screw axes are
S_i = \begin{bmatrix} ω_i \\ v_i \end{bmatrix} = [ω_{i1}, ω_{i2}, ω_{i3}, v_{i1}, v_{i2}, v_{i3}]^T, (i = 1, …, n),    (3)
where
v_i = −ω_i × q_i    (4)
with q_i (an arbitrary point on joint axis i) written in coordinates in the fixed base frame, corresponding to the joint motions when the robot manipulator is at the home position. Since the screw axis S_i is just a normalized twist, its matrix representation is given by Equation (5):
S_i = \begin{bmatrix} Ω_i & v_i \\ 0 & 0 \end{bmatrix},    (5)
where Ω i is a 3 × 3 skew-symmetric matrix representation of ω i and is defined as
Ω_i = \begin{bmatrix} 0 & −ω_{i3} & ω_{i2} \\ ω_{i3} & 0 & −ω_{i1} \\ −ω_{i2} & ω_{i1} & 0 \end{bmatrix}.    (6)
(c) The joint variables are represented as follows: Θ = [ θ 1 , θ 2 , , θ n ] T .
(d) The matrix exponential e^{S_i θ_i} can be calculated using Equation (7):
e^{S_i θ_i} = \begin{bmatrix} R_i(θ_i) & P_i(θ_i) \\ 0 & 1 \end{bmatrix},    (7)
where R_i(θ_i) and P_i(θ_i) are calculated as
R_i(θ_i) = e^{Ω_i θ_i} = I + Ω_i sin θ_i + Ω_i^2 (1 − cos θ_i),    (8)
P_i(θ_i) = [ (I + Ω_i^2) θ_i + Ω_i (1 − cos θ_i) − Ω_i^2 sin θ_i ] v_i,    (9)
where I denotes the ( 3 × 3 ) identity matrix.
The formulas listed in this section will be used for calculating the formulas in the NICCD method.
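For illustration, the short Python sketch below (not part of the original paper; all names are ours and NumPy is assumed) evaluates Equations (1) and (6)–(9) numerically:

```python
# Minimal sketch of the space-form PoE forward kinematics (Equations (1), (6)-(9)).
# Assumes unit screw axes S_i = [omega_i; v_i] and a home configuration M.
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector (Equation (6))."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_screw(S, theta):
    """Matrix exponential e^{S*theta} built from Equations (7)-(9)."""
    omega, v = np.asarray(S[:3], float), np.asarray(S[3:], float)
    W = skew(omega)
    R = np.eye(3) + W * np.sin(theta) + W @ W * (1.0 - np.cos(theta))
    P = ((np.eye(3) + W @ W) * theta
         + W * (1.0 - np.cos(theta))
         - W @ W * np.sin(theta)) @ v
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, P
    return T

def forward_kinematics(screws, thetas, M):
    """T(Theta) = e^{S_1 th_1} ... e^{S_n th_n} M (Equation (1))."""
    T = np.eye(4)
    for S, th in zip(screws, thetas):
        T = T @ exp_screw(S, th)
    return T @ M
```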

2.2. Newton’s Method

An effective way to solve nonlinear optimization problems is to use Newton’s method. This method has quadratic convergence, and it uses both the first-order and second-order partial derivatives of the objective function, thus taking into account gradient changes. Therefore, it can comprehensively determine the appropriate search direction, and it has a more rapid convergence rate than the gradient method [26]. In particular, some scholars have improved the truncation criterion of Newton’s method, which avoids “over-solving” of the Newton equation as much as possible. The representative method is the heuristic adaptive truncation criterion proposed by [27,28].
Although Newton's method requires a Hessian matrix, it is an appropriate method for solving low-dimensional optimization problems, such as the inverse kinematics of robot manipulators. The iterative formula for finding the local minimum value of the objective function ( f ( Θ ) , Θ ∈ R^n ) using Newton's method is given by Equation (10)
Θ^{(k+1)} = Θ^{(k)} − pinv( H(Θ^{(k)}) ) ∇f(Θ^{(k)}),    (10)
where pinv(·) is the pseudoinverse, and ∇f(Θ^{(k)}) is the gradient of the objective function at the k-th iteration, which is defined as
∇f(Θ^{(k)}) = [ ∂f(Θ^{(k)})/∂θ_1, ∂f(Θ^{(k)})/∂θ_2, …, ∂f(Θ^{(k)})/∂θ_n ]^T.    (11)
Moreover, H(Θ^{(k)}), which must be a positive definite matrix, is the Hessian matrix of the objective function at the k-th iteration, expressed as Equation (12)
H(Θ^{(k)}) = ∇²f(Θ^{(k)}) = \begin{bmatrix} ∂²f/∂θ_1² & ∂²f/∂θ_1∂θ_2 & ⋯ & ∂²f/∂θ_1∂θ_n \\ ∂²f/∂θ_2∂θ_1 & ∂²f/∂θ_2² & ⋯ & ∂²f/∂θ_2∂θ_n \\ ⋮ & ⋮ & ⋱ & ⋮ \\ ∂²f/∂θ_n∂θ_1 & ∂²f/∂θ_n∂θ_2 & ⋯ & ∂²f/∂θ_n² \end{bmatrix}.    (12)
Newton’s method for finding the optimal solution ( Θ * ) consists of the step-by-step procedure outlined below.
Step 1: determine the gradient ∇f(Θ^{(k)}) (see Equation (11)).
Step 2: verify convergence with the final solution as follows,
‖∇f(Θ^{(k)})‖ ≤ ϵ,    (13)
where ϵ is a threshold value supplied by the user. If Equation (13) holds, then Θ* = Θ^{(k)} is the optimal solution and the process terminates; otherwise, proceed to the next step.
Step 3: determine the Hessian matrix (see Equation (12)) and Θ^{(k+1)} (see Equation (10)). Then, let k = k + 1 and return to Step 1.
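A minimal sketch of this procedure in Python (illustrative only; grad_f and hess_f stand for routines that evaluate Equations (11) and (12)) is:

```python
# Sketch of the Newton iteration of Equation (10) with the stopping test of
# Equation (13); grad_f and hess_f are assumed callables, the names are ours.
import numpy as np

def newton_minimize(theta0, grad_f, hess_f, eps=1e-8, max_iter=100):
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(theta)
        if np.linalg.norm(g) <= eps:           # convergence test (Equation (13))
            break
        H = hess_f(theta)
        theta = theta - np.linalg.pinv(H) @ g  # Newton update (Equation (10))
    return theta
```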

2.3. The CCD Method

The CCD method, also known as the univariate search technique, is a dimension reduction method for unconstrained optimization problems. Its iteration process requires that one searches alternately along different coordinate directions. Indeed, the CCD method is a simpler method of establishing the search direction than the objective function derivative.
Although the convergence rate of the CCD method is slow for high-dimensional optimization problems, it remains acceptable for the inverse kinematics problem of robot manipulators, and it can reduce the objective function at each iteration.
The CCD method requires finding the optimal solution for one variable at a time while keeping the remaining n − 1 variables unchanged. The end point of the previous one-dimensional search is the starting point of the next one. The iterative formula for finding the local minimum value of the objective function ( f ( Θ ) , Θ ∈ R^n ) by the CCD method is Equation (14)
Θ_i^{(k)} = Θ_{i−1}^{(k)} + α_i^{(k)} D_i^{(k)}, (i = 1, 2, …, n),    (14)
where D_i^{(k)} is the search direction, defined as
D_i^{(k)} = [0, …, 1, …, 0]^T,    (15)
in which the component along the i-th coordinate direction is 1 and the remaining components are 0. Moreover, α_i^{(k)} is the optimal step length in the i-th iteration of the k-th round, which is written as Equation (16)
α_i^{(k)} = arg min_{α_i^{(k)}} f( Θ_{i−1}^{(k)} + α_i^{(k)} D_i^{(k)} ).    (16)
The CCD method for finding the optimal solution ( Θ * ) consists of the following step-by-step procedure.
Step 1: let i = 1.
Step 2: determine the optimal step length α_i^{(k)} (see Equation (16)) and Θ_i^{(k)} (see Equation (14)).
Step 3: check whether the iteration round is over, i.e., whether
i = n.    (17)
If Equation (17) holds, proceed to the next step; otherwise, let i = i + 1 and return to Step 2.
Step 4: verify convergence with the final solution as follows,
‖Θ_n^{(k)} − Θ_0^{(k)}‖ ≤ ϵ,    (18)
where ϵ is a threshold value supplied by the user. If Equation (18) holds, then Θ* = Θ_n^{(k)} is the optimal solution and the process terminates; otherwise, let D_i^{(k+1)} = D_i^{(k)} (i = 1, 2, …, n), Θ_0^{(k+1)} = Θ_n^{(k)}, k = k + 1 and return to Step 1.
The CCD method does not require the derivative of the objective function in the search process; rather, it only requires the objective function value information. Moreover, it can be used to analytically find the optimal value in each search direction, such as the single joint variable optimization for the objective function proposed in this paper.
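As an illustration, one CCD round could be sketched in Python as follows (not the paper's implementation; the one-dimensional search is delegated to SciPy purely for brevity):

```python
# Sketch of one CCD round following Equations (14)-(16).
import numpy as np
from scipy.optimize import minimize_scalar

def ccd_round(theta, f):
    """Minimize f along each coordinate direction in turn (Equation (14))."""
    theta = np.asarray(theta, dtype=float).copy()
    for i in range(theta.size):
        def f_1d(alpha, i=i):
            trial = theta.copy()
            trial[i] += alpha                 # move along D_i of Equation (15)
            return f(trial)
        alpha_star = minimize_scalar(f_1d).x  # optimal step length (Equation (16))
        theta[i] += alpha_star
    return theta
```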

3. Numerical Methods for Nonlinear Kinematic Equations

In this section, the inverse kinematics problem is first transformed into the problem of finding the value of Θ* that minimizes the objective function, which is based on the least-squares form of the angle error and the position error expressed by the PoE formula. Accordingly, the equivalence of these two problems is proven. Then, the necessary formulas for Newton's method and the ICCD method are given. Finally, for the NICCD method, the iterative procedure required to find the local minimum value of the objective function with respect to inverse kinematics is proposed.

3.1. Definition of Objective Function in Inverse Kinematics

For an open chain with n degrees of freedom with respect to forward kinematics ( T ( Θ ) , Θ ∈ R^n ), the inverse kinematics problem can be stated as follows: given a homogeneous transformation matrix E ∈ SE(3), which is defined as
E = \begin{bmatrix} R_e & P_e \\ 0 & 1 \end{bmatrix},    (19)
find solutions ( Θ * ) that satisfy
T ( Θ * ) = E .
As previously mentioned, this problem can be transformed into the problem of finding the value of Θ* that minimizes the objective function ( f ( Θ ) ) of inverse kinematics. Formally, the problem can be defined as Equation (21)
Θ * = arg   min Θ f ( Θ ) ,
where the objective function ( f ( Θ ) ) of inverse kinematics is written as
f(Θ) = RErr(Θ) + λ PErr(Θ) = trace( (R(Θ) − R_e)^T (R(Θ) − R_e) ) + λ (P(Θ) − P_e)^T (P(Θ) − P_e),    (22)
where λ > 0 is a scale factor, and R ( Θ ) and P ( Θ ) are expressed as
R(Θ) = \prod_{i=1}^{n+1} R_i(θ_i),    (23)
P(Θ) = \sum_{k=0}^{n} [ \prod_{i=1}^{k} R_i(θ_i) ] P_{k+1}(θ_{k+1}),    (24)
where R i ( θ i ) , P i ( θ i ) ( i = 1 , , n ) can be calculated using Equations (8) and (9), respectively, and where P n + 1 ( θ n + 1 ) , R n + 1 ( θ n + 1 ) are defined as
P n + 1 ( θ n + 1 ) = P m ,
R n + 1 ( θ n + 1 ) = R m .
Equation (22) is based on a least-squares cost function. The first part of Equation (22) represents the sum of the squares of the errors of all elements of the current rotation matrix ( R ( Θ ) ) and the target rotation matrix ( R_e ). The second part of Equation (22) represents λ times the sum of the squares of the errors of all elements of the current position vector ( P ( Θ ) ) and the target position vector ( P_e ). Therefore, if a solution does not exist for inverse kinematics, then Θ* satisfies the required conditions as closely as possible in a least-squares sense. However, if a solution exists for inverse kinematics, then Θ* exactly satisfies
f ( Θ * ) = min   f ( Θ ) = 0 .
A constructive proof of this claim is provided below.
Proof. 
If a solution of inverse kinematics ( Θ* ) exists, then R(Θ*) = R_e and P(Θ*) = P_e must be satisfied, so Equation (27) must also hold. Conversely, if there exists a Θ* such that Equation (27) holds, then R(Θ*) = R_e and P(Θ*) = P_e must also be satisfied, so Equation (20) must hold.
For the convenience of calculation, the objective function ( f ( Θ ) , Θ ∈ R^n ) of inverse kinematics can also be defined as
f(Θ) = RErr(Θ) + λ PErr(Θ) = trace( (R_i(θ_i) R_{ai} − R_{ti})^T (R_i(θ_i) R_{ai} − R_{ti}) ) + λ (R_i(θ_i) P_{ai} + P_i(θ_i) − P_{ti})^T (R_i(θ_i) P_{ai} + P_i(θ_i) − P_{ti}), (i = 1, …, n),    (28)
where R_{ai}, P_{ai}, R_{ti}, and P_{ti} are expressed as follows:
R_{ai} = \prod_{j=i+1}^{n+1} R_j(θ_j),    (29)
P_{ai} = \sum_{k=i}^{n} [ \prod_{j=i+1}^{k} R_j(θ_j) ] P_{k+1}(θ_{k+1}),    (30)
R_{ti} = [ \prod_{j=1}^{i−1} R_j(θ_j) ]^{−1} R_e,    (31)
P_{ti} = [ \prod_{j=1}^{i−1} R_j(θ_j) ]^{−1} ( P_e − \sum_{k=0}^{i−2} [ \prod_{j=1}^{k} R_j(θ_j) ] P_{k+1}(θ_{k+1}) ),    (32)
where P n + 1 ( θ n + 1 ) and R n + 1 ( θ n + 1 ) are defined as Equations (25) and (26), respectively. Equations (28) and (22) are completely equivalent.□
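To make the objective concrete, a minimal Python sketch of Equations (21) and (22) is given below (our own illustration; fk stands for any forward-kinematics routine returning a 4×4 pose, such as the PoE sketch in Section 2.1, and lam is the scale factor discussed later in Section 3.4.1):

```python
# Sketch of the inverse-kinematics objective f(Theta) of Equation (22).
import numpy as np

def ik_objective(thetas, fk, E, lam):
    """RErr(Theta) + lambda * PErr(Theta) for a target pose E."""
    T = fk(thetas)                            # current end-effector pose T(Theta)
    R, P = T[:3, :3], T[:3, 3]
    Re, Pe = E[:3, :3], E[:3, 3]
    r_err = np.trace((R - Re).T @ (R - Re))   # squared rotation-matrix error
    p_err = float((P - Pe) @ (P - Pe))        # squared position error
    return r_err + lam * p_err
```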

3.2. Necessary Formulas for Newton’s Method

Newton's method involves calculating the gradient and Hessian matrix of the objective function, and its iterative formula for finding the local minimum value of the objective function (Equation (28)) is Equation (10). The important formulas for Newton's method are given below.

3.2.1. Determining the Gradient of Objective Function

For both prismatic and revolute joints, the gradient of the objective function of inverse kinematics is calculated using Equation (11), where
∂f(Θ)/∂θ_i = r_{i1} cos(θ_i) − r_{i2} sin(θ_i) + r_{i3} θ_i + r_{i4},    (33)
where
r_{i1} = 2λ[ (P_{ti} − P_{ai})^T Ω_i^2 v_i − P_{ti}^T Ω_i P_{ai} ] − 2 trace( R_{ti}^T Ω_i R_{ai} ),    (34)
r_{i2} = 2λ[ (P_{ai} + P_{ti})^T Ω_i v_i − v_i^T Ω_i^T Ω_i v_i + P_{ti}^T Ω_i^2 P_{ai} ] + 2 trace( R_{ti}^T Ω_i^2 R_{ai} ),    (35)
r_{i3} = 2λ v_i^T (I + Ω_i^2) v_i,    (36)
r_{i4} = 2λ (P_{ai} − P_{ti})^T (I + Ω_i^2) v_i,    (37)
where R a i , P a i , R t i , and P t i are defined as Equations (29)–(32), respectively.
If the robot manipulator joint is a prismatic joint ( Ω i = 0 ), then the elements in Equation (11) are simplified to
∂f(Θ)/∂θ_i = 2[ θ_i + λ (P_{ai} − P_{ti})^T v_i ].    (38)
Conversely, if the robot manipulator joint is a revolute joint ( ( I + Ω i 2 ) v i = 0 ), then the elements in Equation (11) are simplified to
∂f(Θ)/∂θ_i = r_{i1} cos(θ_i) − r_{i2} sin(θ_i),    (39)
where r i 1 and r i 2 are defined as Equations (34) and (35), respectively.
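When implementing these derivatives, a simple finite-difference check (our own suggestion, not part of the paper) can be used to validate the closed-form gradient against the objective of Equation (28):

```python
# Central-difference gradient, useful for sanity-checking Equations (33)-(39).
import numpy as np

def numerical_gradient(f, theta, h=1e-6):
    theta = np.asarray(theta, dtype=float)
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (f(theta + e) - f(theta - e)) / (2.0 * h)
    return g
```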

3.2.2. Determining the Hessian Matrix of Objective Function

The Hessian matrix of the objective function of inverse kinematics defined by Equation (12) is a symmetric matrix, so
∂²f/∂θ_i∂θ_j = ∂²f/∂θ_j∂θ_i.    (40)
On the one hand, if the robot manipulator’s i-th joint is a prismatic joint, when i = j is satisfied, then, according to Equation (38), the elements in the Hessian matrix can be defined as
∂²f/∂θ_i² = 2λ.    (41)
When i < j is satisfied, the elements in the Hessian matrix can be calculated as
∂²f/∂θ_i∂θ_j = 2λ v_i^T h_{ij1} ( h_{ij2} h_{ij3} + h_{ij4} ),    (42)
where
h_{ij1} = \prod_{k=i+1}^{j−1} R_k,    (43)
h_{ij2} = Ω_j cos(θ_j) + Ω_j^2 sin(θ_j),    (44)
h_{ij3} = \sum_{e=j}^{n} [ \prod_{k=j+1}^{e} R_k ] P_{e+1},    (45)
h_{ij4} = Ω_j v_j sin(θ_j) − Ω_j^2 v_j cos(θ_j) + (I + Ω_j^2) v_j.    (46)
When i > j is satisfied, the elements in the Hessian matrix can be calculated according to Equation (40).
On the other hand, if the robot manipulator’s i-th joint is a revolute joint, when i = j is satisfied, the elements in the Hessian matrix can be defined as
∂²f/∂θ_i² = −r_{i1} sin(θ_i) − r_{i2} cos(θ_i),    (47)
where r i 1 and r i 2 are defined as Equations (34) and (35), respectively. When i < j is satisfied, the elements in the Hessian matrix can be expressed as
∂²f/∂θ_i∂θ_j = 2λ ( v_i^T d_{ij1} + P_{ti}^T d_{ij4} ) h_{ij1} ( d_{ij2} h_{ij3} + d_{ij3} ) − 2 trace( R_{ti}^T d_{ij4} h_{ij1} Ω_j d_{ij5} ) cos(θ_j) − 2 trace( R_{ti}^T d_{ij4} h_{ij1} Ω_j^2 d_{ij5} ) sin(θ_j),    (48)
where
d_{ij1} = Ω_i^2 cos(θ_i) − Ω_i sin(θ_i),    (49)
d_{ij2} = Ω_j cos(θ_j) + Ω_j^2 sin(θ_j),    (50)
d_{ij3} = Ω_j v_j sin(θ_j) − Ω_j^2 v_j cos(θ_j) + (I + Ω_j^2) v_j,    (51)
d_{ij4} = Ω_i cos(θ_i) + Ω_i^2 sin(θ_i),    (52)
d_{ij5} = \prod_{k=j+1}^{n+1} R_k.    (53)
When i > j is satisfied, the elements in the Hessian matrix can be calculated according to Equation (40).

3.3. The ICCD Method

For the inverse kinematics problem, the CCD method has an analytical solution for the optimization of a single joint variable, but all joints are calculated independently, which makes the convergence slow. To address this problem, this paper proposes the ICCD method, in which multiple joint variables, namely consecutive prismatic joints and consecutive parallel revolute joints (Figure 1), can be adjusted in concert. Although there are analytical solutions when the consecutive prismatic joints are perpendicular to each other, in order to improve the generality of the proposed method, Newton's method and the CCD method are combined to solve these two special cases. The associated formulas and the iterative procedure for the ICCD method are given below.

3.3.1. Necessary Formulas for Solving a Single Joint Variable

For a single joint variable, each iteration drives the objective function (Equation (28)) to a local minimum by adjusting only one joint variable, which admits an analytical solution. If the joint is a prismatic joint, then the iteration formula for the ICCD method is calculated as
θ_i^{(k+1)} = λ ( P_{ti}(θ_i^{(k)}) − P_{ai}(θ_i^{(k)}) )^T v_i,    (54)
where P_{ti} and P_{ai} are defined as Equations (32) and (30), respectively. If the joint is a revolute joint, then the iteration formula is defined as
θ_i^{(k+1)} = atan2( r_{i1}(θ_i^{(k)}), r_{i2}(θ_i^{(k)}) ),    (55)
where r i 1 and r i 2 are defined as Equations (34) and (35), respectively.
The proof for the iteration formula outlined above is shown in Appendix A.

3.3.2. Necessary Formulas for Solving Consecutive Prismatic Joints

When there are consecutive prismatic joints in a robot manipulator, they are regarded as a whole. Moreover, these joint variables can be adjusted simultaneously in order to minimize the objective function of inverse kinematics. Assuming that the robot manipulator in question has consecutive prismatic joints from the i-th joint to the h-th joint, the following holds:
R_i = R_{i+1} = ⋯ = R_h = I, (1 ≤ i < h ≤ n).    (56)
According to Equation (28), the objective function of inverse kinematics can be simplified to
f(Θ_{i∼h}) = trace( (R_{cp} − R_{ti})^T (R_{cp} − R_{ti}) ) + \sum_{k=i}^{h} θ_k^2 + 2 \sum_{k=i}^{h−1} \sum_{j=k+1}^{h} v_k^T v_j θ_k θ_j + λ [ 2 (P_{cp} − P_{ti})^T \sum_{k=i}^{h} v_k θ_k + (P_{cp} − P_{ti})^T (P_{cp} − P_{ti}) ],    (57)
where the consecutive prismatic joint variables ( θ_i, …, θ_h ) are treated as variables and the other joints as constants, and where
Θ_{i∼h} = [θ_i, …, θ_h]^T,    (58)
R_{cp} = \prod_{j=h+1}^{n+1} R_j,    (59)
P_{cp} = \sum_{k=h}^{n} [ \prod_{j=h+1}^{k} R_j ] P_{k+1},    (60)
R t i and P t i are defined as Equations (31) and (32), respectively.
It is not hard to conclude that when the joints are perpendicular to each other ( v_k^T v_j = 0 ), there is an analytical solution. In order to show the generality of the method, including the case in which the joints are not perpendicular to each other ( v_k^T v_j ≠ 0 ), this paper uses the combination of Newton's method and the CCD method to solve this problem. Note, however, that this numerical solution is the same as the analytical solution when the joints satisfy the mutually perpendicular condition ( v_k^T v_j = 0 ). The gradient, the Hessian matrix, and the iteration formula of the CCD method with respect to the objective function (Equation (57)) are given below.
The gradient of the objective function (Equation (57)) is defined as
∇f(Θ_{i∼h}) = [ ∂f/∂θ_i, …, ∂f/∂θ_h ]^T,    (61)
where
∂f/∂θ_j = 2 [ θ_j + v_j^T \sum_{k=i, k≠j}^{h} v_k θ_k + λ (P_{cp} − P_{ti})^T v_j ], (j = i, …, h).    (62)
The Hessian matrix of the objective function (Equation (57)) is defined as
H(Θ_{i∼h}) = ∇²f(Θ_{i∼h}) = \begin{bmatrix} ∂²f/∂θ_i² & ∂²f/∂θ_i∂θ_{i+1} & ⋯ & ∂²f/∂θ_i∂θ_h \\ ∂²f/∂θ_{i+1}∂θ_i & ∂²f/∂θ_{i+1}² & ⋯ & ∂²f/∂θ_{i+1}∂θ_h \\ ⋮ & ⋮ & ⋱ & ⋮ \\ ∂²f/∂θ_h∂θ_i & ∂²f/∂θ_h∂θ_{i+1} & ⋯ & ∂²f/∂θ_h² \end{bmatrix},    (63)
where
∂²f/∂θ_j² = 2,    (64)
∂²f/∂θ_j∂θ_k = 2 v_j^T v_k, (j ≠ k).    (65)
It is not difficult to conclude that the Hessian matrix ( H(Θ_{i∼h}) ) is always a positive definite matrix, so when
∇f(Θ_{i∼h}) = 0    (66)
is satisfied, the objective function (Equation (57)) obtains the local minimum value. Accordingly, the iteration formula of the CCD method of the objective function (Equation (57)) is calculated as
θ_j^{(k+1)} = [ λ (P_{ti} − P_{cp}) − ( \sum_{l=i}^{j−1} v_l θ_l^{(k+1)} + \sum_{l=j+1}^{h} v_l θ_l^{(k)} ) ]^T v_j, (j = i, …, h).    (67)
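Under the stationarity condition of Equation (66), the whole group of consecutive prismatic joints satisfies a small linear system; the Python sketch below (our own illustration, with the assumed unit directions v_i, …, v_h stacked as the columns of V and with P_ti, P_cp, lam following the notation above) solves it in one shot, which is the fixed point that the coordinate-wise sweep of Equation (67) converges to:

```python
# Sketch of the "treat consecutive prismatic joints as a whole" idea of
# Section 3.3.2: solve (V^T V) theta = lam * V^T (P_ti - P_cp) directly.
import numpy as np

def solve_consecutive_prismatic(V, P_ti, P_cp, lam):
    """Exact minimizer of the group objective when V^T V is positive definite."""
    A = V.T @ V                   # Hessian of Equations (64)-(65), up to a factor of 2
    b = lam * V.T @ (P_ti - P_cp)
    return np.linalg.solve(A, b)  # use np.linalg.pinv(A) @ b if A happens to be singular
```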

3.3.3. Necessary Formulas for Solving Consecutive Parallel Revolute Joints

When there are consecutive parallel revolute joints in a robot manipulator, they can be regarded as a whole. Moreover, these joint variables can be adjusted simultaneously in order to minimize the objective function of inverse kinematics. Assuming that the robot manipulator in question has consecutive parallel revolute joints from the i-th joint to the h-th joint,
Ω_i = Ω_{i+1} = ⋯ = Ω_h, (1 ≤ i < h ≤ n),    (68)
then the rotation matrix satisfies the following property:
R_i R_{i+1} ⋯ R_h = e^{Ω_i θ_i} e^{Ω_{i+1} θ_{i+1}} ⋯ e^{Ω_h θ_h} = e^{Ω_i (θ_i + θ_{i+1} + ⋯ + θ_h)}.    (69)
When there are several consecutive parallel revolute joints ( θ_i, …, θ_h ) in the robot manipulator configuration, these joints are regarded as variables and the others as constants. Accordingly, the gradient of the objective function (Equation (28)) is defined as in Equation (61), and the Hessian matrix of the objective function (Equation (28)) is defined as in Equation (63).
In order to simplify the calculation, only the main diagonal elements of the Hessian matrix ( H ( Θ i h ) ) are considered here. When
∂²f/∂θ_i² > 0, ∂²f/∂θ_{i+1}² > 0, …, ∂²f/∂θ_h² > 0    (70)
and
∂f/∂θ_i = ∂f/∂θ_{i+1} = ⋯ = ∂f/∂θ_h = 0    (71)
are satisfied at the same time, the objective function (Equation (28)) obtains the local minimum. Thereafter, the iterative formulas of the CCD method for joint angles ( Θ i h ) can be calculated as
θ_j^{(k+1)} = atan2( g( Θ_{i∼h}^{(k_{j−i+1})} ), h( Θ_{i∼h}^{(k_{j−i+1})} ) ), (j = i, i+1, …, h),    (72)
where g(·) and h(·) are functions of Θ_{i∼h}^{(k_t)}, which denotes the t-th update within the k-th iteration round, and where Θ_{i∼h}^{(k_{j−i+1})} is defined as
Θ_{i∼h}^{(k_{j−i+1})} = [ θ_i^{(k+1)}, …, θ_{j−1}^{(k+1)}, θ_j^{(k)}, …, θ_h^{(k)} ]^T.    (73)
The configuration of more than three consecutive parallel revolute joints is rare in robot manipulators. Accordingly, the necessary formulas for minimizing the objective function by adjusting two or three consecutive parallel revolute joints at the same time are given in Appendix B.

3.3.4. ICCD Method of the Iterative Procedure

The ICCD method proposed in this section is responsible for finding Θ^{(k+1)} from Θ^{(k)}, as shown in Algorithm 1. In line 1, groups are determined to facilitate subsequent computation. Specifically, a single revolute joint, a single prismatic joint, a set of consecutive parallel revolute joints, or a set of consecutive prismatic joints each forms a group; e.g., the UR robot with three consecutive parallel revolute joints ( θ_2, θ_3, θ_4 ) is grouped into G = { g_1, g_2, g_3, g_4 } = { θ_1, {θ_2, θ_3, θ_4}, θ_5, θ_6 }. In lines 3–44, taking g_i as the variable and all others as constants, the goal is to find g_i such that the objective function attains a local minimum value. In lines 3–6, a single revolute joint and a single prismatic joint are handled separately. In lines 8–44, consecutive parallel revolute joints and consecutive prismatic joints are handled in order to find the minimum value of the objective function. ϵ_{ca} is a threshold value, supplied by the user, used to verify convergence. If the Hessian matrix ( H(Θ_{i∼h}) ) of either the consecutive parallel revolute joints or the consecutive prismatic joints is a positive definite matrix, then Newton's method is used (let flag_{ca2} = 1). Otherwise, Newton's method and the CCD method are used simultaneously, and the more effective result of the two methods is adopted (let flag_{ca2} = 2). The variable flag_{ca2} records which method is used in the current iteration: flag_{ca2} = 1 indicates the former and flag_{ca2} = 2 indicates the latter. In order to ensure that Newton's method and the CCD method converge to the local minimum value at the same time, flag_{ca1} = 0 is used, so that Newton's method and the CCD method are calculated simultaneously even in the case where the Hessian matrix is positive definite. To be specific, the ICCD method consists of the step-by-step procedure shown in Algorithm 1.

3.4. The NICCD Method for The Inverse Kinematics Problem

The CCD method has great robustness and a simple calculation process [13], but its convergence rate is slow when the iteration point approaches the optimal point. The ICCD method has the same problem. Conversely, Newton's method (with a quadratic rate of convergence) converges quickly when the iteration point approaches the optimal point. However, Newton's method often fails to converge when the initial estimate is not sufficiently close to the solution, and it cannot find a descent direction when the second-order Hessian matrix is not positive definite. In order to inherit the advantages of the two algorithms and discard their disadvantages, the NICCD method is proposed to solve the inverse kinematics problem. The determination of the scale factor and the iterative procedure for the NICCD method are given below.
Algorithm 1 The improved cyclic coordinate descent (ICCD) method.
1: Initialization: Set Err_g = +∞, flag_{ca2} = 0, and determine the groups G^{(k)} = { g_1^{(k)}, g_2^{(k)}, …, g_l^{(k)} }
2: for each i ∈ [1, l] do
3:   if g_i^{(k)} is a group of a single revolute joint then
4:     Determine g_i^{(k+1)} (see Equation (55))
5:   else if g_i^{(k)} is a group of a single prismatic joint then
6:     Determine g_i^{(k+1)} (see Equation (54))
7:   else
8:     Set flag_{ca1} = 1
9:     while Err_g > ϵ_{ca} or flag_{ca2} ≠ 2 do
10:      if g_i^{(k)} is a group of consecutive parallel revolute joints then
11:        Calculate ∇f(g_i^{(k)}) and H(g_i^{(k)}) (see Equations (61), (63), and Appendix B)
12:      else
13:        Calculate ∇f(g_i^{(k)}) and H(g_i^{(k)}) (see Equations (61)–(65))
14:      end if
15:      if H(g_i^{(k)}) is a positive definite matrix and flag_{ca1} = 1 then
16:        Set flag_{ca2} = 1
17:        Determine g_i^{(k+1)} by Newton's method (see Equation (10))
18:      else
19:        Set flag_{ca2} = 2
20:        Determine g_{Ni}^{(k+1)} by Newton's method (see Equation (10))
21:        if g_i^{(k)} is a group of consecutive parallel revolute joints then
22:          Calculate g_{Ci}^{(k+1)} by the CCD method (see Equation (72) and Appendix B)
23:        else
24:          Calculate g_{Ci}^{(k+1)} by the CCD method (see Equation (67))
25:        end if
26:        Set G_C^{(k+1)} = { g_1^{(k+1)}, …, g_{Ci}^{(k+1)}, g_{i+1}^{(k+1)}, …, g_l^{(k+1)} } and G_N^{(k+1)} = { g_1^{(k+1)}, …, g_{Ni}^{(k+1)}, g_{i+1}^{(k+1)}, …, g_l^{(k+1)} }
27:        Calculate f_{caC} = f(G_C^{(k+1)}) and f_{caN} = f(G_N^{(k+1)}) (see Equation (28))
28:        if f_{caC} < f_{caN} then
29:          Set g_i^{(k+1)} = g_{Ci}^{(k+1)}
30:        else
31:          Set g_i^{(k+1)} = g_{Ni}^{(k+1)}
32:        end if
33:      end if
34:      Set Err_g = ‖ g_i^{(k+1)} − g_i^{(k)} ‖
35:      if Err_g ≤ ϵ_{ca} then
36:        if flag_{ca2} ≠ 2 then
37:          Set flag_{ca1} = 0
38:        end if
39:      else
40:        Set flag_{ca1} = 1
41:      end if
42:      Set g_i^{(k)} = g_i^{(k+1)}
43:    end while
44:  end if
45: end for
46: Set Θ^{(k+1)} = G^{(k+1)}

3.4.1. The Scale Factor

There is a scale factor ( λ ) in the objective function (Equation (22)), which is used to scale the position error so that the difference between the position error and the angle error is not very large. This scale factor has an impact on the convergence speed of the proposed approach and the ability to avoid the local minimum.
From the demonstration in Appendix C, it can be concluded that
max_Θ RErr(Θ) = max_Θ trace( (R(Θ) − R_e)^T (R(Θ) − R_e) ) = 8.    (74)
Therefore, the scale factor ( λ ) can be defined as
λ = max_Θ RErr(Θ) / max_Θ PErr(Θ) = 8 / max_Θ PErr(Θ),    (75)
where PErr(Θ) is defined in Equation (22) and max_Θ PErr(Θ) can be calculated in advance from the robot manipulator configuration.

3.4.2. The NICCD Approach of the Iterative Procedure

The method proposed in this section seeks a solution Θ* to the problem defined by Equation (27), as shown in Algorithm 2. Some important parameters in Algorithm 2 are introduced as follows. Parameters t and max_k represent the coefficient of the scale factor ( λ ) and the maximum number of iterations, respectively. Parameters ϵ_1 and ϵ_2 represent the convergence thresholds of the objective function (f) and the independent variable ( Θ ), respectively. The roles of flag_1 and flag_2 are analogous to those of flag_{ca1} and flag_{ca2} in Algorithm 1, respectively. In lines 3–17, if the Hessian matrix is a positive definite matrix and flag_1 = 1, then Θ^{(k+1)} is calculated using Newton's method (let flag_2 = 1); otherwise, Newton's method and the ICCD method are executed simultaneously, with the more efficient result being used as Θ^{(k+1)} (let flag_2 = 2). Lines 18 and 19 involve the global and local convergence judgments of the method, respectively. In lines 23–30, the mechanism for jumping out of a local optimal solution is applied. The NICCD method consists of the step-by-step procedure shown in Algorithm 2.

4. Simulation and Discussion

In order to illustrate the effectiveness, accuracy, robustness, and generality of the inverse kinematics approach proposed in this article, five simulations of six robots—a three-link planar arm (3R), a selective compliance assembly robot arm (SCARA), a Cartesian manipulator (3P), Universal Robot’s UR5, a Stanford Arm, and Barrett Technology’s WAM7R robot manipulator—are conducted using the NICCD method, the results of which are compared with the NR method using the Jacobian matrix. The NR method is described in section 6.2.2 of [25]. The PoE parameters of the six robots with the current configurations relative to the base frame are shown in Table 1 and Table 2. The last simulation verifies the improvement of ICCD compared with CCD method and the evolution process of NICCD method. All simulations are performed on a desktop computer (Core i5 3.40 GHz, 16 GB RAM, MATLAB 2015b software program).
Algorithm 2 The Newton-improved cyclic coordinate descent (NICCD) method.
1: Initialization: Set t = 1, flag_1 = 1, and determine the scale factor λ (see Equation (75))
2: for each k ∈ [1, max_k] do
3:   Calculate ∇f(Θ^{(k)}) and H(Θ^{(k)}) (see Equations (11), (12), and (33)–(39))
4:   if H(Θ^{(k)}) is a positive definite matrix and flag_1 = 1 then
5:     Set flag_2 = 1
6:     Determine Θ^{(k+1)} by Newton's method (see Equation (10))
7:   else
8:     Set flag_2 = 2
9:     Determine Θ_N^{(k+1)} by Newton's method (see Equation (10))
10:    Determine Θ_C^{(k+1)} by the ICCD method (see Algorithm 1)
11:    Calculate f_N = f(Θ_N^{(k+1)}) and f_C = f(Θ_C^{(k+1)}) (see Equation (28))
12:    if f_C < f_N then
13:      Set Θ^{(k+1)} = Θ_C^{(k+1)}
14:    else
15:      Set Θ^{(k+1)} = Θ_N^{(k+1)}
16:    end if
17:  end if
18:  if f(Θ^{(k+1)}) > ϵ_1 then
19:    if ‖ Θ^{(k+1)} − Θ^{(k)} ‖ ≤ ϵ_2 then
20:      if flag_2 ≠ 2 then
21:        Set t = 1, flag_1 = 0
22:      else
23:        if RErr(Θ^{(k+1)}) < PErr(Θ^{(k+1)}) then
24:          Set t = t / 2
25:        else
26:          Set t = 2t
27:        end if
28:        Set λ = tλ
29:        Determine Θ^{(k+1)} by the ICCD method (see Algorithm 1)
30:        Set λ = λ / t
31:      end if
32:    else
33:      Set flag_1 = 1
34:    end if
35:  else
36:    Set Θ* = Θ^{(k+1)}
37:    break
38:  end if
39: end for
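The core decision of Algorithm 2 can be summarized by the following highly simplified Python sketch (our own illustration; the flag handling and the local-minimum escape steps of Algorithm 2 are deliberately omitted):

```python
# One simplified NICCD-style step: Newton when the Hessian is positive definite,
# otherwise the better of the Newton and ICCD candidates (cf. Algorithm 2).
import numpy as np

def niccd_step(theta, f, grad_f, hess_f, iccd_step):
    g, H = grad_f(theta), hess_f(theta)
    newton = theta - np.linalg.pinv(H) @ g
    if np.all(np.linalg.eigvalsh((H + H.T) / 2.0) > 0):
        return newton                       # positive definite Hessian: pure Newton step
    ccd = iccd_step(theta)                  # ICCD candidate (Algorithm 1)
    return ccd if f(ccd) < f(newton) else newton
```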

4.1. Simulation I

This simulation aims to verify the effectiveness of the proposed approach using SCARA and UR5. The specific values of the target joint variables ( Θ g o a l ) and the initial joint variables ( Θ i n 1 , Θ i n 2 ) with respect to SCARA and UR5 are shown in Table 3. According to Equation (1), the goal configurations of the end-effector frames of the two robot manipulators relative to Θ g o a l are described as
T g o a l = 0 . 8479 0 . 5301 0 160 . 1408 0 . 5301 0 . 8479 0 383 . 1681 0 0 1 . 0000 520 . 0000 0 0 0 1 . 0000 , ( SCARA ) , T g o a l = 0 . 9592 0 . 0838 0 . 2699 93 . 1191 0 . 2823 0 . 3247 0 . 9027 20 . 4293 0 . 0120 0 . 9421 0 . 3351 716 . 5883 0 0 0 1 . 0000 , ( UR 5 ) .
The inverse kinematics are calculated using the NICCD and the NR method, with Θ_{in1} and Θ_{in2} as initial values. In order to illustrate the effectiveness of the proposed method, the angle error and position error are defined as
Err_R = trace( (R_{goal} − R_{out})^T (R_{goal} − R_{out}) ), Err_P = (P_{goal} − P_{out})^T (P_{goal} − P_{out}),
where R g o a l and P g o a l are the elements in T g o a l , and R o u t and P o u t are the elements in T o u t , which are obtained by the forward kinematics calculated by either the NICCD or the NR method with respect to Θ o u t . The position and angle errors of the initial selected joint angles of the SCARA and the UR5 versus the convergence times of the NICCD and the NR methods are illustrated in Figure 2.
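These error measures are straightforward to evaluate; a small Python sketch (our own, using the notation above) is:

```python
# Angle and position errors Err_R and Err_P between a goal pose and a solved pose.
import numpy as np

def pose_errors(T_goal, T_out):
    R_goal, P_goal = T_goal[:3, :3], T_goal[:3, 3]
    R_out, P_out = T_out[:3, :3], T_out[:3, 3]
    err_r = np.trace((R_goal - R_out).T @ (R_goal - R_out))
    err_p = float((P_goal - P_out) @ (P_goal - P_out))
    return err_r, err_p
```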
From Figure 2, when the initial value is Θ i n 1 , the NICCD method converges more rapidly than the NR method with respect to SCARA. However, it is slower with respect to UR5. When the initial value is Θ i n 2 , the NICCD method still converges, but the NR method diverges with respect to the inverse kinematics of SCARA and UR5. In addition, the NICCD method has less fluctuation in both the angle error and the position error during the iteration process. Therefore, it can be concluded that the proposed approach is more stable and effective than the NR method.

4.2. Simulation II

This second simulation involves testing the accuracy of the proposed approach using the Stanford Arm and Barrett Technology’s WAM7R robot manipulator. The specific values of the target joint variables ( Θ g o a l ) and the initial joint variables ( Θ in ) of the Stanford Arm and the WAM7R manipulator are shown in Table 4. According to Equation (1), the goal configurations of the end-effector frames of the two robot manipulators relative to Θ g o a l are described as
T g o a l = 0 . 6641 0 . 0505 0 . 7459 549 . 2836 0 . 6213 0 . 5177 0 . 5882 42 . 3986 0 . 4159 0 . 8541 0 . 3124 582 . 0788 0 0 0 1 . 0000 , ( Stanford Arm ) , T g o a l = 0 . 0765 0 . 9970 0 . 0139 346 . 4601 0 . 7721 0 . 0504 0 . 6335 309 . 6627 0 . 6309 0 . 0592 0 . 7736 381 . 6887 0 0 0 1 . 0000 , ( WAM 7 R ) .
With the convergence check disabled, 150 iterations of the NICCD and NR methods were conducted. The position and angle errors of the initially selected joint angles of the Stanford Arm and the WAM7R manipulator versus the number of iterations of the NICCD and NR methods are given in Figure 3, where the error is represented on a logarithmic axis. Indeed, from Figure 3, it is clear that the approach proposed in this paper is more accurate than the NR method with respect to both the position error and the angle error.

4.3. Simulation III

This simulation is conducted in order to validate the robustness of the proposed approach by using the 3P, 3R, SCARA, UR5, and Barrett Technology's WAM7R robot manipulators. The difference between this simulation and the prior two is that 500 sets of target joint variables and 500 sets of initial joint variables are generated randomly. The end-effector configurations of the five robots are calculated by forward kinematics relative to the target joint variables. Thereafter, the inverse kinematics of the five robots are solved using the NICCD and NR methods. The correct rate of solving the inverse kinematics of these five robots is shown in Figure 4. Indeed, from Figure 4, for the WAM7R robot manipulator, it is clear that the approach proposed in this paper has a correct rate that is smaller than that of the NR method. Moreover, from Figure 4, for the remaining four robots, it is clear that the approach proposed in this paper has a correct rate that is larger than that of the NR method.

4.4. Simulation IV

This simulation aims to illustrate the fact that the proposed approach has offline programming abilities by using the UR5. The goal here is to create a trajectory that will allow the UR5 to draw the letter 'F' (the red trajectory in Figure 5) in 20 s. The orientation of the end-effector is constant, with the y-axis parallel to that of the base frame and the x- and z-axes opposite to those of the base frame. Specifically, the desired trajectory of the UR5 is shown in the left panel of Figure 5, and the actual trajectory is shown in the right panel. The latter is obtained after discrete and smooth processing of the former. The inverse kinematics solution for each sample point is calculated, and the position, velocity, and acceleration of the joints are given in Figure 6.
The proposed approach takes an average of 0.0057 s per calculation, which demonstrates its efficiency. Comparing Figure 6 of Simulation IV with Figure 2 of Simulation I, the average calculation time of the former is shorter than that of the latter. This is because Simulation IV follows a continuous trajectory, which means that the solution of the last step is taken as the initial value of the next step. In other words, the difference between the initial joint variables and the target joint variables is smaller in Simulation IV than in Simulation I. Accordingly, it can be concluded that the proposed approach is appropriate for offline programming and that it is advantageous with respect to the inverse kinematics of continuous trajectories.

4.5. Simulation V

This simulation aims to verify the improvement of the ICCD method (which treats consecutive prismatic joints or consecutive parallel revolute joints as a group) over the traditional CCD method (in which each joint is adjusted individually), and to show the evolution process of the NICCD method. The UR5 manipulator, which has consecutive parallel revolute joints, meets the configuration requirements. The specific values of the target and initial angles of the UR5 are the same as the data of Θ_{goal} and Θ_{in2} in Simulation I (see Table 3). According to Equation (1), the goal configuration of the end-effector frame of the UR5 relative to Θ_{goal} is described by Equation (76).
The inverse kinematics are calculated using the CCD, ICCD, and NICCD methods, with Θ i n 2 as initial value. The position and angle errors of the initial selected joint angles of the UR5 versus the convergence times of the CCD, ICCD, and NICCD methods are illustrated in Figure 7 and versus the number of iterations are illustrated in Figure 8, where the error is represented on the logarithmic axis.
From Figure 7 and Figure 8, the ICCD method outperforms the CCD method in both convergence time and number of iterations. Especially in the initial iterations, the position error of the ICCD method drops much more rapidly than that of the CCD method. Specifically, the ICCD method reaches a position error of 5.8159 after 2 iterations (0.0032 s), while the CCD method needs 29 iterations (0.0364 s) to achieve the same error. A similar conclusion can be drawn for the angle error.
It is not difficult to conclude from Figure 7 and Figure 8 that the convergence rate of the NICCD method is significantly higher than that of the other two methods. To analyze the reason, Figure 9 shows the evolution process of the NICCD method. From Figure 9, the NICCD method converges after four iterations, among which the ICCD method is used for the first two iterations and Newton's method is used for the last two iterations. The NICCD method proposed in this paper uses the ICCD method to rapidly find a feasible point that is near the true solution and then uses Newton's method to obtain a solution that achieves the desired precision. This avoids the common disadvantage of the CCD and ICCD methods, namely that the convergence rate is slow when the iteration point approaches the optimal point.

5. Conclusions and Future Work

This article presented an efficient approach for inverse kinematics that is suitable for any configuration of robot with revolute or prismatic joints and any number of degrees of freedom. In contrast to traditional methods, this paper transformed the inverse kinematics problem into an objective function optimization problem, which is based on the least-squares form of the angle error and the position error expressed by the PoE formula. Moreover, the gradient and Hessian matrix required to solve this formulation were given. Finally, the NICCD method was proposed to solve the proposed objective function optimization problem.
Five simulations were given in order to illustrate the effectiveness, accuracy, robustness, and generality of the proposed method. Simulation I verified the effectiveness of the proposed method, the results of which suggest that it exhibits less fluctuation and is more effective than the NR method. Simulation II tested the accuracy of the proposed method, the results of which suggest that the approach proposed in this paper can reduce the position and angle errors to the order of 10^{−13} and 10^{−15}, respectively, whereas the NR method only brings the angle error to within 10^{−11}. This also proves the validity of the objective function constructed in Section 3.1. Simulation III demonstrated that the proposed approach is not sensitive to the initial joint variables. Indeed, the correct rates for inverse kinematics calculations with respect to 3P, 3R, SCARA, UR5, and WAM7R are approximately 100%, 100%, 100%, 94.5%, and 93.1%, respectively. Simulation IV illustrated that the proposed approach has offline programming abilities and that it is advantageous with respect to the inverse kinematics of continuous trajectories, since each calculation takes only 0.0057 s on average. Simulation V verified that the ICCD method reduces the position and angle errors much more rapidly than the CCD method in the initial iterations, and that the NICCD method combines the respective advantages of the ICCD and Newton's methods while avoiding their shortcomings.
The future work will mainly focus on the following aspects. First of all, we will introduce the heuristic adaptive truncation criterion [27] into the NICCD method to improve the convergence rate. Next, we will improve the NICCD method to become a more effective method with respect to avoiding a singularity location and local minimum, which, in turn, will enhance the accuracy and robustness of the method. Last but not least, the NICCD method will be applied to practical applications such as trajectory planning and motion control of the robot.

Author Contributions

Y.C. and X.L.: necessary formula and algorithm design; B.H. and Y.J.: programming; G.L. and X.W.: simulations and data analysis.

Funding

This research was funded by National Key R & D Program of China (2016YFC0803000, 2016YFC0803005).

Acknowledgments

Thanks to Shuo Wang and Jiaqi Feng for providing language help and all our colleagues for providing all types of help during the preparation of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NICCD   Newton-improved cyclic coordinate descent
ICCD    improved cyclic coordinate descent
CCD     cyclic coordinate descent
NR      Newton–Raphson
6R      six revolute
PoE     product-of-exponentials
3R      three-link planar arm
SCARA   selective compliance assembly robot arm
3P      Cartesian manipulator

Appendix A

The two iteration formulas (Equations (54) and (55)) of the ICCD algorithm for a single prismatic joint and a single revolute joint are analytical solutions.
Proof. 
The second-order sufficient condition for a local minimizer of the objective function ( f ) of inverse kinematics is the following: suppose that ∇²f is continuous in an open neighborhood of θ*, that ∇f(θ*) = 0, and that ∇²f(θ*) is a positive definite matrix; then θ* is a strict local minimizer of f [26].
Injecting Equation (54) in Equation (38), we obtain
∂f/∂θ_i = 2[ λ ( P_{ti} − P_{ai} )^T v_i + λ ( P_{ai} − P_{ti} )^T v_i ] = 0.    (A1)
According to Equation (41), it can be concluded that
∂²f/∂θ_i² = 2λ > 0.    (A2)
Accordingly, Equation (54) is the iteration formula for a single prismatic joint.
Injecting Equation (55) into Equation (39), we obtain
∂f/∂θ_i = r_{i1} r_{i2} / √(r_{i1}² + r_{i2}²) − r_{i2} r_{i1} / √(r_{i1}² + r_{i2}²) = 0.    (A3)
Injecting Equation (55) into Equation (47), we obtain
∂²f/∂θ_i² = r_{i1} r_{i1} / √(r_{i1}² + r_{i2}²) + r_{i2} r_{i2} / √(r_{i1}² + r_{i2}²) > 0.    (A4)
Accordingly, Equation (55) is the iteration formula for a single revolute joint.□

Appendix B

Here, the required formulas for minimizing the objective function by adjusting two and three consecutive parallel revolute joints at the same time are given.
When there are two consecutive parallel revolute joints ( θ i , θ i + 1 ) in the robot manipulator configuration, the two joints are regarded as variables, and the others are regarded as constants. Accordingly, the gradient of the objective function (Equation (28)) is defined as
∇f(Θ_{i∼i+1}) = [ ∂f/∂θ_i, ∂f/∂θ_{i+1} ]^T,    (A5)
where
∂f/∂θ_i = a_{12} cos(θ_i + θ_{i+1}) + b_{12} sin(θ_i + θ_{i+1}) + a_1 cos(θ_i) + b_1 sin(θ_i),    (A6)
∂f/∂θ_{i+1} = a_{12} cos(θ_i + θ_{i+1}) + b_{12} sin(θ_i + θ_{i+1}) + a_2 cos(θ_{i+1}) + b_2 sin(θ_{i+1}),    (A7)
where
a_{12} = 2λ[ P_{a,i+1}^T Ω_i^2 v_i − P_{ti}^T Ω_i P_{a,i+1} + P_{ti}^T Ω_i^2 v_{i+1} + v_{i+1}^T Ω_i v_i ] − 2 trace( R_{ti}^T Ω_i R_{a,i+1} ),    (A8)
b_{12} = 2λ[ P_{a,i+1}^T Ω_i v_i − P_{ti}^T Ω_i^2 P_{a,i+1} − P_{ti}^T Ω_i v_{i+1} + v_{i+1}^T Ω_i^T Ω_i v_i ] − 2 trace( R_{ti}^T Ω_i^2 R_{a,i+1} ),    (A9)
a_1 = 2λ[ P_{ti}^T Ω_i^2 ( v_i − v_{i+1} ) − v_{i+1}^T Ω_i v_i ],    (A10)
b_1 = 2λ ( Ω_i v_i − P_{ti} )^T Ω_i ( v_i − v_{i+1} ),    (A11)
a_2 = 2λ[ P_{a,i+1}^T Ω_i^2 ( v_i − v_{i+1} ) − v_{i+1}^T Ω_i v_i ],    (A12)
b_2 = 2λ ( P_{a,i+1} − Ω_i v_{i+1} )^T Ω_i ( v_i − v_{i+1} ),    (A13)
where R a i + 1 , P a i + 1 , R t i , and P t i are defined as Equations (29)–(32), respectively. The Hessian matrix of the objective function (Equation(28)) is defined as
H(Θ_{i∼i+1}) = ∇²f(Θ_{i∼i+1}) = \begin{bmatrix} ∂²f/∂θ_i² & ∂²f/∂θ_i∂θ_{i+1} \\ ∂²f/∂θ_{i+1}∂θ_i & ∂²f/∂θ_{i+1}² \end{bmatrix},    (A14)
where
∂²f/∂θ_i² = −a_{12} sin(θ_i + θ_{i+1}) + b_{12} cos(θ_i + θ_{i+1}) − a_1 sin(θ_i) + b_1 cos(θ_i),    (A15)
∂²f/∂θ_i∂θ_{i+1} = ∂²f/∂θ_{i+1}∂θ_i = −a_{12} sin(θ_i + θ_{i+1}) + b_{12} cos(θ_i + θ_{i+1}),    (A16)
∂²f/∂θ_{i+1}² = −a_{12} sin(θ_i + θ_{i+1}) + b_{12} cos(θ_i + θ_{i+1}) − a_2 sin(θ_{i+1}) + b_2 cos(θ_{i+1}).    (A17)
When
∂f/∂θ_i = 0, ∂f/∂θ_{i+1} = 0    (A18)
and
∂²f/∂θ_i² > 0, ∂²f/∂θ_{i+1}² > 0    (A19)
are satisfied at the same time, the objective function (Equation (28)) obtains the local minimum. Accordingly, the iterative formula of the CCD method for joint angles ( θ i , θ i + 1 ) can be calculated as
θ_i^{(k+1)} = atan2( −a_{12} cos(θ_{i+1}^{(k)}) − b_{12} sin(θ_{i+1}^{(k)}) − a_1, b_{12} cos(θ_{i+1}^{(k)}) − a_{12} sin(θ_{i+1}^{(k)}) + b_1 ),    (A20)
θ_{i+1}^{(k+1)} = atan2( −a_{12} cos(θ_i^{(k+1)}) − b_{12} sin(θ_i^{(k+1)}) − a_2, b_{12} cos(θ_i^{(k+1)}) − a_{12} sin(θ_i^{(k+1)}) + b_2 ).    (A21)
When there are three consecutive parallel revolute joints ( θ i , θ i + 1 , θ i + 2 ) in the robot manipulator configuration, the three joints are regarded as variables, and the others are regarded as constants. Accordingly, the gradient of the objective function (Equation (28)) is defined as
∇f(Θ_{i∼i+2}) = [ ∂f/∂θ_i, ∂f/∂θ_{i+1}, ∂f/∂θ_{i+2} ]^T,    (A22)
where
∂f/∂θ_i = a_{123} cos(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + a_{12} cos(θ_i + θ_{i+1}) + b_{12} sin(θ_i + θ_{i+1}) + a_1 cos(θ_i) + b_1 sin(θ_i),    (A23)
∂f/∂θ_{i+1} = a_{123} cos(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + a_{12} cos(θ_i + θ_{i+1}) + b_{12} sin(θ_i + θ_{i+1}) + a_{23} cos(θ_{i+1} + θ_{i+2}) + b_{23} sin(θ_{i+1} + θ_{i+2}) + a_2 cos(θ_{i+1}) + b_2 sin(θ_{i+1}),    (A24)
∂f/∂θ_{i+2} = a_{123} cos(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + a_{23} cos(θ_{i+1} + θ_{i+2}) + b_{23} sin(θ_{i+1} + θ_{i+2}) + a_3 cos(θ_{i+2}) + b_3 sin(θ_{i+2}),    (A25)
where
a_{123} = 2λ[ P_{a,i+2}^T Ω_i^2 v_i − P_{ti}^T Ω_i P_{a,i+2} + P_{ti}^T Ω_i^2 v_{i+2} + v_{i+2}^T Ω_i v_i ] − 2 trace( R_{ti}^T Ω_i R_{a,i+2} ),    (A26)
b_{123} = 2λ[ P_{a,i+2}^T Ω_i v_i − P_{ti}^T Ω_i^2 P_{a,i+2} − P_{ti}^T Ω_i v_{i+2} − v_{i+2}^T Ω_i^2 v_i ] − 2 trace( R_{ti}^T Ω_i^2 R_{a,i+2} ),    (A27)
a_{12} = 2λ ( P_{ti}^T − v_i^T Ω_i ) Ω_i ( v_{i+1} − v_{i+2} ),    (A28)
b_{12} = 2λ ( −P_{ti}^T + v_i^T Ω_i ) Ω_i ( v_{i+2} − v_{i+1} ),    (A29)
a_{23} = 2λ ( P_{a,i+2}^T − v_{i+2}^T Ω_i ) Ω_i ( v_i − v_{i+1} ),    (A30)
b_{23} = 2λ ( −P_{a,i+2}^T + v_{i+2}^T Ω_i ) Ω_i ( v_i − v_{i+1} ),    (A31)
a_1 = 2λ P_{ti}^T Ω_i^2 ( v_i − v_{i+1} ) − 2λ v_{i+1}^T Ω_i v_i,    (A32)
b_1 = 2λ ( −P_{ti}^T + v_i^T Ω_i ) Ω_i ( v_{i+1} − v_i ),    (A33)
a_2 = 2λ v_{i+1}^T Ω_i ( v_i − v_{i+2} ) + 2λ v_{i+2}^T Ω_i v_i,    (A34)
b_2 = 2λ ( v_{i+1}^T − v_{i+2}^T ) Ω_i^2 ( v_{i+1} − v_i ),    (A35)
a_3 = 2λ P_{a,i+2}^T Ω_i^2 ( v_{i+2} − v_{i+1} ) − 2λ v_{i+2}^T Ω_i v_{i+1},    (A36)
b_3 = 2λ ( −P_{a,i+2}^T + v_{i+2}^T Ω_i ) Ω_i ( v_{i+1} − v_{i+2} ),    (A37)
where R a i + 2 , P a i + 2 , R t i , and P t i are defined as Equations (29)–(32), respectively. The Hessian matrix of the objective function (Equation (28)) is defined as
H(Θ_{i∼i+2}) = ∇²f(Θ_{i∼i+2}) = \begin{bmatrix} ∂²f/∂θ_i² & ∂²f/∂θ_i∂θ_{i+1} & ∂²f/∂θ_i∂θ_{i+2} \\ ∂²f/∂θ_{i+1}∂θ_i & ∂²f/∂θ_{i+1}² & ∂²f/∂θ_{i+1}∂θ_{i+2} \\ ∂²f/∂θ_{i+2}∂θ_i & ∂²f/∂θ_{i+2}∂θ_{i+1} & ∂²f/∂θ_{i+2}² \end{bmatrix},    (A38)
where
∂²f/∂θ_i² = −a_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} cos(θ_i + θ_{i+1} + θ_{i+2}) − a_{12} sin(θ_i + θ_{i+1}) + b_{12} cos(θ_i + θ_{i+1}) − a_1 sin(θ_i) + b_1 cos(θ_i),    (A39)
∂²f/∂θ_i∂θ_{i+1} = ∂²f/∂θ_{i+1}∂θ_i = −a_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} cos(θ_i + θ_{i+1} + θ_{i+2}) − a_{12} sin(θ_i + θ_{i+1}) + b_{12} cos(θ_i + θ_{i+1}),    (A40)
∂²f/∂θ_i∂θ_{i+2} = ∂²f/∂θ_{i+2}∂θ_i = −a_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} cos(θ_i + θ_{i+1} + θ_{i+2}),    (A41)
∂²f/∂θ_{i+1}² = −a_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} cos(θ_i + θ_{i+1} + θ_{i+2}) − a_{12} sin(θ_i + θ_{i+1}) + b_{12} cos(θ_i + θ_{i+1}) − a_{23} sin(θ_{i+1} + θ_{i+2}) + b_{23} cos(θ_{i+1} + θ_{i+2}) − a_2 sin(θ_{i+1}) + b_2 cos(θ_{i+1}),    (A42)
∂²f/∂θ_{i+1}∂θ_{i+2} = ∂²f/∂θ_{i+2}∂θ_{i+1} = −a_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} cos(θ_i + θ_{i+1} + θ_{i+2}) − a_{23} sin(θ_{i+1} + θ_{i+2}) + b_{23} cos(θ_{i+1} + θ_{i+2}),    (A43)
∂²f/∂θ_{i+2}² = −a_{123} sin(θ_i + θ_{i+1} + θ_{i+2}) + b_{123} cos(θ_i + θ_{i+1} + θ_{i+2}) − a_{23} sin(θ_{i+1} + θ_{i+2}) + b_{23} cos(θ_{i+1} + θ_{i+2}) − a_3 sin(θ_{i+2}) + b_3 cos(θ_{i+2}).    (A44)
When
\frac{\partial f}{\partial \theta_i} = 0, \quad \frac{\partial f}{\partial \theta_{i+1}} = 0, \quad \frac{\partial f}{\partial \theta_{i+2}} = 0
and
\frac{\partial^2 f}{\partial \theta_i^2} > 0, \quad \frac{\partial^2 f}{\partial \theta_{i+1}^2} > 0, \quad \frac{\partial^2 f}{\partial \theta_{i+2}^2} > 0
are satisfied simultaneously, the objective function (Equation (28)) attains a local minimum. Accordingly, the iterative formulas of the CCD method for the joint angles (θ_i, θ_{i+1}, θ_{i+2}) can be written as
\theta_i^{(k+1)} = \operatorname{atan2}\!\Big( -a_{123}\cos(\theta_{i+1}^{(k)} + \theta_{i+2}^{(k)}) - b_{123}\sin(\theta_{i+1}^{(k)} + \theta_{i+2}^{(k)}) - a_{12}\cos\theta_{i+1}^{(k)} - b_{12}\sin\theta_{i+1}^{(k)} - a_1,\; b_{123}\cos(\theta_{i+1}^{(k)} + \theta_{i+2}^{(k)}) - a_{123}\sin(\theta_{i+1}^{(k)} + \theta_{i+2}^{(k)}) + b_{12}\cos\theta_{i+1}^{(k)} - a_{12}\sin\theta_{i+1}^{(k)} + b_1 \Big),
\theta_{i+1}^{(k+1)} = \operatorname{atan2}\!\Big( -a_{123}\cos(\theta_i^{(k+1)} + \theta_{i+2}^{(k)}) - b_{123}\sin(\theta_i^{(k+1)} + \theta_{i+2}^{(k)}) - a_{12}\cos\theta_i^{(k+1)} - b_{12}\sin\theta_i^{(k+1)} - a_{23}\cos\theta_{i+2}^{(k)} - b_{23}\sin\theta_{i+2}^{(k)} - a_2,\; b_{123}\cos(\theta_i^{(k+1)} + \theta_{i+2}^{(k)}) - a_{123}\sin(\theta_i^{(k+1)} + \theta_{i+2}^{(k)}) + b_{12}\cos\theta_i^{(k+1)} - a_{12}\sin\theta_i^{(k+1)} + b_{23}\cos\theta_{i+2}^{(k)} - a_{23}\sin\theta_{i+2}^{(k)} + b_2 \Big),
\theta_{i+2}^{(k+1)} = \operatorname{atan2}\!\Big( -a_{123}\cos(\theta_i^{(k+1)} + \theta_{i+1}^{(k+1)}) - b_{123}\sin(\theta_i^{(k+1)} + \theta_{i+1}^{(k+1)}) - a_{23}\cos\theta_{i+1}^{(k+1)} - b_{23}\sin\theta_{i+1}^{(k+1)} - a_3,\; b_{123}\cos(\theta_i^{(k+1)} + \theta_{i+1}^{(k+1)}) - a_{123}\sin(\theta_i^{(k+1)} + \theta_{i+1}^{(k+1)}) + b_{23}\cos\theta_{i+1}^{(k+1)} - a_{23}\sin\theta_{i+1}^{(k+1)} + b_3 \Big).
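Analogously, one sweep over three consecutive parallel revolute joints can be sketched as follows. This is only an illustration, not the paper's implementation: the coefficients a_123, b_123, a_12, b_12, a_23, b_23, a_1, b_1, a_2, b_2, a_3, b_3 are assumed to have been evaluated beforehand, and the dictionary-based interface is purely hypothetical.

```python
import math

def ccd_update_three_parallel(th, a, b):
    """One CCD sweep for three consecutive parallel revolute joints.

    th = [theta_i, theta_{i+1}, theta_{i+2}]; a and b are dicts holding the
    precomputed coefficients, e.g. a['123'], b['12'], a['3'] (illustrative names).
    """
    t0, t1, t2 = th
    # Update theta_i with theta_{i+1}, theta_{i+2} held fixed.
    t0 = math.atan2(
        -a['123']*math.cos(t1 + t2) - b['123']*math.sin(t1 + t2)
        - a['12']*math.cos(t1) - b['12']*math.sin(t1) - a['1'],
         b['123']*math.cos(t1 + t2) - a['123']*math.sin(t1 + t2)
        + b['12']*math.cos(t1) - a['12']*math.sin(t1) + b['1'],
    )
    # Update theta_{i+1} using the new theta_i and the old theta_{i+2}.
    t1 = math.atan2(
        -a['123']*math.cos(t0 + t2) - b['123']*math.sin(t0 + t2)
        - a['12']*math.cos(t0) - b['12']*math.sin(t0)
        - a['23']*math.cos(t2) - b['23']*math.sin(t2) - a['2'],
         b['123']*math.cos(t0 + t2) - a['123']*math.sin(t0 + t2)
        + b['12']*math.cos(t0) - a['12']*math.sin(t0)
        + b['23']*math.cos(t2) - a['23']*math.sin(t2) + b['2'],
    )
    # Update theta_{i+2} using the two freshly updated angles.
    t2 = math.atan2(
        -a['123']*math.cos(t0 + t1) - b['123']*math.sin(t0 + t1)
        - a['23']*math.cos(t1) - b['23']*math.sin(t1) - a['3'],
         b['123']*math.cos(t0 + t1) - a['123']*math.sin(t0 + t1)
        + b['23']*math.cos(t1) - a['23']*math.sin(t1) + b['3'],
    )
    return [t0, t1, t2]
```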

Appendix C

Prove that Equation (74) holds.
Proof. 
It’s obvious that this proof can be transformed into proof
\max_{\theta_a, \theta_b, \omega_a, \omega_b} g(\theta_a, \theta_b, \omega_a, \omega_b) = 8,
where
g(\theta_a, \theta_b, \omega_a, \omega_b) = \operatorname{trace}\!\left[ \big( R_a(\theta_a, \omega_a) - R_b(\theta_b, \omega_b) \big)^{\mathrm{T}} \big( R_a(\theta_a, \omega_a) - R_b(\theta_b, \omega_b) \big) \right],
where both R_a(θ_a, ω_a) and R_b(θ_b, ω_b) can be calculated by Equation (8), and θ_a, θ_b, ω_a, ω_b are all arbitrary.
Substituting Equation (8) into Equation (A51), we obtain
g(\theta_a, \theta_b, \omega_a, \omega_b) = -2\,\operatorname{trace}\!\left[ \Omega_a^2 + \Omega_b^2 - \Omega_a \Omega_b \sin\theta_a \sin\theta_b - \Omega_a^2 \cos\theta_a - \Omega_b^2 \cos\theta_b + \Omega_a^2 \Omega_b^2 (1 - \cos\theta_a)(1 - \cos\theta_b) \right].
Because of
\operatorname{trace}(\Omega_a^2) = -2,
\operatorname{trace}(\Omega_b^2) = -2,
\operatorname{trace}(\Omega_a^2 \Omega_b^2) = 1 + (\omega_a \cdot \omega_b)^2,
\operatorname{trace}(\Omega_a \Omega_b) = -2\, \omega_a \cdot \omega_b,
Equation (A52) can be simplified as follows,
g(\theta_a, \theta_b, t) = (6 - 2t^2) + (2t^2 - 2)\cos\theta_a + (2t^2 - 2)\cos\theta_b - 4t \sin\theta_a \sin\theta_b - (2 + 2t^2)\cos\theta_a \cos\theta_b,
where
t = \omega_a \cdot \omega_b \in [-1, 1].
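The four trace identities used above can be checked numerically. The short sketch below is an independent check, not part of the original derivation, and assumes that Ω_a and Ω_b denote the skew-symmetric matrices of random unit vectors ω_a and ω_b.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix Omega such that Omega @ x equals the cross product w x x."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

rng = np.random.default_rng(0)
wa = rng.normal(size=3); wa /= np.linalg.norm(wa)   # random unit axes
wb = rng.normal(size=3); wb /= np.linalg.norm(wb)
Oa, Ob = skew(wa), skew(wb)
t = float(wa @ wb)

assert np.isclose(np.trace(Oa @ Oa), -2.0)
assert np.isclose(np.trace(Ob @ Ob), -2.0)
assert np.isclose(np.trace(Oa @ Oa @ Ob @ Ob), 1.0 + t**2)
assert np.isclose(np.trace(Oa @ Ob), -2.0 * t)
```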
In order to find the extreme value, \nabla g(\theta_a^*, \theta_b^*, t^*) = 0 must be satisfied.
\left.\frac{\partial g}{\partial t}\right|_{\theta_a = \theta_a^*,\, \theta_b = \theta_b^*,\, t = t^*} = -4\left[ t^* - t^*\cos\theta_a^* - t^*\cos\theta_b^* + \sin\theta_a^* \sin\theta_b^* + t^*\cos\theta_a^* \cos\theta_b^* \right] = 0,
\left.\frac{\partial g}{\partial \theta_a}\right|_{\theta_a = \theta_a^*,\, \theta_b = \theta_b^*,\, t = t^*} = -(2t^{*2} - 2)\sin\theta_a^* - 4t^*\cos\theta_a^*\sin\theta_b^* + (2 + 2t^{*2})\sin\theta_a^*\cos\theta_b^* = 0,
\left.\frac{\partial g}{\partial \theta_b}\right|_{\theta_a = \theta_a^*,\, \theta_b = \theta_b^*,\, t = t^*} = -(2t^{*2} - 2)\sin\theta_b^* - 4t^*\sin\theta_a^*\cos\theta_b^* + (2 + 2t^{*2})\cos\theta_a^*\sin\theta_b^* = 0.
To solve Equations (A59)–(A61), we add Equations (A60) and (A61) to obtain
\left.\left( \frac{\partial g}{\partial \theta_a} + \frac{\partial g}{\partial \theta_b} \right)\right|_{\theta_a = \theta_a^*,\, \theta_b = \theta_b^*,\, t = t^*} = 2(1 - t^*)\left[ (1 + t^*)\big( \sin\theta_a^* + \sin\theta_b^* \big) + (1 - t^*)\sin(\theta_a^* + \theta_b^*) \right] = 0.
Therefore, Equation (A62) holds if and only if one of the following conditions is met:
t^* = -1, \quad \theta_a^* + \theta_b^* = 0;
t^* = 1;
t^* = -1, \quad \theta_a^* + \theta_b^* = \pi;
t^* = 1, \quad \theta_a^* - \theta_b^* = \pi;
\theta_a^* = 0, \quad \theta_b^* = 0.
The conditions in Equations (A63) and (A64) do not satisfy Equation (A59), whereas the remaining conditions satisfy Equations (A59)–(A61). Substituting Equations (A65)–(A67) into Equation (A57), we obtain
g(\theta_a, \theta_b, t) = 8, \quad (\theta_a^* + \theta_b^* = \pi,\; t^* = -1);
g(\theta_a, \theta_b, t) = 8, \quad (\theta_a^* - \theta_b^* = \pi,\; t^* = 1);
g(\theta_a, \theta_b, t) = 0, \quad (\theta_a^* = 0,\; \theta_b^* = 0).
Therefore,
\max_{\theta_a, \theta_b, \omega_a, \omega_b} g(\theta_a, \theta_b, \omega_a, \omega_b) = \max_{\theta_a, \theta_b, t} g(\theta_a, \theta_b, t) = 8,
and Equation (74) is established.
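The bound can also be verified numerically by sampling random rotation axes and angles and evaluating trace[(R_a − R_b)^T (R_a − R_b)] directly. The sketch below is an independent sanity check rather than part of the proof; it assumes Rodrigues' formula for the rotation matrix, as in Equation (8).

```python
import numpy as np

def rot(theta, w):
    """Rodrigues' formula: R = I + sin(theta)*Omega + (1 - cos(theta))*Omega^2."""
    O = np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])
    return np.eye(3) + np.sin(theta) * O + (1.0 - np.cos(theta)) * (O @ O)

rng = np.random.default_rng(1)
best = 0.0
for _ in range(20000):
    wa = rng.normal(size=3); wa /= np.linalg.norm(wa)
    wb = rng.normal(size=3); wb /= np.linalg.norm(wb)
    ta, tb = rng.uniform(-np.pi, np.pi, size=2)
    D = rot(ta, wa) - rot(tb, wb)
    best = max(best, np.trace(D.T @ D))
print(best)  # approaches, but never exceeds, the bound of 8
```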

Figure 1. The consecutive prismatic joints and consecutive parallel revolute joints.
Figure 2. The position and angle errors of the initial selected joint angles of the SCARA and the UR5 versus the convergence times of the NICCD and the Newton–Raphson (NR) methods.
Figure 3. The position and angle errors of the initial selected joint angles of the Stanford Arm and the WAM7R manipulator versus the number of iterations of the NICCD and NR methods.
Figure 4. The rate of correct inverse kinematics solutions for these four robots.
Figure 5. The UR5 motion trajectory.
Figure 6. The position, velocity, and acceleration of joints.
Figure 7. The position and angle errors of the initial selected joint angles of the UR5 versus the convergence times of the CCD, ICCD, and NICCD methods.
Figure 8. The position and angle errors of the initial selected joint angles of the UR5 versus the number of iterations of the CCD, ICCD, and NICCD methods.
Figure 9. The evolution process of the NICCD method.
Table 1. The product-of-exponentials (PoE) parameters of the three-link planar arm (3R), the selective compliance assembly robot arm (SCARA), and the Cartesian manipulator (3P).

i | 3R ω_i^T | 3R v_i^T | SCARA ω_i^T | SCARA v_i^T | 3P ω_i^T | 3P v_i^T
1 | [0, 0, 1] | [0, 0, 0] | [0, 0, 1] | [0, 0, 0] | [0, 0, 0] | [0, 0, 1]
2 | [0, 0, 1] | [0, 400, 0] | [0, 0, 1] | [0, 450, 0] | [0, 0, 0] | [0, 1, 0]
3 | [0, 0, 1] | [0, 800, 0] | [0, 0, 0] | [0, 0, 1] | [0, 0, 0] | [1, 0, 0]
4 | – | – | [0, 0, 1] | [0, 850, 0] | – | –
M | 3R: [[1, 0, 0, 1200], [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]]; SCARA: [[1, 0, 0, 850], [0, 1, 0, 0], [0, 0, 1, 420], [0, 0, 0, 1]]; 3P: [[1, 0, 0, 400], [0, 1, 0, 400], [0, 0, 1, 400], [0, 0, 0, 1]]
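For reference, the forward kinematics implied by these PoE parameters can be evaluated with a generic product-of-exponentials routine. The sketch below is only an illustration (not the authors' code); it uses the 3P column of Table 1, whose entries are unambiguous, and the function names are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

def twist_matrix(w, v):
    """4x4 matrix form [S] of the screw axis S = (w, v)."""
    W = np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])
    S = np.zeros((4, 4))
    S[:3, :3] = W
    S[:3, 3] = v
    return S

def poe_fk(screws, thetas, M):
    """Space-frame PoE forward kinematics: T = e^{[S1]q1} ... e^{[Sn]qn} M."""
    T = np.eye(4)
    for (w, v), q in zip(screws, thetas):
        T = T @ expm(twist_matrix(w, v) * q)
    return T @ M

# 3P Cartesian manipulator from Table 1 (all prismatic joints, units of mm).
screws_3p = [([0, 0, 0], [0, 0, 1]),
             ([0, 0, 0], [0, 1, 0]),
             ([0, 0, 0], [1, 0, 0])]
M_3p = np.array([[1, 0, 0, 400],
                 [0, 1, 0, 400],
                 [0, 0, 1, 400],
                 [0, 0, 0, 1]], dtype=float)
print(poe_fk(screws_3p, [10.0, 20.0, 30.0], M_3p)[:3, 3])  # -> [430. 420. 410.]
```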
Table 2. The PoE parameters of the UR5, the Stanford Arm, and the WAM7R.

i | UR5 ω_i^T | UR5 v_i^T | Stanford Arm ω_i^T | Stanford Arm v_i^T | WAM7R ω_i^T | WAM7R v_i^T
1 | [0, 0, 1] | [0, 0, 0] | [0, 0, 1] | [0, 0, 0] | [0, 0, 1] | [0, 0, 0]
2 | [0, 1, 0] | [89, 0, 0] | [1, 0, 0] | [0, 0, 0] | [0, 1, 0] | [0, 0, 0]
3 | [0, 1, 0] | [89, 0, 425] | [0, 0, 0] | [0, 0, 1] | [0, 0, 1] | [0, 0, 0]
4 | [0, 1, 0] | [89, 0, 817] | [0, 0, 1] | [0, 200, 0] | [0, 1, 0] | [550, 0, 45]
5 | [0, 0, 1] | [109, 817, 0] | [0, 1, 0] | [0, 0, 200] | [0, 0, 1] | [0, 0, 0]
6 | [0, 1, 0] | [6, 0, 817] | [1, 0, 0] | [0, 400, 0] | [0, 1, 0] | [850, 0, 0]
7 | – | – | – | – | [0, 0, 1] | [0, 0, 0]
M | UR5: [[1, 0, 0, 817], [0, 0, 1, 191], [0, 1, 0, 6], [0, 0, 0, 1]]; Stanford Arm: [[1, 0, 0, 692], [0, 1, 0, 0], [0, 0, 1, 400], [0, 0, 0, 1]]; WAM7R: [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 910], [0, 0, 0, 1]]
Table 3. The specific values of the target and initial joint variables of the SCARA and the UR5.

         | SCARA: θ_1 (rad), θ_2 (rad), d_3 (mm), θ_4 (rad) | UR5: θ_1, θ_2, θ_3, θ_4, θ_5, θ_6 (rad)
Θ_goal   | 0.2169, 2.1269, 100.0000, 0.2391 | 3.0076, 1.3364, 0.0030, −0.1817, −2.7670, 1.1434
Θ_in1    | −2.1142, 2.6458, −11.9929, 0.4863 | −1.9350, −2.2690, 1.2332, −2.5521, 0.1596, 0.1907
Θ_in2    | −1.5218, −0.6484, −20.0000, 1.1567 | −1.0585, 0.4914, 2.2274, 0.8946, −0.5020, 0.2885
Table 4. The specific values of the target and initial joint variables of the Stanford Arm and the WAM7R manipulator.

         | Stanford Arm: θ_1 (rad), θ_2 (rad), d_3 (mm), θ_4 (rad), θ_5 (rad), θ_6 (rad) | WAM7R: θ_1, θ_2, θ_3, θ_4, θ_5, θ_6, θ_7 (rad)
Θ_goal   | −0.1290, 0.8754, 100.0000, 0.9256, 0.2757, 1.3889 | 0.4088, 2.0346, −2.3493, −1.2559, −3.1283, 2.8344, 1.6732
Θ_in     | −1.0520, 2.2184, −0.3619, 2.5406, −2.9331, 0.2037 | −0.5576, 2.2993, 2.6431, 1.8048, −0.9740, −2.7061, 1.6552
