Article

Towards Higher-Order Zeroing Neural Networks for Calculating Quaternion Matrix Inverse with Application to Robotic Motion Tracking

by Rabeh Abbassi 1, Houssem Jerbi 2, Mourad Kchaou 1, Theodore E. Simos 3,4,5,6,7,*, Spyridon D. Mourtas 8,9 and Vasilios N. Katsikis 8
1 Department of Electrical Engineering, College of Engineering, University of Hail, Hail 81451, Saudi Arabia
2 Department of Industrial Engineering, College of Engineering, University of Hail, Hail 81451, Saudi Arabia
3 Center for Applied Mathematics and Bioinformatics, Gulf University for Science and Technology, West Mishref 32093, Kuwait
4 Laboratory of Inter-Disciplinary Problems of Energy Production, Ulyanovsk State Technical University, 32 Severny Venetz Street, 432027 Ulyanovsk, Russia
5 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung City 40402, Taiwan
6 Data Recovery Key Laboratory of Sichuan Province, Neijiang Normal University, Neijiang 641100, China
7 Section of Mathematics, Department of Civil Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
8 Department of Economics, Mathematics-Informatics and Statistics-Econometrics, National and Kapodistrian University of Athens, Sofokleous 1 Street, 10559 Athens, Greece
9 Laboratory “Hybrid Methods of Modelling and Optimization in Complex Systems”, Siberian Federal University, Prosp. Svobodny 79, 660041 Krasnoyarsk, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(12), 2756; https://doi.org/10.3390/math11122756
Submission received: 16 May 2023 / Revised: 13 June 2023 / Accepted: 15 June 2023 / Published: 18 June 2023
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing, 3rd Edition)

Abstract: The efficient solution of the time-varying quaternion matrix inverse (TVQ-INV) is a challenging but crucial topic, due to the significance of quaternions in many disciplines, including physics, engineering, and computer science. The main goal of this research is to employ the higher-order zeroing neural network (HZNN) strategy to address the TVQ-INV problem. HZNN is a family of zeroing neural network models that corresponds to the hyperpower family of iterative methods with adjustable convergence order. In particular, three novel HZNN models are created in order to solve the TVQ-INV both directly in the quaternion domain and indirectly in the complex and real domains. The noise-handling versions of these models are also presented, and the performance of these models under various types of noise is theoretically and numerically tested. The effectiveness and practicality of these models are further supported by their use in robotic motion tracking. According to the principal results, each of these six models can solve the TVQ-INV effectively, and the HZNN strategy offers a faster convergence rate than the conventional zeroing neural network strategy.

1. Introduction

The real-time solution of the matrix inverse [1,2], which frequently arises in robotics [3], game theory [4], nonlinear systems [5], optimal control [6,7], and neural networks [8], has attracted a lot of interest in recent times. Quaternions, on the other hand, are crucial in a wide range of domains, such as computer graphics [9], signal processing [10], human motion modeling [11], robotics [12,13], navigation [14], quantum mechanics [15], electromagnetism [16], and mathematical physics [17,18]. Let H^{n×n} denote the set of all n×n matrices over the quaternion skew field H = {γ_1 + γ_2 l + γ_3 j + γ_4 k | l² = j² = k² = ljk = −1, γ_1, γ_2, γ_3, γ_4 ∈ R}. Considering that Ã ∈ H^{n×n}, its inverse matrix is denoted by Ã⁻¹ and it is the only solution X̃ that satisfies the next equation [19,20]:
ÃX̃ = I_n,
where I_n is the n×n identity matrix.
Recently, research has begun to focus on time-varying quaternion (TVQ) problems involving matrices, such as the inversion of TVQ matrices [21], solving the dynamic TVQ Sylvester matrix equation [22], addressing the TVQ constrained matrix least-squares problem [23], and solving the TVQ linear matrix equation for square matrices [24]. Furthermore, real-world applications involving TVQ matrices arise in kinematically redundant manipulators of robotic joints [25,26], such as the control of a wearable robotic knee system [27] and of a robotic arm [13], in chaotic systems synchronization [23], mobile manipulator control [21], and image restoration [24]. All of these studies have one thing in common: they all use the zeroing neural network (ZNN) approach to derive the solution.
ZNNs are a subset of recurrent neural networks that are especially good at parallel processing and are used to address time-varying problems. They were initially developed by Zhang et al. to handle the problem of time-varying matrix inversion [28], but their subsequent iterations were dynamic models used to compute the time-varying MP-inverse of full-row/column rank matrices [29,30,31,32] in the real and complex domains. Today their use has expanded to include the resolution of generalized inversion problems, including the time-varying Drazin inverse [33], time-varying ML-weighted pseudoinverse [34], time-varying outer inverse [35], time-varying pseudoinverse [36], and core and core-EP inverse [37], as well as linear programming tasks [38], quadratic programming tasks [39,40], systems of nonlinear equations [41,42], and systems of linear equations [43,44]. The creation of a ZNN model typically involves two fundamental steps. First, one defines an error matrix equation (EME) function E(t). Second, the following ZNN dynamical system (under the linear activation function) is employed:
Ė(t) = −λE(t),
where the operator (˙) denotes the time derivative. Additionally, the design parameter λ > 0 is a real number that regulates the model’s convergence speed; a greater value of λ increases the convergence speed [45,46,47]. It is important to point out that continual learning is defined as learning continually from non-stationary data while simultaneously transferring and preserving prior knowledge. As time evolves, the architecture of the ZNN relies on driving each element of the error function E(t) to zero, which is accomplished by the continuous-time learning rule that follows from the definition of the EME function (2). Therefore, it is possible to think of the error function as a tool for tracking the learning of ZNN models.
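As a concrete illustration of the two-step recipe above, the following minimal Python sketch integrates the ZNN design Ė(t) = −λE(t) for the EME E(t) = I − A(t)X(t). The 2×2 test matrix is a hypothetical example, not one from this paper, and forward Euler is used only to keep the sketch dependency-free:

```python
import numpy as np

def znn_inverse(A, dA, X0, lam=10.0, dt=1e-3, T=2.0):
    """ZNN for time-varying inversion: zero E(t) = I - A(t) X(t) via (2).

    From E_dot = -lam*E and E_dot = -dA X - A X_dot it follows that
    X_dot = solve(A, lam*(I - A X) - dA X).  Forward Euler keeps the
    sketch self-contained; stiff solvers are preferable in practice."""
    X, n = X0.copy(), X0.shape[0]
    for k in range(int(T / dt)):
        t = k * dt
        E = np.eye(n) - A(t) @ X
        X = X + dt * np.linalg.solve(A(t), lam * E - dA(t) @ X)
    return X

# hypothetical smoothly time-varying, nonsingular test matrix
A = lambda t: np.array([[3 + np.sin(t), 1.0], [0.5, 4 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
X = znn_inverse(A, dA, np.eye(2))
print(np.max(np.abs(A(2.0) @ X - np.eye(2))))  # small residual
```

Increasing `lam` shrinks the tracking error at the price of stiffer dynamics, mirroring the role of λ described above.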

1.1. The Higher-Order ZNN Design

In recent years, there has been a great deal of research and development into the hyperpower iteration family [48,49,50,51,52]. However, because iterative approaches apply to discrete-time models and often require approximate starting points that may not be easily supplied, various continuous-time higher-order ZNN (HZNN) models were presented and studied in Refs. [36,43,53]. Beginning with the hyperpower iterations of order p ≥ 2 [36,52]:
W_{k+1} = W_k ∑_{i=0}^{p−1} E_k^i,
where E_k ∈ R^{n×n} denotes a suitable time-invariant EME, it is possible to extend the time-invariant iteration (3) to a time-varying scenario. That is, taking into account the next EME:
E_{H^p}(t) = ∑_{i=1}^{p−1} E^i(t),
where E^i(t) denotes the i-th power of E(t) ∈ R^{n×n} and p ≥ 2, the ZNN architecture and the hyperpower iterations approach can be combined to find the online solution to a time-varying problem. This yields the next comprehensive HZNN dynamical evolution [36,43,53] (under the linear activation function):
Ė(t) ≈ −λE_{H^p}(t).
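For intuition on the hyperpower family in (3), the sketch below runs the order-p iteration on a small constant matrix; the test matrix and starting-point rule are illustrative assumptions (p = 2 recovers the classical Newton–Schulz scheme):

```python
import numpy as np

def hyperpower_inverse(A, p=3, iters=30):
    """Order-p hyperpower iteration (3): W_{k+1} = W_k * sum_{i=0}^{p-1} E_k^i,
    with E_k = I - A W_k; p = 2 is the classical Newton-Schulz scheme."""
    n = A.shape[0]
    # standard safe starting point: guarantees the spectral radius of E_0 < 1
    W = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        E = np.eye(n) - A @ W
        S, P = np.eye(n), np.eye(n)
        for _ in range(p - 1):   # accumulate I + E + ... + E^{p-1}
            P = P @ E
            S = S + P
        W = W @ S
    return W

A = np.array([[4.0, 1.0], [2.0, 3.0]])
W = hyperpower_inverse(A, p=3)
print(np.allclose(W @ A, np.eye(2)))  # True
```

The need for a good starting point `W` is exactly the limitation of discrete-time schemes that motivates the continuous-time HZNN design (5).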

1.2. The Noise-Handling Higher-Order ZNN Design

Every form of noise has a significant impact on the precision of the suggested ZNN methods, and any preliminary processing for noise reduction adds time, sacrificing the desired real-time performance. As a result, an enhanced noise-handling model for handling time-varying problems was developed in Ref. [54]. In particular, the noise-handling ZNN (NZNN) dynamical system below was introduced [54]:
Ė(t) = −λE(t) − ζ∫_0^t E(τ)dτ + N(t),
where ζ > 0 and λ > 0 are design parameters that control the NZNN convergence, while N(t) stands for the appropriately dimensioned matrix-form noise. It should be noted that Ref. [43] introduced and examined the generalization of the NZNN architecture to the NHZNN formulation for estimating a time-varying problem. The generic NHZNN dynamical evolution may be acquired by integrating the hyperpower iterations process and the NZNN design, using the same rationale as the HZNN design in (4) and (5):
Ė(t) ≈ −λE_{H^p}(t) − ζ∫_0^t E_{H^p}(τ)dτ + N(t).
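The effect of the extra integral term in (6) can be seen on a scalar toy error dynamics (an illustrative assumption, not a model from this paper): under constant noise, the plain ZNN error settles at N/λ, while the NZNN error is driven to zero:

```python
import numpy as np

lam, zeta, Nc = 10.0, 10.0, 5.0   # design parameters and a constant noise level
dt, T = 1e-3, 10.0
e_znn, e_nznn, s = 1.0, 1.0, 0.0  # errors and the running integral of e_nznn
for _ in range(int(T / dt)):
    e_znn += dt * (-lam * e_znn + Nc)                # design (2) plus noise
    s += dt * e_nznn
    e_nznn += dt * (-lam * e_nznn - zeta * s + Nc)   # design (6), scalar case
print(round(e_znn, 3))     # settles near Nc / lam = 0.5
print(abs(e_nznn) < 1e-2)  # True: the integral term rejects the noise
```

The integral state `s` converges to Nc/ζ, exactly cancelling the constant noise — the mechanism behind the convergence results of Section 3.2.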

1.3. Problem Formulation and Key Contributions

In this paper, the TVQ inverse (TVQ-INV) problem will be addressed using the HZNN and NHZNN approaches. In particular, the following TVQ matrix equation problem is taken into consideration for computing the TVQ-INV of any nonsingular Ã(t) ∈ H^{n×n} [19,20]:
I_n − Ã(t)X̃(t) = 0_n,
where the TVQ matrix X̃(t) = X_1(t) + X_2(t)l + X_3(t)j + X_4(t)k ∈ H^{n×n}, with X_i(t) ∈ R^{n×n} for i = 1, 2, 3, 4, is the TVQ matrix of interest, 0_n refers to the zero n×n matrix and t ∈ [0, t_f) ⊆ [0, +∞) is the time. Additionally, we consider that Ã(t) is a smoothly time-varying matrix and that its time derivative is either given or can be accurately estimated. It is important to note that (8) is the TVQ-INV problem and that it is satisfied only for X̃(t) = Ã⁻¹(t). Of greater significance, we will determine whether a direct solution in the quaternion domain or an indirect solution through representation in the complex and real domains is more efficient. To do this, we will create three HZNN and three NHZNN models, one for each domain, and rigorously validate them on two numerical simulations under various types of noise and on a real-world application involving robotic motion tracking. By providing a theoretical analysis of all presented models, this research strengthens the existing body of literature.
The following notations are employed in the remainder of this article: 0_{u×n} refers to the zero u×n matrix; 1_n refers to the n×1 matrix of ones; ‖·‖_F is the matrix Frobenius norm; vec(·) denotes the vectorization process; ⊙ denotes elementwise multiplication; ⊗ denotes the Kronecker product; the operator (·)^T implies transposition.
The key contributions of the paper are listed next:
(1) For the first time, the TVQ-INV problem is addressed through the HZNN and NHZNN approaches;
(2) With the purpose of addressing the TVQ-INV problem, three novel HZNN models and three novel NHZNN models are provided;
(3) The models are subjected to a theoretical analysis that validates them;
(4) Numerical simulations and applications under various types of noises are carried out to complement the theoretical concepts.
The rest of the article is divided into the following sections. Section 2 presents the three HZNN and three NHZNN models, while their theoretical analysis is presented in Section 3. Numerical simulations and applications are explored in Section 4 and, finally, Section 5 provides the concluding thoughts and comments.

2. Higher Order and Noise-Handling ZNN Models in Solving the TVQ-INV

Three HZNN models will be created in this section, each of which will operate in a distinct domain. We consider that Ã(t) ∈ H^{n×n} is a differentiable TVQ matrix and that X̃(t) ∈ H^{n×n} is the unknown TVQ matrix to be found.

2.1. The HZNNQ p Model

The product of two TVQ matrices, Ã(t) = A_1(t) + A_2(t)l + A_3(t)j + A_4(t)k ∈ H^{n×n} and X̃(t) = X_1(t) + X_2(t)l + X_3(t)j + X_4(t)k ∈ H^{n×n}, with A_i(t), X_i(t) ∈ R^{n×n} for i = 1, …, 4, is:
Ã(t)X̃(t) = Z̃(t) = Z_1(t) + Z_2(t)l + Z_3(t)j + Z_4(t)k ∈ H^{n×n},
where
Z_1(t) = A_1(t)X_1(t) − A_2(t)X_2(t) − A_3(t)X_3(t) − A_4(t)X_4(t),
Z_2(t) = A_1(t)X_2(t) + A_2(t)X_1(t) + A_3(t)X_4(t) − A_4(t)X_3(t),
Z_3(t) = A_1(t)X_3(t) + A_3(t)X_1(t) + A_4(t)X_2(t) − A_2(t)X_4(t),
Z_4(t) = A_1(t)X_4(t) + A_4(t)X_1(t) + A_2(t)X_3(t) − A_3(t)X_2(t),
with Z_i(t) ∈ R^{n×n} for i = 1, …, 4. According to (8), setting Z̃(t) = I_n in the case of the TVQ-INV, the next system of equations is satisfied:
A_1(t)X_1(t) − A_2(t)X_2(t) − A_3(t)X_3(t) − A_4(t)X_4(t) = I_n,
A_2(t)X_1(t) + A_1(t)X_2(t) − A_4(t)X_3(t) + A_3(t)X_4(t) = 0_n,
A_3(t)X_1(t) + A_4(t)X_2(t) + A_1(t)X_3(t) − A_2(t)X_4(t) = 0_n,
A_4(t)X_1(t) − A_3(t)X_2(t) + A_2(t)X_3(t) + A_1(t)X_4(t) = 0_n,
where X_i(t), i = 1, …, 4, are the unknown matrices of interest. Then, setting
B(t) = [A_1(t), −A_2(t), −A_3(t), −A_4(t); A_2(t), A_1(t), −A_4(t), A_3(t); A_3(t), A_4(t), A_1(t), −A_2(t); A_4(t), −A_3(t), A_2(t), A_1(t)] ∈ R^{4n×4n},
Y(t) = [X_1(t); X_2(t); X_3(t); X_4(t)] ∈ R^{4n×n},  Î = [I_n; 0_n; 0_n; 0_n] ∈ R^{4n×n},
where semicolons separate block rows,
we have the following EME:
E(t) = Î − B(t)Y(t).
The fact that E(t) ∈ R^{4n×n} is not a square EME and cannot be applied to the HZNN design in (5) is significant. Because of this, we may replace the E(t) of (13) with the following equation without loss of generality:
E(t) = (Î − B(t)Y(t))Î^T,
and its first time derivative is:
Ė(t) = −(Ḃ(t)Y(t) + B(t)Ẏ(t))Î^T.
Then, the following EME can be defined based on the HZNN design:
E_{H^p}(t) = ∑_{i=1}^{p−1} ((Î − B(t)Y(t))Î^T)^i,
while its derivative is:
Ė_{H^p}(t) = ∑_{i=1}^{p−1} ∑_{j=0}^{i−1} ((Î − B(t)Y(t))Î^T)^j (−(Ḃ(t)Y(t) + B(t)Ẏ(t))Î^T) ((Î − B(t)Y(t))Î^T)^{i−1−j}.
The replacement (Î − B(t)Y(t))Î^T = 0_{4n×4n} in (17) converts each of the summands into the null matrix, except the summand corresponding to i = 1, j = 0. So, (17) is estimated as:
Ė_{H^p}(t) ≈ −(Ḃ(t)Y(t) + B(t)Ẏ(t))Î^T = Ė(t).
The next outcome is obtained by substituting E H p ( t ) of (16) and E ˙ H p ( t ) of (17) into (5):
−(Ḃ(t)Y(t) + B(t)Ẏ(t))Î^T = −λ∑_{i=1}^{p−1} E^i(t),
and solving in terms of Y ˙ ( t ) yields:
−B(t)Ẏ(t)Î^T = −λ∑_{i=1}^{p−1} E^i(t) + Ḃ(t)Y(t)Î^T.
The dynamic model of (20) can then be made simpler with the use of vectorization and Kronecker product:
−(Î ⊗ B(t)) vec(Ẏ(t)) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ḃ(t)Y(t)Î^T).
Furthermore, after setting:
K_1(t) = −(Î ⊗ B(t)) ∈ R^{16n²×4n²},  M_1(t) = K_1^T(t)K_1(t) ∈ R^{4n²×4n²},
K_2(t) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ḃ(t)Y(t)Î^T) ∈ R^{16n²},  M_2(t) = K_1^T(t)K_2(t) ∈ R^{4n²},
y(t) = vec(Y(t)) ∈ R^{4n²},  ẏ(t) = vec(Ẏ(t)) ∈ R^{4n²},
we arrive to the subsequent HZNN model:
M 1 ( t ) y ˙ ( t ) = M 2 ( t ) .
The suggested HZNN model to be utilized when addressing the TVQ-INV of (8) is the dynamic model of (23), denoted by the notation HZNNQ p .
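The construction of B(t) and Y(t) can be sanity-checked numerically: with the sign pattern recovered from the componentwise product (10)–(12), multiplying the block matrix by the stacked coefficients must reproduce the coefficients of the quaternion matrix product. A short numpy sketch with random test matrices (illustrative, not from the paper):

```python
import numpy as np

def qprod(A, X):
    """Componentwise quaternion matrix product, cf. (10)-(11)."""
    A1, A2, A3, A4 = A
    X1, X2, X3, X4 = X
    return (A1 @ X1 - A2 @ X2 - A3 @ X3 - A4 @ X4,
            A1 @ X2 + A2 @ X1 + A3 @ X4 - A4 @ X3,
            A1 @ X3 + A3 @ X1 + A4 @ X2 - A2 @ X4,
            A1 @ X4 + A4 @ X1 + A2 @ X3 - A3 @ X2)

def build_B(A):
    """Block matrix B(t) of (12) acting on the stack [X1; X2; X3; X4]."""
    A1, A2, A3, A4 = A
    return np.block([[A1, -A2, -A3, -A4],
                     [A2,  A1, -A4,  A3],
                     [A3,  A4,  A1, -A2],
                     [A4, -A3,  A2,  A1]])

rng = np.random.default_rng(0)
A = tuple(rng.standard_normal((3, 3)) for _ in range(4))
X = tuple(rng.standard_normal((3, 3)) for _ in range(4))
print(np.allclose(build_B(A) @ np.vstack(X), np.vstack(qprod(A, X))))  # True
```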

2.2. The NHZNNQ p Model

Additionally, the next outcome is obtained by substituting E H p ( t ) of (16) and E ˙ H p ( t ) of (17) into (7):
−(Ḃ(t)Y(t) + B(t)Ẏ(t))Î^T = −λ∑_{i=1}^{p−1} E^i(t) − ζ∫_0^t E_{H^p}(τ)dτ + N(t),
and solving in terms of Y ˙ ( t ) outputs:
−B(t)Ẏ(t)Î^T = −λ∑_{i=1}^{p−1} E^i(t) + Ḃ(t)Y(t)Î^T − ζ∫_0^t E_{H^p}(τ)dτ + N(t).
The dynamic model of (25) can then be made simpler with the use of vectorization and Kronecker product:
−(Î ⊗ B(t)) vec(Ẏ(t)) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ḃ(t)Y(t)Î^T − ζ∫_0^t E_{H^p}(τ)dτ + N(t)).
Furthermore, after setting:
r_q(t) = vec(∫_0^t E_{H^p}(τ)dτ Î) ∈ R^{4n²},  ṙ_q(t) = vec(∑_{i=1}^{p−1} E^i(t) Î) ∈ R^{4n²},
K_3(t) = [I_{4n²}, 0_{4n²×4n²}; 0_{16n²×4n²}, K_1(t)] ∈ R^{20n²×8n²},  M_3(t) = K_3^T(t)K_3(t) ∈ R^{8n²×8n²},
K_4(t) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ḃ(t)Y(t)Î^T + N(t) − ζ∫_0^t E_{H^p}(τ)dτ) ∈ R^{16n²},
K_5(t) = [ṙ_q(t); K_4(t)] ∈ R^{20n²},  M_4(t) = K_3^T(t)K_5(t) ∈ R^{8n²},
y_N(t) = [r_q(t); y(t)] ∈ R^{8n²},  ẏ_N(t) = [ṙ_q(t); ẏ(t)] ∈ R^{8n²},
we arrive to the subsequent NHZNN model:
M 3 ( t ) y ˙ N ( t ) = M 4 ( t ) .
The suggested NHZNN model to be utilized when addressing the TVQ-INV of (8) under various types of noises is the dynamic model of (28), denoted by the notation NHZNNQ p .

2.3. The HZNNQC p Model

The following is a complex representation of the TVQ matrix A ˜ ( t ) [22,55]:
Ǎ(t) = [A_1(t) − A_4(t)l, −A_3(t) − A_2(t)l; A_3(t) − A_2(t)l, A_1(t) + A_4(t)l] ∈ C^{2n×2n}.
Taking into account that the complex representation of the TVQ matrix acquired by multiplying two TVQ matrices is similar to the TVQ matrix acquired by multiplying the complex representations of two TVQ matrices (Theorem 1 in Ref. [22]), addressing (8) is equivalent to addressing the complex matrix equation:
A ˇ ( t ) X ˇ ( t ) = I 2 n ,
where X ˇ ( t ) C 2 n × 2 n , is the unknown matrix of interest, i.e., the complex representation of the TVQ matrix X ˜ ( t ) . Therefore, we set the next EME:
E(t) = I_{2n} − Ǎ(t)X̌(t),
and its first time derivative is:
Ė(t) = −Ǎ̇(t)X̌(t) − Ǎ(t)X̌̇(t).
Then, the following EME can be defined based on the HZNN design:
E_{H^p}(t) = ∑_{i=1}^{p−1} (I_{2n} − Ǎ(t)X̌(t))^i,
while its derivative is:
Ė_{H^p}(t) = ∑_{i=1}^{p−1} ∑_{j=0}^{i−1} (I_{2n} − Ǎ(t)X̌(t))^j (−Ǎ(t)X̌̇(t) − Ǎ̇(t)X̌(t)) (I_{2n} − Ǎ(t)X̌(t))^{i−1−j}.
The replacement I_{2n} − Ǎ(t)X̌(t) = 0_{2n×2n} in (34) converts each of the summands into the null matrix, except the summand corresponding to i = 1, j = 0. So, (34) is estimated as:
Ė_{H^p}(t) ≈ −Ǎ(t)X̌̇(t) − Ǎ̇(t)X̌(t) = Ė(t).
The next outcome is obtained by substituting E H p ( t ) of (33) and E ˙ H p ( t ) of (34) into (5):
−Ǎ(t)X̌̇(t) − Ǎ̇(t)X̌(t) = −λ∑_{i=1}^{p−1} E^i(t),
and solving in terms of X ˇ ˙ ( t ) outputs:
−Ǎ(t)X̌̇(t) = −λ∑_{i=1}^{p−1} E^i(t) + Ǎ̇(t)X̌(t).
The dynamic model of (37) can then be made simpler with the use of vectorization and Kronecker product:
−(I_{2n} ⊗ Ǎ(t)) vec(X̌̇(t)) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ǎ̇(t)X̌(t)).
Furthermore, after setting:
N_1(t) = −(I_{2n} ⊗ Ǎ(t)) ∈ C^{4n²×4n²},  N_2(t) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ǎ̇(t)X̌(t)) ∈ C^{4n²},
k(t) = vec(X̌(t)) ∈ C^{4n²},  k̇(t) = vec(X̌̇(t)) ∈ C^{4n²},
we arrive to the subsequent HZNN model:
N 1 ( t ) k ˙ ( t ) = N 2 ( t ) .
The suggested HZNN model to be utilized when addressing the TVQ-INV of (8) under complex representation of the input TVQ matrix A ˜ ( t ) is the dynamic model of (40), denoted by the notation HZNNQC p .
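The sign convention adopted in (29) can be verified numerically: the complex representation of a product of two random quaternion matrices must equal the product of their complex representations (cf. Theorem 1 in Ref. [22]). A numpy sketch under the stated convention, with illustrative random test matrices:

```python
import numpy as np

def qprod(A, X):
    A1, A2, A3, A4 = A
    X1, X2, X3, X4 = X
    return (A1 @ X1 - A2 @ X2 - A3 @ X3 - A4 @ X4,
            A1 @ X2 + A2 @ X1 + A3 @ X4 - A4 @ X3,
            A1 @ X3 + A3 @ X1 + A4 @ X2 - A2 @ X4,
            A1 @ X4 + A4 @ X1 + A2 @ X3 - A3 @ X2)

def crep(A):
    """2n x 2n complex representation with the sign convention of (29)."""
    A1, A2, A3, A4 = A
    return np.block([[A1 - A4 * 1j, -A3 - A2 * 1j],
                     [A3 - A2 * 1j,  A1 + A4 * 1j]])

rng = np.random.default_rng(1)
A = tuple(rng.standard_normal((2, 2)) for _ in range(4))
X = tuple(rng.standard_normal((2, 2)) for _ in range(4))
print(np.allclose(crep(A) @ crep(X), crep(qprod(A, X))))  # True
```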

2.4. The NHZNNQC p Model

Additionally, the next outcome is obtained by substituting E H p ( t ) of (33) and E ˙ H p ( t ) of (34) into (7):
−Ǎ(t)X̌̇(t) − Ǎ̇(t)X̌(t) = −λ∑_{i=1}^{p−1} E^i(t) − ζ∫_0^t E_{H^p}(τ)dτ + N(t),
and solving in terms of X ˇ ˙ ( t ) outputs:
−Ǎ(t)X̌̇(t) = −λ∑_{i=1}^{p−1} E^i(t) + Ǎ̇(t)X̌(t) − ζ∫_0^t E_{H^p}(τ)dτ + N(t).
The dynamic model of (42) can then be made simpler with the use of vectorization and Kronecker product:
−(I_{2n} ⊗ Ǎ(t)) vec(X̌̇(t)) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ǎ̇(t)X̌(t) − ζ∫_0^t E_{H^p}(τ)dτ + N(t)).
Furthermore, after setting:
r_c(t) = vec(∫_0^t E_{H^p}(τ)dτ) ∈ C^{4n²},  ṙ_c(t) = vec(∑_{i=1}^{p−1} E^i(t)) ∈ C^{4n²},
N_3(t) = [I_{4n²}, 0_{4n²×4n²}; 0_{4n²×4n²}, N_1(t)] ∈ C^{8n²×8n²},
N_4(t) = [ṙ_c(t); vec(−λ∑_{i=1}^{p−1} E^i(t) + Ǎ̇(t)X̌(t) + N(t)) − ζ r_c(t)] ∈ C^{8n²},
k_N(t) = [r_c(t); k(t)] ∈ C^{8n²},  k̇_N(t) = [ṙ_c(t); k̇(t)] ∈ C^{8n²},
we arrive to the subsequent NHZNN model:
N 3 ( t ) k ˙ N ( t ) = N 4 ( t ) .
The suggested NHZNN model to be utilized when addressing the TVQ-INV of (8) under various types of noises is the dynamic model of (45), denoted by the notation NHZNNQC p .

2.5. The HZNNQR p Model

The following is a real representation of the TVQ matrix A ˜ ( t ) [24]:
A(t) = [A_1(t), −A_4(t), A_3(t), A_2(t); A_4(t), A_1(t), −A_2(t), A_3(t); −A_3(t), A_2(t), A_1(t), A_4(t); −A_2(t), −A_3(t), −A_4(t), A_1(t)] ∈ R^{4n×4n}.
Taking into account that the real representation of the TVQ matrix acquired by multiplying two TVQ matrices is similar to the TVQ matrix acquired by multiplying the real representations of two TVQ matrices (Corollary 1 in Ref. [24]), addressing (8) is equivalent to addressing the real matrix equation:
A ( t ) X ( t ) = I 4 n ,
where X ( t ) R 4 n × 4 n , is the unknown matrix of interest, i.e., the real representation of the TVQ matrix X ˜ ( t ) . Therefore, we set the next EME:
E(t) = I_{4n} − A(t)X(t),
and its first time derivative is:
Ė(t) = −Ȧ(t)X(t) − A(t)Ẋ(t).
Then, the following EME can be defined based on the HZNN design:
E_{H^p}(t) = ∑_{i=1}^{p−1} (I_{4n} − A(t)X(t))^i,
while its derivative is:
Ė_{H^p}(t) = ∑_{i=1}^{p−1} ∑_{j=0}^{i−1} (I_{4n} − A(t)X(t))^j (−A(t)Ẋ(t) − Ȧ(t)X(t)) (I_{4n} − A(t)X(t))^{i−1−j}.
The replacement I_{4n} − A(t)X(t) = 0_{4n×4n} in (51) converts each of the summands into the null matrix, except the summand corresponding to i = 1, j = 0. So, (51) is estimated as:
Ė_{H^p}(t) ≈ −A(t)Ẋ(t) − Ȧ(t)X(t) = Ė(t).
The next outcome is obtained by substituting E H p ( t ) of (50) and E ˙ H p ( t ) of (51) into (5):
−A(t)Ẋ(t) − Ȧ(t)X(t) = −λ∑_{i=1}^{p−1} E^i(t),
and solving in terms of X ˙ ( t ) yields:
−A(t)Ẋ(t) = −λ∑_{i=1}^{p−1} E^i(t) + Ȧ(t)X(t).
The dynamic model of (54) can then be made simpler with the use of vectorization and Kronecker product:
−(I_{4n} ⊗ A(t)) vec(Ẋ(t)) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ȧ(t)X(t)).
Furthermore, after setting:
L_1(t) = −(I_{4n} ⊗ A(t)) ∈ R^{16n²×16n²},  L_2(t) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ȧ(t)X(t)) ∈ R^{16n²},
x(t) = vec(X(t)) ∈ R^{16n²},  ẋ(t) = vec(Ẋ(t)) ∈ R^{16n²},
we arrive to the subsequent HZNN model:
L 1 ( t ) x ˙ ( t ) = L 2 ( t ) .
The suggested HZNN model to be utilized when addressing the TVQ-INV of (8) under real representation of the input TVQ matrix A ˜ ( t ) is the dynamic model of (57), denoted by the notation HZNNQR p .
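Analogously, the real representation (46) can be checked against Corollary 1 in Ref. [24]: the representation of a product must equal the product of the representations. A numpy sketch under the sign convention used above, with illustrative random test matrices:

```python
import numpy as np

def qprod(A, X):
    A1, A2, A3, A4 = A
    X1, X2, X3, X4 = X
    return (A1 @ X1 - A2 @ X2 - A3 @ X3 - A4 @ X4,
            A1 @ X2 + A2 @ X1 + A3 @ X4 - A4 @ X3,
            A1 @ X3 + A3 @ X1 + A4 @ X2 - A2 @ X4,
            A1 @ X4 + A4 @ X1 + A2 @ X3 - A3 @ X2)

def rrep(A):
    """4n x 4n real representation with the sign convention of (46)."""
    A1, A2, A3, A4 = A
    return np.block([[ A1, -A4,  A3,  A2],
                     [ A4,  A1, -A2,  A3],
                     [-A3,  A2,  A1,  A4],
                     [-A2, -A3, -A4,  A1]])

rng = np.random.default_rng(2)
A = tuple(rng.standard_normal((2, 2)) for _ in range(4))
X = tuple(rng.standard_normal((2, 2)) for _ in range(4))
print(np.allclose(rrep(A) @ rrep(X), rrep(qprod(A, X))))  # True
```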

2.6. The NHZNNQR p Model

Additionally, the next outcome is obtained by substituting E H p ( t ) of (50) and E ˙ H p ( t ) of (51) into (7):
−A(t)Ẋ(t) − Ȧ(t)X(t) = −λ∑_{i=1}^{p−1} E^i(t) − ζ∫_0^t E_{H^p}(τ)dτ + N(t),
and solving in terms of X ˙ ( t ) outputs:
−A(t)Ẋ(t) = −λ∑_{i=1}^{p−1} E^i(t) + Ȧ(t)X(t) − ζ∫_0^t E_{H^p}(τ)dτ + N(t).
The dynamic model of (59) can then be made simpler with the use of vectorization and Kronecker product:
−(I_{4n} ⊗ A(t)) vec(Ẋ(t)) = vec(−λ∑_{i=1}^{p−1} E^i(t) + Ȧ(t)X(t) − ζ∫_0^t E_{H^p}(τ)dτ + N(t)).
Furthermore, after setting:
r_r(t) = vec(∫_0^t E_{H^p}(τ)dτ) ∈ R^{16n²},  ṙ_r(t) = vec(∑_{i=1}^{p−1} E^i(t)) ∈ R^{16n²},
L_3(t) = [I_{16n²}, 0_{16n²×16n²}; 0_{16n²×16n²}, L_1(t)] ∈ R^{32n²×32n²},
L_4(t) = [ṙ_r(t); vec(−λ∑_{i=1}^{p−1} E^i(t) + Ȧ(t)X(t) + N(t)) − ζ r_r(t)] ∈ R^{32n²},
x_N(t) = [r_r(t); x(t)] ∈ R^{32n²},  ẋ_N(t) = [ṙ_r(t); ẋ(t)] ∈ R^{32n²},
we arrive to the subsequent NHZNN model:
L 3 ( t ) x ˙ N ( t ) = L 4 ( t ) .
The suggested NHZNN model to be utilized when addressing the TVQ-INV of (8) under various types of noises is the dynamic model of (62), denoted by the notation NHZNNQR p .

3. Stability and Convergence Analysis

This section examines the convergence and stability of the HZNN dynamics (5) and the NHZNN dynamics (7).

3.1. The HZNNQ p , HZNNQC p , and HZNNQR p Models Theoretical Analysis

The following theorems examine how effectively the HZNN dynamics (5) perform.
Theorem 1. 
Assuming that B(t) ∈ R^{4n×4n} and Y(t) ∈ R^{4n×n} are differentiable, the dynamical system (20) converges to Ã⁻¹(t), which is the theoretical solution (THESO) of the TVQ-INV (8). In light of Lyapunov stability theory, the solution is thus stable.
Proof. 
Let Y̆(t) be the THESO. The replacement Ȳ(t) := Y̆(t) − Y(t) entails Y(t) = Y̆(t) − Ȳ(t), and its first time derivative is Ẏ(t) = Y̆̇(t) − Ȳ̇(t). It is important to note that
(Î − B(t)Y̆(t))Î^T = 0_{4n×4n},
and its first time derivative is:
−(Ḃ(t)Y̆(t) + B(t)Y̆̇(t))Î^T = 0_{4n×4n}.
Therefore, the replacement Y(t) = Y̆(t) − Ȳ(t) into (16) yields:
Ē_{H^p}(t) = ∑_{i=1}^{p−1} ((Î − B(t)(Y̆(t) − Ȳ(t)))Î^T)^i.
Additionally, the implicit dynamics (5) denote:
Ē̇_{H^p}(t) ≈ −(Ḃ(t)(Y̆(t) − Ȳ(t)) + B(t)(Y̆̇(t) − Ȳ̇(t)))Î^T = −λĒ_{H^p}(t).
The candidate Lyapunov function is subsequently identified to verify convergence:
L(t) = (1/2)‖Ē_{H^p}(t)‖²_F = (1/2)Tr(Ē_{H^p}(t)(Ē_{H^p}(t))^T).
The following identities may then be confirmed:
L̇(t) = (2Tr((Ē_{H^p}(t))^T Ē̇_{H^p}(t)))/2 = Tr((Ē_{H^p}(t))^T Ē̇_{H^p}(t)) = −λTr((Ē_{H^p}(t))^T Ē_{H^p}(t)).
Consequently, it holds:
L̇(t) < 0 if Ē_{H^p}(t) ≠ 0, and L̇(t) = 0 if Ē_{H^p}(t) = 0; equivalently,
L̇(t) < 0 if ∑_{i=1}^{p−1}((Î − B(t)(Y̆(t) − Ȳ(t)))Î^T)^i ≠ 0, and L̇(t) = 0 if ∑_{i=1}^{p−1}((Î − B(t)(Y̆(t) − Ȳ(t)))Î^T)^i = 0; equivalently,
L̇(t) < 0 if Ȳ(t) ≠ 0, and L̇(t) = 0 if Ȳ(t) = 0.
We have the following when the equilibrium of the system (66) is at Ȳ(t) = 0 and Ē_{H^p}(0) = 0:
dL(t)/dt ≤ 0, ∀ Ȳ(t) ≠ 0.
The state of equilibrium:
Ȳ(t) = Y̆(t) − Y(t) = 0,
is deemed stable by the Lyapunov stability theory. Therefore, as  t , Y ( t ) Y ˘ ( t ) .    □
Theorem 2. 
Let Ã(t) ∈ H^{n×n} be differentiable. At each time t, the HZNNQ p model (23) exponentially converges to the THESO y̆(t) for any possible starting point y(0).
Proof. 
For the purpose of calculating the THESO of the TVQ-INV, the EME of (14) is declared. The model (20) is determined by utilizing the HZNN’s architecture (5) for zeroing (14). Taking into consideration Theorem 1, Y ( t ) Y ˘ ( t ) for any starting point when t . Therefore, the HZNNQ p model (23) converges to the THESO y ˘ ( t ) for any starting point y ( 0 ) when t , due to the fact that it is only a different implementation of (20). The proof is thus completed.    □
Theorem 3. 
Assuming that Ǎ(t) ∈ C^{2n×2n} is differentiable, the dynamical system (37) converges to Ǎ⁻¹(t), which is the THESO of the TVQ-INV (8). In light of Lyapunov stability theory, the solution is thus stable.
Proof. 
Given that the proof mirrors the proof of Theorem 1, it is omitted.    □
Theorem 4. 
Let Ǎ(t) ∈ C^{2n×2n} be differentiable. At each time t, the HZNNQC p model (40) exponentially converges to the THESO k̆(t) for any possible starting point k(0).
Proof. 
Given that the proof mirrors the proof of Theorem 2 once we substitute Theorem 1 with Theorem 3, it is omitted.    □
Theorem 5. 
Assuming that A(t) ∈ R^{4n×4n} is differentiable, the dynamical system (54) converges to A⁻¹(t), which is the THESO of the TVQ-INV (8). In light of Lyapunov stability theory, the solution is thus stable.
Proof. 
Given that the proof mirrors the proof of Theorem 1, it is omitted.    □
Theorem 6. 
Let A(t) ∈ R^{4n×4n} be differentiable. At each time t, the HZNNQR p model (57) exponentially converges to the THESO x̆(t) for any possible starting point x(0).
Proof. 
Given that the proof mirrors the proof of Theorem 2 once we substitute Theorem 1 with Theorem 5, it is omitted.    □

3.2. The NHZNNQ p , NHZNNQC p , and NHZNNQR p Models Theoretical Analysis

The proficiency of the NHZNN dynamics is examined in the next theorems, which are restated from Ref. [43] and cover various types of noise.
Theorem 7 
([43]).  Let Ã(t) ∈ H^{n×n} be differentiable. Then the NHZNNQ p (28), NHZNNQC p (45), and NHZNNQR p (62) models converge globally to the THESO, in spite of the constant noise N(t) = N ∈ R^{ρ×ρ}, where ρ = 4n in the cases of NHZNNQ p and NHZNNQR p and ρ = 2n in the case of NHZNNQC p.
Theorem 8 
([43]).  Under the suppositions of Theorem 7, the NHZNNQ p (28), NHZNNQC p (45), and NHZNNQR p (62) models polluted with the linear noise N(t) = N·t ∈ R^{ρ×ρ}, where ρ = 4n in the cases of NHZNNQ p and NHZNNQR p and ρ = 2n in the case of NHZNNQC p, are convergent to the THESO, with the EME’s upper bound satisfying lim_{t→∞} ‖E(t)‖_F = ‖N‖_F/ζ. In addition, as ζ → +∞, E(t) fulfills lim_{t→∞} ‖E_{H^p}(t)‖_F → 0.
Theorem 9 
([43]).  Under the assumptions of Theorem 7, the NHZNNQ p (28), NHZNNQC p (45), and NHZNNQR p (62) models, when there is bounded random noise N(t) := σ(t) = [σ_ij(t)]_{i,j=1,…,ρ} ∈ R^{ρ×ρ}, where ρ = 4n in the cases of NHZNNQ p and NHZNNQR p and ρ = 2n in the case of NHZNNQC p, preserve a bounded residual error ‖E_{H^p}(t)‖_F. In addition, lim_{t→∞} ‖E_{H^p}(t)‖_F of NHZNN is bounded by
2ρ sup_{0≤τ≤t} |σ_ij(τ)| / Q, if Q > 0;   −4ρ sup_{0≤τ≤t} |σ_ij(τ)| / (ζQ), if Q < 0,
where η, ζ > 0 are parameters and Q = η² − 4ζ. Therefore, in the case of Q ≠ 0, the upper bound of lim_{t→∞} ‖E_{H^p}(t)‖_F is roughly inversely proportional to η, with lim_{t→∞} ‖E_{H^p}(t)‖_F becoming arbitrarily small for adequately large η and proper ζ.
Theorem 10. 
Let Ã(t) ∈ H^{n×n} be differentiable. At each time t ∈ [0, t_f) ⊆ [0, +∞), the NHZNNQ p model (28) converges exponentially to the THESO y̆_N(t) when noise is present, for any possible starting point y_N(0). For each integer p ≥ 2, when noise is present, Ã⁻¹(t) is recovered from the last 4n² elements of y̆_N(t).
Proof. 
Given that the proof mirrors that of Theorem 3.1 in Ref. [56] once we substitute Theorem 1 in Ref. [57] with Theorems 7, 8 and 9, respectively, for the constant, linear, and bounded random noise, it is omitted.    □
Theorem 11. 
Let Ǎ(t) ∈ C^{2n×2n} be differentiable. At each time t ∈ [0, t_f) ⊆ [0, +∞), the NHZNNQC p model (45) converges exponentially to the THESO k̆_N(t) when noise is present, for any possible starting point k_N(0). For each integer p ≥ 2, when noise is present, Ǎ⁻¹(t) is recovered from the last 4n² elements of k̆_N(t).
Proof. 
Given that the proof mirrors the proof of Theorem 10, it is omitted.    □
Theorem 12. 
Let A(t) ∈ R^{4n×4n} be differentiable. At each time t ∈ [0, t_f) ⊆ [0, +∞), the NHZNNQR p model (62) converges exponentially to the THESO x̆_N(t) when noise is present, for any possible starting point x_N(0). For each integer p ≥ 2, when noise is present, A⁻¹(t) is recovered from the last 16n² elements of x̆_N(t).
Proof. 
Given that the proof mirrors the proof of Theorem 10, it is omitted.    □

4. Computational Simulations

We will present two simulation examples (SEs) and one application to robotic motion tracking in this section. What follows are a few crucial explanations. The HZNN design parameter λ is applied with value 10 in all SEs and with value 100 in the application. The starting points of the HZNNQ p, HZNNQC p, and HZNNQR p models have been set to y(0) = vec([A_1^T(0), A_2^T(0), A_3^T(0), A_4^T(0)]^T), k(0) = vec(Ǎ(0)) and x(0) = vec(A(0)), respectively, and the starting points of the NHZNNQ p, NHZNNQC p, and NHZNNQR p models have been set to y_N(0) = vec([y^T(0), y^T(0)]^T), k_N(0) = vec([k^T(0), k^T(0)]^T) and x_N(0) = vec([x^T(0), x^T(0)]^T), respectively. For convenience purposes, we have set β(t) = cos(t) and α(t) = sin(t). Further, the noises used are the following:
  • N ( t ) = 10 · 1 ρ represents the constant noise;
  • N ( t ) = ( 2 + t / 4 ) · 1 ρ represents the linear noise;
  • N ( t ) = 2 + α ( t ) · 1 ρ / 4 represents the bounded noise.
Finally, the MATLAB ode solver ode15s is used, with the time interval set to [0, 10] in all SEs and to [0, 20] in the application. The solver runs in default double-precision arithmetic (eps = 2.22 × 10⁻¹⁶), which is why the minimum values in the figures of this section are mostly of the order of 10⁻⁵.
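For readers who want to reproduce the general experiment structure outside MATLAB, the sketch below integrates the conventional ZNN dynamics for time-varying real matrix inversion, Ẋ(t) = −X Ȧ X − λ X (A X − I), with SciPy's stiff BDF solver (a rough analogue of ode15s). This is not the paper's HZNN/NHZNN model: the 2×2 matrix A(t), the error definition, and the initial condition are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = 10.0  # ZNN design parameter (the paper also uses lambda = 10 in its SEs)

def A(t):
    # illustrative 2x2 time-varying matrix, diagonally dominant, hence invertible
    return np.array([[3.0 + np.sin(t), 1.0],
                     [1.0, 3.0 + np.cos(t)]])

def Adot(t):
    # analytic time derivative of A(t)
    return np.array([[np.cos(t), 0.0],
                     [0.0, -np.sin(t)]])

def rhs(t, x):
    X = x.reshape(2, 2)
    E = A(t) @ X - np.eye(2)            # error matrix E(t) = A(t)X(t) - I
    Xdot = -X @ Adot(t) @ X - lam * X @ E
    return Xdot.ravel()

# start at the exact inverse, mirroring the vec(...) initializations in the text
x0 = np.linalg.inv(A(0.0)).ravel()
sol = solve_ivp(rhs, (0.0, 10.0), x0, method="BDF", rtol=1e-8, atol=1e-10)
X_end = sol.y[:, -1].reshape(2, 2)
res = np.linalg.norm(A(10.0) @ X_end - np.eye(2), "fro")  # residual at t = 10
```

With a well-conditioned A(t) and λ = 10, the residual stays several orders of magnitude below the initial error, which is the qualitative behavior the figures in this section illustrate.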

4.1. Simulation Examples

4.1.1. Example 1

The coefficients of the input matrix Ã(t) are the following (rows separated by semicolons):
A1(t) = [2α(t)+2, 4, 4; 2α(t)+6, 2, 6; 2α(t)+7, 2, 4],
A2(t) = [6, 2α(t)+1, 4; 5, 3α(t)+1, 3; 5, 2α(t)+2, 7],
A3(t) = [3α(t)+2, 9, 5; 2α(t)+3, 12, 2; 3α(t)+4, 3, 5],
A4(t) = [2, α(t)+1, 7; 4, 2α(t)+4, 8; 2, 3α(t)+1, 9].
As a consequence, Ã(t) ∈ ℍ^{3×3}. The performance of the HZNN and NHZNN models is shown in Figure 1 and Figure 2.
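Before any dynamics enter the picture, the indirect models rest on representing Ã(t) = A1(t) + A2(t)i + A3(t)j + A4(t)k by an equivalent real matrix. The sketch below uses one standard 4n×4n real representation (in the spirit of Ref. [55]); the exact block layout behind the paper's model (62) may differ, so treat the arrangement as an assumption. Because the map is a ring homomorphism, inverting the representation recovers the quaternion components of Ã⁻¹ from its first block column, here at a fixed time instant with random illustrative coefficients:

```python
import numpy as np

def real_rep(A1, A2, A3, A4):
    """One standard 4n x 4n real representation of A1 + A2 i + A3 j + A4 k.

    The map is a ring homomorphism, so the inverse of the representation
    is the representation of the quaternion inverse."""
    return np.block([[ A1, -A2, -A3, -A4],
                     [ A2,  A1, -A4,  A3],
                     [ A3,  A4,  A1, -A2],
                     [ A4, -A3,  A2,  A1]])

rng = np.random.default_rng(0)
n = 3  # matching the 3 x 3 quaternion matrix of this example
A1, A2, A3, A4 = (rng.standard_normal((n, n)) for _ in range(4))

M = real_rep(A1, A2, A3, A4)      # 4n x 4n real matrix
Minv = np.linalg.inv(M)

# read the quaternion components of the inverse off the first block column
B1, B2, B3, B4 = (Minv[s * n:(s + 1) * n, :n] for s in range(4))

# Minv must itself carry the same block structure: real_rep(B) == M^{-1}
assert np.allclose(real_rep(B1, B2, B3, B4), Minv)
```

The structure check at the end is the whole point: since M⁻¹ again has the representation's block pattern, the last blocks of the ZNN state vector in the real-domain models can be read back as Ã⁻¹(t).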

4.1.2. Example 2

Considering the matrix
K = [[1, 1, 1, 1, 1], [0, 1, 1, 1, 1], [0, 0, 1, 1, 1], [0, 0, 0, 1, 1], [0, 0, 0, 0, 1]],
the coefficients of the input matrix Ã(t) are the following:
A1(t) = K(1 + α(t)),  A2(t) = Kᵀ(1 + 2α(t)),  A3(t) = K(1 + 3β(t)),  A4(t) = Kᵀ(1 + 4β(t)).
As a consequence, Ã(t) ∈ ℍ^{5×5}. The performance of the HZNN and NHZNN models is shown in Figure 3 and Figure 4.
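This example can equally be handled through the complex route: writing Ã = Z1 + Z2 j with Z1 = A1 + A2 i and Z2 = A3 + A4 i gives a standard 2n×2n complex representation whose inverse encodes Ã⁻¹. A hedged sketch follows, again at a fixed time instant and with random illustrative coefficients, since the paper's exact block convention for the complex-domain models may differ:

```python
import numpy as np

def complex_rep(A1, A2, A3, A4):
    """Standard 2n x 2n complex representation of A1 + A2 i + A3 j + A4 k,
    via A = Z1 + Z2 j with Z1 = A1 + A2 i and Z2 = A3 + A4 i."""
    Z1 = A1 + 1j * A2
    Z2 = A3 + 1j * A4
    return np.block([[Z1, Z2],
                     [-Z2.conj(), Z1.conj()]])

rng = np.random.default_rng(1)
n = 5  # matching the 5 x 5 quaternion matrix of this example
A1, A2, A3, A4 = (rng.standard_normal((n, n)) for _ in range(4))

C = complex_rep(A1, A2, A3, A4)   # 2n x 2n complex matrix
Cinv = np.linalg.inv(C)

# the top block row of the inverse yields the quaternion components of A^{-1}
W1, W2 = Cinv[:n, :n], Cinv[:n, n:]
B1, B2, B3, B4 = W1.real, W1.imag, W2.real, W2.imag

# the inverse preserves the representation's structure
assert np.allclose(complex_rep(B1, B2, B3, B4), Cinv)
```

Compared with the 4n×4n real route, the complex representation halves the matrix dimension, which foreshadows the complexity comparison in Section 4.3.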

4.2. Application to Robotic Motion Tracking

The applicability of the NHZNNQp, NHZNNQCp and NHZNNQRp models is validated in this experiment on a 3-link planar manipulator (PM), shown in Figure 5a. The 3-link PM's kinematics at the position level r(t) ∈ ℝⁿ and the velocity level ṙ(t) ∈ ℝⁿ are expressed as follows:
r(t) = f(θ(t)),   ṙ(t) = J(θ)θ̇(t),
where θ ∈ ℝⁿ is the joint-angle vector of the 3-link PM, J(θ) = ∂f(θ)/∂θ ∈ ℝ^{n×n}, f(·) is a smooth nonlinear mapping, and r(t) is the end effector's position.
To see how the 3-link PM tracks motion, the inverse kinematics equation must be addressed. When the end-effector motion tracking task is assigned with ṙ(t) known and θ̇(t) unknown, the velocity equation can be viewed as a system of linear equations. In other words, setting Ã(t) = J(θ), we compute X̃(t) = Ã⁻¹(t) and solve θ̇(t) = X̃(t)ṙ(t). Tracking control of the 3-link PM is therefore achieved by using the ZNN models to solve the underlying linear system.
In the simulation experiment, the 3-link PM's end-effector is expected to follow an infinity-shaped path; Ref. [58] contains the X- and Y-axis velocity functions of this path along with the 3-link PM's specifications. The link lengths are a = [1, 2/3, 5/4]ᵀ and the initial joint angles are θ(0) = [π/8, π/8, π/8]ᵀ. The performance of the NHZNN models under bounded noise is shown in Figure 5.
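A hedged sketch of this velocity-level inverse-kinematics step for a 3-link planar manipulator follows. It assumes, as the square J(θ) in the text requires, that the task vector contains the end-effector position plus its orientation; the forward-kinematics convention is the usual textbook one, not necessarily that of Ref. [58], and the task velocity ṙ is an illustrative value, while the link lengths and initial joint angles are the ones quoted for the experiment.

```python
import numpy as np

a = np.array([1.0, 2/3, 5/4])        # link lengths from the experiment
theta0 = np.array([np.pi / 8] * 3)   # initial joint angles

def f(theta):
    """Forward kinematics: end-effector x, y, and orientation."""
    phi = np.cumsum(theta)           # absolute angles of the links
    return np.array([np.sum(a * np.cos(phi)),
                     np.sum(a * np.sin(phi)),
                     phi[-1]])

def J(theta):
    """Analytic Jacobian df/dtheta, a square 3 x 3 matrix."""
    phi = np.cumsum(theta)
    Jm = np.zeros((3, 3))
    for k in range(3):
        # joint k moves links k..2
        Jm[0, k] = -np.sum(a[k:] * np.sin(phi[k:]))
        Jm[1, k] = np.sum(a[k:] * np.cos(phi[k:]))
        Jm[2, k] = 1.0
    return Jm

# velocity-level inverse kinematics: theta_dot = J(theta)^{-1} r_dot
r_dot = np.array([0.1, -0.05, 0.0])  # illustrative task velocity
theta_dot = np.linalg.solve(J(theta0), r_dot)

# sanity-check the analytic Jacobian against finite differences of f
eps = 1e-7
J_fd = np.column_stack([(f(theta0 + eps * e) - f(theta0)) / eps for e in np.eye(3)])
assert np.allclose(J(theta0), J_fd, atol=1e-5)
```

In the paper's setting the ZNN models replace the `np.linalg.solve` call: they continuously track J⁻¹(θ) in time instead of refactorizing it at every instant.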

4.3. Results and Discussion

The performance of the HZNNQ p (23), HZNNQC p (40), HZNNQR p (57), NHZNNQ p (28), NHZNNQC p (45) and NHZNNQR p (62) models for solving the TVQ-INV (8) is examined by the SEs in Section 4.1.1 and Section 4.1.2. A unique TVQ-INV problem, described by the proper TVQ matrix A ˜ ( t ) , is assigned to each section.
For the SE in Section 4.1.1, the results of the HZNNQp, HZNNQCp, and HZNNQRp models with input TVQ matrix Ã(t) ∈ ℍ^{3×3} are presented in Figure 1. In particular, Figure 1a–d depict the Frobenius norm of the models' EMEs for p = 2, 3, 4. In the case of p = 2, the models' EMEs converge to the range [10⁻⁵, 10⁻³] by the time-mark t ≈ 2, and this time-mark shrinks as p grows. The EMEs' Frobenius norm values in Table 1 support this finding, as does the Frobenius norm of (8) in Figure 1e–h. A higher value of λ will typically make the HZNN models converge even faster. The successful convergence of all models is confirmed in Figure 1i–l, where the theoretical trajectories of the real and imaginary parts of the TVQ matrix Ã⁻¹(t) are contrasted with the X̃(t) trajectories produced by the three models. On the other hand, the results of the NHZNNQp, NHZNNQCp, and NHZNNQRp models under linear noise with z = 100 are presented in Figure 2. In particular, Figure 2a,b depict the Frobenius norm of the models' EMEs for p = 2 and p = 4, respectively, and Figure 2c,d depict the Frobenius norm of (8) for p = 2 and p = 4, respectively. The errors of the models converge to the range [10⁻², 10⁻¹] by the time-mark t ≈ 2 for p = 2 and t ≈ 1.5 for p = 4; that is, the time-mark shrinks as p grows. The successful convergence of all models is confirmed again in Figure 2e–h, where the theoretical trajectories of the real and imaginary parts of Ã⁻¹(t) are contrasted with the X̃(t) trajectories produced by the three models.
For the SE in Section 4.1.2, the results of the HZNNQp, HZNNQCp, and HZNNQRp models with input TVQ matrix Ã(t) ∈ ℍ^{5×5} are presented in Figure 3. In particular, Figure 3a–d depict the Frobenius norm of the models' EMEs for p = 2, 4, 6. In the case of p = 2, the models' EMEs converge to the range [10⁻⁵, 10⁻³] by the time-mark t ≈ 1.8, and this time-mark shrinks as p grows. The EMEs' Frobenius norm values in Table 1 confirm this result, as does the Frobenius norm of (8) in Figure 3e–h. The successful convergence of all models is confirmed in Figure 3i–l, where the theoretical trajectories of the real and imaginary parts of the TVQ matrix Ã⁻¹(t) are contrasted with the X̃(t) trajectories produced by the three models. On the other hand, the results of the NHZNNQp, NHZNNQCp, and NHZNNQRp models under constant noise with z = 10 are presented in Figure 4. In particular, Figure 4a,b depict the Frobenius norm of the models' EMEs for p = 2 and p = 4, respectively, and Figure 4c,d depict the Frobenius norm of (8) for p = 2 and p = 4, respectively. The errors of the models converge to the range [10⁻³, 10⁻²] by the time-mark t ≈ 10 for both p = 2 and p = 4, although for p = 4 they converge slightly faster over the time-range [0, 5]. The successful convergence of all models is confirmed again in Figure 4e–h, where the theoretical trajectories of the real and imaginary parts of Ã⁻¹(t) are contrasted with the X̃(t) trajectories produced by the three models.
For the application in Section 4.2, the results of the NHZNNQp, NHZNNQCp, and NHZNNQRp models under bounded noise with z = 1000 are presented in Figure 5. In particular, Figure 5b,c depict the Frobenius norms of the models' EMEs and of (8) for p = 4, respectively. The errors of the models converge to the range [10⁻⁵, 10⁻²] by the time-mark t ≈ 1.5. The successful convergence of all models is confirmed in Figure 5d–f, which depict the velocity trajectories and the infinity-shaped path tracking. As seen in these figures, all NHZNN model solutions match the actual velocity θ̇(t), and the 3-link PM successfully completes the infinity-shaped path tracking task, where ṙ(t) is the actual infinity-shaped path.
Lastly, once we take into account the complexity of each model, the results above can be placed into better context. Particularly, the HZNNQ p model performs 4 n 2 additions/subtractions and ( 4 n 2 ) 2 multiplications in each iteration of (23), which results in a computational complexity of O ( ( 4 n 2 ) 3 ) when an ode MATLAB solver is used. In the same manner, the HZNNQC p model performs 4 n 2 additions/subtractions and ( 4 n 2 ) 2 multiplications in each iteration of (40). By converting these measurements from the complex domain into the real domain, the HZNNQC p model performs 8 n 2 additions/subtractions and ( 8 n 2 ) 2 multiplications, which results in a computational complexity of O ( ( 8 n 2 ) 3 ) . The HZNNQR p model performs 16 n 2 additions/subtractions and ( 16 n 2 ) 2 multiplications in each iteration of (57), which results in a computational complexity of O ( ( 16 n 2 ) 3 ) . The NHZNNQ p model performs 8 n 2 additions/subtractions and ( 8 n 2 ) 2 multiplications in each iteration of (28), which results in a computational complexity of O ( ( 8 n 2 ) 3 ) . The NHZNNQC p model performs 8 n 2 additions/subtractions and ( 8 n 2 ) 2 multiplications in each iteration of (45). Converting these measurements from the complex domain into the real domain, the NHZNNQC p model performs 16 n 2 additions/subtractions and ( 16 n 2 ) 2 multiplications which results in a computational complexity of O ( ( 16 n 2 ) 3 ) . The NHZNNQR p model performs 32 n 2 additions/subtractions and ( 32 n 2 ) 2 multiplications in each iteration of (62), which results in a computational complexity of O ( ( 32 n 2 ) 3 ) . Because the dimensions of the associated real valued matrix A ( t ) are two times larger than those of the complex valued matrix A ˇ ( t ) and four times larger than those of the quaternion valued matrix A ˜ ( t ) , the HZNNQR p and NHZNNQR p are, by far, the most complex models. 
Because of this, addressing the TVQ-INV problem in the real domain carries a significant memory penalty, with RAM quickly becoming a limiting factor as Ã(t) grows in size. All six ZNN models can solve the TVQ-INV problem; all factors considered, however, the HZNNQp model shows the most promise in the absence of noise and the NHZNNQp model in the presence of noise.
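The storage figures behind this memory argument can be tabulated directly. The helper below simply restates the text's element counts for the HZNN variants (4n², 8n², and 16n² stored reals per iterate, with O(m³) work per step); per the text, the NHZNN variants double each count. The 8-byte-double assumption is ours.

```python
def footprint(n, bytes_per_real=8):
    """Real numbers stored per iterate, bytes, and O(m^3) work, per the counts in the text."""
    counts = {
        "HZNNQp (quaternion domain)": 4 * n ** 2,
        "HZNNQCp (complex domain, in reals)": 8 * n ** 2,
        "HZNNQRp (real domain)": 16 * n ** 2,
    }
    return {name: (m, m * bytes_per_real, m ** 3) for name, m in counts.items()}

# e.g. for a 100 x 100 time-varying quaternion matrix
for name, (m, mem_bytes, cube) in footprint(100).items():
    print(f"{name}: m = {m}, ~{mem_bytes / 1e6:.2f} MB per iterate, m^3 = {cube:.2e}")
```

The fourfold gap in m between the quaternion and real formulations turns into a 64-fold gap in the O(m³) term, which is why the real-domain route scales worst.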

5. Conclusions

To handle the TVQ-INV problem, three models based on the HZNN design, namely HZNNQp, HZNNQCp, and HZNNQRp, and three models based on the NHZNN design, namely NHZNNQp, NHZNNQCp, and NHZNNQRp, have been proposed. The theoretical investigation, together with SEs and a practical application to robotic motion tracking, has supported the creation of these models. Both the direct approach and the indirect approaches to solving the TVQ-INV problem (representing it in the complex or real domain before converting the results back to the quaternion domain) have proved effective. Of these, the direct method, used by the HZNNQp and NHZNNQp models, has emerged as the most effective and efficient. According to the principal results, each of the six models solves the TVQ-INV effectively, and the HZNN strategy offers a faster convergence rate than the conventional ZNN strategy. These findings pave the way for further research. A few topics for future study:
  • Nonlinear ZNN designs may be explored for time-varying quaternion problems;
  • The finite-time ZNN framework may be applied to time-varying quaternion problems.

Author Contributions

All authors (R.A., M.K., H.J., T.E.S., S.D.M. and V.N.K.) contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Research Deanship at the University of Hail, Saudi Arabia, through Project Number RG-21-139.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; CMS Books in Mathematics; Springer: New York, NY, USA, 2003. [Google Scholar] [CrossRef]
  2. Wang, G.; Wei, Y.; Qiao, S.; Lin, P.; Chen, Y. Generalized Inverses: Theory and Computations; Springer: Singapore, 2018; Volume 53. [Google Scholar]
  3. Zhang, S.; Dong, Y.; Ouyang, Y.; Yin, Z.; Peng, K. Adaptive neural control for robotic manipulators with output constraints and uncertainties. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5554–5564. [Google Scholar] [CrossRef] [PubMed]
  4. Yuan, Y.; Wang, Z.; Guo, L. Event-triggered strategy design for discrete-time nonlinear quadratic games with disturbance compensations: The noncooperative case. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1885–1896. [Google Scholar] [CrossRef]
  5. Mourtas, S.D.; Katsikis, V.N.; Kasimis, C. Feedback control systems stabilization using a bio-inspired neural network. EAI Endorsed Trans. AI Robots 2022, 1, 1–13. [Google Scholar] [CrossRef]
  6. Yang, X.; He, H. Self-learning robust optimal control for continuous-time nonlinear systems with mismatched disturbances. Neural Netw. 2018, 99, 19–30. [Google Scholar] [CrossRef] [PubMed]
  7. Li, S.; He, J.; Li, Y.; Rafique, M.U. Distributed recurrent neural networks for cooperative control of manipulators: A game-theoretic perspective. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 415–426. [Google Scholar] [CrossRef] [PubMed]
  8. Mourtas, S.D. A weights direct determination neuronet for time-series with applications in the industrial indices of the federal reserve bank of St. Louis. J. Forecast. 2022, 14, 1512–1524. [Google Scholar] [CrossRef]
  9. Joldeş, M.; Muller, J.M. Algorithms for manipulating quaternions in floating-point arithmetic. In Proceedings of the 2020 IEEE 27th Symposium on Computer Arithmetic (ARITH), Portland, OR, USA, 7–10 June 2020; pp. 48–55. [Google Scholar]
  10. Szynal-Liana, A.; Włoch, I. Generalized commutative quaternions of the Fibonacci type. Boletín Soc. Mat. Mex. 2022, 28, 1. [Google Scholar] [CrossRef]
  11. Pavllo, D.; Feichtenhofer, C.; Auli, M.; Grangier, D. Modeling human motion with quaternion-based neural networks. Int. J. Comput. Vis. 2020, 128, 855–872. [Google Scholar] [CrossRef] [Green Version]
  12. Özgür, E.; Mezouar, Y. Kinematic modeling and control of a robot arm using unit dual quaternions. Robot. Auton. Syst. 2016, 77, 66–73. [Google Scholar] [CrossRef]
  13. Du, G.; Liang, Y.; Gao, B.; Otaibi, S.A.; Li, D. A cognitive joint angle compensation system based on self-feedback fuzzy neural network with incremental learning. IEEE Trans. Ind. Inform. 2021, 17, 2928–2937. [Google Scholar] [CrossRef]
  14. Goodyear, A.M.S.; Singla, P.; Spencer, D.B. Analytical state transition matrix for dual-quaternions for spacecraft pose estimation. In Proceedings of the AAS/AIAA Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019; Univelt Inc.: Escondido, CA, USA, 2020; pp. 393–411. [Google Scholar]
  15. Giardino, S. Quaternionic quantum mechanics in real Hilbert space. J. Geom. Phys. 2020, 158, 103956. [Google Scholar] [CrossRef]
  16. Kansu, M.E. Quaternionic representation of electromagnetism for material media. Int. J. Geom. Methods Mod. Phys. 2019, 16, 1950105. [Google Scholar] [CrossRef]
  17. Weng, Z.H. Field equations in the complex quaternion spaces. Adv. Math. Phys. 2014, 2014, 450262. [Google Scholar] [CrossRef] [Green Version]
  18. Ghiloni, R.; Moretti, V.; Perotti, A. Continuous slice functional calculus in quaternionic Hilbert spaces. Rev. Math. Phys. 2013, 25, 1350006. [Google Scholar] [CrossRef]
  19. Kyrchei, I.I.; Mosić, D.; Stanimirović, P.S. MPCEP-*CEPMP-solutions of some restricted quaternion matrix equations. Adv. Appl. Clifford Algebr. 2022, 32, 22. [Google Scholar] [CrossRef]
  20. Huang, L.; Wang, Q.W.; Zhang, Y. The Moore-Penrose inverses of matrices over quaternion polynomial rings. Linear Algebra Its Appl. 2015, 475, 45–61. [Google Scholar] [CrossRef]
  21. Xiao, L.; Liu, S.; Wang, X.; He, Y.; Jia, L.; Xu, Y. Zeroing neural networks for dynamic quaternion-valued matrix inversion. IEEE Trans. Ind. Inform. 2022, 18, 1562–1571. [Google Scholar] [CrossRef]
  22. Xiao, L.; Huang, W.; Li, X.; Sun, F.; Liao, Q.; Jia, L.; Li, J.; Liu, S. ZNNs with a varying-parameter design formula for dynamic Sylvester quaternion matrix equation. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–11. [Google Scholar] [CrossRef]
  23. Xiao, L.; Cao, P.; Song, W.; Luo, L.; Tang, W. A fixed-time noise-tolerance ZNN model for time-variant inequality-constrained quaternion matrix least-squares problem. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–10. [Google Scholar] [CrossRef]
  24. Xiao, L.; Zhang, Y.; Huang, W.; Jia, L.; Gao, X. A dynamic parameter noise-tolerant zeroing neural network for time-varying quaternion matrix equation with applications. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–10. [Google Scholar] [CrossRef]
  25. Tan, N.; Yu, P.; Ni, F. New varying-parameter recursive neural networks for model-free kinematic control of redundant manipulators with limited measurements. IEEE Trans. Instrum. Meas. 2022, 71, 1–14. [Google Scholar] [CrossRef]
  26. Dachang, Z.; Aodong, C.; Baolin, D.; Puchen, Z. Dual-mode synchronization predictive control of robotic manipulator. J. Dyn. Syst. Meas. Control 2022, 144, 111002. [Google Scholar] [CrossRef]
  27. Jerbi, H.; Al-Darraji, I.; Tsaramirsis, G.; Ladhar, L.; Omri, M. Hamilton-Jacobi inequality adaptive robust learning tracking controller of wearable robotic knee system. Mathematics 2023, 11, 1351. [Google Scholar] [CrossRef]
  28. Zhang, Y.; Ge, S.S. Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Trans. Neural Netw. 2005, 16, 1477–1490. [Google Scholar] [CrossRef] [Green Version]
  29. Chai, Y.; Li, H.; Qiao, D.; Qin, S.; Feng, J. A neural network for Moore-Penrose inverse of time-varying complex-valued matrices. Int. J. Comput. Intell. Syst. 2020, 13, 663–671. [Google Scholar] [CrossRef]
  30. Sun, Z.; Li, F.; Jin, L.; Shi, T.; Liu, K. Noise-tolerant neural algorithm for online solving time-varying full-rank matrix Moore-Penrose inverse problems: A control-theoretic approach. Neurocomputing 2020, 413, 158–172. [Google Scholar] [CrossRef]
  31. Wu, W.; Zheng, B. Improved recurrent neural networks for solving Moore-Penrose inverse of real-time full-rank matrix. Neurocomputing 2020, 418, 221–231. [Google Scholar] [CrossRef]
  32. Zhang, Y.; Yang, Y.; Tan, N.; Cai, B. Zhang neural network solving for time-varying full-rank matrix Moore-Penrose inverse. Computing 2011, 92, 97–121. [Google Scholar] [CrossRef]
  33. Qiao, S.; Wang, X.Z.; Wei, Y. Two finite-time convergent Zhang neural network models for time-varying complex matrix Drazin inverse. Linear Algebra Its Appl. 2018, 542, 101–117. [Google Scholar] [CrossRef]
  34. Qiao, S.; Wei, Y.; Zhang, X. Computing time-varying ML-weighted pseudoinverse by the Zhang neural networks. Numer. Funct. Anal. Optim. 2020, 41, 1672–1693. [Google Scholar] [CrossRef]
  35. Wang, X.; Stanimirovic, P.S.; Wei, Y. Complex ZFs for computing time-varying complex outer inverses. Neurocomputing 2018, 275, 983–1001. [Google Scholar] [CrossRef]
  36. Simos, T.E.; Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S.; Gerontitis, D. A higher-order zeroing neural network for pseudoinversion of an arbitrary time-varying matrix with applications to mobile object localization. Inf. Sci. 2022, 600, 226–238. [Google Scholar] [CrossRef]
  37. Zhou, M.; Chen, J.; Stanimirovic, P.S.; Katsikis, V.N.; Ma, H. Complex varying-parameter Zhang neural networks for computing core and core-EP inverse. Neural Process. Lett. 2020, 51, 1299–1329. [Google Scholar] [CrossRef]
  38. Kovalnogov, V.N.; Fedorov, R.V.; Generalov, D.A.; Chukalin, A.V.; Katsikis, V.N.; Mourtas, S.D.; Simos, T.E. Portfolio insurance through error-correction neural networks. Mathematics 2022, 10, 3335. [Google Scholar] [CrossRef]
  39. Mourtas, S.D.; Katsikis, V.N. Exploiting the Black-Litterman framework through error-correction neural networks. Neurocomputing 2022, 498, 43–58. [Google Scholar] [CrossRef]
  40. Mourtas, S.D.; Kasimis, C. Exploiting mean-variance portfolio optimization problems through zeroing neural networks. Mathematics 2022, 10, 3079. [Google Scholar] [CrossRef]
  41. Jiang, W.; Lin, C.L.; Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S.; Simos, T.E. Zeroing neural network approaches based on direct and indirect methods for solving the Yang–Baxter-like matrix equation. Mathematics 2022, 10, 1950. [Google Scholar] [CrossRef]
  42. Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S.; Zhang, Y. Continuous-time varying complex QR decomposition via zeroing neural dynamics. Neural Process. Lett. 2021, 53, 3573–3590. [Google Scholar] [CrossRef]
  43. Stanimirović, P.S.; Katsikis, V.N.; Li, S. Higher-order ZNN dynamics. Neural Process. Lett. 2019, 51, 697–721. [Google Scholar] [CrossRef]
  44. Kornilova, M.; Kovalnogov, V.; Fedorov, R.; Zamaleev, M.; Katsikis, V.N.; Mourtas, S.D.; Simos, T.E. Zeroing neural network for pseudoinversion of an arbitrary time-varying matrix based on singular value decomposition. Mathematics 2022, 10, 1208. [Google Scholar] [CrossRef]
  45. Dai, J.; Tan, P.; Yang, X.; Xiao, L.; Jia, L.; He, Y. A fuzzy adaptive zeroing neural network with superior finite-time convergence for solving time-variant linear matrix equations. Knowl.-Based Syst. 2022, 242, 108405. [Google Scholar] [CrossRef]
  46. Xiao, L.; Tan, H.; Dai, J.; Jia, L.; Tang, W. High-order error function designs to compute time-varying linear matrix equations. Inf. Sci. 2021, 576, 173–186. [Google Scholar] [CrossRef]
  47. Zhong, N.; Huang, Q.; Yang, S.; Ouyang, F.; Zhang, Z. A varying-parameter recurrent neural network combined with penalty function for solving constrained multi-criteria optimization scheme for redundant robot manipulators. IEEE Access 2021, 9, 50810–50818. [Google Scholar] [CrossRef]
  48. Climent, J.; Thome, N.; Wei, Y. A geometrical approach on generalized inverses by Neumann-type series. Linear Algebra Appl. 2001, 332–334, 533–540. [Google Scholar] [CrossRef] [Green Version]
  49. Li, W.; Li, Z. A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix. Appl. Math. Comput. 2010, 215, 3433–3442. [Google Scholar] [CrossRef]
  50. Liu, X.; Jin, H.; Yu, Y. Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices. Linear Algebra Appl. 2013, 439, 1635–1650. [Google Scholar] [CrossRef]
  51. Weiguo, L.; Juan, L.; Tiantian, Q. A family of iterative methods for computing Moore-Penrose inverse of a matrix. Linear Algebra Appl. 2013, 438, 47–56. [Google Scholar] [CrossRef]
  52. Stanimirović, P.S.; Srivastava, S.; Gupta, D.K. From Zhang Neural Network to scaled hyperpower iterations. J. Comput. Appl. Math. 2018, 331, 133–155. [Google Scholar] [CrossRef]
  53. Katsikis, V.N.; Stanimirović, P.S.; Mourtas, S.D.; Li, S.; Cao, X. Chapter Towards higher order dynamical systems. In Generalized Inverses: Algorithms and Applications; Mathematics Research Developments; Nova Science Publishers, Inc.: New York, NY, USA, 2021; pp. 207–239. [Google Scholar]
  54. Jin, L.; Zhang, Y.; Li, S. Integration-Enhanced Zhang Neural Network for Real-Time-Varying Matrix Inversion in the Presence of Various Kinds of Noises. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2615–2627. [Google Scholar] [CrossRef]
  55. Farebrother, R.W.; Groß, J.; Troschke, S.O. Matrix representation of quaternions. Linear Algebra Its Appl. 2003, 362, 251–255. [Google Scholar] [CrossRef] [Green Version]
  56. Simos, T.E.; Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S. Finite-time convergent zeroing neural network for solving time-varying algebraic Riccati equations. J. Frankl. Inst. 2022, 359, 10867–10883. [Google Scholar] [CrossRef]
  57. Liu, H.; Wang, T.; Guo, D. Design and validation of zeroing neural network to solve time-varying algebraic Riccati equation. IEEE Access 2020, 8, 211315–211323. [Google Scholar] [CrossRef]
  58. Zhang, Y.; Jin, L. Robot Manipulator Redundancy Resolution; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar] [CrossRef]
Figure 1. EMEs, error of (8), and the trajectories of X ˜ ( t ) in Section 4.1.1. (a) EMEs for p = 2 . (b) EMEs of HZNNQ p . (c) EMEs of HZNNQR p . (d) EMEs of HZNNQC p . (e) Error of (8) for p = 2 . (f) HZNNQ p error of (8). (g) HZNNQR p error of (8). (h) HZNNQC p error of (8). (i) Solutions traj. (j) Solutions traj. (k) Solutions traj. (l) Solutions traj.
Figure 2. EMEs, error of (8), and the trajectories of X ˜ ( t ) in Section 4.1.1 under linear noise with z = 100 . (a) EMEs for p = 2 . (b) EMEs for p = 4 . (c) Error of (8) for p = 2 . (d) Error of (8) for p = 4 . (e) Solutions traj. (f) Solutions traj. (g) Solutions traj. (h) Solutions traj.
Figure 3. EMEs, error of (8), and the trajectories of X ˜ ( t ) in Section 4.1.2. (a) EMEs for p = 2 . (b) EMEs of HZNNQ p . (c) EMEs of HZNNQR p . (d) EMEs of HZNNQC p . (e) Error of (8) for p = 2 . (f) HZNNQ p error of (8). (g) HZNNQR p error of (8). (h) HZNNQC p error of (8). (i) Solutions traj. (j) Solutions traj. (k) Solutions traj. (l) Solutions traj.
Figure 4. EMEs, error of (8), and the trajectories of X ˜ ( t ) in Section 4.1.2 under constant noise with z = 10 . (a) EMEs for p = 2 . (b) EMEs for p = 4 . (c) Error of (8) for p = 2 . (d) Error of (8) for p = 4 . (e) Solutions traj. (f) Solutions traj. (g) Solutions traj. (h) Solutions traj.
Figure 5. Robotic motion tracking application results under bounded noise with z = 1000 . (a) 3-link PM. (b) EMEs for p = 4 . (c) Error of (8). (d) Velocity. (e) Path tracking 3D. (f) Path tracking 2D.
Table 1. Frobenius norm of the HZNN and NHZNN models’ EMEs in SEs of Section 4.1.1 and Section 4.1.2.
Columns 2–6 refer to the SE of Section 4.1.1 and columns 7–11 to the SE of Section 4.1.2.

| Model | p | t = 0 | t = 10⁻⁶ | t = 10⁻³ | t = 10⁻¹ | p | t = 0 | t = 10⁻⁶ | t = 10⁻³ | t = 10⁻¹ |
|---|---|---|---|---|---|---|---|---|---|---|
| HZNNQp | 2 | 663.7 | 663.7 | 657.3 | 244.2 | 2 | 876.2 | 876.2 | 867.5 | 322.8 |
| | 3 | 663.7 | 661.8 | 288.1 | 6.6 | 4 | 876.2 | 236.3 | 79.1 | 22.1 |
| | 4 | 663.7 | 461.7 | 78.6 | 3.1 | 6 | 876.2 | 83.5 | 68.4 | 20.1 |
| HZNNQCp | 2 | 1327.4 | 1327.4 | 1314.5 | 488.5 | 2 | 1752.5 | 1752.5 | 1735.1 | 645.1 |
| | 3 | 1327.4 | 1321.5 | 212.3 | 1.9 | 4 | 1752.5 | 439.2 | 16.8 | 1.7 |
| | 4 | 1327.4 | 512.4 | 23.4 | 0.9 | 6 | 1752.5 | 26.7 | 8.6 | 1.5 |
| HZNNQRp | 2 | 938.6 | 938.6 | 929.5 | 345.1 | 2 | 1239.2 | 1239.2 | 1227.1 | 455.6 |
| | 3 | 938.6 | 934.5 | 150.2 | 1.3 | 4 | 1239.2 | 310.7 | 11.8 | 1.2 |
| | 4 | 938.6 | 362.7 | 16.5 | 0.6 | 6 | 1239.2 | 18.8 | 6.1 | 1.1 |
| NHZNNQp | 2 | 663.7 | 663.7 | 657.9 | 199.9 | 2 | 876.2 | 876.2 | 867.6 | 304.2 |
| | 4 | 663.7 | 461.6 | 76.9 | 34.9 | 4 | 876.2 | 236.3 | 79.1 | 24.5 |
| NHZNNQCp | 2 | 1327.4 | 1327.4 | 1315.8 | 400.7 | 2 | 1752.5 | 1752.5 | 1735.2 | 610.1 |
| | 4 | 1327.4 | 512.5 | 21.3 | 18.5 | 4 | 1752.5 | 439.2 | 16.3 | 9.3 |
| NHZNNQRp | 2 | 938.6 | 938.6 | 930.4 | 285.4 | 2 | 1239.2 | 1239.2 | 1227.1 | 432.8 |
| | 4 | 938.6 | 362.3 | 15.1 | 13.1 | 4 | 1239.2 | 310.8 | 11.5 | 6.6 |

Abbassi, R.; Jerbi, H.; Kchaou, M.; Simos, T.E.; Mourtas, S.D.; Katsikis, V.N. Towards Higher-Order Zeroing Neural Networks for Calculating Quaternion Matrix Inverse with Application to Robotic Motion Tracking. Mathematics 2023, 11, 2756. https://doi.org/10.3390/math11122756

