Article

A Tensor Splitting AOR Iterative Method for Solving a Tensor Absolute Value Equation

Yuhan Chen and Chenliang Li
1 School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin 541004, China
2 School of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1023; https://doi.org/10.3390/math10071023
Submission received: 27 February 2022 / Revised: 19 March 2022 / Accepted: 21 March 2022 / Published: 23 March 2022
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing II)

Abstract: In this paper, a tensor splitting AOR iterative method for solving the $H_+$-tensor absolute value equation is presented. Some sufficient conditions for the existence of the solution to the tensor absolute value equation are given. Under suitable conditions, the new method is proved to be convergent. Finally, some numerical examples demonstrate that our new method is effective.

1. Introduction

Recently, the study of tensors has become an active research field due to its numerous applications, such as tensor complementarity problems [1,2,3], numerical partial differential equations [4], image processing [5] and data mining [6]. With the development of tensor problems, the solution of the tensor absolute value equation has attracted more and more attention. In this paper, we consider the tensor absolute value equation (TAVE):

$$\mathcal{A}x^{m-1} - |x|^{[m-1]} = b, \qquad (1)$$

where $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ is an m-order, n-dimensional tensor; $x = (x_1, x_2, \ldots, x_n)^T$ and the right-hand side $b = (b_1, b_2, \ldots, b_n)^T$ are both n-dimensional vectors; and $|x|^{[m-1]} = (|x_1|^{m-1}, |x_2|^{m-1}, \ldots, |x_n|^{m-1})^T$. The tensor absolute value equation is a further extension of the matrix absolute value equation. Du et al. [7] proved that TAVE (1) is equivalent to a certain class of structured tensor complementarity problems, and they solved it with the Levenberg–Marquardt method. Bu and Ma [8] proposed some tensor splitting methods for solving TAVE (1) and proved the existence of positive solutions when $\mathcal{A}$ is a nonsingular $\mathcal{M}$-tensor. Ning et al. [9] studied a tensor-type successive over-relaxation (SOR) method and a tensor-type accelerated over-relaxation (AOR) method for solving TAVE (1) when $\mathcal{A}$ is a nonsingular $\mathcal{M}$-tensor. More generally, Guo and Gu [10] introduced the following tensor generalized absolute value equation (TGAVE):
$$\mathcal{A}x^{m-1} - \mathcal{B}|x|^{[m-1]} = b, \qquad (2)$$

where $\mathcal{B} = (b_{i_1 i_2 \cdots i_m})$ is also an m-order, n-dimensional tensor, and they proposed a smoothing Newton method to solve it. Ling et al. [11] established the existence of solutions of TGAVE (2) with the help of degree theory and proved that TGAVE (2) has at least one solution under some checkable conditions. Jiang and Li [12] analyzed the existence of solutions of TGAVE (2) and proposed an SOR iterative algorithm to solve this equation. Clearly, (2) reduces to (1) when $\mathcal{B}$ is the m-order, n-dimensional unit tensor.
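The product $\mathcal{A}x^{m-1}$ and the residual of TAVE (1) can be sketched in a few lines of NumPy. This is an illustration only (the paper's experiments are in MATLAB), and the helper names `tensor_apply` and `tave_residual` are ours, not from the paper:

```python
import numpy as np

def tensor_apply(A, x):
    """Compute A x^{m-1}: contract the last m-1 modes of the
    m-order tensor A with the vector x."""
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x          # matmul contracts the last axis each time
    return out

def tave_residual(A, x, b):
    """Residual of TAVE (1): A x^{m-1} - |x|^{[m-1]} - b."""
    m = A.ndim
    return tensor_apply(A, x) - np.abs(x) ** (m - 1) - b

# sanity check with A = 2*I_3 (twice the 3-order unit tensor), n = 2:
n, m = 2, 3
I3 = np.zeros((n,) * m)
for i in range(n):
    I3[(i,) * m] = 1.0
A = 2 * I3
x = np.array([1.0, 2.0])
b = np.array([1.0, 4.0])       # here 2*x_i^2 - x_i^2 = x_i^2 = b_i
print(np.linalg.norm(tave_residual(A, x, b)))  # 0.0
```

For an m-order tensor, repeated contraction with `@` reproduces the usual definition $(\mathcal{A}x^{m-1})_i = \sum_{i_2,\ldots,i_m} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m}$.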
In addition, when $\mathcal{B}$ is a zero tensor, TGAVE (2) reduces to the tensor equation

$$\mathcal{A}x^{m-1} = b. \qquad (3)$$

There are many methods for solving (3) when $\mathcal{A}$ is a nonsingular $\mathcal{M}$-tensor, for example, the Jacobi method, the Gauss–Seidel method, the Newton method [13], the tensor splitting method [14], and the preconditioned tensor splitting method [15]. The $\mathcal{H}$-tensor has been widely used in evolutionary game dynamics and high-order Markov chains. Wang et al. [16] proposed a preconditioned AOR iterative method for solving (3) when the coefficient tensor $\mathcal{A}$ is a nonsingular $\mathcal{H}$-tensor. When $\mathcal{A}$ is an $H_+$-tensor, Wang et al. [17] proved theoretically that (3) has a unique positive solution for every positive right-hand side $b$, and also proved that, for odd m, (3) has a negative solution.

To our knowledge, although TAVE (1) has been studied in recent years, the literature on it is still limited. Most existing works deal with TAVE (1) in which $\mathcal{A}$ is an $\mathcal{M}$-tensor, while results about TAVE (1) with $\mathcal{A}$ an $H_+$-tensor are relatively rare. Additionally, TAVE (1) with an $H_+$-tensor is a further extension of the $H_+$-matrix absolute value equation [18]. Motivated by these observations, a tensor splitting AOR iterative method for solving TAVE (1) when $\mathcal{A}$ is an $H_+$-tensor is studied intensively in this paper, and we illustrate the effectiveness of this method through some numerical examples in Section 4.
The main structure of this paper is as follows. In Section 2, we introduce some notation, definitions, and lemmas. In Section 3, we give some sufficient conditions for the existence of the solution of TAVE (1). Then we give a tensor splitting AOR iterative algorithm and prove the convergence of the algorithm. In Section 4, the results of some numerical examples are given.

2. Preliminaries

We first introduce some notation relevant to this paper. Let $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{[m,n]}$ be the real field, the set of all n-dimensional real vectors, and the set of all m-order, n-dimensional real tensors, respectively. Denote by $\mathbf{0}$, $O$, and $\mathcal{O}$ a null vector, a null matrix, and a null tensor, respectively. Let $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{[m,n]}$. The order $\mathcal{A} \ge \mathcal{B}$ means that $a_{i_1 i_2 \cdots i_m} \ge b_{i_1 i_2 \cdots i_m}$. If $\mathcal{A} \ge \mathcal{O}$, that is, $a_{i_1 i_2 \cdots i_m} \ge 0$, the tensor $\mathcal{A}$ is called a nonnegative tensor. The m-order, n-dimensional unit tensor $\mathcal{I}_m = (\delta_{i_1 i_2 \cdots i_m})$ is given by

$$\delta_{i_1 i_2 \cdots i_m} = \begin{cases} 1, & i_1 = i_2 = \cdots = i_m, \\ 0, & \text{otherwise}. \end{cases}$$

Next, we summarize some definitions and lemmas related to this paper. We first introduce the definitions of the $\mathcal{M}$-tensor, $\mathcal{H}$-tensor, and $H_+$-tensor.
Definition 1
([19]). $\mathcal{A} \in \mathbb{R}^{[m,n]}$ is called an $\mathcal{M}$-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $\eta \ge \rho(\mathcal{B})$ such that

$$\mathcal{A} = \eta \mathcal{I}_m - \mathcal{B}.$$

If $\eta > \rho(\mathcal{B})$, then $\mathcal{A}$ is called a nonsingular $\mathcal{M}$-tensor.
Definition 2
([19]). Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. We call the tensor $\langle\mathcal{A}\rangle = (m_{i_1 i_2 \cdots i_m})$ the comparison tensor of $\mathcal{A}$ if

$$m_{i_1 i_2 \cdots i_m} = \begin{cases} |a_{i_1 i_2 \cdots i_m}|, & (i_1 i_2 \cdots i_m) = (i_1 i_1 \cdots i_1), \\ -|a_{i_1 i_2 \cdots i_m}|, & (i_1 i_2 \cdots i_m) \ne (i_1 i_1 \cdots i_1). \end{cases}$$
Definition 3
([19,20]). Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. We call $\mathcal{A}$ an $\mathcal{H}$-tensor if its comparison tensor is an $\mathcal{M}$-tensor, and a nonsingular $\mathcal{H}$-tensor if its comparison tensor is a nonsingular $\mathcal{M}$-tensor. We call $\mathcal{A}$ an $H_+$-tensor if it is a nonsingular $\mathcal{H}$-tensor with all diagonal elements $a_{ii\cdots i} > 0$.
The following introduces the majorization matrix and the order-2 left-inverse of the tensor.
Definition 4
([21]). Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. The majorization matrix $M(\mathcal{A})$ of $\mathcal{A}$ is defined as the $n \times n$ matrix with entries

$$(M(\mathcal{A}))_{ij} = a_{ijj\cdots j}, \quad i, j = 1, 2, \ldots, n.$$
Definition 5
([22]). Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. If $M(\mathcal{A})$ is a nonsingular matrix and $\mathcal{A} = M(\mathcal{A})\mathcal{I}_m$, we call $M(\mathcal{A})^{-1}$ the order-2 left-inverse of $\mathcal{A}$.
Definition 6
([14]). Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. If $\mathcal{A}$ has an order-2 left-inverse, then $\mathcal{A}$ is called a left-invertible tensor or left-nonsingular tensor.
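Definitions 4–6 can be made concrete with a small NumPy sketch (the function names `majorization` and `unit_tensor` are ours, not from the paper): a tensor of the form $\mathcal{A} = B\,\mathcal{I}_m$ has majorization matrix $M(\mathcal{A}) = B$, and when $B$ is nonsingular, $B^{-1}$ is the order-2 left-inverse.

```python
import numpy as np

def majorization(A):
    """Majorization matrix M(A), with M(A)[i, j] = a_{i j j ... j}."""
    n = A.shape[0]
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = A[(i,) + (j,) * (A.ndim - 1)]
    return M

def unit_tensor(m, n):
    """m-order, n-dimensional unit tensor I_m."""
    I = np.zeros((n,) * m)
    for i in range(n):
        I[(i,) * m] = 1.0
    return I

# A tensor of the form A = B * I_m is left-invertible (Definition 5):
B = np.array([[2.0, 1.0], [0.0, 3.0]])
A = np.tensordot(B, unit_tensor(3, 2), axes=1)   # mode-1 product B * I_3
print(majorization(A))                           # recovers B
Minv = np.linalg.inv(majorization(A))            # the order-2 left-inverse
```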
Tensor H-splitting and H-compatible splitting are introduced next.
Definition 7
([14]). Let $\mathcal{A}, \mathcal{E}, \mathcal{F} \in \mathbb{R}^{[m,n]}$. $\mathcal{A} = \mathcal{E} - \mathcal{F}$ is said to be a splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular, and a convergent splitting if, in addition, $\rho(M(\mathcal{E})^{-1}\mathcal{F}) < 1$.
Definition 8
([16]). Let $\mathcal{A}, \mathcal{E}, \mathcal{F} \in \mathbb{R}^{[m,n]}$. The splitting $\mathcal{A} = \mathcal{E} - \mathcal{F}$ is called
(1) an H-splitting if $\mathcal{E}$ is a left-nonsingular tensor and $\langle\mathcal{E}\rangle - |\mathcal{F}|$ is a nonsingular $\mathcal{M}$-tensor;
(2) an H-compatible splitting if $\mathcal{E}$ is a left-nonsingular tensor and $\langle\mathcal{A}\rangle = \langle\mathcal{E}\rangle - |\mathcal{F}|$.
Lemma 1
([16]). Let $\mathcal{A}, \mathcal{E}, \mathcal{F} \in \mathbb{R}^{[m,n]}$. If $\mathcal{A} = \mathcal{E} - \mathcal{F}$ is an H-splitting, then $\mathcal{A}$ and $\mathcal{E}$ are nonsingular $\mathcal{H}$-tensors and $\rho(M(\mathcal{E})^{-1}\mathcal{F}) \le \rho(M(\langle\mathcal{E}\rangle)^{-1}|\mathcal{F}|) < 1$.
Lemma 2
([16]). Let $\mathcal{A}, \mathcal{E}, \mathcal{F} \in \mathbb{R}^{[m,n]}$. If the splitting $\mathcal{A} = \mathcal{E} - \mathcal{F}$ is an H-compatible splitting and $\mathcal{A}$ is a nonsingular $\mathcal{H}$-tensor, then it is an H-splitting and $\rho(M(\mathcal{E})^{-1}\mathcal{F}) < 1$.
From the above Lemmas 1 and 2, we have the following lemma.
Lemma 3.
Let $\mathcal{A}$ be an $H_+$-tensor. Then all H-(compatible) splittings of $\mathcal{A}$ are convergent.
Proof. 
Assume that $\mathcal{A} = \mathcal{E} - \mathcal{F}$ is an H-(compatible) splitting. Since $\mathcal{A}$ is an $H_+$-tensor, $\mathcal{A}$ is a nonsingular $\mathcal{H}$-tensor. By Lemmas 1 and 2, we get $\rho(M(\mathcal{E})^{-1}\mathcal{F}) < 1$.    □
Finally, for the tensor Equation (3) with $\mathcal{A}$ an $H_+$-tensor, the existence of a positive solution and of a negative solution is given in the following lemmas.
Lemma 4
([17]). If $\mathcal{A}$ is an $H_+$-tensor, then for every positive vector $b$, the tensor Equation (3) has a unique positive solution.
Lemma 5
([17]). Let $\mathcal{A}$ be an $H_+$-tensor. For every positive vector $b$ and an odd m, if $x$ is a solution of

$$\mathcal{A}x^{m-1} - b = 0,$$

then the tensor Equation (3) has a negative solution $-x$.

3. Main Results

3.1. Existence and Uniqueness of Solutions of TAVE (1)

First of all, we discuss the conditions for the existence and uniqueness of the solution of TAVE (1).
Theorem 1.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}), \mathcal{B}, \mathcal{I}_m \in \mathbb{R}^{[m,n]}$. If the diagonal elements $a_{ii\cdots i} > 1$ and the comparison tensor $\langle\mathcal{A}\rangle$ can be written as $\langle\mathcal{A}\rangle = c\mathcal{I}_m - \mathcal{B}$ with $\mathcal{B} \ge \mathcal{O}$ and $c > \rho(\mathcal{B}) + 1$, then for every positive vector $b$, TAVE (1) has a unique positive solution.
Proof. 
Since $a_{ii\cdots i} > 1$, we get

$$\langle\mathcal{A} - \mathcal{I}_m\rangle_{i_1 i_2 \cdots i_m} = \begin{cases} |a_{i_1 i_2 \cdots i_m}| - 1, & (i_1 i_2 \cdots i_m) = (i_1 i_1 \cdots i_1), \\ -|a_{i_1 i_2 \cdots i_m}|, & (i_1 i_2 \cdots i_m) \ne (i_1 i_1 \cdots i_1), \end{cases}$$

and then we obtain $\langle\mathcal{A} - \mathcal{I}_m\rangle = \langle\mathcal{A}\rangle - \mathcal{I}_m$. Let $s = c - 1$. From $\langle\mathcal{A}\rangle = c\mathcal{I}_m - \mathcal{B}$, we have

$$\langle\mathcal{A}\rangle - \mathcal{I}_m = s\mathcal{I}_m - \mathcal{B}, \quad \mathcal{B} \ge \mathcal{O}, \quad s > \rho(\mathcal{B}),$$

which means that $\langle\mathcal{A}\rangle - \mathcal{I}_m$ is a nonsingular $\mathcal{M}$-tensor; that is, $\langle\mathcal{A} - \mathcal{I}_m\rangle$ is also a nonsingular $\mathcal{M}$-tensor. Thus, $\mathcal{A} - \mathcal{I}_m$ is an $H_+$-tensor. By Lemma 4, the tensor equation

$$(\mathcal{A} - \mathcal{I}_m)x^{m-1} = b \qquad (4)$$

has a unique positive solution for every positive vector $b$. Additionally, since $|x|^{[m-1]} = x^{[m-1]} = \mathcal{I}_m x^{m-1}$ when $x > 0$, TAVE (1) can be converted to the tensor Equation (4). Hence, for every positive vector $b$, TAVE (1) has a unique positive solution.    □
Theorem 2.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}), \mathcal{B}, \mathcal{I}_m \in \mathbb{R}^{[m,n]}$, and let $x$ be a solution of TAVE (1). If the diagonal elements $a_{ii\cdots i} > 1$ and the comparison tensor $\langle\mathcal{A}\rangle$ can be written as $\langle\mathcal{A}\rangle = c\mathcal{I}_m - \mathcal{B}$ with $\mathcal{B} \ge \mathcal{O}$ and $c > \rho(\mathcal{B}) + 1$, then TAVE (1) has a negative solution $-x$ for every positive vector $b$ and an odd m.
Proof. 
It is known from the proof of Theorem 1 that $\mathcal{A} - \mathcal{I}_m$ is an $H_+$-tensor. By Lemma 5, it follows that for every positive vector $b$ and an odd m, if $x > 0$ is a solution of the tensor Equation (4), then so is $-x$; that is,

$$(\mathcal{A} - \mathcal{I}_m)(-x)^{m-1} = b.$$

Hence, for every positive vector $b$ and an odd m, if $x$ is a solution of TAVE (1), then TAVE (1) has a negative solution $-x$.    □
Under the conditions of Theorems 1 and 2, we proved that $\mathcal{A} - \mathcal{I}_m$ is an $H_+$-tensor. Therefore, we have the following two corollaries.
Corollary 1.
Let $\mathcal{A}, \mathcal{I}_m \in \mathbb{R}^{[m,n]}$. If $\mathcal{A} - \mathcal{I}_m$ is an $H_+$-tensor, then for every positive vector $b$, TAVE (1) has a unique positive solution.
Corollary 2.
Let $\mathcal{A}, \mathcal{I}_m \in \mathbb{R}^{[m,n]}$, and let $x$ be a solution of TAVE (1). If $\mathcal{A} - \mathcal{I}_m$ is an $H_+$-tensor, then TAVE (1) has a negative solution $-x$ for every positive vector $b$ and an odd m.

3.2. Tensor Splitting AOR Iterative Method

In [16], a preconditioned tensor splitting AOR iterative method was proposed by Wang et al. to solve the tensor Equation (3). Next, based on the method of [16], we give a tensor splitting AOR iterative method for solving TAVE (1).

In Theorems 1 and 2, we proved that TAVE (1) can be converted to the tensor Equation (4) when $x > 0$, or when m is odd and $x < 0$. Thus, we can get the solution of TAVE (1) by solving the tensor Equation (4). First, consider a splitting of the tensor $\mathcal{A} - \mathcal{I}_m$ into

$$\mathcal{A} - \mathcal{I}_m = \mathcal{E} - \mathcal{F}.$$
If $M(\mathcal{E})^{-1}$ exists, then an iterative formula for solving the tensor Equation (4) can be written as

$$x_k = \left( \mathcal{T} x_{k-1}^{m-1} + f \right)^{\left[\frac{1}{m-1}\right]}, \quad k = 1, 2, \ldots,$$

where $\mathcal{T} = M(\mathcal{E})^{-1}\mathcal{F}$ and $f = M(\mathcal{E})^{-1}b$. The tensor $\mathcal{T}$ is called the iteration tensor of the splitting method. Let
$$\mathcal{A} - \mathcal{I}_m = \mathcal{D} - \mathcal{L} - \mathcal{U},$$

where $\mathcal{D} = D\mathcal{I}_m$ and $\mathcal{L} = L\mathcal{I}_m$, with $D$ and $L$ being the diagonal part and the strictly lower triangular part of $M(\mathcal{A} - \mathcal{I}_m)$, respectively. If

$$\mathcal{E} = \frac{1}{\omega}(\mathcal{D} - r\mathcal{L}), \quad \mathcal{F} = \frac{1}{\omega}\left[(1-\omega)\mathcal{D} + (\omega - r)\mathcal{L} + \omega\mathcal{U}\right], \qquad (5)$$

then the splitting method with (5) is called the AOR method. The iteration tensor of the AOR method is given by $\mathcal{T}_{AOR} = M(\mathcal{E})^{-1}\mathcal{F} = M(\mathcal{D} - r\mathcal{L})^{-1}\left[(1-\omega)\mathcal{D} + (\omega - r)\mathcal{L} + \omega\mathcal{U}\right]$, where $\omega$ and $r$ are real parameters with $\omega \ne 0$. In particular, the AOR method becomes the SOR method when $r = \omega$, the Gauss–Seidel method when $r = \omega = 1$, and the Jacobi method when $r = 0$ and $\omega = 1$. The corresponding tensor splitting AOR iterative method for solving TAVE (1) is given in Algorithm 1.
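As an illustrative sketch (our own helper, not the authors' code), the AOR pair $(\mathcal{E}, \mathcal{F})$ of (5) can be assembled from the majorization matrix; we adopt the sign convention that $D - L$ is the lower triangular part of $M(\mathcal{A} - \mathcal{I}_m)$, so that $\mathcal{A} - \mathcal{I}_m = \mathcal{D} - \mathcal{L} - \mathcal{U}$ holds:

```python
import numpy as np

def majorization(A):
    n = A.shape[0]
    return np.array([[A[(i,) + (j,) * (A.ndim - 1)] for j in range(n)]
                     for i in range(n)])

def unit_tensor(m, n):
    I = np.zeros((n,) * m)
    for i in range(n):
        I[(i,) * m] = 1.0
    return I

def aor_splitting(Ahat, omega, r):
    """For Ahat = A - I_m = D - L - U, return (M(E), F) with
    E = (1/w)(D - r L), F = (1/w)[(1-w)D + (w-r)L + w U]."""
    m, n = Ahat.ndim, Ahat.shape[0]
    Mmat = majorization(Ahat)
    Dmat = np.diag(np.diag(Mmat))
    Lmat = -np.tril(Mmat, -1)      # sign convention: D - L = lower part of M(Ahat)
    I = unit_tensor(m, n)
    Dten = np.tensordot(Dmat, I, axes=1)
    Lten = np.tensordot(Lmat, I, axes=1)
    Uten = Dten - Lten - Ahat      # remaining part of the splitting
    ME = (Dmat - r * Lmat) / omega # majorization matrix of E (lower triangular)
    F = ((1 - omega) * Dten + (omega - r) * Lten + omega * Uten) / omega
    return ME, F

# check the splitting identity Ahat = E - F on a small 3-order example:
Ahat = 2 * unit_tensor(3, 2)
Ahat[0, 1, 1] = -0.25
Ahat[1, 0, 0] = -0.3
ME, F = aor_splitting(Ahat, omega=0.9, r=0.5)
E = np.tensordot(ME, unit_tensor(3, 2), axes=1)  # here E = M(E) * I_m
print(np.allclose(E - F, Ahat))  # True
```

Since $\mathcal{D}$ and $\mathcal{L}$ are both of the form (matrix) $\times\,\mathcal{I}_m$, the tensor $\mathcal{E}$ equals $M(\mathcal{E})\mathcal{I}_m$ and is left-nonsingular whenever the diagonal of $D$ is nonzero.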
Next, we give the convergence theorem of Algorithm 1.
Algorithm 1 Tensor splitting AOR iterative method.
1. Given an $H_+$-tensor $\mathcal{A}$, a splitting $\mathcal{A} - \mathcal{I}_m = \mathcal{E} - \mathcal{F}$, a right-hand vector $b > 0$, a maximal iteration number $k_{\max}$, a machine precision $\varepsilon$, and a positive initial vector $x_0$. Initialize $k := 1$.
2. While $k < k_{\max}$, compute
$$x_k = \left( M(\mathcal{E})^{-1}\mathcal{F} x_{k-1}^{m-1} + M(\mathcal{E})^{-1} b \right)^{\left[\frac{1}{m-1}\right]}.$$
3. If $\|(\mathcal{A} - \mathcal{I}_m)x_k^{m-1} - b\|_2 \le \varepsilon$, output the solution $x_k$ and stop.
4. Let $k = k + 1$ and return to step 2.
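Algorithm 1 admits a compact NumPy sketch. This is our own illustrative implementation, not the authors' MATLAB code, and it assumes the iterates stay positive so the componentwise root is real (which the theory guarantees for an $H_+$-tensor $\mathcal{A} - \mathcal{I}_m$ with $b > 0$):

```python
import numpy as np

def unit_tensor(m, n):
    I = np.zeros((n,) * m)
    for i in range(n):
        I[(i,) * m] = 1.0
    return I

def tensor_apply(A, x):
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x                        # contract the last axis with x
    return out

def majorization(A):
    n = A.shape[0]
    return np.array([[A[(i,) + (j,) * (A.ndim - 1)] for j in range(n)]
                     for i in range(n)])

def tave_aor(A, b, omega=1.0, r=1.0, eps=1e-11, kmax=1000):
    """Tensor splitting AOR iteration (Algorithm 1) for
    A x^{m-1} - |x|^{[m-1]} = b with A an H+-tensor and b > 0."""
    m, n = A.ndim, A.shape[0]
    Ahat = A - unit_tensor(m, n)             # A - I_m
    Mmat = majorization(Ahat)
    Dmat = np.diag(np.diag(Mmat))
    Lmat = -np.tril(Mmat, -1)
    I = unit_tensor(m, n)
    Dten = np.tensordot(Dmat, I, axes=1)
    Lten = np.tensordot(Lmat, I, axes=1)
    Uten = Dten - Lten - Ahat
    ME = (Dmat - r * Lmat) / omega           # M(E), lower triangular
    F = ((1 - omega) * Dten + (omega - r) * Lten + omega * Uten) / omega
    x = np.full(n, 1.0 / n)                  # positive initial vector
    for _ in range(kmax):
        y = np.linalg.solve(ME, tensor_apply(F, x) + b)
        x = y ** (1.0 / (m - 1))             # componentwise (m-1)-th root
        if np.linalg.norm(tensor_apply(Ahat, x) - b) <= eps:
            break
    return x

# small 3-order test: 2*x0^2 - 0.25*x1^2 = 1, 2*x1^2 = 8 has solution x = (1, 2)
A = 3 * unit_tensor(3, 2)
A[0, 1, 1] = -0.25
b = np.array([1.0, 8.0])
print(tave_aor(A, b, omega=1.0, r=1.0))      # [1. 2.]
```

With `omega = r = 1` this is the Gauss–Seidel special case; other parameter pairs give the SOR (`r = omega`) and Jacobi (`r = 0, omega = 1`) variants mentioned above.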
Theorem 3.
Let $\mathcal{A} - \mathcal{I}_m \in \mathbb{R}^{[m,n]}$ be an $H_+$-tensor.
(1) If $0 \le r \le \omega \le 1$ ($\omega \ne 0$), then $\rho(\mathcal{T}_{AOR}) < 1$;
(2) If $0 \le r \le \omega \le 2$ ($\omega > 1$) and $\left(\frac{2}{\omega} - 1\right)|\mathcal{D}| - |\mathcal{L}| - |\mathcal{U}|$ is a nonsingular $\mathcal{M}$-tensor, then $\rho(\mathcal{T}_{AOR}) < 1$.
Proof. 
According to the AOR splitting (5), we have

$$\langle\mathcal{E}\rangle = \frac{1}{\omega}(|\mathcal{D}| - r|\mathcal{L}|), \quad |\mathcal{F}| = \frac{1}{\omega}\left[|1-\omega| \cdot |\mathcal{D}| + (\omega - r)|\mathcal{L}| + \omega|\mathcal{U}|\right].$$

(1) Since $0 \le r \le \omega \le 1$ ($\omega \ne 0$), we obtain

$$\langle\mathcal{E}\rangle - |\mathcal{F}| = \frac{1}{\omega}(|\mathcal{D}| - r|\mathcal{L}|) - \frac{1}{\omega}\left[(1-\omega)|\mathcal{D}| + (\omega - r)|\mathcal{L}| + \omega|\mathcal{U}|\right] = |\mathcal{D}| - |\mathcal{L}| - |\mathcal{U}| = \langle\mathcal{A} - \mathcal{I}_m\rangle.$$

Since $\mathcal{A} - \mathcal{I}_m$ is an $H_+$-tensor, the diagonal tensor $\mathcal{D}$ is nonsingular. Additionally, $r \le \omega$, so $\mathcal{E}$ is a left-nonsingular tensor. Therefore, the splitting $\mathcal{A} - \mathcal{I}_m = \mathcal{E} - \mathcal{F}$ is an H-compatible splitting. From Lemma 3, the AOR splitting (5) is convergent, so $\rho(\mathcal{T}_{AOR}) < 1$.

(2) It can be seen from the proof of (1) that $\mathcal{E}$ is a left-nonsingular tensor. When $0 \le r \le \omega \le 2$ ($\omega > 1$), we have

$$\langle\mathcal{E}\rangle - |\mathcal{F}| = \left(\frac{2}{\omega} - 1\right)|\mathcal{D}| - |\mathcal{L}| - |\mathcal{U}|.$$

As $\left(\frac{2}{\omega} - 1\right)|\mathcal{D}| - |\mathcal{L}| - |\mathcal{U}|$ is a nonsingular $\mathcal{M}$-tensor, $\mathcal{A} - \mathcal{I}_m = \mathcal{E} - \mathcal{F}$ is an H-splitting. Hence, by Lemma 3, the AOR splitting (5) is convergent; in other words, $\rho(\mathcal{T}_{AOR}) < 1$. □

4. Numerical Examples

In this section, we verify the effectiveness of Algorithm 1 through some numerical examples. All tests were performed in MATLAB R2018b on a machine with an Intel(R) Core(TM) i5-8265 CPU @ 1.60 GHz. The number of iterations is denoted by IT, and the CPU time in seconds is denoted by CPU(s). We set the maximum number of iterations to $k_{\max} = 1000$ and the stopping tolerance to $\varepsilon = 10^{-11}$.
Example 1.
Let $\mathcal{A} \in \mathbb{R}^{[3,3]}$ be the $H_+$-tensor whose mode-1 unfolding is

$$\mathcal{A}_{(1)} = \begin{pmatrix} 3 & 0.36 & 0.40 & 0.16 & 0.10 & 0.13 & 0.06 & 0.04 & 0.08 \\ 0.12 & 0.10 & 0.20 & 0.05 & 4 & 0.80 & 0.04 & 0.12 & 0.05 \\ 0.13 & 0.12 & 0.20 & 0.06 & 0.08 & 0.04 & 0.04 & 0.02 & 2 \end{pmatrix},$$

and let the right-hand side $b = (1, 1, 1)^T$.
It is easy to prove that $\mathcal{A} - \mathcal{I}_3$ is also an $H_+$-tensor and that the AOR splitting of $\mathcal{A} - \mathcal{I}_3 = \mathcal{E} - \mathcal{F}$ is convergent when $0 \le r \le \omega \le 1.2$ and $\omega \ne 0$. Let the initial value $x_0 = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})^T$. The numerical results of Algorithm 1 are shown in Figure 1, where the values of $\omega$ range from 0.1 to 1.2 in steps of 0.1 and the values of $r$ from 0 to $\omega$ in steps of 0.1.

As can be seen in Figure 1, Algorithm 1 effectively solves TAVE (1). Of the parameters $\omega$ and $r$ in this experiment, the value of $\omega$ has the greater impact on the running time of the algorithm. The optimal parameters in Example 1 were $\omega = 1.1$ and $r = 0.1$. The minimum CPU time was 0.01042 s, which is marked by "#" in Figure 1, and the corresponding number of iteration steps was 13.
Example 2.
Let

$$\mathcal{A} = \left| (n^2 + 1)\mathcal{I}_3 - \mathcal{B} \right| \in \mathbb{R}^{[3,n]},$$

where $\mathcal{B} \in \mathbb{R}^{[3,n]}$ is a nonnegative tensor with $b_{i_1 i_2 i_3} = |\sin(i_1 + i_2 + i_3)|$, and the right-hand side $b = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$.
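The test tensor of Example 2 can be reproduced with a few lines of NumPy (our own sketch, with 1-based indices as in the paper; `example2_tensor` is a hypothetical helper name):

```python
import numpy as np

def unit_tensor(m, n):
    I = np.zeros((n,) * m)
    for i in range(n):
        I[(i,) * m] = 1.0
    return I

def example2_tensor(n):
    """A = |(n^2 + 1) I_3 - B| with b_{i1 i2 i3} = |sin(i1 + i2 + i3)|,
    where the indices i1, i2, i3 run from 1 to n."""
    i = np.arange(1, n + 1)
    B = np.abs(np.sin(i[:, None, None] + i[None, :, None] + i[None, None, :]))
    return np.abs((n * n + 1) * unit_tensor(3, n) - B)

A = example2_tensor(5)
# every diagonal entry a_{iii} = (n^2 + 1) - |sin(3i)| exceeds 1, as Theorem 1 requires
print(A.shape, A[(0,) * 3])
```

The off-diagonal entries are the $|\sin(\cdot)|$ values themselves, so $\langle\mathcal{A}\rangle = (n^2+1)\mathcal{I}_3 - \mathcal{B}$, matching the discussion in the text.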
By [23], we know that $\langle\mathcal{A}\rangle - \mathcal{I}_3 = n^2\mathcal{I}_3 - \mathcal{B}$ is a nonsingular $\mathcal{M}$-tensor, so $\mathcal{A} - \mathcal{I}_3$ is an $H_+$-tensor. Let the initial value $x_0 = (\frac{1}{2n}, \frac{1}{2n}, \ldots, \frac{1}{2n})^T \in \mathbb{R}^n$. For the SOR method, we took values of $\omega$ ranging from 0.1 to 1.0 in steps of 0.1. For the AOR method, we took values of $\omega$ ranging from 0.1 to 1.0 in steps of 0.1 and values of $r$ from 0 to $\omega$ in steps of 0.1. For $n = 3, 5, 10, 30, 50, 100, 150, 200$, and 300, we solved TAVE (1) by the Jacobi method, the Gauss–Seidel method, the SOR method, and the AOR method. The numerical results are shown in Table 1, where $\omega$ and $r$ are the optimal parameters.

It can be seen in Table 1 that, with the optimal parameters $\omega$ and $r$, the AOR and SOR methods are more effective than the Jacobi and Gauss–Seidel methods for solving TAVE (1) with $n = 3, 5, 10, 30, 50, 100, 150, 200$, and 300. In particular, the AOR method is the best. Furthermore, we found that the number of iteration steps decreases as $n$ ($n \ge 5$) increases for all four methods. Therefore, our algorithm may be more efficient for solving large-scale problems.
Example 3.
Let $\mathcal{B} \in \mathbb{R}^{[4,n]}$ be a nonnegative tensor generated randomly by MATLAB and

$$s = 1 + (1 + 0.01)\max_{i = 1, 2, \ldots, n}(\mathcal{B}e^3)_i,$$

where $e = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Let $\mathcal{A} = (a_{i_1 i_2 i_3 i_4})$, $\tilde{\mathcal{A}} = s\mathcal{I}_4 - \mathcal{B} = (\tilde{a}_{i_1 i_2 i_3 i_4})$, and

$$a_{i_1 i_2 i_3 i_4} = (-1)^{i_1 + i_2 + i_3 + i_4} \, \tilde{a}_{i_1 i_2 i_3 i_4}, \quad 1 \le i_1, i_2, i_3, i_4 \le n.$$

The right-hand side is $b = e$.

It is easy to see that $s > \rho(\mathcal{B}) + 1$ (since $\rho(\mathcal{B}) \le \max_i(\mathcal{B}e^3)_i$ for a nonnegative tensor) and $\langle\mathcal{A}\rangle = \tilde{\mathcal{A}}$, so $\mathcal{A} - \mathcal{I}_4$ is an $H_+$-tensor. We took the initial value $x_0 = (\frac{1}{n+1}, \frac{1}{n+1}, \ldots, \frac{1}{n+1})^T \in \mathbb{R}^n$, values of $\omega$ from 0.1 to 1.0 in steps of 0.1, and values of $r$ from 0 to $\omega$ in steps of 0.1. For different $\omega$ and $r$, we solved TAVE (1) with $n = 4, 10, 30$, and 50 by Algorithm 1. All numerical results are depicted in Figure 2, and the numerical results under the optimal parameters $\omega$ and $r$ are shown in Table 2.
Figure 2 and Table 2 show that Algorithm 1 is efficient for solving TAVE (1) when A is a 4-order H + -tensor. In Figure 2, the minimum CPU time is represented by “#” for n = 4 , 10 , 30 , and 50. We observe that Algorithm 1 performs more efficiently as ω increases, and reaches the optimum when ω = 1.0 .

5. Conclusions

In this paper, a tensor splitting AOR iterative method was presented to solve the $H_+$-tensor absolute value equation. The convergence of the new method was analyzed, and some numerical examples verified its effectiveness. In the future, we will explore theoretically optimal parameters for our algorithm and study more efficient algorithms for solving TAVE (1). Our AOR iterative method may also be applied to control problems [24,25,26,27,28], which our team will investigate in future work.

Author Contributions

All authors contributed equally and significantly in writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Natural Science Foundation of China (11661027), the Guangxi Natural Science Foundation (2020GXNSFAA159143), and the Innovation Project of GUET Graduate Education (2021YCXS114).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Che, M.; Qi, L.; Wei, Y. Positive-definite tensors to nonlinear complementarity problems. J. Optim. Theory Appl. 2016, 168, 475–487.
  2. Bai, X.; Huang, Z.; Wang, Y. Global uniqueness and solvability for tensor complementarity problems. J. Optim. Theory Appl. 2016, 170, 72–84.
  3. Wang, X.; Che, M.; Wei, Y. Global uniqueness and solvability of tensor complementarity problems for H+-tensors. Numer. Algorithms 2020, 84, 567–590.
  4. Khoromskij, B.N. Tensor numerical methods for multidimensional PDEs: Theoretical analysis and initial applications. ESAIM Proc. Surv. 2015, 48, 1–28.
  5. Qi, L.; Yu, G.; Wu, E. Higher order positive semi-definite diffusion tensor imaging. SIAM J. Imaging Sci. 2010, 3, 416–433.
  6. Eldén, L. Matrix methods in data mining and pattern recognition. Int. Stat. Rev. 2007, 75, 409–438.
  7. Du, S.; Zhang, L.; Chen, C.; Qi, L. Tensor absolute value equations. Sci. China Math. 2018, 61, 1695–1710.
  8. Bu, F.; Ma, C. The tensor splitting methods for solving tensor absolute value equation. Comput. Appl. Math. 2020, 39, 178.
  9. Ning, J.; Xie, Y.; Yao, J. Efficient splitting methods for solving tensor absolute value equation. Symmetry 2022, 14, 387.
  10. Guo, X.; Gu, W. A smoothing Newton method for tensor generalized absolute value equations. Adv. Math. (China) 2020, 49, 761–768.
  11. Ling, C.; Yan, W.; He, H.; Qi, L. Further study on tensor absolute value equations. Sci. China Math. 2020, 63, 2137–2156.
  12. Jiang, Z.; Li, J. Solving tensor absolute value equation. Appl. Numer. Math. 2021, 170, 255–268.
  13. Ding, W.; Wei, Y. Solving multi-linear system with M-tensors. J. Sci. Comput. 2016, 68, 689–715.
  14. Liu, D.; Li, W.; Vong, W. The tensor splitting with application to solve multi-linear systems. J. Comput. Appl. Math. 2018, 330, 75–94.
  15. Li, W.; Liu, D.; Vong, W. Comparison results for splitting iterations for solving multi-linear systems. Appl. Numer. Math. 2018, 134, 105–121.
  16. Wang, X.; Che, M.; Wei, Y. Preconditioned tensor splitting AOR iterative methods for H-tensor equations. Numer. Linear Algebra Appl. 2020, 27, e2329.
  17. Wang, X.; Che, M.; Wei, Y. Existence and uniqueness of positive solution for H+-tensor equations. Appl. Math. Lett. 2019, 98, 191–198.
  18. Wang, H.; Liu, H.; Cao, S. A verification method for enclosing solutions of absolute value equations. Collect. Math. 2013, 64, 17–38.
  19. Ding, W.; Qi, L.; Wei, Y. M-tensors and nonsingular M-tensors. Linear Algebra Appl. 2013, 439, 3264–3278.
  20. Wang, X.; Wei, Y. H-tensors and nonsingular H-tensors. Front. Math. China 2016, 11, 557–575.
  21. Pearson, K. Essentially positive tensors. Int. J. Algebra 2010, 4, 421–427.
  22. Liu, W.; Li, W. On the inverse of a tensor. Linear Algebra Appl. 2016, 495, 199–205.
  23. Xie, Z.; Jin, X.; Wei, Y. Tensor methods for solving symmetric M-tensor systems. J. Sci. Comput. 2018, 74, 412–425.
  24. Singh, A.; Shukla, A.; Vijayakumar, V.; Udhayakumar, R. Asymptotic stability of fractional order (1,2] stochastic delay differential equations in Banach spaces. Chaos Solitons Fractals 2021, 150, 111095.
  25. Vijayakumar, V.; Nisar, K.; Chalishajar, D.; Shukla, A.; Malik, M.; Alsaadi, A.; Aldosary, S. A note on approximate controllability of fractional semilinear integrodifferential control systems via resolvent operators. Fractal Fract. 2022, 6, 73.
  26. Vijayakumar, V.; Panda, S.; Nisar, K.; Baskonus, H. Results on approximate controllability of second-order Sobolev-type impulsive neutral differential evolution inclusions with infinite delay. Numer. Methods Partial Differ. Equ. 2021, 37, 1200–1221.
  27. Shukla, A.; Sukavanam, N.; Pandey, D. Complete controllability of semi-linear stochastic system with delay. Rend. Circ. Mat. Palermo 2015, 64, 209–220.
  28. Shukla, A.; Patel, R. Existence and optimal control results for second-order semilinear system in Hilbert spaces. Circuits Syst. Signal Process. 2021, 40, 4246–4258.
Figure 1. The numerical results of Example 1.
Figure 2. The numerical results of Example 3. (a) n = 4; (b) n = 10; (c) n = 30; (d) n = 50.
Table 1. The numerical results of Example 2.

| n | Jacobi CPU(s) | Jacobi IT | Gauss–Seidel CPU(s) | Gauss–Seidel IT | SOR ω | SOR CPU(s) | SOR IT | AOR (ω, r) | AOR CPU(s) | AOR IT |
|---|---|---|---|---|---|---|---|---|---|---|
| 3 | 0.044761 | 52 | 0.026841 | 31 | 0.8 | 0.012209 | 15 | (0.8, 0.8) | 0.012209 | 15 |
| 5 | 0.048811 | 54 | 0.033353 | 40 | 0.7 | 0.015901 | 17 | (0.7, 0.3) | 0.014624 | 17 |
| 10 | 0.041475 | 47 | 0.034756 | 41 | 0.7 | 0.013918 | 16 | (0.7, 0.3) | 0.013632 | 16 |
| 30 | 0.056128 | 44 | 0.064641 | 42 | 0.7 | 0.014826 | 13 | (0.7, 0.5) | 0.012868 | 12 |
| 50 | 0.051478 | 42 | 0.044186 | 41 | 0.7 | 0.014656 | 12 | (0.7, 0.1) | 0.010484 | 10 |
| 100 | 0.080886 | 40 | 0.10691 | 39 | 0.7 | 0.018922 | 10 | (0.7, 0.3) | 0.017542 | 10 |
| 150 | 0.25923 | 39 | 0.19730 | 38 | 0.7 | 0.045011 | 10 | (0.7, 0.3) | 0.044049 | 10 |
| 200 | 0.39142 | 38 | 0.33626 | 37 | 0.7 | 0.072017 | 9 | (0.7, 0.7) | 0.072017 | 9 |
| 300 | 0.99464 | 36 | 1.0196 | 36 | 0.7 | 0.24692 | 9 | (0.6, 0.1) | 0.21801 | 8 |
Table 2. The numerical results of Example 3 with optimal parameters ω and r.

| n | AOR (ω, r) | CPU(s) | IT |
|---|---|---|---|
| 4 | (1.0, 0.9) | 0.0075531 | 9 |
| 10 | (1.0, 0.2) | 0.0042066 | 5 |
| 30 | (1.0, 0.4) | 0.0049098 | 3 |
| 50 | (1.0, 0.1) | 0.017581 | 3 |
