Article

On the Alternative SOR-like Iteration Method for Solving Absolute Value Equations

Institute for Optimization and Decision Analytics, Liaoning Technical University, Fuxin 123000, China
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(3), 589; https://doi.org/10.3390/sym15030589
Submission received: 29 January 2023 / Revised: 15 February 2023 / Accepted: 17 February 2023 / Published: 24 February 2023

Abstract: In this paper, by equivalently reformulating the absolute value equation (AVE) into an alternative two-by-two block nonlinear equation, we put forward an alternative SOR-like (ASOR-like) iteration method to solve the AVE. The convergence of the ASOR-like iteration method is established, subject to specific restrictions on the associated parameter. The selection of the optimal iteration parameter is investigated theoretically. Numerical experiments are given to validate the feasibility and effectiveness of the ASOR-like iteration method.

1. Introduction

The absolute value equation, denoted by AVE, is to find a vector $x \in \mathbb{R}^n$ such that

$$Ax - |x| - b = 0, \qquad (1)$$

where $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^n$, $|x| = (|x_1|, |x_2|, \ldots, |x_n|)^T$ with $x_l$ being the $l$-th entry of $x$, and $|\cdot|$ denotes the absolute value of a real scalar. The AVE (1) is a special case of the generalized AVE (GAVE)

$$Ax + B|x| - b = 0, \qquad (2)$$

with $A, B \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$, which was introduced in [1] and further studied in [2,3,4]. In fact, if $B$ is nonsingular, then (2) can be converted into (1). The AVE (1) is closely related to the linear complementarity problem (LCP), bimatrix games, and others (see, e.g., [1,2,3,4,5,6,7,8] and the references therein).
In general, solving the AVE (1) is NP-hard [2]. Furthermore, even when the AVE (1) is solvable, checking whether it has a unique solution or multiple solutions is NP-complete [9]. The existence of solutions has been studied in [1,10,11,12,13,14]; an outstanding and commonly used sufficient condition for the unique solvability of the AVE (1), from [11], is the following.
Lemma 1 ([11]).
Assume that $A \in \mathbb{R}^{n \times n}$ is invertible. If $\|A^{-1}\| < 1$, then the AVE (1) is uniquely solvable for any $b \in \mathbb{R}^n$.
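For concreteness, the following MATLAB sketch builds a random instance of the AVE (1) satisfying the condition of Lemma 1 (the sizes and the diagonal shift are illustrative choices, not data from this paper):

```matlab
% Build a random AVE instance A*x - |x| = b with ||inv(A)|| < 1.
n = 100;
A = n * eye(n) + rand(n);          % diagonal shift keeps sigma_min(A) large
nu = 1 / min(svd(A));              % ||A^{-1}|| = 1/sigma_min(A)
assert(nu < 1);                    % Lemma 1: unique solvability for any b
xstar = -100 + 200 * rand(n, 1);   % prescribe a solution
b = A * xstar - abs(xstar);        % right-hand side so that x* solves (1)
```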
For solving the AVE (1), many numerical methods have been proposed, such as Newton-type iteration methods [4,7,15,16,17,18,19,20,21,22,23], the Picard iteration method [24], the preconditioned AOR iteration method [25], the generalized Gauss–Seidel iteration method [26], Levenberg–Marquardt methods [27,28], exact and inexact Douglas–Rachford splitting methods [29], dynamical systems [30,31,32,33,34,35], the modified multivariate spectral gradient algorithm [36], the modified HS conjugate gradient method [37], and others (see, e.g., [3,38,39,40,41] and the references therein).
In recent years, SOR-like iteration methods have attracted considerable attention. Ke and Ma [42] first developed an SOR-like iteration method (Algorithm 1) for the AVE (1) by converting it into a two-by-two block nonlinear equation, and proved the convergence of Algorithm 1 under the sufficient condition that $\|A^{-1}\| < 1$ with $\omega \in (0, 2)$.
Algorithm 1 ([42]). (The SOR-like iteration method)
Let the matrix $A$ be nonsingular. Given two initial guesses $x^0, y^0 \in \mathbb{R}^n$, for $k = 0, 1, \ldots$ until the generated sequence $\{x^k\}$ converges, compute

$$\begin{cases} x^{k+1} = (1-\omega)x^k + \omega A^{-1}(y^k + b), \\ y^{k+1} = (1-\omega)y^k + \omega|x^{k+1}|. \end{cases} \qquad (3)$$
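A direct MATLAB transcription of iteration (3) may look as follows (a sketch: an LU factorization stands in for the explicit inverse $A^{-1}$, and the stopping test anticipates the residual criterion of Section 5):

```matlab
function [x, k] = sor_like(A, b, omega, tol, kmax)
% SOR-like iteration (3) of Ke and Ma [42] for A*x - |x| = b.
    n = length(b);
    x = zeros(n, 1); y = zeros(n, 1);
    [L_, U_, P_] = lu(A);                      % factor A once, reuse per sweep
    for k = 1:kmax
        x = (1 - omega) * x + omega * (U_ \ (L_ \ (P_ * (y + b))));
        y = (1 - omega) * y + omega * abs(x);  % uses the updated x^{k+1}
        if norm(A * x - abs(x) - b) < tol, return; end
    end
end
```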
In order to further explore the convergence conditions of the SOR-like iteration method for solving the AVE (1) in [42], Guo et al. [43] proved the convergence of Algorithm 1 from the perspective of the spectral radius and obtained the optimal relaxation parameter $\omega_0 = \frac{2}{1+\sqrt{1-\rho}}$ with $\rho = \rho(D(x^{k+1})A^{-1})$, where $D(x) \triangleq \mathrm{diag}(\mathrm{sign}(x))$. Herein, $\mathrm{diag}(x)$ represents a diagonal matrix with $x_i$ as its $i$-th diagonal entry for every $i = 1, 2, \ldots, n$ and

$$\mathrm{sign}(x_i) = \begin{cases} 1, & \text{if } x_i > 0, \\ 0, & \text{if } x_i = 0, \\ -1, & \text{if } x_i < 0. \end{cases}$$
In the sequel, Chen et al. [44] investigated the theoretical optimal parameter $\omega^*_{opt}$ and the approximate optimal parameter $\omega^*_{aopt}$ of Algorithm 1 for solving the AVE (1), resulting in

$$\omega^*_{opt} = \begin{cases} 1, & \text{if } 0 < \|A^{-1}\| \le \frac{1}{4}, \\ \omega_{opt}, & \text{if } \frac{1}{4} < \|A^{-1}\| < 1, \end{cases} \qquad (4)$$

and

$$\omega^*_{aopt}(\nu) = \frac{\sqrt{4\nu + 1} - 1}{2\nu}. \qquad (5)$$
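For a given $\nu = \|A^{-1}\|$, (5) is a one-line computation (the value of $\nu$ below is only an example); the piecewise parameter (4) additionally requires the bisection procedure of [44] recalled in Section 5:

```matlab
nu = 0.6;                                      % example value of ||A^{-1}||
w_aopt = (sqrt(4 * nu + 1) - 1) / (2 * nu);    % approximate optimal omega, Eq. (5)
```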
Meanwhile, by reformulating the AVE (1) as a two-by-two block nonlinear equation, a fixed point iteration (FPI) method was suggested for solving the AVE (1) in [45], but the convergence of the FPI method is only guaranteed for the case $0 < \|A^{-1}\| < \frac{\sqrt{2}}{2}$. Furthermore, Yu et al. [46] put forward a modified fixed point iteration (MFPI) method by introducing a nonsingular matrix $Q$, which guarantees convergence for $\frac{\sqrt{2}}{2} \le \|A^{-1}\| < 1$ provided the parameter matrix $Q$ is selected appropriately. In addition, Dong et al. [47] proposed a new SOR-like (NSOR) iteration method by rewriting the AVE (1) as a new two-by-two block nonlinear system, and the convergence conditions of the NSOR iteration method were proven from the perspective of the spectrum.

In this paper, by reformulating the AVE (1) as a new alternative two-by-two block nonlinear system, we propose an alternative SOR-like (ASOR-like) iteration method for solving the AVE (1) and prove its convergence from the viewpoints of the iteration error and the spectrum, respectively. Furthermore, the selection of the optimal iteration parameter is discussed. In addition, numerical experiments demonstrate the feasibility and effectiveness of the ASOR-like iteration method.
The remainder of this paper is organized as follows. Section 2 explains the mathematical notation and the lemmas used later in the proofs. Section 3 and Section 4 propose the iterative format, the convergence conditions, and the optimal iteration parameter selection of the ASOR-like iteration method. In Section 5, numerical experiments are conducted to demonstrate the effectiveness of the proposed method by comparing it with some existing algorithms. Finally, we give a brief conclusion in Section 6.

2. Preliminaries

In this section, we present some notation, classical definitions, and auxiliary results that lay the foundation of our developments.

We start by recalling the notation used in this paper. $\mathbb{R}^{n \times n}$ is the set of all $n \times n$ real matrices and $\mathbb{R}^n = \mathbb{R}^{n \times 1}$. $I$ is the identity matrix of suitable dimension. $\rho(A)$ denotes the spectral radius of $A$, defined by $\rho(A) \triangleq \max|\lambda(A)|$, where $\lambda(A)$ denotes an eigenvalue of $A$. $\|A\| \triangleq \max\{\|Ax\| : x \in \mathbb{R}^n, \|x\| = 1\}$ denotes the spectral norm of $A$, where $\|x\| = \sqrt{x^H x}$; $\|A\|_2$ denotes the 2-norm of $A$. From these definitions we have

$$\|Ax\| \le \|A\|\,\|x\|, \qquad \|A + B\| \le \|A\| + \|B\|, \qquad \|AB\| \le \|A\|\,\|B\|,$$

where $A, B \in \mathbb{R}^{n \times n}$ and $x \in \mathbb{R}^n$ (see Chapter 5 of [48]).
Lemma 2 ([49]).
For any vectors $x, y \in \mathbb{R}^n$, the following results hold:
  • $\|\,|x| - |y|\,\| \le \|x - y\|$;
  • if $0 \le x \le y$, then $\|x\| \le \|y\|$;
  • assume that $P$ is a nonnegative matrix; if $x \le y$, then $Px \le Py$.
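The first item of Lemma 2 is the key inequality behind the error analysis below; a quick numerical sanity check (illustration only):

```matlab
x = randn(5, 1); y = randn(5, 1);
% || |x| - |y| || <= ||x - y|| in the Euclidean norm
assert(norm(abs(x) - abs(y)) <= norm(x - y) + 1e-15);
```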

3. An Alternative SOR-like Iteration Method

In this section, we put forward an alternative two-by-two block nonlinear system for the AVE (1). Let $y = x$; then the AVE (1) is equivalent to

$$Ay - |x| = b, \qquad x - y = 0, \qquad (6)$$

that is, noting that $|x| = D(x)x$,

$$\mathcal{A}z := \begin{pmatrix} A & -D(x) \\ -I & I \end{pmatrix} \begin{pmatrix} y \\ x \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix} := \mathbf{b}, \qquad (7)$$

where $D(x) := \mathrm{diag}(\mathrm{sign}(x))$, $x \in \mathbb{R}^n$.

Let $\mathcal{A} = \mathcal{D} - \mathcal{L} - \mathcal{U}$, where

$$\mathcal{D} = \begin{pmatrix} A & 0 \\ 0 & I \end{pmatrix}, \qquad \mathcal{L} = \begin{pmatrix} 0 & 0 \\ I & 0 \end{pmatrix}, \qquad \mathcal{U} = \begin{pmatrix} 0 & D(x) \\ 0 & 0 \end{pmatrix};$$

then the following fixed point equation can be obtained:

$$(\mathcal{D} - \omega\mathcal{L})z = [(1-\omega)\mathcal{D} + \omega\mathcal{U}]z + \omega\mathbf{b}, \qquad (8)$$

where the parameter $\omega > 0$. That is,

$$\begin{pmatrix} A & 0 \\ -\omega I & I \end{pmatrix} \begin{pmatrix} y \\ x \end{pmatrix} = \begin{pmatrix} (1-\omega)A & \omega D(x) \\ 0 & (1-\omega)I \end{pmatrix} \begin{pmatrix} y \\ x \end{pmatrix} + \omega \begin{pmatrix} b \\ 0 \end{pmatrix}.$$
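The splitting can be checked mechanically; the following sketch (with a fixed sign pattern $D(x)$ and an arbitrary well-conditioned $A$) verifies that $\mathcal{D} - \mathcal{L} - \mathcal{U}$ reproduces the block coefficient matrix of (7):

```matlab
n = 4; A = 5 * eye(n) + rand(n); x = randn(n, 1);
Dx = diag(sign(x));
calA = [A, -Dx; -eye(n), eye(n)];              % block matrix of (7)
D_ = blkdiag(A, eye(n));
L_ = [zeros(n), zeros(n); eye(n), zeros(n)];
U_ = [zeros(n), Dx; zeros(n), zeros(n)];
assert(norm(calA - (D_ - L_ - U_), 'fro') < 1e-12);
```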
Based on (8), we establish the following matrix splitting iteration method to solve the AVE (1), called the alternative SOR-like (ASOR-like) iteration method. The algorithmic framework for this method is as follows.
Algorithm 2 (The ASOR-like iteration method).
Let the matrix $A$ be nonsingular. Given two initial guesses $x^0, y^0 \in \mathbb{R}^n$, for $k = 0, 1, \ldots$ until the generated sequence $\{x^k\}$ converges, compute

$$\begin{cases} y^{k+1} = (1-\omega)y^k + \omega A^{-1}(|x^k| + b), \\ x^{k+1} = (1-\omega)x^k + \omega y^{k+1}. \end{cases} \qquad (9)$$
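In MATLAB, Algorithm 2 can be sketched in the same way as the SOR-like sketch above (A, b, omega, tol, kmax are assumed inputs):

```matlab
function [x, k] = asor_like(A, b, omega, tol, kmax)
% ASOR-like iteration (9): update y first from |x^k|, then relax x.
    n = length(b);
    x = zeros(n, 1); y = zeros(n, 1);
    [L_, U_, P_] = lu(A);
    for k = 1:kmax
        y = (1 - omega) * y + omega * (U_ \ (L_ \ (P_ * (abs(x) + b))));
        x = (1 - omega) * x + omega * y;       % uses the updated y^{k+1}
        if norm(A * x - abs(x) - b) < tol, return; end
    end
end
```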
In the following, we present the main results of this paper. Theorems 1 and 2 are inspired by Theorem 3.1 in [42] and Theorem 2.1 in [44], respectively. Let $(y^*, x^*)$ be the solution pair of the nonlinear system (7); then we have

$$y^* = (1-\omega)y^* + \omega A^{-1}(|x^*| + b), \qquad (10)$$

$$x^* = (1-\omega)x^* + \omega y^*. \qquad (11)$$

Let the vector pair $(y^k, x^k)$ be generated by (9), and define the iteration errors

$$e_k^y \triangleq y^* - y^k \qquad \text{and} \qquad e_k^x \triangleq x^* - x^k. \qquad (12)$$
Then, the convergence results of the ASOR-like iteration method can be obtained as follows.
Theorem 1.
Let the matrix $A$ be invertible. Denote

$$\nu = \|A^{-1}\| \qquad \text{and} \qquad T = \begin{pmatrix} |1-\omega| & \omega\nu \\ \omega|1-\omega| & |1-\omega| + \omega^2\nu \end{pmatrix}.$$

If $\|T\| < 1$, then $\|E_{k+1}\| < \|E_k\|$, where $E_{k+1} = (\|e_{k+1}^y\|, \|e_{k+1}^x\|)^T$.
Proof. 
From (9), (10), (11), and (12), we have

$$e_{k+1}^y = (1-\omega)e_k^y + \omega A^{-1}(|x^*| - |x^k|), \qquad (13)$$

$$e_{k+1}^x = (1-\omega)e_k^x + \omega e_{k+1}^y. \qquad (14)$$

According to (13), (14), and Lemma 2, we get

$$\|e_{k+1}^y\| \le |1-\omega|\,\|e_k^y\| + \omega\nu\,\|\,|x^*| - |x^k|\,\| \le |1-\omega|\,\|e_k^y\| + \omega\nu\,\|e_k^x\|,$$
$$\|e_{k+1}^x\| \le |1-\omega|\,\|e_k^x\| + \omega\,\|e_{k+1}^y\|.$$

Thus, we can derive that

$$\begin{pmatrix} 1 & 0 \\ -\omega & 1 \end{pmatrix} \begin{pmatrix} \|e_{k+1}^y\| \\ \|e_{k+1}^x\| \end{pmatrix} \le \begin{pmatrix} |1-\omega| & \omega\nu \\ 0 & |1-\omega| \end{pmatrix} \begin{pmatrix} \|e_k^y\| \\ \|e_k^x\| \end{pmatrix}. \qquad (15)$$

Let

$$P = \begin{pmatrix} 1 & 0 \\ \omega & 1 \end{pmatrix} \ge 0.$$

According to Lemma 2, multiplying (15) from the left by the nonnegative matrix $P$, it holds that

$$\begin{pmatrix} \|e_{k+1}^y\| \\ \|e_{k+1}^x\| \end{pmatrix} \le \begin{pmatrix} |1-\omega| & \omega\nu \\ \omega|1-\omega| & |1-\omega| + \omega^2\nu \end{pmatrix} \begin{pmatrix} \|e_k^y\| \\ \|e_k^x\| \end{pmatrix}. \qquad (16)$$

Denote

$$E_{k+1} = \begin{pmatrix} \|e_{k+1}^y\| \\ \|e_{k+1}^x\| \end{pmatrix} \qquad \text{and} \qquad T = \begin{pmatrix} |1-\omega| & \omega\nu \\ \omega|1-\omega| & |1-\omega| + \omega^2\nu \end{pmatrix} \ge 0.$$

In light of (16), it follows that

$$\|E_{k+1}\| \le \|T E_k\| \le \|T\|\,\|E_k\|.$$

If $\|T\| < 1$, then we obtain

$$\|E_{k+1}\| < \|E_k\|.$$
This completes the proof. □
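For given $\omega$ and $\nu$, the contraction condition of Theorem 1 can be evaluated directly (the sample values below are arbitrary):

```matlab
omega = 0.7; nu = 0.5;
T = [abs(1 - omega),         omega * nu;
     omega * abs(1 - omega), abs(1 - omega) + omega^2 * nu];
if norm(T) < 1   % spectral norm of T
    disp('Theorem 1 guarantees monotone decrease of the error norms');
end
```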
Theorem 2.
Let the matrix $A$ be invertible. Denote $\nu = \|A^{-1}\|$, $\varphi = |1-\omega|$, and $\psi = \omega^2\nu$. If

$$0 \le 3\varphi^2 + 2\psi^2 + 2\varphi\psi < \min\{1 + \varphi^4,\, 2\}, \qquad (17)$$

then the following inequality holds:

$$|||(e_{k+1}^y, e_{k+1}^x)||| < |||(e_k^y, e_k^x)|||, \qquad (18)$$

where $|||\cdot|||$ is defined by

$$|||(e^y, e^x)||| = \sqrt{\|e^y\|^2 + \omega^{-2}\|e^x\|^2}.$$
Proof. 
According to the proof of Theorem 1, we have

$$\begin{pmatrix} \|e_{k+1}^y\| \\ \|e_{k+1}^x\| \end{pmatrix} \le \begin{pmatrix} |1-\omega| & \omega\nu \\ \omega|1-\omega| & |1-\omega| + \omega^2\nu \end{pmatrix} \begin{pmatrix} \|e_k^y\| \\ \|e_k^x\| \end{pmatrix}. \qquad (19)$$

Denote

$$Q = \begin{pmatrix} 1 & 0 \\ 0 & \omega^{-1} \end{pmatrix} \ge 0.$$

Multiplying (19) from the left by the nonnegative matrix $Q$, we get

$$\begin{pmatrix} \|e_{k+1}^y\| \\ \omega^{-1}\|e_{k+1}^x\| \end{pmatrix} \le \begin{pmatrix} |1-\omega| & \omega^2\nu \\ |1-\omega| & |1-\omega| + \omega^2\nu \end{pmatrix} \begin{pmatrix} \|e_k^y\| \\ \omega^{-1}\|e_k^x\| \end{pmatrix}.$$

Then it can be concluded that

$$|||(e_{k+1}^y, e_{k+1}^x)||| \le \|\hat T\|_2 \cdot |||(e_k^y, e_k^x)|||,$$

where

$$\hat T = \begin{pmatrix} |1-\omega| & \omega^2\nu \\ |1-\omega| & |1-\omega| + \omega^2\nu \end{pmatrix} := \begin{pmatrix} \varphi & \psi \\ \varphi & \varphi + \psi \end{pmatrix} \ge 0.$$

Next, we discuss the selection of the iteration parameter $\omega$ such that $\|\hat T\|_2 < 1$, so that the inequality (18) holds.

Because

$$\hat T^T \hat T = \begin{pmatrix} 2\varphi^2 & \varphi^2 + 2\varphi\psi \\ \varphi^2 + 2\varphi\psi & \varphi^2 + 2\psi^2 + 2\varphi\psi \end{pmatrix}$$

is a symmetric positive semidefinite matrix, we have $\|\hat T\|_2 = \sqrt{\rho(\hat T^T \hat T)} = \sqrt{\kappa_{\max}(\hat T^T \hat T)}$, where $\kappa$ denotes an eigenvalue of $\hat T^T \hat T$. It holds that

$$(\kappa - 2\varphi^2)\left[\kappa - (\varphi^2 + 2\psi^2 + 2\varphi\psi)\right] - (\varphi^2 + 2\varphi\psi)^2 = 0,$$

namely,

$$\kappa^2 - (3\varphi^2 + 2\psi^2 + 2\varphi\psi)\kappa + \varphi^4 = 0,$$

from which we obtain

$$\kappa = \frac{3\varphi^2 + 2\psi^2 + 2\varphi\psi \pm \sqrt{(3\varphi^2 + 2\psi^2 + 2\varphi\psi)^2 - 4\varphi^4}}{2}.$$

Consequently,

$$\kappa_{\max}(\hat T^T \hat T) = \frac{3\varphi^2 + 2\psi^2 + 2\varphi\psi + \sqrt{(3\varphi^2 + 2\psi^2 + 2\varphi\psi)^2 - 4\varphi^4}}{2}.$$

In particular,

$$\kappa_{\max}(\hat T^T \hat T) < 1 \iff 3\varphi^2 + 2\psi^2 + 2\varphi\psi + \sqrt{(3\varphi^2 + 2\psi^2 + 2\varphi\psi)^2 - 4\varphi^4} < 2$$
$$\iff \sqrt{(3\varphi^2 + 2\psi^2 + 2\varphi\psi)^2 - 4\varphi^4} < 2 - (3\varphi^2 + 2\psi^2 + 2\varphi\psi).$$

Hence, a sufficient condition for convergence is

$$3\varphi^2 + 2\psi^2 + 2\varphi\psi \in (0, 2) \qquad \text{and} \qquad 3\varphi^2 + 2\psi^2 + 2\varphi\psi \in (0, 1 + \varphi^4). \qquad (20)$$

From (20), we have $\kappa_{\max}(\hat T^T \hat T) < 1$ provided that (17) holds, which completes the proof. □
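The closed-form $\kappa_{\max}$ can be checked against a direct computation of $\|\hat T\|_2$ (sample values of $\omega$ and $\nu$ are arbitrary):

```matlab
omega = 0.8; nu = 0.4;
phi = abs(1 - omega); psi = omega^2 * nu;
That = [phi, psi; phi, phi + psi];
s = 3 * phi^2 + 2 * psi^2 + 2 * phi * psi;
kappa_max = (s + sqrt(s^2 - 4 * phi^4)) / 2;
assert(abs(norm(That) - sqrt(kappa_max)) < 1e-12);  % ||That||_2 = sqrt(kappa_max)
```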
Note that if the conditions of Theorem 2 are satisfied, then we obtain

$$0 \le |||(e_{k+1}^y, e_{k+1}^x)||| \le \|\hat T\| \cdot |||(e_k^y, e_k^x)||| \le \cdots \le \|\hat T\|^{k+1} \cdot |||(e_0^y, e_0^x)|||.$$

Hence, $\lim_{k\to\infty}\|e_k^y\| = 0$ and $\lim_{k\to\infty}\|e_k^x\| = 0$. Therefore, the iteration sequence $\{x^k\}_{k=0}^{\infty}$ generated by (9) converges to the solution of the AVE (1).
In order to further study the admissible parameters $\omega$ for solving the AVE (1), we analyze, from the perspective of the spectrum, the range and the optimal choice of $\omega$ under the convergence condition of Algorithm 2. To determine the spectrum of the iteration matrix, we consider the eigenvalue problem

$$\lambda \begin{pmatrix} A & 0 \\ -\omega I & I \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} (1-\omega)A & \omega D(x) \\ 0 & (1-\omega)I \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix},$$

where $\lambda$ is an arbitrary eigenvalue of the iteration matrix $T(\omega)$. A good approximation to the optimal choice of $\omega$ can be obtained by fixing $D(x) \approx D$. We therefore focus on the eigenvalue equation

$$\lambda \begin{pmatrix} A & 0 \\ -\omega I & I \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} (1-\omega)A & \omega D \\ 0 & (1-\omega)I \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix}. \qquad (21)$$
It is important to be able to find the optimal parameter $\omega$ (hereafter abbreviated as $\omega^*_{opt}$) that minimizes $\rho(T(\omega))$ for Algorithm 2; that is,

$$\omega^*_{opt} = \arg\min_{\omega} \{\rho(T(\omega))\},$$

where

$$\rho(T(\omega)) = \max|\lambda|.$$
To this end, we need the following auxiliary lemmas.
Lemma 3 ([50]).
Consider the quadratic equation $x^2 - bx + c = 0$, where $b$ and $c$ are real numbers. Both roots of the equation are less than one in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.
Lemma 4.
If $z^H z = 1$, i.e., $\|z\| = 1$, then for any matrix $B$ there exists $z_0$ satisfying $|z_0^H B z_0| = \|B\|$.
Proof. 
Since $|z^H B z|^2 = (z^H B z)^H (z^H B z) = z^H B^H (zz^H) B z \le z^H B^H B z$, there exists $z_0$ satisfying $z_0^H B^H B z_0 = \max_{\|z\|=1} z^H B^H B z = \|B\|^2$. □
The following derivation is inspired by [51]. Notice that $D^2 = I$, where $D$ is a diagonal matrix. Without loss of generality, suppose that $z_1^H z_1 = 1$. From (21), it holds that

$$\lambda^2 z_1 = \left\{\left[\omega^2 A^{-1}D + (2-2\omega)I\right]\lambda - (\omega-1)^2 I\right\} z_1. \qquad (22)$$

By Lemma 4, there exists a vector $z_1$ satisfying $z_1^H A^{-1}D z_1 = \|A^{-1}D\|$. Multiplying both sides of (22) from the left by $z_1^H$, we obtain

$$\lambda^2 - (\omega^2\mu + 2 - 2\omega)\lambda + (\omega-1)^2 = 0, \qquad (23)$$

where $\mu = \|A^{-1}D\|$. The roots of (23) are given by

$$\lambda = \frac{(\omega^2\mu + 2 - 2\omega) \pm \sqrt{(\omega^2\mu + 2 - 2\omega)^2 - 4(\omega-1)^2}}{2}. \qquad (24)$$

According to Lemma 3, a sufficient condition for the two roots of (23) to be less than one in modulus is

$$|(\omega-1)^2| < 1, \qquad (25)$$

$$F := |\omega^2\mu - (2\omega - 2)| < 1 + (\omega-1)^2 =: G. \qquad (26)$$

It is easy to check that (25) is equivalent to $\omega \in (0, 2)$. Condition (26) seems harder to verify at first sight, so we discuss it further. Noticing that

$$F \le \omega^2\nu + |2\omega - 2| =: \hat F \qquad \text{and} \qquad G = \omega^2 - 2\omega + 2,$$

a sufficient condition for (26) is $\hat F < G$ for $\omega \in (0, 2)$. Let $f_\nu(\omega) \triangleq \hat F - G$. For $\omega \in (0, 1]$ we have $f_\nu(\omega) = (\nu - 1)\omega^2 < 0$ whenever $\nu < 1$. For $\omega \in (1, 2)$, we have $f_\nu(\omega) = (\nu - 1)\omega^2 + 4\omega - 4$, whose roots are

$$\omega_1 = \frac{2 + 2\sqrt{\nu}}{1 - \nu} \qquad \text{and} \qquad \omega_2 = \frac{2 - 2\sqrt{\nu}}{1 - \nu}.$$

Thus, $1 < \omega_2 < 2 < \omega_1$ if $\nu < 1$, so on $(1, 2)$ the solution set of $f_\nu(\omega) < 0$ is $\omega \in (1, \omega_2)$.

In conclusion, when $\nu \in (0, 1)$, if

$$\omega \in \left(0, \frac{2 - 2\sqrt{\nu}}{1 - \nu}\right) =: \Omega, \qquad (27)$$

then the roots of (23) are strictly less than one in modulus.
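The interval $\Omega$ in (27) can be probed numerically: the sketch below builds a random instance with small $\nu$, fixes a sign matrix $D$, and checks $\rho(T(\omega)) < 1$ on a grid in $\Omega$ (the instance and the grid are illustrative):

```matlab
n = 50; A = 2 * n * eye(n) + randn(n);     % well conditioned, so nu << 1
nu = 1 / min(svd(A));
D = diag(sign(randn(n, 1)));
I = eye(n);
M = @(w) [(1 - w) * A, w * D; zeros(n), (1 - w) * I];
N = @(w) [A, zeros(n); -w * I, I];
wmax = (2 - 2 * sqrt(nu)) / (1 - nu);      % right endpoint of Omega in (27)
for w = linspace(0.05, wmax - 0.01, 20)
    assert(max(abs(eig(N(w) \ M(w)))) < 1);   % rho(T(omega)) < 1
end
```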
Remark 1.
It is well known that $\omega \in (0, 2)$ is the admissible range of the parameter $\omega$ for the classical SOR iteration method and for the SOR-like iteration method in [42], and it is also a basic necessary condition for convergence. Considering the relationship between the convergence conditions of the ASOR-like method obtained from the two perspectives, it is easy to check that (25) is equivalent to $1 + \varphi^4 < 2$, which is a necessary condition for (20). This also shows that the convergence condition from the spectral perspective, based on [51], is tighter than those from the norm perspective, based on [42,44].

4. Optimal Parameter for the ASOR-like Iteration Method

In this section, we consider the choice of the iteration parameter $\omega$. Let $\varrho(\omega) \triangleq \omega^2\mu + 2 - 2\omega$ and $\tau(\omega) \triangleq (\omega-1)^2$. According to (24), we get

$$\max|\lambda| = \begin{cases} \dfrac{\varrho(\omega) + \sqrt{\varrho^2(\omega) - 4\tau(\omega)}}{2}, & \text{if } \varrho(\omega) > 0, \\[2mm] \dfrac{\left|\varrho(\omega) - \sqrt{\varrho^2(\omega) - 4\tau(\omega)}\right|}{2}, & \text{if } \varrho(\omega) \le 0, \end{cases}$$

and minimizing $\max|\lambda|$ approximately leads to the condition

$$\begin{cases} \varrho^2(\omega) - 4\tau(\omega) = 0, & \text{if } \varrho(\omega) > 0, \\ \varrho(\omega) - \sqrt{\varrho^2(\omega) - 4\tau(\omega)} = 0, & \text{if } \varrho(\omega) \le 0. \end{cases} \qquad (28)$$

In fact, since $\mu \le \nu$, the condition $\varrho(\omega) \le 0$ can be shrunk to the sufficient condition $\hat\varrho(\omega) \triangleq \omega^2\nu + 2 - 2\omega \le 0$, which means $\omega \in \left[\frac{1-\sqrt{1-2\nu}}{\nu}, 2\right)$ for $\nu \in (0, \frac{1}{2})$, while no such $\omega$ exists for $\nu \in [\frac{1}{2}, 1)$. However, from (28), in the case $\hat\varrho(\omega) \le 0$ we only need $\tau(\omega) = 0$, which yields $\omega^*_{opt} = 1 < \frac{1-\sqrt{1-2\nu}}{\nu}$ for $\nu \in (0, \frac{1}{2})$ and hence $\hat\varrho(\omega^*_{opt}) = \nu > 0$, contradicting $\hat\varrho(\omega) \le 0$.
In addition, when $\varrho(\omega) > 0$, according to (28) we get

$$\varrho_{\max}^2(\omega) - 4\tau(\omega) = 0 \iff h_\nu(\omega) \triangleq \nu^2\omega^4 - 4\nu\omega^3 + 4\omega^2 - 8\omega + 4 = 0, \qquad (29)$$

which implies $\max|\lambda| = \frac{\varrho_{\max}(\omega) + \sqrt{\varrho_{\max}^2(\omega) - 4\tau(\omega)}}{2}$ with $\varrho_{\max}(\omega) = \omega^2\nu + 2 - 2\omega > 0$ and $\mu \le \nu$. For $\varrho_{\max}(\omega) > 0$, after some simple algebraic operations, we get the admissible range $\omega \in \left(0, \frac{1-\sqrt{1-2\nu}}{\nu}\right) \subset (0, 2)$ for $\nu \in (0, \frac{1}{2})$ and $\omega \in (0, 2)$ for $\nu \in [\frac{1}{2}, 1)$. The roots of $h_\nu(\omega)$ can be computed by the function roots in MATLAB to get the theoretical optimal parameter $\omega^*_{opt}$; in closed form they are

$$\omega_1(\nu) = \frac{1 + \sqrt{\nu} - \sqrt{1 + 2\sqrt{\nu} - \nu}}{\nu}, \qquad \omega_2(\nu) = \frac{1 + \sqrt{\nu} + \sqrt{1 + 2\sqrt{\nu} - \nu}}{\nu},$$

$$\omega_3(\nu) = \frac{1 - \sqrt{\nu} + \sqrt{1 - 2\sqrt{\nu} - \nu}}{\nu}, \qquad \omega_4(\nu) = \frac{1 - \sqrt{\nu} - \sqrt{1 - 2\sqrt{\nu} - \nu}}{\nu}. \qquad (30)$$
In order to explore the characteristics of the roots of the quartic equation (29), we plot the contours of $h_\nu(\omega)$ and the curves $\omega_i(\nu)$, $i = 1, 2, 3, 4$, for $\nu \in (0, 1)$ in Figure 1. In fact, $\omega_1(\nu)$ and $\omega_2(\nu)$ are real for all $\nu \in (0, 1)$, since $2\sqrt{\nu} > 0 > \nu - 1$; $\omega_3(\nu)$ and $\omega_4(\nu)$ are real for $\nu \in (0, 3 - 2\sqrt{2}) \approx (0, 0.172)$, i.e., when $2\sqrt{\nu} < 1 - \nu$. Complex roots are not considered here. Therefore, requiring the root to stay in $(0, 2)$ yields the threshold $\nu_1 = 1 - \frac{\sqrt{3}}{2} \approx 0.134$, and

$$\lim_{\nu\to0^+}\omega_1(\nu) = 1, \qquad \lim_{\nu\to0^+}\omega_4(\nu) = 1, \qquad \lim_{\nu\to\nu_1}\omega_4(\nu) = \omega_4(\nu_1) = 2, \qquad \lim_{\nu\to1}\omega_1(\nu) = \omega_1(1) = 2 - \sqrt{2} \approx 0.5858.$$

However, since $\omega_4(\nu) \notin \left(0, \frac{1-\sqrt{1-2\nu}}{\nu}\right)$ for $\nu \in (0, \nu_1)$, we know from Figure 1 that

$$\omega^*_{opt} = \omega_1(\nu), \qquad \nu \in (0, 1). \qquad (31)$$
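As mentioned above, the quartic (29) can be handed to the MATLAB function roots; the sketch below also checks the closed form $\omega_1(\nu)$ of (30) against the computed roots (the value of $\nu$ is an example):

```matlab
nu = 0.5;
w = roots([nu^2, -4 * nu, 4, -8, 4]);      % roots of h_nu(omega) = 0, Eq. (29)
w1 = (1 + sqrt(nu) - sqrt(1 + 2 * sqrt(nu) - nu)) / nu;   % omega_1(nu), Eq. (30)
assert(min(abs(w - w1)) < 1e-8);           % omega_1(nu) is among the roots
w_opt = w1;                                % omega_opt* = omega_1(nu), Eq. (31)
```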
Now, we devote our attention to the approximate optimal parameter $\omega^*_{aopt}$. Let $l_\nu(\omega) = \max\{\varphi(\omega), \psi(\omega)\}$; then we have

$$\hat T(\omega) \le \begin{pmatrix} l_\nu(\omega) & l_\nu(\omega) \\ l_\nu(\omega) & 2l_\nu(\omega) \end{pmatrix} = l_\nu(\omega) \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \triangleq l_\nu(\omega)H.$$

It follows that

$$\|\hat T(\omega)\|_2 \le l_\nu(\omega)\,\|H\|_2 = l_\nu(\omega)\,\frac{3+\sqrt{5}}{2}.$$

Let $\delta = \frac{2}{3+\sqrt{5}}$; then $\|\hat T(\omega)\|_2 \le \frac{l_\nu(\omega)}{\delta}$, where $\frac{l_\nu(\omega)}{\delta}$ is an upper bound of $\|\hat T(\omega)\|$ for $\omega \in (0, 2)$. This is the reason that we determine $\omega^*_{aopt}$ by minimizing $l_\nu(\omega)$.

It is not difficult to see that $\varphi(\omega) = |1-\omega|$ is strictly monotonically decreasing on $(0, 1)$ and strictly monotonically increasing on $(1, 2)$, while $\psi(\omega) = \omega^2\nu$ is strictly monotonically increasing on $(0, 2)$. By simply plotting and analyzing the functions $\varphi(\omega)$ and $\psi(\omega)$, we derive that

$$\omega^*_{aopt} = \arg\min_{\omega} \{l_\nu(\omega)\} = \frac{-1 + \sqrt{1 + 4\nu}}{2\nu} > 0. \qquad (32)$$

Note that $\omega^*_{aopt}$ is obtained from $\varphi(\omega) = 1 - \omega = \omega^2\nu = \psi(\omega)$ with $\omega \in (0, 2)$ and $\nu \in (0, 1)$.
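Both parameters are cheap to evaluate; the following sketch computes (31) and (32) for one $\nu$ together with the gap $r(\nu)$ discussed in Remark 2 below:

```matlab
nu = 0.5;
w_aopt = (-1 + sqrt(1 + 4 * nu)) / (2 * nu);                 % Eq. (32)
w_opt  = (1 + sqrt(nu) - sqrt(1 + 2 * sqrt(nu) - nu)) / nu;  % Eq. (31)
r = w_aopt - w_opt;                        % the gap plotted in Figure 2 (right)
```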
Remark 2.
Consider the range of values of $\omega$ given by the above convergence conditions. According to (27), (30), (31), and (32), we plot Figure 2. It is easy to see that the blue curve divides the green area into two parts: the top part corresponds to the condition $\hat\varrho(\omega) \le 0$, and the bottom part to the condition $\varrho_{\max}(\omega) > 0$. According to (27), when $\nu = \frac{1}{3}$ it holds that $\frac{2-2\sqrt{\nu}}{1-\nu} = \frac{1-\sqrt{1-2\nu}}{\nu}$. Therefore, we arrive at the refined convergence conditions: if $\varrho_{\max}(\omega) > 0$, then $\omega \in \left(0, \frac{1-\sqrt{1-2\nu}}{\nu}\right)$ for $\nu \in (0, \frac{1}{3})$ and $\omega \in \left(0, \frac{2-2\sqrt{\nu}}{1-\nu}\right)$ for $\nu \in [\frac{1}{3}, 1)$; if $\hat\varrho(\omega) \le 0$, then $\omega \in \left(\frac{1-\sqrt{1-2\nu}}{\nu}, \frac{2-2\sqrt{\nu}}{1-\nu}\right)$ for $\nu \in (0, \frac{1}{3})$. Furthermore, $\omega^*_{opt}, \omega^*_{aopt} \in \Omega$.
Comparing $\omega^*_{opt}$ and $\omega^*_{aopt}$, we have

$$\lim_{\nu\to1}\omega^*_{aopt}(\nu) = \frac{\sqrt{5} - 1}{2} \approx 0.618, \qquad \lim_{\nu\to1}\omega^*_{opt}(\nu) = 2 - \sqrt{2} \approx 0.586.$$

The right panel of Figure 2 illustrates the gap between $\omega^*_{opt}$ and $\omega^*_{aopt}$, where $r(\nu) = \omega^*_{aopt} - \omega^*_{opt}$.

5. Numerical Results

In this section, we present three numerical examples comparing the ASOR-like iteration method with previous algorithms to illustrate its feasibility and effectiveness. The following six methods are tested.
  • SOR-like-exp method [42]: the iteration format is (3). We choose the experimental optimal parameter $\omega^*_{exp}$ as the value with the smallest iteration count over $\omega = 0.001:0.001:1.999$ (in Example 1) and $\omega = 0.01:0.01:1.99$ (in Examples 2 and 3).
  • ASOR-like-exp method: the iteration format is (9). The selection of the experimental optimal parameter is the same as for the SOR-like-exp method.
  • SOR-like-opt method [44]: the iteration format is that of the SOR-like-exp method, with the theoretical optimal parameter $\omega^*_{opt}$ given by (4); $\omega^*_{opt}$ is calculated by the classical bisection method with the termination criterion $|g_\nu(\omega)| \le 10^{-10}$ or interval endpoints satisfying $b_2 - b_1 \le 10^{-10}$ (see [44] for the specific operations).
  • ASOR-like-opt method: the iteration format is that of the ASOR-like-exp method, with $\omega^*_{opt}$ calculated according to (31).
  • SOR-like-aopt method [44]: the iteration format is that of the SOR-like-exp method, with the approximate optimal parameter $\omega^*_{aopt}$ given by (5).
  • ASOR-like-aopt method: the iteration format is that of the ASOR-like-exp method, with $\omega^*_{aopt}$ calculated according to (32).
The numerical experiments are explained in several aspects in the following. On the one hand, the choice of the parameter $\omega$ is particularly important, since it greatly affects the CPU time of the numerical experiments. On the other hand, to facilitate the comparison of the algorithms, we select the following three experiments, each of which satisfies the unique solvability condition of the AVE (1).
All tests are performed in MATLAB R2016a on a personal computer with a 1.19 GHz central processing unit (Intel(R) Core(TM) i5-1035U), 8.00 GB memory, and the Windows 10 operating system. For each method we report the number of iteration steps (denoted by "IT"), the CPU time in seconds (denoted by "CPU"), and the residual error (denoted by "RES"). The stopping criterion is

$$\mathrm{RES}(x^k) \triangleq \|Ax^k - |x^k| - b\|_2 < 10^{-5},$$

unless the prescribed maximal number of iterations $k_{\max} = 1000$ is exceeded ("–" is used in the tables to indicate this case). All tests start from the initial zero vector.
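Putting the pieces together, a minimal driver for one run might look as follows (the random instance is generic, not the exact data of the examples; asor_like is the sketch given after (9)):

```matlab
n = 500;
A = n * eye(n) + rand(n);
xstar = -100 + 200 * rand(n, 1);
b = A * xstar - abs(xstar);
nu = 1 / min(svd(A));
omega = (-1 + sqrt(1 + 4 * nu)) / (2 * nu);   % omega_aopt* of (32)
[x, k] = asor_like(A, b, omega, 1e-5, 1000);
fprintf('IT = %d, RES = %.4e\n', k, norm(A * x - abs(x) - b));
```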
Example 1.
Consider the random AVE (1) with $\|A^{-1}\| < 1$ from [16,44]. The influence of the condition number and the density of $A$ (abbreviated $\mathrm{cond}(A)$ and $\mathrm{density}(A)$) on the tests is discussed during the numerical experiments.
Let $\min(\mathrm{cond}(A))$ be 1, 10, or $10^2$, respectively; the results are used to analyze the superiority of the ASOR-like method under the different optimal parameters $\omega$. Let $x^* = -100 + 200 \times \mathrm{rand}(n, 1)$ and generate $b = Ax^* - |x^*|$. For Example 1, the information (the order $n$, the approximate density of $A$ (abbreviated $\mathrm{a.density}(A)$), $\mathrm{cond}(A)$, and $\|A^{-1}\|$) of the random AVE problems under the specific conditions is reported in Table 1, Table 2 and Table 3.
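The data generation stated above amounts to two lines of MATLAB (here A is assumed to be the random sparse matrix with the prescribed condition number and density, constructed as in [16,44]):

```matlab
xstar = -100 + 200 * rand(n, 1);   % entries uniform on [-100, 100]
b = A * xstar - abs(xstar);        % so that x* solves the AVE (1)
```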
From the numerical results displayed in Table 1, Table 2 and Table 3, we find that the "CPU" of the ASOR-like-opt and ASOR-like-aopt iteration methods is, in general, less than that of the SOR-like-opt and SOR-like-aopt iteration methods, although the ASOR-like-opt iteration method requires more iteration steps than the SOR-like-opt iteration method; with the approximate optimal parameter $\omega^*_{aopt}$ or the experimental optimal parameter $\omega^*_{exp}$, the two methods keep essentially the same iteration steps. In brief, the ASOR-like iteration method is superior to the SOR-like iteration method when an appropriate optimal parameter is chosen.
Example 2 ([24]).
Consider the two-dimensional convection–diffusion equation

$$\begin{cases} -(u_{xx} + u_{yy}) + q(u_x + u_y) + pu = f(x, y), & (x, y) \in \Upsilon, \\ u(x, y) = 0, & (x, y) \in \partial\Upsilon, \end{cases}$$

where $q$ is a nonnegative constant, $p$ is a real number, $\Upsilon = (0, 1) \times (0, 1)$, and $\partial\Upsilon$ is its boundary. Applying the five-point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms, with equidistant step size $h = \frac{1}{m+1}$ and mesh Reynolds number $r = \frac{qh}{2}$, we acquire the system of linear equations $Rx = d$, where $R = T_x \otimes I_m + I_m \otimes T_y + pI_n \in \mathbb{R}^{m^2 \times m^2}$, $I_m \in \mathbb{R}^{m \times m}$ and $I_n \in \mathbb{R}^{n \times n}$ are identity matrices, $\otimes$ denotes the Kronecker product, and $T_x = \mathrm{tridiag}(t_1, t_2, t_3)$ and $T_y = \mathrm{tridiag}(t_1, 0, t_3)$ are tridiagonal matrices with $t_1 = -1 - r$, $t_2 = 4$, $t_3 = -1 + r$. For our numerical experiments, we define the matrix $A$ in the AVE (1) by making use of the matrix $R$ as follows.
For any positive numbers $p$ and $q$, the matrix $R$ is nonsymmetric positive definite; when $q = 0$, the matrix $R$ is symmetric positive definite. We set $A = R + 5(L - L^T)$, where $L$ is the strictly lower triangular part of $R$. It is not hard to see that the matrix $A$ is nonsymmetric positive definite. Let $x_i^* = (-1)^i i$, $i = 1, 2, \ldots, n$, and generate $b = Ax^* - |x^*|$. We present the numerical results for different values of $m$, $p$, $q$ in Table 4 and Table 5.
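A MATLAB sketch of this construction (spdiags and tril spell out the tridiagonal matrices and the strictly lower triangular part; m, p, q are sample values):

```matlab
m = 10; p = 1; q = 1; n = m^2;
h = 1 / (m + 1); r = q * h / 2;            % mesh size and mesh Reynolds number
t1 = -1 - r; t2 = 4; t3 = -1 + r;
e = ones(m, 1);
Tx = spdiags([t1 * e, t2 * e, t3 * e], -1:1, m, m);
Ty = spdiags([t1 * e, 0 * e, t3 * e], -1:1, m, m);
R = kron(Tx, speye(m)) + kron(speye(m), Ty) + p * speye(n);
L = tril(R, -1);                           % strictly lower triangular part of R
A = R + 5 * (L - L');                      % nonsymmetric positive definite
i = (1:n)';
xstar = (-1).^i .* i;                      % x_i^* = (-1)^i * i
b = A * xstar - abs(xstar);
```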
From Table 4 and Table 5 we can see that all iteration methods successfully produce an approximation of the unique solution of the AVE (1) for the selected problem scales $n = m^2$ and convective measurements $q$ ($q = 0, 1, 10, 100, 1000$ when $p = 0$ and $m = 10$; $q = 0, 1, 10$ when $p = 1$ and $m = 10$). In the cases where they converge to the unique solution of the AVE (1), the ASOR-like-opt and ASOR-like-aopt iteration methods are superior to the SOR-like-opt and SOR-like-aopt iteration methods in "CPU", respectively, and the numerical results with the theoretical optimal parameters are much better than those with the experimental optimal parameters.
Example 3.
Consider the AVE (1), where the sparse symmetric matrix $A$ with $\|A^{-1}\| < 1$ comes from five different test problems in [42]. Let $x^* = (-1, 1, -1, 1, \ldots, -1, 1, \ldots)^T$ and generate $b = Ax^* - |x^*|$.
In Table 6, we present the numerical results of the ASOR-like iteration method together with the SOR-like iteration method under the corresponding optimal parameters. Obviously, all iteration methods can compute an approximate solution of each problem from [42]. In particular, the ASOR-like-opt iteration method outperforms the SOR-like-opt iteration method on all of these small-scale problems.

6. Conclusions

The ASOR-like iteration method has been developed to solve the AVE (1) by equivalently reformulating the AVE (1) as an alternative two-by-two block nonlinear system. The convergence of the ASOR-like iteration method is proven under proper conditions on the involved parameter, and both the optimal and the approximate optimal iteration parameters are explored. Numerical results demonstrate that the ASOR-like iteration method with the optimal parameter is feasible and effective for small-scale problems. However, designing an efficient algorithm for large-scale problems remains to be further studied. In addition, the theoretical choice of the optimal iteration parameter is also worth considering from different perspectives.

Author Contributions

Conceptualization, methodology, software, original draft preparation, Y.Z.; guidance, review and revision, D.Y.; translation, editing and review, Y.Z., D.Y. and Y.Y.; validation, Y.Z., D.Y. and Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 12201275) and the China Postdoctoral Science Foundation (No. 2021M701537).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Cairong Chen from Fujian Normal University for his helpful discussions and guidance. The authors would like to thank the four anonymous reviewers and the editor for their helpful comments and suggestions that have greatly improved the paper.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Rohn, J. A theorem of the alternatives for the equation Ax + B|x| = b. Linear Multilinear Algebra 2004, 52, 421–426. [Google Scholar] [CrossRef] [Green Version]
  2. Mangasarian, O.L. Absolute value programming. Comput. Optim. Appl. 2007, 36, 43–53. [Google Scholar] [CrossRef]
  3. Mangasarian, O.L. Absolute value equation solution via concave minimization. Optim. Lett. 2007, 1, 3–8. [Google Scholar] [CrossRef]
  4. Mangasarian, O.L. A generalized Newton method for absolute value equations. Optim. Lett. 2009, 3, 101–108. [Google Scholar] [CrossRef]
  5. Bai, Z.-Z. Modulus–based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2010, 17, 917–933. [Google Scholar] [CrossRef]
  6. Cottle, R.W.; Pang, J.-S.; Stone, R.E. The Linear Complementarity Problem; Academic Press: New York, NY, USA, 1992. [Google Scholar]
  7. Hu, S.-L.; Huang, Z.-H.; Zhang, Q. A generalized Newton method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 2011, 235, 1490–1501. [Google Scholar] [CrossRef]
  8. Miao, X.-H.; Yang, J.-T.; Saheya, B.; Chen, J.-S. A smoothing Newton method for absolute value equation associated with second-order cone. Appl. Numer. Math. 2017, 120, 82–96. [Google Scholar] [CrossRef]
  9. Prokopyev, O. On equivalent reformulations for absolute value equations. Comput. Optim. Appl. 2009, 44, 363–372. [Google Scholar] [CrossRef]
  10. Mezzadri, F. On the solution of general absolute value equations. Appl. Math. Lett. 2020, 107, 106462. [Google Scholar] [CrossRef]
  11. Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Appl. 2006, 419, 359–367. [Google Scholar] [CrossRef] [Green Version]
  12. Rohn, J. On unique solvability of the absolute value equation. Optim. Lett. 2009, 3, 603–606. [Google Scholar] [CrossRef]
  13. Wu, S.-L.; Li, C.-X. A note on unique solvability of the absolute value equation. Optim. Lett. 2020, 14, 1957–1960. [Google Scholar] [CrossRef]
  14. Wu, S.-L.; Shen, S.-Q. On the unique solution of the generalized absolute value equation. Optim. Lett. 2021, 15, 2017–2024. [Google Scholar] [CrossRef]
  15. Cao, Y.; Shi, Q.; Zhu, S.-L. A relaxed generalized Newton iteration method for generalized absolute value equations. AIMS Math. 2021, 6, 1258–1275. [Google Scholar] [CrossRef]
  16. Cruz, J.Y.B.; Ferreira, O.P.; Prudente, L.F. On the global convergence of the inexact semi-smooth Newton method for absolute value equation. Comput. Optim. Appl. 2016, 65, 93–108. [Google Scholar] [CrossRef] [Green Version]
  17. Feng, J.; Liu, S. A new two-step iterative method for solving absolute value equations. J. Inequal. Appl. 2019, 39, 1–8. [Google Scholar] [CrossRef]
  18. Huang, B.-H.; Ma, C.-F. Convergent conditions of the generalized Newton method for absolute value equation over second order cones. Appl. Math. Lett. 2019, 92, 151–157. [Google Scholar] [CrossRef]
  19. Lian, Y.-Y.; Li, C.-X.; Wu, S.-L. Stronger convergent results of the generalized Newton method for the generalized absolute value equations. J. Comput. Appl. Math. 2018, 338, 221–226. [Google Scholar] [CrossRef]
  20. Li, C.-X. A modified generalized Newton method for absolute value equations. J. Optim. Theory Appl. 2016, 170, 1055–1059. [Google Scholar] [CrossRef]
  21. Shi, L.; Iqbal, J.; Arif, M.; Khan, A. A two-step Newton-type method for solving system of absolute value equations. Math. Probl. Eng. 2020, 2020, 2798080. [Google Scholar] [CrossRef]
  22. Wang, A.; Cao, Y.; Chen, J.-X. Modified Newton-type iteration methods for generalized absolute value equations. J. Optim. Theory Appl. 2019, 181, 216–230. [Google Scholar] [CrossRef]
  23. Wu, S.-L.; Li, C.-X.; Zhou, H.-Y. Newton-based matrix splitting method for generalized absolute value equation. J. Comput. Appl. Math. 2021, 394, 113578. [Google Scholar]
  24. Salkuyeh, D.K. The Picard–HSS iteration method for absolute value equations. Optim. Lett. 2014, 8, 2191–2202. [Google Scholar] [CrossRef]
  25. Li, C.-X. A preconditioned AOR iterative method for the absolute value equations. Int. J. Comput. Methods 2017, 14, 1750016. [Google Scholar] [CrossRef]
  26. Edalatpour, V.; Hezari, D.; Salkuyeh, D.K. A generalization of the Gauss–Seidel iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 293, 156–167. [Google Scholar] [CrossRef]
  27. Huang, B.-H.; Ma, C.-F. The Modulus-based Levenberg-Marquardt method for solving linear complementarity problem. Numer. Math. Theor. Meth. Appl. 2019, 12, 154–168. [Google Scholar]
  28. Iqbal, J.; Iqbal, A.; Arif, M. Levenberg-Marquardt method for solving systems of absolute value equations. J. Comput. Appl. Math. 2015, 282, 134–138. [Google Scholar] [CrossRef]
  29. Chen, C.-R.; Yu, D.-M.; Han, D.-R. Exact and inexact Douglas–Rachford splitting methods for solving large-scale sparse absolute value equations. IMA J. Numer. Anal. 2022. online first. [Google Scholar] [CrossRef]
  30. Chen, C.-R.; Yu, D.-M.; Han, D.-R. An inverse-free dynamical system for solving the absolute value equations. Appl. Numer. Math. 2021, 168, 170–181. [Google Scholar] [CrossRef]
  31. Huang, X.-J.; Cui, B.-T. Neural network-based method for solving absolute value equations. ICIC Express Lett. 2017, 11, 853–861. [Google Scholar]
  32. Mansoori, A.; Eshaghnezhad, M.; Effati, S. An efficient neural network model for solving the absolute value equations. IEEE Trans. Circuits Syst. II 2017, 65, 391–395. [Google Scholar] [CrossRef]
  33. Mansoori, A.; Erfanian, M. A dynamic model to solve the absolute value equations. J. Comput. Appl. Math. 2019, 333, 28–35. [Google Scholar] [CrossRef]
  34. Saheya, B.; Nguyen, C.-T.; Chen, J.-S. Neural network based on systematically generated smoothing functions for absolute value equation. J. Appl. Math. Comput. 2019, 61, 533–558. [Google Scholar] [CrossRef]
  35. Yu, D.-M.; Chen, C.-R.; Yang, Y.-N.; Han, D.-R. An inertial inverse-free dynamical system for solving absolute value equations. J. Ind. Manag. Optim. 2023, 19, 2549–2559. [Google Scholar] [CrossRef]
  36. Yu, Z.-S.; Li, L.; Yuan, Y. A modified multivariate spectral gradient algorithm for solving absolute value equations. Appl. Math. Lett. 2021, 121, 107461. [Google Scholar] [CrossRef]
  37. Li, Y.; Du, S. Modified HS conjugate gradient method for solving generalized absolute value equations. J. Inequal. Appl. 2019, 68, 68. [Google Scholar] [CrossRef]
  38. Rohn, J.; Hooshyarbakhsh, V.; Farhadsefat, R. An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim. Lett. 2014, 8, 35–44. [Google Scholar] [CrossRef]
  39. Saheya, B.; Yu, C.-H.; Chen, J.-S. Numerical comparisons based on four smoothing functions for absolute value equation. J. Appl. Math. Comput. 2018, 56, 131–149. [Google Scholar] [CrossRef]
  40. Zamani, M.; Hladík, M. A new concave minimization algorithm for the absolute value equation solution. Optim. Lett. 2021, 15, 2241–2254. [Google Scholar] [CrossRef]
  41. Zamani, M.; Hladík, M. Error bounds and a condition number for the absolute value equations. Math. Program. 2022, 1–29. [Google Scholar] [CrossRef]
  42. Ke, Y.-F.; Ma, C.-F. SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 311, 195–202. [Google Scholar] [CrossRef]
  43. Guo, P.; Wu, S.-L.; Li, C.-X. On the SOR-like iteration method for solving absolute value equations. Appl. Math. Lett. 2019, 97, 107–113. [Google Scholar] [CrossRef]
  44. Chen, C.-R.; Yu, D.-M.; Han, D.-R. Optimal parameter for the SOR-like iteration method for solving the system of absolute value equations. arXiv 2020, arXiv:2001.05781. [Google Scholar]
  45. Ke, Y.-F. The new iteration algorithm for absolute value equation. Appl. Math. Lett. 2020, 99, 105990. [Google Scholar] [CrossRef]
  46. Yu, D.-M.; Chen, C.-R.; Han, D.-R. A modified fixed point iteration method for solving the system of absolute value equations. Optimization 2022, 71, 449–461. [Google Scholar] [CrossRef]
  47. Dong, X.; Shao, X.-H.; Shen, H.-L. A new SOR-like method for solving absolute value equations. Appl. Numer. Math. 2020, 156, 410–421. [Google Scholar] [CrossRef]
  48. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  49. Berman, A.; Plemmons, R. Nonnegative Matrices in the Mathematical Sciences; Academic Press: New York, NY, USA, 1979. [Google Scholar]
  50. Young, D.M. Iterative Solution for Large Linear Systems; Academic Press: New York, NY, USA, 1971. [Google Scholar]
  51. Shams, N.N.; Jahromi, A.F.; Beik, F.P.A. Iterative schemes induced by block splittings for solving absolute value equations. Filomat 2020, 34, 4171–4188. [Google Scholar] [CrossRef]
Figure 1. The contours of $h_\nu(\omega)$ with $\nu = 0.01:0.01:0.99$ and $\omega = 0.01:0.01:1.99$ (left) and the curves of $\omega_i(\nu)$, $i = 1, 2, 3, 4$, with $\nu = 0.001:0.001:0.999$ (right).
Figure 2. Left: the range and curves of the parameter $\omega \in (0, 2)$ with $\nu = 0.001:0.001:0.999$ (black line: $\omega^*_{opt}$; red line: $\omega^*_{aopt}$; blue line: $\omega(\nu) = \frac{1-\sqrt{1-2\nu}}{\nu}$ for $\nu \in (0, \frac{1}{2})$; green area: $\Omega$). Right: the curve of $r(\nu)$ with $\nu = 0.001:0.001:0.999$.
Table 1. Numerical results for Example 1 with min(cond(A)) = 1.

| Method | | n = 256 | n = 512 | n = 1024 | n = 2048 | n = 4096 |
|---|---|---|---|---|---|---|
| | a.density(A) | 0.003 | 0.003 | 0.0003 | 0.00003 | 0.000003 |
| | density(A) | 0.0039 | 0.0029 | 9.7656e-4 | 4.8828e-4 | 2.4414e-4 |
| | cond(A) | 2.5059 | 2.8172 | 3.5639 | 1.5041 | 2.5778 |
| | ‖A⁻¹‖ | 0.4024 | 0.9875 | 0.7948 | 0.6119 | 0.6153 |
| SOR-like-exp | ω*_exp | 0.972 | 0.977 | 0.926 | 0.973 | 0.995 |
| | IT | 15 | 34 | 26 | 29 | 25 |
| | CPU | 18.5880 | 230.0134 | 100.1782 | 221.7795 | 457.9501 |
| | RES | 9.8549e-6 | 9.7743e-6 | 9.5922e-6 | 9.7838e-6 | 9.7806e-6 |
| ASOR-like-exp | ω*_exp | 0.973 | 0.977 | 0.927 | 0.973 | 0.995 |
| | IT | 15 | 34 | 26 | 29 | 25 |
| | CPU | 22.1293 | 226.7375 | 97.8316 | 211.2889 | 464.9240 |
| | RES | 9.8864e-6 | 9.8190e-6 | 9.9883e-6 | 9.9058e-6 | 9.8234e-6 |
| SOR-like-opt | ω*_opt | 0.924 | 0.623 | 0.705 | 0.8 | 0.798 |
| | IT | 18 | 78 | 46 | 44 | 41 |
| | CPU | 0.0156 | 0.0249 | 0.0135 | 0.0185 | 0.0323 |
| | RES | 5.3732e-6 | 8.1300e-6 | 6.6885e-6 | 8.1115e-6 | 8.4195e-6 |
| ASOR-like-opt | ω*_opt | 0.667 | 0.587 | 0.606 | 0.629 | 0.629 |
| | IT | 36 | 86 | 59 | 67 | 62 |
| | CPU | 0.0067 | 0.0227 | 0.0071 | 0.0164 | 0.0308 |
| | RES | 6.7158e-6 | 8.4438e-6 | 8.4476e-6 | 7.8581e-6 | 8.6371e-6 |
| SOR-like-aopt | ω*_aopt | 0.765 | 0.620 | 0.657 | 0.7 | 0.699 |
| | IT | 27 | 78 | 51 | 56 | 52 |
| | CPU | 0.0066 | 0.0200 | 0.0082 | 0.0137 | 0.0252 |
| | RES | 9.4714e-6 | 9.1027e-6 | 8.9346e-6 | 7.8581e-6 | 7.8337e-6 |
| ASOR-like-aopt | ω*_aopt | 0.765 | 0.620 | 0.657 | 0.7 | 0.699 |
| | IT | 28 | 79 | 52 | 56 | 52 |
| | CPU | 0.0045 | 0.0188 | 0.0070 | 0.0151 | 0.0248 |
| | RES | 6.8235e-6 | 8.7466e-6 | 8.0426e-6 | 8.9334e-6 | 9.2929e-6 |
Table 2. Numerical results for Example 1 with min(cond(A)) = 10.

| Method | | n = 256 | n = 512 | n = 1024 | n = 2048 | n = 4096 |
|---|---|---|---|---|---|---|
| | a.density(A) | 0.003 | 0.003 | 0.0003 | 0.00003 | 0.000003 |
| | density(A) | 0.0039 | 0.0029 | 9.7656e-4 | 4.8828e-4 | 2.4414e-4 |
| | cond(A) | 14.7244 | 16.3457 | 35.1532 | 19.9552 | 43.1216 |
| | ‖A⁻¹‖ | 0.5628 | 0.8446 | 0.7157 | 0.5137 | 0.7003 |
| SOR-like-exp | ω*_exp | 0.974 | 0.979 | 0.983 | 0.986 | 0.993 |
| | IT | 11 | 13 | 10 | 10 | 9 |
| | CPU | 2.0967 | 16.7058 | 6.5141 | 15.7872 | 34.3956 |
| | RES | 9.5213e-6 | 9.7642e-6 | 9.4063e-6 | 9.1467e-6 | 9.3907e-6 |
| ASOR-like-exp | ω*_exp | 0.975 | 0.98 | 0.984 | 0.987 | 0.994 |
| | IT | 11 | 13 | 10 | 10 | 9 |
| | CPU | 2.0719 | 16.9688 | 6.5851 | 15.9789 | 34.7201 |
| | RES | 9.9142e-6 | 9.6326e-6 | 9.3416e-6 | 8.8497e-6 | 8.4067e-6 |
| SOR-like-opt | ω*_opt | 0.829 | 0.682 | 0.744 | 0.858 | 0.752 |
| | IT | 19 | 30 | 23 | 17 | 23 |
| | CPU | 0.0106 | 0.0182 | 0.0129 | 0.0129 | 0.0203 |
| | RES | 3.7751e-6 | 5.6117e-6 | 6.0539e-6 | 5.1048e-6 | 4.6180e-6 |
| ASOR-like-opt | ω*_opt | 0.636 | 0.6 | 0.615 | 0.645 | 0.617 |
| | IT | 32 | 37 | 33 | 31 | 34 |
| | CPU | 0.0053 | 0.0108 | 0.0061 | 0.0093 | 0.0182 |
| | RES | 7.0505e-6 | 9.4759e-6 | 8.9738e-6 | 8.0716e-6 | 5.3617e-6 |
| SOR-like-aopt | ω*_aopt | 0.713 | 0.647 | 0.674 | 0.728 | 0.678 |
| | IT | 25 | 32 | 27 | 24 | 28 |
| | CPU | 0.0036 | 0.0125 | 0.0069 | 0.0085 | 0.0173 |
| | RES | 8.5072e-6 | 8.8675e-6 | 9.9723e-6 | 8.0679e-6 | 4.6255e-6 |
| ASOR-like-aopt | ω*_aopt | 0.713 | 0.647 | 0.674 | 0.728 | 0.678 |
| | IT | 26 | 33 | 29 | 25 | 29 |
| | CPU | 0.0032 | 0.0119 | 0.0065 | 0.0080 | 0.0163 |
| | RES | 7.6108e-6 | 8.3079e-6 | 4.7287e-6 | 6.9463e-6 | 4.9004e-6 |
Table 3. Numerical results for Example 1 with min(cond(A)) = 100.

| Method | | n = 256 | n = 512 | n = 1024 | n = 2048 | n = 4096 |
|---|---|---|---|---|---|---|
| | a.density(A) | 0.003 | 0.003 | 0.0003 | 0.00003 | 0.000003 |
| | density(A) | 0.0039 | 0.0029 | 9.7656e-4 | 4.8828e-4 | 2.4414e-4 |
| | cond(A) | 120.3861 | 307.0414 | 153.6908 | 109.2455 | 200.7276 |
| | ‖A⁻¹‖ | 0.3826 | 0.6618 | 0.3243 | 0.9690 | 0.9450 |
| SOR-like-exp | ω*_exp | 0.994 | 0.995 | 0.996 | 0.998 | 1 |
| | IT | 6 | 6 | 6 | 8 | 7 |
| | CPU | 1.6663 | 14.5714 | 5.9188 | 15.1130 | 33.1430 |
| | RES | 7.8386e-6 | 8.9158e-6 | 7.4674e-6 | 7.6907e-6 | 5.8261e-6 |
| ASOR-like-exp | ω*_exp | 0.995 | 0.996 | 0.996 | 0.998 | 1 |
| | IT | 6 | 6 | 6 | 10 | 7 |
| | CPU | 1.7731 | 14.7743 | 6.2905 | 15.1825 | 33.6529 |
| | RES | 6.6006e-6 | 7.1726e-6 | 9.1736e-6 | 8.1919e-6 | 5.8261e-6 |
| SOR-like-opt | ω*_opt | 0.936 | 0.773 | 0.967 | 0.630 | 0.639 |
| | IT | 10 | 18 | 9 | 29 | 28 |
| | CPU | 0.0096 | 0.0149 | 0.0151 | 0.0165 | 0.0255 |
| | RES | 3.5974e-6 | 3.5650e-6 | 9.8522e-7 | 9.2673e-6 | 8.4378e-6 |
| ASOR-like-opt | ω*_opt | 0.671 | 0.622 | 0.686 | 0.588 | 0.591 |
| | IT | 25 | 29 | 25 | 34 | 34 |
| | CPU | 0.0035 | 0.0103 | 0.0057 | 0.0104 | 0.0190 |
| | RES | 4.7060e-6 | 4.9047e-6 | 4.0254e-6 | 8.9792e-6 | 6.3144e-6 |
| SOR-like-aopt | ω*_aopt | 0.772 | 0.687 | 0.795 | 0.623 | 0.628 |
| | IT | 18 | 22 | 17 | 30 | 29 |
| | CPU | 0.0033 | 0.0096 | 0.0050 | 0.0110 | 0.0193 |
| | RES | 3.0545e-6 | 8.0103e-6 | 5.4605e-6 | 6.4254e-6 | 7.6955e-6 |
| ASOR-like-aopt | ω*_aopt | 0.772 | 0.687 | 0.795 | 0.623 | 0.628 |
| | IT | 19 | 24 | 18 | 31 | 31 |
| | CPU | 0.0031 | 0.0081 | 0.0049 | 0.0103 | 0.0150 |
| | RES | 3.5584e-6 | 6.2406e-6 | 5.6879e-6 | 8.5925e-6 | 5.2066e-6 |
Table 4. Numerical results for Example 2 with m = 10 and p = 0.

| Method | | q = 0 | q = 1 | q = 10 | q = 100 | q = 1000 |
|---|---|---|---|---|---|---|
| | ‖A⁻¹‖ | 0.6836 | 0.6568 | 0.4955 | 0.2682 | 0.2502 |
| SOR-like-exp | ω*_exp | 0.99 | 0.99 | 0.99 | 0.99 | 1 |
| | IT | 14 | 14 | 13 | 10 | 7 |
| | CPU | 17.3447 | 16.7893 | 15.9677 | 13.5858 | 10.6116 |
| | RES | 6.1270e-6 | 5.2623e-6 | 5.4657e-6 | 4.4779e-6 | 1.8263e-6 |
| ASOR-like-exp | ω*_exp | 0.99 | 0.99 | 0.99 | 0.99 | 1 |
| | IT | 14 | 14 | 13 | 10 | 7 |
| | CPU | 16.5011 | 16.5607 | 15.8689 | 13.2042 | 10.3419 |
| | RES | 6.3024e-6 | 5.4242e-6 | 5.6905e-6 | 4.7519e-6 | 1.8263e-6 |
| SOR-like-opt | ω*_opt | 0.761 | 0.775 | 0.869 | 0.993 | 1 |
| | IT | 27 | 26 | 19 | 10 | 7 |
| | CPU | 0.0207 | 0.0148 | 0.0118 | 0.0140 | 0.0103 |
| | RES | 5.4247e-6 | 5.3442e-6 | 9.1631e-7 | 3.2788e-6 | 1.9915e-6 |
| ASOR-like-opt | ω*_opt | 0.619 | 0.623 | 0.648 | 0.702 | 0.708 |
| | IT | 39 | 38 | 35 | 26 | 23 |
| | CPU | 0.0095 | 0.0106 | 0.0086 | 0.0098 | 0.0078 |
| | RES | 6.4887e-6 | 8.2543e-6 | 6.6371e-6 | 4.9962e-6 | 7.0446e-6 |
| SOR-like-aopt | ω*_aopt | 0.682 | 0.689 | 0.733 | 0.820 | 0.828 |
| | IT | 32 | 32 | 27 | 18 | 16 |
| | CPU | 0.0091 | 0.0095 | 0.0090 | 0.0068 | 0.0057 |
| | RES | 8.4176e-6 | 6.1208e-6 | 9.5221e-6 | 7.4910e-6 | 3.6549e-6 |
| ASOR-like-aopt | ω*_aopt | 0.682 | 0.689 | 0.733 | 0.820 | 0.828 |
| | IT | 33 | 32 | 28 | 19 | 16 |
| | CPU | 0.0085 | 0.0093 | 0.0080 | 0.0062 | 0.0056 |
| | RES | 7.2919e-6 | 8.8531e-6 | 7.2922e-6 | 4.2199e-6 | 9.4326e-6 |
Table 5. Numerical results for Example 2 with m = 10 and p = 1.

| Method | | q = 0 | q = 1 | q = 10 |
|---|---|---|---|---|
| | ‖A⁻¹‖ | 0.4535 | 0.4419 | 0.3633 |
| SOR-like-exp | ω*_exp | 0.99 | 0.99 | 0.99 |
| | IT | 12 | 12 | 12 |
| | CPU | 15.6642 | 15.6092 | 15.2944 |
| | RES | 8.1549e-6 | 8.2374e-6 | 5.4467e-6 |
| ASOR-like-exp | ω*_exp | 0.99 | 0.99 | 0.99 |
| | IT | 12 | 12 | 12 |
| | CPU | 15.9589 | 15.4252 | 15.2944 |
| | RES | 8.3692e-6 | 8.4516e-6 | 5.6546e-6 |
| SOR-like-opt | ω*_opt | 0.894 | 0.901 | 0.947 |
| | IT | 17 | 17 | 14 |
| | CPU | 0.0157 | 0.0118 | 0.0138 |
| | RES | 6.8516e-6 | 3.9006e-6 | 6.5650e-7 |
| ASOR-like-opt | ω*_opt | 0.656 | 0.658 | 0.676 |
| | IT | 33 | 33 | 30 |
| | CPU | 0.0091 | 0.0089 | 0.0081 |
| | RES | 9.4174e-6 | 7.1217e-6 | 8.6655e-6 |
| SOR-like-aopt | ω*_aopt | 0.747 | 0.751 | 0.779 |
| | IT | 26 | 25 | 23 |
| | CPU | 0.0079 | 0.0079 | 0.0068 |
| | RES | 5.3883e-6 | 8.4835e-6 | 5.0107e-6 |
| ASOR-like-aopt | ω*_aopt | 0.747 | 0.751 | 0.779 |
| | IT | 26 | 26 | 23 |
| | CPU | 0.0075 | 0.0074 | 0.0066 |
| | RES | 9.2671e-6 | 6.3792e-6 | 7.6512e-6 |
Table 6. Numerical results for Example 3.

| Method | | mesh1e1 | mesh1em1 | mesh2e1 | Trefethen_20b | Trefethen_200b |
|---|---|---|---|---|---|---|
| | ‖A⁻¹‖ | 0.5747 | 0.6397 | 0.7615 | 0.4244 | 0.4265 |
| SOR-like-exp | ω*_exp | 0.94 | 0.93 | 0.94 | 0.95 | 0.95 |
| | IT | 15 | 15 | 17 | 10 | 10 |
| | CPU | 2.8584 | 2.9256 | 26.1986 | 1.5056 | 47.5188 |
| | RES | 9.2893e-6 | 9.8022e-6 | 8.0935e-6 | 7.8939e-6 | 7.9623e-6 |
| ASOR-like-exp | ω*_exp | 0.95 | 0.91 | 0.94 | 0.95 | 0.95 |
| | IT | 15 | 16 | 17 | 10 | 10 |
| | CPU | 2.8557 | 2.8761 | 27.4489 | 1.6236 | 46.2290 |
| | RES | 7.0764e-6 | 9.3454e-6 | 8.6726e-6 | 9.0674e-6 | 9.1348e-6 |
| SOR-like-opt | ω*_opt | 0.822 | 0.785 | 0.721 | 0.911 | 0.91 |
| | IT | 21 | 22 | 29 | 12 | 12 |
| | CPU | 0.0080 | 0.0112 | 0.0188 | 0.0103 | 0.0191 |
| | RES | 6.6983e-6 | 8.4588e-6 | 9.1045e-7 | 3.8820e-6 | 4.1161e-6 |
| ASOR-like-opt | ω*_opt | 0.635 | 0.625 | 0.61 | 0.662 | 0.661 |
| | IT | 33 | 33 | 39 | 23 | 23 |
| | CPU | 0.0032 | 0.0033 | 0.0125 | 0.0028 | 0.0148 |
| | RES | 9.8908e-6 | 9.8762e-6 | 8.6726e-6 | 7.3721e-6 | 7.8951e-6 |