Article

Efficient Algorithms for Linear System Identification with Particular Symmetric Filters

by Ionuţ-Dorinel Fîciu 1, Jacob Benesty 2, Laura-Maria Dogariu 1,*, Constantin Paleologu 1 and Silviu Ciochină 1

1 Department of Telecommunications, University Politehnica of Bucharest, 1-3, Iuliu Maniu Blvd., 061071 Bucharest, Romania
2 INRS-EMT, University of Quebec, Montreal, QC H5A 1K6, Canada
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(9), 4263; https://doi.org/10.3390/app12094263
Submission received: 6 April 2022 / Revised: 20 April 2022 / Accepted: 21 April 2022 / Published: 23 April 2022
(This article belongs to the Special Issue Statistical Signal Processing: Theory, Methods and Applications)

Abstract

In linear system identification problems, it is important to reveal and exploit any specific intrinsic characteristic of the impulse responses, in order to improve the overall performance, especially in terms of the accuracy and complexity of the solution. In this paper, we focus on the nearest Kronecker product decomposition of the impulse responses, together with low-rank approximations. Such an approach is suitable for the identification of a wide range of real-world systems. Most importantly, we reformulate the system identification problem by using a particular symmetric filter within the development, which allows us to efficiently design two (iterative/recursive) algorithms. First, an iterative Wiener filter is proposed, with improved performance as compared to the conventional Wiener filter, especially in challenging conditions (e.g., small amount of available data and/or noisy environments). Second, an even more practical solution is developed, in the form of a recursive least-squares adaptive algorithm, which could represent an appealing choice in real-time applications. Overall, based on the proposed approach, a system identification problem that can be conventionally solved by using a system of $L = L_1 L_2$ equations (with $L$ unknown parameters) is reformulated as a combination of two systems of $P L_1$ and $P L_2$ equations, respectively, where usually $P \ll L_2$ (i.e., a total of $P L_1 + P L_2$ parameters). This could lead to important advantages, in terms of both performance and complexity. Simulation results are provided in the framework of network and acoustic echo cancellation, supporting the performance gain and the practical features of the proposed algorithms.

1. Introduction

Linear system identification problems represent key issues in many important applications [1]. In this context, one of the main challenges is related to the long length of the impulse response to be identified, which is usually the case in applications related to the acoustic environment [2], like echo cancellation, noise reduction, microphone arrays, etc. For example, in echo cancellation [3], such a network/acoustic impulse response can have hundreds/thousands of coefficients (even for a sampling rate of 8 kHz), which significantly increases the difficulty of the system identification problem, especially in terms of the complexity and accuracy of the solution.
An efficient approach that can be exploited in the framework of such high-dimension system identification problems is based on the Kronecker product decomposition of the impulse response, e.g., see [4,5], and the references therein. Such an approach fits very well for the identification of linearly separable (or multilinear) systems, because it involves tensor modeling techniques. In other words, a high-dimension system identification problem is solved based on a combination of low-dimension solutions (i.e., shorter impulse responses), which are tensorized together. This approach can find applications in many important fields, like wireless communication systems [6,7], nonlinear acoustic echo cancellation [8,9], source separation [10,11], array beamforming [12,13], and object recognition [14,15]. Furthermore, by using the decomposition based on the nearest Kronecker product, followed by low-rank approximations, more general forms of impulse responses can be identified, which are not linearly separable [16,17,18,19]. Recent studies on this topic have targeted specific applications, like echo cancellation, noise reduction, adaptive beamforming, microphone arrays, linear prediction, and speech dereverberation, e.g., see [20,21,22,23,24,25,26,27,28] and references therein.
Recently, the identification of linearly separable systems that possess particular intrinsic symmetric/antisymmetric properties has been addressed in [29,30]. As compared to the previous approaches, these very recent works exploit a particular “hidden” symmetry feature of the system to be identified, thus leading to improved solutions as compared to their conventional counterparts. In [29], the solution was formulated in terms of an iterative Wiener filter, whereas [30] developed a family of least-mean-square (LMS) adaptive algorithms tailored for such systems.
In this paper, we follow the idea of using such a particular symmetric filter for linear system identification problems. As compared to the works reported in [29,30], we do not limit our solution to the identification of linearly separable systems, but extend it to more general forms of impulse responses. The resulting contribution is basically twofold. First, an iterative Wiener filter is developed, with improved performance features as compared to the conventional Wiener approach, especially when a small amount of data is available for the estimation of the required statistics. Furthermore, we propose a recursive least-squares (RLS) adaptive algorithm based on a similar framework, which represents a more practical solution (as compared to the Wiener filter) for real-world/real-time applications.
Following this introduction, the system identification problem using the specific model that involves the symmetric filter is presented in Section 2. Next, the solution based on the Wiener filter is developed in Section 3. In Section 4, we propose the RLS-based algorithm following a similar approach. Simulation results are provided in Section 5, in the framework of network and acoustic echo cancellation, thus supporting the theoretical findings and the performance gain of the proposed solutions. Finally, a summarized discussion is included in Section 6, while Section 7 concludes this work, outlining the main ideas and results, together with several perspectives for future works.

2. System Model and Problem Formulation

Let us consider a linear single-input/single-output (SISO) system, whose output at the discrete-time index t is given by
$$d(t) = \mathbf{h}^T \mathbf{x}(t) + w(t) = y(t) + w(t), \qquad (1)$$
where $d(t)$ is also known as the desired signal,
$$\mathbf{x}(t) = \left[ x(t) \;\; x(t-1) \;\; \cdots \;\; x(t-L+1) \right]^T$$
is a vector containing the most recent $L$ time samples of the zero-mean input signal $x(t)$, the superscript $^T$ denotes the transpose operator, $\mathbf{h}$ is the unknown system impulse response (a column vector of length $L$), and $w(t)$ is the zero-mean additive noise. It is assumed that we deal with real-valued signals, and that $x(t)$ and $w(t)$ are uncorrelated. Then, the objective of the so-called SISO system identification problem is to identify or estimate $\mathbf{h}$, in the best possible way, given the input signal vector, $\mathbf{x}(t)$, and the output signal, $d(t)$. From (1), we see that the signal-to-noise ratio (SNR) of the SISO system is
$$\mathrm{SNR} = \frac{\mathbf{h}^T \mathbf{R}_{\mathbf{x}} \mathbf{h}}{\sigma_w^2}, \qquad (2)$$
where $\mathbf{R}_{\mathbf{x}} = E\left[ \mathbf{x}(t) \mathbf{x}^T(t) \right]$ and $\sigma_w^2 = E\left[ w^2(t) \right]$ are the covariance matrix of $\mathbf{x}(t)$ and the variance of $w(t)$, respectively, with $E[\cdot]$ denoting mathematical expectation.
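As a quick numerical illustration of (2), the following minimal NumPy sketch evaluates the SNR for an AR(1) input with a pole at 0.9 (the input model used later in Section 5); the impulse response and noise variance here are arbitrary placeholder values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 500                        # filter length, as in the simulations of Section 5
h = rng.standard_normal(L)     # placeholder impulse response (illustrative only)
sigma_w2 = 0.01                # placeholder noise variance

# For an AR(1) process with pole 0.9 driven by unit-variance white noise,
# the covariance matrix is R_x[i, j] = 0.9**|i-j| / (1 - 0.9**2).
idx = np.arange(L)
R_x = 0.9 ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - 0.9 ** 2)

snr = (h @ R_x @ h) / sigma_w2          # SNR from (2)
print("SNR [dB]:", 10 * np.log10(snr))
```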
Let us assume that $L = L_1 L_2$. Without loss of generality, we may also assume that $L_1 \geq L_2$. Then, the system impulse response can be decomposed as [16]
$$\mathbf{h} = \sum_{l=1}^{L_2} \mathbf{h}_{2,l} \otimes \mathbf{h}_{1,l}, \qquad (3)$$
where $\otimes$ is the Kronecker product [31], and $\mathbf{h}_{1,l}$ and $\mathbf{h}_{2,l}$ are short impulse responses (column vectors) of lengths $L_1$ and $L_2$, respectively. Another way to write (3) is
$$\mathbf{h} = \mathrm{vec}\left( \sum_{l=1}^{L_2} \mathbf{h}_{1,l} \mathbf{h}_{2,l}^T \right) = \mathrm{vec}\left( \mathbf{H}_1 \mathbf{H}_2^T \right) = \mathrm{vec}\left( \mathbf{H} \right), \qquad (4)$$
where $\mathrm{vec}(\cdot)$ is the vectorization operation, which consists of converting a matrix into a vector,
$$\mathbf{H}_1 = \left[ \mathbf{h}_{1,1} \;\; \mathbf{h}_{1,2} \;\; \cdots \;\; \mathbf{h}_{1,L_2} \right], \quad \mathbf{H}_2 = \left[ \mathbf{h}_{2,1} \;\; \mathbf{h}_{2,2} \;\; \cdots \;\; \mathbf{h}_{2,L_2} \right]$$
are matrices of sizes $L_1 \times L_2$ and $L_2 \times L_2$, respectively, and $\mathbf{H}$ is the equivalent matrix (of size $L_1 \times L_2$) representation of $\mathbf{h}$. However, very often in practice, the column rank of $\mathbf{H}$ is far from its maximum value, i.e., $\mathrm{rank}\left( \mathbf{H} \right) = P \ll L_2$. As a result, (3) can be expressed as
$$\mathbf{h} = \sum_{p=1}^{P} \mathbf{h}_{2,p} \otimes \mathbf{h}_{1,p} \qquad (5)$$
and $y(t)$ in (1) becomes
$$y(t) = \left( \sum_{p=1}^{P} \mathbf{h}_{2,p} \otimes \mathbf{h}_{1,p} \right)^T \mathbf{x}(t). \qquad (6)$$
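For concreteness, a low-rank decomposition of the form (5) can be computed from the truncated singular value decomposition (SVD) of the reshaped impulse response $\mathbf{H}$, in the spirit of the nearest Kronecker product framework of [16]. The following NumPy sketch illustrates this, assuming column-major stacking so that $\mathbf{h} = \mathrm{vec}(\mathbf{H})$; the function names are illustrative only.

```python
import numpy as np

def nearest_kronecker_terms(h, L1, L2, P):
    """P-term nearest Kronecker product decomposition of h,
    following (5): h ~ sum_p kron(h2_p, h1_p).

    h is reshaped into the L1 x L2 matrix H (column-major, so that vec(H) = h),
    and the truncated SVD of H supplies the short components."""
    H = h.reshape((L1, L2), order="F")
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    h1 = [np.sqrt(s[p]) * U[:, p] for p in range(P)]    # length-L1 components
    h2 = [np.sqrt(s[p]) * Vt[p, :] for p in range(P)]   # length-L2 components
    return h1, h2

def rebuild(h1, h2):
    """Reassemble the length L1*L2 impulse response, as in (5)."""
    return sum(np.kron(b, a) for a, b in zip(h1, h2))

# quick check on a random (full-rank) impulse response
rng = np.random.default_rng(1)
L1, L2 = 25, 20
h = rng.standard_normal(L1 * L2)
h1, h2 = nearest_kronecker_terms(h, L1, L2, P=L2)   # P = L2 gives exact reconstruction
print(np.allclose(rebuild(h1, h2), h))              # True
```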
Let us define the symmetric impulse response [29]:
$$\mathbf{h}^{+} = \sum_{p=1}^{P} \frac{ \mathbf{h}_{2,p} \otimes \mathbf{h}_{1,p} + \mathbf{h}_{1,p} \otimes \mathbf{h}_{2,p} }{2}. \qquad (7)$$
We should note that this particular symmetric characteristic is not related to the samples of the impulse response (e.g., like in linear-phase finite-impulse-response filters), but to a specific combination of the two sub-impulse responses $\mathbf{h}_{1,p}$ and $\mathbf{h}_{2,p}$. Thus, denoting by $\mathbf{h}_p\left( \mathbf{h}_{1,p}, \mathbf{h}_{2,p} \right)$ the $p$th term of the sum in the numerator of (7), we can write
$$\mathbf{h}_p\left( \mathbf{h}_{1,p}, \mathbf{h}_{2,p} \right) = \mathbf{h}_p\left( \mathbf{h}_{2,p}, \mathbf{h}_{1,p} \right), \qquad (8)$$
$$\mathbf{h}_p\left( \mathbf{h}_{1,p}, \mathbf{h}_{1,p} \right) = 2\, \mathbf{h}_{1,p} \otimes \mathbf{h}_{1,p}, \qquad (9)$$
which define the particular symmetry mentioned before.
It can be shown that
$$\mathbf{h}_{1,p} \otimes \mathbf{h}_{2,p} = \mathbf{P}_{L_1 L_2} \left( \mathbf{h}_{2,p} \otimes \mathbf{h}_{1,p} \right), \qquad (10)$$
where $\mathbf{P}_{L_1 L_2}$ is an $L \times L$ permutation matrix; see [32] and Appendix A. Substituting the previous expression into (7), we get
$$\mathbf{h}^{+} = \mathbf{Q} \sum_{p=1}^{P} \mathbf{h}_{2,p} \otimes \mathbf{h}_{1,p} = \mathbf{Q} \mathbf{h}, \qquad (11)$$
where
$$\mathbf{Q} = \frac{ \mathbf{I}_L + \mathbf{P}_{L_1 L_2} }{2}, \qquad (12)$$
with $\mathbf{I}_L$ being the $L \times L$ identity matrix. As a consequence, another interesting way to express (6) is
$$y(t) = \left( \mathbf{h}^{+} \right)^T \mathbf{Q}^{-T} \mathbf{x}(t) \qquad (13)$$
and our objective is to identify the symmetric impulse response, $\mathbf{h}^{+}$, with a symmetric filter; it is then easy to find an estimate of the original impulse response from the relation in (11).
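To make the construction concrete, the following NumPy sketch builds the permutation matrix $\mathbf{P}_{L_1 L_2}$ (following Appendix A) and the matrix $\mathbf{Q}$ from (12), and numerically checks the relations (10) and (11) on a small random example; the helper name and the toy dimensions are illustrative choices of this sketch.

```python
import numpy as np

def commutation_matrix(L1, L2):
    """Permutation matrix P_{L1L2} such that kron(h1, h2) = P @ kron(h2, h1),
    as in (10) and Appendix A."""
    L = L1 * L2
    P = np.zeros((L, L))
    for l1 in range(L1):
        for l2 in range(L2):
            J = np.outer(np.eye(L1)[:, l1], np.eye(L2)[:, l2])  # J_{l1 l2}, size L1 x L2
            P += np.kron(J, J.T)
    return P

L1, L2 = 4, 3
rng = np.random.default_rng(2)
h1, h2 = rng.standard_normal(L1), rng.standard_normal(L2)

P = commutation_matrix(L1, L2)
print(np.allclose(np.kron(h1, h2), P @ np.kron(h2, h1)))   # identity (10): True

Q = 0.5 * (np.eye(L1 * L2) + P)                            # matrix Q from (12)
h = np.kron(h2, h1)                                        # a rank-1 instance of (5)
h_plus = 0.5 * (np.kron(h2, h1) + np.kron(h1, h2))         # symmetric response (7)
print(np.allclose(h_plus, Q @ h))                          # relation (11): True
```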

3. Iterative Wiener Filter

Let the filter g be an estimate of the impulse response h and let the symmetric filter:
$$\mathbf{g}^{+} = \sum_{p=1}^{P} \frac{ \mathbf{g}_{2,p} \otimes \mathbf{g}_{1,p} + \mathbf{g}_{1,p} \otimes \mathbf{g}_{2,p} }{2} = \mathbf{Q} \mathbf{g} \qquad (14)$$
be an estimate of $\mathbf{h}^{+}$, with $\mathbf{g}_{1,p}$ and $\mathbf{g}_{2,p}$ being estimates of $\mathbf{h}_{1,p}$ and $\mathbf{h}_{2,p}$, respectively. We can conveniently rewrite (14) as
$$\mathbf{g}^{+} = \sum_{p=1}^{P} \frac{ \left( \mathbf{g}_{2,p} \otimes \mathbf{I}_{L_1} + \mathbf{I}_{L_1} \otimes \mathbf{g}_{2,p} \right) \mathbf{g}_{1,p} }{2} = \sum_{p=1}^{P} \mathbf{G}\left( \mathbf{g}_{2,p} \right) \mathbf{g}_{1,p} = \underline{\mathbf{G}}_2 \, \underline{\mathbf{g}}_1, \qquad (15)$$
where
$$\mathbf{G}\left( \mathbf{g}_{2,p} \right) = \frac{ \mathbf{g}_{2,p} \otimes \mathbf{I}_{L_1} + \mathbf{I}_{L_1} \otimes \mathbf{g}_{2,p} }{2}$$
is a matrix of size $L \times L_1$,
$$\underline{\mathbf{G}}_2 = \left[ \mathbf{G}\left( \mathbf{g}_{2,1} \right) \;\; \mathbf{G}\left( \mathbf{g}_{2,2} \right) \;\; \cdots \;\; \mathbf{G}\left( \mathbf{g}_{2,P} \right) \right]$$
is a matrix of size $L \times P L_1$, and
$$\underline{\mathbf{g}}_1 = \left[ \mathbf{g}_{1,1}^T \;\; \mathbf{g}_{1,2}^T \;\; \cdots \;\; \mathbf{g}_{1,P}^T \right]^T$$
is a vector of length $P L_1$. In the same manner, we can rewrite (14) in another useful form as follows:
$$\mathbf{g}^{+} = \sum_{p=1}^{P} \frac{ \left( \mathbf{I}_{L_2} \otimes \mathbf{g}_{1,p} + \mathbf{g}_{1,p} \otimes \mathbf{I}_{L_2} \right) \mathbf{g}_{2,p} }{2} = \sum_{p=1}^{P} \mathbf{G}\left( \mathbf{g}_{1,p} \right) \mathbf{g}_{2,p} = \underline{\mathbf{G}}_1 \, \underline{\mathbf{g}}_2, \qquad (16)$$
where
$$\mathbf{G}\left( \mathbf{g}_{1,p} \right) = \frac{ \mathbf{I}_{L_2} \otimes \mathbf{g}_{1,p} + \mathbf{g}_{1,p} \otimes \mathbf{I}_{L_2} }{2}$$
is a matrix of size $L \times L_2$,
$$\underline{\mathbf{G}}_1 = \left[ \mathbf{G}\left( \mathbf{g}_{1,1} \right) \;\; \mathbf{G}\left( \mathbf{g}_{1,2} \right) \;\; \cdots \;\; \mathbf{G}\left( \mathbf{g}_{1,P} \right) \right]$$
is a matrix of size $L \times P L_2$, and
$$\underline{\mathbf{g}}_2 = \left[ \mathbf{g}_{2,1}^T \;\; \mathbf{g}_{2,2}^T \;\; \cdots \;\; \mathbf{g}_{2,P}^T \right]^T$$
is a vector of length $P L_2$.
Now, we can define the error signal between the desired signal, $d(t)$, and the estimated signal, $\hat{y}(t) = \mathbf{g}^T \mathbf{x}(t)$, as follows:
$$e(t) = d(t) - \hat{y}(t), \qquad (17)$$
which can be expressed in two different manners:
$$e(t) = d(t) - \underline{\mathbf{g}}_1^T \underline{\mathbf{G}}_2^T \mathbf{Q}^{-T} \mathbf{x}(t) \qquad (18)$$
$$\phantom{e(t)} = d(t) - \underline{\mathbf{g}}_2^T \underline{\mathbf{G}}_1^T \mathbf{Q}^{-T} \mathbf{x}(t). \qquad (19)$$
From (18) and assuming that $\underline{\mathbf{g}}_2$ is fixed, we can write the mean-squared error (MSE) criterion as
$$J\left( \underline{\mathbf{g}}_1 \,\middle|\, \underline{\mathbf{g}}_2 \right) = E\left\{ \left[ d(t) - \underline{\mathbf{g}}_1^T \underline{\mathbf{G}}_2^T \mathbf{Q}^{-T} \mathbf{x}(t) \right]^2 \right\} = \sigma_d^2 - 2 \underline{\mathbf{g}}_1^T \underline{\mathbf{G}}_2^T \mathbf{Q}^{-T} \mathbf{r}_{\mathbf{x}d} + \underline{\mathbf{g}}_1^T \underline{\mathbf{G}}_2^T \mathbf{Q}^{-T} \mathbf{R}_{\mathbf{x}} \mathbf{Q}^{-1} \underline{\mathbf{G}}_2 \, \underline{\mathbf{g}}_1, \qquad (20)$$
where $\sigma_d^2$ is the variance of $d(t)$ and $\mathbf{r}_{\mathbf{x}d} = E\left[ \mathbf{x}(t) d(t) \right]$ is the cross-correlation vector between $\mathbf{x}(t)$ and $d(t)$. From the minimization of $J\left( \underline{\mathbf{g}}_1 \,\middle|\, \underline{\mathbf{g}}_2 \right)$, we easily get the Wiener filter for $\underline{\mathbf{g}}_1$, i.e.,
$$\underline{\mathbf{g}}_{1,\mathrm{W}} = \left( \underline{\mathbf{G}}_2^T \mathbf{Q}^{-T} \mathbf{R}_{\mathbf{x}} \mathbf{Q}^{-1} \underline{\mathbf{G}}_2 \right)^{-1} \underline{\mathbf{G}}_2^T \mathbf{Q}^{-T} \mathbf{r}_{\mathbf{x}d}. \qquad (21)$$
In the same way, from (19) and assuming that $\underline{\mathbf{g}}_1$ is fixed, we can write the MSE criterion as
$$J\left( \underline{\mathbf{g}}_2 \,\middle|\, \underline{\mathbf{g}}_1 \right) = E\left\{ \left[ d(t) - \underline{\mathbf{g}}_2^T \underline{\mathbf{G}}_1^T \mathbf{Q}^{-T} \mathbf{x}(t) \right]^2 \right\} = \sigma_d^2 - 2 \underline{\mathbf{g}}_2^T \underline{\mathbf{G}}_1^T \mathbf{Q}^{-T} \mathbf{r}_{\mathbf{x}d} + \underline{\mathbf{g}}_2^T \underline{\mathbf{G}}_1^T \mathbf{Q}^{-T} \mathbf{R}_{\mathbf{x}} \mathbf{Q}^{-1} \underline{\mathbf{G}}_1 \, \underline{\mathbf{g}}_2, \qquad (22)$$
whose minimization leads to the Wiener filter for $\underline{\mathbf{g}}_2$, i.e.,
$$\underline{\mathbf{g}}_{2,\mathrm{W}} = \left( \underline{\mathbf{G}}_1^T \mathbf{Q}^{-T} \mathbf{R}_{\mathbf{x}} \mathbf{Q}^{-1} \underline{\mathbf{G}}_1 \right)^{-1} \underline{\mathbf{G}}_1^T \mathbf{Q}^{-T} \mathbf{r}_{\mathbf{x}d}. \qquad (23)$$
By alternately iterating between $\underline{\mathbf{g}}_{1,\mathrm{W}}$ and $\underline{\mathbf{g}}_{2,\mathrm{W}}$ as in [16,33], these two optimal filters will converge to the Wiener solution after just a couple of iterations. The optimal symmetric filter $\mathbf{g}_{\mathrm{W}}^{+}$ can be easily constructed and then the Wiener filter $\mathbf{g}_{\mathrm{W}}$, which is the optimal estimate of $\mathbf{h}$, follows immediately.
As compared to the conventional Wiener filter, which obtains its solution by solving a linear system of $L$ equations that involves large data structures (e.g., the matrix $\mathbf{R}_{\mathbf{x}}$ of size $L \times L$), the proposed iterative Wiener filter combines the solutions of two shorter filters of lengths $P L_1$ and $P L_2$, with $P \ll L_2$. Thus, it is expected to be more robust and accurate, especially when the estimates of the statistics (i.e., $\mathbf{R}_{\mathbf{x}}$ and $\mathbf{r}_{\mathbf{x}d}$) are less accurate. Such scenarios occur when only a small amount of data samples is available and/or in noisy conditions (i.e., low SNRs), which raise very challenging practical issues.
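The NumPy sketch below is a compact, illustrative transcription of the alternating solution (21) and (23), run on a small synthetic example (white input, a synthetic low-rank impulse response, and toy dimensions $L_1 = 4$, $L_2 = 3$ chosen so that $\mathbf{Q}$ is nonsingular, which is an assumption of this sketch); the initialization, dimensions, and variable names are arbitrary choices, not prescriptions from the paper.

```python
import numpy as np

def commutation_matrix(L1, L2):
    # P such that kron(a, b) = P @ kron(b, a); see (10) and Appendix A
    P = np.zeros((L1 * L2, L1 * L2))
    for l1 in range(L1):
        for l2 in range(L2):
            J = np.outer(np.eye(L1)[:, l1], np.eye(L2)[:, l2])
            P += np.kron(J, J.T)
    return P

def G2_bar(g2_list, L1):
    # [G(g2_1) ... G(g2_P)], with G(g2_p) = (g2_p (x) I_L1 + I_L1 (x) g2_p) / 2, cf. (15)
    return np.hstack([0.5 * (np.kron(g.reshape(-1, 1), np.eye(L1))
                             + np.kron(np.eye(L1), g.reshape(-1, 1))) for g in g2_list])

def G1_bar(g1_list, L2):
    # [G(g1_1) ... G(g1_P)], with G(g1_p) = (I_L2 (x) g1_p + g1_p (x) I_L2) / 2, cf. (16)
    return np.hstack([0.5 * (np.kron(np.eye(L2), g.reshape(-1, 1))
                             + np.kron(g.reshape(-1, 1), np.eye(L2))) for g in g1_list])

# --- toy setup ---
rng = np.random.default_rng(0)
L1, L2, P_terms, N = 4, 3, 2, 5000
L = L1 * L2
h = sum(np.kron(rng.standard_normal(L2), rng.standard_normal(L1)) for _ in range(P_terms))

x = rng.standard_normal(N + L)                       # white input, for simplicity
X = np.stack([x[t:t + L][::-1] for t in range(N)])   # rows: [x(t) x(t-1) ... x(t-L+1)]
d = X @ h + 0.01 * rng.standard_normal(N)            # desired signal, cf. (1)

R_x = X.T @ X / N                                    # estimated statistics
r_xd = X.T @ d / N

Qinv = np.linalg.inv(0.5 * (np.eye(L) + commutation_matrix(L1, L2)))   # Q from (12)
A  = Qinv.T @ R_x @ Qinv                             # Q^{-T} R_x Q^{-1}
rq = Qinv.T @ r_xd                                   # Q^{-T} r_xd

# --- alternating (iterative) Wiener filter, cf. (21) and (23) ---
g1 = [rng.standard_normal(L1) for _ in range(P_terms)]
g2 = [rng.standard_normal(L2) for _ in range(P_terms)]
for _ in range(10):
    G2 = G2_bar(g2, L1)
    g1_vec = np.linalg.solve(G2.T @ A @ G2, G2.T @ rq)            # (21)
    g1 = [g1_vec[p * L1:(p + 1) * L1] for p in range(P_terms)]
    G1 = G1_bar(g1, L2)
    g2_vec = np.linalg.solve(G1.T @ A @ G1, G1.T @ rq)            # (23)
    g2 = [g2_vec[p * L2:(p + 1) * L2] for p in range(P_terms)]

g = sum(np.kron(b2, a1) for a1, b2 in zip(g1, g2))   # estimate of h, as in (5)
print("misalignment [dB]:", 20 * np.log10(np.linalg.norm(h - g) / np.linalg.norm(h)))
```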

4. RLS Adaptive Algorithm

The iterative Wiener filter developed in the previous section offers important advantages as compared to its conventional counterpart. Simulation results provided in Section 5.1 will support these performance features. Nevertheless, this iterative algorithm inherits some inherent limitations of the Wiener filter, such as the estimation of the required statistics, the time-invariant framework, and the matrix inversion operation. In practice, especially in real-time applications, the impulse response of the system to be identified could change in time. For example, the acoustic echo paths are time-variant systems, depending on temperature, pressure, humidity, and movement of objects or bodies [2]. This requires good tracking capabilities for the estimation algorithm. In this context, using an adaptive filter represents a more practical solution as compared to the Wiener filter. Consequently, in this section, we propose an RLS-based algorithm, following the approach presented in Section 3.
Let us denote by $\hat{\mathbf{g}}(t)$ the impulse response of the adaptive filter at time index $t$. The conventional RLS algorithm [34,35,36] is defined by the following equations:
$$\varepsilon(t) = d(t) - \hat{\mathbf{g}}^T(t-1) \mathbf{x}(t), \qquad (24)$$
$$\mathbf{k}(t) = \frac{ \hat{\mathbf{R}}_{\mathbf{x}}^{-1}(t-1) \mathbf{x}(t) }{ \lambda + \mathbf{x}^T(t) \hat{\mathbf{R}}_{\mathbf{x}}^{-1}(t-1) \mathbf{x}(t) }, \qquad (25)$$
$$\hat{\mathbf{g}}(t) = \hat{\mathbf{g}}(t-1) + \mathbf{k}(t) \varepsilon(t), \qquad (26)$$
$$\hat{\mathbf{R}}_{\mathbf{x}}^{-1}(t) = \frac{1}{\lambda} \left[ \mathbf{I}_L - \mathbf{k}(t) \mathbf{x}^T(t) \right] \hat{\mathbf{R}}_{\mathbf{x}}^{-1}(t-1), \qquad (27)$$
where $\varepsilon(t)$ is the a priori error signal, $\mathbf{k}(t)$ stands for the Kalman gain vector [37], $\hat{\mathbf{R}}_{\mathbf{x}}^{-1}(t)$ represents an estimate of the inverse of the input signal covariance matrix, and $\lambda$ is the so-called forgetting factor ($0 < \lambda \leq 1$), which controls the “memory” of the algorithm [38]. As we can notice, the matrix inversion operation is avoided based on the matrix inversion lemma (i.e., the Woodbury formula), which leads to the update from (27). The tracking capability of the algorithm is controlled in terms of the value of the forgetting factor, which is usually chosen as $\lambda = 1 - 1/(KL)$, with $K \geq 1$. A large value of this parameter (i.e., very close or equal to one) improves the accuracy of the solution, but reduces the tracking capability of the algorithm.
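For reference, the conventional RLS recursion (24)–(27) can be written compactly as follows; the initialization values in the usage example are common illustrative choices, not taken from the paper.

```python
import numpy as np

def rls_step(g_hat, R_inv, x_t, d_t, lam):
    """One iteration of the conventional RLS algorithm, cf. (24)-(27)."""
    e = d_t - g_hat @ x_t                                  # a priori error (24)
    Rinv_x = R_inv @ x_t
    k = Rinv_x / (lam + x_t @ Rinv_x)                      # Kalman gain (25)
    g_hat = g_hat + k * e                                  # filter update (26)
    R_inv = (R_inv - np.outer(k, x_t @ R_inv)) / lam       # inverse covariance update (27)
    return g_hat, R_inv, e

# tiny usage example on a synthetic system (illustrative values only)
rng = np.random.default_rng(0)
L, K = 64, 10
lam = 1.0 - 1.0 / (K * L)
h = rng.standard_normal(L)
g_hat, R_inv = np.zeros(L), 1e3 * np.eye(L)                # common initialization
x = rng.standard_normal(5000 + L)
for t in range(5000):
    x_t = x[t:t + L][::-1]
    d_t = h @ x_t + 1e-3 * rng.standard_normal()
    g_hat, R_inv, _ = rls_step(g_hat, R_inv, x_t, d_t, lam)
print(20 * np.log10(np.linalg.norm(h - g_hat) / np.linalg.norm(h)))   # misalignment [dB]
```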
As compared to the LMS-based algorithms, the RLS adaptive filter provides a much faster convergence rate (especially for correlated input signals), while requiring a large computational amount, which is proportional to $\mathcal{O}\left( L^2 \right)$. Consequently, for long length filters (e.g., like in echo cancellation), the conventional RLS algorithm becomes prohibitive. On the other hand, there are several “fast” versions of this algorithm, e.g., [39,40], with a computational complexity proportional to the filter length. However, they still face the inherent challenges and limitations related to a long length filter, especially in terms of the convergence rate and tracking capability.
The main feature of the decomposition-based approach presented in the previous sections is that it reformulates the system identification problem based on a combination of shorter filters. Thus, the adaptive filter can also be decomposed similar to (5), thus resulting in
$$\hat{\mathbf{g}}(t) = \sum_{p=1}^{P} \hat{\mathbf{g}}_{2,p}(t) \otimes \hat{\mathbf{g}}_{1,p}(t), \qquad (28)$$
where $\hat{\mathbf{g}}_{1,p}(t)$ and $\hat{\mathbf{g}}_{2,p}(t)$, with $p = 1, 2, \ldots, P$, are filters of lengths $L_1$ and $L_2$, respectively; these represent the estimates of $\mathbf{h}_{1,p}$ and $\mathbf{h}_{2,p}$, respectively. Furthermore, similar to (14), a symmetric adaptive filter can be defined as
$$\hat{\mathbf{g}}^{+}(t) = \sum_{p=1}^{P} \frac{ \hat{\mathbf{g}}_{2,p}(t) \otimes \hat{\mathbf{g}}_{1,p}(t) + \hat{\mathbf{g}}_{1,p}(t) \otimes \hat{\mathbf{g}}_{2,p}(t) }{2} = \mathbf{Q} \hat{\mathbf{g}}(t), \qquad (29)$$
where the matrix $\mathbf{Q}$ is defined in (12). Following the same development as in (15), the symmetric filter from (29) can be rewritten as
$$\hat{\mathbf{g}}^{+}(t) = \sum_{p=1}^{P} \frac{ \left[ \hat{\mathbf{g}}_{2,p}(t) \otimes \mathbf{I}_{L_1} + \mathbf{I}_{L_1} \otimes \hat{\mathbf{g}}_{2,p}(t) \right] \hat{\mathbf{g}}_{1,p}(t) }{2} = \sum_{p=1}^{P} \hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{2,p}(t) \right] \hat{\mathbf{g}}_{1,p}(t) = \underline{\hat{\mathbf{G}}}_2(t) \, \underline{\hat{\mathbf{g}}}_1(t), \qquad (30)$$
where
$$\hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{2,p}(t) \right] = \frac{ \hat{\mathbf{g}}_{2,p}(t) \otimes \mathbf{I}_{L_1} + \mathbf{I}_{L_1} \otimes \hat{\mathbf{g}}_{2,p}(t) }{2}, \quad p = 1, 2, \ldots, P,$$
$$\underline{\hat{\mathbf{G}}}_2(t) = \left[ \hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{2,1}(t) \right] \;\; \hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{2,2}(t) \right] \;\; \cdots \;\; \hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{2,P}(t) \right] \right],$$
$$\underline{\hat{\mathbf{g}}}_1(t) = \left[ \hat{\mathbf{g}}_{1,1}^T(t) \;\; \hat{\mathbf{g}}_{1,2}^T(t) \;\; \cdots \;\; \hat{\mathbf{g}}_{1,P}^T(t) \right]^T. \qquad (31)$$
On the other hand, based on (16), we can express (29) as
$$\hat{\mathbf{g}}^{+}(t) = \sum_{p=1}^{P} \frac{ \left[ \mathbf{I}_{L_2} \otimes \hat{\mathbf{g}}_{1,p}(t) + \hat{\mathbf{g}}_{1,p}(t) \otimes \mathbf{I}_{L_2} \right] \hat{\mathbf{g}}_{2,p}(t) }{2} = \sum_{p=1}^{P} \hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{1,p}(t) \right] \hat{\mathbf{g}}_{2,p}(t) = \underline{\hat{\mathbf{G}}}_1(t) \, \underline{\hat{\mathbf{g}}}_2(t), \qquad (32)$$
where
$$\hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{1,p}(t) \right] = \frac{ \mathbf{I}_{L_2} \otimes \hat{\mathbf{g}}_{1,p}(t) + \hat{\mathbf{g}}_{1,p}(t) \otimes \mathbf{I}_{L_2} }{2}, \quad p = 1, 2, \ldots, P,$$
$$\underline{\hat{\mathbf{G}}}_1(t) = \left[ \hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{1,1}(t) \right] \;\; \hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{1,2}(t) \right] \;\; \cdots \;\; \hat{\mathbf{G}}\left[ \hat{\mathbf{g}}_{1,P}(t) \right] \right],$$
$$\underline{\hat{\mathbf{g}}}_2(t) = \left[ \hat{\mathbf{g}}_{2,1}^T(t) \;\; \hat{\mathbf{g}}_{2,2}^T(t) \;\; \cdots \;\; \hat{\mathbf{g}}_{2,P}^T(t) \right]^T. \qquad (33)$$
At this point, following (18) and (19), and based on (28)–(32), we can express the a priori error from (24) in two equivalent forms, i.e.,
$$\varepsilon_1(t) = d(t) - \underline{\hat{\mathbf{g}}}_1^T(t-1) \underline{\hat{\mathbf{G}}}_2^T(t-1) \mathbf{Q}^{-T} \mathbf{x}(t) = d(t) - \underline{\hat{\mathbf{g}}}_1^T(t-1) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(t) \qquad (34)$$
and
$$\varepsilon_2(t) = d(t) - \underline{\hat{\mathbf{g}}}_2^T(t-1) \underline{\hat{\mathbf{G}}}_1^T(t-1) \mathbf{Q}^{-T} \mathbf{x}(t) = d(t) - \underline{\hat{\mathbf{g}}}_2^T(t-1) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(t), \qquad (35)$$
where $\mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(t) = \underline{\hat{\mathbf{G}}}_2^T(t-1) \mathbf{Q}^{-T} \mathbf{x}(t)$ and $\mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(t) = \underline{\hat{\mathbf{G}}}_1^T(t-1) \mathbf{Q}^{-T} \mathbf{x}(t)$. Based on the expressions from (34) and (35), the cost functions of the RLS-based algorithm can be constructed. Therefore, following the least-squares (LS) optimization criterion [34,35,36], the corresponding cost functions result in
$$J_{\underline{\hat{\mathbf{g}}}_2}\left( \underline{\hat{\mathbf{g}}}_1(t) \right) = \sum_{i=1}^{t} \lambda_1^{t-i} \left[ d(i) - \underline{\hat{\mathbf{g}}}_1^T(t) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(i) \right]^2, \qquad (36)$$
$$J_{\underline{\hat{\mathbf{g}}}_1}\left( \underline{\hat{\mathbf{g}}}_2(t) \right) = \sum_{i=1}^{t} \lambda_2^{t-i} \left[ d(i) - \underline{\hat{\mathbf{g}}}_2^T(t) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(i) \right]^2, \qquad (37)$$
where $\lambda_1$ and $\lambda_2$ denote the forgetting factors, which are positive parameters smaller than (or equal to) one. These two forgetting factors can be chosen according to the corresponding filters’ lengths, i.e., $\lambda_1 = 1 - 1/(KPL_1)$ and $\lambda_2 = 1 - 1/(KPL_2)$, with $K \geq 1$.
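As a small numerical example of this parameter choice (using $K = 10$, $L_1 = 25$, and $L_2 = 20$ from the simulations in Section 5, and an assumed illustrative value $P = 3$):

```python
K, L1, L2, P = 10, 25, 20, 3      # P = 3 is an assumed (illustrative) value
L = L1 * L2
lam  = 1 - 1 / (K * L)            # conventional RLS, single filter of length L
lam1 = 1 - 1 / (K * P * L1)       # RLS-SF, filter of length P*L1
lam2 = 1 - 1 / (K * P * L2)       # RLS-SF, filter of length P*L2
print(lam, lam1, lam2)            # 0.9998, ~0.99867, ~0.99833
```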
The cost functions from (36) and (37) suggest a bilinear optimization strategy [41], i.e., one of the systems is considered fixed within the optimization criterion of the other one. First, focusing on (36) and considering that $\underline{\hat{\mathbf{g}}}_2(i)$ is fixed for $i = 1, 2, \ldots, t-1$, the corresponding cost function can be developed as
$$J_{\underline{\hat{\mathbf{g}}}_2}\left( \underline{\hat{\mathbf{g}}}_1(t) \right) = \sum_{i=1}^{t} \lambda_1^{t-i} d^2(i) - 2 \underline{\hat{\mathbf{g}}}_1^T(t) \mathbf{p}_{\underline{\hat{\mathbf{g}}}_2}(t) + \underline{\hat{\mathbf{g}}}_1^T(t) \mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}(t) \underline{\hat{\mathbf{g}}}_1(t), \qquad (38)$$
where
$$\mathbf{p}_{\underline{\hat{\mathbf{g}}}_2}(t) = \sum_{i=1}^{t} \lambda_1^{t-i} \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(i) d(i) = \lambda_1 \mathbf{p}_{\underline{\hat{\mathbf{g}}}_2}(t-1) + \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(t) d(t), \qquad (39)$$
$$\mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}(t) = \sum_{i=1}^{t} \lambda_1^{t-i} \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(i) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}^T(i) = \lambda_1 \mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}(t-1) + \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(t) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}^T(t). \qquad (40)$$
Similarly, following (37) and considering that $\underline{\hat{\mathbf{g}}}_1(i)$ is fixed for $i = 1, 2, \ldots, t-1$, the second cost function results in
$$J_{\underline{\hat{\mathbf{g}}}_1}\left( \underline{\hat{\mathbf{g}}}_2(t) \right) = \sum_{i=1}^{t} \lambda_2^{t-i} d^2(i) - 2 \underline{\hat{\mathbf{g}}}_2^T(t) \mathbf{p}_{\underline{\hat{\mathbf{g}}}_1}(t) + \underline{\hat{\mathbf{g}}}_2^T(t) \mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}(t) \underline{\hat{\mathbf{g}}}_2(t), \qquad (41)$$
where
$$\mathbf{p}_{\underline{\hat{\mathbf{g}}}_1}(t) = \sum_{i=1}^{t} \lambda_2^{t-i} \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(i) d(i) = \lambda_2 \mathbf{p}_{\underline{\hat{\mathbf{g}}}_1}(t-1) + \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(t) d(t), \qquad (42)$$
$$\mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}(t) = \sum_{i=1}^{t} \lambda_2^{t-i} \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(i) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}^T(i) = \lambda_2 \mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}(t-1) + \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(t) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}^T(t). \qquad (43)$$
Thus, the minimization of $J_{\underline{\hat{\mathbf{g}}}_2}\left( \underline{\hat{\mathbf{g}}}_1(t) \right)$ and $J_{\underline{\hat{\mathbf{g}}}_1}\left( \underline{\hat{\mathbf{g}}}_2(t) \right)$ with respect to $\underline{\hat{\mathbf{g}}}_1(t)$ and $\underline{\hat{\mathbf{g}}}_2(t)$, respectively, leads to the normal equations:
$$\mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}(t) \, \underline{\hat{\mathbf{g}}}_1(t) = \mathbf{p}_{\underline{\hat{\mathbf{g}}}_2}(t), \qquad (44)$$
$$\mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}(t) \, \underline{\hat{\mathbf{g}}}_2(t) = \mathbf{p}_{\underline{\hat{\mathbf{g}}}_1}(t). \qquad (45)$$
In order to avoid the matrix inversion in (44) and (45), we can use the matrix inversion lemma in (40) and (43), which results in the updates:
$$\mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}^{-1}(t) = \frac{1}{\lambda_1} \left[ \mathbf{I}_{PL_1} - \mathbf{k}_{\underline{\hat{\mathbf{g}}}_1}(t) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}^T(t) \right] \mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}^{-1}(t-1), \qquad (46)$$
$$\mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}^{-1}(t) = \frac{1}{\lambda_2} \left[ \mathbf{I}_{PL_2} - \mathbf{k}_{\underline{\hat{\mathbf{g}}}_2}(t) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}^T(t) \right] \mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}^{-1}(t-1), \qquad (47)$$
where $\mathbf{I}_{PL_1}$ and $\mathbf{I}_{PL_2}$ denote the identity matrices of sizes $PL_1 \times PL_1$ and $PL_2 \times PL_2$, respectively, and
$$\mathbf{k}_{\underline{\hat{\mathbf{g}}}_1}(t) = \mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}^{-1}(t) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(t) = \frac{ \mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}^{-1}(t-1) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(t) }{ \lambda_1 + \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}^T(t) \mathbf{R}_{\underline{\hat{\mathbf{g}}}_2}^{-1}(t-1) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_2}(t) }, \qquad (48)$$
$$\mathbf{k}_{\underline{\hat{\mathbf{g}}}_2}(t) = \mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}^{-1}(t) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(t) = \frac{ \mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}^{-1}(t-1) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(t) }{ \lambda_2 + \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}^T(t) \mathbf{R}_{\underline{\hat{\mathbf{g}}}_1}^{-1}(t-1) \mathbf{x}_{\underline{\hat{\mathbf{g}}}_1}(t) } \qquad (49)$$
are the corresponding Kalman gain vectors. Consequently, the filters’ updates become
$$\underline{\hat{\mathbf{g}}}_1(t) = \underline{\hat{\mathbf{g}}}_1(t-1) + \mathbf{k}_{\underline{\hat{\mathbf{g}}}_1}(t) \varepsilon_1(t), \qquad (50)$$
$$\underline{\hat{\mathbf{g}}}_2(t) = \underline{\hat{\mathbf{g}}}_2(t-1) + \mathbf{k}_{\underline{\hat{\mathbf{g}}}_2}(t) \varepsilon_2(t). \qquad (51)$$
Finally, the components of the vectors $\underline{\hat{\mathbf{g}}}_1(t)$ and $\underline{\hat{\mathbf{g}}}_2(t)$ [see (31) and (33)] are used in (29) to obtain the symmetric filter $\hat{\mathbf{g}}^{+}(t)$, and then the estimate $\hat{\mathbf{g}}(t)$ results immediately. Nevertheless, we should note that in some applications (like echo cancellation), the coefficients of the adaptive filter are not really required, because only the error signal is needed. In such cases, the evaluation of $\hat{\mathbf{g}}(t)$ based on (29) is not necessary, and the adaptive filter provides only the a priori error signal $\varepsilon(t) = \varepsilon_1(t) = \varepsilon_2(t)$.
The proposed RLS algorithm with symmetric filter (RLS-SF) is summarized in Table 1. As we can see, the RLS-SF combines the solutions of two adaptive filters of much smaller lengths (i.e., $PL_1$ and $PL_2$), as compared to the conventional RLS algorithm, which updates a single adaptive filter of length $L = L_1 L_2$. Because usually $P \ll L_2$, the expected gain of the RLS-SF is twofold, in terms of both performance (i.e., convergence rate/tracking) and computational complexity, which is proportional to $\mathcal{O}\left[ (PL_1)^2 + (PL_2)^2 \right]$. Simulation results provided in Section 5.2 support these performance features.
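A minimal NumPy sketch of one possible implementation of the RLS-SF recursion (34)–(51), run on a small synthetic example, is given below; the dimensions are toy values chosen so that $\mathbf{Q}$ is nonsingular, and the initialization and variable names are arbitrary choices of this sketch rather than prescriptions from the paper.

```python
import numpy as np

def commutation_matrix(L1, L2):
    # P_{L1L2} from Appendix A: kron(a, b) = P @ kron(b, a)
    P = np.zeros((L1 * L2, L1 * L2))
    for l1 in range(L1):
        for l2 in range(L2):
            J = np.outer(np.eye(L1)[:, l1], np.eye(L2)[:, l2])
            P += np.kron(J, J.T)
    return P

def G2_bar(g2, L1):   # cf. (30)-(31)
    return np.hstack([0.5 * (np.kron(g.reshape(-1, 1), np.eye(L1))
                             + np.kron(np.eye(L1), g.reshape(-1, 1))) for g in g2])

def G1_bar(g1, L2):   # cf. (32)-(33)
    return np.hstack([0.5 * (np.kron(np.eye(L2), g.reshape(-1, 1))
                             + np.kron(g.reshape(-1, 1), np.eye(L2))) for g in g1])

# --- toy dimensions and parameters ---
rng = np.random.default_rng(3)
L1, L2, P_terms, K = 4, 3, 2, 10
L = L1 * L2
lam1, lam2 = 1 - 1 / (K * P_terms * L1), 1 - 1 / (K * P_terms * L2)
Q_invT = np.linalg.inv(0.5 * (np.eye(L) + commutation_matrix(L1, L2))).T

h = sum(np.kron(rng.standard_normal(L2), rng.standard_normal(L1)) for _ in range(P_terms))

# --- state of the RLS-SF: short filters and inverse correlation matrices ---
g1 = [0.1 * rng.standard_normal(L1) for _ in range(P_terms)]
g2 = [0.1 * rng.standard_normal(L2) for _ in range(P_terms)]
R1_inv = 1e2 * np.eye(P_terms * L1)   # for the g1 system, cf. (46)
R2_inv = 1e2 * np.eye(P_terms * L2)   # for the g2 system, cf. (47)

x = rng.standard_normal(10000 + L)
for t in range(10000):
    x_t = x[t:t + L][::-1]
    d_t = h @ x_t + 1e-2 * rng.standard_normal()
    z_t = Q_invT @ x_t                     # transformed input
    xg2 = G2_bar(g2, L1).T @ z_t           # x_{g2}(t), length P*L1
    xg1 = G1_bar(g1, L2).T @ z_t           # x_{g1}(t), length P*L2
    g1_vec, g2_vec = np.concatenate(g1), np.concatenate(g2)
    e1 = d_t - g1_vec @ xg2                # a priori error (34)
    e2 = d_t - g2_vec @ xg1                # a priori error (35)
    k1 = R1_inv @ xg2 / (lam1 + xg2 @ R1_inv @ xg2)            # (48)
    k2 = R2_inv @ xg1 / (lam2 + xg1 @ R2_inv @ xg1)            # (49)
    R1_inv = (R1_inv - np.outer(k1, xg2 @ R1_inv)) / lam1      # (46)
    R2_inv = (R2_inv - np.outer(k2, xg1 @ R2_inv)) / lam2      # (47)
    g1_vec = g1_vec + k1 * e1                                  # (50)
    g2_vec = g2_vec + k2 * e2                                  # (51)
    g1 = [g1_vec[p * L1:(p + 1) * L1] for p in range(P_terms)]
    g2 = [g2_vec[p * L2:(p + 1) * L2] for p in range(P_terms)]

g_hat = sum(np.kron(b2, a1) for a1, b2 in zip(g1, g2))         # estimate of h, as in (28)
print(20 * np.log10(np.linalg.norm(h - g_hat) / np.linalg.norm(h)))   # misalignment [dB]
```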
The computational complexities of the proposed algorithms developed in Section 3 and Section 4 are similar to those proposed in [16,17], respectively. Basically, in both approaches, the original system identification problem of length $L = L_1 L_2$ is reformulated as a combination of two shorter filters of lengths $PL_1$ and $PL_2$, where usually $P \ll L_2$. However, although the solutions developed in [16,17] follow the direct approach based on (5), the current proposed algorithms exploit (11), where the motivation is mainly twofold. First, the intrinsic characteristics of the resulting impulse response can be exploited (i.e., the particular symmetry feature) within the identification process. Second, using the matrix $\mathbf{Q}$ could lead to potential benefits in terms of further exploiting the low-rank features. This is different as compared to the approach used in [16,17], and also represents a step toward our future works on this topic. The final goal would be to further reduce the value of $P$, which represents an important gain, especially for the identification of acoustic impulse responses.

5. Simulation Results

Simulations are performed in the framework of echo cancellation [2,3]. The reference (or desired) signal d ( t ) is obtained based on (1), using different network and acoustic impulse responses of length L = 500 ; the sampling rate is 8 kHz. The input signal x ( t ) is either an AR(1) process (obtained by filtering a white Gaussian noise through a first-order autoregressive model with a pole at 0.9) or a speech sequence. The output of the echo path (i.e., the echo signal) is corrupted by a background noise, w ( t ) , which is considered white and Gaussian (with different SNRs). All the experiments were performed by using MATLAB R2018b, running on an Asus GL552VX device (Windows 10 OS), which has an Intel Core i7-6700HQ CPU @ 2.60 GHz, with 4 Cores, 8 Logical Processors, and 16 GB of RAM.
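A minimal sketch of this simulation setup (AR(1) input with a pole at 0.9 and a noisy desired signal scaled to a target SNR) is shown below; the echo path used here is a synthetic placeholder, since the actual network/acoustic impulse responses of Figure 1 are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, snr_db = 500, 40000, 20
h = rng.standard_normal(L) * np.exp(-0.01 * np.arange(L))   # placeholder echo path (illustrative only)

# AR(1) input: white Gaussian noise filtered through 1 / (1 - 0.9 z^-1)
w_in = rng.standard_normal(N)
x = np.empty(N)
x[0] = w_in[0]
for t in range(1, N):
    x[t] = 0.9 * x[t - 1] + w_in[t]

# echo signal y(t) = h^T x(t) and noisy desired signal d(t), cf. (1)
y = np.convolve(x, h)[:N]
sigma_w = np.sqrt(np.var(y) / 10 ** (snr_db / 10))          # scale the noise for the target SNR, cf. (2)
d = y + sigma_w * rng.standard_normal(N)
```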
Four echo paths are used in simulations and their corresponding impulse responses are depicted in Figure 1. The impulse response from Figure 1a is the first network echo path from the ITU-T G168 Recommendation [42], which is a cluster of 64 coefficients padded with zeros up to the length L = 500 . The second impulse response from Figure 1b is obtained in a similar manner, by using the fifth network echo path from the same recommendation, i.e., a cluster of 96 coefficients padded with zeros. The impulse responses from Figure 1c,d are two measured acoustic echo paths, with different sparseness degrees.
In order to support the idea behind (5), Figure 2 shows the singular values of $\mathbf{H}$ (normalized with respect to the maximum one), which is the equivalent matrix representation (of size $L_1 \times L_2$) of the impulse response $\mathbf{h}$. In our experiments, because $L = 500$, we set $L_1 = 25$ and $L_2 = 20$. The faster these singular values decrease to zero, the farther the column rank of $\mathbf{H}$ is from its maximum value. As we can see in Figure 2a,b, the network echo paths from Figure 1a,b can be very well modeled based on (5), because the corresponding singular values of $\mathbf{H}$ decrease very quickly to zero, so that $P \ll L_2$ in these cases. On the other hand, according to Figure 2c,d, a larger value of $P$ is required in case of the acoustic echo paths from Figure 1c,d. Nevertheless, even in this case, the expected value of $P$ is reasonably smaller as compared to $L_2$.

5.1. Performance of the Iterative Wiener Filter

In this first subsection, the performance of the proposed iterative Wiener filter (developed in Section 3) is investigated, as compared to the conventional Wiener filter. The performance measure is the normalized misalignment (in dB), which is evaluated as $20 \log_{10} \left( \left\| \mathbf{h} - \mathbf{g}_{\mathrm{W}} \right\| / \left\| \mathbf{h} \right\| \right)$, where $\| \cdot \|$ denotes the Euclidean norm. In the case of the iterative Wiener filter, the estimate of the impulse response is obtained as explained at the end of Section 3, while the conventional Wiener solution results in $\mathbf{g}_{\mathrm{W}} = \mathbf{R}_{\mathbf{x}}^{-1} \mathbf{r}_{\mathbf{x}d}$.
The statistics $\mathbf{R}_{\mathbf{x}}$ and $\mathbf{r}_{\mathbf{x}d}$ are required within both the conventional and iterative Wiener filters. These can be estimated by averaging over $N$ available data samples of $x(t)$ and $d(t)$, so that the covariance matrix can be evaluated as $(1/N) \sum_{t=1}^{N} \mathbf{x}(t) \mathbf{x}^T(t)$, whereas the cross-correlation vector results in $(1/N) \sum_{t=1}^{N} \mathbf{x}(t) d(t)$. In the experiments reported in this subsection, the input signal is an AR(1) process with a pole at 0.9. The accuracy of the statistics’ estimates reflects the accuracy of the Wiener solution, so that a higher value of $N$ is desirable, i.e., $N \gg L$. However, in practice, only a limited set of data could be available, which raises significant challenges. Because the proposed iterative Wiener filter combines the solution of two shorter filters (of lengths $PL_1$ and $PL_2$, with $P \ll L_2$), an improved accuracy of the solution is expected in such challenging scenarios, as outlined at the end of Section 3.
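The following short NumPy sketch illustrates this estimation of the statistics and the resulting conventional Wiener solution, together with the normalized misalignment used as the performance measure; the dimensions are reduced toy values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 64, 640                      # reduced toy dimensions (the paper uses L = 500)
h = rng.standard_normal(L)

# AR(1) input with a pole at 0.9 and a noisy desired signal, as in the simulation setup
x = np.empty(N + L)
x[0] = rng.standard_normal()
for t in range(1, N + L):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

X = np.stack([x[t:t + L][::-1] for t in range(N)])   # rows: [x(t) x(t-1) ... x(t-L+1)]
d = X @ h + 0.1 * rng.standard_normal(N)

R_x  = X.T @ X / N                                   # (1/N) sum x(t) x^T(t)
r_xd = X.T @ d / N                                   # (1/N) sum x(t) d(t)

g_W = np.linalg.solve(R_x, r_xd)                     # conventional Wiener solution
mis = 20 * np.log10(np.linalg.norm(h - g_W) / np.linalg.norm(h))
print("normalized misalignment [dB]:", mis)
```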
In the first experiment, we consider that a large amount of data is available (i.e., $N = 10L = 5000$ samples) for the identification of the network echo paths from Figure 1a,b. In addition, we set $\mathrm{SNR} = 20$ dB, so that the influence of the external noise on the accuracy of the statistics’ estimates is mild. The results are provided in Figure 3, comparing the conventional Wiener filter with the solution obtained by the proposed iterative algorithm using different values of $P$. As we can notice, the iterative Wiener filter with $P = L_2/10 = 2$ is able to match the accuracy of the conventional Wiener solution, whereas for $P = 3$ it even outperforms its conventional counterpart. This supports the discussion related to Figure 2a,b, where the singular values of $\mathbf{H}$ decrease very quickly to zero, so that the rank of this matrix is much smaller than $L_2 = 20$.
A similar experiment is performed in Figure 4, for the identification of the acoustic echo paths from Figure 1c,d. The other conditions remain the same, i.e., $N = 5000$ and $\mathrm{SNR} = 20$ dB. As previously indicated in Figure 2c,d, the rank of the matrix $\mathbf{H}$ is larger for the acoustic impulse responses. Nevertheless, a reasonably small value of $P$ (as compared to $L_2$) can still be used even in such scenarios. The results provided in Figure 4 show that a value of $P < L_2/2$ is sufficient for the iterative Wiener filter to achieve the accuracy of the conventional Wiener solution. In addition, a reasonable attenuation of the misalignment is obtained by the iterative algorithm even for a smaller value of its specific parameter, e.g., $P < L_2/3$.
The performance gain of the iterative Wiener filter becomes more apparent in challenging scenarios, e.g., for a small value of N and/or in a low SNR environment [43]. It is known that the accuracy of the conventional Wiener filter is highly influenced in such adverse conditions [34,35,36]. In the following experiments from this subsection, we consider both these challenges, by using only N = 1000 data samples to estimate the statistics and setting SNR = 10 dB.
In this framework, the identification of the network echo paths from Figure 1a,b is analyzed in Figure 5. As we can see, the conventional Wiener filter is not able to provide an accurate solution in this case. On the other hand, the iterative Wiener filter using $P \ll L_2$ still achieves a reasonable attenuation of the misalignment. Moreover, it can be remarked that the iterative algorithm, even with $P = 1$, clearly outperforms its conventional counterpart. These performances are also supported in Figure 6, where the estimated impulse responses provided by the conventional Wiener filter and the proposed iterative version (using $P = 2$) are depicted, as compared to the true network impulse responses, corresponding to the previous experiment from Figure 5. As we can see, the iterative Wiener filter is able to obtain more accurate estimates as compared to its conventional counterpart.
A similar behavior can be observed in Figure 7 and Figure 8, where the identification of the acoustic echo paths from Figure 1c,d is assessed. As we can see in Figure 7, even if a larger value of $P$ should be used here (as compared to the previous experiment from Figure 5), this value is much smaller as compared to $L_2 = 20$ (e.g., $P = L_2/4$ or $L_2/5$). In fact, the conventional Wiener filter is outperformed by the proposed iterative algorithm even for very small values of $P$ (e.g., $P = 3$). Finally, it is important to outline that smaller values of $P$ can be used in these difficult scenarios (presented in Figure 5 and Figure 7), as compared to the experiments reported in Figure 3 and Figure 4, which were performed in more favorable conditions (i.e., $N \gg L$ and good SNR). Figure 8 also supports the performance gain, showing the estimated acoustic impulse responses obtained with the conventional and iterative versions of the Wiener filter. In this figure, the iterative Wiener filter uses $P = 4$, whereas the other conditions are the same as in the experiment reported in Figure 7. It can be noticed that more accurate estimates can be obtained by using the proposed iterative Wiener filter, as compared to the solutions provided by the conventional Wiener benchmark.

5.2. Performance of the RLS-SF

This subsection is dedicated to the proposed RLS-SF (from Section 4), which is compared in the following to its conventional counterpart, i.e., the RLS adaptive filter [34,35,36]. The performance measure is the normalized misalignment (in dB), which is now evaluated as $20 \log_{10} \left( \left\| \mathbf{h} - \hat{\mathbf{g}}(t) \right\| / \left\| \mathbf{h} \right\| \right)$, where $\hat{\mathbf{g}}(t)$ represents the adaptive filter estimate at time index $t$. In order to assess the tracking capabilities of the algorithms, an abrupt change of the echo path is introduced in most of the following experiments, by changing the impulse response in the middle of the simulation.
In Figure 9, the identification of the network echo paths from Figure 1a,b is evaluated. Initially, the impulse response from Figure 1a is used for the first 2.5 s. Then, there is an abrupt change of the system and the impulse response from Figure 1b is involved. The forgetting factors of the algorithms are set in order to achieve the same misalignment level, so that we can focus only on the comparisons related to the convergence rate and tracking behavior. Consequently, the conventional RLS adaptive filter uses $\lambda = 1 - 1/(KL)$, whereas the parameters of the RLS-SF are similarly set (according to the filters’ lengths) to $\lambda_1 = 1 - 1/(KPL_1)$ and $\lambda_2 = 1 - 1/(KPL_2)$, with $K = 10$, $L = 500$, $L_1 = 25$, and $L_2 = 20$. As we can see in Figure 9, the initial convergence rate is almost the same for all the algorithms, but the tracking capability of the RLS-SF is much better as compared to the conventional benchmark, for all the values of $P$ that are used in this simulation. Most importantly, we should note that all these values satisfy the condition $P \ll L_2$, which leads to a significant gain in terms of the computational complexity. As expected, for a value of $P$ smaller than the rank of the matrix $\mathbf{H}$ (e.g., $P = 2$), the RLS-SF pays with a higher misalignment level as compared to the conventional RLS algorithm. However, even in this case, it fully compensates in terms of the computational complexity and tracking capability.
These results suggest another strategy for compromising between the main performance criteria of the RLS-SF. Because its tracking behavior is better as compared to the conventional RLS algorithm, we could increase the forgetting factors of the RLS-SF, targeting a lower misalignment, while paying with a slightly reduced tracking capability. On the other hand, the forgetting factor of the conventional RLS algorithm could be reduced in order to improve its tracking capability, but this leads to a higher misalignment level. Such a scenario is considered in Figure 10, where the conditions are similar to the previous experiment from Figure 9, but the forgetting factors are set as discussed before. Consequently, the conventional RLS algorithm uses a smaller forgetting factor, i.e., $\lambda = 1 - 1/[(K/2)L]$, whereas the forgetting factors of the RLS-SF are increased to $\lambda_1 = 1 - 1/(2KPL_1)$ and $\lambda_2 = 1 - 1/(2KPL_2)$, with $K = 10$. As expected, the results obtained in Figure 10 show that the RLS-SF outperforms the conventional RLS algorithm in terms of accuracy, reaching a lower misalignment level for most of the values of $P$ used in this experiment. Even when using the smallest value $P = 2$, the proposed algorithm is able to match the misalignment of the conventional RLS filter. In addition, the tracking capabilities of the RLS-SF are generally still better as compared to its conventional counterpart.
The next experiment considers a more challenging scenario, wherein the input signal is a speech signal and the SNR is reduced to 10 dB. The other settings remain the same as in the previous simulation. As we can see in Figure 11, the RLS-SF outperforms the conventional RLS algorithm in terms of the main performance criteria, for all the values of $P \ll L_2$. This behavior also supports the results reported in the previous subsection (for the iterative Wiener filter), which indicate that the proposed decomposition-based approach with the symmetric filters performs better (as compared to the conventional benchmarks) especially in challenging conditions, e.g., in noisy environments. The normalized misalignment represents the accuracy of the solution provided by the adaptive filter, i.e., the distance between the true impulse response of the system and its estimate. Consequently, the misalignment level is influenced by the SNR. This can be seen when comparing the results from Figure 10 and Figure 11, where most of the conditions are similar, but the SNRs are different. While in Figure 10 the misalignment range is between −30 and −20 dB, in Figure 11 we can see a higher misalignment level, i.e., around −15 to −10 dB. Nevertheless, in both scenarios, the proposed RLS-SF reaches lower misalignment levels as compared to the conventional RLS filter.
The performance gain is also reflected in the accuracy of the estimated acoustic impulse responses (obtained by the conventional RLS filter and the RLS-SF), as indicated in Figure 12. In this figure, the RLS-SF uses P = 3 , whereas the other conditions are the same as in the experiment reported in Figure 11. As we can see, the accuracy of the estimate provided by the RLS-SF is superior to the estimate of the conventional RLS filter. Even if the estimated impulse response of the echo path is not really required in echo cancellation (where the error signal is transmitted back to the far-end), the results from Figure 12 validate the performance features of the proposed RLS-SF.
The last set of simulations focuses on the identification of the acoustic echo paths from Figure 1c,d. Because the background noise is usually considered stronger in the context of acoustic echo cancellation, while it is also amplified by the microphone of the hands-free device [2], we set $\mathrm{SNR} = 10$ dB in all the following experiments. In the simulation provided in Figure 13, the input signal is an AR(1) process and the acoustic impulse response from Figure 1c is considered for the first 2.5 s; then, the echo path abruptly changes into the impulse response from Figure 1d. The forgetting factors of the conventional RLS filter and the proposed RLS-SF are set to $\lambda = 1 - 1/(KL)$ and $\lambda_j = 1 - 1/(KPL_j)$, $j = 1, 2$, respectively. Because the corresponding matrix $\mathbf{H}$ of the acoustic impulse response is closer to full rank, higher values of $P$ are required in this framework. Nevertheless, as we can see in Figure 13, the RLS-SF using a value of $P$ reasonably smaller as compared to $L_2$ can reach a misalignment level close to that of the conventional RLS algorithm. On the other hand, the tracking capability of the RLS-SF (for all the values of $P$) is better as compared to its conventional counterpart.
In addition, we can further tune the forgetting factors of the RLS-SF in order to improve its accuracy (i.e., to reach a lower misalignment level), while reducing the forgetting factor of the conventional RLS algorithm for a faster tracking. Such a scenario is considered in Figure 14, where these parameters are set to $\lambda_j = 1 - 1/(2KPL_j)$, $j = 1, 2$, and $\lambda = 1 - 1/[(K/2)L]$, respectively, by using $K = 10$. The other conditions are the same as in the previous experiment reported in Figure 13. As we can see in Figure 14, the initial convergence rate and the tracking reaction are basically the same for all the algorithms, but the proposed RLS-SF (for all the values of $P$) outperforms the conventional RLS filter in terms of the accuracy of its solution, achieving a lower misalignment level.
Finally, in Figure 15, the robustness of the algorithms is assessed. To this purpose, we consider a variation of the background noise at time 2.5 s, for a period of 0.5 s, by reducing the SNR from 10 dB to 0 dB. Such variations can often happen in acoustic environments, e.g., due to different movements of objects and bodies, the presence of ventilation systems or other electrical devices, etc. In this experiment, the input signal is a speech sequence and the acoustic echo path from Figure 1c is considered. The forgetting factors of the algorithms are set as in the previous simulation (from Figure 14). It is interesting to note that the RLS-SF shows the same robustness features regardless of the value of $P$. Most importantly, the proposed algorithm reaches a lower misalignment level as compared to the conventional RLS benchmark.
The estimated acoustic impulse responses corresponding to the previous experiment are depicted in Figure 16, where the RLS-SF uses P = 8 . The results are compared to the true impulse response of the acoustic echo path from Figure 1c. It can be seen that a better accuracy of the estimate is obtained by the RLS-SF, as compared to the conventional RLS counterpart.

6. Discussion

The motivation behind the approach discussed in this paper was mainly twofold. First, we have shown how the intrinsic characteristics of the resulting impulse response can be exploited (i.e., the particular symmetry feature) within the identification process. As mentioned before, the identification of linearly separable systems that possess particular intrinsic symmetric/antisymmetric properties has been addressed in our recent works [29,30]. In this paper, as compared to these previous works, we did not limit the current solution to the identification of linearly separable systems, but extended it to more general forms of impulse responses. Second, using the matrix $\mathbf{Q}$ within the identification process could represent a potential benefit in terms of further revealing and exploiting the low-rank features. This is different as compared to the approach used in [16,17], and also represents a step toward other potential future works on this topic. The final goal is to further reduce the value of $P$, which represents an important gain, especially for the identification of acoustic impulse responses.
The preliminary findings in this direction indicate promising results. For example, let us consider that we are able to find a matrix $\tilde{\mathbf{Q}}$, such that the coefficients of the impulse response $\tilde{\mathbf{Q}} \mathbf{h}$ result in the descending order of their absolute values. Also, let us focus on the singular values of the corresponding matrices, i.e., $\mathbf{H} = \mathrm{ivec}(\mathbf{h})$ and $\tilde{\mathbf{H}} = \mathrm{ivec}\left( \tilde{\mathbf{Q}} \mathbf{h} \right)$, where $\mathrm{ivec}(\cdot)$ is the inverse vectorization operation, which converts a vector of length $L = L_1 L_2$ into a matrix of size $L_1 \times L_2$. In the case of acoustic impulse responses, these matrices are closer to full rank. However, the faster their singular values decrease to zero, the better is the approximation that justifies (5). In order to provide a supporting example, the singular values of $\mathbf{H}$ and $\tilde{\mathbf{H}}$ (normalized with respect to the maximum one), together with the corresponding impulse responses, are represented in Figure 17, considering the acoustic impulse response $\mathbf{h}$ from Figure 1c. Clearly, a smaller value of $P$ can be used for the approximation of the permuted impulse response from Figure 17b, because the singular values of $\tilde{\mathbf{H}}$ decrease faster to zero, as compared to the singular values of $\mathbf{H}$.
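The reordering idea can be prototyped in a few lines: sorting the coefficients by decreasing magnitude corresponds to applying a permutation matrix $\tilde{\mathbf{Q}}$, after which the normalized singular values of $\mathbf{H}$ and $\tilde{\mathbf{H}}$ can be compared. The impulse response below is a rough synthetic stand-in, so the strength of the effect will differ from that of the measured acoustic echo path shown in Figure 17.

```python
import numpy as np

def normalized_singular_values(h, L1, L2):
    H = h.reshape((L1, L2), order="F")     # ivec: column-major reshape, so that vec(H) = h
    s = np.linalg.svd(H, compute_uv=False)
    return s / s[0]

rng = np.random.default_rng(2)
L1, L2 = 25, 20
# a crude stand-in for an acoustic-like impulse response: decaying noisy tail
h = rng.standard_normal(L1 * L2) * np.exp(-0.01 * np.arange(L1 * L2))

order = np.argsort(-np.abs(h))             # permutation sorting |h| in descending order
h_tilde = h[order]                         # i.e., h_tilde = Q_tilde @ h for a permutation Q_tilde

print(normalized_singular_values(h, L1, L2)[:6])
print(normalized_singular_values(h_tilde, L1, L2)[:6])   # compare the decay of the two sets
```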
The previous discussion outlined the main features of the proposed solutions, but it also represents a motivation for future investigations on this approach. This could further lead to potential performance improvements in the framework of low-rank system identification problems, by fully exploiting the intrinsic characteristics and the specific nature of the acoustic impulse responses.

7. Conclusions and Future Works

In this paper, we have followed an efficient approach for linear system identification problems, which basically involves three key items, i.e., (i) the nearest Kronecker product decomposition of the impulse responses, (ii) low-rank approximations, and (iii) a particular symmetric filter. Such an approach fits very well for the identification of long-length impulse responses, which raise significant challenges in many real-world system identification problems.
In this framework, we have developed an iterative Wiener filter, which is able to outperform the conventional Wiener filter (in terms of the accuracy of its solution), especially when only a small amount of data is available to estimate the required statistics and/or in noisy conditions (e.g., low SNRs). For example, when N = 2 L data samples are available and SNR = 10 dB, the conventional Wiener filter fails to provide an accurate solution (i.e., its normalized misalignment is close to 0 dB), while the proposed iterative version provides a reasonable attenuation of the misalignment (around −10 dB).
In addition, an adaptive solution has also been proposed, which results in an RLS algorithm with symmetric filter, namely the RLS-SF. This new adaptive filter provides improved performance features as compared to the conventional RLS benchmark, such as an improved tracking capability and a lower computational complexity. Moreover, the RLS-SF leads to more accurate solutions as compared to the conventional RLS filter, which is reflected in lower misalignment levels (e.g., up to 5–10 dB improvement, depending on the SNR).
The proposed algorithms have been tested in the framework of network and acoustic echo cancellation, which represent very challenging system identification problems. Simulation results have indicated the good performance of the iterative Wiener filter and the RLS-SF algorithm, also supporting the theoretical findings behind their development.
In future works, we aim to improve the current solutions based on a different selection of the permutation matrix, which would further allow to better exploit the low-rank features of the systems to be identified. In addition, the development of a Kalman filter based on this approach will also be considered, which could open the path toward other potential applications, e.g., see [44,45], and the references therein.

Author Contributions

Conceptualization, I.-D.F.; methodology, J.B.; validation, L.-M.D.; investigation, C.P.; formal analysis, S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant of the Romanian Ministry of Education and Research, CNCS–UEFISCDI, project number: PN-III-P1-1.1-PD-2019-0340.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Let us define the $L_1 \times L_2$ rectangular matrices:
$$\mathbf{J}_{l_1 l_2} = \mathbf{i}_{L_1, l_1} \mathbf{i}_{L_2, l_2}^T, \qquad (\mathrm{A}1)$$
for $l_1 = 1, 2, \ldots, L_1$ and $l_2 = 1, 2, \ldots, L_2$, where $\mathbf{i}_{L_1, l_1}$ and $\mathbf{i}_{L_2, l_2}$ are the $l_1$th and $l_2$th columns of the identity matrices $\mathbf{I}_{L_1}$ (of size $L_1 \times L_1$) and $\mathbf{I}_{L_2}$ (of size $L_2 \times L_2$), respectively. It can be verified that the $L \times L$ matrix:
$$\mathbf{P}_{L_1 L_2} = \sum_{l_1=1}^{L_1} \sum_{l_2=1}^{L_2} \mathbf{J}_{l_1 l_2} \otimes \mathbf{J}_{l_1 l_2}^T = \begin{bmatrix} \mathbf{J}_{11}^T & \mathbf{J}_{12}^T & \cdots & \mathbf{J}_{1 L_2}^T \\ \mathbf{J}_{21}^T & \mathbf{J}_{22}^T & \cdots & \mathbf{J}_{2 L_2}^T \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{J}_{L_1 1}^T & \mathbf{J}_{L_1 2}^T & \cdots & \mathbf{J}_{L_1 L_2}^T \end{bmatrix} \qquad (\mathrm{A}2)$$
is, indeed, a permutation matrix.
Now, let us show that (10) holds. Let
$$\mathbf{h}_{1,p} = \left[ h_{1,p,1} \;\; h_{1,p,2} \;\; \cdots \;\; h_{1,p,L_1} \right]^T = \sum_{l_1=1}^{L_1} h_{1,p,l_1} \mathbf{i}_{L_1,l_1}, \quad \mathbf{h}_{2,p} = \left[ h_{2,p,1} \;\; h_{2,p,2} \;\; \cdots \;\; h_{2,p,L_2} \right]^T = \sum_{l_2=1}^{L_2} h_{2,p,l_2} \mathbf{i}_{L_2,l_2}.$$
We have
$$\begin{aligned} \mathbf{h}_{1,p} \otimes \mathbf{h}_{2,p} &= \mathrm{vec}\left( \mathbf{h}_{2,p} \mathbf{h}_{1,p}^T \right) = \mathrm{vec}\left( \sum_{l_1, l_2} h_{2,p,l_2} h_{1,p,l_1} \mathbf{i}_{L_2,l_2} \mathbf{i}_{L_1,l_1}^T \right) \\ &= \sum_{l_1, l_2} \mathrm{vec}\left( \mathbf{i}_{L_2,l_2} h_{1,p,l_1} h_{2,p,l_2} \mathbf{i}_{L_1,l_1}^T \right) = \sum_{l_1, l_2} \mathrm{vec}\left( \mathbf{i}_{L_2,l_2} \mathbf{i}_{L_1,l_1}^T \mathbf{h}_{1,p} \mathbf{h}_{2,p}^T \mathbf{i}_{L_2,l_2} \mathbf{i}_{L_1,l_1}^T \right) \\ &= \sum_{l_1, l_2} \mathrm{vec}\left( \mathbf{J}_{l_1 l_2}^T \mathbf{h}_{1,p} \mathbf{h}_{2,p}^T \mathbf{J}_{l_1 l_2}^T \right) = \sum_{l_1, l_2} \left( \mathbf{J}_{l_1 l_2} \otimes \mathbf{J}_{l_1 l_2}^T \right) \mathrm{vec}\left( \mathbf{h}_{1,p} \mathbf{h}_{2,p}^T \right) \\ &= \mathbf{P}_{L_1 L_2} \left( \mathbf{h}_{2,p} \otimes \mathbf{h}_{1,p} \right). \end{aligned}$$

References

  1. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  2. Gay, S.L.; Benesty, J. (Eds.) Acoustic Signal Processing for Telecommunication; Kluwer Academic Publisher: Boston, MA, USA, 2000. [Google Scholar]
  3. Benesty, J.; Gänsler, T.; Morgan, D.R.; Sondhi, M.M.; Gay, S.L. Advances in Network and Acoustic Echo Cancellation; Springer: Berlin, Germany, 2001. [Google Scholar]
  4. Rupp, M.; Schwarz, S. A tensor LMS algorithm. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, Australia, 19–24 April 2015; pp. 3347–3351. [Google Scholar]
  5. Rupp, M.; Schwarz, S. Gradient-based approaches to learn tensor products. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 2486–2490. [Google Scholar]
  6. Gesbert, D.; Duhamel, P. Robust blind joint data/channel estimation based on bilinear optimization. In Proceedings of the 8th Workshop on Statistical Signal and Array Processing, Corfu, Greece, 24–26 June 1996; pp. 168–171. [Google Scholar]
  7. Da Costa, M.N.; Favier, G.; Romano, J.M.T. Tensor modelling of MIMO communication systems with performance analysis and Kronecker receivers. Signal Process. 2018, 145, 304–316. [Google Scholar] [CrossRef] [Green Version]
  8. Stenger, A.; Kellermann, W. Adaptation of a memoryless preprocessor for nonlinear acoustic echo cancelling. Signal Process. 2000, 80, 1747–1760. [Google Scholar] [CrossRef]
  9. Huang, Y.; Skoglund, J.; Luebs, A. Practically efficient nonlinear acoustic echo cancellers using cascaded block RLS and FLMS adaptive filters. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 570–596. [Google Scholar]
  10. Cichocki, A.; Zdunek, R.; Pan, A.H.; Amari, S. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multiway Data Analysis and Blind Source Separation; Wiley: Chichester, UK, 2009. [Google Scholar]
  11. Boussé, M.; Debals, O.; Lathauwer, L.D. A tensor-based method for large-scale blind source separation using segmentation. IEEE Trans. Signal Process. 2017, 65, 346–358. [Google Scholar] [CrossRef]
  12. Benesty, J.; Cohen, I.; Chen, J. Array Processing–Kronecker Product Beamforming; Springer: Cham, Switzerland, 2019. [Google Scholar]
  13. Ribeiro, L.N.; de Almeida, A.L.F.; Mota, J.C.M. Separable linearly constrained minimum variance beamformers. Signal Process. 2019, 158, 15–25. [Google Scholar] [CrossRef]
  14. Vasilescu, M.A.O.; Kim, E. Compositional hierarchical tensor factorization: Representing hierarchical intrinsic and extrinsic causal factors. arXiv 2019, arXiv:1911.04180. [Google Scholar]
  15. Vasilescu, M.A.O.; Kim, E.; Zeng, X.S. CausalX: Causal eXplanations and block multilinear factor analysis. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; p. 8. [Google Scholar]
  16. Paleologu, C.; Benesty, J.; Ciochină, S. Linear system identification based on a Kronecker product decomposition. IEEE/ACM Trans. Audio Speech Lang. Process. 2018, 26, 1793–1808. [Google Scholar] [CrossRef]
  17. Elisei-Iliescu, C.; Paleologu, C.; Benesty, J.; Stanciu, C.; Anghel, C.; Ciochină, S. Recursive least-squares algorithms for the identification of low-rank systems. IEEE/ACM Trans. Audio Speech Lang. Process. 2019, 27, 903–918. [Google Scholar] [CrossRef]
  18. Bhattacharjee, S.S.; George, N.V. Nearest Kronecker product decomposition based normalized least mean square algorithm. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 476–480. [Google Scholar]
  19. Bhattacharjee, S.S.; Kumar, K.; George, N.V. Nearest Kronecker product decomposition based generalized maximum correntropy and generalized hyperbolic secant robust adaptive filters. IEEE Signal Process. Lett. 2020, 27, 1525–1529. [Google Scholar] [CrossRef]
  20. Wang, X.; Huang, G.; Benesty, J.; Chen, J.; Cohen, I. Time difference of arrival estimation based on a Kronecker product decomposition. IEEE Signal Process. Lett. 2021, 28, 51–55. [Google Scholar] [CrossRef]
  21. Yang, W.; Huang, G.; Chen, J.; Benesty, J.; Cohen, I.; Kellermann, W. Robust dereverberation with Kronecker product based multichannel linear prediction. IEEE Signal Process. Lett. 2021, 28, 101–105. [Google Scholar] [CrossRef]
  22. Itzhak, G.; Benesty, J.; Cohen, I. On the design of differential Kronecker product beamformers. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 1397–1410. [Google Scholar] [CrossRef]
  23. Kuhn, E.V.; Pitz, C.A.; Matsuo, M.V.; Bakri, K.J.; Seara, R.; Benesty, J. A Kronecker product CLMS algorithm for adaptive beamforming. Digit. Signal Process. 2021, 111, 102968. [Google Scholar] [CrossRef]
  24. Bhattacharjee, S.S.; George, N.V. Fast and efficient acoustic feedback cancellation based on low rank approximation. Signal Process. 2021, 182, 107984. [Google Scholar] [CrossRef]
  25. Wang, X.; Benesty, J.; Chen, J.; Huang, G.; Cohen, I. Beamforming with cube microphone arrays via Kronecker product decompositions. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 1774–1784. [Google Scholar] [CrossRef]
  26. Bhattacharjee, S.S.; George, N.V. Nearest Kronecker product decomposition based linear-in-the-parameters nonlinear filters. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 2111–2122. [Google Scholar] [CrossRef]
  27. Huang, G.; Benesty, J.; Cohen, I.; Chen, J. Kronecker product multichannel linear filtering for adaptive weighted prediction error-based speech dereverberation. IEEE/ACM Trans. Audio Speech Lang. Process. 2022, 30, 1277–1289. [Google Scholar] [CrossRef]
  28. Vadhvana, S.; Yadav, S.K.; Bhattacharjee, S.S.; George, N.V. An improved constrained LMS algorithm for fast adaptive beamforming based on a low rank approximation. IEEE Trans. Circuits Syst. II Express Briefs 2022. early access. [Google Scholar] [CrossRef]
  29. Benesty, J.; Paleologu, C.; Ciochină, S. On the identification of symmetric and antisymmetric impulse responses. In Proceedings of the 2021 55th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 31 October–3 November 2021; pp. 810–814. [Google Scholar]
  30. Benesty, J.; Paleologu, C.; Ciochină, S.; Kuhn, E.V.; Bakri, K.J.; Seara, R. LMS and NLMS algorithms for the identification of impulse responses with intrinsic symmetric or antisymmetric properties. In Proceedings of the (ICASSP 2022) 2022 IEEE International Conference on Acoustics, Speech and Signal Processing, Singapore, 22–27 May 2022; p. 5, accepted for publication. [Google Scholar]
  31. Loan, C.F.V. The ubiquitous Kronecker product. J. Comput. Appl. Math. 2000, 123, 85–100. [Google Scholar] [CrossRef] [Green Version]
  32. Harville, D.A. Matrix Algebra From a Statistician’s Perspective; Springer: New York, NY, USA, 1997. [Google Scholar]
  33. Benesty, J.; Paleologu, C.; Ciochină, S. On the identification of bilinear forms with the Wiener filter. IEEE Signal Process. Lett. 2017, 24, 653–657. [Google Scholar] [CrossRef]
  34. Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  35. Sayed, A.H. Adaptive Filters; Wiley: New York, NY, USA, 2008. [Google Scholar]
  36. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer: New York, NY, USA, 2013. [Google Scholar]
  37. Sayed, A.H.; Kailath, T. A state-space approach to adaptive RLS filtering. IEEE Signal Process. Mag. 1994, 11, 18–60. [Google Scholar] [CrossRef] [Green Version]
  38. Leung, S.-H.; So, C.F. Gradient-based variable forgetting factor RLS algorithm in time-varying environments. IEEE Trans. Signal Process. 2005, 53, 3141–3150. [Google Scholar] [CrossRef]
  39. Zakharov, Y.V.; White, G.P.; Liu, J. Low-complexity RLS algorithms using dichotomous coordinate descent iterations. IEEE Trans. Signal Process. 2008, 56, 3150–3161. [Google Scholar] [CrossRef] [Green Version]
  40. Apolinário, J.A., Jr. (Ed.) QRD-RLS Adaptive Filtering; Springer: New York, NY, USA, 2009. [Google Scholar]
  41. Bertsekas, D.P. Nonlinear Programming, 2nd ed.; Athena Scientific: Belmont, MA, USA, 1999. [Google Scholar]
  42. Digital Network Echo Cancellers; ITU-T Recommendation G.168. 2012. Available online: https://www.itu.int/rec/dologin_pub.asp?lang=f&id=T-REC-G.168-200701-S!!PDF-E&type=items (accessed on 1 April 2022).
  43. Dogariu, L.-M.; Benesty, J.; Paleologu, C.; Ciochină, S. An insightful overview of the Wiener Filter for system identification. Appl. Sci. 2021, 11, 7774. [Google Scholar] [CrossRef]
  44. Kim, P.S. Selective finite memory structure filtering using the chi-square test statistic for temporarily uncertain systems. Appl. Sci. 2019, 9, 4257. [Google Scholar] [CrossRef] [Green Version]
  45. Kwon, B.; Kim, S.-I. Recursive optimal finite impulse response filter and its application to adaptive estimation. Appl. Sci. 2022, 12, 2757. [Google Scholar] [CrossRef]
Figure 1. Impulse responses used in simulations, with L = 500: (a,b) network echo paths from ITU-T Recommendation G.168 [42]; (c,d) acoustic echo paths.
Figure 2. Singular values of the matrix H of size L_1 × L_2 (normalized with respect to the maximum one), with L_1 = 25 and L_2 = 20, corresponding to: (a) the impulse response from Figure 1a; (b) the impulse response from Figure 1b; (c) the impulse response from Figure 1c; and (d) the impulse response from Figure 1d.
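The fast decay of these singular values is what the decomposition-based filters exploit. The following NumPy sketch (not taken from the paper's code) computes the normalized singular values of the reshaped impulse response and a rank-P nearest-Kronecker-product approximation; the column-major reshaping convention assumed for ivec(·) and the synthetic test response are illustrative assumptions only.

```python
import numpy as np

def nkp_rank_p(h, L1, L2, P):
    """Rank-P nearest-Kronecker-product approximation of h (length L1*L2).

    Assumes column-major reshaping h = vec(H), with H of size L1 x L2, so that
    h ~= sum_p g2_p kron g1_p, where g1_p has length L1 and g2_p has length L2.
    """
    H = np.reshape(h, (L1, L2), order="F")        # ivec(h): L1 x L2 matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    g1 = [s[p] * U[:, p] for p in range(P)]       # length-L1 components
    g2 = [Vt[p, :] for p in range(P)]             # length-L2 components
    h_approx = sum(np.kron(g2[p], g1[p]) for p in range(P))
    return h_approx, s

# Synthetic exponentially decaying "echo path", used only for illustration
L1, L2 = 25, 20
rng = np.random.default_rng(0)
h = np.exp(-0.01 * np.arange(L1 * L2)) * rng.standard_normal(L1 * L2)
h_hat, sv = nkp_rank_p(h, L1, L2, P=4)
print("normalized singular values:", sv / sv[0])
print("relative misalignment (dB):",
      20 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h)))
```

For a real network or acoustic echo path, only the first few normalized singular values are significant, so a small P already captures most of the energy of h.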
Figure 3. Performance of the conventional Wiener filter (WF) and the iterative Wiener filter (IWF) for the identification of (a) the network echo path from Figure 1a and (b) the network echo path from Figure 1b. The input signal is an AR(1) process, N = 5000 (available data samples), and SNR = 20 dB.
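For reference, the conventional WF baseline used in these comparisons can be reproduced in its generic textbook form by the short sketch below; it assumes the standard sample estimates of the input correlation matrix and input–desired cross-correlation vector built from the N available data samples, and is not the authors' exact implementation.

```python
import numpy as np

def conventional_wiener(x, d, L):
    """Conventional Wiener filter from N available samples (generic baseline).

    x : input signal of length N, d : desired signal of length N.
    Uses the sample correlation matrix R and cross-correlation vector p,
    then solves R w = p.
    """
    N = len(x)
    R = np.zeros((L, L))
    p = np.zeros(L)
    xv = np.zeros(L)
    for n in range(N):
        xv = np.r_[x[n], xv[:-1]]      # input vector [x(n), ..., x(n-L+1)]
        R += np.outer(xv, xv)
        p += d[n] * xv
    return np.linalg.solve(R / N, p / N)
```

When N is not much larger than L (e.g., N = 1000 and L = 500 in the later experiments), the sample correlation matrix becomes poorly conditioned; this is precisely the regime where a decomposition-based solution that estimates only P L_1 + P L_2 parameters is expected to behave better.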
Figure 4. Performance of the conventional Wiener filter (WF) and the iterative Wiener filter (IWF) for the identification of (a) the acoustic echo path from Figure 1c and (b) the acoustic echo path from Figure 1d. The input signal is an AR(1) process, N = 5000 (available data samples), and SNR = 20 dB.
Figure 5. Performance of the conventional Wiener filter (WF) and the iterative Wiener filter (IWF) for the identification of (a) the network echo path from Figure 1a and (b) the network echo path from Figure 1b. The input signal is an AR(1) process, N = 1000 (available data samples), and SNR = 10 dB.
Figure 6. Performance of the conventional Wiener filter (WF) and the iterative Wiener filter (IWF) using P = 2 , for the identification of the network echo paths from Figure 1a,b. (a) The true network impulse response from Figure 1a; (b) the true network impulse response from Figure 1b; (c) estimated impulse response with the conventional WF, corresponding to Figure 1a; (d) estimated impulse response with the conventional WF, corresponding to Figure 1b; (e) estimated impulse response with the IWF, corresponding to Figure 1a; and (f) estimated impulse response with the IWF, corresponding to Figure 1b. Other conditions are the same as in Figure 5.
Figure 7. Performance of the conventional Wiener filter (WF) and the iterative Wiener filter (IWF) for the identification of (a) the acoustic echo path from Figure 1c and (b) the acoustic echo path from Figure 1d. The input signal is an AR(1) process, N = 1000 (available data samples), and SNR = 10 dB.
Figure 8. Performance of the conventional Wiener filter (WF) and the iterative Wiener filter (IWF) using P = 4 , for the identification of the acoustic echo paths from Figure 1c,d. (a) The true acoustic impulse response from Figure 1c; (b) the true acoustic impulse response from Figure 1d; (c) estimated impulse response with the conventional WF, corresponding to Figure 1c; (d) estimated impulse response with the conventional WF, corresponding to Figure 1d; (e) estimated impulse response with the IWF, corresponding to Figure 1c; and (f) estimated impulse response with the IWF, corresponding to Figure 1d. Other conditions are the same as in Figure 7.
Figure 9. Performance of the conventional RLS filter [with λ = 1 − 1/(KL)] and the RLS-SF [with λ_1 = 1 − 1/(KPL_1) and λ_2 = 1 − 1/(KPL_2)], using K = 10, for the identification of the network echo paths from Figure 1a,b. The input signal is an AR(1) process, SNR = 20 dB, and the echo path changes after 2.5 s.
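To put these settings in perspective (using the values of this experiment, and taking P = 2 purely as an illustrative decomposition order), the conventional RLS filter of length L = 500 uses λ = 1 − 1/(10 · 500) = 0.9998, whereas the two component filters of the RLS-SF use λ_1 = 1 − 1/(10 · 2 · 25) = 0.998 and λ_2 = 1 − 1/(10 · 2 · 20) = 0.9975. Following the usual rule of thumb that ties the forgetting factor to the filter length, the much shorter component filters can operate with smaller forgetting factors, which favors faster tracking after the echo-path change.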
Figure 10. Performance of the conventional RLS filter [with λ = 1 − 1/(KL/2)] and the RLS-SF [with λ_1 = 1 − 1/(2KPL_1) and λ_2 = 1 − 1/(2KPL_2)], using K = 10, for the identification of the network echo paths from Figure 1a,b. The input signal is an AR(1) process, SNR = 20 dB, and the echo path changes after 2.5 s.
Figure 11. Performance of the conventional RLS filter [with λ = 1 − 1/(KL/2)] and the RLS-SF [with λ_1 = 1 − 1/(2KPL_1) and λ_2 = 1 − 1/(2KPL_2)], using K = 10, for the identification of the network echo paths from Figure 1a,b. The input signal is a speech sequence, SNR = 10 dB, and the echo path changes after 2.5 s.
Figure 12. Performance of the conventional RLS filter [with λ = 1 − 1/(KL/2)] and the RLS-SF [with λ_1 = 1 − 1/(2KPL_1), λ_2 = 1 − 1/(2KPL_2), and P = 3], using K = 10, for the identification of the network echo path from Figure 1b. (a) The true network impulse response, (b) estimated impulse response (after 5 s) with the conventional RLS filter, and (c) estimated impulse response (after 5 s) with the RLS-SF. Other conditions are the same as in Figure 11.
Figure 13. Performance of the conventional RLS filter [with λ = 1 − 1/(KL)] and the RLS-SF [with λ_1 = 1 − 1/(KPL_1) and λ_2 = 1 − 1/(KPL_2)], using K = 10, for the identification of the acoustic echo paths from Figure 1c,d. The input signal is an AR(1) process, SNR = 10 dB, and the echo path changes after 2.5 s.
Figure 14. Performance of the conventional RLS filter [with λ = 1 − 1/(KL/2)] and the RLS-SF [with λ_1 = 1 − 1/(2KPL_1) and λ_2 = 1 − 1/(2KPL_2)], using K = 10, for the identification of the acoustic echo paths from Figure 1c,d. The input signal is an AR(1) process, SNR = 10 dB, and the echo path changes after 2.5 s.
Figure 15. Performance of the conventional RLS filter [with λ = 1 − 1/(KL/2)] and the RLS-SF [with λ_1 = 1 − 1/(2KPL_1) and λ_2 = 1 − 1/(2KPL_2)], using K = 10, for the identification of the acoustic echo path from Figure 1c. The input signal is a speech sequence and the SNR decreases from 10 dB to 0 dB between times 2.5 and 3 s.
Figure 16. Performance of the conventional RLS filter [with λ = 1 − 1/(KL/2)] and the RLS-SF [with λ_1 = 1 − 1/(2KPL_1), λ_2 = 1 − 1/(2KPL_2), and P = 8], using K = 10, for the identification of the acoustic echo path from Figure 1c. (a) The true acoustic impulse response, (b) estimated impulse response (after 5 s) with the conventional RLS filter, and (c) estimated impulse response (after 5 s) with the RLS-SF. Other conditions are the same as in Figure 15.
Figure 17. (a) Acoustic impulse response h of length L = 500 (with L_1 = 25 and L_2 = 20); (b) resulting impulse response Q̃h; (c) singular values of the matrix H = ivec(h); and (d) singular values of the matrix H̃ = ivec(Q̃h).
Table 1. RLS algorithm with symmetric filter (RLS-SF).
Data: $x(t)$, $d(t)$, $Q = \frac{1}{2}\left(I_L + P_{L_1 L_2}\right)$, $Q^{-1}$

Parameters: $\delta_1 > 0$, $\delta_2 > 0$, $\lambda_1 = 1 - \frac{1}{K P L_1}$, $\lambda_2 = 1 - \frac{1}{K P L_2}$, $K \geq 1$

Initialization:
  $\hat{g}_{1,p}(0) = \begin{bmatrix} 1 & 0_{L_1 - 1}^T \end{bmatrix}^T$, $p = 1, 2, \ldots, P$
  $\underline{\hat{g}}_1(0) = \begin{bmatrix} \hat{g}_{1,1}^T(0) & \hat{g}_{1,2}^T(0) & \cdots & \hat{g}_{1,P}^T(0) \end{bmatrix}^T$
  $\hat{g}_{2,p}(0) = \begin{bmatrix} 1 & 0_{L_2 - 1}^T \end{bmatrix}^T$, $p = 1, 2, \ldots, P$
  $\underline{\hat{g}}_2(0) = \begin{bmatrix} \hat{g}_{2,1}^T(0) & \hat{g}_{2,2}^T(0) & \cdots & \hat{g}_{2,P}^T(0) \end{bmatrix}^T$
  $R_{\underline{\hat{g}}_2}^{-1}(0) = \delta_1^{-1} I_{P L_1}$, $R_{\underline{\hat{g}}_1}^{-1}(0) = \delta_2^{-1} I_{P L_2}$

For discrete-time index $t = 1, 2, \ldots$:
  $\hat{G}_{\hat{g}_{2,p}}(t-1) = \frac{1}{2}\left[\hat{g}_{2,p}(t-1) \otimes I_{L_1} + I_{L_1} \otimes \hat{g}_{2,p}(t-1)\right]$, $p = 1, 2, \ldots, P$
  $\underline{\hat{G}}_2(t-1) = \begin{bmatrix} \hat{G}_{\hat{g}_{2,1}}(t-1) & \hat{G}_{\hat{g}_{2,2}}(t-1) & \cdots & \hat{G}_{\hat{g}_{2,P}}(t-1) \end{bmatrix}$
  $\hat{G}_{\hat{g}_{1,p}}(t-1) = \frac{1}{2}\left[I_{L_2} \otimes \hat{g}_{1,p}(t-1) + \hat{g}_{1,p}(t-1) \otimes I_{L_2}\right]$, $p = 1, 2, \ldots, P$
  $\underline{\hat{G}}_1(t-1) = \begin{bmatrix} \hat{G}_{\hat{g}_{1,1}}(t-1) & \hat{G}_{\hat{g}_{1,2}}(t-1) & \cdots & \hat{G}_{\hat{g}_{1,P}}(t-1) \end{bmatrix}$
  $x_{\underline{\hat{g}}_2}(t) = \underline{\hat{G}}_2^T(t-1)\, Q^T x(t)$, $\quad x_{\underline{\hat{g}}_1}(t) = \underline{\hat{G}}_1^T(t-1)\, Q^T x(t)$
  $\varepsilon(t) = d(t) - \underline{\hat{g}}_1^T(t-1)\, x_{\underline{\hat{g}}_2}(t) = d(t) - \underline{\hat{g}}_2^T(t-1)\, x_{\underline{\hat{g}}_1}(t)$
  $k_{\underline{\hat{g}}_1}(t) = \dfrac{R_{\underline{\hat{g}}_2}^{-1}(t-1)\, x_{\underline{\hat{g}}_2}(t)}{\lambda_1 + x_{\underline{\hat{g}}_2}^T(t)\, R_{\underline{\hat{g}}_2}^{-1}(t-1)\, x_{\underline{\hat{g}}_2}(t)}$
  $k_{\underline{\hat{g}}_2}(t) = \dfrac{R_{\underline{\hat{g}}_1}^{-1}(t-1)\, x_{\underline{\hat{g}}_1}(t)}{\lambda_2 + x_{\underline{\hat{g}}_1}^T(t)\, R_{\underline{\hat{g}}_1}^{-1}(t-1)\, x_{\underline{\hat{g}}_1}(t)}$
  $R_{\underline{\hat{g}}_2}^{-1}(t) = \frac{1}{\lambda_1}\left[I_{P L_1} - k_{\underline{\hat{g}}_1}(t)\, x_{\underline{\hat{g}}_2}^T(t)\right] R_{\underline{\hat{g}}_2}^{-1}(t-1)$
  $R_{\underline{\hat{g}}_1}^{-1}(t) = \frac{1}{\lambda_2}\left[I_{P L_2} - k_{\underline{\hat{g}}_2}(t)\, x_{\underline{\hat{g}}_1}^T(t)\right] R_{\underline{\hat{g}}_1}^{-1}(t-1)$
  $\underline{\hat{g}}_1(t) = \underline{\hat{g}}_1(t-1) + k_{\underline{\hat{g}}_1}(t)\, \varepsilon(t) = \begin{bmatrix} \hat{g}_{1,1}^T(t) & \hat{g}_{1,2}^T(t) & \cdots & \hat{g}_{1,P}^T(t) \end{bmatrix}^T$
  $\underline{\hat{g}}_2(t) = \underline{\hat{g}}_2(t-1) + k_{\underline{\hat{g}}_2}(t)\, \varepsilon(t) = \begin{bmatrix} \hat{g}_{2,1}^T(t) & \hat{g}_{2,2}^T(t) & \cdots & \hat{g}_{2,P}^T(t) \end{bmatrix}^T$
  $\hat{g}_{+}(t) = \sum_{p=1}^{P} \frac{1}{2}\left[\hat{g}_{2,p}(t) \otimes \hat{g}_{1,p}(t) + \hat{g}_{1,p}(t) \otimes \hat{g}_{2,p}(t)\right]$
  $\hat{g}(t) = Q^{-1} \hat{g}_{+}(t)$
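To make the recursion concrete, the following is a minimal NumPy sketch of one RLS-SF iteration written directly from Table 1; it is not the authors' reference implementation. The symmetrization matrix Q and its inverse are simply passed in as data, exactly as in the table (the usage example substitutes identity placeholders for them), and the input and desired signals are random stand-ins rather than an echo-cancellation scenario.

```python
import numpy as np

def rls_sf_step(x, d, g1, g2, R2_inv, R1_inv, Q, Q_inv, lam1, lam2):
    """One RLS-SF recursion (sketch of Table 1). g1/g2: lists of the P
    component filters (lengths L1 and L2); R2_inv/R1_inv: inverse correlation
    matrices of x_g2 and x_g1; Q, Q_inv: given data (construction not shown)."""
    P, L1, L2 = len(g1), len(g1[0]), len(g2[0])
    # Combination matrices G2 (L x P*L1) and G1 (L x P*L2)
    G2 = np.hstack([(np.kron(g.reshape(-1, 1), np.eye(L1))
                     + np.kron(np.eye(L1), g.reshape(-1, 1))) / 2 for g in g2])
    G1 = np.hstack([(np.kron(np.eye(L2), g.reshape(-1, 1))
                     + np.kron(g.reshape(-1, 1), np.eye(L2))) / 2 for g in g1])
    Qx = Q.T @ x
    x_g2, x_g1 = G2.T @ Qx, G1.T @ Qx              # lengths P*L1 and P*L2
    g1_vec, g2_vec = np.concatenate(g1), np.concatenate(g2)
    err = d - g1_vec @ x_g2                        # equals d - g2_vec @ x_g1
    # Kalman gains and inverse correlation matrix updates
    k1 = R2_inv @ x_g2 / (lam1 + x_g2 @ R2_inv @ x_g2)
    k2 = R1_inv @ x_g1 / (lam2 + x_g1 @ R1_inv @ x_g1)
    R2_inv = (R2_inv - np.outer(k1, x_g2 @ R2_inv)) / lam1
    R1_inv = (R1_inv - np.outer(k2, x_g1 @ R1_inv)) / lam2
    # Coefficient updates, both driven by the same a priori error
    g1_vec, g2_vec = g1_vec + k1 * err, g2_vec + k2 * err
    g1 = [g1_vec[p * L1:(p + 1) * L1] for p in range(P)]
    g2 = [g2_vec[p * L2:(p + 1) * L2] for p in range(P)]
    # Recombine the global impulse-response estimate
    g_plus = sum((np.kron(b, a) + np.kron(a, b)) / 2 for a, b in zip(g1, g2))
    return g1, g2, R2_inv, R1_inv, Q_inv @ g_plus, err

# Initialization as in Table 1; Q = Q_inv = I_L and random signals are
# placeholders used only to exercise the recursion.
L1, L2, P, K = 25, 20, 2, 10
L, delta1, delta2 = L1 * L2, 1e-2, 1e-2
lam1, lam2 = 1 - 1 / (K * P * L1), 1 - 1 / (K * P * L2)
g1 = [np.r_[1.0, np.zeros(L1 - 1)] for _ in range(P)]
g2 = [np.r_[1.0, np.zeros(L2 - 1)] for _ in range(P)]
R2_inv, R1_inv = np.eye(P * L1) / delta1, np.eye(P * L2) / delta2
Q = Q_inv = np.eye(L)
rng = np.random.default_rng(0)
for _ in range(100):
    x, d = rng.standard_normal(L), rng.standard_normal()
    g1, g2, R2_inv, R1_inv, g_hat, err = rls_sf_step(
        x, d, g1, g2, R2_inv, R1_inv, Q, Q_inv, lam1, lam2)
```

Note that the two inverse correlation matrices are of sizes P L_1 × P L_1 and P L_2 × P L_2, so each update is far cheaper than the L × L update of the conventional RLS filter.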
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
