Article

An Iterative Wiener Filter Based on a Fourth-Order Tensor Decomposition

by Jacob Benesty 1, Constantin Paleologu 2,* and Laura-Maria Dogariu 2

1 INRS-EMT, University of Quebec, 800 de la Gauchetiere Ouest, Suite 6900, Montreal, QC H5A 1K6, Canada
2 Department of Telecommunications, University Politehnica of Bucharest, 1–3, Iuliu Maniu Blvd., 061071 Bucharest, Romania
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(8), 1560; https://doi.org/10.3390/sym15081560
Submission received: 26 July 2023 / Revised: 6 August 2023 / Accepted: 8 August 2023 / Published: 9 August 2023
(This article belongs to the Section Engineering and Materials)

Abstract:
This work focuses on linear system identification problems in the framework of the Wiener filter. Specifically, it addresses the challenging identification of systems characterized by impulse responses of long length, which poses significant difficulties due to the resulting large parameter space. The proposed solution targets a dimensionality reduction of the problem by involving the decomposition of a fourth-order tensor, using low-rank approximations in conjunction with the nearest Kronecker product. In addition, the rank of the tensor is controlled and limited to a known value without involving any approximation technique. The final estimate is obtained based on a combination of four (shorter) optimal filters, which are alternatively iterated. As a result, the designed iterative Wiener filter outperforms the traditional counterpart, being more robust to the accuracy of the statistics’ estimates and/or noisy conditions. In addition, simulations performed in the context of acoustic echo cancellation indicate that the proposed iterative Wiener filter that exploits this fourth-order tensor decomposition achieves better performance as compared to some previously developed solutions based on lower decomposition levels. This study could further lead to the development of computationally efficient tensor-based adaptive filtering algorithms.

1. Introduction

In the context of system identification [1], the overall difficulty increases when dealing with the identification of long-length impulse responses, which raises significant challenges in terms of the computational complexity and accuracy of the solution. A well-known example is related to the acoustic echo cancellation scenarios [2], where the acoustic echo paths are modeled as finite impulse response filters that can reach thousands of coefficients. In this context, the desired performance criteria are difficult to achieve when dealing with such a large parameter space. To improve the overall performance of the identification methods when facing long-length impulse responses, it is natural to exploit the intrinsic properties of the systems to be identified. These may refer to sparseness characteristics [3,4], symmetry features [5,6], low-rank properties [7,8], among others.
Several recent approaches have focused on decomposition-based techniques that involve low-rank approximations and the nearest Kronecker product (NKP), e.g., see [9,10] (with the references therein). These methods exploit the singular value decomposition (SVD) of the corresponding matrix of the reshaped impulse response (obtained by inverse vectorization) and its low-rank feature. Therefore, the original system identification problem (that relies on a single long-length filter) is reformulated using a combination of two (much) shorter filters. This could lead to important performance gains, and the decomposition-based technique has been successfully applied in many practical applications, e.g., [11,12,13,14,15] (including their references). Nevertheless, the NKP-based technique used in these works relies on the second-order decomposition, exploiting the SVD of the (low-rank) corresponding matrix associated with the impulse response [9]. Extending this technique to a higher-order decomposition faces the difficulty of dealing with a higher-rank tensor, where its rank results from a sum of rank-1 tensors. In the context of the second-order decomposition, we rely on the fact that the rank of a matrix is limited by the number of its columns. On the other hand, for higher-order decompositions, the rank of a higher-order tensor could exceed its largest dimension; therefore, approximating its rank would be a sensitive issue [16,17,18,19,20,21].
Very recently, an alternative method to extend the NKP technique to a third-order decomposition level has been presented in [22], targeting a higher dimensionality reduction (i.e., a combination of three filters of shorter lengths). In this previous work, the rank of a third-order tensor is controlled in terms of a matrix rank. The resulting iterative Wiener filter developed in [22] has proved to outperform the traditional Wiener filter and a previously designed version that relies on the second-order NKP-based decomposition, which was proposed in [9]. Nevertheless, the extension of the third-order decomposition approach to a higher-order level cannot be performed in a straightforward manner.
Developing efficient and robust signal processing methods is an important target when dealing with applications where large amounts of data would be required but are not always available or easy to handle. In this context, tensor decomposition and modeling techniques have also been exploited in the framework of big data [23]. However, there is the so-called “curse of dimensionality” [23], which limits the order of the tensors that can be handled. This can be partially alleviated by using decompositions to represent the tensor instead of using the tensor itself. Nevertheless, most decomposition-based algorithms require full tensors, which is infeasible when dealing with many data sets. On the other hand, there are many important fields nowadays that rely on the concept of big data, such as artificial intelligence, ambient assisted living, cognitive neuroscience, machine learning, quantum information theory, and scientific computing, among others, e.g., see [24,25,26] and the references therein. Consequently, there is a high interest in developing methods that “extract” value from big data and operate with a reduced size of the data set.
The dimensionality reduction of a system identification problem with a large parameter space was initially addressed in the framework of linearly separable systems, which rely on the modeling of rank-1 tensors [27,28,29,30]. The main goal was to transform a high-dimension system identification problem into “smaller” problems (i.e., shorter filters) that can be further combined to match the original purpose. This approach is based on a Kronecker product decomposition of the global impulse response, and it is related to the processing of multilinear forms. Nevertheless, in such a framework, the global impulse response of the spatiotemporal system is considered to be perfectly separable. As a consequence, the resulting algorithms are applicable only for a particular (quasi-periodic) form of the impulse response. Nevertheless, the multilinear forms have found applications in several frameworks, such as channel equalization [31], nonlinear acoustic echo cancellation [32], multiple-input multiple-output wireless communication systems [33,34], and separable linearly constrained beamformers for large-scale antenna systems [35]. Overall, the main advantage of this decomposition-based technique that involves the modeling of rank-1 tensors is related to the dimensionality reduction (i.e., from the product of the components’ lengths to the sum of them). On the other hand, a significant limitation of this method is that it can be successfully applied only for linearly separable systems, which is not the case in most real-world applications. Therefore, the NKP-based decomposition techniques in conjunction with low-rank approximations represent better fits for the identification of realistic impulse responses of long length.
Motivated by this direction, in this paper, we develop an iterative Wiener filter using a fourth-order tensor decomposition. As compared to the traditional version of the Wiener filter, which results as the solution of a single (high-dimensional) set of Wiener–Hopf equations, the advantage of this iterative technique is basically twofold. First, the final estimate is obtained by combining the solutions provided by four shorter sets of Wiener–Hopf equations, which result from “extracting” the components of the fourth-order tensor. These solutions are alternatively iterated, similar to a block coordinate descent method [36]. Such a technique fits very well when the cost functions have a partially decomposable structure with respect to the optimization variables [37]. Second, an important feature of the proposed decomposition-based approach is that the tensor’s rank is controlled and limited to a known value without requiring any approximation technique. As a result, this decomposition method is suitable for the identification of long-length impulse responses since it operates with much smaller structures of data, as compared to the traditional Wiener solution. Moreover, it owns improved robustness to the accuracy of the statistics’ estimates and the signal-to-noise ratio (SNR). Thus, it could provide reliable solutions when only a limited amount (or incomplete set) of data samples is available and/or in low SNR conditions. In such cases, it could also outperform the previously designed iterative version that relies on the third-order decomposition. The main novelty of this work with respect to the previous counterparts developed in [9,22] consists in the extension of the NKP-based decomposition to a higher-order level, which is a challenging task due to the specific difficulties associated with higher-order/higher-rank tensors, as outlined before.
Following this introduction, Section 2 presents the proposed fourth-order decomposition technique of the system impulse response. Next, the signal model for system identification and the overall problem formulation are described in Section 3. Then, Section 4 is dedicated to the development of the iterative Wiener filter based on this approach. Supporting simulation results are obtained in an acoustic echo cancellation scenario, as will be shown in Section 5. In the end, Section 6 outlines the main conclusions and presents several perspectives for future research.
The notation involved in this paper follows a common convention for data structures, using italic letters for scalars (e.g., L, x(t)), lowercase bold letters for vectors (e.g., h, x(t)), uppercase bold letters for matrices (e.g., H, R_x), and uppercase calligraphic bold letters for tensors (e.g., 𝓗). In addition, all the vectors are considered as column vectors.

2. Fourth-Order Decomposition of the System Impulse Response

Let us consider the framework of a linear system identification problem operating in a single-input single-output (SISO) scenario. The basic goal is to obtain a reliable estimate of the unknown impulse response, having available the input sequence and a reference signal, which is the output of the unknown system [1,5]. In this context, let

$$\mathbf{h} = \begin{bmatrix} h_0 & h_1 & \cdots & h_{L-1} \end{bmatrix}^{\mathsf{T}} \tag{1}$$

be an impulse response with L real-valued coefficients, where the superscript T denotes the transpose operator. It is assumed that L = L_1 L_2, with L_1 ≥ L_2, where L_1 and L_2 are of the same order. Then, as shown in [7,9], this system impulse response can be represented in the following form:

$$\mathbf{h} = \sum_{i=1}^{L_2} \mathbf{h}_{2i} \otimes \mathbf{h}_{1i}, \tag{2}$$

where h_{1i} and h_{2i} are two impulse responses of shorter lengths (as compared to h), having L_1 and L_2 coefficients, respectively; here, ⊗ denotes the Kronecker product [38].
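As an illustrative aside, the decomposition in (2) can be checked numerically. The following NumPy sketch (with small, arbitrary dimensions) reshapes h into the L_1 × L_2 matrix whose column-major vectorization gives back h, and recovers the components h_{1i} and h_{2i} from its SVD, in the spirit of [7,9]:

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 8, 4                      # L = L1 * L2, with L1 >= L2
h = rng.standard_normal(L1 * L2)

# Reshape h into an L1 x L2 matrix H such that h = vec(H) (column-major).
H = np.reshape(h, (L1, L2), order="F")
U, s, Vt = np.linalg.svd(H, full_matrices=False)

# Components of Eq. (2), h = sum_i h2_i (Kronecker) h1_i, via the SVD of H.
h1 = [s[i] * U[:, i] for i in range(L2)]   # lengths L1
h2 = [Vt[i, :] for i in range(L2)]         # lengths L2

h_rec = sum(np.kron(h2[i], h1[i]) for i in range(L2))
print(np.allclose(h, h_rec))               # True: exact with all L2 terms
```

Keeping only the first P ≪ L_2 singular-value terms then yields the low-rank approximation that underlies the NKP-based methods.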
Next, let us consider that L_1 = L_{11} L_{12}, where L_{11} ≥ L_{12}; also, let L_2 = L_{21} L_{22}, with L_{21} ≥ L_{22}. We can apply the same decomposition from (2) to h_{1i} and h_{2i}, i.e.,

$$\mathbf{h}_{1i} = \sum_{j=1}^{L_{12}} \mathbf{h}_{12}^{ij} \otimes \mathbf{h}_{11}^{ij}, \tag{3}$$

where h_{11}^{ij} and h_{12}^{ij} are two impulse responses of shorter lengths (as compared to h_{1i}), with L_{11} and L_{12} coefficients, respectively, while

$$\mathbf{h}_{2i} = \sum_{k=1}^{L_{22}} \mathbf{h}_{22}^{ik} \otimes \mathbf{h}_{21}^{ik}, \tag{4}$$

where h_{21}^{ik} and h_{22}^{ik} are two impulse responses of shorter lengths (as compared to h_{2i}), with L_{21} and L_{22} coefficients, respectively. Clearly, we must have L = L_1 L_2 = L_{11} L_{12} L_{21} L_{22}, with L_1 ≥ L_2, L_{11} ≥ L_{12}, and L_{21} ≥ L_{22}.

Substituting (3) and (4) into (2), we get

$$\mathbf{h} = \sum_{i=1}^{L_2} \left( \sum_{k=1}^{L_{22}} \mathbf{h}_{22}^{ik} \otimes \mathbf{h}_{21}^{ik} \right) \otimes \left( \sum_{j=1}^{L_{12}} \mathbf{h}_{12}^{ij} \otimes \mathbf{h}_{11}^{ij} \right) = \sum_{i=1}^{L_2} \sum_{j=1}^{L_{12}} \sum_{k=1}^{L_{22}} \mathbf{h}_{22}^{ik} \otimes \mathbf{h}_{21}^{ik} \otimes \mathbf{h}_{12}^{ij} \otimes \mathbf{h}_{11}^{ij}. \tag{5}$$

Now, let us assume that h_{1i} and h_{2i} are low rank [7], i.e.,

$$\mathbf{h}_{1i} = \sum_{p=1}^{P} \mathbf{h}_{12}^{ip} \otimes \mathbf{h}_{11}^{ip}, \tag{6}$$

$$\mathbf{h}_{2i} = \sum_{q=1}^{Q} \mathbf{h}_{22}^{iq} \otimes \mathbf{h}_{21}^{iq}, \tag{7}$$

where P ≤ L_{12} and Q ≤ L_{22}. Then, (5) can be expressed as

$$\mathbf{h} = \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{h}_{22}^{lq} \otimes \mathbf{h}_{21}^{lq} \otimes \mathbf{h}_{12}^{lp} \otimes \mathbf{h}_{11}^{lp}. \tag{8}$$

Following (2) and (8), equivalent representations in matrix and fourth-order tensor forms, respectively, result in

$$\mathbf{H} = \sum_{l=1}^{L_2} \mathbf{h}_{1l} \mathbf{h}_{2l}^{\mathsf{T}} = \sum_{l=1}^{L_2} \mathbf{h}_{1l} \circ \mathbf{h}_{2l} \tag{9}$$

and

$$\boldsymbol{\mathcal{H}} = \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{h}_{11}^{lp} \circ \mathbf{h}_{12}^{lp} \circ \mathbf{h}_{21}^{lq} \circ \mathbf{h}_{22}^{lq} = \sum_{p=1}^{P} \sum_{q=1}^{Q} \left( \sum_{l=1}^{L_2} \mathbf{h}_{11}^{lp} \circ \mathbf{h}_{12}^{lp} \circ \mathbf{h}_{21}^{lq} \circ \mathbf{h}_{22}^{lq} \right) = \sum_{p=1}^{P} \sum_{q=1}^{Q} \boldsymbol{\mathcal{H}}^{:pq}, \tag{10}$$

where ∘ denotes the outer product and $\boldsymbol{\mathcal{H}}^{:pq} = \sum_{l=1}^{L_2} \mathbf{h}_{11}^{lp} \circ \mathbf{h}_{12}^{lp} \circ \mathbf{h}_{21}^{lq} \circ \mathbf{h}_{22}^{lq} \in \mathbb{R}^{L_{11} \times L_{12} \times L_{21} \times L_{22}}$. It can be noticed that $\boldsymbol{\mathcal{H}}^{:pq}$ corresponds to the canonical polyadic (CP) decomposition, i.e., the factorization of a fourth-order tensor into a sum of rank-1 fourth-order tensors [39]. Accordingly, L_2 is the minimum number of rank-1 terms that generates $\boldsymbol{\mathcal{H}}^{:pq}$. As a result, the rank of $\boldsymbol{\mathcal{H}}^{:pq}$ is also L_2, while $\boldsymbol{\mathcal{H}}$ results as a double sum of PQ fourth-order tensors (each one of rank L_2).
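As a numerical sanity check (small, arbitrary dimensions; all variable names are illustrative), the quadruple Kronecker sum in (8) and the outer-product tensor construction in (10) can be verified to carry exactly the same entries:

```python
import numpy as np

rng = np.random.default_rng(1)
L11 = L12 = L21 = L22 = 2
L2, P, Q = 3, 2, 2

# Random short components h11^{lp}, h12^{lp}, h21^{lq}, h22^{lq}.
h11 = rng.standard_normal((L2, P, L11))
h12 = rng.standard_normal((L2, P, L12))
h21 = rng.standard_normal((L2, Q, L21))
h22 = rng.standard_normal((L2, Q, L22))

# Eq. (8): h = sum_{l,p,q} h22 (kron) h21 (kron) h12 (kron) h11.
h = sum(np.kron(h22[l, q], np.kron(h21[l, q], np.kron(h12[l, p], h11[l, p])))
        for l in range(L2) for p in range(P) for q in range(Q))

# Eq. (10): the same data arranged as a fourth-order tensor (sum of outer products).
T = np.einsum('lpi,lpj,lqk,lqm->ijkm', h11, h12, h21, h22)

# Column-major vectorization of the tensor recovers the impulse response.
print(np.allclose(T.ravel(order='F'), h))  # True
```

This also makes the dimensionality reduction explicit: the tensor is fully described by the short components, while the full vector has L_{11} L_{12} L_{21} L_{22} entries.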
In order to exploit (8) in practical system identification problems, the next task is to write this expression in four alternative ways, for the purpose of extracting each block (of short impulse responses), thus leading to a more efficient identification. First, we focus on h_{22}^{lq}, with l = 1, 2, …, L_2 and q = 1, 2, …, Q. We have

$$\begin{aligned} \mathbf{h} &= \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \left( \mathbf{I}_{L_{22}} \otimes \mathbf{h}_{21}^{lq} \otimes \mathbf{h}_{12}^{lp} \otimes \mathbf{h}_{11}^{lp} \right) \mathbf{h}_{22}^{lq} = \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{H}_{22}^{lpq} \mathbf{h}_{22}^{lq} \\ &= \sum_{q=1}^{Q} \sum_{p=1}^{P} \mathbf{H}_{22}^{:pq} \mathbf{h}_{22}^{:q} = \sum_{q=1}^{Q} \mathbf{H}_{22}^{:+:q} \mathbf{h}_{22}^{:q} = \underline{\mathbf{H}}_{22} \underline{\mathbf{h}}_{22}, \end{aligned} \tag{11}$$

where I_{L_{22}} is the L_{22} × L_{22} identity matrix, H_{22}^{lpq} = I_{L_{22}} ⊗ h_{21}^{lq} ⊗ h_{12}^{lp} ⊗ h_{11}^{lp} is an L × L_{22} matrix,

$$\mathbf{H}_{22}^{:pq} = \begin{bmatrix} \mathbf{H}_{22}^{1pq} & \mathbf{H}_{22}^{2pq} & \cdots & \mathbf{H}_{22}^{L_2 pq} \end{bmatrix} \tag{12}$$

is an L × L_2 L_{22} matrix,

$$\mathbf{h}_{22}^{:q} = \begin{bmatrix} \left(\mathbf{h}_{22}^{1q}\right)^{\mathsf{T}} & \left(\mathbf{h}_{22}^{2q}\right)^{\mathsf{T}} & \cdots & \left(\mathbf{h}_{22}^{L_2 q}\right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}} \tag{13}$$

is a vector of length L_2 L_{22}, H_{22}^{:+:q} = Σ_{p=1}^{P} H_{22}^{:pq} is an L × L_2 L_{22} matrix,

$$\underline{\mathbf{H}}_{22} = \begin{bmatrix} \mathbf{H}_{22}^{:+:1} & \mathbf{H}_{22}^{:+:2} & \cdots & \mathbf{H}_{22}^{:+:Q} \end{bmatrix} \tag{14}$$

is an L × Q L_2 L_{22} matrix, and

$$\underline{\mathbf{h}}_{22} = \begin{bmatrix} \left(\mathbf{h}_{22}^{:1}\right)^{\mathsf{T}} & \left(\mathbf{h}_{22}^{:2}\right)^{\mathsf{T}} & \cdots & \left(\mathbf{h}_{22}^{:Q}\right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}} \tag{15}$$

is a column vector with Q L_2 L_{22} coefficients.
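The manipulation in (11) rests on the elementary Kronecker identity a ⊗ b = (I ⊗ b) a, with the identity matrix placed at the position of the extracted block. A minimal sketch for a single term of (8) (rank-1 case, illustrative dimensions) verifies this for the first two extractions:

```python
import numpy as np

rng = np.random.default_rng(2)
L11, L12, L21, L22 = 2, 2, 3, 3

h11, h12, h21, h22 = (rng.standard_normal(n) for n in (L11, L12, L21, L22))
v = np.kron(h12, h11)                      # "fixed" lower part of one term in Eq. (8)
full = np.kron(h22, np.kron(h21, v))       # h22 (kron) h21 (kron) h12 (kron) h11

# Identity behind Eq. (11): the term is linear in the extracted block h22.
H22_term = np.kron(np.eye(L22), np.kron(h21, v)[:, None])
print(np.allclose(full, H22_term @ h22))   # True

# Identity behind Eq. (16): the identity matrix sits at the position of h21.
H21_term = np.kron(h22[:, None], np.kron(np.eye(L21), v[:, None]))
print(np.allclose(full, H21_term @ h21))   # True
```

The remaining two extractions, for h_{12} and h_{11}, follow the same pattern with the identity matrix moved to the corresponding position.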
Second, we express (8) by considering h_{21}^{lq}, with l = 1, 2, …, L_2 and q = 1, 2, …, Q. We have

$$\begin{aligned} \mathbf{h} &= \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \left( \mathbf{h}_{22}^{lq} \otimes \mathbf{I}_{L_{21}} \otimes \mathbf{h}_{12}^{lp} \otimes \mathbf{h}_{11}^{lp} \right) \mathbf{h}_{21}^{lq} = \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{H}_{21}^{lpq} \mathbf{h}_{21}^{lq} \\ &= \sum_{q=1}^{Q} \sum_{p=1}^{P} \mathbf{H}_{21}^{:pq} \mathbf{h}_{21}^{:q} = \sum_{q=1}^{Q} \mathbf{H}_{21}^{:+:q} \mathbf{h}_{21}^{:q} = \underline{\mathbf{H}}_{21} \underline{\mathbf{h}}_{21}, \end{aligned} \tag{16}$$

where I_{L_{21}} is the L_{21} × L_{21} identity matrix, H_{21}^{lpq} = h_{22}^{lq} ⊗ I_{L_{21}} ⊗ h_{12}^{lp} ⊗ h_{11}^{lp} is an L × L_{21} matrix,

$$\mathbf{H}_{21}^{:pq} = \begin{bmatrix} \mathbf{H}_{21}^{1pq} & \mathbf{H}_{21}^{2pq} & \cdots & \mathbf{H}_{21}^{L_2 pq} \end{bmatrix} \tag{17}$$

is an L × L_2 L_{21} matrix,

$$\mathbf{h}_{21}^{:q} = \begin{bmatrix} \left(\mathbf{h}_{21}^{1q}\right)^{\mathsf{T}} & \left(\mathbf{h}_{21}^{2q}\right)^{\mathsf{T}} & \cdots & \left(\mathbf{h}_{21}^{L_2 q}\right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}} \tag{18}$$

is a column vector with L_2 L_{21} coefficients, H_{21}^{:+:q} = Σ_{p=1}^{P} H_{21}^{:pq} is an L × L_2 L_{21} matrix,

$$\underline{\mathbf{H}}_{21} = \begin{bmatrix} \mathbf{H}_{21}^{:+:1} & \mathbf{H}_{21}^{:+:2} & \cdots & \mathbf{H}_{21}^{:+:Q} \end{bmatrix} \tag{19}$$

is an L × Q L_2 L_{21} matrix, and

$$\underline{\mathbf{h}}_{21} = \begin{bmatrix} \left(\mathbf{h}_{21}^{:1}\right)^{\mathsf{T}} & \left(\mathbf{h}_{21}^{:2}\right)^{\mathsf{T}} & \cdots & \left(\mathbf{h}_{21}^{:Q}\right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}} \tag{20}$$

is a vector of length Q L_2 L_{21}.
Third, (8) can be developed by extracting h_{12}^{lp}, with l = 1, 2, …, L_2 and p = 1, 2, …, P. We have

$$\begin{aligned} \mathbf{h} &= \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \left( \mathbf{h}_{22}^{lq} \otimes \mathbf{h}_{21}^{lq} \otimes \mathbf{I}_{L_{12}} \otimes \mathbf{h}_{11}^{lp} \right) \mathbf{h}_{12}^{lp} = \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{H}_{12}^{lpq} \mathbf{h}_{12}^{lp} \\ &= \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{H}_{12}^{:pq} \mathbf{h}_{12}^{:p} = \sum_{p=1}^{P} \mathbf{H}_{12}^{:p:+} \mathbf{h}_{12}^{:p} = \underline{\mathbf{H}}_{12} \underline{\mathbf{h}}_{12}, \end{aligned} \tag{21}$$

where I_{L_{12}} is the L_{12} × L_{12} identity matrix, H_{12}^{lpq} = h_{22}^{lq} ⊗ h_{21}^{lq} ⊗ I_{L_{12}} ⊗ h_{11}^{lp} is an L × L_{12} matrix,

$$\mathbf{H}_{12}^{:pq} = \begin{bmatrix} \mathbf{H}_{12}^{1pq} & \mathbf{H}_{12}^{2pq} & \cdots & \mathbf{H}_{12}^{L_2 pq} \end{bmatrix} \tag{22}$$

is an L × L_2 L_{12} matrix,

$$\mathbf{h}_{12}^{:p} = \begin{bmatrix} \left(\mathbf{h}_{12}^{1p}\right)^{\mathsf{T}} & \left(\mathbf{h}_{12}^{2p}\right)^{\mathsf{T}} & \cdots & \left(\mathbf{h}_{12}^{L_2 p}\right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}} \tag{23}$$

is a column vector with L_2 L_{12} coefficients, H_{12}^{:p:+} = Σ_{q=1}^{Q} H_{12}^{:pq} is an L × L_2 L_{12} matrix,

$$\underline{\mathbf{H}}_{12} = \begin{bmatrix} \mathbf{H}_{12}^{:1:+} & \mathbf{H}_{12}^{:2:+} & \cdots & \mathbf{H}_{12}^{:P:+} \end{bmatrix} \tag{24}$$

is an L × P L_2 L_{12} matrix, and

$$\underline{\mathbf{h}}_{12} = \begin{bmatrix} \left(\mathbf{h}_{12}^{:1}\right)^{\mathsf{T}} & \left(\mathbf{h}_{12}^{:2}\right)^{\mathsf{T}} & \cdots & \left(\mathbf{h}_{12}^{:P}\right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}} \tag{25}$$

is a vector of length P L_2 L_{12}.
Finally, we may express (8) by considering h_{11}^{lp}, with l = 1, 2, …, L_2 and p = 1, 2, …, P. We have

$$\begin{aligned} \mathbf{h} &= \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \left( \mathbf{h}_{22}^{lq} \otimes \mathbf{h}_{21}^{lq} \otimes \mathbf{h}_{12}^{lp} \otimes \mathbf{I}_{L_{11}} \right) \mathbf{h}_{11}^{lp} = \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{H}_{11}^{lpq} \mathbf{h}_{11}^{lp} \\ &= \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{H}_{11}^{:pq} \mathbf{h}_{11}^{:p} = \sum_{p=1}^{P} \mathbf{H}_{11}^{:p:+} \mathbf{h}_{11}^{:p} = \underline{\mathbf{H}}_{11} \underline{\mathbf{h}}_{11}, \end{aligned} \tag{26}$$

where I_{L_{11}} is the L_{11} × L_{11} identity matrix, H_{11}^{lpq} = h_{22}^{lq} ⊗ h_{21}^{lq} ⊗ h_{12}^{lp} ⊗ I_{L_{11}} is an L × L_{11} matrix,

$$\mathbf{H}_{11}^{:pq} = \begin{bmatrix} \mathbf{H}_{11}^{1pq} & \mathbf{H}_{11}^{2pq} & \cdots & \mathbf{H}_{11}^{L_2 pq} \end{bmatrix} \tag{27}$$

is an L × L_2 L_{11} matrix,

$$\mathbf{h}_{11}^{:p} = \begin{bmatrix} \left(\mathbf{h}_{11}^{1p}\right)^{\mathsf{T}} & \left(\mathbf{h}_{11}^{2p}\right)^{\mathsf{T}} & \cdots & \left(\mathbf{h}_{11}^{L_2 p}\right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}} \tag{28}$$

is a column vector with L_2 L_{11} coefficients, H_{11}^{:p:+} = Σ_{q=1}^{Q} H_{11}^{:pq} is an L × L_2 L_{11} matrix,

$$\underline{\mathbf{H}}_{11} = \begin{bmatrix} \mathbf{H}_{11}^{:1:+} & \mathbf{H}_{11}^{:2:+} & \cdots & \mathbf{H}_{11}^{:P:+} \end{bmatrix} \tag{29}$$

is an L × P L_2 L_{11} matrix, and

$$\underline{\mathbf{h}}_{11} = \begin{bmatrix} \left(\mathbf{h}_{11}^{:1}\right)^{\mathsf{T}} & \left(\mathbf{h}_{11}^{:2}\right)^{\mathsf{T}} & \cdots & \left(\mathbf{h}_{11}^{:P}\right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}} \tag{30}$$

is a vector of length P L_2 L_{11}.

3. Signal Model for System Identification and Problem Formulation

In order to exploit the previous approach in the framework of system identification problems, let us consider a linear SISO system with the impulse response h of length L. At the discrete-time index t, the output of this system, denoted by d ( t ) , is given by the convolution between the samples of the input signal, x ( t ) , and the coefficients of the system impulse response from (1), usually corrupted by additive noise, i.e.,
$$d(t) = \sum_{\tau=0}^{L-1} h_{\tau} x(t-\tau) + w(t) = \mathbf{h}^{\mathsf{T}} \mathbf{x}(t) + w(t), \tag{31}$$
where
$$\mathbf{x}(t) = \begin{bmatrix} x(t) & x(t-1) & \cdots & x(t-L+1) \end{bmatrix}^{\mathsf{T}} \tag{32}$$
is a vector that groups the most recent L time samples of the zero-mean input signal, and w ( t ) is the zero-mean additive noise. In this framework, x ( t ) and w ( t ) are real-valued signals, which are uncorrelated.
From (31), we can evaluate the SNR of the SISO system. Using the notation y(t) = h^T x(t), the SNR results in
$$\mathrm{SNR} = \frac{\sigma_y^2}{\sigma_w^2} = \frac{E\left[\mathbf{h}^{\mathsf{T}} \mathbf{x}(t) \mathbf{x}^{\mathsf{T}}(t) \mathbf{h}\right]}{\sigma_w^2} = \frac{\mathbf{h}^{\mathsf{T}} \mathbf{R}_x \mathbf{h}}{\sigma_w^2}, \tag{33}$$
where σ_y^2 and σ_w^2 are the variances of y(t) and w(t), respectively, which are evaluated as σ_y^2 = E[y^2(t)] and σ_w^2 = E[w^2(t)], with E[·] denoting mathematical expectation. In addition, R_x = E[x(t) x^T(t)] represents the covariance matrix of the input signal.
The traditional Wiener filter results by solving the Wiener–Hopf equations [5], which lead to the solution
$$\widehat{\mathbf{h}}_{\mathrm{W}} = \mathbf{R}_x^{-1} \mathbf{r}_{xd}, \tag{34}$$
where r_{xd} = E[x(t) d(t)] represents the cross-correlation vector between the input signal x(t) and the reference (or desired) sequence d(t). In practice, the accuracy of this optimal filter depends on several factors, such as the accuracy of the statistics’ estimates and the SNR. In other words, there are two main issues that limit its performance: (i) a low amount of available data [i.e., samples of x(t) and d(t)], which influences the estimates of R_x and r_{xd}, and (ii) a low SNR, which also affects the estimation error. Under these circumstances, the normalized Euclidean distance between the true impulse response, h, and the conventional Wiener filter, ĥ_W, could be very high. Nevertheless, we can improve the estimation of h by exploiting its structure and involving the decomposition technique, as shown in the following.
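As a minimal illustration of (34) under sample-average statistics (a scaled-down sketch with illustrative dimensions, not the paper's full setup), the conventional Wiener filter can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(3)
L, K, snr_db = 32, 5, 20           # scaled-down stand-ins for the paper's L = 4096
N = K * L

h = rng.standard_normal(L)
h /= np.linalg.norm(h)

# AR(1) input with pole 0.9 and noisy system output, as in Eq. (31).
x = rng.standard_normal(N + L)
for t in range(1, len(x)):
    x[t] += 0.9 * x[t - 1]
y = np.convolve(x, h)[: len(x)]
w = rng.standard_normal(len(x)) * np.sqrt(np.var(y) / 10 ** (snr_db / 10))
d = y + w

# Sample-average statistics and the Wiener solution of Eq. (34).
X = np.stack([x[t - np.arange(L)] for t in range(L - 1, L - 1 + N)])
Rx_hat = X.T @ X / N
rxd_hat = X.T @ d[L - 1 : L - 1 + N] / N
h_hat = np.linalg.solve(Rx_hat, rxd_hat)

mis_db = 20 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))
print(mis_db < 0)                  # True: the estimate attenuates the misalignment
```

Shrinking N (smaller K) or lowering the SNR in this sketch degrades the estimate, which is exactly the regime where the decomposition-based iterative filter is argued to help.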
Based on the decompositions presented in Section 2, the reference signal from (31) can be rewritten as
$$\begin{aligned} d(t) &= \underline{\mathbf{h}}_{22}^{\mathsf{T}} \underline{\mathbf{H}}_{22}^{\mathsf{T}} \mathbf{x}(t) + w(t) \\ &= \underline{\mathbf{h}}_{21}^{\mathsf{T}} \underline{\mathbf{H}}_{21}^{\mathsf{T}} \mathbf{x}(t) + w(t) \\ &= \underline{\mathbf{h}}_{12}^{\mathsf{T}} \underline{\mathbf{H}}_{12}^{\mathsf{T}} \mathbf{x}(t) + w(t) \\ &= \underline{\mathbf{h}}_{11}^{\mathsf{T}} \underline{\mathbf{H}}_{11}^{\mathsf{T}} \mathbf{x}(t) + w(t). \end{aligned} \tag{35--38}$$
In the following, we show how to use the previous expressions in order to derive an iterative Wiener filter, which leads to better performance than the traditional Wiener filter in harsh conditions.

4. Proposed Iterative Wiener Filter for Linear System Identification

Let g be an estimate of h, i.e., a linear filter of length L, with real-valued coefficients. Furthermore, we can construct g̲_22, g̲_21, g̲_12, g̲_11, and G̲_22, G̲_21, G̲_12, G̲_11, in the same way we constructed h̲_22, h̲_21, h̲_12, h̲_11, and H̲_22, H̲_21, H̲_12, H̲_11. In this context, the error signal results as the difference between the desired signal, d(t), and the estimated signal, y_E(t) = g^T x(t). Consequently,
$$e(t) = d(t) - y_{\mathrm{E}}(t), \tag{39}$$
which can be further rewritten in four alternative manners:
$$\begin{aligned} e(t) &= d(t) - \underline{\mathbf{g}}_{22}^{\mathsf{T}} \underline{\mathbf{G}}_{22}^{\mathsf{T}} \mathbf{x}(t) \\ &= d(t) - \underline{\mathbf{g}}_{21}^{\mathsf{T}} \underline{\mathbf{G}}_{21}^{\mathsf{T}} \mathbf{x}(t) \\ &= d(t) - \underline{\mathbf{g}}_{12}^{\mathsf{T}} \underline{\mathbf{G}}_{12}^{\mathsf{T}} \mathbf{x}(t) \\ &= d(t) - \underline{\mathbf{g}}_{11}^{\mathsf{T}} \underline{\mathbf{G}}_{11}^{\mathsf{T}} \mathbf{x}(t). \end{aligned} \tag{40--43}$$
From (40) and considering that g ̲ 21 , g ̲ 12 , and g ̲ 11 are fixed, the mean-squared error (MSE) criterion can be expressed as
$$J\left(\underline{\mathbf{g}}_{22} \,\middle|\, \underline{\mathbf{g}}_{21}, \underline{\mathbf{g}}_{12}, \underline{\mathbf{g}}_{11}\right) = E\left\{\left[d(t) - \underline{\mathbf{g}}_{22}^{\mathsf{T}} \underline{\mathbf{G}}_{22}^{\mathsf{T}} \mathbf{x}(t)\right]^2\right\} = \sigma_d^2 - 2 \underline{\mathbf{g}}_{22}^{\mathsf{T}} \underline{\mathbf{G}}_{22}^{\mathsf{T}} \mathbf{r}_{xd} + \underline{\mathbf{g}}_{22}^{\mathsf{T}} \underline{\mathbf{G}}_{22}^{\mathsf{T}} \mathbf{R}_x \underline{\mathbf{G}}_{22} \underline{\mathbf{g}}_{22}, \tag{44}$$
where σ_d^2 represents the variance of the desired signal, i.e., σ_d^2 = E[d^2(t)]. The minimization of (44) leads to the Wiener filter for g̲_22, i.e.,
$$\underline{\mathbf{g}}_{22,\mathrm{W}} = \left(\underline{\mathbf{G}}_{22}^{\mathsf{T}} \mathbf{R}_x \underline{\mathbf{G}}_{22}\right)^{-1} \underline{\mathbf{G}}_{22}^{\mathsf{T}} \mathbf{r}_{xd}. \tag{45}$$
In a similar manner, following (41) and considering that g̲_22, g̲_12, and g̲_11 are fixed, the MSE criterion can be written as

$$J\left(\underline{\mathbf{g}}_{21} \,\middle|\, \underline{\mathbf{g}}_{22}, \underline{\mathbf{g}}_{12}, \underline{\mathbf{g}}_{11}\right) = E\left\{\left[d(t) - \underline{\mathbf{g}}_{21}^{\mathsf{T}} \underline{\mathbf{G}}_{21}^{\mathsf{T}} \mathbf{x}(t)\right]^2\right\} = \sigma_d^2 - 2 \underline{\mathbf{g}}_{21}^{\mathsf{T}} \underline{\mathbf{G}}_{21}^{\mathsf{T}} \mathbf{r}_{xd} + \underline{\mathbf{g}}_{21}^{\mathsf{T}} \underline{\mathbf{G}}_{21}^{\mathsf{T}} \mathbf{R}_x \underline{\mathbf{G}}_{21} \underline{\mathbf{g}}_{21}, \tag{46}$$

which leads to the Wiener filter for g̲_21, i.e.,

$$\underline{\mathbf{g}}_{21,\mathrm{W}} = \left(\underline{\mathbf{G}}_{21}^{\mathsf{T}} \mathbf{R}_x \underline{\mathbf{G}}_{21}\right)^{-1} \underline{\mathbf{G}}_{21}^{\mathsf{T}} \mathbf{r}_{xd}. \tag{47}$$

Next, based on (42) and considering that g̲_22, g̲_21, and g̲_11 are fixed, the MSE criterion becomes

$$J\left(\underline{\mathbf{g}}_{12} \,\middle|\, \underline{\mathbf{g}}_{22}, \underline{\mathbf{g}}_{21}, \underline{\mathbf{g}}_{11}\right) = E\left\{\left[d(t) - \underline{\mathbf{g}}_{12}^{\mathsf{T}} \underline{\mathbf{G}}_{12}^{\mathsf{T}} \mathbf{x}(t)\right]^2\right\} = \sigma_d^2 - 2 \underline{\mathbf{g}}_{12}^{\mathsf{T}} \underline{\mathbf{G}}_{12}^{\mathsf{T}} \mathbf{r}_{xd} + \underline{\mathbf{g}}_{12}^{\mathsf{T}} \underline{\mathbf{G}}_{12}^{\mathsf{T}} \mathbf{R}_x \underline{\mathbf{G}}_{12} \underline{\mathbf{g}}_{12}. \tag{48}$$

The minimization of (48) results in the Wiener filter for g̲_12, which is

$$\underline{\mathbf{g}}_{12,\mathrm{W}} = \left(\underline{\mathbf{G}}_{12}^{\mathsf{T}} \mathbf{R}_x \underline{\mathbf{G}}_{12}\right)^{-1} \underline{\mathbf{G}}_{12}^{\mathsf{T}} \mathbf{r}_{xd}. \tag{49}$$

Finally, from (43) and considering that g̲_22, g̲_21, and g̲_12 are fixed, the MSE criterion is

$$J\left(\underline{\mathbf{g}}_{11} \,\middle|\, \underline{\mathbf{g}}_{22}, \underline{\mathbf{g}}_{21}, \underline{\mathbf{g}}_{12}\right) = E\left\{\left[d(t) - \underline{\mathbf{g}}_{11}^{\mathsf{T}} \underline{\mathbf{G}}_{11}^{\mathsf{T}} \mathbf{x}(t)\right]^2\right\} = \sigma_d^2 - 2 \underline{\mathbf{g}}_{11}^{\mathsf{T}} \underline{\mathbf{G}}_{11}^{\mathsf{T}} \mathbf{r}_{xd} + \underline{\mathbf{g}}_{11}^{\mathsf{T}} \underline{\mathbf{G}}_{11}^{\mathsf{T}} \mathbf{R}_x \underline{\mathbf{G}}_{11} \underline{\mathbf{g}}_{11}. \tag{50}$$

The minimization of the cost function from (50) results in the Wiener filter for g̲_11, so that

$$\underline{\mathbf{g}}_{11,\mathrm{W}} = \left(\underline{\mathbf{G}}_{11}^{\mathsf{T}} \mathbf{R}_x \underline{\mathbf{G}}_{11}\right)^{-1} \underline{\mathbf{G}}_{11}^{\mathsf{T}} \mathbf{r}_{xd}. \tag{51}$$
The solutions g̲_22,W, g̲_21,W, g̲_12,W, and g̲_11,W can be alternatively iterated (as shown in [9,30]), so that the four optimal filters can be combined and finally converge to the Wiener solution after a certain number of iterations. In this way, denoting by g_W the global optimal filter, this estimate of h results in
$$\mathbf{g}_{\mathrm{W}} = \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{g}_{22,\mathrm{W}}^{lq} \otimes \mathbf{g}_{21,\mathrm{W}}^{lq} \otimes \mathbf{g}_{12,\mathrm{W}}^{lp} \otimes \mathbf{g}_{11,\mathrm{W}}^{lp}, \tag{52}$$
where g_{22,W}^{lq}, g_{21,W}^{lq}, g_{12,W}^{lp}, and g_{11,W}^{lp} are extracted from g̲_22,W, g̲_21,W, g̲_12,W, and g̲_11,W, respectively. The resulting algorithm is summarized in Table 1, in a step-by-step manner. In practice, a regularization constant (δ > 0) should be added to the diagonal of the matrices to be inverted in (45), (47), (49), and (51), in order to prevent any potential numerical issue related to the inversion of badly scaled or ill-conditioned matrices. Additional insights into different regularization techniques that could be used for the Wiener filter can be found in [40]. However, the analysis of the influence of the regularization parameter on the overall performance of the proposed iterative Wiener filter is beyond the scope of this paper.
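A minimal sketch of the alternating principle is given below for the perfectly separable special case L_2 = P = Q = 1 of (8), with exact statistics of a white input and a noiseless reference assumed; this is an illustrative simplification of the block updates, not the full WF-FOT of Table 1:

```python
import numpy as np

rng = np.random.default_rng(4)
L11, L12, L21, L22 = 2, 2, 2, 2
L = L11 * L12 * L21 * L22

# Separable target: the special case L2 = P = Q = 1 of Eq. (8).
h11, h12, h21, h22 = (rng.standard_normal(n) for n in (L11, L12, L21, L22))
h = np.kron(h22, np.kron(h21, np.kron(h12, h11)))

# Exact statistics of a white input (R_x = I) and a noiseless reference,
# so that r_xd = R_x h; this removes estimation error from the picture.
Rx = np.eye(L)
rxd = Rx @ h

def wiener_block(G):
    # One conditional Wiener solution, cf. Eqs. (45), (47), (49), (51).
    return np.linalg.solve(G.T @ Rx @ G, G.T @ rxd)

# Alternate the four block updates (block coordinate descent).
g11, g12, g21, g22 = (np.ones(n) for n in (L11, L12, L21, L22))
for _ in range(10):
    g11 = wiener_block(np.kron(g22[:, None],
                       np.kron(g21[:, None], np.kron(g12[:, None], np.eye(L11)))))
    g12 = wiener_block(np.kron(g22[:, None],
                       np.kron(g21[:, None], np.kron(np.eye(L12), g11[:, None]))))
    g21 = wiener_block(np.kron(g22[:, None],
                       np.kron(np.eye(L21), np.kron(g12, g11)[:, None])))
    g22 = wiener_block(np.kron(np.eye(L22),
                       np.kron(g21, np.kron(g12, g11))[:, None]))

g_w = np.kron(g22, np.kron(g21, np.kron(g12, g11)))
print(np.linalg.norm(h - g_w) / np.linalg.norm(h))  # close to machine precision
```

With a colored input and sample-estimated statistics (as in Section 5), the conditional solutions are no longer exact after one sweep, which is why several alternating iterations are needed in practice.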

5. Simulation Results

The experimental framework considered in this section is acoustic echo cancellation [2]. In this context, the main goal is to identify the acoustic impulse response that results due to the coupling between the loudspeaker and microphone (e.g., when using a hands-free communication device). This system is modeled using a long-length finite impulse response filter with thousands of coefficients. The excessive length in time is due to the slow speed of sound in the air, together with multiple reflections caused by the environment. In our simulations, we use a measured acoustic impulse response h with L = 4096 coefficients, while the sampling rate is 16 kHz. The fourth-order tensor decomposition of this impulse response is performed using L 1 = L 2 = 64 and L 11 = L 12 = L 21 = L 22 = 8 .
The acoustic impulse response is depicted in Figure 1a. In addition, the singular values (normalized to the largest one) corresponding to the SVD of the matrix H from (9) are shown in Figure 1b, while their logarithmic-scale representation is provided in Figure 1c. These plots illustrate the low-rank property of the acoustic impulse response since most of the singular values are close to zero. The low-rank property can also be noticed in the case of the component impulse responses from (2), indicated in Figure 1d–g. In these subfigures, the singular values corresponding to the SVD of the matrix associated with h_{2i} (of size L_{21} × L_{22}) are shown, for different values of i (1, L_2/4, L_2/2, and L_2).
The input signal x ( t ) is a first-order autoregressive process. Such a sequence is the result of filtering a white Gaussian noise through a system with the pole at 0.9. The output of the echo path, y ( t ) , is corrupted by white Gaussian noise, w ( t ) , with different SNRs, thus resulting in the desired signal d ( t ) from (31). The estimates of the required statistics are obtained by averaging across N data samples of x ( t ) and d ( t ) . Consequently, they result in
$$\widehat{\mathbf{R}}_x = \frac{1}{N} \sum_{t=1}^{N} \mathbf{x}(t) \mathbf{x}^{\mathsf{T}}(t), \tag{53}$$

$$\widehat{\mathbf{r}}_{xd} = \frac{1}{N} \sum_{t=1}^{N} \mathbf{x}(t) d(t), \tag{54}$$
which are used instead of R x and r x d , respectively. The accuracy of these estimates depends on the value of N. In the following, we consider that N = K L , with K > 0 . A large value of K leads to a reliable set of statistics estimates. However, in practice, there are various scenarios in which we deal with a low quantity of data, which affects the quality of the estimates from (53) and (54). In this case, the accuracy of the Wiener filter is also affected. The statistics’ estimates are also influenced by the SNR, so it is challenging to obtain a reliable solution for the Wiener filter in noisy environments. Nevertheless, since the proposed iterative Wiener filter involves smaller structures of data, it is expected to be more robust against such adverse conditions.
The performance measure used in all the experiments is the normalized misalignment, which basically relies on a Euclidean distance. It is evaluated (in dB) as 20 log10(‖h − ĥ‖/‖h‖), where ‖·‖ denotes the Euclidean norm and ĥ generically denotes an estimate of h.
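For reference, this measure can be written as a one-line helper (illustrative sketch):

```python
import numpy as np

def misalignment_db(h, h_hat):
    # Normalized misalignment: 20*log10(||h - h_hat|| / ||h||), in dB.
    return 20 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))

h = np.array([1.0, -0.5, 0.25])
print(misalignment_db(h, 0.9 * h))  # 20*log10(0.1), i.e., about -20 dB
```

More negative values indicate a more accurate estimate of the impulse response.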
The first set of results illustrates the comparison between the conventional version of the Wiener filter (WF) and the designed iterative Wiener filter that exploits a fourth-order tensor decomposition, which is referred to as WF-FOT. Different amounts of data samples are used (i.e., N = KL), by varying the value of K. In addition, different SNR conditions are considered. The conventional WF is obtained based on (34), i.e., ĥ_W, as the solution vector (of length L = 4096) that results by directly solving the Wiener–Hopf equations [5]. On the other hand, the estimate provided by the WF-FOT, i.e., g_W, results based on (52), using a combination of shorter sets of coefficients.
In Figure 2, the settings are K = 5 and SNR = 20 dB, which represent good conditions for obtaining a reliable set of estimates from (53) and (54). Thus, the conventional WF provides a reasonable attenuation of the misalignment. Nevertheless, the WF-FOT is able to reach lower misalignment levels, for different settings of its decomposition parameters, P and Q, which are much smaller as compared to their maximum limits, L 12 and L 22 , respectively. Moreover, the solution obtained for P = Q = 1 provides not only a good attenuation of the misalignment, but also the fewest sets of coefficients to be combined based on (52).
The performance gain of the WF-FOT is more apparent when a smaller amount of data can be used for the statistics’ estimates from (53) and (54). Such a scenario is considered in Figure 3, where K = 2 . As we can notice, the difference between the proposed WF-FOT and the conventional WF is more significant in this case.
The case K = 1 (or N = L ) represents a difficult scenario for the conventional WF, due to the limited amount of data available for the estimation of the statistics. This can be noticed in Figure 4, where the conventional WF cannot provide a reliable estimate, while the proposed WF-FOT is still able to achieve some attenuation of the misalignment.
The SNR is also an important factor that influences the accuracy of the Wiener solution. This aspect is supported in Figure 5, where a sufficient amount of data is available (i.e., N = 5 L ), but SNR = 10 dB. The noisy conditions considered in this simulation lead to higher misalignment levels for all the algorithms. Nevertheless, the proposed WF-FOT still outperforms its conventional counterpart, especially for the case when P = Q = 1 , which is also advantageous in terms of the parameter space (i.e., the lowest number of coefficients to be estimated).
The second set of experiments focuses on the comparison between the proposed WF-FOT and two previously developed solutions that also exploit the impulse response decomposition. First, we consider the iterative Wiener filter using the nearest Kronecker product (namely, WF-NKP), which was presented in [9] in the framework of a second-order decomposition level. In other words, if an impulse response h of length L = L_1 L_2 (with L_2 ≤ L_1) is reshaped as a matrix H of size L_1 × L_2 and the rank of this corresponding matrix is P < L_2, then the system to be identified owns the low-rank property. Hence, using the NKP, the impulse response is expressed as h = Σ_{p=1}^{P} h_{2,p} ⊗ h_{1,p}, where h_{1,p} and h_{2,p}, with p = 1, 2, …, P, are (shorter) impulse responses of lengths L_1 and L_2, respectively. Basically, the WF-NKP operates with two filters of lengths P L_1 and P L_2, which group the sets of coefficients h_{1,p} and h_{2,p}, with p = 1, 2, …, P. In our experimental scenario (with L = L_1 L_2 = 4096), the decomposition setup of the WF-NKP is L_1 = L_2 = 64 and P < L_2. For example, according to Figure 1b,c, we can choose P = 20. Second, the iterative Wiener filter that relies on a third-order tensor decomposition (referred to as WF-TOT) from [22] is considered in the comparisons. In this case, the length of the impulse response is factorized as L = L_{11} L_{12} L_2 = 4096, with L_{11} ≥ L_{12} ≥ L_2, so that the decomposition setup of the WF-TOT is L_{11} = L_{12} = 32, L_2 = 4, and P < L_{12}. For the identification of acoustic impulse responses, P can be chosen in the range L_{12}/3 to L_{12}/2. In the current setup, we choose P = 10, 12, or 14, as indicated in the captions of the following figures.
In Figure 6, the number of available data samples is N = 5 L and SNR = 20 dB. It can be noticed that the three comparing algorithms achieve similar performance in terms of their accuracy. However, the WF-FOT reaches the lowest misalignment level among its counterparts, with a slightly slower convergence rate.
Next, in Figure 7, the number of available data samples is reduced to N = 2 L , while keeping the same SNR conditions. As we can notice, the gain of the WF-FOT becomes more apparent in this scenario, especially as compared to WF-NKP.
A similar behavior can be observed in Figure 8, where N = 5L and SNR = 10 dB. The lower SNR level influences the performances of all the algorithms. Nevertheless, the WF-FOT still outperforms both its counterparts. In the current setup, the WF-NKP [9] involves two filters of lengths P L_1 and P L_2, i.e., a total of 2 × 20 × 64 = 2560 coefficients. In addition, the WF-TOT [22] uses three filters of lengths P L_2 L_{11}, P L_2 L_{12}, and L_2^2, which result in 2 × 4 × 10 × 32 + 4^2 = 2576 coefficients. The smallest parameter space is obtained by the WF-FOT, which requires four filters of lengths P L_2 L_{11}, P L_2 L_{12}, Q L_2 L_{21}, and Q L_2 L_{22}, i.e., a total of 4 × 8 × 64 = 2048 coefficients. Finally, we should outline that all the decomposition-based iterative Wiener filters involve a much smaller parameter space as compared to the conventional WF, which needs to estimate L = 4096 coefficients, corresponding to the full length of the impulse response.

6. Conclusions and Perspectives

In this paper, we have designed an iterative Wiener filter based on a fourth-order tensor decomposition associated with the impulse response. Due to the dimensionality reduction, the proposed WF-FOT is suitable for system identification problems with a large parameter space since it combines the estimates provided by four shorter filters. The algorithm has been tested in an acoustic echo cancellation scenario, targeting the identification of a long acoustic impulse response with L = 4096 coefficients. Based on the obtained results, the gain is twofold. First, since the WF-FOT operates with smaller data structures, it shows improved robustness to the accuracy of the statistics' estimates and in very noisy environments, compared to the conventional Wiener filter. As indicated by the experiments, it can provide a reliable solution when only a limited amount of data (e.g., N close to L) is available to estimate the statistics. Second, due to the higher-order decomposition, the WF-FOT outperforms the previously designed iterative Wiener filters that exploit the second-order and third-order decomposition levels, i.e., the WF-NKP [9] and WF-TOT [22], respectively. In addition, for the reported settings, it also has the smallest parameter space (i.e., L / 2 coefficients) among the decomposition-based algorithms involved in the comparisons. Another important advantage is related to the rank of the fourth-order tensor, which is not obtained by any approximation technique, but is controlled and limited to a known value.
Finally, we should note some inherent limitations of the Wiener filter, even in this iterative form. First, it relies on the statistics' estimates, which are challenging to evaluate, especially in real-time applications. Second, it is based on a stationarity assumption, which does not hold in time-varying environments. Third, it involves matrix inversion operations to solve the associated Wiener–Hopf equations, which could raise numerical and complexity issues. Consequently, two main directions for future research would be worth exploring. First, we aim to design adaptive filtering algorithms following the fourth-order tensor decomposition approach. Such adaptive algorithms would address two of the previously mentioned limitations, since they would no longer require the statistics' estimates and could operate in nonstationary conditions. Second, to address the matrix inversion issues, the proposed WF-FOT can be reformulated using efficient iterative methods for solving the four sets of Wiener–Hopf equations, such as the conjugate gradient method and different coordinate descent techniques [41].
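As an illustration of the last point, a regularized Wiener–Hopf system of the form (R̂_x + δI) g = r̂_xd can be solved without forming an explicit inverse. The following is a minimal conjugate gradient sketch (the dimensions, the synthetic covariance estimate, and the stopping rule are our own illustrative choices):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive definite A, no explicit inverse."""
    n = len(b)
    max_iter = max_iter or 10 * n
    x = np.zeros(n)
    r = b.copy()            # residual b - A x (x starts at zero)
    p = r.copy()            # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD system standing in for R_hat_x + delta * I.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
R = X.T @ X / 200 + 0.01 * np.eye(50)   # regularized covariance estimate
r = rng.standard_normal(50)
g = conjugate_gradient(R, r)
print(np.allclose(R @ g, r))  # True
```

Each iteration costs one matrix-vector product, which is attractive when the reduced systems (of sizes Q L 2 L 22 , Q L 2 L 21 , P L 2 L 12 , and P L 2 L 11 ) must be solved repeatedly across the alternating steps.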

Author Contributions

Conceptualization, J.B.; methodology, J.B.; software, L.-M.D.; validation, C.P.; formal analysis, L.-M.D.; investigation, C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS–UEFISCDI, project number PN-III-P4-PCE-2021-0438, within PNCDI III.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1999.
  2. Hänsler, E.; Schmidt, G. Acoustic Echo and Noise Control—A Practical Approach; Wiley: Hoboken, NJ, USA, 2004.
  3. Liu, J.; Grant, S.L. Proportionate adaptive filtering for block-sparse system identification. IEEE/ACM Trans. Audio Speech Lang. Process. 2016, 24, 623–630.
  4. Radhika, S.; Albu, F.; Chandrasekar, A. Proportionate maximum Versoria criterion-based adaptive algorithm for sparse system identification. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 1902–1906.
  5. Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002.
  6. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer: New York, NY, USA, 2013.
  7. Benesty, J.; Paleologu, C.; Dogariu, L.-M.; Ciochină, S. Identification of linear and bilinear systems: A unified study. Electronics 2021, 10, 1790.
  8. Vadhvana, S.; Yadav, S.K.; Bhattacharjee, S.S.; George, N.V. An improved constrained LMS algorithm for fast adaptive beamforming based on a low rank approximation. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 3605–3609.
  9. Paleologu, C.; Benesty, J.; Ciochină, S. Linear system identification based on a Kronecker product decomposition. IEEE/ACM Trans. Audio Speech Lang. Process. 2018, 26, 1793–1808.
  10. Dogariu, L.-M.; Benesty, J.; Paleologu, C.; Ciochină, S. Identification of room acoustic impulse responses via Kronecker product decompositions. IEEE/ACM Trans. Audio Speech Lang. Process. 2022, 30, 2828–2841.
  11. Bhattacharjee, S.S.; George, N.V. Nearest Kronecker product decomposition based normalized least mean square algorithm. In Proceedings of the IEEE ICASSP, Barcelona, Spain, 4–8 May 2020; pp. 476–480.
  12. Bhattacharjee, S.S.; George, N.V. Fast and efficient acoustic feedback cancellation based on low rank approximation. Signal Process. 2021, 182, 107984.
  13. Bhattacharjee, S.S.; George, N.V. Nearest Kronecker product decomposition based linear-in-the-parameters nonlinear filters. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 2111–2122.
  14. Huang, G.; Benesty, J.; Cohen, I.; Chen, J. Kronecker product multichannel linear filtering for adaptive weighted prediction error-based speech dereverberation. IEEE/ACM Trans. Audio Speech Lang. Process. 2022, 30, 1277–1289.
  15. Bhattacharjee, S.S.; Patel, V.; George, N.V. Nonlinear spline adaptive filters based on a low rank approximation. Signal Process. 2022, 201, 108726.
  16. Comon, P. Tensors: A brief introduction. IEEE Signal Process. Mag. 2014, 31, 44–53.
  17. Friedland, S.; Tammali, V. Low-rank approximation of tensors. In Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory; Benner, P., Ed.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 377–411.
  18. Cichocki, A.; Mandic, D.P.; Phan, A.; Caiafa, C.F.; Zhou, G.; Zhao, Q.; De Lathauwer, L. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163.
  19. Da Silva, A.P.; Comon, P.; de Almeida, A.L.F. A finite algorithm to compute rank-1 tensor approximations. IEEE Signal Process. Lett. 2016, 23, 959–963.
  20. Bousse, M.; Debals, O.; De Lathauwer, L. A tensor-based method for large-scale blind source separation using segmentation. IEEE Trans. Signal Process. 2017, 65, 346–358.
  21. Sidiropoulos, N.; De Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582.
  22. Benesty, J.; Paleologu, C.; Ciochină, S. Linear system identification based on a third-order tensor decomposition. IEEE Signal Process. Lett. 2023, 30, 503–507.
  23. Vervliet, N.; Debals, O.; Sorber, L.; De Lathauwer, L. Breaking the curse of dimensionality using decompositions of incomplete tensors: Tensor-based scientific computing in big data analysis. IEEE Signal Process. Mag. 2014, 31, 71–79.
  24. Becker, H.; Albera, L.; Comon, P.; Gribonval, R.; Wendling, F.; Merlet, I. Brain-source imaging: From sparse to tensor models. IEEE Signal Process. Mag. 2015, 32, 100–112.
  25. Thakur, N.; Han, C.Y. Multimodal approaches for indoor localization for ambient assisted living in smart homes. Information 2021, 12, 114.
  26. Almalki, M.; Alsulami, M.H.; Alshdadi, A.A.; Almuayqil, S.N.; Alsaqer, M.S.; Atkins, A.S.; Choukou, M.-A. Delivering digital healthcare for elderly: A holistic framework for the adoption of ambient assisted living. Int. J. Environ. Res. Public Health 2022, 19, 16760.
  27. Rupp, M.; Schwarz, S. Gradient-based approaches to learn tensor products. In Proceedings of the EUSIPCO, Nice, France, 31 August–4 September 2015; pp. 2486–2490.
  28. Rupp, M.; Schwarz, S. A tensor LMS algorithm. In Proceedings of the IEEE ICASSP, South Brisbane, QLD, Australia, 19–24 April 2015; pp. 3347–3351.
  29. Ribeiro, L.N.; de Almeida, A.L.F.; Mota, J.C.M. Identification of separable systems using trilinear filtering. In Proceedings of the IEEE CAMSAP, Cancun, Mexico, 13–16 December 2015; pp. 189–192.
  30. Benesty, J.; Paleologu, C.; Ciochină, S. On the identification of bilinear forms with the Wiener filter. IEEE Signal Process. Lett. 2017, 24, 653–657.
  31. Gesbert, D.; Duhamel, P. Robust blind joint data/channel estimation based on bilinear optimization. In Proceedings of the WSSAP, Corfu, Greece, 24–26 June 1996; pp. 168–171.
  32. Stenger, A.; Kellermann, W. Adaptation of a memoryless preprocessor for nonlinear acoustic echo cancelling. Signal Process. 2000, 80, 1747–1760.
  33. Ribeiro, L.N.; Schwarz, S.; Rupp, M.; de Almeida, A.L.F.; Mota, J.C.M. A low-complexity equalizer for massive MIMO systems based on array separability. In Proceedings of the EUSIPCO, Kos, Greece, 28 August–2 September 2017; pp. 2522–2526.
  34. Da Costa, M.N.; Favier, G.; Romano, J.M.T. Tensor modelling of MIMO communication systems with performance analysis and Kronecker receivers. Signal Process. 2018, 145, 304–316.
  35. Ribeiro, L.N.; de Almeida, A.L.F.; Mota, J.C.M. Separable linearly constrained minimum variance beamformers. Signal Process. 2019, 158, 15–25.
  36. Wright, S.J. Coordinate descent algorithms. Math. Program. 2015, 151, 3–34.
  37. Bertsekas, D.P. Nonlinear Programming, 2nd ed.; Athena Scientific: Belmont, MA, USA, 1999.
  38. Van Loan, C.F. The ubiquitous Kronecker product. J. Comput. Appl. Math. 2000, 123, 85–100.
  39. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
  40. Dogariu, L.-M.; Benesty, J.; Paleologu, C.; Ciochină, S. An insightful overview of the Wiener filter for system identification. Appl. Sci. 2021, 11, 7774.
  41. Zakharov, Y.V.; White, G.P.; Liu, J. Low-complexity RLS algorithms using dichotomous coordinate descent iterations. IEEE Trans. Signal Process. 2008, 56, 3150–3161.
Figure 1. (a) Acoustic impulse response h of length L = 4096 (its tail is magnified in the top-right of the subfigure); (b) Singular values of the corresponding matrix H of size L 1 × L 2 , with L 1 = L 2 = 64 ; (c) Singular values from subfigure (b) at logarithmic scale; (dg) Singular values of the corresponding matrix (of size L 21 × L 22 ) associated to the impulse response h 2 i , for i = 1 , i = L 2 / 4 , i = L 2 / 2 , and i = L 2 , respectively.
Figure 2. Normalized misalignment of the conventional WF and WF-FOT (with different values of P and Q). The statistics are estimated based on N = 5 L data samples (with L = 4096 ) and SNR = 20 dB.
Figure 3. Normalized misalignment of the conventional WF and WF-FOT (with different values of P and Q). The statistics are estimated based on N = 2 L data samples (with L = 4096 ) and SNR = 20 dB.
Figure 4. Normalized misalignment of the conventional WF and WF-FOT (with different values of P and Q). The statistics are estimated based on N = L data samples (with L = 4096 ) and SNR = 20 dB.
Figure 5. Normalized misalignment of the conventional WF and WF-FOT (with different values of P and Q). The statistics are estimated based on N = 5 L data samples (with L = 4096 ) and SNR = 10 dB.
Figure 6. Normalized misalignment of the WF-NKP [9] (with L 1 = L 2 = 64 and P = 20 ), WF-TOT [22] (with L 11 = L 12 = 32 , L 2 = 4 , and P = 14 ), and WF-FOT (with L 11 = L 12 = L 21 = L 22 = 8 and P = Q = 1 ). The statistics are estimated based on N = 5 L data samples (with L = 4096 ) and SNR = 20 dB.
Figure 7. Normalized misalignment of the WF-NKP [9] (with L 1 = L 2 = 64 and P = 20 ), WF-TOT [22] (with L 11 = L 12 = 32 , L 2 = 4 , and P = 12 ), and WF-FOT (with L 11 = L 12 = L 21 = L 22 = 8 and P = Q = 1 ). The statistics are estimated based on N = 2 L data samples (with L = 4096 ) and SNR = 20 dB.
Figure 8. Normalized misalignment of the WF-NKP [9] (with L 1 = L 2 = 64 and P = 20 ), WF-TOT [22] (with L 11 = L 12 = 32 , L 2 = 4 , and P = 10 ), and WF-FOT (with L 11 = L 12 = L 21 = L 22 = 8 and P = Q = 1 ). The statistics are estimated based on N = 5 L data samples (with L = 4096 ) and SNR = 10 dB.
Table 1. The Main Steps of the Proposed Iterative Wiener Filter.
Statistics: $\hat{\mathbf{R}}_x$, $\hat{\mathbf{r}}_{xd}$ [estimated based on (53) and (54), respectively], $\delta > 0$ (regularization constant).

Setup: $L = L_1 L_2 = L_{11} L_{12} L_{21} L_{22}$, with $L_1 \geq L_2$, $L_{11} \geq L_{12}$, $L_{21} \geq L_{22}$, $P \leq L_{12}$, $Q \leq L_{22}$.

Initialization: $\mathbf{g}_{11,\mathrm{W}}^{lp} = [\,1 \;\; \mathbf{0}_{L_{11}-1}^T\,]^T$, $\mathbf{g}_{12,\mathrm{W}}^{lp} = [\,1 \;\; \mathbf{0}_{L_{12}-1}^T\,]^T$, for $l = 1, 2, \ldots, L_2$ and $p = 1, 2, \ldots, P$; $\mathbf{g}_{21,\mathrm{W}}^{lq} = [\,1 \;\; \mathbf{0}_{L_{21}-1}^T\,]^T$, for $l = 1, 2, \ldots, L_2$ and $q = 1, 2, \ldots, Q$.

Algorithm:

Step 1:
$\mathbf{G}_{22}^{lpq} = \mathbf{I}_{L_{22}} \otimes \mathbf{g}_{21,\mathrm{W}}^{lq} \otimes \mathbf{g}_{12,\mathrm{W}}^{lp} \otimes \mathbf{g}_{11,\mathrm{W}}^{lp}$, for $l = 1, \ldots, L_2$, $p = 1, \ldots, P$, $q = 1, \ldots, Q$;
$\mathbf{G}_{22}^{:pq} = [\, \mathbf{G}_{22}^{1pq} \;\; \mathbf{G}_{22}^{2pq} \;\; \cdots \;\; \mathbf{G}_{22}^{L_2 pq} \,]$, for $p = 1, \ldots, P$, $q = 1, \ldots, Q$;
$\mathbf{G}_{22}^{:+:q} = \sum_{p=1}^{P} \mathbf{G}_{22}^{:pq}$, for $q = 1, \ldots, Q$; $\quad \underline{\mathbf{G}}_{22} = [\, \mathbf{G}_{22}^{:+:1} \;\; \mathbf{G}_{22}^{:+:2} \;\; \cdots \;\; \mathbf{G}_{22}^{:+:Q} \,]$;
$\underline{\mathbf{g}}_{22,\mathrm{W}} = \big( \underline{\mathbf{G}}_{22}^T \hat{\mathbf{R}}_x \underline{\mathbf{G}}_{22} + \delta \mathbf{I}_{Q L_2 L_{22}} \big)^{-1} \underline{\mathbf{G}}_{22}^T \hat{\mathbf{r}}_{xd} = [\, \mathbf{g}_{22,\mathrm{W}}^{:1\,T} \;\; \cdots \;\; \mathbf{g}_{22,\mathrm{W}}^{:Q\,T} \,]^T$, with $\mathbf{g}_{22,\mathrm{W}}^{:q} = [\, \mathbf{g}_{22,\mathrm{W}}^{1q\,T} \;\; \cdots \;\; \mathbf{g}_{22,\mathrm{W}}^{L_2 q\,T} \,]^T$.

Step 2:
$\mathbf{G}_{21}^{lpq} = \mathbf{g}_{22,\mathrm{W}}^{lq} \otimes \mathbf{I}_{L_{21}} \otimes \mathbf{g}_{12,\mathrm{W}}^{lp} \otimes \mathbf{g}_{11,\mathrm{W}}^{lp}$, for $l = 1, \ldots, L_2$, $p = 1, \ldots, P$, $q = 1, \ldots, Q$;
$\mathbf{G}_{21}^{:pq} = [\, \mathbf{G}_{21}^{1pq} \;\; \mathbf{G}_{21}^{2pq} \;\; \cdots \;\; \mathbf{G}_{21}^{L_2 pq} \,]$, for $p = 1, \ldots, P$, $q = 1, \ldots, Q$;
$\mathbf{G}_{21}^{:+:q} = \sum_{p=1}^{P} \mathbf{G}_{21}^{:pq}$, for $q = 1, \ldots, Q$; $\quad \underline{\mathbf{G}}_{21} = [\, \mathbf{G}_{21}^{:+:1} \;\; \mathbf{G}_{21}^{:+:2} \;\; \cdots \;\; \mathbf{G}_{21}^{:+:Q} \,]$;
$\underline{\mathbf{g}}_{21,\mathrm{W}} = \big( \underline{\mathbf{G}}_{21}^T \hat{\mathbf{R}}_x \underline{\mathbf{G}}_{21} + \delta \mathbf{I}_{Q L_2 L_{21}} \big)^{-1} \underline{\mathbf{G}}_{21}^T \hat{\mathbf{r}}_{xd} = [\, \mathbf{g}_{21,\mathrm{W}}^{:1\,T} \;\; \cdots \;\; \mathbf{g}_{21,\mathrm{W}}^{:Q\,T} \,]^T$, with $\mathbf{g}_{21,\mathrm{W}}^{:q} = [\, \mathbf{g}_{21,\mathrm{W}}^{1q\,T} \;\; \cdots \;\; \mathbf{g}_{21,\mathrm{W}}^{L_2 q\,T} \,]^T$.

Step 3:
$\mathbf{G}_{12}^{lpq} = \mathbf{g}_{22,\mathrm{W}}^{lq} \otimes \mathbf{g}_{21,\mathrm{W}}^{lq} \otimes \mathbf{I}_{L_{12}} \otimes \mathbf{g}_{11,\mathrm{W}}^{lp}$, for $l = 1, \ldots, L_2$, $p = 1, \ldots, P$, $q = 1, \ldots, Q$;
$\mathbf{G}_{12}^{:pq} = [\, \mathbf{G}_{12}^{1pq} \;\; \mathbf{G}_{12}^{2pq} \;\; \cdots \;\; \mathbf{G}_{12}^{L_2 pq} \,]$, for $p = 1, \ldots, P$, $q = 1, \ldots, Q$;
$\mathbf{G}_{12}^{:p:+} = \sum_{q=1}^{Q} \mathbf{G}_{12}^{:pq}$, for $p = 1, \ldots, P$; $\quad \underline{\mathbf{G}}_{12} = [\, \mathbf{G}_{12}^{:1:+} \;\; \mathbf{G}_{12}^{:2:+} \;\; \cdots \;\; \mathbf{G}_{12}^{:P:+} \,]$;
$\underline{\mathbf{g}}_{12,\mathrm{W}} = \big( \underline{\mathbf{G}}_{12}^T \hat{\mathbf{R}}_x \underline{\mathbf{G}}_{12} + \delta \mathbf{I}_{P L_2 L_{12}} \big)^{-1} \underline{\mathbf{G}}_{12}^T \hat{\mathbf{r}}_{xd} = [\, \mathbf{g}_{12,\mathrm{W}}^{:1\,T} \;\; \cdots \;\; \mathbf{g}_{12,\mathrm{W}}^{:P\,T} \,]^T$, with $\mathbf{g}_{12,\mathrm{W}}^{:p} = [\, \mathbf{g}_{12,\mathrm{W}}^{1p\,T} \;\; \cdots \;\; \mathbf{g}_{12,\mathrm{W}}^{L_2 p\,T} \,]^T$.

Step 4:
$\mathbf{G}_{11}^{lpq} = \mathbf{g}_{22,\mathrm{W}}^{lq} \otimes \mathbf{g}_{21,\mathrm{W}}^{lq} \otimes \mathbf{g}_{12,\mathrm{W}}^{lp} \otimes \mathbf{I}_{L_{11}}$, for $l = 1, \ldots, L_2$, $p = 1, \ldots, P$, $q = 1, \ldots, Q$;
$\mathbf{G}_{11}^{:pq} = [\, \mathbf{G}_{11}^{1pq} \;\; \mathbf{G}_{11}^{2pq} \;\; \cdots \;\; \mathbf{G}_{11}^{L_2 pq} \,]$, for $p = 1, \ldots, P$, $q = 1, \ldots, Q$;
$\mathbf{G}_{11}^{:p:+} = \sum_{q=1}^{Q} \mathbf{G}_{11}^{:pq}$, for $p = 1, \ldots, P$; $\quad \underline{\mathbf{G}}_{11} = [\, \mathbf{G}_{11}^{:1:+} \;\; \mathbf{G}_{11}^{:2:+} \;\; \cdots \;\; \mathbf{G}_{11}^{:P:+} \,]$;
$\underline{\mathbf{g}}_{11,\mathrm{W}} = \big( \underline{\mathbf{G}}_{11}^T \hat{\mathbf{R}}_x \underline{\mathbf{G}}_{11} + \delta \mathbf{I}_{P L_2 L_{11}} \big)^{-1} \underline{\mathbf{G}}_{11}^T \hat{\mathbf{r}}_{xd} = [\, \mathbf{g}_{11,\mathrm{W}}^{:1\,T} \;\; \cdots \;\; \mathbf{g}_{11,\mathrm{W}}^{:P\,T} \,]^T$, with $\mathbf{g}_{11,\mathrm{W}}^{:p} = [\, \mathbf{g}_{11,\mathrm{W}}^{1p\,T} \;\; \cdots \;\; \mathbf{g}_{11,\mathrm{W}}^{L_2 p\,T} \,]^T$.

Optimal filter:
$\mathbf{g}_{\mathrm{W}} = \sum_{l=1}^{L_2} \sum_{p=1}^{P} \sum_{q=1}^{Q} \mathbf{g}_{22,\mathrm{W}}^{lq} \otimes \mathbf{g}_{21,\mathrm{W}}^{lq} \otimes \mathbf{g}_{12,\mathrm{W}}^{lp} \otimes \mathbf{g}_{11,\mathrm{W}}^{lp}$.
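The four alternated steps in Table 1 follow the same pattern at every level: freeze all but one set of factors, build the combined matrix, and solve a reduced set of regularized Wiener–Hopf equations. A stripped-down second-order version (two factor sets, rank P) makes the mechanics visible; the dimensions, variable names, and random initialization below are our own illustrative choices, not the full WF-FOT:

```python
import numpy as np

rng = np.random.default_rng(2)
L1, L2, P, delta = 8, 4, 2, 1e-8
L = L1 * L2

# True low-rank system and exact statistics: R g_true = r.
g_true = sum(np.kron(rng.standard_normal(L2), rng.standard_normal(L1))
             for _ in range(P))
X = rng.standard_normal((4 * L, L))
R = X.T @ X / (4 * L)            # stands in for R_hat_x
r = R @ g_true                   # stands in for r_hat_xd

# Factors: h1 is L1 x P, h2 is L2 x P; g = sum_p kron(h2[:, p], h1[:, p]).
h1 = rng.standard_normal((L1, P))   # random init breaks symmetry between terms

for _ in range(50):
    # Step A: freeze h1, solve for the stacked h2 via G1 = [I_{L2} (x) h1_p].
    G1 = np.hstack([np.kron(np.eye(L2), h1[:, p:p+1]) for p in range(P)])
    h2_vec = np.linalg.solve(G1.T @ R @ G1 + delta * np.eye(P * L2), G1.T @ r)
    h2 = h2_vec.reshape(P, L2).T
    # Step B: freeze h2, solve for the stacked h1 via G2 = [h2_p (x) I_{L1}].
    G2 = np.hstack([np.kron(h2[:, p:p+1], np.eye(L1)) for p in range(P)])
    h1_vec = np.linalg.solve(G2.T @ R @ G2 + delta * np.eye(P * L1), G2.T @ r)
    h1 = h1_vec.reshape(P, L1).T

g = sum(np.kron(h2[:, p], h1[:, p]) for p in range(P))
misalignment = np.linalg.norm(g - g_true) / np.linalg.norm(g_true)
print(f"normalized misalignment: {misalignment:.2e}")
```

Each reduced system here has only P L 1 or P L 2 unknowns instead of L, which is the same dimensionality-reduction mechanism that the four-step WF-FOT applies at the fourth-order level.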