Communication

Stochastic Model for the LMS Algorithm with Symmetric/Antisymmetric Properties

by Augusto Cesar Becker 1, Eduardo Vinicius Kuhn 1, Marcos Vinicius Matsuo 2, Jacob Benesty 3, Constantin Paleologu 4,*, Laura-Maria Dogariu 4 and Silviu Ciochină 4

1 LAPSE—Electronics and Signal Processing Laboratory, Department of Electronics Engineering, Federal University of Technology-Paraná, Toledo 85902-490, PR, Brazil
2 GEPS—Electronics and Signal Processing Group, Department of Control, Automation, and Computation, Federal University of Santa Catarina, Blumenau 89036-004, SC, Brazil
3 National Institute of Scientific Research—Energy, Materials, and Telecommunications, University of Quebec, Montreal, QC H5A 1K6, Canada
4 Department of Telecommunications, Faculty of Electronics, Telecommunications, and Information Technology, University Politehnica of Bucharest, 060042 Bucharest, Romania
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(9), 1908; https://doi.org/10.3390/sym14091908
Submission received: 17 August 2022 / Revised: 6 September 2022 / Accepted: 7 September 2022 / Published: 12 September 2022
(This article belongs to the Special Issue New Approaches for System Identification Problems)

Abstract:
This paper presents a stochastic model for the least-mean-square algorithm with symmetric/antisymmetric properties (LMS-SAS), operating in a system identification setup with Gaussian input data. Specifically, model expressions are derived to describe the mean weight behavior of the (global and virtual) adaptive filters, learning curves, and evolution of some correlation-like matrices, which allow predicting the algorithm behavior. Simulation results are shown and discussed, confirming the accuracy of the proposed model for both transient and steady-state phases.

1. Introduction

In several practical applications, adaptive filtering techniques are used to obtain, in real time, an approximate representation of the input–output relationship of an unknown system, as occurs in system identification problems, echo and noise cancellation, and adaptive control [1,2,3,4,5]. In some of these applications, the system to be identified exhibits known special characteristics (e.g., sparseness, symmetry, or antisymmetry), which can be exploited by the adaptive algorithm to improve its convergence and/or reduce its computational complexity. For instance, proportionate-type algorithms [6,7,8,9,10,11] exploit the sparseness of the system impulse response. In turn, algorithms operating with bilinear forms [12,13] consider a more efficient representation for characterizing multiple-input/single-output (MISO) spatiotemporal systems. Meanwhile, the least-mean-square algorithm with symmetric/antisymmetric properties (LMS-SAS) and its normalized version [14] exploit the intrinsic symmetric or antisymmetric characteristics of some systems to decompose their impulse responses into two smaller vectors using the Kronecker product. Given this practical applicability, theoretical studies on the behavior of these algorithms become relevant.
In this context, stochastic models may serve as a theoretical basis for studying the behavior of adaptive algorithms without relying solely on extensive Monte Carlo (MC) simulations [15,16,17,18,19,20,21,22,23]. Such models are useful for establishing (through mathematical expressions) stability conditions for the algorithm and a suitable range of values for its parameters, as well as cause-and-effect relationships between performance metrics and algorithm parameters, which can then guide the designer [24,25,26,27,28,29]. Moreover, models can be used to identify undesirable behavior of the algorithm, thereby leading to the development of enhanced algorithms [30,31]. However, despite this importance, a stochastic model for the LMS-SAS algorithm [14] has not (to the best of our knowledge) been discussed in the literature so far. Aiming to fill this gap, the present research work has the following goals:
(i)
To develop a stochastic model describing the behavior of the algorithm, considering (uncorrelated and correlated) Gaussian input data;
(ii)
To derive expressions characterizing the mean weight behavior of the adaptive filters, learning curves, and evolution of some correlation-like matrices; and
(iii)
To verify and discuss the accuracy of the model for different operating scenarios.
Note that performance comparisons with other algorithms from the literature are beyond the scope of this research work (for such, see [14]).
The remainder of the paper is organized as follows. Section 2 introduces the system model and revisits the LMS-SAS algorithm. Section 3 presents the proposed stochastic model. Section 4 provides simulation results for different operating scenarios. Section 5 presents a brief discussion about the results and highlights some algorithm characteristics. Finally, Section 6 summarizes conclusions and suggestions for future research works.
The mathematical notation adopted in this paper follows standard practice: lower-case boldface letters denote vectors, upper-case boldface letters denote matrices, and italic Roman and Greek letters denote scalars. Superscripts T and B stand for the transpose and block-transpose [32] operators, ⊗ denotes the Kronecker product, vec(·) is the vectorization operator [33], Tr(·) represents the trace of a matrix, and E(·) is the expected value operator. Moreover, [A]_(i,j) represents the scalar (i,j)-entry of matrix A, while A_{i,j} denotes the (i,j)-block/submatrix of A.

2. Problem Formulation

In a system identification setup (as depicted in Figure 1), let us consider a linear single-input/single-output (SISO) system whose output signal y(n), corrupted by an additive measurement noise w(n) at time index n, yields the so-called desired signal, which can be expressed as
d(n) = y(n) + w(n) = h^T(h_1, h_2) x(n) + w(n)   (1)
where x(n) = [x(n) x(n−1) ⋯ x(n−L_0^2+1)]^T contains the L_0^2 most recent samples of the input signal x(n), while
h(h_1, h_2) = h_1 ⊗ h_2 ± h_2 ⊗ h_1   (2)
denotes an L_0^2-dimensional (unknown) symmetric (+) or antisymmetric (−) system impulse response, which depends on two (virtual) impulse responses h_1 and h_2 of length L_0. Note that the symmetric or antisymmetric characteristic considered here is not related to the samples of the impulse response, such as in linear-phase finite impulse response filters (for details, see the discussion presented in [14]). So, the error signal can be written as
e(n) = d(n) − d̂(n)   (3)
with d̂(n) denoting the output signal of the adaptive filter ĥ(n), whose weights are adjusted based on x(n) and e(n).
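To see why the construction in (2) is called symmetric/antisymmetric, note that reshaping the L_0^2-dimensional vector h into an L_0 × L_0 matrix turns each Kronecker product into an outer product. A minimal NumPy sketch (toy dimensions of our choosing, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
L0 = 4
h1 = rng.standard_normal(L0)
h2 = rng.standard_normal(L0)

# Global impulse response from eq. (2): h = h1 (kron) h2 +/- h2 (kron) h1
h_sym = np.kron(h1, h2) + np.kron(h2, h1)    # symmetric (+) case
h_asym = np.kron(h1, h2) - np.kron(h2, h1)   # antisymmetric (-) case

# Reshaping the L0^2-vector into an L0 x L0 matrix exposes the property:
# kron(h1, h2) reshapes (row-major) to outer(h1, h2), so the sum/difference
# of the two Kronecker products reshapes to a symmetric/antisymmetric matrix.
H_sym = h_sym.reshape(L0, L0)
H_asym = h_asym.reshape(L0, L0)
print(np.allclose(H_sym, H_sym.T))     # True
print(np.allclose(H_asym, -H_asym.T))  # True
```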
In this context, instead of estimating h(h_1, h_2) directly, the LMS-SAS algorithm [14] can be used to identify h_1 and h_2 [in (2)] through two (virtual) adaptive filters with weight vectors ĥ_1(n) and ĥ_2(n), both of length L_0, in such a way that the (global) adaptive filter can be computed as
ĥ(n) = ĥ_1(n) ⊗ ĥ_2(n) ± ĥ_2(n) ⊗ ĥ_1(n).   (4)
Thus, the adjustment of ĥ_1(n) and ĥ_2(n) in (4) is carried out using the following update rules:
ĥ_1(n) = ĥ_1(n−1) + μ_1 x_2(n) e(n)   (5)
and
ĥ_2(n) = ĥ_2(n−1) + μ_2 x_1(n) e(n)   (6)
where μ_1 and μ_2 represent step-size parameters,
x_1(n) = Ĥ_1^T(n−1) x(n)   (7)
and
x_2(n) = Ĥ_2^T(n−1) x(n)   (8)
with
Ĥ_1(n) = ĥ_1(n) ⊗ I_{L_0} ± I_{L_0} ⊗ ĥ_1(n)   (9)
and
Ĥ_2(n) = I_{L_0} ⊗ ĥ_2(n) ± ĥ_2(n) ⊗ I_{L_0}   (10)
characterizing matrices of size L_0^2 × L_0, in which I_{L_0} denotes the L_0 × L_0 identity matrix, while the error signal (3) can conveniently be expressed as
e(n) = d(n) − ĥ_1^T(n−1) x_2(n) = d(n) − ĥ_2^T(n−1) x_1(n).   (11)
Therefore, the system identification setup and the LMS-SAS algorithm [defined through (4)–(11)] have now been characterized.
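For concreteness, one full pass of (4)–(11) can be sketched as follows. This is an illustrative NumPy toy example (white input, small dimensions, noise level and step sizes chosen for this setup), not one of the paper's simulation scenarios:

```python
import numpy as np

rng = np.random.default_rng(1)
L0 = 4
I = np.eye(L0)
col = lambda v: v.reshape(-1, 1)

# Unknown virtual responses and the symmetric (+) global system, eqs. (1)-(2)
h1s, h2s = rng.standard_normal(L0), rng.standard_normal(L0)
h = np.kron(h1s, h2s) + np.kron(h2s, h1s)

# Distinct initialization vectors (as in the paper's Section 4) avoid stalling
h1 = np.r_[1.0, np.zeros(L0 - 1)]   # h1(0) = [1 0 ... 0]^T
h2 = np.full(L0, 1.0 / L0)          # h2(0) = (1/L0)[1 ... 1]^T
mu1 = mu2 = 2e-3                    # toy step sizes for this small setup
sigma_w = 1e-2

for n in range(30_000):
    x = rng.standard_normal(L0 * L0)             # white Gaussian input vector
    d = h @ x + sigma_w * rng.standard_normal()  # desired signal, eq. (1)
    H1 = np.kron(col(h1), I) + np.kron(I, col(h1))   # eq. (9), '+' case
    H2 = np.kron(I, col(h2)) + np.kron(col(h2), I)   # eq. (10), '+' case
    x1, x2 = H1.T @ x, H2.T @ x                  # eqs. (7)-(8)
    e = d - h1 @ x2                              # eq. (11)
    h1 = h1 + mu1 * x2 * e                       # eq. (5)
    h2 = h2 + mu2 * x1 * e                       # eq. (6)

h_hat = np.kron(h1, h2) + np.kron(h2, h1)        # global filter, eq. (4)
mis = np.linalg.norm(h_hat - h) / np.linalg.norm(h)
print(mis)   # small normalized misalignment after adaptation
```

Note that only the global filter ĥ(n) is identifiable: the virtual pair (ĥ_1, ĥ_2) carries an inherent scale ambiguity (c·ĥ_1, ĥ_2/c), which leaves (4) unchanged.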

3. Proposed Model

In this section, the proposed stochastic model describing the behavior of the LMS-SAS algorithm is derived. Specifically, model expressions are obtained for predicting the mean weight behavior of the (global and virtual) adaptive filters, learning curves, and the evolution of some correlation-like matrices related to the adaptive weight vectors. To this end, the following assumptions and approximations are used:
Assumption 1.
The input signal x(n) is obtained from a zero-mean (uncorrelated or correlated) Gaussian process with variance σ_x^2 and autocorrelation matrix R = E[x(n) x^T(n)] [2,5].
Assumption 2.
The measurement noise w(n) is obtained from a white Gaussian process with variance σ_w^2, which is uncorrelated with any other signal in the system [2,5].
Assumption 3.
The adaptive weight vectors ĥ_1(n), ĥ_2(n), and ĥ(n) are assumed independent of each other, as well as of any other variable in the system [2,5,13,29].
Note that these assumptions have been commonly used in the stochastic modeling of adaptive algorithms to make the development mathematically tractable (see [2,5]) and have led to satisfactory results (as shown later in Section 4).

3.1. Mean Weight Behavior

Taking into account the similar forms of (5) and (6), let us start by replacing the subindexes 1 and 2 with α and ᾱ (or vice versa) such that both update rules can be expressed through
ĥ_α(n) = ĥ_α(n−1) + μ_α x_ᾱ(n) e(n).   (12)
Then, substituting (1) and (11) into (12), taking the expected value of both sides of the resulting expression, and using Assumptions 2 and 3, the mean weight behavior of the (virtual) adaptive filters can be determined as
E[ĥ_α(n)] = [I_{L_0} − μ_α S_ᾱ(n)] E[ĥ_α(n−1)] + μ_α S′_ᾱ(n) h   (13)
where
S_α(n) = E[x_α(n) x_α^T(n)]   (14)
and
S′_α(n) = E[x_α(n) x^T(n)].   (15)
Notice from (13) that the convergence of the (virtual) adaptive filters depends on
  • The step sizes μ_α, the system impulse response h, the estimate ĥ_α(n) of the system impulse response, as well as the correlation-like matrices S_ᾱ(n) and S′_ᾱ(n);
  • The initialization condition ĥ_α(0), which may impair the initial convergence or cause instability of the algorithm; and
  • Each other, i.e., the behavior of one (virtual) adaptive filter is affected by the behavior of the other and vice versa.
Next, substituting either (7) or (8) into (14), using the properties of the Kronecker product [33,34], and considering Assumption 3, we have
S_α(n) = S_α^(1)(n) ± S_α^(2)(n) ± S_α^(2)T(n) + S_α^(3)(n)   (16)
with
S_α^(1)(n) = E{[I_{L_0} ⊗ ĥ_α(n−1)]^T R [I_{L_0} ⊗ ĥ_α(n−1)]}
           = [ Tr[R_{1,1} G_α(n−1)]   ⋯   Tr[R_{1,L_0} G_α(n−1)]
                         ⋮            ⋱            ⋮
               Tr[R_{L_0,1} G_α(n−1)] ⋯  Tr[R_{L_0,L_0} G_α(n−1)] ]   (17)
S_α^(2)(n) = E{[I_{L_0} ⊗ ĥ_α(n−1)]^T R [ĥ_α(n−1) ⊗ I_{L_0}]}
           = [ Σ_{i=1}^{L_0} g_{α,i}^T(n−1) R_{1,i}
                         ⋮
               Σ_{i=1}^{L_0} g_{α,i}^T(n−1) R_{L_0,i} ]   (18)
and
S_α^(3)(n) = E{[ĥ_α(n−1) ⊗ I_{L_0}]^T R [ĥ_α(n−1) ⊗ I_{L_0}]} = Σ_{i=1}^{L_0} Σ_{j=1}^{L_0} [G_α(n−1)]_(i,j) R_{i,j}   (19)
in which
G_α(n) = E[ĥ_α(n) ĥ_α^T(n)]   (20)
is the autocorrelation matrix of ĥ_α(n), g_{α,i}(n) represents the i-th column of G_α(n), [G_α(n)]_(i,j) characterizes the (i,j)-th element of G_α(n), while R_{i,j} denotes an L_0 × L_0 block/submatrix of R, i.e.,
R = [ R_{1,1}    ⋯   R_{1,L_0}
         ⋮       ⋱       ⋮
      R_{L_0,1}  ⋯   R_{L_0,L_0} ].   (21)
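Because (16)–(19) become exact for a deterministic weight vector (where G_α reduces to the outer product ĥ_α ĥ_α^T), the block-trace decomposition can be sanity-checked numerically. The following NumPy sketch (toy dimensions of our choosing, '+' case) compares the direct product Ĥ^T R Ĥ against (17)–(19):

```python
import numpy as np

rng = np.random.default_rng(2)
L0 = 3
N = L0 * L0
I = np.eye(L0)

# Random SPD "autocorrelation" matrix R and a fixed weight vector h_a
A = rng.standard_normal((N, N))
R = A @ A.T / N
h_a = rng.standard_normal(L0)
G = np.outer(h_a, h_a)        # G_alpha for a deterministic weight vector

def Rblk(i, j):
    # L0 x L0 block R_{i,j} of R, as in eq. (21)
    return R[i*L0:(i+1)*L0, j*L0:(j+1)*L0]

# Direct computation: S = H^T R H with H = I (kron) h + h (kron) I
H = np.kron(I, h_a.reshape(-1, 1)) + np.kron(h_a.reshape(-1, 1), I)
S_direct = H.T @ R @ H

# Block-trace decomposition, eqs. (17)-(19)
S1 = np.array([[np.trace(Rblk(i, j) @ G) for j in range(L0)] for i in range(L0)])
S2 = np.vstack([sum(G[:, l] @ Rblk(i, l) for l in range(L0)) for i in range(L0)])
S3 = sum(G[i, l] * Rblk(i, l) for i in range(L0) for l in range(L0))
S_decomp = S1 + S2 + S2.T + S3

print(np.allclose(S_direct, S_decomp))  # True
```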
Similarly, from (7) and (8), (15) can be simplified as
S′_α(n) = E[Ĥ_α^T(n−1)] R   (22)
with E[Ĥ_α^T(n−1)] being obtained by taking the expected value of both sides of either (9) or (10).
Finally, the mean weight behavior of the (global) adaptive filter ĥ(n) can be determined by applying the expected value operator to both sides of (4) and using Assumption 3. Thereby, one obtains
E[ĥ(n)] = E[ĥ_1(n)] ⊗ E[ĥ_2(n)] ± E[ĥ_2(n)] ⊗ E[ĥ_1(n)]   (23)
which depends on (13); hence, the characteristics observed in the convergence of the (virtual) adaptive filters also hold for the convergence of the (global) adaptive filter. Therefore, the mean weight behavior of ĥ_1(n), ĥ_2(n), and ĥ(n) can be predicted [from (13) and (23)] if the evolution of (20) is known.

3.2. Learning Curves

Aiming to characterize the learning curves, let us start by rewriting (11) using the weight-error vector
v(n) = h − ĥ(n)   (24)
as
e(n) = v^T(n−1) x(n) + w(n).   (25)
Then, squaring both sides of (25), taking the expected value of the resulting expression, and using Assumption 2, one obtains the following expression describing the evolution of the mean-square error (MSE):
J(n) = J_min + J_ex(n)   (26)
where
J_min = σ_w^2   (27)
is the minimum MSE attainable in steady state, while
J_ex(n) = E[v^T(n−1) x(n) x^T(n) v(n−1)]   (28)
is the excess MSE (EMSE) introduced by the algorithm. Next, considering the properties of the trace operator [33] and using Assumption 3, (28) reduces to
J_ex(n) = Tr[R K(n−1)]   (29)
with
K(n) = E[v(n) v^T(n)]   (30)
being the autocorrelation matrix of the weight-error vector. Therefore, if the evolution of (30) is known, the MSE and EMSE learning curves are completely characterized by (26), (27), and (29).
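The decomposition (26)–(29) can be illustrated numerically: if v(n−1) is drawn independently of x(n) with E[v v^T] = K, the Monte Carlo MSE should approach σ_w^2 + Tr[R K]. A NumPy sketch (toy dimensions and sample count of our choosing):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
sigma_w = 0.1

# Random SPD input autocorrelation R and weight-error covariance K = E[v v^T]
A = rng.standard_normal((N, N)); R = A @ A.T / N
B = rng.standard_normal((N, N)); K = B @ B.T / N

M = 200_000
x = rng.standard_normal((M, N)) @ np.linalg.cholesky(R).T  # x ~ N(0, R)
v = rng.standard_normal((M, N)) @ np.linalg.cholesky(K).T  # v ~ N(0, K), independent of x
w = sigma_w * rng.standard_normal(M)

e = np.einsum('ij,ij->i', v, x) + w       # e(n) = v^T x + w, eq. (25)
J_mc = np.mean(e**2)                      # Monte Carlo MSE
J_model = sigma_w**2 + np.trace(R @ K)    # eq. (26): J_min + Tr[R K]
print(abs(J_mc - J_model) / J_model)      # small relative error
```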

3.3. Correlation-Like Matrices

The aim is now to derive recursive expressions describing the evolution of the correlation-like matrices G_α(n) [given by (20)] and K(n) [given by (30)]. To this end, substituting (12) into (20) and using Assumptions 2 and 3, we obtain
G_α(n) = G_α(n−1) − μ_α S_ᾱ(n) G_α(n−1) − μ_α G_α(n−1) S_ᾱ(n) + μ_α G′_α(n−1) S′_ᾱ^T(n) + μ_α S′_ᾱ(n) G′_α^T(n−1) + μ_α^2 F_ᾱ(n) + μ_α^2 σ_w^2 S_ᾱ(n)   (31)
where
G′_α(n) = E[ĥ_α(n)] h^T   (32)
with E[ĥ_α(n)] being computed from (13),
F_α(n) = E[Ĥ_α^T(n−1) x(n) x^T(n) v(n−1) v^T(n−1) x(n) x^T(n) Ĥ_α(n−1)] ≈ E[Ĥ_α^T(n−1) E(n) Ĥ_α(n−1)]   (33)
in which
E(n) = E[x(n) x^T(n) K(n−1) x(n) x^T(n)] = 2 R K(n−1) R + R Tr[R K(n−1)]   (34)
due to the factorization theorem of Gaussian variables [2,3,5] (also known as Isserlis' theorem [35]). Thereby, as in (16), (33) can be expressed [using (34)] as
F_α(n) = F_α^(1)(n) ± F_α^(2)(n) ± F_α^(2)T(n) + F_α^(3)(n)   (35)
where
F_α^(1)(n) = E{[I_{L_0} ⊗ ĥ_α(n−1)]^T E(n) [I_{L_0} ⊗ ĥ_α(n−1)]}
           = [ Tr[E_{1,1}(n) G_α(n−1)]   ⋯   Tr[E_{1,L_0}(n) G_α(n−1)]
                         ⋮               ⋱               ⋮
               Tr[E_{L_0,1}(n) G_α(n−1)] ⋯  Tr[E_{L_0,L_0}(n) G_α(n−1)] ]   (36)
F_α^(2)(n) = E{[I_{L_0} ⊗ ĥ_α(n−1)]^T E(n) [ĥ_α(n−1) ⊗ I_{L_0}]}
           = [ Σ_{i=1}^{L_0} g_{α,i}^T(n−1) E_{1,i}(n)
                         ⋮
               Σ_{i=1}^{L_0} g_{α,i}^T(n−1) E_{L_0,i}(n) ]   (37)
and
F_α^(3)(n) = E{[ĥ_α(n−1) ⊗ I_{L_0}]^T E(n) [ĥ_α(n−1) ⊗ I_{L_0}]} = Σ_{i=1}^{L_0} Σ_{j=1}^{L_0} [G_α(n−1)]_(i,j) E_{i,j}(n).   (38)
In turn, computing v(n) v^T(n) from (24) and taking the expected value of both sides of the resulting expression, it is possible to rewrite (30) as
K(n) = h h^T − h E[ĥ^T(n)] − E[ĥ(n)] h^T + G(n)   (39)
where
G(n) = E[ĥ(n) ĥ^T(n)] = G_1(n) ⊗ G_2(n) ± {vec[G_2(n)] vec[G_1(n)]^T}^B ± {vec[G_1(n)] vec[G_2(n)]^T}^B + G_2(n) ⊗ G_1(n)   (40)
denotes the autocorrelation matrix of ĥ(n) and (·)^B represents the transposition of blocks [32] of dimension L_0 × L_0. Note that (40) is obtained by computing the outer product ĥ(n) ĥ^T(n) from (4), taking the expected value of both sides of the resulting expression, and considering Assumption 3.
Therefore, since the evolution of the correlation-like matrices G_α(n), K(n), and G(n) has been properly characterized, the behavior of the LMS-SAS algorithm can now be predicted.
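The Gaussian moment factorization (34), which is central to the recursion above, can be verified by Monte Carlo simulation. The sketch below (illustrative dimensions and sample count of our choosing; K is any symmetric positive-definite matrix standing in for K(n−1)) draws x ~ N(0, R) and compares the sample fourth-moment matrix with 2 R K R + R Tr[R K]:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3
A = rng.standard_normal((N, N)); R = A @ A.T / N
B = rng.standard_normal((N, N)); K = B @ B.T / N   # stands in for K(n-1)

M = 400_000
x = rng.standard_normal((M, N)) @ np.linalg.cholesky(R).T   # x ~ N(0, R)

# Sample estimate of E[x x^T K x x^T]
q = np.einsum('mi,ij,mj->m', x, K, x)               # x^T K x, per sample
E_mc = np.einsum('m,mi,mj->ij', q, x, x) / M

E_model = 2 * R @ K @ R + R * np.trace(R @ K)       # right-hand side of (34)
print(np.max(np.abs(E_mc - E_model)))               # shrinks as M grows
```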

4. Simulation Results

This section aims to assess the accuracy of the proposed model by comparing results obtained from MC simulations (average of 200 independent runs) with model predictions. To this end, three examples are presented, covering uncorrelated and correlated Gaussian input data, several signal-to-noise ratio (SNR) values, as well as distinct system impulse responses and initialization conditions for the adaptive filters. Specifically, the input signal x(n) is obtained from an autoregressive (AR) process [5], given by
x(n) = −a_1 x(n−1) − a_2 x(n−2) + v(n)   (41)
in which a_1 and a_2 denote the AR(2) coefficients, while v(n) is a white Gaussian noise whose variance is determined as
σ_v^2 = σ_x^2 [(1 − a_2)/(1 + a_2)] [(1 + a_2)^2 − a_1^2]   (42)
such that σ_x^2 = 1. The SNR is defined (in dB) as [14,29]
SNR = 10 log_10(σ_y^2 / σ_w^2)   (43)
with σ_y^2 = h^T(h_1, h_2) R h(h_1, h_2) characterizing the variance of the system output signal; in particular, three SNR values are considered here, i.e., 10, 20, and 30 dB. Unless otherwise stated, the (virtual) adaptive filters 1 and 2 are initialized as ĥ_1(0) = [1 0 ⋯ 0]^T and ĥ_2(0) = L_0^{−1}[1 1 ⋯ 1]^T, respectively, to prevent them from stalling at the beginning of the adaptation process [14], while the step sizes are chosen as μ_1 = μ_2 = 10^{−3}.
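The input generation (41)–(42) and the SNR-based sizing of the measurement-noise variance (43) can be sketched as follows. The AR(2) coefficients and target SNR are example values, and σ_y^2 = 1 is a placeholder (the paper computes it as h^T R h for the actual system):

```python
import numpy as np

rng = np.random.default_rng(5)

# Example AR(2) coefficients (a stable pair; the paper's Example 2 uses
# a1 = 0.6 and a2 = 0.8) and unit target input power
a1, a2 = 0.6, 0.8
sigma_x2 = 1.0

# Driving-noise variance from eq. (42) so that var(x) = sigma_x2
sigma_v2 = sigma_x2 * (1 - a2) / (1 + a2) * ((1 + a2)**2 - a1**2)

M = 400_000
v = np.sqrt(sigma_v2) * rng.standard_normal(M)
x = np.zeros(M)
for n in range(2, M):
    x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + v[n]   # eq. (41)

var_x = x[1000:].var()   # discard a short burn-in; close to 1.0
print(var_x)

# Measurement-noise variance for a target SNR, eq. (43)
SNR_dB = 20.0
sigma_y2 = 1.0           # placeholder for h^T R h
sigma_w2 = sigma_y2 / 10**(SNR_dB / 10)
```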

4.1. Example 1

Here, the proposed model is verified for uncorrelated Gaussian input data, different SNR values, and a system impulse response with antisymmetric characteristic. Specifically, the input signal is obtained from (41) by making a_1 = a_2 = 0, which results in an eigenvalue spread [2,3,5] of χ = 1 for the input autocorrelation matrix. The antisymmetric system impulse response h(h_1, h_2) is obtained from (2), with h_1 containing the first L_0 = 16 weights of the echo path model 1 given in the ITU-T G.168 Recommendation [36], while [h_2]_l = (−0.9)^{l−1} for l = 1, 2, …, L_0; consequently, the length of h(h_1, h_2) is L_0^2 = 256. The results obtained for this operating scenario are presented in Figure 2.

4.2. Example 2

This example now aims to assess the proposed model for correlated Gaussian input data, different SNR values, and a system impulse response with symmetric characteristic. Particularly, the input signal is taken from (41) with a_1 = 0.6 and a_2 = 0.8, which results in an eigenvalue spread of χ = 162.13 for the input autocorrelation matrix. The symmetric system impulse response h(h_1, h_2) is obtained from (2), with h_1 containing the first L_0 = 32 weights of the echo path model 1 given in the ITU-T G.168 Recommendation [36], while [h_2]_l = 0.5^{l−1} for l = 1, 2, …, L_0; consequently, the length of h(h_1, h_2) is L_0^2 = 1024. The results obtained for this operating scenario are presented in Figure 3.

4.3. Example 3

In this example, the accuracy of the proposed model is verified considering different initialization conditions for the adaptive weight vectors. To this end, the input signal is obtained from (41) by making a_1 = 0.6 and a_2 = 0.8, yielding an eigenvalue spread of χ = 160.55 for the input autocorrelation matrix. Symmetric and antisymmetric system impulse responses are used, which are determined from (2) with h_1 and h_2 given as in Example 2 but now with L_0 = 16 weights; hence, the length of h(h_1, h_2) becomes L_0^2 = 256. In turn, the adaptive weight vectors are initialized as ĥ_1(0) = L_0^{−1}[1 1 ⋯ 1]^T and ĥ_2(0) = c_+ L_0^{−1}[1 1 ⋯ 1]^T with c_+ = {10^{−1}, 1 − 10^{−6}, 1} when the system impulse response is symmetric, while ĥ_1(0) = [1 0 ⋯ 0]^T and ĥ_2(0) = c_− L_0^{−1}[1 1 ⋯ 1]^T with c_− = {1, 5, 7.5} when the antisymmetric impulse response is used. The obtained results are presented in Figure 4, considering only SNR = 20 dB for simplicity.

5. Discussion

Figure 2, Figure 3 and Figure 4 present the results obtained from MC simulations and model predictions for the operating scenarios described in Examples 1, 2, and 3, respectively. Specifically, Figure 2a–c and Figure 3a–c depict the evolution of (five) adaptive weights of ĥ_1(n), ĥ_2(n), and ĥ(n), while Figure 2d and Figure 3d show the EMSE learning curves. (Results obtained for SNR values of 10 and 30 dB have been omitted from Figure 2a–c and Figure 3a–c since they are very similar.) In turn, Figure 4a,b illustrate the impact of the initialization of the adaptive weight vectors on the evolution of (two) adaptive weights of ĥ(n), while Figure 4c,d depict the effect on the EMSE learning curves. From the latter, one verifies that both the convergence speed and the steady-state EMSE are affected by the choice of the initialization vectors; in this particular case, when ĥ_1(0) and ĥ_2(0) assume small and distinct values, faster convergence and lower steady-state EMSE are achieved. Moreover, one observes from these figures that the behavior predicted by the proposed model matches the one obtained from MC simulations very well, during both the transient and steady-state phases. Therefore, the accuracy of the model is confirmed for uncorrelated and correlated Gaussian input data, different SNR values and initialization conditions of the adaptive weight vectors, as well as for system impulse responses with symmetric and antisymmetric characteristics.

6. Conclusions

A stochastic model for the LMS-SAS algorithm was derived in this paper. The proposed model allows predicting the mean weight behavior of the (global and virtual) adaptive filters, learning curves, and evolution of some correlation-like matrices. Simulation results confirmed the accuracy of the model irrespective of the input data correlation level, SNR value, initialization condition of the adaptive weight vectors, and system impulse response. Based on the proposed model, further research could address the derivation of stability bounds for the step size, analytical expressions characterizing the algorithm behavior in steady state, and the development of models for the normalized version of the LMS-SAS algorithm.

Author Contributions

Conceptualization, A.C.B. and E.V.K.; methodology, A.C.B. and M.V.M.; software, A.C.B.; validation, M.V.M.; investigation, A.C.B.; writing—original draft preparation, A.C.B., E.V.K. and M.V.M.; writing—review and editing, J.B., C.P. and L.-M.D.; supervision, J.B. and S.C.; funding acquisition, L.-M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant from the Romanian Ministry of Education and Research, CNCS–UEFISCDI, project no. PN-III-P1-1.1-PD-2019-0340.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and source codes related to the present research work can be made available upon request to the corresponding author.

Acknowledgments

The authors thank the Academic Editor and the anonymous reviewers for their valuable and constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Åström, K.J.; Wittenmark, B. Adaptive Control, 2nd ed.; Dover Publications: Mineola, NY, USA, 2008.
  2. Sayed, A.H. Adaptive Filters; John Wiley & Sons: Hoboken, NJ, USA, 2008.
  3. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer: New York, NY, USA, 2013.
  4. Farhang-Boroujeny, B. Adaptive Filters: Theory and Applications, 2nd ed.; John Wiley & Sons: Chichester, UK, 2013.
  5. Haykin, S. Adaptive Filter Theory, 5th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2014.
  6. Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 2000, 8, 508–518.
  7. de Souza, F.C.; Tobias, O.J.; Seara, R.; Morgan, D.R. A PNLMS algorithm with individual activation factors. IEEE Trans. Signal Process. 2010, 58, 2036–2047.
  8. Wagner, K.T.; Doroslovacki, M.I. Proportionate-Type Normalized Least Mean Square Algorithms; ISTE Ltd.: London, UK, 2013.
  9. Perez, F.L.; de Souza, F.C.; Seara, R. An improved mean-square weight deviation-proportionate gain algorithm based on error autocorrelation. Signal Process. 2014, 94, 503–513.
  10. Beck, E.; Batista, E.L.O.; Seara, R. Norm-constrained adaptive algorithms for sparse system identification based on projections onto intersections of hyperplanes. Signal Process. 2016, 118, 259–271.
  11. Perez, F.L.; Kuhn, E.V.; de Souza, F.C.; Seara, R. A novel gain distribution policy based on individual-coefficient convergence for PNLMS-type algorithms. Signal Process. 2017, 138, 294–306.
  12. Benesty, J.; Paleologu, C.; Ciochină, S. On the identification of bilinear forms with the Wiener filter. IEEE Signal Process. Lett. 2017, 24, 653–657.
  13. Paleologu, C.; Benesty, J.; Ciochină, S. Adaptive filtering for the identification of bilinear forms. Digit. Signal Process. 2018, 75, 153–167.
  14. Benesty, J.; Paleologu, C.; Ciochină, S.; Kuhn, E.V.; Bakri, K.J.; Seara, R. LMS and NLMS algorithms for the identification of impulse responses with intrinsic symmetric or antisymmetric properties. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Singapore, 23–27 May 2022; pp. 5662–5666.
  15. Bershad, N. Analysis of the normalized LMS algorithm with Gaussian inputs. IEEE Trans. Acoust. Speech Signal Process. 1986, 34, 793–806.
  16. Rupp, M. The behavior of LMS and NLMS algorithms in the presence of spherically invariant processes. IEEE Trans. Signal Process. 1993, 41, 1149–1160.
  17. Lopes, C.G.; Sayed, A.H. Diffusion least-mean squares over adaptive networks: Formulation and performance analysis. IEEE Trans. Signal Process. 2008, 56, 3122–3136.
  18. Abadi, M.; Husøy, J. On the application of a unified adaptive filter theory in the performance prediction of adaptive filter algorithms. Digit. Signal Process. 2009, 19, 410–432.
  19. Takahashi, N.; Yamada, I.; Sayed, A.H. Diffusion least-mean squares with adaptive combiners: Formulation and performance analysis. IEEE Trans. Signal Process. 2010, 58, 4795–4810.
  20. Parreira, W.D.; Bermudez, J.C.M.; Richard, C.; Tourneret, J.Y. Stochastic behavior analysis of the Gaussian kernel least-mean-square algorithm. IEEE Trans. Signal Process. 2012, 60, 2208–2222.
  21. Nosrati, H.; Shamsi, M.; Taheri, S.M.; Sedaaghi, M.H. Adaptive networks under non-stationary conditions: Formulation, performance analysis, and application. IEEE Trans. Signal Process. 2015, 63, 4300–4314.
  22. Yang, F.; Enzner, G.; Yang, J. A unified approach to the statistical convergence analysis of frequency-domain adaptive filters. IEEE Trans. Signal Process. 2019, 67, 1785–1796.
  23. Eweda, E.; Bershad, N.J.; Bermudez, J.C.M. Stochastic analysis of the diffusion least mean square and normalized least mean square algorithms for cyclostationary white Gaussian and non-Gaussian inputs. Int. J. Adapt. Control Signal Process. 2021, 35, 2466–2486.
  24. de Almeida, S.; Bermudez, J.; Bershad, N.; Costa, M. A statistical analysis of the affine projection algorithm for unity step size and autoregressive inputs. IEEE Trans. Circuits Syst. I Regul. Pap. 2005, 52, 1394–1405.
  25. Eweda, E. A new approach for analyzing the limiting behavior of the normalized LMS algorithm under weak assumptions. Signal Process. 2009, 89, 2143–2151.
  26. Batista, E.; Seara, R. On the performance of adaptive pruned Volterra filters. Signal Process. 2013, 93, 1909–1920.
  27. Matsuo, M.V.; Kuhn, E.V.; Seara, R. Stochastic analysis of the NLMS algorithm for nonstationary environment and deficient length adaptive filter. Signal Process. 2019, 160, 190–201.
  28. Matsuo, M.V.; Kuhn, E.V.; Seara, R. On the diffusion NLMS algorithm applied to adaptive networks: Stochastic modeling and performance comparisons. Digit. Signal Process. 2021, 113, 103018.
  29. Bakri, K.J.; Kuhn, E.V.; Seara, R.; Benesty, J.; Paleologu, C.; Ciochină, S. On the stochastic modeling of the LMS algorithm operating with bilinear forms. Digit. Signal Process. 2022, 122, 103359.
  30. Kolodziej, J.E.; Tobias, O.J.; Seara, R.; Morgan, D.R. On the constrained stochastic gradient algorithm: Model, performance, and improved version. IEEE Trans. Signal Process. 2009, 57, 1304–1315.
  31. Yu, Y.; Zhao, H.; Lu, L. Steady-state behavior of the improved normalized subband adaptive filter algorithm and its improvement in under-modeling. Signal Image Video Process. 2018, 12, 617–624.
  32. Mackey, D.S. Structured Linearizations for Matrix Polynomials. Ph.D. Thesis, University of Manchester, Manchester, UK, 2006.
  33. Bernstein, D.S. Matrix Mathematics: Theory, Facts, and Formulas; Princeton University Press: Princeton, NJ, USA, 2009.
  34. Van Loan, C.F. The ubiquitous Kronecker product. J. Comput. Appl. Math. 2000, 123, 85–100.
  35. Isserlis, L. On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika 1918, 12, 134–139.
  36. ITU-T Recommendation G.168: Digital Network Echo Cancellers; International Telecommunication Union—Telecommunication Standardization Sector: Geneva, Switzerland, April 2015.
Figure 1. Block diagram of a system identification setup.
Figure 2. Example 1. Results obtained from MC simulations (gray-ragged lines) and predicted from the proposed model (dark-dashed lines): (a) Evolution of (five) weights of the (virtual) adaptive filter 1; (b) Evolution of (five) weights of the (virtual) adaptive filter 2; (c) Evolution of (five) weights of the (global) adaptive filter; (d) Evolution of the EMSE learning curves.
Figure 3. Example 2. Results obtained from MC simulations (gray-ragged lines) and predicted from the proposed model (dark-dashed lines): (a) Evolution of (five) weights of the (virtual) adaptive filter 1; (b) Evolution of (five) weights of the (virtual) adaptive filter 2; (c) Evolution of (five) weights of the (global) adaptive filter; (d) Evolution of the EMSE learning curves.
Figure 4. Example 3. Results obtained from MC simulations (gray-ragged lines) and predicted from the proposed model (dark-dashed lines): (a) Evolution of (two) weights of the (global) adaptive filter for the symmetric impulse response; (b) Evolution of (two) weights of the (global) adaptive filter for the antisymmetric impulse response; (c) Evolution of the EMSE learning curves for the symmetric impulse response; (d) Evolution of the EMSE learning curves for the antisymmetric impulse response.
