# Tensor-Based Adaptive Filtering Algorithms


## Abstract


## 1. Introduction

## 2. System Model

## 3. Tensor-Based LMS Algorithms

**Algorithm 1:** NLMS-T algorithm.

Initialization:

- Set ${\widehat{\mathbf{h}}}_{i}\left(0\right)$, $i=1,2,\dots,N$, based on (21)–(22)
- Set $0<{\alpha}_{i}\le 1$, ${\delta}_{i}>0$, $i=1,2,\dots,N$

For $n=1,2,\dots$, number of iterations:

- Compute ${\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)$, $i=1,2,\dots,N$, based on (19)
- ${e}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)=d\left(n\right)-{\widehat{\mathbf{h}}}_{i}^{T}(n-1){\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)$, for any $i=1,2,\dots,N$
- ${\widehat{\mathbf{h}}}_{i}\left(n\right)={\widehat{\mathbf{h}}}_{i}(n-1)+\frac{{\alpha}_{i}{\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right){e}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)}{{\delta}_{i}+{\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}^{T}\left(n\right){\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)}$, $i=1,2,\dots,N$
- $\widehat{\mathbf{g}}\left(n\right)={\widehat{\mathbf{h}}}_{N}\left(n\right)\otimes {\widehat{\mathbf{h}}}_{N-1}\left(n\right)\otimes \cdots \otimes {\widehat{\mathbf{h}}}_{1}\left(n\right)$
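One iteration of the loop above can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' implementation: it assumes the per-mode input vectors $\mathbf{x}_{\widehat{\mathbf{h}}_{i}}(n)$ from (19) have already been computed, and the function name `nlms_t_step` is hypothetical.

```python
import numpy as np

def nlms_t_step(h_hats, x_hats, d, alphas, deltas):
    """One NLMS-T iteration: update each sub-filter h_i with its own
    normalized LMS step (in place), then rebuild the global estimate
    as the Kronecker product g_hat = h_N (x) ... (x) h_1."""
    for i, (h, x) in enumerate(zip(h_hats, x_hats)):
        e = d - h @ x                                   # a priori error for sub-filter i
        h += alphas[i] * x * e / (deltas[i] + x @ x)    # regularized normalized update
    # Kronecker reconstruction, last sub-filter first (as in the final line above)
    g_hat = h_hats[-1]
    for h in reversed(h_hats[:-1]):
        g_hat = np.kron(g_hat, h)
    return g_hat
```

Note that each sub-filter update works on a vector of length $L_i$ only; the full-length estimate of length $L = L_1 L_2 \cdots L_N$ appears solely in the final Kronecker reconstruction.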

## 4. Tensor-Based RLS Algorithm

**Algorithm 2:** RLS-T algorithm.

Initialization:

- Set ${\widehat{\mathbf{h}}}_{i}\left(0\right)$, $i=1,2,\dots,N$, based on (21)–(22)
- ${\mathbf{R}}_{i}^{-1}\left(0\right)={\xi}_{i}^{-1}{\mathbf{I}}_{{L}_{i}}$, ${\xi}_{i}>0$, $i=1,2,\dots,N$
- ${\lambda}_{i}=1-\frac{1}{K{L}_{i}}$, $K\ge 1$, $i=1,2,\dots,N$

For $n=1,2,\dots$, number of iterations:

- Compute ${\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)$, $i=1,2,\dots,N$, based on (19)
- ${e}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)=d\left(n\right)-{\widehat{\mathbf{h}}}_{i}^{T}(n-1){\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)$, for any $i=1,2,\dots,N$
- ${\mathbf{k}}_{i}\left(n\right)=\frac{{\mathbf{R}}_{i}^{-1}(n-1){\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)}{{\lambda}_{i}+{\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}^{T}\left(n\right){\mathbf{R}}_{i}^{-1}(n-1){\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)}$, $i=1,2,\dots,N$
- ${\widehat{\mathbf{h}}}_{i}\left(n\right)={\widehat{\mathbf{h}}}_{i}(n-1)+{\mathbf{k}}_{i}\left(n\right){e}_{{\widehat{\mathbf{h}}}_{i}}\left(n\right)$, $i=1,2,\dots,N$
- ${\mathbf{R}}_{i}^{-1}\left(n\right)=\frac{1}{{\lambda}_{i}}\left[{\mathbf{I}}_{{L}_{i}}-{\mathbf{k}}_{i}\left(n\right){\mathbf{x}}_{{\widehat{\mathbf{h}}}_{i}}^{T}\left(n\right)\right]{\mathbf{R}}_{i}^{-1}(n-1)$, $i=1,2,\dots,N$
- $\widehat{\mathbf{g}}\left(n\right)={\widehat{\mathbf{h}}}_{N}\left(n\right)\otimes {\widehat{\mathbf{h}}}_{N-1}\left(n\right)\otimes \cdots \otimes {\widehat{\mathbf{h}}}_{1}\left(n\right)$
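One iteration of Algorithm 2 can likewise be sketched in NumPy. Again this is a minimal sketch under assumptions: the per-mode inputs from (19) are taken as given, the forgetting factors $\lambda_i$ are precomputed during initialization, and the function name `rls_t_step` is hypothetical.

```python
import numpy as np

def rls_t_step(h_hats, R_invs, x_hats, d, lambdas):
    """One RLS-T iteration: per-mode Kalman gain, coefficient update,
    and inverse-correlation-matrix update (all in place), followed by
    Kronecker reconstruction of the global estimate."""
    for i in range(len(h_hats)):
        h, R_inv, x, lam = h_hats[i], R_invs[i], x_hats[i], lambdas[i]
        e = d - h @ x                        # a priori error for mode i
        Rx = R_inv @ x
        k = Rx / (lam + x @ Rx)              # Kalman gain vector k_i(n)
        h += k * e                           # coefficient update
        # R_i^{-1}(n) = (1/lam) [I - k x^T] R_i^{-1}(n-1)
        R_invs[i] = (R_inv - np.outer(k, x @ R_inv)) / lam
    # Kronecker reconstruction, last sub-filter first
    g_hat = h_hats[-1]
    for h in reversed(h_hats[:-1]):
        g_hat = np.kron(g_hat, h)
    return g_hat
```

The matrix recursion operates on $L_i \times L_i$ matrices per mode, which is the source of the complexity advantage over a conventional RLS update on the full $L \times L$ correlation matrix.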

## 5. Beyond the Identification of Rank-1 Tensors

## 6. Simulation Results

## 7. Conclusions and Future Works

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

1. Benesty, J.; Huang, Y. (Eds.) Adaptive Signal Processing–Applications to Real-World Problems; Springer: Berlin, Germany, 2003.
2. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1999.
3. Benesty, J.; Gänsler, T.; Morgan, D.R.; Sondhi, M.M.; Gay, S.L. Advances in Network and Acoustic Echo Cancellation; Springer: Berlin, Germany, 2001.
4. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer: New York, NY, USA, 2013.
5. Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002.
6. Cichocki, A.; Mandic, D.; de Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C.; Phan, A.H. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163.
7. Rupp, M.; Schwarz, S. A tensor LMS algorithm. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, Australia, 19–24 April 2015; pp. 3347–3351.
8. Ribeiro, L.N.; de Almeida, A.L.F.; Mota, J.C.M. Identification of separable systems using trilinear filtering. In Proceedings of the 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Cancun, Mexico, 13–16 December 2015; pp. 189–192.
9. Da Silva, A.P.; Comon, P.; de Almeida, A.L.F. A finite algorithm to compute rank-1 tensor approximations. IEEE Signal Process. Lett. 2016, 23, 959–963.
10. Ribeiro, L.N.; Schwarz, S.; Rupp, M.; de Almeida, A.L.F.; Mota, J.C.M. A low-complexity equalizer for massive MIMO systems based on array separability. In Proceedings of the 2017 25th European Signal Processing Conference (EUSIPCO), Kos, Greece, 28 August–2 September 2017; pp. 2522–2526.
11. Boussé, M.; Debals, O.; De Lathauwer, L. A tensor-based method for large-scale blind source separation using segmentation. IEEE Trans. Signal Process. 2017, 65, 346–358.
12. Sidiropoulos, N.; de Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582.
13. Da Costa, M.N.; Favier, G.; Romano, J.M.T. Tensor modelling of MIMO communication systems with performance analysis and Kronecker receivers. Signal Process. 2018, 145, 304–316.
14. Ribeiro, L.N.; de Almeida, A.L.F.; Mota, J.C.M. Separable linearly constrained minimum variance beamformers. Signal Process. 2019, 158, 15–25.
15. Dogariu, L.-M.; Ciochină, S.; Benesty, J.; Paleologu, C. System identification based on tensor decompositions: A trilinear approach. Symmetry 2019, 11, 556.
16. Gesbert, D.; Duhamel, P. Robust blind joint data/channel estimation based on bilinear optimization. In Proceedings of the 8th Workshop on Statistical Signal and Array Processing, Corfu, Greece, 24–26 June 1996; pp. 168–171.
17. Benesty, J.; Cohen, I.; Chen, J. Array Processing–Kronecker Product Beamforming; Springer: Cham, Switzerland, 2019.
18. Stenger, A.; Kellermann, W. Adaptation of a memoryless preprocessor for nonlinear acoustic echo cancelling. Signal Process. 2000, 80, 1747–1760.
19. Huang, Y.; Skoglund, J.; Luebs, A. Practically efficient nonlinear acoustic echo cancellers using cascaded block RLS and FLMS adaptive filters. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 570–596.
20. Cichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multiway Data Analysis and Blind Source Separation; Wiley: Chichester, UK, 2009.
21. Benesty, J.; Paleologu, C.; Ciochină, S. On the identification of bilinear forms with the Wiener filter. IEEE Signal Process. Lett. 2017, 24, 653–657.
22. Paleologu, C.; Benesty, J.; Ciochină, S. Adaptive filtering for the identification of bilinear forms. Digit. Signal Process. 2018, 75, 153–167.
23. Elisei-Iliescu, C.; Stanciu, C.; Paleologu, C.; Benesty, J.; Anghel, C.; Ciochină, S. Efficient recursive least-squares algorithms for the identification of bilinear forms. Digit. Signal Process. 2018, 83, 280–296.
24. Dogariu, L.-M.; Ciochină, S.; Paleologu, C.; Benesty, J. A connection between the Kalman filter and an optimized LMS algorithm for bilinear forms. Algorithms 2018, 11, 211.
25. Elisei-Iliescu, C.; Dogariu, L.-M.; Paleologu, C.; Benesty, J.; Enescu, A.A.; Ciochină, S. A recursive least-squares algorithm for the identification of trilinear forms. Algorithms 2020, 13, 135.
26. Dogariu, L.-M.; Ciochină, S.; Paleologu, C.; Benesty, J.; Oprea, C. An iterative Wiener filter for the identification of multilinear forms. In Proceedings of the 2020 43rd International Conference on Telecommunications and Signal Processing (TSP), Milan, Italy, 7–9 July 2020; pp. 193–197.
27. Bertsekas, D.P. Nonlinear Programming, 2nd ed.; Athena Scientific: Belmont, MA, USA, 1999.
28. Dogariu, L.-M.; Paleologu, C.; Benesty, J.; Oprea, C.; Ciochină, S. LMS algorithms for multilinear forms. In Proceedings of the 2020 International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 5–6 November 2020.
29. Benesty, J.; Paleologu, C.; Ciochină, S. On regularization in adaptive filtering. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 1734–1742.
30. Paleologu, C.; Benesty, J.; Ciochină, S. Linear system identification based on a Kronecker product decomposition. IEEE/ACM Trans. Audio Speech Lang. Process. 2018, 26, 1793–1808.
31. Elisei-Iliescu, C.; Paleologu, C.; Benesty, J.; Stanciu, C.; Anghel, C.; Ciochină, S. Recursive least-squares algorithms for the identification of low-rank systems. IEEE/ACM Trans. Audio Speech Lang. Process. 2019, 27, 903–918.
32. de Lathauwer, L.; de Moor, B.; Vandewalle, J. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 2000, 21, 1253–1278.
33. Vannieuwenhoven, N.; Vandebril, R.; Meerbergen, K. A new truncation strategy for the higher order singular value decomposition. SIAM J. Sci. Comput. 2012, 34, A1027–A1052.
34. Kitamura, D.; Mogami, S.; Mitsui, Y.; Takamune, N.; Saruwatari, H.; Ono, N.; Takahashi, Y.; Kondo, K. Generalized independent low-rank matrix analysis using heavy-tailed distributions for blind source separation. EURASIP J. Adv. Signal Process. 2018, 2018, 25.
35. Kashima, K.; Aoyama, H.; Ohta, Y. Stable process approach to analysis of systems under heavy-tailed noise: Modeling and stochastic linearization. IEEE Trans. Autom. Control 2019, 64, 1344–1357.
36. Niu, H.; Wei, J.; Chen, Y. Optimal randomness for stochastic configuration network (SCN) with heavy-tailed distributions. Entropy 2021, 23, 56.
37. Shao, T.; Zheng, Y.R.; Benesty, J. An affine projection sign algorithm robust against impulsive interferences. IEEE Signal Process. Lett. 2010, 17, 327–330.
38. Yang, Z.; Zheng, Y.R.; Grant, S.L. Proportionate affine projection sign algorithms for network echo cancellation. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 2273–2284.
39. Pogula, R.; Kumar, T.K.; Albu, F. Robust sparse normalized LMAT algorithms for adaptive system identification under impulsive noise environments. Circuits Syst. Signal Process. 2019, 38, 5103–5134.
40. Digital Network Echo Cancellers; ITU-T Recommendation G.168; ITU: Geneva, Switzerland, 2002.
41. Ciochină, S.; Paleologu, C.; Benesty, J.; Enescu, A.A. On the influence of the forgetting factor of the RLS adaptive filter in system identification. In Proceedings of the 2009 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 9–10 July 2009; pp. 205–208.
42. Paleologu, C.; Ciochină, S.; Benesty, J. Variable step-size NLMS algorithm for under-modeling acoustic echo cancellation. IEEE Signal Process. Lett. 2008, 15, 5–8.

**Figure 1.** Impulse responses used in simulations of the multiple-input/single-output (MISO) system identification scenario (for $N=4$): (**a**) ${\mathbf{h}}_{1}$ (of length ${L}_{1}=16$) contains the first 16 coefficients of the first impulse response from the G168 Recommendation [40]; (**b**) ${\mathbf{h}}_{2}$ (of length ${L}_{2}=8$) is a randomly generated impulse response; (**c**) ${\mathbf{h}}_{3}$ (of length ${L}_{3}=4$) has the coefficients ${h}_{3,{l}_{3}}=0.9^{{l}_{3}-1}$, with ${l}_{3}=1,2,\dots,{L}_{3}$; (**d**) ${\mathbf{h}}_{4}$ (of length ${L}_{4}=4$) has the coefficients ${h}_{4,{l}_{4}}=0.5^{{l}_{4}-1}$, with ${l}_{4}=1,2,\dots,{L}_{4}$; and (**e**) $\mathbf{g}$ (of length $L={L}_{1}{L}_{2}{L}_{3}{L}_{4}=2048$) is the global impulse response, which results based on (10).

**Figure 2.** Impulse responses used in simulations of the single-input/single-output (SISO) system identification scenario, with $\underline{L}=1000$: (**a**) network echo path from the G168 Recommendation [40] and (**b**) measured acoustic echo path available online at www.comm.pub.ro/plant (accessed on 5 March 2021).

**Figure 3.** Performance of the LMS-T and LMS algorithms (using different step-size parameters), for the identification of the global impulse response $\mathbf{g}$. The input signals are white Gaussian noises, $N=4$, and $L=2048$.

**Figure 4.** Performance of the LMS-T and LMS algorithms (using different step-size parameters), for the identification of the global impulse response $\mathbf{g}$. The input signals are AR(1) processes, $N=4$, and $L=2048$.

**Figure 5.** Performance of the LMS-T and LMS algorithms (using different step-size parameters), for the identification of the global impulse response $\mathbf{g}$. The input signals are AR(1) processes, $N=5$, and $L=4096$.

**Figure 6.** Performance of the NLMS-T and NLMS algorithms (using different normalized step-size parameters), for the identification of the global impulse response $\mathbf{g}$. The input signals are white Gaussian noises, $N=4$, and $L=2048$.

**Figure 7.** Performance of the NLMS-T and NLMS algorithms (using different normalized step-size parameters), for the identification of the global impulse response $\mathbf{g}$. The input signals are AR(1) processes, $N=4$, and $L=2048$.

**Figure 8.** Performance of the NLMS-T and NLMS algorithms (using different normalized step-size parameters), for the identification of the global impulse response $\mathbf{g}$. The input signals are AR(1) processes, $N=5$, and $L=4096$.

**Figure 9.** Performance of the NLMS-T algorithm using different normalized step-size parameters (with equal values of ${\alpha}_{i}$, $i=1,2,\dots,N$), for the identification of the global impulse response $\mathbf{g}$. The input signals are AR(1) processes, $N=4$, and $L=2048$.

**Figure 10.** Performance of the NLMS-T algorithm using different normalized step-size parameters (with different values of ${\alpha}_{i}$, $i=1,2,\dots,N$), for the identification of the global impulse response $\mathbf{g}$. The input signals are AR(1) processes, $N=4$, and $L=2048$.

**Figure 11.** Performance of the RLS-T algorithm (using different forgetting factors), for the identification of the global impulse response $\mathbf{g}$. The input signals are white Gaussian noises, $N=4$, and $L=2048$.

**Figure 12.** Performance of the RLS-T, NLMS-T, and RLS algorithms, for the identification of the global impulse response $\mathbf{g}$. The input signals are AR(1) processes, $N=4$, and $L=2048$.

**Figure 13.** Performance of the RLS-T, NLMS-T, and RLS algorithms, for the identification of the global impulse response $\mathbf{g}$. The input signals are AR(1) processes, $N=5$, and $L=4096$.

**Figure 14.** Performance of the RLS-T, NLMS-T, tensor LMS [7], and RLS algorithms, for the identification of the impulse response $\mathbf{h}=\mathbf{g}+\mathbf{f}$. The vector $\mathbf{f}$ is randomly generated (Gaussian distribution), with the variance $\zeta {\left\Vert \mathbf{g}\right\Vert}_{2}/L$, where $\zeta =0.001$. The input signals are AR(1) processes, $N=4$, and $L=2048$.

**Figure 15.** Performance of the RLS-T, NLMS-T, tensor LMS [7], and RLS algorithms, for the identification of the impulse response $\mathbf{h}=\mathbf{g}+\mathbf{f}$. The vector $\mathbf{f}$ is randomly generated (Gaussian distribution), with the variance $\zeta {\left\Vert \mathbf{g}\right\Vert}_{2}/L$, where $\zeta =0.001$. The input signals are AR(1) processes, $N=5$, and $L=4096$.

**Figure 16.** Performance of the RLS-T, NLMS-T, tensor LMS [7], and RLS algorithms, for the identification of the impulse response $\mathbf{h}=\mathbf{g}+\mathbf{f}$. The vector $\mathbf{f}$ is randomly generated (Gaussian distribution), with the variance $\zeta {\left\Vert \mathbf{g}\right\Vert}_{2}/L$, where $\zeta =0.005$. The input signals are AR(1) processes, $N=5$, and $L=4096$.

**Figure 17.** Performance of the RLS-T, NLMS-T, tensor LMS [7], and RLS algorithms, for the identification of the impulse response $\mathbf{h}=\mathbf{g}+\mathbf{f}$. The vector $\mathbf{f}$ is randomly generated (Gaussian distribution), with the variance $\zeta {\left\Vert \mathbf{g}\right\Vert}_{2}/L$, where $\zeta =0.01$. The input signals are AR(1) processes, $N=5$, and $L=4096$.

**Figure 18.** Performance of the RLS-T algorithm for the identification of the impulse response $\mathbf{h}=\mathbf{g}+\mathbf{f}$. The vector $\mathbf{f}$ is randomly generated (Gaussian distribution), with the variance $\zeta {\left\Vert \mathbf{g}\right\Vert}_{2}/L$, using different values of $\zeta$. The theoretical error (misalignment) is marked with a dashed line. The input signals are AR(1) processes, $N=5$, and $L=4096$.

**Figure 19.** Performance of the RLS-NKP and RLS algorithms for the identification of the network impulse response $\underline{\mathbf{h}}$ from Figure 2a, with the length $\underline{L}=1000$. The RLS-NKP algorithm uses ${\underline{L}}_{1}=40$, ${\underline{L}}_{2}=25$, ${\underline{\lambda}}_{1}=1-1/\left(10P{\underline{L}}_{1}\right)$, and ${\underline{\lambda}}_{2}=1-1/\left(10P{\underline{L}}_{2}\right)$. The forgetting factor of the RLS algorithm is $\lambda =1-1/\left(10\underline{L}\right)$. The input signal is an AR(1) process and $\mathrm{ENR}=20$ dB.

**Figure 20.** Performance of the RLS-NKP and RLS algorithms for the identification of the acoustic impulse response $\underline{\mathbf{h}}$ from Figure 2b, with the length $\underline{L}=1000$. The RLS-NKP algorithm uses ${\underline{L}}_{1}=40$, ${\underline{L}}_{2}=25$, ${\underline{\lambda}}_{1}=1-1/\left(10P{\underline{L}}_{1}\right)$, and ${\underline{\lambda}}_{2}=1-1/\left(10P{\underline{L}}_{2}\right)$. The forgetting factor of the RLS algorithm is $\lambda =1-1/\left(10\underline{L}\right)$. The input signal is an AR(1) process and the echo-to-noise ratio $\left(\mathrm{ENR}\right)=20$ dB.

**Figure 21.** Performance of the RLS-NKP and RLS algorithms for the identification of the acoustic impulse response $\underline{\mathbf{h}}$ from Figure 2b, with the length $\underline{L}=1000$. The RLS-NKP algorithm uses ${\underline{L}}_{1}=40$, ${\underline{L}}_{2}=25$, ${\underline{\lambda}}_{1}=1-1/\left(10P{\underline{L}}_{1}\right)$, and ${\underline{\lambda}}_{2}=1-1/\left(10P{\underline{L}}_{2}\right)$. The forgetting factor of the RLS algorithm is $\lambda =1-1/\left(10\underline{L}\right)$. The input signal is a speech sequence and $\mathrm{ENR}=20$ dB.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Dogariu, L.-M.; Stanciu, C.-L.; Elisei-Iliescu, C.; Paleologu, C.; Benesty, J.; Ciochină, S.
Tensor-Based Adaptive Filtering Algorithms. *Symmetry* **2021**, *13*, 481.
https://doi.org/10.3390/sym13030481
