# An Optimized Differential Step-Size LMS Algorithm


## Abstract


## 1. Introduction

## 2. System Model

## 3. Autocorrelation Matrix of the Coefficients Error

## 4. ODSS-LMS Algorithm

### 4.1. Minimum MSD Value

### 4.2. Optimum Step-Size Derivation

### 4.3. Simplified Version

### 4.4. Practical Considerations

Algorithm 1: ODSS-LMS-G algorithm.

Initialization:

• $\widehat{\mathbf{w}}\left(0\right)={\mathbf{0}}_{L\times 1}$

• ${\mathbf{R}}_{\mathbf{c}}\left(0\right)=c\,{\mathbf{I}}_{L}$, where $c$ is a small positive constant

• ${\mathbf{R}}_{\mathbf{x}}\left(0\right)={\mathbf{0}}_{L\times L}$

• $\mathbf{\Gamma}\left(0\right)={\mathbf{R}}_{\mathbf{c}}\left(0\right)+{\sigma}_{w}^{2}{\mathbf{I}}_{L}$

Parameters: ${\sigma}_{x}^{2},{\sigma}_{v}^{2},{\sigma}_{w}^{2}$, known or estimated

$\lambda =1-\frac{1}{PL}$, with $P\ge 1$

For time index $n=1,2,\dots$:

• $\alpha \left(n\right)=d\left(n\right)-{\mathbf{x}}^{T}\left(n\right)\widehat{\mathbf{w}}(n-1)$

• ${\mathbf{R}}_{\mathbf{x}}\left(n\right)=\lambda {\mathbf{R}}_{\mathbf{x}}(n-1)+(1-\lambda )\mathbf{x}\left(n\right){\mathbf{x}}^{T}\left(n\right)$

If time index $n<KL$, with $K\ge 1$:

• $\mu \left(n\right)=\frac{\mu}{{\mathbf{x}}^{T}\left(n\right)\mathbf{x}\left(n\right)+\delta}$ (i.e., the step-size of the NLMS algorithm)

• $\mathbf{u}\left(n\right)=\mu \left(n\right)\mathbf{x}\left(n\right)\alpha \left(n\right)$

• ${\mathbf{R}}_{\mathbf{c}}\left(n\right)=\lambda {\mathbf{R}}_{\mathbf{c}}(n-1)+(1-\lambda )\mathbf{u}\left(n\right){\mathbf{u}}^{T}\left(n\right)$

else:

• $\mathbf{M}\left(n\right)={\sigma}_{x}^{2}{\mathbf{I}}_{L}\{\mathrm{tr}[\mathbf{\Gamma}(n-1){\mathbf{R}}_{\mathbf{x}}\left(n\right)]+{\sigma}_{v}^{2}\}+2\,\mathrm{mdiag}[{\mathbf{R}}_{\mathbf{x}}\left(n\right)\mathbf{\Gamma}(n-1){\mathbf{R}}_{\mathbf{x}}\left(n\right)]$

• ${\mathbf{y}}_{\mathrm{o}}\left(n\right)={\mathbf{M}}^{-1}\left(n\right)\,\mathrm{diag}[{\mathbf{R}}_{\mathbf{x}}\left(n\right)\mathbf{\Gamma}(n-1)]$

• $\mathbf{u}\left(n\right)={\mathbf{y}}_{\mathrm{o}}\left(n\right)\odot \mathbf{x}\left(n\right)\alpha \left(n\right)$

• $\mathbf{Y}\left(n\right)=\mathrm{mdiag}\left[{\mathbf{y}}_{\mathrm{o}}\left(n\right)\right]$

• ${\mathbf{R}}_{\mathbf{c}}\left(n\right)=\mathbf{\Gamma}(n-1)-\mathbf{\Gamma}(n-1){\mathbf{R}}_{\mathbf{x}}\left(n\right)\mathbf{Y}\left(n\right)-\mathbf{Y}\left(n\right){\mathbf{R}}_{\mathbf{x}}\left(n\right)\mathbf{\Gamma}(n-1)+\mathbf{Y}\left(n\right)\{2{\mathbf{R}}_{\mathbf{x}}\left(n\right)\mathbf{\Gamma}(n-1){\mathbf{R}}_{\mathbf{x}}\left(n\right)+{\mathbf{R}}_{\mathbf{x}}\left(n\right)\mathrm{tr}[\mathbf{\Gamma}(n-1){\mathbf{R}}_{\mathbf{x}}\left(n\right)]\}\mathbf{Y}\left(n\right)+\mathbf{Y}\left(n\right){\mathbf{R}}_{\mathbf{x}}\left(n\right)\mathbf{Y}\left(n\right){\sigma}_{v}^{2}$

• $\widehat{\mathbf{w}}\left(n\right)=\widehat{\mathbf{w}}(n-1)+\mathbf{u}\left(n\right)$

• $\mathbf{\Gamma}\left(n\right)={\mathbf{R}}_{\mathbf{c}}\left(n\right)+{\sigma}_{w}^{2}{\mathbf{I}}_{L}$
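As a cross-check of Algorithm 1, a minimal NumPy sketch is given below. The function and variable names (`odss_lms_g`, `xbuf`) and the default values for $P$, $K$, $\mu$, $\delta$, and $c$ are illustrative assumptions, not the authors' reference code; $\mathrm{mdiag}[\cdot]$ denotes the diagonal matrix built from the main diagonal of its argument, and $\mathrm{diag}[\cdot]$ extracts that diagonal as a vector.

```python
import numpy as np

def mdiag(a):
    """mdiag[.]: diagonal matrix from the main diagonal of a matrix
    (or directly from a vector)."""
    return np.diag(np.diag(a)) if a.ndim == 2 else np.diag(a)

def odss_lms_g(x, d, L, sigma_x2, sigma_v2, sigma_w2,
               P=2, K=2, mu=1.0, delta=1e-6, c=1e-6):
    """Sketch of Algorithm 1 (ODSS-LMS-G); the defaults for P, K, mu,
    delta, and c are illustrative choices."""
    w = np.zeros(L)                      # w_hat(0)
    Rc = c * np.eye(L)                   # R_c(0)
    Rx = np.zeros((L, L))                # R_x(0)
    Gamma = Rc + sigma_w2 * np.eye(L)    # Gamma(0)
    lam = 1.0 - 1.0 / (P * L)
    xbuf = np.zeros(L)                   # tapped-delay-line input vector x(n)
    for n in range(len(x)):
        xbuf = np.concatenate(([x[n]], xbuf[:-1]))
        alpha = d[n] - xbuf @ w                           # a priori error
        Rx = lam * Rx + (1 - lam) * np.outer(xbuf, xbuf)
        if n < K * L:
            # NLMS warm-up phase
            u = mu / (xbuf @ xbuf + delta) * xbuf * alpha
            Rc = lam * Rc + (1 - lam) * np.outer(u, u)
        else:
            M = sigma_x2 * np.eye(L) * (np.trace(Gamma @ Rx) + sigma_v2) \
                + 2 * mdiag(Rx @ Gamma @ Rx)
            y_o = np.linalg.solve(M, np.diag(Rx @ Gamma))  # optimal step-sizes
            u = y_o * xbuf * alpha
            Y = np.diag(y_o)
            Rc = (Gamma - Gamma @ Rx @ Y - Y @ Rx @ Gamma
                  + Y @ (2 * Rx @ Gamma @ Rx
                         + Rx * np.trace(Gamma @ Rx)) @ Y
                  + Y @ Rx @ Y * sigma_v2)
        w = w + u
        Gamma = Rc + sigma_w2 * np.eye(L)
    return w
```

In a white-Gaussian-input system-identification run (known $\sigma_v^2$, $\sigma_w^2=0$), this sketch converges toward the true impulse response after the NLMS warm-up hands over to the optimized step-size branch.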

Algorithm 2: ODSS-LMS-W algorithm.

Initialization:

• $\widehat{\mathbf{w}}\left(0\right)={\mathbf{0}}_{L\times 1}$

• $m\left(0\right)=\epsilon >0$

• $\boldsymbol{\gamma}\left(0\right)=c\,{\mathbf{1}}_{L\times 1}$, where $c$ is a small positive constant

• ${\mathbf{R}}_{\mathbf{c}}\left(0\right)=c\,{\mathbf{I}}_{L}$

• $\mathbf{\Gamma}\left(0\right)={\mathbf{R}}_{\mathbf{c}}\left(0\right)+{\sigma}_{w}^{2}{\mathbf{I}}_{L}$

Parameters: ${\sigma}_{x}^{2},{\sigma}_{v}^{2},{\sigma}_{w}^{2}$, known or estimated

$\lambda =1-\frac{1}{PL}$, with $P\ge 1$

For time index $n=1,2,\dots$:

• $\alpha \left(n\right)=d\left(n\right)-{\mathbf{x}}^{T}\left(n\right)\widehat{\mathbf{w}}(n-1)$

If time index $n<KL$, with $K\ge 1$:

• $\mu \left(n\right)=\frac{\mu}{{\mathbf{x}}^{T}\left(n\right)\mathbf{x}\left(n\right)+\delta}$ (i.e., the step-size of the NLMS algorithm)

• $\mathbf{u}\left(n\right)=\mu \left(n\right)\mathbf{x}\left(n\right)\alpha \left(n\right)$

• ${\mathbf{R}}_{\mathbf{c}}\left(n\right)=\lambda {\mathbf{R}}_{\mathbf{c}}(n-1)+(1-\lambda )\mathbf{u}\left(n\right){\mathbf{u}}^{T}\left(n\right)$

• $\mathbf{\Gamma}\left(n\right)={\mathbf{R}}_{\mathbf{c}}\left(n\right)+{\sigma}_{w}^{2}{\mathbf{I}}_{L}$

else:

• $\mathbf{M}\left(n\right)={\sigma}_{x}^{2}{\mathbf{I}}_{L}[m(n-1)+{\sigma}_{w}^{2}(L+2)]+2{\sigma}_{x}^{2}\,\mathrm{mdiag}\left[\boldsymbol{\gamma}(n-1)\right]+{\sigma}_{v}^{2}{\mathbf{I}}_{L}$

• ${\mathbf{y}}_{\mathrm{o}}\left(n\right)={\mathbf{M}}^{-1}\left(n\right)\boldsymbol{\gamma}(n-1)$

• $\boldsymbol{\gamma}\left(n\right)=\boldsymbol{\gamma}(n-1)+{\sigma}_{w}^{2}{\mathbf{1}}_{L\times 1}+2{\sigma}_{x}^{2}[{\sigma}_{x}^{2}{\mathbf{y}}_{\mathrm{o}}\left(n\right)-{\mathbf{1}}_{L\times 1}]\odot \boldsymbol{\gamma}(n-1)\odot {\mathbf{y}}_{\mathrm{o}}\left(n\right)+{\sigma}_{x}^{2}\{{\sigma}_{x}^{2}\mathrm{tr}[\mathbf{\Gamma}(n-1)]+{\sigma}_{v}^{2}\}{\mathbf{y}}_{\mathrm{o}}\left(n\right)\odot {\mathbf{y}}_{\mathrm{o}}\left(n\right)$

• $\mathbf{\Gamma}\left(n\right)=\mathrm{mdiag}\left[\boldsymbol{\gamma}\left(n\right)\right]$

• $m\left(n\right)=\mathrm{tr}[\mathbf{\Gamma}\left(n\right)]-L{\sigma}_{w}^{2}$

• $\mathbf{u}\left(n\right)={\mathbf{y}}_{\mathrm{o}}\left(n\right)\odot \mathbf{x}\left(n\right)\alpha \left(n\right)$

• $\widehat{\mathbf{w}}\left(n\right)=\widehat{\mathbf{w}}(n-1)+\mathbf{u}\left(n\right)$
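Because $\mathbf{M}(n)$ is diagonal in the white-input version, the matrix inversion reduces to an element-wise division, which is what makes Algorithm 2 cheap. The NumPy sketch below illustrates this; the names (`odss_lms_w`, `xbuf`) and the defaults for $P$, $K$, $\mu$, $\delta$, and $c$ are illustrative assumptions, and refreshing $\boldsymbol{\gamma}$ and $m$ from $\mathbf{\Gamma}$ during the warm-up phase is an implementation choice (the pseudocode only updates $\mathbf{\Gamma}$ there).

```python
import numpy as np

def odss_lms_w(x, d, L, sigma_x2, sigma_v2, sigma_w2,
               P=2, K=2, mu=1.0, delta=1e-6, c=1e-6):
    """Sketch of Algorithm 2 (ODSS-LMS-W, white-input simplification).
    Defaults for P, K, mu, delta, c are illustrative; keeping gamma and m
    in sync with Gamma during warm-up is an assumed implementation choice."""
    w = np.zeros(L)                      # w_hat(0)
    gamma = c * np.ones(L)               # gamma(0)
    Rc = c * np.eye(L)                   # R_c(0)
    Gamma = Rc + sigma_w2 * np.eye(L)    # Gamma(0)
    m = np.trace(Gamma) - L * sigma_w2   # m(0), a small positive value
    lam = 1.0 - 1.0 / (P * L)
    xbuf = np.zeros(L)                   # tapped-delay-line input vector x(n)
    for n in range(len(x)):
        xbuf = np.concatenate(([x[n]], xbuf[:-1]))
        alpha = d[n] - xbuf @ w                           # a priori error
        if n < K * L:
            # NLMS warm-up phase
            u = mu / (xbuf @ xbuf + delta) * xbuf * alpha
            Rc = lam * Rc + (1 - lam) * np.outer(u, u)
            Gamma = Rc + sigma_w2 * np.eye(L)
            gamma = np.diag(Gamma).copy()          # sync gamma/m (assumption)
            m = np.trace(Gamma) - L * sigma_w2
        else:
            # M(n) is diagonal, so M^{-1} gamma is an element-wise division
            Mdiag = sigma_x2 * (m + sigma_w2 * (L + 2)) \
                    + 2 * sigma_x2 * gamma + sigma_v2
            y_o = gamma / Mdiag                    # optimal per-tap step-sizes
            # RHS uses gamma(n-1) throughout; tr[Gamma(n-1)] = gamma.sum()
            gamma = (gamma + sigma_w2
                     + 2 * sigma_x2 * (sigma_x2 * y_o - 1) * gamma * y_o
                     + sigma_x2 * (sigma_x2 * gamma.sum() + sigma_v2)
                       * y_o * y_o)
            m = gamma.sum() - L * sigma_w2         # m(n) = tr[Gamma(n)] - L*sw2
            u = y_o * xbuf * alpha
        w = w + u
    return w
```

Each iteration of the optimized branch costs only $O(L)$ beyond the filtering itself, versus the $O(L^2)$–$O(L^3)$ matrix recursions of Algorithm 1.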

## 5. Simulation Results

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Widrow, B. Least-Mean-Square Adaptive Filters; Haykin, S.S., Widrow, B., Eds.; Wiley: Hoboken, NJ, USA, 2003.
- Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. **2000**, 8, 508–518.
- Gay, S.L. An efficient, fast converging adaptive filter for network echo cancellation. In Proceedings of the Thirty-Second Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–4 November 1998; pp. 394–398.
- Benesty, J.; Gay, S.L. An improved PNLMS algorithm. In Proceedings of the 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, Orlando, FL, USA, 13–17 May 2002.
- Deng, H.; Doroslovački, M. Proportionate adaptive algorithms for network echo cancellation. IEEE Trans. Signal Process. **2006**, 54, 1794–1803.
- Das Chagas de Souza, F.; Tobias, O.J.; Seara, R.; Morgan, D.R. A PNLMS algorithm with individual activation factors. IEEE Trans. Signal Process. **2010**, 58, 2036–2047.
- Liu, J.; Grant, S.L. Proportionate adaptive filtering for block-sparse system identification. IEEE/ACM Trans. Audio Speech Lang. Process. **2016**, 24, 623–630.
- Gu, Y.; Jin, J.; Mei, S. $\ell_0$ norm constraint LMS algorithm for sparse system identification. IEEE Signal Process. Lett. **2009**, 16, 774–777.
- Loganathan, P.; Khong, A.W.H.; Naylor, P.A. A class of sparseness-controlled algorithms for echo cancellation. IEEE Trans. Audio Speech Lang. Process. **2009**, 17, 1591–1601.
- Li, Y.; Wang, Y.; Sun, L. A proportionate normalized maximum correntropy criterion algorithm with correntropy induced metric constraint for identifying sparse systems. Symmetry **2018**, 10, 683.
- Rusu, A.G.; Ciochină, S.; Paleologu, C. On the step-size optimization of the LMS algorithm. In Proceedings of the 42nd International Conference on Telecommunications and Signal Processing, Budapest, Hungary, 1–3 July 2019; pp. 168–173.
- Ciochină, S.; Paleologu, C.; Benesty, J.; Grant, S.L.; Anghel, A. A family of optimized LMS-based algorithms for system identification. In Proceedings of the 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary, 29 August–2 September 2016; pp. 1803–1807.
- Enzner, G.; Buchner, H.; Favrot, A.; Kuech, F. Acoustic echo control. In Academic Press Library in Signal Processing; Academic Press: Cambridge, MA, USA, 2014; Volume 4, pp. 807–877.
- Shin, H.-C.; Sayed, A.H.; Song, W.-J. Variable step-size NLMS and affine projection algorithms. IEEE Signal Process. Lett. **2004**, 11, 132–135.
- Benesty, J.; Rey, H.; Rey Vega, L.; Tressens, S. A nonparametric VSS-NLMS algorithm. IEEE Signal Process. Lett. **2006**, 13, 581–584.
- Park, P.; Chang, M.; Kong, N. Scheduled-stepsize NLMS algorithm. IEEE Signal Process. Lett. **2009**, 16, 1055–1058.
- Huang, H.-C.; Lee, J. A new variable step-size NLMS algorithm and its performance analysis. IEEE Trans. Signal Process. **2012**, 60, 2055–2060.
- Song, I.; Park, P. A normalized least-mean-square algorithm based on variable-step-size recursion with innovative input data. IEEE Signal Process. Lett. **2012**, 19, 817–820.
- Isserlis, L. On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika **1918**, 12, 134–139.
- Iqbal, M.A.; Grant, S.L. Novel variable step size NLMS algorithms for echo cancellation. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 241–244.
- Paleologu, C.; Ciochină, S.; Benesty, J. Variable step-size NLMS algorithm for under-modeling acoustic echo cancellation. IEEE Signal Process. Lett. **2008**, 15, 5–8.
- Strassen, V. Gaussian elimination is not optimal. Numer. Math. **1969**, 13, 354–356.
- Coppersmith, D.; Winograd, S. Matrix multiplication via arithmetic progressions. J. Symb. Comput. **1990**, 9, 251–280.
- Ciochină, S.; Paleologu, C.; Benesty, J. An optimized NLMS algorithm for system identification. Signal Process. **2016**, 118, 115–121.
- ITU. Digital Network Echo Cancellers; ITU-T Recommendation G.168; ITU: Geneva, Switzerland, 2002.
- Hoyer, P.O. Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res. **2004**, 5, 1457–1469.

**Figure 1.** Echo paths used in simulations: (**a**) acoustic echo path and (**b**) network echo path (adapted from the G.168 Recommendation [25]).

**Figure 2.** Normalized misalignment of the IPNLMS, ODSS-LMS-G, and ODSS-LMS-W algorithms, where ${\sigma}_{w}^{2}=0$, the input signal is white Gaussian noise, $L=512$, and SNR = 20 dB. The unknown system is the acoustic echo path.

**Figure 3.** Normalized misalignment of the IPNLMS and ODSS-LMS-G algorithms for different values of ${\sigma}_{w}^{2}$. The echo path changes at time 10 s. The input signal is white Gaussian noise, $L=512$, and SNR = 20 dB. The unknown system is the acoustic echo path.

**Figure 4.** Normalized misalignment of the IPNLMS and ODSS-LMS-G algorithms for different values of ${\sigma}_{w}^{2}$. The echo path changes at time 10 s. The input signal is white Gaussian noise, $L=512$, and SNR = 20 dB. The unknown system is the network echo path.

**Figure 5.** Normalized misalignment of the IPNLMS, JO-NLMS, ODSS-LMS-G, and ODSS-LMS-W algorithms for different values of ${\sigma}_{w}^{2}$. The input signal is an AR(1) process, with $\beta =0.9$, $L=512$, and SNR = 20 dB. The unknown system is the acoustic echo path.

**Figure 6.** Normalized misalignment of the IPNLMS, ODSS-LMS-G, and ODSS-LMS-W algorithms for different values of ${\sigma}_{w}^{2}$. The echo path changes at time 10 s. The input signal is an AR(1) process, with $\beta =0.8$, $L=512$, and SNR = 20 dB. The unknown system is the network echo path.

**Figure 7.** Normalized misalignment of the IPNLMS, ODSS-LMS-G, and ODSS-LMS-W algorithms for different values of ${\sigma}_{w}^{2}$. The echo path changes at time 10 s. The input signal is an AR(1) process, with $\beta =0.9$, $L=512$, and SNR = 20 dB. The unknown system is the acoustic echo path.

**Figure 8.** Normalized misalignment of the IPNLMS, JO-NLMS, ODSS-LMS-G, and ODSS-LMS-W algorithms. The input signal is a speech sequence, $L=512$, and SNR = 20 dB. The unknown system is the network echo path.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Rusu, A.-G.; Ciochină, S.; Paleologu, C.; Benesty, J. An Optimized Differential Step-Size LMS Algorithm. *Algorithms* **2019**, *12*, 147. https://doi.org/10.3390/a12080147