A Kernel Least Mean Square Algorithm Based on Randomized Feature Networks
Abstract
1. Introduction
2. The KLMS Algorithm Based on Random Fourier Features
2.1. The KLMS Algorithm
2.2. The Random Fourier Feature Mapping
2.3. Randomized Feature Networks-Based KLMS Algorithm
Algorithm 1: The KLMS-RFN Algorithm. 
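Only the title of Algorithm 1 is reproduced here. The update it describes — map each input through a fixed random Fourier feature (RFF) vector, then run a standard LMS recursion on the feature-space weights — can be sketched as follows. This is a minimal illustration under common assumptions (a Gaussian kernel with bandwidth `sigma`, paired cos/sin features); the function and variable names are ours, not the paper's.

```python
import numpy as np

def make_rff_map(input_dim, num_features, sigma, rng):
    """Draw the random frequencies once; they stay fixed during filtering.
    For a Gaussian kernel, the frequencies are sampled from the kernel's
    spectral density, a zero-mean Gaussian with std 1/sigma."""
    w = rng.normal(0.0, 1.0 / sigma, size=(num_features, input_dim))

    def phi(x):
        # Paired cos/sin features; the 1/sqrt(M) scaling makes
        # phi(x).dot(phi(y)) an unbiased estimate of the kernel.
        proj = w @ x
        return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(num_features)

    return phi

def klms_rfn(inputs, desired, phi, mu, feature_dim):
    """LMS in the (2M-dimensional) random feature space."""
    omega = np.zeros(feature_dim)       # weight vector in feature space
    errors = []
    for x, d in zip(inputs, desired):
        z = phi(x)
        e = d - omega @ z               # a priori error
        omega = omega + mu * e * z      # LMS update on the feature weights
        errors.append(e)
    return omega, np.array(errors)
```

Because the feature map is fixed in advance, the per-sample cost stays constant in time, in contrast to dictionary-growing kernel adaptive filters.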

3. The Mean Square Convergence Analysis
3.1. The Energy Conversion Relation
3.2. Mean Square Convergence Condition
3.3. Steady State Mean Square Error Analysis
(1) When the step size $\mu $ is sufficiently small, the term $\frac{\mu}{2}\underset{n\to +\infty}{lim}E\left[{\Vert \phi (\mathit{x}(n))\Vert}^{2}{e}_{a}{(n)}^{2}\right]$ can be assumed to be much smaller than $\frac{\mu {\sigma}_{v}^{2}}{2}\underset{n\to +\infty}{lim}E\left[{\Vert \phi (\mathit{x}(n))\Vert}^{2}\right]$. Therefore, for a small $\mu $, the following is obtained:$$\underset{n\to +\infty}{lim}E\left[{e}_{a}{(n)}^{2}\right]=\frac{\mu {\sigma}_{v}^{2}}{2}\underset{n\to +\infty}{lim}E\left[{\Vert \phi (\mathit{x}(n))\Vert}^{2}\right]$$
(2) When the step size $\mu $ is larger (though still within the range that guarantees filter stability), the following independence assumption is required to obtain an expression for the EMSE: at steady state, the input signal $\mathit{x}(n)$ is statistically independent of ${e}_{a}(n)$. Thus, we obtain$$\underset{n\to +\infty}{lim}E\left[{e}_{a}{(n)}^{2}\right]=\frac{{\displaystyle \mu {\sigma}_{v}^{2}\underset{n\to +\infty}{lim}E\left[{\Vert \phi (\mathit{x}(n))\Vert}^{2}\right]}}{{\displaystyle 2-\mu \underset{n\to +\infty}{lim}E\left[{\Vert \phi (\mathit{x}(n))\Vert}^{2}\right]}}$$
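The two expressions are consistent: writing $P=\lim_{n\to+\infty}E[\Vert\phi(\mathit{x}(n))\Vert^{2}]$, the general EMSE $\mu\sigma_v^2 P/(2-\mu P)$ always exceeds the small-step approximation $\mu\sigma_v^2 P/2$, and the gap is of order $\mu$. A few lines of code make this sanity check concrete; the numerical values of $P$ and $\sigma_v^2$ below are illustrative only.

```python
# Sanity check of the two steady-state EMSE expressions.
# P stands for lim E[||phi(x(n))||^2]; sigma_v2 is the noise variance.
sigma_v2 = 0.01
P = 1.0

def emse_small_mu(mu):
    # Small-step-size approximation.
    return mu * sigma_v2 * P / 2.0

def emse_general(mu):
    # General expression, valid for 0 < mu * P < 2 (stability range).
    return mu * sigma_v2 * P / (2.0 - mu * P)

for mu in [0.5, 0.1, 0.01]:
    approx, exact = emse_small_mu(mu), emse_general(mu)
    assert exact > approx                  # general EMSE is always larger
    assert abs(exact / approx - 1) < mu    # relative gap is O(mu)
```

Both expressions also confirm the usual LMS trade-off: the steady-state EMSE shrinks linearly with $\mu$, at the cost of slower convergence.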
4. Computational Complexity
5. Simulations and Results
5.1. Lorenz Time Series Prediction
5.2. Nonlinear Channel Equalization
5.2.1. Time-Varying Channel Equalization
5.2.2. Abruptly Changed Channel Equalization
6. Discussion and Conclusions
Author Contributions
Conflicts of Interest
References
 Müller, K.-R.; Mika, S.; Rätsch, G.; Tsuda, K.; Schölkopf, B. An introduction to kernel-based learning algorithms. IEEE Trans. Neural Netw. 2001, 12, 181.
 Rojo-Álvarez, J.L.; Martínez-Ramón, M.; Muñoz-Marí, J.; Camps-Valls, G. Adaptive Kernel Learning for Signal Processing. In Digital Signal Processing with Kernel Methods; Wiley-IEEE Press: Hoboken, NJ, USA, 2018; pp. 387–431.
 Ding, G.; Wu, Q.; Yao, Y.-D.; Wang, J. Kernel-Based Learning for Statistical Signal Processing in Cognitive Radio Networks: Theoretical Foundations, Example Applications, and Future Directions. IEEE Signal Process. Mag. 2013, 30, 126–136.
 Liu, W.; Pokharel, P.P.; Principe, J.C. The Kernel Least-Mean-Square Algorithm. IEEE Trans. Signal Process. 2008, 56, 543–554.
 Engel, Y.; Mannor, S.; Meir, R. The kernel recursive least-squares algorithm. IEEE Trans. Signal Process. 2004, 52, 2275–2285.
 Liu, W.; Principe, J.C. Kernel Affine Projection Algorithms. EURASIP J. Adv. Signal Process. 2008, 2008, 1–12.
 Parreira, W.D.; Bermudez, J.C.M.; Richard, C.; Tourneret, J.-Y. Stochastic behavior analysis of the Gaussian Kernel Least Mean Square algorithm. IEEE Trans. Signal Process. 2012, 60, 2208–2222.
 Zhao, J.; Liao, X.; Wang, S.; Chi, K.T. Kernel Least Mean Square with Single Feedback. IEEE Signal Process. Lett. 2015, 22, 953–957.
 Paul, T.K.; Ogunfunmi, T. A Kernel Adaptive Algorithm for Quaternion-Valued Inputs. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2422–2439.
 Haghighat, N.; Kalbkhani, H.; Shayesteh, M.G.; Nouri, M. Variable bit rate video traffic prediction based on kernel least mean square method. IET Image Process. 2015, 9, 777–794.
 Platt, J.C. A Resource-Allocating Network for Function Interpolation. Neural Comput. 1991, 3, 213–225.
 Liu, W.; Park, I.; Principe, J.C. An Information Theoretic Approach of Designing Sparse Kernel Adaptive Filters. IEEE Trans. Neural Netw. 2009, 20, 1950–1961.
 Richard, C.; Bermudez, J.C.M.; Honeine, P. Online Prediction of Time Series Data With Kernels. IEEE Trans. Signal Process. 2009, 57, 1058–1067.
 Chen, B.; Zhao, S.; Zhu, P.; Principe, J.C. Quantized Kernel Least Mean Square Algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 22–32.
 Chen, B.; Zheng, N.; Principe, J.C. Sparse kernel recursive least squares using L1 regularization and a fixed-point subiteration. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 5257–5261.
 Gao, W.; Chen, J.; Richard, C.; Huang, J. Online Dictionary Learning for Kernel LMS. IEEE Trans. Signal Process. 2014, 62, 2765–2777.
 Zhao, S.; Chen, B.; Zhu, P.; Principe, J.C. Fixed budget quantized kernel least-mean-square algorithm. Signal Process. 2013, 93, 2759–2770.
 Rahimi, A.; Recht, B. Random features for large-scale kernel machines. In Proceedings of the International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 3–6 December 2007; pp. 1177–1184.
 Rahimi, A.; Recht, B. Uniform approximation of functions with random bases. In Proceedings of the Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, USA, 23–26 September 2008; pp. 555–561.
 Shakiba, N.; Rueda, L. MicroRNA identification using linear dimensionality reduction with explicit feature mapping. BMC Proc. 2013, 7, S8.
 Hu, Z.; Lin, M.; Zhang, C. Dependent Online Kernel Learning with Constant Number of Random Fourier Features. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2464–2476.
 Boroumand, M.; Fridrich, J. Applications of Explicit Non-Linear Feature Maps in Steganalysis. IEEE Trans. Inf. Forensics Secur. 2018, 13, 823–833.
 Sharma, M.; Jayadeva; Soman, S.; Pant, H. Large-Scale Minimal Complexity Machines Using Explicit Feature Maps. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2653–2662.
 Rudin, W. Fourier Analysis on Groups; Interscience Publishers: Geneva, Switzerland, 1962; p. 82.
 Sutherland, D.J.; Schneider, J. On the error of random Fourier features. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, Amsterdam, The Netherlands, 12–16 July 2015; pp. 862–871.
 Yousef, N.R.; Sayed, A.H. A unified approach to the steady-state and tracking analyses of adaptive filters. IEEE Trans. Signal Process. 2001, 49, 314–324.
 Al-Naffouri, T.Y.; Sayed, A.H. Transient analysis of data-normalized adaptive filters. IEEE Trans. Signal Process. 2003, 51, 639–652.
 Mirmomeni, M.; Lucas, C.; Araabi, B.N.; Moshiri, B.; Bidar, M.R. Recursive spectral analysis of natural time series based on eigenvector matrix perturbation for online applications. IET Signal Process. 2011, 5, 515–526.
 Chandra, R.; Zhang, M. Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction. Neurocomputing 2012, 86, 116–123.
 Miranian, A.; Abdollahzade, M. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 207–218.
 Kechriotis, G.; Zervas, E.; Manolakos, E.S. Using recurrent neural networks for adaptive communication channel equalization. IEEE Trans. Neural Netw. 1994, 5, 267–278.
 Choi, J.; Lima, A.C.C.; Haykin, S. Kalman filter-trained recurrent neural equalizers for time-varying channels. IEEE Trans. Commun. 2005, 53, 472–480.
 Liang, Q.; Mendel, J.M. Equalization of nonlinear time-varying channels using type-2 fuzzy adaptive filters. IEEE Trans. Fuzzy Syst. 2000, 8, 551–563.
 Patra, J.C.; Meher, P.K.; Chakraborty, G. Nonlinear channel equalization for wireless communication systems using Legendre neural networks. Signal Process. 2009, 89, 2251–2262.
 Xu, L.; Huang, D.; Guo, Y.J. Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 3009–3020.
The Number of Calculations of the KLMS-RFN Algorithm. 

(1) Calculating the random Fourier feature vector $\phi (\mathit{x}(n))$ 
Number of multiplications $=(N+2)M$; 
Number of additions $=(N+1)M$; 
Number of $\sin(\cdot)/\cos(\cdot)$ evaluations $=2M$ 
(2) Calculating the output of the filter 
Number of multiplications $=2M$; 
Number of additions $=2M-1$; 
(3) Updating the weight vector 
Number of multiplications $=2M+1$; 
Number of additions $=2M$; 
(4) Total calculations 
Total number of multiplications $=(N+6)M+1$; 
Total number of additions $=(N+5)M-1$; 
Total number of $\cos(\cdot)/\sin(\cdot)$ evaluations $=2M$ 
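The totals above follow by summing the three stages; a few lines of code make the bookkeeping explicit. Here $N$ is the input dimension and $M$ the number of random frequencies (so the feature vector has $2M$ entries). This is symbolic arithmetic, not a benchmark, and the function name is ours.

```python
def klms_rfn_op_counts(N, M):
    """Per-iteration operation counts for KLMS-RFN:
    feature map + filter output + weight update."""
    mults = (N + 2) * M + 2 * M + (2 * M + 1)
    adds = (N + 1) * M + (2 * M - 1) + 2 * M
    trig = 2 * M  # one sin and one cos per random frequency
    return mults, adds, trig

# Check the stage sums against the stated closed-form totals.
for N, M in [(3, 100), (10, 50)]:
    mults, adds, trig = klms_rfn_op_counts(N, M)
    assert mults == (N + 6) * M + 1
    assert adds == (N + 5) * M - 1
    assert trig == 2 * M
```

The key point is that every count is linear in $M$ and independent of the number of processed samples, unlike dictionary-based kernel adaptive filters whose cost grows with the dictionary size.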
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Liu, Y.; Sun, C.; Jiang, S. A Kernel Least Mean Square Algorithm Based on Randomized Feature Networks. Appl. Sci. 2018, 8, 458. https://doi.org/10.3390/app8030458