Article

p-Norm-like Affine Projection Sign Algorithm for Sparse System to Ensure Robustness against Impulsive Noise

1 Department of Electronic Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
2 Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Korea
3 Department of Optics and Mechatronics Engineering, Pusan National University, Busan 46241, Korea
4 Department of Electronics, Information and Communication Engineering, Mokpo National University, Muan 58554, Korea
5 Department of Automotive Engineering, Kookmin University, Seoul 02707, Korea
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(10), 1916; https://doi.org/10.3390/sym13101916
Submission received: 11 September 2021 / Revised: 5 October 2021 / Accepted: 8 October 2021 / Published: 12 October 2021

Abstract

An improved affine projection sign algorithm (APSA) was developed herein using an $L_p$-norm-like constraint to increase the convergence rate in sparse systems. The proposed APSA is robust against impulsive noise because APSA-type algorithms are generally based on the $L_1$-norm minimization of error signals. Moreover, the proposed algorithm can enhance the filter performance in terms of the convergence rate owing to the $L_p$-norm-like constraint in sparse systems. Since the novel cost function of the proposed APSA was designed to maintain a form similar to that of the original APSA, the two algorithms have symmetric properties. According to the simulation results, the proposed APSA enhances the convergence rate of sparse system identification in the presence of impulsive noise more effectively than the existing APSA-type algorithms.

1. Introduction

Adaptive filtering theory has been widely applied in several domains, such as system identification, channel estimation, and noise and echo cancellation [1,2,3,4,5,6]. As shown in Figure 1, the main purpose of an adaptive filter is to estimate the filter coefficients precisely so that the error signal is minimized for a given input signal. Representative adaptive filtering algorithms include the least-mean-squares (LMS) algorithm and the normalized LMS algorithm, which have low computational complexity and can be easily implemented. In addition, the affine projection algorithm (APA) [7,8,9] has been developed to enhance the convergence performance for correlated input signals. However, because LMS-type and APA-type algorithms are based on the $L_2$-norm optimization of error signals, their performance deteriorates in the presence of system output noise that includes impulsive noise.
To ensure satisfactory filter performance even in the presence of impulsive noise, several adaptive filtering algorithms derived from $L_1$-norm optimization have recently been proposed [10,11,12,13]. A representative algorithm that is robust against impulsive noise is the affine projection sign algorithm (APSA) [11]. However, the APSA is not particularly effective for sparse system identification, such as network echo cancellation or underwater acoustic channel estimation, in which the impulse response is primarily composed of near-zero coefficients and only a few large coefficients. In this context, many adaptive filters have been proposed to optimize filter performance by considering system sparsity [12,13,14,15,16,17]. Among such frameworks, adaptive filtering algorithms based on an $L_0$-norm constraint have been considered to enhance the convergence rate in sparse systems [13,14]. Along the same lines, an $L_p$-norm-like concept has been introduced to enhance the filter performance of LMS-type algorithms [18,19,20], and it has been demonstrated that the $L_p$-norm-like concept can contribute to the performance enhancement of adaptive filters in sparse systems. However, owing to the inherent characteristics of LMS-type algorithms, their performance remains unsatisfactory in cases involving impulsive noise.
Considering these aspects, this paper reports an improved APSA for sparse system identification. The algorithm is based on a novel cost function derived from $L_1$-norm optimization including the $L_p$-norm-like constraint. Unlike the original APSA, the proposed cost function uses the a priori error instead of the a posteriori error. Consequently, the proposed APSA does not need to approximate the a posteriori error by the a priori error while formulating the equation to update the filter coefficient vectors. The performance of the proposed APSA in a general sparse system was evaluated and compared with those of the APSA [11], real-coefficient proportionate APSA (RP-APSA) [12], real-coefficient improved proportionate APSA (RIP-APSA) [12] and $L_0$-APSA [13].
This paper is organized as follows. Section 2 describes the original APSA. Section 3 explains the proposed $L_p$-norm-like APSA in detail. Section 4 presents the simulation results to verify the performance of the proposed APSA. Finally, Section 5 concludes the paper.

2. Original APSA

The data vector $d_i$ from an unknown target system is defined as

$$d_i = \mathbf{u}_i^T \mathbf{w}^o + v_i, \tag{1}$$

where $\mathbf{w}^o$ is the $n$-dimensional column vector to be estimated, $v_i$ indicates the measurement noise with variance $\sigma_v^2$, and the input vector $\mathbf{u}_i = [u_i \; u_{i-1} \; \cdots \; u_{i-n+1}]^T$. The output error vector is denoted by $\mathbf{e}_i$, the desired output data vector by $\mathbf{d}_i$, the data matrix by $\mathbf{U}_i$, and $\hat{\mathbf{w}}_i$ is the estimate of $\mathbf{w}^o$ at iteration $i$, as follows:

$$\mathbf{e}_i = \mathbf{d}_i - \mathbf{U}_i^T \hat{\mathbf{w}}_i, \tag{2}$$

$$\mathbf{d}_i = [d_i \; d_{i-1} \; \cdots \; d_{i-M+1}]^T, \tag{3}$$

$$\mathbf{U}_i = [\mathbf{u}_i \; \mathbf{u}_{i-1} \; \cdots \; \mathbf{u}_{i-M+1}], \tag{4}$$

$$\hat{\mathbf{w}}_i = [\hat{w}_i(0), \ldots, \hat{w}_i(n-1)]^T. \tag{5}$$
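To make the signal model concrete, the quantities above can be assembled as follows (an illustrative Python sketch; the small dimensions, the random target system, and the noise level are our assumptions, not values taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n, M = 8, 4                    # small filter length and projection order for illustration
w_o = rng.standard_normal(n)   # unknown target system w^o
sigma_v = 0.01                 # measurement-noise standard deviation
u = rng.standard_normal(100)   # input stream u_0, u_1, ...

def regressor(u, i, n):
    """Input vector u_i = [u_i, u_{i-1}, ..., u_{i-n+1}]^T (zero-padded before time 0)."""
    return np.array([u[i - k] if i - k >= 0 else 0.0 for k in range(n)])

i = 50
# Data matrix U_i = [u_i, u_{i-1}, ..., u_{i-M+1}] of size n x M
U_i = np.column_stack([regressor(u, i - j, n) for j in range(M)])
# Desired vector d_i = U_i^T w^o + noise, and the a priori error e_i
d_i = U_i.T @ w_o + sigma_v * rng.standard_normal(M)
w_hat = np.zeros(n)            # current filter estimate
e_i = d_i - U_i.T @ w_hat
```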
The original APSA [11] is derived by minimizing the $L_1$-norm of the a posteriori error vector with a constraint on the filter coefficient vectors, as shown below:

$$\min_{\hat{\mathbf{w}}_{i+1}} \|\mathbf{d}_i - \mathbf{U}_i^T \hat{\mathbf{w}}_{i+1}\|_1 \quad \text{subject to} \quad \|\hat{\mathbf{w}}_{i+1} - \hat{\mathbf{w}}_i\|_2^2 \le \mu^2, \tag{6}$$

where $\mu^2$ ensures that the filter coefficient vectors do not change abruptly. Using the gradient descent method, the filter coefficient vector of the original APSA is recursively defined by the following update equation [11]:

$$\hat{\mathbf{w}}_{i+1} = \hat{\mathbf{w}}_i + \mu \frac{\mathbf{U}_i \, \mathrm{sgn}(\mathbf{e}_i)}{\sqrt{\mathrm{sgn}(\mathbf{e}_i^T) \, \mathbf{U}_i^T \mathbf{U}_i \, \mathrm{sgn}(\mathbf{e}_i)}}, \tag{7}$$

where $\mu$ represents the step size, $\mathrm{sgn}(\cdot)$ is the sign function, and $\mathrm{sgn}(\mathbf{e}_i) = [\mathrm{sgn}(e_i), \ldots, \mathrm{sgn}(e_{i-M+1})]^T$.
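The update above can be implemented directly; the sketch below is an illustrative Python version (the tiny `eps` added to the denominator is our own safeguard against division by zero when the error vector is all zeros, not part of the original formulation):

```python
import numpy as np

def apsa_update(w_hat, U, d, mu, eps=1e-12):
    """One iteration of the original APSA.

    w_hat : current estimate (length n)
    U     : n x M data matrix [u_i ... u_{i-M+1}]
    d     : length-M desired vector
    mu    : step size
    """
    e = d - U.T @ w_hat                   # a priori error vector e_i
    s = np.sign(e)
    g = U @ s                             # U_i sgn(e_i)
    denom = np.sqrt(s @ (U.T @ g)) + eps  # sqrt(sgn(e)^T U^T U sgn(e)) = ||U sgn(e)||_2
    return w_hat + mu * g / denom
```

Because the step direction is normalized, each update moves the estimate by at most $\mu$, which is what makes the algorithm insensitive to the magnitude of impulsive outliers in the error.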

3. Proposed $L_p$-Norm-like APSA

The proposed affine projection sign algorithm is newly formulated by minimizing the $L_1$-norm of the a priori error vector with the $L_p$-norm-like constraint [18] as follows:

$$\min_{\hat{\mathbf{w}}_i} \|\mathbf{d}_i - \mathbf{U}_i^T \hat{\mathbf{w}}_i\|_1 + \gamma \|\hat{\mathbf{w}}_i\|_p^p, \tag{8}$$

where $\|\cdot\|_p^p$ denotes the $L_p$-norm-like entity that exerts zero attraction, $p$ denotes the order of the norm, and $\gamma$ adjusts the effect of the $L_p$-norm-like constraint. The proposed cost function can be derived from (8) as

$$J(\hat{\mathbf{w}}_i) = \|\mathbf{e}_i\|_1 + \gamma \|\hat{\mathbf{w}}_i\|_p^p. \tag{9}$$
The derivative of the proposed cost function (9) with respect to the filter coefficient vector $\hat{\mathbf{w}}_i$ is as follows:

$$\nabla_{\hat{\mathbf{w}}_i} J(\hat{\mathbf{w}}_i) = \frac{\partial J(\hat{\mathbf{w}}_i)}{\partial \hat{\mathbf{w}}_i} = -\mathbf{U}_i \, \mathrm{sgn}(\mathbf{e}_i) + \gamma \frac{\partial \|\hat{\mathbf{w}}_i\|_p^p}{\partial \hat{\mathbf{w}}_i} \triangleq -\mathbf{U}_i \, \mathrm{sgn}(\mathbf{e}_i) + \gamma f(\hat{\mathbf{w}}_i), \tag{10}$$

where $f(\hat{\mathbf{w}}_i) \triangleq [f(\hat{w}_i(0)), \ldots, f(\hat{w}_i(n-1))]^T$.
The $L_p$-norm-like entity is widely defined as

$$\|\hat{\mathbf{w}}_i\|_p^p = \sum_{k=0}^{n-1} |\hat{w}_i(k)|^p, \quad 0 \le p \le 1. \tag{11}$$

The derivative of (11) with respect to the filter coefficient vector can be expressed in a component-wise manner as

$$f(\hat{w}_i(k)) = \frac{\partial \|\hat{\mathbf{w}}_i\|_p^p}{\partial \hat{w}_i(k)} = \frac{p \, \mathrm{sgn}(\hat{w}_i(k))}{|\hat{w}_i(k)|^{1-p} + \epsilon}, \quad 0 \le k < n, \tag{12}$$

where $\epsilon$ is an extremely small positive parameter introduced to avoid division by zero.
Moreover, the normalized gradient descent method is used to modify the updating equation of the filter coefficient vectors derived from the proposed cost function:

$$\hat{\mathbf{w}}_{i+1} = \hat{\mathbf{w}}_i - \mu \frac{\nabla_{\hat{\mathbf{w}}_i} J(\hat{\mathbf{w}}_i)}{\|\nabla_{\hat{\mathbf{w}}_i} J(\hat{\mathbf{w}}_i)\|_2} = \hat{\mathbf{w}}_i + \mu \frac{\mathbf{U}_i \, \mathrm{sgn}(\mathbf{e}_i) - \gamma f(\hat{\mathbf{w}}_i)}{\sqrt{(\mathbf{U}_i \, \mathrm{sgn}(\mathbf{e}_i) - \gamma f(\hat{\mathbf{w}}_i))^T (\mathbf{U}_i \, \mathrm{sgn}(\mathbf{e}_i) - \gamma f(\hat{\mathbf{w}}_i))}}, \tag{13}$$

where $\mu$ is the step-size parameter of the proposed APSA. As can be seen from (7) and (13), the original APSA and the proposed APSA share a symmetric update form, which preserves their robustness against impulsive noise. Moreover, owing to the $L_p$-norm-like constraint, the proposed APSA has a zero-attraction property that increases the convergence rate in sparse systems.
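For illustration, one iteration of the proposed update can be sketched in Python as follows (the variable names and the tiny denominator safeguard are our assumptions; the default parameter values mirror those used later in the simulations):

```python
import numpy as np

def lp_apsa_update(w_hat, U, d, mu, gamma=0.03, p=0.1, eps=0.01):
    """One iteration of the proposed Lp-norm-like APSA.

    U : n x M data matrix, d : length-M desired vector.
    """
    e = d - U.T @ w_hat                                # a priori error e_i
    s = np.sign(e)
    # Component-wise zero-attraction term:
    # f(w(k)) = p * sgn(w(k)) / (|w(k)|^(1-p) + eps)
    f = p * np.sign(w_hat) / (np.abs(w_hat) ** (1.0 - p) + eps)
    g = U @ s - gamma * f                              # normalized-gradient direction
    return w_hat + mu * g / (np.sqrt(g @ g) + 1e-12)
```

When the error is small, the $-\gamma f(\hat{\mathbf{w}}_i)$ term dominates and gently pulls near-zero coefficients toward zero, which is the source of the faster convergence in sparse systems.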

4. Simulation Results

The filter performance of the proposed APSA was evaluated through computer simulations involving system-identification scenarios. An unknown target system was randomly generated with 128 taps ($n = 128$), and the adaptive filter was set to have the same number of taps as the unknown target system. Moreover, 96 of the 128 filter coefficients were set to near-zero values to establish a general sparse system; its sparsity is shown in Figure 2.
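A general sparse system of this kind can be constructed as follows (an illustrative sketch; the paper does not specify the exact tap values, so the near-zero scale and the unit-magnitude dominant taps are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 128
w_o = 1e-3 * rng.standard_normal(n)         # 96 near-zero coefficients
active = rng.choice(n, 32, replace=False)   # indices of the 32 dominant taps
w_o[active] = rng.choice([-1.0, 1.0], 32)   # assumed unit-magnitude dominant taps
```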
Each adaptive filter was tested with a projection order $M = 4$. In our simulations, three types of input signals were used: white input, autoregressive (AR) and autoregressive moving-average (ARMA). The correlated input signals for the AR and ARMA models were generated by filtering white Gaussian noise through the following systems:

$$G_1(z) = \frac{1}{1 - 0.7 z^{-1}}, \tag{14}$$

$$G_2(z) = \frac{1 + 0.6 z^{-1}}{1 + z^{-1} + 0.21 z^{-2}}. \tag{15}$$
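The correlated inputs can be generated by filtering white Gaussian noise through $G_1(z)$ and $G_2(z)$, for example with `scipy.signal.lfilter` (a sketch; the sample count and random seed are arbitrary choices of ours):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
white = rng.standard_normal(10_000)         # white Gaussian input

# AR input through G1(z) = 1 / (1 - 0.7 z^-1)
ar_input = lfilter([1.0], [1.0, -0.7], white)

# ARMA input through G2(z) = (1 + 0.6 z^-1) / (1 + z^-1 + 0.21 z^-2)
arma_input = lfilter([1.0, 0.6], [1.0, 1.0, 0.21], white)
```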
The signal-to-noise ratio (SNR) was set to 30 dB when adding the measurement noise to the system output $y_i = \mathbf{u}_i^T \mathbf{w}^o$. The SNR is defined as

$$\mathrm{SNR} \triangleq 10 \log_{10} \frac{E[y_i^2]}{E[v_i^2]}. \tag{16}$$
The impulsive noise $n_i$ was generated as $n_i = k_i A_i$, where $k_i$ is a Bernoulli process with success probability $P[k_i = 1] = P_r$, and $A_i$ is zero-mean Gaussian noise with power $\sigma_A^2 = 1000 \sigma_y^2$ [11,13]. $P_r$, which denotes the probability of occurrence of impulsive noise, was set to 0.001. The normalized mean squared deviation (NMSD) was defined as

$$\mathrm{NMSD} \triangleq 10 \log_{10} \frac{E[\tilde{\mathbf{w}}_i^T \tilde{\mathbf{w}}_i]}{\mathbf{w}^{oT} \mathbf{w}^o}, \tag{17}$$

where the filter-coefficient error vector $\tilde{\mathbf{w}}_i \triangleq \mathbf{w}^o - \hat{\mathbf{w}}_i$. The results were obtained via ensemble averaging over 100 trials.
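The impulsive-noise model and the NMSD metric can be sketched as follows (illustrative Python; the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)

def impulsive_noise(num, sigma_y2, Pr=0.001):
    """n_i = k_i * A_i: Bernoulli(Pr) gate k_i times zero-mean Gaussian A_i
    with power sigma_A^2 = 1000 * sigma_y^2."""
    k = rng.random(num) < Pr
    A = np.sqrt(1000.0 * sigma_y2) * rng.standard_normal(num)
    return k * A

def nmsd_db(w_hat, w_o):
    """NMSD = 10 log10(||w^o - w_hat||^2 / ||w^o||^2), in dB."""
    dev = w_o - w_hat
    return 10.0 * np.log10((dev @ dev) / (w_o @ w_o))
```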

4.1. System Identification for Sparse System in Presence of Impulsive Noises

Figure 3 shows the NMSD learning curves for the original APSA [11], RP-APSA [12], RIP-APSA [12], $L_0$-APSA [13] and the proposed APSA with white input signals. Figure 4 and Figure 5 show the corresponding NMSD learning curves with the correlated input signals generated using $G_1(z)$ and $G_2(z)$, respectively. According to Figure 3, Figure 4 and Figure 5, the proposed $L_p$-norm-like APSA has a faster convergence rate than the existing algorithms for the representative white, AR and ARMA input signals. Because the convergence rate and the steady-state estimation error have a trade-off relationship, we focus the comparison on the convergence rate. To ensure a fair comparison of the convergence rate of the proposed APSA and the other algorithms, the algorithm parameters were selected such that the steady-state errors of all the algorithms were identical. Specifically, the parameters were set as follows: APSA ($\mu = 0.005$); RP-APSA ($\mu = 0.007$, $p = 0.1$); RIP-APSA ($\mu = 0.009$, $\alpha = 0$); $L_0$-APSA ($\mu = 0.022$, $\beta = 20$, $\gamma = 0.003$); and the proposed APSA ($\mu = 0.017$ for the white input and $\mu = 0.02$ for $G_1(z)$, $G_2(z)$ and the speech input; $\gamma = 0.03$, $p = 0.1$, $\epsilon = 0.01$). To obtain almost identical steady-state errors across all algorithms, the step size of the proposed algorithm for the white input differs slightly from that used in the AR and ARMA cases. The $p$ parameter of RP-APSA and the $\alpha$ parameter of RIP-APSA were set with reference to an existing study [12] for a fair comparison. As shown in Figure 3, Figure 4 and Figure 5, the proposed APSA exhibited a higher convergence rate than the other algorithms. In addition, the proposed APSA maintains its convergence-rate advantage even when the system changes suddenly, as shown in Figure 6, Figure 7 and Figure 8.

4.2. Speech Input Test Including a Double-Talk Situation

The proposed APSA was also tested with a speech input signal (Figure 9) to verify the filter performance in practical scenarios. Since the speech input is real human speech data, this simulation increases the reliability of the proposed APSA for practical use. As can be seen in Figure 10, the proposed APSA achieves a faster convergence rate and smaller steady-state estimation errors than the other algorithms. The proposed algorithm was also tested in a double-talk situation, as shown in Figure 11. The far-end and near-end input signals were speech signals, where the power of the near-end input signal was two times greater than that of the far-end input signal. The near-end input signal was added between iterations $5.2 \times 10^3$ and $6.2 \times 10^3$. Figure 11 shows that the proposed APSA delivered better performance than the other algorithms in terms of the convergence rate and the steady-state estimation error. Even after the double-talk occurrence, the proposed APSA consistently exhibited smaller steady-state estimation errors.

4.3. Practical Considerations for the p Parameter

From Equation (19), when the $p$ parameter is close to 0, the proposed $L_p$-norm-like APSA reduces to the $L_0$-norm APSA. The better filter performance of the proposed APSA compared to the $L_0$-norm APSA is demonstrated in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 and Figure 10 and Figure 11. On the other hand, when $p$ is close to 1, as can be seen in Equation (20), the proposed $L_p$-norm-like APSA behaves like an $L_1$-norm APSA, which offers no advantage for sparse systems owing to the characteristics of the $L_1$ norm. Therefore, choosing a specific $p$ value between 0 and 1 is meaningful for improving the filter performance of the proposed APSA; to our knowledge, no previous research has applied the $L_p$-norm-like concept to APSA-type algorithms. Through parameter tuning, we found a specific $p$ value that maximizes the improvement of the proposed algorithm, as shown in Figure 12, Figure 13 and Figure 14. The chosen $p$ value consistently improves the filter performance regardless of whether the white, AR or ARMA input signal is used. When the $p$ parameter is set to 0.1, the proposed APSA achieves the best filter performance, with a fast convergence rate and small steady-state estimation errors. Even though parameter tuning cannot provide a precisely optimal value, $p = 0.1$ gave the best performance in our simulation scenarios. Since $p$ is restricted to values between 0 and 1, it is easy to sweep $p$ to find a proper value that adequately guarantees improved filter performance in each scenario.
It is important to choose the $p$ parameter of the proposed APSA carefully because its effect is dominant in adjusting the filter performance in terms of the convergence rate and the steady-state estimation errors. The $L_p$-norm-like definition can be represented by

$$\|\hat{\mathbf{w}}_i\|_p^p = \sum_{k=0}^{n-1} |\hat{w}_i(k)|^p, \quad 0 \le p \le 1. \tag{18}$$

From Equation (18), we can derive the $L_0$-norm term and the $L_1$-norm term by adjusting the $p$ value as follows:

$$\lim_{p \to 0} \|\hat{\mathbf{w}}_i\|_p^p = \|\hat{\mathbf{w}}_i\|_0 = \#\{k \mid \hat{w}_i(k) \ne 0\}, \tag{19}$$

$$\lim_{p \to 1} \|\hat{\mathbf{w}}_i\|_p^p = \|\hat{\mathbf{w}}_i\|_1 = \sum_{k=0}^{n-1} |\hat{w}_i(k)|. \tag{20}$$
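A quick numerical check of these two limits (an illustrative sketch):

```python
import numpy as np

def lp_norm_like(w, p):
    """Lp-norm-like entity: sum_k |w(k)|^p (with 0^p = 0 for p > 0)."""
    return float(np.sum(np.abs(w) ** p))

w = np.array([0.9, -0.5, 0.0, 0.0, 0.2, 0.0])

print(lp_norm_like(w, 0.001))  # close to 3, the number of nonzero taps (L0)
print(lp_norm_like(w, 1.0))    # ~1.6, the L1 norm
```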

5. Conclusions

An improved affine projection sign algorithm for sparse system identification was developed using an $L_p$-norm-like constraint. The $L_p$-norm-like constraint accelerates the convergence of the near-zero coefficients of a general sparse system. Moreover, a novel cost function including the $L_p$-norm-like constraint was used for the proposed APSA. Consequently, the convergence rate of the proposed APSA was higher than those of the original APSA, RP-APSA, RIP-APSA and $L_0$-APSA, as demonstrated by the simulation results.

Author Contributions

Conceptualization and formal analysis, J.S.; investigation and validation, J.K.; methodology and software, T.-K.K.; software and writing, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) (Nos. NRF-2021R1F1A1062153 and NRF-2021R1A5A1032937).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, R.; Zhao, H. A Novel Method for Online Extraction of Small-Angle Scattering Pulse Signals from Particles Based on Variable Forgetting Factor RLS Algorithm. Sensors 2021, 21, 5759.
  2. Qian, G.; Wang, S.; Lu, H.H.C. Maximum Total Complex Correntropy for Adaptive Filter. IEEE Trans. Signal Process. 2020, 68, 978–989.
  3. Shen, L.; Zakharov, Y.; Henson, B.; Morozs, N.; Mitchell, P.D. Adaptive filtering for full-duplex UWA systems with time-varying self-interference channel. IEEE Access 2020, 8, 187590–187604.
  4. Dogariu, L.-M.; Stanciu, C.L.; Elisei-Iliescu, C.; Paleologu, C.; Benesty, J.; Ciochina, S. Tensor-Based Adaptive Filtering Algorithms. Symmetry 2021, 13, 481.
  5. Kumar, K.; Pandey, R.; Karthik, M.L.N.S.; Bhattacharjee, S.S.; George, N.V. Robust and sparsity-aware adaptive filters: A Review. Signal Process. 2021, 189, 108276.
  6. Kivinen, J.; Warmuth, M.K.; Hassibi, B. The p-norm generalization of the LMS algorithm for adaptive filtering. IEEE Trans. Signal Process. 2006, 54, 1782–1793.
  7. Ozeki, K.; Umeda, T. An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron. Commun. Jpn. 1984, 67, 19–27.
  8. Jiang, Z.; Li, Y.; Huang, X.; Jin, Z. A Sparsity-Aware Variable Kernel Width Proportionate Affine Projection Algorithm for Identifying Sparse Systems. Symmetry 2019, 11, 1218.
  9. Li, G.; Wang, G.; Dai, Y.; Sun, Q.; Yang, X.; Zhang, H. Affine projection mixed-norm algorithms for robust filtering. Signal Process. 2021, 187, 108153.
  10. Li, G.; Zhang, H.; Zhao, J. Modified Combined-Step-Size Affine Projection Sign Algorithms for Robust Adaptive Filtering in Impulsive Interference Environments. Symmetry 2020, 12, 385.
  11. Shao, T.; Zheng, Y.R.; Benesty, J. An affine projection sign algorithm robust against impulsive interference. IEEE Signal Process. Lett. 2010, 17, 327–330.
  12. Yang, Z.; Zheng, Y.R.; Grant, S.L. Proportionate affine projection sign algorithm for network echo cancellation. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 2273–2284.
  13. Yoo, J.; Shin, J.; Choi, H.; Park, P. Improved affine projection sign algorithm for sparse system identification. Electron. Lett. 2012, 48, 927–929.
  14. Jin, J.; Mei, S. l0 Norm Constraint LMS Algorithm for Sparse System Identification. IEEE Signal Process. Lett. 2009, 16, 774–777.
  15. Li, Y.; Cherednichenko, A.; Jiang, Z.; Shi, W.; Wu, J. A Novel Generalized Group-Sparse Mixture Adaptive Filtering Algorithm. Symmetry 2019, 11, 697.
  16. Li, Y.; Wang, Y.; Albu, F.; Jiang, J. A General Zero Attraction Proportionate Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification. Symmetry 2017, 9, 229.
  17. Li, Y.; Wang, Y.; Sun, L. A Proportionate Normalized Maximum Correntropy Criterion Algorithm with Correntropy Induced Metric Constraint for Identifying Sparse Systems. Symmetry 2018, 10, 683.
  18. Wu, F.; Tong, F. Gradient optimization p-norm-like constraint LMS algorithm for sparse system estimation. Signal Process. 2013, 93, 967–971.
  19. Wu, F.; Tong, F. Non-Uniform Norm Constraint LMS Algorithm for Sparse System Identification. IEEE Commun. Lett. 2013, 17, 385–388.
  20. Zayyani, H. Continuous mixed p-norm adaptive algorithm for system identification. IEEE Signal Process. Lett. 2014, 21, 1108–1110.
Figure 1. Structure of an adaptive filter in the presence of impulsive noise $n_i$.
Figure 2. Characteristics of a general sparse system ($n = 128$).
Figure 3. NMSD learning curves for the white input in a sparse system with impulsive noises ($P_r$ = 0.01).
Figure 4. NMSD learning curves for the correlated input generated using $G_1(z)$ in a sparse system with impulsive noises ($P_r$ = 0.01).
Figure 5. NMSD learning curves for the correlated input generated using $G_2(z)$ in a sparse system with impulsive noises ($P_r$ = 0.01).
Figure 6. NMSD learning curves for the white input in a sparse system with impulsive noises ($P_r$ = 0.01). The system suddenly changes ($\mathbf{w}^o \to -\mathbf{w}^o$) at iteration $3.8 \times 10^3$.
Figure 7. NMSD learning curves for the correlated input generated using $G_1(z)$ in a sparse system with impulsive noises ($P_r$ = 0.01). The system suddenly changes ($\mathbf{w}^o \to -\mathbf{w}^o$) at iteration $3.8 \times 10^3$.
Figure 8. NMSD learning curves for the correlated input generated using $G_2(z)$ in a sparse system with impulsive noises ($P_r$ = 0.01). The system suddenly changes ($\mathbf{w}^o \to -\mathbf{w}^o$) at iteration $3.8 \times 10^3$.
Figure 9. Speech input signal.
Figure 10. NMSD learning curves for the speech input.
Figure 11. NMSD learning curves for the double-talk situation.
Figure 12. NMSD learning curves with several values of $p$ to decide the $p$ value that ensures the best filter performance for the white input in a sparse system with impulsive noises ($P_r$ = 0.01).
Figure 13. NMSD learning curves with several values of $p$ to decide the $p$ value that ensures the best filter performance for the correlated input generated using $G_1(z)$ in a sparse system with impulsive noises ($P_r$ = 0.01).
Figure 14. NMSD learning curves with several values of $p$ to decide the $p$ value that ensures the best filter performance for the correlated input generated using $G_2(z)$ in a sparse system with impulsive noises ($P_r$ = 0.01).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Shin, J.; Kim, J.; Kim, T.-K.; Yoo, J. p-Norm-like Affine Projection Sign Algorithm for Sparse System to Ensure Robustness against Impulsive Noise. Symmetry 2021, 13, 1916. https://doi.org/10.3390/sym13101916