Article

Local and Nonlocal Steering Kernel Weighted Total Variation Model for Image Denoising

Department of Microelectronics, Xidian University, Xi’an 710071, China
*
Author to whom correspondence should be addressed.
Symmetry 2019, 11(3), 329; https://doi.org/10.3390/sym11030329
Submission received: 7 January 2019 / Revised: 17 February 2019 / Accepted: 26 February 2019 / Published: 5 March 2019

Abstract

To eliminate heavy noise and retain more scene details, we propose a structure-oriented total variation (TV) model based on data dependent kernel function and TV criterion for image denoising application. The innovative model introduces the weights produced from the local and nonlocal symmetry features involved in the image itself to pick more precise solutions in the TV denoising process. As a result, the proposed local and nonlocal steering kernel weighted TV model yields excellent noise suppression and structure-preserving performance. The experimental results verify the validity of the proposed model in objective quantitative indices and subjective visual appearance.

1. Introduction

Image denoising is a vital preprocessing step for image-based object detection, recognition, and tracking [1,2,3,4,5,6]. Since high frequency image details are mixed with noise in most cases, most existing image denoising methods have difficulty preserving edge and texture information while thoroughly eliminating the noise [7,8,9,10,11,12].
Many traditional image processing methods exploit the local structural regularity present in natural images. The rationale of denoising algorithms is to use these structural patterns to regularize the ill-posed restoration problem, making the texture regions less blurry and the flat regions smoother [13,14,15]. The gradient-based total variation (TV) model is a state-of-the-art method that has been proven to restore real scenes from noisy images effectively [16]. However, the TV model tends to introduce the staircase effect and texture loss. To surmount these inherent defects of TV regularization, several improved TV models with structure-preserving performance have been presented. By incorporating intensity into the definition of the distance between pixels, bilateral filtering [17] clearly relieves the blurring effect of the Gaussian filter and provides detail-preserving performance. In view of this, the bilateral total variation (BTV) model [18] and non-local total variation (NLTV) model [19] were subsequently developed to restore details more precisely by fusing the ideas of bilateral filtering and non-local means filtering [20] with the TV criterion. However, the BTV model only considers the spatial distance and ignores the neighborhood similarity when obtaining the gradient of a pixel, which degrades the structure information in the recovered image. Moreover, the ability of the NLTV model to preserve nonsymmetrical structures tends to weaken with growing noise strength, which may result from neglecting robust local structure constraints.
Existing investigations have shown that feature descriptors based on the local steering kernel (LSK) are robust to noise [21]. The reason is that the LSK handles image noise and uncertainty by estimating the local structure. Moreover, patches containing flat regions, textural clutter, and structural parts yield significantly different LSK-based descriptors. In view of this, we encode noise-corrupted image patches using the LSK method [22] to robustly recover the original image structure and remove the noise disturbance.
Inspired by the fact that disconnected nonlocal components with spatial support provide useful additional information in image restoration [23], we further combine nonlocal self-similarity [24,25] with the local constraints of the LSK to weight the respective measurements of TV, which enhances the structure-preserving capability of the TV model in denoising applications. In this way, the proposed local and nonlocal steering kernel weighted TV model can robustly estimate the local structure of the image as well as effectively remove the annoying noise. In essence, it exploits both the redundancy of symmetrically similar patches in the corrupted image and the sensitivity of the local feature description to implement the denoising task.
The rest of this paper is structured as follows. Section 2 briefly reviews the related works of this study, Section 3 introduces the presented algorithm and discusses its mechanism, Section 4 states experimental results and analysis, and Section 5 gives the conclusion of the paper.

2. Related Works

Consider a discrete noisy image
$$Y = X + V$$
where $Y$ denotes the observation and $V$ indicates the zero-mean additive white noise perturbation, which is uncorrelated with the true image $X$.

2.1. Regularization Based Denoising Framework

For the image denoising problem, we need to solve the minimization problem as [26]
$$\hat{X} = \arg\min_{\hat{X}} \left\{ \left\| Y - \hat{X} \right\|_p^p + \mu \Upsilon(\hat{X}) \right\}$$
where $\hat{X}$ is the denoised image and $\|Y - \hat{X}\|_p^p$ is the data fidelity term used to retain the original image characteristics and reduce image distortion. The scalar $\mu$ balances the fidelity term $\|Y - \hat{X}\|_p^p$ with the regularizer $\Upsilon(\hat{X})$: a smaller $\mu$ increases the impact of the fidelity term and provides stable convergence, while a larger $\mu$ enhances the influence of the regularization term and induces faster tracking. $\|\cdot\|_p^p$ denotes the $L_p$ ($1 \le p \le 2$) norm of the residual.
The well-known Tikhonov regularizer [27,28] is defined as
$$\Upsilon_{T}(\hat{X}) = \left\| \Gamma \hat{X} \right\|_2^2$$
where $\Gamma$ represents a high-pass operator. It is clear that the Tikhonov regularization method tends to constrain the total energy of the image or, equivalently, to implement spatial smoothing.
Since noise and image textures both contain abundant high frequency components, the regularization procedure easily removes both of them indiscriminately, with the result that the denoised images lose most of their sharp edges and details.
A popular edge-preserving regularization strategy for image restoration is the TV method [16]. The TV criterion penalizes the $L_1$ norm of the gradient amplitude, which measures the total change of the image, and is expressed as
$$\Upsilon_{\mathrm{TV}}(\hat{X}) = \left\| \nabla \hat{X} \right\|_1$$
where $\nabla$ is the gradient operator. The advantage of the TV criterion is that it can retain edges during reconstruction without severely penalizing steep gradients [29,30].
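For concreteness, the anisotropic discrete TV value of an image can be evaluated with forward differences; the following is a minimal NumPy sketch (the function name `tv_norm` is illustrative, not from the paper):

```python
import numpy as np

def tv_norm(x):
    """Anisotropic total variation: L1 norm of forward-difference gradients."""
    dh = np.abs(np.diff(x, axis=1)).sum()  # horizontal differences
    dv = np.abs(np.diff(x, axis=0)).sum()  # vertical differences
    return dh + dv
```

A sharp step edge contributes only its height times its length to this sum, while noise adds many small gradients everywhere, which is why the $L_1$ penalty retains edges better than a quadratic one.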
On the basis of the TV criterion and the bilateral filter, a spatially adaptive regularizer called bilateral total variation (BTV), which attempts to eliminate the staircase effect and the detail loss problem of the TV prior model, is presented in [18]. The BTV regularizer is mathematically formulated as
$$\Upsilon_{\mathrm{BTV}}(\hat{X}) = \sum_{l=-P}^{P} \sum_{\substack{m=0 \\ l+m \ge 0}}^{P} \alpha^{|m|+|l|} \left\| \hat{X} - S_x^{l} S_y^{m} \hat{X} \right\|_1$$
where the operators $S_x^{l}$ and $S_y^{m}$ implement horizontal translation by $l$ pixels and vertical translation by $m$ pixels, respectively, so as to present multi-scale derivatives. $P$ indicates the radius of the search window. The scalar weight $\alpha$ ($0 < \alpha < 1$) produces spatial decay for the regularization terms. A larger $\alpha$ gives neighboring pixels more influence, but an excessive $\alpha$ generally blots out scene details, as with the TV prior, and results in over-smoothing. On the contrary, a tiny $\alpha$ sharply attenuates the weights with increasing spatial distance, which decreases the noise suppression capability and leads to slower convergence. A proper selection of $\alpha$ therefore plays an important role in balancing noise suppression and detail preservation.
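The BTV sum over shifted copies can be sketched as follows; this is a NumPy illustration, with circular shifts via `np.roll` as a simplifying boundary assumption and `btv_norm` as a hypothetical name:

```python
import numpy as np

def btv_norm(x, P=2, alpha=0.7):
    """Bilateral TV: alpha^(|l|+|m|)-weighted L1 differences between the
    image and its (l, m)-shifted copies over the half-plane l + m >= 0."""
    total = 0.0
    for l in range(-P, P + 1):
        for m in range(0, P + 1):
            if l + m < 0 or (l == 0 and m == 0):
                continue  # skip excluded shifts and the zero shift
            shifted = np.roll(np.roll(x, l, axis=1), m, axis=0)
            total += alpha ** (abs(l) + abs(m)) * np.abs(x - shifted).sum()
    return total
```

The geometric decay $\alpha^{|l|+|m|}$ is what makes distant shifts contribute less, matching the discussion of $\alpha$ above.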
Local image structures often repeat within an image and across image sequences. The redundant information contained in similar patches is of vital significance for solving most ill-posed image restoration problems, because similar patches can be regarded as different observations of the same real scene.
To overcome the performance decline of local derivative-based prior models in noise suppression, a self-similarity-based nonlocal prior model is employed in the regularized framework, which gives the nonlocal total variation (NLTV) regularizer as [19]
$$\Upsilon_{\mathrm{NLTV}}(\hat{X}) = \sum_{i \in \Omega} \sum_{j \in N_i} W_{NL}(i,j) \left\| \hat{X}(i) - \hat{X}(j) \right\|_1$$
where $i$ indicates any pixel in the image $\hat{X}$ defined on the domain $\Omega$, and $j$ ranges over the pixels located in the $R$-sized search window $N_i$ around $i$. The weight is given by
$$W_{NL}(i,j) = \exp\left( -\frac{ \left\| N_i(u) - N_j(u) \right\|_p^p }{ \sigma^2 } \right)$$
where $N_i(\cdot)$ and $N_j(\cdot)$ denote the $r$-sized square similarity patches surrounding pixels $i$ and $j$, respectively, and $\sigma$ represents the filtering parameter that controls the smoothness.
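The nonlocal weight between a pair of patches can be sketched as below; a NumPy illustration with $p = 2$ (squared $L_2$ patch distance) and a hypothetical function name:

```python
import numpy as np

def nl_weight(patch_i, patch_j, sigma=10.0):
    """Nonlocal similarity weight W_NL between two same-sized patches,
    using the squared L2 patch distance (p = 2)."""
    diff = np.asarray(patch_i, float) - np.asarray(patch_j, float)
    return np.exp(-np.sum(diff ** 2) / sigma ** 2)
```

Identical patches receive weight 1, and the weight decays exponentially with patch dissimilarity, controlled by $\sigma$.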

2.2. Local Steering Kernel

Steering kernel regression (SKR) [21] depends not only on the position and intensity but also on the intrinsic local structure of the samples. Therefore, the size and shape of the regression kernel significantly affect its spread and feature extraction characteristics [31,32]. The core of the SKR method is the local steering kernel (LSK) function, which estimates local structures accurately even under strong noise.
Let the coordinate vector $x_i = [x_{i1}, x_{i2}]^T$ represent the position of a certain pixel, and let $\hat{X}(x_i)$ denote the intensity of the pixel at $x_i$. The structural representation capability of the LSK mainly relies on the so-called gradient covariance matrix, or steering matrix [21]. Assume there exists a patch of $p$ pixels $N(x_i) = \{x_1, \ldots, x_i, \ldots, x_p\}$ centered at $x_i$; the structure-adaptive steering kernel can be modeled as
$$K(x_i, x_j) = \frac{\sqrt{\det(C_j)}}{h^2} \exp\left( -\frac{ (x_i - x_j)^T C_j (x_i - x_j) }{ 2h^2 } \right)$$
where $x_j \in N(x_i)$, and the symmetric covariance matrix $C_j$ is evaluated from the spatial gradient vectors surrounding $x_j$ [33]. A good choice of $C_j$ is vital for estimating the LSK and will expand the kernel weight along the local edges; $h$ ($2 \le h \le 3$) is a global smoothing parameter.
Let $\Omega(x_j) = \{x_1, \ldots, x_j, \ldots, x_M\}$ represent the positions of the $M$ adjacent pixels surrounding $x_j$. Naively, $C_j$ can be directly estimated by $G_j^T G_j$, in which $G_j$ is expressed as
$$G_j = \begin{bmatrix} \hat{X}_h(x_1) & \hat{X}_v(x_1) \\ \vdots & \vdots \\ \hat{X}_h(x_j) & \hat{X}_v(x_j) \\ \vdots & \vdots \\ \hat{X}_h(x_M) & \hat{X}_v(x_M) \end{bmatrix}$$
where $\hat{X}_h(\cdot)$ and $\hat{X}_v(\cdot)$ indicate the first-order gradients in the horizontal and vertical directions, respectively.
In order to enhance robustness and promote stability, the proposed method estimates the covariance matrix using a regularized parametric approach. Based on the singular value decomposition (SVD) of $G_j$,
$$G_j = U_j S_j V_j^T = U_j \, \mathrm{diag}[s_1, s_2]_j \, [u_1, u_2]_j^T$$
we can obtain the singular values $(s_1, s_2)$ and the singular vectors $(u_1, u_2)$, which are further used to calculate the stable covariance matrix
$$C_j = \left( \frac{s_1 s_2 + \mathit{eps}}{M} \right)^{\phi} \left( f \, u_1 u_1^T + g \, u_2 u_2^T \right)$$
where the amplification factor $\phi$ and the constant $\mathit{eps}$ are set to 0.5 and $10^{-7}$, respectively. According to the intrinsic structure, we can regulate $f$ and $g$ to make the induced kernel isotropic in smooth areas and elongated along image contours ($g > f$):
$$f(\beta, \varphi) = \frac{s_1 + \gamma}{s_2 + \gamma} \quad (\gamma \ge 0)$$
$$g(\beta, \varphi) = \frac{s_2 + \gamma}{s_1 + \gamma} \quad (\gamma \ge 0)$$
where $\gamma$ is a tuning parameter controlling the kernel spread and is set to 1, which suppresses noise as well as reduces detail loss; $\beta$ and $\varphi$ are the eigenvalues of the structure tensor [34], which reflect the gradient strength along the direction of each eigenvector.
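The construction of Equations (8)–(13) can be sketched in NumPy as follows; the finite-difference gradients, function names, and default parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def steering_matrix(patch, eps=1e-7, phi=0.5, gamma=1.0):
    """Regularized steering matrix C_j built from the SVD of the local
    gradient matrix G_j; gradients by finite differences."""
    gh = np.gradient(patch, axis=1).ravel()   # horizontal gradients X_h
    gv = np.gradient(patch, axis=0).ravel()   # vertical gradients X_v
    G = np.stack([gh, gv], axis=1)            # M x 2 gradient matrix G_j
    _, s, Vt = np.linalg.svd(G, full_matrices=False)
    s1, s2 = s
    u1, u2 = Vt                               # singular vectors u_1, u_2
    f = (s1 + gamma) / (s2 + gamma)           # elongation factors f, g
    g = (s2 + gamma) / (s1 + gamma)
    scale = ((s1 * s2 + eps) / G.shape[0]) ** phi
    return scale * (f * np.outer(u1, u1) + g * np.outer(u2, u2))

def lsk(xi, xj, C, h=2.0):
    """Local steering kernel K(x_i, x_j) for a given steering matrix C."""
    d = np.asarray(xi, float) - np.asarray(xj, float)
    return np.sqrt(np.linalg.det(C)) / h ** 2 * np.exp(-d @ C @ d / (2 * h ** 2))
```

On a flat patch both singular values vanish, $f = g = 1$, and the kernel becomes isotropic; near an edge $s_1 \gg s_2$ elongates the kernel footprint along the contour.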

3. Local and Nonlocal Steering Kernel Weighted Total Variation Model

The local derivative-based models (such as TV and BTV) are sensitive to noise in homogeneous regions, while the patch similarity-based models (such as NLTV) are not suited to dealing with noise in cluttered texture regions. In view of this, we incorporate the steering kernel [35]-based structure descriptor into the TV regularization framework and present an innovative regularizer. This local-structure-based regularizer can smooth out noise while preserving details, even under very noisy conditions. The weight function based on the local steering kernel (LSK) is defined as
$$W_{LSK}^{l,m}(i) = \frac{ K(x_i, x_i^{l,m}) }{ \displaystyle\sum_{l=-P}^{P} \sum_{\substack{m=0 \\ l+m \ge 0}}^{P} K(x_i, x_i^{l,m}) }$$
Furthermore, we consider both the nonlocal similarity and the local structure properties and propose a joint local and nonlocal structural weight, normalized over its neighborhood to prevent nonuniform weights across patches:
$$W_{NLSK}^{l,m}(i) = \frac{ W_{NL}(x_i, x_i^{l,m}) \, K(x_i, x_i^{l,m}) }{ \displaystyle\sum_{l=-P}^{P} \sum_{\substack{m=0 \\ l+m \ge 0}}^{P} W_{NL}(x_i, x_i^{l,m}) \, K(x_i, x_i^{l,m}) }$$
where $x_i^{l,m}$ is the pixel located in the patch shifted by $l$ and $m$ pixels from $x_i$ in the horizontal and vertical directions, respectively. For $x_i \in \Omega$, either $W_{LSK}^{l,m}(i)$ calculated by (14) or $W_{NLSK}^{l,m}(i)$ calculated by (15) can be used to form the weight matrix, uniformly represented by $W_{l,m}$.
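Given precomputed nonlocal weights and steering-kernel responses over the search window, the joint weight of Equation (15) is their normalized elementwise product; a minimal sketch with an illustrative function name:

```python
import numpy as np

def nlsk_weights(w_nl, k_lsk):
    """Joint local/nonlocal weights: elementwise product of nonlocal
    similarity weights and steering-kernel responses over the search
    window, normalized to sum to one."""
    w = np.asarray(w_nl, float) * np.asarray(k_lsk, float)
    return w / w.sum()
```

The normalization keeps the total weight mass identical across patches, so flat and textured neighborhoods are penalized on the same scale.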
In order to compare the spread characteristics of different kernels, Figure 1 shows the subjective visual representation of the steering kernel for different local structures (texture, strong edge) in the “House” image in the noisy and noiseless cases. In the weight map of the LSK, the shape and orientation of its footprints elongate along the edge to realize edge preservation and noise smoothing. The weight map of the non-local kernel (NLK) is spread according to the nonlocal similarity but neglects the influence of local structure. In contrast, the weight map of the non-local steering kernel (NLSK) accounts for both neighborhood similarity and local spatial support, and assigns large weights to the locally and structurally similar pixels along with the central pixels of nonlocal similar patches. Specifically, the weight map of the NLSK spreads more tightly than the other kernels in the direction perpendicular to edges, which adaptively reduces blurring with respect to the local features of the image. Moreover, we can easily see that all of the kernels show a rapid decline in spread characterization under noise, and the NLSK remains the most precise structure descriptor.
Rather than defining the weight by local intensity or nonlocal similarity alone, we introduce the LSK and NLSK priors, respectively, as weights for the neighboring gradients according to the nonlocal and intrinsic structure of the image itself. On the basis of the TV criterion and the LSK or NLSK prior, we uniformly define the noise-tolerant LSKTV and NLSKTV regularizers as
$$\Upsilon(\hat{X}) = \sum_{l=-P}^{P} \sum_{\substack{m=0 \\ l+m \ge 0}}^{P} \left\| W_{l,m} \left( \hat{X} - S_x^{l} S_y^{m} \hat{X} \right) \right\|_1$$
Combining the ideas presented above, we propose an innovative structure-feature-guided cost function for the denoising problem:
$$\hat{X} = \arg\min_{\hat{X}} \left\{ \sum_{l=-P}^{P} \sum_{\substack{m=0 \\ l+m \ge 0}}^{P} \left\| W_{l,m} \nabla_{l,m} \hat{X} \right\|_1 + \frac{\mu}{2} \left\| \hat{X} - Y \right\|_2^2 \right\}$$
where $\nabla_{l,m} \hat{X} = \hat{X} - S_x^{l} S_y^{m} \hat{X}$.
Considering the equivalent constrained problem to (17) with $W_{l,m} \nabla_{l,m} \hat{X} = d_{l,m}$, we can obtain the unconstrained version and solve it using the split Bregman algorithm [36,37]:
$$\left\{ \begin{aligned} (\hat{X}^{k+1}, d_{l,m}^{k+1}) &= \arg\min_{\hat{X}, d} \left\{ \frac{\mu}{2} \left\| \hat{X} - Y \right\|_2^2 + \sum_{l=-P}^{P} \sum_{\substack{m=0 \\ l+m \ge 0}}^{P} \left[ \left\| d_{l,m} \right\|_1 + \frac{\lambda}{2} \left\| d_{l,m} - W_{l,m} \nabla_{l,m} \hat{X} - b_{l,m}^{k} \right\|_2^2 \right] \right\} \\ b_{l,m}^{k+1} &= b_{l,m}^{k} + \left( W_{l,m} \nabla_{l,m} \hat{X}^{k+1} - d_{l,m}^{k+1} \right) \end{aligned} \right.$$
where λ > 0 is a constant.
Following (18), we solve the equations using the Gauss-Seidel iteration written as
$$\hat{X}_{i,j}^{k+1} = \frac{\lambda}{\mu + 4\lambda} \left[ \hat{X}_{i+1,j}^{k} + \hat{X}_{i-1,j}^{k} + \hat{X}_{i,j-1}^{k} + \hat{X}_{i,j+1}^{k} + \sum_{l=-P}^{P} \sum_{\substack{m=0 \\ l+m \ge 0}}^{P} \left( d_{l,m,i-1,j}^{k} - d_{l,m,i,j}^{k} - b_{l,m,i,j-1}^{k} + b_{l,m,i,j}^{k} \right) \right] + \frac{\mu}{\mu + 4\lambda} Y_{i,j}$$
At the boundaries of the domain, one-sided finite differences are used instead of the centered finite differences. We get
$$d_{l,m}^{k+1} = \mathrm{shrink}\left( W_{l,m} \nabla_{l,m} \hat{X}^{k+1} + b_{l,m}^{k}, \; \frac{1}{\lambda} \right)$$
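The shrink operator in the d-update is the standard componentwise soft-thresholding of the split Bregman method; a minimal NumPy sketch:

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding (shrink) operator:
    shrink(v, t) = sign(v) * max(|v| - t, 0), applied componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Components with magnitude below the threshold $1/\lambda$ are set to zero, which is what enforces sparsity of the weighted gradients.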
The whole algorithm procedure is presented in Algorithm 1.
Algorithm 1: Proposed image denoising algorithm
Input: noisy observation Y .
1. Initialization:
    $\hat{X}^{0} = Y$, $d_{l,m}^{0} = b_{l,m}^{0} = 0$
2. Iteration:
While $\left\| \hat{X}^{k+1} - \hat{X}^{k} \right\|_2^2 / \left\| \hat{X}^{k} \right\|_2^2 > \varepsilon$ do
 Calculate the normalized weight $W_{l,m}^{k}$ by Equation (15)
 Update X ^ k + 1 according to Equation (19)
$d_{l,m}^{k+1} = \mathrm{shrink}\left( W_{l,m} \nabla_{l,m} \hat{X}^{k+1} + b_{l,m}^{k}, \, 1/\lambda \right)$
$b_{l,m}^{k+1} = b_{l,m}^{k} + \left( W_{l,m} \nabla_{l,m} \hat{X}^{k+1} - d_{l,m}^{k+1} \right)$
k = k + 1
end While
Output: desired image $\hat{X}$

4. Experimental Results and Analysis

In this section, we compare the performance of the presented method with previous variational denoising methods on artificially degraded samples, generated by adding Gaussian noise with zero mean and standard deviations of 10, 25, and 40 to 512 × 512 standard bitmap (BMP) format test images. In the following experiments, parameter sensitivity is first discussed in order to find a tuning strategy that obtains balanced, higher performance. It is worth noting that, for the competing methods, the parameters recommended in the original papers are employed to pursue their best performance. The stopping criterion $\varepsilon$ of the proposed local steering kernel total variation (LSKTV) and non-local steering kernel total variation (NLSKTV) algorithms is set to $1 \times 10^{-3}$ to guarantee stable convergence.
In order to facilitate quantitative comparison, Peak Signal-to-Noise Ratio (PSNR) and Structure Similarity index (SSIM) [38] are employed to objectively assess the performance of various denoising methods. The PSNR is defined as
$$\mathrm{PSNR}(x, y) = 10 \log_{10} \frac{255^2}{ \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \left( x(i,j) - y(i,j) \right)^2 }$$
where H and W indicate the height and width of the image, respectively.
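The PSNR computation amounts to a mean-squared-error calculation followed by a logarithm; a minimal NumPy sketch for 8-bit images (the function name is illustrative):

```python
import numpy as np

def psnr(x, y):
    """PSNR in dB between two 8-bit images: 10*log10(255^2 / MSE)."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

Note that identical images give an infinite PSNR (zero MSE), so the measure is only meaningful for distorted reconstructions.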
In addition, the SSIM index can be calculated by
$$\mathrm{SSIM}(x, y) = \frac{ (2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2) }{ (\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2) }$$
where $\mu_x$ and $\mu_y$ stand for the mean values of $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ indicate the variances of $x$ and $y$, and $\sigma_{xy}$ denotes the covariance of $x$ and $y$. The constants $c_1$ and $c_2$ are set to 6.5025 and 58.6225, respectively, to stabilize the division when the denominator is weak.
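The SSIM formula can be sketched directly from these statistics; note that the reference [38] evaluates it in local sliding windows and averages the results, so computing it once over the whole image, as below, is a simplifying assumption (and the function name is illustrative):

```python
import numpy as np

def ssim_global(x, y, c1=6.5025, c2=58.6225):
    """Global SSIM with the constants from the text; [38] computes this
    per local window and averages, so this whole-image version is a
    simplification."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1, and the score decreases as luminance, contrast, or structure diverge.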

4.1. Parameter Sensitivity Analysis

For the proposed NLSK weight, an appropriate window size is crucial for precisely measuring the local and nonlocal structure, which directly affects the final restoration effect and the subsequent parameter adjustment. In view of this, we select different sizes of similarity windows and search windows and then apply the proposed denoising method to the same simulated image at three noise strengths. For ease of comparison, we uniformly denote the search window size of the NLK, LSK, and NLSK weights by $R = 2l + 1 = 2m + 1$. The PSNR curves of the iterative denoising process for different parameter configurations are shown in Figure 2. Through comparison, we find that the sizes of the similarity window $r$ and the search window $R$ should be raised appropriately with the noise level to effectively utilize redundant information in the neighborhood for high precision and stable convergence. However, extremely large similarity and search windows may remove image details and cause over-smoothing, which instead results in performance degradation.
The learning rate $\mu$ is the key factor controlling the convergence of the proposed method [39,40]. As can be seen from Figure 3, increasing $\mu$ promotes the convergence rate, but a $\mu$ that is too large interrupts the convergence process. The reason is that a larger $\mu$ accelerates the process of reaching the maximum PSNR, but an excessive $\mu$ produces non-convergence issues, which instead depresses the PSNR. For higher noise levels, the learning rate $\mu$ should be set larger to ensure the algorithm can effectively remove noise and improve the convergence speed.
Considering the smoothing parameter $h$ in the steering kernel, we further find that an optimal selection of $h$ is beneficial for achieving higher restoration precision. As can be seen from Figure 4, $h$ should be raised with the noise strength on the premise of stable convergence. A larger $h$ helps smooth out noise and promote the PSNR, but an $h$ that is too large leads to over-smoothing and destroys the intrinsic structure of images, which instead depresses the PSNR.

4.2. Performance Comparisons

Figure 5 shows the PSNR of the denoised “Zebra” image for the TV, BTV, total generalized variation (TGV) [41], NLTV, and proposed algorithms at varying noise strengths. The proposed NLSKTV and LSKTV methods achieve an expanding lead in PSNR over the TV, BTV, TGV, and NLTV methods as the noise strength $\sigma$ increases. At the highest noise strength, $\sigma = 40$, NLSKTV still reaches 27.56 dB (the input PSNR is only 16.08 dB), which is 2.56 dB, 1.64 dB, 1.72 dB, 1.29 dB, and 0.09 dB higher than TV, BTV, TGV, NLTV, and LSKTV, respectively. This indicates that the proposed NLSKTV and LSKTV deal with strong noise more effectively.
In order to intuitively observe the visual effect, we apply the different denoising algorithms. We can see clearly from Figure 6 that the TV and TGV methods over-smooth details and produce the staircase effect in varying degrees. Furthermore, there are partial noise residuals and texture distortions in the results of the BTV and NLTV methods. The noise suppression abilities of the proposed LSKTV and NLSKTV are clearly superior to the others. It is worth noting that the presented NLSKTV method preserves more details thanks to its weight balancing local structure and nonlocal similarity.
To quantitatively evaluate the precision of the abovementioned denoising methods, we compare PSNR and SSIM on different test images corrupted by additive Gaussian noise with $\sigma = 10, 25, 40$. The results summarized in Table 1 indicate that the PSNR and SSIM of the proposed LSKTV are better than those of TV, BTV, TGV, and NLTV in most cases; furthermore, its superiority becomes increasingly significant with growing noise strength. Note that NLSKTV consistently achieves the best performance by fusing additional nonlocal self-similarity into the structure description on the basis of LSKTV. In summary, the proposed method effectively preserves image details while removing noise.

4.3. Image Deblurring Application

In this section, we validate the deblurring performance of the proposed method against the existing TV, BTV, TGV, and NLTV methods on the standard test image “cameraman”. Following the blurring degradation model in [42], the test image is blurred by a 9 × 9 Gaussian kernel with standard deviations of 5 and 10 and corrupted with Gaussian noise with a standard deviation of 10. The PSNR and SSIM results of the different methods are shown in Table 2. It is apparent that LSKTV and NLSKTV achieve higher PSNR and SSIM than TV, BTV, TGV, and NLTV.
Figure 7 shows the visual effects of the different deblurring methods. It is obvious that the other competing methods, such as TV, BTV, TGV, and NLTV, tend to generate staircase effects in flat regions and zigzag effects on edges, which significantly reduces the structural similarity. In contrast, the proposed LSKTV and NLSKTV preserve sharp edges, recover high frequency details, and yield visually pleasant results. In summary, the quantitative and qualitative results indicate that the proposed local and nonlocal steering kernel weighted total variation regularizer is a good candidate for image deblurring.

4.4. Runtime Comparisons

In order to compare the computational complexity of the different methods, we run the simulations on test images of various sizes and list the corresponding runtimes in Table 3. The experiments are implemented in MATLAB (R2017a) on a computer with an Intel(R) Core(TM) i5-4590 @ 3.3 GHz CPU. As can be seen from Table 3, the proposed LSKTV and NLSKTV demand more computation than the others to calculate the steering kernel weights, which is the price of their outstanding denoising performance. As the computation of the steering kernel is highly parallel, LSKTV and NLSKTV can be further accelerated by specially designed parallel hardware, such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs).

5. Conclusions

Noise suppression and detail preservation are generally difficult to balance for most existing TV model-based image denoising methods. In view of this, we proposed an innovative structure-oriented TV model that incorporates the local structural regularity of the local steering kernel (LSK) as well as the nonlocal self-similarity of the nonlocal kernel (NLK) to achieve a more precise denoising effect. Experimental results demonstrated that the presented LSKTV and NLSKTV models are conducive to simultaneously restoring image structures and eliminating noise, as validated by the remarkable objective performance assessment and the favorable subjective visual effects.

Author Contributions

R.L. and Y.M. carried out the theoretical derivation. Y.M. and J.G. planned the experiments and processed the test data. R.L. and Z.L. wrote the paper.

Funding

This work is funded by the Natural Science Foundation of China under grants No. 61674120, No. 61571338, No. U1709218, and No. 61672131; the Fundamental Research Funds for the Central Universities of China under grants No. JBG161113 and No. 300102328110; and the Key Research and Development Plan of Shaanxi Province under grant No. 2017ZDCXL-GY-05-01.

Acknowledgments

The authors thank the editors and reviewers for their careful work and valuable comments, which helped improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Portilla, J.; Strela, V.; Wainwright, M.J.; Simoncelli, E.P. Image denoising using scale mixtures of gaussians in the wavelet domain. IEEE Trans. Image Process. 2003, 12, 1338–1351. [Google Scholar] [CrossRef] [PubMed]
  2. Bertalmio, M.; Caselles, V.; Pardo, A. Movie denoising by average of warped lines. IEEE Trans. Image Process. 2007, 16, 2333–2347. [Google Scholar] [CrossRef] [PubMed]
  3. Kindermann, S.; Osher, S.; Jones, P.W. Deblurring and denoising of images by nonlocal functionals. Multiscale Model. Simul. 2005, 4, 1091–1115. [Google Scholar] [CrossRef]
  4. Schulte, S.; Huysmans, B.; Pižurica, A.; Kerre, E.E.; Philips, W. A new fuzzy-based wavelet shrinkage image denoising technique. In International Conference on Advanced Concepts for Intelligence Vision System; Springer: Berlin, Heidelberg, 2006; pp. 12–23. [Google Scholar]
  5. Felzenszwalb, P.F.; Girshick, R.B.; McAllester, D.; Ramanan, D. Object detection with discriminatively trained partbased models. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 9, 1627–1645. [Google Scholar] [CrossRef] [PubMed]
  6. Pang, Y.; Yuan, Y.; Wang, K. Learning optimal spatial filters by discriminant analysis for brain–computer-interface. Neurocomputing 2012, 77, 20–27. [Google Scholar] [CrossRef]
  7. Arivazhagan, S.; Deivalakshmi, S.; Kannan, K.; Gajbhiye, B.N.; Muralidhar, C.; Lukose, S.N.; Subramanian, M.P. Performance analysis of image denoising system for different levels of wavelet decomposition. Int. J. Imaging Sci. Eng. 2007, 1, 104–107. [Google Scholar]
  8. Buades, A.; Coll, B.; Morel, J.M. A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 2005, 4, 490–530. [Google Scholar] [CrossRef]
  9. Rao, B.C.; Latha, M.M. Analysis of multi resolution image denoising scheme using fractal transform. Int. J. 2010, 2, 63–74. [Google Scholar]
  10. Kim, W.H.; Sikora, T. Image denoising method using diffusion equation and edge map estimated with k-means clustering algorithm. In Eighth International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS’07); IEEE: Santorini, Aegean Sea, Greece, 2007; p. 21. [Google Scholar]
  11. Pardo, A. Analysis of non-local image denoising methods. Pattern Recognit. Lett. 2010, 32, 2145–2149. [Google Scholar] [CrossRef]
  12. Pang, Y.; Li, X.; Yuan, Y. Robust tensor analysis with L1-norm. IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 172–178. [Google Scholar] [CrossRef]
  13. Diewald, U.; Preusser, T.; Rumpf, M. Anisotropic diffusion in vector field visualization on euclidean domains and surfaces. IEEE Trans. Vis. Comput. Gr. 2000, 6, 139–149. [Google Scholar] [CrossRef]
  14. Rajan, J.; Kannan, K.; Kaimal, M.R. An improved hybrid model for molecular image denoising. J. Math. Imaging Vision 2008, 31, 73–79. [Google Scholar] [CrossRef]
  15. Catté, F.; Lions, P.L.; Morel, J.M.; Coll, T. Image selective smoothing and edge detection by nonlinear diffusion. SIAM J. Numer. Anal. 1992, 29, 182–193. [Google Scholar] [CrossRef]
  16. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  17. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Sixth International Conference on Computer Vision; IEEE: Bombay, MH, India, 1998; pp. 839–846. [Google Scholar]
  18. Farsiu, S.; Robinson, M.D.; Elad, M.; Milanfar, P. Fast and robust multiframe super resolution. IEEE Trans. Image Process. 2004, 13, 1327–1344. [Google Scholar] [CrossRef] [PubMed]
  19. Hu, H.; Froment, J. Nonlocal total variation for image denoising. In Symposium on Photonics and Optoelectronics (SOPO); IEEE: Shanghai, China, 2012; pp. 1–4. [Google Scholar]
  20. Yang, M.; Liang, J.; Zhang, J.; Gao, H.; Meng, F.; Xingdong, L.; Song, S.J. Non-local means theory based Perona–Malik model for image denosing. Neurocomputing 2013, 120, 262–267. [Google Scholar] [CrossRef]
  21. Takeda, H.; Farsiu, S.; Milanfar, P. Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 2007, 16, 349–366.
  22. Li, Y.; Zhang, Y. Robust infrared small target detection using local steering kernel reconstruction. Pattern Recognit. 2018, 77, 113–125.
  23. Efros, A.A.; Leung, T.K. Texture synthesis by non-parametric sampling. In Proceedings of the Seventh IEEE International Conference on Computer Vision; IEEE: Kerkyra, Greece, 1999; pp. 1033–1038.
  24. Protter, M.; Elad, M.; Takeda, H.; Milanfar, P. Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Trans. Image Process. 2009, 18, 36–51.
  25. Zhang, K.; Gao, X.; Tao, D.; Li, X. Image super-resolution via non-local steering kernel regression regularization. In 2013 IEEE International Conference on Image Processing; IEEE: Melbourne, VIC, Australia, 2013; pp. 943–946.
  26. Elad, M.; Hel-Or, Y. A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur. IEEE Trans. Image Process. 2001, 10, 1187–1193.
  27. Elad, M.; Feuer, A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 1997, 6, 1646–1658.
  28. Nguyen, N.; Milanfar, P.; Golub, G. A computationally efficient superresolution image reconstruction algorithm. IEEE Trans. Image Process. 2001, 10, 573–583.
  29. Gilboa, G.; Osher, S. Nonlocal linear image regularization and supervised segmentation. Multiscale Model. Simul. 2007, 6, 595–630.
  30. Chan, T.F.; Osher, S.; Shen, J. The digital TV filter and nonlinear denoising. IEEE Trans. Image Process. 2001, 10, 231–241.
  31. Feng, X.; Milanfar, P. Multiscale principal components analysis for image local orientation estimation. In Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers; IEEE: Pacific Grove, CA, USA, 2002; Volume 1, pp. 478–482.
  32. Takeda, H.; Farsiu, S.; Milanfar, P. Higher order bilateral filters and their properties. In Computational Imaging V; International Society for Optics and Photonics: San Jose, CA, USA, 2007; Volume 6498.
  33. Seo, H.J.; Milanfar, P. Static and space-time visual saliency detection by self-resemblance. J. Vis. 2009, 9, 15.
  34. Tschumperlé, D. PDE’s Based Regularization of Multivalued Images and Applications. Ph.D. Thesis, University Nice Sophia Antipolis, Nice, France, 2012.
  35. Zhang, K.; Gao, X.; Li, J.; Xia, H. Single image super-resolution using regularization of non-local steering kernel regression. Signal Process. 2016, 123, 53–63.
  36. Getreuer, P. Total variation deconvolution using split Bregman. Image Process. Line 2012, 2, 158–174.
  37. Zhang, X.; Burger, M.; Bresson, X.; Osher, S. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 2010, 3, 253–276.
  38. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  39. Langer, A. Automated parameter selection for total variation minimization in image restoration. J. Math. Imaging Vision 2015, 57, 1–30.
  40. Chambolle, A.; Caselles, V.; Novaga, M.; Pock, T. An introduction to total variation for image analysis. In Theoretical Foundations and Numerical Methods for Sparse Recovery; De Gruyter: Berlin, Germany, 2009.
  41. Bredies, K.; Kunisch, K.; Pock, T. Total generalized variation. SIAM J. Imaging Sci. 2010, 3, 492–526.
  42. Ma, L.; Xu, L.; Zeng, T. Low rank prior and total variation regularization for image deblurring. J. Sci. Comput. 2017, 70, 1336–1357.
Figure 1. Comparison of local weight map produced by LSK, NLK, and NLSK in the edge area and the texture area. (a) The original image. (b) The noisy image with Gaussian noise (σ = 25).
Figure 2. Comparisons of PSNR for different similarity window radii (r) and search window radii (R) at different noise levels.
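The similarity window radius r and search window radius R swept in Figure 2 control the nonlocal weighting that the NLK/NLSK terms build on. The following is a minimal sketch of Gaussian patch-similarity weights over a search window, not the paper's exact kernel; the function name `nonlocal_weights` and the smoothing parameter `h` are illustrative.

```python
import numpy as np

def nonlocal_weights(img, i, j, r=1, R=4, h=10.0):
    """Illustrative nonlocal weights for pixel (i, j):
    w(k, l) = exp(-||P(i, j) - P(k, l)||^2 / h^2), where P(.) is a
    (2r+1) x (2r+1) patch and (k, l) ranges over a (2R+1) x (2R+1)
    search window clipped to the image."""
    H, W = img.shape
    pad = np.pad(img.astype(np.float64), r, mode="reflect")
    ref = pad[i:i + 2 * r + 1, j:j + 2 * r + 1]  # patch centered at (i, j)
    weights = {}
    for k in range(max(0, i - R), min(H, i + R + 1)):
        for l in range(max(0, j - R), min(W, j + R + 1)):
            patch = pad[k:k + 2 * r + 1, l:l + 2 * r + 1]
            d2 = np.sum((ref - patch) ** 2)        # squared patch distance
            weights[(k, l)] = np.exp(-d2 / h ** 2)
    return weights
```

Larger R widens the search for similar patches (helpful on repetitive textures, but slower), while larger r makes the similarity measure more discriminative; Figure 2 reports the resulting PSNR trade-off.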
Figure 3. Comparisons of PSNR for different learning rates (μ) at different noise levels.
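The learning rate μ swept in Figure 3 is the step size of the iterative solver. Below is a minimal explicit gradient-descent sketch of the classic smoothed TV denoising energy, not the proposed weighted model; `lam`, `eps`, and the iteration count are illustrative choices.

```python
import numpy as np

def tv_denoise(noisy, lam=5.0, mu=0.1, n_iter=300, eps=1e-2):
    """Explicit gradient descent on the smoothed TV energy
    E(u) = lam * sum sqrt(|grad u|^2 + eps) + 0.5 * sum (u - f)^2;
    mu plays the role of the learning rate in Figure 3."""
    f = noisy.astype(np.float64)
    u = f.copy()
    for _ in range(n_iter):
        # forward differences with replicated last row/column
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag              # normalized gradient field
        # divergence via backward differences (adjoint of forward diff)
        div = (np.diff(px, axis=1, prepend=0.0)
               + np.diff(py, axis=0, prepend=0.0))
        u -= mu * ((u - f) - lam * div)          # gradient step
    return u
```

Too small a μ converges slowly, while too large a μ destabilizes the iteration; that trade-off is what Figure 3 sweeps across noise levels.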
Figure 4. Comparisons of PSNR for different smoothing parameters (h) at different noise levels.
Figure 5. Comparison of PSNR for denoising result of the “Zebra” image at different noise strengths.
Figure 6. Comparison of visual effects for various denoising methods. Rows from top to bottom are the standard test images, the noisy images (σ = 25), and the results of the TV, BTV, TGV, NLTV, LSKTV, and NLSKTV methods. The first four columns show the dollar image with its partial enlargement and the zebra image with its partial enlargement; the last column shows pixel test images.
Figure 7. Visual effects of various deblurring methods on blurring level σ h = 5 .
Table 1. PSNR and SSIM of denoised images for different denoising methods.
Method | Zebra | Boat | Barbara | Dollar | Lighthouse | House
(each cell: PSNR/SSIM)
σ = 10
TV | 31.84/0.8053 | 31.45/0.8204 | 29.90/0.7998 | 28.69/0.8235 | 30.58/0.8287 | 35.23/0.8567
BTV | 33.13/0.8825 | 32.37/0.8598 | 30.96/0.8690 | 29.73/0.8816 | 31.26/0.8633 | 36.54/0.9303
TGV | 33.36/0.8771 | 32.81/0.8688 | 31.42/0.8797 | 29.54/0.8750 | 31.54/0.8661 | 36.77/0.9466
NLTV | 34.06/0.8952 | 32.85/0.8692 | 32.26/0.9091 | 30.60/0.9296 | 31.84/0.8743 | 37.24/0.9392
LSKTV | 33.73/0.8891 | 32.87/0.8701 | 31.91/0.8983 | 29.88/0.9021 | 32.04/0.8729 | 37.39/0.9347
NLSKTV | 34.18/0.8955 | 32.97/0.8745 | 32.60/0.9139 | 30.80/0.9329 | 32.20/0.8852 | 37.76/0.9394
σ = 25
TV | 27.30/0.6553 | 27.30/0.6810 | 24.98/0.6568 | 22.72/0.6566 | 25.89/0.6721 | 31.15/0.7583
BTV | 28.35/0.8143 | 28.10/0.7392 | 25.64/0.7079 | 23.52/0.7236 | 26.30/0.6982 | 32.46/0.8842
TGV | 28.45/0.7724 | 28.24/0.7399 | 25.71/0.6876 | 23.17/0.6974 | 26.32/0.6788 | 32.63/0.8512
NLTV | 28.70/0.8288 | 28.19/0.7477 | 26.02/0.7368 | 23.57/0.7474 | 26.25/0.7133 | 32.64/0.8705
LSKTV | 29.72/0.8345 | 28.73/0.7543 | 26.72/0.7642 | 24.15/0.7767 | 27.71/0.7297 | 33.47/0.8783
NLSKTV | 30.21/0.8495 | 29.31/0.7737 | 27.37/0.8043 | 24.73/0.8455 | 28.03/0.7505 | 34.05/0.8938
σ = 40
TV | 25.00/0.5862 | 25.45/0.6064 | 23.55/0.5869 | 20.54/0.5633 | 23.90/0.5824 | 29.06/0.7064
BTV | 25.92/0.7640 | 26.20/0.6682 | 23.77/0.6238 | 21.05/0.6072 | 24.04/0.6024 | 30.44/0.8540
TGV | 25.84/0.7340 | 26.19/0.6670 | 24.00/0.5995 | 20.80/0.5753 | 24.01/0.5683 | 30.34/0.8232
NLTV | 26.27/0.7405 | 26.26/0.6653 | 24.52/0.6756 | 21.75/0.7101 | 24.50/0.6220 | 29.99/0.7834
LSKTV | 27.47/0.7735 | 26.75/0.6801 | 24.53/0.6626 | 21.90/0.6805 | 25.67/0.6476 | 31.18/0.8255
NLSKTV | 27.56/0.8092 | 26.83/0.6937 | 24.74/0.6980 | 22.60/0.7420 | 25.74/0.6604 | 31.31/0.8498
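The PSNR values in Tables 1 and 2 follow the standard definition for 8-bit images (SSIM is computed as in [38]); a minimal sketch:

```python
import numpy as np

def psnr(reference, estimate, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE).
    Higher values indicate an estimate closer to the reference."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 10 gray levels gives 10 * log10(65025/100) ≈ 28.13 dB, which is the scale on which the roughly 0.5–1 dB gains of NLSKTV over NLTV in Table 1 should be read.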
Table 2. PSNR and SSIM of the deblurring results for different methods.
Blur Level | TV | BTV | TGV | NLTV | LSKTV | NLSKTV
(each cell: PSNR/SSIM)
σh = 5 | 24.68/0.7574 | 24.67/0.7698 | 24.74/0.7666 | 24.88/0.7719 | 24.98/0.7887 | 25.14/0.7923
σh = 10 | 23.67/0.6385 | 23.75/0.6827 | 23.91/0.6693 | 24.01/0.6986 | 24.37/0.7506 | 24.53/0.7550
Table 3. Runtime (s) of different methods for images of various sizes.
Image Size | TV | BTV | TGV | NLTV | LSKTV | NLSKTV
128 × 128 | 0.001 | 0.033 | 0.032 | 0.118 | 0.485 | 0.753
256 × 256 | 0.051 | 0.112 | 0.089 | 0.423 | 1.356 | 1.975
512 × 512 | 0.175 | 0.451 | 0.338 | 2.024 | 4.122 | 6.771

Share and Cite

MDPI and ACS Style

Lai, R.; Mo, Y.; Liu, Z.; Guan, J. Local and Nonlocal Steering Kernel Weighted Total Variation Model for Image Denoising. Symmetry 2019, 11, 329. https://doi.org/10.3390/sym11030329


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
