Article

Image Denoising Using a Compressive Sensing Approach Based on Regularization Constraints

by Assia El Mahdaoui 1, Abdeldjalil Ouahabi 2,* and Mohamed Said Moulay 1

1 AMNEDP Laboratory, Department of Analysis, University of Sciences and Technology Houari Boumediene, Algiers 16111, Algeria
2 UMR 1253, iBrain, INSERM, Université de Tours, 37000 Tours, France
* Author to whom correspondence should be addressed.
Sensors 2022, 22(6), 2199; https://doi.org/10.3390/s22062199
Submission received: 31 January 2022 / Revised: 24 February 2022 / Accepted: 28 February 2022 / Published: 11 March 2022
(This article belongs to the Special Issue Analytics and Applications of Audio and Image Sensing Techniques)

Abstract:
In remote sensing applications and medical imaging, one of the key points is the acquisition, real-time preprocessing and storage of information. Due to the large amount of information present in the form of images or videos, compression of these data is necessary. Compressed sensing is an efficient technique to meet this challenge. It consists of acquiring a signal, assuming that it can have a sparse representation, by using a minimum number of nonadaptive linear measurements. After this compressed sensing process, a reconstruction of the original signal must be performed at the receiver. Reconstruction techniques are often unable to preserve the texture of the image and tend to smooth out its details. To overcome this problem, we propose, in this work, a compressed sensing reconstruction method that combines total variation regularization and the nonlocal self-similarity constraint. The optimization of this method is performed by using an augmented Lagrangian, which avoids the difficult problem of the nonlinearity and nondifferentiability of the regularization terms. The proposed algorithm, called denoising-compressed sensing by regularization (DCSR), performs not only image reconstruction but also denoising. To evaluate its performance, we compare the proposed algorithm with state-of-the-art methods, such as Nesterov's algorithm, group-based sparse representation and wavelet-based methods, in terms of denoising and the preservation of edges, texture and image details, as well as from the point of view of computational complexity. Our approach permits a gain of up to 25% in terms of denoising efficiency and visual quality using two metrics: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).

1. Introduction

Compressed sensing (CS) has already attracted great interest in various fields. Examples include medical imaging [1,2], communication systems [3,4,5,6], remote sensing [7], reconstruction algorithm design [8] and image storage in databases [9]. Compressed sensing provides an alternative to Shannon's vision for reducing the number of samples and/or reducing transmission and storage costs. Other approaches also address this issue, such as random sampling [10]. Compressed sensing recovery is an underdetermined linear inverse problem. The most common CS recovery algorithms exploit the prior knowledge that a natural image is sparse in certain domains, such as the wavelet domain, where simple and efficient noise reduction is possible [11,12,13,14,15], or the discrete-gradient domain, which we develop in this work.
In real life, images are usually noisy, but noise is random and is, therefore, unknown. This noise can have many origins: It can be due to poor weather conditions (wind, haze, fog, mist, …), light fluctuations, the electronic image sensor of a digital camera, the conditions under which the image was acquired or simply the manner the image was stored and the techniques used to compress it. Therefore, denoising is an essential preprocessing step to recover improved image quality.
We can consider the problem of image recovery as an inverse problem because the goal is to reconstruct an image close to the original image from a degraded image while respecting two constraints: an optimized image quality and the speed of execution. Our strategy is part of this framework, which consists in recovering a quality image—a debatable notion that is imprecise and depends not only on objective criteria but also on the “eye” of the observer—in a relatively short time.
The regularization of an inverse problem corresponds to the idea that the data alone do not allow obtaining an acceptable solution and that it is, therefore, necessary to introduce a priori information on the regularity of the image to be estimated, reconstructed or recovered.
Many inverse problem optimization approaches for image denoising have been proposed in the literature. Some are based on deep learning and, more precisely, on the deep generative network [16], and others are based on models [17].
In particular, the total variation (TV)-based approach has been one of the most popular and successfully applied approaches; for example, Chambolle [18] introduced the dual approach for the unconstrained real case. Subsequently, Beck and Teboulle [19] presented a fast computational method based on a gradient optimization approach to solve the TV-regularized problem. Recently, other denoising approaches have been proposed, such as a remote sensing image denoising method via low-rank tensor approximation [20], which can be formulated as a generative Bayesian model. Another method, for regularizing nonuniformly sampled data based on least-squares spectral analysis [21], can be applied to nonstationary signals by introducing classical sliding windowing, as was the case for processing marine seismic data. Other approaches to denoising hyperspectral images [22,23] combine total variation regularization and low-rank tensor decomposition.
However, since the total variation model favors piecewise constant image structures, it tends to oversmooth image details and to smooth out fine textures. To overcome these intrinsic drawbacks of the total variation model, we introduce a nonlocal self-similarity constraint as a complementary regularization, since nonlocal self-similarity can restore high-quality images. To make our algorithm robust, an augmented Lagrangian method is used to solve the above inverse problem efficiently.
The remainder of this paper is organized as follows. In Section 2, the basics of the methods used for image restoration are presented. The image denoising tools considered in the proposed framework are described in Section 3. Section 4 presents our proposed algorithm, called DCSR. The experimental results from the processing of the Lena, Barbara and Cameraman images, available in the public and royalty-free database https://ccia.ugr.es/cvg/dbimagenes/index.php (accessed on 1 June 2021), and the comparison with competing methods are shown in Section 5. The results are discussed in Section 6. Finally, conclusions are provided in Section 7.

2. Related Works

2.1. Compressive Sensing

Compressive sensing or compressed sensing [24,25] allows acquiring and efficiently reconstructing a signal by solving underdetermined linear systems. This technique is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon condition, which was, until recently, considered unavoidable.
Compressive sensing is based on three conditions: signal sparsity, the measurement matrix’s incoherence and the reconstruction algorithm’s robustness.
Let us assume a real signal $x \in \mathbb{R}^N$ possessing a sparse representation in some transform domain, with sparsity $K \ll N$. Then, the signal can be written as follows:

$$x = \psi\, u \qquad (1)$$

where $\psi = [\psi_1, \psi_2, \ldots, \psi_N]$ is the basis matrix, and $u \in \mathbb{R}^{N \times 1}$ is a weighted $N$-dimensional vector with the following being the case:

$$u_i = \langle x, \psi_i \rangle = \psi^T x \qquad (2)$$

Using $\phi$ as a sensing matrix of dimension $M \times N$ with $K < M \ll N$, the measurement vector is acquired according to the following linear model:

$$f = \phi\, x = A\, u \qquad (3)$$

with $A = \phi\, \psi$. $A$ is an $M \times N$ matrix that satisfies the restricted isometry property of order $K$:

$$(1 - \delta_K)\, \|u\|_2^2 \le \|A u\|_2^2 \le (1 + \delta_K)\, \|u\|_2^2, \qquad 0 < \delta_K < 1 \qquad (4)$$
The compressed sensing recovery of $x$ from $f$ can be obtained via $l_1$-norm minimization as the following optimization problem [26]:

$$\hat{u} = \arg\min_u \|u\|_1 \quad \text{subject to} \quad f = A u \qquad (5)$$

More often, when $f$ is contaminated by noise, the equality constraint is relaxed as follows:

$$\hat{u} = \arg\min_u \|u\|_1 \quad \text{subject to} \quad \|A u - f\|_2^2 \le \varepsilon \qquad (6)$$

where $\varepsilon > 0$ is the noise level. For an appropriate scalar weight $\alpha$, we can obtain the following variant of (6):

$$\min_u \frac{1}{2}\|A u - f\|_2^2 + \alpha \|u\|_1 \qquad (7)$$
Problems (6) and (7) are equivalent: solving one determines the parameter in the other such that both provide the same solution. The signal $x$ can be reconstructed by solving the $l_1$-norm minimization problem under the condition that $x$ is sufficiently sparse and the sensing matrix $\phi$ is incoherent with the orthogonal basis $\psi$.
Several iterative reconstruction algorithms address the denoising problem for CS recovery and achieve high performance on natural images, such as orthogonal matching pursuit [27], approximate message passing (AMP) [28] and an extension of AMP, denoising-based AMP [29].

2.2. Augmented Lagrangian

The augmented Lagrangian method, originally known as the multipliers method [30], combines the Lagrangian function and a quadratic penalty term. It is applied to solve a constrained optimization problem iteratively:
$$\min_u f(u) \quad \text{subject to} \quad H u = g \qquad (8)$$

where $u \in \mathbb{R}^N$, $g \in \mathbb{R}^M$, and $H$ is a matrix of dimension $M \times N$.
Applied to Equation (8), the augmented Lagrangian can be expressed as follows:

$$L(u, \lambda, \rho) = f(u) + \lambda^T (H u - g) + \frac{\rho}{2}\|H u - g\|_2^2 \qquad (9)$$
where $\lambda \in \mathbb{R}^M$ is the vector of Lagrange multipliers, and $\rho > 0$ is the augmented Lagrangian penalty parameter for the quadratic infeasibility term [31]. We recall in Algorithm 1 the so-called augmented Lagrangian method: it searches for the optimal solution $u$ by alternately updating $u$ and the Lagrange multiplier $\lambda$. The alternate iteration keeps one vector fixed and updates the other successively until the stopping criterion is satisfied.
Algorithm 1: Augmented Lagrangian method.
Fixed: $k = 0$, $\rho > 0$, $\lambda^0$, $u^0$
Iterations: for $k = k + 1$ until the stopping criterion is satisfied, repeat steps 1–2:
1. Keep $\lambda$ fixed and update $u$:
$$u^{k+1} = \arg\min_u f(u) + (\lambda^k)^T (H u - g) + \frac{\rho}{2}\|H u - g\|_2^2$$
2. Keep $u$ fixed and update $\lambda$:
$$\lambda^{k+1} = \lambda^k + \rho\, (H u^{k+1} - g)$$
Output: $u$, the optimal solution.
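As an illustration, the steps of Algorithm 1 can be sketched on a toy quadratic problem; the objective $f(u) = \frac12\|u - a\|_2^2$ and the random $H$, $g$ below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy instance of Algorithm 1: minimize f(u) = 0.5*||u - a||^2
# subject to H u = g, with the augmented Lagrangian
# L(u, lam, rho) = f(u) + lam^T (H u - g) + (rho/2)*||H u - g||^2.
rng = np.random.default_rng(0)
N, M = 5, 2
a = rng.standard_normal(N)
H = rng.standard_normal((M, N))
g = rng.standard_normal(M)

rho = 10.0              # penalty parameter
lam = np.zeros(M)       # Lagrange multiplier
u = np.zeros(N)

for k in range(200):
    # Step 1: keep lam fixed and update u (closed form here, since L is
    # quadratic in u: (I + rho H^T H) u = a - H^T lam + rho H^T g).
    u = np.linalg.solve(np.eye(N) + rho * H.T @ H,
                        a - H.T @ lam + rho * H.T @ g)
    # Step 2: keep u fixed and update the multiplier.
    lam = lam + rho * (H @ u - g)

print(np.linalg.norm(H @ u - g))  # constraint residual, tends to 0
```

Because $f$ is quadratic, step 1 has a closed form; for the nonsmooth objectives used later in the paper, this inner minimization is itself split into subproblems.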

3. Technical Framework for Image Denoising

Several methods exist for image denoising, including total variation image regularization [32], wavelet thresholding [33], nonlocal means [34], basis pursuit denoising [35] and block matching and 3D filtering [36], among others. These methods can, to a certain extent, smooth images while preserving edges. CS theory can, therefore, reconstruct images that are closer to the original than the corresponding noisy observations.
In particular, two such classes of methods can be viewed as special cases of the proposed framework.

3.1. Regularization Functions

Image recovery in these application domains can be formulated as a linear inverse problem, which can be modelled as follows:
$$f = A\, u + \varepsilon \qquad (10)$$

where $f \in \mathbb{R}^M$ is the noisy image observation, $u \in \mathbb{R}^N$ is the unknown clean image, $\varepsilon$ is additive noise and $A \in \mathbb{R}^{M \times N}$ is a linear operator. Given $A$, image reconstruction extracts $\hat{u}$ from $f$; the problem is ill-posed, making classical least-squares approximation alone unsuitable. To stabilize recovery, regularization techniques are frequently used, producing a general reconstruction model of the following form:

$$\arg\min_u \frac{1}{2}\|A\, u - f\|_2^2 + \lambda\, \phi_{reg}(u) \qquad (11)$$

where $\lambda > 0$ is a regularization parameter, and $\|\cdot\|_2$ denotes the $l_2$ norm. The fidelity term, $\|A u - f\|_2^2$, forces the reconstructed image to be close to the original image, and the regularization function, $\phi_{reg}(u)$, performs noise reduction.
The choice of the regularization function is very important for reconstructing an image that reflects, as accurately as possible, the original image of interest. In our work, we have combined total variation and nonlocal self-similarities. The interest of such a choice lies in the fact that the total variation (TV) model is highly efficient at preserving contours and recovering smooth regions. However, this operator is local and, therefore, does not take into account nonlocal features of the data, such as repetitive structures (texture, for example). In contrast, nonlocal self-similarity describes the repetitiveness of textures [37,38] or of structures embodied in realistic images, which allows the preservation of sharp edges.
Total variation was first introduced by Rudin, Osher and Fatemi [39] as a regularizing criterion for solving inverse problems. The total variation model is written as follows:

$$TV(u) = \|D u\|_p \qquad (12)$$

where $u \in \mathbb{R}^N$ represents an image, and $D = [D_h, D_v]$, with $D_h$ and $D_v$ representing the gradient of the image in the horizontal and vertical directions, respectively. The $l_p$ norm can be the $l_1$ norm, corresponding to anisotropic TV, or the $l_2$ norm, corresponding to isotropic TV. By definition, $\|u\|_p = \left(\sum_{i=1}^{N} |u_i|^p\right)^{1/p}$. In this paper, we take $p$ equal to 1.
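As a concrete illustration of Equation (12) with $p = 1$, the anisotropic TV of an image can be computed with forward finite differences; the implementation below is a minimal sketch, not the paper's code:

```python
import numpy as np

# Anisotropic total variation TV(u) = ||D u||_1, with D = [D_h, D_v]
# realized as forward finite differences along each axis.
def total_variation(u):
    dh = np.diff(u, axis=1)   # horizontal differences, D_h u
    dv = np.diff(u, axis=0)   # vertical differences, D_v u
    return np.abs(dh).sum() + np.abs(dv).sum()

# A piecewise-constant image has zero TV; adding noise raises it.
flat = np.ones((32, 32))
noisy = flat + 0.1 * np.random.default_rng(1).standard_normal((32, 32))
print(total_variation(flat), total_variation(noisy))
```

This behavior is exactly why TV is a good denoising prior: minimizing it pushes the image toward piecewise constant structures.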
Nonlocal self-similarity (NLS) is also a significant property of natural images. It was first proposed for image denoising [40] and is obtained in the following steps. Image $u$ of size $N$ is divided into many overlapping blocks $u_i$ of size $n \times n$ at locations $i = 1, 2, \ldots, N$. Then, for each reference block, the $m - 1$ most similar blocks are found in a training window of size $L \times L$, forming the set $S_{u_i}$. Finally, all blocks of $S_{u_i}$ are stacked into a 3D array $Z_{u_i}$ of size $n \times n \times m$. Nonlocal self-similarity can be formulated as follows:

$$NLS(u) = \|\theta_u\|_1 = \sum_{i=1}^{N} \|T_{3D}(Z_{u_i})\|_1 \qquad (13)$$

where $T_{3D}$ is a transform operator, $T_{3D}(Z_{u_i})$ contains the transform coefficients of $Z_{u_i}$, and $\theta_u$ is the column vector of the lexicographically stacked representation of all 3D transform coefficients.
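The grouping step described above can be sketched as follows; the brute-force search and the parameter values ($n$, $m$, $L$) are illustrative and not the paper's implementation:

```python
import numpy as np

# Sketch of nonlocal block grouping: for a reference block u_i, collect the
# m most similar n x n blocks inside an L x L search window and stack them
# into a 3D array Z_{u_i} of shape (n, n, m).
def group_similar_blocks(img, i0, j0, n=4, m=8, L=16):
    H, W = img.shape
    ref = img[i0:i0 + n, j0:j0 + n]
    top = max(0, i0 - L // 2)
    left = max(0, j0 - L // 2)
    cands = []
    for i in range(top, min(H - n, i0 + L // 2) + 1):
        for j in range(left, min(W - n, j0 + L // 2) + 1):
            blk = img[i:i + n, j:j + n]
            cands.append((np.sum((blk - ref) ** 2), blk))
    cands.sort(key=lambda t: t[0])   # most similar first (the reference itself)
    return np.stack([b for _, b in cands[:m]], axis=-1)

img = np.random.default_rng(2).standard_normal((32, 32))
Z = group_similar_blocks(img, 10, 10)
print(Z.shape)  # (4, 4, 8)
```

In practice, the transform $T_{3D}$ would then be applied to $Z_{u_i}$, where similar blocks produce highly compressible coefficients.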

3.2. Wavelet Denoising

Wavelet shrinkage denoising removes whatever noise is present and retains whatever signal is present, regardless of the signal's frequency content. In the wavelet domain, the energy of a natural signal is concentrated in a small number of coefficients, whereas noise is spread over the entire domain. The basic wavelet shrinkage denoising algorithm comprises three steps:
  • Discrete wavelet transform (DWT) [41];
  • Denoising [42];
  • Inverse DWT.
The following is the measurement model:
$$f = u + \varepsilon \qquad (14)$$
where $u$ is the original image of size $M \times N$ corrupted by additive noise $\varepsilon$. The goal is to estimate the denoised image $\hat{u}$ from the noisy observation $f$. Eliminating this additive noise rests on the assumption that an appropriate decomposition basis allows the discrimination of the useful signal (image) from the noise. This hypothesis justifies, in part, the traditional use of denoising by thresholding. Two thresholding methods are frequently used. The soft-thresholding function (also called the shrinkage function) is as follows:
$$d_j\hat{u}(k) = \begin{cases} d_j y(k) - S & \text{if } d_j y(k) > S \\ d_j y(k) + S & \text{if } d_j y(k) < -S \\ 0 & \text{otherwise} \end{cases} \qquad (15)$$
The hard-thresholding function, another popular alternative, is as follows.
$$d_j\hat{u}(k) = \begin{cases} d_j y(k) & \text{if } |d_j y(k)| > S \\ 0 & \text{otherwise} \end{cases} \qquad (16)$$
where $d_j y(k)$ denotes the wavelet coefficients of the measured signal at level $j$, and $d_j\hat{u}(k)$ is the estimate of the wavelet coefficients of the useful signal, with threshold $S = \sigma\sqrt{2\log N}$, where $N$ and $\sigma$ represent the number of pixels of the test image and the noise standard deviation, respectively.
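The two thresholding rules (15) and (16) translate directly into code; the coefficient vector and the value of $\sigma$ below are illustrative, and the DWT/inverse DWT steps are omitted:

```python
import numpy as np

# Soft and hard thresholding of wavelet (detail) coefficients d with the
# universal threshold S = sigma * sqrt(2 log N) used in the text.
def soft_threshold(d, S):
    """Shrinkage function: move coefficients toward zero by S."""
    return np.sign(d) * np.maximum(np.abs(d) - S, 0.0)

def hard_threshold(d, S):
    """Keep coefficients larger than S in magnitude; zero the rest."""
    return np.where(np.abs(d) > S, d, 0.0)

N, sigma = 1024, 0.5
S = sigma * np.sqrt(2 * np.log(N))
d = np.array([-3.0, -1.0, 0.2, 1.0, 4.0])
print(soft_threshold(d, S))
print(hard_threshold(d, S))
```

Soft thresholding biases the surviving coefficients toward zero, while hard thresholding keeps them unchanged, which is why the two rules trade smoothness against fidelity differently.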

4. Our Efficient Image Denoising Scheme

In what follows, we substitute the aforementioned results (12) and (13) into (7), and we obtain the following problem.
$$\arg\min_u \frac{1}{2}\|A u - f\|_2^2 + \tau\, TV(u) + \mu\, NLS(u) \qquad (17)$$

Let us recall that $TV(u) = \|D u\|_1$ and $NLS(u) = \|\theta_u\|_1 = \sum_{i=1}^{N} \|T_{3D}(Z_{u_i})\|_1$.
Recovering image $u$ with high quality requires solving the optimization problem described by Equation (17). This problem is converted into an equality-constrained problem by introducing the auxiliary variables $w$ and $x$:

$$\min_{w, u, x} \frac{1}{2}\|A u - f\|_2^2 + \tau\|w\|_1 + \mu\|\theta_x\|_1 \quad \text{subject to} \quad D u = w, \; u = x \qquad (18)$$
where τ and μ are control parameters. The augmented Lagrangian function of (18) is as follows:
$$L_A(w, u, x) = \frac{1}{2}\|A u - f\|_2^2 + \tau\|w\|_1 + \mu\|\theta_x\|_1 - \gamma^T (D u - w) - \varphi^T (u - x) + \frac{\mu}{2}\|D u - w\|_2^2 + \frac{\beta}{2}\|u - x\|_2^2 \qquad (19)$$
where μ and β are the penalty parameters corresponding to   D u w 2 2 and u x 2 2 , respectively.
To solve (18), we use the augmented Lagrangian method iteratively as follows.
$$\begin{aligned} (w^{k+1}, u^{k+1}, x^{k+1}) &= \arg\min_{w, u, x} L_A(w, u, x) \\ \gamma^{k+1} &= \gamma^k - \mu\, (D u^{k+1} - w^{k+1}) \\ \varphi^{k+1} &= \varphi^k - \beta\, (u^{k+1} - x^{k+1}) \end{aligned} \qquad (20)$$
Here, subscript k denotes the iteration index, and γ and φ are the Lagrangian multipliers associated with the constraints D u = w ,   u = x , respectively.
Due to the nondifferentiability of Equation (19), the alternating direction method is introduced to solve the problem efficiently: it minimizes one variable while fixing the others, splitting Equation (19) into the following three subproblems. For the sake of simplicity, subscript $k$ is omitted where there is no confusion.
  • Update w
Given $u$ and $x$, we obtain the $w$ subproblem:

$$\arg\min_w \tau\|w\|_1 - \gamma^T (D u - w) + \frac{\mu}{2}\|D u - w\|_2^2 \qquad (21)$$
The optimization problem described by Equation (21) can be solved using the shrinkage formula [43]. Then, $\tilde{w}$ can be obtained as follows:

$$\tilde{w} = \max\left(\left|D u - \frac{\gamma}{\mu}\right| - \frac{\tau}{\mu},\, 0\right) \mathrm{sgn}\left(D u - \frac{\gamma}{\mu}\right) \qquad (22)$$

where $\max(\cdot)$ returns the larger of its two arguments, and $\mathrm{sgn}(\cdot)$ is a piecewise function defined as follows:

$$\mathrm{sgn}(x) = \begin{cases} -1 & \text{if } x < 0 \\ 0 & \text{if } x = 0 \\ 1 & \text{if } x > 0 \end{cases}$$
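A minimal sketch of this $w$-update, with $Du$ realized as forward differences on a small test image and illustrative values of $\tau$, $\mu$ and $\gamma$:

```python
import numpy as np

# w-update of Equation (22): componentwise shrinkage of Du - gamma/mu with
# threshold tau/mu. Du stacks horizontal and vertical forward differences.
def shrink(v, t):
    return np.maximum(np.abs(v) - t, 0.0) * np.sign(v)

rng = np.random.default_rng(3)
u = rng.standard_normal((8, 8))
Du = np.concatenate([np.diff(u, axis=1).ravel(),
                     np.diff(u, axis=0).ravel()])
gamma = np.zeros_like(Du)   # multiplier, zero at the first iteration
tau, mu = 0.2, 1.0

w = shrink(Du - gamma / mu, tau / mu)
```

Shrinkage never increases the magnitude of a gradient component and zeroes the small ones, which is what makes this closed-form step act as a denoiser on the gradient field.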
  • Update u
By fixing   w and x , the optimization associated with u is as follows.
$$\arg\min_u \frac{1}{2}\|A u - f\|_2^2 - \gamma^T (D u - w) - \varphi^T (u - x) + \frac{\mu}{2}\|D u - w\|_2^2 + \frac{\beta}{2}\|u - x\|_2^2 \qquad (23)$$
Equation (23) is quadratic in $u$; thus, the minimization subproblem reduces to a linear system:

$$\left(D^T D + \frac{\beta + 1}{\mu} I\right) u = \frac{1}{\mu} A^T f + D^T\left(w + \frac{\gamma}{\mu}\right) + \frac{\beta}{\mu} x + \frac{\varphi}{\mu} \qquad (24)$$

The matrix on the left-hand side of the above system is positive definite and tridiagonal, since $D^T D$ is a positive semidefinite tridiagonal matrix and $\mu$ and $\beta$ are both positive scalars.
  • Update x
Given $u$, we obtain the $x$ subproblem as follows:

$$\arg\min_x \mu\,\|\theta_x\|_1 + \frac{\beta}{2}\|u - x\|_2^2 - \varphi^T (u - x) \qquad (25)$$
By applying a completing square method and omitting all constants independent of x , the subproblem defined in Equation (25) can be simplified as follows.
$$\arg\min_x \frac{1}{2}\|x - r\|_2^2 + \frac{\mu}{\beta}\|\theta_x\|_1 \qquad (26)$$
Considering $r = u - c$, with $c = \frac{\varphi}{\beta}$, as a noisy observation of $x$, the error (or noise) $e = x - r$ follows a probability law that is not necessarily Gaussian, with zero mean and variance $\sigma^2$. According to the central limit theorem or the law of large numbers, the following equation holds:

$$\frac{1}{N}\|x - r\|_2^2 = \frac{1}{K}\|\theta_x - \theta_r\|_2^2 \qquad (27)$$

with $e, x, r \in \mathbb{R}^N$, and $\theta_x, \theta_r$ the corresponding transform-coefficient vectors.
Incorporating (27) into (26) results in the following.
$$\arg\min_{\theta_x} \frac{1}{2}\|\theta_x - \theta_r\|_2^2 + \frac{K \mu}{N \beta}\|\theta_x\|_1 \qquad (28)$$
Since θ x is component-wise separable and an unknown variable, according to [44], the closed-form of the minimization problem of Equation (28) can be written as follows.
$$\hat{\theta}_x = \mathrm{soft}(\theta_r, 2\phi), \qquad \phi = \frac{K \mu}{N \beta}, \qquad K = n \times n \times m \qquad (29)$$

$$\hat{\theta}_x = \mathrm{sgn}(\theta_r)\, \max\left(|\theta_r| - 2\phi,\, 0\right) \qquad (30)$$
Thus, the solution of the $x$ subproblem of Equation (25) is as follows:

$$\tilde{x} = \Omega(\hat{\theta}_x) \qquad (31)$$
where Ω is the reconstruction operator.
Based on the discussion above, we obtain Algorithm 2 for solving (17).
Algorithm 2 is applied to recover images corrupted by white Gaussian noise and by salt-and-pepper noise.
The comparative performance of competing methods for the recovery of noisy images is discussed in the next sections.
Algorithm 2: Our algorithm (DCSR).
Input: the measurement $f$ and the linear measurement matrix $A$
Initialization: $\gamma^0 = \varphi^0 = 0$, $u^0 = f$, $w^0 = x^0 = 0$
while outer stopping criterion unsatisfied do
  while inner stopping criterion unsatisfied do
    use Equation (22) to solve the $w$ subproblem
    use Equation (24) to solve the $u$ subproblem
    use Equation (31) to solve the $x$ subproblem
  end while
  update the multipliers using Equation (20)
end while
Output: the restored image $u$.
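To make the alternating scheme concrete, here is a simplified, hedged sketch of Algorithm 2 on a 1D signal, with $A = I$ and the NLS term dropped (no $x$ subproblem), leaving a TV-regularized denoiser; all parameter values are illustrative and this is not the full DCSR implementation:

```python
import numpy as np

# Simplified Algorithm-2-style loop: TV-only augmented Lagrangian denoising
# of a 1D signal (A = I, NLS term omitted).
def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_1d(f, tau=0.5, mu=1.0, iters=100):
    N = len(f)
    D = np.diff(np.eye(N), axis=0)         # forward-difference operator
    u = f.copy()
    w = np.zeros(N - 1)
    gamma = np.zeros(N - 1)
    M_sys = np.eye(N) + mu * D.T @ D       # u-update system matrix
    for _ in range(iters):
        w = shrink(D @ u - gamma / mu, tau / mu)            # Eq. (22)-style step
        u = np.linalg.solve(M_sys, f + D.T @ (gamma + mu * w))  # u-update
        gamma = gamma - mu * (D @ u - w)                    # multiplier, Eq. (20)
    return u

rng = np.random.default_rng(5)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

The full DCSR scheme interleaves the NLS shrinkage step of Equation (31) into the inner loop, which is what restores repetitive texture that TV alone would flatten.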

5. Experimental Results

In this section, we verify the efficiency and practicability of the proposed method by describing experiments with simulated and real data sets. The performance of the proposed algorithm, DCSR, is evaluated by comparing it with three other popular CS recovery algorithms. The first is wavelet thresholding, which denoises natural images by assuming that they are sparse in the wavelet domain: it transforms the signal into a wavelet basis, thresholds the coefficients and then inverts the transform. The second, the NESTA algorithm, extends Nesterov's smoothing technique to TV minimization by modifying the smooth approximation of the objective function. The third, group-based sparse representation (GSR), simultaneously enforces image sparsity and self-similarity under a unified framework in an adaptive group domain.
We used three images in our experiments: the Barbara and Cameraman grayscale images, with sizes of 256 × 256, and the color Lena image, sized 512 × 512 (see Figure 1).
The simulations used two types of noise: additive white Gaussian noise (AWGN), characterized by its standard deviation $\sigma$ and its mean $m$, and salt-and-pepper noise.
Salt-and-pepper noise in an image has the following form:

$$\eta(p) = \begin{cases} v_{max} & \text{with probability } q_1 \\ v_{min} & \text{with probability } q_2 \end{cases} \qquad (32)$$

where $p \in \{p_1, p_2, \ldots, p_n\}$ ranges over the $n$ pixels, $v_{max}$ is the pixel intensity of salt pixels and $v_{min}$ is the pixel intensity of pepper pixels. The sum, $q = q_1 + q_2$, is the level of the salt-and-pepper noise.
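An illustrative generator for this noise model; the parameter values and the constant test image below are arbitrary choices:

```python
import numpy as np

# Salt-and-pepper noise per Eq. (32): each pixel becomes v_max with
# probability q1 (salt), v_min with probability q2 (pepper), and is left
# unchanged otherwise, so q = q1 + q2 is the noise level.
def salt_and_pepper(img, q1=0.05, q2=0.05, v_max=255, v_min=0, seed=0):
    out = img.copy()
    r = np.random.default_rng(seed).random(img.shape)
    out[r < q1] = v_max                      # salt
    out[(r >= q1) & (r < q1 + q2)] = v_min   # pepper
    return out

img = np.full((256, 256), 128, dtype=np.uint8)
noisy = salt_and_pepper(img, q1=0.1, q2=0.1)
frac = np.mean(noisy != img)
print(frac)  # close to q = 0.2
```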

5.1. Visual Quality Comparison

First, we evaluate the performance of our DCSR algorithm through experiments on the Barbara image corrupted by white Gaussian noise with a standard deviation $\sigma$ ranging from 20 to 80. Then, to verify the superiority of the proposed method, we compare it with the GSR algorithm, wavelet denoising and the NESTA algorithm. In Figure 2, each row shows the Barbara image reconstructed by one of these algorithms: DCSR (ours) in the first row, GSR in the second row, wavelet denoising in the third row and NESTA in the fourth row. The level of the initial Gaussian white noise varies by column: (a) $\sigma = 20$, (b) $\sigma = 50$, (c) $\sigma = 60$ and (d) $\sigma = 80$.
The Barbara image is relatively complex given its rich texture and geometric structure: it is clear that our algorithm (see the first row of Figure 2) is able to denoise effectively while preserving the details and texture of this particular image. Second, we applied the proposed approach to impulsive salt-and-pepper noise at levels varying from 20% to 70% and compared its denoising performance with several algorithms: the GSR algorithm, the NESTA algorithm and wavelet denoising. Figure 3 provides visual results for the different algorithms and several noise levels; it can be observed that the Cameraman image is reconstructed very well by our DCSR algorithm. Analyzing the images in the fifth column of Figure 3, obtained by the NESTA algorithm, we can observe some artifacts above the camera head and in the upper right corner, due to the nature of the NESTA algorithm itself, which is characterized by a loss of precision in high-frequency components. We can conclude that the DCSR method yields the best image quality compared to the other methods.

5.2. Image Quality Metrics

Peak signal-to-noise ratio and mean square error (MSE) have long been used as fidelity metrics in the image processing community. The formulas are simple to understand and implement; they are easy and fast to compute, and minimizing MSE is also very well understood from a mathematical point of view.
PSNR (peak signal-to-noise ratio, unit: dB) [45] is the ratio between the maximum possible power of a signal and the power of the noise. A higher PSNR value means better visual quality. PSNR is defined as follows:

$$\mathrm{PSNR} = 10 \log_{10} \frac{(2^k - 1)^2}{\mathrm{MSE}} \qquad (33)$$

$$\mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left(u(i,j) - \hat{u}(i,j)\right)^2 \qquad (34)$$

where MSE is the mean square error between the initial image $u$ and the estimated image $\hat{u}$ of size $M \times N$; $i$ and $j$ represent the row and column pixel positions, respectively; and $k$ is the number of bits of each sample value.
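Equations (33) and (34) translate directly into code; the 4 × 4 test arrays are illustrative:

```python
import numpy as np

# MSE (Eq. 34) and PSNR (Eq. 33) for k-bit images (k = 8 gives peak 255).
def mse(u, u_hat):
    return np.mean((u.astype(float) - u_hat.astype(float)) ** 2)

def psnr(u, u_hat, k=8):
    return 10 * np.log10((2 ** k - 1) ** 2 / mse(u, u_hat))

u = np.zeros((4, 4))
u_hat = np.full((4, 4), 5.0)
print(mse(u, u_hat), psnr(u, u_hat))  # 25.0 and ~34.15 dB
```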
The structural similarity (SSIM) index [46] has proven to be a better error metric for comparing image quality, with better structure preservation. Its values lie in the range [0, 1], with a value closer to one indicating better structure preservation:

$$\mathrm{SSIM} = l(u, \hat{u}) \times c(u, \hat{u}) \times s(u, \hat{u}) \qquad (35)$$
such that $l(i, j)$ is the luminance comparison, defined as follows:

$$l(i, j) = \frac{2 \mu_i \mu_j + c_1}{\mu_i^2 + \mu_j^2 + c_1} \qquad (36)$$
where μ i and μ j are functions of the mean intensities of signals   i and j , respectively.
$c(i, j)$, the contrast comparison, is a function of the standard deviations $\sigma_i$ and $\sigma_j$, and it is defined in the following form:

$$c(i, j) = \frac{2 \sigma_i \sigma_j + c_2}{\sigma_i^2 + \sigma_j^2 + c_2} \qquad (37)$$
Structure comparison $s(i, j)$ is defined as follows:

$$s(i, j) = \frac{\sigma_{ij} + c_3}{\sigma_i \sigma_j + c_3} \qquad (38)$$
where $\mu_i$ and $\mu_j$ are the mean values of images $u$ and $\hat{u}$, respectively; $\sigma_i^2$ and $\sigma_j^2$ represent the variances of $u$ and $\hat{u}$, respectively; $\sigma_{ij}$ is the covariance of images $u$ and $\hat{u}$; and $c_1$, $c_2$ and $c_3$ are constants.
Specifically, it is possible to choose $c_i = (K_i D)^2$, with $i = 1, 2$, $K_i \ll 1$ and $D$ the dynamic range of the pixel values.
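A hedged, single-window sketch of Equations (35)–(38), using the common choice $c_3 = c_2/2$ so that the contrast and structure terms combine into one factor; production SSIM implementations average over local windows instead of using global statistics:

```python
import numpy as np

# Global SSIM: luminance * contrast * structure, with c3 = c2/2 folded in,
# giving the standard combined form (2*cov + c2) / (var_i + var_j + c2).
def ssim_global(u, u_hat, D=255, K1=0.01, K2=0.03):
    c1, c2 = (K1 * D) ** 2, (K2 * D) ** 2
    mu_i, mu_j = u.mean(), u_hat.mean()
    var_i, var_j = u.var(), u_hat.var()
    cov = np.mean((u - mu_i) * (u_hat - mu_j))
    return ((2 * mu_i * mu_j + c1) * (2 * cov + c2)) / \
           ((mu_i ** 2 + mu_j ** 2 + c1) * (var_i + var_j + c2))

u = np.random.default_rng(6).integers(0, 256, (64, 64)).astype(float)
print(ssim_global(u, u))         # identical images give 1.0
print(ssim_global(u, u + 20.0))  # a brightness shift lowers the score
```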

5.3. Quantitative Assessment

In this subsection, we evaluate the quality of image reconstruction. We compare these methods quantitatively; the peak signal-to-noise ratio and structural similarity indices are calculated for images with different algorithms.
Recall that Barbara was first contaminated by Gaussian white noise with different values of the standard deviation $\sigma$ and then denoised with the various algorithms. The PSNR values obtained by the different algorithms are shown in Figure 4. Let us recall that a higher PSNR indicates superior image quality and good performance of the algorithm. The PSNR values illustrate that DCSR yields a higher PSNR than the other methods.
Figure 5 presents the performance analysis of the four denoising methods for grayscale images corrupted by additive white Gaussian noise. We can see that our algorithm, DCSR, exhibits the best performance at both high PSNR (low noise level) and low PSNR (high noise level).
Furthermore, to evaluate the denoising effect of our algorithm, DCSR, the grayscale image is corrupted by salt-and-pepper noise at different levels. Table 1 shows the quantitative PSNR and SSIM assessment results of our algorithm, DCSR, for several noise levels, alongside the results obtained with GSR, wavelet denoising and NESTA.
The best results are highlighted in bold type font. Table 1 validates that our method has superiority in image reconstruction compared with the three other methods.
The performance of our method is confirmed in Figure 6, which illustrates PSNR output variations concerning the input PSNR.
The next challenge is to apply the DCSR algorithm to color or multidimensional images. This is essential since most digital images used in the modern world are not grayscale but usually operate in either the RGB or YCbCr color space, both of which are three-dimensional.
We evaluate the proposed DCSR algorithm on the Lena color image because this test image is interesting from the point of view of its mixture of details, flat regions, shadow areas and texture. We mainly compared our proposed method to wavelet denoising and the GSR algorithm.
As mentioned previously, two types of noise were used in these experiments: AWGN with a standard deviation   σ and several salt and pepper noise levels.
Our first experiment is to add white Gaussian noise with different σ values, σ = 20 ,   50 ,   60   and   80 , to the test image (here Lena), thus generating noisy observations, and the challenge is to determine the most efficient denoising method in terms of PSNR.
The denoising results for this image with the different algorithms are shown in Figure 7. The proposed method achieves the highest PSNR scores in all cases, which fully demonstrates that its denoising results are the best both objectively and visually.
Figure 8 shows the output results (after reconstruction) evaluated at different noise levels. DCSR algorithm provides the best denoising performance.
In the second experiment, we added salt and pepper noise with different noise levels at 10%, 15%, 20% and 30% to the Lena test image. Then, we applied our denoising algorithm to restore the noisy images and compared them with two other algorithms: wavelet denoising and GSR algorithm. Figure 9 shows that the DCSR algorithm provides better visual quality results. This performance is confirmed by Figure 10, which represents the variations of the output PSNR vs. the input PSNR. The robustness of our algorithm relative to the noise level is, thus, verified.
From these results of PSNR and SSIM, in all the cases, the proposed method achieves the highest scores, which fully demonstrates that the restoration results by the proposed method are the best both objectively and visually.

5.4. Algorithm Robustness

The robustness of the proposed algorithm will be confirmed in this subsection.
The test images are corrupted, on the one hand, by additive white Gaussian noise and, on the other hand, by salt and pepper noise at different noise levels. Figure 11, Figure 12, Figure 13 and Figure 14 show PSNR values (after reconstruction) as a function of the number of iterations for the greyscale images of Barbara and Cameraman and for the color image of Lena using different algorithms. Note that we did not use the NESTA algorithm in the case of the color images because NESTA is not suitable for color images.

5.5. Computational Complexity

In the following paragraphs, we estimate the computational complexity of the proposed DCSR algorithm. The main complexity of the proposed algorithm clearly comes from the total variation (TV) term and the high cost of the nonlocal self-similarities (NLS).
Knowing that the computational complexity of the TV term is $O(N)$ [47], let us compute that of the NLS term.
For an image $u$ of $N$ pixels, let $T_s$ be the average time needed to find the similar blocks for each reference block.
If $u_i$, $i = 1, 2, \ldots, N$, denote the overlapped blocks of size $n \times n$ and $m - 1$ is the number of similar blocks forming $S_{u_i}$, all blocks of $S_{u_i}$ are stacked in a matrix of size $n \times m$ with complexity $O(n \times m^2)$; hence, the resulting complexity is $O(N(n m^2 + T_s))$, similar to the computational complexity of group-based sparse representation (GSR) [48].
The total computational complexity of our algorithm is therefore $O(N(n m^2 + T_s) + N)$.
It is interesting to compare the computational complexity of DCSR with that of competing methods.
The computational complexity of the NESTA algorithm is O(N + N log2 N) [49] and that of wavelet denoising is O(N log2 N) [50].
Table 2 summarizes the computational complexity of the four algorithms used.
The result of this comparison is as follows:

O(N log2 N) < O(N + N log2 N) < O(N(n m^2 + T_s)) < O(N(n m^2 + T_s) + N)    (39)

Relation (39) clearly shows that our proposed algorithm is the most expensive in terms of computational complexity, exceeding that of GSR by a term of order N. Such an increase is not excessive in view of the good performance of this algorithm and of current computational means, which permit real-time processing.
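To make this ordering concrete, the four orders of growth can be evaluated for a 512 × 512 image (the block size n, group size m and search cost T_s below are placeholder values, not the paper's):

```python
import math

N = 512 * 512           # pixels in a 512 x 512 image
n, m, Ts = 8, 16, 1000  # illustrative block size, group size and search cost

costs = {
    "wavelet: O(N log2 N)": N * math.log2(N),
    "NESTA: O(N + N log2 N)": N + N * math.log2(N),
    "GSR: O(N(n m^2 + Ts))": N * (n * m ** 2 + Ts),
    "DCSR: O(N(n m^2 + Ts) + N)": N * (n * m ** 2 + Ts) + N,
}
# Printing in ascending order reproduces the ordering of relation (39).
for name, c in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name}: {c:.3e} operations")
```

The dominant factor for GSR and DCSR is the per-block grouping term n m^2 + T_s; the extra N of DCSR is negligible in comparison, which is why the overhead does not preclude real-time processing.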

6. Discussion

In the real world, acquired, measured or recorded images have generally suffered degradation of various origins:
- bad weather conditions (wind, fog, haze, etc.);
- the image acquisition chain;
- image compression.
This degradation can be canceled, or at least reduced, by a preprocessing step called denoising. Such an operation produces, on the one hand, a perceptually better image and, on the other hand, improves the performance of subsequent image processing (extraction of the desired information, prediction, classification, texture analysis, segmentation, etc.).
With this in mind, the authors of reference [51] propose a new image denoising method, called dehazing, that aims to eliminate the haze due to bad weather conditions. This original method is based on artificial multiexposure image fusion [52] involving local and global image details. Such an approach recovers quality images but, unfortunately, halos or artifacts often appear near the edges when the inputs are sparse, which makes the postprocessing (linear saturation adjustment) introduced by the authors ineffective.
Wavelet image denoising [53] is powerful for edge detection in three preferred directions: diagonal, vertical and horizontal. For this purpose, several types of wavelets exist, such as the Haar wavelet, which preserves edge information but is neither continuous nor differentiable. This wavelet can be identified with the optimization problem using an l1 norm. In contrast, the Morlet wavelet [13], which is continuous, can be identified with the use of the l2 norm.
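The l1/shrinkage connection mentioned here can be made concrete with a minimal single-level Haar sketch (our own toy version; a practical implementation would use several decomposition levels and a noise-calibrated threshold [41,42]):

```python
import numpy as np

def haar2(u):
    """One level of the 2D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = (u[0::2, :] + u[1::2, :]) / 2.0   # row averages
    d = (u[0::2, :] - u[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def ihaar2(LL, bands):
    """Exact inverse of haar2 for even-sized images."""
    LH, HL, HH = bands
    a = np.repeat(LL + LH, 2, axis=1); a[:, 1::2] = LL - LH
    d = np.repeat(HL + HH, 2, axis=1); d[:, 1::2] = HL - HH
    u = np.repeat(a + d, 2, axis=0); u[1::2, :] = a - d
    return u

def soft(x, t):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(noisy, t):
    """Shrink detail subbands, keep the approximation, invert."""
    LL, bands = haar2(noisy)
    return ihaar2(LL, tuple(soft(b, t) for b in bands))
```

`soft` is exactly the proximal operator of the l1 norm, which is why Haar shrinkage can be read as an l1-regularized optimization, as the text notes.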
Our method combines the total variation with nonlocal self-similarities to recover the image without artifacts while preserving its details and textures. On a psycho-emotional level, the visual quality of an image matters to the observer. In this respect, it is interesting to note that the images reconstructed in Figure 2 by the NESTA algorithm are visually unpleasant and lose some important details. Limitations also appear with wavelet denoising: the noise contained in the image cannot be removed if the standard deviation is too large (above 50), i.e., if the noise level is high. Concerning GSR (second row of Figure 2), we observe that this algorithm is quite efficient in removing noise but suffers a slight loss of contrast. In contrast, our algorithm, DCSR, remains efficient even for noise with a standard deviation as high as 80. It is, therefore, confirmed that the proposed method provides the most visually satisfying results for both edges and textures.
On an objective and therefore measurable level, Table 1 shows that the PSNR and SSIM values obtained by NESTA are the lowest, which clearly indicates a limitation in the performance of this algorithm. GSR and wavelets obtain intermediate PSNR and SSIM values. At the end of the convergence analysis, Figure 11, Figure 12, Figure 13 and Figure 14 allow us to conclude that, as the number of iterations increases, all PSNR curves increase monotonically and stabilize from the 10th iteration. These figures show that all these algorithms converge very quickly. Nevertheless, the most robust algorithm should have a high PSNR. According to these figures, regardless of the nature of the noise or the test image used, DCSR has the highest PSNR. It is, therefore, the most robust algorithm among the competing methods.
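As a reminder of the second metric, the SSIM index [46] compares luminance, contrast and structure between two images; a simplified single-window version can be sketched as follows (practical SSIM averages this quantity over local Gaussian-weighted windows):

```python
import numpy as np

def ssim_global(x, y, peak=255.0):
    """Global (single-window) SSIM index between two images.
    C1 and C2 are the standard stabilizing constants of Wang et al."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

The index equals 1 for identical images and decreases toward 0 as structural similarity is lost, which is why it complements the purely pixel-wise PSNR in Table 1.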
In terms of computational complexity, however, our approach is not the most advantageous, although real-time processing remains largely feasible. Complexity reduction techniques, such as block-compressed sensing [54] or deep learning [55], could further reduce this cost.
We also carried out an ablation study to clarify the effect of total variation (TV) and nonlocal self-similarity (NLS) on the compressed sensing (CS) recovery model. First, we removed all regularization constraints, leaving CS alone. Next, we coupled CS with TV only, canceling NLS. Then, we replaced TV with NLS, keeping CS. Table 3 summarizes this ablation analysis using quantitative values: PSNR (dB) and SSIM. From Table 3, we can conclude that the presence of TV or NLS improves the quality of the reconstruction with CS, while noting that the CS+NLS coupling performs better than CS+TV. Moreover, by comparing the four scenarios with each other, we can conclude that the reconstruction with both regularization functions simultaneously (CS+TV+NLS) is significantly better than the other three scenarios: it effectively removes noise and improves the robustness of our approach.

7. Conclusions

In this paper, we proposed an original image denoising method based on compressed sensing, called denoising-compressed sensing by regularization terms (DCSR), which incorporates two regularization constraints in the model: total variation and nonlocal self-similarity. The optimization of this method is performed by the augmented Lagrangian, which avoids the difficult problem of nonlinearity and nondifferentiability of the regularization terms.
The effectiveness of our approach was validated using images corrupted by white Gaussian noise and impulsive salt and pepper noise. Comparing DCSR, in terms of PSNR and SSIM, to state-of-the-art methods such as Nesterov's algorithm, group-based sparse representation and wavelet-based methods shows that, depending on the image texture and the type of noise corrupting the image, our method performs much better: we gain at least 25% in PSNR and at least 11% in SSIM. The price to pay is a slight increase in computational complexity, of the order of the image size, which does not call real-time processing into question.
Due to the robustness and speed of convergence of the DCSR algorithm, it can be efficiently applied in vital and sensitive domains such as medical imaging and remote sensing. Our future work will introduce a layer of intelligence at the acquisition level, aimed at automatically determining the image texture and its quality in terms of noise level, blur and shooting conditions (lighting, inpainting, registration, occlusion, low resolution, etc.) [56,57,58,59,60,61] in order to automatically adjust the parameters required for optimal use of the proposed DCSR algorithm.
The definitions of the acronyms used in this work are given in Table 4.

Author Contributions

Software, investigation, formal analysis and writing, A.E.M.; methodology, validation and writing–review and editing, A.O.; methodology, project administration and funding acquisition, M.S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yang, Y.; Qin, X.; Wu, B. Fast and accurate compressed sensing model in magnetic resonance imaging with median filter and split Bregman method. IET Image Process. 2019, 13, 1–8.
2. Labat, V.; Remenieras, J.; Matar, O.B.; Ouahabi, A.; Patat, F. Harmonic propagation of finite amplitude sound beams: Experimental determination of the nonlinearity parameter B/A. Ultrasonics 2000, 38, 292–296.
3. He, J.; Zhou, Y.; Sun, G.; Xu, Y. Compressive multi-attribute data gathering using hankel matrix in wireless sensor networks. IEEE Commun. Lett. 2019, 23, 2417–2421.
4. Haneche, H.; Ouahabi, A.; Boudraa, B. New mobile communication system design for Rayleigh environments based on compressed sensing-source coding. IET Commun. 2019, 13, 2375–2385.
5. Haneche, H.; Boudraa, B.; Ouahabi, A. A new way to enhance speech signal based on compressed sensing. Measurement 2020, 151, 107–117.
6. Haneche, H.; Ouahabi, A.; Boudraa, B. Compressed sensing-speech coding scheme for mobile communications. Circuits Syst. Signal Process. 2021, 40, 5106–5126.
7. Li, H.; Li, S.; Li, Z.; Dai, Y.; Jin, T. Compressed sensing imaging with compensation of motion errors for MIMO Radar. Remote Sens. 2021, 13, 4909.
8. Andras, I.; Dolinský, P.; Michaeli, L.; Šaliga, J. A time domain reconstruction method of randomly sampled frequency sparse signal. Measurement 2018, 127, 68–77.
9. Mimouna, A.; Alouani, I.; Ben Khalifa, A.; El Hillali, Y.; Taleb-Ahmed, A.; Menhaj, A.; Ouahabi, A.; Ben Amara, N.E. OLIMP: A heterogeneous multimodal dataset for advanced environment perception. Electronics 2020, 9, 560.
10. Ouahabi, A.; Depollier, C.; Simon, L.; Koume, D. Spectrum estimation from randomly sampled velocity data [LDV]. IEEE Trans. Instrum. Meas. 1998, 47, 1005–1012.
11. Ouahabi, A. A review of wavelet denoising in medical imaging. In Proceedings of the 8th International Workshop on Systems, Signal Processing and Their Applications (IEEE/WoSSPA), Algiers, Algeria, 12–15 May 2013; pp. 19–26.
12. Ahmed, S.S.; Messali, Z.; Ouahabi, A.; Trepout, S.; Messaoudi, C.; Marco, S. Nonparametric denoising methods based on contourlet transform with sharp frequency localization: Application to low exposure time electron microscopy images. Entropy 2015, 17, 3461–3478.
13. Smirnova, O.M.; Menéndez Pidal de Navascués, I.; Mikhailevskii, V.R.; Kolosov, O.I.; Skolota, N.S. Sound-Absorbing Composites with Rubber Crumb from Used Tires. Appl. Sci. 2021, 11, 7347.
14. Ouahabi, A. Signal and Image Multiresolution Analysis; ISTE-Wiley: London, UK; Hoboken, NJ, USA, 2013.
15. Femmam, S.; M’Sirdi, N.K.; Ouahabi, A. Perception and characterization of materials using signal processing techniques. IEEE Trans. Instrum. Meas. 2001, 50, 1203–1211.
16. Chen, S.; Xu, S.; Chen, X.; Li, F. Image denoising using a novel deep generative network with multiple target images and adaptive termination condition. Appl. Sci. 2021, 11, 4803.
17. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhu, C. Group sparsity residual constraint with non-local priors for image restoration. IEEE Trans. Image Process. 2020, 29, 8960–8975.
18. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 2004, 20, 89–97.
19. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434.
20. Ma, T.; Xu, Z.; Meng, D. Remote sensing image denoising via low-rank tensor approximation and robust noise modeling. Remote Sens. 2020, 12, 1278.
21. Ghaderpour, E. Multichannel antileakage least-squares spectral analysis for seismic data regularization beyond aliasing. Acta Geophys. 2019, 67, 1349–1363.
22. Zhang, H.; Liu, L.; He, W.; Zhang, L. Hyperspectral image denoising with total variation regularization and nonlocal low-rank tensor decomposition. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3071–3084.
23. Wang, M.; Wang, Q.; Chanussot, J. Tensor low-rank constraint and l0 total variation for hyperspectral image mixed noise removal. IEEE J. Sel. Top. Signal Process. 2021, 15, 718–733.
24. Candes, E.J.; Romberg, J.K.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
25. Eldar, Y.C.; Kutyniok, G. Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012.
26. El Mahdaoui, A.; Ouahabi, A.; Moulay, M.S. Image recovery using total variation minimization on compressive sensing. In Proceedings of the 6th International Conference on Image and Signal Processing and Their Applications (ISPA), Algiers, Algeria, 24–25 November 2019; pp. 1–5.
27. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242.
28. Donoho, D.L.; Maleki, A.; Montanari, A. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. USA 2009, 106, 18914–18919.
29. Metzler, C.A.; Maleki, A.; Baraniuk, R.G. From denoising to compressed sensing. IEEE Trans. Inf. Theory 2016, 62, 5117–5144.
30. Hestenes, M.R. Multiplier and gradient methods. J. Optim. Theory Appl. 1969, 4, 303–320.
31. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: Berlin/Heidelberg, Germany, 2006.
32. Blomgren, P.; Chan, T.F. Color TV: Total variation methods for restoration of vector-valued images. IEEE Trans. Image Process. 1998, 7, 304–309.
33. Mallat, S. A Wavelet Tour of Signal Processing: The Sparse Way; Academic Press: Cambridge, MA, USA, 2008.
34. Buades, A.; Coll, B.; Morel, J.M. Nonlocal image and movie denoising. Int. J. Comput. Vis. 2008, 76, 123–139.
35. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 1998, 20, 33–61.
36. Dabov, K.A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
37. Ouahabi, A. Multifractal analysis for texture characterization: A new approach based on DWT. In Proceedings of the 10th International Conference on Information Science, Signal Processing and Their Applications (ISSPA 2010), Kuala Lumpur, Malaysia, 10–13 May 2010; pp. 698–703.
38. Djeddi, M.; Ouahabi, A.; Batatia, H.; Basarab, A.; Kouam, D. Discrete wavelet for multifractal texture classification: Application to medical ultrasound imaging. In Proceedings of the IEEE International Conference on Image Processing, Hong Kong, 26–29 September 2010; pp. 637–640.
39. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D 1992, 60, 259–268.
40. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 60–65.
41. Donoho, D.L.; Johnstone, I.M.; Kerkyacharian, G.; Picard, D. Wavelet shrinkage: Asymptopia? J. R. Stat. Soc. B 1995, 57, 301–337.
42. Donoho, D.L.; Johnstone, I.M. Ideal spatial adaptation by wavelet shrinkage. Biometrika 1994, 81, 425–455.
43. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl. 2013, 56, 507–530.
44. Li, C.; Yin, W.; Zhang, Y. User’s guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms. CAAM Rep. 2010, 20, 46–47.
45. Ferroukhi, M.; Ouahabi, A.; Attari, M.; Habchi, Y.; Taleb-Ahmed, A. Medical video coding based on 2nd-generation wavelets: Performance evaluation. Electronics 2019, 8, 88.
46. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
47. Daei, S.; Haddadi, F.; Amini, A. Sample complexity of total variation minimization. IEEE Signal Process. Lett. 2018, 25, 1151–1155.
48. Zhang, J.; Zhao, D.; Gao, W. Group-based sparse representation for image restoration. IEEE Trans. Image Process. 2014, 23, 3336–3351.
49. Becker, S.; Bobin, J.; Candes, E.J. NESTA: A fast and accurate first-order method for sparse recovery. SIAM J. Imaging Sci. 2011, 4, 1–39.
50. Srivastava, M.; Anderson, C.L.; Freed, J.H. A new wavelet denoising method for selecting decomposition levels and noise thresholds. IEEE Access 2016, 4, 3862–3877.
51. Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Qi, G.; Mazur, N. A novel fast single image dehazing algorithm based on artificial multiexposure image fusion. IEEE Trans. Instrum. Meas. 2021, 70, 1–23.
52. Kaur, H.; Koundal, D.; Kadyan, V. Image fusion techniques: A survey. Arch. Comput. Methods Eng. 2021, 28, 4425–4447.
53. Bnou, K.; Raghay, S.; Hakim, A. A wavelet denoising approach based on unsupervised learning model. EURASIP J. Adv. Signal Process. 2020, 36.
54. Li, L.; Fang, Y.; Liu, L.; Peng, H.; Kurths, J.; Yang, Y. Overview of compressed sensing: Sensing model, reconstruction algorithm, and its applications. Appl. Sci. 2020, 10, 5909.
55. Tian, C.; Li, L.; Zhang, W.; Xu, Y.; Zhou, W.; Lin, C.-W. Deep learning on image denoising: An overview. Neural Netw. 2020, 131, 251–275.
56. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, present, and future of face recognition: A review. Electronics 2020, 9, 1188.
57. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Jacques, S. Multi-block color-binarized statistical images for single-sample face recognition. Sensors 2021, 21, 728.
58. El Morabit, S.; Rivenq, A.; Zighem, M.-E.-n.; Hadid, A.; Ouahabi, A.; Taleb-Ahmed, A. Automatic pain estimation from facial expressions: A comparative analysis using off-the-shelf CNN architectures. Electronics 2021, 10, 1926.
59. Khaldi, Y.; Benzaoui, A.; Ouahabi, A.; Jacques, S.; Taleb-Ahmed, A. Ear recognition based on deep unsupervised active learning. IEEE Sens. J. 2021, 21, 20704–20713.
60. Arbaoui, A.; Ouahabi, A.; Jacques, S.; Hamiane, M. Concrete cracks detection and monitoring using deep learning-based multiresolution analysis. Electronics 2021, 10, 1772.
61. Arbaoui, A.; Ouahabi, A.; Jacques, S.; Hamiane, M. Wavelet-based multiresolution analysis coupled with deep learning to efficiently monitor cracks in concrete. Frat. Integrità Strutt. 2021, 58, 33–47.
Figure 1. Test images. (a) Barbara, (b) Cameraman and (c) Lena.
Figure 2. Visual comparison of the reconstruction quality for different noise levels. (a) σ = 20 , (b) σ = 50 , (c) σ = 60 and (d) σ = 80 . The algorithms used are DCSR, GSR, wavelet denoising and NESTA: the denoised images are arranged in rows 1, 2, 3 and 4, respectively.
Figure 3. Visual comparison of the reconstruction quality for different noise levels in the case of salt and pepper (see column 1 located on the left side). The algorithms used are DCSR, GSR, wavelet denoising and NESTA: the denoised images are arranged in columns 2, 3, 4 and 5, respectively.
Figure 4. PSNR for different noise levels using the proposed DCSR, GSR algorithm, wavelet denoising and NESTA algorithm, arranged in rows 1, 2, 3 and 4, respectively. From left to right σ = 20, 50, 60 and 80, respectively.
Figure 5. Grayscale image corrupted by additive white Gaussian noise (AWGN): performance analysis of four denoising methods.
Figure 6. Grayscale image corrupted by salt and pepper noise: performance analysis of four denoising methods.
Figure 7. Restoration results of the AWGN-corrupted Lena image for different values of σ (see column 1 located on the left side of the figure). The algorithms used, the proposed DCSR, wavelet denoising and GSR, are arranged in columns 2, 3 and 4, respectively.
Figure 8. Color image corrupted by AWGN: performance analysis of three denoising methods.
Figure 9. Restoration results of the salt and pepper noise-corrupted Lena image for different values of σ (see column 1 located on the left side of the figure). The algorithms used, the proposed DCSR, wavelet denoising and GSR, are arranged in columns 2, 3 and 4, respectively.
Figure 10. Color image corrupted by salt and pepper noise: performance analysis of three denoising methods.
Figure 11. PSNR values of Barbara’s grayscale image recovered by four competing methods as a function of the number of iterations. The test image is corrupted by AWGN.
Figure 12. PSNR values of the Cameraman grayscale image recovered by four competing methods as a function of the number of iterations. The test image is corrupted by salt and pepper noise.
Figure 13. PSNR values of the recovered Lena color image by the competing methods vs. iteration number. The test image is corrupted by AWGN.
Figure 14. PSNR values of the recovered Lena color image by the competing methods vs. iteration number. The test image is corrupted by salt and pepper noise.
Table 1. Quality metric results of the different algorithms for different noise levels.

Method               Noise level    PSNR (dB)    SSIM
Ours (DCSR)          20%            33.97        0.99
                     50%            30.08        0.95
                     60%            27.03        0.93
                     70%            25.24        0.92
GSR                  20%            26.70        0.80
                     50%            24.66        0.78
                     60%            23.61        0.74
                     70%            22.40        0.69
Wavelet denoising    20%            31.68        0.89
                     50%            28.61        0.86
                     60%            26.69        0.79
                     70%            24.84        0.74
NESTA                20%            18.88        0.39
                     50%            16.71        0.38
                     60%            16.33        0.37
                     70%            16.03        0.36
Table 2. Computational complexity.

Algorithm            Computational complexity
TV                   O(N) [47]
Wavelet denoising    O(N log2 N) [50]
NESTA                O(N + N log2 N) [49]
GSR                  O(N(n m^2 + T_s)) [48]
DCSR (ours)          O(N(n m^2 + T_s) + N)
Table 3. Ablation analysis.

Method              σ     PSNR (dB)    SSIM
CS                  20    22.11        0.53
                    50    20.35        0.43
                    60    18.59        0.37
CS+TV               20    22.34        0.54
                    50    21.74        0.52
                    60    20.67        0.44
CS+NLS              20    29.45        0.89
                    50    28.57        0.78
                    60    27.97        0.74
CS+TV+NLS (DCSR)    20    38.75        0.99
                    50    35.16        0.97
                    60    34.67        0.96
Table 4. Definition of acronyms.

Acronym    Description
CS         Compressed Sensing
DCSR       Denoising-Compressed Sensing by Regularization terms
PSNR       Peak Signal-to-Noise Ratio
SSIM       Structural Similarity
TV         Total Variation
NLS        Nonlocal Self-Similarity
DWT        Discrete Wavelet Transform
NESTA      Nesterov's Algorithm
GSR        Group-based Sparse Representation
AWGN       Additive White Gaussian Noise
MSE        Mean Square Error
Mahdaoui, A.E.; Ouahabi, A.; Moulay, M.S. Image Denoising Using a Compressive Sensing Approach Based on Regularization Constraints. Sensors 2022, 22, 2199. https://doi.org/10.3390/s22062199