Article

A Novel Stripe Noise Removal Model for Infrared Images

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(8), 2971; https://doi.org/10.3390/s22082971
Submission received: 12 March 2022 / Revised: 2 April 2022 / Accepted: 11 April 2022 / Published: 13 April 2022
(This article belongs to the Section Sensing and Imaging)

Abstract

Infrared images often carry obvious streak noises due to the non-uniformity of the infrared detector and the readout circuit. These streak noises greatly affect the image quality, adding difficulty to subsequent image processing. Compared with current elimination algorithms for infrared stripe noises, our approach fully utilizes the difference between the stripe noise components and the actual information components, takes the gradient sparsity along the stripe direction and the global sparsity of the stripe noises as regular terms, and treats the sparsity of the components across the stripe direction as a fidelity term. On this basis, an adaptive edge-preserving operator (AEPO) based on edge contrast was proposed to protect the image edge and, thus, prevent the loss of edge details. The final solution was obtained by the alternating direction method of multipliers (ADMM). To verify the effectiveness of our approach, many real experiments were carried out to compare it with state-of-the-art methods in two aspects: subjective judgment and objective indices. Experimental results demonstrate the superiority of our approach.

1. Introduction

The readout circuit of an infrared detector is highly inconsistent. The non-uniformity of the detector is often manifested in the image as stripe noises, which directly degrade the imaging quality and can even hinder subsequent image processing [1,2,3], such as image classification, target detection, and target recognition. Therefore, it is important to explore how to remove stripe noises while preserving image details. This paper aims to separate the stripe noise components from a target infrared image and obtain the information components that preserve image details.
In recent years, many scholars have been devoted to the removal of stripe noises from images, and have proposed various methods such as frequency domain filtering, wavelet transform, statistical matching, and total variation.
In 1987, Quarmby adopted a spatial frequency domain filter to remove stripe noises, according to the frequency difference between stripe noises and the target information in the frequency domain [4]. Soon, wavelet transform was introduced to eliminate stripe noises [5,6,7]. One such method, used as a contrastive baseline in this paper, is the multi-scale guided filter (MSGF) [8]. Through wavelet transform, the target image is divided into high- and low-frequency components before removing stripe noises, which is possible due to the ability of wavelet transform to describe the local frequency components of signals. However, there are two limitations to the frequency-domain filtering of stripe noises: First, this strategy cannot easily differentiate between stripe noises and the target information, unless the stripe noises are highly regular. Second, this strategy may remove edge and texture information when the target information is relatively complex. As a result, the original information may be lost, and artifacts may even appear.
Another popular approach to stripe noise removal is statistical matching, which is often used in engineering. Originally applied to the marine data of MOS-B, this approach assumes that the response of each pixel on the sea surface should output equal electrical levels [9]. Under this assumption, the output features of each pixel are obtained, including the gain and bias coefficients. Then, the denoised target image is obtained through correction based on these parameters. Later, some scholars combined local constant statistical constraint with neural networks [10], or wavelet transform coupled with gradient equalization (WAGE) [11], to remove stripe noises. The former approach views each row of pixels as having the same standard deviation and mean, and takes these values as the median of local channel statistics, thereby correcting the stripe noises caused by image non-uniformity. The latter approach concentrates the stripe noise components in each vertical component of wavelet transform, and removes the stripe noises through column equalization. However, this approach, failing to consider strong edge information, cannot effectively remove the stripe noises when they are unevenly distributed.
Recently, a class of popular methods emerged based on total variation minimization. The earliest method in this class was proposed by Antonin Chambolle in 2004. These methods construct a cost function that represents the ideal image features, and solve for the ideal image that minimizes this function through gradient descent [12]. Then, some scholars adopted the L1-norm of the difference between the original image and the denoised image as the cost function [13] to optimize denoising. Some scholars combined the above three methods to remove noises and proposed total variation coupled with a guided filter (TVGF) for stripe noises. Firstly, frequency domain filtering was employed to extract the high-frequency stripe noise components. Secondly, the total variation model was adopted to implement gradient equalization and eliminate strong interference noises. Finally, the smooth image was used as a guide to eliminate stripe noises [14].
Most existing stripe noise removal methods for infrared images focus on the denoising degree of the image, but rarely consider the structural features of the stripe noises, not to mention the differentiation between image edges and stripe noises. Hence, many current stripe noise removal algorithms for infrared images either have insufficient denoising ability (the denoised image still has residual noises), or have excessive denoising ability (the denoised image loses information). In previous studies, stripe noises in infrared images were regarded as fixed pattern noises, i.e., additive noises. In other words, the original image was assumed to contain two types of components: noise and information [15,16,17]. The denoising problem can be regarded as the extraction of noise components from the original image, that is, the estimation of noise components. Considering the above problems, this paper tries to combine prior information of infrared stripe noises with edge extraction, in order to improve the denoising effect while preserving the edge information.
In this paper, the L1-norm, which is often used as an error function, is adopted to represent the stripe noise features of the infrared images. Owing to their stripe property, the noise components clearly differ from information components in terms of structure and direction. After exploring the directionality and cross-directionality of the noise components, this paper presents a new optimization-based stripe noise removal model for infrared images, capable of adaptively preserving the edges. In addition, the alternating direction method of multipliers (ADMM) is utilized to solve the model [18,19]. The proposed method has the following strengths:
1. This paper constructs a convex optimization model, drawing on the unique directional, cross-directional, and structural properties of stripe noises. The model makes full use of the prior information on stripe noise classes and actual information components to improve the noise separation.
2. Based on the global sparsity and gradient sparsity of stripe noises, this paper adopts the L1-norm to constrain the overall sparsity of stripe noise and the gradient sparsity along the stripe direction, producing a convex optimization model. This facilitates the solution of the optimal results.
3. This paper proposes an adaptive edge-preserving operator (AEPO), which ensures that the edges will not be over-smoothed or distorted during the optimization process.

2. Preliminaries

Stripe noises differ from other noises in directionality and structure. According to these unique properties, this section focuses on designing the regular terms to remove the noises of infrared images.

2.1. Stripe Noise Model

In the field of infrared image denoising, stripe noises are widely regarded as additive noises with strong structuredness and directionality. The optimization-based denoising approaches emphasize the design of proper regular terms, in the light of the properties of stripe noises or the target image.
The first step is to mathematically depict the correlations of the original image with the stripe noise components and information components:
I ( i , j ) = D ( i , j ) + N ( i , j )
where I ( i , j ) , D ( i , j ) , and N ( i , j ) are the original image outputted by the infrared detector, the information components of the noise-free image (i.e., the components remaining after the removal of stripe noises), and the stripe noise components at pixel ( i , j ) , respectively. Since our strategy considers the entire image, Formula (1) can be rewritten as a matrix:
I = D + N
where, I , D , and N are the discrete vectors of I ( i , j ) , D ( i , j ) , and N ( i , j ) , respectively. Starting with the properties of stripe noises N , this paper presents a denoising approach for stripe noises based on optimization. The proposed approach can remove stripe noises from the original image and preserve the effective information in the image as much as possible. Figure 1 illustrates the procedure of the approach.
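As a quick illustration of the additive model in Formula (2), the following sketch builds a synthetic observation from clean components and column-wise stripes; the image size and noise level are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical illustration of the additive stripe noise model I = D + N.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(512, 640))           # clean information components
column_bias = rng.normal(0.0, 0.05, size=(1, 640))   # one offset per detector column
N = np.tile(column_bias, (512, 1))                   # stripes are constant along each column
I = D + N                                            # observed infrared image

# Destriping reduces to estimating N from I; with N known, D is recovered exactly.
D_hat = I - N
```

Note that each column of N is constant along the stripe (vertical) direction, which is the structural property the regular terms below exploit.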

2.2. Properties of Stripe Noises

To improve the denoising of stripes noises in infrared images, it is essential to fully understand all prior knowledge of the topic, and to constrain the features of these noises with the corresponding regular terms. Figure 2 and Figure 3 show some properties of stripe noises. It can be seen that the stripe noise components are more directional and structurally regular than the information components in the original image.
(1) Directionality
Figure 2 shows the differences in properties between stripe noise components and information components in the original image. Comparing Figure 2d with Figure 2f, it is clear that the vertical gradient of stripe noise components is much smoother than that of information components. Comparing Figure 2e with Figure 2g, it is apparent that the horizontal gradient of stripe noise components is not as smooth as that of information components. In addition, the gradient domain of stripe noises is sparse in the vertical direction. To distinguish between stripe noise components and information components, the sparsity needs to be constrained into the vertical gradient field. The L0-norm is the best tool to describe sparsity [20,21]. Hence, the following regular term formula can be established as:
$P_1(N) = \| d_y \otimes N \|_0$
where $d_y$ is the convolutional gradient operator in the vertical direction, and $\otimes$ denotes convolution. Since the L0-norm is nonconvex and difficult to minimize directly, this term is sparsely represented by the convex L1-norm instead of the L0-norm [22,23]. Therefore, we denote this regular term as:
$P_1(N) = \| d_y \otimes N \|_1$
This kind of optimization-based approach often uses the root-mean square error (RMSE) or square error between the original image and noise components as the fidelity term [24], such that the image will not be distorted due to excessive denoising. The fidelity term can be expressed as:
$P_2(N) = \| I - N \|_2^2$
Or:
$P_2(N) = \left( \| I - N \|_2^2 \right)^{1/2}$
$I - N$ is essentially the information components $D$. Neither of the above fidelity terms takes the properties of the information components $D$ or the noise components $N$ into account. As shown in Figure 2, there are obvious stripes in the horizontal gradient domains of the original image and the stripe noise components, while the horizontal gradient domain of the information components is relatively smooth. Thus, the latter property is adopted as the fidelity term. In other words, the horizontal gradient of the information components is depicted as the L1-norm of the horizontal gradient difference between the original image and the noise components:
$P_2(N) = \| d_x \otimes I - d_x \otimes N \|_1$
where $d_x$ is the convolutional gradient operator in the horizontal direction. In addition, Figure 3b shows that the L1-norm of each row of the information components takes up a small proportion of the L1-norm of the horizontal gradient of the original image. Therefore, this regular term can ensure smoothness in the horizontal direction. However, the vertical edges could be over-smoothed during the optimization, causing the loss of edge information. To prevent this problem, the image edges are recognized in advance, edge pixels are assigned a small weight, and non-edge pixels are assigned a large weight. Then, Formula (7) can be modified as:
$P_2(N) = \left\| \Omega_{edge} \left( d_x \otimes I - d_x \otimes N \right) \right\|_1$
where Ω edge assigns different weights to the edge pixels in the original image:
$\Omega_{edge}(i,j) = \begin{cases} \alpha, & \text{if } (i,j) \text{ is an edge pixel} \\ 1, & \text{otherwise} \end{cases}$
where α is a constant. Additionally, the method for obtaining the value of α is introduced in Formula (34).
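The regular and fidelity terms above can be sketched numerically. The finite-difference kernels for $d_y$ and $d_x$ below (with circular boundary handling) are assumptions, since the text does not specify the exact convolution kernels:

```python
import numpy as np

def d_y(X):
    # Assumed vertical gradient: each row minus the row above (circular boundary).
    return X - np.roll(X, 1, axis=0)

def d_x(X):
    # Assumed horizontal gradient: each column minus the column to its left.
    return X - np.roll(X, 1, axis=1)

def P1(N):
    # Regular term (6): gradient sparsity of the stripes along the stripe direction.
    return np.abs(d_y(N)).sum()

def P2(N, I, omega_edge=1.0):
    # Fidelity term (8): weighted horizontal-gradient sparsity of the information components.
    return np.abs(omega_edge * (d_x(I) - d_x(N))).sum()

def P3(N):
    # Regular term (11): global sparsity of the stripe components.
    return np.abs(N).sum()
```

For vertical stripes, which are constant along each column, P1 is exactly zero, which is the directional property that drives the separation from the information components.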
(2) Structuredness
As shown in Figure 2c, the stripe noises in the infrared image exist in columns. The pixels in the stripe-free areas equal zero. Hence, stripe noise components can be regarded as a sparse matrix, represented by the L0-norm:
$P_3(N) = \| N \|_0$
Similar to the sparse matrix in the vertical gradient domain described above, the L0-norm is nonconvex. Thus, this regular term can be expressed by the convex L1-norm:
$P_3(N) = \| N \|_1$
As shown in Figure 3a, the L1-norm of each column of the noise components takes up a very small proportion of the L1-norm of the vertical gradient of the original image, and is able to realize the sparsity constraint.

2.3. AEPO Experiment

Using Formula (8) in Section 2.2, the fidelity term of the information components was obtained by examining the sparsity of the gradient domain for the actual information components of the target infrared image, which is perpendicular to the stripe direction. However, this fidelity term actually realizes the constraining effect through a smoothing operation vertically to the stripe direction. During optimization, the edge pixels are easily over-smoothed perpendicular to the stripe direction, causing loss of information. To overcome the problem, this paper presents an AEPO based on edge contrast. The operator aims to adaptively adjust the weights of edge pixels through the optimization process, and aims to prevent the information loss induced by over-smoothing of these pixels.
Experiments were conducted to summarize the relationship between the value of the AEPO and the contrast of edge pixels. The denoising effect of the AEPO at different edge contrasts was measured by structural similarity (SSIM), which is a full-reference evaluation index [25,26].
Figure 4 shows the details of these experiments. Figure 4a presents a reference image without any stripe noise; Figure 4b provides an image containing random stripe noises; Figure 4c displays the test image that combines the reference image with the noises. To fully verify the relationship between the AEPO value and edge contrast, 30 edge pixels were selected as objects from the original image, and the mean SSIM of these pixels was solved after manually adjusting the edge contrast. On this basis, the authors discussed how to optimize the AEPO. In Figure 4d, the different colored curves reflect the influence of the AEPO value on the SSIM at different contrasts of edge pixels. The experimental results clearly demonstrate that when the edge contrast remained constant, there was an optimal value of the AEPO leading to the best denoising effect of the edge pixels. With the decline in edge contrast, the value of the optimal AEPO decreased.

3. Methodology

Based on the properties of stripe noises and regular terms identified in the preceding section, this section finalizes the stripe noise removal model for infrared images, and details how to solve the model through the ADMM. During the separation of stripe noises, our model obtains an AEPO based on the edge contrast. In this way, the noise components are extracted without sacrificing edge information.

3.1. Model

The above analysis reveals large differences between the noise components and the information components of infrared images in terms of structure and directionality. The three terms P 1 ( N ) , P 2 ( N ) , and P 3 ( N ) can be combined to obtain the final stripe noise optimization model:
$N = \arg\min_{N} \; \lambda_1 \| d_y \otimes N \|_1 + \lambda_2 \| N \|_1 + \lambda_3 \left\| \Omega_{edge} \left( d_x \otimes I - d_x \otimes N \right) \right\|_1$
where, λ 1 , λ 2 , and λ 3 are used to balance the different regular terms. Firstly, the stripe noise components N that minimize Formula (12) are solved. Then, the denoised information components can be estimated through the transform below:
D = I N

3.2. ADMM Optimization

Taking derivatives is the most direct way to optimize the convex model above. Nevertheless, the regular terms of the regularization model (12), which are based on the L1-norm, are not continuously differentiable, making direct derivation difficult. As a popular machine learning tool, the ADMM provides an effective way to handle L1-norm regular terms. In essence, the algorithm splits the unconstrained problem (12) into blocks of variables that are optimized alternately. The specific solving process is explained below.
For the three regular terms, three auxiliary variables are introduced to substitute these regular terms, namely, G = d y N ,   T = N , and U = d x I d x N . Then, the minimization of Formula (12) is equivalent to:
$\arg\min_{N, G, T, U} \; \lambda_1 \| G \|_1 + \lambda_2 \| T \|_1 + \lambda_3 \| \Omega_{edge} U \|_1 \quad \text{s.t.} \quad G = d_y \otimes N, \; T = N, \; U = d_x \otimes I - d_x \otimes N$
The convex optimization problem (14) may be further converted into an augmented Lagrangian function:
$\arg\min_{N, G, T, U} \; \lambda_1 \| G \|_1 + \lambda_2 \| T \|_1 + \lambda_3 \| \Omega_{edge} U \|_1 + m_1^T \left( d_y \otimes N - G \right) + m_2^T \left( N - T \right) + m_3^T \left( d_x \otimes I - d_x \otimes N - U \right) + \frac{\rho_1}{2} \| d_y \otimes N - G \|_2^2 + \frac{\rho_2}{2} \| N - T \|_2^2 + \frac{\rho_3}{2} \| d_x \otimes I - d_x \otimes N - U \|_2^2$
where, m 1 , m 2 , and m 3 are the Lagrange multipliers of the three constraints, respectively; ρ 1 , ρ 2 , and ρ 3 are three penalties. Then, Formula (15) can be converted into four sub-items for iterative solution:
a. G problem
$G = \arg\min_{G} \; \lambda_1 \| G \|_1 + m_1^T \left( d_y \otimes N - G \right) + \frac{\rho_1}{2} \| d_y \otimes N - G \|_2^2$
According to Formula (12) in Reference [27], when solving for the X that minimizes:
$\arg\min_{X} \; \| X - B \|_2^2 + 2 \lambda \| X \|_1$
It can be directly obtained that:
$X = \mathrm{soft}(B, \lambda) = \mathrm{sign}(B) \max \left( |B| - \lambda, 0 \right)$
Thus, Formula (16) can be converted into:
$G = \arg\min_{G} \; \lambda_1 \| G \|_1 + \frac{\rho_1}{2} \left\| d_y \otimes N - G + \frac{m_1}{\rho_1} \right\|_2^2$
Following the solution principle of Formula (17), it can be solved by:
$G^{k+1} = \mathrm{soft} \left( d_y \otimes N^{k} + \frac{m_1^{k}}{\rho_1}, \; \frac{\lambda_1}{\rho_1} \right)$
where, k is the number of iterations.
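The closed-form soft-thresholding operator of Formula (18), which also yields the T and U updates below, can be written as a one-liner:

```python
import numpy as np

def soft(B, lam):
    """Soft-thresholding: element-wise minimizer of ||X - B||_2^2 + 2*lam*||X||_1."""
    return np.sign(B) * np.maximum(np.abs(B) - lam, 0.0)
```

For example, the G update of Formula (20) then reads `G = soft(d_y conv N + m1 / rho1, lam1 / rho1)`.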
b. T problem
$T = \arg\min_{T} \; \lambda_2 \| T \|_1 + m_2^T \left( N - T \right) + \frac{\rho_2}{2} \| N - T \|_2^2$
Similar to the G problem, it can be solved that:
$T^{k+1} = \mathrm{soft} \left( N^{k} + \frac{m_2^{k}}{\rho_2}, \; \frac{\lambda_2}{\rho_2} \right)$
c. U problem
$U = \arg\min_{U} \; \lambda_3 \| \Omega_{edge} U \|_1 + m_3^T \left( d_x \otimes I - d_x \otimes N - U \right) + \frac{\rho_3}{2} \| d_x \otimes I - d_x \otimes N - U \|_2^2$
It can be solved that:
$U^{k+1} = \mathrm{soft} \left( d_x \otimes I - d_x \otimes N^{k} + \frac{m_3^{k}}{\rho_3}, \; \frac{\lambda_3 \Omega_{edge}}{\rho_3} \right)$
d. N problem:
$N = \arg\min_{N} \; m_1^T \left( d_y \otimes N - G \right) + m_2^T \left( N - T \right) + m_3^T \left( d_x \otimes I - d_x \otimes N - U \right) + \frac{\rho_1}{2} \| d_y \otimes N - G \|_2^2 + \frac{\rho_2}{2} \| N - T \|_2^2 + \frac{\rho_3}{2} \| d_x \otimes I - d_x \otimes N - U \|_2^2$
Formula (25) can be simplified as:
$N = \arg\min_{N} \; \frac{\rho_1}{2} \left\| d_y \otimes N - G + \frac{m_1}{\rho_1} \right\|_2^2 + \frac{\rho_2}{2} \left\| N - T + \frac{m_2}{\rho_2} \right\|_2^2 + \frac{\rho_3}{2} \left\| d_x \otimes I - d_x \otimes N - U + \frac{m_3}{\rho_3} \right\|_2^2$
This is a differentiable quadratic optimization problem. Setting the derivative of Formula (26) with respect to N to zero yields the following linear system:
$\rho_1 d_y^T \otimes d_y \otimes N^{k+1} + \rho_2 N^{k+1} + \rho_3 d_x^T \otimes d_x \otimes N^{k+1} = \rho_1 d_y^T \otimes \left( G^{k+1} - \frac{m_1}{\rho_1} \right) + \rho_2 \left( T^{k+1} - \frac{m_2}{\rho_2} \right) + \rho_3 d_x^T \otimes \left( d_x \otimes I - U^{k+1} + \frac{m_3}{\rho_3} \right)$
where ⊗ denotes convolution. It is very difficult to solve a formula involving convolution. This paper introduces the Fourier transform to convert the convolution in the time domain into multiplication in the frequency domain:
$\left( \rho_1 \mathcal{F}(d_y^T) .* \mathcal{F}(d_y) + \rho_2 + \rho_3 \mathcal{F}(d_x^T) .* \mathcal{F}(d_x) \right) .* \mathcal{F}(N^{k+1}) = \rho_1 \mathcal{F}(d_y^T) .* \mathcal{F} \left( G^{k+1} - \frac{m_1}{\rho_1} \right) + \rho_2 \mathcal{F} \left( T^{k+1} - \frac{m_2}{\rho_2} \right) + \rho_3 \mathcal{F}(d_x^T) .* \mathcal{F} \left( d_x \otimes I - U^{k+1} + \frac{m_3}{\rho_3} \right)$
By element-wise (point) division of the matrices, we obtain:
$\mathcal{F}(N^{k+1}) = \left[ \rho_1 \mathcal{F}(d_y^T) .* \mathcal{F} \left( G^{k+1} - \frac{m_1}{\rho_1} \right) + \rho_2 \mathcal{F} \left( T^{k+1} - \frac{m_2}{\rho_2} \right) + \rho_3 \mathcal{F}(d_x^T) .* \mathcal{F} \left( d_x \otimes I - U^{k+1} + \frac{m_3}{\rho_3} \right) \right] ./ \left[ \rho_1 \mathcal{F}(d_y^T) .* \mathcal{F}(d_y) + \rho_2 + \rho_3 \mathcal{F}(d_x^T) .* \mathcal{F}(d_x) \right]$
Then, the inverse Fourier transform of Formula (29) was implemented to obtain the expression for stripe noises N :
$N^{k+1} = \mathcal{F}^{-1} \left( \left[ \rho_1 \mathcal{F}(d_y^T) .* \mathcal{F} \left( G^{k+1} - \frac{m_1}{\rho_1} \right) + \rho_2 \mathcal{F} \left( T^{k+1} - \frac{m_2}{\rho_2} \right) + \rho_3 \mathcal{F}(d_x^T) .* \mathcal{F} \left( d_x \otimes I - U^{k+1} + \frac{m_3}{\rho_3} \right) \right] ./ \left[ \rho_1 \mathcal{F}(d_y^T) .* \mathcal{F}(d_y) + \rho_2 + \rho_3 \mathcal{F}(d_x^T) .* \mathcal{F}(d_x) \right] \right)$
where .* is the point (element-wise) multiplication of two matrices; ./ is the point division of two matrices; $\mathcal{F}(\cdot)$ is the Fourier transform; $\mathcal{F}^{-1}(\cdot)$ is the inverse Fourier transform. Note that the complete convolution of a matrix will change its size, so normalization is necessary during the computation. After each iteration, the Lagrange multipliers must be updated by [28]:
$m_1^{k+1} = m_1^{k} + \rho_1 \left( d_y \otimes N^{k+1} - G^{k+1} \right)$
$m_2^{k+1} = m_2^{k} + \rho_2 \left( N^{k+1} - T^{k+1} \right)$
$m_3^{k+1} = m_3^{k} + \rho_3 \left( d_x \otimes I - d_x \otimes N^{k+1} - U^{k+1} \right)$
Finally, the noise components N k + 1 of the original image were obtained, and the final denoised image was derived through D = I N k + 1 .
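The ADMM loop of this section can be sketched as follows. This is a minimal reconstruction, not the authors' code: the gradient kernels, the periodic (FFT) boundary handling, a single shared penalty `rho`, and the fixed iteration count are all simplifying assumptions.

```python
import numpy as np

def psf2otf(kernel, shape):
    # Zero-pad a small kernel to the image size and take its FFT, so that
    # frequency-domain multiplication realizes circular convolution.
    pad = np.zeros(shape)
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.fft.fft2(pad)

def soft(B, lam):
    # Soft-thresholding operator of Formula (18).
    return np.sign(B) * np.maximum(np.abs(B) - lam, 0.0)

def destripe(I, lam1=1.0, lam2=0.7, lam3=1.2, rho=0.15, iters=50, omega=1.0):
    """ADMM sketch for model (12); omega is the edge weight map (1 = no AEPO)."""
    ky = np.array([[1.0], [-1.0]])   # assumed d_y kernel
    kx = np.array([[1.0, -1.0]])     # assumed d_x kernel
    FDy, FDx = psf2otf(ky, I.shape), psf2otf(kx, I.shape)
    conv = lambda F_k, X: np.real(np.fft.ifft2(F_k * np.fft.fft2(X)))
    dy  = lambda X: conv(FDy, X)            # convolution with d_y
    dx  = lambda X: conv(FDx, X)            # convolution with d_x
    dyT = lambda X: conv(np.conj(FDy), X)   # adjoint of d_y
    dxT = lambda X: conv(np.conj(FDx), X)   # adjoint of d_x

    N = np.zeros_like(I, dtype=float)
    m1, m2, m3 = (np.zeros_like(N) for _ in range(3))
    # Frequency-domain denominator of Formula (29), with rho1 = rho2 = rho3 = rho.
    denom = rho * (np.abs(FDy) ** 2 + 1.0 + np.abs(FDx) ** 2)
    dxI = dx(I)
    for _ in range(iters):
        G = soft(dy(N) + m1 / rho, lam1 / rho)                  # Formula (20)
        T = soft(N + m2 / rho, lam2 / rho)                      # Formula (22)
        U = soft(dxI - dx(N) + m3 / rho, lam3 * omega / rho)    # Formula (24)
        rhs = rho * (dyT(G - m1 / rho) + (T - m2 / rho)
                     + dxT(dxI - U + m3 / rho))                 # RHS of Formula (27)
        N = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))     # Formula (30)
        m1 += rho * (dy(N) - G)                                 # Formula (31)
        m2 += rho * (N - T)
        m3 += rho * (dxI - dx(N) - U)
    return N  # estimated stripes; the denoised image is I - N
```

The denominator is strictly positive (its smallest entry is `rho`), so the frequency-domain division is always well defined.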

3.3. AEPO

According to the experiments in Section 2.3, we learn that the selection of the AEPO has a large impact on the effectiveness of the algorithm for stripe noise removal. Figure 4a–d show that, to optimize the denoising effect, the AEPO value must increase with the edge contrast.
To quantify the contrast of edge pixels, a formula was defined for the normalized edge contrast:
$C_{edge}(i,j) = \frac{\left| I(i,j) - E(NBH(i,j)) \right|}{2^n} \times 100\%$
where $C_{edge}(i,j)$ is the edge contrast of the pixel in row i and column j of the image; $NBH(i,j)$ denotes the neighboring pixels of pixel $(i,j)$ in the direction perpendicular to the stripe noises; $E(\cdot)$ is the averaging operation; n is the bit depth of the image. Additionally, $E(NBH(i,j))$ can be obtained as follows:
$E(NBH(i,j)) = \left[ I(i, j+1) + I(i, j-1) \right] / 2$
Through the above experiments, the AEPO at different edge contrasts can be optimized by:
$\Omega_{edge}(i,j) = \beta \left( e^{C_{edge}} - e^{-1} \right) + \theta$
where β and θ are normalization parameters and e is the base of the natural logarithm.
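The edge contrast of Formulas (32) and (33) can be sketched as below; the default bit depth of 14 and the circular handling of the boundary columns are assumptions, not specified in the text.

```python
import numpy as np

def edge_contrast(I, n_bits=14):
    # E(NBH(i, j)) of Formula (33): mean of the two horizontal neighbors,
    # i.e., the neighbors perpendicular to the vertical stripes.
    nbh = (np.roll(I, -1, axis=1) + np.roll(I, 1, axis=1)) / 2.0
    # Normalized edge contrast of Formula (32), in percent.
    return np.abs(I - nbh) / (2.0 ** n_bits) * 100.0
```

A flat region yields zero contrast, so only genuine edges receive a reduced smoothing weight through the AEPO.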

4. Experimental Results

In our experiments, our approach was compared with three state-of-the-art methods on three different image datasets. The contrastive methods are multi-scale guided filter (MSGF) for stripe noises [7], wavelet transform coupled with gradient equalization (WAGE) for stripe noises [11], and total variation coupled with guided filter (TVGF) for stripe noises [14]. To further demonstrate the effectiveness of the proposed method, an ablation experiment, that is, the non-AEPO method, was added to the comparison experiments. All the image data were shot using a LUSTER TB640-CL cooled medium-wave infrared camera at a resolution of 640 × 512. All the experiments were run on MATLAB (R2020b), using a computer with 8 GB RAM and an AMD Ryzen 7 2700X Eight-Core Processor @ 3.70 GHz.
The experimental data were evaluated both subjectively and objectively. The subjective evaluation targets the edge details and denoising degree of the denoised image. For the experiments on real data, since there is no true image to use as a reference, we select the no-reference evaluation metrics of noise reduction (NR) [29,30], mean relative deviation (MRD) [30,31], and image distortion (ID) [32,33]. The definition of NR is shown in Formula (35), which reflects the overall performance of the denoised image. The definition of MRD is shown in Formula (36), which mirrors the ability to preserve image information in stripe-free areas. The definition of ID is shown in Formula (37), which demonstrates the degree of distortion for the denoised image. The denoising effect is positively correlated with the NR and the ID, and negatively with the MRD.
$NR = N_0 / N_1, \qquad N = \sum_{i=0}^{k} \mathrm{mean} \left( P(u_i) \right)$
where N 0 and N 1 stand for the value of N in the original and de-striped images, respectively. u i is the frequency component produced by stripes. N is the total power of stripes’ noise in the mean power spectrum.
$MRD = \frac{1}{MN} \sum_{i=1}^{MN} \frac{\left| z_i - g_i \right|}{g_i} \times 100\%$
where g i and z i are the pixel values of point i in the original image and the image after stripe noise removal, respectively. Additionally, M N represents the number of all pixels in the selected area.
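The MRD of Formula (36) can be sketched as below; the small `eps` guard against zero-valued pixels is an added assumption for numerical safety.

```python
import numpy as np

def mrd(g, z, eps=1e-12):
    # Mean relative deviation between the original image g and the destriped image z,
    # averaged over all pixels of the selected (stripe-free) area, in percent.
    g = np.asarray(g, dtype=float)
    z = np.asarray(z, dtype=float)
    return np.mean(np.abs(z - g) / (np.abs(g) + eps)) * 100.0
```

A lower MRD indicates that the stripe-free areas were better preserved by the denoising.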
$ID = S_1 / S_0, \qquad S = \sum_{j=1}^{N_1} \mathrm{mean} \left( P(u_j) \right)$
where S 0 and S 1 stand for the value of S in original image and the de-striped image, respectively. u j is the frequency component caused by the raw image without stripes. S stands for the total power of the clean image in the mean power spectrum.

4.1. Parameter Analysis

Taking Figure 4 as an example, the three regular terms were subjected to a sensitivity analysis, with the aim to verify the importance of the key parameters to our approach. For such reference experiments, a full-reference evaluation metric better reflects the denoising performance of the algorithm. As a representative full-reference evaluation index, the peak signal-to-noise ratio (PSNR) is widely used to determine parameters because it reflects the denoising effect of the image. Thus, we selected the PSNR metric to evaluate this experiment and prove the validity of the parameter selection. Firstly, λ 1 was empirically set to 1. Figure 5 shows the relationship between PSNR and regular terms λ 2 and λ 3 [34]. The results in Figure 5 prove that the selected λ 2 and λ 3 indeed affect the denoising performance. Similar results were obtained in other experiments. According to the experimental findings, it was determined that λ 1 = 1 , λ 2 = 0.7 and λ 3 = 1.2 . As for penalties, their values were empirically set to ρ 1 = ρ 2 = ρ 3 = 0.15 . In Formula (34), the optimal values of β and θ fall in [ 0.15 , 0.20 ] and [ 0.4 , 0.5 ] , respectively. In our experiments, the two parameters were empirically set to β = 0.18 and θ = 0.46 , respectively.

4.2. Experimental Contents

In order to prove the universality and effectiveness of the approach presented in this paper, four infrared images of different scenes were selected as experimental subjects. The first image, shown in Figure 6a, contains a person with a large gray difference from the background, and an object with vertical edges that has a small gray difference from the background. The second image, shown in Figure 6b, contains several buildings against the sky with a small difference in grayscale values from the stripe noises, as well as a large number of small vertical features. The third image, shown in Figure 6c, contains a single building with features that do not have a distinct vertical texture. The fourth image, shown in Figure 6d, mainly consists of complex buildings; apart from the normal buildings, clouds and micro-objects such as tower cranes can be observed in the original image. These four images contain stripe noises with different grayscale differences, tiny vertical edge features, and various texture characteristics, so the results of our approach can be fully illustrated.

4.2.1. Ablation Experiments

The ablation experiments for our proposed method are shown in this section. As shown in Figure 7a,b, the non-AEPO method and our approach both display excellent denoising performance. However, Figure 7a shows that there is a significant blurring of the vertical edge characteristics (encircled by red dotted lines). Figure 7c,d compare the mean power spectral densities (MPSDs) of all rows in the images denoised by different methods with the MPSD of all rows in the original image, where the abscissa is the normalized frequency, and the ordinate is the MPSD of all rows. It is observed that the curve in Figure 7c is slightly smoother than that in Figure 7d (encircled by red dotted lines). This phenomenon verifies the difference between Figure 7a,b.
As shown in Figure 8a,b, the non-AEPO method and our approach both have good denoising performance. However, Figure 8a shows a significant blurring of the vertical edge characteristics on the buildings (encircled by red dotted lines). It is observed that the curve in Figure 8c is slightly smoother than that in Figure 8d (encircled by red dotted lines). This phenomenon verifies the difference between Figure 8a,b.
As shown in Figure 9a,b, the non-AEPO method and our approach both effectively remove the stripe noises from the original image. However, we can see in Figure 9a that there is significant blurring of the vertical edge characteristics on the top of the building (encircled by red dotted lines). It is observed that the curve in Figure 9c is slightly smoother than that in Figure 9d (encircled by red dotted lines). This phenomenon is consistent with the performance of the algorithms in Figure 9a,b.
As shown in Figure 10a,b, the non-AEPO method and our approach both have excellent performance in removing stripe noise. However, we can see in Figure 10a that there is significant blurring of the vertical edge characteristics on the building (encircled by red dotted lines). It is observed that the curve in Figure 10c is slightly smoother than that in Figure 10d (encircled by red dotted lines). This is consistent with the phenomena reflected in Figure 10a,b.

4.2.2. Comparison Experiments

To verify the effectiveness of our approach, several experiments were performed on images containing stripe noises. The first experiment is reported in Figure 11. As shown in Figure 11a, the MSGF achieved a relatively poor denoising effect, and could not effectively identify and remove the irregular stripe noises with a small cross-directional gradient variation. As shown in Figure 11b, the WAGE method still leaves a small number of stripe noises unremoved. As shown in Figure 11c, the TVGF over-smoothed the original image in the horizontal direction. As a result, the edges of the person and the vertical features of the object were very obscure. As shown in Figure 11d, our approach effectively removed irregular stripe noises, while preserving the information of edge textures as much as possible. Figure 11e–h compare the mean power spectral densities (MPSDs) of all rows in the images denoised by different methods with the MPSD of all rows in the original image, where the abscissa is the normalized frequency, and the ordinate is the MPSD of all rows. Figure 11e,f clearly show small pulses at the locations of large pulses in the original image, indicating that a few stripe noises were not removed. The MPSDs in Figure 11g,h performed well, which is consistent with the performance shown in Figure 11a–d. The comparison fully demonstrates the superiority of our approach in the removal of stripe noises.
The second experiment is reported in Figure 12. The original image is an infrared image containing several buildings against the sky. The objects in the image have a small gray difference from the background, making it difficult to differentiate between building edges and stripe noises through frequency domain filtering. As shown in Figure 12a,b, MSGF and the WAGE displayed poor denoising effects, despite preserving the edges, and left clear stripe noises in the background. As shown in Figure 12c, TVGF caused a loss of information due to over-smoothing of building edge pixels. As shown in Figure 12d, our approach eliminated stripe noises and retained the edges and details of the buildings. The MPSDs in Figure 12e–h were consistent with the performance of our approach. The above results further demonstrate the superiority of our approach.
The third experiment is reported in Figure 13. The original image depicts a single building with a vertical structure. As shown in Figure 13a,b, MSGF and WAGE had the worst performance among all methods: the image edges and details were preserved, but the noises were not effectively removed. As shown in Figure 13c, TVGF led to clear attenuation of edge features, despite its good denoising effect. As shown in Figure 13d, our approach preserved the edge information of the image while effectively removing the stripe noises. The MPSDs in Figure 13e–h are consistent with these observations. The above results provide further evidence for the superiority of our approach.
The last experiment is reported in Figure 14. As shown in Figure 14a,b, MSGF and WAGE were outperformed by the other methods, as they failed to remove some stripe noises. As shown in Figure 14c, the edges in the image denoised by TVGF are blurred to varying degrees (encircled by red dotted lines). As shown in Figure 14d, our approach again removed the stripe noises while preserving the edges. The MPSDs in Figure 14e–h are consistent with these observations. The above results fully reflect the superiority of our approach.
Table 1 compares the NR, MRD, and ID values of the images denoised by each method. The optimal value of each metric is shown in bold.
As shown in Table 1, our approach achieved the best NR on every image and the lowest MRD on two of the four images, and these two metrics mirror the ability to remove stripe noises. Judging by the ID, our approach also achieved fairly good results. Some methods obtained a high ID simply because they failed to fully remove the noises, leaving the original image, stripes included, largely unchanged. Subjective observation of Figure 11, Figure 12, Figure 13 and Figure 14 confirms that the images denoised by our approach were not greatly distorted, preserved edge details, and had ID values close to 1. These results attest that our approach excels at the removal of stripe noises.
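For reference, two of the metrics discussed above can be sketched roughly as follows. Both definitions are common formulations from the destriping literature and are assumptions here; the paper's exact formulas for NR, MRD, and ID may differ, and ID is omitted because its definition varies more across papers. The toy data at the end is hypothetical.

```python
import numpy as np

def mean_relative_deviation(original, denoised, eps=1e-12):
    """MRD (%): mean of |denoised - original| / |original| over all pixels.

    Lower values indicate less alteration of the scene content.
    (Assumed definition; the paper's may differ.)
    """
    x = np.asarray(original, dtype=np.float64)
    y = np.asarray(denoised, dtype=np.float64)
    return 100.0 * float(np.mean(np.abs(y - x) / (np.abs(x) + eps)))

def noise_reduction(original, denoised, stripe_band):
    """NR: stripe-band power before destriping divided by the power after.

    `stripe_band` is a boolean mask over the bins of the mean row power
    spectrum attributed to stripes; a larger NR means more stripe energy
    was removed. (Assumed definition; the paper's may differ.)
    """
    def band_power(img):
        arr = np.asarray(img, dtype=np.float64)
        rows = arr - arr.mean(axis=1, keepdims=True)
        psd = (np.abs(np.fft.rfft(rows, axis=1)) ** 2).mean(axis=0)
        return float(psd[stripe_band].sum())
    return band_power(original) / band_power(denoised)

# Hypothetical data: synthetic stripes at normalized frequency 0.25 that a
# perfect destriper removes entirely.
h, w = 32, 128
base = np.tile(np.linspace(1.0, 2.0, w), (h, 1))
striped = base + 0.3 * np.sin(2.0 * np.pi * 0.25 * np.arange(w))
band = np.zeros(w // 2 + 1, dtype=bool)
band[32] = True                                   # bin for frequency 0.25
nr = noise_reduction(striped, base, band)
mrd = mean_relative_deviation(striped, base)
```

Under these conventions, a method that removes more stripe energy raises NR, while a method that distorts the underlying scene raises MRD.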

5. Conclusions

This paper proposes a stripe noise removal model for infrared images based on L1-norm sparse representation and the AEPO. The proposed model fully exploits the directional, cross-directional, and structural differences between stripe noises and the other components of infrared images, and the L1-norm describes the sparsity of these components well. By focusing on edge pixels, the AEPO reasonably separates and removes stripe noises while excellently preserving the edge information of the original image. The classic ADMM algorithm was introduced to solve the proposed model. Finally, the superiority of our approach was demonstrated through numerous experiments. Nevertheless, open problems remain in the field of stripe noise removal: our method handles diagonal or heavy stripe noises less well, and in future work we will focus on removing such noises from infrared images.
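The ADMM pattern mentioned above (split the L1 term out, solve a quadratic subproblem, shrink, then update the dual variable) can be illustrated on a minimal stand-in problem, 1-D total-variation denoising, rather than the paper's full destriping model, which is not reproduced here.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_tv1d(b, lam, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||x - b||^2 + lam*||D x||_1 (1-D total variation).

    Generic sketch of the split / shrink / dual-update structure that ADMM
    solvers for L1-regularized destriping models also follow; it is not
    the paper's actual model.
    """
    n = len(b)
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator
    A = np.eye(n) + rho * D.T @ D             # x-update system matrix
    x = b.copy()
    z = D @ x
    u = np.zeros(n - 1)
    for _ in range(n_iter):
        x = np.linalg.solve(A, b + rho * D.T @ (z - u))   # quadratic subproblem
        z = soft_threshold(D @ x + u, lam / rho)          # L1 proximal step
        u = u + D @ x - z                                 # dual ascent
    return x

# Toy usage (hypothetical data): a noisy step signal is flattened into
# near-piecewise-constant segments while the step itself is preserved.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)])
noisy = signal + 0.1 * rng.standard_normal(100)
recovered = admm_tv1d(noisy, lam=1.0)
```

The same shrinkage step is what enforces the gradient-sparsity regular terms in destriping models; only the linear subproblem changes with the model.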

Author Contributions

Methodology, M.L.; Software, S.N.; Writing—original draft preparation, M.L., T.N., and L.H.; Writing—review and editing, C.H. and L.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62105328).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Due to the nature of this research, participants of this study did not agree for their data to be shared publicly, so supporting data is not available.

Acknowledgments

We thank the anonymous reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Block diagram of the proposed approach.
Figure 2. Difference between information components and stripe noises: (a) Original image; (b) Information components; (c) Noise components; (d) Vertical gradient of noise components; (e) Horizontal gradient of noise components; (f) Vertical gradient of information components; (g) Horizontal gradient of information components.
Figure 3. Difference between proportion of the L1-norm for information components and stripe noises: (a) Proportion of the L1-norm for each column of noise components in the L1-norm for the vertical gradient of the original image; (b) Proportion of the L1-norm for each row of information components in the L1-norm for the horizontal gradient of the original image.
Figure 4. Experiments on the AEPO: (a) reference image; (b) noisy image; (c) test image; (d) correlation of SSIM with AEPO and edge contrast.
Figure 5. Influence of regular terms on the PSNR: (a) relationship between λ2 and PSNR; (b) relationship between λ3 and PSNR.
Figure 6. Experimental images: (a) a person; (b) buildings against the sky; (c) a single building; (d) complex buildings.
Figure 7. Denoising effects of ablation experiment on an image of a person: (a) non-AEPO; (b) our approach; (c) non-AEPO; (d) our approach.
Figure 8. Denoising effects of ablation experiment on an image of buildings against the sky: (a) non-AEPO; (b) our approach; (c) non-AEPO; (d) our approach.
Figure 9. Denoising effects of ablation experiment on an image of a single building: (a) non-AEPO; (b) our approach; (c) non-AEPO; (d) our approach.
Figure 10. Denoising effects of ablation experiment on an image of complex buildings: (a) non-AEPO; (b) our approach; (c) non-AEPO; (d) our approach.
Figure 11. Denoising effects of different methods on an image of a person: (a) MSGF; (b) WAGE; (c) TVGF; (d) our approach; (e) MSGF; (f) WAGE; (g) TVGF; (h) our approach.
Figure 12. Denoising effects of different methods on an image of buildings against the sky: (a) MSGF; (b) WAGE; (c) TVGF; (d) our approach; (e) MSGF; (f) WAGE; (g) TVGF; (h) our approach.
Figure 13. Denoising effects of different methods on an image of a single building: (a) MSGF; (b) WAGE; (c) TVGF; (d) our approach; (e) MSGF; (f) WAGE; (g) TVGF; (h) our approach.
Figure 14. Denoising effects of different methods on an image of complex buildings: (a) MSGF; (b) WAGE; (c) TVGF; (d) our approach; (e) MSGF; (f) WAGE; (g) TVGF; (h) our approach.
Table 1. Metrics of different methods on different images.
Image                      Metric     MSGF    WAGE    TVGF    Non-AEPO   Our Approach
A person                   NR         2.02    2.51    3.58    4.01       4.07
                           MRD (%)    2.95    3.72    4.53    3.52       3.01
                           ID         0.999   0.992   0.975   0.980      0.989
Buildings against the sky  NR         2.29    2.79    3.45    3.91       3.96
                           MRD (%)    3.94    4.36    5.47    4.45       4.11
                           ID         0.999   0.991   0.971   0.978      0.986
A single building          NR         3.31    3.36    3.41    3.48       3.53
                           MRD (%)    2.74    2.56    3.40    2.96       2.47
                           ID         0.999   0.993   0.977   0.985      0.991
Complex buildings          NR         3.05    3.16    3.40    3.36       3.42
                           MRD (%)    3.08    2.47    4.10    3.54       2.21
                           ID         0.999   0.993   0.980   0.988      0.994
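To make the reading rules for Table 1 concrete (higher NR is better, lower MRD is better, ID closer to 1 is better, with the caveat from the discussion that an ID near 1 can also result from leaving noise in place), here is a small sketch over the "A person" row, with values transcribed from the table:

```python
# Table 1 values for the "A person" image (transcribed from the paper).
table_person = {
    "MSGF":         {"NR": 2.02, "MRD": 2.95, "ID": 0.999},
    "WAGE":         {"NR": 2.51, "MRD": 3.72, "ID": 0.992},
    "TVGF":         {"NR": 3.58, "MRD": 4.53, "ID": 0.975},
    "Non-AEPO":     {"NR": 4.01, "MRD": 3.52, "ID": 0.980},
    "Our Approach": {"NR": 4.07, "MRD": 3.01, "ID": 0.989},
}

best_nr = max(table_person, key=lambda m: table_person[m]["NR"])
best_mrd = min(table_person, key=lambda m: table_person[m]["MRD"])
best_id = min(table_person, key=lambda m: abs(table_person[m]["ID"] - 1.0))
```

Note that MSGF wins MRD and ID on this image precisely because it removes little; NR must be read jointly with the other two metrics.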