Article

A Moiré Removal Method Based on Peak Filtering and Image Enhancement

Wenfa Qi, Xinquan Yu, Xiaolong Li and Shuangyong Kang

1 Wangxuan Institute of Computer Technology, Peking University, Beijing 100871, China
2 School of Computer Science and Engineering, Ministry of Education Key Laboratory of Information Technology, Guangdong Province Key Laboratory of Information Security Technology, Sun Yat-sen University, Guangzhou 510006, China
3 Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
4 Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing 100044, China
5 Beijing Institute of Information Application Technology, Beijing 100044, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(6), 846; https://doi.org/10.3390/math12060846
Submission received: 11 January 2024 / Revised: 1 March 2024 / Accepted: 8 March 2024 / Published: 14 March 2024
(This article belongs to the Special Issue Mathematical Methods Applied in Explainable Fake Multimedia Detection)

Abstract

Screen photos often suffer from moiré patterns, which significantly affect their visual quality. Although many deep learning-based methods for removing moiré patterns have been proposed, they fail to recover images with complex textures and heavy moiré patterns. Here, we focus on text images with heavy moiré patterns and propose a new demoiré approach, incorporating frequency-domain peak filtering and spatial-domain visual quality enhancement. We find that, in the frequency domain, the content of a text image mainly lies in the central region, whereas the moiré pattern lies in the peak regions. Based on this observation, a peak-filtering algorithm and a central region recovery strategy are proposed to accurately locate and remove moiré patterns while preserving the text parts. In addition, to further remove the noisy background and fill in the missing text parts, an image enhancement algorithm utilising the Otsu method is developed. Extensive experimental results show that the proposed method effectively removes severe moiré patterns and achieves better visual quality at a lower time cost than state-of-the-art methods.

1. Introduction

With the development of multimedia technology and the popularity of smart electronic devices, digital cameras and smartphones are being increasingly used. In particular, due to the spread of COVID-19, more people work and study online, and it has become common to utilise these devices to capture the output of digital screens. However, when we take photos of digital screens, moiré patterns frequently appear and resemble water ripples or stripes [1,2]. This significantly affects the visual quality of images. Generally, these moiré patterns are caused by the interference between the pixel layout of digital screens and the CFA (Colour Filter Array) of cameras, which naturally occurs due to the principles of photonics [3].
The presence of moiré patterns severely degrades the visual quality of captured images and affects the effectiveness of subsequent image processing tasks, such as image super-resolution [4,5], image segmentation [6,7,8,9], and face recognition [10,11,12]. In addition, traditional image restoration algorithms, such as image denoising [13,14], mesh removal [15,16], and deblurring [17,18], cannot be effectively and directly applied to the moiré pattern removal task. For this reason, image demoiré techniques have been proposed to remove moiré patterns on images and further improve visual quality [19,20]. Currently, most existing demoiré methods are learning-based approaches [21,22,23,24] that heavily rely on training data. In other words, these methods require training with large annotated image-pair datasets, such as the dataset of [25], in order to obtain favourable performance. However, these datasets are captured with slight moiré patterns under strictly designed conditions. As a result, we have observed that previous methods fail to remove heavy moiré patterns when facing complex textures such as text images.
Existing methods treat the demoiré task as a pixelwise denoising procedure in the RGB domain [1,23,24]. Commonly, a pixel-to-pixel L1 loss is applied between the input moiré and clear image pairs. He et al. [31] proposed a triple-stream network called MopNet (Moiré Pattern Removal Neural Network) that utilises moiré appearance classification and edge information to rebuild the entire input image. Yang et al. [26] built a high-resolution reconstruction model with four branches. Zheng et al. [27] utilised a two-step tone-mapping strategy for colour restoration to fine-tune the colour of each pixel. However, complex textures such as text are also removed during the demoiré procedure, which seriously limits the applicability of these methods in practical scenarios. Furthermore, we contend that these methods neglect the correlation between the different scales or frequencies of the moiré components. In addition, the limited amount of training data also affects their generalisation performance. These limitations in the current state of the art have prompted us to develop a novel, efficient, and stable approach to overcoming moiré patterns.
Specifically, unlike the aforementioned methods that address slight moiré patterns in natural images, this paper focuses on images with complex textures and heavy moiré, e.g., text images. Since it is difficult to obtain intuitive regularity from the spatial domain, we calculate the frequency domain of moiré images using fast Fourier transform (FFT). Then, we analyse the differences between the clear and moiré text images in both the spatial and frequency domains.
Figure 1 shows that the image with moiré contains multiple evenly distributed peaks in the frequency domain, whereas the clean image does not. Based on this important observation, we conjecture that these unusual multiple peaks cause the heavy moiré patterns. To verify our conjecture, we simply remove the other peaks (excluding the single peak in the central region) and then invert the FFT to recover the processed image in the RGB domain.
As shown in Figure 2, the heavy moiré patterns in the background become less pronounced, and the body of the text part is retained after the frequency truncation. This demonstrates two key observations: (1) the moiré pattern lies in the unusual multiple frequency peak regions, excluding the central peak region; and (2) the complex texture, i.e., the text, mainly lies in the central peak region. Based on these conclusions, we formulate our image reconstruction strategy, called peak filtering, to eliminate the peak regions and preserve the central frequency region. After the above operation, although the heavy moiré patterns disappear, the visual quality of the processed image is unsatisfactory. Thus, we propose an image enhancement strategy, consisting of Otsu binarisation, image erosion, and expansion processing, to repair the noisy background and enhance the text part. Based on the above, we achieve the removal of heavy moiré patterns from text images.
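For readers who wish to reproduce this preliminary experiment, the following Python sketch applies a crude version of the frequency truncation (the reference implementation is in MATLAB; see Section 4). Keeping only a central low-frequency window is a simplifying stand-in for removing every peak except the central one, and the file name and window half-size are illustrative assumptions.

```python
import numpy as np
import cv2

# Hypothetical input file; any greyscale screen photo with moiré will do.
img = cv2.imread("moire.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

F = np.fft.fftshift(np.fft.fft2(img))            # centre the low frequencies
mask = np.zeros_like(F)
cy, cx = F.shape[0] // 2, F.shape[1] // 2
r = 30                                           # assumed half-size of the kept window
mask[cy - r:cy + r + 1, cx - r:cx + r + 1] = 1   # keep only the central region

recovered = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
cv2.imwrite("truncated.png", np.clip(recovered, 0, 255).astype(np.uint8))
```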
In summary, the main contributions of this paper are as follows:
  • We propose a peak-filtering algorithm to remove the multiple uniform peaks in the frequency domain, which accurately locates and updates the areas where the moiré peaks lie based on a peak region update strategy.
  • A central region recovery strategy is proposed to better preserve the text of the image, which replaces the central frequency region of the median-filtered spectrum with its Gaussian-filtered counterpart.
  • An image enhancement algorithm based on the Otsu method is proposed to binarise, erode, and expand the processed image after peak filtering, which removes the noisy background and fills in the missing text parts.

2. Related Works

In this section, we review two categories of previous demoiré methods: prior knowledge-based methods and learning-based methods.

2.1. Prior Knowledge-Based Methods

Traditional demoiré methods require prior knowledge of the causes and properties of moiré patterns, e.g., image interpolation and decomposition.
For the former, Kimmel [28] proposed an image interpolation method based on different weights closely related to the edge information in the neighbourhood. In this method, the missing colour component of each pixel was calculated from the colour components of the neighbouring pixels according to these weights. For the latter, Liu et al. [29] proposed a low-rank sparse matrix decomposition model for eliminating moiré in fabric images. In their method, a sparse prior constraint was added to the moiré component, and a position constraint was applied to its frequency-domain distribution to distinguish the moiré component from the texture component. Yang et al. [30] proposed a joint wavelet domain-oriented filtering and multiphase layer decomposition model, based on the premise that moiré patterns and background images are additive and that the moiré stripe structures in the three imaging channels of a screenshot are similar.

2.2. Learning-Based Methods

Currently, deep learning methods are widely used in the field of moiré removal, including CNN (convolutional neural network)-based methods and GAN (generative adversarial network)-based methods.
Sun et al. [25] first attempted to remove moiré patterns using a CNN called DMCNN (Demoiré Multiresolution CNN), which utilises five parallel branches to process the input image at different resolutions. Due to the complex frequency distribution, the amplitude imbalance across colour channels, and the varied appearance properties of moiré patterns, He et al. [31] proposed MopNet for demoiré, consisting of multiscale feature aggregation, an edge predictor, and a moiré pattern classifier. To address deficiencies in multiscale information exchange and fusion, Yang et al. [26] proposed a high-resolution moiré removal network based on HRNet (High-Resolution Network) [32], named HRDN (High-Resolution Demoiré Network), to fully explore the relationship between feature mappings at different resolutions. HRDN can process lower-resolution images while maintaining the full resolution of the inputs and can handle different frequencies at different scales. Zheng et al. [27] extracted features of moiré components in the image frequency domain and proposed MBCNN (Multiscale Bandpass CNN) to perform moiré removal together with colour and texture recovery. Recently, GAN-based demoiré methods [33,34,35] have also been studied; these can be trained without paired (clear and moiré image pair) datasets. Park et al. [35] proposed a demoiré method named EEUCN (End-to-End Unpaired Cyclic Network), with the goal of learning the mapping from moiré images to clean images directly. EEUCN consists of two GANs: a moiré generation GAN that adds moiré artefacts to degrade clean images, and a moiré removal GAN that removes moiré artefact interference from moiré images. In this way, a pseudo-paired set is constructed for training.
Overall, the aforementioned learning-based methods heavily rely on large annotated image-pair datasets and fail to remove heavy moiré patterns when faced with complex textures such as text images. In addition, text with slight colour variations is also removed during the demoiré procedure, which seriously limits their applicability. We contend that they neglect the correlation between the different scales or frequencies of the moiré components.

3. Proposed Method

In this section, our proposed method is described. First, a peak-filtering algorithm is developed for removing moiré patterns from images. Second, an image enhancement algorithm is developed to further improve visual quality. Finally, the implementation details of the proposed method are provided.

3.1. Peak Filtering

To remove moiré patterns from images, we first explore the distribution of images in the frequency domain. Specifically, a given image I is transformed to the frequency domain via the Fourier transform:
F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} I(x, y) \, e^{-j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}   (1)
where I(x, y) represents the pixel value of I at pixel point (x, y), F(u, v) represents its corresponding frequency-domain value, and j represents the imaginary unit. M and N represent the height and width of the image, respectively.
Notably, after the Fourier transform, the low-frequency regions are concentrated in the four corners. To facilitate processing, the spectrum is shifted so that these low-frequency regions are concentrated in the centre of the frequency domain. For conciseness, we continue to use F to denote the frequency domain after this shift.
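The transform and shift can be expressed compactly with numpy's FFT routines; the following sketch is our Python illustration, not the reference MATLAB code, and also shows the log-magnitude commonly plotted in figures such as Figure 3.

```python
import numpy as np

def to_centred_spectrum(img: np.ndarray) -> np.ndarray:
    """Apply the 2D Fourier transform of Equation (1), then shift the
    low-frequency content from the four corners to the centre."""
    F = np.fft.fft2(img.astype(np.float64))
    return np.fft.fftshift(F)

# For visualisation only: the log transform reveals the 'mountainous' shape.
# spectrum_vis = np.log1p(np.abs(to_centred_spectrum(img)))
```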
Figure 3 shows the frequency domain before and after the shift. The shifted frequency domain has a mountainous shape, which facilitates our observations and further processing.
In addition, Figure 1 shows that the image with moiré contains multiple evenly distributed peaks in the frequency domain, whereas this is not the case for the image without moiré. Therefore, it is assumed that these uniformly distributed peak points are caused by moiré patterns. Subsequently, a peak-filtering algorithm is developed to filter out these peak points and thus remove moiré. Its overall process is shown in Figure 4, which mainly consists of the Fourier transform, Gaussian filtering, identification of peak points, peak region filtering, median filtering, central region recovery, and inverse Fourier transform.
Specifically, to prevent noise in the image from interfering with the peak-finding operation, we first apply a Gaussian filter to F to smooth the entire frequency domain. The Gaussian filter output is a weighted average of the values in a local neighbourhood: a template is scanned over every point, and the value at the central point is replaced by the weighted-average value of all points in the neighbourhood determined by the template:
F^1 = F \ast G   (2)
where F^1 represents the frequency-domain value after Gaussian filtering, \ast represents the convolution operation, and G represents the Gaussian filter kernel:
G = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/(2\sigma^2)}.   (3)
Notably, the effect of filtering is closely related to the size of the Gaussian kernel and the standard deviation σ. The larger the Gaussian kernel (or the larger the σ), the smoother the filtered frequency domain, i.e., the more blurred the image. Conversely, if the kernel or σ is too small, the frequency domain remains coarse and the desired smoothing is not achieved.
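As an illustration, the smoothing step can be sketched as follows. Applying the Gaussian to the magnitude spectrum (rather than the complex values) is an assumption of this sketch; with sigma = 1.5 and truncate = 2.0, scipy uses a 7 x 7 kernel support, matching the parameters of Section 3.3.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_spectrum(F_shifted: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Gaussian smoothing (Equations (2) and (3)) applied to the magnitude
    spectrum before peak search; truncate=2.0 yields a 7x7 kernel support."""
    mag = np.abs(F_shifted)
    return gaussian_filter(mag, sigma=sigma, truncate=2.0)
```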
Next, the peak points are found, and then the peak regions are filtered. Since the peak points in the frequency domain are uniformly distributed, we determine the largest peak point in the entire frequency domain and then use it as a reference to find the remaining peak points by setting the minimum interval between two adjacent peaks. Specifically, the region of the peak points is covered and updated by:
F^2_{i,j} = r^1_i \times r^2_j \times \mu   (4)
where \mu represents the mean of the matrix in the region around that peak point, 1 \le i \le m, and 1 \le j \le n. Here, m and n represent the height and width of the actual peak region, respectively, and r^1 and r^2 represent two artificially defined vectors (in MATLAB colon notation):
r^1 = [0.5 : 0.5/m : 1]   (5)
and
r^2 = [0.5 : 0.5/n : 1].   (6)
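A minimal Python sketch of the peak region update of Equations (4)-(6) is given below. It operates on the (real) magnitude spectrum, which is an assumption of this illustration, and np.linspace stands in for the MATLAB colon-notation vectors r^1 and r^2.

```python
import numpy as np

def suppress_peak_region(mag: np.ndarray, py: int, px: int,
                         m: int = 65, n: int = 65) -> None:
    """Overwrite the m x n region around the peak (py, px) with a tapered
    multiple of its local mean, per Equations (4)-(6)."""
    y0, y1 = max(py - m // 2, 0), min(py + m // 2 + 1, mag.shape[0])
    x0, x1 = max(px - n // 2, 0), min(px + n // 2 + 1, mag.shape[1])
    mu = mag[y0:y1, x0:x1].mean()           # mean of the peak's surroundings
    r1 = np.linspace(0.5, 1.0, y1 - y0)     # stand-in for [0.5 : 0.5/m : 1]
    r2 = np.linspace(0.5, 1.0, x1 - x0)     # stand-in for [0.5 : 0.5/n : 1]
    mag[y0:y1, x0:x1] = np.outer(r1, r2) * mu
```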
The median filter is then utilised to make the image smoother to improve the visual quality of the de-peaked image. Specifically, the median filter updates the original frequency-domain values with the median value in its neighbourhood. As with the Gaussian filter, we must also set a reasonable median filter kernel size to achieve favourable performance.
Subsequently, the central region is recovered [36]. Since the frequency domain after the Fourier transform is shifted, the central region of the frequency domain corresponds to the low-frequency region, which contains the textual information of the image. Therefore, the central region of the median filter is replaced with that of the Gaussian filter to better preserve the textual information. Notably, the larger the central region retained, the more complete the information in the image will be. However, this also makes it more likely that the areas of moiré will be retained. Therefore, setting a reasonable central region size is essential to remove moiré and ensure visual quality.
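The median-filtering and central recovery steps can be sketched as follows; again, operating on the magnitude spectrum is an assumption of this illustration, and the 31 x 31 kernel follows the parameters of Section 3.3.

```python
import numpy as np
from scipy.ndimage import median_filter

def recover_centre(gauss_mag: np.ndarray, depeaked_mag: np.ndarray, d: int) -> np.ndarray:
    """Median-filter the de-peaked spectrum (31x31 kernel), then paste back
    the central (d+1) x (d+1) region from the Gaussian-filtered spectrum so
    the low-frequency text content is preserved."""
    out = median_filter(depeaked_mag, size=31)
    cy, cx = out.shape[0] // 2, out.shape[1] // 2
    h = d // 2
    out[cy - h:cy + h + 1, cx - h:cx + h + 1] = \
        gauss_mag[cy - h:cy + h + 1, cx - h:cx + h + 1]
    return out
```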
Finally, the shift and the inverse Fourier transform are performed. Specifically, the data obtained in the previous step are first shifted, and then the peak-filtered image is obtained by performing the inverse Fourier transform with
I(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v) \, e^{\,j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}   (7)
where F(u, v) represents the frequency-domain value of F^2(u, v) after median filtering and recovery of the central region, and I(x, y) represents its corresponding pixel value.
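The inverse step can be sketched as below; recombining the processed magnitude with the original phase before inverting is an assumption of this illustration.

```python
import numpy as np

def spectrum_to_image(mag: np.ndarray, phase: np.ndarray) -> np.ndarray:
    """Undo the centre shift and apply the inverse transform of Equation (7),
    recombining the processed magnitude with the original phase."""
    F = np.fft.ifftshift(mag * np.exp(1j * phase))
    img = np.fft.ifft2(F).real
    return np.clip(img, 0, 255)
```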
These operations achieve the removal of the moiré peaks. Figure 5 illustrates the results of the above process, where (a) represents the original image, (b) represents the frequency domain after the Fourier transform, (c) represents the frequency domain after the shift, (d) represents the frequency domain after Gaussian filtering, (e) represents the frequency domain after filtering the peak region, (f) represents the frequency domain after median filtering, (g) represents the frequency domain after recovering the central region, and (h) represents the image after the inverse Fourier transform. The results show that the proposed peak-filtering algorithm clearly removes the moiré pattern from the image.

3.2. Image Enhancement

To further enhance the visual quality of the image and highlight the textual information in the image, an image enhancement algorithm based on the Otsu method is developed. Its overall process is shown in Figure 6, which consists of three main components: binarisation based on the Otsu method, image erosion, and image expansion.
The Otsu method, also called the maximum interclass variance method, proceeds as follows. First, the maximum interclass variance and the segmentation threshold are initialised to 0 and 1, respectively. Second, the pixel points in the image are divided into two categories (greater than the threshold and less than the threshold) by comparing each pixel value with the segmentation threshold. Third, the interclass variance of the two resulting categories is calculated, and the stored maximum interclass variance is updated if the new interclass variance is greater. Fourth, 1 is added to the segmentation threshold, and the above operations are repeated until the segmentation threshold reaches 255. As a result, the maximum interclass variance and the best segmentation threshold are obtained. Finally, the image is binarised by iterating over each pixel: the grey value is set to 255 for values greater than the optimal threshold and to 0 otherwise. In particular, the interclass variance can be calculated by
\tilde{\sigma}^2 = \frac{L_1}{L} \left( \frac{S_1}{L_1} - \frac{S_1 + S_2}{L} \right)^2 + \frac{L_2}{L} \left( \frac{S_2}{L_2} - \frac{S_1 + S_2}{L} \right)^2   (8)
where L represents the total number of pixels in the image, i.e., L = M \times N; L_1 and L_2 represent the numbers of pixels classified into the first and second classes based on the current segmentation threshold, respectively; and S_1 and S_2 represent the sums of the grey values of the pixels classified into the first and second classes, respectively.
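The exhaustive threshold search described above can be transcribed directly; the following sketch implements Equation (8) and is functionally equivalent to library routines such as OpenCV's THRESH_OTSU.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the threshold maximising the interclass variance of Equation (8)."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256).astype(np.float64)
    L = hist.sum()                                    # total number of pixels
    best_t, best_var = 1, 0.0
    for t in range(1, 256):                           # candidate thresholds 1..255
        L1, L2 = hist[:t].sum(), hist[t:].sum()       # class pixel counts
        if L1 == 0 or L2 == 0:
            continue
        S1 = (np.arange(t) * hist[:t]).sum()          # grey-value sum, class 1
        S2 = (np.arange(t, 256) * hist[t:]).sum()     # grey-value sum, class 2
        mu = (S1 + S2) / L                            # global mean grey value
        var = L1 / L * (S1 / L1 - mu) ** 2 + L2 / L * (S2 / L2 - mu) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```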
Figure 7c shows that the image still retains some burrs and noise after binarisation with the Otsu method. To handle these artefacts, an erosion operation is utilised to shrink and refine the highlighted areas or white parts of the image, which makes the highlighted areas of the resulting image smaller than those of the original image and thereby removes the burrs and noise. The erosion result is given by the following set:
\{ z \mid B_z \subseteq A \}   (9)
where A represents the image data, B represents the erosion template (structuring element), and B_z represents B after translation by z. This set contains all z such that B, translated by z, is still contained in A. In the greyscale case, the minimum pixel value of the area covered by B is taken as B slides over A; erosion is thus achieved by replacing the original pixel value with this minimum.
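The set definition of Equation (9) maps directly onto binary erosion as provided by scipy; the tiny example below uses the 3-pixel vertical line structure of Section 3.3.

```python
import numpy as np
from scipy.ndimage import binary_erosion

line_90 = np.ones((3, 1), dtype=bool)      # line of length 3 at 90 degrees

binary_img = np.array([[0, 1, 0],
                       [0, 1, 1],
                       [0, 1, 0]], dtype=bool)
eroded = binary_erosion(binary_img, structure=line_90)
# Only pixels whose entire 3-pixel vertical neighbourhood is white survive,
# i.e. the set {z | B_z ⊆ A} of Equation (9); isolated burrs are removed.
```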
Figure 8 shows the schematic of image erosion. Point a in the original image is removed because the pixel values for both the left side and the top of point a are 0 when the centre of the template coincides with it. In contrast, point b in the original image needs to be retained because the pixel values of the image covered by the template are all 1 when the centre of the template coincides with it.
Although image erosion removes burrs and noise to some extent, it also disrupts the continuity of the textual information part, as shown in Figure 7d. Therefore, an expansion operation is performed to expand the highlighted areas or white parts of the image, which makes the highlighted areas of the resulting image larger than those of the original image to fill in the gaps in the image and better preserve the continuity of the textual information. The expansion operation is given as follows:
\{ z \mid B_z \cap A \neq \emptyset \}   (10)
where A represents the image data, B represents the expansion template, and B_z represents B after translation by z. This set contains all z such that the intersection of A with B translated by z is non-empty. The maximum pixel value of the covered area can be obtained by performing "and" operations between B and A; expansion is thus achieved by replacing the original pixel value with this maximum.
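Likewise, Equation (10) corresponds to binary dilation; the small example below shows how it refills a stroke broken by erosion.

```python
import numpy as np
from scipy.ndimage import binary_dilation

line_90 = np.ones((3, 1), dtype=bool)           # same 3-pixel vertical line
gapped = np.array([[1], [0], [1]], dtype=bool)  # a broken vertical stroke
filled = binary_dilation(gapped, structure=line_90)
# 'filled' is all True: a pixel turns white when the translated line touches
# any white pixel, i.e. the set {z | B_z ∩ A ≠ ∅} of Equation (10).
```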
Figure 9 shows the schematic of image expansion. The surrounding points of point a are filled with the template when the centre of the template coincides with it. For the other points, the operation and outcome are the same.
Figure 7e shows the final results of image enhancement. Compared to the image after peak filtering, the textual information is more visible, and the visual quality is relatively better after image enhancement.

3.3. Implementation Details

Based on the peak-filtering and image enhancement techniques described above, the implementation details of the proposed method are as follows (a condensed end-to-end sketch is given after the list):
  • Fourier transform and shift. The image is Fourier transformed utilising (1) and then shifted so that the low-frequency region is concentrated in the centre of the frequency domain.
  • Gaussian filtering. The Gaussian filter is applied to the shifted frequency domain utilising (2). Here, the filter kernel is set to 7 × 7 and σ is set to 1.5.
  • Identification of the peak points. First, the largest peak is located in the entire frequency domain. Second, the second largest peak point is located. Third, the distance d between the largest peak point and the second largest peak point is calculated. Then, the minimum interval between two adjacent peaks is set utilising (11). Finally, the position of the remaining peak points in the row where the largest peak point is located can be found.
    \tilde{d} = \max(25, d/2).   (11)
  • Peak region filtering. First, each peak region is set to 65 × 65 , i.e., m = n = 65 , which includes the peak point as the centre and 64 × 64 as the neighbourhood. Later, the peak region is updated utilising (4).
  • Median filtering. Specifically, the median filter kernel is set to 31 × 31 .
  • Recovery of the central region. First, the central region is set to ( d ˜ + 1 ) × ( d ˜ + 1 ) , which is made up of the largest peak as the centre and d ˜ × d ˜ as the neighbourhood. Then, the central region after median filtering is replaced with that of the Gaussian filter.
  • Shift and inverse Fourier transform. The shift operation is first undone so that the low-frequency regions return to the four corners, and the peak-filtered spatial-domain image is then obtained utilising (7).
  • Binarisation. The image is binarised utilising the Otsu method.
  • Image erosion. Specifically, a line structure with a length of 3 and an angle of 90° is utilised for erosion. The eroded image is then obtained utilising (9).
  • Image expansion. Specifically, a line structure with a length of 3 and an angle of 90° is utilised for expansion. The expanded image is then obtained utilising (10).
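The following condensed Python sketch strings the listed steps together with the stated parameters. It is an illustrative reconstruction, not the reference MATLAB implementation: searching for peaks only along the row of the dominant peak, operating on the magnitude spectrum, and the peak-spacing heuristic are simplifying assumptions of this sketch.

```python
import numpy as np
import cv2
from scipy.ndimage import gaussian_filter, median_filter, binary_erosion, binary_dilation

def demoire(img: np.ndarray) -> np.ndarray:
    # Steps 1-2: Fourier transform, shift, 7x7 Gaussian smoothing (sigma = 1.5).
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    mag, phase = np.abs(F), np.angle(F)
    mag = gaussian_filter(mag, sigma=1.5, truncate=2.0)
    gauss_mag = mag.copy()
    # Step 3: locate the largest peak, then the second largest outside the centre.
    cy, cx = np.unravel_index(np.argmax(mag), mag.shape)
    row = mag[cy].copy()
    row[max(cx - 25, 0):cx + 26] = 0
    d = abs(int(np.argmax(row)) - cx)
    d_min = max(25, d // 2)                     # Equation (11)
    # Step 4: filter 65x65 peak regions spaced ~d apart along the peak row.
    for px in range(cx % d, mag.shape[1], d):
        if abs(px - cx) < d_min:
            continue                            # keep the central peak region
        y0, y1 = max(cy - 32, 0), min(cy + 33, mag.shape[0])
        x0, x1 = max(px - 32, 0), min(px + 33, mag.shape[1])
        taper = np.outer(np.linspace(0.5, 1, y1 - y0), np.linspace(0.5, 1, x1 - x0))
        mag[y0:y1, x0:x1] = taper * mag[y0:y1, x0:x1].mean()
    # Steps 5-6: 31x31 median filtering, then central region recovery.
    mag = median_filter(mag, size=31)
    h = d_min // 2
    mag[cy - h:cy + h + 1, cx - h:cx + h + 1] = gauss_mag[cy - h:cy + h + 1, cx - h:cx + h + 1]
    # Step 7: undo the shift and invert the transform (Equation (7)).
    img2 = np.fft.ifft2(np.fft.ifftshift(mag * np.exp(1j * phase))).real
    img2 = np.clip(img2, 0, 255).astype(np.uint8)
    # Steps 8-10: Otsu binarisation, then erosion and expansion with a
    # vertical line of length 3 (90 degrees).
    _, binary = cv2.threshold(img2, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    line = np.ones((3, 1), dtype=bool)
    clean = binary_dilation(binary_erosion(binary > 0, line), line)
    return (clean * 255).astype(np.uint8)
```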

4. Experimental Results

In this section, two ablation experiments are conducted. Then, the performance of the proposed method is verified by comparing it with five state-of-the-art methods. It is worth noting that the proposed method was implemented in MATLAB 2016b. Both the ablation experiments and validity experiments were conducted on a PC equipped with 4 Intel(R) Core(TM) i7-8700 @ 3.20 GHz CPUs and 8 GB of memory. The comparison methods mentioned in this paper were implemented using the source code from the original papers.

4.1. Ablation Study

An ablation experiment regarding the recovery of the central region was conducted. Figure 10 shows the corresponding results for six images, where the first row represents the original image, and the second and third rows represent the image without and with central region recovery, respectively. The data show that recovering the central region is crucial for preserving textual information in images and improving visual quality. For example, in Image 3, the textual information in the image without central region recovery is very vague compared to the image with central region recovery.
Next, an ablation experiment on peak filtering was conducted. Figure 11 shows the results for six images, where the first row represents the original image, and the second and third rows represent the enhanced image without and with peak filtering, respectively. The data show that peak filtering is crucial for removing moiré. For example, in Image 2, the moiré in the enhanced image without peak filtering is very obvious, which seriously affects our visual perception. Similar outcomes were observed for other images.

4.2. Comparison with State-of-the-Art Methods

To highlight the advantages of the proposed method, we compared it with state-of-the-art methods, including DMCNN [25], HRDN [26], MBCNN [27], MopNet [31], and EEUCN [33]. The results, shown in Figure 12, indicate that the proposed method achieved better performance than these methods.
Specifically, DMCNN [25] removed moiré patterns utilising multiscale convolutional neural networks, but the resulting images were somewhat reddish, and it was less effective at removing moiré compared to our approach. HRDN [26] did not sufficiently extract features at each scale, resulting in the unsatisfactory removal of moiré patterns from images containing complex moiré patterns. MBCNN [27] extracted the frequency-domain features of the image and analysed and removed the moiré component in the frequency domain. However, it was relatively ineffective at addressing textual images with heavy moiré. MopNet [31] removed moiré patterns by distinguishing the moiré component from the image component only through boundary extraction. However, when faced with text images with heavy moiré, it could not distinguish between moiré patterns and textual information, resulting in unsatisfactory moiré removal. EEUCN [33] used an adversarial neural network to remove moiré. It first added moiré patterns to a clean image utilising a moiré addition network and then removed the moiré patterns utilising supervised learning. However, when the textual information was relatively faint, it was mistakenly identified as a moiré pattern and removed, as the moiré-added image still differed from the real moiré image.
To further prove the performance of the proposed method, we also applied the same morphological text extraction operation to the outputs of the state-of-the-art methods and compared the results with the proposed method. The experimental results are shown in Figure 13. The proposed method again performed best. Although the state-of-the-art methods achieved good performance on test samples 4, 5, and 6, they did not perform well on test samples 1, 2, 3, and 8. In particular, MopNet did not completely remove the moiré from test samples 3, 7, and 8, resulting in an unsatisfactory visual effect, and EEUCN completely erased the text in test sample 8.
In summary, the proposed method can effectively remove moiré patterns while retaining textual information.

5. Conclusions

This paper proposed a moiré removal method utilising peak filtering and image enhancement. Specifically, the proposed peak-filtering algorithm removes the uniformly distributed moiré peaks, and the central region recovery strategy preserves the textual information in the image. Furthermore, the image enhancement algorithm based on the Otsu method further improves the visual quality of the image. Extensive experimental results show that, compared to state-of-the-art methods, the proposed method effectively removes severe moiré patterns and achieves better visual quality at a lower time cost.

Author Contributions

Conceptualisation, W.Q.; methodology, X.L.; software, X.Y.; validation, X.Y.; data curation, S.K.; writing—original draft preparation, W.Q.; writing—review and editing, X.L.; visualisation, X.Y.; supervision, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China No. 61972031.

Data Availability Statement

The data are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, L.; Yuan, S.; Liu, J.; Bao, L.; Slabaugh, G.G.; Tian, Q. Self-adaptively learning to demoiré from focused and defocused image pairs. arXiv 2020, arXiv:2011.02055.
  2. Siddiqui, H.; Boutin, M.; Bouman, C.A. Hardware-friendly descreening. IEEE Trans. Image Process. 2010, 19, 746–757.
  3. Liu, S.; Li, C.; Nan, N.; Zong, Z.; Song, R. MMDM: Multi-frame and multi-scale for image demoiréing. arXiv 2020, arXiv:1909.11947.
  4. Yoon, Y.; Jeon, H.; Yoo, D.; Lee, J.; Kweon, I.S. Learning a deep convolutional network for light-field image super-resolution. In Proceedings of the 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), Santiago, Chile, 7–13 December 2015; pp. 57–65.
  5. Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. arXiv 2016, arXiv:1608.00367.
  6. Chatterjee, P.; Mukherjee, S.; Chaudhuri, S.; Seetharaman, G. Application of Papoulis–Gerchberg method in image super-resolution and inpainting. Comput. J. 2009, 52, 80–89.
  7. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R.B. Mask R-CNN. arXiv 2017, arXiv:1703.06870.
  8. Francis Alexander Raghu, A.; Ananth, J.P. Robust object detection and localization using semantic segmentation network. Comput. J. 2021, 64, 1531–1548.
  9. Lin, X.; Wang, Z.-J.; Ma, L.; Li, R.; Fang, M.-E. Salient object detection based on multiscale segmentation and fuzzy broad learning. Comput. J. 2022, 65, 1006–1019.
  10. Garcia, D.C.; de Queiroz, R.L. Face-spoofing 2D-detection based on moiré-pattern analysis. IEEE Trans. Inf. Forensics Secur. 2015, 10, 778–786.
  11. Sajid, M.; Ali, N.; Ratyal, N.I.; Usman, M.; Butt, F.M.; Riaz, I.; Musaddiq, U.; Aziz Baig, M.J.; Baig, S.; Ahmad Salaria, U. Deep learning in age-invariant face recognition: A comparative study. Comput. J. 2022, 65, 940–972.
  12. Shoba, B.T.; Shatheesh Sam, I. Aging facial recognition for feature extraction using adaptive fully recurrent deep neural learning. Comput. J. 2022, 65, 1923–1936.
  13. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
  14. Chen, X.; Kang, S.B.; Yang, J.; Yu, J. Fast patch-based denoising using approximated patch geodesic paths. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 897–909.
  15. Xu, L.; Yan, Q.; Xia, Y.; Jia, J. Structure extraction from texture via relative total variation. ACM Trans. Graph. 2012, 31, 1–10.
  16. Shou, Y.; Lin, C. Image descreening by GA-CNN-based texture classification. IEEE Trans. Circuits Syst. I Regul. Pap. 2004, 51-I, 2287–2299.
  17. Nah, S.; Kim, T.H.; Lee, K.M. Deep multi-scale convolutional neural network for dynamic scene deblurring. arXiv 2016, arXiv:1612.02177.
  18. Chen, L.; Zhang, J.; Lin, S.; Fang, F.; Ren, J.S. Blind deblurring for saturated images. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 6308–6316.
  19. An, V.G.; Park, H.; Lee, C. Dual-domain deep convolutional neural networks for image demoireing. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1934–1942.
  20. An, V.G.; Park, H.; Lee, C. Moiré artifacts removal in screen-shot images via multiple domain learning. In Proceedings of the 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Auckland, New Zealand, 7–10 December 2020; pp. 1268–1273.
  21. Gao, T.; Guo, Y.; Zheng, X.; Wang, Q.; Luo, X. Moiré pattern removal with multi-scale feature enhancing network. In Proceedings of the 2019 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Shanghai, China, 8–12 July 2019; pp. 240–245.
  22. Cheng, X.; Fu, Z.; Yang, J. Multi-scale dynamic feature encoding network for image demoiréing. arXiv 2019, arXiv:1909.11947.
  23. Cheng, X.; Fu, Z.; Yang, J. Improved multi-scale dynamic feature encoding network for image demoiréing. Pattern Recognit. 2021, 116, 107970.
  24. Yu, X.; Dai, P.; Li, W.; Ma, L.; Shen, J.; Li, J.; Qi, X. Towards efficient and scale-robust ultra-high-definition image demoiréing. arXiv 2022, arXiv:2207.09935.
  25. Sun, Y.; Yu, Y.; Wang, W. Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 2018, 27, 4160–4172.
  26. Yang, S.; Lei, Y.; Xiong, S.; Wang, W. High resolution demoire network. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 888–892.
  27. Zheng, B.; Yuan, S.; Slabaugh, G.G.; Leonardis, A. Image demoireing with learnable bandpass filters. arXiv 2020, arXiv:2004.00406.
  28. Kimmel, R. Demosaicing: Image reconstruction from color CCD samples. IEEE Trans. Image Process. 1999, 8, 1221–1228.
  29. Liu, F.; Yang, J.; Yue, H. Moiré pattern removal from texture images via low-rank and sparse matrix decomposition. In Proceedings of the 2015 Visual Communications and Image Processing (VCIP), Singapore, 13–16 December 2015; pp. 1–4.
  30. Yang, J.; Zhang, X.; Cai, C.; Li, K. Demoiréing for screen-shot images with multi-channel layer decomposition. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4.
  31. He, B.; Wang, C.; Shi, B.; Duan, L.-Y. Mop moire patterns using MopNet. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2424–2432.
  32. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3349–3364.
  33. Park, H.; Vien, A.G.; Koh, Y.J.; Lee, C. Unpaired image demoiréing based on cyclic moiré learning. In Proceedings of the APSIPA Annual Summit and Conference 2021, Tokyo, Japan, 14–17 December 2021; pp. 146–150.
  34. Yue, H.; Cheng, Y.; Liu, F.; Yang, J. Unsupervised moiré pattern removal for recaptured screen images. Neurocomputing 2021, 456, 352–363.
  35. Park, H.; Vien, A.G.; Kim, H.; Koh, Y.J.; Lee, C. Unpaired screen-shot image demoiréing with cyclic moiré learning. IEEE Access 2022, 10, 16254–16268.
  36. Hasan, Y.M.Y.; Karam, L.J. Morphological text extraction from images. IEEE Trans. Image Process. 2000, 9, 1978–1983.
Figure 1. Frequency-domain comparison of images with and without moiré.
Figure 2. Results of preliminary attempts.
Figure 3. Comparison of the frequency domain before and after the shift. Note that the plotted frequency domain has undergone a logarithmic transformation to emphasise the contrast and thus provide a more intuitive depiction.
Figure 4. Overall process of peak filtering.
Figure 5. Results of peak filtering.
Figure 6. Overall process of image enhancement.
Figure 7. Results of image enhancement.
Figure 8. Schematic of image erosion.
Figure 9. Schematic of image expansion.
Figure 10. Ablation study on recovery of the central region.
Figure 11. Ablation study on peak filtering.
Figure 12. Comparison between the proposed method without image enhancement and five state-of-the-art methods.
Figure 13. Comparison between the proposed method and five state-of-the-art methods for morphological text-extraction operations.