Article

Printed Texture Guided Color Feature Fusion for Impressionism Style Rendering of Oil Paintings

1 Faculty of Printing, Packaging Engineering and Digital Media Technology, Xi’an University of Technology, Xi’an 710048, China
2 National Subsea Centre, Robert Gordon University, Aberdeen AB21 0BH, UK
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2022, 10(19), 3700; https://doi.org/10.3390/math10193700
Submission received: 14 August 2022 / Revised: 16 September 2022 / Accepted: 6 October 2022 / Published: 9 October 2022
(This article belongs to the Special Issue Advances in Computer Vision and Machine Learning)

Abstract

As a major branch of Non-Photorealistic Rendering (NPR), image stylization mainly uses computer algorithms to render a photo into an artistic painting. Recent work has shown that extracting style information, such as the stroke texture and color of the target style image, is the key to image stylization. Given these stroke texture and color characteristics, a new stroke rendering method is proposed. By fully considering the tonal characteristics and the representative color of the original oil painting, it fits the tone of the original oil painting into the stylized image while preserving the artist’s creative effect. Experiments have validated the efficacy of the proposed model in comparison to three state-of-the-art methods. The method is most suitable for the works of pointillist painters with a relatively uniform style, especially natural scenes; otherwise, the results can be less satisfactory.

1. Introduction

Non-Photorealistic Rendering (NPR) is an important branch of computer graphics. Unlike realistic painting, which pursues physical accuracy and authenticity, non-photorealistic painting (NPP) depicts scenes through artistic expression techniques such as oil painting, pencil drawing, and cartoon, so that the painting effect can convey rich feelings [1]. Among the various NPP styles, oil painting is widely favored for its long history and rich expressiveness. It can not only accurately depict a scene, as shown in Figure 1a, but also convey the painter’s emotions through different artistic exaggerations (Figure 1b).
In Figure 2, an example is given of converting a photo using a style automatically extracted from an oil painting by a professional artist, Monet.
In the history of art, Impressionism is a very important category of painting. It pays attention to the true description of nature, especially changes of light and shadow, and played a transitional role in the transformation of European painting from realism to modernism. Because of these unique artistic pursuits, the works of this category have aesthetic characteristics distinct from those of other periods and painting categories in terms of subject matter, color, and composition. Therefore, the protection of these paintings has become an urgent issue, yet how to digitize these priceless antique paintings and exhibit them on the Internet remains a challenging problem.
Due to the characteristics of Impressionist art and the similar techniques presented in many such artworks, it is usually difficult for researchers to fully extract the distinctive features of each painter’s work. To address these shortcomings, we develop an adaptive brush stroke selection approach to automatically extract a generic texture patch from an oil painting image. The extracted texture feature is used to optimize the color feature by guiding color feature fusion in the Fourier domain. Eventually, the stylized image is generated by an inverse Fourier transform. A comprehensive experiment with detailed analysis is reported.
Overall, the main contributions of this paper can be summarized as follows:
  • We propose an unsupervised printed texture guided color fusion framework, namely PTGCF, for Impressionist oil painting style migration, which fully considers the tonal characteristics and the representative color of the original oil painting.
  • We propose an effective fusion strategy to adaptively transfer the texture information from the oil painting image to the natural image in the frequency domain of the dominant color component.
The remainder of this paper is organized as follows. In Section 2, we give a brief review of the related work. In Section 3, the proposed PTGCF is elaborated in detail. In Section 4, experiments and analysis are given. Finally, some concluding remarks are drawn in Section 5.

2. Related Work

Image stylization is essentially an image rendering process that transfers the color and texture information from one image to another [2]. Several classical methods were developed early on. Efros and Freeman [3] synthesized the target image by extracting and reorganizing texture samples; Hertzmann [4] transferred an existing image style to the target image through image analogy; Ashikhmin [5] transferred the high-frequency texture of the source image to the target image while retaining the coarse scale of the target image; Lee [6] improved Ashikhmin’s algorithm by passing additional edge information; and Semmo [7] proposed a dominant color quantization model for transforming images into filtered variants with an oil paint appearance.
In the last decade, deep neural networks have been widely applied in visual perception fields such as face recognition, object detection, and even image style transfer [1]. Gatys et al. [8] first applied the VGG19 network to style migration in 2015; the key discovery was that the content and style representations of a convolutional neural network are separable, and the style representation of any image can be extracted by constructing a Gram matrix. Because colors after migration may be severely distorted, Gatys et al. later proposed two style migration methods to preserve the color information of the original image [9]: color histogram matching in RGB space and luminance-only transfer in L*a*b* space.
In Gatys’ approach, the filter pyramid of the VGG network is used as the high-level representation of the image. However, the convolution layers can only capture connections between individual pixels, resulting in poor rendering effects due to the missing connection to the spatial distribution. In the VGG-based fast style migration model proposed by Johnson [10], the efficiency is greatly improved, but the shortcomings of [8,9] remain. Motivated by the above problems, Li et al. proposed to combine Markov random fields (MRF) and convolutional neural networks [11], replacing the Gram matrix matching in Gatys’ model with a Markov regularization model to improve the visual rationality of the synthetic image. This method provides an interesting extension of iteration-based style transfer, but it has one major limitation, i.e., a restriction on the input: the style image must be reconstructible by the MRF. A picture with a strong perspective structure is therefore not suitable for this method, and compared with the original image, the result will suffer from blurred edges. To sum up, the MRF-based style transfer method produces good results only when the content image and style image have similar shapes without strong changes of angle and size.
Liao et al. [12] achieved good results by combining deep learning (VGG19) and image analogy (PatchMatch) for style transfer. Using a pre-trained VGG19 network to extract image features, they extended PatchMatch from the image domain to the feature domain, effectively guiding semantic-level visual attribute migration. However, the algorithm lacks semantic generalization ability and can only transfer similar content.
In Zou et al. [13], a neural style transfer model is proposed for stroke-based image-to-painting conversion. Compared with the brute-force generation of a large number of strokes, this work can outline a stunning oil painting with relatively few strokes. The core idea is to fix the network parameters of the rendering module and optimize the stroke parameters, such as color, coordinate position, and rotation angle. Although this approach is quite flexible in generating painting results for a variety of stroke types, such as oil painting, watercolor, and square strokes, the optimization process is very time-consuming, making it difficult to deploy in practical applications. To address this drawback, Liu et al. [14] proposed a fast Paint Transformer network, which can quickly transfer a picture into an oil painting with full usage of texture features. In this work, instead of an optimization procedure, the stroke parameters are generated by minimizing the difference between the oil painting and the natural photo, improving both effectiveness and efficiency. However, it still has shortcomings, such as poor support for complicated curved strokes and poor generation of slender strokes. In Deng et al. [15], a transformer-based network, namely StyTr², is proposed to improve generalizability. Compared with other deep learning models, this approach has better feature representation ability because it captures the long-term dependence of the input images and avoids losing content and style details. Therefore, it can achieve high-quality stylization with good content structure and rich style patterns.
Although deep learning based methods can produce satisfactory stylized images, they suffer from low efficiency and rely on large amounts of training data [16]. Some fast style transfer models [10,14] have been used to alleviate the efficiency problem, yet they can only be trained for specific styles and thus need a lengthy process of parameter tuning. Therefore, according to the characteristics of Impressionist oil painting, we introduce a generic image style migration method based on physical-model rendering and texture synthesis, and propose an efficient NPR framework that adaptively fuses the texture and color attributes of the target oil painting image to realize Impressionist image stylization. Firstly, an adaptive stroke selection method is introduced to automatically obtain a generic texture patch from the oil painting image. Then, the color and texture information of the source image and the stroke are fused in the Fourier domain. By making full use of the color and texture features of oil paintings, source images can be rendered in the style of any artist, such as Monet, Seurat, or Van Gogh. To select the parameters that give the best rendering performance, the choice of brush patch size is discussed, and the influence of different painters’ attributes on the stylized images is analyzed.

3. Materials and Methods

3.1. Data Collection and Pre-Processing

The experimental image data were acquired by a D65 Kodak camera with a 12-megapixel CMOS sensor. The original digital photos mainly cover real-life landscapes and portraits. Four examples of the acquired data are shown in Figure 3. Due to inconsistent lighting in different scenes, the tone reproduction of the captured images differed, which may cause inconsistent stylization effects. To solve this problem, a simple yet fast image pre-processing algorithm [17] was used to adjust the chroma and contrast of the digital images.

3.2. Workflow of the Proposed PTGCF Model

Figure 4 shows the workflow of the proposed method, which is composed of three major modules, i.e., color space conversion, color matching, and feature fusion. Color space conversion converts the image from the RGB space to the L*a*b* color space. It is an essential step in color matching and also bridges color matching and feature fusion. Color matching aims to enhance the original digital images and make their appearance similar to an Impressionist oil painting. In feature fusion, the color attributes of the enhanced image and the brush stroke are adaptively fused in the L*a*b* color space according to their texture attributes. The L*a*b* color space is chosen because of its high capability in simulating the human visual system [18]. Its L* component closely matches human brightness perception and can be used to adjust brightness contrast, while the a* and b* components can be modified for precise color balance. Such adjustments are difficult or impossible in other color spaces such as RGB and CMYK, because those are modeled on the output of a physical device rather than on human visual perception. The final stylized RGB image is generated after converting back from the L*a*b* space; a minimal conversion sketch is given below. The implementation of color matching and feature fusion is detailed in the following subsections.
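As a minimal illustration of the color space conversion module, the sketch below uses OpenCV’s built-in conversion; the function names are ours, and the float-scaling convention follows OpenCV’s documented behaviour:

```python
import cv2
import numpy as np

def to_lab(bgr_uint8: np.ndarray) -> np.ndarray:
    # With float32 input scaled to [0, 1], OpenCV returns L* in [0, 100]
    # and a*, b* in roughly [-127, 127], convenient for channel statistics.
    return cv2.cvtColor(bgr_uint8.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)

def to_bgr(lab_float32: np.ndarray) -> np.ndarray:
    # Inverse conversion back to an 8-bit BGR image for display.
    bgr = cv2.cvtColor(lab_float32, cv2.COLOR_LAB2BGR)
    return np.clip(bgr * 255.0, 0, 255).astype(np.uint8)
```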

3.3. Color Matching

To introduce the Impressionist style into the original digital photos, the first step is to imitate the color attributes of the oil painting in the photos. For this purpose, the color transfer between images algorithm [19] is employed. The implementation of color matching [20] is detailed below, followed by a code sketch:
  • Input a source image and an oil painting image. In Figure 5, the landscape image on the left is the source image and the middle image is the oil painting image;
  • Convert both the source and the oil painting image from the RGB to the L*a*b* color space;
  • Calculate the mean value and standard deviation of each channel of the source and oil painting images in the L*a*b* color space, denoted as $(\mu_{l,s}, \mu_{a,s}, \mu_{b,s}, \sigma_{l,s}, \sigma_{a,s}, \sigma_{b,s})$ and $(\mu_{l,o}, \mu_{a,o}, \mu_{b,o}, \sigma_{l,o}, \sigma_{a,o}, \sigma_{b,o})$;
  • Subtract the mean value of each channel (i.e., $l_o$, $a_o$, $b_o$) of the oil painting image in the L*a*b* color space:
    $l = l_o - \mu_{l,o}, \quad a = a_o - \mu_{a,o}, \quad b = b_o - \mu_{b,o}$    (1)
  • Scale each mean-subtracted channel by the ratio of the standard deviation of the oil painting image to that of the source image:
    $l_{scale} = \frac{\sigma_{l,o}}{\sigma_{l,s}} l, \quad a_{scale} = \frac{\sigma_{a,o}}{\sigma_{a,s}} a, \quad b_{scale} = \frac{\sigma_{b,o}}{\sigma_{b,s}} b$    (2)
  • Add the mean value of the result of the previous step to the source image channels:
    $l_{new} = \overline{l_{scale}} + l_s, \quad a_{new} = \overline{a_{scale}} + a_s, \quad b_{new} = \overline{b_{scale}} + b_s$    (3)
  • Convert the result of the previous step from the L*a*b* space back to the RGB color space.
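Below is a minimal sketch of these steps, under one common reading of them as Reinhard et al.’s global statistics transfer [19,20]: the photo’s per-channel L*a*b* statistics are matched to the painting’s. The helper name and the use of OpenCV are our assumptions.

```python
import cv2
import numpy as np

def color_match(source_bgr: np.ndarray, painting_bgr: np.ndarray) -> np.ndarray:
    # Steps 2-3: convert both images to L*a*b* and gather channel statistics.
    src = cv2.cvtColor(source_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
    oil = cv2.cvtColor(painting_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)

    out = np.empty_like(src)
    for c in range(3):  # L*, a*, b*
        mu_s, sd_s = src[..., c].mean(), src[..., c].std()
        mu_o, sd_o = oil[..., c].mean(), oil[..., c].std()
        # Steps 4-6: centre, rescale by the std ratio, and re-centre on the
        # painting statistics so the photo adopts the painting's tone.
        out[..., c] = (src[..., c] - mu_s) * (sd_o / sd_s) + mu_o

    # Step 7: back to BGR (OpenCV's RGB ordering) for display.
    bgr = cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
    return np.clip(bgr * 255.0, 0, 255).astype(np.uint8)
```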

3.4. Feature Fusion

After color matching, the textures of the enhanced digital images and the oil painting images are analysed and used to guide the color fusion procedure, producing the final optimized rendering.

3.4.1. Edge Enhancement

For each enhanced digital image, the edge attributes are extracted in the RGB color space. The unsharp masking (USM) algorithm, implemented with the OpenCV library [21], is employed to enhance and extract the edges. Given a source image $S$, a smoothed map $G$ is first obtained using a Gaussian blur. The enhanced image $E$ for $S$ is then calculated by Equation (4), where $\omega \in [0.1, 0.9]$ is a weight, set to 0.6 for fast implementation in this paper.
$E = \frac{S - \omega G}{1 - \omega}$    (4)
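A minimal sketch of Equation (4) follows; the Gaussian kernel size and sigma are illustrative choices not specified in the text:

```python
import cv2
import numpy as np

def unsharp_mask(src: np.ndarray, omega: float = 0.6,
                 ksize: tuple = (5, 5), sigma: float = 3.0) -> np.ndarray:
    """Edge enhancement per Equation (4): E = (S - w*G) / (1 - w)."""
    s = src.astype(np.float32)
    g = cv2.GaussianBlur(s, ksize, sigma)   # smoothed map G
    e = (s - omega * g) / (1.0 - omega)     # amplifies the detail S - G
    return np.clip(e, 0, 255).astype(np.uint8)
```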

3.4.2. Adaptive Selection of Brush Stroke

Brush strokes are the marks produced by the painter in the process of painting, which can be used to describe the depicted objects or to express subjective emotion. As stated in much of the literature [13,22,23], the image gradient plays an important role in selecting brush strokes. To consistently imitate the most representative brush stroke from images of the oil paintings, the standard deviation SD(i) within a sliding window W(i) on the source image is calculated as a gradient map ∇S via Equation (5):
$SD(i) = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \left( o_j(i) - \mu(i) \right)^2}$    (5)
Here, $i \in [1, I]$, where $I$ is the number of sliding windows, each of size 128 × 128 pixels; $N = xy$ is the number of pixels per window; $o_j(i)$ is the $j$th pixel of the $i$th sliding window; and $\mu(i)$ is the mean pixel value of the $i$th sliding window.
The sliding window with the lowest SD is selected as the brush stroke texture patch; a sketch of this selection follows. In total, 65 original paintings were studied, including 10 Van Gogh landscapes, 12 Van Gogh portraits, 15 Monet landscapes, 12 Monet portraits, 9 Seurat landscapes, and 7 Seurat portraits, so the method has a certain universality. Some representative artworks of each artist are shown in Figure 6.
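A minimal sketch of this adaptive selection, operating on the painting’s grayscale image; the window stride is our assumption:

```python
import numpy as np

def select_brush_stroke(gray: np.ndarray, win: int = 128,
                        stride: int = 64) -> np.ndarray:
    """Return the window with the lowest SD(i) as the brush stroke patch."""
    best_sd, best_patch = np.inf, None
    h, w = gray.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = gray[y:y + win, x:x + win].astype(np.float32)
            sd = patch.std()  # sqrt(mean((o_j(i) - mu(i))^2)), Equation (5)
            if sd < best_sd:
                best_sd, best_patch = sd, patch
    return best_patch
```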
To gather our collection of fine-art paintings, we used the publicly available “Wikiart paintings” dataset (http://www.wikiart.org/, accessed on 1 January 2022), which, to the best of our knowledge, is the largest online public collection of digitized artworks. Examples of brush strokes selected in this study are shown in Figure 7, where Figure 7a,c,e show the oil paintings “Bend in the Epte River near Giverny” by Claude Monet, “Cypress against a Starry Sky” by Van Gogh, and “Models.Detail” by Seurat, respectively. Figure 7b,d,f show the selected brush strokes of the three oil paintings.

3.4.3. Feature Fusion in the Fourier Domain

For an actual oil painting, the brush stroke direction and the gradient information are the most important texture attributes. As shown in Figure 8, even for a single orange, the brush stroke direction and gradient of its surface and surrounding shadows already show this uniqueness, so a whole oil painting contains far more abstract stroke patterns. Unlike an oil painting image, a digital image has no brush stroke direction, but gradient information is still vital to characterize its texture.
Therefore, we make full use of the texture information of the selected brush stroke as well as that of the digital image to guide the adaptive fusion of their color information, fully rendering the oil painting style on the digital image. The detailed implementation is presented below.
Given an enhanced digital image and a selected brush stroke, both are converted from the RGB color space to the L*a*b* color space. Then, their L* components are transferred to the frequency domain using the fast Fourier transform (FFT).
As seen in Figure 9, a power spectrum for each phase angle can be built after transferring the brush stroke into the FFT domain; the angle of maximum power is then extracted.
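A sketch of this orientation estimate follows: the FFT power is averaged over angular bins and the angle of maximum mean power is returned. The bin count and centring convention are our assumptions:

```python
import numpy as np

def dominant_stroke_angle(patch: np.ndarray, n_bins: int = 180) -> float:
    # Remove the DC component, then shift the spectrum so angles are
    # measured from the centre of the frequency plane.
    f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    power = np.abs(f) ** 2

    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Orientation is periodic over 180 degrees.
    angles = np.degrees(np.arctan2(yy - h / 2.0, xx - w / 2.0)) % 180.0
    bins = (angles * n_bins / 180.0).astype(int) % n_bins

    # Average power per angular bin, then pick the angle of maximum power.
    total = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    count = np.maximum(np.bincount(bins.ravel(), minlength=n_bins), 1)
    return float(np.argmax(total / count)) * (180.0 / n_bins)
```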
On the other hand, the gradient information of each digital image can be extracted after edge enhancement. For each pixel window in the FFT domain, the value is updated after shifting the brush stroke by the specific phase angle (Figure 10).
Based on this, we devised an algorithm that adds stroke texture to digital photos to produce oil-painting-style images, adapting the strokes so that their gradient is perpendicular to the photo gradient. It is summarized as follows, with a code sketch after the list.
  • Convert the original photograph into the frequency domain.
  • Convert the brush texture patch into the frequency domain to create a filter.
  • Blur the original photo.
  • Enhance the boundaries of the image from step 3.
  • Acquire the gradient direction information from the output of step 4.
  • Update the value of the original photograph by adaptively shifting the brush stroke in the FFT domain, as shown in Figure 10b.
  • Convert the result of step 6 back to the spatial domain by the inverse Fourier transform.
  • Concatenate the result of step 7 with the original a* and b* components. At this point, the color and texture features of the digital image and the brush stroke have been fused.
  • Finally, convert the fused result of step 8 from the L*a*b* to the RGB color space to display the optimized rendering effect.
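The following minimal sketch strings these steps together under stated assumptions: step 6’s gradient-driven, per-window shifting (Figure 10b) is simplified to a phase-only modulation of the photo spectrum by the brush spectrum, the blur and enhancement parameters are illustrative, and the structure is ours rather than the paper’s exact implementation.

```python
import cv2
import numpy as np

def render_oil_style(photo_bgr: np.ndarray, stroke_patch: np.ndarray,
                     omega: float = 0.4) -> np.ndarray:
    lab = cv2.cvtColor(photo_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)

    # Steps 1-2: luminance of the photo and the brush patch enter the FFT domain.
    F_L = np.fft.fft2(L)
    F_brush = np.fft.fft2(stroke_patch.astype(np.float32), s=L.shape)

    # Steps 3-5: blur, edge-enhance via Equation (4), then take the gradient
    # direction that would steer the stroke shifts in the full method.
    blurred = cv2.GaussianBlur(L, (5, 5), 3.0)
    enhanced = (L - omega * blurred) / (1.0 - omega)
    gy, gx = np.gradient(enhanced)
    theta = np.arctan2(gy, gx)  # unused in this simplified sketch

    # Step 6 (simplified): phase-only modulation of the photo spectrum by the
    # brush spectrum, standing in for the per-window shifting of Figure 10b.
    F_fused = F_L * (F_brush / (np.abs(F_brush) + 1e-8))

    # Step 7: inverse FFT back to the spatial domain.
    L_new = np.real(np.fft.ifft2(F_fused)).astype(np.float32)

    # Steps 8-9: recombine with the untouched a*, b* and return to 8-bit BGR.
    fused = cv2.merge([np.clip(L_new, 0.0, 100.0), a, b])
    out = cv2.cvtColor(fused, cv2.COLOR_LAB2BGR)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```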

3.4.4. Selection of Stroke Texture

The movement of the stroke texture naturally raises a question: how far should the sliding window be shifted when fusing the stroke texture with the enhanced image, so that the generated image is closer to the oil painting? To choose the best shift, we compared three moving strategies, i.e., 1/8, 1/4, and 1/2 of the patch size in the horizontal and vertical directions, respectively (Figure 11).
The evaluation images were printed on glossy photo paper with light black ink on an Epson PX-5500 inkjet printer, at a size of 12 cm × 16 cm. The prints were evaluated under a high color rendering D50 fluorescent lamp at an illuminance of 700 lx, with no limit on the observation distance. A subjective evaluation was conducted by 30 students aged between 20 and 30, who were asked to compare the stylized images with Monet’s collection of paintings.
The evaluation method used is rank order [24], in which multiple stimuli are presented at the same time and participants rank them according to given criteria, so that an averaged rank score for each stimulus can be obtained. In our study, participants ranked the three oil-painting-style images by artist-style similarity between the rendered images and the original oil paintings. After extensive subjective evaluation, the average rank score of each rendered image was obtained, reflecting its rendering performance; a toy computation of the averaged rank score is sketched below. The stroke texture block was shifted by 1/8, 1/4, and 1/2 of the patch size horizontally and vertically, and the three generated images are shown in Figure 12. The scale value representing how well each stimulus meets the evaluation criterion was obtained. For landscape 1 and portrait 1, shifts of 1/4 and 1/2 of the patch size gave the same score. On the other hand, for landscape 2 and portrait 2, the 1/8 shift was not ideal: in general, as the overlap decreases, fewer colors are mixed.
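As a toy illustration (with hypothetical rankings rather than the study’s actual data), the averaged rank score can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: each of 30 participants ranks the three shift strategies
# (1 = most similar to the artist's style, 3 = least similar).
ranks = np.array([rng.permutation([1, 2, 3]) for _ in range(30)])
mean_rank = ranks.mean(axis=0)  # averaged rank score per stimulus
print(dict(zip(["1/8 shift", "1/4 shift", "1/2 shift"], mean_rank.round(2))))
```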

4. Results

4.1. Landscape Painting

“Starry Night” is one of the most well-known paintings by Van Gogh (Figure 13). It has three dominant colors: sky blue, yellow, and black. In general, the appearance of the stylized image is close to the original oil painting. However, some elements of the oil painting (i.e., the orange moon and the clusters of starlight) are not reflected in the rendering. This is mainly because the painting consists of various textures such as curved long lines, dashed lines, and swirl patterns, which the brush stroke texture patch fails to fully capture; the selected brush stroke can merely imitate the general tone and ignores the less-dominant colors.
Impressionism holds that reality is a fleeting visual impression, and it broke through the earlier stereotype of painting with high-contrast colors. In the Haystacks series (Figure 14b), Monet juxtaposed solid and dotted colors to reflect the changing light in the scene. Although the stylization result (Figure 14c) imitates the brown tone of the oil painting, the effect of changing light and shadow is insufficient; this could possibly be mitigated by mathematical modelling in computer graphics.
At the end of the 19th century, the French painter Georges Seurat was the first to propose and practice the stippling and color separation painting method, which uses a unique stroke to generate colors from primary colors. “Big Bowl Island Sunday Afternoon” is a representative work of Neo-Impressionism (Figure 15b), in which purple is mixed by painting red and blue strokes side by side. In addition, this oil painting is composed of millions of widely distributed color dots, which makes it difficult to fully imitate its color and texture information.

4.2. Portrait Painting

The similarity between the final oil painting stylization and the oil painting image is high. However, due to the lack of semantics of the image content, the rendering effects on the portraits are far from human perception, as shown in the third row of Figure 16. Van Gogh’s paintings are formed by thick and powerful strokes and intense, heavy, strong, and distorted color lines. When Van Gogh’s powerful stroke texture is applied to a portrait, however, the rendering of the skin, the texture of the clothes, and the overall texture of the image are affected, which significantly reduces the visual quality. To tackle this problem, abstract, high-level semantic features could be extremely useful; these can be obtained by multiple convolutional layers with large receptive fields and pyramid pooling.

4.3. Key Parameter Analysis

With ω varying from 0.1 to 0.9 in steps of 0.1, the rendering performance of USM edge enhancement is presented in Figure 17. The overall trends of the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) are quite similar. Specifically, SSIM peaks at ω = 0.4, while PSNR peaks at ω = 0.1 and 0.4. Therefore, ω is set to 0.4 in the USM model and used in the following experiments; a sketch of this parameter sweep is given below. We also compare the rendering performance of USM with two other classic edge enhancement methods, i.e., the Gamma transform and Laplacian enhancement [21]. As seen in Table 1, USM outperforms the other two methods, which indicates that it is more suitable for this image rendering task. Additionally, we investigate how the size of the brush stroke affects the final rendering performance. As seen in Table 2, a size of 128 pixels gives the best SSIM and PSNR, so we set the brush stroke window size to 128 in our study. Compared with the results shown in Table 3, any brush stroke size gives better results than all three compared approaches.
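A sketch of the ω sweep behind Figure 17, assuming a recent scikit-image for the metrics and a render_fn placeholder wrapping the rendering pipeline; the choice of reference image is likewise an assumption:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sweep_omega(render_fn, photo: np.ndarray, reference: np.ndarray) -> dict:
    """Score renders for omega in 0.1..0.9 (step 0.1), as in Figure 17."""
    scores = {}
    for omega in np.round(np.arange(0.1, 1.0, 0.1), 1):
        out = render_fn(photo, omega=float(omega))
        scores[float(omega)] = (
            structural_similarity(reference, out, channel_axis=-1),
            peak_signal_noise_ratio(reference, out),
        )
    return scores
```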

4.4. Objective Comparison

To further validate the effectiveness of our proposed method, three state-of-the-art benchmarking methods are utilized for comparison: Fast Stylization (FT) [25], Perceptual Losses for Real-Time Style Transfer (PLST) [10], and Artistic Style Neural Network (ASNN) [8]. Qualitative and quantitative results using Vincent van Gogh’s paintings are shown in Figure 18 and Table 3, respectively.
As seen in Figure 18, the proposed PTGCF method can simulate the color tone of the oil paintings, preserve the artist’s original brush strokes, and, most importantly, keep the semantic content of the images much better than the three benchmarking methods. For the portrait examples, the person can be clearly recognized in our rendered result. Similarly, for the scene samples, objects such as trees, houses, and the chair are well recognized in our results; for the third sample, the shadows on the street and house are also characterized. In contrast, all three benchmarking methods fail to preserve the semantic content of the rendered images. For the quantitative results using PSNR, SSIM, and LPIPS [26], the proposed PTGCF produces higher SSIM and PSNR but lower LPIPS scores than all compared approaches, further validating its efficacy.

5. Conclusions

In this paper, an effective PTGCF model is proposed for optimized Impressionism-focused image stylization, which makes full use of both the representative color and the brush stroke texture of oil paintings and produces much improved visual effects. Four source photographs were selected (two landscapes and two portraits), and 65 oil paintings were studied, including 10 Van Gogh landscapes, 12 Van Gogh portraits, 15 Monet landscapes, 12 Monet portraits, 9 Seurat landscapes, and 7 Seurat portraits. The proposed method has shown its generalization across various artist styles. Our model readily learns certain Impressionist features that are difficult for other algorithms or models that focus on universal oil painting, possibly because the trajectory of texture rendering in PTGCF is based on Impressionism. Meanwhile, within the existing framework, further improvement can be achieved by replacing some current components with more advanced techniques. For example, ref. [27] can be used for the pre-processing stage, deep learning based color transfer methods [28] for the color matching stage, and a transformer-based network [29] for edge enhancement.
According to current research trends, further improvements can still be made in the process of image stylization, such as determining whether a rendered image belongs to the expected style. Deep learning based analysis can be applied, such as the DenseNet [30] and MTFFNet [31] models. Therefore, our future work is to establish an improved deep learning network to objectively classify rendered images, in order to achieve a closer approximation between rendered images and real oil paintings.
Another line of work would focus on semantic segmentation based image stylization. Semantic information about image content is seldom considered in image stylization. Most methods extract low-level formal features of the image, such as the direction field of the segmentation block and the determined positions of the strokes. Because little consideration is given to the content information of the image, the desired effect regarding the position and direction of the brush strokes cannot be achieved. Moreover, there is a lack of distinction between the important and non-important elements of the image. Stylization, such as various forms of filtering operations, abstracts graphics to a certain extent. Therefore, content-aware image stylization could be a new direction in the future.

Author Contributions

Conceptualization, J.G.; Data curation, J.G., L.M. and X.Z.; Formal analysis, X.Z. and Y.Y.; Funding acquisition, L.M.; Investigation, Y.Y.; Methodology, J.G.; Project administration, L.M.; Visualization, X.L. and Y.Y.; Writing – original draft, J.G. and X.L.; Writing – review & editing, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was financially supported by the Application of multi-sensor image information fusion technology in fountain landscape image, Project Number: 108/441219001; Technology Innovation Leading Program of Shaanxi Province, Project Number: 2020QFY03-04.

Informed Consent Statement

The images of “Portrait1 and Portrait2” in the manuscript are photos of former colleagues of the first author in Japan, where consent was received to publish the image in the research paper.

Acknowledgments

We thank Maher Assaad from Ajman University for his kind guidance and assistance in modelling some benchmarking methods.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liao, Y.; Huang, Y. Deep Learning-Based Application of Image Style Transfer. Math. Probl. Eng. 2022, 2022, 1693892.
  2. Kumar, M.P.; Poornima, B.; Nagendraswamy, H.S.; Manjunath, C. A comprehensive survey on non-photorealistic rendering and benchmark developments for image abstraction and stylization. Iran J. Comput. Sci. 2019, 2, 131–165.
  3. Efros, A.A.; Freeman, W.T. Image Quilting for Texture Synthesis and Transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 28 August 2001; pp. 341–346.
  4. Hertzmann, A.; Jacobs, C.E.; Oliver, N.; Curless, B.; Salesin, D.H. Image Analogies. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 28 August 2001; pp. 327–340.
  5. Ashikhmin, N. Fast texture transfer. IEEE Comput. Graph. Appl. 2003, 23, 38–43.
  6. Lee, H.; Seo, S.; Ryoo, S.; Yoon, K. Directional Texture Transfer. In Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, Annecy, France, 7–10 June 2010; pp. 43–48.
  7. Semmo, A.; Limberger, D.; Kyprianidis, J.E.; Döllner, J. Image stylization by interactive oil paint filtering. Comput. Graph. 2016, 55, 157–171.
  8. Gatys, L.; Ecker, A.; Bethge, M. A Neural Algorithm of Artistic Style. J. Vis. 2016, 16, 326.
  9. Gatys, L.A.; Bethge, M.; Hertzmann, A.; Shechtman, E. Preserving color in neural artistic style transfer. arXiv 2016, arXiv:1606.05897.
  10. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual Losses for Real-Time Style Transfer and Super-Resolution; Springer: Berlin/Heidelberg, Germany, 2016; pp. 694–711.
  11. Li, C.; Wand, M. Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2479–2486.
  12. Liao, J.; Yao, Y.; Yuan, L.; Hua, G.; Kang, S.B. Visual attribute transfer through deep image analogy. arXiv 2017, arXiv:1705.01088.
  13. Zou, Z.; Shi, T.; Qiu, S.; Yuan, Y.; Shi, Z. Stylized neural painting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15689–15698.
  14. Liu, S.; Lin, T.; He, D.; Li, F.; Deng, R.; Li, X.; Ding, E.; Wang, H. Paint transformer: Feed forward neural painting with stroke prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 6598–6607.
  15. Deng, Y.; Tang, F.; Dong, W.; Ma, C.; Pan, X.; Wang, L.; Xu, C. StyTr2: Image Style Transfer with Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–20 June 2022; pp. 11326–11336.
  16. Yan, Y.; Ren, J.; Sun, G.; Zhao, H.; Han, J.; Li, X.; Marshall, S.; Zhan, J. Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement. Pattern Recognit. 2018, 79, 65–78.
  17. Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic tone reproduction for digital images. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, San Antonio, TX, USA, 21–26 July 2002; pp. 267–276.
  18. Yang, H.; Nan, G.; Lin, M.; Chao, F.; Shen, Y.; Li, K.; Ji, R. LAB-Net: LAB Color-Space Oriented Lightweight Network for Shadow Removal. arXiv 2022, arXiv:2208.13039.
  19. Reinhard, E.; Adhikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Comput. Graph. Appl. 2001, 21, 34–41.
  20. Super Fast Color Transfer between Images. Available online: https://pyimagesearch.com/2014/06/30/super-fast-color-transfer-images (accessed on 1 January 2022).
  21. Bradski, G. The openCV library. Dr. Dobb’s J. Softw. Tools Prof. Program. 2000, 25, 120–123.
  22. Gao, J.; Li, D.; Gao, W. Oil painting style rendering based on Kuwahara filter. IEEE Access 2019, 7, 104168–104178.
  23. Hertzmann, A. A survey of stroke-based rendering. IEEE Comput. Graph. Appl. 2003, 23, 70–81.
  24. Engeldrum, P.G. Psychometric Scaling: A Toolkit for Imaging Systems Development; Imcotek Press: Winchester, MA, USA, 2000.
  25. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Instance normalization: The missing ingredient for fast stylization. arXiv 2016, arXiv:1607.08022.
  26. Wang, Q.; Wang, Z.; Genova, K.; Srinivasan, P.P.; Zhou, H.; Barron, J.T.; Martin-Brualla, R.; Snavely, N.; Funkhouser, T. Ibrnet: Learning multi-view image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 4690–4699.
  27. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Sbert, M. Color channel compensation (3C): A fundamental pre-processing step for image enhancement. IEEE Trans. Image Process. 2019, 29, 2653–2665.
  28. Liu, S. An Overview of Color Transfer and Style Transfer for Images and Videos. arXiv 2022, arXiv:2204.13339.
  29. Luthra, A.; Sulakhe, H.; Mittal, T.; Iyer, A.; Yadav, S. Eformer: Edge enhancement based transformer for medical image denoising. arXiv 2021, arXiv:2109.08044.
  30. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  31. Jiang, W.; Wang, X.; Ren, J.; Li, S.; Sun, M.; Wang, Z.; Jin, J.S. MTFFNet: A Multi-task Feature Fusion Framework for Chinese Painting Classification. Cogn. Comput. 2021, 13, 1287–1296.
Figure 1. (a) Monet, “Terrace at St. Adresse, 1866”, (b) Gogh, “Wheatfield with Crows 1890”.
Figure 2. Sample of image rendering: (a) Landscape, (b) Monet, “Bend in the Epte River near Giverny”, (c) Stylized image.
Figure 3. The original digital photos for stylized migration.
Figure 4. Process of producing oil painting-style images made by non-photorealistic rendering.
Figure 5. Image samples, (a) source image, (b) oil painting image, (c) image after color conversion.
Figure 6. (a) Monet, “Impression Sunrise”, (b) Monet, “Self-Portrait”, (c) Van Gogh, “Cypress against a Starry Sky”, (d) Van Gogh, “Self-Portrait”, (e) Seurat, “A Sunday Afternoon on the Island of La Grande Jatte”, (f) Seurat, “Model”.
Figure 7. (a) Monet, “Bend in the Epte River near Giverny”, (b) Brush stroke image of (a), (c) Van Gogh, “Cypress against a Starry Sky”, (d) Brush stroke image of (c), (e) Seurat, “Models.Detail”, (f) Brush stroke image of (e).
Figure 8. (a) “Apples and Manderines” Renoir (impressionism), (b) enlargement of selected region.
Figure 9. (a) Calculate the average value of the power spectrum of each angle, (b) Monet stroke direction information.
Figure 10. Frequency domain texture guided color information fusion, (a) transfer the edge enhanced image to FFT domain, (b) shift the sliding window and update the value in FFT domain based on phase angle.
Figure 11. Sample of brush stroke, and the movement of 1/8, 1/4, and 1/2 of patch size.
Figure 12. Generated examples of landscape and portrait painting with the movement of 1/8, 1/4, and 1/2.
Figure 13. (a) Landscape, (b) Van Gogh, “Starry Night”, (c) Stylized image.
Figure 14. (a) Landscape, (b) Monet, “Haystacks series”, (c) Stylized image.
Figure 15. (a) Landscape, (b) Seurat, “Big Bowl Island Sunday Afternoon”, (c) Stylized image.
Figure 16. Sample of image stylization: (a) portrait image, (b) artists’ oil painting, (c) stylized image.
Figure 17. SSIM (a) and PSNR (b) of the rendered image with the increasing of ω in USM.
Figure 18. Visualized results of image rendering: (a) original image, (b) PTGCF, (c) FT, (d) PLST, (e) ASNN.
Table 1. Quantitative results of different edge enhancement methods in terms of SSIM/PSNR (higher is better).

Method       SSIM↑    PSNR↑
USM          0.9954   60.23
Gamma        0.9914   58.84
Laplacian    0.9930   58.82

Table 2. Quantitative results of brush stroke window size comparison in terms of SSIM/PSNR (higher is better).

Size (Pixels)   SSIM↑    PSNR↑
128             0.9954   60.23
192             0.9944   60.09
256             0.9944   60.11

Table 3. Quantitative results of different methods in terms of SSIM/PSNR (higher is better) and LPIPS (lower is better).

Method   SSIM↑    PSNR↑   LPIPS↓
PTGCF    0.9954   60.23   0.0306
FT       0.9914   58.84   0.0312
PLST     0.9930   58.82   0.0327
ASNN     0.9911   58.93   0.0656