Article

Pixel-Based Approach for Generating Original and Imitating Evolutionary Art

School of Computer Science, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(8), 1311; https://doi.org/10.3390/electronics9081311
Submission received: 20 July 2020 / Revised: 10 August 2020 / Accepted: 12 August 2020 / Published: 14 August 2020
(This article belongs to the Special Issue Evolutionary Machine Learning for Nature-Inspired Problem Solving)

Abstract

We propose a pixel-based evolution method to automatically generate evolutionary art. Our method can generate diverse artworks, including original artworks and imitating artworks, with different artistic styles and high visual complexity. The generation process is fully automated. To adapt to the pixel-based representation, a particle swarm optimization (PSO) algorithm modified with a von Neumann neighbor topology is employed. The fitness functions of the PSO are constructed in three steps: first, a set of aesthetic fitness functions is defined; next, an imitating fitness function is designed; finally, the aesthetic and imitating fitness functions are weighted into a single objective function that drives the modified PSO. Both original and imitating outputs are shown. A questionnaire was designed to investigate the subjective aesthetic response to the proposed evolutionary art, and its statistics are reported.

1. Introduction

Evolutionary art is a branch of generative art that is automatically generated by evolutionary computation. Evolutionary computation can be used to create various artworks, including 2D artworks, 3D artworks, music, and animation [1,2]. Genotype–phenotype algorithms, such as genetic algorithms and many nature-inspired evolutionary algorithms, are widely used to create evolutionary art [3,4]. It is not easy for artificial intelligence to automatically generate artworks with significant aesthetic value, since the concept of aesthetics is subjective and hard to measure. For this reason, several aesthetic measurement methods have been proposed to guide the algorithms [5,6].
Approaches for evolutionary art can be classified into two categories: agent-based methods and pixel-based methods. The majority of approaches are agent-based. Some of these approaches design a small number of agents that act on the canvas following a series of evolution rules [7]. The agents change the image as they move, and the evolutionary art emerges over the agents' evolution iterations. Other approaches design non-figurative agents that receive an image as a self-organizing map [8]; the agents evolve individually and their outputs are filtered. Pixel-based methods are less common. In these methods, the input images are converted into Red-Green-Blue (RGB) or Hue-Saturation-Value (HSV) data and evolved into aesthetically pleasing images. Most pixel-based methods are fully automatic evolutionary methods that do not need manual training or pre-trained data.
In this paper, we propose a pixel-based evolution method for creating evolutionary art. The main motivation of our work is to provide a supplement to existing evolutionary art generation methods. Since there is no significant difference in the essential mechanism between various agent-based methods, the patterns and styles of agent-based evolutionary artworks are similar. The proposed method has three advantages. First, the artistic style of the artworks it generates is completely different from that of artworks generated by agent-based methods, offering viewers a different kind of artistic enjoyment. Second, it is a fully automated generation method, requiring no manual intervention such as data pre-training or parameter adjustment. Third, it can generate both original artworks and imitating artworks. Taking advantage of its pixel-based characteristics, an imitating fitness function was designed: a target artwork can be supplied as input, and the method generates an artwork that not only imitates the target but also has its own creative aesthetic features. The main goal of our work is to propose an originally creative, self-organized, fully automated method to generate evolutionary artworks. We employ a modified PSO algorithm to implement the proposed method; PSO, as an automatic evolutionary algorithm, is well suited to the pixel-based approach.
The remainder of this paper is organized as follows. Section 2 reviews the development of evolutionary art. Section 3 describes the proposed approach in detail, including the selection of optimization objectives and the evolution method. Section 4 presents our evolution results, including the original evolutionary artworks and the imitating evolutionary artworks. Section 5 reports the results of a questionnaire comparing man-made artworks and our evolutionary artworks. Contributions and future work are summarized in Section 6.

2. Background and Related Works of Evolutionary Art

The first application of evolutionary algorithms to create shapes or designs was proposed by Richard Dawkins. His method, named biomorphs, evolved 2D shapes made up of straight black lines whose length, position, and angle were defined by a set of instructions and rules [9]. After that, Karl Sims successfully applied evolutionary techniques and computer graphics to create evolutionary art with procedural textures: 3D plant structures were grown using genetic parameters, and images, animations, and solid textures were created by mutating symbolic Lisp expressions [10]. The biomorphs, as some of the earliest evolutionary artworks, were extended to 3D models by Todd et al. with a method called the interactive genetic algorithm, which allows people to manually select mutant individuals [11]. Ibrahim designed a system named Genshade, which can replicate characteristics of a target image by performing wavelet analysis; RenderMan shaders with noise functions were evolved within the Genshade system [12]. The concept of artificial life is also suitable for evolutionary art generation. Silva applied artificial life to evolutionary art; in particular, an interactive board let people interact directly with the agents [13]. Many other approaches use efficient and autonomous methods such as deep learning, neural networks, and convolutional neural networks [14,15,16].
Aesthetic selection is also a focus of evolutionary art research. Since evolutionary art approaches are automated or semi-automated, they can generate a large number of outputs in a short time. We do not call all of these outputs "artworks", because many of them have no significant aesthetic value. People face user fatigue when they manually pick outputs with aesthetic value, so automated methods to identify artworks are needed. To support the exploration of new territory in any generative system, McCormack and Lomas visualized both genotype and phenotype space with dimensionality reduction approaches [17]. Easton et al. proposed two approaches to address user fatigue by improving user engagement: first by using virtual environments, and second by improving the predictability of the evolutionary art and giving the user more control [18]. To reduce user fatigue in interactive evolutionary computation with a human-in-the-loop evaluation method, Ashlock et al. used the problem of locating interesting fractals to introduce an application of fertility: high-fertility sets of fractal parameters serve as automatically generated content in an application intended to let the user find pleasing fractals [19]. Several aesthetic measurement and aesthetic learning methods have also been proposed to address these problems [20,21,22].
As mentioned, there are many agent-based evolutionary art generation approaches. One example driven by figurative agents is the work of Choi and Ahn [23]. They proposed a multi-agent-based art production framework for generating artworks that contain highly complex dynamics. Agents act on the canvas following the three rules of the Boids model, and each agent possesses a chaotic neural network trained by a differential evolution algorithm. The colors can be evolved to represent different styles. Another example is Greenfield's ant painting methods. Greenfield designed a series of ant painting methods, including ant paintings using a multiple pheromone model [24,25,26]. The ant colony mechanism was employed to generate evolutionary art: several ants move on the canvas according to ant colony optimization rules and leave colorful trajectories. Different fitness criteria were used to evaluate the aesthetic contributions of the ants and determine different painting styles. Sample outputs of these approaches are shown in Figure 1.
Some state-of-the-art methods have been applied to evolutionary art generation. Tan et al. introduced Generative Adversarial Networks (GANs) in a method named ArtGAN [27]. ArtGAN can synthetically generate artworks with abstract characteristics, even if the background and foreground of the input artworks are not clearly distinguishable. Wang et al. proposed a multimodal convolutional neural network (CNN) that takes into consideration faithful representations of both color and luminance channels and performs stylization hierarchically with multiple losses of increasing scales [28]. Their method solves the texture scale mismatch issue and generates more visually appealing stylized results on high-resolution images. However, compared with our pixel-based approach, these methods have some drawbacks. The CNN-based method needs pre-training data and is therefore effectively semi-automated. Both ArtGAN and the CNN-based method require an input image to generate an artwork, which means they cannot generate original evolutionary artwork without an input. Our pixel-based approach is fully automated and can generate original evolutionary artworks without specific inputs.
There are several challenges in evolutionary art. First, many agent-based approaches use genotype–phenotype algorithms, which are very sensitive to the genotype input of the system. A minor change in genotype may produce a radical change in phenotype, leading to an inconsistency between genotype and phenotype; in other words, the outputs of genotype–phenotype methods are unstable and unpredictable. Second, many agent-based and pixel-based approaches need a fitness function as the target to evolve toward. The target affects the evolutionary art outputs, so it should be aesthetically pleasing; it is a function and must therefore be mathematically representable and computable. However, as a concept of art and philosophy, aesthetics is still not fully understood, and the aesthetic value of an artwork is hard to measure. A further challenge is that fully automated evolutionary art approaches are preferable: one of the original intentions and ultimate goals of evolutionary art is for the machine to autonomously generate artworks that provide aesthetic enjoyment, yet many existing approaches require manual intervention or supervision. The main goal of our work is to propose a non-agent-based, automatic approach using aesthetic fitness functions, which can automatically generate evolutionary art with aesthetic enjoyment.

3. Pixel-Based Evolution Method for Creating Evolutionary Art

The proposed approach is mainly based on a modified PSO evolutionary algorithm. To execute the optimization algorithm, a fitness function is necessary. We built a set of aesthetic fitness functions based on the Cartesian Genetic Programming (CGP) aesthetic function set [6] and designed an imitating fitness function. The final fitness function is obtained by weighting the two parts. The optimization algorithm is a modified version of the original PSO algorithm. When the evolutionary process completes, we obtain the output artworks. The flowchart of our approach is shown in Figure 2.

3.1. Optimization Problems Setting for Evolutionary Art

The evolution of evolutionary art needs a certain goal, which can be described as an optimization target. In this section, we propose several aesthetic fitness functions and an imitating fitness function, and transform them into a single-objective optimization problem. There are significant differences between agent-based and pixel-based approaches to evolutionary art. Agent-based approaches, such as genetic algorithms, genetic programming, and ant colony optimization, evolve evolutionary art with a few agents (particles). Our pixel-based approach evolves evolutionary art with every single pixel. The pixels keep changing toward a group of goals, comprising multiple aesthetic fitness functions and an optional target artwork. This is a standard multi-objective optimization problem, described by Equations (1)–(3):
\[ \mathbf{x} = (x_1, x_2, x_h, x_s, x_v) \tag{1} \]
\[ \min \; f_h(\mathbf{x}),\ f_s(\mathbf{x}),\ f_v(\mathbf{x}),\ f_t(\mathbf{x}) \tag{2} \]
\[ D = \{\, \mathbf{x} \in \mathbb{R}^5 \mid 0 \le x_1 < \text{width},\ 0 \le x_2 < \text{height},\ 0 \le x_h < 256,\ 0 \le x_s < 256,\ 0 \le x_v < 256 \,\} \tag{3} \]
The vector $\mathbf{x}$ contains a set of attributes for each pixel. $x_1$ and $x_2$ are constant variables for the current pixel: they indicate its coordinates and participate in the fitness functions. $x_h$, $x_s$, and $x_v$ are the hue, saturation, and value (H, S, V) color channels of the current pixel.
The functions $f_h(\mathbf{x})$, $f_s(\mathbf{x})$, and $f_v(\mathbf{x})$ are the aesthetic fitness functions. Proposing fitness functions based on aesthetics is challenging, since aesthetics is usually a subjective concept. Some aesthetic measurement methods can compute scores for different aspects of an image, helping people judge evolutionary art quickly. However, these methods measure the aesthetics of the whole picture, whereas we want to know whether individual pixels become more "beautiful". To solve this problem, aesthetic fitness functions based on CGP are employed. These functions take the coordinates of a pixel as input and output a non-negative integer less than 256 as the value of an HSV channel. The CGP aesthetic fitness function set contains 15 functions, which are shown in Table 1.
The range of the CGP aesthetic fitness functions is $[0, 255]$. A parameter $p$ is added to increase flexibility. To adapt to the optimization method, the problem is converted into a single-objective optimization problem (SOP). Our aesthetic fitness functions calculate the absolute value of the difference between the pixel's HSV value and the CGP aesthetic function value. The CGP function $F$ used in Equations (4)–(6) is selected from the CGP function set above; a function may be selected repeatedly for the different channel functions $f_h(\mathbf{x})$, $f_s(\mathbf{x})$, and $f_v(\mathbf{x})$:
\[ f_h(\mathbf{x}) = |x_h - F(x_1, x_2)| \tag{4} \]
\[ f_s(\mathbf{x}) = |x_s - F(x_1, x_2)| \tag{5} \]
\[ f_v(\mathbf{x}) = |x_v - F(x_1, x_2)| \tag{6} \]
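As an illustration, the following minimal Python sketch evaluates Equations (4)–(6) for a single pixel. The function name and the tuple layout of `pixel` are our own assumptions; `F_h`, `F_s`, and `F_v` stand for CGP functions chosen from Table 1.

```python
def aesthetic_fitness(pixel, F_h, F_s, F_v):
    """Per-channel aesthetic fitness, Equations (4)-(6).

    pixel: (x1, x2, xh, xs, xv) -- coordinates and HSV values of one pixel.
    F_h, F_s, F_v: CGP aesthetic functions mapping (x1, x2) to [0, 255].
    """
    x1, x2, xh, xs, xv = pixel
    f_h = abs(xh - F_h(x1, x2))  # Equation (4)
    f_s = abs(xs - F_s(x1, x2))  # Equation (5)
    f_v = abs(xv - F_v(x1, x2))  # Equation (6)
    return f_h, f_s, f_v
```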
The imitating fitness function $f_t(\mathbf{x})$ is optional and requires an artwork as the imitation target. We assume that the resolution of the target artwork is $W \times H$. In this case, the coordinate attributes $x_1$ and $x_2$ of the vector $\mathbf{x}$ are restricted to $x_1 \in [0, W-1]$ and $x_2 \in [0, H-1]$. The HSV values of a pixel of the target artwork are denoted by $h$, $s$, and $v$. The imitating fitness function $f_t(\mathbf{x})$ is described by Equation (7):
\[ f_t(\mathbf{x}) = \sum_{j=0}^{W-1} \sum_{k=0}^{H-1} \frac{|x_h - h_{jk}| + |x_s - s_{jk}| + |x_v - v_{jk}|}{3 \cdot W \cdot H \cdot \left( (x_1 - j)^2 + (x_2 - k)^2 + 1 \right)} \tag{7} \]
The imitating fitness function describes how similar the current pixel is to the target artwork. The influence of farther pixels is weakened by the square of the Euclidean distance; a baseline of 1 is added so that the denominator is nonzero when the compared pixel is the current pixel itself. With all of the fitness functions defined, the final SOP fitness function for pixel-based evolutionary art is given by Equation (8):
\[ \min \; f(\mathbf{x}) = f_h(\mathbf{x}) + f_s(\mathbf{x}) + f_v(\mathbf{x}) + t \cdot f_t(\mathbf{x}) \tag{8} \]
Weighting is the simplest way to integrate multiple objectives into a single objective. We assign the same weight to the aesthetic fitness functions. For the imitating fitness function, we assign an imitating factor $t$. The higher the imitating factor, the more similar the evolved imitating art is to the target art. Conversely, if the imitating factor is set to zero, the optional target art input becomes unnecessary and the method generates an autonomous, original evolutionary artwork.
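A sketch of the combined fitness follows, under the assumption that the numerator of Equation (7) consists of absolute per-channel differences and that the target artwork is given as an H × W array of (h, s, v) triples. The names `imitating_fitness` and `sop_fitness` are our own, and `aesthetic_fitness` refers to the sketch shown after Equations (4)–(6).

```python
def imitating_fitness(pixel, target):
    """Imitating fitness f_t, Equation (7): HSV difference to every target
    pixel, weighted down by the squared Euclidean distance plus 1."""
    x1, x2, xh, xs, xv = pixel
    H, W = len(target), len(target[0])      # target[k][j] = (h, s, v)
    total = 0.0
    for j in range(W):
        for k in range(H):
            h, s, v = target[k][j]
            diff = abs(xh - h) + abs(xs - s) + abs(xv - v)
            dist = (x1 - j) ** 2 + (x2 - k) ** 2 + 1  # baseline 1 at the pixel itself
            total += diff / (3.0 * W * H * dist)
    return total


def sop_fitness(pixel, F_h, F_s, F_v, target=None, t=0.0):
    """Weighted single-objective fitness, Equation (8)."""
    f_h, f_s, f_v = aesthetic_fitness(pixel, F_h, F_s, F_v)
    f_t = imitating_fitness(pixel, target) if target is not None and t > 0 else 0.0
    return f_h + f_s + f_v + t * f_t
```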

3.2. Evolution Method

The pixels of an image are organized on a grid. Since the genetic algorithm lacks topology information, the PSO algorithm is a better choice. The original PSO algorithm uses a global neighbor topology; however, this is unnecessary for pixel-based evolution and results in the loss of local features. In the following, we use PSO terminology: the "pixels" are called "particles"; the vector $\mathbf{x}$ is called the "location", since it is a set of values in the search space; and the change of the vector $\mathbf{x}$ is called the "velocity". We modify the global neighbor relationship into a von Neumann topology, so that the focal particle only considers the values of adjacent particles. The modified PSO algorithm is described by Equations (9) and (10):
\[ \Delta \mathbf{x}_i^{k+1} = \omega\, \Delta \mathbf{x}_i^{k} + c_1 r_1 \left( \mathbf{xbest}_i^{k} - \mathbf{x}_i^{k} \right) + c_2 r_2 \left( \mathbf{vbest}_i^{k} - \mathbf{x}_i^{k} \right) \tag{9} \]
\[ \mathbf{x}_i^{k+1} = \mathbf{x}_i^{k} + \Delta \mathbf{x}_i^{k+1} \tag{10} \]
where $k$ is the iteration index of the evolution process and $r_1$, $r_2$ are random factors in the range $[0, 1]$. $c_1$ is the acceleration weight toward the particle's personal best location, and $c_2$ is the acceleration weight toward the best location among the particle's von Neumann neighbors. The new velocity is computed by weighting the personal best location $\mathbf{xbest}_i^{k}$ and the von Neumann neighbor best location $\mathbf{vbest}_i^{k}$. The inertia weight factor $\omega$ controls the impact of the previous velocity on the current velocity. Finally, the new position of the focal particle is computed from the old position and the velocity increment.
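The update over a grid of particles can be sketched as follows. This is an illustrative NumPy implementation under our own assumptions: the personal-best fitness is stored per particle, border neighbours wrap around, and the HSV part of each location is clipped to [0, 255]; none of these details are specified above.

```python
import numpy as np

def pso_step(pos, vel, pbest, pbest_fit, w, c1=2.0, c2=2.0):
    """One von Neumann-topology PSO iteration, Equations (9) and (10).

    pos, vel, pbest: (H, W, 3) arrays holding the HSV part of each particle's
    location, its velocity, and its personal best (the coordinates x1, x2 are
    constant and therefore not updated).
    pbest_fit: (H, W) array with the fitness of each personal best.
    """
    H, W = pbest_fit.shape
    new_pos, new_vel = pos.copy(), vel.copy()
    for i in range(H):
        for j in range(W):
            # von Neumann neighbourhood: the particle plus its 4 adjacent particles
            neigh = [(i, j), ((i - 1) % H, j), ((i + 1) % H, j),
                     (i, (j - 1) % W), (i, (j + 1) % W)]
            bi, bj = min(neigh, key=lambda n: pbest_fit[n])
            vbest = pbest[bi, bj]
            r1, r2 = np.random.rand(), np.random.rand()
            new_vel[i, j] = (w * vel[i, j]
                             + c1 * r1 * (pbest[i, j] - pos[i, j])      # Equation (9)
                             + c2 * r2 * (vbest - pos[i, j]))
            new_pos[i, j] = np.clip(pos[i, j] + new_vel[i, j], 0, 255)  # Equation (10)
    return new_pos, new_vel
```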
Randomness determines the uniqueness and character of evolutionary art. The randomness of genotype–phenotype approaches, such as genetic algorithms and genetic programming, comes from the selection–crossover–mutation process. The randomness of PSO approaches comes from the inertia weight $\omega$ and the random factors $r_1$ and $r_2$.
The inertia weight factor $\omega$ varies with the compactness of the swarm. To prevent premature convergence of the particles, a larger inertia weight factor is needed. The inertia weight factor $\omega$ is self-adaptive and is calculated by Equation (11):
\[ \omega = \omega_{max} - \frac{k}{iter_{max}} \left( \omega_{max} - \omega_{min} \right) + r \tag{11} \]
where $(\omega_{min}, \omega_{max})$ is the dynamic range of the linearly decreasing weight: $\omega_{max}$ is the starting value of the inertia weight and $\omega_{min}$ is the ending value. The random seed $r$ is drawn from $(0, 1)$. $k$ is the current iteration number and $iter_{max}$ is the maximum number of iterations. By introducing the kinetic energy of the particle swarm, the inertia weight is reduced with a certain buffer, so that it can be adjusted adaptively with the swarm's kinetic energy. The self-adaptive updating of the inertia weight benefits precision and enables the algorithm to converge better.
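For completeness, a direct transcription of Equation (11); the function name and default values are ours, and, as stated above, the random term r is drawn from (0, 1).

```python
import random

def inertia_weight(k, iter_max, w_min=0.4, w_max=0.9):
    """Self-adaptive inertia weight, Equation (11): linear decrease from
    w_max to w_min over iter_max iterations plus a random term r in (0, 1)."""
    r = random.random()
    return w_max - (k / iter_max) * (w_max - w_min) + r
```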

4. Experiment Results

4.1. Original Evolutionary Art without a Target

Before the proposed evolutionary program runs, a set of initial values must be set appropriately. First, we set the parameters of the optimization objective phase described in Section 3.1. The resolution of the canvas was set to $300 \times 300$, so the coordinate attributes $x_1$ and $x_2$ were restricted to $x_1 \in [0, 299]$ and $x_2 \in [0, 299]$. The parameter $p$ of the CGP aesthetic fitness functions was set to 10. The CGP functions used in our fitness functions were selected randomly. Next, for the evolution method, many parameter selection schemes exist for PSO; we set the parameters as follows. The starting value of the inertia weight was 0.9 and the ending value was 0.4, so the inertia weight was restricted to $(0.4, 0.9)$. The maximum number of iterations was set to 150. The acceleration factors $c_1$ and $c_2$ were both set to 2. These initial parameter values have been shown to grant convergence and good exploration, and are still used in many state-of-the-art optimization algorithms [29].
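The settings above can be summarized as follows; the dictionary keys are our own naming, and only the values are taken from this section.

```python
# Illustrative parameter set for the original (target-free) runs of Section 4.1.
params = {
    "canvas_size": (300, 300),   # x1, x2 restricted to [0, 299]
    "cgp_p": 10,                 # parameter p of the CGP aesthetic functions
    "iter_max": 150,             # maximum number of iterations
    "c1": 2.0,                   # acceleration factor (personal best)
    "c2": 2.0,                   # acceleration factor (von Neumann neighbour best)
    "w_max": 0.9,                # starting inertia weight
    "w_min": 0.4,                # ending inertia weight
    "t": 0.0,                    # imitating factor: 0 means no target artwork
}
```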
Figure 3 shows the results of five independent runs. Each row shows the evolutionary art at iterations 0, 30, 60, 90, 120, and 150. Different initial inputs were set for each run. The final evolutionary art shows random, fuzzy, and soft features. The randomness results from $\omega$, $c_1$, and $c_2$. The fuzziness results from the von Neumann topology of the particles, since they learn from each other's values during the iterations. The aesthetic styles differ because the aesthetic fitness functions were selected randomly in different runs.

4.2. Imitating Evolutionary Art with a Target

We used the same parameters for the imitating runs as for the original runs, except for an additional target art input and the imitating factor $t = 0.2$. Figure 4 shows the results of three independent runs. Each run has two input images, four intermediate generations, and one final imitating evolutionary artwork. Our method does not intend to reproduce the target art precisely: the results show a tendency to evolve toward the target art while retaining some original characteristics. Subjectively, an aesthetic quality can be perceived in the final imitating evolutionary art and in some intermediate generations.

5. Artistic Questionnaire of Proposed Evolutionary Art

Viewers' reactions to evolutionary art are important. Although cognitive science and related fields are developing, it is still difficult to quantify the subjective aesthetic feeling evoked by an artwork. Among the available methods for evaluating the aesthetics of artworks, questionnaires and surveys are one option. We conducted an artistic questionnaire to explore perceptions of our evolutionary art in terms of aesthetic appraisal. There were 73 participants, 36 female and 37 male; the mean age was 22.6 and the standard deviation was 11.7. The questionnaire had three questions for the prepared artworks:
  • Q1. Do you think this artwork is man-made?
  • A1. A binary response: "Yes" or "No".
  • Q2. Please rate this artwork from the perspective of artistic sense.
  • A2. A rating on a 5-point Likert scale.
  • Q3. Please describe what this artwork wants to depict. (For imitating artworks only.)
  • A3. A free-text description of the artwork.
Q1 and Q2 were asked for every artwork; Q3 was asked only for imitating evolutionary artworks. We randomly picked 10 man-made artworks, 10 original evolutionary artworks, and 10 imitating evolutionary artworks, and all 30 pictures were shown to the participants in random order. For Q3, we manually checked whether each answer was correct. The statistical results of the questionnaire are shown in Figure 5. The results of Q1 show that when the three kinds of artworks were fully mixed, the majority of participants thought the prepared artworks were man-made, even though two thirds of them were actually evolutionary artworks. The results of Q2 show that the imitating evolutionary artworks achieved slightly lower scores than the man-made artworks (average 3.8 vs. 4.2). For the original evolutionary artworks, the scores varied widely, but the average was acceptable (3.3). The results of Q3 show that the majority of participants could correctly recognize the imitation target of the proposed imitating evolutionary art.

6. Our Contributions and Future Works

Based on the generation results and the questionnaire, we conclude that the proposed pixel-based method can be used as a strategy to guide evolutionary art. The method has several advantages: it is a fully automatic evolutionary method that does not need manual training, and its weight parameters can be adjusted freely to generate different artworks. There are several possible applications of our work. For example, the proposed method can be used to automatically generate stickers for messaging apps (e.g., Telegram stickers); our method can easily generate multiple sticker sets with consistent art styles. Another possible application is mobile phone applications built on the proposed method, which, like other evolutionary art generation applications such as Art Done Quick, can offer users a pleasant artistic experience. We summarize the main contributions of this work:
  • Our work demonstrates the potential of the pixel-based method to generate evolutionary art. We perceive the proposed method not as a simple improved evolutionary art generation method but rather as a meta-method. The proposed method is a supplement for generating evolutionary art and enriches the style of evolutionary art.
  • Our method provides a new approach to generate evolutionary imitating art. The imitating fitness function is designed for the pixel-based generation. The similarity of imitation can be simply controlled by the imitating factor.
In the future, we will clarify the relationship between the initial parameters and the output styles, and extend the proposed method to generate more complex and diverse artworks, including 3D artworks as well as 2D artworks. Other evolutionary algorithms will also be explored for pixel-based evolutionary art.

Author Contributions

Conceptualization, Y.W.; methodology, Y.W.; Programming, Y.W.; writing-original draft preparation, Y.W.; supervision and review, R.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Antunes, R.F.; Leymarie, F.F.; Latham, W. Two Decades of Evolutionary Art Using Computational Ecosystems and Its Potential for Virtual Worlds. J. Virtual Worlds Res. 2014, 7. [Google Scholar] [CrossRef] [Green Version]
  2. Lewis, M. Evolutionary visual art and design. In The Art of Artificial Evolution; Springer: Berlin/Heidelberg, Germany, 2008; pp. 3–37. [Google Scholar]
  3. House, A.; Agah, A. Autonomous Evolution of Digital Art Using Genetic Algorithms. J. Intell. Syst. 2016, 25, 319–333. [Google Scholar] [CrossRef]
  4. De Smedt, T.; Lechat, L.; Daelemans, W. Generative art inspired by nature, using NodeBox. In European Conference on the Applications of Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  5. den Heijer, E.; Eiben, A.E. Comparing aesthetic measures for evolutionary art. In European Conference on the Applications of Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  6. Ashmore, L.; Miller, J. Evolutionary Art with Cartesian Genetic Programming. Available online: http://www.emoware.org/evolutionary_art.asp (accessed on 14 August 2020).
  7. Greenfield, G.; Machado, P. Simulating artist and critic dynamics—An agent-based application of an evolutionary art system. In Proceedings of the International Joint Conference on Computational Intelligence, Madeira, Portugal, 5–7 October 2009. [Google Scholar]
  8. Saunders, R.; Gero, J.S. Artificial creativity: A synthetic approach to the study of creative behaviour. In Computational and Cognitive Models of Creative Design V, Key Centre of Design Computing and Cognition; University of Sydney: Sydney, Australia, 2001; pp. 113–139. [Google Scholar]
  9. Dawkins, R. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design; WW Norton: New York, NY, USA, 1986. [Google Scholar]
  10. Sims, K. Artificial evolution for computer graphics. Comput. Graphics 1991, 25. [Google Scholar] [CrossRef]
  11. Todd, S.; Latham, W. Mutator: A Subjective Human Interface for Evolution of Computer Sculptures; IBM United Kingdom Scientific Centre: Winchester, UK, 1991. [Google Scholar]
  12. Ibrahim, A.E.M. Genshade: An Evolutionary Approach to Automatic and Interactive Procedural Texture Generation; Texas A&M University: College Station, TX, USA, 1998. [Google Scholar]
  13. e Silva, T.B.P. Computational Evolutionary Art: Artificial Life and Effective Complexity. In International Conference on Human-Computer Interaction; Springer: Cham, Switzerland, 2019. [Google Scholar]
  14. Tanjil, F.; Ross, B.J. Deep learning concepts for evolutionary art. In International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar); Springer: Cham, Switzerland, 2019. [Google Scholar]
  15. Martindale, C.; Locher, P.; Petrov, V.; Berleant, A. Evolutionary and Neurocognitive Approaches to Aesthetics, Creativity and the Arts; Routledge: London, UK, 2019. [Google Scholar]
  16. Zaidel, D.W. Neuropsychology of Art: Neurological, Cognitive, and Evolutionary Perspectives; Psychology Press: London, UK, 2015. [Google Scholar]
  17. McCormack, J.; Lomas, A. Understanding Aesthetic Evaluation using Deep Learning. In International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar); Springer: Cham, Switzerland, 2020. [Google Scholar]
  18. Easton, E.; Bernardet, U.; Ekárt, A. Tired of Choosing? Just Add Structure and Virtual Reality. In International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar); Springer: Cham, Switzerland, 2019. [Google Scholar]
  19. Ashlock, D.; Brown, J.A.; Sultanaeva, L. Exploiting fertility to enable automatic content generation to ameliorate user fatigue in interactive evolutionary computation. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar]
  20. Li, Y.; Hu, C.-J. Aesthetic learning in an interactive evolutionary art system. In European Conference on the Applications of Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  21. den Heijer, E.; Eiben, A.E. Evolving art using multiple aesthetic measures. In European Conference on the Applications of Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  22. Blijlevens, J.; Thurgood, C.; Hekkert, P.; Chen, L.-L.; Leder, H.; Whitfield, T.W.A. The Aesthetic Pleasure in Design Scale: The development of a scale to measure aesthetic pleasure for designed artifacts. Psychol. Aesthet. Creat. Arts 2017, 11, 86–98. [Google Scholar] [CrossRef]
  23. Choi, T.J.; Ahn, C.W. Artificial life based on boids model and evolutionary chaotic neural networks for creating artworks. Swarm Evol. Comput. 2019, 47, 80–88. [Google Scholar] [CrossRef]
  24. Greenfield, G. Ant paintings based on the seed foraging behavior of P. barbatus. In Proceedings of the Bridges 2013: Mathematics, Music, Art, Architecture, Culture, Enschede, The Netherlands, 27–31 July 2013. [Google Scholar]
  25. Greenfield, G. Ant paintings using a multiple pheromone model. Leonardo 2006, 47, 319–326. [Google Scholar]
  26. Greenfield, G.; Machado, P. Ant- and Ant-Colony-Inspired ALife Visual Art. Artif. Life 2015, 21, 293–306. [Google Scholar] [CrossRef] [PubMed]
  27. Tan, W.R.; Chan, C.S.; Aguirre, H.E.; Tanaka, K. ArtGAN: Artwork synthesis with conditional categorical GANs. In 2017 IEEE International Conference on Image Processing (ICIP); IEEE: Beijing, China, 2017. [Google Scholar]
  28. Wang, X.; Oxholm, G.; Zhang, D.; Wang, Y.-F. Multimodal transfer: A hierarchical deep convolutional neural network for fast artistic style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21 July 2017. [Google Scholar]
  29. Pedersen, M.E.H. Good Parameters for Particle Swarm Optimization; Technical Report HL1001; Hvass Laboratories: Copenhagen, Denmark, 2010. Available online: https://pdfs.semanticscholar.org/a4ad/7500b64d70a2ec84bf57cfc2fedfdf770433.pdf (accessed on 14 August 2020).
Figure 1. Sample outputs of agent-based evolutionary art. (a) An evolutionary artwork generated with Boids model and evolutionary chaotic neural network. (b) Evolutionary artworks generated by Greenfield’s ant painting method.
Figure 2. The flowchart of the proposed method.
Figure 3. Original evolutionary art without a target. Each row represents a single run.
Figure 4. Imitating evolutionary art with a target.
Figure 5. Questionnaire results of proposed art.
Table 1. CGP aesthetic fitness functions.
F1: $(x_1 \,|\, x_2) \,\%\, 255$
F2: $(p \,\&\, x_1) \,\%\, 255$
F3: $(x_1 / (1 + x_2 + p)) \,\%\, 255$
F4: $(x_1 \cdot x_2) \,\%\, 255$
F5: $(x_1 + x_2) \,\%\, 255$
F6: $|x_1 - x_2| \,\%\, 255$
F7: $255 - (x_1 \,\%\, 255)$
F8: $|255 \cdot \cos x_1|$
F9: $|255 \cdot \tan((x_1 \,\%\, 45) \cdot \pi / 180)|$
F10: $|255 \cdot (\tan x_1) \,\%\, 255|$
F11: $(x_1^2 + x_2^2) \,\%\, 255$
F12: $x_1 \,\%\, (p + 1) + (255 - p)$
F13: $((x_1 + x_2)/2) \,\%\, 255$
F14: $255 \cdot (x_1 + 1)/(x_2 + 1)$ if $x_1 < x_2$, else $255 \cdot (x_2 + 1)/(x_1 + 1)$
F15: $|(x_1^2\, p^2 + x_2^2\, p^2) \,\%\, 255|$
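To show how the entries of Table 1 can be used in practice, here is a small sketch of a few of them written as Python callables. Only functions whose form is unambiguous above are included, and `make_cgp_functions` is a hypothetical helper name.

```python
import math

def make_cgp_functions(p=10):
    """A few of the CGP aesthetic fitness functions from Table 1 as callables;
    p is the flexibility parameter introduced in Section 3.1."""
    return {
        "F1": lambda x1, x2: (int(x1) | int(x2)) % 255,
        "F5": lambda x1, x2: (x1 + x2) % 255,
        "F6": lambda x1, x2: abs(x1 - x2) % 255,
        "F8": lambda x1, x2: abs(255 * math.cos(x1)),
        "F9": lambda x1, x2: abs(255 * math.tan((x1 % 45) * math.pi / 180)),
        "F13": lambda x1, x2: ((x1 + x2) / 2) % 255,
    }
```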
