Article

An Image Generation Method of Unbalanced Ship Coating Defects Based on IGASEN-EMWGAN

School of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212100, China
* Author to whom correspondence should be addressed.
Coatings 2023, 13(3), 620; https://doi.org/10.3390/coatings13030620
Submission received: 24 February 2023 / Revised: 9 March 2023 / Accepted: 12 March 2023 / Published: 14 March 2023

Abstract

During the process of ship coating, various defects occur owing to improper operation by workers, environmental changes, and other causes. The special characteristics of ship coating limit the amount of available data and lead to class imbalance, which hinders the effectiveness of deep learning-based models. Therefore, a novel hybrid intelligent image generation algorithm, called the IGASEN-EMWGAN model, is proposed in this paper to tackle these limitations for ship painting defect images. First, based on subsets of imbalanced ship painting defect image samples obtained by a bootstrap sampling algorithm, a batch of different base discriminators is trained independently with algorithm parameter and sample perturbation methods. Then, an improved genetic algorithm based on the simulated annealing algorithm is used to search for the optimal subset of base discriminators. Further, the IGASEN-EMWGAN model is constructed by fusing the base discriminators in this subset through a weighted integration strategy. Finally, the trained IGASEN-EMWGAN model is used to generate new defect images of the minority classes to obtain a balanced dataset of ship painting defects. Extensive experiments conducted on a real unbalanced ship coating defect database show that, compared with the baselines, the IS and FID scores are significantly improved by 4.92% and decreased by 7.29%, respectively, which proves the superior effectiveness of the proposed model.

1. Introduction

The shipbuilding industry is an indispensable part of the national equipment manufacturing industry, the epitome of modern industry, and a strategic industry related to national economic development and national defense security. Ship painting refers to the grinding, sandblasting, rust removal, and coating of the hull structure and outfitting parts. As one of the three pillars of the modern shipbuilding process [1], ship painting runs through the whole process of shipbuilding, from design and construction to the ship's delivery [2]. Since each part of the hull is exposed to a different corrosive environment, different anti-corrosive coatings must be applied to its surface before the ship is launched [3]. These coatings not only serve a decorative purpose but also function as an effective anti-fouling barrier, which is critical to the structural integrity, hydrodynamic properties, and service longevity of the vessel [4]. Affected by improper operation by workers, environmental changes during drying and curing, or the quality of the paint itself, diverse defects such as sagging and orange skin are produced during the ship painting process [5]. These coating defects are roughly divided into wet film defects and dry film defects. The wet film defects appear immediately after the coating is applied to the substrate surface, while the dry film defects arise during the drying and curing stages of the coating and when the coating is put into service.
The quality of the coating is directly related to the construction cycle and maintenance cost of the ship and is an important factor affecting the corrosion resistance of the hull and the service life of the ship [6]. The coating defect detection and filling phase is a key link in comprehensive PDCA quality management of coating. At present, in most domestic shipyards, coating defects are detected and the coating quality judged by staff at a specific time after the coating operation is completed, with the types and grades of defects recorded manually; this not only increases the intensity of and pressure on human work but also greatly increases the time cost and reduces the efficiency of ship painting operations [7]. Therefore, in the ship coating industry, it is essential to identify painting defects intelligently and to feed back information such as the type, size, shape, and location of each defect during the coating process. With the proposal of ship intelligent manufacturing theory and the continuous popularization of artificial intelligence technology in the ship painting industry, intelligent techniques have gradually been applied to the knowledge acquisition of ship painting defects, but there are few reports on the application of image-based ship coating defect recognition.
Machine learning (ML) is one of the core research topics of artificial intelligence and neural computing [8] and is extensively applied to classification and prediction problems. Generalization ability has always been one of the most important criteria for evaluating ML models. However, for various reasons, such as high labeling costs, data security, and privacy protection, there is no dedicated large dataset of ship painting defect images for training deep learning models, which are therefore highly susceptible to small-sample problems such as overfitting, low classification accuracy, and falling into local optima. In particular, the class imbalance of the ship coating defect dataset [9] means that traditionally trained classification models cannot easily identify the minority classes; such models overfit, and their generalization performance is greatly reduced.
Neural network-based generative models can alleviate the above problems [10,11]. Generative Adversarial Networks (GANs) and their variants [12] have become some of the most popular generative network models owing to their low loss and end-to-end advantages in generating images, and they have recently been successfully applied to image generation [13], image-to-image translation [14], image super-resolution enhancement [15], anomaly detection [16], and other applications. However, because GANs are unsupervised, problems remain, such as the difficulty of generating controllable, multi-class, high-quality datasets [17], unstable training [18], and mode collapse [19]. Therefore, many recent efforts on GANs have focused on overcoming these inherent drawbacks by developing various adversarial training approaches, such as optimizing the objective loss function, training additional discriminators, training multiple generators, and using a neuro-evolution computation strategy (NECS) [20]. One of them is E-GAN, proposed by Wang et al. [22], which is most widely used in image generation owing to its combination of the neuro-evolution computation strategy (NECS) and the adversarial training mechanism. Xiao et al. [23] proposed a rumor propagation model based on GANs and anti-rumor for enhancing homomorphism data in the sample space, and the experiments showed that the proposed method achieves 9%–29% precision gains over current methods. Chen et al. [23] proposed a hybrid algorithm combining a generative adversarial network (GAN) and a genetic algorithm (GA), and the experimental results showed that the proposed method can solve the permutation flow shop problem, verifying the algorithm's solution ability. Han et al. [24] proposed a Generative Adversarial Network with Evolutionary Generators (EG-GAN) and applied it to face image inpainting; experiments on various face image datasets show that EG-GAN successfully overcomes the gradient vanishing problem, achieves stable and efficient training, and generates visually reasonable images. Erivaldo et al. [25] proposed an algorithm for pruning GANs based on Evolution Strategy (ES) and Multi-Criteria Decision Making (MCDM) and applied it to medical imaging diagnosis; extensive experimental results show that the pruned GAN model reduces floating-point operations to 70% of those of the original model. Meng et al. [26] proposed a novel hybrid short-term power prediction model for newly built wind farms based on secondary evolutionary generative adversarial networks (SEGAN) and a dual-dimension attention mechanism (DDAM)-assisted bidirectional gated recurrent unit (BiGRU) to address insufficient data; experiments conducted on the Galicia Wind Farm in Sotavento show that the proposed model performs successfully. Zheng et al. [27] proposed a novel evolutionary generative adversarial network (GAN) ensemble method for HSR passenger classification; compared with state-of-the-art methods, the proposed model exhibits significant advantages and has been successfully applied to anti-terrorism work for the Chinese HSR. He et al. [28] developed a new evolvable adversarial framework for automatic COVID-19 infection segmentation; experiments on several COVID-19 CT scan datasets verified that the proposed method achieves superior effectiveness and stability for COVID-19 infection segmentation. Cai et al. [29] proposed a skin cancer detection model based on federated learning integrated with a deep generative model based on the knee-point-driven evolutionary algorithm (KnEA); experiments conducted on the ISIC 2018 dataset show that the proposed model can help resolve the problem of insufficient data in the smart medicine of IoT. Hazra et al. [30] proposed a generative adversarial network (GAN) model to create synthetic nucleic acid sequences of the cat genome tuned to exhibit specific desired properties, and the experimental results show that the proposed architecture can create such synthetic nucleic acid sequences efficiently with the desired properties of the cat genome. Talasa et al. [31] proposed a novel method that exploits Generative Adversarial Networks to simulate an evolutionary arms race between the camouflage of a synthetic prey and its predator. Chen et al. [32] proposed a novel data-driven scenario generation method using generative adversarial networks, based on two interconnected deep neural networks, for stochastic power generation processes; experiments conducted on wind and solar time-series data from NREL integration datasets show that the proposed method is able to generate realistic wind and photovoltaic power profiles with a full diversity of behaviors.
Although the above research findings have all obtained good optimization results, there is still much room for improvement. Standard E-GAN and its variants use only a single discriminator, which leads to large discrimination errors and cannot guarantee the quality and diversity of the generated samples, resulting in mode collapse and poor training stability. Moreover, generators with lower performance are directly discarded, so the search easily falls into local optima, and there is no crossover operation to guarantee search performance, among other challenges. Therefore, the area is still in an infantile stage, and we believe that the model and work proposed in this paper can make a valid contribution toward filling this gap in current research.
In light of the aforementioned challenges, a novel hybrid defect image generation model based on selective ensemble learning (SEN) and an Auxiliary Classifier Wasserstein evolutionary GAN (E-GAN) assisted by a GA improved by simulated annealing (IGA) is proposed in this paper for the first time to generate new high-quality defect images of the minority classes of ship coating defects, which alleviates the problems of insufficient data and class imbalance to some extent.
The main novelties and contributions of this paper are summarized as follows.
(1)
To address the dilemma of a ship coating defect database’s insufficient historical labeled data and class imbalance, this paper proposes a novel hybrid defect image generation model called IGASEN-EMWGAN for generating the new high-quality defect images of the minority class for the first time.
(2)
To alleviate the problems of vanishing gradients and mode collapse in the original minimax mutation and to provide smoother gradients for updating the generators and stabilizing the training process, we employ the modified hinge mutation and the wasserstein_gp mutation to inject genome mutation diversity into the training of complementary generators, which is conducive to steady training between generators and discriminators.
(3)
To remedy the deficiencies of vanilla E-GANs, such as the lack of a crossover operator, the limitation of a single specified environment (discriminator), and the tendency to get stuck in local optima, we add a two-point crossover operator (TP) to the proposed IGASEN-EMWGAN model to evolve the generators, which fosters diversity in the generators and discriminators.
(4)
To overcome the limitations of previously proposed methods, such as the difficult balance between the discrimination accuracy and diversity of the base discriminators and the high possibility of falling into local optima, we propose a novel model that integrates selective ensemble learning and a GA improved by SA into the training of the multi-discriminators in IGASEN-EMWGAN to escape from the local optima caused by class imbalance.
(5)
Extensive experiments are conducted on a real unbalanced ship coating defect database, and the experimental results demonstrate that, compared with the baselines and different state-of-the-art GANs, the IS and FID scores are significantly improved and decreased, respectively, which proves the superior effectiveness of the proposed model.
The rest of the paper is organized as follows: the details of the proposed IGASEN-EMWGAN model are illustrated in Section 2. Section 3 describes the experimental design and gives the experimental results. The conclusions of the paper and the outlook for future research are presented in Section 4.

2. Proposed Method

In this section, we first review the original E-GAN structure. Then, the algorithm framework of the IGASEN-EMWGAN model is proposed to remedy the above challenges. The implementation details of the proposed IGASEN-EMWGAN model are discussed by explaining the whole modified neuro-evolutionary process of base generators and discriminators in detail. Among them, the whole evolutionary process of base generators in IGASEN-EMWGAN includes four steps: variation mutations, crossover mutations, evaluation, and selection, while the base discriminators in IGASEN-EMWGAN contain five steps: generation, encoding, evolution (variation mutations, crossover mutations, evaluation, and selection based on ensemble pruning), update, and combination.

2.1. Evolutionary Generative Adversarial Networks

The proposed classification algorithm for ship coating defects consists of four modules, namely, a data acquisition module, a data pre-processing module, a data generation module, and a coating defect classification module, as shown in Figure 1.
Evolutionary Generative Adversarial Networks (E-GAN) was first proposed in [22] to mitigate the inherent deficiencies of conventional GANs, namely, mode collapse, training instability, and vanishing gradients. Different from vanilla GANs, which update a generator and a discriminator iteratively under a specified adversarial optimization strategy, E-GAN introduced a neuro-evolutionary computation strategy (NECS) that evolves a population of generators (parents) $\{G_\theta\}$, instead of a single generator, in a given static environment denoted by the discriminator $D_\phi$ to produce a set of new generators (offspring). The adversarial training procedure of the generator and discriminator is treated as an evolutionary process of an individual species in response to its environment. Each evolution step mainly includes three stages: variation, evaluation, and selection, as shown in Figure 1a.
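For illustration, the following minimal Python sketch runs one such evolution step; the `mutations` and `fitness` callables and the population representation are hypothetical placeholders for illustration, not the implementation used in this paper.

```python
import copy

def egan_evolution_step(parents, mutations, fitness, mu):
    """One E-GAN evolution step: variation, evaluation, selection.

    parents   -- current population of generators
    mutations -- list of mutation (training objective) callables
    fitness   -- callable scoring a generator against the discriminator
    mu        -- number of survivors kept as the next parent population
    """
    # Variation: each parent produces one offspring per mutation objective.
    offspring = [mutation(copy.deepcopy(g)) for g in parents
                 for mutation in mutations]
    # Evaluation: score every offspring in the current environment (discriminator).
    scored = [(fitness(child), child) for child in offspring]
    # Selection: keep the mu best-performing individuals as the next parents.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [child for _, child in scored[:mu]]
```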

2.2. Generators in IGASEN-EMWGAN

In summary, most existing generators of E-GANs lack a crossover mutation operator, are limited by a single specified environment (discriminator), and easily get stuck in local optima, which strongly influences the optimization performance during training. To this end, apart from the two original genetic operators (the variation mutation operator and the selection operator) in vanilla E-GAN, we also utilize a crossover mutation operator to evolve the generators in IGASEN-EMWGAN. In addition, we build a set of generators $\{G_{\theta_j}\}\ (j = 1, 2, \ldots, \mu)$ (acting as parents) to produce a population of new generators (acting as offspring) $\{G_{\theta_{j,c}}\}\ (j = 1, 2, \ldots, \mu;\ c = 1, 2, \ldots, M)$ that adapt to a dynamic environment (i.e., the discriminators $\{D_{\phi_n}\}\ (n = 1, 2, \ldots, N)$), so that the evolved generators can generate controllable, higher-quality images and estimate the distribution of the true data accurately.
Based on the modified NECS, we take Variation mutations, Crossover mutations, Evaluation, and Selection for evolving the generators in IGASEN-EMWGAN training. The whole modified neuro-evolutionary computation process of generators in IGASEN-EMWGAN is depicted in Figure 1b and Algorithm 1.

2.2.1. Variation Mutations of Generators

In essence, the variation mutations of the generators in IGASEN-EMWGAN are different objective functions, whose purpose is to update the parameters of the generators after each evaluation by the discriminators and to narrow the distance between the generated distribution and the data distribution [22,33]. As shown in Figure 2, the performance of the original minimax mutation is disappointing because the gradient of the generator easily vanishes. To this end, we employ a modified hinge loss as the hinge mutation in this section, which replaces the original minimax mutation and expects the final discriminator to judge the generated samples to be as real as possible (i.e., $D_\phi(G_\theta(z), C) \to 1$) or to be punished otherwise in IGASEN-EMWGAN. The modified hinge mutation introduces the class label C and a gradient penalty (GP) term into the original hinge loss, which respectively assist the generators in generating specific types of samples and enforce the Lipschitz continuity constraint. The modified hinge mutation can be defined as follows:
$M_G^{hinge} = \sum_{n=1}^{\tau} \mathbb{E}_{z \sim p_z}\left[\max\left(0,\ 1 - \tilde{D}_{\phi_n}(G_\theta(z), C)\right)\right] + \mathbb{E}_{x \sim p_{data}}\left[\max\left(0,\ 1 + \tilde{D}_{\phi_n}(x)\right)\right] + \xi\, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\left[\left(\left\|\nabla_{\hat{x}} D(\hat{x})\right\|_2 - 1\right)^2\right]$ (1)
where $\tilde{D}_{\phi_n}(x)$ and $\tilde{D}_{\phi_n}(G_\theta(z), C)$ in this paper are defined as follows:
$\tilde{D}_{\phi_n}(x) = D_{\phi_n}(x) - \mathbb{E}_{x \sim p_{data}}\, D_{\phi_n}(x)$ (2)
$\tilde{D}_{\phi_n}(G_\theta(z), C) = D_{\phi_n}(G_\theta(z), C) - \mathbb{E}_{z \sim p_z}\, D_{\phi_n}(G_\theta(z), C)$ (3)
where the noise sample $z \sim p_z$ is drawn from a normal or uniform distribution, $x \sim p_{data}$ is a real sample drawn from the distribution of real data, C denotes the class label of the ship coating defect images, $(G_\theta(z), C)$ is the generated image with class label C, and $D_{\phi_n}(G_\theta(z), C)$ is the output of the discriminators.
Apart from the hinge mutation, we also employ a novel wasserstein_gp mutation inspired by WGAN-GP [34,35] and ACGAN [36] to provide smoother gradients for updating the generators and stabilizing the training process, as shown in Figure 2. Unlike the mutations in original E-GANs, the wasserstein_gp mutation proposed in this paper not only yields value functions superior to those derived from the Jensen–Shannon divergence (JSD), which alleviates vanishing gradients and mode collapse, but also replaces weight clipping with a gradient penalty to satisfy the Lipschitz continuity constraint, which is beneficial to improving the speed and stability of training [37]. In a nutshell, using the combination of the hinge mutation and the wasserstein_gp mutation as the mutation operators of the generators in IGASEN-EMWGAN not only avoids deficiencies such as gradient vanishing and mode collapse to some extent but also injects genome mutation diversity into the training of complementary generators, which is conducive to steady training between generators and discriminators. The wasserstein_gp mutation is defined in Equation (4).
$M_G^{wasserstein\_gp} = -\sum_{n=1}^{\tau} \mathbb{E}_{z \sim p_z}\left[D_{\phi_n}(G_\theta(z), C)\right] + \mathbb{E}_{z \sim p_z}\left[P(\mathrm{class} = c \mid G_\theta(z))\right]$ (4)
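As an illustration, the sketch below shows one plausible PyTorch reading of the two generator mutations and the gradient penalty term of Equations (1)-(4) as losses to minimize; the conditional critic interface `D(x, C)` returning a realness score, the separate auxiliary class logits, and the image-shaped interpolation are assumptions for illustration, not the exact networks of this paper.

```python
import torch
import torch.nn.functional as F

def gradient_penalty(D, real, fake, labels):
    """GP term of Eq. (1): penalize deviations of the critic's gradient
    norm from 1 on interpolates between real and generated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = D(x_hat, labels)  # assumed conditional critic interface D(x, C)
    grads, = torch.autograd.grad(scores.sum(), x_hat, create_graph=True)
    return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

def hinge_mutation(fake_scores, real_scores, gp, xi=10.0):
    """Modified hinge mutation of Eq. (1) on mean-centered scores (Eqs. (2)-(3))."""
    d_fake = fake_scores - fake_scores.mean()
    d_real = real_scores - real_scores.mean()
    return (F.relu(1.0 - d_fake).mean()
            + F.relu(1.0 + d_real).mean()
            + xi * gp)

def wasserstein_gp_mutation(fake_scores, class_logits, labels):
    """wasserstein_gp mutation of Eq. (4), read as a loss: a Wasserstein
    critic term plus an ACGAN-style auxiliary classification term for C."""
    return -fake_scores.mean() + F.cross_entropy(class_logits, labels)
```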

2.2.2. Crossover Mutations of Generators

Another idea in the modified NECS for addressing the training pathologies of conventional GANs is motivated by the crossover operators used in genetic algorithms. As an equally important mutation operator in evolutionary algorithms (EAs) [38], a crossover operator exchanges genes in the chromosomes of two paired parents to help the offspring inherit superior traits from the parents, which is conducive to improving the diversity of the genome and the global search ability. Common crossover operators include single-point crossover (SP), two-point crossover (TP), scattered crossover (SC), heuristic crossover (HE), arithmetic crossover (AM), and intermediate crossover (IT) [39,40]. In this paper, we add a two-point crossover operator (TP) to the IGASEN-EMWGAN model to foster diversity among the generators, which therefore performs better than existing E-GANs.
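A minimal sketch of the TP operator is given below, assuming the crossover acts on the flattened parameter vectors of two parent networks (the paper does not fix the gene granularity, so this is one plausible choice).

```python
import copy
import torch

def two_point_crossover(parent_a, parent_b):
    """Two-point crossover (TP): children inherit the parameter segment
    between two random cut points from the other parent."""
    child_a, child_b = copy.deepcopy(parent_a), copy.deepcopy(parent_b)
    flat_a = torch.nn.utils.parameters_to_vector(parent_a.parameters())
    flat_b = torch.nn.utils.parameters_to_vector(parent_b.parameters())
    # Draw and order two random cut points over the flattened genome.
    p1, p2 = sorted(torch.randint(0, flat_a.numel(), (2,)).tolist())
    new_a, new_b = flat_a.clone(), flat_b.clone()
    new_a[p1:p2], new_b[p1:p2] = flat_b[p1:p2], flat_a[p1:p2]
    torch.nn.utils.vector_to_parameters(new_a, child_a.parameters())
    torch.nn.utils.vector_to_parameters(new_b, child_b.parameters())
    return child_a, child_b
```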

2.2.3. Evaluation of Generators

In the modified neuro-evolutionary computation strategy, evaluation is the operator that measures the quality and diversity of the evolved individual offspring (i.e., children) in the changing environment (i.e., the discriminators $D_{\phi_n}$). In this paper, we devise an evaluation function consisting of two typical properties of the generated samples (i.e., the quality fitness score $F_G^{quality}$ and the diversity fitness score $F_G^{diversity}$) to evaluate the performance of the evolved offspring of the generators and determine the evolution direction (i.e., individual offspring selection) in IGASEN-EMWGAN. The formal representations of the quality fitness score $F_{G_\theta}^{quality}$ and the diversity fitness score $F_{G_\theta}^{diversity}$ are as follows:
$F_{G_\theta}^{quality} = \sum_{n=1}^{\tau} \mathbb{E}_{z \sim p_z}\left[D_{\phi_n}(G_\theta(z), C)\right]$ (5)
$F_{G_\theta}^{diversity} = -\sum_{n=1}^{\tau} \log \left\| \nabla_{D_{\phi_n}} \left( -\mathbb{E}_{x \sim p_{data}}\left[\log D_{\phi_n}(x)\right] - \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D_{\phi_n}(G_\theta(z), C)\right)\right] \right) \right\|$ (6)
Overall, based on the aforementioned two fitness scores, we can finally give the evaluation function of the generators proposed in this paper, which is defined as:
$F_{G_\theta} = F_{G_\theta}^{quality} + \lambda F_{G_\theta}^{diversity}$ (7)
where $\lambda\ (\lambda \geq 0)$ in front of $F_{G_\theta}^{diversity}$ balances the two indices, the generative quality fitness score $F_{G_\theta}^{quality}$ and the diversity fitness score $F_{G_\theta}^{diversity}$, of the generated samples. In short, a relatively high fitness score $F_{G_\theta}$ leads to higher training efficiency and better generative performance.
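The following sketch illustrates how Equations (5)-(7) could be computed in PyTorch over the set of base discriminators; the conditional signatures `G(z, C)` and `D(x, C)` and the use of sigmoid outputs inside the diversity term are illustrative assumptions.

```python
import torch

def generator_fitness(G, discriminators, z, labels, real, lam=0.1):
    """Fitness of Eq. (7): quality score (Eq. (5)) plus lam times the
    log-gradient-norm diversity score (Eq. (6))."""
    fake = G(z, labels)
    # Eq. (5): sum of mean critic scores over all base discriminators.
    quality = sum(D(fake, labels).mean() for D in discriminators)
    diversity = 0.0
    for D in discriminators:
        # Discriminator loss whose gradient norm enters Eq. (6).
        d_loss = (-torch.log(torch.sigmoid(D(real, labels)) + 1e-8).mean()
                  - torch.log(1.0 - torch.sigmoid(D(fake, labels)) + 1e-8).mean())
        grads = torch.autograd.grad(d_loss, list(D.parameters()),
                                    retain_graph=True)
        grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        diversity = diversity - torch.log(grad_norm + 1e-8)  # Eq. (6)
    return quality + lam * diversity
```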

2.2.4. Selection of Generators

The selection operator, which acts as the counterpart of the mutation and recombination operators, aims to select the individuals (parents) that produce the next well-performing offspring population. The most common methods are roulette wheel selection [41] and binary tournament selection [42]. In the proposed IGASEN-EMWGAN, we employ the simple yet widely used comma survivor-selection strategy, i.e., (μ, λ)-selection [43], as the selection mechanism to select and retain the desired individuals based on the fitness scores of the existing generator individuals. We define the selection fitness function for the offspring populations of generators in IGASEN-EMWGAN as:
$\left\{F_{G_{\theta_{1,1}}}, F_{G_{\theta_{2,1}}}, \ldots, F_{G_{\theta_{\mu,M}}}\right\} \xrightarrow{\ \mathrm{sort}\ } \max\left(F_{G_{\theta_{j,\nu}}}\right)$ (8)
After sorting the current μ offspring populations (generators) $G_{\theta_1}, G_{\theta_2}, \ldots, G_{\theta_\mu}$ by fitness score, the μ best-performing surviving individuals are selected to form the next generation for the subsequent round of evolutionary adversarial training in IGASEN-EMWGAN. Therefore, it can be written as:
$\left(\theta_1, \theta_2, \ldots, \theta_\mu\right) \leftarrow \left(\theta_{1,1}, \theta_{2,1}, \ldots, \theta_{\mu,1}\right)$ (9)
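A minimal sketch of this (μ, λ)-selection step, assuming offspring and their fitness scores are held in parallel Python lists, is given below.

```python
def comma_selection(offspring, fitnesses, mu):
    """(mu, lambda)-selection of Eqs. (8)-(9): rank all offspring by fitness
    and keep only the mu best as the next parents; the old parents are
    discarded, as the comma strategy prescribes."""
    ranked = sorted(range(len(offspring)),
                    key=lambda i: fitnesses[i], reverse=True)
    return [offspring[i] for i in ranked[:mu]]
```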

2.3. Discriminators in IGASEN-EMWGAN

In fact, although many recent GAN variants build multi-discriminators and thereby overcome, to some extent, several well-documented failures of GAN models (i.e., the lack of informative feedback gradients and the large discrimination deviation of a single discriminator) [44,45,46], GANs can easily stop adversarial training and get stuck in a local optimum when they try to learn the distributions of the minority classes in unbalanced datasets [47,48,49], as shown in Figure 3a. Apart from that, the tradeoff between the discrimination accuracy of multiple base discriminators and the differences among them has no well-accepted, unified, formal definition [50], which decreases the generalization performance and increases the hardware memory and computational costs of the server system. To this end, considering the aforementioned limitations of previously proposed methods, we propose a novel model that integrates selective ensemble learning and a genetic algorithm improved by simulated annealing into the training of the multi-discriminators in IGASEN-EMWGAN to escape from the local optima caused by class imbalance and to improve the training efficiency and generalization performance. The overall modified NECS process of the multi-discriminators in IGASEN-EMWGAN is presented in Figure 1b and illustrated in Algorithm 1.

2.3.1. Generation of Base Discriminators

Before the evolutionary process of the base discriminators in IGASEN-EMWGAN, the random construction of different base discriminators is crucial for improving the discrimination accuracy and generalization ability of the ensemble model. In this paper, to guarantee the variety of the constructed base discriminators, we first use the bootstrap sampling algorithm [51] to construct τ training subsets $sub_i\ (i = 1, 2, \ldots, \tau)$ with a strong diversity degree from $S_{train}$. Subsequently, it is necessary to initialize the weights of the minority and majority classes in each training subset, which reduces the effect of the initial data imbalance, gives the initial base discriminators better discriminative ability, and ensures their diversity. The weights of the minority class samples and the majority class samples are given in Equations (10) and (11), respectively. A combination of training data sample perturbation and algorithm parameter perturbation is used to train the initial homogeneous base discriminators independently with diversity [52]. Among them, the k-fold cross-validation (CV) method [53], owing to its simplicity and universality, is regarded as an effective method for perturbing data samples: the training set is divided into k equal parts (one fold for testing and the remaining k−1 folds for training). In this paper, we choose five-fold cross-validation, which provides robustness against overfitting and underfitting, as shown in Figure 4. Apart from the sample perturbation method, proper tuning of the hyperparameters is also crucial, as it ensures the goodness-of-fit of the outcomes and the convergence of model fitting. Common algorithms include Grid Search, Random Search, and Bayesian Optimization (BO) [54]. In this paper, the Grid Search strategy is used to set different network hyperparameters (i.e., the number of hidden layer neurons, the number of layers, and the initial connection weights) to construct subsets of candidate base discriminators with large diversities, so as to randomly obtain different compact sets of candidate initial base discriminators.
$w^{-} = \dfrac{S^{+}}{S}$ (10)
$w^{+} = \dfrac{S^{-}}{S}$ (11)
where S denotes the total number of samples in the training set, and $S^{-}$ and $S^{+}$ represent the numbers of samples in the minority class and the majority class of the training set, respectively.
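The sketch below illustrates the bootstrap subset construction together with the class weights of Equations (10) and (11); the label convention (minority class encoded as 1) is an assumption for illustration.

```python
import numpy as np

def make_bootstrap_subsets(X, y, tau, seed=0):
    """Draw tau bootstrap training subsets (Section 2.3.1) and attach the
    class weights of Eqs. (10)-(11): the minority class receives the
    larger weight, proportional to the size of the opposite class."""
    rng = np.random.default_rng(seed)
    n = len(X)
    s_min, s_maj = np.sum(y == 1), np.sum(y == 0)
    w_min = s_maj / n  # Eq. (10): minority weight
    w_maj = s_min / n  # Eq. (11): majority weight
    subsets = []
    for _ in range(tau):
        idx = rng.integers(0, n, size=n)  # sample with replacement
        weights = np.where(y[idx] == 1, w_min, w_maj)
        subsets.append((X[idx], y[idx], weights))
    return subsets
```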

2.3.2. Encoding of Base Discriminators

For evolutionary algorithms such as GA, the first step is to build a bridge between the real-world problem and the solution search space through the NECS [55]. To represent the candidate base discriminators in the search space, a suitable encoding scheme is adopted, in which each chromosome is represented as a fixed-length vector. Common coding methods include binary coding, gray coding, floating-point (real) coding, permutation coding, and so forth [56]. Consequently, for selecting the optimal base discriminator subset, all the homogeneous candidate base discriminators $D_{\phi_i}\ (i = 1, 2, \ldots, \tau)$ are encoded as binary chromosome individuals in the genetic space, where 1 means the corresponding base discriminator is selected as an ensemble member, while 0 means the opposite, as shown in Figure 5 [57].
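A minimal sketch of this binary encoding and its decoding, using NumPy bit vectors of length τ, might look as follows.

```python
import numpy as np

def random_chromosomes(population_size, tau, seed=0):
    """Binary encoding of candidate ensembles (Figure 5): bit i is 1 when
    base discriminator D_phi_i is included in the ensemble, 0 otherwise."""
    rng = np.random.default_rng(seed)
    chromosomes = rng.integers(0, 2, size=(population_size, tau))
    # Ensure every chromosome selects at least one base discriminator.
    empty = chromosomes.sum(axis=1) == 0
    chromosomes[empty, rng.integers(0, tau, size=int(empty.sum()))] = 1
    return chromosomes

def decode(chromosome, discriminators):
    """Map a 0/1 chromosome back to the selected subset of discriminators."""
    return [d for bit, d in zip(chromosome, discriminators) if bit == 1]
```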

2.3.3. Evolution of Base Discriminators

To provide informative gradients to the generator through the adversarial training, we employ the modified neuro-evolutionary computation strategy (NECS) to generate multiple offspring discriminators. The specific flowchart of updating the discriminators in IGASEN-EMWGAN is shown in Figure 6. The whole modified neuro-evolutionary computation strategy (NECS) includes four steps: variation mutations, crossover mutations, evaluation, and selection.
Variation mutations of discriminators
Similar to the variation mutations of the generators in IGASEN-EMWGAN, various objective functions are taken as the variation mutations of the base discriminators. In this work, we employ two complementary variation mutation operators, the minimax mutation and the least-squares mutation, to provide a promising optimization direction for the evolution of the base discriminators in IGASEN-EMWGAN. The minimax mutation and least-squares mutation operators are formulated as:
$M_D^{minimax} = \mathbb{E}_{x \sim p_{data}}\left[\log D_{\phi_n}(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D_{\phi_n}(G_\theta(z), C)\right)\right]$ (12)
$M_D^{least\text{-}squares} = \dfrac{1}{2}\mathbb{E}_{x \sim p_{data}}\left[\left(D_{\phi_n}(x) - 1\right)^2\right] + \dfrac{1}{2}\mathbb{E}_{z \sim p_z}\left[D_{\phi_n}(G_\theta(z), C)^2\right]$ (13)
Crossover mutations of base discriminators
As one of the important steps in GAs for maintaining dominant sequences and increasing sample diversity [32], the crossover mutation operator is another key factor for fostering the diversity of the base discriminators. Similar to the crossover mutations of the generators, we also add a two-point crossover operator (TP) to the IGASEN-EMWGAN model to update the parameters of the base discriminators, which therefore performs better than existing E-GANs.
Evaluation of base discriminators
Generally, appropriately handling the tradeoff between the diversity and the performance of the ensemble discriminators is crucial to the effectiveness of ensemble learning [58,59]. From the perspective of diversity measurement, a substantial number of statistics can measure the diversity among sub-discriminators, such as the Q statistic, the correlation, and the entropy of the votes; these can be grouped into two categories: pairwise and non-pairwise diversity measures [60]. Here, the interrater agreement [61,62], which measures the intraclass correlation coefficient, is employed. To account for the influence of the unbalanced dataset, a modified interrater agreement κ is proposed in this paper and defined as follows:
$\kappa = 1 - \dfrac{1}{2\bar{p}(1 - \bar{p})}\, Dis_{av}$ (14)
where $\bar{p}$ denotes the average discrimination accuracy of the K base discriminators on the training set $S_{train} = \left\{(x_1, l_1), (x_2, l_2), \ldots, (x_M, l_M)\right\}$ and is expressed as
$\bar{p} = \dfrac{1}{MK} \sum_{s=1}^{M} \sum_{i=1}^{K} h_i(x_s, l_s)$ (15)
where $h_i(x_s, l_s)$ denotes a correct/incorrect decision, $x_s$ represents a sample with n-dimensional features in the training set, and $l_s \in \{0, 1\}$ is its label, where 0 represents a generated sample and 1 represents a real sample.
The average disagreement measure $Dis_{av}$ is defined as follows:
$Dis_{av} = \dfrac{1}{K(K-1)} \sum_{i=1}^{K} \sum_{j=1, j \neq i}^{K} Dis_{i,j}$ (16)
where $Dis_{i,j}$ is defined as follows:
$Dis_{i,j} = \dfrac{w_1\left(N_{ij}^{01} + N_{ij}^{10}\right) + w_0\left(N_{ij}^{01} + N_{ij}^{10}\right)}{w_1 N_{ij}^{1} + w_0 N_{ij}^{0}}$ (17)
where $w_0$ and $w_1$ represent the sample weights of the minority class and the majority class, respectively; $N_{ij}^{xy}$ denotes the number of samples in the validation set for which base discriminator $h_i$ outputs x while base discriminator $h_j$ outputs y, as shown in Table 1; and $N_{ij}^{0}$ and $N_{ij}^{1}$ represent the total numbers of generated samples and real samples, respectively.
Apart from the diversity, the discrimination accuracy of the base discriminators is another important complementary factor for improving the ensemble performance. In practical applications, the F1-score [63] and G-mean [64] metrics are more effective for evaluating models on unbalanced data. Therefore, to evaluate the discrimination performance of the subpopulations evolved by the discriminators, we devise an evaluation criterion called Accuracy–Diversity (AD), which simultaneously considers the discrimination accuracy (a combination of the F1-score and G-mean) and the modified interrater agreement κ of the ensemble model when building the selection criterion. The AD metric can be formulated as
$AD = \beta \cdot Value(F1\text{-}score) + \gamma \cdot Value(G\text{-}mean) + (1 - \beta - \gamma) \cdot \kappa, \qquad \beta, \gamma \in [0, 1]$ (18)
where β and γ are weight coefficients balancing the F1-score, the G-mean, and the modified interrater agreement κ of the base discriminator models.
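For illustration, the sketch below computes the modified interrater agreement of Equation (14) and the AD criterion of Equation (18); the default values of β and γ are illustrative, not the settings used in this paper.

```python
def modified_kappa(p_bar, dis_av):
    """Modified interrater agreement of Eq. (14)."""
    return 1.0 - dis_av / (2.0 * p_bar * (1.0 - p_bar))

def accuracy_diversity(f1, g_mean, kappa, beta=0.4, gamma=0.4):
    """Accuracy-Diversity (AD) criterion of Eq. (18): a weighted combination
    of the F1-score, the G-mean, and the modified interrater agreement."""
    assert 0.0 <= beta <= 1.0 and 0.0 <= gamma <= 1.0 and beta + gamma <= 1.0
    return beta * f1 + gamma * g_mean + (1.0 - beta - gamma) * kappa
```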
Selection of base discriminators
The evaluation procedure corresponds to a selection process, which selects the best-performing evolved individuals with larger evaluation scores [65]. After evaluation, similar to the selection of the generators in IGASEN-EMWGAN, the new parents for the next evolution of the base discriminators are selected according to the value of AD. The selection function for the offspring of the base discriminators is defined as
$\left\{F_{D_{\phi_{1,1}}}, F_{D_{\phi_{2,1}}}, \ldots, F_{D_{\phi_{\tau,N}}}\right\} \xrightarrow{\ \mathrm{sort}\ } \max\left(F_{D_{\phi_{\tau,c}}}\right)$ (19)
After sorting the current τ offspring populations (base discriminators) $D_{\phi_1}, D_{\phi_2}, \ldots, D_{\phi_\tau}$ in descending order according to the value of AD, the τ best-performing surviving component individuals are selected to form the next generation of the evolutionary adversarial training in IGASEN-EMWGAN. Therefore, it is formulated as follows:
$\left(\phi_1, \phi_2, \ldots, \phi_\tau\right) \leftarrow \left(\phi_{1,1}, \phi_{2,1}, \ldots, \phi_{\tau,1}\right)$ (20)

2.3.4. Update of Discriminators

To avoid premature convergence and increase the possibility of finding new solutions in the population rather than falling into local optima, this paper introduces the simulated annealing (SA) mechanism [66] into the process of updating the discriminators, applied after retaining the best-fit offspring of the base discriminators. In addition to the best-performing discriminators with high discrimination accuracy and variability according to the AD metric, a subset of base discriminators with poor discrimination performance may also be retained according to the Metropolis criterion, which helps avoid the local-optimum trapping problem. The Metropolis criterion is defined as follows:
$p = \begin{cases} 1, & E(n+1) \geq E(n) \\ \exp\left(\dfrac{E(n+1) - E(n)}{T}\right), & E(n+1) < E(n) \end{cases}$ (21)
where p represents the probability of accepting a new base discriminator, E(n + 1) and E(n) denote the AD value of the base discriminator with modified network parameters and the AD value of the previous base discriminator, respectively, and T is the current annealing temperature.
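A minimal sketch of this acceptance rule, together with the geometric cooling step $T_i = \alpha T_{i-1}$ used in Algorithm 1, is given below.

```python
import math
import random

def metropolis_accept(ad_new, ad_old, temperature):
    """Metropolis criterion of Eq. (21): always accept an improving base
    discriminator; accept a worse one with probability exp(delta_AD / T)."""
    if ad_new >= ad_old:
        return True
    return random.random() < math.exp((ad_new - ad_old) / temperature)

def cool(temperature, alpha=0.99):
    """Geometric cooling schedule of Algorithm 1: T_i = alpha * T_(i-1)."""
    return alpha * temperature
```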

2.3.5. Combination of Discriminators

To obtain the best and most robust discriminator, it is necessary to form a combined discriminator model. In this study, the chromosomes corresponding to the shortlisted best-performing discriminators evolved by the modified NECS are first decoded. Then, the shortlisted best-performing base discriminators and the suboptimal discriminators retained by the SA mechanism are combined through a weighted ensemble strategy based on their AD values. Suppose the weight of the best-performing offspring discriminator obtained by the modified NECS is $w_1$ and the weight of the suboptimal base discriminator is $w_2$. The relationship between $w_1$ and $w_2$ is shown in Equations (22) and (23).
$\dfrac{w_1}{w_2} = \dfrac{AD\left(D_{\phi_{best}}^{i,c}\right)}{AD\left(D_{\phi_{suboptimal}}^{i,c}\right)}$ (22)
$w_1 + w_2 = 1$ (23)
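Solving Equations (22) and (23) gives closed-form weights, as the short sketch below illustrates.

```python
def ensemble_weights(ad_best, ad_suboptimal):
    """Solve Eqs. (22)-(23): w1 / w2 = AD_best / AD_suboptimal, w1 + w2 = 1."""
    w1 = ad_best / (ad_best + ad_suboptimal)
    return w1, 1.0 - w1

def combined_output(score_best, score_suboptimal, w1, w2):
    """Weighted ensemble output of the two retained discriminators."""
    return w1 * score_best + w2 * score_suboptimal
```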
Algorithm 1: IGASEN-EMWGAN
Require: mini-batch size ψ; the number of iterations T; the generator $G_\theta$; the discriminator $D_\phi$; the number of updating steps of the discriminator per iteration $n_D$; the number of parents for the generators μ; the number of parents for the base discriminators τ; the number of mutations for the generators M; the number of mutations for the discriminators N; the number of variation mutations $n_m$; the number of crossover mutations $n_c$; the spatial dimension $d_z$ of the noise z; the initial annealing temperature $T_0$; the annealing coefficient α; the hyperparameter γ of the fitness function of the generators; the hyperparameter δ of the fitness function of the base discriminators.
Initialize the base discriminator parameters $\phi_0^1, \phi_0^2, \ldots, \phi_0^\tau$ and the generator parameters $\theta_0^1, \theta_0^2, \ldots, \theta_0^\mu$.
1: Construct τ training subsets $sub_i\ (i = 1, 2, \ldots, \tau)$ with a strong diversity degree from $S_{train}$ by using the bootstrap sampling algorithm.
2: Obtain the initial homogeneous base discriminator set $D_{\phi_i}\ (i = 1, 2, \ldots, \tau)$ by training independently on these multiple training subsets $sub_i\ (i = 1, 2, \ldots, \tau)$, with each parameter optimized by the parallel optimization strategy.
▷Discriminators Generation
3: Encode the homogeneous candidate base discriminators $D_{\phi_i}\ (i = 1, 2, \ldots, \tau)$ as binary chromosome individuals in the genetic space, where 1 means the base discriminator is selected as an ensemble member, while 0 means the opposite.
▷Discriminators Encoding
4: for t = 1, …, T do
5:  for i = 1, …, $n_D$ do
6:   for n = 1, …, τ do
▷Discriminators Evolution
7:     $\{x^{(s)}\}_{s=1}^{\psi} \sim p_{data}$ ← mini-batch sampled randomly from the real ship coating defect training set.
8:     $\{z^{(s)}\}_{s=1}^{\psi} \sim p_z$ ← mini-batch sampled randomly from noise samples; generate a batch of generated samples.
9:     $D_{\phi_i}\ (i = 1, 2, \ldots, \tau)$ generates N offspring $D_{\phi_{i,n}}\ (n = 1, 2, \ldots, N)$ via Equation (12) or Equation (13).
▷D-Variation Mutation
10:    $D_{\phi_{i,n}}\ (n = 1, 2, \ldots, N)$ generates N offspring $D_{\phi_{i,c}}\ (c = 1, 2, \ldots, N)$ via D-crossover mutation, that is, updating $D_{\phi_{i,n}}\ (n = 1, 2, \ldots, N)$.
▷D-Crossover Mutation
11:    Calculate the individual fitness $F_n\ (n = 1, 2, \ldots, N)$ of the N evolved offspring discriminators via Equation (18).
▷D-Evaluation
12:     Sort $F_n\ (n = 1, 2, \ldots, N)$, and denote the largest one as $F_n^{best}$.
▷D-Selection
13:   end for
14:   if $F_n^{best} > F_D$ then
▷D-Update
15:      Update $D_n^{best}$ to $D_{new}$
16:   else
17:       $\Delta = F_n^{best} - F_D$
18:       $P = e^{\Delta / T}$
19:      Update $D_n^{best}$ to $D_{new}$ with probability P, which ranges from 0 to 1
20:   end if
21:       $T_i = \alpha T_{i-1}$
22:   end for
23:  end for
24:  Output the remaining filtered base discriminator combination.
▷D-Combination
25:  for j = 1, …, μ do
▷Generators Evolution
26:    $\{z^{(s)}\}_{s=1}^{\psi} \sim p_z$ ← mini-batch sampled randomly from noise samples.
27:   $G_{\theta_j}\ (j = 1, 2, \ldots, \mu)$ generates M offspring $G_{\theta_{j,m}}\ (m = 1, 2, \ldots, M)$ via G-variation mutation, that is, updating $G_{\theta_j}\ (j = 1, 2, \ldots, \mu)$ via Equation (1) or Equation (4).
▷G-Variation Mutation
28:   $G_{\theta_{j,m}}\ (m = 1, 2, \ldots, M)$ generates M offspring $G_{\theta_{j,\upsilon}}\ (\upsilon = 1, 2, \ldots, M)$ via G-crossover mutation, that is, updating $G_{\theta_{j,m}}\ (m = 1, 2, \ldots, M)$.
▷G-Crossover Mutation
29:   Calculate the individual fitness $F_m\ (m = 1, 2, \ldots, M)$ of the M evolved offspring generators via Equation (7).
▷G-Evaluation
30:   Sort $F_m\ (m = 1, 2, \ldots, M)$, and select the largest offspring $F_m^{best}$ as the next generation's parents of the generators via Equations (8) and (9).
▷G-Selection
31:  end for
32: end for
33: Print (New Structure)
34: end

3. Experimental Results and Analysis

To qualitatively and quantitatively demonstrate the effectiveness and robustness of the proposed IGASEN-EMWGAN algorithm, comparative experiments based on different evaluation metrics are conducted in this section to demonstrate the merits of the proposed model over the baselines. The experiments were implemented on an 11th Gen Intel(R) Core(TM) i5-11400H CPU @ 2.70 GHz with an NVIDIA GeForce RTX 3050 Laptop GPU and 16.0 GB RAM. The programming language is Python, version 3.7.4.

3.1. Dataset Setup

In our work, 10 typical types of ship coating defects are analyzed, including Holiday coating, Sagging, Orange skin, Cracking, Exudation, Wrinkling, Bitty appearance, Blistering, Pinholing, and Delamination. The details of the unbalanced ship coating defect dataset are listed in Table 2. Since the sizes of the collected original ship coating defect images varied, each image was resized to 128 × 128 × 3 before the experiments were performed. The pre-processed ship painting defect dataset was randomly split into training, test, and validation sets in the ratio of 0.64:0.16:0.2. In addition to the aforementioned preprocessing techniques, data smoothing (Gaussian filtering) and data normalization (batch normalization) are used to remove image noise and to improve the performance of the neural networks, respectively.

3.2. Evaluation Metrics

To evaluate the overall generative performance of the proposed model accurately and comprehensively, different quantitative evaluation indices are adopted in our experiments. In this paper, the chosen evaluation metrics are the Inception Score (IS) [67] and the Fréchet Inception Distance (FID) score [68], the two most widely used metrics in the training of GANs. The IS computes the Kullback–Leibler divergence (KLD) between the conditional class distribution $p(C \mid x)$ and the marginal class distribution $p(C)$ [22], while the FID score calculates the Wasserstein-2 distance between the generated samples and the real samples in feature space. Therefore, the higher the IS and the lower the FID score, the better the quality and diversity of the generated images. They are formulated as follows:
$IS(G) = \exp\left(\mathbb{E}_{x \sim p_g}\, D_{KL}\left(p(C \mid x)\ \|\ p(C)\right)\right)$ (24)
$FID = \left\|\mu_r - \mu_g\right\|^2 + \mathrm{Tr}\left(\Sigma_r + \Sigma_g - 2\left(\Sigma_r \Sigma_g\right)^{1/2}\right)$ (25)
where C denotes the class-label vector, x denotes an image sampled from the generated distribution $p_g$, $\mu_r$ and $\mu_g$ denote the means of the features, $\mathrm{Tr}$ denotes the trace (the sum of the diagonal elements), and $\Sigma_r$ and $\Sigma_g$ are the covariance matrices of the features of the real and generated distributions, respectively.
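For illustration, the sketch below computes both metrics from pre-extracted Inception features and class posteriors using NumPy and SciPy; the feature extraction itself is assumed to have been done beforehand.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feat_real, feat_gen):
    """FID of Eq. (25) from Inception feature matrices (n_samples x dim)."""
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    sigma_r = np.cov(feat_real, rowvar=False)
    sigma_g = np.cov(feat_gen, rowvar=False)
    covmean = linalg.sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(sigma_r + sigma_g - 2.0 * covmean))

def inception_score(p_y_given_x, eps=1e-12):
    """IS of Eq. (24) from class posteriors p(C|x) (n_samples x n_classes)."""
    p_y = p_y_given_x.mean(axis=0, keepdims=True)  # marginal p(C)
    kl = np.sum(p_y_given_x * (np.log(p_y_given_x + eps) - np.log(p_y + eps)),
                axis=1)
    return float(np.exp(kl.mean()))
```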

3.3. Implementation Details

To empirically examine the effectiveness of the proposed IGASEN-EMWGAN model, extensive experiments are conducted in this section on a real-world dataset: the unbalanced ship coating defect dataset. In addition, we slightly fine-tune the network architectures of existing E-GANs from previous works [22,34] to generate controllable, high-quality images of the minority classes. The detailed network architectures and the experimental parameter settings of the proposed IGASEN-EMWGAN model are displayed in Table 3 and Table 4, respectively. All experiments were repeated five times to account for randomness, and the mean values of the indices are shown in the figures and tables.

3.4. Experimental Results and Analysis

3.4.1. Hyperparameters Analysis

In the first experiment, two hyperparameters that affect the performance of the proposed IGASEN-EMWGAN model are examined, namely, the balance weight coefficient λ and the number of candidate base discriminators τ. The hyperparameters of SA are also important factors affecting the performance of IGASEN-EMWGAN. We analyze the effects of different settings of these hyperparameters on the performance of the proposed IGASEN-EMWGAN model in this subsection. The details of the related experiments are as follows.
(1)
Balance Weight coefficients λ
To select proper values of the balance weight coefficient λ for IGASEN-EMWGAN, we conduct experiments on the unbalanced ship coating defect database using the grid search strategy. The Inception Score (IS) evaluation for IGASEN-EMWGAN under different settings of the balance hyperparameter is depicted in Figure 7a. As shown in Figure 7a, we take different balance hyperparameters λ = {0.01, 0.1, 0.25, 0.5, 0.75, 1} to balance the quality and diversity of the generated samples. The experimental results demonstrate that a larger value of λ (e.g., λ = 0.75) results in slower convergence and causes the proposed IGASEN-EMWGAN model to fail at the beginning of training. Furthermore, a relatively small value (λ = 0.1) outperforms the other settings of the balance weight coefficient λ in terms of generative performance and convergence speed. Consequently, we employ the optimal value of the balance weight coefficient (λ = 0.1) in the subsequent experiments.
(2)
Number of base discriminators τ
The number of base discriminators has a huge impact on the final generalization performance of the proposed IGASEN-EMWGAN model. If the number of base discriminators is too small, the evolved base discriminators will fall into a local optimal solution and provide insufficiently informative gradient signals, which results in an inability to reject generated samples reliably. Moreover, a larger population of base discriminators often incurs additional computation time and can decrease the discrimination performance. To this end, considering the tradeoff between computational overhead and discrimination performance, the relationship between the FID evaluation and the number of base discriminators is shown in Figure 7b. As shown in Figure 7b, as the number of candidate base discriminators τ increases, the value of FID decreases, which means the generative performance of IGASEN-EMWGAN improves. However, the more base discriminators are ensembled, the higher the computational cost of the system. When the number of base discriminators is eight, the FID reaches its minimum; adding more base discriminators beyond this point increases the FID again. Therefore, to balance the computational overhead against the improvement in generative performance, the value of τ is suggested to be eight. At this setting, the median value of FID is 25.413, and the runtime is 64.2 ± 0.8 s.
(3)
The initial temperature T and the annealing coefficient α
In the simulated annealing (SA) algorithm, the initial temperature T and the annealing coefficient α are two crucial hyperparameters, as described in Algorithm 1. Experiments with different initial temperatures and annealing coefficients are conducted on the real ship coating defect dataset with IR = 107.2. The relationship between the discrimination accuracy of the proposed IGASEN-EMWGAN model and the training epochs under different SA hyperparameters is shown in Figure 8. In our work, we use grid search to set the different hyperparameters of the initial temperature $T \in \{100, 1000, 10000\}$ and the annealing coefficient $\alpha \in \{0.99, 0.999\}$ in SA. As shown in Figure 8, the accuracy of the proposed IGASEN-EMWGAN model increases at a comparable speed across settings at the beginning of training. When the training epoch reaches about 750, the accuracy of IGASEN-EMWGAN converges to a certain value, which reveals that the proposed IGASEN-EMWGAN model is robust to different settings of the SA hyperparameters. Furthermore, it can be seen that smaller settings of the initial temperature T and the annealing coefficient α make the proposed IGASEN-EMWGAN model converge faster, while larger settings lead to the opposite result after a certain period. To this end, the annealing coefficient α and the initial temperature T are recommended to be 0.99 and 1000, respectively.

3.4.2. Comparisons with Different Existing GANs in Generative Performance

In this section, to further demonstrate the superiority of the proposed method, we discuss the relationship between the number of surviving parents and the generative performance. The comparative results of various GANs on the real ship coating defect dataset are listed in Table 5. As shown in Table 5, both the Inception Score and the FID score are employed to evaluate the generative performance of the learned generators. Since E-GAN [22] is the method most similar to ours, we take E-GAN as the baseline for comparison with IGASEN-EMWGAN. In addition, we also compare the proposed model with different existing state-of-the-art GANs, including DCGAN [69], WGAN-GP [34], AEGAN [70], LSGAN [71], and EASGAN [72]. In Table 5, we take various numbers of generators μ = {1, 2, 4, 8} for E-GAN and various numbers of discriminators τ = {1, 2, 4, 8} for IGASEN-EMWGAN. We train each model for 150 k generator iterations. The bold values indicate the best IS and FID values among the various GANs. Note that the results in Table 5 are the average training results of the five-fold cross-validation method. First, compared with the baseline (E-GAN, μ = 1 without GP), adding the GP norm term to optimize the discriminator indeed improves the generative performance during training (e.g., a 0.25 improvement in IS and a 2.8 decrease in FID when μ is set to 1). This reveals that the GP term is an effective means of providing smoother gradients to update the generators and stabilize the training process. Furthermore, as the number of base discriminators increases, the IS values improve and the FID values decrease, which means the balance between the base generators and discriminators of IGASEN-EMWGAN is well adjusted. These quantitative results in Table 5 show that the proposed IGASEN-EMWGAN model achieves better performance than E-GAN on the unbalanced ship coating defect dataset, which means the proposed method can generate diverse, high-quality defect images.

3.4.3. Ablation Study

In this section, we set E-GAN as the baseline and add the two-point crossover operator (TP) to the proposed IGASEN-EMWGAN model to further verify the role of each module in our method. The experiments are conducted on the real unbalanced ship coating defect dataset, and the influence of the two-point crossover operator (TP) on the average FID score over different iterations is displayed in Figure 9.
As depicted in Figure 9, the IGASEN-EMWGAN model with TP crossover obtains a lower average FID than the model without TP crossover by 30 k iterations, and its average FID value remains slightly lower after 50 k iterations. Overall, the proposed IGASEN-EMWGAN with the two-point crossover operator yields the lowest FID value. Nevertheless, adding the two-point crossover operator has no considerable influence on the performance of the proposed IGASEN-EMWGAN. A potential reason is that the number of training epochs is insufficient, which leads to a low probability of the crossover operator being selected in the very early training stages.

4. Conclusions and Future Work

In this paper, we propose, for the first time, a novel model driven by a modified NECS and selective ensemble learning on top of E-GANs, called the IGASEN-EMWGAN model, for generating new minority-class images for an unbalanced ship coating defect database without rich training data. Extensive experiments are conducted on a real unbalanced ship coating defect database and demonstrate that, compared with the baselines and the different existing state-of-the-art GANs mentioned in this paper, the IS and FID scores are significantly improved by 4.92% and decreased by 7.29%, respectively. In view of these experimental results, we conclude that the proposed IGASEN-EMWGAN model can effectively remedy the inherent deficiencies of existing GANs and successfully fulfill the valuable image synthesis task when faced with an unbalanced ship coating defect database.
Because the special characteristics of ship coating limit the amount of data, we will consider using online meta-learning and transfer learning to reduce the computation cost in future work. In addition, although the genetic algorithm improved by SA can improve the diversity of the generated images to some extent, its global search ability and operational efficiency remain limited. Therefore, ideas from quantum computation theory will be introduced into E-GANs in the future. To justify these hypotheses, we will conduct the related experiments in conjunction with rigorous theoretical analysis, which is another interesting issue for future work.

Author Contributions

H.B. revised the paper and completed it, C.H. wrote the first draft of the paper, X.Y. and X.J. collected and sorted the data, H.L. assisted in the experimental verification of the paper, H.Z. provided funding for the paper. All authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the Ministry of Industry and Information Technology High-Tech Ship Research Project: Research on the Development and Application of a Digital Process Design System for Ship Coating (No.: MC-202003-Z01-02), the National Defense Basic Scientific Research Project: Research and Development of an Intelligent Methanol-Fueled New Energy Ship (No.: JCKY2021414B011), the RO-RO Passenger Ship Efficient Construction Process and Key Technology Research (No.: CJ07N20), and the Intelligent Methanol-Fueled New Energy Ship R&D Project (Guangdong Natural Resources Cooperation [2021] No. 44).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they are also part of ongoing research.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this work.

References

1. Salem, M.H.; Li, Y.; Liu, Z.; Abdel Tawab, A.M. A Transfer Learning and Optimized CNN Based Maritime Vessel Classification System. Appl. Sci. 2023, 13, 1912.
2. Dev, A.K.; Saha, M. Analysis of Hull Coating Renewal in Ship Repairing. J. Ship Prod. Des. 2017, 33, 197–211.
3. Bu, H.; Yuan, X.; Niu, J.; Yu, W.; Ji, X.; Lyu, H.; Zhou, H. Ship Painting Process Design Based on IDBSACN-RF. Coatings 2021, 11, 1458.
4. Cho, D.-Y.; Swan, S.; Kim, D.; Cha, J.-H.; Ruy, W.-S.; Choi, H.-S.; Kim, T.-S. Development of paint area estimation software for ship compartments and structures. Int. J. Nav. Archit. Ocean Eng. 2016, 8, 198–208.
5. Bu, H.; Ji, X.; Zhang, J.; Lyu, H.; Yuan, X.; Pang, B.; Zhou, H. A Knowledge Acquisition Method of Ship Coating Defects Based on IHQGA-RS. Coatings 2022, 12, 292.
6. Xin, Y.; Henan, B.; Jianmin, N.; Wenjuan, Y.; Honggen, Z.; Xingyu, J.; Pengfei, Y. Coating matching recommendation based on improved fuzzy comprehensive evaluation and collaborative filtering algorithm. Sci. Rep. 2021, 11, 14035.
7. Davies, J.; Truong-Ba, H.; Cholette, M.E.; Will, G. Optimal inspections and maintenance planning for anti-corrosion coating failure on ships using non-homogeneous Poisson Processes. Ocean Eng. 2021, 238, 109695.
8. Bu, H.; Ji, X.; Yuan, X.; Han, Z.; Li, L.; Yan, Z. Calculation of coating consumption quota for ship painting: A CS-GBRT approach. J. Coat. Technol. Res. 2020, 17, 1597–1607.
9. Liang, K.; Liu, F.; Zhang, Y. Household Power Consumption Prediction Method Based on Selective Ensemble Learning. IEEE Access 2020, 8, 95657–95666.
10. Barua, S.; Islam, M.; Yao, X.; Murase, K. MWMOTE-Majority Weighted Minority Oversampling Technique for Imbalanced Data Set Learning. IEEE Trans. Knowl. Data Eng. 2014, 26, 405–425.
11. Ji, X.; Wang, J.; Li, Y.; Sun, Q.; Jin, S.; Quek, T.Q.S. Data-Limited Modulation Classification with a CVAE-Enhanced Learning Model. IEEE Commun. Lett. 2020, 24, 2191–2195.
12. Garciarena, U.; Mendiburu, A.; Santana, R. Analysis of the transferability and robustness of GANs evolved for Pareto set approximations. Neural Netw. 2020, 132, 281–296.
13. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.C.; Bengio, Y. Generative Adversarial Networks. arXiv 2014.
14. Schellenberg, M.; Gröhl, J.; Dreher, K.K.; Nölke, J.-H.; Holzwarth, N.; Tizabi, M.D.; Seitel, A.; Maier-Hein, L. Photoacoustic image synthesis with generative adversarial networks. Photoacoustics 2022, 28, 100402.
15. Bharti, V.; Biswas, B. EMOCGAN: A novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Comput. Appl. 2021, 34, 21433–21447.
16. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.P.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arXiv 2016.
17. Kavousi-Fard, A.; Dabbaghjamanesh, M.; Jin, T.; Su, W.; Roustaei, M. An Evolutionary Deep Learning-Based Anomaly Detection Model for Securing Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4478–4486.
18. Zou, L.; Zhang, H.; Wang, C.; Wu, F.; Gu, F. MW-ACGAN: Generating Multiscale High-Resolution SAR Images for Ship Detection. Sensors 2020, 20, 6673.
19. Fiore, U.; De Santis, A.; Perla, F.; Zanetti, P.; Palmieri, F. Using generative adversarial networks for improving classification effectiveness in credit card fraud detection. Inf. Sci. 2017, 479, 448–455.
20. Li, J.; Zhang, J.; Gong, X.; Lu, S. Evolutionary Generative Adversarial Networks with Crossover Based Knowledge Distillation. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–8.
21. Chen, S.; Wang, W.; Xia, B.; You, X.; Peng, Q.; Cao, Z.; Ding, W. CDE-GAN: Cooperative Dual Evolution-Based Generative Adversarial Network. IEEE Trans. Evol. Comput. 2021, 25, 986–1000.
22. Wang, C.; Xu, C.; Yao, X.; Tao, D. Evolutionary Generative Adversarial Networks. IEEE Trans. Evol. Comput. 2019, 23, 921–934.
23. Xiao, Y.; Li, W.; Qiang, S.; Li, Q.; Xiao, H.; Liu, Y. A Rumor & Anti-Rumor Propagation Model Based on Data Enhancement and Evolutionary Game. IEEE Trans. Emerg. Top. Comput. 2022, 10, 690–703.
24. Chen, M.; Yu, R.; Xu, S.; Luo, Y.; Yu, Z. An Improved Algorithm for Solving Scheduling Problems by Combining Generative Adversarial Network with Evolutionary Algorithms. In Proceedings of the Computer Science and Application Engineering, Sanya, China, 22–24 October 2019.
25. Erivaldo, F.F.; Gary, G.Y. Pruning of generative adversarial neural networks for medical imaging diagnostics with evolution strategy. Inf. Sci. 2021, 558, 91–102.
26. Zheng, Y.-J.; Gao, C.-C.; Huang, Y.-J.; Sheng, W.-G.; Wang, Z. Evolutionary ensemble generative adversarial learning for identifying terrorists among high-speed rail passengers. Expert Syst. Appl. 2022, 261, 118430.
27. Meng, A.; Chen, S.; Ou, Z.; Xiao, J.; Zhang, J.; Zhang, Z.; Liang, R.; Zhang, Z.; Xian, Z.; Wang, C.; et al. A novel few-shot learning approach for wind power prediction applying secondary evolutionary generative adversarial network. Energy 2022, 210, 125276.
28. He, J.; Zhu, Q.; Zhang, K.; Yu, P.; Tang, J. An evolvable adversarial network with gradient penalty for COVID-19 infection segmentation. Appl. Soft Comput. 2021, 113, 107947.
29. Cai, X.; Lan, Y.; Zhang, Z.; Wen, J.; Cui, Z.; Zhang, W. A Many-objective Optimization based Federal Deep Generation Model for Enhancing Data Processing Capability in IOT. IEEE Trans. Ind. Inform. 2021, 19, 561–569.
30. Hazra, D.; Kim, M.-R.; Byun, Y.-C. Generative Adversarial Networks for Creating Synthetic Nucleic Acid Sequences of Cat Genome. Int. J. Mol. Sci. 2022, 23, 3701.
31. Talas, L.; Fennell, J.G.; Kjernsmo, K.; Cuthill, I.C.; Scott-Samuel, N.E.; Baddeley, R.J. CamoGAN: Evolving optimum camouflage with Generative Adversarial Networks. Methods Ecol. Evol. 2020, 11, 240–247.
32. Chen, Y.; Wang, Y.; Kirschen, D.S.; Zhang, B. Model-Free Renewable Scenario Generation Using Generative Adversarial Networks. IEEE Trans. Power Syst. 2018, 33, 3265–3275.
33. Han, C.; Wang, J. Face Image Inpainting With Evolutionary Generators. IEEE Signal Process. Lett. 2021, 28, 190–193.
34. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of wasserstein GANs. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 5769–5779.
35. Yin, H.; Ou, Z.; Zhu, Z.; Xu, X.; Fan, J.; Meng, A. A novel asexual-reproduction evolutionary neural network for wind power prediction based on generative adversarial networks. Energy Convers. Manag. 2021, 247, 114714.
36. Gong, X.; Jia, L.; Li, N. Research on mobile traffic data augmentation methods based on SA-ACGAN-GN. Math. Biosci. Eng. 2022, 19, 11512–11532.
37. Li, Y.-H.; Aslam, M.S.; Harfiya, L.N.; Chang, C.-C. Conditional Wasserstein Generative Adversarial Networks for Rebalancing Iris Image Datasets. IEICE Trans. Inf. Syst. 2021, E104.D, 1450–1458.
38. Liang, Z.; Xu, X.; Liu, L.; Tu, Y.; Zhu, Z. Evolutionary Many-Task Optimization Based on Multisource Knowledge Transfer. IEEE Trans. Evol. Comput. 2022, 26, 319–333.
39. Hakimi, D.; Oyewola, D.O.; Yahaya, Y.; Bolarin, G. Comparative Analysis of Genetic Crossover Operators in Knapsack Problem. J. Appl. Sci. Environ. Manag. 2016, 20, 593.
40. Xue, Y.; Zhu, H.; Liang, J.; Słowik, A. Adaptive crossover operator based multi-objective binary genetic algorithm for feature selection in classification. Knowl.-Based Syst. 2021, 227, 107218.
41. Hu, B.; Xiao, H.; Yang, N.; Jin, H.; Wang, L. A hybrid approach based on double roulette wheel selection and quadratic programming for cardinality constrained portfolio optimization. Concurr. Comput. Pract. Exp. 2021, 34, e6818.
42. Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G. Completely Automated CNN Architecture Design Based on Blocks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1242–1254.
43. Kramer, O. Machine Learning for Evolution Strategies; Springer: Berlin/Heidelberg, Germany, 2016; Volume 20.
44. Rezaei, M.; Näppi, J.J.; Lippert, C.; Meinel, C.; Yoshida, H. Generative multi-adversarial network for striking the right balance in abdominal image segmentation. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1847–1858.
45. Durugkar, I.P.; Gemp, I.; Mahadevan, S. Generative Multi-Adversarial Networks. arXiv 2016.
46. Albuquerque, I.; Monteiro, J.; Doan, T.; Considine, B.; Falk, T.; Mitliagkas, I. Multi-objective training of Generative Adversarial Networks with multiple discriminators. arXiv 2019.
47. Hao, J.; Wang, C.; Yang, G.; Gao, Z.; Zhang, J.; Zhang, H. Annealing Genetic GAN for Imbalanced Web Data Learning. IEEE Trans. Multimed. 2022, 24, 1164–1174.
48. Hao, J.; Wang, C.; Zhang, H.; Yang, G. Annealing genetic GAN for minority oversampling. In Proceedings of the 31st British Machine Vision Conference (BMVC 2020), Virtual Event, 7–10 September 2020; pp. 243.1–243.12.
49. Yun, J.P.; Shin, W.C.; Koo, G.; Kim, M.S.; Lee, C.; Lee, S.J. Automated defect inspection system for metal surfaces based on deep learning and data augmentation. J. Manuf. Syst. 2020, 55, 317–324.
50. Yang, Y.; Hu, Y.; Zhang, X.; Wang, S. Two-Stage Selective Ensemble of CNN via Deep Tree Training for Medical Image Classification. IEEE Trans. Cybern. 2021, 52, 9194–9207.
51. Xu, Y.; Yu, Z.; Cao, W.; Chen, C.L.P. Adaptive Dense Ensemble Model for Text Classification. IEEE Trans. Cybern. 2022, 52, 7513–7526.
52. Ma, T.; Yu, T.; Wu, X.; Cao, J.; Al-Abdulkarim, A.; Al-Dhelaan, A.; Al-Dhelaan, M. Multiple clustering and selecting algorithms with combining strategy for selective clustering ensemble. Soft Comput. 2020, 20, 15129–15141.
53. Zhang, H.; Wu, S.; Zhang, X.; Han, L.; Zhang, Z. Slope stability prediction method based on the margin distance minimization selective ensemble. Catena 2022, 212, 106055.
54. La Fé-Perdomo, I.; Ramos-Grez, J.A.; Jeria, I.; Guerra, C.; Barrionuevo, G.O. Comparative analysis and experimental validation of statistical and machine learning-based regressors for modeling the surface roughness and mechanical properties of 316L stainless steel specimens produced by selective laser melting. J. Manuf. Process. 2022, 80, 666–682.
55. Naqvi, F.B.; Shad, M.Y. Seeking a balance between population diversity and premature convergence for real-coded genetic algorithms with crossover operator. Evol. Intel. 2022, 15, 2651–2666.
56. Zhou, J.; Wu, Z.; Xue, Y.; Li, M.; Zhou, D. Network unknown-threat detection based on a generative adversarial network and evolutionary algorithm. Int. J. Intell. Syst. 2021, 37, 4307–4328.
57. Zhang, H.; Wu, S.; Zhang, Z. Prediction of Uniaxial Compressive Strength of Rock via Genetic Algorithm—Selective Ensemble Learning. Nat. Resour. Res. 2022, 31, 1721–1737.
58. Xu, Y.; Yu, Z.; Cao, W.; Chen, C.L.P.; You, J.J. Adaptive Classifier Ensemble Method Based on Spatial Perception for High-Dimensional Data Classification. IEEE Trans. Knowl. Data Eng. 2019, 33, 2847–2862.
59. Amgad, M.M.; Enrique, O. Selective ensemble of classifiers trained on selective samples. Neurocomputing 2022, 482, 197–211.
60. Lin, C.; Chen, W.; Qiu, C.; Wu, Y.; Krishnan, S.; Zou, Q. LibD3C: Ensemble classifiers with a clustering and dynamic selection strategy. Neurocomputing 2014, 123, 424–435.
61. Wolfe, K.; Seaman, M.A. The influence of data characteristics on interrater agreement among visual analysts. J. Appl. Behav. Anal. 2023.
62. Wei, L.; Wan, S.; Guo, J.; Wong, K.K. A novel hierarchical selective ensemble classifier with bioinformatics application. Artif. Intell. Med. 2017, 83, 82–90.
63. Chaofan, D.; Xiaoping, L.P. Classification of Imbalanced Electrocardiosignal Data using Convolutional Neural Network. Comput. Methods Programs Biomed. 2022, 214, 106483.
64. Douzas, G.; Bacao, F. Effective data generation for imbalanced learning using conditional generative adversarial networks. Expert Syst. Appl. 2018, 91, 464–471.
65. Liu, Z.; Wang, J. CatGAN: Category-Aware Generative Adversarial Networks with Hierarchical Evolutionary Learning for Category Text Generation. Proc. AAAI Conf. Artif. Intell. 2020, 34, 8425–8432.
66. Wu, M. Heuristic parallel selective ensemble algorithm based on clustering and improved simulated annealing. J. Supercomput. 2018, 76, 3702–3712.
67. Xue, Y.; Tong, W.; Neri, F.; Zhang, Y. PEGANs: Phased Evolutionary Generative Adversarial Networks with Self-Attention Module. Mathematics 2022, 10, 2792.
68. Kim, D.; Joo, D.; Kim, J. TiVGAN: Text to Image to Video Generation with Step-by-Step Evolutionary Generator. IEEE Access 2020, 8, 153113–153122.
69. Radford, A.; Metz, L. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015.
70. Wu, Z.; He, C.; Yang, L.; Kuang, F. Attentive evolutionary generative adversarial network. Appl. Intell. 2020, 51, 1747–1761.
71. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.K.; Wang, Z.; Smolley, S.P. Least Squares Generative Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2813–2821.
72. Lin, Q.; Fang, Z.; Chen, Y.; Tan, K.C.; Li, Y. Evolutionary Architectural Search for Generative Adversarial Networks. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 6, 783–794.
Figure 1. The framework of the proposed image generation method for ship coating defects. (a) The general framework of the original E-GAN. (b) The overall pipeline of the proposed IGASEN-EMWGAN. E-GAN evolves a population of generators {G_θj} in a given dynamic environment, denoted by the discriminators D_φn (n = 1, 2, …, τ), by NECS. Each evolutionary step of the generators is divided into four stages: variation mutation, crossover mutation, evaluation, and selection, while the discriminators go through five stages: generation, encoding, evolution (variation mutation, crossover mutation, evaluation, and selection based on ensemble pruning), update, and combination.
Figure 2. The graph of the variation mutation function (loss function) received by generator G when discriminator D is given: IGASEN-EMWGAN determines at random, with equal probability (1/2), whether to use the hinge mutation or the wasserstein_gp mutation.
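To make the stochastic mutation choice in Figure 2 concrete, here is a minimal PyTorch sketch that draws one of two candidate generator objectives with equal probability. The hinge formulation shown is one common variant and is an assumption rather than the paper's exact definition; the gradient penalty of wasserstein_gp enters only the discriminator's loss, so it does not appear in the generator objective below.

```python
import random
import torch

def g_loss_wasserstein(d_fake):
    # Wasserstein-style generator objective: push the critic's score on
    # generated samples upward.
    return -d_fake.mean()

def g_loss_hinge(d_fake):
    # One plausible hinge-style generator objective (assumption): penalize
    # generated samples scored below the margin of 1.
    return torch.relu(1.0 - d_fake).mean()

def sample_variation_mutation():
    # Per Figure 2: choose the hinge or wasserstein_gp mutation at random
    # with probability 1/2 each.
    return random.choice([g_loss_hinge, g_loss_wasserstein])

# Toy usage: stand-in discriminator scores for a mini-batch of 64 samples.
scores = torch.randn(64)
mutation = sample_variation_mutation()
print(mutation.__name__, float(mutation(scores)))
```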
Figure 3. (a) Schematic illustration of conventional GANs getting stuck in a local optimal solution. In this case, traditional GANs fluctuate around a local optimum when trained on a minority class of an unbalanced dataset. (b) The proposed method incorporates selective ensemble learning and an improved genetic algorithm based on simulated annealing into the training of the multi-discriminators in IGASEN-EMWGAN. When the discriminators in E-GANs become trapped in local optima, the SA-improved GA and selective ensemble learning help them escape and approach the global optimum. The proposed method is therefore conducive to improving generalization performance and the tradeoff between discrimination accuracy and diversity.
Figure 4. Five-fold cross-validation method.
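A minimal sketch of the five-fold split in Figure 4 using scikit-learn's KFold. The array X is a hypothetical stand-in for the defect image set, and assigning each fold's split to a different base discriminator is our illustrative assumption about how the sample perturbation is realized.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical stand-in for 1000 flattened 128x128x3 defect images.
X = np.random.rand(1000, 128 * 128 * 3)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    # Each base discriminator sees a different train/validation split,
    # which perturbs the samples and decorrelates the ensemble members.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")
```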
Figure 5. A typical chromosome for the selection of τ base discriminators.
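The chromosome in Figure 5 can be read as a binary mask over the pool of base discriminators. The sketch below shows such a chromosome together with the two-point (TP) crossover operator examined in the ablation of Figure 9; the encoding details are assumptions based on the figures, not the paper's exact code.

```python
import random

def random_chromosome(n_base):
    # Bit i = 1 keeps base discriminator i in the ensemble; 0 drops it.
    return [random.randint(0, 1) for _ in range(n_base)]

def two_point_crossover(parent_a, parent_b):
    # The TP operator referenced in Figure 9: swap the gene segment
    # between two randomly chosen cut points.
    i, j = sorted(random.sample(range(1, len(parent_a)), 2))
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b

a, b = random_chromosome(16), random_chromosome(16)
print(two_point_crossover(a, b))
```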
Figure 6. The specific flowchart of updating the discriminators in IGASEN-EMWGAN.
Figure 7. Experimental results on the ship coating defect database for hyperparameter analysis. (a) Inception score (IS) evaluation of IGASEN-EMWGAN under different settings of the balance hyperparameter λ = {0.01, 0.1, 0.25, 0.5, 0.75, 1}. (b) FID value and runtime of IGASEN-EMWGAN under different numbers of base discriminators τ = {1, 2, 4, 8, 16} (w/o means without, w/ means with).
Figure 8. Experimental results of the hyperparameter analysis of simulated annealing (SA) under different training epochs.
Figure 9. Ablation study on the influence of the two-point crossover operator (TP) under different iterations.
Table 1. The definition of N_ij^xy.

|                                | Discriminator (j): Predict (0) | Discriminator (j): Predict (1) |
|--------------------------------|--------------------------------|--------------------------------|
| Discriminator (i): Predict (0) | N_ij^00                        | N_ij^10                        |
| Discriminator (i): Predict (1) | N_ij^01                        | N_ij^11                        |
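Counts of this kind are typically accumulated from the validation predictions of each pair of base discriminators and then fed into a pairwise diversity measure, for example the disagreement measure (N_ij^01 + N_ij^10) / N; whether the paper uses exactly that measure is not shown here, so treat the following counting sketch, with hypothetical toy predictions, as illustrative only.

```python
import numpy as np

def pairwise_counts(pred_i, pred_j):
    """Accumulate the Table 1 contingency counts: how often discriminator i
    predicts x while discriminator j predicts y on the same samples."""
    pred_i, pred_j = np.asarray(pred_i), np.asarray(pred_j)
    return {(x, y): int(np.sum((pred_i == x) & (pred_j == y)))
            for x in (0, 1) for y in (0, 1)}

# Toy binary predictions from two base discriminators on ten samples.
p_i = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]
p_j = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1]
print(pairwise_counts(p_i, p_j))
```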
Table 2. The description of the unbalanced ship coating defect dataset (sample image thumbnails omitted).

| Category of Defects | Sample Number (N) | IR     |
|---------------------|-------------------|--------|
| Holiday coating     | 5                 | 107.2  |
| Sagging             | 86                | 6.23   |
| Orange skin         | 536               | 1      |
| Cracking            | 77                | 6.96   |
| Exudation           | 71                | 7.54   |
| Wrinkling           | 63                | 8.51   |
| Bitty appearance    | 13                | 41.23  |
| Blistering          | 108               | 4.96   |
| Pinholing           | 26                | 20.62  |
| Delamination        | 8                 | 67     |
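The IR column is consistent with the usual definition IR = N_majority / N_class, with orange skin (536 samples) as the majority class; the snippet below reproduces the column from the sample numbers (values agree with the table up to rounding).

```python
# Recompute Table 2's imbalance ratios: IR = N_majority / N_class.
sample_counts = {
    "Holiday coating": 5, "Sagging": 86, "Orange skin": 536,
    "Cracking": 77, "Exudation": 71, "Wrinkling": 63,
    "Bitty appearance": 13, "Blistering": 108, "Pinholing": 26,
    "Delamination": 8,
}
majority = max(sample_counts.values())  # orange skin, 536 samples
for defect, n in sample_counts.items():
    print(f"{defect}: N={n}, IR={majority / n:.2f}")
```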
Table 3. The architectures of the generator and base discriminator networks used in the proposed IGASEN-EMWGAN model.

| Generator Network | Discriminator Network |
|---|---|
| Input: random noise z ~ p_z (100 dimensions) and class label C (10 dimensions) | Input: images, (128,128,3) |
| [layer 1] Embedding, Dense, BN; Reshape to (8,8,1024) and Concatenate; ReLU | [layer 1] Conv2D (64,64,128), stride = 2; Dropout; LeakyReLU |
| [layer 2] Conv2DT (8,8,1024), stride = 2; ReLU | [layer 2] Conv2D (32,32,256), stride = 2; BN; Dropout; LeakyReLU |
| [layer 3] Conv2DT (16,16,512), stride = 2; ReLU | [layer 3] Conv2D (16,16,512), stride = 2; BN; Dropout; LeakyReLU |
| [layer 4] Conv2DT (32,32,256), stride = 2; ReLU | [layer 4] Conv2D (8,8,1024), stride = 2; BN; Dropout; LeakyReLU |
| [layer 5] Conv2DT (64,64,128), stride = 2; ReLU | [layer 5] Flatten (1,1,1), Dropout; Dense; Sigmoid/Least Squares |
| [layer 6] Conv2DT (128,128,3), stride = 2; Tanh | [layer 6] Flatten (1,1,1), Dropout; Dense; Softmax |
| Output: generated images, (128,128,3) | Output: real or fake (probability); sample class label C |
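For orientation, the following PyTorch sketch loosely renders the generator column of Table 3 (Conv2DT mapping to ConvTranspose2d). The exact placement of the embedding and BatchNorm, and the transposed-convolution settings (kernel 4, stride 2, padding 1), are our assumptions, chosen only so that the output shapes match the table.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Loose rendering of Table 3's generator; layer widths follow the
    table, but the detailed wiring is an illustrative assumption."""
    def __init__(self, z_dim=100, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        self.fc = nn.Sequential(
            nn.Linear(z_dim + n_classes, 8 * 8 * 1024),  # layer 1: Dense
            nn.BatchNorm1d(8 * 8 * 1024),                # layer 1: BN
            nn.ReLU(),
        )
        def up(c_in, c_out):  # one Conv2DT block; doubles the resolution
            return nn.Sequential(
                nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                nn.ReLU(),
            )
        self.deconv = nn.Sequential(
            up(1024, 512),   # 8x8  -> 16x16  (layer 3)
            up(512, 256),    # 16x16 -> 32x32 (layer 4)
            up(256, 128),    # 32x32 -> 64x64 (layer 5)
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1),  # -> 128x128
            nn.Tanh(),       # layer 6
        )

    def forward(self, z, labels):
        h = torch.cat([z, self.embed(labels)], dim=1)  # noise + label
        h = self.fc(h).view(-1, 1024, 8, 8)            # reshape to (8,8,1024)
        return self.deconv(h)

g = Generator()
imgs = g(torch.randn(4, 100), torch.randint(0, 10, (4,)))
print(imgs.shape)  # torch.Size([4, 3, 128, 128])
```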
Table 4. The experimental parameter settings of the proposed IGASEN-EMWGAN model.

| Hyperparameter | Default Value |
|---|---|
| Number of iterations | 2000 |
| Population size | 100 |
| Updating steps of discriminator per iteration | 2 |
| Number of variation mutations | 2 |
| Number of crossover mutations | 1 |
| Probability of variation mutation | 0.1 |
| Probability of crossover mutation | 0.9 |
| Mini-batch size | 64 |
| Learning rate of generator | 0.0004 |
| Learning rate of discriminator | 0.0001 |
| Dropout | 0.5 |
| Slope of LeakyReLU | 0.2 |
| Optimizer | RMSProp |
| Initial learning rate of optimizer | 0.0002 |
| Initial annealing temperature | 100 |
| Annealing coefficient | 0.9 |
Table 5. Comparison among various GANs on the ship coating defect dataset in terms of IS and FID scores. "-" indicates that no result was reported. (↑ means the higher the better; ↓ means the lower the better.)

| Methods | Inception Score (IS) ↑ | Fréchet Inception Distance (FID) ↓ |
|---|---|---|
| Real data | 11.68 ± 0.14 | 7.6 |
| Standard CNN | - | - |
| DCGAN [69] | 6.52 ± 0.09 | 36.3 |
| WGAN-GP [34] | 6.61 ± 0.33 | 39.59 |
| E-GAN [22] | 6.93 ± 0.08 | 35.3 |
| AEGAN [70] | 6.43 ± 0.52 | 49.68 |
| LSGAN [71] | - | 44.16 |
| EASGAN [72] | 7.48 ± 0.06 | 21.94 |
| E-GAN-GP (μ = 1) [22] | 7.18 ± 0.05 | 32.8 |
| E-GAN-GP (μ = 2) [22] | 7.25 ± 0.11 | 31.4 |
| E-GAN-GP (μ = 4) [22] | 7.33 ± 0.08 | 29.5 |
| E-GAN-GP (μ = 8) [22] | 7.35 ± 0.07 | 27.4 |
| (ours) IGASEN-EMWGAN (τ = 1) | 7.11 ± 0.06 | 32.3 |
| (ours) IGASEN-EMWGAN (τ = 2) | 7.18 ± 0.11 | 29.7 |
| (ours) IGASEN-EMWGAN (τ = 4) | 7.31 ± 0.09 | 27.8 |
| (ours) IGASEN-EMWGAN (τ = 8) | 7.56 ± 0.07 | 26.3 |
| (ours) IGASEN-EMWGAN-GP (τ = 1) | 7.21 ± 0.05 | 30.2 |
| (ours) IGASEN-EMWGAN-GP (τ = 2) | 7.33 ± 0.07 | 28.9 |
| (ours) IGASEN-EMWGAN-GP (τ = 4) | 7.68 ± 0.10 | 26.5 |
| (ours) IGASEN-EMWGAN-GP (τ = 8) | **7.73 ± 0.06** ¹ | **25.4** ² |

¹ The bold value indicates the highest (best) IS among the GANs compared. ² The bold value indicates the lowest (best) FID among the GANs compared.
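For reference, the FID values in Table 5 follow the standard definition FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}), computed between Gaussians fitted to Inception features of real and generated images. The minimal NumPy/SciPy sketch below uses random stand-in features; in practice the features come from a pretrained Inception network.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """Frechet Inception Distance between two Gaussians fitted to the
    Inception features of real and generated images."""
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):   # numerical noise can introduce tiny
        covmean = covmean.real     # imaginary parts; discard them
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

# Toy example: random 64-dim "features" standing in for Inception
# activations of real and generated defect images.
real = np.random.randn(500, 64)
fake = np.random.randn(500, 64) + 0.1
score = fid(real.mean(0), np.cov(real, rowvar=False),
            fake.mean(0), np.cov(fake, rowvar=False))
print(f"FID = {score:.2f}")
```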