Article

Intelligent Dendritic Neural Model for Classification Problems

1 School of Computer Engineering, Jiangsu Ocean University, Lianyungang 222005, China
2 The Ministry of Education Key Laboratory of TianQin Project, Sun Yat-sen University, Zhuhai 519082, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(1), 11; https://doi.org/10.3390/sym14010011
Submission received: 23 October 2021 / Revised: 23 November 2021 / Accepted: 7 December 2021 / Published: 22 December 2021

Abstract

In recent years, the dendritic neural model has been widely employed in various fields because of its simple structure and low cost. Traditional numerical optimization is ineffective for the parameter optimization problem of the dendritic neural model; it easily falls into local optima during optimization, resulting in poor model performance. This paper is the first to propose an intelligent dendritic neural model, which uses intelligent optimization algorithms to train the model instead of the backpropagation algorithm used by the traditional dendritic neural model. The experiments compare the performance of ten representative intelligent optimization algorithms on six classification datasets. The optimal combination of user-defined parameters for the model is determined systematically using Taguchi's method. The results show that the performance of the intelligent dendritic neural model is significantly better than that of the traditional dendritic neural model. The intelligent dendritic neural model achieves small classification errors and high accuracy, which provides an effective approach for applying the dendritic neural model to engineering classification problems. In addition, among the ten intelligent optimization algorithms, an evolutionary algorithm called biogeography-based optimization shows excellent performance, quickly obtaining high-quality solutions with an excellent convergence speed.

1. Introduction

Since the dawn of the big data era, every corner of human society has accumulated a large amount of data. There is an urgent need for computer algorithms that can properly evaluate and utilize data, and machine learning meets this need of the era. Classification problems, as a hot topic in machine learning, have widespread applications in reality [1], such as spam recognition, speech recognition, tumor recognition, bank credit loan business and so on [2,3]. To solve various classification problems, many machine learning techniques have been proposed, such as decision trees [4], naïve Bayesian classifiers [5], support vector machines [6], artificial neural networks [7], k-nearest neighbors [8], ensemble learning [9] and so on.
Due to the high-dimensional characteristics of complex nonlinear problems, traditional methods cannot solve them effectively. Among machine learning methods, the artificial neural network simulates the information processing mechanism of biological neural networks; it fits nonlinear problems well and has been used successfully in text classification and in pattern and speech recognition. In 1943, McCulloch and Pitts first proposed an artificial neural network [10]. This model is relatively simple but significant. In 1957, Rosenblatt proposed the perceptron model, which was based on the M–P model [11]. The perceptron embodies the fundamental principles of modern neural networks, and its structure is consistent with real biological nerves. However, the linear perceptron has limited capability and cannot even solve the simple XOR problem. In 1986, building on research into multilayer neural networks, Rumelhart put forward the backpropagation (BP) algorithm for weight correction in multilayer neural networks [12]. It solved the learning problem of multilayer feedforward neural networks and proved that a multilayer neural network has a strong learning capacity, which can complete many learning tasks and solve a large number of practical problems.
With the deepening of research, engineering problems become more and more complex. The field of machine learning is also evolving quickly, and scholars have proposed many different neural networks, such as recurrent neural networks [13], convolutional neural networks [14], feedforward neural networks [15] and the dendritic neural model [16,17,18].
The artificial neural network was originally constructed in the form of a perceptron [19], and almost all modern artificial neural networks use this kind of structure, but experiments in physiology have found that biological neurons are far more complex than the above model. Studies have shown that dendritic structures contain a large number of active ion channels, and a synaptic input may have a linear effect on its adjacent synaptic inputs [20]. In addition, many studies have shown that biological plasticity mechanisms also play a local role in dendrites [21]. These characteristics greatly promote the role of local nonlinear components in neuron output and endow neural networks with higher information processing capability [22]. Different from other neural models, the dendritic neural model considers the nonlinearity of synapses and simulates the process of information transmission within a neuron [17]. Because it is easy to interpret and simple to implement, it has been used by many scholars to solve various complex problems, for example, tourism economy forecasting [23], bankruptcy prediction [24], breast cancer classification [25], liver disorders [26] and so on [27,28,29,30,31,32,33]. References [25,26] use the traditional dendritic neural model with the backpropagation algorithm to optimize weights and thresholds. Backpropagation is gradient descent in essence; it has poor robustness and is extremely prone to falling into local optima [34]. References [27,35], respectively, proposed using particle swarm optimization and a states of matter search algorithm as the optimization algorithm. However, their comparative experiments are limited, and systematic, complete research on the application of intelligent optimization algorithms in dendritic neural models is lacking.
With the continuous innovation of evolutionary computation, intelligent algorithms have developed rapidly and found a wide range of practical applications in fields such as model symmetry/asymmetry, model architecture and hyper-parameters, clustering and prediction, becoming a novel way to solve traditional optimization problems in machine learning.
Intelligent optimization algorithms form a growing family of algorithms. With continuous research and development, a variety of algorithms has emerged, most of them inspired by biological evolution in nature. An example is the genetic algorithm (GA), which maintains and improves multiple candidate solutions using a population-based method and uses population characteristics to guide the search [36,37]; another typical algorithm is the differential evolution algorithm (DE), a population-based heuristic search [38,39]. Moreover, inspired by biogeography theory, the biogeography-based optimization algorithm (BBO) was proposed [40,41], which is based on the study of mathematical models of biological species migration. As another type of intelligent optimization algorithm, based on mathematical statistics, the estimation of distribution algorithm realizes population search and evolution by constructing probability models; population-based incremental learning (PBIL) is a classical estimation of distribution algorithm [42,43]. A further kind of intelligent optimization algorithm is the swarm intelligence algorithms inspired by natural phenomena, such as particle swarm optimization (PSO) [44,45], ant colony optimization (ACO) [46,47], the artificial bee colony algorithm (ABC) [48,49] and the whale optimization algorithm (WOA) [50,51]. In recent years, the Harris hawks optimization algorithm (HHO), inspired by the cooperative group behavior of the Harris hawk, and the chimp optimization algorithm (ChOA), inspired by the hunting behavior of chimpanzee groups, have emerged as new swarm intelligence optimization algorithms [52,53].
The contribution of this paper is as follows:
  • This paper is the first to propose an intelligent dendritic neural model, which uses an intelligent optimization algorithm instead of the traditional backpropagation algorithm to optimize the model. The experimental results show that the performance of the intelligent dendritic neural model is superior to that of the traditional dendritic neural model, which provides an effective approach for the application of the dendritic neural model to engineering classification problems.
  • Through the comparison of systematic experiments, an effective intelligent learning algorithm for the dendritic neural model on classification problems is determined. By comparing different types of intelligent optimization algorithms, the results show that the biogeography-based optimization algorithm has excellent performance and can quickly obtain high-quality solutions with an excellent convergence speed.

2. Dendritic Neural Model

The dendritic neural model consists of four layers: the synaptic layer, the dendritic layer, the membrane layer and the soma layer. Each layer has corresponding functions and characteristics. The detailed structure of the whole model is shown in Figure 1, where $X = (x_1, x_2, x_3, \ldots, x_n)$ is the input of the model, the m branches represent m dendritic layers and O is the actual output of the model. There are weights and thresholds in each synapse. During training, the algorithm continuously adjusts the weights and thresholds in each synapse to optimize the performance of the model. The specific functions of each layer are introduced in detail below.

2.1. Synaptic Layer

The input vector $X = (x_1, x_2, x_3, \ldots, x_n)$ enters the model through the synapses, and the output of the synaptic layer is obtained through activation by a sigmoid function. The nodes in the synaptic layer contain the weights and thresholds of the dendritic neural network. During training, the intelligent optimization algorithm needs to optimize all the weights and thresholds in the synaptic layer. The expression of the synaptic layer is (1).
$Y_{ij} = \dfrac{1}{1 + e^{-k(\omega_{ij} x_i - \theta_{ij})}}$ (1)
where k is a user-defined constant, and $\omega_{ij}$ and $\theta_{ij}$ are the corresponding weight and threshold. According to the values of $\omega_{ij}$ and $\theta_{ij}$, there are six situations: ➀ $0 < \theta_{ij} < \omega_{ij}$, ➁ $0 < \omega_{ij} < \theta_{ij}$, ➂ $\omega_{ij} < \theta_{ij} < 0$, ➃ $\theta_{ij} < \omega_{ij} < 0$, ➄ $\theta_{ij} < 0 < \omega_{ij}$ and ➅ $\omega_{ij} < 0 < \theta_{ij}$. In case ➀, the output $Y_{ij}$ is proportional to the input $x_i$, which is called the excitatory state; in case ➂, the output $Y_{ij}$ is inversely proportional to the input $x_i$, which is called the inhibitory state; in cases ➁ and ➅, the output $Y_{ij}$ is always close to 0, which is called the constant-0 state; and in cases ➃ and ➄, the output $Y_{ij}$ is always close to 1, which is called the constant-1 state. Figure 2 shows the details of the four states corresponding to the six situations.

2.2. Dendritic Layer

The dendritic layer multiplies the results of the n synaptic layers connected to it. The whole process can be expressed as (2).
$Z_j = \prod_{i=1}^{n} Y_{ij}$ (2)

2.3. Membrane Layer

The membrane layer is connected to m dendritic branches; the function of the membrane is to perform the sum operation over the results of all branches. The whole process can be expressed as (3).
$V = \sum_{j=1}^{m} Z_j$ (3)

2.4. Soma Layer

Finally, the output of the soma layer is obtained through a sigmoid function, which can be expressed as (4).
$O = \dfrac{1}{1 + e^{-k_{soma}(V - \theta_{soma})}}$ (4)
where $k_{soma}$ and $\theta_{soma}$ are user-defined parameters.
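To make the four-layer structure concrete, the following is a minimal sketch of the forward pass of Equations (1)–(4) in Python/NumPy. The function and variable names are illustrative assumptions, not the authors' original implementation (the experiments in Section 4 were carried out in MATLAB).

```python
import numpy as np

def dnm_forward(x, W, Theta, k=5.0, k_soma=5.0, theta_soma=0.5):
    """Forward pass of the dendritic neural model for one sample.

    x     : input vector of length n
    W     : weight matrix of shape (m, n), one row per dendritic branch
    Theta : threshold matrix of shape (m, n)
    """
    # Synaptic layer, Equation (1): sigmoid of k * (w_ij * x_i - theta_ij)
    Y = 1.0 / (1.0 + np.exp(-k * (W * x - Theta)))        # shape (m, n)
    # Dendritic layer, Equation (2): product over the n synapses of each branch
    Z = np.prod(Y, axis=1)                                 # shape (m,)
    # Membrane layer, Equation (3): sum over the m branches
    V = np.sum(Z)
    # Soma layer, Equation (4): final sigmoid
    return 1.0 / (1.0 + np.exp(-k_soma * (V - theta_soma)))

# Example with n = 3 inputs and m = 2 dendritic branches
rng = np.random.default_rng(0)
x = rng.random(3)
W, Theta = rng.uniform(-1, 1, (2, 3)), rng.uniform(-1, 1, (2, 3))
print(dnm_forward(x, W, Theta))
```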

3. Learning Algorithms

3.1. Backpropagation Algorithm

The traditional dendritic neural model uses the backpropagation algorithm to update the weights and thresholds of the model, adjusting the weights by backpropagating the training error layer by layer through the chain rule. Firstly, the squared error between the actual output $O_p$ and the desired output $T_p$ is obtained as follows in (5).
$E_p = \dfrac{1}{2}(T_p - O_p)^2$ (5)
The synaptic parameters $\omega_{ij}$ and $\theta_{ij}$ are updated along the negative gradient direction; the whole process can be described as (6) and (7).
$\Delta \omega_{ij}(t) = \sum_{p=1}^{P} \dfrac{\partial E_p}{\partial \omega_{ij}}, \quad \Delta \theta_{ij}(t) = \sum_{p=1}^{P} \dfrac{\partial E_p}{\partial \theta_{ij}}$ (6)
$\omega_{ij}(t+1) = \omega_{ij}(t) - \eta\, \Delta \omega_{ij}(t), \quad \theta_{ij}(t+1) = \theta_{ij}(t) - \eta\, \Delta \theta_{ij}(t)$ (7)
where η represents the learning rate; it is usually set to 0.1. According to the structure of the dendritic neural model in Section 2, the whole process of partial differential derivation can be described as (8) and (9).
$\dfrac{\partial E_p}{\partial \omega_{ij}} = \dfrac{\partial E_p}{\partial O_p} \cdot \dfrac{\partial O_p}{\partial V} \cdot \dfrac{\partial V}{\partial Z_j} \cdot \dfrac{\partial Z_j}{\partial Y_{ij}} \cdot \dfrac{\partial Y_{ij}}{\partial \omega_{ij}}, \quad \dfrac{\partial E_p}{\partial \theta_{ij}} = \dfrac{\partial E_p}{\partial O_p} \cdot \dfrac{\partial O_p}{\partial V} \cdot \dfrac{\partial V}{\partial Z_j} \cdot \dfrac{\partial Z_j}{\partial Y_{ij}} \cdot \dfrac{\partial Y_{ij}}{\partial \theta_{ij}}$ (8)
$\dfrac{\partial E_p}{\partial O_p} = -(T_p - O_p), \quad \dfrac{\partial O_p}{\partial V} = \dfrac{k_{soma}\, e^{-k_{soma}(V - \theta_{soma})}}{\left(1 + e^{-k_{soma}(V - \theta_{soma})}\right)^2}, \quad \dfrac{\partial V}{\partial Z_j} = 1, \quad \dfrac{\partial Z_j}{\partial Y_{ij}} = \prod_{l=1,\, l \neq i}^{n} Y_{lj}, \quad \dfrac{\partial Y_{ij}}{\partial \omega_{ij}} = \dfrac{k\, x_i\, e^{-k(\omega_{ij} x_i - \theta_{ij})}}{\left(1 + e^{-k(\omega_{ij} x_i - \theta_{ij})}\right)^2}, \quad \dfrac{\partial Y_{ij}}{\partial \theta_{ij}} = -\dfrac{k\, e^{-k(\omega_{ij} x_i - \theta_{ij})}}{\left(1 + e^{-k(\omega_{ij} x_i - \theta_{ij})}\right)^2}$ (9)
This paper introduces intelligent optimization algorithms into the training of the model. Different from the idea of the backpropagation algorithm, an intelligent optimization algorithm updates ω and θ by iteratively searching for the optimal individual. Assuming that the number of inputs of the dendritic neural model is n and m is the number of dendritic layers, if a vector represents a feasible solution, then $X_i$ ($i \in [1, Q]$) can be expressed as (10).
$X_i = (x_{i1}, x_{i2}, \ldots, x_{i,2mn}) = \{\omega_{11}, \omega_{12}, \ldots, \omega_{mn}, \theta_{11}, \theta_{12}, \ldots, \theta_{mn}\}$ (10)
where $X_i$ denotes the ith feasible solution and ω and θ are the weights and thresholds of the dendritic layers. Generally, an intelligent optimization algorithm first randomly initializes a certain number of feasible solutions and then iteratively optimizes all feasible solutions according to the characteristics of the particular algorithm until the termination conditions are met. The best solution $X_{best}$ gives the optimal weights and thresholds of the dendritic neural model. In the iterative optimization process, the mean squared error (MSE) is used as the loss function to assess the quality of each feasible solution; it is calculated by (11).
$MSE(X_i) = \dfrac{1}{2P} \sum_{p=1}^{P} (T_p - O_p)^2$ (11)
where $T_p$ is the desired output of the pth sample and $O_p$ is the actual output of the pth sample.
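As a sketch of how a candidate solution is encoded and scored (Equations (10) and (11)), the following assumes the `dnm_forward` sketch above; the flattening order of the weights and thresholds is an illustrative assumption.

```python
import numpy as np

def decode(solution, m, n):
    """Split a flat solution vector of length 2*m*n into W and Theta (Equation (10))."""
    W = solution[: m * n].reshape(m, n)
    Theta = solution[m * n :].reshape(m, n)
    return W, Theta

def mse_fitness(solution, X, T, m, n, k=5.0, k_soma=5.0, theta_soma=0.5):
    """Loss of Equation (11): mean squared error over the P training samples."""
    W, Theta = decode(solution, m, n)
    O = np.array([dnm_forward(x, W, Theta, k, k_soma, theta_soma) for x in X])
    return np.sum((T - O) ** 2) / (2 * len(T))
```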

3.2. Genetic Algorithm

Referring to the genetic evolution theories of Darwin and Mendel, the genetic algorithm was first proposed by J. Holland in 1975. It was developed by simulating the biological evolution mechanism found in nature. An individual of the population is called a chromosome, and the constant updating of chromosomes during iteration is called heredity. Heredity consists of three parts: selection, crossover and mutation. Chromosome quality is usually evaluated by a fitness function. The genetic algorithm starts by generating random individuals, evaluating each individual according to a predetermined loss function and assigning it a fitness value. Based on the fitness function, individuals are selected to produce the next generation; this operation inherits the idea of survival of the fittest in nature. The selected individuals then combine to produce a new generation through crossover and mutation. The new generation tends to be better than the previous one because it inherits the good characteristics of its parents, and the whole population gradually moves towards the optimal solution. According to Algorithm 1, the genetic algorithm is mainly divided into the following three parts.
(a)
Selection operator: based on the assessment of individual fitness, the selection operator usually selects individuals with higher fitness and eliminates individuals with lower fitness. Common selection methods include fitness assignment based on proportion or ranking, roulette wheel selection and so on.
Algorithm 1: Genetic Algorithm.
Begin:
  Randomly initialize a population of chromosomes ($\{X_i\}$, $i \in [1, Q]$)
  Evaluate the fitness value for each chromosome using Equation (11)
  while Termination criterion
    Select parent chromosomes by roulette wheel selection
    Generate new chromosomes through single-point crossover and mutation
    Evaluate the fitness values of the new chromosomes
    Replace the population's worst chromosomes with the best new chromosomes
    t = t + 1
   end while
  return the best solution
End
(b)
Crossover operator: in the process of biological evolution in nature, two chromosomes form new chromosomes by gene recombination, so crossover is the core step of the whole process. The design of the crossover operator needs to be analyzed for each specific problem. Familiar crossover operators include single-point crossover, uniform crossover, multi-point crossover and so on.
(c)
Mutation operator: mutation randomly changes genes inherited on the chromosome. Strictly speaking, mutation itself can be seen as a random, auxiliary operator used to generate new individuals.
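The following is a minimal real-coded sketch of Algorithm 1 (roulette wheel selection, single-point crossover and mutation) for minimizing a fitness function such as the MSE of Equation (11); the operator details, rates and bounds are illustrative assumptions, not the exact settings used in the experiments.

```python
import numpy as np

def ga_optimize(fitness, dim, pop_size=50, iters=250, pc=0.8, pm=0.05, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng()
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([fitness(ind) for ind in pop])            # lower is better (MSE)
    for _ in range(iters):
        # Roulette wheel selection on inverted fitness (minimization problem)
        weights = 1.0 / (fit + 1e-12)
        probs = weights / weights.sum()
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        # Single-point crossover between consecutive parents
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                cut = rng.integers(1, dim)
                children[i, cut:], children[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # Mutation: reset randomly chosen genes
        mask = rng.random(children.shape) < pm
        children[mask] = rng.uniform(lo, hi, mask.sum())
        # Elitist replacement: keep the best individual found so far
        best, best_fit = pop[np.argmin(fit)].copy(), fit.min()
        pop = children
        fit = np.array([fitness(ind) for ind in pop])
        worst = np.argmax(fit)
        pop[worst], fit[worst] = best, best_fit
    return pop[np.argmin(fit)], fit.min()
```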

3.3. Differential Evolution Algorithm

In 1996, Rainer Storn and Kenneth Price proposed the differential evolution algorithm to solve Chebyshev polynomial fitting problems. The basic idea of DE is to randomly generate an initial population and randomly select three individuals from it; a new individual is generated by adding the weighted vector difference of two of these individuals to the third according to certain rules. The new individual is then compared with another individual randomly selected from the population. If the new individual is superior to the compared individual, it replaces the old one; otherwise, the new individual is abandoned, the old individual is retained and other individuals are reselected from the population to generate new individuals. Repeating this process, the individuals of the initial population are continuously updated until the population reaches a certain optimal state. As shown in Algorithm 2, the differential evolution algorithm includes mutation, crossover and selection operations, which is very similar to the genetic algorithm, although each operation has a completely different meaning. DE has a variety of mutation strategies; the strategy adopted in the experiments is DE/rand/1, whose formula is shown in (12).
$V_i = X_{r1} + F \cdot (X_{r2} - X_{r3})$ (12)
Algorithm 2: Differential Evolution Algorithm.
Begin:
  F0: Initial mutation operator; CR: Crossover operator
  Q: Population; D: Dimension
  Initialize the population and calculate the fitness of each population
  while Termination criterion
    Adaptive mutation operator: $\lambda = e^{1 - \frac{G}{G + 1 - t}}$, $F = F_0 \cdot 2^{\lambda}$
    for i = 1: Q
     Mutation operator: $V_i = X_{r1} + F \cdot (X_{r2} - X_{r3})$
     Crossover operator: $u_{i,j} = v_{i,j}$ if $rand < CR$ or $j = randi(1, D)$; otherwise $u_{i,j} = x_{i,j}$
     Selection operator: $x_i = u_i$ if $fit(u_i)$ is superior to $fit(x_i)$; otherwise $x_i$ is retained
    end for
    t = t + 1
  end while
  return the best solution
End
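A compact DE/rand/1/bin sketch of Equation (12) and the crossover/selection steps of Algorithm 2 follows; the fixed F and CR values are illustrative assumptions (the adaptive F schedule of Algorithm 2 is omitted for brevity).

```python
import numpy as np

def de_optimize(fitness, dim, pop_size=50, iters=250, F=0.5, CR=0.9, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng()
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # DE/rand/1 mutation, Equation (12): v = x_r1 + F * (x_r2 - x_r3)
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            # Binomial crossover: take v_j with probability CR, always at one random index
            j_rand = rng.integers(dim)
            mask = rng.random(dim) < CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            # Greedy selection: keep the trial vector only if it is not worse
            fu = fitness(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    return pop[np.argmin(fit)], fit.min()
```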

3.4. Population-Based Incremental Learning Algorithm

PBIL guides the evolution of the population by maintaining a probability vector. The algorithm selects the individual with the best fitness in each generation to update the probability vector, and the next generation population is generated by the updated probability vector sampling. Repeating these steps, the final optimal solution is obtained. The probability vector is the core part of a population-based incremental learning algorithm. According to Algorithm 3, the main operations related to the probability vector in the algorithm are described below.
Algorithm 3: Population-based Incremental Learning Algorithm.
Begin:
  Q: population; LR: learning factor
  pm: mutation probability; MS: mutated offset value
  Initialize the probability vector $P^t$ with $p_i^t = 0.5$, $i = 1, 2, \ldots, L$
  while Termination criterion
    Generate population according to Pt sampling
    Evaluate the fitness of each individual according to Equation (11)
    Find the individual Bt with the best fitness in the population
    Update Pt according to Equations (13) and (14)
    t = t + 1
  end while
  return the best solution
End
Assuming that the optimal individual of the current generation is $B^t = (b_1^t, b_2^t, \ldots, b_L^t)$, the probability vector $P^t$ of the current generation is updated by Formula (13).
$p_i^t = (1.0 - LR) \cdot p_i^{t-1} + LR \cdot b_i^t, \quad i = 1, 2, \ldots, L$ (13)
where LR indicates the learning factor, which is generally set to 0.01. For each $p_i^t$ in the probability vector $P^t$, if random(0, 1) is less than the mutation probability $p_m$, the vector value $p_i^t$ is mutated; otherwise, it is not. The mutation formula is as follows in (14).
$p_i^t = (1.0 - MS) \cdot p_i^t + MS \cdot \mathrm{random}(0 \text{ or } 1)$ (14)
where $p_i^t$ represents the ith mutated component of $P^t$, and MS represents the mutation offset value.
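A sketch of the probability-vector update of Equations (13) and (14) for a binary-coded PBIL is given below; the decoding of bit strings into real-valued weights is omitted, and the parameter values are illustrative assumptions.

```python
import numpy as np

def pbil_step(P, population, fitness_values, LR=0.01, pm=0.02, MS=0.05, rng=None):
    """One PBIL generation step on probability vector P (length L).

    population     : binary matrix of shape (Q, L) sampled from P
    fitness_values : loss of each individual (lower is better)
    """
    rng = rng or np.random.default_rng()
    best = population[np.argmin(fitness_values)]            # B^t, the best individual
    # Equation (13): move P towards the best individual
    P = (1.0 - LR) * P + LR * best
    # Equation (14): mutate each component with probability pm
    mutate = rng.random(P.size) < pm
    P[mutate] = (1.0 - MS) * P[mutate] + MS * rng.integers(0, 2, mutate.sum())
    return P

def sample(P, Q, rng):
    """Sample the next generation: each bit is 1 with probability P[i]."""
    return (rng.random((Q, P.size)) < P).astype(int)
```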

3.5. Particle Swarm Optimization Algorithm

PSO is a stochastic search algorithm developed by simulating the foraging behavior of birds. The algorithm abstracts each bird into a massless, volumeless particle that carries information. Since the bird flock does not know the specific location of the food at first and only knows that the food lies within a certain range, the flock adopts a stable and simple method: search the area around the bird that is closest to the food.
Inspired by this, particle swarm optimization randomly initializes many particles in a solution space, and each particle is given random velocity and position information. These particles then begin to move (or stay in place) with the initialized velocity. After sharing information with the surrounding particles, the approximate direction of the optimal solution and the position of the particle closest to it are obtained, and each particle moves progressively towards the optimal position. The whole process is shown in Algorithm 4. The mathematical formula of the particle velocity update is described as follows in (15).
$v_i^t = \omega v_i^{t-1} + c_1 r_1 (pbest_i - x_i^{t-1}) + c_2 r_2 (gbest - x_i^{t-1})$ (15)
where $c_1$ and $c_2$ are the learning factors of the individual extremum and the global extremum, $r_1$ and $r_2$ are random disturbance factors for the individual extremum and the global extremum and $pbest_i$ and gbest are the individual extremum and the global extremum in the tth iteration.
Algorithm 4: Particle Swarm Optimization Algorithm.
Begin:
  Initialize a population of particles ($\{X_i\}$, $i \in [1, Q]$)
  Evaluate fitness for each particle using Equation (11)
  Set $pbest_i = X_i$, $gbest = \min\{pbest\}$
  while Termination criterion
    for i = 1: Q
    Update the velocity and position of Xi by Equation (15)
    Evaluate the fitness of Xi
    if fit(Xi) < fit(pbesti)
     pbesti = Xi
    if fit(pbesti) < fit(gbest)
     gbest = pbesti
    end for
    t = t + 1
  end while
  return the best solution
End
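A minimal PSO sketch implementing the velocity update of Equation (15) and the pbest/gbest bookkeeping of Algorithm 4 is shown below; the inertia weight and learning factors are illustrative assumptions.

```python
import numpy as np

def pso_optimize(fitness, dim, pop_size=50, iters=250, w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng()
    lo, hi = bounds
    x = rng.uniform(lo, hi, (pop_size, dim))
    v = np.zeros((pop_size, dim))
    fit = np.array([fitness(p) for p in x])
    pbest, pbest_fit = x.copy(), fit.copy()
    g = np.argmin(pbest_fit)
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    for _ in range(iters):
        r1, r2 = rng.random((pop_size, dim)), rng.random((pop_size, dim))
        # Equation (15): inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fit = np.array([fitness(p) for p in x])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        g = np.argmin(pbest_fit)
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    return gbest, gbest_fit
```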

3.6. Ant Colony Optimization Algorithm

ACO was first proposed by M. Dorigo and is inspired by the behavior of real ant colonies. Studies found that individual ants communicate with each other through pheromones, which allow them to collaborate on complex tasks. Ants tend to move towards directions with high pheromone concentrations; they not only leave pheromones on the path they pass but also sense the presence and concentration of pheromones during movement to guide their direction and search for food. The ant colony algorithm simulates this optimization mechanism to find the optimal solution through information exchange and cooperation among individuals. The whole simulation process is shown in Algorithm 5. According to the pheromone quantity and heuristic information on each path, the transition probability of ant k at point i at time t selecting the next point j is (16).
$P_{ij}^{k} = \begin{cases} \dfrac{[\tau_{ij}]^{\alpha} \cdot [\eta_{ij}]^{\beta}}{\sum_{s \in allowed_k} [\tau_{is}]^{\alpha} \cdot [\eta_{is}]^{\beta}}, & j \in allowed_k \\ 0, & \text{otherwise} \end{cases}$ (16)
where $allowed_k = \{N - tabu_k\}$ represents the set of points that ant k at point i is allowed to choose next, α represents the influence of the pheromone remaining on the path on subsequent ants' choice of that path, β represents the expected heuristic factor, $\tau_{ij}$ is the pheromone concentration and $\eta_{ij}$ is the heuristic function.
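A small sketch of the transition probability of Equation (16) for one ant choosing its next point is given below; the pheromone and heuristic values are assumed to be supplied by the rest of the algorithm, and the α and β values are illustrative.

```python
import numpy as np

def transition_probabilities(tau_i, eta_i, allowed, alpha=1.0, beta=2.0):
    """Probability of moving from point i to each point j in `allowed` (Equation (16)).

    tau_i   : pheromone levels tau_ij for all candidate points j
    eta_i   : heuristic values eta_ij for all candidate points j
    allowed : indices of points ant k may still visit
    """
    weights = (tau_i[allowed] ** alpha) * (eta_i[allowed] ** beta)
    return weights / weights.sum()

# Example: sample the next point for one ant
rng = np.random.default_rng(1)
tau, eta = rng.random(6), rng.random(6)
allowed = np.array([1, 3, 4])
probs = transition_probabilities(tau, eta, allowed)
next_point = rng.choice(allowed, p=probs)
```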

3.7. Artificial Bee Colony Algorithm

ABC simulates the honey-collecting behavior of honeybees. The location of a nectar source represents a solution, and the pollen quantity of the nectar source represents the value of the fitness function. All bees are separated into three groups: the employed bees, the onlooker bees and the explored bees. The number of employed bees is the same as the number of nectar sources, and the number of onlooker bees is half the number of nectar sources. Firstly, the employed bees take charge of the initial search for nectar sources and collect and share information. The onlooker bees then stay in the hive and collect nectar according to the information provided by the employed bees; finally, the explored bees are responsible for randomly searching for new nectar sources to replace abandoned ones. According to Algorithm 6, each stage is described as follows.
Algorithm 5: Ant Colony Optimization Algorithm.
Begin:
  keep: elitism parameter; ρ: local pheromone decay rate
  Q: Population; D: Dimension
  Initialize the population and calculate the fitness of each population
    while Termination criterion
    for i = 1: Q
      for j = 1: Q
        Use each solution to update the pheromone for each parameter value: $\tau_{ij} = (1 - \rho)\tau_{ij} + \sum_{k=1}^{Q} \Delta\tau_{ij}^{k}$
      end for
    end for
    for k = keep + 1: Q
    Use the probability Equation (16) to generate new solutions
    end for
    t=t+1;
  end
  return the best solution
End
(a)
Employed bees’ stage: employed bees use Formula (17) to find new nectar sources.
$v_{ij} = x_{ij} + \varphi_{ij}(x_{ij} - x_{kj})$ (17)
where $x_{kj}$ is a neighboring nectar source and k is not equal to i, and $\varphi_{ij}$ is a random constant between −1 and 1. After obtaining the new nectar source through the above formula, the fitness values of the new and old nectar sources are compared using a greedy strategy, and the better one is retained.
(b)
Onlooker bees’ stage: at this stage, employed bees share nectar information in the dance area. Then onlooker bees analyze information and adopt a roulette strategy to select nectar source tracking mining to ensure that the probability of nectar source mining with higher fitness value is greater.
(c)
Explored bees’ stage: if a nectar source has not been renewed after several mining sessions, the nectar source should be abandoned and the explored bees stage starts. The explored bees use Formula (18) to randomly search for new nectar sources to replace abandoned ones.
$x_{ij} = x_{min}^{j} + \mathrm{rand}(0, 1)\,(x_{max}^{j} - x_{min}^{j})$ (18)
where $x_{min}^{j}$ and $x_{max}^{j}$ represent the minimum and maximum values of the jth dimension (a sketch of these two update rules is given after Algorithm 6).
Algorithm 6: Artificial Bee Colony Algorithm.
Begin:
  Q: The number of nectar sources or the employed bees;
  D: Dimension; O: The number of the onlooker bees
  Initialize the nectar sources and calculate the fitness of each nectar source
  while Termination criterion
    for i = 1: Q
     for j = 1: D
      Update the location of employed bees by Equation (17)
      Store the best nectar sources in a greedy way, and record if not updated
     end for
    end for
    for l = 1: O
     Check whether there is nectar source stagnant update. If so, update through Equation (18)
    end for
    Complete generation update
    t = t + 1;
  end
  return the best solution
End
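The following sketches the employed-bee neighborhood search of Equation (17) and the scout (explored-bee) reinitialization of Equation (18); the stagnation limit and the bounds handling are illustrative assumptions.

```python
import numpy as np

def employed_bee_step(food, i, fitness, fit, rng):
    """Generate a neighbor of nectar source i (Equation (17)) and keep it greedily."""
    Q, D = food.shape
    j = rng.integers(D)                                   # dimension to perturb
    k = rng.choice([s for s in range(Q) if s != i])       # a different nectar source
    phi = rng.uniform(-1.0, 1.0)
    v = food[i].copy()
    v[j] = food[i, j] + phi * (food[i, j] - food[k, j])
    fv = fitness(v)
    if fv < fit[i]:                                       # greedy selection
        food[i], fit[i] = v, fv
        return True                                       # the source was improved
    return False                                          # stagnation counter should grow

def scout_bee_step(food, i, x_min, x_max, fitness, fit, rng):
    """Abandon a stagnant source and search randomly (Equation (18))."""
    food[i] = x_min + rng.random(food.shape[1]) * (x_max - x_min)
    fit[i] = fitness(food[i])
```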

3.8. Whale Optimization Algorithm

Mirjalili proposed WOA, inspired by the predation behavior of humpback whales. Humpback whales can identify and surround prey. According to Algorithm 7, the whole process can be abstracted into three stages: searching for prey, surrounding prey and the bubble-net attack.
(a)
Surrounding prey: humpback whales can recognize prey and continuously reduce their surrounding range. The optimal solution represents target prey or location close to target prey, and other whales will keep approaching it. The entire mathematical model can be interpreted as follows in (19) and (20).
$D = |C \cdot X^{*}(t) - X(t)|$ (19)
$X(t+1) = X^{*}(t) - A \cdot D$ (20)
where $X^{*}(t)$ is the position vector of the current optimal whale, t is the iteration number, $X(t)$ is the position vector of the current humpback whale, $A \cdot D$ is the bounding step length and A and C represent coefficient vectors.
(b)
Bubble net attack: when a humpback whale surrounds prey, it spits out bubbles in a spiral pattern to enclose the prey. When $|A| < 1$, Formula (21) simulates the spiral hunting behavior of the humpback whale.
$X(t+1) = D' \cdot e^{bl} \cdot \cos(2\pi l) + X^{*}(t)$ (21)
where $D' = |X^{*}(t) - X(t)|$ is the distance between the current humpback whale position and the prey, b is a constant that determines the shape of the spiral and l is a random number in [−1, 1].
(c)
Searching prey: when $|A| \geq 1$, whale individuals are forced to move away from the current optimal whale location so that they search for prey randomly, no longer influenced by the current optimal whale. The mathematical model can be described as follows in (22) and (23).
$D = |C \cdot X_{rand}(t) - X(t)|$ (22)
$X(t+1) = X_{rand}(t) - A \cdot D$ (23)
where $X_{rand}(t)$ represents a randomly chosen whale location in the current population (a sketch combining these update rules is given after Algorithm 7).
Algorithm 7: Whale Optimization Algorithm.
Begin:
  Initialize related parameters, Q: Population
  Initialize the population and calculate the fitness of each population
  while Termination criterion
   for i = 1: Q
     if (p < 0.5)
       if (|A| < 1)
        Update the individual by Equation (19)
       else if (|A| ≥ 1)
      Select a random individual to update by Equation (23)
    else if (p ≥ 0.5)
      Update the individual by Equation (21)
   end for
  Calculate individual fitness and update the optimal solution
  t = t + 1;
  end while
  return the best solution
End
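A sketch of one WOA position update combining Equations (19)–(23) is shown below; the linearly decreasing schedule for the coefficient a (from 2 to 0) follows the usual WOA convention and is an assumption here.

```python
import numpy as np

def woa_update(X, best, t, max_iter, b=1.0, rng=None):
    """Update every whale position for iteration t (Equations (19)-(23))."""
    rng = rng or np.random.default_rng()
    Q, D = X.shape
    a = 2.0 * (1.0 - t / max_iter)           # coefficient decreasing from 2 to 0 (assumed schedule)
    new_X = X.copy()
    for i in range(Q):
        A = 2.0 * a * rng.random() - a
        C = 2.0 * rng.random()
        p = rng.random()
        l = rng.uniform(-1.0, 1.0)
        if p < 0.5:
            if abs(A) < 1.0:                 # encircling prey, Equations (19)-(20)
                Dvec = np.abs(C * best - X[i])
                new_X[i] = best - A * Dvec
            else:                            # searching prey, Equations (22)-(23)
                rand_whale = X[rng.integers(Q)]
                Dvec = np.abs(C * rand_whale - X[i])
                new_X[i] = rand_whale - A * Dvec
        else:                                # bubble-net spiral attack, Equation (21)
            Dprime = np.abs(best - X[i])
            new_X[i] = Dprime * np.exp(b * l) * np.cos(2 * np.pi * l) + best
    return new_X
```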

3.9. Harris Hawks Optimization Algorithm

Heidari proposed HHO, inspired by the predation process of the Harris hawk. The Harris hawk is a group predator whose members divide labor and act in a coordinated way. The exploration phase is a global search process in which the hawks track and detect prey from the air. Once the target prey is determined, all members of the group gradually approach its location, find suitable positions around the prey and complete the encirclement in preparation for the final attack. The whole simulation process is shown in Algorithm 8. According to the escape behavior of the prey and the chase strategy of the Harris hawk, Harris hawks optimization uses four different strategies to simulate the attack.
(a)
When $|E| \geq 0.5$ and $r \geq 0.5$: in this situation, the prey has enough energy to try to escape by jumping but is eventually captured; the formulae are as follows in (24) and (25).
$X(t+1) = \Delta X(t) - E \cdot |J \cdot X_{Prey}(t) - X(t)|$ (24)
$\Delta X(t) = X_{Prey}(t) - X(t)$ (25)
where $\Delta X(t)$ represents the position deviation between the hawk and the prey at the tth iteration, J is the jump strength of the prey during its escape and $X_{Prey}$ is the position of the prey.
(b)
When $|E| < 0.5$ and $r \geq 0.5$: in this situation, the prey does not have sufficient energy and is directly captured by the eagle; the formula can be described as (26).
$X(t+1) = X_{Prey}(t) - E \cdot |\Delta X(t)|$ (26)
(c)
When $|E| \geq 0.5$ and $r < 0.5$: in this situation, the prey has plenty of energy and an opportunity to escape. Therefore, the eagles form a more intelligent encirclement. The implementation strategies are as follows in (27) and (28).
$Y = X_{Prey}(t) - E \cdot |J \cdot X_{Prey}(t) - X(t)|$ (27)
$X(t+1) = \begin{cases} Y, & F(Y) < F(X(t)) \\ Z, & F(Z) < F(X(t)) \end{cases}$ (28)
Algorithm 8: Harris Hawks Optimization Algorithm.
Begin:
  Initialize related parameters, Q: Population
  Initialize the population
  while Termination criterion
   Calculate the fitness of each population
   Set Xprey as the best individual
   for i = 1: Q
    Update the energy E and jump strength J
     if (|E| ≥ 1)
     Update by $X(t+1) = \begin{cases} X_{rand}(t) - r_1 |X_{rand}(t) - 2 r_2 X(t)|, & q \geq 0.5 \\ (X_{Prey}(t) - X_m(t)) - r_3 (LB + r_4 (UB - LB)), & q < 0.5 \end{cases}$
    if (|E| < 1)
     if (|E| ≥ 0.5 & r ≥ 0.5)
      Update by formulae (24) and (25)
     else if (|E| < 0.5 & r ≥ 0.5)
      Update by Formula (26)
     else if (|E| ≥ 0.5 & r < 0.5)
      Update by formulae (27) and (28)
     else if (|E| < 0.5 & r < 0.5)
      Update by formulae (29) and (30)
   end for
  t = t + 1
  end while
  return the best solution
End
(d)
When $|E| < 0.5$ and $r < 0.5$: in this situation, the prey lacks energy but still has a chance to escape. Therefore, the eagles form a hard encirclement to narrow the average distance to the prey; the formulae can be interpreted as follows in (29) and (30).
$Y = X_{Prey}(t) - E \cdot |J \cdot X_{Prey}(t) - X_m(t)|$ (29)
$X(t+1) = \begin{cases} Y, & F(Y) < F(X(t)) \\ Z, & F(Z) < F(X(t)) \end{cases}$ (30)

3.10. Chimp Optimization Algorithm

During chimpanzee hunting, any chimpanzee can randomly change its position in the space around its prey. The mathematical description is as follows in (31) and (32).
$d = |c \cdot X_{prey}(t) - m \cdot X_{chimp}(t)|$ (31)
$X_{chimp}(t+1) = X_{prey}(t) - a \cdot d$ (32)
where d is the distance between the chimpanzee and the prey, t is the current iteration number, $X_{prey}(t)$ is the prey position vector, $X_{chimp}(t)$ is the chimpanzee position vector and a, m and c are coefficient vectors.
In the chimpanzee community, according to the diversity of intelligence and ability shown by individuals in the process of hunting, chimpanzees are classified as "driver", "barrier", "chaser" and "attacker". Each type of chimpanzee thinks independently and uses its own search strategy to explore and predict the location of prey. While they have their own tasks, they also exhibit chaotic individual hunting behavior at the end of the hunt due to social incentives to obtain sexual favors and benefits. The chimp optimization algorithm solves problems by simulating the cooperative hunting behavior of the four chimpanzee groups. According to Algorithm 9, the formulae for updating the positions of the four chimpanzee groups are described as follows in (33)–(35).
$d_{Attacker} = |C_1 X_{Attacker} - m X|, \quad d_{Barrier} = |C_2 X_{Barrier} - m X|, \quad d_{Chaser} = |C_3 X_{Chaser} - m X|, \quad d_{Driver} = |C_4 X_{Driver} - m X|$ (33)
$X_1 = X_{Attacker} - A_1 d_{Attacker}, \quad X_2 = X_{Barrier} - A_2 d_{Barrier}, \quad X_3 = X_{Chaser} - A_3 d_{Chaser}, \quad X_4 = X_{Driver} - A_4 d_{Driver}$ (34)
$X(t+1) = \dfrac{X_1 + X_2 + X_3 + X_4}{4}$ (35)
Algorithm 9: Chimp Optimization Algorithm.
Begin:
  Initialize f, m, a, and c; Q: Population
  Initialize the population and calculate the fitness of each chimp
  XAttacker represents the best search agent
  XBarrier represents the second best search agent
  XChaser represents the third best search agent
  XDriver represents the fourth best search agent
    while Termination criterion
     for i = 1: Q
      Update f, m, a and c according to different group strategies
      if (μ < 0.5)
       if (|a| < 1)
        Update the position by (32)
       else if (|a| ≥ 1)
        Select a search agent randomly
      else if (μ ≥ 0.5)
        Update the position by $X_{chimp}(t+1) = \begin{cases} X_{prey}(t) - a \cdot d, & \mu < 0.5 \\ \text{Chaotic\_value}, & \mu \geq 0.5 \end{cases}$

      end for
    Update XAttacker,XBarrier,XChaser,XDriver
    t = t + 1
    end while
  return the best solution
End
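The following is a sketch of the driver/barrier/chaser/attacker averaging of Equations (33)–(35); the coefficient vectors a, m and c are assumed to be computed elsewhere according to the ChOA group strategies, so only the position combination step is shown.

```python
import numpy as np

def choa_position_update(x, leaders, A, C, m):
    """Combine the four leading chimps into a new position (Equations (33)-(35)).

    leaders : dict with the positions of 'attacker', 'barrier', 'chaser', 'driver'
    A, C    : dicts of coefficient vectors, one per leader
    m       : chaotic coefficient vector
    """
    candidates = []
    for role in ("attacker", "barrier", "chaser", "driver"):
        d = np.abs(C[role] * leaders[role] - m * x)        # Equation (33)
        candidates.append(leaders[role] - A[role] * d)     # Equation (34)
    return np.mean(candidates, axis=0)                     # Equation (35)
```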

3.11. Biogeography-Based Optimization Algorithm

BBO is an intelligent algorithm that uses biogeography theory to solve optimization problems. In the biogeography-based optimization algorithm, a habitat represents an individual of the intelligent optimization algorithm, a suitability index variable (SIV) represents a variable within the individual and the habitat suitability index (HSI) represents the individual's fitness. In nature, the habitat suitability index differs from habitat to habitat. Habitats with a high HSI can accommodate more species and therefore have high emigration rates and low immigration rates. Migration between individuals allows habitats to share good SIVs. The migration models in biogeography-based optimization mainly include the linear migration model, trapezoidal migration model, quadratic migration model and cosine migration model. The linear migration model is used in this paper. Formula (36) gives the emigration rate $\mu_S$, and Formula (37) gives the immigration rate $\lambda_S$. The entire migration operation is shown in Algorithm 10.
$\mu_S = E \times \dfrac{S}{S_{max}}$ (36)
$\lambda_S = I \times \left(1 - \dfrac{S}{S_{max}}\right)$ (37)
where E represents the maximum emigration rate, I represents the maximum immigration rate, S is the number of species in the habitat and $S_{max}$ is the maximum number of species.
The mutation operation simulates the phenomenon that diseases, natural disasters and other factors change the living environment of a habitat and cause the habitat population to deviate from its equilibrium point. According to Algorithm 11, the mutation probability $M_S$ of the species is calculated as follows in (38).
$M_S = M_{max} \times \left(1 - \dfrac{P_S}{P_{S_{max}}}\right)$ (38)
where $P_S$ is the probability that the habitat contains S species, $P_{S_{max}}$ is the maximum value of $P_S$ and $M_{max}$ represents the maximum mutation rate.
The above introduces how the intelligent optimization algorithm optimizes the weight and threshold of the dendritic neural network. At the same time, the principle, characteristics, important formulas and pseudocode of each algorithm are introduced in detail. Each algorithm has its own characteristics, which also determines the adaptability of the algorithm.
Algorithm 10: Migration Operation (BBO).
For d = 1: D
  if rand() < λi
    select another habitat $H_j$ from the population with the emigration probability μj
     $H_{id} \leftarrow H_{jd}$
  end if
End for
Algorithm 11: Mutation Operation (BBO).
For d = 1: D
  if rand() < πi
     $H_{id} \leftarrow l_d + \mathrm{rand}() \cdot (\mu_d - l_d)$
  end if
End for
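A sketch of the linear migration model of Equations (36) and (37) together with the migration and mutation operations of Algorithms 10 and 11 is given below. The species-count probabilities of Equation (38) are approximated here by the habitats' fitness ranks, which is a common simplification and an assumption of this sketch rather than the exact rule of the paper.

```python
import numpy as np

def bbo_rates(Q, E=1.0, I=1.0):
    """Linear migration model: emigration mu and immigration lam for ranked habitats."""
    S = np.arange(Q, 0, -1)                 # species count: the best habitat has the most species
    mu = E * S / Q                          # Equation (36)
    lam = I * (1.0 - S / Q)                 # Equation (37)
    return lam, mu

def bbo_step(pop, fit, lb, ub, M_max=0.05, rng=None):
    """One generation of migration (Algorithm 10) and mutation (Algorithm 11)."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fit)                 # ascending MSE: best habitat first
    pop, fit = pop[order], fit[order]
    Q, D = pop.shape
    lam, mu = bbo_rates(Q)
    new_pop = pop.copy()
    for i in range(Q):
        for d in range(D):
            if rng.random() < lam[i]:       # immigrate SIV d from a habitat chosen by emigration rate
                j = rng.choice(Q, p=mu / mu.sum())
                new_pop[i, d] = pop[j, d]
        m_i = M_max * (i + 1) / Q           # rank-based surrogate for Equation (38) (assumption)
        for d in range(D):
            if rng.random() < m_i:          # mutation: reset the SIV within its bounds
                new_pop[i, d] = lb + rng.random() * (ub - lb)
    return new_pop
```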

4. Experiment

For the classification problem of the DNM, the experiments compare the results of ten intelligent optimization algorithms and the traditional backpropagation algorithm on six classical classification datasets. The software environment of the experiments is MATLAB 2018a, and the hardware environment is an Intel(R) Core(TM) i5-9500 CPU @ 3.00 GHz with 8.00 GB of RAM.
All datasets are from the UCI repository [54]. Table 1 shows the attributes, sizes and classes of the data in detail. All six datasets are binary classification problems. Each dataset is randomly divided into two groups: 70% for training and 30% for testing. These six datasets cover common classification problems in the real world. For example, the Australian Credit Approval dataset and the Banknote Authentication dataset are related to finance. The Australian Credit Approval dataset concerns credit card applications and is used to evaluate the credit of new applicants, which can effectively identify good or bad customers, provide a basis for issuing cards and help banks establish a first line of defense against credit card risk. The Banknote Authentication dataset classifies data extracted from images of genuine and counterfeit banknote samples. The Breast Cancer dataset and the Diabetic Retinopathy dataset are classification datasets about common breast cancer and diabetes; classifying and predicting the different influencing attributes of these diseases has a positive effect on medical treatment. The last two datasets concern everyday production and life: the Car Evaluation dataset is a classification dataset for assessing car safety, and the Glass Identification dataset is a classification dataset for identifying the type of glass.
To eliminate interference, all experimental results are the averages of 30 independent runs. The experiments compare the performance of eleven algorithms, including ten intelligent algorithms (GA, DE, PBIL, PSO, ACO, ABC, WOA, HHO, ChOA and BBO) and the traditional backpropagation algorithm. The population size of the ten intelligent optimization algorithms is 50, and the number of iterations is 250. The other parameters of the intelligent optimization algorithms are set according to experience and the characteristics of each algorithm. The initialization parameters of the intelligent optimization algorithms are shown in Table 2. The learning rate of the backpropagation algorithm is 0.01, and its number of iterations is 250.
There are four user-defined parameters in the DNM: the number of dendritic layers M, the synaptic-layer constant k, and $k_{soma}$ and $\theta_{soma}$ in the activation function of the output layer. The experiments use Taguchi's method [55] to obtain a reasonable combination of the four DNM parameters. Taguchi's method adopts an orthogonal experimental design, a scientific method for selecting appropriate and representative points from a large number of experimental points. The levels of the four factors are set as follows: $M \in \{1, 5, 10, 15, 20\}$, $k \in \{1, 5, 10, 15, 25\}$, $k_{soma} \in \{1, 5, 10, 15, 25\}$ and $\theta_{soma} \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$. A full-factorial analysis would require $5^4 = 625$ experiments, whereas the orthogonal array $L_{25}(5^4)$ contains only 25 experiments, greatly reducing the number of test runs, time, manpower and materials. Table 3 shows the optimal parameters for the six datasets obtained through the $L_{25}(5^4)$ orthogonal-array test. The orthogonal-array experimental results of the eleven algorithms on the six datasets are recorded in detail in Appendix A.
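For illustration, the following sketch builds a 25-run design of the $L_{25}(5^4)$ type using one standard modular construction for a prime number of levels and maps its level indices to the candidate parameter values listed above; the column-to-factor assignment is an assumption, not necessarily the exact array used in the paper.

```python
import numpy as np

def l25_orthogonal_array():
    """Construct an L25(5^4) orthogonal array: 25 runs, 4 factors, 5 levels each.

    For prime level count p = 5, the columns (i, j, i + j, i + 2j) mod 5
    are pairwise orthogonal (each level pair appears exactly once).
    """
    rows = []
    for i in range(5):
        for j in range(5):
            rows.append([i, j, (i + j) % 5, (i + 2 * j) % 5])
    return np.array(rows)

# Map level indices 0-4 to the candidate parameter values used in the experiments
levels = {
    "M":          [1, 5, 10, 15, 20],
    "k":          [1, 5, 10, 15, 25],
    "k_soma":     [1, 5, 10, 15, 25],
    "theta_soma": [0.1, 0.3, 0.5, 0.7, 0.9],
}
oa = l25_orthogonal_array()
runs = [{name: vals[row[c]] for c, (name, vals) in enumerate(levels.items())}
        for row in oa]
print(len(runs))   # 25 parameter combinations instead of 5**4 = 625
```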
In the process of model training and optimization, the MSE is the loss function of the model and serves as the fitness value of each candidate solution. The classification accuracy is used to evaluate the performance of the model during testing. Formula (11) gives the calculation of the MSE, and the accuracy is calculated as follows in (39).
$Accuracy = \dfrac{TP + TN}{TP + FP + TN + FN} \times 100\%$ (39)
where TP, TN, FP and FN represent true positives, true negatives, false positives and false negatives, respectively.

5. Result

Figure 3 shows the convergence curves of the algorithms during the iterative optimization of the model's weights and thresholds, and Figure 4 shows boxplots of the algorithms after iterative optimization, from which the distribution of the optimal solutions obtained by each algorithm can be seen intuitively. Here, the user-defined parameters of the model adopt the optimal parameters in Table 3. Table 4 summarizes the classification accuracy and the minimum mean squared error of each algorithm under the optimized parameters.
As can be seen, the BP algorithm converges very slowly during the iterations. On the Australian Credit Approval, Diabetic Retinopathy and Glass Identification datasets, the algorithm falls into local optima and loses its effectiveness; its robustness is very poor. ACO and ChOA also converge slowly. On the Australian Credit Approval and Diabetic Retinopathy datasets, they behave like the BP algorithm, falling into local optima, and their overall performance is poor. On the six datasets, BBO converges best, followed by PSO. In the first 50 iterations on the Australian Credit Approval and Diabetic Retinopathy datasets, the convergence speed of PSO is better than that of BBO, but after 50 iterations, BBO's convergence value is lower than PSO's, indicating that BBO has better global search ability than PSO. Unlike the DE algorithm, which performs well on some datasets (Breast Cancer, Car Evaluation) and poorly on others (Australian Credit Approval, Diabetic Retinopathy), BBO has good robustness.
From the distribution of the optimal solutions, DNM + BBO still has an obvious advantage, with smaller errors and stable performance, followed by DNM + PSO. DNM + ACO and DNM + ChOA produce many outliers, which means these algorithms perform poorly and cannot find good feasible solutions.
As can be seen from Table 4, the classification accuracy of BP on the six datasets is very low, as low as 39% on the Diabetic Retinopathy dataset and at most 86.5%. BBO has the highest classification accuracy on all six datasets, reaching 99.5% on the Banknote Authentication dataset. PSO is second only to BBO. The performance of ACO and ChOA is unstable; for example, on the Diabetic Retinopathy dataset, ACO is as low as 38.7% and ChOA as low as 69.6%. Other algorithms such as WOA, GA, PBIL and so on perform well, but their overall performance is worse than BBO's. In addition, for the MSE, which is the loss function value of the model, BBO attains the smallest values on all six datasets, which again shows that BBO performs well.
Among the ten intelligent optimization algorithms, DNM + BBO has the fastest convergence speed and the highest accuracy, followed by DNM + PSO. DNM + WOA, DNM + GA and others also have good convergence speeds and high classification accuracy, but they are still lower than BBO overall. Meanwhile, just as the "no free lunch" idea suggests, not all intelligent optimization algorithms are suitable for training the dendritic neural model on classification problems; for example, DNM + ACO and DNM + ChOA perform poorly.
In contrast with the traditional approach, intelligent optimization algorithms have obvious advantages, and the performance of the intelligent dendritic neural model is far better than that of the traditional dendritic neural model.

6. Conclusions

In this paper, an intelligent dendritic neural model is proposed for the first time, which uses intelligent optimization algorithms instead of the traditional BP algorithm to train the model. In the experiments, ten intelligent optimization algorithms, namely GA, DE, PBIL, PSO, ACO, ABC, WOA, HHO, ChOA and BBO, and the traditional BP algorithm were selected to train and test the model on six datasets (Australian Credit Approval, Banknote Authentication, Breast Cancer, Car Evaluation, Diabetic Retinopathy and Glass Identification). These ten algorithms are representative intelligent algorithms: GA, DE and BBO are based on evolutionary ideas, PBIL on mathematical statistics, ABC, ACO, PSO and WOA on swarm intelligence, and HHO and ChOA are new algorithms. The experiments use Taguchi's method to obtain a reasonable combination of the four DNM parameters, and they compare and analyze the effectiveness, convergence speed and classification accuracy of the algorithms. The experimental results show that the intelligent dendritic neural model (DNM + BBO) is obviously superior to the traditional dendritic neural model. At the same time, the comparison of intelligent optimization algorithms shows that the BBO algorithm has excellent performance; its robustness, accuracy and convergence speed are the best. The intelligent dendritic neural model established in this study is a powerful tool for solving classification problems and provides more choices in practical engineering applications.

Author Contributions

Conceptualization, D.J., W.X.; methodology, D.J., W.X.; software, W.X.; validation, W.X., D.J., Z.Z., C.L. and Z.X.; formal analysis, D.J.; investigation, W.X.; resources, W.X.; data curation, W.X.; writing—original draft preparation, W.X.; writing—review and editing, D.J., W.X., Z.Z., C.L. and Z.X.; visualization, W.X.; supervision, D.J., Z.Z. and C.L.; project administration, D.J.; funding acquisition, D.J. and Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (NSFC) under Grants 12105120 and 72174079, the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 19KJB160001, the Lianyungang City Haiyan Project under Grant 2019-QD-004, the MOE Key Laboratory of TianQin Project, Sun Yat-sen University, and the Open Fund Project of the Jiangsu Institute of Marine Resources Development under Grant JSIMR202018.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Six datasets of this paper are obtained from the following website: https://archive.ics.uci.edu/ml/index.php accessed on 22 October 2021.

Acknowledgments

The authors would like to thank all collaborators for their time and all reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The appendix shows the results of the eleven learning algorithms (BP, GA, DE, PBIL, PSO, ACO, ABC, WOA, HHO, ChOA and BBO) under Taguchi's method on the six datasets. Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6 describe the parameter sensitivity results of each learning algorithm based on the orthogonal array for each dataset. Table A7, Table A8, Table A9, Table A10, Table A11 and Table A12 show the complete parameter combinations for each dataset and the average classification accuracy of 30 independent runs of each learning algorithm.
Table A1. Parameter sensitivity results of Australian Credit Approval dataset based on orthogonal arrangement.
No. | M | k | k_soma | θ_soma | MSE (BP, GA, DE, PBIL, PSO, ACO, ABC, WOA, HHO, ChOA, BBO)
11110.11.24 × 10−14.01 × 10−14.05 × 10−13.95 × 10−14.07 × 10−14.95 × 10−14.16 × 10−14.09 × 10−13.93 × 10−14.88 × 10−13.93 × 10−1
215100.92.14 × 10−12.90 × 10−13.09 × 10−12.66 × 10−16.79 × 10−18.87 × 10−13.88 × 10−12.75 × 10−12.38 × 10−17.59 × 10−12.63 × 10−1
3110150.32.15 × 10−13.00 × 10−13.83 × 10−12.87 × 10−17.94 × 10−18.71 × 10−13.45 × 10−12.36 × 10−12.43 × 10−17.99 × 10−13.37 × 10−1
4115250.52.23 × 10−13.41 × 10−14.27 × 10−13.62 × 10−18.11 × 10−18.91 × 10−14.14 × 10−12.53 × 10−12.60 × 10−18.39 × 10−14.36 × 10−1
512550.72.07 × 10−12.70 × 10−14.35 × 10−13.52 × 10−17.59 × 10−18.31 × 10−14.33 × 10−12.44 × 10−12.68 × 10−17.65 × 10−13.98 × 10−1
65150.31.24 × 10−12.35 × 10−12.52 × 10−12.24 × 10−12.27 × 10−16.27 × 10−12.64 × 10−12.31 × 10−12.17 × 10−14.73 × 10−12.07 × 10−1
755250.72.12 × 10−12.62 × 10−13.24 × 10−12.85 × 10−13.99 × 10−18.72 × 10−13.87 × 10−12.84 × 10−12.63 × 10−17.48 × 10−12.78 × 10−1
851010.51.26 × 10−13.53 × 10−13.66 × 10−13.56 × 10−14.00 × 10−15.01 × 10−13.80 × 10−13.45 × 10−13.42 × 10−14.95 × 10−13.47 × 10−1
9515150.92.23 × 10−12.46 × 10−13.17 × 10−12.46 × 10−14.58 × 10−18.87 × 10−13.42 × 10−12.41 × 10−12.36 × 10−17.56 × 10−12.38 × 10−1
10525100.11.39 × 10−12.81 × 10−13.16 × 10−12.56 × 10−14.34 × 10−15.56 × 10−13.20 × 10−12.52 × 10−12.84 × 10−15.50 × 10−12.89 × 10−1
11101100.52.21 × 10−12.30 × 10−12.84 × 10−12.20 × 10−12.12 × 10−18.51 × 10−12.76 × 10−12.15 × 10−12.34 × 10−14.00 × 10−11.79 × 10−1
12105150.11.25 × 10−12.48 × 10−12.86 × 10−12.50 × 10−12.30 × 10−16.27 × 10−12.78 × 10−12.13 × 10−12.32 × 10−15.54 × 10−12.29 × 10−1
13101050.92.19 × 10−12.69 × 10−12.98 × 10−12.67 × 10−13.26 × 10−18.66 × 10−13.66 × 10−12.91 × 10−12.61 × 10−16.49 × 10−12.27 × 10−1
14101510.71.30 × 10−13.42 × 10−13.65 × 10−13.40 × 10−13.74 × 10−15.20 × 10−13.76 × 10−13.26 × 10−13.20 × 10−14.97 × 10−13.12 × 10−1
151025250.32.24 × 10−12.85 × 10−13.26 × 10−12.61 × 10−13.96 × 10−18.74 × 10−12.97 × 10−12.42 × 10−12.96 × 10−17.93 × 10−13.04 × 10−1
16151150.72.22 × 10−12.34 × 10−13.43 × 10−12.29 × 10−12.67 × 10−18.82 × 10−13.51 × 10−12.39 × 10−12.22 × 10−16.12 × 10−11.89 × 10−1
1715550.51.45 × 10−12.34 × 10−12.68 × 10−12.18 × 10−12.52 × 10−17.67 × 10−12.88 × 10−12.22 × 10−12.44 × 10−15.38 × 10−12.01 × 10−1
181510250.11.83 × 10−12.59 × 10−12.97 × 10−12.44 × 10−12.44 × 10−17.64 × 10−12.81 × 10−12.03 × 10−12.48 × 10−16.35 × 10−12.38 × 10−1
191515100.32.04 × 10−12.72 × 10−13.04 × 10−12.27 × 10−13.09 × 10−18.02 × 10−12.72 × 10−12.34 × 10−12.57 × 10−16.67 × 10−12.31 × 10−1
20152510.91.36 × 10−13.41 × 10−13.72 × 10−13.51 × 10−13.76 × 10−15.40 × 10−13.76 × 10−13.37 × 10−13.15 × 10−15.05 × 10−13.10 × 10−1
21201250.92.22 × 10−12.33 × 10−13.92 × 10−12.25 × 10−12.36 × 10−18.83 × 10−13.85 × 10−13.97 × 10−12.13 × 10−16.51 × 10−11.88 × 10−1
2220510.31.18 × 10−13.57 × 10−13.79 × 10−13.67 × 10−13.58 × 10−14.92 × 10−13.86 × 10−13.59 × 10−13.46 × 10−14.69 × 10−13.24 × 10−1
232010100.72.21 × 10−12.48 × 10−12.88 × 10−12.66 × 10−12.63 × 10−18.82 × 10−13.53 × 10−12.56 × 10−12.46 × 10−16.40 × 10−12.43 × 10−1
24201550.11.24 × 10−13.17 × 10−13.27 × 10−13.02 × 10−13.33 × 10−15.03 × 10−13.20 × 10−13.00 × 10−13.07 × 10−14.69 × 10−13.03 × 10−1
252025150.52.21 × 10−12.74 × 10−13.26 × 10−12.34 × 10−13.17 × 10−18.85 × 10−13.22 × 10−12.45 × 10−12.81 × 10−17.27 × 10−12.59 × 10−1
Table A2. Parameter sensitivity results of Banknote Authentication dataset based on orthogonal arrangement.
No. | M | k | k_soma | θ_soma | MSE (BP, GA, DE, PBIL, PSO, ACO, ABC, WOA, HHO, ChOA, BBO)
11110.11.23 × 10−13.73 × 10−13.72 × 10−13.73 × 10−13.68 × 10−13.94 × 10−13.71 × 10−13.76 × 10−13.72 × 10−13.72 × 10−13.72 × 10−1
215100.91.13 × 10−12.10 × 10−11.96 × 10−12.16 × 10−12.00 × 10−13.16 × 10−11.96 × 10−11.86 × 10−11.89 × 10−11.87 × 10−12.03 × 10−1
3110150.32.04 × 10−12.07 × 10−11.90 × 10−12.06 × 10−11.93 × 10−13.17 × 10−11.78 × 10−11.21 × 10−11.28 × 10−11.75 × 10−11.91 × 10−1
4115250.52.21 × 10−12.15 × 10−11.96 × 10−12.12 × 10−11.94 × 10−13.16 × 10−11.90 × 10−11.56 × 10−11.58 × 10−11.91 × 10−12.04 × 10−1
512550.72.08 × 10−11.97 × 10−11.85 × 10−11.98 × 10−11.84 × 10−12.91 × 10−11.85 × 10−11.74 × 10−11.74 × 10−11.81 × 10−11.90 × 10−1
65150.31.23 × 10−11.35 × 10−11.28 × 10−11.26 × 10−15.39 × 10−22.32 × 10−11.10 × 10−18.27 × 10−27.37 × 10−28.64 × 10−25.14 × 10−2
755250.72.18 × 10−11.53 × 10−11.52 × 10−11.32 × 10−13.87 × 10−23.03 × 10−11.25 × 10−14.93 × 10−26.85 × 10−25.00 × 10−21.49 × 10−2
851010.51.10 × 10−12.60 × 10−12.78 × 10−12.62 × 10−12.17 × 10−13.44 × 10−12.65 × 10−12.24 × 10−12.30 × 10−12.18 × 10−12.02 × 10−1
9515150.92.09 × 10−11.53 × 10−11.52 × 10−11.28 × 10−14.33 × 10−22.71 × 10−11.17 × 10−15.98 × 10−27.74 × 10−24.49 × 10−22.60 × 10−2
10525100.12.09 × 10−11.87 × 10−11.86 × 10−11.67 × 10−11.11 × 10−12.76 × 10−11.63 × 10−11.34 × 10−11.42 × 10−11.29 × 10−19.03 × 10−2
11101100.56.51 × 10−21.31 × 10−11.26 × 10−11.17 × 10−11.28 × 10−22.17 × 10−16.44 × 10−22.35 × 10−24.23 × 10−22.73 × 10−22.28 × 10−3
12105150.17.29 × 10−21.56 × 10−11.64 × 10−11.51 × 10−14.99 × 10−22.17 × 10−11.12 × 10−18.24 × 10−28.93 × 10−28.55 × 10−24.06 × 10−2
13101050.91.90 × 10−11.54 × 10−11.55 × 10−11.30 × 10−13.58 × 10−22.63 × 10−11.12 × 10−14.94 × 10−26.15 × 10−23.96 × 10−21.18 × 10−2
14101510.71.27 × 10−12.37 × 10−12.49 × 10−12.34 × 10−11.63 × 10−13.18 × 10−12.21 × 10−11.83 × 10−11.87 × 10−11.73 × 10−11.45 × 10−1
151025250.32.44 × 10−11.63 × 10−11.59 × 10−11.39 × 10−11.60 × 10−22.43 × 10−11.13 × 10−15.58 × 10−25.92 × 10−25.15 × 10−21.12 × 10−2
16151150.75.07 × 10−21.42 × 10−11.49 × 10−11.22 × 10−11.03 × 10−22.19 × 10−17.49 × 10−21.67 × 10−23.81 × 10−22.10 × 10−22.58 × 10−3
1715550.58.13 × 10−21.51 × 10−11.61 × 10−11.35 × 10−11.81 × 10−22.16 × 10−11.16 × 10−14.07 × 10−28.16 × 10−24.94 × 10−21.08 × 10−2
181510250.12.65 × 10−11.68 × 10−11.81 × 10−11.48 × 10−12.05 × 10−22.19 × 10−11.09 × 10−17.18 × 10−27.80 × 10−25.31 × 10−21.66 × 10−2
191515100.32.28 × 10−11.54 × 10−11.74 × 10−11.40 × 10−11.54 × 10−22.27 × 10−11.10 × 10−15.17 × 10−27.46 × 10−25.08 × 10−26.52 × 10−3
20152510.91.50 × 10−12.16 × 10−12.28 × 10−12.08 × 10−11.28 × 10−13.21 × 10−12.03 × 10−11.53 × 10−11.66 × 10−11.44 × 10−11.08 × 10−1
21201250.93.62 × 10−21.36 × 10−11.48 × 10−11.41 × 10−16.87 × 10−32.35 × 10−17.21 × 10−21.90 × 10−23.65 × 10−22.38 × 10−22.97 × 10−3
2220510.37.96 × 10−22.92 × 10−13.03 × 10−12.92 × 10−12.20 × 10−13.46 × 10−12.87 × 10−12.41 × 10−12.51 × 10−12.42 × 10−12.08 × 10−1
232010100.72.09 × 10−11.71 × 10−12.05 × 10−11.66 × 10−18.99 × 10−32.42 × 10−11.26 × 10−14.13 × 10−25.65 × 10−23.63 × 10−21.59 × 10−1
24201550.12.71 × 10−12.63 × 10−12.79 × 10−12.58 × 10−11.65 × 10−12.84 × 10−12.44 × 10−12.01 × 10−12.14 × 10−11.95 × 10−11.59 × 10−1
252025150.52.41 × 10−11.86 × 10−12.20 × 10−11.68 × 10−11.27 × 10−22.45 × 10−11.42 × 10−14.85 × 10−27.32 × 10−24.86 × 10−27.02 × 10−3
Table A3. Parameter sensitivity results of Breast Cancer dataset based on orthogonal arrangement.
No. | M | k | k_soma | θ_soma | MSE (BP, GA, DE, PBIL, PSO, ACO, ABC, WOA, HHO, ChOA, BBO)
11110.11.29 × 10−12.94 × 10−12.92 × 10−12.91 × 10−12.82 × 10−13.29 × 10−12.90 × 10−13.05 × 10−12.94 × 10−12.87 × 10−12.84 × 10−1
215100.92.73 × 10−11.73 × 10−11.60 × 10−11.66 × 10−11.50 × 10−12.31 × 10−11.56 × 10−11.58 × 10−11.59 × 10−11.46 × 10−11.41 × 10−1
3110150.32.81 × 10−11.01 × 10−19.33 × 10−28.45 × 10−21.52 × 10−11.76 × 10−17.48 × 10−27.02 × 10−29.91 × 10−25.60 × 10−26.61 × 10−2
4115250.53.28 × 10−11.04 × 10−19.54 × 10−28.78 × 10−23.20 × 10−11.63 × 10−17.78 × 10−27.10 × 10−21.02 × 10−15.75 × 10−26.56 × 10−2
512550.73.09 × 10−11.27 × 10−11.19 × 10−11.18 × 10−14.46 × 10−11.86 × 10−11.08 × 10−11.08 × 10−11.17 × 10−19.62 × 10−21.01 × 10−1
65150.31.28 × 10−18.00 × 10−29.25 × 10−28.31 × 10−27.21 × 10−21.48 × 10−18.06 × 10−28.43 × 10−28.19 × 10−27.33 × 10−25.26 × 10−2
755250.79.98 × 10−29.03 × 10−29.93 × 10−29.93 × 10−26.56 × 10−21.93 × 10−17.44 × 10−27.48 × 10−28.38 × 10−25.73 × 10−25.18 × 10−2
851010.51.41 × 10−11.94 × 10−12.27 × 10−12.06 × 10−12.48 × 10−13.07 × 10−12.27 × 10−12.64 × 10−12.71 × 10−13.02 × 10−11.43 × 10−1
9515150.93.28 × 10−11.07 × 10−11.22 × 10−11.16 × 10−11.15 × 10−12.16 × 10−11.01 × 10−11.01 × 10−11.11 × 10−18.98 × 10−24.91 × 10−2
10525100.11.78 × 10−11.06 × 10−11.14 × 10−11.08 × 10−19.13 × 10−21.63 × 10−11.01 × 10−19.84 × 10−29.67 × 10−28.91 × 10−28.46 × 10−2
11101100.51.77 × 10−17.34 × 10−28.15 × 10−28.15 × 10−25.16 × 10−21.66 × 10−16.50 × 10−26.33 × 10−27.27 × 10−25.62 × 10−23.42 × 10−2
12105150.11.83 × 10−28.33 × 10−28.97 × 10−29.20 × 10−27.14 × 10−21.49 × 10−17.41 × 10−26.96 × 10−27.45 × 10−26.48 × 10−25.30 × 10−2
13101050.92.45 × 10−19.22 × 10−21.23 × 10−19.54 × 10−26.48 × 10−21.75 × 10−11.01 × 10−11.20 × 10−11.08 × 10−18.60 × 10−24.45 × 10−2
14101510.71.66 × 10−11.70 × 10−12.16 × 10−11.88 × 10−11.74 × 10−12.35 × 10−11.81 × 10−12.19 × 10−12.41 × 10−11.90 × 10−11.10 × 10−1
151025250.32.88 × 10−18.28 × 10−29.90 × 10−28.38 × 10−26.18 × 10−21.76 × 10−17.29 × 10−27.23 × 10−27.98 × 10−25.94 × 10−25.67 × 10−2
16151150.71.66 × 10−17.08 × 10−28.98 × 10−28.68 × 10−25.84 × 10−21.67 × 10−16.76 × 10−26.91 × 10−26.38 × 10−25.81 × 10−23.46 × 10−2
1715550.52.63 × 10−27.00 × 10−29.02 × 10−28.88 × 10−25.53 × 10−21.57 × 10−16.89 × 10−27.07 × 10−27.43 × 10−26.17 × 10−23.79 × 10−2
181510250.17.14 × 10−27.24 × 10−29.11 × 10−28.34 × 10−25.56 × 10−21.54 × 10−16.35 × 10−26.50 × 10−26.76 × 10−25.61 × 10−24.23 × 10−2
191515100.31.41 × 10−17.62 × 10−29.23 × 10−28.64 × 10−25.78 × 10−21.51 × 10−16.70 × 10−25.95 × 10−26.41 × 10−25.65 × 10−24.70 × 10−2
20152510.91.80 × 10−11.60 × 10−12.04 × 10−11.80 × 10−11.41 × 10−12.41 × 10−11.84 × 10−11.91 × 10−11.99 × 10−11.54 × 10−19.11 × 10−2
21201250.92.02 × 10−17.64 × 10−29.92 × 10−28.30 × 10−25.00 × 10−21.88 × 10−16.95 × 10−21.16 × 10−16.73 × 10−25.81 × 10−23.63 × 10−2
2220510.36.27 × 10−21.83 × 10−12.12 × 10−11.97 × 10−11.58 × 10−12.28 × 10−11.97 × 10−11.97 × 10−11.95 × 10−11.89 × 10−11.44 × 10−1
232010100.71.93 × 10−11.36 × 10−19.19 × 10−29.09 × 10−21.25 × 10−11.74 × 10−16.88 × 10−26.69 × 10−27.90 × 10−25.69 × 10−24.44 × 10−2
24201550.16.66 × 10−21.37 × 10−11.46 × 10−11.44 × 10−11.24 × 10−11.80 × 10−11.32 × 10−11.33 × 10−11.39 × 10−11.27 × 10−11.19 × 10−1
252025150.52.87 × 10−17.69 × 10−28.89 × 10−28.28 × 10−25.39 × 10−22.43 × 10−11.32 × 10−16.83 × 10−28.01 × 10−25.72 × 10−24.73 × 10−2
Table A4. Parameter sensitivity results of Car Evaluation dataset based on orthogonal arrangement.
No. | M | k | k_soma | θ_soma | MSE: BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.11.20 × 10−14.32 × 10−14.29 × 10−14.32 × 10−14.24 × 10−14.65 × 10−14.34 × 10−14.50 × 10−14.33 × 10−14.66 × 10−14.28 × 10−1
215100.99.21 × 10−21.81 × 10−11.72 × 10−11.99 × 10−12.22 × 10−13.23 × 10−11.81 × 10−11.76 × 10−11.76 × 10−13.18 × 10−11.78 × 10−1
3110150.31.47 × 10−12.25 × 10−12.05 × 10−12.20 × 10−12.72 × 10−13.72 × 10−11.74 × 10−11.70 × 10−11.81 × 10−14.79 × 10−11.54 × 10−1
4115250.51.50 × 10−12.30 × 10−12.24 × 10−12.04 × 10−12.90 × 10−13.47 × 10−11.89 × 10−11.86 × 10−11.89 × 10−14.80 × 10−11.58 × 10−1
512550.71.40 × 10−11.93 × 10−11.83 × 10−11.96 × 10−12.73 × 10−13.10 × 10−11.76 × 10−11.71 × 10−11.76 × 10−14.24 × 10−11.71 × 10−1
65150.31.21 × 10−12.05 × 10−12.14 × 10−12.10 × 10−11.64 × 10−13.09 × 10−11.86 × 10−12.20 × 10−11.94 × 10−12.70 × 10−11.48 × 10−1
755250.71.35 × 10−12.16 × 10−12.10 × 10−12.28 × 10−11.65 × 10−13.67 × 10−11.67 × 10−11.71 × 10−11.70 × 10−13.61 × 10−11.82 × 10−1
851010.51.03 × 10−13.30 × 10−13.43 × 10−13.43 × 10−13.41 × 10−13.65 × 10−13.39 × 10−13.37 × 10−13.32 × 10−14.01 × 10−13.16 × 10−1
9515150.91.49 × 10−11.92 × 10−11.85 × 10−11.99 × 10−11.69 × 10−13.15 × 10−11.65 × 10−11.63 × 10−11.71 × 10−13.07 × 10−11.76 × 10−1
10525100.11.19 × 10−12.65 × 10−12.60 × 10−12.60 × 10−12.42 × 10−13.38 × 10−12.43 × 10−12.56 × 10−12.47 × 10−13.77 × 10−12.53 × 10−1
11101100.55.44 × 10−21.61 × 10−11.79 × 10−11.84 × 10−11.21 × 10−13.82 × 10−11.39 × 10−11.54 × 10−11.22 × 10−12.67 × 10−19.65 × 10−2
12105150.14.67 × 10−22.18 × 10−12.24 × 10−12.37 × 10−11.86 × 10−13.85 × 10−11.84 × 10−11.99 × 10−12.00 × 10−13.11 × 10−11.62 × 10−1
13101050.91.26 × 10−11.85 × 10−11.91 × 10−11.86 × 10−11.69 × 10−13.37 × 10−11.78 × 10−11.64 × 10−11.61 × 10−12.65 × 10−11.36 × 10−1
14101510.71.05 × 10−12.95 × 10−13.16 × 10−13.12 × 10−12.96 × 10−13.68 × 10−13.13 × 10−12.96 × 10−12.95 × 10−13.64 × 10−12.73 × 10−1
151025250.31.94 × 10−12.25 × 10−12.25 × 10−12.21 × 10−11.99 × 10−14.38 × 10−11.81 × 10−11.92 × 10−11.88 × 10−13.93 × 10−11.98 × 10−1
16151150.74.40 × 10−21.57 × 10−11.73 × 10−11.73 × 10−11.24 × 10−13.71 × 10−11.28 × 10−11.22 × 10−11.19 × 10−12.31 × 10−18.71 × 10−2
1715550.54.04 × 10−21.86 × 10−11.86 × 10−11.90 × 10−11.47 × 10−13.48 × 10−11.56 × 10−11.60 × 10−11.57 × 10−13.07 × 10−11.08 × 10−1
181510250.11.17 × 10−12.05 × 10−12.13 × 10−12.27 × 10−11.72 × 10−14.12 × 10−11.69 × 10−11.81 × 10−11.76 × 10−13.29 × 10−11.69 × 10−1
191515100.31.34 × 10−12.14 × 10−12.15 × 10−12.03 × 10−11.74 × 10−14.11 × 10−11.70 × 10−11.59 × 10−11.75 × 10−13.21 × 10−11.55 × 10−1
20152510.91.04 × 10−12.68 × 10−12.93 × 10−12.96 × 10−12.54 × 10−13.60 × 10−12.82 × 10−12.71 × 10−12.69 × 10−13.34 × 10−12.36 × 10−1
21201250.91.07 × 10−11.50 × 10−11.74 × 10−11.63 × 10−19.89 × 10−22.95 × 10−11.28 × 10−11.20 × 10−11.18 × 10−12.16 × 10−19.23 × 10−2
2220510.39.22 × 10−23.68 × 10−13.78 × 10−13.77 × 10−13.62 × 10−14.13 × 10−13.71 × 10−13.77 × 10−13.72 × 10−14.10 × 10−13.38 × 10−1
232010100.71.37 × 10−12.11 × 10−12.06 × 10−12.04 × 10−11.62 × 10−13.85 × 10−11.60 × 10−11.51 × 10−11.59 × 10−13.11 × 10−11.72 × 10−1
24201550.11.07 × 10−13.23 × 10−13.22 × 10−13.19 × 10−13.04 × 10−13.84 × 10−13.02 × 10−13.13 × 10−13.18 × 10−14.03 × 10−13.01 × 10−1
252025150.51.54 × 10−12.39 × 10−12.21 × 10−12.00 × 10−11.88 × 10−13.57 × 10−11.75 × 10−11.58 × 10−11.70 × 10−14.20 × 10−11.80 × 10−1
Table A5. Parameter sensitivity results of Diabetic Retinopathy dataset based on orthogonal arrangement.
No. | M | k | k_soma | θ_soma | MSE: BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.11.28 × 10−13.80 × 10−13.90 × 10−13.57 × 10−13.76 × 10−15.12 × 10−13.91 × 10−13.88 × 10−13.75 × 10−14.05 × 10−13.51 × 10−1
215100.93.09 × 10−14.19 × 10−14.51 × 10−14.23 × 10−18.41 × 10−11.23 × 104.76 × 10−14.49 × 10−13.59 × 10−13.74 × 10−14.22 × 10−1
3110150.33.02 × 10−14.17 × 10−15.77 × 10−13.63 × 10−18.21 × 10−11.21 × 104.06 × 10−14.28 × 10−13.93 × 10−13.51 × 10−14.61 × 10−1
4115250.53.09 × 10−14.73 × 10−18.02 × 10−16.86 × 10−11.05 × 101.22 × 105.51 × 10−16.48 × 10−14.40 × 10−13.68 × 10−17.62 × 10−1
512550.72.91 × 10−14.26 × 10−17.40 × 10−14.64 × 10−11.10 × 101.16 × 105.93 × 10−16.20 × 10−13.88 × 10−13.67 × 10−14.55 × 10−1
65150.31.28 × 10−12.66 × 10−13.18 × 10−12.53 × 10−12.38 × 10−18.16 × 10−13.19 × 10−12.61 × 10−12.56 × 10−13.58 × 10−11.75 × 10−1
755250.72.94 × 10−13.79 × 10−14.77 × 10−13.68 × 10−15.52 × 10−11.215.08 × 10−15.25 × 10−13.21 × 10−13.50 × 10−12.87 × 10−1
851010.51.47 × 10−13.75 × 10−14.01 × 10−13.71 × 10−14.53 × 10−15.84 × 10−14.08 × 10−14.06 × 10−13.93 × 10−14.24 × 10−13.56 × 10−1
9515150.93.08 × 10−13.80 × 10−15.74 × 10−13.74 × 10−16.83 × 10−11.21 × 104.52 × 10−14.01 × 10−13.70 × 10−13.74 × 10−13.52 × 10−1
10525100.11.79 × 10−12.85 × 10−13.94 × 10−12.59 × 10−15.39 × 10−16.93 × 10−13.68 × 10−13.22 × 10−13.42 × 10−13.55 × 10−12.85 × 10−1
11101100.53.05 × 10−12.82 × 10−14.24 × 10−12.69 × 10−11.90 × 10−11.203.14 × 10−12.43 × 10−12.18 × 10−13.57 × 10−11.41 × 10−1
12105150.11.93 × 10−12.73 × 10−13.32 × 10−12.67 × 10−12.34 × 10−17.86 × 10−13.14 × 10−12.79 × 10−12.66 × 10−13.57 × 10−11.92 × 10−1
13101050.93.02 × 10−14.06 × 10−14.88 × 10−14.01 × 10−14.62 × 10−11.19 × 103.68 × 10−14.61 × 10−13.98 × 10−13.29 × 10−12.49 × 10−1
14101510.71.59 × 10−13.88 × 10−14.34 × 10−14.01 × 10−14.24 × 10−16.31 × 10−14.29 × 10−14.06 × 10−13.91 × 10−14.30 × 10−13.38 × 10−1
151025250.33.09 × 10−13.96 × 10−15.56 × 10−12.84 × 10−17.47 × 10−11.234.23 × 10−14.34 × 10−14.66 × 10−14.08 × 10−13.19 × 10−1
16151150.73.06 × 10−12.79 × 10−15.86 × 10−12.78 × 10−11.91 × 10−11.215.74 × 10−16.41 × 10−12.28 × 10−13.60 × 10−11.26 × 10−1
1715550.52.17 × 10−13.10 × 10−13.92 × 10−12.95 × 10−12.93 × 10−11.003.67 × 10−12.82 × 10−12.98 × 10−13.40 × 10−11.75 × 10−1
181510250.12.63 × 10−13.16 × 10−13.84 × 10−12.99 × 10−12.44 × 10−18.96 × 10−13.15 × 10−13.38 × 10−13.00 × 10−13.57 × 10−12.30 × 10−1
191515100.32.69 × 10−13.58 × 10−14.66 × 10−12.74 × 10−13.10 × 10−11.08 × 103.46 × 10−13.22 × 10−13.46 × 10−13.83 × 10−12.37 × 10−1
20152510.91.71 × 10−14.16 × 10−14.73 × 10−14.34 × 10−15.20 × 10−16.79 × 10−14.68 × 10−14.47 × 10−14.41 × 10−14.10 × 10−13.63 × 10−1
21201250.93.07 × 10−13.06 × 10−19.60 × 10−13.01 × 10−12.71 × 10−11.21 × 104.78 × 10−11.132.98 × 10−14.19 × 10−11.27 × 10−1
2220510.31.30 × 10−13.46 × 10−13.91 × 10−13.55 × 10−13.20 × 10−15.37 × 10−13.76 × 10−13.76 × 10−13.54 × 10−14.03 × 10−12.62 × 10−1
232010100.73.03 × 10−13.81 × 10−14.79 × 10−12.99 × 10−12.70 × 10−11.223.98 × 10−13.89 × 10−13.06 × 10−13.41 × 10−12.08 × 10−1
24201550.11.40 × 10−12.75 × 10−13.21 × 10−12.61 × 10−12.87 × 10−15.87 × 10−12.95 × 10−12.94 × 10−13.08 × 10−13.30 × 10−12.30 × 10−1
252025150.53.07 × 10−13.99 × 10−14.76 × 10−13.02 × 10−14.50 × 10−11.22 × 10+003.70 × 10−14.57 × 10−13.75 × 10−13.79 × 10−12.66 × 10−1
Table A6. Parameter sensitivity results of Glass Identification dataset based on orthogonal arrangement.
No. | M | k | k_soma | θ_soma | MSE: BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.11.19 × 10−14.18 × 10−14.18 × 10−14.20 × 10−14.11 × 10−14.36 × 10−14.19 × 10−14.27 × 10−14.21 × 10−14.19 × 10−14.20 × 10−1
215100.91.15 × 10−11.25 × 10−11.09 × 10−11.18 × 10−12.06 × 10−12.15 × 10−11.15 × 10−11.22 × 10−11.35 × 10−11.26 × 10−11.12 × 10−1
3110150.31.16 × 10−11.17 × 10−11.11 × 10−11.15 × 10−11.86 × 10−12.02 × 10−18.84 × 10−29.72 × 10−21.06 × 10−19.60 × 10−21.44 × 10−1
4115250.51.20 × 10−11.22 × 10−11.20 × 10−11.23 × 10−12.32 × 10−12.09 × 10−11.17 × 10−11.48 × 10−11.29 × 10−11.47 × 10−11.63 × 10−1
512550.71.14 × 10−11.06 × 10−11.15 × 10−11.08 × 10−12.88 × 10−12.01 × 10−11.09 × 10−11.71 × 10−11.22 × 10−11.18 × 10−11.18 × 10−1
65150.31.19 × 10−11.37 × 10−11.50 × 10−11.40 × 10−11.11 × 10−12.04 × 10−11.32 × 10−11.49 × 10−11.35 × 10−11.32 × 10−11.12 × 10−1
755250.79.39 × 10−21.04 × 10−11.21 × 10−11.18 × 10−18.48 × 10−22.34 × 10−19.09 × 10−21.02 × 10−19.89 × 10−29.22 × 10−27.45 × 10−2
851010.59.89 × 10−22.93 × 10−13.14 × 10−13.02 × 10−12.97 × 10−13.40 × 10−13.05 × 10−13.12 × 10−13.17 × 10−13.12 × 10−12.77 × 10−1
9515150.91.18 × 10−19.22 × 10−21.10 × 10−11.05 × 10−11.18 × 10−12.23 × 10−19.13 × 10−21.03 × 10−11.13 × 10−11.08 × 10−19.61 × 10−2
10525100.18.84 × 10−21.76 × 10−11.92 × 10−11.88 × 10−11.91 × 10−12.58 × 10−11.87 × 10−11.81 × 10−12.00 × 10−12.01 × 10−11.74 × 10−1
11101100.51.17 × 10−11.00 × 10−11.11 × 10−11.05 × 10−15.62 × 10−22.01 × 10−19.34 × 10−29.79 × 10−28.10 × 10−28.77 × 10−25.57 × 10−2
12105150.15.73 × 10−21.23 × 10−11.42 × 10−11.37 × 10−11.04 × 10−12.09 × 10−11.21 × 10−11.21 × 10−11.14 × 10−11.31 × 10−11.01 × 10−1
13101050.91.08 × 10−19.84 × 10−21.31 × 10−11.23 × 10−18.44 × 10−22.29 × 10−11.09 × 10−11.05 × 10−11.22 × 10−11.21 × 10−15.99 × 10−2
14101510.79.59 × 10−22.49 × 10−12.75 × 10−12.63 × 10−12.53 × 10−12.98 × 10−12.62 × 10−12.67 × 10−12.70 × 10−12.80 × 10−12.40 × 10−1
151025250.31.18 × 10−19.77 × 10−21.22 × 10−11.17 × 10−19.47 × 10−22.24 × 10−19.57 × 10−29.80 × 10−21.09 × 10−19.67 × 10−28.62 × 10−2
16151150.71.17 × 10−19.19 × 10−21.15 × 10−11.01 × 10−16.00 × 10−22.24 × 10−19.42 × 10−21.04 × 10−18.88 × 10−29.29 × 10−25.49 × 10−2
1715550.54.48 × 10−29.35 × 10−21.00 × 10−11.12 × 10−17.56 × 10−22.07 × 10−18.71 × 10−28.79 × 10−29.05 × 10−28.94 × 10−25.42 × 10−2
181510250.15.94 × 10−29.94 × 10−21.11 × 10−11.08 × 10−17.37 × 10−22.15 × 10−18.73 × 10−29.53 × 10−21.03 × 10−18.61 × 10−26.23 × 10−2
191515100.39.17 × 10−29.90 × 10−21.15 × 10−19.92 × 10−26.83 × 10−22.34 × 10−18.84 × 10−28.88 × 10−29.14 × 10−29.11 × 10−26.27 × 10−2
20152510.99.17 × 10−22.20 × 10−12.38 × 10−12.30 × 10−12.03 × 10−12.87 × 10−12.35 × 10−12.34 × 10−12.41 × 10−12.53 × 10−11.84 × 10−1
21201250.91.20 × 10−19.36 × 10−21.19 × 10−11.11 × 10−15.80 × 10−22.29 × 10−19.66 × 10−21.01 × 10−18.64 × 10−29.72 × 10−25.20 × 10−2
2220510.31.02 × 10−13.36 × 10−13.55 × 10−13.49 × 10−13.20 × 10−13.76 × 10−13.43 × 10−19.04 × 10−13.45 × 10−13.53 × 10−13.11 × 10−1
232010100.71.15 × 10−19.16 × 10−21.14 × 10−11.07 × 10−16.69 × 10−22.38 × 10−18.79 × 10−28.13 × 10−28.83 × 10−28.06 × 10−26.12 × 10−2
24201550.18.52 × 10−22.65 × 10−12.79 × 10−12.78 × 10−12.51 × 10−13.45 × 10−12.68 × 10−12.75 × 10−11.07 × 10−12.71 × 10−12.55 × 10−1
252025150.51.19 × 10−19.18 × 10−21.15 × 10−11.10 × 10−17.40 × 10−22.50 × 10−19.03 × 10−27.62 × 10−21.05 × 10−19.30 × 10−26.92 × 10−2
Table A7. Average accuracy of testing of each learning algorithm in Australian Credit Approval dataset.
No. | M | k | k_soma | θ_soma | Accuracy (%): BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.154.33 82.56 84.09 85.28 80.19 55.88 77.04 84.11 85.19 56.55 85.39
215100.955.31 83.17 82.33 84.64 66.33 55.01 70.82 82.13 84.06 58.12 84.67
3110150.354.33 83.93 79.44 80.85 59.40 56.01 76.51 83.80 84.22 58.79 78.62
4115250.555.81 81.79 78.36 76.47 59.15 56.12 71.51 84.30 83.08 56.73 73.72
512550.754.33 83.27 75.96 77.95 59.71 55.60 69.60 83.85 83.46 56.84 77.33
65150.355.76 84.41 83.69 84.59 84.35 55.01 82.06 83.98 85.39 63.67 85.04
755250.757.65 83.80 83.66 85.23 76.59 55.85 75.04 81.82 81.85 60.85 83.64
851010.555.76 84.04 83.35 84.57 73.99 54.98 78.99 84.40 83.90 57.12 83.59
9515150.955.76 85.52 82.14 85.99 74.51 56.02 79.15 84.11 84.30 59.15 84.48
10525100.155.76 82.35 80.45 84.72 66.36 55.62 77.71 84.06 81.30 56.76 80.66
11101100.556.34 84.21 82.64 83.11 84.34 56.10 80.34 83.96 83.17 71.40 84.38
12105150.165.64 84.61 82.46 84.25 85.06 55.28 79.94 84.81 84.52 59.77 83.62
13101050.956.34 85.67 83.64 85.31 81.00 55.41 78.90 83.11 83.64 62.19 84.51
14101510.755.07 83.88 83.17 83.41 78.33 56.10 79.68 83.78 84.25 58.18 84.86
151025250.356.34 83.53 82.38 83.59 76.54 57.00 79.82 84.48 81.34 59.29 82.27
16151150.755.48 84.40 78.78 84.62 82.33 55.78 78.65 83.24 84.27 64.17 85.97
1715550.564.38 84.30 83.37 81.32 81.88 55.43 76.28 85.43 83.12 66.12 84.86
181510250.156.97 84.04 83.54 84.48 83.41 55.41 77.62 85.89 82.46 61.71 85.10
191515100.356.36 83.88 83.24 82.93 79.13 55.17 81.00 83.99 82.24 62.11 83.74
20152510.955.48 84.73 82.85 84.94 79.18 54.78 78.47 82.09 83.86 61.45 83.45
21201250.955.14 84.51 76.97 83.96 83.09 56.33 75.43 77.00 85.65 62.01 85.01
2220510.361.14 84.52 82.91 84.96 83.14 55.73 77.12 83.82 84.56 59.32 84.65
232010100.755.14 84.20 84.90 85.15 82.29 55.09 78.62 83.19 83.49 64.30 84.73
24201550.158.12 82.25 80.45 84.06 78.57 55.70 79.98 83.78 84.25 60.34 83.24
252025150.555.14 84.80 82.72 81.95 81.01 55.93 79.05 83.85 81.72 59.89 82.75
Table A8. Average accuracy of testing of each learning algorithm in Banknote Authentication dataset.
No. | M | k | k_soma | θ_soma | Accuracy (%): BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.157.45 88.88 89.04 89.28 89.00 84.94 82.43 88.67 89.10 89.09 88.55
215100.975.02 88.11 88.57 87.62 87.28 82.85 78.79 89.26 88.88 89.25 87.73
3110150.357.89 88.79 88.91 88.62 88.92 83.60 80.65 91.82 91.29 89.90 88.67
4115250.554.93 88.68 88.96 88.88 88.95 83.86 83.67 90.56 90.00 88.86 89.35
512550.754.93 88.11 88.41 88.62 88.33 83.57 78.91 88.79 89.24 88.75 88.30
65150.367.58 92.40 93.12 92.94 98.10 85.50 87.79 96.80 97.22 96.63 98.94
755250.757.91 91.31 91.48 92.45 96.89 84.17 83.60 96.42 95.43 96.46 98.03
851010.569.01 90.44 89.40 90.96 95.35 84.23 85.95 94.74 94.13 95.24 96.34
9515150.957.56 90.57 90.76 93.20 97.09 84.82 86.69 96.08 95.13 97.16 98.00
10525100.152.59 91.38 90.96 92.47 96.59 83.16 86.80 94.59 94.75 95.11 98.04
11101100.586.50 91.62 91.50 92.67 98.45 88.10 93.23 98.67 97.53 98.73 99.50
12105150.181.17 91.20 91.37 91.22 97.94 88.17 92.68 96.43 95.37 96.44 99.26
13101050.961.35 89.69 89.79 90.66 96.25 86.33 91.58 96.46 96.31 97.54 98.66
14101510.763.06 89.89 89.23 90.73 95.74 85.02 90.59 94.45 94.01 96.11 97.57
151025250.350.11 91.11 91.83 92.42 97.86 87.66 92.69 96.29 95.60 96.50 98.55
16151150.788.16 91.33 90.89 92.52 98.69 88.24 94.33 98.96 97.68 99.01 99.57
1715550.580.88 90.68 89.89 91.67 98.11 87.97 91.01 97.27 95.39 96.96 98.88
181510250.147.85 90.69 89.22 91.82 97.97 88.18 91.86 95.31 95.50 96.09 98.75
191515100.351.42 91.25 90.40 91.65 97.99 87.65 91.33 96.67 94.85 96.51 98.82
20152510.955.69 87.67 87.86 89.94 95.53 83.22 88.25 94.16 93.01 96.13 97.93
21201250.992.48 91.51 91.33 91.75 98.93 87.31 94.91 98.94 97.52 98.35 99.42
2220510.396.19 90.03 88.26 90.19 97.83 88.04 90.01 95.19 94.78 95.53 98.50
232010100.755.22 90.15 88.54 90.66 98.16 87.66 91.15 97.18 96.44 97.12 99.07
24201550.146.21 89.18 87.34 90.17 98.28 87.54 91.24 95.55 94.60 95.93 98.97
252025150.551.43 89.35 88.58 90.67 98.21 88.20 88.44 96.86 95.19 96.59 98.81
Table A9. Average accuracy of testing of each learning algorithm in Breast Cancer dataset.
No. | M | k | k_soma | θ_soma | Accuracy (%): BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.134.7193.5994.2594.9795.1789.4882.3792.6593.6394.4894.78
215100.946.0392.8994.1994.194.8490.0685.3394.3294.4895.0595.43
3110150.342.4693.8794.1993.9290.5290.6790.7894.1692.8495.6895.33
4115250.534.7193.9794.3794.0582.7990.0883.8494.4193.1695.3294.56
512550.734.7193.5794.1993.4376.7690.1980.6894.5493.8694.8194.56
65150.334.5495.0694.9294.4994.8491.1194.7695.3894.5495.3096.22
755250.778.9593.7594.4194.0295.1090.6794.5694.9594.1994.8495.14
851010.542.7193.4190.8492.7694.9789.6894.1994.3394.7595.3895.35
9515150.934.5494.0593.7993.7392.4989.2492.4694.2994.2995.5795.83
10525100.138.2994.1993.9294.3294.5989.8492.2994.6795.0595.1494.76
11101100.558.1394.9894.0094.3595.0690.0589.9795.0894.4395.7195.94
12105150.195.9594.8694.1194.1395.1190.4692.8795.1394.995.5795.90
13101050.950.8493.1992.3593.7895.4390.5291.7595.0094.6795.4895.60
14101510.735.3592.8491.9492.194.8989.9889.5994.4493.9894.9595.38
151025250.342.1994.2193.6594.2595.0690.7694.3294.9494.1695.3295.08
16151150.758.7895.4393.9294.3295.3790.0594.4695.0095.0395.3296.16
1715550.593.3094.5693.9794.2495.690.6792.4995.0295.1495.7596.13
181510250.183.1794.8394.1094.2595.5190.7692.6595.5695.2295.6895.51
191515100.367.7594.2193.7394.4694.9891.1488.6395.0894.6595.5195.71
20152510.933.8492.4492.3891.8194.8188.4689.1494.3394.7695.0295.46
21201250.954.5294.6793.6394.4195.3789.1791.2792.9495.5695.4496.00
2220510.395.4193.4393.5293.7894.8689.4194.1094.2794.7095.1195.30
232010100.760.4894.5793.4394.1195.1991.0894.2594.9293.9295.0896.21
24201550.178.5294.4994.3594.2595.3890.9294.1094.8794.4695.1095.46
252025150.542.3794.1694.2493.8494.7180.3894.1095.0094.0695.2595.83
Table A10. Average accuracy of testing of each learning algorithm in Car Evaluation dataset.
No. | M | k | k_soma | θ_soma | Accuracy (%): BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.170.186.7887.5285.2286.2475.5381.681.586.6575.8887.78
215100.979.6386.4987.8985.8283.7376.4481.9487.7787.8174.7986.74
3110150.370.186.3786.9786.582.9980.0684.8388.487.9471.0688.93
4115250.570.187.7387.2083.7382.8977.8683.4287.9587.3272.3785.21
512550.770.4386.4386.486.0081.1078.581.7188.1186.8971.6288.42
65150.369.4388.7486.8587.4290.9377.8688.6288.6588.5881.6393.8
755250.771.287.5386.2887.7988.9681.1683.9388.4488.3877.6289.36
851010.572.9587.4187.6886.8586.1782.0785.9787.0387.8675.5189.66
9515150.969.4386.4586.0288.2787.6780.8984.8387.8387.7678.7187.57
10525100.170.4285.9386.8287.1287.6181.1086.3487.1188.3173.9186.64
11101100.585.4588.9187.6486.5391.5375.9788.8289.3691.4579.5892.94
12105150.188.5586.7286.3987.3289.5676.6187.9988.9888.8577.7791.6
13101050.974.486.186.3886.0486.8969.885.4488.3888.5278.7189.36
14101510.771.0887.9387.3287.0588.9876.5585.3687.8887.6977.9589.49
151025250.361.1387.387.0787.7987.3777.1087.3487.7386.5677.2989.03
16151150.788.4488.6387.4887.9990.7277.7789.8591.4691.5182.4693.8
1715550.588.5987.2187.0484.0988.9778.6886.2489.8988.6977.5992.87
181510250.173.5487.0787.1087.7788.7576.9487.6487.788.5376.7189.95
191515100.371.8387.4286.387.788.2476.8186.9689.1887.7377.9089.96
20152510.970.3586.5585.5186.4388.775.3386.3587.9286.5481.0690.15
21201250.988.7789.3486.6387.7192.9482.2887.5791.2291.4784.4393.32
2220510.389.2788.4288.0787.6688.9780.3787.0587.3987.9679.4690.74
232010100.772.0586.1886.8187.8688.278.3185.5389.5688.1378.8889.21
24201550.180.0586.0386.1786.7489.479.5187.0988.0787.5773.9388.44
252025150.569.4987.0786.9684.1787.9279.3986.888.8188.1574.7688.65
Table A11. Average accuracy of testing of each learning algorithm in Diabetic Retinopathy dataset.
No. | M | k | k_soma | θ_soma | Accuracy (%): BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.139.0475.6277.0581.4581.5637.0765.8875.4374.3267.2081.11
215100.939.0470.3270.5380.656.9438.6559.5772.3574.4072.8876.73
3110150.339.0477.870.0078.6856.8239.2974.4273.6174.0072.2472.31
4115250.539.0475.2458.9561.0547.3537.4867.6563.7072.0172.2259.04
512550.739.0475.2857.572.2640.0038.9358.6361.9269.9869.8771.67
65150.338.6881.9278.781.4583.3538.6577.2082.0981.3771.7188.27
755250.741.1178.2974.2981.5269.3838.1667.6767.3776.2875.1383.08
851010.538.6880.0077.4680.9664.5138.9572.4473.175.4568.2981.05
9515150.938.6881.6267.3382.2963.1439.0669.8971.7974.1772.4882.33
10525100.138.6880.1768.5981.6255.5638.7667.0375.2673.7070.1777.95
11101100.539.0880.6669.0681.5487.1638.774.2982.8285.1570.3889.91
12105150.145.2181.5477.3982.6583.7046.1377.6379.3280.2169.6486.22
13101050.939.0878.2566.9481.5475.3240.2467.0370.4972.5474.5984.51
14101510.738.8980.5375.0082.0172.2937.9574.5576.1875.6671.7182.69
151025250.339.0880.0071.2081.9059.8339.5374.9474.5969.5372.5280.85
16151150.737.9180.1562.4881.4386.1338.5968.1262.5283.6370.0690.79
1715550.548.4878.6877.3578.9181.0335.6071.7979.1277.8872.9786.28
181510250.137.8880.3476.7981.7182.7145.5377.3576.1177.7472.8684.27
191515100.341.3079.5974.4283.0379.8338.6174.4477.5074.6470.2184.70
20152510.937.9179.3669.6281.3764.0836.8268.2372.3972.0174.8181.62
21201250.938.1878.0348.3880.3881.0538.1267.7843.4880.0965.6489.66
2220510.345.2681.8274.8781.4582.7137.7176.7175.8875.7968.9785.94
232010100.738.877.9972.7680.1381.7739.2969.4972.9376.8474.3685.92
24201550.142.3979.5173.9381.3977.0340.0674.8178.5775.9675.2483.59
252025150.538.1878.9775.4576.3773.5937.6373.9771.9774.8369.7681.50
Table A12. Average accuracy of testing of each learning algorithm in Glass Identification dataset.
No. | M | k | k_soma | θ_soma | Accuracy (%): BP | GA | DE | PBIL | PSO | ACO | ABC | WOA | HHO | ChOA | BBO
11110.176.72 90.73 91.88 90.89 90.10 88.44 87.66 89.95 91.04 92.29 91.67
215100.977.24 92.19 91.51 91.61 86.30 87.24 88.39 92.24 91.61 92.86 91.25
3110150.376.98 92.19 91.61 90.52 88.23 89.06 87.55 92.92 91.56 93.07 90.26
4115250.576.72 91.46 91.30 91.88 86.98 85.52 87.66 89.38 90.21 90.68 89.53
512550.776.72 91.41 92.19 92.55 82.97 87.40 87.86 88.80 90.63 92.60 92.14
65150.375.63 92.34 91.93 91.15 92.45 88.07 89.53 90.94 92.19 93.13 91.88
755250.778.70 92.40 90.99 92.71 92.76 86.82 91.30 90.78 90.31 92.76 92.08
851010.576.61 92.60 90.42 92.45 91.15 86.77 91.41 90.26 91.46 92.92 91.82
9515150.975.63 90.68 92.14 91.04 92.19 86.41 91.25 90.26 91.35 93.23 90.73
10525100.176.35 92.08 91.35 90.73 89.90 86.25 89.69 90.31 90.94 89.95 90.99
11101100.575.89 91.98 92.34 90.26 93.01 88.07 91.72 92.50 91.67 93.05 93.07
12105150.184.58 91.77 90.36 91.88 92.40 85.83 90.10 90.47 91.51 92.50 91.30
13101050.977.40 90.57 90.26 91.56 91.72 86.61 90.94 92.45 91.15 92.86 92.08
14101510.778.39 91.56 90.57 91.72 93.39 88.02 90.99 90.63 91.04 94.43 92.60
151025250.375.89 91.61 91.20 91.98 92.40 87.86 90.99 92.08 90.94 92.60 91.25
16151150.775.36 90.16 91.09 90.36 93.18 87.86 92.14 90.57 91.98 93.59 93.49
1715550.586.04 91.51 90.78 92.76 92.97 85.47 91.25 91.20 92.08 93.39 90.89
181510250.183.75 91.46 90.89 92.19 92.76 86.77 89.79 91.82 91.67 92.29 91.15
191515100.378.29 91.61 91.41 91.35 93.49 86.41 92.08 91.25 92.76 94.06 92.81
20152510.975.36 92.08 89.84 91.35 91.82 85.21 90.73 92.45 92.29 92.71 92.66
21201250.976.35 92.50 91.35 92.14 93.07 87.86 91.51 90.94 92.08 93.65 92.40
2220510.382.86 90.73 89.11 89.95 92.71 85.26 90.10 90.36 91.46 93.54 92.40
232010100.776.88 92.03 91.61 91.56 93.07 86.35 90.83 92.71 93.39 92.97 92.66
24201550.183.91 90.31 89.53 91.46 89.90 84.74 88.65 89.90 92.50 91.46 90.89
252025150.576.30 92.08 91.25 89.38 92.71 85.00 90.89 93.07 90.78 93.39 91.46

References

1. Novaković, J.D.; Veljović, A. Evaluation of classification models in machine learning. Theory Appl. Math. Comput. Sci. 2017, 7, 39–46.
2. Awad, W.A.; ELseiofi, S.M. Machine learning methods for spam e-mail classification. IJCSIT 2011, 3, 173–184.
3. Ossama, A.H.; Mohamed, A.R. Convolutional Neural Networks for Speech Recognition. IEEE/ACM Trans. 2014, 22, 1533–1545.
4. Argentiero, P.; Chin, R. An automated approach to the design of decision tree classifiers. IEEE Trans. 1982, PAMI-4, 51–57.
5. Rish, I. An empirical study of the naïve Bayes classifier. IJCAI 2001, 3, 41–46.
6. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
7. Schalkoff, R.J. Artificial Neural Networks; McGraw-Hill: New York, NY, USA, 1997.
8. Zouhal, L.M.; Denœux, T. An evidence-theoretic k-NN rule with parameter optimization. IEEE Trans. 1998, 28, 263–271.
9. Dietterich, T.G. The Handbook of Brain Theory and Neural Networks; The MIT Press: Cambridge, MA, USA, 2002.
10. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biol. 1990, 52, 99–115.
11. Rosenblatt, F. The Perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386.
12. Rumelhart, D.E.; Hinton, G.E. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
13. Riess, J. Adaptive neural network control of cyclic movements using functional neuromuscular stimulation. IEEE Trans. 2000, 8, 42–52.
14. Albawi, S.; Mohammed, T.A.; Al-zawi, S. Understanding of a Convolutional Neural Network; ICET 2017: Antalya, Turkey, 2017.
15. Fine, T.L. Feedforward Neural Network Methodology; Springer Science & Business Media: Berlin, Germany, 2006.
16. Gao, S.C.; Zhou, M.C. Dendritic neuron model with effective learning algorithms for classification, approximation, and prediction. IEEE Trans. 2019, 30, 601–614.
17. Todo, Y.; Tamura, H.; Yamashita, K.; Tang, Z. Unsupervised learnable neuron model with nonlinear interaction on dendrites. Neural Netw. 2014, 60, 96–103.
18. Ji, J.K.; Gao, S.C.; Cheng, J. An approximate logic neuron model with a dendritic structure. Neurocomputing 2016, 173, 1775–1783.
19. Gardner, M.W.; Dorling, S.R. Artificial neural networks (the multilayer perceptron)—A review of applications in the atmospheric sciences. Atmos. Environ. 1998, 32, 2627–2636.
20. Losonczy, A.; Makara, J. Compartmentalized dendritic plasticity and input feature storage in neurons. Nature 2008, 452, 436–441.
21. Losonczy, A.; Magee, J. Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron 2006, 50, 291–307.
22. Branco, T.; Häusser, M. The single dendritic branch as a fundamental functional unit in the nervous system. Curr. Opin. Neurobiol. 2010, 20, 494–502.
23. Yu, Y.; Wang, Y.R.; Gao, S.C.; Tang, Z. Statistical modeling and prediction for tourism economy using dendritic neural network. Comput. Intell. Neurosci. 2017, 2017, 9.
24. Tang, Y.J.; Ji, J.K.; Zhu, Y.L.; Gao, S.C.; Tang, Z.; Todo, Y. A differential evolution-oriented pruning neural network model for bankruptcy prediction. Complexity 2019, 2019, 21.
25. Sha, Z.J.; Hu, L.; Todo, Y.; Ji, J.K.; Gao, S.C.; Tang, Z. A breast cancer classifier using a neuron model with dendritic nonlinearity. IEICE Trans. Inf. Syst. 2015, 98, 1365–1376.
26. Jiang, T.; Gao, S.C.; Wang, D.Z.; Ji, J.K.; Todo, Y.; Tang, Z. A neuron model with synaptic nonlinearities in a dendritic tree for liver disorders. IEEJ Trans. Electr. Electron. Eng. 2017, 12, 105–115.
27. Jia, D.B.; Yuka, F. Validation of large-scale classification problem in dendritic neuron model using particle antagonism mechanism. Electronics 2020, 9, 792.
28. Gao, S.C.; Zhou, M.C.; Wang, Z. Fully Complex-valued Dendritic Neuron Model. IEEE Trans. Neural Netw. Learn. Syst. 2021.
29. Jia, D.B.; Li, C.H. Application and evolution for neural network and signal processing in large-scale systems. Complexity 2021, 2021, 7.
30. Luo, X.D.; Wen, X.H.; Zhou, M.C.; Abusorrah, A. Decision-Tree-Initialized Dendritic Neuron Model for Fast and Accurate Data Classification. IEEE Trans. Neural Netw. Learn. Syst. 2021.
31. Jia, D.B.; Dai, H.W. EEG processing in Internet of medical things using non-harmonic analysis: Application and evolution for SSVEP responses. IEEE Access 2019, 7, 11318–11327.
32. Jia, D.B.; Zheng, S.X. A Dendritic Neuron Model with Nonlinearity Validation on Istanbul Stock and Taiwan Futures Exchange Indexes Prediction; IEEE CCIS: Nanjing, China, 2018.
33. Xu, W.X.; Li, C.H. Optimizing the Weights and Thresholds in Dendritic Neuron Model Using the Whale Optimization Algorithm; Journal of Physics: Conference Series; IOP Publishing: Beijing, China, 2021.
34. Hecht-Nielsen, R. Theory of the backpropagation neural network. Neural Netw. Percept. 1992, 65–93.
35. Ji, J.K.; Song, S.B.; Tang, Y.J.; Gao, S.C.; Tang, Z.; Todo, Y. Approximate logic neuron model trained by states of matter search algorithm. Knowl.-Based Syst. 2019, 163, 120–130.
36. Mirjalili, S. Genetic algorithm. In Evolutionary Algorithms and Neural Networks; Springer: Cham, Switzerland; New York, NY, USA, 2019; pp. 43–55.
37. Whitley, D. A genetic algorithm tutorial. Stat. Comput. 1994, 4, 65–85.
38. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70.
39. Soto, R.; Crawford, B.; Olivares, R. A reactive population approach on the dolphin echolocation algorithm for solving cell manufacturing systems. Mathematics 2020, 8, 1389.
40. Simon, D. Biogeography-based optimization. IEEE Trans. 2008, 12, 702–713.
41. Zhang, Y.; Gu, X. Biogeography-based optimization algorithm for large-scale multistage batch plant scheduling. Expert Syst. Appl. 2020, 162, 113776.
42. Höhfeld, M.; Rudolph, G. Towards a theory of population-based incremental learning. In Proceedings of the IEEE Conference on Evolutionary Computation, Indianapolis, IN, USA, 13–16 April 1997; pp. 1–5.
43. Li, Y.; Feng, X.; Wang, G. Application of Population Based Incremental Learning Algorithm in Satellite Mission Planning. In Proceedings of the International Conference on Wireless and Satellite Systems, Nanjing, China, 17–18 September 2020; Springer: Cham, Switzerland, 2020.
44. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95), Nagoya, Japan, 4–6 October 1995.
45. Wang, F.; Zhang, H.; Zhou, A. A particle swarm optimization algorithm for mixed-variable optimization problems. Swarm Evol. Comput. 2021, 60, 100808.
46. Dorigo, M.; Birattari, M. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
47. Paniri, M.; Dowlatshahi, M.B.; Nezamabadi-pour, H. MLACO: A multi-label feature selection algorithm based on ant colony optimization. Knowl.-Based Syst. 2020, 192, 105285.
48. Karaboga, D. Artificial bee colony algorithm. Scholarpedia 2010, 5, 6915.
49. Wang, H.; Wang, W.; Xiao, S.; Cui, Z.; Xu, M. Improving artificial bee colony algorithm using a new neighborhood selection mechanism. Inf. Sci. 2020, 527, 227–240.
50. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
51. Jafari-Asl, J.; Seghier, M.E.; Ohadi, S. Efficient method using Whale Optimization Algorithm for reliability-based design optimization of labyrinth spillway. Appl. Soft Comput. 2021, 101, 107036.
52. Heidari, A.A.; Mirjalili, S. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
53. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338.
54. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/index.php (accessed on 23 October 2021).
55. Jugulum, R.; Taguchi, S. Computer-Based Robust Engineering: Essentials for DFSS; ASQ Quality Press: Milwaukee, WI, USA, 2004.
Figure 1. Detailed structure of the dendritic neural model.
Figure 2. Details of the four states corresponding to the six situations.
Figure 3. Iterative convergence graphs of the eleven algorithms on the six datasets.
Figure 4. Box plots of the eleven algorithms on the six datasets.
Table 1. Details of the datasets.
Classification Datasets | Attributes | Samples | Classes
Australian Credit Approval | 15 | 690 | 2
Banknote Authentication | 5 | 1372 | 2
Breast Cancer | 10 | 699 | 2
Car Evaluation | 7 | 1728 | 2
Diabetic Retinopathy | 17 | 520 | 2
Glass Identification | 10 | 214 | 2
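All six benchmarks are binary classification sets from the UCI Machine Learning Repository [54]. A minimal preparation sketch for one of them is given below; the CSV file name, the min-max scaling of features into [0, 1], and the stratified 70/30 train/test split are illustrative assumptions rather than settings reported in the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical CSV export of the Banknote Authentication data:
# numeric features in the first columns, a 0/1 class label in the last one.
data = pd.read_csv("banknote_authentication.csv", header=None).to_numpy()
X, y = data[:, :-1], data[:, -1]

# Min-max scale each feature into [0, 1]; an assumption, since sigmoid-based
# synapses are usually fed normalized inputs.
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

# Illustrative stratified 70/30 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
```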
Table 2. Initial parameter settings of the intelligent algorithms.
Algorithm | Parameter | Value
GA | Selection mechanism | Roulette wheel
GA | Crossover probability | 0.9
GA | Mutation probability | 0.1
DE | Crossover probability | 0.9
DE | Differential weight | 0.5
PBIL | Learning rate | 0.05
PBIL | Good population member | 1
PBIL | Bad population member | 0
PBIL | Elitism parameter | 1
PBIL | Mutation probability | 0.1
PSO | Acceleration constants | [2, 2]
PSO | Inertia weights | [0.9, 0.5]
ACO | Initial pheromone | 1e-6
ACO | Pheromone update constant | 20
ACO | Exploration constant | 1
ACO | Global pheromone decay rate | 0.9
ACO | Local pheromone decay rate | 0.5
ABC | Number of employed bees | 50
ABC | Number of onlooker bees | 25
WOA | Convergence factor a | a = 2 − 2·t/t_max
WOA | Coefficient vector A | A = 2a·rand(0, 1) − a
HHO | Initial escape energy | inf
ChOA | Initial attacker_score | inf
ChOA | Initial barrier_score | inf
ChOA | Initial chaser_score | inf
ChOA | Initial driver_score | inf
BBO | Habitat modification probability | 1
BBO | Mutation probability | 0.005
BBO | Bounds for immigration probability per gene | [0, 1]
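The two WOA entries in Table 2 are the coefficient update of the whale optimization algorithm [50]. Below is a simplified sketch of how the linearly decreasing factor a and the coefficient vector A enter the standard encircling-prey step; the spiral and random-exploration phases of the full algorithm are omitted, and the vector C comes from the original WOA formulation rather than from Table 2.

```python
import numpy as np

def woa_encircling_step(positions, best, t, t_max):
    """One simplified WOA exploitation step (encircling prey only).

    positions: (pop_size, dim) candidate solutions; best: (dim,) best solution found."""
    a = 2.0 - 2.0 * t / t_max                 # convergence factor, decreases from 2 to 0
    new_positions = np.empty_like(positions)
    for i, x in enumerate(positions):
        r1 = np.random.rand(x.size)
        r2 = np.random.rand(x.size)
        A = 2.0 * a * r1 - a                  # coefficient vector A = 2a*rand(0,1) - a
        C = 2.0 * r2                          # coefficient vector C from the standard WOA
        D = np.abs(C * best - x)              # distance from the current best whale
        new_positions[i] = best - A * D       # move around the best solution
    return new_positions
```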
Table 3. Optimal user-defined parameters of the dendritic neural model for the six datasets.
Classification Datasets | M | k | k_soma | θ_soma
Australian Credit Approval | 10 | 1 | 10 | 0.5
Banknote Authentication | 10 | 1 | 10 | 0.5
Breast Cancer | 10 | 1 | 10 | 0.5
Car Evaluation | 20 | 1 | 25 | 0.9
Diabetic Retinopathy | 10 | 1 | 10 | 0.5
Glass Identification | 10 | 1 | 10 | 0.5
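The four user-defined parameters in Table 3 act at fixed points of the model: M is the number of dendritic branches, k scales the synaptic sigmoids, and k_soma and θ_soma shape the soma output. The following is a minimal forward-pass sketch assuming the standard four-layer dendritic neuron formulation (synaptic, dendritic, membrane, and soma layers) of the cited DNM literature [16,17,18]; the weight and threshold names are illustrative.

```python
import numpy as np

def dnm_forward(x, w, q, k, k_soma, theta_soma):
    """x: (n,) input scaled to [0, 1]; w, q: (M, n) synaptic weights and thresholds."""
    # Synaptic layer: a sigmoid for every input on each of the M branches.
    Y = 1.0 / (1.0 + np.exp(-k * (w * x - q)))        # shape (M, n)
    # Dendritic layer: multiplicative (AND-like) interaction along each branch.
    Z = np.prod(Y, axis=1)                            # shape (M,)
    # Membrane layer: sum of the branch signals.
    V = np.sum(Z)
    # Soma layer: output sigmoid shaped by k_soma and theta_soma.
    return 1.0 / (1.0 + np.exp(-k_soma * (V - theta_soma)))

# Example with the setting shared by five of the six datasets in Table 3.
M, n = 10, 4
rng = np.random.default_rng(0)
w, q = rng.standard_normal((M, n)), rng.standard_normal((M, n))
output = dnm_forward(rng.random(n), w, q, k=1, k_soma=10, theta_soma=0.5)
```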
Table 4. Average accuracy and MSE of each learning algorithm on the six datasets with the optimal user-defined parameters.
Learning Algorithm | Datasets | M | k | k_soma | θ_soma | Average Accuracy (%) ± Standard Deviation | MSE
BP | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 56.34 ± 2.97 | 2.21 × 10−1
BP | Banknote Authentication | 10 | 1 | 10 | 0.5 | 86.50 ± 2.78 | 6.51 × 10−2
BP | Breast Cancer | 10 | 1 | 10 | 0.5 | 58.13 ± 25.88 | 1.77 × 10−1
BP | Car Evaluation | 20 | 1 | 25 | 0.9 | 75.08 ± 8.89 | 1.07 × 10−1
BP | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 39.08 ± 2.91 | 3.05 × 10−1
BP | Glass Identification | 10 | 1 | 10 | 0.5 | 75.88 ± 4.92 | 1.17 × 10−1
GA | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 84.21 ± 3.30 | 2.30 × 10−1
GA | Banknote Authentication | 10 | 1 | 10 | 0.5 | 91.62 ± 2.45 | 1.31 × 10−1
GA | Breast Cancer | 10 | 1 | 10 | 0.5 | 94.98 ± 1.62 | 7.34 × 10−2
GA | Car Evaluation | 20 | 1 | 25 | 0.9 | 89.34 ± 1.64 | 1.50 × 10−1
GA | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 80.66 ± 4.59 | 2.82 × 10−1
GA | Glass Identification | 10 | 1 | 10 | 0.5 | 91.98 ± 2.92 | 1.00 × 10−1
DE | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 82.64 ± 4.09 | 2.84 × 10−1
DE | Banknote Authentication | 10 | 1 | 10 | 0.5 | 91.50 ± 2.25 | 1.26 × 10−1
DE | Breast Cancer | 10 | 1 | 10 | 0.5 | 94.00 ± 1.33 | 8.15 × 10−2
DE | Car Evaluation | 20 | 1 | 25 | 0.9 | 86.63 ± 2.10 | 1.74 × 10−1
DE | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 69.06 ± 9.17 | 4.24 × 10−1
DE | Glass Identification | 10 | 1 | 10 | 0.5 | 92.34 ± 3.61 | 1.11 × 10−1
PBIL | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 83.11 ± 2.00 | 2.20 × 10−1
PBIL | Banknote Authentication | 10 | 1 | 10 | 0.5 | 92.67 ± 2.00 | 1.17 × 10−1
PBIL | Breast Cancer | 10 | 1 | 10 | 0.5 | 94.35 ± 1.50 | 8.15 × 10−2
PBIL | Car Evaluation | 20 | 1 | 25 | 0.9 | 87.71 ± 1.72 | 1.63 × 10−1
PBIL | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 81.54 ± 3.27 | 2.69 × 10−1
PBIL | Glass Identification | 10 | 1 | 10 | 0.5 | 90.26 ± 2.98 | 1.05 × 10−1
PSO | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 84.34 ± 3.51 | 2.12 × 10−1
PSO | Banknote Authentication | 10 | 1 | 10 | 0.5 | 98.45 ± 0.89 | 1.28 × 10−2
PSO | Breast Cancer | 10 | 1 | 10 | 0.5 | 95.06 ± 1.25 | 5.16 × 10−2
PSO | Car Evaluation | 20 | 1 | 25 | 0.9 | 92.94 ± 2.44 | 9.89 × 10−2
PSO | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 87.16 ± 2.67 | 1.90 × 10−1
PSO | Glass Identification | 10 | 1 | 10 | 0.5 | 93.00 ± 2.94 | 5.62 × 10−2
ACO | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 56.10 ± 3.80 | 8.51 × 10−1
ACO | Banknote Authentication | 10 | 1 | 10 | 0.5 | 88.10 ± 1.31 | 2.17 × 10−1
ACO | Breast Cancer | 10 | 1 | 10 | 0.5 | 90.05 ± 2.75 | 1.66 × 10−1
ACO | Car Evaluation | 20 | 1 | 25 | 0.9 | 82.28 ± 4.68 | 2.95 × 10−1
ACO | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 38.70 ± 2.93 | 1.20 × 100
ACO | Glass Identification | 10 | 1 | 10 | 0.5 | 88.07 ± 3.88 | 2.01 × 10−1
ABC | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 80.34 ± 6.40 | 2.76 × 10−1
ABC | Banknote Authentication | 10 | 1 | 10 | 0.5 | 93.23 ± 6.25 | 6.44 × 10−2
ABC | Breast Cancer | 10 | 1 | 10 | 0.5 | 89.97 ± 15.17 | 6.50 × 10−2
ABC | Car Evaluation | 20 | 1 | 25 | 0.9 | 87.57 ± 6.31 | 1.27 × 10−1
ABC | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 74.29 ± 10.42 | 3.97 × 10−1
ABC | Glass Identification | 10 | 1 | 10 | 0.5 | 91.72 ± 3.48 | 9.34 × 10−2
WOA | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 83.96 ± 3.34 | 2.15 × 10−1
WOA | Banknote Authentication | 10 | 1 | 10 | 0.5 | 98.67 ± 0.80 | 2.35 × 10−2
WOA | Breast Cancer | 10 | 1 | 10 | 0.5 | 95.08 ± 1.35 | 6.33 × 10−2
WOA | Car Evaluation | 20 | 1 | 25 | 0.9 | 91.22 ± 2.57 | 1.20 × 10−1
WOA | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 82.82 ± 6.25 | 2.43 × 10−1
WOA | Glass Identification | 10 | 1 | 10 | 0.5 | 92.50 ± 2.61 | 9.79 × 10−2
HHO | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 83.17 ± 5.35 | 2.34 × 10−1
HHO | Banknote Authentication | 10 | 1 | 10 | 0.5 | 97.53 ± 1.13 | 4.23 × 10−2
HHO | Breast Cancer | 10 | 1 | 10 | 0.5 | 94.43 ± 2.78 | 7.27 × 10−2
HHO | Car Evaluation | 20 | 1 | 25 | 0.9 | 91.47 ± 2.14 | 1.18 × 10−1
HHO | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 85.15 ± 5.34 | 2.18 × 10−1
HHO | Glass Identification | 10 | 1 | 10 | 0.5 | 91.67 ± 3.39 | 8.10 × 10−2
ChOA | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 71.40 ± 10.97 | 4.00 × 10−1
ChOA | Banknote Authentication | 10 | 1 | 10 | 0.5 | 98.73 ± 0.71 | 2.73 × 10−2
ChOA | Breast Cancer | 10 | 1 | 10 | 0.5 | 95.71 ± 1.39 | 5.62 × 10−2
ChOA | Car Evaluation | 20 | 1 | 25 | 0.9 | 84.43 ± 7.46 | 2.16 × 10−1
ChOA | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 69.64 ± 9.35 | 3.57 × 10−1
ChOA | Glass Identification | 10 | 1 | 10 | 0.5 | 93.05 ± 3.53 | 8.77 × 10−2
BBO | Australian Credit Approval | 10 | 1 | 10 | 0.5 | 84.38 ± 1.91 | 1.79 × 10−1
BBO | Banknote Authentication | 10 | 1 | 10 | 0.5 | 99.50 ± 0.37 | 2.28 × 10−3
BBO | Breast Cancer | 10 | 1 | 10 | 0.5 | 95.94 ± 1.15 | 3.42 × 10−2
BBO | Car Evaluation | 20 | 1 | 25 | 0.9 | 93.32 ± 1.98 | 9.23 × 10−2
BBO | Diabetic Retinopathy | 10 | 1 | 10 | 0.5 | 89.91 ± 3.19 | 1.41 × 10−1
BBO | Glass Identification | 10 | 1 | 10 | 0.5 | 93.07 ± 3.30 | 5.57 × 10−2
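The appendix tables evaluate the same 25 orthogonal combinations of (M, k, k_soma, θ_soma) on every dataset, and Table 3 keeps the combination with the lowest average MSE. A minimal sketch of such a sweep is shown below; train_and_eval is a hypothetical callable that trains the dendritic neural model with a chosen optimizer and returns mean test MSE and accuracy over repeated runs, and the level values are read from the appendix tables.

```python
# Parameter levels appearing in the orthogonal arrangement of the appendix tables.
M_LEVELS      = [1, 5, 10, 15, 20]
K_LEVELS      = [1, 5, 10, 15, 25]
K_SOMA_LEVELS = [1, 5, 10, 15, 25]
THETA_LEVELS  = [0.1, 0.3, 0.5, 0.7, 0.9]

def parameter_sweep(design_rows, train_and_eval):
    """design_rows: 25 (M, k, k_soma, theta_soma) tuples taken from the L25 design.
    train_and_eval: hypothetical callable returning (mean_mse, mean_accuracy)."""
    results = []
    for M, k, k_soma, theta_soma in design_rows:
        mse, acc = train_and_eval(M, k, k_soma, theta_soma)
        results.append(((M, k, k_soma, theta_soma), mse, acc))
    # The best user-defined setting is the one with the smallest mean MSE,
    # as reported in Table 3.
    best_setting, best_mse, best_acc = min(results, key=lambda r: r[1])
    return best_setting, results
```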
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
