Article

CDDO–HS: Child Drawing Development Optimization–Harmony Search Algorithm

1 Information Systems Engineering Department, Erbil Technical Engineering College, Erbil Polytechnic University, Erbil 44001, Iraq
2 Department of Computer Science, College of Sciences, Charmo University, Sulaymaniyah 46023, Iraq
3 Computer Science and Engineering Department, University of Kurdistan Hewler, Erbil 44001, Iraq
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5795; https://doi.org/10.3390/app13095795
Submission received: 27 March 2023 / Revised: 28 April 2023 / Accepted: 29 April 2023 / Published: 8 May 2023

Abstract: Child drawing development optimization (CDDO) is a recent metaheuristic algorithm. It is inspired by children's learning behavior and cognitive development, with the golden ratio employed to optimize the aesthetic value of their artwork. Unfortunately, CDDO suffers from low performance in the exploration phase, and its local best solution stagnates. Harmony search (HS) is highly competitive relative to other prevalent metaheuristic algorithms, as its exploration-phase performance on unimodal benchmark functions is outstanding. To address these issues, we present CDDO–HS, a hybridization of the standard CDDO and HS. The proposed hybrid model consists of two phases. First, the pattern size (PS) is relocated to the algorithm's core, and the initial pattern size is set to 80% of the total population size. Second, the standard harmony search (HS) is applied to the pattern size (PS) in the exploration phase to enhance and update the solution after each iteration. Experiments are evaluated using two distinct standard benchmark sets: the classical test functions, comprising 23 common functions, and 10 CEC-C06 2019 functions. Additionally, the proposed CDDO–HS is compared to CDDO, HS, and six other widely used algorithms. Using the Wilcoxon rank-sum test, the results indicate that CDDO–HS outperforms the alternative algorithms.

1. Introduction

The term “optimization” refers to the method used to pick the optimal solution from a pool of possibilities. Each process may be optimized, and many difficult issues in management, economics, science, and technology can be cast as optimization problems [1]. Due to the computational effort required by conventional numerical optimization methods, it is most likely not possible to execute a thorough search of the optimal solution space for challenging engineering optimization problems. In the past two decades, there has been growing interest in the development of alternatives to traditional, mathematically derived, and gradient-based optimization strategies. This interest stems from a desire to discover more efficient methods for attaining optimal results [2]. In this scenario, heuristic optimization algorithms are better approaches for resolving difficult engineering optimization problems [3]. Due to this, heuristic optimization approaches have been developed, which can be applied to problems that derivative methods are unable to solve. As a result, a multitude of heuristic and metaheuristic algorithmic techniques has been created [4].
Metaheuristics are effective optimization strategies that have gained popularity in a variety of disciplines, including engineering, finance, and applied science [5,6]. The vast majority of cutting-edge metaheuristics were developed in the years leading up to the year 2000, when these algorithms were still considered to be “classical” metaheuristic algorithms [7]. The most common classical metaheuristics include genetic algorithms (GAs), introduced by Goldberg in 1988, which are a population-based approach that mimics natural selection and genetics to search for optimal solutions [8]. Kennedy and Eberhart proposed particle swarm optimization (PSO) in 1995, which uses a swarm of moving particles in the search space to find the best solution [9]. Kirkpatrick et al. introduced simulated annealing (SA) in 1983 as a probabilistic method that employs the physical process of annealing to seek out the optimal solution [10]. Tabu search (TS), introduced by Glover in 1986, is a local search-based metaheuristic that uses a taboo list to avoid returning to recently visited solutions [11]. Ant colony optimization (ACO), proposed by Dorigo and Stützle in 1992, uses a population of artificial ants that deposit pheromones to communicate and find the optimal solution [12]. Notwithstanding the successes of classical metaheuristic algorithms, novel and innovative evolutionary techniques have been developed lately. Throughout this age, metaheuristic algorithm research has resulted in the introduction of a substantial number of new metaheuristics inspired by evolutionary or behavioral processes. This new wave of metaheuristic techniques frequently delivers the best solutions for some benchmark problem sets that have yet to be addressed [13]. These metaheuristics have been applied to various optimization problems and have demonstrated their effectiveness and efficiency in finding near-optimal solutions in a reasonable amount of time [14].
Based on the criteria, there are many classifications of inspired metaheuristic algorithms [15,16,17,18]. Therefore, according to [18], five groups can be used to broadly categorize inspired metaheuristic algorithms:
  • Evolution-based methods, such as the genetic algorithm (GA) [8], memetic algorithm (MA) [19], genetic programming (GP) [8], biogeography-based optimizer (BBO) [20], and virulence optimization algorithm (VAO) [21].
  • Swarm-based methods, such as krill herd (KH) [22], fitness-dependent optimizer (FDO) [23], ant colony optimization (ACO) [12], bacterial foraging behavior (BFO) [24], particle swarm optimization (PSO) [9], cuckoo optimization algorithm (COA) [25], dolphin optimization algorithm (DOA) [26], dragonfly algorithm (DA) [27], bat algorithm (BA) [28], group search optimizer (GSO) [29], ant nesting algorithm (ANA) [30], and donkey and smuggler optimization (DSO) [31].
  • Physics-based methods, such as black hole (BH) [32], ray optimization (RO) [33], charged system search (CSS) [34], simulated annealing (SA) [10], big-bang–big-crunch (BBCB) [35], gravitational local search (GLSA) [36], central force optimization (CFO) [37], thermal exchange optimization (TEO) [38], and the water strider algorithm (WSA) [39].
  • Human-based methods, such as the harmony search (HS) [40], group learning algorithm (GLA) [41], firework algorithm (FA) [42], league championship algorithm (LCA) [43], learner performance-based (LPB) [44], interior search algorithm (ISA) [45], mine blast algorithm (MBA) [46], soccer league competition (SLC) [47], exchange market algorithm (EMA) [48], and the social-based algorithm (SBA) [49].
  • Natural disaster algorithms, such as the earthquake algorithm (EA) [50].
Child drawing development optimization (CDDO), a human-based metaheuristic algorithm, is one of the latest metaheuristic algorithms. CDDO is based on the behavior of children learning to draw at an early age [51]. This algorithm demonstrated superior performance in locating the global optimum for optimization problems tested using classical benchmark functions. Its results were compared to multiple cutting-edge algorithms, such as PSO (particle swarm optimization) [9], DE (differential evolution) [52], the WOA (whale optimization algorithm) [53], GSA (gravitational search algorithm) [36], and FEP (fast evolutionary programming) [54]. This evaluation demonstrated that CDDO is exceptionally resilient when acquiring a new solution. It also employs a random search mechanism to change positions in search of the best solution [13]. According to [51], CDDO performs well in terms of convergence time and balancing exploration and exploitation.
Although the method performs well compared to other algorithms, several restrictions affect its performance in particular circumstances. When an algorithm becomes trapped in local optima, for instance, it finds a solution that is optimal within a specific region of the search space but not necessarily the best solution overall. As the encircling mechanism is used in the search space, CDDO's ability to escape local solutions is poor, and both its convergence and speed are inefficient. This means that CDDO has issues balancing exploration and exploitation.
In addition, the encircling mechanism fails to improve the best solution effectively, so the CDDO exploitation phase must be enhanced to obtain better solutions [14,51].
The following are the primary motivations for hybridizing CDDO with HS in this paper:
  • CDDO employs an encircling search mechanism and thus struggles to avoid local optima.
  • CDDO performs poorly during the exploitation phase.
  • Improving the CDDO solution after each iteration is insufficient.
The authors devised the hybridized algorithm as a solution to the CDDO concerns described previously, drawing on the standard harmony search algorithm's strong exploitation performance on multimodal benchmark functions. Consequently, the authors adopted a hybrid approach that combines CDDO with global HS to provide higher performance in the exploitation phase, particularly when tested on unimodal benchmark functions. Thus, the purpose of this paper is to propose a hybridized approach that overcomes CDDO's problems by employing two effective mechanisms. The first mechanism for improving CDDO performance updates the pattern memory iteratively by using the standard HS mechanism during the exploitation phase and comparing new experiences (pattern memory) with the current child's drawings. The second saves the best solution of each iteration and then compares each new solution to the best solution in the exploration phase; if the outcome is better than the best solution, the child's drawings are changed, and otherwise they remain as they are. This hybridization combines two completely different algorithm mechanisms, CDDO and HS, and adds an update condition during the exploitation phase. The child's drawings can then be updated during the exploitation phase using the HS approach. As a result, CDDO–HS is a newly proposed hybridization that enhances CDDO's performance.
This paper is organized as follows: In Section 2 and Section 3, we sequentially describe CDDO and HS with their mechanisms. Section 4 describes the novelty and contribution of our work. Section 5 presents our proposed approach, CDDO–HS, including a detailed description of the hybridized algorithm. We provide experimental results and analyses in Section 6, assessed with two benchmark test functions (the classical and CEC 2019 benchmark functions), and compare it to other recent algorithms, such as FOX (FOX-inspired optimization algorithm) [55], Choa (chimp optimization algorithm) [56], BOA (butterfly optimization algorithm) [57], DCSO (dynamic cat swarm optimization algorithm) [58], WOA–BAT [59] (hybrid WOA (whale optimization algorithm) [53] with BAT (bat algorithm optimization) [28]), and GWO–WOA [60] (hybrid GWO (grey wolf optimization) [61] with WOA (whale optimization algorithm) [53]). Finally, Section 7 provides the conclusions and prospective research recommendations.

2. CDDO

CDDO was developed by Sabat Abdulhameed and Tarik A. Rashid in 2021 as a metaheuristic strategy for solving single-objective optimization problems. CDDO takes inspiration from children's natural tendencies to learn and grow intellectually and applies the golden ratio to their work to maximize its aesthetic value [14]. CDDO uses the golden ratio and mimics cognitive development and the steps a child takes to improve from unskilled scribbling to proficient pattern drawing. The "golden ratio", a mathematical relationship between any two consecutive numbers in the Fibonacci sequence, is ubiquitous in nature, art, architecture, and design. If a child's hand pressure is adjusted for width, length, and the golden ratio, the resulting drawing becomes more aesthetically pleasing. This fosters a child's natural development, raises their intelligence, and teaches them how to work towards a common goal [14,51]. Table 1 shows the formulation of CDDO optimization [51].
This algorithm is divided into five phases [51]:
  • Stage one: The scribble initialization, as shown below.
$X(i,j)=\begin{bmatrix} X(1,1) & X(1,2) & \cdots & X(1,j)\\ X(2,1) & X(2,2) & \cdots & X(2,j)\\ X(3,1) & X(3,2) & \cdots & X(3,j)\\ \vdots & \vdots & \ddots & \vdots\\ X(i,1) & X(i,2) & \cdots & X(i,j) \end{bmatrix}$ (1)
In the equation above, i stands for the population size and j stands for the number of variable dimensions. Within the allowed range, all population member dimensions can be set freely.
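The initialization above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' MATLAB implementation; the names pop_size, dims, lb, and ub are our own.

```python
import random

# Sketch of Stage one (Equation (1)): initialize a population of i drawings,
# each with j dimensions, uniformly at random inside the allowed bounds.
def init_population(pop_size, dims, lb, ub, seed=0):
    rng = random.Random(seed)
    return [[rng.uniform(lb, ub) for _ in range(dims)] for _ in range(pop_size)]

population = init_population(pop_size=5, dims=3, lb=-10.0, ub=10.0)
print(len(population), len(population[0]))  # 5 drawings, 3 dimensions each
```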
  • Stage two: Exploitation
This stage teaches the child to control movement and direction. Equation (2) generates the RHP, a random number used to evaluate the current solution's hand pressure (HP). This number lies between the lower boundary of the problem (LB) and the upper boundary (UB). HP denotes the pressure applied with the hand, and j refers to the parameters of the solution; the HP is selected using Equation (3).
$RHP = \mathrm{RAND}(LB,\ UB)$ (2)
$HP = X(i,\ \mathrm{RAND}(j))$ (3)
X(i, RAND(j)) represents the hand pressure of the current solution i, taken from one of its several drawing dimensions (j). RAND generates a random number between two or more values.
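Equations (2) and (3) can be sketched as follows. This is a minimal illustration with made-up bounds; variable names are our assumptions.

```python
import random

# Sketch of Equations (2)-(3): RHP is a random reference value between the
# problem bounds, and HP is the value of one randomly chosen dimension of
# the current drawing, X(i, RAND(j)).
rng = random.Random(1)
LB, UB = -10.0, 10.0
drawing = [rng.uniform(LB, UB) for _ in range(4)]  # one solution X(i, :)

RHP = rng.uniform(LB, UB)                  # Equation (2)
HP = drawing[rng.randrange(len(drawing))]  # Equation (3)
print(LB <= RHP <= UB, HP in drawing)
```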
  • Stage three: The golden ratio
The golden ratio (GR) is also utilized to update and enhance the efficacy of the solution. The ratio of a child’s artwork’s width to its length is one of the elements that are considered when determining how to solve a problem (see Equation (4)). Using Equation (5), each of these two elements can be chosen at random from among all the problem factors.
$X_{GR}(i) = \dfrac{X_L(i,M) + X_W(i,N)}{X_L(i,M)}$ (4)
To calculate the golden ratio for each population member (i), based on a random selection of two drawing dimensions, the first and second values are added together and the result is divided by the first value, as in Equation (4).
$X_L(i,M),\ X_W(i,N) = X(i,\ \mathrm{RAND}(j)), \quad X_L(i,M) \neq X_W(i,N)$ (5)
To ensure that the first and second drawings were different, we utilized Equation (5) to choose the drawings (dimension) (j) randomly from a population (i).
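The selection and ratio computation in Equations (4) and (5) can be sketched as below; `rng.sample` guarantees the two chosen dimensions differ, matching the condition in Equation (5). This is an illustrative sketch only.

```python
import random

# Sketch of Equations (4)-(5): choose two distinct dimensions M and N of
# drawing i at random and form the quotient (X_L + X_W) / X_L.
def golden_ratio(drawing, rng):
    m, n = rng.sample(range(len(drawing)), 2)  # X_L(i,M) != X_W(i,N)
    x_l, x_w = drawing[m], drawing[n]
    return (x_l + x_w) / x_l

rng = random.Random(2)
gr = golden_ratio([3.0, 5.0, 8.0], rng)
print(gr)
```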
The child now uses the knowledge and abilities gained from previous experiences and criticism, trying to analyze the patterns in real pictures, attempting to give the drawings some sort of significance, and honing their drawing skills through copying, practicing, and being passionate about their work (trial and error).
To start implementing these behaviors, the child's competency is first measured through their hand pressure (HP). If the current hand pressure is lower than the RHP, the solution is updated using Equation (6), which takes into consideration both the child's skill rate (SR) and level rate (LR).
In Equation (6), $X_i^{lbest}$ is the child's best drawing thus far, and $X_i^{gbest}$ is the children's consensus on the globally best solution, as determined by the conditions in their surroundings. In addition, as mentioned previously, the golden ratio (GR) is the proportion of the length (L) to the width (W) of a child's drawing.
$X_{i+1} = GR + SR\,(X_i^{lbest} - X_i) + LR\,(X_i^{gbest} - X_i)$ (6)
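The exploitation update of Equation (6) can be sketched per dimension as below. All numeric values are illustrative, not from the paper.

```python
# Sketch of Equation (6): move drawing i towards its personal best (lbest)
# and the global best (gbest), scaled by the skill rate SR and level rate LR,
# with the golden ratio GR added as an offset.
def update_low_hp(x, lbest, gbest, gr, sr, lr):
    return [gr + sr * (lb - xi) + lr * (gb - xi)
            for xi, lb, gb in zip(x, lbest, gbest)]

x_next = update_low_hp(x=[1.0, 2.0], lbest=[1.5, 2.5], gbest=[2.0, 3.0],
                       gr=0.1, sr=0.8, lr=0.7)
print(x_next)
```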
  • Stage four: Creativity
Children gain creativity and skills through experience and observation, and every artwork benefits from creativity. At this level, the child revises the golden ratio solutions. Regrettably, the solution has no meaningful hand pressure, indicating that the child's talents are not yet established and need to be improved upon by utilizing the creative factor and the golden ratio. Every child remembers the best learning methods and attempts to reproduce them to progress. Each algorithm solution has a pattern memory (PM), the size of which is determined by the problem. Using a random solution from the PM array to update underperforming solutions can boost the algorithm's convergence rate and sustain the children's learning over a long period.
Equation (7) uses the creative factor (CR) and the pattern memory (PM) to update the current solution and converge towards the ideal solution; the creative factor, CR = 0.1, improves performance. Later on, the child's SR and LR can both be set to a low value (between 0 and 0.5), indicating that the child has a poor skill rate and inaccurate knowledge, but that their originality and pattern memory can grow.
$X_{i+1} = X_i^{PM} + SR\,(X_i^{gbest})$ (7)
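The creativity update of Equation (7) can be sketched as below. How the creative factor CR = 0.1 enters the update is our assumption (folded into the low SR setting); the numbers are purely illustrative.

```python
import random

# Sketch of Equation (7): refresh an underperforming drawing from a randomly
# chosen pattern-memory entry (X_i^PM) plus a skill-rate-scaled term from
# the global best drawing.
def creativity_update(pattern_memory, gbest, sr, rng):
    x_pm = rng.choice(pattern_memory)  # random drawing from pattern memory
    return [pm + sr * gb for pm, gb in zip(x_pm, gbest)]

rng = random.Random(3)
pm = [[0.5, 0.5], [1.0, 1.0]]
x_next = creativity_update(pm, gbest=[2.0, 2.0], sr=0.3, rng=rng)
print(x_next)
```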
  • Stage five: Pattern Memory
This stage pertains to adding more detail and increasing precision, comparing the results to the most accurate drawings possible while drawing on prior experience and expertise. The current drawing can have an accurate golden ratio but be unaffected by the hand pressure, because the algorithm randomly selects one of the child's best drawings as the basis for the update. This stage emphasizes drawing finer details. The algorithm applies the agent's own optimal updating behavior: if better solutions exist, the population's global best solution is revised, and the pattern memory is likewise updated with each iteration's global best. CDDO's pseudocode demonstrates how the CDDO algorithm works [51] (see Algorithm 1):
Algorithm 1: Child drawing development optimization (CDDO) algorithm
Start
Create a population of children’s drawings Xi (i = 1, 2, …, j)
Calculate the fitness of each drawing, then create personal and global benchmarks.
Compute each drawing’s golden ratio using Equation (4)
Create a pattern memory (PM) array
Select a pattern memory index at random
While (t < maximum number of iterations)
Utilize Equation (2) to calculate RHP
Choose the hand pressure at random using Equation (3)
If (hand pressure (HP) is low)
Utilize Equation (6) to update the drawings
Set LR and SR to high (0.6–1)
Else if (XiGR is near to the golden ratio (GR))
Consider the learned patterns, LR and SR, using Equation (7)
Set LR and SR to low (0–0.5)
End if
Evaluate the fitness function of cost values
Update local (personal) best
Update global best
Update pattern memory (PM)
Store the best cost value
Increase t
End while
Return global best
End

3. HS

The harmony search algorithm is a metaheuristic optimization technique inspired by the process of musical improvisation, where a musician generates a melody that follows a certain harmony. The HS algorithm was first proposed by Geem et al. in 2001 [40], and has since been applied to solve various optimization problems in engineering, finance, and other fields. The HS algorithm simulates a group of musicians who generate new melodies by improvising and adjusting their playing style based on their previous experiences and the harmony of the melody. The algorithm searches for the optimal solution by adjusting the harmony among the variables of the problem. The HS algorithm is effective in solving complex optimization problems, especially in situations where the problem space is continuous and high-dimensional. The HS algorithm has been used in various applications, such as design optimization, image processing, and signal processing. Several modifications and extensions of the HS algorithm have also been proposed to improve its performance and applicability [62,63,64,65].
The initialized population members of the HS algorithm are independent in each dimension and fall within the allowable range. The algorithm creates only one new member per iteration. Each dimension of the new point is then generated from the solutions in the harmony memory (HM), using either a memory consideration rule with a pitch adjustment factor or a random reinitialization within the permitted range. The new solution is compared to the population member with the highest cost function value, and if the new solution has a lower cost, that member is replaced. This process is repeated until one of the termination criteria is satisfied [40,66].
The following is a description of the HS algorithm sequence [67,68]:
  1. Define the cost function f(x) that needs to be minimized to achieve the algorithm's objective.
  2. Set the parameters as shown below to begin.
$HM = \begin{bmatrix} X_1^{1} & X_2^{1} & \cdots & X_j^{1}\\ X_1^{2} & X_2^{2} & \cdots & X_j^{2}\\ X_1^{3} & X_2^{3} & \cdots & X_j^{3}\\ \vdots & \vdots & \ddots & \vdots\\ X_1^{HMS} & X_2^{HMS} & \cdots & X_j^{HMS} \end{bmatrix}$ (8)
The population size in the above equation is the harmony memory size (HMS), and the number of variable dimensions is j. All population member dimensions can be set at random within the authorized range. The initial values of the harmony memory consideration rate (HMCR) and pitch adjustment rate (PAR) are often set to 0.995 and 0.1, respectively.
  3. Generate a new point $X^{new} = (X_1^{new}, X_2^{new}, X_3^{new}, \dots, X_j^{new})$ by performing the following:
For each of the j dimensions, a corresponding member dimension is chosen from the harmony memory with probability HMCR; otherwise, the value of the new point is chosen at random from the authorized range:
$X_i^{new} = \begin{cases} X_i^{new} \in \{X_i^{1}, X_i^{2}, X_i^{3}, \dots, X_i^{HMS}\} & \text{if } \mathrm{rand}(0,1) \le HMCR\\ X_i^{new}\ \text{chosen at random from the allowed range} & \text{otherwise} \end{cases}$ (9)
$X_i^{new} = X_i^{new} + \mathrm{RAND}(-1,1) \times bw, \quad \text{if } \mathrm{rand}(0,1) \le PAR, \ \text{where } bw = 0.04$ (10)
  4. If the new harmony vector $X^{new}$ has a lower cost, replace the worst member of the population with it.
  5. Verify the termination criteria; if they are not satisfied, return to step three; otherwise, the optimum point has been identified. The HS pseudocode demonstrates how the HS algorithm works [69] (see Algorithm 2):
Algorithm 2: Harmony search (HS) algorithm
Begin
Define objective function f(x), x = (x1, x2, …, xj)
Define harmony memory considering rate (HMCR)
Define pitch-adjusting rate (PAR) and other parameters
Generate harmony memory with random harmonies
While (t < max number of iterations)
While (i ≤ number of variables)
If (RAND < HMCR)
Choose a value from HM for variable i (Equation (9))
If (RAND < PAR)
Adjust the value by adding a certain amount (Equation (10))
End if
Else
Choose a random value
End if
End while
Accept the new harmony (solution) if better
End while
Find the current best solution
End
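The improvisation step of Algorithm 2 (Equations (9) and (10)) can be sketched as follows. This is a minimal illustration using the parameter values given in the text (HMCR = 0.995, PAR = 0.1, bw = 0.04); the function and variable names are our own.

```python
import random

# Sketch of one HS improvisation step: each dimension of the new harmony is
# drawn from harmony memory with probability HMCR, optionally pitch-adjusted
# with probability PAR and bandwidth bw, and otherwise re-randomized in
# [lb, ub].
def improvise(hm, lb, ub, rng, hmcr=0.995, par=0.1, bw=0.04):
    dims = len(hm[0])
    new = []
    for i in range(dims):
        if rng.random() <= hmcr:
            x = rng.choice(hm)[i]                # memory consideration
            if rng.random() <= par:
                x += rng.uniform(-1, 1) * bw     # pitch adjustment
        else:
            x = rng.uniform(lb, ub)              # random reinitialization
        new.append(x)
    return new

rng = random.Random(4)
hm = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
print(improvise(hm, lb=-5.0, ub=5.0, rng=rng))
```

In the full algorithm, the improvised harmony replaces the worst member of the memory whenever its cost is lower.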

4. The Novelty and Contribution

The goal of our study is to introduce a novel hybrid metaheuristic algorithm for optimization, combining two existing approaches: child drawing development optimization (CDDO) and global harmony search (HS). By combining the best features of CDDO and HS, the suggested hybrid algorithm, CDDO–HS, improves upon the optimization process. Our most significant contribution is a novel hybridization method for the efficient joint use of CDDO and conventional HS. More specifically, we propose a hybridization strategy that integrates CDDO's local search capabilities with standard HS's global exploration potential. With this method, the solution space can be searched efficiently, resulting in faster convergence and better solutions overall.
The suggested CDDO–HS algorithm is also subjected to a thorough performance evaluation on a variety of benchmark optimization tasks. In terms of solution quality, convergence speed, and robustness, the CDDO–HS algorithm is shown to excel in experimental settings, compared to both CDDO and baseline HS. The CDDO–HS algorithm outperforms many other state-of-the-art metaheuristic algorithms, including some that were developed relatively recently. These include the FOX (FOX-inspired optimization algorithm), Choa (chimp optimization algorithm), BOA (butterfly optimization algorithm), DCSO (dynamic cat swarm optimization algorithm), and GWO–WOA (hybrid grey wolf optimization with whale optimization algorithm).

5. CDDO–HS

Building on the previous sections on CDDO and the standard HS, this section describes the proposed approach, which combines CDDO and HS to improve the efficiency of CDDO during the exploitation phase. In general, the standard CDDO is capable of locating the optimal solution; unfortunately, it does not sufficiently refine that solution with each iteration. Hence, CDDO was hybridized with global HS to enhance its performance, and the resulting algorithm was named CDDO–HS. The CDDO was hybridized by including two strategies:
First, we moved the pattern size (PS) to the algorithm’s core, where it changed with each iteration. The CDDO algorithm assumed that drawing skills from childhood to adolescence are stored in memory once and do not need to be refreshed, which is incorrect. As a result, children’s drawings are regenerated as they draw and throughout their life, which means that the pattern size is constantly renewed and aids the child in the subsequent stages of drawing. In each sketching session, the child learns new skills that are far superior to and distinct from previous sessions.
To achieve the best exploration, we initially set the pattern size to 80% of the overall population size. We then sought a mechanism to update the pattern size within CDDO and, after multiple experiments with different algorithms, adopted the global harmony search technique for this update. Moreover, the harmony search algorithm offers various advantages, such as its balance of exploration and exploitation, adaptability, simplicity, and resilience; hybridizing HS with other algorithms can help overcome its limits and improve its performance.
These modifications resulted in improved performance for obtaining the optimum fitness function. Algorithm 3 and Figure 1 show the CDDO–HS pseudocode and flowchart, respectively.
Algorithm 3: Child-drawing-development-optimization–harmony-search-based hybrid algorithm
Start
Create a population of children’s drawings Xi (i = 1, 2, …, j)
Calculate the fitness of each drawing, then create personal and global benchmarks.
Compute each drawing’s golden ratio using Equation (4)
Create an array of pattern memory (PM)
Select a pattern memory index at random
While (t < maximum number of iterations)
While (i ≤ number of variables)
If (RAND < HMCR)
Choose a value from the updated pattern memory for variable i, using Equation (9)
If (RAND < PAR)
Increase the value by a given amount, using Equation (10)
End if
Else
Choose a random value
End if
End while
Update the pattern memory
Utilize Equation (2) to calculate the RHP
Choose the hand pressure at random using Equation (3)
If (hand pressure (HP) is low)
Utilize Equation (6) to update the drawings
Set LR and SR to high (0.6–1)
Else if (XiGR is near to the golden ratio (GR))
Consider the learned patterns, LR and SR, using Equation (7)
Set LR and SR to low (0–0.5)
End if
Evaluate the fitness function of cost values
Update local (personal) best
Update global best
Update pattern memory (PM)
Store the best cost value
Increase t
End while
Return global best
End
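The core hybridization idea of Algorithm 3 can be sketched as below: per iteration, a pattern memory sized at 80% of the population is seeded from the best drawings and refreshed with an HS-style improvisation before the usual CDDO updates run. The function structure and names here are our assumptions, not the authors' reference implementation.

```python
import random

# Minimal sketch of the CDDO-HS pattern-memory refresh (assumed structure).
def refresh_pattern_memory(population, fitness, lb, ub, rng,
                           hmcr=0.995, par=0.1, bw=0.04):
    ps = max(1, int(0.8 * len(population)))       # pattern size = 80% of pop
    ranked = sorted(range(len(population)), key=lambda k: fitness[k])
    pm = [population[k][:] for k in ranked[:ps]]  # best drawings seed the PM
    new = []
    for i in range(len(pm[0])):                   # HS improvisation over PM
        if rng.random() <= hmcr:
            x = rng.choice(pm)[i]
            if rng.random() <= par:
                x += rng.uniform(-1, 1) * bw
        else:
            x = rng.uniform(lb, ub)
        new.append(x)
    pm[-1] = new  # the improvised drawing replaces the weakest kept entry
    return pm

rng = random.Random(5)
pop = [[rng.uniform(-10, 10) for _ in range(3)] for _ in range(10)]
fit = [sum(v * v for v in x) for x in pop]   # e.g., sphere cost per drawing
pm = refresh_pattern_memory(pop, fit, lb=-10.0, ub=10.0, rng=rng)
print(len(pm))  # 8 entries: 80% of a population of 10
```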

6. Testing, Results, and Discussion

To verify the efficacy of a novel method and determine meaningful comparisons to state-of-the-art and classical algorithms, optimization researchers employ a conventional approach. In optimization research, this method is frequently employed to assess candidates for optimization on a sizable test set. Therefore, the CDDO–HS method was implemented and examined with 23 classical benchmark functions and 10 benchmark functions from CEC 2019. The parts that follow detail the CDDO–HS’s evaluations in comparison to other metaheuristic algorithms, as well as its benchmark functions, experimental results, evaluations, and statistical data.

6.1. Benchmark Functions

To ensure the accuracy of our proposed CDDO–HS, we tested it using two sets of standard benchmark functions. The first set comprised 23 different classical functions; the second comprised the CEC-C06 2019 benchmark functions. These include multimodal, unimodal, expanded multimodal, and hybrid composition functions. These standard test functions can be viewed in [23].

6.2. Experimental Setup

MATLAB R2020a was used to implement the code on Windows 11. To achieve a better and more accurate outcome, the initial population was randomly selected. The parameters for the implementation were set as follows:
  • Population size = 40.
  • The number of iterations = 500.
  • Number of algorithm runs = 30.

6.3. Evaluation Criteria

CDDO–HS could be evaluated in a variety of ways. The following were the evaluation points:
  • The averages and standard deviations were displayed.
  • Comparing CDDO–HS to CDDO.
  • Comparing CDDO–HS to HS.
  • Comparing CDDO–HS to other metaheuristic algorithms (FOX, Choa, DCSO, WOA–BAT, and GWO–WOA).
  • Making a box-and-whisker plot to compare CDDO, HS, and CDDO–HS.

6.4. CDDO–HS vs. CDDO and HS

To validate the performance of CDDO–HS, two types of benchmark functions were used: the classical benchmark functions and CEC-C06 2019. The average (Ave), probability value (p-value), and standard deviation (Std) of the obtained results were utilized to evaluate performance.
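The evaluation statistics can be sketched as follows: the average, the standard deviation, and a Wilcoxon rank-sum p-value computed via the usual normal approximation (our choice here, assuming no tied observations). The sample run values are made up purely for illustration.

```python
import math

# Sketch of the evaluation statistics: Ave, Std, and a two-sided
# normal-approximation Wilcoxon rank-sum p-value (no-ties assumption).
def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
    return m, math.sqrt(var)

def ranksum_p(a, b):
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    r_a = sum(i + 1 for i, (_, src) in enumerate(pooled) if src == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2                      # expected rank sum
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # its std deviation
    z = (r_a - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

runs_a = [0.11, 0.09, 0.10, 0.12, 0.08]  # hypothetical best costs, alg. A
runs_b = [0.31, 0.28, 0.33, 0.30, 0.29]  # hypothetical best costs, alg. B
ave, std = mean_std(runs_a)
p = ranksum_p(runs_a, runs_b)
print(ave, std, p < 0.05)  # significant difference between the two samples
```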

6.4.1. Classical Benchmark Functions

The classical benchmark functions are a set of standard functions used to evaluate the performance of optimization algorithms; they are commonly used in the field of optimization and have been studied extensively [70]. Functions f1–f7 are called unimodal functions because they have only one global optimum and no local optima; having a single solution, they are well suited to assessing exploitation (see Table A1, Table A2 and Table A3). Using these unimodal functions, it was therefore possible to assess the exploitation capabilities of CDDO–HS. Table 2 demonstrates that, compared to CDDO, CDDO–HS had improved exploitation capabilities in two of the seven functions. We also found that all seven results were significant when we compared this hybrid algorithm to the HS algorithm. Results improving on the CDDO algorithm are denoted in boldface type, while those improving on the HS algorithm are underlined.
To evaluate our suggested algorithm's performance in terms of exploration, we used the multimodal functions F8 to F13 (see Table 1 and Table A2) to analyze local optimum avoidance. Multimodal functions test the ability to escape local optima and continue the search across a wide range of regions of the search space; in other words, they gauge how thoroughly an algorithm investigates a broad set of local optima. Five out of six multimodal functions (Table 2) suggested that CDDO–HS performed better; hence, it can be asserted that CDDO–HS enhanced CDDO's exploratory capabilities. When the results of this method were compared to those of the HS algorithm, six out of seven outcomes were significant.
The third section of these benchmark functions comprised fixed-dimension functions F14 to F19 (see Table 3 and Table A3). Avoiding local minima is one of the most difficult challenges, because only an optimal balance between exploration and exploitation allows local minima to be avoided to some extent. Due to their large number of local minima, the multimodal fixed-dimension benchmark functions were employed to test the algorithm's ability to avoid them. The numerical results for benchmark functions F14 to F19 showed that CDDO–HS outperformed the CDDO and standard HS algorithms in terms of local minima avoidance, except for F14 and F18, where the CDDO–HS algorithm performed worse than the other algorithms. Overall, these remarkable findings showed that CDDO–HS competes with other cutting-edge metaheuristic algorithms.
These results indicate that the algorithm avoids local minima by examining many promising regions of the design space and selecting the best among them. The mechanism behind this is that all search agents change abruptly in the early stages of the optimization process and then progressively converge toward the best solution; this strategy ensures that the cooperating search agents eventually settle on a single point in the search space.
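The behavior described above, large abrupt moves early on followed by gradual convergence, can be illustrated with a deliberately simplified sketch. This is only an illustration of the general exploration-to-exploitation schedule, not the authors' actual CDDO–HS update rule:

```python
import random

def minimize(func, dim, lb, ub, agents=20, iters=200, seed=1):
    # Greedy population search: every agent perturbs the best-so-far solution
    # with a step size that shrinks linearly over the run, so moves are abrupt
    # early (exploration) and tiny late (exploitation/convergence).
    rng = random.Random(seed)
    best = [rng.uniform(lb, ub) for _ in range(dim)]
    best_f = func(best)
    for t in range(iters):
        step = (ub - lb) * (1.0 - t / iters)  # decaying perturbation radius
        for _ in range(agents):
            cand = [min(ub, max(lb, v + rng.uniform(-step, step))) for v in best]
            f = func(cand)
            if f < best_f:
                best, best_f = cand, f
    return best_f

sphere = lambda x: sum(v * v for v in x)
print(minimize(sphere, dim=5, lb=-100.0, ub=100.0))  # far below the random start
```

As the radius decays, all candidate moves concentrate around one point, which is the convergence behavior the paragraph describes.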

6.4.2. CEC-C06 2019 Benchmark Test Functions

CEC-C06 2019 Benchmark Test Functions are a set of 10 mathematical functions commonly used to evaluate the performance of optimization algorithms. These functions were specifically designed for the IEEE Congress on Evolutionary Computation (CEC) 2019 competition to provide a more challenging and diverse set of test functions compared to previous years. These functions cover a wide range of optimization problems, including unimodal, multimodal, separable, and non-separable functions. Some of the functions also incorporate features such as rotation, scaling, and shifting to further increase their complexity. Each function is defined over a set of input variables and has a known global minimum or maximum, which allows for optimization algorithms to be evaluated based on their ability to find the optimal solution. Thus, they are widely used in the field of optimization and evolutionary computation to compare the performance of different algorithms and to benchmark new algorithms. They are also used to evaluate the effectiveness of various optimization techniques, such as metaheuristics, swarm intelligence, and genetic algorithms [71].
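The shift and rotation features mentioned above can be sketched as a transformation wrapper. This is illustrative only; the official CEC-C06 2019 suite defines its actual shift vectors and rotation matrices in accompanying data files:

```python
import math

def shifted_rotated_2d(base, shift, theta):
    # Evaluate base(R (x - shift)) for a 2-D rotation by angle theta:
    # shifting relocates the optimum, rotating couples the variables
    # (making a separable function non-separable).
    c, s = math.cos(theta), math.sin(theta)
    def wrapped(x):
        z0, z1 = x[0] - shift[0], x[1] - shift[1]
        return base([c * z0 - s * z1, s * z0 + c * z1])
    return wrapped

sphere = lambda x: sum(v * v for v in x)
g = shifted_rotated_2d(sphere, shift=[3.0, -2.0], theta=0.7)
print(g([3.0, -2.0]))  # 0.0 -- the global minimum moves to the shift vector
```

Because the rotation preserves distances, the known optimum value is unchanged, so algorithms can still be scored by their gap to it.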
The CDDO–HS algorithm was also evaluated using the CEC-C06 2019 benchmark functions. Table 3 and Figure 2 demonstrate that CDDO–HS outperformed CDDO in all functions. Similarly, when compared with the standard HS algorithm, CDDO–HS outperformed HS in all but three functions: F4, F5, and F6. Overall, CDDO–HS outperformed both CDDO and standard HS in seven of the ten benchmark functions.
Figure 3 and Figure 4 show the simulation results comparing CDDO–HS with CDDO and the standard HS method. The simulated convergence curves for the tested functions show that the convergence speed and optimization precision of CDDO–HS were the best of the three algorithms.

6.5. Statistical Analysis

To determine whether the differences in algorithm performance were significant, a statistical analysis was required. We therefore used the p-value to quantify the statistical significance of each result and to decide whether or not to reject the null hypothesis.
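The p-values reported below come from the Wilcoxon rank-sum test. As a stdlib-only sketch, the two-sided p-value can be computed with the normal approximation (library routines such as scipy.stats.ranksums implement the same test with additional refinements):

```python
import math

def ranksum_p(a, b):
    # Two-sided Wilcoxon rank-sum test, normal approximation.
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):                 # assign average ranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2.0 + 1.0   # ranks are 1-based
        i = j + 1
    w = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (w - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value

# clearly separated samples give a small p-value (reject H0 at 0.05)
print(ranksum_p(list(range(10)), list(range(100, 110))) < 0.05)  # True
```

A p-value below the chosen significance level (commonly 0.05) means the two samples of run results are unlikely to come from the same distribution.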
Table 4 reports the p-values used to assess significance for both benchmark suites: the classical benchmark functions and the CEC-C06 2019 benchmark functions.
The results showed that CDDO–HS performed significantly better than CDDO in 11 of the 19 classical benchmark functions and in all 10 functions of the CEC-C06 2019 benchmark.
Table 5 likewise reports the p-values comparing CDDO–HS with standard HS for both benchmark suites, to establish whether the results were significant. They show that CDDO–HS significantly outperformed standard HS in 16 of the 19 classical benchmark functions and in seven of the CEC-C06 2019 benchmark functions.

6.6. The CDDO–HS Algorithm in Context with Hybrid and Metaheuristic Approaches

Using the CEC-C06 2019 test functions, the CDDO–HS hybrid algorithm was compared with six recent hybrid and metaheuristic algorithms (ChOA, BOA, FOX, GWO–WOA, WOA–BAT, and DCSO).
According to Table 6, the proposed hybrid algorithm outperformed all six competing algorithms in six of the ten benchmarks (CEC04, CEC05, CEC07, CEC08, CEC09, and CEC10), while on CEC03 five of the competitors tied with CDDO–HS at 1.370E+01.
Moreover, when each of the six algorithms was compared with CDDO–HS individually, CDDO–HS gave superior results; a ranking sort was used to summarize the findings (see Table 7).
The ranking score can be used to compare the performance of the algorithms and to determine which would be most effective for a particular task. Here, a lower ranking score indicates better performance, and a higher score indicates worse performance.
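For instance, the ranking score is simply the mean 1-based rank position across the ten functions; using CDDO–HS's placements from Table 7 (4th, 5th, 6th, 1st, 1st, 4th, 1st, 1st, 1st, 1st) reproduces its Table 8 score:

```python
def ranking_score(placements):
    # Mean of the 1-based rank positions over all benchmark functions;
    # lower is better.
    return sum(placements) / len(placements)

cddo_hs_ranks = [4, 5, 6, 1, 1, 4, 1, 1, 1, 1]  # F1..F10 positions from Table 7
print(ranking_score(cddo_hs_ranks))  # 2.5
```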
Table 7 and Table 8 show that CDDO–HS achieved the best (lowest) ranking score of 2.5, followed by DCSO with 2.8 and FOX with 3.1, ahead of GWO–WOA (3.6), WOA–BAT (3.8), and ChOA (5.8). Finally, BOA had the worst ranking score of 6.0.

7. Conclusions

To summarize, both CDDO and HS, together with their algorithmic features, were discussed in detail, and the hybrid CDDO–HS technique was presented. Numerous experiments were carried out to evaluate its performance. On the 19 classical benchmark test functions, covering both exploitation and exploration, CDDO–HS outperformed CDDO and standard HS in 11 of the 19 functions. CDDO–HS was also evaluated on the CEC-C06 2019 benchmark functions, where it performed well: although standard HS produced better results in three of the ten functions, CDDO–HS outperformed standard HS in the remaining seven.
Additionally, CDDO–HS was statistically evaluated using the Wilcoxon rank-sum test and achieved significant overall results on both the classical and the CEC-C06 2019 benchmark functions. The CDDO–HS algorithm was then compared with the ChOA, BOA, FOX, GWO–WOA, WOA–BAT, and DCSO algorithms, and it outperformed all six in six of the ten benchmarks.
As a result, CDDO–HS's exploration capability was enhanced. In general, CDDO–HS improved the solution's quality with each iteration while avoiding the local optima.
Finally, the following potential research directions could be pursued in the future:
  • Applying the algorithm to real-world problems, such as medical, applied science, and engineering problems.
  • Hybridizing it with other methods to further improve the present results.
  • Tuning the standard HS parameters to improve the exploration and exploitation phases and thereby achieve better performance.

Author Contributions

Conceptualization, A.A.A., T.A.R. and S.A.; methodology, A.A.A., T.A.R. and S.A.; software, A.A.A.; validation, A.A.A. and T.A.R.; formal analysis, A.A.A. and T.A.R.; investigation, T.A.R.; resources, A.A.A.; data curation, A.A.A.; writing—original draft preparation, A.A.A.; writing—review and editing, T.A.R.; visualization, A.A.A., T.A.R. and S.A.; supervision, T.A.R. and S.A.; project administration, T.A.R.; funding acquisition, A.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The mathematical formulation of the conventional benchmark functions utilized in this work is shown in Table A1, Table A2 and Table A3.
Table A1. Unimodal test functions [72].
Function | Dimension | Range | $f_{min}$
$F_1(x)=\sum_{i=1}^{n} x_i^2$ | 10 | [−100, 100] | 0
$F_2(x)=\sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 10 | [−10, 10] | 0
$F_3(x)=\sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 10 | [−30, 30] | 0
$F_4(x)=\max_i \{ |x_i|,\ 1 \le i \le n \}$ | 10 | [−100, 100] | 0
$F_5(x)=\sum_{i=1}^{n-1} \left[ 100 (x_{i+1}-x_i^2)^2 + (x_i-1)^2 \right]$ | 10 | [−30, 30] | 0
$F_6(x)=\sum_{i=1}^{n} (\lfloor x_i+0.5 \rfloor)^2$ | 10 | [−100, 100] | 0
$F_7(x)=\sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0,1)$ | 10 | [−1.28, 1.28] | 0
Table A2. Multimodal test functions [72].
Function | Dimension | Range | $f_{min}$
$F_8(x)=\sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | 10 | [−500, 500] | −418.9829 × n
$F_9(x)=\sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$ | 10 | [−10, 10] | 0
$F_{10}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | 10 | [−32, 32] | 0
$F_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$ | 10 | [−600, 600] | 0
$F_{12}(x)=\tfrac{\pi}{n}\left\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i-1)^2 \left[ 1+10\sin^2(\pi y_{i+1}) \right] + (y_n-1)^2 \right\} + \sum_{i=1}^{n} u(x_i,10,100,4)$, where $y_i = 1 + \tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases} k(x_i-a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k(-x_i-a)^m, & x_i < -a \end{cases}$ | 10 | [−50, 50] | 0
$F_{13}(x)=0.1\left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i-1)^2 \left[ 1+\sin^2(3\pi x_i+1) \right] + (x_n-1)^2 \left[ 1+\sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
Table A3. Fixed-dimension multimodal benchmark functions.
Function | Dimension | Range | $f_{min}$
$F_{14}(x)=\left( \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
$F_{15}(x)=\sum_{i=1}^{11} \left[ a_i - \tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4} \right]^2$ | 4 | [−5, 5] | 0.0003
$F_{16}(x)=4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [−5, 5] | −1.0316
$F_{17}(x)=\left( x_2 - \tfrac{5.1}{4\pi^2}x_1^2 + \tfrac{5}{\pi}x_1 - 6 \right)^2 + 10\left( 1-\tfrac{1}{8\pi} \right)\cos x_1 + 10$ | 2 | [−5, 5] | 0.398
$F_{18}(x)=\left[ 1 + (x_1+x_2+1)^2 \left( 19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2 \right) \right] \times \left[ 30 + (2x_1-3x_2)^2 \left( 18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2 \right) \right]$ | 2 | [−2, 2] | 3
$F_{19}(x)=-\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij}(x_j-p_{ij})^2 \right)$ | 3 | [1, 3] | −3.86

References

  1. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  2. Rahman, M.A.; Sokkalingam, R.; Othman, M.; Biswas, K.; Abdullah, L.; Abdul Kadir, E. Nature-Inspired Metaheuristic Techniques for Combinatorial Optimization Problems: Overview and Recent Advances. Mathematics 2021, 9, 2633. [Google Scholar] [CrossRef]
  3. Amini, F.; Ghaderi, P. Hybridization of Harmony Search and Ant Colony Optimization for optimal locating of structural dampers. Appl. Soft Comput. 2013, 13, 2272–2280. [Google Scholar] [CrossRef]
  4. Gandomi, A.H.; Yang, X.-S.; Talatahari, S.; Alavi, A.H. Metaheuristic Applications in Structures and Infrastructures; Newnes: New South Wales, Australia, 2013; ISBN 0123983797. [Google Scholar]
  5. Glover, F.; Laguna, M.; Marti, R. Scatter Search and Path Relinking: Advances and Applications BT—Handbook of Metaheuristics; Glover, F., Kochenberger, G.A., Eds.; Springer: Boston, MA, USA, 2003; pp. 1–35. ISBN 978-0-306-48056-0. [Google Scholar]
  6. Talbi, E.-G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009; ISBN 0470496908. [Google Scholar]
  7. Raidl, G.R.; Puchinger, J.; Blum, C. Metaheuristic Hybrids BT—Handbook of Metaheuristics; Gendreau, M., Potvin, J.-Y., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 385–417. ISBN 978-3-319-91086-4. [Google Scholar]
  8. Goldberg, D.E.; Holland, J.H. Genetic Algorithms and Machine Learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  9. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  10. Van Laarhoven, P.J.M.; Aarts, E.H.L. Simulated Annealing: Theory and Applications; van Laarhoven, P.J.M., Aarts, E.H.L., Eds.; Springer: Dordrecht, Germany, 1987; pp. 7–15. ISBN 978-94-015-7744-1. [Google Scholar]
  11. Glover, F.; Taillard, E.; Taillard, E. A user’s guide to tabu search. Ann. Oper. Res. 1993, 41, 1–28. [Google Scholar] [CrossRef]
  12. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  13. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  14. Ameen, A.A.; Rashid, T.A. A Tutorial on Child Drawing Development Optimization; Springer International Publishing AG: Muscat, Oman, 2022; pp. 1–15. [Google Scholar]
  15. Vinod Chandra, S.S.; Anand, H.S. Nature inspired meta heuristic algorithms for optimization problems. Computing 2022, 104, 251–269. [Google Scholar] [CrossRef]
  16. Rajakumar, R.; Dhavachelvan, P.; Vengattaraman, T. A survey on nature inspired meta-heuristic algorithms with its domain specifications. In Proceedings of the 2016 International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 21–22 October 2016; pp. 1–6. [Google Scholar]
  17. Osman, I.H.; Laporte, G. Metaheuristics: A bibliography. Ann. Oper. Res. 1996, 63, 511–623. [Google Scholar] [CrossRef]
  18. Cruz, P.P.; Gutierrez, A.M.; Ramırez-Mendoza, R.A.; Flores, E.M.; Espinoza, A.A.O.; Silva, D.C.B. A Practical Approach to Metaheuristics Using LabVIEW and MATLAB; Taylor & Francis Group, LLC: Abingdon, UK, 2020. [Google Scholar]
  19. Moscato, P.; Cotta, C.; Mendes, A. Memetic algorithms. In Handbook of Approximation Algorithms and Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  20. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  21. Jaderyan, M.; Khotanlou, H. Virulence optimization algorithm. Appl. Soft Comput. 2016, 43, 596–618. [Google Scholar] [CrossRef]
  22. Wang, G.G.; Guo, L.; Gandomi, A.H.; Hao, G.S.; Wang, H. Chaotic Krill Herd algorithm. Inf. Sci. 2014, 274, 17–34. [Google Scholar] [CrossRef]
  23. Abdullah, J.M.; Ahmed, T. Fitness Dependent Optimizer: Inspired by the Bee Swarming Reproductive Process. IEEE Access 2019, 7, 43473–43486. [Google Scholar] [CrossRef]
  24. Passino, K.M. Bacterial foraging optimization. In Innovations and Developments of Swarm Intelligence Applications; IGI Global: Hershey, PA, USA, 2012; pp. 219–234. [Google Scholar]
  25. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  26. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70. [Google Scholar] [CrossRef]
  27. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  28. Yang, X.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  29. He, S.; Wu, Q.H.; Saunders, J.R. Group search optimizer: An optimization algorithm inspired by animal searching behavior. IEEE Trans. Evol. Comput. 2009, 13, 973–990. [Google Scholar] [CrossRef]
  30. Hama Rashid, D.N.; Rashid, T.A.; Mirjalili, S. ANA: Ant Nesting Algorithm for Optimizing Real-World Problems. Mathematics 2021, 9, 3111. [Google Scholar] [CrossRef]
  31. Shamsaldin, A.S.; Rashid, T.A.; Al-Rashid Agha, R.A.; Al-Salihi, N.K.; Mohammadi, M. Donkey and smuggler optimization algorithm: A collaborative working approach to path finding. J. Comput. Des. Eng. 2019, 6, 562–583. [Google Scholar] [CrossRef]
  32. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  33. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray Optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
  34. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  35. Erol, O.K.; Eksin, I. A new optimization method: Big bang–big crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  36. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  37. Formato, R.A. Central force optimization. Prog. Electromagn. Res. 2007, 77, 425–491. [Google Scholar] [CrossRef]
  38. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  39. Kaveh, A.; Dadras Eslamlou, A. Water strider algorithm: A new metaheuristic and applications. Structures 2020, 25, 520–541. [Google Scholar] [CrossRef]
  40. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  41. Rahman, C.M. Group learning algorithm: A new metaheuristic algorithm. Neural Comput. Appl. 2023. [Google Scholar] [CrossRef]
  42. Tan, Y.; Zhu, Y. Fireworks algorithm for optimization. In Proceedings of the Advances in Swarm Intelligence: First International Conference, ICSI 2010, Beijing, China, 12–15 June 2010; Proceedings, Part I 1. Springer: Berlin/Heidelberg, Germany, 2010; pp. 355–364. [Google Scholar]
  43. Kashan, A.H. League championship algorithm: A new algorithm for numerical function optimization. In Proceedings of the 2009 International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, 4–7 December 2009; pp. 43–48. [Google Scholar]
  44. Rahman, C.M.; Rashid, T.A. A new evolutionary algorithm: Learner performance based behavior algorithm. Egypt. Inform. J. 2021, 22, 213–223. [Google Scholar] [CrossRef]
  45. Gandomi, A.H. Interior search algorithm (ISA): A novel approach for global optimization. ISA Trans. 2014, 53, 1168–1183. [Google Scholar] [CrossRef]
  46. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  47. Moosavian, N.; Roodsari, B.K. Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol. Comput. 2014, 17, 14–24. [Google Scholar] [CrossRef]
  48. Ghorbani, N.; Babaei, E. Exchange market algorithm. Appl. Soft Comput. 2014, 19, 177–187. [Google Scholar] [CrossRef]
  49. Ramezani, F.; Lotfi, S. Social-based algorithm (SBA). Appl. Soft Comput. 2013, 13, 2837–2856. [Google Scholar] [CrossRef]
  50. Keilis-Borok, V.I.; Kossobokov, V.G. Premonitory activation of earthquake flow: Algorithm M8. Phys. Earth Planet. Inter. 1990, 61, 73–83. [Google Scholar] [CrossRef]
  51. Abdulhameed, S.; Rashid, T.A. Child Drawing Development Optimization Algorithm Based on Child’s Cognitive Development. Arab. J. Sci. Eng. 2022, 47, 1337–1351. [Google Scholar] [CrossRef]
  52. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  53. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  54. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  55. Mohammed, H.; Rashid, T. FOX: A FOX-inspired optimization algorithm. Appl. Intell. 2023, 53, 1030–1050. [Google Scholar] [CrossRef]
  56. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  57. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  58. Ahmed, A.M.; Rashid, T.A.; Saeed, S.A.M. Dynamic Cat Swarm Optimization algorithm for backboard wiring problem. Neural Comput. Appl. 2021, 33, 13981–13997. [Google Scholar] [CrossRef]
  59. Mohammed, H.M.; Umar, S.U.; Rashid, T.A. A Systematic and Meta-Analysis Survey of Whale Optimization Algorithm. Comput. Intell. Neurosci. 2019, 2019, 8718571. [Google Scholar] [CrossRef] [PubMed]
  60. Mohammed, H.; Rashid, T. A novel hybrid GWO with WOA for global numerical optimization and solving pressure vessel design. Neural Comput. Appl. 2020, 32, 14701–14718. [Google Scholar] [CrossRef]
  61. Rezaei, H.; Bozorg-Haddad, O.; Chu, X. Grey Wolf Optimization (GWO) Algorithm BT—Advanced Optimization by Nature-Inspired Algorithms; Bozorg-Haddad, O., Ed.; Springer: Singapore, 2018; pp. 81–91. ISBN 978-981-10-5221-7. [Google Scholar]
  62. Corsini, R.R.; Costa, A.; Fichera, S.; Parrinello, V. Hybrid Harmony Search for Stochastic Scheduling of Chemotherapy Outpatient Appointments. Algorithms 2022, 15, 424. [Google Scholar] [CrossRef]
  63. Fesanghary, M.; Mahdavi, M.; Minary-Jolandan, M.; Alizadeh, Y. Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems. Comput. Methods Appl. Mech. Eng. 2008, 197, 3080–3091. [Google Scholar] [CrossRef]
  64. Turky, A.M.; Abdullah, S.; Sabar, N.R. A Hybrid Harmony Search Algorithm for Solving Dynamic Optimisation Problems. Procedia Comput. Sci. 2014, 29, 1926–1936. [Google Scholar] [CrossRef]
  65. Al-Omoush, A.A.; Alsewari, A.A.; Alamri, H.S.; Zamli, K.Z. Comprehensive Review of the Development of the Harmony Search Algorithm and its Applications. IEEE Access 2019, 7, 14233–14245. [Google Scholar] [CrossRef]
  66. Yang, X.-S. Harmony Search as a Metaheuristic Algorithm BT—Music-Inspired Harmony Search Algorithm: Theory and Applications; Geem, Z.W., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–14. ISBN 978-3-642-00185-7. [Google Scholar]
  67. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  68. Yarmohamadi, H.; Zhang, Q.; Jiao, P.; Alavi, A.H. An enhanced adaptive global-best harmony search algorithm for continuous optimization problems. Eng. Rep. 2020, 2, 1–19. [Google Scholar] [CrossRef]
  69. Feng, Y.; Wang, G.-G.; Gao, X.-Z. A novel hybrid cuckoo search algorithm with global harmony search for 0–1 knapsack problems. Int. J. Comput. Intell. Syst. 2016, 9, 1174–1190. [Google Scholar] [CrossRef]
  70. Hellwig, M.; Beyer, H.-G. Benchmarking evolutionary algorithms for single objective real-valued constrained optimization—A critical review. Swarm Evol. Comput. 2019, 44, 927–944. [Google Scholar] [CrossRef]
  71. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. DISH Algorithm Solving the CEC 2019 100-Digit Challenge. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 1–6. [Google Scholar]
  72. Molga, M.; Smutnicki, C. Test functions for optimization needs. Test Funct. Optim. Needs 2005, 101, 48. [Google Scholar]
Figure 1. Child-drawing-development-optimization–harmony-search-based hybrid flowchart.
Figure 2. The CDDO, HS, and CDDO–HS box and whisker plots for CEC-C06 2019.
Figure 3. Convergence curves for function F1.
Figure 4. Convergence curves for function F10.
Table 1. The formulation of CDDO optimization.
Statement | Description
f(x) | The cost function used to obtain the best drawing.
Xij | The current solution X is a child's drawing influenced by factors such as hand pressure, the golden ratio, length, and width. The index i runs over the decision variables (the population members), and j runs over the dimensions.
PM | Pattern memories are abilities acquired through experience. They use feedback to identify patterns in the drawings, strive to give the pictures meaning, and are exercised through copying, practicing, and enthusiasm (with a trial).
GR | The golden ratio: the ratio between the length and width of a child's drawing, which serve as the two components of the solution.
HP | Hand pressure: current solutions exert pressure between the lower boundary (LB) and the upper boundary (UB) of the problem.
RHP | A hand pressure chosen at random from among the children's drawings.
LB, UB | The lower boundary (LB) and upper boundary (UB) of the problem, which are the width and length of a child's artwork, respectively.
RAND | RAND symbolizes the action of producing a random number from a set of parameters.
Table 2. Table comparing CDDO–HS with CDDO and standard HS.
Fun | CDDO–HS Avg. | CDDO–HS Std. | CDDO Avg. | CDDO Std. | HS Avg. | HS Std.
F1 | 5.087E−33 | 1.057E−32 | 1.328E−57 | 8.635E−73 | 2.850E+02 | 9.021E+01
F2 | 4.921E−17 | 4.033E−17 | 2.453E−32 | 4.385E−32 | 3.005E+00 | 5.355E−01
F3 | 1.249E−29 | 2.238E−29 | 2.736E−39 | 5.168E−40 | 1.754E+04 | 5.815E+03
F4 | 1.986E−16 | 1.920E−16 | 7.815E−33 | 2.784E−48 | 2.214E+01 | 1.787E+00
F5 | 2.298E+00 | 6.935E+00 | 2.419E+01 | 1.023E+01 | 2.040E+04 | 8.759E+03
F6 | 5.589E−04 | 1.439E−04 | 7.074E−01 | 6.787E−01 | 2.834E+02 | 1.023E+02
F7 | 2.901E−03 | 1.497E−03 | 1.361E−03 | 1.123E−03 | 2.042E−01 | 5.145E−02
F8 | −1.178E+04 | 3.098E+03 | −1.244E+04 | 5.537E+02 | −1.240E+04 | 7.638E+01
F9 | 2.222E+00 | 6.070E+00 | 1.060E+01 | 1.724E+01 | 1.968E+01 | 3.683E+00
F10 | 6.809E−15 | 1.703E−15 | 7.875E−15 | 4.118E−15 | 5.095E+00 | 5.702E−01
F11 | 0.000E+00 | 0.000E+00 | 5.688E−01 | 1.532E+00 | 3.513E+00 | 8.925E−01
F12 | 1.161E−06 | 3.112E−07 | 3.167E−01 | 9.0046E−01 | 7.060E+00 | 2.147E+00
F13 | 1.555E−05 | 5.271E−06 | 4.128E−01 | 3.745E−01 | 1.622E+02 | 1.596E+02
F14 | 5.372E+00 | 4.241E+00 | 9.981E−01 | 3.686E−04 | 9.980E−01 | 3.448E−11
F15 | 6.249E−04 | 2.502E−04 | 1.181E−03 | 1.022E−03 | 6.699E−03 | 9.110E−03
F16 | −1.032E+00 | 3.777E−10 | −1.029E+00 | 3.247E−03 | −1.032E+00 | 1.889E−07
F17 | 3.979E−01 | 2.675E−09 | 4.231E−01 | 4.814E−02 | 3.979E−01 | 6.467E−06
F18 | 5.700E+00 | 8.238E+00 | 3.117E+00 | 1.579E−01 | 3.900E+00 | 4.930E+00
F19 | −3.863E+00 | 1.554E−10 | −3.728E+00 | 1.075E−01 | −3.863E+00 | 4.714E−08
Table 3. Table comparing CDDO–HS with CDDO and standard HS on CEC-C06 2019.
Fun | CDDO–HS Avg. | CDDO–HS Std. | CDDO Avg. | CDDO Std. | HS Avg. | HS Std.
F1 | 5.317E+04 | 8.142E+03 | 4.104E+05 | 3.377E+05 | 2.232E+10 | 2.295E+10
F2 | 1.835E+01 | 6.390E−03 | 1.898E+01 | 3.663E−01 | 1.875E+01 | 2.090E−01
F3 | 1.370E+01 | 5.785E−13 | 1.370E+01 | 1.633E−05 | 1.370E+01 | 4.987E−10
F4 | 5.746E+01 | 3.066E+01 | 2.221E+03 | 8.515E+02 | 3.976E+01 | 2.003E+01
F5 | 2.170E+00 | 2.109E−01 | 3.350E+00 | 1.421E−01 | 2.039E+00 | 2.623E−02
F6 | 1.130E+01 | 1.591E+00 | 1.238E+01 | 7.803E−01 | 1.139E+01 | 1.091E+00
F7 | 5.440E+01 | 1.300E+02 | 1.040E+03 | 1.841E+02 | 1.708E+02 | 2.455E+02
F8 | 3.150E+00 | 7.640E−01 | 6.710E+00 | 3.515E−01 | 4.543E+00 | 1.082E+00
F9 | 3.484E+00 | 1.912E−01 | 1.790E+02 | 8.510E+01 | 3.634E+00 | 1.742E−01
F10 | 1.554E+01 | 7.609E+00 | 2.155E+01 | 1.127E−01 | 2.146E+01 | 1.050E−01
Table 4. The p-value of CDDO–HS against CDDO for classical benchmark functions and CEC-C06 2019.
Function | Classical Benchmark Functions | CEC-C06 2019
1 | - | 2.99373E−07
2 | - | 3.10921E−13
3 | - | 3.94470E−13
4 | - | 3.97091E−20
5 | 9.44415E−14 | 3.87125E−33
6 | 4.16584E−07 | 1.43887E−03
7 | - | 9.09047E−32
8 | 2.54526E−01 | 5.07244E−31
9 | 1.48536E−02 | 2.82731E−16
10 | 5.58851E−02 | 6.02103E−05
11 | 4.66181E−02 |
12 | 5.88921E−02 |
13 | 1.18387E−07 |
14 | - |
15 | 5.29812E−03 |
16 | 1.31944E−05 |
17 | 5.69975E−03 |
18 | 9.13712E−02 |
19 | 4.87510E−09 |
Table 5. The p-value of CDDO–HS against HS for classical benchmark functions and CEC-C06 2019.
Function | Classical Benchmark Functions | CEC-C06 2019
1 | 1.41157E−24 | 1.69507E−06
2 | 1.38125E−37 | 6.92112E−15
3 | 1.31874E−23 | 5.89563E−08
4 | 5.90033E−57 | -
5 | 1.80505E−18 | -
6 | 7.63997E−22 | 7.89851E−01
7 | 3.13914E−29 | 2.54399E−02
8 | 2.75280E−01 | 3.36825E−07
9 | 1.68215E−19 | 2.36583E−03
10 | 7.24212E−49 | 7.52043E−05
11 | 2.24778E−29 |
12 | 1.98825E−25 |
13 | 6.98511E−07 |
14 | - |
15 | 5.62412E−04 |
16 | 2.21129E−05 |
17 | 3.26857E−02 |
18 | 3.08727E−01 |
19 | 3.55345E−04 |
Table 6. Table comparing CDDO–HS with ChOA, BOA, FOX, GWO–WOA, WOA–BAT, and DCSO on CEC-C06 2019.
Function | CDDO–HS | ChOA | BOA | FOX | GWO–WOA | WOA–BAT | DCSO
F1 | 5.317E+04 | 4.240E+09 | 5.890E+04 | 2.580E+04 | 4.760E+04 | 7.600E+07 | 3.863E+04
F2 | 1.835E+01 | 1.841E+01 | 1.890E+01 | 1.834E+01 | 1.834E+01 | 1.750E+01 | 1.834E+01
F3 | 1.370E+01 | 1.370E+01 | 1.370E+01 | 1.370E+01 | 1.370E+01 | 1.270E+01 | 1.370E+01
F4 | 5.746E+01 | 5.933E+03 | 2.090E+04 | 1.060E+03 | 2.537E+02 | 2.120E+03 | 7.266E+01
F5 | 2.170E+00 | 4.209E+00 | 6.180E+00 | 5.315E+00 | 2.426E+00 | 2.440E+00 | 2.493E+00
F6 | 1.130E+01 | 1.215E+01 | 1.180E+01 | 5.033E+00 | 1.137E+01 | 1.110E+01 | 8.864E+00
F7 | 5.440E+01 | 1.007E+03 | 1.040E+03 | 3.068E+02 | 5.876E+02 | 6.060E+02 | 3.291E+02
F8 | 3.150E+00 | 6.785E+00 | 6.340E+00 | 5.462E+00 | 5.587E+00 | 5.720E+00 | 5.160E+00
F9 | 3.484E+00 | 4.493E+02 | 2.270E+03 | 3.796E+00 | 5.671E+00 | 2.280E+01 | 6.104E+00
F10 | 1.554E+01 | 2.150E+01 | 2.150E+01 | 2.098E+01 | 2.156E+01 | 2.120E+01 | 2.113E+01
Table 7. The ranking sort of CDDO–HS, ChOA, BOA, FOX, GWO–WOA, WOA–BAT, and DCSO algorithms.
Function | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th
F1 | FOX | DCSO | GWO–WOA | CDDO–HS | BOA | WOA–BAT | ChOA
F2 | WOA–BAT | GWO–WOA | FOX | DCSO | CDDO–HS | ChOA | BOA
F3 | WOA–BAT | BOA | ChOA | DCSO | GWO–WOA | CDDO–HS | FOX
F4 | CDDO–HS | DCSO | GWO–WOA | FOX | WOA–BAT | ChOA | BOA
F5 | CDDO–HS | GWO–WOA | WOA–BAT | DCSO | ChOA | FOX | BOA
F6 | FOX | DCSO | WOA–BAT | CDDO–HS | GWO–WOA | BOA | ChOA
F7 | CDDO–HS | FOX | DCSO | GWO–WOA | WOA–BAT | ChOA | BOA
F8 | CDDO–HS | DCSO | FOX | GWO–WOA | WOA–BAT | BOA | ChOA
F9 | CDDO–HS | FOX | GWO–WOA | DCSO | WOA–BAT | ChOA | BOA
F10 | CDDO–HS | FOX | DCSO | WOA–BAT | ChOA | BOA | GWO–WOA
Table 8. The ranking score of CDDO–HS, ChOA, BOA, FOX, GWO–WOA, WOA–BAT, and DCSO algorithms.
Function | CDDO–HS | ChOA | BOA | FOX | GWO–WOA | WOA–BAT | DCSO
Subtotal | 2.5 | 5.8 | 6 | 3.1 | 3.6 | 3.8 | 2.8
