Article

Enhanced Marine Predators Algorithm for Solving Global Optimization and Feature Selection Problems

1 Department of Information Systems, College of Computing and Information Technology, University of Bisha, Bisha 61922, Saudi Arabia
2 Department of Computer, Damietta University, Damietta 34517, Egypt
3 Faculty of Computer Science, Misr International University, Cairo 11341, Egypt
4 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5 Department of Computer, Mansoura University, Mansoura 35516, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(21), 4154; https://doi.org/10.3390/math10214154
Submission received: 26 September 2022 / Revised: 25 October 2022 / Accepted: 2 November 2022 / Published: 7 November 2022

Abstract

Feature selection (FS) is applied to reduce data dimensions while retaining as much information as possible. Many optimization methods have been applied to enhance the efficiency of FS algorithms; these approaches reduce the processing time and improve the accuracy of the learning models. In this paper, a method called MPAO, based on the marine predators algorithm (MPA) and the "narrowed exploration" strategy of the Aquila optimizer (AO), is proposed to handle FS, global optimization, and engineering problems. This modification enhances the exploration behavior of the MPA to update and explore the search space. The narrowed exploration of the AO therefore increases the searchability of the MPA, improving its ability to obtain optimal or near-optimal results and effectively helping the original MPA overcome local optima in the problem domain. The performance of the proposed MPAO method is evaluated on FS and global optimization problems using several evaluation criteria, including the maximum value (Max), minimum value (Min), and standard deviation (Std) of the fitness function. Furthermore, the results are compared to those of several meta-heuristic methods over four engineering problems. The experimental results confirm the efficiency of the proposed MPAO method in solving FS, global optimization, and engineering problems.

1. Introduction

The curse of dimensionality is one of the most widely tackled topics in recent research. Many redundant, noisy, and uninformative features may contaminate the feature space. Hence, the idea of feature selection (FS) has been prominent for many years. The literature classifies FS methods into two main categories: filter and wrapper [1]. Filter methods take advantage of statistical measures of the features; for example, they might eliminate or keep features based on their scores. Filter methods are fast but less sensitive to the classifier that is ultimately used [2]. Wrapper methods select subsets of the feature space and evaluate the classifier performance on each subset. However, they are computationally exhaustive and time consuming if a brute-force search is adopted [3]. Here, the role of metaheuristic algorithms appears clearly: metaheuristic algorithms can search the feature space for the subset that achieves the highest classification accuracy [4].
Metaheuristic algorithms can be classified into four classes: (1) swarm intelligence algorithms, (2) human-based algorithms, (3) chemistry- and physics-based algorithms, and (4) evolutionary algorithms. Swarm-based algorithms study the behavior of flocks and how they interact to reach food sources; examples include the grasshopper optimization algorithm (GOA) [5], Harris hawks optimization [6], the snake optimizer (SO) [7], and the coot bird optimizer [8]. Human-based algorithms simulate physical and nonphysical human behaviors; this class includes teaching–learning-based optimization (TLBO) [9], the social-based algorithm (SBA) [10], and the imperialist competitive algorithm (ICA) [11]. Chemistry- and physics-based algorithms are derived from chemical or physical laws, such as the ion motion algorithm [12], the lightning search algorithm [13], and the vortex search algorithm [14]. Evolutionary algorithms mimic the evolution of biological creatures, such as the genetic algorithm (GA) [15], differential evolution (DE) [16], bacterial foraging optimization (BFO) [17], and the memetic algorithm (MA) [18]. However, the no-free-lunch theorem [19] states that an optimization algorithm that solves some optimization problems well cannot solve all problems, which motivates researchers to propose new algorithms or to enhance established ones.
In 2020, the marine predators algorithm (MPA) was introduced by the authors of [20]. The various foraging strategies of predators in biological interactions inspired its development: the mathematical model of the MPA mimics the foraging behavior of marine predators in nature. The MPA accommodates the Lévy and Brownian statistical distributions. The Lévy strategy searches the space with small steps interspersed with long jumps, whereas the Brownian strategy sweeps the search space in controlled, uniform steps. The virtue of the Lévy strategy is its deep and accurate search, while the Brownian strategy ensures that distant areas are visited. This cooperation improves the searchability of the MPA significantly. The statistical results showed that the MPA outperformed the genetic algorithm (GA), cuckoo search (CS), gravitational search algorithm (GSA), particle swarm optimization (PSO), salp swarm algorithm (SSA), and covariance matrix adaptation evolution strategy (CMA-ES). The MPA has also demonstrated strong performance in solving engineering problems: it converges quickly toward the global optimum and does not require tuning of its parameters. Nevertheless, applying metaheuristic algorithms to various optimization problems remains encouraged by the no-free-lunch theorem. Moreover, a new metaheuristic algorithm named the Aquila optimizer (AO) was introduced in [21]. It is inspired by the hunting behavior of the Aquila, one of the most intelligent birds, a dark brown bird belonging to the Accipitridae family. The AO has been successfully applied to optimize the parameters of PID controllers [22] and multilevel inverters [23].
From the preceding review, the novelty and contributions of this study can be summarized in the following points:
  • Improve the exploration phase of the MPA algorithm.
  • Apply the “narrowed exploration” strategy of the AO in the MPA algorithm.
  • Evaluate the proposed method using different optimization problems.
The rest of this paper is arranged as follows: Section 2 reviews the related work. Section 3 describes the materials and methods. Section 4 presents the proposed method. Section 5 contains the experimental results and discussion, whereas the last section concludes the study.

2. Related Work

Although the MPA has only recently appeared as a metaheuristic algorithm, it suffers from premature convergence, which affects its accuracy in classification tasks. The authors in [24] addressed this drawback by hybridizing the MPA with simulated annealing (SA). SA widened the MPA search space and improved its efficiency in FS tasks on high-dimensional datasets. Many metaheuristic algorithms successfully balance the exploration and exploitation phases by embedding chaos theory, an alternative way to generate the random numbers used in the algorithm. The authors in [25] introduced a chaotic MPA for feature selection; the practical results showed that the improved MPA achieved the optimal number of features in many experiments. Another drawback of the MPA is its unidirectional search for prey, which led the authors in [26] to embed opposition-based learning concepts [27] to allow examination in all possible search directions. This improvement saves the MPA from stagnation in local optima; the proposed method achieved the best convergence rate and selected the optimal feature set compared with other competitive algorithms. The shortage of MPA search ability is addressed again in [28], where the authors hybridize the MPA with the sine cosine algorithm (SCA). Extensive experiments were conducted to evaluate the performance of the proposed hybrid approach, and it showed superiority in all accuracy measures.
The authors in [29] noticed that the prey movement of the MPA depends on two simple strategies, Lévy flight and Brownian motion, whereas the problem search space is often too complex to be searched by such simple strategies; this traps the native MPA in local optima. They proposed a co-evolutionary cultural mechanism that divides the population into subpopulations. Each subpopulation respects its own search space and shares accumulated experience with the others. In addition, two operators are added to enhance the diversity of the populations and allow a better exchange of experience to support the exploitation of the native MPA. The proposed approach was tested on the FS task and proved its superiority over competitive algorithms; moreover, it was used to optimize the parameters of the SVM algorithm.
The two most significant drawbacks of the MPA are the lack of population diversity and poor convergence performance. In this regard, Hu et al. [30] tackled the weakness of the MPA regarding population diversity: although new and successful in many applications, MPA solutions still lack accuracy. They proposed an enhanced MPA by adding a neighborhood-based learning strategy and an adaptive technique to control the population size. Their approach was tested on state-of-the-art benchmarks for solving optimization problems and real-life applications; compared with the original MPA, the enhanced version showed superior performance. In 2021, Hu et al. [31] introduced a solution to the shape optimization problem of developable Ball surfaces with a hybrid MPA, in which the MPA is combined with the differential evolution algorithm and a quasi-opposition strategy to help the original MPA jump out of local optima and enrich population diversity. Their approach effectively solved the shape optimization problem in terms of robustness and precision.
As a new metaheuristic algorithm, the MPA was applied to solving engineering problems in [32]. Its convergence was treated by the following steps: (1) the logistic chaos function was used to ensure the quality of population diversity, and (2) an adjustment transition strategy was added to balance exploration and exploitation. Moreover, the problem of falling into local optima was addressed by updating the step information of the predators through a modified sigmoid function. Finally, after adding the golden sine factor, the proposed MPA achieved a better convergence rate, and its solutions were diverse enough to solve engineering problems effectively.
The MPA has also been introduced to solve engineering and renewable energy problems, such as predicting wind power in [33]. The authors of [33] optimized the parameters of ANFIS (adaptive neuro-fuzzy inference system) using an augmented MPA, in which a mutation operator was added to the original MPA to prevent it from getting stuck in local optima. The experiments revealed the competency of the modified MPA over many time-series forecasting algorithms.
Multilevel thresholding is an essential preprocessing step in image segmentation, and selecting the optimal thresholds affects the accuracy of the segmented image. An improved MPA was proposed in [34]; it introduced a strategy that steers the worst solutions toward the best ones while also moving them randomly in the search space, improving the convergence rate and preventing stagnation in local optima. Another strategy was added to improve the exploration and exploitation capabilities. The experimental results showed the competency of the proposed MPA over other metaheuristic algorithms.
From the previous works, we can conclude that the MPA has several drawbacks that the aforementioned approaches tackled, such as premature convergence, a unidirectional search for prey, and stagnation in local optima. The motivation of our research is to remedy the weaknesses of the original MPA with the strengths of another metaheuristic algorithm, the Aquila optimizer (AO). The AO was introduced in 2021 to solve continuous optimization problems. A binary version of the AO has been used for wrapper-based FS on a medical dataset for COVID-19 [35], where different shapes of transfer functions were applied to convert the AO from its continuous nature into a binary one. The approach proved its competence in many respects, such as increasing accuracy, balancing exploration and exploitation, reducing the number of selected features, and converging quickly. Deep learning techniques have inspired many researchers to use them for feature extraction and then apply the AO for feature selection. In [36], the MobileNetV3 deep learning method extracted the features of medical images, and a binary thresholding version of the AO selected the most informative features to detect COVID-19 from X-ray images; the method showed high performance and was suggested for many other areas of application. Likewise, a thresholding binary version of the AO was applied to intrusion detection in an IoT environment [37]: a CNN-based deep learning technique was used first for feature extraction, and then the binary AO selected the most important features. This research confirmed that the binary AO is competitive in areas other than medical applications. The strength of the AO can also treat the weaknesses of other metaheuristic algorithms. A hybrid combination of the Harris hawks algorithm with the AO enhanced the searchability of the former; the hybrid approach in [38] proved its competence in solving optimization problems. A hybrid algorithm that combines the exploration of the AO with the exploitation of Harris hawks optimization emerged in [39]: the robust search capability of the AO is integrated with random opposition-based learning to enhance the Harris hawks optimization algorithm. An intensive study of its performance tested its exploration, exploitation, and susceptibility to local optima, and the results show its superiority and competence in solving industrial engineering problems.

3. Material and Methods

This section briefly describes the marine predators algorithm and Aquila optimizer.

3.1. Marine Predators Algorithm (MPA)

The MPA was presented in [20]. It was inspired by the motions of biological ocean predators. The main steps of the MPA are as follows:
  • MPA initialization: As with all population-based algorithms, MPA has an initialization step where all populations are distributed uniformly in the search space as shown in Equation (1).
    $X_0 = u + \mathrm{rand} \otimes (l - u)$ (1)
    where u and l are the lower and upper bounds, respectively, rand is a random vector in [0, 1], and ⊗ denotes entry-wise multiplication. The fittest solution $X^I$ is selected to form a matrix called Elite, shown in Equation (2). The Elite matrix has dimensions [n, d], where n is the number of search agents and d is the problem dimension.
    $$\mathrm{Elite} = \begin{bmatrix} X^{I}_{1,1} & X^{I}_{1,2} & \cdots & X^{I}_{1,d} \\ X^{I}_{2,1} & X^{I}_{2,2} & \cdots & X^{I}_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ X^{I}_{n,1} & X^{I}_{n,2} & \cdots & X^{I}_{n,d} \end{bmatrix}_{n \times d}$$ (2)
    Another matrix called Prey is constructed with the exact dimensions of Elite shown in Equation (3).
    $$\mathrm{Prey} = \begin{bmatrix} X_{1,1} & X_{1,2} & \cdots & X_{1,d} \\ X_{2,1} & X_{2,2} & \cdots & X_{2,d} \\ X_{3,1} & X_{3,2} & \cdots & X_{3,d} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n,1} & X_{n,2} & \cdots & X_{n,d} \end{bmatrix}_{n \times d}$$ (3)
    The process of MPA is split into three levels based on the difference in velocity ratio between a predator and prey.
  • Prey moving faster than predator (exploration phase): When the prey moves faster than the predator, the predator's best strategy is to remain stationary. Exploration is most important in the first third of the iterations. The prey position is updated by a stepsize calculated as shown in Equation (4).
    $\mathrm{stepsize}_i = R_B \otimes \left(\mathrm{Elite}_i - R_B \otimes \mathrm{Prey}_i\right), \quad i = 1, \ldots, n$ (4)
    where $R_B$ is a vector of random numbers based on the Brownian (normal) distribution. The new prey position is updated as shown in Equation (5).
    $\mathrm{Prey}_i = \mathrm{Prey}_i + 0.5 \cdot N \otimes \mathrm{stepsize}_i$ (5)
    where N is a random vector in [0, 1].
  • Predator and prey are moving at the same rate (exploitation vs. exploration):
    When the predator and the prey move at the same speed, they are both on the prowl for prey. This phase occurs during the intermediate stage of the optimization process, when exploration is gradually converted into exploitation. It is critical for both exploration and exploitation; as a result, half of the population is used for exploration and the other half for exploitation. During this phase, the predator is in charge of exploration, while the prey is in charge of exploitation. The new positions for the first half of the population (supporting exploitation) are updated as shown in Equation (6).
    $\mathrm{stepsize}_i = R_L \otimes \left(\mathrm{Elite}_i - R_L \otimes \mathrm{Prey}_i\right), \quad i = 1, \ldots, n/2; \qquad \mathrm{Prey}_i = \mathrm{Prey}_i + 0.5 \cdot N \otimes \mathrm{stepsize}_i$ (6)
    where R L is a vector of random numbers based on Levy distribution. The new positions for the second half of the populations (supporting exploration) are updated as shown in Equation (7)
    $\mathrm{stepsize}_i = R_B \otimes \left(R_B \otimes \mathrm{Elite}_i - \mathrm{Prey}_i\right), \quad i = n/2, \ldots, n; \qquad \mathrm{Prey}_i = \mathrm{Elite}_i + 0.5 \cdot A \otimes \mathrm{stepsize}_i$ (7)
    where A is a control parameter calculated as shown in Equation (8).
    $A = \left(1 - \dfrac{\mathrm{Iter}}{\mathrm{MaxIter}}\right)^{2\,\frac{\mathrm{Iter}}{\mathrm{MaxIter}}}$ (8)
  • Predator moving faster than prey (exploitation phase):
    This scenario occurs during the final stage of the optimization process and is typically associated with a high capacity for exploitation. The prey positions are updated as shown in Equation (9).
    $\mathrm{stepsize}_i = R_L \otimes \left(R_L \otimes \mathrm{Elite}_i - \mathrm{Prey}_i\right), \quad i = 1, \ldots, n; \qquad \mathrm{Prey}_i = \mathrm{Elite}_i + 0.5 \cdot A \otimes \mathrm{stepsize}_i$ (9)
    To maintain the search, the fish aggregating devices (FADs) effect is applied. The mathematical model of the FADs effect is defined in Equation (10). Algorithm 1 shows the pseudo code of the MPA.
    $$\mathrm{Prey}_i = \begin{cases} \mathrm{Prey}_i + \mathrm{CF}\left[X_{\min} + R \otimes \left(X_{\max} - X_{\min}\right)\right] \otimes U & \text{if } r \le \mathrm{FADs} \\ \mathrm{Prey}_i + \left[\mathrm{FADs}\,(1 - r) + r\right]\left(\mathrm{Prey}_{r1} - \mathrm{Prey}_{r2}\right) & \text{if } r > \mathrm{FADs} \end{cases}$$ (10)
    where CF is the adaptive parameter A of Equation (8), U is a binary vector, r is a uniform random number in [0, 1], FADs = 0.2, and r1 and r2 denote randomly chosen prey indices.
Algorithm 1 MPA pseudo code
1:  Initialize the populations; t = 1.
2:  while (t <= T_max) do
3:      Calculate the fitness and form the Elite matrix.
4:      if t < T_max/3 then
5:          Update the prey position based on Equation (5).
6:      else if T_max/3 < t < 2*T_max/3 then
7:          For the first half of the population, update the prey position based on Equation (6).
8:          For the other half, use Equation (7) to update the position.
9:      else if t > 2*T_max/3 then
10:         Update the prey position using Equation (9).
11:     end if
12:     Apply memory saving and the Elite update.
13:     Apply the FADs effect using Equation (10).
14:     Apply memory saving and the Elite update.
15:     t = t + 1
16: end while
17: return the best position.
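To make the three velocity-ratio phases above easier to follow, the following Python sketch implements only the prey-update rules of Equations (4)–(9); the Lévy-step helper, the use of NumPy, and the uniform random vector N are implementation assumptions, and memory saving, the FADs effect, and fitness evaluation are omitted.

```python
import math
import numpy as np

def levy_step(shape, beta=1.5):
    # Levy-distributed steps via Mantegna's method (an assumed helper, not from the paper).
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, shape)
    v = np.random.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

def mpa_prey_update(prey, elite, t, t_max):
    """One iteration of the MPA prey update, Equations (4)-(9)."""
    n, d = prey.shape
    A = (1 - t / t_max) ** (2 * t / t_max)            # adaptive parameter, Eq. (8)
    new = prey.copy()
    if t < t_max / 3:                                 # phase 1: exploration, Eqs. (4)-(5)
        RB = np.random.normal(size=(n, d))            # Brownian random vectors
        step = RB * (elite - RB * prey)
        new = prey + 0.5 * np.random.rand(n, d) * step
    elif t < 2 * t_max / 3:                           # phase 2: Eqs. (6)-(7)
        half = n // 2
        RL = levy_step((half, d))                     # Levy part for the first half
        step = RL * (elite[:half] - RL * prey[:half])
        new[:half] = prey[:half] + 0.5 * np.random.rand(half, d) * step
        RB = np.random.normal(size=(n - half, d))     # Brownian part for the second half
        step = RB * (RB * elite[half:] - prey[half:])
        new[half:] = elite[half:] + 0.5 * A * step
    else:                                             # phase 3: exploitation, Eq. (9)
        RL = levy_step((n, d))
        step = RL * (RL * elite - prey)
        new = elite + 0.5 * A * step
    return new
```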

3.2. Aquila Optimizer (AO)

The AO [21] was introduced in 2021 as an optimization algorithm. It mimics the Aquila's hunting behavior and has four main stages: initialization, exploration, exploitation, and finally reaching the optimal solution. The initialization stage starts by initializing a population X of N agents, as shown in Equation (11).
$X_{i,j} = r_1 \times (UB_j - LB_j) + LB_j, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, \mathrm{Dim}$ (11)
where $UB_j$ and $LB_j$ are the upper and lower bounds of the search space, $\mathrm{Dim}$ is the number of problem dimensions, and $r_1 \in [0, 1]$ is a random value.
The next phase of the AO approach is to either explore or exploit until the best solution is identified. According to [21], there are two methods for both exploration and exploitation. The first technique uses the best agent ( X b ) and the average of all agents ( X M ) to carry out the exploration. The mathematical formulation of this method is as follows:
$X_i(t+1) = X_b(t) \times \left(1 - \dfrac{t}{T}\right) + \left(X_M(t) - X_b(t) \times \mathrm{rand}\right)$ (12)
$X_M(t) = \dfrac{1}{N} \sum_{i=1}^{N} X_i(t), \quad j = 1, 2, \ldots, \mathrm{Dim}$ (13)
where $T$ denotes the maximum number of iterations.
The second method, expressed as follows, uses $X_b$ and the Lévy flight distribution $\mathrm{Levy}(D)$ to improve the exploration ability of the solutions.
$X_i(t+1) = \mathrm{Levy}(D) \times X_b(t) + X_R(t) + (y - x) \times \mathrm{rand}$ (14)
$\mathrm{Levy}(D) = s \times \dfrac{u \times \sigma}{|\upsilon|^{1/\beta}}, \qquad \sigma = \dfrac{\sin\!\left(\frac{\pi \beta}{2}\right) \times \Gamma(1+\beta)}{\beta \times 2^{\left(\frac{\beta-1}{2}\right)} \times \Gamma\!\left(\frac{1+\beta}{2}\right)}$ (15)
where $u$ and $\upsilon$ are random numbers, while $s = 0.01$ and $\beta = 1.5$ are constants. $X_R$ is a randomly chosen agent in Equation (14). In addition, $y$ and $x$ are used to mimic the spiral shape, and they are written as:
$y = r \times \cos(\theta), \qquad x = r \times \sin(\theta)$ (16)
$r = r_1 + U \times D_1, \qquad \theta = -\omega \times D_1 + \theta_1, \qquad \theta_1 = \dfrac{3\pi}{2}$ (17)
where $\omega = 0.005$ and $U = 0.00565$ are fixed constants, $r_1$ takes values in $[0, 20]$, and $D_1$ consists of integer values from 1 to the dimension size ($\mathrm{Dim}$).
The first technique is used in [21] in the exploitation phase based on X b and X M and it is computed as:
$X_i(t+1) = \left(X_b(t) - X_M(t)\right) \times \alpha - \mathrm{rand} + \left((UB - LB) \times \mathrm{rand} + LB\right) \times \delta$ (18)
where $\alpha$ and $\delta$ are the exploitation adjustment parameters and $\mathrm{rand} \in [0, 1]$ is a random value.
The solution is updated using L e v y , X b , and the quality function Q F in the second exploitation phase. That is defined as follows:
$X_i(t+1) = \mathrm{QF} \times X_b(t) - \left(G_1 \times X(t) \times \mathrm{rand}\right) - G_2 \times \mathrm{Levy}(D) + \mathrm{rand} \times G_1$ (19)
$\mathrm{QF}(t) = t^{\frac{2 \times \mathrm{rand}() - 1}{(1 - T)^2}}$ (20)
In addition, G 1 refers to several motions used to track the optimal solution as in Equation (21).
$G_1 = 2 \times \mathrm{rand}() - 1$ (21)
where $\mathrm{rand}$ denotes a random value. $G_2$, on the other hand, denotes values that decrease linearly from 2 to 0, and it is calculated as:
$G_2 = 2 \times \left(1 - \dfrac{t}{T}\right)$ (22)
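Because only the narrowed-exploration move of Equations (14)–(17) is reused later in the proposed MPAO, a compact Python sketch of that move is given here; the vectorized Lévy helper and the normal draws for u and υ are assumptions, while the constants follow the values listed above (s = 0.01, β = 1.5, ω = 0.005, U = 0.00565).

```python
import math
import numpy as np

def levy(dim, beta=1.5, s=0.01):
    # Levy(D) of Eq. (15); u and v are drawn from normal distributions here (an assumption).
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)))
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return s * u / np.abs(v) ** (1 / beta)

def ao_narrowed_exploration(x_best, population, dim, omega=0.005, U=0.00565):
    """Narrowed exploration of the AO, Eqs. (14)-(17) (a sketch)."""
    D1 = np.arange(1, dim + 1)                        # integer sequence over the dimensions (assumed)
    r1 = np.random.uniform(0, 20)
    r = r1 + U * D1                                   # Eq. (17)
    theta = -omega * D1 + 3 * np.pi / 2
    y, x = r * np.cos(theta), r * np.sin(theta)       # Eq. (16), spiral shape
    x_rand = population[np.random.randint(len(population))]   # randomly chosen agent X_R
    return levy(dim) * x_best + x_rand + (y - x) * np.random.rand()   # Eq. (14)
```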

4. Proposed Method

This section describes the proposed MPAO method, which aims to improve the optimization behavior of the original MPA using the narrowed exploration strategy of the AO. This modification enhances the exploration behavior of the MPA, allowing it to update the search space and explore more regions of the search domain. The narrowed exploration of the AO therefore increases the searchability of the MPA, improving its ability to obtain optimal or near-optimal results and effectively helping the original MPA overcome local optima in the problem domain. The narrowed exploration of the AO is applied based on a random variable $r_x \in [0, 1]$: if $r_x$ is greater than 0.25, the narrowed exploration (Equation (14)) is applied; otherwise, the operators of the MPA (Equation (7)) are used.
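A minimal sketch of this decision rule is shown below; it treats the AO move of Equation (14) as a caller-supplied function and the adaptive parameter A of Equation (8) as an input, so it only illustrates how the two operators are interleaved, not a full MPAO implementation.

```python
import numpy as np

def mpao_exploration_update(prey_i, elite_i, cf, ao_move):
    """Hybrid exploration rule of the MPAO (a sketch): with probability 0.75
    apply AO's narrowed exploration (Eq. (14)), otherwise the MPA operator of Eq. (7).

    ao_move: zero-argument callable returning the AO narrowed-exploration position
             (for instance, the ao_narrowed_exploration sketch given in Section 3.2).
    cf:      the adaptive parameter A of Eq. (8).
    """
    if np.random.rand() > 0.25:
        return ao_move()                             # AO narrowed exploration, Eq. (14)
    rb = np.random.normal(size=prey_i.shape)         # Brownian random vector R_B
    step = rb * (rb * elite_i - prey_i)              # stepsize of Eq. (7)
    return elite_i + 0.5 * cf * step
```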
The optimization process of the MPAO starts by setting the values of all parameters and creating the initial population in the search space. Then, the fitness function $f(x)$ in Equation (23) checks the quality of the solutions; if the current solution is better than the old one, the MPAO saves it as the best solution.
$f(x) = \gamma\, C_x + (1 - \gamma)\left(\dfrac{s}{S}\right)$ (23)
where $C_x$ denotes the error of the classification phase (in this study, K-NN is used as the classifier). The second part of the equation defines the selected-feature ratio: $s$ denotes the number of selected features and $S$ denotes the total number of features. $\gamma$ is a value in $[0, 1]$ that weights the two terms.
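A minimal sketch of this wrapper fitness, under stated assumptions (a scikit-learn K-NN with k = 5, a simple hold-out split, and a fixed weighting γ; none of these settings are given in the text above), could look like the following.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(mask, X, y, gamma=0.99, k=5, seed=0):
    """Wrapper fitness of Eq. (23): gamma * classification error + (1 - gamma) * s / S."""
    selected = np.flatnonzero(mask)                  # indices of the selected features
    if selected.size == 0:
        return 1.0                                   # guard: an empty subset is the worst case
    Xtr, Xte, ytr, yte = train_test_split(
        X[:, selected], y, test_size=0.2, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=k).fit(Xtr, ytr)
    error = 1.0 - knn.score(Xte, yte)                # C_x, the classification error
    ratio = selected.size / X.shape[1]               # s / S, the selected-feature ratio
    return gamma * error + (1.0 - gamma) * ratio
```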
After that, the optimization begins by discovering the search space and evaluating the solutions using the fitness function to determine the initial optimal values.
In the second third of the optimization process (the intermediate phase), the MPAO decides whether to update the solutions using the MPA or the AO operator based on a random value. This step improves the exploration phase and adds more diversity to the search space. Finally, all values obtained by the fitness function are checked, and the best one is selected and saved. The above steps can be summarized as follows:
  • Declare the experiment variables and their values.
  • Generate the X population randomly with a specific size and dimension.
  • Start the main loop of the MPAO.
  • Apply the fitness function for all solutions.
  • Return the best value.
The above steps are iterated until reaching the stop condition. The structure of the MPAO is presented in Figure 1.

5. Experiments and Discussion

This section evaluates the proposed MPAO using three experiments: solving global optimization problems, selecting the essential features, and solving real engineering problems. The proposed method is compared with nine optimization methods: PSO [40], GA [41], AO [21], MFO [42], MPA [20], SSA [43], GOA [5], slime mold algorithm (SMA) [44], and whale optimization algorithm (WOA) [45]. Table 1 lists the parameter settings for all of them.
In the experiments, several performance measures are used to evaluate the proposed method, namely the average of the fitness function together with its maximum (Max), minimum (Min), and standard deviation (Std) values, as defined in Equations (24)–(26). In addition, the classification accuracy is computed as in Equation (27).
$\mathrm{Max} = \max_{1 \le i \le N} f_{b_i}$ (24)
$\mathrm{Min} = \min_{1 \le i \le N} f_{b_i}$ (25)
$\mathrm{Std} = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} \left| f_i - \mu \right|^2}$ (26)
where $f_i$ and $\mu$ denote the value and the mean of the objective function, respectively, and $N$ denotes the size of the sample.
$\mathrm{Acc} = \dfrac{TP + TN}{TP + FN + FP + TN}$ (27)
where T N and T P denote true negative and positive results, respectively. F N and F P denote false negative and positive results.
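As a small worked example of Equations (24)–(27), the snippet below computes the fitness statistics from a set of per-run best fitness values and the accuracy from confusion-matrix counts; all numbers are made up for illustration.

```python
import numpy as np

best_fitness = np.array([0.123, 0.131, 0.119, 0.127, 0.125])  # best value of each independent run (illustrative)
print("Avg:", best_fitness.mean())
print("Max:", best_fitness.max())      # Eq. (24)
print("Min:", best_fitness.min())      # Eq. (25)
print("Std:", best_fitness.std())      # Eq. (26), population standard deviation

tp, tn, fp, fn = 45, 40, 5, 10         # illustrative confusion-matrix counts
print("Acc:", (tp + tn) / (tp + fn + fp + tn))   # Eq. (27)
```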

5.1. Experiment 1: Global Optimization

This experiment discusses the experimental results on the CEC2019 benchmark [46] by comparing the MPAO to several state-of-the-art competitors, including PSO, GA, AO, MFO, SMA, MPA, SSA, GOA, and WOA. Table 2, Table 3, Table 4 and Table 5 report the average fitness, Std, Min, and Max results over the ten test functions of the CEC 2019 benchmark. In all tables, the boldface refers to the best value.
Concerning the average of the function fitness, which is computed for all the counterparts, it is clear from Table 2 that the proposed MPAO is superior in six out of 10 functions (F2–F5 and F7–F8), followed by the MPA, which is better in three functions (F3, F6, and F10). On the other hand, the PSO, GA, MFO, and WOA show the best value in only one out of 10 functions each. The SSA and GOA failed to achieve the best value on any of the functions.
The Std values, reported in Table 3, show the stability of the results obtained by the competitors over the test functions. The proposed MPAO shows lower Std values in five out of ten functions (F2, F3, F5, F7, and F10), which reflects the stability of the algorithm on most test functions. The MFO comes in second rank, being stable on only two functions (F4 and F9), whereas the other competitors did not show the best stability on any function.
Table 4 presents the Min values of fitness computed for the counterparts over the test functions. The MPAO shows lower values in five functions (F2–F5 and F9), followed by PSO and MPA, each of which shows lower values in only two functions. MFO and SMA each obtain the lowest value in only one function, namely F8 and F1, respectively.
Table 5 reports the Max values computed from each counterpart's fitness values on the test functions. The MPAO reports the best Max value in the majority of functions (F2–F8), followed by the MPA, which shows the best maximal value over two functions only (F9–F10). The rest of the competing algorithms do not provide the best maximum value in any of the test functions.

5.2. Experiment 2: Feature Selection

This experiment evaluates the proposed method on a set of UCI benchmark datasets. These datasets are collected from distinct fields such as biology, electromagnetics, games, physics, politics, and chemistry. Furthermore, each benchmark has a different number of instances, features, and categories. The description of each benchmark is provided in Table 6.
In this subsection, we demonstrate and discuss the experimental results of the comparisons conducted to solve the feature selection problem. These comparisons involved the proposed MPAO, PSO [40], GA [41], AO [21], MFO [42], SMA [44], MPA [20], SSA [43], GOA [5], and WOA [45], using the previously described evaluation metrics. Table 7 presents the average of the fitness function calculated for all compared counterparts over the 15 datasets. From the table, it is clear that the proposed MPAO was superior in 13 out of 15 datasets, whereas the PSO demonstrated the best values in three cases, followed by the MPA and SSA, each of which was better in one dataset. These results indicate that the MPAO was accurate and superior in terms of the average fitness value. In all tables, the boldface refers to the best value. Continuing with the fitness values, we can analyze the maximum (Max) and minimum (Min) values of the fitness function; this investigation allows us to determine when the algorithms reach their worst and best values.
Table 8 demonstrates the Min values of fitness obtained by the counterparts over all the datasets. The MPAO, PSO, and MPA showed lower values in most datasets, 13, 11, and 8 of the 15 datasets, respectively, as these algorithms could reach the optimal values.
On the other side, Table 9 demonstrates the Max values of the fitness function, which were obtained during the conducted experiments using the counterparts across the 15 datasets of the UCI. The MPAO obtained the Max values over most cases (10 out of 15 datasets), followed by the SSA (only in six cases), MPA (in five cases) and WOA (in four cases). The other algorithms, including GA, AO, MFO, and SMA, could not provide any maximal values over all the experiments.
The results’ stability was calculated for each algorithm and analyzed based on the standard deviation (Std) measure. The Std was computed for the independent experiments over each benchmark by setting the fitness as the input value. In this regard, the lower Std refers to better stability for the results, which means no substantial changes over the experiments. Table 10 demonstrates the Std values, where the MPAO realized the lower value over seven out of 15 datasets. The PSO showed the best values in three datasets, the MFO in two, and the GA and MPA in only one. The other counterparts did not realize the best results over any datasets.
The numbers of features selected across all 15 datasets are reported in Table 11. The recorded results in that table are evidence of the efficacy and superiority of the MPAO algorithm: it obtained the best results in seven out of 15 datasets, selecting the smallest number of features while still achieving high performance. It was followed by the SMA and the WOA, which selected the fewest features in only three and two datasets, respectively, whereas the remaining algorithms were out of the competition.
The computational time consumed by each algorithm is reported in Table 12. In this context, a smaller value is better; nevertheless, this is not evidence of better performance, as the quickest algorithm is not always the most accurate one. From the times reported in Table 12, the SMA shows the lowest computation time over 11 datasets, followed by the MFO with two datasets; the PSO and GOA show the best time in only one dataset each. Accordingly, the proposed MPAO shows a higher computation time because of the operator hybridization; however, its performance is better than that of the remaining algorithms.
Furthermore, as was previously illustrated, the accuracy assesses the classification quality according to the values of true positives, false positives, true negatives, and false negatives. Herein, the values obtained by this measure are expected to be closer to one, which indicates a higher accuracy. Table 13 reports the values of the compared counterparts. The proposed MPAO achieved better accuracy concerning the classification of the selected feature. It showed the best accuracy values in 13 of the 15 datasets, followed by the PSO in two, whereas MPA and SSA obtained the best accuracy in only one dataset. The remaining algorithms showed acceptable accuracy but did not outperform the proposed algorithm.
For further analysis, Table 14 shows the results of the Wilcoxon rank sum test as a statistical test. This measure tests if there is a significant difference between the proposed method and the other methods at a level equal to 0.05. From Table 14, we can notice that there are significant differences between the MPAO and AO, MFO, SMA, SSA, GOA, and WOA in most datasets, and there are significant differences between the MPAO and PSO, GA, and MPA in 46% of the datasets. These results show the superiority of the proposed MPAO.

5.3. Experiment 3: Solving Different Real Engineering Problems

This experiment evaluates the proposed MPAO on four well-known engineering problems: (a) tension/compression spring, (b) rolling element bearing, (c) speed reducer, and (d) gear train design. These problems were handled by the proposed MPAO and by several meta-heuristic methods previously reported in the literature. The following subsections evaluate the effectiveness of the proposed MPAO algorithm by comparing its results with those of the other methods on these optimization problems. In all tables, the boldface refers to the best value.

5.3.1. Tension/Compression Spring Problem

Optimizing the tension/compression spring is part of the multidisciplinary engineering optimization problems; the aim is to minimize the spring weight. Solving it requires optimizing three variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N).
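For reference, the formulation below is the standard statement of this problem in the optimization literature; it is given as an assumed reconstruction, not as a reproduction of the exact formulation used in this paper.

```latex
% Tension/compression spring design (standard literature formulation; assumed, not
% reproduced from this paper). Design vector: x = (x1, x2, x3) = (d, D, N).
\begin{aligned}
\min_{\mathbf{x}}\;\; & f(\mathbf{x}) = (x_3 + 2)\, x_2\, x_1^{2} \\
\text{s.t.}\;\; & g_1(\mathbf{x}) = 1 - \frac{x_2^{3}\, x_3}{71785\, x_1^{4}} \le 0
      \quad \text{(minimum deflection)}, \\
 & \text{further constraints on shear stress, surge frequency, and outer diameter}, \\
 & 0.05 \le x_1 \le 2.00, \quad 0.25 \le x_2 \le 1.30, \quad 2 \le x_3 \le 15 .
\end{aligned}
```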
This problem has been handled intensively by various optimization algorithms, such as MVO [47], GSA [47], WOA [48], GWO [49], MFO [42], SSA [50], and RO [9]. The comparison results for the tension/compression spring problem between the proposed MPAO algorithm and the other methods are listed in Table 15. It can be observed from Table 15 that the proposed MPAO obtained the minimum cost value of 0.012665 and was ranked first, followed by the GWO algorithm with 0.012666, which was ranked second. Both SSA and WOA obtained a similar cost value of 0.0126763, followed by the MFO and RO algorithms with slightly higher values. On the contrary, GSA and MVO obtained the highest cost values, with 0.0127022 for GSA and 0.01279 for MVO, and were ranked last. For the tension/compression spring problem, the cost value of the proposed MPAO algorithm is therefore better than those of the other algorithms.

5.3.2. Rolling Element Bearing Problem

Another basic component of the multidisciplinary engineering design problems is the rolling element bearing problem, which aims to maximize the dynamic load-carrying capacity of a rolling element bearing. Solving this problem requires ten decision variables, and the problem restrictions are imposed based on manufacturing conditions and kinematics. A comparison between the proposed MPAO and the CHHO [51], HHO [51], SCA [52], MFO [42], MVO [53], TLBO [9], and PVS [54] algorithms for solving this problem is illustrated in Table 16. Concerning the optimal costs displayed in Table 16, the proposed MPAO shows the best value for this problem with 85539.192. The MVO algorithm ranked second with a cost value of 84491.266, followed by the HHO, MFO, CHHO, and SCA. On the other hand, the TLBO and PVS algorithms show the lowest values for this problem, with 81859.74 for TLBO and 81859.741 for PVS. These results indicate the superiority of the proposed MPAO method in solving the rolling element bearing design problem.

5.3.3. Speed Reducer Problem

The speed reducer is also an important engineering problem. The aim is to minimize the speed reducer weight subject to the following limitations: the bending stress of the gear teeth, the surface stress, the stresses in the shafts, and the transverse deflections of the shafts. The speed reducer problem has several variables that need to be optimized, as shown in Figure 2. These variables have been addressed extensively through different bio-inspired optimization methods such as CHHO [51], HHO [51], MDE [55], PSO-DE [56], PSO [57], MBA [58], SSA [57], and ISCA [57]. Table 17 displays the assessment of the speed reducer design problem for the proposed MPAO and the other methods. From the results shown in Table 17, it can be observed that the proposed MPAO algorithm is competitive, as it obtained the optimal weight compared to the other methods with 2994.4725. In addition, CHHO can be considered equally competitive with the proposed MPAO algorithm, as it ranked second with 2994.4737, followed by MBA with 2994.4824, PSO-DE with 2996.3481, MDE with 2996.3566, and ISCA with 2997.1295. On the other hand, the SSA, PSO, and HHO came in the last ranks as they obtained the highest cost values. These results indicate that the proposed MPAO algorithm outperforms the other methods in obtaining the optimal cost value for the speed reducer problem.

5.3.4. Gear Train Design Problem

The gear train is another kind of engineering optimization problem, which has four design variables, as shown in Figure 3. This problem aims to minimize the gear ratio cost, a scalar value; therefore, the decision parameters consist of the numbers of teeth on the four gears.
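For reference, the standard literature formulation of this problem minimizes the squared error between the obtained gear ratio and the required ratio 1/6.931; the statement below is an assumed reconstruction, not a reproduction of the exact formulation used in this paper.

```latex
% Gear train design (standard literature formulation; assumed, not reproduced from
% this paper). x1..x4 are the integer numbers of teeth on the four gears.
\min_{\mathbf{x}}\; f(\mathbf{x}) \;=\;
  \left( \frac{1}{6.931} \;-\; \frac{x_2\, x_3}{x_1\, x_4} \right)^{2},
\qquad x_i \in \{12, 13, \dots, 60\}, \; i = 1, \dots, 4 .
```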
To test the effect of the proposed MPAO algorithm in handling the gear train design problem, we compared it with six optimization methods, including CHHO [51], HHO [51], IMPFA [59], GeneAS [60], Kannan and Kramer [60], and Sandgren [60]. The test results of the proposed MPAO and the other methods are listed in Table 18. The proposed MPAO algorithm obtained the best optimal value with 2.701E-12, followed by the IMPFA algorithm with 1.392E-10. Kannan and Kramer, GeneAS, and CHHO provided close results, with 0.144121 for Kannan and Kramer, 0.144242 for GeneAS, and 0.1434 for CHHO. At the same time, HHO and Sandgren do not perform well, as they provide the worst (largest) objective values. The comparative results reveal that the proposed MPAO method is the most competent in handling the gear train design problem.
To sum up, the outcomes of the previous experiments and the statistical analyses show that the proposed algorithm outperformed all the others in obtaining the optimal outcomes and demonstrated its efficiency in most cases in solving FS, global optimization, and engineering problems. Evaluating the efficacy of the MPAO relied on different performance metrics, including the Min, Max, and Std of the fitness value, besides the classification accuracy, the number of selected features, the computation time, and the Wilcoxon rank sum test. The MPAO showed some advantages, such as fast convergence, maintaining the search space with good exploration behavior, and escaping from local optima in most cases. However, it showed a limitation in requiring a higher computation time, which needs to be improved in future studies. In general, the better performance of the proposed MPAO can be attributed to the improved exploration and exploitation capabilities, along with the utilization of the AO parameters.

6. Conclusions

This study suggested an efficient optimization technique based on an improved marine predators algorithm (MPA) for handling global optimization, feature selection (FS), and real-world engineering problems. The proposed algorithm used the narrowed exploration strategy of the Aquila optimizer (AO) to update the search space and explore more regions in the search domain, thereby enhancing the exploration behavior of the MPA. The narrowed exploration of the AO therefore increased the searchability of the MPA, improving its ability to obtain optimal or near-optimal results and helping the original MPA overcome local optima in the search domain effectively. The MPAO was evaluated on three kinds of problems: global optimization, feature selection, and real engineering cases. At first, the MPAO was evaluated on ten benchmark global optimization functions and outperformed the other algorithms in 60% of the functions. Concerning the FS experiment, a set of UCI datasets and four evaluation criteria were considered to prove the effectiveness of the suggested MPAO compared to nine metaheuristic optimization algorithms. Moreover, four engineering optimization problems were also considered to demonstrate the superiority of the suggested MPAO. The findings showed that the performance measures proved the superiority of the suggested MPAO compared to the other methods in terms of the Max, Min, and Std of the fitness function and the accuracy over all considered FS problems; it outperformed the compared methods in 87% of the datasets in terms of classification accuracy. The MPAO also provided better results than the compared methods regarding the real engineering problems. In the future, the proposed method will be used to solve more real-world problems, such as wind speed estimation, business optimization issues, and large-scale optimization problems.

Author Contributions

Conceptualization, A.A.E.; Data curation, F.H.I., R.M.G. and M.A.G.; Formal analysis, A.A.E., R.M.G. and M.A.G.; Investigation, A.A.E., F.H.I., R.M.G. and M.A.G.; Methodology, A.A.E., F.H.I., R.M.G. and M.A.G.; Resources, A.A.E.; Software, A.A.E.; Validation, A.A.E., R.M.G. and M.A.G.; Visualization, A.A.E., F.H.I. and M.A.G.; Writing—original draft, A.A.E., F.H.I., R.M.G. and M.A.G.; Writing—review and editing, A.A.E., F.H.I., R.M.G. and M.A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kohavi, R.; John, G.H. Wrappers for feature subset selection. Artif. Intell. 1997, 97, 273–324. [Google Scholar] [CrossRef] [Green Version]
  2. Liu, H.; Motoda, H. Feature Extraction, Construction and Selection: A Data Mining Perspective; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1998; Volume 453. [Google Scholar]
  3. Luukka, P. Feature selection using fuzzy entropy measures with similarity classifier. Expert Syst. Appl. 2011, 38, 4600–4607. [Google Scholar] [CrossRef]
  4. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  5. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  6. Hussien, A.G.; Amin, M. A self-adaptive Harris Hawks optimization algorithm with opposition-based learning and chaotic local search strategy for global optimization and feature selection. Int. J. Mach. Learn. Cybern. 2022, 13, 309–336. [Google Scholar] [CrossRef]
  7. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  8. Mostafa, R.R.; Hussien, A.G.; Khan, M.A.; Kadry, S.; Hashim, F.A. Enhanced coot optimization algorithm for dimensionality reduction. In Proceedings of the 2022 Fifth International Conference of Women in Data Science at Prince Sultan University (WiDS PSU), Riyadh, Saudi Arabia, 28–29 March 2022; pp. 43–48. [Google Scholar]
  9. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  10. Ramezani, F.; Lotfi, S. Social-based algorithm (SBA). Appl. Soft Comput. 2013, 13, 2837–2856. [Google Scholar] [CrossRef]
  11. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar]
  12. Javidy, B.; Hatamlou, A.; Mirjalili, S. Ions motion algorithm for solving optimization problems. Appl. Soft Comput. 2015, 32, 72–79. [Google Scholar] [CrossRef]
  13. Abualigah, L.; Elaziz, M.A.; Hussien, A.G.; Alsalibi, B.; Jalali, S.M.J.; Gandomi, A.H. Lightning search algorithm: A comprehensive survey. Appl. Intell. 2021, 51, 2353–2376. [Google Scholar] [CrossRef]
  14. Doğan, B.; Ölmez, T. A new metaheuristic for numerical function optimization: Vortex Search algorithm. Inf. Sci. 2015, 293, 125–145. [Google Scholar] [CrossRef]
  15. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  16. Rocca, P.; Oliveri, G.; Massa, A. Differential evolution as applied to electromagnetics. IEEE Antennas Propag. Mag. 2011, 53, 38–49. [Google Scholar] [CrossRef]
  17. Passino, K.M. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 2002, 22, 52–67. [Google Scholar]
  18. Moscato, P.; Mendes, A.; Berretta, R. Benchmarking a memetic algorithm for ordering microarray data. Biosystems 2007, 88, 56–75. [Google Scholar] [CrossRef] [PubMed]
  19. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  20. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  21. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  22. Aribowo, W.; Supari, B.S.; Suprianto, B. Optimization of PID parameters for controlling DC motor based on the aquila optimizer algorithm. Int. J. Power Electron. Drive Syst. (IJPEDS) 2022, 13, 808–2814. [Google Scholar] [CrossRef]
  23. Hussan, M.R.; Sarwar, M.I.; Sarwar, A.; Tariq, M.; Ahmad, S.; Shah Noor Mohamed, A.; Khan, I.A.; Ali Khan, M.M. Aquila Optimization Based Harmonic Elimination in a Modified H-Bridge Inverter. Sustainability 2022, 14, 929. [Google Scholar] [CrossRef]
  24. Khaire, U.M.; Dhanalakshmi, R.; Balakrishnan, K. Hybrid Marine Predator Algorithm with Simulated Annealing for Feature Selection. In Machine Learning and Deep Learning in Medical Data Analytics and Healthcare Applications; CRC Press: Boca Raton, FL, USA, 2022; pp. 131–150. [Google Scholar]
  25. Alrasheedi, A.F.; Alnowibet, K.A.; Saxena, A.; Sallam, K.M.; Mohamed, A.W. Chaos Embed Marine Predator (CMPA) Algorithm for Feature Selection. Mathematics 2022, 10, 1411. [Google Scholar] [CrossRef]
  26. Balakrishnan, K.; Dhanalakshmi, R.; Mahadeo Khaire, U. Analysing stable feature selection through an augmented marine predator algorithm based on opposition-based learning. Expert Syst. 2022, 39, e12816. [Google Scholar] [CrossRef]
  27. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar]
  28. Abd Elaziz, M.; Ewees, A.A.; Yousri, D.; Abualigah, L.; Al-qaness, M.A. Modified marine predators algorithm for feature selection: Case study metabolomics. Knowl. Inf. Syst. 2022, 64, 261–287. [Google Scholar] [CrossRef]
  29. Jia, H.; Sun, K.; Li, Y.; Cao, N. Improved marine predators algorithm for feature selection and SVM optimization. KSII Trans. Internet Inf. Syst. (TIIS) 2022, 16, 1128–1145. [Google Scholar]
  30. Hu, G.; Zhu, X.; Wang, X.; Wei, G. Multi-strategy boosted marine predators algorithm for optimizing approximate developable surface. Knowl.-Based Syst. 2022, 254, 109615. [Google Scholar] [CrossRef]
  31. Hu, G.; Zhu, X.; Wei, G.; Chang, C.T. An improved marine predators algorithm for shape optimization of developable Ball surfaces. Eng. Appl. Artif. Intell. 2021, 105, 104417. [Google Scholar] [CrossRef]
  32. Han, M.; Du, Z.; Zhu, H.; Li, Y.; Yuan, Q.; Zhu, H. Golden-Sine dynamic marine predator algorithm for addressing engineering design optimization. Expert Syst. Appl. 2022, 210, 118460. [Google Scholar] [CrossRef]
  33. Al-qaness, M.A.; Ewees, A.A.; Fan, H.; Abualigah, L.; Abd Elaziz, M. Boosted ANFIS model using augmented marine predator algorithm with mutation operators for wind power forecasting. Appl. Energy 2022, 314, 118851. [Google Scholar] [CrossRef]
  34. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Hybrid marine predators algorithm for image segmentation: Analysis and validations. Artif. Intell. Rev. 2022, 55, 3315–3367. [Google Scholar] [CrossRef]
  35. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Abualigah, L. Binary Aquila Optimizer for Selecting Effective Features from Medical Data: A COVID-19 Case Study. Mathematics 2022, 10, 1929. [Google Scholar] [CrossRef]
  36. Abd Elaziz, M.; Dahou, A.; Alsaleh, N.A.; Elsheikh, A.H.; Saba, A.I.; Ahmadein, M. Boosting COVID-19 image classification using MobileNetV3 and aquila optimizer algorithm. Entropy 2021, 23, 1383. [Google Scholar] [CrossRef]
  37. Fatani, A.; Dahou, A.; Al-Qaness, M.A.; Lu, S.; Elaziz, M.A. Advanced feature extraction and selection approach using deep learning and Aquila optimizer for IoT intrusion detection system. Sensors 2021, 22, 140. [Google Scholar] [CrossRef]
  38. Zhang, Y.J.; Zhao, J.; Gao, Z.M. Hybridized improvement of the chaotic Harris Hawk optimization algorithm and Aquila Optimizer. In Proceedings of the International Conference on Electronic Information Engineering and Computer Communication (EIECC 2021), SPIE, Nanchang, China, 17–19 December 2021; Volume 12172, pp. 327–332. [Google Scholar]
  39. Wang, S.; Jia, H.; Abualigah, L.; Liu, Q.; Zheng, R. An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021, 9, 1551. [Google Scholar] [CrossRef]
  40. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Western, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  41. Mitchell, M. An Introduction to Genetic Algorithms; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  42. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  43. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  44. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  45. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  46. Price, K.; Awad, N.; Ali, M.; Suganthan, P. The 100-Digit Challenge: Problem Definitions and Evaluation Criteria for the 100-Digit Challenge Special Session and Competition on Single Objective Numerical Optimization; Nanyang Technological University: Singapore, 2018. [Google Scholar]
  47. Wang, Y.; Li, H.X.; Huang, T.; Li, L. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Appl. Soft Comput. 2014, 18, 232–247. [Google Scholar] [CrossRef]
  48. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  49. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  50. Xing, Z.; Jia, H. Multilevel color image segmentation based on GLCM and improved salp swarm algorithm. IEEE Access 2019, 7, 37672–37690. [Google Scholar] [CrossRef]
  51. Dhawale, D.; Kamboj, V.K.; Anand, P. An improved chaotic harris hawks optimizer for solving numerical and engineering optimization problems. Eng. Comput. 2021, 1–46. [Google Scholar] [CrossRef]
  52. Kamboj, V.K.; Bhadoria, A.; Gupta, N. A novel hybrid GWO-PS algorithm for standard benchmark optimization problems. INAE Lett. 2018, 3, 217–241. [Google Scholar] [CrossRef]
  53. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  54. Savsani, P.; Savsani, V. Passing vehicle search (PVS): A novel metaheuristic algorithm. Appl. Math. Model. 2016, 40, 3951–3978. [Google Scholar] [CrossRef]
  55. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  56. Niu, B.; Li, L. A novel PSO-DE-based hybrid algorithm for global optimization. International Conference on Intelligent Computing; Springer: Berlin/Heidelberg, Germany, 2008; pp. 156–163. [Google Scholar]
  57. Gupta, S.; Deep, K.; Moayedi, H.; Foong, L.K.; Assad, A. Sine cosine grey wolf optimizer to solve engineering design problems. Eng. Comput. 2021, 37, 3123–3149. [Google Scholar] [CrossRef]
  58. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  59. Tang, C.; Zhou, Y.; Luo, Q.; Tang, Z. An enhanced pathfinder algorithm for engineering optimization problems. Eng. Comput. 2022, 38, 1481–1503. [Google Scholar] [CrossRef]
  60. Deb, K.; Goyal, M. A combined genetic adaptive search (GeneAS) for engineering design. Comput. Sci. Inform. 1996, 26, 30–45. [Google Scholar]
Figure 1. Proposed MPAO architecture.
Figure 2. Speed reducer problem.
Figure 3. Gear train design problem.
Table 1. Parameter settings.
Algorithm | Parameter values
GA | γ = 0.2, pm = 0.3, pc = 0.8, β = 8, mu = 0.02
PSO | w = 1, C1 and C2 = 1, wDamp = 0.990
WOA | b = 1.0, a ∈ [0, 2], l ∈ [−1, 1]
SSA | C2 ∈ [0, 1], C3 ∈ [0, 1]
MPA | P = 0.50, β = 1.50, FADs = 0.20
AO | δ = 0.1, α = 0.1
SMA | z = 0.030
GOA | cmax = 1.00, cmin = 0.000040
MFO | a ∈ [−2, −1], b = 1
MPAO | P = 0.50, FADs = 0.20, β = 1.50, α = 0.1, δ = 0.1
Table 2. Average of the fitness function.
Fitness | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
F1 9.8 × 10 7 5.4 × 10 12 6.4 × 10 10 7 . 2 × 10 4 3.9 × 10 10 1.3 × 10 8 2.0 × 10 8 3.2 × 10 10 1.0 × 10 12 2.4 × 10 11
F217.342917572.624.053917.425317.376217.349317.343817.75064835.6417.5476
F312.702412.702412.702412.702512.702412.702912.702412.702912.704312.7024
F437.3282655.65103.2483888.51111.35443.32366.65286.6916347.842244.70
F51.14932.48561.17072.36921.39421.52441.26881.19313.19502.3165
F65.399511.39596.963711.64606.904710.02835.17937.930310.535510.3370
F7128.129369.736230.85678.72455.91413.32171.96414.07890.94812.84
F84.65666.09685.48276.21235.38375.73924.75335.75226.96506.1076
F93.308760.81983.3120262.6033.73253.12313.32683.8415906.81177.21
F1020.01220.53020.17420.62620.14320.49918.80020.09620.52220.499
Table 3. Average of the standard deviation of the fitness function.
Fitness | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
F1 2.5 × 10 7 5.5 × 10 12 5.9 × 10 10 1 . 8 × 10 8 5.0 × 10 10 5.0 × 10 8 2.1 × 10 8 4.1 × 10 10 1.1 × 10 12 2.2 × 10 11
F20.00004545.3220.76970.04560.02530.00290.00361.29101814.26250.1412
F3 4 . 3 × 10 12 4.2 × 10 7 5.3 × 10 7 4.9 × 10 5 3.8 × 10 5 7.0 × 10 4 3.4 × 10 9 1.0 × 10 3 1.1 × 10 3 1.5 × 10 5
F413.8121755.84866.8091123.7856.37414.78813.07938.2643601.571079.32
F50.06810.63140.09270.37480.20570.18920.08250.08720.48210.4346
F60.79660.90991.12230.74441.20101.30711.16281.37401.06111.0808
F7121.125304.895186.871209.602295.891247.995131.941266.407219.64145.75
F8 | 0.5416 | 1.0492 | 0.6554 | 0.4344 | 0.9944 | 0.4929 | 0.4285 | 0.5523 | 0.2950 | 0.7153
F90.737165.1220.5731173.3390.66700.49460.38840.9028540.447123.834
F10 | 0.0138 | 0.1439 | 0.0570 | 0.1041 | 0.1351 | 0.1073 | 3.4481 | 0.1349 | 0.1660 | 0.1516
Table 4. Minimum values of the fitness function.
Fitness | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
F1 | 5.9 × 10^7 | 3.5 × 10^11 | 4.2 × 10^9 | 5.2 × 10^4 | 2.4 × 10^9 | 5.1 × 10^4 | 2.3 × 10^7 | 5.9 × 10^9 | 3.1 × 10^11 | 2.3 × 10^10
F2 | 17.343 | 10314.099 | 17.343 | 17.360 | 17.345 | 17.345 | 17.343 | 17.344 | 2150.473 | 17.354
F3 | 12.702 | 12.702 | 12.703 | 12.703 | 12.703 | 12.703 | 12.703 | 12.703 | 12.703 | 12.703
F416.221501.03017.1831846.9957.76824.48944.51426.6201674.01803.47
F5 | 1.0523 | 1.6612 | 1.0569 | 1.7537 | 1.1214 | 1.2652 | 1.1761 | 1.0640 | 2.4857 | 1.6620
F6 | 3.6330 | 9.6109 | 3.8354 | 9.9305 | 4.4174 | 7.0427 | 3.3423 | 5.8103 | 8.8478 | 8.6721
F7 | −78.041 | −121.72 | −57.655 | 306.11 | −59.043 | 158.51 | −50.040 | −76.690 | 460.78 | 634.57
F8 | 3.7175 | 3.3291 | 4.3536 | 5.6299 | 2.5838 | 5.0209 | 4.1234 | 4.6728 | 6.5461 | 4.2988
F92.54743.33572.567562.9402.73962.59272.86402.9346284.7715.946
F10 | 19.996 | 20.159 | 20.048 | 20.402 | 20.002 | 20.310 | 6.7125 | 19.978 | 20.163 | 20.260
Table 5. Maximum values of the fitness function.
Fitness | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
F1 | 1.4 × 10^8 | 1.9 × 10^13 | 1.9 × 10^11 | 1.2 × 10^5 | 2.1 × 10^11 | 2.1 × 10^9 | 7.3 × 10^8 | 1.7 × 10^11 | 5.3 × 10^12 | 9.3 × 10^11
F2 | 17.343 | 25684.6 | 101.158 | 17.530 | 17.451 | 17.355 | 17.357 | 22.572 | 8464.21 | 17.776
F3 | 12.702 | 12.702 | 12.702 | 12.703 | 12.703 | 12.705 | 12.702 | 12.706 | 12.706 | 12.702
F458.7297440.74308.295653.4264.8774.43191.710168.85116776.64751.8
F5 | 1.2820 | 4.4236 | 1.3790 | 3.1667 | 2.0734 | 1.9849 | 1.4228 | 1.3566 | 4.0174 | 3.2375
F6 | 6.5206 | 13.6429 | 8.5646 | 12.6687 | 8.4957 | 11.8549 | 7.6363 | 10.4369 | 12.5073 | 12.3857
F7 | 307.33 | 866.88 | 596.19 | 1062.45 | 1098.39 | 908.58 | 456.60 | 893.19 | 1371.84 | 1067.62
F8 | 5.5873 | 7.1162 | 6.8344 | 6.9835 | 6.5186 | 6.7966 | 5.6520 | 6.5570 | 7.5365 | 7.4462
F95.1445197.7774.5874618.105.39254.34624.20656.30942167.1462.86
F10 | 20.046 | 20.703 | 20.257 | 20.835 | 20.489 | 20.696 | 20.039 | 20.403 | 20.746 | 20.784
Table 6. Description of the UCI datasets.
Dataset | Instances | Features | Classes
ionosphere | 351 | 34 | 2
breastcancer | 699 | 9 | 2
glass | 214 | 9 | 7
sonar | 208 | 60 | 2
lymphography | 148 | 18 | 2
waveform | 5000 | 40 | 3
clean1data | 476 | 166 | 2
SPECT | 267 | 22 | 2
ecoli | 336 | 7 | 8
CongressEW | 435 | 16 | 2
Exactly2 | 1000 | 13 | 2
M-of-n | 1000 | 13 | 2
Vote | 300 | 16 | 2
krvskp | 3196 | 36 | 2
heart | 270 | 13 | 2
Table 7. Results of the average of the fitness value.
Dataset | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
ionosphere | 0.1233 | 0.1452 | 0.1918 | 0.2465 | 0.2657 | 0.4117 | 0.1237 | 0.1664 | 0.2025 | 0.1569
breastcancer | 0.1088 | 0.1309 | 0.1587 | 0.2234 | 0.2536 | 0.4182 | 0.1164 | 0.1620 | 0.2037 | 0.1800
glass | 0.1517 | 0.1662 | 0.1664 | 0.1926 | 0.2085 | 0.2801 | 0.1666 | 0.9691 | 1.0155 | 0.9730
sonar | 0.0433 | 0.0790 | 0.1203 | 0.2823 | 0.2829 | 0.4832 | 0.0568 | 0.2379 | 0.2759 | 0.2759
lymphography | 0.2656 | 0.3040 | 0.3371 | 0.4227 | 0.4687 | 0.6394 | 0.3051 | 0.3495 | 0.3888 | 0.3888
waveform | 0.6363 | 0.6269 | 0.6426 | 0.6660 | 0.6685 | 0.9424 | 0.6345 | 0.6590 | 0.6786 | 0.6701
clean1data | 0.1585 | 0.1916 | 0.2070 | 0.2804 | 0.2505 | 0.4712 | 0.1723 | 0.2296 | 0.2630 | 0.2616
SPECT | 0.3095 | 0.3361 | 0.3500 | 0.3978 | 0.4210 | 0.5200 | 0.3372 | 0.3152 | 0.3377 | 0.3587
ecoli | 0.2452 | 0.2488 | 0.2483 | 0.2499 | 0.3393 | 0.4143 | 0.2476 | 1.6245 | 1.5579 | 1.6245
CongressEW | 0.0913 | 0.1185 | 0.1417 | 0.2052 | 0.2785 | 0.3823 | 0.1108 | 0.2064 | 0.1981 | 0.1820
Exactly2 | 0.4766 | 0.4849 | 0.4896 | 0.4986 | 0.5410 | 0.5818 | 0.4796 | 0.4865 | 0.4965 | 0.4953
M-of-n | 0.0000 | 0.0000 | 0.0129 | 0.2142 | 0.4827 | 0.5707 | 0.0000 | 0.0000 | 0.1193 | 0.0298
Vote | 0.1395 | 0.1642 | 0.1745 | 0.2511 | 0.2961 | 0.3957 | 0.1511 | 0.1802 | 0.1858 | 0.1755
krvskp | 0.1274 | 0.1134 | 0.1476 | 0.3156 | 0.1708 | 0.5600 | 0.1236 | 0.1578 | 0.2224 | 0.1879
heart | 0.3317 | 0.3538 | 0.3644 | 0.4737 | 0.4441 | 0.5671 | 0.3496 | 0.3665 | 0.4426 | 0.3850
Table 8. Results of the minimum value of the fitness value.
Dataset | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
ionosphere | 0.0000 | 0.0000 | 0.1066 | 0.1066 | 0.2132 | 0.2132 | 0.0000 | 0.1508 | 0.1846 | 0.1066
breastcancer | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.1508 | 0.1508 | 0.0000 | 0.1508 | 0.1846 | 0.1508
glass | 0.0911 | 0.0911 | 0.0911 | 0.0911 | 0.1229 | 0.1602 | 0.1166 | 0.7891 | 1.0000 | 0.8009
sonar | 0.0000 | 0.0000 | 0.0000 | 0.1961 | 0.1387 | 0.1961 | 0.0000 | 0.1961 | 0.2402 | 0.2402
lymphography | 0.1644 | 0.1644 | 0.1644 | 0.2325 | 0.3288 | 0.3288 | 0.2325 | 0.2847 | 0.3288 | 0.3288
waveform | 0.5940 | 0.5973 | 0.6073 | 0.6177 | 0.6151 | 0.6876 | 0.5980 | 0.6456 | 0.6603 | 0.6621
clean1data | 0.1296 | 0.1296 | 0.0917 | 0.2050 | 0.1833 | 0.2750 | 0.0917 | 0.2050 | 0.2425 | 0.2050
SPECT | 0.2443 | 0.2443 | 0.2443 | 0.2732 | 0.2993 | 0.3455 | 0.2732 | 0.2993 | 0.3232 | 0.3232
ecoli | 0.2014 | 0.2014 | 0.2014 | 0.2014 | 0.2038 | 0.2518 | 0.2014 | 1.5000 | 1.3002 | 1.5000
CongressEW | 0.0000 | 0.0000 | 0.0000 | 0.0958 | 0.1916 | 0.1355 | 0.0000 | 0.0958 | 0.1659 | 0.1659
Exactly2 | 0.4336 | 0.4427 | 0.4382 | 0.4382 | 0.4775 | 0.4733 | 0.4382 | 0.4517 | 0.4817 | 0.4940
M-of-n | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.2966 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Vote | 0.0000 | 0.1155 | 0.1155 | 0.1155 | 0.2000 | 0.2000 | 0.0000 | 0.1633 | 0.1633 | 0.1633
krvskp | 0.1001 | 0.0791 | 0.1001 | 0.1119 | 0.1001 | 0.2347 | 0.0867 | 0.1415 | 0.2093 | 0.1697
heart | 0.2443 | 0.2443 | 0.2732 | 0.3232 | 0.3232 | 0.4232 | 0.2732 | 0.3665 | 0.3863 | 0.3455
Table 9. Results of the maximum value of the fitness value.
Dataset | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
ionosphere | 0.2132 | 0.2384 | 0.2611 | 0.3693 | 0.3371 | 0.8394 | 0.2132 | 0.2132 | 0.2384 | 0.2132
breastcancer | 0.1846 | 0.2384 | 0.2384 | 0.3844 | 0.3371 | 0.6124 | 0.1846 | 0.1846 | 0.2132 | 0.2384
glass | 0.2146 | 0.2442 | 0.2442 | 0.2649 | 0.2788 | 0.4274 | 0.2881 | 1.0903 | 1.0370 | 1.0903
sonar | 0.1387 | 0.1961 | 0.2774 | 0.4160 | 0.3922 | 0.7071 | 0.1961 | 0.2774 | 0.3101 | 0.3101
lymphography | 0.4350 | 0.4932 | 0.5452 | 0.6975 | 0.6576 | 0.8383 | 0.4350 | 0.4350 | 0.4350 | 0.4350
waveform | 0.6711 | 0.6591 | 0.6741 | 0.7642 | 0.7244 | 1.1398 | 0.6627 | 0.6765 | 0.6997 | 0.6794
clean1data | 0.2050 | 0.2593 | 0.2899 | 0.3780 | 0.3430 | 0.5941 | 0.2750 | 0.2593 | 0.3040 | 0.2899
SPECT | 0.3455 | 0.5037 | 0.5183 | 0.5183 | 0.6229 | 0.8725 | 0.4887 | 0.3232 | 0.3665 | 0.3863
ecoli | 0.3094 | 0.3225 | 0.3225 | 0.3225 | 0.5876 | 0.7949 | 0.3225 | 1.6868 | 1.6868 | 1.6868
CongressEW | 0.1916 | 0.2142 | 0.2142 | 0.3453 | 0.4175 | 0.7663 | 0.1916 | 0.3318 | 0.2142 | 0.2142
Exactly2 | 0.5177 | 0.5177 | 0.5477 | 0.6000 | 0.6033 | 0.7183 | 0.5215 | 0.5138 | 0.5138 | 0.4980
M-of-n | 0.0000 | 0.0000 | 0.2449 | 0.5899 | 0.6132 | 0.8319 | 0.0000 | 0.0000 | 0.3578 | 0.0894
Vote | 0.2309 | 0.2309 | 0.2582 | 0.4000 | 0.5538 | 0.6110 | 0.2309 | 0.2309 | 0.2309 | 0.2000
krvskp | 0.1621 | 0.1415 | 0.2152 | 0.5318 | 0.3744 | 0.7242 | 0.1459 | 0.1659 | 0.2399 | 0.2001
heart | 0.3665 | 0.4232 | 0.4405 | 0.5859 | 0.5730 | 0.7727 | 0.4232 | 0.3665 | 0.5183 | 0.4232
Table 10. Results of the standard deviation of the fitness value.
Dataset | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
ionosphere | 0.0623 | 0.0635 | 0.0455 | 0.0765 | 0.0436 | 0.1248 | 0.0578 | 0.0523 | 0.0661 | 0.0481
breastcancer | 0.0602 | 0.0786 | 0.0688 | 0.0862 | 0.0414 | 0.1206 | 0.0647 | 0.0470 | 0.0562 | 0.0497
glass | 0.0264 | 0.0391 | 0.0366 | 0.0446 | 0.0373 | 0.0792 | 0.0389 | 0.0326 | 0.0462 | 0.0253
sonar | 0.0643 | 0.0768 | 0.0883 | 0.0649 | 0.0777 | 0.1514 | 0.0775 | 0.0839 | 0.0678 | 0.0858
lymphography | 0.0910 | 0.0927 | 0.0931 | 0.1426 | 0.0901 | 0.1311 | 0.0747 | 0.0758 | 0.1163 | 0.0535
waveform | 0.0195 | 0.0181 | 0.0195 | 0.0385 | 0.0296 | 0.1368 | 0.0191 | 0.0189 | 0.0555 | 0.0190
clean1data | 0.0239 | 0.0331 | 0.0478 | 0.0446 | 0.0500 | 0.0755 | 0.0474 | 0.0455 | 0.0544 | 0.0417
SPECT | 0.0346 | 0.0638 | 0.0688 | 0.0677 | 0.0738 | 0.1178 | 0.0509 | 0.0519 | 0.0651 | 0.0482
ecoli | 0.0283 | 0.0321 | 0.0294 | 0.0297 | 0.1097 | 0.1517 | 0.0305 | 0.0379 | 0.1036 | 0.0243
CongressEW | 0.0599 | 0.0619 | 0.0461 | 0.0792 | 0.0608 | 0.1638 | 0.0622 | 0.0691 | 0.1088 | 0.0463
Exactly2 | 0.0203 | 0.0206 | 0.0271 | 0.0376 | 0.0318 | 0.0713 | 0.0221 | 0.0412 | 0.0407 | 0.0263
M-of-n | 0.0000 | 0.0000 | 0.0547 | 0.2399 | 0.0804 | 0.1568 | 0.0000 | 0.1100 | 0.1269 | 0.1248
Vote | 0.0584 | 0.0332 | 0.0428 | 0.0777 | 0.1006 | 0.1164 | 0.0474 | 0.0675 | 0.0873 | 0.0443
krvskp | 0.0138 | 0.0179 | 0.0305 | 0.1170 | 0.0693 | 0.1245 | 0.0157 | 0.0268 | 0.1164 | 0.0278
heart | 0.0326 | 0.0460 | 0.0484 | 0.0700 | 0.0624 | 0.0863 | 0.0392 | 0.0390 | 0.0794 | 0.0452
Table 11. Ratio of the selected feature for all datasets.
Dataset | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
ionosphere | 0.2167 | 0.3963 | 0.4551 | 0.4078 | 0.4580 | 0.3775 | 0.2459 | 0.3162 | 0.4510 | 0.3627
breastcancer | 0.2042 | 0.4350 | 0.4474 | 0.3971 | 0.4580 | 0.3072 | 0.2565 | 0.4706 | 0.5392 | 0.4216
glass | 0.4667 | 0.5263 | 0.5906 | 0.5767 | 0.6085 | 0.4848 | 0.5185 | 0.7407 | 0.5556 | 0.7407
sonar | 0.3167 | 0.5035 | 0.4930 | 0.5258 | 0.5254 | 0.4472 | 0.3913 | 0.5444 | 0.5167 | 0.4000
lymphography | 0.3920 | 0.5088 | 0.5439 | 0.5152 | 0.5132 | 0.4545 | 0.4815 | 0.4630 | 0.4815 | 0.5000
waveform | 0.6138 | 0.6917 | 0.7093 | 0.7987 | 0.6757 | 0.3129 | 0.7352 | 0.6667 | 0.5873 | 0.8730
clean1data | 0.2523 | 0.4896 | 0.5060 | 0.3951 | 0.5066 | 0.2584 | 0.3330 | 0.4578 | 0.4759 | 0.5040
SPECT | 0.3427 | 0.5024 | 0.5000 | 0.5475 | 0.5541 | 0.4788 | 0.4255 | 0.3788 | 0.4242 | 0.2273
ecoli | 0.6484 | 0.7895 | 0.7744 | 0.8027 | 0.5238 | 0.5238 | 0.7714 | 0.6667 | 0.8095 | 0.7619
CongressEW | 0.3462 | 0.4868 | 0.4803 | 0.4205 | 0.4821 | 0.3490 | 0.3850 | 0.5417 | 0.4792 | 0.2083
Exactly2 | 0.3522 | 0.5830 | 0.5020 | 0.1888 | 0.5568 | 0.3897 | 0.3877 | 0.3590 | 0.5641 | 0.5641
M-of-n | 0.5240 | 0.5668 | 0.5547 | 0.6818 | 0.5531 | 0.4567 | 0.5354 | 0.5897 | 0.5128 | 0.5897
Vote | 0.3438 | 0.4605 | 0.4803 | 0.5795 | 0.4881 | 0.4808 | 0.3625 | 0.5156 | 0.5417 | 0.5000
krvskp | 0.4938 | 0.5687 | 0.5673 | 0.6970 | 0.6005 | 0.3025 | 0.6111 | 0.5556 | 0.5926 | 0.5556
heart | 0.5192 | 0.5628 | 0.5749 | 0.6538 | 0.5824 | 0.5089 | 0.5108 | 0.5385 | 0.4872 | 0.5897
Table 12. Computation time by each algorithm.
Dataset | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
ionosphere | 44.35 | 24.54 | 28.05 | 49.52 | 24.64 | 25.01 | 49.06 | 28.51 | 28.81 | 28.07
breastcancer | 43.77 | 24.16 | 27.48 | 48.51 | 24.13 | 24.53 | 48.35 | 26.19 | 26.75 | 26.19
glass | 44.87 | 25.73 | 30.63 | 49.62 | 26.82 | 24.84 | 51.20 | 25.04 | 23.50 | 24.49
sonar | 43.04 | 23.82 | 27.10 | 47.72 | 23.81 | 24.63 | 47.35 | 25.94 | 26.21 | 26.04
lymphography | 36.31 | 19.24 | 22.98 | 37.26 | 20.28 | 17.36 | 40.87 | 20.02 | 19.85 | 21.84
waveform | 272.64 | 163.37 | 190.83 | 332.82 | 165.56 | 64.50 | 312.20 | 180.09 | 148.55 | 192.38
clean1data | 52.34 | 31.37 | 36.03 | 61.48 | 31.73 | 28.34 | 57.58 | 33.20 | 34.53 | 34.49
SPECT | 41.76 | 23.72 | 27.03 | 45.11 | 23.66 | 22.35 | 46.44 | 23.95 | 26.56 | 23.63
ecoli | 29.68 | 17.78 | 21.48 | 35.74 | 18.48 | 17.31 | 36.59 | 20.35 | 18.78 | 20.57
CongressEW | 44.47 | 25.02 | 28.44 | 49.36 | 24.92 | 24.58 | 49.38 | 27.90 | 28.45 | 27.61
Exactly2 | 45.52 | 28.33 | 29.87 | 46.78 | 27.37 | 26.21 | 48.98 | 28.26 | 31.09 | 28.51
M-of-n | 49.75 | 28.13 | 31.80 | 56.89 | 27.82 | 26.61 | 55.30 | 30.10 | 30.05 | 30.01
Vote | 43.33 | 24.10 | 27.49 | 48.08 | 24.14 | 23.83 | 48.01 | 27.16 | 27.15 | 27.24
krvskp | 174.72 | 96.15 | 111.78 | 203.56 | 99.59 | 58.54 | 197.99 | 101.23 | 104.37 | 102.83
heart | 43.05 | 23.91 | 27.18 | 47.66 | 23.89 | 23.75 | 47.72 | 26.88 | 26.60 | 26.69
Table 13. Results of the accuracy measure for all datasets.
Dataset | MPAO | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
ionosphere | 0.9820 | 0.9749 | 0.9611 | 0.9334 | 0.9275 | 0.8149 | 0.9814 | 0.9716 | 0.9583 | 0.9735
breastcancer | 0.9845 | 0.9767 | 0.9701 | 0.9427 | 0.9340 | 0.8106 | 0.9823 | 0.9735 | 0.9583 | 0.9659
glass | 0.8189 | 0.8083 | 0.8064 | 0.7053 | 0.7188 | 0.5678 | 0.8113 | 0.8113 | 0.7610 | 0.8050
sonar | 0.9940 | 0.9879 | 0.9777 | 0.9161 | 0.9139 | 0.7436 | 0.9908 | 0.9423 | 0.9231 | 0.9231
lymphography | 0.9414 | 0.9331 | 0.9161 | 0.8489 | 0.8147 | 0.6680 | 0.9369 | 0.9009 | 0.8739 | 0.9009
waveform | 0.8034 | 0.8055 | 0.7997 | 0.7845 | 0.7848 | 0.5801 | 0.8038 | 0.7920 | 0.7784 | 0.7869
clean1data | 0.9743 | 0.9622 | 0.9549 | 0.9194 | 0.9348 | 0.7723 | 0.9681 | 0.9468 | 0.9300 | 0.9300
SPECT | 0.9058 | 0.8830 | 0.8727 | 0.8372 | 0.8173 | 0.7157 | 0.8860 | 0.9005 | 0.8856 | 0.8706
ecoli | 0.8274 | 0.8170 | 0.8202 | 0.8101 | 0.6587 | 0.5471 | 0.8176 | 0.8254 | 0.8214 | 0.8254
CongressEW | 0.9881 | 0.9821 | 0.9778 | 0.9516 | 0.9187 | 0.8270 | 0.9839 | 0.9480 | 0.9602 | 0.9664
Exactly2 | 0.7725 | 0.7644 | 0.7596 | 0.7500 | 0.7063 | 0.6564 | 0.7712 | 0.7627 | 0.7533 | 0.7547
M-of-n | 1.0000 | 1.0000 | 0.9968 | 0.8965 | 0.7606 | 0.6497 | 1.0000 | 1.0000 | 0.9573 | 0.9973
Vote | 0.9771 | 0.9719 | 0.9677 | 0.9309 | 0.9022 | 0.8298 | 0.9749 | 0.9667 | 0.9644 | 0.9689
krvskp | 0.9836 | 0.9868 | 0.9773 | 0.8867 | 0.9660 | 0.6709 | 0.9845 | 0.9750 | 0.9504 | 0.9645
heart | 0.8889 | 0.8727 | 0.8649 | 0.7707 | 0.7989 | 0.6709 | 0.8788 | 0.8657 | 0.8010 | 0.8507
Table 14. Results of the Wilcoxon rank sum test for all methods.
Dataset | PSO | GA | AO | MFO | SMA | MPA | SSA | GOA | WOA
ionosphere | 0.048 | 0.021 | 0.000 | 0.000 | 0.000 | 0.030 | 0.020 | 0.009 | 0.024
breastcancer | 0.469 | 0.058 | 0.003 | 0.000 | 0.000 | 0.537 | 0.044 | 0.000 | 0.001
glass | 0.653 | 0.049 | 0.003 | 0.000 | 0.000 | 0.045 | 0.000 | 0.000 | 0.000
sonar | 0.040 | 0.435 | 0.000 | 0.000 | 0.000 | 0.819 | 0.000 | 0.000 | 0.000
lymphography | 0.333 | 0.005 | 0.011 | 0.000 | 0.000 | 0.631 | 0.003 | 0.000 | 0.000
waveform | 0.621 | 0.142 | 0.042 | 0.026 | 0.000 | 0.030 | 0.016 | 0.001 | 0.002
clean1data | 0.017 | 0.001 | 0.000 | 0.000 | 0.000 | 0.366 | 0.000 | 0.000 | 0.000
SPECT | 0.656 | 0.078 | 0.000 | 0.001 | 0.000 | 0.187 | 0.208 | 0.168 | 0.000
ecoli | 0.030 | 0.587 | 0.779 | 0.107 | 0.000 | 0.010 | 0.000 | 0.000 | 0.000
CongressEW | 0.144 | 0.007 | 0.001 | 0.000 | 0.000 | 0.284 | 0.000 | 0.000 | 0.000
Exactly2 | 0.882 | 0.648 | 0.266 | 0.000 | 0.000 | 0.950 | 0.008 | 0.002 | 0.019
M-of-n | 0.090 | 0.049 | 0.082 | 0.000 | 0.000 | 0.049 | 0.098 | 0.002 | 0.471
Vote | 0.049 | 0.354 | 0.034 | 0.000 | 0.000 | 0.048 | 0.052 | 0.007 | 0.069
krvskp | 0.118 | 0.042 | 0.001 | 0.013 | 0.000 | 0.714 | 0.000 | 0.000 | 0.000
heart | 0.035 | 0.031 | 0.000 | 0.000 | 0.000 | 0.009 | 0.009 | 0.000 | 0.000
Table 15. Results of the Tension/Compression Spring.
Algorithm | d | D | N | Optimal Cost
MPAO | 0.0516890 | 0.3567090 | 11.289455 | 0.012665
MVO [47] | 0.0525100 | 0.3760000 | 10.335100 | 0.012790
GSA [47] | 0.0502760 | 0.3236800 | 13.525410 | 0.012702
WOA [48] | 0.0512070 | 0.3452150 | 12.004032 | 0.012676
GWO [49] | 0.0516900 | 0.3567370 | 11.288850 | 0.012666
MFO [42] | 0.05199500 | 0.364109 | 10.868400 | 0.012670
SSA [50] | 0.0512070 | 0.3452150 | 12.004032 | 0.012676
RO [9] | 0.0513700 | 0.3490960 | 11.762790 | 0.012679
Table 16. Results of the rolling element bearing.
Algorithm | r1 | r2 | r3 | r4 | r5 | r6 | r7 | r8 | r9 | r10 | Opt. Cost
MPAO | 125.723 | 21.423 | 11.001 | 0.5150 | 0.5150 | 0.5000 | 0.6999 | 0.3000 | 0.1000 | 0.6144 | 85,539.19159
CHHO [51] | 125.723 | 21.423 | 11.001 | 0.5150 | 0.5150 | 0.4944 | 0.6986 | 0.3000 | 0.0335 | 0.6005 | 83,455.82500
HHO [51] | 125.000 | 21.075 | 11.076 | 0.5150 | 0.5150 | 0.4055 | 0.6060 | 0.3000 | 0.0844 | 0.6000 | 84,072.58400
SCA [52] | 125.000 | 21.033 | 10.966 | 0.5150 | 0.5500 | 0.5000 | 0.7000 | 0.3000 | 0.0278 | 0.6291 | 83,431.11000
MFO [42] | 125.000 | 21.033 | 10.966 | 0.5150 | 0.5150 | 0.5000 | 0.6758 | 0.3002 | 0.0240 | 0.6100 | 84,002.52400
MVO [53] | 125.600 | 21.600 | 10.973 | 0.5150 | 0.5150 | 0.5000 | 0.6878 | 0.3013 | 0.0362 | 0.6106 | 84,491.26600
TLBO [9] | 125.719 | 21.426 | 11.000 | 0.5150 | 0.5150 | 0.4243 | 0.6395 | 0.3000 | 0.0689 | 0.7995 | 81,859.74000
PVS [54] | 125.719 | 21.426 | 11.000 | 0.5150 | 0.5150 | 0.4004 | 0.6802 | 0.3000 | 0.0800 | 0.7000 | 81,859.74100
Table 17. Results of the speed reducer problem.
Algorithm | X1 | X2 | X3 | X4 | X5 | X6 | X7 | Opt. Weight
MPAO | 3.500 | 0.700 | 17.000 | 7.300 | 7.715322 | 3.350216 | 5.286655 | 2994.4725
CHHO [51] | 3.500 | 0.700 | 17.000 | 7.300 | 7.715 | 3.350215 | 5.286655 | 2994.4737
HHO [51] | 3.560 | 0.700 | 17.000 | 8.019 | 8.019 | 3.494800 | 5.286700 | 3060.3720
MDE [55] | 3.500 | 0.700 | 17.000 | 7.300 | 7.800 | 3.350221 | 5.286685 | 2996.3566
PSO-DE [56] | 3.500 | 0.700 | 17.000 | 7.300 | 7.800 | 3.350214 | 5.286683 | 2996.3481
PSO [57] | 3.581 | 0.700 | 17.828 | 7.984 | 7.821 | 3.153980 | 5.187300 | 3005.3248
MBA [58] | 3.500 | 0.700 | 17.000 | 7.300 | 7.716 | 3.350218 | 5.286654 | 2994.4824
SSA [57] | 3.500 | 0.700 | 17.000 | 7.800 | 7.850 | 3.352470 | 5.286700 | 3002.5678
ISCA [57] | 3.500 | 0.700 | 17.000 | 7.300 | 7.800 | 3.351290 | 5.286980 | 2997.1295
Table 18. Results of the gear train design problem.
Algorithm | X1 | X2 | X3 | X4 | Opt. Cost
MPAO | 42.92 | 16.45 | 18.78 | 49.03 | 2.700 × 10^−12
CHHO [51] | 41.00 | 47.00 | 16.00 | 17.00 | 0.1434000
HHO [51] | 56.00 | 58.00 | 22.00 | 21.00 | 0.14563000
IMPFA [59] | 30.80 | 23.92 | 12.00 | 12.00 | 1.3915 × 10^−10
GeneAS [60] | 50.00 | 33.00 | 14.00 | 17.00 | 0.14424200
Kannan and Kramer [60] | 41.00 | 33.00 | 15.00 | 13.00 | 0.14412100
Sandgren [60] | 60.00 | 45.00 | 22.00 | 18.00 | 0.14666700
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
