Article

Multi-Population Enhanced Slime Mould Algorithm with Application to Postgraduate Employment Stability Prediction

1 Graduate School, Wenzhou University, Wenzhou 325035, China
2 Wenzhounese Economy Research Institute, Wenzhou University, Wenzhou 325035, China
3 Department of Information Technology, Wenzhou Polytechnic, Wenzhou 325035, China
4 College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(2), 209; https://doi.org/10.3390/electronics11020209
Submission received: 5 December 2021 / Revised: 5 January 2022 / Accepted: 7 January 2022 / Published: 10 January 2022
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

Abstract:
In this study, the authors aimed to develop an effective intelligent method for employment stability prediction in order to provide a reasonable reference for postgraduate employment decision making and for policy formulation in related departments. First, this paper introduces an enhanced slime mould algorithm (MSMA) with a multi-population strategy. Moreover, this paper proposes a prediction model based on the modified algorithm and the support vector machine (SVM) algorithm called MSMA-SVM. The multi-population strategy balances the exploitation and exploration abilities of the algorithm and improves its solution accuracy. Additionally, the proposed model enhances the ability to optimize the support vector machine for parameter tuning and for identifying compact feature subsets, thereby obtaining more appropriate parameters and feature subsets. Then, the proposed modified slime mould algorithm is compared against various well-known algorithms in experiments on the 30 IEEE CEC2017 benchmark functions. The experimental results indicate that the established modified slime mould algorithm performs observably better than the comparison algorithms on most functions. Meanwhile, a comparison between the optimized support vector machine model and several other machine learning methods on their ability to predict employment stability was conducted, and the results showed that the suggested optimized support vector machine model has better classification ability and more stable performance. Therefore, it is possible to infer that the optimized support vector machine model is likely to be an effective tool that can be used to predict employment stability.

1. Introduction

In China, postgraduates are valuable talent resources. The employment quality of postgraduates is not only related to their own sense of social belonging and security, but it also affects social stability and sustainable development, and employment stability is an important measure of postgraduate employment quality. Employment stability not only affects the career development of individual graduate students, but it is also a focal issue of educational equity and social stability. Moreover, employment stability not only reflects practitioners’ psychological satisfaction with the employment unit, employment environment, remuneration, and career development, but it is also an important indicator of employment quality. When the skill level and the salary level of a job match, employment stability is high. On the contrary, the practitioner will actively seek to change jobs if there is a disparity between those factors; in cases where the salary level is extremely mismatched with the skill level, the practitioner will face the risk of being fired and will passively change jobs, and employment stability will be low. It can be seen that employment stability also determines the employment quality of graduate students to a large extent. In addition, for enterprises, retaining talent and maintaining the job stability of newly hired graduate students not only reduces labor costs but also supports sustainable development. Therefore, it is necessary to analyze the employment stability of graduate students through the effective mining of big data related to postgraduate employment and to construct an intelligent prediction model using a fusion of intelligent optimization algorithms and machine learning methods to verify the hypothesized relationships. At the same time, in order to provide a reference for postgraduate employment decision making and policy formulation by relevant departments, it is also necessary to dig into the key factors affecting the stable employment of postgraduates, conduct in-depth analyses of those factors, and explore the main factors affecting the stability of postgraduate employment.
At present, many studies have been conducted by many researchers on employment and employment stability. Yogesh et al. [1] applied artificial intelligence algorithms to enrich the student employability assessment process. Li et al. [2] made full use of the C4.5 algorithm to generate a type of employment data mining model for graduates. Liu et al. [3] proposed a weight-based decision tree to help students improve their employability. Mahdi et al. [4] proposed a novel method based on support vector machines, which was applied to predicting cryptocurrency returns. Tu et al. [5] developed an adaptive SVM framework to predict whether students would choose to start a business or find a job after graduation. Additionally, there have also been many studies on swarm intelligence algorithms. Cuong-Le et al. [6] presented an improved version of the Cuckoo search algorithm (NMS-CS) using the random walk strategy. Abualigah et al. [7] presented a novel nature-inspired meta-heuristic optimizer called the reptile search algorithm (RSA). Nadimi-Shahraki et al. [8] introduced an enhanced version of the whale optimization algorithm (EWOA-OPF), which combines the Levy motion strategy and Brownian motion. Gandomi et al. [9] proposed an evolutionary framework for the seismic response formulation of self-centering concentrically braced frame systems.
Therefore, in order to better predict the employment stability of graduate students, this paper first proposes a modified slime mould algorithm (MSMA), the core of which is the use of a multi-population mechanism to further balance the exploration and exploitation of the slime mould algorithm, effectively improving the solution accuracy of the original slime mould algorithm. Further, an MSMA-based SVM model (MSMA-SVM) is proposed, in which MSMA effectively enhances the classification and prediction accuracy of the original SVM. To demonstrate the performance of MSMA, MSMA and the original slime mould algorithm were first subjected to balance and diversity analyses on the 30 benchmark functions of the IEEE CEC2017. In addition, this paper not only compares MSMA with other traditional basic algorithms, including differential evolution (DE) [10], the slime mould algorithm (SMA) [11], the grey wolf optimizer (GWO) [12,13], the bat-inspired algorithm (BA) [14], the firefly algorithm (FA) [15], the whale optimizer (WOA) [16,17], moth–flame optimization (MFO) [18,19,20], and the sine cosine algorithm (SCA) [21], but it also compares MSMA with some algorithm variants that have previously demonstrated very good performance, including boosted GWO (OBLGWO) [22], the balanced whale optimization algorithm (BWOA) [17], the chaotic mutative moth–flame-inspired optimizer (CLSGMFO) [20], PSO with an aging leader and challengers (ALCPSO) [23], the differential evolution algorithm based on chaotic local search (DECLS) [24], the double adaptive random spare reinforced whale optimization algorithm (RDWOA) [25], the chaos-enhanced bat algorithm (CEBA) [26], and the chaos-induced sine cosine algorithm (CESCA) [27]. Ultimately, the comparative experimental results obtained on the benchmark functions effectively illustrate that MSMA not only provides better performance than the initial SMA, but that it is also superior to many common similar algorithms. To make better predictions and judgments about the employment stability of graduate students, comparative experiments between MSMA-SVM and other machine learning approaches were conducted. The results of the experiments indicate that, among all the comparison methods, MSMA-SVM obtains more accurate classification results and better stability on the four indicators.
The rest of this paper is structured as follows: Section 2 provides a brief introduction to SVM and SMA. Section 3 describes the proposed MSMA and the MSMA-SVM model in detail. Section 4 introduces the data source and the experimental setup. The experimental outcomes of MSMA on the benchmark functions and of MSMA-SVM on the real-life dataset are analyzed in Section 5. A discussion of the improved algorithm is provided in Section 6. Additionally, the last section provides summaries and advice as they pertain to the present research.
In conclusion, the present research contributes the following major innovations:
(a)
This paper proposes a novel version of SMA, called MSMA, that incorporates a multi-population strategy.
(b)
Experiments comparing MSMA with other algorithms are conducted on a benchmark function set. The experimental results demonstrate that the proposed algorithm can better balance the exploitation and exploration capabilities and has better accuracy.
(c)
The MSMA algorithm is combined with the support vector machine algorithm to construct a prediction model for the first time, which is called MSMA-SVM. Additionally, the MSMA-SVM model is employed in employment stability prediction experiments.
(d)
The proposed MSMA in the benchmark function experiments and the MSMA-SVM in employment stability prediction demonstrate better performance than their counterparts.

2. Background

2.1. Support Vector Machine

The core principle of the SVM is the construction of a hyperplane that best separates two classes of data, such that the margin between the classes is maximized and the classifier has the greatest generalization power. Support vectors are the data points closest to this boundary. The SVM is a supervised learning approach that is used for classification tasks, with the goal of finding the best hyperplane that can properly separate positive and negative samples.
With the given data set $G = \{(x_i, y_i)\},\ i = 1, \ldots, N,\ x_i \in \mathbb{R}^d,\ y_i \in \{\pm 1\}$, the hyperplane can be expressed as:

$$g(x) = \omega \cdot x + b \quad (1)$$
In terms of the geometric understanding of the hyperplane, maximizing the geometric margin is equivalent to minimizing $\|\omega\|$. The concept of a "soft margin" is introduced, and the slack variable $\xi_i > 0$ is applied in cases where there are a few outliers. One of the key parameters that influences SVM classification is the penalty factor $c$, which represents the ability to accommodate outliers. A standard SVM model is shown below:
$$\min_{\omega}\ \frac{1}{2}\|\omega\|^2 + c\sum_{i=1}^{N}\xi_i^2 \quad \text{s.t.}\ \ y_i\left(\omega^{\mathrm{T}} x_i + b\right) \geq 1 - \xi_i,\ i = 1, 2, \ldots, N \quad (2)$$
where $\omega$ is the weight vector of the hyperplane and $b$ is a constant bias term.
In this way, the initial low-dimensional sample set is mapped to the high-dimensional space $H$, allowing the best classification surface to be established by a linear method. To do so, the SVM non-linearly transforms the linearly inseparable sample set via $\Phi: \mathbb{R}^d \to H$. To keep the inner products computed in the low-dimensional space consistent with the inner products after mapping to the high-dimensional space, a suitable kernel function $k(x_i, x_j)$ is constructed using generalized function theory. With $\alpha_i$ denoting the Lagrange multipliers, the problem is converted into the dual form shown in Equation (3):
$$Q(\alpha) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_i y_j k(x_i, x_j) - \sum_{i=1}^{N}\alpha_i \quad \text{s.t.}\ \sum_{i=1}^{N}\alpha_i y_i = 0,\ 0 \leq \alpha_i \leq C,\ i = 1, 2, \ldots, N \quad (3)$$
This paper adopts the generalized radial basis function (RBF) kernel as the kernel function of the support vector machine, and its expression is as follows:

$$k(x_i, x_j) = e^{-\gamma \|x_i - x_j\|^2} \quad (4)$$
where $\gamma$ is the kernel parameter, another element that strongly influences the classification performance of an SVM; it controls the width of the kernel function.
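To make the roles of the penalty factor $c$ (written `C` below) and the kernel parameter $\gamma$ concrete, the following is a minimal sketch of an RBF-kernel SVM in Python using scikit-learn; the toy data and parameter values are illustrative only, not the settings used in this paper.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # 200 samples, 5 features
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)   # toy labels in {0, 1}

# C is the penalty factor c; gamma is the RBF kernel width parameter.
clf = SVC(kernel="rbf", C=1.0, gamma=0.1)
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
print(f"10-fold CV accuracy: {acc:.3f}")

Larger `C` penalizes slack more heavily (fewer tolerated outliers), while larger `gamma` narrows the kernel and makes the decision surface more flexible; both must be tuned jointly, which is exactly the task MSMA takes over in Section 3.3.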

2.2. Slime Mould Algorithm

Similar to many other recently proposed optimization algorithms, including Harris hawks optimization (HHO) [28], the Runge Kutta optimizer (RUN) [29], the colony predation algorithm (CPA) [30], and hunger games search (HGS) [31], SMA is a novel and high-performing swarm intelligence optimization algorithm that was developed by Li et al. [11], who were motivated by the slime mould’s foraging behavior. Since its introduction, SMA has been applied to many problems such as image segmentation [32,33], engineering design [34], parameter identification in photovoltaic models [35], medical decision-making [36], and multi-objective problems [37]. In this section, some mathematical models related to the mechanisms and characteristics of SMA are presented.
During its approach to food, the slime mould is guided by odors in the environment. To model this behavior mathematically, the following expression simulates its contraction pattern:
$$X(t+1) = \begin{cases} X_b(t) + v_b \cdot \left( W \cdot X_A(t) - X_B(t) \right), & r < p \\ v_c \cdot X(t), & r \geq p \end{cases} \quad (5)$$
where $v_b$ is a parameter in $[-a, a]$, $v_c$ takes values in the range $[-1, 1]$, $t$ is the current iteration count, $X_b$ represents the location of the individual with the best fitness value found so far, $X$ represents the location of the slime mould, $X_A$ and $X_B$ represent two individuals randomly chosen from the slime mould population, $W$ represents the weight of the slime mould, and $r$ is a random number in the range $[0, 1]$. In addition, $p$, $v_b$, $v_c$, and $W$ are computed as follows:
$$p = \tanh \left| S(i) - BF \right| \quad (6)$$
$$v_b = [-a, a] \quad (7)$$
$$v_c = \mathrm{rand} \cdot [-b, b] \quad (8)$$
$$a = \operatorname{arctanh}\left( -\frac{FEs}{Max\_FEs} + 1 \right) \quad (9)$$
$$b = 1 - \frac{FEs}{Max\_FEs} \quad (10)$$
$$W(SmellIndex(i)) = \begin{cases} 1 + r \cdot \log\left( \dfrac{BF - S(i)}{BF - WF} + 1 \right), & \text{condition} \\ 1 - r \cdot \log\left( \dfrac{BF - S(i)}{BF - WF} + 1 \right), & \text{others} \end{cases} \quad (11)$$
$$SmellIndex = \mathrm{sort}(S) \quad (12)$$
where $i \in \{1, 2, 3, \ldots, n\}$; $S(i)$ represents the fitness of $X$; $BF$ and $WF$ are the best and worst fitness obtained so far; $FEs$ is the current number of function evaluations; $Max\_FEs$ is the maximum number of function evaluations; $condition$ refers to the individuals in the top half of the population ranked by $S(i)$; and $SmellIndex$ represents the sequence of fitness values arranged in ascending order.
When food is being wrapped, the higher the concentration of food exposed to the vein, the stronger the propagation wave produced by the bio-oscillator and the quicker the cytoplasmic flow, resulting in thicker veins. Equation (11) models this positive and negative feedback relationship between vein width and food concentration in slime moulds. If the food concentration is higher, the weight of the nearby area increases; at lower food concentrations, the weight of the area declines, causing the slime mould to move on and explore other areas. The motility behavior of slime moulds can therefore be simulated using Equation (13):
$$X^* = \begin{cases} \mathrm{rand} \cdot (UB - LB) + LB, & \mathrm{rand} < z & (13.1) \\ X_b(t) + v_b \cdot \left( W \cdot X_A(t) - X_B(t) \right), & \mathrm{rand} \geq z,\ r < p & (13.2) \\ v_c \cdot X(t), & \mathrm{rand} \geq z,\ r \geq p & (13.3) \end{cases}$$
where $UB$ and $LB$ denote the upper and lower bounds of the search range, and $rand$ and $r$ are random values in $[0, 1]$. Following the original version, the parameter $z$ is set to 0.03.
While grasping food, slime moulds change the cytoplasmic flux mainly through the propagation wave of the biological oscillator, moving toward positions that are more favorable in terms of food concentration. $W$, $v_b$, and $v_c$ are used to imitate the changes observed in the venous width of slime moulds. The value of $v_b$ oscillates randomly between $[-a, a]$ and approaches 0 as the iterations increase. The value of $v_c$ varies between $[-1, 1]$ and eventually converges to 0. The trajectories of the two are shown in Figure 1 for the task considered in this work.
The pseudo-code of the SMA is displayed in Algorithm 1.
Algorithm 1 Pseudo-code of SMA
Initialize the parameters popsize, Max_FEs;
Initialize the population of slime mould Xi (i = 1, 2, 3, …, n);
Initialize the control parameters z, a;
While (t ≤ Max_FEs)
  Calculate the fitness of the slime mould;
  Sort the population in ascending order by fitness;
  Update bestFitness, Xb;
  Calculate W by Equation (11);
  For i = 1 to popsize
    Update p by Equation (6);
    Update vb by Equations (7) and (9);
    Update vc by Equations (8) and (10);
    If rand < z
      Update positions by Equation (13.1);
    Else
      Update p, vb, vc;
      If r < p
        Update positions by Equation (13.2);
      Else
        Update positions by Equation (13.3);
      End If
    End If
  End For
  t = t + 1;
End While
Return bestFitness, Xb
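For concreteness, the following is a minimal Python sketch of the SMA update loop described by Equations (5)–(13), written for minimization over box constraints. The function name, parameter defaults, and numerical guards (the small epsilons) are our own illustrative choices, not part of the original algorithm specification.

import numpy as np

def sma(obj, lb, ub, dim, popsize=30, max_iter=500, z=0.03):
    """Minimal sketch of the slime mould algorithm (minimization)."""
    X = np.random.uniform(lb, ub, (popsize, dim))       # initial slime mould
    fit = np.apply_along_axis(obj, 1, X)
    xb, bf = X[fit.argmin()].copy(), fit.min()          # best-so-far location/fitness

    for t in range(1, max_iter + 1):
        order = fit.argsort()                           # SmellIndex = sort(S), Eq. (12)
        X, fit = X[order], fit[order]
        wf = fit[-1]                                    # worst fitness WF
        r = np.random.rand(popsize, dim)
        log_term = np.log1p((bf - fit[:, None]) / (bf - wf - 1e-12))
        top_half = (np.arange(popsize) < popsize // 2)[:, None]
        W = np.where(top_half, 1 + r * log_term, 1 - r * log_term)   # Eq. (11)

        a = np.arctanh(1 - t / (max_iter + 1))          # Eq. (9)
        b = 1 - t / max_iter                            # Eq. (10)
        p = np.tanh(np.abs(fit - bf))                   # Eq. (6)
        vb = np.random.uniform(-a, a, (popsize, dim))   # Eq. (7)
        vc = np.random.uniform(-b, b, (popsize, dim))   # Eq. (8)

        A, B = (np.random.randint(popsize, size=popsize) for _ in range(2))
        for i in range(popsize):
            if np.random.rand() < z:                    # Eq. (13.1): random relocation
                X[i] = np.random.uniform(lb, ub, dim)
            elif np.random.rand() < p[i]:               # Eq. (13.2): approach food
                X[i] = xb + vb[i] * (W[i] * X[A[i]] - X[B[i]])
            else:                                       # Eq. (13.3): contraction
                X[i] = vc[i] * X[i]
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < bf:                              # update best-so-far record
            bf, xb = fit.min(), X[fit.argmin()].copy()
    return xb, bf

# Example: minimize the 30-dimensional sphere function.
best_x, best_f = sma(lambda x: float(np.sum(x ** 2)), -100.0, 100.0, dim=30)
print(best_f)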

3. Suggested MSMA

3.1. Multi-Population Structure

As an important factor that affects the information exchange between populations, the topological structure of the population also has a great impact on balancing the exploration and exploitation processes. The multi-population topological structure is mainly composed of three parts: the dynamic sub-population number strategy (DNS), the purposeful detecting strategy (PDS), and the sub-populations regrouping strategy (SRS).
DNS means that the whole population is separated into many sub-populations after the first iteration. Initially, a sub-population is composed of two search individuals; as the number of iterations increases, the number of sub-populations gradually decreases while the scale of each sub-population increases, until only one sub-population is left in the search space, representing the aggregation of all sub-populations at the end of the iteration process. Smaller sub-populations help the swarm maintain its diversity, and this strategy enables individuals in the population to exchange information more quickly and widely. The DNS implementation is determined by two things: the rule governing how the sub-population count changes, and the period of that change. For the first, a set of integers $N = \{n_1, n_2, \ldots, n_{k-1}, n_k\}$, $n_1 > n_2 > \ldots > n_{k-1} > n_k$, is used, where each integer indicates a sub-population count. To ensure the implementation of the DNS strategy, the size of each sub-population remains unchanged within one stage, that is, the whole number of individuals can be evenly divided by the current number of sub-populations. For the changing period, a fixed stage length is used to adapt the structure of the whole population; the stage length is calculated as $C_{gen} = MaxFEs / k$, where $k$ is the number of integers in $N$ and $MaxFEs$ denotes the preset number of evaluations, ensuring efficient variation of the sub-population count.
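As a concrete illustration of the DNS bookkeeping, below is a minimal Python sketch. The specific schedule (15, 10, 5, 3, 1) is a hypothetical example chosen so that a population of 30 divides evenly; the paper does not list the actual integer set it uses.

def dns_schedule(pop_size, max_fes, counts=(15, 10, 5, 3, 1)):
    """Map the current evaluation count to a sub-population count.

    counts is a decreasing sequence n1 > n2 > ... > nk; each stage lasts
    C_gen = max_fes / k evaluations, and every count must divide pop_size
    so that all sub-populations keep an equal size within a stage.
    """
    assert all(pop_size % n == 0 for n in counts)
    c_gen = max_fes / len(counts)                     # fixed stage length C_gen
    def count_at(fes):
        stage = min(int(fes // c_gen), len(counts) - 1)
        return counts[stage]
    return count_at

# 30 individuals and 150,000 evaluations, as in the experiments below.
count_at = dns_schedule(30, 150_000)
print(count_at(0), count_at(80_000), count_at(149_999))   # -> 15 5 1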
In SRS, the proposed method uses the same sub-population regrouping strategy as a previously published enhanced particle swarm optimization [38], where $Stag_{best}$ represents the number of iterations for which the best individual has stagnated. The sub-population regrouping strategy is executed when the whole population stagnates, which is how the execution timing of the regrouping scheme is determined. Additionally, the scale of the sub-population affects the frequency with which this strategy is executed: as the scale of the sub-population increases, individuals need more iterations to obtain useful guidance. For these reasons, the stagnation threshold is calculated as $Stag_{best} = S_{sub} / 2$, where $S_{sub}$ is the sub-population size.
PDS enhances the capability of the presented method to escape local optima, particularly on multi-modal problems. The collected population information is used to guide the swarm to energetically search regions with a higher search value, and many studies have proven the superiority of this scheme [39,40]. To facilitate PDS execution, each dimension of the search space is divided into segments of equal size; the function of this segmentation mechanism is to help the search individuals collect information. For PDS, the segments are classified. When both the best search agent and the current individual lie in the best-explored interval of a dimension, the best search individual will select a search segment in the worst-explored interval of the same dimension. If the fitness of the newly generated candidate solution is superior to the current optimal record, the optimal individual's position is replaced by the new solution. The under-explored intervals are thus more fully explored because of PDS. Meanwhile, a taboo scheme is attached to the PDS to avoid repeatedly exploring the same area (see the sketch below). When a segment $s_j^i$ is searched, the flag variable $tab_j^i$ representing that segment is set to 1, and segment $s_j^i$ can only be searched again once $tab_j^i$ is reset to 0. All flag variables are reset to 0 when every segment of every dimension has been fully explored.
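The taboo bookkeeping can be sketched as follows; the class name, the segment count `rn`, and the uniform sampling inside a segment are our own illustrative choices rather than the paper's exact implementation.

import numpy as np

class TabooSegments:
    """Taboo flags for PDS: each of `dim` dimensions is split into `rn`
    equal-width segments over [lb, ub]; tab[j, i] = True marks segment i
    of dimension j as already searched."""

    def __init__(self, dim, rn, lb, ub):
        self.tab = np.zeros((dim, rn), dtype=bool)
        self.lb, self.width = lb, (ub - lb) / rn

    def sample_unsearched(self, j):
        """Pick a not-yet-searched segment in dimension j, flag it, and
        return a uniform sample from inside that segment."""
        free = np.flatnonzero(~self.tab[j])
        i = np.random.choice(free)
        self.tab[j, i] = True
        if self.tab.all():            # every segment of every dimension searched:
            self.tab[:] = False       # reset all flags to 0
        lo = self.lb + i * self.width
        return np.random.uniform(lo, lo + self.width)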

3.2. Proposed MSMA

The MSMA improvement principle is the addition of a dynamic multi-population structure to the original SMA. At the start of the multi-population strategy, the whole population is divided into many subgroups of equal size. The equal scale of the subgroups not only simplifies the general population structure, but it also reduces the complexity of adjusting and fusing the population structure. The multi-population structure is employed to steer the whole population's exploration tendency by modifying the SMA update procedure. As the iterative process continues, the DNS strategy increases the scale of the sub-populations while reducing their number in order to guide the method into the exploitation stage. In addition, during the search process, the PDS scheme is implemented to realize information sharing among sub-populations and to enhance the algorithm's exploration capability. The SRS strategy is executed to make the population jump out of a local optimum when the population stagnates there. The pseudocode in Algorithm 2 below details the MSMA framework.
Algorithm 2 Pseudo-code of MSMA
Initialize the parameters popsize, Max_FEs;
Initialize the population of slime mould Xi (i = 1, 2, 3, …, n);
Initialize the control parameters z, a;
While (t ≤ Max_FEs)
  Calculate the fitness of the slime mould;
  Sort the population in ascending order by fitness;
  Update bestFitness, Xb;
  Calculate W by Equation (11);
  For i = 1 to popsize
    Update p by Equation (6);
    Update vb by Equations (7) and (9);
    Update vc by Equations (8) and (10);
    If rand < z
      Update positions by Equation (13.1);
    Else
      Update p, vb, vc;
      If r < p
        Update positions by Equation (13.2);
      Else
        Update positions by Equation (13.3);
      End If
    End If
  End For
  Perform DNS, SRS, and PDS from the multi-population topological structure;
  t = t + 1;
End While
Return bestFitness, Xb
The complexity of MSMA is mainly determined by the slime mould initialization, the fitness calculation, the weight calculation, the position updating, and the complexity of DNS, SRS, and PDS. Let $n$ represent the population size, $T$ the number of iterations, $dim$ the dimension of the objective function, and $Rn$ the number of segments per dimension. The complexity of slime mould initialization is $O(n)$; the fitness calculation and sorting complexity is $O(T \times (3n + n \log n))$; the weight calculation complexity is $O(T \times n \times dim)$; and the position updating complexity is $O(T \times n \times dim)$. The DNS complexity is $O(T \times n + T \times n)$, the SRS complexity is $O(T \times n)$, and the PDS complexity is $O(T \times dim \times Rn)$. Summing these terms, the overall MSMA complexity is $O(n + T \times n \times (6 + \log n + 2 \times dim) + T \times dim \times Rn)$.

3.3. Proposed MSMA-SVM Method

The penalty factor $C$, the kernel parameter $\gamma$, and the selected feature subset are the important factors that determine the classification results and the complexity of the SVM classification model. Usually, the two parameters are selected based on experience, resulting in poor efficiency and accuracy; likewise, the feature subset is often the whole feature set or randomly selected variables, which also leads to poor efficiency and accuracy. Therefore, a new solution model, MSMA-SVM, is proposed, in which MSMA is used to optimize the two vital SVM parameters and the feature subset; the model is then applied to the real-world problem of postgraduate employment stability prediction. The framework of the MSMA-SVM is displayed in Figure 2. The model mainly contains two important components. The left two columns use MSMA to optimize the two parameters and the feature subset of the SVM model. In the right half, the optimized SVM obtains the classification accuracy (ACC) through 10-fold cross-validation (CV), in which nine folds are utilized for training and the remaining fold is employed for testing.
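As an illustration of how a single MSMA search agent can be decoded into an SVM configuration and scored, the following is a minimal sketch using scikit-learn; the encoding ranges, the guard logic, and the function name are our own assumptions rather than the paper's exact settings.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def msma_svm_fitness(agent, X, y):
    """Decode one search agent in [0, 1]^(2 + n_features) into (C, gamma,
    feature mask) and score it by 10-fold cross-validated accuracy."""
    C = 2.0 ** (agent[0] * 20 - 10)              # C in [2^-10, 2^10]
    gamma = 2.0 ** (agent[1] * 20 - 10)          # gamma in [2^-10, 2^10]
    mask = np.asarray(agent[2:]) > 0.5           # binary feature selection
    if not mask.any():                           # guard: keep at least one feature
        mask[0] = True
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    acc = cross_val_score(clf, X[:, mask], y, cv=10, scoring="accuracy").mean()
    return -acc                                  # MSMA minimizes, so negate ACC

This fitness function is what MSMA would call in place of a benchmark objective: each agent simultaneously proposes hyperparameters and a feature subset, and the CV accuracy drives the search.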

4. Experiments

4.1. Collection of Data

The population studied in this article comprised a total of 331 full-time postgraduate students from the class of 2016 at Wenzhou University. Comparing the employment status of the 2016 postgraduates three years after graduation with their initial post-graduation employment records from September 2019, it was found that 153 postgraduates (46.22%) had not changed workplaces in three years, and 178 postgraduates (53.78%) had demonstrated separation behavior.
Through data mining and analysis of gender, political outlook, professional attributes, academic system, situations where the student experienced difficulty, student origin, academic performance (average course grades, teaching practice grades, social practice grades, academic report grades, thesis grades), graduation destination, nature of initial employment unit, location of initial employment unit, initial employment position, degree of initial employment and its relevance to the student’s major, monthly salary level during initial employment, employment variation, current employment status, nature of current employment unit, location of current employment unit, variation in employment location, current employment position, degree of current employment and its relevance to the student’s major, current monthly salary level, and monthly salary difference (see Table 1), the authors explored the importance and intrinsic connection of each index and built an intelligent prediction model based on this information.

4.2. Experimental Setup

MATLAB R2018 was utilized to conduct the experiments. The data were scaled to $[-1, 1]$ before classification, and k-fold cross-validation (CV) was used to split the data, with k set to 10.
In addition, to ensure the same environment for all experiments, the experiments were conducted on a Windows 10 machine with an Intel(R) Core(TM) i5-4200H CPU @ 2.80 GHz and 8 GB of RAM.
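For completeness, the scaling step can be reproduced as below (shown in Python for illustration, although the experiments themselves were run in MATLAB):

import numpy as np

def scale_to_minus1_plus1(X):
    """Linearly scale each feature column of X to [-1, 1]."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)   # guard constant columns
    return 2.0 * (X - xmin) / span - 1.0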

5. Experimental Results

5.1. The Qualitative Analysis of MSMA

Swarm intelligence algorithms are good at solving many optimization problems, such as traveling salesman problems [41], feature selection [42,43,44,45,46], object tracking [47,48], wind speed prediction [49], PID optimization control [50,51,52], image segmentation [53,54], the hard maximum satisfiability problem [55,56], parameter optimization [22,57,58,59], gate resource allocation [60,61], fault diagnosis of rolling bearings [62,63], the detection of foreign fibers in cotton [64,65], large-scale supply chain network design [66], cloud workflow scheduling [67,68], neural network training [69], airline crew rostering problems [70], and energy vehicle dispatch [71]. This section conducts a qualitative analysis of MSMA.
The original SMA was selected for comparison with MSMA. Figure 3 displays the outcomes of the qualitative study comparing MSMA and SMA. There are five columns in the figure. The first column (a) is the position distribution of the MSMA search history in the three-dimensional space. The second column (b) is the position distribution of the MSMA search history on the two-dimensional plane. In Figure 3b, the red dot represents the location of the optimal solution, and the black dots represent the MSMA search locations. The black dots are scattered everywhere on the entire search plane, which shows that MSMA performs a global search of the solution space. The black dots are significantly denser in the area around the red dot, which shows that MSMA has exploited the region where the best solution is situated to a greater extent. The third column (c) is the trajectory of the first dimension of MSMA during the iterations. In Figure 3c, it is easy to see that the first-dimension trajectory of MSMA fluctuates widely. The amplitude of the trajectory fluctuation reflects the search range of the algorithm to a certain extent; the large fluctuation range of the trajectory indicates that the algorithm has performed a large-scale search. The fourth column (d) displays changes in the average MSMA fitness during the iterations. In Figure 3d, the average fitness of the algorithm shows large fluctuations, but the overall fitness is decreasing. The fifth column (e) describes the MSMA and SMA convergence curves. In Figure 3e, it can clearly be seen that the MSMA convergence curve lies below that of SMA, which shows that MSMA has better convergence performance.
Balance and diversity analyses were carried out on the same functions. Figure 4 shows the outcomes of the balance study of MSMA and SMA. The three curves in each picture represent three different behaviors. As indicated in the legend, the red curve and the blue curve represent exploration and exploitation, respectively. A large value of a curve indicates that the corresponding behavior is prominent in the algorithm. The green curve is an incremental–decremental curve, which reflects the changing trends of the two behaviors more intuitively: when the curve increases, exploration is currently dominant, and when it decreases, exploitation is dominant. When the two behaviors are in balance, the increment–decrement curve reaches its highest value.
A swarm intelligence algorithm solving an optimization problem will first perform a global search; after determining the position of the optimal solution, it exploits that area locally. Therefore, exploration is dominant in both MSMA and SMA at the beginning. MSMA spends more time on exploration than the original SMA, which can be clearly seen on F2, F23, F27, and F30. Moreover, the proportion of MSMA exploration behavior on F4, F9, F22, and F26 is also higher than that of SMA. It can be seen that the exploration and exploitation curves of MSMA on F4, F9, F22, and F26 are not monotonous but fluctuate instead. This fluctuation can be clearly observed when the MSMA exploration curve drops rapidly in the early phase. Because the fluctuation preserves the proportion of exploration behavior, MSMA does not end the global exploration phase too quickly. This is a major difference in the balance between MSMA and SMA.
Figure 5 shows the results of the diversity analysis of MSMA and SMA. In Figure 5, the abscissa represents the number of iterations, and the ordinate represents the population diversity. At the beginning, the swarm is randomly generated, so the population diversity is very high. As the iterations progress, the algorithm continues to narrow the search range, and the population diversity decreases. As can be seen in Figure 5, the SMA diversity curve is monotonically decreasing. MSMA, however, is different: the fluctuations in the balance analysis are also reflected in the diversity curve. The F1, F3, F12, and F15 curves all have a period in which diversity increases again, while this is less obvious for the other functions. This fluctuation period becomes more apparent when the MSMA diversity decreases rapidly in the early stage. Obviously, this ensures that MSMA can maintain high population diversity and wide search capability in the early and middle stages. In the later period, the MSMA diversity drops to a low level, demonstrating good convergence ability.
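Population diversity in analyses of this kind is commonly measured as the mean distance of individuals from the population centroid. The following is a minimal sketch of that measure; the exact definition used for Figure 5 is not stated in the text, so this particular metric is an assumption.

import numpy as np

def population_diversity(X):
    """Mean Euclidean distance of the individuals (rows of X) from the
    population centroid; decreases as the swarm converges."""
    centroid = X.mean(axis=0)
    return float(np.linalg.norm(X - centroid, axis=1).mean())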

5.2. Comparison with Original Methods

In this section, MSMA is compared with eight original swarm intelligence algorithms: SMA [11], DE [10], GWO [12,13], BA [14], FA [15], WOA [16,17], MFO [18,19,20], and SCA [21], to prove the performance of MSMA. These comparison algorithms are classic and representative original algorithms that have been used by many researchers to estimate the quality of their own developed algorithms. In this experiment, the authors selected the CEC2017 [72] test functions to judge the merits of the algorithms involved and set the number of search agents to 30, the search agent dimension to 30, and the maximum number of evaluations to 150,000. Every algorithm was run 30 times independently to obtain the mean value. Table 2 displays the average and standard deviation obtained by these algorithms on the different test functions. Clearly, the mean and standard deviation of the presented algorithm are lower than those of the compared algorithms on most functions. The Friedman [73] test is a non-parametric test method that can test whether multiple population distributions differ significantly; it evaluates the average performance differences of the chosen approaches and compares them statistically to determine the ARV (average ranking value) of each method. In Table 2, the MSMA algorithm ranks first on 22 benchmark functions, such as F1 and F3, proving that this paper's enhanced algorithm has numerous advantages over the other compared algorithms on the CEC2017 benchmark functions. The Wilcoxon [74] signed-rank test was used to test whether the improved algorithm is significantly better than the others: when the p value is lower than 0.05, the MSMA algorithm is significantly better than its competitor on the test function in question. In Table 2, most of the p-values calculated between MSMA and the comparison algorithms are less than 0.05. Therefore, the MSMA algorithm is more capable of searching for the optimal solution on the CEC2017 test functions than the other competitors.
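For readers wishing to reproduce the statistical comparison, the sketch below applies both tests with SciPy on stand-in data; the actual per-run results are those summarized in Table 2.

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# Stand-in data: best fitness of three algorithms over 30 independent runs.
msma, sma, de = (rng.normal(loc, 1.0, 30) for loc in (0.0, 0.5, 0.8))

stat, p = friedmanchisquare(msma, sma, de)    # do the methods differ overall?
print(f"Friedman p-value: {p:.4f}")

stat, p = wilcoxon(msma, sma)                 # paired MSMA-vs-SMA comparison
print(f"Wilcoxon p-value: {p:.4f}")           # p < 0.05: significant difference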
The convergence speed and precision of the algorithms can be understood more clearly through the convergence graphs. The authors selected six representative convergence graphs from the CEC2017 test functions. As shown in Figure 6, six convergence trend graphs are listed, namely F1, F12, F15, F18, F22, and F30. In these six convergence graphs, the MSMA algorithm converges quickly before approaching 5000 evaluations, the convergence speed becomes slower at around 5000 to 20,000 evaluations, and then the convergence speed increases again. Consequently, the MSMA algorithm demonstrates a strong ability to escape local optima. Furthermore, the optimal solutions found by the MSMA algorithm on these six test functions are better than those determined by the other compared algorithms.

5.3. Comparison against Well-Established Algorithms

To prove the superiority of the MSMA algorithm, this section compares it with eight improved swarm intelligence algorithms, including OBLGWO [22], CLSGMFO [20], BWOA [17], RDWOA [25], CEBA [26], DECLS [24], ALCPSO [23], and CESCA [27]. These comparison algorithms are improvements of classic original algorithms and have a strong ability to find optimal solutions, so they allow the superiority of the MSMA algorithm to be evaluated more precisely. The authors chose the CEC2017 test functions and set the number of search agents to 30, the dimension of the search agents to 30, and the maximum number of evaluations to 150,000. Every algorithm was run 30 times independently to obtain the average value. Table 3 shows the average fitness value and standard deviation of every algorithm on the various test functions. The smaller the average fitness value and standard deviation, the better the algorithm performs on the current test function. As seen from the table, the average value and standard deviation of MSMA are larger than those of some comparison algorithms on only a few test functions, which proves that MSMA has great advantages over the other algorithms. This research uses the Friedman test to rank the algorithms' efficiency and to obtain the ARV value (average ranking value) of each algorithm. Observing Table 3, the authors can see that the MSMA algorithm ranks first on most test functions. This proves that MSMA also has a relatively strong advantage compared to the other peers on the CEC2017 test functions. Additionally, the Wilcoxon signed-rank test was used to assess whether the MSMA algorithm performs significantly better than the other advanced and improved algorithms in this experiment. Table 3 presents the p values calculated on the test functions, most of which are lower than 0.05. This proves that the MSMA algorithm has a big advantage over the remaining algorithms on most test functions.
The convergence diagrams were employed to clearly understand the convergence trends of the algorithms on the test functions. The authors selected six representative convergence graphs from the CEC2017 test functions. As shown in Figure 7, when the convergence trend of the MSMA algorithm slows down, the convergence speed becomes faster again after a certain number of evaluations, which proves that it is able to escape local optima well. The MSMA algorithm searches for the optimal solution on these six test functions better than the other advanced and improved algorithms.

5.4. Predicting Results of Employment Stability

During this experiment, the authors evaluated the validity of the MSMA-SVM with feature selection (MSMA-SVM-FS) model relative to its peers, the detailed results of which are presented in Table 4. From the results obtained, the authors can conclude that the ACC obtained by MSMA-SVM-FS was 86.4%, the MCC was 72.9%, the sensitivity was 82.3%, and the specificity was 89.9%, with standard deviations (STD) of 0.040, 0.081, 0.064, and 0.057, respectively. In addition, the optimal parameters and feature subsets were acquired directly by the MSMA method in our experiments, which means that the introduction of the multi-population structure mechanism gives the SMA a stronger search capability and better accuracy.
With the aim of determining the efficiency of the approach, the authors compared it with five other successful machine learning models, namely MSMA-SVM, SMA-SVM, ANN, RF, and KELM; the results are displayed in Figure 8. The results show that MSMA-SVM-FS outperforms SMA-SVM, ANN, RF, and KELM on all four evaluation metrics and that MSMA-SVM only outperforms MSMA-SVM-FS in sensitivity, but not in the other three metrics. Further, its STD is smaller than that of MSMA-SVM, SMA-SVM, ANN, RF, and KELM, indicating that the introduction of the multi-population structure strategy makes MSMA-SVM-FS perform better and more stably. On the ACC evaluation metric, the best performance was achieved by MSMA-SVM-FS, which was 2.4% higher than the second-ranked MSMA-SVM, closely followed by SMA-SVM and RF, with ANN achieving the worst result, 6.6% lower than that of MSMA-SVM-FS. The STD of MSMA-SVM-FS is smaller than that of MSMA-SVM and SMA-SVM, indicating that the MSMA-SVM and SMA-SVM models are less stable than MSMA-SVM-FS, while the enhanced MSMA-SVM-FS model achieves much better results. On the MCC evaluation metric, the best results were still achieved by MSMA-SVM-FS, followed by MSMA-SVM, which was 4.6% lower, accompanied by SMA-SVM and RF; ANN had the worst results, 12.5% lower than MSMA-SVM-FS, and MSMA-SVM-FS had the smallest STD of 0.081. In terms of the sensitivity metric, MSMA-SVM performed best, with MSMA-SVM-FS only 0.7% behind, followed by RF and SMA-SVM; the ANN model had the worst results, but concerning STD, MSMA-SVM-FS was the smallest at 0.064, and MSMA-SVM was the largest at 0.113. In terms of the specificity metric, MSMA-SVM-FS ranked first, followed by ANN, RF, KELM, MSMA-SVM, and SMA-SVM. MSMA-SVM-FS only differed from ANN by 2.4% and from MSMA-SVM by 5%; the worst was SMA-SVM at 84.9%. Regarding STD, MSMA-SVM-FS was again the smallest at 0.057.
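The four evaluation metrics reported above can be computed from confusion-matrix counts as in the following sketch (the example counts in the final line are arbitrary illustrations, not the study's data):

import math

def classification_metrics(tp, fp, tn, fn):
    """ACC, MCC, sensitivity, and specificity from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)                       # true positive rate
    spec = tn / (tn + fp)                       # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, mcc, sens, spec

print(classification_metrics(tp=51, fp=9, tn=80, fn=11))   # example counts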
During the process, the suggested MSMA not only achieved the optimal SVM super parameters settings, but it also achieved the best feature set. The authors took advantage of a 10-fold CV technique. Figure 9 illustrates the frequency of the major characteristics identified by the MSMA-SVM through the 10-fold CV procedure.
As displayed in the chart, the monthly salary of current employment (F20), monthly salary of first employment (F12), change in place of employment (F17), degree of specialty relevance of first employment (F11), and salary difference (F21) were the five most frequent characteristics, which appeared 10, 9, 9, 7, and 7 times, respectively. Consequently, it was concluded that those characteristics may play a central part in forecasting graduate employment.

6. Discussion

The simulation results reveal that postgraduate employment stability is influenced by the constraints of many factors, showing corresponding patterns in specific aspects and showing some inevitable links with most of the factors involved. Among them, the monthly salary of current employment (F20), the monthly salary of first employment (F12), change in place of employment (F17), degree of specialty relevance of first employment (F11), and salary difference (F21) have a great deal of influence on student employment stability. This section analyzes and predicts graduate student employment stability based on these five characteristic factors while further demonstrating the practical significance and validity of the MSMA-SVM model.
Among them, the monthly salary of current employment, the monthly salary of first employment, and the salary difference can be unified into a wage category for analysis. First, in terms of employment area, graduate student employment is mainly concentrated in large and medium-sized cities with higher costs of living, and the monthly employment salary (F12, F20) is closely related to the costs associated with daily life in those environments; in addition, compared to undergraduates, graduate students have higher employment expectations, and they have higher salary requirements in terms of being able to support themselves well. Secondly, the salary difference (F21) indicates the difference between the current monthly salary and the first monthly salary, and the salary difference can, to a certain extent, predict future salary packages. Graduate students do not choose employment immediately after their bachelor's degree, often because they believe that a higher level of education offers broader employment prospects. If the gap between the higher expectations that graduate students have and the real salary level is large, then graduate students will feel that the salary does not reflect their human resource value and labor contribution, which will reduce their confidence in their current jobs and affect their job satisfaction, leading to separation behavior; the probability of separation is higher for graduates at lower salary and benefit levels. Finally, from a comprehensive point of view, postgraduates weigh the current employment monthly salary, the first employment monthly salary, and the salary difference in order to seek better career development and a more favorable working environment, improve their quality of life, and achieve more sustainable and stable employment.
The degree of specialty relevance of first employment (F11) represents the relevance between the field of study and the work performed. According to the theory of person–job matching, stable and harmonious career development is only possible when personal traits and career traits are consistent. On the one hand, graduate students choose their graduate majors independently based on their undergraduate professional knowledge and ability, which is reflected in their subjective future career aspirations. On the other hand, the disciplinary strength of graduate students, the influence of supervisors, academic ability and professionalism, and the demands of the job market all directly or indirectly affect the choice of graduate employment positions. If there is inconsistency between the professional structure of postgraduate training and the structure of economic development, or if there is a gap between the academic goals of cultivation and real social and economic development, a deviation between the field of study and the employment industry will appear, manifesting specifically as a low-relevance employment position and a job that is less relevant to the student's field of study. In such cases, graduate students are prone to deciding to find another job that better reflects their own values. It can therefore be seen that the degree of relevance between a student's major and their first employment position can greatly affect the employment stability of graduate students.
Changes in the place of employment (F17) represent the difference in location type between the initial employment location and the current employment location. First, in recent years, major cities have realized that talent is an important resource for urban development and frequently introduce unprecedented policies to attract talent. By virtue of developed economic conditions, complete infrastructure, quality public services, and wide development space, large cities attract a continuous inflow of talent. Therefore, in order to squeeze into big cities, some postgraduates give up their majors and engage in jobs with a relatively low professional match; other postgraduates accumulate working experience in small and medium cities before moving to the job market of big cities. Secondly, changes in employment location often follow changes in occupation. In our re-study sample, the authors found that among the 128 graduate students employed in non-established positions, such as at private enterprises, 82 of them left those positions within three years of graduation, accounting for 64.06% of that group, which is 10.28 percentage points higher than the average separation rate of the sample. On the one hand, the reason for this is that postgraduates working in established positions have higher security in terms of social security, social reputation, and occupational safety, and thus higher job stability. On the other hand, non-established positions form a two-way selection market characterized by competition; although employees can enjoy good income and security, the competition is fierce and stressful, so the probability of leaving is higher.
This subsection provided a detailed analysis of graduate student employment stability through MSMA-SVM model simulation experiments and actual survey sampling. From the monthly salary of current employment, the monthly salary of first employment, and the salary difference, it can be seen that graduate students care first about their salary because it represents the guarantee of their current and future quality of life. The degree to which a student's specialization is relevant to their first job indicates that when employment is consistent with the field of study, it is easier for students to realize their own value and thus find a long-term and stable job. Changes in employment location indicate that graduate students are more likely to be employed in big cities with rich resources or in stable, established positions where they are able to realize their value. In summary, the MSMA-SVM model can reasonably analyze and predict the current employment situation of postgraduates, which will hopefully act as an effective reference for related postgraduate employment work.
Due to its strong optimization capability, the developed MSMA can also be applied to other optimization problems, such as multi-objective or many-objective optimization problems [75,76,77], big data optimization problems [78], and combinatorial optimization problems [79]. Moreover, it can be applied to tackle practical problems such as medical diagnosis [80,81,82,83], location-based services [84,85], service ecosystems [86], communication system conversion [87,88,89], kayak cycle phase segmentation [90], image dehazing and retrieval [91,92], information retrieval services [93,94,95], multi-view learning [96], human motion capture [97], green supplier selection [98], scheduling [99,100,101], and microgrid planning [102] problems.

7. Conclusions, Limitations, and Future Research

In this study, the authors developed an effective hybrid MSMA-SVM model that can be used to predict the employment stability of graduate students. This method's main innovation is the introduction of a multi-population mechanism into the SMA, which further balances its exploration and exploitation abilities. The proposed MSMA provides better solutions with better stability on the 30 CEC2017 benchmark functions when compared to several comparison algorithms. Meanwhile, when using MSMA to optimize the SVM, it is possible to acquire better parameter combinations and feature subsets than with other methods. According to the employment stability prediction model for graduate students, it was found that the career stability of graduate students within three years of graduation is low, and the monthly salary level of initial employment, the specialty relevance of initial employment, the location of the initial employment unit, and the nature of the initial employment unit are significant in predicting the separation behavior of graduates. The proposed method has more accurate and stable prediction abilities when dealing with the problem of graduate employment stability prediction compared to other machine learning methods.
This article has some limitations. First of all, there were not enough research samples; if more data samples are collected, better prediction performance and accuracy can be obtained. Second, the incomplete sample attributes of the study constrain the factors that affect the employment stability of graduate students, and these factors need to be discussed further. In addition, because the study sample comes from only one university, both the applicability of the model and the reliability of its predictions of postgraduate employment stability need to be proven further.
In future research, the authors will address the limitations for future work expansion, such as expanding the number of samples to enhance the prediction performance and accuracy of the model, expanding the number of employment attribute samples to enhance the precision of the model, and collecting samples from different regions to enhance the adaptability of the model. On the other hand, MSMA-SVM models will be applied to predict other problems such as disease diagnosis and financial risk prediction. In addition, it is expected that the MSMA algorithm can be extended to address different application areas such as photovoltaic cell optimization [103], resource requirement prediction [104,105], and the optimization of deep learning network nodes [106,107].

Author Contributions

Conceptualization, H.G. and H.C.; methodology, H.C. and G.L.; software, G.L.; validation, H.C., H.G. and G.L.; formal analysis, H.G.; investigation, G.L.; resources, H.C.; data curation, G.L.; writing—original draft preparation, G.L.; writing—review and editing, H.C., G.L. and H.G.; visualization, G.L. and H.G.; supervision, H.G.; project administration, G.L.; funding acquisition, H.C., H.G. and G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Wenzhou Philosophy and Social Science Planning (21wsk205).

Data Availability Statement

The data involved in this study are all public data, which can be downloaded through public channels.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bharambe, Y.; Mored, N.; Mulchandani, M.; Shankarmani, R.; Shinde, S.G. Assessing employability of students using data mining techniques. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Manipal, Karnataka, India, 13–16 September 2017; pp. 2110–2114.
  2. Li, L.; Zheng, Y.; Sun, X.H.; Wang, F.S. The Application of Decision Tree Algorithm in the Employment Management System. Appl. Mech. Mater. 2014, 543–547, 1639–1642.
  3. Liu, Y.; Hu, L.; Yan, F.; Zhang, B. Information Gain with Weight Based Decision Tree for the Employment Forecasting of Undergraduates. In Proceedings of the 2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, Washington, DC, USA, 20–23 August 2013; pp. 2210–2213.
  4. Mahdi, E.; Leiva, V.; Mara’Beh, S.; Martin-Barreiro, C. A New Approach to Predicting Cryptocurrency Returns Based on the Gold Prices with Support Vector Machines during the COVID-19 Pandemic Using Sensor-Related Data. Sensors 2021, 21, 6319.
  5. Tu, J.; Lin, A.; Chen, H.; Li, Y.; Li, C. Predict the Entrepreneurial Intention of Fresh Graduate Students Based on an Adaptive Support Vector Machine Framework. Math. Probl. Eng. 2019, 2019, 1–16.
  6. Cuong-Le, T.; Minh, H.-L.; Khatir, S.; Wahab, M.A.; Tran, M.T.; Mirjalili, S. A novel version of Cuckoo search algorithm for solving optimization problems. Expert Syst. Appl. 2021, 186, 115669.
  7. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2021, 191, 116158.
  8. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Abualigah, L.; Elaziz, M.A.; Oliva, D. EWOA-OPF: Effective Whale Optimization Algorithm to Solve Optimal Power Flow Problem. Electronics 2021, 10, 2975.
  9. Gandomi, A.H.; Roke, D. A Multi-Objective Evolutionary Framework for Formulation of Nonlinear Structural Systems. IEEE Trans. Ind. Inform. 2021, 1.
  10. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  11. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
  12. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  13. Zhao, X.; Zhang, X.; Cai, Z.; Tian, X.; Wang, X.; Huang, Y.; Chen, H.; Hu, L. Chaos enhanced grey wolf optimization wrapped ELM for diagnosis of paraquat-poisoned patients. Comput. Biol. Chem. 2019, 78, 481–490.
  14. Yang, X.-S. A New Metaheuristic Bat-Inspired Algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Studies in Computational Intelligence; González, J.R., Pelta, D.A., Cruz, C., Terrazas, G., Krasnogor, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74.
  15. Yang, X.-S. Firefly Algorithms for Multimodal Optimization. In International Symposium on Stochastic Algorithms; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178.
  16. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  17. Chen, H.; Xu, Y.; Wang, M.; Zhao, X. A balanced whale optimization algorithm for constrained engineering design problems. Appl. Math. Model. 2019, 71, 45–59.
  18. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249.
  19. Xu, Y.; Chen, H.; Luo, J.; Zhang, Q.; Jiao, S.; Zhang, X. Enhanced Moth-flame optimizer with mutation strategy for global optimization. Inf. Sci. 2019, 492, 181–203.
  20. Xu, Y.; Chen, H.; Heidari, A.A.; Luo, J.; Zhang, Q.; Zhao, X.; Li, C. An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks. Expert Syst. Appl. 2019, 129, 135–155.
  21. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
  22. Heidari, A.A.; Abbaspour, R.A.; Chen, H. Efficient boosted grey wolf optimizers for global search and kernel extreme learning machine training. Appl. Soft Comput. 2019, 81, 105521.
  23. Chen, W.-N.; Zhang, J.; Lin, Y.; Chen, N.; Zhan, Z.-H.; Chung, H.; Li, Y.; Shi, Y.-H. Particle Swarm Optimization with an Aging Leader and Challengers. IEEE Trans. Evol. Comput. 2012, 17, 241–258.
  24. Jia, D.; Zheng, G.; Khan, M.K. An effective memetic differential evolution algorithm based on chaotic local search. Inf. Sci. 2011, 181, 3175–3187.
  25. Chen, H.; Yang, C.; Heidari, A.A.; Zhao, X. An efficient double adaptive random spare reinforced whale optimization algorithm. Expert Syst. Appl. 2020, 154, 113018.
  26. Yu, H.; Zhao, N.; Wang, P.; Chen, H.; Li, C. Chaos-enhanced synchronized bat optimizer. Appl. Math. Model. 2020, 77, 1201–1215. [Google Scholar] [CrossRef]
  27. Lin, A.; Wu, Q.; Heidari, A.A.; Xu, Y.; Chen, H.; Geng, W.; Li, Y.; Li, C. Predicting Intentions of Students for Master Programs Using a Chaos-Induced Sine Cosine-Based Fuzzy K-Nearest Neighbor Classifier. IEEE Access 2019, 7, 67235–67248. [Google Scholar] [CrossRef]
  28. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  29. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  30. Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The Colony Predation Algorithm. J. Bionic Eng. 2021, 18, 674–710. [Google Scholar] [CrossRef]
  31. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  32. Zhao, S.; Wang, P.; Heidari, A.A.; Chen, H.; Turabieh, H.; Mafarja, M.; Li, C. Multilevel threshold image segmentation with diffusion association slime mould algorithm and Renyi’s entropy for chronic obstructive pulmonary disease. Comput. Biol. Med. 2021, 134, 104427. [Google Scholar] [CrossRef] [PubMed]
  33. Liu, L.; Zhao, D.; Yu, F.; Heidari, A.A.; Ru, J.; Chen, H.; Mafarja, M.; Turabieh, H.; Pan, Z. Performance optimization of differential evolution with slime mould algorithm for multilevel breast cancer image segmentation. Comput. Biol. Med. 2021, 138, 104910. [Google Scholar] [CrossRef]
  34. Yu, C.; Heidari, A.A.; Xue, X.; Zhang, L.; Chen, H.; Chen, W. Boosting quantum rotation gate embedded slime mould algorithm. Expert Syst. Appl. 2021, 181, 115082. [Google Scholar] [CrossRef]
  35. Liu, Y.; Heidari, A.A.; Ye, X.; Liang, G.; Chen, H.; He, C. Boosting slime mould algorithm for parameter identification of photovoltaic models. Energy 2021, 234, 121164. [Google Scholar] [CrossRef]
  36. Shi, B.; Ye, H.; Zheng, J.; Zhu, Y.; Heidari, A.A.; Zheng, L.; Chen, H.; Wang, L.; Wu, P. Early Recognition and Discrimination of COVID-19 Severity Using Slime Mould Support Vector Machine for Medical Decision-Making. IEEE Access 2021, 9, 121996–122015. [Google Scholar] [CrossRef]
  37. Premkumar, M.; Jangir, P.; Sowmya, R.; Alhelou, H.H.; Heidari, A.A.; Chen, H. MOSMA: Multi-Objective Slime Mould Algorithm Based on Elitist Non-Dominated Sorting. IEEE Access 2020, 9, 3229–3248. [Google Scholar] [CrossRef]
  38. Xia, X.; Gui, L.; Zhan, Z.-H. A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting. Appl. Soft Comput. 2018, 67, 126–140. [Google Scholar] [CrossRef]
  39. Zhang, L.; Zhang, C. Hopf bifurcation analysis of some hyperchaotic systems with time-delay controllers. Kybernetika 2008, 44, 35–42. [Google Scholar]
  40. Geyer, C.J. Markov Chain Monte Carlo Maximum Likelihood; Interface Foundation of North America: Fairfax Sta, VA, USA, 1991. [Google Scholar]
  41. Lai, X.; Zhou, Y. Analysis of multiobjective evolutionary algorithms on the biobjective traveling salesman problem (1,2). Multimedia Tools Appl. 2020, 79, 30839–30860. [Google Scholar] [CrossRef]
  42. Zhang, Y.; Liu, R.; Wang, X.; Chen, H.; Li, C. Boosted binary Harris hawks optimizer and feature selection. Eng. Comput. 2021, 37, 3741–3770. [Google Scholar] [CrossRef]
  43. Hu, J.; Chen, H.; Heidari, A.A.; Wang, M.; Zhang, X.; Chen, Y.; Pan, Z. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl. Based Syst. 2020, 213, 106684. [Google Scholar] [CrossRef]
  44. Zhang, X.; Xu, Y.; Yu, C.; Heidari, A.A.; Li, S.; Chen, H.; Li, C. Gaussian mutational chaotic fruit fly-built optimization and feature selection. Expert Syst. Appl. 2020, 141, 112976. [Google Scholar] [CrossRef]
  45. Li, Q.; Chen, H.; Huang, H.; Zhao, X.; Cai, Z.-N.; Tong, C.; Liu, W.; Tian, X. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis. Comput. Math. Methods Med. 2017, 2017, 1–15. [Google Scholar] [CrossRef]
  46. Liu, T.; Hu, L.; Ma, C.; Wang, Z.-Y.; Chen, H.-L. A fast approach for detection of erythemato-squamous diseases based on extreme learning machine with maximum relevance minimum redundancy feature selection. Int. J. Syst. Sci. 2013, 46, 919–931. [Google Scholar] [CrossRef]
  47. Hu, K.; Ye, J.; Fan, E.; Shen, S.; Huang, L.; Pi, J. A novel object tracking algorithm by fusing color and depth information based on single valued neutrosophic cross-entropy. J. Intell. Fuzzy Syst. 2017, 32, 1775–1786. [Google Scholar] [CrossRef] [Green Version]
  48. Hu, K.; He, W.; Ye, J.; Zhao, L.; Peng, H.; Pi, J. Online Visual Tracking of Weighted Multiple Instance Learning via Neutrosophic Similarity-Based Objectness Estimation. Symmetry 2019, 11, 832. [Google Scholar] [CrossRef] [Green Version]
  49. Chen, M.-R.; Zeng, G.-Q.; Lu, K.-D.; Weng, J. A Two-Layer Nonlinear Combination Method for Short-Term Wind Speed Prediction Based on ELM, ENN, and LSTM. IEEE Internet Things J. 2019, 6, 6997–7010. [Google Scholar] [CrossRef]
  50. Zeng, G.-Q.; Lu, K.; Dai, Y.-X.; Zhang, Z.; Chen, M.-R.; Zheng, C.-W.; Wu, D.; Peng, W.-W. Binary-coded extremal optimization for the design of PID controllers. Neurocomputing 2014, 138, 180–188. [Google Scholar] [CrossRef]
  51. Zeng, G.-Q.; Chen, J.; Dai, Y.-X.; Li, L.-M.; Zheng, C.-W.; Chen, M.-R. Design of fractional order PID controller for automatic regulator voltage system based on multi-objective extremal optimization. Neurocomputing 2015, 160, 173–184. [Google Scholar] [CrossRef]
  52. Zeng, G.-Q.; Xie, X.-Q.; Chen, M.-R.; Weng, J. Adaptive population extremal optimization-based PID neural network for multivariable nonlinear control systems. Swarm Evol. Comput. 2019, 44, 320–334. [Google Scholar] [CrossRef]
  53. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Liang, G.; Muhammad, K.; Chen, H. Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2D Kapur entropy. Knowl. Based Syst. 2021, 216, 106510. [Google Scholar] [CrossRef]
  54. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Oliva, D.; Muhammad, K.; Chen, H. Ant colony optimization with horizontal and vertical crossover search: Fundamental visions for multi-threshold image segmentation. Expert Syst. Appl. 2020, 167, 114122. [Google Scholar] [CrossRef]
  55. Zeng, G.-Q.; Lu, Y.-Z.; Mao, W.-J. Modified extremal optimization for the hard maximum satisfiability problem. J. Zhejiang Univ. Sci. C 2011, 12, 589–596. [Google Scholar] [CrossRef]
  56. Zeng, G.; Zheng, C.; Zhang, Z.; Lu, Y. An Backbone Guided Extremal Optimization Method for Solving the Hard Maximum Satisfiability Problem. Int. J. Innov. Comput. Inf. Control. 2012, 8, 8355–8366. [Google Scholar] [CrossRef] [Green Version]
  57. Shen, L.; Chen, H.; Yu, Z.; Kang, W.; Zhang, B.; Li, H.; Yang, B.; Liu, D. Evolving support vector machines using fruit fly optimization for medical data classification. Knowl. Based Syst. 2016, 96, 61–75. [Google Scholar] [CrossRef]
  58. Wang, M.; Chen, H.; Yang, B.; Zhao, X.; Hu, L.; Cai, Z.; Huang, H.; Tong, C. Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing 2017, 267, 69–84. [Google Scholar] [CrossRef]
  59. Wang, M.; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Appl. Soft Comput. 2020, 88, 105946. [Google Scholar] [CrossRef]
  60. Deng, W.; Xu, J.; Zhao, H.; Song, Y. A Novel Gate Resource Allocation Method Using Improved PSO-Based QEA. IEEE Trans. Intell. Transp. Syst. 2020, PP, 1–9. [Google Scholar] [CrossRef]
  61. Deng, W.; Xu, J.; Song, Y.; Zhao, H. An Effective Improved Co-evolution Ant Colony Optimization Algorithm with Multi-Strategies and Its Application. Int. J. Bio-Inspired Comput. 2020, 16, 158–170. [Google Scholar] [CrossRef]
  62. Deng, W.; Liu, H.; Xu, J.; Zhao, H.; Song, Y. An Improved Quantum-Inspired Differential Evolution Algorithm for Deep Belief Network. IEEE Trans. Instrum. Meas. 2020, 69, 7319–7327. [Google Scholar] [CrossRef]
  63. Zhao, H.; Liu, H.; Xu, J.; Deng, W. Performance Prediction Using High-Order Differential Mathematical Morphology Gradient Spectrum Entropy and Extreme Learning Machine. IEEE Trans. Instrum. Meas. 2020, 69, 4165–4172. [Google Scholar] [CrossRef]
  64. Zhao, X.; Li, D.; Yang, B.; Ma, C.; Zhu, Y.; Chen, H. Feature selection based on improved ant colony optimization for online detection of foreign fiber in cotton. Appl. Soft Comput. 2014, 24, 585–596. [Google Scholar] [CrossRef]
  65. Zhao, X.; Li, D.; Yang, B.; Chen, H.; Yang, X.; Yu, C.; Liu, S. A two-stage feature selection method with its application. Comput. Electr. Eng. 2015, 47, 114–125. [Google Scholar] [CrossRef]
  66. Zhang, X.; Du, K.-J.; Zhan, Z.-H.; Kwong, S.; Gu, T.-L.; Zhang, J. Cooperative Coevolutionary Bare-Bones Particle Swarm Optimization With Function Independent Decomposition for Large-Scale Supply Chain Network Design With Uncertainties. IEEE Trans. Cybern. 2019, 50, 4454–4468. [Google Scholar] [CrossRef]
  67. Chen, Z.-G.; Zhan, Z.-H.; Lin, Y.; Gong, Y.-J.; Gu, T.-L.; Zhao, F.; Yuan, H.-Q.; Chen, X.; Li, Q.; Zhang, J. Multiobjective Cloud Workflow Scheduling: A Multiple Populations Ant Colony System Approach. IEEE Trans. Cybern. 2019, 49, 2912–2926. [Google Scholar] [CrossRef]
  68. Wang, Z.-J.; Zhan, Z.-H.; Yu, W.-J.; Lin, Y.; Zhang, J.; Gu, T.-L.; Zhang, J. Dynamic Group Learning Distributed Particle Swarm Optimization for Large-Scale Optimization and Its Application in Cloud Workflow Scheduling. IEEE Trans. Cybern. 2020, 50, 2715–2729. [Google Scholar] [CrossRef]
  69. Yang, Z.; Li, K.; Guo, Y.; Ma, H.; Zheng, M. Compact real-valued teaching-learning based optimization with the applications to neural network training. Knowl. Based Syst. 2018, 159, 51–62. [Google Scholar] [CrossRef]
  70. Zhou, S.-Z.; Zhan, Z.-H.; Chen, Z.-G.; Kwong, S.; Zhang, J. A Multi-Objective Ant Colony System Algorithm for Airline Crew Rostering Problem with Fairness and Satisfaction. IEEE Trans. Intell. Transp. Syst. 2020, 22, 6784–6798. [Google Scholar] [CrossRef]
  71. Liang, D.; Zhan, Z.-H.; Zhang, Y.; Zhang, J. An Efficient Ant Colony System Approach for New Energy Vehicle Dispatch Problem. IEEE Trans. Intell. Transp. Syst. 2019, 21, 4784–4797. [Google Scholar] [CrossRef]
  72. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Tech. Rep. 2016, 635, 490. [Google Scholar]
  73. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  74. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
  75. Hua, Y.; Liu, Q.; Hao, K.; Jin, Y. A Survey of Evolutionary Algorithms for Multi-Objective Optimization Problems with Irregular Pareto Fronts. IEEE/CAA J. Autom. Sin. 2021, 8, 303–318. [Google Scholar] [CrossRef]
  76. Zhang, W.; Hou, W.; Li, C.; Yang, W.; Gen, M. Multidirection Update-Based Multiobjective Particle Swarm Optimization for Mixed No-Idle Flow-Shop Scheduling Problem. Complex Syst. Model. Simul. 2021, 1, 176–197. [Google Scholar] [CrossRef]
  77. Gu, Z.-M.; Wang, G.-G. Improving NSGA-III algorithms with information feedback models for large-scale many-objective optimization. Futur. Gener. Comput. Syst. 2020, 107, 49–69. [Google Scholar] [CrossRef]
  78. Yi, J.-H.; Deb, S.; Dong, J.; Alavi, A.H.; Wang, G.-G. An improved NSGA-III algorithm with adaptive mutation operator for Big Data optimization problems. Futur. Gener. Comput. Syst. 2018, 88, 571–585. [Google Scholar] [CrossRef]
  79. Zhao, F.; Di, S.; Cao, J.; Tang, J. Jonrinaldi A Novel Cooperative Multi-Stage Hyper-Heuristic for Combination Optimization Problems. Complex Syst. Model. Simul. 2021, 1, 91–108. [Google Scholar] [CrossRef]
  80. Hu, Z.; Wang, J.; Zhang, C.; Luo, Z.; Luo, X.; Xiao, L.; Shi, J. Uncertainty Modeling for Multi center Autism Spectrum Disorder Classification Using Takagi-Sugeno-Kang Fuzzy Systems. IEEE Trans. Cogn. Dev. Syst. 2021, 1. [Google Scholar] [CrossRef]
  81. Chen, C.Z.; Wu, Q.; Li, Z.Y.; Xiao, L.; Hu, Z.Y. Diagnosis of Alzheimer’s disease based on Deeply-Fused Nets. Comb. Chem. High Throughput Screen. 2020, 24, 781–789. [Google Scholar] [CrossRef]
  82. Fei, X.; Wang, J.; Ying, S.; Hu, Z.; Shi, J. Projective parameter transfer based sparse multiple empirical kernel learning Machine for diagnosis of brain disease. Neurocomputing 2020, 413, 271–283. [Google Scholar] [CrossRef]
  83. Saber, A.; Sakr, M.; Abo-Seida, O.M.; Keshk, A.; Chen, H. A Novel Deep-Learning Model for Automatic Detection and Classification of Breast Cancer Using the Transfer-Learning Technique. IEEE Access 2021, 9, 71194–71209. [Google Scholar] [CrossRef]
  84. Wu, Z.; Li, G.; Shen, S.; Lian, X.; Chen, E.; Xu, G. Constructing dummy query sequences to protect location privacy and query privacy in location-based services. World Wide Web 2021, 24, 25–49. [Google Scholar] [CrossRef]
  85. Wu, Z.; Wang, R.; Li, Q.; Lian, X.; Xu, G.; Chen, E.; Liu, X. A Location Privacy-Preserving System Based on Query Range Cover-Up or Location-Based Services. IEEE Trans. Veh. Technol. 2020, 69, 5244–5254. [Google Scholar] [CrossRef]
  86. Xue, X.; Zhou, D.; Chen, F.; Yu, X.; Feng, Z.; Duan, Y.; Meng, L.; Zhang, M. From SOA to VOA: A Shift in Understanding the Operation and Evolution of Service Ecosystem. IEEE Trans. Serv. Comput. 2021, 1. [Google Scholar] [CrossRef]
  87. Zhang, L.; Zou, Y.; Wang, W.; Jin, Z.; Su, Y.; Chen, H. Resource allocation and trust computing for blockchain-enabled edge computing system. Comput. Secur. 2021, 105, 102249. [Google Scholar] [CrossRef]
  88. Zhang, L.; Zhang, Z.; Wang, W.; Waqas, R.; Zhao, C.; Kim, S.; Chen, H. A Covert Communication Method Using Special Bitcoin Addresses Generated by Vanitygen. Comput. Mater. Contin. 2020, 65, 597–616. [Google Scholar]
  89. Zhang, L.; Zhang, Z.; Wang, W.; Jin, Z.; Su, Y.; Chen, H. Research on a Covert Communication Model Realized by Using Smart Contracts in Blockchain Environment. IEEE Syst. J. 2021, 1–12. [Google Scholar] [CrossRef]
  90. Qiu, S.; Hao, Z.; Wang, Z.; Liu, L.; Liu, J.; Zhao, H.; Fortino, G. Sensor Combination Selection Strategy for Kayak Cycle Phase Segmentation Based on Body Sensor Networks. IEEE Internet Things J. 2021, 1. [Google Scholar] [CrossRef]
  91. Zhang, X.; Wang, T.; Wang, J.; Tang, G.; Zhao, L. Pyramid Channel-based Feature Attention Network for image dehazing. Comput. Vis. Image Underst. 2020, 197–198, 103003. [Google Scholar] [CrossRef]
  92. Liu, H.; Li, X.; Zhang, S.; Tian, Q. Adaptive Hashing With Sparse Matrix Factorization. IEEE Trans. Neural Networks Learn. Syst. 2019, 31, 4318–4329. [Google Scholar] [CrossRef]
  93. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X. A user sensitive subject protection approach for book search service. J. Assoc. Inf. Sci. Technol. 2020, 71, 183–195. [Google Scholar] [CrossRef]
  94. Wu, Z.; Shen, S.; Lian, X.; Su, X.; Chen, E. A dummy-based user privacy protection approach for text information retrieval. Knowl. Based Syst. 2020, 195, 105679. [Google Scholar] [CrossRef]
  95. Wu, Z.; Shen, S.; Zhou, H.; Li, H.; Lu, C.; Zou, D. An effective approach for the protection of user commodity viewing privacy in e-commerce website. Knowl. Based Syst. 2021, 220, 106952. [Google Scholar] [CrossRef]
  96. Liu, H.; Liu, L.; Le, T.D.; Lee, I.; Sun, S.; Li, J. Nonparametric Sparse Matrix Decomposition for Cross-View Dimensionality Reduction. IEEE Trans. Multimedia 2017, 19, 1848–1859. [Google Scholar] [CrossRef]
  97. Qiu, S.; Zhao, H.; Jiang, N.; Wu, D.; Song, G.; Zhao, H.; Wang, Z. Sensor network oriented human motion capture via wearable intelligent system. Int. J. Intell. Syst. 2021, 37, 1646–1673. [Google Scholar] [CrossRef]
  98. Liu, P.; Gao, H. A novel green supplier selection method based on the interval type-2 fuzzy prioritized choquet bonferroni means. IEEE/CAA J. Autom. Sin. 2020, 1–17. [Google Scholar] [CrossRef]
  99. Han, X.; Han, Y.; Chen, Q.; Li, J.; Sang, H.; Liu, Y.; Pan, Q.; Nojima, Y. Distributed Flow Shop Scheduling with Sequence-Dependent Setup Times Using an Improved Iterated Greedy Algorithm. Complex Syst. Model. Simul. 2021, 1, 198–217. [Google Scholar] [CrossRef]
  100. Gao, D.; Wang, G.-G.; Pedrycz, W. Solving Fuzzy Job-Shop Scheduling Problem Using DE Algorithm Improved by a Selection Mechanism. IEEE Trans. Fuzzy Syst. 2020, 28, 3265–3275. [Google Scholar] [CrossRef]
  101. Cao, X.; Cao, T.; Gao, F.; Guan, X. Risk-Averse Storage Planning for Improving RES Hosting Capacity Under Uncertain Siting Choices. IEEE Trans. Sustain. Energy 2021, 12, 1984–1995. [Google Scholar] [CrossRef]
  102. Cao, X.; Wang, J.; Wang, J.; Zeng, B. A Risk-Averse Conic Model for Networked Microgrids Planning with Reconfiguration and Reorganizations. IEEE Trans. Smart Grid 2020, 11, 696–709. [Google Scholar] [CrossRef]
  103. Ramadan, A.; Kamel, S.; Taha, I.B.M.; Tostado-Véliz, M. Parameter Estimation of Modified Double-Diode and Triple-Diode Photovoltaic Models Based on Wild Horse Optimizer. Electronics 2021, 10, 2308. [Google Scholar] [CrossRef]
  104. Liu, Y.; Ran, J.; Hu, H.; Tang, B. Energy-Efficient Virtual Network Function Reconfiguration Strategy Based on Short-Term Resources Requirement Prediction. Electronics 2021, 10, 2287. [Google Scholar] [CrossRef]
  105. Shafqat, W.; Malik, S.; Lee, K.-T.; Kim, D.-H. PSO Based Optimized Ensemble Learning and Feature Selection Approach for Efficient Energy Forecast. Electronics 2021, 10, 2188. [Google Scholar] [CrossRef]
  106. Choi, H.-T.; Hong, B.-W. Unsupervised Object Segmentation Based on Bi-Partitioning Image Model Integrated with Classification. Electronics 2021, 10, 2296. [Google Scholar] [CrossRef]
  107. Saeed, U.; Shah, S.Y.; Shah, S.A.; Ahmad, J.; Alotaibi, A.A.; Althobaiti, T.; Ramzan, N.; Alomainy, A.; Abbasi, Q.H. Discrete Human Activity Recognition and Fall Detection by Combining FMCW RADAR Data of Heterogeneous Environments for Independent Assistive Living. Electronincs 2021, 10, 2237. [Google Scholar] [CrossRef]
Figure 1. Variations in vb and vc trends.
Figure 2. Flowchart of the suggested MSMA-SVM model.
Figure 3. (a) Three-dimensional location distribution of MSMA; (b) two-dimensional location distribution of MSMA; (c) MSMA trajectory in the first dimension; (d) mean fitness of MSMA; (e) convergence graphs of MSMA and SMA.
Figure 4. Balance analysis of MSMA and SMA.
Figure 5. Diversity analysis of MSMA and SMA.
Figure 6. Convergence tendency of MSMA and the original algorithms.
Figure 7. Convergence trends of MSMA and well-established algorithms.
Figure 8. Classification results of five models in terms of four metrics.
Figure 9. Frequency of the features chosen by MSMA-SVM through the 10-fold CV procedure.
Table 1. Description of the 26 attributes.

ID | Attribute | Description
F1 | gender | Male and female students are marked as 1 and 2, respectively.
F2 | political status (PS) | There are four categories: Communist Party members, reserve party members, Communist Youth League members, and the masses, denoted by 1, 2, 3, and 13, respectively.
F3 | division of liberal arts and science (DLS) | Liberal arts and sciences are indicated by 1 and 2, respectively.
F4 | years of schooling (YS) | The 3-year and 4-year academic terms are indicated by 3 and 4, respectively.
F5 | students with difficulties (SWD) | There are four categories: non-difficult students, employment difficulties, family financial difficulties, and dual employment and family financial difficulties, indicated by 0, 1, 2, and 3, respectively.
F6 | student origin (OS) | There are three categories: urban, township, and rural, denoted by 1, 2, and 3, respectively.
F7 | career development after graduation (CDG) | There are three categories: direct employment, pending employment, and further education, indicated by 1, 2, and 3, respectively.
F8 | unit of first employment (UFE) | Employment pending is indicated by 0, state organizations by 10, scientific research institutions by 20, higher education institutions by 21, middle and junior high education institutions by 22, health and medical institutions by 23, other institutions by 29, state-owned enterprises by 31, foreign-funded enterprises by 32, private enterprises by 39, troops by 40, rural organizations by 55, and self-employment by 99.
F9 | location of first employment (LFE) | Employment pending is indicated by 0, sub-provincial and above large cities by 1, prefecture-level cities by 2, and counties and villages by 3.
F10 | position of first employment (PFE) | Employment pending is represented by 0, civil servants by 10, doctoral students and researchers by 11, engineers and technicians by 13, teaching staff by 24, professional and technical staff by 29, commercial service staff and clerks by 30, and military personnel by 80.
F11 | degree of specialty relevance of first employment (DSRFE) | Measures the correlation between major and job; the higher the percentage, the higher the correlation.
F12 | monthly salary of first employment (MSFE) | Measures the average monthly salary earned; higher values indicate higher salary levels.
F13 | status of current employment (SCE) | The employment status three years after graduation: employment, pending employment, and further education are represented by 1, 2, and 3, respectively.
F14 | employment change (EC) | Compares the employment unit three years after graduation with the initial employment unit; no change is indicated by 0 and any change by 1.
F15 | unit of current employment (UCE) | The nature of the employment unit three years after graduation, expressed in the same way as the nature of the initial employment unit in F8.
F16 | location of current employment (LCE) | The type of employment location three years after graduation, expressed in the same way as the initial employment location in F9.
F17 | change in place of employment (CPE) | Measures the change in employment location three years after graduation relative to the initial employment location, expressed as the difference between the F16 current location type and the F9 initial location type; the larger the absolute value of the difference, the larger the change in employment location.
F18 | position of current employment (PCE) | The job type three years after graduation, expressed in the same way as the initial employment job type in F10.
F19 | specialty relevance of current employment (SRCE) | The professional relevance of employment three years after graduation, expressed in the same way as in F11.
F20 | monthly salary of current employment (MSCE) | The monthly salary level three years after graduation, expressed in the same way as the initial monthly salary level in F12.
F21 | salary difference (SD) | Measures the change in the graduates' monthly salary between current and initial employment, i.e., the difference between the F20 and F12 monthly salary levels; a larger value indicates a larger increase in monthly salary.
F22 | grade point average (GPA) | Assesses how much the postgraduate students learned while in school; the average of the final grades of courses taken, with higher averages indicating higher-quality learning.
F23 | scores of teaching practice (STP) | Assesses the quality of learning in postgraduate teaching practice sessions; excellent, good, moderate, pass, and fail are expressed as 1, 2, 3, 4, and 5, respectively.
F24 | scores of social practices (SSP) | Assesses how much the postgraduate students learned in social practice sessions; excellent, good, moderate, pass, and fail are expressed as 1, 2, 3, 4, and 5, respectively.
F25 | scores of academic reports (SAR) | Assesses how much the postgraduate students learned during academic reporting sessions; excellent, good, moderate, pass, and fail are expressed as 1, 2, 3, 4, and 5, respectively.
F26 | scores of graduation thesis (SGT) | Assesses how much the postgraduate students learned during the thesis sessions; excellent, good, moderate, pass, and fail are expressed as 1, 2, 3, 4, and 5, respectively.
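To make the coding scheme of Table 1 concrete, the sketch below shows one way the raw records could be mapped to the numeric attributes. The pandas workflow, raw column names, and helper function are illustrative assumptions for this article, not the authors' released preprocessing code; only the numeric codes themselves mirror Table 1.

```python
import pandas as pd

# Five-level grading scale shared by F23-F26 (Table 1).
SCORE_MAP = {"excellent": 1, "good": 2, "moderate": 3, "pass": 4, "fail": 5}

def encode_attributes(raw: pd.DataFrame) -> pd.DataFrame:
    """Map hypothetical raw survey fields to the numeric attributes of Table 1."""
    enc = pd.DataFrame(index=raw.index)
    enc["F1"] = raw["gender"].map({"male": 1, "female": 2})
    enc["F3"] = raw["division"].map({"liberal_arts": 1, "science": 2})
    enc["F6"] = raw["origin"].map({"urban": 1, "township": 2, "rural": 3})
    for src, dst in [("teaching_practice", "F23"), ("social_practice", "F24"),
                     ("academic_report", "F25"), ("graduation_thesis", "F26")]:
        enc[dst] = raw[src].map(SCORE_MAP)
    # Derived attributes defined in Table 1:
    enc["F17"] = raw["location_current"] - raw["location_first"]  # CPE
    enc["F21"] = raw["salary_current"] - raw["salary_first"]      # SD
    return enc
```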
Table 2. Comparison results of different original algorithms (best scores obtained so far). Rank 1 denotes the best average result on each function; the p-values compare each algorithm with MSMA ("—" marks MSMA itself).

Fn | Metric | MSMA | SMA | DE | GWO | BA | FA | WOA | MFO | SCA
F1 | Avg | 1.31 × 10^2 | 8.03 × 10^3 | 1.79 × 10^3 | 2.24 × 10^9 | 5.72 × 10^5 | 1.55 × 10^10 | 2.36 × 10^7 | 9.38 × 10^9 | 1.41 × 10^10
F1 | Std | 1.68 × 10^2 | 7.18 × 10^3 | 2.96 × 10^3 | 1.68 × 10^9 | 3.78 × 10^5 | 1.58 × 10^9 | 1.86 × 10^7 | 7.26 × 10^9 | 1.89 × 10^9
F1 | Rank | 1 | 3 | 2 | 6 | 4 | 9 | 5 | 7 | 8
F1 | p-value | — | 1.73 × 10^−6 | 3.88 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F2 | Avg | 8.31 × 10^4 | 6.49 × 10^2 | 1.26 × 10^24 | 2.36 × 10^32 | 1.75 × 10^3 | 6.49 × 10^34 | 2.84 × 10^26 | 1.31 × 10^38 | 7.01 × 10^36
F2 | Std | 2.55 × 10^5 | 9.69 × 10^2 | 3.56 × 10^24 | 9.82 × 10^32 | 8.51 × 10^3 | 1.54 × 10^35 | 1.11 × 10^27 | 6.86 × 10^38 | 3.82 × 10^37
F2 | Rank | 3 | 1 | 4 | 6 | 2 | 7 | 5 | 9 | 8
F2 | p-value | — | 7.51 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.37 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F3 | Avg | 3.00 × 10^2 | 3.00 × 10^2 | 6.26 × 10^4 | 3.85 × 10^4 | 3.00 × 10^2 | 6.85 × 10^4 | 2.18 × 10^5 | 1.09 × 10^5 | 4.37 × 10^4
F3 | Std | 3.11 × 10^−5 | 2.80 × 10^−1 | 1.13 × 10^4 | 1.15 × 10^4 | 1.39 × 10^−1 | 7.95 × 10^3 | 6.96 × 10^4 | 5.83 × 10^4 | 7.84 × 10^3
F3 | Rank | 1 | 3 | 6 | 4 | 2 | 7 | 9 | 8 | 5
F3 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F4 | Avg | 4.01 × 10^2 | 4.92 × 10^2 | 4.91 × 10^2 | 5.90 × 10^2 | 4.81 × 10^2 | 1.49 × 10^3 | 5.69 × 10^2 | 1.60 × 10^3 | 1.53 × 10^3
F4 | Std | 1.62 × 10^0 | 2.69 × 10^1 | 7.26 × 10^0 | 8.24 × 10^1 | 3.02 × 10^1 | 1.92 × 10^2 | 3.31 × 10^1 | 8.13 × 10^2 | 2.85 × 10^2
F4 | Rank | 1 | 4 | 3 | 6 | 2 | 7 | 5 | 9 | 8
F4 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F5 | Avg | 5.90 × 10^2 | 5.94 × 10^2 | 6.26 × 10^2 | 6.09 × 10^2 | 7.94 × 10^2 | 7.66 × 10^2 | 7.81 × 10^2 | 7.13 × 10^2 | 7.96 × 10^2
F5 | Std | 2.31 × 10^1 | 2.55 × 10^1 | 8.69 × 10^0 | 2.52 × 10^1 | 5.42 × 10^1 | 1.12 × 10^1 | 6.49 × 10^1 | 5.43 × 10^1 | 1.89 × 10^1
F5 | Rank | 1 | 2 | 4 | 3 | 8 | 6 | 7 | 5 | 9
F5 | p-value | — | 5.86 × 10^−1 | 6.98 × 10^−6 | 6.04 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F6 | Avg | 6.03 × 10^2 | 6.02 × 10^2 | 6.00 × 10^2 | 6.08 × 10^2 | 6.73 × 10^2 | 6.46 × 10^2 | 6.71 × 10^2 | 6.38 × 10^2 | 6.52 × 10^2
F6 | Std | 1.28 × 10^0 | 1.30 × 10^0 | 5.59 × 10^−14 | 3.70 × 10^0 | 1.16 × 10^1 | 2.57 × 10^0 | 1.02 × 10^1 | 1.20 × 10^1 | 4.36 × 10^0
F6 | Rank | 3 | 2 | 1 | 4 | 9 | 6 | 8 | 5 | 7
F6 | p-value | — | 2.85 × 10^−2 | 1.73 × 10^−6 | 2.35 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F7 | Avg | 8.25 × 10^2 | 8.35 × 10^2 | 8.61 × 10^2 | 8.74 × 10^2 | 1.73 × 10^3 | 1.42 × 10^3 | 1.25 × 10^3 | 1.14 × 10^3 | 1.15 × 10^3
F7 | Std | 1.92 × 10^1 | 2.31 × 10^1 | 1.18 × 10^1 | 4.93 × 10^1 | 2.24 × 10^2 | 3.68 × 10^1 | 8.20 × 10^1 | 1.51 × 10^2 | 3.99 × 10^1
F7 | Rank | 1 | 2 | 3 | 4 | 9 | 8 | 7 | 5 | 6
F7 | p-value | — | 6.87 × 10^−2 | 4.29 × 10^−6 | 1.36 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F8 | Avg | 8.82 × 10^2 | 9.04 × 10^2 | 9.24 × 10^2 | 8.96 × 10^2 | 1.02 × 10^3 | 1.06 × 10^3 | 1.01 × 10^3 | 1.02 × 10^3 | 1.06 × 10^3
F8 | Std | 2.07 × 10^1 | 3.00 × 10^1 | 9.78 × 10^0 | 2.54 × 10^1 | 4.55 × 10^1 | 1.20 × 10^1 | 5.04 × 10^1 | 5.39 × 10^1 | 2.19 × 10^1
F8 | Rank | 1 | 3 | 4 | 2 | 6 | 9 | 5 | 7 | 8
F8 | p-value | — | 5.67 × 10^−3 | 2.35 × 10^−6 | 1.85 × 10^−2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F9 | Avg | 1.08 × 10^3 | 2.84 × 10^3 | 9.00 × 10^2 | 2.17 × 10^3 | 1.42 × 10^4 | 5.49 × 10^3 | 8.24 × 10^3 | 7.75 × 10^3 | 5.95 × 10^3
F9 | Std | 1.44 × 10^2 | 1.55 × 10^3 | 2.11 × 10^−14 | 1.05 × 10^3 | 5.31 × 10^3 | 6.66 × 10^2 | 2.81 × 10^3 | 1.97 × 10^3 | 1.11 × 10^3
F9 | Rank | 2 | 4 | 1 | 3 | 9 | 5 | 8 | 7 | 6
F9 | p-value | — | 8.47 × 10^−6 | 1.73 × 10^−6 | 2.35 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F10 | Avg | 3.86 × 10^3 | 4.23 × 10^3 | 6.29 × 10^3 | 3.92 × 10^3 | 5.54 × 10^3 | 8.21 × 10^3 | 6.32 × 10^3 | 5.24 × 10^3 | 8.32 × 10^3
F10 | Std | 6.45 × 10^2 | 6.50 × 10^2 | 2.26 × 10^2 | 6.84 × 10^2 | 6.85 × 10^2 | 3.30 × 10^2 | 9.04 × 10^2 | 6.51 × 10^2 | 3.21 × 10^2
F10 | Rank | 1 | 3 | 6 | 2 | 5 | 8 | 7 | 4 | 9
F10 | p-value | — | 3.85 × 10^−3 | 1.73 × 10^−6 | 9.59 × 10^−1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F11 | Avg | 1.19 × 10^3 | 1.27 × 10^3 | 1.18 × 10^3 | 2.17 × 10^3 | 1.29 × 10^3 | 3.95 × 10^3 | 2.11 × 10^3 | 4.85 × 10^3 | 2.49 × 10^3
F11 | Std | 3.49 × 10^1 | 5.92 × 10^1 | 2.27 × 10^1 | 1.00 × 10^3 | 6.85 × 10^1 | 6.09 × 10^2 | 7.63 × 10^2 | 4.69 × 10^3 | 5.58 × 10^2
F11 | Rank | 2 | 3 | 1 | 6 | 4 | 8 | 5 | 9 | 7
F11 | p-value | — | 1.24 × 10^−5 | 7.81 × 10^−1 | 1.73 × 10^−6 | 2.60 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F12 | Avg | 2.99 × 10^3 | 1.39 × 10^6 | 3.25 × 10^6 | 1.15 × 10^8 | 2.80 × 10^6 | 1.71 × 10^9 | 8.42 × 10^7 | 3.92 × 10^8 | 1.37 × 10^9
F12 | Std | 5.18 × 10^2 | 1.27 × 10^6 | 1.84 × 10^6 | 3.27 × 10^8 | 1.65 × 10^6 | 4.40 × 10^8 | 7.66 × 10^7 | 6.94 × 10^8 | 4.14 × 10^8
F12 | Rank | 1 | 2 | 4 | 6 | 3 | 9 | 5 | 7 | 8
F12 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F13 | Avg | 4.43 × 10^3 | 2.80 × 10^4 | 8.88 × 10^4 | 3.05 × 10^7 | 3.75 × 10^5 | 7.12 × 10^8 | 2.29 × 10^5 | 4.63 × 10^7 | 5.09 × 10^8
F13 | Std | 1.76 × 10^3 | 2.43 × 10^4 | 5.01 × 10^4 | 8.51 × 10^7 | 1.76 × 10^5 | 1.95 × 10^8 | 3.25 × 10^5 | 1.93 × 10^8 | 1.46 × 10^8
F13 | Rank | 1 | 2 | 3 | 6 | 5 | 9 | 4 | 7 | 8
F13 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F14 | Avg | 1.69 × 10^3 | 6.08 × 10^4 | 8.10 × 10^4 | 2.12 × 10^5 | 8.68 × 10^3 | 2.78 × 10^5 | 1.05 × 10^6 | 1.07 × 10^5 | 2.32 × 10^5
F14 | Std | 1.85 × 10^2 | 2.74 × 10^4 | 4.73 × 10^4 | 3.41 × 10^5 | 5.59 × 10^3 | 1.38 × 10^5 | 1.15 × 10^6 | 1.62 × 10^5 | 1.36 × 10^5
F14 | Rank | 1 | 3 | 4 | 6 | 2 | 8 | 9 | 5 | 7
F14 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F15 | Avg | 2.04 × 10^3 | 2.97 × 10^4 | 1.54 × 10^4 | 2.44 × 10^5 | 1.36 × 10^5 | 7.90 × 10^7 | 7.87 × 10^4 | 6.23 × 10^4 | 2.07 × 10^7
F15 | Std | 2.08 × 10^2 | 1.46 × 10^4 | 1.08 × 10^4 | 6.85 × 10^5 | 6.24 × 10^4 | 3.57 × 10^7 | 4.96 × 10^4 | 5.54 × 10^4 | 1.60 × 10^7
F15 | Rank | 1 | 3 | 2 | 7 | 6 | 9 | 5 | 4 | 8
F15 | p-value | — | 2.35 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F16 | Avg | 2.22 × 10^3 | 2.52 × 10^3 | 2.17 × 10^3 | 2.47 × 10^3 | 3.61 × 10^3 | 3.56 × 10^3 | 3.67 × 10^3 | 3.14 × 10^3 | 3.74 × 10^3
F16 | Std | 2.05 × 10^2 | 3.15 × 10^2 | 1.40 × 10^2 | 2.68 × 10^2 | 4.46 × 10^2 | 1.68 × 10^2 | 6.32 × 10^2 | 3.36 × 10^2 | 1.80 × 10^2
F16 | Rank | 2 | 4 | 1 | 3 | 7 | 6 | 8 | 5 | 9
F16 | p-value | — | 5.29 × 10^−4 | 2.80 × 10^−1 | 1.29 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F17 | Avg | 1.93 × 10^3 | 2.28 × 10^3 | 1.89 × 10^3 | 2.03 × 10^3 | 2.79 × 10^3 | 2.62 × 10^3 | 2.53 × 10^3 | 2.47 × 10^3 | 2.47 × 10^3
F17 | Std | 1.41 × 10^2 | 2.33 × 10^2 | 7.80 × 10^1 | 1.66 × 10^2 | 3.14 × 10^2 | 1.13 × 10^2 | 2.60 × 10^2 | 2.56 × 10^2 | 1.80 × 10^2
F17 | Rank | 2 | 4 | 1 | 3 | 9 | 8 | 7 | 5 | 6
F17 | p-value | — | 1.13 × 10^−5 | 4.53 × 10^−1 | 2.18 × 10^−2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.35 × 10^−6 | 1.73 × 10^−6
F18 | Avg | 2.19 × 10^3 | 3.49 × 10^5 | 4.56 × 10^5 | 7.66 × 10^5 | 2.28 × 10^5 | 5.26 × 10^6 | 3.15 × 10^6 | 6.37 × 10^6 | 4.05 × 10^6
F18 | Std | 1.67 × 10^2 | 3.22 × 10^5 | 2.64 × 10^5 | 9.10 × 10^5 | 2.49 × 10^5 | 2.44 × 10^6 | 3.44 × 10^6 | 9.45 × 10^6 | 2.28 × 10^6
F18 | Rank | 1 | 3 | 4 | 5 | 2 | 8 | 6 | 9 | 7
F18 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F19 | Avg | 2.61 × 10^3 | 2.68 × 10^4 | 1.54 × 10^4 | 5.05 × 10^6 | 9.81 × 10^5 | 1.21 × 10^8 | 4.14 × 10^6 | 5.36 × 10^6 | 3.37 × 10^7
F19 | Std | 4.59 × 10^2 | 2.26 × 10^4 | 1.12 × 10^4 | 2.49 × 10^7 | 4.02 × 10^5 | 5.76 × 10^7 | 3.06 × 10^6 | 1.87 × 10^7 | 1.94 × 10^7
F19 | Rank | 1 | 3 | 2 | 6 | 4 | 9 | 5 | 7 | 8
F19 | p-value | — | 5.22 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.13 × 10^−6 | 1.73 × 10^−6
F20 | Avg | 2.28 × 10^3 | 2.40 × 10^3 | 2.20 × 10^3 | 2.38 × 10^3 | 3.03 × 10^3 | 2.65 × 10^3 | 2.73 × 10^3 | 2.71 × 10^3 | 2.70 × 10^3
F20 | Std | 1.07 × 10^2 | 1.89 × 10^2 | 8.56 × 10^1 | 1.26 × 10^2 | 2.14 × 10^2 | 8.76 × 10^1 | 1.96 × 10^2 | 2.28 × 10^2 | 1.17 × 10^2
F20 | Rank | 2 | 4 | 1 | 3 | 9 | 5 | 8 | 7 | 6
F20 | p-value | — | 4.99 × 10^−3 | 1.83 × 10^−3 | 4.99 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.35 × 10^−6 | 1.73 × 10^−6
F21 | Avg | 2.37 × 10^3 | 2.40 × 10^3 | 2.42 × 10^3 | 2.40 × 10^3 | 2.64 × 10^3 | 2.55 × 10^3 | 2.57 × 10^3 | 2.50 × 10^3 | 2.56 × 10^3
F21 | Std | 3.69 × 10^1 | 2.52 × 10^1 | 1.09 × 10^1 | 3.07 × 10^1 | 8.23 × 10^1 | 1.38 × 10^1 | 7.61 × 10^1 | 4.54 × 10^1 | 2.18 × 10^1
F21 | Rank | 1 | 3 | 4 | 2 | 9 | 6 | 8 | 5 | 7
F21 | p-value | — | 1.60 × 10^−4 | 1.73 × 10^−6 | 3.38 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F22 | Avg | 2.30 × 10^3 | 5.31 × 10^3 | 4.57 × 10^3 | 5.36 × 10^3 | 7.27 × 10^3 | 3.95 × 10^3 | 6.36 × 10^3 | 6.40 × 10^3 | 8.74 × 10^3
F22 | Std | 8.02 × 10^−1 | 1.17 × 10^3 | 2.14 × 10^3 | 1.69 × 10^3 | 1.27 × 10^3 | 1.59 × 10^2 | 2.11 × 10^3 | 1.56 × 10^3 | 2.09 × 10^3
F22 | Rank | 1 | 4 | 3 | 5 | 8 | 2 | 6 | 7 | 9
F22 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F23 | Avg | 2.74 × 10^3 | 2.75 × 10^3 | 2.78 × 10^3 | 2.77 × 10^3 | 3.31 × 10^3 | 2.92 × 10^3 | 3.08 × 10^3 | 2.85 × 10^3 | 3.01 × 10^3
F23 | Std | 2.31 × 10^1 | 2.77 × 10^1 | 1.29 × 10^1 | 3.96 × 10^1 | 1.70 × 10^2 | 1.25 × 10^1 | 1.06 × 10^2 | 4.02 × 10^1 | 3.07 × 10^1
F23 | Rank | 1 | 2 | 4 | 3 | 9 | 6 | 8 | 5 | 7
F23 | p-value | — | 8.59 × 10^−2 | 1.73 × 10^−6 | 3.61 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F24 | Avg | 2.90 × 10^3 | 2.93 × 10^3 | 2.98 × 10^3 | 2.94 × 10^3 | 3.37 × 10^3 | 3.07 × 10^3 | 3.17 × 10^3 | 2.99 × 10^3 | 3.18 × 10^3
F24 | Std | 2.31 × 10^1 | 2.80 × 10^1 | 1.13 × 10^1 | 6.09 × 10^1 | 1.25 × 10^2 | 1.13 × 10^1 | 7.61 × 10^1 | 4.48 × 10^1 | 3.44 × 10^1
F24 | Rank | 1 | 2 | 4 | 3 | 9 | 6 | 7 | 5 | 8
F24 | p-value | — | 1.60 × 10^−4 | 1.73 × 10^−6 | 8.73 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6
F25 | Avg | 2.88 × 10^3 | 2.89 × 10^3 | 2.89 × 10^3 | 3.00 × 10^3 | 2.92 × 10^3 | 3.64 × 10^3 | 2.98 × 10^3 | 3.23 × 10^3 | 3.24 × 10^3
F25 | Std | 2.14 × 10^0 | 1.42 × 10^0 | 2.86 × 10^−1 | 6.92 × 10^1 | 2.39 × 10^1 | 1.10 × 10^2 | 3.13 × 10^1 | 3.69 × 10^2 | 9.54 × 10^1
F25 | Rank | 1 | 2 | 3 | 6 | 4 | 9 | 5 | 7 | 8
F25 | p-value | — | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F26 | Avg | 4.41 × 10^3 | 4.63 × 10^3 | 4.86 × 10^3 | 4.81 × 10^3 | 9.93 × 10^3 | 6.65 × 10^3 | 7.13 × 10^3 | 5.97 × 10^3 | 7.07 × 10^3
F26 | Std | 2.73 × 10^2 | 2.29 × 10^2 | 9.40 × 10^1 | 4.56 × 10^2 | 1.04 × 10^3 | 1.49 × 10^2 | 1.34 × 10^3 | 5.01 × 10^2 | 2.27 × 10^2
F26 | Rank | 1 | 2 | 4 | 3 | 9 | 6 | 8 | 5 | 7
F26 | p-value | — | 2.77 × 10^−3 | 2.35 × 10^−6 | 5.71 × 10^−4 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.35 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F27 | Avg | 3.18 × 10^3 | 3.22 × 10^3 | 3.21 × 10^3 | 3.26 × 10^3 | 3.44 × 10^3 | 3.34 × 10^3 | 3.37 × 10^3 | 3.25 × 10^3 | 3.44 × 10^3
F27 | Std | 2.30 × 10^1 | 1.23 × 10^1 | 4.56 × 10^0 | 3.15 × 10^1 | 1.26 × 10^2 | 1.68 × 10^1 | 8.05 × 10^1 | 3.12 × 10^1 | 6.15 × 10^1
F27 | Rank | 1 | 3 | 2 | 5 | 8 | 6 | 7 | 4 | 9
F27 | p-value | — | 1.92 × 10^−6 | 2.60 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F28 | Avg | 3.15 × 10^3 | 3.24 × 10^3 | 3.22 × 10^3 | 3.47 × 10^3 | 3.14 × 10^3 | 3.98 × 10^3 | 3.36 × 10^3 | 4.40 × 10^3 | 3.88 × 10^3
F28 | Std | 5.70 × 10^1 | 3.12 × 10^1 | 1.96 × 10^1 | 1.37 × 10^2 | 6.20 × 10^1 | 9.78 × 10^1 | 4.44 × 10^1 | 1.02 × 10^3 | 1.46 × 10^2
F28 | Rank | 2 | 4 | 3 | 6 | 1 | 8 | 5 | 9 | 7
F28 | p-value | — | 7.69 × 10^−6 | 3.11 × 10^−5 | 1.73 × 10^−6 | 4.17 × 10^−1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F29 | Avg | 3.58 × 10^3 | 3.79 × 10^3 | 3.63 × 10^3 | 3.73 × 10^3 | 5.11 × 10^3 | 4.80 × 10^3 | 4.88 × 10^3 | 4.19 × 10^3 | 4.81 × 10^3
F29 | Std | 1.27 × 10^2 | 2.25 × 10^2 | 7.18 × 10^1 | 1.83 × 10^2 | 4.23 × 10^2 | 1.48 × 10^2 | 4.28 × 10^2 | 2.98 × 10^2 | 2.71 × 10^2
F29 | Rank | 1 | 4 | 2 | 3 | 9 | 6 | 8 | 5 | 7
F29 | p-value | — | 5.29 × 10^−4 | 7.86 × 10^−2 | 2.77 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F30 | Avg | 8.37 × 10^3 | 1.75 × 10^4 | 1.67 × 10^4 | 7.79 × 10^6 | 1.67 × 10^6 | 1.03 × 10^8 | 1.81 × 10^7 | 3.54 × 10^6 | 7.72 × 10^7
F30 | Std | 2.08 × 10^3 | 4.73 × 10^3 | 5.47 × 10^3 | 9.38 × 10^6 | 9.70 × 10^5 | 4.04 × 10^7 | 1.32 × 10^7 | 7.66 × 10^6 | 3.40 × 10^7
F30 | Rank | 1 | 3 | 2 | 6 | 4 | 9 | 7 | 5 | 8
F30 | p-value | — | 1.92 × 10^−6 | 3.18 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6
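Each block of Tables 2 and 3 reports the average error, standard deviation, rank (1 = best), and a pairwise p-value against MSMA over repeated runs, following the nonparametric-test methodology of Derrac et al. [73]. The recurring value p = 1.73 × 10^−6 is what a two-sided Wilcoxon signed-rank test returns, under the normal approximation, when all 30 paired differences favor the same algorithm. The sketch below shows how such a table row can be computed with SciPy; the 30-run setup, placeholder fitness values, and variable names are illustrative assumptions for this article rather than the authors' released code.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(seed=1)
# Placeholder data: best fitness of 30 independent runs per algorithm on one
# CEC2017 function (the paper would use the real optimizer outputs here).
results = {
    "MSMA": 1.31e2 + 50 * rng.random(30),
    "SMA":  8.03e3 + 1e3 * rng.random(30),
    "DE":   1.79e3 + 5e2 * rng.random(30),
}

means = {name: runs.mean() for name, runs in results.items()}
# Rank 1 = smallest mean error, matching the "Rank" rows of Tables 2 and 3.
ranks = {name: r + 1 for r, name in enumerate(sorted(means, key=means.get))}

# Wilcoxon signed-rank test of each competitor against MSMA (paired samples,
# two-sided by default in SciPy).
for name, runs in results.items():
    if name == "MSMA":
        continue
    _, p = wilcoxon(results["MSMA"], runs)
    print(f"{name}: avg={means[name]:.2e}, std={runs.std(ddof=1):.2e}, "
          f"rank={ranks[name]}, p={p:.2e}")
```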
Table 3. Comparison results of different well-established algorithms. Rank 1 denotes the best average result on each function; the p-values compare each algorithm with MSMA ("—" marks MSMA itself).

Fn | Metric | MSMA | OBLGWO | CLSGMFO | BWOA | RDWOA | CEBA | DECLS | ALCPSO | CESCA
F1 | Avg | 1.22 × 10^2 | 3.26 × 10^7 | 5.45 × 10^3 | 1.10 × 10^9 | 4.48 × 10^7 | 3.89 × 10^3 | 2.80 × 10^3 | 5.48 × 10^3 | 5.71 × 10^10
F1 | Std | 8.09 × 10^1 | 1.94 × 10^7 | 6.08 × 10^3 | 1.04 × 10^9 | 4.02 × 10^7 | 3.77 × 10^3 | 3.85 × 10^3 | 6.14 × 10^3 | 4.49 × 10^9
F1 | Rank | 1 | 6 | 4 | 8 | 7 | 3 | 2 | 5 | 9
F1 | p-value | — | 1.73 × 10^−6 | 3.52 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F2 | Avg | 1.05 × 10^5 | 1.16 × 10^18 | 5.25 × 10^13 | 1.88 × 10^30 | 1.53 × 10^17 | 8.48 × 10^2 | 1.07 × 10^26 | 1.18 × 10^17 | 5.51 × 10^45
F2 | Std | 4.32 × 10^5 | 1.45 × 10^18 | 1.49 × 10^14 | 8.16 × 10^30 | 2.34 × 10^17 | 3.30 × 10^3 | 2.66 × 10^26 | 4.62 × 10^17 | 1.54 × 10^46
F2 | Rank | 2 | 6 | 3 | 8 | 5 | 1 | 7 | 4 | 9
F2 | p-value | — | 1.73 × 10^−6 | 2.13 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.90 × 10^−4 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6
F3 | Avg | 3.00 × 10^2 | 2.96 × 10^4 | 1.70 × 10^4 | 6.53 × 10^4 | 3.17 × 10^4 | 3.00 × 10^2 | 8.43 × 10^4 | 3.97 × 10^4 | 1.09 × 10^5
F3 | Std | 9.43 × 10^−6 | 6.71 × 10^3 | 4.55 × 10^3 | 1.11 × 10^4 | 8.77 × 10^3 | 2.07 × 10^−2 | 1.42 × 10^4 | 6.83 × 10^3 | 1.55 × 10^4
F3 | Rank | 1 | 4 | 3 | 7 | 5 | 2 | 8 | 6 | 9
F3 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F4 | Avg | 4.01 × 10^2 | 5.35 × 10^2 | 4.96 × 10^2 | 7.18 × 10^2 | 5.27 × 10^2 | 4.50 × 10^2 | 4.95 × 10^2 | 5.06 × 10^2 | 1.57 × 10^4
F4 | Std | 1.95 × 10^0 | 3.64 × 10^1 | 2.43 × 10^1 | 9.64 × 10^1 | 3.15 × 10^1 | 3.74 × 10^1 | 1.04 × 10^1 | 4.45 × 10^1 | 2.38 × 10^3
F4 | Rank | 1 | 7 | 4 | 8 | 6 | 2 | 3 | 5 | 9
F4 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.52 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F5 | Avg | 5.93 × 10^2 | 6.68 × 10^2 | 6.59 × 10^2 | 7.85 × 10^2 | 7.10 × 10^2 | 7.61 × 10^2 | 6.41 × 10^2 | 6.14 × 10^2 | 9.64 × 10^2
F5 | Std | 2.58 × 10^1 | 5.27 × 10^1 | 3.67 × 10^1 | 3.55 × 10^1 | 5.15 × 10^1 | 3.20 × 10^1 | 1.23 × 10^1 | 3.21 × 10^1 | 1.71 × 10^1
F5 | Rank | 1 | 5 | 4 | 8 | 6 | 7 | 3 | 2 | 9
F5 | p-value | — | 5.75 × 10^−6 | 4.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 4.29 × 10^−6 | 8.73 × 10^−3 | 1.73 × 10^−6
F6 | Avg | 6.03 × 10^2 | 6.20 × 10^2 | 6.25 × 10^2 | 6.68 × 10^2 | 6.19 × 10^2 | 6.61 × 10^2 | 6.00 × 10^2 | 6.08 × 10^2 | 7.03 × 10^2
F6 | Std | 1.84 × 10^0 | 1.36 × 10^1 | 1.14 × 10^1 | 5.47 × 10^0 | 6.09 × 10^0 | 4.07 × 10^0 | 1.12 × 10^−13 | 5.98 × 10^0 | 4.67 × 10^0
F6 | Rank | 2 | 5 | 6 | 8 | 4 | 7 | 1 | 3 | 9
F6 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 7.51 × 10^−5 | 1.73 × 10^−6
F7 | Avg | 8.27 × 10^2 | 9.54 × 10^2 | 9.09 × 10^2 | 1.28 × 10^3 | 9.72 × 10^2 | 1.27 × 10^3 | 8.75 × 10^2 | 8.55 × 10^2 | 1.54 × 10^3
F7 | Std | 2.12 × 10^1 | 6.76 × 10^1 | 5.79 × 10^1 | 6.67 × 10^1 | 6.66 × 10^1 | 4.55 × 10^1 | 1.07 × 10^1 | 3.20 × 10^1 | 4.64 × 10^1
F7 | Rank | 1 | 5 | 4 | 8 | 6 | 7 | 3 | 2 | 9
F7 | p-value | — | 1.73 × 10^−6 | 3.52 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 8.31 × 10^−4 | 1.73 × 10^−6
F8 | Avg | 8.83 × 10^2 | 9.61 × 10^2 | 9.28 × 10^2 | 9.89 × 10^2 | 9.93 × 10^2 | 9.90 × 10^2 | 9.41 × 10^2 | 9.10 × 10^2 | 1.18 × 10^3
F8 | Std | 1.74 × 10^1 | 3.84 × 10^1 | 2.49 × 10^1 | 2.73 × 10^1 | 4.43 × 10^1 | 1.94 × 10^1 | 8.93 × 10^0 | 2.41 × 10^1 | 1.95 × 10^1
F8 | Rank | 1 | 5 | 3 | 6 | 8 | 7 | 4 | 2 | 9
F8 | p-value | — | 2.35 × 10^−6 | 2.88 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.06 × 10^−4 | 1.73 × 10^−6
F9 | Avg | 1.03 × 10^3 | 4.25 × 10^3 | 3.26 × 10^3 | 6.66 × 10^3 | 5.35 × 10^3 | 5.29 × 10^3 | 9.00 × 10^2 | 1.94 × 10^3 | 1.45 × 10^4
F9 | Std | 1.32 × 10^2 | 2.71 × 10^3 | 9.16 × 10^2 | 9.50 × 10^2 | 1.90 × 10^3 | 2.58 × 10^2 | 8.94 × 10^−2 | 1.08 × 10^3 | 1.47 × 10^3
F9 | Rank | 2 | 5 | 4 | 8 | 7 | 6 | 1 | 3 | 9
F9 | p-value | — | 2.88 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.75 × 10^−6 | 1.73 × 10^−6
F10 | Avg | 3.93 × 10^3 | 5.48 × 10^3 | 5.05 × 10^3 | 6.68 × 10^3 | 4.99 × 10^3 | 5.31 × 10^3 | 6.71 × 10^3 | 4.38 × 10^3 | 8.65 × 10^3
F10 | Std | 5.84 × 10^2 | 1.11 × 10^3 | 6.26 × 10^2 | 8.24 × 10^2 | 6.41 × 10^2 | 5.86 × 10^2 | 2.77 × 10^2 | 8.41 × 10^2 | 2.46 × 10^2
F10 | Rank | 1 | 6 | 4 | 7 | 3 | 5 | 8 | 2 | 9
F10 | p-value | — | 1.64 × 10^−5 | 2.35 × 10^−6 | 1.73 × 10^−6 | 1.24 × 10^−5 | 3.18 × 10^−6 | 1.73 × 10^−6 | 3.16 × 10^−2 | 1.73 × 10^−6
F11 | Avg | 1.18 × 10^3 | 1.29 × 10^3 | 1.26 × 10^3 | 2.51 × 10^3 | 1.29 × 10^3 | 1.25 × 10^3 | 1.22 × 10^3 | 1.28 × 10^3 | 1.06 × 10^4
F11 | Std | 2.81 × 10^1 | 5.14 × 10^1 | 5.10 × 10^1 | 5.13 × 10^2 | 4.38 × 10^1 | 6.13 × 10^1 | 1.25 × 10^1 | 7.34 × 10^1 | 1.61 × 10^3
F11 | Rank | 1 | 7 | 4 | 8 | 6 | 3 | 2 | 5 | 9
F11 | p-value | — | 2.35 × 10^−6 | 5.75 × 10^−6 | 1.73 × 10^−6 | 2.35 × 10^−6 | 4.45 × 10^−5 | 6.34 × 10^−6 | 1.02 × 10^−5 | 1.73 × 10^−6
F12 | Avg | 2.82 × 10^3 | 2.09 × 10^7 | 1.68 × 10^6 | 1.49 × 10^8 | 4.00 × 10^6 | 1.46 × 10^5 | 5.04 × 10^6 | 3.46 × 10^5 | 1.54 × 10^10
F12 | Std | 4.40 × 10^2 | 2.14 × 10^7 | 1.81 × 10^6 | 1.00 × 10^8 | 2.27 × 10^6 | 2.53 × 10^5 | 2.16 × 10^6 | 5.30 × 10^5 | 1.82 × 10^9
F12 | Rank | 1 | 7 | 4 | 8 | 5 | 2 | 6 | 3 | 9
F12 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F13 | Avg | 4.69 × 10^3 | 3.08 × 10^5 | 1.95 × 10^5 | 9.78 × 10^5 | 1.24 × 10^4 | 1.70 × 10^4 | 2.23 × 10^5 | 1.97 × 10^4 | 1.39 × 10^10
F13 | Std | 1.83 × 10^3 | 5.16 × 10^5 | 8.05 × 10^5 | 9.89 × 10^5 | 1.26 × 10^4 | 1.73 × 10^4 | 1.79 × 10^5 | 1.94 × 10^4 | 4.05 × 10^9
F13 | Rank | 1 | 7 | 5 | 8 | 2 | 3 | 6 | 4 | 9
F13 | p-value | — | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 9.63 × 10^−4 | 4.20 × 10^−4 | 1.73 × 10^−6 | 4.53 × 10^−4 | 1.73 × 10^−6
F14 | Avg | 1.95 × 10^3 | 8.01 × 10^4 | 6.88 × 10^4 | 1.44 × 10^6 | 2.35 × 10^5 | 3.62 × 10^3 | 1.13 × 10^5 | 3.53 × 10^4 | 5.46 × 10^6
F14 | Std | 1.16 × 10^3 | 6.51 × 10^4 | 6.79 × 10^4 | 1.58 × 10^6 | 1.94 × 10^5 | 2.18 × 10^3 | 7.96 × 10^4 | 8.56 × 10^4 | 2.62 × 10^6
F14 | Rank | 1 | 5 | 4 | 8 | 7 | 2 | 6 | 3 | 9
F14 | p-value | — | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.80 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F15 | Avg | 2.01 × 10^3 | 1.17 × 10^5 | 9.54 × 10^3 | 7.57 × 10^5 | 1.22 × 10^4 | 3.96 × 10^3 | 5.15 × 10^4 | 1.47 × 10^4 | 5.02 × 10^8
F15 | Std | 2.01 × 10^2 | 1.14 × 10^5 | 7.76 × 10^3 | 1.16 × 10^6 | 1.07 × 10^4 | 3.52 × 10^3 | 3.34 × 10^4 | 1.36 × 10^4 | 1.44 × 10^8
F15 | Rank | 1 | 7 | 3 | 8 | 4 | 2 | 6 | 5 | 9
F15 | p-value | — | 1.73 × 10^−6 | 2.88 × 10^−6 | 1.73 × 10^−6 | 2.60 × 10^−6 | 8.31 × 10^−4 | 1.73 × 10^−6 | 4.73 × 10^−6 | 1.73 × 10^−6
F16 | Avg | 2.21 × 10^3 | 2.94 × 10^3 | 2.87 × 10^3 | 3.87 × 10^3 | 2.82 × 10^3 | 3.14 × 10^3 | 2.34 × 10^3 | 2.62 × 10^3 | 6.02 × 10^3
F16 | Std | 2.73 × 10^2 | 3.06 × 10^2 | 3.66 × 10^2 | 5.28 × 10^2 | 3.71 × 10^2 | 3.48 × 10^2 | 1.55 × 10^2 | 3.36 × 10^2 | 5.57 × 10^2
F16 | Rank | 1 | 6 | 5 | 8 | 4 | 7 | 2 | 3 | 9
F16 | p-value | — | 1.73 × 10^−6 | 1.49 × 10^−5 | 1.73 × 10^−6 | 4.29 × 10^−6 | 1.73 × 10^−6 | 2.70 × 10^−2 | 1.60 × 10^−4 | 1.73 × 10^−6
F17 | Avg | 1.97 × 10^3 | 2.28 × 10^3 | 2.36 × 10^3 | 2.65 × 10^3 | 2.36 × 10^3 | 2.65 × 10^3 | 1.95 × 10^3 | 2.15 × 10^3 | 4.75 × 10^3
F17 | Std | 1.23 × 10^2 | 1.96 × 10^2 | 3.11 × 10^2 | 2.93 × 10^2 | 2.46 × 10^2 | 3.11 × 10^2 | 6.19 × 10^1 | 1.83 × 10^2 | 8.76 × 10^2
F17 | Rank | 2 | 4 | 6 | 7 | 5 | 8 | 1 | 3 | 9
F17 | p-value | — | 8.47 × 10^−6 | 1.97 × 10^−5 | 1.73 × 10^−6 | 7.69 × 10^−6 | 1.92 × 10^−6 | 5.04 × 10^−1 | 1.36 × 10^−4 | 1.73 × 10^−6
F18 | Avg | 2.20 × 10^3 | 1.75 × 10^6 | 3.78 × 10^5 | 5.38 × 10^6 | 7.65 × 10^5 | 9.68 × 10^4 | 7.13 × 10^5 | 5.27 × 10^5 | 5.57 × 10^7
F18 | Std | 1.57 × 10^2 | 1.81 × 10^6 | 3.12 × 10^5 | 4.77 × 10^6 | 8.51 × 10^5 | 7.27 × 10^4 | 3.24 × 10^5 | 1.09 × 10^6 | 2.69 × 10^7
F18 | Rank | 1 | 7 | 3 | 8 | 6 | 2 | 5 | 4 | 9
F18 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F19 | Avg | 2.63 × 10^3 | 8.18 × 10^5 | 5.75 × 10^3 | 7.67 × 10^6 | 1.56 × 10^4 | 5.61 × 10^3 | 4.83 × 10^4 | 1.47 × 10^4 | 1.26 × 10^9
F19 | Std | 4.31 × 10^2 | 7.17 × 10^5 | 4.29 × 10^3 | 7.54 × 10^6 | 1.38 × 10^4 | 3.26 × 10^3 | 3.47 × 10^4 | 1.46 × 10^4 | 2.75 × 10^8
F19 | Rank | 1 | 7 | 3 | 8 | 5 | 2 | 6 | 4 | 9
F19 | p-value | — | 1.73 × 10^−6 | 4.86 × 10^−5 | 1.73 × 10^−6 | 2.35 × 10^−6 | 6.32 × 10^−5 | 1.73 × 10^−6 | 2.16 × 10^−5 | 1.73 × 10^−6
F20 | Avg | 2.32 × 10^3 | 2.49 × 10^3 | 2.49 × 10^3 | 2.75 × 10^3 | 2.54 × 10^3 | 2.90 × 10^3 | 2.22 × 10^3 | 2.44 × 10^3 | 3.23 × 10^3
F20 | Std | 1.40 × 10^2 | 1.15 × 10^2 | 2.23 × 10^2 | 1.96 × 10^2 | 2.00 × 10^2 | 1.81 × 10^2 | 8.02 × 10^1 | 1.86 × 10^2 | 1.12 × 10^2
F20 | Rank | 2 | 5 | 4 | 7 | 6 | 8 | 1 | 3 | 9
F20 | p-value | — | 4.20 × 10^−4 | 1.04 × 10^−2 | 3.18 × 10^−6 | 4.45 × 10^−5 | 1.73 × 10^−6 | 6.64 × 10^−4 | 6.84 × 10^−3 | 1.73 × 10^−6
F21 | Avg | 2.38 × 10^3 | 2.45 × 10^3 | 2.43 × 10^3 | 2.59 × 10^3 | 2.50 × 10^3 | 2.60 × 10^3 | 2.44 × 10^3 | 2.42 × 10^3 | 2.76 × 10^3
F21 | Std | 1.82 × 10^1 | 3.94 × 10^1 | 3.33 × 10^1 | 4.95 × 10^1 | 3.49 × 10^1 | 5.17 × 10^1 | 1.26 × 10^1 | 3.37 × 10^1 | 3.19 × 10^1
F21 | Rank | 1 | 5 | 3 | 7 | 6 | 8 | 4 | 2 | 9
F21 | p-value | — | 2.13 × 10^−6 | 1.02 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.16 × 10^−5 | 1.73 × 10^−6
F22 | Avg | 2.30 × 10^3 | 2.90 × 10^3 | 2.30 × 10^3 | 7.18 × 10^3 | 6.06 × 10^3 | 7.16 × 10^3 | 4.39 × 10^3 | 4.73 × 10^3 | 9.35 × 10^3
F22 | Std | 7.47 × 10^−1 | 1.51 × 10^3 | 1.43 × 10^0 | 1.96 × 10^3 | 1.81 × 10^3 | 1.41 × 10^3 | 1.99 × 10^3 | 1.94 × 10^3 | 6.80 × 10^2
F22 | Rank | 1 | 3 | 2 | 8 | 6 | 7 | 4 | 5 | 9
F22 | p-value | — | 1.73 × 10^−6 | 1.04 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 5.31 × 10^−5 | 1.73 × 10^−6
F23 | Avg | 2.73 × 10^3 | 2.82 × 10^3 | 2.79 × 10^3 | 3.10 × 10^3 | 2.89 × 10^3 | 3.39 × 10^3 | 2.79 × 10^3 | 2.80 × 10^3 | 3.46 × 10^3
F23 | Std | 2.69 × 10^1 | 4.28 × 10^1 | 3.48 × 10^1 | 1.20 × 10^2 | 7.39 × 10^1 | 2.00 × 10^2 | 1.23 × 10^1 | 6.07 × 10^1 | 5.09 × 10^1
F23 | Rank | 1 | 5 | 3 | 7 | 6 | 8 | 2 | 4 | 9
F23 | p-value | — | 1.92 × 10^−6 | 5.22 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.11 × 10^−5 | 1.73 × 10^−6
F24 | Avg | 2.91 × 10^3 | 2.98 × 10^3 | 2.96 × 10^3 | 3.23 × 10^3 | 3.09 × 10^3 | 3.48 × 10^3 | 3.00 × 10^3 | 2.99 × 10^3 | 3.49 × 10^3
F24 | Std | 2.15 × 10^1 | 4.97 × 10^1 | 4.75 × 10^1 | 9.77 × 10^1 | 8.74 × 10^1 | 1.48 × 10^2 | 1.14 × 10^1 | 7.20 × 10^1 | 3.88 × 10^1
F24 | Rank | 1 | 3 | 2 | 7 | 6 | 8 | 5 | 4 | 9
F24 | p-value | — | 2.37 × 10^−5 | 8.47 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.64 × 10^−5 | 1.73 × 10^−6
F25 | Avg | 2.88 × 10^3 | 2.93 × 10^3 | 2.90 × 10^3 | 3.08 × 10^3 | 2.92 × 10^3 | 2.90 × 10^3 | 2.89 × 10^3 | 2.90 × 10^3 | 5.53 × 10^3
F25 | Std | 1.78 × 10^0 | 2.33 × 10^1 | 1.89 × 10^1 | 5.01 × 10^1 | 2.15 × 10^1 | 1.74 × 10^1 | 3.67 × 10^−1 | 1.91 × 10^1 | 4.63 × 10^2
F25 | Rank | 1 | 7 | 3 | 8 | 6 | 4 | 2 | 5 | 9
F25 | p-value | — | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F26 | Avg | 4.50 × 10^3 | 5.55 × 10^3 | 3.84 × 10^3 | 7.90 × 10^3 | 5.61 × 10^3 | 6.08 × 10^3 | 5.00 × 10^3 | 4.99 × 10^3 | 1.11 × 10^4
F26 | Std | 2.57 × 10^2 | 4.03 × 10^2 | 1.32 × 10^3 | 1.04 × 10^3 | 1.27 × 10^3 | 2.40 × 10^3 | 9.28 × 10^1 | 5.57 × 10^2 | 5.86 × 10^2
F26 | Rank | 2 | 5 | 1 | 8 | 6 | 7 | 4 | 3 | 9
F26 | p-value | — | 1.73 × 10^−6 | 2.07 × 10^−2 | 1.92 × 10^−6 | 3.59 × 10^−4 | 3.32 × 10^−4 | 2.60 × 10^−6 | 1.25 × 10^−4 | 1.73 × 10^−6
F27 | Avg | 3.19 × 10^3 | 3.25 × 10^3 | 3.31 × 10^3 | 3.41 × 10^3 | 3.25 × 10^3 | 3.69 × 10^3 | 3.21 × 10^3 | 3.25 × 10^3 | 3.72 × 10^3
F27 | Std | 2.16 × 10^1 | 2.11 × 10^1 | 7.31 × 10^1 | 1.09 × 10^2 | 2.48 × 10^1 | 3.83 × 10^2 | 3.69 × 10^0 | 2.38 × 10^1 | 6.97 × 10^1
F27 | Rank | 1 | 3 | 6 | 7 | 5 | 8 | 2 | 4 | 9
F27 | p-value | — | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.29 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F28 | Avg | 3.13 × 10^3 | 3.30 × 10^3 | 3.23 × 10^3 | 3.49 × 10^3 | 3.28 × 10^3 | 3.14 × 10^3 | 3.23 × 10^3 | 3.23 × 10^3 | 7.09 × 10^3
F28 | Std | 5.03 × 10^1 | 3.61 × 10^1 | 1.88 × 10^1 | 1.01 × 10^2 | 2.96 × 10^1 | 5.78 × 10^1 | 2.15 × 10^1 | 3.55 × 10^1 | 4.95 × 10^2
F28 | Rank | 1 | 7 | 4 | 8 | 6 | 2 | 5 | 3 | 9
F28 | p-value | — | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.44 × 10^−1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F29 | Avg | 3.57 × 10^3 | 4.11 × 10^3 | 4.02 × 10^3 | 5.13 × 10^3 | 4.02 × 10^3 | 4.48 × 10^3 | 3.73 × 10^3 | 3.84 × 10^3 | 6.05 × 10^3
F29 | Std | 1.20 × 10^2 | 3.17 × 10^2 | 2.20 × 10^2 | 5.98 × 10^2 | 2.53 × 10^2 | 3.27 × 10^2 | 1.04 × 10^2 | 1.92 × 10^2 | 1.49 × 10^2
F29 | Rank | 1 | 6 | 5 | 8 | 4 | 7 | 2 | 3 | 9
F29 | p-value | — | 1.73 × 10^−6 | 2.88 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 9.32 × 10^−6 | 1.36 × 10^−5 | 1.73 × 10^−6
F30 | Avg | 9.82 × 10^3 | 4.24 × 10^6 | 1.19 × 10^5 | 3.50 × 10^7 | 2.74 × 10^4 | 9.74 × 10^3 | 3.89 × 10^4 | 1.84 × 10^4 | 2.74 × 10^9
F30 | Std | 3.02 × 10^3 | 2.92 × 10^6 | 1.69 × 10^5 | 2.91 × 10^7 | 1.92 × 10^4 | 4.48 × 10^3 | 2.49 × 10^4 | 1.44 × 10^4 | 7.83 × 10^8
F30 | Rank | 2 | 7 | 6 | 8 | 4 | 1 | 5 | 3 | 9
F30 | p-value | — | 1.73 × 10^−6 | 1.97 × 10^−5 | 1.73 × 10^−6 | 3.18 × 10^−6 | 5.44 × 10^−1 | 1.92 × 10^−6 | 2.11 × 10^−3 | 1.73 × 10^−6
Table 4. Classification results of MSMA-SVM-FS in terms of four metrics.

Fold | ACC | MCC | Sensitivity | Specificity
Num.1 | 0.848 | 0.702 | 0.733 | 0.944
Num.2 | 0.824 | 0.646 | 0.813 | 0.833
Num.3 | 0.909 | 0.819 | 0.875 | 0.941
Num.4 | 0.909 | 0.820 | 0.938 | 0.882
Num.5 | 0.909 | 0.817 | 0.867 | 0.944
Num.6 | 0.848 | 0.702 | 0.733 | 0.944
Num.7 | 0.879 | 0.756 | 0.867 | 0.889
Num.8 | 0.879 | 0.759 | 0.800 | 0.944
Num.9 | 0.788 | 0.576 | 0.800 | 0.778
Num.10 | 0.848 | 0.694 | 0.800 | 0.889
AVG | 0.864 | 0.729 | 0.823 | 0.899
STD | 0.040 | 0.081 | 0.064 | 0.057
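For completeness, the sketch below illustrates the evaluation protocol behind Table 4: a stratified 10-fold cross-validation of an RBF-kernel SVM scored by ACC, MCC, sensitivity, and specificity. In the paper the penalty parameter, kernel width, and feature subset are the ones found by MSMA; the fixed values and scikit-learn workflow shown here are illustrative assumptions, not the authors' released code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, matthews_corrcoef, confusion_matrix

def evaluate_10fold(X, y, C=1.0, gamma=0.1, seed=42):
    """Score an RBF SVM with 10-fold CV; C/gamma stand in for the MSMA-tuned
    values, and X would hold only the MSMA-selected feature columns."""
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = []
    for train, test in folds.split(X, y):
        pred = SVC(C=C, gamma=gamma).fit(X[train], y[train]).predict(X[test])
        tn, fp, fn, tp = confusion_matrix(y[test], pred).ravel()
        scores.append([accuracy_score(y[test], pred),
                       matthews_corrcoef(y[test], pred),
                       tp / (tp + fn),    # sensitivity (true-positive rate)
                       tn / (tn + fp)])   # specificity (true-negative rate)
    scores = np.asarray(scores)
    return scores.mean(axis=0), scores.std(axis=0)  # the AVG and STD rows
```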
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
