Article

An Improved Multi-Objective Particle Swarm Optimization-Based Hybrid Intelligent Algorithm for Index Screening of Underwater Manned/Unmanned Cooperative System of Systems Architecture Evaluation

1
School of Naval Architecture, Ocean and Energy Power Engineering, Wuhan University of Technology, Wuhan 430070, China
2
School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(20), 4389; https://doi.org/10.3390/math11204389
Submission received: 5 September 2023 / Revised: 12 October 2023 / Accepted: 19 October 2023 / Published: 23 October 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract

An improved multi-objective particle swarm optimization algorithm is combined with a machine learning classifier to meet the needs of underwater manned/unmanned cooperative warfare architecture evaluation. Firstly, based on the traditional Cauchy mutation method, the particles in the population are disturbed in a dynamic way so that particles trapped in local optima can escape them, improving the convergence performance of the particle swarm optimization. Secondly, the accuracy of the index set is analyzed based on the CART decision tree algorithm and the IWRF algorithm. A screening method for key indexes with fewer evaluation indexes and high evaluation accuracy is developed to address the large number of evaluation indexes and the unclear correlations of the underwater combat system. The extraction results of the key indicators were verified through simulation, and the reliability coefficient of the final simulation experiment was 0.93, indicating that the key indicators extracted in this study are highly reliable and effective. By combining multi-objective optimization with machine learning and weighing evaluation efficiency against accuracy, a high-precision, rapid evaluation based on a few indicators is achieved, which provides support for establishing an evaluation model of SoS architecture for underwater manned/unmanned cooperative operations. These results can inform research on system evaluation that analyzes indicator accuracy and evaluation effects by simulating the actual system with high-fidelity tools.

1. Introduction

As a new mode of operation, underwater manned/unmanned cooperative operation is a hot topic in the military operational concept research of various countries [1]. This mode of operation relies on a complete SoS architecture design. It is therefore an important task to evaluate whether a complete combat system is advanced, whether it meets the needs, and whether there are risks. Quantitative evaluation is the most critical issue in the process of architecture design [2]. For the evaluation of SoS architecture, the primary task is to determine the evaluation index system [3], which provides a basis for evaluating the combat system.
However, in current research, some indicators of the index systems initially constructed by most scholars directly or indirectly affect the efficiency and cost of the SoS architecture [4]. Some of these indicators may be redundant, and some are not closely related to the evaluation objectives. Therefore, it is necessary to use appropriate algorithms to select key indicators [5] that can effectively reflect the efficiency and cost of the architecture, and then conduct rapid quantitative analysis and evaluation of the SoS architecture scheme according to the constructed key indicator system.
This paper starts with the analysis of the main contents of the operational application scheme of underwater manned/unmanned coordinated attack and defense systems and proposes a screening method of key indicators to achieve high evaluation accuracy using a small number of indicators. The number of indicators and the accuracy of the index set under the corresponding number of indicators are taken as the optimization objective, and the optimal index set with a small number of indicators and a high program evaluation accuracy is obtained. This method combines the multi-objective optimization idea with machine learning and achieves high accuracy and fast evaluation of a small number of indicators by weighing evaluation efficiency and accuracy. Compared with traditional algorithms, the convergence accuracy and classification accuracy are greatly improved.
Based on the mapping relationship between the OODA decision loop and combat flow, this paper sorted out the impact of each major combat stage on program effectiveness, constructed program evaluation element space, analyzed it, and pooled it to form an evaluation index system of operational program effectiveness. Based on this index system, the CART-IWRF algorithm was designed to determine the accuracy of the indicator set. Finally, with the input of the multi-objective algorithm, the CI-IMOPSO algorithm is designed to realize the extraction of key indicators of the scheme effectiveness evaluation index system. By weighing the evaluation efficiency and accuracy, the high accuracy and rapid evaluation of a few indicators are realized. Finally, experiments are designed to conduct a rapid quantitative evaluation of the system.

2. Determination of the Evaluation Index System

Under the combat background of an underwater manned/unmanned combat system, the evaluation index system of an underwater manned/unmanned cooperative combat SoS architecture plan is established based on the observe–orient–decide–act (OODA) loop and combat process [6]. According to the typical OODA decision chain, we can divide the cooperative combat operations into eight main stages: the information link and interaction stage, environment awareness stage, target recognition stage, situational awareness and recognition stage, task planning stage, cooperative formation control stage, autonomous or non-autonomous behavior decision-making stage, and cooperative action stage [7]. The evaluation elements are formed by analyzing each stage, and the mapping of the main stages of the cooperative combat operations and key evaluation elements in the OODA decision chain is constructed, as shown in Figure 1.
In summary, the evaluation index system of an underwater manned/unmanned cooperative combat SoS architecture plan should include six parts, including cooperative detection, cooperative decision-making, unmanned system maneuver, cooperative attack, information exchange, and cooperative combat cost, which can comprehensively describe the impact of each combat plan on underwater manned/unmanned cooperative combat effectiveness. We take the contents of these six parts as the second-level evaluation indexes of the index system, and based on the operational requirements of the cooperative combat system, we analyze these six indexes in detail, complete the establishment of the third-level indexes, and finally build a complete evaluation system of the operational plan of the underwater manned/unmanned cooperative combat SoS architecture.
(1)
Cooperative detection capability (A): cooperative detection capability refers to effectively discovering, identifying, tracking, and positioning enemy targets in a complex underwater environment, including detection range, detection accuracy, target recognition capability, and other indexes.
(2)
Cooperative decision-making ability (B): cooperative decision-making ability refers to the evaluation of the cooperative decision-making plan when implementing task planning, including task efficiency, task implementation difficulty, task risk, and other indexes.
(3)
Unmanned maneuverability (C): unmanned maneuverability refers to the maneuverability of the unmanned system on the way to a combat preparation position, including formation rate, formation navigation rate, and formation autonomous obstacle avoidance capability.
(4)
Cooperative strike capability (D): cooperative strike capability refers to the ability of the system to strike enemy targets accurately in an underwater environment, including strike accuracy, strike range, strike timeliness, and other indexes.
(5)
Information interaction capability (E): information interaction capability refers to the ability of the system to transmit information between manned and unmanned equipment in an underwater environment, including communication range, communication rate, and communication reliability.
(6)
Cooperative combat cost (F): cooperative combat cost refers to the cost of all resources needed to perform tasks, including the cost of manpower, cost of equipment, cost of energy consumption, and cost of ammunition consumption.
Through the above analysis, the evaluation index system of the underwater manned/unmanned combat SoS architecture constructed in this paper is shown in Figure 2.

3. Key Index Extraction Process

To solve the problem that a few indexes are difficult to achieve high evaluation accuracy, based on the multi-objective optimization theory, in this paper, we take the number of indexes and the evaluation accuracy of the index set under the corresponding number of indexes as two key objectives of the optimization problem and we carry out key index extraction of the plan evaluation index system [8]. We adopt an Improved Weighted Random Forest (IWRF) [9] machine learning algorithm based on the CART decision tree [10] as a basic classifier [11], determine the plan evaluation accuracy under each index number, design an improved multi-objective particle swarm optimization algorithm, and obtain Pareto front curves with accuracy and index numbers as horizontal and vertical coordinates [12,13]. Finally, we select the optimal index set with fewer index numbers and higher plan evaluation accuracy according to the system needs and complete key index extraction of the index system.

3.1. Index Screening Optimization Model

The principle of screening the key indexes is to remove redundant indexes while ensuring that the evaluation accuracy meets the requirements. For the screening problem with $m$ initial indexes, the combinations of indexes are taken as the independent variables and $x_k^R$ is set as the index set of different index combinations, and this principle can be expressed in the form of multi-objective programming, as shown in the following equation:
$$f_1 = \min_{k = 1, \dots, m} L\left(x_k^R\right), \qquad f_2 = \max_{k = 1, \dots, m} G\left(x_1^R, \dots, x_k^R, \dots, x_m^R\right)$$
Further, the equation can be expressed in standard form, as shown in Equation (2):
$$\min F(x) = \left[f_1(x), f_2(x)\right]^{T} \quad \text{s.t. } x \in S$$
where $L$ denotes the number of indexes in a combination, $G$ represents the accuracy of the state recognition (classification) algorithm based on machine learning, and $f_1$ and $f_2$ in Equation (2) are the standard (minimization) forms of the objectives in Equation (1) (i.e., $f_2$ here represents the state recognition error). It can be seen from the characteristics of the machine learning algorithm itself that the sub-objectives of the abovementioned multi-objective optimization problem conflict with one another. This paper introduces the idea of the Pareto optimal solution to balance the redundant indexes and the evaluation accuracy of the architecture plan.
Definition 1.
Dominance relationship of key index combinations. For index combinations $x_A, x_B \in S$, if $f_i(x_A) \le f_i(x_B)$ for all $i = 1, 2, \dots, n$ and there is at least one $j \in \{1, 2, \dots, n\}$ such that $f_j(x_A) < f_j(x_B)$, then the key index combination $x_A$ dominates the key index combination $x_B$, recorded as $x_A \succ x_B$.
Definition 2.
Non-dominant relationship of key index combinations. If the key index combination $x_C \in S$ is not dominated by any other key index combination, then $x_C$ is called a non-dominant key index combination.
Definition 3.
Pareto front of screening key indexes. The representation of the set of objective function values calculated based on the combination of all non-dominant key indexes in the solution space is called the Pareto front of key index screening (expressed as PF), as shown in Equation (3):
$$PF = \left\{ \left(f_1(x), f_2(x)\right) \;\middle|\; x \text{ is a non-dominant key index combination} \right\}.$$
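The dominance and non-dominance relations in Definitions 1–3 can be sketched in Python for the two objectives used here (index count and recognition error, both minimized); the candidate objective values below are hypothetical:

```python
def dominates(fa, fb):
    # True if objective vector fa dominates fb (Definition 1, minimization):
    # fa is no worse in every objective and strictly better in at least one.
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_front(points):
    # Non-dominated subset of objective vectors (Definitions 2 and 3).
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (number of indexes, recognition error) pairs for candidate index sets:
candidates = [(3, 0.40), (5, 0.25), (5, 0.30), (8, 0.26), (10, 0.10)]
front = pareto_front(candidates)  # (5, 0.30) and (8, 0.26) are dominated by (5, 0.25)
```

The surviving points trace exactly the trade-off curve (PF) between fewer indexes and lower recognition error.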

3.2. Design of the Key Index Extraction Algorithm

Based on the improved multi-objective particle swarm optimization (IMOPSO) [14] and Pareto front (PF), this study extracts the key indexes from the evaluation index system of the underwater manned/unmanned cooperative combat SoS architecture plan. The index set and the accuracy of the index set are taken as the two inputs of the improved multi-objective particle swarm optimization algorithm, and a Pareto optimal curve with the accuracy and number of indexes as the horizontal and vertical coordinates is the output. Combined with the research content, the number and accuracy of the indexes are constrained and, finally, the key index set is obtained.
Based on the traditional Cauchy mutation method, the multi-objective particle swarm optimization algorithm is improved: the particles in the population are disturbed in a dynamic way so that particles trapped in local optima can escape them, improving the convergence performance [15]. The idea of the traditional Cauchy mutation [16] operator is to select some individuals in the population with a certain probability. In each iteration, the Cauchy mutation operation for these individuals is as follows:
$$X_{ij}' = X_{ij} + \eta \, C(0, 1)$$
where $j = 1, 2, \dots, n$, with $\eta$ being a constant used to control the mutation step size and $C(0, 1)$ representing a random number drawn from the Cauchy distribution with scale parameter $t = 1$ [17].
An adaptive Cauchy mutation based on individual moving speed is designed. The average moving speed of the population is as follows:
$$\bar{V}_j = \sum_{i=1}^{m} v_{ij} \Big/ m .$$
In this equation, $v_{ij}$ represents the velocity component of particle $i$ in dimension $j$ and $m$ represents the population size, with $\bar{V}_j \in \left[-\bar{V}_{max}, \bar{V}_{max}\right]$, where $\bar{V}_{max}$ is a preset speed bound.
The modified Cauchy mutation is determined using the following equation:
$$X_{ij}' = X_{ij} + \bar{V}_j \, C\left(X_{min}, X_{max}\right)$$
where $X_{ij}$ is the value of the $i$th particle $X_i = (X_{i1}, X_{i2}, \dots, X_{in})$ in the $j$th dimension ($j = 1, 2, \dots, n$), $\bar{V}_j$ is the average moving speed of the population calculated by the equation above, which decreases with each iteration, and $[X_{min}, X_{max}]$ is the domain of the problem. With each iteration of the algorithm, the average moving speed of the population decreases continuously, which greatly improves the search ability of the particles in the population in the early iterations while disturbing the population with a small mutation step size in the late iterations, accelerating the convergence of the algorithm.
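A minimal sketch of the adaptive mutation above, under two assumptions: the Cauchy sample is a standard Cauchy draw obtained via the inverse CDF, and mutated positions are clipped back to the domain $[X_{min}, X_{max}]$:

```python
import math
import random

def mean_speed(velocities):
    # V_bar_j = (1/m) * sum_i v_ij: average moving speed of the population
    # in each dimension j (velocities is an m x n list of velocity vectors).
    m, n = len(velocities), len(velocities[0])
    return [sum(v[j] for v in velocities) / m for j in range(n)]

def cauchy_mutate(position, v_bar, x_min, x_max):
    # X'_ij = X_ij + V_bar_j * C, with C a standard Cauchy sample generated by
    # inverse-CDF sampling; clipping to [x_min, x_max] is an assumption here.
    out = []
    for x, vb in zip(position, v_bar):
        c = math.tan(math.pi * (random.random() - 0.5))  # standard Cauchy draw
        out.append(min(x_max, max(x_min, x + vb * c)))
    return out
```

As $\bar{V}_j$ shrinks over the iterations, the perturbation term shrinks with it, matching the large-early/small-late disturbance behavior described above.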
The main steps of the algorithm are as follows:
Step 1: Set the parameters of the algorithm, divide the data set into two parts, including a training set and a test set, and initialize the speed and position of particles in the population.
Step 2: Evaluate the number and accuracy of the key indexes of particles in the population and then determine the non-dominant particle set in the population according to the evaluation results.
Step 3: Calculate the crowding distance of each non-dominant particle according to the newly proposed crowding distance calculation equation and sort the non-dominant particles in ascending order according to the crowding distance based on the characteristics of the new equation.
Step 4: Update the global optimum value g b e s t  and select the particle with the smallest crowding distance from the sorting results in step 3 as the particle g b e s t .
Step 5: Update the historical optimal value p b e s t . p b e s t  is the solution in which particles will not be dominated by current particles in each iteration of the algorithm. If the updated solution of the current particle can dominate p b e s t , replace the current particle as p b e s t .
Step 6: Update the particles according to the position update and velocity update equations of the particle swarm optimization.
Step 7: The Cauchy mutation based on the average moving speed of the population proposed in this paper is used to mutate the particles.
Step 8: Add the updated particles and the particles before updating to a set $U$ and distribute all the particles in the set into subsets $F = \{F_1, F_2, F_3, \dots, F_K\}$ according to their different non-dominant levels.
Step 9: Empty the population and add the particles from subset $F_1$ to the population. If the number of particles in the current external archive does not exceed the number of particles required by the population, all the particles in the external archive are added to the population; otherwise, the particles of the current non-dominant subset are added to the population in ascending order of crowding distance until the population size is reached.
Step 10: Iterate after updating the particles in the population.
The application flow of key index extraction based on IMOPSO is shown in Figure 3.
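Steps 8 and 9 can be sketched as a simple non-dominated sorting into fronts followed by a front-by-front refill; this is a simplified illustration (the paper's crowding-distance ordering within the cut front is omitted):

```python
def dom(a, b):
    # a dominates b (minimization): no worse everywhere, strictly better somewhere.
    return all(x <= y for x, y in zip(a, b)) and a != b

def non_dominated_sort(objs):
    # Step 8: partition particle objective vectors into fronts F1, F2, ..., FK.
    remaining = list(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dom(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def refill(fronts, pop_size):
    # Step 9 (simplified): rebuild the population front by front until the
    # population size is reached; crowding-distance ordering is omitted here.
    new_pop = []
    for front in fronts:
        take = min(len(front), pop_size - len(new_pop))
        new_pop.extend(front[:take])
        if len(new_pop) == pop_size:
            break
    return new_pop
```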

3.3. Determination of the Accuracy Rate of the Index Set Evaluation

An Improved Weighted Random Forest (IWRF) machine learning algorithm with the classification and regression tree (CART) as the basic classifier is used to determine the plan evaluation accuracy under each index number. Firstly, a training model is constructed based on the CART decision tree algorithm and the Improved Weighted Random Forest algorithm. The sample data are divided into a training sample data set and a test sample data set at a ratio of 8:2, with the training sample data set used as the training data of the CART-IWRF (CI) training model. After the model is trained, the accuracy of different index sets is confirmed using the test sample data set, and the accuracy test results of the index sets are obtained. The accuracy determination process for the index sets is shown in Figure 4.
CART uses the Gini coefficient as the purity measure of selected attributes. The smaller the Gini coefficient of a feature’s eigenvalue, the more important the feature, and the purer the split data set. Therefore, the feature with the smallest Gini coefficient is selected for node splitting. The Gini coefficient is calculated as follows:
$$Gini(d, a) = \frac{|d_1|}{|d|}\,Gini(d_1) + \frac{|d_2|}{|d|}\,Gini(d_2),$$
$$Gini(d) = 1 - \sum_{i=1}^{k} \left( \frac{c_i}{|d|} \right)^2,$$
where $d_1$ and $d_2$ are the two subsets produced by splitting data set $d$ on attribute $a$, $c_i$ is the number of samples of class $i$ in $d$, and $k$ is the number of classes.
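The two Gini equations can be written directly in Python; the labels here are arbitrary class tags:

```python
from collections import Counter

def gini(labels):
    # Gini(d) = 1 - sum_i (c_i / |d|)^2 over the classes present in data set d.
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(left, right):
    # Gini(d, a) = (|d1|/|d|) Gini(d1) + (|d2|/|d|) Gini(d2) for a binary split.
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
```

A pure split such as `gini_split(['Good', 'Good'], ['Bad', 'Bad'])` scores 0, so CART would prefer the attribute producing it when choosing a node to split.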
The decision tree algorithm is prone to over-fitting, especially when dealing with complex data sets. To prevent the decision tree's learning accuracy from being too high and resulting in over-fitting, pruning and random forests can be used to modify the decision tree. While effectively addressing the over-fitting problem of the decision tree, both the random forest algorithm and the decision tree pruning algorithm offer advantages in high-dimensional data processing, robustness, training speed, and interpretability.
The specific steps of decision tree pruning [18] are as follows:
Let the non-leaf nodes of the decision tree be $\{T_1, T_2, T_3, \dots, T_n\}$.
Step 1: Calculate the surface error rate gain value of all non-leaf nodes.
Step 2: Select the non-leaf node with the smallest surface error rate gain value (if multiple non-leaf nodes have the same small surface error rate gain value, select the non-leaf node with the largest number of nodes).
Step 3: Prune the selected non-leaf node.
The surface error rate gain value is calculated as follows:
$$\alpha = \frac{R(t) - R(T)}{N(T) - 1}$$
where $R(t)$ represents the error cost of the node when it is pruned to a leaf; $R(T)$ is the error cost of the sub-tree, $R(T) = \sum_{i=1}^{m} r_i(t)\, p_i(t)$, with $r_i(t)$ being the error rate of sub-node $i$ and $p_i(t)$ being the proportion of data at node $i$; and $N(T)$ represents the number of sub-tree nodes.
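The surface error rate gain can be computed as follows; taking $N(T)$ to be the number of leaves of the sub-tree is an assumption of this sketch:

```python
def subtree_error_cost(leaf_error_rates, leaf_data_fractions):
    # R(T) = sum_i r_i(t) * p_i(t): error cost accumulated over the sub-tree's leaves.
    return sum(r * p for r, p in zip(leaf_error_rates, leaf_data_fractions))

def error_gain(node_error_cost, leaf_error_rates, leaf_data_fractions):
    # alpha = (R(t) - R(T)) / (N(T) - 1); the non-leaf node with the smallest
    # alpha is pruned first (Steps 1-3 above).
    r_T = subtree_error_cost(leaf_error_rates, leaf_data_fractions)
    return (node_error_cost - r_T) / (len(leaf_error_rates) - 1)
```

For example, a node with error cost 0.2 whose two leaves each have error rate 0.05 over half the data yields a small $\alpha$, making it an early pruning candidate.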
In the stage of decision tree generation, Weighted Random Forest (WRF) [9] is introduced to give higher weights to the minority classes so that the penalty for misclassification of the minority classes will be higher than misclassification of the majority classes in model training, which makes the decision tree based on WRF pay more attention to the minority classes when splitting nodes. Class weights are calculated as follows:
$$W_j = \frac{1}{P_j \times C}$$
where $P_j$ is the proportion of the number of samples in category $j$ to the total number of samples and $C$ is the number of classes. The voting weight of each decision tree can be calculated using the out-of-bag data. The specific equation is as follows:
$$W_i = \frac{X_{correct}}{X}$$
where $W_i$ is the voting weight of the $i$th decision tree, $X_{correct}$ is the number of samples predicted correctly, and $X$ is the total number of out-of-bag samples evaluated by the decision tree.
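Both weight formulas are simple enough to state as small helpers; the class counts in the usage line are hypothetical:

```python
def class_weights(class_counts):
    # W_j = 1 / (P_j * C): the rarer the class, the larger its weight.
    total = sum(class_counts.values())
    C = len(class_counts)
    return {cls: 1.0 / ((cnt / total) * C) for cls, cnt in class_counts.items()}

def tree_vote_weight(n_correct, n_total):
    # W_i = X_correct / X, estimated on the i-th tree's out-of-bag samples.
    return n_correct / n_total

weights = class_weights({'normal': 90, 'abnormal': 10})  # the minority class gets weight 5.0
```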
Based on the WRF algorithm, the sampling method is further improved, and an improved WRF algorithm (IWRF) is designed to increase the proportion of abnormal data during sampling and further optimize the sample imbalance problem. The core idea is that after bootstrap random sampling, if it is found that the proportion of abnormal categories is too low, a second random sampling is carried out on these abnormal categories to increase the proportion of abnormal samples.
The sampling algorithm flow is shown in Figure 5.
Two thresholds are designed for the second sampling of unbalanced samples, and different thresholds, K 1 , K 2 ,  are set according to different sample characteristics. K 1  is the threshold for judging whether a sample is balanced, and the calculation equation is as follows:
$$K_1 = \frac{O_1}{O_2}$$
where O 1  represents the number of abnormal samples being sampled and O 2  is the number of normal samples being sampled. If the threshold K 1  is not met, it is necessary to randomly re-sample the anomaly category with a low proportion and add it to the total sample. K 2  indicates the proportion after the second random sampling of the sample, ranging from 0.1% to 0.5%.
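The two-threshold second sampling can be sketched as follows; the label names, the re-sampling loop, and treating $K_2$ as a target share (with illustrative values in the test rather than the paper's 0.1–0.5% range) are assumptions of this illustration:

```python
import random

def balanced_bootstrap(n, labels, k1, k2):
    # Ordinary bootstrap draw of n indices; if the abnormal/normal ratio K1
    # falls below the threshold, abnormal cases are randomly re-sampled until
    # their share of the total sample reaches the K2 target.
    idx = [random.randrange(n) for _ in range(n)]
    abnormal = [i for i in idx if labels[i] == 'abnormal']
    normal = [i for i in idx if labels[i] == 'normal']
    pool = [i for i in range(n) if labels[i] == 'abnormal']
    if normal and len(abnormal) / len(normal) < k1:
        while pool and len(abnormal) / len(idx) < k2:
            extra = random.choice(pool)  # second random sampling of abnormal cases
            idx.append(extra)
            abnormal.append(extra)
    return idx
```

Each tree in the IWRF would then be trained on such a re-balanced bootstrap sample instead of a plain bootstrap draw.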

3.4. Determination of the Key Index Set

The Pareto front, which is a combination of key indexes obtained using the above algorithm, provides a trade-off plan between the number of indexes and the accuracy of the architecture evaluation. As shown in Figure 6, the maximum state evaluation accuracy (or the minimum number of indexes under each accuracy value) can be obtained under the condition of each index number, and the selection of key indexes can intercept the combination of non-dominant key indexes that meet the conditions based on the evaluation accuracy.
When the maximum number of key indexes in the Pareto front of the key index combination is less than the maximum number of indexes, it shows that there is index redundancy in the index system. Through algorithm iteration, this study selects the Pareto front corresponding to the index combination with the highest evaluation accuracy and the minimum number of key indexes, and this index combination is regarded as the key index. The overall indicator screening process is shown in Figure 7.

4. Simulation

Based on the Command: Modern Operations (CMO) combat data sets, a comparative test of key index extraction was carried out in two dimensions. Firstly, based on four machine learning algorithm models, namely KNN (K-nearest neighbor), MLP (multi-layer perceptron), RF, and IWRF, the key-index extraction accuracy of the MOPSO algorithm and the IMOPSO algorithm was compared to verify the superiority of the IMOPSO algorithm. The other test was based on the IMOPSO algorithm, conducting comparative experiments with KNN, MLP, RF, and IWRF. The experiment consisted of three main steps:
(1)
Preparation of simulation data. Taking the warning and reconnaissance mission of the underwater manned/unmanned cooperative attack and defense system as an example, and after selecting the type, number, and task flow of the system, the simulation software Command: Modern Operations (CMO v1.05.1307.18) was used to simulate 10,000 combat operations of the warning and reconnaissance mission and obtain 10,000 simulation test data points.
(2)
Multi-objective optimization algorithm comparison experiment. Based on four machine learning algorithms, namely KNN, MLP, RF, and IWRF, 10,000 CMO combat data were obtained to establish the mapping relationship between the simulation data and evaluation indicators, and 8000 combat data were taken as the training sample set as input to train the model. Then, the remaining 2000 pieces of combat data were taken as the test sample set. Finally, the test sample set was used to confirm the accuracy of the index set of different quantity indicators, and the accuracy test results of the index set under each machine learning algorithm were obtained. Based on the test results of each machine learning algorithm, the index set and the evaluation accuracy of the index set were taken as the input of the multi-objective optimization algorithm, and the key index extraction and comparison experiments were carried out, respectively, using the MOPSO algorithm and IMOPSO algorithm to verify the superiority of the multi-objective optimization algorithm designed in this study.
(3)
Machine learning algorithm comparison experiment. Based on the IMOPSO algorithm, the key index extraction of four machine learning algorithms, namely KNN, MLP, RF, and IWRF, was carried out to verify the superiority of the machine learning algorithm designed in this study.
Using the CART decision tree to build a random forest and adopting post-pruning optimization, the parameter settings of the random forest algorithm are shown in Table 1 below.
The parameter settings of the multi-objective particle swarm optimization algorithm are shown in Table 2.

4.1. Simulation Data Preprocessing

According to the constructed plan evaluation index system, and taking simplicity and comprehensiveness as the basic principles, this paper describes the cooperative combat effect hierarchically from three aspects, namely combat effect, combat assistance and decision-making effect, and loss and consumption, and sets three evaluation grades for each layer description, which are used as the machine learning algorithm’s classification labels [19]. They include combat effect (Good, Medium, and Bad), combat assistance and decision-making effect (Good, Medium, and Bad), and loss and consumption (High, Medium, and Low), as shown in Table 3.
The simulation data were obtained from the Command: Modern Operations (CMO) combat simulation deduction platform. Based on the CMO simulation combat data, the mapping relationship between the plan evaluation index and CMO simulation combat data was established. For each evaluation index, the corresponding data set in the CMO data was identified, and the corresponding relationship is shown in Table 4.
The CMO v1.05.1307.18 was used while taking the underwater manned/unmanned cooperative attack and defense system warning and reconnaissance task as an example. After selecting the type, number, and task flow of the system, the enemy target units were set as the aircraft carrier strike group, underwater submarine formation, the lone wolf submarine unit, and transport ship formation according to the threat degree, and the mines and enemy cruise units were randomly set in the open sea area.
Finally, 10,000 combat operations of warning and reconnaissance missions were simulated, and 10,000 simulation test results were obtained.

4.2. Simulation Results

Four machine learning algorithms, namely KNN, MLP, RF, and CART-IWRF, were used to carry out key index extraction experiments comparing the MOPSO algorithm and IMOPSO algorithm. The Pareto optimal front simulation results of each algorithm combination are shown in Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15.
Figure 8 and Figure 10 are simulation results based on CART-IWRF, Figure 9 and Figure 15 are experimental results based on MLP, Figure 11 and Figure 13 are experimental results based on RF, and Figure 12 and Figure 14 are experimental results based on KNN.
Based on the CART-IWRF algorithm, a comparative test of the IMOPSO algorithm and MOPSO algorithm was conducted, and the test results are shown in Figure 16.
Based on the KNN algorithm, a comparative test of the IMOPSO algorithm and MOPSO algorithm was carried out, and the test results are shown in Figure 17.
Based on the RF algorithm, a comparative test of the IMOPSO algorithm and MOPSO algorithm was conducted, and the test results are shown in Figure 18.
Based on the MLP algorithm, a comparative test of the IMOPSO algorithm and MOPSO algorithm was conducted, and the test results are shown in Figure 19.
Based on the IMOPSO algorithm, a comparative test of the four machine learning algorithms, namely KNN, MLP, RF, and CART-IWRF, was conducted. The test results are shown in Figure 20.
Based on an analysis of Figure 16, Figure 17, Figure 18 and Figure 19, it can be observed that, combined with any of the four machine learning algorithms, the traditional MOPSO algorithm has low convergence accuracy in the classification of Pareto optimal solutions and has difficulty escaping local optima, and its overall Pareto optimal front curve is inferior to that of the IMOPSO algorithm. Based on an analysis of Figure 20, it can be observed that the classification accuracy of the RF, KNN, and MLP machine learning algorithms is low, and their overall Pareto optimal front curves are inferior to that of the CART-IWRF algorithm. In summary, based on the analysis of the comparative test results, the CI-IMOPSO algorithm designed in this study has the dual advantages of high convergence accuracy and high classification accuracy of the index set; it can escape the constraint of local optimal regions and obtain the best Pareto optimal front curve.
Using the CI-IMOPSO algorithm, 300 independent tests were conducted, and the index sets corresponding to the 300 Pareto curves and the corresponding average accuracy rates were taken to obtain the Pareto optimal front data list, as shown in Table 5.
According to the simulation of Pareto fronts, when combining the double constraint requirements regarding the number of key indexes and the accuracy of the index set, the number of key indexes should not be too high (no more than five), while the accuracy of the index set should not be too low (no less than 70%). Therefore, we chose the Pareto optimal front index set with five indexes as the result for extracting key indexes in the plan evaluation space. According to Table 5, the extracted five key indexes are A1, B1, B2, E1, and F2.

4.3. Simulation Analysis

According to the extraction results of key indexes, the five key indexes obtained are A1, B1, B2, E1, and F2. This section presents an analysis of the effectiveness and reliability of the above indexes.
The validity analysis of the key indexes is determined by the validity coefficient β , and an evaluation index system F = f 1 , f 2 , , f n  is set up. The number of experts participating in the evaluation is s , the score set of expert j  on evaluation objectives is X j = x 1 j , x 2 j , , x n j , the validity coefficient of index f i  is β i , and the validity coefficient β  of the evaluation index system F  is calculated as follows:
$$\beta = \frac{1}{n} \sum_{i=1}^{n} \beta_i = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{s} \frac{\left| \bar{x}_i - x_{ij} \right|}{s M}$$
where $\bar{x}_i$ is the average score of the evaluation index $f_i$, with $\bar{x}_i = \frac{1}{s} \sum_{j=1}^{s} x_{ij}$, and $M$ is the best value on the scoring scale for index $f_i$.
When the validity coefficient $\beta < 0.1$, the effectiveness of the index system is high. We set $F_{KI} = \{f_1, f_2, f_3, f_4, f_5, f_6\}$, where $f_1, \dots, f_6$ represent the key indexes A1, B1, B2, D1, F1, and E2, respectively. Ten experts were invited to participate in the effectiveness analysis of the key indexes, and the validity coefficient $\beta = 0.07$ was calculated based on the key index score sets $X_j = (x_{1j}, x_{2j}, \dots, x_{nj})$ given by experts from different organizations; thus, the extracted key indexes are highly effective.
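A direct implementation of the validity coefficient; treating the deviation as an absolute deviation and $M$ as the best (maximum) scale score are assumptions about the formula above:

```python
def validity_coefficient(scores, best_value):
    # beta = (1/n) * sum_i sum_j |x_bar_i - x_ij| / (s * M);
    # scores is an n x s matrix: row i holds the s expert scores for index f_i.
    n = len(scores)
    s = len(scores[0])
    beta = 0.0
    for row in scores:
        mean = sum(row) / s
        beta += sum(abs(mean - x) for x in row) / (s * best_value)
    return beta / n
```

Perfect expert agreement yields $\beta = 0$; $\beta < 0.1$ would then be read as high validity.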
The reliability analysis of the key indexes is based on the reliability coefficient $\rho$; the correlation coefficient obtained from mathematical statistics is used as the reliability coefficient of the evaluation index system to reflect its reliability.
According to the above assumptions, the average data set of the expert group scores is $Y = \{y_1, y_2, \ldots, y_n\}$, where:
$$y_i = \frac{1}{s}\sum_{j=1}^{s} x_{ij}$$
The reliability coefficient of the evaluation index system is $\rho = \frac{1}{s}\sum_{j=1}^{s}\rho_j$, where $\rho_j$ is calculated as follows:
$$\rho_j = \frac{\sum_{i=1}^{n}\left(x_{ij} - \bar{x}_j\right)\left(y_i - \bar{y}\right)}{\sqrt{\sum_{i=1}^{n}\left(x_{ij} - \bar{x}_j\right)^2 \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}}, \quad j = 1, 2, \ldots, s$$
where $\bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}$ and $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$.
When $\rho \in [0.95, 1.00]$, the reliability of the evaluation index system is considered very high; when $\rho \in [0.85, 0.95)$, it is considered high; when $\rho \in [0.80, 0.85)$, it is considered average; and when $\rho < 0.80$, it is considered poor [20].
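The reliability coefficient and its grading can be computed directly from the formulas above. The Python sketch below uses hypothetical scores and assumes that neither an expert's score vector nor the group means are constant, so the correlation denominators are nonzero.

```python
import math

def reliability_coefficient(scores):
    """Mean Pearson correlation between each expert's scores and the
    group-average scores. scores[i][j]: score of expert j on index i."""
    n = len(scores)
    s = len(scores[0])
    y = [sum(row) / s for row in scores]   # group mean per index, y_i
    y_bar = sum(y) / n
    rho = 0.0
    for j in range(s):
        xj = [scores[i][j] for i in range(n)]  # expert j's score vector
        xj_bar = sum(xj) / n
        num = sum((xj[i] - xj_bar) * (y[i] - y_bar) for i in range(n))
        den = math.sqrt(sum((v - xj_bar) ** 2 for v in xj)
                        * sum((v - y_bar) ** 2 for v in y))
        rho += num / den                   # rho_j for expert j
    return rho / s

def reliability_grade(rho):
    """Grade rho against the thresholds quoted from [20]."""
    if rho >= 0.95:
        return "very high"
    if rho >= 0.85:
        return "high"
    if rho >= 0.80:
        return "average"
    return "poor"
```

Under this grading, the study's $\rho = 0.93$ falls in $[0.85, 0.95)$ and is graded high.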
Based on the score sets of the key indexes $X_j = \{x_{1j}, x_{2j}, \ldots, x_{nj}\}$ given by the experts, the average data set of expert group scores $Y = \{y_1, y_2, \ldots, y_n\}$ was obtained, and the reliability coefficient of the key index system was calculated to be $\rho = 0.93$. Therefore, the reliability and effectiveness of the key indexes extracted in this study are high.

5. Conclusions

Based on the mapping relationship between the OODA decision-making loop and the operational flow, in this study we systematically investigated the influence of each main stage on the effectiveness of the operational plan, constructed and analyzed the element space of plan evaluation, and aggregated the results into an effectiveness evaluation index system for the operational plan. Building on this index system, the CART-IWRF algorithm was designed to determine the accuracy of an index set. Taking the index set and its accuracy as the inputs of the multi-objective algorithm, the CI-IMOPSO algorithm was designed to extract the key indexes of the effectiveness evaluation index system. Simulation experiments show that the evaluation indexes proposed in this study are highly reliable and, after processing by the CART-IWRF and CI-IMOPSO algorithms, can effectively evaluate an underwater combat system. Compared with other data classification and optimization algorithms, our algorithm greatly improves reliability and evaluation accuracy.
Although this research has achieved some results, the indicators used to form the evaluation system have not broken free of the limitations of established indicator sets. These indicators are computationally heavy, and it is difficult for them to support a fully comprehensive evaluation from the perspective of the underwater combat system. In addition, although the proposed algorithm was compared with existing algorithms and shows improvements, its behavior in actual operation still needs to be examined.

Author Contributions

Conceptualization, H.Z.; Methodology, H.Z.; Software, X.G.; Validation, Y.M.; Formal analysis, Y.M.; Data curation, X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dorsey, R.A. Joint Vision 2010 Operational Concepts and Issues in Military Operations Other Than War (MOOTW); US Army War College: Carlisle, PA, USA, 2000. [Google Scholar]
  2. Wenwu, Y.; Yueming, H. On Risk Management of Tunnel Engineering in Whole Process. Tunn. Constr. 2015, 35, 753–758. [Google Scholar]
  3. Ding, Y.; Liu, C.; Lu, Q.; Zhu, M. Effectiveness Evaluation of UUV Cooperative Combat Based on GAPSO-BP Neural Network. In Proceedings of the 31st China Control and Decision-Making Conference, Nanchang, China, 3–5 June 2019. [Google Scholar]
  4. Chmieliauskas, A.; Chappin, E.J.L.; Nikolic, I.; Dijkema, G.; Davis, C. New Methods for Analysis of System of Systems and Policy: The Power of Systems Theory, Crowd Sourcing and Data Management. In System of Systems; Chapters; IntechOpen: London, UK, 2012. [Google Scholar]
  5. Chunzhi, W. Research on the method of comprehensive evaluation index screening and preprocessing. Stat. Educ. 2007, 15–16. [Google Scholar]
  6. Yang, J.; Hu, X.; Zhang, Y.; Wu, W. Joint combat capability evaluation technology based on system simulation experiment. Command. Inf. Syst. Technol. 2017, 8, 1–9. [Google Scholar] [CrossRef]
  7. Yanfei, L.; Yi, Y. Evaluation of combat effectiveness of land attack cruise missile based on OODA loop. Ship Electron. Eng. 2022, 42, 128–133. [Google Scholar]
  8. Wang, L.; Beeson, D.; Akkaram, S.; Wiggs, G. Gaussian Process Meta-Models for Efficient Probabilistic Design in Complex Engineering Design Spaces. In Proceedings of the ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Long Beach, CA, USA, 24–28 September 2005. [Google Scholar] [CrossRef]
  9. Sihao, C. Empirical Research on Credit Risk Based on Random Forest. Ph.D. Thesis, Chongqing University, Chongqing, China, 2023. [Google Scholar]
  10. Shamrat, F.M.J.M.; Ranjan, R.; Hasib, K.M.; Yadav, A.; Siddique, A.H. Performance evaluation among ID3, C4.5, and CART Decision Tree Algorithms. In Proceedings of the International Conference on Pervasive Computing and Social Networking (ICPCSN 2021), Salem, India, 19–20 March 2021. [Google Scholar] [CrossRef]
  11. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley: Boston, MA, USA, 1989; ISBN 0-201-15767-5. [Google Scholar] [CrossRef]
  12. Lapucci, M.; Mansueto, P. Improved Front Steepest Descent for Multi-objective Optimization. arXiv 2023, arXiv:2301.03310. [Google Scholar] [CrossRef]
  13. Martin, D.J.; Simpson, W.T. Use of Kriging Models to Approximate Deterministic Computer Models. AIAA J. 2012, 43, 853–863. [Google Scholar] [CrossRef]
  14. Feng, G.; Cui, D.; Zhang, Q.; Dai, X. Application of neural network-assisted multi-objective particle swarm optimization algorithm in complex product design. J. Syst. Manag. 2019, 28, 11. [Google Scholar] [CrossRef]
  15. Ding, X. Research and Implementation of Tobacco Commercial Logistics Network Optimization Based on Particle Swarm Optimization. Ph.D. Thesis, Shanghai Jiaotong University, Shanghai, China, 2012. [Google Scholar]
  16. Hui, W. Research on Hybrid Particle Swarm Optimization Based on Cauchy Variation. Ph.D. Thesis, University of Geosciences, Wuhan, China, 2008. [Google Scholar]
  17. Jiang, H.; Guo, T.; Yang, Z.; Zhao, L. Research on Material Emergency Dispatch Based on Discrete Whale Swarm Algorithm. J. Electron. Inf. Technol. 2022, 44, 1484–1494. [Google Scholar]
  18. Shaobo, F.; Zhongjie, Z.; Jian, H. Association rule classification method for enhanced decision tree pruning. Comput. Eng. Appl. 2023, 59, 87–94. [Google Scholar] [CrossRef]
  19. Yang, Z. Research on the Application of Machine Learning Algorithms in Data Classification. Ph.D. Thesis, North University of China, Taiyuan, China, 2023. [Google Scholar]
  20. Li, C.; Ding, L. Screening method for complex system evaluation indicators. Stat. Decis. Mak. 2004, 2. [Google Scholar] [CrossRef]
Figure 1. Mapping of main stages and key evaluation elements in the OODA decision chain.
Figure 2. Evaluation index system.
Figure 3. Flow of the IMOPSO algorithm in the application of key index extraction.
Figure 4. Accuracy determination process of the index set based on CART-IWRF.
Figure 5. Sampling algorithm flow.
Figure 6. Pareto front.
Figure 7. Indicator screening process based on CART-IWRF and IMOPSO.
Figure 8. Experimental results of the IMOPSO algorithm based on CART-IWRF.
Figure 9. Experimental results of the IMOPSO algorithm based on MLP.
Figure 10. Experimental results of the MOPSO algorithm based on CART-IWRF.
Figure 11. Experimental results of the IMOPSO algorithm based on RF.
Figure 12. Experimental results of the IMOPSO algorithm based on KNN.
Figure 13. Experimental results of the MOPSO algorithm based on RF.
Figure 14. Experimental results of the MOPSO algorithm based on KNN.
Figure 15. Experimental results of the MOPSO algorithm based on MLP.
Figure 16. Comparative test results of the IMOPSO algorithm and MOPSO algorithm based on CART-IWRF.
Figure 17. Comparative test results of the IMOPSO algorithm and MOPSO algorithm based on KNN.
Figure 18. Comparative test results of the RF-based IMOPSO algorithm and MOPSO algorithm.
Figure 19. Comparative test results of the IMOPSO algorithm and MOPSO algorithm based on MLP.
Figure 20. Comparative experimental results of the four machine learning algorithms based on the IMOPSO algorithm.
Table 1. Random forest algorithm parameter settings.

Parameter | Value | Remark
n_estimators | 120 | The number of sub-data sets (i.e., decision trees) generated by sampling the original data set; 120 CART decision trees were set in this study.
bootstrap | True | Whether to build each tree on a sample drawn with replacement from the sample set.
oob_score | True | Whether to use out-of-bag samples to evaluate the quality of the model.
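The `bootstrap` and `oob_score` settings in Table 1 rest on sampling with replacement: each of the 120 trees trains on an in-bag draw and can then be scored on the samples it never saw. The Python sketch below illustrates that mechanism only; the data set size and seed are arbitrary, and this is not the study's implementation.

```python
import random

def bootstrap_sample(n_samples, rng):
    """Draw one bootstrap sample: n_samples indices with replacement.

    Returns the in-bag indices (the training data for one tree) and the
    out-of-bag (oob) indices, usable to score that tree on unseen data.
    """
    in_bag = [rng.randrange(n_samples) for _ in range(n_samples)]
    oob = sorted(set(range(n_samples)) - set(in_bag))
    return in_bag, oob

rng = random.Random(0)   # arbitrary seed for reproducibility
n_trees = 120            # n_estimators in Table 1
samples = [bootstrap_sample(200, rng) for _ in range(n_trees)]
```

For a large sample, roughly a third of the data (about $1/e \approx 36.8\%$) ends up out of bag for any given tree, which is what makes the oob estimate a useful built-in validation.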
Table 2. MOPSO algorithm parameter settings.

Parameter | Symbol | Value
Particle swarm size | N | 100
Iterations of the index screening algorithm | - | 36
Storage capacity of the individual non-dominated solution set | NDi | 50
Storage capacity of the global non-dominated solution set | NDg | 200
Selection probability of the exemplar update mode | ps | 0.7
Crossover probability | pc | 0.8
Mutation probability | pm | 0.08
Crowding distance | $d_i$ | $d_i = \left|f_{i+1} - f_{i-1}\right|$
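The crowding-distance entry in Table 2, $d_i = |f_{i+1} - f_{i-1}|$, measures how isolated a solution is along each objective by the gap between its sorted neighbors. The Python sketch below implements this unnormalized form as we read it from the table; NSGA-II-style implementations usually also divide each gap by the objective's range, and boundary solutions get infinite distance so they are always retained.

```python
def crowding_distances(front):
    """Crowding distance of each solution in a front.

    front: list of objective tuples, one per solution. For every objective,
    solutions are sorted and each interior solution accumulates the gap
    between its two neighbors; boundary solutions receive infinity.
    """
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += abs(front[order[pos + 1]][k] - front[order[pos - 1]][k])
    return dist
```

Solutions with larger crowding distance lie in sparser regions of the front and would be preferred when a non-dominated archive overflows its capacity (NDg = 200 in Table 2).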
Table 3. Operational evaluation level.

Evaluation level combination sequence (combat effect, combat assistance and decision-making effect, and loss and consumption):
1: [G, G, H]
2: [G, G, M]
3: [G, G, L]
4: [G, M, H]
5: [G, M, M]
6: [G, M, L]
7: [G, B, H]
8: [G, B, M]
9: [G, B, L]
10: [M, G, H]
11: [M, G, M]
12: [M, G, L]
13: [M, M, H]
14: [M, M, M]
15: [M, M, L]
16: [M, B, H]
17: [M, B, M]
18: [M, B, L]
19: [B, G, H]
20: [B, G, M]
21: [B, G, L]
22: [B, M, H]
23: [B, M, M]
24: [B, M, L]
25: [B, B, H]
26: [B, B, M]
27: [B, B, L]
Table 4. Mapping between evaluation indexes and CMO data.

Secondary Index | Level-3 Index | CMO Corresponding Data Set
A | A1 | Equipment data (detection range)
A | A2 | Equipment data (detection accuracy)
A | A3 | Unit data (reconnaissance attribute target recognition)
B | B1 | Combat data (mission efficiency)
B | B2 | Task data (task difficulty)
B | B3 | Task data (task risk)
C | C1 | Equipment data (environmental data)
C | C2 | Equipment data (environmental data)
C | C3 | Equipment data (environmental data)
D | D1 | Weapon data, equipment data (attack range)
D | D2 | Weapon data, equipment data (attack accuracy)
D | D3 | Combat data (operational timeliness)
E | E1 | Equipment data (communication range)
E | E2 | Equipment data (communication rate)
E | E3 | Equipment data (communication capability)
F | F1 | Combat data (combat manpower consumption)
F | F2 | Combat data (combat equipment consumption)
F | F3 | Combat data (combat energy consumption)
F | F4 | Combat data (combat ammunition consumption)
Table 5. Pareto front data list.

ID | Index Set Content | Accuracy
1 | E1 | 20%
2 | A1, E1 | 36%
3 | A1, B1, E1 | 41%
4 | A1, B1, B2, E1 | 55%
5 | A1, B1, B2, E1, F2 | 75%
6 | A1, B1, B2, D2, E1, F2 | 77%
7 | A1, B1, B2, D1, D2, E1, F2 | 79%
8 | A1, B1, B2, D2, E1, E2, E3, F2 | 80%
9 | A1, B1, B2, B3, D2, D3, E1, E2, F2 | 82%
10 | A1, A2, B1, B2, B3, D2, E1, E2, E3, F2 | 85%
11 | A1, A2, B1, B2, B3, D2, E1, E2, E3, F1, F2 | 87%
12 | A1, A2, A3, B1, B2, B3, D1, D2, E1, E2, E3, F4 | 89%
13 | A1, A2, A3, B1, B2, B3, D1, D2, D3, E1, E2, E3, F4 | 90%
14 | A1, A2, A3, B1, B2, B3, C2, D1, D2, D3, E1, E2, E3, F4 | 92%
15 | A1, A2, A3, B1, B2, B3, C2, D1, D2, D3, E1, E2, E3, F2, F4 | 94%
16 | A1, A2, A3, B1, B2, B3, C2, D1, D2, D3, E1, E2, E3, F1, F2, F4 | 96%
17 | A1, A2, A3, B1, B2, B3, C2, C3, D1, D2, D3, E1, E2, E3, F1, F2, F4 | 97%
18 | A1, A2, A3, B1, B2, B3, C2, C3, D1, D2, D3, E1, E2, E3, F1, F2, F3, F4 | 98%
19 | A1, A2, A3, B1, B2, B3, C1, C2, C3, D1, D2, D3, E1, E2, E3, F1, F2, F3, F4 | 100%

Share and Cite

Zhou, H.; Mao, Y.; Guo, X. An Improved Multi-Objective Particle Swarm Optimization-Based Hybrid Intelligent Algorithm for Index Screening of Underwater Manned/Unmanned Cooperative System of Systems Architecture Evaluation. Mathematics 2023, 11, 4389. https://doi.org/10.3390/math11204389
