Article

An Efficient Improved Harris Hawks Optimizer and Its Application to Form Deviation-Zone Evaluation

1 School of Mechanical Engineering, Southwest Jiaotong University, Chengdu 610031, China
2 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
3 Sichuan Research & Design Institute of Agricultural Machinery, Chengdu 610066, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(13), 6046; https://doi.org/10.3390/s23136046
Submission received: 4 May 2023 / Revised: 16 June 2023 / Accepted: 27 June 2023 / Published: 29 June 2023

Abstract

Evaluation of the deviation zone based on discrete measured points is crucial for quality control in manufacturing and metrology. However, deviation-zone evaluation is a highly nonlinear problem that is difficult to solve using traditional numerical optimization methods. Swarm intelligence has many advantages in solving this problem: it produces gradient-free, high-quality solutions and is characterized by its ease of implementation. Therefore, this study applies an improved Harris hawks algorithm (HHO) to tackle the problem. The average fitness is applied to replace the random operator in the exploration phase to solve the problem of conflicting exploration strategies due to randomness. In addition, the salp swarm algorithm (SSA) with a nonlinear inertia weight is embedded into the HHO, such that the superior explorative ability of SSA can fill the gap in the exploration of HHO. Finally, the optimal solution is greedily selected between SSA-based individuals and HHO-based individuals. The effectiveness of the proposed improved HHO optimizer is checked through a comparison with other swarm intelligence methods in typical benchmark problems. Moreover, the experimental results of form deviation-zone evaluation on primitive geometries show that the improved method can accurately solve various form deviations, providing an effective general solution for primitive geometries in the manufacturing and metrology fields.

1. Introduction

Primitive geometries are widely used in aerospace, shipbuilding, and medicine. The form deviation of primitive geometries affects part mating, which is critical to quality control in manufacturing and metrology because it influences assembly, service life, wear resistance, and motion. Therefore, the development of computational algorithms to improve the efficiency and reliability of the production processes for manufactured parts has been a challenging research task during the last three decades. The rapid acquisition of discrete point clouds from surfaces has become possible with the development of coordinate measuring machines (CMMs) and low-cost 3D acquisition techniques. Therefore, inspecting manufactured parts by coordinate metrology on a discrete point cloud is an effective method for assessing the degree to which design requirements are satisfied.
Typically, the measured points should be compared to the ideal geometry to determine whether the part is to be accepted or rejected [1]. Robust algorithms can quickly determine a substitute geometry in point clouds, but the reliability and efficiency of the algorithms are affected by the inherent uncertainty of the equipment used. Uncertainty may arise either from systematic errors [2] or data noise [3]. Hence, numerous coordinate metrology tasks focus on eliminating the uncertainty associated with data acquisition, which is referred to as point measurement planning (PMP) [4]. Since the measurement time is proportional to the number of points, PMP research focuses on the size and location of the measurement process to achieve a more precise representation of the measured geometries using fewer points [5]. The sampling strategy design is a solution for contact measurements. Sampling strategies in the literature can be divided into three main categories: uniform, random, and stratified sampling [6]. Noncontact measurements provide more surface information [7], but at the same time, the increased density leads to computational instability and cost. Gohari et al. [8] introduced a data-mining algorithm that analyzes the trend of errors for the acquired points, which guarantees a reliable evaluation of geometric and form deviations.
The second computational task, which is also the main focus of this research work, is substitute geometry estimation (SGE). The objective of SGE is either to obtain the ideal geometry parameters from the measured points or to locate the ideal geometry relative to the measured points via different types of fitting criteria, such as least-squares fitting, total least-squares fitting, min–max fitting, and minimum-zone fitting. In the former case, the fitting is performed directly on the point cloud; in the latter, the point cloud is aligned with the design model. The least-squares method is widely used in surface error estimation owing to its high evaluation efficiency. However, it does not strictly adhere to the minimum zone specified by ISO [9] and can only provide approximate results, which may lead to misjudgment of the workpiece and economic losses. The minimum-zone method is often used as the basis for arbitration among various evaluation methods since it is more consistent with the standard definition of physical fittings [10]. Nevertheless, minimum-zone deviation evaluation is a highly nonlinear problem, and multivariate optimization algorithms are required to provide satisfactory substitute geometries.
There have been many studies that have applied numerical optimization methods to the evaluation of form errors, including simplex search [11,12], semidefinite programming [13], linear approximation [14], iterative reweighted [15], Chebyshev approximation [16], and steepest descent [17]. In the case of higher nonlinearity, it is challenging for these algorithms to obtain the global solution, since several local solutions may exist. Increasing the number of sample points also reduces the chance of obtaining the global minima in the employed optimization process [18]. Based on this, many researchers have developed new data-fitting methods to solve the above problems.
The representation of surfaces by convex hulls is common in computational geometry techniques, and many researchers have applied it to form error evaluation [19,20]. In addition to the convex-hull technique, Liu et al. [21] constructed the minimum-zone roundness intersection structure and evaluation model using the crossing relationship of chords. In a subsequent study [22], the method was expanded to cylindricity evaluation. Also based on computational geometry techniques, Khlil et al. [23] presented an improved algorithm for minimum-zone roundness error evaluation using an alternating exchange method, in which a minimum-zone fitting function was created to enhance the roundness error evaluation. Zhuo et al. [24] introduced the definition of the crossing sector structure based on the minimum-zone criterion and transformed it into an angular relationship of control points, making it easy to identify the minimum-zone circle (MZC). For straightness error, Li et al. [25] proposed a simple bidirectional algorithm based on a four-point model for the calculation of the minimum-zone straightness error from planar coordinate data; four points are used to construct upper and lower reference lines, and candidate points can be selected effectively by comparing the slopes of these lines. It is worth noting that computational geometry techniques suffer from the inherent problem of poor solution accuracy. Therefore, combining an initial solution with a region search algorithm to search the parameter space greedily is a new research direction. Ye et al. [26] proposed a new neighborhood-based adaptive iterative search strategy whose results are more accurate than those of conventional techniques. Huang et al. [27] presented an asymptotic search method in which roundness is solved iteratively using intersecting chords to avoid becoming trapped in a local solution. Liu et al. [28] proposed and developed a novel cylindricity evaluation method; the framework and information flow of the algorithm were documented, together with a description of the six-point subset, the replacement strategy, and the terminal condition. However, the greedy search approach does not guarantee a global solution and is often inefficient as the amount of data increases. There is still a demand for a comprehensive and stable method.
In recent studies, researchers have successively adopted swarm intelligence (SI) to resolve these issues. Some well-known SI methods and many improved optimization algorithms have been effectively used to determine substitute geometries according to various criteria. Du [29] and Pathak [30] applied particle swarm optimization (PSO) to evaluate form errors. Zhang et al. [31] applied an ant-colony algorithm (ACO) to straightness evaluation, but it easily fell into local optimal solutions, so Luo et al. [32] applied an improved artificial bee-colony (ABC) algorithm to straightness error evaluation; however, it still lacked accuracy. On this basis, Luo [33] proposed an improved differential evolution (DE) algorithm for straightness evaluation. For roundness, Wen et al. [34] proposed the use of a genetic algorithm (GA) for the evaluation of the minimum-zone circle, but the genetic algorithm requires the adjustment of numerous parameters. Recently, Li et al. [35] proposed an improved bat algorithm (BA) to achieve accurate evaluation of minimum-zone roundness. In addition, the application of a genetic algorithm [10] and an improved cuckoo search (CS) algorithm [36] to flatness has been studied. These advanced optimization algorithms have their own advantages and disadvantages. Genetic algorithms can be applied to various complex real-world optimization problems but require the tuning of several operators, such as crossover, mutation, and selection. The particle swarm optimization algorithm still needs its inertia weights adjusted for different problems to avoid falling into a local optimum. The CS algorithm, based on the foraging behavior of cuckoos, can obtain high-quality solutions, but its convergence speed is slow.
Harris hawks optimization (HHO) [37] has received extensive attention from the research community. The construction of HHO mimics the foraging behavior of Harris hawks in nature, and the algorithm is designed with two phases of exploration and four phases of exploitation. Testing on benchmark functions and several engineering optimization problems confirms that HHO outperforms many well-known SI approaches, such as PSO, the grey wolf optimizer (GWO), CS, DE, and the whale optimization algorithm (WOA). Notably, HHO expresses a highly exploitative ability in later stages. The salp swarm algorithm (SSA) [38] is also a well-established swarm intelligence technique based on the salp chain, which simulates salp foraging patterns in the ocean. Owing to its simplicity and effectiveness, it has been widely used in unconstrained and constrained optimization problems.
In this paper, an improved HHO algorithm (IHHO) is proposed for solving the form deviation-zone evaluation problem. The IHHO focuses on two areas of improvement: exploration strategy selection and exploration capabilities. The latter was mainly inspired by [39]. Furthermore, the search area of the primitive geometries is analyzed to speed up the convergence.
The rest of the paper is organized as follows: Section 2 presents the modeling of the objective function and the determination of the search area of the primitive geometries. An overview of the optimizer is also described. The specific structure of the proposed optimizer is presented in Section 3. Section 4 describes a group of experiments and analyses of the global benchmark problem. Section 5 verifies the practicality of the proposed optimizer in dealing with the form deviation-zone evaluation problem. Finally, conclusions are drawn in Section 6.

2. Materials and Methods

2.1. Form Deviation-Zone Evaluation Model

The minimum zone is the basic principle for assessing form error and is the final basis for arbitration in the event of a dispute. To obtain a reliable form deviation zone, it is necessary to establish an optimization objective function according to the minimum-zone criterion, based on the distance function of each primitive geometry. Assuming that P_i(x_i, y_i, z_i) is a discrete measured point acquired from the surface and f(P_i, U) denotes the distance function from the point to the ideal surface, where U denotes the fitted parameters, then:
e_i = \max_i \{ f(P_i, U) \} - \min_i \{ f(P_i, U) \}    (1)
The objective function satisfying the minimum-zone criterion is:
F(P_i, U) = \min \{ e_i \}    (2)
The deviation-zone evaluation process solves Equation (2) by continuously optimizing the fitting parameter, U, to minimize the objective function. Obviously, diverse geometries have different expressions and distance functions, and the number of parameters to be optimized varies. In the following, these surfaces will be individually discussed.
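To make the objective concrete, the spread of distances in Equations (1) and (2) can be evaluated by a short routine that the optimizers discussed later minimize over the fitting parameters U. The following is a minimal sketch in Python/NumPy, not the authors' implementation; the function and variable names and the commented-out data file are illustrative assumptions, and the distance function is one of those listed in Appendix A.

```python
import numpy as np

def deviation_zone(points, params, distance_fn):
    """Minimum-zone objective of Eqs. (1)-(2): spread of point-to-geometry distances.

    points      -- (n, d) array of measured coordinates P_i
    params      -- candidate fitting parameters U (e.g., a circle centre)
    distance_fn -- distance f(P_i, U) from a point to the ideal geometry
    """
    d = np.array([distance_fn(p, params) for p in points])
    return d.max() - d.min()          # e = max_i f(P_i, U) - min_i f(P_i, U)

# Example for roundness, where U = (a, b) is the circle centre and the radius
# cancels out of the max-min difference (see the Roundness item below).
def circle_distance(p, centre):
    return np.hypot(p[0] - centre[0], p[1] - centre[1])

# pts = np.loadtxt("circle_section.txt")   # hypothetical data file
# e = deviation_zone(pts, np.array([0.0, 0.0]), circle_distance)
```

Any of the optimizers in this paper then searches for the parameters U that minimize this returned value.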
1. Roundness
The circle is one of the most common features of annular industrial parts. The expression equations and distance functions for circles and the other geometries are given in Appendix A. Although the distance function depends on the circle center, (a, b), and the radius, R, the radius cancels out when the maximum and minimum distances are subtracted. Therefore, the variables to be optimized are the circle center coordinates, which makes roundness a two-dimensional optimization problem. We determine the center and roundness error by the least-squares method, as shown in Figure 1, and the search area of the proposed optimization algorithm is shown in Figure 2a.
2. Straightness
A spatial line has six parameters: three for position, (x_0, y_0, z_0), and three for direction, (a, b, c). According to error theory, the arithmetic mean of a measurement sequence is the closest to its actual value, so the arithmetic mean of the measured points is used as the position of the spatial line. Typically, the minimum-zone line lies close to the least-squares line. Consequently, the initial direction vector of the line is obtained by least squares, as shown in Figure 1. However, the direction of the spatial line is arbitrary, and it is often difficult to distribute the search area evenly over all directions when it is determined from the least-squares parameters. With this in mind, we align the line with the Z-axis by the Rodrigues rotation matrix, T, to reduce the optimization dimension and determine an appropriate search area (a code sketch of this alignment is given after this list). The transformation process is as follows:
P'_{ij} = T P_{ij}    (3)
T = \begin{bmatrix} \cos\theta + a^2(1-\cos\theta) & -c\sin\theta + ab(1-\cos\theta) & b\sin\theta + ac(1-\cos\theta) \\ c\sin\theta + ab(1-\cos\theta) & \cos\theta + b^2(1-\cos\theta) & -a\sin\theta + bc(1-\cos\theta) \\ -b\sin\theta + ac(1-\cos\theta) & a\sin\theta + bc(1-\cos\theta) & \cos\theta + c^2(1-\cos\theta) \end{bmatrix}    (4)
where θ denotes the angle between the line and the Z-axis, satisfying the following equation:
\cos(\theta) = \frac{\mathbf{n} \cdot \mathbf{z}}{|\mathbf{n}||\mathbf{z}|}    (5)
The search space of the position parameter can then be centered on the projection point of the centroid in the XY-plane. The search space of the direction parameter can be centered on the Z-axis. Then, the line expression can be simplified as:
\frac{X - x_0}{p} = \frac{Y - y_0}{q} = \frac{Z}{1}    (6)
Through the above process, the straightness evaluation becomes a four-dimensional optimization problem, and the search area is shown in Figure 2b.
3. Cylindricity
Many machine parts have a cylindrical geometry. Compared to roundness, cylindricity takes into account both the axial and radial directions. The distance function of a cylinder is f(P_i, x_0, y_0, z_0, a, b, c) (given in Appendix A), including the axis position, (x_0, y_0, z_0), the axis direction, (a, b, c), and the cylinder radius, R. As with the circle, R can be disregarded in deviation-zone evaluation. Therefore, cylindricity evaluation is essentially the determination of the position and direction of the cylinder axis. It is a six-dimensional optimization problem similar to the spatial line, so the dimension-reduction method used for spatial lines can also be applied to cylindricity.
First, normal estimation [40] is performed on the cylindrical surface's point cloud. The obtained unit normals are regarded as a new point cloud. Subsequently, normal estimation is performed again on this normal point cloud to obtain the initial axis direction, n(a, b, c), of the cylinder. As with straightness, by aligning the initial axis with the Z-axis via Equations (3)–(5), the control variables can be reduced from six to four, and the search area can be distributed equally in each direction. The process is also demonstrated in Figure 1 and Figure 2b.
4. Flatness
The distance function of the plane is f(P_i, a, b, c, d), where d is related to the plane position. Since the deviation zone is a relative distance, the parameter d has no effect, and flatness evaluation is a three-dimensional optimization problem. We estimate the plane normal, n(a, b, c), using the least-squares method, and the algorithm's search area is then determined by applying the Rodrigues rotation matrix, as shown in Figure 1 and Figure 2c.
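As referenced in the straightness item above, the Z-axis alignment of Equations (3)–(5) can be sketched as follows. This is a minimal Python/NumPy sketch of the standard Rodrigues construction (with the rotation axis taken as the cross product of the estimated direction and the Z-axis); it is not code released by the authors, and the function and variable names are illustrative.

```python
import numpy as np

def align_with_z(points, direction):
    """Rotate a point cloud so that the estimated direction maps onto the Z-axis.

    Follows Eqs. (3)-(5): theta is the angle between the direction n and the
    Z-axis, and the Rodrigues matrix T rotates about the axis n x z.
    """
    P = np.asarray(points, dtype=float)
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])

    axis = np.cross(n, z)
    s = np.linalg.norm(axis)              # sin(theta)
    c = float(np.dot(n, z))               # cos(theta), Eq. (5)
    if s < 1e-12:                         # direction already (anti)parallel to Z
        return P if c > 0 else P @ np.diag([1.0, -1.0, -1.0])

    a, b, k = axis / s                    # components of the unit rotation axis
    K = np.array([[0.0, -k, b],
                  [k, 0.0, -a],
                  [-b, a, 0.0]])          # skew-symmetric cross-product matrix
    T = np.eye(3) + s * K + (1.0 - c) * (K @ K)   # Rodrigues matrix, Eq. (4)
    return P @ T.T                        # applies P' = T P to every point, Eq. (3)
```

After this alignment, only the position offsets (x_0, y_0) and the small direction components (p, q) of Equation (6) remain as optimization variables for lines and cylinder axes.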

2.2. Overview of HHO

HHO is a population-based optimization algorithm that mimics the cooperative behavior of Harris hawks chasing prey (in most cases, rabbits) in nature. In the absence of prey, the hawks randomly change position until prey is found. When a rabbit is detected, the hawks choose different besiegement strategies, depending on the dynamic nature of the environment and the prey's escape pattern. In a switching tactic, the best hawk (the leader) swoops at the prey and then withdraws, and the chase is continued by another member of the party. By means of this tactic, the detected rabbit is chased to exhaustion, resulting in a successful hunt. The six stages of HHO are plotted in Figure 3, and the specific steps and mathematical model of HHO are described in the following subsections.

2.2.1. Exploration Phase

In this phase, Harris hawks will choose two strategies to move with equal probability: one is to move based on the positions of other family members; the other is to perch on random tall trees. The process is modeled as follows:
X(t+1) = \begin{cases} X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X(t) \right|, & q \ge 0.5 \\ \left( X_{prey}(t) - X_{ave}(t) \right) - r_3 \left( r_4 (UB - LB) + LB \right), & q < 0.5 \end{cases}    (7)
where X(t+1) and X(t) represent the position vectors of the search agent in the (t+1)th and tth iterations, respectively, and each dimension represents a control variable; q, r_1, r_2, r_3, and r_4 are random variables inside (0, 1), which are updated in each iteration; X_rand(t) is the position vector of a random individual; X_prey(t) denotes the rabbit's position vector, which is the best agent; LB and UB are the lower and upper bounds of the control variables; and X_ave(t) is the average position vector of the current search agents, which is calculated using the following equation:
X_{ave}(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t)    (8)
where N denotes the total number of hawks and X_i(t) indicates the location of each hawk in iteration t.
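For illustration, the exploration update of Equations (7) and (8) can be sketched as below. This is a Python/NumPy sketch with illustrative names, not the reference implementation; lb and ub are assumed to be NumPy arrays of the variable bounds.

```python
import numpy as np

def hho_exploration(X, X_prey, lb, ub, rng=np.random.default_rng()):
    """One exploration move for every hawk, Eqs. (7)-(8).

    X      -- (N, D) positions of the hawks
    X_prey -- (D,) best position found so far (the rabbit)
    lb, ub -- (D,) lower and upper bounds of the control variables
    """
    N, D = X.shape
    X_ave = X.mean(axis=0)                               # Eq. (8)
    X_new = np.empty_like(X)
    for i in range(N):
        q, r1, r2, r3, r4 = rng.random(5)
        X_rand = X[rng.integers(N)]                      # a randomly chosen hawk
        if q >= 0.5:                                     # perch on a random tall tree
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:                                            # perch based on family and prey
            X_new[i] = (X_prey - X_ave) - r3 * (r4 * (ub - lb) + lb)
    return np.clip(X_new, lb, ub)
```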

2.2.2. Transition from Exploration to Exploitation

The prey’s energy decreases over time during escape. HHO can transition from exploration to exploitation and choose different exploitation strategies based on the prey’s escape energy. The prey’s escape energy is modeled as a time-varying stochastic parameter as follows:
E = 2 E_0 \left( 1 - \frac{t}{T} \right)    (9)
where E indicates the remaining energy of the rabbit during the escape process, T denotes the maximum number of iterations, and E_0 is the initial state of its energy, generated randomly inside the interval (−1, 1) in each iteration. Thus, if |E| ≥ 1, the rabbit has enough energy to escape, so the HHO will perform diverse exploration operations, and if |E| < 1, the rabbit is weak, so the algorithm will try to exploit the neighborhood of the solutions.
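In code, the escaping energy and the resulting phase switch amount to a few lines. The following Python/NumPy sketch uses illustrative names and assumes t and T are the current and maximum iteration counts.

```python
import numpy as np

rng = np.random.default_rng()

def escaping_energy(t, T, rng=rng):
    """Eq. (9): E = 2*E0*(1 - t/T), with E0 drawn anew from (-1, 1) each iteration."""
    E0 = rng.uniform(-1.0, 1.0)
    return 2.0 * E0 * (1.0 - t / T)

# Phase switch: |E| >= 1 -> exploration (Section 2.2.1); |E| < 1 -> exploitation (Section 2.2.3).
E = escaping_energy(t=10, T=500)
phase = "exploration" if abs(E) >= 1.0 else "exploitation"
```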

2.2.3. Exploitation Phase

For this phase, according to the escape behaviors of the prey and the chasing strategies of the Harris hawks, four possible strategies are proposed for HHO to model the attacking stage. Let r be the chance that the prey escapes successfully (r < 0.5) or unsuccessfully (r ≥ 0.5).
1. Soft Besiegement
When r ≥ 0.5 and |E| ≥ 0.5, the prey still has enough energy to escape dangerous situations. At this time, the hawks perform a soft besiegement to continuously exhaust the rabbit's energy and prevent it from making random misleading jumps by encircling it softly. If the jump strength of the rabbit is denoted as J = 2(1 − r_5), where r_5 is a random number inside (0, 1), this behavior can be modeled according to the following rules:
X(t+1) = \Delta X(t) - E \left| J X_{prey}(t) - X(t) \right|    (10)
\Delta X(t) = X_{prey}(t) - X(t)    (11)
where \Delta X(t) is the difference between the prey's position vector and the current location in iteration t.
2. Hard Besiegement
When r ≥ 0.5 and |E| < 0.5, the intended prey has exhausted its energy, and the hawks finally perform the surprise pounce. In this situation, the current position is updated using Equation (12):
X(t+1) = X_{prey}(t) - E \left| \Delta X(t) \right|    (12)
3. Soft Besiegement with Progressive Rapid Dives
When r < 0.5, the prey has a chance to escape successfully, and the hawks adopt the chasing strategies of soft or hard besiegement, but in a more intelligent manner than in the previous cases. By utilizing the concept of the Levy flight (LF), the real zigzag deceptive motions of the prey are mimicked, and the hawks progressively adjust their locations and directions through rapid dives. When the prey has sufficient energy (|E| ≥ 0.5), the process is expressed as follows:
X(t+1) = \begin{cases} Y, & \text{if } f(Y) < f(X(t)) \\ Z, & \text{if } f(Z) < f(X(t)) \end{cases}    (13)
Y = X_{prey}(t) - E \left| J X_{prey}(t) - X(t) \right|    (14)
Z = Y + S \times LF(D)    (15)
where D is the dimension of the problem, S is a random vector of size 1 × D , and LF is the Levy flight function, which is modeled as follows:
LF(D) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \quad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi \beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta - 1}{2}}} \right)^{1/\beta}, \quad \beta = 1.5    (16)
where u and v are random numbers in the range (0, 1).
4. Hard Besiegement with Progressive Rapid Dives
Similarly, when |E| < 0.5, the prey does not have enough energy to escape, and the rapid dive strategy with LF is modeled as follows:
X(t+1) = \begin{cases} Y, & \text{if } f(Y) < f(X(t)) \\ Z, & \text{if } f(Z) < f(X(t)) \end{cases}    (17)
Y = X_{prey}(t) - E \left| J X_{prey}(t) - X_{ave}(t) \right|    (18)
Z = Y + S \times LF(D)    (19)
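The Levy flight of Equation (16) and the four besiege strategies of Equations (10)–(19) can be combined into a single update routine for one hawk. The sketch below (Python/NumPy) uses illustrative names and is not the authors' code; f is the objective being minimized, and the uniform draws for u and v follow the statement after Equation (16).

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5, rng=np.random.default_rng()):
    """Levy flight step LF(D) of Eq. (16)."""
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2)) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim)                    # u, v drawn from (0, 1) as stated in the text
    v = rng.random(dim)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)

def hho_exploitation(x, X_prey, X_ave, E, f, rng=np.random.default_rng()):
    """One exploitation move for a single hawk, Eqs. (10)-(19).

    x, X_prey, X_ave -- current, best, and mean positions (D,)
    E                -- escaping energy from Eq. (9)
    f                -- objective function (the deviation zone to be minimized)
    """
    D = x.size
    r = rng.random()                       # chance of a successful escape
    J = 2.0 * (1.0 - rng.random())         # jump strength of the rabbit
    if r >= 0.5 and abs(E) >= 0.5:         # soft besiege, Eqs. (10)-(11)
        return (X_prey - x) - E * np.abs(J * X_prey - x)
    if r >= 0.5:                           # hard besiege, Eq. (12)
        return X_prey - E * np.abs(X_prey - x)
    # progressive rapid dives with Levy flight, Eqs. (13)-(19)
    ref = x if abs(E) >= 0.5 else X_ave    # Eq. (14) uses X(t); Eq. (18) uses X_ave(t)
    Y = X_prey - E * np.abs(J * X_prey - ref)
    Z = Y + rng.random(D) * levy_flight(D, rng=rng)
    if f(Y) < f(x):
        return Y
    if f(Z) < f(x):
        return Z
    return x
```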

2.3. Overview of SSA

In the foraging behavior of the salp swarm, the group is divided into two parts, the leader and the followers. The first individual is considered the leader, and the other individuals form the main body of the chain and are called the followers. Newtonian mechanical analysis is used to model the leader’s and the followers’ movements separately. The leader position vector is updated using the following equation:
X_{1,j} = \begin{cases} F_j + c_1 \left( c_2 (UB_j - LB_j) + LB_j \right), & c_3 \ge 0.5 \\ F_j - c_1 \left( c_2 (UB_j - LB_j) + LB_j \right), & c_3 < 0.5 \end{cases}    (20)
where the dimension index j = 1, 2, ..., D; F = [F_1, F_2, ..., F_D]^T denotes the position vector of the target agent (the current best solution); c_2 and c_3 are random tuning parameters inside (0, 1); and X_{1,j} is the position of the leader in the jth dimension. The coefficient c_1 is an important factor in controlling exploration and exploitation and is calculated by means of the following equation:
c_1 = 2 e^{-\left( \frac{4t}{T} \right)^2}    (21)
The follower’s update formula is expressed as:
X_{i,j} = \frac{X_{i,j} + X_{i-1,j}}{2}, \quad i = 2, 3, \ldots, N    (22)
As the leader moves, the change in its position is transmitted step by step to each follower along the salp chain, while the leader continuously explores the space around the moving food source, F. This significantly enhances the exploration ability of SSA and enables the salp chain to catch up with the moving food source and finally complete the foraging behavior.
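One SSA iteration following Equations (20)–(22) can be sketched as follows (Python/NumPy, illustrative names, not the reference implementation); row 0 of the population is treated as the leader.

```python
import numpy as np

def ssa_step(X, F, lb, ub, t, T, rng=np.random.default_rng()):
    """One salp-chain update, Eqs. (20)-(22).

    X  -- (N, D) salp positions, row 0 is the leader
    F  -- (D,) position of the food source (current best solution)
    """
    N, D = X.shape
    c1 = 2.0 * np.exp(-(4.0 * t / T) ** 2)              # Eq. (21)
    c2, c3 = rng.random(D), rng.random(D)
    step = c1 * (c2 * (ub - lb) + lb)
    X[0] = np.where(c3 >= 0.5, F + step, F - step)      # leader, Eq. (20)
    for i in range(1, N):                               # followers, Eq. (22)
        X[i] = 0.5 * (X[i] + X[i - 1])
    return np.clip(X, lb, ub)
```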

3. The Proposed Optimizer

To solve the problem of HHO easily falling into a local optimum and having slow convergence, this paper proposes an improved HHO optimization algorithm. The IHHO focuses on two areas of improvement: exploration strategy selection and exploration capabilities. The following is a specific description of the improved HHO algorithm, and the pseudocode is given in Algorithm 1.

3.1. Exploration Based on Average Fitness

In optimization techniques, random operators are often used to determine the update strategy of a search agent. However, in HHO, strategy selection based on a random operator may conflict with the actual situation. In detail, when a hawk is very near to the prey (f(X_i) only slightly greater than f(X_prey)), it may incorrectly move to a random location because the perching chance happens to be high (q ≥ 0.5), although the correct strategy would be to perch with the other family members. When a hawk is far from the prey (f(X_i) ≫ f(X_prey)), it may incorrectly perch with the other family members because the perching chance happens to be low (q < 0.5), although the correct strategy would be to move suddenly to a random tall tree. Therefore, we replace the random operator, q, of HHO in the exploration phase with the average fitness to resolve this conflict between the two strategies. Let us define the average fitness, f_ave, of the search agents' locations as:
f_{ave} = \frac{1}{N} \sum_{i=1}^{N} f(X_i)    (23)
Then, Equation (7) becomes the following equation:
X(t+1) = \begin{cases} X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X(t) \right|, & f(X_i) < f_{ave} \\ \left( X_{prey}(t) - X_{ave}(t) \right) - r_3 \left( r_4 (UB - LB) + LB \right), & f(X_i) \ge f_{ave} \end{cases}    (24)
where f(X_i) denotes the fitness value of the individual agent and N is the number of agents.
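A sketch of this average-fitness-based exploration step (Equations (23) and (24)) is given below in Python/NumPy; names are illustrative, and the routine mirrors the structure of the baseline exploration sketch in Section 2.2.1.

```python
import numpy as np

def ihho_exploration(X, fit, X_prey, lb, ub, rng=np.random.default_rng()):
    """Average-fitness-based exploration, Eqs. (23)-(24).

    X    -- (N, D) hawk positions
    fit  -- (N,) objective values f(X_i) of the hawks
    """
    N, D = X.shape
    f_ave = fit.mean()                                   # Eq. (23)
    X_ave = X.mean(axis=0)                               # Eq. (8)
    X_new = np.empty_like(X)
    for i in range(N):
        r1, r2, r3, r4 = rng.random(4)
        X_rand = X[rng.integers(N)]
        if fit[i] < f_ave:                               # better than the average hawk
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:                                            # worse than the average hawk
            X_new[i] = (X_prey - X_ave) - r3 * (r4 * (ub - lb) + lb)
    return np.clip(X_new, lb, ub)
```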
Algorithm 1: Pseudocode of IHHO
Set the initial iteration t = 1
Initialize the random search space X_i = (x_{i1}, x_{i2}, ..., x_{iD}) for the ith hawk of the D-dimensional problem within the search boundary [LB, UB]
While t ≤ T
  A. Evaluate the fitness value, f(X_i), of all N hawks
  B. Find the best agent to be the leader and use the rest as followers
  C. Update the leader and followers using Equation (20) and Equation (26)
  D. If the SSA individual is better, replace the corresponding HHO individual and update the fitness value
  E. Calculate the average fitness, f_ave, using Equation (23)
  F. Label the best prey location as X_prey
  G. For i = 1, 2, ..., N (each hawk)
    a. Update the escaping energy E using Equation (9)
    b. If (|E| ≥ 1)
        Randomly choose one hawk location as X_rand from the search space
        Calculate the mean position vector X_ave using Equation (8)
        Update the new location using Equation (24)
      Elseif (|E| < 1)
        Generate a random escaping chance of the prey, r, in the range [0, 1]
        If (r ≥ 0.5 and |E| ≥ 0.5)
          Update the new location using Equation (10)
        Elseif (r ≥ 0.5 and |E| < 0.5)
          Update the new location using Equation (12)
        Elseif (r < 0.5 and |E| ≥ 0.5)
          Update the new location using Equation (13)
        Elseif (r < 0.5 and |E| < 0.5)
          Update the new location using Equation (17)
        End (If)
      End (If)
    End (For)
  H. Amend the search space X_i for i = 1, 2, ..., N based on the search boundaries UB and LB
  I. t = t + 1
End (While)
Return the best location X_prey

3.2. Nonlinear Inertia Weight

Note that the values of the inertia weights affect the algorithm’s efficiency. Larger inertia weights enhance the global search capability of the algorithm, and, conversely, smaller inertia weights enhance the local search capability. To solve the problem of low convergence accuracy and slow convergence of the traditional SSA algorithm, a nonlinear inertia weight is introduced in the update formula of the follower salp to evaluate the degree of interindividual influence, with values nonlinearly transformed between 0.9 and 0.4. The proposed nonlinear inertia weights are as follows:
w = \left( w_{init} - w_{end}\, k \right) e^{\frac{1}{1 + \frac{u t}{t_{\max}}} - 1}    (25)
where w_init is the initial inertia weight, w_end is the inertia weight at the maximum number of iterations, and k and u are control coefficients that regulate the range of w. After sufficient experiments, we take w_init = 0.98, w_end = 0.4, k = 0.21, and u = 11.2. The new follower update rule is:
X_{i,j} = w X_{i,j} + X_{i-1,j}, \quad i = 2, 3, \ldots, N    (26)
The addition of a nonlinear inertia weight enhances the global search ability in the early stage compared to the previous averaging strategy with a fixed weight of 0.5. It enhances the local search ability of the algorithm in the later stage and balances the exploration and exploitation of the SSA.

3.3. Hybrid SSA

Combining algorithms has become a trend in optimization research in recent years. The superior explorative ability of SSA can fill the gap in the exploration of conventional HHO. Therefore, this paper embeds the SSA with a nonlinear inertia weight into HHO to improve the diversity of hawks while retaining its inherent excellent convergence and exploitation capabilities.
Specifically, before updating the search agents through the HHO mechanism, the space around the current best agent is explored with SSA to determine whether a better agent exists; if so, the position of the HHO individual is updated to the SSA individual, and otherwise it remains unchanged. Subsequently, the individual with the smaller objective function value between X_HHO and X_SSA is selected as the prey's position vector, X_prey:
X_{prey} = \arg\min \{ f(X_{HHO}), f(X_{SSA}) \}    (27)
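The greedy embedding described above (Equation (27)) and the overall loop of Algorithm 1 can be sketched as follows in Python/NumPy. The helpers ssa_step, ihho_exploration, and hho_exploitation refer to the sketches given earlier; for brevity, ssa_step uses the baseline follower rule of Equation (22), whereas IHHO would substitute the nonlinear-weight rule of Equation (26). All names are illustrative assumptions, and this is not the authors' implementation.

```python
import numpy as np

def hybrid_ssa_update(X, fit, X_prey, f, lb, ub, t, T, rng=np.random.default_rng()):
    """Greedy SSA embedding and selection of X_prey, Eq. (27)."""
    X_ssa = ssa_step(X.copy(), X_prey.copy(), lb, ub, t, T, rng=rng)
    fit_ssa = np.array([f(x) for x in X_ssa])
    better = fit_ssa < fit                      # keep an SSA individual only if it improves
    X[better], fit[better] = X_ssa[better], fit_ssa[better]
    best = int(np.argmin(fit))
    return X, fit, X[best].copy()               # new population, fitness values, and X_prey

def ihho(f, lb, ub, n_hawks=30, max_iter=500, seed=None):
    """Minimal IHHO driver following Algorithm 1 (a sketch; lb, ub are NumPy bound arrays)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_hawks, lb.size)) * (ub - lb)
    fit = np.array([f(x) for x in X])
    X_prey = X[np.argmin(fit)].copy()
    for t in range(1, max_iter + 1):
        # Steps B-D: explore around the best agent with SSA and keep improvements
        X, fit, X_prey = hybrid_ssa_update(X, fit, X_prey, f, lb, ub, t, max_iter, rng=rng)
        # Steps E-G: energy-driven HHO update, Eq. (9)
        X_ave = X.mean(axis=0)
        for i in range(n_hawks):
            E = 2.0 * rng.uniform(-1.0, 1.0) * (1.0 - t / max_iter)
            if abs(E) >= 1.0:
                X[i] = ihho_exploration(X, fit, X_prey, lb, ub, rng=rng)[i]
            else:
                X[i] = hho_exploitation(X[i], X_prey, X_ave, E, f, rng=rng)
        X = np.clip(X, lb, ub)                  # Step H: amend to the search boundary
        fit = np.array([f(x) for x in X])
        if fit.min() < f(X_prey):
            X_prey = X[np.argmin(fit)].copy()
    return X_prey, float(f(X_prey))
```

For the deviation-zone problems of Section 2.1, f would wrap the deviation_zone sketch (with the measured points and the appropriate distance function bound), and lb and ub would describe the search areas of Figure 2.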

4. Performance Evaluation of the IHHO Algorithm

4.1. Benchmark Functions and Compared Algorithms

In this section, the proposed improved HHO algorithm (IHHO) is investigated using a set of 23 diverse classical benchmark functions from [37]. The benchmark functions can be divided into three categories: unimodal (of which there are seven), multimodal with varied dimensions (of which there are six), and multimodal with fixed dimensions (of which there are ten). The unimodal sets have a globally unique solution suitable for revealing the optimizer’s exploitation capabilities, whereas the multimodal sets have multiple optima that disclose the explorative capability and local optimum avoidance potentials of the proposed optimizer. The mathematical descriptions of the benchmark functions are shown in Table 1, Table 2 and Table 3.
To investigate the performance of IHHO, in addition to comparison with traditional HHO, other well-recognized swarm intelligence methods, such as PSO [41], GA [42], DE [43], TLBO [44], ABC [45], CS [46], WOA [47], and SSA [38], were also compared. The quantitative analysis included the average value and standard deviation (std. dev.), and the qualitative analysis included the prey position, search history, trajectory, diversity history, average fitness history, boxplots, and convergence curves. Furthermore, the nonparametric statistical results of the Wilcoxon signed-rank test and Friedman test were introduced to detect substantial differences between optimizers. The significance level was set at 0.05. Based on the p-values, the Wilcoxon signed-rank test categorized the IHHO results as significantly better (+), equal (=), or significantly worse (−). Further statistical comparisons were made by applying the Friedman test for average ranking performance (expressed as ARV).

4.2. Experimental Setup

In this study, the following control variables were adopted: the maximum number of iterations was 500 and the number of search agents was 30. The dimension of the function was set to 30 if it was a non-fixed problem. The parameter settings used for the various optimization algorithms are reported in Table 4. Every method was run 30 times independently to avoid the effect of randomness, using MATLAB R2016a on a Windows 10 64-bit machine with an Intel(R) Core(TM) i7-11800H CPU @ 2.30 GHz and 16 GB of RAM.
The trajectory curve represents the change of the first agent in the first dimension during 500 iterations. It can be observed from Figure 4 that the curve reached the optimal solution after oscillations in the initial iteration, which reveals the exploration behavior of the algorithm. For complex functions, the fluctuation will be correspondingly more significant.
The average fitness is a measure of the collaborative behavior of the hawk. The average fitness in Figure 4 and Figure 5 decreases with iterations, which indicates that all hawks update to a better position with an increasing number of generations.
The diversity history reveals the transition between the exploration and exploitation of the search agents. In this paper, diversity is calculated from the Euclidean distances between the N hawks. If the ith agent of the D-dimensional problem is represented as X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}), then at a specific iteration t, the diversity is calculated as:
\mathrm{diversity} = \sum_{i=1}^{N} \sum_{j=1}^{N} \sqrt{ \sum_{k=1}^{D} \left( x_{i,k}(t) - x_{j,k}(t) \right)^2 }    (28)
As can be seen from the diversity history diagrams in Figure 4 and Figure 5, there is more diversity in the initial stage than in the later stage of the optimization algorithm. The IHHO algorithm performs more exploration in the initial stage, while in the later stages it performs more exploitation. Moreover, the curve tends to zero as the iterations proceed, revealing that the proposed algorithm strikes a good balance between exploration and exploitation.
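In code, the diversity of Equation (28) is simply the sum of all pairwise Euclidean distances between hawks; a minimal Python/NumPy sketch (illustrative name) is given below.

```python
import numpy as np

def swarm_diversity(X):
    """Sum of pairwise Euclidean distances between all hawks, Eq. (28)."""
    diff = X[:, None, :] - X[None, :, :]          # (N, N, D) pairwise differences
    return np.sqrt((diff ** 2).sum(axis=-1)).sum()
```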

4.3. Comparison with Conventional Swarm-Based Algorithms

In this section, the statistical results of the conventional swarm-based algorithms and the proposed IHHO for the 23 benchmark problems are presented in Table 5, in terms of average values and standard deviations. The number of dimensions for all problems was 30, except for the fixed-dimensional multimodal problems f14–f23. The best values are in bold in Table 5, while their statistical significance can be observed in Table 6. In addition, Figure 6 and Figure 7 show the boxplots and convergence curves for unimodal functions (f1, f4), multimodal benchmark functions with varied dimensions (f8, f9), and multimodal benchmark functions with fixed dimensions (f15, f21) in the repeated experiments.
From the statistical results listed in Table 5, the proposed IHHO algorithm had a significant advantage over the other algorithms in terms of average values and standard deviations. For most functions, IHHO was able to find the best solutions, and often the optimal ones, except for f6, f8, and f20. For f6, the performance of IHHO was worse than that of PSO and SSA but far better than that of the traditional HHO algorithm. For f8, the results were better than those of the other nine algorithms and slightly inferior to those of the HHO algorithm, while for f20, IHHO also generated the best solution. From the perspective of standard deviations, those of IHHO were the lowest for 15 functions, and for f9, f10, and f11 they were 0. Although the standard deviations of IHHO were poorer than those of DE, CS, and TLBO for f16–f19 when the same average values were obtained, it outperformed the other algorithms. Meanwhile, for f14, IHHO ranked second only to the best method. Therefore, IHHO can achieve satisfactory solutions with guaranteed accuracy and stability.
According to the p-values of the Wilcoxon rank-sum tests used to analyze the significant differences between paired algorithms in Table 6, the performance of IHHO showed significant positive differences compared with the other algorithms on the 23 benchmark functions, except against DE, CS, and TLBO. Although the statistical results for IHHO were significantly worse than those for DE, CS, and TLBO for f16–f19, the main reason was the difference in the standard deviations. As analyzed before, IHHO was superior to the remaining methods with respect to standard deviation, so it can still be considered that IHHO obtains high-quality solutions. Additionally, from the overall statistical results of the Wilcoxon rank-sum tests for all functions, in the worst case (against TLBO) IHHO produced 17 significantly better, 1 equal, and 5 significantly worse results, and in the best case (against GA) IHHO succeeded overwhelmingly on all benchmark functions. From the statistical results of the Friedman test, the best ARV of 1.61 was obtained by IHHO, which is consistent with the Wilcoxon rank-sum test results; IHHO was far superior to the second-placed TLBO, whose average rank was 4.04. Therefore, it can be concluded that the IHHO algorithm is an improvement on the HHO algorithm with considerable advantages over the other nine competitive swarm-based algorithms.
The boxplot diagrams of the classical test functions are shown in Figure 6. As can be seen from the boxplots, IHHO consistently outperformed or equaled the other optimization algorithms, while HHO underperformed for the non-scalable function f23. In addition, the TLBO algorithm also showed strong consistency. On average, IHHO showed results comparable to those of other optimization algorithms using the boxplot representation.
The convergence curves for six classical benchmark functions are presented in Figure 7. Based on the observation, IHHO ranked first for f1, f7, f10, and f12 and performed the same as TLBO for f23. For the test function f14, IHHO ranked second with TLBO and WOA—worse than CS, DE, and ABC, but better than the other algorithms. Regarding the overall performance of IHHO for 23 benchmark problems, which are combinations of unimodal and multimodal problems designed to test exploration and exploitation capabilities, it can be stated that IHHO can be used for function optimization.

4.4. Discussion

In this section, the effectiveness of the improved HHO algorithm is verified by 23 benchmark functions. It is important to note that all the experiments were executed under the same conditions. First, a qualitative analysis of the IHHO algorithm was performed. By analyzing the eight benchmark functions in five aspects, namely, search history, prey position, trajectory, average fitness value, and diversity, it was demonstrated that the IHHO algorithm can balance exploration and exploitation, thus avoiding falling into local solutions and finding the optimal value. Hence, IHHO can perform the optimization search for complex nonlinear optimization problems. Second, to comprehensively assess the advantages of the proposed algorithm, the IHHO algorithm was compared with several swarm-based methods in six aspects: average values, standard deviations, Wilcoxon rank-sum tests, Friedman tests, boxplot diagrams, and convergence curves. The comparative outcomes of all cases revealed that the developed IHHO optimizer, which fuses the average fitness exploration strategy and the nonlinear inertia weight SSA algorithm, obtained better overall performance and converged faster than the alternatives.
Accordingly, the proposed optimizer significantly enhances the optimization capability compared to several other classical optimization algorithms. This is because the SSA mechanism makes the search agents better diversified while taking full advantage of the excellent intensification capability of the original HHO algorithms. Although other variants of HHO embedded within the SSA mechanism are available, the exploration strategy based on the average fitness value and the introduction of nonlinear inertia weights described in this paper is innovative and further enhances the coordination of intensification and diversification, with excellent results. However, similar to other swarm-based optimizers, there are also some limitations to the proposed optimizer. First of all, it may expend more time on optimization because of the addition of the SSA exploration mechanism. Second, the range of nonlinear inertia weights may need to be adjusted in some cases. Therefore, there is a need to harmonize efficiency and accuracy when solving problems with real-time requirements.

5. The Application of IHHO in Form Deviation-Zone Evaluation

Traditional form deviation-zone evaluation suffers from the problems of difficulty in generating solutions, poor generality, and lack of solution accuracy, while other metaheuristic intelligent optimization algorithms have a wide variety of algorithms, each containing many variants, and each having its own advantages and disadvantages. Therefore, the goal of this study was to find an algorithm with strong global optimization capability, less parameter adjustment, and high accuracy for error evaluation.
It has been shown that the HHO algorithm has fewer parameters, is simple in principle, is more exploratory and adaptable in global optimization, and outperforms many well-known intelligent optimization methods, such as PSO, GWO, CS, DE, and WOA. Therefore, its application to form deviation-zone evaluation satisfies the requirement of fewer parameter adjustments and has some advantages over other methods in terms of optimization capability and optimization accuracy. Although there are still problems of early convergence, poor optimization accuracy, and weak global search capability, they have been improved by various measures.

5.1. Comparison of Data in the Literature

To evaluate the availability of IHHO in form deviation-zone evaluation, we benchmarked the proposed IHHO against data reported in the literature. The flowchart of IHHO applied to the deviation-zone evaluation problem is shown in Figure 8. The population size was set to 30, the maximum number of iterations was 500, and the optimization dimension and search area for each problem are given in Section 2.1. The algorithm was run 30 times independently in MATLAB R2016a, and the average result was taken as the corresponding form error. Table 7 shows the evaluation results of the algorithms reported in the literature and those obtained by IHHO. The results list the number of points, the reported minimum-zone errors, the IHHO-evaluated minimum-zone errors, the least-squares evaluation errors, and the relative differences. In addition, the convergence curves of three randomly selected experiments are shown in Figure 9 to visualize the working process of the IHHO algorithm in deviation-zone evaluation.
From Table 7, the average evaluation results of the IHHO algorithm for the four types of deviation zone are more accurate than or equal to the minimum-zone errors reported in the literature, except for example 2 of flatness, and are significantly improved compared to the least-squares method. In particular, the straightness evaluation error of example 2 improved by 25.65% compared to the reported results. As can be seen from the convergence diagrams in Figure 9, the optimal solution was found within about 50 iterations for most datasets, except for the roundness error evaluation, which reached convergence in approximately 150 iterations. The trends were essentially the same for the three randomly selected experiments. Therefore, it can be tentatively concluded that the proposed IHHO optimization algorithm works well in deviation-zone evaluation and can meet the needs of high-precision evaluation in engineering.

5.2. Engineering Applications

To further validate the advantages of the IHHO algorithm applied to deviation-zone evaluation, the surface of a seamless steel tube was measured by a hexagon image-probe hybrid measuring device, MSOC-03-2C. With the probe system, eight sets of section data and corresponding center coordinates were collected to assess the cylindricity and axis straightness of the seamless steel tube. With the vision system, the coordinates of the cross-section of the steel tube hole were collected, filtered, and downsampled for roundness evaluation. Furthermore, the measuring surface of a 10 mm gauge block was collected with a probe for flatness evaluation. The experimental equipment and objects are shown in Figure 10.
Based on the deviation-zone model developed in Section 2.1, the above acquisition data were evaluated using the IHHO, HHO, SSA, SSA&HHO [34], and least-squares methods. To thoroughly verify the convergence property of the algorithm, the maximum number of iterations T = 500 and N = 30. The experiments were repeated 30 times for each data group to remove accidental errors. The results are summarized in Table 8. The boxplots and average convergence curves for the different algorithms relative to the number of iterations are plotted in Figure 11 and Figure 12. The error maps of the gauge block surface and the seamless tube surface are shown in Figure 13.
As shown in Table 8, in the cylindricity evaluation, the error was 0.1013 mm, which is much higher than that of the other algorithms. For straightness, the SSA&HHO and IHHO algorithms were more effective, while SSA and HHO were poor. HHO performed the worst in the roundness evaluation, while the rest of the algorithms performed similarly. In the flatness evaluation, all algorithms achieved the same results. According to the boxplot diagram in Figure 11, the IHHO fluctuations were lower than those of the other methods in 30 independent runs, while the rest of the algorithms showed performance differences when evaluating different form errors.

6. Conclusions

This paper proposes an improved Harris hawks optimization algorithm based on an average-fitness exploration strategy and a salp swarm algorithm with nonlinear inertia weights. The use of the average fitness in the exploration phase provides a solution to the strategy selection conflict caused by randomness. Furthermore, the introduction of nonlinear inertia weights further enhances the global search capability of the SSA, enabling it to give full play to its advantages when embedded in HHO and compensating for the shortcomings of the HHO exploration phase. Although the computational complexity of the IHHO algorithm is slightly higher than that of HHO, IHHO converges faster in terms of the number of iterations and function evaluation results. The IHHO algorithm was thoroughly compared with well-established optimization algorithms, and the results showed that it outperformed the other optimization algorithms. With respect to the engineering problem, the IHHO algorithm was compared with other algorithms using data reported in the literature and collected data to verify its effectiveness and superiority in determining form errors. The results show that IHHO is applicable to the deviation-zone evaluation problem and can give accurate and reliable form error evaluation results. However, this paper does not deal with free-form surfaces without a specific functional expression. Therefore, a promising direction for future studies would be to evaluate the deviation zone based on the CAD model and the collected discrete points.

Author Contributions

G.L.: supervision, project administration, writing—review and editing. Z.L.: methodology, writing—original draft, software. S.S.: software, validation. Y.Y.: software, validation, writing—review and editing. X.L.: writing—review and editing, project administration. W.Y.: validation, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (grant number 51275431), the Sichuan Science and Technology Program (grant number 2021YFN0021), and the Sichuan Province Information Application Support Software Engineering Technology Research Center Open Project (grant number 2021RJGC-Z01).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

1. Circle
The implicit expression of a circle and the corresponding point-to-circle distance function are given in Equations (A1) and (A2), respectively, where (a, b) denotes the center of the circle and R denotes its radius.
(X - a)^2 + (Y - b)^2 - R^2 = 0    (A1)
f(P_i, U(a, b, R)) = \sqrt{(x_i - a)^2 + (y_i - b)^2} - R    (A2)
2. Line
The implicit expression of the straight line and the corresponding distance function are given in Equations (A3) and (A4), respectively, where (x_0, y_0, z_0) is the position and (a, b, c) is the direction vector of the spatial line.
\frac{X - x_0}{a} = \frac{Y - y_0}{b} = \frac{Z - z_0}{c}    (A3)
f(P_i, U(a, b, c, x_0, y_0, z_0)) = \sqrt{ \frac{ \left[ b(x_i - x_0) - a(y_i - y_0) \right]^2 + \left[ c(x_i - x_0) - a(z_i - z_0) \right]^2 + \left[ c(y_i - y_0) - b(z_i - z_0) \right]^2 }{ a^2 + b^2 + c^2 } }    (A4)
3. Plane
The implicit expression of the plane and the corresponding distance function at a point are given in Equations (A5) and (A6), respectively, where a, b, and c denote the components of the plane's normal vector and d fixes its position.
a X + b Y + c Z + d = 0    (A5)
f(P_i, U(a, b, c, d)) = \frac{a x_i + b y_i + c z_i + d}{\sqrt{a^2 + b^2 + c^2}}    (A6)
4. Cylinder
The implicit expression of the cylinder and the corresponding distance function at a point are given in Equations (A7) and (A8), respectively, where (x_0, y_0, z_0) denotes the axis position of the cylinder, (a, b, c) denotes the direction vector of the axis, and R denotes the radius of the cylinder.
(X - x_0)^2 + (Y - y_0)^2 + (Z - z_0)^2 - R^2 = \frac{\left[ a(X - x_0) + b(Y - y_0) + c(Z - z_0) \right]^2}{a^2 + b^2 + c^2}    (A7)
f(P_i, U(a, b, c, x_0, y_0, z_0, R)) = \sqrt{ \frac{ \left[ b(x_i - x_0) - a(y_i - y_0) \right]^2 + \left[ c(x_i - x_0) - a(z_i - z_0) \right]^2 + \left[ c(y_i - y_0) - b(z_i - z_0) \right]^2 }{ a^2 + b^2 + c^2 } } - R    (A8)
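For reference, the distance functions of Equations (A2), (A4), (A6), and (A8) can be sketched as follows (Python/NumPy, illustrative names); these are the f(P_i, U) routines that the minimum-zone objective in Section 2.1 evaluates.

```python
import numpy as np

def dist_circle(p, a, b, R):
    """Eq. (A2): distance from a planar point to a circle with centre (a, b), radius R."""
    return np.hypot(p[0] - a, p[1] - b) - R

def dist_line(p, p0, d):
    """Eq. (A4): distance from a 3D point to the line through p0 with direction d."""
    d = np.asarray(d, dtype=float)
    return np.linalg.norm(np.cross(np.asarray(p, dtype=float) - p0, d)) / np.linalg.norm(d)

def dist_plane(p, n, d):
    """Eq. (A6): signed distance from a 3D point to the plane n.x + d = 0."""
    n = np.asarray(n, dtype=float)
    return (np.dot(n, p) + d) / np.linalg.norm(n)

def dist_cylinder(p, p0, d, R):
    """Eq. (A8): distance from a 3D point to a cylinder with axis (p0, d) and radius R."""
    return dist_line(p, p0, d) - R
```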

References

  1. Gohari, H.; Barari, A. A quick deviation zone fitting in coordinate metrology of NURBS surfaces using principle component analysis. Measurement 2016, 92, 352–364. [Google Scholar] [CrossRef]
  2. Sun, C.; Wang, H.; Liu, Y.; Wang, X.; Wang, B.; Li, C.; Tan, J. A cylindrical profile measurement method for cylindricity and coaxiality of stepped shaft. Int. J. Adv. Manuf. Technol. 2020, 111, 2845–2856. [Google Scholar] [CrossRef]
  3. Yang, Y.; Dong, Z.; Meng, Y.; Shao, C. Data-Driven Intelligent 3D Surface Measurement in Smart Manufacturing: Review and Outlook. Machines 2021, 9, 13. [Google Scholar] [CrossRef]
  4. Barari, A. Estimation of Detailed Deviation Zone of Inspected Surfaces. In Advanced Mathematical and Computational Tools in Metrology and Testing IX; World Scientific: Singapore, 2012; pp. 18–26. [Google Scholar]
  5. Jamiolahmadi, S.; Barari, A. Study of detailed deviation zone considering coordinate metrology uncertainty. Measurement 2018, 126, 433–457. [Google Scholar] [CrossRef]
  6. Mian, S.H.; Al-Ahmari, A.; Alkhalefah, H. Analysis and Realization of Sampling Strategy in Coordinate Metrology. Math. Probl. Eng. 2019, 2019, 9574153. [Google Scholar] [CrossRef] [Green Version]
  7. Abenhaim, G.N.; Tahan, A.S.; Desrochers, A.; Maranzana, R. A Novel Approach for the Inspection of Flexible Parts Without the Use of Special Fixtures. J. Manuf. Sci. Eng. 2011, 133, 011009. [Google Scholar] [CrossRef]
  8. Gohari, H.; Barari, A. Finding optimal correspondence sets for large digital metrology point clouds using anisotropic diffusion analogy. Int. J. Comput. Integr. Manuf. 2022, 35, 462–482. [Google Scholar] [CrossRef]
  9. ISO 1101; Geometrical Product Specification (GPS)-Tolerances of Form, Orientation, Location and Run Out. 2nd ed. International Organization for Standardization: Geneva, Switzerland, 2004.
  10. Wen, X.; Xia, Q.; Zhao, Y. An effective genetic algorithm for circularity error unified evaluation. Int. J. Mach. Tools Manuf. 2006, 46, 1770–1777. [Google Scholar] [CrossRef]
  11. Zhu, L.; Ding, Y.; Ding, H. Algorithm for Spatial Straightness Evaluation Using Theories of Linear Complex Chebyshev Approximation and Semi-infinite Linear Programming. J. Manuf. Sci. Eng. 2006, 128, 167–174. [Google Scholar] [CrossRef]
  12. Damodarasamy, S.; Anand SA, M. Evaluation of minimum zone for flatness by normal plane method and simplex search. IIE Trans. 1999, 31, 617–626. [Google Scholar] [CrossRef]
  13. Ding, Y.; Zhu, L.; Ding, H. A unified approach for circularity and spatial straightness evaluation using semi-definite programming. Int. J. Mach. Tools Manuf. 2007, 47, 1646–1650. [Google Scholar] [CrossRef]
  14. Weber, T.; Motavalli, S.; Fallahi, B.; Cheraghi, S. A unified approach to form error evaluation. Precis. Eng. 2002, 26, 269–278. [Google Scholar] [CrossRef]
  15. Zhu, X.; Ding, H.; Wang, M.Y. Form Error Evaluation: An Iterative Reweighted Least Squares Algorithm*. J. Manuf. Sci. Eng. 2004, 126, 535–541. [Google Scholar] [CrossRef]
  16. Dhanish, P. A simple algorithm for evaluation of minimum zone circularity error from coordinate data. Int. J. Mach. Tools Manuf. 2002, 42, 1589–1594. [Google Scholar] [CrossRef]
  17. Zhu, L.M.; Ding, H.; Xiong, Y.L. A steepest descent algorithm for circularity evaluation. Comput. Aided Des. 2003, 35, 255–265. [Google Scholar] [CrossRef]
  18. Yang, X.S. Engineering Optimization: An Introduction with Metaheuristic Applications; Wiley: Hoboken, NJ, USA, 2010. [Google Scholar]
  19. Summerhays, K.; Henke, R.; Baldwin, J.; Cassou, R.; Brown, C. Optimizing discrete point sample patterns and measurement data analysis on internal cylindrical surfaces with systematic form deviations. Precis. Eng. 2002, 26, 105–121. [Google Scholar] [CrossRef]
  20. Zheng, P.; Liu, D.; Zhao, F.; Zhang, L. An efficient method for minimum zone cylindricity error evaluation using kinematic geometry optimization algorithm. Measurement 2019, 135, 886–895. [Google Scholar] [CrossRef]
  21. Fei, L.; Guanghua, X.; Lin, L.; Qing, Z.; Dan, L. Intersecting chord method for minimum zone evaluation of roundness deviation using Cartesian coordinate data. Precis. Eng. 2015, 42, 242–252. [Google Scholar] [CrossRef]
  22. Liu, F.; Liang, L.; Xu, G.; Hou, C.; Liu, D. Measurement and evaluation of cylindricity deviation in Cartesian coordinates. Meas. Sci. Technol. 2020, 32, 035018. [Google Scholar] [CrossRef]
  23. Khlil, A.; Shi, Z.; Umar, A.; Ma, B. Improved algorithm for minimum zone of roundness error evaluation using alternating exchange approach. Meas. Sci. Technol. 2022, 33, 045003. [Google Scholar] [CrossRef]
  24. Zhuo, M.; Geng, J.; Zhong, C.; Xia, K. New accurate algorithms of circularity evaluation. Meas. Sci. Technol. 2023, 34, 025019. [Google Scholar] [CrossRef]
  25. Li, X.; Luo, H.; Li, T. A Bidirectional Algorithm for Evaluation of Straightness Error. MAPAN 2023, 1–7. [Google Scholar] [CrossRef]
  26. Ye, R.F.; Cui, C.C.; Huang, F.G. Minimum Zone Evaluation of Flatness Error Using an Adaptive Iterative Strategy for Coordinate Measuring Machines Data. In Advanced Materials Research; Trans Tech Publications Ltd: Bäch, Switzerland, 2012; Volume 472, pp. 25–29. [Google Scholar]
  27. Huang, Q.; Mei, J.; Yue, L.; Cheng, R.; Zhang, L.; Fang, C.; Li, R.; Chen, L. A simple method for estimating the roundness of minimum zone circle. Mater. Und Werkst. 2020, 51, 38–46. [Google Scholar] [CrossRef]
  28. Liu, F.; Cao, Y.; Li, T.; Ren, L.; Zhi, J.; Yang, J.; Jiang, X. An Iterative Minimum Zone Algorithm for assessing cylindricity deviation. Measurement 2023, 213, 112738. [Google Scholar] [CrossRef]
  29. Du, C.-L.; Luo, C.-X.; Han, Z.-T.; Zhu, Y.-S. Applying particle swarm optimization algorithm to roundness error evaluation based on minimum zone circle. Measurement 2014, 52, 12–21. [Google Scholar] [CrossRef]
  30. Pathak, V.K.; Singh, A.K. Form Error Evaluation of Noncontact Scan Data Using Constriction Factor Particle Swarm Optimization. J. Adv. Manuf. Syst. 2017, 16, 205–226. [Google Scholar] [CrossRef] [Green Version]
  31. Zhang, K. Spatial straightness error evaluation with an ant colony algorithm. In Proceedings of the 2008 IEEE International Conference on Granular Computing, Hangzhou, China, 26–28 August 2008; pp. 793–796. [Google Scholar] [CrossRef]
  32. Luo, J.; Wang, Q. A method for axis straightness error evaluation based on improved artificial bee colony algorithm. Int. J. Adv. Manuf. Technol. 2014, 71, 1501–1509. [Google Scholar] [CrossRef]
  33. Luo, J.; Liu, Z.; Zhang, P.; Liu, X.; Liu, Z. A method for axis straightness error evaluation based on improved differential evolution algorithm. Int. J. Adv. Manuf. Technol. 2020, 110, 413–425. [Google Scholar] [CrossRef]
  34. Wen, X.L.; Zhu, X.C.; Zhao, Y.B. Flatness error evaluation and verification based on new generation geometrical product specification (GPS). Precis. Eng. 2012, 36, 70–76. [Google Scholar] [CrossRef]
  35. Li, G.; Xu, Y.; Chang, C.; Wang, S.; Zhang, Q.; An, D. Improved bat algorithm for roundness error evaluation problem. Math. Biosci. Eng. 2022, 19, 9388–9411. [Google Scholar] [CrossRef]
  36. Abdulshahed, A.M.; Badi, I.; Alturas, A. Efficient evaluation of flatness error from Coordinate Measurement Data using Cuckoo Search optimisation algorithm. J. Acad. Res. 2019, 37, 51. [Google Scholar]
  37. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  38. Mirjalili, S. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Liu, R.; Wang, X.; Chen, H.; Li, C. Boosted binary Harris hawks optimizer and feature selection. Eng. Comput. 2020, 37, 3741–3770. [Google Scholar] [CrossRef]
  40. Hoppe, H.; DeRose, T.; Duchamp, T. Surface reconstruction from unorganized points. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, Chicago, IL, USA, 27–31 July 1992; pp. 71–78. [Google Scholar]
  41. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  42. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  43. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  44. Rao, R.; Savsani, V.; Vakharia, D. Teaching–Learning-Based Optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  45. Karaboga, D.; Gorkemli, B.; Ozturk, C.; Karaboga, N. A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. Artif. Intell. Rev. 2014, 42, 21–57. [Google Scholar] [CrossRef]
  46. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  47. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  48. Huang, J.; Yang, R.; Ge, H.; Li, J.; Tan, J. Improved evaluation of minimum zone roundness using an optimal solution guidance algorithm. Meas. Sci. Technol. 2021, 32, 115013. [Google Scholar] [CrossRef]
  49. Radlovački, V.; Hadžistević, M.; Štrbac, B.; Delić, M.; Kamberović, B. Evaluating minimum zone flatness error using new method—Bundle of plains through one point. Precis. Eng. 2016, 43, 554–562. [Google Scholar] [CrossRef]
Figure 1. The least-squares method is used to estimate the initial parameters, and the Rodrigues rotation matrix aligns the original geometry with the Z-axis.
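As a point of reference for the alignment step in Figure 1, the following minimal sketch builds a Rodrigues rotation matrix that maps a fitted axis direction onto the Z-axis. The NumPy implementation and the variable names in the usage comment are illustrative and are not the code used in the paper.

```python
import numpy as np

def rotation_to_z(axis):
    """Rodrigues rotation matrix that maps the unit vector `axis` onto the Z-axis."""
    a = axis / np.linalg.norm(axis)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(a, z)                      # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), np.dot(a, z)
    if s < 1e-12:                           # already parallel or antiparallel to Z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = v / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])      # skew-symmetric cross-product matrix
    theta = np.arctan2(s, c)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Illustrative usage: rotate centered points so a fitted axis lies along Z
# points_aligned = (rotation_to_z(fitted_axis) @ (points - centroid).T).T
```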
Figure 2. The search area in the optimization process: (a) for circles; (b) for lines and cylinders, divided into regions of position (yellow) and regions of axial direction (orange); and (c) for planes, regions of direction only.
Figure 3. Six phases of HHO.
Figure 4. Qualitative results of unimodal classical benchmark functions f1, f3, f4, and f7.
Figure 5. Qualitative results of multimodal classical benchmark functions f9, f10, f12, and f13.
Figure 6. Boxplots of the six classical benchmark functions f1, f7, f10, f12, f14, and f23.
Figure 7. The convergence curves of the six classical benchmark functions.
Figure 8. Flowchart of IHHO-based deviation-zone evaluation.
Figure 9. The convergence curves relative to the number of iterations.
Figure 10. Data acquisition equipment and measurement objects.
Figure 11. Boxplots of the evaluation results obtained by the different algorithms for the seamless steel pipe and gauge block data.
Figure 12. The average convergence curves for the seamless steel tube and gauge block data.
Figure 13. The error maps for the seamless steel tube and gauge block surface.
Table 1. Unimodal benchmark functions.
Name | Function | Range | f_min
Sphere Function | $f_1(x)=\sum_{i=1}^{n} x_i^2$ | [−100, 100] | 0
Schwefel's Problem 2.22 | $f_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | [−10, 10] | 0
Schwefel's Problem 1.2 | $f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | [−100, 100] | 0
Schwefel's Problem 2.21 | $f_4(x)=\max_i\{|x_i|,\;1\le i\le n\}$ | [−100, 100] | 0
Generalized Rosenbrock's Function | $f_5(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | [−30, 30] | 0
Step Function | $f_6(x)=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | [−100, 100] | 0
Quartic Function | $f_7(x)=\sum_{i=1}^{n} i x_i^4+\mathrm{random}[0,1]$ | [−1.28, 1.28] | 0
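For readers reproducing the benchmark experiments, the sketch below evaluates two of the unimodal functions in Table 1 (the Sphere function f1 and the generalized Rosenbrock function f5). It is a minimal Python illustration, not the authors' implementation.

```python
import numpy as np

def sphere(x):
    """f1: Sphere function, global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2)

def rosenbrock(x):
    """f5: generalized Rosenbrock function, global minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

print(sphere(np.zeros(30)), rosenbrock(np.ones(30)))  # both print 0.0
```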
Table 2. Multimodal benchmark functions.
Name | Function | Range | f_min
Generalized Schwefel's Problem 2.26 | $f_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{|x_i|}\right)$ | [−500, 500] | −418.9829 × n
Generalized Rastrigin's Function | $f_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | [−5.12, 5.12] | 0
Ackley's Function | $f_{10}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | [−32, 32] | 0
Generalized Griewank's Function | $f_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | [−600, 600] | 0
Generalized Penalized Function 1 | $f_{12}(x)=\tfrac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a<x_i<a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | [−50, 50] | 0
Generalized Penalized Function 2 | $f_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_i+1)\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | [−50, 50] | 0
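The boundary penalty u(x_i, a, k, m) shared by f12 and f13 is the part of Table 2 that is easiest to misread, so a minimal sketch of f12 is given below; again, this is an illustration rather than the authors' code.

```python
import numpy as np

def u(x, a, k, m):
    """Boundary penalty term used by the penalized functions f12 and f13."""
    return np.where(x > a, k * (x - a) ** m,
           np.where(x < -a, k * (-x - a) ** m, 0.0))

def penalized_1(x):
    """f12: Generalized Penalized Function 1, global minimum 0 at x = (-1, ..., -1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    y = 1.0 + (x + 1.0) / 4.0
    core = (10.0 * np.sin(np.pi * y[0]) ** 2
            + np.sum((y[:-1] - 1.0) ** 2 * (1.0 + 10.0 * np.sin(np.pi * y[1:]) ** 2))
            + (y[-1] - 1.0) ** 2)
    return np.pi / n * core + np.sum(u(x, 10, 100, 4))

print(penalized_1(-np.ones(30)))  # effectively 0 (limited only by floating-point sin(pi))
```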
Table 3. Fixed-dimension multimodal benchmark functions.
Name | Function | Dimension | Range | f_min
Shekel's Foxholes Function | $f_{14}(x)=\left(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65.536, 65.536] | 1
Kowalik's Function | $f_{15}(x)=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.0003075
Six-Hump Camel-Back Function | $f_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316285
Branin Function | $f_{17}(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 10] × [0, 15] | 0.398
Goldstein–Price Function | $f_{18}(x)=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3
Hartman's Family Function 1 (N = 3) | $f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [0, 1] | −3.86
Hartman's Family Function 2 (N = 6) | $f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
Shekel's Family Function 1 (N = 5) | $f_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
Shekel's Family Function 2 (N = 7) | $f_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
Shekel's Family Function 3 (N = 10) | $f_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
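The Shekel functions f21 to f23 in Table 3 share one form and differ only in the number of summed terms. The sketch below uses the standard Shekel parameters a_i and c_i; these constants are not listed in the paper and are included here as an assumption.

```python
import numpy as np

# Standard Shekel parameters (assumed; commonly used with f21-f23).
A = np.array([[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8], [6, 6, 6, 6], [3, 7, 3, 7],
              [2, 9, 2, 9], [5, 5, 3, 3], [8, 1, 8, 1], [6, 2, 6, 2], [7, 3.6, 7, 3.6]])
C = np.array([0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5])

def shekel(x, n_terms):
    """Shekel family: f21 (n_terms = 5), f22 (n_terms = 7), f23 (n_terms = 10)."""
    x = np.asarray(x, dtype=float)
    d = x - A[:n_terms]                                  # (n_terms, 4) differences
    return -np.sum(1.0 / (np.sum(d * d, axis=1) + C[:n_terms]))

print(shekel([4, 4, 4, 4], 10))  # about -10.536, close to the listed minimum of -10.5363
```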
Table 4. Parameter settings of the various optimization algorithms.
Algorithm | Parameters
DE [43] | Scaling factor F = 0.5; crossover probability Cr = 0.9
WOA [47] | a ∈ [0, 2]; b = 1; l ∈ [−1, 1]
ABC [45] | Abandonment limit = 0.6 × D × N
CS [46] | Abandon probability pa = 0.25; step size α = 1; λ = 1.5
TLBO [44] | TFmax = 2; TFmin = 1
PSO [41] | Inertia factor = 0.3; c1 = 2; c2 = 2
GA [42] | Crossover probability = 0.8; mutation probability = 0.05
SSA [38] | Number of leaders = N/2
HHO [37] | Initial state = 2
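For convenience, the settings in Table 4 can be gathered into a single configuration object. The dictionary below merely restates the table; the key names are illustrative, and the population size N and problem dimension D are fixed per experiment.

```python
# Parameter settings from Table 4, collected as a plain dictionary for reference.
# Key names are illustrative; N (population size) and D (dimension) are set per run.
ALGORITHM_PARAMS = {
    "DE":   {"F": 0.5, "Cr": 0.9},
    "WOA":  {"a_range": (0, 2), "b": 1, "l_range": (-1, 1)},
    "ABC":  {"abandonment_limit": "0.6 * D * N"},
    "CS":   {"pa": 0.25, "alpha": 1, "lambda": 1.5},
    "TLBO": {"TF_max": 2, "TF_min": 1},
    "PSO":  {"inertia": 0.3, "c1": 2, "c2": 2},
    "GA":   {"p_crossover": 0.8, "p_mutation": 0.05},
    "SSA":  {"n_leaders": "N / 2"},
    "HHO":  {"initial_state": 2},
}
```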
Table 5. Results of IHHO and nine conventional swarm-based algorithms on 23 benchmarks.
Function | Metric | IHHO | HHO | SSA | DE | WOA | ABC | CS | TLBO | PSO | GA
f1 | Average | 9.8191E-108 | 1.4371E-92 | 1.5602E-07 | 7.7905E+01 | 2.1530E-73 | 1.1409E+02 | 7.5264E+00 | 2.5870E-89 | 1.0381E-09 | 4.1818E-01
f1 | Std. Dev. | 4.4397E-107 | 7.8627E-92 | 2.0095E-07 | 3.7708E+02 | 8.9376E-73 | 4.8459E+01 | 2.1409E+00 | 4.6942E-89 | 1.3662E-09 | 3.3787E-01
f2 | Average | 3.0532E-57 | 2.6479E-49 | 1.6734E+00 | 6.2743E-02 | 3.1615E-48 | 5.7348E+01 | 1.0723E+01 | 3.8157E-45 | 2.0393E-03 | 4.2087E+00
f2 | Std. Dev. | 9.1102E-57 | 1.4397E-48 | 1.2626E+00 | 2.1282E-01 | 1.0594E-47 | 3.9230E+01 | 4.1260E+00 | 3.1791E-45 | 9.3505E-03 | 1.7632E+00
f3 | Average | 2.5765E-85 | 2.5366E-73 | 1.8195E+03 | 6.2831E+02 | 4.4019E+04 | 6.8920E+04 | 2.2440E+03 | 1.0301E-17 | 1.8531E+02 | 1.2535E+01
f3 | Std. Dev. | 1.4091E-84 | 1.3893E-72 | 1.0367E+03 | 4.4083E+02 | 1.4134E+04 | 1.1351E+04 | 4.7799E+02 | 2.6023E-17 | 1.0574E+02 | 1.4649E+01
f4 | Average | 6.8699E-53 | 2.1653E-48 | 1.0598E+01 | 2.5240E+01 | 5.5366E+01 | 6.3423E+01 | 9.7881E+00 | 1.1145E-36 | 2.8576E+00 | 1.9379E+00
f4 | Std. Dev. | 3.4390E-52 | 1.0414E-47 | 3.1624E+00 | 6.7334E+00 | 2.8293E+01 | 5.1929E+00 | 1.9909E+00 | 8.7162E-37 | 8.5118E-01 | 4.4034E-01
f5 | Average | 6.1507E-04 | 1.9393E-02 | 2.9493E+02 | 1.0868E+04 | 2.7984E+01 | 2.4351E+06 | 4.6646E+02 | 2.5510E+01 | 4.9218E+01 | 1.1337E+02
f5 | Std. Dev. | 6.4890E-04 | 2.4361E-02 | 5.1965E+02 | 2.3958E+04 | 4.7925E-01 | 1.0799E+06 | 1.9434E+02 | 6.2797E-01 | 3.4270E+01 | 5.8954E+01
f6 | Average | 2.6096E-06 | 1.8993E-04 | 1.5566E-07 | 9.1127E+00 | 3.7682E-01 | 1.1210E+02 | 7.5379E+00 | 9.0981E-05 | 2.1010E-09 | 1.1791E+00
f6 | Std. Dev. | 4.2167E-06 | 2.9303E-04 | 2.2753E-07 | 2.3285E+01 | 2.4227E-01 | 6.2740E+01 | 3.4105E+00 | 1.9284E-04 | 3.7393E-09 | 1.2221E+00
f7 | Average | 1.5171E-04 | 1.5382E-04 | 1.6333E-01 | 6.9135E-02 | 2.5776E-03 | 8.1377E-01 | 7.3337E-02 | 1.0832E-03 | 1.7265E-02 | 1.1592E+00
f7 | Std. Dev. | 1.2809E-04 | 1.6972E-04 | 9.7234E-02 | 5.3048E-02 | 3.3112E-03 | 2.4526E-01 | 1.9549E-02 | 4.4530E-04 | 7.3532E-03 | 5.5957E-01
f8 | Average | −1.2302E+04 | −1.2554E+04 | −7.3403E+03 | −7.6939E+03 | −1.1026E+04 | −3.1224E+03 | −8.1127E+03 | −7.5624E+03 | −6.2607E+03 | −5.3371E+02
f8 | Std. Dev. | 5.9923E+02 | 3.4318E+01 | 6.3915E+02 | 1.2408E+03 | 1.6428E+03 | 8.9440E+60 | 2.5378E+02 | 9.5457E+02 | 8.7230E+02 | 3.5126E+01
f9 | Average | 0.0000E+00 | 0.0000E+00 | 4.9317E+01 | 1.3880E+02 | 3.7896E-15 | 2.5014E+02 | 1.0510E+02 | 1.1205E+01 | 3.7245E+01 | 2.3606E+01
f9 | Std. Dev. | 0.0000E+00 | 0.0000E+00 | 1.8591E+01 | 4.1840E+01 | 1.4422E-14 | 1.4647E+01 | 1.1449E+01 | 7.9283E+00 | 1.3498E+01 | 7.9342E+00
f10 | Average | 8.8818E-16 | 8.8818E-16 | 2.3523E+00 | 1.6160E+00 | 4.4409E-15 | 3.7927E+00 | 2.5517E+00 | 6.3357E-15 | 5.5614E-01 | 1.7984E+00
f10 | Std. Dev. | 0.0000E+00 | 0.0000E+00 | 1.0343E+00 | 7.8250E-01 | 2.4685E-15 | 2.2979E-01 | 4.1469E-01 | 1.8027E-15 | 7.7138E-01 | 6.2565E-01
f11 | Average | 0.0000E+00 | 0.0000E+00 | 1.7518E-02 | 2.9595E-01 | 0.0000E+00 | 2.1273E+00 | 1.0901E+00 | 0.0000E+00 | 1.4493E-02 | 2.2855E-02
f11 | Std. Dev. | 0.0000E+00 | 0.0000E+00 | 1.5344E-02 | 6.5631E-01 | 0.0000E+00 | 4.9385E-01 | 4.1013E-02 | 0.0000E+00 | 1.5665E-02 | 2.4988E-02
f12 | Average | 8.9331E-07 | 2.8996E-06 | 4.8967E-01 | 2.7740E-02 | 1.1222E-02 | 7.0510E+00 | 5.5754E-01 | 3.4633E-03 | 2.0734E-02 | 2.0734E-01
f12 | Std. Dev. | 1.0057E-06 | 4.1578E-06 | 2.8196E-01 | 5.4624E-02 | 7.4349E-03 | 1.3382E+00 | 3.0349E-01 | 1.8926E-02 | 4.2177E-02 | 1.4924E-01
f13 | Average | 1.2617E-05 | 3.0911E-05 | 8.8144E-02 | 2.0079E-02 | 1.4800E-01 | 2.9343E+00 | 8.3980E-02 | 5.2411E-02 | 6.2262E-03 | 6.9511E-02
f13 | Std. Dev. | 1.9505E-05 | 4.5579E-05 | 2.2500E-01 | 3.3155E-02 | 1.0076E-01 | 1.0096E+00 | 2.9587E-02 | 7.3284E-02 | 8.9788E-03 | 7.1892E-02
f14 | Average | 9.9800E-01 | 1.8223E+00 | 1.4273E+00 | 1.1955E+00 | 3.2893E+00 | 9.9867E-01 | 9.9800E-01 | 9.9800E-01 | 2.8411E+00 | 1.2003E+01
f14 | Std. Dev. | 4.4695E-16 | 1.4902E+00 | 9.9594E-01 | 9.1220E-01 | 3.3397E+00 | 2.4376E-03 | 4.2452E-16 | 0.0000E+00 | 2.1717E+00 | 7.9773E-01
f15 | Average | 4.0478E-04 | 4.1574E-04 | 2.8157E-03 | 2.0786E-03 | 6.1588E-04 | 1.2013E-03 | 4.2754E-04 | 1.0424E-03 | 1.7770E-03 | 2.6560E-03
f15 | Std. Dev. | 1.0016E-04 | 2.6449E-04 | 5.9575E-03 | 5.0218E-03 | 5.0040E-04 | 1.6644E-04 | 2.0271E-04 | 3.6537E-03 | 5.0656E-03 | 2.9745E-03
f16 | Average | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0044E+00
f16 | Std. Dev. | 2.5551E-14 | 4.5761E-09 | 4.3473E-14 | 6.7752E-16 | 4.3849E-09 | 1.6930E-07 | 5.0499E-16 | 6.6486E-16 | 6.6486E-16 | 1.4901E-01
f17 | Average | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01
f17 | Std. Dev. | 3.2210E-13 | 7.6822E-06 | 5.7514E-14 | 0.0000E+00 | 4.3516E-05 | 1.8938E-05 | 2.0810E-13 | 0.0000E+00 | 0.0000E+00 | 3.1302E-09
f18 | Average | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 | 3.9007E+00 | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 | 5.7000E+00 | 2.2802E+01
f18 | Std. Dev. | 3.0097E-13 | 1.5546E-06 | 1.8604E-12 | 1.3550E-15 | 4.9332E+00 | 3.1730E-05 | 1.7200E-15 | 1.3650E-15 | 1.4789E+01 | 3.0031E+01
f19 | Average | −3.8628E+00 | −3.8582E+00 | −3.8628E+00 | −3.8628E+00 | −3.8575E+00 | −3.8628E+00 | −3.8628E+00 | −3.8628E+00 | −3.8628E+00 | −3.5947E+00
f19 | Std. Dev. | 1.3697E-11 | 5.7063E-03 | 2.1958E-10 | 2.6962E-15 | 5.4941E-03 | 8.8844E-10 | 2.4643E-15 | 2.7101E-15 | 2.6962E-15 | 7.4320E-01
f20 | Average | −3.2618E+00 | −3.0860E+00 | −3.2187E+00 | −3.2348E+00 | −3.2512E+00 | −3.3216E+00 | −3.3220E+00 | −3.3170E+00 | −3.2625E+00 | −3.2467E+00
f20 | Std. Dev. | 6.1237E-02 | 1.1589E-01 | 5.3356E-02 | 5.3475E-02 | 1.1331E-01 | 2.2756E-03 | 1.5043E-07 | 2.2115E-02 | 6.0463E-02 | 5.8274E-02
f21 | Average | −1.0153E+01 | −5.3608E+00 | −7.7297E+00 | −9.9848E+00 | −8.8591E+00 | −9.7889E+00 | −1.0153E+01 | −9.7481E+00 | −5.0640E+00 | −5.5273E+00
f21 | Std. Dev. | 8.8280E-11 | 1.1795E+00 | 3.3185E+00 | 9.2244E-01 | 2.1653E+00 | 1.0294E+00 | 1.8470E-06 | 1.3854E+00 | 3.0349E+00 | 2.7987E+00
f22 | Average | −1.0403E+01 | −5.2518E+00 | −7.5807E+00 | −1.0226E+01 | −8.5132E+00 | −1.0403E+01 | −1.0403E+01 | −9.2400E+00 | −6.7222E+00 | −5.7084E+00
f22 | Std. Dev. | 7.1326E-11 | 9.1813E-01 | 3.5764E+00 | 9.7043E-01 | 2.9663E+00 | 7.3059E-08 | 2.0020E-06 | 2.3712E+00 | 3.5629E+00 | 2.8233E+00
f23 | Average | −1.0536E+01 | −5.1221E+00 | −8.7358E+00 | −9.8342E+00 | −8.1769E+00 | −1.0536E+01 | −1.0536E+01 | −1.0536E+01 | −6.2903E+00 | −4.1986E+00
f23 | Std. Dev. | 8.7812E-11 | 1.1132E-02 | 3.0954E+00 | 2.1477E+00 | 3.2227E+00 | 1.7148E-07 | 9.2141E-06 | 1.2342E-10 | 3.6204E+00 | 2.4535E+00
+/=/− | | | 16/6/1 | 17/5/1 | 18/1/4 | 21/2/0 | 22/1/0 | 16/4/3 | 17/1/5 | 17/2/3 | 23/0/0
ARV | | 1.61 | 4.26 | 6.04 | 6.09 | 5.87 | 7.35 | 5.87 | 4.04 | 6.13 | 7.74
RANK | | 1 | 3 | 6 | 7 | 4 | 9 | 5 | 2 | 8 | 10
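The ARV row is consistent with ranking the ten algorithms on each function by their average result and then averaging those ranks over the 23 functions. The sketch below computes such a summary under that assumption; it is not taken from the paper.

```python
import numpy as np
from scipy.stats import rankdata

def average_rank(mean_results):
    """mean_results: (n_functions, n_algorithms) array of per-function averages.
    Returns each algorithm's average rank (rank 1 = best on a function; ties share ranks)."""
    ranks = np.vstack([rankdata(row) for row in mean_results])
    return ranks.mean(axis=0)

# Toy illustration with 3 functions and 3 algorithms
demo = np.array([[1e-9, 1e-3, 2.0],
                 [0.0,  0.5,  0.4],
                 [3.1,  3.1,  9.9]])
print(average_rank(demo))  # approximately [1.17, 2.17, 2.67]
```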
Table 6. p-values of the Wilcoxon rank-sum test comparing IHHO with conventional algorithms for all functions.
Function | HHO | SSA | DE | WOA | ABC | CS | TLBO | PSO | GA
f1 | 3.8202E-10 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11
f2 | 5.5727E-10 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11
f3 | 1.6980E-08 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11
f4 | 5.4617E-09 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11
f5 | 1.8608E-06 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11
f6 | 4.1825E-09 | 3.4742E-10 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 5.3221E-03 | 3.0199E-11 | 3.0199E-11
f7 | 4.6427E-01 | 3.0199E-11 | 3.0199E-11 | 1.4294E-08 | 3.0199E-11 | 3.0199E-11 | 3.3384E-11 | 3.0199E-11 | 3.0199E-11
f8 | 5.3221E-03 | 3.0199E-11 | 3.0199E-11 | 6.5261E-07 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11
f9 | 1.0000E+00 | 1.2118E-12 | 1.2118E-12 | 1.6074E-01 | 1.2118E-12 | 1.2118E-12 | 5.7720E-11 | 1.2118E-12 | 1.2118E-12
f10 | 1.0000E+00 | 1.2118E-12 | 1.2118E-12 | 3.6292E-09 | 1.2118E-12 | 1.2118E-12 | 4.6350E-13 | 1.2118E-12 | 1.2118E-12
f11 | 1.0000E+00 | 1.2118E-12 | 1.2118E-12 | 1.0000E+00 | 1.2118E-12 | 1.2118E-12 | 1.0000E+00 | 1.2118E-12 | 1.2118E-12
f12 | 1.2732E-02 | 3.0199E-11 | 1.6351E-05 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 1.0763E-02 | 6.7650E-05 | 3.0199E-11
f13 | 6.8432E-01 | 8.1527E-11 | 1.4918E-06 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 7.6950E-08 | 6.6273E-01 | 3.0199E-11
f14 | 2.9119E-10 | 8.8183E-01 | 2.7599E-09 | 2.1293E-11 | 2.1293E-11 | 8.3961E-01 | 2.9467E-12 | 1.4237E-03 | 2.1210E-11
f15 | 1.0666E-07 | 4.5726E-09 | 2.3240E-02 | 1.2023E-08 | 2.4386E-09 | 6.7912E-01 | 8.5598E-04 | 4.8251E-01 | 9.9186E-11
f16 | 5.4439E-11 | 1.3526E-01 | 1.1970E-12 | 2.9916E-11 | 2.9916E-11 | 1.3929E-10 | 3.2244E-12 | 3.2244E-12 | 2.9916E-11
f17 | 1.0651E-05 | 8.9366E-01 | 3.4488E-07 | 2.6253E-11 | 2.6253E-11 | 9.1686E-01 | 3.4488E-07 | 3.4488E-07 | 6.4586E-11
f18 | 2.0965E-08 | 4.2958E-08 | 4.0625E-12 | 3.0085E-11 | 3.0085E-11 | 1.6375E-09 | 5.1812E-12 | 1.0398E-09 | 3.0085E-11
f19 | 3.0180E-11 | 1.7296E-02 | 1.7189E-12 | 3.0180E-11 | 6.0621E-11 | 1.4049E-11 | 1.2108E-12 | 1.7189E-12 | 3.0180E-11
f20 | 3.0811E-08 | 8.5641E-04 | 6.9661E-02 | 2.9205E-02 | 1.0000E+00 | 1.0000E+00 | 2.2649E-07 | 7.4628E-04 | 6.1001E-01
f21 | 3.0199E-11 | 4.5530E-01 | 7.7540E-11 | 3.0199E-11 | 3.0199E-11 | 3.8202E-10 | 8.5609E-07 | 3.8298E-04 | 3.0199E-11
f22 | 3.0199E-11 | 8.8830E-01 | 1.9434E-10 | 3.0199E-11 | 3.0199E-11 | 1.6132E-10 | 3.3102E-04 | 6.6181E-01 | 3.0199E-11
f23 | 3.0199E-11 | 1.2967E-01 | 3.9329E-08 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 1.4488E-11 | 1.8487E-01 | 3.0199E-11
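The p-values in Table 6 come from a two-sided Wilcoxon rank-sum test between the per-run results of IHHO and each competitor. The sketch below illustrates the test with SciPy on placeholder data; the sample size of 30 runs and the 0.05 threshold follow common practice and are assumptions here.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Placeholder data standing in for the best fitness of repeated runs of two algorithms
# (illustrative only; the paper's own run data are not reproduced here).
ihho_runs = rng.normal(loc=1e-6, scale=1e-7, size=30)
other_runs = rng.normal(loc=1e-2, scale=1e-3, size=30)

stat, p_value = ranksums(ihho_runs, other_runs)   # two-sided rank-sum test
print(f"p = {p_value:.4e}, significant at 0.05: {p_value < 0.05}")
```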
Table 7. Comparison of the proposed method with existing methods.
Dataset Source | Example | Number of Points | Reported MZ | IHHO MZ | Least Squares | Smaller Method 1 | Relative Difference (%) 2
Huang et al. [48] (Roundness, 2021) | Example 1 | 25 | 29.2803 | 29.2802 | 29.8072 | Close values | −0.000
Huang et al. [48] (Roundness, 2021) | Example 2 | 24 | 38.2313 | 38.2310 | 39.1007 | Close values | −0.001
Huang et al. [48] (Roundness, 2021) | Example 3 | 100 | 957.413 | 957.420 | 988.236 | Close values | −0.000
Huang et al. [48] (Roundness, 2021) | Example 4 | 80 | 27.1976 | 27.1970 | 29.085 | Close values | −0.002
Luo et al. [33] (Straightness, 2020) | Example 1 | 16 | 0.06693 | 0.06356 | 0.0956 | IHHO MZ | −5.033
Luo et al. [33] (Straightness, 2020) | Example 2 | 8 | 8.5200 | 6.3342 | 9.0000 | IHHO MZ | −25.65
Zheng et al. [20] (Cylindricity, 2019) | Example 1 | 32 | 0.01938 | 0.01939 | 0.28558 | Close values | +0.05
Zheng et al. [20] (Cylindricity, 2019) | Example 2 | 80 | 0.03189 | 0.03183 | 0.03661 | Close values | −0.19
Zheng et al. [20] (Cylindricity, 2019) | Example 3 | 20 | 0.18396 | 0.18396 | 0.21197 | Close values | −0.00
Radlovački et al. [49] (Flatness, 2016) | Example 1 | 25 | 0.01840 | 0.01838 | 0.02187 | Close values | −0.11
Radlovački et al. [49] (Flatness, 2016) | Example 2 | 200 | 0.12252 | 0.12613 | 0.24339 | Reported MZ | +2.9
1 “Close values” indicates that the reported MZ and the IHHO MZ differ by less than 0.5% in absolute terms. 2 Relative difference between the IHHO MZ and the reported MZ, expressed as a percentage of the reported MZ (negative values indicate a smaller IHHO result).
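As a quick check of the sign convention in footnote 2, the relative difference for Example 1 of Luo et al. [33] can be recomputed directly:

```python
reported_mz, ihho_mz = 0.06693, 0.06356            # Luo et al. [33], Example 1 in Table 7
rel_diff = (ihho_mz - reported_mz) / reported_mz * 100
print(f"{rel_diff:.3f} %")   # -5.035 %, close to the -5.033 entry (the table values are rounded)
```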
Table 8. Comparison of the evaluation results obtained by different methods (unit: mm).
Dataset | Number of Points | Least Squares | SSA | HHO | SSA&HHO | IHHO
Cylindrical surface | 158 | 0.1627 | 0.4066 | 0.8951 | 0.3416 | 0.1013
Cylindrical axis | 8 | 0.07509 | 0.06743 | 0.07102 | 0.06657 | 0.06610
Circular section | 100 | 0.02898 | 0.02716 | 0.02868 | 0.02715 | 0.02715
Gauge block surface | 40 | 0.00187 | 0.00184 | 0.00184 | 0.00184 | 0.00184
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
