Article

Induction Motor Fault Classification Based on Combined Genetic Algorithm with Symmetrical Uncertainty Method for Feature Selection Task

1 Department of Electrical Engineering, Chung Yuan Christian University, Taoyuan 320314, Taiwan
2 Department of Electrical and Electronic Engineering, Thu Dau Mot University, Thu Dau Mot 75000, Binh Duong, Vietnam
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(2), 230; https://doi.org/10.3390/math10020230
Submission received: 1 December 2021 / Revised: 19 December 2021 / Accepted: 23 December 2021 / Published: 12 January 2022

Abstract

This research proposes a method to improve the capability of a genetic algorithm (GA) to choose the best feature subset by incorporating symmetrical uncertainty (SU) to rank features and remove redundant ones. The proposed method is a combination of symmetrical uncertainty and a genetic algorithm (SU-GA). In this study, feature selection is implemented for four different conditions of an induction motor: normal, broken bearings, a broken rotor bar, and a stator winding short circuit. The Hilbert-Huang transform (HHT) is used to analyze the current signal in these four motor conditions, and feature selection is then used to find the best feature subset for the classification task. A support vector machine (SVM) is used for classification. Three feature selection methods are implemented: SU, GA, and SU-GA. The results show that SU-GA obtains better accuracy with fewer selected features. In addition, to simulate and analyze realistic operating conditions of the induction motors, white noise is added at three signal-to-noise ratios (SNR): 40 dB, 30 dB, and 20 dB. The results again show that the proposed method has better classification capability.

1. Introduction

With the emergence of Industry 4.0, factory operation is gradually moving towards automation, and the number of unmanned factories, together with the demand for motors, has increased significantly [1]. However, motors are often damaged, rendered unusable, or decommissioned without warning due to worn parts, improper installation, or negligent operation. These issues increase operational costs and cause safety problems. Motor failure distribution analyses have shown that 45% of failures occur in the bearings, 35% in the stator, and 10% in the rotor, while the remaining 10% are other failures [2]. Thus, detecting a motor's possible malfunction and performing maintenance before failure has become essential for many unmanned factories. This paper measures and analyzes the current signals of induction motors in the following conditions: normal, broken bearings, a broken rotor bar, and a stator winding short circuit.
In general, a motor fault detection model can be divided into three essential parts: signal analysis, feature selection, and classification. In the first part, signal analysis, the current signal of the induction motor is converted into analyzable data; this step is widely used in fault diagnosis [3]. Many signal analysis methods exist, such as envelope analysis (EA), which is suitable for vibration fault detection [4], and multiresolution analysis (MRA), which is often used for image analysis and can repair images and reduce image noise [5]. Likewise, the Hilbert-Huang transform (HHT) can analyze signals in both the time and frequency domains. Due to its strong adaptability, it is suitable for processing non-linear and non-stationary signals and is used in many fields [6].
Furthermore, two kinds of feature selection approaches are investigated in this study. The genetic algorithm (GA) is a heuristic global search method that can compare multiple individuals simultaneously and is robust [7]. GA finds the best chromosome by simulating chromosome crossover and mutation, using a fitness function to score the chromosomes and then choosing those with higher scores to reproduce over several generations [8]. In the fault diagnosis of induction motors, each chromosome represents a feature subset. Symmetrical uncertainty (SU) is a value calculated from information entropy to quantify the correlation between two variables [9]. The importance of each feature can be determined from the correlation coefficient between the fault type and the feature itself. By comparing the correlation coefficient between a feature and the fault type with the correlation coefficient between that feature and another feature, it can be judged whether the feature subset contains redundant features. This article explains the advantages and disadvantages of these two feature selection methods and discusses them based on the identification results.
The classifier is discussed in the last part. Using machine learning to classify motor failures has become prevalent. Commonly used methods include the back-propagation neural network (BPNN), decision tree (DT), and support vector machine (SVM), each with its own advantages and disadvantages. BPNN has hidden layers that propagate the output error back to the input layer as a basis for adjusting the weights, strengthening the connections between the input and output layers, although its convergence is slow [10,11]. The DT method supports visual analysis and can classify data with noise or missing values, but it has a high computational cost [12]. The SVM method maps the data from a low-dimensional space to a high-dimensional space, finds the best hyperplane for classification, and then maps the data back to the low-dimensional space; it avoids overfitting on small-sample problems and can handle non-linear and high-dimensional problems [13,14]. This research also uses SVM for classification.
SU has the advantage of calculating the correlation coefficient between each feature and the fault type, sorting the essential features, and deleting redundant ones, whereas GA can find the best feature subset by calculating the fitness values of candidate subsets and selecting the fitter ones for reproduction over several generations. Combining these two algorithms might therefore yield optimal results. This paper proposes the SU-GA method to sort important features, delete redundant features, and effectively find the best feature subset. Finally, by comparing the classification capability of SU-GA with SU and GA, it is shown that SU-GA selects the fewest features and achieves better classification accuracy.
The feature selection method proposed in this research can identify a feature subset and sort the features in the set according to the calculated correlation coefficients. The current signals of induction motors in four different conditions are measured first, and HHT is then used to analyze them. The maximum and minimum values, mean, root mean square (rms), and standard deviation (std) of the instantaneous amplitude and instantaneous frequency are taken as features. SU, GA, and SU-GA are then used to obtain the best feature subsets, and SVM is used to classify the feature subsets from HHT, HHT-SU (HHT with SU), HHT-GA (HHT with GA), and HHT-SU-GA (HHT with SU-GA). Finally, the results of the feature selection methods are compared and analyzed. In addition, to observe the anti-noise ability of the feature selection methods, white noise with signal-to-noise ratio (SNR) magnitudes of 40 dB, 30 dB, and 20 dB is added to the original signal for comparison. The results verify that, at SNR magnitudes of ∞ dB, 40 dB, 30 dB, and 20 dB, the SU-GA method eliminates redundant features and achieves higher classification accuracy than SU and GA.
The rest of the paper is arranged as follows. Section 2 describes the signal analysis method, the two feature selection methods, the proposed SU-GA method, and the SVM classifier. Section 3 introduces the experimental equipment and process. The identification results are presented in Section 4. Finally, Section 5 presents the conclusions.

2. Methodology

Induction motors have recently been the primary power source in many factories, so a fast, effective, and accurate mechanism to detect their possible fault conditions has become increasingly critical. This section explains the specifications of the motors used in the research. First, we measured the current signals of four induction motor conditions: normal, broken bearings, a broken rotor bar, and a stator winding short circuit. Next, we introduce the experimental framework and the HHT signal analysis method used in this research. Finally, the maximum, minimum, average, rms, and std values of the signal after HHT analysis were taken as the main features, and the differences between the four kinds of induction motors were compared.

2.1. Hilbert-Huang Transform (HHT)

HHT was proposed by Norden E. Huang of the Academia Sinica in Taiwan in 1998. Compared with the Hilbert transform (HT) alone, this method first uses empirical mode decomposition (EMD) to decompose the data into intrinsic mode functions (IMFs), so that the resulting instantaneous frequency and instantaneous amplitude are more meaningful [15]. The method performs well on non-linear and non-stationary data and is used in various engineering fields, such as building structure vibration testing, motor fault diagnosis, earthquake research, and atmospheric science [6]. HHT consists of two main steps, discussed next.

2.1.1. Empirical Mode Decomposition (EMD)

EMD decomposes non-stationary and non-linear data in search of IMFs. An IMF must satisfy two conditions: the number of extrema (local maxima plus local minima) and the number of zero-crossings must be equal or differ by at most one, and the mean of the upper and lower envelopes must be close to zero everywhere. The decomposition terminates when the remaining residue becomes a monotonic function, from which no further IMF suitable for HT can be extracted [16]. Therefore, EMD requires repeated iterations (sifting) before obtaining an IMF. The flow chart of EMD is shown in Figure 1.
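The sifting loop in Figure 1 can be summarized in a short Python sketch. The following is a minimal illustration assuming numpy and scipy; the function names (sift_once, extract_imf), the extrema threshold, and the stopping tolerance are our own simplifications, not the authors' implementation, and boundary handling is cruder than in a full EMD library.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None                                  # too few extrema: near-residue
    upper = CubicSpline(t[maxima], x[maxima])(t)     # upper envelope
    lower = CubicSpline(t[minima], x[minima])(t)     # lower envelope
    return x - (upper + lower) / 2.0

def extract_imf(x, t, max_iter=100, tol=0.05):
    """Iterate sifting until successive candidates barely change (SD criterion)."""
    h = x.copy()
    for _ in range(max_iter):
        h_new = sift_once(h, t)
        if h_new is None:
            break
        done = np.sum((h - h_new) ** 2) / np.sum(h ** 2) < tol
        h = h_new
        if done:
            break
    return h          # one IMF; subtract it from x and repeat for the next layer
```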
The IMF waveforms of the four conditions generated by EMD are shown in Figure 2. The horizontal axis represents the data points; since the sampling time for each signal is two seconds and the capture frequency is 1000 Hz, there are 2000 data points in total. Figure 2a shows the IMF waveform of the normal motor, Figure 2b that of the broken bearings motor, Figure 2c that of the broken rotor bar motor, and Figure 2d that of the stator winding short-circuit motor. Next, we performed HT on the IMFs of the four conditions to obtain the instantaneous amplitude and instantaneous frequency of each IMF layer, and then analyzed the results.

2.1.2. Hilbert Transform (HT)

HT is commonly used in tomographic reconstruction and offers fast, efficient computation [17]. HT is the convolution of the IMF signal $c(t)$ with $1/t$; it preserves amplitude and frequency information, and its definition is shown in Equation (1) [18].

$$H[c(t)] = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{c(\tau)}{t-\tau}\,d\tau \tag{1}$$

After the IMF signal $c(t)$ is subjected to HT, the analytic signal $z(t)$ can be formed from real and imaginary parts: $c(t)$ is the real part and $H[c(t)]$ is the imaginary part, as shown in Equation (2).

$$z(t) = c(t) + jH[c(t)] = a(t)e^{j\theta(t)} \tag{2}$$

From the real and imaginary parts, the amplitude and phase can be obtained. The amplitude $a(t)$ is shown in Equation (3) and the phase $\theta(t)$ in Equation (4).

$$a(t) = \sqrt{c^{2}(t) + H^{2}[c(t)]} \tag{3}$$

$$\theta(t) = \tan^{-1}\!\left(\frac{H[c(t)]}{c(t)}\right) \tag{4}$$

Finally, the instantaneous frequency $\omega(t)$ is obtained by differentiating the phase, as defined in Equation (5).

$$\omega(t) = \frac{d\theta(t)}{dt} \tag{5}$$
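In practice, Equations (2)-(5) reduce to a few lines, since scipy.signal.hilbert returns the analytic signal $z(t) = c(t) + jH[c(t)]$ directly. This is a hedged sketch rather than the authors' code; the function name is ours, and the 1000 Hz sampling rate matches the capture frequency stated in Section 2.1.1.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(c, fs=1000.0):
    z = hilbert(c)                       # analytic signal, Eq. (2)
    a = np.abs(z)                        # instantaneous amplitude a(t), Eq. (3)
    theta = np.unwrap(np.angle(z))       # phase theta(t), Eq. (4), unwrapped
    omega = np.diff(theta) * fs          # discrete d(theta)/dt in rad/s, Eq. (5)
    return a, omega
```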

2.2. Feature Extraction

After obtaining the instantaneous amplitude and instantaneous frequency of each layer of IMF through HT, we extracted their maximum, minimum, average, rms, and std as features and normalized them. Because the value ranges of the extracted features vary widely, all features are rescaled with min-max normalization, shown in Equation (6) [19], to increase classifier efficiency: the feature minimum $F_{\min}$ is subtracted from the feature value $F_i$, and the result is divided by the difference between the feature maximum $F_{\max}$ and $F_{\min}$. After normalization, every feature lies between zero and one, so that every feature is equally weighted. The structure of signal analysis and feature extraction is shown in Figure 3.

$$\frac{F_i - F_{\min}}{F_{\max} - F_{\min}} \tag{6}$$
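The extraction of the five statistics and the Eq. (6) rescaling amount to the following minimal sketch; the array layout (samples by features) and the helper names are illustrative assumptions.

```python
import numpy as np

def stats(x):
    """Max, min, mean, rms, and std of one instantaneous amplitude/frequency series."""
    return [x.max(), x.min(), x.mean(), np.sqrt(np.mean(x ** 2)), x.std()]

def min_max_normalize(F):
    """Eq. (6): rescale each column of a (samples x features) matrix to [0, 1]."""
    Fmin, Fmax = F.min(axis=0), F.max(axis=0)
    return (F - Fmin) / (Fmax - Fmin)
```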
MATLAB was used to draw a feature map to make the selected features easier to observe. The vertical axis of the feature map is the feature index (80 in total), and the horizontal axis is the sample index (200 in total). Samples 1–50 correspond to normal induction motors; samples 51–100 to induction motors with broken bearings; samples 101–150 to induction motors with a broken rotor bar; and samples 151–200 to induction motors with a stator winding short circuit. From F41 to F46, the difference between broken bearings and the other conditions is quite apparent (as shown in Figure 4a).
To simulate different realistic scenarios, white noise with SNR = 40 dB, SNR = 30 dB, and SNR = 20 dB is added to the current signal of the three-phase squirrel-cage induction motor. HHT is performed on these signals for feature extraction, and the corresponding feature maps are obtained: Figure 4b shows the feature map at 40 dB, Figure 4c at 30 dB, and Figure 4d at 20 dB. Figure 4b–d show that the difference between broken bearings and the other conditions from F41 to F46 remains quite apparent, so the bearing failure identification rate is expected to be higher than for the other states; when features F41 to F46 are chosen, the classification accuracy for the broken bearings motor is higher than for the other conditions. The next section describes feature selection, which makes the features easier to classify.
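The paper does not detail how the noise is generated, so the sketch below assumes additive Gaussian white noise scaled to the target SNR, which is the usual construction; the function name is ours.

```python
import numpy as np

def add_white_noise(signal, snr_db, seed=0):
    """Add white Gaussian noise so that the result has the requested SNR in dB."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))     # from SNR = 10*log10(Ps/Pn)
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)
```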

2.3. Feature Selection

Feature selection is an effective method for deleting redundant and irrelevant features and improving classification accuracy. Feature selection methods are divided into wrapper methods and filter methods. Wrapper methods involve much more computation, but their results are usually better than those of filter methods [20]. This section discusses the wrapper method GA, the filter method SU, and the proposed SU-GA method, which combines the two.

2.3.1. Genetic Algorithm (GA)

GA is a heuristic global search method proposed by John Henry Holland of the University of Michigan during his study of cellular automata. The GA process applies crossover and mutation to produce different offspring, which undergo successive generations of evolution to find the best chromosome [21,22]. With more powerful computers and growing practical requirements, many variants of GA have appeared and have been applied in numerous fields, including trend forecasting, production scheduling, data analysis, and various combinatorial optimization problems.
Initially, the GA randomly generates an array of feature subsets. Since a binary GA is used, each feature subset (chromosome) is represented as a binary string, so each gene has two states (zero or one). Next, the fitness value of each feature subset is obtained by evaluating the fitness function; the higher the fitness value, the higher the probability that the subset reproduces. After parent subsets are selected for reproduction, a random gene position is chosen at which two subsets cross over, producing two new feature subsets. Each gene position of the new subsets then mutates with a mutation ratio of 0.1, completing reproduction. The fitness values of the feature subsets of the two generations are then calculated and sorted, and the subsets with higher fitness values continue to the next reproduction round. After several generations of reproduction and survival of the fittest, the feature subset with the highest fitness value is produced [22]. The flow chart of GA is shown in Figure 5.
In this research, each generation contains 30 feature subsets, and the number of generations is 100. The genome length is 80, the crossover ratio is 0.8, and the mutation ratio is 0.1. A KNN model is used as the fitness function to calculate the fitness value of each feature subset, operating on a binary search space in which '1' represents a selected feature and '0' an unselected feature.
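The settings above translate into the following minimal binary-GA sketch. The paper does not give its exact KNN fitness formula, so cross-validated KNN accuracy stands in for it here; the function names and the selection and elitism details are our assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0                                   # empty subsets score nothing
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=30, n_gen=100, p_cx=0.8, p_mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]                              # genome length (80 here)
    pop = rng.integers(0, 2, (pop_size, n_feat))
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop]) + 1e-9
        parents = pop[rng.choice(pop_size, pop_size, p=scores / scores.sum())]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):          # single-point crossover
            if rng.random() < p_cx:
                cut = rng.integers(1, n_feat)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        flips = rng.random(children.shape) < p_mut   # bit-flip mutation, ratio 0.1
        children[flips] = 1 - children[flips]
        merged = np.vstack([pop, children])          # survival of the fittest
        order = np.argsort([fitness(ind, X, y) for ind in merged])[::-1]
        pop = merged[order[:pop_size]]
    return pop[0].astype(bool)                       # best chromosome = feature mask
```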

2.3.2. Symmetrical Uncertainty (SU)

The SU value is a correlation coefficient calculated from information entropy to illustrate the correlation between two variables. Information entropy is a measure of uncertainty, so it can be used to estimate irregular information in time series, and it can also evaluate the similarity of two variables [23,24]. The information entropy $H(X)$ is shown in Equation (7), the amount of information $I(X)$ in Equation (8), and the conditional entropy $H(X|Y)$ in Equation (9).

$$H(X) = -\sum_{i=1}^{n} P(x_i)\log_2[P(x_i)] \tag{7}$$

$$I(X) = -\sum_{i=1}^{n} \log_2[P(x_i)] \tag{8}$$

$$H(X|Y) = -\sum_{j=1}^{n}\left\{P(y_j)\sum_{i=1}^{n} P(x_i|y_j)\log_2[P(x_i|y_j)]\right\} \tag{9}$$

Here $P$ is the probability mass function of $x$, and $i$ indexes its possible values.

The information gain $IG$ between the entropies can be calculated from Equations (7) and (9), as shown in Equation (10).

$$IG = H(X) - H(X|Y) \tag{10}$$

Then, the SU value can be calculated from the information gain and the information entropies of the two variables, as shown in Equation (11).

$$SU(X,Y) = 2\left[\frac{IG}{H(X)+H(Y)}\right] \tag{11}$$

The value of SU lies between zero and one. When $SU(X,Y)=1$, $X$ and $Y$ are completely correlated; when $SU(X,Y)=0$, they are entirely independent. SU finds the correlation between a feature and the target by calculating their correlation coefficient and then screens out the important features, making it an effective and fast screening method. The flow chart of SU is shown in Figure 6.
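Equations (7)-(11) can be computed directly from empirical probability tables, as in the sketch below. Continuous HHT features would first need to be discretized (e.g., binned), a step the equations presuppose; the function names are ours.

```python
import numpy as np

def entropy(x):
    """Eq. (7): Shannon entropy of a discrete variable."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(x, y):
    """Eq. (9): entropy of x given y, weighted over the values of y."""
    return sum((y == v).mean() * entropy(x[y == v]) for v in np.unique(y))

def symmetrical_uncertainty(x, y):
    """Eqs. (10) and (11): information gain normalized by the two entropies."""
    ig = entropy(x) - conditional_entropy(x, y)      # Eq. (10)
    return 2.0 * ig / (entropy(x) + entropy(y))      # Eq. (11)
```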

2.3.3. The Proposed Method (SU-GA)

The GA can find the essential features but cannot rank them, and its calculation time is long [25]. To address this, we first used SU to calculate the correlation coefficient between each feature and the fault type, arranged the features in descending order to obtain the feature set $S = \{F_1, F_2, \ldots, F_{80}\}$, and then used SU to calculate the correlation coefficients between features. Starting the comparison from $F_1$: if the SU value between $F_1$ and the fault type was less than or equal to the SU value between $F_1$ and $F_2$, the correlation between $F_1$ and $F_2$ was stronger than the correlation between $F_1$ and the fault type, so $F_2$ was regarded as a redundant feature. We calculated the SU values between $F_1$ and each remaining feature in sequence, compared them to delete redundant features, then removed $F_1$ from the set $S$ and stored it in the set $S'$. We then selected the feature with the maximum SU value from the remaining set $S$ and repeated the above steps to obtain the reduced feature set $S'$. Finally, we applied GA to the feature set $S'$; after several generations of reproduction and survival of the fittest, the feature subset with the highest fitness value was produced. The advantages of this method are that the importance of each feature can be calculated, redundant features can be deleted, the time cost of the genetic algorithm is shortened, and the classification accuracy is improved. In short, SU-GA first uses SU to delete redundant features and then uses GA to search for the best feature subset. The flow chart of SU-GA is shown in Figure 7.
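The redundancy-removal stage described above can be sketched as follows, reusing the symmetrical_uncertainty helper sketched in Section 2.3.2; the surviving columns are then handed to the GA of Section 2.3.1. This is an illustrative reading of the procedure, not the authors' code.

```python
import numpy as np

def su_filter(X, y):
    n_feat = X.shape[1]
    su_label = np.array([symmetrical_uncertainty(X[:, j], y) for j in range(n_feat)])
    order = np.argsort(su_label)[::-1]     # rank features by relevance, descending
    kept = []
    for j in order:
        # F_j is redundant if some kept feature F_k correlates more strongly
        # with F_j than F_k does with the fault type
        redundant = any(
            symmetrical_uncertainty(X[:, k], X[:, j]) >= su_label[k] for k in kept)
        if not redundant:
            kept.append(j)
    return kept                            # pass X[:, kept] to the GA stage
```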
This subsection has explained the feature selection methods used in this research, with a complete discussion of the steps and application of GA and SU. GA simulates the crossover and mutation of feature subsets, uses the fitness function to score each subset, and selects the higher-scoring subsets to reproduce over several generations, thereby finding the best feature subset. SU calculates the correlation coefficient between each feature and the category, and between pairs of features, deletes redundant features according to these coefficients, and finds the best feature subset. By combining the advantages of GA and SU, this research proposes the SU-GA feature selection method, which deletes redundant features and finds the best feature subset through the fitness function.

2.4. Support Vector Machine (SVM)

SVM is a machine learning model based on the statistical learning theory proposed by Vapnik [26]. It performs well in fields such as medicine, biology, solar power forecasting, weather forecasting, text classification, image recognition, fault diagnosis, and bioinformatics, and it is particularly effective for small-sample training and forecasting [26,27,28,29,30].
SVM maps the samples from a low-dimensional space to a high-dimensional space, as shown in Figure 8a, finds a hyperplane that can divide the samples, and then maps the result back to the low-dimensional space [31], as shown in Figure 8b.
To optimize the separation, it is necessary to find the maximum-margin hyperplane. To do so, two parallel hyperplanes that separate the samples must be determined such that the spacing between them is as wide as possible. The region between these hyperplanes is called the margin, and the maximum-margin hyperplane is the hyperplane midway between them [26,32], as shown in Figure 9.
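A toy scikit-learn example illustrates the kernel idea in Figures 8 and 9: data that is not linearly separable in the original space is separated after the implicit RBF mapping. The library choice and kernel are assumptions; the paper does not name either.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X, y)              # implicit high-dimensional mapping
print("support vectors per class:", clf.n_support_)   # points defining the margin
print("training accuracy:", clf.score(X, y))
```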

3. Experimental Setup

3.1. Experimental Equipment

The equipment and instruments used in this research include a servo motor, an oscilloscope, a computer, a torque sensor, and a signal extractor (NI PXI-1033), shown in Figure 10a, together with three-phase squirrel-cage induction motors in four different conditions: normal, broken bearings, a broken rotor bar, and a stator winding short circuit. The motor with broken bearings (aperture 1.96 mm × 0.53 mm) is shown in Figure 10b, the motor with a broken rotor bar (two holes, ∅8 mm, 10 mm deep) in Figure 10c, and the motor with a stator winding short circuit (2 coils) in Figure 10d. The specifications of the three-phase squirrel-cage induction motors are given in Table 1. The captured current signals are then used for fault diagnosis of the induction motors.

3.2. Experimental Process

First, we set the normal induction motor on the power meter platform and connected it to the servo motor. Secondly, we connected the power supply to the induction motor. After the connection was completed, we used a signal extractor (NI PXI-1033) to capture the current signal. Each test motor was measured fifty times, with a sampling time of two seconds per signal and a capture frequency of 1000 Hz. We then repeated these steps for the induction motors with broken bearings, a broken rotor bar, and a stator winding short circuit. The measured data were analyzed with HHT in MATLAB, and the maximum, minimum, average, rms, and std were extracted as features; GA, SU, and SU-GA were used to select feature subsets. Finally, 160 samples were used as the training set and 40 samples as the test set, SVM classification was repeated 100 times to obtain the average classification accuracy, and the results of the three feature selection methods were compared.
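The 160/40 evaluation protocol repeated 100 times can be sketched as below; the feature matrix is a random placeholder standing in for the selected HHT features, and the SVM hyperparameters are unstated in the paper, so scikit-learn defaults are used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 4))            # placeholder: 200 samples x selected features
y = np.repeat([0, 1, 2, 3], 50)     # four motor conditions, 50 samples each

accs = []
for trial in range(100):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=160, test_size=40, stratify=y, random_state=trial)
    accs.append(SVC().fit(X_tr, y_tr).score(X_te, y_te))
print("average accuracy over 100 trials:", np.mean(accs))
```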

4. Experimental Results

In this research, a signal extractor (NI PXI-1033) was used to extract current signals from induction motors with four different conditions, with 50 records captured per condition. The measured signals were analyzed with HHT in MATLAB, and the extracted features were normalized. The SU, GA, and SU-GA methods were then used to delete redundant and less influential features; 160 samples were used to train the SVM and the remaining 40 samples were used for testing. This process was repeated 100 times and the average classification accuracy was calculated. Finally, to test the anti-noise ability of the three methods, white noise with SNR = 40 dB, SNR = 30 dB, and SNR = 20 dB was added to the signal to observe its influence on the classification accuracy.

4.1. Induction Motor Fault Classification Results

This research uses HHT to extract features from induction motors in the normal, broken bearings, broken rotor bar, and stator winding short-circuit conditions and uses three feature selection methods (GA, SU, and SU-GA) to find the most important features. SVM is then used to calculate the identification rate. Finally, white noise of different intensities is added to test the system's robustness in different environments. White noise with SNR = 40 dB, SNR = 30 dB, and SNR = 20 dB is considered, and the identification results are shown in Table 2. Since the GA and SU-GA processes include crossover and mutation, the best feature subset selected differs between runs; the identification results in Table 2 are therefore the best results after repeating GA and SU-GA 100 times. In Table 2, the proposed HHT-SU-GA achieves the best average accuracy in every experiment.
In the absence of noise, the classification accuracy of HHT is 79.5%. HHT combined with the feature selection methods GA and SU deletes 76.3% and 72.5% of the features, respectively, with classification accuracies of 87.8% and 86.4%. HHT combined with SU-GA not only deletes 95% of the features but also increases the classification accuracy from 79.5% to 91.2%. This method can therefore delete more redundant and unimportant features, effectively improving the classification accuracy and yielding a better feature subset.
Under slight white noise with SNR = 40 dB, the classification accuracy of HHT is 78.1%. The feature selection methods GA, SU, and SU-GA delete 72.5%, 78.8%, and 95% of the features, respectively, with classification accuracies of 84.6%, 85.5%, and 89.6%. The results show that, after feature selection, all three methods maintain both the identification ability they had before white noise was added and their ability to delete redundant features.
When the white noise is increased to SNR = 30 dB, the classification accuracy of HHT is 55.6%. The feature selection methods GA and SU delete 62.5% and 78.8% of the features, respectively, increasing the classification accuracy to 66.8% and 65.8%. In comparison, the proposed SU-GA method effectively deletes 96.3% of the redundant features and reaches a classification accuracy of 73.6%, the best of all the methods. The results show that, at SNR = 30 dB, SU and SU-GA still maintain their ability to delete redundant features. Compared with the noise-free case, however, the classification accuracy of all three feature selection methods declines significantly at SNR = 30 dB.
Finally, under severe noise with SNR = 20 dB, the classification accuracy of HHT is 52.2%, and the feature selection methods GA, SU, and SU-GA delete 60%, 78.8%, and 96.3% of the features, respectively, with classification accuracies of 60.2%, 61.7%, and 64.1%. The results show that, at SNR = 20 dB, the recognition ability of all three feature selection methods declines severely, but SU and SU-GA still maintain their ability to remove redundant features.

4.2. Feature Selection and Result

Feature selection is the process of finding the best feature subset from the original feature set, without additional feature conversion, to reduce the number of features without changing the vital information represented by the original set. This research compares three feature selection methods: GA, SU, and SU-GA.

4.2.1. Selection and Classification of GA

This research applies GA-based feature selection to the features obtained after HHT. Initially, an array of feature subsets is randomly generated, and the fitness function is evaluated to obtain the fitness value of each subset. A feature subset with a higher fitness value has a higher probability of reproducing, and after several generations the fittest feature subset is obtained. Since the GA process includes crossover and mutation, the best feature subset selected differs between runs; the procedure was therefore repeated 100 times to find the subset with the highest classification accuracy. GA reduces the feature count from the original 80 HHT features to 19 (76.3% of the features are deleted). Using SVM, the classification accuracy increases from 79.5% to 87.8%, showing that the GA selection method can obtain the best feature subset by evaluating each candidate subset. However, this method has the disadvantage that it cannot calculate correlation coefficients between features or rank features by importance.

4.2.2. Selection and Classification of SU

This research also uses the SU feature selection method, which easily and quickly measures the correlation between variables. First, the correlation between each feature and the category is obtained, and the calculated SU values are sorted in descending order. Then, the SU values between features are calculated: if the SU value between two features is greater than the SU value between a feature and the category, that feature is determined to be redundant and is deleted. After SU feature selection, the features are reduced to 16 (80% of the features are deleted), and the classification accuracy of the resulting feature subset increases from 79.5% to 86.4%. The experiments show that the SU feature selection method can effectively delete redundant features and improve classification accuracy. Figure 11a shows that, with this method, the classification accuracy stabilizes once the number of features reaches three.

4.2.3. Selection and Classification of SU-GA

The proposed SU-GA method combines the advantage of SU, which can delete redundant features and rank essential ones, with the advantage of GA, which can effectively find the best feature subset by calculating fitness values. Since the SU-GA process includes crossover and mutation, the procedure was repeated 100 times to find the feature subset with the highest classification accuracy. After SU-GA feature selection, the number of features is reduced from the original 80 HHT features to four (95% of the features are deleted), and the SVM classification accuracy increases from 79.5% to 91.2%. Compared with the GA and SU feature selection methods, this method can delete redundant features, sort them by importance, and finally obtain the best feature subset.
Figure 11a,b show that SU-GA increases classification accuracy by deleting unimportant features under the noise-free and SNR = 40 dB conditions. With SNR = 30 dB white noise added, SU-GA effectively deletes unimportant features and significantly improves classification accuracy, as shown in Figure 11c. Finally, under severe noise with SNR = 20 dB, as shown in Figure 11d, although the peak classification accuracy of SU-GA is lower than that of SU, the final classification accuracy of SU-GA after deleting unimportant features is still higher than that of SU.

4.2.4. Comparison of Feature Selection

When GA reaches the termination condition, the average fitness value and the best fitness value should be close [33]. The fitness value of GA is unchanged from the 55th to the 100th iteration, and that of SU-GA from the 93rd to the 100th iteration. GA converges quite early, at the 55th iteration, which could mean that it has fallen into a local trap, whereas the proposed algorithm converges at the 93rd iteration, indicating improved global search ability and escape from local traps. The KNN model was used to calculate the fitness values: the average fitness value of SU-GA is 0.1052 and its best fitness value is 0.8, while the average fitness value of GA is 0.1865 and its best fitness value is 0.135. The convergence curves of SU-GA and GA are shown in Figure 12, and the features selected by GA and SU-GA are listed in Table 3.
As shown in Table 4, this research compares the number of features selected by the three feature selection methods (GA, SU, and SU-GA). Under the conditions of no noise, SNR = 40 dB, SNR = 30 dB, and SNR = 20 dB, the number of features selected by SU-GA is much smaller than that selected by GA or SU. The features selected by SU and SU-GA are sorted by importance; Table 4 shows that, under the no-noise and SNR = 40 dB conditions, the features selected by SU-GA are the same as the four most important features of SU. Because of the differing noise levels, the importance and number of features are not exactly the same across conditions, and GA cannot rank features by importance. Finally, Table 4 shows that GA and SU need more features than SU-GA yet achieve lower classification accuracy.
The proposed SU-GA feature selection method first uses SU to delete redundant features and then uses GA to find the best feature subset. This method increases the classification accuracy by 11.7% compared to no feature selection and deletes 95% of the features. This research also added white noise with SNR = 40 dB, SNR = 30 dB, and SNR = 20 dB to the original measurement signal to simulate influencing factors in the actual operating environment of the motor. In these cases, SU-GA selects four, three, and three essential features, respectively, and the classification accuracy reaches 89.6%, 73.6%, and 64.1%, respectively. Comparing the number of features and the classification accuracy shows that the proposed SU-GA method is the best, as shown in Table 5.
This research analyzes the current signals of induction motors with different faults and uses the feature selection methods GA and SU on the signals analyzed by HHT to improve classification accuracy and select essential, useful features. By combining the advantages of SU and GA, this research proposes the SU-GA feature selection method, which deletes redundant features, ranks essential features, and effectively finds the best feature subset. The results are analyzed by comparing the classification accuracy of the three feature selection methods and discussing their classification outcomes.
It has been shown that SU-GA uses fewer features and obtains better classification accuracy. The method combines the advantage of SU, which deletes redundant features and ranks important ones, with the advantage of GA, which finds the best feature subset through continuous iteration. The experiments show that the proposed SU-GA method can delete 95% of the features while reaching a classification accuracy of 91.2%, better than the accuracy before feature selection. Under white noise with SNR = 40 dB, SNR = 30 dB, and SNR = 20 dB, the classification accuracy reaches 89.6%, 73.6%, and 64.1%, respectively, better than no feature selection and better than SU or GA alone. This shows that the proposed method has strong anti-noise ability.

5. Conclusions

This research uses a signal extractor to capture the current signals of the four kinds of motors and uses HHT to extract their maximum, minimum, average, rms, and std as features. SU is then used to calculate the correlation coefficients between each feature and the motor fault category, and between pairs of features, and to remove redundant features; GA is used to search for the best feature subset. Finally, white noise was added to test the system's robustness, and the results show that the best feature subset can be found using SU-GA. The main results of this study are as follows:
  • This research compared the feature selection methods GA and SU and used both to remove unimportant and redundant features before classification. Under noise-free conditions, the number of features decreased by 76.3% and 72.5%, respectively, while the classification accuracy increased by 8.3% with GA and 6.9% with SU. When severe noise with SNR = 20 dB was added, the number of features decreased by 60% and 78.8%, respectively, and the classification accuracy increased by 8% with GA and 9.5% with SU.
  • By combining the advantages of SU and GA, this research proposes the SU-GA method, which deletes redundant features, ranks important features, and effectively finds the best subset. Under noise-free conditions, the SVM classification accuracy reaches 91.2%, better than the other feature selection methods; with added noise at different signal-to-noise ratios (SNR = 40 dB, 30 dB, and 20 dB), it increases the classification accuracy by 11.5%, 17.5%, and 11.9%, respectively. The proposed method therefore achieves higher classification accuracy.
Although some research results have been obtained, many in-depth topics remain for further study. This research classifies only four conditions, but induction motor faults are diverse; the database can therefore be expanded in the future to classify more fault types. Many public databases and various feature selection methods have been applied to induction motor fault classification, and comparing the proposed method with these would help identify a more robust and reliable method for motor fault diagnosis.

Author Contributions

Supervision, writing—review and editing, C.-Y.L.; writing—original draft preparation, visualization, Y.-J.H.; writing—review and editing, T.-A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Choi, J.; Chun, Y.; Han, P. Design of high power permanent magnet motor with segment rectangular copper wire and closed slot opening on electric vehicles. IEEE Trans. Magn. 2010, 46, 2070–2073.
  2. Bazurto, A.J.; Quispe, E.C.; Mendoza, R.C. Causes and failures classification of industrial electric motor. In Proceedings of the 2016 IEEE ANDESCON, Arequipa, Peru, 19–21 October 2016; pp. 1–4.
  3. Contreras-Hernandez, J.L.; Almanza-Ojeda, D.L.; Ledesma, S.; Ibarra-Manzano, M.A. Motor fault detection using quaternion signal analysis on FPGA. Measurement 2019, 138, 416–424.
  4. Han, B.; Chen, Y. Marine Shafting Fault Detection Method Using Improved Envelope Analysis. In Proceedings of the 2019 5th International Conference on Transportation Information and Safety (ICTIS), Liverpool, UK, 14–17 July 2019; pp. 177–182.
  5. Yadav, A.; Aryasomayajula, A.; AhmedAnsari, R. Multiresolution analysis based sparse dictionary learning for remotely sensed image retrieval. In Proceedings of the 2019 Women Institute of Technology Conference on Electrical and Computer Engineering (WITCON ECE), Dehradun, India, 22–23 November 2019; pp. 76–80.
  6. Yuan, M.; Fu, Z.; Bao, P. Detection of bolt tightness degree based on HHT. In Proceedings of the 9th International Conference on Electronic Measurement and Instruments, Beijing, China, 2 October 2009; Volume 4, pp. 334–337.
  7. Maruyama, T.; Igarashi, H. An Effective Robust Optimization Based on Genetic Algorithm. IEEE Trans. Magn. 2008, 44, 990–993.
  8. Popa, R. The hybridisation of the selfish gene algorithm. In Proceedings of the 2002 IEEE International Conference on Artificial Intelligence Systems (ICAIS 2002), Divnomorskoe, Russia, 5–10 September 2002; pp. 345–350.
  9. Yang, Y.; Yu, Y. A hand gestures recognition approach combined attribute bagging with symmetrical uncertainty. In Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, Chongqing, China, 29–31 May 2012; pp. 2551–2554.
  10. Cui, L.; Tao, Y.; Deng, J.; Liu, X.; Xu, D.; Tang, G. BBO-BPNN and AMPSO-BPNN for multiple-criteria inventory classification. Expert Syst. Appl. 2021, 175, 114842.
  11. Chen, B.; Xing, L.; Zhao, L.; Xie, Y.; Cai, Y.; Chen, X. Prediction Model of Commercial Economic Index Based on BPNN Optimization Algorithm. In Proceedings of the 2020 International Conference on Computer Engineering and Application (ICCEA), Guangzhou, China, 18–20 March 2020; pp. 529–532.
  12. Rivera-Lopez, R.; Canul-Reich, J. Construction of near-optimal axis-parallel decision trees using a differential-evolution-based approach. IEEE Access 2018, 6, 5548–5563.
  13. Song, Y.; Jin, Q.; Yan, K.; Lu, H.; Pan, J. Vote Parallel SVM: An Extension of Parallel Support Vector Machine. In Proceedings of the 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 19–23 August 2018; pp. 1942–1947.
  14. Zhu, S.; Xu, C.; Wang, J.; Xiao, Y.; Ma, F. Research and application of combined kernel SVM in dynamic voiceprint password authentication system. In Proceedings of the 2017 IEEE 9th International Conference on Communication Software and Networks (ICCSN), Guangzhou, China, 6–8 May 2017; pp. 1052–1055.
  15. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. A 1998, 454, 903–995.
  16. Palkar, P.M.; Udupi, V.R.; Patil, S.A. A review on bidimensional empirical mode decomposition: A novel strategy for image decomposition. In Proceedings of the 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), Chennai, India, 1–2 August 2017; pp. 1098–1100.
  17. Sidky, E.Y.; Pan, X. Recovering a compactly supported function from knowledge of its Hilbert transform on a finite interval. IEEE Signal Process. Lett. 2005, 12, 97–100.
  18. Lenka, B. Time-frequency analysis of non-stationary electrocardiogram signals using Hilbert-Huang Transform. In Proceedings of the IEEE International Conference on Communications and Signal Processing, Melmaruvathur, India, 2–4 April 2015; pp. 1156–1159.
  19. Deepu, V.; Madhvanath, S. Genetically evolved transformations for rescaling online handwritten characters. In Proceedings of the IEEE INDICON 2004, First India Annual Conference, Kharagpur, India, 20–22 December 2004; pp. 262–265.
  20. Piao, Y.; Ryu, K.H. Detection of differentially expressed genes using feature selection approach from RNA-seq. In Proceedings of the 2017 IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju Island, Korea, 17–20 January 2017; pp. 304–308.
  21. Roberge, V.; Tarbouchi, M.; Okou, F. Strategies to accelerate harmonic minimization in multilevel inverters using a parallel genetic algorithm on graphical processing unit. IEEE Trans. Power Electron. 2014, 29, 5087–5090.
  22. Behera, N.; Sinha, S.; Gupta, R.; Geoncy, A.; Dimitrova, N.; Mazher, J. Analysis of Gene Expression Data by Evolutionary Clustering Algorithm. In Proceedings of the 2017 International Conference on Information Technology (ICIT), Bhubaneswar, India, 21–23 December 2017; pp. 165–169.
  23. Piroonratana, T.; Wongseree, W.; Usavanarong, T.; Assawamakin, A.; Limwongse, C.; Chaiyaratana, N. Identification of Ancestry Informative Markers from Chromosome-Wide Single Nucleotide Polymorphisms Using Symmetrical Uncertainty Ranking. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2448–2451.
  24. Nashwan, M.S.; Shahid, S. Symmetrical uncertainty and random forest for the evaluation of gridded precipitation and temperature data. Atmos. Res. 2019, 230, 104632.
  25. Altarabichi, M.G.; Nowaczyk, S.; Pashami, S.; Mashhadi, P.S. Surrogate-Assisted Genetic Algorithm for Wrapper Feature Selection. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; pp. 776–785.
  26. Almugren, N.; Alshamlan, H.M. New bio-marker gene discovery algorithms for cancer gene expression profile. IEEE Access 2019, 7, 136907–136913.
  27. Jang, H.S.; Bae, K.Y.; Park, H.S.; Sung, D.K. Solar power prediction based on satellite images and support vector machine. IEEE Trans. Sustain. Energy 2016, 7, 1255–1263.
  28. Bron, E.E.; Smits, M.; Niessen, W.J.; Klein, S. Feature selection based on the SVM weight vector for classification of dementia. IEEE J. Biomed. Health Inform. 2015, 19, 1617–1626.
  29. Insom, P.; Cao, C.; Boonsrimuang, P.; Liu, D.; Saokarn, A.; Yomwan, P.; Xu, Y. A Support Vector Machine-Based Particle Filter Method for Improved Flooding Classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1943–1947.
  30. Yu, H.; Sun, C.; Yang, X.; Zheng, S.; Zou, H. Fuzzy Support Vector Machine with Relative Density Information for Classifying Imbalanced Data. IEEE Trans. Fuzzy Syst. 2019, 27, 2353–2367.
  31. Jebur, M.N.; Pradhan, B.; Tehrany, M.S. Manifestation of lidar-derived parameters in the spatial prediction of landslides using novel ensemble evidential belief functions and support vector machine models in GIS. IEEE J. Sel. Top. Appl. Earth Remote Sens. 2015, 8, 674–690.
  32. Ranganarayanan, P.; Thanigesan, N.; Ananth, V. Identification of glucose-binding pockets in human serum albumin using support vector machine and molecular dynamics simulations. IEEE/ACM Trans. Comput. Biol. Bioinform. 2016, 13, 148–157.
  33. Babatunde, O.; Armstrong, L. A genetic algorithm-based feature selection. Int. J. Electron. Commun. Comput. Eng. 2014, 5, 2278–4209.
Figure 1. The flow chart of EMD.
Figure 2. (a) The IMF waveform of normal motor, (b) the IMF waveform of broken bearings motor, (c) the IMF waveform of a broken rotor bar motor, (d) the IMF waveform of a stator winding short-circuit motor.
Figure 3. The structure of signal analysis and feature extraction.
Figure 4. (a) Feature map of motor failure (without noise), (b) feature map of motor failure (40 dB), (c) feature map of motor failure (30 dB), (d) feature map of motor failure (20 dB).
Figure 5. The flow chart of GA.
Figure 6. The flow chart of SU.
Figure 7. The flow chart of SU-GA.
Figure 8. (a) Low-dimensional space to the high-dimensional space; (b) divide the sample, and then map it back to the low-dimensional space.
Figure 9. Maximum margin hyperplane.
Figure 10. (a) Servo motor, oscilloscope, computer, torque sensor, signal extractor, (b) broken bearings motor (aperture 1.96 mm × 0.53 mm), (c) broken rotor bar motor (two holes, ∅8 mm, 10 mm deep), (d) stator winding short-circuit motor (2 coils).
Figure 11. (a) Relationship between number of features and accuracy, (b) relationship between number of features and accuracy (40 dB), (c) relationship between number of features and accuracy (30 dB), and (d) relationship between number of features and accuracy (20 dB).
Figure 12. The convergence curve of SU-GA and GA.
Table 1. The three-phase squirrel-cage induction motor specifications.

Voltage: 220 V/380 V      Output: 1.5 kW
Current: 5.58 A/3.23 A    Efficiency: 83.5%
Speed: 1715 rpm           Poles: 4
Table 2. The identification results. Accuracy (%) is given per condition: Normal (N), Broken Bearings (BB), Broken Rotor Bar (BRB), Stator Winding Short-Circuit (SC), and Average (Avg).

SNR            Selection Method   Features   N      BB     BRB    SC     Avg
Without noise  HHT                80         79.5   99.2   92.2   62.4   79.5
               HHT-GA             19         69.2   99.9   99.9   82.5   87.8
               HHT-SU             16         72.0   98.7   98.2   77.2   86.4
               HHT-SU-GA          4          76.7   100    99.9   89.5   91.2
40 dB          HHT                80         61.8   99.7   89.0   62.5   78.1
               HHT-GA             22         69.8   100    99.9   70.1   84.6
               HHT-SU             17         73.7   99.8   99.4   69.1   85.5
               HHT-SU-GA          4          76.6   100    100    82.3   89.6
30 dB          HHT                80         38.9   91.1   49.2   43.0   55.6
               HHT-GA             30         50.7   96.9   69.6   50.6   66.8
               HHT-SU             17         41.9   99.3   63.3   58.1   65.8
               HHT-SU-GA          3          43.0   99.8   72.5   79.4   73.6
20 dB          HHT                80         42.3   87.6   46.3   32.6   52.2
               HHT-GA             32         50.8   96.2   50.8   41.7   60.2
               HHT-SU             17         54.2   97.7   54.2   40.3   61.7
               HHT-SU-GA          3          49.8   100    76.7   33.2   64.1
Table 3. Features selected by GA and SU-GA.

Selection Method   Features   Selected Features
GA                 19         F1, F2, F3, F4, F5, F6, F19, F23, F25, F29, F30, F35, F39, F41, F42, F43, F45, F61, F73
SU-GA              4          F4, F5, F41, F45
Table 4. Importance and quantity of features after feature selection (signal analysis: HHT; features sorted by importance; in the original, boldface marked the redundant features used for comparison).

SNR            Selection Method   Features   Selected Features (sorted by importance)
Without noise  GA                 19         -
               SU                 16         F41, F45, F5, F4, F76, F11, F27, F32, F65, F39, F60, F21, F50, F22, F68, F35
               SU-GA              4          F41, F45, F5, F4
40 dB          GA                 22         -
               SU                 17         F41, F45, F5, F2, F4, F29, F78, F72, F55, F46, F48, F40, F13, F23, F58, F62, F65
               SU-GA              4          F41, F45, F5, F4
30 dB          GA                 30         -
               SU                 17         F43, F42, F41, F45, F51, F3, F25, F30, F15, F56, F2, F20, F32, F61, F8, F74, F68
               SU-GA              3          F43, F45, F3
20 dB          GA                 32         -
               SU                 17         F43, F45, F42, F41, F3, F76, F62, F36, F51, F11, F19, F65, F35, F6, F5, F50, F70
               SU-GA              3          F43, F3, F62
Table 5. The comparison of the number of features and the classification accuracy. "Features" is the number of features; "Acc" is the classification accuracy (%); "Improvement" is the accuracy gain of SU-GA over no feature selection.

               Before Selection      GA                  SU                  SU-GA (Proposed)    Improvement
SNR            Features   Acc        Features   Acc      Features   Acc      Features   Acc      Acc
Without noise  80         79.5       19         87.8     16         86.4     4          91.2     +11.7
40 dB          80         78.1       22         84.6     17         85.5     4          89.6     +11.5
30 dB          80         56.1       30         66.8     17         65.8     3          73.6     +17.5
20 dB          80         52.2       32         60.2     17         61.7     3          64.1     +11.9