Article

Bearing Fault Diagnosis Based on a Novel Adaptive ADSD-gcForest Model

School of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
*
Author to whom correspondence should be addressed.
Processes 2022, 10(2), 209; https://doi.org/10.3390/pr10020209
Submission received: 3 January 2022 / Revised: 20 January 2022 / Accepted: 21 January 2022 / Published: 22 January 2022

Abstract

With the continuous improvement of industrial production requirements, bearings often operate under strong noise interference, which makes it difficult to extract fault features. Deep learning-based approaches are promising for bearing diagnosis: they can extract fault information efficiently and conduct accurate diagnosis. However, the structure of a deep learning model is often determined by trial and error, which is time-consuming and lacks theoretical support. To address these problems, an adaptive ADSD-gcForest (Adaptive Depthwise Separable Dilated convolution with multi-grained cascade forest) fault diagnosis model is proposed in this paper. Multiscale convolution combined with the Convolutional Block Attention Module (CBAM) concentrates on effectively extracting fault information under strong noise, the Meta-ACON (Meta-Activate or Not) activation function is integrated to adaptively optimize the model structure according to the characteristics of the input samples, and gcForest outputs the final diagnosis result as the classifier. Experiments compare the diagnosis of three bearing failures under various noise and load conditions. The experimental results show the effectiveness and practicability of the proposed method.

1. Introduction

With the development of the manufacturing industry, rolling bearings, as one of the core components of mechanical equipment, play an increasingly irreplaceable role. However, under strong noise and multiple loads over long periods, bearings are prone to wear or breakage. An unexpected failure, such as a crack in a bearing, may cause the breakdown of the entire machine, resulting in significant economic loss and severe safety accidents [1]. Therefore, achieving efficient and accurate bearing fault diagnosis is of great significance.
Bearing failures are generally located in the inner ring, the outer ring, or the rolling elements, and a faulty bearing usually produces periodic vibrations when the machinery is running, so analysis of the vibration signal during bearing operation is the key to diagnosing the fault [2]. Traditional fault diagnosis methods are divided mainly into linear and nonlinear methods. Linear methods mainly contain time-domain analysis, frequency-domain analysis, and time-frequency analysis [3]. Nonlinear analysis is less widely adopted in fault diagnosis than linear analysis; chaos theory, fractal dimension, and entropy theory are among the commonly applied nonlinear methods. However, with the growth of bearing fault datasets and the increasing complexity of production environments, traditional diagnosis methods that rely on manual extraction of fault signatures are no longer applicable [4]. Therefore, constructing novel fault diagnosis models based on deep learning has become a research hotspot.
Frequently used deep learning models include the Deep Autoencoder (DAE), the Deep Belief Network (DBN), the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN). Among them, an improved Stacked Denoising Autoencoder (SDAE) diagnostic method was proposed by Hou et al. [5], in which the hyperparameters of the DAE network were adaptively selected by a particle swarm algorithm to determine the structure of the SDAE network; the resulting feature representation of the fault state was input into a Softmax classifier for fault classification and recognition, and this method achieved accurate fault diagnosis under variable operating conditions. In-depth research on DAE was conducted by Shao et al. [6], in which denoising and contractive autoencoding were introduced to improve the extraction of faulty features, and locality-preserving projection was applied to fuse features and optimize feature quality. Liang et al. [7] presented a method for diagnosing rolling bearing failures that consists of three main steps: a series of DBNs with different hyperparameters were constructed and trained; an improved ensemble method was applied to acquire a weight matrix for each DBN; and the DBNs then voted together according to their respective weight matrices to obtain the final diagnosis. A DBN-based degradation assessment under accelerated life testing of bearings was adopted by Ma et al. [8]. Shao et al. proposed a DBN for the diagnosis of induction motor faults, in which vibration signals were introduced directly as input [9], and the t-SNE algorithm was adopted to visualize the learned representation. Han et al. [10] used CNN training on the feature maps of multi-wavelet coefficient branches obtained through the wavelet transform to realize intelligent diagnosis of rolling bearing composite faults. Liang et al. [11] constructed two different CNNs, one extracting time-domain features and the other time-frequency-domain features, and then fused these with the features extracted by continuous wavelet transform to diagnose rolling bearing faults. Bearing fault diagnosis based on LSTM (Long Short-Term Memory) and CNN models was established by Pan et al. [12]. A fault diagnosis method was proposed by Zhang et al. [13], in which a convolutional denoising autoencoder performed feature extraction and a CNN was introduced for pattern recognition. A stacked long short-term memory network was designed by Yu et al. [14], where 12 different bearing health conditions, covering the type and severity of bearing failure, were classified using augmented data. A convolutional bidirectional long short-term memory network was designed by Zhao et al. [15] for bearing fault diagnosis; in this method, a CNN was applied as a feature extractor on the raw signal, and the bearing faults were then classified through a bidirectional LSTM network.
Based on this brief review of existing diagnosis approaches, the challenges can be summarized as follows. First, numerous methods only conduct comparative experiments for a single type of noise, and other noise types are not considered. Second, deep learning structures are often determined by trial and error, meaning the structure is defined somewhat arbitrarily and the best-performing model is adopted only after many experiments [16]. To solve these problems, an adaptive ADSD-gcForest fault diagnosis model is proposed in this paper. Building on an existing traditional network, core fault features at different scales are effectively extracted under strong noise interference by using dilated convolutions with different dilation rates fused with CBAM. On this basis, depthwise separable convolution is incorporated into the dilated convolution mechanism to improve computational efficiency [17]. In recent years, many adaptive optimization methods have been developed for network structures, but most of these approaches require the assistance of an intelligent optimization algorithm or transfer learning [18,19]. In contrast, the network structure here can be simply optimized by the Meta-ACON activation based on the input samples, without additional complex algorithms; this not only optimizes the model structure but also helps the model better handle different sample data. Then, through the multi-grained scanning and cascade forest algorithms of the deep forest classifier, the hidden fault features in the feature vector are analyzed and extracted, and the final classification results are output. The main contributions of this article are as follows:
  • An adaptive ADSD-gcForest diagnostic model is proposed for rolling bearing fault diagnosis, enabling feature extraction under high-noise and complex working conditions. The structure of the diagnosis model achieves adaptive optimization based on the characteristics of the sample data.
  • Combining multiscale depthwise separable dilated convolution with CBAM can effectively extract fault features under strong noise interference. Without manually adjusting the original structure of the model, the Meta-ACON activation function is introduced into the convolution layers to achieve adaptive optimization of the model structure according to the fault data of different bearings.
  • The comparative experiment shows that the ADSD-gcForest model proposed in this paper has strong generalization ability and robustness with certain practical value.
The rest of this paper is organized as follows: Section 2 introduces the related theories, Section 3 describes the specific structure of the adaptive ADSD-gcForest model, Section 4 presents the experimental comparisons, and Section 5 draws the conclusion.

2. Related Works

2.1. SDP Image

Through normalization, a time-domain signal can be described in a polar coordinate system; thus, the vibration signal can be converted into SDP images [20], and the relationship between the amplitude and the frequency of the vibration signal can be described simply and directly through geometric shape. The specific mapping relationship is as follows:
$$r(i) = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}} \quad (1)$$

$$\theta(i) = \theta_l + \frac{x_{i+a} - x_{\min}}{x_{\max} - x_{\min}}\,\delta \quad (2)$$

$$\phi(i) = \theta_l - \frac{x_{i+a} - x_{\min}}{x_{\max} - x_{\min}}\,\delta \quad (3)$$
where $x_i$ is the input vibration signal, $i$ is the sequence number of the discrete sampling point in the time domain, $x_{\max}$ and $x_{\min}$ are the maximum and minimum values of the vibration signal, $x_{i+a}$ is the amplitude corresponding to the time-lag coefficient $a$, $r(i)$ is the radius in polar coordinates, $\delta$ is the magnification angle, $\theta_l$ is the angle of the $l$-th mirror symmetry plane, and $\theta(i)$ and $\phi(i)$ are the deflection angles about the mirror symmetry plane, where $\delta \le \theta_l$, $\theta_l = 360^{\circ} l/N$ ($l \in \{0, \ldots, N-1\}$) and $N$ is the number of symmetry planes. SDP images of different fault characteristics present various geometric characteristics, manifested mainly in the curvature, thickness, geometric center and concentration area of the image arms [21]. The SDP images of different fault characteristics are shown in Figure 1, where IR, OR and B, respectively, represent the inner ring, outer ring and rolling element, while 007, 014 and 021 indicate fault diameters of 0.1778 mm, 0.3556 mm and 0.5334 mm, respectively.
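For illustration, a minimal NumPy sketch of Equations (1)–(3) is given below; the lag coefficient, magnification angle and number of symmetry planes are illustrative choices, not values specified in this paper.

```python
import numpy as np

def sdp_transform(x, a=5, delta=36.0, n_planes=6):
    """Map a 1-D vibration signal to SDP polar coordinates per Eqs. (1)-(3).
    Returns (radius, angle-in-degrees) pairs for every mirror plane."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    norm = (x - x_min) / (x_max - x_min)          # normalized amplitude
    r = norm[:-a]                                 # r(i), Eq. (1)
    lag = norm[a:]                                # normalized x_{i+a}
    arms = []
    for l in range(n_planes):
        theta_l = 360.0 * l / n_planes            # angle of the l-th mirror plane
        arms.append((r, theta_l + lag * delta))   # theta(i), Eq. (2)
        arms.append((r, theta_l - lag * delta))   # phi(i),  Eq. (3)
    return arms
```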

2.2. Dilated Convolution

Unlike ordinary convolution, dilated convolution increases the receptive field of the output unit without increasing the number of parameters, mainly by introducing a dilation rate parameter. Specifically, it relies on interval sampling: when the convolution kernel processes the data, the spacing between sampled values is defined by the dilation rate. Thus, the receptive field can be increased without reducing the image resolution. For convolution kernels of the same size, the larger the dilation rate, the greater the receptive field of the kernel [22]. The calculation formula for the receptive field of the dilated convolution is as follows:
$$r_n = r_{n-1} + k - 1 \quad (4)$$

where $r_n$ is the receptive field of the current layer, $r_{n-1}$ is the receptive field of the previous layer and $k$ is the effective size of the convolution kernel after dilation. The sampling process is shown in Figure 2.
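As a quick check of Equation (4), the short helper below accumulates receptive fields across stride-1 layers; a 3 × 3 kernel with dilation $d$ covers $d(3-1)+1$ input positions, which is the effective $k$ in the formula (the helper itself is illustrative, not code from the paper).

```python
def effective_kernel(k, d):
    # a k-tap kernel with dilation d spans d*(k-1)+1 input positions
    return d * (k - 1) + 1

def receptive_field(layers):
    """Accumulate Eq. (4), r_n = r_{n-1} + k - 1, over stride-1 layers,
    where each layer is a (kernel_size, dilation) pair."""
    r = 1
    for k, d in layers:
        r += effective_kernel(k, d) - 1
    return r

# the three parallel 3x3 branches with dilation rates 1, 2, 3 used in Section 3.2
print([receptive_field([(3, d)]) for d in (1, 2, 3)])   # -> [3, 5, 7]
```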

2.3. Depth-Separable Convolution

A standard convolution is decomposed into a depthwise convolution and a point-wise convolution by depth-separable convolution [23,24,25]. First, each channel of the input sample is convolved separately by the depthwise convolution, so the number of feature maps generated equals the number of input channels; the feature maps are then recombined with weights assigned by the point-wise (1 × 1) convolution. In this way, the computation and the number of parameters of the model are reduced in both time and space. For example, let the dimension of the input samples be $D_{in1} \times D_{in2} \times C_{in}$, the output dimension be $D_{out1} \times D_{out2} \times C_{out}$ and the convolution kernel size be $D_{k1} \times D_{k2}$, where $C_{in}$ and $C_{out}$ are the numbers of channels. The total cost of ordinary convolution and of depth-separable convolution is given in Formulas (5) and (6), respectively, where $C_{conv}$ represents the total for ordinary convolution and $C_{separable\text{-}conv}$ that for depth-separable convolution. The specific schematic diagram is shown in Figure 3, where channel 1, channel 2 and channel 3, respectively, indicate the three channels of the input image.
$$C_{conv} = D_{out1} \times D_{out2} \times D_{k1} \times D_{k2} \times C_{out} \times C_{in} \quad (5)$$

$$C_{separable\text{-}conv} = D_{out1} \times D_{out2} \times D_{k1} \times D_{k2} \times C_{in} + D_{out1} \times D_{out2} \times C_{out} \times C_{in} \quad (6)$$
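A minimal PyTorch sketch of such a layer is shown below; as a sanity check, instantiating it with the shapes of the first row of Table 1 (28 × 28 × 1 → 28 × 28 × 64) reproduces the 137 trainable parameters listed there (the bias placement is our assumption).

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) conv
    followed by a 1x1 (pointwise) conv that recombines the channels."""
    def __init__(self, c_in, c_out, k=3, dilation=1):
        super().__init__()
        pad = dilation * (k - 1) // 2              # keep the spatial size
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=pad,
                                   dilation=dilation, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

blk = SeparableConv2d(1, 64)                       # first layer in Table 1
print(sum(p.numel() for p in blk.parameters()))    # 9 + 64 + 64 = 137
```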

2.4. CBAM

The attention mechanism is derived from the human visual system and is widely used in image processing. CBAM has been widely used in target detection by integrating a spatial attention mechanism with a channel attention mechanism [26,27,28]. First, the channel attention mechanism selects which features are the key features; the spatial attention mechanism then learns where those key features are located, strengthening the extraction of the core features of the input sample. In this way, the model achieves adaptive refinement of the core features in the images. The specific structure is shown in Figure 4, where Avgpooling and Maxpooling, respectively, represent average pooling and maximum pooling, while shared FC denotes the shared fully connected layer.
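The structure in Figure 4 can be sketched in a few lines of PyTorch (a minimal rendering of the standard CBAM design; the reduction ratio r = 16 and the 7 × 7 spatial kernel are the usual defaults, not values stated in this paper):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention."""
    def __init__(self, channels, r=16, spatial_kernel=7):
        super().__init__()
        hidden = max(channels // r, 1)
        self.mlp = nn.Sequential(                  # shared FC for avg/max pooling
            nn.Conv2d(channels, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        # channel attention: which feature maps are the key features
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # spatial attention: where the key features are located
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```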

2.5. gcForest

Unlike the traditional Softmax classifier, gcForest learns hidden features of the input feature vector through the superposition of multi-layer random forests and then outputs the final classification result [29,30]. It has been reported that the accuracy of the deep forest classifier is about 1–4% higher than that of a Softmax classifier. The deep forest classifier consists mainly of two parts: multi-grained scanning and the cascade forest. The feature vector is sampled by a sliding window to form new feature vectors, which are input into the cascade forest; after passing through the multi-layer random forests, the output class-probability distribution vector is taken as the final classification result. The specific structure of the deep forest classifier is shown in Figure 5, where K is the dimension of the input vector, n is the dimension of the sliding window, m is the number of classes and P is the dimension of the output vector of the scanning stage. Furthermore, in Figure 5, the scanned vector is input into the cascade forest; the blue and green forests, respectively, represent random forests and completely random forests, each layer containing two of each. After training, each forest outputs a class-probability vector of dimension C. The output vectors of the four forests are stacked with the output vector of multi-grained scanning, yielding a vector of dimension (C × 4 + P); after multi-layer learning, the output vectors of the last layer are averaged to obtain the final probability vector of dimension C, and the class with the maximum probability is taken as the classification result.
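The dimension bookkeeping above can be made concrete with a small sketch; the sizes passed in below are hypothetical, and the assumption that the scanned representation concatenates one m-class vector per window per forest follows the usual gcForest description rather than anything stated here.

```python
def mgs_output_dim(K, n, m, n_forests=2, stride=1):
    """Multi-grained scanning: a window of size n slides over a K-dim vector;
    every window yields an m-class probability vector from each forest, and
    the concatenation of all of these is the P-dim scanned representation."""
    n_windows = (K - n) // stride + 1
    return n_windows * m * n_forests

K, n, m = 960, 240, 10          # hypothetical sizes; C = m per-forest output
P = mgs_output_dim(K, n, m)
print(P, m * 4 + P)             # features entering the next cascade level: C*4 + P
```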

3. Method

The ADSD-gcForest model will be described in detail in this section. The detailed implementations of the method are described in the following three steps.
Step 1: A sliding window is used to sample the vibration signals, noise of different intensities is added, the signals are converted into SDP images, and the sample data are divided into a training set and a test set.
Step 2: The training set is entered into the adaptive ADSD-gcForest model for training, and the Meta-ACON activation function is applied to adaptively adjust the network structure according to the different types of sample data to obtain the current optimal model structure, after which the trained model is saved.
Step 3: The trained model is used to directly extract fault features from new images, which results in the final diagnosis. The overall flowchart of the fault diagnosis is drawn in Figure 6.

3.1. Meta-ACON

To achieve effective fault diagnosis on different bearing fault data, the existing structure may need to be adjusted continuously to reach higher accuracy. To solve this problem, a relatively simple way to adaptively adjust the network model is adopted in this paper: by setting a single switch factor β, the Meta-ACON activation function can select whether to activate the neurons in a layer according to the sample data (activation yields a nonlinear output, while non-activation yields a linear output). The design of the Meta-ACON activation function is derived from the smooth maximum function, whose formula is as follows:
$$S_\beta(x_1, \ldots, x_n) = \frac{\sum_{i=1}^{n} x_i e^{\beta x_i}}{\sum_{i=1}^{n} e^{\beta x_i}} \quad (7)$$
where $x_i$ represents the input signal sequence and $\beta$ is the switch factor: as $\beta \to \infty$, $S_\beta$ approaches the maximum, and as $\beta \to 0$, $S_\beta$ approaches the arithmetic mean. Many common activation functions have the form $\max(\eta_a(x), \eta_b(x))$, where $\eta_a(x)$ and $\eta_b(x)$ are two freely configurable functions; for example, in the ReLU function, $\eta_a(x) = x$ and $\eta_b(x) = 0$. To simplify the design, only two variables are considered here, and the sigmoid function is denoted as $\sigma$. The approximate relationship is then represented as:
$$S_\beta(\eta_a(x), \eta_b(x)) = \eta_a(x)\,\frac{e^{\beta \eta_a(x)}}{e^{\beta \eta_a(x)} + e^{\beta \eta_b(x)}} + \eta_b(x)\,\frac{e^{\beta \eta_b(x)}}{e^{\beta \eta_a(x)} + e^{\beta \eta_b(x)}} = \eta_a(x)\,\frac{1}{1 + e^{-\beta(\eta_a(x) - \eta_b(x))}} + \eta_b(x)\,\frac{1}{1 + e^{-\beta(\eta_b(x) - \eta_a(x))}} = (\eta_a(x) - \eta_b(x))\,\sigma[\beta(\eta_a(x) - \eta_b(x))] + \eta_b(x) \quad (8)$$
Furthermore, let $\eta_a(x) = p_1 x$ and $\eta_b(x) = p_2 x$ with $p_1 \neq p_2$. The Meta-ACON activation function is then:
$$S_\beta = (p_1 - p_2)\,x\,\sigma[\beta(p_1 - p_2)\,x] + p_2 x \quad (9)$$
Here, $p_1$ and $p_2$ are two trainable parameters, so the activation of the neurons in a layer can be controlled simply by means of the switch factor $\beta$, where $\beta = \sigma\left(W_1 W_2 \sum_{h=1}^{H}\sum_{w=1}^{W} x_{c,h,w}\right)$; the input sample data are represented as $x_{c,h,w}$, with $c$, $h$ and $w$, respectively, indexing the channels, height and width of the input. $W_1$ is a 1 × 1 convolution whose number of input channels equals the number of channels of the sample and whose number of output channels is that value divided by $r$ ($r$ is a constant, generally taken as 16). Similarly, $W_2$ is also a 1 × 1 convolution, except that its numbers of input and output channels are the reverse of those of $W_1$. Since the value of $\beta$ is determined directly by the structural characteristics of the sample data, different sample data produce different $\beta$ values; therefore, over many training iterations, as the Meta-ACON parameters are continuously updated, the structure of the model is continuously optimized. The specific calculation process is shown in Figure 7.
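A minimal PyTorch sketch of Equation (9) with the sample-dependent switch factor is given below; the use of mean pooling in place of the raw sum over height and width is an implementation choice (the two differ only by a constant factor), and the parameter initialization is illustrative.

```python
import torch
import torch.nn as nn

class MetaACON(nn.Module):
    """Meta-ACON: S_beta = (p1 - p2)*x*sigmoid(beta*(p1 - p2)*x) + p2*x,
    with beta generated from the input itself through two 1x1 convolutions."""
    def __init__(self, channels, r=16):
        super().__init__()
        hidden = max(channels // r, 1)
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.w1 = nn.Conv2d(channels, hidden, kernel_size=1)   # C -> C/r
        self.w2 = nn.Conv2d(hidden, channels, kernel_size=1)   # C/r -> C

    def forward(self, x):
        # pool over height and width, then map through W1 and W2 to get beta
        beta = torch.sigmoid(self.w2(self.w1(x.mean(dim=(2, 3), keepdim=True))))
        dpx = (self.p1 - self.p2) * x
        return dpx * torch.sigmoid(beta * dpx) + self.p2 * x
```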

3.2. ADSD-gcForest

Compared with time-domain signals, SDP images represent different fault types more intuitively and simply by presenting different geometric features. Therefore, the key to accurate fault diagnosis is to design a model that can effectively extract geometric features from the images. The Visual Geometry Group 16 network (VGG16) is one of the commonly used models in image processing: feature extraction is realized by stacking multiple convolution layers, and network parameters are reduced by pooling layers. The model in this paper takes VGG16 as the basic framework. However, the structure of the VGG16 network is relatively simple. First, although the network is deep, ordinary convolution is used throughout the convolution layers, which cannot extract sample feature information at multiple scales and thus limits the feature extraction ability of the network under strong noise. Second, most of the activation functions in the convolution layers simply make the input signal nonlinear, so the network lacks good transfer learning ability; faced with different sample data, its performance becomes unstable. Moreover, most diagnostic models use Softmax as the final classifier, but Softmax is not an advanced classifier and cannot learn feature information that has not yet been extracted, which reduces the final accuracy. In response to these problems, the ADSD-gcForest model proposed in this paper makes the following improvements.
Because of the large number of input sample types, and to increase the feature extraction range of the model and enrich the feature information, the receptive field is expanded using dilated convolutions and a residual structure combining three branches: three dilated convolutions with dilation rates of 1, 2 and 3 and kernel size 3 × 3 are connected through the residual structure, realizing multiscale feature extraction. Each dilated convolution is followed by CBAM: the channel attention mechanism measures the importance of the different channels of the feature map at each scale, determining the key content under each channel, and the spatial attention mechanism then locates these key features and extracts them from the feature map, yielding key features at different scales. The three resulting feature maps are integrated through the residual structure and input into the next layer. Because the network uses many dilated convolutions and attention mechanisms, training time could grow; since depth-separable convolution can process the different channels of the input image simultaneously, the depth-separable mechanism is introduced into the dilated convolution layers, after which the weight of each feature map is determined through point-wise convolution and the feature maps are integrated according to these weights, improving computational efficiency. To allow the network model to adapt to sample data of different fault types, the original ReLU activation function in the convolutional layers is replaced with the Meta-ACON activation function, which, based on the characteristics of the input image, sets the switch factor β that determines after multiple training passes whether to activate the neurons in each layer, so that a flexible and efficient network structure is adopted for different input samples. Finally, Softmax is replaced by gcForest, which learns the remaining hidden fault characteristics and gives the final diagnosis. The structure of the model is shown in Figure 8, where SD convolution stands for dilated convolution with the depth-separable mechanism. Detailed parameters of the optimized network are shown in Table 1.
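Reusing the SeparableConv2d, CBAM and MetaACON sketches from Sections 2 and 3.1, one multiscale stage of the network might look as follows; the dilation rates and the element-wise merge follow Table 1's Add layers, while the exact ordering of CBAM and Meta-ACON within a branch is our assumption.

```python
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Three parallel depthwise-separable dilated convolutions (dilation 1, 2, 3),
    each refined by CBAM and activated by Meta-ACON, merged by element-wise
    addition as in the Add layers of Table 1."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(SeparableConv2d(c_in, c_out, k=3, dilation=d),
                          CBAM(c_out),
                          MetaACON(c_out))
            for d in (1, 2, 3)])

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)
```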

4. Experiments

4.1. Introduction of Datasets

The datasets used in the experiments were the Case Western Reserve University (CWRU) bearing dataset and the University of Ottawa bearing dataset. Two different bearings are contained in the CWRU dataset: the drive end bearing SKF6205 and the fan end bearing SKF6203. The drive end bearing was sampled at two frequencies, 12 kHz and 48 kHz, while the fan end bearing was sampled only at 12 kHz. Each bearing dataset contains ten states: the normal state plus inner ring, outer ring and rolling element failures, each fault at three different levels represented by the fault diameter. Four different load conditions were applied when measuring the bearing data. A total of 8 normal samples, 53 outer ring damage samples, 23 inner ring damage samples and 11 rolling element damage samples were obtained. The University of Ottawa dataset consists of bearing vibration signals for different health conditions measured under time-varying speed conditions and contains 36 datasets. The bearing conditions are normal, inner ring failure and outer ring failure, and the speed conditions are increasing speed, decreasing speed, increasing then decreasing speed and decreasing then increasing speed. Each dataset contains two channels: channel 1 holds the vibration data measured by an accelerometer and channel 2 the rotational speed measured by an encoder; the sampling frequency is 200 kHz and the sampling duration is 10 s.
The CWRU drive end and fan end bearing data used in this paper are those sampled at 12 kHz, together with part of channel 1 of the University of Ottawa dataset; the samples used were randomly selected from the datasets. B, IR and OR indicate that the fault is located in the rolling element, inner ring and outer ring of the bearing, respectively; 007, 014 and 021, respectively, indicate fault diameters of 0.1778 mm, 0.3556 mm and 0.5334 mm; and the number at the end indicates the load. For example, “−1” means that the load is 1 horsepower. A total of 1000 samples were taken for each fault category, and the ratio of training set to test set was 7:3. The details are shown in Table 2.
Six noise conditions of different intensities were added to the sample dataset, namely Noise 1 through Noise 6. Each noise condition is a mixture of three noise types: Gaussian noise with signal-to-noise ratios of −4, −2, 0, 2, 4 and 6; salt-and-pepper noise with densities of 0.3, 0.25, 0.2, 0.15, 0.1 and 0.05; and Cauchy noise with location parameter 0 and scale parameter 1. gcForest was set as the classifier in all comparison methods; SigDSD-gcForest means that the sigmoid function is the activation function of the convolutional layers, and similarly, the activation functions of the convolutional layers in ReluDSD-gcForest and PReluDSD-gcForest are ReLU and PReLU, respectively. The parameter settings of the ADSD-gcForest model were as follows: the learning rate was 0.00005, the batch size was 580, the number of iterations was 350, Adam was used as the optimization algorithm, the sliding window dimension used in multi-grained scanning (MGS) was 240, the number of trees in each MGS random forest was 35 and the number of trees in a single random forest of the cascade forest was 150. The diagnostic effect is analyzed by comparing the accuracy, F1 value and Area Under Curve (AUC) value of the different diagnostic models after training. Accuracy is expressed as (TP + TN)/(TP + TN + FP + FN), and the F1 value is calculated as 2TP/(2TP + FP + FN), where TP refers to True Positives, TN to True Negatives, FN to False Negatives and FP to False Positives. AUC is defined as the area under the ROC curve; generally, the higher the AUC value, the better the classification effect of the model.
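For concreteness, a NumPy sketch of one such noise mixture is shown below; how the three noise types are combined and scaled is our assumption, since the paper only lists their parameters.

```python
import numpy as np

def add_mixed_noise(x, snr_db, sp_density, rng=None):
    """Corrupt a 1-D signal with Gaussian noise at a target SNR, salt-and-pepper
    noise at a given density, and standard Cauchy noise (location 0, scale 1)."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    # Gaussian noise scaled to the requested signal-to-noise ratio
    noise_power = np.mean(x ** 2) / 10 ** (snr_db / 10)
    y = x + rng.normal(0.0, np.sqrt(noise_power), x.shape)
    # salt-and-pepper: force a fraction of samples to the signal extremes
    mask = rng.random(x.shape) < sp_density
    y[mask] = rng.choice([x.min(), x.max()], size=int(mask.sum()))
    # Cauchy noise with location parameter 0 and scale parameter 1
    return y + rng.standard_cauchy(x.shape)

# e.g. the harshest condition, Noise 1: SNR -4 with salt-and-pepper density 0.3
# noisy = add_mixed_noise(signal, snr_db=-4, sp_density=0.3)
```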

4.2. Case Study 1: Performance of Drive End Bearing Fault Diagnosis

As can be seen from Figure 9 and Figure 10, when the noise environment is Noise 1, three categories of samples have low recognition rates after training with the ADSD-gcForest model, and there is also a small amount of aliasing in the t-SNE image; in the other noise environments, only one or two fault categories have a low recognition rate. Table 3 shows that, compared with the other methods, the ADSD-gcForest model achieves the highest accuracy and F1 values under the various noises and working conditions. The VGG16-gcForest model obtains the lowest accuracy and F1 values, about 26–35% lower than those of the ADSD-gcForest model, while the accuracy and F1 values of the Res50-gcForest model are about 18% higher than those of VGG16-gcForest. Since the ReLU function solves the network convergence problem better than the sigmoid function, the accuracy and F1 values of the ReluDSD-gcForest model are about 0.6–0.7% higher than those of the SigDSD-gcForest model; PReLU updates its weights according to the input data, giving the network a certain adaptive optimization capability, so the accuracy and F1 values of PReluDSD-gcForest are about 1.3% higher than those of ReluDSD-gcForest, but still lower than those of the ADSD-gcForest model. Figure 11 compares the AUC values of the different methods; the AUC values of ADSD-gcForest under the different noises are the highest, all above 92%, indicating a good fault diagnosis effect. From the experimental results presented above, the ADSD-gcForest model can accurately diagnose drive end bearing failures under different working conditions and strong noise interference.

4.3. Case Study 2: Performance of Fan End Bearing Fault Diagnosis

As can be seen from Figure 12 and Figure 13, only when the noise environment is Noise 1 can a few fault categories not be effectively identified; in the other noise environments, all fault categories are accurately identified. Table 4 shows that the accuracy and F1 values of the VGG16-gcForest and Res50-gcForest models have dropped by approximately 1.5–1.6% compared with the drive end values, the overall accuracy and F1 values of VGG16-gcForest lying between 61% and 75%. The accuracy and F1 values of the SigDSD-gcForest, ReluDSD-gcForest and PReluDSD-gcForest models have also decreased; the most obvious decrease is for SigDSD-gcForest, at 0.3–0.4%, while the accuracy and F1 values of PReluDSD-gcForest drop by about 1.3–1.5%. The accuracy and F1 values of the ADSD-gcForest model remain the highest and are similar to those in case study 1. Figure 14 depicts the AUC values obtained by the different diagnostic methods under the different noises; the AUC values of the ADSD-gcForest model are still the highest and close to those obtained in case study 1. These results show that the ADSD-gcForest model proposed in this paper can achieve effective fault diagnosis for different bearings under multiple working conditions.

4.4. Case Study 3: Performance of the Ottawa Bearing Dataset

To further test the generalization and robustness of the ADSD-gcForest model, case study 3 focused on the University of Ottawa dataset, which was divided into six datasets; the specific sample types are shown in Table 5. The datasets cover three bearing conditions, i.e., normal (H), inner race fault (I) and outer race fault (O), and four speed transformation conditions, i.e., speeding up (A), slowing down (B), speeding up then slowing down (C) and slowing down then speeding up (D). The noise settings used in case study 3 are the same as in case study 1. The training parameters of the ADSD-gcForest model were as follows: the learning rate was 0.00005, the batch size was 550, the number of iterations was 350, Adam was used as the optimization algorithm, the sliding window dimension used in MGS was 240, the number of trees in each MGS random forest was 35 and the number of trees in a single random forest of the cascade forest was 150.
As Figure 15 shows, compared with case studies 1 and 2, the discrimination of some fault categories is lower under Noise 1 and Noise 2, but in the other noise environments the fault categories are accurately classified. Figures 16 and 17 show that the training accuracy of the ADSD-gcForest model is the highest and relatively stable, with small fluctuations, consistent with the values in Table 6. Table 6 also shows that the accuracy and F1 values of the VGG16-gcForest and Res50-gcForest models are significantly lower than in case studies 1 and 2; in Figures 16 and 17, the accuracy of these two models also fluctuates significantly, while the accuracy values of the other three models have decreased as well but fluctuate only slightly. Figure 18 reflects the AUC values of the different diagnostic models: the AUC values of the ADSD-gcForest model are essentially similar to the first two cases, while those of the other diagnostic models have decreased. The comparative experiments on three different bearings show that, for bearing data under different noise and working conditions, the ADSD-gcForest model can, on the one hand, achieve effective fault feature extraction and, on the other hand, complete the adaptive optimization of the model structure simply and efficiently through the Meta-ACON activation function, realizing more accurate fault diagnosis.

5. Conclusions

This paper proposes an adaptive ADSD-gcForest model. The model uses the VGG network as the basic framework: multiscale features of the input samples are extracted through depth-separable dilated convolutions, CBAM is combined at the different scales to focus on the core features, the Meta-ACON activation function is integrated into all convolution layers so that the model can be optimized adaptively according to different input data, and gcForest serves as the final classifier. In the experimental part of this paper, datasets from Case Western Reserve University and the University of Ottawa, covering three bearings, are used, and the results show that faults of different types of bearings under strong noise and multiple load conditions can be effectively diagnosed by the ADSD-gcForest model. This shows that the proposed model has good robustness. The proposed method also improves the transferability of the model, simplifies the design process of the diagnostic model and effectively avoids the problem of repeatedly modifying the model structure.
In modern industrial production, multiple bearings are often required to work together; thus, the effective fault diagnosis of multiple bearings is a hot research topic. The ADSD-gcForest model proposed in this paper can simply optimize the model structure according to different bearing data with the help of the Meta-ACON activation function and has a certain industrial application value, but the addition of the Meta-ACON activation function also increases the number of parameters of the model, leading to a longer training time. Therefore, how to reduce the training parameters of the Meta-ACON activation function while maintaining high accuracy will be the focus of future research.

Author Contributions

Conceptualization: S.Z.; Methodology: S.Z. and Z.W.; Formal analysis and investigation: S.Z.; Writing—original draft preparation: S.Z.; Writing—review and editing: D.G. and S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The bearing fault data used can be found at the CWRU Bearing Data Center and in the University of Ottawa datasets, available online: https://csegroups.case.edu/bearingdatacenter/pages/welcome-case-western-reserve-university-bearing-data-center-website (accessed on 18 October 2021) and https://data.mendeley.com/datasets/v43hmbwxpm/1 (accessed on 20 October 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fan, J.; Qi, Y.; Liu, L.; Gao, X.; Li, Y. Application of an information fusion scheme for rolling element bearing fault diagnosis. Meas. Sci. Technol. 2021, 32, 075013. [Google Scholar] [CrossRef]
  2. Niu, G.; Wang, X.; Golda, M.; Mastro, S.; Zhang, B. An optimized adaptive PReLU-DBN for rolling element bearing fault diagnosis. Neurocomputing 2021, 445, 26–34. [Google Scholar] [CrossRef]
  3. Jie, D.; Zheng, G.; Zhang, Y.; Ding, X.; Wang, L. Spectral kurtosis based on evolutionary digital filter in the application of rolling element bearing fault diagnosis. Int. J. Hydromechatronics 2021, 4, 27–42. [Google Scholar] [CrossRef]
  4. Zhao, X.; Qin, Y.; Fu, H.; Jia, L.; Zhang, X. Blind source extraction based on EMD and temporal correlation for rolling element bearing fault diagnosis. Smart Resilient Transp. 2021, 3, 52–65. [Google Scholar] [CrossRef]
  5. Hou, W.; Ye, M.; Li, W. Rolling bearing fault classification based on improved stack noise reduction self-encoding. Chin. J. Mech. Eng. 2018, 54, 87–96. [Google Scholar] [CrossRef]
  6. Shao, H.; Jiang, H.; Zhang, X.; Niu, M. Rolling bearing fault diagnosis using an optimization deep belief network. Meas. Sci. Technol. 2015, 26, 115002. [Google Scholar] [CrossRef]
  7. Liang, T.; Wu, S.; Duan, W.; Zhang, R. Bearing fault diagnosis based on improved ensemble learning and deep belief network. J. Phys. Conf. Ser. 2018, 1074, 012154. [Google Scholar] [CrossRef]
  8. Ma, M.; Chen, X.; Wang, S.; Liu, Y.; Li, W. Bearing degradation assessment based on weibull distribution and deep belief network. In Proceedings of the IEEE International Symposium on Flexible Automation, Cleveland, OH, USA, 1–3 August 2016; pp. 382–385. [Google Scholar]
  9. Shao, S.; Sun, W.; Wang, P.; Gao, R.X.; Yan, R. Learning features from vibration signals for induction motor fault diagnosis. In Proceedings of the IEEE International Symposium on Flexible Automation, Cleveland, OH, USA, 1–3 August 2016; pp. 71–76. [Google Scholar]
  10. Han, T.; Yuan, J.H.; Tang, J.; An, L.Z. Intelligent composite fault diagnosis method of rolling bearing based on MWT and CNN. Mech. Transm. 2016, 40, 139–143. [Google Scholar]
  11. Liang, M.; Cao, P.; Tang, J. Rolling bearing fault diagnosis based on feature fusion with parallel convolutional neural network. Int. J. Adv. Manuf. Technol. 2020, 112, 819–831. [Google Scholar] [CrossRef]
  12. Pan, H.; He, X.; Tang, S.; Meng, F. An improved bearing fault diagnosis method using one-dimensional CNN and LSTM. J. Mech. Eng. 2018, 64, 443–452. [Google Scholar]
  13. Zhang, L.; Jing, L.; Xu, W.; Tan, J. Rolling bearing fault diagnosis based on convolutional noise reduction autoencoder and CNN. Modul. Mach. Tool Autom. Manuf. Technol. 2019, 6, 58–62. [Google Scholar]
  14. Yu, L.; Qu, J.; Gao, F.; Tian, Y. A novel hierarchical algorithm for bearing fault diagnosis based on stacked LSTM. Shock. Vib. 2019, 2019, 2756284. [Google Scholar] [CrossRef] [PubMed]
  15. Zhao, R.; Yan, R.; Wang, J.; Mao, K. Learning to monitor machine health with convolutional bi-directional LSTM networks. Sensors 2017, 17, 273. [Google Scholar] [CrossRef] [PubMed]
  16. Yan, X.; Xu, Y.; Jia, M. Intelligent Fault Diagnosis of Rolling-Element Bearings Using a Self-Adaptive Hierarchical Multiscale Fuzzy Entropy. Entropy 2021, 23, 1128. [Google Scholar] [CrossRef] [PubMed]
  17. Yong, Z.; Zhang, X.; Da, N. Research on 3D Object Detection Method Based on Convolutional Attention Mechanism. J. Phys. Conf. Ser. 2021, 1848, 012097. [Google Scholar] [CrossRef]
  18. Cao, Q.; Yu, L.; Wang, Z.; Zhan, S.; Quan, H.; Yu, Y.; Khan, Z.; Koubaa, A. Wild Animal Information Collection Based on Depthwise Separable Convolution in Software Defined IoT Networks. Electronics 2021, 10, 2091. [Google Scholar] [CrossRef]
  19. Ma, N.; Zhang, X.; Liu, M.; Sun, J. Activate or Not: Learning Customized Activation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  20. Zhang, N.; Cui, F.; Jiang, B.; He, X. Rotating machinery gearbox fault diagnosis method integrating SDP and CNN. Ind. Control. Comput. 2021, 34, 89–91. [Google Scholar]
  21. Zhao, L.; Xu, L.; Liu, Y.; Liu, J.; Huang, X. Transformer mechanical fault diagnosis method based on point symmetry transformation and image matching. Trans. Chin. Soc. Electr. Eng. 2021, 36, 3614–3626. [Google Scholar]
  22. Yin, Q.; Yang, W.; Ran, M.; Wang, S. FD-SSD: An improved SSD object detection algorithm based on feature fusion and dilated convolution. Signal Processing Image Commun. 2021, 98, 116402. [Google Scholar] [CrossRef]
  23. Yuhui, Z.; Mengyao, C.; Yuefen, C.; Zhaoqian, L.; Yao, L.; Kedi, L. An Automatic Recognition Method of Fruits and Vegetables Based on Depthwise Separable Convolution Neural Network. J. Phys. Conf. Ser. 2021, 1871, 012075. [Google Scholar] [CrossRef]
  24. Teng, Y.; Gao, P. Generative Robotic Grasping Using Depthwise Separable Convolution. Comput. Electr. Eng. 2021, 94, 107318. [Google Scholar] [CrossRef]
  25. Liu, T.; Pang, B.; Zhang, L.; Yang, W.; Sun, X. Sea Surface Object Detection Algorithm Based on YOLO v4 Fused with Reverse Depthwise Separable Convolution (RDSC) for USV. J. Mar. Sci. Eng. 2021, 9, 753. [Google Scholar] [CrossRef]
  26. Chen, Y.; Zhang, X.; Chen, W.; Li, Y.; Wang, J. Research on Recognition of Fly Species Based on Improved RetinaNet and CBAM. IEEE Access 2020, 8, 102907–102919. [Google Scholar] [CrossRef]
  27. Canayaz, M. C+EffxNet: A novel hybrid approach for COVID-19 diagnosis on CT images based on CBAM and EfficientNet. Chaos Solitons Fractals 2021, 151, 111310. [Google Scholar] [CrossRef] [PubMed]
  28. Niu, C.; Nan, F.; Wang, X. A super resolution frontal face generation model based on 3DDFA and CBAM. Displays 2021, 69, 102043. [Google Scholar] [CrossRef]
  29. Sun, Z.; Li, M.; Zhang, J.; Hu, B.; Qi, G.; Zhu, Y. Transient Voltage Stability Assessment Method based on gcForest. J. Phys. Conf. Ser. 2021, 1914, 012025. [Google Scholar] [CrossRef]
  30. Liu, H.; Zhang, N.; Jin, S.; Xu, D.; Gao, W. Small sample color fundus image quality assessment based on gcforest. Multimed. Tools Appl. 2020, 80, 17441–17459. [Google Scholar] [CrossRef]
Figure 1. SDP images. (a) Drive end bearing, (b) Fan end bearing.
Figure 2. Dilated convolution.
Figure 3. Depth-separable convolution.
Figure 4. CBAM.
Figure 5. gcForest.
Figure 6. The overall flowchart of the fault diagnosis.
Figure 7. The specific calculation process of Meta-ACON.
Figure 8. The model structure of ADSD-gcForest.
Figure 9. Confusion matrix obtained by ADSD-gcForest model training (drive end). (a) Noise 1, (b) Noise 2, (c) Noise 3, (d) Noise 4, (e) Noise 5, (f) Noise 6.
Figure 10. T-SNE images obtained by ADSD-gcForest model training (drive end). (a) Noise 1, (b) Noise 2, (c) Noise 3, (d) Noise 4, (e) Noise 5, (f) Noise 6.
Figure 11. Comparison figures of the AUC of the drive end data under Noise 1, 2, 3, 4, 5 and 6.
Figure 12. Confusion matrix obtained by ADSD-gcForest model training (fan end). (a) Noise 1, (b) Noise 2, (c) Noise 3, (d) Noise 4, (e) Noise 5, (f) Noise 6.
Figure 13. T-SNE images obtained by ADSD-gcForest model training (fan end). (a) Noise 1, (b) Noise 2, (c) Noise 3, (d) Noise 4, (e) Noise 5, (f) Noise 6.
Figure 14. Comparison figures of the AUC of the fan end data under Noise 1, 2, 3, 4, 5 and 6.
Figure 15. T-SNE images obtained by ADSD-gcForest model training (Ottawa bearing). (a) Noise 1, (b) Noise 2, (c) Noise 3, (d) Noise 4, (e) Noise 5, (f) Noise 6.
Figure 16. Box plots of accuracy values under different noise conditions. (a) Noise 1, (b) Noise 2.
Figure 17. Box plots of accuracy values under different noise conditions. (a) Noise 3, (b) Noise 4, (c) Noise 5, (d) Noise 6.
Figure 18. Comparison figures of the AUC of the Ottawa dataset under Noise 1, 2, 3, 4, 5 and 6.
Table 1. Detailed parameters of the optimized network.

Layers | Filters | Kernel_Size/Dilation Rate | Trainable Parameters | Input_Shape | Output_Shape
separable_conv_1 | 64 | 3/1 | 137 | 28 × 28 × 1 | 28 × 28 × 64
CBAM_1 | | | 677 | 28 × 28 × 64 | 28 × 28 × 64
separable_conv_2 | 64 | 3/2 | 137 | 28 × 28 × 1 | 28 × 28 × 64
CBAM_2 | | | 677 | 28 × 28 × 64 | 28 × 28 × 64
separable_conv_3 | 64 | 3/3 | 137 | 28 × 28 × 1 | 28 × 28 × 64
CBAM_3 | | | 677 | 28 × 28 × 64 | 28 × 28 × 64
Add_1 | | | 0 | 28 × 28 × 64, 28 × 28 × 64, 28 × 28 × 64 | 28 × 28 × 64
separable_conv_4 | 128 | 3/1 | 8896 | 28 × 28 × 64 | 28 × 28 × 128
CBAM_4 | | | 2277 | 28 × 28 × 128 | 28 × 28 × 128
separable_conv_5 | 128 | 3/2 | 8896 | 28 × 28 × 64 | 28 × 28 × 128
CBAM_5 | | | 2277 | 28 × 28 × 128 | 28 × 28 × 128
separable_conv_6 | 128 | 3/3 | 8896 | 28 × 28 × 64 | 28 × 28 × 128
CBAM_6 | | | 2277 | 28 × 28 × 128 | 28 × 28 × 128
Add_2 | | | 0 | 28 × 28 × 128, 28 × 28 × 128, 28 × 28 × 128 | 28 × 28 × 128
separable_conv_7 | 256 | 3/1 | 34,176 | 28 × 28 × 128 | 28 × 28 × 256
CBAM_7 | | | 8949 | 28 × 28 × 256 | 28 × 28 × 256
separable_conv_8 | 256 | 3/2 | 34,176 | 28 × 28 × 128 | 28 × 28 × 256
CBAM_8 | | | 8949 | 28 × 28 × 256 | 28 × 28 × 256
separable_conv_9 | 256 | 3/3 | 34,176 | 28 × 28 × 128 | 28 × 28 × 256
CBAM_9 | | | 8949 | 28 × 28 × 256 | 28 × 28 × 256
Add_3 | | | 0 | 28 × 28 × 256, 28 × 28 × 256, 28 × 28 × 256 | 28 × 28 × 256
separable_conv_10 | 256 | 3/1 | 68,096 | 28 × 28 × 256 | 28 × 28 × 256
separable_conv_11 | 256 | 3/1 | 68,096 | 28 × 28 × 256 | 28 × 28 × 256
Add_4 | | | 0 | 28 × 28 × 256 | 28 × 28 × 256
Flatten | | | 0 | 28 × 28 × 256 | 200,704 × 1
dense_1 (1000) | | | 200,705,000 | 200,704 × 1 | 1000 × 1
dense_2 (256) | | | 256,256 | 1000 × 1 | 256 × 1
Table 2. Sample distribution table.

Bearing Number | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 | Dataset 5 | Dataset 6
SKF6205 | Normal-1 | Normal-2 | Normal-3 | Normal-1 | Normal-2 | Normal-3
 | B007-1 | B007-3 | B007-3 | B007-2 | B007-1 | B007-2
 | B014-1 | B014-3 | B014-3 | B014-2 | B014-3 | B014-3
 | B021-1 | B021-3 | B021-2 | B021-3 | B021-2 | B021-1
 | IR007-1 | IR007-1 | IR007-2 | IR007-3 | IR007-2 | IR007-1
 | IR014-2 | IR014-2 | IR014-2 | IR014-3 | IR014-3 | IR014-2
 | IR021-3 | IR021-2 | IR021-2 | IR021-2 | IR021-3 | IR021-3
 | OR007-1 | OR007-2 | OR007-1 | OR007-1 | OR007-3 | OR007-1
 | OR014-2 | OR014-2 | OR014-1 | OR014-1 | OR014-2 | OR014-1
 | OR021-3 | OR021-1 | OR021-2 | OR021-2 | OR021-1 | OR021-3
SKF6203 | Normal-2 | Normal-2 | Normal-1 | Normal-3 | Normal-1 | Normal-2
 | B007-1 | B007-1 | B007-3 | B007-2 | B007-1 | B007-3
 | B014-3 | B014-2 | B014-2 | B014-3 | B014-2 | B014-3
 | B021-3 | B021-1 | B021-2 | B021-3 | B021-2 | B021-1
 | IR007-3 | IR007-2 | IR007-2 | IR007-3 | IR007-2 | IR007-1
 | IR014-3 | IR014-1 | IR014-3 | IR014-1 | IR014-1 | IR014-1
 | IR021-3 | IR021-3 | IR021-1 | IR021-1 | IR021-3 | IR021-3
 | OR007-2 | OR007-2 | OR007-1 | OR007-1 | OR007-1 | OR007-2
 | OR014-2 | OR014-2 | OR014-3 | OR014-2 | OR014-3 | OR014-2
 | OR021-3 | OR021-3 | OR021-1 | OR021-2 | OR021-3 | OR021-1
Table 3. Comparison table of the accuracy (AC) and F1 values of the drive end data.

Dataset | Methods | N1 (AC/F1) | N2 (AC/F1) | N3 (AC/F1) | N4 (AC/F1) | N5 (AC/F1) | N6 (AC/F1)
Dataset 1 | VGG16-gcForest | 65.10%/66.20% | 69.13%/68.78% | 71.20%/72.02% | 72.41%/73.10% | 75.36%/74.48% | 77.54%/76.98%
 | Res50-gcForest | 83.25%/82.86% | 86.42%/85.96% | 87.85%/88.12% | 89.22%/90.03% | 92.02%/92.05% | 93.33%/94.40%
 | SigDSD-gcForest | 92.43%/91.73% | 94.17%/93.92% | 95.95%/94.65% | 96.04%/96.11% | 96.35%/96.30% | 96.85%/96.80%
 | ReluDSD-gcForest | 92.85%/93.05% | 94.22%/93.88% | 96.20%/96.22% | 96.47%/96.50% | 96.70%/96.73% | 97.03%/96.98%
 | PReluDSD-gcForest | 93.09%/92.86% | 94.72%/94.70% | 96.62%/96.65% | 96.73%/96.80% | 96.83%/96.96% | 97.45%/97.52%
 | ADSD-gcForest | 94.32%/94.30% | 95.85%/95.78% | 97.70%/97.73% | 97.83%/98.03% | 97.92%/98.15% | 98.23%/98.28%
Dataset 2 | VGG16-gcForest | 64.72%/63.56% | 69.20%/68.95% | 69.96%/70.14% | 72.02%/71.92% | 73.46%/73.45% | 76.85%/76.88%
 | Res50-gcForest | 83.50%/83.47% | 86.72%/86.75% | 87.60%/87.59% | 89.27%/89.30% | 91.26%/91.33% | 92.96%/93.05%
 | SigDSD-gcForest | 89.98%/90.13% | 91.40%/91.48% | 92.53%/92.60% | 93.06%/93.10% | 94.12%/94.08% | 95.23%/95.26%
 | ReluDSD-gcForest | 90.48%/90.43% | 91.92%/92.03% | 92.90%/92.86% | 93.55%/93.57% | 94.67%/94.65% | 95.73%/95.70%
 | PReluDSD-gcForest | 91.52%/91.55% | 92.03%/92.05% | 93.63%/93.71% | 94.52%/94.50% | 95.03%/95.11% | 96.15%/96.18%
 | ADSD-gcForest | 92.65%/92.66% | 93.79%/93.82% | 94.42%/94.45% | 95.91%/96.02% | 96.51%/96.47% | 97.83%/97.85%
Dataset 3 | VGG16-gcForest | 66.25%/66.31% | 67.43%/67.40% | 69.84%/69.86% | 72.45%/72.50% | 74.62%/75.06% | 77.11%/78.23%
 | Res50-gcForest | 83.85%/83.80% | 85.62%/86.15% | 86.87%/86.93% | 88.38%/88.43% | 91.67%/91.72% | 92.63%/92.73%
 | SigDSD-gcForest | 91.61%/91.78% | 92.96%/93.32% | 95.41%/95.60% | 96.08%/96.05% | 96.21%/96.34% | 96.91%/97.01%
 | ReluDSD-gcForest | 91.95%/91.86% | 93.60%/93.74% | 95.92%/96.02% | 96.52%/96.46% | 96.87%/96.94% | 97.32%/97.28%
 | PReluDSD-gcForest | 92.07%/91.96% | 94.30%/94.52% | 95.42%/95.40% | 96.90%/96.86% | 97.24%/97.35% | 97.75%/97.92%
 | ADSD-gcForest | 93.22%/93.25% | 95.43%/95.48% | 96.48%/96.56% | 97.11%/97.16% | 97.84%/98.02% | 98.33%/98.30%
Dataset 4 | VGG16-gcForest | 65.89%/65.92% | 68.34%/69.16% | 70.54%/71.17% | 73.52%/73.58% | 75.66%/76.16% | 77.35%/77.49%
 | Res50-gcForest | 77.82%/77.80% | 80.35%/81.42% | 84.48%/85.53% | 86.76%/87.36% | 90.81%/91.28% | 92.28%/93.16%
 | SigDSD-gcForest | 92.25%/91.89% | 92.72%/92.70% | 93.78%/93.89% | 94.75%/94.82% | 95.52%/95.64% | 96.44%/97.13%
 | ReluDSD-gcForest | 92.71%/93.04% | 93.21%/93.19% | 94.36%/94.28% | 95.46%/95.51% | 96.01%/96.33% | 97.08%/97.26%
 | PReluDSD-gcForest | 93.07%/93.26% | 94.96%/95.14% | 95.58%/96.27% | 96.82%/97.14% | 97.42%/97.59% | 97.64%/98.01%
 | ADSD-gcForest | 94.18%/95.64% | 95.78%/95.74% | 96.28%/96.32% | 97.34%/97.64% | 97.92%/98.12% | 98.17%/98.34%
Dataset 5 | VGG16-gcForest | 66.14%/66.10% | 67.94%/68.23% | 71.13%/71.06% | 72.86%/72.93% | 75.43%/76.15% | 76.29%/77.35%
 | Res50-gcForest | 79.85%/80.15% | 81.62%/81.64% | 84.96%/84.86% | 88.62%/88.75% | 91.52%/91.67% | 93.53%/93.78%
 | SigDSD-gcForest | 92.02%/92.35% | 94.40%/94.67% | 95.22%/96.37% | 95.72%/96.89% | 96.51%/97.02% | 97.05%/97.28%
 | ReluDSD-gcForest | 92.47%/92.43% | 94.86%/94.82% | 95.76%/95.84% | 96.12%/96.15% | 97.02%/97.46% | 97.53%/97.88%
 | PReluDSD-gcForest | 93.08%/93.47% | 95.48%/94.86% | 96.27%/96.20% | 96.76%/97.16% | 97.67%/98.16% | 98.08%/98.24%
 | ADSD-gcForest | 94.27%/94.35% | 96.27%/96.37% | 97.19%/97.10% | 97.46%/97.65% | 98.15%/98.10% | 98.67%/98.76%
Dataset 6 | VGG16-gcForest | 65.85%/65.78% | 66.72%/67.05% | 70.32%/71.14% | 72.61%/72.76% | 74.50%/75.65% | 76.76%/77.04%
 | Res50-gcForest | 78.95%/79.13% | 84.06%/84.23% | 85.46%/85.63% | 87.42%/87.25% | 89.73%/89.53% | 91.32%/92.03%
 | SigDSD-gcForest | 91.06%/91.32% | 92.92%/93.16% | 93.72%/94.07% | 95.03%/95.06% | 96.40%/96.32% | 97.02%/97.14%
 | ReluDSD-gcForest | 91.62%/91.76% | 93.52%/94.31% | 94.34%/94.54% | 95.43%/95.32% | 97.02%/96.53% | 97.33%/97.27%
 | PReluDSD-gcForest | 92.07%/92.15% | 94.08%/93.64% | 95.17%/95.67% | 96.06%/96.05% | 97.47%/97.36% | 97.78%/98.05%
 | ADSD-gcForest | 93.76%/93.48% | 95.61%/95.53% | 96.86%/96.57% | 97.59%/97.42% | 98.34%/98.28% | 98.55%/98.62%
Table 4. Comparison table of the accuracy (AC) and F1 values of the fan end data.

Dataset | Methods | N1 (AC/F1) | N2 (AC/F1) | N3 (AC/F1) | N4 (AC/F1) | N5 (AC/F1) | N6 (AC/F1)
Dataset 1 | VGG16-gcForest | 61.87%/61.80% | 64.84%/64.75% | 67.45%/68.12% | 69.85%/70.23% | 73.46%/73.32% | 75.75%/75.82%
 | Res50-gcForest | 79.75%/80.04% | 81.28%/81.42% | 84.86%/84.90% | 87.91%/88.34% | 91.03%/91.06% | 92.46%/92.50%
 | SigDSD-gcForest | 91.52%/91.68% | 92.53%/93.46% | 94.48%/95.62% | 95.56%/95.63% | 96.01%/96.20% | 96.65%/96.60%
 | ReluDSD-gcForest | 92.03%/92.12% | 93.20%/93.18% | 95.06%/94.92% | 96.01%/95.68% | 96.53%/96.48% | 97.04%/97.06%
 | PReluDSD-gcForest | 92.62%/92.65% | 93.95%/93.86% | 95.65%/95.60% | 96.62%/96.63% | 97.03%/97.18% | 97.75%/97.70%
 | ADSD-gcForest | 93.32%/93.34% | 94.74%/94.70% | 96.54%/96.46% | 97.23%/97.20% | 97.72%/97.67% | 98.34%/98.25%
Dataset 2 | VGG16-gcForest | 62.41%/63.26% | 64.27%/64.68% | 66.75%/67.35% | 68.58%/69.14% | 71.59%/71.48% | 73.84%/73.76%
 | Res50-gcForest | 79.61%/79.53% | 81.45%/81.39% | 83.68%/83.61% | 86.84%/86.72% | 89.83%/89.76% | 92.69%/92.53%
 | SigDSD-gcForest | 90.37%/90.28% | 93.12%/93.36% | 94.27%/95.26% | 95.76%/95.63% | 95.67%/95.72% | 96.43%/96.84%
 | ReluDSD-gcForest | 91.06%/91.26% | 94.08%/94.34% | 95.59%/95.62% | 96.22%/96.32% | 96.75%/97.14% | 97.29%/97.38%
 | PReluDSD-gcForest | 91.56%/92.36% | 94.67%/94.37% | 96.24%/95.37% | 96.87%/97.08% | 97.32%/97.46% | 97.64%/97.85%
 | ADSD-gcForest | 92.68%/93.06% | 95.26%/95.37% | 96.81%/96.68% | 97.26%/97.39% | 97.83%/97.80% | 98.44%/98.40%
Dataset 3 | VGG16-gcForest | 63.21%/63.46% | 65.32%/64.89% | 67.63%/67.90% | 68.42%/68.36% | 70.94%/71.26% | 72.84%/73.58%
 | Res50-gcForest | 77.12%/77.42% | 80.23%/80.68% | 82.47%/82.86% | 86.58%/86.69% | 89.52%/90.15% | 92.27%/92.38%
 | SigDSD-gcForest | 90.46%/91.68% | 93.48%/93.56% | 94.59%/94.75% | 95.03%/96.15% | 95.67%/96.82% | 95.33%/95.59%
 | ReluDSD-gcForest | 90.72%/91.08% | 93.73%/94.28% | 95.07%/95.36% | 95.68%/96.06% | 96.28%/96.33% | 96.87%/97.11%
 | PReluDSD-gcForest | 91.25%/91.20% | 94.36%/94.38% | 95.53%/95.49% | 96.36%/96.42% | 96.82%/96.74% | 97.35%/97.19%
 | ADSD-gcForest | 91.85%/92.09% | 94.75%/95.13% | 96.15%/96.18% | 96.98%/97.26% | 97.23%/97.46% | 98.31%/98.48%
Dataset 4 | VGG16-gcForest | 61.74%/62.31% | 63.87%/64.26% | 66.88%/67.18% | 68.36%/69.45% | 70.42%/71.26% | 71.29%/72.21%
 | Res50-gcForest | 76.42%/76.48% | 78.73%/78.62% | 82.23%/82.26% | 84.64%/85.04% | 88.37%/88.49% | 91.44%/91.57%
 | SigDSD-gcForest | 91.23%/91.34% | 92.34%/92.86% | 94.68%/95.71% | 95.82%/96.02% | 96.42%/96.74% | 97.08%/97.06%
 | ReluDSD-gcForest | 91.75%/91.60% | 93.03%/93.09% | 95.02%/95.16% | 96.35%/96.42% | 96.97%/97.05% | 97.34%/97.48%
 | PReluDSD-gcForest | 92.20%/92.34% | 93.58%/93.64% | 95.42%/95.56% | 96.72%/96.83% | 97.41%/97.56% | 97.86%/98.01%
 | ADSD-gcForest | 93.45%/93.40% | 94.71%/94.65% | 96.75%/96.63% | 97.32%/97.46% | 98.03%/98.14% | 98.29%/98.24%
Dataset 5 | VGG16-gcForest | 62.76%/62.64% | 64.52%/65.38% | 67.12%/67.48% | 69.29%/69.70% | 70.68%/71.17% | 71.36%/72.47%
 | Res50-gcForest | 75.12%/75.49% | 77.37%/77.25% | 79.52%/79.26% | 82.67%/82.54% | 84.46%/84.69% | 87.53%/88.14%
 | SigDSD-gcForest | 91.40%/91.42% | 93.01%/92.89% | 94.81%/95.17% | 95.02%/95.43% | 96.24%/96.39% | 96.82%/97.16%
 | ReluDSD-gcForest | 92.22%/92.47% | 93.50%/93.49% | 95.22%/95.24% | 95.75%/96.78% | 96.64%/96.51% | 97.27%/97.38%
 | PReluDSD-gcForest | 92.86%/92.92% | 94.08%/94.06% | 95.82%/95.86% | 96.22%/96.36% | 97.25%/97.36% | 97.76%/97.82%
 | ADSD-gcForest | 93.24%/93.28% | 95.78%/95.83% | 96.28%/97.36% | 96.87%/97.12% | 97.73%/97.68% | 98.38%/98.06%
Dataset 6 | VGG16-gcForest | 61.98%/61.86% | 63.45%/64.58% | 65.62%/65.52% | 68.29%/68.34% | 69.56%/69.96% | 71.42%/71.86%
 | Res50-gcForest | 74.86%/74.92% | 77.53%/77.50% | 80.22%/80.36% | 82.23%/82.18% | 84.29%/84.27% | 86.36%/86.34%
 | SigDSD-gcForest | 92.25%/92.37% | 93.45%/93.57% | 94.09%/94.16% | 95.42%/95.26% | 96.24%/96.31% | 96.83%/96.80%
 | ReluDSD-gcForest | 92.86%/92.79% | 93.73%/93.76% | 94.42%/94.39% | 95.73%/95.70% | 96.89%/96.92% | 97.36%/97.34%
 | PReluDSD-gcForest | 93.03%/93.17% | 94.24%/94.28% | 95.02%/94.88% | 96.36%/97.32% | 97.25%/97.05% | 97.76%/97.65%
 | ADSD-gcForest | 93.55%/93.64% | 94.61%/94.78% | 95.42%/95.57% | 96.92%/97.18% | 97.83%/98.07% | 98.34%/98.64%
Table 5. Sample distribution table.

The Name of Dataset | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 | Dataset 5 | Dataset 6
University of Ottawa bearing data | HD-1 | HA-1 | HB-1 | HA-1 | HC-1 | HD-1
 | HA-1 | HB-1 | HC-1 | HB-1 | HA-1 | HB-1
 | HB-1 | HD-1 | HA-1 | HD-1 | HB-1 | HC-1
 | IC-1 | IA-1 | IC-1 | IB-1 | IB-1 | ID-1
 | ID-1 | IB-1 | IA-1 | IA-1 | IC-1 | IA-1
 | IB-1 | ID-1 | IB-1 | ID-1 | IA-1 | IB-1
 | OB-1 | OD-1 | OB-1 | OB-1 | OA-1 | OD-1
 | OD-1 | OC-1 | OA-1 | OD-1 | OC-1 | OB-1
 | OA-1 | OB-1 | OD-1 | OC-1 | OD-1 | OC-1
 | HC-1 | IA-1 | HA-1 | IB-1 | OD-1 | OB-1
Table 6. Comparison table of the accuracy (AC) and F1 values of the Ottawa dataset.

Dataset | Methods | N1 (AC/F1) | N2 (AC/F1) | N3 (AC/F1) | N4 (AC/F1) | N5 (AC/F1) | N6 (AC/F1)
Dataset 1 | VGG16-gcForest | 56.23%/56.20% | 60.39%/60.34% | 63.76%/63.72% | 65.63%/65.59% | 68.42%/68.45% | 70.28%/70.36%
 | Res50-gcForest | 78.29%/78.26% | 82.53%/82.57% | 84.31%/84.59% | 85.42%/85.31% | 87.12%/87.18% | 88.03%/87.93%
 | SigDSD-gcForest | 91.23%/91.36% | 92.39%/92.58% | 94.52%/95.67% | 95.26%/95.68% | 96.01%/96.19% | 96.75%/97.70%
 | ReluDSD-gcForest | 91.79%/91.66% | 93.42%/93.35% | 95.01%/95.16% | 95.76%/95.72% | 96.42%/96.48% | 97.03%/97.18%
 | PReluDSD-gcForest | 92.34%/92.46% | 93.44%/93.67% | 95.47%/95.55% | 96.12%/96.39% | 97.03%/96.98% | 97.35%/97.36%
 | ADSD-gcForest | 93.42%/93.39% | 94.76%/94.58% | 96.08%/96.06% | 96.89%/96.92% | 97.62%/97.58% | 98.42%/98.48%
Dataset 2 | VGG16-gcForest | 57.62%/57.55% | 59.45%/59.52% | 62.45%/62.38% | 64.81%/64.80% | 67.02%/67.28% | 69.31%/69.59%
 | Res50-gcForest | 77.94%/77.80% | 80.66%/81.56% | 83.52%/84.28% | 86.44%/86.50% | 88.18%/88.26% | 89.50%/89.53%
 | SigDSD-gcForest | 90.46%/90.63% | 92.02%/91.26% | 93.25%/93.18% | 94.11%/94.07% | 95.47%/95.48% | 96.26%/96.31%
 | ReluDSD-gcForest | 91.04%/91.16% | 92.68%/92.76% | 93.82%/93.96% | 94.45%/94.48% | 96.06%/95.89% | 96.63%/96.54%
 | PReluDSD-gcForest | 91.65%/91.77% | 93.24%/93.20% | 94.75%/94.79% | 95.36%/95.42% | 96.62%/96.59% | 97.30%/97.28%
 | ADSD-gcForest | 92.42%/92.38% | 94.66%/94.76% | 96.14%/97.28% | 96.76%/97.02% | 97.32%/97.48% | 98.15%/98.24%
Dataset 3 | VGG16-gcForest | 56.80%/56.68% | 58.55%/59.04% | 62.53%/62.68% | 64.33%/65.22% | 66.86%/66.78% | 68.42%/68.40%
 | Res50-gcForest | 76.73%/76.55% | 78.42%/78.61% | 81.18%/81.26% | 83.92%/83.90% | 86.35%/86.42% | 88.26%/88.37%
 | SigDSD-gcForest | 89.08%/88.89% | 90.91%/91.26% | 92.58%/92.87% | 94.27%/94.38% | 95.31%/95.36% | 96.26%/96.28%
 | ReluDSD-gcForest | 89.62%/89.52% | 91.54%/91.69% | 93.18%/92.89% | 94.82%/94.80% | 96.05%/96.18% | 96.86%/96.76%
 | PReluDSD-gcForest | 90.16%/91.02% | 92.32%/92.21% | 94.84%/94.56% | 95.47%/95.32% | 96.76%/96.79% | 97.32%/97.42%
 | ADSD-gcForest | 91.85%/91.29% | 93.49%/93.74% | 95.68%/95.88% | 96.31%/96.28% | 97.45%/97.52% | 98.27%/98.38%
Dataset 4 | VGG16-gcForest | 57.34%/57.49% | 59.56%/59.63% | 61.16%/61.19% | 63.47%/63.30% | 65.52%/65.71% | 67.92%/67.94%
 | Res50-gcForest | 75.26%/75.36% | 77.56%/77.49% | 80.64%/80.79% | 83.94%/84.06% | 85.16%/85.67% | 87.85%/88.06%
 | SigDSD-gcForest | 91.53%/91.68% | 92.20%/92.34% | 93.23%/94.36% | 94.78%/94.82% | 95.63%/95.56% | 96.28%/96.34%
 | ReluDSD-gcForest | 92.06%/92.64% | 92.76%/92.84% | 93.74%/93.70% | 95.58%/95.88% | 96.17%/96.24% | 96.65%/96.69%
 | PReluDSD-gcForest | 92.60%/91.65% | 93.26%/93.36% | 94.43%/94.58% | 96.03%/96.15% | 96.76%/96.72% | 97.22%/97.36%
 | ADSD-gcForest | 93.42%/93.62% | 94.60%/94.26% | 95.81%/95.68% | 96.63%/96.58% | 97.45%/97.26% | 98.32%/98.30%
Dataset 5 | VGG16-gcForest | 56.85%/56.79% | 58.25%/58.16% | 61.83%/62.05% | 62.55%/63.18% | 64.67%/65.25% | 66.86%/66.79%
 | Res50-gcForest | 76.40%/76.59% | 78.62%/78.59% | 82.35%/83.59% | 84.54%/85.69% | 85.46%/85.96% | 89.14%/89.19%
 | SigDSD-gcForest | 91.01%/91.08% | 93.06%/93.28% | 95.15%/95.28% | 95.70%/95.76% | 96.02%/96.34% | 96.55%/96.64%
 | ReluDSD-gcForest | 91.57%/91.68% | 93.62%/94.68% | 95.65%/95.78% | 96.04%/96.29% | 96.66%/96.64% | 97.05%/97.14%
 | PReluDSD-gcForest | 92.02%/91.89% | 94.22%/94.26% | 96.03%/96.38% | 96.48%/96.58% | 97.21%/97.36% | 97.43%/97.58%
 | ADSD-gcForest | 92.52%/93.76% | 94.76%/94.88% | 96.56%/96.49% | 97.03%/96.69% | 97.85%/97.68% | 98.42%/98.64%
Dataset 6 | VGG16-gcForest | 57.25%/57.64% | 59.60%/60.12% | 61.33%/61.24% | 64.58%/64.88% | 66.02%/65.89% | 68.20%/67.96%
 | Res50-gcForest | 75.05%/76.18% | 77.33%/78.38% | 79.62%/80.19% | 82.32%/83.49% | 85.58%/84.99% | 88.75%/89.06%
 | SigDSD-gcForest | 92.08%/93.16% | 94.01%/93.86% | 94.52%/95.67% | 95.27%/95.86% | 96.06%/96.18% | 96.47%/96.34%
 | ReluDSD-gcForest | 92.52%/92.36% | 94.63%/94.59% | 95.02%/94.98% | 95.72%/95.64% | 96.58%/96.59% | 96.95%/96.85%
 | PReluDSD-gcForest | 93.05%/92.96% | 95.06%/94.89% | 95.62%/95.57% | 96.34%/96.78% | 97.10%/96.68% | 97.36%/97.29%
 | ADSD-gcForest | 93.60%/93.84% | 95.52%/95.67% | 96.06%/95.98% | 96.85%/97.93% | 97.67%/97.58% | 98.29%/98.36%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

