Article

Intrinsic Dimension Estimation-Based Feature Selection and Multinomial Logistic Regression for Classification of Bearing Faults Using Compressively Sampled Vibration Signals

by Hosameldin O. A. Ahmed 1 and Asoke K. Nandi 2,3,*
1 Department of Mechanical and Aerospace Engineering, Brunel University London, London UB8 3PH, UK
2 Department of Electronic and Electrical Engineering, Brunel University London, London UB8 3PH, UK
3 School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(4), 511; https://doi.org/10.3390/e24040511
Submission received: 12 March 2022 / Revised: 1 April 2022 / Accepted: 3 April 2022 / Published: 5 April 2022

Abstract:
As failures of rolling bearings lead to major failures in rotating machines, recent vibration-based rolling bearing fault diagnosis techniques focus on obtaining useful fault features from the huge collection of raw data. However, too many features reduce classification accuracy and increase computation time. This paper proposes an effective feature selection technique based on intrinsic dimension estimation of compressively sampled vibration signals. First, compressive sampling (CS) is used to obtain compressed measurements from the collected raw vibration signals. Then, a global dimension estimator, the geodesic minimal spanning tree (GMST), is employed to compute the minimal number of features needed to efficiently represent the compressively sampled signals. Finally, a feature selection process, combining stochastic proximity embedding (SPE) and neighbourhood component analysis (NCA), is used to select fewer features for bearing fault diagnosis. Using a regression analysis-based predictive modelling technique, the multinomial logistic regression (MLR) classifier, the selected features are assessed in two case studies of rolling bearing vibration signals under different working loads. The experimental results demonstrate that the proposed method successfully selects fewer features, with which the MLR-based trained model achieves high classification accuracy and significantly reduced computation times compared to published research.

1. Introduction

Rolling bearings are critical components of rotating machines and play a crucial role in maintaining relative motion between stationary and moving parts. Failure of a rolling bearing is one of the key problems in rotating machines and may lead to major catastrophes [1]. It has previously been observed that approximately 40–90% of rotating machine failures are related to bearing faults [2]. Therefore, in most manufacturing procedures, rolling bearings need to be monitored to avoid machine failures. Numerous techniques can be used for machine condition monitoring, such as vibration monitoring, electric motor current monitoring, and acoustic emission monitoring. Of these, vibration-based condition monitoring has been extensively utilized and has become a widely accepted method [3]. As presented in Figure 1, a roller bearing comprises several components: the inner race, the outer race, the rolling elements, and the cage [4]. Bearing faults can occur for several reasons, such as fatigue, incorrect lubrication, contamination, and corrosion [5]. In practice, faults in rolling bearings produce a series of impulses that repeat periodically at a rate known as the bearing fundamental defect frequency (BFDF), which depends on the location of the fault, the geometry of the bearing, and the shaft speed, as displayed in Figure 2 [6].
Based on the damaged component, the BFDFs are categorized into four groups: (i) bearing pass frequency of the inner race (BPFI), (ii) bearing pass frequency of the outer race (BPFO), (iii) ball spin frequency (BSF), and (iv) fundamental train frequency (FTF), which correspond to defects at the inner race, the outer race, the rolling element, and the cage, respectively [7]. These frequencies can be described using the following equations:
$\mathrm{BPFI} = \frac{N_b S_{sh}}{2}\left(1 + \frac{d_b}{D_p}\cos\varphi\right)$ (1)

$\mathrm{BPFO} = \frac{N_b S_{sh}}{2}\left(1 - \frac{d_b}{D_p}\cos\varphi\right)$ (2)

$\mathrm{BSF} = \frac{D_p S_{sh}}{2 d_b}\left(1 - \left(\frac{d_b}{D_p}\cos\varphi\right)^2\right)$ (3)

$\mathrm{FTF} = \frac{S_{sh}}{2}\left(1 - \frac{d_b}{D_p}\cos\varphi\right)$ (4)
Here, $N_b$ represents the number of rolling elements, $S_{sh}$ the shaft speed, $d_b$ the rolling element diameter, $D_p$ the pitch diameter, and $\varphi$ the angle of the load from the radial plane. It has previously been observed that the frequency of the acquired vibration signal indicates the cause of the fault, while its amplitude indicates the fault severity.
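As a quick sanity check, the four defect frequencies above can be coded directly. The function and the example parameter values below are illustrative only (they are not from the paper); angles are in radians, and the BSF expression includes the shaft-speed factor $S_{sh}$ as in the standard form of Equation (3).

```python
import math

def bearing_defect_frequencies(n_b, s_sh, d_b, d_p, phi):
    """Fundamental defect frequencies of a rolling bearing, Equations (1)-(4).
    n_b: number of rolling elements, s_sh: shaft speed [Hz],
    d_b: rolling element diameter, d_p: pitch diameter, phi: load angle [rad]."""
    ratio = (d_b / d_p) * math.cos(phi)
    bpfi = n_b * s_sh / 2.0 * (1.0 + ratio)              # inner-race defect frequency
    bpfo = n_b * s_sh / 2.0 * (1.0 - ratio)              # outer-race defect frequency
    bsf = d_p * s_sh / (2.0 * d_b) * (1.0 - ratio ** 2)  # rolling-element defect frequency
    ftf = s_sh / 2.0 * (1.0 - ratio)                     # cage defect frequency
    return bpfi, bpfo, bsf, ftf
```

Two useful invariants follow from the equations: BPFI + BPFO = $N_b S_{sh}$, and BPFO = $N_b \cdot$ FTF.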
In vibration-based machine fault diagnosis practice, we handle a huge gathering of vibration signals obtained from several sources in the machines and some background noises. Consequently, it is challenging to use the raw vibration signals directly for fault diagnosis. Much of the current literature on vibration-based fault diagnosis pays particular attention to introducing methods capable of obtaining useful information, usually called features, from the raw vibration signals, which can be successfully used to classify the health condition of the machine. Automatic vibration-based condition monitoring employs machine learning classifiers to classify the vibration signal with its correct health condition type using the obtained features as inputs. Various techniques have been introduced for vibration signal analysis that can be utilized to obtain useful features from the raw vibration data. These include time-domain methods, frequency-domain methods, and time-frequency domain methods.
Numerous studies have examined many statistical techniques and some other cutting-edge methods to extract features from vibration signals in the time domain. For instance, McCormick and Nandi conducted several investigations to classify the condition of a small rotating machine using statistical parameters, such as the mean and variance, estimated from the time series of vibration signals, which were then applied as inputs to multi-layer perceptron and radial basis function neural networks [8]. Jack and Nandi introduced a genetic algorithm (GA) to select the most significant input features for artificial neural networks (ANNs) from a large group of statistical estimates in machine condition monitoring situations [9]. In the same vein, Jack and Nandi attempted to improve the overall generalisation performance of support vector machine (SVM) and ANN techniques in two-class fault/no-fault recognition by applying a GA-based feature selection process [10]. In [11], a neural network technique for automated fault diagnosis of rolling element bearings using vibration data was proposed. First, the authors examined ten time-domain features, including peak value (Pv), root mean square (RMS), standard deviation (SD), kurtosis value (Kv), crest factor (CrF), clearance factor (ClF), impulse factor (IF), shape factor (ShF), Weibull negative log-likelihood (Wnl), and normal negative log-likelihood, as inputs to an ANN. Then, only the kurtosis and normal negative log-likelihood were used as input features to the ANN. The results showed that the ANN with kurtosis and normal negative log-likelihood performed fault diagnosis with the same accuracy as the ANN with all ten time-domain features examined first. Furthermore, Prieto et al. presented a technique for bearing fault detection using statistical time-domain features and an ANN.
In this technique, several time-domain features, including the mean, maximum value, RMS, SD, variance, ShF, CrF, latitude factor, IF, Kv, and the normalized fifth and sixth moments, were computed from the acquired vibration signals [12].
Previous research has established that frequency-domain methods can reveal information based on frequency characteristics that is not readily observed in the time domain. Various frequency-domain methods have been extensively used for vibration signal analysis in the context of bearing fault classification. For example, McCormick and Nandi examined the application of ANNs for rotating shaft fault diagnosis using moments of the vibration time series as input features. These features were compared with features computed using the fast Fourier transform (FFT) as a suitable choice for real-time implementation [13]. Li et al. introduced a method for motor bearing fault detection using frequency-domain vibration signals and an ANN [14]. In this method, the acquired time-domain vibration signals were converted into the frequency domain using the FFT. Then, the converted signals were used as inputs to train the ANN. Zeng and Wang proposed a framework for machine fault classification that comprises data acquisition, data processing, feature extraction, fault clustering, and fault assignment [15]. In this framework, the acquired vibration signals were transformed into the frequency domain using the FFT. Dhamande and Chaudhari proposed a method for bearing fault diagnosis based on statistical feature extraction in the time and frequency domains and an ANN [16]. In this method, many statistical parameters of the vibration data were computed in the time domain, including the mean, SD, variance, RMS, the absolute maximum of the vibration signal, skewness, kurtosis, CrF, and combinations of them. Additionally, in the frequency domain, several statistical parameters were estimated, including the mean, the variance, the third moment, the fourth moment, the grand mean, the SD with respect to the grand mean, and the third and fourth moments with respect to the grand mean [16].
Similarly, Helmi and Forouzantabar proposed a technique for rolling bearing fault detection in electric motors applying time-domain and frequency-domain features with the adaptive neuro-fuzzy inference system (ANFIS) network [17]. In this technique, 15 time-domain features, such as the mean, SD, IF, and skewness, were computed. Then, the frequency spectra were obtained using the FFT, and 13 frequency-domain features, such as the mean, frequency centre, and kurtosis, were computed [17].
Moreover, data from several studies suggest that time-frequency domain methods have been introduced to deal with nonstationary waveform signals, which are very common when machine failures happen. The literature on machine fault diagnosis has highlighted several time-frequency methods that are employed to transform vibration signals from the time domain into the time-frequency domain. For instance, Wang and Chen investigated the sensitivity of three time-frequency domain methods, namely, the short-time Fourier transform (STFT), wavelet analysis (WA), and the pseudo Wigner–Ville distribution (PWVD), for rotating machine fault diagnosis [18]. In [19], the authors presented a feature extraction methodology based on empirical mode decomposition (EMD) energy entropy for rolling element bearing fault diagnosis. In this methodology, a mathematical analysis process to select the most significant intrinsic mode functions (IMFs) was introduced. Then, the selected features were applied as inputs to train an ANN-based model, which was used to classify bearing faults. Furthermore, Djebala et al. presented a denoising technique based on discrete wavelet analysis of the acquired vibration signals for bearing fault detection [20]. In [21], a deep learning-based approach for bearing fault diagnosis was proposed. In this approach, the acquired signals were preprocessed using the STFT to generate a spectrum matrix. Then, sub-patterns were generated from the spectrum matrix and used to obtain the optimized deep learning structure, the large memory storage retrieval (LAMSTAR) neural network, for bearing fault diagnosis. Furthermore, Immovilli et al. introduced a technique for the detection of generalized-roughness bearing faults using the spectral kurtosis energy of vibration or current signals [22]. Lei and colleagues presented an improved kurtogram method for fault diagnosis of rolling element bearings.
In this method, the wavelet packet transform (WPT) was used as the filter of the kurtogram method to overcome the limitations of the original kurtogram [23]. Recently, Hongwei et al. proposed a method for rolling element bearing fault diagnosis based on fuzzy C-means (FCM) clustering of vibration images obtained using EMD-PWVD [24]. In this method, first, the acquired vibration signals with different fault degrees were converted into contour time-frequency images using the EMD-PWVD technique. Then, the obtained vibration images were divided into sections, and their energy distribution values were used as image features. Furthermore, in [25], the authors presented a method for rotating machinery fault diagnosis using time-frequency domain features and CNN knowledge transfer.
On the other hand, beyond the various techniques described above, which process and examine vibration signals in the time, frequency, and time-frequency domains, various studies have proposed other methods that can reduce computational complexity and enhance fault classification accuracy. Of these, numerous methods have been introduced for learning subspace features from raw vibration signals in rotating machine fault diagnosis. For instance, in [26], a method was proposed for detecting incipient failures in large-size low-speed rolling bearings using multiscale principal component analysis (MSPCA) and ensemble empirical mode decomposition (EEMD). Guo et al. introduced a feature extraction approach for rolling element bearing fault diagnosis using envelope extraction and the independent component analysis (ICA) technique [27]. In [28], the authors evaluated the use of the principal component analysis (PCA) technique and neural network performance for bearing fault diagnosis. In their experiments, the vibration signals were preprocessed using detrended fluctuation analysis (DFA) and rescaled range analysis (RSA) techniques. Additionally, Dong et al. introduced a technique for bearing fault diagnosis using kernel PCA (KPCA) and an optimized k-nearest neighbour model [29]. In this technique, first, the original vibration signals were decomposed using local mean decomposition (LMD). Then, the entropy values of the product functions, which represent the input features, were computed using the Shannon method. The KPCA was used to reduce the dimension of the original features needed to train the k-nearest neighbour model [29].
Additionally, data from several studies demonstrate that the use of feature selection techniques reduces the computational cost, can remove irrelevant and redundant features, and accordingly may improve learning performance [1]. Feature selection techniques can be grouped into filter models (e.g., Fisher score, Relief and Relief-F algorithms), wrapper models (e.g., genetic algorithms), and embedded models (e.g., LASSO and elastic net). Numerous studies have investigated the application of feature selection methods in the context of vibration-based bearing fault diagnosis. For example, Haroun et al. introduced a feature selection method for bearing fault detection and diagnosis using a self-organizing map (SOM) [30]. In this method, the authors employed multiple methods from the time, frequency, and time-frequency domains to extract features. Then, Relief-F and minimum redundancy maximum relevance (mRMR) were used to select the optimal features from the extracted features. With these selected features, the SOM was applied to classify the bearing’s health condition [30]. Furthermore, a method for machinery fault diagnosis using redefined dimensionless indicators (RDIs) and mRMR was introduced [31]. In this method, first, the original vibration signals were preprocessed using variational mode decomposition to construct multiple RDIs. Then, the mRMR technique was employed to select several important RDIs. Finally, with these selected RDIs, a grid-search support vector machine (SVM) was used to carry out the identification of machinery faults. In [32], the authors proposed a methodology for bearing fault diagnosis of induction motors using a genetic algorithm (GA) and machine learning classifiers. In this methodology, first, some statistical features were obtained from the raw signals. Then, the GA was employed to select the most important features.
Finally, with the selected features, three different classification algorithms, namely k-nearest neighbour (KNN), decision tree (DT), and random forest (RF), were trained to accomplish the classification task.
Furthermore, recent advances in dimensionality reduction methods have aided the investigation of the compressive sampling (CS) technique [33]. Several researchers have used CS to reduce the dimensionality of the original vibration signals for rolling bearing fault classification. For instance, Wong et al. examined the effects of CS on bearing fault classification [34]. In this investigation, the authors resampled the originally collected vibration signals using a random Bernoulli matrix to match the compressive sampling process. Then, sample entropy-based features were obtained from both the normalized raw vibration signals and the reconstructed signals. Finally, an SVM was trained using the obtained features to accomplish the fault classification task [34]. Tang et al. proposed a CS framework of characteristic harmonics to detect bearing faults [35]. In this method, the characteristic harmonics were obtained from sparse measurements via a compressive matching pursuit technique during the procedure of incomplete reconstruction [35]. Moreover, Xinpeng et al. introduced a bearing fault detection technique using CS and the matching pursuit (MP) reconstruction algorithm [36]. In [4], a method for bearing fault classification from highly compressed measurements was proposed. In this method, CS was used to produce highly compressed measurements of the original bearing vibration signals. Then, a deep neural network (DNN) based on a sparse autoencoder (SAE) was utilized to learn overcomplete sparse representations of the compressed measurements, which were used for the classification of bearing faults using a Softmax layer. Moreover, Ahmed and Nandi proposed a three-stage method for rolling bearing fault diagnosis using CS and subspace learning techniques [37]. In this method, the CS technique was employed to obtain compressively sampled vibration signals from the original vibration signals.
Then, a multistep feature learning algorithm using PCA, linear discriminant analysis (LDA), and canonical correlation analysis (CCA) was used to obtain fewer features from the compressively sampled signals [37]. In [38], a framework for bearing fault classification using CS and feature ranking was proposed. In this framework, the authors used the CS process to produce compressively sampled signals from the raw vibration signals using two compressible representations of vibration signals, namely, Fourier transform-based coefficients and thresholded wavelet transform-based coefficients. Then, various feature ranking procedures were used to select fewer features from the compressively sampled signals. Finally, three classifiers were evaluated for the classification of bearing faults using these selected features.
The issue of collecting large amounts of vibration data for machine fault diagnosis has attracted considerable attention, as such data require large storage capacity and considerable processing time. Existing research has highlighted various techniques for vibration signal analysis that can be applied to obtain useful features from the originally collected vibration data. However, the number of obtained features can affect the performance of these techniques in terms of classification accuracy and computational time, which are particularly important in real implementations of fault diagnosis techniques. This study aims to contribute to this growing area of research by investigating the following: (1) a method that can reduce the high dimensionality of the raw vibration data to a smaller number of features capable of achieving high fault classification accuracy and a highly reduced computational time; (2) whether CS is an appropriate mechanism to compress the original high-dimensional vibration data and then further reduce the dimension of the compressed vibration data to far fewer features that satisfactorily represent the health condition of rolling bearings; and (3) whether the multinomial logistic regression (MLR) algorithm is a suitable classifier for the bearing fault classification task using the fewer selected features.
To accomplish high classification accuracy and a highly reduced computation time, this paper proposes a new methodology for bearing fault classification based on intrinsic dimension estimation-based feature selection and multinomial logistic regression using compressively sampled vibration signals. In this methodology, the input vibration signals are resampled using CS to reduce the high-dimensional samples of the originally collected vibration data. Then, to further reduce the number of features of the compressively sampled vibration signals to far fewer features that can sufficiently represent the health condition of rolling bearings, and consequently achieve high classification accuracy and a highly reduced computation time, a feature selection procedure based on intrinsic dimension estimation, stochastic proximity embedding (SPE), and neighbourhood component analysis (NCA) is applied. Finally, to perform the bearing fault classification task, the fewer selected features are applied as inputs to a multinomial logistic regression (MLR) classification algorithm. The contributions of this paper are as follows:
  • The proposed method produces far fewer features that can represent the health condition of bearings. This study accomplishes high classification accuracies and a highly reduced computational time with a regression analysis-based predictive modelling technique, namely the multinomial logistic regression (MLR) classifier, using the fewer selected features as inputs.
  • A dimensionality reduction process has been proposed, which comprises (1) data compression using CS and (2) an intrinsic dimension estimation-based feature selection process, which includes (a) SPE-based feature selection that utilizes a self-organizing iterative scheme to embed the compressed data into a further lower dimension, and (b) non-parametric NCA-based feature selection that maximizes the stochastic variant of the leave-one-out nearest neighbour score to achieve the best classification accuracy on the training set. This ensures the selection of fewer features from the high-dimensional data capable of achieving high classification accuracy and a reduced computational time.
  • We studied the impact of the values of two parameters within the data compression and feature selection process used in the proposed method, namely the compressive sampling rate and the NCA tolerance value, on the number of selected features and the fault classification accuracy.
  • Two fault classification case studies of rolling element bearing vibration signals under different working loads are used to evaluate the proposed method.
  • Compared to recently published classification results from the literature on the same vibration-bearing datasets used in this study, our proposed method achieves high classification accuracy and highly reduced computation time, which suggests that our proposed methodology could be used in actual applications of vibration-based machine fault diagnosis.
The remainder of this paper is organized as follows. Section 2 describes the proposed framework. Section 3 is devoted to descriptions of the experimental study used to validate the proposed framework and presents comparison results. Finally, Section 4 offers some conclusions.

2. The Proposed Method

The methodological approach taken in this study is a mixed methodology based on vibration data compression, feature selection based on intrinsic dimension estimation, and regression analysis-based predictive modelling techniques. The proposed method automatically learns and selects far fewer features from compressively sampled vibration signals, which can be used as inputs to a classifier for bearing fault detection and classification. The key objective of the proposed method is to achieve high fault classification accuracy while greatly reducing the computation time. The flow chart of the proposed method is presented in Figure 3. First, the compressive sampling (CS) mechanism was employed to compress the acquired vibration data. Then, a feature selection procedure based on intrinsic dimension estimation, stochastic proximity embedding (SPE), and neighbourhood component analysis (NCA) was utilized to estimate and further reduce the dimensionality of the compressively sampled data. Finally, with these reduced features, a classifier, namely the multinomial logistic regression (MLR) algorithm, was employed to classify the bearing’s health condition. The following subsections discuss the proposed method in more detail.
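For readers unfamiliar with the final stage, a minimal NumPy sketch of multinomial logistic regression (softmax regression trained by batch gradient descent) is given below. This is a generic illustration, not the paper's trained model; the function names, learning rate, and epoch count are our own choices.

```python
import numpy as np

def train_mlr(Q, labels, n_classes, lr=0.1, epochs=500):
    """Minimal multinomial logistic regression (softmax) trained by
    batch gradient descent. Q: (f x L) feature matrix, labels: (L,)."""
    f, L = Q.shape
    W = np.zeros((n_classes, f))
    b = np.zeros((n_classes, 1))
    onehot = np.eye(n_classes)[:, labels]            # (C x L) one-hot targets
    for _ in range(epochs):
        logits = W @ Q + b
        logits -= logits.max(axis=0, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=0, keepdims=True)            # class posteriors (C x L)
        W -= lr * (P - onehot) @ Q.T / L             # gradient step on weights
        b -= lr * (P - onehot).sum(axis=1, keepdims=True) / L
    return W, b

def predict_mlr(W, b, Q):
    """Assign each column of Q to the class with the highest score."""
    return np.argmax(W @ Q + b, axis=0)
```

In the proposed pipeline, the columns of Q would be the few selected features per vibration segment and the labels the bearing health conditions.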

2.1. Vibration Data Compression Using CS

To reduce the high dimensionality of the collected vibration data, the proposed method uses the CS mechanism to obtain compressively sampled vibration signals from the original signals. The central principle of CS is that a finite-dimensional signal having sparse or compressible representations can be reconstructed from a small number of linear measurements, much smaller than the number required by the Nyquist sampling rate. Machine vibration signals have compressible representations in several domains, for example in the frequency domain using the FFT. Therefore, in recent times, there has been a growing interest in the application of CS in machine fault diagnosis. CS brings many benefits to vibration-based bearing fault diagnosis, e.g., reducing the high dimension of the acquired vibration data, reducing the computation time required to analyze the collected data, and reducing data transmission costs in cases where it is essential to send the collected data from remote places, e.g., fault diagnosis of offshore wind turbines.
The successful implementation of the CS mechanism is based on two fundamental concepts: (i) the sparsity of the targeted signal, and (ii) a measurement matrix that ensures minimal loss of data information, a condition usually called the restricted isometry property (RIP) [39]. Briefly, we describe the CS mechanism as follows.
Let $x \in R^{n \times 1}$ be the originally collected time-indexed signal. Given a sparsifying transform matrix $\psi \in R^{n \times n}$ whose columns are the basis elements $\{\psi_i\}_{i=1}^{n}$, the signal $x$ can be described as follows,
$x = \sum_{i=1}^{n} \psi_i s_i$ (5)
or,
$x = \psi s$ (6)
Here, $s$ represents an $n \times 1$ column vector of coefficients. If the basis $\psi$ generates a q-sparse representation of the signal $x$, then $x$ of length $n$ can be represented with $q \ll n$ nonzero coefficients. Consequently, Equation (5) can be rewritten as follows,
$x = \sum_{i=1}^{q} \psi_{n_i} s_{n_i}$ (7)
Here, $n_i$ denotes the indices of the basis elements corresponding to the $q$ nonzero coefficients. Accordingly, $s \in R^{n \times 1}$ is a column vector with $q$ nonzero elements and constitutes the sparse representation vector of the signal $x$. Consistent with the CS mechanism, with $m \ll n$ projections of the vector $x$ onto measurement vectors $\{\phi_j\}_{j=1}^{m}$, stacked as the rows of a matrix $\Phi \in R^{m \times n}$, and the sparse representation $s$, the compressed measurements of the signal $x$ can be obtained using the following equation,
$y = \Phi x = \Phi \psi s = \theta s$ (8)
Here, $y$ is an $m \times 1$ column vector of compressed measurements and $\theta = \Phi \psi$ represents the measurement matrix. Figure 4 shows an illustration of the CS framework that can be used to produce the single measurement vector of the compressed measurements $y$. According to CS theory, the original signal $x$ can be reconstructed from the compressed measurements $y$ by applying a recovery algorithm. This is done by first recovering the sparse representation vector $s$ and then employing the inverse of the sparsifying transform $\psi$ to recover $x$. One way to recover the sparse representation $s \in R^{n}$ from its compressed measurement vector $y \in R^{m}$ is the $\ell_0$ minimization technique, which searches for the sparsest vector consistent with the measured data $y = \theta s$, such that,
$\hat{s} = \arg\min_{s} \|s\|_0 \ \ \text{such that} \ \ \theta s = y$ (9)
Moreover, the convex $\ell_1$ norm $\|\cdot\|_1$ can also be employed in place of $\|\cdot\|_0$, such that,
$\hat{s} = \arg\min_{s} \|s\|_1 \ \ \text{such that} \ \ \theta s = y$ (10)
If the measurement matrix $\theta$ satisfies the RIP, the sparse representation $s$ can be reconstructed by solving the convex program in Equation (10). The matrix $\theta$ satisfies the $r$th restricted isometry property (RIP) if there exists a $\delta_r \in (0, 1)$ such that
$(1 - \delta_r)\,\|s\|_{\ell_2}^2 \;\le\; \|\theta s\|_{\ell_2}^2 \;\le\; (1 + \delta_r)\,\|s\|_{\ell_2}^2$ (11)
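The $\ell_1$ recovery problem of Equation (10) is not used in our pipeline (we learn directly from the compressed measurements), but it can be sketched generically as basis pursuit cast as a linear program, here solved with SciPy's `linprog`. The formulation splits each coefficient into a value $s$ and an envelope $t \ge |s|$ and minimizes $\sum t$.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(theta, y):
    """Solve min ||s||_1 subject to theta @ s = y as a linear program:
    variables [s; t], minimize sum(t) with -t <= s <= t."""
    m, n = theta.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])        # objective: sum of t
    A_ub = np.block([[np.eye(n), -np.eye(n)],            #  s - t <= 0
                     [-np.eye(n), -np.eye(n)]])          # -s - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([theta, np.zeros((m, n))])          # theta @ s = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]
```

With a Gaussian measurement matrix satisfying the RIP with high probability, a sufficiently sparse $s$ is typically recovered exactly up to solver tolerance.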
In our case, the collected vibration data is a collection of signals, which can be represented through a matrix of sparse vectors $S$, such that,
$Y = \Theta S$ (12)
where $Y \in R^{m \times L}$, $L$ is the number of observations and $m$ is the number of compressed measurements, $\Theta \in R^{m \times n}$ represents a dictionary, and $S \in R^{n \times L}$ is a sparse representation matrix. Therefore, multiple measurement vector compressive sampling is used in our proposed method.
Our proposed method is intended to learn directly from the compressed vibration signals. To obtain the compressively sampled signals from the collected vibration dataset $X = [x_1, x_2, \ldots, x_L] \in R^{n \times L}$, first, the FFT, which commonly provides a sparse basis for vibration signals, is employed to produce the sparse representation $S \in R^{n \times L}$ that comprises only a small number $q \ll n$ of nonzero coefficients. The FFT algorithm calculates the $n$-point complex discrete Fourier transform (DFT) of the signal $X$. In this study, we utilise the magnitude of the DFT to obtain $S$. Then, a random matrix with i.i.d. Gaussian entries, which satisfies the RIP, is used as the measurement matrix $\Theta \in R^{m \times n}$ [40]. Finally, a compressive sampling rate $\alpha$ is used to produce the compressively sampled signals $Y \in R^{m \times L}$, where $m$ represents the number of compressed signal elements, given by $m = \alpha n$. This compression process is summarized in Algorithm 1 below:
Algorithm 1 Vibration data compression using CS
1. Input: vibration dataset $X \in R^{n \times L}$; measurement matrix $\Theta \in R^{m \times n}$; compressive sampling rate $\alpha$
2. Output: compressively sampled vibration signals $Y \in R^{m \times L}$
3. Produce the sparse representations $S$ of $X$: $abs(FFT(X)) \rightarrow S \in R^{n \times L}$
4. Project $S$ onto $\Theta$ with compressive sampling rate $\alpha$ to obtain the compressively sampled signals $Y \in R^{m \times L}$
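Algorithm 1 can be sketched in a few lines of NumPy. The seed and the scaling of the Gaussian matrix are our own illustrative choices; the paper does not specify them.

```python
import numpy as np

def compress_vibration(X, alpha, seed=0):
    """Algorithm 1: compress a vibration dataset X (n x L) via CS."""
    n, L = X.shape
    m = int(alpha * n)                     # number of compressed measurements, m = alpha * n
    S = np.abs(np.fft.fft(X, axis=0))      # sparse representation: magnitude of the n-point DFT
    rng = np.random.default_rng(seed)
    # i.i.d. Gaussian measurement matrix, which satisfies the RIP with high probability
    Theta = rng.standard_normal((m, n)) / np.sqrt(m)
    Y = Theta @ S                          # compressively sampled signals (m x L)
    return Y, Theta
```

For example, with $n = 128$ samples per segment and $\alpha = 0.25$, each signal is reduced to $m = 32$ compressed measurements.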

2.2. Feature Selection Process

Based on CS theory, the compressively sampled signals ($Y$) contain sufficient information to successfully reconstruct the originally collected vibration signals. Nevertheless, the dimensions of these compressively sampled vibration signals can be further reduced to attain a greater reduction in computational cost while maintaining high classification accuracy. Accordingly, our proposed method offers a feature selection process to learn and select fewer features from the compressively sampled signals ($Y$) to achieve superior classification accuracy and reduced computation costs. The feature selection process starts by identifying the minimal number of features required to represent the compressively sampled vibration signals $Y$, using a global dimension estimator, namely the geodesic minimal spanning tree (GMST). The GMST constructs the geodesic graph $G$, from which the intrinsic dimension ($d$) is estimated by computing minimal spanning trees (MSTs) in which each data sample $x_i$ is linked to its $k$ nearest neighbours [41], such that,
$$d(Y) = \min_{T} \sum_{e \in T} D_{Eucl}(e)$$
Here, $T$ signifies the set of all the subtrees of G, $e$ is an edge in $T$, and $D_{Eucl}(e)$ is the Euclidean length of $e$. Then, with the computed minimal number of features $d$, where $d < m$, the SPE technique is employed to convert the compressively sampled data $Y$ into a reduced-dimensionality space of significant representation $Z \in \mathbb{R}^{d \times L}$.
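The MST-based dimension estimate can be illustrated with a small sketch. It exploits the fact that the total MST length over $n$ samples of a d-dimensional manifold grows as $n^{(d-1)/d}$, so the slope of a log-log fit of MST length against subsample size yields an estimate of d. This is a simplified illustration using Euclidean rather than geodesic k-nearest-neighbour edge lengths, and the function names are ours:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(points):
    """Total edge weight of the Euclidean minimal spanning tree."""
    D = squareform(pdist(points))
    return minimum_spanning_tree(D).sum()

def estimate_intrinsic_dim(Y, sizes=(100, 200, 400, 800), seed=0):
    """MST length over n samples of a d-dimensional manifold grows as
    n^((d-1)/d); fit the log-log slope a and invert: d ~ 1 / (1 - a)."""
    rng = np.random.default_rng(seed)
    lengths = [mst_length(Y[rng.choice(len(Y), n, replace=False)])
               for n in sizes]
    a = np.polyfit(np.log(sizes), np.log(lengths), 1)[0]
    return max(1, round(1.0 / (1.0 - a)))

# Example: points with 3-dimensional structure embedded in 10 dimensions
rng = np.random.default_rng(1)
Y = rng.uniform(size=(1000, 3)) @ rng.standard_normal((3, 10))
d = estimate_intrinsic_dim(Y)
```

On such data the estimator recovers an intrinsic dimension close to 3, far below the 10 ambient coordinates.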
The SPE is a nonlinear approach with many benefits: it is simple to implement, very fast, scales linearly with the size of the data in both time and memory, and is relatively insensitive to missing data [42]. Thus, SPE is an appropriate method for this investigation. The SPE utilizes a self-organizing iterative scheme to embed m-dimensional data into d dimensions, such that the geodesic distances in the original m dimensions are preserved in the embedded d dimensions. Briefly, we describe the simplified SPE procedure as follows [43]:
  • Initialize the coordinates $y_i$. Select an initial learning rate $\beta$.
  • Select a pair of points, $i$ and $j$, at random, and calculate their distance $d_{ij} = \lVert y_i - y_j \rVert$. If $d_{ij} \neq r_{ij}$ ($r_{ij}$ is the distance of the corresponding proximity), update the coordinates $y_i$ and $y_j$ using the following equations,
$$y_i \leftarrow y_i + \beta\, \frac{1}{2}\, \frac{r_{ij} - d_{ij}}{d_{ij} + \upsilon}\, (y_i - y_j)$$
    and
$$y_j \leftarrow y_j + \beta\, \frac{1}{2}\, \frac{r_{ij} - d_{ij}}{d_{ij} + \upsilon}\, (y_j - y_i)$$
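The iterative update above can be sketched as follows. This is a simplified illustration (the function name, the linear decay of $\beta$, and the small constant `eps` standing in for $\upsilon$ are our choices):

```python
import numpy as np

def spe_embed(Y, d, n_iter=30000, beta0=1.0, eps=1e-8, seed=0):
    """Minimal SPE sketch: embed the rows of Y into d dimensions so that
    pairwise input-space distances r_ij are preserved.  The learning rate
    beta is decreased linearly over the iterations (one possible schedule)."""
    rng = np.random.default_rng(seed)
    L = len(Y)
    Z = rng.uniform(size=(L, d))                 # initial coordinates y_i
    for t in range(n_iter):
        beta = beta0 * (1.0 - t / n_iter)
        i, j = rng.choice(L, size=2, replace=False)
        r_ij = np.linalg.norm(Y[i] - Y[j])       # proximity in input space
        d_ij = np.linalg.norm(Z[i] - Z[j])       # distance in the embedding
        if abs(d_ij - r_ij) > 1e-12:             # update only when they differ
            step = 0.5 * beta * (r_ij - d_ij) / (d_ij + eps)
            delta = Z[i] - Z[j]
            Z[i] = Z[i] + step * delta           # y_i moved along (y_i - y_j)
            Z[j] = Z[j] - step * delta           # y_j moved along (y_j - y_i)
    return Z

# Example: recover 2-D structure hidden in 8-D data
rng = np.random.default_rng(1)
Y = rng.uniform(size=(60, 2)) @ rng.standard_normal((2, 8))
Z = spe_embed(Y, d=2)
```

After enough random pair updates, distances in the embedding correlate strongly with the input-space distances.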
Here, $\upsilon$ is a small number to avoid division by zero. This step is repeated for a given number of iterations, with $\beta$ reduced by a recommended decrement $\delta\beta$. Finally, to obtain far fewer selected features in the feature selection step, our proposed method uses the NCA technique to automatically select a subset of the SPE-based learned features by converting $Z \in \mathbb{R}^{d \times L}$ into $Q \in \mathbb{R}^{f \times L}$, where $f < d$. Briefly, we describe the NCA feature selection as follows:
Let $Z = \{(z_1, c_1), \ldots, (z_i, c_i), \ldots, (z_L, c_L)\}$ be a training set of d-dimensional samples, where $c_i \in \{1, \ldots, C\}$ is the matching class label. The NCA searches for a weighting vector $w$ that is used to select a feature subset. In this method, first, the weighted distance between two samples $z_i$ and $z_j$ is computed using the following equation,
$$D_w(z_i, z_j) = \sum_{r=1}^{d} w_r^2\, \lvert z_{ir} - z_{jr} \rvert$$
Here, $w_r$ is the weight related to the r-th feature. The strategy is then to maximize the leave-one-out classification accuracy on the training set. The reference point is defined by a probability distribution; in our case, the probability that $z_i$ chooses $z_j$ as its reference point is such that,
$$p_{ij} = \begin{cases} \dfrac{k\left(D_w(z_i, z_j)\right)}{\sum_{k \neq i} k\left(D_w(z_i, z_k)\right)}, & \text{if } i \neq j \\ 0, & \text{if } i = j \end{cases}$$
Here, $k\left(D_w(z_i, z_j)\right) = \exp\left(-D_w(z_i, z_j)/\sigma\right)$ is a kernel function and $\sigma$ is an input parameter that represents the kernel width. The probability of correct classification of $z_i$ can be computed using the following equation,
$$p_i = \sum_{j} c_{ij}\, p_{ij}$$
Here,
$$c_{ij} = \begin{cases} 1, & \text{if } c_i = c_j \\ 0, & \text{otherwise} \end{cases}$$
This process of NCA is summarized in Algorithm 2 below [44]:
Algorithm 2 NCA Feature Selection
1. Input:
$Z \in \mathbb{R}^{d \times L}$; $\gamma$: initial step length; $\sigma$: kernel width; $\lambda$: regularisation parameter; and $\eta$: small positive constant.
2. Initialization: $w^{(0)} = (1, 1, \ldots, 1)$, $\epsilon^{(0)} = -\infty$, $t = 0$.
3. Repeat
4.  for $i = 1, \ldots, L$ do
5.   Compute $p_{ij}$ and $p_i$ using $w^{(t)}$ according to (2) and (3)
6.  for $r = 1, \ldots, d$ do
7.   $\Delta_r = 2\left(\dfrac{1}{\sigma}\displaystyle\sum_{i}\left(p_i \sum_{j} p_{ij}\, \lvert z_{ir} - z_{jr} \rvert - \sum_{j} c_{ij}\, p_{ij}\, \lvert z_{ir} - z_{jr} \rvert\right) - \lambda\right) w_r^{(t)}$
8.  $t = t + 1$
9.  $w^{(t)} = w^{(t-1)} + \gamma \Delta$
10.  $\epsilon^{(t)} = \xi\left(w^{(t-1)}\right)$
11.  if $\epsilon^{(t)} > \epsilon^{(t-1)}$ then $\gamma = 1.01\gamma$
12.  else
13.   $\gamma = 0.4\gamma$
14.  until $\lvert \epsilon^{(t)} - \epsilon^{(t-1)} \rvert < \eta$
15.  $w = w^{(t)}$
16.  Return $w$
Here, $\xi(w) = \sum_{i} p_i - \lambda \sum_{r=1}^{d} w_r^2$ denotes the regularised leave-one-out objective maximized in Algorithm 2. To select the most important features based on the feature weights, the criterion is a threshold value (Thr), which can be computed as follows:
$$Thr = \tau\, \max(w)$$
Here, $\tau$ is the tolerance value.
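A compact sketch of this selection procedure, with two simplifications of ours: a fixed step length in place of Algorithm 2's adaptive $1.01\gamma/0.4\gamma$ schedule, and the objective normalised by L:

```python
import numpy as np

def nca_select(Z, c, sigma=1.0, lam=0.01, gamma=0.2, n_iter=100, tau=0.01):
    """Sketch of NCA-based feature selection: gradient ascent on
    xi(w) = (1/L) sum_i p_i - lam * ||w||^2 with a fixed step gamma.
    Z: (L, d) samples; c: (L,) integer labels."""
    L, d = Z.shape
    w = np.ones(d)                                    # w^(0) = (1, ..., 1)
    A = np.abs(Z[:, None, :] - Z[None, :, :])         # |z_ir - z_jr|, (L, L, d)
    same = (c[:, None] == c[None, :]).astype(float)   # c_ij indicator
    np.fill_diagonal(same, 0.0)
    for _ in range(n_iter):
        D = A @ (w ** 2)                              # weighted distances D_w
        K = np.exp(-D / sigma)                        # kernel k(D_w)
        np.fill_diagonal(K, 0.0)                      # p_ii = 0
        P = K / K.sum(axis=1, keepdims=True)          # p_ij
        p = (P * same).sum(axis=1)                    # p_i
        grad_terms = (p[:, None, None] * P[:, :, None]
                      - (same * P)[:, :, None]) * A   # gradient inner sums
        grad = 2.0 * w * (grad_terms.sum(axis=(0, 1)) / (sigma * L) - lam)
        w = np.maximum(w + gamma * grad, 0.0)         # ascent step, keep w >= 0
    return w, np.where(w > tau * w.max())[0]          # threshold Thr = tau max(w)

# Demo: feature 0 carries the class signal; features 1-4 are noise
rng = np.random.default_rng(0)
c = np.repeat([0, 1], 30)
Z = rng.standard_normal((60, 5))
Z[:, 0] = 3.0 * c + 0.3 * rng.standard_normal(60)
w, selected = nca_select(Z, c)
```

Informative features grow large weights while irrelevant ones decay under the $-\lambda$ term, so the discriminative feature survives the threshold.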

2.3. Regression Analysis-Based Predictive Modelling Using MLR

Regression analysis can be used as a predictive modelling method as it defines the relationship between a dependent variable and one or more independent descriptive variables. There are many types of regression analysis methods, such as linear regression, polynomial regression, logistic regression, etc. Of these, logistic regression (LR) is one of the most used methods in many machine learning applications. The LR is usually utilized for binary classification, i.e., the class labels c have only two values, e.g., (Fault, Normal). Briefly, we describe the LR as follows:
Let $Q = \{q_1, q_2, \ldots, q_L\}$ be the f-dimensional training data produced in the feature selection step of our proposed method. The LR is a probabilistic discriminative model that learns $P(c|Q)$ directly from the training data, where $c_i \in \{0, 1\}$, such that,
$$P(c = 1 \mid q) = h_1(q) = g(\theta^T q) = \frac{1}{1 + e^{-\theta^T q}}$$
Here, $g(\theta^T q)$ is the logistic function, which is also called the sigmoid function. Since $P(c = 0 \mid q) + P(c = 1 \mid q) = 1$, we can compute $P(c = 0 \mid q)$ as follows:
$$P(c = 0 \mid q) = h_0(q) = 1 - P(c = 1 \mid q) = 1 - \frac{1}{1 + e^{-\theta^T q}}$$
The likelihood of the parameters given the L training examples can be computed using the following equation,
$$L(\theta) = \prod_{i=1}^{L} g(\theta^T q_i)^{c_i} \left(1 - g(\theta^T q_i)\right)^{1 - c_i}$$
Here, $\theta = (\theta_0, \theta_1, \ldots, \theta_f)$ represents the parameters of the model. Nevertheless, the log-likelihood is widely utilised and, consequently, Equation (23) can be updated as Equation (24), such that
$$\log L(\theta) = \sum_{i=1}^{L} \log\left( g(\theta^T q_i)^{c_i} \left(1 - g(\theta^T q_i)\right)^{1 - c_i} \right)$$
To avoid overfitting, a regularisation term $\lambda$ is added to the log-likelihood function, such that
$$\log L(\theta) = \sum_{i=1}^{L} \log P\left(c_i = c_k \mid q_i; \theta\right) - \frac{\lambda}{2} \lVert \theta \rVert^2$$
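As a worked sketch, the regularised log-likelihood can be evaluated directly for the binary case (the function and variable names below are ours, for illustration only):

```python
import numpy as np

def sigmoid(x):
    """Logistic (sigmoid) function g(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def reg_log_likelihood(theta, Q, c, lam):
    """Regularised log-likelihood: sum_i log[g^{c_i} (1 - g)^{1 - c_i}]
    minus (lam/2) ||theta||^2, with g = g(theta^T q_i)."""
    p = sigmoid(Q @ theta)
    ll = np.sum(c * np.log(p) + (1 - c) * np.log(1 - p))
    return ll - 0.5 * lam * (theta @ theta)

# Tiny illustration: the penalty lowers the objective for nonzero theta
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 3))
c = (Q[:, 0] > 0).astype(float)
theta = np.array([1.0, 0.0, 0.0])
l0 = reg_log_likelihood(theta, Q, c, lam=0.0)
l1 = reg_log_likelihood(theta, Q, c, lam=1.0)
```

With $\lVert \theta \rVert^2 = 1$, the regularised value is exactly $\lambda/2$ below the unregularised one, which discourages large parameter magnitudes.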
The MLR classifier, which also goes by the name SoftMax regression in ANNs, generalizes the LR to a multi-class classification problem with labels $c_i \in \{1, \ldots, K\}$, such that
$$h_\theta(q) = \begin{bmatrix} P(c = 1 \mid q; \theta) \\ P(c = 2 \mid q; \theta) \\ \vdots \\ P(c = K \mid q; \theta) \end{bmatrix} = \frac{1}{\sum_{j=1}^{K} e^{\theta_j^T q}} \begin{bmatrix} e^{\theta_1^T q} \\ e^{\theta_2^T q} \\ \vdots \\ e^{\theta_K^T q} \end{bmatrix}$$
Here, $\theta_1, \theta_2, \ldots, \theta_K$ represent the parameters of the multinomial logistic regression model. In this study, we are dealing with a multi-class classification problem, so MLR is employed to perform the classification task in our proposed method.
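In practice, an off-the-shelf implementation can be used. The sketch below fits a multinomial model with scikit-learn on synthetic data (not the paper's vibration features; the class count, centres, and regularisation strength are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# With an integer target of K > 2 classes and the default lbfgs solver,
# scikit-learn's LogisticRegression fits the multinomial (softmax) model
# above; C is the inverse of the regularisation strength.
rng = np.random.default_rng(0)
K, f = 6, 8                      # e.g., 6 health conditions, 8 selected features
centers = 5.0 * rng.standard_normal((K, f))
c = np.repeat(np.arange(K), 50)  # 50 examples per condition
Q = centers[c] + rng.standard_normal((K * 50, f))

clf = LogisticRegression(C=10.0, max_iter=1000).fit(Q, c)
proba = clf.predict_proba(Q[:1])  # one row of K class probabilities
```

Each row of `predict_proba` sums to one, matching the softmax normalisation in the equation above.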

3. Experimental Study

Two fault classification case studies of rolling element bearings using vibration signals are presented to evaluate the proposed method.

3.1. First Case Study

The vibration dataset used in this case study was acquired from experiments on a test rig that simulates the running environment of roller bearings. In these experiments, several interchangeable faulty roller bearings were inserted in the test rig to represent the types of faults that can normally occur in roller bearings. As shown in Figure 5, the test rig used to collect the vibration dataset of bearings contains a 12V DC electric motor driving the shaft via a flexible coupling. The shaft was supported by two Plummer bearing blocks, in which the various damaged bearings were inserted. Two accelerometers were used to measure the vibration signals in both the horizontal and vertical planes. The output from the accelerometers was fed via a charge amplifier to a Loughborough Sound Images DSP32 ADC card with a low-pass filter using a cut-off of 18 kHz. The sampling rate was 48 kHz. Six health conditions of roller bearings were recorded: two normal conditions, i.e., brand new (NO) and worn yet undamaged (NW), and four faulty conditions, i.e., inner race fault (IR), outer race fault (OR), rolling element fault (RE), and cage fault (CA). Table 1 explains the corresponding characteristics of these bearing health conditions [4].
The data were recorded at 16 different speeds within 25–75 rev/s. At each speed, ten time series were recorded for each condition, i.e., 160 examples per condition. This resulted in a total of 960 examples with 6000 data points each. Figure 6 illustrates some typical time series plots for the six different conditions.
To apply our proposed method in this case study, first, we randomly selected 50% of the total observations for training, and the remaining 50% was employed for testing the trained model. Then, we computed the compressively sampled vibration signals from the high-dimensional data X, which has 6000 time samples for each of its 960 observations. As described in Algorithm 1, the FFT basis was used as the sparse representation of each signal in X. Then, the CS framework with different sampling rates (α) (0.1, 0.2, and 0.3) using a random Gaussian matrix was used to produce the compressed measurements.
To estimate and reduce further the dimensionality of the compressively sampled signals, first, we computed the intrinsic dimension (d) using the GMST technique. Then, the process combining the SPE and NCA techniques was performed to learn and select far fewer features from the compressively sampled vibration signals. The compressively sampled vibration signals were transformed into further reduced dimensions using the defined intrinsic dimension and the SPE method. Then, a regularized NCA based on feature weights and a relative threshold was employed to select far fewer features with f-dimension from the SPE-based selected features with d-dimension where f < d. We computed the best regularization parameter λ value that corresponds to the minimum average loss to be used in fitting the NCA model on all the reduced dimension data. The final selected features were computed using the feature weights of the NCA model and a relative threshold. The stochastic gradient descent (SGD) solver was used for estimating feature weights. Two tolerance values (0.01 and 0.02) were tested in this case study for computing the threshold values used in the feature selection process. Figure 7 shows an example of the average loss values versus λ values. Figure 8 presents an example of the selected features and their corresponding weights. Moreover, Table 2 shows examples of the computed values of the average intrinsic dimension, the dimension of the NCA-based selected features, least loss, and best λ values taken from 10 trials.
The first benefit of the proposed method is to obtain far fewer features from the acquired vibration signals to be successfully used for rolling bearing fault diagnosis and consequently reduce the computational cost. Therefore, the first set of analyses examined the impact of the tolerance values and the compressive sampling rates on the number of selected features using our proposed method. As shown in Table 2, the average least losses were slightly reduced from 0.014 to 0.013, from 0.013 to 0.009, and from 0.013 to 0.010 using the tolerance value of 0.01 in place of 0.02. Furthermore, with a 0.02 tolerance value, the computed average best lambda value for NCA remained the same for all the compressive sampling rates (with λ = 0.003), while for the tolerance value of 0.01, λ = 0.004 was achieved for the sampling rate of α = 0.3, and λ = 0.004 was obtained for both α = 0.1 and α = 0.2.
Moreover, as Table 2 shows, all the computed average least loss values are very small (in the range of 0.009–0.014), although the feature dimension was reduced from 6000 (in the original raw vibration signals) to 600 (the dimension of the compressively sampled signals with α = 0.1), which was reduced to 28 (d-dimension) and then further to 8 (f-dimension with a 0.01 tolerance value). This suggests that the feature selection step in our proposed method is worthwhile.
Furthermore, the NCA tolerance values have a clear effect on the computed intrinsic dimension (d) and the dimension of the NCA-based selected features. For example, with a 0.02 tolerance value, we obtained intrinsic dimensions (d) of 62, 55, and 33, which reduced to 28, 40, and 26 with a 0.01 tolerance value for α = 0.1, α = 0.2, and α = 0.3, respectively. Furthermore, with the tolerance value of 0.02, we obtained final feature dimensions (f) of 18, 14, and 11, which reduced to 8, 10, and 8 with a 0.01 tolerance for α = 0.1, α = 0.2, and α = 0.3, respectively. Taken together, when the tolerance value decreases, the dimensions of both sets of selected features, i.e., d and f, decrease.
The final NCA-based selected features were used to train the classification algorithm, i.e., the MLR, to classify among c classes of roller bearing health conditions. The overall results are shown in Table 3, where the classification accuracy is the average of 10 trials for each experiment and the time was obtained by averaging the training time and testing time of these 10 trials. It is apparent from this table that our proposed method achieved high classification accuracies (all above 99%) for all the compressive sampling rates and tolerance values with less than 35% of the acquired vibration data samples. Classification accuracies from our proposed method are 99.9%, 99.7%, and 99.5% for only 30%, 20%, and 10% of the whole collected data, with 8, 10, and 8 selected features (with tolerance value = 0.01), respectively, used to train the MLR classifier. Additionally, with tolerance value = 0.02, α = 0.3, and 11 selected features, the proposed method achieved 100% classification accuracy for every single run in our experiments. Moreover, the trained classification model of our proposed method requires less than 0.016 s to complete the classification task.
Table 4 provides sample confusion matrices of the classification results of the MLR classifier using selected features with tolerance value = 0.01 and sampling rates of (a) α = 0.1, (b) α = 0.2, and (c) α = 0.3. As can be seen from Table 4c, the recognition of the bearing health conditions with α = 0.3 is 100%. In Table 4a, with 10% testing data (with α = 0.1), our method misclassified one of the testing examples of condition 5, i.e., RE, as condition 3, i.e., IR. In Table 4b, with 20% testing data (with α = 0.2), our method misclassified only two of the testing examples of condition 2 (NW) as condition 6 (CA).

Comparisons of Results

In this subsection, a comparison of various methods using the same vibration dataset of rolling bearings used in the first case study is presented (see Table 5). In [34], three methods were used for bearing fault diagnosis using SVM. The first method used the whole collected vibration data. The second method used compressively sampled datasets of α = 0.25 and α = 0.5, while the third method used the corresponding reconstructed signals of these compressively sampled data. In [45], a method using a genetic programming (GP) algorithm for feature extraction was used, and then ANN and SVM were employed to classify bearing health conditions. In [46], a hybrid model comprising the fuzzy min–max (FMM) neural network and random forest (RF) with sample entropy (SampEn) and power spectrum (PS) features was utilized to classify bearing health conditions. In [37], a three-stage hybrid method consisting of CS, PCA, LDA, and canonical correlation analysis (CCA) was used for bearing fault classification from (1) the whole 6000 samples from the frequency domain, and (2) compressively sampled data with α = 0.1 and α = 0.2. In [38], a framework combining CS and feature ranking techniques, including Fisher score, Laplacian score, Relief-F, Pearson correlation coefficients, and Chi-square (Chi-2), was used for bearing fault classification from compressively sampled vibration data with α = 0.1 and a feature dimension of 120. Then, with these features, the MLR classifier was used to classify bearing faults. In [47], a three-stage method combining CS (with α = 0.1, 0.2, and 0.3), a feature selection procedure, and SVM was used for bearing fault classification.
As Table 5 shows, the classification results of our proposed method are better than those reported in [34,45]. Moreover, our results are the same as, if not better than, the classification results described in [37,38,46,47]. Our method is extremely fast and needs only 0.015 s to complete the fault classification task, compared to the method in [37], which needs 6.7 s using a classification model trained with 10% of the whole data. Furthermore, the results from our proposed method remain as good as, if not better than, the results stated in [38], although we used very few features (only 8), whereas the method in [38] used 120 features.
For further verification of the efficacy of the proposed method, we conducted three experiments with our proposed method, using the same settings as in our experiments with α = 0.1, 0.2, and 0.3, but employing SVM in place of the MLR classifier, to examine the speed and accuracy of our proposed method compared to the method used in [47]. The results are presented in the last row of Table 5. The classification results of our method with the MLR classifier are as good as, if not better than, the results of our method with SVM. Interestingly, the results demonstrate that our method with the MLR classifier is faster, requiring only 25%, 13.3%, and 10% of the time of our method with SVM to complete the classification task using classification models trained with α = 0.1, 0.2, and 0.3, respectively.

3.2. Second Case Study

The bearing datasets used in this case are provided by Case Western Reserve University (https://engineering.case.edu/bearingdatacenter/download-data-file, accessed on 2 April 2022). The bearing datasets were obtained from a motor-driven mechanical system where the faults were planted into the drive-end bearing of the motor under four different speeds and several health conditions, namely, normal condition (NO), with an IR fault (IR), with a roller element fault (RE), and with an OR fault (OR). Then, the datasets were further categorized by the width of the fault (0.18–0.71 mm) and the load of the motor (0–3 hp). The sampling rates utilized were 12 kHz for some of the sampled data and 48 kHz for the rest. At each speed, 100 time series were recorded for each condition per load. For the IR, OR, and RE conditions, vibration signals for four different fault widths (0.18, 0.36, 0.53, and 0.71 mm) were separately recorded. In this study, of these acquired vibration signals, two groups of datasets were prepared for the evaluation of our proposed method.
The first group of datasets was selected from the data files of the vibration signals sampled at 48 kHz with fault widths of 0.18, 0.36, and 0.53 mm and fixed loads of 1, 2, and 3 hp, with 200 examples chosen per condition. This provided three different datasets, A, B, and C, with 2000 total examples and 2400 data points for each signal. The second group was chosen from the data files of vibration signals sampled at 12 kHz with fault sizes of 0.18, 0.36, 0.53, and 0.71 mm and a load of 2 hp, with 60 examples chosen per condition. This provided a dataset D with 720 total examples and 2000 data points for each signal. The description of the bearing vibration datasets used is presented in Table 6.
To classify the bearing's health conditions from datasets A, B, C, and D described above, the same steps as in the first case study were followed to apply our proposed method. First, 50% of the total observations of datasets A, B, C, and D were randomly chosen for training and the other 50% was employed for testing the trained model. Then, we obtained the compressively sampled vibration signals from the high-dimensional datasets A, B, and C, with 2400 time samples for each of the 2000 observations, and dataset D, with 2000 time samples for each of the 720 observations. The FFT coefficients were employed as sparse representations of all the datasets used in the second case study, i.e., A, B, C, and D. Then, we adopted the CS mechanism with different sampling rates (α) of 0.1, 0.2, and 0.3, using a random Gaussian matrix as described in the first case study, to obtain the compressively sampled signals for each dataset.
To estimate and reduce further the dimensionality of the compressively sampled signals, we applied the same steps of the feature selection process of our proposed method as described in the first case study. Two tolerance values (0.01 and 0.02) were investigated for the feature selection process, which allows us to test the impact of the tolerance values and the compressive sampling rates on the number of selected features. Table 7 shows the computed values of the average intrinsic dimension (d), the dimension of the NCA-based selected features (f), the best least loss, and the best λ values taken from 10 trials for each of the datasets A, B, C, and D. Moreover, it can be seen from the data in Table 7 that all the computed average least loss values are exceedingly small, in the range of 0.000–0.003, although the feature dimension was reduced to far fewer features; e.g., for dataset A, the feature dimension was reduced from 2400 (in the original raw vibration signals) to 240 (the dimension of the compressively sampled signals with α = 0.1), then to 13 (d-dimension), and finally to 6 (f-dimension with a 0.01 tolerance value).
Moreover, the computed average best lambda values for the NCA algorithm are in the range of 0.0002–0.0041. The NCA tolerance values have a clear effect on the computed intrinsic dimension (d) and the dimension of the NCA-based selected features. For example, for dataset A with a 0.02 tolerance value, we obtained intrinsic dimensions (d) of 18, 21, and 25, which reduced to 13, 15, and 15 with a 0.01 tolerance value for α = 0.1, α = 0.2, and α = 0.3, respectively. Furthermore, for dataset A with the tolerance value of 0.02, we obtained final feature dimensions (f) of 10, 12, and 14, which reduced to 6, 7, and 9 with a 0.01 tolerance for α = 0.1, α = 0.2, and α = 0.3, respectively. It can therefore be concluded that when the tolerance value decreases, the dimensions of both sets of selected features, i.e., d and f, decrease. Figure 9 shows an example of the average loss values versus λ values. Figure 10 presents an example of the selected features and their corresponding weights.
The final NCA-based selected features from datasets A, B, C, and D were used to train the MLR classifier to obtain a trained classification model for each dataset to classify among c classes of roller bearing health conditions. Table 8 shows an overview of the testing results for each bearing vibration dataset, where the classification accuracy is the average of 10 trials for each experiment and the time was obtained by averaging the testing time of these 10 trials. One of the more significant findings to emerge from the results in Table 8 is that the classification accuracies with α = 0.2 and 0.3 and tolerance values of 0.01 and 0.02 are all over 99% for all datasets A, B, C, and D. For datasets B and C with α = 0.3 and tolerance values of 0.01 and 0.02, our proposed method attained 100% classification accuracy.
Similarly, for dataset A with α = 0.3 and tolerance value of 0.02, our proposed method achieved 100% classification accuracy for every single run in our experiments. Furthermore, the trained classification models of our proposed method for all bearing vibration datasets A, B, C, D require less than 0.01 s to complete the classification task. These findings suggest that our proposed method is fast and offers high classification accuracies for rolling bearings from vibration datasets under different load levels as in A, B, and C.
Table 9 presents sample confusion matrices of the classification results of the MLR classifier using selected features with tolerance value = 0.01 and sampling rates of (a) α = 0.1, (b) α = 0.2, and (c) α = 0.3 with dataset A. As can be seen from Table 9a, with 10% testing data, our method misclassified three of the testing examples of condition 6, i.e., IR2, as condition 1, i.e., NO; condition 3, i.e., RE2; and condition 9, i.e., OR2, respectively. In Table 9b, with 20% testing data, our method misclassified three of the testing examples of condition 6, i.e., IR2, as condition 7, i.e., IR3. Moreover, in Table 9c, the recognition of all bearing health conditions is 100%.

Comparisons of Results

For additional assessment of the efficiency of the proposed method, Table 10 shows comparisons with some recently published results using the same bearing vibration datasets used in the second case study. In [4], a CS-DNN technique, which combines a deep neural network (DNN) with two hidden layers and the Haar wavelet-based CS technique, was used to classify rolling bearings from the same rolling bearing datasets A, B, and C with α = 0.1. In [38], a framework combining CS and feature ranking techniques, including Fisher score, Laplacian score, Relief-F, Pearson correlation coefficients, and Chi-square (Chi-2), was used for bearing fault classification from dataset D with α = 0.1 and a feature dimension of 120. With these features, the MLR classifier was used to classify bearing faults. In [48], several methods were used to classify bearing faults with the same roller bearing dataset D used in the second case study. One of the methods applied feature selection by adjusted Rand index and standard deviation ratio (FSAR) to the original feature set (OFS). Some of the other techniques utilized PCA, LDA, LFDA, and support margin LFDA (SM-LFDA). The selected features of these methods were used to train an SVM for bearing fault classification. Moreover, classification results for bearing fault classification using two methods with datasets A, B, and C are reported in [49]. The first method is based on a deep neural network (DNN) and the second is based on a backpropagation neural network (BPNN). Additionally, classification results using a generic multi-layer perceptron (MLP) method with datasets A, B, and C are reported in [50].
As shown in Table 10, the results from our proposed method are better than those reported in [38,48,50]. Additionally, our classification results are better than the results produced using BPNN in [49]. Moreover, our results are the same as, if not better than, the results reported in [4] and the results obtained using DNN in [49]. Remarkably, the results show that our method is faster than the CS-DNN technique used in [4], as our method requires less than 0.005 s while the CS-DNN requires at least 5.7 s to complete the classification task.
In summary, the high reduction in the computation time originates from two sources: (i) using CS, which allows us to use a small sampling rate, as in α = 0.1, 0.2, and 0.3; and (ii) selecting far fewer features to be used for training the classification algorithm and for classifying rolling bearing health conditions using the trained classification model. Finally, our proposed method achieves classification results for all the rolling bearing vibration datasets A, B, C, and D that are the same as, if not better than, fault classification results from the literature on the same vibration bearing datasets.

4. Conclusions

The purpose of the present research was to examine the classification of bearing health conditions with far fewer selected features of compressively sampled vibration signals, to achieve a highly reduced computation time while still achieving high classification accuracy. The proposed method comprises a CS-based technique, which was used to obtain compressed vibration signals, followed by an intrinsic dimension estimation-based feature selection process that includes SPE-based feature learning with a self-organizing iterative scheme to embed the compressed data into a further lower dimension, and a non-parametric NCA-based feature selection that maximizes a stochastic variant of the leave-one-out nearest neighbour score to achieve the best classification accuracy on the training set. This ensures selecting fewer features from the high-dimensional data, capable of achieving high classification accuracy and a reduced computation time.
The multinomial logistic regression (MLR) algorithm was used to classify bearing faults. Two fault classification case studies of rolling bearing vibration signals under different working loads were used to test the proposed method. The first set of analyses inspected the impact of the compressive sampling rate and the tolerance values on the number of selected features. The experimental results of bearing fault classification demonstrated that the proposed method could obtain higher classification accuracy and a higher reduction in the computational time. The higher reduction in the computation time originates from two causes: (i) using CS, which allows us to use a small sampling rate, as in α = 0.1, 0.2, and 0.3; and (ii) selecting far fewer features to be used for training the classification algorithm and for classifying rolling bearing health conditions using the trained classification model. Finally, our proposed method achieves classification results for all the rolling bearing vibration datasets A, B, C, and D that are as good as, if not better than, classification results from the literature on the same vibration bearing datasets.

Author Contributions

H.O.A.A. and A.K.N. conceived and designed this paper. H.O.A.A. performed the experiments. H.O.A.A. and A.K.N. wrote a draft of the manuscript and contributed to discussing the results in the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in the first case study may be available on request from the first author, Hosameldin O. A. Ahmed.

Acknowledgments

The authors wish to thank Brunel University London for their support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ahmed, H.; Nandi, A.K. Condition Monitoring with Vibration Signals: Compressive Sampling and Learning Algorithms for Rotating Machines; John Wiley & Sons: Hoboken, NJ, USA, 2020.
2. Immovilli, F.; Bellini, A.; Rubini, R.; Tassoni, C. Diagnosis of bearing faults in induction machines by vibration or current signals: A critical comparison. IEEE Trans. Ind. Appl. 2010, 46, 1350–1359.
3. Randall, R.B. Vibration-Based Condition Monitoring: Industrial, Aerospace, and Automotive Applications; John Wiley & Sons: Hoboken, NJ, USA, 2011.
4. Ahmed, H.O.A.; Wong, M.L.D.; Nandi, A.K. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features. Mech. Syst. Signal Process. 2018, 99, 459–477.
5. Nandi, S.; Toliyat, H.A.; Li, X. Condition monitoring and fault diagnosis of electrical motors—A review. IEEE Trans. Energy Convers. 2005, 20, 719–729.
6. Ahmed, H. Intelligent Methods for Condition Monitoring of Rolling Bearings Using Vibration Data. Ph.D. Thesis, Brunel University London, London, UK, 2019.
7. Rai, A.; Upadhyay, S.H. A review on signal processing techniques utilized in the fault diagnosis of rolling element bearings. Tribol. Int. 2016, 96, 289–306.
8. McCormick, A.C.; Nandi, A.K. A comparison of artificial neural networks and other statistical methods for rotating machine condition classification. In Proceedings of the IEE Colloquium on Modeling and Signal Processing for Fault Diagnosis (Digest No: 1996/260), Leicester, UK, 18 September 1996; Volume 260, pp. 2/1–2/6.
9. Jack, L.B.; Nandi, A.K. Genetic algorithms for feature selection in machine condition monitoring with vibration signals. IEE Proc.-Vis. Image Signal Process. 2000, 147, 205–212.
10. Jack, L.B.; Nandi, A.K. Fault detection using support vector machines and artificial neural networks, augmented by genetic algorithms. Mech. Syst. Signal Process. 2002, 16, 373–390.
11. Sreejith, B.; Verma, A.K.; Srividya, A. Fault diagnosis of rolling element bearing using time-domain features and neural networks. In Proceedings of the 2008 IEEE Region 10 and the Third International Conference on Industrial and Information Systems, Kharagpur, India, 8–10 December 2008; pp. 1–6.
12. Prieto, M.D.; Cirrincione, G.; Espinosa, A.G.; Ortega, J.A.; Henao, H. Bearing fault detection by a novel condition-monitoring scheme based on statistical-time features and neural networks. IEEE Trans. Ind. Electron. 2012, 60, 3398–3407.
13. McCormick, A.C.; Nandi, A.K. Real-time classification of rotating shaft loading conditions using artificial neural networks. IEEE Trans. Neural Netw. 1997, 8, 748–757.
14. Li, B.; Goddu, G.; Chow, M.Y. Detection of common motor bearing faults using frequency-domain vibration signals and a neural network-based approach. In Proceedings of the 1998 American Control Conference (IEEE Cat. No. 98CH36207), Philadelphia, PA, USA, 24–26 June 1998; Volume 4, pp. 2032–2036.
15. Zeng, L.; Wang, H.P. Machine-fault classification: A fuzzy-set approach. Int. J. Adv. Manuf. Technol. 1991, 6, 83–93.
16. Dhamande, L.S.; Chaudhari, M.B. Bearing fault diagnosis based on statistical feature extraction in time and frequency domain and neural network. Int. J. Veh. Struct. Syst. 2017, 8, 229–240.
17. Helmi, H.; Forouzantabar, A. Rolling bearing fault detection of an electric motor using time domain and frequency domain feature extraction and ANFIS. IET Electr. Power Appl. 2019, 13, 662–669.
18. Wang, H.; Chen, P. Fuzzy diagnosis method for rotating machinery in variable rotating speed. IEEE Sens. J. 2010, 11, 23–34.
19. Ali, J.B.; Fnaiech, N.; Saidi, L.; Chebel-Morello, B.; Fnaiech, F. Application of empirical mode decomposition and artificial neural network for automatic bearing fault diagnosis based on vibration signals. Appl. Acoust. 2015, 89, 16–27.
20. Djebala, A.; Ouelaa, N.; Hamzaoui, N. Detection of rolling bearing defects using discrete wavelet analysis. Meccanica 2008, 43, 339–348.
21. He, M.; He, D. Deep learning-based approach for bearing fault diagnosis. IEEE Trans. Ind. Appl. 2017, 53, 3057–3065.
22. Immovilli, F.; Cocconcelli, M.; Bellini, A.; Rubini, R. Detection of generalized-roughness bearing fault by spectral-kurtosis energy of vibration or current signals. IEEE Trans. Ind. Electron. 2009, 56, 4710–4717.
23. Lei, Y.; Lin, J.; He, Z.; Zi, Y. Application of an improved kurtogram method for fault diagnosis of rolling element bearings. Mech. Syst. Signal Process. 2011, 25, 1738–1749.
24. Fan, H.; Shao, S.; Zhang, X.; Wan, X.; Cao, X.; Ma, H. Intelligent fault diagnosis of rolling bearing using FCM clustering of EMD-PWVD vibration images. IEEE Access 2020, 8, 145194–145206.
25. Ye, L.; Ma, X.; Wen, C. Rotating machinery fault diagnosis method by combining time-frequency domain features and CNN knowledge transfer. Sensors 2021, 21, 8168.
26. Žvokelj, M.; Zupan, S.; Prebil, I. Multivariate and multiscale monitoring of large-size low-speed bearings using ensemble empirical mode decomposition method combined with principal component analysis. Mech. Syst. Signal Process. 2010, 24, 1049–1067.
27. Guo, Y.; Na, J.; Li, B.; Fung, R.F. Envelope extraction-based dimension reduction for independent component analysis in fault diagnosis of rolling element bearing. J. Sound Vib. 2014, 333, 2983–2994.
28. De Moura, E.P.; Souto, C.R.; Silva, A.A.; Irmao, M.A.S. Evaluation of principal component analysis and neural network performance for bearing fault diagnosis from vibration signal processed by RS and DF analyses. Mech. Syst. Signal Process. 2011, 25, 1765–1772.
29. Dong, S.; Luo, T.; Zhong, L.; Chen, L.; Xu, X. Fault diagnosis of bearing based on the kernel principal component analysis and optimized k-nearest neighbour model. J. Low-Freq. Noise Vib. Act. Control 2017, 36, 354–365.
30. Haroun, S.; Seghir, A.N.; Touati, S. Feature selection for enhancement of bearing fault detection and diagnosis based on a self-organizing map. In International Conference on Electrical Engineering and Control Applications; Springer: Cham, Switzerland, 2016; pp. 233–246.
31. Hu, Q.; Si, X.S.; Qin, A.S.; Lv, Y.R.; Zhang, Q.H. Machinery fault diagnosis scheme using redefined dimensionless indicators and mRMR feature selection. IEEE Access 2020, 8, 40313–40326.
32. Toma, R.N.; Prosvirin, A.E.; Kim, J.M. Bearing fault diagnosis of induction motors using a genetic algorithm and machine learning classifiers. Sensors 2020, 20, 1884.
33. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
34. Wong, M.L.D.; Zhang, M.; Nandi, A.K. Effects of compressed sensing on the classification of bearing faults with entropic features. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 2256–2260.
35. Tang, G.; Hou, W.; Wang, H.; Luo, G.; Ma, J. Compressive sensing of roller bearing faults via harmonic detection from under-sampled vibration signals. Sensors 2015, 15, 25648–25662.
36. Xinpeng, Z.; Niaoqing, H.; Zhe, C. A bearing fault detection method based on compressed sensing. In Engineering Asset Management-Systems, Professional Practices and Certification; Springer: Cham, Switzerland, 2015; pp. 789–798.
37. Ahmed, H.O.; Nandi, A.K. Three-stage hybrid fault diagnosis for rolling bearings with compressively sampled data and subspace learning techniques. IEEE Trans. Ind. Electron. 2018, 66, 5516–5524.
38. Ahmed, H.O.; Nandi, A.K. Compressive sampling and feature ranking framework for bearing fault classification with vibration signals. IEEE Access 2018, 6, 44731–44746.
39. Candes, E.J.; Tao, T. Near-optimal signal recovery from random projections: Universal encoding strategies. IEEE Trans. Inf. Theory 2006, 52, 5406–5425.
40. Baraniuk, R.G.; Wakin, M.B. Random projections of smooth manifolds. Found. Comput. Math. 2009, 9, 51–77.
41. Costa, J.A.; Hero, A.O. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Trans. Signal Process. 2004, 52, 2210–2221.
42. Agrafiotis, D.K.; Xu, H.; Zhu, F.; Bandyopadhyay, D.; Liu, P. Stochastic proximity embedding: Methods and applications. Mol. Inform. 2010, 29, 758–770.
43. Agrafiotis, D.K. Stochastic proximity embedding. J. Comput. Chem. 2003, 24, 1215–1221.
44. Yang, W.; Wang, K.; Zuo, W. Neighborhood component feature selection for high-dimensional data. J. Comput. 2012, 7, 161–168.
45. Guo, H.; Jack, L.B.; Nandi, A.K. Feature generation using genetic programming with application to fault classification. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2005, 35, 2256–2260.
46. Seera, M.; Wong, M.D.; Nandi, A.K. Classification of ball bearing faults using a hybrid intelligent model. Appl. Soft Comput. 2017, 57, 427–435.
47. Ahmed, H.; Nandi, A. Three-stage method for rotating machine health condition monitoring using vibration signals. In Proceedings of the 2018 Prognostics and System Health Management Conference (PHM-Chongqing), Chongqing, China, 26–28 October 2018; pp. 285–291.
48. Yu, X.; Dong, F.; Ding, E.; Wu, S.; Fan, C. Rolling bearing fault diagnosis using modified LFDA and EMD with sensitive feature selection. IEEE Access 2017, 6, 3715–3730.
49. Jia, F.; Lei, Y.; Lin, J.; Zhou, X.; Lu, N. Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data. Mech. Syst. Signal Process. 2016, 72, 303–315.
50. de Almeida, L.F.; Bizarria, J.W.; Bizarria, F.C.; Mathias, M.H. Condition-based monitoring system for rolling element bearing using a generic multi-layer perceptron. J. Vib. Control 2015, 21, 3456–3464.
Figure 1. A typical roller bearing [4].
Figure 2. Rolling element bearing geometry [6].
Figure 3. The proposed method.
Figure 4. Single measurement vector compressive sampling framework [37].
Figure 5. The test rig used to collect the vibration data of bearings of the first case study [4].
Figure 6. Typical time domain vibration signals for the six different conditions [4].
Figure 7. Example of the average loss values versus λ values computed from the reduced dimension of compressively sampled data with α = 0.2.
Figure 8. Example of the selected features and their corresponding weights using α = 0.2 and NCA tolerance value = 0.02.
Figure 9. Example of the average loss values versus λ values computed from the reduced dimension of compressively sampled data with α = 0.2 and dataset A.
Figure 10. Example of the selected features from dataset A and their corresponding weights using α = 0.2 and NCA tolerance value = 0.02.
Table 1. The characteristics of bearings' health conditions in the obtained bearing dataset.

| Condition | Characteristic |
|---|---|
| NO | The bearing was brand new and in perfect condition. |
| NW | The bearing was in service for some time but in good condition. |
| IR | Inner race fault. This fault was created by cutting a small groove in the raceway of the inner race. |
| OR | Outer race fault. This fault was created by cutting a small groove in the raceway of the outer race. |
| RE | Roller element fault. This fault was created by using an electrical etcher to mark the surface of the balls, simulating corrosion. |
| CA | Cage fault. This fault was created by removing the plastic cage from one of the bearings and cutting away a section of the cage, so that two of the balls were not held at a regular spacing and had freedom to move. |
Table 2. Examples of the computed values of the average intrinsic dimension, the dimension of the NCA-based selected features, least loss, and best λ values taken from 10 trials.

| NCA Tolerance Value | CS Sampling Rate (α) | Average Intrinsic Dimension (d) | Average Dimension of NCA-Based Selected Features (f) | Average Least Loss | Average Best λ for NCA |
|---|---|---|---|---|---|
| 0.01 | 0.1 | 28 | 8 | 0.013 | 0.004 |
| | 0.2 | 40 | 10 | 0.009 | 0.004 |
| | 0.3 | 26 | 8 | 0.010 | 0.003 |
| 0.02 | 0.1 | 62 | 18 | 0.014 | 0.003 |
| | 0.2 | 55 | 14 | 0.013 | 0.003 |
| | 0.3 | 33 | 11 | 0.013 | 0.003 |
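For orientation, the CS sampling rate α in Table 2 is the ratio of compressed measurements M to raw signal length N. The following is a minimal sketch of that sampling step under our own assumptions (a Gaussian random measurement matrix, as in random projection-based CS; it is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(x, alpha, rng=rng):
    """Reduce a length-N segment x to M = round(alpha * N) random projections y = Phi @ x."""
    n = len(x)
    m = int(round(alpha * n))
    phi = rng.standard_normal((m, n)) / np.sqrt(m)  # normalised Gaussian measurement matrix
    return phi @ x

x = rng.standard_normal(1000)   # stand-in for a raw vibration segment of length N = 1000
y = compress(x, alpha=0.2)
print(y.shape)                  # (200,): 20% of the original samples
```

In the paper, the intrinsic dimension d is then estimated from such compressed measurements (via GMST), and it is far smaller than M, which motivates the subsequent SPE/NCA feature selection.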
Table 3. Classification results with their corresponding RMSE and computational time for the automatically selected features (d refers to the intrinsic dimension, and f is the dimension of the NCA-based selected feature) using two values of NCA tolerance and three compressive sampling rates.

| NCA Tolerance Value | CS Sampling Rate (α) | d | f | MLR Training Accuracy (%) | Training Time (s) | Testing Accuracy (%) | Testing Time (s) |
|---|---|---|---|---|---|---|---|
| 0.01 | 0.1 | 28 | 8 | 99.8 ± 0.3 | 5.34 ± 1.7 | 99.5 ± 0.6 | 0.015 ± 0.002 |
| | 0.2 | 40 | 10 | 99.9 ± 0.1 | 4.6 ± 2.3 | 99.7 ± 0.3 | 0.003 ± 0.00 |
| | 0.3 | 26 | 8 | 100 ± 0.0 | 3.3 ± 0.5 | 99.9 ± 0.1 | 0.003 ± 0.001 |
| 0.02 | 0.1 | 62 | 18 | 99.9 ± 0.2 | 3.37 ± 0.8 | 99.7 ± 0.3 | 0.003 ± 0.001 |
| | 0.2 | 55 | 14 | 99.9 ± 0.1 | 3.55 ± 0.9 | 99.8 ± 0.2 | 0.003 ± 0.001 |
| | 0.3 | 33 | 11 | 100 ± 0.0 | 4.6 ± 2.0 | 100 ± 0.0 | 0.004 ± 0.003 |
Table 4. Sample confusion matrices of the classification results of MLR classifier using selected features with tolerance value = 0.01 and a sampling rate of (a) α = 0.1, (b) α = 0.2, and (c) α = 0.3.

(a)

| | NO | NW | IR | OR | RE | CA |
|---|---|---|---|---|---|---|
| NO | 80 | 0 | 0 | 0 | 0 | 0 |
| NW | 0 | 80 | 0 | 0 | 0 | 0 |
| IR | 0 | 0 | 80 | 0 | 0 | 0 |
| OR | 0 | 0 | 0 | 80 | 0 | 0 |
| RE | 0 | 0 | 1 | 0 | 79 | 0 |
| CA | 0 | 0 | 0 | 0 | 0 | 80 |

(b)

| | NO | NW | IR | OR | RE | CA |
|---|---|---|---|---|---|---|
| NO | 80 | 0 | 0 | 0 | 0 | 0 |
| NW | 0 | 78 | 0 | 0 | 0 | 2 |
| IR | 0 | 0 | 80 | 0 | 0 | 0 |
| OR | 0 | 0 | 0 | 80 | 0 | 0 |
| RE | 0 | 0 | 0 | 0 | 80 | 0 |
| CA | 0 | 0 | 0 | 0 | 0 | 80 |

(c)

| | NO | NW | IR | OR | RE | CA |
|---|---|---|---|---|---|---|
| NO | 80 | 0 | 0 | 0 | 0 | 0 |
| NW | 0 | 80 | 0 | 0 | 0 | 0 |
| IR | 0 | 0 | 80 | 0 | 0 | 0 |
| OR | 0 | 0 | 0 | 80 | 0 | 0 |
| RE | 0 | 0 | 0 | 0 | 80 | 0 |
| CA | 0 | 0 | 0 | 0 | 0 | 80 |
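The accuracies in Table 3 follow directly from such confusion matrices. As a small worked check (our own sketch, using the values of Table 4(a), where rows are true classes and columns are predictions):

```python
# Overall and per-class accuracy from the confusion matrix in Table 4(a):
# 80 test samples per class; one RE sample is misclassified as IR.
labels = ["NO", "NW", "IR", "OR", "RE", "CA"]
cm = [
    [80, 0, 0, 0, 0, 0],   # NO
    [0, 80, 0, 0, 0, 0],   # NW
    [0, 0, 80, 0, 0, 0],   # IR
    [0, 0, 0, 80, 0, 0],   # OR
    [0, 0, 1, 0, 79, 0],   # RE: one sample predicted as IR
    [0, 0, 0, 0, 0, 80],   # CA
]

total = sum(sum(row) for row in cm)                     # 480 test samples
correct = sum(cm[i][i] for i in range(len(cm)))         # 479 on the diagonal
overall = 100.0 * correct / total

per_class = {labels[i]: 100.0 * cm[i][i] / sum(cm[i]) for i in range(len(cm))}

print(f"overall accuracy: {overall:.2f}%")   # 99.79%
print(f"RE recall: {per_class['RE']:.2f}%")  # 98.75%
```

The single misclassification gives 479/480 ≈ 99.8%, consistent with the 99.5 ± 0.6% testing accuracy reported for α = 0.1 in Table 3.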
Table 5. A comparison with the classification results from the literature on the vibration bearing dataset of the first case study.

| Ref | Method | Testing Accuracy (%) | Testing Time (s) |
|---|---|---|---|
| [33] | Raw vibration with entropic features + SVM | 98.9 ± 1.2 | _ |
| | Compressively sampled with α = 0.5 followed by signal reconstruction + SVM | 92.4 ± 0.5 | _ |
| | Compressively sampled with α = 0.25 followed by signal reconstruction + SVM | 84.6 ± 0.41 | _ |
| [44] | GP-generated feature sets (un-normalised data) + ANN | 96.5 | _ |
| | GP-generated feature sets (un-normalised data) + SVM | 97.1 | _ |
| [45] | FMM-RF with SamEn | 99.7 ± 0.02 | _ |
| | FMM-RF with PS | 99.7 ± 0.50 | _ |
| | FMM-RF with SamEn + PS | 99.8 ± 0.41 | _ |
| [36] | CPDC (with 6000 inputs from FFT) | 99.4 ± 0.5 | 64.9 |
| | CS-CPDC, α = 0.1 | 99.8 ± 0.2 | 6.7 |
| | CS-CPDC, α = 0.2 | 99.9 ± 0.1 | 7.8 |
| [37] | With FFT, α = 0.1, feature dimension = 120, and LRC classifier: CS-FS | 99.7 ± 0.4 | _ |
| | CS-LS | 99.5 ± 0.3 | _ |
| | CS-Relief-F | 99.8 ± 0.2 | _ |
| | CS-PCC | 99.8 ± 0.3 | _ |
| | CS-Chi-2 | 99.5 ± 0.5 | _ |
| [46] | Feature selection (λ = 0.004, tolerance value = 0.02) from compressively sampled data and SVM: α = 0.1, feature dimension = 14 | 98.8 ± 2.4 | _ |
| | α = 0.2, feature dimension = 13 | 99.9 ± 0.2 | _ |
| | α = 0.3, feature dimension = 26 | 99.9 ± 0.1 | _ |
| Proposed | λ = 0.003, NCA tolerance value = 0.01, α = 0.1, feature dimension = 8: MLR classifier | 99.5 ± 0.6 | 0.015 |
| | SVM classifier | 99.5 ± 0.5 | 0.060 |
| Proposed | λ = 0.003, NCA tolerance value = 0.01, α = 0.2, feature dimension = 10: MLR classifier | 99.7 ± 0.3 | 0.003 |
| | SVM classifier | 99.8 ± 0.2 | 0.040 |
| Proposed | λ = 0.003, NCA tolerance value = 0.01, α = 0.3, feature dimension = 8: MLR classifier | 100 ± 0.0 | 0.003 |
| | SVM classifier | 100 ± 0.0 | 0.030 |
Table 6. Description of the bearing health conditions of the bearing vibration dataset used in the second case study.

| Health Condition | Fault Width (mm) | Classification Label |
|---|---|---|
| NO | 0 | 1 |
| RE1 | 0.18 | 2 |
| RE2 | 0.36 | 3 |
| RE3 | 0.53 | 4 |
| RE4 | 0.71 | 5 |
| IR1 | 0.18 | 6 |
| IR2 | 0.36 | 7 |
| IR3 | 0.53 | 8 |
| IR4 | 0.71 | 9 |
| OR1 | 0.18 | 10 |
| OR2 | 0.36 | 11 |
| OR3 | 0.53 | 12 |
Table 7. Examples of the computed values of the average intrinsic dimension, the dimension of the NCA-based selected features, least loss, and best λ values taken from 10 trials for datasets A, B, C, and D.

| Dataset | NCA Tolerance Value | CS Sampling Rate (α) | d | f | Average Least Loss | Average Best λ for NCA |
|---|---|---|---|---|---|---|
| A | 0.01 | 0.1 | 13 | 6 | 0.001 | 0.0011 |
| | | 0.2 | 15 | 7 | 0.000 | 0.0009 |
| | | 0.3 | 15 | 9 | 0.001 | 0.0006 |
| | 0.02 | 0.1 | 18 | 10 | 0.000 | 0.0009 |
| | | 0.2 | 21 | 12 | 0.000 | 0.0004 |
| | | 0.3 | 25 | 14 | 0.000 | 0.0003 |
| B | 0.01 | 0.1 | 17 | 7 | 0.000 | 0.0007 |
| | | 0.2 | 19 | 9 | 0.001 | 0.0006 |
| | | 0.3 | 24 | 9 | 0.000 | 0.0002 |
| | 0.02 | 0.1 | 28 | 11 | 0.000 | 0.0009 |
| | | 0.2 | 23 | 10 | 0.000 | 0.0005 |
| | | 0.3 | 26 | 12 | 0.000 | 0.0003 |
| C | 0.01 | 0.1 | 16 | 5 | 0.001 | 0.0010 |
| | | 0.2 | 17 | 7 | 0.000 | 0.0006 |
| | | 0.3 | 23 | 8 | 0.000 | 0.0006 |
| | 0.02 | 0.1 | 21 | 11 | 0.000 | 0.0009 |
| | | 0.2 | 22 | 12 | 0.000 | 0.0004 |
| | | 0.3 | 27 | 14 | 0.000 | 0.0003 |
| D | 0.01 | 0.1 | 15 | 4 | 0.003 | 0.0041 |
| | | 0.2 | 18 | 5 | 0.001 | 0.0041 |
| | | 0.3 | 20 | 7 | 0.001 | 0.0015 |
| | 0.02 | 0.1 | 17 | 9 | 0.001 | 0.003 |
| | | 0.2 | 21 | 10 | 0.001 | 0.003 |
| | | 0.3 | 23 | 9 | 0.001 | 0.003 |
Table 8. Classification results with their corresponding RMSE and computational time for the automatically selected features (d refers to the intrinsic dimension, and f is the dimension of the NCA-based selected feature) using two values of the NCA tolerance and compressive sampling rates for datasets A, B, C, and D (all classification accuracies of 100% are in bold).

| Dataset | NCA Tolerance Value | CS Sampling Rate (α) | d | f | MLR Testing Accuracy (%) | Testing Time (s) |
|---|---|---|---|---|---|---|
| A | 0.01 | 0.1 | 13 | 6 | 98.5 ± 0.7 | 0.002 |
| | | 0.2 | 15 | 7 | 99.9 ± 0.1 | 0.003 |
| | | 0.3 | 15 | 9 | 99.9 ± 0.1 | 0.002 |
| | 0.02 | 0.1 | 18 | 10 | 99.6 ± 0.2 | 0.003 |
| | | 0.2 | 21 | 12 | 99.8 ± 0.2 | 0.006 |
| | | 0.3 | 25 | 14 | **100 ± 0.0** | 0.003 |
| B | 0.01 | 0.1 | 17 | 7 | 99.2 ± 0.7 | 0.002 |
| | | 0.2 | 19 | 9 | 99.5 ± 0.5 | 0.003 |
| | | 0.3 | 24 | 9 | **100 ± 0.0** | 0.009 |
| | 0.02 | 0.1 | 28 | 11 | 99.9 ± 0.1 | 0.003 |
| | | 0.2 | 23 | 10 | 99.9 ± 0.1 | 0.006 |
| | | 0.3 | 26 | 12 | **100 ± 0.0** | 0.003 |
| C | 0.01 | 0.1 | 16 | 5 | 99.7 ± 0.2 | 0.002 |
| | | 0.2 | 17 | 7 | 99.9 ± 0.1 | 0.003 |
| | | 0.3 | 23 | 8 | **100 ± 0.0** | 0.002 |
| | 0.02 | 0.1 | 22 | 11 | 99.9 ± 0.1 | 0.003 |
| | | 0.2 | 27 | 12 | 99.9 ± 0.1 | 0.006 |
| | | 0.3 | 15 | 14 | **100 ± 0.0** | 0.003 |
| D | 0.01 | 0.1 | 18 | 4 | 92.7 ± 2.9 | 0.002 |
| | | 0.2 | 20 | 5 | 99.1 ± 0.8 | 0.002 |
| | | 0.3 | 17 | 7 | 99.9 ± 0.1 | 0.002 |
| | 0.02 | 0.1 | 21 | 9 | 99.9 ± 0.1 | 0.002 |
| | | 0.2 | 23 | 10 | 99.9 ± 0.1 | 0.002 |
| | | 0.3 | 13 | 9 | 99.9 ± 0.1 | 0.002 |
Table 9. Sample confusion matrices of the classification results of MLR classifier using selected features with tolerance value = 0.01 and a sampling rate of (a) α = 0.1, (b) α = 0.2, and (c) α = 0.3 with the dataset A.

(a)

| | NO | RE1 | RE2 | RE3 | IR1 | IR2 | IR3 | OR1 | OR2 | OR3 |
|---|---|---|---|---|---|---|---|---|---|---|
| NO | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE1 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE2 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE3 | 0 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 |
| IR1 | 0 | 0 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 |
| IR2 | 1 | 0 | 1 | 0 | 0 | 97 | 0 | 0 | 1 | 0 |
| IR3 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 0 | 0 |
| OR1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 0 |
| OR2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 |
| OR3 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 99 |

(b)

| | NO | RE1 | RE2 | RE3 | IR1 | IR2 | IR3 | OR1 | OR2 | OR3 |
|---|---|---|---|---|---|---|---|---|---|---|
| NO | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE1 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE2 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE3 | 0 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 |
| IR1 | 0 | 0 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 |
| IR2 | 0 | 0 | 0 | 0 | 0 | 97 | 3 | 0 | 0 | 0 |
| IR3 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 0 | 0 |
| OR1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 0 |
| OR2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 |
| OR3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |

(c)

| | NO | RE1 | RE2 | RE3 | IR1 | IR2 | IR3 | OR1 | OR2 | OR3 |
|---|---|---|---|---|---|---|---|---|---|---|
| NO | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE1 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE2 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RE3 | 0 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 | 0 |
| IR1 | 0 | 0 | 0 | 0 | 100 | 0 | 0 | 0 | 0 | 0 |
| IR2 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 0 | 0 | 0 |
| IR3 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 0 | 0 |
| OR1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 0 |
| OR2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 |
| OR3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
Table 10. A comparison with the classification results from the literature on the vibration bearing datasets A, B, C, and D of the second case study.

| Ref | Dataset | Method | Testing Accuracy (%) | Testing Time (s) |
|---|---|---|---|---|
| [4] | A | BCS-DNN with α = 0.1 | 99.3 ± 0.6 | 5.7 |
| | B | | 99.7 ± 0.5 | 5.9 |
| | C | | 100 ± 0.0 | 5.7 |
| [37] | D | With FFT, α = 0.1, feature dimension = 120, and LRC: CS-FS | 98.4 ± 1.6 | _ |
| | | CS-LS | 99.1 ± 0.8 | _ |
| | | CS-Relief-F | 99.3 ± 0.6 | _ |
| | | CS-PCC | 99.2 ± 0.8 | _ |
| | | CS-Chi-2 | 97.5 ± 2.6 | _ |
| [47] | D | OFS-FSAR-SVM, selected features = 25 | 91.46 | _ |
| | | OFS-FSAR-SVM, selected features = 50 | 69.58 | _ |
| | | OFS-FSAR-PCA-SVM, selected features = 25 | 91.67 | _ |
| | | OFS-FSAR-PCA-SVM, selected features = 50 | 69.79 | _ |
| | | OFS-FSAR-LDA-SVM, selected features = 25 | 86.25 | _ |
| | | OFS-FSAR-LDA-SVM, selected features = 50 | 92.70 | _ |
| | | OFS-FSAR-LFDA-SVM, selected features = 25 | 93.75 | _ |
| | | OFS-FSAR-LFDA-SVM, selected features = 50 | 94.38 | _ |
| | | OFS-FSAR-(SM-LFDA)-SVM, selected features = 25 | 94.58 | _ |
| | | OFS-FSAR-(SM-LFDA)-SVM, selected features = 50 | 95.63 | _ |
| [48] | A | DNN | 99.95 ± 0.06 | _ |
| | B | DNN | 99.61 ± 0.21 | _ |
| | C | DNN | 99.74 ± 0.16 | _ |
| | A | BPNN | 62.20 ± 18.09 | _ |
| | B | BPNN | 61.95 ± 22.09 | _ |
| | C | BPNN | 69.82 ± 17.67 | _ |
| [49] | A, B, C | MLP | _ | _ |
| Proposed | A | λ = 0.003, NCA tolerance value = 0.02, α = 0.1, feature dimension = 10 | 99.6 ± 0.2 | 0.003 |
| | B | feature dimension = 11 | 99.9 ± 0.1 | 0.003 |
| | C | feature dimension = 11 | 99.9 ± 0.1 | 0.003 |
| | D | feature dimension = 9 | 99.9 ± 0.1 | 0.002 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
