# On the Sparse Gradient Denoising Optimization of Neural Network Models for Rolling Bearing Fault Diagnosis Illustrated by a Ship Propulsion System


## Abstract


## 1. Introduction

## 2. Preliminaries

#### 2.1. Fault Diagnosis of Marine Machinery

#### 2.2. Overview of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
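DBSCAN groups points that lie in dense regions and labels low-density points as noise, which is the property the proposed method exploits for gradient denoising. A minimal illustration using scikit-learn's `DBSCAN` (the `eps` and `min_samples` values here are illustrative choices, not parameters from the paper):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense clusters plus two isolated outliers.
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=0.0, scale=0.1, size=(20, 2))
cluster_b = rng.normal(loc=5.0, scale=0.1, size=(20, 2))
outliers = np.array([[2.5, 2.5], [-3.0, 4.0]])
X = np.vstack([cluster_a, cluster_b, outliers])

# eps: neighborhood radius; min_samples: density threshold for a core point.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

# Dense points receive cluster labels 0, 1, ...; noise points receive -1.
print(sorted(set(labels.tolist())))
```

No cluster count must be specified in advance; density alone decides which points are kept and which are discarded as noise.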

## 3. The Proposed SDGD Model

#### 3.1. Key Node Sparsification Method Based on Mean Impact Value

#### 3.2. Clustering Noise Reduction Method Based on Distribution Density

#### 3.3. Sparse Denoising Gradient Descent Optimization Algorithm

**Algorithm 1:** MIV-Sparse algorithm

Notation:

- $B$: batch size;
- $H$: number of network layers;
- $[{L}_{1},{L}_{2},\dots ,{L}_{H}]$: node numbers per network layer;
- ${U}^{h}$: input of the $h$th layer;
- $\widehat{Y}={F}_{h}({U}^{h})$: the nonlinear mapping from the $h$th layer to the network output.

Input: network $[{L}_{1},{L}_{2},\dots ,{L}_{H}]$, impact value threshold $\theta$, training dataset. Output: sparsified network $Net\_Lab$.

1. for $n = 1\dots B$ do // sample traversal
2. for $i = 1\dots H$ do // layer traversal from input to output
3. for $k = 1\dots {L}_{i}$ do // node traversal per layer
4. ${U}_{n,\pm \delta k}^{{h}_{i}}=({u}_{n}^{1},{u}_{n}^{2},\dots ,(1\pm \delta ){u}_{n}^{k},\dots ,{u}_{n}^{{L}_{i}})$ // perturb node $k$ up and down by a factor $\delta$
5. Forward propagation: ${\widehat{Y}}_{n,\pm k}^{{h}_{i}}={F}_{i}({U}_{n,\pm \delta k}^{{h}_{i}})$
6. $I{V}_{n,k}^{{h}_{i}}={\Vert {\widehat{Y}}_{n,+k}^{{h}_{i}}-{\widehat{Y}}_{n,-k}^{{h}_{i}}\Vert}_{1}={\Vert {F}_{i}({U}_{n,+\delta k}^{{h}_{i}})-{F}_{i}({U}_{n,-\delta k}^{{h}_{i}})\Vert}_{1}$
7. end for // quit node traversal
8. $I{V}_{n}^{{h}_{i}}=[I{V}_{n,1}^{{h}_{i}},I{V}_{n,2}^{{h}_{i}},\dots ,I{V}_{n,k}^{{h}_{i}},\dots ,I{V}_{n,{L}_{i}}^{{h}_{i}}]$ // impact value vector of the $i$th layer
9. Normalization by the layer mean: $I{V}_{n}^{{h}_{i}}\leftarrow I{V}_{n}^{{h}_{i}}\Big/\frac{1}{{L}_{i}}\sum_{k=1}^{{L}_{i}}I{V}_{n,k}^{{h}_{i}}$
10. end for // quit layer traversal
11. end for // quit sample traversal
12. for $i = 1\dots H$ do // mark the key nodes
13. $I{V}^{{h}_{i}}=\frac{1}{B}\sum_{n=1}^{B}I{V}_{n}^{{h}_{i}}$
14. Compare each element of $I{V}^{{h}_{i}}$ with $\theta$: a node whose impact value exceeds $\theta$ is marked 1, otherwise 0, yielding the sparse network mask $Net\_Lab$.
15. end for
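Algorithm 1 can be sketched for a single layer as follows. This is an illustrative reading of the pseudocode, not the authors' implementation; the toy layer function `F`, the batch `U`, and the parameter values are hypothetical:

```python
import numpy as np

def miv_mask(F, U, delta=0.1, theta=1.0):
    """Sketch of Algorithm 1 for one layer: perturb each node up/down by a
    factor delta, measure the L1 change in the output, normalize per layer,
    average over the batch, and keep nodes whose mean impact value exceeds
    theta. (An illustrative reading of the paper, not the authors' code.)"""
    B, L = U.shape
    IV = np.zeros((B, L))
    for n in range(B):
        for k in range(L):
            up, down = U[n].copy(), U[n].copy()
            up[k] *= 1 + delta                        # self-increment (line 4)
            down[k] *= 1 - delta                      # self-subtraction (line 4)
            IV[n, k] = np.abs(F(up) - F(down)).sum()  # L1 output change (line 6)
        IV[n] /= IV[n].mean()                         # normalize by layer mean (line 9)
    miv = IV.mean(axis=0)                             # mean impact value over batch (line 13)
    return (miv > theta).astype(int)                  # key-node mask Net_Lab (line 14)

# Toy "network": a linear map whose output depends strongly on node 0,
# moderately on node 1, and barely on node 2 (hypothetical, for illustration).
W = np.array([[5.0], [1.0], [0.01]])
F = lambda u: u @ W
U = np.full((4, 3), 2.0)   # a constant toy batch of 4 samples, 3 nodes

mask = miv_mask(F, U)
print(mask)
```

Only node 0 clears the threshold here, so the mask marks it as a key node and zeros out the other two.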

**Algorithm 2:** Sparse denoising gradient descent optimization algorithm

Notation:

- ${J}_{b}$: loss function;
- $\nabla \omega$: connection weight gradient;
- $\nabla b$: connection bias gradient;
- $IV$: impact value of nodes;
- $MIV$: mean impact value.

Input: network parameters $[{L}_{1},{L}_{2},\dots ,{L}_{H}]$, batch size $B$, learning rate $\eta$, impact value threshold $\theta$, training loss stop threshold ${J}_{0}$, training epochs. Output: the trained network $Ne{t}_{tra}$.

1. for $k = 1\dots$ epochs do // cycle over epochs
2. for $b = 1\dots B$ do // traverse all samples
3. Compute $\widehat{Y}={F}_{1}({X}_{b})$ // forward propagation
4. Compute the loss ${J}_{b}$
5. if ${J}_{b}\le {J}_{0}$ then // stopping condition satisfied
6. stop training
7. else
8. $\nabla {\omega}_{ij}^{b}=\frac{\partial {J}_{b}}{\partial {\omega}_{ij}}$, $\nabla {b}_{ij}^{b}=\frac{\partial {J}_{b}}{\partial {b}_{ij}}$ // per-sample gradients
9. Obtain $Net\_Lab$ according to Algorithm 1
10. $\nabla {\omega^{\prime}}_{ij}={f}_{d}(\nabla {\omega}_{ij}^{b}\mid b=1,2,\dots ,B)$, $\nabla {b^{\prime}}_{ij}={f}_{d}(\nabla {b}_{ij}^{b}\mid b=1,2,\dots ,B)$ // denoise the gradient update values with DBSCAN
11. ${\omega}_{ij}(k+1)={\omega}_{ij}(k)-\eta \cdot \nabla {\omega^{\prime}}_{ij}$, ${b}_{ij}(k+1)={b}_{ij}(k)-\eta \cdot \nabla {b^{\prime}}_{ij}$ // update network connection weights
12. end if
13. end for
14. end for
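One way the denoising operator ${f}_{d}(\cdot)$ in line 10 could be realized is to cluster the per-sample gradients of a single parameter with DBSCAN and average only the points that fall inside a dense cluster, discarding noise points. This is a hedged sketch of that step, not the authors' code; `eps` and `min_samples` are illustrative values:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def denoised_gradient(per_sample_grads, eps=0.05, min_samples=3):
    """One reading of the f_d(.) step in Algorithm 2: cluster per-sample
    gradients of one parameter with DBSCAN and average only the points in
    dense regions, discarding outlier ("noisy") gradients labeled -1.
    eps and min_samples are illustrative, not values from the paper."""
    g = np.asarray(per_sample_grads, dtype=float).reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(g)
    mask = labels != -1            # keep only in-cluster gradients
    if not mask.any():             # fall back to a plain mean if all are noise
        return float(g.mean())
    return float(g[mask].mean())

# Eight well-behaved gradients near 0.1 plus two outliers.
grads = [0.10, 0.11, 0.09, 0.10, 0.12, 0.11, 0.10, 0.09, 0.90, -0.80]
denoised = denoised_gradient(grads)
plain = float(np.mean(grads))
print(denoised, plain)
```

The two outliers drag the plain batch mean toward zero, while the denoised estimate stays near the dense cluster; a weight update as in line 11 would then use $\omega \leftarrow \omega - \eta \cdot$ `denoised` instead of the raw mean.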

## 4. Validation Experiments

#### 4.1. Case One: Study of Accuracy & Convergence Speed of Fault Diagnosis Model

#### 4.1.1. Dataset Preparation and Parameter Settings

#### 4.1.2. Comparative Experiments Based on RESNET

#### 4.1.3. Comparative Experiments Based on Random CNN

#### 4.1.4. Comparative Experiments Based on Sparse Auto-Encoder (SAE)

#### 4.1.5. Computational Cost

#### 4.2. Case Two: Study of Local Optimal Trap of Fault Diagnosis Model Training

#### 4.2.1. Dataset Preparation and Parameter Settings

#### 4.3. Experimental Analysis

## 5. SDGD’s Application to a Ship Engine System

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest


**Figure 7.** (**a**) Visualization and prediction error matrix of RESNET; (**b**) visualization and prediction error matrix of SDGD-RESNET.

**Figure 8.** Convergence speed comparison: (**a**) BatchSize = 50; (**b**) BatchSize = 100; (**c**) BatchSize = 200; (**d**) BatchSize = 350.

**Figure 9.** (**a**) Visualization and prediction error matrix of CNN; (**b**) visualization and prediction error matrix of SDGD-CNN.

**Figure 10.** Convergence speed comparison: (**a**) BatchSize = 40; (**b**) BatchSize = 50; (**c**) BatchSize = 100; (**d**) BatchSize = 200.

Fault Type | Fault Diameter (inch) | Label | Sample Size |
---|---|---|---|
BDⅠ | 0.007 | 1 | 1000 |
BDⅡ | 0.014 | 2 | 1000 |
BDⅢ | 0.021 | 3 | 1000 |
IRⅠ | 0.007 | 4 | 1000 |
IRⅡ | 0.014 | 5 | 1000 |
IRⅢ | 0.021 | 6 | 1000 |
ORⅠ | 0.007 | 7 | 1000 |
ORⅡ | 0.014 | 8 | 1000 |
ORⅢ | 0.021 | 9 | 1000 |

Layer Number | Layer Name | Core/Pool Size | Output Shape |
---|---|---|---|
1 | Conv1 | [5,5] | [400,5] |
2 | Pooling1 | 4 (stride = 1) | [100,5] |
3 | Conv2 | $\left[\begin{array}{c}6,5\\ 6,5\end{array}\right]\times 2$ | [100,5^{4}] |
4 | Conv3 | $\left[\begin{array}{c}6,5\\ 6,5\end{array}\right]\times 2$ | [100,5^{8}] |
5 | Output | AveragePooling + FC + Softmax | 9 |

No. | RESNET (Acc:%) | SDGD-RESNET (Acc:%) |
---|---|---|
1 | 99.44 | 99.78 |
2 | 99.56 | 99.67 |
3 | 99.72 | 99.44 |
4 | 99.67 | 99.56 |
5 | 99.44 | 99.50 |
6 | 99.50 | 99.61 |
7 | 99.78 | 99.78 |
8 | 99.78 | 99.72 |
9 | 99.61 | 99.34 |
10 | 99.38 | 99.78 |
Mean accuracy | 99.58 | 99.62 |

BatchSize | SDGD-RESNET (Epochs) | RESNET (Epochs) | Loss-Aim | Improvement Index (%) |
---|---|---|---|---|
50 | 39 | 45 | 0.05 | 13.33 |
100 | 80 | 94 | 0.05 | 14.89 |
200 | 125 | 135 | 0.05 | 7.41 |
350 | 478 | 559 | 0.05 | 14.49 |

Parameters | First Convolution Layer | Second Convolution Layer |
---|---|---|
Number of filters | 5 | 4 |
Size of filter | 16 | 18 |
Stride | 1 | 1 |

Parameters | First Pooling Layer | Second Pooling Layer |
---|---|---|
Pooling size | 5 | 4 |
Stride | 1 | 1 |

No. | CNN (Acc:%) | SDGD-CNN (Acc:%) |
---|---|---|
1 | 97.33 | 99.17 |
2 | 97.00 | 99.33 |
3 | 96.83 | 99.00 |
4 | 96.86 | 98.83 |
5 | 96.50 | 99.17 |
6 | 95.86 | 99.17 |
7 | 96.33 | 99.33 |
8 | 97.57 | 99.00 |
9 | 96.50 | 99.00 |
10 | 97.00 | 99.33 |
Mean accuracy | 96.78 | 99.13 |

BatchSize | SDGD-CNN (Acc:%) | CNN (Acc:%) | Improvement Index (%) |
---|---|---|---|
40 | 99.13 | 96.78 | 2.35 |
50 | 98.45 | 96.29 | 2.16 |
100 | 98.17 | 94.71 | 3.46 |
200 | 94.71 | 92.01 | 2.70 |

No. | SAE (Acc:%) | SDGD-SAE (Acc:%) |
---|---|---|
1 | 96.56 | 97.33 |
2 | 96.44 | 96.44 |
3 | 96.00 | 96.89 |
4 | 96.22 | 97.00 |
5 | 96.56 | 97.33 |
6 | 97.11 | 97.00 |
7 | 96.67 | 96.67 |
8 | 95.78 | 96.11 |
9 | 97.00 | 97.21 |
10 | 97.56 | 96.89 |
Mean accuracy | 96.59 | 96.89 |

BatchSize | SDGD-SAE (Epochs) | SAE (Epochs) | Loss-Aim | Improvement Index (%) |
---|---|---|---|---|
40 | 163 | 198 | 0.05 | 17.68 |
50 | 236 | 273 | 0.05 | 13.55 |
100 | 393 | 449 | 0.05 | 12.47 |

Fault Type | Label | Train Sample Size | Test Sample Size |
---|---|---|---|
ID | 100 | 265 | 105 |
OR | 010 | 266 | 104 |
CD | 001 | 269 | 101 |

Random Seed | rng (4) | rng (200) | rng (258) |
---|---|---|---|
DNN accuracy | 33% | 63.67% | 65% |
SDGD-DNN accuracy | 99.33% | 99% | 98.33% |

Method | With SDGD | Fault Types | Acc:% | Acc Improve:% | Convergence Speed Improve:% | Avoid Local Optimal Trap |
---|---|---|---|---|---|---|
RESNET | N | 9 (CWRU) | 99.58 | 0.04 | 7.41–14.89 | N |
RESNET | Y | 9 (CWRU) | 99.62 | 0.04 | 7.41–14.89 | N |
CNN | N | 6 (CWRU) | 96.78 | 2.35 | 0–5.71 | Y |
CNN | Y | 6 (CWRU) | 99.13 | 2.35 | 0–5.71 | Y |
SAE | N | 9 (CWRU) | 96.59 | 0.30 | 12.47–17.68 | N |
SAE | Y | 9 (CWRU) | 96.89 | 0.30 | 12.47–17.68 | N |
DNN | N | 3 (XJT) | 53.89 | 33.33–66.33 | - | Y |
DNN | Y | 3 (XJT) | 98.89 | 33.33–66.33 | - | Y |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wang, S.; Zhang, Y.; Zhang, B.; Fei, Y.; He, Y.; Li, P.; Xu, M.
On the Sparse Gradient Denoising Optimization of Neural Network Models for Rolling Bearing Fault Diagnosis Illustrated by a Ship Propulsion System. *J. Mar. Sci. Eng.* **2022**, *10*, 1376.
https://doi.org/10.3390/jmse10101376
