Deep Learning Algorithm Generalization for Complex Industrial Systems

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence Circuits and Systems (AICAS)".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 12724

Special Issue Editors


Prof. Dr. Aboul Ella Hassanien
Guest Editor
Faculty of Computers and Artificial Intelligence, Cairo University, Giza 12613, Egypt
Interests: artificial intelligence; swarm optimization; IoT; blockchain; space science

Prof. Dr. Ahmad Azar
Guest Editor
1. College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
2. Automated Systems & Soft Computing Lab (ASSCL), Prince Sultan University, Riyadh 12435, Saudi Arabia
3. Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
Interests: control theory and applications; robotics; process control; artificial intelligence; machine learning; computational intelligence; dynamic system modeling

Special Issue Information

Dear Colleagues,

The AI implementation landscape can be compared to an iceberg. Industry is the hidden "treasure" beneath the surface: exceptionally large and full of potential, but particularly challenging to reach. Implementing industrial intelligence is destined to be a tough and protracted battle. A good algorithmic model requires massive data and computing power, yet industrial scene data are seriously scarce, and the computing cost of installing GPUs is difficult for traditional enterprises to accept. More critically, the "black box" nature of deep learning is inherently at odds with industrial manufacturing's pursuit of accuracy, reliability, and interpretability, making it hard to win the trust of industrial enterprises. By contrast, decision trees, classification algorithms, regression analyses, and other classical machine learning algorithms are more widely used in the industrial field.

A persistent headache in deploying industrial intelligence is "one machine, one model": industrial algorithms are difficult to generalize. Generalization is crucial, as it directly determines whether an algorithm can be called a product; after all, if it cannot be productized, it cannot be scaled up. However, the working conditions of industrial systems are particularly complex. For example, the materials, structures, and models of tools; the performance of the processing machine; the material and structure of the workpiece; and the site environment all differ, so models are often built for one specific working condition and degrade sharply when transferred to another. The root cause is that the complexity and process threshold of industry are extremely high, while the data available for modeling are generally scarce and of low quality; without embedded industry knowledge and mechanisms, purely data-driven models struggle to achieve good generalization.

From the modeling perspective, fusing the knowledge of industry experts and mechanism models into machine learning models can often reduce the required training data severalfold and make the models more adaptable to different environments and working conditions. From the feature perspective, extracting features with clear mechanistic (physical) meaning can strengthen the causal basis of model judgments and significantly reduce the required computational effort. Compared with purely data-driven features, adding mechanistic features usually improves model accuracy, although the degree of improvement varies from scenario to scenario. Industry and academia are currently exploring transfer learning to improve the generalization ability of models, but this work is still at an exploratory stage, and it will take time to reach real-world deployment. Moreover, the generalization ability of a model is itself limited and must be complemented by a series of engineering measures at the product level.
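
To make the feature-level point concrete, the following minimal Python sketch concatenates band-energy features computed around assumed characteristic fault frequencies (a mechanistic prior, e.g., bearing defect frequencies) with generic statistical features before classification. The sampling rate, frequencies, and classifier are hypothetical placeholders, not a method prescribed by this Special Issue.

```python
# Minimal sketch: fusing mechanistic (physics-informed) features with generic
# statistical features. Fault frequencies and sampling rate are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def mechanistic_features(signal, fs, fault_freqs):
    """Spectral band energy around known characteristic fault frequencies."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[(freqs > 0.95 * f) & (freqs < 1.05 * f)].sum()
                     for f in fault_freqs])

def statistical_features(signal):
    """Generic data-driven features with no physical interpretation."""
    rms = np.sqrt((signal ** 2).mean())
    return np.array([signal.mean(), signal.std(), np.abs(signal).max(), rms])

def build_features(signals, fs=20_000, fault_freqs=(107.0, 162.0)):
    """Concatenate mechanistic and statistical features for each window."""
    return np.stack([np.concatenate([mechanistic_features(s, fs, fault_freqs),
                                     statistical_features(s)])
                     for s in signals])

# X_raw: (n_windows, n_points) vibration windows; y: fault labels (hypothetical).
# clf = GradientBoostingClassifier().fit(build_features(X_raw), y)
```

Even a handful of such physically grounded features can help a small classical model remain stable across changing working conditions, which is the generalization benefit described above.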

This Special Issue aims to provide a forum for researchers and practitioners to discuss and exchange recent advances, research results, and emerging research directions in deep learning algorithm generalization for complex industrial systems, specifically in an environment of smart technologies such as artificial intelligence and big data. We hope to develop new modeling and simulation methodologies that address the challenges of complex industrial systems, such as change, adaptation, and innovation. This Special Issue covers a variety of contributions from different fields.

Prof. Dr. Aboul Ella Hassanien
Prof. Dr. Ahmad Azar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine-learning-based simulation experiment design methods
  • multi-agent modeling and simulation
  • big data modeling and simulation
  • parallel system modeling and simulation
  • high-performance four-level parallel simulation engine
  • cross-media intelligent visualization technology
  • intelligent cloud/edge computing
  • complex product multidisciplinary virtual prototyping
  • intelligent simulation resource management
  • model engineering
  • data-driven modeling and simulation
  • high-performance modeling and simulation
  • virtual reality/augmented reality engineering
  • cloud modeling and simulation
  • edge modeling and simulation
  • embedded/pervasive modeling and simulation
  • intelligent modeling and simulation
  • complex system modeling and simulation
  • physical effect modeling and simulation

Published Papers (5 papers)


Research

18 pages, 4624 KiB  
Article
Sandpiper Optimization with a Deep Learning Enabled Fault Diagnosis Model for Complex Industrial Systems
by Mesfer Al Duhayyim, Heba G. Mohamed, Jaber S. Alzahrani, Rana Alabdan, Amira Sayed A. Aziz, Abu Sarwar Zamani, Ishfaq Yaseen and Mohamed Ibrahim Alsaid
Electronics 2022, 11(24), 4190; https://doi.org/10.3390/electronics11244190 - 15 Dec 2022
Cited by 3 | Viewed by 1437
Abstract
Recently, artificial intelligence (AI)-enabled technologies have been widely employed in complex industrial applications, where they can improve efficiency and reduce human labor. At the same time, fault diagnosis (FD) and detection in rotating machinery (RM) has become a hot research field for assuring safety and product quality. Numerous studies based on statistical, machine learning (ML), and mathematical models are available in the literature for automated fault diagnosis. From this perspective, this study presents a novel sandpiper optimization with an artificial-intelligence-enabled fault diagnosis (SPOAI-FD) technique for intelligent industrial applications, aiming to detect the existence of faults in machinery. The proposed model involves a continuous wavelet transform (CWT)-based pre-processing approach, which transforms the raw vibration signal into a useful format. The Faster SqueezeNet model is then applied as a feature extractor, and a bidirectional long short-term memory (BLSTM) model is applied as a classifier. To tune the hyperparameter values of the BLSTM model, the sandpiper optimization algorithm (SPOA) is utilized, constituting the novelty of the work. A wide range of simulation analyses were conducted on benchmark datasets, and the results highlighted the superiority of the SPOAI-FD algorithm over recent approaches.
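
As a rough, non-authoritative illustration of this pipeline, the sketch below pairs a PyWavelets CWT scalogram with a small CNN front end (standing in for Faster SqueezeNet) and a bidirectional LSTM classifier; the layer sizes, Morlet wavelet, and class count are assumptions, and the SPOA hyperparameter search is omitted.

```python
# Minimal PyTorch sketch of a CWT -> CNN -> BLSTM fault classifier.
# Sizes and wavelet choice are assumptions, not the authors' configuration.
import numpy as np
import pywt
import torch
import torch.nn as nn

def vibration_to_scalogram(signal, scales=np.arange(1, 65), wavelet="morl"):
    """Turn a 1-D vibration window into a 2-D time-frequency image via CWT."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    return torch.tensor(np.abs(coeffs), dtype=torch.float32).unsqueeze(0)

class CnnBlstmClassifier(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # Lightweight CNN front end (placeholder for Faster SqueezeNet).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Bidirectional LSTM over the time axis of the CNN feature map.
        self.blstm = nn.LSTM(input_size=32 * 16, hidden_size=64,
                             batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                        # x: (batch, 1, 64, T)
        f = self.cnn(x)                          # (batch, 32, 16, T/4)
        f = f.permute(0, 3, 1, 2).flatten(2)     # (batch, T/4, 32*16)
        out, _ = self.blstm(f)
        return self.head(out[:, -1])             # classify from last time step

# x = vibration_to_scalogram(np.random.randn(256)).unsqueeze(0)  # (1, 1, 64, 256)
# logits = CnnBlstmClassifier()(x)
```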

22 pages, 3603 KiB  
Article
Deep Learning Reader for Visually Impaired
by Jothi Ganesan, Ahmad Taher Azar, Shrooq Alsenan, Nashwa Ahmad Kamal, Basit Qureshi and Aboul Ella Hassanien
Electronics 2022, 11(20), 3335; https://doi.org/10.3390/electronics11203335 - 16 Oct 2022
Cited by 17 | Viewed by 5240
Abstract
Recent advances in machine and deep learning algorithms and enhanced computational capabilities have revolutionized healthcare and medicine. Research on assistive technology has benefited from these advances in creating visual substitution for visual impairment. People with visual impairment face several obstacles in reading printed text, which is normally substituted with a pattern-based display known as Braille. Over the past decade, many wearable and embedded assistive devices and solutions have been created to facilitate the reading of texts. However, assistive tools for comprehending the meaning embedded in images or objects are still limited. In this paper, we present a deep learning approach for people with visual impairment that addresses this issue with a voice-based form to represent and illustrate images embedded in printed texts. The proposed system is divided into three phases: collecting input images, extracting features for training the deep learning model, and evaluating performance. The approach leverages deep learning algorithms, namely a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network, for extracting salient features, captioning images, and converting written text to speech. The CNN detects features from the printed image and its associated caption, while the LSTM network serves as a captioning tool to describe the text detected in images. The identified captions and detected text are then converted into voice messages for the user via a text-to-speech API. The proposed CNN-LSTM model is investigated using various network architectures, namely GoogLeNet, AlexNet, ResNet, SqueezeNet, and VGG16. The empirical results show that the CNN-LSTM training model with the ResNet architecture achieved the highest image-captioning accuracy of 83%.
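
A minimal sketch of such an encoder–decoder captioner appears below, using a ResNet backbone (the best-performing architecture in the paper) and an LSTM decoder in PyTorch; the vocabulary size, embedding dimensions, and training loop are assumptions.

```python
# Minimal CNN-LSTM captioning sketch. Vocabulary and dimensions are assumed.
import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=256, hidden_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights optional
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.project = nn.Linear(512, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Encode the image into a single feature vector.
        feats = self.project(self.encoder(images).flatten(1)).unsqueeze(1)
        # Prepend the image feature to the embedded caption tokens.
        seq = torch.cat([feats, self.embed(captions)], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)  # per-step vocabulary logits

# images: (B, 3, 224, 224); captions: (B, L) token ids (hypothetical data).
# The decoded caption string would then be passed to a text-to-speech API.
```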

18 pages, 8596 KiB  
Article
Research on Diesel Engine Fault Diagnosis Method Based on Stacked Sparse Autoencoder and Support Vector Machine
by Huajun Bai, Xianbiao Zhan, Hao Yan, Liang Wen, Yunbin Yan and Xisheng Jia
Electronics 2022, 11(14), 2249; https://doi.org/10.3390/electronics11142249 - 18 Jul 2022
Cited by 10 | Viewed by 1605
Abstract
Due to the relative insufficiencies of conventional time-domain waveform and spectrum analysis in fault diagnosis research, a diesel engine fault diagnosis method based on the Stacked Sparse Autoencoder (SSAE) and the Support Vector Machine (SVM) is proposed in this study. The method consists of two main steps. The first step utilizes the SSAE to reduce the feature dimension of the multi-sensor vibration information; compared with other dimension reduction methods, this approach better captures nonlinear features. The second step diagnoses the faults, using grid search and K-fold cross-validation to optimize the hyperparameters of the SVM, which effectively improves fault classification. In a preset failure experiment on a diesel engine, the proposed method achieved an accuracy of more than 98%, demonstrating good engineering applicability and promising outcomes.
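
The sketch below illustrates this two-step scheme under stated assumptions: a small sparse autoencoder (a single stage rather than a layer-wise stacked one, with an L1 code penalty standing in for the usual KL sparsity term) compresses the features, then a grid-searched, 5-fold cross-validated SVM classifies them; all sizes are illustrative.

```python
# Sketch: sparse-autoencoder dimension reduction followed by a tuned SVM.
# Layer sizes, sparsity weight, and the SVM grid are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

class SparseAE(nn.Module):
    def __init__(self, in_dim=1024, code_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, code_dim), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        code = self.enc(x)
        return self.dec(code), code

def train_ae(model, x, epochs=50, sparsity_weight=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, code = model(x)
        # Reconstruction loss plus an L1 penalty encouraging sparse codes.
        loss = nn.functional.mse_loss(recon, x) + sparsity_weight * code.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# x_train: (n, 1024) fused multi-sensor features; y_train: labels (hypothetical).
# ae = train_ae(SparseAE(), x_train)
# codes = ae.enc(x_train).detach().numpy()
# svm = GridSearchCV(SVC(), {"C": [1, 10, 100], "gamma": ["scale", 0.01]}, cv=5)
# svm.fit(codes, y_train)   # 5-fold CV grid search over SVM hyperparameters
```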

21 pages, 6544 KiB  
Article
Combination of Optimized Variational Mode Decomposition and Deep Transfer Learning: A Better Fault Diagnosis Approach for Diesel Engines
by Huajun Bai, Xianbiao Zhan, Hao Yan, Liang Wen and Xisheng Jia
Electronics 2022, 11(13), 1969; https://doi.org/10.3390/electronics11131969 - 24 Jun 2022
Cited by 13 | Viewed by 1541
Abstract
Existing fault diagnosis methods rely heavily on manually extracted features and prior expert knowledge. This manuscript proposes a diagnosis approach for diesel engines that combines optimized variational mode decomposition and deep transfer learning. Firstly, the variational mode decomposition is optimized by selecting the number of modes K via dispersion entropy, realizing an adaptive decomposition and reducing signal noise. Secondly, after noise reduction, the one-dimensional vibration signal is converted into a two-dimensional image using a short-time Fourier transform. Then, the ResNet18 network model is pre-trained. Finally, the model transfer method is used to detect faults in a diesel engine. The results show that the proposed method outperforms the deep learning methods available in the literature, attaining better noise reduction ability and higher diagnostic accuracy.
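
The sketch below illustrates the second half of this pipeline under stated assumptions: converting a denoised one-dimensional signal into an STFT image and re-heading a pretrained ResNet18 for fault classes. The VMD/dispersion-entropy step is assumed to have already produced `denoised`, and the sampling rate, window size, and class count are illustrative.

```python
# Sketch: STFT time-frequency image + ResNet18 transfer learning.
# Sampling rate, window size, and class count are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft
from torchvision import models

def signal_to_stft_image(denoised, fs=12_000, nperseg=128, size=224):
    """Log-magnitude STFT, resized by bilinear interpolation to a CNN input."""
    _, _, Z = stft(denoised, fs=fs, nperseg=nperseg)
    img = np.log1p(np.abs(Z))
    t = torch.tensor(img, dtype=torch.float32)[None, None]      # (1, 1, F, T)
    t = nn.functional.interpolate(t, size=(size, size), mode="bilinear")
    return t.repeat(1, 3, 1, 1)                                 # replicate to RGB

def build_transfer_model(n_classes=5):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # new classification head
    return model  # fine-tune all layers, or freeze early ones as desired

# x = signal_to_stft_image(np.random.randn(4096))  # stand-in denoised signal
# logits = build_transfer_model()(x)
```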

20 pages, 4411 KiB  
Article
Fault Diagnosis Method of Planetary Gearbox Based on Compressed Sensing and Transfer Learning
by Huajun Bai, Hao Yan, Xianbiao Zhan, Liang Wen and Xisheng Jia
Electronics 2022, 11(11), 1708; https://doi.org/10.3390/electronics11111708 - 27 May 2022
Cited by 4 | Viewed by 1644
Abstract
This paper suggests a novel method for diagnosing planetary gearbox faults. It addresses the issue of network bandwidth limitation during wireless data transmission and the problems of relying on expert experience and insufficient training samples in traditional fault diagnosis. The continuous wavelet transform was combined with the AlexNet convolutional neural network using transfer learning and the theory of compressed sensing. The original vibration signal was compressed and reconstructed using the compressed sampling orthogonal matching pursuit reconstruction algorithm, and a continuous wavelet transform converted the reconstructed signal into a time–frequency image. The pretrained AlexNet model was selected as the transfer object; the network model was fine-tuned and retrained, and the trained AlexNet model was used to diagnose faults via the model-based transfer method. Experimental results demonstrate that, at a compression ratio of CR = 0.5, the classification accuracy reaches 97.78%, higher than that of other network models. The method offers good feature extraction and fault classification capabilities, with reference value and application prospects.
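
As a hedged illustration of the compressed-sensing step, the sketch below compresses a stand-in vibration window with a random Gaussian measurement matrix, reconstructs it with scikit-learn's orthogonal matching pursuit in a DCT basis (a simple stand-in for the paper's reconstruction algorithm), and forms the CWT scalogram that would be fed to the transferred AlexNet; the window length and sparsity level are assumptions, and only CR = 0.5 mirrors the abstract.

```python
# Sketch: compress, reconstruct via OMP in a DCT basis, and form a CWT image.
# Signal, measurement matrix, and sparsity level are illustrative assumptions.
import numpy as np
import pywt
from scipy.fftpack import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, cr = 512, 0.5                     # window length, compression ratio
m = int(cr * n)                      # number of compressed measurements

x = rng.standard_normal(n)           # stand-in for a vibration window
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
psi = idct(np.eye(n), norm="ortho", axis=0)      # DCT synthesis basis (x = psi @ s)
y = phi @ x                          # compressed samples sent over the link

# Recover sparse DCT coefficients s from y = (phi @ psi) s, then x_hat = psi @ s.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=64).fit(phi @ psi, y)
x_hat = psi @ omp.coef_

# Time-frequency image (Morlet CWT magnitudes) for the CNN classifier.
coeffs, _ = pywt.cwt(x_hat, np.arange(1, 65), "morl")
image = np.abs(coeffs)               # (64, 512) scalogram, resized before AlexNet
```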
