Application of Artificial Intelligence in Engineering

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2024 | Viewed by 17016

Special Issue Editors


Prof. Dr. Liang Bao
Guest Editor
School of Computer Science and Technology, Xidian University, Xi’an 710071, China
Interests: AI engineering; big data analytics

Prof. Dr. Chase Wu
Guest Editor
Department of Data Science, New Jersey Institute of Technology, Newark, NJ 07102, USA
Interests: big data; data-intensive computing; parallel and distributed computing; high-performance networking; large-scale scientific visualization; wireless sensor networks; cyber security

Dr. Fan Tao
Guest Editor
Xi’an Research Institute of China Coal Technology & Engineering Group Corp., Xi’an 710071, China
Interests: geophysics; machine learning

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) engineering is an emerging discipline focused on developing the tools, processes, and systems needed to apply artificial intelligence in real-world contexts such as manufacturing, energy, transportation, and software engineering. According to Gartner, by 2025, the 10% of enterprises that establish AI engineering best practices will generate at least three times more value from their AI efforts than the 90% of enterprises that do not. This Special Issue is therefore intended for the presentation of new ideas and practical applications in the field of AI engineering, spanning theory, methodology, process, and practical use.

This Special Issue will publish high-quality, original research papers. Topics of interest include, but are not limited to, the following:

  • Artificial intelligence, machine learning, and deep learning;
  • AI applications in intelligent manufacturing;
  • AI applications in intelligent mining;
  • AI applications in intelligent transportation;
  • AI for software engineering and software engineering for AI;
  • AI applications in many other engineering fields.

Prof. Dr. Liang Bao
Prof. Dr. Chase Wu
Dr. Fan Tao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • intelligent manufacturing
  • intelligent transportation

Published Papers (10 papers)

Research

15 pages, 3909 KiB  
Article
A Methodology for Estimating the Assembly Position of the Process Based on YOLO and Regression of Operator Hand Position and Time Information
by Byeongju Lim, Seyun Jeong and Youngjun Yoo
Appl. Sci. 2024, 14(9), 3611; https://doi.org/10.3390/app14093611 - 24 Apr 2024
Abstract
These days, many assembly lines are becoming automated, leading to a trend of decreasing defect rates. However, in assembly lines that have opted for partial automation due to the high cost of construction, defects still occur. Defects arise because the work instructions and the work field are in different locations, which is inefficient, and because some workers who are familiar with the process tend not to follow the work instructions. As a solution that establishes a system for object detection without disrupting the existing assembly lines, we decided to use wearable devices, which solve the problem of spatial constraints and save costs. We adopted the YOLO ("You Only Look Once") algorithm for object detection, an image recognition model that, unlike R-CNN or Fast R-CNN, predicts images with a single network, making it up to 1000 times faster. The detection point was determined based on whether the pin was fastened after the worker's hand appeared and disappeared. For the test, 1000 field data samples were used, and the object-detection performance (mAP) was 35%. The trained model was analyzed using seven regression algorithms, among which XGBoost performed best, with a result of 0.15. Distributing labeling and class-specific data equally is expected to enable the implementation of a better model. Based on this approach, the algorithm is considered efficient enough for use in work fields.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
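
Read as a pipeline, this abstract describes YOLO detections feeding a position regressor. The sketch below illustrates that shape under stated assumptions: it uses the ultralytics and xgboost packages, a generic pretrained checkpoint in place of the paper's fine-tuned hand/pin detector, and synthetic features standing in for the 1000 field samples.

```python
"""Sketch: YOLO detection feeding an XGBoost position regressor.
Assumes the ultralytics and xgboost packages; the checkpoint, class
set, and feature layout are illustrative, not taken from the paper."""
import numpy as np
from ultralytics import YOLO
from xgboost import XGBRegressor

detector = YOLO("yolov8n.pt")  # stand-in; the paper fine-tunes on hands/pins

def hand_features(frame, t):
    """Return [cx, cy, t] for each detected box in one frame."""
    result = detector(frame, verbose=False)[0]
    return [[(x1 + x2) / 2, (y1 + y2) / 2, t]
            for x1, y1, x2, y2 in result.boxes.xyxy.tolist()]

feats = hand_features(np.zeros((480, 640, 3), dtype=np.uint8), t=0)

# Second stage: regress (hand position, time) -> assembly position.
# Synthetic data stands in for the 1000 labeled field samples.
rng = np.random.default_rng(0)
X = rng.uniform(0, 640, size=(1000, 3))        # cx, cy, elapsed frames
y = 0.01 * X[:, 0] + 0.005 * X[:, 2]           # dummy target positions
reg = XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)
print("train R^2:", reg.score(X, y))
```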
15 pages, 505 KiB  
Article
Quality Control of Cement Clinker through Operating Condition Classification and Free Calcium Oxide Content Prediction
by Xukang Lyu, Dongliang Chu, Xingran Lu, Jiahui Mu, Zengji Zhang and Daqing Yun
Appl. Sci. 2024, 14(3), 1119; https://doi.org/10.3390/app14031119 - 29 Jan 2024
Viewed by 797
Abstract
Recent advances in artificial intelligence (AI) technologies such as deep learning open up new opportunities for various industries, such as cement manufacturing, to transition from traditional human-aided, manually controlled production processes to the modern era of "intelligentization". More and more practitioners have started to apply machine learning methods and deploy practical applications throughout the production process to automate manufacturing activities and optimize product quality. In this work, we employ machine learning methods to perform effective quality control for cement production by monitoring and predicting the content of free calcium oxide (f-CaO) in cement clinker. Based upon the control data measured and collected within the distributed control system (DCS) of cement production plants and laboratory measurements of free lime content in cement clinker, we are able to train effective models to stabilize the cement production process and optimize the quality of cement clinker. We report the details of the methods used and illustrate the superiority and benefits of the adopted machine-learning-based approaches.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
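
The described setup is a two-stage model: classify the operating condition from DCS variables, then predict f-CaO within each condition. A minimal scikit-learn sketch of that structure follows, with synthetic stand-ins for the plant's DCS data and laboratory labels; the variable set and condition rule are assumptions.

```python
"""Sketch: operating-condition classification followed by per-condition
f-CaO regression (scikit-learn). The DCS variables and condition labels
are synthetic stand-ins for plant and laboratory data."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))            # e.g. kiln temps, feed rate, fan speed
condition = (X[:, 0] > 0).astype(int)     # dummy "operating condition" label
f_cao = 1.2 + 0.3 * X[:, 1] + 0.1 * condition + rng.normal(0, 0.05, 2000)

clf = RandomForestClassifier(n_estimators=100).fit(X, condition)

# One regressor per operating condition, mirroring the two-stage design.
regressors = {}
for c in np.unique(condition):
    mask = condition == c
    regressors[c] = GradientBoostingRegressor().fit(X[mask], f_cao[mask])

def predict_f_cao(x):
    """Route a DCS sample through condition classification, then regression."""
    c = clf.predict(x.reshape(1, -1))[0]
    return regressors[c].predict(x.reshape(1, -1))[0]

print("predicted f-CaO:", predict_f_cao(X[0]))
```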
33 pages, 22972 KiB  
Article
Artificial Intelligence in Aviation: New Professionals for New Technologies
by Igor Kabashkin, Boriss Misnevs and Olga Zervina
Appl. Sci. 2023, 13(21), 11660; https://doi.org/10.3390/app132111660 - 25 Oct 2023
Cited by 2 | Viewed by 8013
Abstract
Major aviation organizations have highlighted the need to adopt artificial intelligence (AI) to transform operations and improve efficiency and safety. However, the aviation industry requires qualified graduates with relevant AI competencies to meet this demand. This study analyzed aviation engineering bachelor's programs at European universities to determine whether they are preparing students for AI integration in aviation by incorporating AI-related topics. The analysis focused on program descriptions and syllabi using semantic annotation. The results showed a limited focus on AI and machine learning competencies, with more emphasis on foundational digital skills. Reasons include the newness of aviation AI, its specialized nature, and implementation challenges. As the industry evolves, dedicated AI programs may emerge; currently, however, curricula appear misaligned with stated industry goals for AI adoption. The study provides an analytical methodology and competency framework to help educators address this gap. Producing graduates equipped with AI literacy and collaboration skills will be key to aviation's intelligent future.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
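
The study's core step, scoring curriculum text against AI-related competencies via semantic annotation, can be roughly approximated with TF-IDF similarity, as in the sketch below; the competency vocabulary and syllabi are illustrative, and the paper's actual annotation tooling is not reproduced here.

```python
"""Sketch: scoring syllabus text against an AI-competency vocabulary with
TF-IDF cosine similarity (scikit-learn). Terms and courses are made up."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

competencies = "machine learning neural networks data mining AI ethics"
syllabi = {
    "Avionics Systems": "aircraft electronics, sensors, digital signal processing",
    "Flight Data Analytics": "statistics, machine learning, neural networks for flight data",
}

vec = TfidfVectorizer()
matrix = vec.fit_transform([competencies] + list(syllabi.values()))
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for (course, _), s in zip(syllabi.items(), scores):
    print(f"{course}: AI-relatedness {s:.2f}")
```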
15 pages, 6040 KiB  
Article
A New Approach for Seepage Parameters Inversion Analysis Using Improved Whale Optimization Algorithm and Support Vector Regression
by Haoxuan Li, Zhenzhong Shen, Yiqing Sun, Yijun Wu, Liqun Xu, Yongkang Shu and Jiacheng Tan
Appl. Sci. 2023, 13(18), 10479; https://doi.org/10.3390/app131810479 - 20 Sep 2023
Cited by 1 | Viewed by 709
Abstract
Seepage is the primary cause of dam failures. Conducting regular seepage analysis for dams can effectively prevent accidents from occurring. Accurate and rapid determination of seepage parameters is a prerequisite for seepage calculation in hydraulic engineering. The Whale Optimization Algorithm (WOA) was combined with Support Vector Regression (SVR) to invert the hydraulic conductivity. The good-point-set initialization method, a cosine-based nonlinear convergence factor, the Lévy flight strategy, and the quasi-oppositional learning strategy were employed to improve WOA. The effectiveness and practicality of the Improved Whale Optimization Algorithm (IWOA) were evaluated via numerical experiments. As a case study, the seepage parameters of the Dono Dam, located on the Baishui River in China, were inverted using the proposed inversion model. The calculated seepage field was reasonable, and the relative error between the simulated head and the measured value at each monitoring point was within 2%. This new inversion method is more feasible and accurate than existing hydraulic conductivity estimation methods.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
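
The inversion couples a WOA search with an SVR surrogate of the FEM seepage model. The sketch below shows a simplified WOA loop with the cosine-based nonlinear convergence factor mentioned in the abstract; the surrogate is faked with a quadratic bowl, and the good-point-set, Lévy-flight, and quasi-oppositional refinements are omitted for brevity.

```python
"""Sketch: a simplified Whale Optimization Algorithm with a cosine-based
nonlinear convergence factor, minimizing a stand-in surrogate objective."""
import numpy as np

def surrogate_error(k):
    # Stand-in for an SVR surrogate mapping hydraulic conductivities to
    # the mismatch between simulated and measured heads.
    return np.sum((k - np.array([1e-5, 5e-6])) ** 2)

def woa(obj, dim=2, whales=20, iters=200, lo=1e-7, hi=1e-4):
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, (whales, dim))
    best = min(pos, key=obj).copy()
    for t in range(iters):
        a = 2 * np.cos(np.pi * t / (2 * iters))   # cosine convergence factor
        for i in range(whales):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:                # shrinking encircling move
                pos[i] = best - A * np.abs(C * best - pos[i])
            else:                                 # logarithmic spiral move
                l = rng.uniform(-1, 1)
                pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], lo, hi)
        best = min([best, *pos], key=obj).copy()
    return best

print("inverted conductivities:", woa(surrogate_error))
```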
17 pages, 5266 KiB  
Article
Partial Discharge Pattern-Recognition Method Based on Embedded Artificial Intelligence
by Xuewen Yan, Yuanyuan Bai, Wenwen Zhang, Chen Cheng and Jihong Liu
Appl. Sci. 2023, 13(18), 10370; https://doi.org/10.3390/app131810370 - 16 Sep 2023
Cited by 1 | Viewed by 981
Abstract
This paper proposes a method for detecting and recognizing partial discharges in high-voltage (HV) equipment. The aim is to address issues commonly found in traditional systems, including complex operations, high computational demands, significant power consumption, and elevated costs. Various types of discharges were investigated in an HV laboratory environment. Discharge data were collected using a high-frequency current sensor and a microcontroller. Subsequently, these data underwent processing and transformation into feature sets using the phase-resolved partial discharge (PRPD) analysis technique. These features were then converted into grayscale map samples in PNG format. To achieve partial discharge classification, a convolutional neural network (CNN) was trained on these samples. After successful training, the network model was adapted for deployment on a microcontroller, facilitated by the STM32Cube.AI ecosystem, enabling real-time partial discharge recognition. The study also examined storage requirements across different CNN layers and their impact on recognition efficacy. To assess the algorithm's robustness, recognition accuracy was tested under varying discharge voltages, insulation media thicknesses, and noise levels. The test results demonstrated that the algorithm could be effectively implemented on a microcontroller, achieving a recognition accuracy exceeding 98%.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
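
Since the CNN must fit a microcontroller's flash and RAM, the network has to stay small. A hedged Keras sketch of such a model is shown below; the input size and number of discharge classes are assumptions, and conversion for the MCU would be handled separately by STM32Cube.AI.

```python
"""Sketch: a compact CNN for PRPD grayscale maps, sized with MCU
deployment in mind (Keras). Shapes and class count are assumptions."""
import tensorflow as tf

num_classes = 4   # assumed number of discharge types
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),          # PRPD grayscale map
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # check parameter count against the MCU flash/RAM budget
```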
21 pages, 1411 KiB  
Article
Multi-Intent Natural Language Understanding Framework for Automotive Applications: A Heterogeneous Parallel Approach
by Xinlu Li, Lexuan Zhang, Liangkuan Fang and Pei Cao
Appl. Sci. 2023, 13(17), 9919; https://doi.org/10.3390/app13179919 - 1 Sep 2023
Viewed by 830
Abstract
Natural language understanding (NLU) is an important aspect of achieving human–machine interaction in the automotive application field, consisting of two core subtasks: multiple-intent detection and slot filling (ID-SF). However, existing joint multiple ID-SF tasks in the Chinese automotive domain face two challenges: (1) there is limited availability of Chinese multi-intent corpus data for research purposes in the automotive domain; (2) in current models, the interaction between intent detection and slot filling is often unidirectional, which ultimately leads to inadequate accuracy in intent detection. A novel multi-intent parallel interactive framework based on heterogeneous graphs for the automotive applications field (Auto-HPIF) is proposed to overcome these issues. Its improvements mainly include three aspects: firstly, the incorporation of the Chinese bidirectional encoder representations from transformers (BERT) language model and a Gaussian prior attention mechanism allows each word to acquire more comprehensive contextual information; secondly, the establishment of a heterogeneous graph parallel interactive network efficiently exploits intent and slot information, facilitating mutual guidance; lastly, the application of the cross-entropy loss function to the multi-intent classification task enhances the model's robustness and adaptability. Additionally, a Chinese automotive multi-intent dataset (CADS) comprising 13,100 Chinese utterances, seven types of slots, and thirty types of intents was collected and annotated. The proposed framework demonstrates significant improvements across various datasets. On CADS, the model achieves an overall accuracy of 87.94%, a notable 2.07% enhancement over the previous best baseline. The model also performs commendably on two publicly available datasets, with a 3.0% increase in overall accuracy on MixATIS and a 0.7% improvement on MixSNIPS. These findings demonstrate the efficacy and generalizability of the proposed model in tackling the complexity of joint multiple ID-SF tasks within the Chinese automotive domain.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
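
At its base, the framework pairs a Chinese BERT encoder with a multi-label intent head and a token-level slot head. The sketch below shows only that shared-encoder skeleton (PyTorch + Hugging Face transformers); the heterogeneous graph parallel interaction that defines Auto-HPIF is not reproduced, and the tag counts are assumptions.

```python
"""Sketch: joint multi-intent detection + slot filling on Chinese BERT.
Only the shared encoder and the two heads; graph interaction omitted."""
import torch.nn as nn
from transformers import BertModel

class JointNLU(nn.Module):
    def __init__(self, n_intents=30, n_slots=7 * 2 + 1):  # BIO tags for 7 slot types, assumed
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        h = self.bert.config.hidden_size
        self.intent_head = nn.Linear(h, n_intents)   # multi-label (sigmoid)
        self.slot_head = nn.Linear(h, n_slots)       # per-token tags

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)
        slot_logits = self.slot_head(out.last_hidden_state)
        return intent_logits, slot_logits

# Multi-intent detection uses BCE so several intents can fire at once.
intent_loss_fn = nn.BCEWithLogitsLoss()
slot_loss_fn = nn.CrossEntropyLoss()
```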
19 pages, 4758 KiB  
Article
Localization and Classification of Gastrointestinal Tract Disorders Using Explainable AI from Endoscopic Images
by Muhammad Nouman Noor, Muhammad Nazir, Sajid Ali Khan, Imran Ashraf and Oh-Young Song
Appl. Sci. 2023, 13(15), 9031; https://doi.org/10.3390/app13159031 - 7 Aug 2023
Cited by 2 | Viewed by 1158
Abstract
Globally, gastrointestinal (GI) tract diseases are on the rise. If left untreated, people may die from these diseases. Early discovery and categorization of these diseases can reduce their severity and save lives. Automated procedures are necessary, since manual detection and categorization are laborious, time-consuming, and prone to mistakes. In this work, we present an automated system for the localization and classification of GI diseases from endoscopic images with the help of an encoder–decoder-based model, XceptionNet, and explainable artificial intelligence (AI). Data augmentation is performed at the preprocessing stage, followed by segmentation using an encoder–decoder-based model. Contours are then drawn around the diseased area based on the segmented regions. Finally, classification is performed on the segmented images by well-known classifiers, and results are generated for various train-to-test ratios for performance analysis. For segmentation, the proposed model achieved an 82.08% Dice score, 90.30% mIoU, 94.35% precision, and an 85.97% recall rate. The best-performing classifier achieved 98.32% accuracy, 96.13% recall, and 99.68% precision using the softmax classifier. Comparison with state-of-the-art techniques shows that the proposed model performs well on all reported performance metrics. We explain this improvement in performance using heat maps generated with and without the proposed technique.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
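
The pipeline runs segmentation, contour drawing, and then classification of the segmented image. A rough sketch of those stages using OpenCV and Keras follows; the encoder–decoder is replaced by a placeholder mask, and the Xception classifier's class count and weights are assumptions.

```python
"""Sketch: segment -> draw contours -> classify, assuming OpenCV and
TensorFlow/Keras. The segmentation mask here is a placeholder; the paper
pairs its own encoder-decoder segmenter with an Xception classifier."""
import cv2
import numpy as np
import tensorflow as tf

def contours_from_mask(image, mask, thresh=0.5):
    """Draw contours around segmented disease regions."""
    binary = (mask > thresh).astype(np.uint8)
    cnts, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return cv2.drawContours(image.copy(), cnts, -1, (0, 255, 0), 2)

# Xception backbone for the classification stage (untrained, illustrative).
classifier = tf.keras.applications.Xception(
    weights=None, input_shape=(299, 299, 3), classes=4)

image = np.zeros((299, 299, 3), dtype=np.uint8)    # stand-in endoscopic frame
mask = np.zeros((299, 299), dtype=np.float32)      # stand-in segmentation output
outlined = contours_from_mask(image, mask)
probs = classifier.predict(image[None].astype("float32") / 255.0)
print(outlined.shape, probs.shape)
```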
15 pages, 4617 KiB  
Article
Cable Temperature Prediction Based on RF-GPR for Digital Twin Applications
by Weixing Han, Chunsheng Hao, Dejing Kong and Guang Yang
Appl. Sci. 2023, 13(13), 7700; https://doi.org/10.3390/app13137700 - 29 Jun 2023
Viewed by 889
Abstract
With the wide application of power cables in transmission and distribution, and with power departments placing increasing emphasis on the reliability, safety, and stability of cable operation, how to analyze the temperature distribution of power cables more accurately and quickly, and how to evaluate their running state, have become research hotspots. By combining finite element calculation with artificial intelligence methods, this paper proposes an innovative digital-twin cable temperature prediction method based on RF-GPR. Firstly, the finite element method is used to compute the coupled electromagnetic and temperature fields of a 10 kV AC cable laid in a cable trench, and the finite element results provide a body of basic data. Then, using random forest (RF) variable importance scores, an RF-GPR cable temperature prediction model is constructed as a series hybrid of RF and Gaussian process regression (GPR); the model's predictions are compared and analyzed, and the calculation is accelerated by a factor of about 1500. Finally, a digital twin platform for cable temperature calculation based on RF-GPR is designed, providing technical support for digital twin applications.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
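
The RF-GPR series hybrid first ranks input variables by random-forest importance and then fits a Gaussian process regressor on the selected variables. A scikit-learn sketch of that two-stage chain is given below, with synthetic stand-ins for the FEM-generated training data; the variable set and kernel choice are assumptions.

```python
"""Sketch: random-forest importance screening feeding a Gaussian process
regressor (scikit-learn), mirroring the RF-GPR series hybrid."""
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # e.g. load current, ambient temp, ...
y = 30 + 5 * X[:, 0] + 2 * X[:, 2] + rng.normal(0, 0.1, 500)  # cable temp (dummy)

# Stage 1: rank variables by RF importance and keep the strongest ones.
rf = RandomForestRegressor(n_estimators=200).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:3]

# Stage 2: fit GPR on the selected variables only.
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF()).fit(X[:, top], y)
mean, std = gpr.predict(X[:5, top], return_std=True)
print(mean, std)   # GPR also returns predictive uncertainty
```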
16 pages, 2121 KiB  
Article
Evaluation of Table Grape Flavor Based on Deep Neural Networks
by Zheng Liu, Yu Zhang, Yicheng Zhang, Lei Guo, Chase Wu and Wei Shen
Appl. Sci. 2023, 13(11), 6532; https://doi.org/10.3390/app13116532 - 27 May 2023
Cited by 2 | Viewed by 946
Abstract
For fresh table grapes, flavor is one of the most important components of overall quality. The flavor of table grapes includes both taste and aroma, involving multiple physical and chemical properties, such as soluble solids. In this paper, we investigate six such factors, divide flavor ratings into five grades based on the results of trained food tasters, and propose a deep-neural-network-based flavor evaluation model that integrates an attention mechanism. After training, the proposed model achieved a prediction accuracy of 94.8%, with an average difference of 2.657 points between the predicted and actual scores. This work provides a promising solution for the evaluation of table grapes and has the potential to improve product quality for future breeding in agricultural engineering.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
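
The model maps six flavor factors through an attention mechanism to a five-grade output. A minimal Keras sketch of one plausible reading, feature-wise attention weights over the six inputs, is given below; the factor names, layer sizes, and form of the attention block are assumptions, not the paper's architecture.

```python
"""Sketch: a small attention-weighted network mapping six flavor factors
to five grades (Keras). Layer sizes and attention form are assumptions."""
import tensorflow as tf

inputs = tf.keras.Input(shape=(6,))    # e.g. soluble solids, acidity, ...
# Feature-wise attention: learn a weight per factor and rescale inputs.
attn = tf.keras.layers.Dense(6, activation="softmax")(inputs)
weighted = tf.keras.layers.Multiply()([inputs, attn])
x = tf.keras.layers.Dense(32, activation="relu")(weighted)
x = tf.keras.layers.Dense(16, activation="relu")(x)
outputs = tf.keras.layers.Dense(5, activation="softmax")(x)  # five grades

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```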
15 pages, 2093 KiB  
Article
MDPN: Multilevel Distribution Propagation Network for Few-Shot Image Classification
by Jie Wu, Haixiang Zhang, Jie Feng, Hanjie Ma, Huaxiong Zhang, Mingfeng Jiang and Mengyue Shao
Appl. Sci. 2023, 13(11), 6518; https://doi.org/10.3390/app13116518 - 26 May 2023
Viewed by 1080
Abstract
Due to a shortage of labeled examples, few-shot image classification frequently suffers from noise interference and insufficient feature extraction. In this paper, we present a two-stage framework based on the distribution propagation graph neural network (DPGN), called the multilevel distribution propagation network (MDPN). MDPN comprises an instance-segmentation-based object localization (ISOL) module and a graph-based multilevel distribution propagation (GMDP) module. To create a clear and complete object zone, the ISOL module generates a mask that eliminates background and pseudo-object noise. The GMDP module enriches the levels of features. We carried out comprehensive experiments on the few-shot dataset CUB-200-2011 to demonstrate the usefulness of MDPN. The results show that MDPN outperforms DPGN in few-shot image classification accuracy: under 5-way 1-shot and 5-way 5-shot settings, the classification accuracy of MDPN exceeds the baseline by 8.17% and 1.24%, respectively. MDPN also outperforms the majority of existing few-shot classification methods in the same setting.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)
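
The ISOL idea, masking out background and pseudo-object pixels before embedding, can be illustrated in a few lines. The PyTorch sketch below applies placeholder masks in a 5-way 1-shot episode with a simple nearest-prototype decision; it does not reproduce MDPN's graph-based distribution propagation, and the backbone and mask source are stand-ins.

```python
"""Sketch: ISOL-style foreground masking before feature extraction in an
episodic few-shot setup (PyTorch). Backbone and masks are placeholders."""
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # stand-in feature extractor
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())

def masked_embedding(image, mask):
    """Zero out background/pseudo-object pixels, then embed."""
    return backbone(image * mask)

# One 5-way 1-shot episode with synthetic tensors.
support = torch.rand(5, 3, 84, 84)             # one support image per class
masks = torch.ones(5, 1, 84, 84)               # ISOL masks (placeholder)
query = torch.rand(1, 3, 84, 84)

protos = masked_embedding(support, masks)      # per-class embeddings
q = masked_embedding(query, torch.ones(1, 1, 84, 84))
pred = torch.cdist(q, protos).argmin(dim=1)    # nearest-class prediction
print("predicted class:", pred.item())
```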