Topic Editors

Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou 510006, China
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou 510006, China

AI and Data-Driven Advancements in Industry 4.0

Abstract submission deadline: 30 June 2024
Manuscript submission deadline: 30 September 2024
Viewed by 25682

Topic Information

Dear Colleagues,

Our society is filled with many modalities of data: pictures, text, voices, videos, remote-sensing images, and more. Data-driven artificial intelligence promises to derive meaning from all of these data and has opened extraordinary theoretical and application-based opportunities. In recent years, abundant theories and algorithms have been proposed, trusted, and applied in finance, security, healthcare, education, sustainability, neuroscience, sports, and beyond. Motivated by these discoveries, substantial AI-based techniques have been deployed in various domains, such as medical image analysis, virtual reality, human-computer interaction, and remote-sensing representations.

This topic aims to gather and showcase the most recent advances in next-generation artificial intelligence and Industry 4.0 applications. Our interest spans the whole spectrum of big data and artificial intelligence research in diverse domains, including sophisticated framework design, training strategies, optimization, trustworthiness and robustness, and the corresponding applications. Topic contents include (but are not limited to):

  • computer vision, natural language processing, reinforcement learning
  • multi-modal learning, object recognition, detection, segmentation, and tracking
  • graph neural network, knowledge graph, recommendation system
  • pattern recognition and intelligent system
  • blockchain theory or application
  • artificial intelligence security, data security and privacy
  • trusted data sharing, digital intellectual property protection
  • bioinformatics, medical image analysis and processing
  • remote sensing image interpretation
  • virtual reality, robotics, medical artificial intelligence

Dr. Teng Huang
Prof. Dr. Qiong Wang
Dr. Yan Pang
Topic Editors

Keywords

  • computer vision
  • natural language processing
  • reinforcement learning
  • graph neural network
  • pattern recognition and intelligent system
  • blockchain
  • security and privacy
  • medical image analysis and processing
  • remote sensing image interpretation
  • medical artificial intelligence

Participating Journals

Journal          Impact Factor   CiteScore   Launched   First Decision (median)   APC
Remote Sensing   5.0             7.9         2009       21.1 days                 CHF 2700
Cancers          5.2             7.4         2009       18.2 days                 CHF 2900
Mathematics      2.4             3.5         2013       17.7 days                 CHF 2600
Sensors          3.9             6.8         2001       16.4 days                 CHF 2600
Electronics      2.9             4.7         2012       15.8 days                 CHF 2200

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (27 papers)

16 pages, 9688 KiB  
Article
Deep Network-Assisted Quality Inspection of Laser Welding on Power Battery
Sensors 2023, 23(21), 8894; https://doi.org/10.3390/s23218894 - 01 Nov 2023
Viewed by 369
Abstract
Reliable quality control of laser welding on power batteries is an important issue due to random interference in the production process. In this paper, a quality inspection framework based on a two-branch network and conventional image processing is proposed to predict welding quality while outputting corresponding parameter information. The two-branch network consists of a segmentation network and a classification network, which alleviates the large training sample size requirements of deep learning by sharing feature representations between two related tasks. Moreover, coordinate attention is introduced into the feature learning modules of the network to effectively capture the subtle features of defective welds. Finally, a post-processing method based on the Hough transform is used to extract the information of the segmented weld region. Extensive experiments demonstrate that the proposed model achieves strong classification performance on a dataset collected from an actual production line. This study provides a valuable reference for intelligent quality inspection systems in the power battery manufacturing industry.
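
The coordinate attention mentioned here is a published general-purpose module (Hou et al., CVPR 2021) rather than something specific to this paper. A minimal PyTorch sketch of the usual formulation follows; the channel count and reduction ratio are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Factorizes channel attention into two 1D encodings along height
    and width, so the block keeps positional information in each axis."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = torch.cat([x_h, x_w], dim=2)                        # (n, c, h+w, 1)
        y = self.act(self.bn(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w

x = torch.randn(2, 64, 32, 32)
print(CoordinateAttention(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```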

13 pages, 3992 KiB  
Article
Prediction of Radiation Treatment Response for Locally Advanced Rectal Cancer via a Longitudinal Trend Analysis Framework on Cone-Beam CT
Cancers 2023, 15(21), 5142; https://doi.org/10.3390/cancers15215142 - 25 Oct 2023
Viewed by 523
Abstract
Locally advanced rectal cancer (LARC) presents a significant challenge in terms of treatment management, particularly with regard to identifying patients who are likely to respond to radiation therapy (RT) at an individualized level. Patients respond to the same radiation treatment course differently due to inter- and intra-patient variability in radiosensitivity. In-room volumetric cone-beam computed tomography (CBCT) is widely used to ensure proper alignment, but it also allows us to assess tumor response during the treatment course. In this work, we propose a longitudinal radiomic trend (LRT) framework for accurate and robust treatment response assessment using daily CBCT scans for early detection of patient response. The LRT framework consists of four modules: (1) automated registration and evaluation of CBCT scans to the planning CT; (2) feature extraction and normalization; (3) longitudinal trending analyses; and (4) feature reduction and model creation. The effectiveness of the framework was validated via leave-one-out cross-validation (LOOCV), using a total of 840 CBCT scans for a retrospective cohort of LARC patients. The trending model demonstrates significant differences between the responder and non-responder groups, with an Area Under the Curve (AUC) of 0.98, which allows for systematic monitoring and early prediction of patient response during the RT course for potential adaptive management.
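
A minimal sketch of the validation scheme described above, leave-one-out cross-validation scored with AUC, using scikit-learn; the logistic model and synthetic features are stand-ins, not the paper's trending model:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical stand-ins: rows are patients, columns are longitudinal
# radiomic trend features; y marks responder (1) vs non-responder (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=40) > 0).astype(int)

scores = np.empty(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

print(f"LOOCV AUC: {roc_auc_score(y, scores):.3f}")
```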

21 pages, 1036 KiB  
Article
Local Differential Privacy Based Membership-Privacy-Preserving Federated Learning for Deep-Learning-Driven Remote Sensing
Remote Sens. 2023, 15(20), 5050; https://doi.org/10.3390/rs15205050 - 20 Oct 2023
Viewed by 451
Abstract
With the development of deep learning, image recognition based on deep learning is now widely used in remote sensing. As we know, the effectiveness of deep learning models significantly benefits from the size and quality of the dataset. However, remote sensing data are often distributed across different parties and cannot be shared directly for privacy and security reasons, which has motivated some scholars to apply federated learning (FL) to remote sensing. However, research has found that federated learning is usually vulnerable to white-box membership inference attacks (MIAs), which aim to infer whether a piece of data was used in model training. In remote sensing, an MIA can lead to the disclosure of sensitive information about the model trainers, such as the location and type of the remote sensing equipment, as well as acquisition times. To solve this issue, we consider embedding local differential privacy (LDP) into FL and propose LDP-Fed. LDP-Fed performs local differential privacy perturbation after properly pruning the uploaded parameters, preventing the central server from obtaining the original local models from the participants. To achieve a trade-off between privacy and model performance, LDP-Fed adds different noise levels to the parameters of the various layers of the local models. We conducted comprehensive experiments to evaluate the framework's effectiveness on two remote sensing image datasets and two machine learning benchmark datasets. The results demonstrate that remote sensing image classification models are susceptible to MIAs, and our framework can successfully defend against the white-box MIA while achieving an excellent global model.
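
A toy numpy sketch of the upload-side idea: clip (prune) each layer's update, then perturb it with layer-specific Laplace noise before sending it to the server. The clipping rule, noise scales, and sensitivity bound here are illustrative assumptions, not the paper's exact mechanism:

```python
import numpy as np

def ldp_perturb_update(layer_updates, clip_norms, epsilons, rng):
    """Clip each layer's update to bound its sensitivity, then add
    Laplace noise calibrated per layer before uploading."""
    noisy = []
    for w, c, eps in zip(layer_updates, clip_norms, epsilons):
        norm = np.linalg.norm(w)
        w_clipped = w * min(1.0, c / (norm + 1e-12))
        # Laplace mechanism; scale 2c/eps is an illustrative sensitivity bound.
        noise = rng.laplace(scale=(2.0 * c) / eps, size=w.shape)
        noisy.append(w_clipped + noise)
    return noisy

rng = np.random.default_rng(42)
updates = [rng.normal(size=(64, 32)), rng.normal(size=(10, 64))]
# Smaller epsilon = stronger privacy = more noise; varied per layer.
protected = ldp_perturb_update(updates, clip_norms=[1.0, 1.0],
                               epsilons=[4.0, 8.0], rng=rng)
print([p.shape for p in protected])
```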

26 pages, 8654 KiB  
Article
HFCC-Net: A Dual-Branch Hybrid Framework of CNN and CapsNet for Land-Use Scene Classification
Remote Sens. 2023, 15(20), 5044; https://doi.org/10.3390/rs15205044 - 20 Oct 2023
Viewed by 470
Abstract
Land-use scene classification (LUSC) is a key technique in the field of remote sensing imagery (RSI) interpretation. A convolutional neural network (CNN) is widely used for its ability to autonomously and efficiently extract deep semantic feature maps (DSFMs) from large-scale RSI data. However, CNNs cannot accurately extract the rich spatial structure information of RSI, and key information is easily lost in the many pooling layers, so it is difficult to ensure the integrity of the spatial structure feature maps (SSFMs) and DSFMs with CNNs alone, which can easily degrade classification performance. To fully utilize the SSFMs, make up for the insufficiency of CNNs in capturing the relationships between the land-use objects of RSI, and reduce the loss of important information, we propose an effective dual-branch hybrid framework, HFCC-Net, for the LUSC task. The CNN in the upper branch extracts multi-scale DSFMs of the same scene using transfer learning techniques; the graph-routing-based CapsNet in the lower branch is used to obtain SSFMs from DSFMs at different scales, with element-by-element summation producing enhanced representations of the SSFMs; a newly designed function fuses the top-level DSFMs with the SSFMs to generate discriminant feature maps (DFMs); and, finally, the DFMs are fed into the classifier. We conducted extensive experiments with HFCC-Net on four public datasets. The results show that our method achieves better classification performance than several existing CNN-based state-of-the-art methods.

13 pages, 1754 KiB  
Article
A Study on Survival Analysis Methods Using Neural Network to Prevent Cancers
Cancers 2023, 15(19), 4757; https://doi.org/10.3390/cancers15194757 - 27 Sep 2023
Viewed by 429
Abstract
Background: Cancer is one of the main global health threats. Early personalized prediction of cancer incidence is crucial for the population at risk. This study introduces a novel cancer prediction model based on modern recurrent survival deep learning algorithms. Methods: The study includes 160,407 participants from the blood-based cohort of the Korea Cancer Prevention Research-II Biobank, which has been ongoing since 2004. Data linkages were designed to ensure anonymity, and data collection was carried out through nationwide medical examinations. Predictive performance on ten cancer sites, evaluated using the concordance index (c-index), was compared among nDeep and its multitask variation, Cox proportional hazard (PH) regression, DeepSurv, and DeepHit. Results: Our models consistently achieved a c-index of over 0.8 for all ten cancers, with a peak of 0.8922 for lung cancer. They outperformed Cox PH regression and other survival deep neural networks. Conclusion: To the best of our knowledge, this study presents the survival deep learning model with the highest predictive performance on a censored health dataset. In the future, we plan to investigate the causal relationship between explanatory variables and cancer to reduce cancer incidence and mortality.
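
The c-index used for evaluation generalizes AUC to censored survival data: it is the fraction of comparable patient pairs whose predicted risk ordering agrees with the observed event ordering. A naive O(n²) implementation is short enough to show in full (the data below are made up):

```python
import numpy as np

def concordance_index(times, scores, events):
    """Fraction of comparable pairs whose predicted risk ordering
    agrees with observed survival; ties in score count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair comparable only if subject i had the event before time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([5.0, 8.0, 3.0, 10.0])
events = np.array([1, 0, 1, 1])         # 0 = censored observation
risks = np.array([0.9, 0.3, 0.8, 0.2])  # higher = higher predicted risk
print(f"c-index: {concordance_index(times, risks, events):.3f}")  # 0.800
```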

20 pages, 2193 KiB  
Article
Long-Time Coherent Integration for Marine Targets Based on Segmented Compensation
Remote Sens. 2023, 15(18), 4530; https://doi.org/10.3390/rs15184530 - 14 Sep 2023
Cited by 1 | Viewed by 528
Abstract
Long-time coherent integration is an effective method for detecting dim targets in heavy sea clutter. To detect dim targets, a novel long-time coherent integration method based on segmented compensation is proposed in this paper. The method models the complex motion of a marine target as a combination of multi-stage uniformly accelerated motions. Based on the difference in energy distribution in the Doppler frequency domain, the method suppresses sea clutter and detects regions of interest (ROIs). Using time-frequency domain energy analysis, potential targets can be extracted. After parameter estimation and segmentation, a phase compensation factor is applied to each potential target to eliminate the Doppler frequency modulation caused by its complex motion. Finally, long-time coherent integration is performed on the compensated signal to realize target detection and discrimination under a low signal-to-clutter ratio. To verify the effectiveness of the proposed method, we apply simulation data and measured CSIR data in the experiments. The results show that the proposed method integrates target energy more effectively than MTD and RFrFT and has better detection performance for complex moving targets in low signal-to-clutter ratio situations.
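
The core compensation step can be illustrated in a few lines of numpy: a segment modeled as uniformly accelerated motion yields a linear-FM (chirped) echo, and multiplying by a conjugate chirp built from the estimated chirp rate collapses the energy back into a single Doppler bin before integration. All parameters below are invented for illustration:

```python
import numpy as np

fs, T = 1000.0, 1.0                    # sampling rate (Hz), dwell time (s)
t = np.arange(0, T, 1 / fs)
f0, k = 100.0, 40.0                    # start frequency and chirp rate (Hz/s)
echo = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t**2))
echo += 0.5 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Phase compensation: conjugate chirp with the (assumed already estimated)
# chirp rate removes the Doppler frequency modulation for this segment.
k_est = 40.0
compensated = echo * np.exp(-1j * np.pi * k_est * t**2)

# Coherent integration: the energy now concentrates in one Doppler bin.
spectrum = np.abs(np.fft.fft(compensated))
print("peak bin:", spectrum.argmax(), "~ f0 =", f0, "Hz")
```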

14 pages, 2321 KiB  
Article
PSBF: p-adic Integer Scalable Bloom Filter
Sensors 2023, 23(18), 7775; https://doi.org/10.3390/s23187775 - 09 Sep 2023
Viewed by 445
Abstract
Given the challenges associated with the dynamic expansion of the conventional Bloom filter's capacity, the prevalence of false positives, and subpar access performance, this study employs the algebraic and topological characteristics of p-adic integers to introduce the p-adic Integer Scalable Bloom Filter (PSBF), an innovative approach to dynamic capacity expansion. The proposed method converts the target element into an integer using a string hash function, and then converts that integer into a p-adic integer through its algebraic properties. This process automatically establishes the topological tree access structure of the PSBF. The experiments compared access performance among the standard Bloom filter, the dynamic Bloom filter, and the scalable Bloom filter. The findings indicate that the PSBF offers advantages such as avoidance of a linear storage structure, more efficient element insertion and query, improved storage space utilization, and a reduced likelihood of false positives. The PSBF thus presents a novel approach to the dynamic extensibility of Bloom filters.
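
The p-adic expansion that drives the tree structure is easy to sketch: hash the element to an integer and read off its base-p digits, which then select one child per level of the filter tree. This is an illustrative reading of the indexing idea, not the authors' exact construction:

```python
import hashlib

def padic_digits(n, p, k):
    """First k digits of the p-adic expansion of n (least significant first)."""
    digits = []
    for _ in range(k):
        digits.append(n % p)
        n //= p
    return digits

def tree_path(element, p=3, depth=4):
    """Hash the element to an integer, then use its p-adic digits as the
    path through a p-ary tree of fixed-size Bloom filters."""
    h = int.from_bytes(hashlib.sha256(element.encode()).digest(), "big")
    return padic_digits(h, p, depth)

print(tree_path("sensor-42"))   # one child index per tree level
```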

31 pages, 660 KiB  
Article
Machine Learning-Based Model Predictive Control of Two-Time-Scale Systems
Mathematics 2023, 11(18), 3827; https://doi.org/10.3390/math11183827 - 06 Sep 2023
Viewed by 488
Abstract
In this study, we present a general form of nonlinear two-time-scale systems, where singular perturbation analysis is used to separate the dynamics of the slow and fast subsystems. Machine learning techniques are utilized to approximate the dynamics of both subsystems. Specifically, a recurrent neural network (RNN) and a feedforward neural network (FNN) are used to predict the slow and fast state vectors, respectively. Moreover, we investigate the generalization error bounds for these machine learning models approximating the dynamics of two-time-scale systems. Next, under the assumption that the fast states are asymptotically stable, our focus shifts toward designing a Lyapunov-based model predictive control (LMPC) scheme that exclusively employs the RNN to predict the dynamics of the slow states. Additionally, we derive sufficient conditions to guarantee the closed-loop stability of the system under the sample-and-hold implementation of the controller. A nonlinear chemical process example is used to demonstrate the theory. In particular, two RNN models are constructed: one to model the full two-time-scale system and the other to predict solely the slow state vector. Both models are integrated within the LMPC scheme, and we compare their closed-loop performance while assessing the computational time required to execute the LMPC optimization problem.

23 pages, 3444 KiB  
Article
AI-Enabled Vibrotactile Feedback-Based Condition Monitoring Framework for Outdoor Mobile Robots
Mathematics 2023, 11(18), 3804; https://doi.org/10.3390/math11183804 - 05 Sep 2023
Viewed by 495
Abstract
An automated Condition Monitoring (CM) and real-time controlling framework is essential for outdoor mobile robots to ensure the robot's health and operational safety. This work presents a novel Artificial Intelligence (AI)-enabled CM and vibrotactile haptic-feedback-based real-time control framework suitable for deploying mobile robots in dynamic outdoor environments. It encompasses two sections: the development of a 1D Convolutional Neural Network (1D CNN) model for predicting system degradation and terrain flaw threshold classes, and the design of a vibrotactile haptic feedback system that enables a remote operator to control the robot in real time according to the predicted class feedback. As vibration is an indicator of failure, we identified and separated system- and terrain-induced vibration threshold levels suitable for CM of outdoor robots into nine classes, namely Safe, moderately safe system-generated, and moderately safe terrain-induced affected by the left, right, and both wheels, as well as severe classes such as unsafe system-generated and unsafe terrain-induced affected by the left, right, and both wheels. The vibration-indicated data for each class are modelled from two sensor streams: an Inertial Measurement Unit (IMU) sensor for changes in linear and angular motion and a current sensor for changes in current consumption at each wheel motor. A novel wearable vibrotactile haptic feedback device architecture is presented, with left and right vibration modules configured with unique haptic feedback patterns corresponding to each abnormal vibration threshold class. The proposed haptic-feedback-based CM framework and real-time remote controlling are validated with three field case studies using an in-house-developed outdoor robot, resulting in a threshold class prediction accuracy of 91.1% and an effectiveness that, by minimising traversal through undesired terrain features, is four times better than the usual practice.
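
A minimal PyTorch sketch of a simple-structured 1D CNN classifier of the kind described, mapping multi-channel sensor windows to the nine threshold classes; the channel count, window length, and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class Vibration1DCNN(nn.Module):
    """Minimal 1D CNN for vibration threshold classification; the paper
    fuses IMU and wheel-current streams into nine threshold classes."""
    def __init__(self, in_channels=10, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):             # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = Vibration1DCNN()
logits = model(torch.randn(8, 10, 256))   # 8 windows, 10 channels, 256 samples
print(logits.shape)                       # torch.Size([8, 9])
```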

25 pages, 41037 KiB  
Article
Single Object Tracking in Satellite Videos Based on Feature Enhancement and Multi-Level Matching Strategy
Remote Sens. 2023, 15(17), 4351; https://doi.org/10.3390/rs15174351 - 04 Sep 2023
Viewed by 633
Abstract
Despite significant advancements in remote sensing object tracking (RSOT) in recent years, achieving accurate and continuous tracking of tiny-sized targets remains challenging due to similar object interference and other related issues. In this paper, from the perspective of feature enhancement and a better feature matching strategy, we present SiamTM, a tracker specifically designed for RSOT based on a new target information enhancement (TIE) module and a multi-level matching strategy. First, we propose a TIE module to address the challenge of tiny object sizes in satellite videos. The TIE module operates along two spatial directions to capture orientation- and position-aware information, respectively, while capturing inter-channel information at the global 2D image level. It enables the network to extract discriminative features of the targets more effectively from satellite images. Furthermore, we introduce a multi-level matching (MM) module that is better suited to satellite video targets. The MM module first embeds the target feature map, after ROI Align, into each position of the search region feature map to obtain a preliminary response map. Subsequently, the preliminary response map and the template region feature map undergo a depth-wise cross-correlation operation to produce a more refined response map. Through this coarse-to-fine approach, the tracker obtains a response map with more accurate positions, which lays a good foundation for the prediction operations of the subsequent sub-networks. We conducted extensive experiments on two large satellite video single-object tracking datasets: SatSOT and SV248S. Without bells and whistles, SiamTM achieved competitive results on both datasets while running at real-time speed.
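
Depth-wise cross-correlation is a standard Siamese-tracking primitive (popularized by SiamRPN++) and can be written with a grouped conv2d; the sketch below is the generic operation, not SiamTM's full matching module:

```python
import torch
import torch.nn.functional as F

def depthwise_xcorr(search, kernel):
    """Each channel of the template (kernel) acts as its own correlation
    filter over the matching channel of the search-region features."""
    b, c, h, w = search.shape
    kh, kw = kernel.shape[2], kernel.shape[3]
    # Fold batch into groups so conv2d correlates per sample and channel.
    search = search.reshape(1, b * c, h, w)
    kernel = kernel.reshape(b * c, 1, kh, kw)
    out = F.conv2d(search, kernel, groups=b * c)
    return out.reshape(b, c, out.shape[2], out.shape[3])

resp = depthwise_xcorr(torch.randn(2, 256, 29, 29), torch.randn(2, 256, 5, 5))
print(resp.shape)   # torch.Size([2, 256, 25, 25])
```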

21 pages, 1447 KiB  
Article
AI-Enabled Condition Monitoring Framework for Indoor Mobile Cleaning Robots
Mathematics 2023, 11(17), 3682; https://doi.org/10.3390/math11173682 - 26 Aug 2023
Viewed by 449
Abstract
Autonomous mobile cleaning robots are ubiquitous today and have a vast market need. Current studies mainly focus on autonomous cleaning performance, and there is a research gap in monitoring the robot's health and safety. Vibration is a key indicator of system deterioration or of external factors causing accelerated degradation or threats. Hence, this work proposes an artificial intelligence (AI)-enabled automated condition monitoring (CM) framework using two heterogeneous sensor datasets to predict the sources of anomalous vibration in mobile robots with high accuracy. This allows proper maintenance or corrective actions to be triggered based on the condition of the robot's health or workspace, easing condition-based maintenance (CbM). Anomalous vibration sources are classified as induced by uneven Terrain, Collision with obstacles, loose Assembly, and unbalanced Structure, which cause accelerated system deterioration or potential hazards. Here, a previously unexplored heterogeneous sensor dataset combining inertial measurement unit (IMU) and current sensors is proposed for effective recognition across the different vibration classes, resulting in higher-accuracy prediction. A simple-structured 1D convolutional neural network (1D CNN) is developed for training and real-time prediction. A 2D CbM map is generated by fusing the predicted classes in real time on an occupancy grid map of the workspace to remotely monitor the conditions of the robot and the workspace. The evaluation test results show that the use of heterogeneous sensors performs significantly more accurately (98.4%) than in previous studies, which used IMU (92.2%) and camera (93.8%) sensors individually. Field-trial validations also show that the model is comparatively fast and well suited to real-time applications in mobile robots, enhancing their productivity and operational safety.

22 pages, 1778 KiB  
Article
AI-Enabled Condition Monitoring Framework for Outdoor Mobile Robots Using 3D LiDAR Sensor
Mathematics 2023, 11(16), 3594; https://doi.org/10.3390/math11163594 - 19 Aug 2023
Viewed by 479
Abstract
An automated condition monitoring (CM) framework is essential for outdoor mobile robots to trigger prompt maintenance and corrective actions based on the level of system deterioration and the state of uneven outdoor terrain features. Vibration indicates system failures and terrain abnormalities in mobile robots; hence, five vibration threshold classes for CM in outdoor mobile robots were identified, considering both system deterioration and uneven terrain as vibration sources. This study proposes a novel CM approach for outdoor mobile robots using a 3D LiDAR, employed here instead of in its usual role as a navigation sensor, by developing an algorithm to extract vibration-indicated data from the point cloud, ensuring low computational costs without losing vibration characteristics. The algorithm computes cuboids for two prominent clusters in every point cloud frame and sets motion points at the corners and centroid of each cuboid. The three-dimensional vector displacements of these points over consecutive point cloud frames, which correspond to the vibration-affected clusters, are compiled as vibration-indication data for each threshold class. A simple-structured 1D Convolutional Neural Network (1D CNN)-based vibration threshold prediction model is proposed for fast, accurate, real-time application. Finally, a threshold class mapping framework is developed which fuses the predicted threshold classes onto a 3D occupancy map of the workspace, generating a 3D CbM map in real time and fostering a Condition-based Maintenance (CbM) strategy. The offline evaluation results show an average accuracy of 89.6% across vibration threshold classes, with a consistent accuracy of 89% during real-time field case studies. The test outcomes validate that the proposed 3D-LiDAR-based CM framework is suitable for outdoor mobile robots, assuring the robot's health and operational safety.

21 pages, 584 KiB  
Article
Smart Contract Vulnerability Detection Based on Deep Learning and Multimodal Decision Fusion
Sensors 2023, 23(16), 7246; https://doi.org/10.3390/s23167246 - 18 Aug 2023
Cited by 1 | Viewed by 1810
Abstract
With the rapid development and widespread application of blockchain technology in recent years, smart contracts running on blockchains often face security vulnerabilities, resulting in significant economic losses. Unlike traditional programs, smart contracts cannot be modified once deployed, so vulnerabilities cannot be remedied. Therefore, vulnerability detection for smart contracts has become a research focus. Most existing vulnerability detection methods are based on rules defined by experts, which are inefficient and scale poorly. Although some studies have used machine learning methods to extract contract features for vulnerability detection, they consider only a single type of feature and so cannot fully utilize the information in smart contracts. To overcome the limitations of existing methods, this paper proposes a smart contract vulnerability detection method based on deep learning and multimodal decision fusion that also considers the code semantics and control structure information of smart contracts. It integrates the source code, operation code, and control-flow modalities through the multimodal decision fusion method. The deep learning method extracts five features used to represent contracts and achieves high accuracy and recall rates. The experimental results show that the detection accuracy of our method for arithmetic, re-entrancy, transaction order dependence, and Ether locking vulnerabilities reaches 91.6%, 90.9%, 94.8%, and 89.5%, respectively, with AUC values of 0.834, 0.852, 0.886, and 0.825, respectively, demonstrating a good vulnerability detection effect. Furthermore, ablation experiments show that the multimodal decision fusion method contributes significantly to the fusion of the different modalities.
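
Decision-level fusion of the three modality branches can be as simple as a weighted average of their per-vulnerability probabilities; the sketch below illustrates that pattern with invented weights and scores (the paper's actual fusion rule may differ):

```python
import numpy as np

def decision_fusion(prob_src, prob_op, prob_cfg, weights=(0.4, 0.3, 0.3)):
    """Each modality branch (source code, opcode, control flow) outputs a
    vulnerability probability vector; the fused decision is their weighted
    average, thresholded at 0.5. Weights here are illustrative."""
    stacked = np.stack([prob_src, prob_op, prob_cfg])          # (3, n_vulns)
    fused = np.average(stacked, axis=0, weights=weights)
    return fused, fused > 0.5                                  # scores, flags

# Hypothetical per-vulnerability probabilities from three branch models:
# [arithmetic, re-entrancy, transaction-order dependence, Ether locking]
p_source = np.array([0.91, 0.40, 0.85, 0.20])
p_opcode = np.array([0.80, 0.55, 0.90, 0.30])
p_cfg    = np.array([0.75, 0.70, 0.95, 0.10])
scores, flags = decision_fusion(p_source, p_opcode, p_cfg)
print(scores.round(3), flags)
```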

22 pages, 3027 KiB  
Article
Skeleton-Based Human Action Recognition Based on Single Path One-Shot Neural Architecture Search
Electronics 2023, 12(14), 3156; https://doi.org/10.3390/electronics12143156 - 20 Jul 2023
Cited by 1 | Viewed by 484
Abstract
Skeleton-based human action recognition based on Neural Architecture Search (NAS) typically adopts a one-shot NAS strategy, which improves the speed of evaluating candidate models in the search space through weight sharing and has therefore attracted significant attention. However, directly applying the one-shot NAS method to skeleton recognition requires training a super-net with a large search space that traverses various combinations of model parameters, which often leads to overly large network models and high computational costs. In addition, when training this super-net, one-shot NAS needs to traverse the entire search space of the complete skeleton recognition task. Furthermore, the traditional method does not consider optimization of the search strategy. As a result, a significant amount of search time is required to obtain a better skeleton recognition network model. To address these challenges, a more efficient weighting model, a NAS skeleton recognition model based on the Single Path One-shot (SNAS-GCN) strategy, is proposed. First, to reduce the model search space, a simplified four-category search space is introduced to replace the mainstream multi-category search space. Second, to improve search efficiency, a single-path one-shot approach is introduced, through which the model randomly samples one architecture at each step of the search training optimization. Finally, an adaptive Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is proposed to obtain a candidate structure of the ideal model automatically. With these three steps, the entire network architecture of the recognition model (and its weights) is fully and equally trained, and the search and training costs are greatly reduced. The searched model is trained on the NTU-RGB+D and Kinetics datasets to evaluate the performance of the proposed search strategy. The experimental results show that the search time of the proposed method is about 0.3 times that of the state-of-the-art method, while the recognition accuracy is roughly comparable to that of the SOTA NAS-GCN method.

15 pages, 8825 KiB  
Article
LPO-YOLOv5s: A Lightweight Pouring Robot Object Detection Algorithm
Sensors 2023, 23(14), 6399; https://doi.org/10.3390/s23146399 - 14 Jul 2023
Viewed by 747
Abstract
The casting process involves pouring molten metal into a mold cavity. Currently, traditional object detection algorithms exhibit low accuracy and are rarely used, while object detection models based on deep learning require large amounts of memory, posing challenges for deployment and resource allocation on resource-limited pouring robots. To achieve accurate identification and localization of pouring holes with limited resources, this paper designs a lightweight pouring robot hole detection algorithm named LPO-YOLOv5s, based on YOLOv5s. First, the MobileNetv3 network is introduced as the feature extraction network to reduce model complexity and the number of parameters. Second, a depthwise separable information fusion module (DSIFM) is designed, and a lightweight operator called CARAFE is employed for feature upsampling to enhance the feature extraction capability of the network. Finally, a dynamic head (DyHead) is adopted in the network prediction stage to improve detection performance. Extensive experiments were conducted on a pouring hole dataset to evaluate the proposed method. Compared to YOLOv5s, our LPO-YOLOv5s algorithm reduces the parameter count by 45% and the computational cost by 55%, while sacrificing only 0.1% of mean average precision (mAP). The model size is only 7.74 MB, fulfilling the deployment requirements of pouring robots.
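
A depthwise separable convolution, the primitive behind MobileNetv3 and (by name) the DSIFM, splits a standard convolution into a per-channel spatial filter plus a 1x1 pointwise mix; the parameter comparison in this minimal PyTorch sketch shows where the savings come from:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Per-channel (depthwise) spatial filter followed by a 1x1
    pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

std = nn.Conv2d(64, 128, 3, padding=1)
sep = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), "vs", count(sep))   # ~73.9k vs ~9k parameters
```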

17 pages, 6228 KiB  
Article
YOLO-Xray: A Bubble Defect Detection Algorithm for Chip X-ray Images Based on Improved YOLOv5
Electronics 2023, 12(14), 3060; https://doi.org/10.3390/electronics12143060 - 13 Jul 2023
Viewed by 1064
Abstract
In chip manufacturing, the accurate and effective detection of internal bubble defects is essential to maintaining product reliability. In general, the inspection is performed manually by viewing X-ray images, which is time-consuming and less reliable. To solve these problems, YOLO-Xray, an improved bubble defect detection model for chip X-ray images based on the YOLOv5 algorithm, is proposed. First, the chip X-ray images are preprocessed by image segmentation to construct the chip X-ray defect dataset, CXray. Then, in the input stage, the k-means++ algorithm is used to re-cluster the CXray dataset to generate anchors suitable for the dataset. In the backbone network, a micro-scale detection head is added to improve the capability for small defect detection. In the neck network, the bidirectional feature fusion idea of BiFPN is used to construct a new feature fusion network based on the improved backbone to fuse the semantic features of different layers. In addition, the Quality Focal Loss function is used to replace the cross-entropy loss function to address the imbalance of positive and negative samples. The experimental results show that the mean average precision (mAP) of the YOLO-Xray algorithm on the CXray dataset reaches 93.5%, 5.1% higher than that of the original YOLOv5. Meanwhile, the YOLO-Xray algorithm achieves state-of-the-art detection accuracy and speed compared with other mainstream object detection models, and can thus provide technical support for bubble defect detection in chip X-ray images. The CXray dataset is also openly available.
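
The anchor-fitting step can be sketched with scikit-learn's k-means++ initialization on box width-height pairs; note that YOLO tooling typically clusters with an IoU-based distance rather than the Euclidean distance used here, and the box statistics below are invented:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical box sizes (width, height in pixels) parsed from a YOLO-style
# label set; in the paper these would come from the CXray dataset.
rng = np.random.default_rng(1)
wh = np.abs(rng.normal(loc=[24, 18], scale=[10, 8], size=(500, 2)))

# k-means++ initialization re-clusters the boxes into 9 anchors
# (3 per detection scale), mirroring the YOLOv5 anchor-fitting step.
km = KMeans(n_clusters=9, init="k-means++", n_init=10, random_state=0).fit(wh)
anchors = km.cluster_centers_[np.argsort(km.cluster_centers_.prod(axis=1))]
print(np.round(anchors, 1))   # (w, h) pairs sorted small -> large
```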

16 pages, 3432 KiB  
Article
Accurate Nonlinearity and Temperature Compensation Method for Piezoresistive Pressure Sensors Based on Data Generation
Sensors 2023, 23(13), 6167; https://doi.org/10.3390/s23136167 - 05 Jul 2023
Cited by 1 | Viewed by 759
Abstract
Piezoresistive pressure sensors exhibit inherent nonlinearity and sensitivity to ambient temperature, requiring multidimensional compensation to achieve accurate measurements. However, recent studies on software compensation have mainly focused on developing advanced and intricate algorithms while neglecting the importance of calibration data and the limitations of computing resources. This paper presents a novel compensation method that generates more data by learning the calibration process of pressure sensors and uses a larger dataset, instead of more complex models, to improve the compensation effect. The method is implemented with the proposed Aquila optimizer-optimized mixed polynomial kernel extreme learning machine (AO-MPKELM) algorithm. We conducted a detailed calibration experiment to assess the quality of the generated data and evaluated the performance of the proposed method through ablation analysis. The results demonstrate a high level of consistency between the generated and real data, with a maximum voltage deviation of only 0.71 millivolts. When a bilinear interpolation algorithm is used for compensation, the extra generated data help reduce measurement errors by 78.95%, ultimately achieving 0.03% full-scale (FS) accuracy. These findings show that the proposed method is valid for high-accuracy measurements and has superior engineering applicability.

15 pages, 1593 KiB  
Review
Recent Advances in Deep Learning and Medical Imaging for Head and Neck Cancer Treatment: MRI, CT, and PET Scans
Cancers 2023, 15(13), 3267; https://doi.org/10.3390/cancers15133267 - 21 Jun 2023
Cited by 2 | Viewed by 1709
Abstract
Deep learning techniques have been developed for analyzing head and neck cancer imaging. This review covers deep learning applications in cancer imaging, emphasizing tumor detection, segmentation, classification, and response prediction. In particular, we discuss advanced deep learning techniques, such as convolutional autoencoders, generative adversarial networks (GANs), and transformer models, as well as the limitations of traditional imaging and the complementary roles of deep learning and traditional techniques in cancer management. The integration of radiomics, radiogenomics, and deep learning enables predictive models that aid in clinical decision-making. Challenges include standardization, algorithm interpretability, and clinical validation. Key gaps and controversies involve model generalizability across different imaging modalities and tumor types, as well as the role of human expertise in the AI era. This review seeks to encourage advancements in deep learning applications for head and neck cancer management, ultimately enhancing patient care and outcomes.

32 pages, 3977 KiB  
Article
A Snapshot-Stacked Ensemble and Optimization Approach for Vehicle Breakdown Prediction
Sensors 2023, 23(12), 5621; https://doi.org/10.3390/s23125621 - 15 Jun 2023
Viewed by 961
Abstract
Predicting breakdowns is becoming one of the main goals for vehicle manufacturers, so as to better allocate resources and to reduce costs and safety issues. At the core of the utilization of vehicle sensors is the fact that early detection of anomalies facilitates the prediction of potential breakdown issues, which, if otherwise undetected, could lead to breakdowns and warranty claims. However, making such predictions is too complex a challenge to solve using simple predictive models. The strength of heuristic optimization techniques in solving NP-hard problems, and the recent success of ensemble approaches to various modeling problems, motivated us to investigate a hybrid optimization- and ensemble-based approach to tackle this complex task. In this study, we propose a snapshot-stacked ensemble deep neural network (SSED) approach to predict vehicle claims (here, a claim refers to a breakdown or a fault) by considering vehicle operational life records. The approach includes three main modules: data pre-processing, dimensionality reduction, and ensemble learning. The first module runs a set of practices to integrate various sources of data, extract hidden information, and segment the data into different time windows. In the second module, the most informative measurements for representing vehicle usage are selected through an adapted heuristic optimization approach. Finally, in the last module, the ensemble machine learning approach utilizes the selected measurements to map vehicle usage to breakdowns for the prediction. The proposed approach integrates two sources of data collected from thousands of heavy-duty trucks: Logged Vehicle Data (LVD) and Warranty Claim Data (WCD). The experimental results confirm the proposed system's effectiveness in predicting vehicle breakdowns. By adapting the optimization and snapshot-stacked ensemble deep networks, we demonstrate how sensor data, in the form of vehicle usage history, contribute to claim predictions. The experimental evaluation of the system on other application domains also indicates the generality of the proposed approach.

35 pages, 1490 KiB  
Review
Machine Learning Techniques for Developing Remotely Monitored Central Nervous System Biomarkers Using Wearable Sensors: A Narrative Literature Review
Sensors 2023, 23(11), 5243; https://doi.org/10.3390/s23115243 - 31 May 2023
Cited by 1 | Viewed by 1265
Abstract
Background: Central nervous system (CNS) disorders benefit from ongoing monitoring to assess disease progression and treatment efficacy. Mobile health (mHealth) technologies offer a means for the remote and continuous symptom monitoring of patients. Machine Learning (ML) techniques can process and engineer mHealth data into a precise and multidimensional biomarker of disease activity. Objective: This narrative literature review aims to provide an overview of the current landscape of biomarker development using mHealth technologies and ML. Additionally, it proposes recommendations to ensure the accuracy, reliability, and interpretability of these biomarkers. Methods: This review extracted relevant publications from databases such as PubMed, IEEE, and CTTI. The ML methods employed across the selected publications were then extracted, aggregated, and reviewed. Results: This review synthesized and presented the diverse approaches of 66 publications that address creating mHealth-based biomarkers using ML. The reviewed publications provide a foundation for effective biomarker development and offer recommendations for creating representative, reproducible, and interpretable biomarkers for future clinical trials. Conclusion: mHealth-based and ML-derived biomarkers have great potential for the remote monitoring of CNS disorders. However, further research and standardization of study designs are needed to advance this field. With continued innovation, mHealth-based biomarkers hold promise for improving the monitoring of CNS disorders.

15 pages, 1556 KiB  
Article
SVR-Net: A Sparse Voxelized Recurrent Network for Robust Monocular SLAM with Direct TSDF Mapping
Sensors 2023, 23(8), 3942; https://doi.org/10.3390/s23083942 - 13 Apr 2023
Cited by 2 | Viewed by 1116
Abstract
Simultaneous localization and mapping (SLAM) plays a fundamental role in downstream tasks including navigation and planning. However, monocular visual SLAM faces challenges in robust pose estimation and map construction. This study proposes a monocular SLAM system based on a sparse voxelized recurrent network, SVR-Net. It extracts voxel features from a pair of frames for correlation and recursively matches them to estimate pose and a dense map. The sparse voxelized structure is designed to reduce the memory occupation of voxel features. Meanwhile, gated recurrent units are incorporated to iteratively search for optimal matches on the correlation maps, thereby enhancing the robustness of the system. Additionally, Gauss-Newton updates are embedded in the iterations to impose geometric constraints, ensuring accurate pose estimation. After end-to-end training on ScanNet, SVR-Net is evaluated on TUM-RGBD and successfully estimates poses on all nine scenes, while traditional ORB-SLAM fails on most of them. Furthermore, absolute trajectory error (ATE) results demonstrate that the tracking accuracy is comparable to that of DeepV2D. Unlike most previous monocular SLAM systems, SVR-Net directly estimates dense TSDF maps suitable for downstream tasks with highly efficient data exploitation. This study contributes to the development of robust monocular visual SLAM systems and direct TSDF mapping.

22 pages, 9290 KiB  
Article
Cloud Based Fault Diagnosis by Convolutional Neural Network as Time–Frequency RGB Image Recognition of Industrial Machine Vibration with Internet of Things Connectivity
Sensors 2023, 23(7), 3755; https://doi.org/10.3390/s23073755 - 05 Apr 2023
Cited by 3 | Viewed by 1520
Abstract
The human-centric and resilient European industry called Industry 5.0 requires long machine lifetimes to reduce electronic waste. An appropriate way to handle this problem is to apply a diagnostic system capable of remotely detecting, isolating, and identifying faults. The authors present the use of the HTTP/1.1 protocol for batch processing on a fault diagnosis server, with data sent by a microcontroller HTTP client in JSON format. Moreover, the MQTT protocol was used for stream (micro-batch) processing from the microcontroller client to two fault diagnosis clients. The first fault diagnosis MQTT client uses only frequency-domain data for evaluation. The authors' enhancement to the standard fast Fourier transform (FFT) is the use of sliding discrete Fourier transforms (rSDFT, mSDFT, gSDFT, and oSDFT), which recursively update the spectrum from a new sample in the time domain and previous results in the frequency domain, reducing the computational cost. The second MQTT client approach for fault diagnosis uses the short-time Fourier transform (STFT) to transform IMU 6-DOF sensor data into six spectrograms that are combined into an RGB image. All three-axis accelerometer and three-axis gyroscope data are used to obtain the time-frequency RGB image. The diagnosis of the machine is performed by a trained convolutional neural network suitable for RGB image recognition. The prediction result is returned as a JSON object with the predicted state and the probability of each state. For HTTP, the fault diagnosis result is sent in the response; for MQTT, it is sent to a prediction topic. Both protocols and both proposed approaches are suitable for fault diagnosis based on the mechanical vibration of rotary machines and were tested in a demonstration.
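
The sliding DFT recurrence that replaces the full FFT is compact: each new sample updates the previous spectrum value for a bin with one complex multiply-add, rather than recomputing an N-point transform. A numpy sketch, cross-checked against a direct FFT (signal parameters are invented):

```python
import numpy as np

def sliding_dft(x, N, k):
    """Recursive sliding DFT for bin k: update the previous spectrum value
    with the newest and oldest samples of the length-N window."""
    twiddle = np.exp(2j * np.pi * k / N)
    S = np.sum(x[:N] * np.exp(-2j * np.pi * k * np.arange(N) / N))
    out = [S]
    for n in range(N, len(x)):
        S = (S + x[n] - x[n - N]) * twiddle
        out.append(S)
    return np.array(out)

fs, N, k = 1000, 128, 16            # bin 16 <-> 125 Hz at fs=1 kHz, N=128
t = np.arange(2 * N) / fs
x = np.sin(2 * np.pi * 125 * t)
S = sliding_dft(x, N, k)
# Cross-check the final window against a direct FFT of the same samples.
ref = np.fft.fft(x[-N:])[k]
print(np.allclose(S[-1], ref, atol=1e-6))   # True
```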

14 pages, 1744 KiB  
Article
Cluster Validity Index for Uncertain Data Based on a Probabilistic Distance Measure in Feature Space
Sensors 2023, 23(7), 3708; https://doi.org/10.3390/s23073708 - 03 Apr 2023
Cited by 1 | Viewed by 862
Abstract
Cluster validity indices (CVIs) for evaluating the optimal number of clusters are critical measures in clustering problems. Most CVIs are designed for typical data-type objects called certain data objects. Certain data objects have only a singular value and include no uncertainty, so they are assumed to be information-abundant in the real world. In this study, new CVIs for uncertain data, based on kernel probabilistic distance measures that calculate the distance between two distributions in feature space, are proposed for uncertain clusters with arbitrary shapes, sub-clusters, and noise in objects. By transforming the original uncertain data into kernel spaces, the proposed CVI accurately measures the compactness and separability of a cluster for arbitrary cluster shapes and is robust to noise and outliers in a cluster. The proposed CVI was evaluated on diverse types of simulated and real-life uncertain objects, confirming that the proposed validity indices in feature space outperform the pre-existing ones in the original space.
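
One standard kernel distance between distributions, computable entirely in feature space via the kernel trick, is the maximum mean discrepancy (MMD); the sketch below uses it as a stand-in for the paper's kernel probabilistic distance measures, which may differ:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.5):
    """Squared maximum mean discrepancy between the distributions that
    generated samples X and Y, using only kernel evaluations."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(200, 2))   # uncertain object ~ N(0, I)
B = rng.normal(1.5, 1.0, size=(200, 2))   # uncertain object ~ N(1.5, I)
print(f"MMD^2(A, A'): {mmd2(A, rng.normal(0, 1, (200, 2))):.4f}")  # near 0
print(f"MMD^2(A, B):  {mmd2(A, B):.4f}")                           # larger
```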

16 pages, 7452 KiB  
Article
Industrial Transfer Learning for Multivariate Time Series Segmentation: A Case Study on Hydraulic Pump Testing Cycles
Sensors 2023, 23(7), 3636; https://doi.org/10.3390/s23073636 - 31 Mar 2023
Cited by 2 | Viewed by 1183
Abstract
Industrial data scarcity is one of the largest factors holding back the widespread use of machine learning in manufacturing. To overcome this problem, the concept of transfer learning was developed and has received much attention in recent industrial research. This paper focuses on [...] Read more.
Industrial data scarcity is one of the largest factors holding back the widespread use of machine learning in manufacturing. To overcome this problem, the concept of transfer learning was developed and has received much attention in recent industrial research. This paper focuses on the problem of time series segmentation and presents the first in-depth research on transfer learning for deep learning-based time series segmentation on the industrial use case of end-of-line pump testing. In particular, we investigate whether the performance of deep learning models can be increased by pretraining the network with data from other domains. Three different scenarios are analyzed: source and target data being closely related, source and target data being distantly related, and source and target data being non-related. The results demonstrate that transfer learning can enhance the performance of time series segmentation models with respect to accuracy and training speed. The benefit can be most clearly seen in scenarios where source and training data are closely related and the number of target training data samples is lowest. However, in the scenario of non-related datasets, cases of negative transfer learning were observed as well. Thus, the research emphasizes the potential, but also the challenges, of industrial transfer learning. Full article
(This article belongs to the Topic AI and Data-Driven Advancements in Industry 4.0)
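To make the pretrain-then-fine-tune recipe concrete, here is a minimal PyTorch sketch of per-time-step segmentation with transfer from a source domain. The architecture, layer sizes, and loader names are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class SegNet(nn.Module):
    # Tiny 1D conv network that labels every time step (segmentation).
    def __init__(self, in_channels, n_classes):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, 5, padding=2), nn.ReLU(),
        )
        self.head = nn.Conv1d(32, n_classes, 1)

    def forward(self, x):            # x: (batch, channels, time)
        return self.head(self.body(x))

def fit(model, loader, epochs, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:          # y: (batch, time) step labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Transfer: pretrain on the (related) source domain, then fine-tune on
# the scarce target data, optionally freezing the feature body.
# model = SegNet(in_channels=4, n_classes=3)   # hypothetical sizes
# fit(model, source_loader, epochs=20)
# for p in model.body.parameters():
#     p.requires_grad = False                  # freeze features
# fit(model, target_loader, epochs=5)
```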

18 pages, 592 KiB  
Article
Feature Interaction-Based Reinforcement Learning for Tabular Anomaly Detection
Electronics 2023, 12(6), 1313; https://doi.org/10.3390/electronics12061313 - 09 Mar 2023
Viewed by 965
Abstract
Deep learning-based anomaly detection (DAD) has been a hot research topic in various domains. Yet although tabular data are the most common data type, DAD for tabular data remains under-explored. Due to the scarcity of anomalies in real-world scenarios, deep semi-supervised methods have come to dominate: they build deep learning models that leverage a limited number of labeled anomalies together with large-scale unlabeled data to improve detection. However, existing works share two drawbacks. (1) Most simply treat the unlabeled samples as normal, ignoring label contamination, which is very common in real-world datasets. (2) Very few design models specifically for tabular data rather than migrating models from other domains. Both drawbacks limit model performance. In this work, we propose FIRTAD, a feature interaction-based reinforcement learning method for tabular anomaly detection. FIRTAD incorporates a feature interaction module into a deep reinforcement learning framework: the former models tabular data by learning relationships among features, while the latter effectively exploits the available information and fully explores suspicious anomalies among the unlabeled samples. Extensive experiments on three datasets not only demonstrate its superiority over state-of-the-art methods but also confirm its robustness to anomaly rarity, label contamination, and unknown anomalies. Full article
(This article belongs to the Topic AI and Data-Driven Advancements in Industry 4.0)
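The following is a minimal Python sketch of the two ingredients the abstract names: a factorization-machine-style feature interaction transform and a reward shaped to exploit labeled anomalies while exploring suspicious unlabeled samples. Both are illustrative assumptions about how such a system could look, not FIRTAD's actual design.

```python
import numpy as np

def interact(x):
    """Feature interaction transform: concatenate the raw features of one
    tabular sample with all pairwise products (one simple way to model
    relationships among features)."""
    pairs = np.outer(x, x)[np.triu_indices(len(x), k=1)]
    return np.concatenate([x, pairs])

def reward(is_labeled_anomaly, score):
    # Pay full reward for catching a known (labeled) anomaly; give a small
    # exploration bonus for flagging highly suspicious unlabeled samples,
    # and a small penalty otherwise.
    if is_labeled_anomaly:
        return 1.0
    return 0.1 if score > 0.9 else -0.1

# An agent (e.g. a DQN) trained on interact(x) inputs with this reward
# is pushed to both exploit the few labels and probe the unlabeled pool.
```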

17 pages, 3074 KiB  
Article
Cost-Sensitive YOLOv5 for Detecting Surface Defects of Industrial Products
Sensors 2023, 23(5), 2610; https://doi.org/10.3390/s23052610 - 27 Feb 2023
Cited by 1 | Viewed by 1437
Abstract
Owing to the remarkable development of deep learning algorithms, defect detection techniques based on deep neural networks have been extensively applied in industrial production. Most existing surface defect detection models assign equal cost to classification errors among different defect categories rather than strictly distinguishing them. In practice, however, different errors can produce very different decision risks or classification costs, creating a cost-sensitive issue that is crucial to the manufacturing process. To address this engineering challenge, we propose a novel supervised classification cost-sensitive learning method (SCCS) and apply it to improve YOLOv5, yielding CS-YOLOv5, in which the classification loss function of object detection is reconstructed according to a new cost-sensitive learning criterion explained by a label–cost vector selection method. In this way, the classification risk information from a cost matrix is directly introduced into the detection model and fully exploited during training. As a result, the approach can make low-risk classification decisions for defect detection and supports direct cost-sensitive learning from a cost matrix for detection tasks. On two datasets, a painting surface and a hot-rolled steel strip surface, our CS-YOLOv5 model not only outperforms the original version with respect to cost under different positive classes, coefficients, and weight ratios, but also maintains effective detection performance as measured by mAP and F1 scores. Full article
(This article belongs to the Topic AI and Data-Driven Advancements in Industry 4.0)
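One common way to realize a cost-matrix-driven classification loss is the expected-cost formulation sketched below in PyTorch. It is a stand-in under stated assumptions, not the paper's label–cost vector selection method: each predicted class probability is weighted by the cost of confusing the true class with it.

```python
import torch
import torch.nn.functional as F

def cost_sensitive_loss(logits, targets, cost_matrix):
    """Expected-cost classification loss.
    cost_matrix[i, j] = cost of predicting class j when class i is true
    (zero on the diagonal), so low-risk mistakes are penalized less."""
    probs = F.softmax(logits, dim=1)   # (batch, n_classes)
    costs = cost_matrix[targets]       # (batch, n_classes) cost rows
    return (probs * costs).sum(dim=1).mean()

# Hypothetical 3-class cost matrix: missing defect class 2 is expensive.
# cost = torch.tensor([[0., 1., 5.],
#                      [1., 0., 2.],
#                      [10., 2., 0.]])
# loss = cost_sensitive_loss(logits, y, cost)
```

Swapping such a term in for the standard classification loss of a detector is one way a cost matrix can shape training toward low-risk decisions.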

24 pages, 3593 KiB  
Article
An Optimization Method of Production-Distribution in Multi-Value-Chain
Sensors 2023, 23(4), 2242; https://doi.org/10.3390/s23042242 - 16 Feb 2023
Viewed by 994
Abstract
Value chain collaboration management is an effective means for enterprises to reduce costs, increase efficiency, and enhance competitiveness. Vertical and horizontal collaboration have each received much attention, but current models combining the two handle task assignment and node collaboration constraints weakly across the whole production-distribution process. Therefore, for enterprise dynamic alliances, this paper models the multi-value-chain (MVC) collaboration process to meet the optimization needs of the MVC collaboration network in production-distribution and related aspects. An MVC collaboration network optimization model is then constructed, with the lowest total production-distribution cost as the optimization objective and the delivery cycle and task quantity as constraints. To handle the high-dimensional decision space of the multi-task, multi-production-end, multi-distribution-end, multi-level-inventory production-distribution scenario, a genetic algorithm is used to solve the optimization model, and the difficulty of coordinating MVC collaboration network nodes is resolved by adjusting the constraints among genes. In view of the multi-level structure of the production-distribution scenario, two chromosome coding methods are proposed: staged coding and integrated coding. Moreover, an enhanced roulette genetic algorithm (ERGA) with strengthened elite retention is proposed, based on a simple genetic algorithm (SGA). Comparative experiments with SGA, SEGA (strengthened elitist genetic algorithm), and ERGA, together with analysis of the population evolution process, show that ERGA is superior to SGA and SEGA in both time cost and optimization results thanks to a reasonable combination of coding methods and selection operators. Furthermore, ERGA is more general and can be adapted to solve MVC collaboration network optimization models in different production-distribution environments. Full article
(This article belongs to the Topic AI and Data-Driven Advancements in Industry 4.0)
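The sketch below shows, under stated assumptions, a roulette-selection genetic algorithm with elite retention over an integrated (assignment-vector) coding, in the spirit of the ERGA described above. The operators and parameters are illustrative; the user-supplied `fitness` would typically return the negative total production-distribution cost, with penalties for delivery-cycle and task-quantity violations.

```python
import numpy as np

rng = np.random.default_rng(0)

def erga(fitness, n_genes, n_vals, pop_size=50, gens=200,
         elite=2, p_mut=0.1):
    """GA sketch: roulette-wheel selection plus elite retention.
    A chromosome assigns each task (gene) to one production/distribution
    option; `fitness` returns higher-is-better scores."""
    pop = rng.integers(n_vals, size=(pop_size, n_genes))
    for _ in range(gens):
        fit = np.array([fitness(c) for c in pop])
        order = np.argsort(fit)[::-1]
        elites = pop[order[:elite]].copy()       # carry best unchanged
        p = fit - fit.min() + 1e-9               # roulette weights > 0
        idx = rng.choice(pop_size, size=pop_size - elite, p=p / p.sum())
        children = pop[idx].copy()
        cuts = rng.integers(1, n_genes, size=len(children) // 2)
        for i, c in enumerate(cuts):             # one-point crossover
            a, b = children[2 * i], children[2 * i + 1]
            a[c:], b[c:] = b[c:].copy(), a[c:].copy()
        mut = rng.random(children.shape) < p_mut # random reassignment
        children[mut] = rng.integers(n_vals, size=mut.sum())
        pop = np.vstack([elites, children])
    fit = np.array([fitness(c) for c in pop])
    return pop[fit.argmax()]
```

Keeping the elites outside the roulette draw is what gives the "enhanced elite retention": the best solutions can never be lost to an unlucky selection round.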
