Deep-Learning-Based Defect Detection for Smart Manufacturing
A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Fault Diagnosis & Sensors".
Deadline for manuscript submissions: 25 August 2024
Special Issue Editor
Dr. Iñigo Barandiaran
Interests: computer vision; artificial intelligence; image processing and image understanding; simulation and 3D visualization; photography
Special Issue Information
Dear Colleagues,
Nowadays, artificial intelligence (AI) is becoming more widely used in the smart industry field, due to its ability to enhance production process efficiency, lower expenses, and improve production quality. The smart industry trend relies on the advanced integration of information and communication technologies, such as robotics, AI, Big Data, and the Internet of Things (IoT).
Within the smart industry, defect detection in production systems is one of the most popular applications of AI. By utilizing AI algorithms, such as deep learning, smart production systems are capable of analysing images and videos of production processes, detecting deviations, identifying problems in a timely manner, improving product quality, and predicting maintenance needs.
Implementing smart inspection systems presents unique challenges, including the complexity of the components to be inspected, the availability of training data, the design of agile and robust AI algorithms, and the deployment of these systems within real industrial scenarios. This Special Issue aims to highlight novel and cutting-edge research focused on artificial intelligence applied to industry and production processes.
In particular, submitted papers should clearly show novel contributions and innovative applications covering, among others, any of the following topics:
- Machine vision and pattern recognition techniques;
- The use of sensors in intelligent industrial quality control applications;
- Data augmentation techniques for unfavourable scenarios with scarce or imbalanced data;
- Artificial intelligence techniques for surface defect detection and characterization;
- Deployment and integration of intelligent quality control systems using machine vision in real industrial environments.
Dr. Iñigo Barandiaran
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Planned Papers
The list below includes only planned manuscripts; some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
Title: Failure modes classification for rolling element bearings using time-domain transformer-based encoder
Authors: Minh Vu Tri, Motoaki Hiraga, Nanako Miura, Arata Masuda* (*Corresponding author)
Affiliation: Kyoto Institute of Technology
Abstract: Existing Transformer models often require transformed data or extensive computational resources, limiting their practical adoption. We propose a simple yet competitive modification of the Transformer model, integrating a trainable noise reduction method specifically tailored for failure mode classification using vibration data in the time domain. Furthermore, we present the key architectural components and algorithms underlying our model, emphasizing interpretability and trustworthiness. Our model is trained and validated on two benchmark datasets: the IMS dataset (4 failure modes) and the CWRU dataset (4 and 10 failure modes). Notably, our model performs competitively, especially when using an unbalanced test set and a lightweight architecture.
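The core idea of a time-domain Transformer classifier, as the abstract describes it, is to feed raw vibration windows to an attention encoder rather than hand-crafted spectral features. The sketch below is purely illustrative and is not the authors' model: it tokenizes a raw 1-D signal into fixed-length patches and runs a single untrained self-attention layer with randomly initialized weights, just to show the data flow and shapes; patch size, embedding width, and the 4-class head mirror the IMS setting but are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tokenize(signal, patch):
    """Split a raw 1-D vibration signal into fixed-length patches (tokens)."""
    n = len(signal) // patch
    return signal[: n * patch].reshape(n, patch)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over the token sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def classify(signal, patch=64, d=32, n_classes=4):
    X = tokenize(signal, patch)                  # (tokens, patch)
    Win = rng.normal(0, 0.1, (patch, d))         # token embedding
    Wq, Wk, Wv = (rng.normal(0, 0.1, (d, d)) for _ in range(3))
    Wout = rng.normal(0, 0.1, (d, n_classes))    # classification head
    H = self_attention(X @ Win, Wq, Wk, Wv)      # contextualized tokens
    pooled = H.mean(axis=0)                      # mean-pool over time
    return softmax(pooled @ Wout)                # class probabilities

probs = classify(rng.normal(size=2048))          # one raw vibration window
print(probs.shape)
```

In a trained version the weights would of course be learned end-to-end; the point here is only that no FFT or envelope transform sits between the sensor signal and the encoder.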
Title: Optimizing Automated Optical Inspection: An Adaptive Fusion and Semi-Supervised Self-Learning Approach for Elevated Accuracy and Efficiency in Scenarios with Scarce Labeled Data
Authors: Yu-Shu Ni and Jiun-In Guo
Affiliation: Institute of Electronics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Abstract: In the realm of Automatic Optical Inspection (AOI), this study introduces two innovative technical strategies aimed at enhancing the accuracy of object detection models while reducing reliance on extensive annotated datasets. Initially, by establishing a preliminary defect detection workflow and utilizing a dataset collaboratively assembled with a major panel manufacturer in Taiwan, we developed and refined a defect detection model. This process commenced with a preliminary set of 3,579 images spanning 24 categories to construct the model. Subsequently, the model was evaluated on 12,000 ambiguously labeled images to assess its initial performance and verify the accuracy of the annotations. Through data augmentation, annotation refinement, and defect classification techniques, we enhanced the model's accuracy and generalizability, thereby expanding the defect dataset on unlabelled datasets and retraining the model. Moreover, addressing the self-learning needs of AOI inspection, we introduced an Adaptive-Fused Semi-Supervised Self-learning (AFSL) method. This approach, rooted in semi-supervised learning and tailored for Anchor-based object detection models, facilitates the model's self-learning and continuous optimization through a minimal set of labeled data and a larger volume of unlabeled data. The proposed AFSL technique, with its modules of Bounding Box Assigner, Adaptive Training Scheduler, and Data Allocator, enables dynamic threshold adjustments, balanced training between labeled and unlabeled data, and efficient data allocation, significantly boosting the model's accuracy on AOI datasets. This methodology not only elevates the precision and efficiency of AOI object detection but also provides an effective approach for achieving efficient model training with limited labeled data.
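A central ingredient of semi-supervised self-learning pipelines like the one the abstract outlines is pseudo-labeling with dynamic, per-class thresholds, so that rare defect classes are not starved of pseudo-labels by a single fixed cutoff. The snippet below is a hedged, minimal sketch of that general idea only; it does not implement the authors' AFSL modules (Bounding Box Assigner, Adaptive Training Scheduler, Data Allocator), and the scaling rule and `base_tau` value are assumptions.

```python
import numpy as np

def pseudo_label(confidences, labels, base_tau=0.9):
    """Select pseudo-labels with a per-class adaptive threshold.

    Classes on which the model is currently less confident get a lower
    bar, so minority defect types still contribute pseudo-labels.
    """
    confidences = np.asarray(confidences, dtype=float)
    labels = np.asarray(labels)
    keep = np.zeros(len(labels), dtype=bool)
    for c in np.unique(labels):
        mask = labels == c
        # scale the base threshold by the class's mean confidence
        tau_c = base_tau * confidences[mask].mean()
        keep |= mask & (confidences >= tau_c)
    return keep

# toy detections on unlabeled images: (predicted class, confidence)
conf = [0.95, 0.40, 0.85, 0.30, 0.70]
cls  = [0,    0,    1,    1,    1]
selected = pseudo_label(conf, cls)
print(selected)  # → [ True False  True False  True]
```

The selected detections would then be mixed with the labeled set for the next training round, with the thresholds recomputed as the model improves.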
Title: End-to-End Fast Defect Detection in HBMs with Semi-Supervised and Incremental Learning
Authors: Richard Chang, Jie Wang, Ramanpreet Singh Pahwa
Affiliation: Institute for Infocomm Research, A*STAR
Abstract: Deep learning and AI methods can improve defect detection accuracy and reduce the time and manpower required for a high-quality inspection process. Semi-supervised learning models have recently been applied to computer vision tasks, significantly increasing models' capabilities on diverse data with high accuracy. In this paper, we leverage recent advances in deep learning and semi-supervised models for segmentation in defect detection. We propose an end-to-end pipeline comprising detection, segmentation, and metrology tasks with a new strategy to analyse the entire 3D scan in one pass instead of individual memory and logic bumps. We demonstrate the proposed work's capabilities by showing a significant reduction in the total processing time and in the resources needed for defect detection, with higher accuracy and efficiency. Our extensive experiments showed 50% faster processing and a 10% accuracy improvement compared with our previous state-of-the-art approaches.
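The "one pass over the whole 3D scan" strategy in the abstract above can be contrasted with per-bump cropping by a simple baseline: label all defect voxels in the full volume once, then derive per-defect metrology from the labels. The sketch below is an assumption-laden toy, not the authors' pipeline: it uses a plain 6-connected BFS labeling over a small synthetic binary volume, with voxel count standing in for real metrology.

```python
import numpy as np
from collections import deque

def label_volume(vol):
    """Label 6-connected defect components in a whole binary 3-D scan in one pass."""
    labels = np.zeros(vol.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(vol)):
        if labels[start]:
            continue
        current += 1
        q = deque([start])
        labels[start] = current
        while q:  # breadth-first flood fill of one defect
            z, y, x = q.popleft()
            for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                               (0,-1,0), (0,0,1), (0,0,-1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                        and vol[n] and not labels[n]:
                    labels[n] = current
                    q.append(n)
    return labels, current

# toy scan: two separated defects in a 4x4x4 volume
vol = np.zeros((4, 4, 4), dtype=bool)
vol[0, 0, 0:2] = True        # defect A: 2 voxels
vol[3, 3, 3] = True          # defect B: 1 voxel
labels, n = label_volume(vol)
sizes = [int((labels == k).sum()) for k in range(1, n + 1)]  # toy metrology
print(n, sizes)
```

Processing the volume once avoids repeatedly loading overlapping crops for each bump, which is one plausible source of the runtime savings the abstract reports.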