Article

Exploring the ViDiDetect Tool for Automated Defect Detection in Manufacturing with Machine Vision

Faculty of Mechanical Engineering and Computer Science, University of Bielsko-Biala, 43-309 Bielsko-Biala, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 11098; https://doi.org/10.3390/app131911098
Submission received: 14 September 2023 / Revised: 1 October 2023 / Accepted: 7 October 2023 / Published: 9 October 2023
(This article belongs to the Special Issue Application of Machine Vision and Deep Learning Technology)

Abstract

Automated monitoring of cutting tool wear is of paramount importance in the manufacturing industry, as it directly impacts production efficiency and product quality. Traditional manual inspection methods are time-consuming and prone to human error, necessitating the adoption of more advanced techniques. This study explores the application of ViDiDetect, a deep learning-based defect detection solution, in the context of machine vision for assessing cutting tool wear. By capturing high-resolution images of machining tools and analyzing wear patterns, machine vision systems offer a non-contact and non-destructive approach to tool wear assessment, enabling continuous monitoring without disrupting the machining process. In this research, a smart camera and an illuminator were utilized to capture images of a car suspension knuckle’s machined surface, with a focus on detecting burrs, chips, and tool wear. The study also employed a mask to narrow the region of interest and enhance classification accuracy. This investigation demonstrates the potential of machine vision and ViDiDetect in automating cutting tool wear assessment, ultimately enhancing manufacturing processes’ efficiency and product quality. The project is at the implementation stage in one of the automotive production plants located in southern Poland.

1. Introduction

In today’s industrial landscape, optimizing manufacturing processes and improving productivity are of paramount importance. Among the various factors influencing production efficiency, the condition of cutting tools plays a vital role. Cutting tools are subjected to rigorous wear and tear during machining operations, leading to a decline in their performance and adversely affecting the quality of the final product. Timely and accurate assessment of tool wear is crucial for ensuring efficient machining operations, reducing downtime, and maximizing tool life [1].
Traditionally, manual inspection methods have been employed to assess cutting tool wear. These manual methods, which often involve human operators visually inspecting tools, are not only time-consuming but are also highly subjective and prone to human error. Studies have shown that manual tool wear observation can result in inconsistencies and inaccuracies in wear measurements, leading to suboptimal machining processes and reduced product quality. Statistics reveal that manual inspection methods contribute significantly to production inefficiencies. In a survey conducted across multiple manufacturing facilities, it was found that manual tool wear observation accounts for approximately 30% of unscheduled machine downtime due to incorrect assessments and the time needed for inspections. Moreover, the subjectivity of these methods introduces variability in tool wear assessment across different operators, making it challenging to establish consistent maintenance schedules [2,3].
Machine vision refers to the technology that enables machines to visually perceive and interpret their surroundings. By leveraging advanced image processing techniques and artificial intelligence algorithms, machine vision systems can analyze images or video footage of cutting tools and accurately detect and quantify various wear parameters. This technology offers a non-contact and non-destructive approach to tool wear assessment, allowing continuous monitoring of tools without disrupting the machining process [4,5,6].
The application of machine vision systems for cutting tool wear assessment has revolutionized the way manufacturers manage their machining operations. These systems can capture high-resolution images of cutting tools at regular intervals and compare them to reference images of new tools. By analyzing the acquired images, machine vision algorithms can identify and quantify wear patterns such as flank wear, crater wear, chipping, and edge rounding. This information provides valuable insights into the tool’s degradation over time, allowing operators to proactively plan tool changes and reduce unplanned downtime [7].
Furthermore, machine vision systems can be integrated with existing manufacturing systems and databases, enabling real-time monitoring and data analysis. The collected wear data can be used for trend analysis, predictive maintenance, and process optimization. Manufacturers can leverage this information to identify optimal tool life, improve tool selection, and optimize machining parameters, leading to enhanced productivity, reduced costs, and improved product quality [1].
This article presents the application of machine vision systems for cutting tool wear assessment. It explores the underlying principles, the components of a typical machine vision system, and the image processing techniques used for wear detection and quantification. It also discusses the benefits and challenges associated with implementing machine vision systems in industrial settings. By comprehensively examining the current state of the art, the article aims to provide valuable insights into the potential of machine vision systems for improving manufacturing processes and optimizing tool life.
As a case study, an element of the car suspension (the knuckle) was proposed, and the relationship between cutting tool wear and the quality of the machined surface was examined. A station with a conveyor belt was designed and implemented, on which a camera, an illuminator, and a control cabinet equipped with a PC for data acquisition were mounted. An algorithm was developed to analyze the quality of the machined surface; its output drives the decision to replace the cutting tool. The project used the machine learning tools included in the Cognex In-Sight Vision Suite system.
Cutting tool wear is chosen as the focal point in this study for several reasons:
  • Direct Impact on Product Quality: Cutting tool wear directly influences the quality of machined parts. As tools wear down, the surface finish of components can deteriorate, leading to higher rejection rates and increased scrap production;
  • Efficiency and Cost Savings: Excessive tool wear can result in longer machining times, increased energy consumption, and more frequent tool changes. By addressing tool wear proactively, manufacturers can significantly reduce operational costs and improve overall efficiency;
  • Tool Life Optimization: Understanding the wear patterns of cutting tools allows manufacturers to optimize tool life. This means tools are replaced at the right time, preventing premature wear-related failures and maximizing their useful life;
  • Process Consistency: Consistent tool wear assessment ensures that machining processes remain stable over time. This consistency is essential for meeting quality standards and reducing the need for rework.
By focusing on cutting tool wear and leveraging machine vision technology, manufacturers can move towards a proactive and data-driven approach. This advancement promises to revolutionize the way machining operations are managed, empowering manufacturers to achieve higher efficiency, reduced costs, and enhanced product quality in today’s competitive industrial landscape.
The paper is organized as follows. Section 2 describes the state of the art in machine vision systems for cutting tool wear assessment. The proposed solution for the vision system is presented in Section 3. Section 4 includes the experimental results and a comparison of the introduced method with state-of-the-art approaches. The conclusions and future research directions are given in Section 5.

2. State of the Art

A vision system is a technology that combines the capabilities of cameras, software, and algorithms to analyze images and inspect products and production processes. The industrial vision system processes the physical characteristics of the tested objects, which allows it to analyze their geometry, location, or surface condition. The most popular tasks performed by the systems include the following [8,9,10,11]:
  • Reading barcodes;
  • OCR, OCV, i.e., recognizing or verifying text;
  • Quality control;
  • Measurement of elements;
  • Recognizing the shape, presence, and location of objects;
  • Identification by comparing features with a model;
  • Surface control (roughness, scratches, defects).
The field of machine vision for cutting tool wear assessment has witnessed significant advancements in recent years. Researchers and manufacturers have developed innovative techniques and systems that have improved the accuracy, speed, and reliability of wear detection and quantification. In this section, we explore some of the state-of-the-art approaches and technologies in the application of machine vision systems for cutting tool wear assessment [12,13].
High-resolution imaging: One key aspect of accurate tool wear assessment is the ability to capture detailed and high-resolution images of the cutting tools. Advancements in imaging technology, such as high-definition cameras and microscopy systems, have enabled the acquisition of clear and precise images, even at micro-scale levels. These high-resolution images provide a wealth of information for wear analysis and facilitate the detection and quantification of subtle wear features [14,15,16].
Image processing algorithms: Machine vision systems employ sophisticated image processing algorithms to analyze tool images and extract relevant wear information. Various techniques, such as edge detection, thresholding, feature extraction, and pattern recognition, are utilized to identify wear patterns and quantify wear parameters. Advanced machine learning algorithms, including convolutional neural networks (CNNs), have also been applied to improve the accuracy and automation of wear assessment [17,18,19].
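As a simple illustration of these classical steps, the sketch below combines Otsu thresholding, Canny edge detection, and contour analysis to obtain a pixel-level estimate of a wear region. It is not taken from any cited system; the synthetic input image stands in for a real tool photo.

```python
import cv2
import numpy as np

# Synthetic stand-in for a grayscale tool image (replace with a real photo).
img = np.zeros((256, 256), dtype=np.uint8)
cv2.ellipse(img, (128, 128), (60, 25), 0, 0, 360, 200, thickness=-1)  # "wear" blob

# Otsu thresholding separates the bright worn region from the background.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Canny edges outline the wear boundary; contours give a pixel-area estimate.
edges = cv2.Canny(img, 50, 150)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
wear_area_px = sum(cv2.contourArea(c) for c in contours)
print(f"Estimated wear area: {wear_area_px:.0f} px")
```

Converting such a pixel area to mm2 only requires the camera's spatial calibration (mm per pixel), which is why wear thresholds can be expressed in physical units.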
Multispectral and hyperspectral imaging: Traditional grayscale or color imaging may not always capture certain wear characteristics effectively. To overcome this limitation, multispectral and hyperspectral imaging techniques have been explored for cutting tool wear assessment. These techniques involve capturing images across a wide range of wavelengths, allowing for enhanced differentiation of wear features, such as variations in surface texture or color changes due to wear [20,21].
Real-time monitoring: Real-time monitoring of cutting tool wear is crucial for proactive maintenance and avoiding unexpected tool failures. Machine vision systems have been integrated with real-time data acquisition and analysis capabilities, enabling continuous wear assessment during the machining process. By leveraging high-speed image processing techniques and parallel computing, these systems can provide instantaneous feedback on tool condition, allowing operators to make timely decisions and schedule tool changes optimally [22,23].
Integration with manufacturing systems: Machine vision systems for tool wear assessment are increasingly being integrated with other manufacturing systems, such as computer-aided manufacturing (CAM) and computerized numerical control (CNC) machines. This integration enables seamless data exchange, facilitating process optimization and adaptive machining strategies. By integrating machine vision with manufacturing systems, manufacturers can achieve a closed-loop feedback mechanism that optimizes tool life and machining efficiency [24,25].
Automated tool life prediction: Predicting the remaining useful life of cutting tools is a valuable capability for production planning and scheduling. Advanced machine vision systems, combined with predictive analytics and machine learning algorithms, have made significant progress in automated tool life prediction. These systems analyze historical wear data, consider machining parameters, and employ predictive models to estimate the remaining tool life accurately. This information enables manufacturers to proactively schedule tool changes and minimize downtime [26,27,28].
Three-dimensional imaging and surface metrology: Traditional 2D imaging techniques may not capture the complete wear profile of cutting tools. To overcome this limitation, 3D imaging technologies, such as structured light scanning and confocal microscopy, have been employed. These methods enable the acquisition of three-dimensional surface data, facilitating detailed wear analysis and precise measurement of wear parameters, including wear depth, wear volume, and wear rate. Integrating surface metrology techniques with machine vision systems enhances the accuracy and reliability of tool wear assessment [29,30].
Deep learning and anomaly detection: Deep learning algorithms, particularly those based on recurrent neural networks (RNNs) and generative adversarial networks (GANs), have demonstrated promising results in tool wear assessment. These algorithms can learn complex patterns and anomalies from large datasets, enabling the detection of subtle wear features and identifying abnormal wear conditions. By training machine vision systems with extensive wear data, these algorithms can enhance the system’s ability to identify wear patterns accurately and predict wear progression [31,32].
Multi-sensor fusion: To further enhance the accuracy and reliability of tool wear assessment, machine vision systems are being combined with other sensing technologies, such as acoustic emission sensors, vibration sensors, and temperature sensors. By fusing data from multiple sensors, a comprehensive picture of tool condition can be obtained. This multi-sensor approach provides a more holistic understanding of tool wear, enabling more informed decision-making and proactive maintenance strategies [33,34,35].
Despite these advancements, challenges still exist in the application of machine vision systems for cutting tool wear assessment. Variations in lighting conditions, tool geometries, and complex wear patterns can pose difficulties for accurate wear detection. Standardizing image acquisition protocols, developing robust algorithms, and addressing these challenges through ongoing research and development efforts will further enhance the capabilities and reliability of machine vision systems for tool wear assessment.
Table 1 summarizes the state-of-the-art advances in machine vision systems for cutting tool wear assessment.
In conclusion, the state of the art in machine vision systems for cutting tool wear assessment has witnessed remarkable progress. Through advancements in imaging technology, image processing algorithms, real-time monitoring, integration with manufacturing systems, and automated tool life prediction, these systems are revolutionizing the way tool wear is assessed in industrial environments. As research and development continue to push the boundaries of this technology, manufacturers can expect further improvements in productivity, tool life optimization, and overall manufacturing efficiency.

3. Proposed Solution to the Problem

This section describes the proposed solution for the machine vision system for cutting tool wear assessment.

3.1. Design Assumptions

The station equipped with a vision system communicates via digital inputs/outputs with a PLC controller in order to report the need to replace the tool or the occurrence of irregularities detected by the system. The vision system is based on a smart camera capable of running machine learning algorithms. The camera captures high-resolution images and is configured to focus on a specific region of interest: the machined surface of the car suspension knuckle. Images of the machined surface are captured at regular intervals, both before and after tool changes, to assess tool wear over time.
The camera's illuminator was selected so that, together with the lighting in the plant's production hall, it directed a beam of white LED light onto the tested element of the detail (car knuckle clamp) in a way that produced the best contrast between the background and the edge of the tested surface. Because the camera had to be mounted at a distance that allows the details to move freely along the automatic line, a suitably adapted lens had to be selected so that the examined part was clearly visible and always remained in the camera's field of view. The smart camera is configured so that data acquisition is triggered automatically by the signal from an inductive sensor.
The image analysis function uses deep learning-based defect detection. During the training phase, the system is taught what a good (undamaged) part looks like; optionally, it can also be taught what bad (damaged) parts look like for comparison. To improve classification accuracy, a mask is applied to narrow the region of interest to the edge of the machined surface, which helps the system focus on the area critical for tool wear assessment. The defect detection tool then analyzes the images, identifies any deviations from the trained model, and returns a deviation score and a Pass/Fail result for each image. Images with defects (indicating tool wear) are flagged as "bad". The system logs and stores the images and assessment results, using an FTP server for efficient data transfer and storage.
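The inspection cycle described above can be summarized in a short, purely illustrative outline. The actual logic runs inside the Cognex In-Sight environment; every component below is a hypothetical stub standing in for the camera, inductive sensor, trained model, mask, and FTP logger, so the control flow can be read (and run) end to end.

```python
# Illustrative outline of the inspection cycle; all names are hypothetical.
import time

DEVIATION_LIMIT_MM2 = 6.5  # placeholder Fail threshold (cf. Section 4)

class Stub:
    """Placeholder for hardware/software components not modeled here."""
    def wait_for_trigger(self): pass       # inductive sensor fires on a part
    def capture(self): return "image"      # high-resolution acquisition
    def apply(self, image): return image   # mask down to the machined edge
    def score(self, roi): return 4.2       # deviation score in mm^2
    def upload(self, image, passed): pass  # FTP logging for traceability

def inspect_part(camera, sensor, model, mask, ftp) -> bool:
    sensor.wait_for_trigger()
    time.sleep(0.5)                 # 500 ms settling delay (see Section 3.4)
    image = camera.capture()
    roi = mask.apply(image)         # analyze only the machined edge
    deviation = model.score(roi)    # deep learning deviation score
    passed = deviation < DEVIATION_LIMIT_MM2
    ftp.upload(image, passed)       # log image and Pass/Fail result
    return passed

s = Stub()
print("Pass" if inspect_part(s, s, s, s, s) else "Fail")
```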
The proposed machine vision system automates the process of tool wear assessment. It continuously monitors tool conditions without disrupting the manufacturing process, reducing the need for manual inspections. By using advanced image processing techniques and deep learning algorithms, the system can detect subtle wear patterns that might be missed by human inspectors. This leads to more accurate and consistent assessments. The system offers real-time monitoring of tool wear. It provides instantaneous feedback on tool conditions, allowing operators to make timely decisions regarding tool changes. The machine vision system can be integrated with other manufacturing systems, such as CNC machines and PLC controllers. This integration facilitates data exchange, process optimization, and adaptive machining strategies. The system’s ability to predict tool life based on historical wear data and machining parameters enables proactive maintenance. Manufacturers can schedule tool changes to minimize downtime.
In conclusion, the proposed machine vision system for cutting tool wear assessment offers numerous advantages, including automation, accuracy, real-time monitoring, integration, predictive maintenance, efficiency, and data-driven decision-making. It is a valuable tool for optimizing manufacturing processes, reducing costs, and improving product quality in industrial settings.

3.2. Selection of the Camera and Its Components

Choosing the right smart camera is a critical decision for any industrial or commercial application. To make an informed choice, there are several key reference parameters to consider. First and foremost, the scope of the application should be clearly defined. Understanding the specific requirements, such as image quality, processing speed, and environmental conditions, is essential. Next, the type and resolution of the camera’s sensor play a crucial role in capturing the necessary details. Data acquisition speed is another critical factor, especially for applications that require real-time monitoring. Connectivity and compatibility with existing systems are vital to ensure seamless integration. Physical dimensions and mounting options should align with the installation environment. Budget constraints must also be taken into account, as well as any additional features required for the application, such as specialized lighting or onboard processing capabilities. By carefully considering these reference parameters, the In-Sight D900 (ISD905M-61-3709) camera from Cognex was selected for the analysis of the issue discussed in this paper. The camera is shown in Figure 1.
The parameters of the In-Sight D900M camera were read from the technical documentation provided by the manufacturer. Table 2 compares the types of D900 series cameras.
The D905M camera used in this application, according to its specification, is a monochrome camera and has a data acquisition speed of 26 fps.
After the analysis and experimental measurements of the stand, the LEC-59870 lens was selected. The proposed lens is shown in Figure 2.
Lens Parameters:
  • Focal length: 16 mm;
  • Aperture: f/1.4–f/16 (adjustable);
  • Minimum detection distance: 100 mm.
Due to the need to adjust the focus on the site of the vision system assembly, a lens with a variable aperture was selected, which makes it possible to precisely focus depending on the height of the lens above the detail.
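Since lens selection depends on keeping the part in the camera's field of view, a quick pinhole-model estimate can be made from the 16 mm focal length and the sensor parameters listed in Table 2 (2448 × 2048 pixels at a 3.45 µm pitch). The 300 mm working distance in the sketch below is an assumed example value, not a figure from the installation.

```python
# Back-of-envelope field-of-view check, using the pinhole approximation
# FOV = sensor_size * working_distance / focal_length.
FOCAL_LENGTH_MM = 16.0
PIXEL_PITCH_MM = 3.45e-3
SENSOR_W_MM = 2448 * PIXEL_PITCH_MM   # ~8.45 mm
SENSOR_H_MM = 2048 * PIXEL_PITCH_MM   # ~7.07 mm

def fov_mm(working_distance_mm: float) -> tuple[float, float]:
    scale = working_distance_mm / FOCAL_LENGTH_MM
    return SENSOR_W_MM * scale, SENSOR_H_MM * scale

w, h = fov_mm(300.0)
print(f"FOV at 300 mm: {w:.0f} mm x {h:.0f} mm")  # ~158 mm x ~132 mm
```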
Another element of the vision system is the illuminator. Lighting is one of the most important components of a machine vision application. Improper target lighting can lead to loss of information and productivity. The lighting technique includes the light source and its position relative to the part and the camera. Cognex vision systems offer different combinations of external and built-in lighting options, depending on the environment and application.
In the discussed application, in which cutting tool wear is assessed from the appearance of the surface of the tested detail, high-intensity, diffused ODS75 OverDrive Brick Light LED lighting (Figure 3) with a stroboscopic capability was used. The camera and lighting system are coordinated to ensure synchronized operation; hardware triggers and synchronization signals align the camera's exposure time with the pulsing of the light source. The illuminator can work at a distance of 300 mm to 4000 mm and supports dark-field, bright-field, and direct lighting. It has trigger inputs, so a light pulse can be triggered directly by the D900 camera signals.
Technical data of the illuminator are presented in Table 3.
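As a worked example of the strobe limits in Table 3 (pulse width 1 µs–50 ms, duty cycle at most 10%, frequency at most 4 kHz), a short helper can compute the highest admissible strobe rate for a given pulse width. This is a sanity-check sketch, not part of the vendor's software.

```python
# Max strobe frequency: 4 kHz or the duty-cycle limit, whichever is lower.
def max_strobe_frequency_hz(pulse_s: float, duty_max: float = 0.10) -> float:
    assert 1e-6 <= pulse_s <= 50e-3, "pulse width outside 1 us - 50 ms"
    return min(4000.0, duty_max / pulse_s)

# Example: a 100 us pulse may repeat at most 1 kHz (duty-cycle limited).
print(max_strobe_frequency_hz(100e-6))  # 1000.0
```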

3.3. The Tested Detail and the Machined Surface

The tested detail is a car suspension knuckle, which is produced in one of the automotive industry plants located in Bielsko-Biala, Poland. The view of the tested element is shown in Figure 4.
According to the collected information on cutting tool replacement on the machine tools, the cutter responsible for machining the brake calipers (Figure 5) is replaced every 200 cycles.
In order to analyze the workpiece, attention should be paid to the characteristic features of the surface at two moments: before the tool change (Figure 6) and just after the tool change (Figure 7). The remaining chips and the uneven edge after machining are marked in red.
These figures show that, in the final phase of tool life, residual chips appear on the margin of the machined surface; they are not visible on details produced just after tool replacement.
The task of the proposed machine vision system is to evaluate the wear of the cutting tool based on whether chips are detected on the edge of the machined surface. During the tests, photos of the details were linked to the tool change schedules at the plant.
An alternative solution, testing the cutting tool surface directly (also known as tool surface inspection or tool wear measurement), can be a viable approach in certain manufacturing and machining scenarios. However, several challenges and limitations associated with direct tool surface inspection may make inspecting the machined workpiece surface more practical in some cases. Machine vision systems that inspect the machined workpiece surface offer advantages such as continuous monitoring, real-time feedback, non-intrusive inspection, and the ability to assess the impact of tool wear on workpiece quality. These systems can capture high-resolution images of the workpiece and analyze them using advanced image processing techniques and artificial intelligence algorithms to detect and quantify wear patterns, defects, and deviations.
Ultimately, the choice between direct tool surface inspection and workpiece surface inspection depends on the specific needs of the manufacturing process, the accessibility of the cutting tool, the desired level of automation, and the trade-offs between downtime, safety, and data accuracy. In many modern manufacturing environments, machine vision systems are preferred for their ability to provide comprehensive, real-time, and non-intrusive insights into tool wear and workpiece quality.

3.4. Implementation and Testing of the Vision System

A vision system was prepared and assembled, which included the following elements:
  • Electrical cabinet equipped with a PC for data acquisition;
  • D900 camera with an illuminator;
  • Cables for the camera and illuminator;
  • EWON remote access module;
  • Adjustable camera and illuminator mounts.
The measurement system is shown in Figure 8. The camera was set in such a way as to sharpen the edge of the examined surface. The illuminator was set in such a way that the greatest possible contrast was created between the surface and the background. In this way, it was possible for the vision system to detect the presence of chips.
In order to enable data acquisition, an FTP server was configured on the computer located in the electrical cabinet. Windows provides a built-in server configuration option, which makes it possible to transfer files from any client to a designated location on the computer. Using the "WriteImageFTP" function of the In-Sight Vision Suite software, the current image can be saved to the FTP server.
After configuring and testing the operation of the file transfer, the photo was saved when an event occurred (signal from the inductive sensor) with a delay of 500 ms, which was necessary for the details to stabilize in the right position.
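On the camera side, the upload itself is performed by the In-Sight "WriteImageFTP" function. Purely for illustration, the equivalent transfer can be written with Python's standard ftplib; the host address, credentials, and file-naming scheme below are placeholders, not the plant's actual configuration.

```python
# Sketch of an FTP image upload equivalent to what the camera performs.
from ftplib import FTP
from datetime import datetime

def upload_inspection_image(path: str, passed: bool) -> None:
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    label = "good" if passed else "bad"
    with FTP("192.168.0.10") as ftp:           # FTP server on the cabinet PC
        ftp.login("insight", "password")        # placeholder credentials
        with open(path, "rb") as f:
            ftp.storbinary(f"STOR {stamp}_{label}.bmp", f)
```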
After data collection, a set of photos was prepared and used to train the neural network. The set comprised photos of correct details from the initial machining cycles after tool replacement, photos of details with acceptable deviations that were still classified as correct, and photos of bad details.

3.5. Image Processing Algorithm

The ViDiDetect function was used to analyze the image. This function allows one to create a deep learning-based defect detection solution, based on the Red Analyze tool in Unsupervised mode. During the training phase, ViDiDetect can be taught what a good part looks like. Optionally, in comparison, it is possible to teach ViDiDetect the appearance of bad parts. Based on the training labels, the ViDiDetect tool discovers any deviations from the trained model and returns a deviation score and a Pass/Fail result. To use ViDiDetect or similar functions, training data typically need to be set up, and the system is taught the appearance of good and bad parts. Then, the trained model can be applied to analyze new images or video streams for defects or irregularities. These tools prove valuable in automating quality control processes and ensuring the consistency and accuracy of inspections in manufacturing and industrial settings.
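The internals of ViDiDetect and the Red Analyze tool are proprietary. The unsupervised idea can, however, be illustrated with a generic analogue: an autoencoder trained only on good parts, where a high reconstruction error plays the role of the deviation score. The sketch below (PyTorch; all layer sizes and the threshold are illustrative assumptions) is not Cognex's algorithm.

```python
# Generic unsupervised defect-scoring analogue: reconstruction error of a
# convolutional autoencoder trained on good parts only.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def deviation_score(model: nn.Module, image: torch.Tensor) -> float:
    """Mean squared reconstruction error; high values indicate a defect."""
    model.eval()
    with torch.no_grad():
        recon = model(image)
    return torch.mean((image - recon) ** 2).item()

T = 0.01  # illustrative Pass/Fail threshold, learned during training
score = deviation_score(ConvAutoencoder(), torch.rand(1, 1, 128, 128))
print("Fail" if score > T else "Pass")
```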
In order to properly prepare the images for analysis, the ViDiDetect tool extracted the Region of Interest (ROI) from the examined image. For this purpose, the edge-finding tool available in the software was used and configured accordingly.
To locate the tested surface in the photo, it was found that the detail is located most reliably when the tool is trained on a fragment of the outer edge of the surface (Figure 9).
Next, in order to determine the orientation of the detail found in the photo, the FindPatMaxRedLine tool, visible in cell B5, was used; one of its input parameters is the function called in cell B7. The subsequent columns of row 5, starting from column C, contain the X and Y coordinates of the coordinate system attached to the photo, the angle, the scale, and the confidence score of the surface location.
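PatMax RedLine is a proprietary Cognex pattern locator that returns position, angle, scale, and a confidence score. A rough translation-only analogue of the same idea, for readers outside the In-Sight environment, is OpenCV template matching; the synthetic scene below stands in for a live photo and a trained edge fragment.

```python
# Translation-only pattern-location analogue using template matching.
import cv2
import numpy as np

scene = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(scene, (100, 80), (220, 160), 180, thickness=3)  # surface edge
pattern = scene[70:100, 90:130].copy()                         # trained fragment

result = cv2.matchTemplate(scene, pattern, cv2.TM_CCOEFF_NORMED)
_, confidence, _, (x, y) = cv2.minMaxLoc(result)
print(f"Surface found at ({x}, {y}) with confidence {confidence:.2f}")
```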

4. Results and Discussion

In the next stage of work, the ViDiDetect tools were configured, starting from defining the ROI. The results of training the neural network are shown in Figure 10. The dataset used for training and testing the neural network consisted of 500 labeled images of tested surfaces. Each image was classified as good, bad, or unspecified, based on the presence or absence of defects. The dataset was divided into three subsets (a minimal split sketch follows this list):
  • Training Set: This subset, comprising 80% of the data, was used to train the neural network;
  • Validation Set: A portion of the data, typically 10%, was set aside for validation. The validation set was used during training to monitor the model’s performance and prevent overfitting;
  • Testing Set: The remaining data, around 10%, was reserved for testing the trained model’s performance. The model was evaluated on this set to assess its ability to classify images accurately.
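A minimal sketch of this 80/10/10 split, assuming the labeled images are available as (path, label) pairs; scikit-learn's stratified splitting is one common way to implement it reproducibly.

```python
# Stratified 80/10/10 split of the 500 labeled images.
from sklearn.model_selection import train_test_split

def split_dataset(samples, labels, seed=42):
    # First carve out 20% for validation + testing, stratified by label.
    x_train, x_rest, y_train, y_rest = train_test_split(
        samples, labels, test_size=0.2, stratify=labels, random_state=seed)
    # Then split that 20% in half: 10% validation, 10% testing.
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

paths = [f"img_{i}.png" for i in range(500)]              # hypothetical names
labels = ["good"] * 300 + ["bad"] * 150 + ["unspecified"] * 50
(train, _), (val, _), (test, _) = split_dataset(paths, labels)
print(len(train), len(val), len(test))  # 400 50 50
```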
To analyze the results, the GetScore and GetPassed functions were used, returning the area of the detected defect (in mm²) and a Boolean result for the correctness of the detail, respectively.
The Score graph lists all images that were labeled and scored, as well as the two thresholds (T1, T2) for the graph. These thresholds are expressed as the localized defect area in mm². The first threshold is the maximum score an image can have and still be classified as a True Negative (truly lacking a defect); the second threshold is the minimum score an image must have to be classified as a True Positive (truly containing a defect). Scores between the two thresholds are treated as uncertain and are assigned to the Inter column of the confusion matrix, as False Positives or False Negatives depending on their label. The study shows that, out of 41 images, 35 were labeled as good or bad details. Of the 13 details labeled good, eight were classified as good and, in five cases, the network was not sure of the result. Of the 22 details labeled bad, 10 were classified as bad and 12 as uncertain.
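The two-threshold decision described above can be expressed compactly. The T1/T2 values below are the ones reported later in this section for the masked run and stand in for whatever thresholds training produces.

```python
# Two-threshold classification: below T1 -> good, above T2 -> bad,
# in between -> uncertain (counted in the Inter column).
T1, T2 = 6.32, 6.65  # localized defect area thresholds, mm^2

def classify(defect_area_mm2: float) -> str:
    if defect_area_mm2 < T1:
        return "good"
    if defect_area_mm2 > T2:
        return "bad"
    return "unsure"  # assigned to the Inter column

print(classify(5.0), classify(6.5), classify(7.0))  # good unsure bad
```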
The network was intended not to classify details with emulsion residues as bad; initially, however, most details with visible emulsion were considered incorrect by the network. For this reason, a mask was placed on the tested surface in such a way that only the edge of the surface was examined (Figure 11).
The application of the mask excludes parts of the analyzed area from the region of interest. The mask was added manually as part of the image processing and analysis workflow. It precisely defines a region of interest (ROI) within an image, narrowing the focus of the machine vision system to a specific area for further analysis. This allows the deep learning tool to concentrate on a narrower area, which improves classification results. In the learning process, the same sequence of actions was used as in the previous case, but with the appropriate mask applied (a sketch of the masking idea follows).
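The masking idea can be sketched as follows, assuming OpenCV. In the actual system the mask was drawn manually in the In-Sight software; here a contour-based edge band stands in for it, and the synthetic disc stands in for a photo of the machined surface.

```python
# Keep only a thin band along the outer contour of the bright surface.
import cv2
import numpy as np

def edge_band_mask(gray: np.ndarray, band_px: int = 12) -> np.ndarray:
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, contours, -1, 255, thickness=band_px)  # edge band
    return mask

gray = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(gray, (128, 128), 90, 180, thickness=-1)  # synthetic surface
roi = cv2.bitwise_and(gray, gray, mask=edge_band_mask(gray))
```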
The results of the operation of the neural network with the mask applied are shown in Figure 12.
The obtained results show that, out of 22 labeled images, 14 were considered good and 8 bad. Only one image was marked as unspecified. The limit values are 6.32 mm² for good classification and 6.65 mm² for bad classification. The use of an appropriate mask, narrowing the ROI to the edge of the examined surface, clearly improved the classification results and the efficiency of the algorithm.
Bergs et al. [31] reached similar results in their publication, demonstrating that machine vision systems can use pattern recognition and machine learning algorithms to automatically recognize and classify objects in an image. This allows the system to adapt to various conditions and increases the effectiveness of the analysis. Hashmi et al. [32] focused on a variety of machine vision systems for the assessment of machining parameters. The ongoing advancement of machine vision methods for tool condition monitoring is of great importance, driven by the growth of touchless applications and the evolution of computer hardware.

5. Conclusions

Machine vision systems that enable the implementation of neural networks, equipped with deep learning algorithms, allow for recognizing, analyzing, and interpreting images in a way similar to human vision. Through the use of vision systems, neural networks are able to automatically detect objects, recognize faces, classify images, analyze movement, and generate image descriptions. Deep learning algorithms learn from large amounts of data, which allows them to work more precisely and efficiently. Vision systems using these advanced techniques are used in many fields, such as medicine, industry, security, facial recognition, autonomous cars, and many others.
In order to carry out the selected task, the Cognex D900 series camera in the monochrome version was selected for its high resolution, adaptability, illuminator control, and communication capabilities (file transfer via an FTP client). The illuminator and the other elements necessary to carry out the data acquisition process were selected for the vision system. On the basis of the collected data, the analysis and training of the neural network were carried out.
The analysis of the obtained results, aimed at assessing the feasibility of using a vision system to evaluate cutting tool wear, showed that the system can be implemented in production. Implementing such a system and ensuring communication between the vision system and the controller or a superior plant management system can optimize the costs incurred for tool replacement. Using the vision system, it is also possible to fully automate the tool change process. The vision system based on the Cognex smart camera correctly classifies defects on the tested surface resulting from the wear of the cutting blade, which was confirmed in the presented research. The system measures the area of damage and chips on the edges of the machined surfaces and, on this basis, performs the appropriate classification.
The In-Sight Vision Suite software was used to configure the camera parameters and program the logic of its operation. Using the available functions, it is possible to implement a neural network using the ViDiDetect tool. An important element in the network training process is the proper determination of the region of interest and the imposition of a mask that will allow ignoring irrelevant places in the region of interest.
Some of the industries that can derive substantial benefits from this work include manufacturing industries, such as metalworking, automotive, aerospace, and electronics, which rely heavily on cutting tools. Implementing a vision-based tool wear assessment system can lead to increased efficiency, reduced downtime, and cost savings. Industries where product quality is paramount, such as pharmaceuticals, food processing, and consumer electronics, can use this technology to ensure the integrity and consistency of their products, reducing defects and recalls. Heavy machinery and equipment in mining and construction undergo significant wear and tear; machine vision systems can help monitor and optimize the maintenance of critical components, enhancing safety and productivity.
The study, while showcasing the potential of machine vision systems and deep learning algorithms in recognizing and interpreting images for various applications, has some limitations. These limitations include potential data variability, the complexity of defects, and questions about the model’s generalization to unseen scenarios. Additionally, the application of masks, although improving classification results, may present challenges in adapting to diverse surfaces and defect types. Future aspects of research in this domain could involve collecting a more diverse dataset, exploring advanced defect detection techniques, developing adaptive masking strategies, and conducting a thorough cost-benefit analysis for real-world implementation. Furthermore, investigating ways to enhance human–machine collaboration in tool maintenance decisions would contribute to more efficient and reliable operations in industrial settings.

Author Contributions

Conceptualization, J.R. and D.J.; methodology, M.D.; software, J.R., D.J. and M.D.; validation, J.R.; formal analysis, M.D.; investigation, J.R., D.J. and M.D.; writing—original draft preparation, M.D., J.R. and D.J.; writing—review and editing, M.D., D.J. and J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, Q.; Zhang, Y.; Luan, J.; Hu, L. A Machine Vision Development Framework for Product Appearance Quality Inspection. Appl. Sci. 2022, 12, 11565. [Google Scholar] [CrossRef]
  2. Díaz-Saldaña, G.; Osornio-Ríos, R.A.; Zamudio-Ramírez, I.; Cruz-Albarrán, I.A.; Trejo-Hernández, M.; Antonino-Daviu, J.A. Methodology for Tool Wear Detection in CNC Machines Based on Fusion Flux Current of Motor and Image Workpieces. Machines 2023, 11, 480. [Google Scholar] [CrossRef]
  3. Daicu, R.; Oancea, G. Methodology for Measuring the Cutting Inserts Wear. Symmetry 2022, 14, 469. [Google Scholar] [CrossRef]
  4. Zhang, C.; Xu, X.; Fan, C.; Wang, G. Literature Review of Machine Vision in Application Field. E3S Web Conf. 2021, 236, 04027. [Google Scholar] [CrossRef]
  5. Kumar, V.; Kumar, V.; Raza, Z.; Madan, A.K. Machine vision system: A review. J. Emerg. Technol. Innovative Res. 2021, 8, c83–c91. [Google Scholar]
  6. Dhanush, G.; Khatri, N.; Kumar, S.; Kumar-Shukla, P. A comprehensive review of machine vision systems and artificial intelligence algorithms for the detection and harvesting of agricultural produce. Sci. Afr. 2023, 21, e01798. [Google Scholar] [CrossRef]
  7. Colantonio, L.; Equeter, L.; Dehombreux, P.; Ducobu, F. A Systematic Literature Review of Cutting Tool Wear Monitoring in Turning by Using Artificial Intelligence Techniques. Machines 2021, 9, 351. [Google Scholar] [CrossRef]
  8. Zhang, X.; Zhang, J.; Ma, M.; Chen, Z.; Yue, S.; He, T.; Xu, X. A high precision quality inspection system for steel bars based on machine vision. Sensors 2018, 18, 2732. [Google Scholar] [CrossRef]
  9. Martínez, S.S.; Ortega, J.G.; García, J.G.; García, A.S.; Estévez, E.E. An industrial vision system for surface quality inspection of transparent parts. Int. J. Adv. Manuf. Technol. 2013, 68, 1123–1136. [Google Scholar] [CrossRef]
  10. Malamas, E.N.; Petrakis, E.G.M.; Zervakis, M.; Petit, L.; Legat, J.-D. A survey on industrial vision systems, applications and tools. Image Vis. Comput. 2003, 21, 171–188. [Google Scholar] [CrossRef]
  11. Boby, R.A.; Sonakar, P.S.; Singaperumal, M.; Ramamoorthy, B. Identification of defects on highly reflective ring components and analysis using machine vision. Int. J. Adv. Manuf. Technol. 2010, 52, 217–233. [Google Scholar] [CrossRef]
  12. Lee, S.H.; Yang, C.S. A real time object recognition and counting system for smart industrial camera sensor. IEEE Sens. J. 2017, 17, 2516–2523. [Google Scholar] [CrossRef]
  13. Ambhore, N.; Kamble, D.; Chinchanikar, S.; Wayal, V. Tool Condition Monitoring System: A Review. Mater. Today Proc. 2015, 2, 3419–3428. [Google Scholar] [CrossRef]
  14. Thakre, A.; Lad, A.; Mala, K. Measurements of Tool Wear Parameters Using Machine Vision System. Model. Simul. Eng. 2019, 2019, 1876489. [Google Scholar] [CrossRef]
  15. Kurada, S.; Bradley, C. A machine vision system for tool wear assessment. Tribol. Int. 1997, 30, 295–304. [Google Scholar] [CrossRef]
  16. Schmitt, R.; Cai, Y.; Pavim, A. Machine vision system for inspecting flank wear on cutting tools. Int. J. Control. Syst. Instrum. 2012, 3, 31–37. [Google Scholar]
  17. Manoharan, D.S. A smart image processing algorithm for text recognition, information extraction and vocalization for the visually challenged. J. Innov. Image Process. 2019, 1, 31–38. [Google Scholar] [CrossRef]
  18. Monga, V.; Li, V.; Eldar, Y.C. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Process. Mag. 2021, 38, 18–44. [Google Scholar] [CrossRef]
  19. Naranjo-Torres, J.; Mora, M.; Hernández-García, R.; Barrientos, R.J.; Fredes, C.; Valenzuela, A. A Review of Convolutional Neural Network Applied to Fruit Image Processing. Appl. Sci. 2020, 10, 3443. [Google Scholar] [CrossRef]
  20. Navin, M.S.; Agilandeeswari, L. Multispectral and hyperspectral images based land use / land cover change prediction analysis: An extensive review. Multimed. Tools Appl. 2020, 79, 29751–29774. [Google Scholar] [CrossRef]
  21. Vali, A.; Comai, S.; Matteucci, M. Deep Learning for Land Use and Land Cover Classification Based on Hyperspectral and Multispectral Earth Observation Data: A Review. Remote Sens. 2020, 12, 2495. [Google Scholar] [CrossRef]
  22. Wang, D.; Hong, R.; Lin, X. A method for predicting hobbing tool wear based on CNC real-time monitoring data and deep learning. Precis. Eng. 2021, 72, 847–857. [Google Scholar] [CrossRef]
  23. Peng, R.; Pang, H.; Jiang, H.; Hu, Y. Study of Tool Wear Monitoring Using Machine Vision. Autom. Control. Comput. Sci. 2020, 54, 259–270. [Google Scholar] [CrossRef]
  24. Lins, R.G.; Marques de Araujo, P.R.; Corazzim, M. In-process machine vision monitoring of tool wear for Cyber-Physical Production Systems. Robot. Comput. Integr. Manuf. 2020, 61, 101859. [Google Scholar] [CrossRef]
  25. Zhang, X.; Yu, H.; Li, C.; Yu, Z.; Xu, J.; Li, Y.; Yu, H. Study on In-Situ Tool Wear Detection during Micro End Milling Based on Machine Vision. Micromachines 2023, 14, 100. [Google Scholar] [CrossRef]
  26. Wang, J.; Yan, J.; Li, C.; Gao, R.X.; Zhao, R. Deep heterogeneous GRU model for predictive analytics in smart manufacturing: Application to tool wear prediction. Comput. Ind. 2019, 111, 1–14. [Google Scholar] [CrossRef]
  27. Zhuang, K.; Shi, Z.; Sun, Y.; Gao, Z.; Wang, L. Digital Twin-Driven Tool Wear Monitoring and Predicting Method for the Turning Process. Symmetry 2021, 13, 1438. [Google Scholar] [CrossRef]
  28. Mikołajczyk, T.; Nowicki, K.; Bustillo, A.; Yu Pimenov, D. Predicting tool life in turning operations using neural networks and image processing. Mech. Syst. Signal Process. 2018, 104, 503–513. [Google Scholar] [CrossRef]
  29. Hawryluk, M.; Ziemba, J.; Dworzak, Ł. Development of a Method for Tool Wear Analysis Using 3D Scanning. Metrol. Meas. Syst. 2017, 24, 739–757. [Google Scholar] [CrossRef]
  30. Du, Z.; Wu, Z.; Yang, J. 3D measuring and segmentation method for hot heavy forging. Measurement 2016, 85, 43–53. [Google Scholar] [CrossRef]
  31. Bergs, T.; Holst, C.; Gupta, P.; Augspurger, T. Digital image processing with deep learning for automated cutting tool wear detection. Procedia Manuf. 2020, 48, 947–958. [Google Scholar] [CrossRef]
  32. Hashmi, A.W.; Mali, H.S.; Meena, A.; Khilji, I.A.; Hashmi, M.F.; Saffe, S.N.B.M. Machine vision for the measurement of machining parameters: A review. Mater. Today Proc. 2022, 56, 1939–1946. [Google Scholar] [CrossRef]
  33. He, Z.; Shi, T. Multi-sensor Fusion Technology and Machine Learning Methods for Milling Tool Wear Prediction. In Lecture Notes on Data Engineering and Communications Technologies; Springer: Cham, Switzerland, 2021; pp. 602–610. [Google Scholar]
  34. He, Z.; Shi, T.; Xuan, J. Milling tool wear prediction using multi-sensor feature fusion based on stacked sparse autoencoders. Measurement 2022, 190, 110719. [Google Scholar] [CrossRef]
  35. Liu, Y.; Wang, F.; Lv, J.; Wang, X. A Novel Method for Tool Identification and Wear Condition Assessment Based on Multi-Sensor Data. Appl. Sci. 2020, 10, 2746. [Google Scholar] [CrossRef]
  36. Cognex. Smart Camera In-Sight D905M—Product Data. Available online: https://www.cognex.com/products/machine-vision/2d-machine-vision-systems/in-sight-d900 (accessed on 2 September 2023).
  37. Cognex. Available online: https://www.cognex.com/products/machine-vision/2d-machine-vision-systems/in-sight-d900/specifications (accessed on 2 September 2023).
  38. Cognex. LEC-59870 Edmund Optics Lens—Product Data. Available online: https://www.powermotionstore.com/products/LEC-59870 (accessed on 2 September 2023).
  39. Cognex. ODS75 OverDrive™ Brick Light Illuminator—Product Data. Available online: https://smartvisionlights.com/wp-content/uploads/ODS75_Datasheet.pdf (accessed on 2 September 2023).
Figure 1. Smart camera Cognex In-Sight D900M [36].
Figure 2. Cognex LEC-59870 Edmund Optics lens [38].
Figure 3. ODS75 OverDrive™ Brick Light illuminator [39].
Figure 4. Car steering knuckle with machined surfaces for tool wear testing.
Figure 5. Selected machined surface for testing—brake caliper (marked in red).
Figure 6. Detailed photos taken with a service life of 170–200 tool cycles.
Figure 7. Detailed photos taken just after changing the cutting tool.
Figure 8. Vision system installed on the transport line of car knuckles.
Figure 9. Spreadsheet view and details within the found surface (TrainPatMaxRedLine tool in cell B7).
Figure 10. Results of defect recognition on the tested surface—first attempt.
Figure 11. The result of applying a mask to the learned image—visible detection of defects at the edges of the surface.
Figure 12. A selected example of neural network classification results after applying a mask.
Table 1. The state-of-the-art advances in machine vision systems for cutting tool wear assessment.

| Aspect | State-of-the-Art Advances |
|---|---|
| Imaging Technology | High-resolution cameras and microscopy systems; multispectral and hyperspectral imaging |
| Image Processing Algorithms | Edge detection, thresholding, feature extraction; advanced machine learning, including CNNs |
| Real-Time Monitoring | Continuous wear assessment during machining; high-speed image processing and parallel computing |
| Integration with Manufacturing Systems | Integration with CAM and CNC systems; closed-loop feedback for process optimization |
| Automated Tool Life Prediction | Predictive analytics and machine learning models; proactive tool change scheduling |
| 3D Imaging and Surface Metrology | Structured light scanning, confocal microscopy; precise measurement of wear parameters |
| Deep Learning and Anomaly Detection | RNNs and GANs for complex pattern recognition; improved wear pattern identification |
| Multi-Sensor Fusion | Integration with acoustic, vibration, and temperature sensors for comprehensive tool condition analysis |
Table 2. Comparison of available types of In-Sight D900 cameras [37].

| Parameter | D905M | D905C | D902M | D902C |
|---|---|---|---|---|
| Image type | Monochrome | Color | Monochrome | Color |
| Imager type | 2/3 inch CMOS (3.45 µm × 3.45 µm pixels) | 2/3 inch CMOS (3.45 µm × 3.45 µm pixels) | 1/2.3 inch CMOS (3.45 µm × 3.45 µm pixels) | 1/2.3 inch CMOS (3.45 µm × 3.45 µm pixels) |
| Resolution | 5 MP (2448 × 2048 pixels) | 5 MP (2448 × 2048 pixels) | 2.3 MP (1920 × 1200 pixels) | 2.3 MP (1920 × 1200 pixels) |
| Acquisition speed (max) | 26 fps | 16 fps | 51 fps | 34 fps |

Parameters common to all D900 models:
  • Memory: 16 GB non-volatile flash for file storage (unlimited storage via a remote network device); 3 GB SDRAM for processing; additional storage on an 8 GB SD card or a network drive via FTP over a gigabit network;
  • Optics: C-mount, S-mount, and autofocus lenses;
  • Indicator LEDs: SD card status, pass/fail LED with a 360° viewing indicator ring, network LED, and error LED;
  • Lighting: external lights via the light control connector;
  • I/O: Gigabit Ethernet network (10/100/1000 Mbps); 1 dedicated trigger IN, 1 general-purpose IN, 2 general-purpose OUT, 2 bi-directional IN/OUT;
  • Mechanical: industrial M12 connectors (power/IO, Ethernet, external light power/control); dimensions 53.4 mm × 60.5 mm × 121 mm; weight 380 g; IP67 protection with a C-mount lens cover or an integrated light connected;
  • Power: 24 VDC.
Table 3. Technical data of the ODS75 illuminator [39].

| Parameter | Value |
|---|---|
| Electrical input | 24 VDC ± 5% |
| Input current | Peak 3 A draw during strobe |
| Input power | Peak 72 W during strobe |
| PNP trigger | 2.8 mA @ 4 VDC; 8.8 mA @ 12 VDC; 17.6 mA @ 24 VDC |
| NPN trigger | 14.4 mA @ common (0 VDC) |
| Trigger input | PNP > +4 VDC (24 VDC max.) to activate, or NPN ≥ GND < 1 VDC to activate (not both) |
| Strobe duration | Min. 1 µs; max. 50 ms |
| Strobe frequency | Max. 4 kHz or 1/duty cycle as calculated, whichever is less |
| Duty cycle | Max. 10% |
| Red indicator LED | ON = light at rest (LED inactive); OFF = LED/light ready |
| Green indicator LED | ON = power |
| Intensity limit | 270° turn pot; intensity control of 10–100%; turn clockwise to increase intensity |
| Analog intensity | Output adjustable from 10% to 100% of brightness by a 1–10 VDC signal |
| Connection | 5-pin M12 connector |
| Operating temperature | −10 °C to 40 °C (14 °F to 104 °F); RH max. 80%, non-condensing |
| Storage temperature | −20 °C to 70 °C (−4 °F to 158 °F); RH max. 80%, non-condensing |
| IP rating | IP50 |
| Weight | ~155 g |
| Compliances | CE, RoHS, IEC 62471 |
| Warranty | UV LEDs: 2-year warranty; all other LEDs: 10-year warranty. For complete warranty information, visit smartvisionlights.com/warranty (accessed on 10 August 2023) |