Article

Enhance the Injection Molding Quality Prediction with Artificial Intelligence to Reach Zero-Defect Manufacturing

1 School of Technology and Management, Polytechnic of Leiria, 2411-901 Leiria, Portugal
2 Muvu Technologies, 1050-052 Lisboa, Portugal
3 Institut de Robòtica i Informàtica Industrial, Universitat Politècnica de Catalunya BarcelonaTech (UPC), 08034 Barcelona, Spain
4 Vipex, 2430-153 Marinha Grande, Portugal
5 Department of Electrical and Computer Engineering, NOVA School of Science and Technology, NOVA University of Lisbon, 2829-516 Caparica, Portugal
6 UNINOVA Centre of Technology and Systems (CTS), FCT Campus, Monte de Caparica, 2829-516 Caparica, Portugal
7 MEtRICs Research Center, School of Engineering, University of Minho, 4800-058 Guimarães, Portugal
* Author to whom correspondence should be addressed.
Processes 2023, 11(1), 62; https://doi.org/10.3390/pr11010062
Submission received: 6 December 2022 / Revised: 20 December 2022 / Accepted: 22 December 2022 / Published: 27 December 2022
(This article belongs to the Special Issue Digitalized Industrial Production Systems and Industry 4.0, Volume II)

Abstract
With the spread of the Industry 4.0 concept, implementing Artificial Intelligence approaches on the shop floor that allow companies to increase their competitiveness in the market is starting to be prioritized. Due to the complexity of industrial processes, including a real-time Quality Prediction methodology saves companies considerable costs. This paper presents the whole process of introducing Artificial Intelligence into plastic injection molding processes at a company in Portugal. All the implementations and methodologies used are presented, from data collection to real-time classification, including Data Augmentation and Human-in-the-Loop labeling, among others. This approach also predicts and alerts with regard to process quality loss. This leads to a reduction in the production of non-compliant parts, which increases productivity and reduces costs and environmental footprint. To assess the applicability of this system, it was tested on different injection molding processes (traditional and stretch and blow) and with different materials and products. The results show that, with the approach developed and presented here, it was possible to achieve an increase in Overall Equipment Effectiveness (OEE) of up to 12%, a reduction in process downtime of up to 9% and a significant reduction in the number of non-conforming parts produced. This improvement in key performance indicators proves the potential of this solution.

1. Introduction

Nowadays, with the increase in competitiveness, it is necessary to provide high-quality standards to customers to distinguish a company in the market, which means the lowest possible number of errors in production processes. This approach allows companies to become more productive by decreasing production costs and company downtime, among other things. Artificial Intelligence is becoming increasingly widespread because its use to increase the efficiency of existing systems has demonstrated significant improvements [1,2].
The global plastics market was valued at $579.7 billion in 2020 and $590 billion in 2021, and is expected to expand at a compound annual growth rate of 3.7% from 2022 to 2030 [3]. To obtain an injected part of high quality, it is necessary to use the best machine and process parameters [4,5], which are not always easy to define and are most often obtained through trial and error by injection technicians based on their field experience [6].
Injection Molding (IM) is one of the most commonly used processes to produce large-volume polymeric parts. The main focus of this process is to produce repeatable parts with a similar appearance. This manufacturing process is dynamic due to the constant variations of its parameters. The possibility of producing defective parts exists in every process and this occurrence generates unpredictable costs. In plastic injection molding, the most recurrent defects are unfilled parts, burrs, burn marks, short shots, warpage and flow lines [7,8]. To prevent this potential negative event, it is advantageous to use a method capable of monitoring and predicting the production of defective parts through variations in process features. A robust and reliable classification system capable of alerting to most defective parts produced is the ideal method for dealing with production defects. To do this, it is mandatory to use real-time process data, and if the machines do not have factory-enabled protocols (for example, Euromap 77 over OPC-UA), it is necessary to adapt them [9,10].
In the literature, there are several approaches to defect classification. The automatic classification of parts will change how the plastic industry works because it will move from reactive to preventive action. In the past, a problem was expected to occur before it could be identified and corrected, which implied the production of non-conforming parts that only wasted material and production time. Then, the machine was stopped and the problem was solved. With this new approach, the goal is to detect the problem even before it occurs, so that technicians can intervene in the process before non-conforming parts are produced [11,12]. The approach used in the past caused problems and losses not only in terms of the production of conforming parts, but also in terms of logistics, quality and human resource allocation. On the other hand, adopting preventive action reduces the environmental footprint of this type of industry, because only a fraction of the material can be reground and reused, while much of the remainder is not recyclable. Thus, reducing the number of non-conforming parts produced is a step toward greater sustainability.
Our proposal to solve this problem is to design an automatic procedure that rapidly identifies defective parts based on machine parameters and lets the technical teams know if the injection process is experiencing a loss of quality over time. As mentioned, much work has been done in this area, but the focus of the work presented in this paper is to create an intelligent system that goes from data collection to real-time classification. In order to make this system as generic as possible, a European platform with AI tools to take production processes towards zero defects was used: the Zero-Defect Manufacturing Platform (ZDMP) (https://cordis.europa.eu/project/id/825631, accessed on 23 May 2022). The ZDMP allows the integration of these AI concepts on the shop floor. The platform was developed under the Horizon 2020 research and innovation programme of the European Union by several institutions and companies. The work presented in this article was developed under the subproject "RAIZED - Approach for smooth integration of advanced Zero-Defect Manufacturing", carried out by two Portuguese companies, Muvu Technologies and Vipex.
The entire process, from dataset harmonization to the machine learning algorithm responsible for predicting quality, was developed through this ZDMP platform, integrating other external machine learning tools. As the ZDMP platform does not provide data collection, it was necessary to use a data collection platform (RAILES) developed by one of the consortium companies, Muvu Technologies. RAILES is a smart manufacturing execution system that monitors production in real time. Through RAILES it is possible to extract the process data from a plastic injection molding machine.
From the process data collected, three different datasets were obtained in a real scenario at Vipex, a plastic injection molding company in Portugal with multiple production processes. Two traditional injection processes and a stretch and blow injection process were chosen to understand whether there are correlations in the variables to be monitored. Two of the machines communicate through the OPC-UA protocol (Negri Bossi ST Cambio 400 and Nissei ASB 12M), while one communicates via OPC-DA (Tederic DH850) and required the introduction of a wrapper to convert data from OPC-DA to OPC-UA. This conversion was carried out to ensure that all collection is carried out the same way, thus standardizing the process.
To understand the effectiveness of the implemented project, key performance indicators (KPIs) were applied. KPIs are measures designed to evaluate the performance of a newly implemented strategy. In general, after introducing the system developed and presented in this article, an improvement in the OEE of the different processes was obtained, as well as a reduction in downtime. The magnitude of these improvements varies depending on the state of maturity that the process already has on the factory floor in terms of rejection rate.
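To make the OEE figures reported in this article concrete, the following sketch computes OEE using its standard definition (the product of availability, performance and quality); the before/after values are hypothetical illustrations, not the authors' measurements.

```python
# Illustrative OEE calculation (standard three-factor definition;
# not the authors' exact KPI computation).
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the product of its three factors."""
    return availability * performance * quality

# Hypothetical figures for one injection molding machine, before and
# after introducing real-time quality prediction.
before = oee(availability=0.85, performance=0.90, quality=0.92)
after = oee(availability=0.90, performance=0.92, quality=0.97)
improvement = after - before
print(f"OEE before: {before:.3f}, after: {after:.3f}, gain: {improvement:.3f}")
```

Because the three factors are multiplied, even a modest improvement in the quality factor (fewer non-conforming parts) propagates directly into the overall OEE.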

2. Related Work

This section presents the platform and the different Quality Prediction methods in the literature related to this work.

2.1. Quality Prediction

There are many process parameters that affect the quality of injection molded parts and they are linked to both the properties of the plastic material and the process parameters [13]. Tripathi et al. [14] explain that the temperature, the maximum pressure and the cushion are variables that need to be considered when identifying the variables to be monitored. The variables with the most significant influence on the injection process, according to Saleh et al. [15], include melt temperature, plastification time, maximum pressure, mold wall temperature and injection time. According to Jung [11], machine learning approaches frequently choose the temperature, injection time and cycle time as important factors. Bernardete [8] states that cycle time, plastification time, injection time, barrel temperature before the nozzle and cushion must be monitored.
Based on the insights of the above-mentioned authors, some of the authors of this paper have done an extensive study on feature selection and on which variables to monitor in each of the processes, which can be found in [16]. In order to carry out this study on feature selection, methods were used that had already been considered in several works on plastic injection in the literature. Algorithms from the three main families of feature selection methods were compared: filter (Info Gain [17,18] and Chi-Squared [19]), wrapper (Forward Feature Selection [20]) and embedded (Random Forest Importance [21]). Additionally, a hybrid approach was also evaluated that takes into account not only the supervised contribution but also an unsupervised method, in an effort to evaluate each feature without the influence of the target label. For this, Principal Component Analysis was added as the unsupervised component [22]. A summary of the results of this work is presented later in this document.
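The three families of feature selection methods mentioned above can be sketched side by side with scikit-learn; the synthetic data below stands in for the actual process parameters, and the specific estimators are illustrative choices rather than the exact configurations of [16].

```python
# Sketch of the three feature selection families (filter, wrapper, embedded)
# plus the unsupervised PCA contribution, on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import (SequentialFeatureSelector, chi2,
                                       mutual_info_classif)
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=0)

# Filter methods: score each feature independently of any downstream model.
info_gain = mutual_info_classif(X, y, random_state=0)
chi_scores, _ = chi2(np.abs(X), y)  # chi2 requires non-negative inputs

# Wrapper method: forward selection driven by a classifier's performance.
sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=3, direction="forward")
sfs.fit(X, y)

# Embedded method: importances obtained as a by-product of model training.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Unsupervised contribution: PCA ranks directions by explained variance,
# without looking at the quality label.
pca = PCA(n_components=3).fit(X)

print("Info Gain top feature:", int(np.argmax(info_gain)))
print("Forward-selected features:", np.flatnonzero(sfs.get_support()))
print("RF top feature:", int(np.argmax(rf.feature_importances_)))
print("PCA explained variance:", pca.explained_variance_ratio_.round(2))
```

Filter scores are cheap but ignore feature interactions; wrappers capture interactions at a higher computational cost; embedded importances fall in between, which is why comparing all three families is worthwhile.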
The use of injection molding process parameters to classify the quality of parts appears in the literature in different approaches [5,8,23,24]. The work presented in this paper compares different methods and processes and their respective performances in real injection molding processes. There are other works that combine parameters with other techniques, such as computer vision images, rather than using parameters alone [25]. This indicates that other methodologies of various kinds might have been taken into account to support the presented study. Recent research has begun to incorporate ensemble methods [26], which combine many machine learning approaches, as well as deep learning techniques such as auto-encoders, to increase the effectiveness of classification/regression [11,27].
In [28], a model based on the Support Vector Machine (SVM) regression technique is built from process data, and Schreiber [29] demonstrated the effectiveness of an injection molding process model based on Artificial Neural Networks. Numerous techniques, including expert systems and systems based on mathematical models, have been developed for online diagnosis and fault detection, as noted in [30]. Both require that the system be well known, which is not always the case in this type of operation, particularly when starting a new product in production [31,32]. Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) require little or no prior knowledge of the system [33,34]. These were some of the motivating factors for us to test these classifiers, and there are additional research projects in this field that make use of these machine learning methods [8,30]. Regarding the accuracy values obtained, although at first glance they seem quite positive, there are studies that obtain comparable values (over 90%) [5,30].
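A minimal sketch of the two classifier families discussed above, trained on synthetic data standing in for injection molding process parameters; the hyperparameters are illustrative defaults, not those used in the cited studies.

```python
# Side-by-side sketch of an SVM and an ANN classifier on synthetic
# stand-in data for injection molding process parameters.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Both models benefit from feature scaling, so a scaler is placed first.
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(16,),
                                       max_iter=2000, random_state=1)),
}

scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = model.score(X_te, y_te)
    print(name, "accuracy:", round(scores[name], 3))
```

Neither model needs an explicit mathematical description of the injection process, which is the property highlighted above when a new product enters production.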

2.2. Zero-Defect Manufacturing (ZDM)

The development of new technologies and techniques that exceed the capabilities of traditional methods and technologies has had a significant impact on the industrial field of quality control and assurance, which is essential to all production processes [35,36]. Due to the increasing complexity of production processes, the number of intrinsic connections between the individual processes is continuously rising. In addition, the increasing product customization leads to a significant increase in process variance. Instead of a pure assessment based on process data and final product inspection, existing information about intermediate products and the individual assembly can also be considered [37,38]. With the effective use of the data available today and with the increasing availability of production and user-specific information on each manufactured product, the focus is once again shifting to the product. This requires the traceability of the individual product data from production to the customer so that the data points can be meaningfully linked and analyzed.
Because of the costs associated with defective goods, ZDM has become a natural target for manufacturers seeking to minimize or eliminate all manufacturing-related problems. ZDM is a holistic approach that combines several tools, including product life cycle assessment, diagnostic and prognostic techniques, production control improvements, quality control and inspection techniques, to achieve sustainable manufacturing. These tools enable process adjustments through rapid forward and/or feedback control. The problem of lack of standardization in ZDM is already being addressed in multiple initiatives. Although vocabulary is not the only requirement for effective communication, as context has an impact on meaning, each domain needs its own vocabulary, and the lack of a standard vocabulary for ZDM was identified within the scope of ZDMP. Under a CEN-CENELEC Workshop Agreement (CWA) to standardize ZDM vocabulary [39], a significant effort has been made to help current and future researchers in the field conduct their research with standardized terminology.
ZDM emphasizes defect prevention and prediction [39]. Predictive quality is the optimization of product and process-related quality through data-driven predictions as a basis for decision-making and action [40]. The target variables are interdependent, so high process quality is a necessary but not sufficient condition for high product quality. Therefore, the value of predictive quality is not in the data itself but in the insights, as they directly inform the decision. The shift in focus from reactive to predictive quality marks the beginning of a new era in quality improvement, which is currently taking place. This shift should not be seen as a replacement but rather as a complement to existing methods [39].

2.3. ZDMP Platform

The ZDMP platform focuses on monitoring pre-, during- and post-production quality problems. The main goal of ZDMP is to provide an extendable platform that allows factories, with a high level of interoperability, to reach the concept of Zero-Defect Manufacturing. In the particular context of this work, ZDMP connects shop floor systems to the benefits of Artificial Intelligence algorithms. These benefits include the prediction of part quality and process quality. This is possible due to the tools provided by the platform, which allow data acquisition, data pre-processing, design of machine learning classifiers adapted to the gathered data, deployment of the functional models at runtime and, subsequently, access to the models' output from third-party applications.
ZDMP has 45 modules and developers can choose the components most useful for their projects. In this case, four components were necessary, namely:
  • Data Harmonization Designer (Design the Data Pipelines);
  • Data Harmonization Run-time (Run the Data Pipelines in Real-Time);
  • AI Analytics Run-Time (Run the Quality Classifiers in Real-Time);
  • Service and Message Bus (Communication between Different Modules and RAILES).
This group of components working together receives raw data from the sensors, transforms the data into a format that can be used for training and prediction, trains the predictive model and uses it to make predictions in real-time.
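The flow these four components implement together can be sketched as a single pipeline object; the class and function names below are illustrative stand-ins, not actual ZDMP or RAILES APIs, and the threshold rule is a toy model.

```python
# Conceptual sketch of the runtime flow: raw sensor data is harmonized,
# fed to a model, and the prediction is published to subscribers.
# Names are illustrative, not actual ZDMP/RAILES APIs.
from typing import Callable, List


class QualityPipeline:
    """Raw data -> harmonized feature vector -> real-time prediction."""

    def __init__(self, transform: Callable, model: Callable):
        self.transform = transform   # Data Harmonization Run-time role
        self.model = model           # AI Analytics Run-Time role
        self.subscribers: List[Callable] = []  # Service and Message Bus role

    def subscribe(self, callback: Callable) -> None:
        self.subscribers.append(callback)

    def on_raw_data(self, raw: dict) -> None:
        features = self.transform(raw)
        prediction = self.model(features)
        for cb in self.subscribers:  # publish to consumers (e.g., the MES)
            cb(prediction)


# Toy usage: flag a cycle when its maximum pressure exceeds a threshold.
pipeline = QualityPipeline(
    transform=lambda raw: {"max_pressure": max(raw["pressure_series"])},
    model=lambda f: "NOK" if f["max_pressure"] > 180 else "OK",
)
results = []
pipeline.subscribe(results.append)
pipeline.on_raw_data({"pressure_series": [120, 175, 190, 160]})  # max 190 -> NOK
pipeline.on_raw_data({"pressure_series": [120, 140, 150]})       # max 150 -> OK
print(results)
```

The publish/subscribe pattern at the end mirrors the role of the Service and Message Bus: the prediction producer does not need to know which third-party applications consume its output.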
Due to the breadth and diversity of the ZDMP modules, in the literature there are several studies that use the ZDMP platform in many different areas, namely, job-shop scheduling [41] and multi-tenant data management [42], among other projects associated with Industry 4.0 and digitalization.

3. Materials and Methods

This section presents the developments carried out during the work. Initially, a generic architecture to achieve Zero-Defect Manufacturing through Quality Prediction is presented. This architecture aims not only to deliver these functionalities to the proposed case study but to be generic in order to be replicable and scalable. After the description of the generic architecture, the necessary steps are presented to develop the prediction models that will give the system the ability to predict possible problems related to the quality of the products before they actually happen. This section also describes how these models are then used at runtime; specifically, how they are used to predict defects during production.

3.1. System Architecture

Quality issues in plastic injection molding are problems that can be found in many different factories and products. Whether in the production of final products or of plastic parts used to produce other products, it is important to present solutions that are scalable and replicable. To this end, an architecture that combines the RAILES MES and some components from the ZDMP platform is designed and proposed. RAILES is a smart MES able to monitor manufacturing systems in real time. RAILES fits the list of next-generation software developed for industry to help manufacturers make better decisions based on real information gathered from the shop floor. The proposed architecture integrates RAILES with the ZDMP ecosystem, expanding the functionalities of the existing system with those offered by the ZDMP platform. Figure 1 shows the proposed high-level architecture and the main components to be used in both platforms. This architecture guarantees the integrity of the system and its scalability to be applied in different scenarios.
As can be observed, the system consists of three large groups: the shop floor, where machines operate and where changes and improvements are needed; the RAILES ecosystem, responsible for data extraction and all production management through the MES; and the ZDMP ecosystem, integrated to extend what is offered by RAILES. All raw data generated by the machines and the process are sent to the RAILES IoT Edge. For a device to send this raw data, it must have an integrated RAILES IoT Connector, an adaptor or the capacity to send the data in a protocol known to the RAILES IoT Edge. When the RAILES IoT Edge receives the data, it performs a preliminary analysis so that the data can later be used. This layer is composed of a device or network of devices responsible for giving context to the data, completing the collected data and so forth. When the data are pre-processed, they are sent to the cloud, where the RAILES cloud environment stores them in a PostgreSQL database and makes them available to the more advanced features of the RAILES MES. At this level, the system is already capable of planning and controlling production, monitoring the system and managing maintenance tasks. Within the ZDMP ecosystem, two large groups constitute the two different phases of platform execution. In the initial phase, during the development and setup of the platform, the design versions of the two components are used. The Data Harmonization component is used during the design phase to create the pipelines that allow data extraction. The AI-Analytics component is used to develop the classification models, which are able to generate some preliminary and more generic forecasts for the injection molding process. During the runtime phase, that is, during production, the Data Harmonization component is responsible for receiving the data made available by the RAILES ecosystem.
The AI-Analytics component will use these data to generate predictions based on the models created in the design phase. The predictions generated by the ZDMP ecosystem (AI-Analytics) are then sent and made available to the RAILES ecosystem. Hence, these predictions can be used to reduce problems related to quality through alerts, changes to production planning and changes to the maintenance schedule, among other things.

3.2. Data

This subsection presents the procedures related to the treatment of the data used in the study presented in this article.

3.2.1. Data Collection

In the context of this research, data collection is the process of obtaining information from different sources, capturing patterns of past events and storing them in the proper format for future use. This collection and storage are done with the objective of using this data to build predictive models using machine learning algorithms.
Data are basically unorganized statistical facts collected for specific purposes. Due to the unorganized nature of the collected data, it is necessary to make some adjustments and transform them so that they become more organized and more accessible for the algorithms to process.
There are numerous methods of communication and data collection from machines but, in order to collect data in a standardized way, a communication protocol known and used within Industry 4.0 for collecting data from injection molding machines, EUROMAP 77, was used. This protocol is an OPC UA interface for plastic and rubber machines to exchange data between injection molding machines and the manufacturing execution system (MES). Newer machines support this factory-enabled protocol, while those that do not have other protocols, such as OPC-DA. One of the machines used in this study used OPC-DA and it was necessary to use a wrapper to convert the data availability format from OPC-DA to OPC-UA.
This study includes three different injection molding machine models: Negri Bossi 400 (OPC UA), Nissei ASB 12M (OPC UA) and Tederic DH 850 (OPC DA). Negri Bossi and Tederic work with traditional injection molding and Nissei ASB works with stretch and blow injection molding. This diversification allowed us to simulate as many scenarios as possible during the project implementation.
Having access to data alone does not add value; it is necessary to have a platform to store the data with the information needed to identify them and a way to communicate these data to different platforms. The machine learning algorithms used to train and obtain the predictive models are supervised algorithms, which means they require labeled datasets. With this type of algorithm, to obtain a good predictive model, it is necessary to know the correct outputs in the historical data that need to be predicted or classified. To do this, RAILES was used.
In order to have a good amount of historical data where the output was known, measurements were taken over a period of time while failures were deliberately induced in the production process. In this way, the historical data contained both periods where everything was good and periods where failures occurred, with the outcome registered in each case, thereby labeling the datasets. For this labeling, RAILES provided a useful tool: when a failure in production occurred, the digital platform allowed the factory workers to enter the type of failure and the cycle at which it occurred, as can be observed in Figure 2.
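The labeling step described above amounts to joining operator-entered failure records with the per-cycle process data; the sketch below illustrates this with pandas, where the column names and values are illustrative, not RAILES's actual schema.

```python
# Sketch of attaching operator-entered failure labels to per-cycle records
# (column names and values are illustrative, not the RAILES schema).
import pandas as pd

# Per-cycle process data extracted from the machine.
cycles = pd.DataFrame({
    "cycle": [101, 102, 103, 104, 105],
    "max_pressure": [150.2, 151.0, 149.8, 171.3, 150.5],
})

# Failures entered by factory workers: failure type and the cycle it occurred.
failures = pd.DataFrame({"cycle": [104], "failure_type": ["short shot"]})

# Left join: cycles without a failure record keep NaN in failure_type.
labeled = cycles.merge(failures, on="cycle", how="left")
labeled["label"] = labeled["failure_type"].isna().astype(int)  # 1 = OK, 0 = NOK
print(labeled[["cycle", "label"]])
```

The resulting `label` column is exactly the supervised target the classifiers in the following sections train on.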

3.2.2. Pre-Processing

Data pre-processing is the process responsible for transforming unorganized data into an organized and reliable dataset ready for analysis. Pre-processing should commonly be applied to datasets before raw data are used in a machine learning context. It is a fundamental step; otherwise, an unorganized data matrix makes any subsequent analysis untrustworthy.
Currently, there are several ways to pre-process data, through commercial modules or through libraries specifically developed for data pre-processing using programming languages. In the case of this paper, feature values arrive individually and the data are time series, that is, values recorded over time. As mentioned, these values are collected through OPC-UA and recorded whenever there is a value variation; since the different parameters do not vary in the same way, this causes a different number of records per variable per cycle.
In order to create a classifier to predict the quality of the part, it is necessary to know the extreme values that each of the features involved reached while a given part was created. The pre-processing stage is responsible for transforming these time series of data into single values. Depending on the feature in question, it records the highest or lowest value: the criterion for registering the value per feature is the worst-case scenario of each parameter. For example, in the case of features directly associated with pressure and temperature, the values registered are the local maximum values per run count, since the worst case for these variables occurs when the time series registers higher values.
In contrast, in the case of the cushion (volume-related), the value registered is the minimum. This feature represents the quantity of material deposited in the mold: the larger this value, the smaller the amount of material injected into the mold. The minimum values for volume are the worst-case scenario since they represent a part lacking material. This entire Data Harmonization process was carried out using the tool made available by the ZDMP project, the Data Harmonization (DH) modules.
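The worst-case aggregation described above can be sketched with pandas: per cycle, keep the maximum for pressure and temperature and the minimum for the cushion. The toy records and column names below are illustrative, not the actual OPC-UA payload format.

```python
# Sketch of the worst-case aggregation: time series records are collapsed
# into one value per feature per cycle (toy data, illustrative schema).
import pandas as pd

records = pd.DataFrame({
    "cycle":   [1, 1, 1, 1, 2, 2, 2, 2],
    "feature": ["pressure", "pressure", "cushion", "cushion",
                "pressure", "pressure", "cushion", "cushion"],
    "value":   [148.0, 152.5, 6.1, 5.8, 149.0, 155.2, 6.0, 5.5],
})

# Worst-case rule per feature: max for pressure/temperature, min for cushion.
AGG = {"pressure": "max", "temperature": "max", "cushion": "min"}

# Compute both extremes, then keep the one the rule asks for.
extremes = (records.groupby(["cycle", "feature"])["value"]
            .agg(["max", "min"]).reset_index())
extremes["worst"] = extremes.apply(
    lambda r: r["max"] if AGG[r["feature"]] == "max" else r["min"], axis=1)

# One row per cycle, one column per feature: the classifier's input vector.
per_cycle = extremes.pivot(index="cycle", columns="feature", values="worst")
print(per_cycle)
```

Each row of `per_cycle` is the single feature vector that the DH pipeline forwards to the predictive model for that injection cycle.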
The DH component allows the user to design a manufacturing map that, when executed, transforms a specific syntax from one format to another. This map is a Java archive (JAR) file wrapped in a Docker container that executes as a transformation engine. This application provides a graphical interface where data pipelines can be built in the form of drag-and-drop building blocks. These blocks are connected to each other in a general structure containing an input block, a data transformation block and an output block. The input block receives the raw data and passes them to the transformation block, which performs the necessary transformations so that the data are in the correct format for the predictive model, and then sends them to the output block, where they are forwarded to the model. After creating the model, it is necessary to make it work in real time, and DH allows this to happen. In Table 1 and Table 2, it is possible to observe the pre- and post-transformation data, where the maximum reading in the time series format is highlighted (in yellow) and appears after treatment as one of the parameters for the creation of a certain part.
In the case of this research, and as mentioned, a data streaming pipeline was implemented, as it needs to receive and transform data in real-time. Therefore, the pipeline is running all the time in the API, always ready to receive data at any moment, transforming them and sending them to the respective receiver.
Although only the extreme values per feature are used by the classifier, the time series data are stored in a way that shows, in case of problems or malfunctions, the variation of features over time in order to help the different technicians troubleshoot. For example, if the maximum value of the pressure cycle is not correct, observing its variation can help in troubleshooting maintenance problems. This allows an overall view of the process as well as finer monitoring of it.

3.2.3. Labeling

As mentioned before, in order to create supervised classifiers, it is necessary to assign a label to each of the vectors that contain the process parameters. So the Xs are our process parameters and the Ys are our labels. Figure 3 illustrates this conceptually with a basic Artificial Neural Network.
The idea of characterizing a manufacturing process according to its own limits allows us to say whether the part derived from the process is a conforming (OK) or a non-conforming (NOK) part. The criterion for defining an OK or NOK part is whether the part meets the specifications created for its manufacture (OK part) or falls outside the control parameters (NOK part). Deviations from these product characteristics are called defects.
Following the quality criteria, a process characterization approach was performed, where each part was assigned a discrete label in {0, 1}, where '0' means a NOK part (Y1) and '1' means an OK part (Y2).
This characterization was handled by a quality operator with knowledge of the process and the parts involved. Using RAILES software, it was possible to label each part during the ordinary course of the process.
There are several ways to carry out labeling [43], some of them automated, such as Automated Labeling through Semi-Supervised Learning (SSL), Label Propagation and Transfer Learning. However, in this case, since the process is known and has well-defined boundaries regarding the quality of the parts, established through the analysis of the quality technicians and production engineers, the labeling was carried out manually and in-house, that is, internal labeling. This approach was used because only three different processes were considered and it was logistically possible to do so. It will be interesting in the future to explore more automated labeling tools which, if effective, will reduce the setup time of the different classifiers.
With this approach, known as Human-in-the-Loop (HITL), both human and machine intelligence are leveraged to create machine learning models. In a Human-in-the-Loop setup, people are involved in a virtuous circle of improvement where human judgment is used to train, tune and test a particular data model. In Figure 4, it is possible to observe the Human-in-the-Loop process used during the error provocation tests carried out to create the training datasets. After the parts are produced, a quality inspection is performed and the classification of each part is entered into RAILES (Figure 2). The quality of the parts (label) is attached to the dataset along with the process parameters (features); only then is it possible to create a labeled dataset suitable for supervised learning.
This step matters because the data labeling process is incomplete without quality assurance. The labels should meet a baseline degree of accuracy, and the intervention of skilled technicians ensures the quality and ground truth of the dataset.
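The attachment of operator labels to the per-cycle process parameters can be sketched as follows. Column names and values are illustrative assumptions, not the real RAILES schema:

```python
import pandas as pd

# Process parameters logged per injection cycle (hypothetical feature
# names and values, not the real RAILES schema).
features = pd.DataFrame({
    "cycle": [101, 102, 103],
    "max_injection_pressure": [812.4, 815.1, 840.9],
    "nozzle_temperature": [231.0, 230.6, 228.2],
})

# Quality grades entered by the operator in RAILES (1 = OK, 0 = NOK).
labels = pd.DataFrame({"cycle": [101, 102, 103], "quality": [1, 1, 0]})

# Attaching the labels to the features yields a labeled dataset
# suitable for supervised learning.
dataset = features.merge(labels, on="cycle")
```

Joining on the cycle identifier keeps each label aligned with the parameters of the cycle that produced the part.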
Although this approach meets the desired requirements and the classifiers perform well, it only allowed a part to be classified after it was produced, whereas one of our intentions was to predict a deviation in process quality and detect a possible failure early on, i.e., to achieve predictive quality. This helps not only to reduce the environmental footprint (some parts cannot be reused) but also to increase the efficiency and performance of the different processes.
To do this, and since the boundaries between the OK and NOK parts and the points where process errors were provoked in the creation of the different dataset scenarios are well known, two more labels were introduced, Deviation I (‘1’) and Deviation II (‘2’), with NOK remaining ‘0’ and OK becoming ‘3’. These intermediate labels make it possible to predict a loss of process quality and to alert the operators so that they can intervene before the process produces NOK parts. This approach proved very relevant: on average, a problem in the process was detected about seven cycles before a non-conforming part was produced, moving the process from a reactive to a predictive approach.
The Deviation I label allows the classifier to detect a minimal change in the process behavior. At Deviation II, the values diverge substantially from the normal trend line and slight point defects become possible. However, these occasional defects may not make a part NOK, as they may be “expected” under the product evaluation criteria, such as slight warping, mini black spots or micro lines. When an NOK part (‘0’) is predicted, defects such as warpage, brittleness, gloss, part oversize or undersize and orange skin, among others, are easy to identify.
These alerts are sent to the users through the RAILES system whenever there are five or more level I deviations within a configurable time window, three or more level II deviations, or an NOK part. This approach mitigates outliers or sporadic runs, avoiding unnecessary alert noise.
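The alert rule just described can be sketched as follows. The class name, default window length and reset-after-alert behaviour are illustrative assumptions, not the RAILES implementation; the label codes follow the scheme above (0 = NOK, 1 = Deviation I, 2 = Deviation II, 3 = OK):

```python
import time
from collections import deque


class DeviationAlerter:
    """Alert on any NOK prediction, on three or more level II deviations,
    or on five or more level I deviations inside a configurable window."""

    def __init__(self, window_s=600, di_limit=5, dii_limit=3):
        self.window_s = window_s
        self.di_limit, self.dii_limit = di_limit, dii_limit
        self.di, self.dii = deque(), deque()  # event timestamps per level

    def update(self, label, now=None):
        now = time.time() if now is None else now
        # Drop deviation events that fell outside the time window.
        for q in (self.di, self.dii):
            while q and now - q[0] > self.window_s:
                q.popleft()
        if label == 0:
            return "ALERT: non-conforming part predicted"
        if label == 1:
            self.di.append(now)
            if len(self.di) >= self.di_limit:
                self.di.clear()  # reset so one episode raises one alert
                return "ALERT: repeated level I deviations"
        if label == 2:
            self.dii.append(now)
            if len(self.dii) >= self.dii_limit:
                self.dii.clear()
                return "ALERT: repeated level II deviations"
        return None
```

An isolated Deviation I prediction therefore produces no alert, which is how sporadic outliers are filtered out.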

3.2.4. Data Augmentation

Data Augmentation (DA) is a technique that artificially expands the size of a training set by creating modified data from the existing data. It is good practice to use DA to prevent overfitting, when the initial dataset is too small to train on, or to squeeze better performance out of a model; beyond expanding the training set, it can directly enhance the model’s performance [44].
One of the problems in applying machine learning algorithms in industries with well-known, stable processes and low scrap rates, such as injection molding, is the difficulty of balancing the number of conforming and non-conforming parts in the training dataset. In the literature, these are defined as imbalanced classification problems [45].
The challenge of working with imbalanced datasets is that there are too few examples of the minority class for a model to learn the decision boundary effectively. One approach is to oversample the minority class through Data Augmentation, in this case the NOK parts. The simplest approach duplicates examples in the minority class, although these duplicates add no new information to the model. Instead, new examples can be synthesized from the existing ones; this type of Data Augmentation for the minority class is referred to as the Synthetic Minority Oversampling Technique (SMOTE) [46]. Instead of deleting or copying data, the current inputs were used to generate unique input rows with a label based on what the original data imply.
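SMOTE is available in libraries such as imbalanced-learn; a minimal self-contained sketch of the idea, synthesizing new minority samples by interpolating between each sample and one of its nearest minority-class neighbours, could look like this (function name and interface are assumptions for illustration):

```python
import numpy as np


def smote(X_min, n_new, k=3, seed=None):
    """Generate n_new synthetic minority samples by interpolating between
    each sample and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    neigh = np.argsort(d, axis=1)[:, :k]        # k nearest neighbours per row
    base = rng.integers(0, len(X_min), n_new)   # random base samples
    nb = neigh[base, rng.integers(0, k, n_new)] # one random neighbour each
    gap = rng.random((n_new, 1))                # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[nb] - X_min[base])
```

Because every synthetic point lies on a segment between two real minority samples, the new data stay inside the region the minority class already occupies.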
The method used in this work relies on knowing the cycle and the combination of parameters that originated OK and NOK parts, because the training datasets were created through error provocation tests in the processes, with quality technicians grading each of the parts.
For this paper, the Data Augmentation was implemented in Python. As can be observed in Figure 5, the original dataset is imported into the program in CSV format and serves as the basis for synthetic data generation. The algorithm counts the rows and columns of the dataset and registers the values of each variable. It then multiplies each feature by a pseudo-random number within defined limits. The boundaries are set to apply slight variations to the magnitude of the original process values while maintaining their organic characteristics.
The user chooses the size of the synthetic dataset. In this way, data that act as sub-variations of the original intervals are introduced into the system; the process trend lines remain unchanged while the classifier gains more values and training data to learn from. It is possible, for example, to augment only the NOK values (minority class) and then add these to the original dataset of OK values.
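A minimal sketch of this augmentation step follows; the function name, the ±2% jitter range and the interface are illustrative assumptions (Figure 5 describes the actual implementation):

```python
import numpy as np


def augment(X, y, target_label, n_new, jitter=0.02, seed=None):
    """Oversample rows carrying `target_label` by multiplying each feature
    by a pseudo-random factor close to 1 (here within +/- 2%), so the
    synthetic rows stay sub-variations of the original intervals."""
    rng = np.random.default_rng(seed)
    rows = X[y == target_label]
    base = rows[rng.integers(0, len(rows), n_new)]      # resample originals
    factors = rng.uniform(1 - jitter, 1 + jitter, base.shape)
    X_new = base * factors                              # slight magnitude jitter
    y_new = np.full(n_new, target_label)
    return np.vstack([X, X_new]), np.concatenate([y, y_new])
```

For example, `augment(X, y, target_label=0, n_new=500)` would add 500 synthetic NOK rows to the dataset while leaving the OK rows untouched.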
After augmentation, the combined synthetic and original datasets were used to train the classifier; the resulting difference in performance, a significant increase, can be seen in Table 3.
A performance improvement is observed in all cases where DA was applied. It should be emphasized that this procedure must always balance the performance gain against the risk of overfitting the system.

3.2.5. Feature Selection

The complexity of the injection molding process lies in the high number of parameters that intervene throughout the process. From this large set of process parameters, the critical issue is to find the relevant ones that allow the produced parts to be classified correctly. Prior to the work reported in this article, some of the authors carried out an extensive study on feature selection and on which variables to monitor in each of the processes, which can be found in [16].
In this work, the researchers compared algorithms from the three main families of feature selection methods: filter, wrapper and embedded. Additionally, a hybrid approach was also evaluated that takes into account not only the supervised contribution but also an unsupervised method in an effort to evaluate each feature without the influence of the target label.
Experimental data came from the same injection processes used in this work, derived from three different injection processes running on three machines of different brands with different materials (PP, ABS and Tritan). To relate the process variables to part quality, typical problems were induced, such as resistor failure, water supply shut-off and mold carburetor failure, among others.
In the mentioned study, the researchers found variables that are transversal to the three processes, even though they involve different materials, parts and machines. These variables are Maximum Injection Pressure, Nozzle Temperature, Spindle Temperatures (Zones 1, 2 and 4, or Zone 3 in the Nissei ASB case, since that machine has only three resistors on the spindle), Cushion and Ambient Temperature. Additionally, and specific to the blowing machine, the temperature variables of the pots and M2 must also be taken into account. Other variables have occasionally proved relevant and can also be considered in the representation of an injection process.
On average, the number of features is reduced by 73%, yielding gains not only in classification performance but also in the volume of data written to the cloud and in computing time, among other things.
Regarding the introduction of environmental variables in the monitoring of the process, it is clear that the ambient temperature has a significant impact on the processes [16].

3.3. Predictive Methods

In this subsection, the methodologies associated with the predictive methods are presented.

3.3.1. Train and Test Data

As explained earlier, once the data were accessible through the different methodologies, datasets were created for training and testing the different classifiers. For all the classifiers mentioned below, 20% of the data were used for testing and 80% for training.
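With scikit-learn, which is used for the classifiers below, the 80/20 split can be obtained as follows; the arrays here are random placeholders standing in for the real cycle features and the four-class quality labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data: 1000 cycles, 8 selected process features,
# labels 0-3 (NOK, Deviation I, Deviation II, OK).
X = np.random.default_rng(0).normal(size=(1000, 8))
y = np.random.default_rng(1).integers(0, 4, size=1000)

# 80% of the data for training, 20% for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0)
```

Whether the split was stratified by class is not stated in the text; `stratify=y` could be added to preserve the class proportions in both subsets.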
Based on the results of the feature selection study and taking into account the labels mentioned in Section 3.2.3, the features and labels selected for each dataset are as follows.

Nissei ASB

For this process, the dataset used to train the classifier comprises 18,721 injection cycles (already augmented); the features and outputs can be seen in Table 4.

Negri Bossi

The dataset used to train the classifier of this machine comprises 7828 injection cycles (already augmented); the features and outputs can be seen in Table 5.

Tederic

For the Tederic machine, the dataset used to train the classifier comprises 4852 injection cycles (already augmented); the features and outputs can be seen in Table 6.
Regarding the sharing of the datasets: because these are industrial processes, they cannot be shared due to confidentiality.

3.3.2. Classifiers

Regarding real-time predictive quality and part classification, as mentioned in the related work, many works have been carried out in this area and can be found in the literature.
Concerning the processes existing at Vipex, the authors of this article previously studied which machine learning algorithms obtained the best classification performance [47].
That work found that, among several classifiers, the best-performing combination was a Voting-Based Ensemble Method combining the contributions of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM). This conclusion served as the basis of the present work and was confirmed here, again obtaining the highest classification performance compared to other methods.
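Such an ensemble can be sketched with scikit-learn as follows. The stand-in dataset and the use of soft voting are assumptions; the hyperparameters mirror the Negri Bossi values reported in this section:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in data; the real features are the selected process variables.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Voting ensemble of an ANN and an SVM (soft voting assumed here;
# hidden-layer size and SVM parameters follow the Negri Bossi case).
ensemble = VotingClassifier(
    estimators=[
        ("ann", MLPClassifier(hidden_layer_sizes=(1000,),
                              activation="logistic", solver="lbfgs",
                              max_iter=500, random_state=0)),
        ("svm", SVC(kernel="linear", C=100_000, gamma=0.01,
                    probability=True, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```

With soft voting, the ensemble averages the class probabilities of both members, so each model contributes to every prediction rather than casting a single vote.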
The classifiers were implemented in Python with the scikit-learn library. For the ANN, several tests were carried out with different numbers of hidden layers and different numbers of neurons per layer, as well as several solvers (lbfgs, sgd and adam) and activation functions (logistic, relu and tanh). The best-performing architectures used a single hidden layer with 5000 neurons (Nissei ASB), 2000 neurons (Tederic) and 1000 neurons (Negri Bossi), respectively, all with the logistic activation function and the lbfgs solver.
In the case of the SVM, a grid search was performed using GridSearchCV from the scikit-learn library to define the most suitable parameters. The resulting parameters were: Nissei ASB (Cost Function Value = 10,000 and Gamma = 0.001), Negri Bossi (Cost Function Value = 100,000 and Gamma = 0.01) and Tederic (Cost Function Value = 10,000 and Gamma = 0.01), all with a linear kernel.
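A minimal sketch of that grid search follows; the dataset is a placeholder and the grid values are illustrative, chosen to span the C ("cost function value") and gamma ranges reported above:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in data for the selected process features.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Grid spanning the reported parameter ranges (illustrative values).
param_grid = {
    "C": [1e3, 1e4, 1e5],
    "gamma": [1e-3, 1e-2],
    "kernel": ["linear"],
}
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_)
```

GridSearchCV evaluates every parameter combination with 5-fold cross-validation and exposes the winner through `best_params_` and `best_estimator_`.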
Table 7, Table 8 and Table 9 show the classifier performances obtained for the three processes. Note that the values correspond to the average of 10 ANN training runs.

3.3.3. Run-Time Deployment

Since the main objective of this project is the prediction of failures in manufacturing processes using machine learning, a component capable of integrating this into the architecture is of vital importance. After the classifiers for the different processes were created, they were exported in pickle (.pkl) format. A PKL file is created by pickle, a Python module that serializes objects to files on disk and deserializes them back into the program at run-time; it contains a byte stream that represents the objects.
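The export/restore round-trip can be sketched as follows; the tiny stand-in model and file location are illustrative, not the actual exported classifiers:

```python
import os
import pickle
import tempfile

from sklearn.svm import SVC

# Fit a small stand-in model (illustrative data, not the real features).
X = [[0, 0], [1, 1], [0, 1], [1, 0]]
y = [0, 1, 0, 1]
model = SVC(kernel="linear").fit(X, y)

# Serialize the trained classifier to a .pkl file on disk ...
path = os.path.join(tempfile.mkdtemp(), "classifier.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# ... and deserialize it back at run-time.
with open(path, "rb") as f:
    restored = pickle.load(f)
```

The restored object behaves identically to the original, which is what allows a classifier trained offline to be uploaded and executed by the AI Analytics Run-Time.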
The AI Analytics Run-Time component from the ZDMP is capable of deploying and executing an external classifier in real-time. This component is responsible for running the models automatically. Users can upload models to the AI Analytics Run-Time API, where they can be executed in real time, receiving the data that were treated in the Data Harmonization component and predicting failures based on that information.
This is an advantage of this component: it allows not only the use of classifiers exported directly by the ZDMP components through AutoML but also the integration of externally created models, fine-tuned as needed, which increases the versatility of the tool.

4. Demonstration Scenario

To validate the approach proposed in this article, the ZDMP architecture was applied in combination with the RAILES software, covering all stages from data gathering to real-time classification.
Data gathering is handled by RAILES, which uses ZDMP’s Message Bus module to send all the data to Data Harmonization. Once on the ZDMP platform, the received data are analyzed and structured into organized datasets. This structure can be seen in Figure 6.
Once the classifiers are deployed, the output prediction is sent back through the Message Bus module to the RAILES system, where it is possible to consult the predicted output and, at the same time, monitor the process and receive warnings when deviations are found.
For the Artificial Intelligence part classification system, experimental tests are essential, as they are focused on creating organic datasets about the process in question. To obtain reliable datasets, the injection and error generation process was individually monitored by injection technicians with knowledge of the process, who changed the parameters of the various processes to make the features deviate. This parameter change proved effective in producing defective parts and generating reliable data to train each classifier.
In order to prove the effectiveness and applicability of the developed system, a test was performed on the shop floor, provoking errors in the process to identify in advance the deterioration of the process that leads to the production of non-conforming parts.
Documented next is an error provocation test that simulates a problem in the heating resistor of the spindle nozzle by turning off its temperature. This type of defect is expected to create failed parts (the lower the spindle temperature, the harder it is to inject material into the mold) only after a long interval following the resistor failure, because the other resistors in the spindle compensate for the temperature for some time. The intent was thus to prove that the system detects the loss of process quality over time (variations of the intrinsic variables) through automatic classifications and alerts users before defective parts are produced, thereby reducing the environmental footprint and the number of non-conforming parts produced.
There are four types of output from the automatic classifiers: compliant part (OK), process quality deviation level I (DI), process quality deviation level II (DII) and non-compliant part (NOK). Figure 7 shows, as an example, the output of the classifier in RabbitMQ for two injection cycles, where it is possible to observe the cycle and the predicted quality.
To save space in the document, Table 10 summarizes the different predictions made from the moment the error occurred (through provocation) until the system predicted a non-conforming part. Alerts are generated only when deviation events occur in succession, in order to mitigate any outlier. The variable runCount represents the number of parts produced and increases in steps of two because the mold has two cavities, i.e., each injection cycle produces two parts.
As the table entries show, the proposed solution predicted potential problems 13 entries before the problem happened. This validates that the classifier can detect deviations and anomalies relative to the normal execution of the machines, proving the approach reported in Section 3.2.3.

5. Results

This chapter presents the results obtained when the system was implemented on the shop floor. The experimental tests were carried out on three different machines. To better assess the impact of the machine learning classifier used in each process, different indicators were calculated. These indicators were analyzed before and after the introduction of the classifier in the process, allowing the comparison of both scenarios and providing the desired evaluation metrics.
Tederic (TD850), Negri Bossi (NB400) and Nissei ASB are the machines used for the experiments. Each works differently, with a different production cycle duration, number of cavities and number of products produced in the same period. These differences influence the indicator values, as well as the impact of introducing the classifier.
The indicators used for analyzing the classifier’s performance were the OEE, the FPY (First Pass Yield), the number of defective products and the downtime of the machines. Other indicators were also considered, but their calculation was not possible under the circumstances in which the experimental tests were carried out.
A few things must be taken into account when the classifier is introduced. The classifier cannot act on all the production failures that occur. Some failures are inherent to the process, for example, stops to clean the molds, which result in some unavoidable defective products, with or without the classifier. Moreover, the accuracy of the classifiers is not 100%, which means that some failures will simply not be detected. Considering these setbacks, it is understandable that the classifier cannot mitigate all the failures that can occur in production, but it still manages to increase production efficiency.

5.1. Tederic—TD850

This machine has a production cycle of 77.1 s, a single cavity and works five days a week, totaling 5603 parts produced each week. Without any classifier, this process had an OEE of 77.4%, a downtime of 15.1%, an FPY of 89.5% and a total of 588 defective products. Of these 588 defective products, 120 are inherent to the process, such as machine cleaning stops, leaving 468 failures to act on. As the accuracy of the classifier is approximately 92%, about 37 of these 468 failures will not be predicted correctly, leaving a total of 431 preventable failures. The results of introducing the classifier in the process can be observed in Figure 8.

5.2. Negri Bossi—NB400

The production cycle of this machine is about 47.8 s and it has eight cavities, producing eight parts per cycle. Because of this larger number of cavities, it produces many more products than the others: working five days a week, it is estimated to produce about 72,301 products every week. About 10.5% of all products have defects, giving a total of 7592 failures, of which 720 are inherent to the process. With a classifier of approximately 92% accuracy, about 550 failures will not be successfully predicted. It follows that, of the 7592 failures that may occur during production, about 6322 can be successfully mitigated. The results of introducing the classifier in the process can be observed in Figure 9.

5.3. Nissei ASB

The production cycle of this machine lasts 19.2 s and it has two cavities; working five days a week, it produces an estimated 45,000 parts each week. This machine is very stable and efficient, with only about 1.6% of products having defects, i.e., 720 defective products each week, of which 90 are inherent to the process. The accuracy of the classifier is approximately 98.96%, which means that seven failures will not be correctly predicted. With the unavoidable failures subtracted, about 623 failures can be successfully prevented. The results of introducing the classifier in the process can be observed in Figure 10.
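The preventable-failure arithmetic applied to the three machines above can be condensed into a single helper; the function itself is an illustration, while the figures come from Sections 5.1–5.3:

```python
def preventable_failures(total_defects, inherent, accuracy):
    """Failures the classifier can act on: discount process-inherent
    defects, then discount the share the classifier misclassifies."""
    actionable = total_defects - inherent
    missed = round(actionable * (1 - accuracy))
    return actionable - missed


print(preventable_failures(588, 120, 0.92))    # Tederic TD850  -> 431
print(preventable_failures(7592, 720, 0.92))   # Negri Bossi NB400 -> 6322
print(preventable_failures(720, 90, 0.9896))   # Nissei ASB -> 623
```

The same two-step discount (inherent defects, then classifier misses) reproduces the per-machine figures quoted in the text.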

5.4. General Analysis

To summarize, introducing a machine learning classifier in this manufacturing process had a positive impact. The Tederic and Negri Bossi machines had many more defective products compared to the total parts produced, which resulted in a lower evaluation given by the performance indicators. On the other hand, the Nissei ASB machine was more stable, having fewer defective products compared to the total parts produced and having a good evaluation right from the start.
The OEE increased by up to 12.25% in the case of the Negri Bossi machine and 10.97% in the case of the Tederic. For these machines, the FPY, the ratio of good products to total products, also increased remarkably, while the number of defective products and the machine downtime decreased significantly. The decrease in downtime stems from the ability to anticipate problems in part quality, making it possible to intervene in the production process before it generates non-conforming parts. For the Nissei ASB machine, the results were not as positive: the classifier still had a positive impact, but a smaller one, most visibly on machine downtime. Comparing the indicators before and after introducing the classifier, it is easy to conclude that the classifier has a greater impact on processes that start with poor performance; when the process already performs well, the introduction of a machine learning classifier will not have as big of an impact.

6. Conclusions and Future Work

Based on all the insights presented in this article, it is possible to identify the advantages of using Artificial Intelligence on a shop floor and how to enhance injection molding quality prediction.
Concerning the datasets, an essential step was the generation of synthetic information from the original datasets. This use of Data Augmentation made it possible to reduce the number of practical tests and therefore the amount of scrap required for initial dataset creation. In addition, Data Augmentation produced a performance increase across all classifiers, with values between 4% and 6% above the initial ones.
To reduce the noise in the datasets, a pre-processing method was employed that selects the crucial features contributing the most information about the injection process. Using this approach, the number of features used in the classifier is reduced to about one third, which also contributes to an increase in the performance of the machine learning classifiers.
Another fundamental point in the contribution of this article lies in the capability of monitoring deviations in the injection process besides predicting defective parts. This monitoring capacity resulted from adding the HITL approach to the labeling process.
All the tools used in this project were clustered into a common ecosystem by combining the RAILES platform with ZDMP. This enables reliable monitoring of shop floor improvements and leads to substantial, measurable enhancements in the KPIs.
The results show that with the approach developed and presented, it was possible to achieve an increase in OEE of up to 12%, a reduction in the process downtime of up to 9% and a significant reduction in the number of non-conforming parts produced.
Including this monitoring system to achieve a zero-defect approach in the manufacturing process proved to be a great advantage in fault identification and resolution: it becomes much easier for the maintenance department to repair a mechanical fault with the incongruent machine parameters as guide points.
Overall, the smooth integration of the zero defects approach has proven to be a beneficial factor for companies in this phase of Industry 4.0, contributing with a marked impact on increasing the company’s efficiency and, consequently, its competitiveness in the market.

6.1. Limitations

One of the limitations of using AI in production processes is the need for the classifiers to be created individually for each machine at an early stage. Thus, even when dealing with similar processes, in the case of injection molding, it is necessary to dedicate resources to data collection with the addition of a new machine to the prediction project. One of the ways to reduce this problem is the application of the Data Augmentation algorithm, introducing synthetic data generation.
Creating the original datasets requires stopping the manufacturing workflow to dedicate the station exclusively to data acquisition in a test environment, because the process parameterization must be deliberately changed to originate defective parts. This data-gathering process is fundamental for creating reliable datasets and for labeling the process outputs, allowing the conforming and non-conforming outputs to be registered according to the quality criteria.
Moreover, this process would have to be repeated if the manufacturing environment changes. For example, if another machine were placed beside one that has a predictive model, it could increase the temperature in the area and the model could become stale, decreasing its effectiveness.
The predictive models must be created individually not only for each machine but also for each product, as a new product may require different process parameters, changing the importance of each feature.
Because it takes time to create a reasonable dataset, and because the manufacturing workflow must be stopped to obtain a labeled one, implementing a machine learning solution for failure prediction is a slow process, which can cause scalability problems.

6.2. Future Work

In the future, there are other indicators of performance that may be taken into account to analyze the performance of the proposed system. Under the circumstances in which the experimental tests were carried out, it was not possible to make an accurate calculation of these indicators, in particular, the Mean Time to Repair (MTTR) and the Mean Time Between Failures (MTBF), among others.
It will also be essential to extrapolate this shop floor application to a larger number of machines. This extrapolation will allow the re-validation of the applied practical methodologies, from data gathering to quality prediction.
Methodologies should also be developed to reduce the startup time of the models in order to reduce the first impact related to introducing a new process on the shop floor.
Another high-value feature for future work is the combination of part defect prediction with part defect profiling: the creation of a database registering all the potential defects and their main causes of appearance, so that different solutions can be proposed.

Author Contributions

Funding acquisition, T.S.; Investigation, B.S., R.M. and D.F.; Methodology, B.S. and P.I.; Project administration, B.S. and T.S.; Software, B.S., R.M., D.F. and P.I.; Supervision, J.S. and A.D.R.; Validation, B.S., R.M. and D.F.; Writing—original draft, B.S., R.M. and D.F.; Writing—review and editing, B.S., J.S. and A.D.R. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 825631 (ZDMP).

Data Availability Statement

The data presented in this study are not publicly available due to the fact that they are real industrial production processes and represent the production characteristics of products that have an associated level of confidentiality. For further clarification, please contact the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Preuveneers, D.; Ilie-Zudor, E. The intelligent industry of the future: A survey on emerging trends, research challenges and opportunities in Industry 4.0. J. Ambient Intell. Smart Environ. 2017, 9, 287–298. [Google Scholar] [CrossRef] [Green Version]
  2. Aminabadi, S.S.; Tabatabai, P.; Steiner, A.; Gruber, D.P.; Friesenbichler, W.; Habersohn, C.; Berger-Weber, G. Industry 4.0 In-Line AI Quality Control of Plastic Injection Molded Parts. Polymers 2022, 14, 3551. [Google Scholar] [CrossRef] [PubMed]
  3. The Global Plastic Market Size 2022–2030. Available online: https://www.grandviewresearch.com/industry-analysis/global-plastics-market (accessed on 23 May 2022).
  4. Zhao, P.; Zhou, H.; He, Y.; Cai, K.; Fu, J. A nondestructive online method for monitoring the injection molding process by collecting and analyzing machine running data. Int. J. Adv. Manuf. Technol. 2014, 72, 765–777. [Google Scholar] [CrossRef]
  5. Ogorodnyk, O.; Lyngstad, O.V.; Larsen, M.; Wang, K.; Martinsen, K. Application of Machine Learning Methods for Prediction of Parts Quality in Thermoplastics Injection Molding. In Advanced Manufacturing and Automation VIII; Wang, K., Wang, Y., Strandhagen, J.O., Yu, T., Eds.; Springer: Singapore, 2019; pp. 237–244. [Google Scholar]
  6. Tsai, K.M.; Luo, H.J. An inverse model for injection molding of optical lens using Artificial Neural Network coupled with genetic algorithm. J. Intell. Manuf. 2017, 28, 473–487. [Google Scholar] [CrossRef]
  7. Rosato, M.; Rosato, D. Injection Molding Handbook; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar] [CrossRef]
  8. Bernardete, R. Support Vector Machines for quality monitoring in aplastic injection molding process. IEEE Trans. Syst. Man Cybern. Part C 2005, 35, 401–410. [Google Scholar] [CrossRef]
  9. Silva, B.; Sousa, J.; Alenyà, G. Data Acquisition and Monitoring System for Legacy Injection Machines. In Proceedings of the IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Online, 18–20 June 2021. [Google Scholar]
  10. Martins, A.; Silva, B.; Costelha, H.; Neves, C.; Lyons, S.; Cosgrove, J. An approach to integrating manufacturing data from legacy Injection Moulding Machines using OPC UA. In Proceedings of the 37th International Manufacturing Conference, Online, 7–8 September 2021. [Google Scholar]
  11. Jung, H.; Jeon, J.; Choi, D.; Park, J.Y. Application of Machine Learning Techniques in Injection Molding Quality Prediction: Implications on Sustainable Manufacturing Industry. Sustainability 2021, 13, 4120. [Google Scholar] [CrossRef]
  12. Chang, H.; Su, Z.; Lu, S.; Zhang, G. Intelligent Predicting of Product Quality of Injection Molding Recycled Materials Based on Tie-Bar Elongation. Polymers 2022, 14, 679. [Google Scholar] [CrossRef]
  13. Dang, X.P. General frameworks for optimization of plastic injection molding process parameters. Simul. Model. Pract. Theory 2014, 41, 15–27. [Google Scholar] [CrossRef]
  14. Tripathi, S.; Straßer, S.; Mittermayr, C.; Dehmer, M.; Jodlbauer, H. Approaches to Identify Relevant Process Variables in Injection Moulding using Beta Regression and SVM. In Proceedings of the International Conference on Data Science, Technology and Applications, Prague, Czech Republic, 26 July 2019; pp. 233–242. [Google Scholar] [CrossRef]
  15. Saleh Meiabadi, M.; Vafaeesefat, A.; Sharifi, F. Optimization of Plastic Injection Molding Process by Combination of Artificial Neural Network and Genetic Algorithm. J. Optim. Ind. Eng. 2013, 49–54. [Google Scholar]
  16. Silva, B.; Marques, R.; Santos, T.; Sousa, J.; Alenyà, G. Relevant Parameters Identification in Traditional & Stretch and Blow Thermoplastics Injection Molding. In Proceedings of the IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Chemnitz, Germany, 15–17 June 2022. [Google Scholar]
  17. Ogorodnyk, O.; Lyngstad, O.V.; Larsen, M.; Martinsen, K. Application of feature selection methods for defining critical parameters in thermoplastics injection molding. Procedia CIRP 2019, 81, 110–114. [Google Scholar] [CrossRef]
18. Verron, S.; Tiplica, T.; Kobi, A. Distance Rejection in a Bayesian Network for Fault Diagnosis of Industrial Systems. In Proceedings of the 16th Mediterranean Conference on Control and Automation, Ajaccio, France, 25–27 June 2008; pp. 615–620. [Google Scholar] [CrossRef]
  19. Ramana, E.; Sapthagiri, S.; Srinivas, P. Data Mining Approach for Quality Prediction and Improvement of Injection Molding Process Through SANN, GCHAID AND Association Rules. Int. J. Mech. Eng. Technol. (IJMET) 2016, 7, 31–40. [Google Scholar]
  20. Struchtrup, A.; Kvaktun, D.; Schiffers, R. Comparison of feature selection methods for machine learning based injection molding Quality Prediction. AIP Conf. Proc. 2020, 2289, 020052. [Google Scholar] [CrossRef]
  21. Cao, Y.; Fan, X.; Guo, Y.; Li, S.; Huang, H. Multi-objective optimization of injection-molded plastic parts using entropy weight, random forest and genetic algorithm methods. J. Polym. Eng. 2020, 40, 360–371. [Google Scholar] [CrossRef]
  22. Song, F.; Guo, Z.; Mei, D. Feature Selection Using Principal Component Analysis. In Proceedings of the International Conference on System Science, Engineering Design and Manufacturing Informatization, Yichang, China, 12–14 November 2010; Volume 1, pp. 27–30. [Google Scholar] [CrossRef]
  23. Ke, K.C.; Huang, M.S. Quality Prediction for Injection Molding by Using a Multilayer Perceptron Neural Network. Polymers 2020, 12, 1812. [Google Scholar] [CrossRef]
  24. Párizs, R.D.; Török, D.; Ageyeva, T.; Kovács, J.G. Machine Learning in Injection Molding: An Industry 4.0 Method of Quality Prediction. Sensors 2022, 22, 2704. [Google Scholar] [CrossRef]
25. Nagorny, P.; Pillet, M.; Pairel, E.; Goff, R.; Loureaux, J.; Wali, M.; Kiener, P. Quality Prediction in Injection Molding. In Proceedings of the IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Annecy, France, 26–28 June 2017. [Google Scholar] [CrossRef]
  26. Obregon, J.; Hong, J.; Jung, J.Y. Rule-based explanations based on ensemble machine learning for detecting sink mark defects in the injection moulding process. J. Manuf. Syst. 2021, 60, 392–405. [Google Scholar] [CrossRef]
  27. Bai, Y.; Sun, Z.; Deng, J.; Li, L.; Long, J.; Li, C. Manufacturing Quality Prediction Using Intelligent Learning Approaches: A Comparative Study. Sustainability 2018, 10, 85. [Google Scholar] [CrossRef]
28. Ogorodnyk, O.; Martinsen, K. Monitoring and Control for Thermoplastics Injection Molding: A Review. Procedia CIRP 2018, 67, 380–385. [Google Scholar] [CrossRef]
29. Schreiber, A. Regelung des Spritzgießprozesses auf Basis von Prozessgrößen und im Werkzeug Ermittelter Materialdaten. Ph.D. Thesis, RWTH Aachen, Aachen, Germany, 2011. [Google Scholar]
  30. Lopes, N.; Ribeiro, B. Part Quality Prediction in an Injection Moulding Process Using Neural Networks. 2000. Available online: https://www.semanticscholar.org/paper/Part-Quality-Prediction-in-an-Injection-Moulding-Lopes-Ribeiro/ce0d7ba0c9a2ec24be031fef33015d2ba70b068d (accessed on 23 May 2022).
  31. Hoskins, J.C.; Kaliyur, K.M.; Himmelblau, D.M. Fault diagnosis in complex chemical plants using artificial neural networks. AIChE J. 1991, 37, 137–141. [Google Scholar] [CrossRef]
  32. Joseph, B.; Wang, F.; Shieh, D.S. Exploratory data analysis: A comparison of statistical methods with Artificial Neural Networks. Comput. Chem. Eng. 1992, 16, 413–423. [Google Scholar] [CrossRef]
  33. Chen, J.; Patton, R. Robust Model-Based Fault Diagnosis for Dynamic Systems; International Series on Asian Studies in Computer and Information Science; Springer: Berlin, Germany, 1999. [Google Scholar]
  34. Fung, G.M.; Mangasarian, O.L.; Shavlik, J.W. Knowledge-Based Support Vector Machine Classifiers. In Proceedings of the 15th International Conference on Neural Information Processing Systems, NIPS’02, Vancouver, BC, Canada, 9–14 December 2002; MIT Press: Cambridge, MA, USA, 2002; pp. 537–544. [Google Scholar]
  35. Sousa, J.; Ferreira, J.; Lopes, C.; Sarraipa, J.; Silva, J. Enhancing the Steel Tube Manufacturing Process with a Zero Defects Approach. Volume 2B: Advanced Manufacturing. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Online, 16–19 November 2020. [Google Scholar] [CrossRef]
  36. Sousa, J.; Nazarenko, A.A.; Ferreira, J.; Antunes, H.; Jesus, E.; Sarraipa, J. Zero-Defect Manufacturing using data-driven technologies to support the natural stone industry. In Proceedings of the 2021 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC), Cardiff, UK, 21–23 June 2021; pp. 1–7. [Google Scholar] [CrossRef]
  37. Tao, F.; Qi, Q.; Liu, A. Data-driven smart manufacturing. J. Manuf. Syst. 2018, 48, 157–169. [Google Scholar] [CrossRef]
  38. Xu, K.; Li, Y.; Liu, C.; Liu, X.; Hao, X.; Gao, J.; Paul, G.M. Advanced Data Collection and Analysis in Data-Driven Manufacturing Process. Chin. J. Mech. Eng. 2020, 33, 43. [Google Scholar] [CrossRef]
  39. Sousa, J.; Mendonça, J.P.; Machado, J. A generic interface and a framework designed for industrial metrology integration for the Internet of Things. Comput. Ind. 2022, 138, 103632. [Google Scholar] [CrossRef]
40. Schmitt, R.; Kurzhals, R.; Ellerich, M.; Nilgen, G.; Schlegel, P.; Dietrich, E.; Krauß, J.; Latz, A.; Gregori, J.; Miller, N. Predictive Quality—Data Analytics in produzierenden Unternehmen. In Internet of Production—Turning Data into Value; 2020; pp. 226–253. Available online: https://www.wzl.rwth-aachen.de/go/id/siht/file/810031 (accessed on 23 May 2022).
  41. Fraile, F.; Montalvillo, L.; Rodriguez, M.A.; Navarro, H.; Ortiz, A. Multi-tenant Data Management in Collaborative Zero Defect Manufacturing. In Proceedings of the 2021 IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0IoT), Naples, Italy, 4–6 June 2021; pp. 464–468. [Google Scholar]
  42. Ruiz, J.C.S.; Bru, J.M.; Escoto, R.P. Smart Digital Twin for ZDM-based job-shop scheduling. In Proceedings of the 2021 IEEE International Workshop on Metrology for Industry 4.0 & IoT, Naples, Italy, 4–6 June 2021; pp. 510–515. [Google Scholar]
43. Zhou, Y.; Liu, Y.; Yang, J.; He, X.; Liu, L. A Taxonomy of Label Ranking Algorithms. J. Comput. 2014, 9, 557–565. [Google Scholar] [CrossRef]
  44. Taylor, L.; Nitschke, G. Improving Deep Learning with Generic Data Augmentation. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bengaluru, India, 18–21 November 2018; pp. 1542–1547. [Google Scholar]
  45. Lemnaru, C.; Potolea, R. Imbalanced Classification Problems: Systematic Study, Issues and Best Practices. In Proceedings of the International Conference on Enterprise Information Systems, Wroclaw, Poland, 28 June–1 July 2012; Volume 102, pp. 35–50. [Google Scholar] [CrossRef]
  46. Chawla, N.; Bowyer, K.; Hall, L.; Kegelmeyer, W. SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Intell. Res. (JAIR) 2002, 16, 321–357. [Google Scholar] [CrossRef]
  47. Silva, B.; Sousa, J.; Alenya, G. Machine Learning Methods for Quality Prediction in Thermoplastics Injection Molding. In Proceedings of the 2021 International Conference on Electrical, Computer and Energy Technologies (ICECET), Cape Town, South Africa, 9–10 December 2021; pp. 1–6. [Google Scholar]
Figure 1. Generic system architecture.
Figure 2. RAILES part classification by defect module.
Figure 3. Artificial Neural Network with a demonstration of process parameters and outputs.
Figure 3. Artificial Neural Network with a demonstration of process parameters and outputs.
Processes 11 00062 g003
Figure 4. Human-in-the-Loop flow to create the training dataset.
Figure 5. Data Augmentation proceedings.
Figure 6. Software architecture.
Figure 7. Messages received with the predictions.
Figure 8. Tederic machine indicators with and without the proposed classifier.
Figure 9. Negri Bossi machine indicators with and without the proposed classifier.
Figure 10. Nissei ASB machine indicators with and without the proposed classifier.
Table 1. Raw Data Extracted from the Machine with OPC-UA (Maximum Injection Pressure).

Date     | Value | RunCount
10:19:24 | 88    | 156,760
10:19:24 | 89    | 156,760
10:19:24 | 94    | 156,760
10:19:24 | 89    | 156,760
10:19:24 | 92    | 156,760
10:19:24 | 94    | 156,760
10:19:24 | 95    | 156,760
10:19:24 | 94    | 156,760
10:19:24 | 98    | 156,760
10:19:24 | 97    | 156,760
10:19:24 | 94    | 156,760
10:19:24 | 98    | 156,760
10:19:24 | 101   | 156,760
Table 2. Data with corresponding minimum or maximum values per feature.

RunCount | Plastification Time [s] | Maximum Injection Pressure [bar] | Cushion [mm] | M2 [s]
156,760  | 10.28 | 101 | 21.14 | 1.13
156,762  | 10.09 | 111 | 21.34 | 1.13
156,764  | 10.13 | 116 | 21.22 | 1.13
156,766  | 9.45  | 121 | 21.64 | 1.13
156,768  | 9.46  | 121 | 21.39 | 1.14
156,770  | 9.18  | 120 | 21.25 | 1.15
156,772  | 9.31  | 121 | 21.27 | 1.15
156,774  | 8.82  | 121 | 21.38 | 1.14
156,776  | 8.85  | 121 | 21.01 | 1.15
156,778  | 8.32  | 121 | 21.04 | 1.16
156,780  | 8.42  | 121 | 21.27 | 1.14
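Table 2 is obtained by condensing the raw per-cycle samples of Table 1 into a single row per injection cycle, keeping the extreme value each feature reaches during the cycle. A minimal sketch of this aggregation, assuming the samples arrive as (RunCount, feature, value) tuples; the tuple layout and the `condense` helper are illustrative, not the paper's implementation:

```python
from collections import defaultdict

def condense(samples):
    """Reduce raw OPC-UA samples to one record per injection cycle.

    samples: iterable of (run_count, feature_name, value) tuples.
    For pressure-like features the maximum observed in the cycle is kept,
    mirroring how Table 1's samples collapse to Table 2's single row.
    """
    per_cycle = defaultdict(dict)
    for run_count, feature, value in samples:
        best = per_cycle[run_count].get(feature)
        per_cycle[run_count][feature] = value if best is None else max(best, value)
    return dict(per_cycle)

# The thirteen pressure samples of Table 1, all belonging to cycle 156,760:
raw = [(156760, "max_injection_pressure", v)
       for v in (88, 89, 94, 89, 92, 94, 95, 94, 98, 97, 94, 98, 101)]
print(condense(raw))  # {156760: {'max_injection_pressure': 101}}
```

Features where the minimum is the relevant extreme would use `min` instead; a production version would dispatch per feature.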
Table 3. Performance of the classifiers with and without Data Augmentation (Nissei ASB, NB400 and TD850).

Machine         | Dataset                   | Performance
Nissei ASB      | Original (937 cycles)     | 93.515%
Nissei ASB      | Augmented (7488 cycles)   | 96.163%
Nissei ASB      | Augmented (18,721 cycles) | 97.963%
Negri Bossi 400 | Original (392 cycles)     | 86.164%
Negri Bossi 400 | Augmented (3136 cycles)   | 89.454%
Negri Bossi 400 | Augmented (7828 cycles)   | 91.063%
Tederic 850     | Original (243 cycles)     | 85.114%
Tederic 850     | Augmented (1944 cycles)   | 88.412%
Tederic 850     | Augmented (4852 cycles)   | 90.864%
Table 4. Features and Labels of the Nissei ASB dataset.

Features: Maximum Injection Pressure; Nozzle Temperature; Spindle Zone 1 Temperature; Spindle Zone 2 Temperature; Spindle Zone 3 Temperature; Cushion; Ambience Temperature; M2; Pot Temperature 8; Pot Temperature 4; Plastification Time; Hot Resistance Block.
Labels: NOK; Deviation I; Deviation II; OK.
Table 5. Features and Labels of the Negri Bossi dataset.

Features: Maximum Injection Pressure; Nozzle Temperature; Spindle Zone 1 Temperature; Spindle Zone 2 Temperature; Spindle Zone 3 Temperature; Cushion; Ambience Temperature; Injection Velocity.
Labels: NOK; Deviation I; Deviation II; OK.
Table 6. Features and Labels of the Tederic dataset.

Features: Maximum Injection Pressure; Nozzle Temperature; Spindle Zone 1 Temperature; Spindle Zone 2 Temperature; Spindle Zone 3 Temperature; Cushion; Ambience Temperature; Computation Pressure; Injection Velocity; Closing Force.
Labels: NOK; Deviation I; Deviation II; OK.
Table 7. Nissei ASB classification performance.

                 | Precision | Recall | F1-Score
NOK              | 1         | 0.928  | 0.979
DI               | 1         | 1      | 1
DII              | 0.887     | 1      | 0.948
OK               | 0.969     | 1      | 1
Accuracy         |           |        | 0.989
Macro Average    | 0.979     | 1      | 0.990
Weighted Average | 0.990     | 0.990  | 0.990
Table 8. Negri Bossi classification performance.

                 | Precision | Recall | F1-Score
NOK              | 0.900     | 1      | 0.951
DI               | 0.788     | 0.604  | 0.685
DII              | 0.726     | 0.777  | 0.747
OK               | 0.992     | 0.982  | 0.992
Accuracy         |           |        | 0.921
Macro Average    | 0.849     | 0.839  | 0.839
Weighted Average | 0.921     | 0.921  | 0.921
Table 9. Tederic classification performance.

                 | Precision | Recall | F1-Score
NOK              | 0.908     | 0.990  | 0.949
DI               | 0.776     | 0.612  | 0.674
DII              | 0.725     | 0.766  | 0.755
OK               | 0.980     | 0.980  | 0.990
Accuracy         |           |        | 0.919
Macro Average    | 0.837     | 0.847  | 0.837
Weighted Average | 0.919     | 0.919  | 0.919
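The per-class figures in Tables 7–9 follow the standard definitions of precision, recall and F1 computed from a confusion matrix. A minimal sketch of that computation; the two-class confusion-matrix counts in the example are hypothetical, not taken from the paper's experiments:

```python
def per_class_metrics(cm, labels):
    """Per-class precision/recall/F1 from a confusion matrix.

    cm[i][j] counts cycles of true class i predicted as class j.
    Returns {label: (precision, recall, f1)}.
    """
    out = {}
    n = len(labels)
    for i, lab in enumerate(labels):
        tp = cm[i][i]
        predicted = sum(cm[r][i] for r in range(n))  # column i: predicted as lab
        actual = sum(cm[i])                          # row i: actually lab
        precision = tp / predicted if predicted else 0.0
        recall = tp / actual if actual else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        out[lab] = (precision, recall, f1)
    return out

# Hypothetical example: 10 true OK cycles (8 correct), 10 true NOK (9 correct).
cm = [[8, 2],
      [1, 9]]
print(per_class_metrics(cm, ["OK", "NOK"]))
```

The Macro Average rows are the unweighted mean of these per-class values; the Weighted Average rows weight each class by its number of true cycles, which is why they coincide with the accuracy when support is balanced across correct predictions.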
Table 10. Predictions received during the execution of the demonstration scenario.

RunCount | Predicted Quality
735,118  | OK
735,120  | OK
735,122  | OK
735,124  | OK
735,126  | OK
735,128  | DI
735,130  | DI
735,132  | DI
735,134  | DI
735,136  | DI
735,138  | DI
735,140  | DI
735,142  | DI
735,144  | DI
735,146  | DI
735,148  | DI
735,150  | DII
735,152  | DII
735,154  | NOK
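The drift in Table 10 (OK, then Deviation I, then Deviation II, then NOK) is exactly the pattern a quality-loss alert should catch before non-conforming parts are produced. A minimal sketch of such a rule, using the labels of Table 10; the three-cycle threshold and the `quality_alert` helper are illustrative assumptions, not the system's actual alerting logic:

```python
def quality_alert(predictions, window=3):
    """Return the RunCount at which a quality-drift alert fires.

    predictions: iterable of (run_count, label) pairs in cycle order.
    An alert fires after `window` consecutive cycles predicted as a
    deviation (DI/DII) or as non-conforming (NOK); returns None if the
    process never drifts long enough.
    """
    streak = 0
    for run_count, label in predictions:
        if label in ("DI", "DII", "NOK"):
            streak += 1
            if streak == window:
                return run_count
        else:
            streak = 0
    return None

# First cycles of Table 10: the alert fires three cycles into the DI run,
# long before the NOK part at cycle 735,154 is actually produced.
preds = [(735118, "OK"), (735120, "OK"), (735128, "DI"),
         (735130, "DI"), (735132, "DI"), (735134, "DI")]
print(quality_alert(preds))  # 735132
```

Treating Deviation I/II as early-warning classes rather than waiting for a NOK prediction is what lets the operator intervene while parts are still conforming.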
Silva, B.; Marques, R.; Faustino, D.; Ilheu, P.; Santos, T.; Sousa, J.; Rocha, A.D. Enhance the Injection Molding Quality Prediction with Artificial Intelligence to Reach Zero-Defect Manufacturing. Processes 2023, 11, 62. https://doi.org/10.3390/pr11010062