Article

Design of an In-Process Quality Monitoring Strategy for FDM-Type 3D Printer Using Deep Learning

by Gabriel Avelino R. Sampedro 1,2, Danielle Jaye S. Agron 2, Gabriel Chukwunonso Amaizu 3, Dong-Seong Kim 2 and Jae-Min Lee 2,*
1 College of Computer Studies, De La Salle University, 2401 Taft Ave, Malate, Manila 1004, Philippines
2 Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
3 ICT Convergence Research Center, Kumoh National Institute of Technology, Gumi 39177, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(17), 8753; https://doi.org/10.3390/app12178753
Submission received: 20 July 2022 / Revised: 29 August 2022 / Accepted: 30 August 2022 / Published: 31 August 2022
(This article belongs to the Collection Machine Learning in Computer Engineering Applications)

Abstract

Additive manufacturing is one of the rising manufacturing technologies; however, owing to its operational mechanism, printing failures remain common, wasting both time and resources. A real-time process monitoring system that can forecast anomalous behavior in fused deposition modeling (FDM) additive manufacturing is proposed as a solution to the specific problem of nozzle clogging. A set of collaborative sensors accumulates time-series data, which are fed into the proposed machine learning algorithm. The multi-head encoder–decoder temporal convolutional network (MH-ED-TCN) extracts features from the data, interprets their effect on the different processes that occur during an operational printing cycle, and distinguishes normal manufacturing operation from malfunctioning operation. The tests performed yielded 97.2% accuracy in anticipating the future behavior of a 3D printer.

1. Introduction

Specific components can now be fabricated from filaments of materials such as plastics, resins, and even metals with the help of 3D printers. The method is used for faster creation of prototypes and end-use products, since producing molded plastic or metal components otherwise requires industrial equipment [1]. Although industrial-grade machinery is reasonably priced for mass production, it is time-consuming and expensive for prototype development, which typically requires numerous design revisions and iterations. Conversely, 3D printing is rather affordable for producing a single piece, but not for mass production.
When 3D printers are used to fabricate parts, the process takes a considerable amount of time to finish, and errors may occur during printing [2,3]. Printing mistakes are common, and the affected output is typically rendered useless. Because the procedure can take a while, operators usually do not monitor the printer as it runs. The 3D printer is not designed to handle faults; it normally keeps printing even after one occurs. Continuing the printing process can damage the 3D printer and waste material and the output itself [4]. One of the most prominent printing errors is nozzle clogging [5]. Researchers have made numerous efforts to address this issue, proposing in-process monitoring, a critical quality control factor that is often overlooked [6].
In-process quality control is based on continuously monitoring the 3D printing process through visual inspection or by acquiring sensor data during manufacturing [7,8,9]. These methods commonly use collaborative sensors for improved quality prediction and defect detection. The work in [10] used acoustic emission (AE) together with statistical analysis to monitor and identify filament breakage, an event that causes the nozzle to clog and malfunction during the printing process. Likewise, in [11], the current flowing through the extruder motor is monitored to prevent nozzle clogging: an increase in the ampere reading indicates a rise in extruder motor temperature, leading to a clogged nozzle tip. To stabilize the nozzle temperature, a proportional integral derivative (PID) controller drives the cooling fan, preventing the clogging that leads to warping of the final product, and a physics-based model is constructed to further validate the result. Aside from affecting the rheological properties of the material, the nozzle temperature also influences the filament extrusion velocity, a critical factor contributing to nozzle clogging that can cause printing failure [12,13].
More recently, researchers have applied machine learning algorithms to in-process monitoring systems to realize smart monitoring schemes. Machine learning algorithms are used to classify, predict, and prevent malfunctions during the printing process [14,15]. The authors in [16,17] acquired data on a fused deposition modeling (FDM) machine with an AE technique; the signals are processed using a hidden semi-Markov model and a support vector machine (SVM) to classify the normal and nozzle-clogged states of the FDM printer. The approach in [18] utilized heterogeneous sensors for quality monitoring: data from thermocouples, accelerometers, and infrared sensors are analyzed with machine learning algorithms such as probabilistic neural networks, naïve Bayesian clustering, and SVM to predict anomalies during the printing process. In this paper, a deep learning-based, data-driven monitoring system is proposed to enable an in-process quality monitoring scheme for the additive manufacturing process.
The implementation of a 3D printer monitoring device is proposed to address the need to continuously monitor production and the status of the 3D printer. A temporal convolutional network (TCN) is used to enable the device to distinguish faulty outputs from safe values [19]. The general objective of this design project is to create a tool that keeps track of the 3D printing chamber's interior, air quality, and temperature. The device aims to indicate whether the 3D printer status is safe for printing and whether the output is free of errors. With this system, operators can be notified when something is wrong and can halt the printing process to mitigate waste and damage to the printer. The MH-ED-TCN algorithm processes the multi-variate data, extracts features from them, and classifies whether or not an anomaly exists.
The rest of the paper is organized as follows. The development of the proposed multi-head encoder–decoder temporal convolutional network (MH-ED-TCN) structure is discussed in Section 2. The experimental setup for data gathering is described in Section 3. The results of applying the MH-ED-TCN are discussed in Section 4, and Section 5 presents the conclusion and recommendations for future studies.

2. Development of In-Process Monitoring System

The lack of a device to accurately monitor whether a 3D printer is still operating at appropriate settings is a primary cause of manufacturing waste in terms of time and resources, and it motivated the conceptualization of this study. The 3D printing process is complex, and even a small mistake can render the printed output useless. To build the data-driven in-process quality monitoring system, a deep learning model is proposed to identify whether an error will occur during the printing process.
In building a monitoring system for an FDM 3D printer, a deep learning method is used for predictive modeling to identify malfunctions that occur during the manufacturing process. In this paper, an additional pre-processing method is integrated into the temporal convolutional network [20] in order to improve the classification accuracy.

2.1. Multi-Variate Data

In this section, the input data and the structure of the multi-head encoder–decoder temporal convolutional network are discussed. The data are acquired using the experimental setup illustrated in Figure 1. To select a suitable machine learning algorithm, the nature of the data is first defined. The input data to the system are multi-variate time-series data, presented in Equation (1):
$D = [D_1, D_2, D_3, \ldots, D_x] \quad (1)$
where:
  • $D$: the combination of the various sensor readings collected by the acquisition device;
  • $x$: the global epoch time in seconds;
  • $D_x$: the data matrix containing the collection of unit data.
For each data matrix $D_x$, there is a set of data $d_{a,b}^{k}$, where $a$ ranges from 1 to the number of parameters used for classification within the system and $b$ refers to the measurement count. The data fed into the system come from a thermistor sensor that records the nozzle temperature, a humidity sensor that records the ambient temperature and humidity, and a spectrometer that measures the filament's temperature. Each second, a sample of multiple logs is collected, where the measurement count depends on how many samples per second the data acquisition device can capture. In this paper, the data acquisition rate is set to 2047 Hz.
$D_1 = \begin{bmatrix} d_{1,1}^{1} & d_{1,2}^{1} & \cdots & d_{1,b}^{1} \\ d_{2,1}^{1} & d_{2,2}^{1} & \cdots & d_{2,b}^{1} \\ \vdots & \vdots & \ddots & \vdots \\ d_{a,1}^{1} & d_{a,2}^{1} & \cdots & d_{a,b}^{1} \end{bmatrix}$
To conform to the training's predetermined level of unfolding, the data are first spliced using a sliding-window approach, with values selected according to the sampling capability of the experiment's acquisition module. By adopting a sliding step of $\Delta = 250$ and a time window of $\delta = 50$, the elements in $D_k$ are divided into fixed-length sequences $x^{(\Delta)}$. This may be expressed mathematically as $x^{(\Delta)} = (x_{\delta-\Delta+1}, x_{\delta-\Delta+2}, \ldots, x_{\delta})$. The resulting data sequence is a matrix of dimension $(a, \Delta)$, where $x^{(\Delta)} \in \mathbb{R}^{(a, \Delta)}$.
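For illustration only, the following minimal NumPy sketch shows this kind of sliding-window splicing. It assumes the raw stream is already arranged as an array with one sensor channel per row, and it interprets the quoted values so that each window has the stated $(a, \Delta)$ shape (window length 250, step 50); this interpretation is an assumption rather than a detail confirmed by the text.

```python
import numpy as np

def sliding_windows(raw, win=250, step=50):
    """Splice a multi-variate stream of shape (a, T) into windows of shape (a, win).

    `raw` holds one sensor channel per row. `win` and `step` mirror the
    Delta/delta values quoted above, interpreted so that each window matches
    the stated (a, Delta) output shape.
    """
    a, total = raw.shape
    windows = []
    for end in range(win, total + 1, step):
        windows.append(raw[:, end - win:end])  # x^(Delta) = (x_{end-win+1}, ..., x_end)
    return np.stack(windows)                   # shape: (num_windows, a, win)

# Example: four sensor channels sampled for ~10 s at 2047 Hz
raw = np.random.randn(4, 2047 * 10)
X = sliding_windows(raw)
print(X.shape)  # (num_windows, 4, 250)
```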

2.2. Multi-Head Encoder-Decoder TCN Architecture

Figure 2 shows the overall network structure. The model has two primary parts: an encoder and a decoder, both built from a convolutional neural network (CNN) structure that handles multivariate time-series data. In this instance, features along the time and feature axes are extracted using one-dimensional CNNs. The encoder is made up of three residual blocks constructed specifically to compute the shortest path for broadcasting data to both the front and rear layers. The decoder has a dense layer, a rectified linear unit (ReLU) activation function, and three transposed convolutional layers. Except for the dense layers, all stages in the encoder and decoder use L2 regularizers, which penalize large model weights, to avoid overfitting.
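The Keras sketch below is a rough, non-authoritative illustration of such an encoder–decoder arrangement: causal, dilated 1D convolutions on the encoder side (the residual wrapping is sketched separately in the next subsection), three transposed 1D convolutions plus a ReLU dense layer on the decoder side, and L2 regularization on the convolutional stages. The filter count, kernel size, dense width, and input shape ($\Delta$ time steps by $a$ channels) are assumptions, not values taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

def build_encoder_decoder(a=4, delta=250, filters=64, weight_decay=1e-4):
    """Illustrative 1D-CNN encoder-decoder classifier for (delta, a) windows."""
    reg = regularizers.l2(weight_decay)
    inp = layers.Input(shape=(delta, a))

    # Encoder: stacked causal dilated convolutions (dilation 1, 2, 4)
    x = inp
    for d in (1, 2, 4):
        x = layers.Conv1D(filters, 3, padding="causal", dilation_rate=d,
                          activation="relu", kernel_regularizer=reg)(x)

    # Decoder: transposed convolutions, then a ReLU dense layer
    for _ in range(3):
        x = layers.Conv1DTranspose(filters, 3, padding="same",
                                   activation="relu", kernel_regularizer=reg)(x)
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # normal vs. clogged operation

    return Model(inp, out)
```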

Residual Blocks

In the proposed architecture, a stack of dilated convolutions inspired by [21] has a receptive field $r_f$ that defines the number of input neurons influencing the predicted values produced at the final layer. To ensure that the predicted values are based solely on the historical values of the series $x$, causal convolution is used. Each of the neurons is subjected to a filter size of 256, and the dilation factor of each layer $l$ is incremented as $d = [2^0, \ldots, 2^{l-1}]$. For each individual layer, a residual block is implemented instead of directly applying the architecture presented in [22]. The residual block applied in this research is composed of batch normalization, dilated convolution, and a non-linearity. These blocks efficiently support network stabilization and training.
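A minimal sketch of one such residual block is shown below: batch normalization, a dilated causal convolution, and a ReLU non-linearity, with an additive skip connection. Interpreting the "filter size of 256" as the number of filters, and choosing a kernel size of 3, are assumptions made for illustration only.

```python
from tensorflow.keras import layers

def residual_block(x, filters=256, kernel_size=3, dilation_rate=1):
    """Batch norm -> dilated causal conv -> ReLU, added back onto a skip path."""
    shortcut = x
    y = layers.BatchNormalization()(x)
    y = layers.Conv1D(filters, kernel_size, padding="causal",
                      dilation_rate=dilation_rate)(y)
    y = layers.Activation("relu")(y)
    # Project the skip path with a 1x1 convolution if the channel counts differ
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    return layers.Add()([y, shortcut])
```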

3. Experimental Setup

The equipment used to acquire the data is a Cartesian, single-extruder FDM printer (Creality CS-10). The experimental setup is summarized in Table 1. For the sensor placement, the spectrometer probe is placed near the extruder motor to read its temperature. The extruder motor temperature indicates whether nozzle clogging is occurring during the printing process. Materials used in additive manufacturing have specific optimal temperature thresholds for melting. When the extruder temperature falls below the material's optimal melting point, the melted filament may cool and clogging may occur [23]. On the other hand, when the extruder temperature exceeds the maximum melting point, heat creep may occur. Heat creep is the phenomenon in which premature melting of the filament disrupts the extrusion process.
The dataset is accumulated first by strategically mounting an LM35 temperature sensor on the printer head. A humidity sensor records the ambient temperature and humidity of the external printing environment, and a spectrometer is used to measure the filament's temperature. A collection frequency of 2047 Hz is used for data acquisition, and the data are saved on a PC in comma-separated value format. The slicer program for the FDM printer is configured with a printing surface temperature of 60 °C, a 5 mm/s feed rate, and a 40 mm/s production rate. The 108 min execution period allowed us to gather a dataset with a sample size of 328,470. Additional vibration and acoustic emission sensors were used to detect printing activity in order to notify users if printing suddenly encountered an unscheduled halt.
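As a rough illustration of the logging step only (the actual setup streams from data acquisition hardware, and a software sleep loop cannot guarantee precise 2047 Hz timing), the sketch below appends samples to a CSV file; `read_sensors()` is a hypothetical placeholder, not part of the authors' software.

```python
import csv
import time

def read_sensors():
    """Hypothetical placeholder: return one (nozzle_temp, ambient_temp,
    humidity, filament_temp) tuple from the attached acquisition hardware."""
    raise NotImplementedError

def log_to_csv(path, duration_s, rate_hz=2047):
    """Append timestamped sensor samples to a CSV file at roughly rate_hz."""
    period = 1.0 / rate_hz
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "nozzle_temp", "ambient_temp", "humidity", "filament_temp"])
        start = time.time()
        while time.time() - start < duration_s:
            writer.writerow([time.time() - start, *read_sensors()])
            time.sleep(period)
```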

4. Results and Discussion

The collated data were split into three parts (training, validation, and testing) before running the TCN model. The TCN was run for 100 epochs, and the performance metrics collected at the end of the simulations, such as accuracy, loss, and the confusion matrix, are explained below. The proposed model is compared with the most prominently used architectures, namely, CNNs [24] and recurrent neural networks (RNNs) such as long short-term memory (LSTM) [25]. These methods are well known for handling time-series forecasting tasks [26,27].
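For orientation only, a minimal training sketch consistent with this setup (a three-way split, 100 epochs, accuracy and loss tracking) follows. The split ratios, batch size, and optimizer are assumptions; `build_encoder_decoder` and the window array `X` refer to the illustrative sketches above, and `y` here is a dummy label vector standing in for the real normal/clogged annotations.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Windows from the sliding-window sketch, transposed to (num_windows, delta, a)
X = X.transpose(0, 2, 1)
y = np.random.randint(0, 2, size=len(X))  # dummy 0/1 labels (normal vs. clogged), illustration only

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)

model = build_encoder_decoder(a=X.shape[2], delta=X.shape[1])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                    epochs=100, batch_size=64)
```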
Figure 3 illustrates the accuracy of the proposed model over the training period, and Figure 5 shows its training and validation loss over 100 epochs. Both values stayed almost constant throughout the entire process, which indicates that the proposed TCN model properly learned the process from the collected data and that no overfitting was recorded. The accuracy obtained was 97%. The goal of most (if not all) machine learning algorithms is to reduce loss, i.e., to bring the loss value as close to zero as possible; since the training and validation losses remain below 0.06, the model exhibited very minimal loss. In Figure 6, the TCN performance summary is presented in the form of a confusion matrix: the proportion of accurately categorized results (true negatives and true positives) is roughly 97%, and the remaining 3% consists of inaccurately categorized results (false negatives and false positives).
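The corresponding evaluation on the held-out test set could be computed as in the short sketch below, continuing the illustrative training sketch above; scikit-learn's metric functions are used purely for illustration.

```python
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()
print(confusion_matrix(y_test, y_pred))              # [[TN, FP], [FN, TP]]
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
```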
Lastly, the proposed MH-ED-TCN is evaluated against existing classifiers, namely, SVM, LSTM, an LSTM autoencoder, and a simple CNN (Figure 4). As the results show, MH-ED-TCN outperforms the other methods. MH-ED-TCN achieved 97% accuracy and 95% recall, on par with the method that produced the best results in those areas, namely the CNN; the characteristics of the TCN model also overcome the vanishing gradient problem from which the canonical RNN suffers. In terms of precision and recall, MH-ED-TCN produced results of 99% and 96%, respectively, exceeding all four of the other algorithms. Overall, in every observed metric, MH-ED-TCN performed as well as or better than all of the other algorithms considered in this research.

5. Conclusions

In this article, an in-process smart monitoring strategy is presented. The proposed machine learning algorithm, the multi-head encoder–decoder temporal convolutional network (MH-ED-TCN), is utilized to identify the occurrence of nozzle clogging. In testing, the proposed model outperformed the other machine learning algorithms in identifying nozzle clogging from the multi-variate data. Although the MH-ED-TCN recorded the same 97% accuracy as the CNN model, the proposed model performed better on other evaluation metrics, such as F1 score and precision, outperforming the other machine learning structures presented.
The paper outlines the development of a 3D printer monitor. Possible future work would be to develop a 3D printer monitor using digital twin technology, which is applied when experimenting with objects because it eliminates the risk of ruining the actual object [28,29]. In monitoring and inspection work, workers often apply manual inspection techniques and travel to the location of the object to be inspected; because this process is costly and laborious, digital twin technology is seen as an alternative to manual practice. In addition, improving the algorithm and adopting recently developed algorithms to increase the accuracy of the system are recommended. It is also recommended to add appropriate sensors to investigate and address printing malfunctions caused by the viscoelastic behavior of the polymer materials used in the experiment. A mechanism for promptly shutting down the 3D printer in the event of an operational breakdown is also suggested; in the present design, only status tracking and reporting were taken into account.

Author Contributions

Conceptualization and paper writing, G.A.R.S. and D.J.S.A.; software, G.A.R.S., D.J.S.A., and G.C.A.; formal analysis, G.A.R.S.; review and editing, G.A.R.S. and J.-M.L.; supervision, J.-M.L. and D.-S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was supported by the Priority Research Centers Program through the NRF funded by MEST (2018R1A6A1A03024003) and the Grand Information Technology Research Center support program (IITP-2021-2020-0-01612) supervised by the IITP and funded by MSIT, Korea.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FDM        fused deposition modeling
MH-ED-TCN  multi-head encoder–decoder temporal convolutional network
AE         acoustic emission
PID        proportional integral derivative
SVM        support vector machine
TCN        temporal convolutional network
CNN        convolutional neural network
ReLU       rectified linear unit
LSTM       long short-term memory

References

  1. Dilberoglu, U.; Gharehpapagh, B.; Yaman, U.; Dolen, M. The Role of Additive Manufacturing in the Era of Industry 4.0. Procedia Manuf. 2017, 11, 545–554. [Google Scholar] [CrossRef]
  2. Tlegenov, Y.; Hong, G.S.; Lu, W.F. Nozzle condition monitoring in 3D printing. Robot. Comput.-Integr. Manuf. 2018, 54, 45–55. [Google Scholar] [CrossRef]
  3. Liao, J.; Shen, Z.; Xiong, G.; Liu, C.; Luo, C.; Lu, J. Preliminary study on fault diagnosis and intelligent learning of fused deposition modeling (FDM) 3D Printer. In Proceedings of the 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), Xi’an, China, 19–21 June 2019; pp. 2098–2102. [Google Scholar]
  4. Kim, Y.; Yoon, C.; Ham, S.; Park, J.; Kim, S.; Kwon, O.; Tsai, P.J. Emissions of nanoparticles and gaseous material from 3D printer operation. Environ. Sci. Technol. 2015, 49, 12044–12053. [Google Scholar] [CrossRef]
  5. Lalegani, M.; Mohd Ariffin, M.K.A.; Hatami, S. An overview of fused deposition modelling (FDM): Research, development and process optimisation. Rapid Prototyp. J. 2021, 27, 562–582. [Google Scholar] [CrossRef]
  6. Verma, A.; Tangri, P.; Mamgain, P.; Shaffi; Lakshmayya. In process quality control: A review. Int. J. Ind. Pharm. Bio Sci. 2014, 1, 48–59. [Google Scholar]
  7. Wu, H.C.; Chen, T.C. Quality control issues in 3D-printing manufacturing: A review. Rapid Prototyp. J. 2018, 24, 607–614. [Google Scholar] [CrossRef]
  8. Kantaros, A.; Karalekas, D. Fiber Bragg grating based investigation of residual strains in ABS parts fabricated by fused deposition modeling process. Mater. Des. 2013, 50, 44–50. [Google Scholar] [CrossRef]
  9. Kantaros, A.; Giannatsis, J.; Karalekas, D. A novel strategy for the incorporation of optical sensors in Fused Deposition Modeling parts. In Proceedings of the International Conference on Advanced Manufacturing Engineering and Technologies, Stockholm, Sweden, 27–30 October 2013; pp. 163–170. [Google Scholar]
  10. Yang, Z.; Jin, L.; Yan, Y.; Mei, Y. Filament Breakage Monitoring in Fused Deposition Modeling Using Acoustic Emission Technique. Sensors 2018, 18, 749. [Google Scholar] [CrossRef]
  11. Tlegenov, Y.; Wong, Y.; Hong, G.S. A dynamic model for nozzle clog monitoring in fused deposition modelling. Rapid Prototyp. J. 2017, 23, 391–400. [Google Scholar] [CrossRef]
  12. Moretti, M.; Rossi, A.; Senin, N. In-process simulation of the extrusion to support optimisation and real-time monitoring in fused filament fabrication. Addit. Manuf. 2020, 38, 101817. [Google Scholar] [CrossRef]
  13. Kuznetsov, V.; Solonin, A.; Tavitov, A.; Urzhumtsev, O.; Vakulik, A. Increasing strength of FFF three-dimensional printed parts by influencing on temperature-related parameters of the process. Rapid Prototyp. J. 2019. ahead-of-print. [Google Scholar] [CrossRef]
  14. Agron, D.J.; Lee, J.M.; Kim, D.S. Nozzle Thermal Estimation for Fused Filament Fabricating 3D Printer Using Temporal Convolutional Neural Networks. Appl. Sci. 2021, 11, 6424. [Google Scholar] [CrossRef]
  15. Hu, H.; He, K.; Zhong, T.; Hong, Y. Fault diagnosis of FDM process based on support vector machine (SVM). Rapid Prototyp. J. 2019. ahead-of-print. [Google Scholar] [CrossRef]
  16. Wu, H.; Yu, Z.; Wang, Y. Real-time FDM machine condition monitoring and diagnosis based on acoustic emission and hidden semi-Markov model. Int. J. Adv. Manuf. Technol. 2017, 90, 2027–2036. [Google Scholar] [CrossRef]
  17. Wu, H.; Wang, Y.; Yu, Z. In situ monitoring of FDM machine condition via acoustic emission. Int. J. Adv. Manuf. Technol. 2016, 84, 1483–1495. [Google Scholar] [CrossRef]
  18. Rao, P.; Liu, J.; Roberson, D.; Kong, Z.; Williams, C. Online Real-Time Quality Monitoring in Additive Manufacturing Processes Using Heterogeneous Sensors. J. Manuf. Sci. Eng. 2015, 137, 1007–1011. [Google Scholar] [CrossRef]
  19. Delli, U.; Chang, S. Automated process monitoring in 3D printing using supervised machine learning. Procedia Manuf. 2018, 26, 865–870. [Google Scholar] [CrossRef]
  20. Bai, S.; Kolter, J.Z.; Koltun, V. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar]
  21. Borovykh, A.; Bohte, S.; Oosterlee, C. Dilated Convolutional Neural Networks for Time Series Forecasting. J. Comput. Financ. 2019, 22, 73–101. [Google Scholar] [CrossRef]
  22. Van den Oord, A.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A Generative Model for Raw Audio. arXiv 2016, arXiv:1609.03499. [Google Scholar]
  23. Coppola, B.; Cappetti, N.; Di Maio, L.; Scarfato, P.; Incarnato, L. 3D printing of PLA/clay nanocomposites: Influence of printing temperature on printed samples properties. Materials 2018, 11, 1947. [Google Scholar] [CrossRef]
  24. Doan, V.S.; Huynh-The, T.; Kim, D.S. Underwater Acoustic Target Classification Based on Dense Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2020, 19, 1500905. [Google Scholar] [CrossRef]
  25. Hermawan, A.P.; Kim, D.S.; Lee, J.M. Sensor Failure Recovery using Multi Look-back LSTM Algorithm in Industrial Internet of Things. In Proceedings of the 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vienna, Austria, 8–11 September 2020; Volume 1, pp. 1363–1366. [Google Scholar] [CrossRef]
  26. Sampedro, G.A.; Paramartha Putra, M.A.; Kim, D.S.; Lee, J.M. 3D Printer State Prediction: A Deep Learning Model Approach. In Proceedings of the 2021 1st International Conference in Information and Computing Research (iCORE), Manila, Philippines, 11–12 December 2021; pp. 135–138. [Google Scholar] [CrossRef]
  27. Sampedro, G.A.; Agron, D.J.; Kim, R.G.; Kim, D.S.; Lee, J.M. Fused Deposition Modeling 3D Printing Fault Diagnosis using Temporal Convolutional Network. In Proceedings of the 2021 1st International Conference in Information and Computing Research (iCORE), Manila, Philippines, 11–12 December 2021; pp. 62–65. [Google Scholar] [CrossRef]
  28. Debroy, T.; Zhang, W.; Turner, J.; Babu, S.S. Building digital twins of 3D printing machines. Scr. Mater. 2017, 135, 119–124. [Google Scholar] [CrossRef]
  29. Kantaros, A.; Piromalis, D.; Tsaramirsis, G.; Papageorgas, P.; Tamimi, H. 3D printing and implementation of digital twins: Current trends and limitations. Appl. Syst. Innov. 2021, 5, 7. [Google Scholar] [CrossRef]
Figure 1. (a) The actual experimental setup for the data acquisition using the FDM 3D printer Creality CS-10 and the data acquisition device, DAQ MX 6001. (b) The architecture of the experimental setup, featuring the sensors whose data are processed in the proposed MH-ED-TCN scheme. (c) The developed monitoring system software graphical interface.
Figure 2. The framework of the proposed MH-ED-TCN, illustrating how the collected data $x^{(\Delta)}$ from the 3D printer are fed into the system and sequenced. The variables f, k, and d stand for the filter count, filter size, and dilation factor, respectively.
Figure 3. The graph illustrates the training and validation accuracy of the proposed model over a period of 100 epochs.
Figure 4. Comparison of different machine learning algorithms with the proposed MH-ED-TCN.
Figure 5. The graph illustrates the training and validation loss of the proposed model over a period of 100 epochs.
Figure 6. The results presented in a confusion matrix.
Table 1. The 3D printer experimental conditions.
3D printer (type: FDM): Creality CS-10
Filament: material PLA; diameter 1.75 mm
Nozzle diameter: 0.4 mm
Slicing software: CreatorK
X/Y/Z accuracy: 11/11/2.5 μm
Speed: feed rate 5 mm/s; extruder travel 40 mm/s
Temperature: extruder 210 °C; bed 90 °C
Layer resolution: 0.1 mm
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
