Article

Determination of Decarburization Depth Based on Deep Learning Methods

1 Department of Telecommunication Engineering, National Kaohsiung University of Science and Technology, Kaohsiung City 811, Taiwan
2 Ph.D. Program in Maritime Science and Technology, National Kaohsiung University of Science and Technology, Kaohsiung City 811, Taiwan
3 Department of Marine Leisure Management, National Kaohsiung University of Science and Technology, Kaohsiung City 811, Taiwan
4 Metal Industries Research & Development Centre (MIRDC), Taichung City 407, Taiwan
5 Technology Talent Bachelor's Program in Intelligent Maritime, National Kaohsiung University of Science and Technology, Kaohsiung City 811, Taiwan
* Author to whom correspondence should be addressed.
Metals 2023, 13(3), 479; https://doi.org/10.3390/met13030479
Submission received: 30 January 2023 / Revised: 20 February 2023 / Accepted: 24 February 2023 / Published: 26 February 2023

Abstract

In the heat treatment of steel, decarburization is a serious issue that leads to poor wear resistance and short fatigue life. At present, decarburization depth is determined either by visual estimation with the human eye or by software based on traditional image analysis, so the analysis remains limited to experts and conventional algorithms. Artificial intelligence is a general-purpose technology with a multitude of applications. This paper applies deep learning to propose a decarburization layer detector (DLD) that determines the depth of decarburized layers with high accuracy, real-time speed, and low training and computation costs. In addition, we used several kinds of decarburized layer images to compare the proposed method with other deep learning network architectures. The experimental results show that the proposed method yields a detection accuracy of 92.97%, which is higher than existing methods, with computational demands far lower than those of other network architectures. We therefore propose this system for automatic decarburization depth determination as an application of metallographic analysis.

1. Introduction

Steel is often categorized according to its carbon content. In general, steel can be divided into three categories: low-carbon, medium-carbon, and high-carbon. Low-carbon steel contains less than 0.30% carbon, medium-carbon steel contains 0.30–0.60% carbon, and high-carbon steel contains more than 0.60% carbon. The strength and hardness of steel decrease as its carbon content decreases. All steel contains at least some carbon and is defined as an alloy of iron and carbon. Excessive decarburization can therefore result in unstable steel, which leads to reduced product performance, and it can also cause various problems in machinery made from the steel. Thus, determining the depth of decarburization is important in many fields.
Decarburization occurs when steel is heated during heat treatment, leading to oxidation and the loss of near-surface carbon. Generally speaking, it arises from an improperly controlled heat treatment process. Decarburization is a serious issue because carbon is lost from the steel surface at high temperatures, leaving the surface with lower hardness and tensile strength. Industry guidelines for measuring decarburization depth are set in ASTM E1077 (standard test methods for estimating the depth of decarburization of steel specimens) [1] and ISO 3887 [2]. These two standards describe several methods for determining decarburization depth. There are four basic procedures: the screening method, the microscopical method, the microindentation hardness method, and the chemical analytical method. These methods can be employed with any cross-sectional shape. Depending on the available equipment, the microscopical method generally reveals a lesser depth of decarburization than chemical analytical methods for certain simple shapes. The microindentation hardness method measures the hardness of the steel on a microscopic scale. In addition, ISO 3887 uses two main methods for measuring decarburization depth: microstructural observation and plotting a hardness profile. Both are time-consuming and expensive, and neither can examine all materials.
In addition to ASTM E1077 and ISO 3887, some researchers have begun to explore machine vision and image processing in microstructure science. Over the past few decades, optical decarburization measurements have been preferred for quality testing, modeling, or indirect assessment of carbon steel products. The measurements are commonly performed on metallographs and can be supported by image processing, which allows multiple measurements to be collected from digital micrographs. Huang et al. [3] proposed a digital image analysis approach that determines decarburization depth by comparing variations in the image gray gradient. Chávez-Campos et al. [4] used image processing to examine the impact of segmentation parameters on the precision of decarburization depth measurements. However, image processing methods are often time-consuming and inaccurate for determining decarburization depth. With the increasing availability of data and computing resources, traditional image processing methods are being replaced by artificial intelligence (AI). Deep learning has now succeeded in many fields, including pattern recognition, object recognition, and audio processing. A key advantage of deep learning methods is that they enable real-time analysis and precise measurements. Because deep learning achieves excellent accuracy when trained with large amounts of data, its popularity continues to grow across different fields.
In general, deep learning methods for target detection are dominated by CNN-based algorithms, chiefly the YOLO (you-only-look-once) [5] series. After the original YOLO model, Redmon et al. successively proposed the YOLOv2 [6] and YOLOv3 [7] object detection models. YOLOv3 differs from other existing methods in that detection at different scales makes it easier to detect objects of various sizes. After YOLOv3, a series of methods were proposed, including YOLOv4 [8] and YOLOv6 [9], which effectively integrate the advantages of deep learning to create an optimal neural network for object detection. The YOLO series has now been developed up to YOLOv7 [10], which proposes several bag-of-freebies methods and effectively integrates deep learning tricks to greatly improve performance without increasing the inference cost. In summary, the YOLO series has become the mainstream approach to object detection in natural images. For metallographic analysis, Lee et al. [11] proposed a fast image classifier (FIC) to classify grain size based on the concept of YOLO; compared with classical deep learning networks, the FIC algorithm outperformed existing methods. However, to the best of our knowledge, no paper has discussed deep learning models that can determine decarburization depth. To address this problem, we propose the CNN-based decarburization layer detector (DLD) to determine decarburization depth. This detector automatically determines the decarburization layer depth from metallic microstructure images. In this work, the proposed method is verified for the determination of decarburization depth based on deep learning. The research work and contributions of this paper include the following aspects:
  • Based on a convolutional neural network (CNN) model, we propose a decarburization layer detector (DLD) to determine the depth of decarburized layers.
  • For real-time detection, we use a neural network with only 48 layers to replace the existing methods for determining decarburization depth.
  • The proposed method decreases the computing cost while improving the performance of the DLD system over existing methods.
  • To determine the decarburization depth, we also propose a re-training strategy that makes the DLD system more robust.
After the introduction in Section 1, Section 2 will describe in detail the proposed strategy for determining decarburization depth. The experimental results and comparisons with other representative object detection methods are discussed in Section 3, and Section 4 gives the conclusion and proposes future work.

2. Proposed Method

YOLO series object detectors are clearly faster and more accurate than other existing methods on the 80 classes of the MS COCO dataset. However, for determining decarburization depth, only one target category needs to be detected. To obtain a suitable decarburization depth detector, we reduce the architecture of YOLOv4 and modify its original three output layers to two output layers. This section describes the DLD framework for determining the decarburization layer depth from metallic microstructure images. The solutions and steps are described in detail below.

2.1. Architecture of Proposed Method

In this paper, we combine several well-known CNN designs to extract features from decarburization layer images. Residual networks (ResNet) [12] and cross-stage partial networks (CSPNet) [8] are mainly adopted to extract local descriptors from each image. ResNet, one of the outstanding architectures in object detection, introduced shortcut connections that bypass a layer and move to the next layer in the sequence, which makes it possible to mitigate the vanishing-gradient problem and still achieve compelling performance. CSPNet divides the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The system framework for determining the decarburization layer depth is shown in Figure 1.
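To make these two building blocks concrete, the following minimal PyTorch sketch shows a shortcut (residual) block and a CSPNet-style split-and-merge block, loosely following the layer pattern listed in Table 1. The module names, activation choice, and exact hyperparameters are illustrative assumptions, not the implementation used in this work.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """ResNet-style block: 1x1 reduction, 3x3 expansion, then a shortcut addition
    (cf. layers 2-4 of Table 1)."""
    def __init__(self, channels):
        super().__init__()
        self.reduce = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.expand = nn.Conv2d(channels // 2, channels, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        y = self.act(self.reduce(x))
        y = self.act(self.expand(y))
        return x + y                              # shortcut connection bypasses the two convolutions

class CSPBlock(nn.Module):
    """CSPNet-style block: route half of the base feature map through two convolutions,
    merge across the stage, and concatenate with the untouched input (cf. layers 13-19 of Table 1)."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.conv1 = nn.Conv2d(half, half, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(half, half, kernel_size=3, padding=1)
        self.transition = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        part = x[:, x.shape[1] // 2:]             # route: take half of the base-layer channels
        y1 = self.act(self.conv1(part))
        y2 = self.act(self.conv2(y1))
        merged = self.act(self.transition(torch.cat([y1, y2], dim=1)))
        return torch.cat([x, merged], dim=1)      # cross-stage merge doubles the channel count

# quick shape check at the first CSP stage of Table 1 (100 x 68 feature map, 128 channels)
feat = torch.randn(1, 128, 68, 100)
print(CSPBlock(128)(ResidualBlock(128)(feat)).shape)   # torch.Size([1, 256, 68, 100])
```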
There are no small decarburization depth targets in the decarburization layer images, so the YOLO series models do not extract the most appropriate features for determining the decarburization layer depth. Unlike the YOLO series models, DLD predicts boxes at two different scales, replacing the three scales (small, medium, and large) of the original model. In other words, DLD divides the input image into two different grid sizes to detect medium and large objects, respectively. Reducing the neural network architecture in this way also reduces the computing time. In addition, in contrast to other deep learning network methods, the DLD model does not down-sample at the first level, which would discard information; instead, the network starts with a convolutional layer with 32 filters to preserve more information. The DLD model also uses three ResNet blocks and three CSPNet blocks in the backbone to extract different features, such as edges, corners, and color patterns, from the decarburization layer images. The framework of DLD is defined in more detail in Table 1. To improve the accuracy of decarburization layer detection, DLD removes unnecessary network architecture, allowing the decarburization layer to be detected more accurately at a reduced computation cost. Therefore, we use only a 48-layer network architecture to replace the traditional assessment methods for determining decarburization depth.
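As an illustration of the two-scale prediction, the sketch below mirrors the tail of Table 1 for an 800 × 544 input: a large-object head on the 25 × 17 grid and a medium-object head on the 50 × 34 grid, each emitting 18 channels per cell (consistent with three anchors × (4 box coordinates + objectness + one class score) for the single decarburization-layer class). Activations and training details are omitted; this is a sketch under those assumptions, not the exact DLD implementation.

```python
import torch
import torch.nn as nn

class TwoScaleHeads(nn.Module):
    """Illustrative two-scale detection head following layers 36-48 of Table 1."""
    def __init__(self, num_outputs=18):
        super().__init__()
        self.large_head = nn.Conv2d(512, num_outputs, kernel_size=1)      # layer 40: 25 x 17 grid
        self.reduce = nn.Conv2d(256, 128, kernel_size=1)                  # layer 43
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")       # layer 44: 25 x 17 -> 50 x 34
        self.fuse = nn.Conv2d(128 + 256, 256, kernel_size=3, padding=1)   # layer 46 after the route
        self.medium_head = nn.Conv2d(256, num_outputs, kernel_size=1)     # layer 47: 50 x 34 grid

    def forward(self, deep_feat, route_feat, mid_feat):
        # deep_feat: 25 x 17 x 512, route_feat: 25 x 17 x 256, mid_feat: 50 x 34 x 256 (layer 26)
        large = self.large_head(deep_feat)                                  # large-object predictions
        up = self.upsample(self.reduce(route_feat))                         # bring deep features to 50 x 34
        medium = self.medium_head(self.fuse(torch.cat([up, mid_feat], 1)))  # medium-object predictions
        return large, medium

heads = TwoScaleHeads()
large, medium = heads(torch.randn(1, 512, 17, 25),
                      torch.randn(1, 256, 17, 25),
                      torch.randn(1, 256, 34, 50))
print(large.shape, medium.shape)    # (1, 18, 17, 25) and (1, 18, 34, 50)
```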

2.2. Re-Training Strategy

Modern deep learning models have millions of parameters, and training the DLD network architecture requires a large amount of labeled data and computing time. Transfer learning is a machine learning technique in which a pre-trained model is reused as the starting point for a model on a new task. In this paper, we propose a re-training strategy to improve the efficiency of the DLD model. In practice, there are not enough decarburization layer samples to train the DLD models from scratch. Therefore, after each testing phase, we use the testing results to make the DLD model more robust, as shown in Figure 2. Poor results are those for which the recognized decarburization depth is inaccurate according to experienced specialists. In Figure 3, the red boxes show poorer recognition results obtained by applying the DLD method, while the yellow boxes present better results after manually confirmed correction. For poor results, we revise the mislabeled images and use them for re-training. To save manual labeling time, automated pre-labeling employs the re-trained models to label the test images automatically. Through this re-training strategy, the DLD models achieve better performance in determining decarburization depth, which benefits the final application.
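A minimal sketch of this re-training loop is given below. The four callables are hypothetical placeholders for the steps described above (initial training, automated pre-labeling, specialist review, and manual correction), not functions from the actual DLD code.

```python
# Sketch of the re-training strategy in Figure 2; every callable passed in is a placeholder.
def retraining_cycle(train_set, test_images, train_model, pre_label, looks_poor, correct_labels, rounds=3):
    model = train_model(train_set)                        # initial DLD training
    for _ in range(rounds):
        for image in test_images:
            boxes = pre_label(model, image)               # automated pre-labeling of a test image
            if looks_poor(image, boxes):                  # inaccurate result flagged by a specialist (red boxes)
                fixed = correct_labels(image, boxes)      # manually confirmed correction (yellow boxes)
                train_set.append((image, fixed))          # the revised image joins the training data
        model = train_model(train_set)                    # re-train on the enlarged, corrected set
    return model
```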

3. Experimental Results and Analysis

To evaluate the DLD model, we describe the metrics and datasets used in this section. We collected the decarburization layer dataset from the metallographic analysis laboratory of the Metal Industries Research and Development Centre (MIRDC) [13]. The experiments were implemented with the CUDA C++ API on a computer with NVIDIA 3060 GPUs. A Zeiss Axiovert 200 Mat optical microscope was employed to collect the decarburization layer images, as shown in Figure 4. The average precision (AP) [14] was used as the metric to evaluate the DLD model, and we compared the results with the YOLO series models in terms of AP and inference speed. Finally, we divided the decarburization layer dataset into training and testing sets by random splitting: the training and testing data were selected at a ratio of 80:20, and five-fold cross-validation was conducted.
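The evaluation protocol can be summarized by the short sketch below, which combines the 80:20 hold-out with five-fold cross-validation using scikit-learn's KFold. The train_dld and evaluate_ap functions are placeholder stubs standing in for the actual training and AP computation.

```python
import numpy as np
from sklearn.model_selection import KFold

def train_dld(train_indices):             # placeholder: train the DLD model on these images
    return {"trained_on": len(train_indices)}

def evaluate_ap(model, test_indices):     # placeholder: compute average precision on the held-out fold
    return 0.0

image_indices = np.arange(8418)           # the 8418 decarburization-layer images (Section 3.1)
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

fold_aps = []
for train_idx, test_idx in kfold.split(image_indices):
    # each fold holds out 20% of the images, giving the 80:20 training/testing ratio
    model = train_dld(image_indices[train_idx])
    fold_aps.append(evaluate_ap(model, image_indices[test_idx]))

print("mean AP over the five folds:", np.mean(fold_aps))
```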

3.1. Decarburization Layer Dataset

To date, there have been no appropriate decarburization layer datasets for deep learning. Thus, we collected decarburization layer images from MIRDC [13]. The dataset contains eight different classes of decarburization imaged under optical microscopy at 100× magnification: SAE 9254 (518), S45C (497), SAE 8620H (730), SCM418H (420), S53C (554), S55C (503), 5120H (519), and 4320H (468) steels, as shown in Figure 5, where the number of decarburization layer images in each class is given in parentheses. Each steel class has between 420 and 730 decarburization layer images. In general, more data results in better performance of the training model [15]. Therefore, data augmentation was used to expand the training dataset; we used only flipping as the data augmentation policy for the decarburization layer images. The resulting decarburization layer dataset consists of 8418 images in 24-bit JPEG format.
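As a sketch of the flipping policy, the snippet below mirrors each micrograph once with Pillow. A single flip per image doubles the 4209 raw micrographs (the per-class counts above sum to 4209) to the 8418 images reported; the flip direction and directory names are illustrative assumptions.

```python
from pathlib import Path
from PIL import Image

src_dir = Path("decarb_raw")              # 4209 original 24-bit JPEG micrographs (assumed layout)
dst_dir = Path("decarb_augmented")        # 8418 images after augmentation
dst_dir.mkdir(exist_ok=True)

for path in src_dir.glob("*.jpg"):
    img = Image.open(path)
    img.save(dst_dir / path.name)                                   # keep the original image
    flipped = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)        # flipping augmentation
    flipped.save(dst_dir / f"{path.stem}_flip{path.suffix}")        # add the mirrored copy
```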

3.2. Validation of Proposed Method

The DLD system was implemented with a graphical user interface (GUI), as shown in Figure 6. The GUI of the DLD system consists of four parts: the main window and three control blocks. Initially, the decarburization layer images are read and displayed in the main window. The user can then inspect the detection result for a video or an image in the main window through the two control blocks for video and image recognition. The up and down adjustment buttons are used to correct the labeling of erroneously detected images in the re-training strategy.
In this work, the decarburization layer images were resized to three different sizes: 416 × 288, 608 × 416, and 800 × 544. Table 2 shows the per-class detection results (AP) on our decarburization layer dataset for the various image resolutions; the mean average precision (mAP) is the mean over the eight classes. In this experiment, the best decarburization layer detection was achieved using the 800 × 544 images, whereas the 416 × 288 images were too small to be detected effectively. Overall, using the 800 × 544 images, our proposed method improved the mAP from 86.82% (at 416 × 288) to 92.97%. The results also show that the decarburization layer depth of most SAE 9254, S45C, SAE 8620H, SCM418H, and S53C steels was successfully detected by the proposed method. Lower performance occurred on the S55C, 5120H, and 4320H steels, whose decarburization layer depth is difficult to distinguish even with the human eye, as shown in Figure 7. Table 2 also shows the results for the different steel categories, demonstrating the strong performance of decarburization layer detection.
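The mAP figures quoted above follow directly from Table 2: averaging the eight per-class APs gives 92.97% at 800 × 544 and roughly 86.82% at 416 × 288, as the short check below confirms.

```python
# mAP is the mean of the per-class APs listed in Table 2.
ap_800x544 = [91.78, 92.17, 99.45, 98.12, 96.88, 91.31, 86.94, 87.12]
ap_416x288 = [85.61, 84.79, 95.53, 93.55, 90.78, 84.35, 78.54, 81.47]

print(sum(ap_800x544) / len(ap_800x544))   # about 92.97, the mAP at 800 x 544
print(sum(ap_416x288) / len(ap_416x288))   # about 86.83, matching the reported 86.82% up to rounding
```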

3.3. Comparison with Other Existing Method

Not only the performance but also the computation cost must be considered for neural network architectures. Hence, we used our decarburization layer dataset to compare our method with state-of-the-art YOLO series models in terms of recognition accuracy and FPS (frames per second), as shown in Table 3. Similar accuracy was obtained by YOLOv7, YOLOv6, Scaled-YOLOv4, and the proposed method. The other evaluation values (precision, recall, and F1 score) are also listed in Table 3. All detection methods were evaluated on the same training and test sets, and the metrics were computed under exactly the same standard. The compared methods included YOLOv4, YOLOv4-tiny [8], YOLOv6, YOLOv6-tiny [9], YOLOv7, YOLOv7-tiny [10], and the proposed method. From Table 3, our proposed method performed better than the YOLO-tiny series methods and similarly to the full YOLO series methods, which shows that the DLD algorithm performs well in decarburization layer detection. The YOLO-tiny series methods lose more information because they down-sample at the first level of the neural network to reduce the computation cost. The experimental results show that the DLD model is a faster detector than the full YOLO series methods, mostly because those methods have several million more parameters and therefore greater computational costs. Although some of the YOLO series methods achieve higher accuracy, their FPS is lower than that of our proposed method for determining decarburization depth. Thus, while the accuracy of the DLD method is not the highest, it approximates that of the YOLO series methods, making it well suited to real-time image detection.
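The F1 scores in Table 3 are the harmonic mean of the tabulated precision and recall, which offers a quick consistency check: the proposed method gives 2 × 0.97 × 0.98 / (0.97 + 0.98) ≈ 0.97, and YOLOv6 gives ≈ 0.92, matching the table.

```python
# F1 is the harmonic mean of precision and recall (values from Table 3).
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.97, 0.98), 2))   # proposed method: 0.97
print(round(f1(0.91, 0.93), 2))   # YOLOv6: 0.92
```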
In these experiments, some ambiguous decarburization layer images were still not accurately detected by the DLD model. An analysis of the final results shows that some decarburization layer images were difficult to distinguish even manually (as shown in Figure 7) and that some cases were not represented in the training data. Solving these problems remains a challenging task for future work. Generally speaking, improving the detection performance and accuracy of a neural network model first requires increasing the number of training images. Therefore, we will continue to extend the dataset with the re-training strategy and integrate more advanced neural network architectures to further improve the DLD system.

4. Conclusions

In this work, we proposed the decarburization layer detector (DLD), which effectively fuses features from different stages of the network to extract the features crucial to determining the decarburization layer of carbon steel. The DLD algorithm was experimentally analyzed on the MIRDC dataset, on which it achieved an mAP of 92.97%. The results show that the DLD algorithm makes better use of the feature information in the shallow feature layers for large and medium target detection and provides strong detection performance for determining decarburization depth. In addition, we collected a decarburization layer dataset to evaluate the effectiveness of DLD and used a neural network with only 48 layers to replace the existing methods. The experimental results also show that the DLD model provides high performance approaching that of state-of-the-art algorithms at a much lower computational cost. To produce steel products with a more accurate determination of decarburization layer depth and to relieve the operators' burden, an intelligent decarburization layer detection system is essential. How to further reduce redundancy and optimize the network structure while ensuring accuracy will be the main direction of future research.

Author Contributions

Data curation, H.-C.H.; Formal analysis, C.-H.C.; Methodology, C.-C.L. and J.-C.L. (Jao-Chuan Lin); Resources, T.-K.H.; Software, J.-C.L. (Jen-Chun Lee) and J.-C.L. (Jao-Chuan Lin); Supervision, C.-C.L. and H.-C.H.; Writing—original draft, J.-C.L. (Jao-Chuan Lin); Writing—review & editing, C.-H.C. and H.-C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Kaohsiung University of Science and Technology, Taiwan.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ASTM E1077-14; Standard Test Methods for Estimating the Depth of Decarburization of Steel Specimens. ASTM International: West Conshohocken, PA, USA, 2021.
  2. ISO 3887:2017; Steels – Determination of the Depth of Decarburization. International Organization for Standardization: Geneva, Switzerland, 2017.
  3. Huang, W.; Wang, C.S.; Chang, Y.; Yeh, C. A digital analysis approach for estimating decarburization layer depth of carbon steel via a portable device. IOP Conf. Ser. Mater. Sci. Eng. 2019, 644, 012007. [Google Scholar] [CrossRef]
  4. Chávez-Campos, G.M.; Reyes-Archundia, E.; Vergara-Hernández, H.; Vázquez-Gómez, O.; Gutiérrez-Gnecchi, J.; Prieto-Sánchez, O.D. Improving Selectivity on High-Temperature Decarburization Depth Measurements using an Image Segmentation Method. Oxid. Met. 2022, 98, 121–134. [Google Scholar] [CrossRef]
  5. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition CVPR, Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 779–788. [Google Scholar]
  6. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
  7. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  8. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  9. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
  10. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  11. Lee, J.C.; Hsu, H.H.; Liu, S.C.; Chen, C.H.; Huang, H.C. Fast Image Classification for Grain Size Determination. Metals 2021, 11, 1547. [Google Scholar] [CrossRef]
  12. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  13. Metal Industries Research and Development Centre (MIRDC). Available online: https://www.mirdc.org.tw/English (accessed on 1 January 2020).
  14. Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
  15. Sun, C.; Shrivastava, A.; Singh, S.; Gupta, A. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. In Proceedings of the ICCV, Venice, Italy, 22–29 October 2017; pp. 843–852. [Google Scholar]
Figure 1. The decarburization layer detector network architecture.
Figure 2. The re-training strategy of DLD models.
Figure 3. The red boxes show the poor results, and the yellow boxes show the better recognition results.
Figure 4. Zeiss Axiovert 200 Mat optical microscope.
Figure 5. The decarburization images of the eight different classes. Panels (a–h) show, respectively: (a) SAE 9254 (518); (b) S45C (497); (c) SAE 8620H (730); (d) SCM418H (420); (e) S53C (554); (f) S55C (503); (g) 5120H (519); (h) 4320H (468) steels.
Figure 6. The GUI of DLD system.
Figure 7. The decarburization images with poor performance. The first row shows images of S55C steel, the second row shows images of 5120H steel, and the third row shows images of 4320H steel.
Table 1. The framework of the decarburization layer detector model.

Layer | Operation Type | Input | Filter | Size/Stride | Output | Block
0 | Convolution | 800 × 544 × 3 | 32 | 3 × 3/1 | 800 × 544 × 32 |
1 | Convolution | 800 × 544 × 32 | 64 | 3 × 3/2 | 400 × 272 × 64 |
2 | Convolution | 400 × 272 × 64 | 32 | 1 × 1/1 | 400 × 272 × 32 | ResNet
3 | Convolution | 400 × 272 × 32 | 64 | 3 × 3/1 | 400 × 272 × 64 |
4 | Shortcut | 400 × 272 × 64 | - | - | 400 × 272 × 64 |
5 | Convolution | 400 × 272 × 64 | 128 | 3 × 3/2 | 200 × 136 × 128 |
6 | Convolution | 200 × 136 × 128 | 64 | 1 × 1/1 | 200 × 136 × 64 | ResNet
7 | Convolution | 200 × 136 × 64 | 128 | 3 × 3/1 | 200 × 136 × 128 |
8 | Shortcut | 200 × 136 × 128 | - | - | 200 × 136 × 128 |
9 | Convolution | 200 × 136 × 128 | 64 | 1 × 1/1 | 200 × 136 × 64 | ResNet
10 | Convolution | 200 × 136 × 64 | 128 | 3 × 3/1 | 200 × 136 × 128 |
11 | Shortcut | 200 × 136 × 128 | - | - | 200 × 136 × 128 |
12 | Max pooling | 200 × 136 × 128 | - | 2 × 2/2 | 100 × 68 × 128 |
13 | Convolution | 100 × 68 × 128 | 128 | 3 × 3/1 | 100 × 68 × 128 | CSPNet
14 | Route | 13 | - | - | 100 × 68 × 64 |
15 | Convolution | 100 × 68 × 64 | 64 | 3 × 3/1 | 100 × 68 × 64 |
16 | Convolution | 100 × 68 × 64 | 64 | 3 × 3/1 | 100 × 68 × 64 |
17 | Concatenation | 15, 16 | - | - | 100 × 68 × 128 |
18 | Convolution | 100 × 68 × 128 | 128 | 1 × 1/1 | 100 × 68 × 128 |
19 | Concatenation | 13, 18 | - | - | 100 × 68 × 256 |
20 | Max pooling | 100 × 68 × 256 | - | 2 × 2/2 | 50 × 34 × 256 |
21 | Convolution | 50 × 34 × 256 | 256 | 3 × 3/1 | 50 × 34 × 256 | CSPNet
22 | Route | 21 | - | - | 50 × 34 × 128 |
23 | Convolution | 50 × 34 × 128 | 128 | 3 × 3/1 | 50 × 34 × 128 |
24 | Convolution | 50 × 34 × 128 | 128 | 3 × 3/1 | 50 × 34 × 128 |
25 | Concatenation | 23, 24 | - | - | 50 × 34 × 256 |
26 | Convolution | 50 × 34 × 256 | 256 | 1 × 1/1 | 50 × 34 × 256 |
27 | Concatenation | 21, 26 | - | - | 50 × 34 × 512 |
28 | Max pooling | 50 × 34 × 512 | - | 2 × 2/2 | 25 × 17 × 512 |
29 | Convolution | 25 × 17 × 512 | 512 | 3 × 3/1 | 25 × 17 × 512 | CSPNet
30 | Route | 29 | - | - | 25 × 17 × 256 |
31 | Convolution | 25 × 17 × 256 | 256 | 3 × 3/1 | 25 × 17 × 256 |
32 | Convolution | 25 × 17 × 256 | 256 | 3 × 3/1 | 25 × 17 × 256 |
33 | Concatenation | 31, 32 | - | - | 25 × 17 × 512 |
34 | Convolution | 25 × 17 × 512 | 512 | 1 × 1/1 | 25 × 17 × 512 |
35 | Concatenation | 29, 34 | - | - | 25 × 17 × 1024 |
36 | Convolution | 25 × 17 × 1024 | 512 | 1 × 1/1 | 25 × 17 × 512 |
37 | Convolution | 25 × 17 × 512 | 512 | 3 × 3/1 | 25 × 17 × 512 |
38 | Convolution | 25 × 17 × 512 | 256 | 1 × 1/1 | 25 × 17 × 256 |
39 | Convolution | 25 × 17 × 256 | 512 | 3 × 3/1 | 25 × 17 × 512 |
40 | Convolution | 25 × 17 × 512 | 18 | 1 × 1/1 | 25 × 17 × 18 |
41 | Large object detection | - | - | - | - |
42 | Route | 38 | - | - | 25 × 17 × 256 |
43 | Convolution | 25 × 17 × 256 | 128 | 1 × 1/1 | 25 × 17 × 128 |
44 | 2 × upsample | 25 × 17 × 128 | - | - | 50 × 34 × 128 |
45 | Route | 44, 26 | - | - | 50 × 34 × 384 |
46 | Convolution | 50 × 34 × 384 | 256 | 3 × 3/1 | 50 × 34 × 256 |
47 | Convolution | 50 × 34 × 256 | 18 | 1 × 1/1 | 50 × 34 × 18 |
48 | Medium object detection | - | - | - | - |
Table 2. The detection results of eight different decarburization classes.

Steel Class | 416 × 288 | 608 × 416 | 800 × 544
SAE 9254 | 85.61% | 88.04% | 91.78%
S45C | 84.79% | 90.39% | 92.17%
SAE 8620H | 95.53% | 99.08% | 99.45%
SCM418H | 93.55% | 95.48% | 98.12%
S53C | 90.78% | 93.72% | 96.88%
S55C | 84.35% | 88.19% | 91.31%
5120H | 78.54% | 82.78% | 86.94%
4320H | 81.47% | 85.61% | 87.12%
Table 3. The detection results of different methods.

Method | mAP | Precision | Recall | F1-Score | FPS
YOLOv7 | 95.05% | 0.96 | 0.93 | 0.95 | 31
YOLOv7-tiny | 88.46% | 0.84 | 0.89 | 0.87 | 85
YOLOv6 | 92.97% | 0.91 | 0.93 | 0.92 | 55
YOLOv6-tiny | 88.14% | 0.85 | 0.88 | 0.87 | 62
YOLOv4 | 91.51% | 0.92 | 0.92 | 0.91 | 22
YOLOv4-tiny | 86.12% | 0.87 | 0.84 | 0.85 | 67
Scaled-YOLOv4 | 93.64% | 0.96 | 0.91 | 0.93 | 37
Proposed method | 92.78% | 0.97 | 0.98 | 0.97 | 78
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
