Article

An Automated Particle Size Analysis Method for SEM Images of Powder Coating Particles

1 School of Civil and Architecture Engineering, Changzhou Institute of Technology, 666 Liaohe Road, Changzhou 213032, China
2 School of Health Science and Engineering, University of Shanghai for Science and Technology, 516 Jungong Road, Shanghai 200093, China
* Author to whom correspondence should be addressed.
Coatings 2023, 13(9), 1547; https://doi.org/10.3390/coatings13091547
Submission received: 9 July 2023 / Revised: 16 August 2023 / Accepted: 29 August 2023 / Published: 4 September 2023

Abstract
The particle size of powder coatings is an important factor affecting flow and fluidization performance, which in turn determines the quality of the coating. Powder coating particles, such as boron nitride, titanium dioxide, barium titanate, niobium oxide, and tungsten oxide, appear in SEM images as circular or polygonal shapes with irregular edges and sizes, often aggregated and stacked on one another. This paper introduces a novel method for automatic particle identification and size analysis in SEM images for the investigation of coating quality. First, a fast gradient segmentation algorithm (Split–Merge) is used to automatically generate samples by identifying adhered particles and particle groups. The samples are then manually checked and corrected to further segment the particles and form the dataset. Finally, YOLOv5, a mature target detection algorithm, is trained on the labeled data to produce a multi-target detection and recognition model. The model can then be applied to further images of the same substance, and each particle's long and short axes are estimated from the ellipse inscribed in its detected bounding box. Experimental results on four kinds of powders suggest that this method can provide automatic particle size analysis for industrial SEM particle images.

1. Introduction

Powder coating is a 100% solid coating in the form of dry powder particles. It is transferred to the object to be coated in powder form and is then baked, melted, and solidified to form a film coating. In manufacturing and coating processes, the median or average particle size of a powder coating can be used to control coating thickness, making particle size a key quantity to manage.
Currently, powder coatings are the most rapidly expanding category within the global market for coating products. In addition to their notable environmental advantages, these products possess distinctive characteristics such as exceptional coating performance, high production efficiency, and evident economic benefits. Among the critical factors that significantly affect the quality of powder coatings during the spraying process are the flow and fluidization characteristics of the coating particles. Numerous variables influence powder flow and the efficacy of fluidization, including sphericity and density; among these characteristics, the average particle size holds substantial importance [1].
A powder sample typically comprises a large number of particles of varying sizes, and its particle size distribution describes the proportion of particles of each size within the assemblage. The percentage of particles of each size relative to the total amount is called the frequency distribution. There are several methods for particle size analysis, which may be grouped by measurement principle: microscopy (optical or scanning electron) with image analysis, the laser particle size analyzer method, the sieving method, the resistance method, and the sedimentation method.
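The frequency distribution mentioned above is simply the binned percentage of particles per size interval. A minimal numpy sketch (the sizes and bin edges are illustrative values, not data from this paper):

```python
import numpy as np

def frequency_distribution(sizes_um, bin_edges_um):
    """Percentage of particles falling in each size bin."""
    counts, _ = np.histogram(sizes_um, bins=bin_edges_um)
    return 100.0 * counts / counts.sum()

sizes = [1.2, 1.8, 2.5, 2.7, 3.1, 4.0, 4.4, 5.9]    # illustrative sizes in um
freq = frequency_distribution(sizes, [0, 2, 4, 6])  # bins 0-2, 2-4, 4-6 um
```

By construction the bin percentages sum to 100, which is the defining property of a frequency distribution.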
(1)
Microscopy and Image Analysis
Among the aforementioned approaches, microscopy stands as the sole test method capable of directly acquiring the absolute size and shape of particles. The use of scanning electron microscopy (SEM) allows the lower limit of measurement to be reduced to 10 nm, significantly broadening the application range of microscopy. Microscopy proves to be highly advantageous in research endeavors that necessitate the acquisition of particle morphology and precise particle dimensions [2,3].
(2)
Laser Diffraction and Static Scattering Techniques
Laser methods are commonly used for particle size measurement, including laser diffraction and static scattering techniques [4,5,6]. Laser diffraction and static scattering analysis is currently recognized as a quick analytical approach with good accuracy. The experimental results exhibit a notable degree of consistency and effectiveness, enabling rapid acquisition of particle volume distribution data, and the average particle diameter may be determined by the laser diffraction technique. However, these techniques ignore the inherent shape features of particles, and the measurement error is large for samples only a few micrometers in size.
(3)
Sieving Method
Sieving is a traditional method for measuring particle size distribution, in which standard sieves of different aperture sizes separate particles into grade levels. It is widely recognized as a prominent and enduring technique for particle size analysis, frequently employed with ultrasonic and vibratory apparatus [7]. In theory, vibrated particles pass through any sieve whose aperture exceeds their size. When particles pass through a coarse sieve but are retained by a fine one, the range between the two aperture sizes is referred to as the sieve diameter. Sieving is time-consuming, susceptible to human influence, and affected by wear of the sieves, resulting in diminished reproducibility. The method also has a limited range of applicability, and its accuracy is reduced for particles with elevated moisture content or a tendency to agglomerate. Although simple and fast to operate, it requires considerable human labor and suffers from low accuracy.
Other analytical techniques are widely used in specific fields despite the limitations of their test principles or usage conditions. Sedimentation methods measure particle size distribution based on the different sinking speeds of particles of different sizes in liquids [8]; however, this method is not only inaccurate but also susceptible to the influence of human factors. The aforementioned methods thus suffer from low accuracy or high human cost. The small-angle X-ray diffraction method is suitable for accurate measurement of nano-scale particles, but sample preparation is relatively complicated, making it suitable primarily for research fields such as the life sciences. Thomas et al. studied tire and road wear particles (TRWPs): particles collected from different areas were subjected to infrared spectroscopy (IR), thermogravimetric analysis (TGA), and scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy (SEM-EDX) to confirm the presence of TRWPs [9]. For the same particle sample, particle size analysis results may vary with the measurement method used.
Currently, with the development of artificial intelligence technology, image analysis methods based on deep learning have attracted extensive attention due to their intuitive and precise nature, with convolutional neural networks at the forefront of this development. Rapid advances in image acquisition devices and technologies have steadily increased image resolution, and growth in computer performance has made it feasible to acquire and process huge amounts of data, providing the prerequisites for applying artificial intelligence. Trained on large numbers of images, deep convolutional neural networks mimic the learning process of the human brain, enabling algorithms to perform tasks that previously required human judgment. At present, image acquisition devices such as optical microscopes [10,11] and scanning electron microscopes (SEM) [12,13,14] are commonly used for fine particle image capture and can meet the requirements for particle observation at various scales. With the rapid development of computer vision [15,16,17,18], particle detection and analysis based on deep learning image processing methods [19,20] have emerged as a new research hotspot. Image-based particle size analysis holds significant practical value: it can improve the efficiency of process evaluation of particles while substantially reducing the workload of traditional particle size analysis by manual sieving.
In the problem described in this paper, the primary task of particle size analysis is considered complete once the target is detected. The system used for analyzing SEM images of circular particle targets encounters various challenges, such as different types of particle targets requiring segmentation, and significant variations in particle sizes. Some particle targets have ideal sizes, but due to production processes, collection noise, and occlusion, irregular size phenomena are observed in the image. Furthermore, the edges of particle shapes are not smooth or regular, and particle targets often exhibit stacking phenomena, where they adhere to or overlap with one another. Overlapping particle targets may exhibit different texture and grayscale features at the overlap, while in other cases, they may have similar texture and grayscale features. Traditional object recognition algorithms are insufficient for accurately segmenting various overlapping particle targets. Therefore, it is necessary to research and develop an algorithm specifically designed for recognizing overlapped circular particle targets.
In this paper, the target particles in SEM images are calibrated to generate samples, resulting in the formation of a dataset. The labeled data is then trained by the mature YOLOv5 [21,22] target detection algorithm to obtain a target detection recognition model. This model is subsequently applied to judge and identify other images of the same substance, while detecting the target, calculating the particle size of the detected target, and visualizing the distribution of particle size.

2. Materials and Method

2.1. Image Acquisition

The dataset used in this paper was obtained through cooperation with a coating company; the images were taken and evaluated by the company's quality experts using an electron microscope. The dataset is formatted for the YOLOv5 algorithm, and its images are of fine particles captured with a scanning electron microscope (JSM-IT800SHL, JEOL, Tokyo, Japan). The image format is BMP, as shown in Figure 1.
During creation of the dataset, labels for the particles in the images were produced with the 2019 version of DrawCli. Before labeling an image, a text file with the same name as that image is created in the image's folder; the labeling results are saved in this text sample file.
Since there is only one type of target particle in an SEM image and the main task is to detect the particle size of the target particles rather than to classify objects, this paper labels only three different SEM images of the same type of particle. Each image requires labeling only 20 to 40 target particles; these three images are then augmented to generate 120 samples. Each sample and its corresponding label file share a unique base name, forming a dataset of 120 samples.

2.2. Network Structure

YOLO is one of the top-performing models for object detection. It is a popular tool for developers, students, and professionals working in artificial intelligence and computer vision, in both industry and academia, who use it to build applications for various scenarios. The YOLO algorithm offers fast and accurate detection, making real-time object detection possible. The network structure of YOLOv5 consists mainly of a Backbone (New CSP-Darknet53), a Neck (SPPF and New CSP-PAN), and a Head (YOLOv3 head). YOLOv5 replaces the focus module in the network's first layer with a standard 6 × 6 convolutional layer, offering the same functionality with greater efficiency. In the Neck, spatial pyramid pooling (SPP) is replaced with SPPF, which is more efficient while serving the same function. SPP addresses multi-scale objects, to some extent, by passing the input in parallel through several max pooling layers of different kernel sizes and merging the results. SPPF, whose structure is shown in Figure 2, instead passes the input serially through multiple 5 × 5 max pooling layers. Experiments have shown that two consecutive 5 × 5 max pooling layers produce the same result as a single 9 × 9 layer, and three consecutive 5 × 5 layers match a 13 × 13 layer. Although SPPF and SPP produce identical results, SPPF is more than twice as fast. Another modification in the Neck is the CSP structure introduced in YOLOv5 relative to YOLOv4; each C3 module contains a CSP structure.
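The claimed equivalence between serial 5 × 5 pools and single larger pools can be checked numerically. Below is a 1-D numpy sketch of stride-1 max pooling (the pooling in YOLOv5 is 2-D, but the equivalence argument is identical along each axis; `max_pool1d` is a hypothetical helper, not YOLOv5 code):

```python
import numpy as np

def max_pool1d(x, k):
    """Stride-1 max pooling with 'same' output size (padding value -inf)."""
    pad = k // 2
    xp = np.pad(x, pad, constant_values=-np.inf)
    return np.array([xp[i:i + k].max() for i in range(len(x))])

x = np.random.default_rng(0).random(32)
two_5s = max_pool1d(max_pool1d(x, 5), 5)    # SPPF: two serial 5-wide pools
three_5s = max_pool1d(two_5s, 5)            # SPPF: three serial 5-wide pools
# SPP would instead apply single 9-wide and 13-wide pools in parallel branches
same_as_9 = np.allclose(two_5s, max_pool1d(x, 9))
same_as_13 = np.allclose(three_5s, max_pool1d(x, 13))
```

The serial form recomputes nothing larger than a 5-wide window, which is why SPPF is faster despite producing the same output.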

2.3. Algorithm Process

The process begins with high-resolution pre-training using the ImageNet dataset to establish a classification network. This network is then fine-tuned through a second training process, utilizing a predefined particle database to train the detection network model. The initial parameters for this process are provided by the pre-trained convolutional neural network model and the YOLO-vocRV model. During training, data augmentation techniques such as random scaling and adjustments to saturation and exposure are employed to enhance the diversity of the samples. The entire image is subsequently processed through a neural network all at once, where the network segments the image into different regions, providing bounding box predictions and associated probabilities for each region, and assigning weights to all bounding boxes based on their likelihood. Finally, a threshold of 25% is set to output detection results with confidence scores exceeding this threshold. The training process is depicted in Figure 3.
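The final thresholding step described above can be sketched as a simple filter over raw detections (the tuple layout and values are illustrative, not the actual YOLOv5 output format):

```python
def filter_detections(detections, conf_threshold=0.25):
    """Keep only boxes whose confidence score exceeds the threshold.
    Each detection is (x, y, w, h, confidence)."""
    return [d for d in detections if d[4] > conf_threshold]

dets = [(10, 10, 5, 5, 0.90), (40, 12, 6, 6, 0.10), (70, 30, 5, 4, 0.30)]
kept = filter_detections(dets)  # drops the 0.10-confidence box
```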
Because the object typically occupies a small proportion of the grid cells in an image sample, the gradient from cells containing no object can dominate the gradient from cells containing one. This paper therefore introduces weights to balance confidence, setting different scaling factors for bounding boxes based on the presence or absence of the target: a larger weight of 5 is assigned to the coordinate terms at the front of the loss function, a smaller weight of 0.5 to bounding boxes without the target, and a normal weight of 1 to those with the target.
During training, the mean squared error is used as the loss function, but this can cause candidate boxes to be too large. This paper therefore regresses the square root of the width and height, weakening the influence of large bounding boxes and ensuring that each target and background region correspond across the predictions of multiple bounding boxes. The size, proportion, and target type of each bounding box are all taken into account in the loss function. The final loss is the sum of five items: the prediction loss of the center coordinates (tx, ty) of the bounding box (1st item); the prediction loss of its width and height (tw, th) (2nd item); the overlap error (3rd and 4th items), i.e., the confidence-score losses for boxes containing and not containing the target; and the classification error (5th item). During training, only one bounding box is expected to correspond to each target, so the intersection over union (IoU) between each bounding box and the ground truth is calculated; the best bounding box is selected as responsible for the target, while the others are treated as not detecting it. Bounding box regression and category regression are performed only when the network detects the target; when no target is present, only the confidence is regressed.
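Two ingredients of this loss can be made concrete: the IoU used to select the responsible bounding box, and the square-root regression that damps the penalty on large boxes. A minimal sketch (corner-format `(x1, y1, x2, y2)` boxes are an assumption for clarity; YOLO internally predicts center/size offsets):

```python
import math

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # partial overlap

# Square-root regression: the same absolute width error is penalized
# less on a large box than on a small one.
small_box_err = (math.sqrt(12) - math.sqrt(10)) ** 2
large_box_err = (math.sqrt(102) - math.sqrt(100)) ** 2
```

Here `large_box_err < small_box_err` even though both width errors are 2 pixels, which is exactly the damping effect the square root is meant to provide.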
To verify the effectiveness of the proposed method, test samples were collected by capturing titanium dioxide particles at image resolutions of 300 × 225 pixels and 550 × 448 pixels. Training the deep convolutional network in the Darknet framework for particle recognition requires a huge amount of computation, so the experiments were conducted on a deep learning workstation server. During detection, the trained model can be called directly, so an ordinary computer configuration is sufficient and no high-end devices are needed. Test samples were taken from raw SEM images without preprocessing, with the beam current set consistent with existing usage patterns. Since a detection method based on convolutional neural networks must learn particle features from a large number of samples, good features are difficult to obtain if the sample set is unrepresentative. To ensure data diversity, the same target region was imaged at different time periods so that samples from each period included comparable particle types and appearances. Additionally, three sets of data samples covering different currents and particle densities were obtained, as shown in Figure 4. The original sample data were labeled during the experiment, and the images for training and validation in each period were randomly split at a ratio of 80% to 20%. Beyond comparing detection methods, these experiments also allowed detailed comparison and analysis of results under different particle density conditions, based on training samples with different particle densities.

3. Experimental Results and Analysis

The detection results of the established models of the four materials were achieved by adjusting the input image size to 320 × 320 pixels for detection. In other words, regardless of the original size of the SEM images, they were resized to 320 × 320 pixels for detection. The detection results of four kinds of powders are shown in Figure 4.
The visualization presented above displays the prediction results on the testing set. As illustrated in Figure 4, the bounding boxes predicted by the YOLOv5 algorithm fit the edges of the particles accurately, giving excellent prediction accuracy within a certain tolerance range. This study employs the deep learning algorithm (YOLOv5) to detect particle size and obtain the corresponding data. To accomplish this, we multiply the bounding box's pixel width and height by the physical length each pixel represents, which yields the particle size. The calculated particle size data for four SEM images are shown in Figure 5.
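This pixel-to-size conversion amounts to scaling the box's pixel dimensions by the calibrated pixel pitch of the SEM image. A minimal sketch (the 0.05 µm-per-pixel scale is an illustrative value, not the instrument's actual calibration):

```python
def particle_size_um(box_w_px, box_h_px, um_per_px):
    """Convert a detection box's pixel dimensions to physical long/short axes."""
    w, h = box_w_px * um_per_px, box_h_px * um_per_px
    return max(w, h), min(w, h)  # (long axis, short axis) in micrometers

# e.g. a 48 x 36 px box at an assumed scale of 0.05 um per pixel
long_ax, short_ax = particle_size_um(48, 36, 0.05)
```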
The four scatter plots above clearly depict the comparison between the particle size information detected automatically by the model and the particle size information calculated by manual identification for four kinds of powders. Each figure contains two sets of data: one is the particle size data automatically detected by the model, and the other represents the particle size data calculated by manual detection. The x-axis signifies the width of the particles, while the y-axis indicates the height of the particles, both in micrometers. As can be seen from Figure 5, the particle size data that are automatically detected by the model and those that are manually detected are essentially similar within a certain error tolerance.
Notably, there is a slight discrepancy between the number of manually detected particles and the number detected by the model, indicating a difference in the recall rate of the two methods. Neither manual inspection nor model inspection identifies every particle, but this has no effect on the final result. These variations do not affect acceptance of the model's performance, because the goal of the measurement is the overall particle size information required by most industrial tasks; the omission of a few individual particles can be tolerated. As shown in Figure 6, a particle size histogram is obtained by analyzing one SEM picture. In industrial applications, the program can process eight, sixteen, or more pictures of the same sample and output a particle size histogram that fully reflects the sample's particle size distribution. Experimental results show that the method achieves an average accuracy of over 97% across different particle densities. Given this performance, the model can replace manual calculation of particle size, effectively reducing the time and cost of manual detection and calculation.
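Pooling per-particle sizes from several images into a histogram and a median, as described above, can be sketched as follows (`size_summary` is a hypothetical helper, not the authors' code):

```python
import numpy as np

def size_summary(sizes_um, n_bins=10):
    """Median and histogram of particle sizes pooled from many images."""
    sizes = np.asarray(sizes_um, dtype=float)
    counts, edges = np.histogram(sizes, bins=n_bins)
    return float(np.median(sizes)), counts, edges

# sizes pooled from multiple detections (illustrative values, in um)
median, counts, edges = size_summary([1, 2, 3, 4, 5])
```

The median reported here is the quantity the introduction identifies as useful for controlling coating thickness.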

4. Discussion and Conclusions

It is proposed that the YOLO algorithm be utilized in an SEM image particle detection system, thereby acquiring microscopic insights into the particle targets and enabling real-time object detection on SEM images. Experimental results showed that the proposed method attained an average precision of over 97% for varied particle densities, representing a significant enhancement in both accuracy and efficiency compared with traditional machine learning. Training sets with varying particle densities were employed, and the established model was tested and analyzed on a variety of testing sets with different particle densities. This led to system improvements that achieve superior detection results with lower recall losses while maintaining good accuracy, and that facilitate multi-target detection across different particle densities.

Author Contributions

Conceptualization, C.L. and Z.J.; Methodology, C.L. and Z.J.; Software, Z.J.; Investigation, Z.J.; Resources, R.C.; Data curation, R.C.; Writing—original draft, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC China 42004105).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. Due to the requirements of a commercial company that provides samples, the data cannot be publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Claase, M.B.; Vercoulen, P.; Misev, T.A. Powder coatings and the effects of particle size. In Particulate Products: Tailoring Properties for Optimal Performance; Springer International Publishing: Cham, Switzerland, 2013; pp. 371–404. [Google Scholar]
  2. Shekunov, B.Y.; Chattopadhyay, P.; Tong, H.H.Y.; Chow, A.H.L. Particle size analysis in pharmaceutics: Principles, methods and applications. Pharm. Res. 2006, 24, 203–227. [Google Scholar] [CrossRef] [PubMed]
  3. Nyathi, M.S.; Mastalerz, M.; Kruse, R. Influence of coke particle size on pore structural determination by optical microscopy. Int. J. Coal Geol. 2013, 118, 8–14. [Google Scholar] [CrossRef]
  4. Corcoran, T.E.; Hitron, R.; Humphrey, W.; Chigier, N. Optical measurement of nebulizer sprays: A quantitative comparison of diffraction, phase doppler interferometry, and time of flight techniques. J. Aerosol Sci. 2000, 31, 35–50. [Google Scholar] [CrossRef]
  5. Nweke, M.C.; Mccartney, R.G.; Bracewell, D.G. Mechanical characterisation of agarose-based chromatography resins for biopharmaceutical manufacture. J. Chromatogr. A 2017, 1530, 129–137. [Google Scholar] [CrossRef] [PubMed]
  6. Tan, J.H.; Wong, W.L.E.; Dalgarno, K.W. An overview of powder granulometry on feedstock and part performance in the selective laser melting process. Addit. Manuf. 2017, 18, 228–255. [Google Scholar] [CrossRef]
  7. Santomaso, A.C.; Carazzai, E.; Volpato, S. Particle Size Characterization Through Bed Permeability Tests. Chem. Eng. Trans. 2023, 100, 139–144. [Google Scholar]
  8. Wuddivira, M.N.; Bramble, D.S.E.; Ramlochan, A.; Gouveia, G.A.; Francis, R.C.; De Caires, S.A. Effects of sample mass and suspension salinity on particle size distribution in predominantly medium to heavy textured soils using the hydrometer method. J. Plant Nutr. Soil Sci. 2023. early view. [Google Scholar] [CrossRef]
  9. Thomas, J.; Moosavian, S.K.; Cutright, T.; Pugh, C.; Soucek, M.D. Method development for separation and analysis of tire and road wear particles from roadside soil samples. Environ. Sci. Technol. 2022, 56, 11910–11921. [Google Scholar] [CrossRef] [PubMed]
  10. Al-Thyabat, S.; Miles, N.J. An improved estimation of size distribution from particle profile measurements. Powder Technol. 2006, 166, 152–160. [Google Scholar] [CrossRef]
  11. Ulusoy, U.; Igathinathane, C. Dynamic image based shape analysis of hard and lignite coal particles ground by laboratory ball and gyro mills. Fuel Process. Technol. 2014, 126, 350–358. [Google Scholar] [CrossRef]
  12. Ulusoy, U.; Igathinathane, C. Particle size distribution modeling of milled coals by dynamic image analysis and mechanical sieving. Fuel Process. Technol. 2016, 143, 100–109. [Google Scholar] [CrossRef]
  13. Maiti, A.; Chakravarty, D.; Biswas, K.; Halder, A. Development of a mass model in estimating weight-wise particle size distribution using digital image processing. Int. J. Min. Sci. Technol. 2017, 27, 435–443. [Google Scholar] [CrossRef]
  14. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  15. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar] [CrossRef]
  17. Wang, C.Y.; Liao, H.Y.M.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 390–391. [Google Scholar]
  18. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  19. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
  20. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  21. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  22. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
Figure 1. SEM image.
Figure 2. YOLOv5 network structure.
Figure 3. Training process.
Figure 4. SEM images of four kinds of powders. (a) PIV, (b) titanium dioxide, (c) aluminum oxide, (d) stainless steel powder.
Figure 5. Comparison between manual calculations and model detection of four kinds of powders. (a) PIV, (b) titanium dioxide, (c) aluminum oxide, (d) stainless steel powder.
Figure 6. The histogram of four kinds of powders. (a) PIV, (b) titanium dioxide, (c) aluminum oxide, (d) stainless steel powder.

Citation: Liang, C.; Jia, Z.; Chen, R. An Automated Particle Size Analysis Method for SEM Images of Powder Coating Particles. Coatings 2023, 13, 1547. https://doi.org/10.3390/coatings13091547