Article

Intelligent Alignment Monitoring Method for Tortilla Processing Based on Machine Vision

School of Mechanical Engineering, Anhui Science and Technology University, 9 Donghua East Road, Fengyang 233100, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(4), 2407; https://doi.org/10.3390/app13042407
Submission received: 31 December 2022 / Revised: 8 February 2023 / Accepted: 9 February 2023 / Published: 13 February 2023
(This article belongs to the Special Issue Opinion Mining and Sentiment Analysis Using Deep Neural Network)

Abstract
As people pay increasing attention to a healthy diet, eating more coarse grains has become a consensus. Developing the edible value of corn is of great significance for a healthy human diet and has attracted the attention of many scholars and food processing companies. However, because corn flour differs from wheat flour in protein composition and structure, it is difficult for corn dough to form a gluten network during processing, and its viscoelasticity and extensibility are poor. Accordingly, this paper proposes a machine vision-based method for monitoring noodle alignment during noodle processing. First, images are captured by a binocular camera and preprocessed. Feature detection and matching algorithms are then used to recover the pose relationship between the two cameras, and the recognition targets are matched. Finally, noodle alignment is monitored throughout the processing line. Experiments show that the detection accuracy of the proposed method is much higher than that of traditional manual inspection, which can improve noodle quality and reduce labor costs.

1. Introduction

Noodles are a traditional staple of Chinese food and are easy to make and eat. Traditional noodles are mainly made from wheat flour [1]. In recent years, however, increasingly refined wheat flour processing has caused greater nutritional losses; long-term consumption can lead to nutritional imbalance and is harmful to health. As people pay more attention to a healthy diet, eating more coarse grains has become a consensus. Corn is one of the most productive and nutritious coarse grains, and developing its edible value is of great significance for a healthy human diet, attracting the attention of many scholars and food processing companies. At present, corn flour has become a research hotspot in corn staple food processing. However, because corn flour differs from wheat flour in protein composition and structure, it is difficult for corn dough to form a gluten network during processing, and its viscoelasticity and extensibility are poor. As a result, the dough quality is poor and the texture is coarse, making it unsuitable for noodle making and greatly limiting the application of corn flour in tortilla processing.
To overcome the lack of gluten protein in corn flour and improve its suitability for noodle products, corn flour is often pretreated. Commonly used treatments include ultrafine grinding, gelatinization, bio-enzyme modification, and extrusion puffing of the raw material. Bai et al. [2] performed ultrafine grinding of corn to study the effect of corn flour fineness on the quality of corn noodles. The results showed that fineness had a great influence on noodle quality: as the fineness of the corn flour increased, it became more delicate, its breakage rate decreased significantly, and its sensory score rose sharply. Particle size also affected the physicochemical properties of the corn flour. As the average particle size decreased, the amylose content first decreased slightly and then stabilized [3], and noodles made from corn flour with less amylose were of better quality [4]. Li et al. [5] investigated the application of gelatinization in the development of corn noodles by pregelatinizing the corn flour; the results showed that gelatinization could improve the quality of corn noodles. Hou et al. [6] used yeast to ferment a corn-wheat flour mixture with an appropriate amount of gluten to produce corn-fermented noodles with good palatability, elasticity, and toughness, a moderately strong fermented flavor, and a strong corn flavor.
Even after such raw-material improvements, misaligned noodles during processing can break, and in serious cases block or burn out the machine or even start a factory fire, greatly reducing processing efficiency. However, few studies have addressed this problem. Therefore, we use machine vision to monitor whether the noodles are aligned during processing. The structure of the monitoring system is shown in Figure 1.
As shown in Figure 1, our detection system uses two cameras mounted above the noodle processing line. By collecting the position of each noodle and processing the data, we reconstruct the noodles in 3D. After reconstruction, we match the result against the aligned reference using our alignment monitoring algorithm to check whether each 3D angle is the same; finally, if the noodles are not aligned, we issue a warning.
Our contributions are as follows:
  • We designed a binocular vision-based vision monitoring system that can easily monitor the alignment of noodles during the noodle processing;
  • We use a binocular vision camera and insert an SVM classifier into the subsequent algorithm, a stable and inexpensive architecture well suited to noodle production.
The experiments prove that our method is much better than traditional manual inspection in terms of both accuracy and time efficiency.

2. Related Work

2.1. Coarse Grain Noodles and Maize Utilization

With the improvement of people’s lives in urban and rural areas, their dietary habits are also changing. Balanced nutrition has become a way of life. Coarse grain noodles with nutritional and health benefits are becoming more and more popular. Grains and cereals are rich in essential vitamins, minerals, and other nutrients and have a comprehensive amino acid composition. They have health benefits, such as enhancing human physiological functions and promoting metabolism. Consumed together with wheat flour, they are beneficial for balancing nutrition and delaying aging [7].
Maize is one of the three major food crops in the world and is mainly used as a food ingredient, a feed ingredient, and a raw material for industrial production. As a dryland crop with high yield per unit area and high potential for yield increase, it plays an important role in agricultural production [8]. Currently, the United States ranks first in the world in corn production, with an annual output of more than 300 million tons, of which about 50 million tons are deeply processed, accounting for more than 15% of total production; most of the rest is processed into feed. Maize is widely distributed in China, with an annual production of more than 150 million tons, the second highest in the world, and it is one of the main foods in the mountainous regions of northern and southwestern China and other arid river valleys. However, the utilization of maize in China is low [9]: about 70% is used for feed processing, and maize is poorly utilized in industry. Compared with developed countries such as the United States, China's corn deep-processing industry still suffers from a small processing scale, scattered processing enterprises, and a short deep-processing industry chain [10]. Therefore, it is important to promote the return of maize to the staple food, improve its utilization rate and economic value, and accelerate the industrialization of maize [11].

2.2. Target Recognition Methods

In target recognition applications under complex backgrounds, it is difficult for traditional threshold segmentation to separate the target from the background because the difference between the target to be recognized and its surroundings is small [12]. Commonly used recognition methods fall mainly into two categories: one establishes a target template and matches the target directly through template matching; the other preprocesses the image to obtain the target features to be recognized and then performs secondary analysis within the feature neighborhood to extract the target. Alberto et al. [13] proposed a target recognition algorithm based on Hough voting. To address the sensitivity of traditional edge detection to noise, they proposed a gradient-direction-based edge detection method; based on the CAD model features of the target, voting detection is performed on selected edge points to identify targets with high edge integrity. Jong et al. [14] used template matching to identify and localize connecting rods and other workpieces. By identifying distinct geometric features in connecting rod parts and defining the relationships between them, they built an offline template library that enables real-time identification and localization of simply stacked workpieces. Buchholz et al. [15] used the RANSAM method to identify workpieces: point cloud images of stacked workpieces were obtained by combining structured-light scanning with a monocular camera, and the RANSAM algorithm matched the point cloud with a 3D model of the workpiece. The stacked workpieces are segmented and localized with high robustness to noise.
In recent years, machine learning has achieved good results in the field of target recognition. Szegedy et al. [16] used deep neural networks for target recognition and localization. Through multi-scale analysis and recognition of the image, the target is selected from the rectangular frame with the smallest area, thus locating the target accurately in the image. Lee et al. [17] used HOG feature descriptors to identify vehicles, traversing the whole image with sliding-window detection and applying machine learning for recognition. Lin et al. [18] proposed a morphology-based edge extraction operator that better resolves the conflict between noise suppression and image detail retention. The method uses multiple morphological structuring elements with different orientations and binarizes the target edges using the grayscale weighted average as the threshold; the final target is then recognized with a fuzzy integrated judging technique. Li et al. [19] recognized occluded workpieces using a binary feature-matching method that combines the advantages of the free model and the simple model. The initial set of feature points is first obtained by fixed sampling, and the final target matching is then completed by random sampling in its neighborhood.

2.3. SVM Classification Methods

A support vector machine (SVM) is a supervised machine learning algorithm. It is a statistical learning method based on structural risk minimization and VC dimension theory, and it accomplishes classification by constructing an optimal maximum-margin hyperplane in the sample space. Because the optimization goal of the SVM is to minimize structural risk, it places fewer demands on data size and distribution and achieves better classification results than other methods on small sample sets. It is a well-researched classification algorithm that has been applied in many fields. Daniele et al. [20] applied support vector machines to cell phone environment recognition, determining the environment by analyzing the sound signals captured by the phone. The collected data are clustered to reduce dimensionality, which addresses real-time classification and memory constraints and thus reduces the number of support vectors needed to form the optimal hyperplane. Compared with the commonly used hidden Markov models, support vector machines achieved more accurate recognition while consuming less memory. Christian et al. [21] used SVMs to recognize human behavior in videos, characterizing specific behaviors with local spatial features and histogram features. By classifying and learning from 2300 scenes, they obtained more accurate results than the nearest-neighbor algorithm.
Renuka [22] and Xu [23] applied support vector machines to email classification, built a database of spam samples, and analyzed their semantics; the combination of latent semantic indexing and a support vector machine can determine whether an email is spam. Although support vector machines are effective in many fields, choosing suitable classifier parameters requires user experience and extensive experimentation. Subasi et al. [24] used a particle swarm optimization algorithm to optimize the input parameters, solving the parameter selection problem of support vector machines in myoelectric signal classification; compared with traditional grid-search-style methods, this saves time and avoids local minima, and the resulting PSO-SVM classifier can accurately diagnose patients' diseases. Dong et al. [25] proposed an image retrieval technique based on multi-feature fusion, using DS theory to complete the weighted fusion of multiple features combined with an SVM-based semantic classification technique; the results showed that semantic classification outperforms traditional content classification in image retrieval. Li et al. [26] studied multi-class image classification using SVMs. The weight coefficients of individual features were obtained by normalizing the classification performance of different features in an image, combining individual classifiers into a complex classifier with a degree of adaptive capability and robustness.
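As a concrete illustration of the maximum-margin classification discussed above, the following minimal scikit-learn sketch fits a linear SVM on a toy two-cluster dataset; the data, labels, and parameter values are purely illustrative and are not taken from this paper.

```python
from sklearn import svm

# Two linearly separable toy clusters (illustrative data only).
X = [[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]]
y = [0, 0, 0, 1, 1, 1]

# SVC constructs the maximum-margin separating hyperplane.
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))  # one query point near each cluster
```

A point near the origin cluster is assigned class 0 and a point near the far cluster class 1, reflecting the learned separating hyperplane.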

3. Method

In this section, we first introduce the mathematical model of binocular vision 3D coordinate measurement; we then introduce the 3D deformation measurement algorithm used to monitor and identify the noodles.

3.1. Mathematical Analysis of Binocular Stereo Vision

Binocular vision 3D coordinate measurement is based on the principle of parallax [27]. As shown in Figure 2, assume that the imaging planes of the two cameras lie in the same plane, i.e., the optical axes of the two cameras are parallel. The two cameras simultaneously image the spatial object point $P$, whose coordinates in the left and right images are $p_{left} = (X_{left}, Y_{left})$ and $p_{right} = (X_{right}, Y_{right})$, and whose spatial coordinates in the left camera coordinate system are $P(x_c, y_c, z_c)$.
From trigonometric geometry, we have
$$
x_c = \frac{L\, X_{left}}{X_{left} - X_{right}}, \qquad
y_c = \frac{L\, Y}{X_{left} - X_{right}}, \qquad
z_c = \frac{L\, f}{X_{left} - X_{right}}
$$
where the baseline distance $L$ is the distance between the projection centers of the two cameras, $f$ is the focal length, and, since the two camera imaging planes are coplanar, $Y_{left} = Y_{right} = Y$.
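The parallax relations above translate directly into code. The sketch below is a minimal transcription for a rectified (parallel-axis) stereo pair; the function name and the numeric values in the comment are illustrative, not from the paper.

```python
def triangulate_parallel(x_left, y, x_right, baseline, focal):
    """Camera-frame 3D point from a rectified (parallel-axis) stereo pair.

    x_left, x_right, y: image coordinates in pixels (principal point at 0);
    baseline: distance L between the camera projection centers;
    focal: effective focal length f in pixels.
    """
    disparity = x_left - x_right
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    x_c = baseline * x_left / disparity
    y_c = baseline * y / disparity
    z_c = baseline * focal / disparity
    return x_c, y_c, z_c

# e.g. a 30 px disparity with a 0.1 m baseline and 600 px focal length
# gives depth z_c = 0.1 * 600 / 30 = 2.0 m.
```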
Building on the principle of flat binocular stereo 3D measurement, we now consider the more general case in which there is no constraint on the positions of the two cameras. As shown in Figure 3, let the left camera coordinate system $O_l x_l y_l z_l$ coincide with the world coordinate system $Oxyz$; the coordinates of the space point $P$ are $(x_p, y_p, z_p)$, and the left image coordinate system is $O_1 X_1 Y_1$; the right camera coordinate system is $O_r x_r y_r z_r$, and its image coordinate system is $O_r X_r Y_r$.
From the projective transformation, we have
$$
s_l\, p_l = M_l X_w, \qquad s_r\, p_r = M_r X_w
$$
where $p_l$ and $p_r$ are the image coordinates of the spatial object point in the left and right cameras, respectively, $M_l$ and $M_r$ are the projection matrices of the left and right cameras, respectively, and $X_w$ is the 3D coordinate of the spatial object point in the world coordinate system. The relationship between the $O_l x_l y_l z_l$ coordinate system and the $O_r x_r y_r z_r$ coordinate system is represented by the transformation matrix as
$$
\begin{bmatrix} x_r \\ y_r \\ z_r \end{bmatrix}
= M_{lr} \begin{bmatrix} x_l \\ y_l \\ z_l \\ 1 \end{bmatrix}
= \begin{bmatrix}
r_1 & r_2 & r_3 & t_x \\
r_4 & r_5 & r_6 & t_y \\
r_7 & r_8 & r_9 & t_z
\end{bmatrix}
\begin{bmatrix} x_l \\ y_l \\ z_l \\ 1 \end{bmatrix}
$$
By combining Equations (2) and (3), the 3D coordinates of the spatial object point $P$ can be obtained as follows, where $f_l$ and $f_r$ are the effective focal lengths of the left and right cameras, respectively:
$$
x_p = \frac{z_p X_l}{f_l}, \qquad
y_p = \frac{z_p Y_l}{f_l}, \qquad
z_p = \frac{f_l \left( f_r t_x - X_r t_z \right)}
{X_r \left( r_7 X_l + r_8 Y_l + f_l r_9 \right) - f_r \left( r_1 X_l + r_2 Y_l + f_l r_3 \right)}
$$
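The general-case reconstruction is a direct numeric transcription of these formulas. The sketch below is illustrative (the rotation is passed as rows $r_1 \ldots r_9$ and the translation as $(t_x, t_y, t_z)$), not the authors' implementation.

```python
def reconstruct_general(X_l, Y_l, X_r, f_l, f_r, R, T):
    """3D point (x_p, y_p, z_p) in the left-camera frame from matched image
    coordinates X_l, Y_l (left) and X_r (right), per the depth formula above.
    R is the 3x3 rotation and T = (t_x, t_y, t_z) the translation from the
    left to the right camera frame."""
    (r1, r2, r3), (r4, r5, r6), (r7, r8, r9) = R
    t_x, t_y, t_z = T
    z_p = (f_l * (f_r * t_x - X_r * t_z)
           / (X_r * (r7 * X_l + r8 * Y_l + f_l * r9)
              - f_r * (r1 * X_l + r2 * Y_l + f_l * r3)))
    return z_p * X_l / f_l, z_p * Y_l / f_l, z_p
```

With $R$ the identity and $T = (-L, 0, 0)$ this reduces to the parallel-camera parallax case, which serves as a convenient sanity check.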

3.2. Three-Dimensional Deformation Measurement Algorithm

For the 3D deformation at a structural measurement point, this paper obtains the deformation information from the change of the point's coordinates in the world coordinate system at each moment.

3.2.1. Displacement Measurement Algorithm

For displacement measurement at a structural measurement point, to make the results easier to understand and to represent the displacement intuitively, this paper uses a 2D checkerboard target to establish the $xoy$ plane of the world coordinate system $oxyz$ on the structure plane. The specific procedure is as follows.
1. With the internal and external camera parameters of the binocular stereo vision deformation measurement system known, place the 2D checkerboard target at the desired plane and acquire a single frame with the left camera of the measurement system.
2. Extract the image coordinates of each corner point of the checkerboard from the checkerboard image using the Hough transform algorithm, as shown in Figure 4.
3. Assign the corresponding real 3D coordinates to each corner point, where the Z-axis coordinate of each corner is taken to be zero, and the X- and Y-axis coordinates are the real checkerboard coordinates relative to the chosen origin O.
4. Solve Equation (5) to obtain the rigid transformation matrix $[R, T]$ of the left camera coordinate system $O_l x_l y_l z_l$ of the measurement system with respect to the established world coordinate system $oxyz$.
$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix}
f_x & \lambda & u_0 \\
0 & f_y & v_0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
$$
where $\begin{bmatrix} f_x & \lambda & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$ is the known internal parameter matrix of the left camera, $s$ is a scale factor, $[R, T]$ is the rigid transformation to be solved, and $(u, v, 1)^T$ is the homogeneous form of the corner image coordinates.
5. Using the rigid transformation matrix obtained in the previous step, convert the 3D coordinates of the measurement point in the left camera coordinate system, computed from Equation (4), into the world coordinate system $oxyz$, as shown in Equation (6).
$$
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
= R^{-1} \left( \begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix} - T \right)
$$
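A minimal numpy sketch of this conversion step, assuming $[R, T]$ maps world coordinates into the left-camera frame so that the inverse mapping is applied here; the function name is illustrative.

```python
import numpy as np

def camera_to_world(p_cam, R, T):
    """Map a point from the left-camera frame into the checkerboard-defined
    world frame by inverting the rigid transform [R, T] found in step 4."""
    p_cam = np.asarray(p_cam, dtype=float)
    R = np.asarray(R, dtype=float)
    T = np.asarray(T, dtype=float)
    return np.linalg.inv(R) @ (p_cam - T)
```

Applying the forward transform $p_{cam} = R\,p_{world} + T$ and then this function recovers the original world point, which is an easy self-consistency check.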
The in-plane and out-of-plane displacements of the structure are calculated from the 3D coordinates of the measurement points in the established world coordinate system. Assume that the 3D coordinates of the measurement point $P$ at time $t_0$ are $(x_0, y_0, z_0)$ and, due to the deformation of the structure, its 3D coordinates at time $t_i$ are $(x_i, y_i, z_i)$, as shown in Figure 5.
Then:
$$
d_i^{\,\text{in}} = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2} \cdot
\operatorname{sign}\!\left( \frac{x_i - x_0}{\sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}} \right), \qquad
d_i^{\,\text{out}} = z_i - z_0
$$
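In code, the two displacement components can be computed as follows; this is a direct transcription of the formulas above, noting that the sign factor reduces to the sign of the x-displacement.

```python
import math

def in_plane_displacement(p0, pi):
    """Signed in-plane displacement: magnitude in the xoy plane,
    carrying the sign of the x-component of the motion."""
    dx, dy = pi[0] - p0[0], pi[1] - p0[1]
    mag = math.hypot(dx, dy)
    return math.copysign(mag, dx) if mag > 0 else 0.0

def out_of_plane_displacement(p0, pi):
    """Out-of-plane displacement along the world z-axis."""
    return pi[2] - p0[2]

# A point moving from (0,0,0) to (3,4,1) has in-plane displacement 5.0
# and out-of-plane displacement 1.0.
```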

3.2.2. Strain Measurement Algorithm

In this paper, the strain in the area of a structural measurement point is determined by calculating the relative change in the distance between measurement points. As shown in Figure 6, suppose the coordinates of the structural measurement points at time $t_0$ are $A(x_{A0}, y_{A0}, z_{A0})$ and $B(x_{B0}, y_{B0}, z_{B0})$, with distance $l_0$ between them. Due to structural deformation, the coordinates of points $A$ and $B$ at time $t_i$ change to $A'(x_{Ai}, y_{Ai}, z_{Ai})$ and $B'(x_{Bi}, y_{Bi}, z_{Bi})$, with distance $l_i$ between them.
Then the strain between the measurement points $A$ and $B$ at time $t_i$ is:
$$
\varepsilon_i = \frac{l_i - l_0}{l_0}
$$
where
$$
l_0 = \sqrt{(x_{A0} - x_{B0})^2 + (y_{A0} - y_{B0})^2 + (z_{A0} - z_{B0})^2}, \qquad
l_i = \sqrt{(x_{Ai} - x_{Bi})^2 + (y_{Ai} - y_{Bi})^2 + (z_{Ai} - z_{Bi})^2}
$$
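The strain computation is only a few lines of Python; the sketch below uses `math.dist` (available from Python 3.8) for the 3D distances.

```python
import math

def strain(a0, b0, ai, bi):
    """Strain between measurement points A and B: relative change of the
    3D distance AB between time t0 (a0, b0) and time ti (ai, bi)."""
    l0 = math.dist(a0, b0)
    li = math.dist(ai, bi)
    return (li - l0) / l0

# Stretching a 2.0-unit segment to 2.1 units gives a strain of 0.05.
```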
The change of 3D coordinates at the structure measurement point reflects the deformation information of the structure. Binocular stereo vision recovers the 3D deformation information of the measurement point based on the parallax of the same measurement point in different scenes. Therefore, the key problem of structural deformation measurement is how to determine the images of the same object point in different fields of view. One way to solve this problem is to select suitable image features to match the images of the measurement points in different fields of view. Therefore, image feature extraction and matching are key elements in binocular stereo-vision structural deformation measurement algorithms.

4. Experimental and Analysis

4.1. Datasets Collection and Preprocessing

The raw image captured by the binocular camera and transmitted to the computer includes the entire scene, with unnecessary elements of the noodle processing environment such as conveyor belts, spindles, and machine workpieces. To further improve detection accuracy, noise in the acquired images is first eliminated or attenuated, and then image preprocessing is performed. The specific steps include smoothing filtering, image enhancement, erosion, dilation, and other operations. Smoothing filtering removes small noise to smooth the overall image; image enhancement compensates for the detail lost in filtering; erosion and dilation selectively suppress non-interest points and enhance points of interest, and also contribute to noise filtering. After this preliminary preprocessing, the image features are extracted. The data preprocessing pipeline is shown in Figure 7.
During structural deformation monitoring, the binocular stereo vision deformation measurement system should not be disturbed, so that the relative positions of the two cameras and the position of the measurement system relative to the structural measurement points do not change. Before measurement, adjust the field of view so that the structural measurement points will not move out of either camera image due to displacement, ensuring continuous monitoring of the measurement points.
Ambient brightness and lighting conditions often introduce noise into the image. For image processing methods based on gray pixel values, noise strongly affects subsequent processing results, so the image must be denoised. Median filtering is an effective noise reduction algorithm that minimizes the effects of noise while preserving image detail [12]. We use a 3 × 3 median filter: the gray values of all pixels in the window are sorted, and the median is taken as the gray value of the processed pixel. Figure 8 shows the histogram of the image.
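The 3 × 3 median filter described above can be sketched in plain numpy as follows (replicate padding at the borders; a production system would more likely call OpenCV's `cv2.medianBlur`).

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel by the median of its 3x3 neighborhood."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Nine shifted views of the padded image, one per neighborhood offset.
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

# A single salt-noise pixel in a flat region is removed entirely,
# while edges and fine structure are preserved better than with a mean filter.
```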

4.2. Train

For training, all composite images are randomly cropped to 320 × 320, 640 × 640, and 800 × 800 and then resized to a resolution of 320 × 320. The structure of the neural network used in this paper is 128 × 128 × 1, as shown in Figure 9. For loss optimization we use Adam, with the learning rate initialized to 0.01. The classification module is trained first. Figure 10 shows the loss curve under the above parameter settings.
The actual grasping results are used to determine the classifier samples. Depending on the position of the noodle, the camera captures the pose and attempts to grasp the workpiece. Since the failure rate of grasping is high in the initial case, grasping is performed by micro-adjusting the camera to the grasping pose. First, the initial sample is captured 20 times, and the initial classifier is trained. Then an incremental learning method is used: the acquired samples are added to the training sample set every three attempts, and the classifier is updated. A total of 50 subsequent attempts were made to obtain the variation in grasping success rate, as shown in Figure 11. The experiments show that as the number of grasps increases, the number of samples grows, and the grasping success rate also increases, demonstrating the effectiveness of incremental learning in this setting.
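The incremental-learning loop described above can be sketched as follows. Since scikit-learn's `SVC` has no true incremental update, the sketch simply refits on the growing sample set every three attempts; the `attempt_grasp` stand-in and its synthetic 2D features and labels are hypothetical, not the paper's actual grasp data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def attempt_grasp(rng):
    """Hypothetical stand-in for one grasp attempt: a 2D pose feature
    and a success/failure label (synthetic rule for illustration)."""
    x = rng.normal(0.0, 1.0, 2)
    return x, int(x[0] + x[1] > 0)

# 20 initial attempts train the initial classifier.
samples = [attempt_grasp(rng) for _ in range(20)]
X = [s[0] for s in samples]
y = [s[1] for s in samples]
clf = SVC(kernel="linear").fit(X, y)

# 50 subsequent attempts; every three are folded into the training
# set and the classifier is refit (the incremental step).
pending = []
for _ in range(50):
    pending.append(attempt_grasp(rng))
    if len(pending) == 3:
        X += [s[0] for s in pending]
        y += [s[1] for s in pending]
        clf = SVC(kernel="linear").fit(X, y)
        pending.clear()
```

Refitting from scratch keeps the sketch faithful to the paper's "update every three samples" schedule while staying within the standard SVC API.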

4.3. Monitoring Accuracy Results

Further, we compare the method proposed in this paper with traditional manual monitoring. Classification accuracy is chosen as the evaluation index, calculated by dividing the number of correct classifications by the total number of noodles.
Table 1 shows the results of the classification accuracy comparison between our method and traditional manual monitoring [4] when the number of instances (noodles) in the test set is 100, 300, 500, 1000, and 3000, respectively.
As can be seen in Table 1, the classification accuracy of our algorithm is generally higher than that of traditional manual monitoring across the different test sets. As the number of noodles increases, the accuracy of manual monitoring decreases; in particular, as the number of instances in the test set grows, our accuracy remains far higher, which shows that our method is very effective for alignment monitoring in noodle processing.

5. Discussion

The accuracy of the machine vision-based noodle alignment monitoring method proposed in this paper is significantly better than that of traditional manual inspection, and the method also greatly improves time efficiency. Compared with detection methods in other fields, the proposed method can also be applied to tasks such as tool wear detection by changing only the input.

6. Conclusions

In this paper, we propose a machine vision-based noodle alignment monitoring method to address problems such as the difficulty of aligning noodles on the noodle machine and the tendency of noodles to break and block the machine. We first use the binocular camera to capture images; after image preprocessing, we perform a 3D reconstruction of the noodles, apply our alignment monitoring algorithm, and finally output the monitoring result on whether the noodles are aligned. Experiments show that our method outperforms traditional manual inspection in both accuracy and time efficiency. In the future, we will work to extend this method to other agricultural fields, such as cutting machine recognition.

Author Contributions

Y.S. is responsible for writing the article, and K.Y. is responsible for writing the experiment. All authors have read and agreed to the published version of the manuscript.

Funding

The study was supported by the Key Project of the Anhui Provincial Department of Education, China (Grant No. KJ2020A0068).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

References

  1. Xiang, R.; Hong, T.; Zhou, M. Analysis of depth measurement errors of tomatoes using binocular stereo vision based on single factor experiments. In Proceedings of the 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV), Singapore, 10–12 December 2014; pp. 88–93. [Google Scholar]
  2. Ranft, B.; Strauß, T. Modeling arbitrarily oriented slanted planes for efficient stereo vision based on block matching. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014; pp. 1941–1947. [Google Scholar]
  3. Lee, C.H.; Kim, D. Stereo vision-based pedestrian detection using dense disparity map-based detection and segmentation. In Proceedings of the 6th International Conference on Graphic and Image Processing (ICGIP 2014), Beijing, China, 24–26 October 2014; pp. 944–965. [Google Scholar]
  4. Marr, D.; Nishihara, H.K. Representation and recognition of the spatial organization of three-dimensional shapes. Proc. R. Soc. Lond. B Biol. Sci. 1978, 200, 269–294. [Google Scholar] [PubMed]
  5. Di, K. A review of the positioning methods of Spirit and Opportunity rovers. Spacecr. Eng. 2009, 11, 1–5. [Google Scholar]
  6. Schmid, K.; Tomic, T.; Ruess, F.; Hirschmüller, H.; Suppa, M. Stereo vision based indoor/outdoor navigation for flying robots. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3955–3962. [Google Scholar]
  7. Okada, K.; Inaba, M.; Inoue, H. Integration of real-time binocular stereo vision and whole body information for dynamic walking navigation of humanoid robot. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI2003, Tokyo, Japan, 1 August 2003; pp. 131–136. [Google Scholar]
  8. Sun, D.; Zhang, G.; Chen, Y. Research on camera external parameter calibration of binocular vision measurement system. Control Eng. 2012, 19, 598–602. [Google Scholar]
  9. He, T.; Chen, J.Y.; Hu, X.; Wang, X. A Study of 3d Coordinate Measuring Based on Binocular Stereo Vision. Appl. Mech. Mater. 2015, 740, 531–534. [Google Scholar] [CrossRef]
  10. Wang, G. Research on 3D Face Model Reconstruction Based on Binocular Stereo Vision; Jilin University: Changchun, China, 2014. [Google Scholar]
  11. Deep, K.; Arya, M.; Thakur, M.; Raman, B. Stereo Camera Calibration Using Particle Swarm Optimization. Appl. Artif. Intell. 2013, 27, 618–634. [Google Scholar] [CrossRef]
  12. Grossmann, E.; Woodfill, J.I.; Gordon, G.G. Camera Calibration Using an Easily Produced 3D Calibration Pattern. U.S. Patent 8,872,897, 28 October 2014. [Google Scholar]
  13. Malekian, M.; Park, S.S.; Jun MB, G. Tool wear monitoring of micro-milling operations. J. Mater. Process. Technol. 2009, 209, 4903–4914. [Google Scholar] [CrossRef]
  14. Zhang, C.; Zhang, J.L. On-line tool wear measurement for ball-end milling cutter based on machine vision. Comput. Ind. 2013, 64, 708–719. [Google Scholar] [CrossRef]
  15. Xie, Q.; Wang, G. Research on monitoring of wear condition of milling tool with variable parameters. Mech. Sci. Technol. 2016, 35, 1842–1847. [Google Scholar]
  16. Rao, K.V.; Murthy, B.S.N.; Rao, N.M. Prediction of cutting tool wear, surface roughness and vibration of work piece in boring of AISI 316 steel with artificial neural network. Measurement 2014, 51, 63–70. [Google Scholar]
  17. Ren, Q.; Balazinski, M.; Baron, L.; Jemielniak, K.; Botez, R.; Achiche, S. Type-2 fuzzy tool condition monitoring system based on acoustic emission in micromilling. Inf. Sci. 2014, 255, 121–134. [Google Scholar] [CrossRef]
  18. García-Ordás, M.T.; Alegre, E.; González-Castro, V.; Alaiz-Rodríguez, R. A computer vision approach to analyze and classify tool wear level in milling processes using shape descriptors and machine learning techniques. Int. J. Adv. Manuf. Technol. 2017, 90, 1947–1961. [Google Scholar] [CrossRef]
  19. Su, J.C.; Huang, C.K.; Tarng, Y.S. An automated flank wear measurement of microdrills using machine vision. J. Mater. Process. Technol. 2006, 180, 328–335. [Google Scholar] [CrossRef]
  20. Qin, G.; Yi, X.; Li, Y.; Xie, W. Automatic detection and detection system of tool wear. Opt. Precis. Eng. 2014, 22, 3332–3341. [Google Scholar]
  21. Jurkovic, J.; Korosec, M.; Kopac, J. New approach in tool wear measuring technique using CCD vision system. Int. J. Mach. Tools Manuf. 2005, 45, 1023–1030. [Google Scholar] [CrossRef]
  22. Kim, J.H.; Moon, D.K.; Lee, D.W.; Kim, J.S.; Kang, M.C.; Kim, K.H. Tool wear measuring technique on the machine using CCD and exclusive jig. J. Mater. Process. Technol. 2002, 130–131, 668–674. [Google Scholar] [CrossRef]
  23. Dias, L.R.M.; Diniz, A.E. Effect of the gray cast iron microstructure on milling tool life and cutting force. J. Braz. Soc. Mech. Sci. Eng. 2013, 35, 17–29. [Google Scholar] [CrossRef]
24. Ishizuka, T.; Tanabata, T.; Takano, M.; Shinomura, T. Kinetic measuring method of rice growth in tillering stage using automatic digital imaging system. Environ. Control Biol. 2005, 43, 83–96. [Google Scholar] [CrossRef]
  25. Ma, Z.; Qingshui, H.; Gu, S. Automatic non-destructive monitoring technology of chrysanthemum growth based on machine vision. Chin. J. Agric. Eng. 2010, 26, 203–209. [Google Scholar]
  26. Wang, C.; Zhao, M.; Yan, J. Measurement of maize plant shape at seedling stage based on binocular stereo vision. J. Agric. Mach. 2009, 40, 144–148. [Google Scholar]
  27. Wang, C.; Zhao, M.; Yan, J. Measurement of maize seedling morphological traits based on binocular stereo vision. Trans. Chin. Soc. Agric. Mach. 2009, 40, 144–148. [Google Scholar]
Figure 1. Monitoring system architecture.
Figure 2. Schematic diagram of binocular stereo vision imaging.
Figure 3. Binocular stereo vision 3D coordinate monitoring.
Figure 4. Establishing the world coordinate system.
Figure 5. Schematic of measurement point displacement before and after structure deformation.
Figure 6. Strain diagram before and after structure deformation.
Figure 7. Preprocessing of the original image.
Figure 8. Histogram of the sampled image.
Figure 9. Structure of the neural network used in this paper.
Figure 10. Classifier loss convergence graph.
Figure 11. Sample size determination.
Table 1. Accuracy experimental results.

Number of Instances    Classification Accuracy (Ours)    Classification Accuracy (Traditional Manual Monitoring)
100                    87.49                             85.14
300                    87.50                             84.99
500                    87.65                             85.02
1000                   88.51                             84.94
3000                   90.92                             84.92
Sun, Y.; Yi, K. Intelligent Alignment Monitoring Method for Tortilla Processing Based on Machine Vision. Appl. Sci. 2023, 13, 2407. https://doi.org/10.3390/app13042407