Article

Single-View Measurement Method for Egg Size Based on Small-Batch Images

1 College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
2 Ministry of Agriculture Key Laboratory of Agricultural Equipment in the Middle and Lower Reaches of the Yangtze River, Wuhan 430070, China
3 National Research and Development Center for Egg Processing, Huazhong Agricultural University, Wuhan 430070, China
4 College of Food Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Foods 2023, 12(5), 936; https://doi.org/10.3390/foods12050936
Submission received: 24 November 2022 / Revised: 7 February 2023 / Accepted: 8 February 2023 / Published: 22 February 2023
(This article belongs to the Special Issue Machine Vision Applications in Food)

Abstract

Egg size is a crucial indicator for consumer evaluation and quality grading. The main goal of this study is to measure the egg's major and minor axes based on deep learning and single-view metrology. In this paper, we designed an egg-carrying component to obtain the actual outline of eggs. The Segformer algorithm was used to segment egg images in small batches, and a single-view measurement method suitable for eggs is proposed. Experimental results verified that Segformer achieves high segmentation accuracy for egg images in small batches: the mean intersection over union of the segmentation model was 96.15%, and the mean pixel accuracy was 97.17%. The R-squared obtained with the proposed single-view measurement method was 0.969 for the major axis and 0.926 for the minor axis.

1. Introduction

Eggs are an inexpensive and nutritious food, and egg quality grading is very important in commercialization. Furthermore, the measurement accuracy of egg size is essential in quality testing and affects the hatching rate of breeder eggs [1,2]. However, manual inspection cannot perform this detection consistently because of eye strain. A machine vision approach does not suffer from fatigue, but failure to capture the egg's actual outline leads to measurement errors because of the egg's ellipsoidal shape, which in turn affects the determination of its appearance quality. Therefore, it is crucial to study how to obtain the exact size of eggs through machine vision.
Many scholars have used machine vision for egg size detection, volume prediction, and freshness grading. Among them, some used external parameters such as egg size to achieve primary grading. Omid developed an intelligent system based on machine vision to achieve egg grading using defects and size [3]. Vasileva analyzed the external parameters of eggs through machine vision to automate the primary grading stage [4]. Other scholars achieved egg detection by quantifying volume and appearance parameters, with volume estimated from images. Soltani proposed a new method for egg volume prediction and implemented an efficient image processing algorithm for egg size measurement [5]. Quilloy developed an automatic single-line tabletop egg sorter that integrated machine learning and electromechanical principles; the sorter used egg projection area and weight estimation and is more efficient than manual sorting [6]. Harnsoongnoen performed egg grading and freshness assessment based on machine vision and load cells; they used the machine vision system to estimate the eggs' volume, and the results indicate that the system has potential in the poultry industry [7]. In addition, machine vision has been applied to breeder eggs. Haixia detected egg size through machine vision and found that the detected values correlated highly with the measured values [1]. Xiulian achieved the rejection of sperm-less eggs through machine-vision-extracted regional invariance parameters such as the long-to-short axis ratio [8]. Weizhong screened breeder eggs based on machine vision, using egg size as an indicator, and the screening results were highly consistent with manual results [2]. Okinda used an infrared depth sensor to estimate the volume of eggs on the production line and developed a volume estimation model using feature variables [9]. Nasiri achieved the classification of unwashed eggs using machine vision and deep learning methods [10]. Aboonajmi discussed the importance of using non-destructive inspection methods such as machine vision in an in-line sorting system for the internal and external quality of eggs [11]. Valeriy proposed a mathematical formula for describing the shape of the egg based on four parameters, namely egg length, maximum breadth, the shift of the vertical axis, and the diameter at one-quarter of the egg length [12]. Liao proposed a clear definition of the standard egg shape curve in an analytical geometric sense and also derived the equation of the curve's piecewise function [13]. All the literature above used machine vision to achieve egg quality detection, but the following two issues are overlooked when acquiring egg images: first, the height of the imaging plane differs for different egg sizes; second, due to the egg's ellipsoidal shape and the carrying component's structure, the imaged outline may not match the actual outline of the egg.
Single-view metrology is a convenient and effective way to obtain object dimensions from images. Some scholars studied theoretical and algorithmic geometric information extraction from single uncalibrated images [14]. These works focused on new single-view metrology, including new methods for obtaining vanishing points [15,16], off-surface point transfer [17], single-view metrology after lens distortion correction [18], and calculating camera matrix parameters using vanishing points [19]. Other scholars studied applications of single-view metrology, including horizontal and vertical road measurements [20], distance estimation during car driving [21], building height estimation from Google Street View [22], and real-time target tracking and manipulation combined with robotics [23]. Single-view metrology requires specific geometric information based on known scenes, and the measurement accuracy depends heavily on the accuracy of image preprocessing, such as edge detection, line fitting, and vanishing point determination [19]. This paper proposes a measurement method for eggs based on single-view metrology, which solves the problem that current machine vision methods cannot capture the actual outline of eggs. In addition, this paper obtained an egg image segmentation model by combining small-batch egg samples with deep learning. Here, small-batch eggs means that only a small number of egg samples are used to train the segmentation model, which can then segment the image of a single egg; combined with single-view metrology, this model enables egg size measurement and lays a foundation for egg commercialization lines.
The goal of this study is to achieve size measurement of small-batch eggs through single-view metrology and deep learning. The specific objectives are (1) designing the egg-carrying component to acquire the egg's outline in a single view, (2) achieving image segmentation of eggs in small batches through deep learning, and (3) designing the geometric environment for vanishing point acquisition based on single-view metrology. Moreover, we derived the mathematical measurement method applicable to the egg's major and minor axes.

2. Materials and Methods

2.1. Experimental Materials

A total of 30 eggs under the Sundaily Farm brand were purchased (Wuhan, China), and the mean and median weights of all samples measured using an electronic scale were 47.65 g and 46.71 g, respectively. The eggs’ major axis and minor axis were measured using vernier calipers. In addition, egg sizes were measured five times in the morning and afternoon using vernier calipers to reduce random errors. After the vernier caliper measurement, a camera (IDS company, Saarbrücken, Germany, CCD industrial camera, model UI-2210RE-C-HQ, resolution 640 × 480; lens model M0814-MP2) and a surface lamp (Guiguang Instrument company, Guilin, China, DY30-1C, AC220V, DC24V, 30 W) were used to collect the egg images (Figure 1).
As shown in Figure 2a,b, the egg deflects to different degrees in different views when placed on a flat surface. With this placement, the egg's major and minor axes cannot both be measured from a single view. The egg-carrying component (Figure 2c,d) designed in this paper keeps the major axis parallel to the imaging plane of the front view, so the component helps in obtaining the actual outline of the egg from a single view. When the egg slides down from the top of the component, it adjusts its posture under gravity. The component was produced by 3D printing in resin; it is 75 mm long, 64 mm wide, and 34 mm high. The working principle is the tangency relationship between the egg's cross-sectional circles and the component's triangular boundary. Table 1 shows the differences between this component and existing egg image capture components.

2.2. Single-View Measurement for Eggs

(1) Single-view measurement principle
The length, width, and height of a three-dimensional object cannot be measured from a single view alone, but the length ratio of two parallel objects can be obtained. Thus, if a reference object of known size is added, the size of the measured object can be obtained from this length ratio. Criminisi described how the affine 3D geometry of a scene can be measured from a single perspective image [14]. The basic geometry of a single view is shown in Figure 3. The vanishing line l is the projection of the line at infinity of the reference plane in the image. The vanishing point v is the image of the point at infinity in the reference direction.
Criminisi proposed interplanar distance measurement [24]; this method (Figure 4) uses the cross-ratio with a known distance Z_r (the distance between the planes π_r and π_r′) to obtain the value of Z (the distance between the planes π and π′). First, the mathematical relation between the distance Z_c and Z can be established according to the cross-ratio, where Z_c is the distance of the camera from the plane π:

$$\frac{d(x,c)}{d(x',c)} \Big/ \frac{d(x,v)}{d(x',v)} = \frac{d(X,C)}{d(X',C)} \Big/ \frac{d(X,V)}{d(X',V)} \tag{1}$$

Here, X is a point on the plane π, X′ is the parallel projection of the point X onto the plane π′, and d(·,·) denotes the distance between two points. Since the point V is at infinity, d(X,V)/d(X′,V) = 1, and it follows that

$$\frac{Z}{Z - Z_c} = \frac{d(X,C)}{d(X',C)} = \frac{d(x,c)\,d(x',v)}{d(x',c)\,d(x,v)} \tag{2}$$

In order to compute the distance Z, it is necessary to determine Z_c from the known distance Z_r. According to the cross-ratio, the mathematical relation between Z_cr (the distance of the camera from the plane π_r) and Z_r (the distance between the planes π_r and π_r′) can be obtained as follows:

$$\frac{Z_r}{Z_{cr}} = 1 - \frac{d(r_1,v)\,d(r_2,c_r)}{d(r_2,v)\,d(r_1,c_r)} \tag{3}$$

Here, r_1 and r_2 are the image points corresponding to R_1 and R_2, and c_r is the intersection of the line through r_1 and r_2 with the vanishing line l. Since Z_r is known, the distance Z_cr can be determined. Further, based on the cross-ratio and the distance (Z_c − Z_cr) between the planes π_r and π, the distance Z_c can be determined as follows:

$$Z_c = Z_{cr}\,\frac{d(s_1,c_s)/d(s_1,v)}{d(s_2,c_s)/d(s_2,v)} \tag{4}$$

Finally, the distance Z can be computed by combining Equation (4) with Equation (2).
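As a minimal numerical sketch of Equations (2)-(4) as reconstructed above, the following Python functions assume that the relevant image points (x, x′, c, v for the measured column and r_1, r_2, c_r, s_1, s_2, c_s for the reference) have already been extracted in pixel coordinates; the function names and signatures are illustrative, not the authors' implementation.

```python
import numpy as np

def dist(p, q):
    """Euclidean distance between two image points given as (x, y) pixels."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def camera_distance_ref(r1, r2, c_r, v, Z_r):
    """Equation (3): camera distance Z_cr from the known reference height Z_r."""
    ratio = 1.0 - (dist(r1, v) * dist(r2, c_r)) / (dist(r2, v) * dist(r1, c_r))
    return Z_r / ratio

def camera_distance(s1, s2, c_s, v, Z_cr):
    """Equation (4): transfer the camera distance from plane pi_r to plane pi."""
    return Z_cr * (dist(s1, c_s) / dist(s1, v)) / (dist(s2, c_s) / dist(s2, v))

def interplanar_distance(x, x_prime, c, v, Z_c):
    """Equation (2) solved for Z: Z / (Z - Z_c) = d(x,c) d(x',v) / (d(x',c) d(x,v))."""
    k = (dist(x, c) * dist(x_prime, v)) / (dist(x_prime, c) * dist(x, v))
    return k * Z_c / (k - 1.0)  # Z = k * (Z - Z_c)  =>  Z = k * Z_c / (k - 1)
```

Chaining the three functions (reference points first, then the camera distance, then the measured column) reproduces the computation order described in the text.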
(2) Vanishing point determination and measurement principle of egg images
In this paper, the vanishing points were determined using the multi-vanishing-point determination method proposed by Nieto [25]; the method is based on a robust version of the MLESAC (maximum likelihood estimation sample consensus) algorithm and is used similarly to the expectation-maximization (EM) algorithm, which allows a trade-off between accuracy and efficiency and enables real-time operation on video sequences. Nieto provides the source code at https://sourceforge.net/projects/vanishingpoint/ (accessed on 5 July 2022), which was used to produce the vanishing-point figures in this paper. As shown in Figure 5, we prepared cards with crossed parallel lines to obtain the vanishing points.
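For readers without access to that code, a rough least-squares stand-in (not Nieto's MLESAC estimator) can illustrate the underlying idea: each detected segment of the crossed parallel lines defines a homogeneous line, and the vanishing point is the point that best lies on all of them. The segment format below is an assumption for illustration.

```python
import numpy as np

def vanishing_point_least_squares(segments):
    """Least-squares intersection of line segments: each segment (p, q) defines
    a homogeneous line l = p x q; the vanishing point v minimizes sum((l_i . v)^2)."""
    lines = []
    for p, q in segments:
        p_h = np.array([p[0], p[1], 1.0])
        q_h = np.array([q[0], q[1], 1.0])
        l = np.cross(p_h, q_h)
        lines.append(l / np.linalg.norm(l[:2]))  # normalize so residuals approximate point-to-line distances
    A = np.vstack(lines)
    # The right singular vector for the smallest singular value minimizes |A v|.
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]
    return v[:2] / v[2]  # back to pixel coordinates (assumes a finite vanishing point)
```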
This paper used vanishing lines and vanishing points to measure the egg's major and minor axes based on the interplanar distance measurement principle proposed by Criminisi. First, as shown in Figure 6a, the egg and the reference are geometrized as straight line segments: AB is the egg of length Z, CD is the reference of length R, and the projection of CD onto AB is FB. Next, as shown in Figure 6b, the parallel lines of the reference plane were used to obtain the vanishing points v_l and v_r; the vanishing line is obtained by connecting these two vanishing points, and image lines meeting on the vanishing line correspond to parallel lines in the scene. Therefore, bd intersects the vanishing line at e, and ec then intersects ab to give the projection point f. Based on the cross-ratio, the mathematical relation between the geometrized egg and the reference can be obtained as follows:
$$\frac{d(B,F)}{d(B,A)} \Big/ \frac{d(G,F)}{d(G,A)} = \frac{d(b,f)}{d(b,a)} \Big/ \frac{d(g,f)}{d(g,a)} \tag{5}$$

Since the point G is at infinity, d(G,F)/d(G,A) = 1, and Equation (5) reduces to

$$\frac{R}{Z} = \frac{d(b,f)\,d(g,a)}{d(b,a)\,d(g,f)} \tag{6}$$

R, the reference height, is a known quantity, so the length Z can be computed by combining the image processing results with Equation (6).
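The construction in Figure 6b and Equation (6) can be sketched directly with homogeneous coordinates. In the sketch below, a and b are the image endpoints of the egg segment, c and d the endpoints of the reference, v_l and v_r the two card vanishing points, and g is taken to be the vanishing point of the AB direction; the point names and helper functions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hom(p):
    """Lift an (x, y) pixel coordinate to homogeneous coordinates."""
    return np.array([p[0], p[1], 1.0])

def line_through(p, q):
    """Homogeneous line through two image points."""
    return np.cross(hom(p), hom(q))

def intersect(l1, l2):
    """Intersection of two homogeneous lines, returned in pixel coordinates."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]  # assumes the lines are not parallel in the image

def dist(p, q):
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def egg_axis_length(a, b, c, d, g, v_l, v_r, R):
    """Equation (6): recover the egg length Z from the known reference length R."""
    vanishing_line = line_through(v_l, v_r)
    e = intersect(line_through(b, d), vanishing_line)       # bd meets the vanishing line at e
    f = intersect(line_through(e, c), line_through(a, b))   # ec meets ab at the projection point f
    R_over_Z = (dist(b, f) * dist(g, a)) / (dist(b, a) * dist(g, f))
    return R / R_over_Z
```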

2.3. Egg Image Segmentation in Small Batches

In order to achieve accurate and efficient image segmentation, this paper used Segformer to segment the egg images; the egg size was then measured based on single-view metrology. Segformer is a simple but powerful semantic segmentation method proposed by Xie; it avoids the complex designs common in previous methods and unifies a hierarchical transformer encoder that does not require positional encoding with a lightweight multilayer perceptron (MLP) decoder, achieving high efficiency and performance as well as zero-shot robustness [26]. In this paper, the weights provided by Xie (pre-trained on ImageNet-1K) were used. Fifteen samples were randomly selected from the thirty data samples as the test set to ensure independence. The remaining 15 samples were expanded to 75 training images by randomly flipping up and down, flipping left and right, and varying contrast and lightness. The specific training parameters are shown in Table 2. Annotation was performed using the PixelAnnotationTool (https://github.com/abreheret/PixelAnnotationTool, accessed on 15 June 2022) to obtain labels, and the model was then trained and tested based on the code and usage provided by Xie (https://github.com/NVlabs/SegFormer, accessed on 28 July 2022).
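A minimal sketch of the augmentation used to expand the 15 training images is given below, assuming each image is loaded as an HxWx3 uint8 NumPy array; the jitter ranges are illustrative and not the exact values used in training.

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One random augmentation pass: flips plus contrast/lightness jitter."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = np.flipud(out)              # flip up-down
    if rng.random() < 0.5:
        out = np.fliplr(out)              # flip left-right
    contrast = rng.uniform(0.8, 1.2)      # illustrative jitter ranges
    brightness = rng.uniform(-20, 20)
    out = (out - out.mean()) * contrast + out.mean() + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

# 15 originals -> 75 training images (four augmented copies each); the same
# geometric flips must also be applied to the annotation masks.
```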

2.4. Single-View Measurement Framework and Evaluation Metrics

The proposed measurement framework consists of three main steps to measure the egg's major and minor axes (Figure 7). The first step is to acquire a single-view image using the component that calibrates the egg's pose. The second step is to obtain segmentation images of the egg and the reference through a Segformer model trained on a small number of samples. The third step is to obtain the vanishing points and then the egg's major and minor axes through single-view metrology.
In this paper, we used intersection over union (IoU) and pixel accuracy (PA) as the performance discriminators of the segmentation model.
IoU = TP/(TP + FP + FN)
PA = (TP + TN)/(TP + TN + FP + FN)
Here, TP denotes pixels correctly predicted as positive, FP denotes pixels incorrectly predicted as positive, FN denotes pixels incorrectly predicted as negative, and TN denotes pixels correctly predicted as negative.
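For one class, these definitions translate directly into pixel counts over the predicted and ground-truth label maps; the short sketch below shows the computation (the class-index encoding is an assumption for illustration).

```python
import numpy as np

def iou_and_pa(pred: np.ndarray, label: np.ndarray, cls: int):
    """Per-class IoU and PA from predicted and ground-truth label maps,
    following IoU = TP/(TP+FP+FN) and PA = (TP+TN)/(TP+TN+FP+FN)."""
    p, t = (pred == cls), (label == cls)
    tp = np.logical_and(p, t).sum()
    fp = np.logical_and(p, ~t).sum()
    fn = np.logical_and(~p, t).sum()
    tn = np.logical_and(~p, ~t).sum()
    iou = tp / (tp + fp + fn)
    pa = (tp + tn) / (tp + tn + fp + fn)
    return iou, pa
```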
The absolute error was used as an evaluation index to confirm the performance of the single-view measurement framework. Furthermore, R2 (R-squared) was used to assess the conformity degree between the single-view measured value and the actual value. In particular, to eliminate random errors in the measurement of actual values, each egg was measured by the same experimenter five times in the morning as well as the afternoon. Then, the average of the measured values was taken.
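A small sketch of this evaluation is given below, treating the single-view values as predictions of the caliper-averaged actual values and using the plain coefficient of determination; note that Figure 10 reports an adjusted R-squared from a fitted regression, which differs slightly from this quantity.

```python
import numpy as np

def absolute_error_and_r2(measured, actual):
    """Absolute errors and R-squared between single-view measured values
    and caliper-averaged actual values."""
    m, a = np.asarray(measured, float), np.asarray(actual, float)
    abs_err = np.abs(m - a)
    ss_res = np.sum((a - m) ** 2)           # residual sum of squares
    ss_tot = np.sum((a - a.mean()) ** 2)    # total sum of squares
    return abs_err, 1.0 - ss_res / ss_tot
```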

3. Results

3.1. Segmentation Result of Egg Images

The mean intersection over union (MIoU) of the segmentation model was 95.35%, and the mean pixel accuracy (MPA) was 97.31%. The IoU and PA for each category are shown in Table 3. The model has the best segmentation result for the background, with both IoU and PA above 99%. The segmentation result for the egg is also good, with IoU and PA of 97% or more. The reference has the worst segmentation result, with an IoU of 89.22% and a PA of 93.71%. Overall, the Segformer algorithm demonstrated strong segmentation performance. However, the reference's small area in the image, its color similarity to the background, and partial occlusion (Figure 8) led to poorer segmentation of the reference. Therefore, to reduce the measurement error caused by reference segmentation, the average of the reference coordinate values (the egg size calculation requires the reference coordinates, as detailed in Section 2.2, Figure 6) from multiple images was used for the egg size calculation.

3.2. Vanishing Point Image

Figure 9 shows the vanishing points obtained through the multiple vanishing point determination method (Section 2.2). Since the egg's ideal major and minor axes are perpendicular to different planes, the original image was rotated by 90 degrees to obtain the vanishing points of the right card. In addition, we cropped the images to prevent the two cards from interfering with each other during vanishing point acquisition; the vanishing point coordinate values were then compensated accordingly in the measurement calculations.

3.3. Single-View Measurement Result of Egg Size

Table 4 shows that the measurement errors of the major axis were all less than 1 mm, and the adj.R2 was 0.9725 (Figure 10), which was obtained from the regression analysis of the measured and actual values. For the minor axis, the measurement errors of four samples were greater than 1 mm, but they were all within 2 mm, and the adj.R2 was 0.8353 (Figure 10). In addition, this paper used an F-test for the regression models, and the results show that all models have a significant linear relationship (Table 5).
The measurement results of the minor axis are less satisfactory. The possible reasons are as follows. First, the segmentation of the reference object was imperfect, and the reference's length and width determine the measurements of the egg's major and minor axes, respectively; Figure 8 shows more interference from the same-colored background along the short side of the reference, which leads to a larger calculation error for the minor axis. Second, as observed in Figure 8, the contact region between the egg and the egg-carrying component is very dark, which also interferes with segmentation along the egg's minor-axis direction and thus affects the calculation of the minor axis.

4. Discussion

In this paper, we designed the egg-carrying component to acquire the egg's actual outline. This component uses the tangency of the egg's cross-sectional circles to the triangular boundary; by comparing the actual and measured values of the egg's major and minor axes, we found that the component provides an excellent actual outline of the egg. However, due to the ellipsoidal shape of the egg, the component can only provide the actual outline of the front view and not the actual outlines of the other views.
Most current studies on image-based egg detection process and measure the image directly and do not consider the consistency of the egg imaging plane. However, it is vital to ensure that the imaging plane of the egg is consistent. The literature [27] also suggests that whether the major axis is parallel to the horizontal plane affects the measurement of eggs and agrees that a suitable egg roller can improve the accuracy of egg measurements. Compared with these studies [7,28,29], the method presented in this study can disregard the consistency of the egg imaging plane by applying single-view metrology: using the vanishing point in the perspective view and the reference size, the difference in the imaging plane does not affect the measurement of eggs of different sizes.
However, the measurement accuracy of the egg sizes depends on image processing. In this paper, an image segmentation model was developed using Segformer. Segformer can build a robust image segmentation model with few samples, but because the image background is the same color as the reference, the reference segmentation was worse. Therefore, we will use differently colored backgrounds or references in future studies. This paper verified the feasibility of single-view measurement of eggs using an egg-carrying component and single-view metrology. Future research should explore a dynamic egg-carrying component. A dynamic online measurement component can provide a reliable and efficient basis for determining the eggs' appearance quality in the commercialization process. In addition, such a component can provide researchers with a dynamic acquisition method for single-view dimensional metrology.
Taken together, our results show that the actual outline of eggs can be obtained using the tangency of the egg's cross-sectional circle to the triangular boundary, and that single-view metrology can solve the problem of inaccurate measurements caused by different egg imaging planes. However, this paper only verified the method's feasibility using a static egg-carrying component. Moreover, the image segmentation method still needs improvement to increase the measurement accuracy of egg size.

5. Conclusions

This paper verified the feasibility of single-view measurement of the egg's major and minor axes and proposed a method for determining them by combining deep-learning-based segmentation with single-view metrology. Owing to the strong zero-shot robustness and high efficiency of Segformer, good image segmentation was achieved using a small batch of images. Moreover, we obtained excellent measurement results for the eggs' major and minor axes using the projection relation between parallel lines and the properties of the vanishing line. To simplify model building and validation, this work used a single type of egg for the experiments; universal models for multiple egg types will be explored subsequently. Future research will focus on measurements of multiple egg types and on online dynamic egg measurement.

Author Contributions

Conceptualization, Q.W. and M.M.; methodology, C.L.; software, C.L.; validation, Q.W. and C.L.; formal analysis, C.L.; investigation, C.L.; resources, Q.W.; data curation, C.L.; writing—original draft preparation, C.L.; writing—review and editing, C.L., Q.W., M.M., Z.Z., W.L., S.L. and W.F.; visualization, Q.W., S.L., Z.Z. and W.F.; supervision, Q.W.; project administration, Q.W.; funding acquisition, Q.W. and W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 31871863422 and No. 32072302) and the Fundamental Research Funds for the Central Universities (No. 2662022GXYJ004).

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank laboratory colleagues for their help.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Deng, H.; Liu, Y.M.; Wen, Y.X.; Wang, S.C. Research on method of detecting egg size by machine vision. J. Huazhong Agric. Univ. 2006, 25, 452–454.
2. Zhou, W.Z.; Feng, X.H. Shape identification of fertile eggs by using computer vision. Trans. CSAE 2000, 16, 126–130.
3. Omid, M.; Soltani, M.; Dehrouyeh, M.H.; Mohtasebi, S.S.; Ahmadi, H. An expert egg grading system based on machine vision and artificial intelligence techniques. J. Food Eng. 2013, 118, 70–77.
4. Vasileva, A.V.; Gorbunova, E.V.; Vasilev, A.S.; Peretyagin, V.S.; Chertov, A.N.; Korotaev, V.V. Assessing exterior egg quality indicators using machine vision. Br. Poult. Sci. 2018, 59, 636–645.
5. Soltani, M.; Omid, M.; Alimardani, R. Egg volume prediction using machine vision technique based on pappus theorem and artificial neural network. J. Food Sci. Technol. 2015, 52, 3065–3071.
6. Erwin, P.Q.; Delfin, C.S.; Pepito, M.B. Single-line automated sorter using mechatronics and machine vision system for Philippine table eggs. Afr. J. Agric. Res. 2018, 13, 918–926.
7. Harnsoongnoen, S.; Jaroensuk, N. The grades and freshness assessment of eggs based on density detection using machine vision and weighing sensor. Sci. Rep. 2021, 11, 16640.
8. Ma, X.L.; Yi, S.J. Unfertilized eggs verification system before hatching based on embedded system and machine vision. Trans. Chin. Soc. Agric. Mach. 2011, 42, 187–192.
9. Okinda, C.; Sun, Y.; Nyalala, I.; Korohou, T.; Opiyo, S.; Wang, J.; Shen, M. Egg volume estimation based on image processing and computer vision. J. Food Eng. 2020, 283, 110041.
10. Nasiri, A.; Omid, M.; Taheri-Garavand, A. An automatic sorting system for unwashed eggs using deep learning. J. Food Eng. 2020, 283, 110036.
11. Aboonajmi, M.; Mostafaei, Z. Non-destructive quality assessment of table eggs for online sorting. In Informatics in Poultry Production; Khaliduzzaman, A., Ed.; Springer: Singapore, 2022; pp. 35–51.
12. Narushin, V.G.; Romanov, M.N.; Griffin, D.K. Egg and math: Introducing a universal formula for egg shape. Ann. N. Y. Acad. Sci. 2021, 1505, 169–177.
13. Liao, H.B. Egg shape standard curve and its piecewise function equation. J. Math. Discov. 2022, 2, 1–31.
14. Criminisi, A. Single-view metrology: Algorithms and applications. In Proceedings of the Pattern Recognition: 24th DAGM Symposium, Zurich, Switzerland, 16–18 September 2002.
15. Wang, G.; Wu, Y.; Hu, Z. A novel approach for single view based plane metrology. In Proceedings of the 2002 International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002.
16. Peng, K.; Hou, L.; Ren, R.; Ying, X.H.; Zha, H.B. Single view metrology along orthogonal directions. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010.
17. Mendonca, P.R.S.; Kaucic, R. Single view metrology: A practical example. In Proceedings of the 2008 IEEE Workshop on Applications of Computer Vision, Copper Mountain, CO, USA, 7–9 January 2008.
18. Chen, Y.; IP, H.H.S. Single view metrology of wide-angle lens images. Vis. Comput. 2006, 22, 445–455.
19. Wang, G.H.; Hu, Z.Y.; Wu, F.C.; Tsui, H.T. Single view metrology from scene constraints. Image Vis. Comput. 2005, 23, 831–840.
20. Hoo, J.E.; Lim, K.C. Accuracy and error study of horizontal and vertical measurements with single view metrology for road surveying. ARPN J. Eng. Appl. Sci. 2016, 11, 7872–7876.
21. Ali, A.; Hassan, A.; Ali, A.R.; Khan, H.U.; Kazmi, W.; Zaheer, A. Real-time vehicle distance estimation using single view geometry. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020.
22. Díaz, E.; Arguello, H. An algorithm to estimate building heights from Google street-view imagery using single view metrology across a representational state transfer system. In Dimensional Optical Metrology and Inspection for Practical Applications V; SPIE: Baltimore, MD, USA, 2016.
23. Kansal, S.; Mukherjee, S. Automatic single-view monocular camera calibration-based object manipulation using novel dexterous multi-fingered delta robot. Neural Comput. Appl. 2019, 31, 2661–2678.
24. Criminisi, A.; Reid, I.; Zisserman, A. Single view metrology. Int. J. Comput. Vis. 2000, 40, 123–148.
25. Nieto, M.; Salgado, L. Real-time robust estimation of vanishing points through nonlinear optimization. In Real-Time Image and Video Processing 2010; SPIE: Brussels, Belgium, 2010.
26. Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and efficient design for semantic segmentation with transformers. Adv. Neural Inf. Process. Syst. 2021, 34, 12077–12090.
27. Zhang, S.Q.; Tu, J.; Sun, L.; Cai, J.R.; Gu, H.F.; Gu, H. Online detection method of eggs major-minor-axis and mass. Trans. Chin. Soc. Agric. Mach. 2014, 45, 204–209.
28. Widiasri, M.; Santoso, L.P.; Siswantoro, J. Computer vision system in measurement of the volume and mass of egg using the disc method. IOP Conf. Ser. Mater. Sci. Eng. 2019, 703, 012050.
29. Zhu, Z.H.; Ye, Z.F.; Tang, Y. Nondestructive identification for gender of chicken eggs based on GA-BPNN with double hidden layers. J. Appl. Poult. Res. 2021, 30, 100203.
Figure 1. Schematic diagram of egg image acquisition.
Figure 2. Different views of the egg: possible poses when placed naturally on a flat surface, (a) front view, (b) top view; the pose using the component, (c) top view, (d) front view.
Figure 3. Basic geometry of single-view metrology.
Figure 4. The geometry of calculating plane distance using cross-ratio, (a) in the world, (b) in the image.
Figure 5. Cross-parallel reference card, (a) front view, (b) perspective view.
Figure 6. The geometry of obtaining egg size based on cross-ratio and single-view metrology, (a) in the world, (b) in the image.
Figure 7. The flow chart of egg single-view metrology.
Figure 8. Results of image segmentation. (a) Original image; (b) image segmentation result; (c) mask image.
Figure 9. The location of the vanishing points. (a) Original image; (b) rotated image.
Figure 10. Comparison of measured and actual values of eggs’ size. (a) Eggs’ major axis; (b) eggs’ minor axis.
Table 1. Comparison of the egg-carrying component designed in this paper with common egg-carrying methods.
Egg-carrying methods compared (each shown as an image in the original table): Mahmoud Soltani [5]; Erwin [6]; Harnsoongnoen [7]; this paper.
Design innovation (this paper): the tangency between the cross-sectional circle of the egg and the triangle of the component is used to obtain the true profile of the egg, and this design idea can be used to adjust egg posture on a production line.
Advantages (this paper): the component allows the real outline of the egg to be obtained from a single view and the egg's attitude to be adjusted using gravity.
Table 2. Parameters used in training the model.
Parameter or Resource: Value or Model
Checkpoint file: pspnet_r50-d8_512×1024_40k_cityscapes_20200605_003338-2966598c.pth
Optimizer: SGD
Batch size: 8
Initial learning rate: 0.01
Train, validate, and test images: 60, 15, and 15 images
Total iterations: 5000
Backend: PyTorch 1.7
Python: 3.6
Operating system: Windows 10
GPU: NVIDIA RTX 2080 Ti
Classes: ('background', 'egg', 'ref')
Palette: (255, 0, 0), (250, 250, 0), (255, 255, 255)
Table 3. IoU and PA of the segmentation model.
Class: IoU (%) / PA (%)
Background: 99.72 / 99.88
Egg: 97.11 / 98.34
Reference: 89.22 / 93.71
Table 4. Eggs' size measurement results.
Serial Number | Minor Axis (mm): Measured / Actual / Error | Major Axis (mm): Measured / Actual / Error
1 | 39.329 / 40.866 / 1.537 | 51.136 / 50.774 / 0.362
2 | 40.668 / 41.036 / 0.368 | 57.040 / 56.713 / 0.327
3 | 40.770 / 41.939 / 1.169 | 53.405 / 52.856 / 0.549
4 | 40.742 / 41.661 / 0.919 | 54.140 / 53.388 / 0.752
5 | 40.617 / 41.828 / 1.211 | 53.291 / 52.478 / 0.813
6 | 42.825 / 42.902 / 0.077 | 54.966 / 54.033 / 0.933
7 | 40.445 / 41.302 / 0.857 | 55.053 / 54.321 / 0.732
8 | 41.599 / 42.373 / 0.774 | 54.111 / 53.712 / 0.400
9 | 41.746 / 42.147 / 0.401 | 57.377 / 57.132 / 0.245
10 | 42.517 / 43.253 / 0.736 | 56.433 / 56.407 / 0.026
11 | 42.046 / 43.360 / 1.314 | 56.237 / 56.376 / 0.139
12 | 40.270 / 41.115 / 0.845 | 55.663 / 55.566 / 0.097
13 | 41.385 / 42.198 / 0.813 | 53.052 / 52.312 / 0.740
14 | 41.119 / 42.011 / 0.892 | 54.079 / 54.024 / 0.055
15 | 42.264 / 42.862 / 0.598 | 54.105 / 53.613 / 0.492
Table 5. F-test results for the regression models.
Regression Model | Significance Level | F-Value
Major axis | 0.01 | 552.999 (P{F ≥ 98.503} = 0.01)
Minor axis | 0.01 | 72.320 (P{F ≥ 34.116} = 0.01)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
