Article

A Novel In Situ Measurement Algorithm for Oudemansiella raphanipies Caps Based on YOLO v4 and Distance Filtering

1 School of Computer and Information Engineering, Jiangxi Agricultural University, Nanchang 330045, China
2 School of Software, Jiangxi Agricultural University, Nanchang 330045, China
3 Bioengineering and Technological Research Centre for Edible and Medicinal Fungi, Nanchang 330045, China
* Author to whom correspondence should be addressed.
Agronomy 2023, 13(1), 134; https://doi.org/10.3390/agronomy13010134
Submission received: 9 November 2022 / Revised: 20 December 2022 / Accepted: 28 December 2022 / Published: 30 December 2022

Abstract

Oudemansiella raphanipies has gradually gained popularity in the market for its delicious taste and for its benefits in enhancing human immunity and regulating body functions. To achieve high-throughput, automatic monitoring of the phenotypes of Oudemansiella raphanipies, a novel method based on YOLO v4 and a Distance Filter (DF) was proposed for high-precision diameter estimation of Oudemansiella raphanipies caps. First, a dataset of Oudemansiella raphanipies was established through laboratory cultivation and the collection of factory samples. An improved YOLO v4 detection model, with a CBAM module added to each convolution block in the backbone, was trained to locate the caps and thus obtain approximate bounding boxes. Second, the approximate contour of each cap was obtained from the H component using the Canny edge detection operator, with distance filtering applied to eliminate noise. Finally, the center and accurate contour of the fitted circle were obtained by the constrained least square method, and the diameter of the fitted circle was estimated from the calibration data. Practical tests showed that this method achieved an accuracy of 95.36% in recognizing Oudemansiella raphanipies caps on the growing bed, and its fitting of the caps was superior to the Circle Hough Transform (CHT), the least square method (LS), and Ransac, with no manual parameter adjustment. Compared with manual measurement, the mean absolute error (MAE) of this method was 0.77 mm, the coefficient of determination (R2) was 0.95, and the root mean square error (RMSE) was 0.96 mm. The method is therefore cost-effective and can meet the needs of continuous, long-term tracking of the cap shape of Oudemansiella raphanipies, providing a basis for future high-throughput breeding and machine picking.

1. Introduction

Oudemansiella raphanipies has a delicious taste and rich nutrition. Furthermore, it has certain health benefits, such as lowering blood pressure, and it is loved by consumers [1]. To meet the demand for high quality, it is imperative to cultivate new varieties with short growth periods, strong disease resistance, and good quality. The phenotype is a key point that must be attended to in the breeding and cultivation of edible fungi. By observing their appearance characteristics, the growth density, speed, and period of edible fungi can be analyzed automatically, so their optimal environment can be deduced. This helps farmers with little experience obtain high-grade mushrooms easily, and helps fungi breeders make preliminary discoveries of new varieties. Furthermore, in factory cultivation, accurately locating the caps and judging the best harvest time are prerequisites for picking by machine vision [2]. However, so far, most observation methods for Oudemansiella raphanipies rely on naked-eye judgment and manual measurement, which are labor-intensive and inefficient. As a result, these methods are not suitable for large-scale use, which, to some extent, also hinders the large-scale industrial production of Oudemansiella raphanipies [3].
With the continuous development of computer technology, it has become popular to obtain the shape of objects and perform quantitative analysis automatically, efficiently, and accurately by using machine vision and deep learning, which have been widely applied to monitor various crops in natural or greenhouse environments [4,5,6,7,8]. As industrial-grade depth cameras are expensive and thus not conducive to popularization in factories, monocular cameras are widely used in plant phenotype platforms owing to their cost-effectiveness. For example, Du et al. [9] used Point Grey industrial cameras to establish a high-throughput vegetable phenotype platform based on greenhouses, which could quickly evaluate various characteristics of lettuce varieties. Li et al. [10] used Sony JV205 cameras to shoot immature apples in natural scenes and proposed a watershed segmentation algorithm combined with local minimum points to obtain the contour, then extracted points on the contour by convex hull to reconstruct the fruit contour and measure the fruit diameter, with a root mean square error of 2.27 mm. Zhao et al. [11] applied an image acquisition sensor to an apple sorter and calculated the maximum diameter of apples after denoising and edge detection, which improved the accuracy of the sorter. In the field of edible fungi, Yu et al. [12] put forward a region marking technique based on a sequential scanning algorithm, which scanned the binarized images twice to realize the independent segmentation of Agaricus bisporus. Sun et al. [13] put forward a "submerged method" to detect Agaricus bisporus by analyzing the depth image, and obtained the diameter by using the Hough circle detection algorithm, with a detection rate of over 89% and a diameter error rate of 2.15–3.15%. Lu et al. [14] adopted a new image measurement method for Agaricus bisporus based on YOLO v3 to realize its segmentation and detection.
All these methods discussed above had good effects under the corresponding conditions and laid a solid foundation for the phenotypic study of edible fungi.
The mushroom cap of Oudemansiella raphanipies is approximately round—an important feature in its growth process. Therefore, the circle can be fitted and the diameter can be used as a parameter to represent its size. However, because the shape, color, and environment of Oudemansiella raphanipies are different from those examined in the studies above, the existing methods cannot be applied directly.
In this paper, an online monitoring system was developed based on YOLO v4 and a distance filtering algorithm, targeting the morphological characteristics of Oudemansiella raphanipies caps. In this way, in situ online measurement of the number of Oudemansiella raphanipies and the sizes of their caps in the mushroom bed was realized with good results, enabling the mushrooms to be monitored throughout the whole cultivation cycle.

2. Materials and Methodology

2.1. Data Processing

2.1.1. Data Acquisition

The experimental data were collected in the intelligent mushroom greenhouse of the training base of the School of Biosciences and Bioengineering at Jiangxi Agricultural University, and at two mushroom factories in Zhangshu City and Nanchang City (Jiangxi Province, China). To realize the measurement of caps in a natural scene, the greenhouse environment was controlled in exactly the same way as the mushroom factory by an Industrial PC (IPC, Intel(R) i5-6200U CPU, 12 GB RAM, without GPU). The temperature of the greenhouse was 22 ± 2 °C, with humidity of 90 ± 2% and illumination of 300 ± 20 lux. Oudemansiella raphanipies was cultivated in a mushroom bed covered with soil, with X and Y tracks installed above the bed and the camera (Daheng MER-500-14GM, lens HN-0619-5M) placed on the tracks. The lens shot vertically downward, 70 cm away from the mushroom bed soil, and could move left and right, and back and forth, along the guide rails under the control of a computer program (Figure 1). To monitor the whole growth process of Oudemansiella raphanipies (from 24 February to 25 March 2022), the system automatically captured an image every 20 min, and a total of 3000 images (resolution of 2592 × 1944) were obtained. In order to improve the generalization ability of the model, 410 images (resolution of 4608 × 3456) of Oudemansiella raphanipies in different locations were collected with mobile phones (Vivo Z5x) at the mushroom factories (Zhangshu City on 12 January 2021 and Nanchang City on 10 December 2021), which diversified the dataset. Figure 2 shows the overall flowchart of the current research.

2.1.2. Histogram Equalization

As the light in the mushroom room was dim, the pictures taken by the camera were dark, which was not conducive to subsequent processing; so, the images of Oudemansiella raphanipies first needed to be enhanced. Histogram equalization is a commonly used method to enhance brightness and contrast [15]. Its basic principle is to change the gray histogram of the original image from a concentrated range to a uniform distribution over the whole gray range. Through histogram equalization, the image is stretched nonlinearly, so that its dynamic range is increased and the image becomes clearer and brighter.
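The paper does not give code for this step; a minimal NumPy sketch of histogram equalization for an 8-bit grayscale image (the function name and toy image below are illustrative, not the authors' implementation) could look like this:

```python
import numpy as np

def equalize_hist(gray):
    """Histogram-equalize an 8-bit grayscale image (H x W uint8 array)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                # first non-zero CDF value
    total = gray.size
    # Map each gray level through the normalized CDF to spread the histogram
    lut = np.round((cdf - cdf_min) / max(total - cdf_min, 1) * 255).astype(np.uint8)
    return lut[gray]

# A dark, low-contrast image: values squeezed into [40, 80]
img = np.linspace(40, 80, 64, dtype=np.uint8).reshape(8, 8)
out = equalize_hist(img)
```

After equalization, the narrow [40, 80] band is stretched across the full [0, 255] range, which is the brightening effect described above.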

2.1.3. Data Augmentation

Although 3000 pictures were taken in the mushroom greenhouse, there was little difference between adjacent time points owing to the short shooting interval. Therefore, a total of 530 distinct pictures were selected as the final dataset. Manual labeling was conducted with labeling software, and the dataset was then divided into a training set, a validation set, and a test set at a ratio of 8:1:1. In order to improve the generalization ability of the model, data augmentation operations such as adding noise, cropping, brightening, dimming, and scaling were performed on the images in the training set. After the data augmentation, 3700 images were obtained.
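The augmentation operations listed above can be sketched roughly as follows (a hedged NumPy illustration; the `augment` helper and its specific noise level, brightness factors, and crop margins are invented for this example, not the authors' settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return several augmented copies of an image (H x W x 3, uint8),
    mirroring the operations named in the text (illustrative sketch)."""
    out = {}
    f = img.astype(np.float32)
    out["noisy"]  = np.clip(f + rng.normal(0, 10, img.shape), 0, 255).astype(np.uint8)
    out["bright"] = np.clip(f * 1.3, 0, 255).astype(np.uint8)
    out["dim"]    = np.clip(f * 0.7, 0, 255).astype(np.uint8)
    h, w = img.shape[:2]
    out["crop"]   = img[h // 8: h - h // 8, w // 8: w - w // 8]  # central crop
    out["scaled"] = img[::2, ::2]                                # naive 0.5x downscale
    return out

img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
aug = augment(img)
```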

2.2. Cap Location Based on Improved YOLO v4 Model

Through the analysis of the obtained images, it was found that the key to constructing the fitting circle was to accurately separate the outline of the fungus caps from the complex planting environment, which would directly affect the accuracy of the subsequent measurement results. For this reason, firstly, the identification and location of Oudemansiella raphanipies were considered so as to obtain the coordinates and sizes of each sample; then, the noise was removed by distance filtering to obtain the contour range of the fitting circle, and finally obtain the best center and fitting circle by using the constrained least square method.
Compared with YOLO v1, YOLO v2, and YOLO v3, the YOLO v4 algorithm [16] achieves a good balance of speed and accuracy. It uses CSPDarknet53 as the backbone feature extraction network to convolve the input images. On the basis of Darknet53, the CSPNet structure was introduced, which increases the learning ability of the network and speeds up the calculation. Meanwhile, the SPP structure is used to expand the receptive field, and the PANet structure is applied to better fuse the features.
For newborn Oudemansiella raphanipies, the cap is small, and the traditional YOLO v4 algorithm cannot locate such caps accurately. CBAM [17] is an attention mechanism module that focuses on local information in the target images. Based on learning, CBAM assigns different weights to each channel of the feature maps, making the algorithm pay more attention to the target area of interest and suppress useless information. Given a feature map, the CBAM module infers attention maps along two independent dimensions, channel and space, and then multiplies the attention maps by the input feature map to perform adaptive feature refinement. Many studies have shown that adding the CBAM module to the YOLO v4 model can improve both the detection of small targets and the accuracy of the algorithm [18]. However, there is no definite rule for where to add the CBAM module, which must be determined according to the specific problem in practice. After many attempts, it was found that adding a CBAM module to each convolution block in the CSPDarknet53 backbone network improved the detection performance of the YOLO v4 model. The structure of the improved YOLO v4 model is shown in Figure 3.
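As a rough illustration of how CBAM refines a feature map, the following NumPy sketch applies a channel gate and then a spatial gate. It deliberately simplifies the real module (which uses a two-layer shared MLP with channel reduction for the channel branch and a 7×7 convolution for the spatial branch); the `w_mlp` weight here is a toy stand-in:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(x, w_mlp):
    """Simplified CBAM on a feature map x of shape (C, H, W).
    w_mlp: (C, C) weight of a toy shared MLP (no hidden layer, for brevity)."""
    C, H, W = x.shape
    # --- channel attention: pooled descriptors -> shared MLP -> sigmoid gate ---
    avg = x.mean(axis=(1, 2))                        # (C,)
    mx  = x.max(axis=(1, 2))                         # (C,)
    ca  = sigmoid(w_mlp @ avg + w_mlp @ mx)          # (C,)
    x   = x * ca[:, None, None]                      # refine channel-wise
    # --- spatial attention: channel-pooled maps -> sigmoid gate ---
    sa  = sigmoid(x.mean(axis=0) + x.max(axis=0))    # (H, W)
    return x * sa[None, :, :]

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 4, 4))
out = cbam(feat, np.eye(8))
```

Because both gates lie in (0, 1), the module can only rescale (suppress) features; what it learns is where to suppress least, i.e., where to pay attention.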
By training the improved YOLO v4 network, the algorithm model for locating Oudemansiella raphanipies was obtained. The results on the test set are shown in Figure 4, from which it can be seen that the improved YOLO v4 algorithm can identify and locate Oudemansiella raphanipies caps of different sizes precisely; however, the bounding boxes do not fit the caps tightly, so they need further processing. Each bounding box was therefore enlarged by five pixels up, down, left, and right, to ensure that it completely contained the cap for the next operation.

2.3. Background Segmentation and Edge Detection

To fit the sizes of the caps accurately, the outlines of the caps need to be obtained first. Currently, there are two main methods for segmenting objects: the traditional morphological method and the end-to-end deep-learning-based method. The deep learning method usually requires a large amount of annotated data to achieve superior performance of the neural network, but labeling data is labor-intensive and dull. In contrast, the traditional morphological method manually selects features based on the characteristics of the target objects and so is easier to use when the dataset is small. Besides, different Oudemansiella raphanipies cultivation factories have a similar environment, and so it is easy to find the common features for the segmentation. Therefore, using the traditional method to extract the cap profiles was the best choice for our project.
It was found that the cap color of Oudemansiella raphanipies obtained by YOLO v4 algorithm was similar to that of the covering soil, but some differences still exist. Therefore, it was considered to transform RGB color space into another color space for further processing; however, how to choose the best space is still a hard problem. Different from RGB images obtained by cameras, HSI model is usually used to reflect people’s visual perception, which is more suitable for detecting and analyzing the color characteristics of images. Equations (1)–(3) are used to transfer the images from RGB color space to HSI color space.
$$H=\begin{cases}\arccos\dfrac{[(R-G)+(R-B)]/2}{\left[(R-G)^2+(R-B)(G-B)\right]^{1/2}}, & B\le G\\[2ex] 2\pi-\arccos\dfrac{[(R-G)+(R-B)]/2}{\left[(R-G)^2+(R-B)(G-B)\right]^{1/2}}, & B>G\end{cases}\qquad(1)$$

$$S=1-\frac{3}{R+G+B}\min(R,G,B)\qquad(2)$$

$$I=\frac{1}{3}(R+G+B)\qquad(3)$$
where H represents hue, S represents saturation, I represents intensity, R represents red value, G represents green value, and B represents blue value. Three different color component diagrams of H, S, and I are obtained (Figure 5), respectively. It can be seen that the cap and background are not distinguished in S and I component diagrams, but in H component, the cap and background are quite different.
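Under Equations (1)–(3), the conversion for a single normalized RGB pixel can be sketched as follows (the small epsilon guarding against division by zero for gray pixels is an implementation detail added here, not part of the paper):

```python
import numpy as np

def rgb_to_hsi(r, g, b, eps=1e-9):
    """Convert normalized RGB values (floats in [0, 1]) to HSI via Eqs. (1)-(3)."""
    num = ((r - g) + (r - b)) / 2.0
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps  # eps avoids 0-division
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = theta if b <= g else 2.0 * np.pi - theta           # Eq. (1)
    s = 1.0 - 3.0 / (r + g + b + eps) * min(r, g, b)       # Eq. (2)
    i = (r + g + b) / 3.0                                  # Eq. (3)
    return h, s, i

h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)   # pure red: hue 0, full saturation
```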
The threshold value of the H component was determined by the OTSU algorithm [19], but a lot of noise still existed in the fungus cap area and the background area (Figure 6a). To obtain the edges of the fungus caps accurately, it was necessary to remove this noise. Statistical data showed that the fungus cap area was usually not less than 80 pixels; thus, connected domains with fewer than 80 pixels in the binary images were removed as noise (Figure 6b), and the result was then superimposed on the original images as a mask (Figure 6c). Finally, the Canny operator [20] was used for edge detection to obtain the basic contour information (Figure 6d). The outline of the cap could thus be gained through a series of operations on the H component, but it still needed further processing.
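A self-contained sketch of this step (OTSU thresholding followed by removal of connected regions under 80 pixels) might look like the following; the paper presumably used OpenCV's built-in equivalents, and the final Canny step is omitted here for brevity:

```python
import numpy as np
from collections import deque

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (OTSU)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = gray.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * hist[:t]).sum() / w0
        m1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2 / total ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def remove_small_regions(binary, min_area=80):
    """Zero out 4-connected foreground components smaller than min_area."""
    out = binary.copy()
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    for sy, sx in zip(*np.nonzero(binary)):
        if seen[sy, sx]:
            continue
        comp, q = [(sy, sx)], deque([(sy, sx)])
        seen[sy, sx] = True
        while q:                                 # BFS over the component
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    comp.append((ny, nx))
                    q.append((ny, nx))
        if len(comp) < min_area:
            for y, x in comp:
                out[y, x] = 0
    return out

# Bimodal toy image: dark background (~50) with a bright 12x12 "cap" (~200)
img = np.full((32, 32), 50, dtype=np.uint8)
img[10:22, 10:22] = 200
img[2, 2] = 200                                  # an isolated bright noise pixel
mask = (img >= otsu_threshold(img)).astype(np.uint8)
clean = remove_small_regions(mask, min_area=80)
```

The isolated pixel is dropped (component size 1 < 80) while the 144-pixel cap survives, matching the 80-pixel rule stated above.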

2.4. Determination of Fungal Cap Shape

The size of the cap is an important basis for judging the growth state of Oudemansiella raphanipies. To calculate the cap diameter, a fitting circle with small errors must be found first. As a classical algorithm, the Circle Hough Transform (CHT) [21] has been widely used in the measurement of fruits and Agaricus bisporus [22,23]. Its basic principle is to calculate the local gradient of each pixel after edge detection to determine candidate center coordinates, and then select the best radius according to the distances between the candidate center and all non-zero pixels. However, because the color of Oudemansiella raphanipies is similar to that of the background, the contour obtained by edge detection is incomplete and often mixed with a lot of noise (Figure 6d), and large errors occur when determining the circle center from the local gradient. In addition, Oudemansiella raphanipies caps vary in size, so it is difficult to set uniform parameters, and a good result cannot be obtained by directly using the Circle Hough Transform as a detection algorithm. In view of this, this paper designed a fungal cap fitting algorithm based on distance filtering, which adopts two steps of "rough selection + fine tuning" to determine the center and contour of the fitting circle.

2.4.1. Distance Filtering

According to the previous analysis, only a rough outline of the fungal cap could be obtained by segmenting the image and extracting the edges, and it also contained many parts not belonging to the cap that were difficult to remove by classical methods. It was observed that, although there was a lot of noise after edge detection, most edge points, occupying a large proportion, belonged to the fungal cap edge and formed a circular arc, so their distances to the center of the cap were roughly equal, whereas the noise caused by the background, such as the covering soil, had no such characteristic. Therefore, based on the distance from the edge points to the center of the circle, a filter could be constructed to remove the noise and obtain the outline of the fungal cap. On the other hand, the YOLO algorithm could detect two kinds of fungal cap images: complete caps (Figure 1b,c) and partially blocked caps (Figure 1a,d). In either case, the expanded bounding box would contain the target area as much as possible, though it could not accurately circumscribe the cap. Since the cap is approximately round, the geometric center of the bounding box was taken as the center of the cap. In summary, as long as an appropriate threshold was set, edge contours obviously not belonging to the cap could be removed; the upper and lower boundaries of the distance band-pass filter were determined as follows:
STEP 1: The Euclidean distance sequence D from each edge pixel to the geometric center O1(0, 0) of the bounding box after edge detection was calculated according to Equation (4).
$$D_i=\sqrt{x_i^2+y_i^2},\quad i=1,2,3,\ldots,N\qquad(4)$$
where (xi, yi) represents the coordinates of the i-th edge point and N represents the total number of edge points in the image.
A binary set {(mj, nj) | nj = count(mj), j = 1, 2, 3, …, k} was then constructed, where k represents the number of elements in the set, mj represents a distance value occurring in the sequence D, and nj represents the number of pixels at distance mj in D; the binary set was then sorted in ascending order of mj.
STEP 2: The threshold was set as θ; then argmax f(p, q) gave the upper and lower boundaries of the filter, where f(p, q) was defined as follows:
$$f(p,q)=\frac{\frac{1}{N}\sum_{t=p}^{q}n_t}{m_q-m_p},\qquad q>p\ \ \text{and}\ \ \frac{1}{N}\sum_{t=p}^{q}n_t\ge\theta$$
where p and q represent the serial numbers of elements in the binary set.
STEP 3: With O1 as the center, circles of radii mp and mq were taken as the inner and outer edges, respectively, and points outside this band were filtered out.
After repeated experiments, it was found that a θ value of 75% gave better results. Figure 7a,d show the original images of non-occluded and occluded fungal caps, respectively. Outline A and outline B in Figure 7b,e are the inner and outer edge contours determined by the algorithm; the region between outline A and outline B was retained as the contour range of the fungal cap after rough selection (Figure 7c,f).
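The three steps above can be sketched as follows (a hedged NumPy illustration; rounding distances to integers to build the binary set is an assumption about how repeated distances arise in pixel coordinates):

```python
import numpy as np

def distance_band(points, theta=0.75):
    """Find the [m_p, m_q] distance band maximizing f(p, q): the narrowest
    band that still keeps at least a fraction theta of the edge points.
    points: (N, 2) edge coordinates relative to the box center O1 = (0, 0)."""
    d = np.round(np.hypot(points[:, 0], points[:, 1])).astype(int)  # STEP 1
    m, n = np.unique(d, return_counts=True)     # sorted distances + counts
    N, k = d.size, m.size
    best, best_f = None, -1.0
    for p in range(k):                          # STEP 2: maximize f(p, q)
        for q in range(p + 1, k):
            frac = n[p:q + 1].sum() / N
            if frac >= theta:
                f = frac / (m[q] - m[p])
                if f > best_f:
                    best_f, best = f, (int(m[p]), int(m[q]))
    return best                                 # STEP 3: keep only this band

# Synthetic cap edge: 100 points near radius 30, plus background noise
ang = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
radii = np.repeat([29.0, 30.0, 31.0], [30, 40, 30])
edge = np.c_[radii * np.cos(ang), radii * np.sin(ang)]
noise = np.r_[np.c_[5.0 * np.cos(ang[:10]), 5.0 * np.sin(ang[:10])],
              np.c_[60.0 * np.cos(ang[:10]), 60.0 * np.sin(ang[:10])]]
lo, hi = distance_band(np.vstack([edge, noise]))
```

On this synthetic example the band settles on [29, 31], keeping the arc at radius ~30 and discarding the noise at radii 5 and 60, which is exactly the behavior described above.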

2.4.2. Constrained Least Square Fitting Contour

The contour edges obviously not belonging to the fungal cap area could be filtered out by the distance filtering, and the approximate range of the radius could then be determined. However, as the circle center was only estimated, the fitting circle obtained at this stage was not accurate enough; thus, it needed further optimization. Since the approximate center O1, with coordinates (0, 0), had been obtained, the candidate center set Cen could be constructed in a specified range around it:
$$Cen=\left\{O_1,O_2,\ldots,O_{(2h+1)^2}\right\}\qquad(5)$$
where $O_1, O_2, \ldots, O_{(2h+1)^2}$ represent the candidate circle centers, with their x and y coordinates set as follows:
$$x\in\mathbb{Z},\ -h\le x\le h;\qquad y\in\mathbb{Z},\ -h\le y\le h$$
where $\mathbb{Z}$ denotes the set of integers and h is a positive integer. The distance from each point on the contour to each center in the set Cen was calculated according to Equation (4). To make the fitting circle accurate, the essence is to make the distance from the fitted contour to each contour point as short as possible, that is, the least square solution. This method meets the requirements in most cases, but when a few target pixels deviate greatly from the original contour, it introduces errors. Through observation, it was found that, after distance filtering, the contour of the fungal cap was clear and most points could form a complete fitting contour. Since the circle center range was roughly known, a constrained-radius criterion based on least squares could be defined as follows:
$$G=\frac{\sum\sqrt{(x_f-x_s)^2+(y_f-y_s)^2}}{NUM}\qquad(6)$$
where (xf, yf) represent the coordinates of edge points on the fitted contour and (xs, ys) represent the coordinates of edge points on the existing contour. NUM represents the number of edge points through which the fitted contour passes. Under the premise that the fitted circle at a given candidate center passes through the largest number of edge points, a smaller G value means a smaller total error; that is, the best circle center is the one for which the fitted contour passes through the most edge points while the error with respect to the other points is smallest. The specific steps were as follows:
STEP 1: Determine the candidate center set according to Formula (5); then, according to Equation (4), calculate the pixel distance from each candidate center to each pixel on the edges between Outline A and Outline B.
STEP 2: Construct a binary set {(u, v) | v = count(u)} for the distance sequence of each candidate center, where u represents a distance value occurring in the sequence and v represents the number of pixels at distance u; then calculate the G value of each element in the set according to Formula (6).
STEP 3: Select the smallest G value, take its corresponding candidate center as the center of the circle and u as the radius, and construct the best fitting circle.
Taking h = 1 as an example, there were 9 candidate center points. Figure 8a,b show the fitting results of the cap with the smallest G value.
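Steps 1–3 can be sketched as follows. Note that the reading of G here (summed deviation from the modal rounded radius u, divided by the count of points at u) is a simplified interpretation of Equation (6), not the authors' exact code:

```python
import numpy as np

def fit_cap_circle(contour, h=1):
    """Grid-search circle fit over the (2h+1)^2 candidate centers around the
    box center (0, 0), picking the center/radius pair with the smallest G."""
    best = None
    for cx in range(-h, h + 1):
        for cy in range(-h, h + 1):
            d = np.hypot(contour[:, 0] - cx, contour[:, 1] - cy)
            m, n = np.unique(np.round(d).astype(int), return_counts=True)
            u = int(m[np.argmax(n)])      # radius passing through the most points
            num = n.max()                 # NUM: points on the fitted circle
            g = np.abs(d - u).sum() / num
            if best is None or g < best[0]:
                best = (g, (cx, cy), u)
    return best[1], best[2]

# Contour points on a circle of radius 20 centered at (1, -1),
# i.e., offset by one pixel from the estimated box center
ang = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
pts = np.c_[1.0 + 20.0 * np.cos(ang), -1.0 + 20.0 * np.sin(ang)]
center, radius = fit_cap_circle(pts, h=1)
```

The search recovers the offset center (1, -1) and radius 20, illustrating how the "rough selection + fine tuning" scheme corrects the one-pixel error in the estimated center.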

3. Results

The whole experiment was based on the PyTorch framework and programmed in Python; the computer used for training had an Intel(R) Xeon(R) Silver 4112 CPU and an NVIDIA Quadro RTX 5000 GPU, and the system environment was Windows 10. As mentioned above, the computer used for measuring caps was the IPC.

3.1. Result Comparison of Target Testing

After repeated comparisons, the picture size was set to 416 × 416 pixels, and the training parameters of the improved YOLO v4 model were set as follows: the batch size was 8, the number of epochs was 100, the weight decay was 0.0005, and the learning rate was 0.0001. mAP is the mean of the average precision over all categories, an index of detection precision. F1 is the harmonic mean of precision and recall, reflecting the robustness of the model. On this basis, this experiment used mAP and F1 to evaluate the model. The calculation formulae are as follows:
$$F1=\frac{2PR}{P+R}$$

$$mAP=\int_0^1 P(R)\,dR$$
where P represents precision and R represents recall, calculated as follows:
$$P=\frac{TP}{TP+FP}$$

$$R=\frac{TP}{TP+FN}$$
where TP represents a positive sample identified as positive, FP represents a negative sample identified as positive, and FN represents a positive sample identified as negative.
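For concreteness, these formulae translate directly into code; the counts below are illustrative, not results from the paper:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from the TP/FP/FN counts defined above."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# Hypothetical counts: 90 correct detections, 10 false alarms, 6 misses
p, r, f1 = detection_metrics(tp=90, fp=10, fn=6)
```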
To verify the performance of the improved YOLO v4 network, the network was tested and the results were analyzed statistically. Figure 9 shows the comparison of the YOLO v4 networks before and after the improvement under different IOU values. It can be seen from Figure 9 that adding CBAM effectively reduces missed and false detections, increasing AP50, AP75, and AP50:95 by 2.33%, 7.64%, and 2.86%, respectively.
The conf-thresh (target confidence threshold) was taken as 0.5, and the IOU was taken as 0.5. The results are shown in Table 1. The precision, recall, and F1 of the improved model increased. Specifically, the precision increased by 0.7%, the recall increased by 3.35%, and the F1 score increased by 2%.
Table 2 shows that when IOU is 0.5, the effects of the improved YOLO v4 model are compared with those of other models. The mAP of the improved model (ours) reached 95.36%.
Figure 10 is a comparison before and after the improvement. Figure 10a shows detection under overlapping conditions, and Figure 10b shows detection of small targets. The blue boxes represent false detections, and the red boxes represent missed detections. The first row shows the detection results of the YOLO v4 algorithm without improvement, and the second row shows the detection results after the improvement. The results show that the improved YOLO v4 model greatly improved the detection of small and overlapping targets. For some smaller objects with colors similar to the background, the improved model (ours) still could not detect them, but the number of such misses was greatly reduced.

3.2. Circle Fitting Results

Statistical analysis showed that the centers of most YOLO bounding boxes deviated from the centers of the fungi by no more than 3 pixels; so, a candidate circle-center area with h = 3 was selected for the fungal cap fitting. The effects are shown in Figure 11. Although some deformed fungi in the pictures had relatively poor fitting effects because of their shapes, it can be seen from the whole image that the algorithm in this paper fits the caps of Oudemansiella raphanipies well.

3.3. Comparison of Fitting Effects

To verify the effects of the proposed algorithm, 40 Oudemansiella raphanipies of different sizes in the test set were randomly selected, and the minimum circumscribed circle was manually labeled as the ground truth. Caps larger than 100 pixels in diameter were already mature and had been picked, and caps smaller than 20 pixels were too young to pluck; therefore, the diameters of the fungal caps in the test were mostly distributed between 20 and 100 pixels, as shown in Figure 12.

3.3.1. Comparison with Ground Truth

The root mean square error (RMSE) of the diameter measurement was 2.97 pixels; the mean absolute error (MAE) was 2.47 pixels, with no missing detection; and the relative error was 0~10.71%. The specific results are shown in Figure 13. It can be seen that the largest error appears in the range of 80–100 pixels, but the maximum value of error is less than 6 pixels. This is chiefly because most fungal caps in this range are occluded.

3.3.2. Comparison with Circle Hough Transform (CHT)

The HoughCircles function of OpenCV was used for the Circle Hough Transform (CHT) detection algorithm; five parameters of this function (minDist, param1, param2, minRadius, maxRadius) need to be adjusted manually. minDist is the minimum distance between the centers of adjacent circles; since there is only one fungal cap in each image after target detection, this parameter does not affect the final result. minRadius and maxRadius are the minimum and maximum radii of the circle, set to 5 and 200, respectively, according to the characteristics of the fungal caps. param1 is the upper threshold of the Canny edge detection, and param2 is the accumulator threshold used when detecting the circle center and radius; both need to be determined according to the actual situation.
Figure 14 illustrates the results of the CHT detection algorithm with different parameters; Figure 14a shows the error statistics of the fitting results. It can be seen from Figure 14a that the algorithm is sensitive to the parameter selection. When the parameters are chosen inappropriately, detections are easily missed; thus, the fitting error varies considerably. Figure 14b compares the actual fitting results on different types of samples. The first column shows the fitting results of our algorithm; columns 2–5 show the fitting results of the CHT algorithm with different parameters; and rows 1–3 show a small-diameter cap, an occluded cap, and a large-diameter cap, respectively. From the results in Figure 14b (in pixels), it can be seen that the best CHT parameters differ between fungal caps. In some cases, the position of the fitted circle may differ considerably from that of the target. Therefore, the parameter selection of the CHT detection algorithm is relatively difficult, and the optimal parameters cannot be selected for each detected object in practical applications.
According to Figure 14a, the parameters (param1 = 100, param2 = 10) were selected because they produced no missed detections, with an RMSE of 10.68 pixels and an MAE of 8.16 pixels. The results of the CHT detection algorithm in fitting different fungal caps are compared with those of the improved method (ours) in Figure 15; it can be seen that the method proposed in this paper is stable and accurate for measuring fungal caps of different diameters.

3.3.3. Comparison with Other Algorithms

In addition to CHT detection, another common method, the least square method (LS), is often used to fit the extracted edge contour by minimizing the sum of squared distances between the edge points and the fitting circle. In this paper, compared with manual measurement on the images, the RMSE of the least square method was 6.37 pixels and the MAE was 5.3 pixels. Considering that the least square method is susceptible to large noise points, and that Random Sample Consensus (Ransac) [27] can fit a more accurate model from data containing a large number of outliers, the Ransac algorithm was also used for the fitting. Compared with manual measurement, the RMSE of the Ransac algorithm was 5.89 pixels and the MAE was 4.82 pixels.
Figure 16 shows the fitting results of the improved algorithm (ours), the least square algorithm, and the Ransac algorithm on different fungal caps. It can be seen from Figure 16a that the error of Ransac is smaller than that of the least square method, showing that Ransac can restrain the influence of large noise points on the final results, but its fitting of occluded fungal caps is still poor. On the whole, the effects of the three algorithms are close to each other on fungal caps of 40–60 pixels, but the error of the improved algorithm (ours) is still lower than that of the others, which further confirms its effectiveness.

3.4. Analysis of Diameter Measurement

In order to obtain the final diameter of Oudemansiella raphanipies caps, the pixel distance must be converted into an actual distance. During image capture, the shooting angle, the deflection of light passing through the lens, and the non-parallelism between the lens and the imaging plane introduce some distortion into the photos. In this paper, Zhang Zhengyou's calibration method [28] was used to calibrate the images. In the on-site environment, the height of the Oudemansiella raphanipies to be identified ranged from 12 to 62 mm. As the camera used in the experiment was a monocular industrial camera, depth values could not be obtained; therefore, 37 mm above the covering soil was selected as the average height for calibration. With this calibration, each pixel in the image represented 0.241 mm.
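The conversion, and the scale error incurred when a cap lies off the 37 mm reference plane, can be sketched under a pinhole-camera assumption. Only the 0.241 mm/pixel scale and the 37 mm reference height come from the text; the camera-to-soil distance in `scale_error` is a hypothetical value for illustration:

```python
# Pixel-to-millimetre conversion at the calibrated reference plane.
MM_PER_PIXEL = 0.241    # from the paper's Zhang calibration
REF_HEIGHT_MM = 37.0    # calibrated object plane above the covering soil

def diameter_mm(diameter_px, mm_per_pixel=MM_PER_PIXEL):
    """Convert a fitted diameter from pixels to millimetres."""
    return diameter_px * mm_per_pixel

def scale_error(true_height_mm, camera_height_mm=1000.0):
    """Relative measurement error for a cap whose true height differs
    from the reference plane, under a pinhole model where apparent
    size scales inversely with camera-to-object distance.
    camera_height_mm (camera-to-soil distance) is a hypothetical
    value, not given in the paper."""
    return ((camera_height_mm - REF_HEIGHT_MM)
            / (camera_height_mm - true_height_mm) - 1.0)
```

Under these assumptions, caps above the reference plane (closer to the camera) are slightly overestimated and caps below it underestimated, which is why a mid-range height of 37 mm was chosen for the 12 to 62 mm crop.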
A total of 55 strains of Oudemansiella raphanipies cultivated in the laboratory and at a mushroom factory (Zhangshu City, Jiangxi Province, China) were randomly selected, and their cap diameters were measured to evaluate the algorithm in practical application. The true cap diameters were measured manually with vernier calipers; to reduce measurement error, each cap was measured three times and the average was taken. Figure 17 compares the manually measured ground truth with the values measured by our improved method: the RMSE is 0.96 mm, the MAE is 0.77 mm, and the R2 (coefficient of determination) is 0.95. Table 3 lists the average processing time per image in a monitoring cycle for the different algorithms running on an industrial PC (IPC). Our algorithm took slightly more time than the CHT and LS algorithms but less than Ransac, while achieving higher accuracy. Moreover, since the observation cycle was 20 min, the runtime of our algorithm easily met the requirements. Considering both accuracy and time consumption, our algorithm can estimate the diameters of Oudemansiella raphanipies caps accurately and in a timely manner, meeting the requirements of actual engineering applications.
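The reported error statistics can be reproduced from paired manual and estimated diameters with a few lines of NumPy. This is a generic sketch using the standard definitions of RMSE, MAE, and R2; the function and variable names are ours:

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """RMSE, MAE, and coefficient of determination (R^2) between
    manual caliper measurements and algorithm estimates, the three
    statistics used to report 0.96 mm, 0.77 mm, and 0.95 above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err**2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err**2)                          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean())**2)     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2
```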

4. Conclusions

(1) An improved YOLO v4 network with the CBAM attention module was proposed; the experimental results showed that the P, R, and F1 of this model were 93.6%, 90.17%, and 0.92, respectively, superior to the other models on the Oudemansiella raphanipies dataset.
(2) Based on distance filtering and constrained least squares, a fungal cap fitting algorithm was proposed to find the best-fitting contour in the processed edge images. Test results showed that this method was effective, accurate, and stable.
(3) The MAE of the final result was 0.77 mm, which satisfies the intended applications. For example, fungi breeders can acquire the morphological data of Oudemansiella raphanipies quickly and completely without laborious manual measurement, greatly improving the efficiency of breeding. In the future, structure from motion or similar methods will be considered to obtain three-dimensional information about the objects, and other sensors (such as hyperspectral, near-infrared, and Raman spectroscopy) will be tried to detect internal qualities such as invisible defects.

Author Contributions

Conceptualization, H.Y. and D.H.; methodology, J.X.; software, J.X.; validation, H.Y. and Y.W.; formal analysis, H.Y. and W.Y.; writing—original draft preparation, J.X.; writing—review and editing, H.Y., D.H., and M.L.; visualization, J.X.; project administration, W.Y.; funding acquisition, W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the National Key Research and Development Project (No. 2020YFD1100603), Natural Science Foundation of Jiangxi Province (Grant No. 20212BAB202015) and the National Natural Science Foundation of China (NSFC) (Nos. 32070023 and 32060014).

Data Availability Statement

Given that the data used in this study were self-collected and the dataset is still being improved, the dataset is not available at present.

Acknowledgments

Special thanks to the reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ba, D.M.; Ssentongo, P.; Beelman, R.B.; Muscat, J.; Gao, X.; Richie, J.P. Higher mushroom consumption is associated with lower risk of cancer: A systematic review and meta-analysis of observational studies. Adv. Nutr. 2021, 12, 1691–1704.
  2. Yin, H.; Yi, W.L.; Hu, D.M. Computer vision and machine learning applied in the mushroom industry: A critical review. Comput. Electron. Agric. 2022, 198, 107015.
  3. Yuan, X.H.; Fu, Y.P.; Xiao, S.J.; Li, C.T.; Wang, D.; Li, Y. Research progress on mushroom phenotyping. Mycosystema 2021, 40, 721–742.
  4. Yang, K.L.; Zhong, W.Z.; Li, F.G. Leaf Segmentation and Classification with a Complicated Background Using Deep Learning. Agronomy 2020, 10, 1721.
  5. Su, F.; Zhao, Y.; Shi, Y.; Zhao, D.; Wang, G.; Yan, Y.; Zu, L.; Chang, S. Tree Trunk and Obstacle Detection in Apple Orchard Based on Improved YOLOv5s Model. Agronomy 2022, 12, 2427.
  6. Moreira, G.; Magalhães, S.A.; Pinho, T.; dos Santos, F.N.; Cunha, M. Benchmark of Deep Learning and a Proposed HSV Colour Space Models for the Detection and Classification of Greenhouse Tomato. Agronomy 2022, 12, 356.
  7. Li, G.; Suo, R.; Zhao, G.N.; Gao, C.Q.; Fu, L.S.; Shi, F.X.; Dhupia, J.; Li, R.; Cui, Y.J. Real-time detection of kiwifruit flower and bud simultaneously in orchard using YOLOv4 for robotic pollination. Comput. Electron. Agric. 2022, 193, 106641.
  8. Jia, W.K.; Zhang, Z.H.; Shao, W.J.; Ji, Z.; Hou, S.J. RS-Net: Robust segmentation of green overlapped apples. Precis. Agric. 2021, 23, 492–513.
  9. Du, J.J.; Fan, J.C.; Wang, C.Y.; Lu, X.J.; Zhang, Y.; Wen, W.L.; Liao, S.J.; Yang, X.Z.; Guo, X.Y.; Zhao, C.Y. Greenhouse-based vegetable high-throughput phenotyping platform and trait evaluation for large-scale lettuces. Comput. Electron. Agric. 2021, 186, 106193.
  10. Li, W.Y.; Chen, M.X.; Xu, S.P.; Chen, X.Y.; Qian, J.P.; Du, S.F.; Li, S.; Li, M. Diameter measurement method for immature apple based on watershed and convex hull theory. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2014, 30, 207–214.
  11. Zhao, D.D.; Ai, Y. Research on Apple Size Detection Method Based on Computer Vision. J. Agric. Mech. Res. 2022, 44, 206–209.
  12. Yu, G.H.; Luo, J.M.; Zhao, Y. Region marking technique based on sequential scan and segmentation method of mushroom images. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2006, 22, 139–142.
  13. Sun, J.W.; Zhao, K.X.; Ji, J.T.; Zhu, X.F.; Ma, H. Detection and Diameter Measurement Method of Agaricus Bisporus Based on “Submerged Method”. J. Agric. Mech. Res. 2021, 43, 28–33.
  14. Lu, C.P.; Liaw, J.J. A novel image measurement algorithm for common mushroom caps based on convolutional neural network. Comput. Electron. Agric. 2020, 171, 105336.
  15. Choi, W.; Huh, H.; Tama, B.A. A neural network model for material degradation and diagnosis using microscopic images. IEEE Access 2019, 7, 92151–92160.
  16. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
  17. Woo, S.; Park, J.; Lee, J.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the 2018 European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
  18. Yang, B.; Gao, Z.; Gao, Y.; Zhu, Y. Rapid Detection and Counting of Wheat Ears in the Field Using YOLOv4 with Attention Module. Agronomy 2021, 11, 1202.
  19. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
  20. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698.
  21. Yuen, H.K.; Princen, J.; Illingworth, J.; Kittler, J. Comparative study of Hough Transform methods for circle finding. Image Vis. Comput. 1989, 8, 71–77.
  22. Zhou, W.J.; Zha, Z.H.; Wu, J. Maturity discrimination of “Red Globe” grape cluster in grapery by improved circle Hough transform. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2020, 36, 205–213.
  23. Feng, J.H.; Li, Z.W.; Rong, Y.L.; Sun, Z.L. Identification of mature tomatoes based on an algorithm of modified circular Hough transform. J. Chin. Agric. Mech. 2021, 42, 190–196.
  24. Liu, Z.; Lin, Y.; Cao, Y. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021.
  25. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
  26. Egi, Y.; Hajyzadeh, M.; Eyceyurt, E. Drone-Computer Communication Based Tomato Generative Organ Counting Model Using YOLO V5 and Deep-Sort. Agriculture 2022, 12, 1290.
  27. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  28. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
Figure 1. Image acquisition system in the mushroom greenhouse.
Figure 2. Pipelines of measuring algorithm for Oudemansiella raphanipies caps.
Figure 3. Structure illustration of the improved YOLO v4.
Figure 4. The results of locating different size caps by improved YOLO v4.
Figure 5. HSI color component diagrams.
Figure 6. Background removal and edge detection.
Figure 7. Images of original and distance filtering.
Figure 8. Schematic diagram of the best-fitting circle.
Figure 9. Contrast graphs of the improved algorithm results.
Figure 10. Detection results based on YOLO v4 and the improved YOLO v4 (ours) algorithm.
Figure 11. Using the proposed algorithm (ours) to fit mushroom circles of the entire image.
Figure 12. Diameter distribution of caps.
Figure 13. Analysis of fitting effects by the proposed algorithm.
Figure 14. The comparison of CHT algorithm with different parameters.
Figure 15. Fitting effect comparison of the proposed algorithm (ours) and CHT.
Figure 16. The fitting effect comparison between our, the least square and the Ransac algorithms.
Figure 17. The measurement result of caps.
Table 1. Test set experiment results; conf-thresh = 0.5; IOU = 0.5.
Model | P | R | F1
YOLO v4 | 92.9% | 86.82% | 0.90
Improved YOLO v4 (ours) | 93.6% | 90.17% | 0.92
Table 2. The results of different algorithms.
Model | mAP
Swin transformer [24] | 81.36%
Faster R-CNN [25] | 86.20%
YOLO v3 [14] | 89.48%
YOLO v4 [16] | 93.03%
YOLO v5 [26] | 92.52%
Improved YOLO v4 (ours) | 95.36%
Table 3. Comparison of measuring time of different algorithms.
Algorithm | CHT | Ransac | LS | Ours
Average time/s | 4.23 | 4.47 | 4.25 | 4.42
