Article

Machine Vision-Based Chinese Walnut Shell–Kernel Recognition and Separation

Yongcheng Zhang, Xingyu Wang, Yang Liu, Zhanbiao Li, Haipeng Lan, Zhaoguo Zhang and Jiale Ma

1 Modern Agricultural Engineering Key Laboratory at Universities of Education Department of Xinjiang Uygur Autonomous Region, Tarim University, Alaer 843300, China
2 College of Mechanical Electrification Engineering, Tarim University, Alaer 843300, China
3 College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
4 Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming 650500, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 10685; https://doi.org/10.3390/app131910685
Submission received: 24 August 2023 / Revised: 24 September 2023 / Accepted: 25 September 2023 / Published: 26 September 2023
(This article belongs to the Special Issue Recent Trends in Automatic Image Captioning Systems)

Abstract

Walnut shell–kernel separation is an essential step in the deep processing of walnuts, and its shortcomings are a crucial factor limiting the added value and industrial development of walnuts. This study proposes a walnut shell–kernel detection method based on the YOLOX deep-learning network, using machine vision and deep-learning technology to address common issues such as the incomplete shell–kernel separation of current airflow screening and the high costs and low efficiency of manually assisted screening. A dataset was produced with Labelme from images of walnut shells and kernels acquired after shell breaking and was converted into the COCO dataset format. The network was then trained for 110 epochs. At an intersection-over-union (IOU) threshold of 0.5, the average precision (AP), average recall (AR), model size, and floating point operations (FLOPs) were 96.3%, 84.7%, 99 MB, and 351.9, respectively. Compared with the YOLOv3, Faster Region-based Convolutional Neural Network (Faster R-CNN), and Single Shot MultiBox Detector (SSD) algorithms, the AP of the proposed algorithm was higher by 2.1%, 1.3%, and 3.4%, respectively, and the AR was higher by 10%, 2.3%, and 9%, respectively. Walnut shell–kernel detection was also performed under different conditions, such as distinct species, supplementary lighting, and shielding. The model exhibits high recognition and positioning precision and high robustness under these conditions, and its small size is beneficial for migration applications. These results can serve as a technological reference for the development of faster walnut shell–kernel separation methods.

1. Introduction

Walnut shell–kernel separation is a critical procedure in the deep processing of walnuts and a vital link in advancing the walnut industrial chain and its added value [1,2]. After the mechanical breaking of walnuts, the complicated structure and diversified forms of the resulting mixture make it impossible to separate the shell and kernel completely during cleaning [3]. Manual intervention is therefore needed during selection, which incurs high costs and low efficiency [4] and restricts the development of the walnut industry. Shell–kernel recognition is one of the crucial technologies for realising intelligent shell–kernel selection, so analysing walnut shell–kernel detection algorithms based on deep learning is critical for achieving intelligent, mechanised walnut shell–kernel separation.
Walnut shell–kernel separation is the process of separating the shell–kernel mixture produced by the shell-breaking device and collecting the walnut kernels. Many enterprises still adopt manual selection, which involves high labour intensity and separation costs [5]. Chinese and international scholars have investigated the shell–kernel separation of nuts extensively. For instance, Krishnan et al. [6] proposed a magnetic shell–kernel separation technique based on differences in magnetic conductance; however, it introduced new impurities that required further separation, and the kernels were easily contaminated. Wang et al. [7] invented an electrostatic shell–kernel separator for nuts and successfully realised shell–kernel separation based on differences in dielectric properties; because no mechanical force acts on the shell and kernel during separation, the method is not restricted by particle size, shape, or proportion, and it provided a novel solution to the shell–kernel separation of nuts. Nevertheless, these methods presented various challenges, such as complicated separation processes, high costs, low efficiency, and poor guarantees of food quality and safety, which made them difficult to promote. The airflow selection method has a simple device structure and a good separation effect, largely solves these problems, and is a commonly used method in walnut shell–kernel separation. Liu et al. [8], Cao et al. [9], and Li et al. [10] investigated walnut shell–kernel separation based on airflow and achieved relatively good results. However, owing to their diverse sizes, shapes, and windward areas, walnut shells and kernels move in complicated and changing ways in the airflow field, making thorough separation arduous. In particular, the aerodynamic characteristics of walnut shells and kernels partially overlap, which increases the separation difficulty and restricts further improvement of the shell–kernel separation effect. Recently, developments in computer and machine vision technology have provided novel solutions for walnut shell–kernel separation. Wang et al. [11] designed a pecan shell–kernel sorting system based on a fuzzy clustering algorithm and separated shells and kernels after breaking; their results demonstrated a kernel recognition rate above 83%. Jiang et al. [4] classified the shells and kernels of black walnuts using hyperspectral fluorescence imaging and reached a recognition rate of 90.3%. Jin et al. [12] designed an intelligent black walnut shell–kernel recognition system with relatively high separation precision and calculation speed, which showed application potential in walnut shell–kernel separation. However, conventional image recognition technology is hard to apply efficiently in complicated production settings because of its elaborate processing, low recognition precision, and reliance on manually extracted target features [13,14]. By contrast, YOLO-based target detection algorithms achieve end-to-end detection with fast recognition, small model size, and strong timeliness, and they have good feature extraction and generalisation abilities [15,16].
This class of algorithms is becoming an essential tool for the online detection and recognition of agricultural products. For instance, Yao et al. [17] and Xiao et al. [18] used the YOLOv5 and YOLOv8 algorithms, respectively, for real-time detection of fruit maturity, and the resulting models were fast and accurate. Wang et al. [19] used YOLOv5 for real-time recognition of apple stems and calyxes, laying a foundation for automated fruit loading and packaging systems. Wu et al. [20] applied a deep convolutional neural network to detect walnut shells and kernels, achieving good recognition results when the shells and kernels were dispersed. Meng et al. [21] detected tea buds against complex backgrounds using YOLOv7, providing a theoretical basis for intelligent tea picking. In addition, Zhang et al. [22] used YOLOX for high-precision detection and counting of winter jujube and tested the method's accuracy and effectiveness at different scales and in different scenarios; the algorithm proved robust in scenes with shadows, coverage, and incomplete contours. These studies collectively demonstrate the success of YOLO-based target detection in recognition tasks and provide a reference for the development of walnut shell–kernel separation technology.
This study proposes a fast walnut shell–kernel detection algorithm based on YOLOX to address the incomplete shell–kernel separation that remains after airflow selection. To verify the detection effect of the proposed algorithm, images of unseparated, randomly distributed shell–kernel mixtures obtained after breaking were labelled. Fast and accurate walnut shell–kernel detection was achieved with the YOLOX target detection network, with the aim of obtaining a fast, efficient, and lightweight detection model. Moreover, the same dataset was used for comparison with the YOLOv3, Faster Region-based Convolutional Neural Network (Faster R-CNN), and Single Shot MultiBox Detector (SSD) network models. This study provides a reference for research on walnut shell–kernel separation technology and the development of online separation devices.

2. Materials and Methods

2.1. Materials

2.1.1. Data Acquisition and Processing of Walnut Shells and Kernels

Different walnut species differ significantly in appearance. For example, Wen 185 walnuts have thin pericarps and smooth shells, whereas Yunnan Juglans sigillata has an obovate kernel with flat surfaces at both ends and a wrinkled surface (Figure 1). These factors may influence the shell–kernel recognition effect. Two major species (Yunnan Juglans sigillata and Wen 185) from the primary walnut-producing areas of China were therefore selected so that the target detection model would be applicable to different species. Samples of walnut shell–kernel mixtures were prepared after breaking. A total of 2753 photos were captured with a Nikon D3500 camera at the same height and against the same background, at a resolution of 3904 × 2928 pixels, in March 2023. Photo quality was evaluated primarily by visual inspection [23,24] and generally met the practical requirements of shell–kernel separation.
During walnut shell–kernel detection, numerous factors, such as walnut species, illumination, and shell–kernel distribution, may cause differences between images and thereby influence the recognition effect (Figure 2). To obtain the optimal model for walnut shell–kernel target detection, images of the shells and kernels of Yunnan Juglans sigillata and Wen 185 were therefore collected under natural and supplementary light, both with and without mutual shielding.

2.1.2. Dataset Preparation

A total of 2753 images were collected in this study. The walnut shell and kernel datasets were manually labelled using Labelme (version 5.1.1), with each label frame drawn as the minimum enclosing rectangle of a walnut shell or kernel, and the annotations were saved as JSON files. Because this study focused on walnut shell–kernel recognition, the images were divided only into shells, kernels, and background during labelling: only shells and kernels had to be labelled, and Labelme automatically treated the remaining parts of each image as background. The JSON annotations were then converted into the COCO format.
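As an illustration of this conversion step, the sketch below collects Labelme rectangle annotations into a single COCO-format file. It is a minimal sketch under stated assumptions (rectangle labels named "shell" and "kernel", one Labelme JSON file per image); it is not the authors' actual conversion script, and the helper name labelme_to_coco is hypothetical.

```python
import glob
import json
import os

def labelme_to_coco(labelme_dir, out_path, categories=("shell", "kernel")):
    """Collect Labelme rectangle annotations into one COCO-format file."""
    cat_ids = {name: i + 1 for i, name in enumerate(categories)}
    coco = {
        "images": [],
        "annotations": [],
        "categories": [{"id": i, "name": n} for n, i in cat_ids.items()],
    }
    ann_id = 1
    for img_id, path in enumerate(
            sorted(glob.glob(os.path.join(labelme_dir, "*.json"))), start=1):
        with open(path, encoding="utf-8") as f:
            item = json.load(f)
        coco["images"].append({
            "id": img_id,
            "file_name": item["imagePath"],
            "width": item["imageWidth"],
            "height": item["imageHeight"],
        })
        for shape in item["shapes"]:  # each rectangle stores two corner points
            (xa, ya), (xb, yb) = shape["points"][:2]
            x, y = min(xa, xb), min(ya, yb)
            w, h = abs(xb - xa), abs(yb - ya)
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": cat_ids[shape["label"]],
                "bbox": [x, y, w, h],  # COCO bbox convention: [x, y, width, height]
                "area": w * h,
                "iscrowd": 0,
            })
            ann_id += 1
    with open(out_path, "w") as f:
        json.dump(coco, f)

# Example usage: labelme_to_coco("labels/", "annotations_coco.json")
```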

2.1.3. Test Conditions

The models for Yunnan Juglans sigillata and Wen 185 were trained and tested under the same conditions. The hardware comprised a 12th Gen Intel(R) Core(TM) i7-12700H CPU (2.70 GHz) and an NVIDIA GeForce RTX 2050Ti GPU with 11 GB of memory, with 16 GB of RAM, running 64-bit Windows 11. The PyTorch deep-learning framework was used with the PyCharm programming platform.
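Before training in such a setup, a quick sanity check can confirm that PyTorch sees the GPU; the generic snippet below is illustrative only and not part of the authors' reported pipeline.

```python
import torch

# Confirm the framework version and that CUDA-capable hardware is visible.
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```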

2.2. Methods

2.2.1. Walnut Shell–Kernel Recognition Algorithm Based on YOLOX

YOLO (You Only Look Once) is an object recognition and positioning algorithm based on deep neural networks. It is an end-to-end real-time object detection system that treats object detection as a regression problem. YOLO-series algorithms are favoured by engineering researchers for their fast response, high accuracy, simple structure, and easy deployment. YOLOX improves and optimises the YOLOv5 network model in three respects: a decoupled head, stronger data enhancement, and an anchor-free design. The model has been widely used for its high accuracy and efficiency [25,26]. Unlike Faster R-CNN, YOLO predicts several candidate boxes simultaneously; based on the idea of regression, it directly locates and classifies targets with a single-stage network [27,28]. Compared with target detection algorithms such as R-FCN and Faster R-CNN, YOLO offers high operational speed and supports end-to-end training and real-time detection [29]. In this study, walnut shell–kernel detection was realised with the YOLOX deep convolutional neural network, and the optimal model was obtained after 110 epochs of training. The validity and adaptability of the model were verified by comparing evaluation metrics.
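To make the "detection as regression" idea concrete, the toy PyTorch sketch below shows a single-stage prediction head in which every cell of a feature map directly regresses a box, an objectness score, and class scores in one forward pass. This is a deliberately simplified illustration, not the actual YOLOX head; only the class count of two (shell, kernel) is taken from this study's task.

```python
import torch
import torch.nn as nn

class ToyOneStageHead(nn.Module):
    """Toy single-stage head: each feature-map cell directly regresses a box
    (x, y, w, h), an objectness score, and per-class scores, so detection is
    solved as a regression problem in a single forward pass."""

    def __init__(self, in_ch=256, num_classes=2):  # 2 classes: shell, kernel
        super().__init__()
        self.pred = nn.Conv2d(in_ch, 4 + 1 + num_classes, kernel_size=1)

    def forward(self, feat):           # feat: (B, in_ch, S, S)
        p = self.pred(feat)            # (B, 4 + 1 + num_classes, S, S)
        boxes = p[:, :4]               # raw box regression outputs per cell
        obj = p[:, 4:5].sigmoid()      # objectness per cell
        cls = p[:, 5:].sigmoid()       # class probabilities per cell
        return boxes, obj, cls

# Usage: predictions for all 20 x 20 = 400 cells are produced at once.
# boxes, obj, cls = ToyOneStageHead()(torch.randn(1, 256, 20, 20))
```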

2.2.2. YOLOX Network Structure

YOLOX follows the overall layout of the YOLO series, and its network structure mainly comprises the Input, Backbone, Neck, and Prediction stages [25,26]. Figure 3 shows the structure of YOLOX. Like YOLOv4 and YOLOv5, the Input applies Mosaic data enhancement: different images are stacked, scaled, cropped, and arranged, then spliced into one [30], and the optimal anchor box values for the dataset are calculated automatically. The Backbone consists primarily of Focus and CSP modules and realises a cross-stage partial fusion network based on YOLOv4-tiny. In YOLOX, the input is first sliced in the Focus module, and the CSPDarknet structure divides the input into two branches for convolutional operations and N residual-block operations before the branches are rejoined [31]. This design effectively relieves the vanishing-gradient problem, decreases the number of network parameters, shortens computation time, and increases precision. A spatial pyramid pooling (SPP) structure is applied at the Backbone output, where max-pooling kernels of different sizes perform feature extraction; this effectively enlarges the receptive field of the convolutional kernels and helps extract richer local feature information [32]. The Neck comprises a feature pyramid network (FPN) and a path aggregation network (PAN). From top to bottom, the FPN integrates deep-layer features with shallow-layer features by upsampling and mainly transfers semantic features; from bottom to top, the PAN transfers shallow-layer features to the deep layers and mainly transfers positioning information [33]. For Prediction, YOLOX replaces the YOLO head with a decoupled head [25], performing regression and classification in two separate branches that are merged at prediction time, which improves the convergence rate and precision of the algorithm.
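The slicing and pooling operations described above can be sketched compactly in PyTorch. The modules below are simplified renderings of the Focus slicing step and the SPP block (channel counts and kernel sizes are illustrative defaults), not the exact YOLOX implementation.

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Slice the input into four pixel-interleaved quadrants and stack them on
    the channel axis, (B, C, H, W) -> (B, 4C, H/2, W/2), then fuse with a conv."""

    def __init__(self, in_ch=3, out_ch=32):
        super().__init__()
        self.conv = nn.Conv2d(4 * in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        patches = [x[..., ::2, ::2], x[..., 1::2, ::2],
                   x[..., ::2, 1::2], x[..., 1::2, 1::2]]
        return self.conv(torch.cat(patches, dim=1))

class SPP(nn.Module):
    """Spatial pyramid pooling: max-pool the same feature map with several
    kernel sizes and concatenate, enlarging the receptive field cheaply."""

    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes)

    def forward(self, x):
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

# Focus halves the spatial size: (1, 3, 640, 640) -> (1, 32, 320, 320).
# SPP multiplies channels: (1, 256, 20, 20) -> (1, 1024, 20, 20).
```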

2.2.3. Evaluation Metrics

To verify the practicability and detection effect of the YOLOX model, the same dataset was used to compare it with the YOLOv3, Faster R-CNN, and SSD network models. The average precisions at IOU = 0.50, IOU = 0.75, and IOU = 0.50:0.95 (denoted AP50, AP75, and APs, respectively) and the average recall (AR) at IOU = 0.50:0.95 were selected as the evaluation metrics for the walnut shell–kernel detection models:
$$P = \frac{TP}{TP + FP} \times 100\%$$

$$R = \frac{TP}{TP + FN} \times 100\%$$

$$AP = \int_{0}^{1} P(R)\,\mathrm{d}R$$
where P refers to precision and R to recall; TP is the number of correctly predicted positive samples, FP is the number of negative samples wrongly predicted as positive, and FN is the number of positive samples wrongly predicted as negative.
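As a worked illustration of these formulas, the sketch below ranks detections by confidence, accumulates TP and FP counts, and numerically integrates P over R. It is a simplified, uninterpolated version of AP (COCO-style evaluation additionally performs IOU matching and interpolation), and the inputs are toy values.

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """scores: confidence of each detection; is_tp: whether each detection
    matched a ground-truth box at the chosen IOU threshold; n_gt: number of
    ground-truth objects (so that FN = n_gt - TP)."""
    order = np.argsort(-scores)          # rank detections by confidence
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    recall = tp / n_gt                   # R = TP / (TP + FN)
    precision = tp / (tp + fp)           # P = TP / (TP + FP)
    return np.trapz(precision, recall)   # AP = integral of P(R) dR

# Toy example: 4 detections against 3 ground-truth objects.
scores = np.array([0.9, 0.8, 0.7, 0.6])
is_tp = np.array([True, True, False, True])
print(f"AP = {average_precision(scores, is_tp, n_gt=3):.3f}")
```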

3. Results and Analysis

3.1. Model Training

In this study, 110 training epochs were performed on the dataset. Figure 4 shows the network training results. From the beginning of training to the 25th epoch, the learning efficiency of the model was high and the loss curve converged quickly. By about the 750th iteration, learning gradually saturated, and the loss fluctuated around 1.4. For the final trained model, the AP at IOU = 0.5 was 96.3%, the AR was 84.7%, the AP at IOU = 0.75 was 92.4%, and the AP at IOU = 0.50:0.95 was 80.6%.

3.2. Detection Results

In total, 550 walnut shell–kernel images from the test set were used to verify the validity of the YOLOX network under different detection scenes. In these tests, the AP50 and AR of YOLOX were 97.2% and 84.7%, respectively; the floating point operations (FLOPs) were 351.9, and the model size was 99 MB. The model showed high detection precision and speed and was robust to mutual shell–kernel shielding and changes in illumination. Figure 5 shows the detection results, with the confidence of each detection displayed above its bounding box. In Figure 5, the shells and kernels of all species are recognised accurately with confidence above 0.90, demonstrating that the proposed algorithm can effectively distinguish walnut shells and kernels in mixtures under different scenes with relatively high detection confidence.
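The rendering in Figure 5, with class labels and confidence scores drawn above each detection box, can be reproduced with a short OpenCV helper such as the sketch below; the function name, class-id mapping, and threshold are illustrative assumptions rather than the authors' code.

```python
import cv2

CLASS_NAMES = {0: "shell", 1: "kernel"}  # assumed class-id mapping

def draw_detections(img, boxes, class_ids, scores, conf_thr=0.5):
    """Draw each (x1, y1, x2, y2) box with its label and confidence above it."""
    for (x1, y1, x2, y2), cid, s in zip(boxes, class_ids, scores):
        if s < conf_thr:
            continue
        p1, p2 = (int(x1), int(y1)), (int(x2), int(y2))
        cv2.rectangle(img, p1, p2, (0, 255, 0), 2)
        cv2.putText(img, f"{CLASS_NAMES[int(cid)]} {s:.2f}",
                    (p1[0], max(p1[1] - 5, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return img
```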

3.3. Performance Comparison of Several Target Detection Algorithms

To evaluate the walnut shell–kernel detection effect of the YOLOX network, the same training set was also used to train the YOLOv3, SSD [27], and Faster R-CNN target detection algorithms under the same conditions, with the number of training epochs set to 110. The performance of the four detection algorithms was then evaluated on the test set. The normalised confusion matrices and the evaluation metrics of the detection models at IOU = 0.5 are shown in Figure 6 and Table 1, respectively.
Table 1 shows that the YOLOX model size is essentially equivalent to that of the Faster R-CNN algorithm and larger than those of the YOLOv3 and SSD algorithms. Compared with the YOLOv3, Faster R-CNN, and SSD algorithms, the AP of YOLOX at IOU = 0.50 is higher by 2.1%, 1.3%, and 3.4%, and its AR is higher by 10%, 2.3%, and 9%, respectively. Its AP at IOU = 0.50:0.95 is higher by 12.1%, 3.9%, and 9.8%, respectively.
The 550 images in the test set were classified according to walnut species, illumination, and shielding conditions. Under the same test conditions, the walnut shell–kernel test sets for the different scenes were evaluated with the four target detection algorithms. Table 2 presents the results.
Table 2 shows that the YOLOX detection model achieves relatively high AP and AR values under different walnut species, illumination, and shielding conditions. Its AP and AR values are equivalent to those of Faster R-CNN and clearly superior to those of the YOLOv3 and SSD models. These results indicate that the YOLOX model can detect walnut shells and kernels quickly, making the YOLOX algorithm a relatively good choice for online walnut shell–kernel separation when both recognition precision and detection speed are considered.

3.4. Walnut Shell–Kernel Detection Effect Analysis under Different Scenes

3.4.1. Detection Analysis of Different Walnut Species Based on the Network Model

The shells and kernels of Wen 185 and Yunnan Juglans sigillata were detected using the YOLOX network (Figure 7). The AP50 values were 96.8% and 95.7%, the AR values were 84.8% and 80.5%, and the APs values were 80.9% and 76.3%, respectively. The results demonstrate that the YOLOX network accurately distinguishes the shells and kernels of different walnut species with relatively high detection confidence.

3.4.2. Walnut Shell–Kernel Detection Effect Analysis under Different Illumination Intensities

To verify the walnut shell–kernel recognition effect under different illumination conditions, the shells and kernels of Wen 185 and Yunnan Juglans sigillata were tested with the YOLOX algorithm under supplementary and natural light. The AP50 values were 95.7% and 95.9%, the AR values were 79.8% and 85.2%, and the APs values were 75.5% and 81.3%, respectively. Figure 8 depicts the detection effects. In summary, the YOLOX network can accurately recognise walnut shells and kernels under different illumination conditions with relatively high detection confidence.

3.4.3. Walnut Shell–Kernel Detection Effect under Mutual Shielding

To evaluate the walnut shell–kernel recognition effect under complicated conditions, the shells and kernels of Wen 185 and Yunnan Juglans sigillata with mutual shielding were tested using the YOLOX algorithm. The AP50 values were 95.8% and 96.4%, the AR values were 82% and 83.7%, and the APs values were 78.1% and 79.8%, respectively. Figure 9 shows the detection effects. The YOLOX network accurately recognised walnut shells and kernels under mutual shielding with relatively high detection confidence.

4. Discussion

Table 2 demonstrates the superiority of the YOLOX detection model over YOLOv3, Faster R-CNN, and SSD. YOLOX achieves this by applying Mosaic data enhancement at the input and replacing the traditional coupled YOLO head with a decoupled head, which significantly improves convergence speed. In addition, it uses SimOTA label assignment, which reduces training time and avoids the extra solver hyperparameters of the Sinkhorn-Knopp algorithm, thereby improving detection accuracy and efficiency. In this paper, YOLOX obtained good recognition accuracy for walnut shell–kernel detection in different scenarios, including complex cases such as occluded shells and kernels; this performance meets the requirements of online real-time detection of walnut shells and kernels. Yu et al. [34] used an improved YOLOv5 algorithm to detect mixed impurities among walnut kernels, achieving an mAP of 88.9%. Meanwhile, Pham et al. [35] used a YOLOv7 algorithm to detect and classify good, bad, and incomplete cashews on a packaging production line, with a mean average precision (mAP) of about 90%, demonstrating good results. Although near-infrared spectroscopy is also an important tool in nut detection [36], it places higher demands on hardware. YOLO-based detection offers higher detection efficiency, more affordable equipment, and a wider range of application objects, along with competitive detection accuracy, making it suitable for the current online detection and identification requirements of walnut shells and kernels.
However, the YOLOX model does have limitations. As shown in Figure 10, walnut shells take on varied shapes after breaking, and broken kernels can closely resemble shells. When shells and kernels densely occlude each other and have similar appearances, detection performance degrades slightly and confidence scores drop. The model sometimes predicts two closely touching, mutually shielded shells or kernels within a single box, causing positioning errors or reduced confidence. In addition, under strong light, overexposure in the central region of manually acquired images increases the apparent similarity between shells and kernels and obscures details, limiting detection accuracy [37]. The YOLOX model struggles to extract discriminative features from shells and kernels with similar appearances, which reduces confidence in some images.
In future work, more scientific image acquisition could further improve the quality of walnut shell and kernel photos [38,39]. In addition, future research should aim at a more lightweight detection model, exploring the replacement of the backbone network with lightweight alternatives and reducing the number of model parameters.

5. Conclusions

In this study, the YOLOX algorithm was applied to realise fast recognition and accurate separation of walnut shells and kernels after breaking. The main conclusions are as follows:
(1)
For walnut shell–kernel detection, the AP50, APs, and AR of the YOLOX algorithm are 96.3%, 80.6%, and 84.7%, respectively; the model size is 99 MB, and the FLOPs are 351.9. The AR of YOLOX is higher than those of the YOLOv3, Faster R-CNN, and SSD target detection algorithms by 10%, 2.3%, and 9%, respectively, and its APs is higher by 12.1%, 3.9%, and 9.8%, respectively. YOLOX also has apparent advantages in model size and detection speed; it greatly decreases memory consumption during training, which is beneficial for migration applications of the model.
(2)
Under different walnut species, supplementary light, and shielding conditions, the AP50 of the YOLOX algorithm exceeds 95% and its AR exceeds 79%. The algorithm achieves accurate walnut shell–kernel recognition with good robustness. These conclusions can provide technological support for walnut shell–kernel separation.

Author Contributions

Resources, Y.Z.; data curation, X.W.; writing—original draft preparation, Y.Z. and J.M.; writing—review and editing, X.W. and Y.L.; visualization, H.L.; supervision, Z.L.; project administration, H.L. and Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the Project of the Modern Agricultural Engineering Key Laboratory (TDNG2022101), the Shishi Science and Technology Program (Grant No. 2022ZB05), and the Nanjing Agricultural University-Tarim University Joint Program on Scientific Research (NNLH202302).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

The authors thank Hong Zhang from Tarim University for thesis supervision. The authors are grateful to the anonymous reviewers for their comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. An, M.; Cao, C.; Wu, Z.; Luo, K. Detection Method for Walnut Shell-Kernel Separation Accuracy Based on Near-Infrared Spectroscopy. Sensors 2022, 22, 8301. [Google Scholar] [CrossRef] [PubMed]
  2. Liu, M.; Li, C.; Cao, C.; Tu, D.; Li, X.; Che, J.; Yang, H.; Zhang, X.; Shi, M.; Zhao, H.; et al. Research progress of key technology and device for size-grading shell-breaking and shell-kernel separation of walnut. Trans. Chin. Soc. Agric. Eng. 2020, 36, 294–310. [Google Scholar]
  3. Niu, H. Experimental Study and Design of Separation Device of Walnut Shell and Kernel. Master’s Thesis, Tarim University, Alaer, China, 2017. [Google Scholar]
  4. Jiang, L.; Zhu, B.; Rao, X.; Berney, G.; Tao, Y. Discrimination of black walnut shell and pulp in hyperspectral fluorescence imagery using Gaussian kernel function approach. J. Food Eng. 2006, 81, 108–117. [Google Scholar] [CrossRef]
  5. Nahal, A.M.; Arabhosseini, A.; Kianmehr, M.H. Separation of shelled walnut particles using pneumatic method. Int. J. Agric. Biol. Eng. 2013, 6, 88–93. [Google Scholar]
  6. Krishnan, P.; Berlage, A.G. Separation of shells from walnut meats using magnetic methods. Trans. ASAE 1984, 27, 1990–1992. [Google Scholar] [CrossRef]
  7. Wang, Z.; Xiao, W. Electrostatic Fruit Shell Kernel Separator. CN2041490, 26 July 1989. Available online: https://kns.cnki.net/kcms2/article/abstract?v=kxaUMs6x7-4I2jr5WTdXtkOSbVhUnTwo_UJJAd4NGDqpeyCQzr6nn8GZTDCXBoMWz67kms7HBWUKA5AE-8ynjg%3d%3d&uniplatform=NZKPT (accessed on 6 June 2023).
  8. Liu, M.; Li, C.; Cao, C.; Wang, L.; Li, X.; Che, J.; Yang, H.; Zhang, X.; Zhao, H.; He, G.; et al. Walnut Fruit Processing Equipment: Academic Insights and Perspectives. Food Eng. Rev. 2021, 13, 822–857. [Google Scholar] [CrossRef]
  9. Cao, C.; Li, Z.; Luo, K.; Mei, P.; Wang, T.; Wu, Z.; Xie, C. Experiment on Winnowing Mechanism and Winnowing Performance of Hickory Material. Trans. Chin. Soc. Agric. Mach. 2019, 50, 105–112. [Google Scholar]
  10. Li, H.; Tang, Y.; Zhang, H.; Liu, Y.; Zeng, Y.; Niu, H. Technological parameter optimization for walnut shell-kernel winnowing device based on neural network. Front. Bioeng. Biotechnol. 2023, 11, 1107836. [Google Scholar] [CrossRef]
  11. Wang, T.; Cao, C.; Xie, C.; Li, Z. Design of hickory nut’ shell and kernel sorting system based on fuzzy clustering algorithm. Food Mach. 2018, 34, 110–114. [Google Scholar]
  12. Jin, F.; Qin, L.; Jiang, L.; Zhu, B.; Tao, Y. Novel separation method of black walnut meat from shell using invariant features and a supervised self-organizing map. J. Food Eng. 2008, 88, 75–85. [Google Scholar] [CrossRef]
  13. Fan, X.; Xu, Y.; Zhou, J.; Li, Z.; Peng, X.; Wang, X. Detection system for grape leaf diseases based on transfer learning and updated CNN. Trans. Chin. Soc. Agric. Eng. 2021, 37, 75–85. [Google Scholar]
  14. Lin, J.; Wu, X.; Chai, Y.; Yin, H. Structure optimization of convolutional neural networks: A survey. Acta Autom. Sin. 2020, 46, 24–37. [Google Scholar]
  15. Huu-Thiet, N.; Chien, C. Analytic Deep Neural Network-Based Robot Control. IEEE/ASME Trans. Mechatron. 2022, 27, 2176–2184. [Google Scholar]
  16. Agu, S.; Eze, C.; Omeje, U. Separation of oil palm kernel and shell mixture using soil and palm ash slurries. Niger. J. Technol. 2017, 36, 621–627. [Google Scholar] [CrossRef]
  17. Yao, J.; Qi, J.; Zhang, J.; Shao, H.; Yang, J.; Li, X. Real-Time Detection Algorithm for Kiwifruit Defects Based on YOLOv5. Electronics 2021, 10, 1711. [Google Scholar] [CrossRef]
  18. Xiao, B.; Nguyen, M.; Yan, Q. Fruit ripeness identification using YOLOv8 model. Multimed. Tools Appl. 2023, 9. [Google Scholar] [CrossRef]
  19. Wang, Z.; Jin, L.; Wang, S.; Xu, H. Apple stem/calyx real-time recognition using YOLO-v5 algorithm for fruit automatic loading system. Postharvest Biol. Technol. 2022, 185, 111808. [Google Scholar] [CrossRef]
  20. Wu, Z.; Luo, K.; Cao, C.; Liu, G.; Wang, E.; Li, W. Fast location and classification of small targets using region segmentation and a convolutional neural network. Comput. Electron. Agric. 2020, 169, 105207. [Google Scholar] [CrossRef]
  21. Meng, J.; Kang, F.; Wang, Y.; Tong, S.; Zhang, C.; Chen, C. Tea Buds Detection in Complex Background Based on Improved YOLOv7. IEEE Access 2023, 11, 88295–88304. [Google Scholar] [CrossRef]
  22. Zhang, Y.; Zhang, W.; Yu, J.; Hei, L.; Chen, J.; He, Y. Complete and accurate holly fruits counting using YOLOX object detection. Comput. Electron. Agric. 2022, 198, 107062. [Google Scholar] [CrossRef]
  23. Zhang, H.; Ji, S.; Shao, M.; Pu, H.; Zhang, L. Non-destructive Internal Defect Detection of In-Shell Walnuts by X-ray Technology Based on Improved Faster R-CNN. Appl. Sci. 2023, 13, 7311. [Google Scholar] [CrossRef]
  24. Wang, D.; Dai, D.; Zheng, J.; Li, L.; Kang, H.; Zheng, X. WT-YOLOM: An Improved Target Detection Model Based on YOLOv4 for Endogenous Impurity in Walnuts. Agronomy 2023, 13, 1462. [Google Scholar] [CrossRef]
  25. Zhang, G.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
  26. Shen, C.; Ma, C.; Gao, W. Multiple Attention Mechanism Enhanced YOLOX for Remote Sensing Object Detection. Sensors 2023, 23, 1261. [Google Scholar] [CrossRef]
  27. Shang, Y.; Zhang, Q.; Song, H. Application of deep learning based on YOLOv5s to apple flower detection in natural scenes. Trans. Chin. Soc. Agric. Eng. 2022, 38, 222–229. [Google Scholar]
  28. Yang, Z.; Zhao, C.; Maeda, H.; Sekimoto, Y. Development of a Large-Scale Roadside Facility Detection Model Based on the Mapillary Dataset. Sensors 2022, 22, 9992. [Google Scholar] [CrossRef] [PubMed]
  29. Chang, Y.; Zhang, Y. Deep Learning for Clothing Style Recognition Using YOLOv5. Micromachines 2022, 13, 1678. [Google Scholar] [CrossRef]
  30. Huang, G. A Comparative study of underwater marine products detection based on YOLOv5 and underwater image enhancement. Int. J. Eng. 2021, 7, 213–221. [Google Scholar]
  31. Song, X.; Wu, Y.; Liu, B.; Zhang, Q. Improved YOLOv5s Algorithm for Helmet Wearing Detection. Comput. Eng. Appl. 2023, 59, 176–183. [Google Scholar]
  32. Liu, J. Research and Software Design of Transmission Line Foreign Objects Detection Algorithm Based on YOLOX. Master’s Thesis, China University of Mining and Technology, Xuzhou, China, 2022. [Google Scholar]
  33. Zhao, Y.; Han, R.; Rao, Y. A New Feature Pyramid Network for Object Detection. In Proceedings of the 2019 International Conference on Virtual Reality and Intelligent Systems, Hunan, China, 14 September 2019. [Google Scholar]
  34. Yu, L.; Qian, M.; Chen, Q.; Sun, F.; Pan, J. An Improved YOLOv5 Model: Application to Mixed Impurities Detection for Walnut Kernels. Foods 2023, 12, 624. [Google Scholar] [CrossRef]
  35. Pham, V.; Nguyen, N.; Pham, V. A Novel Approach to Cashew Nut Detection in Packaging and Quality Inspection Lines. Int. J. Adv. Comput. Sci. Appl. (IJACSA) 2022, 13, 356–361. [Google Scholar] [CrossRef]
  36. Ma, L.; Ma, J.; Han, J.; Li, Y. Research on target detection algorithm based on YOLOv5s. Comput. Knowl. Technol. 2021, 17, 100–103. [Google Scholar]
  37. Rajevenceltha, J.; Gaidhane, V. An efficient approach for no-reference image quality assessment based on statistical texture and structural features. Eng. Sci. Technol. 2022, 30, 101039. [Google Scholar] [CrossRef]
  38. Rajevenceltha, J.; Gaidhane, H. A novel approach for image focus measure. Signal Image Video Process. 2020, 15, 547–555. [Google Scholar] [CrossRef]
  39. Al, N.; Gaidhane, V.H.; Rajevenceltha, J. Image Focus Measure Based on Polynomial Coefficients and Reduced Gerschgorin Circle Approach. IETE Tech. Rev. 2023. [Google Scholar] [CrossRef]
Figure 1. Different walnut species. (a) Yunnan Juglans sigillata. (b) Wen 185.
Figure 2. Images of walnut shell–kernels under different scenes: (a) natural light, (b) supplementary light, (c) with mutual shielding, and (d) without shielding.
Figure 3. YOLOX network model.
Figure 4. Training results.
Figure 5. Detection results of the YOLOX target detection algorithm. (a) Detection results of Yunnan Juglans sigillata. (b) Detection results of Wen 185.
Figure 6. Normalized confusion matrices of detection results. (a) YOLOX. (b) YOLOv3. (c) Faster-RCNN. (d) SSD.
Figure 7. Detection effects of different walnut species. (a) Wen 185. (b) Yunnan Juglans sigillata.
Figure 8. Detection effects under natural light and supplementary light. (a) Natural light. (b) Supplementary light.
Figure 9. Detection effect with and without shielding conditions. (a) With shielding. (b) Without shielding.
Figure 10. Detection effects under complicated targets and strong illumination. (a) Detection effect of complicated targets. (b) Detection effect under strong illumination. (c) Detection effect under similar appearances.
Table 1. Performance comparison of several target detection algorithms.
Algorithm      Model Size (MB)   FLOPs    AP50 (%)   AP75 (%)   APs (%)   AR (%)
YOLOX          99                351.9    96.3       92.4       80.6      84.7
YOLOv3         61.53             193.87   94.2       83.7       68.5      74.7
Faster R-CNN   98.85             427.07   95.0       89.1       76.7      82.4
SSD            3.04              7.02     92.9       82.6       70.9      75.7
Table 2. Detection results of several algorithms under different conditions.
Algorithm      Considered Factor     Sample Condition           AP50 (%)   APs (%)   AR (%)
YOLOX          Walnut species        Wen 185                    96.8       80.9      84.8
                                     Yunnan Juglans sigillata   95.7       76.3      80.5
               Light source          Supplementary light        95.7       75.5      79.8
                                     Natural light              95.9       81.3      85.2
               Shielding condition   With mutual shielding      95.8       78.1      82.0
                                     Without shielding          96.4       79.8      83.7
YOLOv3         Walnut species        Wen 185                    95.2       70.0      75.7
                                     Yunnan Juglans sigillata   94.1       65.2      71.8
               Light source          Supplementary light        94.2       64.6      71.4
                                     Natural light              95.0       70.4      75.9
               Shielding condition   With mutual shielding      94.5       65.6      71.6
                                     Without shielding          95.8       71.2      76.6
Faster R-CNN   Walnut species        Wen 185                    95.4       79.3      84.3
                                     Yunnan Juglans sigillata   95.6       74.3      80.1
               Light source          Supplementary light        95.3       73.5      79.6
                                     Natural light              95.7       79.7      84.6
               Shielding condition   With mutual shielding      95.7       75.7      81.2
                                     Without shielding          96.4       78.8      83.8
SSD            Walnut species        Wen 185                    94.2       72.0      76.7
                                     Yunnan Juglans sigillata   92.9       67.3      72.7
               Light source          Supplementary light        93.0       67.0      72.4
                                     Natural light              93.6       72.0      76.5
               Shielding condition   With mutual shielding      92.8       67.1      71.9
                                     Without shielding          96.4       78.8      83.8