Recent Advancements in Precision Livestock Farming

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Farm Animal Production".

Deadline for manuscript submissions: closed (25 June 2023) | Viewed by 32658

Special Issue Editors

Prof. Dr. Gang Liu
College of Information and Electrical Engineering, China Agricultural University, No. 17 Qinghuadonglu, Haidian District, Beijing 100083, China
Interests: image processing; computer vision applications; smart livestock

Dr. Hao Guo
College of Information and Electrical Engineering, China Agricultural University, No. 17 Qinghuadonglu, Haidian District, Beijing 100083, China
Interests: livestock health monitoring; 3D vision; livestock body measurement; animal identification

Dr. Alexey Ruchay
Federal Research Centre of Biological Systems and Agro-technologies of the Russian Academy of Sciences, 9 Yanvarya 29, Orenburg 460000, Russia
Interests: agricultural and livestock engineering; 3D point cloud data; image processing; data mining; machine learning; computer vision and its applications in agriculture

Dr. Andrea Pezzuolo
Department of Land, Environment, Agriculture and Forestry, University of Padova, 35020 Legnaro, Italy
Interests: agricultural and livestock engineering; rural buildings; agro-environmental sustainability; byproducts; biomass and renewable energies

Special Issue Information

Dear Colleagues,

The increasing global demand for sustainably sourced animal-derived food has prompted the development and application of smart technologies to address environmental, economic, and societal concerns, resulting in precision livestock farming (PLF) applications. PLF is defined as “individual animal management by continuous real-time monitoring of health, welfare, production/reproduction, and environmental impact”. This approach includes the application of single or multiple tools in integrated systems. PLF can provide farmers with continuous, contactless, and objective data collection, detecting small but significant changes in behavioural patterns or related parameters, which can greatly improve farmers’ decision-making.

This editorial initiative aims to highlight research across the entire breadth of precision livestock farming, focusing on new insights, novel developments, current challenges, and future perspectives.

This Special Issue solicits articles that will inspire, inform, and provide direction and guidance to researchers in the field, and welcomes contributions covering:

  • Smart animal farming;
  • Precision feeding;
  • Sensor technologies;
  • Livestock engineering;
  • Automated monitoring of animal behaviour;
  • Robotics and automation in livestock environments;
  • Technologies to monitor welfare/health at the animal/herd level;
  • Artificial intelligence applications;
  • Data management and decision support systems.

Prof. Dr. Gang Liu
Dr. Hao Guo
Dr. Alexey Ruchay
Dr. Andrea Pezzuolo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision livestock farming
  • smart livestock farming
  • animal science
  • animal production
  • artificial intelligence

Published Papers (12 papers)

Editorial

3 pages, 184 KiB  
Editorial
Recent Advancements in Precision Livestock Farming
by Gang Liu, Hao Guo, Alexey Ruchay and Andrea Pezzuolo
Agriculture 2023, 13(9), 1652; https://doi.org/10.3390/agriculture13091652 - 22 Aug 2023
Viewed by 910
Abstract
The increasing global demand for sustainably sourced animal-derived food has prompted the development and application of smart technologies to address environmental, economic, and societal concerns, resulting in precision livestock farming (PLF) applications [...] Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)

Research

10 pages, 1278 KiB  
Article
System Design of Optimal Pig Shipment Schedule through Prediction Model
by Jin-Wook Jang, Jong-Hee Lee, Gi-Pou Nam and Sung-Ho Lee
Agriculture 2023, 13(8), 1520; https://doi.org/10.3390/agriculture13081520 - 31 Jul 2023
Cited by 1 | Viewed by 940
Abstract
We propose a system for determining the optimal shipping schedule for pigs using machine-learning prediction models trained on big data. The system receives photographic and weight information for each pig from a camera and a weighing machine installed in the pen during a predetermined fattening period. The photographic information is first passed to a pre-trained predictive model to flag candidate pigs that may be forming abdominal fat. For each candidate, a second machine-learning model, which compares the weight-gain patterns typical of abdominal fat-forming pigs with the candidate's own weight changes, determines whether the candidate is indeed forming abdominal fat. If so, the shipping time is determined by predicting when the candidate's weight will reach a predetermined minimum shipping weight, again using a model that tracks the candidate's weight changes against the weight-gain trend of abdominal fat-forming pigs. Through this system, we assess the fattening status of pigs, based on body type and weight over the fattening period, determine whether intramuscular fat has been deposited or abdominal fat has formed excessively from the feed, and schedule appropriate shipment. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
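
The scheduling step described above ultimately comes down to extrapolating an individual animal's weight trend to a minimum shipping weight. The sketch below illustrates only that step and is a hypothetical simplification: a linear trend fitted with scikit-learn on synthetic weigh-ins, not the authors' big-data models, and the 115 kg target is an assumed value.

```python
# Hypothetical sketch: estimate how many days remain until a pig reaches the
# minimum shipping weight by extrapolating its recent weigh-ins. A linear trend
# stands in for the paper's machine-learned models.
import numpy as np
from sklearn.linear_model import LinearRegression

def days_until_shipping(days, weights_kg, min_shipping_weight_kg=115.0):
    """Fit a weight trend and return the estimated days until the target weight."""
    model = LinearRegression().fit(np.asarray(days).reshape(-1, 1), weights_kg)
    gain_per_day = model.coef_[0]
    if gain_per_day <= 0:
        return float("inf")   # no upward trend; cannot schedule from this data
    return max(0.0, (min_shipping_weight_kg - weights_kg[-1]) / gain_per_day)

# Synthetic daily weigh-ins from an automatic scale in the pen
days = np.arange(14)
weights = 95 + 0.85 * days + np.random.default_rng(0).normal(0, 0.4, 14)
print(f"Estimated days to shipping weight: {days_until_shipping(days, weights):.1f}")
```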

18 pages, 5537 KiB  
Article
Exploratory Study of Sex Identification for Chicken Embryos Based on Blood Vessel Images and Deep Learning
by Nan Jia, Bin Li, Yuliang Zhao, Shijie Fan, Jun Zhu, Haifeng Wang and Wenwen Zhao
Agriculture 2023, 13(8), 1480; https://doi.org/10.3390/agriculture13081480 - 26 Jul 2023
Cited by 2 | Viewed by 1891
Abstract
The identification of a chicken’s sex is a massive task in the poultry industry. To address the problems of traditional manual observation for determining sex, which is time-consuming and laborious, a sex identification method for chicken embryos based on blood vessel images and deep learning was preliminarily investigated. In this study, we designed an image acquisition platform to capture clear blood vessel images against a black background. A total of 19,748 images of 3024 Jingfen No. 6 breeding eggs were collected from days 3 to 5 of incubation at Beijing Huadu Yukou Poultry Industry, of which 16,761 images, verified via color sexing of 1-day-old chicks, constituted the dataset of this study. A sex identification model was proposed based on an improved YOLOv7 deep learning algorithm. A CBAM attention mechanism was introduced into YOLOv7 to improve the accuracy of sex identification of chicken eggs; BiFPN feature fusion was used in the neck network of YOLOv7 to fuse low-level and high-level features efficiently; and α-CIoU was used as the bounding-box loss function to accelerate regression prediction and improve the positioning accuracy of the model’s bounding boxes. Results showed that a mean average precision (mAP) of 88.79% was achieved by modeling with the blood vessel data from day 4 of incubation, with males and females reaching 87.91% and 89.67%, respectively. Compared with the original YOLOv7 network, the mAP of the improved model increased by 3.46%. A comparison of object detection models showed that the mAP of our method was 32.49%, 17.17%, and 5.96% higher than that of SSD, Faster R-CNN, and YOLOv5, respectively. The average image processing time was 0.023 s. Our study indicates that blood vessel images combined with deep learning have great potential for the sex identification of chicken embryos. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
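
For readers unfamiliar with the attention block mentioned above, a minimal PyTorch sketch of a generic CBAM module follows. It shows only the channel- and spatial-attention mechanics; the actual insertion points in YOLOv7, the BiFPN neck, and the α-CIoU loss from the paper are not reproduced, and all sizes are illustrative.

```python
# Minimal sketch of a generic CBAM block: channel attention followed by
# spatial attention, applied to an arbitrary feature map.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: a shared MLP over average- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: a 7x7 conv over the channel-wise average and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                    # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))          # spatial attention

feat = torch.randn(1, 256, 40, 40)   # stand-in backbone feature map
print(CBAM(256)(feat).shape)          # torch.Size([1, 256, 40, 40])
```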

16 pages, 5664 KiB  
Article
Development Results of a Cross-Platform Positioning System for a Robotics Feed System at a Dairy Cattle Complex
by Dmitriy Yu. Pavkin, Evgeniy A. Nikitin, Denis V. Shilin, Mikhail V. Belyakov, Ilya A. Golyshkov, Stanislav Mikhailichenko and Ekaterina Chepurina
Agriculture 2023, 13(7), 1422; https://doi.org/10.3390/agriculture13071422 - 19 Jul 2023
Cited by 3 | Viewed by 1391
Abstract
Practical experience demonstrates that the development of agriculture is following the path of automating and robotizing operational processes. Feed pushing in the feeding alley is an integral part of the feeding process and significantly impacts dairy cattle productivity. The aim of this research is to develop an algorithm for automatic positioning and a mobile remote-control system for a wheeled robot on a dairy farm. The kinematic and dynamic motion characteristics of the wheeled robot were obtained using software that simulates physical processes in an artificial environment. The mobile application was developed using Swift tools, with preliminary visualization of the interfaces and graphic design. The system uses machine vision based on RGB cameras and programmed color filters and is responsible for the automatic positioning of the feed-pusher robot. This made it possible to eliminate inductive sensors from the system and to dispense with the labor required for assembling the contour wire of the feed alley. The efficiency and accuracy of the feedback were measured by assessing the interaction between the mobile app and the feed pusher via an Internet-connected base station located on the farm. Furthermore, remote changes to the robot's operating regime (start date) were shown to be achievable, and the output of the feed supplement dispenser also became remotely manageable. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
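
The positioning system relies on RGB cameras with programmed colour filters rather than inductive sensors. As a rough, hypothetical illustration of that idea, the OpenCV snippet below thresholds a frame in HSV space and returns the centroid of a coloured marker; the HSV range, marker colour, and frame are made up for the example and do not come from the paper.

```python
# Illustrative colour-filter positioning step (not the authors' code):
# segment a red marker in HSV space and return its pixel centroid.
import cv2
import numpy as np

def marker_centroid(frame_bgr, hsv_low=(0, 120, 80), hsv_high=(10, 255, 255)):
    """Return the (x, y) centroid of pixels inside the HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (320, 240), 20, (0, 0, 255), -1)   # synthetic red marker
print(marker_centroid(frame))                          # approximately (320, 240)
```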

15 pages, 3317 KiB  
Article
Non-Contact Measurement of Pregnant Sows’ Backfat Thickness Based on a Hybrid CNN-ViT Model
by Xuan Li, Mengyuan Yu, Dihong Xu, Shuhong Zhao, Hequn Tan and Xiaolei Liu
Agriculture 2023, 13(7), 1395; https://doi.org/10.3390/agriculture13071395 - 14 Jul 2023
Cited by 2 | Viewed by 964
Abstract
Backfat thickness (BF) is closely related to the service life and reproductive performance of sows. The dynamic monitoring of sows’ BF is a critical part of the production process in large-scale pig farms. This study proposed a hybrid CNN-ViT (Vision Transformer) model for measuring sows’ BF to address the high measurement burden of traditional contact measurement and the low efficiency of existing non-contact models. The CNN-ViT introduced depthwise-separable convolution and lightweight self-attention, mainly consisting of a Pre-local Unit (PLU), a Lightweight ViT (LViT) and an Inverted Residual Unit (IRU). The model could extract both local and global image features, making it more suitable for small datasets. It was tested on 106 pregnant sows with seven randomly divided datasets. The results showed that the CNN-ViT had a Mean Absolute Error (MAE) of 0.83 mm, a Root Mean Square Error (RMSE) of 1.05 mm, a Mean Absolute Percentage Error (MAPE) of 4.87% and a coefficient of determination (R2) of 0.74. Compared to LViT-IRU, PLU-IRU and PLU-LViT, the CNN-ViT’s MAE decreased by more than 12%, RMSE by more than 15%, MAPE by more than 15% and R2 improved by more than 17%. Compared to ResNet50 and ViT, the CNN-ViT’s MAE decreased by more than 7%, RMSE by more than 13%, MAPE by more than 7% and R2 improved by more than 15%. The method could better meet the demand for non-contact automatic measurement of pregnant sows’ BF in actual production and provide technical support for the intelligent management of pregnant sows. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
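
As a hedged sketch of the general CNN-plus-ViT idea (convolutions for local features, self-attention for global context, a single regression output for backfat thickness in millimetres), the toy PyTorch model below may help. It does not reproduce the paper's PLU, LViT, or IRU blocks, and every dimension is arbitrary.

```python
# Toy hybrid CNN + Transformer regressor: a convolutional stem for local
# features, a small Transformer encoder for global context, and one
# regression output (e.g., backfat thickness in mm).
import torch
import torch.nn as nn

class TinyCNNViTRegressor(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.stem = nn.Sequential(                       # local features
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)   # global features
        self.head = nn.Linear(dim, 1)                    # regression head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.stem(x)                                 # (B, dim, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)            # (B, N, dim)
        return self.head(self.encoder(tokens).mean(dim=1)).squeeze(-1)

print(TinyCNNViTRegressor()(torch.randn(2, 3, 64, 64)).shape)   # torch.Size([2])
```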

15 pages, 9232 KiB  
Article
Weight Prediction of Landlly Pigs from Morphometric Traits in Different Age Classes Using ANN and Non-Linear Regression Models
by Andrew Latha Preethi, Ayon Tarafdar, Sheikh Firdous Ahmad, Snehasmita Panda, Kumar Tamilarasan, Alexey Ruchay and Gyanendra Kumar Gaur
Agriculture 2023, 13(2), 362; https://doi.org/10.3390/agriculture13020362 - 02 Feb 2023
Cited by 3 | Viewed by 2220
Abstract
The present study was undertaken to identify the best estimator(s) of body weight based on various linear morphometric measures in Landlly pigs using artificial neural network (ANN) and non-linear regression models at three life stages (4th, 6th and 8th week). Twenty-four different linear morphometric measurements were taken on 279 piglets individually at all the stages and their correlations with body weight were elucidated. The traits with high correlation (≥0.8) with body weight were selected at different stages. The selected traits were categorized into 31 different combinations (single-, two-, three-, four- and five-trait) and subjected to ANN modelling to determine the best combination of body weight predictors at each stage. The model with the highest R2 and lowest MSE was selected as the best fit for a particular trait combination. Results revealed that the combination of heart girth (HG), body length (BL) and paunch girth (PG) was most efficient for predicting body weight of piglets at the 4th week (R2 = 0.8697, MSE = 0.4419). The combination of neck circumference (NCR), height at back (HB), BL and HG effectively predicted body weight at 6 (R2 = 0.8528, MSE = 0.8719) and 8 (R2 = 0.9139, MSE = 1.2713) weeks. The two-trait combination of BL and HG exhibited notably high correlation with body weight at all stages and hence was used to develop a separate ANN model, which resulted in better body weight prediction ability (R2 = 0.9131, MSE = 1.004) compared to the age-dependent models. The results of the ANN models were comparable to those of the non-linear regression models at all stages. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
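
A hedged sketch of a two-trait ANN body-weight model of the kind described above is given below, using scikit-learn's MLPRegressor on synthetic heart-girth (HG) and body-length (BL) values. The data, layer sizes, and resulting scores are illustrative only and do not correspond to the Landlly dataset or the paper's network architecture.

```python
# Illustrative two-trait ANN regression: predict body weight from heart girth
# and body length using a small multilayer perceptron on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
hg = rng.uniform(40, 70, 200)                                   # heart girth, cm (synthetic)
bl = rng.uniform(45, 80, 200)                                   # body length, cm (synthetic)
weight = 0.25 * hg + 0.15 * bl - 8 + rng.normal(0, 0.5, 200)    # body weight, kg (synthetic)

X = np.column_stack([hg, bl])
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                     random_state=0).fit(X, weight)
pred = model.predict(X)
print(f"R2 = {r2_score(weight, pred):.3f}, MSE = {mean_squared_error(weight, pred):.3f}")
```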

17 pages, 6407 KiB  
Article
Pig Face Recognition Based on Metric Learning by Combining a Residual Network and Attention Mechanism
by Rong Wang, Ronghua Gao, Qifeng Li and Jiabin Dong
Agriculture 2023, 13(1), 144; https://doi.org/10.3390/agriculture13010144 - 05 Jan 2023
Cited by 6 | Viewed by 2020
Abstract
As machine vision technology has advanced, pig face recognition has gained wide attention as an individual pig identification method. This study establishes an improved ResNAM network as a backbone for pig face image feature extraction by combining an NAM (normalization-based attention module) attention mechanism with a ResNet model to explore non-contact open-set pig face recognition. An open-set pig face recognition framework is then designed by integrating three loss functions and two metrics so that the task is completed with no overlap of individuals between the training and test sets. The SphereFace loss function, with the cosine distance as the metric, is combined with ResNAM in the framework to obtain the optimal open-set pig face recognition model. To train our model, 37 pigs with a total of 12,993 images were randomly selected from the collected pig face images, and 9 pigs with a total of 3431 images were set aside as a test set, from which 900 positive pairs and 900 negative pairs were constructed. A series of experiments shows that our accuracy reached 95.28%, which was 2.61% higher than that of a human face recognition model. NAM was more effective in improving the performance of the pig face recognition model than the mainstream BAM (bottleneck attention module) and CBAM (convolutional block attention module). The results can provide technological support for non-contact open-set individual recognition in intelligent farming processes. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
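
Open-set recognition here means deciding whether two face images show the same pig without that pig ever appearing in training. A minimal sketch of the verification step with a cosine metric follows; the embeddings are random stand-ins for the output of a ResNAM-style backbone, and the threshold is an assumed value rather than one reported in the paper.

```python
# Open-set verification sketch: two images are declared the same individual if
# the cosine similarity of their embeddings exceeds a threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_individual(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Open-set decision: no identity needs to appear in the training set."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy embeddings standing in for backbone outputs
emb1 = np.random.default_rng(1).normal(size=128)
emb2 = emb1 + np.random.default_rng(2).normal(scale=0.1, size=128)   # same pig, slight change
emb3 = np.random.default_rng(3).normal(size=128)                     # different pig
print(same_individual(emb1, emb2), same_individual(emb1, emb3))
```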

17 pages, 8288 KiB  
Article
Live Weight Prediction of Cattle Based on Deep Regression of RGB-D Images
by Alexey Ruchay, Vitaly Kober, Konstantin Dorofeev, Vladimir Kolpakov, Alexey Gladkov and Hao Guo
Agriculture 2022, 12(11), 1794; https://doi.org/10.3390/agriculture12111794 - 28 Oct 2022
Cited by 10 | Viewed by 5350
Abstract
Predicting the live weight of cattle helps us monitor the health of animals, conduct genetic selection, and determine the optimal timing of slaughter. On large farms, accurate and expensive industrial scales are used to measure live weight. However, a promising alternative is to estimate live weight using morphometric measurements of livestock and then apply regression equations relating such measurements to live weight. Manual measurements on animals using a tape measure are time-consuming and stressful for the animals. Therefore, computer vision technologies are now increasingly used for non-contact morphometric measurements. The paper proposes a new model for predicting live weight based on augmenting three-dimensional point clouds, representing them as flat projections, and applying deep-learning image regression. It is shown that on real datasets, the accuracy of weight measurement using the proposed model reaches 91.6%. We also discuss the potential applicability of the proposed approach to animal husbandry. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
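
One common way to feed point clouds to an image-regression network is to flatten them into 2D projections. The snippet below is an illustrative, assumed implementation of a top-view height map (not the authors' code): it grids the x-y plane and keeps the highest z value per cell, producing an image-like array that a regression network could consume.

```python
# Rasterise a 3D point cloud into a flat top-view depth (height) projection.
import numpy as np

def top_view_projection(points: np.ndarray, grid: int = 128) -> np.ndarray:
    """points: (N, 3) array of x, y, z; returns a (grid, grid) height map."""
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    idx = ((xy - mins) / (maxs - mins + 1e-9) * (grid - 1)).astype(int)
    depth = np.zeros((grid, grid), dtype=np.float32)
    # keep the highest z per cell (the animal's back as seen from above)
    np.maximum.at(depth, (idx[:, 1], idx[:, 0]), points[:, 2])
    return depth

cloud = np.random.default_rng(0).uniform(0, 1, size=(5000, 3))   # synthetic cloud
print(top_view_projection(cloud).shape)                          # (128, 128)
```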

19 pages, 13641 KiB  
Article
Detection Method of Cow Estrus Behavior in Natural Scenes Based on Improved YOLOv5
by Rong Wang, Zongzhi Gao, Qifeng Li, Chunjiang Zhao, Ronghua Gao, Hongming Zhang, Shuqin Li and Lu Feng
Agriculture 2022, 12(9), 1339; https://doi.org/10.3390/agriculture12091339 - 30 Aug 2022
Cited by 18 | Viewed by 2679
Abstract
Natural breeding scenes have the characteristics of a large number of cows, complex lighting, and a complex background environment, which presents great difficulties for the detection of dairy cow estrus behavior. However, the existing research on cow estrus behavior detection works well in ideal environments with a small number of cows and has a low inference speed and accuracy in natural scenes. To improve the inference speed and accuracy of cow estrus behavior in natural scenes, this paper proposes a cow estrus behavior detection method based on the improved YOLOv5. By improving the YOLOv5 model, it has stronger detection ability for complex environments and multi-scale objects. First, the atrous spatial pyramid pooling (ASPP) module is employed to optimize the YOLOv5l network at multiple scales, which improves the model’s receptive field and ability to perceive global contextual multiscale information. Second, a cow estrus behavior detection model is constructed by combining the channel-attention mechanism and a deep-asymmetric-bottleneck module. Last, K-means clustering is performed to obtain new anchors and complete intersection over union (CIoU) is used to introduce the relative ratio between the predicted box of the cow mounting and the true box of the cow mounting to the regression box prediction function to improve the scale invariance of the model. Multiple cameras were installed in a natural breeding scene containing 200 cows to capture videos of cows mounting. A total of 2668 images were obtained from 115 videos of cow mounting events from the training set, and 675 images were obtained from 29 videos of cow mounting events from the test set. The training set is augmented by the mosaic method to increase the diversity of the dataset. The experimental results show that the average accuracy of the improved model was 94.3%, that the precision was 97.0%, and that the recall was 89.5%, which were higher than those of mainstream models such as YOLOv5, YOLOv3, and Faster R-CNN. The results of the ablation experiments show that ASPP, new anchors, C3SAB, and C3DAB designed in this study can improve the accuracy of the model by 5.9%. Furthermore, when the ASPP dilated convolution was set to (1,5,9,13) and the loss function was set to CIoU, the model had the highest accuracy. The class activation map function was utilized to visualize the model’s feature extraction results and to explain the model’s region of interest for cow images in natural scenes, which demonstrates the effectiveness of the model. Therefore, the model proposed in this study can improve the accuracy of the model for detecting cow estrus events. Additionally, the model’s inference speed was 71 frames per second (fps), which meets the requirements of fast and accurate detection of cow estrus events in natural scenes and all-weather conditions. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
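
One of the modifications listed above is re-estimating YOLOv5's anchors with K-means over the labelled bounding boxes. The sketch below shows that step on synthetic box sizes with scikit-learn; YOLO implementations typically cluster with an IoU-based distance, so plain Euclidean K-means here is a simplification, and the box data are invented for the example.

```python
# Re-estimate 9 anchor boxes by clustering labelled box widths/heights.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
boxes_wh = np.column_stack([rng.uniform(40, 300, 500),    # widths  (pixels)
                            rng.uniform(30, 220, 500)])   # heights (pixels)

kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(boxes_wh)
# sort the 9 anchors by area, e.g. to assign them to three detection scales
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
print(np.round(anchors, 1))
```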

18 pages, 10014 KiB  
Article
Key Region Extraction and Body Dimension Measurement of Beef Cattle Using 3D Point Clouds
by Jiawei Li, Qifeng Li, Weihong Ma, Xianglong Xue, Chunjiang Zhao, Dan Tulpan and Simon X. Yang
Agriculture 2022, 12(7), 1012; https://doi.org/10.3390/agriculture12071012 - 13 Jul 2022
Cited by 5 | Viewed by 2459
Abstract
Body dimensions are key indicators for the beef cattle fattening and breeding process. On-animal measurement is relatively inefficient, and can induce severe stress responses among beef cattle and pose a risk for operators, thereby impacting the cattle’s growth rate and wellbeing. To address the above issues, a highly efficient and automatic method was developed to measure beef cattle’s body dimensions, including the oblique length, height, width, abdominal girth, and chest girth, based on the reconstructed three-dimensional point cloud data. The horizontal continuous slice sequence of the complete point clouds was first extracted, and the central point of the beef cattle leg region was determined from the span distribution of the point cloud clusters in the targeted slices. Subsequently, the boundary of the beef cattle leg region was identified by the “five-point clustering gradient boundary recognition algorithm” and was then calibrated, followed by the accurate segmentation of the corresponding region. The key regions for body dimension data calculation were further determined by the proposed algorithm, which forms the basis of the scientific calculation of key body dimensions. The influence of different postures of beef cattle on the measurement results was also preliminarily discussed. The results showed that the errors of calculated body dimensions, i.e., the oblique length, height, width, abdominal girth, and chest girth, were 2.3%, 2.8%, 1.6%, 2.8%, and 2.6%, respectively. In the present work, the beef cattle body dimensions could be effectively measured based on the 3D regional features of the point cloud data. The proposed algorithm shows a degree of generalization and robustness that is not affected by different postures of beef cattle. This automatic method can be effectively used to collect reliable phenotype data during the fattening of beef cattle and can be directly integrated into the breeding process. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
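
To give a concrete feel for slice-based girth estimation, the sketch below cuts a thin horizontal band out of a synthetic cylindrical point cloud and takes the perimeter of its convex hull as an approximate girth. This is a deliberately simplified stand-in under assumed parameters: the paper's method first extracts key regions and accounts for posture, which this toy example does not.

```python
# Approximate a girth from a thin horizontal slice of a 3D point cloud.
import numpy as np
from scipy.spatial import ConvexHull

def girth_at_height(points: np.ndarray, z: float, thickness: float = 0.01) -> float:
    """Approximate girth (m) as the convex-hull perimeter of a slice's x-y outline."""
    band = points[np.abs(points[:, 2] - z) < thickness][:, :2]
    if len(band) < 3:
        return 0.0
    ring = band[ConvexHull(band).vertices]          # hull vertices in order
    return float(np.sum(np.linalg.norm(np.roll(ring, -1, axis=0) - ring, axis=1)))

# Synthetic cylindrical "trunk" of radius 0.4 m: expected girth ~ 2*pi*0.4 ~ 2.51 m
theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 20000)
z = np.random.default_rng(1).uniform(0.0, 1.0, 20000)
cloud = np.column_stack([0.4 * np.cos(theta), 0.4 * np.sin(theta), z])
print(round(girth_at_height(cloud, 0.5), 2))
```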

19 pages, 3624 KiB  
Article
Curve Skeleton Extraction from Incomplete Point Clouds of Livestock and Its Application in Posture Evaluation
by Yihu Hu, Xinying Luo, Zicheng Gao, Ao Du, Hao Guo, Alexey Ruchay, Francesco Marinello and Andrea Pezzuolo
Agriculture 2022, 12(7), 998; https://doi.org/10.3390/agriculture12070998 - 11 Jul 2022
Cited by 3 | Viewed by 1982
Abstract
As consumer-grade depth sensors provide an efficient and low-cost way to obtain point cloud data, an increasing number of applications regarding the acquisition and processing of livestock point clouds have been proposed. Curve skeletons are abstract representations of 3D data, and they have great potential for the analysis and understanding of livestock point clouds. Articulated skeleton extraction has been extensively studied on 2D and 3D data. Nevertheless, robust and accurate skeleton extraction from point set sequences captured by consumer-grade depth cameras remains challenging since such data are often corrupted by substantial noise and outliers. Additionally, few approaches have been proposed to overcome this problem. In this paper, we present a novel curve skeleton extraction method for point clouds of four-legged animals. First, the 2D top view of the livestock was constructed using the concave hull algorithm. The livestock data were divided into the left and right sides along the bilateral symmetry plane of the livestock. Then, the corresponding 2D side views were constructed. Second, discrete skeleton evolution (DSE) was utilized to extract the skeletons from those 2D views. Finally, we divided the extracted skeletons into torso branches and leg branches. We translated each leg skeleton point to the border of the nearest banded point cluster and then moved it to the approximate centre of the leg. The torso skeleton points were calculated according to their positions on the side view and top view. Extensive experiments show that quality curve skeletons can be extracted from many livestock species. Additionally, we compared our method with representative skeleton extraction approaches, and the results show that our method performs better in avoiding topological errors caused by the shape characteristics of livestock. Furthermore, we demonstrated the effectiveness of our extracted skeleton in detecting frames containing pigs with correct postures from the point cloud stream. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
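
As a hedged stand-in for the 2D skeletonisation step described above, the snippet below applies scikit-image's morphological skeletonize to a crude binary silhouette. The paper instead builds concave-hull views and uses discrete skeleton evolution (DSE) before mapping the skeleton back to 3D, so this is only a rough illustration of extracting a centreline from a 2D view.

```python
# Extract a one-pixel-wide centreline from a binary 2D silhouette.
import numpy as np
from skimage.morphology import skeletonize

silhouette = np.zeros((60, 120), dtype=bool)
silhouette[20:40, 10:110] = True          # crude "torso" in a side view
silhouette[25:58, 20:28] = True           # one "leg" extending downwards
skeleton = skeletonize(silhouette)
print(skeleton.sum(), "skeleton pixels")  # thin centreline of the silhouette
```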

Review

22 pages, 3033 KiB  
Review
A Review of Key Techniques for in Ovo Sexing of Chicken Eggs
by Nan Jia, Bin Li, Jun Zhu, Haifeng Wang, Yuliang Zhao and Wenwen Zhao
Agriculture 2023, 13(3), 677; https://doi.org/10.3390/agriculture13030677 - 14 Mar 2023
Cited by 2 | Viewed by 7989
Abstract
The identification of chicken sex before hatching is an important problem in large-scale breeding applications in the poultry industry. This paper systematically reviews the key techniques for in ovo sexing of chicken eggs before hatching and presents recent research on molecular-based, spectral-based, acoustic-based, morphology-based, and volatile organic compound (VOC)-based technologies. Molecular-based methods are standard techniques for accurate sexing but require perforations by skilled technicians in certified laboratories to extract egg contents. Spectral-based techniques show great potential as noninvasive methods but require complex data processing and modeling. Acoustic-based techniques are sensitive to environmental noise. Morphology-based studies on the outer shape of the eggshell and distribution of blood vessels provide novel methods for in ovo sexing of chicken eggs. However, they face challenges such as the color, thickness, and smoothness of the eggshell. VOC profiling of chicken eggs allows sexing in the early stages of incubation; however, the VOC composition may be influenced by species or feed, and more research is required to explore potential applications. In addition, recent breakthroughs on in ovo chicken egg sexing are discussed. Physiological changes in chicken eggs during the whole incubation period have been well studied using metabolism and phenotype tools to enhance mechanism recognition. Furthermore, various sensing techniques, from the X-ray to terahertz range, and deep learning algorithms have been employed for data acquisition, processing, mining, and modeling to capture and analyze key features. Finally, commercialization and practical applications are discussed. This study provides a reference for in ovo sexing of chicken eggs before hatching in the poultry industry. Full article
(This article belongs to the Special Issue Recent Advancements in Precision Livestock Farming)
