Automated Monitoring of Livestock and Poultry with Machine Learning Technology

A special issue of Animals (ISSN 2076-2615). This special issue belongs to the section "Animal System and Management".

Deadline for manuscript submissions: closed (15 June 2023) | Viewed by 33351

Special Issue Editors

Department of Poultry Science, University of Georgia, Athens, GA, USA
Interests: precision livestock farming; animal welfare and behavior; smart sensing; applied artificial intelligence

Department of Animal Science, Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
Interests: precision livestock management; animal housing and environmental control; applied data analysis in agriculture
Department of Agricultural Structure Environment Engineering, College of Water Resources and Civil Engineering, China Agricultural University, Beijing 100193, China
Interests: livestock environmental engineering; indoor environment and ventilation; precise ventilation of livestock houses; CFD; environmental control system and strategy

Special Issue Information

Dear Colleagues,

The livestock and poultry industry, which produces daily animal protein for humans, continues to grow through improved genetics, nutrition, stewardship, and welfare in order to enhance production efficiency; secure food safety, sufficiency, and sustainability; and feed the growing human population. Maintaining and improving contemporary intensive production systems requires substantial natural and human resources and can greatly impact the economy, public health, the environment, and society. Automated monitoring features the development and application of continuous, objective, and supportive sensing technologies and computer tools for sustainable and efficient animal production. Recent advancements in computer hardware and machine learning modelling boost the performance of automated monitoring, can assist producers in management decisions, and can provide early detection and prevention of disease and production inefficiencies. Automated monitoring with machine learning technology offers the animal industry solutions to challenges in precision/smart management, environment, nutrition, genetics, big data analytics, real-time monitoring, automation and robotics, welfare assessment, animal tracking, individual identification, behavior recognition, etc.

We are pleased to invite original research and review papers from all over the globe. Contributing papers are expected to address advancements in the production efficiency, safety, and sustainability of the animal industry and to explore the abovementioned areas through the development and application of automated monitoring with machine learning technology.

Dr. Guoming Li
Dr. Yijie Xiong
Dr. Hao Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Animals is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision livestock farming
  • precision poultry farming
  • machine learning
  • deep learning
  • IoT
  • sensor
  • computer vision
  • big data
  • robotics
  • modelling

Published Papers (12 papers)


Research

19 pages, 8474 KiB  
Article
A Computer Vision-Based Automatic System for Egg Grading and Defect Detection
by Xiao Yang, Ramesh Bahadur Bist, Sachin Subedi and Lilong Chai
Animals 2023, 13(14), 2354; https://doi.org/10.3390/ani13142354 - 19 Jul 2023
Cited by 4 | Viewed by 4307
Abstract
Defective eggs diminish the value of laying hen production, particularly in cage-free systems with a higher incidence of floor eggs. To enhance quality, machine vision and image processing have facilitated the development of automated grading and defect detection systems. Additionally, egg measurement systems utilize weight-sorting for optimal market value. However, few studies have integrated deep learning and machine vision techniques for combined egg classification and weighing. To address this gap, a two-stage model was developed based on real-time multitask detection (RTMDet) and random forest networks to predict egg category and weight. The model combines convolutional neural network (CNN) and regression techniques to perform joint egg classification and weighing: RTMDet was used to sort and extract egg features for classification, and a Random Forest algorithm was used to predict egg weight based on the extracted features (major and minor axes). The best classification accuracy achieved was 94.8% and the best R2 was 96.0%. In addition, the model can automatically exclude non-standard-size eggs and eggs with exterior issues (e.g., calcium deposits, stains, and cracks). This detector is among the first models to jointly sort and weigh eggs; it classifies them into five categories (intact, crack, bloody, floor, and non-standard) and measures them up to jumbo size. By implementing the findings of this study, the poultry industry can reduce costs and increase productivity, ultimately leading to better-quality products for consumers.
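The weight-prediction stage of the model pairs detector-extracted axis lengths with a Random Forest regressor. A minimal sketch of that second stage using scikit-learn, where the axis ranges and the weight relation are illustrative assumptions rather than the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the detector-extracted features: major/minor axis (mm).
major = rng.uniform(50, 65, 300)
minor = rng.uniform(38, 48, 300)
# Hypothetical weight relation, roughly proportional to an ellipsoid volume.
weight = 0.0005 * major * minor**2 + rng.normal(0.0, 1.0, 300)

X = np.column_stack([major, minor])
reg = RandomForestRegressor(n_estimators=100, random_state=0)
reg.fit(X[:250], weight[:250])          # train on the first 250 eggs
r2 = reg.score(X[250:], weight[250:])   # hold out the last 50 for R^2
```

Once the detector supplies real axis measurements, the same two-column feature matrix drops in unchanged.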

16 pages, 2101 KiB  
Article
Early Detection of Avian Diseases Based on Thermography and Artificial Intelligence
by Mohammad Sadeghi, Ahmad Banakar, Saeid Minaei, Mahdi Orooji, Abdolhamid Shoushtari and Guoming Li
Animals 2023, 13(14), 2348; https://doi.org/10.3390/ani13142348 - 19 Jul 2023
Cited by 1 | Viewed by 2525
Abstract
Non-invasive measures play a critical role in precision livestock and poultry farming, as they can reduce animal stress and provide continuous monitoring. Animal activity can reflect physical and mental states as well as health conditions; if any problems are detected, an early warning can be issued for necessary actions. The objective of this study was to identify avian diseases by using thermal-image processing and machine learning. Four groups of 14-day-old Ross 308 broilers (20 birds per group) were used. Two groups were infected with one of the following diseases: Newcastle Disease (ND) and Avian Influenza (AI); the other two served as control groups. Thermal images were captured every 8 h and processed with MATLAB. After de-noising and removing the background, 23 statistical features were extracted, and the best features were selected using the improved distance evaluation method. Support vector machines (SVM) and artificial neural networks (ANN) were developed as classifiers; results indicated that the former outperformed the latter for disease classification. The Dempster–Shafer evidence theory was used as a data fusion stage when neither ANN nor SVM detected the diseases with acceptable accuracy. The final SVM-based framework achieved 97.2% and 100% accuracy for classifying AI and ND, respectively, within 24 h after virus infection. The proposed method is an innovative procedure for the timely identification of avian diseases to support early intervention.
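The Dempster–Shafer fusion stage combines the belief masses of the two classifiers. A minimal sketch of Dempster's rule of combination restricted to singleton hypotheses; the hypothesis names and mass values below are made-up examples, not the study's numbers:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment, restricted to singleton hypotheses: masses on
    agreeing hypotheses multiply, and the result is renormalised by 1 - K,
    where K is the total conflicting mass."""
    hypotheses = set(m1) | set(m2)
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    k = 1.0 - conflict  # normalisation constant
    return {h: m1.get(h, 0.0) * m2.get(h, 0.0) / k for h in hypotheses}

# Two classifiers' hypothetical beliefs about the same bird:
fused = dempster_combine({"ND": 0.7, "healthy": 0.3},
                         {"ND": 0.6, "healthy": 0.4})
```

When both classifiers lean the same way, the fused mass is more confident than either alone, which is the point of using fusion as a fallback stage.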

20 pages, 3037 KiB  
Article
Attention-Guided Instance Segmentation for Group-Raised Pigs
by Zhiwei Hu, Hua Yang and Hongwen Yan
Animals 2023, 13(13), 2181; https://doi.org/10.3390/ani13132181 - 03 Jul 2023
Cited by 2 | Viewed by 1172
Abstract
In the pig farming environment, complex factors such as pig adhesion, occlusion, and changes in body posture pose significant challenges for segmenting multiple target pigs. To address these challenges, this study collected video data using a horizontal angle of view and a non-fixed lens. Specifically, a total of 45 pigs aged 20–105 days in 8 pens were selected as research subjects, resulting in 1917 labeled images, which were divided into 959 for training, 192 for validation, and 766 for testing. A grouped attention module was employed in the feature pyramid network to fuse the feature maps from deep and shallow layers. The grouped attention module consists of a channel attention branch and a spatial attention branch. The channel attention branch models dependencies between channels to enhance feature mapping between related channels and improve semantic feature representation. The spatial attention branch establishes pixel-level dependencies by applying the response values of all pixels in a single-channel feature map to the target pixel, further guiding the original feature map to filter spatial location information and generate context-related outputs. The grouped attention module, along with data augmentation strategies, was incorporated into the Mask R-CNN and Cascade Mask R-CNN task networks to explore its impact on pig segmentation. The experiments showed that introducing data augmentation strategies improved the segmentation performance of the model to a certain extent: for Mask R-CNN under the same experimental conditions, data augmentation yielded improvements of 1.5%, 0.7%, 0.4%, and 0.5% in AP50, AP75, APL, and AP, respectively. Furthermore, the grouped attention module achieved the best performance: with Mask R-CNN, it outperformed the existing attention module CBAM by 1.0%, 0.3%, 1.1%, and 1.2% in AP50, AP75, APL, and AP, respectively. We further studied the impact of the number of groups in the grouped attention on the final segmentation results. Additionally, visualizations of predictions on third-party data collected with a top-down acquisition method, which was not involved in model training, demonstrated that the proposed model still achieved good segmentation results, proving the transferability and robustness of the grouped attention. Through comprehensive analysis, we found that grouped attention is beneficial for achieving high-precision segmentation of individual pigs across different scenes, ages, and time periods. The research results can provide references for subsequent applications such as pig identification and behavior analysis in mobile settings.
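The channel-attention branch can be sketched as a squeeze-and-excite-style gate. This is a deliberate simplification: the paper's grouped module also has a spatial branch and channel grouping, and uses learned excitation weights rather than the bare sigmoid assumed here:

```python
import numpy as np

def channel_attention(x):
    """Toy channel-attention branch: global-average-pool each channel,
    squash the pooled responses to (0, 1) gates with a sigmoid, and rescale
    the feature map so strongly responding channels are emphasised."""
    gap = x.mean(axis=(1, 2))               # (C,) per-channel descriptor
    gate = 1.0 / (1.0 + np.exp(-gap))       # sigmoid gates in (0, 1)
    return x * gate[:, None, None]          # reweight channels

x = np.zeros((2, 4, 4))
x[0] = 10.0                                 # one strongly responding channel
y = channel_attention(x)
```

The strongly responding channel passes through almost unchanged while weak channels are damped, which is the dependency-modelling effect the abstract describes.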

15 pages, 7528 KiB  
Article
Efficient Aggressive Behavior Recognition of Pigs Based on Temporal Shift Module
by Hengyi Ji, Guanghui Teng, Jionghua Yu, Yanbin Wen, Huixiang Deng and Yanrong Zhuang
Animals 2023, 13(13), 2078; https://doi.org/10.3390/ani13132078 - 23 Jun 2023
Cited by 2 | Viewed by 1408
Abstract
Aggressive behavior among pigs is a significant social issue that has severe repercussions on both the profitability and welfare of pig farms. Because of the complexity of aggression, recognizing it requires the consideration of both spatial and temporal features. To address this problem, we propose an efficient method that utilizes the temporal shift module (TSM) for automatic recognition of pig aggression. TSM is inserted into four 2D convolutional neural network models, namely ResNet50, ResNeXt50, DenseNet201, and ConvNeXt-t, enabling the models to process both spatial and temporal features without increasing the number of model parameters or the computational complexity. The proposed method was evaluated on the dataset established in this study, and the results indicate that the ResNeXt50-T model (TSM inserted into ResNeXt50) achieved the best balance between recognition accuracy and model parameters. On the test set, the ResNeXt50-T model achieved an accuracy, recall, precision, F1 score, speed, and parameter count of 95.69%, 95.25%, 96.07%, 95.65%, 29 ms, and 22.98 M, respectively. These results show that the proposed method can effectively improve the accuracy of recognizing aggressive pig behavior and provide a reference for behavior recognition in real-world smart livestock farming scenarios.
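The temporal shift module is a parameter-free tensor operation, which is why inserting it adds neither parameters nor FLOPs. A NumPy sketch of the standard shift; the 1/8 channel split follows the common TSM convention and is an assumption here:

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Parameter-free temporal shift over (N, T, C, H, W) activations:
    the first C//fold_div channels are shifted one frame into the past,
    the next C//fold_div one frame into the future, and the rest are kept,
    so a 2D convolution afterwards mixes neighbouring-frame information."""
    n, t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift toward the past
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift toward the future
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # unshifted remainder
    return out

x = np.arange(1 * 3 * 8 * 1 * 1, dtype=float).reshape(1, 3, 8, 1, 1)
y = temporal_shift(x)
```

In practice the shift is wrapped in a residual branch inside each 2D CNN block, so the backbone's spatial computation is untouched.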

15 pages, 12033 KiB  
Article
Sheep Face Recognition Model Based on Deep Learning and Bilinear Feature Fusion
by Zhuang Wan, Fang Tian and Cheng Zhang
Animals 2023, 13(12), 1957; https://doi.org/10.3390/ani13121957 - 11 Jun 2023
Cited by 5 | Viewed by 2189
Abstract
A key prerequisite for the establishment of digitalized sheep farms and precision animal husbandry is the accurate identification of each sheep's identity. Differences in sheep posture and shooting angle introduce uncertainty into sheep face recognition and degrade recognition accuracy. In this study, we propose a deep learning model based on the RepVGG algorithm and bilinear feature extraction and fusion for the recognition of sheep faces. The model training and testing datasets consist of photos of sheep faces taken at different distances and angles. We first design a feature extraction channel with an attention mechanism and RepVGG blocks; the RepVGG block reparameterization mechanism achieves lossless compression of the model, improving its recognition efficiency. Second, two feature extraction channels are combined to form a bilinear feature extraction network, which extracts important features for different poses and angles of the sheep face. Finally, features at the same scale from different images are fused to enhance the feature information, improving the recognition ability and robustness of the network. The test results demonstrate that the proposed model effectively reduces the effect of sheep face pose on recognition accuracy, with recognition rates reaching 95.95%, 97.64%, and 99.43% on the sheep side-, front-, and full-face datasets, respectively, outperforming several state-of-the-art sheep face recognition models.
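Bilinear fusion of two extraction channels is commonly realised as a pooled outer product of their channel descriptors. A sketch under the usual bilinear-CNN recipe (signed square root plus L2 normalisation); the paper's exact fusion at matched scales may differ:

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Fuse two feature maps by averaging the outer product of their channel
    descriptors over spatial positions, then applying the customary
    signed-sqrt and L2 normalisation.

    fa: (H*W, Ca), fb: (H*W, Cb) -> fused vector of length Ca*Cb.
    """
    b = fa.T @ fb / fa.shape[0]             # (Ca, Cb) pooled outer products
    b = np.sign(b) * np.sqrt(np.abs(b))     # signed square root
    return (b / (np.linalg.norm(b) + 1e-12)).ravel()

rng = np.random.default_rng(0)
fused = bilinear_pool(rng.normal(size=(49, 8)), rng.normal(size=(49, 16)))
```

The outer product captures pairwise interactions between the two channels' features, which is what makes the fusion robust to pose differences that affect the channels differently.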

14 pages, 1610 KiB  
Article
Study on the Influence of PCA Pre-Treatment on Pig Face Identification with Random Forest
by Hongwen Yan, Songrui Cai, Erhao Li, Jianyu Liu, Zhiwei Hu, Qiangsheng Li and Huiting Wang
Animals 2023, 13(9), 1555; https://doi.org/10.3390/ani13091555 - 06 May 2023
Cited by 1 | Viewed by 1294
Abstract
To explore the application of a traditional machine learning model in the intelligent management of pigs, this paper studies the influence of PCA pre-treatment on pig face identification with random forest (RF). The optimal parameter values of the two testing schemes, one adopting RF alone and the other adopting RF + PCA, were determined to be 65 and 70, respectively. In individual identification tests carried out on 10 pigs, accuracy, recall, and F1-score increased by 2.66, 2.76, and 2.81 percentage points, respectively. Apart from a slight increase in training time, the test time was reduced to 75% of that of the original scheme, so the efficiency of the optimized scheme was greatly improved. This indicates that PCA pre-treatment positively improved the efficiency of individual pig identification with RF, and it provides experimental support for mobile-terminal and embedded applications of RF classifiers.
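The RF + PCA scheme is straightforward to reproduce as a scikit-learn pipeline. This sketch substitutes synthetic 10-class data for pig-face images; the 64-dimensional inputs, the 20-component PCA, and the 70-tree forest are illustrative assumptions, not the study's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in for flattened pig-face images: 10 pigs, 64-dim vectors,
# each class carrying a bump on one distinct "pixel".
y = rng.integers(0, 10, 400)
X = rng.normal(size=(400, 64))
X[np.arange(400), y] += 5.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(PCA(n_components=20),
                    RandomForestClassifier(n_estimators=70, random_state=0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Because PCA shrinks the feature space the forest splits on, prediction time drops, which mirrors the test-time reduction reported above.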

16 pages, 2957 KiB  
Article
FedAAR: A Novel Federated Learning Framework for Animal Activity Recognition with Wearable Sensors
by Axiu Mao, Endai Huang, Haiming Gan and Kai Liu
Animals 2022, 12(16), 2142; https://doi.org/10.3390/ani12162142 - 21 Aug 2022
Cited by 7 | Viewed by 1678
Abstract
Deep learning dominates automated animal activity recognition (AAR) tasks due to its high performance on large-scale datasets. However, constructing centralised datasets across diverse farms raises data privacy issues. Federated learning (FL) provides a distributed learning solution to train a shared model by coordinating multiple farms (clients) without sharing their private data, but directly applying FL to AAR tasks often faces two challenges: client drift during local training and local gradient conflicts during global aggregation. In this study, we develop a novel FL framework called FedAAR to achieve AAR with wearable sensors. Specifically, we devise a prototype-guided local update module to alleviate the client-drift issue, which introduces a global prototype as shared knowledge to force clients to learn consistent features. To reduce gradient conflicts between clients, we design a gradient-refinement-based aggregation module that eliminates conflicting components between local gradients during global aggregation, thereby improving agreement between clients. Experiments were conducted on a public dataset consisting of 87,621 two-second accelerometer and gyroscope data segments to verify FedAAR's effectiveness. The results demonstrate that FedAAR outperforms the state-of-the-art in precision (75.23%), recall (75.17%), F1-score (74.70%), and accuracy (88.88%). The ablation experiments show FedAAR's robustness against various factors (i.e., data sizes, communication frequency, and client numbers).
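Eliminating conflicting components between client gradients can be illustrated with a PCGrad-style projection; FedAAR's own refinement rule is defined in the paper, so the sketch below is an analogous technique, not the framework's exact aggregation:

```python
import numpy as np

def refine_and_aggregate(grads):
    """Illustrative conflict-removal aggregation (PCGrad-style projection).

    Whenever a client gradient conflicts with another client's gradient
    (negative inner product), the component along the conflicting direction
    is projected out before the refined gradients are averaged."""
    refined = [g.astype(float).copy() for g in grads]
    for g in refined:
        for h in grads:
            dot = float(g @ h)
            if dot < 0.0:                       # conflicting directions
                g -= dot / float(h @ h) * h     # remove the conflicting part
    return np.mean(refined, axis=0)

# Two clients pulling in partially opposite directions:
agg = refine_and_aggregate([np.array([1.0, 0.0]), np.array([-1.0, 1.0])])
```

After refinement, the aggregated update has a non-negative inner product with every client's original gradient, so no client is pushed directly against its local objective.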

12 pages, 5829 KiB  
Article
A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor
by Xiao Yang, Lilong Chai, Ramesh Bahadur Bist, Sachin Subedi and Zihao Wu
Animals 2022, 12(15), 1983; https://doi.org/10.3390/ani12151983 - 05 Aug 2022
Cited by 29 | Viewed by 4948
Abstract
Real-time and automatic detection of chickens (e.g., laying hens and broilers) is the cornerstone of precision poultry farming based on image recognition. However, such identification becomes more challenging under cage-free conditions compared to caged ones. In this study, we developed a deep learning model (YOLOv5x-hens) based on YOLOv5, an advanced convolutional neural network (CNN), to monitor hens' behaviors in cage-free facilities. More than 1000 images were used to train the model and an additional 200 images were adopted to test it. One-way ANOVA and Tukey HSD analyses were conducted using JMP software (JMP Pro 16 for Mac, SAS Institute, Cary, North Carolina) to determine whether there were significant differences between the predicted and actual numbers of hens under various situations (i.e., age, light intensity, and observational angle). The difference was considered significant at p < 0.05. Our results show that the evaluation metrics (precision, recall, F1, and mAP@0.5) of the YOLOv5x-hens model were 0.96, 0.96, 0.96, and 0.95, respectively, in detecting hens on the litter floor. The newly developed YOLOv5x-hens showed stable performance in detecting birds under different lighting intensities, angles, and ages over 8 weeks (i.e., birds were 8–16 weeks old). For instance, the model was tested with 95% accuracy after the birds were 8 weeks old. However, younger chicks, such as one-week-old birds, were harder to track (e.g., only 25% accuracy) due to interference from equipment such as feeders, drinking lines, and perches. According to further data analysis, the model performed efficiently in real-time detection with an overall accuracy of more than 95%, which is the key step towards tracking individual birds for the evaluation of production and welfare. However, the current version of the model has some limitations: detection errors arose from highly overlapping birds, uneven light intensity, and images occluded by equipment (i.e., drinking lines and feeders). Future research is needed to address these issues for higher detection accuracy. The current study established a novel CNN deep learning model in research cage-free facilities for the detection of hens, which provides a technical basis for developing a machine vision system for tracking individual birds and evaluating their behaviors and welfare status in commercial cage-free houses.

24 pages, 19073 KiB  
Article
Behavior Classification and Analysis of Grazing Sheep on Pasture with Different Sward Surface Heights Using Machine Learning
by Zhongming Jin, Leifeng Guo, Hang Shu, Jingwei Qi, Yongfeng Li, Beibei Xu, Wenju Zhang, Kaiwen Wang and Wensheng Wang
Animals 2022, 12(14), 1744; https://doi.org/10.3390/ani12141744 - 07 Jul 2022
Cited by 8 | Viewed by 2846
Abstract
Behavior classification and recognition of sheep are useful for monitoring their health and productivity. Automatic behavior classification of sheep using wearable devices based on IMU sensors is becoming more prevalent, but there is little consensus on data processing and classification methods. Most classification accuracy tests are conducted on extracted behavior segments, with only a few trained models applied to the classification of continuous behavior segments. The aim of this study was to evaluate the performance of multiple combinations of algorithms (extreme learning machine (ELM), AdaBoost, and stacking), time windows (3, 5, and 11 s), and sensor data (three-axis accelerometer (T-acc), three-axis gyroscope (T-gyr), and T-acc plus T-gyr) for grazing sheep behavior classification on continuous behavior segments. The optimal combination was a stacking model at the 3 s time window using T-acc and T-gyr data, which had an accuracy of 87.8% and a Kappa value of 0.836. It was applied to the behavior classification of three grazing sheep continuously for a total of 67.5 h on pasture with three different sward surface heights (SSH). The results revealed that the three sheep had the longest walking, grazing, and resting times on the short, medium, and tall SSH, respectively. These findings can be used to support grazing sheep management and the evaluation of production performance.
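Classification on continuous segments starts by cutting the raw T-acc / T-gyr streams into fixed time windows and computing per-window features. A sketch of that segmentation step; the 25 Hz sampling rate and the mean/std feature set are assumptions for illustration, while the 3 s window matches the best configuration reported above:

```python
import numpy as np

def window_features(acc, gyr, fs=25, win_s=3):
    """Cut synchronised tri-axial accelerometer (acc) and gyroscope (gyr)
    streams, each shaped (T, 3), into non-overlapping windows of win_s
    seconds and return one row of per-axis mean/std features per window."""
    win = fs * win_s
    n = min(len(acc), len(gyr)) // win
    rows = []
    for i in range(n):
        seg = np.hstack([acc[i * win:(i + 1) * win],
                         gyr[i * win:(i + 1) * win]])     # (win, 6)
        rows.append(np.hstack([seg.mean(axis=0), seg.std(axis=0)]))
    return np.array(rows)

feats = window_features(np.ones((160, 3)), np.zeros((160, 3)))
```

The resulting feature matrix is what an ELM, AdaBoost, or stacking classifier would consume, one prediction per window.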

15 pages, 4304 KiB  
Article
Using Pruning-Based YOLOv3 Deep Learning Algorithm for Accurate Detection of Sheep Face
by Shuang Song, Tonghai Liu, Hai Wang, Bagen Hasi, Chuangchuang Yuan, Fangyu Gao and Hongxiao Shi
Animals 2022, 12(11), 1465; https://doi.org/10.3390/ani12111465 - 05 Jun 2022
Cited by 22 | Viewed by 2863
Abstract
Accurate identification of sheep is important for achieving precise animal management and welfare farming on large farms. In this study, a sheep face detection method based on YOLOv3 model pruning, abbreviated as YOLOv3-P, is proposed. The method is used to identify sheep in pastures, reduce stress, and achieve welfare farming. Specifically, we collected Sunit sheep face images from a pasture in Sunit Right Banner, Xilin Gol League, Inner Mongolia, and trained and compared YOLOv3, YOLOv4, Faster R-CNN, SSD, and other classical target recognition algorithms on them. Ultimately, we chose to optimize YOLOv3. Clustering the anchor boxes in YOLOv3 using the sheep face dataset increased the mAP from 95.3% to 96.4%, and compressing the model further increased the mAP from 96.4% to 97.2% while reducing the model size to one-quarter of the original. In addition, we restructured the original dataset and performed a 10-fold cross-validation experiment, obtaining an mAP of 96.84%. The results show that clustering the anchor boxes and compressing the model on this dataset is an effective method for identifying sheep. The method is characterized by a low memory requirement, high recognition accuracy, and fast recognition speed; it can accurately identify sheep and has important applications in precision animal management and welfare farming.
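Clustering anchor boxes on the target dataset, as done above for the sheep-face set, is typically k-means with 1 − IoU as the distance. A self-contained sketch: the box sizes are synthetic, and the deterministic initialisation and median update follow common YOLO practice rather than anything reported in the paper:

```python
import numpy as np

def iou_wh(box, clusters):
    """IoU between one (w, h) box and each cluster prototype, with all boxes
    anchored at a shared corner (the usual anchor-clustering simplification)."""
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50):
    """k-means on (w, h) pairs using distance = 1 - IoU and a median update."""
    areas = boxes[:, 0] * boxes[:, 1]
    order = np.argsort(areas)
    # deterministic init: spread prototypes across the box-area range
    clusters = boxes[order[np.linspace(0, len(boxes) - 1, k).astype(int)]].astype(float)
    for _ in range(iters):
        assign = np.array([np.argmax(iou_wh(b, clusters)) for b in boxes])
        for j in range(k):
            if np.any(assign == j):
                clusters[j] = np.median(boxes[assign == j], axis=0)
    return clusters

# Three synthetic size groups standing in for small/medium/large faces.
rng = np.random.default_rng(1)
boxes = np.vstack([rng.uniform(9, 11, (50, 2)),
                   rng.uniform(45, 55, (50, 2)),
                   rng.uniform(90, 110, (50, 2))])
anchors = kmeans_anchors(boxes, k=3)
```

Using 1 − IoU instead of Euclidean distance keeps large boxes from dominating the clustering, which is why dataset-specific anchors tend to lift mAP.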

19 pages, 2610 KiB  
Article
Individual Beef Cattle Identification Using Muzzle Images and Deep Learning Techniques
by Guoming Li, Galen E. Erickson and Yijie Xiong
Animals 2022, 12(11), 1453; https://doi.org/10.3390/ani12111453 - 04 Jun 2022
Cited by 17 | Viewed by 3917
Abstract
Individual feedlot beef cattle identification represents a critical component of cattle traceability in the food supply chain. It also provides insights into tracking disease trajectories, ascertaining ownership, and managing cattle production and distribution. Animal biometric solutions, e.g., identifying cattle muzzle patterns (unique features comparable to human fingerprints), may offer noninvasive and unique methods for cattle identification and tracking, but they need validation alongside advancements in machine learning modeling. The objectives of this research were to (1) collect and publish a high-quality dataset of beef cattle muzzle images, and (2) evaluate and benchmark the performance of a variety of deep learning models in recognizing individual beef cattle. A total of 4923 muzzle images of 268 US feedlot finishing cattle (>12 images per animal on average) were taken with a mirrorless digital camera and processed to form the dataset. A total of 59 deep learning image classification models were comparatively evaluated for identifying individual cattle. The best accuracy for identifying the 268 cattle was 98.7%, and the fastest processing speed was 28.3 ms/image. A weighted cross-entropy loss function and data augmentation can increase the identification accuracy of individual cattle when fewer muzzle images are available for model development. In conclusion, this study demonstrates the great potential of deep learning applications for individual cattle identification and is favorable for precision livestock management. Scholars are encouraged to utilize the published dataset to develop better models tailored to the beef cattle industry.

16 pages, 4081 KiB  
Article
Precision Feeding in Ecological Pig-Raising Systems with Maize Silage
by Yun Lyu, Jing Li, Ruixing Hou, Yitao Zhang, Sheng Hang, Wanxue Zhu, He Zhu and Zhu Ouyang
Animals 2022, 12(11), 1446; https://doi.org/10.3390/ani12111446 - 03 Jun 2022
Cited by 2 | Viewed by 2186
Abstract
Ecological pig-raising systems (EPRSs) differ from conventional breeding systems in focusing more on environmental consequences, human health, and food safety during production; production via EPRSs has therefore undergone significant development in China. Thus far, adding plant fiber sources (e.g., sweet potato leaves, maize or wheat straw, potato, alfalfa, and vinasse) to feed has become a common practice to reduce costs during the fattening period. In this context, it is necessary to choose precision EPRS diet components and a fattening period with low environmental consequences and high economic benefits. This study set up a database via pig growth models to predict environmental and economic performance based on two trials with 0%, 10%, 40%, 60%, and 80% maize silage (dry weight) added to the feed, and a continuous curve over plant fiber concentration was fitted from the generated database. Our results showed that, with increased plant fiber concentration, the environmental performance of the EPRSs exhibited an "increase-decrease-increase" trend, while the economic performance first increased and then decreased. The maize silage percentages that optimized the emergy yield ratio (EYR), environmental loading ratio (ELR), unit emergy value (UEV), emergy sustainability index (ESI), and economic profit were 19.0%, 34.3%, 24.6%, 19.9%, and 18.0%, respectively. Moreover, adding 19.9% sun-dried maize silage to the feed with a 360-day raising period gave the best balance between environmental impact and economic performance; at this balance point, the EYR, ELR, UEV, ESI, and economic profit were only 0.04%, 3.0%, 0.8%, 0.0%, and 0.1%, respectively, lower than their optimal values. Therefore, we recommend adding 20% sun-dried maize silage to the feed in practical pig-raising systems.
