Article

A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor

1 Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
2 Department of Computer Science, Franklin College of Arts and Sciences, University of Georgia, Athens, GA 30602, USA
* Author to whom correspondence should be addressed.
Animals 2022, 12(15), 1983; https://doi.org/10.3390/ani12151983
Submission received: 15 June 2022 / Revised: 29 July 2022 / Accepted: 3 August 2022 / Published: 5 August 2022

Simple Summary

Real-time and automatic detection of chickens such as laying hens and broilers is the cornerstone of precision poultry farming. For laying hens, detection is more challenging under cage-free conditions compared to caged systems. In this study, we developed a deep learning model (YOLOv5x-hens) to monitor hens’ behaviors in cage-free facilities. More than 1000 images were used to train the model and an additional 200 images were used to test it. The newly developed YOLOv5x-hens model showed stable performance in detecting birds under different lighting intensities, angles, and ages over 8 weeks. According to further data analysis, the model performed efficiently in real-time detection with an overall accuracy of more than 95%, which is the key step toward tracking individual birds for evaluation of production and welfare. However, the current version of the model still has some limitations. Detection errors were caused by heavily overlapping birds, uneven light intensity, and images occluded by equipment (i.e., drinking lines and feeders). Future research is needed to address these issues and achieve a higher detection rate. The current study provides a technical basis for developing a machine vision system for tracking individual birds for evaluation of behavior and welfare status in commercial cage-free houses.

Abstract

Real-time and automatic detection of chickens (e.g., laying hens and broilers) is the cornerstone of precision poultry farming based on image recognition. However, such identification is more challenging under cage-free conditions compared to caged systems. In this study, we developed a deep learning model (YOLOv5x-hens) based on YOLOv5, an advanced convolutional neural network (CNN), to monitor hens’ behaviors in cage-free facilities. More than 1000 images were used to train the model and an additional 200 images were used to test it. One-way ANOVA and Tukey HSD analyses were conducted using JMP software (JMP Pro 16 for Mac, SAS Institute, Cary, North Carolina) to determine whether there were significant differences between the predicted and actual numbers of hens under various situations (i.e., age, light intensity, and observational angle). The difference was considered significant at p < 0.05. Our results show that the evaluation metrics (Precision, Recall, F1, and mAP@0.5) of the YOLOv5x-hens model were 0.96, 0.96, 0.96, and 0.95, respectively, in detecting hens on the litter floor. The newly developed YOLOv5x-hens model showed stable performance in detecting birds under different lighting intensities, angles, and ages over 8 weeks (i.e., birds were 8–16 weeks old). For instance, the model reached 95% accuracy after the birds were 8 weeks old. However, younger chicks such as one-week-old birds were harder to track (e.g., only 25% accuracy) due to interference from equipment such as feeders, drinking lines, and perches. According to further data analysis, the model performed efficiently in real-time detection with an overall accuracy of more than 95%, which is the key step toward tracking individual birds for evaluation of production and welfare. However, the current version of the model has some limitations. Detection errors came from heavily overlapping birds, uneven light intensity, and images occluded by equipment (i.e., drinking lines and feeders). Future research is needed to address these issues and achieve a higher detection rate. The current study established a novel CNN deep learning model for detecting hens in research cage-free facilities, which provides a technical basis for developing a machine vision system for tracking individual birds for evaluation of behavior and welfare status in commercial cage-free houses.

1. Introduction

Daily routine evaluation of chickens (e.g., broilers and layers) is critical for maintaining animals’ health and welfare in commercial poultry houses [1,2,3]. For laying hen production, this is becoming more challenging under cage-free production systems as compared to conventional caged systems, because hens can move freely in cage-free houses, where animals have opportunities to perform common natural behaviors [4,5]. In recent years, computer vision has been used to monitor farm animals due to its non-invasive nature. A complete object detection cycle in computer vision includes observation, diagnostics, and prediction. In computer vision systems, visual sensors such as cameras are installed at fixed locations to collect images or videos of animals (i.e., cattle, pigs, and poultry) [6]. Collected data (i.e., images and videos) are fed to diagnostic components (i.e., cloud storage or digital video recorders) for further analysis with machine learning or deep learning models, which must be specifically programmed to extract object features (e.g., chickens’ profiles and body features) and predict the target class, and thus determine the accuracy of the classification.
Deep learning models process images with a self-learning capability, which enables them to perform well in various animal farming environments [6]. Earlier studies used deep learning techniques to locate pigs and track their movements. These image processing algorithms showed acceptable accuracy when cameras were installed to collect top-view images of animals, because the difference in colors between the background and the animals is clear [7,8]. Similar methods have been used to detect broilers’ behaviors and their changes over time in different areas [9,10]. The convolutional neural network (CNN) is one of the most stable and effective deep learning techniques for animal detection [11,12,13]. The combination of CNNs and image processing has been developed to detect chickens. In previous research, chicken behaviors (i.e., drinking and feeding) were detected accurately by two-stage CNNs [14,15]. The two-stage CNN method generates candidate bounding boxes first, and a detection network then determines the target objects. Although it achieves good classification accuracy, each component of a two-stage CNN model must be trained separately, which requires more computation and thus slows analysis. To enhance real-time detection, the YOLO (you only look once) model was developed as a one-stage CNN for object detection. With its end-to-end training and use of entire feature maps to predict each bounding box, it performed well on real-time behavior detection of broilers and breeders [16,17]. Ye et al. (2020) used a CNN algorithm (YOLO + multilayer residual module (MRM)) to detect 180,000 white-feather broilers per hour [18]. Ding et al. (2019) developed a YOLOv3 model to detect and locate yellow-feather broilers to investigate their heat stress conditions [19]. Zhuang and Zhang (2019) proposed a deep learning model to detect sick broilers [20]. However, most of these deep learning methods for poultry detection focus on broilers. Few studies have investigated cage-free layers because it is hard to collect clear images in cage-free houses due to the dusty environment. Deep learning has been tested for target detection on dusty images [21,22]; in fact, this is why deep learning models are important, because they can detect birds in chicken houses more reliably than human eyes. The egg industry is shifting to cage-free houses, which provide enough space for birds to perform their natural behaviors and thus improve welfare [23]; for example, all eggs sold in California must now come from hens living in cage-free houses [24]. With the increase in cage-free systems in the USA and EU countries, it is critical to develop an automatic method for detecting laying hens on the litter floor of cage-free houses.
The objectives of this study were to: (1) develop a detector for monitoring the real-time number of laying hens in different fixed zones on the floor of cage-free facilities; (2) train the YOLOv5 model (a newer version of the YOLO object detection model) with hens’ images/videos collected at different ages, angles, light intensities, and stocking densities; and (3) test the performance of the newly developed model (YOLOv5x-hens) under various production situations.

2. Materials and Methods

2.1. Experimental Setup

About 800 one-day-old Hy-Line W-36 chicks were reared evenly across four rooms, each measuring 24 ft long, 20 ft wide, and 10 ft high (7.3 m L × 6.1 m W × 3.0 m H), at the University of Georgia (UGA) Poultry Research Center (Athens, GA, USA) (Figure 1). Each room contained six hanging feeders, two drinker kits, and a portable A-frame hen perch to encourage the birds’ natural perching behaviors.
The rearing environmental conditions were controlled by an automatic system (CHORE-Time Controller, Milford, IN, USA), and the set points followed the Hy-Line W-36 commercial layer management guide. The relative humidity (RH) was controlled at 40–60%, air temperature was set at 21–23 °C, light intensity was controlled at 20 lux, and the lighting period was 19L:5D during egg laying. The corn-soy feed was manufactured at the UGA feed mill every two months to maintain freshness and inhibit mildewing. Team members checked the growth and environmental conditions of the hens every day, as suggested by the UGA Poultry Research Center Standard Operating Procedure Form. Animal use and management were approved by the Institutional Animal Care and Use Committee (IACUC) of UGA.

2.2. Data Acquisition

Waterproof HD cameras (PRO-1080MSFB, Swann Communications, Santa Fe Springs, CA, USA) were installed on the ceiling and the side wall of each room to collect video data of the chickens (18 frames per second (FPS), 1440 pixels high and 1080 pixels wide); the cameras were installed at a height of 3 m (Figure 2). To protect the lenses and collect clear video, a lens cleaning cloth was used to wipe off dust weekly. Footage was saved temporarily on video recorders (Swann Company, Santa Fe Springs, CA, USA) on the farm and then transferred to high-capacity hard disk drives (HDDs) (Western Digital Corporation, CA, USA) for safe storage in the data hub of the Department of Poultry Science at UGA.

2.3. Data Labeling

Videos recorded at the birds’ age of 8–16 weeks (the transition period from pullets to layers) were used for data analysis to make sure the method would be applicable to both pullets and hens. Birds’ images were randomly extracted from the HDDs and converted to JPG format by Free Video to JPG Converter; the “Total” option was selected during conversion to obtain random pictures. After removing blurred images, 1200 photos were labeled with Labeling Windows_v1.6.1. During labeling, chickens with one-third or more of the body visible in the image were labeled with a bounding box (Figure 3).
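For readers who prefer a scripted pipeline, an equivalent frame-extraction step can be written with OpenCV. The sketch below is only an illustration under assumed file names and sampling interval; it is not the converter actually used in this study.

```python
# Minimal sketch of frame extraction with OpenCV, as an alternative to the GUI
# converter used in the study. File paths and the sampling interval are
# hypothetical illustration values.
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_n: int = 180) -> int:
    """Save every `every_n`-th frame of a video as a JPG image; return the count saved."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if idx % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Example: at 18 FPS, every_n = 180 samples roughly one frame every 10 s.
# extract_frames("room1_week8.mp4", "frames/room1_week8", every_n=180)
```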

2.4. Model Innovation for Detecting Chickens

The YOLOv5x model was adapted, with hens’ image information integrated, into a new model, “YOLOv5x-hens”, for detecting birds on the litter floor. YOLOv5x is one of the four most commonly used object detection models in YOLOv5 (i.e., YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x) [25]. Compared to the other three models, YOLOv5x is more powerful and flexible in detecting small objects such as chickens due to its larger depth_multiple and width_multiple scaling factors, more residual networks (ResNets) in its cross stage partial network (CSPNet), and a larger number of convolution kernels (CK). Table 1 shows the differences between these models. YOLOv5x has the largest values for all of these elements, which optimizes the resolution and capacity of the network. Therefore, the YOLOv5x model was adopted and adapted for detecting chickens under sheltered and overlapping situations.
The network structure is shown in Figure 4. FOCUS means focusing width and height information into the channel space. The “Conv + Bottle Neck + Hard Swish (CBH)” block is used to extract short-term features. Cross stage partial (CSP) blocks partition and merge the feature maps for object detection. Spatial pyramid pooling (SPP) produces fixed-length representations by resampling the feature maps [29].
Model Backbone: Four different sub-models were developed in the backbone of YOLOv5x to extract the basic features of hens. Compared to YOLOv4, YOLOv5x improves the mosaic data enhancement method by adding the FOCUS structure, and thus our newly developed model, YOLOv5x-hens, is expected to be more accurate in small-object detection [30]. The alphanumeric labels refer to the number of ResNets in each CSPNet (e.g., CSP:4 indicates that there are four ResNets in that CSPNet).
Model Neck and Head: The YOLOv5x-hens model adds bottom-up path augmentation by using PAFPN [31], a feature pyramid module. The neck utilizes different feature pyramids to recognize the same chicken at diverse sizes and scales. The head produces three different levels of feature maps through a 1 × 1 convolutional layer [32]. This module maintains the chicken’s salient features while controlling the increase in the number of feature maps, so as to decrease the amount of computation required. Finally, the three reduced feature maps of the same target were used during the detection tests.
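As a point of reference, the sketch below shows how a trained YOLOv5x detector can be loaded and run through the standard Ultralytics torch.hub interface; the weight file name “yolov5x_hens.pt” and the image path are hypothetical, and this is not the authors’ released code.

```python
# Minimal inference sketch using the standard Ultralytics YOLOv5 torch.hub API.
# "yolov5x_hens.pt" is a hypothetical fine-tuned checkpoint name, not a released file.
import torch

# Load a custom-trained YOLOv5 model (use "yolov5x" instead of "custom" to get
# the stock COCO-pretrained weights as a starting point for fine-tuning).
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5x_hens.pt")
model.conf = 0.25  # confidence threshold for keeping detections

results = model("frames/room1_week8/frame_000000.jpg")  # hypothetical image path
detections = results.pandas().xyxy[0]  # DataFrame: one row per detected bird
print(f"{len(detections)} hens detected")
```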

2.5. Model Evaluation and Statistical Data Analysis

The approach to counting the number of chickens in an image was based on “for loop” and “if” statements [33,34]. The for loop allows code to be executed repeatedly to extract the center coordinates of each bounding box. Several conditional statements were used to count the chickens in a given area, normalize the frame of reference, and add a bounding box to the object (chicken). When the input class (CLS) was 0 (i.e., a chicken), the code incremented an accumulator by one, so the final value of the accumulator equaled the total number of chickens detected.
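The sketch below illustrates this counting logic, assuming detections are given as (class, x_center, y_center) tuples in normalized image coordinates; the zone boundaries are hypothetical.

```python
# Minimal sketch of the "for loop and if" counting logic: iterate over detections
# and increment an accumulator when a box belongs to the hen class (class 0) and
# its center falls inside the zone of interest. Zone bounds are illustrative.
def count_hens_in_zone(detections, zone=(0.0, 0.0, 1.0, 1.0)) -> int:
    """detections: iterable of (cls, x_center, y_center) in normalized coordinates."""
    x_min, y_min, x_max, y_max = zone
    count = 0
    for cls, xc, yc in detections:  # for loop over every bounding box
        if cls == 0 and x_min <= xc <= x_max and y_min <= yc <= y_max:  # if statements
            count += 1  # accumulator incremented by 1 for every hen in the zone
    return count

# Example: three detections, two of which fall in the left half of the image.
dets = [(0, 0.20, 0.40), (0, 0.45, 0.80), (0, 0.90, 0.30)]
print(count_hens_in_zone(dets, zone=(0.0, 0.0, 0.5, 1.0)))  # -> 2
```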
In this study, precision, recall, F1 score, and mean average precision (mAP) metrics were applied to assess the performance of the trained model in detecting chickens [32]. The detailed calculations are shown in the equations below:
$$\text{Precision} = \frac{TP}{TP + FP}$$
$$\text{Recall} = \frac{TP}{TP + FN}$$
$$F1\ \text{score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
$$\text{mAP} = \frac{1}{n}\sum_{k=1}^{n} AP_k$$
where $AP_k$ is the average precision of class $k$ and $n$ is the number of classes.
A true positive (TP) is a result where the model correctly detects a chicken. A false positive (FP) is an outcome where the model falsely predicts the positive class, i.e., reports a chicken where there is none. A false negative (FN) indicates that the model misses a chicken that is present. The mAP@0.5 is the mAP calculated at an IoU (Intersection over Union) threshold of 0.5.
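A minimal sketch of these metric definitions, together with the IoU computation used for the mAP@0.5 matching criterion, is given below; the example box values are illustrative only.

```python
# Minimal sketch of the evaluation metrics defined above. A detection counts as a
# true positive for mAP@0.5 when its IoU with a ground-truth box exceeds 0.5.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

def iou(box_a, box_b) -> float:
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Illustrative example: two partially overlapping boxes.
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # -> 0.143
```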
One-way ANOVA and Tukey HSD tests were conducted using JMP software (JMP Pro 16 for Mac, SAS Institute, Cary, NC, USA) to determine whether there were significant differences between the predicted and actual numbers of chickens under different production and environmental conditions (i.e., light intensity, age, stocking density, and observational angle) [35]. The difference was considered significant at p < 0.05.
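The study used JMP for these tests; an equivalent analysis could be sketched in Python with SciPy and statsmodels as shown below. The counting errors used here are illustrative values, not the experimental data.

```python
# Minimal sketch of the same statistical comparison in Python (the study used JMP
# Pro 16). The per-image counting errors below are illustrative, not real data.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Counting error (predicted minus actual number of hens) grouped by light intensity.
df = pd.DataFrame({
    "error": [0, 1, -1, 0, 2, 0, -1, 1, 0, 0, 1, -2],
    "group": ["10 lux"] * 6 + ["30 lux"] * 6,
})

# One-way ANOVA across groups.
f_stat, p_value = f_oneway(*[g["error"].to_numpy() for _, g in df.groupby("group")])
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.3f}")

# Tukey HSD post hoc comparison with the same significance threshold (p < 0.05).
print(pairwise_tukeyhsd(df["error"], df["group"], alpha=0.05))
```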

3. Results and Discussion

3.1. Performance of the YOLOv5x-Hens

A confusion matrix of the best YOLOv5x-hens model was generated after training for 300 epochs. The true positive, false positive, false negative, and true negative counts were 970, 42, 46, and 0, respectively. Table 2 summarizes the performance metrics for YOLOv5x-hens and provides a comparison to YOLOv3, a widely used CNN model. The target objects were chickens in both models, but the chickens’ feathers were presented differently: white in the YOLOv5x-hens dataset and yellow in the YOLOv3 study. Additionally, the floor bedding/litter color in the two experiments was not the same; in our experiment, the color of the litter was close to white, but it was brown in [29]. Overall, both models performed well in target detection, but YOLOv5x-hens has an 8% higher recall, although its precision is 3% lower. For our newly developed model, the dataset consists of white hens living on white bedding material, which is more challenging for edge detection. In addition, our experiment was based on cage-free facilities, which contained more frequently moving birds that changed their positions quickly between adjacent frames, affecting training effectiveness [14]. Nevertheless, our model accurately detected the real-time number of birds from pullets (young hens) to layers (mature hens).
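As a quick consistency check, the Table 2 metrics can be recomputed from the confusion-matrix counts above; small differences from the reported values reflect rounding.

```python
# Quick check: recompute the Table 2 metrics from the confusion-matrix counts.
tp, fp, fn = 970, 42, 46
p = tp / (tp + fp)           # ~0.958 (precision)
r = tp / (tp + fn)           # ~0.955 (recall)
f1 = 2 * p * r / (p + r)     # ~0.957 (F1 score)
print(round(p, 3), round(r, 3), round(f1, 3))
```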

3.2. Convergence Results of Object Detector

The dataset was divided into training and validation sets; the loss curves consist of the frame (bounding-box) loss and the objectness loss of the hens on the floor (Figure 5). The frame loss reflects how well the predicted bounding boxes cover the target hens; a high value indicates an unsatisfactory prediction. The objectness loss is a compound loss based on the probability that an object is present in the proposed region; a high objectness loss means the accuracy of the model still needs to be improved [36].
As shown in Figure 5, the loss functions of the training and validation processes showed a downward trend throughout training; stochastic gradient descent (SGD) optimized the objective function toward parameters that correspond to the best fit between the predicted and actual outputs. Before training reached 100 epochs, the loss values decreased rapidly, and the accuracy, precision, and average precision improved rapidly. As SGD kept iterating and training reached 200 epochs, the decrease in the loss values slowed, and so did the improvement in these metrics. When training reached approximately 280 epochs, the loss values of the training and validation sets changed only slightly, indicating that the accuracy and precision of the model had reached their peak. The best model weights were output after training finished.

3.3. Evaluation of Model Performance under Different Level of Light Intensity

In this study, 200 photos were used to test the performance of YOLOv5x-hens under different levels of light intensity (10 lux and 30 lux; Figure 6). The accuracies at 10 lux and 30 lux were 95.15% and 95.02%, respectively. There was no significant difference in accuracy between the two light intensities because deep learning models, especially convolutional neural networks, display strong learning abilities: the detector applies convolution layers, pooling layers, and weight-based filters to each pixel of the image, which enhances robustness [37].

3.4. Evaluation of Model Performance under Different Level of Flock Density

For model evaluation, 300 photos were used to test model accuracy under different levels of flock density (Figure 7 and Figure 8): low density (0–5 birds/m²), moderate density (5–9 birds/m²), and high density (9–18 birds/m²). There was no difference in accuracy between the low and moderate densities (95.60% and 95.58%, respectively). At densities of 9 birds/m² or more, the accuracy of the model (60.74%) decreased due to extreme overlapping and occlusion of chickens, which led to detector errors in classifying the hens’ boundaries [38]. Tracking individual hens in a piling group is difficult. In previous studies, a density map was used to estimate chicken density, but the results tended to be unstable [39]. Therefore, further studies are needed to improve the detector’s performance under high flock density.
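A minimal sketch of mapping a per-zone hen count to the density categories used above, assuming a known zone area, is given below; this is an illustration rather than the authors’ implementation.

```python
# Minimal sketch of assigning the density categories used above from a per-zone
# hen count and zone area (m^2); an illustration, not the authors' implementation.
def density_category(count: int, zone_area_m2: float) -> str:
    density = count / zone_area_m2  # birds per square metre
    if density < 5:
        return "low"
    elif density < 9:
        return "moderate"
    return "high"  # 9 birds/m^2 or more

# Example: 12 hens in a 2 m^2 floor zone -> 6 birds/m^2 -> moderate density.
print(density_category(12, 2.0))  # -> "moderate"
```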

3.5. Performance of YOLOv5x-Hens under Different Angles

In the current study, cameras were installed on the ceiling (vertical angle) and sidewall (horizontal angle). A total of 200 images were used to evaluate the effect of camera angle on detection. The model performance differed between the two angles: it performed better at the vertical angle (96.33%) than at the horizontal monitoring angle (82.24%) (Figure 9). Chickens could be occluded by feeders, drinking equipment, and other facilities, as previously noticed in broiler chicken houses [40,41]. Misidentifications were also caused by similar margins between chickens and other objects. In previous studies, the accuracy of vertical observation was also higher than that of horizontal observation: researchers developed a region-based CNN model to detect chickens and reported 99.5% accuracy at the vertical angle, while the accuracy at the horizontal angle was lower than 90% (i.e., 89.6%). At the horizontal angle, more objects tend to have margins similar to the target chickens (e.g., the shape of a feeder is nearly the same as the main body of a chicken) [40,41]. Therefore, collecting birds’ image data at a vertical angle is critical for developing a hen tracking model or system.

3.6. Performance of YOLOv5x-Hens under Different Ages of Birds

To test the accuracy of YOLOv5x-hens at different stages of growth, video data collected at the birds’ ages of week 1, week 8, week 16, and week 18 were used, because these ages represent key stages of laying hens (i.e., baby chick, teenage, first-egg, and adult stages) (Figure 10). From the teenage to the first-egg stage, the model performed accurately in detecting individual hens (around 96.0%). For baby chicks (<1 week old), however, the model achieved only about 25.6% accuracy due to the chicks’ small body size. Similar experiments have been conducted in broiler houses: a faster region-based CNN model was developed to detect broiler chickens continuously and showed stable performance for 4–5-week-old broilers, which had reached market body weight (e.g., 2–2.5 kg) [14]. For cage-free houses, the monitoring accuracy of YOLOv5x-hens increased with the birds’ age until week 16 or 17, when the pullets had a body size similar to mature hens.

4. Conclusions

In this study, a CNN-based deep learning model, YOLOv5x-hens, was built and evaluated to track hens (e.g., the real-time number of hens in different locations) on the litter floor of cage-free facilities. The YOLOv5x-hens model performed efficiently in real-time detection under different lighting intensities, angles, bird densities, and ages over 8 weeks (i.e., birds were 8–16 weeks old). However, some misidentifications occurred due to the hens’ piling behaviors, uneven light intensities, and images occluded by equipment (i.e., drinking lines and feeders). Future research is warranted to address those issues (i.e., higher bird densities of over 9 birds/m²) and improve the model’s detection efficiency and applicability. The current study established the first real-time and accurate CNN model for the detection of pullets or hens in cage-free facilities. It provides a technical basis for developing a machine vision system for tracking individual pullets and hens for the evaluation of behavior and welfare status (i.e., sick bird or pododermatitis evaluation) in the future.

Author Contributions

L.C. designed the experiment; X.Y., R.B.B., S.S. and L.C. collected data; X.Y., Z.W. and L.C. analyzed data; X.Y. and L.C. wrote the paper; L.C. provided resources; R.B.B., X.Y. and S.S. managed the chickens. All authors have read and agreed to the published version of the manuscript.

Funding

Egg Industry Center Competitive Grant (2020–2022); Oracle for Research Grant, Oracle America (Award Number: CPQ-2060433); UGA College of Agricultural and Environmental Sciences Dean’s Office Research Fund.

Institutional Review Board Statement

The animal use and management were approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Georgia (UGA) (approval number: A2020-08-014).

Informed Consent Statement

Not applicable, as this research did not involve humans.

Data Availability Statement

Data available on request due to ethical restrictions.

Acknowledgments

This project is supported by the Egg Industry Center; Oracle for Research Grant, Oracle America (Award Number: CPQ-2060433); UGA College of Agricultural and Environmental Sciences Dean’s Office Research Fund; and USDA-Hatch projects: Future Challenges in Animal Production Systems: Seeking Solutions through Focused Facilitation (GEO00895; Accession Number: 1021519) & Enhancing Poultry Production Systems through Emerging Technologies and Husbandry Practices (GEO00894; Accession Number: 1021518).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abas, A.M.F.M.; Azmi, N.A.; Amir, N.S.; Abidin, Z.Z.; Shafie, A.A. Chicken Farm Monitoring System. In Proceedings of the 6th International Conference on Computer and Communication Engineering (ICCCE 2016), Chengdu, China, 23–26 April 2021; IEEE: New York, NY, USA, 2016; pp. 132–137. [Google Scholar]
  2. Faroqi, A.; Utama, A.N.; Ramdhani, M.A.; Mulyana, E. Design Of a Cage Temperature Monitoring System and Microcontroller Base On Automatic Chicken. In Proceedings of the 2020 6th International Conference on Wireless and Telematics (ICWT), Yogyakarta, Indonesia, 3–4 September 2020; IEEE: New York, NY, USA, 2020. [Google Scholar]
  3. Stadig, L.M.; Rodenburg, T.B.; Ampe, B.; Reubens, B.; Tuyttens, F.A.M. An Automated Positioning System for Monitoring Chickens’ Location: Effects of Wearing a Backpack on Behaviour, Leg Health and Production. Appl. Anim. Behav. Sci. 2018, 198, 83–88. [Google Scholar] [CrossRef]
  4. Brannan, K.E.; Anderson, K.E. Examination of the Impact of Range, Cage-Free, Modified Systems, and Conventional Cage Environments on the Labor Inputs Committed to Bird Care for Three Brown Egg Layer Strains. J. Appl. Poult. Res. 2021, 30, 100118. [Google Scholar] [CrossRef]
  5. Hartcher, K.M.; Jones, B. The Welfare of Layer Hens in Cage and Cage-Free Housing Systems. World’s Poult. Sci. J. 2017, 73, 767–782. [Google Scholar] [CrossRef] [Green Version]
  6. Li, G.; Huang, Y.; Chen, Z.; Chesser, G.D.; Purswell, J.L.; Linhoss, J.; Zhao, Y. Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A Review. Sensors 2021, 21, 1492. [Google Scholar] [CrossRef]
  7. Tillett, R.D.; Onyango, C.M.; Marchant, J.A. Using Model-Based Image Processing to Track Animal Movements. Comput. Electron. Agric. 1997, 17, 249–261. [Google Scholar] [CrossRef]
  8. Lind, N.M.; Vinther, M.; Hemmingsen, R.P.; Hansen, A.K. Validation of a Digital Video Tracking System for Recording Pig Locomotor Behaviour. J. Neurosci. Methods 2005, 143, 123–132. [Google Scholar] [CrossRef] [PubMed]
  9. Kashiha, M.; Pluk, A.; Bahr, C.; Vranken, E.; Berckmans, D. Development of an Early Warning System for a Broiler House Using Computer Vision. Biosyst. Eng. 2013, 116, 36–45. [Google Scholar] [CrossRef]
  10. Neves, D.P.; Mehdizadeh, S.A.; Tscharke, M.; Nääs, I.D.A.; Banhazi, T.M. Detection of Flock Movement and Behaviour of Broiler Chickens at Different Feeders Using Image Analysis. Inf. Processing Agric. 2015, 2, 177–182. [Google Scholar] [CrossRef] [Green Version]
  11. Avanzato, R.; Beritelli, F. An Innovative Acoustic Rain Gauge Based on Convolutional Neural Networks. Information 2020, 11, 183. [Google Scholar] [CrossRef] [Green Version]
  12. Ahmed, M.S.; Mahmud, M.Z.; Akhter, S. Musical Genre Classification on the Marsyas Audio Data Using Convolution NN. In Proceedings of the 2020 23rd International Conference on Computer and Information Technology (ICCIT 2020), Dhaka, Bangladesh, 19–21 December 2020; IEEE: New York, NY, USA, 2020; p. 243. [Google Scholar]
  13. Chan, T.K.; Chin, C.S.; Li, Y. Semi-Supervised NMF-CNN for Sound Event Detection. IEEE Access 2021, 9, 130529–130542. [Google Scholar] [CrossRef]
  14. Li, G.; Hui, X.; Chen, Z.; Chesser, G.D.; Zhao, Y. Development and Evaluation of a Method to Detect Broilers Continuously Walking around Feeder as an Indication of Restricted Feeding Behaviors. Comput. Electron. Agric. 2021, 181, 105982. [Google Scholar] [CrossRef]
  15. Lin, C.-Y.; Hsieh, K.-W.; Tsai, Y.-C.; Kuo, Y.-F. Automatic Monitoring of Chicken Movement and Drinking Time Using Convolutional Neural Networks. Trans. ASABE 2020, 63, 2029–2038. [Google Scholar] [CrossRef]
  16. Ye, C.; Yousaf, K.; Qi, C.; Liu, C.; Chen, K. Broiler Stunned State Detection Based on an Improved Fast Region-Based Convolutional Neural Network Algorithm. Poult. Sci. 2020, 99, 637–646. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, J.; Wang, N.; Li, L.; Ren, Z. Real-Time Behavior Detection and Judgment of Egg Breeders Based on YOLO V3. Neural Comput. Appl. 2020, 32, 5471–5481. [Google Scholar] [CrossRef]
  18. Ye, C.; Yu, Z.; Kang, R.; Yousaf, K.; Qi, C.; Chen, K.; Huang, Y. An Experimental Study of Stunned State Detection for Broiler Chickens Using an Improved Convolution Neural Network Algorithm. Comput. Electron. Agric. 2020, 170, 105284. [Google Scholar] [CrossRef]
  19. Ding, A.; Zhang, X.; Zou, X.; Qian, Y.; Yao, H.; Zhang, S.; Wei, Y. A Novel Method for the Group Characteristics Analysis of Yellow Feather Broilers Under the Heat Stress Based on Object Detection and Transfer Learning. INMATEH Agric. Eng. 2019, 58, 49–58. [Google Scholar] [CrossRef]
  20. Zhuang, X.; Zhang, T. Detection of Sick Broilers by Digital Image Processing and Deep Learning. Biosyst. Eng. 2019, 179, 106–116. [Google Scholar] [CrossRef]
  21. Liu, H.; Li, C.; Jia, S.; Zhang, D. Text Detection for Dust Image Based on Deep Learning. In Proceedings of the 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation (YAC), Nanjing, China, 18–20 May 2018; pp. 754–759. [Google Scholar]
  22. Yan, T.; Wang, L.J.; Wang, J.X. Video Image Enhancement Method Research in the Dust Environment. Laser J. 2014, 35, 23–25. [Google Scholar]
  23. Lusk, J.L. Consumer Preferences for Cage-Free Eggs and Impacts of Retailer Pledges. Agribusiness 2019, 35, 129–148. [Google Scholar] [CrossRef]
  24. Oh, S.E.; Vukina, T. The Price of Cage-Free Eggs: Social Cost of Proposition 12 in California. Am. J. Agric. Econ. 2022, 104, 1293–1326. [Google Scholar] [CrossRef]
  25. Walia, I.S.; Kumar, D.; Sharma, K.; Hemanth, J.D.; Popescu, D.E. An Integrated Approach for Monitoring Social Distancing and Face Mask Detection Using Stacked ResNet-50 and YOLOv5. Electronics 2021, 10, 2996. [Google Scholar] [CrossRef]
  26. Zhu, X.; Lyu, S.; Wang, X.; Zhao, Q. TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-Captured Scenarios. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021), Montreal, BC, Canada, 11–17 October 2021; IEEE Computer Soc: Los Alamitos, CA, USA, 2021; pp. 2778–2788. [Google Scholar]
  27. Wang, H.; Hu, Z.; Guo, Y.; Yang, Z.; Zhou, F.; Xu, P. A Real-Time Safety Helmet Wearing Detection Approach Based on CSYOLOv3. Appl. Sci. Basel 2020, 10, 6732. [Google Scholar] [CrossRef]
  28. Zhang, M.; Che, W.; Zhou, G.; Aw, A.; Tan, C.L.; Liu, T.; Li, S. Semantic Role Labeling Using a Grammar-Driven Convolution Tree Kernel. IEEE Trans. Audio Speech Lang. Process. 2008, 16, 1315–1329. [Google Scholar] [CrossRef]
  29. Fan, Y.; Li, Y.; Shi, Y.; Wang, S. Application of YOLOv5 Neural Network Based on Improved Attention Mechanism in Recognition of Thangka Image Defects. KSII Trans. Internet Inf. Syst. 2022, 16, 245–265. [Google Scholar] [CrossRef]
  30. Zhu, L.; Geng, X.; Li, Z.; Liu, C. Improving YOLOv5 with Attention Mechanism for Detecting Boulders from Planetary Images. Remote Sens. 2021, 13, 3776. [Google Scholar] [CrossRef]
  31. Zhang, Y.; Xie, F.; Huang, L.; Shi, J.; Yang, J.; Li, Z. A Lightweight One-Stage Defect Detection Network for Small Object Based on Dual Attention Mechanism and PAFPN. Front. Phys. 2021, 9, 491. [Google Scholar] [CrossRef]
  32. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional Neural Networks: An Overview and Application in Radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [Green Version]
  33. Mosqueira-Rey, E.; Alonso-Ríos, D.; Baamonde-Lozano, A. Integrating Iterative Machine Teaching and Active Learning into the Machine Learning Loop. Procedia Comput. Sci. 2021, 192, 553–562. [Google Scholar] [CrossRef]
  34. Shukla, A.; Hudemann, K.N.; Hecker, A.; Schmid, S. Runtime Verification of P4 Switches with Reinforcement Learning. In Proceedings of the Netai’19: Proceedings of the 2019 Acm Sigcomm Workshop on Network Meets Ai & Ml, Beijing, China, 23 August 2019; Assoc Computing Machinery: New York, NY, USA, 2019; pp. 1–7. [Google Scholar]
  35. Wang, W.; Chen, Y.; Wang, J.; Lv, Z.; Li, E.; Zhao, J.; Liu, L.; Wang, F.; Liu, H. Effects of Reduced Dietary Protein at High Temperature in Summer on Growth Performance and Carcass Quality of Finishing Pigs. Animals 2022, 12, 599. [Google Scholar] [CrossRef]
  36. Chen, Z.; Wu, R.; Lin, Y.; Li, C.; Chen, S.; Yuan, Z.; Chen, S.; Zou, X. Plant Disease Recognition Model Based on Improved YOLOv5. Agronomy 2022, 12, 365. [Google Scholar] [CrossRef]
  37. Chen, C.-H.; Lai, Y.-K. The Influence Measures of Light Intensity on Machine Learning for Semantic Segmentation. In Proceedings of the 2020 17th International Soc Design Conference (ISOCC 2020), Yeosu, Korea, 21–24 October 2020; IEEE: New York, NY, USA, 2020; pp. 199–200. [Google Scholar]
  38. Mahyari, T.L.; Dansereau, R.M. Deep Learning Methods for Image Segmentation Containing Translucent Overlapped Objects. In Proceedings of the 2019 7th IEEE Global Conference on Signal and Information Processing (IEEE Globalsip), Ottawa, ON, Canada, 11–14 November 2019; IEEE: New York, NY, USA, 2019. [Google Scholar]
  39. Cheng, D.; Rong, T.; Cao, G. Density Map Estimation for Crowded Chicken. In Proceedings of the Image and Graphics-10th International Conference, ICIG 2019, Beijing, China, 23–25 August 2019; Zhao, Y., Barnes, N., Chen, B., Westermann, R., Kong, X., Lin, C., Eds.; Springer International Publishing Ag: Cham, Switzerland, 2019; Volume 11903, pp. 432–441. [Google Scholar]
  40. Li, G.; Zhao, Y.; Porter, Z.; Purswell, J.L. Automated Measurement of Broiler Stretching Behaviors under Four Stocking Densities via Faster Region-Based Convolutional Neural Network. Animal 2021, 15, 100059. [Google Scholar] [CrossRef] [PubMed]
  41. Geffen, O.; Yitzhaky, Y.; Barchilon, N.; Druyan, S.; Halachmi, I. A Machine Vision System to Detect and Count Laying Hens in Battery Cages. Animal 2020, 14, 2628–2634. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Experiment setup of cage-free research houses.
Figure 2. Positions of installed cameras.
Figure 3. Examples of chicken image labeling.
Figure 4. The network structure of YOLOv5x.
Figure 5. Convergence of the loss functions of training and validation sets (Box and Objectness are the frame loss and hens’ loss of the training set, respectively; val Box and val Objectness are the frame loss and hens’ loss of the validation set, respectively).
Figure 6. Number of chickens identified under different levels of light intensity by our model: 10 lux (a) vs. 30 lux (b).
Figure 7. Number of chickens identified under low density and moderate density by our model: low density (a) vs. moderate density (b).
Figure 8. Number of chickens identified under high density by our model: original image of high density (a) vs. identified high density (b).
Figure 9. Number of chickens identified under horizontal angle and vertical angle by our model: horizontal angle (a) and vertical angle (b).
Figure 10. Number of chickens identified under different ages: (a) 1 week old, (b) 8 weeks old, (c) 16 weeks old, and (d) 18 weeks old with the newly developed model.
Table 1. The differences between YOLOv5 models.

| Parameter | YOLOv5s | YOLOv5m | YOLOv5l | YOLOv5x | Function |
|---|---|---|---|---|---|
| Depth_multiple | 0.33 | 0.67 | 1.00 | 1.33 | Model scaling [26] |
| Width_multiple | 0.50 | 0.75 | 1.00 | 1.25 | Model scaling [26] |
| ResNet in CSPNet | 12 | 24 | 36 | 48 | Computational load [27] |
| Convolution kernel | 512 | 768 | 1024 | 1280 | Feature extraction [28] |
Table 2. Performance metrics for YOLOv5x-hens and the comparison.

| Items | Precision | Recall | F1 Score | mAP@0.5 |
|---|---|---|---|---|
| YOLOv3 [29] | 0.988 | 0.875 | 0.926 | / |
| YOLOv5x-hens | 0.958 | 0.954 | 0.956 | 0.948 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Yang, X.; Chai, L.; Bist, R.B.; Subedi, S.; Wu, Z. A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor. Animals 2022, 12, 1983. https://doi.org/10.3390/ani12151983

