Article

Facial Region Analysis for Individual Identification of Cows and Feeding Time Estimation

Yusei Kawagoe 1, Ikuo Kobayashi 2 and Thi Thi Zin 1,*
1 Graduate School of Engineering, University of Miyazaki, Miyazaki 889-2192, Japan
2 Field Science Center, Faculty of Agriculture, University of Miyazaki, Miyazaki 889-2192, Japan
* Author to whom correspondence should be addressed.
Agriculture 2023, 13(5), 1016; https://doi.org/10.3390/agriculture13051016
Submission received: 3 March 2023 / Revised: 8 April 2023 / Accepted: 27 April 2023 / Published: 6 May 2023
(This article belongs to the Special Issue Artificial Intelligence in Livestock Farming)

Abstract: With the increasing number of cows per farm in Japan, automatic cow monitoring systems are being introduced. One important aspect of such a system is the ability to identify individual cows and estimate their feeding time. In this study, we propose a method for achieving this goal through facial region analysis. We used a YOLO detector to extract the head region of each cow from video images captured during feeding, cropping the detected head region as a face region image. Transfer learning was then employed to identify individual cows from their face region images: a pre-existing deep neural network is retrained to recognize individual cows based on unique physical characteristics, such as head shape, markings, or ear tags. To estimate feeding time, we divided the feeding area into vertical strips, one per cow, and established a horizontal reference line just above the feeding materials, detected using Hough transform techniques, to determine whether a cow was feeding. We tested our method using real-life data from a large farm, and the experimental results showed promise in achieving our objectives. This approach has the potential to support the diagnosis of diseases and movement disorders in cows and could provide valuable insights for farmers.

1. Introduction

The Japanese livestock industry is facing several challenges due to the aging of the population and the lack of workers [1]. As a result, the number of dairy cow farms is decreasing year by year, while the number of cows per farm is increasing. This trend has led to the current form of livestock farming becoming larger than in the past, which creates an urgent need to reduce the labor burden and improve management efficiency [2].
To address these challenges, the implementation of smart agriculture technology, which utilizes the Internet of Things (IoT) and robots, is progressing in Japan [3]. Smart agriculture aims to automate work, reduce burdens, and facilitate technology transfer, with the goal of improving productivity and reducing labor costs. Practical examples of smart agriculture in dairy farming include milking robots and behavior analysis using acceleration sensors.
However, there are still many areas in dairy farming where automation technology is not widely used. One example is individual cow identification, which is required in Japan. Currently, cows wear ear tags in both ears, and RFID (Radio Frequency Identification) tags are used to identify individual cows through wireless communication. While this method can greatly improve work efficiency, the tags may be unreadable because of their limited effective range or because of radio wave interference when several cows are clustered together. Additionally, installing both the government-designated ear tag and a separate RFID ear tag requires a significant amount of time and money.
Feeding time is a crucial factor in the diagnosis of various diseases and movement disorders in cows. Studies have found that feeding behavior decreases before parturition, and changes in feeding time can be used to diagnose diseases such as ketosis and movement disorders such as lameness [4,5]. However, it is difficult to observe the feeding behavior of cows with the human eye throughout the day, and the number of cows that can be managed and the time available for this work are limited by the other tasks that must be performed [6,7]. Contact-type devices can be attached to cows to monitor feeding behavior, but these devices have several problems, including stress and discomfort for the cows and the need for specialists to reattach them if they become damaged [8].
In this study, we propose a method for identifying individual cows, detecting feeding behavior, and estimating feeding time from camera images of cows using image processing technology. Our method utilizes YOLO, a fast real-time object detector, which allows the instantaneous detection of the head regions of cows and real-time confirmation of the feeding time of each cow.
The main contributions of this paper are:
(1) The use of YOLO, which provides real-time detection of cow head regions and real-time confirmation of feeding time.
(2) The ability to diagnose cow diseases and movement disorders by recording changes in feeding time, which helps improve health management.
(3) The potential to improve farmers’ work efficiency and reduce their workload by managing the feeding time of all cows.
Overall, our proposed method has the potential to address some of the challenges faced by the Japanese livestock industry, specifically by improving productivity and reducing the labor burden associated with managing large numbers of cows.

2. Materials and Methods

In this study, we propose a video-based approach for the individual identification and feeding time estimation of cows. Our approach focuses on analyzing the facial region of cows captured by a camera. We extracted the head region of each cow from the images and applied a series of processes to identify individual cows and estimate their feeding time. Specifically, we used the algorithm shown in Figure 1, which involves techniques from image processing, machine learning, and data analysis. Our approach is designed to be accurate, efficient, and scalable, and has the potential to improve the management and productivity of dairy farms. This section provides a detailed description of each process involved in the proposed algorithm; the experimental results validating our approach are presented afterwards.

2.1. Individual Identification Using Face Region Images of Cow

The individual cow identification process is shown in Figure 2 as a subsystem of the general framework.

2.1.1. Cow Head Region Detection

To perform individual identification in the face region of a cow, the head region of the cow was extracted from the input image. Figure 3 shows an example of the input image. The procedure involves extracting several candidate head regions of cows using the YOLO detector [9] and surrounding them with bounding boxes. The detection results of the candidate cow head regions obtained by the YOLO detector are shown in Figure 4, and the details of the training data and hyperparameters are presented in Table 1 and Table 2, respectively.
It is important to note that adjusting the YOLO parameters can significantly affect the performance of the algorithm. For instance, the input size determines the size of the image that YOLO will process. A larger input size can result in higher accuracy, but it also requires more computational resources. Anchor boxes are predefined shapes that YOLO uses to detect objects of different sizes and aspect ratios. Choosing the right anchor box sizes and ratios can help improve detection accuracy. The confidence threshold determines the minimum confidence score required for YOLO to detect an object. A higher confidence threshold can help reduce false positives but may also result in missing some objects. The non-max suppression threshold determines how overlapping bounding boxes are handled. A higher threshold can help eliminate redundant detections but may also result in missing some objects. By fine-tuning these parameters, we can improve YOLO’s performance to better suit our specific use case.
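As an illustration of how these post-processing parameters interact, the following is a minimal sketch using OpenCV's NMSBoxes routine; the threshold values shown are assumptions for illustration, not the settings used in this study.

```python
# Illustrative YOLO post-processing; threshold values are placeholders,
# not the hyperparameters used in the paper.
import numpy as np
import cv2

CONF_THRESHOLD = 0.5   # minimum confidence for a detection to be kept
NMS_THRESHOLD = 0.45   # IoU above which overlapping boxes are suppressed

def filter_detections(boxes, scores):
    """boxes: list of [x, y, width, height]; scores: list of confidences.
    Returns the boxes that survive confidence filtering and NMS."""
    indices = cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESHOLD, NMS_THRESHOLD)
    return [boxes[i] for i in np.array(indices).reshape(-1)]
```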
However, duplicate head regions are sometimes detected for the same cow. To address this issue, we excluded duplicate detections using the Intersection over Union (IoU) measure. The IoU of each pair of detected regions was calculated, and if the IoU exceeded a threshold value, the candidate region closer to the cow’s head was selected as the cow’s head region and the other was excluded from the candidates. Additionally, when a cow stretches her neck beyond the camera’s shooting range, only the part of her head remaining within the frame may be detected as a head region; similarly, a cow at the edge of the frame may stretch her neck so that only part of her head stays in view. To handle such partial detections, we calculated the aspect ratio of the candidate bounding box, and if the aspect ratio exceeded a threshold value, the region was judged not to be a cow’s head region and was excluded from the candidates.
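A minimal sketch of this pruning step is given below. Since "closer to the cow's head" cannot be evaluated without the detector's context, the higher-confidence box is kept as a stand-in, and the IoU and aspect-ratio thresholds are illustrative assumptions rather than the paper's values.

```python
# Sketch of duplicate and partial-head removal; thresholds are assumed.
def iou(a, b):
    """Intersection over Union of two boxes given as (x, y, width, height)."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def prune_head_candidates(boxes, scores, iou_thr=0.5, max_aspect=2.0):
    # Keep the higher-scoring box of any overlapping pair (IoU > iou_thr),
    # then drop boxes that are too wide relative to their height, which
    # suggests only part of a head is visible at the image edge.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept
            if boxes[i][2] / boxes[i][3] <= max_aspect]
```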

2.1.2. Individual Identification Using Face Region Images of Cow

In this study, we used transfer learning for individual identification from the face region of cows. Specifically, we first detected the head region of a cow using the YOLO detector and cropped the detected head region as the cow’s face region image [10]. An example of a cow face region image is shown in Figure 5; this image is then input into a network model retrained through transfer learning for individual identification.
To apply transfer learning, we first created a folder of face region images for each cow; Figure 5 shows some sample cow faces, and the number of face region images per cow ear tag number is given in Table 3. Each ear tag carries 10 digits, of which the four enlarged digits were used as the individual identification number naming each cow’s folder. We then performed re-training using the face images of the cows labeled with the enlarged four digits of the ear tags. This process produced a classifier: when a cow’s face region image is input, the classifier predicts and outputs the enlarged four digits of the ear tag in the image.
To perform the transfer learning, we used five previously trained models: SqueezeNet [11], ResNet-18 [12], ResNet-50 [12], ResNet-101 [12], and Inception-v3 [13]. By fine-tuning these pre-trained models on our specific data set, we were able to leverage the already learned features and significantly reduce the amount of training data needed for our individual identification task.
By applying transfer learning, we were able to achieve better individual identification accuracy with a smaller amount of data than training from scratch.
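As an illustration of this fine-tuning step, the sketch below replaces the classification head of a pre-trained ResNet-18. It is a minimal sketch assuming a PyTorch/torchvision setup (the paper does not state its framework), and all hyperparameters shown are illustrative.

```python
# Transfer-learning sketch: retrain a pre-trained ResNet-18 to predict
# one identity class per cow. Framework and hyperparameters are assumed.
import torch
import torch.nn as nn
from torchvision import models

NUM_COWS = 21  # number of identity classes (Table 3)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_COWS)  # new classifier head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One re-training step on a batch of cow face region images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```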

2.2. Feeding Time Estimation

In this section, we describe a proposed method for detecting feeding behavior from the head region obtained in Section 2.1.1 and estimating the feeding time of cows. The proposed algorithm for estimating the feeding time of cows is shown in Figure 6.

2.2.1. Preprocessing

To estimate the feeding time of cows, the first step is to detect their feeding behavior. In this study, a straight line around the lower part of the fence at the feeding area in the barn was applied as the standard for detecting feeding behavior. The straight line was detected using a Hough transform on an image cut out based on the bounding box of the cow head region detected in Section 2.1.1. Specifically, we used the OpenCV library to implement the Hough transform. In particular, we used the probabilistic Hough transform (PHT), which is a faster version of the classic Hough transform, as it only considers a subset of points that lie on a potential line segment. The PHT significantly reduces the processing time compared to the classic Hough transform, making it more suitable for real-time applications.
After cropping, the image is converted to grayscale and smoothed to reduce noise from the feed area; without smoothing, the scattered feed produces spurious lines in the Hough transform. Several types of smoothing filters are available, such as averaging, median, and Gaussian filters; because the smoothing must preserve edges so that the straight line around the lower part of the fence can still be detected as the feeding behavior criterion (and to avoid detecting straight lines in the background), we used a median filter. Although the effect of a median filter depends on its kernel size, in this study a kernel size of 9 × 9 was used. A Prewitt filter was then applied to detect horizontal edges, and morphology operations were used to enhance horizontal straight lines.
Specifically, we applied dilation and erosion operations to fill gaps and remove small unwanted details in the detected lines. This step was essential for obtaining clean, continuous horizontal segments before the subsequent line detection.
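The preprocessing chain described above can be sketched as follows. This is a minimal sketch assuming OpenCV; apart from the 9 × 9 median filter stated in the text, all kernel sizes and thresholds are illustrative assumptions, and OpenCV has no built-in Prewitt filter, so it is implemented via filter2D.

```python
# Preprocessing sketch: grayscale -> 9x9 median -> Prewitt horizontal
# edges -> morphological closing -> probabilistic Hough transform.
import cv2
import numpy as np

def detect_horizontal_lines(crop_bgr):
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    smooth = cv2.medianBlur(gray, 9)  # 9x9 median, edge-preserving

    # Prewitt kernel responding to horizontal edges (custom, via filter2D);
    # with ddepth=-1 the response is clipped to the uint8 range.
    prewitt_y = np.array([[-1, -1, -1],
                          [ 0,  0,  0],
                          [ 1,  1,  1]], dtype=np.float32)
    edges = cv2.filter2D(smooth, -1, prewitt_y)
    _, binary = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)

    # Closing (dilation then erosion) with a wide, flat kernel joins
    # horizontal runs and removes small gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Probabilistic Hough transform; returns segments (x1, y1, x2, y2),
    # or None if nothing is found.
    return cv2.HoughLinesP(closed, 1, np.pi / 180, threshold=80,
                           minLineLength=100, maxLineGap=10)
```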

2.2.2. Reference Line Detection for Feeding Behavior Detection

After enhancing the horizontal straight lines obtained in Section 2.2.1, we performed a Hough transform to detect some straight lines. Figure 7 displays an example image of a straight line detected through this process. To detect feeding behavior, we used the y-coordinate data of each detected line, as shown in Table 4. We identified the reference straight line by selecting the line whose y-coordinate corresponds to the median value of all detected lines.
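In code form, this median-based selection is a few lines; the sketch below assumes the segment array returned by the detect_horizontal_lines sketch above (non-empty, near-horizontal segments).

```python
def reference_line_y(lines):
    """Pick the reference line as the detected line whose y-coordinate is
    the median over all detected lines (lines has shape (N, 1, 4))."""
    ys = sorted((y1 + y2) / 2 for x1, y1, x2, y2 in lines[:, 0])
    return ys[len(ys) // 2]  # median y-coordinate
```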

2.2.3. Detection of Feeding Behavior

During feeding behavior, cows lower their heads towards the feeding area to eat. Accordingly, we judge a cow’s behavior as feeding when the bottom line of the bounding box enclosing its head region crosses the reference line detected in Section 2.2.2. An example of feeding behavior detection is shown in Figure 8, where the red line represents the reference line. Figure 8a presents an example where the bounding box around the cow’s head region does not cross the feeding behavior detection criterion line, resulting in a non-feeding judgment. In contrast, Figure 8b shows an example where the bounding box crosses the criterion line, leading to a feeding behavior decision.
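The decision reduces to a single comparison, sketched below; boxes follow the (x, y, width, height) convention of Equation (2), with the y-axis growing downward in image coordinates.

```python
def is_feeding(bbox, ref_y):
    """Feeding is judged when the bottom edge of the head bounding box
    reaches or crosses the reference line (larger y = lower in image)."""
    x, y, w, h = bbox
    return y + h >= ref_y
```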

2.2.4. Region Segmentation of Images and Creation of Feeding Time Matrices

To estimate the feeding time of individual cows, we divided the feeding area into multiple regions, assigned each cow to a specific region, and created a feeding time matrix for the total number of cows in the image. This approach allowed us to estimate the feeding time for each region and, subsequently, for each cow.
To estimate the feeding time of cows, we followed several steps. First, we input the maximum number of cows visible to the camera at the time of video input. Second, we created a feeding time matrix for that number of cows and divided the image into an equal number of regions. Third, we estimated the feeding time by mapping the feeding time matrix onto the divided regions. Equation (1) shows an example of a feeding time matrix, which assumes a maximum of five cows captured by the camera. Figure 9 presents an example of region segmentation, where each region corresponds to one cow.
$T = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \end{bmatrix}$ (1)
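A sketch of the matrix initialization and region segmentation might look as follows; NumPy is assumed, and the 1920-pixel frame width is taken from the input size in Table 1.

```python
# Region segmentation sketch: split the frame into equal vertical strips,
# one region per cow, and initialize the feeding time matrix to zeros.
import numpy as np

def make_region_bounds(image_width, num_cows):
    """Region i spans the x-interval [bounds[i], bounds[i + 1])."""
    return np.linspace(0, image_width, num_cows + 1)

feeding_time = np.zeros(5)                   # Equation (1): five cows
region_bounds = make_region_bounds(1920, 5)  # frame width from Table 1
```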

2.2.5. Bounding Box Existence Area Identification

After detecting the feeding behavior of cows, we identified the area where they were feeding from the location of the bounding box. The coordinate data of the bounding box are given in Equation (2), where x and y are the coordinates of the upper left corner of the bounding box, and width and height are its width and height, respectively.
$bbx\_Data = (x, y, width, height)$ (2)
From the coordinate data of the bounding box, the area where the bounding box existed was identified. Specifically, the center coordinates of the bounding box were calculated using x and width in Equation (2). The center coordinates of the bounding box were then compared with the range of x coordinates for each region, and the bounding box was identified as existing in the region contained within the range.
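This lookup can be sketched as below, reusing the region bounds from the previous sketch; the clamping of out-of-range centers is an added safeguard, not a step stated in the text.

```python
import numpy as np

def region_index(bbox, bounds):
    """Map a head bounding box to a region by its center x-coordinate."""
    x, y, width, height = bbox
    center_x = x + width / 2
    idx = int(np.searchsorted(bounds, center_x, side="right")) - 1
    return min(max(idx, 0), len(bounds) - 2)  # clamp to valid regions
```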

2.2.6. Feeding Time Estimation

The feeding behavior detection of Section 2.2.3 was performed in each of the regions divided in Section 2.2.4 to estimate the feeding time. When the feeding behavior of a cow was detected in a region, the corresponding element of the feeding time matrix was incremented. The increment was based on the frame rate of the video: for instance, at a frame rate of 10, 0.1 s is added to the feeding time each time feeding is detected in a frame. Figure 10 illustrates the feeding time addition for region 2 at a frame rate of 10. The feeding behavior of the cow in region 2 is detected from frame t − 1 in (a) to frame t in (b), and the element of the feeding time matrix corresponding to region 2 is incremented accordingly.
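Combining the helpers sketched in the previous subsections (is_feeding, region_index, and the feeding time matrix), the per-frame accumulation can be expressed as follows; this is illustrative only.

```python
FRAME_RATE = 10  # frames per second, as in the example above

def update_feeding_time(feeding_time, head_boxes, ref_y, bounds):
    """Add 1/FRAME_RATE seconds to a region's entry for every head box
    judged to be feeding in the current frame."""
    for bbox in head_boxes:
        if is_feeding(bbox, ref_y):
            feeding_time[region_index(bbox, bounds)] += 1.0 / FRAME_RATE
    return feeding_time
```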

3. Results

3.1. Cow Head Region Detection

Table 5 presents the results of the cow head region detection experiment on eight 5 min videos, indicating an average detection rate of 94.5% and an average correct response rate of 91.7%. When training the YOLO detector for head region detection, we used the head regions of 11 cows as training data; however, some of the videos used in the experiment included cows that were not part of the training data, and the detection rate was lower on such data. Specifically, testing video 8 in Table 5 had both undetected heads and false positive detections, with many false positives on cows’ necks. Because a neck detection often overlapped a head detection, the overlap-exclusion process described in Section 2.1.1 sometimes removed the head region along with the neck region, resulting in many undetected heads and false positives in testing video 8.

3.2. Individual Identification

Table 6 illustrates the results of the individual identification experiment, demonstrating high identification accuracy of more than 90% on average for all training models. We attribute the high accuracy to the cows’ typical forward-facing posture during feeding, which facilitates capturing distinctive features such as the pattern on their heads. However, when individual identification was attempted using face region images of cows with their heads turned to the side, identification sometimes failed. This was mainly due to the presence of two cows with entirely white heads, which caused several false identifications during the experiment.

3.3. Feeding Time Estimation

Table 7 presents the results of the feeding time estimation experiment. The ‘Visual’ column shows the time obtained through manual observation of the feeding behavior of cows in the verification video. The ‘Number of Detections’, ‘Number of Undetected’, and ‘False Positives’ columns give the counts of correctly detected, missed, and falsely detected feeding behaviors, respectively. The average correct response rate for feeding behavior detection during feeding time estimation was 82.9%. However, the success of detection depended on the position of the reference line, and the method sometimes failed to identify the feeding behavior of cows at the edge of the camera view. Figure 11 provides an example where the bounding box surrounding the head region of the first cow from the left failed to reach the reference line, leading to a missed detection. Similarly, some cows at the edge of the camera view extended their necks out of the camera’s range, and their feeding behavior went undetected because the head region was not detected.
We also acknowledge that there is room for further analysis to support the findings. In future research, we plan to conduct a more detailed statistical analysis of the data, including regression analysis to investigate any potential correlations between the feeding behavior and other variables. Additionally, we will explore the use of machine learning algorithms to enhance the accuracy of individual cow recognition. We believe these additional analyses will provide more in-depth insights and strengthen the validity of our findings.

4. Discussion

In this study, we proposed a system that utilizes image processing technology to identify individual cows and estimate their feeding time using face region images. The primary objective of this system is to enhance the efficiency of dairy farmers’ work and enable them to monitor the feeding time of cows accurately. We conducted two experiments to evaluate the proposed system. The first experiment involved a method for individual identification using transfer learning, which achieved an average accuracy of 99.3%. The second experiment focused on detecting the head region of cows, which is crucial for both individual identification and feeding time estimation. The results showed an average detection rate of 94.5% and an average correct response rate of 91.7%.
There are three major prospects. The first prospect is to improve the accuracy of head region detection. Although high accuracy was obtained in both the detection rate and the correct response rate of head region detection, as shown in Table 5, undetected head regions and false detections can still be found. We believe that further improving the accuracy of head region detection will enable a more accurate estimation of the feeding time of cows and improve the efficiency and productivity of work when managing and raising cows.
The second prospect is to improve the accuracy of feeding behavior detection. In this study, a straight line at the bottom of the fence in the dairy barn was detected and used as a reference line for detecting feeding behavior; when the cow’s head region crossed the reference line, the cow was judged to have engaged in feeding behavior. However, depending on the position of the reference line, feeding behavior may not be detected, as shown in Figure 11. Therefore, we believe that the accuracy of feeding behavior detection can be improved by extracting the feeding area and detecting feeding behavior from the overlap between the feeding area and the cow’s head region.
The third prospect is linkage with other camera data. In this experiment, multiple cameras were used to capture data, but feeding time was estimated from the input data alone, without coordination with the other cameras. Therefore, if a cow moved out of the shooting range of one camera, its feeding behavior could not be detected and its feeding time could not be estimated accurately. By linking data across cameras, feeding behavior missed by one camera could be detected by another, allowing feeding time to be estimated with high accuracy.
As a final thought, we would like to emphasize the capability of our proposed system to classify new cows without further training. When a new cow is added to the herd, its face region image will be input into the trained classifier, and the classifier will predict the earmark digits of the new cow based on its facial features. However, it is important to note that if the appearance of the new cow significantly differs from the cows in the training set, the classifier’s performance may be affected. In such cases, it is necessary to update the training set and retrain the classifier to ensure accurate identification. Our proposed system acknowledges this limitation and emphasizes the need to update the training set regularly to maintain the accuracy of the classifier’s performance.
In addition, we acknowledge that the use of morphology operations in Section 2.2.1 is inherently sensitive to changes in lighting conditions. To address this, we ensured that the lighting conditions during data collection were consistent and stable. Additionally, we performed experiments under different lighting conditions to evaluate the robustness of our method. However, we acknowledge that different lighting conditions may still affect the performance of our method to some extent. As a future direction, we plan to explore the use of alternative methods that are less sensitive to changes in lighting conditions, such as deep learning-based approaches that can learn to account for variations in lighting.
Furthermore, our experimental results indicate that there may be specific features that contributed to discrepancies in the accuracy of detection. However, these specific features have not been identified yet. This highlights the need for further research to determine the specific factors that impact the performance of the cow head region detection method. Therefore, future research could investigate these factors to improve the accuracy of the detection method.
We have identified several issues that require consideration in future works, including the small sample size, reliability of feeding behavior detection, lack of a comprehensive literature review, and processing time of the algorithm. Moving forward, we will explore ways to address these concerns in our future research.
In addition to the previous studies discussed, we have identified two more relevant works that are worth mentioning. The first work, proposed by Santosh Kumar and Sanjay Kumar Singh [14], presents a method for the automatic identification of cattle using muzzle point pattern recognition, which is similar to our approach of analyzing the facial region. However, the techniques employed in their work are different from ours, and we intend to further investigate and compare their method with ours in our future works.
The second work, presented by Yu et al. [15], introduces a method for monitoring dairy cow feeding behavior using edge computing and deep learning algorithms based on the characteristics of dairy cow feeding behavior. Although the themes of our work and theirs are similar, the approaches are different, and we aim to conduct a detailed comparison between our work and theirs in our future research.
Before concluding this discussion section, we would like to note comparisons with a few more papers related to our work. In [16], a deep learning re-identification network model, Global and Part Network (GPN), is proposed to identify individual cow faces. In our paper, we utilized a transfer learning approach and achieved higher accuracy. In our work, an average accuracy of 99.3% was achieved for the individual identification of dairy cows, which is higher by 2.24% than that of [17], where a convolutional neural network was employed for individual cow identification and monitoring feeding behaviors, and by 3.5% than that of [18]. Moreover, our work can be considered an extension of the individual cow identification system introduced in our previous paper [19], with improvements in several aspects.
In [20], the authors developed and piloted a method for improving recognition accuracy and recovering identity information by generating cow faces closer to the real identity, achieving a recognition accuracy of 94.92%, which is lower by 4.38% than ours. Another relevant paper is [21], a recently published cow individual identification method based on DeepOtsu and EfficientNet, achieving an average accuracy of 0.985 in the individual identification of dairy cows. This rate is lower by 0.8% than ours, and the methodology differs from ours.
Moreover, several papers related to our work have recently appeared in the literature, including studies on feeding behaviors around calving in dairy cattle [22] and on predicting the feed intake of cattle from jaw movement using a triaxial accelerometer [23]. Although their approaches differ from ours, feeding time estimation and feed intake are related, so a more detailed comparative analysis would be worthwhile [24,25].
In summary, our work achieved a high accuracy rate in the individual identification of dairy cows using the transfer learning approach and outperformed several existing methods. Our work is an extension of our previous paper, and we have made several improvements in different aspects. Although some recent papers have been published on cow individual identification and feeding behavior, our approach is different and further analysis could be done in future research.

5. Conclusions

In this paper, we proposed a method for identifying individual cows and estimating their feeding time using facial analysis. We utilized a YOLO detector to extract the cow head region from video images captured during feeding and employed transfer learning for cow identification. We conducted experiments in a medium-sized dairy farm and the results indicate that our method has the potential to address some of the challenges faced by the Japanese livestock industry, particularly by improving productivity and reducing the labor burden associated with managing large numbers of cows.
However, we acknowledge that there is a need for further analysis to support our findings. We identified several issues that need to be considered in future works, including the small sample size, reliability of feeding behavior detection, and processing time of the algorithm. Moving forward, we will explore ways to address these concerns in our future research.

Author Contributions

Conceptualization, Y.K., T.T.Z. and I.K.; methodology, Y.K. and T.T.Z.; software, Y.K.; validation, Y.K.; formal analysis, Y.K. and T.T.Z.; investigation, Y.K.; resources, T.T.Z. and I.K.; data curation, Y.K., T.T.Z. and I.K.; writing—original draft preparation, Y.K.; writing—review and editing, T.T.Z.; visualization, Y.K.; supervision, T.T.Z.; project administration, Y.K. and T.T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This publication was subsidized by JKA through its promotion funds from KEIRIN RACE.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ministry of Agriculture, Forestry and Fisheries, Livestock Statistics Survey, Livestock Statistics, Dairy Cattle, and Beef Cattle, Number of Houses and Number of Cattle. Available online: https://www.maff.go.jp/j/tokei/kouhyou/tikusan/ (accessed on 15 February 2023).
  2. Present Situation of Japanese Dairy Farming (National Survey). Available online: https://www.dairy.co.jp/news/kulbvq000000mybw-img/kulbvq000000myd8.pdf (accessed on 15 February 2023).
  3. Zin, T.T.; Misawa, S.; Pwint, M.Z.; Thant, S.; Seint, P.T.; Sumi, K.; Yoshida, K. Cow Identification System using Ear Tag Recognition. In Proceedings of the 2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech), Kyoto, Japan, 10–12 March 2020. [Google Scholar] [CrossRef]
  4. González, L.A.; Tolkamp, B.J.; Coffey, M.P.; Ferret, A.; Kyriazakis, I. Changes in Feeding Behavior as Possible Indicators for the Automatic Monitoring of Health Disorders in Dairy Cows. J. Dairy Sci. 2008, 91, 1017–1028. [Google Scholar] [CrossRef] [PubMed]
  5. Bao, J.; Giller, P.S. Observations on the changes in behavioral activities of dairy cows prior to and after parturition. Ir. Vet. J. 1991, 44, 43–47. [Google Scholar]
  6. Schirmann, K.; Chapinal, N.; Weary, D.M.; Vickers, L.; Von Keyserlingk, M.A.G. Rumination and feeding behavior before and after calving in dairy cows. J. Dairy Sci. 2013, 96, 7088–7092. [Google Scholar] [CrossRef]
  7. Büchel, S.; Sundrum, A. Decrease in rumination time as an indicator of the onset of calving. J. Dairy Sci. 2014, 97, 3120–3127. [Google Scholar] [CrossRef] [PubMed]
  8. Shiiya, K.; Otsuka, F.; Zin, T.T.; Kobayashi, I. Image-Based Feeding Behavior Detection for Dairy Cow. In Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 15–18 October 2019. [Google Scholar] [CrossRef]
  9. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv 2016, arXiv:1506.02640. [Google Scholar]
  10. Kawagoe, Y.; Zin, T.T.; Kobayashi, I. Individual Identification of Cow Using Image Processing Techniques. In Proceedings of the 2022 IEEE 4th Global Conference on Life Sciences and Technologies (Life Tech), Osaka, Japan, 7–9 March 2022. [Google Scholar] [CrossRef]
  11. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  12. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
  13. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. arXiv 2015, arXiv:1512.00567. [Google Scholar]
  14. Kumar, S.; Singh, S.K. Automatic identification of cattle using muzzle point pattern: A hybrid feature extraction and classification paradigm. Multimed. Tools Appl. 2017, 76, 26551–26580. [Google Scholar] [CrossRef]
  15. Yu, Z.; Liu, Y.; Yu, S.; Wang, R.; Song, Z.; Yan, Y.; Li, F.; Wang, Z.; Tian, F. Automatic Detection Method of Dairy Cow Feeding Behaviour Based on YOLO Improved Model and Edge Computing. Sensors 2022, 22, 3271. [Google Scholar] [CrossRef] [PubMed]
  16. Chen, X.; Yang, T.; Mai, K.; Liu, C.; Xiong, J.; Kuang, Y.; Gao, Y. Holstein Cattle Face Re-Identification Unifying Global and Part Feature Deep Network with Attention Mechanism. Animals 2022, 12, 1047. [Google Scholar] [CrossRef] [PubMed]
  17. Achour, B.; Belkadi, M.; Filali, I.; Laghrouche, M.; Lahdir, M. Image analysis for individual identification and feeding behavior monitoring of dairy cows based on Convolutional Neural Networks (CNN). Biosyst. Eng. 2020, 198, 31–49. [Google Scholar] [CrossRef]
  18. Zhao, K.; Jin, X.; Ji, J.; Wang, J.; Ma, H.; Zhu, X. Individual identification of Holstein dairy cows based on detecting and matching feature points in body images. Biosyst. Eng. 2019, 181, 128–139. [Google Scholar] [CrossRef]
  19. Zin, T.T.; Phyo, C.N.; Tin, P.; Hama, H.; Kobayashi, I. Image technology based cow identification system using deep learning. In Proceedings of the International MultiConference of Engineers and Computer Scientists (IMECS2018), Hong Kong, China, 14–16 March 2018; pp. 320–323. [Google Scholar]
  20. Yang, Z.; Xiong, H.; Chen, X.; Liu, H.; Kuang, Y.; Gao, Y. Dairy cow tiny face recognition based on convolutional neural networks. In Proceedings of the 14th Chinese Conference on Biometric Recognition, Zhuzhou, China, 12–13 October 2019; pp. 216–222. [Google Scholar]
  21. Zhang, R.; Ji, J.; Zhao, K.; Wang, J.; Zhang, M.; Wang, M. A Cascaded Individual Cow Identification Method Based on DeepOtsu and EfficientNet. Agriculture 2023, 13, 279. [Google Scholar] [CrossRef]
  22. Antanaitis, R.; Anskienė, L.; Palubinskas, G.; Džermeikaitė, K.; Bačėninaitė, D.; Viora, L.; Rutkauskas, A. Ruminating, Eating, and Locomotion Behavior Registered by Innovative Technologies around Calving in Dairy Cows. Animals 2023, 13, 1257. [Google Scholar] [CrossRef] [PubMed]
  23. Ding, L.; Lv, Y.; Jiang, R.; Zhao, W.; Li, Q.; Yang, B.; Yu, L.; Ma, W.; Gao, R.; Yu, Q. Predicting the Feed Intake of Cattle Based on Jaw Movement Using a Triaxial Accelerometer. Agriculture 2022, 12, 899. [Google Scholar] [CrossRef]
  24. Bloch, V.; Frondelius, L.; Arcidiacono, C.; Mancino, M.; Pastell, M. Development and Analysis of a CNN- and Transfer-Learning-Based Classification Model for Automated Dairy Cow Feeding Behavior Recognition from Accelerometer Data. Sensors 2023, 23, 2611. [Google Scholar] [CrossRef] [PubMed]
  25. Pires, B.V.; Reolon, H.G.; Abduch, N.G.; Souza, L.L.; Sakamoto, L.S.; Mercadante, M.E.Z.; Silva, R.M.O.; Fragomeni, B.O.; Baldi, F.; Paz, C.C.P.; et al. Effects of Feeding and Drinking Behavior on Performance and Carcass Traits in Beef Cattle. Animals 2022, 12, 3196. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Algorithm for individual identification and feeding time estimation using cow face regions.
Figure 2. Algorithm for individual identification using face regions of a cow.
Figure 3. Input image.
Figure 4. Example of a detection result of cow head region.
Figure 5. Example of face region image of cows.
Figure 6. Algorithm for estimating the feeding time of cows.
Figure 7. Example of the linear detection result by Hough transform.
Figure 8. Example of feeding behavior detection. (a) Example of non-feeding decision; (b) example of feeding decision.
Figure 9. Example of region segmentation.
Figure 10. Example of additional feeding time. (a) Frame at time t − 1; (b) frame at time t.
Figure 11. Example of undetected feeding behavior.
Table 1. Details of the YOLO detector data set.
| Number of Images | Number of Cows That Were Annotated | Input Size |
| 2819 | 11 | 1920 × 1080 |
Table 2. Details of the YOLO detector hyperparameters.
| Epoch | Minibatch Size | Learning Rate | Optimization Algorithm |
| 50 | 16 | 0.001 | Momentum SGD |
Table 3. The number of cow ear tag numbers and face region images used for individual identification.
| Cow Number | Number of Images | Cow Number | Number of Images |
| cow1 | 4117 | cow11 | 4321 |
| cow2 | 4287 | cow12 | 4383 |
| cow3 | 3896 | cow13 | 3063 |
| cow4 | 4349 | cow14 | 3440 |
| cow5 | 4246 | cow15 | 2262 |
| cow6 | 4132 | cow16 | 4295 |
| cow7 | 4257 | cow17 | 4235 |
| cow8 | 4227 | cow18 | 4351 |
| cow9 | 4113 | cow19 | 4164 |
| cow10 | 4158 | cow20 | 2495 |
|  |  | cow21 | 4379 |
Table 4. Example of y-coordinate data of detected lines.
| Detected Line | Y-Coordinate |
| 1 | 2 |
| 2 | 2 |
| 3 | 2 |
| … | … |
| 12 | 775 |
| 13 | 777 |
| 14 | 781 |
| 15 | 979 |
| 16 | 993 |
| … | … |
| 25 | 1562 |
| 26 | 1562 |
| 27 | 1562 |
Table 5. Results of head region detection.
| Video | Number of Detections | Number of Undetected | False Positives | Detection Rate (%) | Accuracy (%) |
| 1 | 988 | 8 | 32 | 99.2 | 96.1 |
| 2 | 1038 | 6 | 3 | 99.4 | 99.1 |
| 3 | 1202 | 64 | 3 | 94.9 | 94.7 |
| 4 | 1173 | 8 | 3 | 99.3 | 99.1 |
| 5 | 1106 | 28 | 110 | 91.0 | 88.9 |
| 6 | 1095 | 43 | 10 | 99.1 | 95.4 |
| 7 | 1133 | 50 | 86 | 92.9 | 89.3 |
| 8 | 1150 | 122 | 223 | 83.8 | 76.9 |
Table 6. Results of individual identification (accuracy, %).
| Video | SqueezeNet | ResNet-18 | ResNet-50 | ResNet-101 | Inception-v3 |
| 1 | 97.9 | 98.7 | 99.2 | 99.2 | 99.7 |
| 2 | 97.0 | 99.9 | 99.7 | 100 | 99.8 |
| 3 | 89.3 | 99.8 | 99.9 | 99.8 | 99.9 |
| 4 | 75.4 | 95.7 | 96.2 | 100 | 99.1 |
| 5 | 99.5 | 99.8 | 99.9 | 100 | 100 |
| 6 | 98.7 | 98.8 | 99.1 | 99.4 | 99.2 |
| 7 | 94.1 | 94.9 | 96.3 | 94.9 | 97.0 |
Table 7. Results of feeding time estimation.
| Video | Cow | Estimated Time (s) | Visual (s) | Number of Detections | Number of Undetected | False Positives | Accuracy (%) |
| 1 | J11 | 165 | 263 | 163 | 100 | 1 | 61.7 |
|  | 9709 | 201 | 246 | 240 | 6 | 0 | 97.6 |
|  | 9615 | 264 | 264 | 264 | 0 | 0 | 100 |
|  | 9592 | 193 | 254 | 180 | 74 | 0 | 70.9 |
| 2 | 9779 | 240 | 241 | 238 | 5 | 13 | 93.0 |
|  | 9817 | 255 | 262 | 255 | 0 | 7 | 97.3 |
|  | M60 | 223 | 263 | 223 | 0 | 41 | 84.5 |
|  | 9615 | 262 | 258 | 260 | 0 | 2 | 99.2 |
| 3 | 9779 | 215 | 299 | 215 | 57 | 28 | 71.7 |
|  | 9642 | 249 | 300 | 244 | 2 | 52 | 81.9 |
|  | M35 | 264 | 300 | 263 | 0 | 38 | 87.4 |
| 4 | M35 | 193 | 300 | 198 | 100 | 3 | 65.8 |
|  | 9592 | 267 | 300 | 267 | 5 | 27 | 89.3 |
|  | 9642 | 293 | 300 | 292 | 1 | 3 | 98.6 |
| 5 | 9718 | 288 | 288 | 288 | 0 | 0 | 100 |
|  | 9817 | 285 | 288 | 285 | 3 | 0 | 99.0 |
|  | 9707 | 277 | 288 | 266 | 22 | 0 | 92.4 |
|  | M60 | 238 | 288 | 212 | 0 | 74 | 74.1 |
| 6 | M40 | 226 | 288 | 226 | 49 | 0 | 82.2 |
|  | 9709 | 244 | 277 | 247 | 12 | 1 | 95.0 |
|  | 1396 | 264 | 279 | 264 | 5 | 6 | 96.0 |
|  | J11 | 180 | 286 | 177 | 101 | 2 | 63.2 |
| 7 | M69 | 213 | 300 | 211 | 88 | 2 | 70.1 |
|  | 9825 | 253 | 300 | 226 | 55 | 6 | 78.7 |
|  | 9641 | 231 | 300 | 231 | 0 | 70 | 76.7 |
|  | 9829 | 265 | 285 | 268 | 24 | 19 | 86.2 |
|  | M40 | 132 | 291 | 132 | 162 | 0 | 44.9 |
| 8 | 9641 | 175 | 290 | 165 | 109 | 0 | 60.2 |
|  | 9818 | 277 | 300 | 246 | 0 | 25 | 90.8 |
|  | 9707 | 266 | 291 | 227 | 2 | 72 | 75.4 |
|  | 9825 | 289 | 300 | 288 | 12 | 1 | 95.7 |
|  | 9829 | 236 | 290 | 236 | 54 | 1 | 81.1 |