Article

Cattle Facial Matching Recognition Algorithm Based on Multi-View Feature Fusion

Zhi Weng, Shaoqing Liu, Zhiqiang Zheng, Yong Zhang and Caili Gong

1 College of Electronic Information Engineering, Inner Mongolia University, Hohhot 010021, China
2 State Key Laboratory of Reproductive Regulation and Breeding of Grassland Livestock, Inner Mongolia University, Hohhot 010030, China
3 College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(1), 156; https://doi.org/10.3390/electronics12010156
Submission received: 5 December 2022 / Revised: 25 December 2022 / Accepted: 26 December 2022 / Published: 29 December 2022
(This article belongs to the Section Artificial Intelligence)

Abstract

In the process of collecting facial images of cattle in the field, some features of the collected images are lost because of the changeable posture of the cattle, which lowers recognition accuracy or makes recognition impossible. This paper first verifies the practical performance of the classical matching algorithms ORB, SURF, and SIFT in cattle face matching recognition. The experimental results show that these traditional matching algorithms perform poorly in terms of both matching accuracy and matching time. A new matching recognition model is therefore constructed. The model feeds facial data of the target cattle captured from different angles into feature extraction channels and, combined with the GMS (grid-based motion statistics) algorithm and the RANSAC (random sample consensus) algorithm, achieves accurate recognition of individual cattle with a simple and fast recognition process. The recognition accuracy of the model was 85.56% on the Holstein cow face dataset, 82.58% on the Simmental beef cattle dataset, and 80.73% on the mixed Holstein and Simmental dataset. The recognition model constructed in this study can achieve individual recognition of cattle in complex environments, is robust to the data being matched, and effectively reduces the effects of viewing-angle changes and partially missing features in cattle facial recognition.

1. Introduction

Animal biometrics has always been a very popular and promising field [1]. In the process of modern cattle breeding on a large scale, intelligent and refined breeding based on individual cattle has become an important development direction for scientific breeding [2], while individual cattle identification is the first step in the process of cattle research. In practical applications, cattle identification is the prerequisite and basis for the application of automation technology [3]; the information management of cattle, insurance loans and claims, disease control, selection and breeding, and loss recovery all require fast and accurate identification of individual cattle [4]. The aim of this study is to build a cattle individual identification model based on multi-angle cattle facial data, and to use phenotypic data to achieve contactless and accurate identification of cattle individuals.
Cattle faces have stable and distinct feature data; offer good universality, uniqueness, and scalability; and are easy to collect, so they have been widely studied and applied in the individual identification of cattle [5,6]. Kim et al. achieved the first recognition of Japanese black cattle using cattle face data combined with ideal memory networks [7]. This study demonstrated that determining individual cattle identity from cattle face images is achievable. Xia constructed a face description model based on local binary pattern (LBP) texture features and used principal component analysis (PCA) combined with sparse representation classification (SRC) [8]. Cai and Li proposed a cattle face recognition model that used LBP and extended LBP descriptors to achieve individual recognition of cattle [9]. Kumar et al. implemented facial recognition of beef cattle by combining traditional classical machine learning methods [10]. Zhao et al. used FAST (features from accelerated segment test), SIFT (scale-invariant feature transform), and FLANN (fast library for approximate nearest neighbors) to extract, describe, and match cattle feature points, respectively, and achieved good results in cattle recognition [11].
With the development of neural networks, deep-learning-based methods have been applied to individual cattle identification. Li et al. applied a lightweight modification of a neural network to individual recognition of bovine faces and tested the model on a Raspberry Pi; the experimental results show that the model performs well in both recognition accuracy and recognition speed [12]. Billah et al. used a target detection algorithm to detect faces and then used a multilayer convolutional network to classify the detection results [13]. Xu et al. combined a lightweight RetinaFace-mobilenet with additive angular margin loss (ArcFace) to achieve 91.3% accuracy and a recognition speed of 24 frames per second (fps) on a dataset of real scenes [14]. Xu et al. fused local binary patterns (LBP) with a capsule network to construct a C-LBP feature extractor and then introduced a self-attention module and an intermediate capsule layer to enhance the network's feature extraction ability and utilization efficiency; the model shows higher performance and stronger resilience in individual cow identification [15]. Weng et al. proposed a two-branch convolutional network for individual cattle recognition to address the difficulty caused by changes in cattle posture during face recognition and achieved excellent recognition results [16].
In summary, at this stage, contactless individual cattle recognition based on cattle faces mainly follows two approaches: traditional algorithms and deep learning methods. Traditional classical algorithms can achieve individual recognition by processing standardized data and adapting the algorithm, and they still perform well in complex contexts. However, these algorithms demand highly standardized data and lack adaptability to non-standard data in practical applications. Methods that combine cattle face data with deep learning achieve high recognition accuracy and fast recognition speed. However, the training stage of deep learning places demanding requirements on the amount and complexity of the data: a small amount of data or low data complexity will degrade the recognition results. Moreover, deep learning is computationally expensive and requires a powerful hardware environment, especially when training large network models on large data samples.
To address the characteristics of cattle face data and the need for recognition accuracy under complex viewing angles in practical applications, this paper extends the ORB (oriented FAST and rotated BRIEF) algorithm in depth. The model uses the idea of fusing and extracting features from multiple angles to enhance the robustness of the algorithm to the data to be recognized under complex angles and to effectively reduce the requirement for data normalization. Furthermore, by introducing mathematical statistics, the accuracy of the model is improved without a significant increase in identification time or computational cost, enabling fast and accurate identification of cattle.

2. Materials and Methods

The creation of the dataset and the experimental approach in the construction of the model are central to this study. The research idea is shown in Figure 1.

2.1. Cattle Facial Datasets

The data used in this study were obtained from the Heilinger Jiayu Breeding Cooperative in the Inner Mongolia Autonomous Region, as shown in Figure 1, with Simmental beef cattle and Holstein cows selected as the subjects for video data collection. To enhance the credibility and quality of the data, a variety of collection methods were used. On the one hand, raw video data of the cattle were captured manually with a capture device (a mobile phone or SLR camera) at a frame rate of 30 fps while the cattle were being fed. On the other hand, cameras were set up in the pasture to capture real-time facial data of cattle under production and living conditions. Data were collected from a total of 19 Simmental beef cattle and 22 Holstein cows, with facial data gathered from multiple angles as experimental data. The cattle facial data collection device is shown in Figure 2, where the numbers one to eight represent: (1) a mobile phone; (2) a mobile phone stand; (3) a combined steel frame; (4) isolation tape; (5) a webcam; (6) a spherical camera; (7) a solar panel; and (8) a support post.
The raw cattle data were collected as videos containing the facial features of the cattle, and each video file was decomposed into individual complete cattle images by video frame decomposition. The frames were then filtered to remove images that were blurred or had unclear textures due to movement, lighting, and similar factors. The selected cattle images were cropped, images with high similarity were removed, and the cattle face image datasets were built; a sketch of this frame decomposition and filtering step is given below. The Simmental cattle facial image dataset, beef-data, was established from a total of 19 Simmental cattle, with an average of 81 images per animal, totaling 1544 facial images. The Holstein cow face image dataset, cow-data, was created from a total of 22 Holstein cows, with an average of 130 face images per cow, for a total of 2862 cow face images. Mix-data, a mixed dataset of dairy and beef cattle, contains 4406 cattle facial images from a total of 41 cattle. Some of the data are shown in Figure 3.
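The paper does not give code for the frame decomposition and filtering step; the following is a minimal sketch of one plausible pipeline, where the sampling step, blur threshold, and file paths are illustrative assumptions. Sharpness is estimated here with the variance of the Laplacian, a common blur proxy.

```python
import cv2

def extract_frames(video_path, out_dir, step=15, blur_threshold=100.0):
    """Keep every `step`-th frame that is sharp enough (assumed criteria)."""
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Low Laplacian variance indicates motion blur / unclear texture.
            if cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold:
                cv2.imwrite(f"{out_dir}/frame_{saved:05d}.jpg", frame)
                saved += 1
        idx += 1
    cap.release()
    return saved
```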
The dataset is shown in Table 1.

2.2. Cattle Facial Image Feature Matching Individual Recognition Model

2.2.1. Image Feature Matching Algorithms

A two-branch matching algorithm is used for cattle identification. Image feature matching is a key technology in computer vision and has been widely applied to visual SLAM (simultaneous localization and mapping) [17,18], 3D reconstruction [19], image retrieval [20], and visual tracking [21], but has seen little use in individual recognition. Improving the robustness of image feature extraction and the accuracy and speed of feature matching is the focus of research in this field [22]. Taking the ORB algorithm as an example, feature matching consists of two main steps. The first step extracts the feature points of the reference image and the image to be matched; each feature point comprises two parts, a keypoint and a descriptor. The keypoint is the position of the feature point in the image, and the descriptor encodes the orientation of the keypoint and the surrounding pixel information. The second step compares the similarity of feature points based on the descriptor information to determine the matching pairs.
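A minimal OpenCV sketch of these two steps follows. The image file names are hypothetical, and nfeatures=10000 anticipates the setting reported in Section 3.1; this illustrates the generic ORB pipeline rather than the paper's exact code.

```python
import cv2

# Step 1: extract feature points (keypoints + binary descriptors) with ORB.
img1 = cv2.imread("cow_view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("cow_view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=10000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Step 2: compare descriptor similarity by Hamming distance to form pairs.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```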
This paper performs parallel recognition of target image data from two different angles on the basis of feature matching. Compared to single feature matching, an additional image input is used: the two images to be matched are processed by two independent feature matching modules, and the features are then integrated for comprehensive recognition. By processing features from two different angles of the same target individual, the impact of changes in cattle posture during data collection on recognition is reduced. The flow chart of the two-branch feature extraction is shown in Figure 4. Each feature extraction channel consists of a feature point extraction algorithm, a description algorithm, and a matching algorithm. Attention should be paid to the stability of the number of feature points extracted from each channel to ensure that the features of each image are fully captured. When selecting cattle facial images, the two matched images must together contain all the feature data of the cattle's face in order to achieve a good matching effect and matching accuracy, so this paper proposes a multi-angle data acquisition scheme.
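The paper does not publish its fusion rule, so the sketch below is only one way to read the two-branch design: the probe image is matched independently against two enrolled views of each candidate animal, and the two channels' match counts are pooled into a single similarity score (the summation and argmax are our assumptions).

```python
# Each enrolled animal contributes two descriptor sets, one per viewing angle.
def match_count(matcher, des_probe, des_view):
    if des_probe is None or des_view is None:
        return 0
    return len(matcher.match(des_probe, des_view))

def identify(matcher, des_probe, gallery):
    # gallery: {animal_id: (des_view_a, des_view_b)}; the two channels are
    # scored independently and pooled by summation (our assumption).
    scores = {
        animal_id: match_count(matcher, des_probe, des_a)
        + match_count(matcher, des_probe, des_b)
        for animal_id, (des_a, des_b) in gallery.items()
    }
    return max(scores, key=scores.get)
```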

2.2.2. GMS Algorithm

The ORB algorithm combined with the GMS algorithm can extract the feature points in two sets of images for fast matching and can eliminate a large number of mis-matched pairs in a short time. The ORB algorithm quickly creates feature vectors for the key feature points in an image. The vector contains feature information for each point, such as position, neighborhood, neighborhood diameter, feature direction, response strength, multi-scale information, and classification. By comparing the feature information of two feature points via the Hamming distance, they are combined into a matched pair, thus enabling the recognition of the same object in different images. In practical applications, however, when the same object undergoes changes in viewing angle and environment, image feature matching algorithms, including the ORB algorithm, all suffer from many false matches. Bian et al. proposed a statistical screening method for feature points based on grid motion statistics; the algorithm converts the motion smoothness constraint into a statistic for rejecting false matches, and its grid-based implementation enables fast computation, allowing the rapid screening of false matches during image matching [23].
A pair of images containing the same target exhibits feature correspondence. For a cattle face in an image, owing to the continuity and homogeneity of the overall motion of the face region, all points in the neighborhood of a corresponding feature point move with it, so a correctly matched feature point pair (which can be called a principal homonymous point) has other correct matching points within its neighborhood. As support for the reliability of the principal homonymous point, these neighboring points are called its support points. The GMS algorithm incorporates the information in the neighborhood of a feature point and uses a grid to count the number of support points in the neighborhood of the principal homonymous point (i.e., its score), which is used to determine whether the match is correct or incorrect.
The neighborhood of each feature point in the GMS algorithm is defined by the formula:

$$N_{ij} = \{ c_{ij} \mid c_{ij} \in C,\ \tau_1 < d(c_{ij}) < \tau_2 \}$$

In the formula, $c_{ij}$ is a feature matching pair, $C$ is the set of all feature point matching pairs, $d(c_{ij})$ represents the Hamming distance between the two points, and $\tau_1$ and $\tau_2$ are the threshold values.
The score $S_{ij}$ for each feature point $X_i$ is calculated as:

$$S_{ij} = \sum_{k=1}^{9} \left| N_{i_k j_k} \right|, \quad k = 1, 2, \ldots, 9$$

where $N_{i_k j_k}$ is the set of support points in the $k$-th cell of the neighborhood grid of feature point $X_i$, and $k$ indexes the nine-cell grid divided around $X_i$ (including its own central cell); that is, the score of $X_i$ is the total number of feature matching pairs contained in the nine neighborhood cells. The schematic diagram is shown in Figure 5.
In distinguishing between correct and incorrect matches, the approximate distribution of the scores can be modeled by a binomial distribution:

$$S_{ij} \sim \begin{cases} B(n, p_j), & X_i \text{ is an incorrect match} \\ B(n, p_i), & X_i \text{ is a correct match} \end{cases}$$

Here, $X_i$ denotes a specific feature point, $n$ denotes the average number of feature points (support points) per grid cell, $p_i$ denotes the probability that a support point falls in the corresponding region when the feature point is correctly matched (determined by the quality of the feature), and $p_j$ denotes that probability when the feature point is incorrectly matched. $p_j$ is usually small because incorrect matches are distributed almost randomly.
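In practice, the grid scoring above need not be implemented by hand: OpenCV's contrib module ships the original authors' GMS implementation as cv2.xfeatures2d.matchGMS. The following sketch (continuing from the ORB example in Section 2.2.1) shows an assumed usage; the withRotation and thresholdFactor settings are plausible choices on our part, not values reported in the paper.

```python
import cv2

# Continuing from the ORB sketch: kp1/des1 and kp2/des2 from img1 and img2.
# GMS expects plain one-way nearest-neighbour matches, so no cross-check here.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
raw_matches = bf.match(des1, des2)

# matchGMS grids both images and keeps only the matches whose neighborhood
# support score S_ij passes the statistical threshold (requires the
# opencv-contrib-python package).
gms_matches = cv2.xfeatures2d.matchGMS(
    img1.shape[:2][::-1],  # size of image 1 as (width, height)
    img2.shape[:2][::-1],  # size of image 2 as (width, height)
    kp1, kp2, raw_matches,
    withRotation=True,     # head pose varies between views (our assumption)
    withScale=False,
    thresholdFactor=6,     # OpenCV's default threshold factor
)
```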

2.2.3. The RANSAC Algorithm

The RANSAC (random sample consensus) algorithm uses an iterative approach to estimate the parameters of a mathematical model from a set of observed data containing outliers. The RANSAC algorithm is able to avoid the influence of noisy data on the results. In the feature matching process, the RANSAC algorithm works on all feature matching pairs. The correct matches are the inliers, and the incorrect matches are the outliers.
The main steps of the algorithm are as follows (a minimal sketch is given after the list).
  • A random sample of n points is drawn from the dataset N to construct the minimum sample set (initially, all n sampled points are assumed to be inliers);
  • A data model fitting the dataset N is constructed from the sample set n;
  • The dataset N is tested against this model, and the points that fit the model are counted as inliers, forming a new inlier set m;
  • The sample set n and the inlier set m are combined to construct a new data model fitting the dataset N;
  • Steps 1 to 4 are repeated, and the model with the largest inlier set m, i.e., the largest number of inliers, is kept as the best model. The inliers obtained under this model are the correct matches.
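One common concrete realization of these steps for feature matching is OpenCV's RANSAC-based homography estimation, sketched below as a continuation of the GMS example. The choice of a homography as the fitted model is our assumption; the paper specifies only the RANSAC procedure.

```python
import cv2
import numpy as np

# Fit a model to the GMS-filtered matches with RANSAC and keep the inliers.
if len(gms_matches) >= 4:  # a homography needs at least 4 correspondences
    src = np.float32([kp1[m.queryIdx].pt for m in gms_matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in gms_matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    # mask flags each match as inlier (1) or outlier (0); the inliers are
    # kept as the correct matches.
    inlier_matches = [m for m, keep in zip(gms_matches, mask.ravel()) if keep]
```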
In order to ensure that the true inlier set is found, the number of iterations must be large enough. Suppose the points are selected independently of each other, the probability that any measured point is an inlier is w, and p is the overall probability that the true inlier set is obtained within k iterations. The probability that all n points of a random sample are inliers in a given iteration is $w^n$ (n being the size of the minimal sample set). Then, after k iterations, the probability that the algorithm has failed to draw an all-inlier sample is:

$$1 - p = (1 - w^n)^k$$
Thus, the minimum number of iterations k required by the algorithm is:

$$k = \frac{\log(1 - p)}{\log(1 - w^n)}$$
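As a worked example of this bound (the parameter values are illustrative, not taken from the paper): with an inlier ratio w = 0.5 and a minimal sample size n = 4, reaching confidence p = 0.99 requires k = log(0.01)/log(1 - 0.5^4), about 72 iterations.

```python
import math

# Minimum RANSAC iterations k for confidence p, inlier ratio w, sample size n.
def ransac_iterations(p: float, w: float, n: int) -> int:
    return math.ceil(math.log(1 - p) / math.log(1 - w ** n))

print(ransac_iterations(p=0.99, w=0.5, n=4))  # -> 72
```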

3. Results

The experimental results and the model's evaluation metrics were studied and analyzed to allow a reliable assessment of model performance. We discuss the influence of the number of feature points on the matching results and carry out a comprehensive validation of the model in terms of runtime, accuracy, precision, recall, and F-measure.
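For reference, these metrics follow their standard definitions (the paper does not restate them), where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Recall} = \frac{TP}{TP + FN}, \qquad \text{F-measure} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$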

3.1. Experimental Environment and Parameter Settings

The experimental environment was a 64-bit Windows 10 system using the Python programming language. The computer hardware configuration comprised 32 GB of memory, an Intel(R) Core(TM) i9-9900K CPU @ 3.60 GHz (8 cores), and an NVIDIA Quadro P6000 (PNY Technologies Inc., Parsippany, NJ, USA) graphics card to accelerate image processing.
In order to exclude the impact of pixel differences on the experimental results, the image data resolution was processed to 640 × 480 (approximately 300,000 pixels) in conjunction with the actual pixel situation of the acquisition device during data collection. The number of feature points extracted was set to 10,000, taking into account the performance of all of the indicators of the experiment.

3.2. Number of Feature Points

During the experiments, it was found that increasing the number of feature points had a positive effect on the recognition accuracy. Therefore, experiments were run on the cow dataset, the beef cattle dataset, and the mixed dataset with between 5000 and 15,000 feature points, and the results are shown in Figure 6a–c. As the curves show, the trend of the model remains consistent across the three datasets, so we analyzed the three result graphs as a whole. Among the classical matching algorithms, the recognition accuracy of the ORB and SIFT algorithms on the three datasets is insensitive to the number of feature points: changing the number of feature points has almost no effect on their recognition accuracy. The algorithm constructed in this paper achieves its highest recognition accuracy at 10,000 feature points, with increases of 5.55%, 7.35%, and 4.69% in accuracy on the dairy cattle, beef cattle, and mixed datasets, respectively, compared to 5000 feature points. Beyond 10,000 points, changes in the number of feature points had essentially no effect on accuracy. Therefore, the number of feature points for the experiments was finally set at 10,000.

3.3. Results of Individual Cattle Identification

The experiments were conducted on the three cattle facial datasets cow-data, beef-data, and mix-data described in Section 2.1. The experiments compare the model constructed in this paper with the classical models, and the results are shown in Table 2. Among the classical models, the SURF algorithm gave the best recognition results, with 45.16%, 37.82%, and 30.65% recognition accuracy on the cow-data, beef-data, and mix-data datasets, respectively. The two-branch feature point extraction model improves on the ORB algorithm's feature extraction method, using the ORB algorithm to extract the target features and screening incorrect matching pairs with the grid motion statistics method and the RANSAC algorithm; its recognition accuracy improved by 54.17%, 55.07%, and 57.06% on the three datasets, respectively, compared to the ORB algorithm, a significant improvement.
The computational cost is also an important consideration in model selection, especially for recognition models with demanding real-time requirements, and it differs across hardware environments. We tested the computational cost of all models in the experimental environment of Section 3.1, and the results are shown in Table 2. Among the classical matching models, the ORB algorithm took the shortest time at 28.15 ms, while the SIFT method was the slowest at 140.32 ms, roughly 2.4 times the runtime of the SURF algorithm and five times that of the ORB algorithm. The computational cost of the model constructed in this paper is 78.83 ms, an increase of 50.68 ms over the ORB algorithm on which it improves, but still faster than the SIFT algorithm.
Table 3 compares the precision, recall, and F-measure of the above models on the mix-data dataset. The model screens out a large number of mis-matched points by introducing the GMS algorithm and then eliminates the few remaining outliers with the RANSAC algorithm. Compared with the ORB algorithm, the precision, recall, and F-measure are improved by 57.37%, 56.31%, and 56.85%, respectively. The classification ability of the model is therefore superior.

3.4. Impact of GMS and Random Sampling Consistency Algorithms on Accuracy

The experiments again used cow-data (the cow face dataset), beef-data (the beef cattle face dataset), and mix-data (the mixed dataset); the results, shown in Table 4, reflect the influence of the GMS algorithm and the RANSAC algorithm on recognition accuracy. The GMS algorithm, which grids the feature matching pairs and judges each pair jointly with the other matching pairs in its neighborhood, improved recognition accuracy by 43.01%, 47.22%, and 49.11% on the three datasets, respectively, screening incorrect matching pairs effectively. However, the feature matching pairs retained by the GMS algorithm still contained a small number of incorrect pairs. Further mathematical optimization of the remaining feature pairs by the RANSAC algorithm, which eliminates incorrect pairs that do not conform to the fitted model within a limited number of iterations, further improves recognition accuracy: compared to using GMS alone, accuracy improves by 6.2%, 5.38%, and 7.47% on the three datasets, respectively.

4. Discussion

At present, research on feature matching methods for individual recognition has mainly focused on human face recognition, while research on individual cattle recognition has mainly focused on deep learning algorithms. The multi-angle cattle face feature matching model proposed in this paper applies the feature matching method to the field of individual cattle recognition and achieves excellent recognition results on complex data, with good performance in terms of recognition accuracy and recognition speed. To a certain extent, it promotes the development of feature matching algorithms for individual animal recognition.
The multi-angle matching model achieved an accuracy of more than 80% on all three datasets, and the multi-angle data input design greatly increased the number of high-quality feature points extracted. At the same time, the number of feature points extracted was set to 10,000 per image through comparison experiments, which takes the image resolution into account and saves computational resources while ensuring that recognition accuracy is not affected. The model introduces the GMS algorithm, which uses motion smoothness statistics to eliminate most mis-matched pairs from the feature point population, removing most of the interference in model optimization; the remaining points are then refined with the optimal model fitted by the RANSAC algorithm, which eliminates outlier feature points and improves recognition accuracy.
The multi-angle feature fusion cattle facial matching recognition model proposed in this paper provides a new approach for applying feature matching algorithms to individual cattle identification and can be used to determine the identity of cattle without contact. Given its advantages for individual cattle identification, the model can be applied to the cattle insurance claims process as an important basis for identification. It can also be extended to the construction of smart farms, with practical significance for building electronic identity documents for cattle and for real-time cattle monitoring.

5. Conclusions

This paper investigates the application of computer vision technology to contactless individual recognition of cattle, analyzes the difficulties image-based methods face in cattle face recognition, and proposes a novel method of individual cattle recognition based on multi-angle cattle face data. The model combines the feature extraction algorithm's ability to capture data features with the statistical power of the grid motion statistics algorithm and the RANSAC algorithm, enabling mis-matched points to be screened out and the correct matching target to be obtained. The fusion of features from different angles achieves effective identification of target cattle on complex data. The model is compared with the SIFT, SURF, and ORB algorithms to evaluate its performance, and the results show that it meets the desired objectives in terms of recognition accuracy and recognition speed. This model provides a solution for the real-time identification of cattle; future research will focus on fusing features from multiple body parts to further enhance identification reliability and accuracy.

Author Contributions

Conceptualization, Z.W. and Z.Z.; methodology, Z.W. and Y.Z.; software, C.G.; validation, S.L.; formal analysis, Z.Z.; investigation, Z.Z.; writing—original draft, Z.W.; writing—review and editing, Z.Z. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Inner Mongolia Autonomous Region under Grants 2020MS06015 and 2021MS06014, and in part by the National Natural Science Foundation of China under Grant 61966026.

Data Availability Statement

Not applicable.

Acknowledgments

The authors appreciate the funding organizations for their financial support.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Kühl, H.S.; Burghardt, T. Animal biometrics: Quantifying and detecting phenotypic appearance. Trends Ecol. Evol. 2013, 28, 432–441.
  2. Nogoy, K.M.C.; Park, J.; Chon, S.I.; Sivamani, S.; Park, M.J.; Cho, J.P.; Hong, H.K.; Lee, D.H.; Choi, S.H. Precision detection of real-time conditions of dairy cows using an advanced artificial intelligence hub. Appl. Sci. 2021, 11, 12043.
  3. Yajuvendra, S.; Lathwal, S.S.; Rajput, N.; Raja, T.V.; Gupta, A.K.; Mohanty, T.K.; Ruhil, A.P.; Chakravarty, A.K.; Sharma, P.C.; Sharma, V.; et al. Effective and accurate discrimination of individual dairy cattle through acoustic sensing. Appl. Anim. Behav. Sci. 2013, 146, 11–18.
  4. He, D.J.; Liu, D.; Zhao, K.X. Review of perceiving animal information and behavior in precision livestock farming. Trans. Chin. Soc. Agric. Mach. 2016, 47, 231–244.
  5. Wang, H.; Qin, J.; Hou, Q.; Gong, S. Cattle face recognition method based on parameter transfer and deep learning. J. Phys. Conf. Ser. 2020, 1453, 012054.
  6. Qiao, Y.; Truman, M.; Sukkarieh, S. Cattle segmentation and contour extraction based on Mask R-CNN for precision livestock farming. Comput. Electron. Agric. 2019, 165, 104958.
  7. Kim, H.T.; Ikeda, Y.; Choi, H.L. The identification of Japanese black cattle by their faces. Asian-Australas. J. Anim. Sci. 2005, 18, 868–872.
  8. Xia, M.; Cai, C. Cattle face recognition using sparse representation classifier. ICIC Express Lett. Part B Appl. 2012, 3, 1499–1505.
  9. Cai, C.; Li, J. Cattle face recognition using local binary pattern descriptor. In Proceedings of the 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Kaohsiung, Taiwan, 29 October–1 November 2013; pp. 1–4.
  10. Kumar, S.; Singh, S.K.; Singh, R.; Singh, A.K. Recognition of cattle using face images. Anim. Biom. 2017, 1, 79–110.
  11. Zhao, K.; Jin, X.; Ji, J.; Wang, J.; Ma, H.; Zhu, X. Individual identification of Holstein dairy cows based on detecting and matching feature points in body images. Biosyst. Eng. 2019, 181, 128–139.
  12. Li, Z.; Lei, X.; Liu, S. A lightweight deep learning model for cattle face recognition. Comput. Electron. Agric. 2022, 195, 106848.
  13. Billah, M.; Wang, X.; Yu, J.; Jiang, Y. Real-time goat face recognition using convolutional neural network. Comput. Electron. Agric. 2022, 194, 106730.
  14. Xu, B.; Wang, W.; Guo, L.; Chen, G.; Li, Y.; Cao, Z.; Wu, S. CattleFaceNet: A cattle face identification approach based on RetinaFace and ArcFace loss. Comput. Electron. Agric. 2022, 193, 106675.
  15. Xu, F.; Pan, X.; Gao, J. Feature fusion capsule network for cow face recognition. J. Electron. Imaging 2022, 31, 061817.
  16. Weng, Z.; Meng, F.; Liu, S.; Zhang, Y.; Zheng, Z.; Gong, C. Cattle face recognition based on a two-branch convolutional neural network. Comput. Electron. Agric. 2022, 196, 106871.
  17. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
  18. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
  19. Chen, J.; Bautembach, D.; Izadi, S. Scalable real-time volumetric surface reconstruction. ACM Trans. Graph. 2013, 32, 113.
  20. Vadivukarassi, M.; Puviarasan, N.; Aruna, P. A framework of keyword based image retrieval using proposed Hog_Sift feature extraction method from Twitter dataset. Procedia Comput. Sci. 2018, 132, 1422–1431.
  21. Gauglitz, S.; Höllerer, T.; Turk, M. Evaluation of interest point detectors and feature descriptors for visual tracking. Int. J. Comput. Vis. 2011, 94, 335–360.
  22. Jia, K.; Chan, T.-H.; Zeng, Z.; Ma, Y. ROML: A robust feature correspondence approach for matching objects in a set of images. Int. J. Comput. Vis. 2016, 117, 173–197.
  23. Bian, J.; Lin, W.Y.; Matsushita, Y.; Yeung, S.K.; Nguyen, T.D.; Cheng, M.M. GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4181–4190.
Figure 1. Research methodology data flow diagram.
Figure 2. Cattle facial data collection device.
Figure 3. Bovine facial image data. (a) Facial images of Simmental cattle. (b) Facial images of Holstein cows.
Figure 4. Flow chart of multi-angle feature recognition.
Figure 5. Grid movement statistics.
Figure 6. Influence of the number of feature points on the recognition results. (a) Variation in feature points in the dairy cow dataset. (b) Variation in feature points in the beef cattle dataset. (c) Variation in feature points in the mixed dataset.
Table 1. Dataset data.

| Dataset   | No. of Subjects | No. of Face Images |
|-----------|-----------------|--------------------|
| Beef-data | 19              | 1544               |
| Cow-data  | 22              | 2862               |
| Mix-data  | 41              | 4406               |
Table 2. Cattle face recognition results.

| Model | Dataset   | Accuracy/% | Time/ms |
|-------|-----------|------------|---------|
| SIFT  | Cow-Data  | 36.98      | 140.32  |
| SIFT  | Beef-Data | 30.80      |         |
| SIFT  | Mix-Data  | 28.58      |         |
| SURF  | Cow-Data  | 45.16      | 58.01   |
| SURF  | Beef-Data | 37.82      |         |
| SURF  | Mix-Data  | 30.65      |         |
| ORB   | Cow-Data  | 31.39      | 28.15   |
| ORB   | Beef-Data | 27.51      |         |
| ORB   | Mix-Data  | 23.36      |         |
| OURS  | Cow-Data  | 85.56      | 78.83   |
| OURS  | Beef-Data | 82.58      |         |
| OURS  | Mix-Data  | 80.73      |         |
Table 3. Classification results of each model.

| Model | Precision/% | Recall/% | F-Measure/% |
|-------|-------------|----------|-------------|
| SIFT  | 28.68       | 29.01    | 28.84       |
| SURF  | 30.52       | 30.76    | 30.64       |
| ORB   | 23.39       | 24.52    | 23.94       |
| OURS  | 80.76       | 80.83    | 80.79       |
Table 4. Performance of the update method ("on" = module enabled, "\" = module disabled).

| Model | Two-Branch | GMS | RANSAC | Dataset   | Accuracy/% |
|-------|------------|-----|--------|-----------|------------|
| 0     | on         | \   | \      | Cow-Data  | 39.68      |
| 0     | on         | \   | \      | Beef-Data | 30.76      |
| 0     | on         | \   | \      | Mix-Data  | 28.57      |
| 1     | on         | on  | \      | Cow-Data  | 79.36      |
| 1     | on         | on  | \      | Beef-Data | 77.20      |
| 1     | on         | on  | \      | Mix-Data  | 73.26      |
| 2     | on         | \   | on     | Cow-Data  | 42.57      |
| 2     | on         | \   | on     | Beef-Data | 35.36      |
| 2     | on         | \   | on     | Mix-Data  | 31.62      |
| 3     | \          | on  | on     | Cow-Data  | 60.73      |
| 3     | \          | on  | on     | Beef-Data | 58.62      |
| 3     | \          | on  | on     | Mix-Data  | 54.39      |
| 4     | on         | on  | on     | Cow-Data  | 85.56      |
| 4     | on         | on  | on     | Beef-Data | 82.58      |
| 4     | on         | on  | on     | Mix-Data  | 80.73      |