Advances in Sensors, Big Data and Machine Learning in Intelligent Animal Farming

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Sensing and Imaging".

Viewed by 48739
Printed Edition Available!
A printed edition of this Special Issue is available.

Editors

Collection Editor
Australian Centre for Field Robotics, University of Sydney, Sydney 2006, Australia
Interests: agricultural robots; deep learning; multi-sensor fusion; pattern recognition

Collection Editor
College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China
Interests: image analysis and recognition; intelligent detecting and control; agricultural information technology; digital plant and virtual technology; animal health and welfare

Collection Editor
College of Engineering, China Agricultural University, Beijing 100083, China
Interests: field robotics; SLAM; robot audition; computer vision; machine learning

Topical Collection Information

Dear Colleagues,

Animal products (milk, meat, and eggs) are among the most valuable sources of protein for humans. However, animal production faces several challenges worldwide, such as environmental impact and animal welfare and health concerns. In animal farming, accurate and efficient monitoring of animal information and behavior helps to assess the physiological, health, and welfare status of animals and to identify sick or abnormal individuals, reducing economic losses and protecting animal welfare.

In recent years, interest in animal welfare has grown. Sensors, big data, machine learning, and artificial intelligence are now used to improve management efficiency, reduce production costs, and enhance animal welfare. Although these technologies still face challenges and limitations, their application and exploration on animal farms will greatly advance intelligent farm management.

Therefore, this Topical Collection invites original papers with novel contributions that apply technologies such as sensors, big data, machine learning, and artificial intelligence to animal behavior monitoring and recognition, environmental monitoring, health evaluation, and related areas, in order to promote intelligent and accurate animal farm management.

This Topical Collection will focus on (but is not limited to) the following topics:

  • Automatic animal detection and identification
  • Animal welfare and health monitoring
  • Sensors and sensing techniques for animal behavior recognition and detection
  • Sensors for automated animal and environmental management
  • Quantification of individual animal behavioral and physiological variations
  • Big data and machine learning for group behavior monitoring and evaluation
  • Real-time modeling of animal responses and decision-making
  • Sensor applications in precision livestock farming
  • Application of robots in animal farming

Dr. Yongliang Qiao
Dr. Lilong Chai
Prof. Dr. Dongjian He
Dr. Daobilige Su
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • animal farming
  • animal welfare
  • big data
  • behavior monitoring
  • animal identification
  • internet of things
  • deep learning
  • precision livestock farming

Published Papers (13 papers)

2022


23 pages, 3928 KiB  
Review
The Research Progress of Vision-Based Artificial Intelligence in Smart Pig Farming
by Shunli Wang, Honghua Jiang, Yongliang Qiao, Shuzhen Jiang, Huaiqin Lin and Qian Sun
Sensors 2022, 22(17), 6541; https://doi.org/10.3390/s22176541 - 30 Aug 2022
Cited by 20 | Viewed by 8336
Abstract
Pork accounts for an important proportion of livestock products. For pig farming, a lot of manpower, material resources and time are required to monitor pig health and welfare. As the number of pigs in farming increases, the continued use of traditional monitoring methods may cause stress and harm to pigs and farmers and affect pig health and welfare as well as farming economic output. In addition, the application of artificial intelligence has become a core part of smart pig farming. The precision pig farming system uses sensors such as cameras and radio frequency identification to monitor biometric information such as pig sound and pig behavior in real time and convert it into key indicators of pig health and welfare. By analyzing the key indicators, problems in pig health and welfare can be detected early, and timely intervention and treatment can be provided, which helps to improve the production and economic efficiency of pig farming. This paper reviews more than 150 papers on precision pig farming and summarizes and evaluates the application of artificial intelligence technologies to pig detection, tracking, behavior recognition and sound recognition. Finally, we summarize and discuss the opportunities and challenges of precision pig farming.

15 pages, 2417 KiB  
Article
Evaluation of an Active LF Tracking System and Data Processing Methods for Livestock Precision Farming in the Poultry Sector
by Camille Marie Montalcini, Bernhard Voelkl, Yamenah Gómez, Michael Gantner and Michael J. Toscano
Sensors 2022, 22(2), 659; https://doi.org/10.3390/s22020659 - 15 Jan 2022
Cited by 8 | Viewed by 2188
Abstract
Tracking technologies offer a way to monitor movement of many individuals over long time periods with minimal disturbances and could become a helpful tool for a variety of uses in animal agriculture, including health monitoring or selection of breeding traits that benefit welfare within intensive cage-free poultry farming. Herein, we present an active, low-frequency tracking system that distinguishes between five predefined zones within a commercial aviary. We aimed to evaluate both the processed and unprocessed datasets against a “ground truth” based on video observations. The two data processing methods aimed to filter false registrations, one with a simple deterministic approach and one with a tree-based classifier. We found the unprocessed data accurately determined birds’ presence/absence in each zone with an accuracy of 99% but overestimated the number of transitions taken by birds per zone, explaining only 23% of the actual variation. However, the two processed datasets were found to be suitable to monitor the number of transitions per individual, accounting for 91% and 99% of the actual variation, respectively. To further evaluate the tracking system, we estimated the error rate of registrations (by applying the classifier) in relation to three factors, which suggested a higher number of false registrations towards specific areas, periods with reduced humidity, and periods with reduced temperature. We concluded that the presented tracking system is well suited for commercial aviaries to measure individuals’ transitions and individuals’ presence/absence in predefined zones. Nonetheless, under these settings, data processing remains a necessary step in obtaining reliable data. For future work, we recommend the use of automatic calibration to improve the system’s performance and to envision finer movements.

2021


20 pages, 1905 KiB  
Article
Modular E-Collar for Animal Telemetry: An Animal-Centered Design Proposal
by Marta Siguín, Teresa Blanco, Federico Rossano and Roberto Casas
Sensors 2022, 22(1), 300; https://doi.org/10.3390/s22010300 - 31 Dec 2021
Cited by 1 | Viewed by 3078
Abstract
Animal telemetry is a subject of great potential and scientific interest, but it shows design-dependent problems related to price, flexibility and customization, autonomy, integration of elements, and structural design. The objective of this paper is to provide solutions, from the application of design, to cover the niches that we discovered by reviewing the scientific literature and studying the market. The design process followed to achieve the objective involved a development based on methodologies and basic design approaches focused on the experience of the human and also that of the animal. We present a modular collar that distributes electronic components across several connected compartments, powered by wirelessly rechargeable batteries. Its manufacture is based on 3D printing, which facilitates immediacy in adaptation and economic affordability. The modularity of the proposal allows for adapting the size of the modules to the components they house as well as selecting which specific modules are needed in a project. The homogeneous weight distribution improves the comfort of the animal and allows for a better integration of the elements of the collar. This device substantially improves the current offer of telemetry devices for farming animals, thanks to an animal-centered design process.

24 pages, 4626 KiB  
Article
Livestock Informatics Toolkit: A Case Study in Visually Characterizing Complex Behavioral Patterns across Multiple Sensor Platforms, Using Novel Unsupervised Machine Learning and Information Theoretic Approaches
by Catherine McVey, Fushing Hsieh, Diego Manriquez, Pablo Pinedo and Kristina Horback
Sensors 2022, 22(1), 1; https://doi.org/10.3390/s22010001 - 21 Dec 2021
Cited by 4 | Viewed by 3674
Abstract
Large and densely sampled sensor datasets can contain a range of complex stochastic structures that are difficult to accommodate in conventional linear models. This can confound attempts to build a more complete picture of an animal’s behavior by aggregating information across multiple asynchronous sensor platforms. The Livestock Informatics Toolkit (LIT) has been developed in R to better facilitate knowledge discovery of complex behavioral patterns across Precision Livestock Farming (PLF) data streams using novel unsupervised machine learning and information theoretic approaches. The utility of this analytical pipeline is demonstrated using data from a 6-month feed trial conducted on a closed herd of 185 mix-parity organic dairy cows. Insights into the tradeoffs between behaviors in time budgets acquired from ear tag accelerometer records were improved by augmenting conventional hierarchical clustering techniques with a novel simulation-based approach designed to mimic the complex error structures of sensor data. These simulations were then repurposed to compress the information in this data stream into robust empirically-determined encodings using a novel pruning algorithm. Nonparametric and semiparametric tests using mutual and pointwise information subsequently revealed complex nonlinear associations between encodings of overall time budgets and the order that cows entered the parlor to be milked.
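The mutual-information tests mentioned in this abstract can be illustrated with a minimal estimator over two categorical sequences. This is a generic sketch from the textbook definition, not the LIT implementation, and the example sequences are invented:

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (in bits) between two categorical sequences,
    estimated from the empirical joint distribution."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))  # joint probability
            if pxy > 0:
                px, py = np.mean(x == xv), np.mean(y == yv)  # marginals
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Identical sequences share all their information; independent ones share none.
a = [0, 0, 1, 1]
print(mutual_information(a, a))          # → 1.0 bit
print(mutual_information(a, [0, 1, 0, 1]))  # → 0.0
```

In practice, plug-in estimates like this are biased upward for small samples, which is one reason the authors pair them with simulation-based null distributions.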

18 pages, 4355 KiB  
Article
A Non-Invasive Millimetre-Wave Radar Sensor for Automated Behavioural Tracking in Precision Farming—Application to Sheep Husbandry
by Alexandre Dore, Cristian Pasquaretta, Dominique Henry, Edmond Ricard, Jean-François Bompa, Mathieu Bonneau, Alain Boissy, Dominique Hazard, Mathieu Lihoreau and Hervé Aubert
Sensors 2021, 21(23), 8140; https://doi.org/10.3390/s21238140 - 06 Dec 2021
Cited by 2 | Viewed by 2459
Abstract
The automated quantification of the behaviour of freely moving animals is increasingly needed in applied ethology. State-of-the-art approaches often require tags to identify animals, high computational power for data collection and processing, and are sensitive to environmental conditions, which limits their large-scale utilization, for instance in genetic selection programs of animal breeding. Here we introduce a new automated tracking system based on millimetre-wave radars for real-time, robust, and high-precision monitoring of untagged animals. In contrast to conventional video tracking systems, radar tracking requires low processing power, is independent of light variations and has more accurate estimations of animal positions due to a lower misdetection rate. To validate our approach, we monitored the movements of 58 sheep in a standard indoor behavioural test used for assessing social motivation. We derived new estimators from the radar data that can be used to improve the behavioural phenotyping of the sheep. We then showed how radars can be used for movement tracking at larger spatial scales, in the field, by adjusting operating frequency and radiated electromagnetic power. Millimetre-wave radars thus hold considerable promise for precision farming through high-throughput recording of the behaviour of untagged animals in different types of environments.

15 pages, 3776 KiB  
Article
Prediction of Cow Calving in Extensive Livestock Using a New Neck-Mounted Sensorized Wearable Device: A Pilot Study
by Carlos González-Sánchez, Guillermo Sánchez-Brizuela, Ana Cisnal, Juan-Carlos Fraile, Javier Pérez-Turiel and Eusebio de la Fuente-López
Sensors 2021, 21(23), 8060; https://doi.org/10.3390/s21238060 - 02 Dec 2021
Cited by 1 | Viewed by 3152
Abstract
In this study, a new low-cost neck-mounted sensorized wearable device is presented to help farmers detect the onset of calving in extensive livestock farming by continuously monitoring cow data. The device incorporates three sensors: an inertial measurement unit (IMU), a global navigation satellite system (GNSS) receiver, and a thermometer. The hypothesis of this study was that the onset of calving is detectable through analysis of the animal’s transitions between lying and standing (lying bouts). A new algorithm was developed to detect calving, analysing the frequency and duration of lying and standing postures. An important novelty is that the proposed algorithm has been designed with the aim of being executed in the embedded microcontroller housed in the cow’s collar and, therefore, it requires minimal computational resources while allowing for real-time data processing. In this preliminary study, six cows were monitored during different stages of gestation (before, during, and after calving), both with the sensorized wearable device and by human observers. The study was carried out on an extensive livestock farm in Salamanca (Spain) between August 2020 and July 2021. The preliminary results obtained indicate that lying-standing animal states and transitions may be useful to predict calving. Further research, with data obtained in future calving of cows, is required to refine the algorithm.
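The lying-bout counting at the core of this approach reduces to detecting standing-to-lying transitions in a posture sequence. The sketch below is illustrative only (the labels, sampling interval, and function name are not from the paper), but it shows how cheap the computation is for an embedded microcontroller:

```python
def count_lying_bouts(postures):
    """Count lying bouts in a sequence of per-sample posture labels.

    postures: iterable of "lying" / "standing" labels (e.g. one per minute,
    as classified from IMU data). A lying bout is a maximal run of
    consecutive "lying" labels."""
    bouts = 0
    previous = None
    for p in postures:
        if p == "lying" and previous != "lying":
            bouts += 1  # a standing-to-lying transition opens a new bout
        previous = p
    return bouts

# Example: three lying bouts in a 10-sample sequence
seq = ["standing", "lying", "lying", "standing", "lying",
       "standing", "standing", "lying", "lying", "lying"]
print(count_lying_bouts(seq))  # → 3
```

A calving alarm would then compare the bout frequency in a recent window against the cow's baseline, which is the kind of frequency/duration analysis the abstract describes.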

16 pages, 4642 KiB  
Article
Automated Processing and Phenotype Extraction of Ovine Medical Images Using a Combined Generative Adversarial Network and Computer Vision Pipeline
by James Francis Robson, Scott John Denholm and Mike Coffey
Sensors 2021, 21(21), 7268; https://doi.org/10.3390/s21217268 - 31 Oct 2021
Cited by 3 | Viewed by 2961
Abstract
The speed and accuracy of phenotype detection from medical images are some of the most important qualities needed for any informed and timely response such as early detection of cancer or detection of desirable phenotypes for animal breeding. To improve both these qualities, the world is leveraging artificial intelligence and machine learning against this challenge. Most recently, deep learning has successfully been applied to the medical field to improve detection accuracies and speed for conditions including cancer and COVID-19. In this study, we applied deep neural networks, in the form of a generative adversarial network (GAN), to perform image-to-image processing steps needed for ovine phenotype analysis from CT scans of sheep. Key phenotypes such as gigot geometry and tissue distribution were determined using a computer vision (CV) pipeline. The results of the image processing using a trained GAN are strikingly similar (a similarity index of 98%) when used on unseen test images. The combined GAN-CV pipeline was able to process and determine the phenotypes at a speed of 0.11 s per medical image compared to approximately 30 min for manual processing. We hope this pipeline represents the first step towards automated phenotype extraction for ovine genetic breeding programmes.

20 pages, 7426 KiB  
Article
A Cascaded Model Based on EfficientDet and YOLACT++ for Instance Segmentation of Cow Collar ID Tag in an Image
by Kaixuan Zhao, Ruihong Zhang and Jiangtao Ji
Sensors 2021, 21(20), 6734; https://doi.org/10.3390/s21206734 - 11 Oct 2021
Cited by 4 | Viewed by 2395
Abstract
In recent years, many imaging systems have been developed to monitor the physiological and behavioral status of dairy cows. However, most of these systems do not have the ability to identify individual cows because the systems need to cooperate with radio frequency identification (RFID) to collect information about individual animals. The distance at which RFID can identify a target is limited, and matching the identified targets in a scenario of multitarget images is difficult. To solve the above problems, we constructed a cascaded method based on deep learning models to detect and segment a cow collar ID tag in an image. First, EfficientDet-D4 was used to detect the ID tag area of the image, and then, YOLACT++ was used to segment the area of the tag to realize the accurate segmentation of the ID tag when the collar area accounts for a small proportion of the image. In total, 938 and 406 images of cows with collar ID tags, which were collected at Coldstream Research Dairy Farm, University of Kentucky, USA, in August 2016, were used to train and test the two models, respectively. The results showed that the average precision of the EfficientDet-D4 model reached 96.5% when the intersection over union (IoU) was set to 0.5, and the average precision of the YOLACT++ model reached 100% when the IoU was set to 0.75. The overall accuracy of the cascaded model was 96.5%, and the processing time of a single frame image was 1.92 s. The performance of the cascaded model proposed in this paper is better than that of the common instance segmentation models, and it is robust to changes in brightness, deformation, and interference around the tag.
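The IoU criterion used to score detections here (average precision at IoU 0.5 and 0.75) can be written directly. This is the standard definition with made-up example boxes, not code from the paper:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as correct when its IoU with ground truth meets the threshold
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.333… (50 / 150)
```

Raising the threshold from 0.5 to 0.75, as done for the segmentation stage, demands a much tighter overlap before a prediction is counted as a true positive.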

16 pages, 2392 KiB  
Article
An Absorbing Markov Chain Model to Predict Dairy Cow Calving Time
by Swe Zar Maw, Thi Thi Zin, Pyke Tin, Ikuo Kobayashi and Yoichiro Horii
Sensors 2021, 21(19), 6490; https://doi.org/10.3390/s21196490 - 28 Sep 2021
Cited by 11 | Viewed by 2386
Abstract
Abnormal behavioral changes in the regular daily mobility routine of a pregnant dairy cow can be an indicator or early sign to recognize when a calving event is imminent. Image processing technology and statistical approaches can be effectively used to achieve a more accurate result in predicting the time of calving. We hypothesize that data collected using a 360-degree camera to monitor cows before and during calving can be used to establish the daily activities of individual pregnant cows and to detect changes in their routine. In this study, we develop an augmented Markov chain model to predict calving time and better understand associated behavior. The objective of this study is to determine the feasibility of this calving time prediction system by adapting a simple Markov model for use on a typical dairy cow dataset. This augmented absorbing Markov chain model is based on a behavior-embedded transient Markov chain model for characterizing cow behavior patterns during the 48 h before calving and to predict the expected time of calving. In developing the model, we started with an embedded four-state Markov chain model, and then augmented that model by adding calving as both a transient state and an absorbing state. Then, using this model, we derive (1) the probability of calving at 2 h intervals after a reference point, and (2) the expected time of calving, using their motions between the different transient states. Finally, we present some experimental results for the performance of this model on the dairy farm compared with other machine learning techniques, showing that the proposed method is promising.
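The expected time to absorption in an absorbing Markov chain has a closed form via the fundamental matrix, which is the quantity this kind of model ultimately reports. The sketch below uses a toy three-state chain with invented probabilities (the paper uses four behavior states and its own estimated transition matrix):

```python
import numpy as np

# Toy absorbing Markov chain: three transient behaviour states and one
# absorbing state (calving). Q holds transient-to-transient probabilities;
# each row sums to less than 1, and the remainder is the probability of
# moving to the absorbing calving state.
Q = np.array([[0.6, 0.2, 0.1],
              [0.3, 0.5, 0.1],
              [0.2, 0.2, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visit counts
expected_steps = N @ np.ones(3)    # expected number of steps until absorption,
                                   # one entry per starting state
print(expected_steps)
```

If each step corresponds to a 2 h observation interval, multiplying `expected_steps` by 2 gives an expected time to calving in hours, mirroring the 2 h prediction intervals in the abstract.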

17 pages, 4401 KiB  
Article
Cross-Modality Interaction Network for Equine Activity Recognition Using Imbalanced Multi-Modal Data
by Axiu Mao, Endai Huang, Haiming Gan, Rebecca S. V. Parkes, Weitao Xu and Kai Liu
Sensors 2021, 21(17), 5818; https://doi.org/10.3390/s21175818 - 29 Aug 2021
Cited by 11 | Viewed by 3161
Abstract
With the recent advances in deep learning, wearable sensors have increasingly been used in automated animal activity recognition. However, there are two major challenges in improving recognition performance—multi-modal feature fusion and imbalanced data modeling. In this study, to improve classification performance for equine activities while tackling these two challenges, we developed a cross-modality interaction network (CMI-Net) involving a dual convolution neural network architecture and a cross-modality interaction module (CMIM). The CMIM adaptively recalibrated the temporal- and axis-wise features in each modality by leveraging multi-modal information to achieve deep intermodality interaction. A class-balanced (CB) focal loss was adopted to supervise the training of CMI-Net to alleviate the class imbalance problem. Motion data were acquired from six neck-attached inertial measurement units, one on each of six horses. The CMI-Net was trained and verified with leave-one-out cross-validation. The results demonstrated that our CMI-Net outperformed the existing algorithms with high precision (79.74%), recall (79.57%), F1-score (79.02%), and accuracy (93.37%). The adoption of CB focal loss improved the performance of CMI-Net, with increases of 2.76%, 4.16%, and 3.92% in precision, recall, and F1-score, respectively. In conclusion, CMI-Net and CB focal loss effectively enhanced the equine activity classification performance using imbalanced multi-modal sensor data.
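The class-balanced focal loss combines two standard ideas: per-class weights from the "effective number of samples" and the focal modulation that down-weights easy examples. The sketch below is a NumPy illustration of that combination; the beta/gamma values, counts, and probabilities are illustrative defaults, not the settings used for CMI-Net:

```python
import numpy as np

def cb_focal_loss(probs, labels, samples_per_class, beta=0.999, gamma=2.0):
    """Class-balanced focal loss.

    probs: (N, C) predicted class probabilities; labels: (N,) integer classes;
    samples_per_class: (C,) training counts per class."""
    # Class-balanced weights: (1 - beta) / (1 - beta^n_c), normalized to sum to C
    effective_num = 1.0 - np.power(beta, samples_per_class)
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * len(samples_per_class)

    p_t = probs[np.arange(len(labels)), labels]    # probability of the true class
    focal = (1.0 - p_t) ** gamma * -np.log(p_t)    # focal modulation of cross-entropy
    return np.mean(weights[labels] * focal)

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 0])
print(cb_focal_loss(probs, labels, samples_per_class=np.array([900, 100])))
```

Because the minority class (100 samples here) receives a larger weight, its misclassifications dominate the gradient, which is how the loss counteracts class imbalance during training.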

17 pages, 5004 KiB  
Article
Classifying Ingestive Behavior of Dairy Cows via Automatic Sound Recognition
by Guoming Li, Yijie Xiong, Qian Du, Zhengxiang Shi and Richard S. Gates
Sensors 2021, 21(15), 5231; https://doi.org/10.3390/s21155231 - 02 Aug 2021
Cited by 13 | Viewed by 2435
Abstract
Determining ingestive behaviors of dairy cows is critical to evaluate their productivity and health status. The objectives of this research were to (1) develop the relationship between forage species/heights and sound characteristics of three different ingestive behaviors (bites, chews, and chew-bites); (2) comparatively evaluate three deep learning models and optimization strategies for classifying the three behaviors; and (3) examine the ability of deep learning modeling for classifying the three ingestive behaviors under various forage characteristics. The results show that the amplitude and duration of the bite, chew, and chew-bite sounds were mostly larger for tall forages (tall fescue and alfalfa) compared to their counterparts. The long short-term memory network using a filtered dataset with balanced duration and imbalanced audio files offered better performance than its counterparts. The best classification performance was over 0.93, and the best and poorest performance difference was 0.4–0.5 under different forage species and heights. In conclusion, the deep learning technique could classify the dairy cow ingestive behaviors but was unable to differentiate between them under some forage characteristics using acoustic signals. Thus, while the developed tool is useful to support precision dairy cow management, it requires further improvement.

17 pages, 6272 KiB  
Article
Automatic Detection and Segmentation for Group-Housed Pigs Based on PigMS R-CNN
by Shuqin Tu, Weijun Yuan, Yun Liang, Fan Wang and Hua Wan
Sensors 2021, 21(9), 3251; https://doi.org/10.3390/s21093251 - 07 May 2021
Cited by 21 | Viewed by 3309
Abstract
Instance segmentation is an accurate and reliable method to segment adhesive pigs’ images, and is critical for providing health and welfare information on individual pigs, such as body condition score, live weight, and activity behaviors in group-housed pig environments. In this paper, a PigMS R-CNN framework based on mask scoring R-CNN (MS R-CNN) is explored to segment adhesive pig areas in group-pig images, enabling the identification and localization of individual group-housed pigs. The PigMS R-CNN consists of three processes. First, a residual network of 101 layers, combined with the feature pyramid network (FPN), is used as a feature extraction network to obtain feature maps for input images. Then, according to these feature maps, the region candidate network generates the regions of interest (RoIs). Finally, for each RoI, we can obtain the location, classification, and segmentation results of detected pigs through the regression, category, and mask branches of the PigMS R-CNN head network. To avoid target pigs being missed and error detections in overlapping or stuck areas of group-housed pigs, the PigMS R-CNN framework uses soft non-maximum suppression (soft-NMS), replacing the traditional NMS in the post-processing selection of detected pigs. The MS R-CNN framework with traditional NMS obtains results with an F1 of 0.9228. By setting the soft-NMS threshold to 0.7 on PigMS R-CNN, detection of the target pigs achieves an F1 of 0.9374. The work explores a new instance segmentation method for adhesive group-housed pig images, which provides valuable exploration for vision-based, real-time automatic pig monitoring and welfare evaluation.
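The soft-NMS idea is easy to see in isolation: instead of deleting boxes that overlap the current best detection (as hard NMS does), decay their scores, so that two genuinely adjacent pigs are not collapsed into one detection. A minimal sketch of the linear variant with invented boxes and thresholds (not the paper's code):

```python
import numpy as np

def _iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

def linear_soft_nms(boxes, scores, iou_thresh=0.7, score_thresh=0.01):
    """Return kept box indices, highest score first, decaying overlaps."""
    scores = scores.astype(float).copy()
    idxs = list(range(len(scores)))
    keep = []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            ov = _iou(boxes[best], boxes[i])
            if ov > iou_thresh:
                scores[i] *= 1.0 - ov          # soft decay, not deletion
        idxs = [i for i in idxs if scores[i] >= score_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 0, 11, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(linear_soft_nms(boxes, scores))  # → [0, 2, 1]: the overlapping box survives
```

With hard NMS at the same threshold, box 1 would be removed outright; soft-NMS keeps it with a reduced score, which is exactly the behavior needed for pigs that are touching or overlapping in the image.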

13 pages, 4650 KiB  
Article
Pig Weight and Body Size Estimation Using a Multiple Output Regression Convolutional Neural Network: A Fast and Fully Automatic Method
by Jianlong Zhang, Yanrong Zhuang, Hengyi Ji and Guanghui Teng
Sensors 2021, 21(9), 3218; https://doi.org/10.3390/s21093218 - 06 May 2021
Cited by 35 | Viewed by 5741
Abstract
Pig weight and body size are important indicators for producers. Due to the increasing scale of pig farms, it is increasingly difficult for farmers to quickly and automatically obtain pig weight and body size. To address this problem, we focused on a multiple output regression convolutional neural network (CNN) to estimate pig weight and body size. DenseNet201, ResNet152 V2, Xception and MobileNet V2 were modified into multiple output regression CNNs and trained on modeling data. By comparing the estimated performance of each model on test data, modified Xception was selected as the optimal estimation model. Based on pig height, body shape, and contour, the mean absolute error (MAE) of the model to estimate body weight (BW), shoulder width (SW), shoulder height (SH), hip width (HW), hip height (HH), and body length (BL) were 1.16 kg, 0.33 cm, 1.23 cm, 0.38 cm, 0.66 cm, and 0.75 cm, respectively. The coefficient of determination (R2) value between the estimated and measured results was in the range of 0.9879–0.9973. Combined with the LabVIEW software development platform, this method can estimate pig weight and body size accurately, quickly, and automatically. This work contributes to the automatic management of pig farms.
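The per-trait MAE and R² figures reported above are straightforward to compute for any multiple output regression model. A minimal sketch with invented toy data (two traits, four animals), not the paper's dataset:

```python
import numpy as np

def mae_and_r2(y_true, y_pred):
    """Per-output MAE and coefficient of determination for a multiple
    output regression model (one column per trait, e.g. BW, SW, SH, ...)."""
    mae = np.mean(np.abs(y_true - y_pred), axis=0)
    ss_res = np.sum((y_true - y_pred) ** 2, axis=0)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2, axis=0)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, r2

# Illustrative numbers only: 4 pigs, 2 traits (weight in kg, length in cm)
y_true = np.array([[50.0, 100.0], [60.0, 110.0], [70.0, 120.0], [80.0, 130.0]])
y_pred = np.array([[51.0, 101.0], [59.0, 109.0], [71.0, 121.0], [79.0, 129.0]])
mae, r2 = mae_and_r2(y_true, y_pred)
print(mae)  # → [1. 1.]
print(r2)   # → [0.992 0.992]
```

Computing the metrics column-wise is what lets a single multi-output network report one MAE per trait, as in the BW/SW/SH/HW/HH/BL figures quoted above.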
