Peer-Review Record

BTENet: Back-Fat Thickness Estimation Network for Automated Grading of the Korean Commercial Pig

Electronics 2022, 11(9), 1296; https://doi.org/10.3390/electronics11091296
by Hyo-Jun Lee 1, Jong-Hyeon Baek 2, Young-Kuk Kim 2, Jun Heon Lee 3, Myungjae Lee 4, Wooju Park 4, Seung Hwan Lee 3,* and Yeong Jun Koh 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 29 March 2022 / Revised: 18 April 2022 / Accepted: 18 April 2022 / Published: 19 April 2022
(This article belongs to the Topic Machine and Deep Learning)

Round 1

Reviewer 1 Report

This is a resubmitted manuscript. In the new version, the authors improved the description of the technique used, explained the architecture of the neural network in detail, and compared it with other image classification networks.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

This paper is very well written and introduces its motivation, methodology, and results in a well-organized order. The authors share their code publicly on GitHub. The research topic, utilizing a deep learning method to measure/estimate the back-fat thickness of the Korean commercial pig, is interesting. I am not quite sure whether it fits the publishing scope of the MDPI journal "Electronics", since it does not look very "electronics"-oriented.

In addition, I have several minor suggestions for the authors to further improve the readability for readers:

1) Some abbreviations need to be adjusted:  

Table 1: "Vali" --> "Validation"

Tables 2 and 3: "Cor" --> "Corr"

2) Figures 2, 5, and 6: It would be helpful for the authors to mark where/how the back-fat thickness was measured. Directly marking the back-fat thickness in Fig. 2 may help.

3) Just curious: does this paper's result (the deep learning method) work only for Korean commercial pigs? What about pigs from other countries?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

This manuscript presents automated grading of Korean commercial pigs using a deep learning method. Since the Korean pig grading system assigns primary grades to pig carcasses based on carcass weight, back-fat thickness, and sex, two networks are designed in this paper. There are three main contributions: (1) two large-scale image datasets of Korean commercial pigs, called PigBT and PigSEX, are built; (2) a back-fat thickness estimation network (BTENet) is built to predict the back-fat thickness of pig carcasses; (3) a sex classification network (SCNet) is built to classify the sex of pigs. However, both BTENet and SCNet are just simple combinations of existing deep learning frameworks. In all, the reviewer thinks that the innovation is insufficient and the authors' motivation is also not clear enough.
1. The following suggestions are proposed for the back-fat thickness prediction network:
(1) Please explain carefully why the back-fat thickness prediction network (BTENet) is divided into a segmentation module and a back-fat thickness prediction module, and what are their respective functions? What role does the introduction of the segmentation module play in the overall prediction performance?
(2) Please explain in detail why the back-fat thickness estimation network (BTENet) consists of a segmentation module and a thickness estimation module, and what is the function of each? What role does the introduction of the segmentation module play in the overall prediction performance?
(3) The reviewer suggests that the authors introduce the detailed network configurations of the segmentation module and the thickness estimation module in BTENet.
(4) In the experimental part, it is recommended that the authors first describe the relevant evaluation metrics and explain why those metrics were selected before introducing the experimental results. In addition, it is recommended to add ablation experiments to verify the impact of each module on the experimental results.
2. In the sex classification network, the authors treat sex prediction as a classification problem; that is, the sex of pigs has three categories: boars, females, and barrows. The manuscript simply uses the existing ResNet and MobileNet networks for pig sex classification. This is not an innovation from the perspective of image processing, but simply the use of existing classification networks.

Author Response

We thank the associate editor and the reviewers for their time and effort in reading our manuscript and for their constructive suggestions to improve the work. We read the comments carefully and tried to follow the suggestions as closely as possible. Our responses to the comments are given in the attached file.

Author Response File: Author Response.pdf

Reviewer 2 Report

SUMMARY

The article presents the developed system for estimating a back-fat thickness and determining the sex classes of pig carcasses. The authors apply deep neural networks (DNN), demonstrating a high technical level of DNN usage. Advanced models such as encoder-decoder U-Net and ResNet are used to create the proposed system. Finally, the article describes in detail the methodology and the results obtained.

 

COMMENTS

  1. Fig. does not show all arrows (arrows between Encoder 1 and Encoder 2, ..., Decoder 2 and Decoder 1, and between the FC layers). Since the audience of the journal "Electronics" is much broader than specialists in neural networks, such clarification could be helpful.
  2. The range of back-fat thickness is about 5-40 mm (as shown in Fig. 1). As a result, the absolute error (MAE = 1.339 mm) has a different degree of significance depending on the actual back-fat thickness. Therefore, the relative error (MAPE) should be added to Table 3.
  3. Table 1 shows that the dataset is unbalanced because the number of "Boar" samples is three times less than the numbers of "Female" and "Barrow." So Table 4 should present the performance metrics for each class.
  4.  The segmentation module improves accuracy, but the improvement is not critical since the average error changes by only 0.5 mm (Table 3). At the same time, it uses a complex DNN model. There is a risk that the model will not work well when conditions change, such as lighting, camera parameters, distance from the camera to the carcass, etc. Is the slight increase in accuracy worth increasing the model's complexity and introducing additional operational risks into the system?
  5. Did the authors consider more straightforward segmentation methods? Why did they choose the model based on U-Net?
  6. The article does not provide requirements for the equipment and operating conditions of the system, such as the camera or computing resources.
  7. It is unclear how flexible and versatile the system is. Does it require retraining when conditions change, such as changing the input image parameters (resolution, brightness, contrast, viewing angle, focusing, ...), changing the background, or changing the camera position (e.g., the distance between camera and carcass)? 
  8. Did the authors experiment with the color model (for example, use HSV instead of RGB)?
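As a side note on comment 2, the relationship between the absolute metric (MAE, in mm) and the relative metric (MAPE, unitless or in %) the reviewer requests can be sketched in a few lines. The thickness values below are hypothetical illustrations, not data from the paper:

```python
import numpy as np

# Hypothetical ground-truth and predicted back-fat thicknesses in mm;
# illustrative values only, not the paper's measurements.
y_true = np.array([12.0, 25.0, 8.0, 33.0, 18.0])
y_pred = np.array([13.1, 24.2, 9.0, 31.5, 18.4])

# Mean absolute error: carries the unit of the measurement (mm).
mae = np.mean(np.abs(y_true - y_pred))

# Mean absolute percentage error: normalizes each error by the true
# thickness, so a 1 mm miss on a thin carcass weighs more than on a thick one.
mape = np.mean(np.abs(y_true - y_pred) / y_true)

print(f"MAE  = {mae:.3f} mm")
print(f"MAPE = {mape * 100:.1f} %")
```

This makes the reviewer's point concrete: the same MAE corresponds to very different relative errors across the 5-40 mm thickness range.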

Author Response

We thank the associate editor and the reviewers for their time and effort in reading our manuscript and for their constructive suggestions to improve the work. We read the comments carefully and tried to follow the suggestions as closely as possible. Our responses to the comments are given in the attached file.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors did revise the manuscript according to the reviewer's comments. The responses to the first four points are improved and the explanations are clearer. However, regarding the fifth point, the authors themselves admit that SCNet is not novel but only applies an existing classification network to a specific classification task, where it achieves high performance. As far as I am concerned, the high performance on the classification task should be credited to the existing classification network rather than to the authors. I remain skeptical about the innovation of the article.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors answered all the questions raised in the first revision report. The manuscript has been improved according to the comments.

A single new remark:

  • It is better to indicate MAPE as a percentage, specifying the units of measurement (6.8% instead of 0.068). Similarly, for MAE, the unit of measure (mm) should be specified in the text and in Table 3.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

The reviewer has reviewed the authors' reply letter and still decides to reject the manuscript for the following reasons.
I think the major contributions of the submitted manuscript are: (1) BTENet is proposed to estimate back-fat thickness; (2) SCNet is adopted to classify pig sex. My opinions on these two contributions are as follows:
(1) As for the BTENet network, I admit that the proposed network has a certain degree of innovation, but for a journal, this innovation is not enough. In my opinion, this kind of innovation leans toward a simple combination of existing methods. Therefore, the innovation of this manuscript should not be accepted.
(2) As for the SCNet network, I acknowledge the novelty of its application: SCNet uses an existing deep convolutional neural network for pig sex prediction for the first time. However, I do not accept the use of existing classification networks for a new classification task as a contribution. In other words, the reviewer thinks that the second contribution is the establishment of the pig sex classification dataset rather than the use of SCNet for pig sex classification.
