Article
Peer-Review Record

Deep Reinforcement Learning for Efficient Digital Pap Smear Analysis

Computation 2023, 11(12), 252; https://doi.org/10.3390/computation11120252
by Carlos Macancela 1,*, Manuel Eugenio Morocho-Cayamcela 1 and Oscar Chang 2
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 18 October 2023 / Revised: 19 November 2023 / Accepted: 1 December 2023 / Published: 10 December 2023
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Biology)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This manuscript proposes a cervical cancer screening protocol based on deep learning. It utilizes liquid-based Pap smear images to create a deep reinforcement learning environment. A proximal policy optimization algorithm is used to identify cells with the help of rewards, punishments, and accumulated experience. The detected cells are then classified using a pre-trained convolutional neural network (Res-Net54).

 

In the part of the manuscript where the model results are described, the performance of the model is shown in Figures 18 and 19, but there is no clear description of the data division. In addition, validation accuracy alone is insufficient for evaluating model performance, and it is recommended to add other indicators. It is also suggested that the results of the manuscript be compared with those of related work.

Comments on the Quality of English Language

The manuscript is clearly structured and the language is clear.

Author Response

  1. Data Division Clarity:
    In response to your suggestion, we have updated the manuscript to include a detailed explanation of how the dataset was divided for training, validation, and testing: 80% of the dataset was dedicated to training and validation, while the remaining 20% was allocated to the testing set, ensuring a separate, untouched subset for evaluating the model's generalization performance on previously unseen data.
  2. Performance Metrics:
    Following your suggestion, we have incorporated metrics such as precision, recall, and F1 score in our assessment of the model's performance. This enriches the evaluation process, providing a more comprehensive and nuanced understanding of our model's effectiveness.
  3. Comparison with Related Work:
    To address this, we have included a thorough comparison of our findings with those of related studies. This comparison not only strengthens the significance of our contributions but also facilitates a more comprehensive understanding of the advancements made in the field.
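The 80/20 data division and the added precision/recall/F1 metrics described in the response above can be sketched as follows. This is a minimal, standard-library-only illustration; the sample handling and class labels are hypothetical placeholders, not the manuscript's actual pipeline:

```python
import random

def split_dataset(samples, test_frac=0.2, seed=42):
    """Shuffle and split: 80% for training/validation, 20% held-out test."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_test = int(len(samples) * test_frac)
    test = [samples[i] for i in idx[:n_test]]
    train_val = [samples[i] for i in idx[n_test:]]
    return train_val, test

def macro_prf1(y_true, y_pred, classes):
    """Macro-averaged precision, recall, and F1 over the given classes."""
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

Fixing the shuffle seed keeps the test subset reproducibly "untouched" across runs, which is what makes the held-out evaluation meaningful.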

Reviewer 2 Report

Comments and Suggestions for Authors

1. Enhance the main contributions of this article, which are provided at the end of the introduction section.

2. The literature review and related work section should be expanded to discuss more of the references used and how they support the design of the method.

3. Could you tell me the limitations of the proposed method? How will you solve them? Please add this part to the manuscript.

4. In the experiments, in order to validate the effectiveness of the proposed method, could the authors compare it with other CNN models?

5. What about the computational complexity of the proposed model? Authors should compare their model with recent deep learning models and explain the total parameters and FLOPs.

6. More experiments should be conducted using a different dataset to demonstrate generalization.

7. Hyperspectral and multispectral systems also play an important role in disease diagnosis, for example, “A stare-down video-rate high-throughput hyperspectral imaging system and its applications in biological sample sensing” and “Open-source mobile multispectral imaging system and its applications in biological sample sensing”. It is suggested that hyperspectral and multispectral systems be discussed.

Author Response

  1. Enhanced Contributions in Introduction:

    • We have revisited the introduction section and refined the articulation of the main contributions of our paper to provide a clearer and more impactful presentation.
  2. Expanded Literature Review:

    • The literature review and related work section have been expanded to include a more in-depth discussion of the references used, emphasizing how they support the design of our proposed method.
  3. Limitations and Mitigation:

    • We have included a dedicated section addressing the limitations of our proposed method and outlined strategies for mitigating these limitations. This addition enhances the transparency of our research and provides insights into our proactive approach to addressing challenges.
  4. Comparison with Other CNN Models:

    • We have incorporated a comparative analysis with other CNN models in the experiments section to validate the effectiveness of our proposed method. Additionally, the model was assessed with more metrics such as precision, recall, and F1 score.
  5. Computational Complexity:

    • We acknowledge your recommendation to include the computational complexity and FLOPs of our model. Unfortunately, we couldn't find related articles providing this information for comparison. As a result, we were unable to include this aspect in the manuscript.
  6. Generalization Experiments:

    • Regrettably, due to time constraints, we were unable to conduct experiments with another dataset to test generalization. However, we've emphasized the potential for future research to explore this aspect comprehensively.
  7. Discussion on Hyperspectral and Multispectral Systems:

    • We have added a new subsection titled "Hyperspectral and Multispectral Systems" to discuss their relevance in disease diagnosis. This section explores their applications, implementation, and limitations, drawing comparisons with our study.
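Regarding point 5 above: even without published figures from comparable work, total parameters and FLOPs can be estimated analytically from layer shapes. A minimal sketch follows; the layer dimensions are illustrative placeholders, not the actual architecture of the paper's model:

```python
def conv2d_stats(in_ch, out_ch, k, out_h, out_w):
    """Parameter count and multiply-accumulates (MACs) for a k x k conv layer."""
    params = out_ch * (in_ch * k * k + 1)          # weights + one bias per filter
    macs = out_h * out_w * out_ch * in_ch * k * k  # one MAC per kernel tap per output
    return params, macs

def dense_stats(in_features, out_features):
    """Parameter count and MACs for a fully connected layer."""
    params = out_features * (in_features + 1)      # weights + biases
    macs = in_features * out_features
    return params, macs

# Illustrative stack: two conv layers followed by a classifier head.
layers = [
    conv2d_stats(3, 32, 3, 64, 64),
    conv2d_stats(32, 64, 3, 32, 32),
    dense_stats(64 * 32 * 32, 4),  # four output classes, as in the paper
]
total_params = sum(p for p, _ in layers)
total_flops = 2 * sum(m for _, m in layers)        # convention: ~2 FLOPs per MAC
```

Reporting totals computed this way would allow a direct comparison with recent deep learning models even when related articles do not publish FLOP counts themselves.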

Reviewer 3 Report

Comments and Suggestions for Authors

The authors propose a solution based on reinforcement learning and CNNs to enable cervical cancer screening. The objective set by the authors is the development of an efficient, automated Papanicolaou analysis system that reduces human intervention in regions with limited access to pathology services.

The proposed approach is based on using two datasets (Section 1.1) and ROI agents that interact with the environment by performing actions (Table 1) to move through the digital Pap smear and collect a large number of cells during an episode.

The environment design is presented in Section 2, and it follows the specifications outlined by OpenAI Gym. The deep reinforcement learning environment therefore uses the OpenAI Gym API and the liquid-based Pap images.

An illustrative representation of the environment for searching for cells is given in Figure 4, and the pseudocode for the environment's feature extraction is explained in Algorithm 1 (Section 2.2).

Cells are identified using the rewards, penalties, and experience accumulated by these autonomous agents, which were trained with the proximal policy optimization (PPO) algorithm to develop a search behavior for locating cells in digital Pap smear images (Algorithm 2 in Section 2.5).

Section 3 presents the three stages of the training process, whose graphical representation is given in Figure 8.

The trained agents operate autonomously, utilizing cell recognition to scan their surroundings and identify cells in real time.

To ensure that the agent is located on a cell, a convolutional neural network (CNN) is employed (presented in Figure 6).

The classification of detected cells according to their malignancy potential is improved by using Res-Net54 (a pre-trained CNN), which classifies the detected cells into four distinct categories; the results are given in Table 4.

In conclusion, the presented results prove the effectiveness of the proposed solution: autonomous ROI agents scan their environment and identify cells, which are then classified into the four predefined categories (Figure 20).
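For readers unfamiliar with the PPO training mentioned above, the clipped surrogate objective at the heart of the algorithm can be sketched per sample as follows. This is a generic illustration of the standard PPO-Clip formula, not the authors' implementation:

```python
import math

def ppo_clip_objective(logp_new, logp_old, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate objective (to be maximized).

    ratio = pi_new(a|s) / pi_old(a|s); clipping the ratio to [1-eps, 1+eps]
    and taking the minimum keeps a single policy update from moving too far
    from the behavior policy that collected the experience.
    """
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped * advantage)
```

In practice this objective is averaged over a batch of (state, action, advantage) samples and maximized by gradient ascent on the policy parameters.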

Weakness

1.     A description of the structure of the work and of the research methodology should be included.

 

2.     The most recent references are from 2020. It would be advisable to refer to more recent publications as well.

3. It would be useful to provide access via GitHub to the code and to the second dataset, which was obtained from Topapanta's undergraduate thesis [11].

Author Response

  1. Description of Work Structure and Research Methodology:

    • We have included a detailed description of the structure of our work, providing clarity on the organization of the manuscript. Additionally, the research methodology is now presented explicitly in "Proposed Approach", offering readers a comprehensive understanding of our approach.
  2. Updated References:

    • We have revised the references to include the most current publications. This ensures that our work is anchored in the latest research in the field, contributing to the currency and relevance of our study.
  3. GitHub Access for Second Data Set and Code:

    • We acknowledge the importance of transparency and accessibility. This is the link to the repository:  https://github.com/CarlosJMB/Deep-Reinforcement-Learning-for-Efficient-Digital-Pap-Smear-Analysis.git

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

This manuscript has been revised, and the quality of the manuscript has been improved compared to the previous version. I think this manuscript is suitable for acceptance and publication.

Comments on the Quality of English Language

In the manuscript, the sentences are clearly expressed and the structure of the article is basically reasonable.

Reviewer 2 Report

Comments and Suggestions for Authors

Accept in present form