Peer-Review Record

MEAN: Multi-Edge Adaptation Network for Salient Object Detection Refinement

Electronics 2022, 11(12), 1855; https://doi.org/10.3390/electronics11121855
by Jing-Ming Guo 1,2 and Herleeyandi Markoni 1,2,*
Submission received: 2 May 2022 / Revised: 30 May 2022 / Accepted: 8 June 2022 / Published: 11 June 2022
(This article belongs to the Collection Image and Video Analysis and Understanding)

Round 1

Reviewer 1 Report

The paper presents a novel approach that uses the original image gradient as a guide to detect and refine the saliency result.

The related work is very poor. The authors need to strengthen the related work with recent papers.

There is no need to separate the sections: merge the FCN, attention-based mechanism, and refinement network discussions. The overall architecture of the proposed MEAN needs to be explained for each training mechanism. In the proposed model, the authors may add a subsection for each refinement method.

There are no details of the experimental setup used to validate the effect.

How many datasets are considered, and what were the selection criteria?

There are several formatting issues. Finally, please add future work.

Author Response

Thanks for the valuable comments and suggestions. We sincerely appreciate these suggestions, which enrich the clarity and readability of this paper. All of the responses are available in the attachment file. Please let us know if any part is still unsatisfactory, and we will be happy to address it accordingly.

Author Response File: Author Response.pdf

Reviewer 2 Report

The article is well written, with a long introduction and a description of the developments that guided the neural network (NN) research. It is fluent, just a little wordy, but worth reading in full.

Specific comments:
- As it covers many pages, it would also make sense to give some numbers for the computational time (fractions of a second, minutes, hours, ...) required by the algorithm to carry out its final edge detection. These could be normalized with respect to commodity computers.

- Please spell out the acronyms RNN, FCN, and CNN where they first appear in the paper.
- Table 1 reports numbers with differing precision (0.86, 0.8479, etc.). Please use a common number of digits so the comparisons are meaningful.
- Again for Table 2: the HKU-IS MAE is given with one significant digit, as in 0.03, or three, as in 0.129. These are not easily comparable since they are expressed with different precision. Please fill the table using the same number of significant digits.

Author Response

Thanks for the valuable comments and suggestions. We sincerely appreciate these suggestions, which enrich the clarity and readability of this paper. All of the responses are available in the attachment file. Please let us know if any part is still unsatisfactory, and we will be happy to address it accordingly.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The paper presents a novel approach that uses the original image gradient as a guide to detect and refine the saliency result.

The related work is very poor. The authors need to strengthen the related work with recent papers.

There is no need to separate the sections: merge the FCN, attention-based mechanism, and refinement network discussions. The overall architecture of the proposed MEAN needs to be explained for each training mechanism. In the proposed model, the authors may add a subsection for each refinement method.

There are no details of the experimental setup used to validate the effect.

How many datasets are considered, and what were the selection criteria?

There are several formatting issues. Finally, please add future work.

Author Response

All of the responses are available in the attachment file "reviewer1.docx".

Author Response File: Author Response.pdf
