Article
Peer-Review Record

A No-Reference Quality Assessment Method for Hyperspectral Sharpened Images via Benford’s Law

Remote Sens. 2024, 16(7), 1167; https://doi.org/10.3390/rs16071167
by Xiankun Hao 1, Xu Li 1,*, Jingying Wu 1, Baoguo Wei 1, Yujuan Song 2 and Bo Li 1
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Reviewer 5:
Reviewer 6: Anonymous
Submission received: 21 December 2023 / Revised: 19 March 2024 / Accepted: 25 March 2024 / Published: 27 March 2024
(This article belongs to the Special Issue Remote Sensing Data Fusion and Applications)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper tackles a very important problem of evaluating the quality of a pansharpened image in a reference-free manner. In particular, the authors focus on assessing the quality of pansharpened original (rather than simulated) HSIs. I have a few requests:

 

1) The problem is tightly related to real-world super-resolution and the evaluation of its outcome. See for example:

- Chen, Honggang, et al. "Real-world single image super-resolution: A brief review." Information Fusion 79 (2022): 124-145.

 

2) FR consistency can still be applied for assessing pansharpened real-world HSIs that are not accompanied by any HR reference. Please elaborate on that. Basically, I do not fully agree that the consistency assessment is an FR one.

 

3) A very interesting approach to evaluating the enhanced original data without an HR reference is a task-based approach, in which the enhanced image is used for more advanced data analysis. The authors should also include a discussion of that. See:

- Kawulok, Michal, et al. "Understanding the value of hyperspectral image super-resolution from PRISMA data." IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2023.

- Kelkar, Varun A., et al. "Task-based evaluation of deep image super-resolution in medical imaging." Medical Imaging 2021: Image Perception, Observer Performance, and Technology Assessment. Vol. 11599. SPIE, 2021.

- Razzak, Muhammed T., et al. "Multi-spectral multi-image super-resolution of Sentinel-2 with radiometric consistency losses and its effect on building delineation." ISPRS Journal of Photogrammetry and Remote Sensing 195 (2023): 1-13.

 

4) The authors should also look at no-reference metrics that were developed for natural images, like NIQE. See:

- Xu, Shaoping, Shunliang Jiang, and Weidong Min. "No-reference/blind image quality assessment: a survey." IETE Technical Review 34.3 (2017): 223-245.

 

5) Make sure all the abbreviations are explained on first use - FDD is not defined in the main text, just in the abstract. By the way, there are too many abbreviations in the abstract.

 

6) Explain the meaning of P(1), P(2), etc. in the tables. Also, explain the contents of the tables and figures thoroughly in the captions. 

 

7) In Section 4.2, all the groups are used for pansharpening, so the name of the first group is confusing (pansharpening-based methods).

 

8) The plots with the evaluation values (for nine metrics) could be aggregated so that the curves can be compared with each other. 

 

9) Although the experiments are quite extensive in terms of the number of reported scores, they are also superficial - they are limited to pansharpening three scenes using a battery of techniques. How about pansharpening a real-world scene (not a simulated one) to check the scores and compare them with the consistency? How about the sensitivity of the proposed technique to different distortions (a small illustrative sketch of such a distortion battery follows the reference below)? See for example the test performed in:

- Sheikh, Hamid R., Muhammad F. Sabir, and Alan C. Bovik. "A statistical evaluation of recent full reference image quality assessment algorithms." IEEE Transactions on Image Processing 15.11 (2006): 3440-3451.
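To make the suggestion in point 9 concrete, the following is a minimal, purely illustrative sketch of the kind of distortion battery a sensitivity test could use, assuming a NumPy/SciPy environment and a fused hyperspectral cube stored as a (bands, rows, cols) array; the function name, distortion types, and strength levels are hypothetical and are not taken from the manuscript:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def distortion_battery(cube, rng=None):
    """Return distorted copies of a fused HS cube (bands, rows, cols) at
    increasing distortion strengths, for probing a no-reference metric."""
    rng = rng or np.random.default_rng(0)
    out = {"original": cube}
    # Spatial blur of increasing strength (band axis left untouched).
    for sigma in (0.5, 1.0, 2.0):
        out[f"blur_{sigma}"] = gaussian_filter(cube, sigma=(0, sigma, sigma))
    # Additive Gaussian noise at decreasing SNR.
    for snr_db in (40, 30, 20):
        noise_std = cube.std() / (10 ** (snr_db / 20))
        out[f"noise_{snr_db}dB"] = cube + rng.normal(0.0, noise_std, cube.shape)
    return out

# A sensitivity check would score every entry with the NR metric and verify
# that the score degrades monotonically with the distortion strength.
```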

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

In the manuscript, the authors propose a novel NR quality assessment method for HS sharpened images based on Benford’s law and carry out experiments to verify the effectiveness of the proposed method. Without the reference image, the authors detect fusion distortion by computing the first digit distribution (FDD) of three quality perception features in HS sharpened images, using the standard Benford’s law as a benchmark. The spectral, spatial, and overall quality of the fused HS images are evaluated using three quality perception features, namely FDDlf, FDDhf, and FDDQ, and the effectiveness is verified through comprehensive experiments. The proposed method is reliable and robust. The experimental design is reasonable, the research method and technical route adopted are clear, and the research results have high prediction accuracy. This study provides a new method to evaluate the quality of hyperspectral images. The technical terms of this manuscript are standardized. It is a paper of relatively high quality.
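For readers who want to see the idea in miniature, the snippet below is a small illustrative sketch (not the authors' code) of how an empirical first digit distribution can be computed and compared against the standard Benford's law with a Manhattan (L1) distance; the function names and the lognormal toy data are assumptions made here purely for the example:

```python
import numpy as np

def benford_pmf():
    """Benford's law: P(d) = log10(1 + 1/d) for first digits d = 1..9."""
    d = np.arange(1, 10)
    return np.log10(1.0 + 1.0 / d)

def first_digit_distribution(values, eps=1e-12):
    """Empirical first digit distribution (FDD) of the non-zero magnitudes in `values`."""
    v = np.abs(np.asarray(values, dtype=float).ravel())
    v = v[v > eps]
    # The leading digit of x is floor(x / 10**floor(log10(x))).
    digits = np.floor(v / 10.0 ** np.floor(np.log10(v))).astype(int)
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

def fdd_deviation(feature):
    """Manhattan (L1) distance between the empirical FDD and Benford's law."""
    return float(np.abs(first_digit_distribution(feature) - benford_pmf()).sum())

# Toy usage: data spanning several orders of magnitude stays close to Benford's
# law, so the deviation is small; strong fusion distortion would push it away.
toy_feature = np.random.default_rng(0).lognormal(mean=0.0, sigma=2.0, size=(128, 128))
print(fdd_deviation(toy_feature))
```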

Comments on the Quality of English Language

Minor changes should be made to grammar and some expressions to make the manuscript more readable.

Author Response

Thank you for your feedback. We have thoroughly reviewed the manuscript and made corrections to make it more readable.

Reviewer 3 Report

Comments and Suggestions for Authors

This work presents a novel method for no-reference quality assessment of hyperspectral sharpened images. The basis of the method is the use of Benford's law and a check that certain quantities computed from the sharpened hyperspectral image are consistent with this law. I find this approach very interesting, and the results obtained by the authors support it very nicely and lead to very interesting conclusions. From my point of view the article should be accepted, and it is practically ready for publication. I have added only a few points below for the authors to consider. The text is also clearly written and very understandable.

I have only some minor suggestions:

- On page 2, the number of bands in EnMAP is 224 (not 242 as written; probably a typo).

- I think the introduction does a good job, but perhaps I am missing a more intuitive explanation of why Benford's law should be expected to apply to the 3 quantities proposed by the authors. The references provided support the use of high-frequency coefficients, but the authors also use low-frequency coefficients and the Q-index. Intuitively, if these quantities extend over several orders of magnitude, one would expect Benford's law to apply to their first digits.

- In Section 3.2.2, no noise is added to the images. This is done in Section 3.2.1, but I thought it could also be interesting in this case. Or is there a reason why this would not be interesting for the high-frequency coefficients?

- In the overall Q_FDD (Equation 12), the FDD differences for the 3 quantities are all added, i.e., a Manhattan norm. This choice is not explained, and actually I would say that one does not expect all 3 quantities to work equally well (the results in Sections 3.2.1, 3.2.2 and 3.2.3 suggest that). Did the authors consider applying a weight different from 1 to each of the 3 quantities? (A small illustrative sketch of such a weighting appears after these points.)

- In Table 13 (page 25), in the column Q^{2n}, it seems that the second-best value is 0.8667 and it should be underlined; however, the value appears 2 times and it is underlined only for "proposed".

- I think the limitations of the work are very well summarized by the authors in Section 5.2. These are fair points, and I find it very good that the authors mention them here.
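Regarding the weighting question above, here is a minimal sketch of what a weighted variant of the combination could look like, assuming each FDD argument is the 9-bin empirical first digit distribution of one feature; the function name and default weights are hypothetical, and weights of (1, 1, 1) reduce to the plain unweighted sum described in the manuscript:

```python
import numpy as np

BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))  # P(d) for d = 1..9

def combined_q_fdd(fdd_lf, fdd_hf, fdd_q, weights=(1.0, 1.0, 1.0)):
    """Combine the per-feature FDD deviations into a single quality score.
    weights=(1, 1, 1) gives the unweighted Manhattan-style sum; unequal
    weights would let the better behaved features dominate the score."""
    deviations = [np.abs(np.asarray(f) - BENFORD).sum() for f in (fdd_lf, fdd_hf, fdd_q)]
    return float(np.dot(np.asarray(weights, dtype=float), deviations))
```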

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

The authors propose a novel no-reference quality assessment method based on Benford’s law for hyperspectral sharpened images. They also show comparisons with 10 other fusion methods on 3 datasets. The results are clearly presented and reproducible (a link to the code is also provided). I personally find the manuscript very well presented and very well explained in all the technical details. I do not have any additional comments.

Author Response

Thank you for your recognition of our work.

 

Reviewer 5 Report

Comments and Suggestions for Authors

Dear Authors,

I recommend accepting the paper due to its clear organization, well-written sections (problem description, literature review, and conclusion), and robust methodology. The use of state-of-the-art datasets and comprehensive evaluation metrics, both full-reference and no-reference, adds strength to the findings. The availability of the source code on GitHub enhances transparency and reproducibility.

Author Response

Thank you for your recognition of our work.

 

Reviewer 6 Report

Comments and Suggestions for Authors

04/03/2024

Dear authors,

 

In the manuscript "A No-Reference Quality Assessment Method for Hyperspectral Sharpened Images via Benford’s Law", you propose a novel no-reference (NR) quality assessment method based on Benford’s law for HS sharpened images.

General comments

The study is interesting and has some potential for the use of hyperspectral imagery. However, such manuscripts should be written in the third person. In the entire text, you use the word 'we' as many as 95 times. The authors' names appear below the title of the manuscript; therefore, everything written in it without references is understood to be your result. Because of that, this greatly irritates the reader, so you need to change this throughout the text.

The methodology is described and explained in detail. However, before the methodology, the image sets should be defined. In this way, the reader will more easily follow and understand what is to be presented in the work.

The Discussion is too short. You presented a handful of analyses and results over 26 pages, so they should be adequately discussed.

The Conclusion is too short and too general. In the Conclusion, you must interpret all the results in detail and specifically (nominally) highlight what you have proven in your tests, i.e., what the use is of what you have achieved through the experiments.

Specific comments (are in the manuscript)

- Line 11 - Such manuscripts should be written in the third person. Please change it throughout the text.

- Line 271 - In this chapter, you should introduce readers to the image sets.

- Lines 827-834 - This is a repetition of everything you did earlier and described in the text. This is not a Conclusion.

 

Best regards

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
