Peer-Review Record

A Method of Aerial Multi-Modal Image Registration for a Low-Visibility Approach Based on Virtual Reality Fusion

Appl. Sci. 2023, 13(6), 3396; https://doi.org/10.3390/app13063396
by Yuezhou Wu 1,*,† and Changjiang Liu 2,*,†
Reviewer 1:
Reviewer 2:
Reviewer 3:
Submission received: 19 February 2023 / Revised: 1 March 2023 / Accepted: 3 March 2023 / Published: 7 March 2023

Round 1

Reviewer 1 Report

Approved

Author Response

Thank you for your professional suggestions, which have greatly helped us improve the quality of our article. We have carefully addressed the comments put forward by the reviewer, and the revised parts are highlighted in blue.

Reviewer 2 Report

The article is very well written. Still, a few corrections/justifications are needed to make it technically stronger.

1) On what basis was the Canny edge detection algorithm chosen?

2) What the variables and/or constants in the related equation represent should be stated after the equation.

3) To what accuracy does the proposed method provide the approximate positions of the matching points in the image?

4) What is OSG imaging?

5) Equation (11) needs a brief explanation of its physical interpretation.

6) To prove the novelty of the proposed algorithm, provide a comparison table with state-of-the-art algorithms for the same approach.

7) Figure 6 needs more technical description and explanation. In particular, the coarse matching of the infrared and visible images should be elaborated in more detail.

8) Kindly prepare a final table of all the result parameters used to support the stated purpose of the article.

Author Response

Thank you for your careful review. We have checked and revised the points one by one. The revised parts of the text are highlighted in blue. Thank you again for your review.

1) On what basis was the Canny edge detection algorithm chosen?
The Canny edge detection algorithm uses a double-threshold (high/low) technique, which not only yields accurate edge points but also produces more continuous contours. Continuous contours are beneficial for extracting accurate curvature points in this paper.
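As an illustration of the double-threshold mechanism only (not the settings used in the paper), a minimal OpenCV sketch is given below; the input file name and the two threshold values are assumptions chosen for illustration.

import cv2

# Hypothetical input frame; the paper's actual data and preprocessing are not reproduced here.
img = cv2.imread("infrared_frame.png", cv2.IMREAD_GRAYSCALE)

# Canny keeps gradient magnitudes above the high threshold (150) as strong edges, and keeps
# magnitudes between the low (50) and high thresholds only if they connect to a strong edge,
# which is what favors continuous contours.
edges = cv2.Canny(img, 50, 150)
cv2.imwrite("edges.png", edges)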

2) What the variables and/or constants in the related equation represent should be stated after the equation.
Modified as suggested.

3) To what accuracy does the proposed method provide the approximate positions of the matching points in the image?
According to Equation (13), the error is related to the position. From our experimental data, the error is approximately 6-15 pixels.

4) What is OSG imaging?
OSG (OpenSceneGraph) is an open-source 3D engine. Given the virtual terrain model and the position and pose data, virtual images are generated with OSG that simulate what the pilot actually sees at that position.

5) Equation (11) needs a brief explanation of its physical interpretation.
In the virtual image generation proposed in this paper, the camera position is provided by GPS and the camera attitude by the attitude and heading reference system (AHRS). Equation (11) computes the theoretical position of the actual image point under the assumed position errors Δx, Δy, Δz and attitude errors Δγ, Δβ, Δα. From the perspective of computer vision, the position error is equivalent to a translation vector, and the attitude error is equivalent to the product of three rotation matrices, one about each coordinate axis.
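To make this interpretation concrete, a minimal sketch is given below. It is not the paper's Equation (11); the intrinsic matrix K, the point coordinates, and the error magnitudes are illustrative assumptions. It only shows the attitude error entering as a product of three axis rotations and the position error entering as a translation before a pinhole projection.

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def perturbed_projection(K, X_cam, d_pos, d_att):
    # Project a 3-D point X_cam (camera frame) after applying a position error
    # d_pos = (dx, dy, dz) and an attitude error d_att = (d_gamma, d_beta, d_alpha).
    dR = rot_z(d_att[2]) @ rot_y(d_att[1]) @ rot_x(d_att[0])  # product of three axis rotations
    X_err = dR @ X_cam + np.asarray(d_pos)                    # rotate, then translate by the position error
    x = K @ X_err                                             # pinhole projection with intrinsics K
    return x[:2] / x[2]                                       # pixel coordinates

# Hypothetical example: a point roughly 1 km ahead of the camera, small GPS/AHRS errors.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
X = np.array([5.0, -2.0, 1000.0])
print(perturbed_projection(K, X, d_pos=(0, 0, 0), d_att=(0, 0, 0)))                    # error-free pixel
print(perturbed_projection(K, X, d_pos=(2.0, 1.0, 3.0), d_att=(0.002, 0.001, 0.003)))  # perturbed pixel

The offset between the two printed pixel positions illustrates how position and attitude errors of realistic magnitude shift the projected image point.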

6) To prove the novelty of the proposed algorithm, provide a comparison table with state-of-the-art algorithms for the same approach.
The virtual-scene-based ROI guidance proposed in this paper uses the proposed feature extraction algorithm to register the infrared and visible images. We have found no similar work, so it is difficult to carry out a more extensive comparative analysis.

7) Figure 6 needs more technical description and explanation. In particular, the coarse matching of the infrared and visible images should be elaborated in more detail.
More detailed technical information has been added.

8) Kindly prepare a final table of all the result parameters used to support the stated purpose of the article.
The parameters used in the experiments are the same throughout, namely σ = √2, N = 64, η = 0.7, and r = 0.4.

Reviewer 3 Report

A few suggestions to improve this manuscript:

(1) When an abbreviation such as SIFT, SURF, ORB, or FAST appears in the manuscript for the first time, the full name should be given.

(2) Please double-check the expression of Eq. (3); the matrix dimensions are inconsistent.

(3) The paragraph below Eq. (6) needs to be double-checked and improved, e.g., for the missing spaces and extra commas.

(4) Eq. (8): The elements should be separated by commas, to avoid ambiguity.

(5) Figure 3: this figure is not clear, and should be revised and improved.

Author Response

Thank you for your contributions to improving the quality of the article. We have carefully revised the manuscript point by point. Thank you again for your review.

(1) When an abbreviation such as SIFT, SURF, ORB, or FAST appears in the manuscript for the first time, the full name should be given.
Modified as suggested.

(2) Please double-check the expression of Eq. (3); the matrix dimensions are inconsistent.
A description of the symbols has been added: R is 3×3, t is 3×1, and $0^T$ is 1×3. Accordingly, the matrices on the right-hand side of Equation (3) have sizes 3×3, 3×4, 4×4, and 4×1, respectively, so their product is a 3×1 matrix, matching the left-hand side of Equation (3).
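For readers who want to verify this dimension bookkeeping, a minimal NumPy sketch is given below; the matrix contents are random placeholders rather than the paper's actual matrices, and the roles assigned to the 3×4 and 4×4 factors are assumptions made only to match the shapes described above.

import numpy as np

K = np.random.rand(3, 3)                         # e.g., a 3x3 intrinsic matrix
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])    # a 3x4 projection matrix [I | 0]
M = np.random.rand(4, 4)                         # e.g., [R t; 0^T 1] with R 3x3, t 3x1, 0^T 1x3
X = np.random.rand(4, 1)                         # a homogeneous point, 4x1

print((K @ P0 @ M @ X).shape)                    # (3, 1), matching the left-hand side of Eq. (3)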

(3) The paragraph below Eq. (6) needs to be double-checked and improved, e.g., for the missing spaces and extra commas.
Modified as suggested.

(4) Eq. (8): The elements should be separated by commas, to avoid ambiguity.
Modified as suggested.

(5) Figure 3: this figure is not clear, and should be revised and improved.
The image quality of Figure 3 has been greatly improved so that it can now be seen clearly.

Round 2

Reviewer 2 Report

All the corrections are incorporated.

Reviewer 3 Report

This manuscript has been revised following my suggestions.
