Review
Peer-Review Record

A Survey of Multi-Focus Image Fusion Methods

Appl. Sci. 2022, 12(12), 6281; https://doi.org/10.3390/app12126281
by Youyong Zhou 1, Lingjie Yu 1, Chao Zhi 1, Chuwen Huang 1, Shuai Wang 1, Mengqiu Zhu 1, Zhenxia Ke 1, Zhongyuan Gao 1, Yuming Zhang 2,* and Sida Fu 3,*
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 27 May 2022 / Revised: 15 June 2022 / Accepted: 18 June 2022 / Published: 20 June 2022
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information, Volume III)

Round 1

Reviewer 1 Report

* Figure 3 should be clearer.

* In Section 2, the authors should describe the methods more scientifically.

* Add the citation "Detecting Third Umpire Decisions & Automated Scoring System of Cricket" in the "Convolutional neural network model" description 

 

Author Response

Dear reviewers,

 

Thank you very much for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. The manuscript has been carefully revised according to the reviewers’ valuable advice. The following is our point-by-point response to the comments; the revised manuscript, with changes indicated by yellow highlighting, has also been updated.

 

- Figure 3 should be clearer.

 

Response: Thank you for your valuable advice. We have redrawn Figure 3 to make it clearer and updated it in the article.

 

- In Section 2, the authors should describe the methods more scientifically.

 

Response: Thank you for your careful work and valuable advice. We have checked through Section 2 and revised the description to make it more scientific. We have also modified and added some relevant literature. The modified parts have been indicated with yellow highlighting in Section 2.

 

- Add the citation "Detecting Third Umpire Decisions & Automated Scoring System of Cricket" in the "Convolutional neural network model" description.

 

Response: Thank you for your valuable advice. We have added this citation at Line 293 in Section 2.3.1 as Reference [56].

 

Reviewer 2 Report

Reviewer's summary after reading the manuscript:

In the discipline of image fusion, multi-focus image fusion is an essential subfield that can successfully manage the problem of optical lens depth of field: two or more partially focused images are fused into a single fully focused image. An investigation into the details of the many approaches to multi-focus image fusion reveals that these approaches may be broken down into four categories: transform domain, boundary segmentation, deep learning techniques, and combination fusion methods. The evaluation criteria, both subjective and objective, are presented, and eight essential objective evaluation indicators are explained in great depth. This study analyzes and summarizes a number of exemplary strategies by comparing and contrasting them using a wide variety of Chinese and international sources of information. In the final few paragraphs of the paper, a synopsis of the most important problems that have arisen in multi-focus image fusion, as well as a forecast of its further development, is presented.

----------------------------------------

Dear authors, thank you for your manuscript. I enjoyed reading it. Presented are some suggestions to improve it:

(1) In the abstract section, on line 19 there is a typo error. It should be "technique" and not "technic". Please correct it. There are some other typo errors. Please kindly check the manuscript. 

(2) Many of the sentences are overly long and use too many conjunctions. Please kindly engage the services of a professional English language editor to check the manuscript. Please also rectify any typo errors or grammatical errors throughout the manuscript.

(3) As a literature review, the authors cannot just use the words "in recent years" throughout the manuscript. The years or the range of the years must be clearly stated in the main text of the manuscript. This can help future readers to get a better sense of time and context when the authors mention a particular research work.

(4) Please include a "Limitations" section to discuss what were the challenges faced, and how your team overcame those challenges. This would be very beneficial to the readers as they would be able to learn from your expert knowledge.

(5) To improve the impact and readership of your manuscript, the authors need to clearly articulate in the Abstract and in the Introduction sections the uniqueness or novelty of this article, and why or how it is different from other similar articles. 

(6) Please substantially expand your review work, and cite more of the journal papers published by MDPI.

(7) For the references, instead of formatting "by-hand", please kindly consider using the free Zotero software (https://www.zotero.org/), and select "Multidisciplinary Digital Publishing Institute" as the citation format, since there are currently 61 citations in your manuscript, and there may probably be more once you have revised the manuscript.

Thank you.

 

Author Response

Dear reviewers,

 

Thank you very much for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. The manuscript has been carefully revised according to the reviewers’ valuable advice. The following is our point-by-point response to the comments; the revised manuscript, with changes indicated by yellow highlighting, has also been updated.

 

In the discipline of image fusion, multi-focus image fusion is an essential subfield that can successfully manage the problem of optical lens depth of field: two or more partially focused images are fused into a single fully focused image. An investigation into the details of the many approaches to multi-focus image fusion reveals that these approaches may be broken down into four categories: transform domain, boundary segmentation, deep learning techniques, and combination fusion methods. The evaluation criteria, both subjective and objective, are presented, and eight essential objective evaluation indicators are explained in great depth. This study analyzes and summarizes a number of exemplary strategies by comparing and contrasting them using a wide variety of Chinese and international sources of information. In the final few paragraphs of the paper, a synopsis of the most important problems that have arisen in multi-focus image fusion, as well as a forecast of its further development, is presented.

 

Dear authors, thank you for your manuscript. I enjoyed reading it. Presented are some suggestions to improve it:

 

- In the abstract section, on line 19 there is a typo error. It should be "technique" and not "technic". Please correct it. There are some other typo errors. Please kindly check the manuscript.

 

Response: Thank you for your careful work and valuable advice. We are very sorry for our mistakes.

We have corrected the spelling mistake on line 19. We have also carefully checked the whole manuscript and corrected other typos and grammatical errors. The modified parts have been indicated with yellow highlighting.

 

- Many of the sentences are overly long and use too many conjunctions. Please kindly engage the services of a professional English language editor to check the manuscript. Please also rectify any typo errors or grammatical errors throughout the manuscript.

Response: We are very sorry for the grammatical errors. We have asked a native English researcher specializing in image processing to help check the manuscript. We have carefully checked and corrected spelling and grammatical errors throughout the manuscript. Thank you again for your valuable advice. The modified parts have been indicated with yellow highlighting.

 

- As a literature review, the authors cannot just use the words "in recent years" throughout the manuscript. The years or the range of the years must be clearly stated in the main text of the manuscript. This can help future readers to get a better sense of time and context when the authors mention a particular research work.

Response: We are very sorry for the vague expression and any inconvenience caused. We have replaced the phrase "in recent years" in Sections 2.2.3 and 2.3 with specific time ranges, and the modified parts have been indicated with yellow highlighting.

 

- Please include a "Limitations" section to discuss what were the challenges faced, and how your team overcame those challenges. This would be very beneficial to the readers as they would be able to learn from your expert knowledge.

Response: Thank you for your careful work and valuable advice. Multi-focus image fusion technology has developed considerably over the last ten years; however, some urgent problems still need to be addressed. (1) Image registration. Most current fusion methods focus on feature extraction from the source images, paying little attention to scene consistency, content deformation, and other registration problems. Real source images are rarely as well aligned as experimental samples, so the fusion effect can be greatly affected. In our view, multi-view registration methods could address this problem: capturing images of a similar object or scene from multiple perspectives yields a better representation of the scanned object, and multi-view registration can be realized by algorithms such as image mosaicking and 3D model reconstruction from 2D images. (2) Fusion efficiency. Many scholars pursue the applicability and quality of fusion methods but ignore their efficiency, whereas we believe fusion efficiency is of great value in practical applications. This difficulty may be alleviated by merging several fusion stages into a single rapid stage, thus simplifying the sophisticated fusion process. (3) Application scenarios. Although there are many multi-focus image fusion methods, most are studied and tested on public image libraries. We think it would be helpful to collect and build image libraries for specific industrial fields; based on such libraries, and with the help of state-of-the-art mathematical theories and models, researchers can develop multi-focus image fusion methods suitable for actual applications.

We have added Section 4 as the limitations section to discuss the above three difficulties. The modified parts have been indicated with yellow highlighting in Section 4.
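For readers less familiar with the field, the spatial-domain baseline that these limitations build on — taking each pixel from whichever source image is locally sharper — can be sketched in a few lines. The following is an illustrative toy example of ours, not code from the manuscript; it assumes a smoothed Laplacian-energy focus measure, and the function names and parameters are our own choices.

```python
import numpy as np

def box_blur(x, r):
    """Mean filter over a (2r+1) x (2r+1) window (wrap-around borders)."""
    acc = np.zeros_like(x, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return acc / (2 * r + 1) ** 2

def focus_map(img, r=4):
    """Local focus activity: smoothed energy of a 3x3 Laplacian response."""
    lap = (4.0 * img
           - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
           - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    return box_blur(lap ** 2, r)

def fuse(a, b):
    """Pick, per pixel, the source whose neighbourhood is sharper."""
    return np.where(focus_map(a) >= focus_map(b), a, b)
```

Swapping in a different focus measure (e.g. local variance or gradient energy) only changes `focus_map`; the per-pixel selection rule stays the same, which is why surveys group such methods together as spatial-domain approaches.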

 

- To improve the impact and readership of your manuscript, the authors need to clearly articulate in the Abstract and in the Introduction sections the uniqueness or novelty of this article, and why or how it is different from other similar articles.

Response: Thank you for your valuable advice. Traditional classifications divide fusion methods into spatial-domain and transform-domain methods. With the rapid growth of new multi-focus image fusion methods, it has become difficult for this classification to place every fusion algorithm unambiguously; for instance, a pixel-level image fusion method can fall into both the spatial domain and the transform domain depending on the choice of domain, so the existing classification cannot reasonably categorize and summarize the pixel-level fusion methods. Therefore, this paper innovatively proposes a fusion-method category based on boundary segmentation and classifies the pixel-level fusion methods under it.

The novelty of this article has been added to the Abstract and Introduction sections, and the modified parts have been highlighted in yellow.

- Please substantially expand your review work, and cite more of the journal papers published by MDPI.

Response: Thank you for your careful work. We have cited the following references at lines 102, 165, 293, 294, 296, 311, and 349, respectively.

Reference 33. Wei, B.; Feng, X.; Wang, K.; Gao, B. The Multi-Focus-Image-Fusion Method Based on Convolutional Neural Network and Sparse Representation. Entropy 2021, 23, 827, doi: 10.3390/e23070827.

Reference 38. Du, G.; Dong, M.; Sun, Y.; Li, S.; Mu, X.; Wei, H.; Lei, M.; Liu, B. A new method for detecting architectural distortion in mammograms by NonSubsampled contourlet transform and improved PCNN. Appl. Sci. 2019, 9, 4916, doi: 10.3390/app9224916.

Reference 56. Kowsher, M.; Alam, M. A.; Uddin, M. J.; Ahmed, F.; Ullah, M. W.; Islam, M. R. Detecting Third Umpire Decisions & Automated Scoring System of Cricket. In Proceedings of the 2019 International Conference on Computer, Communication, Chemical, Materials and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 11-12 July 2019; pp. 1-8.

Reference 57. Javed Awan, M.; Mohd Rahim, M. S.; Salim, N.; Mohammed, M. A.; Garcia-Zapirain, B.; Abdulkareem, K. H. Efficient detection of knee anterior cruciate ligament from magnetic resonance imaging using deep learning approach. Diagnostics 2021, 11, 105, doi: 10.3390/diagnostics11010105.

Reference 58. Zhang, X.; Yang, Y.; Li, Z.; Ning, X.; Qin, Y.; Cai, W. An improved encoder-decoder network based on strip pool method applied to segmentation of farmland vacancy field. Entropy 2021, 23, 435, doi: 10.3390/e23040435.

Reference 61. Wang, K.; Zheng, M.; Wei, H.; Qi, G.; Li, Y. Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors 2020, 20, 2169, doi: 10.3390/s20082169.

Reference 64. Basar, S.; Waheed, A.; Ali, M.; Zahid, S.; Zareei, M.; Biswal, R. R. An Efficient Defocus Blur Segmentation Scheme Based on Hybrid LTP and PCNN. Sensors 2022, 22, 2724, doi: 10.3390/s22072724.

The modified part has been highlighted in yellow.

 

- For the references, instead of formatting "by-hand", please kindly consider using the free Zotero software (https://www.zotero.org/), and select "Multidisciplinary Digital Publishing Institute" as the citation format, since there are currently 61 citations in your manuscript, and there may probably be more once you have revised the manuscript.

Response: Thank you for your careful work and valuable advice. We tried to use the Zotero software to help with formatting. However, perhaps due to campus-network or computer-configuration problems at our school, we were unable to use the software successfully. In order to submit the revised paper as soon as possible without affecting subsequent article processing, we had to format the references by hand. We have checked the format repeatedly and hope it meets the requirements of Applied Sciences. If any format errors remain, please feel free to contact us and we will do our best to correct them.

 

Author Response File: Author Response.docx

Reviewer 3 Report

The manuscript entitled “Multi-Focus Image Fusion Methods” provides a review of the multi-focus image fusion methods, classifying them into transform domain, boundary segmentation, deep learning, and combination fusion methods. Moreover, several evaluation indicators are analyzed. Finally, the manuscript discusses current challenges and future development of multi-focus image fusion.

The manuscript is well written and easy to follow. The studied principles and approaches are well explained. Moreover, the considered multi-focus image fusion methods are analyzed in detail, and their advantages and disadvantages are appropriately discussed. The state of the research field and the future developments are also discussed.

Here are some comments I would like the authors to address before the manuscript is considered for publication:

1. Please add a paragraph describing the structure of the paper to the end of the Introduction section.

2. The analysis of the application of convolutional neural networks (CNNs) in the multi-focus image fusion is well done, with appropriate references. However, I would like to suggest the authors supplement the introductory part about CNNs with some of last year’s studies to briefly illustrate the state-of-the-art performances of the CNNs in many different applications today and provide an interested reader with examples of their very diverse applications. Please consider mentioning the following papers: 10.3390/diagnostics11010105; 10.1109/ACCESS.2021.3139850; 10.3390/e23040435.

3. Some of the references are outdated. Please extend the literature review with some recent studies from the last 2-3 years.

 

Author Response

Dear reviewers,

 

Thank you very much for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. The manuscript has been carefully revised according to the reviewers’ valuable advice. The following is our point-by-point response to the comments; the revised manuscript, with changes indicated by yellow highlighting, has also been updated.

 

The manuscript entitled “Multi-Focus Image Fusion Methods” provides a review of the multi-focus image fusion methods, classifying them into transform domain, boundary segmentation, deep learning, and combination fusion methods. Moreover, several evaluation indicators are analyzed. Finally, the manuscript discusses current challenges and future development of multi-focus image fusion.

 

The manuscript is well written and easy to follow. The studied principles and approaches are well explained. Moreover, the considered multi-focus image fusion methods are analyzed in detail, and their advantages and disadvantages are appropriately discussed. The state of the research field and the future developments are also discussed.

 

Here are some comments I would like the authors to address before the manuscript is considered for publication:

 

- Please add a paragraph describing the structure of the paper to the end of the Introduction section.

 

Response: Thank you for your careful work and valuable advice. The first part of this paper is the Introduction, which introduces the concepts of multi-focus image fusion and summarizes the content of the paper. The second part covers the fusion methods and their analysis, analyzing and classifying a variety of multi-focus fusion methods. The third part presents the evaluation indicators, introducing the commonly used subjective and objective evaluations. The fourth part discusses the limitations and gives corresponding solutions to common fusion problems. The fifth part is the conclusion, which analyzes the application and development of multi-focus fusion.

A paragraph describing the structure of the paper has been added at the end of the Introduction section and highlighted in yellow.

 

- The analysis of the application of convolutional neural networks (CNNs) in the multi-focus image fusion is well done, with appropriate references. However, I would like to suggest the authors supplement the introductory part about CNNs with some of last year’s studies to briefly illustrate the state-of-the-art performances of the CNNs in many different applications today and provide an interested reader with examples of their very diverse applications. Please consider mentioning the following papers: 10.3390/diagnostics11010105; 10.1109/ACCESS.2021.3139850; 10.3390/e23040435.

 

Response: Thank you for your valuable advice. We have added these three references to the introductory part about CNNs at Lines 293-304 in Section 2.3.1 ("Convolutional neural network model") and introduced the latest performance of CNNs in many different applications. The modified parts have been indicated with yellow highlighting in Section 2.3.1.

   

- Some of the references are outdated. Please extend the literature review with some recent studies from the last 2-3 years.

Response: Thank you for your careful work. We have cited the following references at lines 102, 165, 293, 294, 296, 311, and 349, respectively.

Reference 33. Wei, B.; Feng, X.; Wang, K.; Gao, B. The Multi-Focus-Image-Fusion Method Based on Convolutional Neural Network and Sparse Representation. Entropy 2021, 23, 827, doi: 10.3390/e23070827.

Reference 38. Du, G.; Dong, M.; Sun, Y.; Li, S.; Mu, X.; Wei, H.; Lei, M.; Liu, B. A new method for detecting architectural distortion in mammograms by NonSubsampled contourlet transform and improved PCNN. Appl. Sci. 2019, 9, 4916, doi: 10.3390/app9224916.

Reference 56. Kowsher, M.; Alam, M. A.; Uddin, M. J.; Ahmed, F.; Ullah, M. W.; Islam, M. R. Detecting Third Umpire Decisions & Automated Scoring System of Cricket. In Proceedings of the 2019 International Conference on Computer, Communication, Chemical, Materials and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 11-12 July 2019; pp. 1-8.

Reference 57. Javed Awan, M.; Mohd Rahim, M. S.; Salim, N.; Mohammed, M. A.; Garcia-Zapirain, B.; Abdulkareem, K. H. Efficient detection of knee anterior cruciate ligament from magnetic resonance imaging using deep learning approach. Diagnostics 2021, 11, 105, doi: 10.3390/diagnostics11010105.

Reference 58. Zhang, X.; Yang, Y.; Li, Z.; Ning, X.; Qin, Y.; Cai, W. An improved encoder-decoder network based on strip pool method applied to segmentation of farmland vacancy field. Entropy 2021, 23, 435, doi: 10.3390/e23040435.

Reference 61. Wang, K.; Zheng, M.; Wei, H.; Qi, G.; Li, Y. Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors 2020, 20, 2169, doi: 10.3390/s20082169.

Reference 64. Basar, S.; Waheed, A.; Ali, M.; Zahid, S.; Zareei, M.; Biswal, R. R. An Efficient Defocus Blur Segmentation Scheme Based on Hybrid LTP and PCNN. Sensors 2022, 22, 2724, doi: 10.3390/s22072724.

The modified part has been highlighted in yellow.

 

 

Author Response File: Author Response.docx

Round 2

Reviewer 3 Report

The authors have addressed my comments.
