Article
Peer-Review Record

Hyperspectral Imaging Bioinspired by Chromatic Blur Vision in Color Blind Animals

by Shuyue Zhan 1, Weiwen Zhou 2, Xu Ma 1 and Hui Huang 1,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 3 July 2019 / Revised: 2 August 2019 / Accepted: 9 August 2019 / Published: 12 August 2019

Round 1

Reviewer 1 Report

The idea is very novel, but more information regarding how this hyperspectral camera could be built should be given. What are the main issues regarding image quality? Add comments about SNR, for example. As you have one detector, what happens with the striping noise? Please provide 3D images of the PSF. Can you quantify blur vs. spectral band?

Author Response

Response to reviewer 1#:

Thank you for your valuable suggestions; they are very helpful for improving the quality of this paper. We have revised the manuscript according to your review report. All revisions are colored blue in the paper, and the detailed corrections are listed point by point below:

 

(1) But more information regarding how this hyperspectral camera could be built should be given.

In the original manuscript, we only briefly described the eyeball model as a camera imaging system in Section 2.1. We have now added Section 3.4 to introduce the camera system.

In fact, we have now fabricated the camera lens and built the imaging system to carry out experiments (as shown in Figure 1 below). However, the current experimental results have not yet reached a publishable level; the main problem is that the theoretically calculated PSFs used to restore the blurred images acquired by the experimental system deviate considerably from the true PSFs (results shown in Figure 2 below). We have since devised an effective method to measure the system PSFs experimentally, and we also found CNN-based image restoration to be very effective (even when only one blurred frame is collected and no system PSFs are required). A great deal of experimental work remains, which we expect to complete and submit next year. Therefore, we regret that we are unable to provide experimental results in this paper.

(Please see the PDF document for figure)

Figure 1. Chromatic camera lens and imaging system

(Please see the PDF document for figure)

Figure 2. Experimental blur images and restoration

 

 

(2) What are the main issues regarding image quality, add comments about SNR for example.

In the original manuscript, we only subjectively evaluated the clarity of the restored spectral images, focusing instead on the deviation between the restored and target spectra in Section 3.2 (MSE values). PSNR values are now also given in Section 3.2 to evaluate image quality.
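For reference, the two metrics mentioned here (MSE against the target and PSNR of the restored image) can be computed in a few lines of NumPy. The sketch below assumes an 8-bit peak value of 255; the function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def mse(reference, restored):
    """Mean squared error between a reference image and its restoration."""
    reference = np.asarray(reference, dtype=np.float64)
    restored = np.asarray(restored, dtype=np.float64)
    return float(np.mean((reference - restored) ** 2))

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer restoration."""
    err = mse(reference, restored)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)
```
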

 

(3) As you have one detector, what happens with the striping noise?

Yes, striping noise is common in image mosaicking for remote sensing. This work focuses on theoretical verification of the chromatic-blur imaging principle for hyperspectral imaging; therefore, we studied images of different small targets rather than wide targets such as terrain in remote sensing, and image mosaicking was not involved. Hence only PSNR and MSE could be calculated. In subsequent work, we will carry out image mosaicking to obtain continuous wide-field scene images.

 

(4) Please, provide 3D images of the PSF.

Figure 2 has been replaced with a 3D PSF plot.
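As an aside, a 3D surface rendering of a PSF of the kind now shown in Figure 2 can be produced with matplotlib. The Gaussian profile below is only an illustrative stand-in, not the paper's computed PSF, and the function names are assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

def gaussian_psf(size=65, sigma=6.0):
    """Normalized 2D Gaussian, used here only as a stand-in PSF."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    psf = np.exp(-(X ** 2 + Y ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def plot_psf_3d(psf, path="psf_3d.png"):
    """Render a PSF as a 3D surface and save it to an image file."""
    size = psf.shape[0]
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.plot_surface(X, Y, psf, cmap="viridis")
    ax.set_xlabel("x (pixels)")
    ax.set_ylabel("y (pixels)")
    ax.set_zlabel("intensity")
    fig.savefig(path)
    plt.close(fig)
```
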

 

(5) Can you quantify blur vs spectral band?

Chromatic blur is related not only to wavelength but also to the detection-plane position (see Figure 2 in the paper): at a given detection-plane position, different wavelengths produce different degrees of chromatic blur, and at a given wavelength, different detection-plane positions produce different degrees of blur. Blur could therefore be quantified versus both wavelength and position, but we do not yet know how best to discuss such data, and we believe they have no effect on image restoration or image-quality evaluation at present.
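If such a quantification were attempted, a simple geometric-optics sketch already captures the two-variable dependence. The thin-lens dispersion law and aperture below are purely illustrative assumptions, not parameters of the lens in the paper.

```python
import numpy as np

def blur_diameter(wavelength_nm, plane_mm, aperture_mm=10.0):
    """Geometric blur-circle diameter (mm) of a point source at infinity
    imaged by an idealized dispersive thin lens.

    The linear dispersion law (focal length drifting 0.005 mm per nm
    around 550 nm) and the 10 mm aperture are illustrative assumptions.
    """
    focal_mm = 24.0 + 0.005 * (wavelength_nm - 550.0)
    return aperture_mm * abs(plane_mm - focal_mm) / focal_mm

# Tabulating blur against wavelength and detection-plane position shows
# the two-variable dependence described above:
wavelengths = np.arange(400, 701, 100)      # nm
planes = np.array([23.5, 24.0, 24.5])       # detection-plane positions, mm
blur_table = np.array([[blur_diameter(w, z) for z in planes]
                       for w in wavelengths])
```
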

Author Response File: Author Response.pdf

Reviewer 2 Report

Referee Report on 553808, Zhan et al “Hyperspectral imaging bioinspired by chromatic blur vision in color blind animals”. July 15, 2019.

 

This paper presents an interesting idea- using chromatic aberration, and deconvolution methods, to achieve spectral resolution in an electronic imaging system with a monochromatic-pixel detector. This is based on an earlier suggestion that certain marine animals use chromatic aberration to infer spectral content in a scene.

 

The primary new result in this paper over prior work is a more sophisticated deconvolution method to extract the spectral structure in the scene. I do have a major question about this, noted in major point (1) below, that must be addressed before publication.

 

 

Major comments:

 

1)    There is a basic aspect of this that I don’t understand, even after reading the paper multiple times. I gather from the flow diagram in Figure 4 that each of the 61 images ik(x,y) are processed independently, with multiple trial inverse PSFs? I don’t understand, from an information theory point of view, how a wavelength can be assigned to a uniform region of pixels without adopting some prior assumption about the structure of the objects in the scene. In other words imagine an image divided into two halves, with one side red and one side blue. Away from the boundary between them, in any single-color region that is larger than the most aberrated PSF all these deconvolutions will have zero effect on the pixel values. So how can the spectral content of those areas be determined? If I’ve missed something basic here I do apologize, but I suggest the authors clarify this point. Said differently, if there is zero structure in the image it’s impossible to know the extent of defocus due to chromatic effects.

 

2)    The authors rightly mention that Fourier transform spectrographs can similarly perform a scan through successive images to obtain spectral information, but they don’t really provide a performance comparison of that method to the one proposed. In both cases the full Poisson noise appears in every image, but they seem to limit consideration to the high-SNR case. I suspect that since the Fourier transform method requires a collimated beam, the chromatic-aberration method will likely be useful over much wider fields of view. Both methods require that the image cube be stable over the course of the scan. I suggest the authors add a paragraph with a back-of-the-envelope performance comparison to an integral-field scanning Fourier imager. Line 44 dismisses Fourier imaging by saying it requires high stability, but so does the method being proposed in this paper. Perhaps in a subsequent publication the authors might wish to provide a more quantitative comparison of the performance parameters of these different methods, instead of the descriptive comparison provided here.

 

3)    Line 53 says the authors used a model developed by Stubbs et al., but Figure 3 and the description in the text makes me think they used/modified the publicly available computer code written by Stubbs et al. Line 138, for example, just happens to describe the same input spectra as used in that computer code. If that computer program was used/adapted, it is important to add that fact to the acknowledgments. Also, the idea of obtaining spectral information from chromatic aberration was explicitly proposed by Stubbs et al and it seems appropriate to say so very explicitly in this paper.

 

4)    Line 66 asserts that a reasonable explanation for spectral discrimination in color-blind animals has not been given. That’s not true! The Stubbs et al paper does precisely that, the authors should say so.

 

5)    Line 96 talks about an “infinite point object”, but I think what is meant is a point object at a distance of infinity.

 

6)    Line 119 refers to the diffraction-field. This entire situation is in the domain of ray-tracing optics, and diffraction does not play a strong role, as long as the input aperture is large enough that chromatic effects dominate. The PSF does not have much dependence on diffraction, so this should be re-phrased.

 

7)    It would be helpful to have more details on how the information in Figure 6(b) was obtained.

 

 

Minor comments:

 

8)    The authors are congratulated on a well-written paper, but the English in the manuscript would benefit from editing by a native speaker. There are numerous minor corrections that would improve the paper.

 

9)    The labeling in Figure 5 was very hard to read, in the PDF copy I downloaded.

 

10) Does this journal have a policy of all computer programs being made available upon publication? If so, the authors should do so.


Author Response

Response to reviewer 2#:

Thank you for your detailed and valuable suggestions; they are very helpful for improving the quality of this paper. We have revised the manuscript according to your review report. All revisions are colored blue in the paper, and the detailed corrections are listed point by point below:

 

(1) There is a basic aspect of this that I don’t understand, even after reading the paper multiple times. I gather from the flow diagram in Figure 4 that each of the 61 images ik(x,y) are processed independently, with multiple trial inverse PSFs? I don’t understand, from an information theory point of view, how a wavelength can be assigned to a uniform region of pixels without adopting some prior assumption about the structure of the objects in the scene. In other words imagine an image divided into two halves, with one side red and one side blue. Away from the boundary between them, in any single-color region that is larger than the most aberrated PSF all these deconvolutions will have zero effect on the pixel values. So how can the spectral content of those areas be determined? If I’ve missed something basic here I do apologize, but I suggest the authors clarify this point. Said differently, if there is zero structure in the image it’s impossible to know the extent of defocus due to chromatic effects.

Figure 4 and the description of the algorithm have been revised: the 61 images and the 61×61 PSFs are inverse-filtered together by SVD, but pixel by pixel.

For a 2D scene (or 2D image), each pixel of the image is a single numerical value; when a spectral curve (n numerical values) is assigned to each pixel, the 2D image becomes an image cube (3D: x, y, and λ). We now annotate this as a spectral image cube in line 151 of the revised paper.
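A minimal sketch of this pixel-by-pixel restoration idea, under a simplified linear model in which each pixel's measurements across the detection planes relate to its spectrum through a single system matrix (the actual algorithm uses the full set of 61×61 PSFs; `restore_spectra` and its shapes are illustrative assumptions):

```python
import numpy as np

def truncated_pinv(A, rel_tol=1e-3):
    """Moore-Penrose pseudoinverse via SVD, discarding small singular
    values to keep the inversion stable against noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    inv_s = np.where(keep, 1.0 / s, 0.0)
    return (Vt.T * inv_s) @ U.T

def restore_spectra(measurements, system_matrix, rel_tol=1e-3):
    """Recover a per-pixel spectrum from a stack of chromatically blurred
    images, one per detection-plane position.

    measurements : (n_planes, H, W) stack of recorded images
    system_matrix: (n_planes, n_bands) response of each plane to each band
    returns      : (n_bands, H, W) restored spectral image cube
    """
    n_planes, H, W = measurements.shape
    A_pinv = truncated_pinv(system_matrix, rel_tol)   # (n_bands, n_planes)
    b = measurements.reshape(n_planes, -1)            # one column per pixel
    cube = A_pinv @ b                                 # solve all pixels at once
    return cube.reshape(-1, H, W)
```

Because the pseudoinverse acts on each pixel's column independently, a structureless (flat-field) scene poses no special problem under this model, which is consistent with the flat-field test described in the next paragraph.
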

If there is zero structure in the image (i.e., a flat-field background), Stubbs et al. pointed out in their paper that octopus vision cannot perceive the spectrum of a flat-field background, and that the image-contrast method cannot extract it either. However, the PSF inverse-filtering method adopted in our paper can cope with a flat-field background, as can be seen from the restored spectral images (Figures 5 and 6 in the paper). To further prove this point, we used a uniform color picture in which every pixel was assigned a white-LED spectrum. First we obtained the blurred images (upper row below; because the scene is uniform, blur cannot be judged visually), then restored the spectral image of each band (bottom row) and compared the restored spectrum with the target spectrum; the two are consistent (as shown below).

(Please see the pdf document for figure)

Figure 1. Flat-field background target test

 

(2) The authors rightly mention that Fourier transform spectrographs can similarly perform a scan through successive images to obtain spectral information, but they don’t really provide a performance comparison of that method to the one proposed. In both cases the full Poisson noise appears in every image, but they seem to limit consideration to the high-SNR case. I suspect that since the Fourier transform method requires a collimated beam, the chromatic-aberration method will likely be useful over much wider fields of view. Both methods require that the image cube be stable over the course of the scan. I suggest the authors add a paragraph with a back-of-the-envelope performance comparison to an integral-field scanning Fourier imager. Line 44 dismisses Fourier imaging by saying it requires high stability, but so does the method being proposed in this paper. Perhaps in a subsequent publication the authors might wish to provide a more quantitative comparison of the performance parameters of these different methods, instead of the descriptive comparison provided here.

Another reviewer also pointed out that the tone of our presentation of traditional hyperspectral techniques in the introduction was inappropriate, and we have revised it (including in the abstract). In fact, the approach presented in this paper also requires scanning, which is now clearly pointed out in the newly added Section 3.4 (line 271).

In this paper, we mainly prove that this new method is feasible for hyperspectral imaging, focusing on obtaining spectral and spatial information rather than on extensive performance analysis. We will try to compare the performance of a Fourier spectrometer with the proposed method in subsequent work; however, this is not easy given the limited time and the lack of an integral-field scanning Fourier imager. We are currently carrying out experimental testing (including the CNN method mentioned in Section 3.4), which is expected to be completed next year, when we will definitely carry out performance analysis and comparison.

 

(3) Line 53 says the authors used a model developed by Stubbs et al., but Figure 3 and the description in the text makes me think they used/modified the publicly available computer code written by Stubbs et al. Line 138, for example, just happens to describe the same input spectra as used in that computer code. If that computer program was used/adapted, it is important to add that fact to the acknowledgments. Also, the idea of obtaining spectral information from chromatic aberration was explicitly proposed by Stubbs et al and it seems appropriate to say so very explicitly in this paper.

We are very sorry for overlooking this point; an Acknowledgments section has been added to the revised manuscript (line 323).

 

(4) Line 66 asserts that a reasonable explanation for spectral discrimination in color-blind animals has not been given. That’s not true! The Stubbs et al paper does precisely that, the authors should say so.

The previous wording was indeed inappropriate, and we have modified this sentence (line 64). The contrast-method explanation put forward by Stubbs et al. holds great promise, but we think some questions remain regarding image understanding: 1) the spectral-discrimination efficiency of the contrast method is not high, which seems inconsistent with the octopus's excellent camouflage ability; 2) even if the contrast method can obtain the target spectrum, how is the clear morphology/texture of the target obtained? We think these questions need further study by biologists.

 

(5) Line 96 talks about an “infinite point object”, but I think what is meant is a point object at a distance of infinity.

Yes, it is a point object at a distance of infinity; we have revised this in the paper (lines 100 and 110).

 

(6) Line 119 refers to the diffraction-field. This entire situation is in the domain of ray-tracing optics, and diffraction does not play a strong role, as long as the input aperture is large enough that chromatic effects dominate. The PSF does not have much dependence on diffraction, so this should be re-phrased.

The previous wording was inappropriate; it should be “optical field distribution”. We have revised this in the paper (line 128).

 

(7) It would be helpful to have more details on how the information in Figure 6(b) was obtained.

This point is now revised (line 231).

 

(8) The authors are congratulated on a well-written paper, but the English in the manuscript would benefit from editing by a native speaker. There are numerous minor corrections that would improve the paper.

The English of the original manuscript was edited by Mogo Edit (a language-editing company). We have gone through the English of the revised manuscript carefully.

 

(9) The labeling in Figure 5 was very hard to read, in the PDF copy I downloaded.

The labels were indeed very small, and we have now enlarged them as much as possible. The resolution of this figure is high (600 dpi), so magnifying (or printing) the Word file displays the labels clearly, but the resolution may have been reduced when the submission system automatically converted the file to PDF.

 

(10) Does this journal have a policy of all computer programs being made available upon publication? If so, the authors should do so.

The supplementary materials of this journal do not support code files, so we have made the code available on a third-party website (much research code is now shared on GitHub). The link to the code is given in the paper (line 55).

Author Response File: Author Response.pdf

Reviewer 3 Report

Overall I think this is an excellent paper. The fundamental idea is sound and novel, to the best of my knowledge. There have been a number of recent papers looking at how animals with a single color photoreceptor can effectively "see" in color - and the authors have used this concept to inspire a new type of hyperspectral imaging. In the paper of Stubbs and Stubbs - which is the key reference - the pupil shape is the critical factor - which is not relevant here - but I think that is a minor difference. So - overall I strongly recommend publication, since it is not often that one sees a paper with a fundamentally new idea (albeit inspired from a different field). My only recommendation is that the paper would benefit from "tightening up". Specific examples include:

Intro. The authors follow the classic route of critiquing other hyperspectral imaging techniques and thereby imply that their technique is better. Hyperspectral imaging is a huge area and it is not obvious to me that this technique is fundamentally better. I don't mean that as a criticism, as the paper stands up without this justification due to its novelty. I would therefore "tone down" the implication that this technique is better than others. All will have pros and cons.

A general review of the English (which is very good on the whole - but could be improved)

Fig 1d. I assume this is a "picture" - i.e. not a real simulation. This could be clarified.

In their simulations the authors don't actually state explicitly that the detection system is for a single photoreceptor response. This is implicit (via equation 2 and the text and fig 3) but I think it would be helpful if this was stated more clearly. I am sure the authors think this must be obvious but sometimes obvious things are not clear to all readers.


Overall though - a very interesting paper.

Author Response

Response to reviewer 3#:

Thank you for your valuable suggestions; they are very helpful for improving the quality of this paper. We have revised the manuscript according to your review report. All revisions are colored blue in the paper, and the detailed corrections are listed point by point below:

 

(1) In the paper of Stubbs and Stubbs - which is the key reference the pupil shape is the critical factor - which is not relevant here - but I think that is a minor difference.

We introduced this point in the original paper (lines 67-74). The variation of blurring degree with wavelength is very important for the contrast method, but we are not yet clear about the effect of different pupil shapes on spectral-image restoration under the PSF inverse-filtering method. Since the work in this paper is only a numerical simulation, pupils with different morphologies have different PSF characteristics; however, as long as the PSFs are known, the simulated restoration results are almost identical, which differs from the image-contrast method. We are currently carrying out experimental work; the specific role of the pupil in image restoration is worth studying, and we will discuss this issue in future papers.

 

(2) Intro. The authors follow the classic route of critiquing other hyper spectral imaging techniques and therefore imply that their technique is better. Hyperspectral imaging is a huge area and it is not obvious to me that this technique is fundamentally better. I don't mean that as a criticism as the paper stands up without this justification due to its novelty. I would therefore "tone down" the implication that this technique is better than others. All will have pros and cons.

We think your suggestion is very reasonable. The description of existing hyperspectral imaging has been modified in the introduction (lines 39-46) and the abstract (lines 10-12).

 

(3) A general review of the English (which is very good on the whole - but could be improved).

The English of the original manuscript was edited by Mogo Edit (a language-editing company). We have gone through the English of the revised manuscript carefully.

 

(4) Fig 1d. I assume this is a "picture" - i.e. not a real simulation. This could be clarified.

It is a diagrammatic drawing, not a real simulation; this is now clarified in the text (line 101) and the figure caption (line 109).

 

(5) In their simulations the authors don't actually state explicitly that the detection system is for a single photoreceptor response. This is implicit (via equation 2 and the text and fig 3) but I think it would be helpful if this was stated more clearly. I am sure the authors think this must be obvious but sometimes obvious things are not clear to all readers.

This is now stated more clearly in lines 142-145.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.

