Communication

High-Quality Computational Ghost Imaging with a Conditional GAN

Ming Zhao, Xuedian Zhang and Rongfu Zhang *

1 School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
2 School of Physics and Electronic Engineering, Fuyang Normal University, Fuyang 236037, China
* Author to whom correspondence should be addressed.
Photonics 2023, 10(4), 353; https://doi.org/10.3390/photonics10040353
Submission received: 7 February 2023 / Revised: 7 March 2023 / Accepted: 21 March 2023 / Published: 23 March 2023
(This article belongs to the Special Issue Advances and Applications in Computational Imaging)

Abstract

In this study, we demonstrated a framework for improving the image quality of computational ghost imaging (CGI) using a conditional generative adversarial network (cGAN). With a set of low-quality images from a CGI system and their corresponding ground-truth counterparts, a cGAN was trained that could generate high-quality images from new low-quality images. The results showed that, compared with the traditional method based on compressed sensing, this method greatly improved the image quality when the sampling ratio was low.

1. Introduction

Ghost imaging, also known as correlation imaging, has been one of the frontiers and hot spots in the field of quantum optics in recent years [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. It is a new imaging technique based on the correlation characteristics of quantum entanglement or classical light field fluctuations. Ghost imaging generally obtains the target object information nonlocally through the intensity correlation between the reference light field and the target detection light field.
As a new imaging modality, ghost imaging can solve some problems that are difficult to solve with traditional optical imaging techniques. For example, ghost imaging has a strong anti-interference ability, so its imaging quality in complex optical environments (such as atmospheric turbulence [22] and scattering media [20,21,23]) is significantly better than that of traditional optical imaging techniques. Recently, ghost imaging techniques have been widely used in laser radar detection [24,25], 3D imaging [26,27,28], X-ray ghost imaging [29,30,31], fluorescence microscopy [32], optical encryption [19,33], and terahertz imaging [34,35,36]. Ghost imaging has also gradually outperformed classical imaging in terms of its extremely-weak-light imaging ability, image signal-to-noise ratio, spatial resolution, and dynamic range. In addition, researchers have enriched and innovated the technical means of ghost imaging in terms of the irradiation light source, the imaging mechanism, the recovery algorithm, and so on, which has greatly expanded its implementation methods and application scenarios.
Ghost imaging techniques have attracted much attention due to these significant advantages, such as their strong anti-interference ability. However, because it relies on multiple measurements, ghost imaging requires a long sampling time and a large number of iterative calculations to obtain high-quality images. It is therefore very important for ghost imaging to achieve an appreciable image quality with fewer measurements.
The development of compressed sensing in signal processing [37,38] allowed compressed sensing to be introduced into ghost imaging, which achieves a relatively higher imaging quality with fewer measurements than traditional methods [39]. For example, Katkovnik et al. applied compressed sensing to computational ghost imaging (CGI), which simplified the experimental device and reduced the number of measurements while guaranteeing a certain imaging quality [40]. Along this line, Gong, Han, and others realized compressed-sensing super-resolution ghost imaging by introducing a sparsity constraint [41,42,43].
In addition to compressed sensing, artificial intelligence has been considered for ghost imaging due to its various advantages. For example, Barbastathis et al. were the first to propose training a deep neural network for a lensless imaging system to recover phase objects [44]. Lyu et al. [45] developed a deep-neural-network (DNN)-based scheme to optimize the imaging quality at a low sampling rate, and Shimobaba et al. [46] proposed a U-Net-based network to achieve denoising optimization of CGI. Subsequently, the authors of [47] employed a CNN to further improve the image quality of ghost imaging, and Zhang et al. [48] proposed a singular-value-decomposition compressed ghost imaging method based on deep unfolding to achieve good antinoise performance at low sampling rates. Although compressed sensing has been applied in ghost imaging, its computational complexity is still relatively high since it involves a large number of iterative calculations, and the aforementioned artificial intelligence methods have complex network structures, resulting in a huge number of parameters that need to be optimized.
Against this background, we combined a conditional generative adversarial network (cGAN) [49,50] with CGI to reduce the complexity of the reconstruction computation. The aim of this work was to quickly and accurately reconstruct a target image at a low sampling ratio. Specifically, a low-quality image was first obtained through CGI at a low sampling rate and used as the conditional input of a lightweight cGAN; the trained cGAN then generated a high-quality image so as to improve the overall imaging performance. In the simulation experiment, we used 50,000 images from the MNIST dataset [51] and their low-quality ghost images to train the network. Through training, the deep neural network could learn the features of the images and make predictions, establishing a mapping between a low-quality image and its respective high-quality image. The rest of this work is organized as follows. In Section 2, we describe the considered scenarios and introduce the network structure. In Section 3, we provide the numerical simulation results and verify the effectiveness of the proposed scheme. Finally, Section 4 gives a brief conclusion of this study.

2. Methods

In this section, the computational ghost imaging system and network structure are summarized, and the network training is then described.

2.1. Imaging Scheme

Unlike traditional ghost imaging, CGI reconstructs an image of an object by correlating the preset speckle light field distribution with the light intensity collected by a bucket detector; therefore, it needs only one bucket detector to recover the image of an object. The modulation of the speckle light field can be realized by a light source modulator, such as a spatial light modulator (SLM) or a digital micromirror device (DMD), and a series of speckle light fields can be designed by programming. Specifically, the light source modulator is loaded according to a preset cycle sequence, and the target scene is modulated $N$ times in the spatial light field. After the emitted laser is modulated by the light source modulator, the speckle light field for ghost imaging is obtained. The detection value $B_i$ of the corresponding bucket detector is obtained through a series of illuminations in which the object is illuminated by $N$ different speckle light fields. $B_i$ can then be expressed as
$B_i = \sum_{x}\sum_{y} I_i(x, y)\, T(x, y),$ (1)
where $T(x, y)$ is the transmittance function of the measured object, in which $x$ and $y$ are the coordinates of the object plane, and $I_i(x, y)$ is the $i$th speckle light field, $i \in \{1, 2, \ldots, N\}$.
To reconstruct the image of the measured object, the signal intensity $B_i$ detected by the bucket detector and the speckle light field intensity $I_i(x, y)$ can be subjected to a second-order correlation operation, which can be expressed as [52]
$O(x, y) = \frac{1}{N} \sum_{i=1}^{N} \left( B_i - \langle B_i \rangle \right) I_i(x, y),$ (2)
where $\langle \cdot \rangle = \frac{1}{N} \sum_{i} (\cdot)$ denotes the ensemble average over all $N$ different speckle light fields.
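To make the pipeline concrete, the following NumPy sketch simulates Equations (1) and (2) for a toy binary object; the object shape, speckle statistics, and sizes are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal NumPy sketch of Eqs. (1) and (2): bucket detection with random
# speckle patterns, then reconstruction by second-order correlation.
import numpy as np

rng = np.random.default_rng(0)
N, H, W = 1024, 32, 32                 # number of speckle fields, image size

T = np.zeros((H, W))                   # toy binary object (a vertical bar)
T[8:24, 14:18] = 1.0

I = rng.random((N, H, W))              # speckle intensity fields I_i(x, y)
B = np.einsum('nxy,xy->n', I, T)       # Eq. (1): bucket values B_i

# Eq. (2): O(x, y) = (1/N) * sum_i (B_i - <B>) * I_i(x, y)
O = np.mean((B - B.mean())[:, None, None] * I, axis=0)
```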
We observe from Equation (2) that $N$ speckle light fields are required to restore a high-quality target image. However, the value of $N$ is generally large, which causes the imaging process to suffer from high computational complexity. To overcome this difficulty, we first reduced the number of measurements to obtain a relatively low-quality reconstructed image, and we then employed a cGAN to enhance the image quality. The corresponding reconstruction process can be expressed as
$\tilde{O}(x, y) = R\{O(x, y)\},$ (3)
where $R\{\cdot\}$ represents the cGAN that maps the low-quality image $O(x, y)$ to its respective high-quality image. Here, we proposed training a feasible neural network $\tilde{R}_{cGAN}$, which is given by
$\tilde{R}_{cGAN} = \arg \min_{G} \max_{D} \mathcal{L}_{cGAN}(G, D) + \lambda \mathcal{L}_{L1}(G),$ (4)
where $G$ is a generative model and $D$ is a discriminative model; both are nonlinear mapping functions. $\mathcal{L}_{cGAN}$ is the loss function of the cGAN. The notation $\arg \min_{G} \max_{D} \mathcal{L}_{cGAN}(G, D)$ means that $G$ tries to minimize $\mathcal{L}_{cGAN}$ while $D$ tries to maximize it ("arg" is the abbreviation of "argument"). The last term pushes the generator output toward the ground truth in an $L_1$ sense. Here, $\lambda$ is a constant, and its value is 100. The two loss functions in Equation (4) can be described in detail as follows [49]:
$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}\left[\log D(x \mid y)\right] + \mathbb{E}\left[\log\left(1 - D(x \mid G(z \mid y))\right)\right],$ (5)
$\mathcal{L}_{L1}(G) = \mathbb{E}\left[\lVert y - G(x \mid z) \rVert_1\right],$ (6)
where $x$, $y$, and $z$ represent the output image, the ground-truth image, and the input image, respectively; $\mathbb{E}$ stands for the operation of mathematical expectation; and $\lVert \cdot \rVert_1$ denotes the $L_1$ norm.
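As a hedged illustration of how the objective in Equations (4)–(6) translates into code, the following TensorFlow sketch implements the adversarial and $L_1$ loss terms. The names disc_real, disc_fake, generated, and target are our own, and the discriminator is assumed to output logits; this is a sketch, not the authors' exact implementation.

```python
# Sketch of the cGAN objective: adversarial loss (Eq. (5)) plus the
# lambda-weighted L1 term (Eq. (6)), combined as in Eq. (4).
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100.0                                       # the constant lambda in Eq. (4)

def generator_loss(disc_fake, generated, target):
    adv = bce(tf.ones_like(disc_fake), disc_fake)    # try to fool the discriminator
    l1 = tf.reduce_mean(tf.abs(target - generated))  # L1 distance to ground truth
    return adv + LAMBDA * l1

def discriminator_loss(disc_real, disc_fake):
    real = bce(tf.ones_like(disc_real), disc_real)   # real pairs labeled 1
    fake = bce(tf.zeros_like(disc_fake), disc_fake)  # generated pairs labeled 0
    return real + fake
```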

2.2. Simulation

Figure 1 is a diagrammatic sketch of computational ghost imaging. The simulation was based on this standard computational ghost imaging setup [39]. Monochromatic light with a wavelength of 532 nm was emitted by the laser source. The phase modulation masks were generated by a computer-controlled SLM according to the following formula:
$M_r(k_1, k_2) = \exp\left[j 2 \pi \varphi_r(k_1, k_2)\right], \quad 1 \le k_1 \le N_1, \; 1 \le k_2 \le N_2, \; r = 1, 2, \ldots, K,$ (7)
where the random phase $\varphi_r(k_1, k_2)$ is uniformly distributed over the interval $[0, 1)$ and is independent for all $k_1$, $k_2$, and $r$. In the modeling of wavefield propagation, the F-DDT technique was applied [53]. The bucket detector measured the total intensity of the light illuminating it. To account for the noise of the bucket detector when observing the light intensity and the thermal noise of the detector itself, Gaussian white noise was added to the output signal of the bucket detector during the simulation. Finally, the image was reconstructed by calculating the correlation between the measured intensity of the bucket detector and the corresponding speckle pattern generated by the SLM.
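A minimal sketch of this simulation step is given below, with illustrative sizes and noise level. For brevity, it replaces the F-DDT propagation of Ref. [53] with a crude far-field (FFT) approximation, so it should be read as a stand-in for the actual wavefield model rather than the paper's implementation.

```python
# Sketch of Eq. (7) and the noisy bucket signal: phase-only masks with phases
# uniform on [0, 1), crude far-field propagation via FFT, and additive
# Gaussian white noise on the bucket values.
import numpy as np

rng = np.random.default_rng(1)
K, N1, N2 = 64, 32, 32                          # number of masks, mask size

phi = rng.random((K, N1, N2))                   # phi_r(k1, k2) ~ U[0, 1)
M = np.exp(1j * 2 * np.pi * phi)                # Eq. (7): masks M_r(k1, k2)

# Stand-in for the F-DDT propagation step: Fraunhofer (far-field) approximation.
field = np.fft.fftshift(np.fft.fft2(M), axes=(-2, -1))
speckle = np.abs(field) ** 2                    # speckle intensity at the object

T = rng.integers(0, 2, (N1, N2)).astype(float)  # random binary test object
B = speckle.reshape(K, -1) @ T.ravel()          # noiseless bucket values
sigma = 0.01                                    # assumed relative noise level
B_noisy = B + rng.normal(0.0, sigma * B.std(), K)
```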

2.3. Network Structure

Inspired by the cGAN [49] and pix2pix [50], we used a neural network whose structure is schematically shown in Figure 2. The inputs of the generator were the low-quality reconstructed images from the CGI system. The output images of the generator and their corresponding ground-truth images constituted the pairs of inputs to the discriminator, whose output was a binary value indicating whether the input generated image was real or fake. The architecture of the generator, shown in Figure 3, was based on U-Net [54]. There were only five layers of encoders and four layers of decoders, which made the network lightweight for our scheme. Each convolution block in the encoders was composed of a convolutional layer with a 3 × 3 convolutional kernel and a rectified linear unit (ReLU). In addition, a 2 × 2 kernel was used for the max-pooling layers during downsampling, and a 2 × 2 convolutional kernel was applied in the upsampling process. For the discriminator, we employed a PatchGAN discriminator [50] to produce high-quality target images.
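A hedged Keras sketch of such a lightweight U-Net generator is shown below: five encoder levels (counting the bottleneck) and four decoder levels, 3 × 3 convolutions with ReLU, 2 × 2 max pooling for downsampling, and 2 × 2 transposed convolutions for upsampling. The channel counts are our assumptions; Figure 3 should be consulted for the exact values.

```python
# Sketch of a lightweight U-Net generator for 32 x 32 single-channel inputs.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # 3 x 3 convolutions with ReLU, as in the encoder blocks described above.
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

def build_generator(size=32, base_filters=16):   # base_filters is an assumption
    inp = tf.keras.Input((size, size, 1))
    x, skips, filters = inp, [], base_filters
    for _ in range(4):                           # encoder levels 1-4
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPool2D(2)(x)               # 2 x 2 max pooling
        filters *= 2
    x = conv_block(x, filters)                   # encoder level 5 (bottleneck)
    for skip in reversed(skips):                 # decoder levels 1-4
        filters //= 2
        x = layers.Conv2DTranspose(filters, 2, strides=2)(x)  # 2 x 2 up-conv
        x = layers.Concatenate()([x, skip])      # U-Net skip connection
        x = conv_block(x, filters)
    out = layers.Conv2D(1, 1, activation='sigmoid')(x)
    return tf.keras.Model(inp, out)
```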

2.4. Preparation of Training Data and Network Training

We recall that the inputs of the training network are pairs of images composed of ground-truth images and their corresponding CGI reconstruction results. According to Equation (2), the reconstruction of the target image is, mathematically, the inner product between the measured intensities and the corresponding speckle patterns. Thus, we used the simulation method to generate the training data pairs. Specifically, the training set was generated from the MNIST handwritten digit database through ghost imaging simulation.
The number of training steps was 1000. The program was implemented in Python 3.6, and the cGAN was implemented with the TensorFlow framework. An NVIDIA GTX 1080 Ti GPU was used to accelerate the calculation.
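For completeness, a minimal sketch of one adversarial training step under this setup follows; generator and discriminator stand for the models described in Section 2.3, the loss functions are those sketched after Equations (5) and (6), the discriminator is assumed to take a (condition, image) pair, and the Adam settings are our assumptions, not reported values.

```python
# Sketch of one cGAN training step in TensorFlow.
import tensorflow as tf

gen_opt = tf.keras.optimizers.Adam(2e-4)         # assumed learning rate
disc_opt = tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(low_quality, ground_truth):
    # generator/discriminator and the two loss functions are assumed to be
    # the ones sketched earlier in this section.
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(low_quality, training=True)
        d_real = discriminator([low_quality, ground_truth], training=True)
        d_fake = discriminator([low_quality, fake], training=True)
        g_loss = generator_loss(d_fake, fake, ground_truth)
        d_loss = discriminator_loss(d_real, d_fake)
    gen_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    disc_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    return g_loss, d_loss
```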

3. Results and Discussions

In this section, we evaluate the performance achieved by our improved cGAN-based CGI scheme. To demonstrate its benefit, we compared our scheme with the traditional CGI and compressive sensing computational ghost imaging (CSCGI) techniques. For CGI, the correlation between the measured intensity of the bucket detector and the random patterns was calculated to reconstruct the image. For CSCGI, we further applied the BM3D algorithm on the basis of CGI to reconstruct the target image. In the simulations, each MNIST handwritten digit was binarized and resized to $32 \times 32$ pixels, which meant that $N = 1024$. In addition, the sampling ratio was defined as $\beta = M/N$, where $M \in \{4, 8, 16, 32, 64, 128, 256, 512, 768, 1024\}$; thus, $\beta \in \{0.39\%, 0.78\%, 1.56\%, 3.12\%, 6.25\%, 12.5\%, 25\%, 50\%, 75\%, 100\%\}$.
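These sampling ratios follow directly from $\beta = M/N$, as the short check below reproduces:

```python
# Sanity check: beta = M / N with N = 1024.
N = 1024
Ms = [4, 8, 16, 32, 64, 128, 256, 512, 768, 1024]
print([f"{100 * M / N:.2f}%" for M in Ms])
# ['0.39%', '0.78%', '1.56%', '3.12%', '6.25%', '12.50%', '25.00%',
#  '50.00%', '75.00%', '100.00%']
```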

3.1. Results

In Figure 4, the simulation results achieved by our developed scheme are compared with those of CGI and CSCGI for different values of the sampling ratio $\beta$. As expected, we first observed that the quality of the target images achieved by all schemes improved as the sampling ratio $\beta$ increased. In addition, we observed that the image quality achieved by CSCGI was better than that achieved by CGI when the value of $\beta$ was relatively large (e.g., $\beta \geq 6.25\%$), which is due to the fact that the BM3D algorithm was employed to improve the reconstruction performance in CSCGI.
We also observed that the image quality achieved by our scheme was superior to that of the CGI and CSCGI schemes, especially when the value of $\beta$ was relatively small (e.g., $\beta \leq 6.25\%$). To illustrate this phenomenon, we plotted Figure 5, where the sampling ratio $\beta$ was set to $1.56\%$. From this figure, we observed that the images reconstructed by the CGI and CSCGI schemes were completely overwhelmed by noise, while our proposed scheme still achieved a recognizable image quality. This result demonstrates that our scheme can effectively suppress noise.

3.2. Discussions

In this subsection, we employ the root mean square error (RMSE), the peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM) to quantify the reconstruction performance. The RMSE is defined as
$RMSE(U, V) = \sqrt{\frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \left[ U(i, j) - V(i, j) \right]^2},$ (8)
where $H$ and $W$ are the height and width of the image, respectively, and $U$ and $V$ are the evaluated image and the reference image, respectively.
The PSNR is defined as
$PSNR(U, V) = 20 \log_{10} \frac{MAX}{RMSE(U, V)},$ (9)
where $MAX = 255$ is the maximum gray value of the image.
The SSIM is described in [55] as
$SSIM(U, V) = \frac{\left(2 \mu_u \mu_v + C_1\right)\left(2 \sigma_{uv} + C_2\right)}{\left(\mu_u^2 + \mu_v^2 + C_1\right)\left(\sigma_u^2 + \sigma_v^2 + C_2\right)},$ (10)
where $\mu_u$, $\mu_v$ and $\sigma_u^2$, $\sigma_v^2$ are the means and variances of $U$ and $V$, respectively, and $\sigma_{uv}$ is the covariance of $U$ and $V$. The constants $C_1$ and $C_2$ are included to avoid instability when $\mu_u^2 + \mu_v^2$ or $\sigma_u^2 + \sigma_v^2$ is very close to zero. Specifically, we chose $C_1 = (K_1 L)^2$ and $C_2 = (K_2 L)^2$, where $K_1$ and $K_2$ are tiny constants and $L$ is the dynamic range of the pixel values. In this work, $K_1 = 0.01$, $K_2 = 0.03$, and $L = 255$, so we obtained $C_1 = 6.5025$ and $C_2 = 58.5225$.
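A minimal NumPy implementation of these three metrics, using the constants above, could read as follows. Note that Ref. [55] computes the SSIM statistics over local windows; this sketch uses global image statistics for brevity.

```python
# Sketch of Eqs. (8)-(10): RMSE, PSNR (MAX = 255), and a global-statistics SSIM.
import numpy as np

def rmse(U, V):
    return np.sqrt(np.mean((U.astype(float) - V.astype(float)) ** 2))

def psnr(U, V, max_val=255.0):
    return 20 * np.log10(max_val / rmse(U, V))

def ssim(U, V, K1=0.01, K2=0.03, L=255.0):
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2        # 6.5025 and 58.5225 for L = 255
    mu_u, mu_v = U.mean(), V.mean()
    var_u, var_v = U.var(), V.var()
    cov = np.mean((U - mu_u) * (V - mu_v))       # covariance sigma_uv
    return ((2 * mu_u * mu_v + C1) * (2 * cov + C2)) / \
           ((mu_u ** 2 + mu_v ** 2 + C1) * (var_u + var_v + C2))
```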
In Figure 6, we evaluate the SSIM, PSNR, and RMSE performances achieved by CGI, CSCGI, and our developed scheme. We note that all the original images in the test set of the MNIST dataset were used to verify the performance of all schemes. We observed from Figure 6 that both the SSIM and PSNR achieved by all schemes increased with the sampling ratio, while the RMSE obtained by all schemes decreased with the sampling ratio; this is because the obtained target image information became richer as the sampling ratio increased. From this figure, we also observed that the SSIM, PSNR, and RMSE performances achieved by our scheme were significantly better than those achieved by CGI and CSCGI. Specifically, the SSIM, PSNR, and RMSE obtained by our scheme at low sampling ratios (e.g., $\beta = 6.25\%$) were even better than those obtained by the CGI and CSCGI schemes at high sampling ratios (e.g., $\beta = 100\%$), which showed the superiority of our developed cGAN.
According to $\beta = M/N$ and $N = 1024$, we had $M = 64$ when $\beta = 6.25\%$ and $M = 1024$ when $\beta = 100\%$. The smaller the number of samples $M$, the less time the reconstruction takes. As Figure 6 shows, for the same performance, our scheme took less time than the CGI and CSCGI schemes. For example, when $M$ was 64, the CGI scheme took 2.3488 s to reconstruct each image, while the CSCGI scheme took 8.3272 s. When $M$ was 1024, the CGI scheme took 24.3536 s per image, while the CSCGI scheme took 124.6102 s. Our scheme predicted 10,000 images in 107.7102 s, i.e., 0.0108 s per image. Adding the time required by CGI to reconstruct each low-quality input beforehand, the actual time for our scheme to reconstruct each image was only 2.3596 s. Thus, when the PSNR was about 13.5 dB, our scheme took 2.3596 s, while the CGI and CSCGI schemes took 24.3536 s and 124.6102 s, respectively. Therefore, our scheme (at a sampling ratio of $6.25\%$) achieved the same performance as the CGI and CSCGI schemes (at a sampling ratio of $100\%$) while greatly reducing the time consumption.
We also present the corresponding standard deviations of the SSIM, PSNR, and RMSE. Although the standard deviations of the PSNR and RMSE obtained by our scheme were close to those of the CGI and CSCGI schemes, the standard deviations of the SSIM obtained by our scheme were significantly smaller than those of the CGI and CSCGI schemes, especially when the sampling ratio $\beta$ was relatively large.

4. Conclusions

We proposed using a cGAN to improve the quality of images produced with CGI. We analyzed the performance of the proposed ghost imaging method under different sampling ratio conditions and compared the results with those of the conventional CGI and CSCGI schemes. Our observations suggested that the proposed method performed much better than the other two schemes, especially at low sampling ratios, indicating that our scheme offers strong noise robustness and a fast computing speed. Moreover, this work made some useful progress towards exploring the practical applications of ghost imaging.

Author Contributions

Conceptualization, M.Z. and R.Z.; methodology, M.Z.; software, M.Z.; validation, M.Z. and R.Z.; formal analysis, M.Z.; investigation, M.Z.; resources, R.Z.; data curation, M.Z.; writing—original draft preparation, M.Z.; writing—review and editing, M.Z. and R.Z.; supervision, X.Z. and R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

This work was supported by the Anhui Provincial Natural Science Foundation (2108085MA18).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pittman, T.B.; Shih, Y.H.; Strekalov, D.V.; Sergienko, A.V. Optical imaging by means of two-photon quantum entanglement. Phys. Rev. A 1995, 52, R3429–R3432.
  2. Shih, Y. Quantum Imaging. IEEE J. Sel. Top. Quantum Electron. 2007, 13, 1016–1030.
  3. Shapiro, J.H.; Boyd, R.W. The physics of ghost imaging. Quantum Inf. Process. 2012, 11, 949–993.
  4. Gatti, A.; Brambilla, E.; Bache, M.; Lugiato, L. Correlated imaging, quantum and classical. Phys. Rev. A 2003, 70, 235–238.
  5. Cao, D.Z.; Xiong, J.; Wang, K. Geometrical optics in correlated imaging systems. Phys. Rev. A 2005, 71, 13801.
  6. Zhang, P.; Gong, W.; Shen, X.; Han, S. Correlated imaging through atmospheric turbulence. Phys. Rev. A 2010, 82, 033817.
  7. Shapiro, J.H. Computational Ghost Imaging. Phys. Rev. A 2008, 78, 061802.
  8. Erkmen, B.I.; Shapiro, J.H. Ghost imaging: From quantum to classical to computational. Adv. Opt. Photonics 2010, 2, 405–450.
  9. Ferri, F.; Magatti, D.; Lugiato, L.A.; Gatti, A. Differential Ghost Imaging. Phys. Rev. Lett. 2010, 104, 253603.
  10. Sun, B.; Welsh, S.S.; Edgar, M.P.; Shapiro, J.H.; Padgett, M.J. Normalized ghost imaging. Opt. Express 2012, 20, 16892–16901.
  11. Luo, K.H.; Huang, B.Q.; Zheng, W.M.; Wu, L.A. Nonlocal Imaging by Conditional Averaging of Random Reference Measurements. Chin. Phys. Lett. 2012, 29, 074216.
  12. Gong, W.; Han, S. Correlated imaging in scattering media. Opt. Lett. 2011, 36, 394–396.
  13. Yang, X.; Zhang, Y.; Xu, L.; Yang, C.H.; Wang, Q.; Liu, Y.-H.; Zhao, Y. Increasing the range accuracy of three-dimensional ghost imaging ladar using optimum slicing number method. Chin. Phys. B 2015, 24, 124202.
  14. Ryczkowski, P.; Barbier, M.; Friberg, A.T.; Dudley, J.M.; Genty, G. Ghost imaging in the time domain. Nat. Photonics 2016, 10, 167–170.
  15. Chen, X.; Jin, M.; Chen, H.; Wang, Y.; Qiu, P.; Cui, X.; Sun, B.; Tian, P. Computational temporal ghost imaging for long-distance underwater wireless optical communication. Opt. Lett. 2021, 46, 1938–1941.
  16. Zhang, L.; Wang, Y.; Ye, H.; Xu, R.; Kang, Y.; Zhang, Z.; Zhang, D.; Wang, K. Camouflaged encryption mechanism based on sparse decomposition of principal component orthogonal basis and ghost imaging. Opt. Eng. 2021, 60, 013110.
  17. Han, J.; Sun, L.; Lian, B.; Tang, Y. A deterministic matrix design method based on the difference set modulo subgroup for computational ghost imaging. IEEE Access 2021, 10, 66601–66610.
  18. Zhao, H.; Wang, X.; Gao, C.; Yu, Z.; Wang, S.; Gou, L.; Yao, Z. Second-order cumulants ghost imaging. Chin. Opt. Lett. 2022, 20, 112602.
  19. Bai, X.; Li, J.; Yu, Z.; Yang, Z.; Wang, Y.; Chen, X.; Yuan, S.; Zhou, X. Real single-channel color image encryption method based on computational ghost imaging. Laser Phys. Lett. 2022, 19, 125204.
  20. Gao, Z.; Cheng, X.; Yue, J.; Hao, Q. Extendible ghost imaging with high reconstruction quality in strong scattering medium. Opt. Express 2022, 30, 45759–45775.
  21. Lin, L.X.; Cao, J.; Zhou, D.; Hao, Q. Scattering medium-robust computational ghost imaging with random superimposed-speckle patterns. Opt. Commun. 2023, 529, 129083.
  22. Cheng, J. Ghost imaging through turbulent atmosphere. Opt. Express 2009, 17, 7916–7921.
  23. Gao, Z.; Cheng, X.; Chen, K.; Wang, A.; Hao, Q. Computational Ghost Imaging in Scattering Media Using Simulation-Based Deep Learning. IEEE Photonics J. 2020, 12, 1–15.
  24. Zhao, C.; Gong, W.; Chen, M.; Li, E.; Wang, H.; Xu, W.; Han, S. Ghost imaging lidar via sparsity constraints. Appl. Phys. Lett. 2012, 101, 141123.
  25. Chen, M.; Li, E.; Gong, W.; Bo, Z.; Xu, X.; Zhao, C.; Shen, X.; Xu, W.; Han, S. Ghost imaging lidar via sparsity constraints in real atmosphere. Opt. Photonic J. 2013, 3, 83–85.
  26. Hardy, N.D.; Shapiro, J.H. Computational ghost imaging versus imaging laser radar for three-dimensional imaging. Phys. Rev. A 2013, 87, 023820.
  27. Edgar, M.P.; Sun, B.; Bowman, R.; Welsh, S.S.; Padgett, M.J. 3D Computational Ghost Imaging. Int. Soc. Opt. Photonics 2013, 8899, 889902.
  28. Zhang, H.; Cao, J.; Zhou, D.; Cui, H.; Cheng, Y.; Hao, Q. Three-dimensional computational ghost imaging using a dynamic virtual projection unit generated by Risley prisms. Opt. Express 2022, 30, 39152–39161.
  29. Ceddia, D.; Paganin, D.M. On Random-Matrix Bases, Ghost Imaging and X-ray Phase Contrast Computational Ghost Imaging. Phys. Rev. A 2018, 97, 062119.
  30. Smith, T.A.; Shih, Y.; Wang, Z.; Li, X.; Adams, B.; Demarteau, M.; Wagner, R.; Xi, J.; Xia, L.; Zhu, R.Y. From optical to X-ray ghost imaging. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2019, 935, 173–177.
  31. Yu, H.; Lu, R.; Han, S.; Xie, H.; Du, G.; Xiao, T.; Zhu, D. Fourier-Transform Ghost Imaging with Hard X Rays. Phys. Rev. Lett. 2016, 117, 113901.
  32. Mizutani, Y.; Shibuya, K.; Iwata, T.; Takaya, Y. Fluorescence microscope by using computational ghost imaging. MATEC Web Conf. 2015, 32, 05001.
  33. Yuan, S.; Wang, L.; Liu, X.; Zhou, X. Forgery attack on optical encryption based on computational ghost imaging. Opt. Lett. 2020, 45, 3917–3920.
  34. Totero Gongora, J.S.; Olivieri, L.; Peters, L.; Tunesi, J.; Cecconi, V.; Cutrona, A.; Tucker, R.; Kumar, V.; Pasquazi, A.; Peccianti, M. Route to intelligent imaging reconstruction via terahertz nonlinear ghost imaging. Micromachines 2020, 11, 521.
  35. Leibov, L.; Ismagilov, A.; Zalipaev, V.; Nasedkin, B.; Grachev, Y.; Petrov, N.; Tcypkin, A. Speckle patterns formed by broadband terahertz radiation and their applications for ghost imaging. Sci. Rep. 2021, 11, 20071.
  36. Ismagilov, A.; Lappo-Danilevskaya, A.; Grachev, Y.; Nasedkin, B.; Zalipaev, V.; Petrov, N.V.; Tcypkin, A. Ghost imaging via spectral multiplexing in the broadband terahertz range. J. Opt. Soc. Am. B 2022, 39, 2335–2340.
  37. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
  38. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  39. Katz, O.; Bromberg, Y.; Silberberg, Y. Compressive ghost imaging. Appl. Phys. Lett. 2009, 95, 131110.
  40. Katkovnik, V.; Astola, J. Compressive sensing computational ghost imaging. J. Opt. Soc. Am. A 2012, 29, 1556–1567.
  41. Du, J.; Gong, W.; Han, S. The influence of sparsity property of images on ghost imaging with thermal light. Opt. Lett. 2012, 37, 1067–1069.
  42. Gong, W.; Han, S. Experimental investigation of the quality of lensless super-resolution ghost imaging via sparsity constraints. Phys. Lett. A 2012, 376, 1519–1522.
  43. Chen, J.; Gong, W.; Han, S. Sub-Rayleigh ghost imaging via sparsity constraints based on a digital micro-mirror device. Phys. Lett. A 2013, 377, 1844–1847.
  44. Sinha, A.; Lee, J.; Li, S.; Barbastathis, G. Lensless computational imaging through deep learning. Optica 2017, 4, 1117–1125.
  45. Lyu, M.; Wang, W.; Wang, H.; Wang, H.; Li, G.; Chen, N.; Situ, G. Deep-learning-based ghost imaging. Sci. Rep. 2017, 7, 17865.
  46. Shimobaba, T.; Endo, Y.; Nishitsuji, T.; Takahashi, T.; Nagahama, Y.; Hasegawa, S.; Sano, M.; Hirayama, R.; Kakue, T.; Shiraki, A. Computational ghost imaging using deep learning. Opt. Commun. 2017, 413, 147–151.
  47. He, Y.; Wang, G.; Dong, G.; Zhu, S.; Chen, H.; Zhang, A.; Xu, Z. Ghost Imaging Based on Deep Learning. Sci. Rep. 2018, 8, 6469.
  48. Zhang, C.; Zhou, J.; Tang, J.; Wu, F.; Cheng, H.; Wei, S. Deep unfolding for singular value decomposition compressed ghost imaging. Appl. Phys. B 2022, 128, 185.
  49. Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784.
  50. Isola, P.; Zhu, J.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976.
  51. Lecun, Y.; Bottou, L. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  52. Bromberg, Y.; Katz, O.; Silberberg, Y. Ghost imaging with a single detector. Phys. Rev. A 2009, 79, 053840.
  53. Katkovnik, V.; Astola, J.; Egiazarian, K. Discrete diffraction transform for propagation, reconstruction, and design of wavefield distributions. Appl. Opt. 2008, 47, 3481–3493.
  54. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241.
  55. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Diagrammatic sketch of computational ghost imaging.
Figure 2. Schematic illustration of the training pipeline of deep learning ghost imaging.
Figure 3. Generator architecture: the number of channels is provided at the bottom of each box, and the size of the image is marked on the right side of each box.
Figure 4. Comparison of simulation results from CGI, CSCGI, and our scheme (labeled as OURS) at different values of $\beta$.
Figure 5. Comparison of simulation results from CGI, CSCGI, and our scheme (labeled as OURS) at a sampling ratio of 1.56%.
Figure 6. Quantitative evaluation of CGI, CSCGI, and our scheme (labeled as OURS) in the simulation with a sampling ratio of $\beta = M/(W \cdot H)$.