Smart Pixels and Imaging

A special issue of Photonics (ISSN 2304-6732). This special issue belongs to the section "Lasers, Light Sources and Sensors".

Deadline for manuscript submissions: closed (15 December 2021) | Viewed by 50048

Special Issue Editors


Guest Editor
Department of Opto-Electronic Engineering, Beihang University, Beijing 100191, China
Interests: imaging; single-pixel imaging; single-photon imaging; ultrafast imaging; quantum imaging

Guest Editor
Hamamatsu Photonics Europe and Institute of Microengineering, EPFL, Switzerland
Interests: semiconductor photosensing; 3D time-of-flight imaging; optical measurement techniques; medical and biological imaging methods

Special Issue Information

Dear Colleagues,

Semiconductor technology is progressing at a relentless pace, providing image sensors, and each of their pixels, with an increasing amount of custom analog and digital functionality. The growing experience with such photosensor functionality has led to the development of a large variety of modular building blocks for smart pixels and high-performance image-sensing solutions. Examples include in-pixel amplifiers and avalanche-effect pixels for single-photon resolution at room temperature (including Quanta pixels), non-linear pixel responses for high-dynamic-range imagers reaching a dynamic range of 200 dB, lock-in pixels for optical time-of-flight range cameras with sub-millimeter distance resolution, high-speed pixels and sensors for image acquisition at 100 million frames per second, and OCT imagers with in-pixel demodulation circuits for miniaturized, real-time optical coherence tomography 3D imaging systems.

These smart-pixel capabilities open the door to new high-performance photonic microsystems, either by implementing known optical measurement techniques in a more efficient way, or by realizing novel photonic sensing approaches, whose realization requires the availability of unconventional pixel and image sensing functionality at the limits imposed by physics.

Prof. Dr. Ming-Jie Sun
Prof. Dr. Peter Seitz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Photonics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Smart pixels
  • Quanta pixels
  • Pixel functionality
  • High-performance pixel architectures
  • Smart image sensors
  • Seeing chips
  • Single-chip vision systems
  • Optical systems with smart imagers

Published Papers (20 papers)


Research

12 pages, 10168 KiB  
Communication
A Single-Pixel Imaging Scheme with Obstacle Detection
by Peiming Li, Haixiao Zhao, Wenjie Jiang, Zexin Zhang and Baoqing Sun
Photonics 2022, 9(4), 253; https://doi.org/10.3390/photonics9040253 - 11 Apr 2022
Viewed by 1815
Abstract
Single-pixel imaging (SPI) utilizes the second-order correlation between a structured illumination light field and the readings of a single-pixel detector to form images. As the single-pixel detector provides no spatial resolution, a structured illumination light field generated by a device such as a spatial light modulator takes over the role of the array camera in retrieving pixel-wise spatial information. Due to its unique imaging modality, SPI has certain advantages. Meanwhile, its counterintuitive configuration and its reciprocity relation to traditional array cameras have been studied to understand its fundamental principle. According to previous studies, the non-spatial detection property makes it possible for SPI to resist scattering in the detection part. In this work, we study the influence of an obstacle aperture in the detection part of SPI. We notice that such an obstacle aperture can restrict the field of view (FOV) of SPI, a restriction which can be diminished by a scattering process. We investigate these properties with experimental results and analysis based on geometrical optics. We believe that our study will be helpful in understanding the counterintuitive configuration of SPI and its reciprocity to traditional imaging.
(This article belongs to the Special Issue Smart Pixels and Imaging)
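As an illustration of the second-order correlation principle this abstract describes (a generic sketch, not the authors' obstacle-aperture scheme; the toy scene and pattern count are invented for the example):

    import numpy as np

    def spi_reconstruct(patterns, signals):
        # Differential second-order correlation <(S - <S>) P>:
        # patterns: (M, H, W) illumination patterns, signals: (M,) readings.
        s = signals - signals.mean()
        return np.tensordot(s, patterns, axes=(0, 0)) / len(signals)

    # Toy usage with a simulated scene and random patterns.
    rng = np.random.default_rng(0)
    scene = np.zeros((32, 32)); scene[8:24, 8:24] = 1.0
    patterns = rng.random((4000, 32, 32))
    signals = patterns.reshape(4000, -1) @ scene.ravel()  # bucket values
    image = spi_reconstruct(patterns, signals)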

17 pages, 7309 KiB  
Article
A Full-Aperture Image Synthesis Method for the Rotating Rectangular Aperture System Using Fourier Spectrum Restoration
by Guomian Lv, Hao Xu, Huajun Feng, Zhihai Xu, Hao Zhou, Qi Li and Yueting Chen
Photonics 2021, 8(11), 522; https://doi.org/10.3390/photonics8110522 - 22 Nov 2021
Cited by 5 | Viewed by 2004
Abstract
The novel rotating rectangular aperture (RRA) system provides a good solution for space-based, large-aperture, high-resolution imaging tasks. Its imaging quality depends largely on the image synthesis algorithm, and the mainstream multi-frame deblurring approach is sophisticated and time-consuming. In this paper, we propose a novel full-aperture image synthesis algorithm for the RRA system based on Fourier spectrum restoration. First, a numerical simulation model is established to analyze the RRA system's characteristics and rapidly obtain the point spread functions (PSFs). Then, each image is used iteratively to calculate the increment size and update the final restored Fourier spectrum. Both simulation and practical experimental results show that our algorithm performs well in terms of objective evaluation and time consumption.
(This article belongs to the Special Issue Smart Pixels and Imaging)
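The abstract summarizes but does not spell out the spectrum-update rule; the sketch below shows one conventional way per-rotation spectra can be fused (a regularized, Wiener-like accumulation over the rotated-aperture frames), offered purely as an assumption-labeled illustration:

    import numpy as np

    def fuse_spectra(frames, otfs, eps=1e-3):
        # frames: blurred images g_k from each aperture rotation;
        # otfs: the matching optical transfer functions H_k (same shape).
        num = np.zeros(frames[0].shape, dtype=complex)
        den = np.zeros(frames[0].shape)
        for g, H in zip(frames, otfs):
            num += np.conj(H) * np.fft.fft2(g)  # matched-filtered spectrum
            den += np.abs(H) ** 2               # accumulated passband coverage
        return np.real(np.fft.ifft2(num / (den + eps)))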

12 pages, 28711 KiB  
Communication
A Metasurface Beam Combiner Based on the Control of Angular Response
by Zhihao Liu, Weibin Feng, Yong Long, Songming Guo, Haowen Liang, Zhiren Qiu, Xiao Fu and Juntao Li
Photonics 2021, 8(11), 489; https://doi.org/10.3390/photonics8110489 - 02 Nov 2021
Cited by 6 | Viewed by 2619
Abstract
Beam combiners are widely used in various optical applications, including optical communication and smart detection; they spatially overlap multiple input beams and produce an output beam with higher intensity, multiple wavelengths, a coherent phase, etc. Since conventional beam combiners consist of various optical components whose working principles depend on the properties of the incident light, they are usually bulky and place certain restrictions on the incident light. In recent years, metasurfaces have received much attention and become a rapidly developing research field. Their novel mechanisms and flexible structural design provide a promising way to realize miniaturized and integrated components in optical systems. In this paper, we first study the ability of metasurfaces to manipulate the incident wavefront, and then theoretically propose a metasurface beam combiner that generates an extraordinary refracted beam based on the principle of the phase-gradient metasurface. This metasurface combines two monochromatic beams incident at different angles with identical polarization but arbitrary amplitudes and initial phases. The combining efficiency, defined as the ratio of the power in the combining direction to the total incident power, is 42.4% at the working wavelength of 980 nm. The simulated results indicate that the proposed method is able to simplify the design of optical combiners, making them miniaturized and integrated for smart optical systems.
(This article belongs to the Special Issue Smart Pixels and Imaging)
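The extraordinary refraction of a phase-gradient metasurface obeys the generalized grating equation sin θ_out = sin θ_in + mλ/Λ, where Λ is the supercell period. A small sketch of how two beams at different incidence angles can be steered into a common direction (the angles and period below are illustrative, not the paper's design):

    import numpy as np

    lam = 980e-9  # working wavelength from the abstract, in meters

    def deflected_angle(theta_in_deg, period, order=1, wavelength=lam):
        # Generalized grating equation; returns NaN for evanescent orders.
        s = np.sin(np.radians(theta_in_deg)) + order * wavelength / period
        return np.degrees(np.arcsin(s))

    # Steer a beam incident at -10 deg into 0 deg via its +1 order, while
    # a beam at 0 deg leaves along 0 deg in its 0th order: both combine.
    period = lam / np.sin(np.radians(10))
    print(deflected_angle(-10, period, order=1))  # ~0 deg
    print(deflected_angle(0, period, order=0))    # 0 deg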

10 pages, 8719 KiB  
Communication
Multi-View Optical Image Fusion and Reconstruction for Defogging without a Prior In-Plane
by Yuru Huang, Yikun Liu, Haishan Liu, Yuyang Shui, Guanwen Zhao, Jinhua Chu, Guohai Situ, Zhibing Li, Jianying Zhou and Haowen Liang
Photonics 2021, 8(10), 454; https://doi.org/10.3390/photonics8100454 - 18 Oct 2021
Cited by 3 | Viewed by 1955
Abstract
Image fusion and reconstruction from multiple images taken by distributed or mobile cameras need accurate calibration to avoid image mismatching. This calibration process becomes difficult in fog when no clear nearby reference is available. In this work, the fusion of multi-view images taken in fog by two cameras fixed on a moving platform is realized. The positions and aiming directions of the cameras are determined by taking a close visible object as a reference. One camera with a large field of view (FOV) is applied to acquire images of a short-distance object which is still visible in fog. This reference is then adopted for the calibration of the camera system to determine the positions and pointing directions at each viewpoint. The extrinsic parameter matrices are obtained from these data and applied to the fusion of distant images captured by another camera beyond visibility. The experimental verification was carried out in a fog chamber, and the technique is shown to be valid for image reconstruction in fog without a prior in-plane. The synthetic image, accumulated and averaged from ten-view images, shows potential applicability for fog removal. The enhancement in structural similarity is discussed and compared in detail with conventional single-view defogging techniques.
(This article belongs to the Special Issue Smart Pixels and Imaging)
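As a minimal sketch of how the calibrated poses feed the fusion step (helper names are hypothetical; the paper's warping and averaging details are omitted), the extrinsic matrices can be assembled and applied as follows:

    import numpy as np

    def extrinsic_matrix(R, t):
        # Assemble the 4x4 extrinsic [R | t; 0 1] from a calibrated pose.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def to_camera_frame(points_world, T):
        # Map Nx3 world points into a camera frame prior to image fusion.
        p = np.hstack([points_world, np.ones((len(points_world), 1))])
        return (T @ p.T).T[:, :3]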

14 pages, 5491 KiB  
Article
Target Detection Method for Low-Resolution Remote Sensing Image Based on ESRGAN and ReDet
by Yuwu Wang, Guobing Sun and Shengwei Guo
Photonics 2021, 8(10), 431; https://doi.org/10.3390/photonics8100431 - 08 Oct 2021
Cited by 5 | Viewed by 2175
Abstract
With the widespread use of remote sensing images, low-resolution target detection in remote sensing images has become a hot research topic in the field of computer vision. In this paper, we propose a Target Detection on Super-Resolution Reconstruction (TDoSR) method to solve the problem of low target recognition rates in low-resolution remote sensing images under foggy conditions. The TDoSR method uses the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) to perform defogging and super-resolution reconstruction of foggy low-resolution remote sensing images. In the target detection part, the Rotation Equivariant Detector (ReDet) algorithm, which has a higher recognition rate at this stage, is used to identify and classify various types of targets. A large number of experiments carried out on the remote sensing image dataset DOTA-v1.5 suggest that the proposed method achieves good results in the target detection of low-resolution foggy remote sensing images. The principal result is that the recognition rate of the TDoSR method increases by roughly 20% compared with detection on the original low-resolution foggy images.
(This article belongs to the Special Issue Smart Pixels and Imaging)

16 pages, 3249 KiB  
Article
SP-ILC: Concurrent Single-Pixel Imaging, Object Location, and Classification by Deep Learning
by Zhe Yang, Yu-Ming Bai, Li-Da Sun, Ke-Xin Huang, Jun Liu, Dong Ruan and Jun-Lin Li
Photonics 2021, 8(9), 400; https://doi.org/10.3390/photonics8090400 - 18 Sep 2021
Cited by 9 | Viewed by 2405
Abstract
We propose a concurrent single-pixel imaging, object location, and classification scheme based on deep learning (SP-ILC). We used multitask learning, developed a new loss function, and created a dataset suitable for this project. The dataset consists of scenes that contain different numbers of possibly overlapping objects of various sizes. The results show that SP-ILC runs concurrent processes to locate objects in a scene with a high degree of precision, to produce high-quality single-pixel images of the objects, and to accurately classify objects, all at a low sampling rate. SP-ILC has potential for effective use in remote sensing, medical diagnosis and treatment, security, and autonomous vehicle control.
(This article belongs to the Special Issue Smart Pixels and Imaging)
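The abstract names a new loss function without giving it; the PyTorch fragment below shows only the generic multitask weighting such a scheme builds on, with placeholder terms and weights:

    import torch.nn.functional as F

    def multitask_loss(img_pred, img_gt, loc_pred, loc_gt, cls_logits, cls_gt,
                       w_img=1.0, w_loc=1.0, w_cls=1.0):
        # Placeholder per-task terms: imaging, box regression, classification.
        l_img = F.mse_loss(img_pred, img_gt)
        l_loc = F.smooth_l1_loss(loc_pred, loc_gt)
        l_cls = F.cross_entropy(cls_logits, cls_gt)
        return w_img * l_img + w_loc * l_loc + w_cls * l_cls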

12 pages, 5165 KiB  
Article
Aerial Projection 3D Display Based on Integral Imaging
by Wu-Xiang Zhao, Han-Le Zhang, Qing-Lin Ji, Huan Deng and Da-Hai Li
Photonics 2021, 8(9), 381; https://doi.org/10.3390/photonics8090381 - 09 Sep 2021
Cited by 3 | Viewed by 2800
Abstract
We propose an aerial projection 3D display based on integral imaging. It is composed of a projector, a lens-array holographic optical element (HOE), and two parabolic mirrors. The lens-array HOE is a diffraction grating made by the volume holography technique. It can be produced on a thin glass plate, and it has the optical properties of a lens array when the Bragg condition is satisfied. When the display beams of the element image array (EIA) are projected onto the lens-array HOE, 3D images can be reconstructed. The two parabolic mirrors project the 3D images into the air, while Bragg-unmatched light simply passes through the lens-array HOE. Therefore, the aerial projection 3D images appear to be imaged in the air without any medium. In the experiment, a BenQ projector with a resolution of 1600 × 1200 was used for the projection of 3D images. The diameter and the height of each parabolic mirror are 150 mm and 25 mm, respectively, and the inner diameter of the parabolic mirror is 40 mm. The 3D images were projected in the air, and the experimental results prove the correctness of our display system.
(This article belongs to the Special Issue Smart Pixels and Imaging)

19 pages, 5851 KiB  
Article
Image Restoration Based on End-to-End Unrolled Network
by Xiaoping Tao, Hao Zhou and Yueting Chen
Photonics 2021, 8(9), 376; https://doi.org/10.3390/photonics8090376 - 08 Sep 2021
Cited by 5 | Viewed by 2456
Abstract
Recent studies on image restoration (IR) methods under unrolled optimization frameworks have shown that deep convolutional neural networks (DCNNs) can be implicitly used as priors to solve inverse problems. Due to the ill-conditioned nature of the inverse problem, the selection of prior knowledge is crucial for the process of IR. However, the existing methods use a fixed DCNN in each iteration and so cannot fully adapt to the image characteristics at each iteration stage. In this paper, we combine deep learning with traditional optimization and propose an end-to-end unrolled network based on deep priors. The entire network contains several iterations, each composed of an analytic solution update and a small multiscale deep denoiser network. In particular, we use different denoiser networks at different stages to improve adaptability. Compared with a fixed DCNN, this greatly reduces the number of computations for equal total parameters and the same number of iterations, although the practical runtime gains are not as significant as the FLOP count indicates. The experimental results of our method on three IR tasks, including denoising, deblurring, and lensless imaging, demonstrate that our proposed method achieves state-of-the-art performance in terms of both visual effects and quantitative evaluations.
(This article belongs to the Special Issue Smart Pixels and Imaging)
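A hedged PyTorch sketch of the unrolled pattern described here: a generic gradient step stands in for the paper's analytic solution update, the per-stage denoisers are deliberately small, and the operators A/AT as well as the layer sizes are assumptions of the example:

    import torch.nn as nn

    class UnrolledIR(nn.Module):
        def __init__(self, stages=5, step=0.5):
            super().__init__()
            self.step = step
            # A *different* small denoiser for each iteration stage.
            self.denoisers = nn.ModuleList(
                nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(32, 1, 3, padding=1))
                for _ in range(stages))

        def forward(self, y, A, AT):
            # y: measurements; A / AT: forward operator and its adjoint.
            x = AT(y)
            for denoiser in self.denoisers:
                x = x - self.step * AT(A(x) - y)  # data-fidelity update
                x = denoiser(x)                   # stage-specific deep prior
            return x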

9 pages, 2795 KiB  
Communication
Resolution Enhancement in Coherent Diffraction Imaging Using High Dynamic Range Image
by Yuanyuan Liu, Qingwen Liu, Shuangxiang Zhao, Wenchen Sun, Bingxin Xu and Zuyuan He
Photonics 2021, 8(9), 370; https://doi.org/10.3390/photonics8090370 - 02 Sep 2021
Cited by 3 | Viewed by 1995
Abstract
In a coherent diffraction imaging (CDI) system, the information of the sample is retrieved from the diffraction patterns recorded by the image sensor via multiple iterations. The limited dynamic range of the image sensor restricts the resolution of the reconstructed sample information. To alleviate this problem, high dynamic range imaging technology is adopted to increase the signal-to-noise ratio of the diffraction patterns. A sequence of raw diffraction images with different exposure times is recorded by the image sensor. They are then fused to generate a high-quality diffraction pattern based on the response function of the image sensor. With the fused diffraction patterns, the resolution of coherent diffraction imaging can be effectively improved. Experiments on a USAF resolution card were carried out to verify the effectiveness of the proposed method, in which the spatial resolution was improved by a factor of 1.8 using the high dynamic range imaging technology.
(This article belongs to the Special Issue Smart Pixels and Imaging)
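A minimal sketch of such exposure fusion, assuming a linear sensor response (the paper instead uses the measured response function of the sensor) and illustrative saturation/dark thresholds:

    import numpy as np

    def fuse_hdr(raws, exposures, sat=0.95, dark=0.02):
        # raws: patterns normalized to [0, 1]; exposures: times in seconds.
        num = np.zeros(raws[0].shape)
        den = np.zeros(raws[0].shape)
        for img, t in zip(raws, exposures):
            w = ((img > dark) & (img < sat)).astype(float)  # well-exposed mask
            num += w * img / t   # per-pixel radiance estimate
            den += w
        return num / np.maximum(den, 1e-12)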

23 pages, 57786 KiB  
Article
Phase-Shifting Projected Fringe Profilometry Using Binary-Encoded Patterns
by Nai-Jen Cheng and Wei-Hung Su
Photonics 2021, 8(9), 362; https://doi.org/10.3390/photonics8090362 - 29 Aug 2021
Cited by 8 | Viewed by 2198
Abstract
A phase unwrapping method for phase-shifting projected fringe profilometry is presented. It does not require additional projections to identify the fringe orders; the pattern used for phase extraction can be used for phase unwrapping directly. By spatially encoding the fringe patterns used to perform the phase-shifting technique with binary contrasts, fringe orders can be discerned. For spatially isolated objects or surfaces with large depth discontinuities, unwrapping can be performed without ambiguity. Even when the surface color or reflectivity varies periodically with position, the method distinguishes the fringe order very well.
(This article belongs to the Special Issue Smart Pixels and Imaging)
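For reference, the standard four-step relations underlying phase-shifting profilometry, sketched in Python; the binary encoding that supplies the fringe orders k is the paper's contribution and is not reproduced here:

    import numpy as np

    def wrapped_phase(I1, I2, I3, I4):
        # Four-step phase shifting with shifts 0, pi/2, pi, 3*pi/2:
        # I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi).
        return np.arctan2(I4 - I2, I1 - I3)

    def absolute_phase(phi_wrapped, k):
        # k: per-pixel fringe orders, here assumed recovered from the
        # binary-encoded patterns described in the abstract.
        return phi_wrapped + 2.0 * np.pi * k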

8 pages, 3166 KiB  
Communication
Augmented Reality Vector Light Field Display with Large Viewing Distance Based on Pixelated Multilevel Blazed Gratings
by Jiacheng Shi, Jianyu Hua, Fengbin Zhou, Min Yang and Wen Qiao
Photonics 2021, 8(8), 337; https://doi.org/10.3390/photonics8080337 - 16 Aug 2021
Cited by 14 | Viewed by 2959
Abstract
Glasses-free augmented reality (AR) 3D displays have attracted great interest for their ability to merge virtual 3D objects with real scenes naturally, without the aid of any wearable devices. Here we propose an AR vector light field display based on a view combiner and an off-the-shelf projector. The view combiner is sparsely covered with pixelated multilevel blazed gratings (MBGs) for the projection of perspective virtual images. Multi-order diffraction of the MBG is designed to increase the viewing distance and the vertical viewing angle. In a 20-inch prototype, multiple sets of 16 horizontal views form a smooth parallax. The viewing distance of the 3D scene is larger than 5 m, the vertical viewing angle is 15.6°, and the light efficiencies of all views are larger than 53%. We demonstrate that the displayed virtual 3D scene retains natural motion parallax and high brightness while maintaining a consistent occlusion effect with natural objects. This research can be extended to applications in areas such as human–computer interaction, entertainment, education, and medical care.
(This article belongs to the Special Issue Smart Pixels and Imaging)

12 pages, 6090 KiB  
Article
Three-Dimensional Stitching of Binocular Endoscopic Images Based on Feature Points
by Changjiang Zhou, Hao Yu, Bo Yuan, Liqiang Wang and Qing Yang
Photonics 2021, 8(8), 330; https://doi.org/10.3390/photonics8080330 - 12 Aug 2021
Cited by 3 | Viewed by 2338
Abstract
Conventional algorithms for binocular endoscopic three-dimensional (3D) reconstruction suffer from shortcomings such as low accuracy, a small field of view, and loss of scale information. To address these problems, aiming at the specific scene of stomach organs, a method of 3D endoscopic image stitching based on feature points is proposed. The left and right images are acquired by moving the endoscope and are converted into point clouds by binocular matching. They are then preprocessed to compensate for errors caused by scene characteristics such as uneven illumination and weak texture. The camera pose changes are estimated by detecting and matching the feature points of adjacent left images. Finally, based on the calculated transformation matrix, point cloud registration is carried out by the iterative closest point (ICP) algorithm, and 3D dense reconstruction of the whole gastric organ is realized. The results show that the root mean square error is 2.07 mm and the endoscopic field of view is expanded by 2.20 times, increasing the observation range. Compared with conventional methods, it not only preserves the organ scale information but also makes the scene much denser, which is convenient for doctors to measure target areas, such as lesions, in 3D. These improvements will help improve the accuracy and efficiency of diagnosis.
(This article belongs to the Special Issue Smart Pixels and Imaging)
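A compact sketch of the ICP registration used in the stitching step, via nearest-neighbor pairing and the closed-form (Kabsch) alignment; the paper's preprocessing and feature-based pose initialization are omitted:

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(src, dst):
        # Pair each source point with its nearest destination point,
        # then solve the best rigid transform (R, t) in closed form.
        d = dst[cKDTree(dst).query(src)[1]]
        mu_s, mu_d = src.mean(0), d.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (d - mu_d))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, mu_d - R @ mu_s

    def icp(src, dst, iters=30):
        cur = src.copy()
        for _ in range(iters):
            R, t = icp_step(cur, dst)
            cur = cur @ R.T + t
        return cur  # src registered onto dst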

15 pages, 3783 KiB  
Article
Low-Light-Level Image Super-Resolution Reconstruction Based on a Multi-Scale Features Extraction Network
by Bowen Wang, Yan Zou, Linfei Zhang, Yan Hu, Hao Yan, Chao Zuo and Qian Chen
Photonics 2021, 8(8), 321; https://doi.org/10.3390/photonics8080321 - 10 Aug 2021
Cited by 17 | Viewed by 3120
Abstract
Wide field-of-view (FOV) and high-resolution (HR) imaging are essential to many applications where high-content image acquisition is necessary. However, due to the insufficient spatial sampling of the image detector and the trade-off between pixel size and photosensitivity, the ability of current imaging sensors to obtain high spatial resolution is limited, especially under low-light-level (LLL) imaging conditions. To solve these problems, we propose a multi-scale feature extraction (MSFE) network to realize pixel-super-resolved LLL imaging. In order to perform data fusion and information extraction on low-resolution (LR) images, the network extracts high-frequency detail information from different dimensions by combining a channel attention mechanism module and a skip connection module. In this way, the calculation of the high-frequency components can receive greater attention. Compared with other networks, the peak signal-to-noise ratio of the reconstructed image was increased by 1.67 dB. Extensions of the MSFE network are investigated for scene-based color mapping of the gray image: most of the color information could be recovered, and the similarity with the real image reached 0.728. The qualitative and quantitative experimental results show that the proposed method achieves superior performance in image fidelity and detail enhancement over the state of the art.
(This article belongs to the Special Issue Smart Pixels and Imaging)
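A generic squeeze-and-excitation style channel-attention block in PyTorch, offered under the assumption that the MSFE module follows this common pattern (the paper's exact module may differ):

    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),  # squeeze to per-channel statistics
                nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
                nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

        def forward(self, x):
            # Reweight channels so informative (e.g., high-frequency)
            # feature maps receive greater attention.
            return x * self.gate(x)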

12 pages, 6794 KiB  
Article
Efficient Fourier Single-Pixel Imaging with Gaussian Random Sampling
by Ziheng Qiu, Xinyi Guo, Tian’ao Lu, Pan Qi, Zibang Zhang and Jingang Zhong
Photonics 2021, 8(8), 319; https://doi.org/10.3390/photonics8080319 - 09 Aug 2021
Cited by 12 | Viewed by 3176
Abstract
Fourier single-pixel imaging (FSI) is a branch of single-pixel imaging techniques. It allows any image to be reconstructed by acquiring its Fourier spectrum with a single-pixel detector, using Fourier basis patterns for structured illumination or structured detection. The spatial resolution of the reconstructed image depends mainly on the number of Fourier coefficients sampled, and the reconstruction of a high-resolution image typically requires a large number of Fourier coefficients. The resulting large number of single-pixel measurements leads to a long data acquisition time, making the imaging of dynamic scenes challenging. Here we propose a new sampling strategy for FSI that allows a clear and sharp image to be reconstructed with a reduced number of measurements. The key to the proposed sampling strategy is to perform density-varying sampling in the Fourier space; more importantly, the density with respect to the importance of the Fourier coefficients follows a one-dimensional Gaussian function. The final image is reconstructed from the undersampled Fourier spectrum through compressive sensing. We experimentally demonstrate that the proposed method is able to reconstruct a sharp and clear image of 256 × 256 pixels at a sampling ratio of 10%. The proposed method enables fast single-pixel imaging and provides a new approach for efficient spatial information acquisition.
(This article belongs to the Special Issue Smart Pixels and Imaging)
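A hedged sketch of a variable-density sampling mask: the radial Gaussian profile below is one illustrative reading, whereas the paper defines the density through a one-dimensional Gaussian over the importance of the coefficients:

    import numpy as np

    def gaussian_sampling_mask(h, w, ratio=0.10, sigma=0.15, seed=0):
        # Sample centered Fourier coefficients with probability decaying
        # under a Gaussian of radial frequency; ~ratio of them are kept.
        rng = np.random.default_rng(seed)
        fy, fx = np.meshgrid(np.linspace(-0.5, 0.5, h),
                             np.linspace(-0.5, 0.5, w), indexing="ij")
        p = np.exp(-(fx**2 + fy**2) / (2 * sigma**2))
        p *= ratio * h * w / p.sum()  # rescale to the target sampling ratio
        return rng.random((h, w)) < np.minimum(p, 1.0)

    mask = gaussian_sampling_mask(256, 256)  # ~10% sampling ratio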

9 pages, 4018 KiB  
Article
Optical See-through 2D/3D Compatible Display Using Variable-Focus Lens and Multiplexed Holographic Optical Elements
by Qinglin Ji, Huan Deng, Hanle Zhang, Wenhao Jiang, Feiyan Zhong and Fengbin Rao
Photonics 2021, 8(8), 297; https://doi.org/10.3390/photonics8080297 - 27 Jul 2021
Cited by 7 | Viewed by 1992
Abstract
An optical see-through two-dimensional (2D)/three-dimensional (3D) compatible display using a variable-focus lens and multiplexed holographic optical elements (MHOE) is presented. It mainly consists of an MHOE, a variable-focus lens, and a projection display device. The customized MHOE, using the angular multiplexing technology of volumetric holographic gratings, records the scattering wavefront and the spherical wavefront array required for the 2D/3D compatible display. In particular, we propose a feasible method to switch between the 2D and 3D display modes by using a variable-focus lens in the reconstruction process. The proposed system solves the problem of bulky volume and makes the MHOE more efficient to use. Based on the requirements of the 2D and 3D displays, we calculated the liquid pumping volume of the variable-focus lens under two kinds of diopters.
(This article belongs to the Special Issue Smart Pixels and Imaging)

17 pages, 3656 KiB  
Article
Target Tracking and Ranging Based on Single Photon Detection
by Zhikang Li, Bo Liu, Huachuang Wang, Zhen Chen, Qun Zhang, Kangjian Hua and Jing Yang
Photonics 2021, 8(7), 278; https://doi.org/10.3390/photonics8070278 - 15 Jul 2021
Cited by 5 | Viewed by 2243
Abstract
In order to achieve non-cooperative target tracking and ranging under weak echo signal conditions, this paper presents a real-time acquisition, pointing, tracking (APT), and ranging (APTR) lidar system based on single-photon detection. With this system, an active target APT mechanism based on a single-photon detector is proposed. The target tracking and ranging strategy and the simulation of target APT are presented. Laboratory experiments show that the system performs well in acquiring, pointing at, and ranging a static target, and in tracking a dynamic target (angular velocity around 3 mrad/s), under the condition of extremely weak echo signals (a dozen photons). Meanwhile, further theoretical analysis proves that the mechanism has stronger tracking and detection ability at long distances: it can actively track a target with a lateral velocity of hundreds of meters per second at a distance of about one hundred kilometers. This means that it is capable of fast, long-distance, non-cooperative target tracking and ranging using only a single-point single-photon detector.
(This article belongs to the Special Issue Smart Pixels and Imaging)

11 pages, 6967 KiB  
Article
Efficient and Noise Robust Photon-Counting Imaging with First Signal Photon Unit Method
by Kangjian Hua, Bo Liu, Zhen Chen, Liang Fang and Huachuang Wang
Photonics 2021, 8(6), 229; https://doi.org/10.3390/photonics8060229 - 19 Jun 2021
Cited by 10 | Viewed by 2273
Abstract
Efficient photon-counting imaging at low signal photon levels is challenging, especially when noise is intensive. In this paper, we report a first signal photon unit (FSPU) method to rapidly reconstruct depth images from sparse signal photon counts with strong noise robustness. The method consists of an acquisition strategy and a reconstruction strategy. The different statistical properties of signal and noise are exploited to quickly distinguish the signal unit during acquisition. Three steps, including maximum likelihood estimation (MLE), anomaly censorship, and total variation (TV) regularization, are implemented to recover high-quality images. Simulations demonstrate that the method performs much better than traditional photon-counting methods such as the peak and cross-correlation methods, and it also outperforms the state-of-the-art unmixing method. In addition, it reconstructs much clearer images than the first photon imaging (FPI) method when noise is severe. An experiment with our photon-counting LIDAR system indicates that our method has advantages in sparse photon-counting imaging applications, especially when the signal-to-noise ratio (SNR) is low. Without knowledge of the noise distribution, our method reconstructed the clearest depth image, with the lowest mean square error (MSE) of 0.011, even at an SNR as low as −10.85 dB.
(This article belongs to the Special Issue Smart Pixels and Imaging)
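For context, the traditional cross-correlation baseline that FSPU is compared against can be sketched as follows (the bin width, a centered instrument response, and the units are assumptions of the example):

    import numpy as np

    def cross_correlation_range(hist, irf, bin_width_s):
        # Correlate the photon-count histogram with the (centered)
        # instrument response and take the peak as the round-trip time.
        c = 3e8  # speed of light, m/s
        t = np.argmax(np.correlate(hist, irf, mode="same")) * bin_width_s
        return c * t / 2  # one-way range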

12 pages, 2273 KiB  
Article
Influence of Pixel Etching on Electrical and Electro-Optical Performances of a Ga-Free InAs/InAsSb T2SL Barrier Photodetector for Mid-Wave Infrared Imaging
by Maxime Bouschet, Ulises Zavala-Moran, Vignesh Arounassalame, Rodolphe Alchaar, Clara Bataillon, Isabelle Ribet-Mohamed, Francisco de Anda-Salazar, Jean-Philippe Perez, Nicolas Péré-Laperne and Philippe Christol
Photonics 2021, 8(6), 194; https://doi.org/10.3390/photonics8060194 - 30 May 2021
Cited by 12 | Viewed by 3226
Abstract
In this paper, the influence of etching depth on the dark current and photo-response of a mid-wave infrared Ga-free T2SL XBn pixel detector is investigated. Two wet chemical etching depths were considered for the fabrication of a non-passivated individual pixel detector with a cut-off wavelength of 5 µm at 150 K. This study shows the strong influence of the lateral diffusion length of a shallow-etched pixel on the electro-optical properties of the device. The lowest dark current density, on the order of 1 × 10−5 A/cm2 at 150 K, was recorded for the deep-etched detector at an operating bias of −400 mV. The corresponding quantum efficiency was measured at 60% (without anti-reflection coating) for a 3 µm thick absorbing layer. A comparison of the experimental results obtained on the two kinds of etched pixels demonstrates the need for a deep-etching process combined with efficient passivation for FPA manufacturing.
(This article belongs to the Special Issue Smart Pixels and Imaging)

13 pages, 6898 KiB  
Article
Three-Dimensional Laser Imaging with a Variable Scanning Spot and Scanning Trajectory
by Ao Yang, Jie Cao, Yang Cheng, Chuanxun Chen and Qun Hao
Photonics 2021, 8(6), 173; https://doi.org/10.3390/photonics8060173 - 21 May 2021
Cited by 4 | Viewed by 1721
Abstract
Traditional lidar scans the target with a fixed-size scanning spot and a fixed scanning trajectory; therefore, it can only obtain a depth image with as many pixels as there are scanning points. In order to obtain a high-resolution depth image with few scanning points, we propose a scanning and depth-image reconstruction method with a variable scanning spot and scanning trajectory. Based on the range information and the proportion of the area of each target (PAET) contained in the multiple echoes, the region with multiple echoes (RME) is selected, and a new scanning trajectory and smaller scanning spot are used to obtain a finer depth image. According to the range and PAET obtained by scanning, the RME is segmented and filled to realize super-resolution reconstruction of the depth image. Using this method, experiments on two spatially overlapping plates were carried out. By scanning the target with only forty-three points, a super-resolution depth image of the target with 160 × 160 pixels was obtained. Compared with the real depth image of the target, the accuracy of area representation (AOAR) and the structural similarity (SSIM) of the reconstructed depth image are 99.89% and 98.94%, respectively. The method proposed in this paper can effectively reduce the number of scanning points and improve the scanning efficiency of three-dimensional laser imaging systems.
(This article belongs to the Special Issue Smart Pixels and Imaging)

8 pages, 2527 KiB  
Communication
Illumination Calibration for Computational Ghost Imaging
by Song-Ming Yan, Ming-Jie Sun, Wen Chen and Li-Jing Li
Photonics 2021, 8(2), 59; https://doi.org/10.3390/photonics8020059 - 22 Feb 2021
Cited by 5 | Viewed by 2521
Abstract
We propose a fast calibration method to compensate for non-uniform illumination in computational ghost imaging. Inspired by the procedure used to calibrate pixel response differences of detector arrays in conventional digital cameras, the proposed method acquires one image of an all-white paper to determine the non-uniformity of the illumination, and uses this information to calibrate any further reconstructed images under the same illumination. The numerical and experimental results are in good agreement, and the experiments showed that the root mean square error of the reconstructed image was reduced by 79.94% after calibration.
(This article belongs to the Special Issue Smart Pixels and Imaging)
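A minimal flat-field style sketch of the described calibration, assuming the non-uniformity map is the reconstructed image of the all-white paper:

    import numpy as np

    def calibrate(recon, white_recon, eps=1e-6):
        # Normalize the white-paper reconstruction into an illumination
        # map with unit mean, then divide it out of later reconstructions.
        flat = white_recon / white_recon.mean()
        return recon / np.maximum(flat, eps)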
