Article

Effect of Finite-Sized Optical Components and Pixels on Light-Field Imaging through Correlated Light

by Gianlorenzo Massaro 1,2,*, Francesco Di Lena 2, Milena D’Angelo 1,2,† and Francesco V. Pepe 1,2,†

1 Dipartimento Interateneo di Fisica, Università degli Studi di Bari, 70126 Bari, Italy
2 Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70125 Bari, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2022, 22(7), 2778; https://doi.org/10.3390/s22072778
Submission received: 4 March 2022 / Revised: 30 March 2022 / Accepted: 1 April 2022 / Published: 5 April 2022
(This article belongs to the Special Issue Novel Sensors and Techniques in Quantum Imaging Applications)

Abstract: Diffraction-limited light-field imaging has recently been achieved by exploiting light spatial correlations measured on two high-resolution detectors. As in conventional light-field imaging, the typical operations of refocusing and 3D reconstruction are based on ray tracing in a geometrical-optics context, and are thus well defined in the ideal case, both conceptually and theoretically. However, some properties of the measured correlation function are influenced by experimental features such as the finite size of apertures, detectors, and pixels. In this work, we take into account realistic experimental conditions and analyze the resulting correlation function through theory and simulation. We also provide an expression to evaluate the pixel-limited resolution of the refocused images, as well as a strategy for eliminating artifacts introduced by the finite size of the optical elements.

1. Introduction

Light-field imaging is a rapidly developing technique that enables the simultaneous measurement of both the intensity distribution and the propagation direction of light rays coming from a three-dimensional scene of interest [1]. The availability of the angular information represents the main difference between light-field (also known as plenoptic) and conventional imaging. The richness of the data collected by a light-field device makes single-shot 3D sampling possible; to achieve the same result with a conventional camera, multiple acquisitions on a stack of different planes would be needed [2,3,4]. Its scanning-free nature makes light-field imaging one of the fastest methods for 3D reconstruction, and its applications span a large variety of fields, from photography [5,6,7,8] and microscopy [9] to real-time imaging of neuronal activity [10], wavefront sensing [11], and particle image velocimetry [12], to name a few. In its conventional form, light-field imaging is implemented by introducing an array of micro-lenses between the sensor and the imaging device (e.g., the camera lens); the role of the micro-lenses is to obtain a set of “sub-images” corresponding to different propagation directions. However, the micro-lenses impose a strong limitation on the image resolution, which cannot reach the diffraction limit and rapidly worsens with improved 3D reconstruction capabilities [3,13,14].
A recently developed approach to plenoptic imaging is based on retrieving the spatial and angular information by measuring light intensity correlations between two separate high-resolution sensors; this allows retrieving the spatial and directional information with the maximum optical resolution set by wave optics [15,16]. The technique, named correlation plenoptic imaging (CPI), does not need a micro-lens array to decouple the propagation direction from the position of the light rays on the plane of interest, since this information is intrinsically encoded within the correlation function. CPI can thus produce images with the same diffraction-limited resolution as conventional imaging devices [16]. Furthermore, the measurement of plenoptic information through correlations has proven to perform better than the conventional approach, endowing CPI with unprecedented refocusing and depth-of-field extension capabilities. CPI is a quite versatile technique, in that the idea of correlating two high-spatial-resolution detectors to obtain plenoptic information can be applied to any imaging technique, from photography to microscopy [17,18], and works, with the appropriate adaptations, with both thermal light sources and entangled photons [19,20].
Regardless of the specific experimental realization, from the theoretical viewpoint, the workflow for decoding the plenoptic information from the CPI data is unchanged: the correlation function is evaluated by modeling the propagation of the electromagnetic fields from the source to the sensors; a geometrical-optics approximation is then performed to find the relationship between the position of the rays impinging on the detectors and the coordinates on the imaged scene. The relevant parameters for the data-processing algorithms are deduced from the expression of the correlation function in the ray-optics regime. Interestingly, the variation of resolution with the axial position of the object is much slower in CPI than in conventional imaging [21] and has the peculiar property of being, to a large extent, independent of the numerical aperture of the imaging system [18]. In fact, only the resolution and the natural depth of field of the focused planes are determined by the numerical aperture of the imaging system; when planes outside of the natural depth of field are considered, the resolution of the resulting refocused image depends only on the illumination wavelength and the object size. In this article, for the first time, we outline the effects of the finite extension of both the optical components and the detectors, showing that large numerical apertures and detectors, although not influencing the resolution of the refocused images, are indeed beneficial for increasing the signal-to-noise ratio. Furthermore, the discrete nature of the measuring process (pixel size) is considered, and its effect on the measured correlation function and on the refocusing capability of CPI is investigated for the first time. In particular, we shall show that the finite pixel size has non-trivial implications on the resolution (the specific expression will be provided) and affects the noise properties of the final image.

2. Materials and Methods

CPI is based on the simultaneous measurement and subsequent correlation of the light intensity distributions $I_A(\boldsymbol{\rho}_a)$ and $I_B(\boldsymbol{\rho}_b)$, where $\boldsymbol{\rho}_a = (x_a, y_a)$ and $\boldsymbol{\rho}_b = (x_b, y_b)$ are two-dimensional (transverse) space coordinates defined on the photosensitive planes of two detectors $D_A$ and $D_B$, respectively. The correlation function is evaluated from the measured intensity distributions as:
$$\Gamma^{(2)}(\boldsymbol{\rho}_a, \boldsymbol{\rho}_b) = \langle I_A(\boldsymbol{\rho}_a)\, I_B(\boldsymbol{\rho}_b) \rangle - \kappa\, \langle I_A(\boldsymbol{\rho}_a) \rangle \langle I_B(\boldsymbol{\rho}_b) \rangle, \quad (1)$$
where the angular brackets denote the statistical ensemble average [22] and the coefficient $\kappa$ equals either 1, for thermal illumination, or 0, if the source is an emitter of entangled photon pairs [21,23]. Experimentally, an ergodic hypothesis is made on the statistical properties of the source, so as to replace the quantum state average with an operationally well-defined time average.
The dataset collected by a CPI apparatus is composed of a series of $N$ pairs of frames, one for each detector, containing samples of the intensity impinging on the detector surfaces at times $t_1, t_2, \ldots, t_N$. The $j$-th element of the series contains the 2D arrays:
$$I_I(\boldsymbol{\rho}_i) = \int_{t_j}^{t_j + \Delta t} I_I(\boldsymbol{\rho}_i, t)\, \mathrm{d}t, \quad (2)$$
where $I = A, B$ and $i = a, b$; $t$ denotes the time dependence of the intensity arriving on the detectors, and $\Delta t$ is the exposure time, supposed to be constant throughout the measurement. Due to the discrete nature of pixels, the space variable $\boldsymbol{\rho}_i$ is not continuous and must be considered as a discrete set of coordinates associated with the pixel centers. When a sufficient number of frames is collected, the discrete time average yields approximately the same result as the ensemble average of Equation (1), so that, for $\kappa = 0$,
$$\Gamma_{\mathrm{exp}}^{(2)}(\boldsymbol{\rho}_a, \boldsymbol{\rho}_b) = \frac{1}{N} \sum_{t \in \{t_1, \ldots, t_N\}} I_A(\boldsymbol{\rho}_a, t)\, I_B(\boldsymbol{\rho}_b, t) \approx \Gamma^{(2)}(\boldsymbol{\rho}_a, \boldsymbol{\rho}_b). \quad (3)$$
An analogous approximation holds for the case $\kappa = 1$ in Equation (1). Each of the $N$ terms $I_A(\boldsymbol{\rho}_a, t)\, I_B(\boldsymbol{\rho}_b, t)$ is a four-dimensional array, obtained by computing the tensor product between the matrices $I_A$ and $I_B$ corresponding to the same acquisition time. The experimentally measured correlation function is thus a 4D array as well.
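To make the estimator of Equation (3) concrete, the sketch below computes the correlation function from a stack of simultaneous frame pairs. It is a minimal illustration rather than the authors' code: the synthetic frame statistics are placeholders, and the detectors are taken as one-dimensional (as in the reduced treatment adopted below), so the output is a 2D rather than a 4D array.

```python
import numpy as np

def correlation_function(frames_a, frames_b, kappa=1):
    """Estimate Gamma^(2) of Eq. (1) from N simultaneous frame pairs
    via the discrete time average of Eq. (3).

    frames_a: (N, Pa) array, intensities on the pixels of D_A
    frames_b: (N, Pb) array, intensities on the pixels of D_B
    kappa:    1 for thermal light, 0 for entangled photon pairs
    """
    n_frames = frames_a.shape[0]
    # <I_A(x_a) I_B(x_b)>: average of the outer products of same-time frames
    g_ab = np.einsum('na,nb->ab', frames_a, frames_b) / n_frames
    # kappa <I_A(x_a)><I_B(x_b)>: outer product of the mean frames
    background = kappa * np.outer(frames_a.mean(axis=0), frames_b.mean(axis=0))
    return g_ab - background

# Synthetic pseudo-thermal frames with placeholder statistics
rng = np.random.default_rng(0)
frames_a = rng.exponential(size=(10_000, 50))   # 10,000 frames, 50 pixels on D_A
frames_b = rng.exponential(size=(10_000, 50))   # 10,000 frames, 50 pixels on D_B
gamma = correlation_function(frames_a, frames_b, kappa=1)   # 2D array (50, 50)
```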
From now on, we shall assume that the sample of interest is a two-dimensional object characterized by a polarization-independent amplitude transmissivity $A(\boldsymbol{\rho}_s)$. In this case, the theoretical correlation function reads:
$$\Gamma^{(2)}(\boldsymbol{\rho}_a, \boldsymbol{\rho}_b) = \left| \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} A(\boldsymbol{\rho}_s) \left[ A^*(\boldsymbol{\rho}_s') \right]^m \Phi(\boldsymbol{\rho}_s, \boldsymbol{\rho}_s', \boldsymbol{\rho}_a, \boldsymbol{\rho}_b)\, \mathrm{d}^2\rho_s\, \mathrm{d}^2\rho_s' \right|^2, \quad (4)$$
where $A^*$ denotes the complex conjugate of the amplitude transmissivity, and $\boldsymbol{\rho}_s$ and $\boldsymbol{\rho}_s'$ are the transverse coordinates on the object plane [17,21,23]. The integer coefficient $m$ and the function $\Phi$ depend on the particular realization of CPI. If the object is placed in only one of the two arms, as in Figure 1a, then $m = 0$. Instead, if light crosses the object before it is split towards the detectors, as in Figure 1b, then $m = 1$. The complex function $\Phi$ represents the point-spread function of CPI and depends both on the light source and on the configuration of the optical paths [17,21,23]. $\Phi$ encloses all the details of the field propagation through the optical components, as well as the coherence properties of the source [24]. The expression of $\Phi$ is analyzed extensively in Refs. [17,21,23]. For our purpose, it is sufficient to consider it as a second-order transfer function [25], describing the (back-)propagation of a correlated detection at coordinates $\boldsymbol{\rho}_a$ on $D_A$ and $\boldsymbol{\rho}_b$ on $D_B$ towards the object plane. Let us now make a few assumptions and approximations that enable simplifying Equation (4) and relating the aperture function $A$ to the coordinates on the detectors, namely, establishing a relationship between the coordinates on the two detectors and those on the object plane. We consider $\Phi$ in the paraxial approximation [25] and assume perfect mode-to-mode correlation [24] (i.e., infinitesimal transverse coherence area) on the source. Furthermore, all the apertures in the setup are assumed to be infinite (including the source transverse size), so that $\Phi$ has an analytical expression, and the two double integrals on the object plane are easily solved in the asymptotic limit of geometrical optics [21]; in this limit, the result of all integrals can be approximated by applying a stationary-phase method [26]. Since all optical propagators in the paraxial approximation involve phases that are, at most, quadratic in the integration variables, all functions turn out to depend on linear combinations of the detector variables. In fact, the correlation function reduces to:
$$\Gamma_{\mathrm{geom}}^{(2)}(\boldsymbol{\rho}_a, \boldsymbol{\rho}_b) = \left| A\big( \alpha(z)\, \boldsymbol{\rho}_a + \beta(z)\, \boldsymbol{\rho}_b \big) \right|^{2(1+m)}. \quad (5)$$
Hence, regardless of the specific CPI configuration, the correlation function in the geometrical approximation reduces to the power $1+m$ of the object intensity transmissivity (the squared modulus of the amplitude transmissivity appearing in Equation (4)), evaluated in a linear combination of $\boldsymbol{\rho}_a$ and $\boldsymbol{\rho}_b$. The coefficients $\alpha$ and $\beta$ strictly depend on the experimental realization of CPI and are functions of the axial position $z$ of the object plane. Equation (5) thus shows a point-to-point correspondence between coordinates at the two detectors and points of the object plane, which is determined only by the two coefficients $\alpha$ and $\beta$. This interpretation shows the convenience of the ray-optics limit for interpreting the measured correlation function. Two more aspects of Equation (5) are worth emphasizing: first, the coordinates $x$ and $y$ are not mixed up by defocusing; second, $\alpha$ and $\beta$ do not depend on the transverse direction, namely, they are the same for the two independent directions $x$ and $y$ on the detector planes.
To better understand the extent of the considerations above, it is worth noticing that, in the most general case, the 2D aperture function $A$ could depend on a linear combination of the four coordinates at the detectors, thus involving up to eight coefficients: four in the first argument, involving all four components on the detectors, and four in the second argument. However, Equation (5) shows that only two coefficients, $\alpha$ and $\beta$, are needed, and that the problem of refocusing can be solved along $x$ and $y$ separately.
For the sake of simplicity and ease of visualization, from now on, the second-order transfer function $\Phi$ shall be assumed to be factorable with respect to the two transverse coordinates $x$ and $y$ (i.e., $\Phi(\boldsymbol{\rho}_s, \boldsymbol{\rho}_s', \boldsymbol{\rho}_a, \boldsymbol{\rho}_b) = \Phi_x(x_s, x_s', x_a, x_b)\, \Phi_y(y_s, y_s', y_a, y_b)$) and symmetric in the two dimensions ($\Phi_x = \Phi_y$) throughout the rest of the paper, even outside of the geometrical approximation. These assumptions imply that field propagation from the source to the detectors has cylindrical symmetry around the optical axis, and that the propagators are such that integrations along orthogonal transverse directions can be solved independently. In this context, the 2D treatment leads to the same results as the complete 4D case, with the added benefit of 2D correlation functions that are easily represented as images. Moreover, factorization and symmetry are generally good approximations of the experimental conditions. Based on these assumptions, we shall drop the boldface from space variables, regard the detectors as one-dimensional photosensitive strips, and treat the object transmissivity as a 1D function.
Going back to Equation (5), we see that the correlation function is an infinitely extended collection of shifted and magnified replicas of the object aperture function. In fact, for each given coordinate $x_a$ on $D_A$, the correlation function of Equation (5) represents a shifted and magnified “copy” of the object on $D_B$; the shift amounts to $\alpha x_a$ and the magnification to $1/\beta$. An analogous result is obtained by fixing $x_b$. Another way to look at this is the following: a detail at coordinate $x_s$ in the object plane is infinitely replicated along the line $\gamma_z(x_s)$ of equation $\alpha(z)\, x_a + \beta(z)\, x_b = x_s$ in the space defined by the coordinates at the detectors, $(x_a, x_b)$. This picture enables a very intuitive understanding of the process of refocusing, as well as of the role played by the coefficients $\alpha$ and $\beta$. In fact, the line integral involved in the refocusing process:
$$\Sigma_z(x_s) = \int_{\gamma_z(x_s)} \Gamma_{\mathrm{exp}}^{(2)}(x_a, x_b)\, \mathrm{d}\ell \quad (6)$$
enables summing together all the contributions to the correlation function coming from a single point of the object. The result is proportional to the intensity transmissivity profile of the object, evaluated in the single object coordinate $x_s$. By repeating this operation over the entire object plane, i.e., for all the possible $x_s$, the whole transmission function of the object is reconstructed, namely, the image is refocused. The orientation of the lines along which the integration of Equation (6) is performed is uniquely identified by the ratio $\alpha(z)/\beta(z)$, which, in turn, depends on the position $z$ of the object along the optical axis. The dependence of $\alpha$ and $\beta$ on the axial position indicates that, when refocusing on a different object plane $z'$, the integration needs to be performed along lines having a different inclination $\arctan(\alpha(z')/\beta(z'))$ in the $(x_a, x_b)$ plane. As we shall see, the point-to-point correspondence between sample and detector points is not altered by finite optical components, but other features of the reconstructed images are, such as the field of view and the signal-to-noise ratio. In the following sections, expressions clarifying the role of apertures and detector size will be provided, as well as a strategy aimed at eliminating the artifacts they induce. The difference between the ideal case of a continuous correlation function and the experimental reality, in which only discrete values are available because of the pixel size, will also be explored and exemplified through simulations.
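As an illustration of the discrete counterpart of Equation (6), the sketch below sums a sampled correlation function along the lines $\alpha x_a + \beta x_b = x_s$. It is a minimal nearest-pixel implementation under stated assumptions ($\beta \neq 0$, uniform pixel grids); the helper name `refocus` and its interface are choices made here, not taken from the paper.

```python
import numpy as np

def refocus(gamma, xa, xb, alpha, beta, xs_grid):
    """Discrete version of Eq. (6): sum Gamma^(2)(x_a, x_b) along the lines
    alpha*x_a + beta*x_b = x_s. The result is proportional to Sigma_z(x_s),
    up to the constant line-element factor sqrt(1 + (alpha/beta)**2) * dx_a.
    Assumes beta != 0 and uniform pixel grids xa, xb."""
    sigma = np.zeros(len(xs_grid))
    for k, xs in enumerate(xs_grid):
        # points of the line gamma_z(x_s), parameterized by x_a
        xb_line = (xs - alpha * xa) / beta
        # keep only the points falling on the photosensitive area of D_B
        inside = (xb_line >= xb[0]) & (xb_line <= xb[-1])
        # nearest-pixel sampling of the correlation function along the line
        jb = np.rint((xb_line[inside] - xb[0]) / (xb[1] - xb[0])).astype(int)
        sigma[k] = gamma[np.flatnonzero(inside), jb].sum()
    return sigma
```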

3. Results

3.1. Effect of the Finite Apertures

Since $\alpha$ and $\beta$ in Equation (5) do not depend on the transverse coordinates, the finite size of apertures within the experimental setup does not influence the point-to-point correspondence between the object and the detector planes. Still, the finite size of the optical components determines both the maximum resolution of the focused image, as in conventional imaging, and the spatial extension of the correlation function in the $(x_a, x_b)$ plane. Hence, despite the fact that finite apertures do not influence the resolution curves of CPI far from the focused planes [18,21,23], large apertures are still convenient to maximize the amount of collected information, and have important consequences on the signal-to-noise ratio of the final image [27].
To better see these points, we shall assume only one limiting aperture along each of the two optical paths going from the source to the detectors; all other apertures, if any, can be disregarded. In this approximation, the function $\Phi$ can be expressed as:
$$\Phi(x_s, x_s', x_a, x_b) = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} \phi(x_s, x_s', x_l, x_l')\, P_A(x_l)\, \varphi_A(x_l, x_a)\, P_B(x_l')\, \varphi_B(x_l', x_b)\, \mathrm{d}x_l\, \mathrm{d}x_l', \quad (7)$$
with $\phi$, $\varphi_A$, and $\varphi_B$ being the functions that propagate the fields to and from the planes of the lenses, which vary with the specific CPI configuration; $P_{A,B}$ are the pupil functions of the limiting apertures in the optical paths leading towards the detectors $D_{A,B}$. By combining Equations (4) and (7), and solving all the integrals through a stationary-phase approximation (geometrical-optics limit), the correlation function can be expressed as:
$$\Gamma_{\mathrm{fin}}^{(2)}(x_a, x_b) = \Gamma_{\mathrm{geom}}^{(2)}(x_a, x_b)\, \left| P_A(a_1 x_a + a_2 x_b) \right|^2 \left| P_B(b_1 x_a + b_2 x_b) \right|^2. \quad (8)$$
Notably, in the geometrical approximation, the intensity transmission functions of the lenses (or of any other limiting aperture) appear as multiplying factors in the correlation function. The arguments of the pupils are linear combinations of the detector coordinates, with coefficients $a_{1,2}$ in $P_A$ and $b_{1,2}$ in $P_B$, as for the object transmissivity. Unlike the coefficients $\alpha$ and $\beta$ appearing in the object transmissivity, however, the new coefficients $a_{1,2}$ and $b_{1,2}$ are independent of the axial position $z$ of the object; hence, the location of the object does not modify the areas enclosed in the transmissive region of the pupils.
In summary, Equation (8) indicates that the correlation function is still populated by a set of lines, each corresponding to an object point, whose tilting is determined by the coefficients $\alpha(z)$ and $\beta(z)$, as in the case of infinite apertures (see Equation (5)); the novelty here is that these lines are limited in extension by an object-independent area defined by the finite size of the apertures.

3.2. Effect of the Finite Size of the Detectors

The limited extension of the photosensitive surface of the detectors is taken into account by imposing that the light intensity distribution at the detector planes is zero outside the photosensitive region. This amounts to substituting:
$$I_I(x_i) \rightarrow \chi_{d_i}(x_i)\, I_I(x_i) \quad (9)$$
into Equation (1), where $I = A, B$ and $i = a, b$; $d_i$ is the linear size of the photosensitive surface of the two detectors, and $\chi_{d_i}$ is the characteristic function of the photosensitive region, which equals 1 when its argument falls within a $d_i$-long segment centered on 0, and zero elsewhere. The effect of the characteristic functions of the detectors is to further reduce the area of the non-vanishing part of the correlation function, which now reads:
$$\Gamma_{\mathrm{fin}}^{(2)}(x_a, x_b) = \Gamma_{\mathrm{geom}}^{(2)}(x_a, x_b) \prod_{(J,j) \in \{(A,a),\,(B,b)\}} \chi_{d_j}(x_j)\, \left| P_J(j_1 x_a + j_2 x_b) \right|^2. \quad (10)$$
Equation (10) indicates that the correlation function is now a tilted and re-scaled version of the correlation function obtained in the geometrical approximation for infinite apertures and sensors ($\Gamma_{\mathrm{geom}}^{(2)}$, defined in Equation (5)), provided that the coordinates $(x_a, x_b)$ are both within the photosensitive areas of the detectors and that the ray corresponding to this pair of coordinates crosses the planes of the finite apertures through their transmissive regions. For all other values of the coordinates $(x_a, x_b)$, the correlation function is zero.
To offer a graphical representation of the correlation function, we shall now consider a particular CPI scheme, named CPI between arbitrary planes (CPI-AP) [23]. The peculiarity of this configuration is that two planes in the proximity of the object are imaged by the two sensors at a diffraction-limited resolution. In Figure 2a, we report the specific single-lens realization of CPI-AP that will be employed in all simulations. Here, light emitted by a two-dimensional object, such as a fluorescent sample, propagates towards a lens of focal length $f = 100\,\mathrm{mm}$; after the lens, the light beam is split towards two detectors $D_A$ and $D_B$, endowed with spatial resolution. The two lens–detector distances are such that the conjugate plane of detector $D_A$ is at a distance $z_a = 210\,\mathrm{mm}$ from the lens, with magnification $M_A = 0.91$, and the conjugate plane of detector $D_B$ is at a distance $z_b = 190\,\mathrm{mm}$, with magnification $M_B = 1.1$. The object, placed at a distance $z$ from the lens, is modeled as an infinitely extended distribution of equally spaced Gaussian apertures, with center-to-center distance equal to $500\,\mu\mathrm{m}$ and width equal to $100\,\mu\mathrm{m}$, as reported in Figure 2b. The size of the detectors is $d_a = d_b = 10\,\mathrm{mm}$.
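For reference, the object of Figure 2b can be generated with a few lines of code. This is a sketch under one stated assumption: the paper does not specify which width measure the 100 μm figure refers to, so the Gaussian standard deviation is used here.

```python
import numpy as np

def object_transmissivity(xs, spacing=0.5, width=0.1):
    """|A(x_s)|^2 of Figure 2b: an unlimited array of equally spaced Gaussian
    apertures (0.5 mm center-to-center, 0.1 mm wide); coordinates in mm.
    Assumption: the 100 um 'width' is taken as the Gaussian standard
    deviation, since the text does not specify the width measure."""
    centers = np.round(np.asarray(xs) / spacing) * spacing   # nearest center
    return np.exp(-((xs - centers) ** 2) / (2 * width ** 2))

xs = np.linspace(-5, 5, 2001)        # 10 mm of object plane
transmission = object_transmissivity(xs)
```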
In the geometrical approximation, the correlation function of the apparatus described above reads:
$$\Gamma_{\mathrm{AP}}^{(2)}(x_a, x_b) = \left| A(\alpha x_a + \beta x_b) \right|^4 \left| P_l(a x_a + b x_b) \right|^4 \chi_{d_a}(x_a)\, \chi_{d_b}(x_b), \quad (11)$$
$$\alpha = \frac{1}{M_A}\,\frac{z - z_b}{z_a - z_b}\,, \qquad \beta = \frac{1}{M_B}\,\frac{z_a - z}{z_a - z_b}\,, \qquad a = \frac{1}{M_A}\,\frac{z_b}{z_a - z_b}\,, \qquad b = \frac{1}{M_B}\,\frac{z_a}{z_a - z_b}\,, \quad (12)$$
where $A(x)$ is the aperture function of the object and $|P_l(x)|^2 = \exp(-x^2/2l^2)$ is the pupil function of the lens. The coefficients $\alpha$ and $\beta$, on which the image of the object depends, determine the tilting of the integration path required for refocusing. In this particular CPI implementation, the two limiting apertures in the optical paths (see Equation (10)) depend on the same linear combination of the detector coordinates; the coefficients $a$ and $b$ determine the non-zero area of the correlation function, defined by the lens and detector size. In Figure 3a, we report the plot of the simulated correlation function, showing the effect of the finite size of the lens aperture. The simulation has been carried out in the paraxial approximation by considering a Gaussian lens of aperture $l$, whose numerical aperture (NA) shall be estimated as $\sin(l/f)$, and a perfect point-like correlation (i.e., infinitesimal coherence area) on the source. The geometrical approximation has not been applied; we will show that, even though all the data-processing considerations discussed and demonstrated here are based solely on geometrical arguments, they work equally well outside of such approximation. The plots in Figure 3a show that, as the numerical aperture increases from 0.051 to 0.25, the extension of the non-zero portion of the correlation function, within the $(x_a, x_b)$ space defined by the detectors, greatly increases. The region of the $(x_a, x_b)$ space enclosed within the two red dashed lines represents the lens extension, which we have chosen as the portion of space for which $|P_l(x)|^2 > 0.001$. Regardless of the particular value of the threshold selected to identify the lens area, something very interesting can be inferred from the plots. Even without the geometrical-optics approximation, the finite lens aperture sets a boundary to the areas where correlations can be measured: correlations are non-zero as long as $|a x_a + b x_b| \lesssim c\, l$, where $c$ is a constant chosen to define the size of the Gaussian lens. The finite lens aperture also influences the field of view (FOV), which can be thought of as the number of slits that contribute to the measurable correlations. Since the region $|a x_a + b x_b| \leq c\, l$ is infinitely extended, each point $x_s$ of the object gives rise to a non-zero contribution to the correlation function along the segment defined on the line:
$$\alpha x_a + \beta x_b = x_s \quad (13)$$
and enclosed by the two lines of equations:
$$a x_a + b x_b = \pm\, c\, l. \quad (14)$$
Interestingly, since the matrix:
$$\begin{pmatrix} \alpha & \beta \\ a & b \end{pmatrix} \quad (15)$$
obtained from the coefficients in Equations (13) and (14) is never singular, such a segment is always well defined. However, the only object details that contribute to the measured correlation function are those associated with segments in the $(x_a, x_b)$ plane that are (at least partly) contained within the photosensitive portion of the detectors, namely, segments containing points such that $|x_a| \leq d_a/2$ and $|x_b| \leq d_b/2$. The length of the segment under consideration defines the number of “replicas” that a given object point produces in the correlation function. This gives rise to an important difference of CPI with respect to conventional imaging, where the FOV (in the absence of aberrations) is only defined by the detector size. In fact, as we shall see in the next section, the length of the segment in the space defined by the detector coordinates plays an important role in determining the successful reconstruction of the final images, particularly in terms of their signal-to-noise ratio [18]. The combined effect of the lens and detector size is to shorten the segment; at some point, the segment becomes too short for the associated object detail to be effectively reconstructed. Interestingly, given the chosen optical parameters and detector size, the extension of the lines associated with the object coordinates is almost exclusively limited by the detector size already for NA = 0.25; hence, a larger numerical aperture would yield no advantage in that regard.
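For concreteness, the coefficients of Equation (12) and the resulting line inclination can be computed as below. This is a sketch assuming the coefficient definitions as reconstructed in Equation (12) and positive magnification conventions; the helper name `coefficients` is introduced here for illustration only.

```python
import numpy as np

# CPI-AP parameters quoted in the text (single-lens realization, mm)
z_a, z_b = 210.0, 190.0   # planes conjugate to D_A and D_B
M_A, M_B = 0.91, 1.1      # magnifications on the two conjugate planes

def coefficients(z):
    """alpha(z) and beta(z) of Eq. (12), defining the line
    alpha*x_a + beta*x_b = x_s associated with the object point x_s."""
    alpha = (z - z_b) / (M_A * (z_a - z_b))
    beta = (z_a - z) / (M_B * (z_a - z_b))
    return alpha, beta

# Sanity checks on the focused planes, as stated in the text:
print(coefficients(z_a))   # (1/M_A, 0): object in focus on D_A
print(coefficients(z_b))   # (0, 1/M_B): object in focus on D_B
```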
Figure 3b shows how the position of the object along the optical axis influences the inclination of the segments mentioned above, as given by Equation (13). When the object is placed on either one of the two planes that are conjugate to the detectors ($z = z_a$ or $z = z_b$), the object details line up along the horizontal ($z = z_b$) or along the vertical ($z = z_a$), and the coefficients $(\alpha, \beta)$ are proportional to $(0, 1)$ and $(1, 0)$, respectively. This result is rather intuitive: when $z = z_a$ ($z_b$), the slits are in focus on detector $D_A$ ($D_B$), i.e., along the $x_a$- ($x_b$-) axis, independently of the specific coordinate $x_b$ ($x_a$) on detector $D_B$ ($D_A$). The segments associated with object points are tilted when the object is placed in any plane different from $z_a$ and $z_b$. In particular, the replicas of the object points are arranged along a line forming an angle $\theta$ with the $x_a$-axis such that $\tan\theta = M_B M_A\, (z - z_b)/(z - z_a)$. This can be better understood by analyzing Figure 4a, where we report the correlation function associated with an object placed at $z = 195\,\mathrm{mm}$ (Figure 3a, center).
The measurable correlation function is contained in the region given by the intersection between the area of the lens (enclosed by the blue lines) and the area defined by the two sensors (orange square). All segments corresponding to a given object point $x_s$ identify a preferred direction (red dashed line), along which the correlation function can be integrated so as to reconstruct the object without losing resolution (i.e., to refocus it). This preferred direction is at the expected $\theta = 18.4°$. Depending on the value of the coordinate $x_s$, the corresponding segment can:
1. Fall outside of the area defined by the detectors, for slits that are outside the FOV;
2. Fall only partially outside of the detector area, so that, upon reconstruction, the corresponding slits will receive a smaller amount of contributions than that available over the lens area (red area);
3. Be fully included in the detector area, thus enabling reconstruction of the corresponding object point with all possible contributions from the lens (green area).
As we shall see below, if these different conditions are not correctly accounted for, artifacts will appear in the refocused image. However, once again, the geometrical-optics approximation (Equation (10)) is an adequate tool for quantifying these effects and finding a strategy to eliminate the artifacts. In fact, by applying the refocusing process described by Equation (6), the correlation function of Equation (10) gives the product of the object aperture transmissivity with the function:
$$\Sigma_{\mathrm{corr}}(x_s) = \int_{\gamma_z(x_s)} \prod_{(J,j) \in \{(A,a),\,(B,b)\}} \chi_{d_j}(x_j)\, \left| P_J(j_1 x_a + j_2 x_b) \right|^2 \mathrm{d}\ell, \quad (16)$$
where $\gamma_z(x_s)$, defined by the equation $\alpha(z)\, x_a + \beta(z)\, x_b = x_s$, is the integration line associated with the object detail identified by the coordinate $x_s$. This gives rise to the artifacts mentioned above. However, the aperture function can still be properly reconstructed, either by dividing the refocused image of Equation (6) by the function in Equation (16), or by dividing the non-zero part of the correlation function by the integrand of Equation (16) prior to applying the refocusing algorithm. Either way, the effect of the different lengths of the segments contributing to different parts of the sample can be factored out, enabling the proper reconstruction of the object, as shown in Figure 4b. Here, the blue curve represents the infinitely extended set of Gaussian apertures composing the sample. The effect of the finiteness of the optical components is evident in the uncorrected refocused image (black curve): near the edges of the FOV, defined by the finite sensor size, the slits receive fewer contributions due to the shorter extension of the integration segments. This effect, which depends on the inclination angle of the integration line (determined by the axial position $z$), can be compensated by applying the proposed correction. The effectiveness of this correction strictly relies on how well the theoretical model leading to the expression in Equation (16) adheres to the experimental situation. In order to experimentally evaluate the correction function and calibrate the device, one needs access to the integrand of Equation (16). This can be done by measuring correlations after either removing the sample, if working in transmission, or replacing it with a mirror, if working in reflection.
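As an illustration of this correction strategy, the sketch below evaluates $\Sigma_{\mathrm{corr}}$ by applying the same line integration of Equation (6) to the known aperture mask, and divides it out of the refocused image. It reuses the hypothetical `refocus` helper sketched after Equation (6); the thresholding is an assumption made here to avoid dividing by vanishing corrections outside the field of view.

```python
import numpy as np

def corrected_refocus(gamma, mask, xa, xb, alpha, beta, xs_grid, eps=1e-6):
    """Refocus Gamma^(2) via Eq. (6) and divide out Sigma_corr of Eq. (16).

    mask: the integrand of Eq. (16), i.e. |P_A|^2 |P_B|^2 chi_da chi_db,
          sampled on the same (x_a, x_b) grid as gamma (known from theory
          or from a calibration measurement without the sample).
    """
    sigma = refocus(gamma, xa, xb, alpha, beta, xs_grid)       # Eq. (6)
    sigma_corr = refocus(mask, xa, xb, alpha, beta, xs_grid)   # Eq. (16)
    # divide only where the correction is appreciably non-zero, so that
    # noise outside the field of view is not amplified
    out = np.zeros_like(sigma)
    ok = sigma_corr > eps * sigma_corr.max()
    out[ok] = sigma[ok] / sigma_corr[ok]
    return out
```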

3.3. Effect of the Finite Size of the Pixels

From a formal point of view, the finite pixel size $\delta x_{a,b}$ on the planes of the sensors $D_A$ and $D_B$ implies that the measured 2D correlation function is described by the quantity:
$$I_I(x_i, t_j) = \int_{x_i - \delta x_i/2}^{x_i + \delta x_i/2} \int_{t_j}^{t_j + \Delta t} I_I(x, t)\, \mathrm{d}t\, \mathrm{d}x, \quad (17)$$
rather than by Equation (2). The spatial coordinates vary within the discrete sets associated with the pixel centers on the two detectors. The first obvious limitation that such discretization imposes on CPI concerns resolution, in a similar fashion as in conventional imaging. From the insight built up so far, we know that two neighboring point-like details on the object plane, placed at coordinates $x_s$ and $x_s + \delta x_s$, will originate two lines of equations $\alpha x_a + \beta x_b = x_s$ and $\alpha x_a + \beta x_b = x_s + \delta x_s$ in the $(x_a, x_b)$ space where the correlation function is defined. This plane is available as a discrete 2D space, with points placed on a regular lattice of spacing $\delta x_a$ along the horizontal direction and $\delta x_b$ along the vertical direction. The spacing between the two lines associated with the two point objects is $\delta x_s / \sqrt{\alpha^2 + \beta^2}$; when this value is comparable to the pixel size, discriminating between the two lines becomes impossible. A rough yet intuitive estimate of the minimum separation $\delta x_s$ that can be resolved is obtained by imposing that the two lines never cross the same $\delta x_a \times \delta x_b$ cell of the $(x_a, x_b)$ discrete space. This is expected to depend both on the inclination angle $\theta = \arctan(\alpha/\beta)$ of the lines and on the size of the pixels of the two detectors. Geometrical considerations (reported in Appendix A) lead to the following condition for the minimum separation $\delta x_s(z)$:
$$\delta x_s \geq \delta x_a\, |\alpha| + \delta x_b\, |\beta|. \quad (18)$$
This expression reduces to the one known for conventional imaging when the object is in focus ($z = z_{a,b}$), that is, when $(\alpha, \beta) = (1/M_A, 0)$ or $(\alpha, \beta) = (0, 1/M_B)$, where $M_A$ and $M_B$ are the magnifications on the conjugate planes of detectors $D_A$ and $D_B$. Unlike in conventional imaging, however, Equation (18) implies a dependence of the resolution on the axial position $z$ of the unfocused object. In particular, the coefficients $\alpha$ and $\beta$ can diverge at large distances from the focused planes, as in the example chosen for the simulations. As in conventional imaging, both the pixel size and the finiteness of the point-spread function impose limits on the maximum achievable resolution. A plot of the pixel-limited resolution, normalized to the pixel size (supposed to be equal on the two detectors), is shown in Figure 5a.
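Equation (18) translates into a one-line resolution estimate. The sketch below reuses the hypothetical `coefficients` helper defined above; the sanity check on a focused plane reproduces the conventional pixel limit.

```python
def pixel_limited_resolution(z, dxa, dxb):
    """Minimum resolvable object separation of Eq. (18):
    delta_x_s = dxa*|alpha(z)| + dxb*|beta(z)| (lengths in mm)."""
    alpha, beta = coefficients(z)   # helper sketched above
    return dxa * abs(alpha) + dxb * abs(beta)

# On a conjugate plane this reduces to the conventional pixel limit:
# at z = z_a, |alpha| = 1/M_A and beta = 0, so delta_x_s = dxa / M_A.
print(pixel_limited_resolution(210.0, 0.2, 0.2))   # ~0.22 mm for 200 um pixels
```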
Figure 5b displays the effect of pixelization by considering the same object and setup as in the previous section, and choosing a pixel size of $\delta x_a = \delta x_b = 200\,\mu\mathrm{m}$. Under these conditions, based on Equation (18), an object with a spacing of $500\,\mu\mathrm{m}$ is expected to produce lines that are separated by roughly two pixels in the $(x_a, x_b)$ plane, in good agreement with the result shown in the figure.
The importance of retrieving multiple shifted “replicas” of the same object portion within the measured correlation function becomes particularly relevant when considering noise. In fact, a typical drawback of correlation-based imaging is the low SNR of the measured correlation function. CPI is no exception: in a typical scenario, the signal-to-noise ratio is lower than unity, as shown in Figure 5c, where the details of the object are not immediately recognizable because noise buries the spatial modulation induced on the correlation function by the object features. The image shown here was obtained by superimposing a white-noise mask on the pixelated correlation function of Figure 5b, in such a way as to obtain an SNR of 0.5 at best. Due to the addition of noise, Pearson’s correlation coefficient between Figure 5b,c drops to 42.7%; nevertheless, upon refocusing, the correlation with the reference object is 72.1%, showing that the sample features are reconstructed even without the need for noise-suppression algorithms. These conditions are ideal for showing the importance of retrieving multiple “replicas” of each object detail: although noise and pixelization limit the number of contributions to the same object coordinate, the process of refocusing, by making use of the redundancy of the correlation function, can (at least partially) isolate the object features from the noise background. Larger apertures and detectors, combined with smaller pixels, are thus always beneficial to CPI: besides improving the FOV, they also increase the quality of the final images, thanks to statistical effects.
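The role of redundancy can be probed numerically along the following lines, reusing the hypothetical helpers sketched above. This is an illustrative experiment, not a reproduction of the figure: the grids, noise level, and normalization are choices made here.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 pixels of roughly 0.2 mm per detector (10 mm sensors), as in Figure 5b
xa = np.linspace(-5, 5, 50)
xb = np.linspace(-5, 5, 50)
alpha, beta = coefficients(195.0)   # object out of focus at z = 195 mm

# noiseless correlation function, Eq. (11) without the pupil factor
gamma = object_transmissivity(alpha * xa[:, None] + beta * xb[None, :]) ** 2
noisy = gamma + rng.normal(scale=2.0, size=gamma.shape)   # white-noise mask

xs_grid = np.linspace(-2, 2, 200)
refocused = refocus(noisy, xa, xb, alpha, beta, xs_grid)
reference = object_transmissivity(xs_grid) ** 2

# redundancy along the integration lines partially averages the noise out
r = np.corrcoef(refocused, reference)[0, 1]
print(f"correlation with the reference object: {r:.2f}")
```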

4. Discussion

The effect of the finite size of the optical components is difficult to treat mathematically because, in most cases of interest, Equation (7) cannot be solved analytically. However, a sufficiently deep insight can be gained in the geometrical-optics context. The predictions based on the geometrical approach have proven quite reliable, and show that finite apertures do not alter the geometrical point-to-point correspondence between object and detector points. Their main effect is to limit the extension of the measurable correlation function in the 4D space defined by the coordinates of the two detectors. Interestingly, this limitation is independent of the position of the object along the optical axis. A similar result was found when accounting for the finite photosensitive area of the detectors: non-zero contributions to the correlation function are experimentally available only as long as the reconstructed optical rays cross the finite apertures in their transmissive regions and are detected within the sensor surface. The combined effect of finite detectors and apertures is to limit the FOV of the reconstructed image, as well as to create artifacts; the latter can be eliminated either through accurate theoretical modeling or by experimental calibration of the device.
We have also found that the finite size of the pixels on the detectors imposes limitations on the achievable resolution of the refocused image, and this occurs in a way that strongly depends on the position of the object along the optical axis. The pixel size also determines how many times the same detail on the object plane is encoded in the correlation function. Being based on the superposition of redundant details, the refocusing procedure is much more efficient, in terms of noise suppression, when the object details are available with a higher redundancy.
Future developments on the topic shall include the optimization of the data-processing algorithms for the case of the experimentally measured discrete correlation function. Applying the refocusing procedure and other 3D reconstruction techniques can be demanding in terms of time and computational resources; hence, in conjunction with the use of next-generation ultra-fast sensors [28], an approach requiring the minimum amount of resources is needed in view of applications such as real-time imaging.

Author Contributions

Conceptualization, G.M. and F.D.L.; methodology, formal analysis, and simulations, G.M.; writing—original draft preparation, G.M.; writing—review and editing, F.V.P., F.D.L. and M.D.; supervision, F.V.P. and M.D.; project administration and funding acquisition, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

Project Qu3D is supported by the Italian Istituto Nazionale di Fisica Nucleare, the Swiss National Science Foundation (grant 20QT21 187716 “Quantum 3D Imaging at high speed and high resolution”), the Greek General Secretariat for Research and Technology, and the Czech Ministry of Education, Youth and Sports, under the QuantERA program, which has received funding from the European Union’s Horizon 2020 research and innovation program. This work was also supported by the Istituto Nazionale di Fisica Nucleare project PICS4ME and by the Ministero Italiano dello Sviluppo Economico project INTEFF-TOPMICRO (Proof-of-Concept call).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Derivation of the Resolution Limit Due to the Finite Pixel Size

Here, we derive a rough estimate of the resolution limit in the context outlined in Section 3.3, leading to Equation (18).
As in Figure A1, we are interested in finding the minimum separation $d = \delta x_s / \sqrt{\alpha^2 + \beta^2}$ between the lines in the correlation function corresponding to two details of the object plane separated by $\delta x_s$, below which the two lines fall within the same pixel. Since the direction of the lines is determined by the coefficients $\alpha$ and $\beta$, the unit vector orthogonal to this direction has components $(\sin\theta, -\cos\theta)$. If $0 \leq \theta < 90°$, the lines will not cross the same pixel as long as their separation is larger than the projection onto this versor of the first diagonal $(\delta x_a, \delta x_b)$ of the pixel rectangle, while, for $-90° < \theta \leq 0$, the same holds for the second diagonal $(\delta x_a, -\delta x_b)$. By computing the scalar products, one obtains:
$$d \geq \delta x_b \cos\theta - \delta x_a \sin\theta \quad \text{for}\ 0 < \theta \leq 90°, \qquad d \geq \delta x_b \cos\theta + \delta x_a \sin\theta \quad \text{for}\ {-90°} < \theta \leq 0, \quad \mathrm{(A1)}$$
which, by considering that:
$$\sin(\arctan x) = \frac{x}{\sqrt{1 + x^2}}\,, \qquad \cos(\arctan x) = \frac{1}{\sqrt{1 + x^2}}\,,$$
and $\theta = \arctan(\alpha/\beta)$, can be combined into:
$$\frac{\delta x_s}{\sqrt{\alpha^2 + \beta^2}} \geq \delta x_a\, \frac{|\alpha|}{\sqrt{\alpha^2 + \beta^2}} + \delta x_b\, \frac{|\beta|}{\sqrt{\alpha^2 + \beta^2}}\,, \quad \mathrm{(A2)}$$
thus leading to Equation (18).
Figure A1. Minimum separation between two details. See text for details.

References

1. Adelson, E.H.; Wang, J.Y. Single lens stereo with a plenoptic camera. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 99–106.
2. Pawley, J. Handbook of Biological Confocal Microscopy; Springer: Berlin/Heidelberg, Germany, 2006; Volume 236.
3. Ihrke, I.; Restrepo, J.; Mignard-Debise, L. Principles of light field imaging: Briefly revisiting 25 years of research. IEEE Signal Process. Mag. 2016, 33, 59–69.
4. Lien, M.B.; Liu, C.H.; Chun, I.Y.; Ravishankar, S.; Nien, H.; Zhou, M.; Fessler, J.A.; Zhong, Z.; Norris, T.B. Ranging and light field imaging with transparent photodetectors. Nat. Photonics 2020, 14, 143–148.
5. Ng, R.; Levoy, M.; Brédif, M.; Duval, G.; Horowitz, M.; Hanrahan, P. Light Field Photography with a Hand-Held Plenoptic Camera. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 2005.
6. Ng, R. Fourier slice photography. ACM Trans. Graph. 2005, 24, 735–744.
7. Birklbauer, C.; Bimber, O. Panorama light-field imaging. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2014; Volume 33, pp. 43–52.
8. Xue, Z.; Baboulaz, L.; Prandoni, P.; Vetterli, M. Light field panorama by a plenoptic camera. In Computational Imaging XII; SPIE: Bellingham, WA, USA, 2014; Volume 9020, pp. 186–196.
9. Broxton, M.; Grosenick, L.; Yang, S.; Cohen, N.; Andalman, A.; Deisseroth, K.; Levoy, M. Wave optics theory and 3-D deconvolution for the light field microscope. Opt. Express 2013, 21, 25418–25439.
10. Prevedel, R.; Yoon, Y.G.; Hoffmann, M.; Pak, N.; Wetzstein, G.; Kato, S.; Schrödel, T.; Raskar, R.; Zimmer, M.; Boyden, E.S.; et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 2014, 11, 727–730.
11. Ko, J.; Davis, C.C. Comparison of the plenoptic sensor and the Shack–Hartmann sensor. Appl. Opt. 2017, 56, 3689–3698.
12. Zhu, H.; Wang, Q.; Yu, J. Light field imaging: Models, calibrations, reconstructions, and applications. Front. Inf. Technol. Electron. Eng. 2017, 18, 1236–1249.
13. Georgiev, T.; Zheng, K.C.; Curless, B.; Salesin, D.; Nayar, S.K.; Intwala, C. Spatio-angular resolution tradeoffs in integral photography. Render. Tech. 2006, 2006, 263–272.
14. Goldluecke, B.; Klehm, O.; Wanner, S.; Eisemann, E. Plenoptic Cameras. In Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality; CRC Press: Boca Raton, FL, USA, 2015; pp. 65–77.
15. D’Angelo, M.; Pepe, F.V.; Garuccio, A.; Scarcelli, G. Correlation plenoptic imaging. Phys. Rev. Lett. 2016, 116, 223602.
16. Pepe, F.V.; Di Lena, F.; Mazzilli, A.; Edrei, E.; Garuccio, A.; Scarcelli, G.; D’Angelo, M. Diffraction-limited plenoptic imaging with correlated light. Phys. Rev. Lett. 2017, 119, 243602.
17. Scagliola, A.; Di Lena, F.; Garuccio, A.; D’Angelo, M.; Pepe, F.V. Correlation plenoptic imaging for microscopy applications. Phys. Lett. A 2020, 384, 126472.
18. Massaro, G.; Giannella, D.; Scagliola, A.; Di Lena, F.; Scarcelli, G.; Garuccio, A.; Pepe, F.V.; D’Angelo, M. Light-field microscopy with correlated beams for extended volumetric imaging at the diffraction limit. arXiv 2021, arXiv:2110.00807.
19. Pepe, F.V.; Di Lena, F.; Garuccio, A.; Scarcelli, G.; D’Angelo, M. Correlation plenoptic imaging with entangled photons. Technologies 2016, 4, 17.
20. Rubin, M.H.; Klyshko, D.N.; Shih, Y.; Sergienko, A. Theory of two-photon entanglement in type-II optical parametric down-conversion. Phys. Rev. A 1994, 50, 5122.
21. Di Lena, F.; Pepe, F.; Garuccio, A.; D’Angelo, M. Correlation plenoptic imaging: An overview. Appl. Sci. 2018, 8, 1958.
22. Mandel, L.; Wolf, E. Optical Coherence and Quantum Optics; Cambridge University Press: Cambridge, UK, 1995.
23. Di Lena, F.; Massaro, G.; Lupo, A.; Garuccio, A.; Pepe, F.V.; D’Angelo, M. Correlation plenoptic imaging between arbitrary planes. Opt. Express 2020, 28, 35857–35868.
24. Glauber, R.J. The quantum theory of optical coherence. Phys. Rev. 1963, 130, 2529.
25. Saleh, B.E.; Teich, M.C. Fundamentals of Photonics; Wiley: Hoboken, NJ, USA, 2007.
26. Bleistein, N.; Handelsman, R.A. Asymptotic Expansions of Integrals; Ardent Media: Dallas, PA, USA, 1975.
27. Scala, G.; D’Angelo, M.; Garuccio, A.; Pascazio, S.; Pepe, F.V. Signal-to-noise properties of correlation plenoptic imaging with chaotic light. Phys. Rev. A 2019, 99, 053808.
28. Antolovic, I.M.; Burri, S.; Hoebe, R.A.; Maruyama, Y.; Bruschini, C.; Charbon, E. Photon-counting arrays for time-resolved imaging. Sensors 2016, 16, 1005.
Figure 1. Comparison between different CPI protocols. (a) The source can emit either entangled photons or chaotic light. Light from the source is split into two paths: in one arm, light illuminates the object, passes through a lens, and is detected by detector $D_A$; in the other arm, light propagates freely toward a lens and is collected by sensor $D_B$. This scheme corresponds to $m = 0$ in Equation (4). (b) The source emits chaotic light, which illuminates the object and is then split into two paths; each beam then passes through a lens and is collected by a detector, corresponding to $m = 1$ in Equation (4). In both panels, $A$ indicates the aperture function of the object, $P_{A,B}$ the pupil functions of the lenses, $D_A$ and $D_B$ are high-resolution detectors, and ⊗ indicates that correlations are measured between the two detectors $D_A$ and $D_B$.
Figure 2. (a) Example of a CPI setup, named correlation plenoptic imaging between arbitrary planes [23]. A source of thermal light is placed on the object plane and plays the role of both the illuminating source and the object (e.g., a fluorescent object). Photons from the object propagate through a single lens of focal length $f$ and, after being separated by a beam splitter, are collected by two spatially resolving detectors $D_A$ and $D_B$. The planes optically conjugate to the detectors do not coincide ($z_a \neq z_b$) and, in the most general case, the object is placed out of focus with respect to both detectors ($z \neq z_{a,b}$). Details about the experimental parameters can be found in the text. This setup is used to generate all the simulations reported in this work. (b) The intensity transmissivity of the object used for the simulations ($|A(x_s)|^2$). The object is an indefinitely extended set of equally spaced ($500\,\mu\mathrm{m}$) Gaussian apertures of width $100\,\mu\mathrm{m}$.
Figure 3. (a) Study of the effect of the numerical aperture (NA) of the lens on the extension of the correlation function in the $(x_a, x_b)$ plane. The numerical aperture is changed by varying the lens aperture $l$. With reference to Figure 2a, the object was placed at a distance $z = 195\,\mathrm{mm} \neq z_{a,b}$ from the lens. The area enclosed between the red dashed lines represents the lens extension (in this region, the lens aperture function, whose values vary in $(0, 1)$, is larger than 0.001). (b) The effect of the displacement of the object along the optical axis, at a fixed lens numerical aperture of 0.15. Four axial positions of the object were explored. When the sample is placed on either one of the two planes of focus ($z = z_b = 190\,\mathrm{mm}$ and $z = z_a = 210\,\mathrm{mm}$), the replicas of the details appearing in the correlation function line up either horizontally or vertically. $z = 200\,\mathrm{mm}$ corresponds to an object placed between the two planes focused on $D_A$ and $D_B$, while $z = 220\,\mathrm{mm}$ corresponds to a sample placed farther away from the lens than the farthest focused plane.
Figure 4. (a) Subdivision of the correlation function into parts characterized by different properties. The strip enclosed between the two green lines is the area defined by the lens aperture, and the square area delimited by the orange line identifies the space defined by the size of the two detectors. The correlation function can only be measured in the region defined by the overlap of the two. The red dashed line identifies the integration direction, namely, the direction along which the correlation function contains information about the detail located at coordinate $x_s$ on the object plane. As the integration line sweeps the entire object plane (i.e., all possible values of $x_s$), two cases can occur: either the same detail receives contributions from the whole lens or, due to the finite size of the detectors, the contributions decrease towards the edges. (b) Effect of the finite size of the lens and the detectors on the reconstruction of the object. The solid blue line represents the reference object, in arbitrary units. Points of the correlation function located close to the edge of the detector area (red area in (a)) give rise to artifacts in the reconstructed object (black dashed line), due to the varying amount of contributions received by the corresponding object details. Artifacts can be removed by applying the correction discussed in the text (Equation (16)), so that all the slits within the field of view can be reconstructed (solid red line) with their correct relative heights.
Figure 5. Analysis of the effect of pixelization on the correlation function. The pixel size was taken to be $200\,\mu\mathrm{m}$. (a) Resolution curve due to pixelization. When the object is placed between the two planes at focus, the resolution defined by the pixel size is given by the linear interpolation between the points $(z_a, M_A^{-1})$ and $(z_b, M_B^{-1})$. If the object is placed outside of the two planes, the pixel size required to discriminate details that close rapidly decreases. (b) Effect of pixelization on the correlation function when the object is placed out of focus (at $z = 195\,\mathrm{mm}$ from the lens); the pixel size ($200\,\mu\mathrm{m}$) corresponds to 50 pixels per detector. (c) Typical aspect of a measured correlation function: zero-average white noise has been superimposed on the normalized theoretical correlation function, so as to simulate a correlation function with a signal-to-noise ratio of, at best, 0.5 (the standard deviation of the white noise amounts to 2 arbitrary units). (d) Refocused image obtained from the correlation function reported in panel (c) (blue line), as compared with the reference object (orange line). The refocused image was obtained by applying the refocusing algorithm in Equation (6) and dividing by the correction coefficient in Equation (16).