Article

Effect of Fiber Optic Plate on Centroid Locating Accuracy of Monocentric Imager

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(5), 1993; https://doi.org/10.3390/app11051993
Submission received: 31 January 2021 / Revised: 15 February 2021 / Accepted: 21 February 2021 / Published: 24 February 2021
(This article belongs to the Section Optics and Lasers)

Abstract

We propose a method for obtaining the centroid locating accuracy (CLA) of a monocentric imager that uses a fiber optic plate (FOP) as a relay image transmission element, in order to reduce the loss of CLA caused by the addition of the FOP. We constructed a two-stage image transmission coupling model of the spherical focal surface, FOP and image sensor. By analyzing the influences of the FOP parameters, including the fill factor and the fiber diameter, and of the FOP in-plane displacements, including rotation and translation, on the CLA, the loss of the lowest CLA that the monocentric imager can withstand caused by the addition of the FOP was reduced by 20%.

1. Introduction

The fiber optic plate (FOP) is a large-area-array rigid passive fiber optic image transmission element composed of millions of fibers arranged in a hexagonal pattern, with a compact and stable structure. In theory, each fiber transmits light independently as a unit without interfering with the others, and the output end face of the FOP is aligned with the input end face. The FOP is thin; that is, the fiber transmission channels are short. Therefore, the FOP has low transmission signal loss, high transmittance and good image transmission performance. The most important advantage of the FOP is that it can achieve one-to-one signal transmission between input and output end faces of different shapes. According to the actual requirements of the image surface, the input–output end surfaces of an FOP can be prepared as spherical–planar, planar–spherical, curved–curved, planar–planar and other forms.
Engineering practice shows that, under the same circumstances, monocentric imagers are lighter, more compact and offer larger imaging fields of view than traditional imaging systems [1,2]. There is no need to correct the field curvature when optimizing the image quality of a monocentric imager, because the symmetry of a monocentric lens cancels out most of the geometrical aberrations [3]. However, current large-area-array image sensors are essentially flat and cannot be fully coupled with the spherical focal surface. The FOP solves this problem well. We can prepare the input–output end surfaces of an FOP as a spherical–planar pair according to the actual needs of the monocentric imager, which not only realizes the coupling between the spherical focal surface and the image sensor, but also enables large-format imaging of hundreds of millions or even billions of pixels by splicing multiple FOPs in a specific spatial layout. I. Stamenov’s team at the University of California used a fiber-coupled monocentric lens imaging structure to develop an ultra-wide-angle 30-Mpixel panoramic imager with a 126° “letterbox” format field of view [2,3,4,5]. Research and applications of monocentric imagers have so far mainly targeted ground-based use, and research on space-based imaging of space targets has not yet been reported. In the field of space-based space target surveillance, monocentric imagers with FOPs as relay image transmission elements can detect space targets over a large field of view, and the FOP is compact, reliable and otherwise well suited to the aerospace environment. However, the addition of FOPs inevitably increases the error of centroid positioning of a spatial point target. It is therefore of great significance to analyze the influence of the FOP on the centroid locating accuracy (CLA) and to reduce the loss of CLA caused by the addition of FOPs by choosing a suitable FOP scheme.
In this paper, according to the FOP structural characteristics, we construct a two-stage discrete coupling model of the spherical focal surface, FOP and image sensor, and analyze the influences of the FOP parameters, including the fill factor and the fiber diameter, and of the FOP in-plane displacements, including rotation and translation, on the CLA. Finally, we obtain the FOP scheme that achieves the best CLA.

2. Monocentric Imager with FOPs as Relay Image Transmission Elements

2.1. Fiber Optic Plate Relay Image Transmission Scheme

An FOP’s structural characteristics determine its image transmission mechanism: a space target is imaged on the imaging focal plane coupled with the FOP input end surface by the front imaging system of the monocentric imager; each fiber on the incident end surface transmits a pixel independently through its own optical fiber channel; that is, the FOP divides the signal into millions of pixels arranged in a honeycomb [6]; each divided signal is transmitted separately along its own channel and is finally imaged on an image sensor coupled with the FOP exit end surface [7,8]. This method of using an FOP to achieve inter-stage coupling is called the fiber optic plate relay image transmission (FOPRIT) scheme (see Figure 1). In some complex optical system structures, this scheme can greatly reduce the size of the entire imaging system, can replace the traditional relay imaging scheme and has an otherwise unattainable advantage.

2.2. Periodic Translation of the Imaging Spot

In the field of space-based space target surveillance, the FOPRIT scheme enables the detection of space targets over a large field of view. A distant space target is imaged as a diffuse spot by the monocentric imager. However, the diffuse spot is not a large target on the entire image, and usually only a few image sensor pixels receive all of its energy. We cannot directly determine the precise position of the space target, and therefore cannot identify and locate it, which affects the measurement accuracy of the monocentric imager. We note that the position at which a space target is imaged on the image sensor is random, and the position of the same space target imaged at different times also differs. As image sensor pixels are arranged in a regular square lattice, the centroid locating error of the imaging spot is the same as the centroid locating error of the imaging spot after it is translated by one image sensor pixel; that is, the translation period of the imaging spot is one image sensor pixel. We call this process the periodic translation of the imaging spot (PTOIS) (see Figure 2). We note that the choice of the relative initial position between the imaging spot and an image sensor pixel has no effect on the PTOIS simulation; for convenience of observation, we take the position where the imaging spot center coincides with the center of a certain image sensor pixel as the relative initial position. It is not meaningful to simulate and analyze the centroid location of the imaging spot at a single instantaneous position; only by analyzing the periodic translation of the imaging spot can we reach the most representative and accurate conclusion.
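As an illustration of the PTOIS process, the following Python sketch (the authors performed their simulations in MATLAB; this is an independent, simplified re-implementation) shifts a synthetic Gaussian spot across one sensor pixel in sub-pixel steps, bins it onto the pixel grid and records the centroid locating error at each position. The pixel pitch matches the 10 μm value used later in the paper; the Gaussian spot shape, oversampling factor and number of steps are illustrative assumptions.

```python
import numpy as np

PIXEL = 10.0          # sensor pixel pitch in micrometres (value used later in the paper)
OVERSAMPLE = 100      # sub-samples per pixel for the "continuous" focal-surface image

def gaussian_spot(grid_x, grid_y, cx, cy, sigma=8.0):
    """Synthetic diffuse spot (illustrative stand-in for the simulated imaging spot)."""
    return np.exp(-((grid_x - cx) ** 2 + (grid_y - cy) ** 2) / (2.0 * sigma ** 2))

def bin_to_pixels(img, oversample):
    """Integrate the fine-grid image over each sensor pixel."""
    n = img.shape[0] // oversample
    return img[:n * oversample, :n * oversample].reshape(n, oversample, n, oversample).sum(axis=(1, 3))

def centroid(pix):
    """Intensity-weighted centroid (centroid method), in pixel units."""
    ys, xs = np.indices(pix.shape)
    total = pix.sum()
    return (pix * xs).sum() / total, (pix * ys).sum() / total

# Fine grid covering 21 x 21 sensor pixels.
npix = 21
coords = (np.arange(npix * OVERSAMPLE) + 0.5) / OVERSAMPLE * PIXEL
gx, gy = np.meshgrid(coords, coords)

# Scan the spot centre across one pixel period; the errors repeat with a period of one pixel.
centre0 = npix / 2.0 * PIXEL
for shift_pix in np.linspace(0.0, 1.0, 11):
    spot = gaussian_spot(gx, gy, centre0 + shift_pix * PIXEL, centre0)
    cx, _ = centroid(bin_to_pixels(spot, OVERSAMPLE))
    ideal_x = (centre0 + shift_pix * PIXEL) / PIXEL - 0.5   # true position in pixel coordinates
    print(f"shift = {shift_pix:.1f} px  CLE_x = {abs(cx - ideal_x):.5f} px")
```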

3. Materials and Methods

3.1. Evaluation Standard of Centroid Locating Accuracy

In the aerospace field, image processing is more constrained, and star image processing can generally only be carried out with traditional methods. We use the centroid method to solve for the centroid location of the space target. The centroid method is a centroid location algorithm that uses the pixel intensity values as weights [9,10]. Assuming that only one target is imaged on the sensor (see Figure 1), the intensity value of the pixel with coordinate $(x'', y'')$ on the image sensor is $I''(x'', y'')$. The centroid coordinate $(x_0'', y_0'')$ of the imaging spot calculated by the centroid method is obtained from:
$x_0'' = \dfrac{\sum I''(x'', y'')\, x''}{\sum I''(x'', y'')}, \qquad y_0'' = \dfrac{\sum I''(x'', y'')\, y''}{\sum I''(x'', y'')}$  (1)
We set the ideal coordinate of the space target to $(x_0, y_0)$ and define the centroid locating error (CLE) as the distance between the ideal position of the space target and the centroid position of the imaging spot, which can be expressed as:
$CLE = \sqrt{(x_0'' - x_0)^2 + (y_0'' - y_0)^2}$  (2)
We use the centroid locating error standard deviation (CLESD) to quantitatively describe the CLA of a space target. The CLESD is defined as:
$\mu = \dfrac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} CLE_{ij}$  (3)
$CLESD = \sqrt{\dfrac{1}{MN - 1}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(CLE_{ij} - \mu\right)^{2}}$  (4)
where $CLE_{ij}$ is the CLE when the imaging spot has been translated $i$ times in the $x$ direction and $j$ times in the $y$ direction, $\mu$ is the average value of the CLE, $M$ is the total number of imaging-spot translations in the $x$ direction and $N$ is the total number in the $y$ direction. The CLESD is comprehensive and objective: it is not an evaluation standard for the instantaneous imaging of a single space target, but one based on the dynamic imaging of the many space targets that the monocentric imager can detect.
When $M = N$ and $L_M = L_N = 1$ pixel, the CLESD can be used to quantitatively evaluate the CLA of the PTOIS process, where $L_M$ is the distance over which the imaging spot is translated $M$ times in the $x$ direction and $L_N$ is the distance over which it is translated $N$ times in the $y$ direction. The lower the CLESD, the higher the CLA.
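The following Python sketch implements Formulas (1)–(4) directly: an intensity-weighted centroid, the CLE against an ideal position and the CLESD over an M × N grid of CLE values. The random demonstration values stand in for the CLEs produced by a PTOIS scan; they are not data from the paper.

```python
import numpy as np

def centroid(intensity):
    """Formula (1): intensity-weighted centroid of the sensor image, in pixel units."""
    ys, xs = np.indices(intensity.shape)
    total = intensity.sum()
    return (intensity * xs).sum() / total, (intensity * ys).sum() / total

def cle(intensity, ideal_x, ideal_y):
    """Formula (2): centroid locating error between the computed centroid and the ideal position."""
    cx, cy = centroid(intensity)
    return np.hypot(cx - ideal_x, cy - ideal_y)

def clesd(cle_grid):
    """Formulas (3)-(4): sample standard deviation of the M x N grid of CLE values."""
    cle_grid = np.asarray(cle_grid, dtype=float)
    mu = cle_grid.mean()
    return np.sqrt(((cle_grid - mu) ** 2).sum() / (cle_grid.size - 1))

# Example: CLE values collected over an M x N grid of sub-pixel spot translations
# (here random numbers stand in for the values produced by a PTOIS scan).
rng = np.random.default_rng(0)
demo_cles = rng.uniform(0.0, 0.05, size=(11, 11))
print(f"CLESD = {clesd(demo_cles):.4f} pixels")
```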

3.2. Defocusing of Imaging Spots

As mentioned earlier, the imaging spot is not a large target on the entire image, and only a few image sensor pixels receive the signal of the entire imaging spot. The CLE calculated directly by the centroid method is then relatively large, so we defocus the imaging spot before obtaining its centroid position, in order to capture more detail at the edge of the imaging spot and obtain a more accurate CLE. Strictly speaking, defocus is not considered an aberration, but its effect can be severe. If the receiving surface is not at the paraxial focal plane, the geometrically enlarged image of the space target is mixed with noise; in other words, defocus reduces resolution and causes loss of detail. As the defocus distance increases, the noise grows and the resolution drops sharply. Therefore, the defocus cannot be chosen too large. We analyze the influence of different defocus parameters on the CLA and perform appropriate defocus processing on the imaging spot, which improves the analysis accuracy and lays a solid foundation for the series of CLE analyses carried out in the following sections.
We simulate the defocus degree of the imaging spot in Zemax OpticStudio. First, we set the wavelength to 0.55 μm, the field to 0, the entrance pupil diameter to 50 μm, the focal length to 100 μm and the F-number to 2. Then, the type of the STOP surface was set to Zernike Fringe Phase, with the maximum term set to 9, the norm radius set to 25 μm and the Zernike 4 (defocus) coefficient increased from 0 to 2λ with a step size of 0.05λ. Finally, we obtained discrete images of diffuse spots with different defocus degrees. We performed PTOIS simulations in MATLAB to analyze the effect of the different defocus degrees of the imaging spots on the CLA (see Figure 3).
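The defocused spots in the paper come from Zemax OpticStudio. As a rough stand-in, the sketch below generates a comparable defocused diffraction spot numerically by applying a Zernike fringe defocus term (Z4 = 2ρ² − 1, coefficient in waves) to a circular pupil and Fourier-transforming it; the grid size and pupil sampling are illustrative assumptions, not the Zemax settings.

```python
import numpy as np

def defocused_psf(defocus_waves, n=512, pupil_fill=0.25):
    """Diffraction PSF of a circular pupil with a Zernike fringe defocus term Z4 = 2*rho^2 - 1."""
    x = np.linspace(-1.0 / pupil_fill, 1.0 / pupil_fill, n)
    xx, yy = np.meshgrid(x, x)
    rho = np.hypot(xx, yy)
    aperture = (rho <= 1.0).astype(float)
    wavefront = defocus_waves * (2.0 * rho ** 2 - 1.0)        # optical path difference in waves
    pupil = aperture * np.exp(1j * 2.0 * np.pi * wavefront)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))) ** 2
    return psf / psf.sum()

# Sweep the defocus coefficient as in the Zemax setup (0 to 2 waves, step 0.05).
for z4 in np.arange(0.0, 2.0001, 0.05):
    psf = defocused_psf(z4)
    # psf can now be fed to a PTOIS simulation to evaluate the CLA for this defocus degree.
    print(f"Z4 = {z4:.2f} waves, peak energy fraction = {psf.max():.4e}")
```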
Figure 3 shows that, as the defocus degree increases, the CLA gradually increases. When the defocus degree is 0.8λ, the CLA is highest, with a value of 0.002 pixels. As the defocus degree continues to increase, the CLA remains within 0.012 pixels, but it is not as high as the CLA at a defocus degree of 0.8λ. Moreover, as noted above, the defocus degree cannot be too large. All subsequent analyses are therefore based on the imaging spot with a defocus degree of 0.8λ.

3.3. Fiber Position Calibration

We note that the position at which a space target is imaged onto the FOP incident end differs at different times; the signal received by each optical fiber differs accordingly, and so does the final image. We therefore need to mark every fiber position in order to locate each fiber in the FOP. The fibers are arranged as shown in Figure 4. We set the position where the center of a certain optical fiber coincides with the center of a certain image sensor pixel as the origin $(x_0, y_0)$ and record each optical fiber center coordinate. We set the fiber radius to $R$, the fiber core radius to $r$, the coordinate unit spacing in the $y_f$ direction to $\sqrt{3}R$ and the coordinate unit spacing in the $x_f$ direction to $R$.
Figure 4 shows that when $x_f$ and $y_f$ are both even or both odd, that is, when $x_f + y_f$ is an even number, $(x_f, y_f)$ is a fiber center coordinate. We set the coordinate on the spherical focal surface of the monocentric imager to $(x, y)$ (see Figure 1). The light at the spherical focal surface can be received by the FOP when $x$ and $y$ satisfy the following formula:
$\left[x - (x_0 + R\,x_f)\right]^2 + \left[y - (y_0 + \sqrt{3}R\,y_f)\right]^2 < r^2, \quad \text{with } x_f + y_f \text{ even}$  (5)
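A minimal sketch of the fiber indexing of Figure 4 and the receiving condition of Formula (5): fiber centres are generated for index pairs with x_f + y_f even, and a query point on the focal surface is assigned to the fiber whose core contains it (or to none if it falls on the cladding). The numerical values of R and r are illustrative assumptions.

```python
import numpy as np

R = 2.0      # fiber radius in micrometres; illustrative value
r = 1.7      # fiber core radius in micrometres; illustrative value
X0, Y0 = 0.0, 0.0   # origin: a fiber centre coincident with an image-sensor pixel centre

def fiber_centers(nx, ny):
    """Centres of the hexagonally packed fibers for index pairs with x_f + y_f even (Figure 4)."""
    return np.array([(X0 + R * xf, Y0 + np.sqrt(3) * R * yf)
                     for xf in range(-nx, nx + 1)
                     for yf in range(-ny, ny + 1)
                     if (xf + yf) % 2 == 0])

def receiving_fiber(x, y, centers):
    """Formula (5): index of the fiber whose core receives light from focal-surface point (x, y), or None."""
    d2 = (centers[:, 0] - x) ** 2 + (centers[:, 1] - y) ** 2
    idx = int(np.argmin(d2))
    return idx if d2[idx] < r ** 2 else None

centers = fiber_centers(20, 12)
print(receiving_fiber(0.3, 0.2, centers))   # a point near the origin falls inside the central fiber core
print(receiving_fiber(1.9, 1.7, centers))   # a point in the cladding gap is received by no fiber
```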
Since an optical fiber is regarded as the smallest pixel unit, the light intensity at the exit face of a fiber equals the average of the light intensity falling on its entrance face [11]. For brevity, we make the substitutions $x' = x_0 + R\,x_f$ and $y' = y_0 + \sqrt{3}R\,y_f$. The light intensity at the exit face of the fiber whose center is at the point $(x', y')$ is then given by
$I'(x', y') = \dfrac{1}{\pi r^2}\iint I(x, y)\, e(x' - x,\, y' - y)\, \mathrm{d}x\, \mathrm{d}y, \quad \text{with } e(x' - x,\, y' - y) = \begin{cases} 1, & (x - x')^2 + (y - y')^2 < r^2 \\ 0, & \text{elsewhere} \end{cases}$  (6)
where $I'(x', y')$ is the light intensity at the exit face of the fiber and $I(x, y)$ is the light intensity on the spherical focal surface. The uniform light emitted from the FOP exit end face is received by the image sensor, whose pixels are arranged in a square lattice. The intensity value of the image sensor pixel with coordinate $(x'', y'')$ is expressed as [12]:
$I''(x'', y'') = \sum_{x' = x'' - \mathrm{pixel}/2}^{\,x'' + \mathrm{pixel}/2}\ \sum_{y' = y'' - \mathrm{pixel}/2}^{\,y'' + \mathrm{pixel}/2} I'(x', y')$  (7)
So far, all formula derivations required for the PTOIS simulation of the monocentric imager with the FOPRIT scheme have been given.
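A minimal sketch of the two-stage coupling described by Formulas (6) and (7): the focal-surface intensity is averaged over each fiber core, and the fiber exit intensities are then summed into the sensor pixels. The Gaussian test spot, the sub-sampling of the core area and the assignment of each fiber to the pixel containing its centre are simplifying assumptions made for illustration.

```python
import numpy as np

R, r, PIXEL = 2.0, 1.7, 10.0   # fiber radius, core radius, sensor pixel pitch in micrometres (illustrative)
SUB = 20                       # sub-samples across each fiber core used to approximate the integral

# Fiber centres on the hexagonal lattice of Figure 4 (index pairs with x_f + y_f even).
centers = np.array([(R * xf, np.sqrt(3) * R * yf)
                    for xf in range(-40, 41) for yf in range(-25, 26)
                    if (xf + yf) % 2 == 0])

def focal_intensity(x, y):
    """Stand-in for the focal-surface image I(x, y): a small Gaussian spot (illustrative)."""
    return np.exp(-((x - 3.0) ** 2 + (y - 1.0) ** 2) / (2.0 * 6.0 ** 2))

# Formula (6): the exit intensity of each fiber is the average of I(x, y) over its core area.
u = (np.arange(SUB) + 0.5) / SUB * 2.0 * r - r
dx, dy = np.meshgrid(u, u)
inside = dx ** 2 + dy ** 2 < r ** 2
exit_intensity = np.array([focal_intensity(cx + dx[inside], cy + dy[inside]).mean()
                           for cx, cy in centers])

# Formula (7): each sensor pixel sums the exit intensities of the fibers lying inside it
# (here each fiber is assigned to the pixel containing its centre - a simplification of the area sum).
npix = 9
px = np.floor(centers[:, 0] / PIXEL + npix / 2.0).astype(int)
py = np.floor(centers[:, 1] / PIXEL + npix / 2.0).astype(int)
sensor = np.zeros((npix, npix))
ok = (px >= 0) & (px < npix) & (py >= 0) & (py < npix)
np.add.at(sensor, (py[ok], px[ok]), exit_intensity[ok])

print(sensor.round(2))
```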

4. Simulation Analysis and Results

4.1. FOP Parameters: Fill Factor and Fiber Diameter

The fill factor is an important parameter of a FOP, which is determined by the ratio of the light passing area of the FOP to the total area and can be expressed as
$K = 0.907 \times \dfrac{d_c^2}{d_f^2}$  (8)
where $K$ is the fill factor, 0.907 is the fill coefficient of an FOP arranged in a hexagonal pattern, $d_c$ is the fiber core diameter and $d_f$ is the fiber diameter. The fill factor is thus determined by the fiber diameter and the fiber core diameter. If the fill factor increases while the fiber diameter decreases, the luminous flux of the FOP increases, the image transmitted by the FOP becomes closer to the image that would be received by an image sensor coupled directly to the imaging focal plane, and the FOP image transmission performance is greatly enhanced [6,13]. At the same time, the fiber cladding becomes thinner. If the fiber cladding is too thin, the transmitted signals of neighboring fibers easily interfere with each other, and the cladding no longer plays its mechanical protection role. During FOP preparation it then becomes very easy to produce fiber deformation, resulting in large-area dark filaments, broken filaments and other defects that damage the image transmission quality of the FOP. Therefore, the optical fiber cladding cannot be too thin; that is, the fiber diameter cannot be too small and the fill factor cannot be too large. However, if the fill factor is too small, the luminous flux of the FOP is reduced, the overall image transmission performance of the FOP is greatly degraded, and the signal strength that the image sensor can receive becomes very weak, which is bound to increase the difficulty of space target identification. Therefore, considering both the stability and the luminous flux of FOP image transmission, it is more reasonable to choose an FOP type with a larger fiber diameter and a larger fill factor.
We set the image sensor pixel spacing to 10 μm. We note that the fill factor must be less than 0.907, and since the fiber cladding should not be too thin, the fill factor cannot be too close to 0.907; we therefore let the fill factor increase from 0.5 to 0.8 with a step size of 0.1. The FOP is made by secondary drawing, so the fiber diameter should not be too large; we let the fiber diameter increase from 3 μm to 10 μm with a step size of 1 μm. Once the fill factor and the fiber diameter are determined, the fiber core diameter is determined. We set the relative initial position to be where a certain fiber center coincides with a certain image sensor pixel center. We performed a PTOIS simulation in MATLAB and established the relationship among the fill factor, the fiber diameter and the CLESD. Figure 5 shows intuitively that after adding FOPs as relay image transmission elements, the CLA is reduced to a greater or lesser extent. With the fiber diameter unchanged, the CLA tends to improve overall as the fill factor becomes smaller; with the fill factor unchanged, the CLA tends to decrease overall as the fiber diameter becomes larger. An FOP with a smaller fiber diameter and a smaller fill factor therefore gives a higher CLA.
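For reference, Formula (8) can be inverted to give the core diameter for each point of the swept grid, $d_c = d_f\sqrt{K/0.907}$; the short sketch below enumerates the (K, d_f) combinations described above. The PTOIS run that turns each combination into a CLESD value is only indicated by a comment.

```python
import numpy as np

def core_diameter(fill_factor, fiber_diameter):
    """Invert Formula (8): d_c = d_f * sqrt(K / 0.907)."""
    return fiber_diameter * np.sqrt(fill_factor / 0.907)

# The grid swept in the simulation: K from 0.5 to 0.8 (step 0.1), d_f from 3 um to 10 um (step 1 um).
for K in (0.5, 0.6, 0.7, 0.8):
    for d_f in range(3, 11):
        d_c = core_diameter(K, d_f)
        # d_f and d_c fix the fiber lattice; one PTOIS run per (K, d_f) pair yields the CLESD of Figure 5.
        print(f"K = {K:.1f}, d_f = {d_f} um -> d_c = {d_c:.2f} um")
```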
In short, to give the monocentric imager with the FOPRIT scheme a higher CLA, we should choose an FOP with a smaller fill factor and a smaller fiber diameter; yet, as discussed above, to keep the fiber cladding from becoming too thin and to increase the luminous flux of the FOP, an FOP with a larger fiber diameter and a larger fill factor should be selected. Weighing these two conclusions against Figure 5, we note that when the fiber diameter is 4 μm or 6 μm, the fill factor has only a weak effect on the CLA, and for fill factor values of 0.5, 0.6 and 0.7, the CLAs are almost identical. Therefore, we choose two FOP schemes for subsequent analysis: (1) scheme one: $d_f = 4$ μm, $K = 0.7$ and $CLA = 0.031$ pixels; (2) scheme two: $d_f = 6$ μm, $K = 0.7$ and $CLA = 0.037$ pixels. Both FOP schemes meet the above requirements. Compared with FOP scheme two, FOP scheme one achieves a higher CLA, with a value of 0.031 pixels.

4.2. FOP In-Plane Displacements: Rotation and Translation

The above simulation is based on the premise that the coincidence of the center of a certain fiber and the center of a certain image sensor pixel is taken as the relative initial position. In reality, however, since the fibers are arranged in a hexagonal pattern, the image sensor pixels are arranged in a square lattice and the fiber diameter differs from the image sensor pixel spacing, the actual relative initial position is not consistent with the above simulation: the relative initial positions between the FOP and the image sensor are irregular. It should be noted that once the FOP and the image sensor are fixed, the FOP does not produce any in-plane displacement. The FOP in-plane displacement therefore does not mean that the FOP actually moves; it refers to simulating the CLA of the imaging spot at all possible relative initial positions between the FOP and the image sensor. In this way, the simulation analysis is objective and effective. By considering all the relative initial positions, we can objectively describe the impact of the addition of the FOP on the CLA and find the lowest CLA that the monocentric imager can withstand.
Since the coupling between the FOP and the image sensor is a plane-to-plane coupling, any movement of the FOP relative to the image sensor is an in-plane displacement, so we consider two aspects: rotation and translation.

4.2.1. Rotation

We set the image sensor pixel spacing to 10 μm. For the two FOP schemes obtained in Section 4.1, we rotate the FOP counterclockwise from 0 to $\pi$ with a rotation step of $\pi/12$. We denote the FOP rotation angle by $\theta$; when the FOP is rotated counterclockwise by $\theta$, the coordinates of the fiber centers are expressed as:
$x' = x_0 + R\,x_f \cos\theta - \sqrt{3}R\,y_f \sin\theta, \qquad y' = y_0 + R\,x_f \sin\theta + \sqrt{3}R\,y_f \cos\theta$  (9)
where $x_f$, $y_f$, $x_0$, $y_0$ and $R$ are as defined for Formula (5), and $(x', y')$ is the rotated fiber center coordinate. Every time the FOP was rotated by one step, we performed one PTOIS simulation in MATLAB and thus established the relationship between the CLESD and the rotation angle. The simulation results are shown in Figure 6.
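A short sketch of Formula (9), generating the rotated fiber-centre coordinates for each rotation step; the PTOIS evaluation at each step is only indicated by a comment. The fiber radius corresponds to FOP scheme one and the index ranges are illustrative.

```python
import numpy as np

R = 2.0   # fiber radius in micrometres (FOP scheme one, d_f = 4 um)

def rotated_fiber_centers(nx, ny, theta, x0=0.0, y0=0.0):
    """Formula (9): fiber-centre coordinates after rotating the FOP counterclockwise by theta."""
    return np.array([(x0 + R * xf * np.cos(theta) - np.sqrt(3) * R * yf * np.sin(theta),
                      y0 + R * xf * np.sin(theta) + np.sqrt(3) * R * yf * np.cos(theta))
                     for xf in range(-nx, nx + 1)
                     for yf in range(-ny, ny + 1)
                     if (xf + yf) % 2 == 0])

# One PTOIS simulation per rotation step, 0 to pi in steps of pi/12, as in Figure 6.
for k in range(13):
    centers = rotated_fiber_centers(40, 25, theta=k * np.pi / 12.0)
    # ...feed 'centers' into the two-stage coupling model above and compute the CLESD...
```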
Combining the results for the two FOP schemes gives a comprehensive picture. We observed that the FOP rotation period of both FOP schemes is $\pi/3$. Regardless of which FOP scheme is chosen, when the FOP is rotated counterclockwise by 0, $\pi/3$, $2\pi/3$ or $\pi$, the monocentric imager has the highest CLA; the corresponding FOP arrangement is shown in Figure 7a. When the FOP is rotated counterclockwise by $\pi/6$, $\pi/2$ or $5\pi/6$, the monocentric imager has the lowest CLA; the corresponding FOP arrangement is shown in Figure 7b. With FOP scheme one, the lowest CLA that the monocentric imager can withstand is 0.035 pixels; with FOP scheme two, it is 0.041 pixels. In conclusion, whichever FOP scheme is selected, the loss of CLA caused by FOP rotation is within 0.004 pixels, and when the FOP is arranged as shown in Figure 7a, the monocentric imager has the higher CLA.

4.2.2. Translation

As the FOP fiber diameter and the image sensor pixel spacing are not the same in size, and the fiber and pixel lattices differ in shape, the relative initial position is random. Since the FOP is arranged periodically, the lowest CLA that the monocentric imager can withstand can be obtained by simulating the CLESD at all relative initial positions within one period. Figure 4 shows that the optical fibers are periodically arranged in a hexagonal pattern; the translation period in the $x$ direction is $T_x = 2R$, and the translation period in the $y$ direction is $T_y = 2\sqrt{3}R$. The centroid position of the imaging spot after the FOP is shifted by one period is the same as the centroid position when the FOP is not shifted.
For the two FOP arrangements shown in Figure 7, we perform translation simulations on the two FOP schemes. We again set the image sensor pixel spacing to 10 μm. The FOP is translated in the $x$ direction with a step of 1 μm and in the $y$ direction with a step of $\sqrt{3}$ μm. Every time the FOP was translated by one step, we performed one PTOIS simulation in MATLAB, and we thus obtained the CLESD as the FOP was translated over one period in the $x$ and $y$ directions relative to the image sensor. The simulation results are shown in Figure 8 and Figure 9. The data are summarized in Table 1 so that the maximum and minimum CLESD values of each FOP scheme can be read more intuitively.
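A short sketch enumerating the relative initial positions covered by one translation period, assuming the 1 μm and √3 μm steps stated above; the coupling simulation run at each offset is only indicated by a comment.

```python
import numpy as np

R = 2.0                                   # fiber radius in micrometres (FOP scheme one, d_f = 4 um)
T_x, T_y = 2.0 * R, 2.0 * np.sqrt(3) * R  # FOP lattice translation periods (Figure 4)
step_x, step_y = 1.0, np.sqrt(3)          # translation steps in micrometres

# All relative initial positions within one lattice period; one PTOIS run is performed at each offset.
offsets = [(dx, dy)
           for dx in np.arange(0.0, T_x, step_x)
           for dy in np.arange(0.0, T_y, step_y)]
print(len(offsets), "relative initial positions for FOP scheme one")

# Shifting every fiber centre by (dx, dy) and repeating the coupling simulation yields the CLESD
# surfaces of Figures 8 and 9; their extrema are the values summarised in Table 1.
```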
When FOP scheme one is selected (fiber diameter 4 μm, fill factor 0.7), the in-plane displacement of the FOP reduces the lowest CLA that the monocentric imager can withstand from 0.031 pixels to 0.038 pixels, an accuracy loss of 0.007 pixels. When FOP scheme two is selected (fiber diameter 6 μm, fill factor 0.7), the in-plane displacement of the FOP reduces the lowest CLA from 0.037 pixels to 0.045 pixels, an accuracy loss of 0.008 pixels. If the FOP is controlled during processing so that it is arranged as shown in Figure 7a, the lowest CLA of FOP scheme one is reduced only from 0.031 to 0.034 pixels, an accuracy loss of 0.003 pixels, and the lowest CLA of FOP scheme two is reduced from 0.037 to 0.042 pixels, an accuracy loss of 0.005 pixels. Considering FOP translation only, and compared with FOP scheme two, FOP scheme one achieves a higher CLA, with a value of 0.034 pixels.

5. Conclusions

In conclusion, the addition of an FOP reduces the CLA considerably. We simulated and analyzed the CLA losses caused by different FOP schemes. By selecting an appropriate FOP scheme, the loss of CLA of the monocentric imager with the FOPRIT scheme can be reduced. By analyzing the periodic in-plane displacement of the FOP relative to the image sensor, the lowest CLA that the monocentric imager can withstand can be obtained. The main conclusions are as follows: (1) With the imaging spot defocused, the CLA of the monocentric imager without the FOPRIT scheme is 0.002 pixels. (2) By analyzing the influences of the FOP parameters, including the fill factor and the fiber diameter, on the CLA, we obtained two suitable FOP schemes: scheme one: $d_f = 4$ μm, $K = 0.7$ and $CLA = 0.031$ pixels; scheme two: $d_f = 6$ μm, $K = 0.7$ and $CLA = 0.037$ pixels. (3) By analyzing the influences of the FOP in-plane displacements, including rotation and translation, on the CLA, the lowest CLA of the monocentric imager with FOP scheme one is reduced from 0.031 pixels to 0.038 pixels, and that with FOP scheme two from 0.037 pixels to 0.045 pixels. Compared with FOP scheme two, the loss of the lowest CLA that the monocentric imager can withstand caused by the addition of the FOP is reduced by 20% with FOP scheme one.
For space point target imaging, given specific FOP parameters, the lowest CLA that the monocentric imager can withstand can be determined by the method in this paper. We have provided constructive guidance on the selection of the FOP, which can reduce the loss of CLA caused by the addition of FOPs and supports the wider application of monocentric imagers in the field of space-based space target surveillance. FOPs are currently being processed and prepared, and we aim to report a comprehensive experimental verification of our simulation and analysis in future work.

Author Contributions

Conceptualization, Y.H. and C.W.; methodology, Y.H.; software, Y.H. and D.X.; validation, C.Y. and C.W.; formal analysis, Y.H.; investigation, Y.H. and D.X.; resources, C.W.; writing—original draft preparation, Y.H.; writing—review and editing, Y.H.; funding acquisition, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC), grant number 61805235.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brady, D.J.; Gehm, M.E.; Stack, R.A.; Marks, D.L.; Kittle, D.S.; Golish, D.R.; Vera, E.M.; Feller, S.D. Multiscale gigapixel photography. Nature 2012, 486, 386–389. [Google Scholar] [CrossRef] [PubMed]
  2. Olivas, S.J.; Orel, M.; Arianpour, A.; Stamenov, I.; Ford, J.E. Digital image processing for wide-angle highly spatially-variant imagers. In Proceedings of the SPIE Optics and Photonics & Imaging, San Diego, CA, USA, 17–21 August 2014. [Google Scholar]
  3. Ford, J.; Stamenov, I.; Olivas, S.J.; Schuster, G.; Morrison, R. Fiber-coupled Monocentric Lens Imaging. In Proceedings of the Computational Optical Sensing & Imaging, Arlington, VA, USA, 23–27 June 2013. [Google Scholar]
  4. Stamenov, I.; Arianpour, A.; Olivas, S.J.; Agurok, I.P.; Johnson, A.R.; Stack, R.A.; Morrison, R.L.; Ford, J.E. Panoramic monocentric imaging using fiber-coupled focal planes. Opt. Express 2014, 22, 31708. [Google Scholar] [CrossRef] [PubMed]
  5. Olivas, S.J.; Nikzad, N.; Stamenov, I.; Arianpour, A.; Ford, J.E. Fiber Bundle Image Relay for Monocentric Lenses. In Proceedings of the Computational Optical Sensing & Imaging, Kohala Coast, HI, USA, 22–26 June 2014. [Google Scholar]
  6. Shao, J.; Zhang, J.; Huang, X.; Liang, R.; Barnard, K. Fiber bundle image restoration using deep learning. Opt. Lett. 2019, 44, 1080–1083. [Google Scholar] [CrossRef] [PubMed]
  7. Kapany, N.S.; Eyer, J.A.; Keim, R.E. Fiber Optics. Part II. Image Transfer on Static and Dynamic Scanning with Fiber Bundles. J. Opt. Soc. Am. 1957, 47, 423–427. [Google Scholar] [CrossRef]
  8. Sawatari, T.; Sayanagi, K. Image Transfer Properties of Optical Fiber Bundles. Oyobuturi 1965, 34, 207–213. [Google Scholar]
  9. Hao, J.; Yu, F. Centroid Locating for Star Image Object by FPGA. Adv. Mater. Res. 2011, 403, 1379–1383. [Google Scholar] [CrossRef]
  10. Chang, Y.K.; Lee, B.H.; Kang, S.J. High-Accuracy Image Centroiding Algorithm for CMOS-Based Digital Sun Sensors. In Proceedings of the Sensors, 2007 IEEE, Atlanta, GA, USA, 28–31 October 2007. [Google Scholar]
  11. Renee, D. Optical Transfer Properties of Fiber Bundles. In Proceedings of the Meeting of the Optical Society of America, Jacksonville, FL, USA, March 1963. [Google Scholar]
  12. He, X.; Yuan, L.; Jin, C.; Zhang, X. Imaging quality evaluation method of pixel coupled electro-optical imaging system. Opt. Commun. 2017, 399, 87–97. [Google Scholar] [CrossRef]
  13. Chang, Y.; Lin, W.; Cheng, J.; Chen, S.C. Compact high-resolution endomicroscopy based on fiber bundles and image stitching. Opt. Lett. 2018, 43, 4168. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The two-stage discrete coupling model of the spherical focal surface, fiber optic plate (FOP) and image sensor: a rectangular coordinate system O-$xy$ is established on the spherical focal surface, with the centroid position of the imaging spot at $(x_0, y_0)$; a rectangular coordinate system O'-$x'y'$ is established on the FOP exit end surface, with the centroid position of the imaging spot at $(x_0', y_0')$; and a rectangular coordinate system O''-$x''y''$ is established on the image sensor surface, with the centroid position of the imaging spot at $(x_0'', y_0'')$.
Figure 2. Whether in the x or y direction, the centroid locating error of the imaging spot is the same as that of the imaging spot after it is translated by one image sensor pixel. $S_1$ is the initial imaging spot, $S_2$ is the imaging spot after one periodic translation and $T$ is the translation period of the imaging spot.
Figure 3. The relationship between the defocus degree of the imaging spot and the CLESD. When the defocus degree was 0.8λ, the CLA was highest, with a value of 0.002 pixels.
Figure 4. Fibers arranged in a dense hexagonal pattern. By establishing the rectangular coordinate system $O_f$-$x_f y_f$, each fiber can be located. $T_x$ is the FOP translation period in the $x_f$ direction and $T_y$ is the FOP translation period in the $y_f$ direction; $T_x = 2R$, $T_y = 2\sqrt{3}R$.
Figure 5. The impact of different fiber diameters (3∼10 μm) and different fill factors (0.5∼0.8) on the CLESD. When the fiber diameter was 4 μm or 6 μm, the fill factor had a weak effect on the CLESD.
Figure 6. The FOP was rotated counterclockwise by π with the relative initial position as the center of rotation. The relationship between the rotation angle θ and the CLESD was established separately for the two FOP schemes. Scheme one: $d_f = 4$ μm, $K = 0.7$; scheme two: $d_f = 6$ μm, $K = 0.7$.
Figure 7. Two FOP arrangements. (a) FOP rotated counterclockwise by 0, π/3, 2π/3 or π. (b) FOP rotated counterclockwise by π/6, π/2 or 5π/6.
Figure 8. The FOP was translated over one period in the x and y directions ($T_x = 2R$, $T_y = 2\sqrt{3}R$), and the relationship between the FOP translation and the CLESD was established. (a) FOP scheme one: $d_f = 4$ μm, $K = 0.7$, no rotation. (b) FOP scheme two: $d_f = 6$ μm, $K = 0.7$, no rotation.
Figure 9. The FOP was translated over one period in the x and y directions ($T_x = 2R$, $T_y = 2\sqrt{3}R$), and the relationship between the FOP translation and the CLESD was established. (a) FOP scheme one: $d_f = 4$ μm, $K = 0.7$, rotation angle π/6. (b) FOP scheme two: $d_f = 6$ μm, $K = 0.7$, rotation angle π/6.
Table 1. The impact of FOP translation on the CLESD under different FOP schemes and rotation angles. "No translation CLESD" is the CLESD when the FOP is not translated. "Minimum CLESD" is the highest CLA that the monocentric imager can reach under FOP translation. "Maximum CLESD" is the lowest CLA that the monocentric imager can withstand under FOP translation. Values are in pixels.

FOP Scheme             Scheme One    Scheme Two    Scheme One    Scheme Two
Angle of Rotation      No rotation   No rotation   π/6           π/6
Minimum CLESD          0.027         0.031         0.022         0.023
No translation CLESD   0.031         0.037         0.035         0.041
Maximum CLESD          0.034         0.042         0.038         0.045