Article

A Method Used to Improve the Dynamic Range of Shack–Hartmann Wavefront Sensor in Presence of Large Aberration

Wen Yang, Jianli Wang and Bin Wang *
1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(19), 7120; https://doi.org/10.3390/s22197120
Submission received: 24 August 2022 / Revised: 13 September 2022 / Accepted: 18 September 2022 / Published: 20 September 2022
(This article belongs to the Section Optical Sensors)

Abstract
With the successful application of the Shack–Hartmann wavefront sensor to measuring aberrations of the human eye, researchers found that, when the aberration is large, the local wavefront distortion can shift the spot of a microlens out of the range of its corresponding sub-aperture. The traditional wavefront reconstruction algorithm, however, searches for the spot only within the corresponding sub-aperture of each microlens and reconstructs the wavefront from the centroid calculated there, which leads to wavefront reconstruction errors. To address the small dynamic range of the Shack–Hartmann wavefront sensor, this paper proposes a wavefront reconstruction algorithm based on the autocorrelation method and a neural network. The autocorrelation centroid extraction method is used to calculate the centroids over the entire spot map, and the wavefront is reconstructed by matching the centroids with the microlens array through the neural network. This breaks the sub-aperture limitation of the microlens. The experimental results show that the algorithm improves the dynamic range of the first 15 terms of the Zernike aberration reconstruction to varying degrees, ranging from 62.86% to 183.87%.

1. Introduction

The Shack–Hartmann wavefront sensor (SHWFS) has achieved a wide range of well-established applications in adaptive optics systems due to its compact structure, high utilization of luminous energy, and ability to operate with continuous or pulsed target light [1,2,3,4]. An SHWFS consists of a microlens array and a photodetector (currently mostly CCDs) [5,6]. If there is a phase distortion in the incident wavefront, the spot formed by each lenslet deviates from its reference position on the focal plane. The sub-aperture spot offset between the measured wavefront and the reference wavefront reflects the instantaneous average wavefront slope of the incident wavefront within the sub-aperture, and the average slopes in the orthogonal x and y directions within the sub-aperture can be obtained via computer processing. The incident wavefront phase can be reconstructed based on the average slope of each sub-aperture [7,8]. From the working principle of the SHWFS, it can be seen that the wavefront detection accuracy depends mainly on the centroid detection accuracy of the spot corresponding to each sub-aperture.
To date, a large number of studies have focused on improving the center of gravity (CoG) method [9,10,11,12,13,14,15,16]; however, if the aberration to be measured is too large and the wavefront fluctuation too steep, some sub-aperture spots will deviate from their corresponding sub-apertures. This condition commonly arises when measuring patients with severe refractive errors and patients who have undergone corneal or lens surgery [17,18]. Carmen Canovas and Erez N. Ribak compared and analyzed SHWFS methods (convolution, interpolation, Fourier, and centroid methods) for ophthalmology and found that the Fourier method works best for pupils with a small boundary slope or with a large distance from the boundary [19]. In the field of optical testing, such as the surface shape measurement of high-order parabolic mirrors and conventional optical elements, it is important to reduce the measurement error by overcoming the problem of steep or linear edges [20,21,22,23,24]. In addition, researchers have tried to rectify the distortion imposed by the atmosphere on a propagating laser beam, in which optical vortices may be detected [25,26,27,28].
On the one hand, the traditional wavefront reconstruction algorithm searches for the spot within the corresponding range of the microlens sub-aperture and reconstructs the wavefront according to the calculated centroid displacement. When the local wavefront distortion is large, the spot corresponding to a microlens shifts out of the sub-aperture range, and the traditional algorithm cannot find it. In this case, the focal coordinate of the microlens is usually taken as the calculated spot centroid, so a large local distortion is miscalculated as a plane wave and the wavefront reconstruction is incorrect. On the other hand, when the spot enters the range of an adjacent sub-aperture, the traditional algorithm takes the common centroid of the two spots as the centroid of the current sub-aperture spot, which also leads to wavefront reconstruction errors. Although the dynamic range of the wavefront sensor can be effectively increased by reducing the focal length of the microlens, this reduces the centroid detection accuracy, so the loss outweighs the gain. Therefore, it is necessary to eliminate the limitation that the microlens sub-aperture imposes on the spot in traditional wavefront reconstruction algorithms.
To expand the dynamic range of the SHWFS, C. Leroux proposed an iterative extrapolation method based on measured centroid positions [29]. J. Pfund used an improved unwrapping algorithm to show that, as long as the spot dislocation between two adjacent sub-apertures changes by less than half of the distance between them, each spot can be unambiguously allocated to a sub-aperture [30]. Geun-Young Yoon used a translatable plate of sub-apertures placed conjugate to the lenslet array [31]. Norbert Lindlein solved the problem by placing a spatial light modulator array in front of the sensor's microlenses to open or close individual sub-apertures [32]. Zeyu Gao proposed a centroid estimation algorithm based on image segmentation and related techniques to expand the dynamic range via search strategies [33]. Meanwhile, artificial intelligence methods have been widely applied in Shack–Hartmann wavefront sensing. For example, the strong fitting ability of neural networks has been used for wavefront restoration [34,35], which works well even when the sub-apertures are sparse [36,37] or the signal-to-noise ratio (SNR) is extremely low [38]. Deep learning methods have also shown their effectiveness in the multi-sensor data fusion field [39,40].
This paper presents a new wavefront reconstruction algorithm for expanding the dynamic range. Based on an improved autocorrelation centroid detection method, the spot centroids are calculated over the entire spot image, and the association between the spot centroids and the microlenses is cast as a classification problem solved by a neural network to reconstruct the wavefront. Not only does our method eliminate the limitation of the microlens sub-aperture on the spot, but it also requires no auxiliary hardware; even if the local wavefront aberration is so large that a spot shifts out of the corresponding microlens aperture or even into an adjacent sub-aperture, the algorithm can still accurately reconstruct the wavefront. We verified the accuracy of the method using a numerical simulation. Compared with the traditional method, the dynamic range of the different terms of the Zernike polynomials is improved by 62.86~183.87%.

2. Methods

2.1. Shack–Hartmann Wavefront Sensing Technology

As the core device of Shack–Hartmann wavefront sensing technology, an SHWFS is composed of a microlens array and a CCD camera [41,42], as shown in Figure 1. The microlens array converts the wavefront information into tilt information, which is collected by the CCD, and the wavefront is then restored via computation. The entire process includes three steps: centroid calculation, slope calculation, and wavefront reconstruction. In the wavefront reconstruction of an SHWFS, the wavefront error information over the full aperture is detected according to the calculated spot positions $(x_i, y_i)$:
$$x_i = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} x_{nm} I_{nm}}{\sum_{m=1}^{M}\sum_{n=1}^{N} I_{nm}}, \qquad y_i = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} y_{nm} I_{nm}}{\sum_{m=1}^{M}\sum_{n=1}^{N} I_{nm}}$$
where the detection region of each sub-aperture focal plane is $M \times N$ pixels mapped to the corresponding region of the CCD photosensitive target surface, $I_{nm}$ is the signal received by pixel $(n, m)$ on that surface, and $x_{nm}$ and $y_{nm}$ are the X and Y coordinates of pixel $(n, m)$, respectively.
Then, the wavefront slope of the incident wavefront is calculated according to the following formula:
$$g_x(i) = \frac{\Delta x}{f} = \frac{x_i - x_0}{f}, \qquad g_y(i) = \frac{\Delta y}{f} = \frac{y_i - y_0}{f}$$
where $(x_0, y_0)$ is the centroid coordinate of the spot array image formed by the calibrated wavefront, $(x_i, y_i)$ is the centroid coordinate of the spot array image formed by the measured wavefront, $i$ is the corresponding sub-aperture serial number, and $f$ is the focal length of the microlens array.
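As a concrete illustration of Equations (1) and (2), here is a minimal sketch in NumPy; the function names are ours, the sub-aperture image is assumed to be available as an array, and the default parameter values follow Table 1.

```python
import numpy as np

def subaperture_centroid(sub_img, pixel_size=10e-6):
    """Center of gravity of one sub-aperture image, Equation (1).
    sub_img: (N, M) array of CCD intensities I_nm for this lenslet."""
    N, M = sub_img.shape
    ys, xs = np.mgrid[0:N, 0:M]                    # pixel indices (n, m)
    total = sub_img.sum()
    xc = (xs * sub_img).sum() / total * pixel_size
    yc = (ys * sub_img).sum() / total * pixel_size
    return xc, yc

def subaperture_slopes(xc, yc, x0, y0, f=6.5e-3):
    """Average wavefront slopes of Equation (2): offsets of the measured
    centroid (xc, yc) from the reference centroid (x0, y0), divided by
    the lenslet focal length f (6.5 mm in Table 1)."""
    return (xc - x0) / f, (yc - y0) / f
```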
To convert the slope information measured by an SHWFS into the phase of the wavefront or the voltage values of the deformable mirror drivers, an algorithm, called the wavefront reconstruction algorithm, is required to establish the relationship between the slope and the phase/driving voltage. The most widely used methods are the zonal (region) method [43,44,45], the modal method [46], and the direct slope method [47]. Among them, the modal reconstruction method based on Zernike polynomials is widely used in wavefront detection because it can restore the continuous wavefront phase; it is also the method used in this paper. The basic principle is to describe the phase to be measured in the circular domain with a set of Zernike polynomials [48]:
$$\phi(x, y) = a_0 + \sum_{k=1}^{n} a_k Z_k(x, y) + \varepsilon$$
where $a_0$ is the average wavefront phase, $a_k$ is the coefficient of the $k$-th Zernike polynomial $Z_k(x, y)$, $n$ is the mode order, and $\varepsilon$ is the residual phase measurement error.
For the $i$-th sub-aperture, the average slopes $G_x(i)$ and $G_y(i)$ within the sub-aperture are the averages of the gradients of the wavefront phase in the x and y directions, respectively:
$$G_x(i) = \sum_{k=1}^{n} a_k \frac{\iint_{S_i} \frac{\partial Z_k(x, y)}{\partial x}\,dx\,dy}{S_i} + \varepsilon_x, \qquad G_y(i) = \sum_{k=1}^{n} a_k \frac{\iint_{S_i} \frac{\partial Z_k(x, y)}{\partial y}\,dx\,dy}{S_i} + \varepsilon_y$$
where $\varepsilon_x$ and $\varepsilon_y$ are the residual phase measurement errors, and $S_i$ is the normalized area of the sub-aperture.
Let $Z_{xk}(i) = \frac{\iint_{S_i} \frac{\partial Z_k(x, y)}{\partial x}\,dx\,dy}{S_i}$ and $Z_{yk}(i) = \frac{\iint_{S_i} \frac{\partial Z_k(x, y)}{\partial y}\,dx\,dy}{S_i}$; then,
$$G_x(i) = \sum_{k=1}^{n} a_k Z_{xk}(i) + \varepsilon_x, \qquad G_y(i) = \sum_{k=1}^{n} a_k Z_{yk}(i) + \varepsilon_y$$
For an SHWFS with $m$ effective sub-apertures, when restoring the phase represented by $n$-term Zernike polynomials, the relationship between the sub-aperture slopes and the Zernike coefficients is expressed in matrix form:
$$\begin{bmatrix} G_x(1) \\ G_y(1) \\ G_x(2) \\ G_y(2) \\ \vdots \\ G_x(m) \\ G_y(m) \end{bmatrix} = \begin{bmatrix} Z_{x1}(1) & Z_{x2}(1) & \cdots & Z_{xn}(1) \\ Z_{y1}(1) & Z_{y2}(1) & \cdots & Z_{yn}(1) \\ Z_{x1}(2) & Z_{x2}(2) & \cdots & Z_{xn}(2) \\ Z_{y1}(2) & Z_{y2}(2) & \cdots & Z_{yn}(2) \\ \vdots & \vdots & & \vdots \\ Z_{x1}(m) & Z_{x2}(m) & \cdots & Z_{xn}(m) \\ Z_{y1}(m) & Z_{y2}(m) & \cdots & Z_{yn}(m) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_{2m} \end{bmatrix}$$
This matrix equation can be written compactly as
$$G = DA + \varepsilon$$
where $G$ is the slope vector, $D$ is the mode coefficient reconstruction matrix, and $A$ is the vector of the corresponding Zernike coefficients.
At this point, the wavefront reconstruction process can be regarded as solving the above linear equations. The least-squares solution of Equation (7) can be obtained from the measured slope vector $G$:
$$A = D^{+} G$$
where $D^{+}$ is the pseudo-inverse of $D$, which can be computed via singular value decomposition (SVD).
Therefore, when the phase is measured after calibration is completed, the Zernike coefficient vector $A$ can be obtained as long as the slope $G$ of each sub-aperture is calculated, and the phase to be measured can then be reconstructed by Equation (3).
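The reconstruction step can be sketched in a few lines of NumPy; the assembly of the reconstruction matrix $D$ from the averaged Zernike gradients is assumed to have been done elsewhere, and the helper names are ours.

```python
import numpy as np

def reconstruct_zernike_coeffs(D, g):
    """Least-squares solution A = D^+ g of Equation (8).
    D: (2m, n) mode coefficient reconstruction matrix;
    g: (2m,) measured slope vector [G_x(1), G_y(1), ..., G_x(m), G_y(m)].
    np.linalg.pinv computes the pseudo-inverse via SVD."""
    return np.linalg.pinv(D) @ g

def reconstruct_phase(A, zernike_modes):
    """Evaluate Equation (3) (piston a_0 and residual omitted):
    zernike_modes is an (n, H, W) stack of Zernike modes sampled on the
    pupil grid; the result is the phase map sum_k a_k Z_k(x, y)."""
    return np.tensordot(A, zernike_modes, axes=1)
```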

2.2. Centroid Detection Based on Autocorrelation Method

Sub-spot localization is the first step for an SHWFS to reconstruct the wavefront. Analysis of the spot distribution detected by an SHWFS shows that the actual spot distribution is approximately normal [49]. This is because the illumination area of the detection light on the retina is approximately circular, and it remains approximately circular when it reaches the CCD array of the wavefront sensor through the retinal imaging system. Although the spot detected by the CCD is affected by the aberrations of the human eye, these aberrations mainly manifest as defocus and astigmatism; the higher-order aberration components are relatively small, and defocus and astigmatism are both centrally symmetric. It is therefore reasonable to approximate the actual spot distribution by a normal distribution. Based on this premise, we introduced the autocorrelation method into the centroid detection algorithm. A response value is obtained through the convolution of the signal with the autocorrelator template; the response is largest when the amplitude spectrum of the signal is consistent with the amplitude characteristics of the template. By the convolution theorem of the Fourier transform, convolution in the spatial domain corresponds to multiplication in the frequency domain, so the autocorrelator response can be expressed as
$$C_R(x, y) = FT^{-1}\left\{ FT\left[ I(X, Y) \right] \times FT\left[ H(X, Y) \right] \right\}$$
where $C_R(x, y)$ represents the matched-filter response, $FT$ denotes the Fourier transform, $I(X, Y)$ is the spot pattern, and $H(X, Y)$ is the autocorrelator template.
The key to the autocorrelation method is determining how to select an autocorrelator template that is consistent with the amplitude characteristics of the target signal. For a retinal adaptive imaging system, according to the parameters of the optical system and the distribution pattern of the illumination light during the aberration detection of the human eye, the distribution pattern of the spot in the wavefront sensor can be calculated as the autocorrelator template. Due to the dynamic characteristics of the human eye aberration analyzed above, the influence of the human eye aberration can be ignored when selecting the autocorrelator template, and the amplitude spectrum of the actual spot can also be well estimated.
In practical application, we calculated the convolution response of the actual spot to the autocorrelator template with different center coordinates, and we took the center coordinate of the autocorrelator template at the maximum value of the response as the calculated spot centroid:
$$(x_i, y_i) = \arg\max_{x, y} C_R(x, y)$$
When a spot's centroid position is found, the local area around that spot in the correlation matrix is reset to zero. The next spot centroid is then extracted, and the process repeats until all k centroids in the spot array have been traversed (k is the number of effective microlens sub-apertures). A sketch of this procedure is given below.
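The following is a minimal sketch of the matched-filter response of Equation (9) and the iterative peak extraction of Equation (10), assuming the template is the same size as the full spot map with its peak at the array center; the clearing radius is an illustrative value, not a parameter from the paper.

```python
import numpy as np

def correlator_response(spot_map, template):
    """Equation (9) via the convolution theorem.  ifftshift moves the
    template center to the origin so that response peaks align with the
    spot positions in the full map."""
    H = np.fft.fft2(np.fft.ifftshift(template))
    return np.fft.ifft2(np.fft.fft2(spot_map) * H).real

def extract_centroids(spot_map, template, k, clear_radius=10):
    """Iterative peak extraction, Equation (10): take the template center
    coordinate at the response maximum as the spot centroid, zero the
    local area, and repeat for all k effective sub-apertures."""
    CR = correlator_response(spot_map, template)
    centroids = []
    for _ in range(k):
        y, x = np.unravel_index(np.argmax(CR), CR.shape)
        centroids.append((x, y))
        CR[max(0, y - clear_radius):y + clear_radius + 1,
           max(0, x - clear_radius):x + clear_radius + 1] = 0.0  # reset local area
    return centroids
```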
The autocorrelation algorithm has the disadvantage that its error is strongly affected by the sampling frequency. To maintain the accuracy of centroid detection, the processed pixels must be sufficiently small. Accordingly, a Gaussian-weighted bilinear interpolation method was used to determine the centroid position of the spot within a small window around the maximum of the correlation response. As shown in Figure 2, the interpolation algorithm subsamples the pixels to obtain sub-pixels and performs bilinear interpolation to obtain their respective gray values; each sub-pixel contributes its gray value with a different weight. For a Gaussian-distributed spot, the corresponding Gaussian-distribution template is adopted; a sketch of this refinement step follows.
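The sketch below shows one way to implement the Gaussian-weighted bilinear interpolation described above; the window size, subsampling factor, and Gaussian width are our assumptions, and the peak is assumed not to lie at the image border.

```python
import numpy as np

def subpixel_centroid(CR, peak, window=3, factor=8, sigma=1.5):
    """Refine a correlation peak to sub-pixel precision: sample a small
    window around the peak on a fine grid, assign gray values by bilinear
    interpolation, weight them with a Gaussian centered on the peak, and
    return the weighted centroid.  Parameter values are illustrative."""
    py, px = peak
    # sub-pixel offsets in [-window, +window] with 1/factor spacing
    off = np.arange(-window * factor, window * factor + 1) / factor
    dy, dx = np.meshgrid(off, off, indexing="ij")
    yy, xx = py + dy, px + dx
    y0, x0 = np.floor(yy).astype(int), np.floor(xx).astype(int)
    fy, fx = yy - y0, xx - x0
    # bilinear interpolation of CR at the sub-pixel positions
    vals = (CR[y0, x0] * (1 - fy) * (1 - fx) + CR[y0 + 1, x0] * fy * (1 - fx)
            + CR[y0, x0 + 1] * (1 - fy) * fx + CR[y0 + 1, x0 + 1] * fy * fx)
    w = np.exp(-(dx**2 + dy**2) / (2 * sigma**2))   # Gaussian weights
    g = vals * w
    return (xx * g).sum() / g.sum(), (yy * g).sum() / g.sum()
```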
When the template moves near the actual centroid of the spot without the influence of noise or multi-layer reflected stray light, the relative error between the template and the spot distribution is shown in Figure 3; here, only the abscissa direction is considered. From the error curve in Figure 3, it is found that the relative error is minimum when the center of the autocorrelation template coincides with the center of the spot, and the relative error increases with the distance of the autocorrelation template from the center of the spot. Here, it is assumed that the autocorrelation template parameters calculated according to the optical system parameters coincide with the Gaussian distribution spot parameters. Therefore, when the autocorrelation template center coincides with the spot centroid, the relative error is 0.
Because the centroid detection algorithm based on the autocorrelation method calculates centroid positions over the entire spot array, rather than within the region of a particular sub-aperture, the wavefront can still be reconstructed effectively by matching each calculated spot centroid to its microlens with a suitable algorithm. Therefore, the algorithm can also be used to expand the dynamic range of an SHWFS. The processing speed of the autocorrelation method is 33 fps, which basically meets real-time requirements. In the next section, we discuss in detail a new spot-matching algorithm, built on the above detection algorithm, to expand the dynamic range of the SHWFS.

2.3. Spot-Matching Network

We treat the problem of matching spots with sub-apertures as a classification problem. First, the microlens array is numbered using one-hot encoding to facilitate neural network training. Then, the preprocessed centroid coordinates and the label of each sub-aperture spot are used as the input layer, which is propagated forward through the hidden layers. Finally, the predicted probability of each sub-aperture for a spot is generated by the output layer. The network structure is shown in Figure 4.
The input of the neural network is a set of centroid coordinate points, i.e., a two-dimensional coordinate set with no intrinsic order. Specifically, as Figure 5 shows, the order of the centroid coordinate data affects neither the position nor the properties of the points on the spot map; the representation is insensitive to the order of the data. This means that a model processing the centroid coordinate data must remain invariant to different arrangements of the data.
For the disorder problem, we adopted the design of symmetric functions:
$$f(x_1, x_2, \ldots, x_n) \equiv f(x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(n)}), \quad \forall x_i \in D$$
For example, Sum and Max are common symmetric functions.
$$f(x_1, x_2, \ldots, x_n) = \max\{x_1, x_2, \ldots, x_n\}$$
$$f(x_1, x_2, \ldots, x_n) = x_1 + x_2 + \cdots + x_n$$
Although a direct symmetry operation on the data satisfies permutation invariance, it easily loses a large amount of geometric and otherwise meaningful information. For example, taking the maximum value retains only the farthest point, and taking the average retains only the center of gravity. When two-dimensional points are expressed in a high-dimensional space, however, the representation is redundant. To reduce the loss of information, we therefore mapped each point to a high-dimensional space and performed the symmetry operations there, so that enough point information is retained.
Here, a symmetric function takes $n$ vectors as input and outputs a new vector independent of the input order, approximating a general function defined on the point set:
$$f(x_1, \ldots, x_n) \approx g\left( h(x_1), \ldots, h(x_n) \right)$$
where $f : 2^{\mathbb{R}^N} \rightarrow \mathbb{R}$, $h : \mathbb{R}^N \rightarrow \mathbb{R}^K$, and $g : \underbrace{\mathbb{R}^K \times \cdots \times \mathbb{R}^K}_{n} \rightarrow \mathbb{R}$ is a symmetric function.
As shown in Figure 6, our basic module is simple: we approximate $h$ with a multi-layer perceptron (MLP) network and $g$ with the composition of a single-variable function and a max pooling function. In this way, after the network performs a certain degree of feature extraction on each point, the global feature can be extracted from the overall point set through max pooling. In our network, the MLP was realized by convolutions with shared weights. The size of the first-layer convolution kernel was $1 \times 2$ (because the dimensions of each point are X and Y), and each subsequent convolution kernel was $1 \times 1$. After two MLPs, 512-dimensional features were extracted for each point and then transformed into a $1 \times 512$ global feature through max pooling. After the last MLP, the output was the classification scores for the $k$ classes. All layers except the last one included ReLU and batch normalization.
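A sketch of this module in PyTorch, under our reading of the text: the first convolution kernel is $1 \times 2$, subsequent kernels are $1 \times 1$, per-point features are 512-dimensional and max-pooled into a global feature, and all layers except the last include ReLU and batch normalization. The intermediate layer widths and the classifier head are our assumptions.

```python
import torch
import torch.nn as nn

class SpotMatchNet(nn.Module):
    """Permutation-invariant spot-matching network (sketch)."""
    def __init__(self, k):               # k = number of sub-aperture classes
        super().__init__()
        self.mlp = nn.Sequential(        # shared-weight per-point MLPs
            nn.Conv2d(1, 64, kernel_size=(1, 2)), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 512, kernel_size=1), nn.BatchNorm2d(512), nn.ReLU(),
        )
        self.head = nn.Sequential(       # classifier on the global feature
            nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Linear(256, k),           # raw scores; softmax applied in the loss
        )

    def forward(self, pts):              # pts: (batch, n_points, 2) centroids
        x = pts.unsqueeze(1)             # -> (batch, 1, n_points, 2)
        x = self.mlp(x)                  # -> (batch, 512, n_points, 1)
        x = x.max(dim=2).values.squeeze(-1)  # max pool over points -> (batch, 512)
        return self.head(x)

# usage: scores = SpotMatchNet(k=256)(torch.randn(8, 256, 2))
```

Because the max pooling runs over the point dimension, permuting the input points leaves the global feature, and hence the classification scores, unchanged.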
The loss function used in the network was the softmax loss. The formula is as follows:
$$Loss = -\sum_{i=1}^{N} y_i \log \hat{y}_i$$
where $N$ is the number of samples, $y_i$ is the true sample label, and $\hat{y}_i$ is the predicted label. The loss function represents the difference between the forward-pass result of each iteration of the neural network and the true value; the smaller the loss, the more accurate the network's classification.

3. Results

3.1. Data Preparation and Implementation Details

To verify the effectiveness of the algorithm, we carried out a series of numerical simulation experiments. Table 1 shows the key parameters of the simulation. The incident wavefront was derived from the first 15 Zernike polynomials (excluding piston and tilt), because the first 15 Zernike polynomials are sufficient for general wavefront sensing problems [50]. The Zernike coefficients were obtained from the randomly weighted Karhunen–Loève [51] atmospheric turbulence model. To enrich the dataset and test the performance of the new algorithm, we randomly generated 10,000 wavefronts. We randomly shuffled the dataset and selected 9000 sets of centroid coordinates with their sub-aperture labels as the training set and the remaining 1000 sets as the test set. Sensor noise was not considered in the simulation.
The experimental environment was an Intel(R) Core(TM) i7-9700K CPU @ 3.60 GHz, 16 GB DDR4 RAM, Windows 10, and an NVIDIA GeForce RTX 2070 SUPER GPU. We performed network training in PyTorch 1.3 (FAIR, Menlo Park, CA, USA) and used the Adam optimizer as our optimization algorithm. Other experimental details of the neural network training are shown in Table 2.
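A hypothetical training-loop sketch consistent with Table 2 (learning rate 0.001, 50 epochs, batch size 8); `train_loader`, the class count, and the use of an exponential schedule for the quoted decay rate are our assumptions. Adam's default first-moment coefficient already matches the 0.9 momentum in Table 2.

```python
import torch
import torch.nn as nn

model = SpotMatchNet(k=256)        # 16 x 16 lenslets; k = 256 assumed
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # beta1 = 0.9
criterion = nn.CrossEntropyLoss()  # softmax loss from Section 2.3
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(50):
    for centroids, labels in train_loader:  # assumed DataLoader of (points, labels)
        optimizer.zero_grad()
        loss = criterion(model(centroids), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                        # exponential learning-rate decay
```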

3.2. Evaluation Indicators

The dynamic range is one of the most important metrics of a wavefront sensor, and we chose the maximum root mean square (RMS) wavefront error as the dynamic range indicator. Zernike polynomials are orthogonal and linearly independent of each other, and their coefficients are positively correlated with the RMS. In view of these properties, the dynamic range can be measured by increasing the coefficient of one Zernike term while all other coefficients remain zero. The dynamic range is taken as the largest RMS for which the error between the reconstructed wavefront and the actual wavefront stays below the 1% threshold. The improvement in the dynamic range is calculated as
$$\delta_{RMS} = \frac{RMS_{\text{ours}} - RMS_{\text{classical}}}{RMS_{\text{classical}}} \times 100\%$$
where $RMS_{\text{ours}}$ is the largest RMS of the wavefront measured by the proposed algorithm, and $RMS_{\text{classical}}$ is the largest RMS of the wavefront measured by the classical algorithm.
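The evaluation procedure can be sketched as a sweep over a single Zernike coefficient; `make_zernike_wavefront` and `simulate_spot_map` are hypothetical placeholders for the simulation pipeline described above, and the step size is illustrative.

```python
import numpy as np

def dynamic_range(reconstruct, mode_index, rms_step=0.01, threshold=0.01):
    """Increase one Zernike coefficient (all other terms zero) and return
    the largest wavefront RMS whose reconstruction error stays below the
    1% threshold; helper functions are hypothetical placeholders."""
    rms = rms_step
    while True:
        w_true = make_zernike_wavefront(mode_index, rms)
        w_rec = reconstruct(simulate_spot_map(w_true))
        rel_err = np.sqrt(np.mean((w_rec - w_true) ** 2)) / rms
        if rel_err >= threshold:
            return rms - rms_step      # last RMS still reconstructed correctly

        rms += rms_step

def improvement(rms_ours, rms_classical):
    """The delta_RMS formula above, in percent."""
    return (rms_ours - rms_classical) / rms_classical * 100.0
```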

3.3. Qualitative Results

To verify the effectiveness of the proposed algorithm, a comparative experiment with the traditional wavefront reconstruction algorithm was carried out. Since the local slope of the wavefront to be measured is related to the spot centroid coordinates, and the wavefront can be expressed in terms of Zernike polynomials, the corresponding spot centroid distribution can be calculated from the wavefront given by the Zernike coefficients. We reconstructed the wavefront with both the traditional algorithm and ours, and we calculated the relative error with respect to the reference wavefront corresponding to the Zernike coefficients. We gradually increased the Zernike coefficient and repeated the above process until the wavefronts reconstructed by the two algorithms produced a large error. The results for $Z_2^0$ and $Z_3^3$ are shown in Figure 7 and Figure 8, respectively. Some spots in the spot array partially or entirely exceed the corresponding sub-aperture pixel region; at this point, the classical algorithm cannot correctly reconstruct such a wavefront, whereas our algorithm recovers the correct wavefront information well.

3.4. Quantitative Results

Through the numerical simulation, the dynamic range of the method was quantitatively evaluated. We calculated the dynamic range of our algorithm and the classical algorithm for the first 15 terms of the Zernike polynomials, excluding piston and tilt. Compared with the classical algorithm, our algorithm improved the dynamic range of the Zernike polynomials by 62.86% to 183.87%. The results are shown in Figure 9. We can see that our method improves the dynamic range of low-order aberrations most markedly, which conforms to the basic characteristics of human eye aberrations; for the higher-order aberrations, there is also a certain degree of improvement.

3.5. Limitation

It should be noted that this method cannot handle two extreme cases. As shown in Figure 10, each wavefront detection spot is drawn in the same color as its corresponding microlens. In the first case, the spots of the black and red microlenses are imaged in adjacent pixel regions of the CCD and partially overlap. It is then impossible to distinguish whether the extended spot is caused by the overlap of two adjacent spots or by the aberration modulation of a single spot, so the local aberrations in this region cannot be calculated accurately. In the second case, the spot of the blue microlens enters the aperture range of the green microlens while the spot of the green microlens enters the aperture range of the blue microlens; due to the limitations of the SHWFS principle, it is impossible to decide whether the blue spot belongs to the blue or the green microlens, and likewise for the green spot. These two extreme cases inevitably lead to wavefront reconstruction errors, so the analysis in this paper assumes spot distributions in which they do not occur.

4. Conclusions

Due to its limited dynamic range, the SHWFS has restricted applicability in wavefront detection with large aberrations, such as human eye aberration detection. The traditional centroid algorithm and the many improved weighted centroid algorithms calculate the spot centroid within the microlens sub-aperture, which cannot meet this requirement. The autocorrelation-based centroid algorithm used in this paper removes the microlens sub-aperture limitation of the traditional wavefront reconstruction algorithm without sacrificing centroid detection accuracy, and it associates each spot centroid with its corresponding microlens through a neural network to reconstruct the wavefront. The comparative experimental results show that our algorithm effectively improves the dynamic range of the wavefront sensor without adding any hardware, improving the dynamic range for the first 15 Zernike aberrations by 62.86% to 183.87%.

Author Contributions

W.Y. programmed, analyzed the results, and wrote the paper; J.W. was responsible for project administration and the methodology; B.W. provided academic guidance and revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Department of Jilin Province, China (20210101146JC), the National Natural Science Foundation of China (62135015), and the National Key Research and Development Program of China (2020YFA0714102).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ares, J.; Mancebo, T.; Bará, S. Position and Displacement Sensing with Shack-Hartmann Wave-Front Sensors. Appl. Opt. 2000, 39, 1511–1520.
2. Hartmann, J. Bemerkungen über den Bau und die Justierung von Spektrographen. Z. Instrum. 1900, 20, 47.
3. Shack, R.V. Production and Use of a Lenticular Hartmann Screen. J. Opt. Soc. Am. 1971, 61, 656–661.
4. Vargas, J.; González-Fernandez, L.; Quiroga, J.A.; Belenguer, T. Shack–Hartmann Centroid Detection Method Based on High Dynamic Range Imaging and Normalization Techniques. Appl. Opt. 2010, 49, 2409–2416.
5. Neal, D.R.; Copland, J.; Neal, D.A. Shack-Hartmann Wavefront Sensor Precision and Accuracy. In Proceedings of the Advanced Characterization Techniques for Optical, Semiconductor, and Data Storage Components, Seattle, WA, USA, 9–11 July 2002.
6. Primot, J. Theoretical Description of Shack–Hartmann Wave-Front Sensor. Opt. Commun. 2003, 222, 81–92.
7. Wang, J.Y.; Silva, D.E. Wave-Front Interpretation with Zernike Polynomials. Appl. Opt. 1980, 19, 1510–1518.
8. Soloviev, O.; Vdovin, G. Hartmann-Shack Test with Random Masks for Modal Wavefront Reconstruction. Opt. Express 2005, 13, 9570–9584.
9. Thomas, S.; Fusco, T.; Tokovinin, A.; Nicolle, M.; Rousset, G. Comparison of Centroid Computation Algorithms in a Shack–Hartmann Sensor. Mon. Not. R. Astron. Soc. 2006, 371, 323–336.
10. Li, X.; Li, X.; Wang, C. Optimum Threshold Selection Method of Centroid Computation for Gaussian Spot. In Proceedings of the AOPC: Image Processing and Analysis, Beijing, China, 5–7 May 2015.
11. Lardière, O.; Conan, R.; Clare, R.; Bradley, C.; Hubin, N. Compared Performance of Different Centroiding Algorithms for High-Pass Filtered Laser Guide Star Shack-Hartmann Wavefront Sensors. Proc. SPIE Int. Soc. Opt. Eng. 2010, 7736, 821–835.
12. Ma, X.; Rao, C.; Zheng, H. Error Analysis of CCD-Based Point Source Centroid Computation under the Background Light. Opt. Express 2009, 17, 8525–8541.
13. Leroux, C.; Dainty, C. Estimation of Centroid Positions with a Matched-Filter Algorithm: Relevance for Aberrometry of the Eye. Opt. Express 2010, 18, 1197–1206.
14. Kong, F.; Polo, M.C.; Lambert, A. Centroid Estimation for a Shack–Hartmann Wavefront Sensor Based on Stream Processing. Appl. Opt. 2017, 56, 6466–6475.
15. Vargas, J.; Restrepo, R.; Estrada, J.C.; Sorzano, C.O.; Du, Y.Z.; Carazo, J.M. Shack-Hartmann Centroid Detection Using the Spiral Phase Transform. Appl. Opt. 2012, 51, 7362–7367.
16. Vargas, J.; Restrepo, R.; Belenguer, T. Shack-Hartmann Spot Dislocation Map Determination Using an Optical Flow Method. Opt. Express 2014, 22, 1319–1329.
17. Schwiegerling, J. History of the Shack Hartmann Wavefront Sensor and Its Impact in Ophthalmic Optics; SPIE Optical Engineering + Applications; SPIE: Bellingham, WA, USA, 2014; Volume 9186.
18. van Ginkel, R.; Mechó, M.; Cardona, G.; González-Méijome, J.M. The Effect of Accommodation on Peripheral Refraction under Two Illumination Conditions. Photonics 2022, 9, 364.
19. Canovas, C.; Ribak, E.N. Comparison of Hartmann Analysis Methods. Appl. Opt. 2007, 46, 1830–1835.
20. Zhao, S.; Cheng, X. Application and Development of Wavefront Sensor Technology. Int. J. Mater. Sci. Appl. 2017, 6, 154–159.
21. Sakharov, A.M.; Baryshnikov, N.V.; Karasik, V.E.; Sheldakova, J.V.; Kudryashov, A.; Nikitin, A. A Method for Reconstructing the Equation of the Aspherical Surface of Mirrors in an Explicit Form Using a Device with a Wavefront Sensor. In Proceedings of the Optical Manufacturing and Testing XIII, Virtual, 24 August–4 September 2020.
22. Rocktäschel, M.; Tiziani, H.J. Limitations of the Shack–Hartmann Sensor for Testing Optical Aspherics. Opt. Laser Technol. 2002, 34, 631–637.
23. Neal, D.R.; Pulaski, P.; Raymond, T.D.; Neal, D.A.; Wang, Q.; Griesmann, U. Testing Highly Aberrated Large Optics with a Shack-Hartmann Wavefront Sensor. In Proceedings of the Advanced Wavefront Control: Methods, Devices, and Applications, San Diego, CA, USA, 6–7 August 2003.
24. Li, C. Three-Dimensional Surface Profile Measurement of Microlenses Using the Shack–Hartmann Wavefront Sensor. J. Microelectromech. Syst. 2012, 21, 530–540.
25. Sheldakova, J.; Kudryashov, A.; Zavalova, V.; Romanov, P. Shack-Hartmann Wavefront Sensor Versus Fizeau Interferometer for Laser Beam Measurements. In Proceedings of the Laser Resonators and Beam Control XI, San Jose, CA, USA, 26–27 January 2009.
26. Murphy, K.; Burke, D.; Devaney, N.; Dainty, C. Experimental Detection of Optical Vortices with a Shack-Hartmann Wavefront Sensor. Opt. Express 2010, 18, 15448–15460.
27. Li, T.; Huang, L.; Gong, M. Wavefront Sensing for a Nonuniform Intensity Laser Beam by Shack–Hartmann Sensor with Modified Fourier Domain Centroiding. Opt. Eng. 2014, 53, 044101.
28. Alexandrov, A.; Rukosuev, A.L.; Zavalova, V.Y.; Romanov, P.; Samarkin, V.V.; Kudryashov, A.V. Adaptive System for Laser Beam Formation. In Proceedings of the Laser Beam Shaping III, Seattle, WA, USA, 9–11 July 2002.
29. Leroux, C.; Dainty, C. A Simple and Robust Method to Extend the Dynamic Range of an Aberrometer. Opt. Express 2009, 17, 19055–19061.
30. Pfund, J.; Lindlein, N.; Schwider, J. Dynamic Range Expansion of a Shack–Hartmann Sensor by Use of a Modified Unwrapping Algorithm. Opt. Lett. 1998, 23, 995–997.
31. Yoon, G.-Y.; Pantanelli, S.; Nagy, L.J. Large-Dynamic-Range Shack-Hartmann Wavefront Sensor for Highly Aberrated Eyes. J. Biomed. Opt. 2006, 11, 030502.
32. Lindlein, N.; Pfund, J.; Schwider, J. Algorithm for Expanding the Dynamic Range of a Shack-Hartmann Sensor by Using a Spatial Light Modulator. Opt. Eng. 2001, 40, 837–840.
33. Gao, Z.; Li, X.; Ye, H. Large Dynamic Range Shack–Hartmann Wavefront Measurement Based on Image Segmentation and a Neighbouring-Region Search Algorithm. Opt. Commun. 2019, 450, 190–201.
34. Guo, H.; Korablinova, N.; Ren, Q.; Bille, J. Wavefront Reconstruction with Artificial Neural Networks. Opt. Express 2006, 14, 6456–6462.
35. Suárez Gómez, S.L.; González-Gutiérrez, C.; García Riesgo, F.; Sánchez Rodríguez, M.L.; Iglesias Rodríguez, F.J.; Santos, J.D. Convolutional Neural Networks Approach for Solar Reconstruction in SCAO Configurations. Sensors 2019, 19, 2233.
36. Xu, Z.; Zhao, M.; Zhao, W.; Dong, L.; Bing, X. Wavefront Reconstruction of a Shack-Hartmann Sensor with Insufficient Lenslets Based on an Extreme Learning Machine. Appl. Opt. 2020, 59, 4768–4774.
37. He, Y.; Liu, Z.; Ning, Y.; Li, J.; Xu, X.; Jiang, Z. Deep Learning Wavefront Sensing Method for Shack-Hartmann Sensors with Sparse Sub-Apertures. Opt. Express 2021, 29, 17669–17682.
38. Li, Z.; Li, X. Centroid Computation for Shack-Hartmann Wavefront Sensor in Extreme Situations Based on Artificial Neural Networks. Opt. Express 2018, 26, 31675–31692.
39. González-Gutiérrez, C.; Sánchez-Rodríguez, M.L.; Calvo-Rolle, J.L.; de Cos Juez, F.J. Multi-GPU Development of a Neural Networks Based Reconstructor for Adaptive Optics. Complexity 2018, 2018, 5348265.
40. González-Gutiérrez, C.; Santos, J.D.; Martínez-Zarzuela, M.; Basden, A.G.; Osborn, J.; Díaz-Pernas, F.J.; de Cos Juez, F.J. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems. Sensors 2017, 17, 1263.
41. Seifert, L.; Liesener, J.; Tiziani, H.J. The Adaptive Shack–Hartmann Sensor. Opt. Commun. 2003, 216, 313–319.
42. Platt, B.C.; Shack, R. History and Principles of Shack-Hartmann Wavefront Sensing. J. Refract. Surg. 2001, 17, S573–S577.
43. Southwell, W.H. Wave-Front Estimation from Wave-Front Slope Measurements. JOSA 1980, 70, 998–1006.
44. Hudgin, R.H. Optimal Wave-Front Estimation. JOSA 1977, 67, 378–382.
45. Fried, D.L. Least-Square Fitting a Wave-Front Distortion Estimate to an Array of Phase-Difference Measurements. JOSA 1977, 67, 370–375.
46. Cubalchini, R. Modal Wave-Front Estimation from Phase Derivative Measurements. JOSA 1979, 69, 972–977.
47. Ríos, S.; Acosta, E.; Bará, S. Hartmann Sensing with Albrecht Grids. Opt. Commun. 1997, 133, 443–453.
48. Noll, R.J. Zernike Polynomials and Atmospheric Turbulence. JOSA 1976, 66, 207–211.
49. Liang, J.; Grimm, B.; Goelz, S.; Bille, J.F. Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann–Shack Wave-Front Sensor. JOSA A 1994, 11, 1949–1957.
50. Rukosuev, A.; Nikitin, A.; Belousov, V.; Sheldakova, J.; Toporovsky, V.; Kudryashov, A. Expansion of the Laser Beam Wavefront in Terms of Zernike Polynomials in the Problem of Turbulence Testing. Appl. Sci. 2021, 11, 12112.
51. Roddier, N.A. Atmospheric Wavefront Simulation Using Zernike Polynomials. Opt. Eng. 1990, 29, 1174–1180.
Figure 1. Schematic diagram of the SHWFS principle.
Figure 2. Schematic diagram of the Gaussian-weighted bilinear interpolation method.
Figure 3. Relative error corresponding to template movement near the centroid of the spot.
Figure 4. Spot-matching neural network structure.
Figure 5. Disorder characteristics of the data.
Figure 6. Basic modules of the neural network.
Figure 7. $Z_2^0$: (a) reference wavefront; (b) spot array; (c) wavefront reconstructed with the classical algorithm; (d) wavefront reconstructed with our algorithm.
Figure 8. $Z_3^3$: (a) reference wavefront; (b) spot array; (c) wavefront reconstructed with the classical algorithm; (d) wavefront reconstructed with our algorithm.
Figure 9. The dynamic range of the algorithms.
Figure 10. Schematic diagram of overlap and intersection of wavefront detection spots.
Table 1. The key parameters of the simulation.

Parameter | Value
Focal length of the lenslets | 6.5 mm
Wavelength | 500 nm
Lenslet numbers | 16 × 16
Lenslet size | 500 µm
Number of pixels in each sub-aperture | 20 × 20 pixels
Pixel size | 10 µm
Table 2. Training parameters.

Parameter | Value
Learning rate | 0.001
Epochs | 50
Batch size | 8
Momentum | 0.9
Decay rate | 0.5~0.99
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
