Article

Stray Light Correction Algorithm for High Performance Optical Instruments: The Case of Metop-3MI

Centre Spatial de Liège, STAR Institute, Université de Liège, 4031 Liège, Belgium
*
Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(6), 1354; https://doi.org/10.3390/rs14061354
Submission received: 31 January 2022 / Revised: 25 February 2022 / Accepted: 6 March 2022 / Published: 10 March 2022

Abstract

Stray light is a critical aspect for high performance optical instruments. When stray light control by design is insufficient to reach the performance requirement, correction by post-processing must be considered. This situation is encountered, for example, in the case of the Earth observation instrument 3MI, whose stray light properties are complex due to the presence of many ghosts distributed on the detector array. We implement an iterative correction method and discuss its convergence properties. Spatial and field binning can be employed to reduce the computation time, but at the cost of decreased performance. Interpolation of the stray light properties is required to achieve high performance correction. For that, two methods are proposed and tested. The first interpolates the stray light in the field domain, while the second applies a scaling operation based on a local symmetry assumption. Ultimately, the scaling method is selected and a stray light reduction by a factor of 58 is obtained at 2σ (129 at 1σ) for an extended scene illumination.

1. Introduction

Stray light (SL) degrades the performance of optical instruments through effects such as ghost reflections or scattering [1]. In high-end applications such as space instrumentation, SL is a key limiting factor. It must be controlled by design, for example by using baffles, apertures, or anti-reflection coatings [1,2,3]. However, if the desired performance cannot be reached by design, an additional post-processing step must be included to decrease the SL level.
This situation is encountered in the Metop-3MI Earth observation instrument [4,5,6,7]. The scientific objective of the mission is to study the chemical composition of the Earth’s atmosphere, analyze cloud properties, and measure the Earth’s albedo [4,5,6,7]. The instrument has an on-axis refractive configuration [4], with many lenses needed to correct optical aberrations over a wide field of view (FOV) of ±57°. Hence, many ghost paths reach the detector and, despite anti-reflection coatings, they degrade the radiometric accuracy beyond user requirements. This study discusses the SL correction algorithm developed for 3MI to reach satisfactory SL performance. Previously, the Earth observation instrument POLDER [8] had a similar optical configuration and also required an algorithm to decrease the SL. As users require ever higher-performing instruments, SL correction algorithms are likely to become the norm in future Earth observation missions (e.g., FLEX [9], ALTIUS [10], etc.). Furthermore, many other applications, including non-space optical systems, can benefit from this type of algorithm.
In Earth observation instruments, the SL level is often specified based on an extended scene illumination [11]. Referred to as a black and white (B and W) scene, it consists of illuminating half of the FOV with a bright uniform radiance Lmax, and the other half with a dark radiance Lref. Figure 1a shows the corresponding nominal image Inom (i.e., SL free) on the 3MI VNIR detector, with a signal Imax in the region illuminated at Lmax, and Iref = 0.1∙Imax on the other side (flat field is neglected). The detector has N × N pixels of 26 µm (N = 512); the corners are dark because the optical FOV is slightly smaller than the circle circumscribing the detector. The effective area of the detector is therefore the full area except for the corners.
To fulfill the 3MI mission requirements, the residual SL (ΔISL) for the B and W extended scene must be smaller than 0.017%∙Imax. The level is evaluated at the 2σ percentile (95.4%) over the effective detector area, excluding the region located at less than 5 pixels from the transition. We denote by dSL the residual SL on pixels from this requirement area, evaluated at 2σ unless stated otherwise. Thus, the requirement is expressed as Equation (1). In comparison, Figure 1b shows the SL map, ISL, of the instrument by design at 410 nm. This map was obtained by ray tracing simulation considering second order ghost reflections, by far the most dominant SL contributor in the system. With an initial 2σ level dSL = 0.97%∙Imax, the correction algorithm must decrease the SL by about 2 orders of magnitude (factor ≥ 58 at 2σ).
dSL ≤ 0.017% · Imax
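As an illustration of how this requirement can be evaluated numerically, a minimal sketch in Python/NumPy is given below; the function name, the effective_mask argument, and the handling of the transition guard band are assumptions and not part of the 3MI processing chain.

```python
import numpy as np

def residual_sl_2sigma(I_corr, I_nom, I_max, effective_mask, transition_cols=(256,), guard=5):
    """Evaluate dSL at the 2-sigma percentile (95.4%) over the effective detector
    area, excluding pixels closer than `guard` pixels to the bright/dark
    transition, and test it against Equation (1)."""
    residual = np.abs(I_corr - I_nom)
    mask = effective_mask.copy()
    for c in transition_cols:
        lo, hi = max(c - guard, 0), min(c + guard + 1, mask.shape[1])
        mask[:, lo:hi] = False            # guard band around the transition
    d_sl = np.percentile(residual[mask], 95.4)
    return d_sl, d_sl <= 0.017e-2 * I_max
```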
Several authors have developed methods for correcting SL. Deconvolution can be used to correct the SL contribution from scattering on optical surfaces [12,13,14]. This can occur due to lens roughness or contamination; it creates a simple SL profile which broadens the point-spread function (PSF) [1,15]. When the SL pattern is more complex, for example with ghosts distributed all over the detector with discontinuous geometries and strong field dependence, this approach is insufficient. In that case, we can take advantage of the fact that SL is a linear, additive phenomenon [8]. On a given pixel of the detector, the total SL is the sum of the contributions coming from all fields. If the SL dependence on the field is known, it can be used to estimate the total SL reaching the detector for any input scene. The estimated map is then subtracted from the measurement to obtain the corrected image. Jansson et al. [16,17] used that principle to iteratively estimate SL maps, assuming a shift-variant but rotationally symmetrical shape for the SL as a function of the field. For the POLDER instrument, Laherrere et al. [8] applied a similar approach by calibrating the SL coming from different regions of the FOV. As a direct extension, Zong et al. [18,19,20] used an inverse matrix formulation to correct SL without the need for iterations. This approach can be applied for SL correction in different kinds of instruments, including spectrographs [19] and hyperspectral instruments [21,22].
Based on these principles, a correction method was developed for the 3MI instrument. An iterative approach was used rather than the inverse matrix formulation due to its versatility for high performance correction. Convergence properties were studied and we showed that the residual error at the first iteration cannot always be neglected. Spatial and field binning were used to limit the computation time and the required computer memory; however, they affect the SL correction ratio. Furthermore, SL calibration on a restricted grid limits the performance; therefore, interpolation methods are introduced to deduce the SL at intermediate fields. At this stage, experimental measurements were not yet available and simulations were used throughout this study. The SL correction performance for 3MI was predicted by testing the method with data obtained by ray tracing simulations (410 nm) with the FRED® software [23]. This was performed by considering experimental limitations such as a realistic number of fields to calibrate and the modeling of detector noise.

2. SL Correction Method

2.1. Principles

We define the spatial point source transmittance (SPST) as the SL map when the instrument is illuminated by a point-like source, normalized to the nominal signal. Hence, we denote by SPSTi,j(x,y) the SL map as a function of the spatial coordinates (x,y) on the detector, for an illumination at field (i,j) such that Inom(i,j) = 1. The spatial and field coordinates, respectively (x,y) and (i,j), are integers in units of pixels ranging from 1 to N = 512 for the 3MI VNIR instrument. Because the PSF is sub-pixel, an illumination at field (i,j) provides a nominal signal of 0 on all pixels other than (i,j). Figure 2 shows examples of SPST maps for different fields in 3MI. They are composed of multiple ghosts distributed on the detector array. In each case, intense and localized ghosts are present around the nominal pixel (i,j), due to ghost reflections close to the focal plane.
The measured signal at the detector (Imes) is the sum of the nominal and SL signals (Equation (2)). When the instrument is illuminated by an extended scene, the total SL is the sum of the SL contributions from the different fields. Therefore, ISL is obtained with Equation (3), corresponding to the linear combination over the FOV of the SPST maps, modulated by the nominal signal at the corresponding field. This equation can also be written as a matrix-vector multiplication (Equation (4)), where the map Inom is reshaped as a vector of dimension N² × 1 and ASL is an N² × N² matrix whose columns are the SPST maps also reshaped as N² × 1 vectors (see Appendix A).
Imes = Inom + ISL
ISL(x,y) = Σi,j SPSTi,j(x,y) · Inom(i,j)
ISL = ASL · Inom
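For illustration, a minimal NumPy sketch of Equations (2)–(4) is given below. The function names, the dictionary holding the SPST maps, and the row-major vectorization convention are assumptions; note that at the full 512 × 512 resolution the matrix ASL would contain 6.9 × 10¹⁰ elements, so such a dense construction is only practical at reduced resolution or for small test cases.

```python
import numpy as np

def build_A_SL(spst, N):
    """Build A_SL column by column: the SPST map of field (i, j), reshaped as a
    vector, fills the column associated with that field (see Appendix A)."""
    A = np.zeros((N * N, N * N))
    for (i, j), spst_map in spst.items():
        A[:, (i - 1) * N + (j - 1)] = spst_map.ravel()   # column of field (i, j)
    return A

def estimate_SL(A_SL, I_nom):
    """Equations (3)/(4): linear combination of the SPST maps, each modulated by
    the nominal signal at the corresponding field."""
    return (A_SL @ I_nom.ravel()).reshape(I_nom.shape)
```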
From the matrix formulation of Equation (4), the nominal signal can be recovered with Equation (5), where id is the identity matrix [18,19,20]. However, it can be impractical to compute the inverse matrix when it is composed of many elements. For example, the full resolution matrix ASL for 3MI contains 6.9 × 10¹⁰ elements. Moreover, Equation (5) is only valid for a non-singular matrix and, even then, the inversion process can lead to error amplification. The condition number can be used to estimate the sensitivity to errors [24,25]. Finally, a practical limitation of Equation (5) is that it requires ASL to be square. This restricts the applications of spatial and field binning, as we will discuss later.
Inom = (id + ASL)^−1 · Imes
For these reasons, a more versatile iterative approach to SL correction is preferred for 3MI. The first iteration consists of estimating the SL (ISL,1) by modulating the ASL matrix by the measured signal. This provides a first estimation of the corrected image (Icorr,1). At the second iteration, ASL is modulated by Icorr,1 to obtain a better estimation of the SL (ISL,2). This iterative process is described in Equations (6) to (8), with ISL,p and Icorr,p, respectively, the estimated SL and corrected maps at iteration p. The more this process is repeated, the more accurate the correction. This approach can also be applied to hyperspectral instruments.
ISL,0 = 0
ISL,p = ASL · (Imes − ISL,p−1)
Icorr,p = Imes − ISL,p
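A minimal sketch of the iterative scheme of Equations (6)–(8) could look as follows (Python/NumPy; the function name and arguments are assumptions, not the operational 3MI code):

```python
import numpy as np

def correct_stray_light(I_mes, A_SL, n_iter=2):
    """Iterative SL correction of Equations (6)-(8): at each iteration the SL is
    re-estimated by modulating A_SL with the current corrected image, then
    subtracted from the measurement."""
    b = I_mes.ravel()
    I_SL = np.zeros_like(b)                  # Equation (6): I_SL,0 = 0
    for _ in range(n_iter):
        I_SL = A_SL @ (b - I_SL)             # Equation (7)
    return (b - I_SL).reshape(I_mes.shape)   # Equation (8): I_corr,p
```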

2.2. Convergence of the Iterative Method

The residual SL error at the first iteration is not always negligible, contrary to what some authors state [9,16,17]. At convergence (p = ∞), ISL,p = ISL and Icorr,p = Inom. By mathematical induction, we found that the residual convergence error ΔSLp at iteration p is provided by Equation (9). In this equation, (−ASL)^(p+1) denotes the matrix (−ASL) multiplied by itself p + 1 times. The error alternates between positive and negative values. Indeed, at the first iteration the SPST maps are modulated by a scene Imes larger than the nominal scene, thus the SL is overestimated. At the second iteration, the SPST maps are modulated by a scene smaller than the nominal one (Imes − ISL,1), therefore underestimating the SL. This provides a convergence profile similar to a damped harmonic oscillator.
ΔSLp = (−ASL)^(p+1) · Inom
Figure 3 shows the 2σ error dSLp at iteration p. The residual error at p = 0 corresponds to the initial SL of the instrument before applying the correction. To evaluate how the convergence is affected by the initial SL level of the instrument, the curve is computed for the situation of 3MI as well as for the cases where its initial SL level is multiplied by a factor s (s = 1/5, 1, 5 and 10). As Figure 3 shows, the residual error at iteration p = 1 increases when the initial SL level increases. A quadratic dependence is found between dSL1 and the initial SL level, which can also be deduced directly from Equation (9). In each case, the residual error decreases linearly on a semi-log scale, characteristic of an exponential (geometric) decay. The lower the initial SL level, the faster the convergence. The convergence curve is compared in Figure 3 to the performance requirement of Equation (1). With the SL level of 3MI, a single iteration is sufficient to fulfill the requirement. However, the convergence error is only one contributor among others; therefore, an additional iteration is preferable. With two iterations the convergence error is two orders of magnitude below the requirement and can be considered negligible.
It is interesting to note that this corresponds to a Jacobi-type iteration. Rigorously, convergence is ensured if the moduli of all eigenvalues of ASL are smaller than 1. Moreover, the convergence speed can be increased with the Gauss–Seidel method [26]. Each time the product of a line of ASL by a column of Icorr,p is calculated, it provides a new estimated value ISL,p+1 for the SL at the same line. Therefore, at that line a better estimation of the corrected scene is obtained and can be used for the estimation of the SL at the next line. This speeds up the convergence, as the SPST maps are modulated with a map closer to the real nominal scene (see Appendix C for details).
Another way to speed up the convergence is to apply the basic principles of the multigrid method [27]. This consists of first correcting the SL with a lower resolution matrix ASL, then increasing its resolution up to N² × N² as the iterations progress. The computation is therefore less cumbersome for the first iterations, saving time. The application of the Gauss–Seidel or multigrid methods is particularly interesting when multiple iterations are required to reach the target performance. Here, with only two iterations, it is not necessary.

3. Impact of Errors

In practice, SPST maps will be affected by errors and the matrix from Equation (4) becomes ASL* = ASL + ΔA. Errors can come from detector noise during the calibration, or from the interpolation process, as will be shown later. While ΔA has a negligible impact on the convergence speed, the limit of ΔSLp is no longer 0 but ΔSL = ΔA · Inom. A simple case to derive is when each element of ΔA is a zero-mean random variable with a Gaussian distribution; this is typically the case for detector noise. In that condition, the value of ΔSL on each pixel is a random variable with a mathematical expectation of zero. Its standard deviation is provided by Equation (10), a root sum square (RSS) of the standard deviations of the elements of ΔA modulated by Inom. If each element of ΔA has the same standard deviation δ, Equation (10) simplifies to Equation (11).
std[ΔSL(q)] = √( Σr=1..N² [ std[ΔA(q,r)] · Inom(r) ]² )
= δ · √( Σr=1..N² Inom(r)² )
Under a Gaussian distribution assumption, the standard deviation is the error at the 1σ percentile and the 2σ error is twice that value. Therefore, introducing Equation (11) into Equation (1) provides the maximum error δ that fulfills the performance requirement (Equation (12)). For the B and W extended scene, this corresponds to a 2σ error on the SPST maps below 2.4 · 10⁻⁵%.
δ ≤ Imax · 0.017% / ( 2 · √( Σk=1..N² Inom(k)² ) )
In practice, errors on the SPST maps do not necessarily have a zero-mean Gaussian distribution. However, this assumption provides an order-of-magnitude estimate of the maximum allowed error. This is useful for establishing a preliminary error budget or for specifying the accuracy to achieve in the calibration and interpolation processes.
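As an illustration, the maximum allowed error of Equation (12) can be computed as sketched below (the function name is an assumption; this is an order-of-magnitude estimate only):

```python
import numpy as np

def max_spst_error(I_nom, I_max, requirement=0.017e-2):
    """Equation (12): maximum standard deviation delta allowed on each element of
    Delta_A (zero-mean Gaussian assumption) so that the 2-sigma residual SL stays
    within the requirement."""
    rss = np.sqrt(np.sum(I_nom.astype(float).ravel() ** 2))
    return requirement * I_max / (2.0 * rss)
```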

4. Spatial and Field Binning

The quantity of data is very large when SPST maps are known in high resolution over the N × N field grid. Binning of the SPST maps can be performed to decrease the quantity of data and reduce the SL correction computation time. Two types of binning can be performed: spatial binning and field binning. The first decreases the resolution of the maps while the second decreases the number of maps considered in the SL calculation. Therefore, spatial binning reduces the number of lines in matrix ASL and field binning reduces the number of columns. As the inverse matrix approach requires a square dimension for ASL, spatial and field binning must reduce the number of columns and lines in the same ratio. With the iterative approach, ASL does not need to be a square matrix, thus any choice of spatial and field binning can be made.
Spatial binning consists of decreasing the resolution of the SPST maps. From a high-resolution map of dimension N × N, spatial binning to a dimension n × n is performed by averaging groups of N/n by N/n adjacent pixels. Figure 4 shows a high-resolution SPST map (a) and the same map binned with n = 16 (b). The SL correction process provides an estimation of the SL map in dimension n × n, an oversampling to N × N is thus necessary before subtracting it from the measured image.
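A minimal NumPy sketch of the spatial binning and of the subsequent oversampling back to the detector resolution is given below (function names are assumptions; N is assumed to be a multiple of n):

```python
import numpy as np

def spatial_bin(spst_map, n):
    """Spatial binning: average groups of (N/n) x (N/n) adjacent pixels, reducing
    an N x N map to n x n."""
    N = spst_map.shape[0]
    b = N // n
    return spst_map.reshape(n, b, n, b).mean(axis=(1, 3))

def spatial_oversample(binned_map, N):
    """Oversample an n x n estimated SL map back to N x N by pixel replication,
    before subtracting it from the measured image."""
    n = binned_map.shape[0]
    return np.kron(binned_map, np.ones((N // n, N // n)))
```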
Figure 5a shows the SL map for the B and W extended scene illumination, estimated with spatial binning (n = 16). The result is the same as if the SL had been computed with high-resolution maps and spatial binning applied afterward. The SL map undergoes a resolution loss, causing a deviation from the theoretical SL. Figure 5b shows the 2σ error dSL on the estimated SL as a function of the spatial binning. The smaller n is compared with N, the larger the error. Here, a binning as low as n = 100 still provides a residual error within the performance requirement and reduces the quantity of data by a factor (512/100)² ≈ 26. Nevertheless, the impact of spatial binning depends on the scene and other sources of errors will add up; therefore, n should not be too small in practice.
Field binning consists of averaging the SPST maps associated with neighboring fields. If the maps are known for the fields associated with each of the N × N pixels, field binning on an m × m grid consists of averaging the SPST maps associated with groups of N/m by N/m adjacent fields. An example is shown in Figure 4c, where the map from Figure 4a is binned in the field domain with m = 16. SL correction with field binning involves a linear combination of the SPST maps by the scene signal over the m × m field grid; therefore, the same binning must be applied to the scene as well. Moreover, for energy conservation the estimated SL is multiplied by (N/m)².
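For illustration, field binning and the corresponding SL estimation could be sketched as follows (the dictionary layout of the SPST maps and the function names are assumptions):

```python
import numpy as np

def field_bin(spst_full, N, m):
    """Field binning: average the SPST maps of (N/m) x (N/m) adjacent fields; the
    spatial resolution of each map is unchanged. spst_full[(i, j)] is assumed to
    hold the map of every field of the N x N grid (illustration only)."""
    b = N // m
    binned = {}
    for I in range(m):
        for J in range(m):
            group = [spst_full[(I * b + di + 1, J * b + dj + 1)]
                     for di in range(b) for dj in range(b)]
            binned[(I, J)] = np.mean(group, axis=0)
    return binned

def estimate_SL_field_binned(binned_spst, scene, N, m):
    """Modulate each binned SPST by the scene binned on the m x m grid; the
    result is multiplied by (N/m)^2 for energy conservation."""
    b = N // m
    scene_binned = scene.reshape(m, b, m, b).mean(axis=(1, 3))
    I_SL = np.zeros((N, N))
    for (I, J), spst_map in binned_spst.items():
        I_SL += spst_map * scene_binned[I, J]
    return I_SL * (N / m) ** 2
```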
Field binning has the same consequence for SL correction as if the SL were estimated on the N × N field grid, but with SPST maps modulated by a lower resolution scene. Hence, field binning does not affect the SL estimation from uniform regions of the FOV, but only from non-uniform regions. Consequently, the impact of field binning depends on the scene illuminating the instrument. Errors typically come from areas of the FOV with the highest spatial frequencies; however, it also depends on how they are distributed on the detector array. For example, field binning on the B and W scene of Figure 1a does not introduce any error on the SL estimation if m is even, as the transition is exactly at the center. However, a scene with a transition within a binned area of the FOV undergoes SL estimation errors. For example, Figure 6a shows a B and W scene with a tilted transition and field binning of m = 16. Due to the tilt, the field binning impacts the transition for any value of m. Figure 6b shows the corresponding SL map estimation. As the SPST maps have bright ghosts localized around the nominal pixels, and because SL errors occur only for fields around the transition, the estimated SL emphasizes an irregular transition.
Figure 6c shows the error dSL as a function of the field binning m for the B and W scene with a tilted transition. The error remains small even for small values of m, as the field binning only impacts the contribution from fields localized around the transition. Therefore, reduction in the quantity of data should preferably be performed with field binning rather than spatial binning. Nevertheless, the impact of field binning depends on the type of scene, and one with high contrast and multiple transitions is more sensitive. Ideally, m should be as close to N as possible within computational capabilities. In the case of 3MI, m = 128 is reasonable in terms of quantity of data and it provides an error significantly lower than the performance requirement.
Finally, it can be mentioned that binning can affect SPST calibration errors (detector noise, interpolation, etc.). As spatial binning averages the signal of adjacent pixels, the error on these pixels is averaged. If the error is random and zero-mean, the statistical error in the binned dimension is reduced too. The error is, however, not reduced if the distribution is not centered on zero. Moreover, the random noise reduction is usually offset by the intrinsic error that spatial binning introduces by reducing the SL map resolution. Field binning, on the other hand, averages the SL signal on the same pixels and therefore does not decrease the SPST noise. Indeed, with field binning we average signals from fields which would in any case be summed together on that pixel.

5. SPST Maps Interpolation

5.1. The Need for Interpolation

In practice, it is usually not possible to calibrate the SPST maps on the full N × N field grid. For example, in 3MI this would mean N² = 262,144 fields to calibrate. Instead, maps are calibrated on a restricted field grid with a lower density. A simple case is a k × k grid regularly spaced at detector level (k < N). Similarly to field binning, where maps are available on an m × m grid, correction with the restricted grid is performed by modulating the maps by the scene binned on the k × k grid. The SL map is multiplied by (N/k)² for energy conservation. If the inverse matrix approach is used, spatial binning with n = k must be applied so that ASL is square. Alternatively, spatial binning can be avoided if the columns of ASL are filled with N × N maps, attributing to each field the SPST calibrated at the closest field. With the iterative approach, this is not necessary as ASL does not need to be square.
While SL correction on a restricted grid involves similar practical considerations as field binning, its impact on the performance is worse. Indeed, using a restricted grid is equivalent to assuming that, within a group of fields associated with N/k × N/k pixels, the SPST is equal to that of the central field. Figure 7a–c shows the estimated SL for the B and W scene, obtained with a restricted grid for different values of k. When the grid density is small (k << N), the map has a dark background and bright localized features. These come from the bright localized ghosts present on the individual SPST maps, in particular around the nominal pixel. When the grid density increases, the number of localized features increases while their brightness decreases, and the map more closely resembles the theoretical SL map. The continuous line in Figure 7g shows the error on the SL estimation, dSL, as a function of the restricted grid size k. As it shows, the error increases very quickly as k decreases.
While spatial binning usually increases the SL estimation error, in the case of a low-density restricted grid it can be used to smooth the localized features from the individual SPST maps. Figure 7h shows the error dSL as a function of the spatial binning (N/n), for different values of k. As it shows, for each value of k there is an optimal spatial binning (N/n) minimizing dSL. For k < 256, the optimal binning is n = k. A stronger spatial binning is therefore required when the grid density is decreased. However, if k ≥ 256 the optimal spatial binning is n = 512. This is because the bright localized ghosts present on the individual SPST maps have a width larger than one pixel; therefore, when k ≥ 256 the localized ghosts from adjacent fields overlap each other. Figure 7d–f shows the estimated SL obtained with the same restricted grids as (a), (b) and (c) but with the optimal spatial binning applied. The dotted line in Figure 7g also shows the error as a function of the restricted grid size k, this time with the optimal spatial binning applied. As it shows, spatial binning improves the SL estimation in the case of a low grid density.
A dense grid is required for an accurate estimation of the SL; however, a sufficient number of maps cannot realistically be calibrated to achieve the 3MI performance requirement. Indeed, even with maps calibrated with no experimental errors, Figure 7g shows that at least about 180 × 180 maps are needed. In practice, it is estimated that about 27 × 27 (729 maps) can be measured within a realistic time frame. Consequently, maps at intermediate fields should be deduced numerically by interpolation. While the idea of using an interpolation method is vaguely stated in the literature [18], to the best of our knowledge there is no detailed description of how this can be achieved. In the following sections, we introduce two interpolation methods and discuss their performance: field domain interpolation and scaling interpolation.
Another way to mitigate the impact of a restricted grid is to calibrate the SL by illuminating groups of pixels simultaneously, instead of a single pixel. This is equivalent to calibrating the SPST over the N × N field grid and then applying field binning. However, the limitation is that the FOV subtended by a group of pixels varies with elevation and azimuth. Therefore, it is difficult in practice to illuminate precisely a group of pixels and not their neighbors, thus introducing either gaps or overlaps between fields. In POLDER, this is the reason why the SL was calibrated with an integrating sphere illuminating large areas of the detector [8]. In that case, the FOV was divided into 13 × 17 zones, which, in addition to the gaps and overlaps between the different zones, is equivalent to a strong field binning.

5.2. Field Domain Interpolation

We define the Field Point Source Transmittance (FPST) such that FPSTx,y(i,j) = SPSTi,j(x,y). While the SPST is the SL spatial distribution for a single field illumination, the FPST is the SL on a single pixel as a function of the field. In the matrix ASL, the SPST are the columns while the FPST are the lines.
If the SPST are calibrated on a regularly spaced field grid, we can plot for any pixel the FPST map as a function of the field (i,j). For example, Figure 8a shows the FPST map on pixel (x,y) = (359,205), considering a calibration on a 27 × 27 field grid. In comparison, Figure 8c shows the theoretical FPST on a higher resolution grid (native 512 × 512). Field domain interpolation consists in interpolating the FPST map from the calibration grid to a higher density grid. Figure 8b shows the result of the interpolation of the map (a), using a cubic kernel.
Field domain interpolation can be applied to the FPST on each pixel (x,y). Then, the interpolated FPST maps are transformed back into SPST maps over the interpolation field grid. The interpolated FPST can also be directly introduced into the matrix ASL. Field domain interpolation is conceptually straightforward; however, its accuracy is strongly dependent on the calibration grid density. It cannot recover variations of the SL faster than the grid sampling. Moreover, as it considers the SL on one pixel at a time, it is sensitive to detector noise.
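A minimal sketch of this field domain interpolation for one pixel is given below, using SciPy's RegularGridInterpolator with a cubic kernel (the "cubic" method requires SciPy 1.9 or later); the function name and grid conventions are assumptions:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def interpolate_fpst(fpst_coarse, fields_i, fields_j, N):
    """Interpolate the FPST of one pixel from a coarse calibration grid
    (fields_i x fields_j) to the full N x N field grid with a cubic kernel."""
    interp = RegularGridInterpolator((fields_i, fields_j), fpst_coarse,
                                     method="cubic", bounds_error=False,
                                     fill_value=0.0)
    ii, jj = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1), indexing="ij")
    pts = np.column_stack([ii.ravel(), jj.ravel()])
    return interp(pts).reshape(N, N)
```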

5.3. Scaling Interpolation

Another approach to SPST interpolation is based on the observation that the SL evolves with the field symmetrically with respect to the optical axis, at least locally. Figure 9a shows SPST maps at fields (i,j) with i = 32 and j from 96 to 256. All SL features are aligned along the radial direction joining the detector center to the nominal pixel, and have a mirror symmetry with respect to that axis.
When varying the field along the horizontal direction x (j varies, i stays constant), the nominal signal as well as most SL features move along that direction as well. Therefore, the distance from the features to the center varies with about the same ratio. While this might not be true for every single ghost, it is verified at least locally for the most significant ones. Furthermore, locally the features do not change significantly in shape and size when the field is varied. Based on these observations, a scaling interpolation method can be implemented.
The SPST maps are calibrated on a restricted grid. To obtain the SPST at an intermediate field (i,j), we search for the closest calibrated field (ic,jc). Then, we apply to SPSTic,jc a scaling by a factor s = r/rc and a rotation by an angle α. Here, r and rc are the radial distances of pixels (i,j) and (ic,jc), respectively. The angle α is the difference in azimuth between both fields. This operation moves the nominal pixel of the calibrated map from position (ic,jc) to (i,j) and moves the SL features to approximately the right positions as well.
A drawback of the scaling method is that it modifies the size of the ghosts. With a regularly spaced grid, the ratio s deviates most from 1 when the nominal pixel (ic,jc) is close to the center (i.e., small elevation angle). Hence, the ghosts undergo the largest scaling for those fields and the interpolated maps are less accurate. For that reason, the density of the calibration grid is increased at the center. Figure 10 shows the calibration grid considered for 3MI. It consists of a regularly spaced grid of dimension 27 × 27, to which extra fields are added at the center such that the grid density is doubled at small elevations. This provides a total of 797 calibration fields, including the 90 additional fields at the center, and excluding the fields in the corners of the detector. Moreover, a threshold is set on s such that, if its deviation from 1 is too large, the SPST map of the closest neighbor is applied with no scaling or rotation. This is equivalent to applying a restricted grid for the SL correction from these fields. In the case of 3MI, the threshold is set to a maximum deviation from 1 of 0.2. Finally, scaling and rotation of an SPST map can leave pixels without signal along the edges, as shown in Figure 9b. This gap can be filled by applying the scaling and rotation operation to the second closest neighbor. Therefore, if the signal on some pixels (i.e., the gaps) is not found based on the operation on the first neighbor, the second neighbor is used to estimate the signal on these pixels. If a gap is still present after using two neighbors, the next neighbors can be considered. It is found that the gaps are always filled by using up to four neighbors. Moreover, the scaling factor is minimized by sorting the four closest neighbors by the deviation of their ratio s from 1. Hence, the first neighbor is the one among the four closest whose ratio s is closest to 1. Figure 9c shows the SPST map interpolated with this method, where no signal is missing on the edges.
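A minimal sketch of the scaling interpolation is given below (Python/SciPy); the affine transform convention, the gap handling with NaN markers, and the function names are assumptions, not the actual 3MI implementation:

```python
import numpy as np
from scipy.ndimage import affine_transform

def scale_rotate_spst(spst_cal, s, alpha_deg, fill=np.nan):
    """Scale a calibrated SPST map by s = r/rc and rotate it by alpha (azimuth
    difference) about the detector centre, moving the nominal pixel from
    (ic, jc) towards (i, j). Pixels left without signal along the edges are
    marked with `fill` so that further neighbours can complete them."""
    N = spst_cal.shape[0]
    c = np.array([(N - 1) / 2.0, (N - 1) / 2.0])
    a = np.deg2rad(alpha_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    M = rot.T / s                              # output -> input mapping
    return affine_transform(spst_cal, M, offset=c - M @ c, order=1,
                            mode="constant", cval=fill)

def interpolate_spst(neighbours):
    """Combine up to four scaled/rotated neighbours, sorted beforehand by
    |s - 1|, each one filling the gaps left by the previous ones."""
    out = neighbours[0].copy()
    for nb in neighbours[1:]:
        gap = np.isnan(out)
        out[gap] = nb[gap]
    return np.nan_to_num(out)
```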
Figure 8d shows the FPST at pixel (359,205), obtained by interpolating the SPST maps from the calibration grid to a higher density grid with the scaling method. In this case, features and details smaller than the calibration grid sampling are recovered, which was not the case with field domain interpolation. Figure 11 shows the 2σ error on the FPSTx,y (dFPST) as a function of the radial distance of pixel (x,y), for the scaling method and for field domain interpolation. On average, the scaling method provides an error 1.5 times smaller than field domain interpolation, despite using nearly the same number of calibration maps (797 and 707, respectively). The exception is close to the center of the detector, where the scaling method performs worse because the scaling factor s is largest in that area. For both methods, the error is below the target value (2.4 · 10⁻⁵%) derived in Section 3 for the SL calibration accuracy needed to reach the performance requirement.
In the case of a purely rotationally symmetrical optical system but with a square detector aligned on the optical axis, it is sufficient to calibrate only 1/8th of the field grid, with an azimuth between 0° and 45°. Indeed, the other portions can be obtained directly by symmetry. It is not sufficient however to calibrate only the maps along the radial direction. Indeed, a rotation applied to the maps would always introduce missing signal on the edges of the detector.

5.4. SL Correction Performance

While both interpolation methods seem promising, the scaling method is selected as it has better performance. Maps are interpolated from the calibration grid to the N × N grid. The matrix ASL (Equation (4)) is then built and the estimated SL is computed at the convergence of the iterative method. For now, no spatial or field binning is applied. The result for the B and W scene is shown in Figure 12a and is very similar to the theoretical SL of Figure 1b. Figure 12b shows the profile along x of the estimated SL at the center of the detector (y = 256). It is superposed on a gray envelope corresponding to the theoretical SL profile and the performance requirement interval. As it shows, the estimated SL lies mostly inside that area.
Table 1 provides the residual SL after correction with interpolated maps, compared with the initial SL level. While the performance requirement is evaluated at 2σ, the 1σ and mean residual SL are also shown. At 2σ, the correction method decreases the SL by a factor of 58, reaching the performance requirement. The ratio is even higher at 1σ or at the mean value (up to 129). No spatial binning should be added, as the correction performance is already very close to the performance requirement (spatial binning would increase the residual 2σ SL level by a factor of 1.5). However, field binning will be added with m = 128. In the case of the B and W scene, the field binning does not affect the SL correction as the transition is located at the center and along the y direction. For other kinds of scenes, this choice will have limited impact, as shown previously for a scene with a tilted transition.

6. Calibration of SPST Maps

SPST map calibration can be performed experimentally by illuminating the 3MI instrument with a collimated beam. The collimated beam is obtained by placing a point-like source at the focal plane of an off-axis parabola (OAP). A rotation stage enables calibration of SPST maps at different fields. Ideally, the collimated beam should cover the full entrance aperture of the instrument (170 mm diameter for 3MI). An alternative is to use a smaller collimator with a square pupil and to scan the instrument entrance aperture, with no gaps or overlaps. A significant gain in time is achieved by scanning only over the stray light entrance pupil (SLEP) [28,29], which represents the areas of the entrance aperture that must be illuminated in order to measure all the possible SL paths. The source diameter at the OAP focal plane as well as the OAP focal length are set such that the collimated beam has an angular width smaller than a pixel at instrument level. Therefore, even though the collimated beam covers the first lens of the instrument, the nominal signal stays within a pixel.
The 3MI detector records signals on 14 bits, with a saturation signal Isat. On any pixel, a white noise is present with a standard deviation ε, described by the theoretical model of Equation (13). It contains two contributions, one a function of the received signal I and one independent of it. With a single acquisition of the SL, the dynamic range of the detector is insufficient to resolve the faint features of the map. Therefore, dynamic range decomposition is performed by acquiring images with different levels of illumination or different integration times. Three levels are considered, with ratios of the received signal of 1, 10² and 10⁴, respectively. The three images are then normalized and recombined, keeping for each pixel the signal from the image where it is the highest yet below saturation. Next, the map is normalized to the nominal signal. The signal on the nominal pixel is then removed to keep only the SL.
ε = √( 3² + 0.0133 · (I/Isat) · 12000 ) · (1/12000)
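A minimal numerical sketch of the noise model and of the dynamic range decomposition is given below; the acquire callable and the exact recombination logic are assumptions, and the constants follow the reconstructed form of Equation (13):

```python
import numpy as np

def detector_noise_sigma(I, I_sat):
    """Noise model of Equation (13): a signal-independent term and a term that
    scales with the received signal, combined in quadrature (constants are
    indicative, following the reconstructed equation)."""
    return np.sqrt(3.0 ** 2 + 0.0133 * (I / I_sat) * 12000.0) / 12000.0

def dynamic_range_decomposition(acquire, I_sat, ratios=(1.0, 1e2, 1e4)):
    """Acquire the SL map at several illumination levels, normalise each image
    back to the lowest level and keep, pixel per pixel, the highest signal that
    is below saturation. acquire(ratio) is an assumed callable returning the
    noisy (possibly saturated) image for that illumination ratio."""
    combined = None
    for r in ratios:
        img = acquire(r)
        norm = np.where(img < I_sat, img / r, -np.inf)   # reject saturated pixels
        combined = norm if combined is None else np.maximum(combined, norm)
    return combined
```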
Based on the model of Equation (13), the experimental process was simulated to determine the impact of detector noise on the SL correction. At this stage, experimental measurements were not yet available and therefore simulations were used for performance prediction. The dark continuous line in Figure 13 shows the theoretical profile along x, for y = 256, when the instrument is illuminated by a point-like source at field pixel (i,j) = (256,32) and for a perfect detector (noise free). The nominal signal is visible at x = 32, while the signal on all the other pixels comes from SL. The gray dotted line shows the same profile, with detector noise, when performing a single shot acquisition at the lowest dynamic level (ratio of received signal of 1). The curve is obtained by applying the detector noise of Equation (13) and clearly shows that the SL features are not resolved correctly, as the noise is far too high. The gray continuous line shows the result of the acquisition with dynamic range decomposition. In that case, the faint ghosts are measured with a noise significantly smaller than the accuracy derived from the simplified model of Section 3 (2.4 · 10⁻⁵%).
The detector noise model and dynamic range decomposition process were applied to all the calibrated SPST maps. The maps were then interpolated to the N × N field grid and used to estimate the SL for the B and W scene. As shown in Table 1, the SL error when considering the detector noise is slightly increased compared with the case where the noise is neglected. A residual SL of 0.0174%∙Imax is obtained at 2σ, nearly equal to the performance requirement. Of course, in practice this will require a calibration facility with low SL, so that the SPST is correctly measured without measurement artefacts.
Figure 13. Profile along x (at y = 256) of the signal when the instrument is illuminated by a point-like source at the field (i,j) = (256,32). The continuous curve provides the theoretical profile that would be obtained for a noise-free detector. The case of a detector with a noise modeled as per Equation (13) is also shown, considering a single shot acquisition or dynamic range decomposition.

7. Conclusions

When SL control by design is not sufficient to reach the performance requirement, correction by post-processing must be considered. A correction method was developed for the Earth observation instrument Metop-3MI. The SL performance is specified based on the B and W extended scene illumination, with a required correction factor of at least 57 at the 2σ percentile. The SL correction algorithm is based on the calibration of the SPST maps as a function of the field. The SL is estimated by a linear combination of the SPST maps modulated by the scene; it is then removed from the measured signal to obtain the corrected image. An iterative correction approach is selected due to its versatility for high performance correction, with only two iterations required to reach the performance requirement. The convergence speed can be increased by using the Gauss–Seidel form or the multigrid method. Spatial and field binning can be performed to reduce the quantity of data and the computation time; however, they affect the SL correction performance. Spatial binning has the worst effect and should be limited. Field binning only impacts the estimation of SL from regions of the FOV with high spatial frequencies, for example a transition between two high contrast zones. Field binning is thus an effective way to reduce the computation time with limited impact on the performance. In the case of 3MI, such field binning is necessary for the computation time to remain realistic, with an initial target of about 10 min per correction, depending on the available computational power. In practice, only a limited number of SPST maps can be calibrated. As SL correction over a low-density field grid provides poor performance, interpolation is required to deduce the SPST maps on a denser field grid. Two methods were proposed, one in the field domain and a second which applies scaling and rotation operations to the SPST maps based on a verified local symmetry assumption. Both methods are promising, but the scaling method is selected as it provides better results. With interpolated maps, a correction factor of 58 is demonstrated at 2σ for the SL on the B and W scene (129 at 1σ). No spatial binning is considered, but a field binning to a dimension of 128 × 128 can be applied. In practice, the calibrated SPST maps will be affected by detector noise and therefore dynamic range decomposition is necessary to resolve the faint SL features. Considering the theoretical detector model, an SL correction factor of 56 is obtained at 2σ (119 at 1σ). It is difficult to compare this factor with other reported SL correction algorithms, as it is strongly dependent upon specific parameters such as the number of pixels on the detector. SL correction algorithms are likely to become the norm in the near future, as performance requirements become more challenging while SL control by design is intrinsically limited by physics, practical constraints, and processes. This method can also be used in other kinds of applications, from scientific instruments to personal cameras.
In the future, a variant of this method could use theoretical SPST maps rather than calibrated ones. This can be performed either with ray-traced SPST maps or with analytical functions. It will, however, require the ability to adapt theoretical SL models to experimental reality, which is a complex task when many ghosts are present. Ultrafast SL characterization is an interesting prospect for that matter, as it can be used to reverse engineer the SL properties of an instrument [30,31].

Author Contributions

L.C. conducted this research including methodology, software and formal analysis. Validation was performed by L.C., C.M. and Y.S. The paper was written by L.C. and reviewed by C.M. and Y.S. Y.S. was in charge of project administration. L.C., C.M. and Y.S. collaborated on the funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded under the 4500511744/MetOp-SG/CSL contract with Leonardo company.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful for helpful discussions with Leonardo and ESA.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Equation (3) can be formulated as a matrix-vector multiplication (Equation (4)) by reshaping the maps into vectors, as shown below:
$$
\begin{pmatrix} I_{mes}(1,1) \\ \vdots \\ I_{mes}(i,j) \\ \vdots \\ I_{mes}(N,N) \end{pmatrix}
=
\begin{pmatrix} I_{nom}(1,1) \\ \vdots \\ I_{nom}(i,j) \\ \vdots \\ I_{nom}(N,N) \end{pmatrix}
+
\begin{pmatrix}
SPST_{1,1}(1,1) & \cdots & SPST_{i,j}(1,1) & \cdots & SPST_{N,N}(1,1) \\
\vdots & & \vdots & & \vdots \\
SPST_{1,1}(i,j) & \cdots & SPST_{i,j}(i,j) & \cdots & SPST_{N,N}(i,j) \\
\vdots & & \vdots & & \vdots \\
SPST_{1,1}(N,N) & \cdots & SPST_{i,j}(N,N) & \cdots & SPST_{N,N}(N,N)
\end{pmatrix}
\cdot
\begin{pmatrix} I_{nom}(1,1) \\ \vdots \\ I_{nom}(i,j) \\ \vdots \\ I_{nom}(N,N) \end{pmatrix}
$$

Appendix B

If we assume that all elements of ASL have the same value a = ⟨ASL⟩, and if we replace all elements of Inom by the mean value ⟨Inom⟩, Equation (9) simplifies to (⟨ASL⟩ · N²)^(p+1) · ⟨Inom⟩. It can be shown that this simple expression accurately fits the curve of the average residual error ⟨ΔSLp⟩ (Equation (A2)). Therefore, based on the knowledge of a few SPST maps, we can estimate the mean value of ASL and evaluate the mean residual error as a function of the iteration p. With this model, the convergence criterion is a · N² < 1. This simplified model, however, fails to accurately estimate the residual error at a statistical percentile, for example at 2σ.
⟨ΔSLp⟩ = (⟨ASL⟩ · N²)^(p+1) · ⟨Inom⟩

Appendix C

The iterative SL correction algorithm presented in this paper is a case of Jacobi convergence. Equation (A3) provides line q of the estimated SL at iteration p + 1.
ISL,p+1(q) = Σr=1..N² ASL(q,r) · (Imes(r) − ISL,p(r))
The convergence speed of the SL correction can be increased by using the Gauss–Seidel method. The principle is the following:
- The first line of the estimated SL at iteration p + 1, ISL,p+1(1), is computed with Equation (A4), as in the case of the Jacobi convergence;
- The second line, ISL,p+1(2), is computed with Equation (A5). This is the same equation as in the Jacobi case, except that ISL,p(1) is replaced by the value ISL,p+1(1) just obtained above;
- The third line, ISL,p+1(3), is computed with Equation (A6). This is the same equation as in the Jacobi case, except that ISL,p(1) is replaced by ISL,p+1(1) and ISL,p(2) is replaced by ISL,p+1(2);
- This process continues and, in general, line q, ISL,p+1(q), is computed with Equation (A7).
ISL,p+1(1) = ASL(1,1) · (Imes(1) − ISL,p(1)) + ASL(1,2) · (Imes(2) − ISL,p(2)) + … + ASL(1,N²) · (Imes(N²) − ISL,p(N²))
ISL,p+1(2) = ASL(2,1) · (Imes(1) − ISL,p+1(1)) + ASL(2,2) · (Imes(2) − ISL,p(2)) + … + ASL(2,N²) · (Imes(N²) − ISL,p(N²))
ISL,p+1(3) = ASL(3,1) · (Imes(1) − ISL,p+1(1)) + ASL(3,2) · (Imes(2) − ISL,p+1(2)) + … + ASL(3,N²) · (Imes(N²) − ISL,p(N²))
ISL,p+1(q) = Σr=1..q−1 ASL(q,r) · (Imes(r) − ISL,p+1(r)) + Σr=q..N² ASL(q,r) · (Imes(r) − ISL,p(r))
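As an illustration, one Gauss–Seidel sweep corresponding to Equations (A4)–(A7) could be sketched as follows (Python/NumPy; the function name and argument layout are assumptions):

```python
import numpy as np

def gauss_seidel_sweep(A_SL, I_mes, I_SL_prev):
    """One Gauss-Seidel sweep: as soon as line q of the SL estimate is updated,
    the new value is reused when computing the following lines, which speeds up
    convergence compared with the Jacobi update of Equation (A3)."""
    b = I_mes.ravel()
    x = I_SL_prev.ravel().copy()
    for q in range(A_SL.shape[0]):
        x[q] = A_SL[q, :] @ (b - x)   # rows r < q already hold updated values
    return x.reshape(I_mes.shape)
```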

References

  1. Fest, E. Stray Light Analysis and Control; SPIE Press: Bellingham, WA, USA, 2013. [Google Scholar]
  2. Breault, R. Control of stray light. In Handbook of Optics; Bass, M., Ed.; McGraw-Hill: New York, NY, USA, 1995; Chapter 38; Volume 1, pp. 38.1–38.35. [Google Scholar]
  3. Clermont, L.; Aballea, L. Stray light control and analysis for an off-axis three-mirror anastigmat telescope. Opt. Eng. 2021, 60, 055106. [Google Scholar] [CrossRef]
  4. Manolis, I.; Bézy, J.-L.; Meynart, R.; Porciani, M.; Loiselet, M.; Mason, G.; Labate, D.; Bruno, U.; De Vidi, R. The 3MI instrument on the Metop second generation. Proc. SPIE 2017, 10563, 1056324. [Google Scholar]
  5. Fougnie, B.; Marbach, T.; Lacan, A.; Schluessel, P.; Lang, R.; Poli, G.; Munro, R. The 3MI mission on-board EPS-SG: A multi-spectral multi-polarization multi-directional imager for operational characterization of aerosol and cloud. Proc. SPIE 2018, 10764, 107640L. [Google Scholar]
  6. Fougnie, B. The multi-viewing multi-channel multi-polarisation imager—Overview of the 3MI polarimetric mission for aerosol and cloud characterization. J. Quant. Spectrosc. Radiat. Transf. 2018, 219, 23–32. [Google Scholar] [CrossRef]
  7. Clermont, L.; Michel, C.; Mazy, E.; Pachot, C.; Daddi, N.; Mastrandread, C.; Stockman, Y. Stray-light calibration and correction for the MetOp-SG 3MI mission. Proc. SPIE 2018, 10704, 1070406. [Google Scholar]
  8. Laherrere, J.M.; Poutier, L.; Bret-Dibat, T.; Hagolle, O.; Baque, C.; Moyer, P.; Verges, E. POLDER on-ground stray light analysis, calibration, and correction. Proc. SPIE 1997, 3221, 132–140. [Google Scholar]
  9. Abdon, S.; Armand, D.; Laurent, F.; Barillot, M.; D’Ottavi, A.; Coppo, P.; De Luca, E.; Gabrieli, R.; Bouvet, M.; François, M.; et al. Digital correction of residual straylight in FLEX images. Proc. SPIE 2019, 11180, 111804G. [Google Scholar]
  10. Montrone, L.; Aballea, L.; Bernaerts, D.; Navarro-Reyes, D.; Santandrea, S.; Saillen, N.; Sarna, K.; Holbrouck, P.; Moelans, W.; Kendall, D.; et al. Technological innovation for the ALTIUS atmospheric limb sounding mission. Proc. SPIE 2019, 11151, 111510S. [Google Scholar]
  11. Kirschner, V. Stray Light Analysis and Minimization, European Space Agency: Noordwijk, The Netherlands, 2017; Unpublished Tutorial.
  12. Yeo, A.; Feller, A.; Solanki, S.K.; Couvidat, S.; Danilovic, S.; Krivova, N.A. Point spread function of SDO/HMI and the effects of stray light correction on the apparent properties of solar surface phenomena. Astron. Astrophys. 2014, 561, A22. [Google Scholar] [CrossRef] [Green Version]
  13. Richardson, W.H. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 1972, 62, 55. [Google Scholar] [CrossRef]
  14. Lucy, L.B. An iterative technique for the rectification of observed distributions. Astron. J. 1974, 79, 745. [Google Scholar] [CrossRef] [Green Version]
  15. Stover, J.C. Optical Scattering: Measurement and Analysis; SPIE Press: Bellingham, WA, USA, 2012. [Google Scholar]
  16. Jansson, P.A.; Bitlis, B.; Allebach, J.P. Correcting Color Images for Stray-Light Effects by Computationally Solving an Inverse Problem via Selected-Ordinate Image (SORI) Processing; Technical Digest Optical Society of America: Washington DC, USA, 2005. [Google Scholar]
  17. Bitlis, B.; Jansson, P.A.; Allebach, J.P. Parametric point spread function modeling and reduction of stray light effects in digital still cameras. Proc. SPIE 2007, 6498, 64980V. [Google Scholar]
  18. Zong, Y.; Brown, S.W.; Johnson, B.C.; Lykke, K.R.; Ohno, Y. Correction of stray light in spectrographs: Implications for remote sensing. Proc. SPIE 2005, 5882, 588201. [Google Scholar]
  19. Zong, Y.; Brown, S.W.; Johnson, B.C.; Lykke, K.R.; Ohno, Y. Simple spectral stray light correction method for array spectroradiometers. Appl. Opt. 2006, 45, 1111–1119. [Google Scholar] [CrossRef]
  20. Zong, Y.; Brown, S.B.; Meister, G.; Barnes, R.A.; Lykke, K.R. Characterization and correction of stray light in optical instruments. Proc. SPIE 2007, 6744, 67441L. [Google Scholar]
  21. Feinholz, M.E.; Flora, S.J.; Brown, S.W.; Zong, Y.; Lykke, K.R.; Yarbrough, M.A.; Johnson, B.C.; Clark, D.K. Stray light correction algorithm for multichannel hyperspectral spectrographs. Appl. Opt. 2012, 51, 3631–3641. [Google Scholar] [CrossRef]
  22. Feinholz, M.E.; Flora, S.J.; Yarbrough, M.A.; Lykke, K.R.; Brown, S.W.; Johnson, B.C.; Clark, D.K. Stray Light Correction of the Marine Optical System. J. Atmos. Ocean. Technol. 2009, 26, 57–73. [Google Scholar] [CrossRef]
  23. FRED Software, Version 18.61; Edited by Photon Engineering. Available online: www.photonengr.com (accessed on 1 June 2017).
  24. Moler, C.B. Linear equations. In Numerical Computing with Matlab; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2004; pp. 53–92. [Google Scholar]
  25. James, D. Nearest defective matrices and the geometry of ill-conditioning. In Reliable Numerical Computation; Cox, M.G., Hammarling, S., Eds.; Clarendon Press: Oxford, UK, 1990; pp. 35–55. [Google Scholar]
  26. Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  27. Wesseling, P. An Introduction to Multigrid Methods; Wiley: Hoboken, NJ, USA, 1992. [Google Scholar]
  28. Clermont, L.; Michel, C.; Blain, P.; Loicq, J.; Stockman, Y. Stray light entrance pupil: An efficient tool for stray light characterization. Opt. Eng. 2020, 59, 025102. [Google Scholar] [CrossRef]
  29. Clermont, L.; Michel, C.; Blain, P.; Stockman, Y. The stray-light entrance pupil concept and how it can be used to facilitate stray-light characterization. Proc. SPIE 2019, 11103, 111030H. [Google Scholar]
  30. Clermont, L.; Uhring, W.; Georges, M. Stray Light Characterization with Ultrafast Time-of-flight Imaging. Sci. Rep. 2021, 11, 10081. [Google Scholar] [CrossRef]
  31. Clermont, L.; Uhring, W.; Georges, M. Time resolved characterization of stray light. Proc. SPIE 2021, 11782, 1178211. [Google Scholar]
Figure 1. Nominal signal for the B and W extended scene (a) and corresponding SL computed by ray tracing at 410 nm (b).
Figure 2. SPST maps for second order ghosts in 3MI, for different fields along the x direction.
Figure 3. Residual SL as a function of the iteration, for different initial SL levels in 3MI.
Figure 4. (a) SPST145,32 at the resolution of the detector array (512 × 512). (b) Spatial binning reduces the dimension of the map to n × n with n = 16. (c) Field binning averages the SPST for adjacent fields (m = 16) but keeps the resolution unchanged.
Figure 5. (a) SL map for the B and W extended scene illumination, estimated with spatial binning (n = 16). (b) Error on the estimated SL map as a function of the binning n.
Figure 6. (a) Nominal image for a tilted B and W scene with field binning (m = 16). (b) Corresponding estimated SL scene, emphasizing an irregular transition due to field binning. (c) SL estimation error as a function of the field binning.
Figure 7. SL of the B and W extended scene, estimated with SPST maps known on a restricted grid k × k (ac). The SL estimation can be improved by applying an optimal spatial binning (df). (g) SL estimation error dSL as a function of the grid size k, with or without the optimal spatial binning. (h) Error dSL as a function of the spatial binning n for different values of k. For each k, there is an optimal value of n minimizing the error.
Figure 8. FPSTx,y(i,j) at pixel (x,y) = (359,205). (a) FPST on a 27 × 27 calibration grid. (b) FPST interpolated in the field domain from the 27 × 27 grid to a high-resolution grid. (c) Theoretical FPST on the high-resolution field grid. (d) FPST on the high-resolution grid, obtained by scaling interpolation with SPST maps calibrated over the grid of Figure 10.
Figure 9. (a) SPSTi,j with i = 32 and j = 96 to 256. When the field is varied along the horizontal direction, the ghosts move in that direction too; thus their distances from the center all evolve with the same ratio. (b) SPST interpolated with one neighbor, with missing signal on the edges. (c) SPST interpolated with up to 4 neighbors, with no missing signal on the edges.
Figure 10. Calibration grid for the scaling interpolation method. The grid is regularly spaced everywhere but at the center where the density is doubled.
Figure 11. The 2σ error on FPSTx,y as a function of the radial distance of pixel (x,y) from the center. The FPST is interpolated either with the scaling method or with the field domain method.
Figure 12. (a) SL for the B and W scene, estimated with SPST maps interpolated with the scaling method. (b) Profile along x at y = 256. The gray background represents the performance requirement envelope.
Table 1. SL performance of 3MI before and after correction with maps interpolated with the scaling method, considering or not the detector noise.
                                   1σ                2σ                Mean Value
Initial SL level                   0.7243% ∙ Imax    0.9669% ∙ Imax    0.5729% ∙ Imax
Residual SL                        0.0056% ∙ Imax    0.0166% ∙ Imax    0.0052% ∙ Imax
Correction factor                  129               58                110
Residual SL with detector noise    0.0061% ∙ Imax    0.0174% ∙ Imax    0.0054% ∙ Imax
Correction factor                  119               56                106