Article

A Smooth Non-Iterative Local Polynomial (SNILP) Model of Image Vignetting

1 Department of Data Science and Engineering, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
2 Biotechnology Center, Silesian University of Technology, Bolesława Krzywoustego 8, 44-100 Gliwice, Poland
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(21), 7086; https://doi.org/10.3390/s21217086
Submission received: 17 September 2021 / Revised: 18 October 2021 / Accepted: 22 October 2021 / Published: 26 October 2021
(This article belongs to the Section Sensing and Imaging)

Abstract: Image vignetting is one of the major radiometric errors that occur in lens-camera systems. In many applications, vignetting is an undesirable phenomenon; when its occurrence cannot be fully prevented, computational methods must be used to correct it in the acquired image. In the most frequently used approach to vignetting correction, i.e., flat-field correction, the choice of an appropriate vignetting model plays a crucial role. In this article, a new vignetting model, the Smooth Non-Iterative Local Polynomial (SNILP) model, is proposed. The SNILP model was compared with models known from the literature, e.g., the polynomial 2D and radial polynomial models, in a series of numerical tests and in a real-data experiment. The obtained results show that the SNILP model usually gives better vignetting correction results than the other tested models. For images larger than the UXGA format (1600 × 1200), the proposed model is also faster than the other tested models. Moreover, among the tested models, the SNILP model requires the least hardware resources, which makes it suitable for use in devices with limited computing power.

1. Introduction

1.1. Problem of Vignetting

Image vignetting is the phenomenon of a reduction in image brightness from the optical center of the image toward its edges. Vignetting is often an unintended and undesired effect, and its characteristics depend on the optical properties of the lens-camera system. Considering its causes, four main types of vignetting can be distinguished [1,2], listed here in the order of their occurrence along the light path from the imaged scene to the image sensor: mechanical vignetting, optical vignetting, natural vignetting, and pixel vignetting. Mechanical vignetting refers to the occlusion of the light path by elements of the lens-camera system, such as a filter mounted on the lens. This type of vignetting usually causes a 100% reduction in image brightness, which means a complete loss of information; therefore, it is not considered in this article. The next type, optical vignetting, refers to the light fall-off caused by the blockage of off-axis incident light inside the lens body. The amount of blocked light depends on the physical dimensions of the lens, particularly the lens focal length, the diameters of the lens elements, and the lens aperture setting. Natural vignetting refers to the light fall-off related to the geometry of the image-forming system. Pixel vignetting refers to the light fall-off related to the angle-dependent sensitivity of the image sensor pixels.
The effect of vignetting is undesirable in many applications, especially when images must be stitched into panoramic [3,4] or mosaic [5,6] images, or when radiometric or quantitative analysis of images [7,8] is required, e.g., in such areas as microscopy [6,9,10], micro-CT imaging [11], and remote sensing [12,13]. The best way to reduce vignetting is to remove its causes, e.g., by using a lens with appropriate characteristics or by a careful choice of exposure parameters. However, such solutions are not always possible, nor do they always bring the desired result. In such cases, the only option is a software-based vignetting correction method.

1.2. Vignetting Correction

Probably the most frequently used approach to vignetting correction is based on the idea of "flat-field correction". In this approach, vignetting is estimated from a reference image $I_V$, which shows a uniformly illuminated flat surface of uniform color. The only source of brightness differences across $I_V$ is the vignetting of the lens-camera system. The formation of the image $I_V$ can be described by the following formula:
$I_V(i,j) = V(i,j) \cdot I_{flat}(i,j)$,  (1)
where $V$ is the real vignetting of the analyzed lens-camera system, $I_{flat}$ is an ideal image of the reference flat surface, and $(i,j)$ are, respectively, the horizontal and vertical pixel coordinates. The vignetting estimator $\tilde{V}$ is established during an approximation process using the assumed vignetting model VM and the image $I_V$, i.e.,
$\tilde{V}(i,j) = \mathrm{approx}\{VM, I_V\}(i,j)$,  (2)
where $\tilde{V} \in (0,1]$. To obtain an image $\tilde{I}$ with corrected vignetting from the acquired image $I$, the following calculation must be performed:
$\tilde{I}(i,j) = I(i,j) \cdot \tilde{V}(i,j)^{-1}$.  (3)
For the best correction results, the images $I_V$ and $I$ should be acquired using the same lens-camera system with the same lens parameters (e.g., focal length for zoom lenses) and exposure parameters (e.g., aperture stop number of the lens). In practice, for real-time vignetting correction, (3) is realized using a lookup table.
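As an illustration, a minimal MATLAB sketch of such a lookup-table-based correction is given below; the variable names (V_est, I) are illustrative and do not come from the original implementation.

% Precompute the reciprocal lookup table once from the vignetting estimate V_est
LUT = 1 ./ V_est;                      % element-wise reciprocal; V_est is in (0, 1]
% Real-time correction of an acquired image I of the same size as V_est
I_corr = double(I) .* LUT;             % Equation (3)
I_corr = min(max(I_corr, 0), 255);     % keep the result in the valid 8-bit range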
The flat-field correction is, of course, not the only approach to vignetting correction. Other approaches are based, e.g., on physically-based models of vignetting [14,15] or on the analysis of a sequence of images [1,2,3,16] or a single image [17,18,19,20] of a natural scene. However, the usage of these approaches is connected with many limitations. Methods based on the physically-based approach need detailed information about the parameters of the lens-camera system, which is often unavailable; additionally, the obtained vignetting model is usually limited to only one type of vignetting. In both aforementioned image-based approaches, the vignetting estimate is usually obtained by optimizing an objective function for an assumed radial vignetting model; this assumption limits the number of lens-camera systems for which these approaches can be used. Moreover, the effectiveness of image-based methods depends strongly on the precision of localization of corresponding pixels in the image sequence, which requires an additional correction of lens distortion. It should also be mentioned that the effectiveness of these methods strongly depends on scene uniformity.
In comparison with the aforementioned approaches, the flat-field-based approach is more universal; it can be used for virtually any lens-camera system. Adjusting the flat-field method to the analyzed system is a matter of using a vignetting model whose assumptions are consistent with the features of the vignetting actually occurring in that system.
Over the years, different vignetting models based on the idea of parametric approximation have been presented in the literature, e.g., the polynomial model [1,21,22], the exponential polynomial model [21], the hyperbolic cosine model [23], the Gaussian function [24], and the radial polynomial model [25]. The last three models belong to a popular approach which assumes that the actual vignetting V of a lens-camera system has cylindrical symmetry. This means that vignetting can be modeled using a radial function, i.e., the vignetting $V(i,j)$ is a function of $r$, where $r$ is the distance between the pixel $p(i,j)$ and the optical center $C$ (with coordinates $(i_C, j_C)$) of the lens-camera system. The radial vignetting assumption simplifies the search for the vignetting estimator: the approximation of a 2D function is replaced by the approximation of a 1D function. On the other hand, finding the optical center is not a trivial task [26]; it involves additional calculations and is a source of potential errors. Moreover, the usage of a radial function limits the applicability of such models.
Choosing a proper vignetting model is not the only problem related to flat-field correction. Another is the preparation of the reference scene; in particular, the uniform illumination of the reference surface is difficult to achieve. It should be emphasized that the quality of the obtained estimator $\tilde{V}$ strongly depends on the luminance uniformity of the reference scene.

1.3. Objective of the Work

The assumption of radial vignetting holds for most lens-camera systems. However, there is a large group of systems for which this assumption is not applicable, e.g., industrial lenses designed with vignetting reduction in mind, perspective-control and focal-plane-control lenses (shift and tilt lenses), and anamorphic lenses. Therefore, there is a need for a universal vignetting model which can be used for different lens-camera systems.
Good examples of such universal solutions are the vignetting models proposed by Sawchuk in 1977 [21]: the polynomial 2D (P2D) and exponential polynomial 2D vignetting models. In both models, the vignetting estimator is found by solving a multiple regression problem. Another universal solution was proposed in 2004 by Yu [23]; it is based on the idea of wavelet denoising and decimation. The common feature of these methods is their relatively high computational complexity, which makes them poorly suited for implementation in devices with relatively low computing power or memory resources, such as portable devices, smart cameras, vision sensors, embedded vision systems, and unmanned vehicles. Hence, there is a need to develop a universal vignetting model that combines relatively low computational and memory complexity with high accuracy of vignetting correction.
A possible solution to this problem uses the idea of decomposing the 2D approximation problem into many interrelated 1D approximation problems. This approach was presented in References [27,28,29]. The results obtained so far in this area are briefly described and summarized in Section 2. The aim of the conducted research was to improve the vignetting model based on this idea. The new vignetting model (SNILP) that meets this requirement is presented in Section 3. Section 4 presents the results of an experiment performed to compare the quality of the vignetting correction obtained from the new model with that obtained from selected vignetting models described in the literature. In Section 5, comparisons of other features of the tested models are presented. Lastly, Section 6 contains a summary of the research.

2. Vignetting Models Based on Decomposition

2.1. Local Parabolic Model

The problem of vignetting estimation is, in fact, the problem of seeking an approximation function of 2D data. As presented in Reference [27], in the form of the Local Parabolic model, the 2D approximation problem for an image of size $M \times N$ pixels can be decomposed into $M + N$ simple linear regression problems using the following formula:
$\tilde{V}(i,j) = \frac{1}{2}\left( \tilde{V}_{x_j}(i) + \tilde{V}_{y_i}(j) \right)$,  (4)
where
$\tilde{V}_{x_j}(i) = a_{x_j,2}\, i^2 + a_{x_j,1}\, i + a_{x_j,0}, \qquad \tilde{V}_{y_i}(j) = a_{y_i,2}\, j^2 + a_{y_i,1}\, j + a_{y_i,0}$,  (5)
are, respectively, the values of degree-2 approximation functions obtained for the i-th point of the j-th horizontal line and for the j-th point of the i-th vertical line of the input image. The parameters $a$ of the functions (5) are obtained by repeated 1D polynomial regression of degree 2, performed along each horizontal and each vertical line of the input image.

2.2. Local Polynomial (LP) Model

A more general Local Polynomial (LP) model of vignetting was proposed in Reference [28]. In this model, the degree s of the polynomial used as the 1D regression function is not limited to 2 but can be freely chosen; hence, $\tilde{V}_{x_j}$ and $\tilde{V}_{y_i}$ are calculated as follows:
$\tilde{V}_{x_j}(i) = a_{x_j,s}\, i^s + \dots + a_{x_j,1}\, i + a_{x_j,0}, \qquad \tilde{V}_{y_i}(j) = a_{y_i,s}\, j^s + \dots + a_{y_i,1}\, j + a_{y_i,0}$,  (6)
where s is the assumed degree of regression polynomials.
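For illustration, a minimal MATLAB sketch of the LP estimation is given below; it is a sketch of the idea, not the authors' original code, and the function name lp_model is illustrative.

function V_est = lp_model(IV, s)
% LP model sketch: 1D polynomial fits of degree s along every image row and
% column, averaged point-wise as in Equation (4).
[rows, cols] = size(IV);
Vx = zeros(rows, cols);  Vy = zeros(rows, cols);
for r = 1:rows                            % horizontal lines (image rows)
    p = polyfit(1:cols, IV(r, :), s);
    Vx(r, :) = polyval(p, 1:cols);
end
for c = 1:cols                            % vertical lines (image columns)
    p = polyfit((1:rows)', IV(:, c), s);
    Vy(:, c) = polyval(p, (1:rows)');
end
V_est = (Vx + Vy) / 2;                    % point-wise mean of both estimates
end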
The LP model gives more accurate approximation results than the Local Parabolic model. However, a closer analysis of the LP model shows that the obtained surface of the vignetting estimator is non-smooth and ragged.

2.3. Smooth Local Polynomial (SLP) Model

As a solution to this problem, the Smooth Local Polynomial (SLP) model was proposed in Reference [29]. The core idea of this method is the iterative repetition of the LP estimation, i.e.,
$\tilde{V}_k(i,j) = \mathrm{LP}\{\tilde{V}_{k-1}\}(i,j)$,  (7)
where $\tilde{V}_k$ is the result of the k-th iteration of the SLP, and, for $k = 0$, it holds that
$\tilde{V}_0(i,j) = I_V(i,j)$,  (8)
which is the input image of the SLP model. It follows directly from the definition of the SLP model that, for $k = 1$, the results obtained from this model are the same as those obtained from the LP model.

2.4. Essential Properties of the LP and SLP Models

The essential properties of any vignetting model include: the accuracy of reproducing the actual vignetting V on the basis of the obtained image or images $I_V$, the features of the obtained estimate (such as, e.g., surface smoothness), and the ease of use of the model. Since, when determining the vignetting estimate for real lens-camera systems, it is difficult to reliably verify the correctness of the obtained model, it was decided to perform numerical simulations.
The same test conditions were used in all simulations presented in this section, as well as in Section 3.2; they are described in detail in Appendix A. For the comparison of vignetting models, lenses with three focal lengths were simulated for the 135 film format, i.e., f = 24 mm, 50 mm, and 300 mm. This selection covers the most important lens focal lengths (converted to the image formats used in a given application), which are used, e.g., in photography, film-making, monitoring, machine vision, and remote sensing.
The use of vignetting models based on the idea of image decomposition requires choosing the degree of the polynomial used to approximate individual lines of the image. In the presented simulations, the polynomial degrees 10, 6, and 2 were used, respectively, for the aforementioned focal lengths; for these degrees of the polynomial, the best vignetting estimation results (in terms of Root Mean Square Error) were obtained for the tested focal lengths. To obtain statistically reliable results, the simulations were repeated 100 times for each lens.
The most important property of a vignetting model is probably its accuracy. Thanks to the simulation, the exact shape of the vignetting V is known. Hence, for the evaluation of model accuracy, commonly used difference measures can be applied, such as the Mean Absolute Error (MAE)
$\mathrm{MAE}(\tilde{V}, V) = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left| \tilde{V}(i,j) - V(i,j) \right|$,  (9)
or the Root-Mean-Square Error (RMSE)
$\mathrm{RMSE}(\tilde{V}, V) = \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \tilde{V}(i,j) - V(i,j) \right)^2 }$,  (10)
can be used. Figure 1 presents the values of these measures obtained for the simulated data.
Based on the knowledge of the causes of vignetting, it can be concluded that the vignetting function should have a smooth surface. As previously mentioned, surfaces obtained from the LP model are non-smooth; thus, they differ from the vignetting of real lens-camera systems. Figure 2 presents a comparison of the smoothness of exemplary results obtained from the LP model and from different numbers of iterations of the SLP model. The presented results confirm the significant improvement in smoothness obtained by using the SLP model compared to the LP model.
The smoothness of the surface of the vignetting estimator $\tilde{V}$ can be evaluated not only visually but also using objective measures, such as the Mean Local Standard Deviation (MLSD) and the Mean Local Slope Corrected Deviation (MLSCD). The MLSD measure is based on the idea of the Local Standard Deviation (LSD), which has been used for decades by the image processing community, in its original [30,31] or slightly modified form, for the evaluation of local changes in images. LSD is calculated in a moving window $w_{ij}$ of size $w_x \times w_y$, where the coordinates $(i,j)$ define the position of the window center in the analyzed image I of resolution $M \times N$, i.e.,
$w_{ij} = I(i - r_x : i + r_x,\; j - r_y : j + r_y)$,  (11)
where $2 r_x + 1 = w_x$ and $2 r_y + 1 = w_y$; the notation $a:b$ denotes the range of values from a to b with increment +1. In this work, MLSD is calculated as follows:
$\mathrm{MLSD}(I) = \frac{1}{(M - w_x + 1)(N - w_y + 1)} \sum_{i=r_x+1}^{M-r_x} \sum_{j=r_y+1}^{N-r_y} \mathrm{LSD}(w_{ij})$,  (12)
where
$\mathrm{LSD}(w_{ij}) = \sigma(w_{ij}) = \sqrt{ \frac{1}{w_x w_y} \sum_{g=1}^{w_x} \sum_{h=1}^{w_y} \left( w_{ij}(g,h) - \bar{w}_{ij} \right)^2 }$,  (13)
and
$\bar{w}_{ij} = \frac{1}{w_x w_y} \sum_{g=1}^{w_x} \sum_{h=1}^{w_y} w_{ij}(g,h)$.  (14)
The idea of MLSCD is based on MLSD with one important modification: in MLSCD, the standard deviation $\sigma(w_{ij})$ of the pixel values inside the moving window $w_{ij}$ is calculated not in relation to the average value $\bar{w}_{ij}$ of the pixels inside $w_{ij}$, as in LSD, but in relation to a local 2D polynomial approximation $\mathrm{PolyApp2D}_s(w_{ij})$ of degree s of the image surface covered by $w_{ij}$. This change makes the MLSCD measure much less sensitive to the slope of the analyzed surface; thanks to this, it is better suited to measuring surface smoothness than MLSD. This deficiency of MLSD results from the fact that LSD values are strongly influenced by the local slope of the analyzed surface; in the LSD measure, the fluctuations of surface smoothness are therefore dominated by changes of pixel values caused by the slope. The MLSCD measure is calculated as follows:
$\mathrm{MLSCD}(I) = \frac{1}{(M - w_x + 1)(N - w_y + 1)} \sum_{i=r_x+1}^{M-r_x} \sum_{j=r_y+1}^{N-r_y} \sigma(\Delta_{ij})$,  (15)
where
$\sigma(\Delta_{ij}) = \sqrt{ \frac{1}{w_x w_y} \sum_{g=1}^{w_x} \sum_{h=1}^{w_y} \left( \Delta_{ij}(g,h) - \bar{\Delta}_{ij} \right)^2 }$,  (16)
and
$\Delta_{ij} = w_{ij} - \mathrm{PolyApp2D}_s(w_{ij})$,  (17)
$\bar{\Delta}_{ij} = \frac{1}{w_x w_y} \sum_{g=1}^{w_x} \sum_{h=1}^{w_y} \Delta_{ij}(g,h)$.  (18)
It is worth noting that, for s = 0 in (17), the MLSCD measure reduces to MLSD. In this work, the window size $w_x = w_y = 7$ was used for both the MLSD and MLSCD measures; in the case of MLSCD, s = 2 was used. To reduce the influence of the border effect associated with the use of a moving window on the smoothness evaluation results, the range of coordinates of the moving window centers used for the calculation of MLSD and MLSCD (Equations (12) and (15)) is smaller than the actual size of the analyzed images.
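A direct MATLAB sketch of the MLSD computation (Equations (11)–(14)) may clarify the procedure; the function name mlsd is illustrative, and odd window sizes are assumed.

function m = mlsd(I, wx, wy)
% MLSD sketch: population standard deviation in every fully contained window.
rx = (wx - 1) / 2;  ry = (wy - 1) / 2;
[rows, cols] = size(I);
vals = zeros(rows - wx + 1, cols - wy + 1);
for i = rx+1 : rows-rx
    for j = ry+1 : cols-ry
        w = I(i-rx:i+rx, j-ry:j+ry);      % moving window w_ij, Equation (11)
        vals(i-rx, j-ry) = std(w(:), 1);  % normalization by wx*wy, Equation (13)
    end
end
m = mean(vals(:));                        % average over all window positions
end

MLSCD follows the same scheme, with the window mean replaced by a local 2D polynomial fit of degree s.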
Figure 3 presents the comparison of MLSD and MLSCD values obtained for the LP and SLP models (for $k = 1, \dots, 30$, where, as mentioned earlier, for $k = 1$, $\mathrm{LP}(I_V) = \mathrm{SLP}(I_V)$) for the simulated vignetting of lenses with three focal lengths f (see Appendix A for experiment details). The presented results show that the SLP model outperforms the LP model at the price of a k-times larger computational cost. The practical question that arises is: is it possible to obtain estimation results of quality similar to those obtained from the SLP model for large k, at a computational cost close to that of the LP model?

3. SNILP—A New Vignetting Model

3.1. Model Description

To simplify further considerations, the whole process of repeated 1D polynomial regression along the horizontal or the vertical lines of an image I, followed by the calculation of the approximated values for point $(i,j)$, is from now on denoted, respectively, as $\mathrm{PolyApp}_x^s(I,(i,j))$ and $\mathrm{PolyApp}_y^s(I,(i,j))$, where s is the degree of the used 1D regression (and, later, approximation) polynomials.
In the SLP model, the approximation of the input image $I_V$ is based on independently calculated approximations for the horizontal and vertical lines, i.e., $\tilde{V}_x$ and $\tilde{V}_y$. The values of these estimators depend only on the input image and the chosen degree s of the regression polynomial. The opposite approach is the founding idea of the new Smooth Non-Iterative Local Polynomial (SNILP) model: the approximations $\mathrm{PolyApp}_x^s$ and $\mathrm{PolyApp}_y^s$ of $I_V$ are calculated sequentially, e.g., first the approximation in the horizontal direction is calculated, and then, based on this result, the approximation in the vertical direction. This procedure can be written formally as
$\tilde{V}_x(i,j) = \mathrm{PolyApp}_x^s(I_V,(i,j))$,  (19)
$\tilde{V}(i,j) = \mathrm{PolyApp}_y^s(\tilde{V}_x,(i,j))$.  (20)
All operations used in the SNILP model are linear. Therefore, the sequence of the directions in which the regression is performed does not matter; hence,
$\tilde{V}(i,j) = \mathrm{PolyApp}_y^s\left( \mathrm{PolyApp}_x^s(I_V), (i,j) \right) = \mathrm{PolyApp}_x^s\left( \mathrm{PolyApp}_y^s(I_V), (i,j) \right)$.  (21)
Since, in practice, the acquisition parameters of the image $I_V$ are selected so that no saturation occurs in it, the condition $\max(\tilde{V}) < 1$ is fulfilled. Hence, it is necessary to normalize the range of $\tilde{V}$ to $(0,1]$, which is performed as follows:
$\tilde{V}(i,j) := \tilde{V}(i,j) \, / \, \max_{i,j} \tilde{V}(i,j)$.  (22)
From now on, $\tilde{V}$ denotes the vignetting estimate in the normalized range.
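The whole model thus reduces to two passes of 1D fitting followed by a normalization. A minimal MATLAB sketch is given below; the names are illustrative, and this is not the original implementation.

function V_est = snilp(IV, s)
% SNILP sketch: sequential per-line polynomial fits, Equations (19)-(20),
% followed by the normalization (22).
[rows, cols] = size(IV);
Vx = zeros(rows, cols);
for r = 1:rows                            % first pass: horizontal lines
    p = polyfit(1:cols, IV(r, :), s);
    Vx(r, :) = polyval(p, 1:cols);
end
V_est = zeros(rows, cols);
for c = 1:cols                            % second pass: vertical lines of Vx
    p = polyfit((1:rows)', Vx(:, c), s);
    V_est(:, c) = polyval(p, (1:rows)');
end
V_est = V_est / max(V_est(:));            % normalize to (0, 1], Equation (22)
end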

3.2. Selected Properties of the SNILP Model

When analyzing the properties of the SNILP model, it is worth starting by noting that, in practice, relationship (21) is satisfied only up to a certain error. This numerical error is related to the fact that changing the order of the directions in which the 1D approximations are calculated (e.g., first along the X axis and then along the Y axis) may cause some differences in the estimation results. However, on the basis of the performed tests, the results of which are presented in Table 1 and Figure 4, it can be concluded that, for each repetition, the maximal error arising from a change in the order of the approximation directions was not greater than $2 \times 10^{-12}$; such errors are irrelevant even if 16-bit analog-to-digital converters are used in the camera image processor. The conditions of the numerical test were identical to those used in Section 2.4.
Idempotence is the property of some mathematical operations or algorithms whereby applying them multiple times to the same data with the same parameters does not affect the obtained result. In the case of the SNILP model, this means
$\tilde{V}(i,j) = \mathrm{SNILP}(I_V)(i,j) = \mathrm{SNILP}_k\left( \mathrm{SNILP}_{k-1}\left( \cdots \mathrm{SNILP}_2\left( \mathrm{SNILP}(I_V) \right) \cdots \right) \right)(i,j)$,  (23)
where k is the number of repeated applications of the SNILP model. In practice, however, because of numerical errors in the polynomial approximation procedures,
$\mathrm{SNILP}(I_V)(i,j) = \mathrm{SNILP}_k\left( \mathrm{SNILP}_{k-1}\left( \cdots \mathrm{SNILP}_2\left( \mathrm{SNILP}(I_V) \right) \cdots \right) \right)(i,j) + \varepsilon(i,j)$,  (24)
where $\varepsilon(i,j)$ is the numerical error at point $(i,j)$.
Figure 5 presents the mean errors $\bar{\varepsilon}(i,j)$ calculated for different numbers k of repeated applications of the SNILP model; the values of $\bar{\varepsilon}(i,j)$ were calculated on the basis of 100 repetitions of the experiment. One additional application (k = 2) of the SNILP model changes, on average, the result at each point $(i,j)$ by no more than $10^{-13}$; such changes have no practical meaning. The influence of numerical errors is, of course, more visible for larger k, but, even for k = 25, the values of $\bar{\varepsilon}(i,j)$ are not greater than $10^{-11}$. The maximum difference between RMSE values caused by repeated application of the SNILP model does not exceed $5 \times 10^{-13}$. These results show that the SNILP model is, in practice, an idempotent operation.

3.3. Comparison of the SLP and SNILP Models

Figure 1 presents a comparison of MAE and RMSE values for the SLP (box-plots) and SNILP (orange lines; the solid line is the median) models. The presented plots show that the differences between the results obtained from the SLP and SNILP models gradually decrease with the execution of subsequent iterations of the SLP model. Based on these results, the following supposition can be formulated:
$\lim_{k \to \infty} \mathrm{SLP}_k^s(I_V) = \mathrm{SNILP}^s(I_V)$,  (25)
i.e., the vignetting estimates obtained from the SNILP model are, for the same degree s of the approximation polynomials, asymptotes of the series of estimates obtained from successive iterations of the SLP model. This supposition is confirmed by the smoothness results (Figure 3), obtained using the MLSD and MLSCD measures, for the vignetting estimate surfaces obtained from both models.
When comparing the SLP model with the SNILP model, it should be recalled that the result of the SNILP model is obtained in one step, whereas a qualitatively similar result from the SLP model requires k iterations. In the SLP model, for an image of size $M \times N$ pixels, one iteration step includes N approximations in the horizontal direction, M approximations in the vertical direction, and $MN$ additions and $MN$ divisions for the calculation of the mean of these approximations. In the case of the SNILP model, the whole calculation requires only N approximations in the horizontal direction and M approximations in the vertical direction. Hence, assuming a similar quality of the results obtained from the SLP and SNILP models and the same polynomial degree s in both models, the following relationship between the computational complexities of these models holds:
$O_{\mathrm{SNILP}}(M,N) < \frac{1}{k}\, O_{\mathrm{SLP}}(k,M,N)$.  (26)

4. Experimental Results

4.1. Assumptions and Conditions of the Experiment

The aim of the conducted experiment was a comparison, based on real data, of selected vignetting models known from the literature with the SNILP model. The quality of a vignetting model can be evaluated by the analysis of the resulting image:
$\tilde{I}_{flat} = I_V \cdot \tilde{V}^{-1}$,  (27)
i.e., the result of the correction of the vignetting image $I_V$ using the vignetting estimates $\tilde{V}$ obtained from the different vignetting models. An ideal correction result should be flat, i.e., all pixels in the corrected image $\tilde{I}_{flat}$ should have the same value. Of course, in the case of real data, this ideal result cannot be achieved because of the noise present in the input image $I_V$. In such cases, the pixels of the $\tilde{I}_{flat}$ image should have similar values with minimal dispersion. Good measures of dispersion are the standard deviation (STD) and the interquartile range (IQR); therefore, these measures were used for the quantitative assessment of the vignetting correction results. Hence, a lower value of $\mathrm{STD}(\tilde{I}_{flat})$ or $\mathrm{IQR}(\tilde{I}_{flat})$ means a better ability of the analyzed vignetting model to adapt to the real vignetting.
Since the $I_V$ image contains noise, and the estimation result $\tilde{V}$ may differ significantly from the vignetting of a given camera, the $\tilde{I}_{flat}$ image may contain pixels with values beyond the range [0, 255], i.e., the range of pixel values for the cameras used in the experiment. Therefore, before its evaluation, the correction result $\tilde{I}_{flat}$ is subjected to truncation of pixel values according to the formula
$\tilde{I}_{flat}(i,j) := \begin{cases} 0 & \text{for } \tilde{I}_{flat}(i,j) < 0, \\ \tilde{I}_{flat}(i,j) & \text{for } \tilde{I}_{flat}(i,j) \in [0, 255], \\ 255 & \text{for } \tilde{I}_{flat}(i,j) > 255. \end{cases}$  (28)
The data analyzed in the experiment were acquired using three different webcams: Logitech C920, Xiaomi IMILAB CMSXJ22A, and A4Tech PK-910H (Figure 6), denoted in the article as Cam-A, Cam-B, and Cam-C, respectively. Webcam manufacturers do not provide detailed technical data, such as lens focal length or lens speed. However, the webcams used in the experiment were carefully selected to differ in their parameters, particularly the field of view and geometric distortions; a comparison of the webcams' main technical data is presented in Table 2. Such a selection of cameras allows the experiment to be carried out for possibly different examples of real vignetting.
As the flat-field surface needed for acquiring the vignetting image $I_V$, a uniformly back-lighted milky poly(methyl methacrylate) (PMMA, "plexiglass") panel of size 50 cm × 100 cm and thickness 2.5 mm was used. As the light source, an Asus PA248QV graphics monitor displaying a white screen with brightness set to maximum, i.e., 300 lx, was used. The plexiglass panel was carefully positioned parallel to the monitor screen surface. In order to position each camera parallel to the monitor screen, the camera was oriented in such a way that the geometric distortions of the obtained reproduction of a test image were symmetric. The test image was displayed on the monitor serving as the panel illumination (Asus PA248QV), and the acquired images were observed in real time on a second monitor. For the time of acquiring the images, the aforementioned plexiglass panel was inserted between the monitor and the camera.
In order to reduce the noise present in the captured images, the averaged image $\bar{I}_V$, obtained as the mean of 100 originally captured images $I_{V_e}$, was used in the experiment. Since the aim of the experiment is the evaluation of how well the compared vignetting models fit the real vignetting, there is no need to calculate the vignetting estimate for each color channel, i.e., $\bar{R}$, $\bar{G}$, and $\bar{B}$, of the $\bar{I}_V$ image. Therefore, in the experiment, for each camera, the input image
$I_V = 0.2989\,\bar{R} + 0.5870\,\bar{G} + 0.1140\,\bar{B}$  (29)
was used. The exposure parameters for each camera were set automatically for the first image $I_{V_1}$, and the same parameters were used for the rest of the images, i.e., $I_{V_2}, \dots, I_{V_{100}}$.
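A possible MATLAB sketch of this preprocessing step is shown below; frames, a cell array holding the 100 captured RGB images, is an assumed variable, not part of the original code.

acc = 0;
for e = 1:numel(frames)                  % average the captured frames
    acc = acc + double(frames{e});
end
Ibar = acc / numel(frames);              % averaged RGB image
% Gray-scale input image, Equation (29)
IV = 0.2989*Ibar(:,:,1) + 0.5870*Ibar(:,:,2) + 0.1140*Ibar(:,:,3);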
The $I_{V_e}$ images were acquired in a darkroom so that additional light did not interfere with the process. Additionally, in the case of Cam-A and Cam-B, all signaling diodes with which the cameras are equipped were covered. Unfortunately, due to its design (the signaling diode has the form of a ring around the lens), such an operation was impossible for Cam-C. However, this had no significant impact on the acquired images $I_{V_e}$, due to the relatively high brightness of the imaged panel surface compared to the Cam-C signaling diodes and the quite large distance between the camera and the plexiglass panel.
It is important to note that, because of the aim of the experiment, possible small errors in, e.g., the positioning of the monitor, the plexiglass panel, or the cameras, as well as a small non-uniformity of the screen illumination, do not influence the qualitative evaluation of the experiment results. Of course, such errors affect the quantitative results of the estimation; however, they do not change the judgment of the ability of the tested vignetting models to find the best approximation $\tilde{V}$ based on the input image $I_V$, and it is exactly this feature of the vignetting models that is evaluated in the experiment.
In the experiment, four vignetting models were compared: polynomial 2D (P2D), radial polynomial (RP), smooth local polynomial (SLP), and smooth non-iterative local polynomial (SNILP). The entire experiment (from image acquisition, through all calculations, up to data presentation) was performed using the MATLAB R2021a software package with the appropriate toolboxes, i.e., Image Acquisition, Image Processing, and Curve Fitting.
For the implementation of the P2D model, the MATLAB fit function from the Curve Fitting Toolbox, which calculates, among others, 2D polynomial approximations, could be used. However, this function is limited to polynomials of maximum degree s = 5; therefore, in the experiment, our own function, which directly uses the ordinary least squares equations for 2D polynomial approximation, was written and used. It is important to note that our function has, in general, no limit on the degree s of the approximation polynomial. Moreover, it returns the approximation results faster than the fit function, and the differences between the approximation results obtained from our function and the fit function for the data from the experiment are not greater than $10^{-10}$; from a practical point of view, they are therefore negligible.
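The idea of such a direct least-squares fit can be sketched in MATLAB as follows; this is a sketch only, as the authors' own function is not published, and the variable names are illustrative.

% Direct OLS fit of a 2D polynomial of degree s to the image IV
[rows, cols] = size(IV);
[jj, ii] = meshgrid(1:cols, 1:rows);     % pixel coordinate grids
x = ii(:);  y = jj(:);  z = IV(:);
X = [];                                  % design matrix: all terms x^a*y^b, a+b <= s
for a = 0:s
    for b = 0:(s - a)
        X = [X, x.^a .* y.^b];           %#ok<AGROW>
    end
end
beta = (X' * X) \ (X' * z);              % normal equations, cf. Equation (31)
V_est = reshape(X * beta, rows, cols);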
The implementation of the RP model uses the MATLAB polyfit and polyval functions, which calculate the parameters of the approximation polynomial and the values of the approximation function for the obtained parameters. Our own function carrying out these tasks was no better than the aforementioned MATLAB functions. Additionally, in the case of the RP model, the optical center C of the lens-camera system must be found from the $I_V$ image; from this point, the distance r is calculated (see Section 1.2). In the experiment, C was found by searching for the coordinates of the maximum of a 2D polynomial approximation of degree s = 2 of the $I_V$ image. This approach was compared with others, e.g., a 2D polynomial approximation of degree s = 4 and the usage of a smoothing function with a moving window; however, none of the tested approaches gave better results, in the sense of the values of $\mathrm{STD}(\tilde{I}_{flat})$ and $\mathrm{IQR}(\tilde{I}_{flat})$, than the 2D polynomial approximation with s = 2.
In the case of the SLP and SNILP models, both were implemented using the polyfit and polyval functions for the 1D approximations and the calculation of the approximated values. The results for the SLP model were obtained for k = 25 iterations.
All models were compared for different degrees s of the approximation polynomials, i.e., for $s \in \{2, \dots, 10\}$. The tested range fully covers the approximation polynomial degrees used in practice; the literature recommends using s from 2, for tasks that do not require precise correction, up to 6 in more demanding applications.

4.2. Results of the Experiment

In Figure 7, the acquired calibration images (Figure 7a–c) and the $I_V$ images (Figure 7d–f) obtained from the tested cameras are presented. The numerical results of the experiment are presented in Table 3 and Table 4. Additionally, in Figure 8, a comparison of the normalized $I_V$ images and the obtained estimates $\tilde{V}$ for s = 10 is presented. The normalized images $\mathrm{norm}(I_V)$ are calculated as follows:
$\mathrm{norm}(I_V)(i,j) = I_V(i,j) \cdot \left( \max_{i,j} \mathrm{smooth}(I_V)(i,j) \right)^{-1}$,  (30)
where $\mathrm{smooth}(I_V)$ is the result of smoothing the $I_V$ image with a 2D Gaussian filter with standard deviation σ = 0.5 and size 10 × 10; the $\mathrm{norm}(I_V)$ images are used only for visualization (they are not used during the calculation of $\tilde{V}$). Figure 9 shows the results of the correction of the $I_V$ images, i.e., the $\tilde{I}_{flat}$ images, obtained for s = 10.
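For reference, the visualization normalization (30) can be sketched in MATLAB as follows, with the filter parameters stated above.

G = fspecial('gaussian', [10 10], 0.5);  % 2D Gaussian kernel, sigma = 0.5
Ismooth = imfilter(IV, G, 'replicate');  % smoothed copy of IV
IVnorm = IV / max(Ismooth(:));           % Equation (30); used for display only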

4.3. Discussion of the Results

The experiment on real data confirms the results of the numerical tests (Section 3.3), i.e., for a large number k of iterations of the SLP model, the results obtained from the SLP and SNILP models are the same. However, it is important to remember that the computation time needed to obtain the same result is much longer for the SLP model than for the SNILP model.
Comparing the results from all tested models, it can easily be seen that, apart from the IQR results obtained from Cam-B for $s \in \{2, 3\}$, the ordering of the results (regardless of the used measure) is the same: the best results are given by the SLP and SNILP models, slightly worse results by the P2D model, and much worse results by the RP model. It is worth noting that, e.g., for the STD measure and $s \geq 4$, the results from the SNILP and SLP models are usually at least two times better than those obtained from the RP model, which is preferred by many authors.
The STD and IQR measures provide aggregated information about the models' quality. Additional knowledge about the models is provided by the graphs presented in Figure 9. Comparing the results for the SLP and SNILP models with those obtained for the P2D and RP models, it can be seen that the spatial uniformity of the dispersion of values in the $\tilde{I}_{flat}$ images differs. For the individual $\tilde{I}_{flat}$ images obtained from the SLP and SNILP models, the dispersion of values is spatially uniform, which indicates, assuming that the mean value of the camera noise $\eta_{Cam} = 0$, a good fit of the $\tilde{V}$ estimates to the input images $I_V$. Following this line of thinking, and assuming that the $I_{flat}$ images were taken correctly, this means that the $\tilde{V}$ estimates obtained from these models accurately approximate the real vignetting V of the individual cameras. In contrast, the $\tilde{I}_{flat}$ images obtained from the P2D and RP models contain regions in which the spatial uniformity is not preserved. This is especially true for the results obtained for Cam-B (Figure 9b,e) and Cam-C (Figure 9c,f). Based on the above assumptions, the occurrence of such areas in the $\tilde{I}_{flat}$ image proves that the obtained vignetting estimate $\tilde{V}$ is inaccurately adjusted to the real vignetting V of the tested camera.

5. Comparison of Other Features of the Tested Vignetting Models

The quality of the vignetting correction results is not the only criterion which should be taken into consideration when choosing a vignetting model for a specific application. Others are, e.g., the computational complexity (determining the speed of calculation), the required memory size (determining the size of images for which the selected model can be used under given conditions, e.g., in a smart camera with limited hardware resources), and, last but not least, the minimum amount of data required to store the determined vignetting estimate.
In Table 5, the results of the measurement of the computation times of the compared vignetting models are presented. Additionally, in Figure 10, the obtained results are presented in the form of a log-log graph. The presented results were obtained as the median of 10 measurements for each tested image format and vignetting model. For all vignetting models, the degree s = 5 of the approximation polynomials was chosen. In the case of the RP model, the image optical center was found using the same method as in the experiment (Section 4.1). For the SLP model, k = 10 iterations were used. The test vignetting images were obtained using the method presented in Appendix A. The measurements were conducted on a computer equipped with an AMD Ryzen 7 3800X processor, 16 GB RAM, and an M.2 PCIe NVMe SSD. The results show that the SNILP model is already the fastest model for images with a resolution equal to or greater than 1600 × 1200. The difference in computation times between the SNILP model and the next-fastest model, i.e., RP, grows with increasing image resolution. It is worth noticing that the results obtained for the SLP and SNILP models confirm relation (26), i.e., the calculation for the SNILP model is at least k times faster than for the SLP model.
The lower bound of a model's required memory size can be estimated from an analysis of the minimal memory needed for the calculation of the vignetting estimate. In the tested models, the Ordinary Least Squares method is used for this purpose; from its matrix formulation, i.e.,
$\tilde{\boldsymbol{\beta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$,  (31)
where $\tilde{\boldsymbol{\beta}}$ is the vector of estimated polynomial parameters, $\mathbf{X}$ is the matrix of independent variables, and $\mathbf{y}$ is the vector of observations, it can be seen that the size of the matrix $\mathbf{X}$ is, in practice, what determines the amount of needed memory. Let I denote the size of the analyzed image of resolution $M \times N$, where $M \geq N$, and let s be the degree of the used approximation polynomial. The comparison of the sizes of $\mathbf{X}$ is presented in Table 6, together with additional remarks about the memory size needed by each vignetting model. Based on this information, it can be concluded that the SNILP model has the lowest memory requirements among the analyzed models.
The determined vignetting estimate can be saved, e.g., in the form of a matrix $\tilde{V}$ of size I or as a set of parameters. The exact number of parameters needed for each model is presented in Table 7. This comparison shows that the SLP model requires the largest number of parameters. The SNILP model requires N(s + 1) parameters fewer but, when compared with the P2D model, and especially with the RP model, the number of parameters required to save the vignetting estimate $\tilde{V}$ is still large. On the other hand, in order to use the vignetting estimate $\tilde{V}$ saved in the form of a set of parameters, it is necessary to carry out appropriate calculations, which require appropriate hardware resources (including memory larger than the I needed to save $\tilde{V}$ as a matrix) and time. Moreover, in applications where real-time vignetting correction is needed, e.g., on-line monitoring or quality control, storing $\tilde{V}$ in the form of a matrix is the best solution. In such cases, the advantages of using a smaller number of parameters, as in the P2D and RP models, are no longer relevant.

6. Conclusions

Both the numerical and the experimental results prove that the proposed SNILP vignetting model outperforms the SLP model in both the quality of vignetting estimation and the computation speed. The comparison of the results obtained from the SNILP model with those obtained from the P2D and RP models shows that the proposed model provides a better vignetting correction, in the sense of the used measures $\mathrm{STD}(\tilde{I}_{flat})$ and $\mathrm{IQR}(\tilde{I}_{flat})$, than both models known from the literature.
Summarizing the results presented in this article, it can be stated that the SNILP vignetting model is a universal vignetting model (similar to the P2D and SLP models). However, compared to these models, the SNILP model is characterized by a greater accuracy of the obtained result $\tilde{V}$. It should also be noted that, compared to the widely used RP model, the SNILP model gives better results for $s \geq 2$ and much better results for $s \geq 4$, and it can be used for vignetting correction of virtually any lens-camera system (whereas the usage of the RP model is limited due to the use of a radial function). Moreover, for larger images, the results from the SNILP model are obtained in the lowest computation time, and more limited hardware resources suffice for its computation. This last feature is especially important for vision systems with limited computational capabilities, such as smart cameras and sensors, embedded vision systems, or vision systems used in unmanned vehicles.
So far, the vignetting models presented in the literature could be divided into two groups: universal models (e.g., the P2D model), whose usage is not limited to a certain type of vignetting but which require relatively high computing power, and dedicated models (e.g., the RP model), which have relatively low computing power requirements but limited usage. Hence, there has been no model combining universal applicability with low computational complexity. Based on the results presented in this article, it can be concluded that the proposed SNILP model combines these requirements and, in this sense, is a novel contribution to solving the vignetting correction issue.

Author Contributions

Conceptualization, A.B.; methodology, A.B.; software, A.B.; validation, A.B. and H.P.; formal analysis, A.B.; investigation, A.B.; resources, A.B.; data curation, A.B.; writing—original draft preparation, A.B.; writing—review and editing, A.B. and H.P.; visualization, A.B.; supervision, H.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Polish Ministry for Science and Education under internal grant BK-258/RAU4/2021—02/070/BK_21/0022 for the Institute of Automatic Control, Silesian University of Technology, Gliwice, Poland.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
IQR: InterQuartile Range
LP: Local Polynomial (vignetting model)
LSCD: Local Slope Corrected Deviation
LSD: Local Standard Deviation
MAE: Mean Absolute Error
MLSD: Mean Local Standard Deviation
MLSCD: Mean Local Slope Corrected Deviation
P2D: Polynomial 2D (vignetting model)
$\mathrm{PolyApp}_{x/y}^s$: Polynomial Approximation of degree s along a horizontal (x) or vertical (y) line of the input image
RMSE: Root Mean Square Error
RP: Radial Polynomial (vignetting model)
SLP: Smooth Local Polynomial (vignetting model)
SNILP: Smooth Non-Iterative Local Polynomial (vignetting model)
STD: STandard Deviation
VM: Vignetting Model

Appendix A. Data Preparation for Numerical Tests

Because of the aims of the numerical tests, there is no need to use a very accurate model of the vignetting V; therefore, the relatively simple $\cos^4$ model of off-axis illumination [15] was adopted as the vignetting function V. The original formula was modified for easier calculation of V for different resolutions of the analyzed images. Hence, the vignetting function V used in the tests is defined as follows:
$V(i,j) = \frac{1}{\left( 1 + \left( \frac{r\,D/2}{f} \right)^2 \right)^2}$,  (A1)
where $V(i,j) \in (0,1]$, f is the focal length of the lens in millimeters, D is the length of the diagonal of the simulated sensor in millimeters, and r is the normalized distance from the center $(i_c, j_c)$ of the image to the image pixel with coordinates $(i,j)$ (where $i \in \{1, \dots, M\}$ and $j \in \{1, \dots, N\}$), calculated as follows:
$r = \frac{R}{\max(R)}$,  (A2)
$R = \sqrt{(i - i_c)^2 + (j - j_c)^2}$.  (A3)
The obtained vignetting V is a radial function.
To simulate non-radial vignetting, it is enough to slightly modify (A3), i.e.,
$R = \sqrt{(i - i_c)^2 + \alpha (j - j_c)^2}$,  (A4)
where, for $\alpha = 1$, $V(i,j)$ is a radial function, and, for $\alpha \neq 1$, $V(i,j)$ is a non-radial function; e.g., for $\alpha > 1$, the vignetting in the vertical (y) direction is stronger than in the horizontal (x) direction. To simulate a lens with an optical center shifted in relation to the geometrical center of the lens, it is enough to replace in (A3) or (A4) the coordinates $(i_c, j_c)$ of the geometrical image center with the coordinates $(i_{oc}, j_{oc})$ of the simulated optical center of the lens.
For the tests whose results are presented in Section 2 and Section 3, the values of D and f were chosen according to the 135 (35 mm) film standard; hence, D = 43.267 mm. The tests were carried out for three values of the lens focal length f, i.e., 24 mm, 50 mm, and 300 mm. Such a choice of lens focal lengths covers (after conversion taking into account the sensor size used in a given application) the most frequently used focal length range. The results presented in the article were obtained for images of resolution 1280 × 1024, $\alpha = 1$ (radial vignetting), and coordinates of the optical center coinciding with the coordinates of the geometric center of the image.
A real vignetting image $I_V$ contains noise; therefore, in the simulation, the vignetting function V is contaminated with a Gaussian mixture noise according to the formula
$I_V = \kappa V \left( 1 + N(0, \sigma_{multi}) \right) + N(0, \sigma_{add})$,  (A5)
where $N(\mu, \sigma)$ represents noise with a normal distribution with mean $\mu$ and standard deviation $\sigma$. The parameters $\sigma_{multi}$ and $\sigma_{add}$ are the standard deviations of, respectively, the multiplicative and additive components of the mixture noise; the mean values of both noise components are equal to 0. The factor $\kappa$ is a scaling factor that simulates the exposure correction used during the acquisition of real $I_V$ images to prevent saturation in the captured images. The test images were created for $\sigma_{multi} = 10\%$, $\sigma_{add} = 5\%$, and $\kappa = 0.8$. The last stage of the creation of an $I_V$ image is the truncation of its values to the range [0, 255].
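A compact MATLAB sketch of this data generation (Equations (A1)–(A5)) is given below; the 8-bit scaling in the last line is an assumption of the sketch.

M = 1280; N = 1024;                       % image resolution
f = 50;   D = 43.267;                     % focal length and sensor diagonal [mm]
ic = (M + 1)/2;  jc = (N + 1)/2;          % optical center at the geometric center
alpha = 1;                                % 1 = radial vignetting
[jj, ii] = meshgrid(1:N, 1:M);            % pixel coordinate grids
R = sqrt((ii - ic).^2 + alpha*(jj - jc).^2);   % Equation (A4)
r = R / max(R(:));                             % Equation (A2)
V = 1 ./ (1 + (r*(D/2)/f).^2).^2;              % Equation (A1)
kappa = 0.8;  s_multi = 0.10;  s_add = 0.05;
IV = kappa * V .* (1 + s_multi*randn(M, N)) + s_add*randn(M, N);  % Equation (A5)
IV = min(max(255*IV, 0), 255);            % scale to [0, 255] and truncate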

References

  1. Goldman, D.; Chen, J.H. Vignette and exposure calibration and compensation. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05), Beijing, China, 17–21 October 2005; Volume 1, pp. 899–906.
  2. Goldman, D.B. Vignette and Exposure Calibration and Compensation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2276–2288.
  3. Kim, S.J.; Pollefeys, M. Robust radiometric calibration and vignetting correction. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 562–576.
  4. Alomran, M.; Chai, D. Feature-based panoramic image stitching. In Proceedings of the 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, 13–15 November 2016; pp. 1–6.
  5. Chen, C.; Pan, J.; Wang, M.; Zhu, Y. Side-Slither Data-Based Vignetting Correction of High-Resolution Spaceborne Camera with Optical Focal Plane Assembly. Sensors 2018, 18, 3402.
  6. Piccinini, F.; Bevilacqua, A. Colour Vignetting Correction for Microscopy Image Mosaics Used for Quantitative Analyses. Biomed Res. Int. 2018, 2018, 7082154.
  7. Cauwerts, C.; Bodart, M.; Deneyer, A. Comparison of the Vignetting Effects of Two Identical Fisheye Lenses. Leukos 2012, 8, 181–203.
  8. Wagdy, A.; Garcia-Hansen, V.; Isoardi, G.; Pham, K. A Parametric Method for Remapping and Calibrating Fisheye Images for Glare Analysis. Buildings 2019, 9, 219.
  9. Piccinini, F.; Lucarelli, E.; Gherardi, A.; Bevilacqua, A. Multi-image based method to correct vignetting effect in light microscopy images. J. Microsc. 2012, 248, 6–22.
  10. Mignard-Debise, L.; Ihrke, I. A Vignetting Model for Light Field Cameras With an Application to Light Field Microscopy. IEEE Trans. Comput. Imaging 2019, 5, 585–595.
  11. Park, S.E.; Kim, J.G.; Hegazy, M.A.A.; Cho, M.H.; Lee, S.Y. A flat-field correction method for photon-counting-detector-based micro-CT. In Medical Imaging 2014: Physics of Medical Imaging; Whiting, B.R., Hoeschen, C., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2014; Volume 9033, pp. 1536–1542.
  12. Hakala, T.; Suomalainen, J.; Peltoniemi, J.I. Acquisition of Bidirectional Reflectance Factor Dataset Using a Micro Unmanned Aerial Vehicle and a Consumer Camera. Remote Sens. 2010, 2, 819–832.
  13. Cao, H.; Gu, X.; Wei, X.; Yu, T.; Zhang, H. Lookup Table Approach for Radiometric Calibration of Miniaturized Multispectral Camera Mounted on an Unmanned Aerial Vehicle. Remote Sens. 2020, 12, 4012.
  14. Asada, N.; Amano, A.; Baba, M. Photometric calibration of zoom lens systems. In Proceedings of the 13th International Conference on Pattern Recognition, Washington, DC, USA, 25–29 August 1996; Volume 1, pp. 186–190.
  15. Kang, S.B.; Weiss, R. Can We Calibrate a Camera Using an Image of a Flat, Textureless Lambertian Surface? In Computer Vision—ECCV 2000, 6th European Conference on Computer Vision, Dublin, Ireland; Vernon, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 640–653.
  16. Doutre, C.; Nasiopoulos, P. Fast vignetting correction and color matching for panoramic image stitching. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 709–712.
  17. Zheng, Y.; Lin, S.; Kambhamettu, C.; Yu, J.; Kang, S.B. Single-Image Vignetting Correction. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 2243–2256.
  18. Zheng, Y.; Lin, S.; Kang, S.B.; Xiao, R.; Gee, J.C.; Kambhamettu, C. Single-image vignetting correction from gradient distribution symmetries. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1480–1494.
  19. Cho, H.; Lee, H.; Lee, S. Radial Bright Channel Prior for Single Image Vignetting Correction. In Computer Vision—ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 189–202.
  20. Rohlfing, T. Single-Image Vignetting Correction by Constrained Minimization of log-Intensity Entropy; Figshare, 2012; Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.258.4780&rep=rep1&type=pdf (accessed on 5 September 2021).
  21. Sawchuk, A.A. Real-Time Correction of Intensity Nonlinearities in Imaging Systems. IEEE Trans. Comput. 1977, C-26, 34–39.
  22. Brady, M.; Legge, G.E. Camera calibration for natural image studies and vision research. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2009, 26, 30–42.
  23. Yu, W. Practical anti-vignetting methods for digital cameras. IEEE Trans. Consum. Electron. 2004, 50, 975–983.
  24. Leong, F.J.; Brady, M.; McGee, J.O. Correction of uneven illumination (vignetting) in digital microscopy images. J. Clin. Pathol. 2003, 56, 619–621.
  25. Burt, P.J.; Adelson, E.H. A multiresolution spline with application to image mosaics. ACM Trans. Graph. 1983, 2, 217–236.
  26. Willson, R.G.; Shafer, S.A. What is the center of the image? J. Opt. Soc. Am. A 1994, 11, 2946–2955.
  27. Kordecki, A.; Palus, H.; Bal, A. Practical vignetting correction method for digital camera with measurement of surface luminance distribution. Signal Image Video Process. 2016, 10, 1417–1424.
  28. Kordecki, A.; Bal, A.; Palus, H. Local polynomial model: A new approach to vignetting correction. In Proceedings of the Ninth International Conference on Machine Vision (ICMV 2016), Nice, France, 17 March 2017; Verikas, A., Radeva, P., Nikolaev, D.P., Zhang, W., Zhou, J., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2017; Volume 10341, pp. 463–467.
  29. Kordecki, A.; Bal, A.; Palus, H. A smooth local polynomial model of vignetting. In Proceedings of the 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), Międzyzdroje, Poland, 28–31 August 2017; pp. 878–882.
  30. Hunt, B.R.; Cannon, T.M. Nonstationary Assumptions for Gaussian Models of Images. IEEE Trans. Syst. Man Cybern. 1976, SMC-6, 876–882.
  31. Chang, D.C.; Wu, W.R. Image contrast enhancement based on a histogram transformation of local standard deviation. IEEE Trans. Med. Imaging 1998, 17, 518–531.
Figure 1. Comparison of the MAE and RMSE error values obtained for the LP, SLP, and SNILP models and the simulated vignetting of lenses with different focal lengths f; results for $I_V$ contaminated with Gaussian mixture noise (A5) with $\sigma_{add} = 5\%$ and $\sigma_{multi} = 10\%$. The results for the LP model are equal to the results from the SLP model for k = 1. The results for the SNILP model are presented as orange lines, which, from bottom to top, represent the minimum, first quartile, median, third quartile, and maximum of the error values.
Figure 2. Comparison of the surface smoothness of the vignetting estimators $\tilde{V}$ obtained from (b) the LP model and (b–f) the SLP model; (a) the image $I_V$ of size 640 × 480 pixels contaminated with Gaussian mixture noise (A5) for $\sigma_{add} = 5\%$ and $\sigma_{multi} = 10\%$.
Figure 3. Comparison of MLSD and MLSCD values obtained for the LP, SLP, and SNILP models and simulated vignetting of lenses with different focal lengths f; results for $I_V$ contaminated with Gaussian mixture noise (A5) for $\sigma_{add} = 5\%$ and $\sigma_{multi} = 10\%$. Results for the LP model are equal to the results of the SLP model for k = 1. Results for the SNILP model are presented as orange lines, which, from bottom to top, represent the minimum, first quartile, median, third quartile, and maximum of the error values.
Figure 4. Errors related to using the SNILP model with different sequences of approximation directions; examples for images $I_V$ of resolution 1280 × 1024 pixels contaminated with Gaussian mixture noise (A5) for $\sigma_{add} = 5\%$ and $\sigma_{multi} = 10\%$.
Figure 5. Mean errors $\bar{\varepsilon}(i,j)$ related to using the SNILP model in an iterative manner for different numbers of repetitions k; results for the image $I_V$ of size 1280 × 1024 pixels contaminated with Gaussian mixture noise (A5) for $\sigma_{add} = 5\%$ and $\sigma_{multi} = 10\%$.
Figure 6. Cameras used in the experiment, from left to right: Logitech C920 (Cam-A), Xiaomi IMILAB CMSXJ22A (Cam-B), and A4Tech PK-910H (Cam-C).
Figure 7. Comparison of the acquired calibration images and the $I_V$ images obtained from the tested cameras.
Figure 8. Comparison of the vignetting estimates $\tilde{V}$ obtained for the tested cameras and vignetting models for degree s = 10 of the used approximation polynomials. The charts (j–l) present results obtained for the SLP and SNILP models, which are equivalent when the parameter k = 25 (number of iterations) is used in the SLP model.
Figure 9. Comparison of the vignetting correction of the flat-field image $I_V$ for the tested cameras and vignetting models for degree s = 10 of the used approximation polynomials. The charts (j–l) present results obtained for the SLP and SNILP models, which are equivalent when the parameter k = 25 (number of iterations) is used in the SLP model.
Figure 10. Comparison of the computation times of the tested models for selected image formats and the degree of the approximation polynomial s = 5. Exact information about the chosen image formats is given in Table 5.
Table 1. Differences in the values of vignetting estimators obtained with the SNILP model for different sequences of approximation directions.

Percentile | f = 24 mm | f = 50 mm | f = 300 mm
0          | 0.0       | 0.0       | 0.0
0.05       | 1.11e-16  | 0.0       | 0.0
0.25       | 4.44e-16  | 1.11e-16  | 1.11e-16
0.5        | 1.67e-15  | 1.11e-16  | 1.11e-16
0.75       | 7.44e-15  | 2.22e-16  | 3.33e-16
0.95       | 3.26e-14  | 4.44e-16  | 4.44e-16
1          | 7.04e-13  | 1.89e-15  | 1.55e-15
mean       | 7.13e-15  | 1.80e-16  | 1.83e-16
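The differences in Table 1 sit at the level of double-precision round-off (machine epsilon is about 2.22e-16), so the order of the approximation directions has no practical effect on the estimate. The sketch below illustrates why this can be expected: in a simplified separable two-pass variant, fitting a 1D polynomial to every row and then to every column applies two linear least-squares projections whose composition is associative, so swapping the order changes the result only by floating-point noise. This is a minimal stand-in written for illustration, not the authors' SNILP implementation; the function fit_lines, the synthetic surface, and all parameter values are our assumptions.

```python
import numpy as np

def fit_lines(img, s, axis):
    # Least-squares fit of a degree-s polynomial to every line of
    # `img` along the given axis; returns the smoothed surface.
    data = img if axis == 1 else img.T
    n = data.shape[1]
    A = np.vander(np.linspace(-1.0, 1.0, n), s + 1)      # (n, s+1)
    coeffs, *_ = np.linalg.lstsq(A, data.T, rcond=None)  # one fit per line
    fitted = (A @ coeffs).T
    return fitted if axis == 1 else fitted.T

rng = np.random.default_rng(0)
M, N, s = 480, 640, 5
y, x = np.mgrid[0:M, 0:N]
r2 = ((x - N / 2) ** 2 + (y - M / 2) ** 2) / (N / 2) ** 2
V = 1.0 - 0.4 * r2                        # synthetic vignetting surface
I_V = V + rng.normal(0.0, 0.01, V.shape)  # noisy flat-field image

rows_first = fit_lines(fit_lines(I_V, s, axis=1), s, axis=0)
cols_first = fit_lines(fit_lines(I_V, s, axis=0), s, axis=1)
print(np.abs(rows_first - cols_first).max())  # round-off only, ~1e-15
```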
Table 2. Main technical data of the cameras used in the experiment.

Parameter                  | Cam-A: Logitech C920 | Cam-B: Xiaomi IMILAB CMSXJ22A | Cam-C: A4Tech PK-910H
Diagonal field of view     | 78°                  | 85°                           | 70°
Maximal resolution         | 1920 × 1080          | 1920 × 1080                   | 1920 × 1080
Maximal frame rate @ 1080p | 30 fps               | 30 fps                        | 30 fps
Focus type                 | auto focus           | fixed focus                   | fixed focus
Focus range                | >60 cm               | —                             | —
Remarks: All data are provided by the camera manufacturers.
Table 3. Comparison of the STD $\tilde{I}_{flat}$ values.

Camera | Model | Order: 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Cam-A | P2D   | 0.4831 | 0.4802 | 0.3932 | 0.3371 | 0.2930 | 0.2837 | 0.2746 | 0.2677 | 0.2577
Cam-A | RP    | 0.6181 | 0.6181 | 0.5858 | 0.5858 | 0.5837 | 0.5837 | 0.5828 | 0.5828 | 0.5805
Cam-A | SLP   | 0.4485 | 0.4338 | 0.3254 | 0.3148 | 0.2796 | 0.2732 | 0.2520 | 0.2493 | 0.2458
Cam-A | SNILP | 0.4485 | 0.4338 | 0.3254 | 0.3148 | 0.2796 | 0.2732 | 0.2520 | 0.2493 | 0.2458
Cam-B | P2D   | 2.7785 | 2.7503 | 0.9788 | 0.9395 | 0.9299 | 0.9224 | 0.6145 | 0.6017 | 0.5447
Cam-B | RP    | 2.8143 | 2.8143 | 1.2296 | 1.2296 | 1.2322 | 1.2322 | 1.0423 | 1.0423 | 1.0143
Cam-B | SLP   | 2.2618 | 2.2276 | 0.9267 | 0.9222 | 0.6818 | 0.6665 | 0.5295 | 0.5178 | 0.4965
Cam-B | SNILP | 2.2618 | 2.2276 | 0.9267 | 0.9222 | 0.6818 | 0.6665 | 0.5295 | 0.5178 | 0.4965
Cam-C | P2D   | 2.0798 | 1.8989 | 1.7279 | 1.6951 | 0.8810 | 0.8221 | 0.7058 | 0.6857 | 0.5785
Cam-C | RP    | 2.8762 | 2.8762 | 2.6539 | 2.6539 | 2.2816 | 2.2816 | 2.2510 | 2.2510 | 2.2501
Cam-C | SLP   | 2.0720 | 1.8825 | 1.5142 | 1.4789 | 0.6578 | 0.6446 | 0.5635 | 0.5408 | 0.4378
Cam-C | SNILP | 2.0720 | 1.8825 | 1.5142 | 1.4789 | 0.6578 | 0.6446 | 0.5635 | 0.5408 | 0.4378
Remarks: For each camera, the cell background colors (from blue, through light blue and light red, to red) represent the ranking (from the best to the worst) of the model results based on the STD $\tilde{I}_{flat}$ values.
Table 4. Comparison of the IQR $\tilde{I}_{flat}$ values.

Camera | Model | Order: 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Cam-A | P2D   | 0.6565 | 0.6629 | 0.5102 | 0.4101 | 0.3563 | 0.3512 | 0.3363 | 0.3302 | 0.3088
Cam-A | RP    | 0.8365 | 0.8365 | 0.7544 | 0.7544 | 0.7459 | 0.7459 | 0.7415 | 0.7415 | 0.7421
Cam-A | SLP   | 0.6156 | 0.5762 | 0.3974 | 0.3818 | 0.3480 | 0.3408 | 0.2989 | 0.2951 | 0.2901
Cam-A | SNILP | 0.6156 | 0.5762 | 0.3974 | 0.3818 | 0.3480 | 0.3408 | 0.2989 | 0.2951 | 0.2901
Cam-B | P2D   | 2.7444 | 2.6071 | 1.2456 | 1.1831 | 1.0988 | 1.0824 | 0.6802 | 0.6854 | 0.6690
Cam-B | RP    | 2.8675 | 2.8675 | 1.6103 | 1.6103 | 1.6080 | 1.6080 | 1.3072 | 1.3072 | 1.3121
Cam-B | SLP   | 3.2378 | 3.2712 | 1.1637 | 1.1685 | 0.8412 | 0.8102 | 0.6674 | 0.6548 | 0.6277
Cam-B | SNILP | 3.2378 | 3.2712 | 1.1637 | 1.1685 | 0.8412 | 0.8102 | 0.6674 | 0.6548 | 0.6277
Cam-C | P2D   | 2.9902 | 2.9323 | 2.2331 | 2.2332 | 1.0541 | 1.0078 | 0.9222 | 0.8944 | 0.7722
Cam-C | RP    | 4.0800 | 4.0800 | 3.1657 | 3.1657 | 2.7657 | 2.7657 | 2.6463 | 2.6463 | 2.6344
Cam-C | SLP   | 2.8820 | 2.8441 | 2.1795 | 2.1818 | 0.8612 | 0.8455 | 0.7420 | 0.7182 | 0.5849
Cam-C | SNILP | 2.8820 | 2.8441 | 2.1795 | 2.1818 | 0.8612 | 0.8455 | 0.7420 | 0.7182 | 0.5849
Remarks: For each camera, the cell background colors (from blue, through light blue and light red, to red) represent the ranking (from the best to the worst) of the model results based on the IQR $\tilde{I}_{flat}$ values.
Table 5. Computation times of the tested models for selected image formats and the degree of the approximation polynomial s = 5.

Image Resolution | Image Size [Mpx] | Image Format | Computation Time [s]: P2D | RP | SLP | SNILP
320 × 240   | 0.08  | QVGA                         | 0.03   | 0.02  | 0.10  | 0.01
640 × 480   | 0.31  | VGA                          | 0.11   | 0.06  | 1.43  | 0.14
800 × 600   | 0.48  | Super VGA                    | 0.17   | 0.10  | 1.81  | 0.18
1024 × 768  | 0.79  | XGA                          | 0.30   | 0.18  | 2.44  | 0.24
1280 × 1024 | 1.31  | SXGA                         | 0.54   | 0.30  | 3.42  | 0.33
1600 × 1200 | 1.92  | UXGA                         | 0.74   | 0.44  | 4.14  | 0.40
1920 × 1080 | 2.07  | HDV                          | 0.80   | 0.48  | 4.43  | 0.43
2048 × 1080 | 2.21  | 2K Digital Cinema            | 0.94   | 0.50  | 4.68  | 0.45
3840 × 2160 | 8.29  | 4K UHDTV                     | 3.36   | 2.10  | 11.70 | 1.14
4096 × 2160 | 8.85  | 4K Digital Cinema            | 3.78   | 2.18  | 12.22 | 1.19
5472 × 3648 | 19.96 | Canon PowerShot G9 X Mark II | 20.85  | 4.60  | 23.40 | 2.29
7680 × 4320 | 33.18 | 8K UHDTV                     | 61.03  | 8.43  | 34.21 | 3.35
8688 × 5792 | 50.32 | Canon EOS 5DS                | 158.97 | 17.48 | 52.16 | 5.10
Remarks: For each image resolution, the cell background color (from blue, through light blue and light red, to red) represents the ranking (from the best to the worst) of the vignetting models' computation times.
Table 6. Comparison of the models' required memory size.

Model | Size of X | Remarks
P2D   | $\frac{1}{2}(s^2 + 3s + 2)\,I$ |
RP    | $(s + 1)\,I$ | In the case of this model, the stage of searching for the optical center of the image $I_V$ can require more memory than the stage of approximating the radial function $\tilde{V}$. For the approach to searching for the optical center used in the experiment above, the statement is true when $\frac{1}{2}(s_c^2 + 3 s_c + 2)\,I > (s + 1)\,I$, where $s_c$ is the order of the 2D polynomial used for locating the image optical center.
SLP   | $\frac{1}{M}(s + 1)\,I$ | In the SLP model, there is a stage of calculating the vignetting estimate $\tilde{V}$ by averaging (4) the previously calculated values $\tilde{V}_x$ and $\tilde{V}_y$, which requires additional space of size $2I$ to keep these variables. Because of the small value of s that is commonly used, the averaging stage usually requires more memory than the 1D approximation needed for calculating $\tilde{V}_x$ and $\tilde{V}_y$.
SNILP | $\frac{1}{M}(s + 1)\,I$ |
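As a rough illustration of Table 6, the snippet below evaluates the listed design-matrix sizes for an example image and checks the inequality from the RP remark. It reports element counts rather than bytes; the helper x_size and all chosen values are our own assumptions, not code from the article.

```python
# Element counts of the design matrix X per Table 6, for an image of
# I = M * N pixels and polynomial degree s (illustrative only).
def x_size(model, M, N, s):
    I = M * N
    if model == "P2D":
        return (s * s + 3 * s + 2) * I // 2
    if model == "RP":
        return (s + 1) * I
    if model in ("SLP", "SNILP"):
        return (s + 1) * I // M   # = (s + 1) * N, one line at a time
    raise ValueError(model)

M, N, s, s_c = 1280, 1024, 5, 5
for model in ("P2D", "RP", "SLP", "SNILP"):
    print(f"{model:6s} {x_size(model, M, N, s):>12,d}")

# RP remark: the optical-center search dominates memory when
# (s_c**2 + 3*s_c + 2) / 2 > s + 1.
print((s_c**2 + 3 * s_c + 2) / 2 > s + 1)   # True for s_c = s = 5
```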
Table 7. The number of parameters required for saving the vignetting estimation results.

Model | Type of Parameters | Number of Parameters
P2D   | parameters of the 2D approximation polynomial | $\frac{1}{2}(s^2 + 3s + 2)$
RP    | parameters of the 1D radial polynomial function + coordinates of the image optical center | $s + 3$
SLP   | parameters of the 1D approximation of each line in the horizontal and the vertical direction of the input image $I_V$ | $(M + N)(s + 1)$
SNILP | parameters of the 1D approximation of each line along the larger side of the input image $I_V$ | $M(s + 1)$, where $M \geq N$
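To make the counts in Table 7 concrete, the sketch below computes them for an example 1280 × 1024 image and degree s = 10. The helper n_params and the example values are our own illustration of the formulas, not code from the article.

```python
# Number of parameters needed to store each model's vignetting
# estimate, per Table 7 (illustrative helper; names are ours).
def n_params(model, M, N, s):
    if model == "P2D":
        return (s * s + 3 * s + 2) // 2   # terms of a 2D polynomial
    if model == "RP":
        return s + 3                      # 1D radial fit + center (x0, y0)
    if model == "SLP":
        return (M + N) * (s + 1)          # every line, both directions
    if model == "SNILP":
        return M * (s + 1)                # every line, larger side only
    raise ValueError(model)

M, N, s = 1280, 1024, 10   # M >= N, as assumed by the SNILP entry
for model in ("P2D", "RP", "SLP", "SNILP"):
    print(f"{model:6s} {n_params(model, M, N, s):>7d}")
# P2D: 66, RP: 13, SLP: 25344, SNILP: 14080
```

Although the local models store far more parameters than P2D or RP, the storage remains small compared with the image itself ($M \cdot N$ values), and the SNILP entry is roughly half of the SLP one because only the lines along the larger image side are kept.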
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
