Communication

Scene-Based Nonuniformity Correction Method Using Principal Component Analysis for Infrared Focal Plane Arrays

1 School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2 Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(24), 13331; https://doi.org/10.3390/app132413331
Submission received: 27 October 2023 / Revised: 12 December 2023 / Accepted: 13 December 2023 / Published: 18 December 2023

Abstract

In this paper, principal component analysis is introduced to form a scene-based nonuniformity correction method for infrared focal plane arrays. The gain and offset of the infrared detector are estimated, and the nonuniformity is corrected, concurrently with a neural-network-based method that uses a novel estimate of the desired target value. The current frame and several adjacent registered frames are decomposed onto a set of principal components, and the first principal component is extracted to construct the desired target value. The method is practical, produces fewer ghosting artifacts, and considerably improves correction precision. Numerical experiments demonstrate that the proposed method performs excellently on clean infrared data with synthetic pattern noise as well as on a real infrared video sequence.

Infrared focal plane arrays (IRFPAs) are widely utilized in night vision detection, industrial monitoring, medical imaging, and scientific research. However, due to their inconsistent pixel-level responses, they exhibit a noticeable fixed-pattern noise (FPN), which is superimposed on the true image [1]. Furthermore, slow drifts in the infrared detectors’ parameters tend to appear with variations in the transistor bias voltage and environmental conditions, making the traditional reference-based (e.g., two-point calibration [2]) nonuniformity correction (NUC) methods insufficient. Consequently, many scene-based nonuniformity correction (SBNUC) methods, which can be continuously and adaptively updated according to scene information, have been developed to overcome this problem. Generally, SBNUC methods fall into two categories: statistical methods [3,4,5,6,7,8,9,10] and registration-based methods [11,12,13,14]. The constant statistics (CS) method proposed by Harris and Chiang [3] first assumes a Gaussian distribution of incident irradiance, in which all pixels display the same mean and variance of the irradiance of a scene. Scribner et al. [4] originally developed the neural network (NN-NUC) method, which adaptively estimates a “desired” value to obtain the nonuniformity parameters. Based on the CS algorithm, the local CS (LCS) algorithm was then proposed by C. Zhang and W. Zhao [7]. On the basis of the Kalman filter, a NUC algorithm was presented by Torres et al. [8]. By introducing a total variation approach, Esteban Vera et al. developed a PDE-based method [9] that adaptively estimates the gain and bias. Rossi et al. [10] utilized the bilateral filter and presented a de-ghosting technique. The registration-based algorithms, in turn, require accurate interframe estimation of the motion.
Some representatives are the motion-compensated average [11], the algebraic scene-based algorithm [12], the interframe registration-based algorithm (IRLMS) [13], and the multiframe registration-based adaptive NUC algorithm (MRA-NUC) [14]. Moreover, the unidirectional total variation model [15], weighted least squares [16], temporal filtering [17], and spatial correlation [18] have been taken into consideration in order to balance complexity and real-time performance. Furthermore, with the development of various kinds of neural networks, methods based on deep learning have gradually been introduced into stripe removal and nonuniformity correction [19,20,21]. All NUC methods face the same problems of convergence speed and “ghosting artifacts”. In this paper, a multiframe registration-based NUC method combined with principal component analysis (PCA-NUC) is proposed to deal with the FPN. PCA-NUC first employs a phase correlation algorithm [22] to achieve an accurate multiframe registration, and principal component analysis is then introduced to eliminate the noise pattern and construct the desired output. To gain a faster convergence speed while updating the parameter estimates, the steepest descent algorithm is adopted. Owing to its success in data mining and machine learning, the Karhunen–Loève (KL) transform [23] is utilized to convert several registered adjacent frames into a set of values of linearly uncorrelated variables called principal components; the first principal component is then extracted to construct the desired target value. Compared with the existing methods [6,13], the proposed PCA-NUC shows a better noise elimination capability, an apparent visual quality improvement of the corrected images, and a faster convergence speed, with barely any ghosting artifacts. The flow chart of the proposed method is shown in Figure 1.
Here, the responses of all the detectors are assumed to be linear, and a gain and a bias are employed to model the nonuniformity. The observed output $Y_{i,j}(n)$ of the detectors can be written as follows:
$$Y_{i,j}(n) = a_{i,j}(n)\,X_{i,j}(n) + b_{i,j}(n), \qquad i \in [1, M],\; j \in [1, N]$$
where $a_{i,j}(n)$ and $b_{i,j}(n)$ represent the real gain and bias of the $(i,j)$-th detector at a given frame $n$ of size $M \times N$, respectively, and $X_{i,j}(n)$ represents the real incident infrared photon flux collected by the respective detector. By applying a linear mapping to the observed pixel values, NUC provides an estimate $\hat{X}_{i,j}(n)$ of the true scene value so that the responses of the detectors appear uniform. The calibrated output of the detector can be expressed as follows:
$$\hat{X}_{i,j}(n) = g_{i,j}(n)\,Y_{i,j}(n) + o_{i,j}(n)$$
where $g_{i,j}(n)$ and $o_{i,j}(n)$ represent the gain and bias of the linear correction model of the $(i,j)$-th detector, respectively. The relations between the real and the modeled gain and bias can be expressed as
$$g_{i,j}(n) = \frac{1}{a_{i,j}(n)}, \qquad o_{i,j}(n) = -\frac{b_{i,j}(n)}{a_{i,j}(n)}$$
The error equation for the correction coefficient estimate is the following:
$$E_{i,j}(n) = T_{i,j}(n) - \hat{X}_{i,j}(n)$$
where $T_{i,j}(n)$ is the desired (expected) output.
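To make the linear model concrete, the following NumPy sketch (with arbitrary illustrative sizes and noise levels, not those of the paper's experiments) corrupts a synthetic scene with per-pixel gain and bias and then inverts it using the ideal coefficients $g = 1/a$ and $o = -b/a$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 "true" scene X and per-pixel detector parameters,
# following the linear model Y = a*X + b.
X = rng.uniform(100.0, 200.0, size=(4, 4))
a = rng.normal(1.0, 0.1, size=(4, 4))   # gain, unit mean
b = rng.normal(0.0, 30.0, size=(4, 4))  # offset, zero mean

Y = a * X + b                 # observed (nonuniform) output

# Ideal correction coefficients implied by the relations above:
g = 1.0 / a                   # g = 1/a
o = -b / a                    # o = -b/a

X_hat = g * Y + o             # corrected output
print(np.allclose(X_hat, X))  # True: exact inversion with true parameters
```

With the true parameters the inversion is exact; in practice $g$ and $o$ are unknown and must be estimated from the scene, which is what the rest of the method does.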
We can use the phase correlation algorithm [22] to obtain the translation between two adjacent observed frames:
$$[d_i(n,t),\, d_j(n,t)] = \arg\max_{i,j} \operatorname{Re}\left\{ \mathrm{FFT}^{-1}\!\left( \frac{F_{u,v}(n)\, F^{*}_{u,v}(n-t)}{\left| F_{u,v}(n)\, F^{*}_{u,v}(n-t) \right|} \right) \right\}, \qquad t \in [1, K]$$
where $K$ represents the number of adjacent frames, $F_{u,v}(n)$ and $F_{u,v}(n-t)$ denote the Fourier transforms of $Y_{i,j}(n)$ and $Y_{i,j}(n-t)$, and $(u,v)$ are the Fourier-domain coordinates. Here, any rotation, scaling, or other warping of the images is neglected, and the interframe motion is assumed to consist only of translation. We can therefore obtain the registered frames $Y'_{i,j}(n-t)$ relative to the current frame, which can be represented as
$$Y_{i,j}(n) \approx Y'_{i,j}(n-t) = Y_{i+d_i(n,t),\, j+d_j(n,t)}(n-t) = \mathrm{FFT}^{-1}\!\left\{ F_{u,v}(n-t)\, e^{\,j 2\pi [u\, d_i(n,t) + v\, d_j(n,t)]} \right\}$$
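A minimal NumPy implementation of this registration step might look as follows; the helper name and the wrap-around handling of the FFT grid are our own choices, and subpixel motion, rotation, and scaling are ignored, as in the text:

```python
import numpy as np

def phase_correlation_shift(cur, prev):
    """Estimate the integer translation of `prev` relative to `cur`
    via the normalized cross-power spectrum (phase correlation)."""
    F1 = np.fft.fft2(cur)
    F2 = np.fft.fft2(prev)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))     # correlation surface
    di, dj = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to signed shifts (FFT grid wraps around).
    if di > cur.shape[0] // 2:
        di -= cur.shape[0]
    if dj > cur.shape[1] // 2:
        dj -= cur.shape[1]
    return int(di), int(dj)

# Synthetic check: circularly shift a random frame by (3, -2).
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -2), axis=(0, 1))
print(phase_correlation_shift(shifted, frame))  # (3, -2)
```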
Next, we rewrite $Y'_{i,j}(n-t)$ as an $S$-dimensional ($S = M \times N$) column vector $I(n,t)$, and the several adjacent frames are utilized to construct an $S \times K$ matrix $I_{n,K} = \{I(n,1), I(n,2), \ldots, I(n,K)\}$. It is assumed that $K \ll S$, and the mean column of the matrix $I_{n,K}$ is $\bar{I}_{n,K} = \frac{1}{K}\sum_{k=1}^{K} I(n,k)$. According to the method in [23], the decomposition can be performed using the singular value decomposition (SVD):
$$I_{n,K} - \bar{I}_{n,K} = U D V^{T}$$
where $D$ is a diagonal matrix whose entries, listed in descending order, are the singular values (the square roots of the eigenvalues of $(I_{n,K}-\bar{I}_{n,K})(I_{n,K}-\bar{I}_{n,K})^{T}$), the columns of the orthogonal matrix $U$ are the eigenvectors of $(I_{n,K}-\bar{I}_{n,K})(I_{n,K}-\bar{I}_{n,K})^{T}$, and the columns of $V$ are the eigenvectors of $(I_{n,K}-\bar{I}_{n,K})^{T}(I_{n,K}-\bar{I}_{n,K})$. Once the covariance matrix is diagonalized, the principal components can be obtained from $U$. We can therefore extract the first principal component $U_1$ (the first column of $U$) to construct the desired target value $T(n)$, where $Y(n)$ denotes the vectorized current frame:
$$T(n) = U_1 U_1^{T}\left[ Y(n) - \bar{I}_{n,K} \right] + \bar{I}_{n,K}$$
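The construction of the desired target can be sketched with NumPy's SVD as follows; the function name and the rank-1 test data are illustrative (in the real method the columns would be $K$ registered infrared frames):

```python
import numpy as np

def pca_desired_target(frames, current):
    """Project the current frame onto the first principal component of
    K registered frames, each flattened into a column vector."""
    I = np.stack([f.ravel() for f in frames], axis=1)   # S x K matrix
    mean = I.mean(axis=1, keepdims=True)                # per-pixel mean column
    U, s, Vt = np.linalg.svd(I - mean, full_matrices=False)
    u1 = U[:, :1]                                       # first principal component
    y = current.ravel()[:, None]
    T = u1 @ (u1.T @ (y - mean)) + mean                 # T = U1 U1^T (Y - mean) + mean
    return T.reshape(current.shape)

# Rank-1 check: frames that differ only along one pattern p are
# reproduced exactly by the first-PC projection.
rng = np.random.default_rng(2)
base = rng.random((4, 4))
p = rng.random((4, 4))
frames = [base + a * p for a in (0.0, 1.0, 2.0)]
cur = base + 1.5 * p
T = pca_desired_target(frames, cur)
print(np.allclose(T, cur))  # True
```

On real data the higher-order components carry frame-to-frame noise, so keeping only the first component acts as the noise-suppression step described above.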
By employing the steepest descent algorithm [4], $g_{i,j}(n)$ and $o_{i,j}(n)$ can be adaptively updated via the following equations:
$$g_{i,j}(n+1) = g_{i,j}(n) + \lambda\, E_{i,j}(n)\, Y_{i,j}(n)$$
$$o_{i,j}(n+1) = o_{i,j}(n) + \lambda\, E_{i,j}(n)$$
where λ denotes the iteration step size coefficient.
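A sketch of one update step follows; differentiating the squared error $E^2$ with respect to $g$ and $o$ under the correction model $\hat{X} = gY + o$ yields these rules (note that the gain step is scaled by the observed value):

```python
def nuc_update(g, o, Y, T, lam=0.05):
    """One steepest-descent step on the squared error E^2.
    Y is the observed value, T the desired target from the PCA step.
    Works elementwise for scalars or NumPy arrays."""
    X_hat = g * Y + o          # current corrected output
    E = T - X_hat              # error against the desired target
    g_new = g + lam * E * Y    # gain update (gradient of E^2 w.r.t. g)
    o_new = o + lam * E        # offset update (gradient of E^2 w.r.t. o)
    return g_new, o_new

# One scalar step: observed Y = 2, target T = 1, so E = -1 and both
# coefficients move to reduce the output toward the target.
print(nuc_update(1.0, 0.0, 2.0, 1.0))  # (0.9, -0.05)
```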
To evaluate the estimation of the nonuniformity parameters, PCA-NUC is first studied on a clean IR video sequence with synthetic nonuniformity, and it is then compared with the original NN-NUC (with a $3 \times 3$ spatial average filter, as suggested in [6]) and IRLMS methods on a set of real infrared image data. Since all three methods use the stochastic steepest descent technique to optimize the correction coefficients, the same global step constant $\lambda = 0.05$ is adopted. The clean infrared data were captured from a tall building with a properly calibrated $320 \times 256$ HgCdTe FPA camera operating in the 3–5 μm range at 25 FPS. The corrupted video sequence is obtained by applying a synthetic gain with a unit-mean Gaussian distribution (standard deviation 0.1) and a synthetic offset with a zero-mean Gaussian distribution (standard deviation 30).
To measure the difference between the true infrared image and the corrected result, we use the root mean square error (RMSE), defined as follows:
$$\mathrm{RMSE} = \sqrt{ \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ \hat{Y}_{i,j} - Y_{i,j} \right]^2 }$$
where $Y_{i,j}$ and $\hat{Y}_{i,j}$ are the true and the corrected values of the $(i,j)$-th pixel, respectively, and $M \times N$ is the size of the infrared image. Figure 2 displays the RMSE evolution of the three tested algorithms.
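A direct NumPy transcription of this metric, with a small hand-checkable example:

```python
import numpy as np

def rmse(corrected, true):
    """Root mean square error over an M x N image."""
    return float(np.sqrt(np.mean((corrected - true) ** 2)))

# Squared errors are 9, 0, 0, 16; their mean is 6.25, so RMSE = 2.5.
true = np.zeros((2, 2))
corrected = np.array([[3.0, 0.0], [0.0, 4.0]])
print(rmse(corrected, true))  # 2.5
```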
All three curves decrease rapidly over the first 50 frames. The red curve is the RMSE of the PCA-NUC method, which drops below 20 within 20 frames, whereas the other two methods need more than 50 frames. In the first 40 frames, the RMSE of the NN-NUC method drops almost as fast as that of IRLMS, but it falls abruptly after that. Meanwhile, the red curve decreases in a relatively stable manner, reaching an RMSE of 6.28 after 50 frames, consistently about 30% lower than that of IRLMS and at least 50% lower than that of NN-NUC. Owing to its fast convergence and stability with hardly any rebound, the PCA-NUC method clearly outperforms the other two methods.
As is well known, SVD is a computationally expensive operation. Moreover, many practical factors, such as the memory hierarchy and operating-system scheduling, must be considered in real systems. Here, our primary focus lies in evaluating the operational speed of the various methods. The following tests were carried out on a PC with an Intel(R) Core(TM) i5 processor and 16 GB of RAM, using the Visual Studio 2019 platform. Table 1 shows a rough average of the CPU time consumed per frame by the three algorithms. Because of the additional operations required by PCA-NUC, such as registration and SVD, it is more time-consuming than the other two algorithms.
In the following subsection, PCA-NUC is applied to a set of 14-bit real infrared data acquired with a $320 \times 256$ HgCdTe FPA camera operating in the 8–14 μm range. When testing NUC performance on real infrared data, for which the calibration data needed to perform a radiometrically accurate correction are unavailable, the roughness index ($\rho$), which measures the high-pass content of an image, can be adopted as a reference for comparison. The definition of $\rho$ can be written as follows:
$$\rho = \frac{\operatorname{mean}(|L * Y|)}{\operatorname{mean}(Y)}$$
where $L$ represents a Laplacian filter, $*$ is the discrete convolution operator, $Y$ is the image, and $\operatorname{mean}(Y)$ represents the mean value of $Y$. The roughness evolution of the three tested algorithms and the mean roughness over the video sequence are presented in Figure 3 and Table 2, respectively.
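The roughness index can be sketched as follows; the $3 \times 3$ Laplacian kernel and the "valid" boundary handling are our assumptions, since the text does not specify them:

```python
import numpy as np

# Standard 4-neighbor Laplacian kernel (an assumed choice of L).
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def roughness(img):
    """rho = mean(|L * Y|) / mean(Y), with 'valid' 2D convolution
    so only fully covered interior pixels contribute."""
    M, N = img.shape
    acc = np.zeros((M - 2, N - 2))
    for di in range(3):
        for dj in range(3):
            acc += LAPLACIAN[di, dj] * img[di:di + M - 2, dj:dj + N - 2]
    return float(np.mean(np.abs(acc)) / np.mean(img))

# A perfectly flat image has zero high-pass content:
print(roughness(np.full((8, 8), 5.0)))  # 0.0
```

A lower $\rho$ therefore indicates less high-frequency content, which for these sequences is dominated by residual fixed-pattern noise.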
As shown in Figure 3, all three methods display a distinct capability of correcting nonuniformity. Throughout the sequence, PCA-NUC exhibits the lowest roughness of the three methods. The curves of the other two algorithms drop dramatically in the first 50 frames and fluctuate around their mean values in the remaining frames. The PCA-NUC method thus presents an excellent performance compared with the other two methods.
Figure 4 shows frame 30 of the real infrared video sequence. The raw image is displayed in Figure 4a, and the outputs of the NN-NUC, IRLMS, and PCA-NUC methods are shown in Figure 4b–d, respectively. In Figure 4b,c, residual fixed-pattern noise is clearly noticeable; because the motion of the scene is insufficient, correcting low-spatial-frequency nonuniformity effectively is a tough task. The red arrow marks the residual patches of fixed-pattern noise. The results demonstrate that, within only 30 frames, the FPN is almost eliminated by the proposed PCA-NUC method.
Figure 5 displays another sample image (frame 140) of the real infrared data, for which all three NUC methods reduce the nonuniformity present in the raw image. The camera moved vertically from the bottom up and then suddenly moved to the right. In the outputs, especially in Figure 5b, ghosting artifacts are noticeable (indicated by the red arrow), and residual low-spatial-frequency nonuniformity is also visible. Some residual nonuniformity can likewise be perceived in the output of the IRLMS method. In the PCA-NUC output, however, neither ghosting artifacts nor residual inhomogeneity can be detected by the naked eye.
In conclusion, by introducing principal component analysis, a scene-based nonuniformity correction method for infrared focal plane arrays is proposed in this paper. Principal component analysis is used to build an eigenspace representation of several registered adjacent frames, and the first principal component is then extracted to construct the desired target value. Experiments comparing PCA-NUC with the traditional neural network and IRLMS methods demonstrate the superiority of the proposed model over the existing methods. In addition, the PCA-NUC method avoids undesirable effects such as the “burn-in” (ghosting) problem. The proposed method shows a significant fixed-pattern-noise elimination capability and an apparent visual-quality improvement of the corrected images, with almost no residual nonuniformity. However, the PCA-NUC method requires a large computational load, so our future research will focus on further reducing its computational complexity for hardware implementation to improve real-time performance.

Author Contributions

Conceptualization, D.L.; methodology, J.R.; validation, J.T. and L.T.; formal analysis, M.W.; investigation, L.T.; resources, J.T.; data curation, M.W.; writing—original draft preparation, D.L. and L.T.; writing—review and editing, D.L. and L.T.; visualization, J.T.; supervision, L.W.; project administration, G.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to an ongoing research project.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Scribner, D.A.; Sarkady, K.A.; Caulfield, J.T.; Kruer, M.R.; Katz, G.; Gridley, C.J.; Herman, C. Nonuniformity Correction for Staring IR Focal Plane Arrays Using Scene-Based Techniques. In Proceedings of the 1990 Technical Symposium on Optics, Electro-Optics, and Sensors, Orlando, FL, USA, 16–20 April 1990; Volume 1308, pp. 224–233.
2. Friedenberg, A.; Goldblatt, I. Nonuniformity Two-Point Linear Correction Errors in Infrared Focal Plane Arrays. Opt. Eng. 1998, 37, 1251–1253.
3. Harris, J.G.; Chiang, Y.-M. Nonuniformity Correction Using the Constant-Statistics Constraint: Analog and Digital Implementations. In Proceedings of the Infrared Technology and Applications XXIII, Orlando, FL, USA, 20–25 April 1997; Volume 3061, pp. 895–905.
4. Scribner, D.A.; Sarkady, K.A.; Kruer, M.R.; Caulfield, J.T.; Hunt, J.D.; Herman, C. Adaptive Nonuniformity Correction for IR Focal-Plane Arrays Using Neural Networks. In Proceedings of the Infrared Sensors: Detectors, Electronics, and Signal Processing, San Diego, CA, USA, 21 July 1991; Volume 1541, pp. 100–109.
5. Vera, E.; Torres, S. Fast Adaptive Nonuniformity Correction for Infrared Focal-Plane Array Detectors. EURASIP J. Adv. Signal Process. 2005, 2005, 560759.
6. Torres, S.N.; Vera, E.M.; Reeves, R.A.; Sobarzo, S.K. Adaptive Scene-Based Nonuniformity Correction Method for Infrared-Focal Plane Arrays. In Proceedings of the Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XIV, Orlando, FL, USA, 21–25 April 2003; Volume 5076, pp. 130–139.
7. Zhang, C.; Zhao, W. Scene-Based Nonuniformity Correction Using Local Constant Statistics. J. Opt. Soc. Am. A 2008, 25, 1444–1453.
8. Torres, S.N.; Hayat, M.M. Kalman Filtering for Adaptive Nonuniformity Correction in Infrared Focal-Plane Arrays. J. Opt. Soc. Am. A 2003, 20, 470–480.
9. Vera, E.; Meza, P.; Torres, S. Total Variation Approach for Adaptive Nonuniformity Correction in Focal-Plane Arrays. Opt. Lett. 2011, 36, 172–174.
10. Rossi, A.; Diani, M.; Corsini, G. Bilateral Filter-Based Adaptive Nonuniformity Correction for Infrared Focal-Plane Array Systems. Opt. Eng. 2010, 49, 057003.
11. Hardie, R.C.; Hayat, M.M.; Armstrong, E.; Yasuda, B. Scene-Based Nonuniformity Correction with Video Sequences and Registration. Appl. Opt. 2000, 39, 1241–1250.
12. Ratliff, B.M.; Hayat, M.M.; Hardie, R.C. An Algebraic Algorithm for Nonuniformity Correction in Focal-Plane Arrays. J. Opt. Soc. Am. A 2002, 19, 1737–1747.
13. Zuo, C.; Chen, Q.; Gu, G.; Sui, X. Scene-Based Nonuniformity Correction Algorithm Based on Interframe Registration. J. Opt. Soc. Am. A 2011, 28, 1164–1176.
14. Ren, J.-L.; Chen, Q.; Qian, W.-X.; Gu, G.; Yu, X.-L.; Liu, N. Multiframe Registration Based Adaptive Nonuniformity Correction Algorithm for Infrared Focal Plane Arrays. J. Infrared Millim. Waves 2014, 33, 122–128.
15. Boutemedjet, A.; Deng, C.; Zhao, B. Edge-Aware Unidirectional Total Variation Model for Stripe Non-Uniformity Correction. Sensors 2018, 18, 1164.
16. Li, F.; Zhao, Y.; Xiang, W. Single-Frame-Based Column Fixed-Pattern Noise Correction in an Uncooled Infrared Imaging System Based on Weighted Least Squares. Appl. Opt. 2019, 58, 9141–9153.
17. Liu, C.; Sui, X.; Liu, Y.; Kuang, X.; Gu, G.; Chen, Q. FPN Estimation Based Nonuniformity Correction for Infrared Imaging System. Infrared Phys. Technol. 2019, 96, 22–29.
18. Zhou, B.; Luo, Y.; Chen, B.; Wang, M.; Peng, L.; Liang, K. Local Spatial Correlation-Based Stripe Non-Uniformity Correction Algorithm for Single Infrared Images. Signal Process. Image Commun. 2019, 72, 47–57.
19. Yu, H.; Zhang, Z.; Wang, C. An Improved Retina-like Nonuniformity Correction for Infrared Focal-Plane Array. Infrared Phys. Technol. 2015, 73, 62–72.
20. He, Z.; Cao, Y.; Dong, Y.; Yang, J.; Cao, Y.; Tisse, C.-L. Single-Image-Based Nonuniformity Correction of Uncooled Long-Wave Infrared Detectors: A Deep-Learning Approach. Appl. Opt. 2018, 57, D155–D164.
21. Kuglin, C.D. The Phase Correlation Image Alignment Method. In Proceedings of the IEEE International Conference on Cybernetics and Society, 1975; pp. 163–165.
22. Mou, X.; Zhu, T.; Zhou, X. Visible-Image-Assisted Nonuniformity Correction of Infrared Images Using the GAN with SEBlock. Sensors 2023, 23, 3282.
23. Turk, M.; Pentland, A. Face Recognition Using Eigenfaces. In Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Maui, HI, USA, 3–6 June 1991; pp. 586–591.
Figure 1. The flow chart of the proposed PCA-NUC.
Figure 2. RMSE versus frames using different NUC methods.
Figure 3. Roughness versus frames using different NUC methods.
Figure 4. The performance comparison between different NUCs on frame 30 of the raw infrared data. The red arrow indicates the residual fixed-pattern noise. (a) Sample image from the data. (b) Corrected result with the NN-NUC method. (c) Corrected result with the IRLMS method. (d) Corrected result with the PCA-NUC.
Figure 5. The performance comparison between different NUCs on frame 140 of the raw infrared data. The red arrow indicates the ghosting artifacts. (a) Sample image from the data. (b) Corrected result with the NN-NUC method. (c) Corrected result with the IRLMS method. (d) Corrected result with the PCA-NUC.
Table 1. Average CPU time consumed per frame for the three algorithms.

| Algorithm | Average CPU Time (s) |
|-----------|----------------------|
| NN-NUC    | 0.006                |
| IRLMS     | 0.087                |
| PCA-NUC   | 0.146                |
Table 2. Mean roughness ρ results for the real IR data.

| Algorithm   | Roughness ρ |
|-------------|-------------|
| Unprocessed | 0.2721      |
| NN-NUC      | 0.2143      |
| IRLMS       | 0.1998      |
| PCA-NUC     | 0.1851      |

Lu, D.; Teng, L.; Ren, J.; Tan, J.; Wang, M.; Wang, L.; Gu, G. Scene-Based Nonuniformity Correction Method Using Principal Component Analysis for Infrared Focal Plane Arrays. Appl. Sci. 2023, 13, 13331. https://doi.org/10.3390/app132413331

