Article

Spectral Reflectance Recovery from the Quadcolor Camera Signals Using the Interpolation and Weighted Principal Component Analysis Methods

1 Department of Electrophysics, National Yang Ming Chiao Tung University, No. 1001 University Road, Hsinchu 30010, Taiwan
2 Department of Electrical Engineering, Yuan Ze University, No. 135 Yuan-Tung Road, Taoyuan 320, Taiwan
3 Department of Photonics, National Yang Ming Chiao Tung University, No. 1001 University Road, Hsinchu 30010, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(16), 6288; https://doi.org/10.3390/s22166288
Submission received: 20 July 2022 / Revised: 14 August 2022 / Accepted: 19 August 2022 / Published: 21 August 2022
(This article belongs to the Section Optical Sensors)

Abstract: The recovery of surface spectral reflectance using a quadcolor camera was numerically studied. The RGB channels of the quadcolor camera were assumed to be the same as those of the Nikon D5100 tricolor camera, and the spectral sensitivity of the fourth signal channel was tailored using a color filter. Munsell color chips were used as reflective surfaces. When the interpolation method or the weighted principal component analysis (wPCA) method is used to reconstruct spectra, using the quadcolor camera effectively reduces the mean spectral error of the test samples compared to using the tricolor camera. Except for computation time, the interpolation method outperforms the wPCA method in spectrum reconstruction. A long-pass optical filter can be applied to the fourth channel to reduce the mean spectral error. A short-pass optical filter can be applied to the fourth channel to reduce the mean color difference, but at the cost of a larger mean spectral error. Due to the small color difference, the quadcolor camera using an optimized short-pass filter may be suitable as an imaging colorimeter. It was found that an empirical design rule for keeping the color difference small is to reduce the error in fitting the color-matching functions with the camera spectral sensitivity functions.

1. Introduction

Multispectral imaging is an important application in color science and technology [1,2,3,4,5,6,7,8,9,10,11,12]. Spatial and spectral resolution can be improved using multiple cameras [13,14,15]. Surface spectral reflectance can be recovered from multispectral image data and the light source spectrum. Spectral reflectance can be measured directly with an imaging spectrometer, but the measurement is expensive due to the need for a diffractive optical imaging system [16,17,18]. Therefore, indirect measurements using the spectrum reconstruction technique are of interest [19,20,21,22,23,24,25,26,27,28,29]. The spectra of the image pixels are reconstructed from the channel outputs of the image acquisition device, an extension of the conventional trichromatic system in which more than three channels are provided by different color filters. Orthogonal projection [19], Gaussian mixture [20], principal component analysis (PCA) [21,22], non-negative matrix transformation (NMT) [23,24] and interpolation [25,26,27,28,29] have been proposed for spectrum reconstruction.
Indirect methods that require training spectra, such as PCA and NMT, are also known as learning-based methods. The training spectra are used to derive basis spectra, and the reconstructed spectrum is a linear combination of the basis spectra. The coefficients of the basis spectra can be solved from simultaneous equations describing the channel outputs of the imaging device. However, for cases where only XYZ tristimulus values or RGB signal values are available, only three basis spectra can be used, limiting the accuracy of the reconstructed spectra. Improved methods, such as the weighted PCA (wPCA) and adaptive NMT methods [22,24], have been proposed to enhance the contribution to the basis spectra of training samples that neighbor the test sample in a color space. Since the basis spectra depend on the sample to be reconstructed, the computation time is significantly increased compared to conventional methods [27,29].
Another disadvantage of learning-based methods is that the equations describing the channel outputs require the spectral sensitivity functions of the imaging device for solving the coefficients of the linear combination of basis spectra. The spectral sensitivities can be directly measured using a monochromator [30], but accurate measurements are expensive. Without a monochromator, the spectral sensitivities can be estimated by solving a quadratic minimization problem [30,31,32]. An alternative approach is to estimate the combined spectral sensitivities of the device and light source so that the spectral reflectance can be calculated directly from the signals [33,34,35,36,37,38]. The estimation errors of the spectral sensitivity cause additional errors in the reconstructed spectrum.
The interpolation method uses reference spectra to reconstruct the spectrum from input values, e.g., XYZ tristimulus values [25,26,27] or RGB signal values [28,29]. Because a look-up table (LUT) is used to store the reference spectra, this method is often referred to as the LUT method. Since the reconstructed spectrum is interpolated from the reference spectra, the LUT method does not require the spectral sensitivity functions. Furthermore, the authors of [25] showed that the LUT method is more accurate than the PCA method when the reference spectra for interpolation are the same as the training spectra for the PCA method. The authors of [26,27,28,29] showed that the spectrum reconstruction errors of the wPCA and adaptive NMT methods may not be less than those of the LUT method. These results are reasonable because the reference samples are neighbors of the sample to be interpolated in the color or signal space.
The authors of [28,29] investigated the use of a tricolor camera for spectral reflectance recovery using the LUT method. Although tricolor cameras have only three available channels, they have the advantages of low cost and fast detection, in addition to not requiring measurement or estimation of the camera spectral sensitivity functions. The use of cameras enables more field applications, e.g., smartphone cameras used as sensors. However, if a sample is outside the convex hull of the reference samples in the signal space, it cannot be interpolated and must be extrapolated instead. The sample corresponding to such a signal vector is called an outside sample to distinguish it from the samples inside the convex hull [26,27,28,29]. The authors of [29] proposed auxiliary reference samples (ARSs) to extrapolate the outside samples; the results showed that the extrapolation error utilizing the ARSs is lower than that of the methods in [27,28].
This paper studies the recovery of spectral reflectance using quadcolor cameras, where four channel signals can be used to reduce spectrum reconstruction errors. The wPCA and LUT methods were both used to reconstruct spectra. To the best of the authors’ knowledge, this paper is the first research work to study the use of a quadcolor camera and the LUT/wPCA method to recover surface spectral reflectance. The color filter array (CFA) of a conventional camera, known as a Bayer filter, is shown in Figure 1a. One unit cell of the CFA includes one red, two green and one blue square filters, and each color filter corresponds to one pixel. Assuming that the irradiance varies smoothly around the unit cell, a demosaicing algorithm can interpolate the missing signals of a pixel from neighboring pixels, so the spatial resolution of captured images can be improved using the CFA. There are other color filter layouts, such as the RGBE filter used in the SONY Cyber-shot DSC-F828, where a green filter on the CFA unit cell is replaced by a cyan filter, as shown in Figure 1b.
Due to the higher dimensional signal space, the extrapolation problem of quadcolor cameras is more severe than that of tricolor cameras using the LUT method. This paper will show that this is also a problem using the wPCA method. ARSs were used for extrapolation using the LUT method. A Nikon D5100 camera was taken as a reference tricolor camera. The RGB channels of the quadcolor cameras under consideration were assumed to be the same as the D5100 camera. The spectral sensitivity of the fourth channel was tailored using a color filter. The reflection spectra from the Munsell color chips irradiated with the illuminant D65 were taken as samples for testing.
This paper is organized as follows. Section 2.1, Section 2.2 and Section 2.3 describe the considered camera spectral sensitivities, the color samples, and the assessment metrics for the recovered spectral reflectance, respectively. Four spectral sensitivity types for the fourth channel of the quadcolor camera are described in Section 2.1. Section 3.1 and Section 3.2 describe the wPCA and LUT methods, respectively. Section 3.3 presents a method for preparing the ARSs for the extrapolation of outside samples using the quadcolor camera and the LUT method. Section 3.4 defines a factor that can be used to design the spectral sensitivities of a camera to achieve a small color difference for the reconstructed spectrum. Section 4 presents the results: the reduction in spectral reconstruction errors using the quadcolor cameras compared to the D5100 camera, a comparison of the performances of the wPCA and LUT methods, the optimal designs of the considered quadcolor cameras, and the spectral sensitivity characteristics affecting spectral reflectance recovery. Section 5 gives the conclusions. Appendix A and Appendix B give the proofs of the zero-color-difference conditions using the LUT and wPCA methods, respectively. For ease of reference, the Abbreviations section lists the abbreviations defined herein in alphabetical order.

2. Materials and Assessment Metrics

2.1. Camera Spectral Sensitivities

A spectrum can be represented by the vector S = [S(λ1), S(λ2), …, S(λMw)]T, where S(λj) is the spectral amplitude at wavelength λj; λj = λ1 + (j − 1)Δλ is the j-th sampling wavelength, j = 1, 2, …, Mw; Δλ is the wavelength sampling interval; Mw is the number of sampling wavelengths; and the superscript T denotes the transpose operation. In this paper, spectra were sampled from 400 to 700 nm in steps of 10 nm, i.e., λ1 = 400 nm, Δλ = 10 nm and Mw = 31. The spectral sensitivity vector of a camera signal channel can be written as
SCam = TOpt ∘ TIRC ∘ TCF ∘ D,
where TOpt, TIRC and TCF are the spectral transmittance vectors of the imaging lens set, IR cut filter and color filter, respectively; D is the spectral sensitivity vector of the CMOS sensor at the focal plane; and the operator ∘ is the Hadamard product, also known as the element-wise product. The IR cut filter blocks invisible infrared light and can be replaced with a UV/IR cut filter. For simplicity, the lens transmittance was not considered in this paper.
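As a minimal illustration, Equation (1) reduces to per-wavelength products of the filter and sensor curves. The MATLAB sketch below assumes all quantities are sampled on the 400–700 nm grid defined above; the variable names are placeholders, not the code used in this work.

```matlab
% Equation (1) as element-wise (Hadamard) products on the 400:10:700 nm
% grid. T_irc, T_cf and D stand for measured curves loaded elsewhere
% (placeholder names).
T_opt = ones(31, 1);                      % lens transmittance not considered
S_cam_ch = T_opt .* T_irc .* T_cf .* D;   % channel spectral sensitivity
```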
The CMOS sensor converts the light into electric signals. The conventional CFA on the CMOS sensor filters the light to separate its short-, mid- and long-wavelength components. As shown in Figure 1a, the sensor pixels corresponding to the red, green and blue filters provide the R, G and B signals, respectively. The spectral sensitivity vectors of the red, green and blue channels are designated SCamR, SCamG and SCamB, respectively. Figure 2a shows the SCamR, SCamG and SCamB of the D5100 camera measured using a monochromator [30]. The IR cut filter of the D5100 camera has a cutoff wavelength of approximately 690 nm. In this paper, it was assumed that the spectral sensitivities of the red, green and blue channels of the quadcolor cameras under consideration are the same as those shown in Figure 2a.
Figure 2b shows the spectral sensitivity of the fourth channel of a quadcolor camera, which is the product of the spectral sensitivity of a typical silicon sensor [39] and the spectral transmittance of the Baader UV/IR cut filter. This fourth channel is a greenish yellow channel, even though only the UV/IR cut filter is applied. The output signal of the channel is designated as the F signal because this channel is free of a color filter; it is not designated as the Y signal in order to distinguish it from the CIE stimulus Y. Therefore, the quadcolor camera with this fourth channel is called the RGBF camera.
Blue or cyan filters can be used to compensate for the increase in silicon sensor sensitivity with wavelength, e.g., the Isuzu IEC series filters. Figure 3a shows the spectral transmittance of five IEC series filters. The spectral sensitivities of the channels with each of the five filters applied are shown in Figure 3b; these are yellowish green channels. Because of the applied compensation filter, the quadcolor camera with such a fourth channel is called the RGBC camera.
Short-pass and long-pass optical filters were also applied to the fourth channel, respectively. The spectral transmittance of these optical filters is based on the super-Gaussian function; the spectral transmittance functions of the short-pass and long-pass optical filters are the same as those of the cyan and yellow filters in [29], respectively. Figure 4a shows their specification, where f0 is the maximum transmittance; λS and λL are the edge wavelengths at 0.5 f0; and ΔλS and ΔλL are the edge widths from 0.1 f0 to 0.9 f0. In this paper, for simplicity, ΔλS = ΔλL = 30 nm was assumed. The fourth channels using the short-pass and long-pass optical filters are called the S and L channels, respectively, and the quadcolor cameras with the S and L channels are called the RGBS and RGBL cameras, respectively. Figure 4b shows the spectral transmittance of the short-pass and long-pass optical filters with λS = 528 nm and λL = 585 nm, respectively, where the corresponding spectral sensitivities of the S and L channels are also shown.
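The exact super-Gaussian parameterization is that of [29] and is not reproduced here; the sketch below shows one plausible form in which the half-maximum crossing is pinned at the edge wavelength and the super-Gaussian order controls the edge width. The order m = 4 and the passband anchor are assumptions for illustration.

```matlab
% A plausible super-Gaussian short-pass transmittance (an assumption; the
% exact functional form in [29] may differ). T(lamS) = 0.5*f0 by design,
% and the order m controls the 0.1*f0-to-0.9*f0 edge width.
f0   = 0.9;                % maximum transmittance
lamS = 528;                % edge wavelength at 0.5*f0 (nm)
m    = 4;                  % super-Gaussian order (assumed)
lam  = 400:10:700;         % sampling grid (nm)
lam0 = 400;                % passband anchored at the short-wavelength end
w    = (lamS - lam0) / log(2)^(1/(2*m));   % sets the half-maximum at lamS
T    = f0 * exp(-((lam - lam0)/w).^(2*m));
```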
To sum up, four types of quadcolor cameras were considered, which are the RGBF, RGBC, RGBS and RGBL cameras. They are identical except for the color filter applied to the fourth channel.

2.2. Color Samples

The reference/training and test samples were prepared using the reflectance spectra of matt Munsell color chips measured by a spectroradiometer [40]. A total of 1268 reflectance spectra in [40] were used in this paper. Illuminant D65 was assumed to be the light source. In the case of using the LUT method, the same 202 and 1066 color chips as in [29] were used to prepare the reference and test samples, respectively. In the case of using the wPCA method, the reference and test samples were also used as training and test samples, respectively.
The spectrum vector of the light reflected from a color chip is
SReflection = SRef ∘ SD65,
where SRef and SD65 are the spectral reflectance vector of the color chip and the spectrum vector of the illuminant D65, respectively. The color points of light reflected from the 1268 Munsell color chips in the CIELAB color space have been shown in [29]. The CIE 1931 color-matching functions (CMFs) were adopted in this paper.
In the following, the RGBF camera is taken as an example. The measured signal of a color channel is UMeas = SReflectionTSCamU, where U = R, G, B and F for the red, green, blue and fourth channels, respectively; SReflection is the reflection spectrum vector; and SCamU is the spectral sensitivity vector of channel U calculated from Equation (1). For the white balance condition, the channel signals are normalized to U = UMeas/UMeasD65, where U = R, G, B and F, and UMeasD65 is the measured signal when SRef = SWhite, where SWhite is the spectral reflectance of a white card. The same white card as in [29] was used, which is the white side of a Kodak gray card.
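A compact sketch of this signal model is given below; the matrix and vector names are placeholders for the quantities defined above, assumed to be loaded from the measured data.

```matlab
% Signal formation and white balance for the RGBF camera. Placeholders:
% S_cam is the 31x4 sensitivity matrix [R G B F], S_D65 the 31x1 illuminant
% spectrum, S_ref the 31x1 chip reflectance, S_white the 31x1 white-card
% reflectance.
S_reflection = S_ref .* S_D65;             % Equation (2)
U_meas  = S_cam' * S_reflection;           % raw R, G, B and F signals (4x1)
U_white = S_cam' * (S_white .* S_D65);     % signals for the white card
C = U_meas ./ U_white;                     % white-balanced signal vector
```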
The vector representing the camera signals is designated as C = [R, G, B, F]T. Figure 5a–d show the color points of the light reflected from the Munsell color chips in the RGB, GBF, BFR and FRG signal spaces, respectively, using the RGBF camera. In these figures, the 202 reference samples are shown as red dots; out of the 1066 test samples, the 726 inside samples and 340 outside samples are shown as green and blue dots, respectively.

2.3. Assessment Metrics

For a given test signal vector, the wPCA and LUT methods to reconstruct the reflection light spectrum are shown in Section 3.1 and Section 3.2, respectively. The reconstructed spectrum vector is designated as SRec. The reconstructed spectral reflectance vector SRefRec was calculated as the reflection spectrum vector SRec divided by the D65 spectrum vector SD65 element by element.
The reconstructed spectral reflectance vector SRefRec was assessed by the root mean square (RMS) error ERef = (|SRefRec − SRef|2/Mw)1/2 and the goodness-of-fit coefficient GFC = |SRefRecTSRef|/(|SRefRec||SRef|), where |·| stands for the norm operation. The color difference between SRec and SReflection was assessed using the CIEDE2000 formula, ΔE00. The spectral comparison index (SCI), an index of metamerism [41], was also used to assess the reconstructed results. The parameter k in the formula for calculating the SCI shown in [41] was set to 1.0.
For the values of ERef, ΔE00 and SCI, the smaller, the better. The statistics of the three metrics were calculated, which are the mean μ, standard deviation σ, 50th percentile PC50, 98th percentile PC98 and maximum MAX. For the value of GFC, the larger, the better. The statistics of GFC were calculated, which are the mean μ, standard deviation σ, 50th percentile PC50, and minimum MIN. The fit of the spectral curve shape is good if GFC > 0.99 [28,42]. The ratio of samples with GFC > 0.99 was calculated, which is called the ratio of good fit and designated as RGF99.
The assessment metrics ERef and 1 − GFC are related to spectral error, while ΔE00 and SCI are related to color appearance error. Section 4.3 will show that the values of ERef and 1 − GFC are roughly consistent with each other, as are the values of ΔE00 and SCI. Since the reconstructed spectrum is a metameric spectrum, the spectral error can be large even when the color appearance error is small. Therefore, it is necessary to use both types of metrics for assessment.
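The two spectral-error metrics can be computed directly from the vectors defined above, as in the sketch below (CIEDE2000 and SCI require colorimetric code and are omitted; names are placeholders).

```matlab
% RMS reflectance error and goodness-of-fit coefficient from Section 2.3.
% S_rec is the reconstructed reflection spectrum (31x1); S_D65 and S_ref
% are as defined above (placeholder names).
Mw = 31;
S_refRec = S_rec ./ S_D65;                         % recovered reflectance
E_ref = sqrt(sum((S_refRec - S_ref).^2) / Mw);     % RMS spectral error
GFC = abs(S_refRec' * S_ref) / (norm(S_refRec) * norm(S_ref));
goodFit = GFC > 0.99;                              % counts toward RGF99
```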

3. Methods

3.1. The wPCA Method

From the theory of PCA [43], the spectrum vector S can be decomposed as
S = P0 + ∑k=1…Mw dkPk,
where P0 is the average spectrum vector of the training samples; dk and Pk are the coefficient and spectrum vector of the k-th principal component, respectively. The principal components are derived from the training samples using PCA; their number equals the number of sampling wavelengths, Mw. The camera spectral sensitivity matrix is defined as DCam = (DCamR DCamG DCamB DCam4th), where DCamR, DCamG, DCamB and DCam4th are the normalized spectral sensitivity vectors of the red, green, blue and fourth channels for the white balance condition; for example, DCamR = SCamR/(SWhite ∘ SD65)TSCamR. DCam is an Mw × 4 matrix. If both sides of Equation (3) are multiplied by DCamT, we have the signal vector
C = C0 + ∑k=1…Mw dkQk,
where C0 = DCamTP0 and Qk = DCamTPk. Since Equation (4) represents four scalar equations, the summation in Equation (4) was truncated, and the upper limit of the summation index Mw was modified to 4 for solving the first four dk coefficients. From the solved coefficients, the reconstructed spectrum vector is
SRec = P0 + ∑k=1…4 dkPk.
If the reconstructed spectrum has negative values, the value is set to zero. The first four principal components are the basis spectra for the spectrum reconstruction using a quadcolor camera. The channel spectral sensitivity vectors are given in Section 2.1. As described in Section 1, in practice, the spectral sensitivity matrix DCam is measured or estimated experimentally. Additional errors introduced from the measurements/estimations were not considered in this paper.
The wPCA method is the same as the PCA method shown above, except that the training samples are weighted according to the sample to be reconstructed [22]. The i-th training sample was multiplied by a weighting factor ΔEi−γ, where ΔEi is the color difference between the test sample and the i-th training sample in the CIELAB color space and γ is a constant. The weighted training samples were used to derive the basis spectra. The larger the value of γ, the greater the contribution of training samples with small color differences to the basis spectra. If γ = 0, the wPCA method becomes the traditional PCA method. The value of γ is usually set to 1.0 [22]; in this paper, it was optimized for the minimum mean ERef of the test samples for each camera. A camera device model was used to convert RGB signal values into tristimulus values for calculating ΔEi. A third-order root polynomial regression model (RPRM) was employed and trained using the reference samples [44]; a sketch is shown below. The accuracy of the RPRM was slightly higher than that of the polynomial regression model in this case.
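One common third-order root-polynomial term set is sketched below; the exact terms and fitting details of the RPRM in [44] may differ, and all variable names are hypothetical.

```matlab
% Hypothetical third-order root-polynomial expansion mapping RGB signals
% to XYZ (one common term set; the RPRM in [44] may use a different one).
% r, g, b are Nx1 signal columns; XYZref holds Nx3 reference tristimulus
% values (placeholder names).
rp3 = @(r, g, b) [r, g, b, ...
    sqrt([r.*g, g.*b, r.*b]), ...
    nthroot([r.*g.*b, r.^2.*g, r.^2.*b, g.^2.*r, g.^2.*b, b.^2.*r, b.^2.*g], 3)];
M = rp3(Rref, Gref, Bref) \ XYZref;        % least-squares fit on references
XYZtest = rp3(Rtest, Gtest, Btest) * M;    % predicted tristimulus values
```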
We also tried using the weighting factor GFCiκ instead of ΔEi−γ, where κ is a constant to be optimized and GFCi is the GFC between the test sample and the i-th training sample. The larger the value of κ, the greater the contribution of training samples with a large goodness-of-fit coefficient to the basis spectra. Using such a weighting factor requires two-stage spectrum reconstruction: the first stage reconstructs the spectrum using the weighting factor ΔEi−γ, the reconstructed spectrum is used to calculate GFCi, and the second stage reconstructs the spectrum using the weighting factor GFCiκ. However, the spectrum reconstruction error using the weighting factor GFCiκ is larger than that using the weighting factor ΔEi−γ. Therefore, this paper only considers the weighting factor ΔEi−γ.
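A minimal sketch of the weighted reconstruction follows, using the pca function of the MATLAB Statistics and Machine Learning Toolbox with observation weights; the guard against a zero distance and all variable names are assumptions for illustration.

```matlab
% Illustrative wPCA reconstruction (Equations (3)-(5)). Placeholders:
% trainS: Ntrain x 31 training spectra; dE: Ntrain x 1 CIELAB distances
% to the test sample from the RPRM device model; D_cam: 31x4 normalized
% sensitivity matrix; C: 4x1 white-balanced test signal vector.
gamma = 1.2;                                 % exponent optimized per camera
w = max(dE, eps) .^ (-gamma);                % weights; guard against dE = 0
[P, ~, ~, ~, ~, mu] = pca(trainS, 'Weights', w, 'NumComponents', 4);
P0 = mu';                                    % weighted mean spectrum (31x1)
Q  = D_cam' * P;                             % Qk = DCamT*Pk, a 4x4 matrix
d  = Q \ (C - D_cam' * P0);                  % solve the four coefficients
S_rec = max(P0 + P * d, 0);                  % Equation (5), clipped at zero
```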

3.2. The LUT Method: Interpolation

Detailed descriptions of the LUT method for the 3D case were given in [25,29]. The LUT method for the 4D case is the same as for the 3D case except for the signal dimension. This subsection shows the reconstruction of the reflection spectrum vector SRec from the test signal vector C for the 4D case. Linear scattered data interpolation was used to reconstruct the spectrum due to its simplicity and computational time savings [25,26,27,29]. A simplex mesh in the signal space was generated from the reference signal vectors using the Delaunay triangulation [25]; for example, a simplex is a triangle in a 2D signal space and a tetrahedron in a 3D signal space. All programs for this work were implemented in MATLAB (version R2021a, MathWorks). The simplex mesh was generated by the MATLAB function “delaunayn” [45]. There were three steps to interpolate the test sample.
(i)
The simplex that encloses the vector C in the signal space was located. This paper used the MATLAB function “tsearchn” to locate the simplex [46].
(ii)
It is required that C is the linear combination of the five reference signal vectors at the vertices of the simplex, and
C = α1C1 + α2C2 + α3C3 + α4C4 + α5C5,
1 = α1 + α2 + α3 + α4 + α5,
where the coefficients α1, α2, α3, α4 and α5 are weighting factors. Equation (6) comprises four scalar equations because the signal vector is 4D. Equation (7) guarantees that the color point of the signal vector is inside the simplex in the signal space if 0 < α1, α2, α3, α4, α5 < 1. The five coefficients in Equations (6) and (7) were solved.
(iii)
The reconstructed reflection spectrum vector is
SRec = α1S1 + α2S2 + α3S3 + α4S4 + α5S5,
where Sj is the reference spectrum vector corresponding to the j-th vertex, j = 1, 2, 3, 4 and 5. If the reconstructed spectrum has negative values, the value is set to zero.
The solutions for the coefficients in Equations (6) and (7) are unique. These coefficients, called barycentric coordinates, describe the location of the color point in the simplex [25]; accordingly, this linear interpolation is called barycentric interpolation.
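The three steps map directly onto the MATLAB functions named above; a minimal sketch follows, with placeholder variable names.

```matlab
% Barycentric interpolation in the 4D signal space (Equations (6)-(8)).
% Placeholders: refC is the Nref x 4 matrix of reference signal vectors,
% refS the Nref x 31 matrix of reference spectra, C the 1x4 test signal.
tess = delaunayn(refC);                  % simplex mesh (Delaunay)
[t, bary] = tsearchn(refC, tess, C);     % steps (i)-(ii): simplex + weights
if isnan(t)
    % outside the convex hull: extrapolate with the ARSs (Section 3.3)
else
    S_rec = max(bary * refS(tess(t, :), :), 0);   % step (iii), clipped
end
```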
If a signal vector is outside the convex hull of the simplex mesh, it is an outside sample. Figure 5a–d show the 340 outside samples as blue dots in the signal space using the RGBF camera. The method to extrapolate outside samples is described in Section 3.3.

3.3. The LUT Method: Extrapolation

Section 2.2 shows that there are 340 outside samples using the RGBF camera, while there are 202 outside samples using the D5100 camera [29] for the same 202 reference samples and 1066 test samples. Imagine projecting points in a 3D space onto a 2D space: points that are sparse in the 3D space can appear dense in the 2D space. Similarly, if the color point density of the reference samples in the RGB signal space is high enough to interpolate, say, 80% of the test samples, the density of the same reference samples in the 4D signal space may only be high enough to interpolate, say, 70% of the test samples. Therefore, for the same reference and test samples, the number of outside samples increases with the signal dimension.
The spectra of all signals can be reconstructed using the wPCA method but not the LUT method. However, due to the lack of suitable nearby training samples, as shown in Figure 5a–d, the spectrum reconstruction error of an outside sample using the wPCA method is likely to be larger than that of an inside sample. In this paper, outside samples of the LUT method were extrapolated utilizing the reference samples and ARSs [29]. ARSs are high-saturation samples. They are created using appropriately selected color filters and color chips. The color filters are called the ARS filters. The extrapolation process is the same as the interpolation method shown in Section 3.2 but using the expanded reference samples including ARSs.
The authors of [29] used cyan, magenta and yellow (CMY) ARS filters to extrapolate the 202 outside samples for the case with the D5100 camera. It was found that of the 340 outside samples, a few cannot be extrapolated using the CMY ARS filters for some quadcolor cameras under consideration. For example, 2 outside samples cannot be extrapolated using the RGBC cameras. It was also found that the use of additional red, green, and blue (RGB) ARS filters to create more ARSs enables all outside samples to be extrapolated.
The ARS filters can be optimized to minimize spectrum reconstruction errors, but the optimization requires the spectral sensitivity functions of the camera. Filter characteristics can be specified by the edge wavelength and edge width, which are defined in the same way as the color filter of the fourth channel in Section 2.1. Although the design is not optimal, their specifications can be selected according to channel wavelengths. The channel wavelength is the average wavelength of the spectral sensitivity of the signal channel. The RGB channel wavelengths of the D5100 camera are λCamR = 603.4 nm, λCamG = 530.7 nm and λCamB = 466.7 nm, respectively. From [29], empirically, the edge wavelengths of cyan and yellow filters can be λC = λCamR and λY = λCamC, respectively, where λCamC = (λCamB + λCamG)/2 is the mean wavelength of λCamB and λCamG; the edge wavelengths at the short-wavelength side and the long-wavelength side of the magenta filter can be λMS = λCamC and λML = λCamR, respectively. Therefore, we set λC = 603.4 nm, λY = 498.7 nm, λMS = 498.7 nm and λML = 603.4 nm. Figure 6a shows the spectral transmittance of the CMY ARS filters, where the maximum transmittance and edge width of all filters were set to 0.9 and 30 nm, respectively.
The spectral transmittance of the RGB ARS filters is also based on the super-Gaussian function. In [29], the spectral transmittance function of the magenta ARS filter is an inverted super-Gaussian function. Since filter optimization is not the purpose of this paper, for simplicity, we set the edge wavelengths of the blue and red filters as λB = λCamC and λR = λCamR, respectively; the edge wavelengths on the short- and long-wavelength sides of the green filter were λGS = λCamC and λGL = λCamR, respectively. Therefore, we set λR = 603.4 nm, λB = 498.7 nm, λGS = 498.7 nm and λGL = 603.4 nm. Figure 6b shows the spectral transmittance of the RGB ARS filters, where the maximum transmittance and edge width of all filters were set to 0.9 and 30 nm, respectively. The CMY and RGB filters are designated as the CMYRGB filters.
The ARSs were created according to the method in [29] using the CMYRGB filters specified above and the color chips corresponding to the vertices of the reference sample convex hull. The convex hull in the RGBF signal space cannot be shown due to its 4D geometry. Figure 7a–d show the color points of the ARSs created with the CMYRGB filters in the RGB, GBF, BFR and FRG signal spaces, respectively, for the case with the RGBF camera. The color points of the ARSs created using the CMY and RGB filters are shown as 47 red dots and 79 purple hollow dots, respectively. Figure 7a–d also show the 340 outside samples as blue dots for comparison. The gamut volume expanded by the ARSs in Figure 7a–d is larger than that expanded by the reference samples in Figure 5a–d. Both the reference samples and the ARSs created with the CMYRGB filters were used to extrapolate the outside samples for the tricolor and quadcolor cameras under consideration using the LUT method. As in [29], including the ARSs in the training sample set was found to deteriorate the spectrum reconstruction using the wPCA method.
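In code, extrapolation is simply the interpolation of Section 3.2 applied to the expanded reference set; a sketch under the same placeholder names:

```matlab
% Extrapolation of outside samples: rebuild the mesh over the expanded
% reference set (reference samples plus ARSs) and interpolate as before.
% arsC and arsS hold the ARS signal vectors and spectra (placeholders).
extC = [refC; arsC];
extS = [refS; arsS];
tessExt = delaunayn(extC);
[t, bary] = tsearchn(extC, tessExt, C);        % now encloses the sample
S_rec = max(bary * extS(tessExt(t, :), :), 0);
```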

3.4. CMF Mismatch Factor (CMFMisF)

If both sides of Equation (8) are multiplied by the transposed spectral sensitivity matrix DCamT, we obtain Equation (6). However, the interpolation is an inverse problem: the reconstructed spectrum vector SRec is one of numerous metameric spectrum vectors corresponding to the test signal vector C. Equations (6) and (7) provide five constraints for finding a metameric spectrum vector; if a tricolor camera is used to reconstruct the spectrum, the number of constraints is only four. The difference between the target spectral reflectance vector SRef and the reconstructed spectral reflectance vector SRefRec calculated from the metameric spectrum vector SRec was assessed using the metrics defined in Section 2.3.
Appendix A and Appendix B show the zero-color-difference conditions using the LUT and wPCA methods, respectively. If the sensitivity functions of the camera fit the CIE CMFs x̄, ȳ and z̄ ideally, the color difference between the reflection light spectrum SReflection and the spectrum SRec reconstructed using the LUT method is zero. Under this condition, the tristimulus XYZ values of the metameric spectrum vector SRec are the same as those of the sample to be reconstructed. For the case of using the wPCA method, the color difference is also zero if an additional condition is satisfied, which requires that the spectral amplitude calculated from Equation (5) be non-negative.
The color difference ΔE00 between SReflection and SRec will be non-zero when the spectral sensitivity functions of the considered cameras do not fit the CMFs ideally. Therefore, it can be inferred that the mean ΔE00 of the test samples is related to the CMF mismatch factor (CMFMisF) defined as
CMFMisF = (σx2 + σy2 + σz2)1/2,
where σm is the relative RMS error of fitting the CMF vector um using the camera spectral sensitivity vectors for m = x, y and z; ux = [x̄(λ1), x̄(λ2), …, x̄(λMw)]T, uy = [ȳ(λ1), ȳ(λ2), …, ȳ(λMw)]T, uz = [z̄(λ1), z̄(λ2), …, z̄(λMw)]T and λj = λ1 + (j − 1)Δλ for j = 1, 2, …, Mw.
σm = (|uFit − um|2)1/2/|um|,
where m = x, y, and z; uFit is the least squares fit of the CMF vector um using the spectral sensitivity vectors of the camera. For the case of using the quadcolor camera,
uFit = βRSCamR + βGSCamG + βBSCamB + β4thSCam4th,
where the coefficients βR, βG, βB and β4th were solved using the Moore–Penrose pseudo-inversion in the least-squares sense [43].
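The whole factor can be computed with one least-squares fit, as in the sketch below (placeholder names; S_cam is the 31 × 4 sensitivity matrix and U = [ux uy uz] the 31 × 3 CMF matrix on the same wavelength grid).

```matlab
% CMF mismatch factor. The backslash operator returns the least-squares
% coefficients of the overdetermined fit (equivalent to the pseudo-inverse).
beta  = S_cam \ U;                          % 4x3 coefficient matrix
U_fit = S_cam * beta;                       % fitted CMF vectors
sigma = vecnorm(U_fit - U) ./ vecnorm(U);   % relative RMS error per CMF
CMFMisF = norm(sigma);                      % (sx^2 + sy^2 + sz^2)^(1/2)
```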

4. Results and Discussion

4.1. Using the RGBF Camera

Table 1 shows the assessment metric statistics for the test samples using the LUT method, where the cameras are the D5100 and RGBF. The LUT method for the D5100 camera is the same as the quadcolor camera shown in Section 3.2, except that the signal space is reduced from 4D to 3D. As can be seen from Table 1, using the RGBF camera reduced the mean ERef, ΔE00 and SCI of the inside samples and increased the mean GFC of the inside samples compared to the D5100 camera. The mean ERef and GFC values of the outside samples using the RGBF camera were even smaller and larger than the inside samples using the D5100 camera, respectively. While non-zero as expected, the color difference ΔE00 was small for most of the test samples. Compared to the D5100 camera, the mean ERef of the test samples, inside samples and outside samples using the RGBF camera was reduced by 31.98%, 35.81% and 35.82%, respectively. Compared to the D5100 camera, the RGF99 of the test samples, inside samples and outside samples using the RGBF camera was increased from 0.9343, 0.9375 and 0.9208 to 0.9887, 0.9972 and 0.9706, respectively.
Table 2 is the same as Table 1 except that the wPCA method was used. The wPCA method for the D5100 camera is the same as the quadcolor camera shown in Section 3.1, except that three basis spectra were used. The inside and outside samples using the wPCA method were the same as those using the LUT method for comparison. The optimized γ = 1.7 and 1.2 for the cases of using the D5100 and RGBF cameras, respectively. From Table 2, the mean assessment metrics of the outside samples were worse than those of the inside samples. Compared to the D5100 camera, the mean ERef values of the test samples, inside samples and outside samples using the RGBF camera were reduced by 21.6%, 27.3% and 24.9%, respectively. Compared to the D5100 camera, the RGF99 values of the test samples, inside samples and outside samples using the RGBF camera increased from 0.9493, 0.9676 and 0.8713 to 0.9765, 0.9945 and 0.9382, respectively. The improvement of RGF99 on outside samples using the RGBF camera is significant.
From Table 1 and Table 2, it can be seen that the LUT method outperformed the wPCA method using the RGBF camera. Note that the wPCA method outperformed the LUT method using the D5100 camera, except for a computation time about two orders of magnitude longer [27,29]. In [29], the LUT method outperformed the wPCA method using the D5100 camera because the value of γ was not optimized for the wPCA method (γ = 1.0 was used). Figure 8a–d show the ERef, GFC, ΔE00 and SCI histograms for the test samples, respectively, where the three shown cases are (i) using the D5100 camera and the wPCA method, (ii) using the RGBF camera and the LUT method, and (iii) using the RGBF camera and the wPCA method.
From Figure 8a–d, the numbers of test samples in the “ERef > 0.05”, “GFC < 0.99”, “ΔE00 > 2.0”, and “SCI > 20” bins using the LUT method are less than those using the wPCA method. From Table 1 and Table 2, using the RGBF camera, the maximum ERef = 0.0567 and 0.0742 for the cases of using the LUT and wPCA method, respectively. These results show that using the LUT method is more reliable for spectral reflectance recovery. However, when using the LUT method or the wPCA method, the assessment metrics were improved using the RGBF camera compared to the D5100 camera.
Since the mean assessment metrics of the outside samples are worse than those of the inside samples, the spectrum reconstruction of the outside samples was investigated in more detail. Figure 9a–f show the spectral reflectance SRefRec recovered using the LUT method from the light reflected from the 2.5G 7/6, 10P 7/8, 2.5R 4/12, 2.5Y 9/4, 10BG 4/8 and 5PB 4/12 color chips, respectively, together with their target reflectance SRef. In addition to the D5100 and RGBF cameras, Figure 9a–f also show the results using other cameras, which are considered in the following subsections. The same color chips were used as examples in [29] to show the spectral reflectance recovery using the D5100 camera and the LUT method. All the cases in Figure 9a–f are outside samples; the case in Figure 9a is an inside sample using the D5100 camera but becomes an outside sample using the RGBF camera. Figure 10a–f are the same as Figure 9a–f, respectively, except that the spectra were recovered using the wPCA method.
Table 3 shows the ERef values for the cases shown in Figure 9a–f and Figure 10a–f. Table 4 is the same as Table 3 except that the values of ΔE00 are shown. Values for ERef and ΔE00 larger than 0.03 and 1.0, respectively, are shown in bold. The cases where the error ERef of using the RGBF camera is larger than that of using the D5100 camera are the cases of Figure 9e,f and the case of Figure 10a. The cases where the difference ΔE00 of using the RGBF camera is larger than that of using the D5100 camera are the cases of Figure 9b,f and the cases of Figure 10a,b,f. Compared to the D5100 camera, using the RGBF camera effectively improved the statistics of the assessment metrics, but it does not guarantee better spectral reflectance recovery for every color chip tested.

4.2. Using the RGBC Camera

Since there are red, green and blue channels, the fourth channel is reasonably designed to be either a cyan channel or a yellow channel. From the spectral sensitivity of the silicon sensor shown in Figure 2b, if the fourth channel is to be a cyan channel, its sensitivity at long wavelengths must be suppressed. Applying a compensation color filter shown in Figure 3a modifies the greenish yellow channel in Figure 2b to a yellowish green channel in Figure 3b rather than a cyan channel. If the applied color filter has a much smaller mid- and long-wavelength transmittance than the filters shown in Figure 3a, the fourth channel becomes a blue or greenish blue channel; the S channel of the RGBS camera, considered in Section 4.3, is an example of such a case. The case with the spectral sensitivity shown in Figure 2b, i.e., the RGBF camera, was considered in Section 4.1. This subsection considers the cases with the spectral sensitivities shown in Figure 3b.
It was found that among the five color filters in Figure 3a, the spectrum reconstruction error was the smallest when the IEC 131K filter was applied to the fourth channel. Using the LUT method, the mean ERef = 0.0091, 0.0128, 0.0093, 0.0099 and 0.0125 for the cases with the IEC 131K, 501, 508, 518 and 578 filters, respectively; RGF99 = 0.9897, 0.9060, 0.9878, 0.9803 and 0.9240 for the same cases, respectively. The mean spectrum reconstruction error increased as the spectral transmittance of the filter in the long wavelength region decreased. Table 5 shows the assessment metric statistics for the test samples of the RGBC camera with the IEC 131K filter using the LUT and wPCA methods. For the case of using the wPCA method, the optimized γ = 1.2. Figure 11a–d show the ERef, GFC, ΔE00 and SCI histograms, respectively, for the test samples of the optimized RGBC camera with the IEC 131K filter using the LUT method. Figure 12a–d are the same as Figure 11a–d, except for using the wPCA method. Figure 9a–f and Figure 10a–f also show the spectral reflectance recovery examples using the optimized RGBC camera, where the values of ERef and ΔE00 are shown in Table 3 and Table 4, respectively.
For ease of comparison, the assessment metric statistics for using the RGBF camera are also shown in Table 5. As can be seen from Table 5, the assessment metric statistics using the optimized RGBC camera were about the same as the RGBF camera but with an additional compensation color filter. The use of the compensation color filter to suppress the spectral sensitivity of the fourth channel in the long wavelength region did not improve the performance of spectrum reconstruction.

4.3. Using the RGBS and RGBL Cameras

The above results show that using the fourth channel without a compensation color filter produced a slightly smaller mean ERef. Further reduction in the mean ERef is possible if the spectral sensitivity of the fourth channel can be appropriately modified using a color filter. This subsection considers the cases of using the short-pass and long-pass filters defined in Section 2.1 as the color filter applied to the fourth channel.
Figure 13a shows the mean ERef and 1 − GFC of the test samples using the LUT and wPCA methods versus the edge wavelength λS of the short-pass optical filter. Figure 13b is the same as Figure 13a except that the mean ΔE00 and SCI are shown. Figure 14a,b are the same as Figure 13a,b, respectively, except that the long-pass optical filter was used. As can be seen from Figure 13a and Figure 14a, the mean 1 − GFC roughly followed the mean ERef; from Figure 13b and Figure 14b, the mean SCI roughly followed the mean ΔE00. Figure 15 shows the optimized value of γ for the cases of using the wPCA method in Figure 13 and Figure 14. For the case of using the RGBS camera, the optimized value of γ for the wPCA method is larger around 530 nm, as shown in Figure 15; we will show below that the minimum CMFMisF is at this wavelength. From Figure 13b and Figure 14a, it can be seen that the RGBS and RGBL cameras are suitably designed for low color difference and low spectral error, respectively.
From Figure 13a,b, using the LUT method, the quadcolor camera optimized for the minimum mean ΔE00 is the RGBS camera using the short-pass optical filter of λS = 528 nm. For this optimized RGBS camera, the mean ERef = 0.0115 and ΔE00 = 0.1666, where the mean ERef is larger than the RGBF camera but smaller than the D5100 camera; the mean ΔE00 is much smaller than the RGBF and D5100 cameras; the maximum ΔE00 is only 1.0169. Using the wPCA method, the edge wavelength of the optimized RGBS is λS = 529 nm and the optimized γ = 1.9. For this optimized RGBS camera, the mean ERef = 0.0141 and ΔE00 = 0.158, where the mean ERef and ΔE00 are larger and smaller, respectively, than the D5100 and RGBF cameras.
From Figure 14a,b, using the LUT method, the quadcolor camera optimized for the minimum mean ERef is the RGBL camera using the long-pass optical filter of λL = 585 nm. The mean ERef = 0.0083 and ΔE00 = 0.3506 using the optimized RGBL camera with λL = 585 nm are smaller than the mean ERef = 0.0089 and ΔE00 = 0.3992 using the RGBF camera, respectively. Using the wPCA method, the edge wavelength of the optimized RGBL is λL = 587 nm and the optimized γ = 1.3. The mean ERef = 0.0087 and ΔE00 = 0.3862 using the optimized RGBL camera are smaller than the mean ERef = 0.0095 and ΔE00 = 0.4257 using the RGBF camera, respectively.
For the cases of using the LUT method, the spectral transmittance of the optimized short-pass and long-pass optical filters and the corresponding spectral sensitivities of the fourth channel are shown in Figure 4b. The histograms of the assessment metrics for the optimized RGBS and RGBL cameras are shown in Figure 11a–d for the case of using the LUT method. Figure 12a–d are the same as Figure 11a–d, respectively, except for the case of using the wPCA method. From Figure 11a–d and Figure 12a–d, it can be seen that the histogram characteristics for the RGBC and RGBL cameras are similar whether the LUT method or the wPCA method is used, while the histogram characteristics for the RGBS camera are quite different from those for the RGBC and RGBL cameras with either method. For the case of using the RGBS camera and the LUT method, there are 28, 52 and 2 test samples in the “ERef > 0.05”, “GFC < 0.99” and “SCI > 20” bins, respectively, while there is only 1 test sample in the “ΔE00 = 1.1” bin and no test sample in the larger ΔE00 bins. For this case, the color difference of all test samples is low despite the large spectral error. For the case of using the RGBS camera and the wPCA method, there are 52, 82, 4 and 14 test samples in the “ERef > 0.05”, “GFC < 0.99”, “ΔE00 > 2.0” and “SCI > 20” bins, respectively. For this case, most of the test samples have a low color difference, but a few test samples have a large color difference. Therefore, if the RGBS camera is used as an imaging colorimeter, it is more reliable to reconstruct spectra using the LUT method.
Table 5 shows the assessment metric statistics for the test samples of the optimized RGBS and RGBL cameras using the LUT and wPCA methods. For the case of using the LUT or wPCA method, the mean ERef and mean ΔE00 using the optimized RGBS camera are the largest and smallest, respectively, compared to the other quadcolor cameras. Figure 9a–f and Figure 10a–f also show the spectral reflectance recovery examples using the optimized RGBS and RGBL cameras, where the values of ERef and ΔE00 are shown in Table 3 and Table 4, respectively. Notably, Figure 10f shows poor spectral reflectance recovered using the RGBS camera and the wPCA method, where the reflectance is 1.486 at 700 nm and zero around 580 nm. The zero is due to the negative value calculated from Equation (5).
The small color difference using the optimized RGBS camera can be explained using the CMFMisF defined in Section 3.4. Figure 16 shows the CMFMisF value versus the edge wavelength of the optical filter. Comparing Figure 16 with Figure 13b and Figure 14b, it can be seen that the mean ΔE00 closely relates to the CMFMisF for the RGBS camera. The optimized edge wavelength of the RGBS camera using the LUT method or the wPCA method is about the wavelength of the minimum value in Figure 16, where the minimum CMFMisF = 0.09537 at λS = 530 nm. CMFMisF = 0.1495, 0.1495, 0.1486, 0.09543 and 0.1493 for the RGB, RGBF, optimized RGBC, optimized RGBS (λS = 528 nm) and optimized RGBL (λL = 585 nm) cameras, respectively. Note that the CMFMisF values are the same for the RGB and RGBF cameras, since the fourth channel of the RGBF camera contributes negligibly to the fit. The CMFs can be better fitted using the spectral sensitivities of the optimized RGBS camera. Figure 17a,b show the least squares fits of the CMF vectors using the spectral sensitivity vectors of the RGBF and optimized RGBS (λS = 528 nm) cameras, respectively. Although the CMF vectors were not well fitted in Figure 17a, if the spectrum of a test sample is well reconstructed using the RGBF camera, the color difference ΔE00 of the test sample will not be large.
As a comparison, the authors of [29] showed that for the same test samples using the CMF camera and optimized CMY ARS filters, the mean ERef, GFC, ΔE00 and SCI are 0.0132, 0.9972, 0.0 and 4.1869, respectively. The CMF camera is the artificial tricolor camera with spectral sensitivity functions that are the same as the CMFs. While the mean ΔE00 is zero, the mean ERef of the CMF camera is about the same as the D5100 camera.

4.4. Cross Comparison

From the results shown above, the LUT method outperformed the wPCA method in spectrum reconstruction. We first discuss the case of using the LUT method. Compared to the D5100 camera, the mean ERef values using the RGBF and optimized RGBL cameras were reduced by 32.0% and 36.8%, respectively, and the mean ΔE00 values using the RGBF and optimized RGBS cameras were reduced by 5.3% and 60.5%, respectively. Compared to the RGBF camera, the advantage of using the optimized RGBL camera to obtain a smaller mean ERef was not significant, while the advantage of using the optimized RGBS camera to obtain a smaller mean ΔE00 was significant: the mean ΔE00 was reduced by 58.3%, but the mean ERef was increased by 28.6%. However, since the mean ΔE00 of the RGBF camera was as small as 0.3992, using the RGBF camera may be better than the optimized RGBS camera for spectral reflectance recovery due to the smaller mean ERef and no need for a color filter on the fourth channel.
In this paragraph, the use of the wPCA method is discussed. Compared to the D5100 camera, the mean ERef values using the RGBF and optimized RGBL cameras were reduced by 21.6% and 28.2%, respectively. Compared to the D5100 camera, the mean ΔE00 values using the RGBF and optimized RGBS cameras were increased by 9.6% and reduced by 59.3%, respectively. Compared to the RGBF camera, the advantage of using the optimized RGBL camera to obtain smaller spectral error was also not significant. Compared to the RGBF camera, using the optimized RGBS camera had a 62.9% smaller mean ΔE00 but a 49.5% larger mean ERef.
The RGBF camera is a compromise design for both the LUT and wPCA methods. If a mean ERef smaller than that of the RGBF camera is required, a suitable long-pass optical filter can be applied to the fourth channel. If an ultra-small color difference is required, a suitable short-pass optical filter can be applied to the fourth channel, but care must be taken not to add too much spectral error.
For the 3D case, the computation time required for the LUT method is about two orders of magnitude shorter than that required for reconstruction methods using basis spectra that emphasize the relationship between the test and training samples [27,29]. However, for the 4D case, the computation time required for the LUT method is longer than that for the wPCA method, because locating the simplex for interpolation takes much longer than in the 3D case. For the case of using the quadcolor camera, the ratio of the computation time required for the LUT method to that for the wPCA method was 1:0.49, where samples were reconstructed from their signal vector C to the spectral reflectance vector SRefRec using MATLAB on the Windows 10 platform. Improving the algorithm to locate the simplex faster is a computational geometry problem beyond the scope of this paper.

5. Conclusions

Using conventional tricolor cameras to recover spectral reflectance has the advantages of low cost, high spatial resolution, fast detection, and no need to measure or estimate the camera spectral sensitivity functions. The reduction in the spectrum reconstruction error using the quadcolor cameras was shown, where the color filter array is compatible with that of the tricolor cameras. The wPCA and LUT methods were used to reconstruct spectra from the quadcolor camera signals, with the weighting factor of the wPCA method optimized for each camera. The spectral error metrics, ERef and 1 − GFC, and the color appearance error metrics, ΔE00 and SCI, were used to assess the reconstructed spectra. The assessment results for the two spectrum reconstruction methods were compared.
It was assumed that the spectral sensitivities of the red, green and blue channels of the quadcolor cameras are the same as those of the Nikon D5100 camera. The spectral sensitivity of the fourth channel depends on the spectral transmittance of the IR cut filter and the color filter in addition to the spectral sensitivity of the silicon sensor. The quadcolor RGBF camera was considered, where no color filter is applied to its fourth channel. The quadcolor RGBC, RGBS and RGBL cameras were also considered, with sensitivity compensation filters, short-pass filters and long-pass filters applied to their fourth channels, respectively. Five commercially available sensitivity compensation optical filters were used.
The Munsell color chips were taken as reflective surface examples, where 202 and 1066 color chips were used to prepare the reference/training and test samples, respectively, under the illuminant D65. It was found that using the LUT method, the number of outside samples that cannot be interpolated increases with the signal dimension. The outside samples were extrapolated using the reference samples and the ARSs created with the CMYRGB color filters. The results are summarized below.
1. Comparison of the LUT and wPCA methods:
The advantage of the LUT method over the wPCA method is that the mean spectral error is smaller and the spectral sensitivity functions of the camera are not required. Errors in the estimation of the camera spectral sensitivities can lead to additional errors in the spectra reconstructed using the wPCA method. This paper does not consider such errors; if included, the mean spectral error and color difference would increase. The disadvantage of the LUT method compared to the wPCA method is that the computation time is approximately doubled and the ARSs need to be measured for extrapolation. The computation time using the LUT method is longer because it takes time to locate the simplex for interpolation in the 4D signal space.
2. RGBF camera:
Compared to using the D5100 camera, using the RGBF camera effectively reduces the mean spectral error of the test samples with either reconstruction method, even though no color filter is applied to the fourth channel. For the case of using the LUT method, the mean spectral error ERef = 0.0131 and 0.0089 using the D5100 and RGBF cameras, respectively. For the case of using the wPCA method, the mean spectral error ERef = 0.0121 and 0.0095 using the D5100 and RGBF cameras, respectively.
3. RGBL camera:
A long-pass optical filter can be applied to the fourth channel to further reduce the mean spectral error. Using the optical filter optimized for the minimum mean ERef, the mean ERef = 0.0083 and 0.0087 for the cases of using the LUT and wPCA methods, respectively. The reduction in the mean spectral error using the optimized RGBL camera is not significant compared to using the RGBF camera.
4. RGBS camera:
A short-pass optical filter can be applied to the fourth channel to reduce the mean color difference of the test samples, but the mean spectral error is larger. Using the optical filter optimized for the minimum mean color difference ΔE00, the mean ΔE00 = 0.1666 and 0.158 for the cases of using the LUT and wPCA methods, respectively; the maximum ΔE00 = 1.0169 and 4.1594, respectively. The mean and maximum ΔE00 using the LUT method are small, so the optimized RGBS camera using the LUT method may be suitable as an imaging colorimeter.
5. Zero-color-difference condition:
The zero-color-difference conditions using the LUT method and the wPCA method were given, respectively. If the camera sensitivity functions fit the CIE CMFs ideally, the color difference between the reflection light spectrum and the spectrum reconstructed using the LUT method is zero. For the case of using the wPCA method, the color difference is also zero if an additional condition is satisfied, which requires that the spectral amplitude calculated from Equation (5) be non-negative.
6. Color-matching function mismatch factor (CMFMisF):
From the zero-color-difference condition, CMFMisF is defined to represent the RMS error of fitting the CMFs using the spectral sensitivity functions of a camera. An empirical design rule to keep the color difference small is to reduce the value of CMFMisF.
Since the red, green and blue channels of the considered quadcolor cameras were assumed to be the RGB channels of the D5100 camera, it is possible to design color filters for all four channels to further improve the spectral reflectance recovery. Future developments are listed below.
  • The relation of the color difference and camera spectral sensitivity functions was shown in this paper. More research is needed to investigate the relation of the spectral error and camera spectral sensitivity functions.
  • Methods are needed to optimize the camera spectral sensitivity functions to achieve low spectral error and low color difference.
  • Time-saving algorithms need to be developed for the LUT method in the 4D case.

Author Contributions

Conceptualization, Y.-C.W. and S.W.; Data collection, Y.-C.W.; Methodology, Y.-C.W. and S.W.; Software, Y.-C.W.; Data analysis, Y.-C.W.; Supervision, L.H. and S.C.; Writing—original draft, Y.-C.W. and S.W.; Writing—review and editing, L.H. and S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

(1) Spectral sensitivities of the Nikon D5100 camera are available: http://spectralestimation.wordpress.com/data/. (2) Spectral reflectance of matt Munsell color chips are available: https://sites.uef.fi/spectral/munsell-colors-matt-spectrofotometer-measured/. (3) Spectral transmittance of the UV/IR cut filter is available: https://agenaastro.com/downloads/manuals/baader-uvir-cut-filter-stat-sheet.pdf. (4) Spectral transmittance of the sensitivity compensation color filters are available: https://www.isuzuglass.com/products/glass-iec.html. All are accessed on 12 July 2022.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ARS: Auxiliary Reference Sample
CFA: Color Filter Array
CMF: Color-Matching Function
CMFMisF: Color-Matching Function Mismatch Factor
CMY: Cyan, Magenta and Yellow
CMYRGB: Cyan, Magenta, Yellow, Red, Green and Blue
GFC: Goodness-of-Fit Coefficient
LUT: Look-Up Table
MAX: Maximum
MIN: Minimum
NMT: Non-Negative Matrix Transformation
PC50: 50th Percentile
PC98: 98th Percentile
PCA: Principal Component Analysis
RGB: Red, Green and Blue
RGF99: Ratio of Good Fit (the ratio of samples with GFC > 0.99)
RMS: Root Mean Square
RPRM: Root Polynomial Regression Model
SCI: Spectral Comparison Index
wPCA: Weighted Principal Component Analysis

Appendix A. Zero-Color-Difference Condition Using the LUT Method

The CMF matrix is defined as $\mathbf{F} = [\mathbf{u}_x\ \mathbf{u}_y\ \mathbf{u}_z]$, where $\mathbf{u}_x$, $\mathbf{u}_y$ and $\mathbf{u}_z$ are the CMF vectors defined in Section 3.4. If the camera spectral sensitivity functions fit the CMFs ideally, the CMF matrix F can be written as

$$\mathbf{F} = \mathbf{D}_{\mathrm{Cam}} \mathbf{B}^{T}, \tag{A1}$$

where D_Cam is the camera spectral sensitivity matrix defined in Section 3.1 and B is a 3 × 4 coefficient matrix. If Equation (A1) is valid, the color difference between the reflection light spectrum S_Reflection and the spectrum S_Rec reconstructed using the LUT method is zero.

Proof:

From Equation (A1), the tristimulus vectors of the reflection light spectrum and the i-th reference spectrum S_i can be written as

$$\mathbf{T}_{\mathrm{Reflection}} = \mathbf{F}^{T} \mathbf{S}_{\mathrm{Reflection}} \tag{A2}$$

and

$$\mathbf{T}_{i} = \mathbf{F}^{T} \mathbf{S}_{i}, \tag{A3}$$

respectively. In Equation (6), the signal vector and the i-th reference vector are

$$\mathbf{C} = \mathbf{D}_{\mathrm{Cam}}^{T} \mathbf{S}_{\mathrm{Reflection}} \tag{A4}$$

and

$$\mathbf{C}_{i} = \mathbf{D}_{\mathrm{Cam}}^{T} \mathbf{S}_{i}, \tag{A5}$$

respectively.
Substituting Equations (A4) and (A5) into Equation (6) and multiplying both sides of Equation (6) by the coefficient matrix B, we have

$$\mathbf{B} \mathbf{D}_{\mathrm{Cam}}^{T} \mathbf{S}_{\mathrm{Reflection}} = \sum_{i=1}^{5} \alpha_{i}\, \mathbf{B} \mathbf{D}_{\mathrm{Cam}}^{T} \mathbf{S}_{i}. \tag{A6}$$

From Equations (A1) and (A6),

$$\mathbf{F}^{T} \mathbf{S}_{\mathrm{Reflection}} = \sum_{i=1}^{5} \alpha_{i}\, \mathbf{F}^{T} \mathbf{S}_{i}. \tag{A7}$$

Substituting Equations (A2) and (A3) into Equation (A7), we have

$$\mathbf{T}_{\mathrm{Reflection}} = \sum_{i=1}^{5} \alpha_{i}\, \mathbf{T}_{i}. \tag{A8}$$

Multiplying both sides of Equation (8) by the transpose of the CMF matrix F, we have

$$\mathbf{F}^{T} \mathbf{S}_{\mathrm{Rec}} = \sum_{i=1}^{5} \alpha_{i}\, \mathbf{F}^{T} \mathbf{S}_{i}. \tag{A9}$$

Substituting Equations (A2) and (A3) into Equation (A9), we have

$$\mathbf{T}_{\mathrm{Rec}} = \sum_{i=1}^{5} \alpha_{i}\, \mathbf{T}_{i}. \tag{A10}$$

From Equations (A8) and (A10), T_Rec = T_Reflection. Therefore, the color difference between the reflection light spectrum and the reconstructed spectrum is zero. Since no assumptions are made about the reflection light spectrum, the zero-color-difference result is general. □
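The proof can also be checked numerically. The sketch below (synthetic data only) constructs a CMF matrix satisfying Equation (A1), solves for weights that reproduce the camera signals of an arbitrary spectrum, and verifies that the tristimulus vectors of the original and reconstructed spectra coincide; the matrix sizes and the added weight-sum constraint are illustrative assumptions.

```python
import numpy as np

# Synthetic check of Appendix A: build F = D_cam @ B.T exactly, then verify
# that any weights alpha reproducing the camera signals also reproduce the
# tristimulus values. All matrices are random placeholders.
rng = np.random.default_rng(2)
n = 61
D_cam = rng.random((n, 4))             # camera sensitivity matrix
B = rng.random((3, 4))                 # 3 x 4 coefficient matrix
F = D_cam @ B.T                        # CMFs that fit the camera exactly (A1)

S_refl = rng.random(n)                 # arbitrary reflection light spectrum
S_i = rng.random((n, 5))               # five reference spectra (simplex vertices)

# Four signal equations plus a weight-sum constraint (as in barycentric
# interpolation) give a 5 x 5 system for the weights alpha.
A = np.vstack([D_cam.T @ S_i, np.ones((1, 5))])
y = np.append(D_cam.T @ S_refl, 1.0)
alpha = np.linalg.solve(A, y)
S_rec = S_i @ alpha                    # reconstructed spectrum

print(np.allclose(F.T @ S_refl, F.T @ S_rec))   # True: identical tristimulus values
```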

Appendix B. Zero-Color-Difference Condition Using the wPCA Method

If Equation (A1) is valid and the spectral amplitude calculated from Equation (5) is non-negative, then the color difference between the reflection light spectrum S_Reflection and the spectrum S_Rec reconstructed using the wPCA method is zero. Following a procedure similar to that of Appendix A, from Equation (5) we have

$$\mathbf{T}_{\mathrm{Reflection}} = \mathbf{T}_{\mathrm{Rec}} = \mathbf{T}_{0} + \sum_{k=1}^{4} d_{k}\, \mathbf{T}_{k}, \tag{A11}$$

where $\mathbf{T}_0 = \mathbf{F}^{T}\mathbf{P}_0$ and $\mathbf{T}_k = \mathbf{F}^{T}\mathbf{P}_k$. Therefore, the color difference between S_Reflection and S_Rec is zero.
However, negative spectral amplitudes calculated from Equation (5) are not uncommon because the higher-order principal components are truncated in Equation (5). In practice, therefore, the color difference between the reflection light spectrum and the spectrum reconstructed using the wPCA method is usually non-zero, even when Equation (A1) is valid.
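This caveat can be reproduced with synthetic data. In the sketch below, the tristimulus match holds exactly for the unconstrained reconstruction but breaks once negative spectral values are clipped to zero; clipping is only one assumed way the non-negativity constraint may enter, since the enforcement mechanism is not specified here.

```python
import numpy as np

# Synthetic illustration of the Appendix B caveat: the tristimulus match is
# exact for the unconstrained PCA reconstruction but is lost when negative
# spectral values are clipped to zero. All data are random placeholders.
rng = np.random.default_rng(3)
n = 61
D_cam = rng.random((n, 4))
B = rng.random((3, 4))
F = D_cam @ B.T                        # CMFs assumed to fit the camera exactly

P0 = rng.random(n)                     # mean spectrum of the training set
Pk = rng.standard_normal((n, 4))       # first four principal components
S_refl = rng.random(n)

# Solve the four camera-signal equations for the coefficients d_k.
d = np.linalg.solve(D_cam.T @ Pk, D_cam.T @ (S_refl - P0))
S_rec = P0 + Pk @ d

print(np.allclose(F.T @ S_rec, F.T @ S_refl))      # True before clipping
S_clip = np.clip(S_rec, 0.0, None)                 # enforce non-negativity
print(np.allclose(F.T @ S_clip, F.T @ S_refl))     # typically False once clipping acts
```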

References

  1. Picollo, M.; Cucci, C.; Casini, A.; Stefani, L. Hyper-spectral imaging technique in the cultural heritage field: New possible scenarios. Sensors 2020, 20, 2843. [Google Scholar] [CrossRef]
  2. Grillini, F.; Thomas, J.B.; George, S. Mixing models in close-range spectral imaging for pigment mapping in cultural heritage. In Proceedings of the International Colour Association (AIC) Conference, Online, 26–27 November 2020; pp. 372–376. [Google Scholar]
  3. Xu, P.; Xu, H.; Diao, C.; Ye, Z. Self-training-based spectral image reconstruction for art paintings with multispectral imaging. Appl. Opt. 2017, 56, 8461–8470. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, Z.; Wang, J.; Wang, T.; Song, Z.; Li, Y.; Huang, Y.; Wang, L.; Jin, J. Automated in-field leaf-level hyperspectral imaging of corn plants using a Cartesian robotic platform. Comput. Electron. Agric. 2021, 183, 105996. [Google Scholar] [CrossRef]
  5. Hu, N.; Li, W.; Du, C.; Zhang, Z.; Gao, Y.; Sun, Z.; Yang, L.; Yu, K.; Zhang, Y.; Wang, Z. Predicting micronutrients of wheat using hyperspectral imaging. Food Chem. 2021, 343, 128473. [Google Scholar] [CrossRef]
  6. Gholizadeh, H.; Gamon, J.A.; Helzer, C.J.; Cavender-Bares, J. Multi-temporal assessment of grassland α- and β-diversity using hyperspectral imaging. Ecol. Appl. 2020, 30, e02145. [Google Scholar] [CrossRef] [PubMed]
  7. Gomes, V.; Mendes-Ferreira, A.; Melo-Pinto, P. Application of Hyperspectral Imaging and Deep Learning for Robust Prediction of Sugar and pH Levels in Wine Grape Berries. Sensors 2021, 21, 3459. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Xi, Y.; Yang, Q.; Cong, W.; Zhou, J.; Wang, G. Spectral CT reconstruction with image sparsity and spectral mean. IEEE Trans. Comput. Imaging 2016, 2, 510–523. [Google Scholar] [CrossRef] [Green Version]
  9. Lv, M.; Chen, T.; Yang, Y.; Tu, T.; Zhang, N.; Li, W.; Li, W. Membranous nephropathy classification using microscopic hyperspectral imaging and tensor patch-based discriminative linear regression. Biomed. Opt. Express 2021, 12, 2968–2978. [Google Scholar] [CrossRef]
  10. Weksler, S.; Rozenstein, O.; Haish, N.; Moshelion, M.; Wallach, R.; Ben-Dor, E. Detection of Potassium Deficiency and Momentary Transpiration Rate Estimation at Early Growth Stages Using Proximal Hyperspectral Imaging and Extreme Gradient Boosting. Sensors 2021, 21, 958. [Google Scholar] [CrossRef]
  11. Courtenay, L.A.; González-Aguilera, D.; Lagüela, S.; del Pozo, S.; Ruiz-Mendez, C.; Barbero-García, I.; Román-Curto, C.; Cañueto, J.; Santos-Durán, C.; Cardeñoso-Álvarez, M.E.; et al. Hyperspectral imaging and robust statistics in non-melanoma skin cancer analysis. Biomed. Opt. Express 2021, 12, 5107–5127. [Google Scholar] [CrossRef]
  12. Ortega, S.; Halicek, M.; Fabelo, H.; Callico, G.M.; Fei, B. Hyperspectral and multispectral imaging in digital and computational pathology: A systematic review. Biomed. Opt. Express 2020, 11, 3195–3233. [Google Scholar] [CrossRef] [PubMed]
  13. Wang, L.; Xiong, Z.; Gao, D.; Shi, G.; Zeng, W.; Wu, F. High-speed hyperspectral video acquisition with a dual-camera architecture. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4942–4950. [Google Scholar]
  14. Chen, M.; Tang, Y.; Zou, X.; Huang, K.; Li, L.; He, Y. High-accuracy multi-camera reconstruction enhanced by adaptive point cloud correction algorithm. Opt. Lasers Eng. 2019, 122, 170–183. [Google Scholar] [CrossRef]
  15. Tang, Y.; Zhu, M.; Chen, Z.; Wu, C.; Chen, B.; Li, C.; Li, L. Seismic performance evaluation of recycled aggregate concrete-filled steel tubular columns with field strain detected via a novel mark-free vision method. Structures 2022, 37, 426–441. [Google Scholar] [CrossRef]
  16. Xie, Y.; Liu, C.; Liu, S.; Song, W.; Fan, X. Snapshot imaging spectrometer based on pixel-level filter array (PFA). Sensors 2021, 21, 2289. [Google Scholar] [CrossRef] [PubMed]
  17. Schaepman, M.E. Imaging spectrometers. In The SAGE Handbook of Remote Sensing; Warner, T.A., Nellis, M.D., Foody, G.M., Eds.; Sage Publications: Los Angeles, CA, USA, 2009; pp. 166–178. [Google Scholar]
  18. Cai, F.; Lu, W.; Shi, W.; He, S. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera. Sci. Rep. 2017, 7, 15602. [Google Scholar] [CrossRef] [Green Version]
  19. Zhao, Y.; Berns, R.S. Image-based spectral reflectance reconstruction using the matrix R method. Col. Res. Appl. 2007, 32, 343–351. [Google Scholar] [CrossRef]
  20. Attarchi, N.; Amirshahi, S.H. Reconstruction of reflectance data by modification of Berns’ Gaussian method. Col. Res. Appl. 2009, 34, 26–32. [Google Scholar] [CrossRef]
  21. Tzeng, D.Y.; Berns, R.S. A review of principal component analysis and its applications to color technology. Col. Res. Appl. 2005, 30, 84–98. [Google Scholar] [CrossRef]
  22. Agahian, F.; Amirshahi, S.A.; Amirshahi, S.H. Reconstruction of reflectance spectra using weighted principal component analysis. Col. Res. Appl. 2008, 33, 360–371. [Google Scholar] [CrossRef]
  23. Hamza, A.B.; Brady, D.J. Reconstruction of reflectance spectra using robust nonnegative matrix factorization. IEEE Trans. Signal Process. 2006, 54, 3637–3642. [Google Scholar] [CrossRef] [Green Version]
  24. Amirshahi, S.H.; Amirshahi, S.A. Adaptive non-negative bases for reconstruction of spectral data from colorimetric information. Opt. Rev. 2010, 17, 562–569. [Google Scholar] [CrossRef]
  25. Abed, F.M.; Amirshahi, S.H.; Abed, M.R.M. Reconstruction of reflectance data using an interpolation technique. J. Opt. Soc. Am. A 2009, 26, 613–624. [Google Scholar] [CrossRef] [PubMed]
  26. Kim, B.G.; Han, J.; Park, S. Spectral reflectivity recovery from the tristimulus values using a hybrid method. J. Opt. Soc. Am. A 2012, 29, 2612–2621. [Google Scholar] [CrossRef] [PubMed]
  27. Kim, B.G.; Werner, J.S.; Siminovitch, M.; Papamichael, K.; Han, J.; Park, S. Spectral reflectivity recovery from tristimulus values using 3D extrapolation with 3D interpolation. J. Opt. Soc. Korea 2014, 18, 507–516. [Google Scholar] [CrossRef] [Green Version]
  28. Chou, T.-R.; Hsieh, C.-H.; Chen, E. Recovering spectral reflectance based on natural neighbor interpolation with model-based metameric spectra of extreme points. Col. Res. Appl. 2019, 44, 508–525. [Google Scholar] [CrossRef]
  29. Wen, Y.-C.; Wen, S.; Hsu, L.; Chi, S. Auxiliary Reference Samples for Extrapolating Spectral Reflectance from Camera RGB Signals. Sensors 2022, 22, 4923. [Google Scholar] [CrossRef]
  30. Darrodi, M.M.; Finlayson, G.; Goodman, T.; Mackiewicz, M. Reference data set for camera spectral sensitivity estimation. J. Opt. Soc. Am. A 2015, 32, 381–391. [Google Scholar] [CrossRef] [Green Version]
  31. Finlayson, G.; Darrodi, M.M.; Mackiewicz, M. Rank-based camera spectral sensitivity estimation. J. Opt. Soc. Am. A 2016, 33, 589–599. [Google Scholar] [CrossRef] [Green Version]
  32. Ji, Y.; Kwak, Y.; Park, S.M.; Kim, Y.L. Compressive recovery of smartphone RGB spectral sensitivity functions. Opt. Express 2021, 29, 11947–11961. [Google Scholar] [CrossRef]
  33. Maloney, L.T. Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Am. A 1986, 3, 1673–1683. [Google Scholar] [CrossRef]
  34. Valero, E.M.; Nieves, J.L.; Nascimento, S.M.C.; Amano, K.; Foster, D.H. Recovering spectral data from natural scenes with an RGB digital camera and colored Filters. Col. Res. Appl. 2007, 32, 352–360. [Google Scholar] [CrossRef] [Green Version]
  35. Babaei, V.; Amirshahi, S.H.; Agahian, F. Using weighted pseudo-inverse method for reconstruction of reflectance spectra and analyzing the dataset in terms of normality. Col. Res. Appl. 2011, 36, 295–305. [Google Scholar] [CrossRef]
  36. Liang, J.; Wan, X. Optimized method for spectral reflectance reconstruction from camera responses. Opt. Express 2017, 25, 28273–28287. [Google Scholar] [CrossRef]
  37. Xiao, G.; Wan, X.; Wang, L.; Liu, S. Reflectance spectra reconstruction from trichromatic camera based on kernel partial least square method. Opt. Express 2019, 27, 34921–34936. [Google Scholar] [CrossRef] [PubMed]
  38. Tominaga, S.; Nishi, S.; Ohtera, R.; Sakai, H. Improved method for spectral reflectance estimation and application to mobile phone cameras. J. Opt. Soc. Am. A 2022, 39, 494–508. [Google Scholar] [CrossRef] [PubMed]
  39. Mangold, K.; Shaw, J.A.; Vollmer, M. The physics of near-infrared photography. Eur. J. Phys. 2013, 34, S51–S71. [Google Scholar] [CrossRef] [Green Version]
  40. Kohonen, O.; Parkkinen, J.; Jaaskelainen, T. Databases for spectral color science. Col. Res. Appl. 2006, 31, 381–390. [Google Scholar] [CrossRef]
  41. Viggiano, J.A.S. A perception-referenced method for comparison of radiance ratio spectra and its application as an index of metamerism. Proc. SPIE 2002, 4421, 701–704. [Google Scholar]
  42. Mansouri, A.; Sliwa, T.; Hardeberg, J.Y.; Voisin, Y. An adaptive-PCA algorithm for reflectance estimation from color images. In Proceedings of the 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008. [Google Scholar] [CrossRef]
  43. Leon, S. Linear Algebra with Applications, 9th ed.; Pearson: Harlow, UK, 2015. [Google Scholar]
  44. Finlayson, G.D.; Mackiewicz, M.; Hurlbert, A. Color correction using root-polynomial regression. IEEE Trans. Image Process. 2015, 24, 1460–1470. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Delaunayn. Available online: https://www.mathworks.com/help/matlab/ref/delaunayn.html (accessed on 12 July 2022).
  46. Tsearchn. Available online: https://www.mathworks.com/help/matlab/ref/tsearchn.html (accessed on 12 July 2022).
Figure 1. Schematic diagrams showing the (a) RGB color filter array and (b) RGBE color filter array.
Figure 2. (a) Spectral sensitivities of the Nikon D5100 camera, where the spectra SCamR, SCamG and SCamB are the sensitivities of the red, green and blue signal channels, respectively. (b) Spectral sensitivity of the F signal channel that is the product of the spectral sensitivity of a typical silicon sensor and the spectral transmittance of the Baader UV/IR cut filter.
Figure 3. (a) Spectral transmittance of five compensation filters for camera sensors. (b) Spectral sensitivities of the yellowish green channels using the filters in (a).
Figure 4. (a) Schematic diagram showing the specification definitions of the short-pass and long-pass optical filters. (b) Spectral sensitivity examples of the fourth channel using the short-pass and long-pass optical filters, where λS = 528 nm and λL = 585 nm. Refer to Section 2.1 for details.
Figure 5. Color points of light reflected from the Munsell color chips in the (a) RGB, (b) GBF, (c) BFR and (d) FRG signal spaces, using the RGBF camera. The 202 reference samples are shown as red dots. The 726 inside samples and 340 outside samples are shown as green and blue dots, respectively.
Figure 6. Spectral transmittance of the (a) cyan, yellow, magenta, (b) red, green and blue filters for creating ARSs. Refer to Section 3.3 for details.
Figure 7. Color points of ARSs created with CYM and RGB filters shown as 47 red dots and 79 purple hollow dots, respectively, in the (a) RGB, (b) GBF, (c) BFR and (d) FRG signal spaces using the RGBF camera. The 340 outside samples are shown as blue dots for comparison.
Figure 8. (a) ERef, (b) GFC, (c) ΔE00 and (d) SCI histograms for the 1066 test samples. The corresponding camera and spectrum reconstruction method are shown in figures. The insets in (ad) show enlarged parts.
Figure 9. The target spectra SRef and recovered reflectance spectra SRefRec of the test samples with the (a) 2.5G 7/6, (b) 10P 7/8, (c) 2.5R 4/12, (d) 2.5Y 9/4, (e) 10BG 4/8 and (f) 5PB 4/12 color chips. The cameras are indicated. Spectra are reconstructed using the LUT method.
Figure 10. (a–f) The same as Figure 9a–f, respectively, except that spectra were reconstructed using the wPCA method.
Figure 11. (a) ERef, (b) GFC, (c) ΔE00 and (d) SCI histograms for the 1066 test samples. The cameras are the optimized RGBC, RGBS and RGBL cameras. Spectra are reconstructed using the LUT method. The insets in (ad) show enlarged parts.
Figure 12. The same as Figure 11 except that spectra were reconstructed using the wPCA method. (a) ERef, (b) GFC, (c) ΔE00 and (d) SCI histograms for the 1066 test samples.
Figure 13. Mean (a) ERef, 1 – GFC and (b) ΔE00, SCI versus the edge wavelength of the short-pass optical filter using the wPCA and LUT methods.
Figure 14. The same as Figure 13 except a long-pass optical filter is used. Mean (a) ERef, 1 – GFC and (b) ΔE00.
Figure 15. The optimized values of γ for the case of using the wPCA method in Figure 13 and Figure 14.
Figure 16. CMF mismatch factor CMFMisF versus the edge wavelength of the optical filter.
Figure 17. Least squares fit of CMF vectors using the spectral sensitivity vectors of (a) the RGBF camera and (b) the optimized RGBS camera with the 528 nm short-pass optical filter. The CMFs x ¯ , y ¯ and z ¯ are shown in red, green and blue, respectively. Solid and dash lines show the CMFs and least squares fits, respectively.
Table 1. Assessment metric statistics for the spectrum reconstruction of test samples using the D5100 and RGBF cameras. The spectrum reconstruction method is the LUT method.

Camera            |  Nikon D5100               |  RGBF
Sample            |  All      Inside   Outside |  All      Inside   Outside
No.               |  1066     864      202     |  1066     726      340
ERef  mean μ      |  0.0131   0.0120   0.0180  |  0.0089   0.0077   0.0115
      std σ       |  0.0124   0.0107   0.0169  |  0.0079   0.0064   0.0100
      PC50        |  0.0089   0.0087   0.0125  |  0.0064   0.0055   0.0082
      PC98        |  0.0540   0.0485   0.0698  |  0.0345   0.0262   0.0416
      MAX         |  0.1038   0.0859   0.1038  |  0.0567   0.0451   0.0567
GFC   mean μ      |  0.9971   0.9974   0.9961  |  0.9989   0.9993   0.9980
      std σ       |  0.0074   0.0071   0.0085  |  0.0023   0.0012   0.0035
      PC50        |  0.9993   0.9994   0.9985  |  0.9996   0.9997   0.9992
      MIN         |  0.9000   0.9000   0.9325  |  0.9669   0.9832   0.9669
      RGF99       |  0.9343   0.9375   0.9208  |  0.9887   0.9972   0.9706
ΔE00  mean μ      |  0.4215   0.4239   0.4111  |  0.3992   0.3865   0.4263
      std σ       |  0.4065   0.4182   0.3529  |  0.3590   0.3535   0.3696
      PC50        |  0.2796   0.2795   0.2830  |  0.2792   0.2734   0.2902
      PC98        |  1.6478   1.6900   1.4386  |  1.4447   1.4441   1.4453
      MAX         |  2.5918   2.5918   1.8207  |  1.8843   1.8843   1.8099
SCI   mean μ      |  4.1253   3.7503   5.7291  |  3.5296   2.9027   4.8682
      std σ       |  3.2381   2.9266   3.9487  |  2.7643   2.0677   3.4962
      PC50        |  3.1484   2.9310   4.7827  |  2.7585   2.3611   3.9021
      PC98        |  13.7172  12.1239  16.4027 |  11.9210  9.1115   16.2671
      MAX         |  25.3596  25.2299  25.3596 |  20.0506  14.5365  20.0506
Table 2. The same as Table 1, except that the spectrum reconstruction method is the wPCA method. The optimized γ = 1.7 and 1.2 using the D5100 and RGBF cameras, respectively.

Camera            |  Nikon D5100               |  RGBF
Sample            |  All      Inside   Outside |  All      Inside   Outside
No.               |  1066     864      202     |  1066     726      340
ERef  mean μ      |  0.0121   0.0110   0.0169  |  0.0095   0.0080   0.0127
      std σ       |  0.0121   0.0098   0.0181  |  0.0087   0.0068   0.0110
      PC50        |  0.0086   0.0082   0.0115  |  0.0066   0.0059   0.0091
      PC98        |  0.0531   0.0437   0.0783  |  0.0338   0.0288   0.0490
      MAX         |  0.1152   0.0817   0.1152  |  0.0742   0.0742   0.0738
GFC   mean μ      |  0.9978   0.9984   0.9950  |  0.9985   0.9992   0.9972
      std σ       |  0.0056   0.0032   0.0108  |  0.0046   0.0029   0.0068
      PC50        |  0.9994   0.9994   0.9988  |  0.9995   0.9996   0.9990
      MIN         |  0.9017   0.9618   0.9017  |  0.9315   0.9364   0.9315
      RGF99       |  0.9493   0.9676   0.8713  |  0.9765   0.9945   0.9382
ΔE00  mean μ      |  0.3884   0.3443   0.5769  |  0.4257   0.3475   0.5926
      std σ       |  0.4133   0.3235   0.6416  |  0.4558   0.3213   0.6252
      PC50        |  0.2573   0.2422   0.3259  |  0.2942   0.2689   0.3985
      PC98        |  1.8615   1.3261   2.5734  |  1.7690   1.1557   2.6548
      MAX         |  2.8330   2.5378   2.8330  |  5.0328   3.3353   5.0328
SCI   mean μ      |  3.8000   3.1831   6.4384  |  3.8938   2.9955   5.8118
      std σ       |  3.9295   2.7777   6.3288  |  4.2229   2.9221   5.6872
      PC50        |  2.6841   2.5232   4.3897  |  2.6196   2.2980   4.0662
      PC98        |  16.7371  11.2917  29.0811 |  15.9216  9.1608   19.4698
      MAX         |  34.6758  28.8125  34.6758 |  46.2716  42.5477  46.2716
Table 3. The values of ERef for the cases shown in Figure 9a–f and Figure 10a–f. Values larger than 0.03 are marked with an asterisk.

Method |  LUT (Figure 9a–f)                          |  wPCA (Figure 10a–f)
Camera |  D5100   RGBF     RGBC    RGBS    RGBL      |  D5100    RGBF     RGBC     RGBS     RGBL
(a)    |  0.0040  0.0032   0.0033  0.0048  0.0033    |  0.0093   0.0124   0.0123   0.0059   0.0106
(b)    |  0.0220  0.0108   0.0108  0.0230  0.0135    |  0.0157   0.0076   0.0076   0.0124   0.0069
(c)    |  0.0129  0.0071   0.0190  0.0139  0.0063    |  0.0462*  0.0076   0.0112   0.0178   0.0085
(d)    |  0.0244  0.0229   0.0229  0.0252  0.0227    |  0.0221   0.0260   0.0242   0.0063   0.0227
(e)    |  0.0081  0.0082   0.0109  0.0152  0.0084    |  0.0181   0.0203   0.0190   0.0068   0.0177
(f)    |  0.0242  0.0308*  0.0292  0.0275  0.0252    |  0.0392*  0.0343*  0.0343*  0.4207*  0.0341*
Table 4. The values of ΔE00 for the cases shown in Figure 9a–f and Figure 10a–f. Values larger than 1.0 are marked with an asterisk.

Method |  LUT (Figure 9a–f)                          |  wPCA (Figure 10a–f)
Camera |  D5100   RGBF    RGBC    RGBS    RGBL       |  D5100    RGBF     RGBC     RGBS     RGBL
(a)    |  0.1065  0.1043  0.1056  0.1283  0.1028     |  0.1986   0.5428   0.5415   0.1529   0.4654
(b)    |  0.1092  0.1395  0.1398  0.1755  0.1836     |  0.0290   0.1045   0.1053   0.0422   0.0879
(c)    |  0.0934  0.0852  0.2068  0.0663  0.0507     |  0.4602   0.1721   0.3605   0.2063   0.2128
(d)    |  0.6975  0.6439  0.6442  0.1811  0.6077     |  0.6969   0.8374   0.7762   0.0611   0.7220
(e)    |  0.7396  0.7238  0.9626  0.4721  0.6973     |  1.8552*  2.0505*  1.8888*  0.3183   1.7703*
(f)    |  0.4089  0.7133  0.6205  0.2830  0.5285     |  1.0557*  1.1166*  1.0400*  4.1594*  1.1133*
Table 5. Assessment metric statistics for the spectrum reconstruction of the 1066 test samples. The quadcolor cameras and spectrum reconstruction methods are indicated. For the case of using the wPCA method, the optimized γ = 1.2, 1.2, 1.9 and 1.3 using the RGBF, RGBC, RGBS and RGBL cameras, respectively. The best value in each row is marked with an asterisk.

Method            |  LUT                                    |  wPCA
Camera            |  RGBF     RGBC      RGBS     RGBL       |  RGBF     RGBC     RGBS     RGBL
ERef  mean μ      |  0.0089   0.0091    0.0115   0.0083*    |  0.0095   0.0098   0.0141   0.0087
      std σ       |  0.0079   0.0082    0.0135   0.0072*    |  0.0087   0.0103   0.0328   0.0081
      PC50        |  0.0064   0.0064    0.0074   0.0061     |  0.0066   0.0070   0.0058*  0.0061
      PC98        |  0.0345   0.0348    0.0579   0.0310*    |  0.0338   0.0353   0.0944   0.0317
      MAX         |  0.0567   0.0623    0.1244   0.0490*    |  0.0742   0.1722   0.4207   0.0975
GFC   mean μ      |  0.9989   0.9988    0.9978   0.9990*    |  0.9985   0.9984   0.9925   0.9988
      std σ       |  0.0023*  0.0025    0.0066   0.0023*    |  0.0046   0.0070   0.0415   0.0033
      PC50        |  0.9996   0.9996    0.9996   0.9997*    |  0.9995   0.9995   0.9997*  0.9996
      MIN         |  0.9669*  0.9659    0.9128   0.9594     |  0.9315   0.8197   0.3628   0.9417
      RGF99       |  0.9887   0.9897*   0.9512   0.9897*    |  0.9765   0.9747   0.9212   0.9812
ΔE00  mean μ      |  0.3992   0.3987    0.1666   0.3507     |  0.4257   0.4434   0.1580*  0.3862
      std σ       |  0.3590   0.3596    0.1298*  0.3297     |  0.4558   0.6440   0.2640   0.4745
      PC50        |  0.2792   0.2871    0.1313   0.2409     |  0.2942   0.3071   0.0971*  0.2610
      PC98        |  1.4447   1.4866    0.5588*  1.3806     |  1.7690   1.8975   0.7307   1.7779
      MAX         |  1.8843   1.7817    1.0169*  2.1648     |  5.0328   15.6619  4.1594   8.1484
SCI   mean μ      |  3.5296   3.5722    3.0033   3.2529     |  3.8938   4.0839   2.9014*  3.4897
      std σ       |  2.7643   2.7926    2.7587   2.5591*    |  4.2229   5.8323   4.8805   3.6736
      PC50        |  2.7585   2.7202    2.2204   2.5913     |  2.6196   2.6887   1.6923*  2.3663
      PC98        |  11.9210  12.2766   11.6279  10.8224*   |  15.9216  16.0729  14.5862  15.7454
      MAX         |  20.0506  18.8735*  30.1699  20.8300    |  46.2716  129.481  75.332   36.3269
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
