Article

Optimized Method Based on Subspace Merging for Spectral Reflectance Recovery

1 Faculty of Light Industry, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
2 State Key Laboratory of Biobased Material and Green Papermaking, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 3056; https://doi.org/10.3390/s23063056
Submission received: 8 February 2023 / Revised: 27 February 2023 / Accepted: 8 March 2023 / Published: 12 March 2023
(This article belongs to the Special Issue Recent Trends and Advances in Color and Spectral Sensors)

Abstract

The similarity between samples is an important factor in spectral reflectance recovery. Current approaches that select samples after dividing the dataset do not take subspace merging into account. This paper proposes an optimized method based on subspace merging for spectral recovery from single RGB trichromatic values. Each training sample is treated as a separate subspace, and the subspaces are merged according to the Euclidean distance. The merged center point of each subspace is obtained through many iterations, and subspace tracking is used to determine the subspace in which each testing sample is located for spectral recovery. Since the center points obtained this way are not actual points in the training samples, the nearest-distance principle is used to replace each center point with a point from the training samples; this constitutes the process of representative sample selection. Finally, these representative samples are used for spectral recovery. The effectiveness of the proposed method is tested by comparing it with existing methods under different illuminants and cameras. The experimental results show that the proposed method performs well not only in terms of spectral and colorimetric accuracy, but also in the selection of representative samples.

1. Introduction

Spectral reflectance is an inherent property of matter and can be regarded as the ‘fingerprint’ of an object [1]. Because of this unique property, it can be used to characterize the color and surface properties of materials in art [2,3,4], remote sensing [5,6], medicine [7,8,9], textiles [7,10] and other fields [11,12]. In practice, it is difficult to measure the surface spectral reflectance of an object directly; researchers mostly obtain it indirectly from the response values of a digital camera [9], which saves time and effort. Many spectral recovery methods therefore exist, such as the common pseudo-inverse method [13,14], principal component analysis [15], compressive sensing [16,17], Wiener estimation [8,18], and others [19,20,21,22,23,24,25]. Among these, the pseudo-inverse method is the most common in spectral recovery.
Exploiting the similarity of samples is one way to improve the accuracy of spectral recovery. Sample selection follows two major directions. The first is extensive selection, which chooses several fixed representative samples from a large dataset. Hardeberg [26] proposed a method based on the minimum condition number. Mohammadi [27] proposed a method based on hierarchical clustering. Cheung [28] proposed four optimal sample selection rules based on different criteria between candidate samples and the representative sample subset. Shen [29] proposed a representative sample selection method based on representative vectors and virtual imaging. Liang [30] selected representative samples by minimizing a defined simulated spectral recovery error. The second is partition-based acquisition: the training samples are divided into subspaces, and each test sample selects its own training samples. These methods can be further classified into dynamic partitioning and stationary partitioning. For example, Zhang [31] divided the Munsell data into 11 regions, which is stationary partitioning. Zhang [32] selected training samples by their distance to the testing sample, and Liang [1] used the nearest training samples as the training subspace, which are examples of dynamic partitioning. Xiong [33] used dynamic partitional clustering to recover spectral reflectance from camera response values.
Although the above methods all consider the selection of training samples and reduce data redundancy, they do not consider that sample subsets can also be merged and optimized. Their limitation is that they only exploit the differences between samples, often referred to as competition among samples, without considering their merging. In addition, residual data redundancy increases the amount of computation and the cost. Our method takes both aspects into account and evaluates them in separate experiments.
In this paper, a novel spectral recovery method based on the merging of subspace is proposed. In this method, each testing sample is taken as an independent sample subspace, which is merged according to the distance between sample subspaces to obtain the final partition information. Similarly, we use the nearest training sample to substitute virtual points as the representative samples to recover the spectral reflectance. The approach can be divided into two parts:
  • Firstly, the training samples are divided into independent classes; using the subspace concept, each class is treated as a subspace, and subspaces are merged according to a set distance. Secondly, the distance from the first subspace to the second subspace is calculated. If it is less than the set distance, the average of the two subspaces replaces the first subspace; if it is greater, the relationship between the first subspace and the third is examined instead. This reduces the number of subspaces and yields partition information. Thirdly, the final merged center points are obtained through many iterations and used to define the subspaces. Finally, subspace tracking determines the subspace in which each testing sample is located for spectral recovery.
  • The center points obtained this way are not actual points in the training samples. The nearest-distance principle is used to replace the center points with points from the training samples, which is the process of representative sample selection. Finally, these representative samples are used for spectral recovery.
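The merging stage described above can be sketched in code. The following is a minimal illustration, not the authors' implementation; the function name, the iteration cap and the array layout (one RGB sample per row) are our own assumptions.

```python
import numpy as np

def merge_subspaces(samples, k, max_iter=100):
    """Greedily merge RGB samples into subspaces.

    Each sample starts as its own subspace; whenever a later subspace lies
    within Euclidean distance k of the current one, the two are averaged
    into a single center and the comparison restarts from that center.
    samples: (m, 3) array of RGB response values; k: merging threshold.
    """
    centers = samples.astype(float).copy()
    for _ in range(max_iter):
        merged = False
        j = 0
        while j < len(centers):
            # distances from subspace j to all later subspaces (Eq. 6 analog)
            d = np.linalg.norm(centers[j + 1:] - centers[j], axis=1)
            close = np.where(d < k)[0]
            if close.size:
                i = j + 1 + close[0]
                centers[j] = (centers[j] + centers[i]) / 2  # average the pair
                centers = np.delete(centers, i, axis=0)
                merged = True          # stay at j and re-check its new center
            else:
                j += 1
        if not merged:                 # stable pass: no pair within k remains
            break
    return centers
```

Two nearby points merge into their midpoint, while a distant point survives as its own center; the paper reports that a threshold around 40 works best for its data.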

Mathematical Background

In this section, the imaging model of digital devices is introduced. When the light-source distribution function $I(\lambda)$, camera sensitivity function $q(\lambda)$ and spectral reflectance $r(\lambda)$ are determined, the imaging process can be expressed as:

$$T = \int_{\lambda_{\min}}^{\lambda_{\max}} I(\lambda)\, q(\lambda)\, r(\lambda)\, d\lambda + \gamma, \tag{1}$$

where $T = [R, G, B]^{\mathsf T}$ is the vector of RGB response values; the superscript $\mathsf T$ denotes transpose; $\lambda_{\min}$ and $\lambda_{\max}$ bound the visible wavelength range (400 nm–700 nm); and the system noise, normally represented by $\gamma$, is omitted in this study [34]. Equation (1) can be further simplified in matrix–vector form:
$$T = MR, \tag{2}$$

where $M$ represents the integral matrix combining the light-source distribution function and the camera sensitivity function, and $R$ denotes the spectral reflectance. Inverting this relation, the spectral reflectance recovery is obtained by Equation (3):

$$R = QT, \tag{3}$$

where $Q$ is the recovery matrix; once $Q$ is known, the spectral reflectance follows directly.
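In practice $M$ is rarely known, so $Q$ is usually estimated from training pairs, as in the pseudo-inverse method mentioned in the introduction. Below is a minimal sketch of that idea, our own illustration rather than the authors' code; the function names and the column-wise layout (31 spectral bands per column) are assumptions.

```python
import numpy as np

def fit_recovery_matrix(R_train, T_train):
    """Least-squares estimate of Q in R = Q T (pseudo-inverse method).

    R_train: (31, m) training spectral reflectances, one spectrum per column;
    T_train: (3, m) matching RGB response values.
    """
    return R_train @ np.linalg.pinv(T_train)

def recover(Q, T_test):
    """Recover spectra from response values via Eq. (3)."""
    return Q @ T_test
```

With 31 bands and only 3 response channels the inverse problem is underdetermined, which is exactly why the local training subsets and weighting of Section 2.1 improve accuracy.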

2. The Proposed Method

In this section, we first show the schematic illustration of the proposed method for spectral reflectance recovery in Figure 1, and formulate the spectral reflectance recovery process based on subspace merging.
As can be seen in Figure 1, firstly, the response values and spectral reflectance information of the color samples are obtained. Secondly, after obtaining the corresponding response value information, subspace merging is carried out according to the distance. Finally, after the merging points are obtained, we can divide it into two categories. The first type is directly partitioned for spectral recovery, and the second type is used in the latest training sample data to replace virtual points in order to directly obtain the restored spectral reflectance.

2.1. Spectral Reflectance Recovery Based on Subspace Merging

The raw response values vary with the spectral sensitivity function and the light source, so they need to be normalized. The standardization process is shown in Equations (4) and (5):

$$H = \max(T), \tag{4}$$
$$T = (T / H) \times 255, \tag{5}$$

where $\max(T)$ denotes the maximum of the response values; the response values are divided by this maximum and multiplied by 255.
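As a sketch (assuming the responses sit in a NumPy array; the function name is ours), the normalization of Equations (4) and (5) is simply:

```python
import numpy as np

def normalize_responses(T):
    """Scale raw camera responses so the maximum becomes 255 (Eqs. 4-5)."""
    H = np.max(T)          # Eq. (4): the maximum response value
    return T / H * 255.0   # Eq. (5)
```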
Given the training samples $(x_1, \ldots, x_m) \in T_{Train}$, where $m$ is the number of training samples, let $x_1 = (r_1, g_1, b_1)$. Since each sample represents a subspace, merging proceeds with the Euclidean distance as the criterion. An initial distance threshold $k$ is given. The distance is calculated in Equation (6):

$$s_j = \sqrt{(r_j - r_i)^2 + (g_j - g_i)^2 + (b_j - b_i)^2}, \tag{6}$$

where $j$ indexes the current sample and $i$ runs over the remaining sample points. When $k < s_j$, the procedure proceeds to the next point; when $k > s_j$, the two samples are combined by Equation (7), the result is used as the new $j$th point, and Equation (6) is repeated:

$$r_j = (r_j + r_i)/2, \quad g_j = (g_j + g_i)/2, \quad b_j = (b_j + b_i)/2. \tag{7}$$
When the process of comparing distances with $k$ and continuously merging ends, all the samples are gathered in a new set $J$. The number of points in $J$ depends on the value of $k$; we denote it by $n$. The partition information of the subspaces $C$ is found from the division of the clustering centers:

$$W(C, J) = \min_{C} \sum_{a=1}^{n} \sum_{x_i \in C_a} \lVert x_i - J_a \rVert_2^2 \quad (i = 1, 2, \ldots, m;\; a = 1, 2, \ldots, n), \tag{8}$$

where $W(C, J)$ defines the partitioning of the samples and the generation of new clustering centers, and $\lVert \cdot \rVert_2^2$ is the squared Euclidean distance. Let $(p_1, \ldots, p_f) \in T_{Test}$ be the testing samples. When a testing sample is input into Equation (8), the training-sample partition for that testing sample is determined.
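Subspace tracking, i.e. assigning a testing sample to the partition of its nearest merged center, can be sketched as follows (our own illustration; the names and row-wise layout are assumptions):

```python
import numpy as np

def assign_subspace(samples, centers):
    """Return, for each sample, the index of its nearest merged center (Eq. 8).

    samples: (f, 3) testing responses; centers: (n, 3) merged centers J.
    """
    # pairwise Euclidean distances, shape (f, n)
    d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```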
After the training sample subspace of the testing sample is selected, the distance information between the testing sample and the training sample is used to calculate the weight of the training sample.
$$w_i = \lVert T_{Test} - C_a \rVert_2^2, \tag{9}$$

where $C_a$ represents the training-sample partition corresponding to the testing sample. Since a larger distance should carry a smaller weight, the reciprocal is used as the new weighting function:

$$w_i = \frac{1}{w_i + \epsilon}, \tag{10}$$

where $\epsilon = 0.001$ simply ensures that the denominator is never zero. The weights form the diagonal matrix

$$W = \begin{bmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & w_i \end{bmatrix}_{i \times i}, \tag{11}$$
The spectral recovery function can be expressed as:
$$Q = R_{Train}\, W\, (T_{Train}\, W)^{-1}, \tag{12}$$
$$R = Q\, T_{Test}, \tag{13}$$

where the superscript $-1$ indicates the pseudo-inverse operator; $R_{Train}$ represents the selected optimal local training samples; $T_{Test}$ represents the standardized response values of the testing sample; $T_{Train}$ represents the standardized response values of the training samples; and $R$ is the corresponding recovered spectral reflectance.
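Equations (9)–(13) can be combined into one routine. The sketch below is our own reading of the method (array shapes and names are assumptions; `np.linalg.pinv` stands in for the pseudo-inverse written as the superscript −1):

```python
import numpy as np

def weighted_recovery(R_train, T_train, t_test, eps=1e-3):
    """Distance-weighted pseudo-inverse recovery (Eqs. 9-13).

    R_train: (31, n) reflectances of the selected subspace (columns);
    T_train: (3, n) matching responses; t_test: (3,) testing response.
    """
    d2 = np.sum((T_train - t_test[:, None]) ** 2, axis=0)  # Eq. (9)
    w = 1.0 / (d2 + eps)                                   # Eq. (10)
    W = np.diag(w)                                         # Eq. (11)
    Q = R_train @ W @ np.linalg.pinv(T_train @ W)          # Eq. (12)
    return Q @ t_test                                      # Eq. (13)
```

Nearby training samples thus dominate the fit, which is the point of selecting a local subspace first.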

2.2. Representative Samples for Spectral Reflectance Recovery

The center point set $J$ obtained in Section 2.1 consists of virtual points rather than actual training samples. The nearest-distance principle is used to replace the virtual points with points from the training samples, which is the process of representative sample selection. This nearest-neighbor substitution proceeds as follows:

$$V = \lVert x_i - J_a \rVert_2^2 \quad (i = 1, 2, \ldots, m;\; a = 1, 2, \ldots, n), \tag{14}$$

where $V$ represents the distance. First, the distance between the first point of $J$ and the training samples is calculated, and the first virtual point of $J$ is replaced by the closest training sample. The selected point is then removed from the training samples, and the 2nd to the $n$th points are replaced in the same way.
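The replacement of virtual centers by real training samples can be sketched as follows (our own illustration, with one sample per row and a hypothetical function name):

```python
import numpy as np

def to_representative_samples(centers, T_train):
    """Replace each virtual center with its nearest unused training sample
    (Eq. 14), removing each selected sample from the candidate pool.

    centers: (n, 3) virtual points J; T_train: (m, 3) training responses.
    Returns the indices of the selected representative samples.
    """
    pool = list(range(T_train.shape[0]))
    chosen = []
    for c in centers:
        d = np.linalg.norm(T_train[pool] - c, axis=1)
        idx = pool[int(np.argmin(d))]  # nearest remaining training sample
        chosen.append(idx)
        pool.remove(idx)               # sampling without replacement
    return chosen
```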
After obtaining the representative samples set J, the final recovered equation can be expressed as:
$$Q = R_{train}\, (T_{train})^{-1}, \tag{15}$$
$$R = Q\, T_{Test}, \tag{16}$$

where $R_{train}$ represents the spectral reflectances of the selected representative training samples; $T_{train}$ denotes their response values; and $T_{Test}$ denotes the response values of the testing samples.

3. Experiment

The experiment is divided into two parts: partition-based acquisition to recover spectral reflectance, and selection of representative samples to recover spectral reflectance. The CIE DE76 color difference is calculated under the CIE 1964 standard observer and the CIE A illuminant. The root mean square error (RMSE) and the goodness-of-fit coefficient (GFC) also serve as precision measures. These three parameters are the evaluation standards for spectral recovery:

$$\Delta E_{ab} = \sqrt{(\Delta L)^2 + (\Delta a)^2 + (\Delta b)^2}, \tag{17}$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{m} (R_{test} - R)^{\mathsf T} (R_{test} - R)}, \tag{18}$$
$$\mathrm{GFC} = \frac{\lvert R_{test}^{\mathsf T}\, R \rvert}{\lVert R_{test} \rVert \cdot \lVert R \rVert}, \tag{19}$$

Equation (17) gives the color difference, where $\Delta L$ is the difference in lightness, $\Delta a$ the difference along the red–green axis and $\Delta b$ the difference along the yellow–blue axis; color difference therefore measures colorimetric accuracy. In Equations (18) and (19), $R_{test}$ is the recovered spectral reflectance and $R$ the original spectral reflectance; in this work, $m = 31$. The RMSE measures the distance between the original and recovered spectral reflectance, and the GFC measures their similarity.
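The three evaluation metrics follow directly from Equations (17)–(19); a minimal sketch (function names are ours):

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE DE76 color difference between two Lab triplets (Eq. 17)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

def rmse(r_test, r_rec):
    """Root mean square error over the m spectral bands (Eq. 18)."""
    return float(np.sqrt(np.mean((r_test - r_rec) ** 2)))

def gfc(r_test, r_rec):
    """Goodness-of-fit coefficient: cosine similarity of spectra (Eq. 19)."""
    return float(abs(r_test @ r_rec) /
                 (np.linalg.norm(r_test) * np.linalg.norm(r_rec)))
```

Note that the GFC is scale-invariant (a spectrum and any positive multiple of it score 1), while the RMSE penalizes amplitude errors, which is why both are reported.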

3.1. Simulation Experiment

The 1269 Munsell Matt chips [27], the 140-patch ColorChecker SG [35] and the 354-sample Vrhel spectral dataset [36] are used in the simulation experiment. First, the Munsell Matt chips are used as the training samples. The Munsell set contains 1269 color chips, is widely used in spectral recovery, and covers every hue with corresponding color patches, making it a convincing training sample. Using only one type of color chip, however, would limit the universality and effectiveness of the experiment, so the other datasets are used together with the Munsell Matt chips to verify the proposed method.
The simulated environment is described in this section. The Nokia N900 (Nokia Corporation, Espoo, Finland) spectral sensitivity function is used, and CIE D65 is selected as the light-source environment, as shown in Figure 2.
All the spectral reflectance data used in the experiment are presented in Figure 3, which shows the three kinds of color chips involved: Figure 3a shows the 1269 Munsell Matt chips, Figure 3b the 140-patch ColorChecker SG, and Figure 3c the 354-sample Vrhel spectral dataset. The spectral reflectance ranges from 400 to 700 nm at a 10 nm interval.
After the spectral information, the response values are analyzed. The response values obtained by a device depend heavily on the real environment, and the camera response space is not perceptually uniform. To facilitate better observation and description of color, the CIELAB space, which has good spatial uniformity, is selected as the description background. CIELAB (sometimes written CIE L*a*b*) is a color model developed by the International Commission on Illumination (CIE). Figure 4a shows that the Munsell Matt chip data are distributed uniformly in this space. Figure 4b,c show the ColorChecker SG and Vrhel datasets, whose smaller sizes are clearly visible. The LAB values are calculated under the CIE D65 illuminant.
After introducing the spectral information and response values, the proposed method is tested in the following sections. Distance is used as a parameter in the experiment: the number of merging iterations between subspaces grows as the distance increases, while the number of center points shrinks. This does not mean that the number of center points directly determines the recovery accuracy, because the underlying relationship is complex; the relationship between the parameter and the accuracy is therefore explored experimentally. In Figure 5, the Munsell Matt chips are used as the training samples. Both self-recovery and recovery of the ColorChecker SG and Vrhel spectral datasets achieve their best results at a distance of 40, and all testing datasets show the same trend. The distance is therefore set to 40.
As shown in Table 1, the proposed method and the PI, PCA, Wang's [37] and Zhang's [32] methods recover spectral reflectance under the same conditions. The recovery accuracy is evaluated in two parts: colorimetric accuracy and spectral accuracy. For the color difference analysis in Table 1, the proposed method yields the smallest average color difference both in self-recovery and when other samples serve as testing data; the smallest average color difference, 0.3063, occurs in self-recovery. The RMSE and GFC, which measure spectral accuracy, show the same pattern. The best results are in bold.
To visualize the recovery accuracy, boxplots are used in Figure 6, with the Munsell Matt chips as training samples and the ColorChecker SG as the testing sample. Figure 6a shows the CIE DE76 color difference, Figure 6b the CIE DE2000 color difference, Figure 6c the RMSE and Figure 6d the GFC. The more compact the box, the better the precision; the proposed method clearly performs better. In Figure 6a–c, the red dots lie relatively close together, so the recovery accuracy is more stable than that of the other methods. Six values characterize a boxplot: the upper edge, lower edge, upper quartile, lower quartile, median and outliers. The upper and lower solid black lines mark the edge values, the top and bottom of the blue box mark the quartiles, the red line inside the box marks the median, and red circles mark outliers.
In Figure 7, the Munsell Matt chips are used as training samples and the ColorChecker SG as the testing sample; four random samples are selected for comparison. The proposed method is visibly closer to the original sample, confirming its better performance. Different colors represent different methods, as keyed in Figure 7d. To further demonstrate the method's performance, it is applied to the spectral images [27] ColorChecker and fruitandflowers.
Figure 8 compares the spectral images recovered by the different methods. Figure 8a shows the original RGB image; Figure 8b–f are error maps showing the color difference of the spectral reflectance recovered by each method. Redder regions indicate a larger color difference and bluer regions a smaller one. The proposed method again performs best.
Previous research shows that large training sets contain sample redundancy in the field of spectral recovery; it is not necessary to use the entire database as training samples. Doing so causes a heavy workload and inconvenience in sample collection and processing, especially in outdoor applications. The optimal selection of representative samples from existing databases has therefore always been an important aspect of spectral recovery.
According to Table 2, different distances select different numbers of representative samples. As the distance increases, the accuracy first improves and then degrades.
As can be seen from Figure 9, the recovery accuracy has an extreme value, very similar to Figure 5: a shallow concave curve that performs best at a distance of 30, where 24 samples are selected. The distance is therefore set to 30 to determine the representative samples. Table 3 compares the proposed method using representative points with several current methods.
After the representative samples are selected, Figure 10 shows the distribution of the training samples and the representative samples chosen by several methods in the xyY space. Timo Eckhard [38] discussed the sample selection methods mentioned above, and his detection results are used as the selection samples; Liang's [30] method selects 60 samples, as in his article. Figure 10a shows the samples selected by the proposed method, Figure 10b those selected by Cheung's method, Figure 10c those selected by Hardeberg's method and Figure 10d those selected by Liang's method. In Figure 10 and Figure 11, red dots represent the selected representative points and blue dots represent the overall training data.

3.2. Recovery for Different Illuminants and Cameras

Considering that different illuminants [24,39] and camera spectral sensitivities affect the proposed method, the Munsell chips are used as training samples, and different illuminants and spectral sensitivity functions are used to verify the effectiveness of the proposed method in Table 4, Table 5, Table 6 and Table 7. The results are evaluated on the ColorChecker SG testing data.

Table 4 shows the RMSE, GFC and color difference of each comparison method under different illuminants. Six illuminants are used: CIE illuminants B, C, D50, D65, E and F2. Both the mean color difference and the average RMSE and GFC of the proposed method show better performance.
Table 5 shows the results of the whole recovery by selecting representative samples under different illuminants. The proposed method shows better performance in more scenarios.
Figure 11 shows the distribution of the representative samples selected at a distance of 30 under the different illuminants.

Experiments are also performed using the spectral sensitivities of different commercial cameras, so the results generalize across spectral sensitivities. The red, green and blue channels of each digital camera replace the Nokia N900 sensitivity function used above; the camera sensitivity functions come from the database measured by Jiang in 2013 [40].
The results in Table 6 are obtained using the Munsell Matt chips as training samples and the ColorChecker SG data as testing samples, with the spectral sensitivity functions in Figure 12 as the observer conditions. As in Table 4, the proposed method gives better mean values. Table 7 then shows the results of several recovery methods under different illuminants with the Canon 5D Mark II spectral sensitivity function. Recovery accuracy is further demonstrated with spectral maps.
Figure 13 and Figure 14 both use the Munsell chips as the training sample, with response values obtained under the Canon 5D Mark II sensitivity and the CIE D65 illuminant; Figure 13 uses fruitandflowers and Figure 14 uses ColorChecker as the testing sample. Both figures show the results obtained with the different spectral recovery methods, with the color scale from blue to red indicating errors from small to large. The results show that the proposed method is superior to the other methods.

4. Discussion and Conclusions

In this study, an optimized method based on subspace merging for spectral reflectance recovery is proposed. The optimal merging distance is selected to determine the subspaces of the training samples, and the subspace of each testing sample is then found by subspace tracking for spectral recovery. The merged points are also used to select representative samples from the large set of training samples. In the experiments, three sample sets, six illuminants and six camera spectral sensitivity functions are used. The results show that the best recovery is achieved at a Euclidean distance of 40; for the selection of representative points, the overall sample recovers best at a Euclidean distance of 30.
The results cover both self-recovery of the Munsell chips and recovery of the ColorChecker SG and Vrhel datasets. The best results are obtained when the proposed method recovers the Munsell chips from themselves, with a color difference of 0.3063. Under the Nokia N900 spectral sensitivity function, the mean color difference across the tested illuminants remains the minimum at 0.9546 and the GFC reaches 0.9998, indicating good spectral and colorimetric accuracy. With different camera sensitivity functions, the proposed method's average color difference is 0.9441 and its GFC reaches 0.9999. With the Canon 5D Mark II sensitivity function under different illuminants, the average color difference is 0.9982 and the average GFC of 0.9999 is again the largest. Whether the camera sensitivity or the illuminant changes, the proposed method consistently performs well.
In future research, we will test the method in more application fields and explore real mineral color chips.

Author Contributions

Conceptualization, Y.X. and G.W.; methodology, Y.X. and G.W.; software, Y.X. and G.W.; validation, Y.X. and G.W.; formal analysis, Y.X. and G.W.; investigation, Y.X., G.W. and X.L.; resources, Y.X. and G.W.; data curation, Y.X. and G.W.; writing—original draft preparation, Y.X. and G.W.; writing—review and editing, Y.X., G.W. and X.L.; visualization, Y.X. and G.W.; supervision, Y.X. and G.W.; project administration, Y.X. and G.W.; funding acquisition, G.W. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shandong Provincial Natural Science Foundation (ZR2020MF091), Key Lab of Intelligent and Green Flexographic Printing (ZBKT202101), Qilu University of Technology (Shandong Academy of Sciences) Pilot Project for Integrating Science, Education, and Industry (2022PX078), A Project of Shandong Province Higher Educational Science and Technology Program (J17KA178), Foundation (ZZ20210108) of State Key Laboratory of Biobased Material and Green Papermaking, Qilu University of Technology, Shandong Academy of Sciences.

Data Availability Statement

The Figure 8, Figure 13 and Figure 14 are from University of Eastern Finland, Spectral Color Research Group. Available online: http://www.uef.fi/web/spectral/-spectral-database (accessed on 24 November 2022). Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liang, J.; Wan, X. Optimized method for spectral reflectance reconstruction from camera responses. Opt. Express 2017, 25, 28273–28287. [Google Scholar] [CrossRef]
  2. Xu, P.; Xu, H.; Diao, C.; Ye, Z. Self-training-based spectral image reconstruction for art paintings with multispectral imaging. Appl. Opt. 2017, 56, 8461–8470. [Google Scholar] [CrossRef]
  3. Liu, Q.; Huang, Z.; Pointer, M.R.; Luo, M.R. Optimizing the Spectral Characterisation of a CMYK Printer with Embedded CMY Printer Modelling. Appl. Sci. 2019, 9, 5308. [Google Scholar] [CrossRef] [Green Version]
  4. Liang, H. Advances in multispectral and hyperspectral imaging for archaeology and art conservation. Appl. Phys. A 2011, 106, 309–323. [Google Scholar] [CrossRef] [Green Version]
  5. Yuen, P.W.; Richardson, M. An introduction to hyperspectral imaging and its application for security, surveillance and target acquisition. Imaging Sci. J. 2013, 58, 241–253. [Google Scholar] [CrossRef]
  6. Lin, Y.T.; Finlayson, G.D. Physically Plausible Spectral Reconstruction. Sensors 2020, 20, 6399. [Google Scholar] [CrossRef]
  7. Depeursinge, C.D.; Everdell, N.L.; Vitkin, I.A.; Styles, I.B.; Claridge, E.; Hebden, J.C.; Calcagni, A.S. Multispectral imaging of the ocular fundus using LED illumination. In Proceedings of the Novel Optical Instrumentation for Biomedical Applications IV, Munich, Germany, 14–17 June 2009. [Google Scholar]
  8. Nishidate, I.; Maeda, T.; Niizeki, K.; Aizu, Y. Estimation of melanin and hemoglobin using spectral reflectance images reconstructed from a digital RGB image by the Wiener estimation method. Sensors 2013, 13, 7902–7915. [Google Scholar] [CrossRef] [Green Version]
  9. Xiao, K.; Zhu, Y.; Li, C.; Connah, D.; Yates, J.M.; Wuerger, S. Improved method for skin reflectance reconstruction from camera images. Opt. Express 2016, 24, 14934–14950. [Google Scholar] [CrossRef] [Green Version]
  10. Octaviana, M.; Maria, G. Conservation-restoration of Textile Materials from Romanian Medieval Art Collections. Rev. De Chim. Buchar. Orig. Ed. 2009, 60, 9–14. [Google Scholar]
  11. Raju, V.B.; Sazonov, E. Detection of Oil-Containing Dressing on Salad Leaves Using Multispectral Imaging. IEEE Access 2020, 8, 86196–86206. [Google Scholar] [CrossRef]
  12. Valero, E.M.; Hu, Y.; Herna’ndez-Andre’s, J.; Eckhard, T.; Nieves, J.L.; Romero, J.; Schnitzlein, M.; Nowack, D. Comparative Performance Analysis of Spectral Estimation Algorithms and Computational Optimization of a Multispectral Imaging System for Print Inspection. Color Res. Appl. 2012, 39, 16–27. [Google Scholar] [CrossRef]
  13. Maali Amiri, M.; Fairchild, M.D. A strategy toward spectral and colorimetric color reproduction using ordinary digital cameras. Color Res. Appl. 2018, 43, 675–684. [Google Scholar] [CrossRef]
  14. Babaei, V.; Amirshahi, S.H.; Agahian, F. Using weighted pseudo-inverse method for reconstruction of reflectance spectra and analyzing the dataset in terms of normality. Color Res. Appl. 2011, 36, 295–305. [Google Scholar] [CrossRef]
  15. Tzeng, D.-Y.; Berns, R.S. A review of principal component analysis and its applications to color technology. Color Res. Appl. 2005, 30, 84–98. [Google Scholar] [CrossRef]
  16. Wu, G. Reflectance spectra recovery from a single RGB image by adaptive compressive sensing. Laser Phys. Lett. 2019, 16, 085208. [Google Scholar] [CrossRef]
  17. Wu, G.; Xiong, Y.; Li, X. Spectral sparse recovery form a single RGB image. Laser Phys. Lett. 2021, 18, 095201. [Google Scholar] [CrossRef]
  18. Stigell, P.; Miyata, K.; Hauta-Kasari, M. Wiener estimation method in estimating of spectral reflectance from RGB images. Pattern Recognit. Image Anal. 2007, 17, 233–242. [Google Scholar] [CrossRef]
  19. Li, H.; Wu, Z.; Zhang, L.; Parkkinen, J. SR-LLA: A novel spectral reconstruction method based on locally linear approximation. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 2029–2033. [Google Scholar]
  20. Wang, L.; Sole, A.; Hardeberg, J.Y. Densely Residual Network with Dual Attention for Hyperspectral Reconstruction from RGB Images. Remote Sens. 2022, 14, 3128. [Google Scholar] [CrossRef]
  21. Xiong, Y.; Wu, G.; Li, X.; Niu, S.; Han, X. Spectral reflectance recovery using convolutional neural network. In Proceedings of the International Conference on Optoelectronic Materials and Devices (ICOMD 2021), Guangzhou, China, 10–12 December 2021; pp. 63–67. [Google Scholar]
  22. Arad, B.; Timofte, R.; Yahel, R.; Morag, N.; Bernat, A.; Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y. NTIRE 2022 Spectral Recovery Challenge and Data Set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 863–881. [Google Scholar]
  23. Lin, Y.-T.; Finlayson, G.D. On the optimization of regression-based spectral reconstruction. Sensors 2021, 21, 5586. [Google Scholar] [CrossRef]
  24. Liu, Z.; Xiao, K.; Pointer, M.R.; Liu, Q.; Li, C.; He, R.; Xie, X. Spectral Reconstruction Using an Iteratively Reweighted Regulated Model from Two Illumination Camera Responses. Sensors 2021, 21, 7911. [Google Scholar] [CrossRef]
  25. Li, S.; Xiao, K.; Li, P. Spectra Reconstruction for Human Facial Color from RGB Images via Clusters in 3D Uniform CIELab* and Its Subordinate Color Space. Sensors 2023, 23, 810. [Google Scholar] [CrossRef] [PubMed]
  26. Jon, Y.; Hardeberg, P.D. Acquisition and Reproduction of Color Images: Colorimetric and Multispectral Approaches; Universal-Publishers: Irvine, CA, USA, 2001. [Google Scholar]
  27. Mohammadi, M.; Nezamabadi, M.; Berns, R.; Taplin, L. A prototype calibration target for spectral imaging. In Proceedings of the Tenth Congress of the International Colour Association Granada, Granada, Spain, 8–13 May 2005; pp. 387–390. [Google Scholar]
  28. Cheung, V.; Westland, S. Methods for Optimal Color Selection. J. Imaging Sci. Technol. 2006, 50, 481–488. [Google Scholar] [CrossRef] [Green Version]
  29. Shen, H.; Zhang, H.; Xin, J.H.; Shao, S. Optimal selection of representative colors for spectral reflectance reconstruction in a multispectral imaging system. Appl. Opt. 2008, 47, 2494–2502. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Liang, J.; Zhu, Q.; Liu, Q.; Xiao, K. Optimal selection of representative samples for efficient digital camera-based spectra recovery. Color Res. Appl. 2022, 47, 107–120. [Google Scholar] [CrossRef]
  31. Zhang, X.; Xu, H. Reconstructing spectral reflectance by dividing spectral space and extending the principal components in principal component analysis. J. Opt. Soc. Am. A Opt. Image. Sci. Vis. 2008, 25, 371–378. [Google Scholar] [CrossRef] [PubMed]
  32. Zhang, L.; Li, B.; Pan, Z.; Liang, D.; Kang, Y.; Zhang, D.; Ma, X. A method for selecting training samples based on camera response. Laser Phys. Lett. 2016, 13, 095201. [Google Scholar] [CrossRef]
  33. Xiong, Y.; Wu, G.; Li, X.; Wang, X. Optimized clustering method for spectral reflectance recovery. Front. Psychol. 2022, 13, 1051286. [Google Scholar] [CrossRef]
  34. Liang, J.; Xiao, K.; Pointer, M.R.; Wan, X.; Li, C. Spectra estimation from raw camera responses based on adaptive local-weighted linear regression. Opt. Express 2019, 27, 5165–5180. [Google Scholar] [CrossRef] [Green Version]
  35. Wu, G.; Liu, Z.; Fang, E.; Yu, H. Reconstruction of spectral color information using weighted principal component analysis. Optik 2015, 126, 1249–1253. [Google Scholar] [CrossRef]
  36. Vrhel, M.J.; Gershon, R.; Iwan, L.S. Measurement and analysis of object reflectance spectra. Color Res. Appl. 1994, 19, 4–9. [Google Scholar] [CrossRef]
  37. Wang, L.; Wan, X.; Xiao, G.; Liang, J. Sequential adaptive estimation for spectral reflectance based on camera responses. Opt. Express 2020, 28, 25830–25842. [Google Scholar] [CrossRef] [PubMed]
  38. Eckhard, T.; Valero, E.M.; Hernandez-Andres, J.; Schnitzlein, M. Adaptive global training set selection for spectral estimation of printed inks using reflectance modeling. Appl. Opt. 2014, 53, 709–719. [Google Scholar] [CrossRef] [PubMed]
  39. Fu, Y.; Zheng, Y.; Zhang, L.; Huang, H. Spectral Reflectance Recovery From a Single RGB Image. IEEE Trans. Comput. Imaging 2018, 4, 382–394. [Google Scholar] [CrossRef]
  40. Jiang, J.; Liu, D.; Gu, J.; Süsstrunk, S. What is the space of spectral sensitivity functions for digital color cameras? In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), Clearwater Beach, FL, USA, 15–17 January 2013; pp. 168–179. [Google Scholar]
Figure 1. Schematic illustration of the proposed method for spectral reflectance recovery.
Figure 2. (a) Spectral sensitivity functions and (b) the spectral power distribution of D65.
Figure 3. The spectral reflectance of (a) 1269 Munsell Matt chips, (b) the 140 ColorChecker SG and (c) the 354 Vrhel spectral dataset.
Figure 4. CIELAB coordinates of (a) 1269 Munsell Matt chips, (b) the 140 ColorChecker SG and (c) the 354 Vrhel spectral dataset.
Figure 5. Relationship between the distance and averaged mean color differences.
Figure 6. Boxplots of simulated experiment results: (a) CIE 1976 (ΔE*ab) color difference, (b) CIEDE2000 color difference, (c) RMSE and (d) GFC.
Figure 7. Spectral reflectance recovery results from our proposed and existing methods with four randomly selected samples (Wang [37], Zhang [32]): (a) 18, (b) 22, (c) 92, and (d) 40.
Figure 8. Comparison of spectral images recovered by different methods using the Nokia N900 spectral sensitivity: (a) the RGB image, (b) PI, (c) PCA, (d) Wang [37], (e) Zhang [32] and (f) the proposed method.
Figure 9. The relationship between mean color difference and merging distance: (a) bar chart and (b) line graph.
Figure 10. The distribution of training samples and representative points selected by four methods in xyY space, respectively: (a) the proposed, (b) Cheung [28], (c) Hardeberg [26] and (d) Liang [30].
Figure 11. The distribution of training samples and representative samples selected by different illuminants in xyY space, respectively: (a) B, (b) C, (c) D50, (d) D65, (e) E and (f) F2.
Figure 12. The distribution of spectral sensitivity functions of six selected digital cameras: (a) Canon5D Mark II, (b) Canon60D, (c) Nikon D3, (d) Nikon D50, (e) PentaxK5 and (f) Sony Nex5N.
Figure 13. Comparison of fruit spectral images recovered by different methods under six camera spectral sensitivity functions: (a) PI, (b) PCA, (c) Wang [37], (d) Zhang [32] and (e) the proposed method.
Figure 14. Comparison of ColorChecker spectral images recovered by different methods under six camera spectral sensitivity functions: (a) PI, (b) PCA, (c) Wang [37], (d) Zhang [32] and (e) the proposed method.
Table 1. Results of recovered spectral reflectance for different testing samples using different spectral recovery methods.

| Testing Sample | Method | RMSE Mean | RMSE Max | GFC Mean | GFC Max | ΔE*ab Mean | ΔE*ab Max |
|---|---|---|---|---|---|---|---|
| Munsell | PI | 0.0212 | 0.1387 | 0.9942 | 0.9999 | 0.8142 | 6.8360 |
| | PCA | 0.0218 | 0.1310 | 0.9942 | 0.9999 | 1.8844 | 16.9725 |
| | Wang [37] | 0.0341 | 0.1688 | 0.9844 | 0.9999 | 1.5788 | 5.9162 |
| | Zhang [32] | 0.0192 | 0.1234 | 0.9950 | 1.0000 | 0.7617 | 5.8412 |
| | Proposed | 0.0103 | 0.1028 | 0.9985 | 1.0000 | 0.3063 | 2.1240 |
| ColorChecker SG | PI | 0.0344 | 0.1590 | 0.9883 | 0.9997 | 1.4405 | 7.8373 |
| | PCA | 0.0332 | 0.1503 | 0.9886 | 0.9997 | 2.7231 | 19.3066 |
| | Wang [37] | 0.0267 | 0.1163 | 0.9765 | 0.9998 | 1.4212 | 8.1198 |
| | Zhang [32] | 0.0312 | 0.1393 | 0.9897 | 0.9999 | 1.3053 | 6.4735 |
| | Proposed | 0.0236 | 0.0955 | 0.9932 | 0.9998 | 0.9743 | 3.1183 |
| Vrhel | PI | 0.0383 | 0.1882 | 0.9841 | 0.9997 | 1.7829 | 7.7830 |
| | PCA | 0.0379 | 0.1870 | 0.9843 | 0.9997 | 3.2865 | 20.4141 |
| | Wang [37] | 0.0341 | 0.1688 | 0.9843 | 0.9999 | 1.5775 | 5.9369 |
| | Zhang [32] | 0.0350 | 0.1837 | 0.9851 | 0.9999 | 1.5588 | 6.5949 |
| | Proposed | 0.0316 | 0.1706 | 0.9866 | 0.9999 | 1.1885 | 6.7224 |
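For reference, the spectral metrics reported above (RMSE and GFC) can be computed as in the following minimal sketch; this is not the authors' code, and the wavelength sampling and example spectra are illustrative assumptions.

```python
import numpy as np

def rmse(r_true, r_rec):
    """Root-mean-square error between two reflectance spectra."""
    r_true, r_rec = np.asarray(r_true, float), np.asarray(r_rec, float)
    return float(np.sqrt(np.mean((r_true - r_rec) ** 2)))

def gfc(r_true, r_rec):
    """Goodness-of-fit coefficient: cosine of the angle between the two
    spectra, so 1 means identical spectral shape."""
    r_true, r_rec = np.asarray(r_true, float), np.asarray(r_rec, float)
    return float(np.dot(r_true, r_rec) /
                 (np.linalg.norm(r_true) * np.linalg.norm(r_rec)))

# A reflectance curve sampled at 31 bands (e.g., 400-700 nm, 10 nm step).
r = np.linspace(0.1, 0.9, 31)
biased = r + 0.01        # a recovery with a constant 0.01 offset
print(rmse(r, biased))   # ≈ 0.01
print(gfc(r, r))         # ≈ 1.0
```

A constant offset leaves the spectral shape almost unchanged, which is why GFC stays near 1 even when RMSE and the color difference grow; the tables therefore report all three.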
Table 2. The relationship between recovery accuracy and distance.

| Distance | Selected Samples | Color Difference |
|---|---|---|
| 10 | 241 | 0.8158 |
| 20 | 63 | 0.8037 |
| 30 | 24 | 0.7715 |
| 40 | 13 | 0.8049 |
| 50 | 8 | 1.2054 |
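The trend in Table 2 — fewer subspaces remaining as the merging distance grows — can be illustrated with a greedy merge: every training sample starts as its own subspace, and any two centers closer than the distance threshold are merged into their (weighted) mean until no such pair remains. This is only a sketch of the idea, not the authors' implementation, and the random 3-D points stand in for CIELAB coordinates.

```python
import numpy as np

def merge_subspaces(samples, distance):
    """Greedily merge subspace centers whose Euclidean distance is below `distance`."""
    centers = [np.asarray(s, float) for s in samples]
    sizes = [1] * len(centers)           # number of samples behind each center
    merged = True
    while merged:                        # iterate until no pair is close enough
        merged = False
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                if np.linalg.norm(centers[i] - centers[j]) < distance:
                    # Weighted mean keeps the center at the centroid of all members.
                    total = sizes[i] + sizes[j]
                    centers[i] = (sizes[i] * centers[i] + sizes[j] * centers[j]) / total
                    sizes[i] = total
                    del centers[j], sizes[j]
                    merged = True
                    break
            if merged:
                break
    return centers

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 3))    # stand-in for CIELAB coordinates
for d in (10, 20, 30):
    print(d, len(merge_subspaces(pts, d)))  # center count shrinks as d grows
```

Table 2 suggests the same trade-off this toy run exhibits: too small a distance keeps almost every sample, too large a distance leaves too few subspaces to recover spectra accurately.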
Table 3. Comparison of recovery errors for the optimally selected training samples in different methods.

| Method | Selected Samples | Mean RMSE | ΔE*ab |
|---|---|---|---|
| The proposed | 24 | 0.0214 | 0.7715 |
| Cheung [28] | 35 | 0.0285 | 1.2434 |
| Hardeberg [26] | 485 | 0.0230 | 0.8031 |
| Liang [30] | 60 | 0.0219 | 0.8099 |
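The representative samples compared above are obtained by snapping each merged subspace center back to the nearest actual training sample (the centers themselves are means and need not coincide with any measured sample). A minimal sketch of that nearest-distance replacement, with illustrative data:

```python
import numpy as np

def nearest_training_samples(centers, training):
    """Replace each merged center with the index of its closest training sample."""
    training = np.asarray(training, float)
    reps = []
    for c in np.atleast_2d(np.asarray(centers, float)):
        d = np.linalg.norm(training - c, axis=1)  # Euclidean distance to every sample
        reps.append(int(np.argmin(d)))            # index of the nearest sample
    return sorted(set(reps))                      # duplicates collapse to one representative

training = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
centers = np.array([[1.0, 1.0], [9.0, 1.0]])
print(nearest_training_samples(centers, training))  # [0, 1]
```

Selecting indices rather than the centers themselves guarantees that every representative has a measured reflectance spectrum available for the recovery step.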
Table 4. Results of recovered spectral reflectance for different illuminants using different spectral recovery methods and the Nokia N900 spectral sensitivity function.

| Illuminant | Method | RMSE Mean | RMSE Max | GFC Mean | GFC Max | ΔE*ab Mean | ΔE*ab Max |
|---|---|---|---|---|---|---|---|
| B | PI | 0.0331 | 0.1519 | 0.9885 | 0.9997 | 1.1993 | 7.6998 |
| | PCA | 0.0330 | 0.1441 | 0.9888 | 0.9997 | 1.7677 | 13.0748 |
| | Wang [37] | 0.0261 | 0.1063 | 0.9785 | 0.9998 | 1.2564 | 10.5796 |
| | Zhang [32] | 0.0320 | 0.1292 | 0.9990 | 0.9998 | 3.4029 | 37.7525 |
| | Proposed | 0.0240 | 0.1129 | 0.9931 | 0.9998 | 0.8474 | 3.2051 |
| C | PI | 0.0334 | 0.1570 | 0.9884 | 0.9997 | 1.2949 | 7.2480 |
| | PCA | 0.0332 | 0.1485 | 0.9887 | 0.9997 | 2.4776 | 17.5926 |
| | Wang [37] | 0.0265 | 0.1131 | 0.9767 | 0.9998 | 11.3332 | 8.6304 |
| | Zhang [32] | 0.0311 | 0.1368 | 0.9898 | 0.9998 | 2.5704 | 21.3022 |
| | Proposed | 0.0226 | 0.0938 | 0.9935 | 0.9998 | 0.8105 | 2.9903 |
| D50 | PI | 0.0331 | 0.1535 | 0.9885 | 0.9997 | 1.2897 | 8.5185 |
| | PCA | 0.0330 | 0.1454 | 0.9887 | 0.9997 | 2.0228 | 15.0978 |
| | Wang [37] | 0.0262 | 0.1090 | 0.9783 | 0.9998 | 1.2790 | 10.1201 |
| | Zhang [32] | 0.0318 | 0.1326 | 0.9899 | 0.9999 | 3.3789 | 40.5513 |
| | Proposed | 0.0232 | 0.1126 | 0.9932 | 0.9999 | 0.8815 | 3.1797 |
| D65 | PI | 0.0334 | 0.1590 | 0.9883 | 0.9997 | 1.4405 | 7.8373 |
| | PCA | 0.0332 | 0.1503 | 0.9886 | 0.9997 | 2.7231 | 19.3066 |
| | Wang [37] | 0.0267 | 0.1163 | 0.9765 | 0.9998 | 1.4212 | 8.1198 |
| | Zhang [32] | 0.0312 | 0.1393 | 0.9897 | 0.9999 | 2.5217 | 23.0217 |
| | Proposed | 0.0236 | 0.0955 | 0.9932 | 0.9998 | 0.9743 | 3.1183 |
| E | PI | 0.0330 | 0.1548 | 0.9884 | 0.9997 | 1.6775 | 8.9601 |
| | PCA | 0.0328 | 0.1467 | 0.9887 | 0.9997 | 2.3032 | 14.4599 |
| | Wang [37] | 0.0263 | 0.1105 | 0.9769 | 0.9998 | 1.6316 | 9.1008 |
| | Zhang [32] | 0.0300 | 0.1416 | 0.9895 | 0.9998 | 2.5857 | 24.9101 |
| | Proposed | 0.0237 | 0.1028 | 0.9933 | 0.9998 | 1.1414 | 3.9218 |
| F2 | PI | 0.0356 | 0.1855 | 0.9876 | 0.9997 | 1.6952 | 9.1798 |
| | PCA | 0.0356 | 0.1752 | 0.9877 | 0.9997 | 2.3032 | 14.4599 |
| | Wang [37] | 0.0290 | 0.1388 | 0.9789 | 0.9998 | 1.5170 | 7.9642 |
| | Zhang [32] | 0.0340 | 0.1670 | 0.9888 | 0.9998 | 4.3534 | 47.3932 |
| | Proposed | 0.0254 | 0.1110 | 0.9926 | 0.9998 | 1.0724 | 5.0488 |
| Average | PI | 0.0336 | 0.1603 | 0.9883 | 0.9997 | 1.4329 | 8.2406 |
| | PCA | 0.0335 | 0.1517 | 0.9885 | 0.9997 | 2.2663 | 15.6653 |
| | Wang [37] | 0.0268 | 0.1157 | 0.9776 | 0.9998 | 3.0731 | 9.0858 |
| | Zhang [32] | 0.0317 | 0.1411 | 0.9911 | 0.9998 | 3.1355 | 32.4885 |
| | Proposed | 0.0238 | 0.1048 | 0.9932 | 0.9998 | 0.9546 | 3.5773 |
Table 5. Mean color difference when the selected representative samples are used to recover all samples.

| Illuminant | The Proposed | Cheung [28] | Hardeberg [26] | Liang [30] |
|---|---|---|---|---|
| B | 0.7059 | 1.0274 | 0.7378 | 0.7995 |
| C | 0.7380 | 1.1504 | 0.7814 | 0.7727 |
| D50 | 0.7699 | 1.0492 | 0.7440 | 0.8222 |
| D65 | 0.7715 | 1.2434 | 0.8031 | 0.8099 |
| E | 0.8217 | 1.4588 | 0.9035 | 0.8827 |
| F2 | 1.2905 | 1.3472 | 1.1153 | 1.1046 |
Table 6. Results of recovered spectral reflectance for different spectral sensitivity functions using different spectral recovery methods.

| Spectral Sensitivity | Method | RMSE Mean | RMSE Max | GFC Mean | GFC Max | ΔE*ab Mean | ΔE*ab Max |
|---|---|---|---|---|---|---|---|
| Canon5D Mark II | PI | 0.0352 | 0.1722 | 0.9879 | 0.9997 | 1.9339 | 8.7401 |
| | PCA | 0.0349 | 0.1614 | 0.9882 | 0.9997 | 3.5937 | 30.1884 |
| | Wang [37] | 0.0284 | 0.1171 | 0.9703 | 0.9998 | 3.0478 | 23.8372 |
| | Zhang [32] | 0.0316 | 0.1533 | 0.9890 | 0.9998 | 1.7473 | 16.0427 |
| | Proposed | 0.0248 | 0.1135 | 0.9918 | 0.9999 | 1.0844 | 6.2860 |
| Canon60D | PI | 0.0350 | 0.1778 | 0.9880 | 0.9997 | 1.6355 | 15.6113 |
| | PCA | 0.0347 | 0.1612 | 0.9883 | 0.9997 | 3.5937 | 27.5312 |
| | Wang [37] | 0.0282 | 0.1173 | 0.9704 | 0.9998 | 2.8043 | 20.0648 |
| | Zhang [32] | 0.0315 | 0.1536 | 0.9891 | 0.9998 | 1.5114 | 13.3564 |
| | Proposed | 0.0253 | 0.1144 | 0.9918 | 0.9999 | 0.9855 | 5.7372 |
| Nikon D3 | PI | 0.0354 | 0.1783 | 0.9877 | 0.9997 | 1.6666 | 14.9339 |
| | PCA | 0.0354 | 0.1675 | 0.9880 | 0.9997 | 3.9613 | 35.1863 |
| | Wang [37] | 0.0292 | 0.1270 | 0.9701 | 0.9998 | 2.9183 | 19.2593 |
| | Zhang [32] | 0.0316 | 0.1601 | 0.9891 | 0.9998 | 1.4146 | 11.2617 |
| | Proposed | 0.0253 | 0.0988 | 0.9921 | 0.9999 | 0.8508 | 5.0009 |
| Nikon D50 | PI | 0.0420 | 0.1730 | 0.9882 | 0.9997 | 1.4045 | 9.7738 |
| | PCA | 0.0341 | 0.1632 | 0.9885 | 0.9997 | 4.1587 | 38.4800 |
| | Wang [37] | 0.0283 | 0.1263 | 0.9705 | 0.9998 | 2.6913 | 13.1325 |
| | Zhang [32] | 0.0309 | 0.1556 | 0.9894 | 0.9998 | 1.1946 | 6.9526 |
| | Proposed | 0.0249 | 0.1079 | 0.9921 | 0.9999 | 0.8965 | 3.0218 |
| PentaxK5 | PI | 0.0346 | 0.1697 | 0.9881 | 0.9997 | 1.5506 | 15.4997 |
| | PCA | 0.0345 | 0.1589 | 0.9883 | 0.9997 | 3.2065 | 25.2036 |
| | Wang [37] | 0.0275 | 0.1295 | 0.9778 | 0.9998 | 1.3520 | 11.6586 |
| | Zhang [32] | 0.0313 | 0.1553 | 0.9891 | 0.9998 | 1.4023 | 13.2577 |
| | Proposed | 0.0244 | 0.1048 | 0.9922 | 0.9999 | 0.7999 | 5.5646 |
| Sony Nex5N | PI | 0.0354 | 0.1861 | 0.9876 | 0.9997 | 1.6740 | 12.9780 |
| | PCA | 0.0353 | 0.1749 | 0.9879 | 0.9997 | 3.2214 | 26.1147 |
| | Wang [37] | 0.0291 | 0.1386 | 0.9705 | 0.9998 | 2.7709 | 17.0108 |
| | Zhang [32] | 0.0326 | 0.1725 | 0.9886 | 0.9998 | 1.5564 | 10.7624 |
| | Proposed | 0.0253 | 0.1083 | 0.9914 | 0.9999 | 1.0473 | 4.9687 |
| Average | PI | 0.0363 | 0.1762 | 0.9879 | 0.9997 | 1.6442 | 12.9228 |
| | PCA | 0.03482 | 0.1645 | 0.9882 | 0.9997 | 3.6226 | 30.4507 |
| | Wang [37] | 0.02845 | 0.1260 | 0.9716 | 0.9998 | 2.5974 | 17.4939 |
| | Zhang [32] | 0.0316 | 0.1584 | 0.9891 | 0.9998 | 1.4711 | 11.9389 |
| | Proposed | 0.0250 | 0.1080 | 0.9919 | 0.9999 | 0.9441 | 5.0965 |
Table 7. Results of recovered spectral reflectance for different illuminants using different spectral recovery methods and the Canon5D Mark II spectral sensitivity function.

| Illuminant | Method | RMSE Mean | RMSE Max | GFC Mean | GFC Max | ΔE*ab Mean | ΔE*ab Max |
|---|---|---|---|---|---|---|---|
| B | PI | 0.0343 | 0.1617 | 0.9883 | 0.9997 | 1.8985 | 28.3501 |
| | PCA | 0.0341 | 0.1525 | 0.9886 | 0.9997 | 2.6379 | 22.3945 |
| | Wang [37] | 0.0278 | 0.1048 | 0.9706 | 0.9998 | 3.3742 | 32.9993 |
| | Zhang [32] | 0.0308 | 0.1409 | 0.9894 | 0.9998 | 1.6845 | 24.3096 |
| | Proposed | 0.0238 | 0.1067 | 0.9928 | 0.9998 | 0.9643 | 9.2983 |
| C | PI | 0.0350 | 0.1700 | 0.9880 | 0.9997 | 1.7997 | 16.1746 |
| | PCA | 0.0348 | 0.1594 | 0.9883 | 0.9997 | 3.6777 | 29.4821 |
| | Wang [37] | 0.0282 | 0.1149 | 0.9705 | 0.9998 | 2.9507 | 20.0315 |
| | Zhang [32] | 0.0314 | 0.1509 | 0.9891 | 0.9999 | 1.6168 | 13.6400 |
| | Proposed | 0.0244 | 0.1142 | 0.9920 | 0.9999 | 1.0146 | 6.4344 |
| D50 | PI | 0.0345 | 0.1636 | 0.9882 | 0.9997 | 2.0576 | 30.6053 |
| | PCA | 0.0343 | 0.1539 | 0.9885 | 0.9997 | 2.8458 | 23.8719 |
| | Wang [37] | 0.0280 | 0.1091 | 0.9705 | 0.9985 | 3.4500 | 36.2491 |
| | Zhang [32] | 0.0310 | 0.1435 | 0.9893 | 0.9998 | 1.8395 | 26.6381 |
| | Proposed | 0.0236 | 0.1106 | 0.9927 | 0.9998 | 0.9948 | 8.2185 |
| D65 | PI | 0.0352 | 0.1722 | 0.9879 | 0.9997 | 1.9339 | 8.7401 |
| | PCA | 0.0349 | 0.1614 | 0.9882 | 0.9997 | 3.5937 | 30.1884 |
| | Wang [37] | 0.0284 | 0.1171 | 0.9703 | 0.9998 | 3.0478 | 23.8372 |
| | Zhang [32] | 0.0316 | 0.0284 | 0.1171 | 0.9998 | 1.7473 | 16.0427 |
| | Proposed | 0.0248 | 0.1135 | 0.9918 | 0.9999 | 1.0844 | 6.2860 |
| E | PI | 0.0344 | 0.1653 | 0.9882 | 0.9997 | 1.6211 | 18.5733 |
| | PCA | 0.0342 | 0.1554 | 0.9885 | 0.9997 | 3.2294 | 27.5190 |
| | Wang [37] | 0.0267 | 0.1154 | 0.9767 | 0.9998 | 1.3318 | 10.2925 |
| | Zhang [32] | 0.0310 | 0.1451 | 0.9893 | 0.9998 | 1.4558 | 15.5029 |
| | Proposed | 0.0237 | 0.1033 | 0.9926 | 0.9998 | 0.8468 | 6.0694 |
| F2 | PI | 0.0375 | 0.2053 | 0.9871 | 0.9997 | 2.7060 | 29.6194 |
| | PCA | 0.0375 | 0.1931 | 0.9872 | 0.9997 | 3.1061 | 19.8124 |
| | Wang [37] | 0.0303 | 0.1526 | 0.9787 | 0.9998 | 2.3323 | 19.7941 |
| | Zhang [32] | 0.0343 | 0.1857 | 0.9880 | 0.9998 | 2.6457 | 29.7150 |
| | Proposed | 0.0248 | 0.1135 | 0.9918 | 0.9999 | 1.0844 | 6.2860 |
| Average | PI | 0.0352 | 0.1730 | 0.9880 | 0.9997 | 2.0028 | 22.0105 |
| | PCA | 0.0350 | 0.1626 | 0.9882 | 0.9997 | 3.1818 | 25.5447 |
| | Wang [37] | 0.0282 | 0.1190 | 0.9729 | 0.9996 | 2.7478 | 23.8673 |
| | Zhang [32] | 0.0317 | 0.1324 | 0.8437 | 0.9998 | 1.8316 | 20.9747 |
| | Proposed | 0.0242 | 0.1103 | 0.9923 | 0.9999 | 0.9982 | 7.0988 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Xiong, Y.; Wu, G.; Li, X. Optimized Method Based on Subspace Merging for Spectral Reflectance Recovery. Sensors 2023, 23, 3056. https://doi.org/10.3390/s23063056