Article

Non-Destructive Detection of pH Value of Kiwifruit Based on Hyperspectral Fluorescence Imaging Technology

College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Xinkang Road 46, Ya’an 625000, China
* Author to whom correspondence should be addressed.
Agriculture 2022, 12(2), 208; https://doi.org/10.3390/agriculture12020208
Submission received: 5 January 2022 / Revised: 22 January 2022 / Accepted: 24 January 2022 / Published: 1 February 2022
(This article belongs to the Section Agricultural Technology)

Abstract

Non-destructive detection of the pH value of kiwifruit has important practical significance for its quality classification. In this study, hyperspectral fluorescence imaging technology was proposed for the quantitative, non-destructive prediction of the pH value of kiwifruit. Firstly, the SPXY algorithm was used to divide the samples into training and prediction sets, and three different algorithms were used to preprocess the raw spectral data. Secondly, the iteratively retaining information variables (IRIV) algorithm, the variable iterative space shrinkage approach (VISSA), the model adaptive space shrinkage (MASS) method, the random frog (RF) algorithm, and their combination (i.e., IRIV + VISSA + MASS + RF, IVMR) were used to extract effective variables from the preprocessed spectral data. Secondary extractions (IRIV-VISSA and IRIV-MASS) and a third extraction (IVMR-VISSA-IRIV) were then used to further reduce the redundant variables. Based on the effective variables, four regression models, namely random forest regression (RFR), partial least squares regression (PLSR), the extreme learning machine (ELM), and multiple-kernel support vector regression (MK-SVR), were built and compared for predicting the pH value. The results show that IVMR-VISSA-IRIV-MK-SVR had the best prediction results, with RP2, RC2 and RPD of 0.8512, 0.8580, and 2.66, respectively, which verifies that hyperspectral fluorescence imaging technology is reliable for predicting the pH value of kiwifruit non-destructively.

1. Introduction

Kiwifruit is rich in nutrients, minerals, and vitamins that can effectively promote digestion in the human intestine [1], and is popular all over the world [2]. China is not only a major kiwifruit-planting country [3], but also the center of origin of kiwifruit [4]. Because of its great ornamental, therapeutic, and nutritional value, there is an increasing number of studies on the non-destructive testing of kiwifruit’s internal quality parameters, such as soluble solid content (SSC), pH value, moisture content, and vitamin content.
Currently, the common detection method for the pH value of kiwifruit is to use a pH meter, which is time consuming and labor intensive. Therefore, it is of great significance to detect the pH value of kiwifruit nondestructively. Hyperspectral imaging technology and visible-near infrared spectral imaging technology have the advantage of fast and non-destructive detection. In recent years, they have been successfully applied to detect the internal quality of fruits.
Leiva-Valenzuela et al. [5] used hyperspectral imaging technology to predict the firmness and SSC of blueberries. Based on the partial least squares method with cross validation, the predictive correlation coefficient (RP) was 0.87 for firmness and 0.79 for SSC. Guo et al. [6] detected the SSC of apples using near-infrared spectroscopy imaging technology. The CARS-PLS model had the best prediction results, with an RP and residual predictive deviation (RPD) of 0.9808 and 4.845, respectively. Li et al. [7] used hyperspectral imaging technology to compare the effects of different models on the prediction of the SSC of “Pinggu” peaches, and EW-multi-region-RF-PLS had the best prediction, with an RP of 0.86. Pu et al. [8] used near-infrared hyperspectral imaging technology to non-destructively detect the moisture distribution of mango slices during microwave vacuum drying. The best prediction was obtained by the RC-PLS-2 model, with an RP2 of 0.972. Sun et al. [9] used hyperspectral imaging technology to detect the SSC, firmness, and other internal parameters of hami melons. The SNV-CARS-PLS model achieved the best prediction for SSC, with an RP and RPD of 0.9606 and 3.598, respectively, and the RAW-CARS-PLS model achieved the best prediction for firmness, with an RP and RPD of 0.8671 and 1.996, respectively. Li et al. [10] used visible-near infrared hyperspectral imaging technology to predict the SSC, firmness, and other parameters of pears. SPA-PLS had the best prediction, with RP values of 0.9924 for SSC and 0.9977 for firmness.
In recent years, hyperspectral fluorescence technology has been used for non-destructive testing of food and has achieved good testing results. Wold et al. [11] used hyperspectral fluorescence to detect light-induced oxidation in different dairy products. Bertani et al. [12] conducted the detection of aflatoxins B in grained almonds using hyperspectral fluorescence. Liu et al. [13] used the front-face synchronous hyperspectral fluorescence to perform non-destructive testing of beef quality during cold storage. It can be seen that hyperspectral fluorescence technology will continue to play a significant role in the non-destructive detection of food.
To date, many studies have been conducted using visible/near-infrared hyperspectral imaging technology for non-destructive testing of the internal parameters of fruit, such as SSC and firmness. However, there are few studies on the pH value, which is also one of the important internal parameters of fruits. In addition, a considerable portion of non-destructive testing studies on fruits have chosen hyperspectral imaging, near-infrared spectroscopy imaging, and other technologies; however, hyperspectral fluorescence imaging technology is rarely used. This approach can not only detect the measured object quickly and non-destructively, but also obtain the image information and spectral information of the measured object.
Therefore, the aim of this study was to verify the feasibility of predicting the pH value of kiwifruit based on hyperspectral fluorescence imaging technology.
The remainder of the paper is organized as follows. Section 2 introduces the kiwifruit samples, the experimental conditions, and the hyperspectral fluorescence imaging system and pH meter, and briefly describes the preprocessing methods and the effective variable extraction algorithms used in this study. Section 3 presents the results: the samples are divided into training and prediction sets, three preprocessing algorithms are compared for reducing the noise in the spectral data, multiple effective variable extraction algorithms and their combinations are used to screen the variable combinations that best predict the pH value of kiwifruit, and four different regression models are established and analyzed for quantitative prediction. Section 4 summarizes the optimal preprocessing method, effective variable extraction algorithm, and prediction model, and concludes that hyperspectral fluorescence imaging technology can realize high-precision non-destructive detection of the pH value of kiwifruit.

2. Materials and Methods

2.1. Samples

Ninety samples of “Red Sun” kiwifruit were collected from Ya’an, Sichuan province, China. They were similar in size and showed no obvious damage on the surface. After being cleaned and wiped, they were numbered individually and placed in the laboratory at around 25 °C for 24 h. Then, the hyperspectral fluorescence images and the physicochemical pH values of the samples were collected in sequence. Figure 1 shows samples numbered 45 to 48.

2.2. Instruments and Equipment

The physicochemical pH values of the samples were measured with a pH-100 A pen-type pH meter (Figure 2), with a resolution of 0.01 and an accuracy of ±0.02. The hyperspectral fluorescence images of the samples were collected with the hyperspectral fluorescence testing system (Figure 3), which covers a spectral range of 250–1100 nm. The system includes a 150 W xenon lamp light source, a hyperspectral camera, and multiple excitation and emission filters.

2.3. The pH Physicochemical Values Measurement of the Samples

After the hyperspectral fluorescence images of all samples had been collected, the physicochemical pH values were measured immediately. First, the pH composite electrode was stirred in pure water, thoroughly washed, and shaken dry. The electrode was inserted into the pH 4.00 calibration solution for calibration. After this calibration was completed, the electrode was washed with pure water and dried, and the meter was then calibrated in a pH 7.00 standard buffer. Finally, the electrode was rinsed with pure water and wiped dry, and the calibration was completed in a pH 9.18 calibration solution, which completed the three-point calibration. An appropriate amount of pulp was then taken from each sample and pressed into juice, the electrode was placed in the juice, and the physicochemical pH value was recorded. Each sample was measured three times according to the above procedure, and the average of the three measurements was taken as the physicochemical pH value of the sample.

2.4. Extraction of Spectral Data

A xenon lamp was used as the excitation light source, and an excitation filter of 390 nm and a fluorescence filter of 495 nm were selected. The moving speed of the GaiaFluo hyperspectral fluorescence testing system is 0.13 mm/s, the exposure time of the camera is 800 ms, and the distance between the spectral camera lens and the measured object is 70 cm. After all the fluorescence images were collected, version 5.3 of ENVI [14] software was used to take the 3/4 area of the fluorescence image of each sample as the region of interest (ROI), and the spectral data were extracted from the ROI. The average spectrum data in the ROI were taken as the raw spectrum of the sample. The raw hyperspectral fluorescent images of 90 samples are shown in Figure 4. The wavelength range is 376.80–1011.05 nm, with a total of 125 bands (variables).
Figure 4a,b shows the same raw hyperspectral fluorescence spectra; the only difference is that the abscissa of Figure 4a is the wavelength (in nm), whereas the abscissa of Figure 4b is the band-variable number. For convenience, the spectra in the remainder of this paper are presented in terms of band variables.
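For clarity, the ROI averaging step described above amounts to the following minimal sketch (assuming the calibrated hyperspectral cube has already been exported from ENVI as a NumPy array; the array names and the mask shape are hypothetical):

```python
import numpy as np

def mean_roi_spectrum(cube: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Average the spectra of all ROI pixels in a hyperspectral cube.

    cube     : array of shape (rows, cols, bands), e.g. 125 bands
    roi_mask : boolean array of shape (rows, cols), True inside the ROI
    returns  : 1-D array of length `bands` (the sample's raw spectrum)
    """
    roi_pixels = cube[roi_mask]          # shape (n_roi_pixels, bands)
    return roi_pixels.mean(axis=0)

# Example with synthetic data: a 100 x 100 cube with 125 bands and a central
# ROI covering roughly 3/4 of the image (mask chosen purely for illustration).
cube = np.random.rand(100, 100, 125)
mask = np.zeros((100, 100), dtype=bool)
mask[13:87, 13:87] = True
spectrum = mean_roi_spectrum(cube, mask)
print(spectrum.shape)                    # (125,)
```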

2.5. Preprocessing Methods of Spectral Data

The noise generated in the process of extracting the fluorescence spectrum data has a certain impact on the prediction results. Therefore, it is necessary to preprocess the fluorescence spectrum data. In this study, three algorithms, namely de-trending (DT) [15,16], moving average (MA) [17], and Savitzky–Golay smoothing (S-G) [18], were selected to preprocess the raw fluorescence spectrum data. Both S-G and MA can improve the smoothness of spectral data. DT can effectively remove the baseline drift of the raw spectrum data.
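For illustration, the three preprocessing steps can be applied with standard scientific Python tools; the window sizes and the polynomial order below are assumptions, since the paper does not report these settings:

```python
import numpy as np
from scipy.signal import detrend, savgol_filter

def preprocess(spectra: np.ndarray, method: str = "DT") -> np.ndarray:
    """Apply one of the three preprocessing methods to a matrix of spectra.

    spectra : array of shape (n_samples, n_bands)
    method  : "DT" (de-trending), "MA" (moving average), or "SG" (Savitzky-Golay)
    """
    if method == "DT":
        # Remove a linear baseline trend from each spectrum
        return detrend(spectra, axis=1, type="linear")
    if method == "MA":
        # Simple centred moving average with a 5-point window (assumed width)
        kernel = np.ones(5) / 5
        return np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode="same"), 1, spectra)
    if method == "SG":
        # Savitzky-Golay smoothing, 7-point window, 2nd-order polynomial (assumed)
        return savgol_filter(spectra, window_length=7, polyorder=2, axis=1)
    raise ValueError(f"unknown method: {method}")
```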

2.6. Methods of Extracting Effective Variables

The preprocessed spectral data not only have a high dimension, but also contain a large number of collinear variables, which is not conducive to establishing the model. Therefore, it is essential to reduce the dimensionality of the preprocessed spectral data and extract effective variables. In this study, extraction algorithms, namely, the iteratively retaining information variables (IRIV), the variable iterative space shrinkage approach (VISSA), the model adaptive space shrinkage (MASS), the random frog (RF), and their combination (i.e., IRIV + VISSA + MASS + RF, IVMR), were used to extract effective variables from the preprocessed spectral data. Moreover, second and third extractions were also used to further reduce redundant variables.

2.6.1. Iteratively Retaining Information Variables

The iteratively retaining information variables (IRIV) method [19,20,21,22] was applied to extract effective variables from the preprocessed spectral data. Based on the binary matrix shuffling filter (BMSF), IRIV judges the importance of a variable by comparing the predictive performance of multiple sub-models that include the variable with that of sub-models that exclude it.
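As an illustration of this core idea, the following is a minimal single-round sketch in Python; the full IRIV algorithm iterates this procedure and adds a reverse elimination step, and the model settings used here (number of sub-models, PLS components, folds) are assumptions rather than the authors' settings:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def iriv_round(X, y, n_models=500, n_components=5, cv=5, seed=0):
    """One round of the IRIV idea (illustrative, not the full algorithm).

    Binary matrix sampling: each row selects roughly half of the variables for
    one PLS sub-model.  For every variable, the mean cross-validated RMSE of
    the sub-models that include it is compared with that of the sub-models
    that exclude it; a negative difference marks the variable as informative.
    """
    rng = np.random.default_rng(seed)
    n_vars = X.shape[1]
    B = rng.integers(0, 2, size=(n_models, n_vars)).astype(bool)
    B[:, 0] |= ~B.any(axis=1)            # guard against empty selections
    rmse = np.empty(n_models)
    for i, row in enumerate(B):
        pls = PLSRegression(n_components=min(n_components, int(row.sum())))
        scores = cross_val_score(pls, X[:, row], y, cv=cv,
                                 scoring="neg_root_mean_squared_error")
        rmse[i] = -scores.mean()
    # DMEAN < 0: sub-models containing the variable perform better on average
    dmean = np.array([rmse[B[:, j]].mean() - rmse[~B[:, j]].mean()
                      for j in range(n_vars)])
    return np.where(dmean < 0)[0]        # indices of retained variables
```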

2.6.2. Model Adaptive Space Shrinkage

The model adaptive space shrinkage (MASS) method [23] was applied to extract effective variables from the preprocessed spectral data. Based on the model population analysis (MPA) and weighted binary matrix sampling (WBMS), MASS extracts effective variables through the continuous shrinkage processes of the model space.

2.6.3. Variable Iterative Space Shrinkage Approach

The variable iterative space shrinkage approach (VISSA) [24,25] was used to search globally for the optimal interval positions and combinations. Based on MPA, the variable space is gradually optimized in each iteration, and the effective variables are those obtained when the root mean square error of cross-validation (RMSECV) reaches its minimum value.
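A simplified sketch of the weighted binary matrix sampling loop that drives this shrinkage is given below. It is illustrative only: the elite fraction, the number of iterations, and the final weight cut-off are assumptions, and the published algorithm's stopping rule (minimum RMSECV) is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def vissa_sketch(X, y, n_models=1000, best_frac=0.05, n_iter=25,
                 n_components=5, cv=5, seed=0):
    """Weighted binary matrix sampling in the spirit of VISSA (sketch only).

    Each variable starts with weight 0.5.  Per iteration, sub-models are drawn
    with inclusion probability equal to the current weights, the best fraction
    (lowest cross-validated RMSE) is kept, and the weights are updated to the
    selection frequency within that elite set, so they drift towards 0 or 1.
    """
    rng = np.random.default_rng(seed)
    n_vars = X.shape[1]
    w = np.full(n_vars, 0.5)
    for _ in range(n_iter):
        B = rng.random((n_models, n_vars)) < w
        B[:, 0] |= ~B.any(axis=1)        # guard against empty selections
        rmse = np.empty(n_models)
        for i, row in enumerate(B):
            pls = PLSRegression(n_components=min(n_components, int(row.sum())))
            scores = cross_val_score(pls, X[:, row], y, cv=cv,
                                     scoring="neg_root_mean_squared_error")
            rmse[i] = -scores.mean()
        elite = B[np.argsort(rmse)[:int(best_frac * n_models)]]
        w = elite.mean(axis=0)           # new weight = selection frequency
    return np.where(w >= 0.5)[0]         # surviving variables
```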

2.6.4. Random Frog

The random frog (RF) algorithm [26,27] uses a small number of variables to generate multiple models iteratively. It can output the probability of each variable, and then sort the variables according to their probabilities. The variables with a probability greater than the threshold are selected as the effective variables [28].
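The following is a much-simplified sketch of this idea. The published random frog algorithm uses a probabilistic acceptance rule and structured subset proposals; the greedy acceptance and uniform random proposals here are assumptions made for brevity:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def random_frog_sketch(X, y, n_init=5, n_iter=2000, threshold=0.15,
                       n_components=5, cv=5, seed=0):
    """Simplified random-frog loop (illustrative only).

    Starts from a small random variable subset, repeatedly proposes a new
    subset of random size, accepts it when it lowers the cross-validated RMSE,
    and finally ranks variables by how often they appeared in the accepted
    subsets; those above the probability threshold are kept.
    """
    rng = np.random.default_rng(seed)
    n_vars = X.shape[1]

    def cv_rmse(subset):
        pls = PLSRegression(n_components=min(n_components, len(subset)))
        scores = cross_val_score(pls, X[:, subset], y, cv=cv,
                                 scoring="neg_root_mean_squared_error")
        return -scores.mean()

    current = rng.choice(n_vars, size=n_init, replace=False)
    current_rmse = cv_rmse(current)
    counts = np.zeros(n_vars)
    for _ in range(n_iter):
        size = rng.integers(1, n_vars + 1)
        candidate = rng.choice(n_vars, size=size, replace=False)
        cand_rmse = cv_rmse(candidate)
        if cand_rmse < current_rmse:     # greedy acceptance (assumption)
            current, current_rmse = candidate, cand_rmse
        counts[current] += 1
    probability = counts / n_iter
    return np.where(probability >= threshold)[0]
```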

3. Results and Discussion

3.1. Sample Division

The SPXY algorithm [29] was used to divide the 90 samples into a training set of 70 samples and a prediction set of 20 samples. The statistical results of the physicochemical pH values of the training set and the prediction set are shown in Table 1.
From Table 1, it can be seen that the range of pH of the prediction set is within that of the training set, which is helpful for modeling. In Figure 5, it can be seen that the dispersion degrees of the training set and the prediction set are basically the same, indicating that the samples of the two sets are representative.
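For reference, the SPXY partitioning used above (sample set partitioning based on joint X–y distances) can be sketched as follows; this is a minimal implementation of the joint-distance Kennard–Stone rule, not the authors' code, and the variable names in the usage line are hypothetical:

```python
import numpy as np
from scipy.spatial.distance import cdist

def spxy_split(X, y, n_train):
    """SPXY sample partitioning (sketch).

    Joint distance = Euclidean distance in X-space / max + distance in
    y-space / max.  Selection follows the Kennard-Stone rule: start from the
    two most distant samples and repeatedly add the sample farthest from the
    already selected set.  Returns (train_idx, test_idx).
    """
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    dx = cdist(X, X)
    dy = cdist(y, y)
    d = dx / dx.max() + dy / dy.max()

    # start with the two mutually most distant samples
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [i, j]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # distance of each remaining sample to the selected set is its minimum
        # distance to any selected sample; add the sample maximizing it
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(np.argmax(min_d))))
    return np.array(selected), np.array(remaining)

# e.g. train_idx, test_idx = spxy_split(spectra, ph_values, n_train=70)
```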

3.2. Preprocessing the Fluorescence Spectral Data

To improve the signal-to-noise ratio and establish a stable model [28], three algorithms, namely, de-trending (DT), moving average (MA), and Savitzky–Golay smoothing (S-G), were compared. A partial least squares regression (PLSR) model was established to compare the preprocessing effects of the three algorithms. The prediction performance of PLSR was evaluated by the determination coefficient and root mean square error of calibration (training set) (RC2, RMSEC), the determination coefficient and root mean square error of prediction (RP2, RMSEP), and the residual predictive deviation (RPD). The prediction results of PLSR are listed in Table 2.
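These metrics can be computed as in the minimal sketch below; the use of the sample standard deviation of the reference values in the RPD is an assumption, since the paper does not define the RPD formula explicitly:

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

def regression_metrics(y_true, y_pred):
    """Return the determination coefficient, RMSE, and RPD.

    RPD (residual predictive deviation) is the standard deviation of the
    reference values divided by the RMSE of prediction.
    """
    r2 = r2_score(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    rpd = np.std(y_true, ddof=1) / rmse   # sample standard deviation (assumed)
    return r2, rmse, rpd

# Usage (hypothetical arrays and model):
# rc2, rmsec, _   = regression_metrics(y_train, model.predict(X_train))
# rp2, rmsep, rpd = regression_metrics(y_test,  model.predict(X_test))
```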
From Table 2, it can be seen that the Rp2 and Rc2 of DT and S-G are both improved compared with the raw spectra, and the RPD of DT is higher than that of S-G. Therefore, DT was selected as the preprocessing method. The preprocessed spectral images are shown in Figure 6.
From Figure 6, it can be seen that both S-G and MA can improve the smoothness of spectrum data and reduce the interference of noise, and there are obvious peaks around the variables of 30 and 70. DT can effectively remove the baseline drift of the raw spectrum data and increase the signal-to-noise ratio. In summary, DT shows the best performance among the three algorithms.

3.3. Extracting Effective Variables

The preprocessed spectral data contain a large number of redundant variables. If they are directly applied to the prediction model, it will not only increase the calculation burden of the model, but also reduce the model’s prediction accuracy. Therefore, it is essential to reduce the dimensionality of the preprocessed spectral data to extract effective variables.

3.3.1. Effective Variables Extracted by IRIV

The iteratively retaining information variables (IRIV) method was applied to extract effective variables from the preprocessed spectral data. The maximum number of principal factors and the number of cross-validation folds were set to 20 and 5, respectively. The extraction process of IRIV is shown in Figure 7. It can be seen from Figure 7a that the distribution of effective variables is relatively uniform. Figure 7b shows the number of variables in the iterative process of IRIV. From Figure 7b, it can be seen that the number of variables decreases from 125 to 62 in only two iterations, owing to the high correlation among the variables. As the number of iterations increases, most of the redundant variables are filtered out and the number of remaining variables gradually stabilizes. After five iterations, all invalid variables are filtered out, 11 more variables are removed by reverse elimination, and 28 effective variables are obtained.

3.3.2. Effective Variables Extracted by MASS

The model adaptive space shrinkage (MASS) method was applied to extract effective variables from the preprocessed spectral data, with the sample sampling rate set to 0.95, the variable sampling rate set to 0.5, the number of binary matrix samplings set to 1000, the maximum number of principal factors set to 10, and the number of cross-validation folds set to 5. The extraction process of MASS is shown in Figure 8. The distribution of effective variables is shown in Figure 8a, and the change in the variable weights is shown in Figure 8b. The weights of the variables are continuously updated in the iterative process of MASS. After 25 iterations, the weights of all variables stabilize at 0 or 1; the variables with a weight of 0 are eliminated, and those with a weight of 1 are retained as effective variables. Figure 8c shows the change in the sample weights. After 15 iterations, the weights of all samples stabilize; three samples have a weight of 0 and are therefore non-informative samples. After 25 iterations, RMSECV reaches its minimum value of 0.1271, and 34 effective variables are obtained.

3.3.3. Effective Variables Extracted by VISSA

The variable iterative space shrinkage approach (VISSA) was used to extract effective variables from the preprocessed spectral data, with the initial variable weights set to 0.5, the number of binary matrix samplings set to 1000, the maximum number of principal factors set to 10, and the number of cross-validation folds set to 5. The extraction process of VISSA is shown in Figure 9. The distribution of effective variables is shown in Figure 9a, and the RMSECV curve over the iterations of VISSA is shown in Figure 9b. RMSECV drops quickly at the beginning of the iterations and gradually stabilizes as the number of iterations increases. There are a large number of collinear and redundant variables in the original spectral data, and VISSA can quickly eliminate them at the beginning of the iterations. As the number of redundant variables decreases, the collinearity among the remaining variables decreases; therefore, the value of RMSECV gradually stabilizes. After 24 iterations, RMSECV reaches its minimum value of 0.1452, and 42 effective variables are obtained.

3.3.4. Effective Variables Extracted by RF

The random frog (RF) algorithm was used to extract effective variables from the preprocessed spectral data, with the number of initial variables set to 5, the number of iterations set to 2000, and the threshold set to 0.15 (i.e., the black horizontal line in Figure 10b). The extraction process of RF is shown in Figure 10. Figure 10a shows the distribution of effective variables; they are mainly concentrated in the front and back regions of the spectrum, with a small number distributed in the middle. Figure 10b shows the probability distribution of the variables. The top ten variables are 6, 17, 94, 54, 20, 19, 125, 45, 90, and 27, indicating that the highest-ranked variables are spread across the whole spectral range. A total of 35 variables with a probability greater than or equal to the 0.15 threshold were finally selected as effective variables.
The effective variables extracted by the above four methods and their combined variables (i.e., IRIV + VISSA + MASS + RF, which is abbreviated as IVMR) are listed in Table 3.
It can be seen from Table 3 that the number of effective variables extracted by IRIV is the least, and the number of effective variables extracted by IVMR is the largest, accounting for 22.4% and 48.0% of the original variables, respectively. Although the number of variables is greatly reduced, the number is still quite large, and there are still some linearly related variables. Therefore, the effective variables in Table 3 should be further extracted, that is, via the secondary extraction.

3.3.5. Effective Variables Extracted by IRIV-VISSA

The 28 effective variables extracted by IRIV were further extracted by VISSA to eliminate redundant variables, with the initial variable weights set to 0.5, the number of binary matrix samplings set to 1000, the maximum number of principal factors set to 10, and the number of cross-validation folds set to 5. The extraction process of IRIV-VISSA is shown in Figure 11. Figure 11a shows the distribution of effective variables, which is relatively uniform, with a concentration in the spectral region of variables 80 to 120. Compared with the effective variables extracted by IRIV, the eight variables 18, 19, 24, 31, 69, 82, 98, and 124 are removed. Figure 11b shows the RMSECV curve over the iterations of IRIV-VISSA. RMSECV decreases quickly at the beginning of the iterations, indicating that there are redundant variables among the effective variables extracted by IRIV. After 13 iterations, RMSECV reaches its minimum value of 0.1462, and 20 effective variables are finally obtained.

3.3.6. Effective Variables Extracted by IRIV-MASS

The 28 effective variables extracted by IRIV were further extracted by MASS to reduce the collinear variables. The extraction process of IRIV-MASS is shown in Figure 12. Figure 12a shows the distribution of effective variables. Compared with the effective variables extracted by IRIV, the six variables 31, 69, 82, 84, 98, and 124 are removed. From Figure 12b, it can be seen that as the number of iterations increases, the weights of some variables change from 0.5 to 0, and the 22 variables with a weight of 1 are retained as effective variables. From Figure 12c, it can be seen that as the number of iterations increases, the weights of three samples are significantly reduced; these are invalid samples. After 21 iterations, 22 effective variables are finally obtained.

3.3.7. Effective Variables Extracted by IVMR-VISSA

The combined variables contain rich information but have high dimensionality and redundancy, so they need to be extracted further. The 60 combined variables were further extracted by VISSA. The extraction process of IVMR-VISSA is shown in Figure 13. From Figure 13a, it can be seen that the effective variables are mainly concentrated in the three spectral regions of variables 0–10, 40–60, and 80–125. Figure 13b shows the RMSECV curve. RMSECV decreases quickly at the beginning of the iterations, owing to the large collinearity among the combined variables. After 25 iterations, RMSECV reaches its minimum value of 0.1271, and 27 effective variables are finally obtained.
Compared with the data in Table 3, it can be seen that the secondary extraction can further reduce the number of effective variables. However, there are still 27 effective variables extracted by IVMR-VISSA, and they need to be extracted for the third time.

3.3.8. Effective Variables Extracted by IVMR-VISSA-IRIV

The effective variables extracted by IVMR-VISSA were further extracted by IRIV. The maximum number of principal factors and the number of cross-validation folds were set to 20 and 8, respectively. The extraction process of IVMR-VISSA-IRIV is shown in Figure 14. Figure 14a shows the distribution of effective variables. Compared with the variables extracted by IVMR-VISSA, the four variables 42, 1, 50, and 43 are removed from the 27 variables. Figure 14b shows the change in the number of effective variables. After three iterations and reverse variable elimination, 23 effective variables are finally obtained.
The numbers of effective variables extracted by the different methods are shown in Table 4.

3.4. Building the Models and Analyzing the Results

The effective variables in Table 4 were used to build four regression models, namely, random forest regression (RFR), partial least squares regression (PLSR), the extreme learning machine (ELM), and multiple-kernel support vector regression (MK-SVR).

3.4.1. RFR Model

Random forest regression (RFR) [30,31] is an ensemble learner composed of multiple decision trees based on bagging, with the trees built independently of one another. In the regression process, each decision tree returns its own prediction, and the output of RFR is the average of the predictions of the decision trees [32].
The number of decision trees was varied from 1 to 500, and the number of candidate variables considered at each node was one-third of the total number of effective variables. The prediction results of RFR for the pH value of the kiwifruit samples are listed in Table 5.
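A sketch of this tuning loop is shown below. The cross-validation criterion for choosing the number of trees is an assumption, since the paper does not state how the optima in Table 5 were selected, and a coarser grid than 1–500 is used here for speed:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def tune_rfr(X_train, y_train, tree_grid=range(10, 501, 10)):
    """Pick the number of trees by 5-fold CV on the training set (assumed criterion).

    max_features=1/3 makes each split consider one third of the effective
    variables, as described in the text above.
    """
    best_model, best_score = None, -np.inf
    for n_trees in tree_grid:
        rfr = RandomForestRegressor(n_estimators=n_trees, max_features=1 / 3,
                                    random_state=0)
        score = cross_val_score(rfr, X_train, y_train, cv=5, scoring="r2").mean()
        if score > best_score:
            best_model, best_score = rfr.fit(X_train, y_train), score
    return best_model
```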
From Table 5, it can be seen that the RPD, Rp2, and Rc2 of the RFR models range from 1.63 to 2.01, 0.6057 to 0.7406, and 0.6548 to 0.8025, respectively. Among these, IVMR-VISSA-RFR shows the worst prediction performance, with an RPD of only 1.63. The prediction performance of IVMR-VISSA-IRIV-RFR is greatly improved, which indicates that there are still redundant variables among the effective variables extracted by IVMR-VISSA. IVMR-RFR has the best prediction results, with Rp2, Rc2, and RPD of 0.7406, 0.7937, and 2.01, respectively. This is because the combined variables are the most numerous and contain abundant information, which is more suitable for building the RFR model.

3.4.2. PLSR Model

Partial least squares regression (PLSR) [33,34,35,36] integrates the advantages of principal component analysis, canonical correlation analysis, and linear correlation analysis, and is suitable for handling multicollinearity among the variables. We set the range of the number of latent variables (LVs) of the PLSR model to 1–20, with a step size of 1. The prediction results of PLSR for the pH values of the kiwifruit samples are listed in Table 6.
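The LV scan can be sketched as follows; the cross-validation criterion for picking the optimal number of latent variables is an assumption, since the paper only states the search range:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def tune_plsr(X_train, y_train, max_lv=20):
    """Scan 1-20 latent variables (step 1) and keep the CV-best PLSR model."""
    best_model, best_rmse = None, np.inf
    for lv in range(1, min(max_lv, X_train.shape[1]) + 1):
        pls = PLSRegression(n_components=lv)
        rmse = -cross_val_score(pls, X_train, y_train, cv=5,
                                scoring="neg_root_mean_squared_error").mean()
        if rmse < best_rmse:
            best_model, best_rmse = pls.fit(X_train, y_train), rmse
    return best_model
```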
From Table 6, it can be seen that, compared with the original spectral data (see Table 2), the Rc2 values of the PLSR models built on the extracted effective variables are significantly improved. Among these, IVMR-PLSR shows the worst prediction performance, with an RPD of only 1.67. After the second and third extractions are performed on the combined variables (corresponding to the IVMR method), the prediction performance of PLSR improves, which shows that the combined variables contain redundant information. IRIV-VISSA-PLSR has the best prediction results, with RP2, RC2, and RPD of 0.7790, 0.8568, and 2.18, respectively, indicating that the secondary extraction can effectively reduce the collinearity between the effective variables.

3.4.3. ELM Model

The extreme learning machine (ELM) [37,38] is a single-hidden-layer feedforward neural network. Compared with conventional feedforward neural networks trained iteratively, ELM has a faster training speed and stronger generalization ability. We selected the sigmoid (“sig”) function as the activation function of the ELM and set the number of hidden-layer neurons from 1 to 100. The prediction results of ELM for the pH values of the kiwifruit samples are listed in Table 7.
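A minimal ELM sketch (not the authors' implementation) is shown below; it illustrates why training is fast: the random hidden weights are fixed and only the output weights are solved in closed form via the pseudo-inverse.

```python
import numpy as np

class SimpleELM:
    """Minimal single-hidden-layer extreme learning machine (sketch)."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # sigmoid ("sig") activation of the random hidden layer
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # output weights via Moore-Penrose pseudo-inverse (no iterative training)
        self.beta = np.linalg.pinv(H) @ np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```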
From Table 7, it can be seen that the predictive results of ELM are generally good, with RPD ranging from 1.60 to 2.28. Among these, MASS-ELM shows the worst prediction results, which is due to the collinearity between the effective variables. IRIV-VISSA-ELM has the best prediction results, with RP2, RC2, and RPD of 0.7980, 0.8856, and 2.28, respectively.

3.4.4. MK-SVR Model

The standard SVR model uses only one kernel function to predict the samples, and its prediction performance is poor when the data contain multiple types of variables. The MK-SVR [39] model solves this problem well: multiple kernel functions can adapt to a variety of variables, which improves the prediction performance of the model.
The multiple-kernel function is a weighted linear combination of multiple basic kernel functions, and is calculated as follows:
k(x, y) = Σ_{m = 1}^{M} θ_m k_m(x, y),   θ_m ≥ 0
where k_m(x, y) is the mth single kernel function and θ_m is the weight of the mth single kernel function.
The polynomial function was selected as the basic kernel function, and five kernel functions with different parameters were linearly combined to construct the multiple-kernel function. The highest degree d of the five polynomials was set to 1, 2, 3, 4, and 5, respectively, and the penalty coefficient C was searched over the range [10^−1, 10^6]. Table 8 shows the prediction results of MK-SVR based on the SimpleMKL algorithm.
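A sketch of the multiple-kernel construction with a precomputed kernel matrix is given below. The equal kernel weights and the default value of C are placeholders, and the SimpleMKL optimization of the weights θ_m is not reproduced here:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import polynomial_kernel

def multi_kernel(XA, XB, weights, degrees=(1, 2, 3, 4, 5)):
    """Weighted sum of polynomial kernels: k(x, y) = sum_m theta_m * k_m(x, y)."""
    return sum(w * polynomial_kernel(XA, XB, degree=d)
               for w, d in zip(weights, degrees))

# Placeholder weights; SimpleMKL would learn theta_m jointly with the SVR.
theta = np.ones(5) / 5

def fit_mk_svr(X_train, y_train, C=1000.0):
    # C is a placeholder; the paper searches C over a wide logarithmic range.
    K_train = multi_kernel(X_train, X_train, theta)
    model = SVR(kernel="precomputed", C=C)
    model.fit(K_train, y_train)
    return model

def predict_mk_svr(model, X_train, X_new):
    # For prediction, the precomputed kernel is between new and training samples.
    K_new = multi_kernel(X_new, X_train, theta)
    return model.predict(K_new)
```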
From Table 8, it can be seen that IVMR-MK-SVR has the worst prediction results, with an RPD of 1.50. The prediction performance of IVMR-VISSA-MK-SVR is greatly improved, indicating that the redundancy in the combined variables is removed by VISSA. Compared with the first extraction, the prediction accuracy of MK-SVR is significantly improved with the secondarily extracted variables, indicating that the secondary extraction can effectively reduce the redundant variables. Among all methods, IVMR-VISSA-IRIV-MK-SVR has the best prediction results, with RP2, RC2, and RPD of 0.8512, 0.8580, and 2.66, respectively.
In order to compare the prediction results of the four regression models, the corresponding optimal prediction results are listed in Table 9, and their regression graphs are plotted in Figure 15.
Table 9 shows that IVMR-RFR has the worst prediction results, with an RPD of 2.01. Both IRIV-VISSA-PLSR and IRIV-VISSA-ELM achieve better prediction results, with RPDs of 2.18 and 2.28, respectively, while IVMR-VISSA-IRIV-MK-SVR has the best prediction results. From Figure 15, it can be seen that the predicted values of the training set and the prediction set of IVMR-RFR deviate significantly from the measured values (i.e., the physicochemical values). For IRIV-VISSA-PLSR and IRIV-VISSA-ELM, the values of RP2 are noticeably smaller than the values of RC2. The predicted values of the training set and the prediction set of IVMR-VISSA-IRIV-MK-SVR are distributed much closer to the measured values, with an RP2 of 0.8512, and this model shows the best generalization performance.

4. Conclusions

Hyperspectral fluorescence imaging technology was used to nondestructively detect the pH value of kiwifruit, and the research results are summarized as follows:
(1)
By comparing the prediction results of the PLSR models built using the raw spectral data and the data preprocessed by three algorithms, namely, de-trending (DT), S-G smoothing, and moving average, it can be seen that DT can effectively eliminate baseline drift and improve the signal-to-noise ratio of the raw spectral data. Compared with the raw spectral data, the Rp2, Rc2, and RPD of DT-PLSR increased to 0.7037, 0.7263, and 1.88, respectively. Therefore, DT was selected to preprocess the raw spectral data.
(2)
Different methods were used to extract effective variables from the preprocessed spectral data. The experimental results showed that the second and the third extractions can effectively reduce redundant variables and reduce the collinearity between the variables. The number of extracted effective variables accounts for between 16% and 48% of the full spectrum.
(3)
Comparing the prediction results of the four regression models, namely, RFR, PLSR, ELM, and MK-SVR, it can be seen that the prediction results of MK-SVR are generally better than those of the other models. Among these, IVMR-VISSA-IRIV-MK-SVR has the best prediction results, with RP2, RC2, and RPD of 0.8512, 0.8580, and 2.66, respectively.
In this study, we built four different regression models and compared their predictions to validate the results. In addition, in the extraction of effective variables, the advantages of different extraction algorithms were combined, and the extracted variables were combined to obtain more abundant and effective variables.
Although hyperspectral fluorescence instruments can achieve high-precision non-destructive detection of the pH value of kiwifruit, some deviation from the actual physicochemical values remains, and the prediction accuracy needs to be further improved. In addition, hyperspectral fluorescence instruments are relatively expensive, which hinders the popularization of this technology. Therefore, our team is working on the design of portable instruments to support the future detection of the internal quality parameters of fruits such as kiwifruit.
In summary, the research results show that it is feasible to use hyperspectral fluorescence imaging technology to non-destructively detect the pH value of kiwifruit. This approach also provides strong technical support for the non-destructive testing of other internal quality parameters of kiwifruit.

Author Contributions

Conceptualization, X.W.; methodology, X.W.; software, X.W.; validation, L.X., H.C. and B.X.; formal analysis, X.W., H.C. and B.X.; investigation, Z.Z.; resources, P.H. and B.X.; data curation, X.W.; writing—original draft preparation, X.W. and L.X.; writing—review and editing, L.X. and X.W.; visualization, X.W.; supervision, L.X.; project administration, L.X.; funding acquisition, L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (Grant No. 31901413), the National Key Research and Development Program of China (Grant No. 2018yfd0301204), the Key R & D General Project of the Sichuan Provincial Department of Science and Technology (Grant No. 20zdyf2384), the Natural Science Key Project of the Sichuan Provincial Education Department (Grant No. 17ZB0333), and the City-School Cooperation Project (Grant No. 2018sxhz02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gao, M.; Guo, W.; Huang, X.; Du, R.; Zhu, X. Effect of pressing and impacting bruises on optical properties of kiwifruit flesh. Postharvest Biol. Technol. 2021, 172, 111385.
2. Ma, T.; Xia, Y.; Inagaki, T.; Tsuchikawa, S. Non-destructive and fast method of mapping the distribution of the soluble solids content and pH in kiwifruit using object rotation near-infrared hyperspectral imaging approach. Postharvest Biol. Technol. 2021, 174, 111440.
3. Liu, D.; Guo, W. Identifying CPPU-Treated Kiwifruits Using Near-Infrared Hyperspectral Imaging Technology. Food Anal. Methods 2017, 10, 1273–1283.
4. Zhu, H.; Chu, B.; Fan, Y.; Tao, X.; Yin, W.; He, Y. Hyperspectral Imaging for Predicting the Internal Quality of Kiwifruits Based on Variable Selection Algorithms and Chemometric Models. Sci. Rep. 2017, 7, 7845.
5. Leiva-Valenzuela, G.; Lu, R.; Aguilera, J. Prediction of firmness and soluble solids content of blueberries using hyperspectral reflectance imaging. J. Food Eng. 2013, 115, 91–98.
6. Guo, Z.; Wang, M.; Agyekum, A.; Wu, J.; Chen, Q.; Zou, M.; El-Seedi, H.; Tao, F.; Shi, J.; Ouyang, Q.; et al. Quantitative detection of apple watercore and soluble solids content by near infrared transmittance spectroscopy. J. Food Eng. 2020, 279, 109955.
7. Li, J.; Chen, L. Comparative analysis of models for robust and accurate evaluation of soluble solids content in ‘Pinggu’ peaches by hyperspectral imaging. Comput. Electron. Agric. 2017, 142, 524–535.
8. Pu, Y.; Sun, D. Vis–NIR hyperspectral imaging in visualizing moisture distribution of mango slices during microwave-vacuum drying. Food Chem. 2015, 188, 271–278.
9. Sun, J.; Ma, B.; Dong, J.; Zhu, R.; Zhang, R.; Jiang, W. Detection of internal qualities of hami melons using hyperspectral imaging technology based on variable selection algorithms. J. Food Process Eng. 2017, 40.
10. Li, B.; Hou, B.; Zhang, D.; Zhou, Y.; Zhao, M.; Hong, R.; Huang, Y. Pears characteristics (soluble solids content and firmness prediction, varieties) testing methods based on visible-near infrared hyperspectral imaging. Optik 2016, 127, 2624–2630.
11. Wold, J.; Jørgensen, K.; Lundby, F. Nondestructive Measurement of Light-induced Oxidation in Dairy Products by Fluorescence Spectroscopy and Imaging. J. Dairy Sci. 2002, 85, 1693–1704.
12. Bertani, F.; Businaro, L.; Gambacorta, L.; Mencattini, A.; Brenda, D.; Di Giuseppe, D.; Gerardino, A. Optical detection of aflatoxins B in grained almonds using fluorescence spectroscopy and machine learning algorithms. Food Control 2020, 112, 107073.
13. Liu, H.; Ji, Z.; Liu, X.; Shi, C.; Yang, X. Non-destructive determination of chemical and microbial spoilage indicators of beef for freshness evaluation using front-face synchronous fluorescence spectroscopy. Food Chem. 2020, 321, 126628.
14. ElMasry, G.; Wang, N.; ElSayed, A.; Ngadi, M. Hyperspectral imaging for nondestructive determination of some quality attributes for strawberry. J. Food Eng. 2007, 81, 98–107.
15. Tian, Y.; Sun, J.; Zhou, X.; Wu, X.; Lu, B.; Dai, C. Research on apple origin classification based on variable iterative space shrinkage approach with stepwise regression–support vector machine algorithm and visible-near infrared hyperspectral imaging. J. Food Process Eng. 2020, 43, e13432.
16. Wang, H.; Peng, J.; Xie, C.; Bao, Y.; He, Y. Fruit Quality Evaluation Using Spectroscopy Technology: A Review. Sensors 2015, 15, 11889–11927.
17. Yuan, R.; Liu, G.; He, J.; Wan, G.; Fan, N.; Li, Y.; Sun, Y. Classification of Lingwu long jujube internal bruise over time based on visible near-infrared hyperspectral imaging combined with partial least squares-discriminant analysis. Comput. Electron. Agric. 2021, 182, 106043.
18. Sadeghi, M.; Behnia, F.; Amiri, R. Window Selection of the Savitzky–Golay Filters for Signal Recovery from Noisy Measurements. IEEE Trans. Instrum. Meas. 2020, 69, 5418–5427.
19. Yun, Y.; Wang, W.; Tan, M.; Liang, Y.; Li, H.; Cao, D.; Xu, Q. A strategy that iteratively retains informative variables for selecting optimal variable subset in multivariate calibration. Anal. Chim. Acta 2014, 807, 36–43.
20. Wei, L.; Yuan, Z.; Yu, M.; Huang, C.; Cao, L. Estimation of Arsenic Content in Soil Based on Laboratory and Field Reflectance Spectroscopy. Sensors 2019, 19, 3904.
21. Yun, Y.; Bin, J.; Liu, D.; Xu, L.; Yan, T.; Cao, D.; Xu, Q. A hybrid variable selection strategy based on continuous shrinkage of variable space in multivariate calibration. Anal. Chim. Acta 2019, 1058, 58–69.
22. Ren, G.; Ning, J.; Zhang, Z. Intelligent assessment of tea quality employing visible-near infrared spectra combined with a hybrid variable selection strategy. Microchem. J. 2020, 157, 105085.
23. Wen, M.; Deng, B.; Cao, D.; Yun, Y.; Yang, R.; Lu, H.; Liang, Y. The model adaptive space shrinkage (MASS) approach: A new method for simultaneous variable selection and outlier detection based on model population analysis. Analyst 2016, 141, 5586–5597.
24. Zhou, X.; Sun, J.; Tian, Y.; Wu, X.; Dai, C.; Li, B. Spectral classification of lettuce cadmium stress based on information fusion and VISSA-GOA-SVM algorithm. J. Food Process Eng. 2019, 42, e13085.
25. Sun, J.; Tang, K.; Wu, X.; Dai, C.; Chen, Y.; Shen, J. Nondestructive identification of green tea varieties based on hyperspectral imaging technology. J. Food Process Eng. 2018, 41, e12800.
26. Deng, B.; Yun, Y.; Ma, P.; Lin, C.; Ren, D.; Liang, Y. A new method for wavelength interval selection that intelligently optimizes the locations, widths and combinations of the intervals. Analyst 2015, 140, 1876–1885.
27. Sun, J.; Yang, W.; Zhang, M.; Feng, M.; Xiao, L.; Ding, G. Estimation of water content in corn leaves using hyperspectral data based on fractional order Savitzky-Golay derivation coupled with wavelength selection. Comput. Electron. Agric. 2021, 182, 105989.
28. Zhang, D.; Xu, Y.; Huang, W.; Tian, X.; Xia, Y.; Xu, L.; Fan, S. Nondestructive measurement of soluble solids content in apple using near infrared hyperspectral imaging coupled with wavelength selection algorithm. Infrared Phys. Technol. 2019, 98, 297–304.
29. Yu, Y.; Zhang, Q.; Huang, J.; Zhu, J.; Liu, J. Nondestructive determination of SSC in Korla fragrant pear using a portable near-infrared spectroscopy system. Infrared Phys. Technol. 2021, 116, 103785.
30. Liang, L.; Geng, D.; Yan, J.; Qiu, S.; Di, L.; Wang, S.; Li, L. Estimating Crop LAI Using Spectral Feature Extraction and the Hybrid Inversion Method. Remote Sens. 2020, 12, 3534.
31. Wang, S.; Chen, Y.; Wang, M.; Li, J. Performance Comparison of Machine Learning Algorithms for Estimating the Soil Salinity of Salt-Affected Soil Using Field Spectral Data. Remote Sens. 2019, 11, 2605.
32. An, G.; Xing, M.; He, B.; Liao, C.; Huang, X.; Shang, J.; Kang, H. Using Machine Learning for Estimating Rice Chlorophyll Content from In Situ Hyperspectral Data. Remote Sens. 2020, 12, 3104.
33. Zhang, L.; Wang, Y.; Wei, Y.; An, D. Near-infrared hyperspectral imaging technology combined with deep convolutional generative adversarial network to predict oil content of single maize kernel. Food Chem. 2022, 370, 131047.
34. Su, W.; Bakalis, S.; Sun, D. Chemometrics in tandem with near infrared (NIR) hyperspectral imaging and Fourier transform mid infrared (FT-MIR) microspectroscopy for variety identification and cooking loss determination of sweet potato. Biosyst. Eng. 2019, 180, 70–86.
35. Jung, A.; Vohland, M.; Thiele-Bruhn, S. Use of a Portable Camera for Proximal Soil Sensing with Hyperspectral Image Data. Remote Sens. 2015, 7, 11434–11448.
36. Helsen, K.; Bassi, L.; Feilhauer, H.; Kattenborn, T.; Matsushima, H.; Van Cleemput, E.; Honnay, O. Evaluating different methods for retrieving intraspecific leaf trait variation from hyperspectral leaf reflectance. Ecol. Indic. 2021, 130, 108111.
37. Chen, S.; Hu, T.; Luo, L.; He, Q.; Zhang, S.; Li, M.; Li, H. Rapid estimation of leaf nitrogen content in apple-trees based on canopy hyperspectral reflectance using multivariate methods. Infrared Phys. Technol. 2020, 111, 103542.
38. Liu, Y.; Li, M.; Wang, S.; Wu, T.; Jiang, W.; Liu, Z. Identification of heat damage in imported soybeans based on hyperspectral imaging technology. J. Sci. Food Agric. 2020, 100, 1775–1786.
39. Wang, L.; Zhou, X.; Zhu, X.; Guo, W. Estimation of leaf nitrogen concentration in wheat using the MK-SVR algorithm and satellite remote sensing data. Comput. Electron. Agric. 2017, 140, 327–337.
Figure 1. The kiwifruit samples.
Figure 2. pH-100 A pen-type pH meter.
Figure 3. GaiaFluo hyperspectral fluorescence testing system.
Figure 4. Raw hyperspectral fluorescent images: (a) wavelengths (nm), (b) variables.
Figure 5. Scatter plot of pH of two sets: (a) scatter plot of the training set, (b) scatter plot of the prediction set.
Figure 6. The spectral images preprocessed by three algorithms: (a) de-trending; (b) moving average; (c) Savitzky–Golay smoothing.
Figure 7. The extraction process of IRIV: (a) distribution of effective variables; (b) the number of variables in the iterative process of IRIV.
Figure 8. The extraction process of MASS: (a) distribution of effective variables; (b) the change process of the weights of variables; (c) the change process of the weights of samples.
Figure 9. The extraction process of VISSA: (a) distribution of effective variables; (b) the curve of RMSECV.
Figure 10. The extraction process of RF: (a) distribution of effective variables; (b) the probability distribution of variables.
Figure 11. The extraction process of IRIV-VISSA: (a) distribution of effective variables; (b) the curve of RMSECV.
Figure 12. The extraction process of IRIV-MASS: (a) distribution of effective variables; (b) the change process of the weights of variables; (c) the change process of the weights of samples.
Figure 13. The extraction process of IVMR-VISSA: (a) distribution of effective variables; (b) the curve of RMSECV.
Figure 14. The extraction process of IVMR-VISSA-IRIV: (a) distribution of effective variables; (b) the change process of the number of effective variables.
Figure 15. Regression graphs of the optimal prediction results of the four regression models: (a) IVMR-RFR; (b) IRIV-VISSA-PLSR; (c) IRIV-VISSA-ELM; (d) IVMR-VISSA-IRIV-MK-SVR.
Table 1. Statistical results of the pH measurement values of samples.

| Sample Set | Samples | Max | Min | Mean | S.D. |
| Training set | 70 | 3.90 | 2.20 | 2.93 | 0.33 |
| Prediction set | 20 | 3.40 | 2.60 | 2.92 | 0.20 |
Table 2. Prediction results of PLSR with different preprocessing algorithms.

| Pretreatment Method | LVs | Rp2 | Rc2 | RMSEC | RMSEP | RPD |
| Raw | 6 | 0.6489 | 0.7135 | 0.1771 | 0.1174 | 1.73 |
| De-trending | 6 | 0.7037 | 0.7263 | 0.1731 | 0.1078 | 1.88 |
| Moving Average | 9 | 0.6285 | 0.7942 | 0.1501 | 0.1207 | 1.68 |
| Savitzky–Golay Smoothing | 9 | 0.6604 | 0.7446 | 0.1673 | 0.1151 | 1.76 |
Table 3. Effective variables extracted by different methods.

| Extraction Method | Effective Variables |
| IRIV | 2, 6, 7, 18, 19, 20, 24, 25, 26, 28, 31, 35, 42, 54, 69, 73, 77, 82, 84, 88, 94, 97, 98, 101, 105, 112, 124, 125 |
| VISSA | 73, 28, 26, 42, 54, 25, 77, 125, 69, 112, 48, 45, 2, 88, 97, 90, 120, 27, 35, 101, 84, 7, 94, 124, 68, 38, 1, 98, 85, 31, 22, 105, 118, 123, 9, 17, 20, 122, 99, 3, 6, 93 |
| MASS | 73, 26, 28, 25, 35, 54, 112, 48, 58, 125, 77, 97, 17, 84, 45, 60, 88, 7, 70, 103, 38, 6, 9, 49, 2, 1, 23, 4, 14, 43, 20, 99, 117, 107 |
| RF | 2, 3, 4, 6, 7, 9, 17, 19, 20, 22, 26, 27, 45, 46, 48, 54, 56, 60, 82, 84, 88, 90, 94, 97, 99, 101, 103, 105, 110, 112, 117, 118, 125 |
| IVMR | 1, 2, 3, 4, 6, 7, 9, 17, 19, 20, 22, 25, 26, 27, 28, 31, 35, 38, 42, 45, 48, 54, 60, 69, 73, 77, 82, 84, 88, 90, 94, 97, 98, 99, 101, 103, 105, 112, 117, 118, 124, 125, 18, 24, 58, 70, 49, 23, 14, 43, 107, 46, 56, 110, 120, 68, 85, 123, 122, 93 |
Table 4. The numbers of effective variables extracted by different methods.

| Extraction Method | Number of Effective Variables |
| IRIV | 28 |
| VISSA | 42 |
| MASS | 34 |
| RF | 33 |
| IVMR | 60 |
| IRIV-VISSA | 20 |
| IRIV-MASS | 22 |
| IVMR-VISSA | 27 |
| IVMR-VISSA-IRIV | 23 |
Table 5. The prediction results of RFR.

| Extraction Method | Number of Decision Trees | Rp2 | Rc2 | RMSEC | RMSEP | RPD |
| IRIV | 336 | 0.6638 | 0.8025 | 0.1471 | 0.1149 | 1.77 |
| VISSA | 42 | 0.6807 | 0.7594 | 0.1623 | 0.1119 | 1.82 |
| MASS | 7 | 0.7201 | 0.6925 | 0.1835 | 0.1048 | 1.94 |
| IVMR | 161 | 0.7406 | 0.7937 | 0.1503 | 0.1009 | 2.01 |
| RF | 135 | 0.6613 | 0.7855 | 0.1533 | 0.1153 | 1.76 |
| IRIV-VISSA | 390 | 0.6824 | 0.7905 | 0.1515 | 0.1116 | 1.82 |
| IRIV-MASS | 449 | 0.6588 | 0.7988 | 0.1484 | 0.1157 | 1.76 |
| IVMR-VISSA | 60 | 0.6057 | 0.7388 | 0.1691 | 0.1244 | 1.63 |
| IVMR-VISSA-IRIV | 12 | 0.7160 | 0.6548 | 0.1944 | 0.1056 | 1.93 |
Table 6. The prediction results of PLSR.

| Extraction Method | LVs | Rp2 | Rc2 | RMSEC | RMSEP | RPD |
| IRIV | 15 | 0.7526 | 0.8406 | 0.1321 | 0.0985 | 2.06 |
| VISSA | 7 | 0.7259 | 0.7966 | 0.1493 | 0.1037 | 1.96 |
| MASS | 6 | 0.6739 | 0.7907 | 0.1514 | 0.1131 | 1.80 |
| RF | 8 | 0.6314 | 0.8646 | 0.1218 | 0.1203 | 1.69 |
| IVMR | 6 | 0.6224 | 0.8067 | 0.1455 | 0.1217 | 1.67 |
| IRIV-VISSA | 18 | 0.7790 | 0.8568 | 0.1252 | 0.0931 | 2.18 |
| IRIV-MASS | 20 | 0.7788 | 0.8619 | 0.1230 | 0.0932 | 2.18 |
| IVMR-VISSA | 12 | 0.6614 | 0.8970 | 0.1062 | 0.1153 | 1.76 |
| IVMR-VISSA-IRIV | 10 | 0.7496 | 0.8699 | 0.1194 | 0.0991 | 2.05 |
Table 7. The prediction results of ELM.

| Extraction Method | Number of Neurons | Rp2 | Rc2 | RMSEC | RMSEP | RPD |
| IRIV | 9 | 0.7855 | 0.8073 | 0.1453 | 0.0917 | 2.22 |
| VISSA | 94 | 0.7194 | 0.8088 | 0.1447 | 0.1049 | 1.94 |
| MASS | 11 | 0.5911 | 0.8056 | 0.1459 | 0.1267 | 1.60 |
| RF | 72 | 0.7450 | 0.7964 | 0.1493 | 0.1000 | 2.03 |
| IVMR | 94 | 0.6913 | 0.8381 | 0.1331 | 0.1101 | 1.85 |
| IRIV-VISSA | 41 | 0.7980 | 0.8856 | 0.1119 | 0.0890 | 2.28 |
| IRIV-MASS | 98 | 0.7790 | 0.7996 | 0.1482 | 0.0931 | 2.18 |
| IVMR-VISSA | 83 | 0.7216 | 0.8385 | 0.1330 | 0.1045 | 1.94 |
| IVMR-VISSA-IRIV | 57 | 0.7901 | 0.8616 | 0.1231 | 0.0908 | 2.24 |
Table 8. The prediction results of MK-SVR.

| Extraction Method | C | Rp2 | Rc2 | RMSEC | RMSEP | RPD |
| IRIV | 10^3.7 | 0.7705 | 0.7632 | 0.1610 | 0.0949 | 2.14 |
| VISSA | 10^3.8 | 0.7782 | 0.7597 | 0.1622 | 0.0933 | 2.18 |
| MASS | 10^3.6 | 0.7266 | 0.7458 | 0.1669 | 0.1036 | 1.96 |
| RF | 10^3.7 | 0.7730 | 0.7959 | 0.1495 | 0.0944 | 2.15 |
| IVMR | 10^4.8 | 0.5301 | 0.6653 | 0.1915 | 0.1358 | 1.50 |
| IRIV-VISSA | 10^4.0 | 0.8306 | 0.8489 | 0.1286 | 0.0815 | 2.49 |
| IRIV-MASS | 10^4.1 | 0.8395 | 0.8693 | 0.1197 | 0.0794 | 2.56 |
| IVMR-VISSA | 10^5.0 | 0.8203 | 0.8199 | 0.1405 | 0.0840 | 2.42 |
| IVMR-VISSA-IRIV | 10^5.3 | 0.8512 | 0.8580 | 0.1247 | 0.0764 | 2.66 |
Table 9. Comparison of the optimal prediction results of four regression models.

| Regression Model | Extraction Method | Rp2 | Rc2 | RMSEC | RMSEP | RPD |
| RFR | IVMR | 0.7406 | 0.7937 | 0.1503 | 0.1009 | 2.01 |
| PLSR | IRIV-VISSA | 0.7790 | 0.8568 | 0.1252 | 0.0931 | 2.18 |
| ELM | IRIV-VISSA | 0.7980 | 0.8856 | 0.1119 | 0.0890 | 2.28 |
| MK-SVR | IVMR-VISSA-IRIV | 0.8512 | 0.8580 | 0.1247 | 0.0764 | 2.66 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
