Article

Feasibility Study on the Classification of Persimmon Trees’ Components Based on Hyperspectral LiDAR

1 School of Electronics and Information Engineering, Anhui Jianzhu University, Hefei 230601, China
2 Anhui International Joint Research Center for Ancient Architecture Intellisencing and Multi-Dimensional Modeling, Hefei 230601, China
3 Institute of Unmanned System, Beihang University, Beijing 100191, China
4 Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute, 02150 Espoo, Finland
5 Ji Hua Laboratory, Foshan 528200, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 3286; https://doi.org/10.3390/s23063286
Submission received: 5 February 2023 / Revised: 17 March 2023 / Accepted: 17 March 2023 / Published: 20 March 2023

Abstract
Intelligent management of trees is essential for precise production management in orchards. Extracting component information from individual fruit trees is critical for analyzing and understanding their general growth. This study proposes a method to classify persimmon tree components based on hyperspectral LiDAR data. We extracted nine spectral feature parameters from the spatial–spectral point cloud data and performed preliminary classification using random forest, support vector machine, and backpropagation neural network methods. However, the misclassification of edge points when only spectral information is used reduced the classification accuracy. To address this, we introduced a reprogramming strategy that fuses spatial constraints with spectral information, which increased the overall classification accuracy by 6.55%. We then completed a 3D reconstruction of the classification results in spatial coordinates. The proposed method is sensitive to edge points and shows excellent performance in classifying persimmon tree components.

1. Introduction

The persimmon, cultivated in China for more than 3000 years, is a nutritious fruit containing a large amount of sugar and various vitamins. The development of persimmon farming has produced numerous orchards with highly automated management, covering tasks such as fertilization, irrigation, and automatic fruit picking and packaging. To manage persimmon trees precisely and intelligently, it is essential to have abundant and readily available information that records their growth stages based on structural components, including leaves, fruit, and wood. However, these components are difficult to measure and describe accurately because of the spatial variability and structural complexity of persimmon trees. Therefore, there is an urgent need for a method to separate and classify persimmon trees’ components.
To detect and classify fruit trees’ components, researchers have utilized different methods, including visible cameras, multispectral/hyperspectral cameras, LiDAR, and combinations of these sensors. Several methods based on visible-light image processing have been developed to obtain information from trees under various natural conditions, such as fruit target detection [1,2], segmentation of green fruit against complex orchard backgrounds [3], extraction of tomato phenotypes [4], and fast extraction of tree canopy areas from UAV images [5].
The evolution of hyperspectral imaging technology has enabled more comprehensive extraction of fruit tree data. Varga et al. combined hyperspectral images with deep neural networks to discriminate and visualize different fruit ripeness levels [6]. Discrepancies in reflectance at specific wavelengths have been used to detect potentially damaged fruit [7,8]. The capabilities of hyperspectral imaging have also been explored for applications such as fruit identification and detection [9], nondestructive testing of dry matter [10], and moisture content estimation [11]. However, imaging is inherently passive, and the captured spectral information may be disturbed by factors such as illumination conditions, shadows, occlusion, complex branch structures, and understory layers, which inevitably reduces wood, leaf, and fruit recognition accuracy in the understory [12].
Pulsed LiDAR uses time-of-flight measurement for ranging, so it can obtain an accurate and instantaneous distance image of a target, which supports point cloud processing and learning for autonomous driving [13]; in precision agriculture and smart forestry, LiDAR is used to monitor and reconstruct 3D models of fruit tree components [14]. In existing studies, researchers have employed LiDAR to extract tree height [15], tree branch topology [16], and fruit location [17]. LiDAR can measure distance accurately and obtain spatial information of fruit trees efficiently, but the laser’s monochromatic nature limits its ability to provide abundant spectral information.
Combining several monochromatic laser sources, or fusing LiDAR with multispectral/hyperspectral data, is a direct way to obtain fine spatial–spectral information. Researchers have fused LiDAR data at different wavelengths to separate tree wood and leaf components [18,19,20]. However, this general fusion strategy offers only four to eight spectral channels, and its spectral resolution is insufficient for quantitative analysis [21]. Another option is fusing LiDAR and hyperspectral data to generate spatial–spectral domain data for describing tree composition [22], but combining many laser sources makes it problematic to extend spectral band coverage and improve spectral resolution, and it results in higher hardware costs and more complicated registration. The recently developed active remote sensing system, hyperspectral LiDAR (HSL), can obtain spatial and spectral information simultaneously without any external illumination [23]. Nevalainen et al. identified two vegetation indices sensitive to nitrogen concentration and verified the possibility of 3D nitrogen estimation with HSL data [24]. Bi et al. established a partial least squares regression model to invert the chlorophyll concentration at any vertical position in maize plants from HSL spectral and spatial information [25]. To explore the potential of HSL in forestry, Hakala et al. presented the first scheme for modeling and assessing the three-dimensional distribution of chlorophyll concentration and water content in Norway spruce based on a full-waveform HSL [26]. Vauhkonen et al. subsequently explored HSL’s feasibility for tree species classification using similar techniques [27]. For tree component classification, most previous studies have focused on wood–leaf classification [28] and have neglected fruit extraction. Therefore, the simultaneous monitoring of fruit, leaves, and wood in precision agriculture and forestry remains an urgent problem to be solved.
This paper explores the feasibility of classifying persimmon trees’ components under laboratory conditions with a revised 101-channel HSL system. We propose a classification method that uses spectral and spatial characteristic parameters to classify four components and present the classification results via 3D reconstruction. Firstly, we extracted nine characteristic parameters in the spectral domain from persimmon tree HSL point cloud data. Then, a preliminary classification of the fruit trees’ components was conducted with these characteristic spectral parameters. To address the misclassification of edge points, we propose an enhanced classification method for edge points based on the spatial constraint relationships among point clouds. Finally, the tree component classification results were fused with spatial coordinates to accomplish the 3D reconstruction of a persimmon tree.

2. Materials and Methods

2.1. Hyperspectral LiDAR System

Figure 1 shows the structure of the hyperspectral LiDAR system, which consists of an emission unit, an integrated scanning control unit, and a receiving unit. The emission unit’s acousto-optic tunable filter (AOTF) provides a detection laser with a spectral resolution of 5 nm from 550 nm to 1050 nm by filtering the output of a supercontinuum laser. The HSL system emits a 10 mm diameter laser beam with a divergence angle of 1 mrad; after collimation by a collimator with a focal length of 33 mm, the emitted spot diameter is 5–8.5 mm. A two-axis rotator in the scanning control unit conducts precise scanning to generate the final colorful point cloud of a target. The laser echoes reflected from the target are focused onto an avalanche photodiode (APD) by the receiving optics and are captured and stored by a high-speed data acquisition card for subsequent processing.

2.2. Experimental Samples

To evaluate and verify the performance of the classification method described in the next section, we acquired spatial–spectral point cloud data of tree samples with our HSL. The tree samples included two species: persimmon (Diospyros kaki Thunb.) and lemon (Citrus limon (L.) Burm. f.). The persimmon samples included six branches with unripe fruit and six branches with both ripe and unripe fruit. All fresh branches were sawn from persimmon trees on the South Campus of Anhui Jianzhu University in October 2021 and in July, August, and September 2022. In addition, we selected three lemon trees (bonsai trees), used to simulate orchard tree samples, to explore the generalizability of our method to other fruit tree species. For persimmon, the spectral characteristics of unripe and mid-ripe fruits are similar, while those of ripe and over-ripe fruits are similar [29]. Therefore, this study defines two ripeness levels (ripe and unripe) to characterize fruit ripeness. The lemon samples were from trees at the mid-ripening stage with both unripe and ripe fruit. As illustrated in Figure 2, the order numbers ①, ②, ③, and ④ correspond to the wood, ripe fruit, unripe fruit, and leaf components, respectively. All samples were hung vertically on a metal stand 10 cm in front of the black cloth behind them.

2.3. Data Acquisition and Processing

Data acquisition was conducted in a laboratory environment, where the samples were placed at a horizontal distance of 5 m in front of the HSL system. We made the point cloud cover the whole sample by setting appropriate pitch and horizontal steps for the scanning unit. Figure 3a shows the zigzag scanning pattern, which starts at the top left corner of the target: scanning proceeds in the direction of the arrows and finishes at the endpoint, with vertical and horizontal scanning steps of 0.05 radians each, generating a dense and evenly distributed point cloud. Each HSL point includes spatial coordinates and full-waveform signals at 101 wavelengths, which are recorded and stored in real time as the two-axis rotator scans.
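To make the scanning pattern concrete, the following sketch generates such a zigzag grid of scan angles; the angular extents and the function name are illustrative assumptions, not taken from the HSL control software.

```python
# Illustrative sketch (not the authors' control code): generating the zigzag
# scan grid of Figure 3a with the 0.05 rad step stated in the text.
import numpy as np

def zigzag_scan_angles(pitch_range, yaw_range, step=0.05):
    """Yield (pitch, yaw) pairs row by row, reversing direction on every other row."""
    pitches = np.arange(pitch_range[0], pitch_range[1] + step, step)
    yaws = np.arange(yaw_range[0], yaw_range[1] + step, step)
    for row, pitch in enumerate(pitches):
        row_yaws = yaws if row % 2 == 0 else yaws[::-1]  # zigzag: alternate scan direction
        for yaw in row_yaws:
            yield pitch, yaw

# Example with hypothetical angular extents (radians)
angles = list(zigzag_scan_angles((-0.3, 0.3), (-0.2, 0.2)))
```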
Prior to data collection, a standard 99% reflectivity diffuse reflection whiteboard (TD-MEB99-141Y-20), serving as the reference whiteboard, was scanned with the HSL system in front of a black fabric with little stray light. The samples, placed at the same distance as the whiteboard, were scanned immediately afterwards. The intensity values of the sample and the reference whiteboard were used to calculate the reflectance of the sample [27], as shown in Equation (1). Here, $\rho_t(\lambda_i)$ is the reflectance of the sample at wavelength $\lambda_i$, with $i$ indexing the 101 spectral channels of the HSL system; $V_t(\lambda_i)$ and $V_b(\lambda_i)$ are the peak voltages of the HSL echo signal at wavelength $\lambda_i$ for the sample and the reference whiteboard, respectively; and $\rho_b(\lambda_i)$ is the reflectivity of the reference whiteboard.

$$\rho_t(\lambda_i) = \frac{V_t(\lambda_i)}{V_b(\lambda_i)}\,\rho_b(\lambda_i) \tag{1}$$
To eliminate interference from the background echo signal, we performed point cloud segmentation: using the difference in Y-axis coordinates between the background cloth point cloud and the sample point cloud, we separated the sample from the background with a fixed distance threshold, obtaining the complete set of sample scan points (Figure 3b).
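The two preprocessing steps above (background segmentation and the Equation (1) calibration) can be summarized in a minimal sketch; the array layout, Y threshold value, and function names below are illustrative assumptions, not the authors' code.

```python
# Minimal preprocessing sketch following Section 2.3, assuming the point cloud is
# an (n, 3) array of XYZ coordinates plus an (n, 101) array of peak echo voltages.
import numpy as np

def preprocess(xyz, v_target, v_whiteboard, rho_whiteboard=0.99, y_max=5.05):
    """xyz: (n, 3) coordinates; v_target: (n, 101) peak echo voltages;
    v_whiteboard: (101,) whiteboard peak voltages at the same distance."""
    # Background segmentation: the black cloth hangs 10 cm behind the sample,
    # so points beyond a fixed Y threshold are treated as background and dropped.
    keep = xyz[:, 1] < y_max
    xyz, v_target = xyz[keep], v_target[keep]
    # Reflectance calibration, Equation (1): rho_t = (V_t / V_b) * rho_b,
    # applied channel-wise across the 101 wavelengths.
    reflectance = (v_target / v_whiteboard) * rho_whiteboard
    return xyz, reflectance
```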

3. Methods

Figure 4 shows a schematic diagram of the proposed classification method of persimmon trees’ components, which includes four parts: data preprocessing, preliminary classification, enhanced classification, and 3D reconstruction.
The data preprocessing part consists of point cloud segmentation and spectral reflectance calculation, which were discussed in Section 2.3. The preliminary classification includes feature parameter extraction (see the analysis in Section 3.1 for details), multiple classifications, and an analysis of edge-point misclassification. To reduce the misclassification of edge points, enhanced classification was conducted based on the spatial constraint relationships of the HSL point cloud. Finally, the 3D reconstruction of the classification results was completed by fusing the spatial coordinates.

3.1. Feature Parameter Extraction

To analyze the reflection properties of the persimmon tree samples across different wavelengths, we plotted the average reflectance distribution curves of the four components in Figure 5. As can be observed, the reflectance variation tendencies of the four components differ considerably. The reflectance of the wood increases with wavelength. The leaf reflectance shows a clear red-edge effect, with low reflectance in the visible bands and high reflectance in the near-infrared bands [30]. The reflectance of unripe fruit is similar to that of the leaves, with a clear red-edge effect, indicating that unripe fruit contains a certain amount of chlorophyll. The reflectance of ripe fruit stabilizes at around 20% in the spectral range from 600 to 900 nm, without an obvious red-edge effect.
Feature parameter selection should be based on practical application and classification performance considerations. We selected feature parameters based on the differences among the spectra of the four components, which effectively retains the physical information of the persimmon spectra while avoiding the information loss and computational complexity that traditional dimensionality reduction algorithms may introduce [31]. Selecting vegetation indices as parameters can also eliminate errors caused by varying laser incidence angles [32]. Considering the differences in the components’ spectral reflectance, we selected the five reflectance values with the largest interclass variance at typical bands (700 nm, 730 nm, 780 nm, 850 nm, and 900 nm), defined as R700, R730, R780, R850, and R900, respectively. The R700 and R730 bands, which are sensitive to chlorophyll, accentuate the contrast between components with and without chlorophyll. R780 is the reflectance of the band with the largest reflectance difference among the four components. R850 and R900 were chosen to reflect the difference in reflectance between wood and the other persimmon components. The average reflectance (AVG R760–R930) over the range 760 nm–930 nm, where the spectral differences among the four components are large, was used as a feature parameter. The red-edge chlorophyll index (CI red edge) [33] was selected based on the red-edge effect caused by chlorophyll’s absorption of visible light. The normalized difference vegetation index (NDVI) [34] and the normalized difference red-edge index (NDRE) [32] were selected to distinguish the wood, leaf, and fruit components. The specific parameters are listed in Table 1.
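As an illustration, the nine parameters in Table 1 can be computed from a calibrated reflectance array along the 550–1050 nm, 5 nm channel grid of Section 2.1; the helper names in this sketch are ours, not from the paper.

```python
# Sketch of the nine spectral feature parameters in Table 1, assuming a
# calibrated (n, 101) reflectance array over 550-1050 nm at 5 nm resolution.
import numpy as np

WAVELENGTHS = np.arange(550, 1051, 5)  # 101 channels

def band(reflectance, nm):
    """Reflectance at the channel closest to the requested wavelength."""
    return reflectance[:, np.argmin(np.abs(WAVELENGTHS - nm))]

def spectral_features(r):
    r700, r730, r780, r850, r900 = (band(r, nm) for nm in (700, 730, 780, 850, 900))
    in_range = (WAVELENGTHS >= 760) & (WAVELENGTHS <= 930)
    avg_760_930 = r[:, in_range].mean(axis=1)                    # AVG R760-R930
    ci_red_edge = band(r, 780) / band(r, 710) - 1                # CI red edge
    ndvi = (band(r, 800) - band(r, 670)) / (band(r, 800) + band(r, 670))
    ndre = (band(r, 790) - band(r, 720)) / (band(r, 790) + band(r, 720))
    return np.column_stack([r700, r730, r780, r850, r900,
                            avg_760_930, ci_red_edge, ndvi, ndre])
```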

3.2. Preliminary Classification

We focused on the random forest (RF), support vector machine (SVM), and BP neural network (BPNN) methods, which are machine learning methods that have demonstrated excellent classification performance in previous remote sensing studies [35,36,37]. After nine feature parameters in the spectral domain were chosen, we investigated the performance of the SVM, BPNN, and RF methods in the classification of persimmon trees’ components; we selected the best one as a preliminary classifier.
RF uses binary decision trees as its basic building blocks [38]: each tree is trained independently and produces its own prediction, and the final classification result is obtained by plurality voting across the trees. We built the forest with eight decision trees and classified the four classes (wood, leaf, ripe fruit, and unripe fruit) by sampling the training data with replacement.
The central idea of the SVM classification algorithm is to find an optimal separating hyperplane to use as the decision function. The data points closest to the separating hyperplane are known as support vectors, and the optimal hyperplane is the one that separates every data point correctly while maximizing the margin between classes [39]. We used the spectral features as the input to the SVM and the four persimmon tree component labels as the output vector, and a common radial basis function was selected as the kernel. The SVM hyperparameters, namely the kernel coefficient and the penalty parameter, were set to 0.1 and 10, respectively.
The core idea of the BPNN method is to reasonably distribute all features in a uniform feature space. The BPNN accomplishes clustering or classification by constructing nonlinear functions and optimizing a loss function to fit the data in the target domain [40]. In this paper, we constructed nine input neurons and one hidden layer with five nodes, set according to an empirical equation; the activation function was the sigmoid function, and the number of output neurons was four.
First, we selected six persimmon tree samples and three lemon tree samples as the training samples for the preliminary classification. We manually labeled the training point clouds by comparing them with RGB images in CloudCompare (version 2.11.3, CloudCompare SAS, F-34000, Montpellier, France). The labeled data were extracted from the training point clouds and preprocessed. Then, the labeled data were randomly divided into a training set and a validation set in a 7:3 ratio. Finally, the selected parameters of each point were used as the training inputs.
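For concreteness, the following is a hedged sketch of this training setup using scikit-learn stand-ins for the three classifiers with the hyperparameters stated above; the paper does not disclose its implementation, so the library choice and names are assumptions.

```python
# Sketch of the preliminary-classification training, assuming X holds the nine
# spectral features per point and y holds integer component labels.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def train_classifiers(X, y):
    # 7:3 random split of the manually labeled points
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
    models = {
        "RF": RandomForestClassifier(n_estimators=8),      # 8 trees, bootstrap sampling
        "SVM": SVC(kernel="rbf", gamma=0.1, C=10),         # RBF kernel, stated parameters
        "BPNN": MLPClassifier(hidden_layer_sizes=(5,),     # one hidden layer, 5 nodes
                              activation="logistic"),      # sigmoid activation
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, "validation accuracy:", model.score(X_val, y_val))
    return models
```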

3.3. Enhanced Classification

We used only spectral domain feature parameters in the preliminary classification, which caused the misclassification of edge points (the reasons are analyzed in detail in Section 4.1). To enhance the classification accuracy, especially by correcting misclassified points at the edges, we propose an enhanced classification method based on spatial distance, which reprograms the class of each edge point according to the class consistency of spatially adjacent points [41]; its block diagram is shown in Figure 6.
First, the spatial distances between all points were calculated to generate an initial distance matrix, which, together with the preliminary classification results, was used as the input to the reprogramming algorithm. The initial distance matrix $S$ was computed as the Euclidean distance between points $i$ and $j$ in the sample point cloud, as in Equation (2).

$$S(i,j) = \sqrt{(i_x - j_x)^2 + (i_y - j_y)^2 + (i_z - j_z)^2} \tag{2}$$
The reprogramming algorithm consists of four steps, as shown in the blue box in Figure 6.
Step 1 (reprogrammed point selection): take a sample point $k$, whose class was decided by the preliminary classification, and sort the distances $S(k, j)$ from $k$ to every other sample point $j$.
Step 2 (adjacent point decision): establish a matrix $S_N$ containing the $N$ points with the smallest distances in the neighborhood of point $k$, where $N$ is an empirical value obtained from our repeated experiments.
Step 3 (class statistics in the point’s neighborhood): count the number of points of each of the four classes in $S_N$, select the class label with the largest proportion as the label of point $k$, and complete the class rewriting of point $k$.
Step 4 (point class reprogramming): after the label of point $k$ has been rewritten, return to Step 1 and iterate through all of the sample points in sequence until every point has been reprogrammed, as sketched in the code below.
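The four steps amount to a nearest-neighbor majority relabeling. The sketch below follows them under two stated implementation choices of ours: a KD-tree replaces the full distance matrix of Equation (2), and, per Step 4, votes use labels that have already been rewritten.

```python
# Minimal sketch of the reprogramming strategy in Figure 6.
import numpy as np
from scipy.spatial import cKDTree

def reprogram_labels(xyz, labels, n_neighbors=12):
    """xyz: (n, 3) coordinates; labels: (n,) integer class codes (e.g., 0-3)."""
    tree = cKDTree(xyz)
    # Query N+1 neighbors because each point's nearest neighbor is itself.
    _, idx = tree.query(xyz, k=n_neighbors + 1)
    new_labels = labels.copy()
    for k in range(len(xyz)):                     # Steps 1 and 2: point k and its N neighbors
        neighbor_labels = new_labels[idx[k, 1:]]  # Step 4: votes use already-rewritten labels
        counts = np.bincount(neighbor_labels)     # Step 3: class statistics in the neighborhood
        new_labels[k] = np.argmax(counts)         # rewrite point k with the majority class
    return new_labels
```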

3.4. Three-Dimensional Reconstruction

The target point cloud includes spatial–spectral information, which provides the basis for 3D reconstruction after the persimmon tree components are classified. In this study, we used color mapping to display the point cloud classification results, providing an intuitive visualization of the spatial distribution of the different classes. By assigning a unique color to each class, we could differentiate and identify each point based on its classification. The 3D reconstruction of the different sample trees’ components was accomplished in a Python 3.6 environment.
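A simple sketch of this color-mapping step is given below; the class-to-color assignment and the use of matplotlib are illustrative choices, since the paper only states that a Python 3.6 environment was used.

```python
# Illustrative color mapping for the 3D reconstruction: each class gets a
# unique RGB color and the labeled point cloud is rendered as a 3D scatter plot.
import matplotlib.pyplot as plt

CLASS_COLORS = {0: "tab:green",   # unripe fruit
                1: "tab:orange",  # ripe fruit
                2: "tab:brown",   # wood
                3: "tab:olive"}   # leaf

def reconstruct_3d(xyz, labels):
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    for cls, color in CLASS_COLORS.items():
        pts = xyz[labels == cls]
        ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=1, c=color)
    plt.show()
```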

3.5. Accuracy Evaluation

In previous point cloud classification studies, accuracy on a validation set has often been used to evaluate the performance of classification algorithms. However, generalization errors often arise from factors such as insufficient data in the validation set and overfitting of the model during classification. To evaluate the performance of our classification method on the HSL point cloud, we propose an accuracy evaluation method. First, we manually annotated the sample point cloud in CloudCompare to create a ground truth dataset with spatial coordinates and label values.
The true label dataset $Q$ is built as in Equation (3):

$$Q = \{\, q_i \mid i \in [1, n] \,\}, \quad q_i = \{ X_{q_i}, Y_{q_i}, Z_{q_i}, label_{q_j} \} \tag{3}$$

where $n$ is the total number of points in the sample point cloud, $X_{q_i}, Y_{q_i}, Z_{q_i}$ are the 3D coordinates of point $i$, and $label_{q_j}$ denotes the class label, with $j$ ranging from 1 to 4 for the classes $\{UnripeFruit, RipeFruit, Wood, Leaf\}$.
Similarly, we build the predicted label dataset $P$, as shown in Equation (4):

$$P = \{\, p_i \mid i \in [1, n] \,\}, \quad p_i = \{ X_{p_i}, Y_{p_i}, Z_{p_i}, label_{p_j} \} \tag{4}$$

Then, we count the correctly predicted points. Let $T_j$ be the number of correctly classified points of class $j$ in the prediction set $P$, and let $H_j$ be the total number of points of class $j$ in the true set $Q$; both counters are initialized to 0. We loop through the prediction set and the true set and compare their label values at the same coordinate position, i.e., where $X_{p_i} = X_{q_i}$, $Y_{p_i} = Y_{q_i}$, and $Z_{p_i} = Z_{q_i}$. If $label_{p_j} = label_{q_j}$, both $T_j$ and $H_j$ are incremented by 1; otherwise, only $H_j$ is incremented. After all points in $P$ and $Q$ have been compared, we obtain the final counts $T_j^n$ and $H_j^n$.
Finally, the classification accuracy of one class is defined as in Equation (5):

$$K_j = \frac{T_j^n}{H_j^n} \tag{5}$$

The overall classification accuracy $K_{Overall}$ is determined as in Equation (6): the number of correctly classified points in the prediction set divided by the total number of sample points.

$$K_{Overall} = \frac{\sum_{j=1}^{4} T_j^n}{\sum_{j=1}^{4} H_j^n} \tag{6}$$
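Assuming the predicted and true sets have already been aligned point-for-point on their coordinates, Equations (5) and (6) reduce to the counting loop sketched below (names are illustrative).

```python
# Sketch of the accuracy evaluation of Equations (3)-(6) for aligned label arrays.
import numpy as np

def class_accuracies(true_labels, pred_labels, n_classes=4):
    T = np.zeros(n_classes)   # correctly classified points per class (T_j)
    H = np.zeros(n_classes)   # total points per class in the true set (H_j)
    for q, p in zip(true_labels, pred_labels):
        H[q] += 1
        if p == q:
            T[q] += 1
    per_class = T / H                 # Equation (5): K_j = T_j / H_j
    overall = T.sum() / H.sum()       # Equation (6): overall accuracy
    return per_class, overall
```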

4. Results and Discussion

4.1. Preliminary Classification Performance

Figure 7 illustrates the reconstructed results of the three classifiers for some of the persimmon samples (Figure 2). The SVM method has the lowest accuracy: it cannot correctly classify the target classes at the edges, and it also misclassifies some nonedge leaf and fruit areas, such as red boxes 0, 1, and 2. The BPNN method correctly classifies most of the sample points, although some points at the edges of the wood and fruit are misclassified, as shown in boxes 3 and 4. Finally, the RF classifier has the highest accuracy, distinguishing all four classes of the persimmon trees’ components, although some points at the fruit edges are misclassified, as shown in boxes 5 and 6.
Table 2 lists the classification accuracy achieved with the nine spectral domain parameters by the SVM, BPNN, and RF classifiers. All three classifiers demonstrate good classification performance, with the RF classifier exhibiting the highest accuracy and the SVM classifier the lowest; the three algorithms’ overall accuracy values are 84.6%, 86.3%, and 88.6%, respectively. The mean accuracy of the leaf class is greater than 87%. The maximum accuracy of the ripe fruit is 85.8% with the RF classifier, the maximum accuracy of the unripe fruit reaches 86.2% with the BPNN classifier, and the accuracy of the wood is below 82% with all three classifiers. On these grounds, we selected the best performer, RF, as the preliminary classifier.
To analyze misclassification with the RF method at edge points, we manually extracted the edge and nonedge region points of the four components of the persimmon tree samples (Figure 7c), selected more than half of the points, and calculated the average spectral reflectance.
As shown in Figure 8, there are obvious reflectance differences between the edge and nonedge points. The nonedge reflectance of unripe fruit is, on average, 10.36% higher than its edge reflectance, and the spectral reflectance of nonedge points shows a clear red-edge effect, while that of the edge points increases only slowly. The nonedge reflectance of ripe fruit is, on average, 17.81% higher than that of the edge points, and its spectral reflectance is relatively flat over the range 600 nm–900 nm, while the edge reflectance again increases slowly. The nonedge reflectance values of the leaf and wood are, on average, 18.18% and 11.06% higher than those of their corresponding edge points, respectively, and their edge reflectance curves follow a similar trend. The edge reflectance curves of all four component classes trend similarly, with the leaf edge reflectance being the highest. In summary, the spectral differences between edge and nonedge points are the main reason for the misclassification of edge points when reflectance alone is used as the classification feature.
In the preliminary classification, the parameters were selected only from the spectral data, which were calculated from the peak values of the LiDAR echo signals [28]. However, when the HSL system collects data at fruit or wood edges, the calculated reflectance often differs from that of nonedge points (the large incidence angle at the fruit’s edges can also lead to abnormal reflectance values). The reason is that part of the HSL laser spot falls on the background, covers multiple components, or even misses the target, introducing errors into the collected echo signal [42].

4.2. Enhanced Classification Performance

4.2.1. Neighboring Point Decision

Table 3 lists the enhanced classification average accuracy of persimmon tree samples with different numbers of neighboring points (N). The proposed method can obtain the highest classification accuracy when N is 12, so we selected 12 as the N value in the following sections.

4.2.2. Classification Performance

The accuracy of the persimmon components with the preliminary classification versus our enhanced classification is listed in Table 4. The overall accuracy of our classification method increases to 96.6%, an 8% gain over the preliminary classification. By incorporating spatial features, our classification method gains 12.9%, 12.4%, 12.2%, and 2.3% over the preliminary classification on the unripe fruit, ripe fruit, wood, and leaf, respectively. The proposed method therefore outperforms classification using spectral features alone for every component under our experimental conditions, which benefits from the fact that our method preserves the spatial structure and exploits the constrained dependencies between points in addition to the spectral features.
The classification results for the lemon trees obtained by the different methods are listed in Table 5. The overall accuracy values of the SVM, BPNN, and RF methods in the preliminary classification are 80.1%, 84.9%, and 88.3%, respectively. The BPNN achieves the highest accuracy for the wood and unripe fruit, at 78.8% and 88.4%, respectively; the RF method has the highest accuracy for the leaf and ripe fruit, at 89.3% and 83.3%, respectively; and the SVM classifier performs worst. Following the enhanced classification of the lemon sample, the accuracy of our method increased by 5.1% over the preliminary classification, reaching 93.4%. Compared to the preliminary classification using spectral features, the classification accuracy values of the leaf, ripe fruit, unripe fruit, and wood with our method increased by 5.4%, 9%, 3.7%, and 17.4%, respectively. The overall accuracy for the lemon sample was lower than that for the persimmon sample because the lemon tree has a more complex spatial structure.

4.3. Reconstruction of the Classification Results

Figure 9a shows the reconstruction of the classification results for the two persimmon samples. Our method can effectively distinguish the different tree components; in particular, edge points that were misclassified in the preliminary classification are corrected. Compared to Figure 7, the leaf and wood classification results are ideal, as shown in red boxes 1, 2, 3, and 4, and most of the misclassified edge points of the unripe and ripe fruit were corrected, as shown in red boxes 5 and 6.
Figure 9b shows the class changes of the sample points in the enhanced classification compared to the preliminary classification. The red points are points whose component type did not change, recorded as unchanged class points; the green points are those whose component type changed, recorded as changed class points. The changed class points are concentrated at the component edges, especially at the edges of the unripe and ripe fruit. There are also more class change points at the leafstalks, whose diameters are smaller than the HSL footprint, so the inaccurate echo signals there produce inaccurate spectral reflectance values. Overall, using the reprogramming strategy, the enhanced classification can correct misclassified edge points in the persimmon tree samples, effectively improving the classification accuracy.
The classification results for the lemon components are shown in Figure 10. Figure 10a shows the true classes. Figure 10b shows the reconstruction of the preliminary classification results; some misclassified points lie at the edges of the ripe fruit. The class change points (Figure 10c) show that most of the corrected points are located in the edge area of each class. The four classes are distinguishable by their spatial–spectral features with our method, as shown in Figure 10d. Ripe fruit is clearly distinguished from leaves; however, unripe fruit edges are still slightly misclassified, as in box 1. There are two reasons for the poorer unripe fruit classification: first, the preliminary classification accuracy for unripe fruit is low, and since this result is the input to the enhanced classification, it limits the final accuracy; second, the number of unripe fruit points is insufficient. In addition, the shoots at the ends of branches contain chlorophyll and have reflectance curves similar to those of the leaves, so they were judged as leaf in the preliminary classification; these shoots were partially corrected by the enhanced classification.

5. Conclusions

We proposed a method for separating and classifying the wood, ripe fruit, unripe fruit, and leaves of persimmon trees from HSL measurements. Firstly, the spectral–spatial data of persimmon trees were acquired by HSL, and each component of the samples was classified via a preliminary classification with spectral features. Then, based on our analysis of edge-point misclassification, we proposed an enhanced classification method that uses spatial information to increase the classification accuracy. Finally, we fused the classification results with the 3D coordinates to visually reconstruct the persimmon tree samples. The experimental results show that our method can effectively classify the various components of fruit trees, providing a reference for further applications. Additionally, our efforts will be directed towards integrating high-dimensional spectral data with single-wavelength LiDAR spatial data to address the differences in data structure and density between HSL’s spectral–spatial data and traditional 3D point cloud data, which will enhance the suitability of HSL data for 3D point cloud processing models.
Our future work aims to design a classification method that handles the components of different fruit tree species in orchards and to develop a lightweight HSL system for more complex 3D modeling cases in agriculture and pomology.

Author Contributions

Conceptualization, H.S. and F.W.; methodology, H.S.; software, F.W.; validation, W.L. and F.W.; formal analysis, L.S.; investigation, C.J.; resources, C.X.; data curation, F.W.; writing—original draft preparation, H.S.; writing—review and editing, H.S., Y.C. and P.H.; visualization, F.W.; supervision, H.S.; project administration, H.S.; funding acquisition, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Anhui Provincial Natural Science Foundation, grant number 2008085MF182; the Anhui Provincial DOHURD Science Foundation, grant number 2022-YF077; the University Synergy Innovation Program of Anhui Province, grant number GXXT-2021-028; and the Program of Natural Science Research Project of Anhui Province of China, grant numbers KJ2021JD16 and KJ2021A0622.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sa, I.; Ge, Z.; Dayoub, F.; Upcroft, B.; Perez, T.; McCool, C. DeepFruits: A fruit detection system using deep neural networks. Sensors 2016, 16, 1222.
  2. Junos, M.H.; Mohd Khairuddin, A.S.; Thannirmalai, S.; Dahari, M. Automatic detection of oil palm fruits from UAV images using an improved YOLO model. Vis. Comput. 2022, 38, 2341–2355.
  3. Jia, W.; Liu, M.; Luo, R.; Wang, C.; Pan, N.; Yang, X.; Ge, X. YOLOF-Snake: An Efficient Segmentation Model for Green Object Fruit. Front. Plant Sci. 2022, 13, 765523.
  4. Zhu, Y.; Gu, Q.; Zhao, Y.; Wan, H.; Wang, R.; Zhang, X.; Cheng, Y. Quantitative Extraction and Evaluation of Tomato Fruit Phenotypes Based on Image Recognition. Front. Plant Sci. 2022, 13, 859290.
  5. Lu, Z.; Qi, L.; Zhang, H.; Wan, J.; Zhou, J. Image Segmentation of UAV Fruit Tree Canopy in a Natural Illumination Environment. Agriculture 2022, 12, 1039.
  6. Varga, L.A.; Makowski, J.; Zell, A. Measuring the Ripeness of Fruit with Hyperspectral Imaging and Deep Learning. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–8.
  7. Fu, X.; Wang, M. Detection of Early Bruises on Pears Using Fluorescence Hyperspectral Imaging Technique. Food Anal. Methods 2022, 15, 115–123.
  8. Munera, S.; Rodríguez-Ortega, A.; Aleixos, N.; Cubero, S.; Gómez-Sanchis, J.; Blasco, J. Detection of Invisible Damages in ‘Rojo Brillante’ Persimmon Fruit at Different Stages Using Hyperspectral Imaging and Chemometrics. Foods 2021, 10, 2170.
  9. Steinbrener, J.; Posch, K.; Leitner, R. Hyperspectral fruit and vegetable classification using convolutional neural networks. Comput. Electron. Agric. 2019, 162, 364–372.
  10. Kang, Z.; Geng, J.; Fan, R.; Hu, Y.; Sun, J.; Wu, Y.; Liu, C. Nondestructive Testing Model of Mango Dry Matter Based on Fluorescence Hyperspectral Imaging Technology. Agriculture 2022, 12, 1337.
  11. Raj, R.; Cosgun, A.; Kulić, D. Strawberry Water Content Estimation and Ripeness Classification Using Hyperspectral Sensing. Agronomy 2022, 12, 425.
  12. Perez-Sanz, F.; Navarro, P.J.; Egea-Cortines, M. Plant phenomics: An overview of image acquisition technologies and image data analysis algorithms. GigaScience 2017, 6, gix092.
  13. Abbasi, R.; Bashir, A.K.; Alyamani, H.J.; Amin, F.; Doh, J.; Chen, J. Lidar point cloud compression, processing and learning for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2022, 24, 962–979.
  14. Rosell, J.R.; Sanz, R. A review of methods and applications of the geometric characterization of tree crops in agricultural activities. Comput. Electron. Agric. 2012, 81, 124–141.
  15. Liao, K.; Li, Y.; Zou, B.; Li, D.; Lu, D. Examining the Role of UAV Lidar Data in Improving Tree Volume Calculation Accuracy. Remote Sens. 2022, 14, 4410.
  16. Zhang, C.; Yang, G.; Jiang, Y.; Xu, B.; Li, X.; Zhu, Y.; Yang, H. Apple tree branch information extraction from terrestrial laser scanning and backpack-LiDAR. Remote Sens. 2020, 12, 3592.
  17. Gené-Mola, J.; Gregorio, E.; Guevara, J.; Auat, F.; Sanz-Cortiella, R.; Escolà, A.; Rosell-Polo, J.R. Fruit detection in an apple orchard using a mobile terrestrial laser scanner. Biosyst. Eng. 2019, 187, 171–184.
  18. Omasa, K.; Hosoi, F.; Uenishi, T.M.; Shimizu, Y.; Akiyama, Y. Three-dimensional modeling of an urban park and trees by combined airborne and portable on-ground scanning LIDAR remote sensing. Environ. Model. Assess. 2008, 13, 473–481.
  19. Kim, S.; McGaughey, R.J.; Andersen, H.E.; Schreuder, G. Tree species differentiation using intensity data derived from leaf-on and leaf-off airborne laser scanner data. Remote Sens. Environ. 2009, 113, 1575–1586.
  20. Korpela, I.; Ørka, H.O.; Maltamo, M.; Tokola, T.; Hyyppä, J. Tree species classification using airborne LiDAR: Effects of stand and tree parameters, downsizing of training set, intensity normalization, and sensor type. Silva Fenn. 2010, 44, 319–339.
  21. Danson, F.M.; Sasse, F.; Schofield, L.A. Spectral and spatial information from a novel dual-wavelength full-waveform terrestrial laser scanner for forest ecology. Interface Focus 2018, 8, 20170049.
  22. Sankey, T.; Donager, J.; McVay, J.; Sankey, J.B. UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sens. Environ. 2017, 195, 30–43.
  23. Chen, Y. Environment Awareness with Hyperspectral LiDAR Technologies. Ph.D. Thesis, Aalto University, Helsinki, Finland, 2020.
  24. Nevalainen, O.; Hakala, T.; Suomalainen, J.; Kaasalainen, S. Nitrogen concentration estimation with hyperspectral LiDAR. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 2, 205–210.
  25. Bi, K.; Xiao, S.; Gao, S.; Zhang, C.; Huang, N.; Niu, Z. Estimating vertical chlorophyll concentrations in maize in different health states using hyperspectral LiDAR. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8125–8133.
  26. Hakala, T.; Suomalainen, J.; Kaasalainen, S.; Chen, Y. Full waveform hyperspectral LiDAR for terrestrial laser scanning. Opt. Express 2012, 20, 7119–7127.
  27. Vauhkonen, J.; Hakala, T.; Suomalainen, J.; Kaasalainen, S.; Nevalainen, O.; Vastaranta, M.; Hyyppä, J. Classification of spruce and pine trees using active hyperspectral LiDAR. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1138–1141.
  28. Shao, H.; Cao, Z.; Li, W.; Chen, Y.; Jiang, C.; Hyyppä, J.; Sun, L. Feasibility Study of Wood-Leaf Separation Based on Hyperspectral LiDAR Technology in Indoor Circumstances. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 15, 729–738.
  29. Wei, X.; Liu, F.; Qiu, Z.; Shao, Y.; He, Y. Ripeness classification of astringent persimmon using hyperspectral imaging technique. Food Bioprocess Technol. 2014, 7, 1371–1380.
  30. Clevers, J.G.; De Jong, S.M.; Epema, G.F.; Van Der Meer, F.; Bakker, W.H.; Skidmore, A.K.; Addink, E.A. MERIS and the red-edge position. Int. J. Appl. Earth Obs. Geoinf. 2001, 3, 313–320.
  31. Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52.
  32. Barnes, E.M.; Clarke, T.R.; Richards, S.E.; Colaizzi, P.D.; Haberland, J.; Kostrzewski, M.; Moran, M.S. Coincident detection of crop water stress, nitrogen status and canopy density using ground based multispectral data. In Proceedings of the Fifth International Conference on Precision Agriculture, Bloomington, MN, USA, 16–19 July 2000; Volume 1619, p. 6.
  33. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for nondestructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–282.
  34. Gitelson, A.; Merzlyak, M.N. Spectral reflectance changes associated with autumn senescence of Aesculus hippocastanum L. and Acer platanoides L. leaves. Spectral features and relation to chlorophyll estimation. J. Plant Physiol. 1994, 143, 286–292.
  35. Chen, B.; Shi, S.; Gong, W.; Sun, J.; Chen, B.; Du, L.; Zhao, X. True-color three-dimensional imaging and target classification based on hyperspectral LiDAR. Remote Sens. 2019, 11, 1541.
  36. Pham, Q.T.; Liou, N.S. The development of on-line surface defect detection system for jujubes based on hyperspectral images. Comput. Electron. Agric. 2022, 194, 106743.
  37. Shen, X.; Cao, L. Tree-species classification in subtropical forests using airborne hyperspectral and LiDAR data. Remote Sens. 2017, 9, 1180.
  38. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140.
  39. Colgan, M.S.; Baldeck, C.A.; Féret, J.B.; Asner, G.P. Mapping savanna tree species at ecosystem scales using support vector machine classification and BRDF correction on airborne hyperspectral and LiDAR data. Remote Sens. 2012, 4, 3462–3480.
  40. Wang, J.; Liao, X.; Zheng, P.; Xue, S.; Peng, R. Classification of Chinese herbal medicine by laser-induced breakdown spectroscopy with principal component analysis and artificial neural network. Anal. Lett. 2018, 51, 575–586.
  41. Chen, B.; Shi, S.; Gong, W.; Zhang, Q.; Yang, J.; Du, L.; Song, S. Multispectral LiDAR point cloud classification: A two-step approach. Remote Sens. 2017, 9, 373.
  42. Song, S.; Wang, B.; Gong, W.; Chen, Z.; Lin, X.; Sun, J.; Shi, S. A new waveform decomposition method for multispectral LiDAR. ISPRS J. Photogramm. Remote Sens. 2019, 149, 40–49.
Figure 1. Schematic of the HSL system: (a) installation and (b) system schematic. The red arrows represent optical signals; the black arrows represent electrical signals.
Figure 2. Fruit tree samples. ①, ②, ③, and ④ correspond to the components of the wood, ripe fruit, unripe fruit, and leaves, respectively.
Figure 3. HSL scanning strategy and persimmon tree scan points. (a) The zigzag scanning pattern of HSL; (b) preprocessed HSL point cloud of the persimmon tree.
Figure 4. Structure diagram of tree components’ classification and 3D reconstruction.
Figure 5. The reflectance of persimmon tree components.
Figure 6. Enhanced reprogramming algorithm based on spatial distance.
Figure 7. Three-dimensional reconstruction of the preliminary classification of the persimmon sample. (a) Support vector machine classifier; (b) backpropagation neural network classifier; (c) random forest classifier; (d) real class labels of the persimmon samples.
Figure 8. Reflectance of the edge and nonedge of the persimmon components.
Figure 9. Reconstruction of classification results and changes in point cloud class. (a) Reconstruction based on the proposed method and (b) class changes in the reprogramming strategy. The red boxes are the areas misclassified in the preliminary classification.
Figure 10. Reconstructed diagram of different algorithms for the classification of lemons. (a) Reconstruction of the real classification; (b) reconstruction of the preliminary classification; (c) class changes; and (d) reconstruction based on the proposed method results.
Table 1. Selected classification feature parameters.

Feature Parameter | Description
R700 | Reflectance in the 700 nm band
R730 | Reflectance in the 730 nm band
R780 | Reflectance in the 780 nm band
R850 | Reflectance in the 850 nm band
R900 | Reflectance in the 900 nm band
AVG R760–R930 | Average reflectance in the wavelength range from 760 nm to 930 nm
CI red edge | (R780/R710) - 1
NDVI | (R800 - R670)/(R800 + R670)
NDRE | (R790 - R720)/(R790 + R720)
Table 2. Comparison of the three classifiers’ accuracy.

Method | Leaf (%) | Ripe Fruit (%) | Unripe Fruit (%) | Wood (%) | Overall (%)
SVM | 87.3 | 85.8 | 79.7 | 81.2 | 84.6
BPNN | 96.7 | 80.0 | 86.2 | 67.8 | 86.3
RF | 97.1 | 85.8 | 81.7 | 76.9 | 88.6
Table 3. Enhanced classification accuracy with different N values.

N | 9 | 10 | 11 | 12 | 13 | 14 | 15
Overall accuracy (%) | 95.4 | 95.8 | 96.4 | 96.6 | 96.5 | 96.3 | 96.2
Table 4. Comparison of classification accuracy of the persimmon sample.

Method | Leaf (%) | Ripe Fruit (%) | Unripe Fruit (%) | Wood (%) | Overall (%)
Preliminary Classification | 97.1 | 85.8 | 81.7 | 76.9 | 88.6
Enhanced Classification | 99.4 | 98.2 | 94.6 | 89.1 | 96.6
Table 5. Comparison of classification accuracy of the lemon sample.

Method | Leaf (%) | Ripe Fruit (%) | Unripe Fruit (%) | Wood (%) | Overall (%)
Preliminary Classification (SVM) | 87.2 | 64.5 | 56.1 | 69.9 | 80.1
Preliminary Classification (BPNN) | 86.5 | 78.5 | 88.4 | 78.8 | 84.9
Preliminary Classification (RF) | 89.3 | 83.3 | 82.3 | 75.5 | 88.3
Enhanced Classification | 94.7 | 92.3 | 86.0 | 92.9 | 93.4