Article

Individualization of Pinus radiata Canopy from 3D UAV Dense Point Clouds Using Color Vegetation Indices

by Antonio M. Cabrera-Ariza 1,2,*,†, Miguel A. Lara-Gómez 3,†, Rómulo E. Santelices-Moya 2, Jose-Emilio Meroño de Larriva 4 and Francisco-Javier Mesas-Carrascosa 4

1 Centro de Investigación y Estudios Avanzados del Maule, Universidad Católica del Maule, Avenida San Miguel 3605, Talca 3460000, Chile
2 Centro de Desarrollo del Secano Interior, Facultad de Ciencias Agrarias y Forestales, Universidad Católica del Maule, Avenida San Miguel 3605, Talca 3460000, Chile
3 Centro de Investigaciones Aplicadas al Desarrollo Agroforestal S.L., 14001 Córdoba, Spain
4 Department of Graphic Engineering and Geomatics, Campus de Rabanales, University of Cordoba, 14071 Córdoba, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2022, 22(4), 1331; https://doi.org/10.3390/s22041331
Submission received: 24 December 2021 / Revised: 30 January 2022 / Accepted: 7 February 2022 / Published: 9 February 2022
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)

Abstract: The location of trees and the individualization of their canopies are important parameters for estimating diameter, height, and biomass, among other variables. The very high spatial resolution of UAV imagery supports these processes. A dense 3D point cloud is generated from RGB UAV images and used to obtain a digital elevation model (DEM). From this DEM, a canopy height model (CHM) is derived for individual tree identification. Although the results are satisfactory, detection quality is reduced if the working area has a high density of vegetation. The objective of this study was to evaluate the use of color vegetation indices (CVI) in canopy individualization processes for Pinus radiata. UAV flights were carried out, and a 3D dense point cloud and an orthomosaic were obtained. A CVI was then applied to the 3D point cloud to differentiate between vegetation and nonvegetation classes and to obtain a DEM and a CHM. Subsequently, an automatic crown identification procedure was applied to the CHM. The results were evaluated by contrasting them with the results of manual individual tree identification on the UAV orthomosaic and with those obtained by applying a progressive triangulated irregular network to the 3D point cloud. The results indicate that the color information of 3D point clouds is an alternative to support the individualization of trees under conditions of high-density vegetation.


1. Introduction

Traditional forest inventory systems rely primarily on field data and statistical estimators based on sample design. These methods can provide estimates of inventory variables, although at a significant economic cost [1]. In addition, field-scale data collection is time-consuming and offers uncertain results due to the variability of tree canopies in forests or plantations and the difficulty of fitting them to geometric patterns, such as cones or ovoids, so that they can be mapped in geographic information systems [2]. Moreover, data collected from field measurements are often affected by sampling and observation errors [3].
In recent years, remote sensing has become an increasingly reliable geomatic discipline for determining parameters of interest in forests, both at the stand and individual tree levels [4]. The images used can be registered by sensors on board three types of platforms: satellites, manned aircraft, and unmanned aerial platforms. Firstly, earth observation (EO) programs have been used in natural resource management to obtain images of medium [5] or high spatial resolution [6], offering data with different spatial, spectral, radiometric, and temporal resolutions based on different technologies [7]. Furthermore, their global coverage reduces sampling intensity, and thus economic and time costs, and provides data on inaccessible or difficult-to-access areas. However, satellite platforms have some drawbacks: passive sensors depend on meteorological conditions, and there are limitations in acquiring the traditional set of forest parameters obtained by classical methods, such as canopy diameter or basal area. Nevertheless, these images have been widely used in forestry activities [5,6,8]. Manned aerial platforms allow forest inventories to be carried out over much larger areas than is achievable with traditional field methods [9]. This includes the use of passive sensors, such as RGB [10], multispectral [11], hyperspectral [12], and thermographic [13] sensors, as well as active sensors, such as light detection and ranging (LiDAR) [14], which has become a forest inventory tool in many countries around the world [15,16,17,18]. However, the high economic cost of manned aerial platforms makes continuous monitoring of an area of interest difficult [19]. Unmanned aerial vehicles (UAV) are increasingly being used in forestry [20,21,22,23,24,25]. These platforms allow data to be acquired with very high spatial and temporal resolution, which can be used for mapping forest areas [25], identifying species or the degree of stress and/or disease [23,26], and identifying individual trees by means of RGB [27], hyperspectral [28], multispectral [29], or LiDAR [30,31] sensors. Therefore, UAVs are a good alternative for registering RGB, multispectral, hyperspectral, and thermographic images at the right moment and in a repeated manner [32,33].
Forest inventory remains a challenge, with the detection and delineation of individual tree crowns (ITCs) being a prerequisite for estimating parameters such as diameter, height, and biomass, among other variables [34,35]. Different ITC methods have been developed using passive sensors [36], active sensors [37], and multiple data sources [38]. Tree location algorithms include template matching, image binarization, and local maximum filtering, among others [39]. Tree crown delineation algorithms, in turn, can be categorized into valley following, region growing, and watershed segmentation [39]. In this context, data for ITC can be obtained from either passive sensors [40] or LiDAR [41]. Passive sensors and photogrammetric techniques allow forest inventory metrics to be determined because of their ability to provide orthomosaics and 3D point clouds, which are produced from stereoscopic images based on structure from motion (SfM) [42]. However, unlike LiDAR point clouds, in dense forests they can only produce an accurate digital surface model (DSM) because of their inability to penetrate the foliage and reach the ground [43]. Therefore, an external digital elevation model (DEM) is needed to produce a canopy height model (CHM).
The information derived from 3D dense point clouds, whether from active or passive sensors, starts from the correct individualization of trees, and the first step is to classify ground points. The classification quality of 3D dense point clouds generated from passive sensors on board UAVs in dense forest areas is poor, offering unsatisfactory results and significantly affecting subsequent processes [14]. Therefore, accurate DEM generation is a prerequisite for accurately characterizing forest information using photogrammetric 3D dense point clouds. The results obtained can be comparable even to those acquired with LiDAR data [10,44].
A common strategy for tree individualization is to convert 3D dense point clouds, mainly derived from LiDAR flights, into a CHM or another tree height model and then find local maximum height values corresponding to treetops [45,46,47,48]. In this case, depending on the sensor type, the difficulty lies in the need to classify the points belonging to the ground, which allows the point cloud to be processed correctly for the individualization of trees [49,50]. Once the point clouds have been processed and filtered, various algorithms exist for the detection and segmentation of trees, such as local maximum filtering [30], template matching [51], watershed segmentation [3], region growing [52], and the crown delineation based on optimized object recognition, treetop identification, and hill-climbing (COTH) algorithm [53], among others. Methods for the individualization of canopies using passive sensors and photogrammetric techniques fall into two groups: those based on shape features derived from point clouds [11] and those using orthomosaics. In this context, Sperlich et al. [54] generated point clouds from UAV aerial imagery and achieved an individualization precision of 87.68% using a watershed algorithm in a dense coniferous forest. Kattenborn et al. [55] updated the algorithm of Sperlich et al. [54], geometrically classifying UAV-derived point clouds and identifying densely scattered palm trees in a 9.4 ha study area with abundant undergrowth and other trees with an overall accuracy of 86.1%. All the methodologies outlined above are based on metric parameters, such as slope, minimum distance, or height. Using a different approach, Mesas-Carrascosa et al. [2] applied color vegetation indices to 3D dense point clouds to determine plant height, in that case of vines, automatically detecting and classifying points belonging to the vegetation class and then determining vine height with reference to the heights of points classified as ground.
The objective of this study was to evaluate the use of a color vegetation index in Pinus radiata canopy individualization processes using CHMs obtained from high-density 3D point clouds generated by RGB sensors on-board UAVs.

2. Materials and Methods

2.1. Study Area

The present research was performed on a 1998 Pinus radiata D. Don plantation (35°28′20.32″ S, 71°48′55.41″ W, WGS84) covering an area of 23.7 ha, located in the Querquel area (Talca, Chile) (Figure 1) at an elevation of 93 m above sea level. The mean annual temperature is 14.2 °C, and the mean annual rainfall is 845 mm. The plantation is located on soils of the Pocillas Association series, characterized by a moderately fine texture; the soils are deep (more than 100 cm), gently rolling, slightly stony without erosion, moderately acidic (pH between 5.6 and 6), nonsaline, and nonalkaline [56].
Figure 2 shows the workflow followed in the present study. Once the UAV flights had been performed, the images were processed to obtain a 3D dense point cloud and an orthomosaic. A color vegetation index (CVI) was applied to the point cloud to separate points belonging to the vegetation class from those of the nonvegetation class. The latter were used to create a DEM, hereafter referred to as the CVI-based DEM (DEM-CVI). In parallel, ground points in the original 3D dense point cloud were classified by a progressive triangulated irregular network (TIN) algorithm, and a second DEM was generated (DEM-TIN). From each DEM, a CHM was derived, and an automatic canopy identification procedure was applied. Finally, the results were evaluated by contrasting them with the canopies manually identified in the orthomosaic generated from the UAV flight.

2.2. UAV Flights

The images were acquired on 31 March 2020 using a DJI Phantom 4 Advanced platform (SZ DJI Technology Co., Shenzhen, China). The on-board RGB sensor (R: red; G: green; B: blue) had a 1/2.3″ CMOS sensor and a lens with a 94° field of view and a focal length of 20 mm, registering images of 4000 × 3000 pixels. The flight height was 100 m above ground level. A crossed UAV flight was planned, with flight lines in the N–S and E–W directions. The images were registered in continuous mode at 2 s intervals and a speed of 4.5 m/s, resulting in side and forward overlaps of 95% and 70%, respectively. These overlap percentages allowed an adequate 3D reconstruction of the study area [57].
Five ground control points (GCPs) were placed, one in each corner and one in the center of the study area. Aerotriangulation was then calculated, allowing accurate and precise determination of the absolute position and orientation of each image in the photogrammetric block. Subsequently, the 3D dense point cloud was generated using structure from motion (SfM) techniques. This methodology was validated in previous research projects [58]. In addition, an orthomosaic was generated. Pix4Dmapper software (Pix4D S.A., Prilly, Switzerland) was used for the photogrammetric processing.

2.3. Ground Points Classification and Digital Elevation Model

Two different strategies were applied for point classification, based on (a) a color vegetation index and (b) point elevation. During the generation of the 3D dense point cloud, each point was assigned an RGB color value obtained by projecting it onto the corresponding stereoscopic model. Based on these RGB values, a classification was performed to discriminate between points belonging to the vegetation class and the nonvegetation class; the latter comprised points belonging to the ground as well as shadows and other artificial elements. Based on our previous research experience [2], the normalized green-red difference index (NGRDI) [59] was calculated for each point from its RGB values using Equation (1):
NGRDI = (g − r) / (g + r)    (1)
Since the information of each point refers to the RGB color space, a color space standardization was performed before calculating the index [60]. The normalized color components r, g, and b, each in the range [0, 1], were calculated using Equations (2)–(4):
r = R′ / (R′ + G′ + B′)    (2)
g = G′ / (R′ + G′ + B′)    (3)
b = B′ / (R′ + G′ + B′)    (4)
where R′, G′, and B′ are the RGB values scaled to the range [0, 1] using Equations (5)–(7):
R′ = R / R_max    (5)
G′ = G / G_max    (6)
B′ = B / B_max    (7)
where R_max, G_max, and B_max are all equal to 255 for images with a radiometric resolution of 24 bits (8 bits per channel).
Using a script developed in MATLAB, the original RGB 3D point cloud was converted to grayscale, with the NGRDI value stored as the attribute of each point. The distribution of the NGRDI values followed a bimodal distribution representing the vegetation and nonvegetation classes. The next step was to analytically determine the threshold separating the two classes using the Otsu method [61], which analyzes the histogram of NGRDI values to find the best separation between the two normal distributions present in the bimodal distribution. As a result, two 3D point clouds were obtained from the original one, representing the points belonging to the vegetation class and to the nonvegetation class, respectively.
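To make this step concrete, the following minimal Python sketch reproduces the NGRDI computation and Otsu thresholding under the definitions above (the published workflow used a MATLAB script; the array layout and the use of scikit-image's Otsu implementation are assumptions of this illustration):

```python
import numpy as np
from skimage.filters import threshold_otsu  # Otsu's method [61]

def classify_vegetation(rgb_8bit):
    """Split a colored 3D point cloud into vegetation/nonvegetation points.

    rgb_8bit: (N, 3) array with one 8-bit R, G, B triplet per point.
    Returns a boolean mask where True marks vegetation points.
    """
    # Equations (5)-(7): scale each channel to [0, 1]
    rgb = rgb_8bit.astype(float) / 255.0
    # Equations (2)-(4): chromatic coordinates (b is not needed for NGRDI)
    total = rgb.sum(axis=1) + 1e-9          # guard against all-black points
    r = rgb[:, 0] / total
    g = rgb[:, 1] / total
    # Equation (1): normalized green-red difference index per point
    ngrdi = (g - r) / (g + r + 1e-9)
    # Analytic threshold between the two modes of the bimodal histogram
    threshold = threshold_otsu(ngrdi)
    return ngrdi > threshold                # vegetation points score higher
```

Points for which the mask is False (the nonvegetation class) are the ones that feed the DEM-CVI described below.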
On the other hand, based on point elevation, ground points were classified using a progressive triangulated irregular network (TIN) densification algorithm in LAStools [62]. Although different filtering algorithms offer good results [63], the progressive TIN algorithm is suitable for working with 3D UAV point clouds [64], as it is robust against their random noise [65]. Following Mohan et al. (2017) [66], the parameter settings were as follows: step 10 m, bulge 0.5, spike 1 m, and offset 0.05 m.
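In LAStools, this filter is exposed through the lasground tool; a sketch of an equivalent invocation with the parameter values above might look as follows (file names are hypothetical, and the exact command line should be checked against the lasground documentation):

```python
import subprocess

# Progressive TIN densification with the settings of Mohan et al. [66]
subprocess.run(
    [
        "lasground",
        "-i", "uav_dense_cloud.laz",    # hypothetical input file
        "-o", "ground_classified.laz",  # hypothetical output file
        "-step", "10",                  # step 10 m
        "-bulge", "0.5",                # bulge 0.5
        "-spike", "1.0",                # spike 1 m
        "-offset", "0.05",              # offset 0.05 m
    ],
    check=True,                         # raise if lasground fails
)
```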
As a result, two DEMs with a spatial resolution of 1 m were generated from the two classifications: the points classified as nonvegetation by their RGB values were used to obtain DEM-CVI, while the points classified as ground were used to generate DEM-TIN.
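The gridding rule used for the DEMs is not detailed in the paper; the sketch below assumes the simplest convention, keeping the lowest classified point per 1 m cell for a DEM (the highest-point-per-cell rule stated in Section 2.4 for the CHM is obtained with keep_highest=True):

```python
import numpy as np

def grid_points(xyz, cell=1.0, keep_highest=False):
    """Grid classified 3D points into a raster at `cell` resolution.

    keep_highest=False keeps the lowest elevation per cell (DEM from
    ground/nonvegetation points); True keeps the highest (surface grid).
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    col = ((x - x.min()) / cell).astype(int)
    row = ((y.max() - y) / cell).astype(int)
    grid = np.full((row.max() + 1, col.max() + 1), np.nan)
    better = np.fmax if keep_highest else np.fmin   # NaN-aware min/max
    for r, c, elev in zip(row, col, z):
        grid[r, c] = better(grid[r, c], elev)
    return grid

# dem = grid_points(ground_pts)                        # DEM-CVI or DEM-TIN
# chm = grid_points(all_pts, keep_highest=True) - dem  # normalized heights
```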

2.4. Canopy Height Model and Individualization of Canopies

From the two previously generated DEMs, two CHMs (CHM-CVI and CHM-TIN) were determined. Each CHM was created by assigning the highest normalized elevation within 1 m of the center of each grid cell, and the resulting grids were processed using the rLiDAR package. First, each CHM was smoothed with a 3 × 3 pixel Gaussian filter before searching for apices [46,66]. The height below which the processing stops searching for new trees was set to 7 m, after verifying that greater thresholds gave worse results. A maximum canopy radius of 2.5 m was also established, according to what was observed in the field. The exclusion parameter, which takes values between 0 and 1, represents the percentage of excluded pixels; a value of 0.5 excludes all pixels of a single tree with a height of less than 50% of the maximum height of that tree. After several tests, this value was set to 0.66. Finally, the projected ground area of the individual tree canopies detected from the CHM was delineated, and the coordinates of the centroids of the individualized canopy areas were calculated. For the individualization of canopies, FUSION [67] and the rLiDAR package [68] were used.
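As an illustration of this apex search (the study used FUSION and rLiDAR; the Gaussian sigma and the fixed-window local maximum test below are assumptions of this sketch):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_treetops(chm, min_height=7.0, radius_px=2):
    """Find treetop candidates in a 1 m resolution CHM (2D height array).

    min_height: search cutoff (7 m in this study).
    radius_px: local-maximum window radius (~2.5 m crown radius at 1 m/px).
    """
    # Smooth so single noisy pixels are not taken as apices; sigma = 0.5
    # keeps the effective kernel support close to a 3 x 3 window
    smoothed = gaussian_filter(chm, sigma=0.5)
    # A cell is an apex candidate if it equals the maximum of its window
    window = 2 * radius_px + 1
    is_local_max = maximum_filter(smoothed, size=window) == smoothed
    treetops = is_local_max & (smoothed >= min_height)
    return np.argwhere(treetops)   # (row, col) indices of detected apices
```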
To validate the results, 30 randomly sampled plots were established in the study area. The plots were circular, with a radius of 12.7 m, each covering an area of about 507 m². A visual inspection of the orthomosaic was performed on these plots to identify each tree as ground truth for quality control of the automatic identification results obtained with both CHMs. In particular, accuracy was evaluated in terms of true positives (TP, correct detections), false negatives (FN, omission errors), and false positives (FP, commission errors) as well as sensitivity (S), precision (P), and F-score (F), as explained in Mohan et al. (2017) [66], using Equations (8)–(10):
S = TP / (TP + FN)    (8)
P = TP / (TP + FP)    (9)
F = 2 × (S × P) / (S + P)    (10)
In this case, sensitivity measures the proportion of reference trees that were correctly detected (it is inversely related to the omission error), precision measures the proportion of detections that correspond to real trees (inversely related to the commission error), and the F-score is the harmonic mean of sensitivity and precision.
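These quality measures are straightforward to compute; the short Python function below applies Equations (8)–(10) and, for example, closely matches the CVI figures reported in Section 3.2:

```python
def detection_scores(tp, fp, fn):
    """Sensitivity, precision, and F-score per Equations (8)-(10)."""
    s = tp / (tp + fn)       # share of reference trees detected
    p = tp / (tp + fp)       # share of detections that are real trees
    f = 2 * s * p / (s + p)  # harmonic mean of sensitivity and precision
    return s, p, f

# Pooled CVI counts from Section 3.2: 481 TP, 8 FP, 171 FN
print(detection_scores(481, 8, 171))  # -> approximately (0.74, 0.98, 0.84)
```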

3. Results

3.1. Digital Elevation and Canopy Height Models

Figure 3 shows the orthomosaic of the study area as well as the DEMs and CHMs generated by the CVI and TIN methods; Table 1 shows statistics for each digital model. Photogrammetric processing generated about 99 million 3D points, that is, about 78.75 points per m³, and the orthomosaic (Figure 3a) had a spatial resolution of 2.8 cm per pixel. As shown, there were areas with dense vegetation, where the ground was not visible, and areas with less dense vegetation.
Regarding the DEMs, DEM-TIN (Figure 3b.1) showed islands distributed throughout the study area where the elevation rose abruptly; these areas coincided with the presence of dense vegetation. In DEM-CVI (Figure 3b.2), these areas did not appear. The two DEMs had different percentiles for the elevation variable, with DEM-TIN showing higher values except at the 50th percentile, and the differences increased with increasing percentile. These differences can be explained by points belonging to the vegetation class being classified as ground in DEM-TIN, thus increasing the terrain elevation represented in the DEM.
The two CHMs also differed according to the DEM used. The areas showing abruptly high elevation values with respect to their surroundings in DEM-TIN presented low values in CHM-TIN (Figure 3c.1). Furthermore, the normalized heights were higher for CHM-CVI (Figure 3c.2) than for CHM-TIN. As an example, Figure 4 shows a profile of the points classified as ground using the progressive TIN (Figure 4a) and the CVI (Figure 4b). Using the progressive TIN (Figure 4a), a group of points belonging to the vegetation class was classified as ground points, thereby altering the derived DEM and CHM. These points did not appear with the CVI (Figure 4b), and the DEM and CHM were therefore generated properly.

3.2. Individualization of Canopies

Figure 5 shows the location of the sample plots in the study area and details of the visual individualization on the orthomosaic (Figure 5a) as well as the results, including false positives and false negatives, of the automatic individualization based on CVI (Figure 5b) and TIN (Figure 5c). Table 2 shows the results of the quality assessment for each plot.
A total of 660 individual trees were manually identified in the 30 plots, an average of 22 trees per plot. Regarding TPs, 481 trees (72.9%) were correctly detected using CVI compared with 392 (59.4%) using TIN. The number of FPs was 8 (1.2%) and 23 (3.5%) for CVI and TIN, respectively. Moreover, the number of FNs was lower for the CVI-based classification (171, 25.9%) than for TIN (245, 37.1%). Thus, the precision obtained for the CVI classification reached 0.98, compared with 0.94 for TIN. Similarly, the mean sensitivity and F-score were 0.74 and 0.84, respectively, using CVI, versus 0.62 and 0.74 using the TIN classification.
Based on these results, the classification of the 3D point cloud using CVI achieved better sensitivity, precision, and F-score than TIN. This indicates that filtering 3D UAV point clouds using CVI in a scenario with high vegetation density provides more accurate individual tree identification.

4. Discussion

In recent years, several studies have highlighted the potential of remote sensing in forestry. In particular, sensors on-board UAV platforms are an adequate tool for determining the number of trees, height, or biomass [27,57,69,70]. In this paper, we present the utility of a CVI for classifying 3D point clouds into vegetation and nonvegetation classes in forest areas with high-density vegetation as a preliminary step to generating a DEM and a CHM. CVIs have been successfully employed mainly to identify vegetation in images [71], with few prior cases of application to 3D point clouds [2] and none in forest scenes.
Previous studies have reported accuracies higher than 80.0% for individual tree detection [72,73,74]; however, those studies involved low-density forests or plantations on flat ground. Our canopy individualization results were similar to those reported by Guerra-Hernández et al. [10] and much better than those reported by other authors [75], who achieved an accuracy of 67%. On the other hand, recent studies have demonstrated that deep learning methods are an alternative for detecting individual trees [76,77]. To our knowledge, CVIs such as NGRDI have not previously been used to automatically classify 3D point clouds in forest areas for individual tree detection. The use of CVIs to classify 3D point clouds for producing DEMs and CHMs allows a fully automatic method without the need for any manually selected parameter; the results depend only on the radiometric information of each individual point, without any geometric requirement. However, the conditions under which the UAV flight is performed can affect the quality of the results. The time of day of the UAV flight is important, and it should preferably take place around solar noon; images must be captured under stable weather, light, and shadow conditions. The radiometric quality of 3D point colors, such as color and image contrast [78], can be reduced on cloudy days because of the lack of direct sunlight [79]. On the other hand, direct lighting increases contrast but also increases the amount of shadow, as does flying on sunny days in the morning or afternoon with low solar angles, which affects point cloud quality [79].
Modern forestry primarily requires digital forest information, and UAV-based remote sensing offers a promising future in this regard [80]. In addition, the ease of data collection, images with very high spatial and temporal resolution, and low operating costs support data collection with UAVs. Future projects should develop tree detection algorithms based on the characteristics of 3D point clouds to include species identification and the estimation of other tree-level characteristics, such as DBH and canopy area, which are important and necessary factors for estimating biomass.

5. Conclusions

In this work, a new methodology is presented for the individualization of Pinus radiata based on the color information of 3D point clouds generated from the images of RGB sensors on-board UAVs. The results were compared with those obtained by individualizing trees using a progressive triangulated irregular network and with visual tree identification on an orthomosaic. The results indicate that the color information of 3D point clouds is an alternative for the individualization of trees under the conditions of this investigation.
The proposed methodology reveals the potential of UAV photogrammetric point clouds for the individualization of trees and for forest monitoring. Future research should focus on estimating individual tree attributes, such as canopy height, size, and diameter, and on developing predictive models for estimating aboveground biomass and stem volume from UAV images.

Author Contributions

Conceptualization, F.-J.M.-C. and M.A.L.-G.; methodology, F.-J.M.-C. and M.A.L.-G.; software, A.M.C.-A. and F.-J.M.-C.; validation, A.M.C.-A., M.A.L.-G., J.-E.M.d.L. and F.-J.M.-C.; formal analysis, A.M.C.-A.; investigation, A.M.C.-A., M.A.L.-G. and F.-J.M.-C.; resources, R.E.S.-M.; data curation, A.M.C.-A., M.A.L.-G. and F.-J.M.-C.; writing—original draft preparation, A.M.C.-A.; writing—review and editing, R.E.S.-M., M.A.L.-G. and F.-J.M.-C.; visualization, R.E.S.-M.; supervision, F.-J.M.-C.; project administration, A.M.C.-A. and F.-J.M.-C.; funding acquisition, A.M.C.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brack, C.; Schaefer, M.; Jovanovic, T.; Crawford, D. Comparing terrestrial laser scanners’ ability to measure tree height and diameter in a managed forest environment. Aust. For. 2020, 83, 161–171. [Google Scholar] [CrossRef]
  2. Mesas-Carrascosa, F.-J.; De Castro, A.I.; Torres-Sánchez, J.; Triviño-Tarradas, P.; Jiménez-Brenes, F.M.; García-Ferrer, A.; López-Granados, F. Classification of 3D Point Clouds Using Color Vegetation Indices for Precision Viticulture and Digitizing Applications. Remote Sens. 2020, 12, 317. [Google Scholar] [CrossRef] [Green Version]
  3. Gu, J.; Grybas, H.; Congalton, R.G. A Comparison of Forest Tree Crown Delineation from Unmanned Aerial Imagery Using Canopy Height Models vs. Spectral Lightness. Forests 2020, 11, 605. [Google Scholar] [CrossRef]
  4. Özdemir, S.; Akbulut, Z.; Karsli, F.; Acar, H. Automatic Extraction of Trees by Using Multiple Return Properties of the Lidar Point Cloud. Int. J. Eng. Geosci. 2021, 6, 20–26. [Google Scholar] [CrossRef]
  5. Lucas, R.; Van De Kerchove, R.; Otero, V.; Lagomasino, D.; Fatoyinbo, L.; Omar, H.; Satyanarayana, B.; Dahdouh-Guebas, F. Structural characterisation of mangrove forests achieved through combining multiple sources of remote sensing data. Remote Sens. Environ. 2019, 237, 111543. [Google Scholar] [CrossRef]
  6. Zhu, F.; Shen, W.; Diao, J.; Li, M.; Zheng, G. Integrating cross-sensor high spatial resolution satellite images to detect subtle forest vegetation change in the Purple Mountains, a national scenic spot in Nanjing, China. J. For. Res. 2019, 31, 1743–1758. [Google Scholar] [CrossRef] [Green Version]
  7. Vásconez, N.L.; Sevilla, H.C. Uso De Los Sensores Remotos En Mediciones Forestales. Eur. Sci. J. ESJ 2018, 14, 15. [Google Scholar] [CrossRef] [Green Version]
  8. Kokubu, Y.; Hara, S.; Tani, A. Mapping Seasonal Tree Canopy Cover and Leaf Area Using Worldview-2/3 Satellite Imagery: A Megacity-Scale Case Study in Tokyo Urban Area. Remote Sens. 2020, 12, 1505. [Google Scholar] [CrossRef]
  9. Axelsson, A.; Lindberg, E.; Olsson, H. Exploring Multispectral ALS Data for Tree Species Classification. Remote Sens. 2018, 10, 548. [Google Scholar] [CrossRef] [Green Version]
  10. Guerra-Hernández, J.; Cosenza, D.N.; Rodriguez, L.C.E.; Silva, M.; Tomé, M.; Díaz-Varela, R.A.; Gonzalez-Ferreiro, E. Comparison of ALS- and UAV(SfM)-derived high-density point clouds for individual tree detection in Eucalyptus plantations. Int. J. Remote Sens. 2018, 39, 5211–5235. [Google Scholar] [CrossRef]
  11. Windrim, L.; Carnegie, A.J.; Webster, M.; Bryson, M. Tree Detection and Health Monitoring in Multispectral Aerial Imagery and Photogrammetric Pointclouds Using Machine Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2554–2572. [Google Scholar] [CrossRef]
  12. Medina, N.; Vidal, P.; Cifuentes, R.; Torralba, J.; Keusch, F. Evaluación del estado sanitario de individuos de Araucaria araucana a través de imágenes hiperespectrales. Rev. Teledetección 2018, 52, 41–53. [Google Scholar] [CrossRef]
  13. Woodgate, W.; Van Gorsel, E.; Hughes, D.; Suarez, L.; Jimenez-Berni, J.A.; Held, A. THEMS: An automated thermal and hyperspectral proximal sensing system for canopy reflectance, radiance and temperature. Plant Methods 2020, 16, 105. [Google Scholar] [CrossRef] [PubMed]
  14. Moe, K.T.; Owari, T.; Furuya, N.; Hiroshima, T. Comparing Individual Tree Height Information Derived from Field Surveys, LiDAR and UAV-DAP for High-Value Timber Species in Northern Japan. Forests 2020, 11, 223. [Google Scholar] [CrossRef] [Green Version]
  15. Næsset, E. Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sens. Environ. 2001, 80, 88–99. [Google Scholar] [CrossRef]
  16. Hyyppä, J.; Yu, X.; Hyyppä, H.; Vastaranta, M.; Holopainen, M.; Kukko, A.; Kaartinen, H.; Jaakkola, A.; Vaaja, M.; Koskinen, J.; et al. Advances in Forest Inventory Using Airborne Laser Scanning. Remote Sens. 2012, 4, 1190–1207. [Google Scholar] [CrossRef] [Green Version]
  17. Navarro, J.A.; Tomé, J.L.; Marino, E.; Guillén-Climent, M.L.; Fernández-Landa, A. Assessing the transferability of airborne laser scanning and digital aerial photogrammetry derived growing stock volume models. Int. J. Appl. Earth Obs. Geoinf. 2020, 91, 102135. [Google Scholar] [CrossRef]
  18. Magnussen, S.; Nord-Larsen, T.; Riis-Nielsen, T. Lidar supported estimators of wood volume and aboveground biomass from the Danish national forest inventory (2012–2016). Remote Sens. Environ. 2018, 211, 146–153. [Google Scholar] [CrossRef]
  19. Thomas, O.H.; Smith, C.E.; Wilkinson, B.E. Economics of Mapping Using Small Manned and Unmanned Aerial Vehicles. Photogramm. Eng. Remote Sens. 2017, 83, 581–591. [Google Scholar] [CrossRef]
  20. Musso, R.F.G.; Oddi, F.J.; Goldenberg, M.G.; Garibaldi, L.A. Applying unmanned aerial vehicles (UAVs) to map shrubland structural attributes in northern Patagonia, Argentina. Can. J. For. Res. 2020, 50, 615–623. [Google Scholar] [CrossRef]
  21. Kachamba, D.J.; Ørka, H.O.; Gobakken, T.; Eid, T.; Mwase, W. Biomass Estimation Using 3D Data from Unmanned Aerial Vehicle Imagery in a Tropical Woodland. Remote Sens. 2016, 8, 968. [Google Scholar] [CrossRef] [Green Version]
  22. Paneque-Gálvez, J.; McCall, M.K.; Napoletano, B.M.; Wich, S.A.; Koh, L.P. Small Drones for Community-Based Forest Monitoring: An Assessment of Their Feasibility and Potential in Tropical Areas. Forests 2014, 5, 1481–1507. [Google Scholar] [CrossRef] [Green Version]
  23. Waite, C.E.; Van Der Heijden, G.M.F.; Field, R.; Boyd, D.S. A view from above: Unmanned aerial vehicles (UAVs) provide a new tool for assessing liana infestation in tropical forest canopies. J. Appl. Ecol. 2019, 56, 902–912. [Google Scholar] [CrossRef]
  24. Zahawi, R.A.; Dandois, J.P.; Holl, K.D.; Nadwodny, D.; Reid, J.L.; Ellis, E.C. Using lightweight unmanned aerial vehicles to monitor tropical forest recovery. Biol. Conserv. 2015, 186, 287–295. [Google Scholar] [CrossRef] [Green Version]
  25. Zhang, J.; Hu, J.; Lian, J.; Fan, Z.; Ouyang, X.; Ye, W. Seeing the forest from drones: Testing the potential of lightweight drones as a tool for long-term forest monitoring. Biol. Conserv. 2016, 198, 60–69. [Google Scholar] [CrossRef]
  26. Baena, S.; Moat, J.; Whaley, O.; Boyd, D. Identifying species from the air: UAVs and the very high resolution challenge for plant conservation. PLoS ONE 2017, 12, e0188714. [Google Scholar] [CrossRef] [Green Version]
  27. Lim, Y.S.; La, P.H.; Park, J.S.; Lee, M.H.; Pyeon, M.W.; Kim, J.-I. Calculation of Tree Height and Canopy Crown from Drone Images Using Segmentation. Korean J. Geomat. 2015, 33, 605–614. [Google Scholar] [CrossRef] [Green Version]
  28. Nevalainen, O.; Honkavaara, E.; Tuominen, S.; Viljanen, N.; Hakala, T.; Yu, X.; Hyyppä, J.; Saari, H.; Pölönen, I.; Imai, N.N.; et al. Individual Tree Detection and Classification with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2017, 9, 185. [Google Scholar] [CrossRef] [Green Version]
  29. Xu, Z.; Shen, X.; Cao, L.; Coops, N.C.; Goodbody, T.R.; Zhong, T.; Zhao, W.; Sun, Q.; Ba, S.; Zhang, Z.; et al. Tree species classification using UAS-based digital aerial photogrammetry point clouds and multispectral imageries in subtropical natural forests. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102173. [Google Scholar] [CrossRef]
  30. Kwong, I.H.Y.; Fung, T. Tree height mapping and crown delineation using LiDAR, large format aerial photographs, and unmanned aerial vehicle photogrammetry in subtropical urban forest. Int. J. Remote Sens. 2020, 41, 5228–5256. [Google Scholar] [CrossRef]
  31. Wu, X.; Shen, X.; Cao, L.; Wang, G.; Cao, F. Assessment of Individual Tree Detection and Canopy Cover Estimation using Unmanned Aerial Vehicle based Light Detection and Ranging (UAV-LiDAR) Data in Planted Forests. Remote Sens. 2019, 11, 908. [Google Scholar] [CrossRef] [Green Version]
  32. Mesas-Carrascosa, F.-J.; Torres-Sánchez, J.; Clavero-Rumbao, I.; García-Ferrer, A.; Peña, J.-M.; Borra-Serrano, I.; López-Granados, F. Assessing Optimal Flight Parameters for Generating Accurate Multispectral Orthomosaicks by UAV to Support Site-Specific Crop Management. Remote Sens. 2015, 7, 12793–12814. [Google Scholar] [CrossRef] [Green Version]
  33. Rango, A.; Laliberte, A.; Steele, C.; Herrick, J.E.; Bestelmeyer, B.; Schmugge, T.; Roanhorse, A.; Jenkins, V. Research Article: Using Unmanned Aerial Vehicles for Rangelands: Current Applications and Future Potentials. Environ. Pract. 2006, 8, 159–168. [Google Scholar] [CrossRef] [Green Version]
  34. Yilmaz, V.; Yilmaz, C.S.; Tasci, L.; Gungor, O. Determination of Tree Crown Diameters with Segmentation of a UAS-Based Canopy Height Model. Ipsi Bgd Trans. Internet Res. 2017, 13, 63–67. [Google Scholar]
  35. Liu, G.; Wang, J.; Dong, P.; Chen, Y.; Liu, Z.J.F. Estimating Individual Tree Height and Diameter at Breast Height (DBH) from Terrestrial Laser Scanning (TLS) Data at Plot Level. Forests 2018, 9, 398. [Google Scholar] [CrossRef] [Green Version]
  36. Wagner, F.H.; Ferreira, M.P.; Sanchez, A.; Hirye, M.C.M.; Zortea, M.; Gloor, E.; Phillips, O.L.; de Souza Filho, C.R.; Shimabukuro, Y.E.; Aragão, L.E.O.C. Individual tree crown delineation in a highly diverse tropical forest using very high resolution satellite images. ISPRS J. Photogramm. Remote Sens. 2018, 145, 362–377. [Google Scholar] [CrossRef]
  37. Ferraz, A.; Saatchi, S.; Mallet, C.; Meyer, V. Lidar detection of individual tree size in tropical forests. Remote Sens. Environ. 2016, 183, 318–333. [Google Scholar] [CrossRef]
  38. Zhen, Z.; Quackenbush, L.J.; Zhang, L. Trends in Automatic Individual Tree Crown Detection and Delineation—Evolution of LiDAR Data. Remote Sens. 2016, 8, 333. [Google Scholar] [CrossRef] [Green Version]
  39. Ke, Y.; Quackenbush, L.J. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing. Int. J. Remote Sens. 2011, 32, 4725–4747. [Google Scholar] [CrossRef]
  40. Tuominen, S.; Pitkänen, T.; Balazs, A.; Kangas, A. Improving Finnish Multi-Source National Forest Inventory by 3D aerial imaging. Silva Fenn. 2017, 51, 7743. [Google Scholar] [CrossRef] [Green Version]
  41. Krofcheck, D.J.; Litvak, M.E.; Lippitt, C.D.; Neuenschwander, A. Woody Biomass Estimation in a Southwestern U.S. Juniper Savanna Using LiDAR-Derived Clumped Tree Segmentation and Existing Allometries. Remote Sens. 2016, 8, 453. [Google Scholar] [CrossRef] [Green Version]
  42. Hildreth, E.C.; Ando, H.; Andersen, R.A.; Treues, S. Recovering three-dimensional structure from motion with surface reconstruction. Vis. Res. 1995, 35, 117–137. [Google Scholar] [CrossRef] [Green Version]
  43. Zhang, J.; Lin, X. Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion 2016, 8, 1–31. [Google Scholar] [CrossRef]
  44. Cao, L.; Liu, H.; Fu, X.; Zhang, Z.; Shen, X.; Ruan, H. Comparison of UAV LiDAR and Digital Aerial Photogrammetry Point Clouds for Estimating Forest Structural Attributes in Subtropical Planted Forests. Forests 2019, 10, 145. [Google Scholar] [CrossRef] [Green Version]
  45. Koch, B.; Heyder, U.; Weinacker, H. Detection of Individual Tree Crowns in Airborne Lidar Data. Photogramm. Eng. Remote Sens. 2006, 72, 357–363. [Google Scholar] [CrossRef] [Green Version]
  46. Silva, C.A.; Hudak, A.T.; Vierling, L.A.; Loudermilk, E.L.; O’Brien, J.J.; Hiers, J.K.; Jack, S.B.; Gonzalez-Benecke, C.; Lee, H.; Falkowski, M.J.; et al. Imputation of Individual Longleaf Pine (Pinus palustris Mill.) Tree Attributes from Field and LiDAR Data. Can. J. Remote Sens. 2016, 42, 554–573. [Google Scholar] [CrossRef]
  47. Véga, C.; Durrieu, S. Multi-level filtering segmentation to measure individual tree parameters based on Lidar data: Application to a mountainous forest with heterogeneous stands. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 646–656. [Google Scholar] [CrossRef] [Green Version]
  48. Zhen, Z.; Quackenbush, L.J.; Zhang, L. Impact of Tree-Oriented Growth Order in Marker-Controlled Region Growing for Individual Tree Crown Delineation Using Airborne Laser Scanner (ALS) Data. Remote Sens. 2014, 6, 555–579. [Google Scholar] [CrossRef] [Green Version]
  49. Noordermeer, L.; Bollandsås, O.M.; Ørka, H.O.; Næsset, E.; Gobakken, T. Comparing the accuracies of forest attributes predicted from airborne laser scanning and digital aerial photogrammetry in operational forest inventories. Remote Sens. Environ. 2019, 226, 26–37. [Google Scholar] [CrossRef]
  50. Wallace, L.; Lucieer, A.; Malenovský, Z.; Turner, D.; Vopěnka, P. Assessment of Forest Structure Using Two UAV Techniques: A Comparison of Airborne Laser Scanning and Structure from Motion (SfM) Point Clouds. Forests 2016, 7, 62. [Google Scholar] [CrossRef] [Green Version]
  51. Larsen, M.; Rudemo, M. Optimizing templates for finding trees in aerial photographs. Pattern Recognit. Lett. 1998, 19, 1153–1162. [Google Scholar] [CrossRef]
  52. Ramli, M.F.; Tahar, K.N. Homogeneous tree height derivation from tree crown delineation using Seeded Region Growing (SRG) segmentation. Geo-Spat. Inf. Sci. 2020, 23, 195–208. [Google Scholar] [CrossRef]
  53. Gleason, C.J.; Im, J. A Fusion Approach for Tree Crown Delineation from Lidar Data. Photogramm. Eng. Remote Sens. 2012, 78, 679–692. [Google Scholar] [CrossRef]
  54. Sperlich, M.; Kattenborn, T.; Koch, B.; Kattenborn, G. Potential of Unmanned Aerial Vehicle Based Photogrammetric Point Clouds for Automatic Single Tree Detection. Gem. Tag. 2014, 23, 1–6. [Google Scholar]
  55. Kattenborn, T.; Sperlich, M.; Bataua, K.; Koch, B. Automatic Single Tree Detection in Plantations using UAV-based Photogrammetric Point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-3, 139–144. [Google Scholar] [CrossRef] [Green Version]
  56. Instituto Nacional de Investigación de Recursos Naturales. Suelos: Descripciones. Pub. IREN N°2; Proyecto Aerofotogramétrico Chile/OEA/BID: Santiago de Chile, Chile, 1964. [Google Scholar]
  57. Torres-Sánchez, J.; Lopez-Granados, F.; Serrano, N.; Arquero, O.; Peña, J.M. High-Throughput 3-D Monitoring of Agricultural-Tree Plantations with Unmanned Aerial Vehicle (UAV) Technology. PLoS ONE 2015, 10, e0130479. [Google Scholar] [CrossRef] [Green Version]
  58. Mesas-Carrascosa, F.J.; Rumbao, I.C.; Berrocal, J.A.B.; Porras, A.G.-F. Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms. Sensors 2014, 14, 22394–22407. [Google Scholar] [CrossRef]
  59. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Plant species identification, size, and enumeration using machine vision techniques on near-binary images. SPIE 1993, 1836, 208–219. [Google Scholar] [CrossRef]
  60. Gée, C.; Bossu, J.; Jones, G.; Truchetet, F. Crop/weed discrimination in perspective agronomic images. Comput. Electron. Agric. 2008, 60, 49–59. [Google Scholar] [CrossRef]
  61. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  62. Isenburg, M. Efficient LiDAR Processing Software (Version 170322). Rapidlasso. 2021. Available online: https://rapidlasso.com/lastools/ (accessed on 15 June 2020).
  63. Klápště, P.; Fogl, M.; Barták, V.; Gdulová, K.; Urban, R.; Moudrý, V. Sensitivity analysis of parameters and contrasting performance of ground filtering algorithms with UAV photogrammetry-based and LiDAR point clouds. Int. J. Digit. Earth 2020, 13, 1672–1694. [Google Scholar] [CrossRef]
  64. Zeybek, M.; Şanlıoğlu, I. Point cloud filtering on UAV based point cloud. Measurement 2018, 133, 99–111. [Google Scholar] [CrossRef]
  65. Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M.Y. Filtering photogrammetric point clouds using standard lidar filters towards dtm generation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-2, 319–326. [Google Scholar] [CrossRef] [Green Version]
  66. Mohan, M.; Silva, C.A.; Klauberg, C.; Jat, P.; Catts, G.; Cardil, A.; Hudak, A.T.; Dia, M. Individual Tree Detection from Unmanned Aerial Vehicle (UAV) Derived Canopy Height Model in an Open Canopy Mixed Conifer Forest. Forests 2017, 8, 340. [Google Scholar] [CrossRef] [Green Version]
  67. McGaughey, R. FUSION/LDV: Software for LiDAR Analysis and Visualization, FUSION Version 3.78. 2018. Available online: http://forsys.cfr.washington.edu/fusion/ (accessed on 19 June 2020).
  68. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2019. [Google Scholar]
  69. Koh, L.P.; Wich, S.A. Dawn of Drone Ecology: Low-Cost Autonomous Aerial Vehicles for Conservation. Trop. Conserv. Sci. 2012, 5, 121–132. [Google Scholar] [CrossRef] [Green Version]
  70. Puttock, A.; Cunliffe, A.M.; Anderson, K.; Brazier, R. Aerial photography collected with a multirotor drone reveals impact of Eurasian beaver reintroduction on ecosystem structure. J. Unmanned Veh. Syst. 2015, 3, 123–130. [Google Scholar] [CrossRef]
  71. David, L.C.; Ballado, A.J. Vegetation indices and textures in object-based weed detection from UAV imagery. In Proceedings of the 6th IEEE International Conference on Control System, Computing and Engineering, ICCSCE, Penang, Malaysia, 25–27 November 2016. [Google Scholar]
  72. Malek, S.; Bazi, Y.; Alajlan, N.; Hichri, H.; Melgani, F. Efficient Framework for Palm Tree Detection in UAV Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4692–4703. [Google Scholar] [CrossRef]
  73. Jiang, H.; Chen, S.; Li, D.; Wang, C.; Yang, J. Papaya Tree Detection with UAV Images Using a GPU-Accelerated Scale-Space Filtering Method. Remote Sens. 2017, 9, 721. [Google Scholar] [CrossRef] [Green Version]
  74. Dos Santos, A.A.; Marcato Junior, J.; Araújo, M.S.; Di Martini, D.R.; Tetila, E.C.; Siqueira, H.L.; Aoki, C.; Eltner, A.; Matsubara, E.T.; Pistori, H.; et al. Assessment of CNN-Based Methods for Individual Tree Detection on Images Captured by RGB Cameras Attached to UAVs. Sensors 2019, 19, 3595. [Google Scholar] [CrossRef] [Green Version]
  75. Wulder, M.; Niemann, K.; Goodenough, D.G. Local Maximum Filtering for the Extraction of Tree Locations and Basal Area from High Spatial Resolution Imagery. Remote Sens. Environ. 2000, 73, 103–114. [Google Scholar] [CrossRef]
  76. Li, W.; Fu, H.; Yu, L.; Cracknell, A. Deep Learning Based Oil Palm Tree Detection and Counting for High-Resolution Remote Sensing Images. Remote Sens. 2016, 9, 22. [Google Scholar] [CrossRef] [Green Version]
  77. Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of Citrus Trees from Unmanned Aerial Vehicle Imagery Using Convolutional Neural Networks. Drones 2018, 2, 39. [Google Scholar] [CrossRef] [Green Version]
  78. Cox, S.E.; Booth, D.T. Shadow attenuation with high dynamic range images. Environ. Monit. Assess. 2008, 158, 231–241. [Google Scholar] [CrossRef] [PubMed]
  79. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure. Remote Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef] [Green Version]
  80. Birdal, A.C.; Avdan, U.; Türk, T. Estimating tree heights with images from an unmanned aerial vehicle. Geomat. Nat. Hazards Risk 2017, 8, 1144–1156. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Study area.
Figure 2. Workflow.
Figure 3. Results of processing: (a) orthomosaic of the study area, (b) digital elevation models, and (c) canopy height models generated through (1) the progressive triangulated irregular network and (2) the color vegetation index.
Figure 4. Classification of ground points through (a) progressive triangulated irregular network and (b) color vegetation index.
Figure 5. Sample plots in the study area. Detail of plot No. 4: identification of trees visually (a), by color vegetation index (b), and from the original point cloud (c), with false positives and false negatives marked.
Table 1. Distribution of percentile heights in digital elevation models (DEM) and canopy height models (CHM) considering the classification of ground points with progressive triangulated irregular network (TIN) and color vegetation index (CVI).
Height Percentile [m]

Digital Model      0     10     20     30     40     50     60     70     80     90    100
DEM-TIN        53.30  59.17  60.55  62.05  63.80  64.93  66.20  67.50  68.59  70.98  83.40
DEM-CVI        53.15  58.36  59.47  60.46  62.11  65.66  64.65  65.83  67.10  68.16  74.75
CHM-TIN            0   0.59   4.31   8.59  11.52  13.62  15.27  16.70  18.16  19.91  28.54
CHM-CVI            0   0.75   4.95  11.58  14.34  16.09  17.48  18.72  19.99  21.55  29.01
Table 2. The accuracy evaluation for the individualization of trees from the point cloud filtered with color index and progressive triangulated irregular network. TP: true positive; FP: false positive; FN: false negative; S: sensitivity; P: precision; F: F-score.
Plot  Manual      Color Vegetation Index               Triangulated Irregular Network
      Inventory   TP  FP  FN     S     P     F         TP  FP  FN     S     P     F
  1      16       16   0   0   1.00  1.00  1.00        11   0   5   0.69  1.00  0.81
  2      21       16   0   5   0.76  1.00  0.86        10   1  10   0.50  0.91  0.65
  3       9        5   0   4   0.56  1.00  0.71         4   1   4   0.50  0.80  0.62
  4      22       11   2   9   0.55  0.85  0.67         6   2  14   0.30  0.75  0.43
  5      20       15   1   4   0.79  0.94  0.86        14   1   5   0.74  0.93  0.82
  6      28       22   0   6   0.79  1.00  0.88        19   2   7   0.73  0.90  0.81
  7      30       25   0   5   0.83  1.00  0.91        16   0  14   0.53  1.00  0.70
  8      17       11   1   5   0.69  0.92  0.79        11   1   5   0.69  0.92  0.79
  9      26       16   0  10   0.62  1.00  0.76        13   2  11   0.54  0.87  0.67
 10      25       18   0   7   0.72  1.00  0.84        18   0   7   0.72  1.00  0.84
 11      25       13   0  12   0.52  1.00  0.68        11   0  14   0.44  1.00  0.61
 12      24       19   1   4   0.83  0.95  0.88         8   1  15   0.35  0.89  0.50
 13      24       14   0  10   0.58  1.00  0.74        12   0  12   0.50  1.00  0.67
 14      17       15   0   2   0.88  1.00  0.94        13   0   4   0.76  1.00  0.87
 15      28       23   0   5   0.82  1.00  0.90        16   0  12   0.57  1.00  0.73
 16      10        8   0   2   0.80  1.00  0.89         9   0   1   0.90  1.00  0.95
 17      20       13   0   7   0.65  1.00  0.79        11   0   9   0.55  1.00  0.71
 18      31       23   0   8   0.74  1.00  0.85        25   0   6   0.81  1.00  0.89
 19      29       27   0   2   0.93  1.00  0.96        22   0   7   0.76  1.00  0.86
 20      23       18   0   5   0.78  1.00  0.88        16   0   7   0.70  1.00  0.82
 21      21       11   0  10   0.52  1.00  0.69        10   1  10   0.50  0.91  0.65
 22      15       12   0   3   0.80  1.00  0.89        11   0   4   0.73  1.00  0.85
 23      18       14   1   3   0.82  0.93  0.88        10   1   7   0.59  0.91  0.71
 24      18       15   0   3   0.83  1.00  0.91        11   1   6   0.65  0.92  0.76
 25      26       17   0   9   0.65  1.00  0.79        13   3  10   0.57  0.81  0.67
 26      18       13   0   5   0.72  1.00  0.84        11   3   4   0.73  0.79  0.76
 27      29       20   0   9   0.69  1.00  0.82        18   1  10   0.64  0.95  0.77
 28      33       22   0  11   0.67  1.00  0.80        17   0  16   0.52  1.00  0.68
 29      10        7   1   2   0.78  0.88  0.82         5   1   4   0.56  0.83  0.67
 30      27       22   1   4   0.85  0.96  0.90        21   1   5   0.81  0.95  0.88
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
