Article

Comparative Assessment of Pixel and Object-Based Approaches for Mapping of Olive Tree Crowns Based on UAV Multispectral Imagery

Ante Šiljeg, Lovre Panđa, Fran Domazetović, Ivan Marić, Mateo Gašparović, Mirko Borisov and Rina Milošević
1 Department of Geography, University of Zadar, Trg kneza Višeslava 9, 23000 Zadar, Croatia
2 Faculty of Geodesy, University of Zagreb, Kačićeva 26, 10000 Zagreb, Croatia
3 Faculty of Technical Sciences, University of Novi Sad, Trg Dositej Obradović 6, 21000 Novi Sad, Serbia
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(3), 757; https://doi.org/10.3390/rs14030757
Submission received: 29 December 2021 / Revised: 2 February 2022 / Accepted: 4 February 2022 / Published: 6 February 2022

Abstract

Pixel-based (PB) and geographic object-based image analysis (GEOBIA) classification approaches allow the extraction of different objects from multispectral images (MS). The primary goal of this research was the analysis of UAV imagery applicability and the accuracy assessment of the Maximum Likelihood Classifier (MLC) and Support Vector Machine (SVM) classification algorithms within the PB and GEOBIA classification approaches. The secondary goal was to use different accuracy assessment metrics to determine which of the two tested classification algorithms (SVM and MLC) most reliably distinguishes olive tree crowns and which approach is more accurate (PB or GEOBIA). The third goal was to add false polygon samples to the Correctness (COR), Completeness (COM) and Overall Quality (OQ) metrics and use them to calculate the Total Accuracy (TA). The methodology can be divided into six steps, from data acquisition to selection of the best classification algorithm after accuracy assessment. A high-quality digital orthophoto (DOP) and UAV multispectral imagery (UAVMS) were generated. A new accuracy metric, called Total Accuracy (TA), combined both false and true positive polygon samples, thus providing a more comprehensive insight into the assessed classification accuracy. SVM (GEOBIA) was the most reliable classification algorithm for extracting olive tree crowns from UAVMS imagery. The assessment carried out indicated that GEOBIA-SVM achieved a TACOR of 0.527, TACOM of 0.811, TAOQ of 0.745, an Overall Accuracy (OA) of 0.926 or 0.980 and an Area Under the Curve (AUC) value of 0.904 or 0.929. The calculated accuracy metrics confirmed that the GEOBIA approach (SVM and MLC) achieved more accurate olive tree crown extraction than the PB approach (SVM and MLC) when classifying very-high-resolution (VHR) UAVMS imagery. The SVM classification algorithm extracted olive tree crowns more accurately than MLC in both approaches. However, the accuracy assessment proved that PB classification algorithms can also achieve satisfactory accuracy.

1. Introduction and Background

The olive is one of the oldest and most widely cultivated species in the Mediterranean [1,2,3]. Over the centuries, the distribution, spread and eventual dominance of olives have shaped the Mediterranean landscape’s recognizable character [1]. Over 70% of the olives in the world are grown in the Mediterranean countries of the European Union [4]. Olives are well adapted to sloping and poorly fertile soils, thus providing an ecological, economic and social benefit to the areas in which they are cultivated [5]. Preservation of olive groves as an element of Mediterranean identity [6] and a strategic economic resource [7] depends on sustainable environmental management, based on the improvement of the competitiveness of the agricultural sector and on precision agricultural methodology and technology [8,9]. The development of geospatial technologies has enabled precise mapping and inventorying of olives [10,11]. The application of unmanned aerial vehicles (UAV) in aerial photogrammetry is widely accepted as a reliable and accurate remote sensing method for environmental protection and mapping of vegetation species [10,12,13]. Compared to satellite images, UAV multispectral images (UAVMS) have significantly better spatial resolution and greater flexibility in the selection of the appropriate spatio-temporal resolution [14]. In comparison to airborne photogrammetric systems, UAV-based systems have lower operating costs and allow lower flight altitudes, thus flying closer to treetops and providing a very detailed insight into vegetation dynamics [15]. Various pixel-based (PB) and geographic object-based image analysis (GEOBIA) classification approaches allow the extraction of different objects from multispectral images (MS) and, depending on the set parameters and the quality of the input data, they can achieve different results [16]. PB methods use the individual pixels of an MS as a minimum mapping unit, thus allowing a very detailed classification of different objects, which in some cases can potentially lead to the formation of various types of “noise” or the “salt and pepper” effect [17,18]. On the other hand, GEOBIA allows pixels of given images to be grouped into meaningful homogeneous “superpixels” of different shapes and sizes, according to their common spectral, spatial and geometric features [18,19,20]. A highly accurate land cover model (LCM) can be generated using the MS and the PB or GEOBIA approach, together with selected test samples and an appropriate classification algorithm [18]. However, classifying data collected by remote sensing methods into a meaningful and accurate thematic map remains a challenge. The classification outcome is still influenced by many factors, such as the complexity of the landscape within the study area, the selected test samples and the chosen approaches to image processing and classification [21,22,23]. Therefore, the constant emergence of new classification algorithms and methods [24] in recent years requires a critical approach. Different classification algorithms should be compared and evaluated to facilitate the selection of the most accurate and suitable one for particular research [16].
Many authors have used UAVs for aerial photogrammetric surveys over areas of olive groves to collect the data required to extract olive tree crowns via the GEOBIA approach. In [25], olive and citrus tree crowns were extracted from UAVMS imagery using a multiscale object-based approach. In [26], photogrammetric point clouds were generated by UAV technology and analyzed using GEOBIA. In [27] and [28], olive trees were also extracted from UAVMS imagery using the GEOBIA approach to assess olive tree characteristics. In [29], the authors developed and tested the performance of a method based on low-cost UAV imagery to estimate olive crown parameters (tree height and crown diameter). Different authors have used different classification algorithms in their research; for example, those in [25] used the Assign Class algorithm, those in [30] used Random Forest by applying PB and GEOBIA approaches and those in [31] compared the results of two deep learning (Fully Convolutional Networks and patch-based Deep Convolutional Neural Networks) and two conventional (Support Vector Machine (SVM) and Random Forest) classification algorithms. In [32], the Maximum Likelihood Classifier (MLC) was used, while in [29] classification was performed using the Classification and Regression Trees algorithm. Furthermore, researchers have used different methods to assess the accuracy of their models; for example, those in [25] used Recall, Precision, F-score and Branching Factor metrics. In [27] and [32], the Overall Accuracy (OA) [33] metric was used; in [27], an OA value of 0.93 was obtained. The authors in [34] used Correctness (COR) [35], Completeness (COM) [35] and Overall Quality (OQ) [35] metrics. The authors in [30] also used OA, as well as Producer’s Accuracy (PA) [33] and User’s Accuracy (UA) [33], which were also used by [31]. Using these metrics, [30] proved that GEOBIA separates tree species more reliably than the PB approach.
The primary goal of this research is the analysis of UAV imagery applicability and accuracy assessment of the most commonly used classification algorithms (MLC and SVM) [34,36,37,38,39] within PB and GEOBIA classification approaches. The secondary goal is to use different accuracy assessment metrics to determine which of the two tested classification algorithms (SVM and MLC) most reliably distinguishes olive tree crowns, and which approach is more accurate (PB or GEOBIA). The third goal is to add false polygon samples for COR, COM and OQ metrics, and use them to calculate Total Accuracy (TA).
The remainder of the paper is organized as follows. Section 2 (Materials and Methods) describes the study area (Section 2.1), the methodological framework of the research (Section 2.2), the field research and data acquisition (Section 2.3 and Section 2.3.1), UAV imagery processing (Section 2.4), segmentation (Section 2.5), adding test samples (Section 2.6), classification (Section 2.7) and accuracy assessment (Section 2.8). Section 3 (Results and Discussion) presents the derivation of the DSM and DOP (Section 3.1), the derivation of the UAVMS (Section 3.2), segmentation (Section 3.3), adding test samples (Section 3.4), the results of the classification algorithms (Section 3.5) and the accuracy of the classification algorithms (Section 3.6). Section 4 presents the conclusions.

2. Materials and Methods

2.1. Study Area

The study area for the extraction of olive tree crowns is the topographic basin of the settlement of Sali (Figure 1). It largely comprises the Saljsko polje olive grove (202.1 ha), which is protected within the Natura 2000 ecological network as a special botanical reserve. The area is located in the south-eastern part of Dugi Otok and contains olive trees up to 700 years old, which, together with the size of the trees, makes it a unique ecological area within the wider region [40].

2.2. The Methodological Framework of the Research

The methodology is divided into the following steps: field research (1); development of digital orthophoto (DOP) and UAVMS (2); UAVMS segmentation (3); adding test samples (4); PB classification using two classification algorithms (MLC and SVM) (5.1); GEOBIA classification using two classification algorithms (MLC and SVM) (5.2); estimation of the accuracy of the model and selection of the best classification algorithm (6) (Figure 2).

2.3. Field Research

The field research (Figure 2(1)) was conducted on June 27 and 28, 2020. This period is most suitable for aerial photogrammetric imaging of olive trees because pruning has already been completed, vegetation is at its peak and shadows are minimal [41]. Two aerial photogrammetric surveys of the topographic basin of the Sali settlement were performed. The first survey used a camera recording the visible part of the spectrum (RGB), and the second used a multispectral camera. RGB images were collected with a DJI Matrice 210 RTK V2 UAV carrying a Zenmuse X7 camera (Figure 2(1)), while multispectral images were collected with a DJI Matrice 600 Pro carrying a MicaSense RedEdge-MX multispectral camera (Figure 2(1)). An RTK-GPS Stonex S10 (Figure 2(1)) was used to collect ground control points (GCPs) and checkpoints (CPs).

2.3.1. Data Acquisition

The first step was to mark and collect points for a local geodetic basis. GCPs were collected to achieve a better absolute orientation of the model. A total of ten GCPs and five CPs were marked and measured in the official projection coordinate reference system based on the transverse Mercator projection (HTRS96/TM). The points were collected at different altitudes, following the rules for the spatial arrangement of control points in photogrammetry (distribution throughout the entire survey area) [42] (Figure 1). The next step was to create an optimal flight plan. This included selecting mission types with regard to terrain morphology, the research object and the distribution of GCPs. The flight missions were planned in the DJI GS Pro software. Seven double grid missions with front and side overlap of 80% were planned for the RGB camera’s data acquisition over the topographic basin of the Sali settlement, while seven single grid missions with front and side overlap of 80% were planned for data acquisition by the multispectral camera. Considering the extent of the terrain and the desired spatial resolution (ground sampling distance, GSD) for the DOP (GSD = 5 cm), the average flight (acquisition) altitude was about 200 m, and for the UAVMS (GSD = 18 cm) it was 260 m. The compasses and inertial measurement units (IMUs) of both UAVs were then calibrated, and a radiometric calibration of the multispectral camera was performed using a calibrated reflectance panel (CRP2). The multispectral camera was calibrated before and after each mission in order to accurately account for the light conditions during flight. The calibration process was performed by placing the CRP2 on a flat surface and connecting the camera to its configuration page via a Wi-Fi network. Special attention was given to ensuring that the panel was not in the shade when taking a calibration photo and that the sensor was at least 1 m above the CRP2. Finally, the aerial photogrammetric survey was performed.
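The relationship between flight altitude and GSD can be checked with a simple back-of-the-envelope calculation. The sketch below assumes nominal MicaSense RedEdge-MX sensor values (approximately 3.75 µm pixel pitch and a 5.4 mm focal length), which are not stated in the paper and should be verified against the manufacturer's data sheet.

```python
# Illustrative check of the altitude-GSD relationship (not part of the original
# workflow). The sensor values are nominal figures assumed for the MicaSense
# RedEdge-MX (approx. 3.75 um pixel pitch, 5.4 mm focal length).

def ground_sampling_distance(flight_altitude_m: float,
                             focal_length_mm: float,
                             pixel_pitch_um: float) -> float:
    """GSD (m/pixel) = pixel pitch * flight altitude / focal length."""
    return (pixel_pitch_um * 1e-6) * flight_altitude_m / (focal_length_mm * 1e-3)

# A multispectral flight at ~260 m reproduces the ~18 cm GSD reported above.
print(round(ground_sampling_distance(260.0, 5.4, 3.75), 3))  # ~0.181 m
```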

2.4. UAV Imagery Processing

The second step in extracting olive tree crowns was the production of the DOP and UAVMS images (Figure 2(2)). The collected RGB and multispectral images were processed using Agisoft Metashape Professional 1.5.1, one of the most commonly used software packages for photogrammetric image processing [43,44]. Thanks to the implemented structure-from-motion (SfM) and multi-view stereo algorithms, Agisoft enables 3D modelling based on reconstructing 3D structures from overlapping 2D images [45,46,47]. First, tie points were generated and the models were oriented with the help of the collected GCPs. Then, dense point clouds and polygonal meshes were created, from which the digital surface model (DSM) (Figure 3C), DOP (Figure 3A) and UAVMS (Figure 3B) were derived.

2.5. Segmentation

The third step was the segmentation of the UAVMS (Figure 2(3)) based on the mean shift approach [19] within ArcGIS. The Segment Mean Shift tool in ArcGIS identifies features or segments in imagery by grouping together neighboring pixels that have similar spectral, spatial and geometric characteristics [19]. Since the characteristics of the image segments depend on the spectral detail, spatial detail and minimum segment size, the values of these parameters were optimized through an iterative process (n = 64 parameter combinations). The best combination of parameter values was selected based on visual interpretation of the UAVMS segmentation results.
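For readers who want to experiment outside ArcGIS, the following minimal sketch illustrates the underlying idea of mean-shift segmentation in a joint spectral-spatial feature space using scikit-learn. It is only a conceptual analogue of the Segment Mean Shift tool; the spatial_weight and bandwidth parameters are illustrative assumptions and do not correspond one-to-one to the ArcGIS spectral detail, spatial detail and minimum segment size settings.

```python
# Conceptual analogue of mean-shift segmentation in a joint spectral-spatial
# feature space (the study used the ArcGIS Segment Mean Shift tool, not this code).
import numpy as np
from sklearn.cluster import MeanShift

def mean_shift_segments(image: np.ndarray, spatial_weight: float = 0.5,
                        bandwidth: float = 30.0) -> np.ndarray:
    """Cluster the pixels of a (rows, cols, bands) array into segment labels."""
    rows, cols, bands = image.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    # Stack band values with (down-weighted) pixel coordinates so that both
    # spectral similarity and spatial proximity drive the clustering.
    features = np.column_stack([
        image.reshape(-1, bands).astype(float),
        spatial_weight * xx.ravel(),
        spatial_weight * yy.ravel(),
    ])
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)
    return labels.reshape(rows, cols)

# Tiny synthetic 5-band chip; real UAVMS tiles would be read with e.g. rasterio
# and processed tile by tile, since MeanShift scales poorly with pixel count.
segments = mean_shift_segments(np.random.randint(0, 255, size=(40, 40, 5)))
print(segments.max() + 1, "segments found")
```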

2.6. Adding Test Samples

The fourth step was adding test samples (Figure 2(4), Supplementary Material Figure S1). This refers to the process of collecting test samples and verifying them. Samples were collected for the following 18 classes: Young Olive Trees, Old Olive Trees, Older Olive Trees, Oldest Olive Trees, Tile, Sheet Metal, Sheet Metal 2, Asphalt, Concrete, Shadow, High Vegetation, Macadam, Bare Stone, Quarry, Sea, Grass, Shrubby Vegetation and Brown Soil. Due to possible input distortions of the sharpened multispectral image, a larger number of samples was marked. The collected test samples were checked using the cross-validation method; in total, approximately 1200 samples were collected across all classes.
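The exact cross-validation procedure is not detailed in the text; a common variant is a k-fold check of the labeled samples with a simple classifier, where low per-fold scores point to mislabeled or spectrally ambiguous samples. The snippet below is a hedged sketch of that idea with synthetic data (the arrays X and y are placeholders, not the real sample set).

```python
# One possible sample-checking procedure: k-fold cross-validation of the
# labeled samples with a simple classifier. All data here are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 5))        # per-sample mean spectra (5 bands)
y = rng.integers(0, 18, size=1200)    # labels for the 18 mapped classes

# A low mean score (or a low score for one fold/class) flags samples that
# should be revisited in the field or on the DOP before classification.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("mean 5-fold accuracy:", scores.mean().round(3))
```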

2.7. Classification

The fifth step was classification (Figure 2(5)). Within the PB (Figure 2(5.1)) and GEOBIA (Figure 2(5.2)) classification approaches, the MLC and SVM classification algorithms were tested. Classifications were based on the selected samples. For this purpose, a tool called PvO-ACP (pixel vs. object automated classification process) was used (Supplementary Material Figure S2). It was created in ModelBuilder within ArcGIS. PvO-ACP allows the simultaneous generation of PB and GEOBIA models based on the selected parameters. The generated models were then reclassified into 12 classes (Figure 4) and finally into two classes, Olive Tree and Other, as in [28] (Figure 7).
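To make the difference between the two algorithms concrete, the sketch below trains both on the same feature matrix with scikit-learn; the MLC is approximated by quadratic discriminant analysis, which applies the same per-class Gaussian maximum likelihood rule. This is an illustrative stand-in, not the PvO-ACP tool or the ArcGIS implementations used in the study, and all data in it are synthetic placeholders.

```python
# Illustrative stand-in for the two compared algorithms: an RBF-kernel SVM and
# a Gaussian maximum likelihood classifier, approximated here by quadratic
# discriminant analysis (per-class multivariate normal likelihoods).
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC

def classify(train_X, train_y, X):
    """Return labels from both algorithms for the feature matrix X."""
    svm = SVC(kernel="rbf", C=10.0).fit(train_X, train_y)
    mlc = QuadraticDiscriminantAnalysis().fit(train_X, train_y)  # MLC analogue
    return svm.predict(X), mlc.predict(X)

# For the PB approach, X holds one row per pixel (its band values); for the
# GEOBIA approach, X holds one row per segment (e.g. mean band values).
rng = np.random.default_rng(1)
train_X, train_y = rng.normal(size=(600, 5)), rng.integers(0, 18, size=600)
svm_labels, mlc_labels = classify(train_X, train_y, rng.normal(size=(1000, 5)))
print(svm_labels[:5], mlc_labels[:5])
```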

2.8. Accuracy Assessment

The sixth step was to assess the accuracy of the models and select the best classification algorithm (Figure 2(6)). Estimating the accuracy of the generated models was based on the COR, COM and OQ metric indicators. These metrics quantify the relationship between reference objects (reference olive trees, ROT) and derived objects (classified olive trees, COT) and examine the accuracy of the executed classification [48]. The accuracy assessment was performed based on 13 polygon features of ROTs (ROT1–ROT13) (Figure 4A2,B2,C2,D2 and Figure 7). ROT locations were selected using the Create Accuracy Assessment Points tool within ArcGIS: the olive trees on which a generated point was located, or which were closest to a point, were manually vectorized at a scale of 1:25 from the produced DOP with a spatial resolution of 5 cm. The authors in [26] also manually delineated tree crowns in their research, but they vectorized all the trees. The overlap area ($A_O$) between the ROTs and the COTs was calculated for all four models. These values were used to assess the accuracy of the classification algorithms according to the following formulas:
$$\mathrm{COR} = \frac{A_O}{A_{COT}}$$
where $A_O$ is the overlap area of ROT and COT and $A_{COT}$ is the total area of COT;
$$\mathrm{COM} = \frac{A_O}{A_{ROT}}$$
where $A_O$ is the overlap area of ROT and COT and $A_{ROT}$ is the total area of ROT;
$$\mathrm{OQ} = \frac{A_O}{A_{ROT} + A_{COT} - A_O}$$
where $A_O$ is the overlap area of ROT and COT, $A_{ROT}$ is the total area of ROT and $A_{COT}$ is the total area of COT.
The COR, COM and OQ metrics values vary in the range of 0–1. Higher values indicate a higher match between reference and classified objects, i.e., higher accuracy of the classification algorithm [49].
Since it can be seen from the generated models that the classification algorithms in the PB models overestimated the area of olive tree crowns, it was necessary to add another 13 polygon features, representing false olive trees (FOT). This is the first time, to the best of our knowledge, that false polygon samples have been added to these metrics. FOTs were added on the same principle as ROTs. Then, COR, COM and OQ values for FOTs were calculated. The same methodology was applied as for ROTs, except that in this case a lower value represents higher accuracy. The Total Accuracy (TA) was calculated by subtracting the FOT indicator value from the ROT indicator value according to the formulas below:
$$\mathrm{TA_{COR}} = \mathrm{COR_{ROT}} - \mathrm{COR_{FOT}}$$
$$\mathrm{TA_{COM}} = \mathrm{COM_{ROT}} - \mathrm{COM_{FOT}}$$
$$\mathrm{TA_{OQ}} = \mathrm{OQ_{ROT}} - \mathrm{OQ_{FOT}}$$
TA values can vary between −1 and 1. A higher value represents higher accuracy.
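As a worked illustration of these area-based metrics, the following sketch computes COR, COM, OQ and TA for simple rectangular polygons with shapely. The geometries are invented toy examples; in practice, the ROT/FOT polygons and the classified (COT) polygons exported from the GIS would be used.

```python
# Worked toy example of the area-based metrics defined above, using shapely.
from shapely.geometry import Polygon

def cor_com_oq(reference: Polygon, classified: Polygon):
    a_o = reference.intersection(classified).area          # overlap area A_O
    cor = a_o / classified.area                            # Correctness
    com = a_o / reference.area                             # Completeness
    oq = a_o / (reference.area + classified.area - a_o)    # Overall Quality
    return cor, com, oq

def total_accuracy(metric_rot: float, metric_fot: float) -> float:
    """TA = metric on reference samples minus the same metric on false samples."""
    return metric_rot - metric_fot

rot = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])   # reference olive tree crown
cot = Polygon([(1, 1), (5, 1), (5, 5), (1, 5)])   # classified crown
print(cor_com_oq(rot, cot))                       # (0.5625, 0.5625, ~0.3913)
```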
The second accuracy assessment was performed using Producer’s Accuracy (PA) and User’s Accuracy (UA) for both classes. PA represents the probability that a reference pixel was classified correctly, while UA represents the probability that a classified pixel represents that class on the ground [50,51]. Overall Accuracy (OA) was also calculated to estimate the overall correctness of the classification; it represents the quotient of the total number of correctly classified pixels and the total number of pixels in the error matrix [50,51]. The accuracy assessment using these indicators was performed in two ways. The first was to randomly add 500 points using the Create Accuracy Assessment Points tool. Each generated point received ground truth data and classification data (for each of the four generated models). The values of PA, UA and OA were then calculated using the Compute Confusion Matrix tool within ArcGIS, based on the formulas below:
$$\mathrm{PA}_i = \frac{p_{ii}}{p_{+i}}$$
$$\mathrm{UA}_i = \frac{p_{ii}}{p_{i+}}$$
$$\mathrm{OA} = \sum_{i=1}^{m} p_{ii}$$
where $p_{ii}$ is the major diagonal element for class $i$, $p_{+i}$ is the total number of observations in column $i$ (bottom margin), $p_{i+}$ is the total number of observations in row $i$ (right margin) and $m$ is the number of rows (columns) in the error matrix, with the entries expressed as proportions of the total number of samples. PA, UA and OA values can vary between 0 and 1. A higher value represents higher accuracy.
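A compact numerical check of these formulas is given below, using the counts reported for GEOBIA-SVM with 500 random points (Table 3, part C); rows hold classified labels and columns hold reference (ground truth) labels.

```python
# Numerical check of the PA/UA/OA formulas using the GEOBIA-SVM counts from
# Table 3 (part C, 500 random points).
import numpy as np

m = np.array([[400, 17],   # classified Other:      400 Other, 17 Olive Tree
              [20,  63]])  # classified Olive Tree:  20 Other, 63 Olive Tree

ua = np.diag(m) / m.sum(axis=1)   # User's Accuracy     -> [0.959, 0.759]
pa = np.diag(m) / m.sum(axis=0)   # Producer's Accuracy -> [0.952, 0.788]
oa = np.trace(m) / m.sum()        # Overall Accuracy    -> 0.926
print(ua.round(3), pa.round(3), round(oa, 3))
```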
The second way to calculate the PA, UA and OA values was to create a point feature for each pixel based on the generated ROT and FOT polygons. Considering the spatial resolution of the UAVMS, as described in [34], the Create Fishnet tool within ArcGIS was used to generate a grid of 18 × 18 cm polygons within the ROTs and FOTs, whose centroids were used as input data to assess the accuracy of the models. Then, in the same way, the pixel value of each generated model was added to all point features, and PA, UA and OA were calculated using the Compute Confusion Matrix tool.
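The fishnet-centroid idea can be illustrated with a few lines of Python; this sketch generates a regular 18 × 18 cm grid of points inside a polygon using shapely and numpy, whereas the study itself used the ArcGIS Create Fishnet tool. The example polygon is a toy placeholder.

```python
# Sketch of the fishnet-centroid idea: a regular 18 x 18 cm grid of points is
# generated inside a polygon (ROT or FOT), so that every UAVMS pixel covered
# by the polygon contributes one validation point.
import numpy as np
from shapely.geometry import Point, Polygon

def grid_centroids(poly: Polygon, cell: float = 0.18):
    """Return centroids of a regular cell x cell grid that fall inside poly."""
    minx, miny, maxx, maxy = poly.bounds
    xs = np.arange(minx + cell / 2, maxx, cell)
    ys = np.arange(miny + cell / 2, maxy, cell)
    return [Point(x, y) for x in xs for y in ys if poly.contains(Point(x, y))]

crown = Polygon([(0, 0), (1.8, 0), (1.8, 1.8), (0, 1.8)])  # 1.8 m x 1.8 m crown
print(len(grid_centroids(crown)))  # 100 points (10 x 10 cells)
```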
The third accuracy assessment was performed using the receiver operating characteristic (ROC) curve and the calculation of the area under the curve (AUC), which are widely used methods for estimating the accuracy of classification algorithms [52,53,54,55]. AUC values vary in the range of 0–1, with higher values representing greater accuracy of the model. Values <0.6 represent poor accuracy; 0.6–0.7 average accuracy; 0.7–0.8 good accuracy; 0.8–0.9 very good accuracy; and >0.9 excellent accuracy [56,57]. The creation of ROC curves was automated with the Calculate ROC Curves and AUC Values tools within ArcGIS. The same 500 random points, and the same points created within the ROTs and FOTs, that were used for calculating PA, UA and OA served as input data to assess the accuracy of the models and select the best classification algorithm.
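Outside ArcGIS, the same ROC/AUC assessment can be reproduced with scikit-learn, as sketched below; the ground-truth labels and classifier scores in the example are synthetic placeholders rather than the study's validation points.

```python
# ROC/AUC sketch with scikit-learn; all inputs are synthetic placeholders
# (the published results were produced with the ArcGIS-based tools above).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=500)            # 1 = Olive Tree, 0 = Other
scores = 0.3 * y_true + 0.7 * rng.random(500)    # stand-in classification scores

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)              # >0.9 would count as "excellent"
print("AUC:", round(auc, 3))
```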

3. Results and Discussion

3.1. Derivation of DSM and DOP

A total of 6587 high-quality images were collected in the planned seven double grid missions. The total error across the five CPs used for the accuracy assessment of the generated model was 5.83 cm. Considering the spatial resolution of the aerial photogrammetric images and the purpose of the model, this accuracy satisfies the needs of the analyses for which the created models were used. By interpolating 1.5 × 10⁹ points within the dense point cloud, a DSM with a spatial resolution of 5 cm was generated (Figure 3C). Based on the generated DSM and point clouds, a DOP with a spatial resolution of 5 cm was created (Figure 3A), containing the three visible channels (RGB).

3.2. Derivation of UAVMS

A total of 7245 high-quality image captures were collected in the planned seven missions. Five images were collected at each recording location, one for each multispectral camera channel (red, green, blue, red edge, near-infrared), giving a total of 7245 × 5 = 36,225 individual band images. By processing the images within Agisoft Metashape 1.5.1, very-high-resolution UAVMS were generated. Testing the channel combinations showed that the differences between vegetation species are best observed in the 5-4-1 band combination (Figure 3B). The UAVMS with this channel order served as the basis for extracting classes with the PB approach, while for the GEOBIA approach, segmentation of the same image was performed.

3.3. Segmentation

As in [29], the iterative process of testing segmentation parameters based on a trial-and-error procedure yielded optimal values for olive tree crown extraction by the GEOBIA approach. Of the 64 derived models, a model containing the following parameter values was selected: (a) spectral detail = 18/20; (b) spatial detail = 15/20; (c) minimum segment size = 10/20 (Figure 3D). Given that the spectral detail is conditioned by the spectral resolution of the UAVMS [58], which affects the differentiation of different vegetation species, the selected value of the spectral detail allowed the separation of olive trees from other vegetation. The value of the spatial detail parameter, which determines the importance of feature proximity in the multispectral model, enabled the isolation of individual trees and reduced the influence of generalization during classification. The minimum segment size was set at 10 pixels, which, given the 18 cm spatial resolution of the UAVMS, allowed mapping of all olive tree crowns larger than 0.324 m².

3.4. Adding Test Samples

Numerous factors, such as shadows caused by tall objects or terrain morphology, sunlight angle, vegetation physiology, etc., affect differences in the spectral characteristics of elements of the same class within the same image [59]. For this reason, a total of approximately 1200 samples were collected within 18 classes. Four of the classes related to olive trees, whose spectral differences are influenced by ecological conditions, tree age, agronomic processes, variety, leaf growth rate and other abiotic and biotic stress factors [60].

3.5. Results of Classification Algorithms

The SVM and MLC models for the PB and GEOBIA approaches (Figure 4A–D) were generated by the developed PvO-ACP tool. The PB approach was observed to overestimate olive tree crown areas, especially in areas where pine forests predominate (Figure 4A1,B1). This is because the pixel, as the minimum mapping unit, is responsible for the “salt and pepper” effect. The GEOBIA-SVM model falsely recognized the fewest olive trees in the pine forest (Figure 4C1), while the GEOBIA-MLC model performed slightly worse (Figure 4D1). The algorithms misclassified shadows as olive trees within all LCMs, but mostly in those generated by the PB approach, particularly with the MLC classification algorithm. The PB models are “grainy” (Figure 4A2,B2), while the GEOBIA models are more compact (Figure 4C2,D2), especially the SVM model (Figure 4C2).

3.6. Accuracy of Classification Algorithms

Using the COR, COM and OQ metrics on ROTs, the results showed that the GEOBIA-SVM model had the highest COR value (0.9064), which is similar to the results in [34] (Figure 5C). The PB-SVM model had the highest values of COM (0.8772) and OQ (0.7767) (Figure 5A), which would suggest it is the most accurate model. However, using FOTs, it was shown that the GEOBIA-SVM model overestimates the surface area of the Olive Tree class by the least amount; it had the best index values (COR = 0.3796; COM = 0.0159; OQ = 0.0157) (Table 1C). By calculating the TA, the GEOBIA-SVM model was shown to be the most reliable (TACOR = 0.5269; TACOM = 0.8110; TAOQ = 0.7450) (Table 2). In addition, the results confirmed that including false samples in these metrics changes the accuracy results.
The confusion matrix results showed that the GEOBIA-SVM model had by far the largest UA and OA values, regardless of the number and arrangement of points at which accuracy was tested (Table 3C,C1). The OA values of 0.926 and 0.980 are similar to the results in [27], where an OA value of 0.93 was obtained after certain manual corrections to the final model. Both MLC models had slightly lower accuracy but were still better than the results obtained in [32], where an OA value of 0.69 was obtained for an MLC model generated by supervised classification. Increasing the number of test points increased the models' accuracy according to the OA indicator, but the UA value for the Olive Tree class decreased significantly. Furthermore, with an increase in the number of test points, the value of PA increased, except in the case of PB-SVM, where it decreased slightly but was still the highest (Table 3A,A1). As in [30], the OA showed that GEOBIA separates tree species more reliably than the PB approach. Some general features can be seen in the confusion matrices derived from the larger number of test samples. These refer to the relatively low values of the UA metric for the Olive Tree class, ranging from 0.093 (Table 3B1) to 0.497 (Table 3C1). This may indicate a problem of overestimating the number of olive trees in a specific area. Therefore, this approach must be applied with caution, and additional ground validation data should be used in addition to the MS imagery.
According to the derived ROC curves and AUC values, SVM (GEOBIA) (Figure 4C and Figure 7) is the most reliable classification algorithm for extracting olive tree crowns from UAVMS imagery (Figure 6). It achieved excellent accuracy values of 0.904 when using 500 random points (Figure 6A) and 0.929 when using points within the ROTs and FOTs (Figure 6B). The GEOBIA-MLC model also achieved excellent accuracy (0.901) for the points within the ROTs and FOTs and very good accuracy (0.864) for the 500 random points. SVM (PB) achieved very good accuracy with both point sets (0.833 and 0.874), while the least accurate model was PB-MLC, which still achieved very good, though lower, accuracy values of 0.826 and 0.813. The AUC indicator derived for the 13 ROTs and 13 FOTs highlights the problem of shadows, which the classification algorithms, especially in the PB analyses, misclassify as olive trees, mostly in areas of dense pine forest and around tall buildings. The SVM classification algorithm proved to be an excellent solution for reducing the shadow problem in the GEOBIA analyses, while in the PB analyses SVM is also a better solution than MLC, as the latter produces “noise” (Figure 7).

4. Conclusions

SVM and MLC LCMs of the topographic area of the Sali settlement on the island of Dugi Otok were generated using the PB and GEOBIA approaches. All generated LCMs were reclassified into two classes: Olive Tree and Other. Accuracy assessment metrics showed that the GEOBIA approach generated more accurate models than the PB approach. Therefore, it was found that the applied classification algorithms achieved better results on the segmented image. Since GEOBIA groups pixels into homogeneous meaningful “superpixels”, various types of “noise” within the shaded areas (forests and tall buildings) occur much less frequently. These problems can often occur in the PB approach because classification algorithms within the PB approach classify each pixel separately. However, PB classification algorithms can produce results with satisfactory accuracy.
Although the accuracy assessment indicated that SVM (GEOBIA) was the best algorithm for extracting olive tree canopies, it is necessary to point out certain limitations. As the results indicate (Table 3), there may be a problem of overestimation (low value of UA) of the number of olive trees in a specific area. Therefore, this approach must be applied with caution, and ground validation data should be used.
The COR, COM and OQ metrics showed that all classification algorithms overestimated the area of olive trees (a high false-positive rate), except SVM (GEOBIA), which did so rarely; in 7 of the 13 false samples it did not detect the Olive Tree class at all. For this reason, the TA results indicated that the GEOBIA-SVM model was the most accurate (TACOR of 0.527, TACOM of 0.811, TAOQ of 0.745). Furthermore, the other metrics used also showed that GEOBIA-SVM was the most accurate classification method. More precisely, GEOBIA-SVM achieved an OA of 0.926 or 0.980 and an AUC value of 0.904 or 0.929, depending on the number and arrangement of the accuracy assessment points. Limitations of the software used and the lack of licenses for other software, such as eCognition, restricted the choice of algorithms. In future research, the accuracy of several segmentation methods (Multiresolution, Spectral Difference, etc.) and other classification algorithms (Hierarchical Classification, Random Forest, Fully Convolutional Networks, Deep Convolutional Neural Networks, etc.) will be examined. Furthermore, the influence of spectral resolution and UAV flight settings (flight altitude and camera calibration) on the accuracy of GEOBIA and PB classification algorithms will be determined.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs14030757/s1, Figure S1: Collected test samples. Figure S2: PvO-ACP tool.

Author Contributions

Conceptualization, A.Š., L.P., F.D. and I.M.; methodology, A.Š., L.P. and M.G.; software, F.D., I.M. and R.M.; validation, L.P., M.G. and M.B.; formal analysis, L.P., F.D., I.M., M.B. and R.M.; investigation, A.Š., L.P., M.G. and M.B.; resources, A.Š.; data curation, F.D., I.M. and R.M.; writing—original draft preparation, L.P. and I.M.; writing—review and editing, A.Š., L.P., F.D., I.M., M.G., M.B. and R.M.; visualization, L.P., F.D. and I.M.; supervision, A.Š., M.G. and M.B.; project administration, A.Š. and F.D.; funding acquisition, A.Š., F.D., I.M., M.G. and M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was performed within the PEPSEA (Protecting the Enclosed Parts of the Sea in Adriatic from Pollution) project, funded by the Italy–Croatia cross-border cooperation program 2014–2020 and the project UIP-2017-05-2694 financially supported by the Croatian Science Foundation. This research was also funded by the University of Zagreb for the project: “Advanced photogrammetry and remote sensing methods for environmental change monitoring” (Grant No. RS4ENVIRO).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank the Croatian Science Foundation for providing the necessary research funds. The acquired skills, knowledge and results from this research are used to meet the needs of the STREAM project (Strategic Development of Flood Management).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Loumou, A.; Giourga, C. Olive groves: The life and identity of the Mediterranean. Agric. Hum. Values 2003, 20, 87–95. [Google Scholar]
  2. Tasić, N. Flora Mediterana sa Osvrtom na Maslinu. Master’s Thesis, Fakultet za Mediteranske Poslovne Studije, Tivat, Montenegro, 2015. [Google Scholar]
  3. Orlandi, F.; Aguilera, F.; Galan, C.; Msallem, M.; Fornaciari, M. Olive yields forecasts and oil price trends in Mediterranean areas: A comprehensive analysis of the last two decades. Exp. Agric. 2017, 53, 71–83. [Google Scholar] [CrossRef]
  4. Michalopoulos, G.; Kasapi, K.A.; Koubouris, G.; Psarras, G.; Arampatzis, G.; Hatzigiannakis, E.; Kokkinos, G. Adaptation of Mediterranean olive groves to climate change through sustainable cultivation practices. Climate 2020, 8, 54. [Google Scholar] [CrossRef] [Green Version]
  5. Gomez, J.A.; Amato, M.; Celano, G.; Koubouris, G.C. Organic olive orchards on sloping land: More than a specialty niche production system? J. Environ. Manag. 2008, 89, 99–109. [Google Scholar] [CrossRef] [PubMed]
  6. Di Fazio, S.; Modica, G. Historic rural landscapes: Sustainable planning strategies and action criteria. The Italian experience in the global and European context. Sustainability 2018, 10, 3834. [Google Scholar] [CrossRef] [Green Version]
  7. Hernández-Mogollón, J.M.; Di-Clemente, E.; Folgado-Fernández, J.A.; Campón-Cerro, A.M. Olive oil tourism: State of the art. Tour. Hosp. Manag. 2019, 25, 179–207. [Google Scholar] [CrossRef]
  8. Jurišić, M.; Šumanovac, L.; Zimmer, D.; Barač, Ž. Tehnički i tehnološki aspekti pri zaštiti bilja u sustavu precizne poljoprivrede. Poljoprivreda 2015, 21, 75–81. [Google Scholar] [CrossRef]
  9. Solano, F.; Di Fazio, S.; Modica, G. A methodology based on GEOBIA and WorldView-3 imagery to derive vegetation indices at tree crown detail in olive orchards. Int. J. Appl. Earth Obs. Geoinf. 2019, 83, 101912. [Google Scholar] [CrossRef]
  10. Nolè, G.; Pilogallo, A.; Lanortea, F.; De Santisa, F. Remote Sensing Techniques in Olive-Growing: A Review. Curr. Inves. Agri. Curr. Res. 2018, 2, 205–208. [Google Scholar] [CrossRef]
  11. Mitran, T.; Meena, R.S.; Chakraborty, A. Geospatial Technologies for Crops and Soils: An Overview. Geospat. Technol. Crops Soils 2021, 1–48. [Google Scholar] [CrossRef]
  12. Minařík, R.; Langhammer, J. Use of a multispectral UAV photogrammetry for detection and tracking of forest disturbance dynamics. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41. [Google Scholar] [CrossRef]
  13. Domazetović, F.; Šiljeg, A.; Marić, I.; Jurišić, M. Assessing the Vertical Accuracy of Worldview-3 Stereo-extracted Digital Surface Model over Olive Groves. GISTAM 2020, 246, 253. [Google Scholar] [CrossRef]
  14. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712. [Google Scholar] [CrossRef]
  15. Näsi, R.; Honkavaara, E.; Lyytikäinen-Saarenmaa, P.; Blomqvist, M.; Litkey, P.; Hakala, T.; Viljen, N.; Kantola, T.; Tanhuanpää, T.; Holopainen, M. Using UAV-Based Photogrammetry and Hyperspectral Imaging for Mapping Bark Beetle Damage at Tree-Level. Remote Sens. 2015, 7, 15467–15493. [Google Scholar] [CrossRef] [Green Version]
  16. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870. [Google Scholar] [CrossRef]
  17. Gao, Y.; Mas, J.F. A Comparison of the Performance of Pixel Based and Object Based Classifications over Images with Various Spatial Resolutions. Online J. Earth Sci. 2008, 2, 27–35. [Google Scholar]
  18. Weih, R.C.; Riggan, N.D. Object-based classification vs. pixel-based classification: Comparative importance of multi-resolution imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, C7. [Google Scholar]
  19. Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619. [Google Scholar] [CrossRef] [Green Version]
  20. Abburu, S.; Golla, S.B. Satellite image classification methods and techniques: A review. Int. J. Comput. Appl. 2015, 119. [Google Scholar] [CrossRef]
  21. Fan, R.; Hou, B.; Liu, J.; Yang, J.; Hong, Z. Registration of Multiresolution Remote Sensing Images Based on L2-Siamese Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 237–248. [Google Scholar] [CrossRef]
  22. Wang, P.; Wang, L.; Leung, H.; Zhang, G. Super-Resolution Mapping Based on Spatial–Spectral Correlation for Spectral Imagery. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2256–2268. [Google Scholar] [CrossRef]
  23. Wang, P.; Yao, H.; Li, C.; Zhang, G.; Leung, H. Multiresolution Analysis Based on Dual-Scale Regression for Pansharpening. IEEE Trans. Geosci. Remote Sens. 2021. [Google Scholar] [CrossRef]
  24. Gašparović, M.; Zrinjski, M.; Gudelj, M. Automatic cost-effective method for land cover classification (ALCC). Comput. Environ. Urban. Syst. 2019, 76, 1–10. [Google Scholar] [CrossRef]
  25. Modica, G.; Messina, G.; De Luca, G.; Fiozzo, V.; Praticò, S. Monitoring the vegetation vigor in heterogeneous citrus and olive orchards. A multiscale object-based approach to extract ‘trees’ crowns from UAV multispectral imagery. Comput. Electron. Agric. 2020, 175, 105500. [Google Scholar] [CrossRef]
  26. Torres-Sánchez, J.; de Castro, A.I.; Pena, J.M.; Jimenez-Brenes, F.M.; Arquero, O.; Lovera, M.; Lopez-Granados, F. Mapping the 3D structure of almond trees using UAV acquired photogrammetric point clouds and object-based image analysis. Biosyst. Eng. 2018, 176, 172–184. [Google Scholar] [CrossRef]
  27. Karydas, C.; Gewehr, S.; Iatrou, M.; Iatrou, G.; Mourelatos, S. Olive plantation mapping on a sub-tree scale with object-based image analysis of multispectral UAV data; Operational potential in tree stress monitoring. J. Imaging 2017, 3, 57. [Google Scholar] [CrossRef] [Green Version]
  28. Stateras, D.; Kalivas, D. Assessment of Olive Tree Canopy Characteristics and Yield Forecast Model Using High Resolution UAV Imagery. Agriculture 2020, 10, 385. [Google Scholar] [CrossRef]
  29. Díaz-Varela, R.A.; De la Rosa, R.; León, L.; Zarco-Tejada, P.J. High-resolution airborne UAV imagery to assess olive tree crown parameters using 3D photo reconstruction: Application in breeding trials. Remote Sens. 2015, 7, 4213–4232. [Google Scholar] [CrossRef] [Green Version]
  30. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with random forest using very high spatial resolution 8-band WorldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef] [Green Version]
  31. Liu, T.; Abd-Elrahman, A.; Morton, J.; Wilhelm, V.L. Comparing fully convolutional networks, random forest, support vector machine, and patch-based deep convolutional neural networks for object-based wetland mapping using images from small unmanned aircraft system. GIScience Remote Sens. 2018, 55, 243–264. [Google Scholar] [CrossRef]
  32. Castrignanò, A.; Belmonte, A.; Antelmi, I.; Quarto, R.; Quarto, F.; Shaddad, S.; Nigro, F. Semi-automatic method for early detection of Xylella fastidiosa in olive trees using UAV multispectral imagery and geostatistical-discriminant analysis. Remote Sens. 2021, 13, 14. [Google Scholar] [CrossRef]
  33. Story, M.; Congalton, R.G. Accuracy assessment: A ‘user’s perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
  34. Panđa, L.; Šiljeg, A.; Marić, I.; Domazetović, F.; Šiljeg, S.; Milošević, R. Usporedba GEOBIA klasifikacijskih algoritama na temelju Worldview-3 snimaka u izdvajanju šuma primorskih četinjača. Šumarski List 2021, 145, 535–544. [Google Scholar] [CrossRef]
  35. Cai, L.; Shi, W.; Miao, Z.; Hao, M. Accuracy assessment measures for object extraction from remote sensing images. Remote Sens. 2018, 10, 303. [Google Scholar] [CrossRef] [Green Version]
  36. Otukei, J.R.; Blaschke, T. Land cover change assessment using decision trees, support vector machines and maximum likelihood classification algorithms. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 27–31. [Google Scholar] [CrossRef]
  37. Mondal, A.; Kundu, S.; Chandniha, S.K.; Shukla, R.; Mishra, P.K. Comparison of support vector machine and maximum likelihood classification technique using satellite imagery. Int. J. Remote Sens. GIS 2012, 1, 116–123. [Google Scholar]
  38. Nitze, I.; Schulthess, U.; Asche, H. Comparison of machine learning algorithms random forest, artificial neural network and support vector machine to maximum likelihood for supervised crop type classification. In Proceedings of the 4th GEOBIA, Rio de Janeiro, Brazil, 7–9 May 2012. [Google Scholar]
  39. Karan, S.K.; Samadder, S.R. Accuracy of land use change detection using support vector machine and maximum likelihood techniques for open-cast coal mining areas. Environ. Monit. Assess. 2016, 188, 486. [Google Scholar] [CrossRef] [PubMed]
  40. Javna Ustanova “Natura Jadera”. Available online: https://natura-jadera.com/prirodne-vrijednosti/posebni-rezervati/maslinik-saljsko-polje/ (accessed on 20 September 2021).
  41. Peña-Barragán, J.M.; Jurado-Expósito, M.; López-Granados, F.; Atenciano, S.; Sánchez-De la Orden, M.; Garcıa-Ferrer, A.; Garcıa-Torres, L. Assessing land-use in olive groves from aerial photographs. Agric. Ecosyst. Environ. 2004, 103, 117–122. [Google Scholar] [CrossRef]
  42. James, M.R.; Robson, S. Straightforward reconstruction of 3D surfaces and topography with a camera: Accuracy and geoscience application. J. Geophys. Res. Earth Surf. 2012, 117. [Google Scholar] [CrossRef] [Green Version]
  43. Lin, J.; Wang, R.; Li, L.; Xiao, Z. A workflow of SfM-based digital outcrop reconstruction using Agisoft PhotoScan. In Proceedings of the 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC), Xiamen, China, 5–7 July 2019; pp. 711–715. [Google Scholar]
  44. Kingsland, K. Comparative analysis of digital photogrammetry software for cultural heritage. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00157. [Google Scholar] [CrossRef]
  45. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using unmanned aerial vehicles (UAV) for high-resolution reconstruction of topography: The structure from motion approach on coastal environments. Remote Sens. 2013, 5, 6880–6898. [Google Scholar] [CrossRef] [Green Version]
  46. Arza-García, M.; Gil-Docampo, M.; Ortiz-Sanz, J. A hybrid photogrammetry approach for archaeological sites: Block alignment issues in a case study (the Roman camp of A Cidadela). J. Cult. Herit. 2019, 38, 195–203. [Google Scholar] [CrossRef]
  47. Pena-Villasenin, S.; Gil-Docampo, M.; Ortiz-Sanz, J. Desktop vs cloud computing software for 3D measurement of building façades: The monastery of San Martín Pinario. Measurement 2020, 149, 106984. [Google Scholar] [CrossRef]
  48. Eisank, C.; Smith, M.; Hillier, J. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models. Geomorphology 2014, 214, 452–464. [Google Scholar] [CrossRef] [Green Version]
  49. Whiteside, T.G.; Maier, S.W.; Boggs, G.S. Area-based and location-based validation of classified image objects. Int. J. Appl. Earth Obs. Geoinf. 2014, 28, 117–130. [Google Scholar] [CrossRef]
  50. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  51. Liu, C.; Frazier, P.; Kumar, L. Comparative assessment of the measures of thematic classification accuracy. Remote Sens. Environ. 2007, 107, 606–616. [Google Scholar] [CrossRef]
  52. Bradley, A.P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997, 30, 1145–1159. [Google Scholar] [CrossRef] [Green Version]
  53. Mas, J.F.; Soares Filho, B.; Pontius, R.G.; Farfán Gutiérrez, M.; Rodrigues, H. A suite of tools for ROC analysis of spatial models. ISPRS Int. J. Geo Inf. 2013, 2, 869–887. [Google Scholar] [CrossRef]
  54. Arabameri, A.; Chen, W.; Loche, M.; Zhao, X.; Li, Y.; Lombardo, L.; Bui, D.T. Comparison of machine learning models for gully erosion susceptibility mapping. Geosci. Front. 2020, 11, 1609–1620. [Google Scholar] [CrossRef]
  55. Tharwat, A. Classification assessment methods. Appl. Comput. Inf. 2020. [Google Scholar] [CrossRef]
  56. Rahmati, O.; Tahmasebipour, N.; Haghizadeh, A.; Pourghasemi, H.R.; Feizizadeh, B. Evaluating the influence of geo-environmental factors on gully erosion in a semi-arid region of Iran: An integrated framework. Sci. Total Environ. 2017, 579, 913–927. [Google Scholar] [CrossRef]
  57. Arabameri, A.; Rezaei, K.; Pourghasemi, H.R.; Lee, S.; Yamani, M. GIS-based gully erosion susceptibility mapping: A comparison among three data-driven models and AHP knowledge-based technique. Environ. Earth Sci. 2018, 77, 628. [Google Scholar] [CrossRef]
  58. Momeni, R.; Aplin, P.; Boyd, D.S. Mapping complex urban land cover from spaceborne imagery: The influence of spatial resolution, spectral band set and classification approach. Remote Sens. 2016, 8, 88. [Google Scholar] [CrossRef] [Green Version]
  59. Fu, H.; Zhou, T.; Sun, C. Object-based shadow index via illumination intensity from high resolution satellite images over urban areas. Sensors 2020, 20, 1077. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Cetinkaya, H.; Kulak, M. Relationship between total phenolic, total flavonoid and oleuropein in different aged olive (Olea europaea L.) Cultivar leaves. Afr. J. Tradit. Complementary Altern. Med. 2016, 13, 81–85. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Study area.
Figure 2. The methodological framework of the research.
Figure 3. (A) DOP; (B) UAVMS; (C) DSM; (D) segmented image.
Figure 4. (A) PB-SVM; (B) PB-MLC; (C) GEOBIA-SVM; (D) GEOBIA-MLC.
Figure 5. Results of COR, COM and OQ metrics for: (A) PB-SVM; (B) PB-MLC; (C) GEOBIA-SVM; (D) GEOBIA-MLC on ROTs.
Figure 6. ROC curves and AUC values for the SVM and MLC classification algorithms used within PB and GEOBIA approaches: (A) using 500 random points; (B) using points within ROTs and FOTs.
Figure 7. The final model of olive trees within the topographic basin of Sali settlement.
Table 1. Results of COR, COM and OQ metrics for (A) PB-SVM; (B) PB-MLC; (C) GEOBIA-SVM; (D) GEOBIA-MLC on FOTs.

Test Area | (A) COR PB-SVM | COM PB-SVM | OQ PB-SVM | (B) COR PB-MLC | COM PB-MLC | OQ PB-MLC
FOT1  | 0.0337 | 0.9868 | 0.0337 | 0.8858 | 0.3399 | 0.3257
FOT2  | 0.3274 | 0.9510 | 0.3219 | 0.8777 | 0.2543 | 0.2456
FOT3  | 0.0564 | 0.9912 | 0.0564 | 0.3164 | 0.0075 | 0.0073
FOT4  | 0.0527 | 0.9838 | 0.0527 | 0.4259 | 0.0203 | 0.0198
FOT5  | 0.0876 | 0.9809 | 0.0875 | 0.8458 | 0.1304 | 0.1274
FOT6  | 0.0158 | 0.9911 | 0.0158 | 0.6000 | 0.0126 | 0.0125
FOT7  | 0.1362 | 0.9900 | 0.1360 | 0.9134 | 0.1649 | 0.1624
FOT8  | 0.0500 | 0.9791 | 0.0500 | 0.2491 | 0.0150 | 0.0143
FOT9  | 0.0068 | 0.9983 | 0.0068 | 0.7076 | 0.0057 | 0.0057
FOT10 | 0.1948 | 0.9937 | 0.1945 | 0.9185 | 0.1961 | 0.1928
FOT11 | 0.0397 | 0.9879 | 0.0397 | 0.3581 | 0.0042 | 0.0041
FOT12 | 0.0743 | 0.9824 | 0.0742 | 0.6395 | 0.0771 | 0.0739
FOT13 | 0.0347 | 0.9991 | 0.0347 | 0.7583 | 0.0188 | 0.0187
Total | 0.7595 | 0.0728 | 0.0712 | 0.6535 | 0.0959 | 0.0931

Test Area | (C) COR GEOBIA-SVM | COM GEOBIA-SVM | OQ GEOBIA-SVM | (D) COR GEOBIA-MLC | COM GEOBIA-MLC | OQ GEOBIA-MLC
FOT1  | 0.8343 | 0.0196 | 0.0196 | 0.9367 | 0.6308 | 0.6050
FOT2  | 0.8622 | 0.1161 | 0.1140 | 0.8105 | 0.1945 | 0.1860
FOT3  | 0      | 0      | 0      | 0.0476 | 0.0006 | 0.0006
FOT4  | 0      | 0      | 0      | 0.5009 | 0.0090 | 0.0089
FOT5  | 1.1588 | 0.0005 | 0.0005 | 0.8820 | 0.0201 | 0.0201
FOT6  | 0.2821 | 0.0005 | 0.0005 | 0.5186 | 0.0016 | 0.0016
FOT7  | 0.8524 | 0.0169 | 0.0169 | 0.8818 | 0.0833 | 0.0824
FOT8  | 0      | 0      | 0      | 0.3238 | 0.0094 | 0.0092
FOT9  | 0      | 0      | 0      | 0.6445 | 0.0018 | 0.0018
FOT10 | 0.9445 | 0.0532 | 0.0531 | 0.8923 | 0.1276 | 0.1256
FOT11 | 0      | 0      | 0      | 0.3753 | 0.0109 | 0.0107
FOT12 | 0      | 0      | 0      | 0.7343 | 0.0073 | 0.0073
FOT13 | 0      | 0      | 0      | 0.4043 | 0.0032 | 0.0031
Total | 0.3796 | 0.0159 | 0.0157 | 0.6117 | 0.0846 | 0.0817
Table 2. TA for PB-SVM, PB-MLC, GEOBIA-SVM and GEOBIA-MLC models.

Model          | COR   | COM   | OQ
TA PB-SVM      | 0.111 | 0.804 | 0.705
TA PB-MLC      | 0.240 | 0.725 | 0.648
TA GEOBIA-SVM  | 0.527 | 0.811 | 0.745
TA GEOBIA-MLC  | 0.234 | 0.778 | 0.658
Table 3. PA, UA and OA for: (A, A1) PB-SVM; (B, B1) PB-MLC; (C, C1) GEOBIA-SVM; (D, D1) GEOBIA-MLC. Parts A–D use the 500 random points; parts A1–D1 use the points generated within the ROTs and FOTs.

(A) PB-SVM, 500 random points
Class Value | Other | Olive Tree | Total | UA
Other       | 351   | 10         | 361   | 0.972
Olive Tree  | 69    | 70         | 139   | 0.504
Total       | 420   | 80         | 500   |
PA          | 0.836 | 0.875      |       | OA = 0.842

(A1) PB-SVM, points within ROTs and FOTs
Class Value | Other   | Olive Tree | Total   | UA
Other       | 762,662 | 2412       | 765,074 | 0.997
Olive Tree  | 108,345 | 15,360     | 123,705 | 0.124
Total       | 871,007 | 17,772     | 888,779 |
PA          | 0.876   | 0.864      |         | OA = 0.875

(B) PB-MLC, 500 random points
Class Value | Other | Olive Tree | Total | UA
Other       | 350   | 16         | 366   | 0.956
Olive Tree  | 70    | 64         | 134   | 0.478
Total       | 420   | 80         | 500   |
PA          | 0.833 | 0.800      |       | OA = 0.828

(B1) PB-MLC, points within ROTs and FOTs
Class Value | Other   | Olive Tree | Total   | UA
Other       | 731,779 | 3505       | 735,284 | 0.995
Olive Tree  | 139,228 | 14,267     | 153,495 | 0.093
Total       | 871,007 | 17,772     | 888,779 |
PA          | 0.840   | 0.803      |         | OA = 0.839

(C) GEOBIA-SVM, 500 random points
Class Value | Other | Olive Tree | Total | UA
Other       | 400   | 17         | 417   | 0.959
Olive Tree  | 20    | 63         | 83    | 0.759
Total       | 420   | 80         | 500   |
PA          | 0.952 | 0.788      |       | OA = 0.926

(C1) GEOBIA-SVM, points within ROTs and FOTs
Class Value | Other   | Olive Tree | Total   | UA
Other       | 856,246 | 3177       | 859,423 | 0.996
Olive Tree  | 14,761  | 14,595     | 29,356  | 0.497
Total       | 871,007 | 17,772     | 888,779 |
PA          | 0.983   | 0.821      |         | OA = 0.980

(D) GEOBIA-MLC, 500 random points
Class Value | Other | Olive Tree | Total | UA
Other       | 369   | 14         | 383   | 0.963
Olive Tree  | 51    | 66         | 117   | 0.564
Total       | 420   | 80         | 500   |
PA          | 0.879 | 0.825      |       | OA = 0.870

(D1) GEOBIA-MLC, points within ROTs and FOTs
Class Value | Other   | Olive Tree | Total   | UA
Other       | 799,475 | 2600       | 802,075 | 0.997
Olive Tree  | 71,532  | 15,172     | 86,704  | 0.175
Total       | 871,007 | 17,772     | 888,779 |
PA          | 0.918   | 0.854      |         | OA = 0.917