Article

UAV, a Farm Map, and Machine Learning Technology Convergence Classification Method of a Corn Cultivation Area

1 Department of Agricultural and Rural Engineering, Chungbuk National University, 1 Chungdae-ro, Seowon-gu, Cheongju 28644, Chungbuk, Korea
2 Geospatial Information, 26 Jeongbohwa-gil, Naju 58323, Jeollanamdo, Korea
* Author to whom correspondence should be addressed.
Agronomy 2021, 11(8), 1554; https://doi.org/10.3390/agronomy11081554
Submission received: 8 July 2021 / Revised: 28 July 2021 / Accepted: 2 August 2021 / Published: 4 August 2021

Abstract

South Korea’s agriculture is characterized by a mixture of various cultivated crops. In such an agricultural environment, technology converging ICT (information and communications technology), AI (artificial intelligence), and agriculture is required to classify objects and predict yields. Delineating the boundaries of paddy fields and upland fields generally takes a lot of time and effort, so the Farm Map was developed in Korea to clearly demarcate and classify these boundaries. This study therefore applied the Farm Map to minimize the time and effort required to divide paddy fields and fields, and proposes a method for optimizing cultivated crop recognition, since processing UAV imagery over a wide area otherwise demands considerable time and effort for object classification. The aim was to evaluate the applicability and effectiveness of machine learning classification techniques combined with a Farm Map for object-based mapping of agricultural land using unmanned aerial vehicles (UAVs). Two classifiers, the support vector machine (SVM) and random forest (RF), were used to improve classification accuracy. When the Farm Map-based SVM algorithm was applied to wide-area UAV images, the producer’s accuracy (PA) was 81.68%, the user’s accuracy (UA) was 75.09%, the Kappa coefficient was 0.77, and the F-measure was 0.78. The Farm Map-based RF algorithm yielded a PA of 96.58%, a UA of 92.27%, a Kappa coefficient of 0.94, and an F-measure of 0.94. In this cultivation environment of mixed crops, SVM estimated the corn cultivation area at 96.54 ha with an accuracy of 90.27%, while RF estimated 98.77 ha with an accuracy of 92.36%, higher than that of SVM. Using the Farm Map for object-based classification made agricultural land classification more time-efficient than the existing object classification method. Most importantly, it minimized the possibility of misclassification and thereby increased data processing efficiency. These results confirm that rapid and reliable analysis of crop cultivation areas is possible when UAV images, a Farm Map, and machine learning are combined.

1. Introduction

Corn, along with rice and wheat, is one of the world’s three major grains; it is a food crop with high productivity per unit area and is widely used for snacks, forage, starch, and cooking oil [1,2]. Worldwide consumption of corn continues to increase [3], and production reached 1135 million tons in 2019, the largest share of any food crop [4]. However, South Korea’s grain self-sufficiency rate was 21.7% in 2018, and corn had the lowest rate among major food crops at 0.8%. In addition, corn imports amount to about 10,166,000 tons, with an import value of 2126 million U.S. dollars, leaving the country heavily dependent on imports. The South Korean government is therefore preparing plans to increase the production of crops with high import dependence, such as corn, by cultivating them in paddy fields and improving varieties [5]. Global abnormal weather events such as those in 2020 highlight the instability of international grain prices and the importance of food security.
In South Korea in particular, which is highly reliant on imports, understanding the current cultivation area and production is very important for stabilizing consumer prices, making decisions on grain import policies, and efficiently managing self-sufficiency. Crop cultivation area surveys were previously carried out by selecting samples and visiting sites in person or by interviewing farmers about their cultivation intentions [6]. Obtaining objective data in this way is very difficult because the results depend mainly on observation and can vary with the skill of the investigator [7]. As an alternative, our research team proposed a method to check cultivated areas without directly visiting the site, and it is being applied in practice [6]. The proposed technology utilizes remote sensing, which can acquire objective information over large areas and is being applied in various ways [8,9,10]. In the remote sensing (RS) field in particular, unmanned aerial vehicles (UAVs) offer advantages in precision, economy, and revisit frequency compared to satellite imagery, so they are attracting attention as a suitable platform for agricultural monitoring in South Korea, where various crops are grown on small plots and precipitation is concentrated in summer [11].
Land use/land cover (LULC) and crop classification are among the most active research areas in the RS field [12,13,14,15,16,17,18]. The results of such research are useful as basic data for calculating cultivated areas and monitoring crop conditions in the agricultural field [19].
From around 2010, research on land cover and crop classification applying artificial intelligence increased rapidly with the development of software and hardware [20,21,22,23]. In the early stage in particular, land cover classification was performed using satellite data such as Landsat images and various machine learning algorithms (ANN: artificial neural network; SVM: support vector machine; RF: random forest), and their performance was compared [24,25,26]. During this period, studies evaluated and compared machine learning classification models for target regions of each country, mainly using satellite images [27,28,29,30]. Since then, research on land use and classification methods has developed continuously, and various applications of object-based classification methods with GIS (Geographic Information System) have been reported [7,31,32,33,34]. As various UAVs have been developed and distributed, the use of UAV imagery has increased not only in surveying but also in agriculture. In the agricultural field, research on detecting weeds in paddy fields or classifying cultivated crops using UAV images and object-based classification techniques has become active [35,36,37]. Most of these studies use SVM, RF, neural networks, Bayesian networks, and maximum likelihood classification methods to recognize plants or objects of interest [38,39,40,41,42,43,44,45,46].
In 2020, AI techniques using deep learning were proposed, but such research has focused mainly on leaves or individual plants [47]. A review of the proposed studies shows that deep learning has rarely been applied over wide areas; instead, ANN, SVM, and RF algorithms are mainly used there and have been reported to achieve higher accuracy than other machine learning classification techniques [48,49].
Previous domestic studies using UAV and satellite images include studies on autumn cabbage and radish [8], onion and garlic [35], winter crops [36], and potatoes in high-altitude cabbage cultivation regions [49]. Overseas, research is mainly conducted on each country’s major crops with high added value and on areas with social problems [50,51,52].
In South Korea, there is a lack of research on corn classification; studies on corn mainly estimate cultivated area and production overseas based on satellite images [27,28]. In addition, most studies using UAVs cover a field scale of about 20 to 200 ha, and very few address wide areas of 2000 ha or more [53].
In the future, as the scope of UAV applications diversifies, UAVs are expected to be used in increasingly varied ways in the agricultural field. Expanding their application in agriculture requires the development of related technologies and research on how to apply them. To this end, the Korean government has developed a Farm Map and is trying to use it in various agricultural fields [54]. The Farm Map provides important spatial information about agriculture and is updated every two years using satellite and aerial imagery. Although applications of the Farm Map are still limited, combining it with UAVs and machine learning can minimize the time and effort required for image processing [6].
Therefore, the purpose of this study is to (1) propose a UAV-based multi-spectral image acquisition and processing method for an area of 2000 ha or more, (2) apply a Farm Map and machine learning classification algorithm to identify the fields where corn is grown, and (3) estimate the corn cultivation area in the relevant area using the acquired cultivation land information.

2. Materials and Methods

2.1. Study Area

As shown in Figure 1, this study was conducted in Gammul-myeon (36°50′15″ N, 127°52′29″ E), located in the northeastern part of Goesan-gun, Chungcheongbuk-do, South Korea. The research area covers 4280 ha and consists of 7 legal districts. The western part of the region is a plain that forms agricultural land, and the southeast consists of mountainous areas where Juwolsan and Bakdalsan are located. To the northwest, the Dalcheon River flows, forming fertile agricultural land.
Goesan-gun is an area where waxy corn was developed, and edible varieties such as Mibaekchal and Miheukchal are mainly grown. In addition, the area was selected as an open-field cultivation test bed for Golden Matchal, a new variety developed by the RDA (Rural Development Administration, South Korea), to evaluate its local adaptability. The corn cultivation area of Gammul-myeon in Goesan-gun is 141.4 ha based on the 2017 agricultural business registration, making it the region with the highest proportion of corn cultivation area [54].

2.2. Corn Growth Schedule and Weather Conditions in Goesan-Gun

The corn growing period differs slightly depending on the region and cultivation method. Corn cultivation in Goesan-gun is mainly carried out by the seedling method: corn is sown in seedling plugs in March and planted in April after a seedling period of about 25–30 days. As shown in Figure 2, planted corn passes through the vegetative growth stages (VE–V9), the flowering stage (VT), and the reproductive growth stages (R1–R6) before harvest. The cultivated corn grew with the growth characteristics reported in previous studies [55,56]. The vegetative stages (VE–V9), during which leaf number and plant height increase markedly, last for about 60 to 70 days after planting. Corn with fully grown leaves and plant height begins flowering (VT) from early to mid-June. Flowered corn is harvested after passing through the reproductive stages (R1–R6) for about 30 days, a period in which leaf and height growth cease and the ears mature.
The optimum growing temperature for corn is 20–30 °C, but as shown in Figure 3, the average temperature in Goesan-gun in early April is 5–15 °C [57]. As a result, corn planted in early April often suffers cold damage. In addition, corn is one of the crops whose harvest, and hence farm income, falls sharply when rainfall is concentrated during the harvest season. In 2020, cultivation was greatly affected by cold weather in the early stages and by rain that continued through the harvest period (Figure 3).
The UAV image acquisition was carried out for four days over a total of two corn growth stages: 8–9 May 2020, which corresponds to stage V3 in Figure 2, and 18–19 June, which corresponds to stage VT. The field survey was conducted for a total of four days from 18 to 21 June in accordance with the UAV imaging period.

2.3. Farm Map (Electronic Map of Farmland)

The Farm Map was promoted with the goal of increasing the efficiency of agricultural policies by generating accurate farmland data consistent with field conditions while linking agricultural and rural administrative information. It is an electronic map produced to resolve the discrepancy between the boundaries on the South Korean cadastral map and the actual farmland, thereby constructing high-quality farmland information [54]. The Ministry of Agriculture, Food, and Rural Affairs, South Korea, established an electronic map of agricultural land across the country from aerial imagery acquired from 2014 to 2016 and has continuously updated it since 2017. The current Farm Map is updated every two years, dividing South Korea into eastern and western regions. In general rural areas, therefore, the Farm Map accurately reflects current farmland boundaries.
The initial purpose of the Farm Map was to provide accurate data to overcome the limitations of front-line subsidy payment and field verification work, after problems were raised about the difference between the area reported for agricultural subsidy inspection and the actual area. The Farm Map boundaries are highly accurate because the farmland was demarcated and divided by experts using high-resolution satellite and aerial images, and the map is now used effectively to prevent fraudulent receipt of agricultural subsidies. However, there are some problems in mapping lot numbers between the Farm Map and the cadastral map, so field utilization is still somewhat limited. In addition, geospatial data such as the Farm Map are still operated inefficiently because agricultural geospatial datasets are poorly interconnected and awareness of their use is low.
A Farm Map is produced through the following steps. First, satellite (Kompsat, etc.) and aerial imagery are collected. Second, the collected data are image-processed. Third, boundaries for seven agricultural land categories are demarcated. Fourth, the imagery is classified and segmented. Fifth, a field survey is performed for comparison with field conditions. Sixth, an on-site inspection ensures that the site conditions are properly reflected. Seventh, metadata are created so the map can be used in various agricultural applications. The Farm Map is an agricultural support map optimized for agriculture, linking aerial and satellite images, which are not easy for ordinary users to obtain and process, with field conditions. One noted drawback is that rapid changes in field conditions, such as on agricultural land near town centers, cannot be reflected immediately because the source data, such as aerial images, are updated only every two years.
On the other hand, rural areas such as the one in this study change little, so the electronic farmland map remains usable. The land use classification of the Farm Map is divided into seven categories: paddy fields, fields, orchards, facilities, ginseng, fallow land, and bare land. In this study, paddy fields, fields, orchards, ginseng, and fallow land were investigated, excluding facilities and bare land, using the 2019 Farm Map, the most recent data for the Chungcheongbuk-do region (Figure 4).

2.4. Crop Classification Method and Algorithm

The classification method used in this study is object-based. For wide-area classification using low-resolution satellite images, pixel-based classification has typically been used [13,15]. However, for high-resolution images of 5 m or finer, classification accuracy is limited by the salt-and-pepper effect [22]. For this reason, object-based classification is widely used for UAV or high-resolution satellite images with a spatial resolution of 5 m or finer [37,53,58].
Machine learning is a branch of AI that finds rules from input data and the expected answers for those data [59,60]. Learning from the input data is therefore essential. In the field of spatial information, machine learning is applied to classify or recognize objects in images and point clouds. Given a large amount of data related to a task, a machine learning model can find statistical structure in the data and derive rules to automate that task.
If data on crops are accumulated in the field of spatial information and such learning data can be automatically built, the work efficiency for machine learning will be improved. In this study, SVM and RF methods were reviewed to evaluate the efficiency of object classification by applying machine learning to the spatial information field.
SVM is an algorithm that finds the optimal linear decision boundary separating the classes using the concepts of support vectors and margins [61,62]. Here, the support vectors are the data points closest to the decision boundary, and the margin is the distance between the support vectors and the boundary. SVM finds the decision boundary that maximizes this margin. When the input data cannot be separated by a linear boundary, the kernel method can be used: the kernel trick maps the input data into a higher-dimensional space and finds the optimal decision boundary in that projected space [29]. Various nonlinear kernel functions can be used; the polynomial kernel and the radial basis function (RBF) kernel are representative [30]. In this study, the RBF kernel was selected. The next step was to select the hyperparameters: the regularization parameter (C) and gamma, the inverse of the kernel width [43]. C limits the importance of each training sample; the larger the C value, the more nonlinear the decision boundary becomes. Gamma defines the range of influence of each training sample; the larger the gamma (i.e., the narrower the kernel), the more complex the model.
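As a concrete illustration, the following is a minimal sketch of an RBF-kernel SVM in R. The kernlab package is one common choice (an assumption, not necessarily the authors’ implementation); it parameterizes the RBF kernel k(x, x′) = exp(−σ‖x − x′‖²) with sigma, which plays the role of gamma. The data frame parcels and its corn factor column are hypothetical.

```r
# Minimal sketch of an RBF-kernel SVM in R using kernlab (assumed package).
# "parcels" is a hypothetical data frame with the 10 reflectance/NDVI
# explanatory variables and a factor column "corn" (1 = corn, 0 = other).
library(kernlab)

svm_fit <- ksvm(corn ~ ., data = parcels,
                kernel = "rbfdot",             # radial basis function kernel
                kpar   = list(sigma = 0.25),   # kernel width (gamma analogue)
                C      = 1)                    # regularization parameter

pred <- predict(svm_fit, newdata = parcels)
table(observed = parcels$corn, predicted = pred)  # quick error matrix
```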
RF is an ensemble learning approach developed by Breiman [63]; it maximizes performance by aggregating the results of several decision trees, its basic components, into one result [64,65]. The final class is decided by weighted voting over the classes predicted by the generated decision trees. Since RF iteratively builds independent decision trees with maximum randomness in selecting the samples and variables for each tree, it reduces prediction error by lowering variance while keeping the low bias of decision trees [47,66]. In addition, RF is stable even for high-dimensional data with many explanatory variables because it accounts for interactions and nonlinearity among them.
The hyperparameters to consider in the RF algorithm are mtry and n.tree. Here, mtry is the number of variables randomly sampled at each split, and n.tree is the number of decision trees. Reducing mtry speeds up computation but decreases both the correlation between any two trees and the strength of the individual trees in the forest, which affects classification accuracy in a complicated way [66]. n.tree is known to have less influence on classification results than mtry [52].
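For illustration, a minimal sketch with the R package randomForest follows, using the hyperparameter values adopted later in Section 3.2 (n.tree = 500, mtry = 5); the parcels data frame is the same hypothetical one as in the SVM sketch above.

```r
# Minimal sketch of the RF classifier with the randomForest package.
# "parcels" is the same hypothetical data frame as in the SVM sketch.
library(randomForest)

set.seed(42)                                # reproducible bootstrap sampling
rf_fit <- randomForest(corn ~ ., data = parcels,
                       ntree = 500,         # n.tree: number of decision trees
                       mtry  = 5,           # variables tried at each split
                       importance = TRUE)   # record variable importance

print(rf_fit)       # out-of-bag error estimate and confusion matrix
varImpPlot(rf_fit)  # which bands/periods contribute most to the splits
```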

2.5. Verification of the Accuracy of the Classification Results

The accuracy of the SVM and RF classification results was evaluated by building an error matrix against the field survey data collected at the time of image acquisition. The error matrix yields five indicators for LULC classification results [25]: overall accuracy (OA), user’s accuracy (UA), producer’s accuracy (PA), the Kappa coefficient, and the F-measure. These indicators were computed from an error matrix prepared as shown in Table 1, where Type A and B denote the classification items. In the formulas, a is the number of rows (and columns) of the error matrix, n is the total number of observations, aii is the diagonal element for class i, ai+ is the total number of observations in row i (right margin), and a+j is the total number of observations in column j (bottom margin).
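The five indicators follow directly from the error matrix counts. Below is a hedged sketch in R using illustrative counts loosely based on the SVM result reported in Section 3.3, not the authors’ exact matrix.

```r
# Sketch: OA, PA, UA, Kappa, and F-measure from a 2 x 2 error matrix.
# Rows = classified, columns = reference; counts are illustrative only.
m <- matrix(c(437,  145,    # classified corn:  correct | commission errors
               98, 4820),   # classified other: omission errors | correct
            nrow = 2, byrow = TRUE,
            dimnames = list(classified = c("corn", "other"),
                            reference  = c("corn", "other")))

n  <- sum(m)
OA <- sum(diag(m)) / n                        # overall accuracy
PA_corn <- m["corn", "corn"] / sum(m[, "corn"])   # producer's accuracy (corn)
UA_corn <- m["corn", "corn"] / sum(m["corn", ])   # user's accuracy (corn)

pe    <- sum(rowSums(m) * colSums(m)) / n^2   # expected chance agreement
kappa <- (OA - pe) / (1 - pe)                 # Kappa coefficient

F1 <- 2 * PA_corn * UA_corn / (PA_corn + UA_corn)  # F-measure for corn
```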

2.6. Research Progression and Method

This study was conducted using the following steps and methods.
(1)
Image acquisition equipment: The UAV used to acquire wide-area images was a fixed-wing eBee Plus (Sensefly, Cheseaux-sur-Lausanne, Switzerland) (Figure 5). It carried an RGB camera (16 MP) and a multi-spectral sensor (Sequoia+, Parrot, Paris, France) with four spectral bands (12 MP): Green, Red, Red-Edge (RE), and Near Infrared (NIR). The Sequoia+ measures the ambient light in real time with the light sensor mounted on top, and this light information is recorded with every image and used for correction.
(2)
UAV image acquisition plan: An area of 2520 ha, excluding mountainous and non-agricultural land, was planned using eMotion 3 (Sensefly, Cheseaux-sur-Lausanne, Switzerland) software, as shown in Figure 6. In consideration of battery capacity, each flight was limited to 40 min, and a total of 18 routes were set to cover the entire farmland area. In Figure 6, the paths in the orange box were flown on Day 1 of each acquisition and those in the blue box on Day 2. The numbers in the boxes are arbitrary route numbers, and the numbers in parentheses are flight times. Acquiring images over the whole area took two days per campaign, performed once each in May and June, for a total of 4 days. The flight altitude was 110 m, giving a spatial resolution of 10 cm based on the multi-spectral bands. The collected single images were processed into one reflectance image per band using the Pix4D Mapper program (Pix4D S.A., Lausanne, Switzerland); production took about 26 h per campaign in the hardware environment of Table 2. Radiometric correction of each image was performed using the real-time light measurements from the light sensor in Figure 5 and the calibration panel during image processing.
(3)
Field investigation: For the field survey, the Farm Map was superimposed on the previously acquired UAV orthoimage, printed at A0 size, and the survey was conducted by visiting each site directly. As shown in Figure 7, using ArcGIS Pro (Esri, Redlands, CA, USA) software, corn cultivation parcels were coded as 1 and the rest as 0 based on the Farm Map boundaries.
(4)
Data preprocessing: The classification data were prepared by extracting the average reflectance of each lot from the acquired UAV imagery using the Farm Map boundary divisions. The explanatory variables of the classification model were the average Green, Red, Red-Edge, and Near Infrared (NIR) reflectance and the NDVI (Normalized Difference Vegetation Index) for each of the two periods, for a total of 10 variables (a sketch of this preprocessing appears after this list). NDVI was calculated using Equation (1):
NDVI = (NIR − RED) / (NIR + RED)
Here, NIR is the reflectance of the UAV image near infrared band, and RED is the reflectance of the Red band.
(5)
The dependent variable was whether or not corn was cultivated. The constructed variables were divided into training and test data: 30% of the total data were randomly selected for training, and the remaining 70% were used for testing.
(6)
Crop classification: In this study, an object-based classification method based on the Farm Map was applied. In object-based classification, image segmentation precedes object classification; the segmentation operation creates regions based on the similarity of texture, distance, and spacing of neighboring pixels. Applying this to each site requires finding the optimal segmentation through trial and error, which takes considerable time and effort. Therefore, in this study, image segmentation was omitted: each Farm Map boundary was set as one object and used directly for classification, allowing the high-volume UAV data to be used efficiently.
(7)
Accuracy evaluation: The UAV multispectral imagery consists of four bands, so analyzing time series images produces 4 × n-dimensional data. In this study, the SVM and RF algorithms, which remain interpretable on high-dimensional data, were applied and their accuracies compared. For crop classification, each Farm Map boundary was set as one object, and the SVM and RF algorithms classified it as corn or other crops. The classified results were evaluated for accuracy using an error matrix.
(8)
Analysis tools: ArcGIS Pro software version 2.5 was used to enter the dependent variable and to visualize the image classification results. All data preprocessing and classification procedures were performed in RStudio, an open-source environment. The SVM and RF models were built using the R packages “caret” and “randomForest”.
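As referenced in step (4), the following is a hedged sketch of the preprocessing in R under assumed packages (sf, terra, exactextractr) and hypothetical file names; the authors’ exact toolchain beyond RStudio, caret, and randomForest is not specified.

```r
# Hedged sketch of steps (4)-(5): per-lot mean reflectance and NDVI from the
# Farm Map polygons, then the 30/70 train/test split. Packages and file
# names are assumptions for illustration.
library(sf)             # read the Farm Map polygons
library(terra)          # read the multispectral orthoimage
library(exactextractr)  # fast zonal statistics per polygon

farm_map <- st_read("farm_map_gammul.shp")       # hypothetical path
bands    <- rast("uav_2020_05.tif")              # Green, Red, RE, NIR layers
names(bands) <- c("green", "red", "re", "nir")

feats <- exact_extract(bands, farm_map, "mean")  # mean reflectance per lot
feats$ndvi <- (feats$mean.nir - feats$mean.red) /
              (feats$mean.nir + feats$mean.red)  # Equation (1)

# Repeat for the June orthoimage and join by lot number to get 10 variables,
# then attach the corn/other label from the field survey. 30/70 split:
set.seed(1)
idx      <- sample(nrow(feats), round(0.3 * nrow(feats)))
train_df <- feats[idx, ]                         # 30% for training/tuning
test_df  <- feats[-idx, ]                        # 70% held out for testing
```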
Figure 8 shows a schematic of the whole study process, divided broadly into data acquisition, image pre-processing, image classification, and cultivation area estimation steps to calculate the total corn cultivation area in Gammul-myeon, South Korea.

3. Results

3.1. UAV Orthoimage and Field Survey

Image acquisition using the UAV was performed twice, in May (Figure 9a) and June (Figure 9b), over the agricultural land of Gammul-myeon, Goesan-gun, Chungcheongbuk-do, South Korea, excluding mountainous areas, as shown in Figure 9. The imaged area was 2721 ha in May and 2680 ha in June, corresponding to about 63% of the total area of Gammul-myeon. The numbers of single images were 48,564 and 47,858 in May and June, respectively, and the orthoimages mosaicked for each acquisition period were 14.8 and 14.7 GB. Table 3 summarizes the acquisition area, number of shots, and orthoimage size for each acquisition date.
South Korea uses cadastral maps to record land locations, lot numbers, ground points, and boundaries. However, in agricultural areas the cadastral map does not accurately reflect the relationship between field boundaries and land ownership, and farmers in many areas have demanded solutions to this problem. Accordingly, the South Korean government has produced and distributed farmland maps of crop-growing areas that are independent of land ownership, separate from the cadastral map [54]. This map is called the Farm Map and is currently used for agricultural statistics and policy operation [6].
The Farm Map was used because it accurately delineates farmland boundaries from aerial and satellite images. Its major constituent items are the PNU code, lot center coordinates, read code, arable land readjustment attributes, and lot area. First, the coordinate systems of the UAV image data and the Farm Map had to be integrated; in this study, both were unified to WGS 1984 UTM Zone 52. As shown in Figure 10, the Farm Map showed high accuracy because the study area consists mostly of agricultural land.
The total number of lots in Gammul-myeon is 5827 based on the Farm Map. The number of lots included in the UAV imagery is 5500, corresponding to 94.4% of the total agricultural land area, and the field survey covered these 5500 lots. Of these, 582 lots were under corn and 4918 under other crops. The cultivated area was calculated with ArcGIS Pro based on the Farm Map farmland divisions: the total agricultural land area was 855.99 ha, the corn area 106.94 ha, and the area of other crops 749.05 ha. Corn accounted for 12.5% of the total agricultural land area (Table 4).
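An equivalent tally can be made outside ArcGIS Pro; a brief hedged sketch in R with sf follows, where farm_map and its 0/1 survey label corn are the hypothetical objects from the earlier sketches.

```r
# Sketch: total cultivated area (ha) per class from the Farm Map polygons,
# analogous to the ArcGIS Pro calculation. "farm_map" and "corn" are
# hypothetical, as in the earlier sketches.
library(sf)

farm_map$area_ha <- as.numeric(st_area(farm_map)) / 1e4  # m^2 -> ha
aggregate(area_ha ~ corn, data = st_drop_geometry(farm_map), FUN = sum)
```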

3.2. Data Preprocessing and Hyperparameter Tuning

For data preprocessing, the reflectance and vegetation index (NDVI) for each lot were extracted using the multispectral images of the two periods and the Farm Map. The preprocessed data formed a table of 12 columns and 5500 rows, consisting of a lot number for parcel identification, the 10 explanatory variables, and the dependent variable. Before applying the classification algorithms, training data were set: of the 5500 lots, 1650 lots (30%) were randomly selected and used to tune the optimal hyperparameters. The most common tuning method is a grid search, which builds a model for every combination of candidate parameter values and finds the optimal one. The optimal combination was determined through 5-fold cross-validation on the training data.
For the SVM algorithm, the optimal hyperparameters, C and gamma, were selected by grid search. As shown in Figure 11a, C was set to 1 and gamma to 0.25, giving a training model accuracy of 0.955. For RF, the hyperparameters mtry and n.tree were considered: n.tree was set to 500, the default of the R package “randomForest”, and mtry was found by grid search. With an n.tree of 500 and an mtry of 5, the accuracy of the training model was about 0.958 (Figure 11b).
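A hedged sketch of this tuning with caret follows, using the train_df split from the preprocessing sketch in Section 2.6. Note that caret’s “svmRadial” method parameterizes the RBF kernel with kernlab’s sigma in place of gamma, and its “rf” method tunes only mtry while ntree is passed through to randomForest; the candidate grids here are illustrative, not the authors’ exact search space.

```r
# Hedged sketch of grid search with 5-fold cross-validation using caret.
# "train_df" is the hypothetical 30% training split; "corn" must be a factor.
library(caret)

ctrl <- trainControl(method = "cv", number = 5)  # 5-fold cross-validation

svm_cv <- train(corn ~ ., data = train_df,
                method    = "svmRadial",         # RBF-kernel SVM via kernlab
                trControl = ctrl,
                tuneGrid  = expand.grid(C     = c(0.25, 0.5, 1, 2, 4),
                                        sigma = c(0.0625, 0.125, 0.25, 0.5)))

rf_cv <- train(corn ~ ., data = train_df,
               method    = "rf",                 # randomForest backend
               trControl = ctrl,
               ntree     = 500,                  # fixed at the package default
               tuneGrid  = expand.grid(mtry = 2:10))

svm_cv$bestTune  # e.g., C = 1, sigma = 0.25 in this study
rf_cv$bestTune   # e.g., mtry = 5
```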

3.3. Crop Classification Results and Accuracy Verification

The cultivated crops in the study area were classified using the Farm Map-based SVM and RF algorithms. In general, the classification items would be the crops grown during the observation period; however, since this study aimed to estimate the cultivated area of corn, the items were set to corn and other crops. Figure 12 shows the corn cultivation areas classified by each algorithm. An error matrix was used to evaluate the accuracy of the classification results against the field survey data, with corn and other crops as the verification items. The accuracy evaluation covered the 5500 parcels captured in the UAV imagery.
With the Farm Map-based SVM algorithm, 437 corn lots and 4820 other-crop lots were classified correctly, for a total of 5257 correctly classified lots and an overall accuracy of 95.88%. For corn, the target of this study, the PA was 81.68% and the UA 75.09%; the Kappa coefficient was 0.7777 and the F-measure 0.78 (Table 5).
With the Farm Map-based RF algorithm, 537 corn lots and 4899 other-crop lots were classified correctly, for a total of 5436 correctly classified lots. The overall accuracy was 98.84%, higher than that of SVM. For corn, the PA was 96.58%, the UA 92.27%, the Kappa coefficient 0.94, and the F-measure 0.94, a higher classification performance than that of the SVM algorithm (Table 6).
The total corn cultivation area of Gammul-myeon was calculated from the classification results of the two algorithms. The Farm Map-based SVM algorithm gave 96.54 ha, 90.27% of the actual corn cultivation area; the Farm Map-based RF algorithm gave 98.77 ha, 92.36% of the actual area (Table 7).
These results show that estimating cultivation area over a wide region includes some error depending on the algorithm applied. RF was more accurate than SVM, while SVM was slightly faster. Given current computing power, the difference in processing time is not significant, so the priority should be improving efficiency and maximizing learning from wide-area imagery to secure accuracy.

4. Discussion

4.1. Crop Classification Methodology

The overall accuracies of the Farm Map-based SVM and RF classifications did not differ greatly, but the class-specific accuracy for corn did. The reason is the imbalance between the number of corn lots and the number of other lots: non-corn lots outnumbered corn lots more than seven-fold, so even a lower corn classification accuracy did not greatly affect the overall accuracy. It is therefore more appropriate to judge this study by the classification accuracy for corn. The corn results of the two algorithms differed by about 17% in UA and 15% in PA, confirming that the RF algorithm was more suitable for corn classification. In computation time, SVM took about 10 min and RF about 60 min. In short, when classifying a target crop in the agricultural field, the class-specific accuracy for that crop is a more meaningful criterion than the overall classification accuracy.
In addition, the time required for classification differs by algorithm but is short compared with the time a field survey takes. Considering continuing advances in computing speed, acquiring training data suitable for AI algorithms and learning from them are considered key to further increasing accuracy.

4.2. Comparison between the Existing Image Segmentation Method and the Method Using Farm Map Data

The Farm Map-based method was compared with the image segmentation used in existing object-based classification to verify its efficiency. Wide-area satellite image segmentation generally uses the region growing technique, which expands a region outward from a single seed pixel: a region is grown based on the similarity of spacing, texture, and distance of the surrounding pixels, and a new object is started when a parameter threshold is exceeded. Parameters for object segmentation include the segmentation scale, spatial information (shape), spectral information (color), compactness, and smoothness, and thresholds are generally chosen by trial and error while varying the parameter values. The optimal settings here were determined to be scale 700, shape 0.1, color 0.9, compactness 0.5, and smoothness 0.5; the segmentation result is shown in Figure 13 for comparison with the Farm Map boundary data. Image segmentation is generally complex and time-consuming [42]: segmenting the UAV imagery of the study area took about 18 h per run, and finding the optimal thresholds takes much longer because of the repeated trial and error.
On the other hand, the Farm Map reduces processing time because EPIS (Korea Agency of Education, Promotion and Information Service in Food, Agriculture, Forestry and Fishery) provides various kinds of information built by combining satellite and aerial images [54]. This study minimized the time required for image segmentation by using these openly provided data.
If the Farm Map is not used, the time and effort of image segmentation must also be invested. In the general method, land cover other than agricultural land, such as forests, built-up areas, roads, and water, is included, so it must first be classified and removed separately. When using the Farm Map, however, non-agricultural land cover is excluded from the classification items in advance, minimizing the possibility of misclassification and increasing classification accuracy and data processing efficiency.

5. Conclusions

In this study, the vegetative (V3) and flowering (VT) stages of corn were selected for Gammul-myeon, Goesan-gun, Chungcheongbuk-do, South Korea, and the cultivation area of corn was estimated using UAV imaging and Farm Map-based machine learning techniques.
The conclusions obtained in this study can be summarized as follows:
  • Existing UAV studies covered narrow areas of a lot unit (2 ha) or a field unit (20 ha), whereas this study covered a wide area of about 2700 ha. Acquiring the UAV imagery took about 4 days and 36 flights, and producing the reflectance images took about 52 h.
  • Classification of the wide-area images with the Farm Map-based SVM algorithm gave a PA of 81.68%, a UA of 75.09%, a Kappa coefficient of 0.77, and an F-measure of 0.78. The Farm Map-based RF algorithm gave a PA of 96.58%, a UA of 92.27%, a Kappa coefficient of 0.94, and an F-measure of 0.94, a higher accuracy than SVM.
  • The SVM algorithm estimated the corn cultivation area at 96.54 ha, an accuracy of 90.27%. The RF algorithm estimated 98.77 ha, a higher accuracy than SVM at 92.36%.
  • In classifying and delineating crop cultivation areas, object-based classification using the Farm Map proved very effective in work efficiency, accuracy, and processing time compared with the existing object segmentation method. The Farm Map-based method also increased data processing efficiency by minimizing the misclassification of farmland and cultivated crops. Overall, identifying crop cultivation areas by converging UAV imagery, the Farm Map, and machine learning enables rapid and reliable analysis.
The Farm Map is an agricultural land information map produced specifically for agriculture by the South Korean government. Although it may not immediately reflect changes in some agricultural areas, most areas change little within the two-year update cycle. If the farmland boundary and area information, digitized directly through visual interpretation and field inspection of aerial and satellite images, is used effectively in related fields, it will save time and support management in the agricultural field.
In particular, since agricultural work demands much effort and time in response to environmental changes, public data combined with machine learning algorithms can create a more efficient working environment for classifying crops and deriving cultivation areas. In the future, accuracy should be improved by acquiring and learning from large amounts of data so that AI can be applied further.

Author Contributions

Conceptualized and designed the research, J.-H.P. and D.-H.L.; performed the field experiments and data collection, H.-J.K. and D.-H.L.; analyzed the data and wrote the original manuscript, D.-H.L. and J.-H.P.; reviewed and revised the paper, J.-H.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Cooperative Research Program for Agriculture Science and Technology Development (Project No. PJ014049022021) from the Rural Development Administration, Korea.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors thank J.K. Park at the Rural Development Administration, Korea, for assistance during this research. The Farm Map used in this study was provided by the Korea Agency of Education, Promotion and Information Service in Food, Agriculture, Forestry and Fishery, and we are grateful for this. We would also like to thank all of the farmers for their cooperation in this research, as well as the editors and reviewers for their suggestions to improve the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

Summary of abbreviations used in this paper.
UAV: Unmanned aerial vehicle
ICT: Information and communications technology
AI: Artificial intelligence
RS: Remote sensing
GIS: Geographic information system
LULC: Land use/land cover
SVM: Support vector machine
RF: Random forest
ANN: Artificial neural network
OA: Overall accuracy
UA: User’s accuracy
PA: Producer’s accuracy
R: Red
G: Green
RE: Red edge
NIR: Near infrared
NDVI: Normalized Difference Vegetation Index

References

  1. Neeson, R. Going Organic: Organic Vegetable Production: A Guide to Convert to Organic Production; RIRDC: Wagga, Australia, 2007. [Google Scholar]
  2. Rosillo-Calle, F.; Johnson, F.X. Food Versus Fuel: An Informed Introduction to Biofuels; ZED Books: London, UK, 2010. [Google Scholar]
  3. Laser, M.; Lynd, L. Introduction to cellulosic energy crops. Cellul. Energy Crop. Syst. 2014, 1–14. [Google Scholar] [CrossRef]
  4. Ministry of Agriculture, Food and Rural Affairs Home Page. Available online: https://www.mafra.go.kr/mafra/366/subview.do?enc=Zm5jdDF8QEB8JTJGYmJzJTJGbWFmcmElMkY3MSUyRjMwNTQzOCUyRmFydGNsVmlldy5kbyUzRg%3D%3D (accessed on 19 April 2021).
  5. Baek, S.-B.; Son, B.-Y.; Kim, J.-T.; Bae, H.-H.; Go, Y.-S.; Kim, S.-L. Changes and prospects in the development of corn varieties in Korea. Korean Soc. Breed. Sci. 2020, 52, 93–102. [Google Scholar] [CrossRef]
  6. Park, J.K.; Park, J.H. Applicability evaluation of agricultural subsidies inspection using unmanned aerial vehicle. J. Korean Soc. Agric. Eng. 2016, 58, 29–37. [Google Scholar] [CrossRef]
  7. Jeong, W.-C.; Kim, S.-B. Utilization of UAV and GIS for Efficient Agricultural Area Survey. J. Converg. Inf. Technol. 2020, 10, 201–207. [Google Scholar] [CrossRef]
  8. Park, J.K.; Park, J.H. Crops classification using imagery of unmanned aerial vehicle (UAV). J. Korean Soc. Agric. Eng. 2015, 57, 91–97. [Google Scholar] [CrossRef]
  9. Na, S.I.; Park, C.W.; So, K.H.; Ahn, H.Y.; Lee, K.D. Application Method of Unmanned Aerial Vehicle for Crop Monitoring in Korea. Korean J. Remote Sens. 2018, 34, 829–846. [Google Scholar] [CrossRef]
  10. Ye, H.C.; Huang, W.J.; Huang, S.Y.; Cui, B.; Dong, Y.Y.; Guo, A.T.; Ren, Y.; Jin, Y. Recognition of Banana Fusarium Wilt Based on UAV Remote Sensing. Remote Sens. 2020, 12, 938. [Google Scholar] [CrossRef] [Green Version]
  11. Park, J.-K.; Das, A.; Park, J.-H. Application trend of unmanned aerial vehicle (UAV) image in agricultural sector: Review and proposal. Korean J. Agric. Sci. 2015, 42, 269–276. [Google Scholar] [CrossRef] [Green Version]
  12. Turner, B.; Meyer, W.B.; Skole, D.L. Global land-use/land-cover change: Towards an integrated study. Ambio 1994, 23, 91–95. [Google Scholar]
  13. Di Gregorio, A. Land Cover Classification System: Classification Concepts and User Manual: LCCS; Food & Agriculture Organization: Rome, Italy, 2005; Volume 2. [Google Scholar]
  14. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
  15. Fisher, P.; Comber, A.J.; Wadsworth, R. Land use and land cover: Contradiction or complement. In Re-Presenting GIS; John Wiley and Sons: Hoboken, NJ, USA, 2005; pp. 85–98. [Google Scholar]
  16. Warner, T.; Almutairi, A.; Lee, J.Y. Remote Sensing of Land Cover Change; SAGE: London, UK, 2009. [Google Scholar]
  17. Adam, E.; Mutanga, O.; Odindi, J.; Abdel-Rahman, E.M. Land-use/cover classification in a heterogeneous coastal landscape using RapidEye imagery: Evaluating the performance of random forest and support vector machines classifiers. Int. J. Remote Sens. 2014, 35, 3440–3458. [Google Scholar] [CrossRef]
  18. Tolessa, T.; Senbeta, F.; Kidane, M. The impact of land use/land cover change on ecosystem services in the central highlands of Ethiopia. Ecosyst. Serv. 2017, 23, 47–54. [Google Scholar] [CrossRef]
  19. Mubako, S.; Belhaj, O.; Heyman, J.; Hargrove, W.; Reyes, C. Monitoring of Land Use/Land-Cover Changes in the Arid Transboundary Middle Rio Grande Basin Using Remote Sensing. Remote Sens. 2018, 10, 2005. [Google Scholar] [CrossRef] [Green Version]
  20. Kim, Y.; Kwak, G.-H.; Lee, K.-D.; Na, S.-I.; Park, C.-W.; Park, N.-W. Performance evaluation of machine learning and deep learning algorithms in crop classification: Impact of hyper-parameters and training sample size. Korean J. Remote Sens. 2018, 34, 811–827. [Google Scholar] [CrossRef]
  21. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef] [Green Version]
  22. Shih, H.C.; Stow, D.A.; Tsai, Y.H. Guidance on and comparison of machine learning classifiers for Landsat-based land cover and land use mapping. Int. J. Remote Sens. 2019, 40, 1248–1274. [Google Scholar] [CrossRef]
  23. Jamali, A. Evaluation and comparison of eight machine learning models in land use/land cover mapping using Landsat 8 OLI: A case study of the northern region of Iran. SN Appl. Sci. 2019, 1. [Google Scholar] [CrossRef] [Green Version]
  24. de Jong, S.M.; Hornstra, T.; Maas, H.-G. An integrated spatial and spectral approach to the classification of Mediterranean land cover types: The SSC method. Int. J. Appl. Earth Obs. Geoinf. 2001, 3, 176–183. [Google Scholar] [CrossRef]
  25. Hsu, C.-W.; Chang, C.-C.; Lin, C.-J. A Practical Guide to Support Vector Classification; National Taiwan University: Taipei, Taiwan, 2003. [Google Scholar]
  26. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  27. Kim, N.; Cho, J.; Shibasaki, R.; Lee, Y. Estimation of Corn and Soybeans Yields of the US Midwest using Satellite Imagery and Climate Dataset. J. Clim. Res. 2014, 9, 315–329. [Google Scholar] [CrossRef]
  28. Park, H.-J.; Ahn, J.-B.; Jung, M.-P. Correlation between the maize yield and satellite-based vegetation index and agricultural climate factors in the three provinces of Northeast China. Korean J. Remote Sens. 2017, 33, 709–720. [Google Scholar] [CrossRef]
  29. Ghosh, A.; Joshi, P.K. A comparison of selected classification algorithms for mapping bamboo patches in lower Gangetic plains using very high resolution WorldView 2 imagery. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 298–311. [Google Scholar] [CrossRef]
  30. Kuo, B.-C.; Ho, H.-H.; Li, C.-H.; Hung, C.-C.; Taur, J.-S. A kernel-based feature selection method for SVM with RBF kernel for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 317–326. [Google Scholar] [CrossRef]
  31. Stefanski, J.; Mack, B.; Waske, B. Optimization of Object-Based Image Analysis with Random Forests for Land Cover Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2492–2504. [Google Scholar] [CrossRef]
  32. Nitze, I.; Barrett, B.; Cawkwell, F. Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 136–146. [Google Scholar] [CrossRef] [Green Version]
  33. Kim, Y.; Park, N.-W.; Hong, S.; Lee, K.; Yoo, H.Y. Early production of large-area crop classification map using time-series vegetation index and past crop cultivation patterns-A case study in Iowa State, USA. Korean J. Remote Sens. 2014, 30, 493–503. [Google Scholar] [CrossRef] [Green Version]
  34. Hong, S.; Lee, B.; Jang, S.; Park, Y. Improve the quality of farm map using unmanned aerial photogrammetry. In Proceedings of the Conference of the Korean Society for GeoSpatial Information Science, Jeju, Korea, 2015; pp. 159–162. [Google Scholar]
  35. Lee, K.-D.; Lee, Y.-E.; Park, C.-W.; Na, S.-I. A comparative study of image classification method to classify onion and garlic using Unmanned Aerial Vehicle (UAV) imagery. Korean J. Soil. Sci. Fertil. 2016, 49, 743–750. [Google Scholar] [CrossRef] [Green Version]
  36. Na, S.I.; Park, C.W.; So, K.H.; Park, J.M.; Lee, K.D. Satellite Imagery based Winter Crop Classification Mapping using Hierarchical Classification. Korean J. Remote Sens. 2017, 33, 677–687. [Google Scholar] [CrossRef]
  37. Huang, H.S.; Lan, Y.B.; Yang, A.Q.; Zhang, Y.L.; Wen, S.; Deng, J.Z. Deep learning versus Object-based Image Analysis (OBIA) in weed mapping of UAV imagery. Int. J. Remote Sens. 2020, 41, 3446–3479. [Google Scholar] [CrossRef]
  38. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  39. Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian network classifiers. Mach. Learn. 1997, 29, 131–163. [Google Scholar] [CrossRef] [Green Version]
  40. Ho, T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. 1998, 20, 832–844. [Google Scholar]
  41. Erbek, F.S.; Ozkan, C.; Taberner, M. Comparison of maximum likelihood classification method with supervised artificial neural network algorithms for land use activities. Int. J. Remote Sens. 2004, 25, 1733–1748. [Google Scholar] [CrossRef]
  42. Abdel-Rahman, E.M.; Mutanga, O.; Adam, E.; Ismail, R. Detecting Sirex noctilio grey-attacked and lightning-struck pine trees using airborne hyperspectral data, random forest and support vector machines classifiers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 48–59. [Google Scholar] [CrossRef]
  43. Shao, Y.; Lunetta, R.S. Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points. ISPRS J. Photogramm. Remote Sens. 2012, 70, 78–87. [Google Scholar] [CrossRef]
  44. Hu, W.; Huang, Y.Y.; Wei, L.; Zhang, F.; Li, H.C. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015. [Google Scholar] [CrossRef] [Green Version]
  45. Diago, M.P.; Sanz-Garcia, A.; Millan, B.; Blasco, J.; Tardaguila, J. Assessment of flower number per inflorescence in grapevine by image analysis under field conditions. J. Sci. Food Agric. 2014, 94, 1981–1987. [Google Scholar] [CrossRef]
  46. Dias, P.A.; Tabb, A.; Medeiros, H. Multispecies Fruit Flower Detection Using a Refined Semantic Segmentation Network. IEEE Robot. Autom. Lett. 2018, 3, 3003–3010. [Google Scholar] [CrossRef] [Green Version]
  47. Sheykhmousa, M.; Mahdianpari, M.; Ghanbari, H.; Mohammadimanesh, F.; Ghamisi, P.; Homayouni, S. Support Vector Machine vs. Random Forest for Remote Sensing Image Classification: A Meta-analysis and systematic review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020. [Google Scholar] [CrossRef]
  48. Carranza-Garcia, M.; Garcia-Gutierrez, J.; Riquelme, J.C. A Framework for Evaluating Land Use and Land Cover Classification Using Convolutional Neural Networks. Remote Sens. 2019, 11, 274. [Google Scholar] [CrossRef] [Green Version]
  49. Lee, D.H.; Shin, H.S.; Park, J.H. Developing a p-NDVI Map for Highland Kimchi Cabbage Using Spectral Information from UAVs and a Field Spectral Radiometer. Agronomy 2020, 10, 1798. [Google Scholar] [CrossRef]
  50. Stumpf, A.; Kerle, N. Object-oriented mapping of landslides using Random Forests. Remote Sens. Environ. 2011, 115, 2564–2577. [Google Scholar] [CrossRef]
  51. Puissant, A.; Rougier, S.; Stumpf, A. Object-oriented mapping of urban trees using Random Forest classifiers. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 235–245. [Google Scholar] [CrossRef]
  52. Berhane, T.M.; Lane, C.R.; Wu, Q.; Autrey, B.C.; Anenkhonov, O.A.; Chepinoga, V.V.; Liu, H. Decision-tree, rule-based, and random forest classification of high-resolution multispectral imagery for wetland mapping and inventory. Remote Sens. 2018, 10, 580. [Google Scholar] [CrossRef] [Green Version]
  53. Ma, L.; Cheng, L.; Li, M.C.; Liu, Y.X.; Ma, X.X. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery. ISPRS J. Photogramm. Remote Sens. 2015, 102, 14–27. [Google Scholar] [CrossRef]
  54. Korea Agency of Education, Promotion and Information Service in Food, Agriculture, Forestry and Fishery Homepage. Available online: https://agis.epis.or.kr/ASD/main/intro.do# (accessed on 10 April 2021).
  55. University of Arkansas. Available online: https://www.uaex.edu/publications/pdf/mp437/mp437.pdf (accessed on 25 June 2021).
  56. Ransom, J.; Endres, G. Corn Growth and Management Quick Guide; North Dakota State University: Fargo, ND, USA, 2013. [Google Scholar]
  57. Korea Meteorological Administration Homepage. Available online: https://data.kma.go.kr/data/grnd/selectAsosRltmList.do?pgmNo=36 (accessed on 18 April 2021).
  58. Cheng, G.; Han, J.W.; Zhou, P.C.; Guo, L. Multi-class geospatial object detection and geographic image classification based on collection of part detectors. ISPRS J. Photogramm. Remote Sens. 2014, 98, 119–132. [Google Scholar] [CrossRef]
  59. Dangeti, P. Statistics for Machine Learning; Packt Publishing Ltd.: Birmingham, UK, 2017. [Google Scholar]
  60. Géron, A. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems; O’Reilly Media: Newton, MA, USA, 2017. [Google Scholar]
  61. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  62. Chang, C.; Lin, C. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar]
  63. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  64. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222. [Google Scholar] [CrossRef]
  65. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  66. Klusowski, J.M. Complete analysis of a random forest model. arXiv 2018, arXiv:1805.02587. [Google Scholar]
Figure 1. The location of the study area. In the figure on the right, the yellow line indicates the boundary of Gammul-myeon, the study area.
Figure 2. Corn cultivation and growth process, with the timing of field UAV imaging and field surveys.
Figure 3. Precipitation and temperature variations during the 2020 corn growing season in Goesan-gun.
Figure 4. A farmland map created by superimposing UAV orthoimages of the study area on the Farm Map.
Figure 5. Configuration of the fixed-wing UAV (eBee Plus, senseFly) and onboard devices used to collect imagery over a wide area.
Figure 6. Flight route planning prior to UAV image acquisition for wide-area data collection.
Figure 7. The classification of corn cultivation areas used for field surveys, created by overlaying the Farm Map in ArcGIS. Numbers in the figure are coded as 1 for corn-growing fields and 0 for other fields.
Figure 8. Schematic of the workflow for corn cultivation area estimation: UAV image acquisition, Farm Map, field survey, pre-processing, and classification.
Figure 9. Wide-area orthoimages acquired in May and June using the UAV with the Sequoia multispectral and RGB cameras, targeting agricultural land except for mountainous areas in Gammul-myeon, Goesan-gun, Chungcheongbuk-do, South Korea. (a) Orthoimages acquired on 8–9 May 2020; (b) orthoimages acquired on 18–19 June 2020.
Figure 10. A comparison of the farmland boundary based on the Farm Map and cadastral map.
Figure 11. Hyperparameter tuning results obtained with the preprocessed training dataset for the SVM and RF algorithms. (a) A heat map of cross-validation scores over the SVM hyperparameters C and gamma; (b) grid search results for mtry, the RF hyperparameter.
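The tuning shown in Figure 11 can be reproduced in outline with scikit-learn's GridSearchCV. The following is a minimal sketch under the assumption of a scikit-learn workflow; the feature matrix X, labels y, and grid values are illustrative placeholders rather than the authors' actual data or settings (in scikit-learn, max_features plays the role of R's mtry).

```python
# Minimal sketch of SVM (C, gamma) and RF (mtry-style) grid searches.
# X and y are random placeholders standing in for per-object features
# (e.g., band statistics per Farm Map parcel) and corn/other labels.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 8))        # hypothetical per-object spectral features
y = rng.integers(0, 2, 200)     # hypothetical labels: 1 = corn, 0 = others

# (a) SVM: cross-validated grid over C and gamma (RBF kernel)
svm_grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]},
    cv=5,
)
svm_grid.fit(X, y)
print("best SVM params:", svm_grid.best_params_)

# (b) RF: grid over max_features, the scikit-learn analogue of mtry
rf_grid = GridSearchCV(
    RandomForestClassifier(n_estimators=500, random_state=0),
    param_grid={"max_features": [1, 2, 3, 4, 5]},
    cv=5,
)
rf_grid.fit(X, y)
print("best RF max_features:", rf_grid.best_params_)
```

In this setup, the heat map in Figure 11a would correspond to the cross-validation scores stored in svm_grid.cv_results_, plotted over the (C, gamma) grid.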
Figure 12. Corn plantations classified by the Farm Map-based SVM and RF algorithms, compared with the field survey results.
Figure 13. Comparison of farmland classification using (a) the previously used region-growing image segmentation method and (b) the Farm Map-based method.
Table 1. Equations related to classification error evaluation by the error matrix.

| Classification Result | Field Survey: Type A | Field Survey: Type B | Total | Producer's Accuracy (%) |
|---|---|---|---|---|
| Type A | $a_{11}$ | $a_{12}$ | $\sum_{j=1}^{n} a_{1j}$ | $a_{11} \big/ \sum_{j=1}^{n} a_{1j}$ |
| Type B | $a_{21}$ | $a_{22}$ | $\sum_{j=1}^{n} a_{2j}$ | $a_{22} \big/ \sum_{j=1}^{n} a_{2j}$ |
| Total | $\sum_{i=1}^{n} a_{i1}$ | $\sum_{i=1}^{n} a_{i2}$ | $N = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}$ | |
| User's Accuracy (%) | $a_{11} \big/ \sum_{i=1}^{n} a_{i1}$ | $a_{22} \big/ \sum_{i=1}^{n} a_{i2}$ | | |

Overall Accuracy (%) $= \sum_{i=1}^{n} a_{ii} \big/ \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}$

Kappa Coefficient $= \dfrac{N \sum_{i=1}^{n} a_{ii} - \sum_{i=1}^{n} (a_{i+} \times a_{+i})}{N^{2} - \sum_{i=1}^{n} (a_{i+} \times a_{+i})} = \dfrac{\text{observed accuracy} - \text{chance agreement}}{1 - \text{chance agreement}}$

F-measure: $\text{Precision} = \dfrac{a_{11}}{a_{11} + a_{12}}$, $\text{Recall} = \dfrac{a_{11}}{a_{11} + a_{21}}$, $\text{F-measure} = \dfrac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$

where $a_{i+}$ and $a_{+i}$ denote the row and column totals of the error matrix and $N$ is the total number of samples.
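As a worked check on the formulas in Table 1, the following minimal NumPy sketch (ours, not the authors' code) applies them to the SVM error matrix reported later in Table 5:

```python
import numpy as np

# Error matrix from Table 5 (Farm Map-based SVM):
# rows = classification result, columns = field survey; order = [corn, others]
a = np.array([[437, 145],
              [98, 4820]])
N = a.sum()  # 5500 reference objects in total

producers_acc = np.diag(a) / a.sum(axis=1)  # row-wise, as defined in Table 1
users_acc = np.diag(a) / a.sum(axis=0)      # column-wise, as defined in Table 1
overall_acc = np.trace(a) / N

# Kappa = (observed accuracy - chance agreement) / (1 - chance agreement)
chance = (a.sum(axis=1) * a.sum(axis=0)).sum() / N**2
kappa = (overall_acc - chance) / (1 - chance)

# F-measure for the corn class
precision = a[0, 0] / (a[0, 0] + a[0, 1])
recall = a[0, 0] / (a[0, 0] + a[1, 0])
f_measure = 2 * precision * recall / (precision + recall)

print(producers_acc)  # [0.7509, 0.9801] -> PA column of Table 5
print(users_acc)      # [0.8168, 0.9708] -> UA row of Table 5
print(overall_acc)    # 0.9558 -> OA 95.58%
print(kappa)          # ~0.758 (Table 5 reports 0.77; see note below)
print(f_measure)      # 0.7824 -> 0.78
```

Computed directly from these counts, the Kappa coefficient comes out at approximately 0.76, slightly below the reported 0.77; the small gap presumably reflects rounding in the published matrix. The other metrics match Tables 1 and 5 exactly.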
Table 2. Hardware specifications of the image processing system used in this study.

| Item | Specification |
|---|---|
| CPU | Intel(R) Core(TM) i9-7900X @ 3.30 GHz |
| RAM | 64 GB |
| Graphics card | NVIDIA GeForce GTX 1080 Ti |
Table 3. Data acquisition area of UAV images, number of shots, and file size of orthoimages for each date of data acquisition.

| Date of Acquisition | Area (ha) | Number of Shots (ea) | File Size (GB) |
|---|---|---|---|
| 8–9 May 2020 | 2721 | 48,564 | 14.8 |
| 18–19 June 2020 | 2680 | 47,858 | 14.7 |
Table 4. Number of fields and cultivation area of corn and other crops for the entire agricultural land area in Gammul-myeon.

| Crop | Number of Fields (ea) | Area (ha) | Area Rate (%) |
|---|---|---|---|
| Corn | 582 | 109.94 | 12.5 |
| Others | 4918 | 749.05 | 87.5 |
Table 5. Error matrix for evaluating the accuracy of the Farm Map-based SVM algorithm.

| Classification Result | Field Survey: Corn | Field Survey: Others | Total | Producer's Accuracy (%) |
|---|---|---|---|---|
| Corn | 437 | 145 | 582 | 75.09 |
| Others | 98 | 4820 | 4918 | 98.01 |
| Total | 535 | 4965 | 5500 | |
| User's Accuracy (%) | 81.68 | 97.08 | | |

Overall Accuracy (%): 95.58
Kappa Coefficient: 0.77
F-measure: 0.78
Table 6. Error matrix for evaluating the accuracy of the Farm Map-based RF algorithm.

| Classification Result | Field Survey: Corn | Field Survey: Others | Total | Producer's Accuracy (%) |
|---|---|---|---|---|
| Corn | 537 | 45 | 582 | 92.27 |
| Others | 19 | 4899 | 4918 | 99.61 |
| Total | 556 | 4944 | 5500 | |
| User's Accuracy (%) | 96.58 | 99.09 | | |

Overall Accuracy (%): 98.84
Kappa Coefficient: 0.94
F-measure: 0.94
Table 7. Corn cultivation area and accuracy according to classification method.

| Classification Method | Area (ha) | Area Accuracy (%) |
|---|---|---|
| Field survey | 106.94 | 100 |
| SVM | 96.54 | 90.27 |
| RF | 98.77 | 92.36 |
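Read this way, the area accuracy in Table 7 is simply the classified area expressed as a percentage of the field-surveyed area; as a worked instance for the SVM row:

$$\text{Area accuracy} = \frac{\text{classified area}}{\text{surveyed area}} \times 100 = \frac{96.54\ \text{ha}}{106.94\ \text{ha}} \times 100 \approx 90.27\%$$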
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
