Article

Identification of Crop Type Based on C-AENN Using Time Series Sentinel-1A SAR Data

1 College of Computer and Information Engineering, Henan University, Kaifeng 475004, China
2 Henan Engineering Research Center of Intelligent Technology and Application, Henan University, Kaifeng 475004, China
3 Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China
4 Faculty of Engineering and Technology, Multimedia University, Melaka 76450, Malaysia
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(6), 1379; https://doi.org/10.3390/rs14061379
Submission received: 9 February 2022 / Revised: 9 March 2022 / Accepted: 9 March 2022 / Published: 12 March 2022

Abstract: Crop type identification is the initial stage and an important part of an agricultural monitoring system. Synthetic aperture radar (SAR) Sentinel-1A imagery is known to provide a reliable data source for crop type identification. However, a single-temporal SAR image does not contain enough features, and the unique physical characteristics of radar images remain underexploited, which limits its potential in crop mapping. In addition, current methods may not be applicable to time-series SAR data. To address these issues, a new crop type identification method was proposed. Specifically, a farmland mask was first generated with the object Markov random field (OMRF) model to remove the interference of non-farmland factors. Then, the standard backscatter coefficient, Sigma-naught (σ0), and the backscatter coefficient normalized by the incident angle, Gamma-naught (γ0), were extracted for each crop type, and the optimal feature combination was found from time-series SAR images by means of Jeffries-Matusita (J-M) distance analysis. Finally, to make efficient use of the optimal multi-temporal feature combination, a new network, the convolutional-autoencoder neural network (C-AENN), was developed for the crop type identification task. To prove the effectiveness of the method, several classical machine learning methods, such as support vector machine (SVM) and random forest (RF), and deep learning methods, such as the one-dimensional convolutional neural network (1D-CNN) and stacked auto-encoder (SAE), were used for comparison. In terms of quantitative assessment, the proposed method achieved the highest accuracy, with a macro-F1 score of 0.9825, an overall accuracy (OA) of 0.9794, and a Kappa coefficient (Kappa) of 0.9705. In terms of qualitative assessment, four typical regions were chosen for intuitive comparison with the sample maps, and the identification result covering the study area was compared with a contemporaneous optical image, both of which indicated the high accuracy of the proposed method. In short, this study enables the effective identification of crop types, demonstrating the importance of multi-temporal radar images in feature combination and the necessity of deep learning networks for extracting complex features.

1. Introduction

With the intensive development of agricultural production modes in China, smart agriculture has emerged [1], and there is an urgent need for large-scale and efficient monitoring of crops [2]. Because remote sensing technology offers objectivity and economy, its application in agriculture is continuously expanding and deepening [3,4]. At present, agricultural remote sensing applications include crop type identification [5], yield estimation [6], soil moisture inversion [7], and growth and phenological phase monitoring [8,9]. Among them, crop type identification is a prerequisite for government departments to grasp the status of crop production, which is of great significance to agricultural management.
Crop type identification by remote sensing is usually carried out by visual interpretation [10] or computer techniques (supervised and unsupervised classification methods [11]). Currently, the employment of optical images for crop type identification has achieved good results [12,13]. However, optical remote sensing is susceptible to interference from weather [14,15]. Cloudy and rainy days often occur during the key growing stages of crops, making it difficult to obtain usable images, which affects the accuracy and timeliness of crop type identification. Fortunately, microwave remote sensing is playing an increasingly important role [16]. Synthetic aperture radar (SAR) data can be acquired day and night and in all weather. SAR captures not only the surface information of crops; the leaves, stems, and branches of crops are also reflected in the signal to some extent [17]. Moreover, Sentinel-1A images have good spatial and temporal resolution [18], which ensures the reliability of the data and shows potential value in crop type identification. Therefore, Sentinel-1A was selected as the main data source for this study.
Up to now, scholars have pursued two major directions to improve the accuracy of crop type identification using SAR data. First, time-series images are used to complement the necessary features for crop type identification [19,20]. Second, mainstream deep learning technology is adopted to maximize the utilization of features [21,22].
Due to the complexity and similarity of crop types, the discriminative information in single-temporal SAR data is limited [23], and it is difficult to identify crop types effectively. Multi-temporal SAR images, on the other hand, can be used to improve the accuracy of crop type identification [24,25]. It has been shown that crop development over time leads to changes in the backscatter coefficient of SAR images [26]. Multi-temporal SAR images are richer in features, covering different stages of the crop. Xiao et al. [27] applied the subspace K-nearest neighbor (KNN) classifier to Sentinel-1 images of the crop growing season, which indicated the importance of time-series SAR data for crop type identification in rural China. Arias et al. [28] extracted polarimetric features (VH, VV, and VH/VV) from time-series Sentinel-1 images and used supervised classification techniques for crop classification. The results showed that the combination of time-series features provided accurate results, with an overall accuracy of over 70%. However, these studies usually consider only the standard backscatter coefficient, Sigma-naught (σ0) [29,30], and ignore the influence of the backscatter coefficient normalized by the incident angle, Gamma-naught (γ0), on crop type identification. It has been shown that the incident angle is an important factor affecting SAR backscattering intensity and that γ0 captures the backscatter process of the sensor signal well [31].
The abundance of features makes the processing and analysis of SAR images complicated in agricultural applications. In order to make full use of the multi-temporal information of SAR images, various methods have been developed. Classifiers such as support vector machine (SVM) [32], random forest (RF) [33], and KNN [34] have proven to be very helpful. These methods have strong versatility, but their feature extraction ability is limited. In recent years, deep learning methods [35,36] have been used as an ensemble framework that can learn directly from data and conduct classification in an end-to-end manner [37]. Ndikumana et al. [38] proposed a recurrent neural network (RNN) for agricultural land cover classification, which demonstrated the effectiveness of long short-term memory networks (LSTMs) and gated recurrent units (GRUs) for processing sequence features. Zhang et al. [39] employed a new crop discrimination network with multi-scale features (MSCDN) to improve crop classification ability, with an overall accuracy of up to 99%. The application of deep learning methods has obtained favorable results. Nevertheless, most of these networks use only one model framework, which potentially restricts their feature mining capabilities. It remains a challenge to develop a network that can efficiently exploit features.
This study aims to develop a deep learning method to improve the accuracy of crop type identification using Sentinel-1A SAR images. First, the object Markov random field (OMRF) model was applied to produce a farmland mask, removing non-farmland areas. Then, the backscatter coefficient features σ0 and γ0 were extracted for each crop type, and Jeffries-Matusita (J-M) distance analysis was carried out to obtain the optimal combination of identification features. Finally, the convolutional-autoencoder neural network (C-AENN) was proposed for learning valid features automatically for the crop type identification task. The main contributions of this paper are:
  • Extraction of the backscatter coefficients σ0 and γ0 from the time-series Sentinel-1A images and their combination, which fully exploits the potential of time-series Sentinel-1A images in crop type identification.
  • An effective crop type identification classifier, C-AENN, proposed for time-series SAR data. It has outstanding feature learning and representation capabilities, which improves identification accuracy.
The rest of this paper is organized as follows. Section 2 describes the study area and data. Section 3 introduces the crop type identification method in detail. Section 4 shows the identification results. Section 5 discusses the findings, and Section 6 concludes the paper.

2. Study Area and Data

2.1. Study Area

The study area is Kaifeng City (113°52′15″E–115°15′42″E, 34°11′45″N–35°01′20″N), Henan Province, China. As shown in Figure 1, it is situated in the center of Henan Province, adjacent to the provincial capital, and covers an area of about 6266 km2. Kaifeng has a continental monsoon climate, with an average annual temperature of 14.52 °C and an annual rainfall of 635 mm; the rainfall is usually concentrated in July and August. In addition, Kaifeng has become an important base for agricultural production. It is located on a plain, and the soil is mostly clay, loess, and sandy soil, which is suitable for crop cultivation. Farmers usually adopt a two-crops-a-year rotation mode: one stage runs from June to October, defined as stage I; the other runs from October to June of the following year. This research focuses on stage I, during which the crops are mainly corn, peanut, and a small amount of rice.

2.2. Data and Preprocessing

2.2.1. Sentinel-1A Data and Preprocessing

Sentinel-1A, launched in April 2014, is an active microwave remote sensing satellite with a revisit period of 12 days [40]; it images land and ocean day and night and in all weather. The specific parameters of the data are shown in Table 1. The SAR image of 17 March 2021 was selected for farmland extraction, since at that time the characteristics of farmland and non-farmland (buildings, water) differed considerably. A farmland mask was therefore created first to remove the non-farmland areas. SAR images from 3 July 2021 to 1 September 2021 were used for crop type identification, which almost covers the crop growth cycle (stage I).
The Sentinel-1A data were preprocessed with the Sentinel Application Platform (SNAP) software provided by the European Space Agency (ESA). The preprocessing mainly consists of: (1) Orbit file application. (2) Thermal noise removal. (3) Calibration, which converts the pixel values into σ0 and γ0 bands. (4) Mosaicking, in which multiple images covering the study area are mosaicked together. (5) Speckle filtering, using Refined Lee filtering (7 × 7 pixel window) to reduce the impact of coherent speckle noise. (6) Range-Doppler terrain correction, using Shuttle Radar Topography Mission (SRTM) data to assign precise geographic information. (7) Conversion of the backscatter coefficient from linear to dB scale. (8) Study area extraction.
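As an illustration of step (7), a minimal numpy sketch of the linear-to-dB conversion is given below; the clipping floor is an assumption added only to keep the logarithm defined at zero-valued pixels.

```python
import numpy as np

def linear_to_db(backscatter_linear, floor=1e-6):
    """Convert linear-scale backscatter (sigma0 or gamma0) to decibels."""
    # Clip to a small positive floor so log10 stays defined at zero-valued pixels.
    return 10.0 * np.log10(np.maximum(backscatter_linear, floor))

# A calibrated linear value of 0.05 corresponds to about -13.01 dB.
print(linear_to_db(np.array([0.05, 0.5])))  # [-13.0103  -3.0103]
```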

2.2.2. Ground Truth Data and Preprocessing

In order to obtain a comprehensive and accurate view of the crop planting structure in the study area, the local Department of Agriculture and Rural Affairs was visited to obtain the general distribution of crops. A field survey was carried out from July to September 2021 to collect and record crop sample data in the study area. Due to the family cropping mode of small-scale farming in China, the distribution of parcels is relatively fragmented and one parcel often contains multiple crops, which increases the difficulty of sample collection. Given this complexity, parcels with larger areas were selected for sample collection. A Global Positioning System (GPS) device was used to locate the four corners of each parcel to ensure that the ground data were reliable.
The major crop samples were converted into shapefile (SHP) files in ArcGIS Geographic Information System software, which provided ground data support for crop type identification. The ground truth data include the main crop types and their geographic distribution, as shown in Figure 2. The crop types in the study area were classified into four categories: peanut, maize, rice, and others; the "others" category is mostly mixed vegetable planting. Eventually, 109 typical parcels (158,223 pixels) were selected as samples, comprising 36 peanut, 45 corn, 14 rice, and 14 "others" parcels. In this study, 50% (79,111 pixels) of the samples were randomly selected as training samples and the remaining 50% (79,112 pixels) as test samples. The parameters of the samples are given in Table 2.
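As an illustration of this conversion, the following minimal geopandas sketch turns four GPS corner fixes into one labeled parcel polygon and writes it to a shapefile; the coordinates, crop label, and output file name are hypothetical.

```python
import geopandas as gpd
from shapely.geometry import Polygon

# Hypothetical GPS fixes (lon, lat) for the four corners of one parcel.
corners = [(114.30, 34.80), (114.31, 34.80), (114.31, 34.81), (114.30, 34.81)]

parcel = gpd.GeoDataFrame(
    {"crop": ["peanut"]},            # hypothetical crop label
    geometry=[Polygon(corners)],
    crs="EPSG:4326",                 # WGS 84, the native GPS datum
)
parcel.to_file("parcel_samples.shp")  # shapefile usable in ArcGIS
```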

2.2.3. Optical Reference Data and Preprocessing

Due to the difficulty of obtaining ground truth data covering the entire study area, Sentinel-2A imagery was used as reference data to qualitatively evaluate the relative accuracy of the crop type identification results. Sentinel-2A performs multispectral high-resolution imaging; its short-wave infrared and near-infrared bands are critical for monitoring crops, and the spectral characteristics of its bands are shown in Table 3. Sentinel-2A images possess good resolvability; however, they can only be obtained under cloudless and rainless conditions. Affected by the weather, only one Sentinel-2A image (25 August 2021) with relatively little cloudiness was available over the study area during the study period.
Sentinel-2A data preprocessing mainly consists of: (1) Resampling, in which all bands are resampled to a uniform resolution of 10 m. (2) Layer stacking, in which band combinations are employed to better explain the features in an image; specific information can be extracted by using band combinations. (3) Mosaicking. (4) Study area extraction.

3. Methodology

3.1. Overview

The flow chart of the proposed method is presented in Figure 3. After data acquisition and preprocessing, the crop type identification method was divided into three steps. In step 1, the farmland areas were extracted by the OMRF model in order to exclude the interference of non-farmland factors. In step 2, the backscatter features σ0 and γ0 of each crop were extracted. By analyzing the J-M distances of the crop samples under different time-phase conditions, the optimal feature combination was selected, which improves identification accuracy and efficiency. In step 3, a novel classifier, C-AENN, which can efficiently mine useful features, was developed to generate the crop type identification model. For comparison, the classic machine learning methods SVM, RF, and KNN and the neural network methods artificial neural network (ANN), one-dimensional convolutional neural network (1D-CNN), and stacked auto-encoder (SAE) were also applied. Finally, the identification results of the model were evaluated quantitatively and qualitatively.

3.2. Crop Type Identification

3.2.1. Farmland Extraction

Since the backscatter features of crops at some growth stages may be similar to those of non-farmland objects, confusion may arise. Therefore, this study first extracted the farmland areas. For example, in the early sowing stage, some crops and buildings have similar backscatter features, which makes them difficult to distinguish. Removing the non-farmland regions in advance avoids the interference of other ground objects in the crop type identification results [41].
This research first selected a SAR image with a large difference between farmland and non-farmland features, and applied the OMRF model [42] to classify the image into buildings, water, and farmland. The OMRF model takes full account of the spatial contextual information of the pixels in the image and has high resistance to noise. After that, buildings and water were removed and farmland was retained.
In detail, the OMRF model first uses a basic segmentation method, such as simple linear iterative clustering (SLIC) or mean shift (MS), to segment a given SAR image into several over-segmented regions, denoted by $R = \{R_i \mid i = 1, 2, \ldots, k\}$, where each $R_i$ is an over-segmented region and $k$ is the total number of over-segmented regions. These regions are then used to build a region adjacency graph (RAG).
Two random fields are defined on the RAG: the label field $x = \{x_i \mid i = 1, 2, \ldots, k\}$, which represents the category of each $R_i$, and the feature field $y = \{y_i \mid i = 1, 2, \ldots, k\}$, which represents the average value of each $R_i$. According to the maximum a posteriori (MAP) criterion and the Bayesian formulation, the classification problem of the SAR image can be solved by finding the best result $\hat{x}$ from the posterior probability $P(x \mid y)$, as shown in Equation (1).
$$\hat{x} = \arg\max_{x} P(x \mid y) = \arg\max_{x} \frac{P(y \mid x) P(x)}{P(y)} = \arg\max_{x} P(y \mid x) P(x) \qquad (1)$$
The solution of $\hat{x}$ can then be converted into an energy minimization problem, expressed by Equation (2), where $E(y \mid x)$ is the conditional energy and $E(x)$ is the prior energy. Finally, the best classification result $\hat{x}$ is found using the iterated conditional modes (ICM) optimization algorithm.
$$\hat{x} = \arg\min_{x} \{ -\ln P(y \mid x) - \ln P(x) \} = \arg\min_{x} \{ E(y \mid x) + E(x) \} \qquad (2)$$
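To make the optimization concrete, the following minimal Python sketch runs ICM on a region adjacency graph with a Gaussian conditional energy $E(y \mid x)$ and a Potts prior energy $E(x)$; the Gaussian class model and the smoothing weight beta are illustrative assumptions, not parameters reported in this paper.

```python
import numpy as np

def icm_rag(means, adjacency, class_means, class_vars, beta=1.0, n_iter=10):
    """Minimal ICM sketch for Equation (2) on a region adjacency graph.

    means       : (k,) mean feature value of each over-segmented region
    adjacency   : list of neighbor-index lists, one per region
    class_means : (c,) assumed Gaussian mean of each class (hypothetical)
    class_vars  : (c,) assumed Gaussian variance of each class (hypothetical)
    beta        : Potts smoothing weight (assumption)
    """
    c = len(class_means)
    # E(y|x): negative Gaussian log-likelihood of each region under each class.
    cond = (0.5 * np.log(2 * np.pi * class_vars)[None, :]
            + (means[:, None] - class_means[None, :]) ** 2
            / (2 * class_vars[None, :]))
    labels = cond.argmin(axis=1)  # initialize with maximum likelihood
    for _ in range(n_iter):
        for i in range(len(means)):
            # E(x): Potts prior -- penalize disagreement with neighbors.
            prior = np.array([beta * sum(labels[j] != l for j in adjacency[i])
                              for l in range(c)])
            labels[i] = (cond[i] + prior).argmin()  # local energy minimization
    return labels

# Toy demo: three regions in a chain, two assumed classes.
print(icm_rag(np.array([0.1, 0.9, 0.15]), [[1], [0, 2], [1]],
              class_means=np.array([0.1, 1.0]),
              class_vars=np.array([0.05, 0.05])))  # [0 1 0]
```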

3.2.2. Feature Extraction and J-M Distance Analysis

The backscatter coefficients σ0 and γ0 of each crop sample were extracted from the SAR images. In order to find the optimal feature combination for crop identification, J-M distances between the crop samples were calculated under different time-phase conditions.
Based on conditional probability theory, the J-M distance measures the separability of sample characteristics [43], as illustrated in Equation (3). It is a useful tool to evaluate whether the sample features are adequate. The J-M distance ranges from 0 to 2. A value above 1.9 indicates that the selected sample features have good separability, a value below 1.8 indicates moderate separability, and a value below 1 indicates that the two types of samples are not separable [44]. Based on this analysis, the optimal combination of features can be obtained:
$$D_{ij} = \left\{ \int_{x} \left[ \sqrt{P(x / w_i)} - \sqrt{P(x / w_j)} \right]^2 dx \right\}^{0.5} \qquad (3)$$
where $D_{ij}$ represents the separability of class $i$ and class $j$, and $P(x/w_i)$ and $P(x/w_j)$ are the conditional probability densities of the feature $x$ under class $i$ and class $j$, respectively.
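In practice, the integral in Equation (3) is usually evaluated in closed form under a multivariate-Gaussian assumption, with JM = 2(1 − e^(−B)) and B the Bhattacharyya distance, which matches the 0–2 range quoted above. The following sketch uses that common Gaussian form as an assumption; the paper does not state its exact computation.

```python
import numpy as np

def jm_distance(samples_i, samples_j):
    """Jeffries-Matusita distance between two sample sets (rows = samples).

    Uses the common multivariate-Gaussian form JM = 2 * (1 - exp(-B)),
    where B is the Bhattacharyya distance (an assumption; Equation (3)
    is stated in its general integral form).
    """
    m_i, m_j = samples_i.mean(axis=0), samples_j.mean(axis=0)
    c_i = np.cov(samples_i, rowvar=False)
    c_j = np.cov(samples_j, rowvar=False)
    c = 0.5 * (c_i + c_j)
    diff = m_i - m_j
    b = (0.125 * diff @ np.linalg.solve(c, diff)
         + 0.5 * np.log(np.linalg.det(c)
                        / np.sqrt(np.linalg.det(c_i) * np.linalg.det(c_j))))
    return 2.0 * (1.0 - np.exp(-b))

# Two well-separated synthetic feature clouds give a J-M distance close to 2.
rng = np.random.default_rng(0)
print(jm_distance(rng.normal(0.0, 1.0, (500, 4)),
                  rng.normal(5.0, 1.0, (500, 4))))  # ~2.0
```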

3.2.3. C-AENN Classifier

In order to effectively utilize the optimal feature combination, the C-AENN classifier was built to obtain the crop type identification model. The structure of C-AENN is shown in Figure 4; it consists of three main components: a 1D-CNN, an SAE, and a Softmax classification layer. CNNs [45,46] and SAEs [47] are widely used deep learning methods that have proven effective for feature mining and representation [48]. The C-AENN classifier proposed in this paper combines these two deep learning models to exploit their classification capabilities. Specifically, the 1D-CNN model performs feature extraction on the original optimal identification features. The features extracted by the 1D-CNN are then used as the input of the SAE, which performs further feature mining to obtain new feature expressions.
In the 1D-CNN, three convolutional filters were set up. Multilayer convolution can extract complex features and the relationships embedded in the preceding and following sequences. The convolution layers were followed by max-pooling and batch-normalization layers, which reduce the output dimensionality and improve the generalization capability of the network. Finally, a flatten layer converted the multi-dimensional data into one-dimensional data. In the SAE, the features extracted by the 1D-CNN were fed into the encoder and compressed into a latent space representation. The decoder then reconstructed this representation, mapping the feature vectors back to the SAE input size. The number of nodes in each layer of the SAE was set to 128, 64, 32, 16, 32, 64, and 128, respectively. Finally, the extracted features were fed into the Softmax layer for classification.
In brief, the C-AENN classifier fuses two network frameworks, 1D-CNN and SAE. The 1D-CNN extracts relevant temporal features from the time-series SAR data; the SAE projects these features into a lower-dimensional latent space and then attempts to reproduce its input. The process uses supervised learning to discover internal correlations in the time-series feature dataset and extract as much useful information as possible. Compared with using 1D-CNN or SAE alone, C-AENN retains and obtains a better representation of the SAR data.
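As a concrete illustration, the Keras sketch below assembles the topology described above. The input shape (12, 2), the three convolution layers followed by pooling, batch normalization, and flattening, the 256-node 1D-CNN output (see Section 5), the 128-64-32-16-32-64-128 SAE stack, and the four-class Softmax follow the text; the filter counts and kernel sizes are illustrative assumptions, so this is a sketch rather than the authors' exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_c_aenn(n_steps=12, n_feats=2, n_classes=4):
    """Sketch of the C-AENN topology described in Section 3.2.3."""
    inputs = keras.Input(shape=(n_steps, n_feats))
    x = inputs
    # Three 1-D convolutions; filter counts and kernel sizes are assumptions.
    for filters, kernel in [(64, 7), (128, 5), (256, 3)]:
        x = layers.Conv1D(filters, kernel, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)   # reduce output dimensionality
    x = layers.BatchNormalization()(x)        # improve generalization
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)  # 1D-CNN output width per Section 5
    # Stacked auto-encoder: compress to a 16-node latent code, then reconstruct;
    # the decoder output feeds the Softmax classification layer.
    for units in [128, 64, 32, 16, 32, 64, 128]:
        x = layers.Dense(units, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_c_aenn()
model.summary()
```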

3.2.4. Other Classifiers

Several existing classifiers were used for comparison with the proposed method: the classical machine learning methods SVM, RF, and KNN, and the neural network methods ANN, 1D-CNN, and SAE. The hyperparameters of the SVM, RF, KNN, and ANN are listed in Table 4. The structures of the 1D-CNN and SAE are shown in Figure 4; these two networks were also tested separately.

3.3. Model Accuracy Evaluation

For the purpose of assessing the accuracy of different classifiers, the identification results were evaluated both quantitatively and qualitatively.
For the quantitative evaluation, precision, recall, F1-score, macro-F1, overall accuracy (OA), and the Kappa coefficient (Kappa) are used; their calculations are given in Equations (4)–(10), respectively. Higher values indicate better results. The definitions of true positive (TP), false positive (FP), false negative (FN), and true negative (TN) are given by the confusion matrix in Table 5, where A and B are the assumed categories.
  • Precision;
$$\mathrm{precision} = \frac{TP}{TP + FP} \qquad (4)$$
  • Recall;
$$\mathrm{recall} = \frac{TP}{TP + FN} \qquad (5)$$
  • F1-score;
$$F1\text{-}\mathrm{score} = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} \qquad (6)$$
  • Macro-F1;
$$\mathrm{macro}\text{-}F1 = \mathrm{Average}(F1\text{-}\mathrm{score}) \qquad (7)$$
  • OA;
$$\mathrm{OA} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (8)$$
  • Kappa.
$$\mathrm{Kappa} = \frac{\mathrm{OA} - p_e}{1 - p_e} \qquad (9)$$
$$p_e = \frac{(TP + FN)(FP + TP) + (FP + TN)(FN + TN)}{(FP + TN + TP + FN)^2} \qquad (10)$$
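For reference, a minimal Python sketch of Equations (4)–(10) for a single binary confusion matrix is given below; the counts in the demo call are hypothetical.

```python
def binary_metrics(tp, fp, fn, tn):
    """Equations (4)-(10) for one class of a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    total = tp + fp + fn + tn
    oa = (tp + tn) / total                                   # Equation (8)
    p_e = ((tp + fn) * (fp + tp) + (fp + tn) * (fn + tn)) / total ** 2
    kappa = (oa - p_e) / (1 - p_e)                           # Equation (9)
    return {"precision": precision, "recall": recall, "f1": f1,
            "oa": oa, "kappa": kappa}

# Hypothetical counts for illustration only.
print(binary_metrics(tp=90, fp=5, fn=10, tn=95))
```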
For the qualitative assessment, two approaches were chosen: one is a visual comparison between the sample maps and the result maps of the different classifiers, and the other is a comparison of the identification result covering the study area with an optical image of the same period. Both quantitative and qualitative evaluation results support the validity of the proposed methodology.

4. Results

4.1. Temporal Profiles of the Sentinel-1A Backscatter Coefficient

The backscatter coefficients in an image reflect, to some extent, the geometric and physical characteristics of the illuminated object, assisting in the identification of crop types. Figure 5 summarizes the temporal profiles of the four crop types, with each point being the average backscatter coefficient of the samples of each crop type, which provides temporal dynamic information about the crops.
Several vital points can be seen in Figure 5, which are summarized in two respects.
1.
The necessity of combining VV polarization and VH polarization.
Overall, due to long-term coverage by a water layer, rice always exhibited a low backscatter coefficient [49]. In contrast, peanut had a high backscatter coefficient. Moreover, the backscatter coefficients of maize and the "others" category intersected several times, which made them more difficult to distinguish. Because its average backscatter coefficient did not intersect with those of other crops, peanut was relatively easy to distinguish under VV polarization, and rice showed good separability under VH polarization. Furthermore, in July, the stems of corn elongated upward rapidly, so corn showed greater variation in the backscatter coefficient under VH polarization, a polarization mode that is more sensitive to crop structure changes [50]. In August, rice was in the filling stage and required a lot of water, and VV polarization is very sensitive to water [50]; thus, rice varied more under VV polarization. In summary, both polarization modes provided information necessary to differentiate crops. To better recognize crops, VV and VH polarizations were combined in this study.
2.
The criticality of time-series features.
It can be seen that the trends of the backscatter coefficients σ0 and γ0 over time were generally consistent, and the average σ0 was about 1 dB lower than γ0 for all crops. At the beginning of July, peanut, corn, and rice had just been sown and were in the early stages of growth; the radar response was mainly surface scattering governed by soil roughness, tillage conditions, and soil moisture content. From July to August, the crops were in the peak growing season, and the backscatter coefficients gradually increased, reaching a maximum on 27 July 2021. In August, the backscatter coefficients of all crops decreased, which may be related to variations in crop growth and soil water content. Overall, the curves of peanut and the "others" category were relatively flat, while the backscatter coefficients of corn and rice were more variable, which indicates the necessity of using time-series images to distinguish them.

4.2. Farmland Mask

The extraction result of the farmland mask is shown in Figure 6. The farmland mask was generated by applying OMRF to the SAR image of 17 March 2021, as shown in Figure 6a,b, and was then applied to the time-series SAR images of the study period, as shown in Figure 6c. Subsequent analyses were conducted only in the farmland area.

4.3. J-M Distance Analysis of Features

The backscatter coefficients σ0 and γ0 of peanut, corn, rice, and the other crops were extracted and showed different separability under different time-phase conditions. J-M distances were calculated for the samples based on both a single-temporal image and multi-temporal images.
Table 6 gives the J-M distances of the crop samples for a single-temporal image; the maximum J-M distance for each crop pair is shown in bold. Based on the preceding backscatter coefficient analysis, the VV and VH polarization features were combined, so that σ0 in Table 6 represents σ0 VV and σ0 VH, and γ0 represents γ0 VV and γ0 VH. It can be seen that the J-M distances of the σ0 and γ0 features did not differ significantly and that the values were all low, essentially below 1.0, which indicates a large similarity between samples and poor separability. Therefore, the σ0 and γ0 features were combined to explore crop divisibility. The results are shown in Table 7, where the symbol σ0-γ0 represents four features: σ0 VV, σ0 VH, γ0 VV, and γ0 VH. The combination of σ0 and γ0 features clearly improved the J-M distance values; the distances between some crops already exceeded 1.0, but there was still much room for improvement.
Within a single-temporal SAR image, different crops may have similar backscatter responses [51], causing the J-M distance values to be poor and insufficient for the application requirements. Multi-temporal SAR images, however, can characterize crop growth patterns. Typically, the growth patterns of different crops are not identical, and the differences between their temporal trends are key to distinguishing them. Table 8 shows the J-M distances of the σ0 and γ0 features based on multi-temporal SAR images. The J-M distance tended to increase as more images were added. When the images of 3 July 2021, 15 July 2021, 27 July 2021, 8 August 2021, 20 August 2021, and 1 September 2021 were all combined (defined as T), the J-M distance values exceeded those of any other combination of SAR images. Notably, the σ0 and γ0 features had their own advantages: between peanut and maize, peanut and rice, and maize and rice, the γ0 feature was better than the σ0 feature, while the opposite held for the remaining pairs. Hence, the σ0 and γ0 features were combined to test whether the J-M distance values improved further. The results are given in Table 9. The combined σ0 and γ0 features from SAR image set T showed the greatest degree of separation, with the J-M distance between each crop pair exceeding 1.9. This means that the combination of σ0 and γ0 was productive and that the samples exhibited good separability. Therefore, this feature combination was selected as the optimal identification feature for subsequent crop identification.

4.4. Model Generation

Each SAR image has two polarizations, so four features can be obtained per image: σ0 VV, σ0 VH, γ0 VV, and γ0 VH. Since the optimal feature combination contains six images, each sample had a feature shape of (1, 24), as shown in Figure 7. To facilitate input into C-AENN, the samples were reshaped so that each sample had a shape of (12, 2). The reshaped sample features were then fed to the classifier, as sketched below.
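A minimal numpy sketch of this reshaping step follows; the array contents are placeholders, with only the shapes taken from the text.

```python
import numpy as np

# Hypothetical batch of 79,111 training samples, each with 24 features
# (6 dates x 4 bands: sigma0 VV, sigma0 VH, gamma0 VV, gamma0 VH).
flat = np.zeros((79111, 1, 24), dtype=np.float32)

# Reshape each (1, 24) feature vector to (12, 2) before feeding the C-AENN.
reshaped = flat.reshape(-1, 12, 2)
print(reshaped.shape)  # (79111, 12, 2)
```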
The classifier C-AENN was built to identify crops. The detailed parameters of C-AENN are shown in Table 10. After the training samples were trained, a crop type identification model was generated.

4.5. Accuracy Comparison with Other Classifiers

To demonstrate the effectiveness of the model generated by C-AENN, several classifiers were used for comparison, which included SVM, RF, KNN, ANN, 1D-CNN, and SAE. The models generated by these classifiers were evaluated quantitatively and qualitatively.

4.5.1. Quantitative Evaluation

The test samples were fed into the models, and the outputs were compared with the ground truth labels to generate the confusion matrices shown in Figure 8. According to the confusion matrices, the models generated by the above-mentioned classifiers were quantitatively evaluated. All models showed good accuracy, which indicates that the optimal identification features played an effective role. The results are listed in Table 11.
Among them, the C-AENN model achieved the best accuracy, with a macro-F1 of 0.9825, an OA of 0.9794, and a Kappa of 0.9705. The 1D-CNN and SAE models also performed well, with accuracy metrics slightly below the C-AENN model: macro-F1 of 0.9650 and 0.9550, OA of 0.9671 and 0.9584, and Kappa of 0.9529 and 0.9404, respectively. The experiments proved that combining the 1D-CNN and SAE classifiers improved the accuracy over using either classifier alone: macro-F1 and Kappa improved by approximately 2–3%, and OA increased by approximately 1–2%. The RF model performed the worst, with a macro-F1 of 0.8400, an OA of 0.8757, and a Kappa of 0.8191; compared with the proposed C-AENN model, its macro-F1 was approximately 14% lower, OA about 10% lower, and Kappa about 15% lower.
Moreover, in order to compare the capability of each classifier objectively, a visual comparison of three representative accuracy indicators, macro-F1, OA, and Kappa, is presented in Figure 9. The accuracy of the models can be ranked in the following order: C-AENN > 1D-CNN > SAE > ANN > SVM > KNN > RF. This demonstrates that the neural network models outperformed the machine learning models on our dataset; in particular, the C-AENN model exhibited outstanding capability.

4.5.2. Qualitative Evaluation

The qualitative assessment was carried out in two ways: first, four typical areas were selected and compared with the sample maps by visual interpretation; second, the result was compared with a contemporaneous optical image. Both provide an intuitive performance assessment of the proposed method.
First, four typical regions, A (peanut), B (corn), C (rice), and D (others), were selected, as shown in Figure 10, to present the effects of the models generated by the various classifiers. Figure 10a gives the locations of the selected regions, and Figure 10b displays their sample maps. The comparison results are exhibited in Figure 11. It can be clearly seen that the 1D-CNN, SAE, and C-AENN models performed better than the others, with fewer mistakes. In region A (peanut), many peanut pixels were misclassified as maize by most models; compared with the sample map, the SAE and C-AENN models gave better results. In region B (corn), the 1D-CNN, SAE, and C-AENN models showed superiority over the other models. In regions C (rice) and D (others), the 1D-CNN and C-AENN models were relatively good, with a significantly reduced error rate. In summary, the proposed C-AENN model had better robustness.
Next, the C-AENN model was used to predict the entire study area. Figure 12 shows a qualitative comparison of the identification result for multiple crops with an optical image of the same period; the identification result covering the study area is given in Figure 12a. Due to the difficulty of obtaining ground truth covering the entire study area, the high-resolution Sentinel-2 optical image of 25 August 2021 was used for qualitative evaluation. Band combinations of the Sentinel-2 image can extract specific information that helps interpret the ground objects. A pseudo-color composite of bands B11 (SWIR-1), B8 (NIR-1), and B2 (Blue), shown in Figure 12b, was used because the shortwave-infrared and near-infrared bands are sensitive to vegetation, making the composite well suited to monitoring crops.
Ignoring the effects of clouds, the prediction result of the C-AENN model was largely consistent with the pseudo-color image. As can be seen in Figure 12a,b, the study area was dominated by maize cultivation, which occurred in all counties and districts. The red rectangular boxes in Figure 12a,b mark the main peanut cultivation areas, which were mainly distributed in Weishi, Tongxu, and Qixian counties. The red circles in Figure 12a,b mark the main rice cultivation areas; rice was grown in a relatively concentrated area, mainly on the border between the Shunhe and Xiangfu districts. Furthermore, because the cultivation structure of the "others" type is relatively complex, it shows no distinctive features in Figure 12b. Combined with the fieldwork records, the "others" category was mainly located in Xiangfu district and Tongxu county, which is basically consistent with the results in Figure 12a. In short, the crop identification results closely matched the distribution in the contemporaneous optical image, which further validates the effectiveness of the model.

5. Discussion

In this work, crop type identification methods using multi-temporal Sentinel-1A images were studied, and good results were achieved through feature optimization and the construction of a new classifier. First, the σ0 and γ0 features of the SAR images, with their distinct physical characteristics, were extracted, and their combination showed better J-M distance values, providing the optimal identification feature for the crop classification model. Then, a new classifier, C-AENN, which combines the 1D-CNN model and the SAE model, was developed for the identification task. Finally, the quantitative and qualitative evaluation results demonstrated the validity of the method.
First of all, the farmland mask was extracted to avoid the interference of non-farmland features in the crop type identification results. The public mask product Global Food Security-Support Analysis Data at 30 m (GFSAD30) has been used in many agricultural applications. However, due to rapid urbanization in China, farmland is changing quickly, so relatively timely and accurate mask data are required. Figure 13 shows the publicly available GFSAD30 mask and the mask proposed in this study, with two enlarged regions, E and F, selected for comparison. As can be seen from Figure 13, both masks detect non-farmland areas such as cities and towns well, while the proposed mask captures the edge details of cities and towns better and essentially completes the extraction of the farmland.
Next, this paper demonstrated the importance of multi-temporal radar data for crop type identification: good accuracy was obtained even with classical machine learning methods such as SVM and RF, in which the optimal feature selection based on J-M distance analysis played a useful role. In this paper, J-M distance analysis was performed on the σ0 features, the γ0 features, and the combined σ0 and γ0 features of single-temporal and multi-temporal SAR images. It was found that combining the σ0 and γ0 features significantly enhances the J-M distance values between crop samples, and that multi-temporal images are clearly better than a single-temporal image. Therefore, the combination of multi-temporal σ0 and γ0 features is the best choice for the optimal identification feature. Most previous studies have neglected the role of γ0; surprisingly, the combination of σ0 and γ0 yielded better results, so the application of γ0 deserves more attention in subsequent studies.
Finally, this paper demonstrated that the C-AENN classifier outperformed the classical machine learning methods (SVM, RF, etc.) and the neural network methods (1D-CNN, SAE, etc.). Based on the optimal identification feature, the most representative machine learning methods, SVM, RF, and KNN, were chosen for comparison, and they remained competitive in many cases. As can be seen in Figure 8, these machine learning methods achieved good results in identifying peanut, corn, and rice, but their accuracy for the "others" category was lower. This misclassification was much less pronounced for the neural network methods such as 1D-CNN and SAE, showing that the neural network approach outperformed the traditional machine learning approach on our dataset. In particular, the proposed C-AENN showed good classification performance in every category, exceeding 97% in all quantitative evaluation metrics. The 1D-CNN and SAE also achieved good classification accuracy, though generally slightly lower than the proposed method. Additionally, this paper explored the time consumption of 1D-CNN, SAE, and C-AENN: over ten experiments, SAE consumed relatively less time, while the time consumption of 1D-CNN and C-AENN was similar, as shown in Figure 14. For research purposes, the accuracy gain of C-AENN is worth its cost compared with 1D-CNN alone or SAE alone.
With regard to the selection of network parameters, this paper relied on the study-area data and repeated debugging. The selection of the convolutional kernels in the 1D-CNN serves as an example: the size of the convolution kernel determines the local feature extraction dimension, and given the optimal identification feature shape of (12, 2), smaller convolutional kernels such as 7, 5, and 3 were attempted, with the optimal parameters selected after several debugging sessions. As for the node settings of the SAE, the last layer of the 1D-CNN in C-AENN had 256 nodes, so common settings of 128, 64, 32, etc. were attempted, and the optimal parameters were determined through multiple tests. It is worth noting that the parameter settings of deep learning methods have their unexplainable aspects and require problem-specific analysis; this is a direction for follow-up research.

6. Conclusions

In this study, a novel method was proposed for crop type identification based on the C-AENN classifier using time-series Sentinel-1A images. The method first removes non-farmland areas with the OMRF model. Features are then extracted for each crop type, and the optimal identification feature, the combined time-series backscatter coefficients σ0 and γ0, is selected by J-M distance analysis. Finally, the optimal identification feature is used to generate the crop type identification model with the C-AENN classifier, which fuses a 1D-CNN and an SAE. The quantitative and qualitative model assessments demonstrated the validity of the proposed method.
In terms of quantitative assessment, the effectiveness of the proposed method was illustrated by comparison with the traditional machine learning methods SVM, RF, and KNN, as well as the neural network methods ANN, 1D-CNN, and SAE. Six evaluation metrics directly reflected the ability of the various methods to identify different crops. Compared with the other methods, the proposed method achieved the highest macro-F1, OA, and Kappa, all above 97%.
In addition, in terms of qualitative assessment, four typical areas were selected for visual comparison with the sample maps, and the model identification result covering the study area was compared with the contemporaneous optical image, which showed that the proposed method gives satisfactory results. In conclusion, both the quantitative and qualitative assessments proved that the crop type identification result improved significantly.
Furthermore, this study yielded several noteworthy findings, such as the unique advantage of combining multi-temporal σ0 and γ0 features, which achieve high classification accuracy without other auxiliary data, and the observation that a combined neural network performs better than a single network framework. This study demonstrates the validity of time-series SAR data and the importance of deep learning methods for feature mining. Both have great potential for crop type identification and promising application prospects in agricultural management.

Author Contributions

Conceptualization, Z.G., W.Q. and N.L.; data curation, H.Y. and N.L.; investigation, Z.G., W.Q., J.Z. and N.L.; methodology, W.Q., Y.H. and J.Z.; supervision, Z.G., H.Y. and V.-C.K.; writing—original draft, W.Q. and Y.H.; writing—review and editing, Z.G., V.-C.K. and N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Plan of Science and Technology of Kaifeng City (2102005); the National Natural Science Foundation of China (42101386, 61871175); the Plan of Science and Technology of Henan Province (212102210093, 202102210175, 222102110439); the College Key Research Project of Henan Province (22A520021).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and models presented in this study are available.

Acknowledgments

The authors would like to thank the ESA for providing the multi-temporal Sentinel-1A SAR data and Sentinel-2 optical data for agricultural applications.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Beriaux, E.; Jago, A.; Lucau-Danila, C.; Planchon, V.; Defourny, P. Sentinel-1 Time Series for Crop Identification in the Framework of the Future CAP Monitoring. Remote Sens. 2021, 13, 2785.
  2. Tomppo, E.; Antropov, O.; Praks, J. Cropland Classification Using Sentinel-1 Time Series: Methodological Performance and Prediction Uncertainty Assessment. Remote Sens. 2019, 11, 2480.
  3. Yang, H.; Pan, B.; Li, N.; Wang, W.; Zhang, J.; Zhang, X. A systematic method for spatio-temporal phenology estimation of paddy rice using time series Sentinel-1 images. Remote Sens. Environ. 2021, 259, 112394.
  4. Xie, Q.; Lai, K.; Wang, J.; Lopez-Sanchez, J.M.; Shang, J.; Liao, C.; Zhu, J.; Fu, H.; Peng, X. Crop Monitoring and Classification Using Polarimetric RADARSAT-2 Time-Series Data Across Growing Season: A Case Study in Southwestern Ontario, Canada. Remote Sens. 2021, 13, 1394.
  5. Jiang, H.; Li, D.; Jing, W.; Xu, J.; Huang, J.; Yang, J.; Chen, S. Early Season Mapping of Sugarcane by Applying Machine Learning Algorithms to Sentinel-1A/2 Time Series Data: A Case Study in Zhanjiang City, China. Remote Sens. 2019, 11, 861.
  6. Xie, Y.; Huang, J. Integration of a Crop Growth Model and Deep Learning Methods to Improve Satellite-Based Yield Estimation of Winter Wheat in Henan Province, China. Remote Sens. 2021, 13, 4372.
  7. Ezzahar, J.; Ouaadi, N.; Zribi, M.; Elfarkh, J.; Aouade, G.; Khabba, S.; Er-Raki, S.; Chehbouni, A.; Jarlan, L. Evaluation of Backscattering Models and Support Vector Machine for the Retrieval of Bare Soil Moisture from Sentinel-1 Data. Remote Sens. 2020, 12, 72.
  8. Dey, S.; Bhogapurapu, N.; Homayouni, S.; Bhattacharya, A.; McNairn, H. Unsupervised Classification of Crop Growth Stages with Scattering Parameters from Dual-Pol Sentinel-1 SAR Data. Remote Sens. 2021, 13, 4412.
  9. Nasrallah, A.; Baghdadi, N.; El Hajj, M.; Darwish, T.; Belhouchette, H.; Faour, G.; Darwich, S.; Mhawej, M. Sentinel-1 Data for Winter Wheat Phenology Monitoring and Mapping. Remote Sens. 2019, 11, 2228.
  10. Wei, M.; Qiao, B.; Zhao, J.; Zuo, X. The area extraction of winter wheat in mixed planting area based on Sentinel-2a remote sensing satellite images. Int. J. Parallel Emergent Distrib. Syst. 2019, 35, 297–308.
  11. Brinkhoff, J.; Vardanega, J.; Robson, A.J. Land Cover Classification of Nine Perennial Crops Using Sentinel-1 and -2 Data. Remote Sens. 2020, 12, 96.
  12. Pan, L.; Xia, H.; Zhao, X.; Guo, Y.; Qin, Y. Mapping Winter Crops Using a Phenology Algorithm, Time-Series Sentinel-2 and Landsat-7/8 Images, and Google Earth Engine. Remote Sens. 2021, 13, 2510.
  13. Yang, Y.; Huang, Q.; Wu, W.; Luo, J.; Gao, L.; Dong, W.; Wu, T.; Hu, X. Geo-Parcel Based Crop Identification by Integrating High Spatial-Temporal Resolution Imagery from Multi-Source Satellite Data. Remote Sens. 2017, 9, 1298.
  14. Useya, J.; Chen, S. Exploring the Potential of Mapping Cropping Patterns on Smallholder Scale Croplands Using Sentinel-1 SAR Data. Chin. Geogr. Sci. 2019, 29, 626–639.
  15. Busquier, M.; Valcarce-Diñeiro, R.; Lopez-Sanchez, J.M.; Plaza, J.; Sánchez, N.; Arias-Pérez, B. Fusion of Multi-Temporal PAZ and Sentinel-1 Data for Crop Classification. Remote Sens. 2021, 13, 3915.
  16. Luo, C.; Qi, B.; Liu, H.; Guo, D.; Lu, L.; Fu, Q.; Shao, Y. Using Time Series Sentinel-1 Images for Object-Oriented Crop Classification in Google Earth Engine. Remote Sens. 2021, 13, 561.
  17. Cable, J.W.; Kovacs, J.M.; Jiao, X.; Shang, J. Agricultural Monitoring in Northeastern Ontario, Canada, Using Multi-Temporal Polarimetric RADARSAT-2 Data. Remote Sens. 2014, 6, 2343–2371.
  18. Xu, L.; Zhang, H.; Wang, C.; Zhang, B.; Liu, M. Crop Classification Based on Temporal Information Using Sentinel-1 SAR Time-Series Data. Remote Sens. 2019, 11, 53.
  19. Mestre-Quereda, A.; Lopez-Sanchez, J.M.; Vicente-Guijalba, F.; Jacob, A.W.; Engdahl, M.E. Time-Series of Sentinel-1 Interferometric Coherence and Backscatter for Crop-Type Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4070–4084.
  20. Cué La Rosa, L.E.; Queiroz Feitosa, R.; Nigri Happ, P.; Del'Arco Sanches, I.; Ostwald Pedro da Costa, G.A. Combining Deep Learning and Prior Knowledge for Crop Mapping in Tropical Regions from Multitemporal SAR Image Sequences. Remote Sens. 2019, 11, 2029.
  21. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443.
  22. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782.
  23. Qu, Y.; Zhao, W.; Yuan, Z.; Chen, J. Crop Mapping from Sentinel-1 Polarimetric Time-Series with a Deep Neural Network. Remote Sens. 2020, 12, 2493.
  24. Huang, H.Y.; Ombao, H.; Stoffer, D.S. Discrimination and classification of nonstationary time series using the SLEX model. J. Am. Stat. Assoc. 2004, 99, 763–774.
  25. Maharaj, E.A.; Alonso, A.M. Discrimination of locally stationary time series using wavelets. Comput. Stat. Data Anal. 2007, 52, 879–895.
  26. Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of Three Deep Learning Models for Early Crop Classification Using Sentinel-1A Imagery Time Series—A Case Study in Zhanjiang, China. Remote Sens. 2019, 11, 2673.
  27. Xiao, X.; Lu, Y.; Huang, X.; Chen, T. Temporal Series Crop Classification Study in Rural China Based on Sentinel-1 SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2769–2780.
  28. Arias, M.; Campo-Bescós, M.Á.; Álvarez-Mozos, J. Crop Classification Based on Temporal Signatures of Sentinel-1 Observations over Navarre Province, Spain. Remote Sens. 2020, 12, 278.
  29. Xu, L.; Zhang, H.; Wang, C.; Wei, S.; Zhang, B.; Wu, F.; Tang, Y. Paddy Rice Mapping in Thailand Using Time-Series Sentinel-1 Data and Deep Learning Model. Remote Sens. 2021, 13, 3994.
  30. Song, Y.; Wang, J. Mapping Winter Wheat Planting Area and Monitoring Its Phenology Using Sentinel-1 Backscatter Time Series. Remote Sens. 2019, 11, 449.
  31. Small, D. Flattening Gamma: Radiometric Terrain Correction for SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3081–3093.
  32. Chakhar, A.; Hernández-López, D.; Ballesteros, R.; Moreno, M.A. Improving the Accuracy of Multiple Algorithms for Crop Classification by Integrating Sentinel-1 Observations with Sentinel-2 Data. Remote Sens. 2021, 13, 243.
  33. Dobrinić, D.; Gašparović, M.; Medak, D. Sentinel-1 and 2 Time-Series for Vegetation Mapping Using Random Forest Classification: A Case Study of Northern Croatia. Remote Sens. 2021, 13, 2321.
  34. Crisóstomo de Castro Filho, H.; Abílio de Carvalho Júnior, O.; Ferreira de Carvalho, O.L.; Pozzobon de Bem, P.; dos Santos de Moura, R.; Olino de Albuquerque, A.; Rosa Silva, C.; Guimarães Ferreira, P.H.; Fontes Guimarães, R.; Trancoso Gomes, R.A. Rice Crop Detection Using LSTM, Bi-LSTM, and Machine Learning Models from Sentinel-1 Time Series. Remote Sens. 2020, 12, 2655.
  35. Zhou, Y.; Luo, J.; Feng, L.; Zhou, X. DCN-Based Spatial Features for Improving Parcel-Based Crop Classification Using High-Resolution Optical Images and Multi-Temporal SAR Data. Remote Sens. 2019, 11, 1619.
  36. Teimouri, N.; Dyrmann, M.; Jørgensen, R.N. A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images. Remote Sens. 2019, 11, 990.
  37. Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963.
  38. Ndikumana, E.; Ho Tong Minh, D.; Baghdadi, N.; Courault, D.; Hossard, L. Deep Recurrent Neural Network for Agricultural Classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217.
  39. Zhang, W.-T.; Wang, M.; Guo, J.; Lou, S.-T. Crop Classification Using MSCDN Classifier and Sparse Auto-Encoders with Non-Negativity Constraints for Multi-Temporal, Quad-Pol SAR Data. Remote Sens. 2021, 13, 2749.
  40. Khabbazan, S.; Vermunt, P.; Steele-Dunne, S.; Ratering Arntz, L.; Marinetti, C.; van der Valk, D.; Iannini, L.; Molijn, R.; Westerdijk, K.; van der Sande, C. Crop Monitoring Using Sentinel-1 Data: A Case Study from The Netherlands. Remote Sens. 2019, 11, 1887.
  41. Forkuor, G.; Conrad, C.; Thiel, M.; Ullmann, T.; Zoungrana, E. Integration of Optical and Synthetic Aperture Radar Imagery for Improving Crop Mapping in Northwestern Benin, West Africa. Remote Sens. 2014, 6, 6472–6499.
  42. Wu, L.; Qi, W.; Guo, Z.; Zhao, J.; Yang, H.; Li, N. Winter wheat planting area extraction using SAR change detection. Remote Sens. Lett. 2021, 12, 951–960.
  43. Wei, S.; Zhang, H.; Wang, C.; Wang, Y.; Xu, L. Multi-Temporal SAR Data Large-Scale Crop Mapping Based on U-Net Model. Remote Sens. 2019, 11, 68.
  44. Li, L.; Kong, Q.; Wang, P.; Wang, L.; Xun, L. Monitoring of maize planting area based on time-series Sentinel-1A SAR data. Resour. Sci. 2018, 40, 1608–1621.
  45. Liu, Y.; Zhao, W.; Chen, S.; Ye, T. Mapping Crop Rotation by Using Deeply Synergistic Optical and SAR Time Series. Remote Sens. 2021, 13, 4160.
  46. Liao, C.; Wang, J.; Xie, Q.; Baz, A.A.; Huang, X.; Shang, J.; He, Y. Synergistic Use of Multi-Temporal RADARSAT-2 and VENµS Data for Crop Classification Based on 1D Convolutional Neural Network. Remote Sens. 2020, 12, 832.
  47. Guo, J.; Li, H.; Ning, J.; Han, W.; Zhang, W.; Zhou, Z.-S. Feature Dimension Reduction Using Stacked Sparse Auto-Encoders for Crop Classification with Multi-Temporal, Quad-Pol SAR Data. Remote Sens. 2020, 12, 321.
  48. Martino, D.; Guinvarc'h, R.; Thirion-Lefevre, L.; Koeniguer, É. Beets or Cotton? Blind Extraction of Fine Agricultural Classes Using a Convolutional Autoencoder Applied to Temporal SAR Signatures. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
  49. Hoshikawa, K.; Nagano, T.; Kotera, A.; Watanabe, K.; Fujihara, Y.; Kozan, O. Classification of crop fields in northeast Thailand based on hydrological characteristics detected by L-band SAR backscatter data. Remote Sens. Lett. 2014, 5, 323–331.
  50. Hosseini, M.; Kerner, H.R.; Sahajpal, R.; Puricelli, E.; Lu, Y.-H.; Lawal, A.F.; Humber, M.L.; Mitkish, M.; Meyer, S.; Becker-Reshef, I. Evaluating the Impact of the 2020 Iowa Derecho on Corn and Soybean Fields Using Synthetic Aperture Radar. Remote Sens. 2020, 12, 3878.
  51. Orynbaikyzy, A.; Gessner, U.; Mack, B.; Conrad, C. Crop Type Classification Using Fusion of Sentinel-1 and Sentinel-2 Data: Assessing the Impact of Feature Selection, Optical Data Availability, and Parcel Sizes on the Accuracies. Remote Sens. 2020, 12, 2779.
Figure 1. Location map of the study area.
Figure 2. Distribution of crop samples.
Figure 3. Flow chart of the proposed method.
Figure 4. Structure of the C-AENN.
Figure 5. Temporal profiles of the four crop types with respect to the backscatter coefficients: (a) σ0 VV, (b) σ0 VH, (c) γ0 VV, and (d) γ0 VH.
Figure 6. Extraction of the farmland mask. (a) SAR image on 17 March 2021. (b) Farmland mask. (c) Multi-temporal SAR images during the study period.
Figure 7. Sample feature reshaping.
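As Figure 7 suggests, each labeled pixel is reshaped into a short sequence before entering the network. Below is a minimal NumPy sketch of one plausible layout, assuming the six acquisition dates and two polarizations (VV, VH) are folded into a 12-step sequence with σ0 and γ0 as the two channels, matching the (n, 12, 2) input of Table 10; the exact axis ordering is an assumption for illustration.

```python
import numpy as np

# Hypothetical per-pixel feature stack:
# 6 acquisition dates x 2 polarizations (VV, VH) x 2 coefficients (sigma0, gamma0).
n_pixels, n_dates, n_pol, n_coef = 1000, 6, 2, 2
features = np.random.rand(n_pixels, n_dates, n_pol, n_coef).astype("float32")

# Fold dates and polarizations into one sequence axis of length 12,
# keeping sigma0/gamma0 as the two input channels.
x = features.reshape(n_pixels, n_dates * n_pol, n_coef)
print(x.shape)  # (1000, 12, 2)
```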
Figure 8. Confusion matrices for models generated by different classifiers: (a) SVM, (b) RF, (c) KNN, (d) ANN, (e) 1D-CNN, (f) SAE, and (g) C-AENN.
Figure 9. Visual comparison of three representative evaluation indicators.
Figure 10. Selected typical areas. (a) Locations of the selected typical areas. (b) Sample maps of the selected typical areas.
Figure 11. Identification maps produced by (a) SVM, (b) RF, (c) KNN, (d) ANN, (e) 1D-CNN, (f) SAE, and (g) C-AENN for regions A (peanut), B (corn), C (rice), and D (others).
Figure 12. Qualitative comparison of the identification result with an optical image. (a) Identification results for the main crops in Kaifeng City in 2021. (b) Pseudo-color image generated by a Sentinel-2 band combination.
Figure 13. Results of the public mask and the proposed mask.
Figure 14. Time consumption of 1D-CNN, SAE, and C-AENN.
Table 1. Parameters for Sentinel-1A.

| Parameter | Value |
|---|---|
| Product type | GRD |
| Imaging mode | IW |
| Polarization | VV, VH |
| Resolution | 10 m × 10 m |
| Band | C |
| Pass direction | Ascending |
| Acquisition dates | 17 March 2021; 3 July 2021; 15 July 2021; 27 July 2021; 8 August 2021; 20 August 2021; 1 September 2021 |
Table 2. Ground truth samples for major crop types in the study area.

| Label | Type | Number of Parcels | Total Number of Pixels | Area (km²) | Training Samples | Test Samples |
|---|---|---|---|---|---|---|
| 1 | Peanut | 36 | 30,394 | 2.51 | 15,056 | 15,338 |
| 2 | Corn | 45 | 68,767 | 5.73 | 34,529 | 34,238 |
| 3 | Rice | 14 | 39,113 | 3.22 | 19,721 | 19,392 |
| 4 | Others | 14 | 19,949 | 1.67 | 9,805 | 10,144 |
| Total | | 109 | 158,223 | 13.13 | 79,111 | 79,112 |
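The sample counts in Table 2 correspond to an approximately 50/50 split within each class. A hedged sketch of such a split follows, assuming a simple pixel-level stratified split with synthetic stand-in data; the authors' actual protocol (e.g., splitting by parcel) may differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the 158,223 labeled pixels and their crop labels (1-4).
X = np.random.rand(158_223, 24).astype("float32")
y = np.random.randint(1, 5, size=158_223)

# Approximately 50/50 stratified split, mirroring the per-class balance in Table 2.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)
```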
Table 3. Spectral description for the optical reference data.

| Sentinel-2A Band | Spectral Description |
|---|---|
| Band 1 | Coastal aerosol |
| Band 2 | Blue |
| Band 3 | Green |
| Band 4 | Red |
| Band 5 | Red-edge 1 |
| Band 6 | Red-edge 2 |
| Band 7 | Red-edge 3 |
| Band 8 | NIR-1 |
| Band 8A | NIR-2 |
| Band 9 | Water vapor |
| Band 10 | SWIR-cirrus |
| Band 11 | SWIR-1 |
| Band 12 | SWIR-2 |
| Acquisition date | 25 August 2021 |
Table 4. Hyperparameters of the classifiers.

| Classifier | Parameter | Description | Value |
|---|---|---|---|
| SVM | C | Penalty coefficient | 2 |
| SVM | kernel | Kernel function | RBF |
| RF | n_estimators | Number of decision trees | 550 |
| KNN | n_neighbors | Number of neighboring points | 20 |
| ANN | hidden_layer_sizes | Number of neurons per hidden layer | (20, 20, 20, 20) |
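For reference, the comparison classifiers in Table 4 can be instantiated almost verbatim with scikit-learn. The sketch below is illustrative only; all hyperparameters not listed in Table 4 are left at library defaults, which is an assumption.

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Baseline classifiers configured with the hyperparameters listed in Table 4.
classifiers = {
    "SVM": SVC(C=2, kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=550),
    "KNN": KNeighborsClassifier(n_neighbors=20),
    "ANN": MLPClassifier(hidden_layer_sizes=(20, 20, 20, 20)),
}

# X_train/y_train/X_test/y_test as in the split sketch above (flattened features).
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))
```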
Table 5. Confusion matrix.

| Ground Truth \ Predicted | A | B |
|---|---|---|
| A | True positive (TP) | False negative (FN) |
| B | False positive (FP) | True negative (TN) |
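The per-class indicators reported in Table 11 follow directly from the quantities defined in Table 5. A minimal NumPy sketch, assuming a confusion matrix with ground truth on rows and predictions on columns:

```python
import numpy as np

def metrics_from_confusion(cm: np.ndarray):
    """Per-class precision/recall/F1 plus macro-F1, OA, and Cohen's kappa."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)          # TP / (TP + FP), per class
    recall = tp / cm.sum(axis=1)             # TP / (TP + FN), per class
    f1 = 2 * precision * recall / (precision + recall)
    oa = tp.sum() / cm.sum()                 # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return precision, recall, f1, f1.mean(), oa, kappa
```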
Table 6. J-M distances between crop pairs for σ0 and γ0 features based on single-temporal images (all dates in 2021).

| Crop Pair | 3 Jul. σ0 | 3 Jul. γ0 | 15 Jul. σ0 | 15 Jul. γ0 | 27 Jul. σ0 | 27 Jul. γ0 | 8 Aug. σ0 | 8 Aug. γ0 | 20 Aug. σ0 | 20 Aug. γ0 | 1 Sep. σ0 | 1 Sep. γ0 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Peanut–Corn | 0.3179 | 0.3359 | 0.1189 | 0.1016 | 0.1586 | 0.1549 | 0.3814 | 0.3970 | 0.3652 | 0.3763 | 0.2305 | 0.2499 |
| Peanut–Rice | 0.3056 | 0.3218 | 0.3519 | 0.3529 | 0.1597 | 0.1600 | 0.6257 | 0.6459 | 0.8617 | 0.8822 | 0.7436 | 1.2088 |
| Peanut–Others | 0.1346 | 0.1430 | 0.1430 | 0.1411 | 0.4775 | 0.4662 | 0.2796 | 0.3043 | 0.2074 | 0.2237 | 0.1694 | 0.2160 |
| Corn–Rice | 0.1021 | 0.1028 | 0.2347 | 0.2356 | 0.0803 | 0.0839 | 0.1675 | 0.1763 | 0.3665 | 0.3834 | 0.4129 | 0.8187 |
| Corn–Others | 0.1233 | 0.1251 | 0.0064 | 0.0073 | 0.1337 | 0.1291 | 0.0209 | 0.0204 | 0.0349 | 0.0314 | 0.0175 | 0.0375 |
| Rice–Others | 0.2512 | 0.2575 | 0.2372 | 0.2375 | 0.3051 | 0.3079 | 0.1812 | 0.1798 | 0.4727 | 0.4814 | 0.4224 | 0.7027 |
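The J-M distances in Tables 6–9 are, in the standard formulation, derived from the Bhattacharyya distance under a Gaussian class-distribution assumption, giving values in [0, 2], where 2 indicates complete separability. A sketch of that textbook form follows; the paper's exact estimator is not restated here, so treat this as illustrative.

```python
import numpy as np

def jm_distance(x1: np.ndarray, x2: np.ndarray) -> float:
    """Jeffries-Matusita distance between two classes of feature vectors
    (rows = samples), assuming multivariate Gaussian class distributions."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1 = np.atleast_2d(np.cov(x1, rowvar=False))
    c2 = np.atleast_2d(np.cov(x2, rowvar=False))
    c = (c1 + c2) / 2                         # pooled covariance
    d = np.atleast_1d(m1 - m2)
    b = (d @ np.linalg.inv(c) @ d) / 8 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2))
    )                                         # Bhattacharyya distance
    return float(2 * (1 - np.exp(-b)))
```

The saturation of the multi-temporal σ0-γ0 values toward 2 in Table 9 is consistent with this bounded form.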
Table 7. J-M distances between crop pairs for the σ0-γ0 combination feature based on single-temporal images (all dates in 2021).

| Crop Pair | 3 July | 15 July | 27 July | 8 August | 20 August | 1 September |
|---|---|---|---|---|---|---|
| Peanut–Corn | 0.9114 | 0.1789 | 0.5301 | 0.9521 | 0.8515 | 0.3528 |
| Peanut–Rice | 0.8095 | 0.4583 | 1.0426 | 1.0700 | 1.2081 | 1.3609 |
| Peanut–Others | 0.5299 | 0.2309 | 0.4803 | 0.6383 | 1.6040 | 0.3380 |
| Corn–Rice | 1.1932 | 0.3219 | 0.5047 | 0.4398 | 0.7158 | 0.9456 |
| Corn–Others | 0.4503 | 0.0125 | 0.2444 | 0.5228 | 1.7408 | 0.0814 |
| Rice–Others | 0.9770 | 0.3243 | 0.5900 | 0.3238 | 1.7385 | 0.8216 |
Table 8. J-M distances between crop pairs for σ0 and γ0 features based on multi-temporal images. Each multi-temporal stack accumulates acquisitions chronologically from 3 July 2021: 2 dates = {3, 15 July}; 3 dates adds 27 July; 4 dates adds 8 August; 5 dates adds 20 August; 6 dates adds 1 September.

| Crop Pair | Single-Temporal Max. σ0 | Single-Temporal Max. γ0 | 2 Dates σ0 | 2 Dates γ0 | 3 Dates σ0 | 3 Dates γ0 | 4 Dates σ0 | 4 Dates γ0 | 5 Dates σ0 | 5 Dates γ0 | 6 Dates σ0 | 6 Dates γ0 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Peanut–Corn | 0.3814 | 0.3970 | 0.5193 | 0.5128 | 0.6115 | 0.6187 | 0.9568 | 0.9524 | 0.9841 | 1.0173 | 1.0765 | 1.0781 |
| Peanut–Rice | 0.8617 | 1.2088 | 0.5292 | 0.5333 | 0.6262 | 0.6383 | 1.3136 | 1.3210 | 1.5586 | 1.5995 | 1.6089 | 1.6497 |
| Peanut–Others | 0.4775 | 0.4662 | 0.2462 | 0.2620 | 0.3585 | 0.3910 | 0.6553 | 0.6996 | 0.7501 | 0.7213 | 0.9794 | 0.7839 |
| Corn–Rice | 0.4129 | 0.8187 | 0.4068 | 0.3785 | 0.5428 | 0.5227 | 0.8225 | 0.8081 | 1.1298 | 1.1625 | 1.1917 | 1.2819 |
| Corn–Others | 0.1337 | 0.1291 | 0.3140 | 0.2730 | 0.2695 | 0.2567 | 0.3932 | 0.3719 | 0.4815 | 0.4228 | 0.6176 | 0.4800 |
| Rice–Others | 0.4727 | 0.7027 | 0.4556 | 0.4266 | 0.6148 | 0.5865 | 0.9024 | 0.8637 | 1.3391 | 1.3377 | 1.4378 | 1.4031 |
Table 9. J-M distances between crop pairs for the σ0-γ0 combination feature based on multi-temporal images (date stacks as defined in Table 8).

| Crop Pair | Single-Temporal Max. | 2 Dates | 3 Dates | 4 Dates | 5 Dates | 6 Dates |
|---|---|---|---|---|---|---|
| Peanut–Corn | 0.9521 | 1.0565 | 1.6045 | 1.8137 | 1.9698 | 1.9924 |
| Peanut–Rice | 1.3609 | 1.5980 | 1.7670 | 1.9699 | 1.9949 | 1.9955 |
| Peanut–Others | 1.6040 | 0.7575 | 1.6647 | 1.9427 | 1.9865 | 1.9898 |
| Corn–Rice | 1.1932 | 1.7442 | 1.8415 | 1.9544 | 1.9837 | 1.9961 |
| Corn–Others | 0.5228 | 0.7368 | 0.9466 | 1.8338 | 1.9957 | 1.9933 |
| Rice–Others | 1.7385 | 1.6092 | 1.8117 | 1.9998 | 1.9999 | 1.9995 |
Table 10. Parameters of C-AENN.

| Layer | Parameters | Output Shape |
|---|---|---|
| Input | | (n, 12, 2) |
| Conv1D | filters = 32, kernel_size = 7, activation = ReLU | (n, 12, 32) |
| MaxPooling1D | pool_size = 2 | (n, 6, 32) |
| BatchNormalization | | (n, 6, 32) |
| Conv1D | filters = 64, kernel_size = 5, activation = ReLU | (n, 6, 64) |
| MaxPooling1D | pool_size = 2 | (n, 3, 64) |
| BatchNormalization | | (n, 3, 64) |
| Conv1D | filters = 128, kernel_size = 3, activation = ReLU | (n, 3, 128) |
| MaxPooling1D | pool_size = 2 | (n, 2, 128) |
| BatchNormalization | | (n, 2, 128) |
| Flatten | | (n, 256) |
| Encoder 1 | 128 units, activation = ReLU | (n, 128) |
| Encoder 2 | 64 units, activation = ReLU | (n, 64) |
| Encoder 3 | 32 units, activation = ReLU | (n, 32) |
| Compressed features | 16 units, activation = ReLU | (n, 16) |
| Decoder 1 | 32 units, activation = ReLU | (n, 32) |
| Decoder 2 | 64 units, activation = ReLU | (n, 64) |
| Decoder 3 | 128 units, activation = ReLU | (n, 128) |
| Classification | Softmax | (n, 4) |
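Table 10 translates into a compact Keras model. The sketch below is a minimal reading of that layout: 'same' padding in the convolution and pooling stages is an assumption made so the output shapes match the table, and the loss/optimizer settings are not specified here.

```python
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(12, 2))
x = inputs
# Convolutional feature extractor: Conv1D -> MaxPooling1D -> BatchNorm, three times.
for filters, k in [(32, 7), (64, 5), (128, 3)]:
    x = layers.Conv1D(filters, k, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
x = layers.Flatten()(x)                              # (n, 256)
# Auto-encoder-style dense stack: encoder -> 16-dim code -> decoder.
for units in [128, 64, 32, 16, 32, 64, 128]:
    x = layers.Dense(units, activation="relu")(x)
outputs = layers.Dense(4, activation="softmax")(x)   # four crop classes
model = models.Model(inputs, outputs)
model.summary()
```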
Table 11. Quantitative evaluation of models generated by different classifiers.

| Classifier | Metric | Peanut | Corn | Rice | Others | Macro-F1 | OA | Kappa |
|---|---|---|---|---|---|---|---|---|
| SVM | Precision | 0.91 | 0.93 | 0.99 | 0.83 | 0.9125 | 0.9273 | 0.8956 |
| | Recall | 0.92 | 0.95 | 0.97 | 0.79 | | | |
| | F1-score | 0.92 | 0.94 | 0.98 | 0.81 | | | |
| RF | Precision | 0.85 | 0.86 | 0.97 | 0.78 | 0.8400 | 0.8757 | 0.8191 |
| | Recall | 0.88 | 0.93 | 0.95 | 0.54 | | | |
| | F1-score | 0.87 | 0.89 | 0.96 | 0.64 | | | |
| KNN | Precision | 0.85 | 0.86 | 0.99 | 0.77 | 0.8450 | 0.8760 | 0.8201 |
| | Recall | 0.90 | 0.93 | 0.91 | 0.59 | | | |
| | F1-score | 0.87 | 0.89 | 0.95 | 0.67 | | | |
| ANN | Precision | 0.89 | 0.94 | 0.98 | 0.92 | 0.9350 | 0.9358 | 0.9082 |
| | Recall | 0.92 | 0.93 | 0.96 | 0.93 | | | |
| | F1-score | 0.91 | 0.94 | 0.97 | 0.92 | | | |
| 1D-CNN | Precision | 0.97 | 0.97 | 0.98 | 0.94 | 0.9650 | 0.9671 | 0.9529 |
| | Recall | 0.93 | 0.97 | 0.99 | 0.96 | | | |
| | F1-score | 0.95 | 0.97 | 0.99 | 0.95 | | | |
| SAE | Precision | 0.93 | 0.95 | 0.99 | 0.96 | 0.9550 | 0.9584 | 0.9404 |
| | Recall | 0.97 | 0.97 | 0.97 | 0.90 | | | |
| | F1-score | 0.95 | 0.96 | 0.98 | 0.93 | | | |
| C-AENN | Precision | 0.97 | 0.98 | 0.99 | 0.97 | 0.9825 | 0.9794 | 0.9705 |
| | Recall | 0.97 | 0.98 | 0.99 | 0.97 | | | |
| | F1-score | 0.97 | 0.98 | 0.99 | 0.99 | | | |
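Continuing the sketches above, the summary columns of Table 11 (Macro-F1, OA, Kappa) can be reproduced for any fitted model with scikit-learn's metric utilities; the model and data names below are stand-ins from the earlier illustrative snippets.

```python
from sklearn.metrics import (accuracy_score, classification_report,
                             cohen_kappa_score, f1_score)

y_pred = classifiers["SVM"].predict(X_test)   # stand-in model from earlier sketches
print("Macro-F1:", f1_score(y_test, y_pred, average="macro"))
print("OA:      ", accuracy_score(y_test, y_pred))
print("Kappa:   ", cohen_kappa_score(y_test, y_pred))
print(classification_report(y_test, y_pred, digits=2))
```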