Article

Multi-Temporal Data Fusion in MS and SAR Images Using the Dynamic Time Warping Method for Paddy Rice Classification

1
Department of Urban Planning and Spatial Information, Feng Chia University, Taichung 40724, Taiwan
2
Department of Information Technology, Ling Tung University, Taichung 40851, Taiwan
3
Construction and Disaster Prevention Research Center, Feng Chia University, Taichung 40724, Taiwan
*
Author to whom correspondence should be addressed.
Agriculture 2022, 12(1), 77; https://doi.org/10.3390/agriculture12010077
Submission received: 12 December 2021 / Revised: 1 January 2022 / Accepted: 5 January 2022 / Published: 7 January 2022
(This article belongs to the Special Issue Digital Innovations in Agriculture)

Abstract: This study employed a data fusion method to extract high-similarity time series feature indices from a dataset through the integration of Multi-Spectral (MS) and Synthetic Aperture Radar (SAR) images. In Taiwan, farmland is divided into small patches, and farmers' planting choices differ from patch to patch; hence, the conventional image classification process cannot produce good outcomes, and crop phenological information becomes a core factor when working with multi-period image data. Accordingly, this study addresses the problem using three SPOT6 satellite images and nine Sentinel-1A synthetic aperture radar images acquired in 2019, from which features such as texture and indicator information were calculated. Considering that a Dynamic Time Warping (DTW) index (i) can integrate different image data sources, (ii) can integrate data of different lengths, and (iii) can generate information with time characteristics, this type of index can resolve certain problems in long-term crop classification and monitoring. More specifically, this study used DTW time series analysis to produce "multi-scale time series feature similarity indicators". We used three approaches (Support Vector Machine, Neural Network, and Decision Tree) to classify paddy patches in two groups: (a) the first group did not apply a DTW index, and (b) the second group extracted the conflicting predictions from (a) and applied a DTW index. The second group outperformed the first in overall accuracy (OA) and kappa. Among the classifiers, the Neural Network showed the largest improvement, with OA and kappa rising from 89.51% and 0.66 to 92.63% and 0.74, respectively. The other two classifiers also improved. The best classification result was obtained by the Decision Tree, at 94.71% and 0.81.
These outcomes indicate that combining spectral and radar images successfully resolved various image interference problems in paddy rice classification. Both overall accuracy and kappa improved, with the maximum kappa enhanced by about 8%. Classification performance was thus improved by considering the DTW index.

1. Introduction

Paddy rice occupies the largest crop area in the world and has great significance for the global economy, society, and culture. At present, farmland surveys in most countries are conducted manually, which is time-consuming, labor-intensive, and inefficient; it is difficult to complete a large-scale survey in a short time. With the advancement of satellite remote sensing technology in recent years, however, satellite-based farmland monitoring has become well accepted. Using satellite image data as a monitoring tool, together with machine learning approaches (both supervised and unsupervised learning), has become a major solution for land cover measurement. This greatly reduces the manpower and material resources required for agricultural monitoring and management [1,2].
Much research has been dedicated to using satellite optical data to produce GIS maps delineating paddy field areas via pixel-based image classification, including Maximum Likelihood, Neural Network, Decision Tree, Support Vector Machine, K-means, ISODATA, etc. These classification methods can be applied to imagery from the Landsat TM and ETM+ series, SPOT series, MODIS, Sentinel-2 and 3, RADARSAT series, ERS-1 and ERS-2, ENVISAT/ASAR, IRS, AVHRR, Sentinel-1A, aerial photography, UAVs, etc. Related research has also focused on increasing the accuracy of paddy rice mapping. For instance, a series of Vegetation Indicators (VI) and Texture Indicators (TI) can provide suitable input material with which interpretation results and classification accuracy can be reinforced. VI indicators include the Ratio Vegetation Index (RVI), NDVI, Soil-Adjusted Vegetation Index (SAVI), etc.; texture indicators include the Gray Level Co-Occurrence Matrix (GLCM), fractal dimension, semi-variogram, etc. Hence, the benefits of statistical analysis and machine learning can be greatly improved. In addition to pixel-based classification, Region-based Object Classification (ROC) is also well accepted. ROC has two main steps: (1) image segmentation and (2) image classification, and it also performs effectively. On the other hand, some scholars have imported Synthetic Aperture Radar (SAR) data with multiple time series to detect rice field areas, which has attracted new attention in recent years, since SAR data are not affected by sunlight. Accordingly, much research also performs Image Fusion (IF) processing between MS and SAR [3,4]. Unfortunately, traditional image fusion methods seem unable to resolve the large differences between the two image types, and some practical limitations are not easy to overcome.
For example, fused images are prone to unexpected noise [5,6], and the errors of data fusion over multi-period sequence images often accumulate through the classification process, generally resulting in unsatisfactory outcomes. That is, image fusion cannot obtain detailed crop phenological information, which is important for image classification [7]. Rice patches may be affected by mixed crops on a single patch, different planting seasons, and different varieties, all governed by different farmer behaviors. In addition, using a single image may suffer interference from cloud and fog, which can obstruct the classification results. Furthermore, considering only a single period of texture/vegetation indicators through image fusion destroys the structure of the landscape.
Therefore, this research does not use IF methods but instead takes the concept of the Data Fusion (DF) method as its starting point. The DF method requires a set of integrated calculations rather than evaluating data through a single model [8,9]; more specifically, it is a hybrid model that employs different data sources or analysis methods. Related work shows that most past studies focused on the fusion of multiple methods [9]. In this study, the data fusion process extracts the variation of features across periods with the following considerations. First, the lengths of the time series differ between data sources. Second, the properties of the data sources vary greatly: the original spectral bands must be effectively converted into rational indicators or textures, and the different source data have different resolutions and formats. Third, the way patch characteristics change varies across patches and over time, and this relevant information must be extracted effectively. Hence, DTW is applied in this study to fuse multi-period images [10]. DTW has been successfully applied to rice area surveys [11,12], landscape changes [10,13], forest type classification [14], farmland mapping [15,16,17,18,19,20], analysis of crop phenological factors [21], crop yield estimation [22], etc. This research extracts phenological information in the fragmented landscape from imagery through the DTW method to achieve rapid, stable, and highly accurate mapping of rice farmland.
Consequently, this study uses the Dynamic Time Warping (DTW) method to compare the similarity of MS and SAR image datasets through their vegetation index and texture index characteristics, respectively. The resulting similarity indices reveal relations in the changes of different land-use patches. That is, DTW is applied within the DF framework, using numerical responses (spectra, indicators, and textures) to detect similar features across different time series; these relations can improve the classification results efficiently. Three approaches (Support Vector Machine, SVM; Neural Network, NN; and Decision Tree, DT) are used to classify the paddy patches in two groups: (a) without applying the DTW index and (b) considering the DTW index. We adopt the most common classifiers (SVM, NN, and DT); the goal is not to compare these three classifiers and determine which is best, but to find a way to rationally integrate them with the DTW function over two kinds of image data (optical and radar). The paper proceeds in four steps: Step 1, the first-stage accuracy of consistency classification; Step 2, discussion of the accuracy of inconsistency classification; Step 3, examples of multi-scale features and description of integration results; Step 4, the overall accuracy of the hybrid classification.

2. Research Materials and Design

2.1. Research Materials

2.1.1. Research Site

The study site is in Yunlin County, in the Jianan Plain of western Taiwan. It is a major agricultural county with a noteworthy annual grain output, mainly producing rice, vegetables, peanuts, sweet potatoes, and other crops. The coordinates are approximately 120°27′00″ E, 23°48′00″ N. The soil is rich in organic nitrogen, phosphorus, potassium, and other elements, making the land highly productive; the soil is fertile, the climate is suitable, and irrigation and rainfall are abundant. Therefore, this study selected Xiluo in Yunlin, as shown in Figure 1. Figure 2a is a map of farmland patches in this area; the total area is about 5016.21 ha with 53,212 patches. Figure 2b shows the ground truth data for the study area, collected in the first half of 2019 by the Agriculture and Food Agency. Since the main aim of this research is to classify and interpret rice fields, the ground truth data are divided into only two classes, paddy rice and non-paddy rice, as in Table 1.

2.1.2. Research Data

SPOT 6 Images

SPOT images have four bands, multi-spectral B, G, R, and IR, with a spatial resolution of 6 m. This study selected SPOT-6 images from 23 January, 1 March, and 9 April 2019; the time gaps between them are 37 days and 39 days, respectively. Figure 3 shows the three SPOT6 optical satellite images selected for this study. SPOT6 images are easy to obtain in Taiwan and are delivered already atmospherically and geometrically corrected by the space remote sensing center at National Central University, which is why this study chose them. However, only the G, R, and IR bands were used to generate the ancillary indicator information; for detailed indicator descriptions, see Section 2.2.

Sentinel-1A Images

Sentinel-1A has a revisit period of 12 days, and Sentinel-1 supports four acquisition modes for different purposes: Stripmap Mode (SM), Interferometric Wide Swath Mode (IW), Extra Wide Swath Mode (EW), and Wave Mode (WM). Its spatial resolution is 5 × 20 m (16 × 66 feet). To cover the rice growth cycle of this survey, the Sentinel-1 radar data in this study include all radar images acquired between 31 January and 7 May 2019. Given the 12-day revisit cycle, nine radar images in this period were selected, acquired on 31 January, 12 February, 24 February, 8 March, 20 March, 1 April, 13 April, 25 April, and 7 May. All data can be downloaded free of charge from the European Space Agency (ESA) website [23]. The downloaded images were processed with the SNAP software through three pre-processing steps: radiometric correction, geometric correction, and image speckle noise removal. This study uses images in IW mode, the main operating mode over land, with both VV and VH polarization.

2.2. Research Design

In this study, three SPOT6 optical satellite images and nine Sentinel-1A synthetic aperture radar images (twelve images in total) were used to identify features such as textures and indicators. The feature value information was extracted using patches as the smallest unit, and time-series features were constructed at the same time. Dynamic Time Warping was then used to produce "multi-scale time series feature similarity indicators". DTW is a dynamic programming algorithm [10,11] that compares two sequences of different lengths; it effectively resolves time-distortion deviations in identification and computes the Euclidean distance between the two sequences to determine the similarity of their content. The gap between the vector distances is found rationally. This indicator can convert time series feature information of different scales and different sources into ancillary information.
Figure 4 outlines the research steps, which have three parts: 1, image feature calculation and database construction; 2, classification algorithm; 3, comparison of classification results. The contents are as follows.
  • Image feature calculation and dataset construction: In the optical image part, in addition to the four basic bands of red (Red), green (Green), blue (Blue), and near-infrared (NIR) light, this research included the Ratio Vegetation Index (RVI), the Normalized Difference Vegetation Index (NDVI), and four Gray Level Co-Occurrence Matrix (GLCM) texture indexes, namely Homogeneity, Contrast, Dissimilarity, and Entropy, making a total of 19 types of feature information. It is worth mentioning that GLCM and its associated texture features are image analysis techniques: an image is composed of pixels, each with an intensity (a specific gray level), and GLCM applies because different combinations of gray levels often co-occur in an image or image section. Texture feature calculations use the contents of the GLCM to measure the variation in intensity (image texture) at the pixel of interest. The radar part, on the other hand, uses C-band synthetic aperture radar (SAR) with two polarized images, VV and VH; in addition to VV and VH, the four aforementioned kinds of texture information were also adopted, for a total of 10 types of feature information. The features are shown in Table 2. In addition, the variation of satellite image features over time can become a key factor when observing paddy rice and non-paddy rice patterns. We label this part of the data as Label A.
  • Classification algorithm: This study used Support Vector Machine (SVM), Neural Network (NN), and C5.0 Decision Tree (DT) machine learning models, which we trained and verified with a 70% training (37,227 patches) / 30% verification (15,985 patches) split. This further illustrates that the design of this research differs from the traditional method: we applied the three classification methods in both a direct classification mode and a hybrid classification mode to achieve better outcomes. The direct classification mode uses the three machine learning models to directly classify the image feature information (optical, index, and texture; the Label A dataset). Generally speaking, paddy rice is a long-period crop, so an indicator combining optical, index, and texture features is needed to record variation over a long period. Hence, this study adopted a two-stage procedure. The first stage imports the image data into the three classifiers (SVM, DT, and NN). The second stage extracts the confused samples (patches) from the first stage and employs a new factor (DTW), which considers time variation, to improve classification performance. Specifically, this new ancillary information (DTW) was computed by dynamically aligning pairs of time series, with Euclidean distance matrices computed one by one; for example, optical image band B paired with band G is one group, band B with band R is another group, and so on, with all combinations calculated. The feature information groups totaled 27, requiring 351 combinations of feature similarity. The similarity index between each pair of time series features was produced, and the dataset was consolidated by combining the time series patch features.
The 351 combinations comprised Optical Image Feature Similarity (171 attributes), Radar Image Feature Similarity (28 attributes), and cross combinations of optical and radar image feature similarity (152 attributes). Therefore, in this study, all data combinations were generated to form a DTW index (Figure 5). We expect this process to resolve the confusion around classification patches (samples); in the meantime, the inconsistent patches from the classification model were further refined.
  • Comparison of classification: The training model accuracy had two parts. The first part was the result of the direct classification method, assigned Label B; the second part was the result of the hybrid classification, assigned Label C. The comparison items were computed from an analysis of commission errors and omission errors, and overall accuracy and kappa values were also employed. We then compared the performance of Label B and Label C.
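The two-stage design above hinges on separating patches that all three first-stage classifiers label identically from those on which they disagree. A minimal sketch of that split follows; the toy predictions and function name are ours, not the study's code:

```python
import numpy as np

def split_by_consistency(pred_svm, pred_nn, pred_dt):
    """Boolean mask: True where all three classifiers agree on a patch."""
    pred_svm, pred_nn, pred_dt = map(np.asarray, (pred_svm, pred_nn, pred_dt))
    return (pred_svm == pred_nn) & (pred_nn == pred_dt)

# Toy example: 6 patches, 1 = paddy rice, 0 = non-rice
svm = [1, 0, 1, 1, 0, 0]
nn  = [1, 0, 0, 1, 0, 1]
dt  = [1, 0, 1, 1, 0, 0]

agree = split_by_consistency(svm, nn, dt)
consistent_idx = np.flatnonzero(agree)     # labels kept as final results
inconsistent_idx = np.flatnonzero(~agree)  # re-classified with DTW features
```

In the study, the consistent group corresponds to the 49,084 patches kept after the first stage, and the inconsistent group to the 4128 patches passed on for DTW-augmented re-classification.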

3. Research Methods

3.1. Support Vector Machine

The Support Vector Machine (SVM) is a popular, well-accepted supervised machine learning tool for both classification and regression. The SVM classifier supports binary and multiclass classification, whereas the structured SVM trains a classifier for generally structured output labels. Many hyperplanes may be able to classify the data; a rational choice of the best hyperplane is the one that produces the largest separation, or margin, between the two classes, i.e., the hyperplane that maximizes the distance to the nearest sample point on each side. The approach builds on statistical learning theory and is generally applied as an effective classifier for many practical problems; its characteristic is to minimize the empirical classification error while maximizing the geometric margin [25]. The SVM requires an appropriate kernel function, which takes data as input and converts it into the required form: data that cannot be linearly separated in the original space can, after a nonlinear projection into a higher-dimensional space, become separable. Common kernels are linear, polynomial, Radial Basis Function (RBF), and sigmoid. In this study, the bias value is set to 0, the RBF kernel is adopted, and c = 10 and gamma = 0.1 are used as initial parameter settings. A stopping criterion of 0.001 is used to terminate the program and output the results.
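The kernel and parameter settings above can be reproduced in scikit-learn, used here purely as an illustrative stand-in for the study's actual software (SPSS Modeler); the toy data and class labels are ours:

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated toy clusters; 0 = non-rice, 1 = paddy rice (hypothetical)
X = np.array([[0.0, 0.0], [0.5, 0.3], [0.2, 0.6], [0.4, 0.1],
              [5.0, 5.0], [5.3, 4.8], [4.7, 5.2], [5.1, 5.4]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# RBF kernel, c = 10, gamma = 0.1, stopping tolerance 0.001, as in the text
clf = SVC(kernel="rbf", C=10, gamma=0.1, tol=0.001)
clf.fit(X, y)
pred = clf.predict([[0.3, 0.2], [5.0, 5.1]])
```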

3.2. Neural Network

Neural networks are information processing networks inspired by the way biological neural systems process data. Neural Networks (NN) were first proposed in the early 1940s as an attempt to simulate human cognitive learning processes [26]. They develop models of problems through trial-and-error or learning procedures rather than explicit calculation, and in recent decades the Back Propagation Neural Network has been widely applied in many fields to capture the relations between massive data and a given phenomenon. Experience shows that attribute inputs (including ancillary information) from remote sensing images are commonly used for image classification; if a paddy area spatial dataset is well constructed, with rational input variables and output categories, a Back Propagation Neural Network is an appropriate learning machine [27]. Basically, a neural network consists of many nodes connecting input neurons and output neurons through three sorts of layers: an input layer, a hidden layer, and an output layer. This study adopted a Multi-Layer Perceptron (MLP) consisting of three layers of nodes: an input layer, a hidden layer with 13 neurons, and an output classification layer. The optical input has 19 neurons, and the radar input has 8 neurons. The activation function is the sigmoid function, and training stops after 800 epochs or when the difference falls below 0.02%.
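The architecture described above (one hidden layer of 13 neurons, sigmoid activation, up to 800 epochs) can be sketched with scikit-learn's MLP as a stand-in; the 2-input toy data are ours, whereas the study's optical and radar inputs have 19 and 8 neurons, respectively:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.3], [0.3, 0.2],
              [5.0, 5.1], [4.8, 5.3], [5.2, 4.9], [5.1, 5.2]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = non-rice, 1 = rice (toy labels)

# 13 hidden neurons, logistic (sigmoid) activation, up to 800 epochs;
# the lbfgs solver is our choice for this tiny example
clf = MLPClassifier(hidden_layer_sizes=(13,), activation="logistic",
                    solver="lbfgs", max_iter=800, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```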

3.3. Decision Tree

A decision tree is a tree structure containing internal and external nodes connected by branches; it is a data-driven predictive model that maps observations about an item to conclusions about its target value and is often used by scientists and engineers to generate "rules" [28]. An internal node is a decision point that evaluates a decision function to determine which child node to visit next; an external node, also known as a leaf or terminal node, has no children and carries a label characterizing the data that reach it. In general, a decision tree is employed as follows: a datum (a vector of several attributes) is presented to the root node; depending on the result of the decision function at each internal node, the tree branches to one of that node's children; this process repeats until a terminal node is reached, and its label or value is assigned to the datum. In this study, the height of the DT is limited to 17 layers, and an exhaustive algorithm is used to enumerate all the conditions the samples can fit.
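The depth limit above can be sketched as follows. Note that the study used C5.0 in SPSS Modeler; scikit-learn's CART tree serves only as an illustrative stand-in with a different splitting criterion, and the toy data are ours:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.1, 0.9], [0.2, 0.8], [0.8, 0.2], [0.9, 0.1]])
y = np.array([0, 0, 1, 1])  # toy labels: 0 = non-rice, 1 = rice

# Tree height limited to 17 layers, as in the text
clf = DecisionTreeClassifier(max_depth=17, random_state=0)
clf.fit(X, y)
pred = clf.predict([[0.15, 0.85], [0.85, 0.15]])
```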

3.4. DTW Methods

DTW is an algorithm for computing the similarity between two temporal sequences; it can be applied to temporal sequences of video, audio, and graphics data, and indeed to any data that can be turned into a linear sequence. A well-known application is automatic speech recognition across different speaking speeds, and it can also be used in partial shape matching. Reviewing past DTW research, Petitjean et al. (2012) used the DTW algorithm on SPOT time-series satellite images to classify land-use coverage, incorporating K-means and DTW into image measurement. Their results show that matching the similarity of multi-period images with DTW yields better classification outcomes than using a single image [10].
The alignment that measures similarity between the two sequences is called the "warping path"; along this path the two signals are aligned in time, converting an original set of points X (original), Y (original) to X (warped), Y (warped). Sequences of varying speed may also be averaged using this technique. When two different time series are matched with each other, a line chart or other visualization shows directly whether there is a strong similarity between the two, making it possible to objectively quantify the degree of similarity between two images.
To calculate the DTW similarity of two time series, one first establishes an m × n Euclidean distance matrix M and then generates a cost (cumulative) matrix Mc based on it. The cumulative matrix Mc(i, j) is defined as follows:
Mc(i, j) = M(i, j) + min{Mc(i − 1, j − 1), Mc(i − 1, j), Mc(i, j − 1)}    (1)
In Equation (1), Mc(i, j) represents the accumulated cost of reaching point (i, j) along the warping route; the minimum in (1) is taken over the three neighboring cells. The optimal DTW value for the two series is found by computing Mc from (1, 1) up to (i, j).
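Equation (1) can be implemented directly with a small dynamic-programming loop. The sketch below is our own, not the study's Python program; it uses absolute differences as the local distance M and handles series of unequal length:

```python
import numpy as np

def dtw_distance(x, y):
    """Cumulative-cost DTW following Equation (1)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, n = len(x), len(y)
    M = np.abs(x[:, None] - y[None, :])   # local distance matrix
    Mc = np.empty((m, n))                 # cumulative cost matrix
    Mc[0, 0] = M[0, 0]
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            prev = min(Mc[i - 1, j - 1] if i and j else np.inf,
                       Mc[i - 1, j] if i else np.inf,
                       Mc[i, j - 1] if j else np.inf)
            Mc[i, j] = M[i, j] + prev     # Equation (1)
    return Mc[-1, -1]

d_same = dtw_distance([1, 2, 3], [1, 2, 3])     # identical series
d_shift = dtw_distance([1, 2, 3], [2, 3, 4])    # time-shifted series
d_stretch = dtw_distance([0, 0, 1], [0, 1])     # different lengths
```

Identical series give a distance of 0, and the warping path absorbs both shifts and differing lengths, which is exactly why DTW suits multi-source time series of unequal length.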
Since DTW can analyze two sets of timing information with different scales and lengths, it produces an intuitive numerical value for the degree of similarity of the timing fluctuations between the two sets [10,11,12,13,14,15,16,17,18,19,20,21,22]. This study presumes that different sources of information each contribute to classification. We utilized the "multi-scale time series feature similarity" indicators under the concept of data fusion, in particular comparing the ancillary radar information against the optical image data to produce the similarities. Based on Equation (1), this study uses Python to write a "multi-scale time series feature similarity indicators" program that processes multiple time series features in batches and merges all the characteristics; the features are computed as a feature similarity index, and the remaining features are then imported step by step. All the aforementioned data are applied to the inconsistently classified outcomes of the three approaches (SVM, NN, and DT), since the best way to resolve inconsistent classifications is to introduce new ancillary information (such as DTW).
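For reference, the 351 pairwise combinations mentioned in Section 2.2 can be counted with standard combinatorics, assuming 19 optical and 8 radar feature series as implied by the attribute counts given there (171 = C(19, 2), 28 = C(8, 2), 152 = 19 × 8); the feature names below are hypothetical placeholders:

```python
from itertools import combinations

# Hypothetical feature names; the study's actual feature set is in Table 2
optical = [f"opt_{k}" for k in range(19)]
radar = [f"sar_{k}" for k in range(8)]

opt_pairs = list(combinations(optical, 2))               # optical-optical
sar_pairs = list(combinations(radar, 2))                 # radar-radar
cross_pairs = [(o, r) for o in optical for r in radar]   # optical-radar
total = len(opt_pairs) + len(sar_pairs) + len(cross_pairs)
```

Each pair of feature time series would then be passed to a DTW similarity computation to build the similarity index dataset.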

3.5. Accuracy Verification

To test the accuracy of the final automated classification model, this study uses the confusion matrix and the kappa value to assess the image interpretation and classification accuracy of the final results. Four regions are randomly selected within the study area as verification regions to check the final results of the new classification model.
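The two accuracy measures named above can be computed from a confusion matrix as follows; the example matrix is illustrative, not the study's actual results:

```python
import numpy as np

def oa_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement = OA
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Rows = ground truth (rice, non-rice), columns = prediction (toy numbers)
oa, kappa = oa_and_kappa([[45, 5], [5, 45]])
```

Kappa corrects overall accuracy for agreement expected by chance, which is why it is reported alongside OA throughout this paper.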

3.6. Model Software

In this study, we used IBM SPSS Modeler 18.1 to carry out the analysis. The software is user-friendly, with a graphical interface that displays how each outcome is obtained.

4. Results and Discussion

4.1. Examples of Optical and Radar Timing Characteristics Data

The results of this part correspond to Label A in Figure 4. Conventionally, vegetation indices and texture information can successfully classify paddy rice through image classification; this study goes further. Taking a closer look at the radar imagery, Figure 6 shows the texture characteristic curves of the Sentinel-1A VH and VV polarization images, respectively. Overall, entropy and homogeneity display dramatic differences between paddy rice and non-paddy rice in the time series analysis, and rice and non-rice also differ across the other indicators. Examining the texture analysis of Figure 6a,c over the rice growth cycle, rice transplanting (of rice seedlings) happens after 31 January. As the rice grows, the leaves gradually cover the surface soil; leaf edges break up continuously, and bright spots and flat areas increase at the same time, so the changes in the overall texture information become inconsistent and the texture value decreases. After paddy heading, from 24 February to 1 April, as the ears of rice grow, the degree of texture disorder (see the entropy indicator) gradually increases, while homogeneity decreases slightly and homogeneous areas produce smaller values. From 1 April to 25 April, as the rice leaves grow to cover the soil reflection, the texture tends toward consistency: entropy decreases while homogeneity increases. Around 27 May, when the rice ears are mature and exposed, the harvest period begins; the highly reflective rice ears reduce uniformity, so the texture value increases sharply [29]. From the analysis of the aforementioned band information, we can observe clear temporal characteristics in the trend information, changes along the time axis that tended to be ignored in the past.
In Figure 6, the x axis is the observation time, and the y axis is the normalized value of each type of texture information. Previously, there was no ideal tool to effectively integrate this information; if we feed this effective image information into the subsequent classification algorithms, it can certainly help the classification of rice fields.

4.2. Comparison on Direct Classification Method and Hybrid Classification Method

Since the study area is too large to display in full, we chose a small area to present the classification results; the confusion matrix, however, is generated from the entire study area. The chosen range is marked by the red frame in Figure 7.

4.2.1. Direct Classification Method

The results of this part correspond to Label B in Figure 4. Table 3 shows the analysis results of the direct classification methods; from top to bottom, the methods are SVM, NN, and DT. Among the three, DT achieved the best overall accuracy and kappa value, 93.26% and 0.76, respectively; the worst was NN, with 89.51% and 0.66, and SVM fell in between. Figure 8 shows the results of the three algorithms; the blue frames indicate where the calculated result is inconsistent with the ground truth data. In direct classification, regardless of the algorithm used, commission errors for rice still accounted for a certain number of cases.
We therefore examined how to integrate the optical and radar information by considering multiple data sources with multiple algorithms (see Table 3). The commission error for rice is quite serious: although a large number of texture images were used in the analysis, many commission errors remain in the classification results, and even using radar information at the same time does not seem to enhance the performance. This suggests that the current results may be over-trained on the non-rice part. There are many reasons for rice misclassification; it is a common phenomenon in rice classification because rice samples are grown on the ground under different time scenarios. To address the commission errors, we employ a hybrid classification method in the next step.

4.2.2. Hybrid Classification

The results correspond to Label C in Figure 4. The hybrid classification method has two parts. In the first, 49,084 patches received consistent labels from all three classifiers in the first stage, regardless of whether they were labeled rice or non-rice. The remaining 4,128 patches received inconsistent labels and were re-classified. For the re-classification, the DTW-based "multi-scale time series feature similarity indicators" were employed: the DTW values were calculated over the entire image, the inconsistent patches were extracted individually, and the new indicators were added to the dataset before re-classification. We therefore present the results in four steps: (1) the accuracy of the consistently classified patches; (2) the accuracy of the inconsistently classified patches; (3) the DTW indicator results; and (4) the overall accuracy of the integrated hybrid classification method.
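The patch-splitting step above can be sketched as a simple agreement check across the three first-stage label maps. The predictions here are synthetic stand-ins, not the study's actual outputs.

```python
# Sketch of the patch-splitting step: patches on which all three first-stage
# classifiers agree keep their label; the rest are set aside for
# re-classification with the DTW indicators. Labels here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_patches = 10
pred_svm = rng.integers(0, 2, n_patches)   # 1 = rice, 0 = non-rice
pred_nn  = rng.integers(0, 2, n_patches)
pred_dt  = rng.integers(0, 2, n_patches)

# A patch is "consistent" when SVM, NN, and DT all assign the same label.
consistent = (pred_svm == pred_nn) & (pred_nn == pred_dt)
accepted_ids   = np.flatnonzero(consistent)    # keep the first-stage label
reclassify_ids = np.flatnonzero(~consistent)   # re-run with DTW features

print("accepted:", accepted_ids.tolist())
print("to re-classify:", reclassify_ids.tolist())
```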
Step 1. First-stage accuracy of the consistency classification.
This study uses a two-stage classification. Table 4 presents the first-stage results for the patches that received consistent labels from all three algorithms. These patches indicate that the current input feature variables have reached the limit of what classification can achieve; in other words, under the condition of maximizing accuracy with the available image data, this represents the best performance the machine learning models can reach.
Step 2. Accuracy of the inconsistency classification.
Table 5 presents the patches that received inconsistent labels among the three algorithms; from top to bottom, the classifiers are SVM, NN, and DT. Among these, DT had the best accuracy and kappa value, 73.64% and 0.26, respectively. NN performed worst, with an overall accuracy of 25.34% and a kappa value of −0.14. Table 5 shows that, for these patches, both rice and non-rice samples suffered extreme commission and omission errors. There are many reasons for the commission errors of rice; the main ones are image quality and planting practices (time differences, mixed planting). These errors are very difficult to resolve with an existing classifier unless time-history data are employed. Hence, we incorporated DTW time feature variables into the three algorithms for the second-stage classification. The results show that employing DTW in the classification process raised the accuracy toward the maximum achievable level.
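The second-stage re-classification can be sketched as feature augmentation: the DTW similarity indicators are appended to each inconsistent patch's original features before re-fitting. The data, dimensions, and choice of a single DT model here are illustrative assumptions only.

```python
# Sketch of the second-stage re-classification: the inconsistent patches get
# the DTW similarity indicators appended to their original features and are
# classified again. All data and dimensions here are synthetic placeholders
# (the paper uses 351 DTW feature combinations).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n_incons, n_orig_feat, n_dtw_feat = 200, 20, 8
X_orig = rng.normal(size=(n_incons, n_orig_feat))   # spectral/texture features
X_dtw  = rng.random(size=(n_incons, n_dtw_feat))    # similarity values in [0, 1]
y      = rng.integers(0, 2, n_incons)               # 1 = rice, 0 = non-rice

# Augment the original features with the DTW indicators, then re-fit.
X_aug = np.hstack([X_orig, X_dtw])
clf = DecisionTreeClassifier(random_state=0).fit(X_aug, y)

print("second-stage feature dimension:", X_aug.shape[1])
```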
Step 3. Examples of multi-scale time series feature similarity indicators and description of integration results.
Table 6 presents a dataset that converts the time series information into dynamic relationships; that is, for each patch, an index is generated by the calculation of the "multi-scale time series feature similarity indicators" developed in this research. Because the full dataset is too large, we extracted a portion of it to illustrate the results.
In Table 6, the y axis is the patch number in the demonstration area (53,212 patches in total), and the x axis is the index of the multi-scale time series feature combinations (351 features). Each grid value in Table 6 is the similarity between two time series features for the corresponding patch; the higher the value, the greater the similarity. For example, for patch ID = 1, four feature datasets sorted by similarity value are: dataset 2 (0.7318) > dataset 3 (0.2544) > dataset 1 (0.1027) > dataset 4 (0.0869), where (1) Dataset 1: SPOT6 red band vs. SPOT6 green band; (2) Dataset 2: SPOT6 red band vs. SPOT6 blue band; (3) Dataset 3: SPOT6 red band vs. SPOT6 near-infrared band; and (4) Dataset 4: SPOT6 near-infrared band vs. crop management factor index (CMFI).
The example above illustrates how the indicator is calculated. The similarity between time series features of different scales is converted into actual values, and the dataset's multiple features are integrated into a single worksheet, which greatly facilitates analysis across different time domains and data sources. It is worth noting that the analysis at this stage covers the entire image. These analyses are easy to manage numerically because each patch ID can be traced to its corresponding location.
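The indicator calculation can be sketched with a classic dynamic-programming DTW distance converted into a bounded similarity score. The 1/(1 + distance) mapping is an illustrative assumption; the paper does not specify its exact normalization.

```python
# Minimal DTW sketch for one patch: compute the DTW distance between two
# per-patch feature time series (e.g., SPOT6 red vs. green over the year) and
# convert it to a similarity score in (0, 1]. The 1/(1 + d) mapping is an
# illustrative assumption, not the paper's stated formula.
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW with local cost |a_i - b_j|."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Optimal warping path: match, insertion, or deletion step.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def similarity(a, b):
    return 1.0 / (1.0 + dtw_distance(a, b))

# Two toy time series of different lengths (DTW handles unequal lengths).
red   = np.array([0.1, 0.3, 0.6, 0.5, 0.2])
green = np.array([0.1, 0.2, 0.35, 0.55, 0.5, 0.2])
print("DTW distance:", dtw_distance(red, green))
print("similarity  :", similarity(red, green))
```

Computing this similarity for every feature pair and every patch yields a worksheet of the same shape as Table 6: one row per patch, one column per feature combination.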
Step 4. The overall accuracy of the hybrid classification.
Table 7 shows the results of the hybrid classification method for SVM, NN, and DT. Comparing the three classifiers, DT had the best result, with accuracy and kappa values of 94.71% and 0.81, respectively. NN again showed the worst result, with an overall accuracy of 92.63% and a kappa value of 0.74; even so, applying DTW greatly improved the NN accuracy. The other two classifiers (SVM and DT) also improved with DTW, and the final results of DT and SVM are largely the same. This shows that the new ancillary DTW information consistently improves the classification results. Figure 9 shows the results of the three algorithms. Zooming in on the selected area reveals that, compared with the direct classification method, the improvement is more pronounced for commission errors than for omission errors. The many yellow frames in Figure 9 mark corrections to the NN result, indicating that the prediction accuracy of NN under the hybrid method improved relative to the direct method. In other words, the DTW indicator provides better classification performance, and our results show how it resolved the confusing parts of the image. As usual, if a patch is classified into the same category by different classifiers, very few errors are produced [9]; if it is not, deploying a new indicator (DTW) can be expected to correct the erroneous pattern.
There are usually two ways to express classification accuracy: overall accuracy (OA) and the kappa value. OA is the proportion of correctly classified samples among all samples, but it is easily distorted by the omission and commission error rates; the kappa value must therefore also be considered, as it is a better reference than OA for observing commission and omission errors. The kappa results in Table 3 and Table 7, SVM (0.72 vs. 0.80), NN (0.66 vs. 0.74), and DT (0.76 vs. 0.81), show that all three classifier models benefit from applying DTW.
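Both metrics can be computed directly from a confusion matrix. As a worked check, the SVM confusion matrix from Table 3 reproduces the reported OA of 91.74% and kappa of 0.72:

```python
# OA and Cohen's kappa from the SVM confusion matrix in Table 3
# (rows = predicted class, columns = ground truth class).
import numpy as np

cm = np.array([[7173, 3868],     # predicted paddy rice
               [526, 41645]])    # predicted non-paddy rice
n = cm.sum()                     # 53,212 patches

oa = np.trace(cm) / n                                 # observed agreement
pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2   # chance agreement
kappa = (oa - pe) / (1 - pe)

print(f"OA = {oa:.2%}, kappa = {kappa:.2f}")          # OA = 91.74%, kappa = 0.72
```

Unlike OA, kappa discounts the agreement expected by chance (pe), so it penalizes a classifier that inflates OA by over-predicting the dominant non-rice class.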
Overall, DTW is based on dynamic programming, which effectively reduces search and comparison time. The multi-scale time series feature similarity indicators developed in this research can transform multi-dimensional data into two-dimensional information. DTW is applicable because crop status is a long-term characteristic, and this research shows that time features are helpful for classifying imagery with such long-term characteristics, particularly in small farmland areas and fragmented landscapes. Through the integration of DTW data, the large differences between optical and radar images can be overcome, their different spatial resolutions can be integrated, and the limitations imposed by the different atmospheric conditions under which the two image types are acquired are resolved. This indicator therefore has very high potential for crop phenology detection.

5. Summary and Conclusions

This study developed a multi-scale time series feature similarity index based on Dynamic Time Warping (DTW) theory to integrate multi-source, multi-scale time-series image information. The training/test dataset was analyzed through a verification process showing that the time series similarity index of the multi-scale feature data adds useful information to the original features. The conclusions of this paper are as follows:
  • This study used SPOT6 optical images and Sentinel-1A radar images as research materials, which differs from the mainstream use of image fusion for interpretation in past studies. The massive time series features in the datasets are integrated into a simple index that presents the data dimensions in a single dataset. This approach opens new possibilities for subsequent analysis of information at different scales.
  • The homogeneity and entropy of the radar images provide new information for time series analysis, which greatly helps the classification of paddy rice. The temporal behavior of these features distinguishes paddy rice from non-paddy rice easily.
  • This study compares a "direct classification method" with a "hybrid classification method". In the direct method, the feature information of the optical and radar images is used for classification directly, giving overall accuracies of 91.74% (kappa 0.72, SVM), 89.51% (kappa 0.66, NN), and 93.26% (kappa 0.76, DT). In the second stage of the hybrid method, the inconsistently classified patches were re-classified with DTW feature information using the three approaches and combined with the consistent patches from the first stage, producing overall accuracies of 94.43% (kappa 0.80, SVM), 92.63% (kappa 0.74, NN), and 94.71% (kappa 0.81, DT). This also demonstrates that DTW is robust.
  • These results provide a feasible way to integrate radar feature information with optical feature information, especially for multi-period data. Optical images for particular periods are difficult to obtain because of weather conditions, whereas radar images can be acquired regularly since cloud and fog interference is avoided. The proposed design overcomes the disadvantages of each source, leading to better classification performance. Given these various restrictions, it is especially suitable for small farmland areas and fragmented landscapes.

Author Contributions

T.C.L. was responsible for the plan and design of this study; he analyzed the data and contributed to the discussion. S.W. helped with writing the manuscript and the discussion of results. Y.C.W. wrote the computer program. H.-P.W. and C.-W.H. used the program to plot the thematic map and generate tables. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology (MOST), grant number MOST 103-2119-M-035-002.

Acknowledgments

The authors would like to thank the MOST for providing image data and related information. The authors are also very grateful to Z. H. Zhu, Department of Geography, National Taiwan University, for his advice and suggestions for this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kumar, S.; Mishra, S.; Khanna, P. Precision sugarcane monitoring using SVM classifier. Procedia Comput. Sci. 2017, 122, 881–887.
  2. Wan, S.; Wang, Y.P. The comparison of density-based clustering approach among different machine learning models on paddy rice image classification of multispectral and hyperspectral image data. Agriculture 2020, 10, 465.
  3. De Bernardis, C.; Vicente-Guijalba, F.; Martinez-Marin, T.; Lopez-Sanchez, J.M. Contribution to real-time estimation of crop phenological states in a dynamical framework based on NDVI time series: Data fusion with SAR and temperature. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3512–3523.
  4. Onojeghuo, A.O.; Blackburn, G.A.; Wang, Q.; Atkinson, P.M.; Kindred, D.; Miao, Y. Mapping paddy rice fields by applying machine learning algorithms to multi-temporal Sentinel-1A and Landsat data. Int. J. Remote Sens. 2018, 39, 1042–1067.
  5. Betbeder, J.; Laslier, M.; Corpetti, T.; Pottier, E.; Corgne, S.; Hubert-Moy, L. Multi-temporal optical and radar data fusion for crop monitoring: Application to an intensive agricultural area in Brittany (France). In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 1493–1496.
  6. Esteban, J.; Starr, A.; Willetts, R.; Hannah, P.; Bryanston-Cross, P. A review of data fusion models and architectures: Towards engineering guidelines. Neural Comput. 2005, 14, 273–281.
  7. Zhou, G.; Liu, X.; Liu, M. Assimilating remote sensing phenological information into the WOFOST model for rice growth simulation. Remote Sens. 2019, 11, 268.
  8. Joshi, N.; Baumann, M.; Ehammer, A.; Fensholt, R.; Grogan, K.; Hostert, P.; Waske, B. A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring. Remote Sens. 2016, 8, 70.
  9. Lei, T.C.; Wan, S.; Wu, S.C.; Wang, H.P. A new approach of ensemble learning technique to resolve the uncertainties of paddy area through image classification. Remote Sens. 2020, 12, 3666.
  10. Petitjean, F.; Ketterlin, A.; Gancearski, P. A global averaging method for dynamic time warping, with applications to clustering. Pattern Recognit. 2011, 44, 678–693.
  11. Wang, M.; Wang, J.; Chen, L. Mapping paddy rice using weakly supervised long short-term memory network with time series Sentinel optical and SAR images. Agriculture 2020, 10, 483.
  12. Gella, G.W.; Bijker, W.; Belgiu, M. Mapping crop types in complex farming areas using SAR imagery with dynamic time warping. ISPRS J. Photogramm. Remote Sens. 2021, 175, 171–183.
  13. Viana, C.M.; Girão, I.; Rocha, J. Long-term satellite image time-series for land use/land cover change detection using refined open source data in a rural region. Remote Sens. 2019, 11, 1104.
  14. Cheng, K.; Wang, J. Forest-type classification using time-weighted Dynamic Time Warping analysis in mountain areas: A case study in southern China. Forests 2019, 10, 1040.
  15. Guan, X.; Huang, C.; Liu, G.; Meng, X.; Liu, Q. Mapping rice cropping systems in Vietnam using an NDVI-based time-series similarity measurement based on DTW distance. Remote Sens. 2016, 8, 19.
  16. Moola, W.S.; Bijker, W.; Belgiu, M.; Li, M. Vegetable mapping using fuzzy classification of Dynamic Time Warping distances from time series of Sentinel-1A images. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102405.
  17. Guan, X.D.; Liu, G.H.; Huang, C.; Meng, X.L.; Liu, Q.S.; Wu, C.; Ablat, X.; Chen, Z.R.; Wang, Q. An open-boundary locally weighted Dynamic Time Warping method for cropland mapping. ISPRS Int. J. Geo-Inf. 2018, 7, 75.
  18. Manabe, V.D.; Melo, M.R.; Rocha, J.V. Framework for mapping integrated crop-livestock systems in Mato Grosso, Brazil. Remote Sens. 2018, 10, 1322.
  19. Csillik, O.; Belgiu, M.; Asner, G.P.; Kelly, M. Object-based time-constrained Dynamic Time Warping classification of crops using Sentinel-2. Remote Sens. 2019, 11, 1257.
  20. Dong, Q.; Chen, X.; Chen, J.; Zhang, C.S.; Liu, L.; Cao, X.; Zang, Y.Z.; Zhu, X.F.; Cui, X.H. Mapping winter wheat in North China using Sentinel 2A/B data: A method based on phenology-time weighted Dynamic Time Warping. Remote Sens. 2020, 12, 1274.
  21. Zhao, F.; Yang, G.; Yang, X.; Cen, H.; Zhu, Y.; Han, S.; Yang, H.; He, Y.; Zhao, C. Determination of key phenological phases of winter wheat based on the time-weighted Dynamic Time Warping algorithm and MODIS time-series data. Remote Sens. 2021, 13, 1836.
  22. Zhao, F.; Yang, G.J.; Yang, H.; Zhu, Y.H.; Meng, Y.; Han, S.Y.; Bu, X.L. Short and medium-term prediction of winter wheat NDVI based on the DTW–LSTM combination method and MODIS time series data. Remote Sens. 2021, 13, 4660.
  23. European Space Agency—ESA. Available online: https://step.esa.int/main/toolboxes/snap/ (accessed on 1 May 2018).
  24. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309.
  25. Wan, S.; Yeh, M.L.; Ma, H.L. An innovative intelligent system with integrated CNN and SVM: Considering various crops through hyperspectral image data. ISPRS Int. J. Geo-Inf. 2021, 10, 242.
  26. Werbos, P.J. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. Thesis, Harvard University, Cambridge, MA, USA, 1974.
  27. Kumar, A.; Kim, J.; Lyndon, D.; Fulham, M.; Feng, D. An ensemble of fine-tuned convolutional neural networks for medical image classification. IEEE J. Biomed. Health Inform. 2017, 21, 31–40.
  28. Bazzi, H.; Baghdadi, N.; Hajj, E.M.; Zribi, M.; Minh, D.H.T.; Ndikumana, E.; Courault, D.; Belhouchette, H. Mapping paddy rice using Sentinel-1 SAR time series in Camargue, France. Remote Sens. 2019, 11, 887.
  29. Yang, K.; Gong, Y.; Fang, S.; Duan, B.; Yuan, N.; Peng, Y.; Zhu, R. Combining spectral and texture features of UAV images for the remote estimation of rice LAI throughout the entire growing season. Remote Sens. 2021, 13, 3001.
Figure 1. Study area.
Figure 2. The patches are taken from the photo in 2019. (a) Farmland patches in study area. (b) The ground truth data from the Agriculture and Food Agency.
Figure 3. The selected SPOT6 satellite image.
Figure 4. Steps for this study.
Figure 5. The DTW index of analyzed data sets for steps.
Figure 6. Sentinel-1A VH and VV variation, (a) paddy rice (VH), (b) non-paddy rice (VH), (c) paddy rice (VV), and (d) non-paddy rice (VV).
Figure 7. Display area with detailed information.
Figure 8. Comparison of direct classification and ground truth data, (a) SVM, (b) NN, and (c) DT.
Figure 9. Comparison on hybrid classification and ground truth data, (a) SVM, (b) NN, and (c) DT.
Table 1. The ground truth data of Xiluo in 2019.
Categories | Number of Patches | Area (ha)
Paddy Rice | 7,699 | 1,634.96
Non-Paddy Rice | 45,513 | 3,379.83
Total | 53,212 | 5,014.80
Table 2. Ancillary Information.
Vegetation Index | Formula
RVI (Ratio Vegetation Index) | R / NIR
NDVI (Normalized Difference Vegetation Index) | (NIR − R) / (NIR + R)
PVI (Perpendicular Vegetation Index) | (NIR − NIR_Soil) / √(1 + B²)
SAVI (Soil-adjusted Vegetation Index) | (1 + L) × (NIR − R) / (NIR + R + L)
TSAVI (Transformed Soil-adjusted Vegetation Index) | B(NIR − NIR_Soil) / [R + B(NIR − A) + X(1 + B²)]
CMFI (Cropping Management Factor Index) | R / (NIR + R)
GI (Greenness Index) | NIR / G
IPVI (Infrared Percentage Vegetation Index) | NIR / (NIR + R)
MSAVI (Modified Soil-adjusted Vegetation Index) | (2·NIR + 1 − √((2·NIR + 1)² − 8(NIR − R))) / 2
OSAVI (Optimized Soil-adjusted Vegetation Index) | (NIR − R) / (NIR + R + Y)
GESAVI (Generalized Soil-adjusted Vegetation Index) | (NIR − NIR_Soil) / (R + Z)
HOM (Homogeneity) | Homogeneity = Σ_i Σ_j C_ij(d, θ) / (1 + (i − j)²)
CON (Contrast) | Contrast = Σ_{i,j} |i − j|² p(i, j)
DIS (Dissimilarity) | Dissimilarity = Σ_i Σ_j C_ij |i − j|
ENT (Entropy) | Entropy = −Σ_i Σ_j C_ij log C_ij
Empirical coefficients: L = 0.5, X = 0.08, Y = 0.16, Z = 0.35, considering multiple scattering conditions; R_Soil = A + B × R (A = 0.011, B = 1.16) [24]; NIR_Soil = BR − A, where A and B are the soil line parameters and BR = blue band × B.
Table 3. Direct classification.
SVM (Direct Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 7,173 | 3,868 | 11,041 | 0.65
Non-Paddy Rice | 526 | 41,645 | 42,171 | 0.99
Sum of Rows | 7,699 | 45,513 | 53,212 |
User's Accuracy | 0.93 | 0.92 | |
Accuracy: 91.74%; kappa: 0.72

NN (Direct Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 7,075 | 4,958 | 12,033 | 0.59
Non-Paddy Rice | 624 | 40,555 | 41,179 | 0.99
Sum of Rows | 7,699 | 45,513 | 53,212 |
User's Accuracy | 0.92 | 0.89 | |
Accuracy: 89.51%; kappa: 0.66

DT (Direct Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 7,250 | 3,139 | 10,389 | 0.70
Non-Paddy Rice | 449 | 42,374 | 42,823 | 0.99
Sum of Rows | 7,699 | 45,513 | 53,212 |
User's Accuracy | 0.94 | 0.93 | |
Accuracy: 93.26%; kappa: 0.76
Table 4. Consistency of classification outcome.
Consistency of Classification | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 6,904 | 2,208 | 9,112 | 0.76
Non-Paddy Rice | 292 | 39,680 | 39,972 | 0.99
Sum of Rows | 7,196 | 41,888 | 49,084 |
User's Accuracy | 0.96 | 0.95 | |
Accuracy: 94.91%; kappa: 0.82
Table 5. Inconsistency of classification outcome.
SVM (Inconsistency of Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 269 | 1,660 | 1,929 | 0.14
Non-Paddy Rice | 234 | 1,965 | 2,199 | 0.89
Sum of Rows | 503 | 3,625 | 4,128 |
User's Accuracy | 0.53 | 0.54 | |
Accuracy: 54.12%; kappa: 0.03

NN (Inconsistency of Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 171 | 2,750 | 2,921 | 0.06
Non-Paddy Rice | 332 | 875 | 1,207 | 0.72
Sum of Rows | 503 | 3,625 | 4,128 |
User's Accuracy | 0.34 | 0.24 | |
Accuracy: 25.34%; kappa: −0.14

DT (Inconsistency of Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 346 | 931 | 1,277 | 0.27
Non-Paddy Rice | 157 | 2,694 | 2,851 | 0.94
Sum of Rows | 503 | 3,625 | 4,128 |
User's Accuracy | 0.69 | 0.74 | |
Accuracy: 73.64%; kappa: 0.26
Table 6. The relations of patches and multi-scale of features.
Table 7. Outcomes for hybrid classification.
SVM (Hybrid Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 7,360 | 2,626 | 9,986 | 0.74
Non-Paddy Rice | 339 | 42,887 | 43,226 | 0.99
Sum of Rows | 7,699 | 45,513 | 53,212 |
User's Accuracy | 0.96 | 0.94 | |
Accuracy: 94.43%; kappa: 0.80

NN (Hybrid Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 7,184 | 3,407 | 10,591 | 0.68
Non-Paddy Rice | 515 | 42,106 | 42,621 | 0.99
Sum of Rows | 7,699 | 45,513 | 53,212 |
User's Accuracy | 0.93 | 0.93 | |
Accuracy: 92.63%; kappa: 0.74

DT (Hybrid Classification) | Ground Truth: Paddy Rice | Ground Truth: Non-Paddy Rice | Sum of Columns | Producer's Accuracy
Paddy Rice | 7,397 | 2,512 | 9,909 | 0.75
Non-Paddy Rice | 302 | 43,001 | 43,303 | 0.99
Sum of Rows | 7,699 | 45,513 | 53,212 |
User's Accuracy | 0.96 | 0.94 | |
Accuracy: 94.71%; kappa: 0.81