Article

A Convolutional Neural Network Method for Rice Mapping Using Time-Series of Sentinel-1 and Sentinel-2 Imagery

1 School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 14174-66191, Iran
2 Center Eau Terre Environnement, Institut National de la Recherche Scientifique, Québec, QC G1K 9A9, Canada
* Author to whom correspondence should be addressed.
Agriculture 2022, 12(12), 2083; https://doi.org/10.3390/agriculture12122083
Submission received: 23 October 2022 / Revised: 23 November 2022 / Accepted: 2 December 2022 / Published: 5 December 2022
(This article belongs to the Special Issue Recent Advances in Agro-Geoinformatics)

Abstract

Rice is one of the most essential and strategic food sources globally. Accordingly, policymakers and planners assign this commodity a special place in the agricultural economy and in economic development. Typically, annual rice yield is estimated through sample surveys based on field observations and consultations with farmers. Studies show that these methods are error-prone, time-consuming, and costly. Satellite remote sensing imagery is widely used in agriculture because it provides timely, high-resolution data and analytical capabilities. Earth observations with high spatial and temporal resolution offer an excellent opportunity for monitoring and mapping crop fields. This study used time series of dual-polarization synthetic aperture radar (SAR) images from Sentinel-1 and multispectral images from Sentinel-2, both from ESA's Copernicus program, to extract rice cultivation areas in Mazandaran province, Iran. A novel multi-stream deep feature extraction method was proposed to take advantage of SAR and optical imagery simultaneously. The first and second streams of the proposed framework extract deep features from the NDVI time series and the original SAR images, respectively, while the third stream integrates them at multiple levels (from shallow to deep high-level features) using a channel attention module (CAM) and group dilated convolutions. The efficiency of the proposed method was assessed on approximately 129,000 in-situ samples and compared to other state-of-the-art methods. The results showed that combining the NDVI time series and SAR data can significantly improve rice-type mapping. Moreover, the proposed method outperformed the other methods, with an overall accuracy of more than 97%. Rice-type mapping based only on time-series SAR images performed better than mapping based only on the time-series NDVI dataset. Moreover, the classification performance of the proposed framework was better for the Shirodi rice type than for the Tarom type.

1. Introduction

According to the Food and Agriculture Organization (FAO), rice is one of the most essential commodities in the world [1,2,3]. More than half of the world's population depends on rice as their primary food [1]. Rice paddies account for approximately 12% of global cropland [4]. According to FAO's 2016 statistics, global rice production is around 741 million tonnes, and about 50% of the world's rice is cultivated in India and China. Historical records show that rice has been cultivated in Iran since the first century BC. Based on these statistics, Iran is a medium-scale rice producer [5].
Given the importance of rice in the household food basket, creating jobs and income for large-scale agricultural producers, and the government's willingness to replace high-yielding cultivars with other varieties, are crucial challenges for regional and local authorities [6,7,8]. According to the latest reports from governmental institutions in Iran, statistics on the area under cultivation and the annual rice harvest are obtained through field sampling and census data collection (www.maj.ir, accessed on 23 October 2022). The reliability of this approach is very low, and consequently, these results cannot be used in critical decision-making because of their uncertainties. One of the most critical gaps caused by this lack of reliable information is knowledge of the spatial distribution of rice paddies in this region [9].
Identifying and classifying the area under cultivation of various agricultural products is vital for decision-making, improved management and policymaking, and economic planning for agricultural development [10,11]. Continuous and efficient monitoring and mapping of rice paddies play an essential role in agriculture and environmental sustainability, water and food security, damage assessment, and policy and decision-making [12,13,14]. Remote sensing techniques can provide decision-makers with timely and valuable information on crop distribution, the area under cultivation, and yield potential [15,16].
For decades, optical remote sensing data have been used in different agricultural applications, including crop type classification [17], biomass and leaf area estimation [18], yield modeling [19], and disease identification [20]. These applications used data from various spectral bands and sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), SPOT, and Landsat [21].
Due to the increased availability of multispectral remote sensing images, rice mapping has become a hot research topic, and many studies have mapped rice based on optical multispectral satellite imagery. For instance, Zhang et al. [22] used a deep convolutional neural network (CNN) for paddy-rice mapping from multi-temporal Landsat 8, phenology, and thermal datasets. In the first step, a spatial-temporal adaptive fusion model was employed to fuse MODIS and Landsat data and simulate multi-temporal Landsat-like data. Then, a thresholding procedure was applied to derive the phenological variables from the Landsat-like NDVI time series, and a generalized single-channel algorithm was used to derive LST from Landsat 8. Finally, multi-temporal Landsat 8 images, combined with the phenology and LST data, were used to map rice paddies with a CNN framework.
Moreover, Jiang et al. [23] recently used the differenced NDVI (dNDVI) and a thresholding approach based on Landsat NDVI to monitor changes in rice-cultivated areas in Southern China. They used two Landsat images to identify double-cropping rice (DCR) using dNDVI and one Landsat image to identify single-cropping rice (SCR). Most studies using optical data to identify rice-cultivated areas report reasonably good accuracies. Their greatest challenge is obtaining a sufficient number of images (i.e., high temporal resolution) to correctly capture the rice cultivation pattern. Moreover, because agricultural fields are small, low spatial resolution causes spectral mixing issues, which strongly degrade the detection and mapping of small rice paddies.
According to previous studies, methods based on the rice phenology cycle employ time-series images from Landsat and MODIS. Thanks to its high temporal resolution, MODIS data have a high potential for identifying the area under rice cultivation. However, because of their low spatial resolution and the small size of the paddies, MODIS pixels may contain several different classes. This spectral mixture may lead to inaccurate identification of the areas under rice cultivation and increases uncertainty. Landsat images have a higher spatial resolution, which alleviates the mixed-pixel problem and allows a much better extraction of the areas under rice cultivation; however, due to the high cloud coverage in the study area and the low temporal resolution of Landsat, it is difficult to obtain a sufficient amount of Landsat data. With the high temporal and spatial resolution of Sentinel-2 images, identifying the area under rice cultivation with a phenology-based approach becomes possible. Nevertheless, because of adverse weather conditions, e.g., clouds and rain, in the northern region of Iran, optical data are often not suitable for rice monitoring.
SAR data have shown attractive characteristics for agricultural applications thanks to their independence from weather and illumination conditions. These advantages of SAR satellite imagery have led several studies to focus on rice mapping based on SAR data. For instance, Nguyen et al. [24] studied the potential of time series of C-band SAR data for seasonal mapping of rice-cultivated areas. They showed that Sentinel-1 high-resolution images could increase the accuracy of rice paddy classification. Similarly, Clauss et al. [25] used the Sentinel-1 time series and super-pixel segmentation for mapping rice fields around Poyang Lake in China. They applied image segmentation using the super-pixel method (i.e., SLIC) and used a phenology-based decision tree on the extracted time series to classify each segment or object as either rice or non-rice. Additionally, Bazzi et al. [26] analyzed the temporal behavior of SAR backscattering over many plots covering different crops. This analysis identified the rice paddies using several metrics derived from the Gaussian profile, the variance of the VV/VH time series, and the slope of the linear regression of the VH time series.
Despite these results, the biggest challenge with radar images and phenology-based methods for identifying rice paddies is the existence of classes with phenology cycles similar to rice, such as wetlands, which increases the error rate. Several studies have addressed this challenge by integrating optical imagery, potentially improving the discrimination among similar classes. The fusion of SAR and optical datasets can therefore be considered an essential source for rice mapping. In this regard, Torbick et al. [27] produced a land cover map, including crop, water, forest, shrub, and urban areas, with a random forest (RF) algorithm, jointly using Sentinel-1, Landsat-8 OLI, and PALSAR-2 data over Myanmar. They masked the non-crop pixels and analyzed the temporal behavior of the crop pixels in the Sentinel-1 time series. Finally, they used these analyses to extract the rice crop with a decision tree method. Their results showed that the mapped rice-cultivated areas were close to the census statistics (e.g., R² = 0.78).
Moreover, Onojeghuo et al. [28] employed RF and SVM algorithms to extract rice paddies from multi-temporal Sentinel-1A SAR data and Landsat-derived NDVI data. When the NDVI time-series data were fused with various combinations of multi-temporal polarization channels (VH, VV, and VH/VV), the overall classification accuracy improved significantly. In another study, Park et al. [29] classified paddy rice using RF and SVM machine learning algorithms based on Landsat, RADARSAT-1, and ALOS PALSAR data. They first used Landsat images to identify rice paddies with the RF and SVM algorithms, then used the SAR images to classify and extract rice paddies, and finally combined Landsat, SAR, and a 30-m DEM to classify paddy rice. The results showed that the fusion of optical and SAR time-series data achieved the highest accuracy.
SAR data can thus be an attractive alternative to optical imagery for studying rice dynamics. However, optical data can still help identify rice paddies, thanks to the absence of speckle in these images. Because of the relatively small rice paddies in this region and the need for reliable information for operational applications, high-spatial-resolution imagery from both SAR and optical sensors is essential for rice mapping and monitoring. In addition, multi-temporal satellite imagery is essential for crop monitoring in general and for rice in particular: according to the rice cultivation pattern, Earth observations with a revisit time of a few days to a week are needed to accurately follow crop conditions during the growing season.
The performance of deep learning-based frameworks has been assessed in many agricultural applications, and rice-type mapping is one of the most critical. Recently, several deep learning-based rice mapping frameworks have been proposed [30,31]. Unlike standard supervised methods, deep learning-based frameworks can automatically generate high-level spatial features from the input dataset [32]. To this end, this research proposes a novel deep learning framework for rice-type mapping. The proposed framework uses three streams for deep feature extraction: the first and second streams investigate the original SAR and optical datasets, while the third stream considers the fusion of both. This research makes the following key contributions:
(1) Presenting a novel deep learning procedure for mapping rice types based on a multi-stream CNN.
(2) Introducing an informative channel attention module for rice mapping.
(3) Utilizing point-wise and group dilated convolution layers to improve the deep feature extractor.
(4) Assessing the performance of advanced rice-type mapping methods based on deep learning frameworks and machine learning algorithms.

2. Study Area and Datasets

2.1. Study Area

Historical records show that rice has been cultivated in Iran since the first century BC, and today Iran is known as a mid-scale rice producer (FAOSTAT, 2020). Only a small percentage of the agricultural land cultivated annually in Iran is under rice cultivation. About 70% of the rice-cultivated areas are located in the two northern provinces of Gilan and Mazandaran, south of the Caspian Sea. Mazandaran province comprises two major landscapes: the slopes of the Alborz mountains and the coastal alluvial plains. The climate of the study area is moderate and humid in the summer and relatively cold and dry in the winter, and the average temperature required for rice cultivation in this region is 23.5 °C. The total annual precipitation in this province is 631 mm (www.irimo.ir, accessed on 23 October 2022). The study area of this research is located in the eastern part of Mazandaran province and covers an area of 73 km² (Figure 1).

2.2. Sentinel Imagery

In the study area, the rice growing cycle lasts 85 to 100 days from planting to harvesting. Rice is transplanted from early May to mid-June and harvested from the beginning of August through mid-September using mechanical or traditional methods (Figure 2).
According to the crop calendar of the study area, multi-temporal imagery from Sentinel-1 and Sentinel-2 acquired in 2019 was used to determine the rice-cultivated areas. Fifteen Sentinel-1 images, acquired at 12-day intervals, were used as SAR data, whereas only twelve cloud-free Sentinel-2 images were available over the study area during the growing season (Figure 2).
Sentinel-1 provides dual-pol SAR data in Interferometric Wide Swath (IW) mode at Level-1. The ground-range detected high-resolution (GRDH) products were employed, collected during the spring and summer of 2019. The pre-processing steps include radiometric calibration, speckle filtering, and terrain correction. To remove the speckle from the SAR data, an Enhanced Lee filter (3 × 3) was applied. The multi-temporal imagery from Sentinel-2 was used as the optical/multispectral data, and the Sen2Cor processor was used for Sentinel-2 correction [33,34]. All Sentinel-1 and Sentinel-2 datasets were resampled to a 10 m grid.
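For illustration only, the minimal Python sketch below implements the basic (non-enhanced) Lee filter; the study itself used SNAP's Enhanced Lee implementation, so the filter variant, window size, and noise-variance value here are assumptions for demonstration.

```python
# A minimal, hypothetical illustration of speckle filtering on a Sentinel-1
# backscatter band: local adaptive smoothing driven by the ratio of local
# variance to an assumed speckle (noise) variance.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(band: np.ndarray, size: int = 3, noise_var: float = 0.25) -> np.ndarray:
    """Basic Lee filter: output = local_mean + k * (pixel - local_mean)."""
    local_mean = uniform_filter(band, size)
    local_sq_mean = uniform_filter(band ** 2, size)
    local_var = local_sq_mean - local_mean ** 2
    # Adaptive gain k: ~0 in homogeneous areas (smooth), ~1 near edges (preserve).
    k = local_var / (local_var + noise_var)
    return local_mean + k * (band - local_mean)

# Example: despeckle one synthetic VH intensity image before stacking the time series.
vh = np.random.gamma(shape=4.0, scale=0.05, size=(512, 512))
vh_filtered = lee_filter(vh, size=3)
```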

2.3. Reference Samples

In the study area, several field surveys were conducted during the growing season to collect samples of the rice cultivation systems. The distribution of the in-situ samples over the study area is shown in Figure 3. A total of 354 samples were collected, including 259 rice fields of the Tarom-Hashemi and Shirodi varieties and 167 non-rice fields and other land covers, including wheat, canola, gardens, roads, wetlands, and rural areas (Table 1). These samples were randomly divided into three groups for training, validation, and testing of the classification results.
In addition, during the field study, local farmers were consulted to collect information on cropping practices, and statistics on the two major rice varieties cultivated in the study area were obtained from the Ministry of Agriculture. Both rice varieties are cultivated simultaneously, and the time interval between planting and harvesting is similar. The main feature distinguishing the two varieties is that one has a higher vegetation density and heavier grain than the other.
We tried to select training samples from these two varieties equally. Another critical factor in selecting training samples in the study area is the soil type across the region. In some areas, farmers add clay soils or sea sand to their paddies each year to make them more fertile and increase rice production, while in other areas the paddies have rough soil, which results in lower yields. We considered this factor when collecting training samples: areas with different soil types and qualities were identified using Ministry of Agriculture statistics and consultations with local farmers, and training samples were collected in several areas with diverse soils.

3. Rice Type Classification

An overview of the proposed rice-type classification framework is illustrated in Figure 4. As shown in the flowchart, the rice-type mapping framework is applied in three main steps.

3.1. Data Preparation

This research used the Sentinel-2 Level-1C dataset available through the Sentinel Hub of ESA's Copernicus program. Atmospheric correction was applied to these data using SNAP's Sen2Cor module, and the NDVI time series was then generated. Feature extraction can be carried out in various ways (e.g., statistical methods, spectral indices), but one of the most common approaches is combining spectral bands using simple mathematical operations. Since NDVI is simple and widely applicable for crop mapping, it was chosen among the different spectral indices. The NIR and red bands were used to calculate NDVI (see Equation (1)).
$$NDVI = \frac{NIR - Red}{NIR + Red} \tag{1}$$
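As a minimal sketch (not the authors' code), the following Python snippet shows how Equation (1) can be applied to an already pre-processed Sentinel-2 stack to build the NDVI time series; the array layout and band positions are assumptions for illustration.

```python
# Build an NDVI time series from a pre-processed Sentinel-2 reflectance cube.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Equation (1): NDVI = (NIR - Red) / (NIR + Red); eps avoids division by zero."""
    return (nir - red) / (nir + red + eps)

# s2_stack: (T, H, W, B) reflectance cube with T = 12 acquisition dates.
T, H, W = 12, 256, 256
s2_stack = np.random.rand(T, H, W, 4).astype("float32")  # synthetic example data
RED, NIR = 2, 3                                           # assumed band positions (B04, B08)
ndvi_series = np.stack(
    [ndvi(s2_stack[t, ..., RED], s2_stack[t, ..., NIR]) for t in range(T)],
    axis=0,
)  # shape (12, H, W)
```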
Furthermore, Sentinel-1 Level-1 Ground Range Detected (GRD) SAR images were used. The main pre-processing of the Sentinel-1 SAR imagery included despeckling (refined Lee filter) and geometric (terrain) correction, both conducted in SNAP software.

3.2. Proposed Multi-Stream Framework

Deep feature extraction is the most crucial part of rice-type mapping. Multi-stream deep feature extraction frameworks are widely used for analyzing bi-temporal datasets, and Siamese architectures, which have two streams, are the most common example. Our framework is inspired by the Siamese network architecture but introduces several modifications; accordingly, this research proposes a novel multi-stream deep feature extraction framework. Figure 5 shows the proposed architecture for rice-type mapping. As shown in this figure, the architecture has three streams that extract deep features from the input dataset. The first stream performs deep feature extraction from the optical NDVI time series, and the third stream investigates the SAR time series for deep feature generation, while the second stream extracts deep features by fusing the optical and SAR deep features at different abstraction levels. Next, the feature maps are fed to a global average-pooling layer to reduce their size. The final part is the classifier, which uses fully connected and soft-max layers: the deep features extracted by the three streams are concatenated, fed to a fully connected layer for further processing, and the soft-max layer then makes the decision on the input patch and maps the rice type. A simplified sketch of this multi-stream idea is given after the list below. In comparison with other CNN frameworks, the proposed architecture has the following differences:
(1) Utilizing a multi-stream framework that extracts high-level, meaningful features from the original datasets and fuses them at different feature levels.
(2) Introducing a channel attention module for informative feature extraction.
(3) Using residual, point-wise, and group dilated convolution layers for deep feature extraction.
(4) Employing a global average-pooling layer instead of a flattening layer, which reduces the number of model parameters.
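The following Keras sketch is a simplified, hypothetical rendering of the three-stream idea, not the authors' exact architecture: the patch size, filter counts, layer depths, and placement of the fusion points are assumptions, and the channel attention module is omitted here for brevity (see Section 3.2.1).

```python
# Simplified three-stream CNN: an NDVI stream, a SAR stream, and a fusion stream.
import tensorflow as tf
from tensorflow.keras import layers, Model

PATCH, T_OPT, T_SAR, N_CLASSES = 11, 12, 15, 3   # assumed patch size and time-series lengths

def conv_block(x, filters):
    """Point-wise + dilated convolutions with a residual connection."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)               # point-wise conv
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=1, activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=2, activation="relu")(y)
    return layers.ReLU()(layers.Add()([shortcut, y]))                     # residual block

ndvi_in = layers.Input((PATCH, PATCH, T_OPT))       # stream 1: NDVI time series
sar_in = layers.Input((PATCH, PATCH, 2 * T_SAR))    # stream 3: SAR time series (VV + VH)

ndvi_feat = conv_block(ndvi_in, 32)
sar_feat = conv_block(sar_in, 32)

# Stream 2: fuse optical and SAR features at a shallow and a deeper level.
fused = conv_block(layers.Concatenate()([ndvi_feat, sar_feat]), 64)
ndvi_deep = conv_block(ndvi_feat, 64)
sar_deep = conv_block(sar_feat, 64)
all_feats = layers.Concatenate()([fused, ndvi_deep, sar_deep])

pooled = layers.GlobalAveragePooling2D()(all_feats)            # instead of flattening
hidden = layers.Dense(128, activation="relu")(pooled)          # fully connected layer
out = layers.Dense(N_CLASSES, activation="softmax")(hidden)    # soft-max decision

model = Model([ndvi_in, sar_in], out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```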

3.2.1. Channel Attention Mechanism (CAM)

Each channel of a high-level feature map can be considered a class-specific response, and different semantic responses are interrelated [35]. By leveraging the dependencies between channel maps, interdependent feature maps can be emphasized and the representation of specific semantics can be improved [17]. The result is a CAM that explicitly models channel interdependence [36]. Thus, this research used a CAM to improve the generalization capability of the proposed rice mapping model. The general structure of the channel attention module is shown in Figure 6.
The output of the CAM module ($\Omega$) for an input feature map ($f$) can be formulated as follows. First, the channel attention map $X$ is computed:
$$X_{i,j} = \frac{e^{(f_i \cdot f_j)}}{\sum_{i=1}^{C} e^{(f_i \cdot f_j)}} \tag{2}$$
where $C$ is the number of channels and $X_{i,j}$ measures the impact of the $i$th channel on the $j$th channel. Finally, the output of the CAM module ($\Omega$) is obtained by multiplying by a scale parameter ($\eta$) and applying an element-wise sum with the input ($f$):
$$\Omega_j = \eta \sum_{i=1}^{C} \left( X_{i,j}\, f_i \right) + f_j \tag{3}$$
where $\eta$ is a learnable weight that is initialized at 0 and gradually updated during training.
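A small NumPy illustration of Equations (2) and (3) is given below; it is an assumption for clarity, not the authors' implementation, and the feature-map shape and the value of the scale parameter are arbitrary.

```python
# Channel attention: channel-wise similarities become attention weights that
# re-weight the input feature map, followed by a scaled residual connection.
import numpy as np

def channel_attention(f: np.ndarray, eta: float = 0.1) -> np.ndarray:
    """f: feature map of shape (C, H, W); returns the CAM output, same shape."""
    C, H, W = f.shape
    flat = f.reshape(C, H * W)                        # f_i as a flattened channel
    energy = flat @ flat.T                            # (C, C) similarities f_i . f_j
    energy -= energy.max(axis=0, keepdims=True)       # numerical stability for exp
    attention = np.exp(energy) / np.exp(energy).sum(axis=0, keepdims=True)  # Eq. (2)
    weighted = (attention.T @ flat).reshape(C, H, W)  # sum_i X_{i,j} * f_i for each j
    return eta * weighted + f                         # Eq. (3): scaled sum plus input

out = channel_attention(np.random.rand(64, 11, 11).astype("float32"))
```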

3.2.2. Convolution Layer

Convolutional layers are the fundamental building blocks of the CNN framework, and their primary function is to generate high-level deep features from the input feature data ($f$) [32]. The convolution layer considers both spatial and spectral features. The basic calculation for the $l$th convolution layer with an activation function ($\chi$) is given below [37]:
$$f^{l} = \chi\left(w^{l} \ast f^{l-1} + b^{l}\right) \tag{4}$$
where $w$ and $b$ denote the weights and bias, respectively. Equation (4) for a 2D convolution at position ($\phi$, $\lambda$) with a dilation rate ($\zeta$) can be expanded as follows [38]:
$$f_{l}^{\phi\lambda} = \chi\left(b_{l} + \sum_{m}\sum_{x}\sum_{y} W_{l,m}^{x,y}\, f_{l-1,m}^{(\phi + x \times \zeta)(\lambda + y \times \zeta)}\right) \tag{5}$$
where $m$ indexes the feature maps of the $(l-1)$th layer that are connected to the current feature map. The point-wise convolution layer uses a convolution kernel of size 1 × 1. Moreover, the group dilated convolution layer uses parallel convolution layers with different dilation rates (i.e., 1, 2) and kernel sizes. In this layer, the number of filters of each sub-layer, $\theta_n$, is determined by multiplying a ratio parameter $\tau_n$ by the total number of filters $\theta$:
$$\theta_n = \tau_n \times \theta \quad \text{s.t.} \quad \sum_{n=1}^{N} \tau_n = 1 \tag{6}$$
where $N$ is the number of sub-layers in the group convolution block (set to 3 in this study) and $\tau_n$ is the coefficient that determines the number of filters of the $n$th convolution layer. The residual block allows gradients to be back-propagated directly to earlier layers, which is particularly useful for preventing vanishing or exploding gradients.
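As an illustrative sketch only, the Keras function below follows Equation (6) by splitting a filter budget theta among N = 3 parallel dilated branches; the specific ratios, kernel size, and dilation rates beyond those mentioned in the text are assumptions.

```python
# Group dilated convolution: split theta filters across N branches by ratios tau_n,
# each branch using a different dilation rate, then regroup by concatenation.
from tensorflow.keras import layers, Model

def group_dilated_conv(x, theta: int = 64, ratios=(0.5, 0.25, 0.25), dilations=(1, 2, 3)):
    branches = []
    for tau, zeta in zip(ratios, dilations):          # Eq. (6): theta_n = tau_n * theta
        branches.append(
            layers.Conv2D(int(tau * theta), 3, padding="same",
                          dilation_rate=zeta, activation="relu")(x)
        )
    return layers.Concatenate()(branches)

# Usage inside a functional model:
inp = layers.Input((11, 11, 32))
out = group_dilated_conv(inp, theta=64)
demo = Model(inp, out)   # output keeps the spatial size and has 64 channels in total
```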

3.2.3. Comparison with Other Classification Methods

The rice mapping results were compared with the most common machine learning and deep learning methods. To this end, we implemented four methods: RF, XGBoost, 2D-CNN, and 3D-CNN. The 2D-CNN, proposed by Zhao et al. [31], includes two 2D-convolution and 2D-pooling layers and a soft-max layer for making decisions on the input. The 3D-CNN framework, proposed by Fernandez-Beltran et al. [30], utilizes 3D convolution layers; this method uses six 3D-convolution and 3D-pooling layers for feature map reduction.
Finally, a fully connected layer and a soft-max layer are used to make the decision. The input data for these methods are the stacked time series of Sentinel-1 images and of NDVI derived from Sentinel-2 images. The rice-type mapping results are assessed using accuracy metrics that include overall accuracy (OA), kappa coefficient (KC), user's accuracy (UA), producer's accuracy (PA), omission error (OE), and commission error (CE).
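For reference, the generic sketch below (an assumption, not the authors' evaluation code) shows how these metrics can be derived from a multi-class confusion matrix whose rows are reference classes and whose columns are predicted classes; the example matrix values are synthetic.

```python
# Accuracy metrics from a confusion matrix: OA, KC, UA, PA, CE, OE.
import numpy as np

def accuracy_metrics(cm: np.ndarray) -> dict:
    total = cm.sum()
    diag = np.diag(cm)
    oa = diag.sum() / total                                    # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # expected chance agreement
    kc = (oa - pe) / (1 - pe)                                  # kappa coefficient
    ua = diag / cm.sum(axis=0)                                 # user's accuracy per class
    pa = diag / cm.sum(axis=1)                                 # producer's accuracy per class
    return {"OA": oa, "KC": kc, "UA": ua, "PA": pa, "CE": 1 - ua, "OE": 1 - pa}

# Synthetic three-class example: Non-Rice, Tarom-Hashemi, Shirodi.
cm = np.array([[89000, 150, 141],
               [600, 13200, 990],
               [400, 1100, 23208]])
print(accuracy_metrics(cm))
```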

4. Experiment and Discussion

4.1. Parameter Setting

Several parameters need to be set for the rice mapping classifiers. The best parameter values were obtained by trial and error and evaluated on the validation dataset. Table 2 provides the optimal values for the machine learning and deep learning algorithms.
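As a hedged illustration, the snippet below instantiates the baseline classifiers with the optimal values reported in Table 2 using scikit-learn and xgboost; the library choice and the mapping of "rounds" to n_estimators are assumptions, since the paper does not name its implementation.

```python
# Baseline classifiers configured with the Table 2 hyperparameters.
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

rf = RandomForestClassifier(n_estimators=105, max_features=3)   # RF: 105 trees, 3 features per split
xgb = XGBClassifier(n_estimators=500, subsample=1.0,            # XGBoost: 500 rounds
                    min_child_weight=1, max_depth=5)
# Deep learning models (Table 2): dropout rate 0.1, mini-batch size 150,
# 500 iterations, initial learning rate 1e-4.
```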
In deep learning-based methods, the patch size is of utmost importance. We therefore evaluated the sensitivity of the proposed method's performance to the patch size (Figure 7).

4.2. Classification Results

Radar and optical Earth observations from the Sentinel-1 and Sentinel-2 sensors each have advantages and disadvantages for rice crop mapping. For example, a pixel-based rice map derived from Sentinel-1 would be noisy due to speckle. On the other hand, a regular SAR acquisition every few days is necessary to follow the rice cultivation pattern, and SAR data are a real advantage under the cloudy conditions of the study area. In contrast, the Sentinel-2 data are affected by clouds and shadows but are free of speckle noise.
Consequently, we integrated these two datasets to improve the classification performance and obtain efficient and reliable rice crop mapping. As shown in Figure 8, the rice maps classified by the RF and XGBoost methods contain many noisy labels. In contrast, the DL-based methods performed better thanks to their internal use of spatial and spectral features for classification.
In this area, rice paddies are mainly located near wetlands to benefit from water for irrigation. According to the field observations, more Tarom rice is cultivated in the center of the study area and more Shirodi rice in the south. The rice map obtained from our proposed method corresponds best with these field observations.
Nevertheless, as shown in Figure 9, the second factor is that large areas of one rice variety may contain a few pixels of the other variety. This may occur because the rice-type seeds are not completely separated, so a single variety does not grow uniformly across a rice field.
The performance of the rice-type mapping was assessed based on the collected field samples. Figure 10 shows the confusion matrices of the proposed and implemented classification methods. As can be seen, many pixels are incorrectly classified by the RF and XGBoost methods, whereas the error rates of the deep learning methods are much lower. Among the deep learning methods, the 3D-CNN misclassifies fewer pixels than the 2D-CNN, which may be because the 3D-CNN analyses the data in more detail. Furthermore, the proposed method misclassifies far fewer pixels than the other algorithms because its richer architecture allows a more accurate analysis.
Table 3 summarizes the statistical accuracy assessment of the rice-type mapping using the different accuracy measures. Among the non-deep learning algorithms, RF had the lowest performance (OA = 81% and KC = 0.60), while the XGBoost classifier provided satisfactory results (OA = 84% and KC = 0.65). In contrast, all deep learning algorithms achieved an OA of over 90%. In particular, the proposed method presented the highest accuracy for rice-type mapping, with an OA of 97% and a KC of 0.94. The 3D-CNN provided a slightly higher UA than the proposed method (by 0.2%) for the Non-Rice class, but underperformed for the other classes.

4.3. Ablation Analysis

Various aspects of artificial intelligence methods can be evaluated using ablation analysis. The primary purpose of this analysis is to gain insight into the effect of removing a part of the system, e.g., an attention layer, on the overall model performance. We examined the effectiveness of our proposed method through an ablation analysis with three scenarios: (S1) without the attention block, (S2) without the pooling layer, and (S3) without the fully connected layer. Table 4 shows the results of the ablation analysis for the different scenarios. Based on the numerical results, the first scenario (removing the attention block) has the largest effect on the rice mapping results, while the third scenario (removing the fully connected layer) has the smallest effect on the proposed model's performance.

4.4. Discussion

4.4.1. Accuracy Assessment and Comparison with Other Models

The rice mapping results show that the proposed method outperformed the other methods, achieving an OA of more than 97%. Furthermore, the proposed method is highly effective at detecting the rice type compared with the other methods, as its UA and PA are above 89%. Based on the numerical analysis, the proposed method classifies Shirodi better than Tarom-Hashemi, probably because more sample data were available for Shirodi than for Tarom-Hashemi.
Figure 9 shows the classified map of the proposed method in different areas alongside VHR imagery. As can be seen, our proposed method separates roads, and even the boundaries between rice paddies, very well, and it correctly identifies residential areas, water bodies, gardens, etc., as the non-rice class. The overall accuracy of the map produced by our proposed method is 97.12%, i.e., an error of approximately three percent, which can be attributed to two causes. Wetlands are the first cause, as their seasonal dynamics are somewhat similar to the rice crop calendar: at the beginning of the season, dry wetland areas are flooded to supply water to the rice paddies; the water then gradually recedes while the vegetation grows, until the area is completely drained and the vegetation reaches roughly the size of a rice plant.
Several studies on rice mapping based on various remote sensing datasets have been published in recent years. Table 5 lists some recent and advanced rice mapping methods based on remote sensing satellite images. The proposed method provides promising results compared with these other machine learning and deep learning methods.
Rice cultivation in Iran mainly covers a small region in the north, and the rice paddies in this region are typically small. Therefore, spatial resolution is essential for rice mapping in such areas. Many studies have mapped rice types based on medium-resolution remote sensing datasets (e.g., MODIS, VIIRS) [47,48] and reported promising accuracies. For example, Gumma et al. [49] reported an OA of under 90% for rice classes based on MODIS imagery.
Moreover, medium-resolution sensors have a high temporal resolution and can provide rich information for rice-type mapping, but they cannot map small rice fields. This issue is particularly evident in northern Iran, where most rice fields cover small areas. Mapping small rice fields is therefore more complex, and medium-resolution datasets are unsuitable for such areas, whereas higher-resolution satellite imagery (i.e., Sentinel-1, Sentinel-2, and Landsat) can be a valuable data source for this application.
Sample data collection is the most crucial factor in rice mapping by supervised learning methods, and collecting relevant and reliable samples is time-consuming and challenging. It is worth noting that the proposed model was trained on only about 6000 pixels. In contrast, semantic segmentation-based methods (e.g., U-Net, DeepLab) [50] require larger sample datasets in which every pixel of a patch is labeled. Thus, the proposed method can be applied to rice-type mapping with fewer samples than those methods.

4.4.2. Variables Assessment

The fusion of remote sensing datasets has improved mapping results in many applications, and this study takes advantage of time-series Sentinel-1 and Sentinel-2 images for rice-type mapping. To evaluate the effectiveness of these datasets, we assessed the performance of the proposed method under three scenarios: S1, only the Sentinel-1 time series; S2, only the Sentinel-2 NDVI time series; and S3, the combination of the Sentinel-1 and Sentinel-2 NDVI time series. Table 6 presents the results of this evaluation. The rice map derived from the Sentinel-1 data is more accurate than that from the Sentinel-2 NDVI time series. Furthermore, the combination of both datasets improved the rice-type mapping results significantly compared with either dataset alone.

4.4.3. Feature Fusion Strategies

Because the study area is rainy and cloudy for much of the cultivation season, SAR data were used to improve the crop mapping results. Integrating multisource data is the most common strategy to enhance mapping results, and this integration can be applied at different levels; feature-level fusion is widely used in many remote sensing applications. Deep learning methods extract deep features at different levels from the input dataset: the shallow features capture spatial structure (e.g., object edges), while the deep, high-level features carry semantic spectral information. This research proposed a multi-stream deep learning method that fuses deep features at both low and high levels, improving the mapping results.
Deep feature-level fusion can be applied in the first, intermediate, or last layers. We evaluated the performance of feature-level fusion at different layers: (1) first layer (concatenating the optical and SAR datasets), (2) last layer (concatenating the deep features extracted from the optical and SAR datasets), and (3) all layers (concatenating the deep features of all layers). Table 7 shows the influence of feature-level fusion at these different layers. As can be seen, fusing deep features in the last layer is better than fusing in the first layer, improving the OA by 5%. Furthermore, fusing the deep features of all layers provides the best performance, improving the rice-type mapping accuracy by more than 8% and 3% compared with the first-layer and last-layer fusion, respectively.

5. Conclusions

This study aimed to develop an operational framework for mapping and inventorying rice using remote sensing technology in Iran, where there has been a lack of information for statistics and planning in the agricultural sector. Time series of Sentinel-1 and Sentinel-2 data from the spring and summer of 2019 were analyzed to identify the areas under rice cultivation. This research proposes a novel rice-mapping framework that detects rice areas and, furthermore, identifies the rice type in two main classes, which can inform decisions that support food security. The proposed framework fuses time-series SAR and optical images for rice-type mapping. The model is built on a multi-stream deep feature extraction and fusion scheme that extracts deep features from the original datasets and fuses them in a dedicated stream. The proposed model has several advantages over currently available models: (1) high accuracy in rice-type mapping, (2) simultaneous extraction of deep features from SAR and optical datasets and their fusion at different levels, (3) the ability to distinguish the two main rice types, and (4) the ability to take advantage of the CAM, residual blocks, and multi-scale kernel convolutions.

Author Contributions

Conceptualization, M.S., S.T.S., M.H. and S.H.; methodology, S.T.S. and M.S., writing—original draft preparation, S.T.S.; writing—review and editing, S.T.S., M.H. and S.H.; visualization, M.S. and S.T.S.; supervision, M.H. and S.H.; funding acquisition, S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These datasets can be found here: [https://rslab.ut.ac.ir] (accessed on 15 January 2022).

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments on our manuscript.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Fairhurst, T.; Dobermann, A. Rice in the global food supply. World 2002, 5, 349–511, 675.
2. He, G.; Liu, X.; Cui, Z. Achieving global food security by focusing on nitrogen efficiency potentials and local production. Glob. Food Secur. 2021, 29, 100536.
3. Ma, S.; Wang, Z.; Guo, X.; Wang, F.; Huang, J.; Sun, B.; Wang, X. Sourdough improves the quality of whole-wheat flour products: Mechanisms and challenges—A review. Food Chem. 2021, 360, 130038.
4. Zou, J.; Huang, Y.; Qin, Y.; Liu, S.; Shen, Q.; Pan, G.; Lu, Y.; Liu, Q. Changes in fertilizer-induced direct N2O emissions from paddy fields during rice-growing season in China between 1950s and 1990s. Glob. Chang. Biol. 2009, 15, 229–242.
5. Saadat, M.; Hasanlou, M.; Homayouni, S. Rice Crop Mapping Using SENTINEL-1 Time Series Images (Case Study: Mazandaran, Iran). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 897–904.
6. Jahan, A.; Islam, A.; Sarkar, M.I.U.; Iqbal, M.; Ahmed, M.N.; Islam, M.R. Nitrogen response of two high yielding rice varieties as influenced by nitrogen levels and growing seasons. Geol. Ecol. Landsc. 2022, 6, 24–31.
7. Kharim, M.N.A.; Wayayok, A.; Abdullah, A.F.; Shariff, A.R.M. Effect of variable rate application on rice leaves burn and chlorosis in system of rice intensification. Malays. J. Sustain. Agric. (MJSA) 2020, 4, 66–70.
8. Lee, W.C.; Hoe, N.; Viswanathan, K.K.; Baharuddin, A.H. An economic analysis of anthropogenic climate change on rice production in Malaysia. Malays. J. Sustain. Agric. 2019, 4, 01–04.
9. Sharifi, A.; Hosseingholizadeh, M. Application of Sentinel-1 data to estimate height and biomass of rice crop in Astaneh-ye Ashrafiyeh, Iran. J. Indian Soc. Remote Sens. 2020, 48, 11–19.
10. Wei, P.; Huang, R.; Lin, T.; Huang, J. Rice Mapping in Training Sample Shortage Regions Using a Deep Semantic Segmentation Model Trained on Pseudo-Labels. Remote Sens. 2022, 14, 328.
11. Soh, N.C.; Shah, R.M.; Giap, S.G.E.; Setiawan, B.I.; Minasny, B. High-Resolution Mapping of Paddy Rice Extent and Growth Stages across Peninsular Malaysia Using a Fusion of Sentinel-1 and 2 Time Series Data in Google Earth Engine. Remote Sens. 2022, 14, 1875.
12. Alexandridis, T.K.; Ovakoglou, G.; Cherif, I.; Gómez Giménez, M.; Laneve, G.; Kasampalis, D.; Moshou, D.; Kartsios, S.; Karypidou, M.C.; Katragkou, E. Designing AfriCultuReS services to support food security in Africa. Trans. GIS 2021, 25, 692–720.
13. Zhao, Z.-Y.; Wang, P.-Y.; Xiong, X.-B.; Wang, Y.-B.; Zhou, R.; Tao, H.-Y.; Grace, U.A.; Wang, N.; Xiong, Y.-C. Environmental risk of multi-year polythene film mulching and its green solution in arid irrigation region. J. Hazard. Mater. 2022, 435, 128981.
14. Munyasya, A.N.; Koskei, K.; Zhou, R.; Liu, S.-T.; Indoshi, S.N.; Wang, W.; Zhang, X.-C.; Cheruiyot, W.K.; Mburu, D.M.; Nyende, A.B. Integrated on-site & off-site rainwater-harvesting system boosts rainfed maize production for better adaptation to climate change. Agric. Water Manag. 2022, 269, 107672.
15. Zhan, P.; Zhu, W.; Li, N. An automated rice mapping method based on flooding signals in synthetic aperture radar time series. Remote Sens. Environ. 2021, 252, 112112.
16. Wei, P.; Chai, D.; Lin, T.; Tang, C.; Du, M.; Huang, J. Large-scale rice mapping under different years based on time-series Sentinel-1 images using deep semantic segmentation model. ISPRS J. Photogramm. Remote Sens. 2021, 174, 198–214.
17. Seydi, S.T.; Amani, M.; Ghorbanian, A. A Dual Attention Convolutional Neural Network for Crop Classification Using Time-Series Sentinel-2 Imagery. Remote Sens. 2022, 14, 498.
18. Chang, Q.; Zwieback, S.; DeVries, B.; Berg, A. Application of L-band SAR for mapping tundra shrub biomass, leaf area index, and rainfall interception. Remote Sens. Environ. 2022, 268, 112747.
19. Gumma, M.K.; Kadiyala, M.; Panjala, P.; Ray, S.S.; Akuraju, V.R.; Dubey, S.; Smith, A.P.; Das, R.; Whitbread, A.M. Assimilation of remote sensing data into crop growth model for yield estimation: A case study from India. J. Indian Soc. Remote Sens. 2022, 50, 257–270.
20. Terentev, A.; Dolzhenko, V.; Fedotov, A.; Eremenko, D. Current State of Hyperspectral Remote Sensing for Early Plant Disease Detection: A Review. Sensors 2022, 22, 757.
21. Gao, F.; Zhang, X. Mapping crop phenology in near real-time using satellite remote sensing: Challenges and opportunities. J. Remote Sens. 2021, 2021, 8379391.
22. Zhang, M.; Lin, H.; Wang, G.; Sun, H.; Fu, J. Mapping paddy rice using a convolutional neural network (CNN) with Landsat 8 datasets in the Dongting Lake Area, China. Remote Sens. 2018, 10, 1840.
23. Jiang, M.; Xin, L.; Li, X.; Tan, M.; Wang, R. Decreasing rice cropping intensity in southern China from 1990 to 2015. Remote Sens. 2018, 11, 35.
24. Nguyen, D.B.; Gruber, A.; Wagner, W. Mapping rice extent and cropping scheme in the Mekong Delta using Sentinel-1A data. Remote Sens. Lett. 2016, 7, 1209–1218.
25. Clauss, K.; Ottinger, M.; Künzer, C. Mapping rice areas with Sentinel-1 time series and superpixel segmentation. Int. J. Remote Sens. 2018, 39, 1399–1420.
26. Bazzi, H.; Baghdadi, N.; El Hajj, M.; Zribi, M.; Minh, D.H.T.; Ndikumana, E.; Courault, D.; Belhouchette, H. Mapping paddy rice using Sentinel-1 SAR time series in Camargue, France. Remote Sens. 2019, 11, 887.
27. Torbick, N.; Chowdhury, D.; Salas, W.; Qi, J. Monitoring rice agriculture across Myanmar using time series Sentinel-1 assisted by Landsat-8 and PALSAR-2. Remote Sens. 2017, 9, 119.
28. Onojeghuo, A.O.; Blackburn, G.A.; Wang, Q.; Atkinson, P.M.; Kindred, D.; Miao, Y. Mapping paddy rice fields by applying machine learning algorithms to multi-temporal Sentinel-1A and Landsat data. Int. J. Remote Sens. 2018, 39, 1042–1067.
29. Park, S.; Im, J.; Park, S.; Yoo, C.; Han, H.; Rhee, J. Classification and mapping of paddy rice by combining Landsat and SAR time series data. Remote Sens. 2018, 10, 447.
30. Fernandez-Beltran, R.; Baidar, T.; Kang, J.; Pla, F. Rice-yield prediction with multi-temporal Sentinel-2 data and 3D CNN: A case study in Nepal. Remote Sens. 2021, 13, 1391.
31. Zhao, S.; Liu, X.; Ding, C.; Liu, S.; Wu, C.; Wu, L. Mapping rice paddies in complex landscapes with convolutional neural networks and phenological metrics. GISci. Remote Sens. 2020, 57, 37–48.
32. Seydi, S.T.; Hasanlou, M.; Chanussot, J. DSMNN-Net: A Deep Siamese Morphological Neural Network Model for Burned Area Mapping Using Multispectral Sentinel-2 and Hyperspectral PRISMA Images. Remote Sens. 2021, 13, 5138.
33. Main-Knorn, M.; Pflug, B.; Louis, J.; Debaecker, V.; Müller-Wilm, U.; Gascon, F. Sen2Cor for Sentinel-2. In Proceedings of the Image and Signal Processing for Remote Sensing XXIII, Warsaw, Poland, 11–13 September 2017; p. 1042704.
34. Louis, J.; Pflug, B.; Main-Knorn, M.; Debaecker, V.; Mueller-Wilm, U.; Iannone, R.Q.; Cadau, E.G.; Boccia, V.; Gascon, F. Sentinel-2 global surface reflectance level-2A product generated with Sen2Cor. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 8522–8525.
35. Ma, W.; Zhao, J.; Zhu, H.; Shen, J.; Jiao, L.; Wu, Y.; Hou, B. A spatial-channel collaborative attention network for enhancement of multiresolution classification. Remote Sens. 2020, 13, 106.
36. Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154.
37. Vrskova, R.; Hudec, R.; Kamencay, P.; Sykora, P. Human Activity Classification Using the 3DCNN Architecture. Appl. Sci. 2022, 12, 931.
38. Seydi, S.T.; Hasanlou, M.; Amani, M.; Huang, W. Oil Spill Detection Based on Multi-Scale Multi-Dimensional Residual CNN for Optical Remote Sensing Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10941–10952.
39. Zhai, P.; Li, S.; He, Z.; Deng, Y.; Hu, Y. Collaborative mapping rice planting areas using multisource remote sensing data. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 5969–5972.
40. Xu, X.; Ji, X.; Jiang, J.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W.; Cao, Q.; Yang, H.; Shi, Z. Evaluation of one-class support vector classification for mapping the paddy rice planting area in Jiangsu Province of China from Landsat 8 OLI imagery. Remote Sens. 2018, 10, 546.
41. Cao, J.; Cai, X.; Tan, J.; Cui, Y.; Xie, H.; Liu, F.; Yang, L.; Luo, Y. Mapping paddy rice using Landsat time series data in the Ganfu Plain irrigation system, Southern China, from 1988–2017. Int. J. Remote Sens. 2021, 42, 1556–1576.
42. Zhang, M.; Lin, H. Object-based rice mapping using time-series and phenological data. Adv. Space Res. 2019, 63, 190–202.
43. Liu, Y.; Xiao, D.; Yang, W. An algorithm for early rice area mapping from satellite remote sensing data in southwestern Guangdong in China based on feature optimization and random forest. Ecol. Inform. 2022, 92, 101853.
44. Lasko, K.; Vadrevu, K.P.; Tran, V.T.; Justice, C. Mapping double and single crop paddy rice with Sentinel-1A at varying spatial scales and polarizations in Hanoi, Vietnam. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 498–512.
45. Zhang, X.; Wu, B.; Ponce-Campos, G.E.; Zhang, M.; Chang, S.; Tian, F. Mapping up-to-date paddy rice extent at 10 m resolution in China through the integration of optical and synthetic aperture radar images. Remote Sens. 2018, 10, 1200.
46. Xu, L.; Zhang, H.; Wang, C.; Wei, S.; Zhang, B.; Wu, F.; Tang, Y. Paddy rice mapping in Thailand using time-series Sentinel-1 data and deep learning model. Remote Sens. 2021, 13, 3994.
47. Nuarsa, I.W.; Nishio, F.; Hongo, C.; Mahardika, I.G. Using variance analysis of multitemporal MODIS images for rice field mapping in Bali Province, Indonesia. Int. J. Remote Sens. 2012, 33, 5402–5417.
48. Onojeghuo, A.O.; Blackburn, G.A.; Wang, Q.; Atkinson, P.M.; Kindred, D.; Miao, Y. Rice crop phenology mapping at high spatial and temporal resolution using downscaled MODIS time-series. GIScience Remote Sens. 2018, 55, 659–677.
49. Gumma, M.K.; Thenkabail, P.S.; Maunahan, A.; Islam, S.; Nelson, A. Mapping seasonal rice cropland extent and area in the high cropping intensity environment of Bangladesh using MODIS 500 m data for the year 2010. ISPRS J. Photogramm. Remote Sens. 2014, 91, 98–113.
50. Xia, L.; Zhao, F.; Chen, J.; Yu, L.; Lu, M.; Yu, Q.; Liang, S.; Fan, L.; Sun, X.; Wu, S. A full resolution deep learning network for paddy rice mapping using Landsat data. ISPRS J. Photogramm. Remote Sens. 2022, 194, 91–107.
Figure 1. (a) Geographical location of the study area, and (b) a false-color composite NDVI image (R: April, G: June, and B: July).
Figure 2. Sentinel-1 and Sentinel-2 acquisition dates (2019) and phenology stages of rice crop in the Mazandaran province, Iran.
Figure 3. Distribution of the in-situ samples of ten classes for training and testing.
Figure 4. Overview of the proposed framework for rice-type mapping.
Figure 5. The proposed multi-streams framework for rice-type mapping.
Figure 6. The general structure of CAM.
Figure 7. Performance of proposed rice-mapping framework as a function of the patch size.
Figure 8. Comparison of the rice variety maps produced using the proposed classification algorithm and other methods: (a) RF, (b) XGBoost, (c) 2D-CNN, (d) 3D-CNN, and (e) the proposed method.
Figure 9. Zoomed views of the crop mapping results: (a) RF, (b) XGBoost, (c) 2D-CNN, (d) 3D-CNN, and (e) the proposed method.
Figure 10. Comparison of confusion matrices for different rice-type mapping methods: (a) RF, (b) XGBoost, (c) 2D-CNN, (d) 3D-CNN, and (e) the proposed method.
Table 1. A description of the statistical details of the reference samples.
ID | Crop Type | All Samples | Training (4.1%) | Validation (0.9%) | Test (95%)
1 | Non-Rice | 93,991 | 3854 | 846 | 89,291
2 | Tarom-Hashemi | 15,568 | 638 | 140 | 14,790
3 | Shirodi | 26,009 | 1066 | 235 | 24,708
 | Total | 135,568 | 5558 | 1221 | 128,789
Table 2. Optimal values for the classifier parameters.
Method | Description
RF | estimators = 105, features to split at each node = 3
XGBoost | rounds = 500, subsample = 1, min-child-weight = 1, max-depth = 5
Deep learning models | dropout rate = 0.1, mini-batch size = 150, iterations = 500, initial learning rate = 10⁻⁴
Table 3. Comparison of different classification algorithm accuracies. The bold values show the highest accuracies.
Method | Class | UA | PA | CE | OE | OA | KC
RF | Non-Rice | 96.5 | 92.5 | 3.5 | 7.5 | 80.8 | 0.60
 | Tarom-Hashemi | 34.1 | 54.6 | 65.9 | 45.4 | |
 | Shirodi | 68.4 | 53.9 | 31.5 | 46.1 | |
XGBoost | Non-Rice | 94.2 | 96.6 | 5.8 | 3.4 | 84.0 | 0.65
 | Tarom-Hashemi | 47.2 | 32.2 | 52.7 | 67.8 | |
 | Shirodi | 63.6 | 69.7 | 36.4 | 30.3 | |
2D-CNN | Non-Rice | 98.9 | 98.9 | 1.0 | 1.0 | 93.4 | 0.86
 | Tarom-Hashemi | 73.1 | 79.6 | 26.8 | 20.4 | |
 | Shirodi | 86.1 | 81.5 | 13.8 | 18.4 | |
3D-CNN | Non-Rice | 99.6 | 99.2 | 0.3 | 0.7 | 95.8 | 0.91
 | Tarom-Hashemi | 81.4 | 86.1 | 18.5 | 13.8 | |
 | Shirodi | 90.8 | 89.2 | 9.1 | 10.7 | |
Proposed | Non-Rice | 99.4 | 99.7 | 0.6 | 0.3 | 97.1 | 0.94
 | Tarom-Hashemi | 89.1 | 89.4 | 10.8 | 10.6 | |
 | Shirodi | 93.7 | 92.5 | 6.2 | 7.5 | |
Table 4. Ablation analysis of proposed method in rice variety mapping. The bold values show the highest accuracies.
Method | Class | UA | PA | CE | OE | OA | KC
S1 | Non-Rice | 99.2 | 99.4 | 0.7 | 0.5 | 95.5 | 0.90
 | Tarom-Hashemi | 81.3 | 85.1 | 18.6 | 14.8 | |
 | Shirodi | 90.7 | 87.4 | 9.2 | 12.5 | |
S2 | Non-Rice | 99.5 | 99.1 | 0.4 | 0.8 | 96.1 | 0.92
 | Tarom-Hashemi | 86.0 | 85.4 | 13.9 | 14.6 | |
 | Shirodi | 90.0 | 91.6 | 9.9 | 8.3 | |
S3 | Non-Rice | 99.5 | 99.6 | 0.4 | 0.3 | 96.3 | 0.92
 | Tarom-Hashemi | 86.1 | 85.0 | 13.8 | 15.0 | |
 | Shirodi | 90.6 | 91.2 | 9.4 | 8.8 | |
All Components | Non-Rice | 99.4 | 99.7 | 0.6 | 0.3 | 97.1 | 0.94
 | Tarom-Hashemi | 89.1 | 89.4 | 10.8 | 10.6 | |
 | Shirodi | 93.7 | 92.5 | 6.2 | 7.5 | |
Table 5. The comparison of numerical results from rice-type mapping is based on multisource remote sensing imagers.
Reference | Method | Dataset | OA (%)
Zhai et al. [39] | RF classifier | Sentinel-2, Radarsat-2 | 94
Xu et al. [40] | SVM classifier | Landsat-8 | 88
Cao et al. [41] | Decision tree classifier | Landsat-5/Landsat-8 | 85
Zhang and Lin [42] | Object-based | MODIS/Landsat | 92
Liu et al. [43] | RF | Sentinel-2 | 91
Lasko et al. [44] | RF classifier | Sentinel-1 | 93
Zhang et al. [45] | Object-based | Sentinel-1, Sentinel-2 | 90
Nguyen, Gruber and Wagner [24] | Decision tree classifier | Sentinel-1 | 87
Zhao, Liu, Ding, Liu, Wu and Wu [31] | Deep learning | HJ-1 A/B | 93
Xu et al. [46] | Deep learning | Sentinel-1 | 91
Wei et al. [16] | Deep learning | Sentinel-1 | 91
Proposed method | Deep learning | Sentinel-1, Sentinel-2 | 97
Table 6. Comparison time-series Sentinel-1 and Sentinel-2 based NDVI datasets in rice variety mapping. The bold values show the highest accuracies.
Method | Class | UA | PA | CE | OE | OA | KC
S1 | Non-Rice | 99.0 | 97.4 | 1.0 | 2.6 | 90.7 | 0.81
 | Tarom-Hashemi | 75.0 | 51.5 | 25.0 | 48.5 | |
 | Shirodi | 72.5 | 90.3 | 27.5 | 9.7 | |
S2 | Non-Rice | 99.3 | 97.4 | 0.7 | 2.6 | 89.5 | 0.78
 | Tarom-Hashemi | 61.6 | 56.3 | 38.4 | 43.7 | |
 | Shirodi | 72.3 | 80.9 | 27.7 | 19.1 | |
S3 | Non-Rice | 99.4 | 99.7 | 0.6 | 0.3 | 97.1 | 0.94
 | Tarom-Hashemi | 89.1 | 89.4 | 10.8 | 10.6 | |
 | Shirodi | 93.7 | 92.5 | 6.2 | 7.5 | |
Table 7. Comparison of different strategies for deep feature fusion. The bold values show the highest accuracies.
Method | Class | UA | PA | CE | OE | OA | KC
First layer | Non-Rice | 98.9 | 96.9 | 1.1 | 3.1 | 87.3 | 0.73
 | Tarom-Hashemi | 49.4 | 72.8 | 50.6 | 27.2 | |
 | Shirodi | 77.6 | 61.3 | 22.4 | 38.7 | |
Latest layer | Non-Rice | 98.9 | 98.6 | 1.1 | 1.4 | 92.3 | 0.84
 | Tarom-Hashemi | 68.9 | 76.1 | 31.1 | 23.9 | |
 | Shirodi | 83.7 | 79.3 | 16.3 | 20.7 | |
All layers | Non-Rice | 99.28 | 99.4 | 0.7 | 0.6 | 95.9 | 0.91
 | Tarom-Hashemi | 84.6 | 84.6 | 15.4 | 15.4 | |
 | Shirodi | 90.4 | 89.8 | 9.6 | 10.2 | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

