Article

Evaluation of SAR and Optical Image Fusion Methods in Oil Palm Crop Cover Classification Using the Random Forest Algorithm

by Jose Manuel Monsalve-Tellez 1,2, Jorge Luis Torres-León 1 and Yeison Alberto Garcés-Gómez 2,*
1 Colombian Oil Palm Research Center—Cenipalma, Oil Palm Agronomy Research Program, Geomatics Section, Calle 98 # 70–91, Bogotá 111121, Colombia
2 Facultad de Ingeniería y Arquitectura, Universidad Católica de Manizales, Carrera 23 No 60–63, Manizales 170001, Colombia
* Author to whom correspondence should be addressed.
Agriculture 2022, 12(7), 955; https://doi.org/10.3390/agriculture12070955
Submission received: 2 May 2022 / Revised: 29 June 2022 / Accepted: 29 June 2022 / Published: 1 July 2022
(This article belongs to the Section Agricultural Technology)

Abstract: This paper presents an evaluation of land cover accuracy, particularly regarding oil palm crop cover, using optical/synthetic aperture radar (SAR) image fusion methods through the implementation of the random forest (RF) algorithm on cloud computing platforms using Sentinel-1 SAR and Sentinel-2 optical images. Among the fusion methods evaluated were Brovey (BR), high-frequency modulation (HFM), Gram–Schmidt (GS), and principal components (PC). This work was developed using a cloud computing environment employing R and Python for statistical analysis. It was found that an optical/SAR image stack resulted in the best overall accuracy with 82.14%, which was 11.66% higher than that of the SAR image, and 7.85% higher than that of the optical image. The high-frequency modulation (HFM) and Brovey (BR) image fusion methods showed overall accuracies higher than the Sentinel-2 optical image classification by 3.8% and 3.09%, respectively. This demonstrates the potential of integrating optical imagery with Sentinel SAR imagery to increase land cover classification accuracy. On the other hand, the SAR images obtained very high accuracy results in classifying oil palm crops and forests, reaching 94.29% and 90%, respectively. This demonstrates the ability of synthetic aperture radar (SAR) to provide more information when fused with an optical image to improve land cover classification.

1. Introduction

Remote sensing is a science that studies natural phenomena occurring on the Earth’s surface using sophisticated techniques to capture information from sensors mounted on different types of aerial and orbital platforms, ranging from satellites orbiting thousands of kilometers above the Earth’s surface to unmanned aerial vehicles (UAVs) operating just a few meters above it. The last few years have witnessed significant advances in the electronics and technology of both these platforms and their onboard sensors, and in the various applications where remote sensing provides valuable information for decision-making, especially in the determination of more accurate land cover classifications [1].
Today, it is possible to obtain satellite images from several platforms covering the Earth’s entire surface. Electronic developments in sensors and satellite platforms continue to increase satellite spatial and temporal resolutions, thus reducing the time window for capturing information about a specific point on the Earth’s surface and allowing more continuous monitoring and analysis of the dynamics of different territories worldwide. This temporal window is of the utmost importance for agricultural area classification studies, especially for seasonal crops [2]. The Copernicus Program, sponsored by the European Space Agency (ESA), provides a global monitoring satellite constellation named Sentinel, which captures images of the entire Earth’s surface every six days in different regions of the electromagnetic spectrum. The Sentinel-2 satellite [3] carries the MSI instrument, which measures in the optical range, while Sentinel-1 [4] carries a SAR-C sensor that measures in the microwave range.
In [5,6,7], the authors mention the importance of using satellite information captured in different electromagnetic spectrum ranges, including those associated with the optical and microwave ranges. The joint use of both sources can provide more information on a given area than using these sources separately [8,9,10,11]. By using these two sources, it is possible to link reflectance information captured by optical satellites with information on texture/roughness, moisture content, geometry, and the orientation of the different coverages present in the study area [12]; particularly, on oil palm coverage [13,14,15]. In addition, research has been carried out that shows an increase in the accuracy of classification algorithms when both sources are integrated, demonstrating that they are complementary to each other [2,16]. By implementing optical and synthetic aperture radar (SAR) image fusion methods, it is possible to visually improve the differentiation of land cover, especially land cover with the presence of water bodies [17], and increase classification accuracy by using supervised classification algorithms [18,19]. Studies have been conducted to map oil palm plantations using a combination of optical and SAR information [20] and using SAR images only [21], demonstrating the potential for the microwave range to discriminate the areas associated with this coverage [13,22,23]. In particular, SAR images have been used in areas with high cloud cover since the information captured in the microwave range is not affected by weather conditions.
Some commonly used image fusion methods are Brovey (BR), high-frequency modulation (HFM), Gram–Schmidt (GS), and principal components (PC). Brovey (BR) is a modulation-based method that builds a new synthetic image from the concept of RGB image compositing, in which an intensity value is composed from the original image bands [24]. High-frequency modulation (HFM) transfers the high frequencies of a high spatial resolution image, containing the image detail information, to a low spatial resolution image, in this case, the multispectral image. To avoid fusion errors, mainly in areas of low radiometric correlation between the high-frequency image and the original image, a correction is applied using a low-intensity image [25]. The Gram–Schmidt (GS) process is a method patented by [26] that generates a synthetic band from the bands of the original image using the Gram–Schmidt transformation. The SAR band replaces this generated band, and the inverse transformation is then performed. The principal components (PC) method is based on the principal component transformation of the original multispectral image. The first principal component (PC1), which contains the intensity information of the image, is replaced by the SAR band. Subsequently, the inverse transformation is performed to obtain the fused image [27].
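For intuition, the Brovey idea can be sketched in a few lines of numpy. This is not the authors' implementation; the intensity definition (mean of the optical bands), the array shapes, and the function name are assumptions made for illustration.

```python
import numpy as np

def brovey_fusion(optical, sar):
    """Minimal Brovey-style fusion sketch.

    optical: float array of shape (rows, cols, bands), reflectance scaled to [0, 1]
    sar:     float array of shape (rows, cols), backscatter normalized to [0, 1]
    Each optical band is rescaled by the ratio between the SAR band and the
    per-pixel intensity, so the SAR signal modulates the optical values while
    the ratios between bands (the "color") are preserved.
    """
    intensity = optical.mean(axis=2) + 1e-6        # per-pixel intensity, avoid /0
    ratio = sar / intensity                        # modulation factor per pixel
    return optical * ratio[..., np.newaxis]        # apply to every band

# Hypothetical usage with random stand-in data of the study-area image size:
rng = np.random.default_rng(0)
fused = brovey_fusion(rng.random((611, 1192, 3)), rng.random((611, 1192)))
```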
Both images must be pre-processed to obtain a good fusion of the optical and SAR images. In particular, the “salt and pepper” speckle effect associated with SAR images must be reduced [28]. Speckle reduction is usually performed using methods based on kernels (moving windows) of different sizes [29]; however, when the optical and SAR image fusion methods are applied after such filtering, the fused synthetic image still presents considerable noise associated with the speckle effect. To obtain a more significant reduction in the speckle effect, and thus a better fused optical and SAR image, this research applies the multitemporal speckle reduction method proposed by [30], which allows a remarkable reduction in the speckle effect.
In the literature, it is possible to find different methods for land cover classification on satellite images [31] based on machine learning (ML) algorithms, such as the support-vector machine, decision tree, random forest, and k-nearest neighbour, and others based on deep learning (DL) algorithms, such as the fractional Gabor convolutional network, spatial–temporal invariant contrastive learning, and subspace clustering for hyperspectral images. Deep learning (DL) algorithms have been studied extensively in recent years and have shown great potential for classification accuracy. However, the computational cost of implementing these methods is extremely high, requiring supercomputers or high-performance servers to process the large volume of data needed. Meanwhile, studies show that machine learning (ML) algorithms can obtain high accuracies in land cover classification [32,33] with modest computational costs compared with DL. Among these is the random forest (RF) method, a machine learning algorithm widely used as a supervised classification technique [34] and proposed by [35]. This algorithm consists of a combination of tree predictors based on the principle of decision trees, which are analytical methods that explore the different possibilities for a given decision within a data set [36]. The property that makes this model so robust in its predictions is the large number of uncorrelated trees that are combined, which significantly reduces individual errors. This is why random forest obtains very high accuracies compared with other ML classification methods [33]. In the literature, it is possible to find research focused on determining different land covers through satellite images from various sensors. Among these, studies stand out that apply the random forest classifier to fusions of optical and SAR images to improve the detection of permeable and non-permeable surfaces [19] and that are oriented to change detection analysis [37], evidencing better overall crop mapping accuracies when using this classifier over others [38,39] and demonstrating the potential of this algorithm for the classification of multitemporal (multi-data frequency) SAR images, such as the Sentinel-1 images. In addition, studies conducted by [40,41] emphasize the efficiency of this algorithm in the Google Earth Engine (GEE) cloud computing environment when analyzing large territories using only optical imagery, as well as research combining optical and SAR imagery to improve classifier accuracies [14,42].
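As a toy illustration of the ensemble principle (random stand-in data, and scikit-learn rather than the GEE implementation used later in this paper), a pixel-wise RF classification might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pixel samples: rows are pixels, columns are band values.
rng = np.random.default_rng(42)
X = rng.random((3000, 11))           # e.g., 11 Sentinel-2 band values per pixel
y = rng.integers(0, 6, size=3000)    # 6 land cover classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# An ensemble of decorrelated trees; averaging their votes reduces the
# individual errors of each tree.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print(f"Test accuracy: {rf.score(X_test, y_test):.3f}")
```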
Globally, negative impacts on natural ecosystems due to the growth of the agricultural sector have been evidenced [11]; within this sector, growing demand for oil has boosted the development of the oil palm agro-industry [13]. Colombia is the largest palm oil producer in the Americas and the fourth largest in the world [43]. The growth of the crop could mainly impact the savanna region of the country [44]. Therefore, it is essential to find new methodologies based on geospatial Earth observation technologies that allow the agricultural sector to establish a basis and criteria for achieving productive sectoral organization and, at the same time, environmentally sustainable development [45,46], as proposed in the elaboration of agroecological zoning plans.
Given that the official land cover information in Colombia is produced at coarse scales (1:100,000) [47], the resulting maps show errors in vegetation cover interpretation, mainly when dealing with oil palm and when discriminating forest areas and tall vegetation, so it is not possible to conduct detailed studies within a given area. In addition, the satellites used to produce these land cover maps have low temporal resolutions, and the high percentage of cloud cover over the national territory hinders the availability of good quality images, thus delaying the updating of land cover maps.
This research evaluates the delimitation of land covers, focused on oil palm crops, using optical and SAR image fusion methods, which have been little studied until now. Optical and SAR image fusion methods allow for more accurate and detailed land cover classification [18,19], enabling improved monitoring of the dynamics of change associated with oil palm cultivation in Colombia. In addition, in the literature, fusion methods and the integration/synergism of optical and SAR images are usually evaluated separately, without comparing the two ways of linking this information. This study therefore evaluates land cover classification on the optical image, the SAR image, and the fused images, and also includes the image integration/synergism (band stack).
It is worth mentioning that remote sensing plays a vital role in carrying out studies over large extents of territory and in efficiently monitoring dynamic sectors, such as agriculture [40]. Therefore, this research aims to improve land cover classification, especially oil palm crop cover, using optical/SAR image fusion methods through the implementation of a random forest (RF) algorithm on cloud computing platforms.

2. Materials and Methods

The project was conducted in the central oil palm region of Colombia, where the Palmar de la Vizcaína Experimental Station (CEPV) is located (centered on the WGS84 geographic coordinates 6°59′13″ N, 73°40′62″ W). This station is managed by the Centro de Investigación en Palma de Aceite—Cenipalma and is located in the municipality of Barrancabermeja, Santander, Colombia (Figure 1).
Currently, oil palm crops in Colombia are distributed throughout four palm zones, of which the central palm zone is the largest, corresponding to 48% of the total area planted in the country [48]. The study area is a rural zone with a predominance of forest cover, oil palm crops, pastures, low vegetation, bare soils, and water bodies. It is characterized as a tropical zone with an average altitude of 90 m above sea level, a mean annual temperature of 27 °C, mean annual precipitation of 1600 mm, relative humidity above 80%, and gentle slopes of less than 10%.
Figure 2 presents the implemented methodology. The cloud computing platform used in this research was GEE, which contains the collection of archival images from the Sentinel satellite constellation, both optical and SAR. For the SAR image, the multitemporal speckle reduction methodology proposed by [30] and implemented in GEE by [49] was used. The “salt and pepper” speckle effect is present in all SAR images and is caused by the interaction of radar waves with different types of coverages [50,51,52]. The multitemporal speckle reduction methodology uses several images with acquisition dates as close together as possible to reduce this noise. Edge noise is reduced by eliminating low-intensity noise and invalid data within the image. Subsequently, the decibel (dB) values of each image within the established period are averaged.
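In the GEE Python API, the temporal averaging step might look like the following sketch. The area of interest and date window are illustrative, not the study's actual parameters:

```python
import ee
ee.Initialize()

# Hypothetical area of interest near the CEPV station (coordinates illustrative).
aoi = ee.Geometry.Rectangle([-73.75, 6.90, -73.60, 7.05])

# Sentinel-1 GRD scenes are distributed in dB; averaging a short time window
# of co-located acquisitions attenuates the "salt and pepper" speckle.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2021-01-01', '2021-03-31')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select('VH'))

vh_mean = s1.mean().clip(aoi)   # temporal mean of the dB values
```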
The size of the images within the study area, for both the Sentinel-2 optical image and the SAR image, is 1192 columns by 611 rows, for a total area of 71.83 km2. Once the Sentinel-2 optical image and the Sentinel-1 SAR image collection were established, the bands associated with the optical and near-infrared spectrum, from B2 to B9, and the shortwave infrared bands B11 and B12 were selected. For the SAR case, the vertical emission and horizontal reception (“VH”) polarization band was selected. This band was chosen considering the interaction of radar waves with the different types of geometries present in the study area [13]. These waves are highly influenced by crop canopies, especially in oil palm crops, where the radar signal is more sensitive in the cross-polarization bands [53]. At the 5.55 cm operating wavelength of the Sentinel-1 C-band SAR sensor [5], the “VH” polarization presents less reflection when interacting with oil palm leaves than with the leaves of forest trees, thus allowing a more significant differentiation between these two coverages, as shown by [54]. A normalization process was carried out on the decibel values of the SAR image backscatter, assigning a minimum value of 0 and a maximum value of 1, to make them comparable with the reflectance values of the Sentinel-2 optical image [55].
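As a quick illustration, this min-max normalization can be sketched in a few lines of numpy (the function name and the choice of taking the extremes over the whole band are our assumptions):

```python
import numpy as np

def minmax_normalize(band):
    """Rescale SAR backscatter (dB) to [0, 1] so it becomes comparable
    with Sentinel-2 reflectance values. Extremes are taken over the
    whole band, ignoring NaN pixels."""
    bmin, bmax = np.nanmin(band), np.nanmax(band)
    return (band - bmin) / (bmax - bmin)
```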
Next, the image fusion methods were applied: Brovey (BR) [24], high-frequency modulation (HFM) [25], Gram–Schmidt (GS) [26], and principal components (PC) [27]. Each of these fusion methods was evaluated with five commonly used statistical indicators. The root mean square error (RMSE) measures the error between two data sets, i.e., the values of the fused image versus the original image (in this case, the optical image). The standard deviation (SDT) estimates image contrast, where a high contrast represents greater information available in the image. The peak signal-to-noise ratio (PSNR) measures the noise present in the fused image compared with the original image, where a high value represents lower noise in the fused image. The correlation coefficient (CC) evaluates the degree of correlation, where a high correlation means high similarity with the original image (in this case, the optical image). Finally, the relative global synthesis error (ERGAS) measures the overall quality of the image fusion process, taking into account the total number of bands of the fused image and the pixel sizes of the optical image and the image to be fused (in this case, the SAR image).
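A possible numpy sketch of the five indicators follows. This is one common formulation, not necessarily the exact one used by the authors; averaging the per-band RMSEs and computing a single global CC are our conventions:

```python
import numpy as np

def fusion_quality(original, fused, h_over_l=1.0):
    """Sketch of the five indicators for one original/fused image pair.

    original, fused: float arrays of shape (rows, cols, bands)
    h_over_l: ratio of fused to original pixel size (1.0 here, since the
              Sentinel-1 and Sentinel-2 images share the same grid).
    """
    diff = fused - original
    rmse_b = np.sqrt((diff ** 2).mean(axis=(0, 1)))           # per-band RMSE
    rmse = rmse_b.mean()
    sdt = fused.std()                                          # contrast proxy
    psnr = 20 * np.log10(original.max() / rmse)                # higher = less noise
    cc = np.corrcoef(original.ravel(), fused.ravel())[0, 1]   # global correlation
    mean_b = original.mean(axis=(0, 1))
    ergas = 100 * h_over_l * np.sqrt(((rmse_b / mean_b) ** 2).mean())
    return dict(RMSE=rmse, SDT=sdt, PSNR=psnr, CC=cc, ERGAS=ergas)
```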
Once the fused images were generated, the random forest (RF) classification algorithm was implemented using a parameter of 100 decision trees, without limiting the number of nodes per leaf in each tree. This algorithm was implemented to determine the land cover types in the study area. This process was performed on the Sentinel-2 optical image, on the Sentinel-1 SAR image, on each of the images fused by the methods mentioned above, and on the image formed by the stack of Sentinel-2 optical and Sentinel-1 SAR bands.
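In the GEE Python API, this classification step could be sketched as follows. The asset path of the training points, the 'landcover' property name, and the date window are illustrative assumptions, not taken from the authors' code:

```python
import ee
ee.Initialize()

# Hypothetical inputs: a Sentinel-2 composite over the study area and
# labeled ground truth points with a 'landcover' property.
aoi = ee.Geometry.Rectangle([-73.75, 6.90, -73.60, 7.05])
image = (ee.ImageCollection('COPERNICUS/S2_SR')
         .filterBounds(aoi)
         .filterDate('2021-01-01', '2021-03-31')
         .median()
         .select(['B2', 'B3', 'B4', 'B5', 'B6', 'B7',
                  'B8', 'B8A', 'B9', 'B11', 'B12']))
points = ee.FeatureCollection('users/example/training_points')

# Sample band values at the training points and train the classifier.
samples = image.sampleRegions(collection=points,
                              properties=['landcover'], scale=10)
rf = (ee.Classifier.smileRandomForest(numberOfTrees=100)
      .train(features=samples, classProperty='landcover',
             inputProperties=image.bandNames()))
classified = image.classify(rf)
```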
Regarding the SAR image, both the vertical–vertical (“VV”) and vertical–horizontal (“VH”) polarization bands were selected. Six SAR indices commonly implemented in the literature were also added to increase the classification accuracy [56]. SAR indices allow us to obtain more information on land cover characteristics. Their application significantly increases the identification and differentiation of land covers, such as oil palm crop areas and natural forests [57,58]. The SAR indices implemented were: division ratio (DR) (1), cross-ratio (CR) (2), difference (DIF) (3), normalized difference bands (NDB) (4), radar vegetation index (RVI) (5), and radar square index (RSI) (6).
Division Ratio: DR = VV / VH (1)
Cross-Ratio: CR = VH / VV (2)
Difference: DIF = VV − VH (3)
Normalized Difference Bands: NDB = (VV − VH) / (VV + VH) (4)
Radar Vegetation Index: RVI = 4 × VH / (VV + VH) (5)
Radar Square Index: RSI = VV² / VH (6)
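As a compact reference, the six indices can be computed with numpy as below. This sketch assumes both bands are given as arrays on the same scale; the small epsilon guarding against division by zero is our addition:

```python
import numpy as np

def sar_indices(vv, vh, eps=1e-6):
    """Compute the six SAR indices (Equations (1)-(6)) from the
    'VV' and 'VH' backscatter bands (arrays of equal shape)."""
    return {
        'DR':  vv / (vh + eps),                # division ratio (1)
        'CR':  vh / (vv + eps),                # cross-ratio (2)
        'DIF': vv - vh,                        # difference (3)
        'NDB': (vv - vh) / (vv + vh + eps),    # normalized difference bands (4)
        'RVI': 4 * vh / (vv + vh + eps),       # radar vegetation index (5)
        'RSI': vv ** 2 / (vh + eps),           # radar square index (6)
    }
```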
In addition, textural indices from the gray-level co-occurrence matrix (GLCM) [59] were calculated using a 5 × 5 pixel moving window (kernel) for both SAR image polarization bands, “VV” and “VH”. These textural indices have been widely reported in the literature to increase the accuracy of SAR image classification, providing more information associated with the interaction of radar waves with different types of coverages [14,38,39,60]. The GLCM textural indices implemented were: angular second moment, average, contrast, correlation, variance, homogeneity, entropy, and dissimilarity.
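A scikit-image sketch of the per-window GLCM computation is shown below for a subset of the features. A full texture image would repeat this for every pixel's 5 × 5 neighborhood; entropy is derived from the GLCM directly since scikit-image's graycoprops has no guaranteed entropy property, and the quantization to 32 gray levels is our assumption:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=32):
    """GLCM features for one 5 x 5 window of a quantized SAR band."""
    glcm = graycomatrix(window, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop)[0, 0]
             for prop in ('ASM', 'contrast', 'correlation',
                          'homogeneity', 'dissimilarity')}
    p = glcm[:, :, 0, 0]
    feats['entropy'] = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return feats

# Hypothetical usage on one quantized 5 x 5 window (values in [0, levels)):
rng = np.random.default_rng(1)
window = rng.integers(0, 32, size=(5, 5), dtype=np.uint8)
print(glcm_features(window))
```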
In total, 28 bands were used by the random forest (RF) classification algorithm for the SAR imagery (the 2 original bands, “VV” and “VH”, 6 SAR indices, and 20 GLCM textural indices); 11 bands were used for the optical imagery and the optical/SAR fusion methods (“B2”, “B3”, “B4”, “B5”, “B6”, “B7”, “B8”, “B8A”, “B9”, “B11”, “B12”); and 39 bands (the union of the above) were employed for the optical and SAR band stack. Subsequently, an adaptive median filter with a 3 × 3 pixel moving window (kernel) was applied to each classification to homogenize the result.
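The homogenization step can be approximated as follows; a plain scipy median filter is used here as a simple stand-in for the adaptive variant, and the class map is random stand-in data:

```python
import numpy as np
from scipy import ndimage

# Hypothetical class map; a 3 x 3 median filter removes isolated
# misclassified pixels and homogenizes the classified result.
rng = np.random.default_rng(2)
class_map = rng.integers(0, 6, size=(611, 1192))
smoothed = ndimage.median_filter(class_map, size=3)
```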
Below is an analysis of the accuracies obtained by the classifier on each of the images. The statistical indicators evaluated were the user accuracy (UA), which is the probability that a pixel was correctly included in a known class, and the producer accuracy (PA), which is the probability that a pixel of a known class was assigned to the correct category; from these, the commission error (CE) and omission error (OE) were estimated, respectively, understood as the percentage left to reach 100% accuracy. Finally, the overall accuracy (OA) and the kappa coefficient (KC) were calculated, reporting the overall rate of pixels correctly classified in the classes evaluated. For this, 300 ground truth points were located in the study area, as shown in Figure 3, with 70 points for each of the classes being assessed.
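These indicators all follow from the confusion matrix; a scikit-learn sketch (with random stand-in labels in place of the actual ground truth) could be:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score, accuracy_score

# Hypothetical reference labels and classified values at the ground truth points.
rng = np.random.default_rng(3)
y_true = rng.integers(0, 6, size=300)
y_pred = rng.integers(0, 6, size=300)

cm = confusion_matrix(y_true, y_pred)     # rows: reference, columns: classified
ua = np.diag(cm) / cm.sum(axis=0)         # user accuracy per class; CE = 1 - UA
pa = np.diag(cm) / cm.sum(axis=1)         # producer accuracy per class; OE = 1 - PA
oa = accuracy_score(y_true, y_pred)       # overall accuracy
kc = cohen_kappa_score(y_true, y_pred)    # kappa coefficient
```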

3. Results

Table 1 shows the results of the statistical indicators: root mean square error (RMSE), standard deviation (SDT), peak signal-to-noise ratio (PSNR), correlation coefficient (CC), and relative global synthesis error (ERGAS). As mentioned above, for the RMSE, CC, and ERGAS statistical indicators, the original Sentinel-2 optical images and the resulting fused image were compared.
An RMSE close to zero means that the difference between the values of the optical image and the fused image is very low. A CC close to one indicates very high similarity between the values of the optical image and the fused image. The Brovey (BR) method achieved the highest RMSE values. However, it obtained a high correlation of 0.7 with the optical image. This means that it significantly changed the values of the image while preserving the distribution of the data. The high-frequency modulation (HFM) method showed the lowest RMSE and high CC values, being very similar to the original optical image. A high SDT value means a high dispersion of the data in relation to its arithmetic mean, showing that the image has a high level of contrast. An image with a high level of contrast presents a greater wealth of information [28] that can be used to better differentiate coverages, as observed in the Brovey (BR) method.
A high PSNR value means less noise in the fused image. The highest PSNR was achieved by the HFM method and the lowest by the Brovey (BR) method. The Gram–Schmidt (GS) and principal components (PC) methods presented similar mean values. A low ERGAS value means a better overall fusion result considering all the bands of the fused image. The high-frequency modulation (HFM) method had the best ERGAS value, while the Brovey (BR) method presented the highest value in this indicator. The Gram–Schmidt (GS) method showed CC values similar to the Brovey (BR) method, with low values in RMSE, SDT, and ERGAS. The principal components (PC) method, despite obtaining good quality values in RMSE, SDT, and ERGAS, had a very low CC of 0.23 compared with the optical image.
In general, the high-frequency modulation (HFM) method achieved the best fusion evaluation indicators, which means that it did not present significant distortions during the fusion process compared with the original optical image. The Brovey (BR) method presented the lowest indicators, showing significant distortions compared with the original optical image. However, it did obtain a high correlation with the original image.
Figure 4 shows the histograms of the results of the pixel values obtained by the fusion of optical/SAR images implementing the BR, HFM, GS and PC fusion methods on Sentinel-1 and Sentinel-2 images.

3.1. Analysis of Land Cover Spectral Signatures

Different land cover types in the study area were identified and training samples were generated (Figure 5). The classes of interest identified were: water body, forest, oil palm, grassland, bare soil, and low vegetation. The training samples were taken in homogeneous zones, avoiding the edges of division between two types of cover so as not to add noise to the sample.
Figure 5A shows the spectral signatures of each coverage identified on the Sentinel-2 optical image. The spectral signatures showed normal behavior at all wavelengths, except for water bodies, which showed behavior very similar to that of vegetation. This is because the water bodies within the study area are lagoons undergoing advanced eutrophication processes, so the reflectance values in the NIR range tend to be high, resembling those of vegetation coverage.
The spectral signatures of the different coverages for each implemented fusion method are presented below. In the case of the BR method, shown in Figure 5B, the image values underwent the most significant changes, being increased while preserving the trend of the normal behavior of the spectral signature of each coverage in the optical image. The only notable differences were in the behavior of the bare soil cover, which presented lower values in the fused image, and in the oil palm cover, which showed differential changes in the visible spectral range with respect to the forest cover. Figure 5C shows the result of the HFM method, in which no significant differences were found in the values of the fused image compared with the optical image. For all coverages, there was a very high similarity in the behavior of the spectral signature and its range of values.
The result achieved by the GS method in Figure 5D shows, as in the case of the BR method, a notable decrease in the values of the bare soil cover at wavelengths above the red spectral range (705 nm). In addition, a more significant differentiation was found between the coverages associated with oil palm crops and forests in the visible spectral range (490–700 nm). It should be noted that the range of image values was preserved, being very similar to the pixel values of the optical image. Finally, the result of the PC method is shown in Figure 5E, which has similarities with the HFM method in terms of its behavior and range of values relative to the optical image, presenting significant differences only in the bare soil coverage and an increase in the differentiation of coverages in the SWIR spectral range.

3.2. Random Forest (RF) Land Cover Classification

The random forest classification algorithm was implemented with GEE in a cloud computing environment, with the generation of 50 decision trees as a classifier parameter. This classifier was implemented on the Sentinel-2 optical image, the Sentinel-1 SAR image, and the optical/SAR images fused with the BR, HFM, GS, and PC methods. Figure 6 shows the results obtained for each of the cases.
Figure 6A shows the result of the classification process using the Sentinel-2 optical image. Compared with the SAR-based classification, a finer delimitation can be observed between the different land covers of the study area, such as access roads and areas of bare soils. A remarkable similarity is observed in the classification of forest cover in relation to that obtained with the SAR image. In this case, a greater area was observed in the classification of the cover associated with grassland and a reduction in low vegetation. Significant confusion can be observed between covers associated with oil palm areas and forest areas, where the differentiation of these two coverages presented the most significant OE, along with grassland cover.
The result of the land cover classification using the Sentinel-1 SAR image is shown in Figure 6B. As can be seen, using only the SAR image, it is possible to discriminate between different types of coverages. However, there are classification errors between water bodies and bare soils, and between low vegetation zones, grassland, and some wooded areas. Through the implementation of the adaptive median filter with a 5 × 5 pixel moving kernel window and the reduction in “salt and pepper” noise using the multitemporal speckle method, a high attenuation of the mottling produced by this effect was observed, with remarkable clarity in the differentiation of the different coverages present in the study area. A high level of precision was evidenced in identifying the oil palm cover and differentiating it from the forest cover.
The results obtained in the classifications of the fused optical/SAR images are detailed below. In the case of the BR method, shown in Figure 6C, the grassland, bare soil, and low vegetation coverages performed similarly to the optical image. However, a notable difference was observed in the oil palm and forest cover classification. In this case, the oil palm cover was classified better, avoiding the CE and OE with the forest mentioned above. The resulting oil palm classification was similar to that evidenced using only the SAR image. The fused image, despite including the information provided by the SAR image, did not show any noise caused by the speckle effect.
In the case of the classification of the image fused by the HFM method, shown in Figure 6D, a very high similarity was found with the result obtained from the optical image. This is because, as mentioned above in the results obtained by this fusion method, the correlations between the values of the fused image and the original image were very high. A difference from the previous case was evidenced in the oil palm coverage: a more significant similarity to the result obtained using only the optical image is appreciated, demonstrating a lower contribution of the SAR image. In this case, a lower saturation is observed in the classification of oil palm coverage compared with the one obtained via the optical image. However, there are still areas with significant errors confusing this coverage with forest areas.
The classification of the image fused by the GS method is shown in Figure 6E, revealing that the areas classified as oil palm crop are very similar to those obtained using the high-frequency modulation (HFM) method; however, significant commission and omission errors were found between the oil palm and forest areas. In addition, an increase in the areas classified as forest was observed compared with the fusion methods previously analyzed.
In the image classified with the PC method, shown in Figure 6F, it was even more evident that the increase in the classification of forested areas presented a higher saturation, making it difficult to differentiate this coverage from others, mainly in grassland areas. A high similarity was found in the classification of oil palm coverage with the previous method. Although a very high similarity was found between the histogram of values of the image fused by this method and the values of the Sentinel-2 optical image, shown above in Figure 4, very significant changes in the result of the land cover classification were found.
Figure 6G shows the optical/SAR image stack results. In this case, an adequate low vegetation and grassland cover classification, similar to the result obtained using the optical image, can be observed. A good classification of oil palm coverage, similar to that from the SAR image, is also observed, without showing the confusion seen in both the optical image and the optical/SAR image fusion methods. There is a significant differentiation between the different types of coverages in the study area that is very close to reality.

3.3. Accuracy Evaluation of the Random Forest (RF) Classifier

Figure 7 and Figure 8 show the user and producer accuracies calculated for each land cover evaluated. The Sentinel-1 SAR image achieved an OA of 70.48% with a KC of 0.65. It was observed that the forest coverage had a high OE of 25%, indicating that many pixels were not correctly assigned to the category to which they belonged. A high degree of confusion was found between the water body and bare soil coverages because the SAR image shows a very high similarity between them. The highest CEs were found for water bodies (54.71%) and bare soil (54.29%). Meanwhile, the coverage associated with oil palm obtained the highest UA, corresponding to 94.29%.
As for the Sentinel-2 optical image classification, an OA of 74.29% with a KC of 0.69 was found. Compared with the KC of the SAR image, a difference of 0.05 was calculated. This was similar to that obtained by [2], where the classification of their study areas also showed higher values for the optical image. Low UA values were found for the bare soil cover, corresponding to 51.43%, presenting significant confusion with grassland and low vegetation. Confusion was also found between the classification of low vegetation and grassland and that of water bodies. This may be because, as mentioned above, the water bodies within the study area are lagoons with advanced eutrophication processes, so their reflectance values in the infrared spectral range resemble those of vegetation cover. In the classification of oil palm coverage, there was confusion with some areas of forest and low vegetation, so its UA was only 83.33%, equivalent to a decrease of 10.96% with respect to the classification of this coverage using SAR imagery.
The BR method classification reached an OA of 77.38% with a KC of 0.73, an OA 3.10% higher than that achieved with the optical image alone. A significant increase in UA was also found in the oil palm and forest coverages, being 91.43% and 84.29%, respectively, close to those obtained in the SAR image classification. A high OE was found in the low vegetation cover, presenting more significant confusion with the water body and bare soil coverages. There was a decrease equivalent to 21.43% in the classification accuracy for water bodies compared with the optical image. The classification of the oil palm, bare soil, low vegetation, and grassland coverages showed better results than using only the optical image.
The HFM method achieved the best OA of the four fusion methods implemented, reaching 78.10%, 3.81% higher than in the case of the optical image, with a KC of 0.74. In this case, the oil palm and forest coverages both obtained a UA of 81.43%, slightly lower than those obtained with the BR method. As in the case of the BR method, the most significant classification error was obtained in the cover associated with bare soils, with a low UA of 58.57%. Confusion was found between bare soils and grassland and low vegetation, as well as with some areas of water bodies and forests. Better accuracies were found in the classification of water bodies, equivalent to those found in the optical image. As in the previous case, a high OE of 38.37% was obtained in the low vegetation cover, with confusion mainly with the bare soil and water body coverages.
In the case of classification by the GS method, an OA of 72.14% was found with a KC of 0.67, which is 2.14% below the OA achieved by the optical image. In this case, there was confusion between the oil palm cover and some forest and low vegetation areas, with a UA of 84.29% for this cover, a value very close to the one found in the classification of the optical image. A good classification of the grassland coverage was evidenced, reaching 91.43%. However, it presented a high OE of 30.43%, being confused mainly with the bare soil and low vegetation coverages. Again, low UA values were found for the bare soil and low vegetation coverages, with CEs of 45.71% and 35.71%, respectively, meaning that many of the pixels assigned to these two coverages actually belonged to low vegetation in the case of bare soil, and to oil palm and forest in the case of low vegetation.
The classification using the PC method resulted in an OA of 66.19% with a KC of 0.59, an OA 8.10% below that obtained with the optical image. Compared with the GS method, the OA decreased by 5.95%. The oil palm and forest coverages showed UA values of 77.14% and 64.29%, respectively, lower than those obtained by both the optical and the SAR images. For the water body coverage, a UA of 78.57% was achieved, as was also the case for the optical image and the HFM method. The forest coverage presented the lowest UA, corresponding to 64.29%, with significant confusion with the low vegetation and grassland coverages and a high OE of 43.75%. Unlike the other fusion methods, high confusion was found in the classification of the low vegetation and grassland coverages, with UA values of 58.57% and 68.57%, respectively, and OEs of 50.50% and 41.43%, respectively.
Finally, the optical/SAR image stack classification gave the highest OA, reaching 82.14% with a KC of 0.79. This result corresponds to an increase of 11.67% over the SAR image, 7.86% over the optical image, and 4.05% over the HFM method. The oil palm and forest classifications presented high UA values of 92.86% and 82.86%, respectively, very close to those from the SAR image and the BR method. The bare soil and low vegetation coverages showed significantly higher UA compared with the other methods. The low vegetation cover presented a high OE of 42%, being confused mainly with forest cover and water bodies. The grassland cover reached a high UA of 90%, similar to the BR, HFM, and GS methods.
Figure 9 shows the OA and the KC obtained for each image analyzed.
The highest values were achieved by the optical/SAR image stack, with an OA of 82.14% and a KC of 0.79, equivalent to an increase in OA of 11.67% over the SAR image and 7.86% over the optical image. The OA obtained in the Sentinel-2 optical image classification was similar to, though slightly lower than, those found with the HFM and BR methods, by 3.81% and 3.10%, respectively. The OA found when using the optical and SAR data sources together was similar to that found by [6], which was close to 80%.

4. Discussion

Several studies refer to the joint use of different sources of satellite data, both optical and SAR, since the information acquired by the two sources is complementary [2] and allows an increase in the accuracy of the classification of different land covers through the implementation of supervised classification methods such as the random forest (RF) algorithm. These approaches have been applied in other contexts, such as improving cartography for large-scale crop mapping [42,61], improving water body visualization using optical and SAR image fusion techniques [19], and oil palm crop mapping [14], among others.
A high level of accuracy was found in the classification of coverages based on the SAR image, corroborating the research developed by [14], which evidenced an increase in user accuracy (UA) compared with optical image classification. As mentioned in [57,62], the specific classification of the cover associated with oil palm crops is highly reliable when using only SAR images. However, the classification of the bare soil and vegetation coverages was confusing, corroborating the results obtained by [54], where the main classification confusions were also in these two coverages. Regarding the identification of water bodies, the classification through the SAR image presented confusion with bare soils. As shown by [13,63], this is associated with the interaction of SAR waves with water bodies, which, as mentioned above, is highly influenced by the specular effect and the dielectric constant of the surface, factors that generally contribute to differentiating this cover from others.
Regarding the classification through optical imagery, as shown by [3,63], the capacity to discriminate coverages based solely on SAR images does not attain the high accuracies that can be achieved using optical information of the study area. In addition, research such as that developed by [56,64,65] shows that the overall accuracy (OA) using only the optical image is superior to that found using only the SAR image. However, in the classification of oil palm coverage, a significant decrease in user accuracy was observed compared with that obtained with the SAR image alone, which corroborates the result obtained by [14], where higher accuracies were found in the classification of oil palm areas using SAR images compared with optical images.
In the image fusion methods implemented, an increase in the overall accuracy (OA) was evidenced for the high-frequency modulation (HFM) and Brovey (BR) methods, showing the potential of using complementary information sources to improve land cover classification. In the case of crop mapping, studies developed by [10,16,42,66] demonstrated the increase in classification accuracy of this coverage using Sentinel-2 optical and Sentinel-1 SAR imagery, which is consistent with the significant increase of 5.72% obtained in this research when compared with optical imagery alone. In the Brovey fusion method, a low user accuracy was evidenced for the coverage associated with bare soil. This is related to the results obtained by [6,14,63], where the bare soil and low vegetation coverages presented the lowest user accuracies (UA) compared with the other classified coverages. The Sentinel-2 optical image classification showed high similarity with the high-frequency modulation (HFM) and Brovey (BR) methods, which corroborates the results found by [2,11,12,56], where the results using only optical images were slightly lower than those obtained by integrating optical and SAR images; however, a significant decrease was found when comparing the classification using only SAR images, as evidenced in this study. The overall accuracy obtained by the principal components (PC) method was lower than that obtained by the Gram–Schmidt (GS) method, a result that corroborates that found in [19], where, in their evaluated areas, the principal components (PC) method also obtained lower accuracies than the Gram–Schmidt (GS) method.
Finally, it was found that the best result for the overall accuracy (OA) and kappa coefficient was obtained for the classification of the compiled image of optical and SAR bands (stack), corroborating the results of [14], in which the highest overall accuracy (OA) was obtained by combining both sources of information rather than using them separately.

5. Conclusions

The use of the optical Sentinel-2 and SAR Sentinel-1 images proved that they complement each other, allowing the creation of a new synthetic image with the characteristics of both sources through the implementation of image fusion methods, and providing more information on the study area than is obtained from these two sources separately. In addition, as observed in the histograms of the fused images compared with the optical image, except for the Brovey (BR) method, the methods did not present significant distortions in the values of the fused image, demonstrating that no significant alterations are experienced when linking the SAR information to the optical image.
The coverage classification through the random forest (RF) algorithm presented the highest overall accuracy (OA) for the image formed by the compilation (stack) of optical and SAR bands, reaching 82.14%, being 11.66% higher than the SAR image and 7.85% higher than the optical image. The user accuracies (UA) obtained in the stack of bands were higher than 70% for all the analyzed coverages. Regarding the fused images, the high-frequency modulation (HFM) method showed an increase in the overall accuracy (OA) of 3.8%, and the Brovey (BR) method of 3.09%, compared with the Sentinel-2 optical image. These results demonstrate the potential of integrating optical images with Sentinel SAR images to increase the accuracy of coverage classification.
The overall accuracy (OA) obtained by the SAR image was slightly lower than that obtained by the optical image, being 3.81% lower. However, it showed very high accuracies in the classification of the oil palm and forest coverages, reaching 94.29% and 90%, respectively. This demonstrates the capacity of synthetic aperture radar (SAR) systems for land cover mapping, capturing global information day and night without being affected by atmospheric conditions, which is a significant limitation in the use of optical satellites. On the other hand, the Gram–Schmidt (GS) and principal components (PC) methods presented overall accuracies (OA) lower than the optical image, by 2.15% and 8.1%, respectively.
A high increase in SAR image accuracy was evidenced by linking both SAR indices and textural indices derived from the gray-level co-occurrence matrix (GLCM). It is recommended that future research implement SAR images containing other polarizations, such as “HH” and “HV”, and different wavelengths, such as the L and P bands, which can provide more information on the coverages and thus improve the accuracy of classification using only SAR images.

Author Contributions

Conceptualization, J.M.M.-T., J.L.T.-L. and Y.A.G.-G.; methodology, J.M.M.-T.; software, J.M.M.-T.; validation, J.M.M.-T. and J.L.T.-L.; formal analysis, J.M.M.-T., J.L.T.-L. and Y.A.G.-G.; investigation, J.M.M.-T., J.L.T.-L. and Y.A.G.-G.; resources, J.L.T.-L.; data curation, J.M.M.-T.; writing—original draft preparation, J.M.M.-T.; writing—review and editing, J.L.T.-L. and Y.A.G.-G.; visualization, J.M.M.-T.; supervision, J.L.T.-L. and Y.A.G.-G.; project administration, J.L.T.-L. and Y.A.G.-G.; funding acquisition, J.L.T.-L. and Y.A.G.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Colombian Oil Palm Promotion Fund (FFP), administered by Fedepalma, Colombian Oil Palm Research Center—Cenipalma and “Universidad Católica de Manizales”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Acknowledgements to the “Universidad Católica de Manizales” and the Master’s program in remote sensing for the support and guidance provided in the development of the project. Thanks to the “Centro de Investigación en Palma de Aceite—Cenipalma” for the technical support and the financing of this research. Finally, thanks to the European Space Agency (ESA) for providing the free optical and SAR data from Sentinel.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mercier, A.; Betbeder, J.; Rumiano, F.; Baudry, J.; Gond, V.; Blanc, L.; Bourgoin, C.; Cornu, G.; Ciudad, C.; Marchamalo, M.; et al. Evaluation of Sentinel-1 and 2 Time Series for Land Cover Classification of Forest–Agriculture Mosaics in Temperate and Tropical Landscapes. Remote Sens. 2019, 11, 979. [Google Scholar] [CrossRef] [Green Version]
  2. McNairn, H.; Champagne, C.; Shang, J.; Holmstrom, D.; Reichert, G. Integration of optical and Synthetic Aperture Radar (SAR) imagery for delivering operational annual crop inventories. ISPRS J. Photogramm. Remote Sens. 2009, 64, 434–449. [Google Scholar] [CrossRef]
  3. ESA. Sentinel-2 User Handbook; ESA Standard Document; ESA: Paris, France, 2015. [Google Scholar]
  4. ESA. Sentinel-1 User Handbook; ESA User Guide; ESA: Paris, France, 2013. [Google Scholar]
  5. Verde, N.; Kokkoris, I.; Georgiadis, C.; Kaimaris, D.; Dimopoulos, P.; Mitsopoulos, I.; Mallinis, G. National Scale Land Cover Classification for Ecosystem Services Mapping and Assessment, Using Multitemporal Copernicus EO Data and Google Earth Engine. Remote Sens. 2020, 12, 3303. [Google Scholar] [CrossRef]
  6. Fieuzal, R.; Marais Sicre, C.; Baup, F. Estimation of corn yield using multi-temporal optical and radar satellite data and artificial neural networks. Int. J. Appl. Earth Obs. Geoinf. ITC J. 2017, 57, 14–23. [Google Scholar] [CrossRef]
  7. Haldar, D.; Patnaik, C.; Mohan, S.; Chakraborty, M. Jute and tea discrimination through fusion of sar and optical data. Prog. Electromagn. Res. B 2012, 39, 337–354. [Google Scholar] [CrossRef] [Green Version]
  8. Florez, R.M.J. Evaluación de Imágenes de Radar Sentinel-1A e Imágenes Multiespectrales Sentinel-2A en la Clasificación de Cobertura del Suelo en Diferentes Niveles de Detalle. Master’s Thesis, National University of Colombia, Bogota, Colombia, 2019. [Google Scholar]
  9. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.-I. Assessing the suitability of data from Sentinel-1A and 2A for crop classification. GISci. Remote Sens. 2017, 54, 918–938. [Google Scholar] [CrossRef]
  10. De Alban, J.D.T.; Connette, G.M.; Oswald, P.; Webb, E.L. Combined Landsat and L-Band SAR Data Improves Land Cover Classification and Change Detection in Dynamic Tropical Landscapes. Remote Sens. 2018, 10, 306. [Google Scholar] [CrossRef] [Green Version]
  11. Robertson, L.D.; Davidson, A.M.; McNairn, H.; Hosseini, M.; Mitchell, S.; de Abelleyra, D.; Verón, S.; le Maire, G.; Plannells, M.; Valero, S.; et al. C-band synthetic aperture radar (SAR) imagery for the classification of diverse cropping systems. Int. J. Remote Sens. 2020, 41, 9628–9649. [Google Scholar] [CrossRef]
  12. Li, L.; Dong, J.; Tenku, S.N.; Xiao, X. Mapping Oil Palm Plantations in Cameroon Using PALSAR 50-m Orthorectified Mosaic Images. Remote Sens. 2015, 7, 1206–1224. [Google Scholar] [CrossRef] [Green Version]
  13. Sarzynski, T.; Giam, X.; Carrasco, L.; Lee, J.S.H. Combining Radar and Optical Imagery to Map Oil Palm Plantations in Sumatra, Indonesia, Using the Google Earth Engine. Remote Sens. 2020, 12, 1220. [Google Scholar] [CrossRef] [Green Version]
  14. Carolita, I.; Darmawan, S.; Permana, R.; Dirgahayu, D.; Wiratmoko, D.; Kartika, T.; Arifin, S. Comparison of Optic Landsat-8 and SAR Sentinel-1 in Oil Palm Monitoring, Case Study: Asahan, North Sumatera, Indonesia. IOP Conf. Ser. Earth Environ. Sci. 2019, 280. [Google Scholar] [CrossRef]
  15. Fernandez-Beltran, R.; Haut, J.M.; Paoletti, M.E.; Plaza, J.; Plaza, A.; Pla, F. Multimodal Probabilistic Latent Semantic Analysis for Sentinel-1 and Sentinel-2 Image Fusion. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1347–1351. [Google Scholar] [CrossRef]
  16. Li, D.; Zhang, Y.; Dong, X.; Shi, X.; Zhai, W. A HSV-Based Fusion of InIRA SAR and GoogleEarth Optical Images. In Proceedings of the 2018 Asia-Pacific Microwave Conference (APMC), Kyoto, Japan, 6–9 November 2018. [Google Scholar] [CrossRef]
  17. Manakos, I.; Kordelas, G.A.; Marini, K. Fusion of Sentinel-1 data with Sentinel-2 products to overcome non-favourable atmospheric conditions for the delineation of inundation maps. Eur. J. Remote Sens. 2020, 53, 53–66. [Google Scholar] [CrossRef] [Green Version]
  18. Quan, Y.; Tong, Y.; Feng, W.; Dauphin, G.; Huang, W.; Xing, M. A Novel Image Fusion Method of Multi-Spectral and SAR Images for Land Cover Classification. Remote Sens. 2020, 12, 3801. [Google Scholar] [CrossRef]
  19. Pohl, C.; Loong, C.K.; van Genderen, J. Multisensor approach to oil palm plantation monitoring using data fusion and GIS. In Proceedings of the 36th Asian Conference on Remote Sensing ‘Fostering Resiient Growth in Asia’, Manila, Philippines, 19–23 October 2015. [Google Scholar]
  20. Darmawan, S.; Carolita, I.; Hernawati, R.; Dirgahayu, D.; Agustan; Permadi, D.A.; Sari, D.K.; Suryadini, W.; Wiratmoko, D.; Kunto, Y. The Potential Scattering Model for Oil Palm Phenology Based on Spaceborne X-, C-, and L-Band Polarimetric SAR Imaging. J. Sens. 2021, 2021, 6625774. [Google Scholar] [CrossRef]
  21. Kee, Y.W.; Shariff, A.R.M.; Sood, A.M.; Nordin, L. Application of SAR data for oil palm tree discrimination. IOP Conf. Ser. Earth Environ. Sci. 2018, 169, 012065. [Google Scholar] [CrossRef]
  22. Pohl, C. Mapping palm oil expansion using SAR to study the impact on the CO2 cycle. IOP Conf. Ser. Earth Environ. Sci. 2014, 20, 12012. [Google Scholar] [CrossRef] [Green Version]
  23. Yıldırım, D.; Güngör, O. A novel image fusion method using IKONOS satellite images. J. Geod. Geoinf. 2012, 1, 75–83. [Google Scholar] [CrossRef]
  24. Yonghong, J.; Meng, W.; Xiaoping, Z. An improved high frequency modulating fusion method based on modulation transfer function filters. In Proceedings of the XXII ISPRS Congress, Melbourne, Australia, 25 August–1 September 2012. [Google Scholar]
  25. Laben, C.A.; Browen, B.V. Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening. U.S. Patent No. 6,011,875, 4 January 2000. [Google Scholar]
  26. González-Audícana, M.; Saleta, J.; Catalan, R.; Garcia, R. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1291–1299. [Google Scholar] [CrossRef]
  27. Kulkarni, S.; Rege, P. Pixel level fusion techniques for SAR and optical images: A review. Inf. Fusion 2020, 59, 13–29. [Google Scholar] [CrossRef]
  28. Kulkarni, S.; Kedar, M.; Rege, P.P. Comparison of Different Speckle Noise Reduction Filters for RISAT -1 SAR Imagery. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 0537–0541. [Google Scholar] [CrossRef]
  29. Quegan, S.; Yu, J.J. Filtering of multichannel SAR images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2373–2379. [Google Scholar] [CrossRef]
  30. Abburu, S.; Golla, S.B. Satellite Image Classification Methods and Techniques: A Review. Int. J. Comput. Appl. 2015, 119, 20–25. [Google Scholar] [CrossRef]
  31. Li, C.; Wang, J.; Wang, L.; Hu, L.; Gong, P. Comparison of Classification Algorithms and Training Sample Sizes in Urban Land Classification with Landsat Thematic Mapper Imagery. Remote Sens. 2014, 6, 964–983. [Google Scholar] [CrossRef] [Green Version]
  32. Thanh Noi, P.; Kappas, M. Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery. Sensors 2017, 18, 18. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. More, A.S.; Rana, D.P. Review of Random Forest Classification Techniques to Resolve Data Imbalance. In Proceedings of the 1st International Conference on Intelligent Systems and Information Management (ICISIM), Aurangabad, India, 5–6 October 2017; pp. 72–78. [Google Scholar]
  34. Breiman, L. Random Forests. Machine Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  35. Myles, A.J.; Feudale, R.N.; Liu, Y.; Woody, N.A.; Brown, S.D. An introduction to decision tree modeling. J. Chemom. 2004, 18, 275–285. [Google Scholar] [CrossRef]
  36. Seo, D.K.; Kim, Y.H.; Eo, Y.D.; Lee, M.H.; Park, W.Y. Fusion of SAR and Multispectral Images Using Random Forest Regression for Change Detection. ISPRS Int. J. Geo-Inf. 2018, 7, 401. [Google Scholar] [CrossRef] [Green Version]
  37. Sun, C.; Bian, Y.; Zhou, T.; Pan, J. Using of Multi-Source and Multi-Temporal Remote Sensing Data Improves Crop-Type Mapping in the Subtropical Agriculture Region. Sensors 2019, 19, 2401. [Google Scholar] [CrossRef] [Green Version]
  38. Loosvelt, L.; Peters, J.; Skriver, H.; Lievens, H.; Van Coillie, F.M.; De Baets, B.; Verhoest, N.E. Random Forests as a tool for estimating uncertainty at pixel-level in SAR image classification. Int. J. Appl. Earth Obs. Geoinf. ITC J. 2012, 19, 173–184. [Google Scholar] [CrossRef]
  39. Torbick, N.; Ledoux, L.; Salas, W.; Zhao, M. Regional Mapping of Plantation Extent Using Multisensor Imagery. Remote Sens. 2016, 8, 236. [Google Scholar] [CrossRef] [Green Version]
  40. Zhang, M.; Huang, H.; Li, Z.; Hackman, K.; Liu, C.; Andriamiarisoa, R.; Raherivelo, T.N.A.N.; Li, Y.; Gong, P. Automatic High-Resolution Land Cover Production in Madagascar Using Sentinel-2 Time Series, Tile-Based Image Classification and Google Earth Engine. Remote Sens. 2020, 12, 3663. [Google Scholar] [CrossRef]
  41. Amani, M.; Kakooei, M.; Moghimi, A.; Ghorbanian, A.; Ranjgar, B.; Mahdavi, S.; Davidson, A.; Fisette, T.; Rollin, P.; Brisco, B.; et al. Application of Google Earth Engine Cloud Computing Platform, Sentinel Imagery, and Neural Networks for Crop Mapping in Canada. Remote Sens. 2020, 12, 3561. [Google Scholar] [CrossRef]
42. Fedepalma. Innovación y sostenibilidad en la agroindustria de la palma de aceite en Colombia. Mem. XIX Conf. Int. Sobre Palma Aceite 2018, 1, 9–18.
43. Vargas, L.E.P.; Laurance, W.F.; Clements, G.R.; Edwards, W. The Impacts of Oil Palm Agriculture on Colombia’s Biodiversity: What We Know and Still Need to Know. Trop. Conserv. Sci. 2015, 8, 828–845.
44. Chong, K.L.; Kanniah, K.D.; Pohl, C.; Tan, K.P. A review of remote sensing applications for oil palm studies. Geo-Spatial Inf. Sci. 2017, 20, 184–200.
45. Danylo, O.; Pirker, J.; Lemoine, G.; Ceccherini, G.; See, L.; McCallum, I.; Hadi; Kraxner, F.; Achard, F.; Fritz, S. A map of the extent and year of detection of oil palm plantations in Indonesia, Malaysia and Thailand. Sci. Data 2021, 8, 96.
46. IDEAM. Leyenda Nacional de Coberturas de la Tierra. Metodología CORINE Land Cover Adaptada para Colombia Escala 1:100.000; IDEAM: Bogotá, Colombia, 2010.
47. Cenipalma. Portal GeoPalma. Tablero Catastro: Geoservicio Catastro Físico. Available online: http://geoportal.cenipalma.org (accessed on 1 March 2022).
48. Mullissa, A.; Vollrath, A.; Odongo-Braun, C.; Slagter, B.; Balling, J.; Gou, Y.; Gorelick, N.; Reiche, J. Sentinel-1 SAR Backscatter Analysis Ready Data Preparation in Google Earth Engine. Remote Sens. 2021, 13, 1954.
49. Byun, Y.; Choi, J.; Han, Y. An Area-Based Image Fusion Scheme for the Integration of SAR and Optical Satellite Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2212–2220.
50. Chu, T.; Tan, Y.; Liu, Q.; Bai, B. Novel fusion method for SAR and optical images based on non-subsampled shearlet transform. Int. J. Remote Sens. 2020, 41, 4590–4604.
51. Mullissa, A.G.; Marcos, D.; Tuia, D.; Herold, M.; Reiche, J. deSpeckNet: Generalizing Deep Learning-Based SAR Image Despeckling. IEEE Trans. Geosci. Remote Sens. 2020, 60, 5200315.
52. Hashim, I.C.; Shariff, A.R.M.; Bejo, S.K.; Muharam, F.M.; Ahmad, K. Machine-Learning Approach Using SAR Data for the Classification of Oil Palm Trees That Are Non-Infected and Infected with the Basal Stem Rot Disease. Agronomy 2021, 11, 532.
53. Lazecky, M.; Lhota, S.; Penaz, T.; Klushina, D. Application of Sentinel-1 satellite to identify oil palm plantations in Balikpapan Bay. IOP Conf. Ser. Earth Environ. Sci. 2018, 169, 012064.
54. Zhang, H.; Lin, H.; Li, Y. Impacts of Feature Normalization on Optical and SAR Data Fusion for Land Use/Land Cover Classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1061–1065.
55. Spracklen, B.; Spracklen, D.V. Synergistic Use of Sentinel-1 and Sentinel-2 to Map Natural Forest and Acacia Plantation and Stand Ages in North-Central Vietnam. Remote Sens. 2021, 13, 185.
56. Miettinen, J.; Liew, S.C.; Kwoh, L.K. Usability of Sentinel-1 dual polarization C-band data for plantation detection in Insular Southeast Asia. In Proceedings of the 36th Asian Conference on Remote Sensing (ACRS2015), Quezon, Philippines, 19–23 October 2015; pp. 19–33.
57. Ballester-Berman, J.; Rastoll-Gimenez, M. Sensitivity Analysis of Sentinel-1 Backscatter to Oil Palm Plantations at Pluriannual Scale: A Case Study in Gabon, Africa. Remote Sens. 2021, 13, 2075.
58. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 6, 610–621.
59. Tan, K.P.; Kanniah, K.D.; Cracknell, A.P. Use of UK-DMC 2 and ALOS PALSAR for studying the age of oil palm trees in southern peninsular Malaysia. Int. J. Remote Sens. 2013, 34, 7424–7446.
60. Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium. Remote Sens. 2018, 10, 1642.
61. Oon, A.; Ngo, K.D.; Azhar, R.; Ashton-Butt, A.; Lechner, A.; Azhar, B. Assessment of ALOS-2 PALSAR-2 L-band and Sentinel-1 C-band SAR backscatter for discriminating between large-scale oil palm plantations and smallholdings on tropical peatlands. Remote Sens. Appl. Soc. Environ. 2019, 13, 183–190.
62. Werner, A.; Storie, C.D.; Storie, J. Evaluating SAR-Optical Image Fusions for Urban LULC Classification in Vancouver Canada. Can. J. Remote Sens. 2014, 40, 278–290.
63. Orynbaikyzy, A.; Gessner, U.; Mack, B.; Conrad, C. Crop Type Classification Using Fusion of Sentinel-1 and Sentinel-2 Data: Assessing the Impact of Feature Selection, Optical Data Availability, and Parcel Sizes on the Accuracies. Remote Sens. 2020, 12, 2779.
64. Heckel, K.; Urban, M.; Schratz, P.; Mahecha, M.; Schmullius, C. Predicting Forest Cover in Distinct Ecosystems: The Potential of Multi-Source Sentinel-1 and -2 Data Fusion. Remote Sens. 2020, 12, 302.
65. Sicre, C.M.; Fieuzal, R.; Baup, F. Contribution of multispectral (optical and radar) satellite images to the classification of agricultural surfaces. Int. J. Appl. Earth Obs. Geoinf. 2020, 84, 101972.
66. Amani, M.; Salehi, B.; Mahdavi, S.; Granger, J.; Brisco, B. Wetland classification in Newfoundland and Labrador using multi-source SAR and optical data integration. GIScience Remote Sens. 2017, 54, 779–796.
Figure 1. Location of the study area: central palm zone, Palmar de la Vizcaína Experimental Station, Santander, Colombia.
Figure 2. Overall workflow used in this study, covering pre-processing, image fusion, and land cover classification assessment.
Figure 3. Location of ground truth points for the validation process of land cover classification.
Figure 4. Per-band histograms of pixel values for the images fused by each evaluated method vs. the Sentinel-2 optical image. Orange corresponds to the fused image; blue corresponds to the Sentinel-2 optical image.
Figure 5. Spectral signatures of the land covers identified in the Sentinel-2 optical image (A) and in the images fused by the Brovey (BR) (B), high-frequency modulation (HFM) (C), Gram–Schmidt (GS) (D), and principal components (PC) (E) methods.
Figure 6. Results of land cover classifications using the RF algorithm: optical image (A), SAR image (B), images fused by the BR (C), HFM (D), GS (E), and PC (F) methods, and the optical/SAR image stack (G).
Figure 7. User accuracies obtained for each land cover.
Figure 8. Producer accuracies obtained for each land cover.
Figure 9. Comparison of overall accuracies and kappa coefficients for each of the images analyzed.
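The per-class and global measures shown in Figures 7–9 all derive from the classification confusion matrix. The following is a minimal Python sketch, not the authors' code, assuming a matrix with reference (ground truth) classes in rows and predicted classes in columns; the class counts in the usage example are hypothetical.

```python
# Accuracy measures from a confusion matrix (rows = reference classes,
# columns = predicted classes). Illustrative sketch only.
import numpy as np

def accuracy_measures(cm):
    """Return user/producer accuracies per class, overall accuracy, and kappa."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    user_acc = diag / cm.sum(axis=0)   # correct / all pixels mapped to the class
    prod_acc = diag / cm.sum(axis=1)   # correct / all reference pixels of the class
    overall = diag.sum() / total
    # Kappa: agreement beyond chance, using expected agreement p_e
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (overall - p_e) / (1.0 - p_e)
    return user_acc, prod_acc, overall, kappa

# Hypothetical 3-class example (e.g., oil palm, forest, pasture):
cm = [[66, 2, 2],
      [3, 27, 0],
      [4, 1, 35]]
ua, pa, oa, k = accuracy_measures(cm)
```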
Table 1. Results of the statistical indicators used to measure the quality of the fusion methods: Brovey (BR), high-frequency modulation (HFM), Gram–Schmidt (GS), and principal components (PC).
Fusion Method    RMSE       SDT        PSNR       CC         ERGAS
BR               0.00057    0.13503    12.60602   0.70102    0.32001
HFM              0.00002    0.03890    35.82745   0.92981    0.00979
GS               0.00003    0.03371    26.22411   0.68361    0.02747
PC               0.00005    0.03449    23.02522   0.23065    0.04880
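For reference, the quality indicators in Table 1 could be computed along the following lines. This is a minimal Python sketch under stated assumptions, not the implementation used in the study: it assumes the reference (Sentinel-2) and fused images are co-registered NumPy arrays of shape (bands, rows, cols) scaled to [0, 1], interprets SDT as the standard deviation of the difference image, and uses illustrative file names.

```python
# Sketch of the fusion-quality indicators in Table 1. Assumptions: ref/fused
# are co-registered float arrays of shape (bands, rows, cols) in [0, 1];
# SDT is read here as the standard deviation of the difference image.
import numpy as np

def rmse(ref, fused):
    """Root mean square error over all bands and pixels."""
    return np.sqrt(np.mean((ref - fused) ** 2))

def sdt(ref, fused):
    """Standard deviation of the difference image (assumed meaning of SDT)."""
    return np.std(ref - fused)

def psnr(ref, fused, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    return 10.0 * np.log10(peak ** 2 / np.mean((ref - fused) ** 2))

def cc(ref, fused):
    """Pearson correlation coefficient between the two images."""
    return np.corrcoef(ref.ravel(), fused.ravel())[0, 1]

def ergas(ref, fused, ratio=1.0):
    """Relative dimensionless global error in synthesis (ERGAS).
    ratio = high-resolution pixel size / multispectral pixel size
    (1.0 when both inputs share a 10 m grid)."""
    terms = [np.mean((r - f) ** 2) / np.mean(r) ** 2 for r, f in zip(ref, fused)]
    return 100.0 * ratio * np.sqrt(np.mean(terms))

# Illustrative usage with hypothetical file names:
# ref = np.load("sentinel2_reference.npy")
# fused = np.load("hfm_fused.npy")
# print(rmse(ref, fused), sdt(ref, fused), psnr(ref, fused),
#       cc(ref, fused), ergas(ref, fused))
```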