Article

Improvement of Clustering Methods for Modelling Abrupt Land Surface Changes in Satellite Image Fusions

Canada Center for Remote Sensing, Canada Centre for Mapping and Earth Observation, Natural Resources Canada, 6th floor, 560 Rochester Street, Ottawa, ON K1S 5K2, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(15), 1759; https://doi.org/10.3390/rs11151759
Submission received: 13 June 2019 / Revised: 16 July 2019 / Accepted: 24 July 2019 / Published: 26 July 2019
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)

Abstract

A key challenge in developing models for the fusion of surface reflectance data across multiple satellite sensors is ensuring that they apply to both gradual vegetation phenological dynamics and abrupt land surface changes. To better model land cover spatial and temporal changes, we previously proposed a Prediction Smooth Reflectance Fusion Model (PSRFM) that combines a dynamic prediction model based on the linear spectral mixing model with a smoothing filter corresponding to the weighted average of forward and backward temporal predictions. One of the significant advantages of PSRFM is that it can model abrupt land surface changes either through optimized clusters or through the residuals of the predicted gradual changes. In this paper, we expanded our approach and developed more efficient clustering methods. We applied the new methods to dramatic land surface changes caused by a flood and a forest fire. Comparison of the model outputs showed that the new methods capture land surface changes more effectively. We also compared the improved PSRFM to two of the most popular reflectance fusion algorithms: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced version of STARFM (ESTARFM). The results showed that the improved PSRFM outperforms STARFM and ESTARFM both visually and quantitatively.


1. Introduction

The demand for satellite images with higher spatial, temporal and/or spectral resolution is growing rapidly across applications such as regional land-cover mapping, environmental monitoring, crop mapping and yield estimation, and climate change studies. Satellite imagery collected systematically by sensors such as NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), the Landsat Operational Land Imager and the Sentinel-2 Multispectral Imager developed by the European Space Agency (ESA) helps meet these demands. A technical challenge is that the usability of fine spatial-resolution images (e.g., Landsat and Sentinel-2 images) is still largely constrained by their low revisit frequency and cloud contamination. Developing advanced methods for fusing satellite images from high-temporal-frequency sensors (e.g., MODIS) and fine spatial-resolution sensors (e.g., Landsat Operational Land Imager or Sentinel-2 Multispectral Imager) has therefore become a well-established research field [1,2,3,4,5,6,7,8,9,10,11,12,13,14].
A number of satellite image fusion methods have been developed to predict synthetic Landsat-like images from dense time series of MODIS images and a limited set of fine-resolution Landsat images. Examples include the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) [1], the Enhanced version of STARFM (ESTARFM) [2], the Spatial Temporal Adaptive Algorithm for Mapping Reflectance Change (STAARCH) [3], the Unmixing-based Spatial-Temporal Reflectance Fusion Model (U-STFM) [4,5], the Flexible Spatiotemporal Data Fusion (FSDAF) model [6], the Spatiotemporal Image-Fusion Model (STI-FM) for enhancing temporal resolution [7], the Hybrid Color Mapping (HCM) approach [8], the Rigorously-Weighted Spatiotemporal Data Fusion Model (RWSTFM) [9], and the Prediction Smooth Reflectance Fusion Model (PSRFM) [10]. These studies have shown that the fusion methods are mostly limited to predicting spectral reflectance changes due to gradual vegetation phenological changes [6,7]. They still face challenges in predicting abrupt surface changes such as urbanization, deforestation, wildfires, floods, etc. [6]. In summary, there is a clear need for better fusion models that predict both gradual and abrupt land surface changes [10].
To better model both gradual vegetation phenological changes and abrupt land surface changes, we proposed the Prediction Smooth Reflectance Fusion Model (PSRFM) [10]. PSRFM combines a dynamic prediction model based on a linear spectral mixing algorithm [4,5,6] with a smoothing filter based on a weighted average of forward and backward temporal predictions. For some special applications, constraints based on the Normalized Difference Snow Index (NDSI) or the Normalized Difference Vegetation Index (NDVI) were also utilized in the smoothing filter. In comparison to commonly used reflectance fusion models such as STARFM and ESTARFM, one of PSRFM's significant advantages is that it can account for abrupt surface changes observed in reflectance either through the optimized clusters used for the gradual dynamic change estimations and/or by a model adjustment based on the residuals of the predicted gradual changes. However, this advantage was not explicitly tested in [10]. Here we extend the approach developed in [10] by testing this feature with an improved strategy for cluster optimization and a more efficient computational solution.
In the following, we first briefly review PSRFM and discuss how the improved clustering methods lead to better modelling of abrupt surface changes. We then test and evaluate the improved methods on two land surface change cases, one caused by a flood and one by a forest fire. To evaluate their performance, we also compare the improved PSRFM to two popular reflectance fusion algorithms, STARFM and ESTARFM.

2. Methodology

2.1. Overview of PSRFM

The problem to be solved by the image fusion model is to predict a synthetic fine-resolution image on date t1, given two pairs of coincident clear-sky fine-resolution (e.g., Landsat Operational Land Imager or Sentinel-2 Multispectral Imager) and coarse-resolution (e.g., MODIS) surface reflectance images on a start date t0 and an end date t2, and a coarse-resolution image at t1 within this period.
The basic idea of PSRFM is to take the fine-resolution reflectance image R0 on the start date t0 as an initial estimate with a variance (i.e., its observation uncertainty) and then add two terms that represent the gradual changes $\Delta R_{01}$ and the abrupt changes $\Delta_{01}$ from the start date t0 to a prediction date t1, respectively. That is, the predicted fine-resolution image R1 at t1 is derived as:
$$ R_1 = R_0 + \Delta R_{01} + \Delta_{01} \qquad (1) $$
where the corrections $\Delta R_{01}$ and $\Delta_{01}$ are determined from the coarse-resolution (MODIS) observations at t0 and t1.
The gradual changes $\Delta R_{01}$ are further modelled by a linear surface reflectance change rate, or velocity, for each classified region or cluster of the fine-resolution pixels. These rates for all clusters are estimated by solving an equation system built from the spectral linear mixing theory [4,5,6] with the least-squares adjustment method [15,16]. Details on how the reflectance change velocities are determined are given in [10]. To model the abrupt changes $\Delta_{01}$, a model adjustment based on spatial residual analysis and interpolation, similar to the thin plate spline (TPS) interpolation [6], is adopted.
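To make the unmixing step concrete, the following sketch shows how per-cluster change velocities could be estimated by least squares for a single band, assuming each coarse pixel exactly covers an integer block of fine-resolution pixels; the function and variable names are illustrative and not the actual PSRFM implementation.

```python
# Sketch of per-cluster change-rate estimation for one band (illustrative names).
import numpy as np

def estimate_cluster_rates(cluster_map, coarse_diff, block, n_clusters, dt):
    """Least-squares estimate of the reflectance change velocity of each cluster.

    cluster_map : (H, W) int array, cluster label of each fine-resolution pixel
    coarse_diff : (h, w) float array, MODIS reflectance difference M1 - M0
    block       : number of fine pixels per coarse pixel along each axis
    dt          : time interval t1 - t0
    """
    h, w = coarse_diff.shape
    A = np.zeros((h * w, n_clusters))              # cluster fractions per coarse pixel
    for i in range(h):
        for j in range(w):
            tile = cluster_map[i*block:(i+1)*block, j*block:(j+1)*block]
            counts = np.bincount(tile.ravel(), minlength=n_clusters)
            A[i*w + j] = counts / counts.sum()
    b = coarse_diff.ravel()
    # Solve A @ (rate * dt) = b in the least-squares sense
    delta_per_cluster, *_ = np.linalg.lstsq(A, b, rcond=None)
    return delta_per_cluster / dt                  # change velocity per cluster
```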
Assuming that the abrupt surface change, such as the one caused by flooding or a forest fire, at a fine-resolution pixel is $\delta$, and the reflectance change velocity for a cluster c is $\dot r$, the predicted reflectance of band b at a pixel (x, y) belonging to cluster c at time t1 is derived as follows:
$$ r_1(x, y, c, b) = r_0(x, y, c, b) + (t_1 - t_0)\,\dot r(x, y, c, b) + \delta(x, y, c, b) \qquad (2) $$
where $r_0(x, y, c, b)$ is the fine-resolution surface reflectance observation of band b at the pixel (x, y) belonging to cluster c at time t0.
The term $\delta$ is introduced to model abrupt surface changes within a cluster. It can be seen as a correction to the predicted gradual changes of the cluster and is considered further below.

2.2. Modelling Abrupt Changes by Residual Adjustment

In Equation (2), we assume that the reflectance change velocity of a cluster reflects the gradual temporal changes reasonably well if there are no abrupt changes within the cluster between the start date t0 and the prediction date t1. For this reason, most fusion models based on the spectral linear mixing theory, such as U-STFM [4], STRUM [13], STDFA [17] and MSTDFA [18], are usually effective and able to predict gradual vegetation phenological changes if the fine-resolution pixels at t0 are clustered properly. However, if the surface properties have changed significantly within a cluster, the reflectance change rate of this cluster cannot be estimated accurately because the coarse-resolution (MODIS) observations within the cluster on the start date t0 and the prediction date t1 no longer represent the same cluster. Using all the pixel values, including those with abrupt changes within the cluster, to estimate a common change rate may increase the estimation error. The effects of abrupt changes that occur in a relatively small portion of a cluster are similar to those of observation outliers at the MODIS pixels containing the abrupt changes, which increase the estimation error of $\dot r$ and lead to larger residuals between the modelled and observed MODIS reflectance. Based on this fact, the residuals can be used to model the abrupt surface changes. Zhu et al. [6] adjusted the residuals by using the thin plate spline (TPS) interpolation. For simplicity, PSRFM implemented a bilinear interpolation method to distribute the residuals spatially to all of the predicted fine-resolution-like pixels and treated this as modelling abrupt changes by residual adjustment, because the residuals mainly come from surface reflectance changes and within-cluster variability.
For an individual MODIS pixel, the residual can be calculated as [6]:
$$ \delta_M(x_i, y_i, b) = \Delta_M(x_i, y_i, b) - \frac{1}{p_i}\left[\sum_{j=1}^{p_i} \hat r_{t_1}(x_{ij}, y_{ij}, b) - \sum_{j=1}^{p_i} r_{t_0}(x_{ij}, y_{ij}, b)\right] \qquad (3) $$
where $\delta_M(x_i, y_i, b)$ is the residual of band b at MODIS pixel $(x_i, y_i)$; $\Delta_M(x_i, y_i, b)$ is the MODIS observation difference between dates t0 and t1; $\hat r_{t_1}(x_{ij}, y_{ij}, b)$ is the predicted fine-resolution-like pixel value on the prediction date t1; $r_{t_0}(x_{ij}, y_{ij}, b)$ is the fine-resolution pixel value on the start date t0; and $p_i$ is the number of fine-resolution pixels within MODIS pixel i.
Equation (3) indicates that the residual at a MODIS pixel represents the bias of the estimated MODIS reflectance based on the predicted fine-resolution-like image in comparison to the observed coarse-resolution MODIS reflectance at pixel $(x_i, y_i)$. If the predicted temporal change reflects well the gradual vegetation phenological change of a cluster of fine-resolution pixels, the corresponding residuals should follow a zero-mean normal distribution, because the MODIS observations are then free of the bias caused by outliers such as abrupt changes. Otherwise, either abrupt reflectance changes or observation outliers may have occurred at some of the coarse-resolution MODIS pixels within the cluster. Based on this fact, we can subsequently model the abrupt changes through optimized clusters that separate the pixels with the abrupt changes from their original cluster and treat them as a new cluster.
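A minimal single-band sketch of this residual adjustment, assuming the fine grid is an exact block-by-block multiple of the coarse grid, might look as follows; the bilinear spreading uses SciPy's zoom and the names are hypothetical.

```python
# Illustrative residual adjustment following Equation (3): compute the per-MODIS-pixel
# residual, then spread it to fine resolution with bilinear interpolation.
import numpy as np
from scipy.ndimage import zoom

def residual_adjustment(pred_fine_t1, fine_t0, coarse_diff, block):
    """Return the predicted fine image corrected by bilinearly interpolated residuals."""
    H, W = pred_fine_t1.shape
    h, w = H // block, W // block
    # Mean predicted change of the fine pixels inside each MODIS pixel
    change = (pred_fine_t1 - fine_t0).reshape(h, block, w, block).mean(axis=(1, 3))
    residual_coarse = coarse_diff - change                     # Equation (3)
    residual_fine = zoom(residual_coarse, block, order=1)      # bilinear upsampling
    return pred_fine_t1 + residual_fine
```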

2.3. Modelling Abrupt Changes by Optimized Clusters

Two considerations may affect the effectiveness of the prediction model PSRFM. One is the clustering approach used for predicting gradual changes, and the other is the residual adjustment method for abrupt surface changes. They are interconnected. If we could cluster the observations to match the changed surface reflectance accurately, the residual adjustment might not be critical and could even be unnecessary. Therefore, correct clustering is critical to achieving a good result.
To determine the clusters optimally, PSRFM uses a method based on the rule of Maximizing Correlation but Smaller Residuals (MCSR) [10]. This rule selects an optimal cluster number from a set of pre-defined cluster numbers. The optimized cluster number should maximize the correlation between the differences ($\Delta R = \hat R_1 - R_0$) of the fine-resolution images and the differences ($\Delta M = M_1 - M_0$) of the coarse-resolution (MODIS) images without considerably increasing the sum of squared residuals ($\sum \delta_M^2$) of the predicted fine-resolution-like image. This optimization can yield an optimized cluster number, but the clustering process is based only on the input fine-resolution data. In the previous implementation, the input fine-resolution data were the Landsat images on the start date t0 for forward prediction or on the end date t2 for backward prediction. The problem is that the input data at t0 may not contain any surface change information if these changes occur after t0, and the image data at t2 may or may not contain surface change information. Hence, this clustering is only suitable for image fusion when no land surface changes occur between t0 and t2. If some land surface changes have occurred by the prediction date t1, the residual adjustment becomes necessary.
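A simplified reading of the MCSR rule is sketched below; the tolerance value and the run_prediction callback (assumed to return the fine- and coarse-resolution differences on a common grid together with the residuals) are assumptions for illustration.

```python
# Minimal sketch of MCSR-based selection of the cluster number.
import numpy as np

def select_cluster_number(candidates, run_prediction, tol=1.05):
    """run_prediction(k) -> (delta_R, delta_M, residuals) for k clusters,
    with delta_R and delta_M already resampled to a common grid."""
    best_k, best_corr, best_ssr = None, -np.inf, np.inf
    for k in candidates:
        delta_R, delta_M, residuals = run_prediction(k)
        corr = np.corrcoef(delta_R.ravel(), delta_M.ravel())[0, 1]
        ssr = np.sum(residuals ** 2)
        # Keep k only if correlation improves without residuals growing beyond tolerance
        if corr > best_corr and ssr <= tol * best_ssr:
            best_k, best_corr, best_ssr = k, corr, ssr
    return best_k
```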
To obtain better clusters that correctly reflect land surface changes, we may use not only the fine-resolution image at t0 or t2 but also the coarse-resolution MODIS observations at t1 as input data for the classification, since the land surfaces at t0, t1 and t2 may all differ for the changed areas. For example, in the first test case described in Section 3, a flood occurred at t1 and then slowly diminished until date t2. The inundated areas are dynamic and differ at the three times t0, t1 and t2. To account for such situations, we may need to include both the fine-resolution images at t0 and t2 and the coarse-resolution MODIS observations at t1 as input data for the clustering process. Accordingly, PSRFM implements several clustering options with different combinations of input data:
(1)
CLUSTER-1—Use the fine-resolution image at t0 or t2 as the input data for the classification of forward and backward predictions, respectively;
(2)
CLUSTER-2—Use the fine-resolution image at t0 or t2 and the MODIS image at t1 (resampled to the fine-resolution) as the input data for the classifications of forward and backward predictions, respectively;
(3)
CLUSTER-3—Use the fine-resolution image at t0 and t2, as well as the resampled MODIS image at t1 all together as the input data for the classifications for both forward and backward predictions.
If land surface changes occur between t0 and t1 or between t1 and t2, the coarse-resolution MODIS reflectance change ratios between t0 or t2 and t1 reflect the changes more directly. Therefore, we may alternatively use the MODIS reflectance change ratios as input data for the classification. The reflectance change ratio $\dot M_{01}$ is calculated as:
$$ \dot M_{01} = (M_1 - M_0)/M_0 \quad \text{or} \quad \dot M_{21} = (M_1 - M_2)/M_2 \qquad (4) $$
where $M_0$, $M_1$ and $M_2$ are the coarse-resolution MODIS reflectance observations at t0, t1 and t2, respectively.
Because the reflectance change ratios $\dot M_{01}$ and $\dot M_{21}$ contain the land surface change information at each MODIS pixel, if any, clusters based on them should be closer to reality and improve the model output. A shortcoming of the MODIS observations is that their spatial resolution is much lower than that of the typical fine-resolution images used. To match the coarse resolution of the MODIS images to the fine resolution of Landsat or Sentinel-2 images, all MODIS observations should be resampled and co-registered accurately.
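As a small illustration of Equation (4), the change ratio could be computed per pixel and band as below; the epsilon guard against near-zero reflectance is an assumption, not part of the paper.

```python
# Reflectance change ratios of Equation (4), with a small guard against division
# by near-zero reflectance (the guard value is an assumption).
import numpy as np

def change_ratio(m_ref, m_pred, eps=1e-4):
    """Ratio (M_pred - M_ref) / M_ref per pixel and band, e.g. ref = t0 or t2, pred = t1."""
    denom = np.where(np.abs(m_ref) < eps, eps, m_ref)
    return (m_pred - m_ref) / denom
```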
The clustering method can be the same as that used for CLUSTER-i (i = 1, 2, 3), i.e., ISODATA or K-Means [19]. A question is how to determine the parameters for the classification. A simple and practical method is to cluster the reflectance change ratios into k groups based on their values, band by band. Hassan et al. tested an image fusion method by dividing the reflectance change ratios into three groups [7]. Our test results show that the group numbers may vary with different datasets and even different spectral bands. A better way is to optimize the group number k with the same method used for the optimization of the cluster number described in [10]. As this classification algorithm involves only simple data grouping, it is much more efficient than the ISODATA or K-Means methods. For the tests and experiments presented in Section 3 and Section 4, we named this classification option as:
(4)
CLUSTER-4—Use the reflectance change ratios calculated from the coarse-resolution images between t0 and t1 or between t2 and t1 as the input data for the classification of forward and backward temporal predictions, respectively; the classification method divides the reflectance change ratios into an optimized number of k clusters based on the values of each spectral band (a minimal sketch of this grouping is given below).
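One possible reading of the CLUSTER-4 grouping, using equal-width value bins per band combined into a joint label, is sketched below; the binning scheme and names are assumptions for illustration.

```python
# Hypothetical per-band value binning of the change ratios combined into joint clusters.
import numpy as np

def cluster_by_change_ratio(ratios, k):
    """ratios : (bands, H, W) change-ratio image; returns (H, W) integer cluster labels."""
    bands, H, W = ratios.shape
    labels = np.zeros((H, W), dtype=np.int64)
    for b in range(bands):
        r = ratios[b]
        edges = np.linspace(r.min(), r.max(), k + 1)
        bins = np.clip(np.digitize(r, edges[1:-1]), 0, k - 1)   # k value bins per band
        labels = labels * k + bins                               # joint label across bands
    return labels
```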

2.4. Implementation

We expanded the PSRFM algorithms to include all the options described in Section 2.3. To improve the co-registration between fine-resolution and coarse-resolution images, we designed a new co-registration module. It is based on maximizing the correlation between the two images and then cropping them to cover the same area. For evaluation of the new clustering approaches, the NDVI or NDSI constraints used for smoothing the forward and backward predictions in the previous implementation are omitted in this study.
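A minimal sketch of the correlation-maximizing co-registration idea is given below, assuming an exhaustive search over integer pixel shifts of the resampled coarse image; the search radius and single-band input are simplifying assumptions, not the actual module.

```python
# Exhaustive integer-shift search that keeps the shift with the highest correlation.
import numpy as np

def coregister_shift(fine, coarse_resampled, max_shift=10):
    """Return (dy, dx) that maximizes correlation between the two equal-sized images."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps around; a real implementation would crop the wrapped margins
            shifted = np.roll(np.roll(coarse_resampled, dy, axis=0), dx, axis=1)
            corr = np.corrcoef(fine.ravel(), shifted.ravel())[0, 1]
            if corr > best:
                best, best_shift = corr, (dy, dx)
    return best_shift
```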
Figure 1 shows the processing flowchart of the improved PSRFM algorithm. The main processing steps include:
(1)
Preprocessing of the required input images (e.g., Landsat or Sentinel-2 and MODIS images). This includes data download, cloud mask generation, remapping all input images to a common projection if they differ, resampling the coarse-resolution image to fine resolution by bilinear interpolation, co-registering one image to the other by maximizing the correlation between the coarse and fine images, and then cropping them to cover the same area [11,17]. For co-registration, the input MODIS images should cover a slightly larger area than the Landsat or Sentinel-2 images. According to our tests, an extension of 2 km on each side of the fine-resolution images is reasonable.
(2)
Performing forward predictions by clustering the input data, estimating the reflectance change rates of all clusters over a predetermined cluster number range (e.g., from 5 to 20) and determining the final optimized forward prediction [10].
(3)
Performing the same processing as in step 2 for the backward prediction.
(4)
Combining the optimized forward and backward predictions into the final prediction of the fine-resolution-like image, with weights based either on the temporal intervals between the input and the prediction images or on the uncertainties of the predicted images (a hedged sketch of this weighting is given below).
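A hedged sketch of this weighted combination, covering both the temporal-interval and the inverse-variance options described in step 4, might look as follows (variable names are illustrative):

```python
# Combine forward and backward predictions into the final fine-resolution-like image.
import numpy as np

def combine_predictions(fwd, bwd, t0, t1, t2, var_fwd=None, var_bwd=None):
    if var_fwd is not None and var_bwd is not None:
        w_fwd = var_bwd / (var_fwd + var_bwd)      # inverse-variance weighting
    else:
        w_fwd = (t2 - t1) / (t2 - t0)              # the closer input image gets more weight
    return w_fwd * fwd + (1.0 - w_fwd) * bwd
```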

3. Validation Test Cases

To verify the effectiveness of modelling abrupt surface changes by optimized clusters (Section 2.3) and to compare the modified PSRFM with the original method, validation experiments were carried out for predicting images containing a large flood and a forest fire in two different cases. The predicted images were evaluated against observations, and at the same time we compared the outputs with those of two well-studied methods, STARFM [1] and ESTARFM [2], to assess the performance of PSRFM's new classification approaches.
The first test case is taken from the preprocessed Landsat-5 TM and MODIS data of the Lower Gwydir Catchment study site (https://data.csiro.au/dap/landingpage?pid=csiro:5847) [20], located in northern New South Wales, Australia (149.2815E, 29.0855S), where 14 cloud-free Landsat-MODIS pairs were available from April 2004 to April 2005. Gwydir is 80 km N–S and 68 km E–W, covering 5440 km2 (3200 columns by 2720 lines at 25 m resolution). All Landsat imagery was acquired by Landsat-5 TM. The Landsat imagery was atmospherically corrected using Li et al.'s (2010) algorithm [19]. Geo-referencing was conducted using the Geocentric Datum of Australia 1994 (GDA94). MODIS Collection 5 data were acquired by the Terra spacecraft; specifically, MOD09GA data files were downloaded from the USGS DAAC (ftp://e4ftl01.cr.usgs.gov/MOLT/, during September 2011). MODIS data were geometrically transformed from a sinusoidal projection to WGS84 using the MODIS Re-projection Tool (https://lpdaac.usgs.gov/lpdaac/tools/modis_reprojection_tool, accessed on 27 September 2011).
For this study, we cropped a test area of 25 km by 25 km (1000 × 1000 pixels) with three bands, Green, Red and NIR, as shown in Figure 2. We used three pairs of MODIS (MOD09GA) and Landsat-5 TM images for the experiment, acquired on 26 November 2004, 12 December 2004 and 28 December 2004, respectively. Since the crops were irrigated in October and growing in December, and a large flood occurred around 12 December 2004, this dataset illustrates both phenological changes and abrupt surface changes very well. Figure 3 shows the image scenes using Red-Green-NIR as an R-G-B false color composite. In the Landsat-5 TM image on December 12, some large inundated areas are clearly visible in the lower right part of the image. They shrank to a few smaller areas but were still visible in the image of December 28.
For the validation experiment, we used the pair of MODIS and Landsat-5 TM images on 26 November 2004 that were acquired before the flood occurred as input data on the start date t0. The pair of MODIS and Landsat-5 TM images acquired after the flood on 28 December 2004 were used as input data for the end date t2. The MODIS image on 12 December 2004 that recorded the land surface changes caused by the flood was used to predict a Landsat-like image on the same day from the two pairs of images on the start and end dates. The observed Landsat TM image on the prediction date 12 December 2004 was used for visual and quantitative assessment of the predicted image.
The second test case is located around 55°03′N and 126°35′W in British Columbia, Canada. Three pairs of Sentinel-2 MSI L2A and MODIS BRDF-adjusted (MCD43A4) surface reflectance images acquired on 31 July, 15 August and 22 September 2018 were used for the experiment. All the data were downloaded from the USGS Earth Explorer website (https://earthexplorer.usgs.gov/). This dataset was selected specifically for testing the capability of the new clustering approaches to detect land surface changes caused by a forest fire. Around 15 August 2018, the Torkelsen Lake Fire (Fire ID: R32182) occurred and burned approximately 1300 hectares in the study area (Figure 4). For this study, we cropped a test area of 20 km by 20 km (1000 × 1000 pixels with a resolution of 20 m) and used three bands, B8A (Near Infrared—NIR), B11 (Shortwave Infrared 1—SWIR1) and B12 (Shortwave Infrared 2—SWIR2), from the Sentinel-2 MSI L2A data within tile T09UXB. The corresponding MODIS bands are 2, 6 and 7, respectively, each with a nominal resolution of 500 m.
These bands were selected to reduce the effects of the thick smoke caused by the fire. After the MODIS data were geometrically transformed from the sinusoidal projection to the same WGS84 UTM zone using the USGS MODIS Re-projection Tool, we resampled them by bilinear interpolation to the same 20 m resolution as the Sentinel-2 MSI image. They were then co-registered with the Sentinel-2 images by maximizing their correlations. Figure 5 shows the image scenes using bands SWIR1-NIR-SWIR2 as an R-G-B false color composite.
For the validation experiment, we used the pair of MODIS BRDF adjusted and Sentinel-2 MSI images on 31 July 2018 (before the forest fire occurred) as input data for the start date t0. The pair of MODIS BRDF adjusted and Sentinel-2 MSI images acquired after the fire on 22 September 2018 were used as input data for the end date t2. The MODIS BRDF adjusted image on 15 August 2018 that recorded the land surface changes caused by the fire was used to predict a Sentinel-2-like image on the same day with the two pairs of images on the start and end dates. The observed Sentinel-2 MSI image on the prediction date 15 August 2018 was used for visual and quantitative assessment of the predicted image.
Different from the first test case, some pixel values of the MODIS BRDF-adjusted image within the burning areas (the two rectangles shown in white in Figure 5) on the prediction date 15 August 2018 are not available for any of the bands. Some pixel values (the pixels in red of the same MODIS image in Figure 5) are not available only for the SWIR1 band. For the purpose of visual comparison, we reproduced a Sentinel-2 MSI image by marking these unavailable pixel values of the MODIS image on the observed Sentinel-2 MSI image of the prediction date 15 August 2018 accordingly. Figure 6 shows the original and marked Sentinel-2 MSI images.

4. Results

Using the two datasets, we compared the performance of PSRFM using the CLUSTER-i (i = 1, 2, 3, 4) strategies discussed in Section 2.3 in two different ways. One is image fusion with CLUSTER-i (i = 1, 2, 3, 4) without the residual adjustment (RA) for abrupt changes. The other is image fusion using CLUSTER-i (i = 1, 2, 3, 4) with RA for abrupt changes. To evaluate the performance of these PSRFM algorithms, we also compared the results to the STARFM and ESTARFM methods. We selected these two methods mainly because of their wide acceptance within the remote sensing community and the availability of processing software. In summary, our experiments include the following 10 algorithms:
(1)
PSRFM-11—CLUSTER-1 without RA
(2)
PSRFM-12—CLUSTER-1 with RA
(3)
PSRFM-21—CLUSTER-2 without RA
(4)
PSRFM-22—CLUSTER-2 with RA
(5)
PSRFM-31—CLUSTER-3 without RA
(6)
PSRFM-32—CLUSTER-3 with RA
(7)
PSRFM-41—CLUSTER-4 without RA
(8)
PSRFM-42—CLUSTER-4 with RA
(9)
STARFM—Spatial and Temporal Adaptive Reflectance Fusion Model
(10)
ESTARFM—the Enhanced version of STARFM
For both study cases we applied the STARFM method with a maximum search distance of 25 pixels and the other default values set by the program (available online: https://www.ars.usda.gov/research/software/download/?softwareid=432#downloadForm). For ESTARFM we used a half-window size of 25 pixels and the other default values set by the program (available online: https://xiaolinzhu.weebly.com/open-source-code.html). The different PSRFM algorithms employed the unsupervised ISODATA classification method and its default parameters implemented in Geomatica 2016, except that the number of clusters is determined by the model optimization algorithm described in [10]. Following the recommendation of Zhong and Zhou [10], the uncertainties of the input images used for all PSRFM processing are 40 for the Landsat-5 TM or Sentinel-2 MSI images and 20 for the MODIS images, with a reflectance scale factor of 0.0001.
To quantify the closeness of the predicted images to the observed Landsat-5 TM or Sentinel-2 MSI images, a number of quality indices can be used, such as the average difference (AD), average absolute difference (AAD), average absolute relative difference (ARD), correlation coefficient (CC), root mean squared error (RMSE), relative dimensionless global error (ERGAS), and quality index (QI) or structural similarity index (SSIM) [10,21,22,23]. Since the AD, AAD and ARD did not show significant differences among the different methods, only comparison results for the RMSE, CC, ERGAS and QI or SSIM are presented in this study. A hedged sketch of these indices is given below.
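For reference, helper functions for the RMSE, CC and ERGAS indices under their usual definitions are sketched below; SSIM would typically be taken from an existing implementation such as skimage.metrics.structural_similarity rather than re-derived here, and the resolution-ratio argument of ERGAS must be supplied by the caller.

```python
# Hedged quality-index helpers (usual definitions; not the exact scripts used in the paper).
import numpy as np

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def cc(pred, obs):
    return float(np.corrcoef(pred.ravel(), obs.ravel())[0, 1])

def ergas(pred, obs, resolution_ratio):
    """pred, obs : (bands, H, W) arrays; resolution_ratio : fine / coarse pixel size."""
    terms = [(rmse(p, o) / np.mean(o)) ** 2 for p, o in zip(pred, obs)]
    return float(100.0 * resolution_ratio * np.sqrt(np.mean(terms)))
```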

4.1. Test Case with Flood

Figure 7 shows visual comparisons of the predicted images against the observed Landsat-5 TM image on 12 December 2004 for the flooding dataset, derived from the Landsat-5 TM and MODIS image pairs acquired on 26 November 2004 and 28 December 2004 and the MODIS image on 12 December 2004. The corresponding quality index evaluation results are presented in Figure 8.
The comparisons of the predicted images in Figure 7 and Figure 8 reveal the following:
(1)
All of the methods PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2), STARFM and ESTARFM, except PSRFM-11, can largely maintain the spatial patterns and generate spectral information similar to the observed Landsat-5 TM image, although some deviations from the reference image are visible.
(2)
Without the RA for abrupt surface changes, an unmixing-based image fusion with clusters determined only from the fine-resolution (Landsat-5 TM) image on the start date 26 November 2004 or the end date 28 December 2004, such as PSRFM-11, is unable to predict the abrupt land surface changes (Figure 7a). Its performance is worse than those of STARFM and ESTARFM. This can be verified from the visual comparison of Figure 7a PSRFM-11 vs. Figure 7e STARFM and ESTARFM and from the comparison of the quality indices in Figure 8. However, with the introduction of the MODIS observations into the classification, PSRFM-21, PSRFM-31 and PSRFM-41 all produce improved results to some degree (Figure 7b–d). This verifies that clustering with MODIS input leads to better results.
(3)
Comparisons between the algorithms PSRFM-i1 and PSRFM-i2 (i = 1, 2, 3, 4), i.e., the models without and with the RA, show that the RA for abrupt surface changes plays a more important role when only fine-resolution images are used for clustering (Figure 7a PSRFM-11 vs. PSRFM-12). Abrupt surface changes such as flooding may occur quickly, so clustering using only the two fine-resolution images cannot reflect the reality on the prediction date. With the improvement of the clusters by introducing the MODIS observations in different ways, the role of the RA becomes less critical and even unnecessary. For example, the output of PSRFM-22 is better than that of PSRFM-21 (Figure 7b PSRFM-21 vs. PSRFM-22), while PSRFM-42 shows almost no improvement over PSRFM-41 (Figure 7d PSRFM-41 vs. PSRFM-42), because the clusters determined from the MODIS reflectance change ratios are already close to reality. These visual conclusions are also confirmed by the comparison of the quality indices in Figure 8.
(4)
Comparing the different approaches PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2), we conclude that PSRFM-22 and PSRFM-41 or PSRFM-42 are the better options because of their relatively good quality indices and higher computational efficiency with less input data for clustering. The result of PSRFM-32 looks comparable, but it is less computationally efficient because it involves all the Landsat-5 and MODIS image bands in the clustering process, which slows down the computations.
(5)
Comparison of the improved PSRFM algorithms such as PSRFM-22, PSRFM-32, PSRFM-41 and PSRFM-42 with the two published methods STARFM and ESTARFM reveals that all of these improved PSRFM methods yield better results, especially for the NIR band in this test case. The better result for the NIR band might be due to the properties of water bodies in the NIR spectrum. In general, a water surface reflects little near-infrared radiation, as in the other infrared bands. As a result, water bodies have very small values in those bands and appear black in false color composites using those bands. However, water bodies may show various colors due to different materials contained in the water, such as soils. This may make it difficult for the STARFM and ESTARFM algorithms to find similar pixels around a prediction pixel, or the values of some similar pixels identified from the calibration images may have changed by the prediction date. For instance, in this study some water areas (including flooded areas) show yellow instead of black in the false color composite in Figure 7. Since the improved PSRFM accounts for the surface changes caused by the flood by including the MODIS image on the prediction date in the clustering, it overcomes this issue better and therefore achieves a better result. As the flood water effects on the NIR band are more significant than those on the green and red bands, the fusion result of the NIR band with the improved PSRFM also looks better. Between STARFM and ESTARFM, STARFM looks slightly better according to their quality indices in this case (Figure 8).

4.2. Test Case with Forest Fire

In a similar setup, all the produced results were compared to the observed Sentinel-2 MSI image both visually and quantitatively. Figure 9 shows visual comparisons of the predicted images resulting from the different methods against the observed Sentinel-2 image on 15 August 2018, when the forest fire was burning.
From Figure 9, we can see that the images outside of the burning area are all similar, without significant differences among the methods. To highlight the visual comparison of the burning area, we enlarged the marked area in Figure 10 and compared all the results visually in Figure 11. The evaluation scores of the quality indices against the observed Sentinel-2 MSI reflectance are compared in Figure 12. Since the variation of QI across all the outputs is very similar to that of CC, as shown in Figure 8, we chose SSIM instead of QI as one of the quality indices for this test case to show their differences.
The observed Sentinel-2 MSI reference image on 15 August 2018 shows not only land surface changes relative to the start date 31 July and the end date 22 September but also some thick smoke plumes. The MODIS BRDF-adjusted image on the prediction date 15 August 2018 also contains some unavailable pixels, with an indicator value of 32767, within and around the burning area. With these MODIS pixels excluded, all of the PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2) methods worked well. However, their effectiveness in predicting the surface changes differs. From the visual comparisons in Figure 9 and Figure 11, we can see that the methods PSRFM-11, PSRFM-21 and PSRFM-31, without the RA for the land surface changes, are not as good as the other methods, although their corresponding evaluation scores of the quality indices in Figure 12 are not very poor. This may be because the surface change area is relatively small in comparison to the entire test area, and part of it is also excluded because of the unavailable pixel values of the MODIS image on the prediction date.
However, a careful comparison of the results from the different methods shows the following:
(1)
The STARFM results (Figure 9c and Figure 11c) do not look considerably affected by the areas with no MODIS observations, because its prediction is based on similar pixels around the prediction pixel. This can be an advantage in some special cases. For example, when some MODIS pixels of a small area are unavailable due to cloud contamination but their surrounding pixels have valid values and a similar land-cover type that does not change between the dates of the calibration and the prediction images, these similar pixels can be used for the prediction. In this test case, although pixel values can be predicted for the pixels with no MODIS observations, the predicted pixels are biased from the actual values because the land surface of some similar pixels identified from the calibration image was changed by the fire on the prediction date. For this reason, its quality indices are the worst among all the methods.
(2)
The current implementation of ESTARFM excludes predictions for all pixels with no MODIS observations or with an invalid value, even if only a single band has no observation or an invalid value. Therefore, the predicted result shows a large black area, as displayed in Figure 9d and Figure 11d. In comparison to STARFM, ESTARFM has a similar issue in selecting the similar pixels from the calibration image; however, it excludes all pixels where the MODIS image does not have valid data. Its corresponding quality indices look better than those of STARFM in this test case.
(3)
Comparisons between the observed Sentinel-2 MSI image and the images predicted by the different algorithms PSRFM-i1 and PSRFM-i2 (i = 1, 2, 3, 4), without and with the RA, show a result similar to that of the first dataset, i.e., the RA plays a more important role when the clusters are based on the fine-resolution images only. With the improvement of the clusters by introducing the MODIS observations on the prediction date in different ways, the results are improved. For example, PSRFM-21 and PSRFM-31 are better than PSRFM-11 (Figure 11g,i vs. Figure 11e), and PSRFM-41 is better than PSRFM-21 and PSRFM-31 (Figure 11k vs. Figure 11g,i). With the RA, PSRFM-22 is much better than PSRFM-21, while PSRFM-42 shows much less or even no improvement over PSRFM-41, because the clusters determined from the MODIS reflectance change ratios with PSRFM-41 are closer to reality.
(4)
Comparing the different approaches PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2), we draw the same conclusion as in the first test case, i.e., PSRFM-22 and PSRFM-41 or PSRFM-42 are the better options because of their relatively better quality indices (Figure 12) and higher computational efficiency. The result of PSRFM-32 looks comparable, but its computation is less efficient because it involves all the fine- and coarse-resolution image bands in the clustering process, which slows down the computation.
(5)
Comparison of the improved PSRFM algorithms such as PSRFM-22, PSRFM-32, PSRFM-41 and PSRFM-42 with the two published methods STARFM and ESTARFM reveals once again that all of these improved PSRFM methods yield better results. This is evident from the evaluation scores of the quality indices against the observed Sentinel-2 MSI reflectance in Figure 12. Different from the first test case, the improvements are equally visible for all three bands NIR, SWIR1 and SWIR2 (Figure 8 vs. Figure 12).

5. Discussion

From the experimental results of the two test cases above, we can draw a conclusion: the closer the clusters used for the estimation of the land surface changes are to reality, the better the output (e.g., PSRFM-21 and PSRFM-31 vs. PSRFM-11, and PSRFM-41 vs. PSRFM-11, PSRFM-21 and PSRFM-31), and the residual adjustment for abrupt surface changes becomes less critical as the clusters improve (e.g., PSRFM-41 vs. PSRFM-42). Overall, PSRFM-22, PSRFM-41 and PSRFM-42 are recommendable for practical implementation because of their relatively better quality index scores and higher computational efficiency. However, based on our experiments with additional test cases, including the one used for validation of the STARFM and ESTARFM methods (http://ledaps.nascom.nasa.gov/ledaps/Tools/StarFM.htm) [1] and some test cases without abrupt surface changes, these methods are not always better than the others. Two points are worth attention in practice:
(1)
The method PSRFM-41 might introduce some unexpected side effects for land-cover types such as water, whose reflectance values are small or close to zero. Since their reflectance change ratios cannot be determined accurately due to their small values and minor changes, their predicted values may become negative or unreasonable. This is evident in the result of PSRFM-41 in Figure 9k, where some pixels in the river (at the bottom right corner) show colors different from the black produced by the other methods. Similarly, the residual adjustment (RA) may cause a similar problem for the same land-cover types when their reflectance values are very close to zero, because the RA may adjust these pixel values and produce negative values. To avoid such prediction errors, we kept these pixels unchanged from the original fine-resolution images, i.e., ignored the predicted values if they become negative or invalid (a brief sketch of this guard is given after this list).
(2)
If the dataset does not contain significant abrupt surface changes, the RA is not necessary, and PSRFM-11 and PSRFM-41 are relatively simple and can achieve reasonably good results. If the RA is applied, the results may even become worse because of possible RA interpolation errors. To avoid the negative impact of possible RA interpolation errors, the same criterion used for optimizing the clusters, i.e., the rule of Maximizing Correlation but Smaller Residuals (MCSR) [10], is used to decide whether the RA step is needed. If the result with RA shows an improved correlation between the differences ($\Delta R = \hat R_1 - R_0$) of the fine-resolution images and the differences ($\Delta M = M_1 - M_0$) of the coarse-resolution (MODIS) images without considerably increasing the sum of squared residuals ($\sum \delta_M^2$) of the predicted fine-resolution-like image, it is accepted; otherwise, the RA is not applied.
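A brief sketch of the guard mentioned in point (1), falling back to the original fine-resolution value wherever a prediction turns negative or non-finite, might look as follows (names are hypothetical):

```python
# Keep the original fine-resolution pixel where the prediction is negative or invalid.
import numpy as np

def guard_invalid(pred, fine_t0):
    invalid = ~np.isfinite(pred) | (pred < 0)
    return np.where(invalid, fine_t0, pred)
```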

6. Conclusions

Based on the previously proposed Prediction Smooth Reflectance Fusion Model (PSRFM), we have expanded our clustering method to better model abrupt land surface changes. In our earlier study, it was suggested that residual adjustment could be used to model abrupt surface changes. In this paper, we model abrupt surface changes by optimizing the clusters using both fine- and coarse-resolution images. We tested various combinations of residual adjustment and cluster optimization using realistic case studies. The test results reveal that both methods play a role in improving the solutions, but the effectiveness of the residual adjustment depends strongly on the image clusters. Therefore, cluster optimization is critical to achieving better solutions. Regarding the improvement of the clustering method for PSRFM, we found that the input data for the classification should include the coarse-resolution observation, such as the MODIS image, on the prediction date, because it is the only observation that contains information on the surface changes on the prediction date. A recommendable method is to classify the reflectance change ratios into an optimized number of clusters.
Using two satellite datasets with significant surface changes caused by a flood and a forest fire, we tested several clustering options in PSRFM with varied input data and residual adjustment combinations and compared the outputs among them and with those from the two published methods STARFM and ESTARFM. The results show that the new algorithms improve the PSRFM output in capturing dynamic land surface changes. For practical implementation, we recommend the PSRFM-22 (clustering with the fine-resolution image and the coarse-resolution image on the prediction date, with residual adjustment) and PSRFM-42 (clustering with the reflectance change ratios of the coarse-resolution images, with residual adjustment) methods for datasets with significant land surface changes. The significance of the land surface changes can be judged by comparing the results before and after the residual adjustment with the same criterion used for the cluster optimization. If no abrupt surface changes are detected, PSRFM-11 (clustering with the fine-resolution image, without residual adjustment) and PSRFM-41 (clustering with the reflectance change ratios, without residual adjustment) are also acceptable. These recommended methods produced higher-quality outputs with higher computational efficiency. For our validation datasets, the improved PSRFM algorithms outperform both STARFM (Spatial and Temporal Adaptive Reflectance Fusion Model) and ESTARFM (Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model), visually and quantitatively.

Author Contributions

D.Z. and F.Z. jointly conceived and designed the experiments; D.Z. developed the processing program and performed the experiments; D.Z. and F.Z. analyzed the data; D.Z. wrote the paper, F.Z. reviewed and edited the paper as supervisor.

Funding

This research was funded by Natural Resources Canada, Canadian Space Agency and Government Related Initiatives Program (GRIP).

Acknowledgments

This study is a part of the research and development project "Big Data Analytics of Earth Observation Data in Support of Evidence-Based Decision Making for Climate Change", fully funded by Natural Resources Canada and the Canadian Space Agency's GRIP program. Thanks to Richard Fernandes and the anonymous reviewers for their insightful review of and valuable comments on the manuscript, which helped to improve the quality of the paper. Thanks to Gao F. and Zhu X. for free access to their well-developed processing programs STARFM and ESTARFM. Thanks to the Commonwealth Scientific and Industrial Research Organisation (CSIRO) for the free data license of the first test dataset.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218.
  2. Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623.
  3. Hilker, T.; Wulder, M.A.; Coops, N.C.; Linke, J.; McDermid, G.; Masek, J.G.; Gao, F.; White, J.C. A new data fusion model for high spatial- and temporal-resolution mapping of forest disturbance based on Landsat and MODIS. Remote Sens. Environ. 2009, 113, 1613–1627.
  4. Zhukov, B.; Oertel, D.; Lanzl, F.; Reinhackel, G. Unmixing-based multisensor multiresolution image fusion. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1212–1226.
  5. Huang, B.; Zhang, H. Spatio-temporal reflectance fusion via unmixing: Accounting for both phenological and land-cover changes. Int. J. Remote Sens. 2014, 35, 6213–6233.
  6. Zhu, X.; Helmer, E.H.; Gao, F.; Liu, D.; Chen, J.; Lefsky, M.A. A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sens. Environ. 2016, 172, 165–177.
  7. Hazaymeh, K.; Hassan, Q.K. Spatiotemporal image-fusion model for enhancing temporal resolution of Landsat-8 surface reflectance images using MODIS images. J. Appl. Remote Sens. 2015, 9, 096095.
  8. Kwan, C.; Budavari, B.; Gao, F.; Zhu, X. A Hybrid Color Mapping Approach to Fusing MODIS and Landsat Images for Forward Prediction. Remote Sens. 2018, 10, 520.
  9. Wang, J.; Huang, B. A Rigorously-Weighted Spatiotemporal Fusion Model with Uncertainty Analysis. Remote Sens. 2017, 9, 990.
  10. Zhong, D.; Zhou, F. A Prediction Smooth Method for Blending Landsat and Moderate Resolution Imaging Spectroradiometer Images. Remote Sens. 2018, 10, 1371.
  11. Sekrecka, A.; Kedzierski, M. Integration of Satellite Data with High Resolution Ratio: Improvement of Spectral Quality with Preserving Spatial Details. Sensors 2018, 18, 4418.
  12. Wang, Q.; Atkinson, P.M. Spatio-temporal fusion for daily Sentinel-2 images. Remote Sens. Environ. 2018, 204, 31–42.
  13. Gevaert, C.M.; García-Haro, F.J. A comparison of STARFM and an unmixing based algorithm for Landsat and MODIS data fusion. Remote Sens. Environ. 2015, 156, 34–44.
  14. Emelyanova, I.V.; McVicar, T.R.; Van Niel, T.G.; Li, L.T.; van Dijk, A.I.J.M. Assessing the accuracy of blending Landsat–MODIS surface reflectances in two landscapes with contrasting spatial and temporal dynamics: A framework for algorithm selection. Remote Sens. Environ. 2013, 133, 193–209.
  15. Hirvonen, R.A. Adjustments by Least Squares in Geodesy and Photogrammetry; Ungar: New York, NY, USA, 1971; 261p; ISBN 0804443971; 978-0804443975.
  16. Wolf, P.R. Survey Measurement Adjustments by Least Squares. In The Surveying Handbook; Springer: Berlin/Heidelberg, Germany, 1995; pp. 383–413.
  17. Wu, M.; Niu, Z.; Wang, C.; Wu, C.; Wang, L. Use of MODIS and Landsat time series data to generate high-resolution temporal synthetic Landsat data using a spatial and temporal reflectance fusion model. J. Appl. Remote Sens. 2012, 6.
  18. Wu, M.; Huang, W.; Niu, Z.; Wang, C. Generating daily synthetic Landsat imagery by combining Landsat and MODIS data. Sensors 2015, 15, 24002–24025.
  19. El Rahman, S.A. Hyperspectral Image Classification Using Unsupervised Algorithms. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 198–205.
  20. Emelyanova, I.; McVicar, T.; Van Niel, T.; Li, L.; Van Dijk, A. Landsat and MODIS Data for the Lower Gwydir Catchment Study Site, v3; CSIRO: Canberra, Australia, 2013.
  21. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  22. Jagalingam, P.; Hegde, A.V. A Review of Quality Metrics for Fused Image. Aquat. Procedia 2015, 4, 133–142.
  23. Zhang, Y. Methods for Image Fusion Quality Assessment—A Review, Comparison and Analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, XXXVII, 1101–1109.
Figure 1. Processing flowchart of the improved Prediction Smooth Reflectance Fusion Model (PSRFM). The numbers 1 to 4 around the classification box indicate the different input data for the methods CLUSTER-i (i = 1, 2, 3, 4).
Figure 2. Landsat-5 TM surface reflectance image acquired on 12 December 2004 for the Lower Gwydir Catchment study site (left) located in northern New South Wales, Australia and the test area for this study (right).
Figure 3. A test area cropped from the Lower Gwydir Catchment study site: Red-Green-NIR (R-G-B) composites of MODIS surface reflectance (upper row) and Landsat-5 TM surface reflectance (lower row) images on 26 November 2004, 12 December 2004, and 28 December 2004.
Figure 4. A map of the Torkelsen Lake Fire that occurred in British Columbia, Canada, in 2018 (Map source: BC Wildfire Service, Ministry of Forests, Lands, Natural Resource Operations and Rural Development).
Figure 5. The Torkelsen Lake Fire study site: SWIR1-NIR-SWIR2 (R-G-B) false color composites of MODIS BRDF-adjusted surface reflectance (upper row) and Sentinel-2 MSI L2A surface reflectance (lower row) images acquired on 31 July, 15 August and 22 September 2018. The dark purple in the upper center of the two images on 15 August and 22 September 2018 shows the burned area. The image on 15 August 2018 also shows some smoke plumes.
Figure 6. (a) The observed Sentinel-2 MSI image on the prediction date 15 August 2018 and (b) the same image marked with the unavailable pixel values of the MODIS image on the same day for visual comparison. The two black rectangles indicate that the pixel values of the MODIS BRDF-adjusted image are not available for any of the bands. The areas in shades of cyan represent pixel values of the MODIS image that are unavailable only for the SWIR1 band.
Figure 7. Visual comparisons of the observed Landsat-5 TM image on 12 December 2004 and the predicted images by PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2), STARFM and ESTARFM using two pairs of Landsat-5 TM and MODIS images acquired on 26 November 2004 and 28 December 2004, respectively. All images are in Red-Green-NIR (R-G-B) false color composites.
Figure 8. The quality indices of predicted reflectance by PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2), STARFM and ESTARFM against the observed reflectance on December 12 using two pairs of Landsat-5 TM and MODIS images acquired on 26 November 2004 and 28 December 2004.
Figure 9. Visual comparison of the observed Sentinel-2 MSI image (a) and (b) (same as (a) but marked with the unavailable pixel values of the MODIS image on the prediction date for the purpose of comparison) on 15 August 2018 and the predicted images (e–l) by PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2), (c) STARFM and (d) ESTARFM using two pairs of Sentinel-2 MSI and MODIS BRDF-adjusted images acquired on 31 July 2018 and 22 September 2018, respectively. All images are in SWIR1-NIR-SWIR2 (R-G-B) false color composites.
Figure 10. The cropped burning area for an enlarged visual comparison in Figure 11.
Figure 11. Area enlarged for visual comparison of the burning area observed by the Sentinel-2 MSI image (a) and (b) (same as (a) but marked with the unavailable pixel values of the MODIS image on the prediction date for the purpose of comparison) on 15 August 2018 and the predicted images (e–l) by PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2), (c) STARFM and (d) ESTARFM using two pairs of Sentinel-2 MSI and MODIS BRDF-adjusted images acquired on 31 July 2018 and 22 September 2018, respectively. All images are in SWIR1-NIR-SWIR2 (R-G-B) false color composites. The two small black rectangles indicate that no MODIS data are available for any of the bands on the prediction date 15 August 2018, while the areas in shades of cyan indicate that no MODIS data are available for the SWIR1 band.
Figure 12. The quality indices of predicted reflectance by PSRFM-ij (i = 1, 2, 3, 4; j = 1, 2), STARFM and ESTARFM against the observed Sentinel-2 MSI reflectance on 15 August using two pairs of Sentinel-2 MSI and MODIS BRDF-adjusted images acquired on 31 July 2018 and 22 September 2018.
