Article

Generating Red-Edge Images at 3 M Spatial Resolution by Fusing Sentinel-2 and Planet Satellite Products

1 National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
2 Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture, Nanjing Agricultural University, Nanjing 210095, China
3 Jiangsu Key Laboratory for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
4 Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing Agricultural University, Nanjing 210095, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(12), 1422; https://doi.org/10.3390/rs11121422
Submission received: 6 May 2019 / Revised: 6 June 2019 / Accepted: 11 June 2019 / Published: 14 June 2019
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)

Abstract

High-resolution satellite images can mitigate, to some extent, the mixed-pixel problem caused by low production intensity, farmland fragmentation, and uneven field crop growth in developing countries. Red-edge (RE) satellite images in particular can reduce the influence of soil background at early growth stages as well as saturation due to high crop leaf area index (LAI) at later stages. However, the global availability of high-resolution RE satellite products for research and application remains limited. This study combines the weight-and-unmixing algorithm with the SUPer-REsolution for multi-spectral Multi-resolution Estimation approach (together, Wu-SupReME) to merge the spectral advantage of Sentinel-2 with the spatial advantage of Planet and generate a high-resolution RE product. The resultant fused image is highly correlated (R2 > 0.98) with the Sentinel-2 input and was significantly more accurate than the original images when used to predict heterogeneous wheat LAI. This demonstrates that the fused product preserves both the Sentinel-2 spectral and Planet spatial advantages and, indirectly, that generating high-resolution RE products from Planet and Sentinel-2 images is feasible. The study provides a methodological reference for multi-source data fusion and an image product for accurate parameter inversion in quantitative remote sensing of vegetation.


1. Introduction

Low agricultural intensification in most countries, especially developing ones, results in dispersed fields and uneven crop growth [1]. Low- to medium-resolution satellite images therefore suffer from mixed-pixel problems, and high-resolution images are required for crop growth monitoring in such areas [2]. In addition, effective vegetation monitoring from satellite images requires abundant spectral bands, especially red-edge (RE) bands.
Leaf area index (LAI) is a key parameter for vegetation monitoring because it is correlated with wheat canopy structure and is related to both canopy chlorophyll content and photosynthetic rate [3,4]. Predicting LAI over large spatial scales has become possible with the development of satellite technology and can be implemented via models that relate this variable to satellite reflectance or vegetation indices (VIs). Most previous studies have shown that VIs generated using RE bands (RE-VIs) tend to predict LAI more accurately than counterparts that do not include these bands (non-RE-VIs) [5,6]. Indeed, RE-VIs can reduce the influence of soil on LAI predictions at early stages and alleviate saturation due to high LAI at later stages [5,6,7,8].
Several multi-spectral satellites capture images with RE bands, including RapidEye, WorldView-2, and Sentinel-2. RapidEye and WorldView-2 are commercial satellites with a single RE band each, while Sentinel-2 provides three RE bands that can be acquired freely [9,10,11]. Sentinel-2 actually comprises twin satellites, Sentinel-2A (launched in 2015) and Sentinel-2B (launched in 2017). Both carry a Multi-Spectral Instrument (MSI) with 13 spectral channels at different spatial resolutions, including three 20 m resolution RE bands at 705 nm, 740 nm, and 783 nm. In an earlier study, Herrmann et al. [8] simulated the three Sentinel-2 RE bands from hyperspectral data and constructed the RE-based Red-Edge Inflection Point (REIP) index, demonstrating that REIP predicts wheat and potato LAI more accurately than the Normalized Difference Vegetation Index (NDVI). However, the coarse resolution of the RE bands limits the predictive accuracy of RE-VIs in Sentinel-2 applications. Clevers et al. [12] constructed RE-VIs and non-RE-VIs from original Sentinel-2 images and found that the red-edge chlorophyll index (CIred-edge) at 20 m resolution was less accurate than the 10 m resolution weighted difference vegetation index (WDVI) for predicting potato LAI. It is therefore necessary to improve the spatial resolution of Sentinel-2 images through image fusion in both the spectral and spatial dimensions.
A range of methods is available for improving spatial resolution through spectral and spatial image fusion, most notably pan-sharpening and multi-spectral sharpening. Pan-sharpening requires a panchromatic band within an image and applies to data from satellites such as IKONOS, QuickBird, and WorldView [13,14,15]. Multi-spectral sharpening is designed for multi-resolution multi-spectral images, such as those from Sentinel-2 [16] and the Moderate Resolution Imaging Spectroradiometer (MODIS) [17,18]. Two variants of multi-spectral sharpening are available, component substitution (CS) and multi-resolution analysis (MRA) [14]. Both transform multiple fine-resolution bands into a single band and then implement "pan sharpening", which risks loss of spectral information. To avoid this issue, several studies have explored image super-resolution technology [19]. SupReME [20] and SuperRes [21] are two typical super-resolution methods initially designed for Sentinel-2; they implement image fusion based on multiple fine bands and integrate information across multiple dimensions. SupReME can make full use of the textural information of the high-resolution bands and theoretically performs better than SuperRes. Its adjustable parameters also make it more adaptable to other sensors than SuperRes, although these parameters cannot yet be determined automatically [20].
The Planet constellation is a novel source of 3 m resolution data with four bands in the visible and near-infrared (VNIR) region. Although it offers limited spectral information and no RE bands, it acquires daily global images of small swath width, which have the potential to be combined with other satellite data for monitoring terrestrial regions [22,23,24]. Indeed, 3 m resolution Planet images make it possible to fuse Sentinel-2 images in both the spectral and spatial dimensions.
The aim of this study was to generate a high-resolution RE image by combining the advantages of Sentinel-2 spectral information and Planet spatial resolution. A fused image was initially acquired from Sentinel-2 and Planet sources using Wu-SupReME and was assessed using observational and correlational analyses. A series of comparative wheat LAI predictions were then performed using different images to further assess the fused image. The final image generated in this study has clear potential to comprehensively improve satellite-based vegetation monitoring.

2. Materials and Methods

2.1. Study Area and Field Measurements

The study areas encompass three towns (Diaoyu, Daiyao, and Zhangguo) within Xinghua city, Jiangsu Province, China. These areas are characterized by an average annual temperature of 15.0 °C and average annual rainfall of 1032.3 mm; the main soil type is loam. We selected 35 winter wheat sample fields (Figure 1), all sown with low mechanization in 2018, and selected 1 m × 1 m sample plots at the centers of these fields. GPS coordinates and wheat LAI values were measured using a Trimble GeoXH6000 (Trimble, CA, USA) and an LAI-3000 (Li-cor, Lincoln, NE, USA), respectively.

2.2. Satellite Data Processing

2.2.1. Sentinel-2 Images

Sentinel-2 data can be acquired directly from the European Space Agency website (https://scihub.copernicus.eu/dhus/#/home). A Sentinel-2 image contains 13 bands with a 290 km orbital swath width (Table 1). The three 60 m resolution bands designed for monitoring atmospheric conditions (B1, B9, and B10) were not considered in this study; the other ten bands were used for image fusion and the agricultural application.
Atmospheric correction of Sentinel-2 images was first carried out using Sen2Cor (http://step.esa.int/main/third-party-plugins-2/sen2cor/). The first step of image fusion was then implemented using Sen2Res (http://step.esa.int/main/third-party-plugins-2/sen2res/), which encompasses the main aspects of SuperRes [21] and was designed to downscale the coarse-resolution Sentinel-2 bands via the fine-resolution VNIR bands.

2.2.2. Planet Images

In 2017, the Planet system became the first constellation to acquire daily satellite images of the entire globe. Planet images are acquired by PlanetScope satellites and can be downloaded from the official website (https://www.planet.com/). A series of surface reflectance products was downloaded and used directly for image fusion and agricultural monitoring. The spectral information of these bands is summarized in Table 2.
It is noteworthy that about 18 Planet images were needed to fully cover the three towns because of the small frame size: 20 km × 12 km in International Space Station orbit and 24.6 km × 16.4 km in sun-synchronous orbit. Since the Planet system comprises more than 175 Dove satellites, daily images of the study area may come from different satellites at different times. To reduce the impact of varying lighting conditions, we selected Planet images acquired as close as possible in time to the corresponding Sentinel-2 data (Table 3).

2.3. Image Fusion Methods

After atmospheric correction with Sen2Cor and image fusion with Sen2Res, the 10 m Sentinel-2 images were resampled to 12 m, four times the spatial resolution of the Planet data. A combination of weight-and-unmixing and SupReME was then implemented to fuse the two image types (Figure 2).
The weight-and-unmixing approach was first applied to generate a Sentinel-2 VNIR image at 3 m resolution (S-VNIR3m) from the 12 m resolution Sentinel-2 VNIR image (S-VNIR12m) and the 3 m resolution Planet VNIR image (P-VNIR3m) (Figure 3).
As the S-VNIR12m image contains Nx/4 columns and Nx/4 rows, the resultant S-VNIR3m matrix contains Nx columns and Nx rows. The weights of the 16 subpixels within the S-VNIR12m pixel in the ith column and jth row (Si,j) were calculated from the corresponding P-VNIR3m image:
$$W_{i,j}(x,y) = \frac{P_{i,j}(x,y)}{\sum_{x=1}^{4}\sum_{y=1}^{4} P_{i,j}(x,y)}, \quad x, y \in \{1, 2, 3, 4\} \tag{1}$$

$$\sum_{x=1}^{4}\sum_{y=1}^{4} W_{i,j}(x,y) = 1 \tag{2}$$
where $x$ and $y$ denote the column and row indices of a subpixel within $S_{i,j}$, respectively; $P$ represents the reflectance of one band from P-VNIR3m; and $W$ represents the weight of each subpixel of $S_{i,j}$. The S-VNIR3m reflectance $s_{i,j}$ was then calculated as follows:
$$s_{i,j}(x,y) = 16 \cdot S_{i,j} \cdot W_{i,j}(x,y), \quad x, y \in \{1, 2, 3, 4\} \tag{3}$$
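To make the weight-and-unmixing step concrete, the following Python sketch implements Equations (1)-(3) for a single band, assuming each 12 m Sentinel-2 pixel aligns exactly with a 4 × 4 block of Planet pixels; the array names and random test values are illustrative, not part of the original processing chain.

```python
import numpy as np

def weight_and_unmix(s2: np.ndarray, planet: np.ndarray) -> np.ndarray:
    """Unmix one 12 m Sentinel-2 band to 3 m using Planet-derived weights."""
    n, m = s2.shape                       # coarse grid (Nx/4 x Nx/4)
    assert planet.shape == (4 * n, 4 * m)
    # View the 3 m image as (n, m) blocks of 4 x 4 subpixels.
    blocks = planet.reshape(n, 4, m, 4)
    # Equation (1): each subpixel's share of its block total; Equation (2)
    # (weights sum to 1 per block) then holds by construction.
    weights = blocks / blocks.sum(axis=(1, 3), keepdims=True)
    # Equation (3): redistribute the coarse reflectance over 16 subpixels.
    fine = 16.0 * s2[:, None, :, None] * weights
    return fine.reshape(4 * n, 4 * m)

# Hypothetical usage with random reflectance values:
rng = np.random.default_rng(0)
s2_band = rng.uniform(0.05, 0.4, size=(64, 64))        # 12 m band
planet_band = rng.uniform(0.05, 0.4, size=(256, 256))  # 3 m band
s_vnir_3m = weight_and_unmix(s2_band, planet_band)
# Block means of the output reproduce the original 12 m values.
assert np.allclose(s_vnir_3m.reshape(64, 4, 64, 4).mean(axis=(1, 3)), s2_band)
```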
The remaining Sentinel-2 12 m bands were then downscaled using the SupReME algorithm based on the S-VNIR3m image. This algorithm integrates all the bands from a single Sentinel-2 sensor via the minimization of a convex objective function (Equation (4)) with an adaptive (edge-preserving) regularizer (Equation (5)):
$$\min_z \; \frac{1}{2} \left\| MB(U \otimes I)z - y_b \right\|_2^2 + \lambda\, \phi_{w,q}(D_h z, D_v z) \tag{4}$$
where $y_b$ denotes the input image bands, $U$ is a semi-unitary matrix, and $I$ is an identity matrix of suitable dimensions. $z$ is the vectorization of the representation coefficients with respect to $U$, so that $x_b = (U \otimes I)z$. $M$ is a block matrix representing the sampling of $x_b$ to obtain $y_b$. $B$ is a block-circulant-circulant-block (BCCB) matrix, each block of which represents a two-dimensional cyclic convolution with the point spread function (PSF) of the relevant band at the highest spatial resolution. $\lambda$ denotes the regularization strength and takes a value of 0.01. $D_h$ and $D_v$ are two block-diagonal linear operators that approximate the horizontal and vertical derivatives of the $z$ images. The regularizer is expressed as follows:
$$\phi_{w,q}(z) = \sum_{i=1}^{p} \sum_{j=1}^{n} \left\{ q_i w_j \left( H_h z_i \right)_j^2 + q_i w_j \left( H_v z_i \right)_j^2 \right\} \tag{5}$$
where $p$ denotes the number of largest components retained from the correlation-based eigen-decomposition and takes a value of 6 for Sentinel-2 bands; $n$ is the number of high-resolution pixels within a fixed image area; and $w$ denotes the per-pixel weights used to reduce smoothing across discontinuities. $H_h$ and $H_v$ are the individual blocks within the finite difference operators $D_h$ and $D_v$, respectively. $q$ reweights the regularizer in a heuristic fashion; the most appropriate setting determined for Sentinel-2 is
q = [1 1.2 4 8 15 15 20].
The regularizer in this approach can capture discontinuities from fine resolution bands and then apply them to other coarser ones.
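To make the structure of Equation (5) concrete, the following sketch evaluates the regularizer alone, not the full SupReME solver; the cyclic finite differences that stand in for $H_h$ and $H_v$ (matching the BCCB convolution model) and the example values of $z$, $w$, and $q$ are assumptions for demonstration.

```python
import numpy as np

def phi_wq(z_imgs: np.ndarray, w: np.ndarray, q: np.ndarray) -> float:
    """z_imgs: (p, H, W) subspace coefficient images; w: (H, W); q: (p,)."""
    dh = np.roll(z_imgs, -1, axis=2) - z_imgs  # horizontal derivative (H_h z_i)
    dv = np.roll(z_imgs, -1, axis=1) - z_imgs  # vertical derivative (H_v z_i)
    # Sum over components i and pixels j of q_i * w_j * ((H_h z_i)_j^2 + (H_v z_i)_j^2).
    return float((q[:, None, None] * w[None] * (dh ** 2 + dv ** 2)).sum())

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 32, 32))                 # p = 6 components, as in the text
w = np.ones((32, 32))                            # uniform weights: plain smoothness
q = np.array([1.0, 1.2, 4.0, 8.0, 15.0, 15.0])   # first p entries of the q above
print(phi_wq(z, w, q))
```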

2.4. Fusion Assessment

2.4.1. Correlation Analysis

Because the fused and Planet images differ in band number and sensor characteristics, the original Planet data cannot serve directly as a reference image for correlation analysis; no 3 m reference image was therefore available for a straightforward comparison with the fused image. However, as discussed in previous studies [25], a fused image should, when resampled back to the original coarse resolution, remain as similar as possible to the original image. The fused image was therefore resampled to 12 m via a Gaussian PSF and compared with the original Sentinel-2 image, removing the influence of spatial resolution (Figure 4). The R2, slope, and residuals of this correlation analysis can be used to indirectly assess fusion performance.
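A minimal sketch of this consistency check follows, assuming the degradation model is a Gaussian PSF blur followed by 4 × 4 block averaging; the PSF width (`sigma`) is a placeholder, as the text does not state it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import linregress

def degrade_to_12m(fused_3m: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Resample a 3 m band to 12 m: Gaussian PSF, then 4 x 4 aggregation."""
    blurred = gaussian_filter(fused_3m, sigma=sigma)
    h, w = blurred.shape
    return blurred.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def assess_band(fused_3m: np.ndarray, s2_12m: np.ndarray):
    """Return R^2, slope, and residuals of resampled-fused vs. Sentinel-2."""
    resampled = degrade_to_12m(fused_3m).ravel()
    fit = linregress(s2_12m.ravel(), resampled)
    residuals = resampled - (fit.slope * s2_12m.ravel() + fit.intercept)
    return fit.rvalue ** 2, fit.slope, residuals
```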

2.4.2. Agricultural Application

The fused image assessment described above was carried out at low spatial resolution, as in most previous studies [13,18,26]. However, this assessment is insufficient to characterize the performance of the fused image at 3 m scale, so further exploration was necessary. We therefore undertook an agricultural application, the prediction of wheat LAI, to compare the original 10 m Sentinel-2, 3 m Planet, and 3 m fused images. A suite of VIs known to be sensitive to LAI was used to predict wheat LAI (Table 4). Non-RE-VIs constructed from VNIR bands as well as RE-VIs constructed from VNIR and RE bands were used to explore whether the fused image adequately preserved spectral information.
Ground sampling was conducted at the tillering, jointing, and booting stages (Table 3) to avoid wheat LAI saturation when comparing images. It is also noteworthy that because ineffective tillers grow out first and then fade away during these stages [27], wheat field heterogeneity first increases and then decreases in tandem with uneven sowing and nutrient competition. All three stages occurred before ineffective tiller extinction, so field heterogeneity was preserved throughout the image comparisons. Given this heterogeneity, together with the scattered distribution and small size of the wheat fields, lower-resolution images contain more mixed pixels and yield lower calibration R2 values ($R^2_{cal}$) when wheat LAI is predicted from VIs with a univariate linear regression model. Comparing $R^2_{cal}$ between the Sentinel-2 and fused images therefore indicates whether the fused images adequately preserve both the spectral and spatial advantages.
Because the vegetation indices used to compare the fused and original images for wheat LAI prediction did not use all bands, only Red, Green, NIR, and RE, a further comparison among the Planet, Sentinel-2, and fused images was carried out via multiple linear regression (MLR):
$$y_{LAI} = \sum_{i=1}^{num} a_i x_i + b \tag{6}$$
where $num$ is the number of bands in the image, $x_i$ denotes the value of band $i$, $y_{LAI}$ denotes the measured wheat LAI, and $a_i$ and $b$ are regression coefficients.
A k-fold (k = 10) cross-validation procedure was then used to assess all models, and a relative RMSE (RRMSE) was calculated in each case. Comparing RRMSE values between images makes it possible to determine whether the fused products preserve the spectral and spatial advantages of the Sentinel-2 and Planet outputs. RRMSE was calculated as follows:
$$\mathrm{RRMSE}_{val}(\%) = \frac{100}{\overline{LAI_{val}}} \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( LAI_{cal,i} - LAI_{val,i} \right)^2} \tag{7}$$
where $LAI_{cal}$, $LAI_{val}$, and $\overline{LAI_{val}}$ denote the predicted, measured, and average measured LAI values, respectively.
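The following sketch combines Equations (6) and (7) with 10-fold cross-validation; scikit-learn stands in for the unspecified implementation, and the input arrays are random placeholders rather than the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
bands = rng.uniform(0.02, 0.5, size=(105, 10))  # samples x bands (placeholder)
lai = rng.uniform(1.0, 6.0, size=105)           # measured wheat LAI (placeholder)

# Equation (6): y_LAI = sum_i(a_i * x_i) + b, fit by ordinary least squares.
model = LinearRegression()
cv = KFold(n_splits=10, shuffle=True, random_state=0)
lai_pred = cross_val_predict(model, bands, lai, cv=cv)

# Equation (7): RMSE of the cross-validated predictions, relative to mean LAI.
rrmse_val = 100.0 / lai.mean() * np.sqrt(np.mean((lai_pred - lai) ** 2))
print(f"RRMSE_val = {rrmse_val:.2f}%")
```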

3. Results

3.1. Generation of Fine Red-Edge Images

Representative samples of the 10 m Sentinel-2, 12 m Sentinel-2, 3 m Planet, and 3 m fused images of Diaoyu town collected on 9 April 2018 were selected for initial analysis. True and false color composites of these samples are shown in Figure 5. The 10 m Sentinel-2 image was first resampled to 12 m, which increased the block effects, before the 12 m image was fused with the 3 m Planet image using the weight-and-unmixing algorithm. This fused image exhibits fewer block effects than either the 10 m or 12 m Sentinel-2 image but more than the original Planet product. Generating this intermediate fused image, rather than using the 3 m Planet image directly for further processing of the other Sentinel-2 bands, was necessary because of the sensor differences between the two satellites. The SupReME algorithm was then applied to implement further fusion between the 3 m VNIR (from the fused image) and its 12 m Sentinel-2 counterpart. Samples of the RE and SWIRs-NNIR images from the 10 m Sentinel-2, 12 m Sentinel-2, and 3 m fused images are shown in Figure 6; the fused image contains fewer block effects than either Sentinel-2 image.

3.2. Fusion Assessment by Correlation Analysis

Results show that a high degree of correlation (R2 = 0.98) is achieved via this approach, with a linear relationship close to y = x (Figure 7). This demonstrates that the fused VNIR image is similar to the Sentinel-2 VNIR and can be treated as coming from the same sensor, so these images can be used for further fusion via SupReME to improve the other Sentinel-2 bands.
A comparison of the REs, SWIRs, and NNIR between the 12 m resampled fused image and the Sentinel-2 image is shown in Figure 8. These bands also show a high degree of correlation (R2 = 0.99) and a linear relationship close to y = x, suggesting that the fused REs, SWIRs, and NNIR all adequately preserve the spectral properties of the original Sentinel-2 image.

3.3. Fusion Assessment by Wheat LAI Prediction

The fused image assessments outlined above were carried out at low spatial resolution. These assessments alone are insufficient to illustrate the performance of the image at 3 m scale, so an agricultural application, wheat LAI prediction, was carried out to compare this product with the 10 m Sentinel-2 and 3 m Planet images.
A range of VIs was used to predict wheat LAI and compare the performance of the different images. As non-RE-VIs are based on VNIR only, these comparisons involved the Planet, Sentinel-2, and fused images (Table 5). The results reveal that the fused image performed better than either Planet or Sentinel-2. Specifically, DVI from the fused image performed best ($R^2_{cal}$ = 0.70 and $\mathrm{RRMSE}_{val}$ = 22.84%), while Sentinel-2 had limited accuracy for wheat LAI prediction because its 10 m image contains more mixed pixels than the 3 m fused image. The 3 m Planet image exhibited lower $R^2_{cal}$ and higher $\mathrm{RRMSE}_{val}$ values because its bands are broad, so soil exerts a greater effect and the bands carry less sensitive information.
The absence of RE bands in Planet images limited the RE-VI comparisons to the Sentinel-2 and fused images (Table 6). All RE-VIs except REIP require a single RE band (Table 4), whereas both the Sentinel-2 and fused images contain three; each RE-VI except REIP was therefore constructed separately with RE-1, RE-2, and RE-3. For VIs built from the same bands, the fused image performed better than Sentinel-2, and MEVI 2 derived from the fused image performed best ($R^2_{cal}$ = 0.78 and $\mathrm{RRMSE}_{val}$ = 19.97%). Because RE-2 falls in the middle of the red-edge region of the vegetation spectrum and is therefore more sensitive to LAI, RE-VIs constructed with it tended to predict wheat LAI more accurately than those using RE-1 or RE-3.
It is noteworthy that the VIs discussed above do not include all bands, so these comparisons remained insufficient to illustrate fused image performance across all bands. All bands from the Planet, Sentinel-2, and fused images were therefore used separately to predict wheat LAI via MLR. As shown in Table 7, the Planet image yielded $R^2_{cal}$ = 0.63 and $\mathrm{RRMSE}_{val}$ = 42.36% because of its limited number of broad bands, while the Sentinel-2 image yielded $R^2_{cal}$ = 0.76 and $\mathrm{RRMSE}_{val}$ = 33.58% because of its low spatial resolution. The fused image yielded $R^2_{cal}$ = 0.81 and $\mathrm{RRMSE}_{val}$ = 33.40%, indicating that it retains the spectral and spatial advantages of the Sentinel-2 and Planet products, respectively.

3.4. Mapping Wheat LAI Using the Fused Image

Finally, the fused image was used to map the distribution of wheat LAI across the three towns in Jiangsu Province, China, at the three growth stages. In our comparisons, the all-band MLR model on the fused image had the highest $R^2_{cal}$ but a higher $\mathrm{RRMSE}_{val}$ than most VIs, and predicting wheat LAI from all bands via MLR is inconvenient because of the large number of input parameters. MEVI 2 from the fused image had the highest $R^2_{cal}$ after the MLR model and the lowest $\mathrm{RRMSE}_{val}$, so we used MEVI 2 to map wheat LAI across the three study areas. As shown in Figure 9, wheat LAI remained below 3 at the tillering stage, ranged between 3 and 4 at the jointing stage, and reached 4 to 6 at the booting stage, which are reasonable trends across the growth stages.
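As an illustration of this mapping step, the sketch below computes MEVI 2 from the fused bands (Table 4 formulation, with RE-2 as the red-edge band) and converts it to LAI with a univariate linear model; the coefficients `a` and `b` are hypothetical placeholders that would be fitted to the field measurements.

```python
import numpy as np

def mevi(nir: np.ndarray, re: np.ndarray, green: np.ndarray) -> np.ndarray:
    """MEVI = 2.5 * (NIR - RE) / (NIR + 6 * RE - 7.5 * Green + 1)."""
    return 2.5 * (nir - re) / (nir + 6.0 * re - 7.5 * green + 1.0)

def map_lai(nir, re2, green, a: float = 4.0, b: float = 0.5) -> np.ndarray:
    """Per-pixel LAI from MEVI 2 via a fitted linear model (placeholder a, b)."""
    return a * mevi(nir, re2, green) + b
```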

4. Discussion

4.1. A Novel Satellite Source for Downscaling Sentinel-2 Images to 3 m Scale

The Planet satellite was used here as a novel source for fusion processing with Sentinel-2 VNIR data. Because Planet captures daily global images at high resolution, it can acquire data on the same day as Sentinel-2 and can therefore be used to implement image fusion across the spatial and spectral dimensions. The correlation of VNIR between the Planet and Sentinel-2 images is shown in Figure 10: the two sources show similar variation (R2 = 0.90) when observing the same objects, but the linear relationship between their VNIR bands is not close to y = x. This suggests that Planet VNIR cannot substitute directly for Sentinel-2 data when downscaling other bands via the SupReME algorithm, because the two sensors have different spectral responses. Instead, a 3 m fused VNIR image must be acquired for further fusion via SupReME, as this approach was designed for bands from a single sensor. These sensor differences also make Planet unsuitable as a reference image when assessing the fused VNIR by correlation analysis, since the fused bands share the spectral properties of Sentinel-2 rather than Planet.

4.2. Novel Combinations of the Weight-and-Unmixing and SupReME Algorithms for Fusing Images from Two Satellites

Several previous studies have applied the weight-and-unmixing algorithm to downscale coarse resolution bands using the same fine resolution bands from another sensor [37,38]. Here, the method was used during image fusion to derive VNIR weights and then unmix the Sentinel-2 pixels. The downscaled VNIR preserves the properties of the Sentinel-2 image at higher spatial resolution and serves as the basis for further downscaling the other Sentinel-2 bands.
A range of approaches has been proposed in recent years to downscale the coarse resolution Sentinel-2 bands based on high-resolution VNIR, including area-to-point regression kriging (ATPRK) [16,18], SuperRes [21], and SupReME [20]. These algorithms have generally been applied to bands from a single sensor or satellite. Specifically, ATPRK sharpens the coarse resolution bands using just a single band, selected from the VNIR or synthesized from several bands, while SuperRes and SupReME make full use of the original VNIR but marginally lose spectral information in the process.
The SupReME algorithm theoretically performs better than either SuperRes or ATPRK in downscaling the coarse Sentinel-2 bands based on VNIR [20]. However, the fusion methods applied here highlight that Sen2Res (i.e., SuperRes) is useful for Sentinel-2 preprocessing: it includes comprehensive improvements to its image quality assessment mode (http://nicolas.brodu.net/code/superres/log/) and is therefore better suited than SupReME to original large-scale images captured under varying atmospheric conditions. SupReME was instead used for the Sentinel-2 and Planet image fusion, as this step operated on small-scale Planet images collected under nearly identical atmospheric conditions. The existing quality assessment mode has limitations, however, and further exploration is needed to improve the SupReME algorithm by incorporating such a mode.
Wu-SupReME has the potential to implement spatial and spectral fusion between two satellites, but parameter determination is a challenge when the image to be downscaled is not from Sentinel-2. Three parameters in SupReME (p, λ, and q) take default values tuned for Sentinel-2 and do not adapt to other sensors automatically, so they must be determined first when downscaling other satellite images. Automatic parameter determination in SupReME is worth further exploration.

4.3. Assessment of the Fusion Image Via Wheat LAI Prediction

A range of cost functions and correlation analyses has been used in previous studies to assess spectrally and spatially fused images. However, due to the lack of high-resolution reference images, most fused image assessments have been carried out at low resolution against the original images [16,17,18,20,39]. A few studies have used ground objects to assess fused images directly, but most have focused on classifying different ground objects [13,40,41], which belongs to qualitative remote sensing (RS). Such approaches confirm that differences between objects persist in the spectral dimension but cannot adequately assess whether absolute reflectance values are well preserved.
One novel aspect of this study is that we assess the fused image via quantitative agricultural RS, namely wheat LAI prediction. Owing to uneven sowing, ineffective tillers, and nutrient competition, wheat field heterogeneity causes more mixed pixels in lower-resolution images and thus lower $R^2_{cal}$ and higher $\mathrm{RRMSE}_{val}$ for wheat LAI prediction. The fused image contained fewer mixed pixels, as evidenced by its higher $R^2_{cal}$ and lower $\mathrm{RRMSE}_{val}$ relative to Sentinel-2. The LAI prediction performance therefore indirectly indicates that the fusion algorithm is feasible and that the fused images adequately preserve the spectral and spatial advantages of Sentinel-2 and Planet, respectively, providing a way to assess the fused image in both the spectral and spatial dimensions.

4.4. The Effects of LAI Prediction Using VIs and All Bands

The VIs selected for this study were mostly derived from LAI predictions for different vegetation types using hyperspectral data [30,34], unmanned aerial vehicle images [42,43], or other satellite data containing RE information [7,35,44]. Different sensors or platforms may therefore favor different VIs for the most effective wheat LAI prediction, and establishing optimal VIs for wheat LAI prediction from fused images warrants further research.
Beyond VIs, this study predicted wheat LAI using all bands of the Planet, Sentinel-2, and fused products separately and attained high $R^2_{cal}$ values via MLR. Because all bands were considered, the $R^2_{cal}$ achieved in each case indirectly illustrates fusion performance. However, MLR is a simple method with a significant risk of overfitting. Several other approaches can also exploit all bands for wheat LAI prediction, including machine and deep learning as well as radiative transfer models. Further research is required to determine the most appropriate method for making full use of all fused image bands in predicting wheat LAI.

5. Conclusions

This study combined the weight-and-unmixing algorithm with the SupReME approach to fuse Sentinel-2 and Planet images and generate a novel high-resolution RE product at 3 m scale. Correlation analysis and wheat LAI prediction comparing the fused and original images show that the fused image is of high quality and retains the spectral and spatial advantages of the Sentinel-2 and Planet outputs, respectively. The study thus provides a novel fusion algorithm and processing chain for sensors from multi-source satellites. The proposed Wu-SupReME can fuse two satellites in the spatial and spectral dimensions, and the resulting fused image improves VI accuracy when predicting wheat LAI, highlighting the feasibility of this approach in agricultural applications for assessing other growth parameters.
In future research, Wu-SupReME could be used to fuse other satellite pairs, such as Sentinel-2 and Sentinel-3. It will also be necessary to develop a more effective quantitative method for directly assessing fused image quality, and to use independent data for validation when indirectly assessing fused images through crop growth parameter prediction. The volume of band information required for predicting crop growth parameters will also need further exploration when developing fused images for agricultural applications.

Author Contributions

Conceptualization, W.L. and X.Y.; Formal analysis, J.J.; Investigation, Y.T. and Y.W.; Writing—original draft, W.L., J.J., T.G., M.Z. and X.Y.; Writing—review and editing, Y.Z. (Yu Zhang), T.C., Y.Z. (Yan Zhu) and W.C.

Funding

This work was supported by grants from the National Key Research and Development Program of China (2016YFD0200700), the National Natural Science Foundation of China (31671582), the Jiangsu Qinglan Project, the Fundamental Research Funds for the Central Universities (SYSB201801), the Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX18_0659), the Jiangsu Collaborative Innovation Center for Modern Crop Production (JCICMCP), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), the Jiangsu Province Key Technologies R&D Program (BE2016375), the 111 Project (B16026), the Qinghai Project of Transformation of Scientific and Technological Achievements (2018-NK-126), and the Xinjiang Corps Great Science and Technology Projects (2018AA00403).

Acknowledgments

We would like to thank farmers who managed the study fields and supplied us with wheat samples. We would like to thank Planet’s education and research program for providing access to their imagery archive. Finally, we would like to thank the reviewers for recommendations which improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lindblom, J.; Lundström, C.; Ljung, M.; Jonsson, A. Promoting sustainable intensification in precision agriculture: Review of decision support systems development and strategies. Precis. Agric. 2017, 18, 1–23.
2. Khan, A.; Hansen, M.C.; Potapov, P.V.; Adusei, B.; Pickens, A.; Krylov, A.; Stehman, S.V. Evaluating Landsat and RapidEye data for winter wheat mapping and area estimation in Punjab, Pakistan. Remote Sens. 2018, 10, 489.
3. Darvishzadeh, R.; Skidmore, A.; Schlerf, M.; Atzberger, C.; Corsi, F.; Cho, M. LAI and chlorophyll estimation for a heterogeneous grassland using hyperspectral measurements. ISPRS J. Photogramm. Remote Sens. 2008, 63, 409–426.
4. Baret, F.; Jacquemoud, S.; Guyot, G.; Leprieur, C. Modeled analysis of the biophysical nature of spectral shifts and comparison with information content of broad bands. Remote Sens. Environ. 1992, 41, 133–142.
5. Mohd Shafri, H.Z.; Mohd Salleh, M.A.; Ghiyamat, A. Hyperspectral remote sensing of vegetation using red edge position techniques. Am. J. Appl. Sci. 2006, 3, 1864–1871.
6. Pu, R.; Gong, P.; Biging, G.S.; Larrieu, M.R. Extraction of red edge optical parameters from Hyperion data for estimation of forest leaf area index. IEEE Trans. Geosci. Remote Sens. 2008, 41, 916–921.
7. Ali, M.; Montzka, C.; Stadler, A.; Menz, G.; Thonfeld, F.; Vereecken, H. Estimation and validation of RapidEye-based time-series of leaf area index for winter wheat in the Rur catchment (Germany). Remote Sens. 2015, 7, 2808–2831.
8. Herrmann, I.; Pimstein, A.; Karnieli, A.; Cohen, Y.; Alchanatis, V.; Bonfil, D.J. LAI assessment of wheat and potato crops by VENμS and Sentinel-2 bands. Remote Sens. Environ. 2011, 115, 2141–2151.
9. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA's optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36.
10. Roteta, E.; Bastarrika, A.; Padilla, M.; Storm, T.; Chuvieco, E. Development of a Sentinel-2 burned area algorithm: Generation of a small fire database for sub-Saharan Africa. Remote Sens. Environ. 2019, 222, 1–17.
11. Lacroix, P.; Bièvre, G.; Pathier, E.; Kniess, U.; Jongmans, D. Use of Sentinel-2 images for the detection of precursory motions before landslide failures. Remote Sens. Environ. 2018, 215, 507–516.
12. Clevers, J.; Kooistra, L.; Marnix, V.D.B. Using Sentinel-2 data for retrieving LAI and leaf and canopy chlorophyll content of a potato crop. Remote Sens. 2017, 9, 405.
13. Xu, R.; Zhang, H.; Wang, T.; Lin, H. Using pan-sharpened high resolution satellite data to improve impervious surfaces estimation. Int. J. Appl. Earth Obs. Geoinf. 2017, 57, 177–189.
14. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586.
15. Zhang, Y.; Hong, G. An IHS and wavelet integrated approach to improve pan-sharpening visual quality of natural colour IKONOS and QuickBird images. Inf. Fusion 2005, 6, 225–234.
16. Wang, Q.; Shi, W.; Li, Z.; Atkinson, P.M. Fusion of Sentinel-2 images. Remote Sens. Environ. 2016, 187, 241–252.
17. Sales, M.H.R.; Souza, C.M.; Kyriakidis, P.C. Fusion of MODIS images using kriging with external drift. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2250–2259.
18. Wang, Q.; Shi, W.; Atkinson, P.M.; Zhao, Y. Downscaling MODIS images with area-to-point regression kriging. Remote Sens. Environ. 2015, 166, 191–204.
19. Park, S.C.; Min, K.P.; Kang, M.G. Super-resolution image reconstruction: A technical overview. IEEE Signal Process. Mag. 2003, 20, 21–36.
20. Lanaras, C.; Bioucas-Dias, J.; Baltsavias, E.; Schindler, K. Super-resolution of multispectral multiresolution images from a single sensor. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1505–1513.
21. Brodu, N. Super-resolving multiresolution images with band-independent geometry of multispectral pixels. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1–8.
22. Houborg, R.; McCabe, M.F. High-resolution NDVI from Planet's constellation of Earth observing nano-satellites: A new data source for precision agriculture. Remote Sens. 2016, 8, 768.
23. Houborg, R.; McCabe, M.F. Daily retrieval of NDVI and LAI at 3 m resolution via the fusion of CubeSat, Landsat, and MODIS data. Remote Sens. 2018, 10, 890.
24. Houborg, R.; McCabe, M.F. A CubeSat enabled spatio-temporal enhancement method (CESTEM) utilizing Planet, Landsat and MODIS data. Remote Sens. Environ. 2018, 209, 211–226.
25. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699.
26. Shahdoosti, H.R.; Ghassemian, H. Combining the Spectral PCA and Spatial PCA Fusion Methods by an Optimal Filter; Elsevier Science Publishers B.V.: Amsterdam, The Netherlands, 2016; pp. 150–160.
27. McMaster, G.S. Phenology, development, and growth of the wheat (Triticum aestivum L.) shoot apex: A review. Adv. Agron. 1997, 59, 63–118.
28. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150.
29. Jiang, Z.; Huete, A.R.; Didan, K.; Miura, T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens. Environ. 2008, 112, 3833–3845.
30. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309.
31. Birth, G.S.; McVey, G.R. Measuring the color of growing turf with a reflectance spectrophotometer. Agron. J. 1968, 60, 640–643.
32. Jordan, C.F. Derivation of leaf-area index from quality of light on the forest floor. Ecology 1969, 50, 663–666.
33. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–282.
34. Sims, D.A.; Gamon, J.A. Relationships between leaf pigment content and spectral reflectance across a wide range of species, leaf structures and developmental stages. Remote Sens. Environ. 2002, 81, 337–354.
35. Justice, C.O.; Vermote, E.; Townshend, J.R.G.; Defries, R.; Roy, D.P.; Hall, D.K.; Salomonson, V.V.; Privette, J.L.; Riggs, G.; Strahler, A. The Moderate Resolution Imaging Spectroradiometer (MODIS): Land remote sensing for global change research. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1228–1249.
36. Jasper, J.; Reusch, S.; Link, A. Active sensing of the N status of wheat using optimized wavelength combination: Impact of seed rate, variety and growth stage. In Precision Agriculture 09, Proceedings of the 7th European Conference on Precision Agriculture, Wageningen, The Netherlands, 6–8 July 2009; Wageningen Academic: Wageningen, The Netherlands, 2009; pp. 23–30.
37. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218.
38. Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623.
39. Xing, Y.; Wang, M.; Yang, S.; Jiao, L. Pan-sharpening via deep metric learning. ISPRS J. Photogramm. Remote Sens. 2018, 145, 165–183.
40. Chen, Y.; Li, C.; Ghamisi, P.; Jia, X.; Gu, Y. Deep fusion of remote sensing data for accurate classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1253–1257.
41. Li, F.; Jia, X.; Fraser, D. Superresolution reconstruction of multispectral data for improved image classification. IEEE Geosci. Remote Sens. Lett. 2009, 6, 689–693.
42. Gao, L.; Yang, G.; Yu, H.; Xu, B.; Zhao, X.; Dong, J.; Ma, Y. Retrieving winter wheat leaf area index based on unmanned aerial vehicle hyperspectral remote sensing. Trans. Chin. Soc. Agric. Eng. 2016, 32, 113–120.
43. Yao, X.; Wang, N.; Liu, Y.; Cheng, T.; Tian, Y.; Chen, Q.; Zhu, Y. Estimation of wheat LAI at middle to high levels using unmanned aerial vehicle narrowband multispectral imagery. Remote Sens. 2017, 9, 1304.
44. Asam, S.; Fabritius, H.; Klein, D.; Conrad, C.; Dech, S. Derivation of leaf area index for grassland within alpine upland using multi-temporal RapidEye data. Int. J. Remote Sens. 2013, 34, 8628–8652.
Figure 1. The distribution of sample fields in Xinghua city, Jiangsu Province, China.
Figure 2. Flowchart of fused image processing and wheat LAI prediction.
Figure 3. Weight-and-unmixing processing approach: (a) the reflectance of Planet VNIR; (b) the weights derived from (a); (c) a 12 m resolution Sentinel-2 pixel containing 16 pixels of 3 m resolution Sentinel-2.
Figure 4. Image assessment via correlation analysis.
Figure 5. Comparison of true (R, G, B) and false (NIR, R, G) color composites of 10 m Sentinel-2 (a), 12 m Sentinel-2 (b), 3 m Planet (c), and 3 m fused images (d).
Figure 6. Comparison of REs (RE-3, RE-2, RE-1) and SWIRs-NNIR (SWIR-2, SWIR-1, Narrow NIR) composites from 10 m Sentinel-2 (a), 12 m Sentinel-2 (b), and 3 m fused images (c).
Figure 7. Comparison of blue (a), green (b), red (c), and NIR (d) bands at 12 m resolution from fused and Sentinel-2 images.
Figure 8. Comparison of RE-1 (a), RE-2 (b), RE-3 (c), NNIR (d), SWIR-1 (e), and SWIR-2 (f) bands at 12 m resolution from fused and Sentinel-2 images.
Figure 9. The distribution of LAI at three stages within the three towns studied here, based on the fused image generated in this analysis.
Figure 10. Comparison of blue (a), green (b), red (c), and NIR (d) bands at 12 m resolution from Planet and Sentinel-2 sources.
Table 1. Multi-Spectral Instrument (MSI) data from Sentinel-2.

| Band | Central Wavelength (nm) | Bandwidth (nm) | Spatial Resolution (m) |
|---|---|---|---|
| B1-Coastal aerosol | 443 | 20 | 60 |
| B2-Blue | 490 | 65 | 10 |
| B3-Green | 560 | 35 | 10 |
| B4-Red | 665 | 30 | 10 |
| B5-Red-edge (RE-1) | 705 | 15 | 20 |
| B6-Red-edge (RE-2) | 740 | 15 | 20 |
| B7-Red-edge (RE-3) | 783 | 20 | 20 |
| B8-NIR | 842 | 115 | 10 |
| B8a-Narrow NIR (NNIR) | 865 | 20 | 20 |
| B9-Water vapor | 945 | 20 | 60 |
| B10-SWIR-Cirrus | 1380 | 30 | 60 |
| B11-SWIR-1 | 1610 | 90 | 20 |
| B12-SWIR-2 | 2190 | 180 | 20 |
Table 2. PlanetScope band information.

| Band | Central Wavelength (nm) | Bandwidth (nm) | Spatial Resolution (m) |
|---|---|---|---|
| B1-Blue | 480 | 60 | 3 |
| B2-Green | 540 | 90 | 3 |
| B3-Red | 610 | 80 | 3 |
| B4-NIR | 780 | 80 | 3 |
Table 3. Acquisition times for wheat samples as well as Planet and Sentinel-2 images at different crop growth stages.

| Stage | Ground Sampling Date | Study Area | Planet Image Acquisition Time (UTC) | Sentinel-2 Image Acquisition Date |
|---|---|---|---|---|
| Tillering | 8 March 2018–10 March 2018 | Diaoyu | 13 March 2018, 2:33:36 a.m. | 10 March 2018 |
| | | Daiyao | 9 March 2018, 2:33:17 and 2:06:55 a.m. | |
| | | Zhangguo | 9 March 2018, 2:06:55 and 2:05:38 a.m. | |
| Jointing | 22 March 2018–24 March 2018 | Diaoyu | 27 March 2018, 2:07:49 a.m. | 25 March 2018 |
| | | Daiyao | 27 March 2018, 2:30:35 a.m. | |
| | | Zhangguo | 27 March 2018, 2:07:06 a.m. | |
| Booting | 9 April 2018–11 April 2018 | Diaoyu | 9 April 2018, 2:08:50 a.m. | 9 April 2018 |
| | | Daiyao | 9 April 2018, 2:07:23 a.m. | |
| | | Zhangguo | 9 April 2018, 2:29:25 a.m. | |
Table 4. Algorithms and references for VIs used in the agricultural application of fused, Planet, and Sentinel-2 images.

| Index | Formulation | Reference |
|---|---|---|
| Normalized difference vegetation index (NDVI) | (NIR − Red)/(NIR + Red) | [28] |
| Enhanced vegetation index (EVI) | 2.5 × (NIR − Red)/(NIR + 6 × Red − 7.5 × Green + 1) | [29] |
| Soil-adjusted vegetation index (SAVI) | (NIR − Red)/(NIR + Red + 0.25) + 0.25 | [30] |
| Ratio vegetation index (RVI) | NIR/Red | [31] |
| Difference vegetation index (DVI) | NIR − Red | [32] |
| Green chlorophyll index (CIgreen) | NIR/Green − 1 | [33] |
| Normalized difference red-edge index (NDRE) | (NIR − RE)/(NIR + RE) | [34] |
| Modified enhanced vegetation index (MEVI) | 2.5 × (NIR − RE)/(NIR + 6 × RE − 7.5 × Green + 1) | [35] |
| Soil-adjusted red-edge index (SARE) | (NIR − RE)/(NIR + RE + 0.25) + 0.25 | [7] |
| Red-edge ratio vegetation index (RERVI) | NIR/RE | [36] |
| Red-edge difference vegetation index (REDVI) | NIR − RE | [28] |
| Chlorophyll index red-edge (CIred-edge) | NIR/RE − 1 | [12] |
| Red-edge inflection point (REIP) | 705 + 35 × ((Red + RE3)/2 − RE1)/(RE2 − RE1) | [8] |
Table 5. Comparison of $R^2_{cal}$ and $\mathrm{RRMSE}_{val}$ values for LAI estimates derived from non-RE-VIs of the Planet, Sentinel-2, and fused images.

| VI | 3 m Planet $R^2_{cal}$ | 3 m Planet $\mathrm{RRMSE}_{val}$ | 10 m Sentinel-2 $R^2_{cal}$ | 10 m Sentinel-2 $\mathrm{RRMSE}_{val}$ | 3 m Fusion $R^2_{cal}$ | 3 m Fusion $\mathrm{RRMSE}_{val}$ |
|---|---|---|---|---|---|---|
| NDVI | 0.28 | 39.46% | 0.54 | 28.35% | 0.56 | 27.71% |
| EVI | 0.27 | 38.68% | 0.63 | 25.45% | 0.66 | 24.56% |
| SAVI | 0.37 | 37.14% | 0.62 | 25.58% | 0.64 | 24.81% |
| RVI | 0.28 | 40.12% | 0.59 | 26.62% | 0.62 | 25.50% |
| DVI | 0.42 | 35.60% | 0.67 | 24.08% | 0.70 | 22.84% |
| CIgreen | 0.41 | 37.23% | 0.62 | 25.86% | 0.65 | 24.78% |
Table 6. Comparison of $R^2_{cal}$ and $\mathrm{RRMSE}_{val}$ values for LAI estimates derived from RE-VIs of the Sentinel-2 and fused images.

| VI | 10 m Sentinel-2 $R^2_{cal}$ | 10 m Sentinel-2 $\mathrm{RRMSE}_{val}$ | 3 m Fusion $R^2_{cal}$ | 3 m Fusion $\mathrm{RRMSE}_{val}$ |
|---|---|---|---|---|
| NDRE 1 | 0.62 | 25.37% | 0.64 | 24.60% |
| NDRE 2 | 0.69 | 23.02% | 0.76 | 20.70% |
| NDRE 3 | 0.27 | 35.49% | 0.39 | 32.89% |
| MEVI 1 | 0.68 | 23.67% | 0.70 | 22.67% |
| MEVI 2 | 0.72 | 22.17% | 0.78 | 19.97% |
| MEVI 3 | 0.27 | 35.75% | 0.38 | 32.75% |
| SARE 1 | 0.66 | 24.42% | 0.69 | 23.13% |
| SARE 2 | 0.71 | 22.38% | 0.77 | 20.14% |
| SARE 3 | 0.31 | 34.49% | 0.43 | 31.65% |
| RERVI 1 | 0.64 | 24.83% | 0.67 | 23.69% |
| RERVI 2 | 0.69 | 22.94% | 0.76 | 20.41% |
| RERVI 3 | 0.27 | 35.33% | 0.39 | 32.77% |
| REDVI 1 | 0.69 | 23.11% | 0.72 | 22.26% |
| REDVI 2 | 0.71 | 22.06% | 0.76 | 20.45% |
| REDVI 3 | 0.41 | 31.94% | 0.51 | 29.10% |
| CIred-edge 1 | 0.64 | 24.94% | 0.67 | 23.72% |
| CIred-edge 2 | 0.69 | 22.90% | 0.76 | 20.78% |
| CIred-edge 3 | 0.27 | 35.09% | 0.39 | 33.27% |
| REIP | 0.63 | 25.35% | 0.66 | 24.46% |
Table 7. Values for $R^2_{cal}$ and $\mathrm{RRMSE}_{val}$ from the different images used to assess wheat LAI via MLR.

| Metric | Planet | Sentinel-2 | Fusion |
|---|---|---|---|
| $R^2_{cal}$ | 0.63 | 0.76 | 0.81 |
| $\mathrm{RRMSE}_{val}$ | 42.36% | 33.58% | 33.40% |
