Article

Imputation of Missing Parts in UAV Orthomosaics Using PlanetScope and Sentinel-2 Data: A Case Study in a Grass-Dominated Area

by Francisco R. da S. Pereira 1,2,*,†, Aliny A. Dos Reis 1,†, Rodrigo G. Freitas 3, Stanley R. de M. Oliveira 3,4, Lucas R. do Amaral 3, Gleyce K. D. A. Figueiredo 3, João F. G. Antunes 1,4, Rubens A. C. Lamparelli 1, Edemar Moro 5 and Paulo S. G. Magalhães 1,3

1 Interdisciplinary Centre of Energy Planning, University of Campinas, Campinas 13083-896, São Paulo, Brazil
2 Federal Institute of Education, Science and Technology of Alagoas, Satuba 57120-000, Alagoas, Brazil
3 School of Agricultural Engineering, University of Campinas, Campinas 13083-875, São Paulo, Brazil
4 Embrapa Digital Agriculture, Brazilian Agricultural Research Corporation, Campinas 13083-886, São Paulo, Brazil
5 Department of Agricultural Sciences, University of West Paulista, Presidente Prudente Campus, Presidente Prudente 19050-920, São Paulo, Brazil
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
ISPRS Int. J. Geo-Inf. 2023, 12(2), 41; https://doi.org/10.3390/ijgi12020041
Submission received: 31 October 2022 / Revised: 12 January 2023 / Accepted: 14 January 2023 / Published: 28 January 2023
(This article belongs to the Special Issue Geomatics in Forestry and Agriculture: New Advances and Perspectives)

Abstract: The recent advances in unmanned aerial vehicle (UAV)-based remote sensing systems have broadened the remote sensing applications for agriculture. Despite the great possibilities of using UAVs to monitor agricultural fields, specific problems related to missing parts in UAV orthomosaics due to drone flight restrictions are common in agricultural monitoring, especially in large areas. In this study, we propose a methodological framework to impute missing parts of UAV orthomosaics using PlanetScope (PS) and Sentinel-2 (S2) data and the random forest (RF) algorithm, applied to an integrated crop–livestock system (ICLS) covered by grass at the time of image acquisition. We validated the proposed framework by simulating and imputing artificial missing parts in a UAV orthomosaic and then comparing the original data with the model predictions. Spectral bands and the normalized difference vegetation index (NDVI) derived from PS and S2 images (separately and combined) were used as predictor variables of the UAV spectral bands and NDVI in developing the RF-based imputation models. The proposed framework produces highly accurate results (RMSE = 6.77–17.33%) with a computationally efficient and robust machine-learning algorithm that leverages the wealth of empirical information present in optical satellite imagery (PS and S2) to impute up to 50% of missing parts in a UAV orthomosaic.

1. Introduction

Remote sensing technologies are critical components of precision agriculture (PA) and smart farming [1,2]. In recent decades, advances in unmanned aerial vehicle (UAV)-based remote sensing systems have broadened the remote sensing applications for PA [3]. Unlike satellite-based remote sensing technologies, UAVs offer high versatility, adaptability, and flexibility for data collection at high spatial and temporal resolutions [4], enabling several PA applications. Crop growth monitoring, yield estimation, health monitoring, disease detection, and weed management [3,4,5,6] are examples of PA applications that take advantage of the spatial and temporal resolution of UAV remote sensing, exploiting configurations tailored to identifying crop spatial variability.
The dynamic nature of agriculture demands crop monitoring at specific phenological stages to identify the factors that may affect the final yield [4]. Despite UAVs’ temporal flexibility, missing a flight opportunity may be unrecoverable in some situations. This is particularly true when using UAV remote sensing for PA applications in tropical areas, where vegetation grows fast and agriculture usually relies on rainfall. Acquiring UAV images even a few days apart may result in losing valuable information about the target of interest due to everyday agricultural events, such as intensive grazing, grass mowing, weed spraying, pre-harvest crop desiccation, tillage, and harvest.
The ideal spatial resolutions of UAV remote sensing for PA applications are defined based on the target of interest and the possibility of machinery intervention [7]. Usually, such interventions are executed by farm equipment, which cannot deal with sub-meter spatial resolutions. However, monitoring large agricultural areas (>200 ha) using UAV imagery is challenging, even when adjusting to the moderate spatial resolutions suitable for typical PA applications. UAV systems have some disadvantages, such as small ground coverage (a few km2 of swath) [8,9] due to battery capacity, flight restrictions, and limited operation under wind gust conditions. These disadvantages may result in missing parts in UAV orthomosaics of large agricultural areas and limited data acquisition at critical crop growth stages.
Imputing missing parts in UAV orthomosaics is crucial and necessary to ensure UAV imagery application in PA over large areas and in specific crop growth stages. For years, missing values in optical satellite imagery have been a challenge for the remote sensing community [8,10,11,12,13], usually caused by cloud cover, shadows, or sensor malfunctions. The proposed approaches to replace missing values in remote sensing data often rely on filling in the pixel values in data gaps [14]. Spatial gap-filling methods or spatial imputation methods are usually based on deterministic and geostatistical interpolation approaches, such as ordinary kriging and cokriging, where the information contained within surrounding (non-missing) pixels is used to interpolate the missing parts of the images [10,14,15,16]. Temporal gap-filling methods predict the missing pixel values using the non-missing pixel values on images from different times. This process is usually carried out by fitting a curve on the non-missing data before and after the gap, such as the Savitzky–Golay filter [17,18,19,20]. On the other hand, spatio-temporal gap-filling methods combine both temporal and spatial methods in a multi-step approach to incrementally remove gaps in the time series of remote sensing data [12,13,14,21].
Machine learning (ML) algorithms have recently emerged as an innovative approach to impute missing values in remotely sensed data for learning complex spatial and temporal relationships between input and target variables [22,23]. Although optical satellite imagery shows a lower spatial resolution compared with UAV imagery, existing machine learning methods, such as the random forest (RF) algorithm, can capture the relationship between coarse-scale (optical satellite imagery) and fine-scale (UAV imagery) remotely sensed data acquired for the same location and time to impute the missing parts in UAV orthomosaics with high spatial detail required for PA applications. In land-use classification applications, the RF algorithm has been used to combine and optimize the spectral, spatial, and temporal resolutions of satellite and UAV data [24,25]. However, we are unaware of studies attempting to address the problem of missing parts in UAV orthomosaics using the RF algorithm and ancillary data derived from satellite images.
Additionally, the launch of the Sentinel-2 missions by the European Space Agency (ESA) in 2015 [26] and the new generation of orbital platforms, such as Planet CubeSats [27], have greatly enhanced the possibilities for ancillary data with improved spectral and spatial resolutions (10 m and 3 m, respectively) still unexplored in imputation approaches of missing parts in UAV orthomosaics using ML algorithms. So, this study proposes a methodological framework to impute the missing parts of UAV orthomosaics using PlanetScope (PS) and Sentinel-2 (S2) data and the robust RF algorithm. The main novelty of our proposed framework is using a data inter-calibration approach and the RF’s ability to learn complex relationships between remotely sensed images with different spatial and spectral resolutions to impute missing parts of UAV orthomosaics.
Our goals in this study were to develop a data-driven spatial imputation methodology that (1) balances the need for high spatial resolution in the UAV orthomosaics with the spatial resolution necessary for feasible PA application in specific crop growth stages, (2) uses ancillary data from satellite imagery within the same or very close acquisition dates to impute the missing parts in UAV orthomosaics, and (3) provides a generalizable and flexible approach that applies to a wide range of remote sensing datasets. Our underlying hypothesis was that spatial and spectral autocorrelation inherent within spectral bands from UAV and satellite images from the same target of interest can be leveraged to impute the missing parts in UAV orthomosaics.

2. Materials and Methods

2.1. Study Area

To evaluate the proposed framework, we selected an area of approximately 200 hectares in the western region of São Paulo, Brazil (Figure 1). The study area has been managed as an integrated crop–livestock system (ICLS) based on cultivated pasture and soybean rotation within the agricultural year. In August 2019, when the remotely sensed images used in this study (UAV, PS, and S2) were acquired, the study area was used as pasture fields for rotational grazing. The pasture coverage was predominantly ruzi grass (Urochloa ruziziensis), with sparse trees for shading (Figure 1).

2.2. Remotely Sensed Data Collection and Pre-Processing

The UAV images were acquired on 10–11 August 2019 under clear-sky conditions, from 11:00 a.m. to 2:00 p.m. local time, using a RedEdge M multispectral sensor (Micasense, Seattle, WA, USA) mounted on a G-Q45 quadcopter (G-drones, São Paulo, Brazil). The RedEdge M sensor collected information in the blue (B) (465–485 nm), green (G) (550–570 nm), red (R) (663–673 nm), red-edge (RE) (712–722 nm), and near-infrared (NIR) (820–860 nm) spectral bands (Figure 2). We used 75% image overlap and side-lap. The automatic flight control system maintained the UAV at 115 m above ground over parallel paths, resulting in a ground sample distance (GSD) of ~0.08 m. We captured images of the calibration panel (Micasense) before and after each flight for the image radiometric corrections. A manufacturer-supplied conversion model transformed raw pixel values into absolute spectral radiance values, considering the radiometric sensor resolution. The calibration panel images enabled computing the reflectance calibration factor for each band as the ratio between the known reflectance value of the calibration panel, provided by the manufacturer, and the average radiance of the pixels inside the panel. We used the Agisoft Metashape® software (Agisoft LLC, St. Petersburg, Russia) for the radiometric correction and image mosaicking.
The selected cloud-free PlanetScope (PS) CubeSat multispectral image was acquired on 10 August 2019 and downloaded from the Brazilian Planet Labs commercial representative platform. PS is a constellation of over 130 nanosatellites with a 3U form factor (0.10 m × 0.10 m × 0.30 m), operated by Planet Labs, Inc., able to image all of the Earth’s land surface daily at high spatial resolution (~3 m). Most PS CubeSats are in a sun-synchronous orbit with an equator crossing time between 9:30 and 11:30 a.m. (local solar time). The PS image used in our study was acquired by the Bayer-mask CCD sensor with four spectral bands (Figure 2): B (455–515 nm), G (500–590 nm), R (590–670 nm), and NIR (780–860 nm) [27]. We used the Planet Surface Reflectance (SR) product, which was atmospherically corrected from the Planet Analytic Product (Radiance) using the supplied coefficients to reach top-of-atmosphere (TOA) reflectance and the 6SV2.1 radiative transfer code to reach surface reflectance [28].
The cloud-free Sentinel-2A image (Level-2A surface reflectance product) selected for this study was acquired on 11 August 2019. Sentinel-2 (S2) is a satellite mission of the Copernicus program from the European Space Agency (ESA) based on two polar-orbiting satellites (Sentinel-2A and Sentinel-2B) positioned in the same sun-synchronous orbit phased at 180° to each other. Each S2 satellite carries a MultiSpectral Instrument (MSI), and together they can image all of the Earth’s land surface at a revisit time of 5 days with a 290 km swath width. S2 images are publicly accessible and can be freely downloaded at the Copernicus website (https://scihub.copernicus.eu/dhus/#/home (accessed on 1 May 2020)). The MSI sensor captures 13 spectral bands with varying spatial resolutions (10 m, 20 m, and 60 m). The spectral bands with 10 m spatial resolution are B (459.4–525.4 nm), G (541.8–577.8 nm), R (649.1–680.1 nm), and NIR (779.8–885.8 nm). Red-edge1 (RE1) (696.6–711.6 nm), red-edge2 (RE2) (733–748 nm), red-edge3 (RE3) (772.8–792.8 nm), red-edge4 (RE4) (854.2–875.2 nm), shortwave infrared 1 (SWIR1) (1568.2–1659.2 nm), and shortwave infrared 2 (SWIR2) (2114.9–2289.9 nm) correspond to the spectral bands with 20 m resolution (Figure 2). This study did not consider the three bands with 60 m resolution designed for monitoring atmospheric conditions (coastal aerosol, water vapor, and SWIR/Cirrus).
We also calculated the normalized difference vegetation index (NDVI) [29] for the UAV orthomosaic and the PS and S2 images. The NDVI was chosen in this study due to being the most widely used spectral vegetation index for vegetation monitoring applications in precision agriculture [7,30] and its long history, simplicity, and dependence on readily available spectral bands (R and NIR) [31].
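For reference, the NDVI used throughout this study is the standard normalized difference of the red and NIR reflectances, NDVI = (NIR − R)/(NIR + R). The sketch below is our own Python illustration (the authors performed their analysis in R); the function name and the epsilon guard against division by zero are ours:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - R) / (NIR + R).

    Inputs are reflectance arrays of the same shape; a small epsilon
    guards against division by zero over no-data pixels.
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + 1e-12)

# Healthy vegetation reflects strongly in NIR and weakly in red:
print(ndvi(np.array([0.05]), np.array([0.45])))  # NDVI close to 0.8
```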
Figure 3 presents a flowchart of the main steps of our proposed spatial imputation methodology for filling in the missing parts of UAV orthomosaics using the RF algorithm and ancillary data derived from satellite images.

2.3. Data Extraction, Missing Part Simulation, and Dataset Division

The proposed methodological framework to impute missing parts of UAV orthomosaics required that the UAV and satellite images were closely aligned. Therefore, we first checked whether the UAV, PS, and S2 data were entirely co-registered to each other. Next, we resampled the UAV (0.08 m), PS (3 m), and S2 (10 and 20 m) images using bilinear interpolation to a spatial resolution of 1 m to ensure better integration between the remotely sensed data. The spatial resolution of 1 m was chosen based on the spatial resolution required in typical PA applications, the fact that it is the greatest common divisor of the spatial resolutions (3, 10, and 20 m) of the optical satellite imagery, and the computational costs involved in the proposed imputation framework. A spatial resolution of 1 m is suitable for several PA applications that use equipment that cannot deal with sub-meter spatial resolutions, such as variable-rate seeding and fertilization [32,33]. After resampling the images, we selected 220,000 points regularly distributed on a 10 m × 10 m grid. The selected points corresponded to the centroids of around 10% of the total number of pixels of the image. Those points constituted the original sample used to calibrate the imputation models, avoiding the large data volume of the entire image, which would decrease the computational efficiency of the proposed framework. For each of the 220,000 points, we extracted the spectral band and NDVI values from the UAV, PS, and S2 images.
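The resample-then-sample step above can be sketched as follows. This is a hypothetical Python illustration (the paper does not publish code): `scipy.ndimage.zoom` with `order=1` stands in for the bilinear resampling, and the random toy array stands in for an actual PS band.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_bilinear(band: np.ndarray, src_res: float, dst_res: float = 1.0) -> np.ndarray:
    """Resample one band to the target resolution with bilinear interpolation
    (order=1), analogous to the paper's resampling to a common 1 m grid."""
    factor = src_res / dst_res
    return zoom(band, factor, order=1)

def grid_sample_indices(shape, step=10):
    """Row/col indices of points on a regular step x step grid; at 1 m
    resolution, step=10 reproduces the 10 m x 10 m point sampling."""
    rows = np.arange(step // 2, shape[0], step)
    cols = np.arange(step // 2, shape[1], step)
    return np.meshgrid(rows, cols, indexing="ij")

# Toy example: a 3 m PS-like band upsampled to 1 m, then sampled on a grid.
ps_band = np.random.rand(100, 100)           # 100 px at 3 m ~ 300 m extent
band_1m = resample_bilinear(ps_band, src_res=3.0)
rr, cc = grid_sample_indices(band_1m.shape, step=10)
samples = band_1m[rr, cc]
print(band_1m.shape, samples.size)           # (300, 300) and 900 grid samples
```

In practice this extraction would be repeated for every band and index of each sensor, yielding one row of predictors and targets per grid point.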
The parallel path planning commonly adopted to collect UAV images, combined with unexpected flight situations (such as sudden wind gusts), may affect the perfect overlap of the images, resulting in missing parts of the UAV orthomosaic in the shape of strips. Thus, we simulated three levels of artificial missing parts in the UAV orthomosaic as strips, corresponding to gaps of 10% (MP-10%), 30% (MP-30%), and 50% (MP-50%) of the orthomosaic total area (Figure 4). Considering the missing-part levels and the 220,000 points (original sample), we defined the training and testing datasets for data modelling. From the 220,000 selected points, those not falling within the missing parts of the UAV orthomosaic were used to train the imputation models, i.e., 198,000 points for MP-10%, 154,000 points for MP-30%, and 110,000 points for MP-50%. The 22,000 points falling within the missing parts of MP-10% were used as the testing dataset for the performance evaluation of all three levels of missing parts (MP-10%, MP-30%, and MP-50%).
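The strip-shaped gap simulation can be mimicked with a simple mask generator. The sketch below is a hypothetical illustration: the strip count, placement, and random seed are our own assumptions, while the paper's exact gap geometry follows Figure 4.

```python
import numpy as np

def strip_mask(shape, missing_frac, n_strips=5, seed=0):
    """Boolean mask of vertical strips (True = missing), mimicking the
    flight-line gaps simulated at 10%, 30%, and 50% of the image area."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=bool)
    strip_w = int(shape[1] * missing_frac / n_strips)
    starts = rng.choice(shape[1] - strip_w, size=n_strips, replace=False)
    for s in starts:
        mask[:, s:s + strip_w] = True
    return mask

# MP-10% analogue on a 300 x 300 toy image; points inside the mask would
# form the testing set, points outside it the training set.
mask10 = strip_mask((300, 300), missing_frac=0.10)
print(mask10.mean())   # fraction of masked pixels, at most 0.10
```

Note that randomly placed strips may overlap, so the masked fraction is an upper-bounded approximation of the target level; the paper's strips are fixed, as shown in Figure 4.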

2.4. Data Modelling

The proposed framework was based on data inter-calibration and used the nonparametric random forest (RF) machine learning algorithm to impute the missing parts of a UAV orthomosaic.
The RF, proposed by [34], is an ensemble learning algorithm that combines many decision trees (a forest) to solve both classification and regression problems. Each tree is grown on a bootstrap sample (sampling with replacement) of the training data, with a random, independent subset of predictors drawn at each split. The RF regression algorithm has a history of successful use in remote sensing applications [22,23,25,35] and comparable accuracy to other state-of-the-art machine learning algorithms [36], in addition to its simplicity, efficiency, and ability to process high-dimensional data and prevent overfitting [37]. In addition, the RF is a well-known regression method with high stability and robustness that provides fast, flexible, and accurate predictions, in which the final predicted value corresponds to the averaged output of all individual decision trees, thus decreasing the variance of the results.
RF modelling involves a hyperparameter tuning process that maximizes the model’s predictive accuracy. We used 3-fold cross-validation to estimate accuracy on the training dataset during the hyperparameter tuning process, which was conducted to select the optimal values of the hyperparameters ntree (number of decision trees) and mtry (number of predictor variables randomly sampled at each split). The hyperparameter tuning for each RF-based imputation model employed the random search method [38] with ntree values ranging from 150 to 300. The mtry hyperparameter was defined as mtry = √p (Equation (1)), where p is the number of predictor variables.
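As a rough Python analogue of this tuning step (the authors used the mlr package in R), the scikit-learn parameters `n_estimators` and `max_features` map onto ntree and mtry; the toy predictor/target data below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Toy data standing in for the satellite predictors and one UAV target band.
rng = np.random.default_rng(42)
X = rng.random((300, 15))                      # 15 predictors (PS + S2 bands, NDVIs)
y = 0.6 * X[:, 0] + 0.3 * X[:, 5] + rng.normal(0, 0.05, 300)

p = X.shape[1]
param_dist = {
    "n_estimators": np.arange(150, 301),       # ntree in [150, 300]
    "max_features": [max(1, int(np.sqrt(p)))], # mtry = sqrt(p)  (Equation (1))
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions=param_dist,
    n_iter=3, cv=3, random_state=0,            # random search, 3-fold CV
)
search.fit(X, y)
print(search.best_params_)
```

One such search would be run per target variable and predictor set, yielding eighteen tuned models in total.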
The target variables in the RF-based imputation models were the five spectral bands of the UAV orthomosaic (B, G, R, RE, and NIR) and the NDVI. We considered three sets of predictor variables in the RF-based imputation models: (i) the PS image spectral bands (B, G, R, and NIR) and its NDVI; (ii) the S2 image spectral bands (B, G, R, RE1, RE2, RE3, RE4, NIR, SWIR1, and SWIR2) and its NDVI; and (iii) the combination of the predictor variables in (i) and (ii).
We assessed the importance of the predictor variables in the RF-based imputation models to identify each model’s most important predictors. The RF uses the increase in mean square error (IncMSE) when a variable is randomly permuted to measure its importance [34]. We normalized the variable importance metric of each model to the 0–1 range to obtain the predictors’ overall performance across the best RF-based imputation models. Then, we summed the variable importance values of the selected RF models for every predictor variable. Finally, we obtained the relative importance of each predictor variable by normalizing the total variable importance to the range of 0–100. All data modelling was performed using the mlr package [39] in the R software environment.
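The importance-aggregation procedure described above can be sketched as follows; the function name and the two hypothetical per-model importance vectors are our own illustration:

```python
import numpy as np

def overall_relative_importance(importances_per_model):
    """Scale each model's importances to [0, 1], sum across models per
    predictor, then rescale the totals to [0, 100], as described in the
    variable-importance aggregation step."""
    total = np.zeros(len(importances_per_model[0]), dtype=float)
    for imp in importances_per_model:
        imp = np.asarray(imp, dtype=float)
        spread = imp.max() - imp.min()
        total += (imp - imp.min()) / spread if spread > 0 else np.zeros_like(imp)
    t_spread = total.max() - total.min()
    return 100 * (total - total.min()) / t_spread

# Two hypothetical models, three predictors:
rel = overall_relative_importance([[0.2, 0.5, 0.9], [0.1, 0.8, 0.6]])
print(rel)  # [0.0, 83.33..., 100.0]
```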

2.5. Model Evaluation and Agreement Analysis

The assessment of the RF-based imputation models used the 22,000 points matching the missing parts of MP-10% (Figure 4a) to compare imputed and observed values for each target variable and level of missing parts. We considered a series of goodness-of-fit measures, including the root mean square error (RMSE), in absolute and percentage terms, and the coefficient of determination (R2). Next, we applied the validated RF-based imputation models only to the missing-part extent of the raster layers of the predictor variables (spectral bands and NDVI from PS and S2). The spatial agreement between the imputed missing parts of the UAV orthomosaic and the original UAV orthomosaic was assessed using Pearson’s correlation coefficient (r) and the Willmott agreement index (d) [40].
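These goodness-of-fit and agreement measures can be written compactly. A minimal Python sketch follows (the observed/predicted vectors are invented for illustration; the Willmott index follows its standard definition [40]):

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def willmott_d(obs, pred):
    """Willmott agreement index:
    d = 1 - sum((P - O)^2) / sum((|P - Obar| + |O - Obar|)^2),
    where Obar is the mean of the observed values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    obar = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return float(1 - num / den)

obs = np.array([0.2, 0.4, 0.6, 0.8])
pred = np.array([0.25, 0.38, 0.62, 0.77])
print(rmse(obs, pred), willmott_d(obs, pred))  # ~0.0324 and ~0.9942
```

A d value near 1 indicates near-perfect agreement between imputed and original pixel values; in this study, d was computed over the missing-part extent only.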
The imputed UAV orthomosaic was qualitatively evaluated by visual comparison of imputed and original pixel values in the true-colour composite (RGB:321) images. We also calculated the NDVI using the imputed R and NIR bands to assess whether the imputed spectral bands can be used directly to derive other vegetation indices. The calculated NDVI (based on the predicted spectral bands) was then compared with the original UAV-based NDVI and the predicted NDVI.

3. Results

Combining predictor variables derived from PS and S2 images resulted in the RF-based imputation models with the lowest prediction errors (Table 1) and the highest spatial agreement between imputed and observed values within the extent of the missing parts (Table 2). The NIR spectral band showed the lowest RMSE values (RMSE < 7%), whereas the NDVI showed the highest RMSE values (RMSE > 17%). Our results indicated that the S2 models outperformed the PS models in predicting the missing parts of the UAV orthomosaic. Additionally, the levels of missing parts in the UAV orthomosaic did not significantly affect the prediction performance of the RF-based imputation models.
The predictions of the five spectral bands of the UAV orthomosaic (B, G, R, RE, and NIR) and the NDVI, retrieved from the best RF-based imputation models using the combination of PS and S2 data, showed good agreement with the observed values in the testing dataset (not shown here). The lowest observed values of the spectral variables tended to be overestimated, whereas the highest were underestimated.
The RE4, SWIR1, RE1, G, and SWIR2 spectral bands from the S2 image were the five most important variables in imputing the missing parts of the UAV orthomosaic (Figure 5), according to the overall relative importance of the predictor variables, as measured by the variable importance metric across the eighteen RF-based imputation models using both PS and S2 data.
Among the PS predictor variables, the NIR spectral band was the most important variable (the sixth most important variable in the overall ranking). The variables that least contributed to RF-based imputation model prediction performance were the B spectral bands of both PS and S2 images.
Figure 6 shows true-colour composites (RGB:321) of the original UAV orthomosaic and the imputed UAV orthomosaics for the three levels of missing parts, allowing a visual inspection of the prediction performance of the proposed framework. We also selected three regions (squares #1, #2, and #3 in Figure 6) within the image to show in more detail the imputed parts of the UAV orthomosaic at the highest level of missing parts assessed in this study (MP-50%). We observed that the RF-based imputation models could reconstruct abrupt changes in the image, such as sparse trees (squares #1 and #2), tree rows (square #2), paddock divisions (square #3), and terrace edges present in the grass-dominated vegetation (squares #1, #2, and #3).
The NDVI images calculated using the imputed R and NIR spectral bands and the NDVI images predicted directly by the RF-based imputation models showed virtually the same spatial agreement with the original UAV NDVI images across the three levels of missing parts (Table 3). Figure 7 also shows the high spatial agreement between the original UAV NDVI image and the predicted and calculated NDVI images, highlighting the potential of the imputed spectral bands to further derive other vegetation indices.

4. Discussion

This study proposed a framework to impute missing parts of UAV orthomosaics based on data inter-calibration and the well-known RF algorithm using PS and S2 data. The positive results of the RF-based imputation models, with RMSE below 20% (Table 1), and the high spatial agreement (d > 0.74) between the imputed and original pixel values in the simulated artificial missing parts of the UAV orthomosaic (Table 2) demonstrated the proposed framework’s potential to impute missing parts for both spectral bands and vegetation indices derived from UAV orthomosaics. No spatial imputation method perfectly restores a remotely sensed image [10,41]; however, only slight visual differences occurred between imputed and non-imputed pixels in the UAV orthomosaics (Figure 6). Furthermore, the spatial agreement of the original NDVI with the NDVI calculated from the predicted spectral bands was very similar to that with the NDVI predicted directly by the RF-based imputation models, indicating that the prediction error of the spectral bands (R and NIR) was equivalent to that of predicting the NDVI directly (Table 3 and Figure 7). These results indicate that several vegetation indices of agronomic interest can be calculated directly from the imputed UAV spectral bands, underscoring the practical value of the proposed framework.
Much of the success of the proposed framework can be attributed to the RF algorithm’s high learning capacity [34,37] and the extensive data samples provided for model training and validation. Those conditions allowed the RF-based imputation models to capture the complex relationships between the spectral bands and vegetation indices derived from the same or different sensor images. The RF algorithm also successfully filled gaps in satellite image time-series data using spatial-temporal statistics in the study of [23]. The success of restoring up to 50% of missing parts in the UAV orthomosaic in our study indicates the flexibility of the proposed framework to deal with different levels of missing parts. Moreover, by using data derived from images obtained from the same area and at the same time (even with different spatial and spectral resolutions) as predictor variables, the proposed framework could predict discontinuities and abrupt changes in the landscape, such as sparse trees, tree rows, paddock divisions, and terrace edges (Figure 6). Restoring discontinuities and abrupt changes in the landscape has been reported in the literature as a challenge for the spatial imputation methods of remotely sensed images [11,41,42], corroborating the potential of using the proposed framework to impute missing parts in remotely sensed imagery.
The combined spatial and spectral characteristics of PS and S2 images resulted in the superior performance of the RF-based imputation models using both datasets rather than just one of them, likely due to the association between the high spatial resolution of PS images and the high spectral resolution of S2 images (Table 1 and Table 2). The contribution of each satellite was better understood when we analysed the models’ performance using predictor variables derived from PS and S2 separately. The RF-based imputation models built using S2 data outperformed those built using PS data. In our study, the spectral characteristics of the S2 images were more relevant to imputing the missing parts of the UAV orthomosaic than the high spatial resolution of the PS images. S2 has a relatively high number of narrow spectral bands whose central wavelengths are close to those of the UAV spectral bands (Figure 2). The analysis of the overall relative importance of the predictor variables (Figure 5) made the S2 spectral contribution to the RF-based imputation models even more evident, since the five most important variables in the imputation of the missing parts of the UAV orthomosaic were five S2 spectral bands (RE4, SWIR1, RE1, G, and SWIR2). Since the study area is covered by grass-dominated vegetation, the improved spectral configuration of the S2 images, with four narrow bands in the RE region and two bands in the SWIR region, could better detect the subtle differences in the cultivated pasture characteristics [26]. For multispectral remotely sensed images, the spectral bands of the same target of interest are correlated with each other in the spectral domain. Therefore, although UAV images do not have a SWIR band, the SWIR bands of S2 brought additional information to the RF-based imputation models. Thus, the sensed target plays a critical role in defining the most important predictor variables and in choosing the optical satellites used to impute missing parts in UAV orthomosaics.
Although the high spatial resolution of the PS images contributed to the good validation results of the RF-based imputation models built using PS data, the limited spectral configuration of the PS image used in this study explains its worse performance compared with the S2 images. The selected PS image covering the study area was captured by a PS satellite with only four spectral bands. However, Planet has announced the next generation of PS imagery powered by SuperDove, the latest Planet CubeSat design [27]. The SuperDove satellites will introduce new spectral bands to PS imagery, resulting in eight spectral bands: six interoperable with S2 images (coastal blue, blue, green II, red, red-edge, and NIR) and two unique spectral bands (green I and yellow) [27,43]. These improvements to PS images will undoubtedly contribute to a better performance of the proposed framework using PS data to impute missing parts of UAV orthomosaics in future applications.
The superior individual performance of the RF-based imputation models using S2 data, together with the limited access to PS data, since PS imagery is not open access, highlights the importance of using free operational satellite data to assess new approaches for imputing missing parts in UAV orthomosaics. Moreover, whereas our analysis was restricted to PS and S2 images, the proposed framework could be readily adapted to a wide variety of remotely sensed images, as long as all of the imagery is entirely co-registered. In addition, although we applied the proposed spatial imputation approach in a study area covered by grass-dominated vegetation, the concepts underlying the development of this approach are expected to hold regardless of the land use and land cover (LULC) classes in the region of interest, as long as all LULC classes are present in the non-missing parts of the images.
Additionally, although the imputed UAV images may be inaccurate for PA applications requiring very high spatial resolution (<1 m), the proposed framework is appropriate for applications where imaging large agricultural areas using UAV systems is still a challenge. For such tasks, and considering the limitations of UAV systems, the proposed framework may be used to optimize UAV imaging surveys in time- or resource-constrained situations by planning intentional missing parts for later imputation using satellite data.

5. Conclusions

This study describes a framework for imputing missing parts in UAV orthomosaics using a data inter-calibration approach and the nonparametric random forest (RF) algorithm, applied to an ICLS covered by grass at the time of image acquisition. The proposed framework produces highly accurate imputations of missing parts in UAV orthomosaics at levels of up to 50% of missing data, using the RF algorithm to leverage the wealth of empirical information in optical satellite imagery (PlanetScope (PS) and Sentinel-2 (S2)). The imputed UAV spectral bands can also be successfully used to derive many other vegetation indices of agronomic interest.
Associating PS images with high spatial resolution and S2 images with high spectral resolution improved the performance of the RF-based imputation models. However, when PS data accessibility is limited, the RF-based imputation can be generated using only S2 data at the expense of greater spatial detailing in the imputed UAV images. Additionally, the proposed framework can be readily adapted to use a wide variety of optical satellite imagery with improved spatial and spectral resolutions to impute missing parts in UAV orthomosaics.

Author Contributions

Conceptualization, Francisco R. da S. Pereira, Aliny A. Dos Reis and Rodrigo G. Freitas; methodology, Francisco R. da S. Pereira, Aliny A. Dos Reis and Rodrigo G. Freitas; formal analysis, Francisco R. da S. Pereira, Aliny A. Dos Reis and Stanley R. de M. Oliveira; investigation, Francisco R. da S. Pereira, Aliny A. Dos Reis and Rodrigo G. Freitas; data curation, Francisco R. da S. Pereira and Aliny A. Dos Reis; writing—original draft, Francisco R. da S. Pereira, Aliny A. Dos Reis and Rodrigo G. Freitas; resources, Stanley R. de M. Oliveira and Paulo S. G. Magalhães; writing—review and editing, Francisco R. da S. Pereira, Aliny A. Dos Reis, Rodrigo G. Freitas, Stanley R. de M. Oliveira, Lucas R. do Amaral, Gleyce K. D. A. Figueiredo, Rubens A. C. Lamparelli, João F. G. Antunes, Edemar Moro and Paulo S. G. Magalhães; visualization, Francisco R. da S. Pereira and Aliny A. Dos Reis; supervision, Paulo S. G. Magalhães; project administration, Paulo S. G. Magalhães; funding acquisition, Paulo S. G. Magalhães. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by FAPESP (Process numbers 2017/50205-9 and 2018/24985-0) and, partly, by CAPES (Finance Code 001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this work are available on request from the corresponding author. The data are not publicly available due to other ongoing studies.

Acknowledgments

The authors would like to thank the owner and all collaborators of the Campina Farm (Nelore CV Mocho) and the graduate and undergraduate students, postdoctoral researchers, and technicians of NIPE, UNOESTE, IFAL, and FEAGRI/UNICAMP for their support with the field data collection and preparation.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in any process of the present study, including design, collection, analyses, interpretation, writing, or the decision to publish the results.

Figure 1. Location of the study area in the western region of the state of São Paulo, Brazil, and the UAV orthomosaic (true colour composite red-green-blue (RGB):321) of the study area in August 2019.
Figure 2. Spectral (bandwidth and central wavelength) characteristics of UAV orthomosaic, PlanetScope, and Sentinel-2A imagery.
Figure 3. Flowchart for UAV and satellite imagery processing and for filling in the missing parts in UAV orthomosaics based on the RF algorithm and ancillary data derived from satellite images.
Figure 4. Simulation of the missing parts (white strips) in the original UAV orthomosaic: (a) missing part of 10%—MP-10%, (b) missing part of 30%—MP-30%, (c) missing part of 50%—MP-50%.
Figure 5. Relative importance of the predictor variables, as measured by the variable importance metric, in the eighteen RF-based imputation models using both PlanetScope and Sentinel-2 data.
Figure 6. Comparison between the true colour composite (RGB:321) of the original UAV orthomosaic (a) and the imputed UAV orthomosaic—10% (b), 30% (c), and 50% (d) of image prediction. Squares #1, #2, and #3 show, in more detail, the original UAV orthomosaic, the introduced missing parts, and the imputed UAV orthomosaic in the highest level of missing parts assessed in this study (MP–50%).
Figure 7. Original UAV NDVI image (a); predicted NDVI images with missing parts of 10% (b), 30% (d), and 50% (f); and the calculated NDVI images (based on the imputed R and NIR spectral bands) with missing parts of 10% (c), 30% (e), and 50% (g).
Table 1. Performance of the random forest models in imputing the missing parts of the UAV orthomosaic using PlanetScope (PS) and Sentinel-2 (S2) data.
| Missing Part Percentage | Predictor Imagery | Error Statistic | Blue | Green | Red | Red-Edge | NIR | NDVI |
|---|---|---|---|---|---|---|---|---|
| 10% | PS | RMSE | 0.0068 | 0.0121 | 0.0170 | 0.0209 | 0.0142 | 0.0814 |
| | | RMSE% | 12.33 | 9.59 | 18.18 | 8.84 | 7.97 | 18.81 |
| | | R² | 0.36 | 0.48 | 0.39 | 0.43 | 0.46 | 0.43 |
| | S2 | RMSE | 0.0059 | 0.0112 | 0.0160 | 0.0211 | 0.0133 | 0.0797 |
| | | RMSE% | 10.73 | 8.89 | 17.07 | 8.92 | 7.44 | 18.44 |
| | | R² | 0.48 | 0.58 | 0.43 | 0.50 | 0.61 | 0.38 |
| | PS + S2 | RMSE | 0.0055 | 0.0103 | 0.0148 | 0.0192 | 0.0122 | 0.0745 |
| | | RMSE% | 9.99 | 8.17 | 15.77 | 8.12 | 6.82 | 17.23 |
| | | R² | 0.50 | 0.52 | 0.43 | 0.46 | 0.53 | 0.40 |
| 30% | PS | RMSE | 0.0068 | 0.0121 | 0.0170 | 0.0209 | 0.0142 | 0.0814 |
| | | RMSE% | 12.35 | 9.66 | 18.19 | 8.85 | 7.96 | 18.82 |
| | | R² | 0.35 | 0.48 | 0.37 | 0.46 | 0.46 | 0.43 |
| | S2 | RMSE | 0.0059 | 0.0112 | 0.0160 | 0.0211 | 0.0133 | 0.0799 |
| | | RMSE% | 10.77 | 8.92 | 17.08 | 8.92 | 7.49 | 18.48 |
| | | R² | 0.48 | 0.59 | 0.42 | 0.51 | 0.63 | 0.38 |
| | PS + S2 | RMSE | 0.0055 | 0.0103 | 0.0148 | 0.0192 | 0.0121 | 0.0747 |
| | | RMSE% | 10.03 | 8.19 | 15.75 | 8.12 | 6.82 | 17.28 |
| | | R² | 0.45 | 0.52 | 0.41 | 0.47 | 0.47 | 0.40 |
| 50% | PS | RMSE | 0.0069 | 0.0122 | 0.0172 | 0.0211 | 0.0142 | 0.0818 |
| | | RMSE% | 12.49 | 9.68 | 18.38 | 8.91 | 7.98 | 18.91 |
| | | R² | 0.38 | 0.48 | 0.39 | 0.49 | 0.46 | 0.45 |
| | S2 | RMSE | 0.0058 | 0.0111 | 0.0158 | 0.0158 | 0.0132 | 0.0798 |
| | | RMSE% | 10.64 | 8.83 | 16.91 | 16.91 | 7.40 | 18.46 |
| | | R² | 0.50 | 0.58 | 0.45 | 0.45 | 0.62 | 0.40 |
| | PS + S2 | RMSE | 0.0055 | 0.0102 | 0.0147 | 0.0191 | 0.0121 | 0.0749 |
| | | RMSE% | 9.95 | 8.15 | 15.72 | 8.10 | 6.77 | 17.33 |
| | | R² | 0.47 | 0.52 | 0.43 | 0.48 | 0.53 | 0.41 |
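The error statistics reported in Table 1 are standard goodness-of-fit measures. The sketch below shows how RMSE, RMSE%, and R² can be computed for one spectral band; note that normalizing RMSE% by the mean observed value is an assumption, since the exact definition used by the authors is not given in this excerpt.

```python
# Assumed definitions of the Table 1 error statistics: RMSE, RMSE%
# (RMSE expressed as a percentage of the mean observed value), and R²
# (coefficient of determination as 1 - SS_res / SS_tot).
import numpy as np

def error_stats(observed, predicted):
    """Return (RMSE, RMSE%, R²) for one band's observed vs. imputed values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    rmse_pct = 100.0 * rmse / observed.mean()            # assumed normalization
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, rmse_pct, r2
```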
Table 2. Spatial agreement between original and imputed UAV data.
| Missing Data Percentage | Predictor Imagery | Agreement Parameter | Blue | Green | Red | Red-Edge | NIR | NDVI |
|---|---|---|---|---|---|---|---|---|
| 10% | PS | r | 0.50 | 0.59 | 0.52 | 0.59 | 0.58 | 0.54 |
| | | d | 0.66 | 0.74 | 0.68 | 0.73 | 0.72 | 0.70 |
| | S2 | r | 0.65 | 0.67 | 0.59 | 0.61 | 0.55 | 0.55 |
| | | d | 0.77 | 0.79 | 0.72 | 0.54 | 0.62 | 0.70 |
| | PS + S2 | r | 0.70 | 0.73 | 0.67 | 0.72 | 0.66 | 0.62 |
| | | d | 0.80 | 0.82 | 0.77 | 0.82 | 0.77 | 0.74 |
| 30% | PS | r | 0.49 | 0.57 | 0.54 | 0.53 | 0.51 | 0.56 |
| | | d | 0.63 | 0.71 | 0.67 | 0.68 | 0.68 | 0.70 |
| | S2 | r | 0.69 | 0.66 | 0.65 | 0.53 | 0.53 | 0.60 |
| | | d | 0.80 | 0.79 | 0.77 | 0.68 | 0.70 | 0.73 |
| | PS + S2 | r | 0.73 | 0.72 | 0.70 | 0.68 | 0.61 | 0.66 |
| | | d | 0.82 | 0.82 | 0.80 | 0.79 | 0.74 | 0.77 |
| 50% | PS | r | 0.45 | 0.58 | 0.51 | 0.53 | 0.50 | 0.54 |
| | | d | 0.62 | 0.72 | 0.67 | 0.67 | 0.67 | 0.69 |
| | S2 | r | 0.64 | 0.65 | 0.57 | 0.62 | 0.53 | 0.57 |
| | | d | 0.77 | 0.78 | 0.72 | 0.76 | 0.69 | 0.72 |
| | PS + S2 | r | 0.69 | 0.71 | 0.66 | 0.67 | 0.60 | 0.63 |
| | | d | 0.79 | 0.81 | 0.77 | 0.78 | 0.73 | 0.75 |
Table 3. Spatial agreement between the original, predicted, and calculated UAV NDVI images.
| Predictor Imagery | Missing Data Percentage | Agreement Parameter | Predicted NDVI | Calculated NDVI |
|---|---|---|---|---|
| PS + S2 | 10% | r | 0.62 | 0.62 |
| | | d | 0.74 | 0.74 |
| | 30% | r | 0.66 | 0.65 |
| | | d | 0.77 | 0.77 |
| | 50% | r | 0.63 | 0.63 |
| | | d | 0.75 | 0.75 |
Where: r = Pearson's correlation coefficient; d = Willmott's agreement index.
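The two agreement parameters used in Tables 2 and 3 can be computed as below. Pearson's r is the standard correlation coefficient, and d follows Willmott's (1981) index of agreement; the function name and array handling are illustrative, not taken from the authors' code.

```python
# Agreement parameters between observed (original UAV) and predicted
# (imputed) values: Pearson's r and Willmott's (1981) index of agreement,
# d = 1 - sum((P - O)^2) / sum((|P - mean(O)| + |O - mean(O)|)^2).
import numpy as np

def agreement(observed, predicted):
    """Return (Pearson's r, Willmott's d) for two equal-length arrays."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    r = np.corrcoef(o, p)[0, 1]
    o_mean = o.mean()
    d = 1.0 - np.sum((p - o) ** 2) / np.sum((np.abs(p - o_mean) + np.abs(o - o_mean)) ** 2)
    return r, d
```

Both parameters equal 1 for a perfect match; d is bounded in [0, 1], which makes it convenient for comparing bands with different value ranges.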
