Article

Radiometric Compensation for Occluded Crops Imaged Using High-Spatial-Resolution Unmanned Aerial Vehicle System

1 Department of GIS and Remote Sensing, University of Fort Hare, Private Bag X1314, Alice 5700, South Africa
2 Agriculture Research Council, Institute for Soil, Climate and Water (ARC-ISCW), Pretoria 0001, South Africa
* Author to whom correspondence should be addressed.
Agriculture 2023, 13(8), 1598; https://doi.org/10.3390/agriculture13081598
Submission received: 2 July 2023 / Revised: 30 July 2023 / Accepted: 10 August 2023 / Published: 12 August 2023

Abstract

Crop characterization is considered a prerequisite to devising effective strategies for the successful implementation of sustainable agricultural management. As such, remote-sensing technology has opened an exciting horizon for crop characterization at reasonable spatial, spectral, and temporal scales. However, the presence of shadows on croplands tends to distort the radiometric properties of crops, subsequently limiting the retrieval of crop-related information. This study proposes a simple and reliable approach for radiometrically compensating crops under total occlusion using brightness-based compensation and thresholding approaches. Unmanned aerial vehicle (UAV) imagery was used to characterize crops at the experimental site. Shadow was demarcated through the computation and use of mean spectral radiance values as the threshold across the spectral channels of the UAV imagery. Several image classifiers, viz., k-nearest neighbor (KNN), maximum likelihood, multilayer perceptron (MLP), and image segmentation, were used to categorize land features, with a view to determining the areal coverage of crops prior to the radiometric compensation process. Radiometric compensation was then performed to restore the radiometric properties of land features under occlusion by performing brightness tuning on the RGB imagery. The compensation results revealed maize and soil as the land features subjected to occlusion. The relative error of the mean (REM) results for radiance comparison between lit and occluded regions revealed a 26.47% deviation of the restored radiance of occluded maize from that of lit maize. For soil, on the other hand, the lowest REM value achieved was 50.92%, implying poor radiometric compensation for this feature. Postcompensation classification results revealed increases in the areal coverage of maize cultivars and soil of 40.56% and 12.37%, respectively, as predicted by the KNN classifier. The maximum likelihood, MLP, and segmentation classifiers predicted increases in the area covered with maize of 18.03%, 22.42%, and 30.64%, respectively; these classifiers also predicted increases in the area covered with soil of 1.46%, 10.05%, and 14.29%, respectively. The results of this study highlight the significance of brightness tuning and thresholding approaches in radiometrically compensating occluded crops.

1. Introduction

The progression in remote-sensing technology has presented the world with invaluable sources of crop-related information. Through this technology, crops can be better quantified for their management and yield estimation [1,2]. However, discriminating small-scale crops requires remotely sensed images with very high spatial resolutions, typically in centimeters [3,4]. Unmanned aerial vehicle (UAV) systems, in particular, have recently emerged as a valuable source of high-spatial-resolution remote-sensing data, offering substantial benefits in relation to cost, adaptability, and precise spatial resolution. UAV systems fly at a low altitude, allowing acquisition of multispectral images with very high spatial resolution (VHSR) [5]. With these advantages, UAV systems are increasingly becoming an effective complement to satellite remote sensing [6,7]. From a multispectral remote-sensing perspective, detection of features from imagery with an accurate and computationally efficient approach is one of the foremost modern challenges in image pattern recognition [8]. From the crop perspective, pixels that make up an image contain unique information about the nature and type of crop at a particular location. As such, changes in pixel values across the image become the basis for defining the spatial pattern of a particular crop in that imagery. Moreover, analysis of the interaction of scene illumination with the reflectance and geometry of the respective illuminated crop types may serve as the basis for quantifying the amounts of the different types of crops.
Just like with any other remote-sensing end products, the quality of UAV photogrammetric end products still requires serious attention if accurate spectral characterization of crops is to be achieved with these products [9]. However, correction of these images is usually confined to noise reduction and detector errors [10]. UAV data are subject to radiometric quality issues as a result of variations in solar radiation, atmospheric effects, sensor viewing angle, and calibration errors [11]. This underscores the need to remove these errors, with a view to produce illumination-consistent and atmosphere-independent imagery [12]. Radiometric quality restoration can be defined as the detection and removal of radiation anomalies recorded by a sensor, with a view to maintain a range of brightness suitable for the imaged land features. Several studies have been conducted, and more are underway, to devise radiometric approaches for enhancing the quality of UAV images [10,13,14,15].
Although radiometric errors in aerial imagery are a function of the local image contrast, tonal array, random noise, and radiometric resolution [16], the presence of shadows on croplands tends to play a role in the alteration of radiometric properties of crops. This subsequently limits the retrieval of crop-related information. This is true, despite the substantial amount of beneficial landscape feature information that a shadow provides, such as shape, relative position, height [17,18], and illumination direction [19]. A shadow makes image pixels darker, and this may lead to noticeable alteration in spectral patterns of crops. Subsequently, a partial or a complete loss of spectral information regarding crop types and condition may occur [20,21]. The occluded crops may subsequently be subjected to spectral confusion and misclassification, and this may have serious implications on agricultural-monitoring programs and the attainment of global food security and hunger alleviation goals. At the crop level, occlusion may occur as a result of the presence of cast- and self-shadows. Whereas a cast shadow occurs due to the blocking of light by another object at the scene, a self-shadow occurs due to occlusion by the crop itself, i.e., the side of the crop itself that is not exposed to the illumination source [22]. This may also occur when the top-most leaves of a crop occlude the leaves situated below them. Therefore, occluded crops must be radiometrically compensated for their accurate and precise characterization. A reliable radiometric compensation approach must be able to remove cast shadows while preserving self-shadows as part of the crop. Occluded crops can be radiometrically compensated via detecting shadow and removing the distortions from imagery, each of which can be investigated independently of the other [23]. In fact, the techniques used to radiometrically compensate occluded pixels from remote-sensing images can be achieved through two main steps: detection of shadows and performing a de-shadowing process [24].
Radiometric compensation of crops under occlusion can be accomplished using a thresholding algorithm, modeling, or object-oriented techniques. The thresholding process involves determining the optimal threshold value of a digital number by analyzing histograms to differentiate shadow information from other types of information [25]. Modeling approaches are commonly used since they are less complex than thresholding, which requires prior knowledge of shadow and mathematical modeling. On the other hand, an object-oriented technique can also effectively detect shadows but does not operate directly on individual pixels, as it operates on image fragments [26]. However, modeling and object-oriented approaches are complicated, and they involve a long sequence of equations [26]. Although thresholding has demonstrated the capability to radiometrically compensate occluded regions, this approach is only suitable for compensating regions under partial occlusion [27,28]. Moreover, studies on radiometric restoration have extensively focused on built-up environments, neglecting other environmental disciplines [19,28,29,30,31]. We envisage that, when used in conjunction with a brightness-tuning approach, thresholding can aid in the restoration of radiometric properties of crops under total occlusion. Owing to the limited ability to effectively handle total occlusion scenarios and the lack of simplicity and reliability of these radiometric compensation techniques, this study proposes a simple and reliable approach for radiometrically compensating crops under total occlusion by integrating brightness-based compensation and thresholding techniques. This approach relies on tuning brightness properties of the imagery while observing the restoration patterns of spectral information in the occluded region.

2. Material and Methods

2.1. Experimental Site Characterization

The experimental site was situated within the Mutale River catchment in the Limpopo province of South Africa, an area well known for its agricultural practices. Small-scale crop farming supporting local livelihoods and the rural economy is dominant in this area. The small-scale farms of the study area are located at 22°47′37.22″ S, 30°29′08.41″ E. A subtropical climate, with a mean annual rainfall ranging between 300 mm and 1000 mm, characterized the experimental site [32], with a large amount of rainfall received during the summer season. The experimental site formed part of many small land plots on which a variety of crops, such as maize, cabbage, sweet potatoes, sugar beans, peas, green beans, and Solanum retroflexum, were cultivated across all seasons, supported by a furrow irrigation system during rain-scarce seasons. The variety of crops cultivated in this area underscores the need to quantify these crops for yield estimation purposes. Figure 1 shows the location of the experimental site with respect to South Africa and the Mutale River catchment. The selection of the experimental site was prompted by the presence of a shadow, which appears as a large black spot in the image, created by an adjacent tree.

2.2. Methods

The following sequence of methods and techniques was carried out to achieve the purpose of this study (Figure 2).

UAV Imagery Acquisition

Remotely sensed imagery used in the current study was acquired from a UAV platform. Prior to acquisition, the weather conditions were assessed to ensure that the images were acquired during midday and in cloudless conditions to eliminate the influences of haze, smoke, and clouds on the quality of the imagery. The UAV imagery was acquired on 30 June 2021. Two drone flight campaigns were conducted to capture images during midday (i.e., between 12:00 and 14:00). A DJI Matrice 600 UAV with a MicaSense RedEdge multispectral sensor captured land surface images at an 8.5 cm spatial resolution. This multispectral sensor captured images in five spectral channels of the electromagnetic spectrum, i.e., red, green, blue, near-infrared, and red-edge. The flight-imaging process was carried out at a speed of 10 m per second. The flight plan was designed such that the imaging process created a minimum lateral and frontal overlap of 75% for feature matching and mosaicking in postprocessing. The flight altitude for the UAV was set at 120 m above ground level. Table 1 provides details of the UAV multispectral sensor employed in the study area.

2.3. UAV Camera Calibration

Calibration of each camera of the UAV multispectral sensor was performed to remove lens distortion and to calculate the focal length and principal point of each camera. Each camera captured a white chessboard target at different roll and pitch angles to produce convergent images. The multispectral camera was configured to ensure 75% overlap between consecutive images. This was carried out to ensure that all portions of the study area were covered during the imaging process and to facilitate an accurate orthomosaic process. An image of a reflectance calibration panel was captured before and after each flight to remove the effects of sunlight variation on the measured reflectance characteristics.

2.4. UAV Image Processing

The raw remotely sensed data collected by unmanned aerial vehicles represent the Earth’s irregular surface; therefore, georeferencing was utilized to assign geographic map coordinates to the image data. In this study, geometric correction was used to ensure that pixels or features in an image were in their proper position on the Earth’s surface and to minimize geometric distortions between sets of data points. This was achieved by employing the nearest neighbor resampling technique in the TerrSet 18.31 geospatial-monitoring software package. Georeferencing is frequently used in the correction process because shifting pixels to remove distortion and assigning coordinates to those pixels can both be performed at the same time.

2.5. Image Stretching

An image enhancement technique in the form of linear stretching was applied to rescale the pixel values of each original image band to new values that ranged from 0 to 255. The linear stretching was performed on each UAV band using the Stretch module embedded in the TerrSet platform based on Equation (1):
$$\mathrm{Lin\_stretch} = \frac{(IV - ILL)\,(OUL - OLL)}{IUL - ILL} + OLL \tag{1}$$
where
IV is the value of a pixel in the input map,
ILL is the lower value of the “stretch from” range,
IUL is the upper value of the “stretch from” range,
OLL is the lower value of the “stretch to” range,
OUL is the upper value of the “stretch to” range.
The input values were determined by the ‘stretch from’ values, and the lower and upper ‘stretch from’ boundary values were involved in the stretching process. The output values were then determined using the output domain, the value range, and precision of this domain.
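To make Equation (1) concrete, the following is a minimal NumPy sketch of the linear stretch; the function name, the percentile-based usage, and the 0–255 default output range are illustrative assumptions, not the TerrSet Stretch module itself.

```python
import numpy as np

def linear_stretch(band, in_low, in_high, out_low=0.0, out_high=255.0):
    """Rescale values from the 'stretch from' range to the 'stretch to' range, per Equation (1)."""
    band = np.clip(band, in_low, in_high)              # confine inputs to [ILL, IUL]
    scale = (out_high - out_low) / (in_high - in_low)  # (OUL - OLL) / (IUL - ILL)
    return (band - in_low) * scale + out_low           # (IV - ILL) * scale + OLL

# Example usage (hypothetical): stretch one UAV band between its 2nd and 98th percentiles
# stretched = linear_stretch(band, *np.percentile(band, [2, 98]))
```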

2.6. Conversion of Digital Number (DN) Values to Radiance

Multispectral images require conversion from digital number (DN) to reflectance data before they can be interpreted or used as input for image analysis [33]. When reflectance maps are the intended end products, results can be improved by performing absolute radiometric sensor correction. This involves converting unitless DN values into at-sensor radiance L using the following Equation (2):
$$L \;\left[\mathrm{W\,m^{-2}\,sr^{-1}\,nm^{-1}}\right] \tag{2}$$
such that
$$L(x, y) = V(x, y)\,\frac{a_1}{g}\cdot\frac{p(x, y) - p_{BL}(x, y)}{t_e + a_2\,y - a_3\,t_e\,y} \tag{3}$$
where L(x, y) for each x, y pixel is determined in terms of V(x, y), which is the vignetting correction of the normalized raw DN and normalized black-level DN [34].
a1, a2, and a3 denote radiometric correction factors. These sensor-specific constants may be obtained from the metadata file of the UAV imagery.
p(x, y) and pBL(x, y) denote the normalized raw DN and normalized black-level DN at pixel location (x, y),
g denotes the sensor gain,
te denotes the exposure time.
The DN values were converted to calibrated radiance values based on the user-defined values for Lmin/Lmax for UAV sensor systems. This process was achieved using the image calculator embedded in the TerrSet 18.31 software package.
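As an illustration of the calibration model in Equation (3), a NumPy sketch follows. It assumes the vignetting correction V, the factors a1–a3, the gain g, and the exposure time te have already been read from the image metadata, and it interprets y as the pixel row index; all names are hypothetical rather than the TerrSet workflow used in the study.

```python
import numpy as np

def dn_to_radiance(p, p_bl, vignette, a1, a2, a3, gain, t_e):
    """Convert normalized raw DN p(x, y) to at-sensor radiance L(x, y) via Equation (3).

    p, p_bl, vignette : 2-D arrays (normalized raw DN, black-level DN, vignetting correction V)
    a1, a2, a3        : radiometric correction factors from the image metadata
    gain, t_e         : sensor gain and exposure time
    """
    y = np.broadcast_to(np.arange(p.shape[0]).reshape(-1, 1), p.shape)  # row index of each pixel
    return vignette * (a1 / gain) * (p - p_bl) / (t_e + a2 * y - a3 * t_e * y)
```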

2.7. Image Filtering

Shadow has demonstrated the ability to produce sharp discontinuities in an image. These discontinuities are abrupt changes in pixel intensity, which characterize boundaries of objects in a scene [35]. In this study, edge pixels were detected via applying a filtering technique in the form of Gaussian-based Sobel edge detection. This technique uses a pair of 3 × 3 convolution kernels or masks, Gx and Gy, as shown in Equation (4) adopted from Yin et al. [36]:
$$|G| = \sqrt{G_x^2 + G_y^2} \tag{4}$$
where |G| denotes the gradient magnitude,
Gx is the image resulting from applying the kernel Kx to the input image,
Gy is the image resulting from applying the kernel Ky to the input image.
Kernels Kx and Ky were computed using the matrices in Equation (5) and Equation (6), respectively:
$$K_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} \tag{5}$$
$$K_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \tag{6}$$
These convolution kernels are typically fused together to determine the absolute magnitude and the x- and y-orientations (horizontal and vertical directions) of the gradient. The selection of the Sobel operator was based on its insensitivity to noise and relatively small mask in image detection [37].
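A short Python sketch of the Gaussian-based Sobel detection described above, using SciPy; the smoothing parameter sigma is an assumed value, as the study does not report one.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # Equation (5)
KY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)   # Equation (6)

def sobel_edges(band, sigma=1.0):
    """Gaussian smoothing followed by Sobel gradient magnitude, per Equation (4)."""
    smoothed = gaussian_filter(band, sigma=sigma)  # suppress noise before differentiation
    gx = convolve(smoothed, KX)                    # horizontal gradient component
    gy = convolve(smoothed, KY)                    # vertical gradient component
    return np.hypot(gx, gy)                        # |G| = sqrt(Gx^2 + Gy^2)
```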

2.8. Demarcation of Shadow

Demarcation of shadow at the experimental site was achieved using the brightness statistics of each band of RGB. A total of 100 points were randomly digitized on the occluded region and superimposed on each spectral band of the UAV imagery. The pixel values on which the points were overlain were then extracted to compute the brightness statistics of the occluded region under each spectral band. Upon the successful computation of the descriptive statistics, the mean radiance value of each spectral band was used as a threshold for demarcating shadow, such that
$$L_{shadow}(i, j) = \begin{cases} 1, & \text{if } L(i, j) \le T \\ 0, & \text{otherwise} \end{cases}$$
where L(i, j) is the radiance of the pixel in row i and column j, and T is the band-specific mean radiance threshold.
This process was carried out in an ArcMap GIS environment.
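The demarcation step can be sketched in NumPy as follows, assuming the 100 digitized shadow points are available as row/column index arrays; treating pixels at or below the band mean as shadow is an assumed reading of the thresholding rule above, not the exact ArcMap workflow.

```python
import numpy as np

def shadow_mask(band, sample_rows, sample_cols):
    """Flag pixels at or below the mean radiance of the sampled shadow points."""
    threshold = band[sample_rows, sample_cols].mean()  # band-specific mean radiance T
    return (band <= threshold).astype(np.uint8)        # 1 = shadow, 0 = otherwise

# Example usage (hypothetical): 100 randomly placed shadow samples for one band
# rng = np.random.default_rng(0)
# rows = rng.integers(0, band.shape[0], 100)
# cols = rng.integers(0, band.shape[1], 100)
# mask = shadow_mask(band, rows, cols)
```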

2.9. Derivation of Color Composite Image

A 24-bit RGB composite image was initially produced to facilitate the identification of shadow and various crops. This composite image showed features using the proportions of radiance reflected in three channels of the UAV sensor. A 24-bit false color composite (FCC) image was also generated based on the NIR, red, and red-edge spectral bands. The motive behind the FCC image generation was to ensure that the NIR and red-edge spectral channels to which crops are sensitive also underwent the radiometric restoration process.

2.10. Generation of Spectral Vegetation Indices for Occluded Crop Characterization

Several vegetation indices were generated for the purpose of evaluating their efficacy in characterizing occluded crops. Table 2 provides a list of the spectral vegetation indices generated for the purpose of this study; a minimal sketch of how these indices can be computed is given after the table. The selection of these spectral vegetation indices was informed by the spectral resolution of the UAV sensor employed in this study.
Table 2. List of spectral vegetation indices generated in this study.

Index     Equation                            Author(s)
NDVI      (NIR − R)/(NIR + R)                 Filgueiras et al. [38]
GNDVI     (NIR − G)/(NIR + G)                 Mangewa et al. [39]
SAVI      ((NIR − R)/(NIR + R + 0.5)) × 1.5   Wang et al. [40]
OSAVI     (NIR − R)/(NIR + R + 0.16)          Bastiaanssen et al. [41]
GOSAVI    (NIR − G)/(NIR + G + 0.16)          Ji et al. [42]
NDRE      (NIR − RE)/(NIR + RE)               Crema et al. [43]
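The sketch referenced above: a NumPy function computing the Table 2 indices from the UAV bands. The small eps guard against division by zero is an added assumption, not part of the published formulas.

```python
import numpy as np

def vegetation_indices(nir, r, g, re, eps=1e-9):
    """Compute the Table 2 spectral vegetation indices from co-registered band arrays."""
    return {
        "NDVI":   (nir - r) / (nir + r + eps),
        "GNDVI":  (nir - g) / (nir + g + eps),
        "SAVI":   (nir - r) / (nir + r + 0.5) * 1.5,
        "OSAVI":  (nir - r) / (nir + r + 0.16),
        "GOSAVI": (nir - g) / (nir + g + 0.16),
        "NDRE":   (nir - re) / (nir + re + eps),
    }
```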

2.11. Classification of UAV Imagery

Land cover types were classified using a supervised image classification technique. Initially, training sites were created for four (4) cover types, namely cabbage, maize, soil, and shadow. The inclusion of shadow as a land cover class was triggered by the inability of the composite images and spectral vegetation indices to identify and recognize crops and soil located in the shadowed region. Four supervised image classifiers, namely k-nearest neighbor, maximum likelihood, multilayer perceptron neural network, and object-oriented, were applied to categorize these land cover types based on the spectral bands prior to radiometric compensation. This process was repeated for only three (3) land cover types, viz., cabbage, maize, and soil, based on the radiometrically compensated RGB image.
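The classifications in this study were performed with TerrSet classifiers; purely for illustration, a scikit-learn sketch of the KNN case is given below. The training arrays, the value of k, and the class encoding are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_knn(stack, X_train, y_train, k=5):
    """Per-pixel KNN classification of a (rows, cols, bands) image stack.

    X_train: (n_samples, n_bands) spectra digitized from the training sites
    y_train: labels, e.g., 0 = cabbage, 1 = maize, 2 = soil, 3 = shadow
    """
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    pixels = stack.reshape(-1, stack.shape[-1])         # flatten to (n_pixels, n_bands)
    return knn.predict(pixels).reshape(stack.shape[:2])
```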

2.12. Radiometric Compensation

The initial step of radiometric restoration was to compensate the occluded region by adding the radiance directed to the sensor from the occluded region (Ldir(P)) and the diffused radiance from the region adjacent to the occluded region. This was achieved by modifying Equation (14) adopted from Li et al. [44]:
$$L_{sen}(P) = L_{dir}(P) + L_{diff} + L_{atm} \tag{14}$$
where Ldir(P) denotes radiation directed to the sensor from the occluded region,
Ldiff denotes the diffused radiance from the adjacent region,
Latm denotes the radiance reflected by the atmosphere without reaching the ground.
The radiometric compensation equation (14) adopted from Li et al. [44] is relevant for compensating the radiometric properties of occluded regions imaged with a satellite sensor. As such, the equation also accounts for the radiance (Latm) reflected by the atmosphere without reaching the ground. In this study, Latm was deliberately omitted because the UAV system captured the image from below the bulk of the atmosphere, recording reflected radiation with negligible atmospheric interaction, such that
$$L_{sen} = L_{dir} + L_{diff} \tag{15}$$
Subsequently, an iterative thresholding method was applied to further radiometrically compensate the occluded region of the study area by tuning brightness using both TCC and FCC images as inputs to the equation. The brightness tuning was achieved via obtaining the minimum and maximum radiance values of each spectral band fused in both the TCC and FCC images. The brightness of each image was set at the initial threshold (T0) to divide each image into foreground and background using Equation (16):
$$T_0 = \frac{rad_{max} + rad_{min}}{2} \tag{16}$$
where T0 denotes the initial threshold,
radmax denotes the maximum radiance value of the image,
radmin denotes the minimum radiance value of the image.
The subsequent brightness values of the image were obtained via pushing the threshold to the next level, such that
$$T_{k+1} = \frac{rad_{max} + rad_{min}}{2} + (k + 1) \tag{17}$$
where k denotes the current threshold level applied to the spectral channels being compensated, i.e., the R, G, and B or NIR, R, and RE channels, and the increment of 1 advances the brightness threshold to the next level.
This process was repeated until the brightness threshold reached its saturation, such that
$$T_k = \frac{rad_{max} + rad_{min}}{2} + k \tag{18}$$
where Tk denotes the k-th threshold.
The brightness threshold saturation was reached when Tk+1 = Tk, denoting the maximum possible threshold level of the image; however, four threshold levels were applied in this study. Upon the successful radiometric compensation process, the supervised image classification was repeated with the exclusion of shadow as a land cover class, in order to determine how the areas of the cover types under the compensated situation deviated from those generated under the uncompensated situation.
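A hedged NumPy sketch of the iterative brightness tuning follows. The text does not fully specify how each threshold level acts on pixel values, so lifting occluded pixels to at least T_k at each level is one plausible reading under Equations (16)–(18), not the authors' exact procedure.

```python
import numpy as np

def brightness_tune(band, shadow_mask, n_levels=4):
    """Raise occluded-pixel brightness through thresholds T_1..T_n (four levels in the study)."""
    out = band.astype(float).copy()
    base = (band.max() + band.min()) / 2.0   # T_0, Equation (16)
    for k in range(1, n_levels + 1):
        t_k = base + k                       # T_k = (rad_max + rad_min)/2 + k, Equation (18)
        shaded = (shadow_mask == 1) & (out < t_k)
        out[shaded] = t_k                    # lift occluded pixels to the current threshold
    return out
```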

2.13. Spectral Radiance Evaluation of Compensated Land Features

The compensated radiometric properties of the occluded land features were evaluated to determine the extent to which they deviated from the same land feature types situated in the sunlit area of the experimental site. For this purpose, the relative error to the mean technique was applied to evaluate the radiance of both maize and soil using Equations (19) and (20) proposed by Thai et al. [45]:
$$REM_{maize} = \frac{\left| Lit_{maize} - compensated_{maize} \right|}{Lit_{maize}} \times 100\% \tag{19}$$
$$REM_{soil} = \frac{\left| Lit_{soil} - compensated_{soil} \right|}{Lit_{soil}} \times 100\% \tag{20}$$
This method facilitated the measurement of the amount of radiance error in the restored radiance properties of the occluded land feature types relative to the radiance amounts of the sunlit land features.
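For concreteness, a one-function Python sketch of the REM calculation in Equations (19) and (20); the example values are the green-channel maize means from Tables 5 and 6.

```python
def rem_percent(lit_mean, compensated_mean):
    """Relative error of the mean, in percent (Equations (19) and (20))."""
    return abs(lit_mean - compensated_mean) / lit_mean * 100.0

# Example: lit maize 0.733 vs compensated maize 0.539 (green channel)
# rem_percent(0.733, 0.539) -> ~26.5%, matching the 26.47% reported in Table 6
```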

3. Results

3.1. Demarcation of Shadow

In this study, shadow was demarcated via applying mean radiance values computed from the samples collected from the shaded region of the experimental site. Figure 3 shows the general spectral pattern of the mean radiance data sampled from all the spectral channels of the UAV imagery. The mean radiance values of shadow, as extracted from blue, green, red, NIR, and red-edge spectral channels, were observed to be 0.005, 0.005, 0.002, 0.004, and 0.003, respectively.
The computed mean radiance values were then used to demarcate the shadow on the respective spectral bands, as shown in Figure 4.

3.2. Land Cover Types at the Experimental Site

This study aimed to radiometrically compensate the occluded land cover in imagery acquired with a UAV system. The original images (TCC and FCC) showing the land cover types of the experimental site are shown in Figure 5. The existing land cover types in these images were cabbage, maize, and soil. In these images, maize and soil were noted to be subjected to occlusion by an adjacent tree. Whereas maize was observed to only be subjected to occlusion by an adjacent tree, soil was noted to be subjected to occlusion by both adjacent trees and aboveground maize cover.

3.3. Comparative Analysis of Spectral Mean Values of Lit Cover and Occluded Cover

A total of 30 pixels from each lit cover type and each occluded cover type were randomly sampled to determine variations in their spectral radiance values. The mean radiance values of various UAV spectral channels were computed for each lit cover type and each occluded cover type to achieve this. Table 3 shows the variations in mean spectral radiance values across lit and occluded land cover types. The computed descriptive statistics revealed significant variations in the mean spectral radiance values between lit and occluded land cover types across the spectral channels of the UAV sensor.

3.4. Spectral Vegetation Indices for Crop Characterization

Upon successful derivation of several spectral vegetation indices, it can be noted through visual interpretation that these indices demonstrated varying sensitivity levels in characterizing occluded crops (Figure 6). Whereas some spectral vegetation indices, such as GOSAVI (Figure 6d) and GNDVI (Figure 6f), showed an inability to clearly distinguish crops from soil, clear distinctions between crops and soil were visible with the NDVI (Figure 6a), SAVI (Figure 6b), OSAVI (Figure 6c), and NDRE (Figure 6e) spectral vegetation indices. Although OSAVI demonstrated its ability to draw a clear distinction between crops and soil, it generalized crops in the occluded region as soil; subsequently, the occluded soil was overestimated. Whereas NDVI and NDRE managed to show patterns in crops in the occluded region, SAVI showed patterns in soil in this region. Whereas NDVI and NDRE did not associate occluded soil with any of their spectral values, SAVI did not associate occluded crops with any of its spectral values.
Subsequently, spectral differences between crops and soil exposed to light and those in the occluded region were analyzed via comparing the profiled spectral mean radiance values (Table 4). Based on the mean spectral vegetation index value analysis, NDVI, SAVI, and OSAVI showed good detection ability of lit maize, with the mean values of 0.762, 0.706, and 0.643, respectively. The lit soil was better characterized by NDRE, with the mean value of 0.371. However, the highest mean spectral values of occluded maize were shown by NDVI and OSAVI (0.078 and 0.07, respectively). The occluded soil, on the other hand, was noted to stand out with NDRE, with a mean spectral index value of 0.468.

3.5. Spatial Pattern Analysis of Crops and Soil

Upon successful analysis of lit and occluded crops and soil from the selected spectral indices, the crops and soil were classified to determine the area that they covered. In this case, shadow was also included as a land cover class due to the inability of the explored spectral vegetation indices to effectively detect both crops and soil in the occluded region, i.e., specific indices could only effectively detect either crops or soil, or neither. Figure 7 shows the spatial configuration of the land cover types at the experimental site. It was noted that the experimental site was characterized by three (3) types of land cover, i.e., cabbage, maize, and soil. These land cover maps were generated by employing four supervised image classification techniques, viz., k-nearest neighbor (KNN) (Figure 7a), maximum likelihood (Max_Like) (Figure 7b), multilayer perceptron neural network (MLP) (Figure 7c), and object-oriented (segmentation) (Figure 7d) classifiers. The deployment of several classifiers in this study was based on the need to evaluate whether these classifiers could detect crops and soil in the occluded region without having been trained to recognize them.
The area covered by each land cover type at the experimental site was assessed, with special attention placed on the occluded region because the radiance properties of land features situated within it were compromised. Analysis of this region provided insight regarding the radiant nature and extent of the compromised land features. Moreover, areal analyses of different land features were carried out to evaluate the efficacy of the selected classifiers in estimating the area occupied by the occluded region, or shadow. From Figure 8, it was noted that the segmentation classifier predicted the occluded region as the largest land feature at the experimental site, with an areal share of 40.84%. The KNN classifier also predicted a large occluded region, with an area of 37.57%. The MLP and maximum likelihood classifiers showed the occluded shadow covering areas of 25.09% and 28.47%, respectively.

3.6. Radiometric Compensation

The radiometric compensation analyses of four threshold levels applied on both FCC and TCC images are presented in Figure 9 and Figure 10. From Figure 9, the brightness-tuning results for the FCC image showed increased radiance intensity in the lit region of the experimental site at the T1, T2, T3, and T4 threshold levels. However, there was no radiometric restoration noted in the occluded region, as expected. By implication, the brightness-tuning process did not yield the anticipated results when carried out on the FCC imagery produced from the fusion of the NIR, red, and red-edge spectral channels of the UAV sensor. Through visual interpretation of Figure 10, the brightness-tuning results revealed gradual increases in radiance intensity in both the lit and occluded regions of the experimental site as threshold level increased. This radiometric compensation technique yielded reasonably better results when performed on the TCC image since radiometric properties of the land features located in the occluded region were reasonably restored. It was subsequently noted that maize cultivars and soil were the only land feature types subjected to occlusion.

3.7. Spectral Analysis of Radiometrically Compensated Maize and Soil

Upon successful restoration of the radiometric properties of the occluded land features, restored spectral radiance samples of the occluded land features were collected by digitizing point shapefiles and extracting the restored spectral radiance values on which the points were superimposed. Subsequently, the mean spectral radiance value of each occluded land feature was calculated and compared with the values sampled from the lit region and from the occluded region prior to the radiometric restoration process. Table 5 and Figure 11 show the mean spectral radiance values of land features for the lit area and the occluded region precompensation and postcompensation at the fourth threshold level.
From Figure 11, it was noted that the spectral radiance values of both occluded maize and soil substantially increased. Table 6 shows the relative error of the mean results for land features at the experimental site. The computed REM results for radiance comparison between the lit and occluded regions revealed a 26.47% deviation of the restored radiance of occluded maize from that of lit maize. On the other hand, the radiometric properties of soil were not accurately restored, since the lowest REM value for this land feature was 50.92%, obtained for the blue channel of the UAV sensor.

3.8. Spatial Configuration of Land Feature Types Postradiometric Compensation

Upon successful analysis of radiometrically compensated land features, land features were classified again with a view to determine the lost area as a result of occlusion. Figure 12 shows the land feature classes after radiometric compensation of the occluded region as predicted by (a) KNN, (b) maximum likelihood, (c) MLP, and (d) object-oriented classifiers. The oval shape in each classified image shows the radiometrically restored land features in the occluded region of the imagery.
Moreover, areal analyses of different land features were carried out to evaluate the deviation of land feature areas after the radiometric compensation process. Table 7 shows the area covered by each land feature type as predicted prior to radiometric compensation (land feature types predicted in Figure 7) and post-radiometric compensation (land feature types predicted in Figure 12).
From Table 7, the selected image classifiers showed varying ability in classifying radiometrically compensated land features. However, there was a general increase in the land feature types under occlusion as predicted by all the classifiers. It was noted that the areas covered by maize cultivars and soil increased by 40.56% and 12.37%, respectively, after being radiometrically compensated, as predicted by the KNN classifier. Moreover, the maximum likelihood, MLP, and segmentation classifiers predicted increases in area covered with maize of 18.03%, 22.42%, and 30.64%, respectively. Ultimately, these classifiers also predicted increases in the area covered with soil of 1.46%, 10.05%, and 14.29%, respectively.

4. Discussion

The importance of analyzing the spectral properties of land features has been well received across several disciplines, such as crop type and condition assessment [46]. Shadow has the ability to cause a loss in radiometric information, leading to pixel misclassification and image misinterpretation [47]. Shadows are an unavoidable component of high-resolution remotely sensed imagery, and their impact increases as the spatial resolution of the imagery increases [48]. Poblete et al. [49] also noted that accurate extraction of pure crop and bare soil pixels is always challenging as a result of the influence of shaded pixels. This study was aimed at proposing a simple approach for radiometrically compensating crops under occlusion for their improved quantification. The UAV system facilitated the successful acquisition of high-spatial-resolution imagery for characterizing crops at the experimental site. Milas et al. [50] noted that UAV systems are capable of providing detailed information about land features, even at a resolution of several centimeters, and that removing shadow from data acquired using these systems is not easy. This is even more difficult if the imagery is to be subjected to an image classification process, owing to the challenge of describing the distinct properties of various land feature classes using single-level features [44]. Movia et al. [47] also noted that, although high-spatial-resolution UAV images facilitate the retrieval of many land feature information types, they are subject to classification problems as a result of shadow. In this study, dead pixels were noted when characterizing occluded land feature types with spectral vegetation indices; these pixels appeared with no spectral radiance value, especially under the NDVI, SAVI, NDRE, and GNDVI spectral vegetation indices. To achieve accurate and automatic crop detection, along with correct segmentation parameters, an automatic and efficient method is needed to find the threshold value that sets the breakpoint between vegetation and bare soil.
Although Milas et al. [50] also noted a higher sensitivity of several classification algorithms to shadow at different spatial resolutions, their study attributed this sensitivity to variations in the texture of land features: texture would not be properly identified and, subsequently, the shadow could neither be identified nor eliminated properly. In this study, the KNN classifier classified some soil as maize, whereas some maize cultivars were recognized as cabbage by the maximum likelihood classifier. This could be attributed to the radiometric restoration applied to the imagery. In their study on removing shadow through separated illumination correction for urban aerial remote-sensing images, Luo et al. [51] also noted that some road and vegetation fragments in the occluded regions were categorized as buildings. Moreover, several existing shadow removal methods evaluate their results only qualitatively, as no shadow-free ground truth is available [51], and no technique can deal with a shadow projected on a complex texture [50]. The results from visual and statistical assessments indicated a significant difference between soil/vegetation indices in sunlit and shaded pixels [52]. The areas of discontinuity between illuminated and occluded land features were not consistent across the experimental site due to variations in the illumination condition, as also noted in the study of Pons and Padró [53]. The area coverage of maize was noted to be exaggerated as a result of the spectral similarity between maize and weeds located among the cabbage. Moreover, the REM results of radiometrically compensated soil revealed the limitation of the proposed approach in completely restoring occluded soil. It is important to note that, during the radiometric compensation process, the land feature types in the sunlit area were also subjected to the process; this might have influenced the accuracy of the classification of land feature types after compensation. Moreover, this study did not employ field-based measurements to verify the precision of the estimated areal coverage of occluded crops. As such, the reliability of the classifiers was evaluated via visual comparison of the classified land cover types with those shown in the TCC image, with only ground-based knowledge, without measurements, employed in the recognition of land feature types.

5. Conclusions

This study demonstrated the importance of radiometrically compensating crops for informed quantification of their areal coverage. This was achieved by employing brightness compensation and the thresholding of visible spectral bands fused in an RGB composite image. The reliability of the proposed approach was evaluated by analyzing the relative error of the mean results, which compared the mean spectral radiance of the compensated land features to those in the sunlit area. Image classification results for KNN, maximum likelihood, MLP, and segmentation classifiers revealed an increase in the area covered by maize and soil in radiometrically compensated images compared to the area coverage prior to radiometric compensation. Overall, the results of this study highlighted the significance of radiometrically compensating occluded crops for their precise quantification. Ultimately, this study emphasized the ongoing significance of remote-sensing technology in addressing agricultural issues that hamper the successful attainment of poverty alleviation and food security goals.

Author Contributions

Conceptualization, N.N. and Y.M.; Methodology, N.N. and K.H.T.; Software, K.H.T.; Formal analysis, N.N. and A.N.; Investigation, N.N., K.H.T., Y.M. and A.N.; Resources, A.N.; Writing—original draft, Y.M.; Writing—review & editing, N.N., K.H.T. and A.N.; Visualization, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Govan Mbeki Research and Development Center, University of Fort Hare.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the volume of the data from which the data of interest were extracted.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lobell, D.B.; Thau, D.; Seifert, C.; Engle, E.; Little, B. A scalable satellite-based crop yield mapper. Remote Sens. Environ. 2015, 164, 324–333.
  2. Sakamoto, T. Refined shape model fitting methods for detecting various types of phenological information on major U.S. crops. ISPRS J. Photogramm. Remote Sens. 2018, 138, 176–192.
  3. Löw, F.; Duveiller, G. Defining the Spatial Resolution Requirements for Crop Identification Using Optical Remote Sensing. Remote Sens. 2014, 6, 9034–9063.
  4. Peña, J.M.; Torres-Sánchez, J.; Serrano-Pérez, A.; De Castro, A.I.; López-Granados, F. Quantifying Efficacy and Limits of Unmanned Aerial Vehicle (UAV) Technology for Weed Seedling Detection as Affected by Sensor Resolution. Sensors 2015, 15, 5609–5626.
  5. Gómez-Candón, D.; De Castro, A.I.; López-Granados, F. Assessing the accuracy of mosaics from unmanned aerial vehicle (UAV) imagery for precision agriculture purposes in wheat. Precis. Agric. 2014, 15, 44–56.
  6. Wu, B.; Liang, A.; Zhang, H.; Zhu, T.; Zou, Z.; Yang, D.; Tang, W.; Li, J.; Su, J. Application of conventional UAV-based high-throughput object detection to the early diagnosis of pine wilt disease by deep learning. For. Ecol. Manag. 2021, 486, 118986.
  7. Jin, X.; Zarco-Tejada, P.J.; Schmidhalter, U.; Reynolds, M.P.; Hawkesford, M.J.; Varshney, R.K.; Yang, T.; Nie, C.; Li, Z.; Ming, B.; et al. High-Throughput Estimation of Crop Traits: A Review of Ground and Aerial Phenotyping Platforms. IEEE Geosci. Remote Sens. Mag. 2021, 9, 200–231.
  8. Panuju, D.R.; Paull, D.J.; Griffin, A.L. Change Detection Techniques Based on Multispectral Images for Investigating Land Cover Dynamics. Remote Sens. 2020, 12, 1781.
  9. Kedzierski, M.; Wierzbicki, D. Radiometric quality assessment of images acquired by UAV’s in various lighting and weather conditions. Measurement 2015, 76, 156–169.
  10. Jenerowicz, A.; Wierzbicki, D.; Kedzierski, M. Radiometric Correction with Topography Influence of Multispectral Imagery Obtained from Unmanned Aerial Vehicles. Remote Sens. 2023, 15, 2059.
  11. Moghimi, A.; Celik, T.; Mohammadzadeh, A. Tensor-based keypoint detection and switching regression model for relative radiometric normalization of bitemporal multispectral images. Int. J. Remote Sens. 2022, 43, 3927–3956.
  12. Xu, H.; Wei, Y.; Li, X.; Zhao, Y.; Cheng, Q. A Novel Automatic Method on Pseudo-Invariant Features Extraction for Enhancing the Relative Radiometric Normalization of High-Resolution Images. Int. J. Remote Sens. 2021, 42, 6155–6186.
  13. Jaud, M.; Sicot, G.; Brunier, G.; Michaud, E.; Le Dantec, N.; Ammann, J.; Grandjean, P.; Launeau, P.; Thouzeau, G.; Fleury, J.; et al. Easily Implemented Methods of Radiometric Corrections for Hyperspectral–UAV—Application to Guianese Equatorial Mudbanks Colonized by Pioneer Mangroves. Remote Sens. 2021, 13, 4792.
  14. Chaudhry, M.H.; Ahmad, A.; Gulzar, Q.; Farid, M.S.; Shahabi, H.; Al-Ansari, N. Assessment of DSM Based on Radiometric Transformation of UAV Data. Sensors 2021, 21, 1649.
  15. Chi, J.; Kim, J.-I.; Lee, S.; Jeong, Y.; Kim, H.-C.; Lee, J.; Chung, C. Geometric and Radiometric Quality Assessments of UAV-Borne Multi-Sensor Systems: Can UAVs Replace Terrestrial Surveys? Drones 2023, 7, 411.
  16. Kedzierski, M.; Wierzbicki, D.; Sekrecka, A.; Fryskowska, A.; Walczykowski, P.; Siewert, J. Influence of Lower Atmosphere on the Radiometric Quality of Unmanned Aerial Vehicle Imagery. Remote Sens. 2019, 11, 1214.
  17. Comber, A.; Umezaki, M.; Zhou, R.; Ding, Y.; Li, Y.; Fu, H.; Jiang, H.; Tewkesbury, A. Using shadows in high-resolution imagery to determine building height. Remote Sens. Lett. 2012, 3, 551–556.
  18. Zhou, T.; Fu, H.; Sun, C.; Wang, S. Shadow Detection and Compensation from Remote Sensing Images under Complex Urban Conditions. Remote Sens. 2021, 13, 699.
  19. Silva, G.F.; Carneiro, G.B.; Doth, R.; Amaral, L.A.; De Azevedo, D.F.G. Near real-time shadow detection and removal in aerial motion imagery application. ISPRS J. Photogramm. Remote Sens. 2017, 140, 104–121.
  20. Adeline, K.; Chen, M.; Briottet, X.; Pang, S.; Paparoditis, N. Shadow detection in very high spatial resolution aerial images: A comparative study. ISPRS J. Photogramm. Remote Sens. 2013, 80, 21–38.
  21. Alavipanah, S.K.; Karimi Firozjaei, M.; Sedighi, A.; Fathololoumi, S.; Zare Naghadehi, S.; Saleh, S.; Naghdizadegan, M.; Gomeh, Z.; Arsanjani, J.J.; Makki, M.; et al. The Shadow Effect on Surface Biophysical Variables Derived from Remote Sensing: A Review. Land 2022, 11, 2025.
  22. Magno, R.; Rocchi, L.; Dainelli, R.; Matese, A.; Di Gennaro, S.F.; Chen, C.-F.; Son, N.-T.; Toscano, P. AgroShadow: A New Sentinel-2 Cloud Shadow Detection Tool for Precision Agriculture. Remote Sens. 2021, 13, 1219.
  23. Tarko, A.; De Bruin, S.; Bregt, A.K. Comparison of manual and automated shadow detection on satellite imagery for agricultural land delineation. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 493–502.
  24. Shahtahmassebi, A.; Yang, N.; Wang, K.; Moore, N.; Shen, Z. Review of shadow detection and de-shadowing methods in remote sensing. Chin. Geogr. Sci. 2013, 23, 403–420.
  25. Anoopa, S.; Dhanya, V.; Kizhakkethottam, J.J. Shadow Detection and Removal Using Tri-Class Based Thresholding and Shadow Matting Technique. Procedia Technol. 2016, 24, 1358–1365.
  26. Anju, J.; Krishna, A.R. Shadow detection using object-oriented segmentation, its analysis and removal from high resolution remote sensing images. Int. J. Electron. Commun. Eng. 2015, 8, 43–49.
  27. Ni, Y.; Mao, J.; Fu, Y.; Wang, H.; Zong, H.; Luo, K. Damage Detection and Localization of Bridge Deck Pavement Based on Deep Learning. Sensors 2023, 23, 5138.
  28. Zhang, G.; Cerra, D.; Müller, R. Shadow Detection and Restoration for Hyperspectral Images Based on Nonlinear Spectral Unmixing. Remote Sens. 2020, 12, 3985.
  29. Susaki, J. Segmentation of Shadowed Buildings in Dense Urban Areas from Aerial Photographs. Remote Sens. 2012, 4, 911–933.
  30. Ma, X.; Man, Q.; Yang, X.; Dong, P.; Yang, Z.; Wu, J.; Liu, C. Urban Feature Extraction within a Complex Urban Area with an Improved 3D-CNN Using Airborne Hyperspectral Data. Remote Sens. 2023, 15, 992.
  31. Li, F.; Wang, Z.; He, G. AP Shadow Net: A Remote Sensing Shadow Removal Network Based on Atmospheric Transport and Poisson’s Equation. Entropy 2022, 24, 1301.
  32. Kephe, P.N.; Petja, B.M.; Kabanda, T.A. Spatial and inter-seasonal behaviour of rainfall in the Soutpansberg region of South Africa as attributed to the changing climate. Theor. Appl. Climatol. 2016, 126, 233–245.
  33. Daniels, L.; Eeckhout, E.; Wieme, J.; Dejaegher, Y.; Audenaert, K.; Maes, W.H. Identifying the Optimal Radiometric Calibration Method for UAV-Based Multispectral Imaging. Remote Sens. 2023, 15, 2909.
  34. Jiang, J.; Johansen, K.; Tu, Y.H.; McCabe, M.F. Multi-sensor and multi-platform consistency and interoperability between UAV, Planet CubeSat, Sentinel-2, and Landsat reflectance data. GIScience Remote Sens. 2022, 59, 936–958.
  35. Bhardwaja, S.; Mittal, S. A Survey on Various Edge Detector Techniques. Procedia Technol. 2012, 4, 220–226.
  36. Yin, Z.; Zheng, M.; Ren, Y. A ViSAR Shadow-Detection Algorithm Based on LRSD Combined Trajectory Region Extraction. Remote Sens. 2023, 15, 1542.
  37. Sobel, I.; Feldman, G. A 3 × 3 Isotropic Gradient Operator for Image Processing. Pattern Classification and Scene Analysis. 1973, pp. 271–272. Available online: https://www.researchgate.net/publication/285159837_A_33_isotropic_gradient_operator_for_image_processing (accessed on 9 August 2023).
  38. Filgueiras, R.; Mantovani, E.C.; Althoff, D.; Fernandes Filho, E.I.; Cunha, F.F.d. Crop NDVI Monitoring Based on Sentinel 1. Remote Sens. 2019, 11, 1441.
  39. Mangewa, L.J.; Ndakidemi, P.A.; Alward, R.D.; Kija, H.K.; Bukombe, J.K.; Nasolwa, E.R.; Munishi, L.K. Comparative Assessment of UAV and Sentinel-2 NDVI and GNDVI for Preliminary Diagnosis of Habitat Conditions in Burunge Wildlife Management Area, Tanzania. Earth 2022, 3, 769–787.
  40. Wang, W.; Yao, X.; Tian, Y.-C.; Liu, X.-J.; Ni, J.; Cao, W.-X.; Zhu, Y. Common Spectral Bands and Optimum Vegetation Indices for Monitoring Leaf Nitrogen Accumulation in Rice and Wheat. J. Integr. Agric. 2012, 11, 2001–2012.
  41. Bastiaanssen, W.G.M.; Molden, D.J.; Makin, I.W. Remote sensing for irrigated agriculture: Examples from research and possible applications. Agric. Water Manag. 2000, 46, 137–155.
  42. Ji, J.; Li, N.; Cui, H.; Li, Y.; Zhao, X.; Zhang, H.; Ma, H. Study on Monitoring SPAD Values for Multispatial Spatial Vertical Scales of Summer Maize Based on UAV Multispectral Remote Sensing. Agriculture 2023, 13, 1004.
  43. Crema, A.; Boschetti, M.; Nutini, F.; Cillis, D.; Casa, R. Influence of Soil Properties on Maize and Wheat Nitrogen Status Assessment from Sentinel-2 Data. Remote Sens. 2020, 12, 2175.
  44. Li, T.; Hu, D.; Wang, Y.; Di, Y.; Liu, M. Correcting remote-sensed shaded image with urban surface radiative transfer model. Int. J. Appl. Earth Obs. Geoinf. 2021, 106, 102654.
  45. Huynh Thai, H.; Silhavy, P.; Fajkus, M.; Prokopova, Z.; Silhavy, R. Propose-Specific Information Related to Prediction Level at x and Mean Magnitude of Relative Error: A Case Study of Software Effort Estimation. Mathematics 2022, 10, 4649.
  46. Xue, J.; Su, B. Significant Remote Sensing Vegetation Indices: A Review of Developments and Applications. J. Sens. 2017, 2017, 1353691.
  47. Movia, A.; Beinat, A.; Crosilla, F. Shadow detection and removal in RGB VHR images for land use unsupervised classification. ISPRS J. Photogramm. Remote Sens. 2016, 119, 485–495.
  48. Dare, P.M. Shadow Analysis in High-Resolution Satellite Imagery of Urban Areas. Photogramm. Eng. Remote Sens. 2005, 71, 169–177.
  49. Poblete, T.; Ortega-Farías, S.; Ryu, D. Automatic Coregistration Algorithm to Remove Canopy Shaded Pixels in UAV-Borne Thermal Images to Improve the Estimation of Crop Water Stress Index of a Drip-Irrigated Cabernet Sauvignon Vineyard. Sensors 2018, 18, 397.
  50. Milas, A.S.; Arend, K.; Mayer, C.; Simonson, M.A.; Mackey, S. Different colours of shadows: Classification of UAV images. Int. J. Remote Sens. 2017, 38, 3084–3100.
  51. Luo, S.; Shen, H.; Li, H.; Chen, Y. Shadow removal based on separated illumination correction for urban aerial remote sensing images. Signal Process. 2019, 165, 197–208.
  52. Ma, H.; Qin, Q.; Shen, X. Shadow Segmentation and Compensation in High Resolution Satellite Images. In Proceedings of the IGARSS 2008—2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008; Volume 2, pp. 1036–1039.
  53. Pons, X.; Padró, J.-C. An empirical approach on shadow reduction of UAV imagery in forests. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019.
Figure 1. Map of the experimental site.
Figure 2. Schematic flowchart diagram explaining the employed approach.
Figure 3. Mean spectral radiance profile of shadow.
Figure 4. Demarcated shadow based on mean spectral radiance profile.
Figure 5. (a) RGB image and (b) FCC image of the experimental site.
Figure 6. Land feature types as predicted by (a) NDVI, (b) SAVI, (c) OSAVI, (d) GOSAVI, (e) NDRE, and (f) GNDVI spectral indices.
Figure 7. Land feature types as predicted by (a) KNN, (b) maximum likelihood, (c) MLP, and (d) segmentation.
Figure 8. Areal extent (%) of the land features at the experimental site.
Figure 9. Brightness-tuning results at (a) T1, (b) T2, (c) T3, and (d) T4 threshold levels for FCC image.
Figure 10. Brightness-tuning results at (a) T1, (b) T2, (c) T3, and (d) T4 threshold levels for TCC image.
Figure 11. Mean spectral radiance patterns of maize and soil for different regions of the experimental site.
Figure 12. Land feature classes post-radiometric compensation as predicted by (a) KNN, (b) maximum likelihood, (c) MLP, and (d) segmentation classifiers.
Table 1. Description of the UAV sensor used to acquire the imagery.

UAV               Features              Description
Platform          Weight                6 kg
                  Altitude              120 m
                  Area covered          8711 m²
                  Flight time           13 min 28 s
                  Speed                 10 m s⁻¹
                  Visible satellites    13
                  Overlap               75%
                  Side lap              75%
                  Dimension             1.2 m
Red-Edge sensor   Spectral bands        Red, green, blue, NIR, and red-edge
                  Focal length          5.5 mm
                  Field of view         7.2 degrees
                  Weight                150 g
                  Image resolution      1280 × 960 pixels
                  Spatial resolution    8.5 cm
Table 3. Mean spectral values of lit and occluded land cover types.

Land Features     Blue    Green   Red     NIR     Red-Edge
Lit soil          0.489   0.722   0.596   0.887   0.427
Occluded soil     0.001   0.001   0.001   0.002   0.002
Occluded maize    0.014   0.023   0.002   0.017   0.027
Lit maize         0.336   0.733   0.109   0.791   0.940
Table 4. Mean spectral index values of the selected vegetation indices.

                  NDVI    SAVI    OSAVI   GOSAVI   NDRE     GNDVI
Lit maize         0.762   0.706   0.643   0.041    −0.092   0.047
Occluded maize    0.078   0.023   0.070   −0.026   −0.133   −0.174
Lit soil          0.194   0.243   0.176   0.103    0.371    0.115
Occluded soil     0.086   0.001   0.004   0.006    0.468    0.157
Table 5. Mean spectral radiance values of land features for lit area and occluded and compensated areas.

          Sunlit                   Occluded                 Compensated
          Blue    Green   Red      Blue    Green   Red      Blue    Green   Red
Maize     0.336   0.733   0.109    0.014   0.023   0.002    0.079   0.539   0.267
Soil      0.489   0.722   0.596    0.001   0.001   0.001    0.240   0.141   0.102
Table 6. The relative error of the mean results for the restored radiance of the occluded land features.

             Blue    Green   Red
REM maize    76.49   26.47   153.21
REM soils    50.92   80.47   82.89
Table 7. Areal comparison of land feature types before and after radiometric compensation.

           Area Covered (%) Precompensation         Area Covered (%) Postcompensation
           KNN     Max_Like   MLP     Segmentation   KNN     Max_Like   MLP     Segmentation
Cabbage    25.09   25.96      24.88   24.36          28.33   32.11      20.56   21.09
Maize      8.81    10.13      12.83   7.92           49.37   28.16      35.25   38.56
Soil       27.36   38.27      35.14   26.06          22.33   9.73       44.19   40.35
