
UAVs in Sustainable Agriculture

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 43787

Special Issue Editors


Guest Editor
Guest Editor
National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Wushan Road, Guangzhou 510642, China
Interests: precision agricultural aviation; agricultural UAV; digital agriculture

Guest Editor
Department of Rural & Bio-Systems Engineering, Chonnam National University, Gwangju 61186, Korea
Interests: 3D computer vision; robotics; digital agriculture

Guest Editor
1. College of Engineering, South China Agricultural University, Wushan Road, Guangzhou 510642, China
2. National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou 510642, China
Interests: sensing technologies; agricultural UAV application

Guest Editor
Department of Agronomy, PMAS-Arid Agriculture University Rawalpindi, Rawalpindi 46300, Pakistan
Interests: UAV-based crop monitoring including plant health, disease and pest, and biomass and yield

Special Issue Information

Dear Colleagues,

The use of unmanned aerial vehicles (UAVs) in agriculture was introduced only a few years ago. In many Asian and African countries, agricultural fields are complex entities characterized by small landholdings. Recent advances in image processing techniques and improved flight efficiency have made UAVs the best option for real-time crop and orchard management and decision making on small landholdings. UAVs equipped with sensors can quickly and accurately obtain low-altitude remote sensing data, enabling precision management of agricultural production and helping to develop modern, efficient, and sustainable agriculture.

This Special Issue on “UAVs in Sustainable Agriculture” will mainly focus on topics related to the use of UAVs for crop monitoring, including growth rate, LAI, soil and plant nutrition, disease and pest detection, biomass and yield estimation, and image processing. Submissions based on artificial intelligence (AI) and deep learning (DL) algorithms are encouraged.

Prof. Dr. Yubin Lan
Prof. Dr. Wenjiang Huang
Prof. Dr. Kyeong-Hwan Lee
Dr. Yali Zhang
Dr. Muhammad Naveed Tahir
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Precision agricultural aviation (PAA)
  • Unmanned aerial vehicle (UAV)
  • Sustainable agriculture
  • Artificial Intelligence (AI)
  • Deep learning (DL)
  • Digital agriculture
  • Agricultural information acquisition

Published Papers (10 papers)


Research

17 pages, 4999 KiB  
Article
Method for Identifying Litchi Picking Position Based on YOLOv5 and PSPNet
by Xiaokang Qi, Jingshi Dong, Yubin Lan and Hang Zhu
Remote Sens. 2022, 14(9), 2004; https://doi.org/10.3390/rs14092004 - 21 Apr 2022
Cited by 27 | Viewed by 3366
Abstract
China is the world’s largest producer of litchi. At present, however, litchi is mainly picked manually, which imposes high labor intensity and low efficiency on fruit farmers, so intelligent unmanned picking systems have broad prospects. Precise location of the picking point on the litchi’s main stem is essential for the path planning of such a system. Some researchers have identified litchi fruit and branches, but relatively little research has addressed locating the picking point on the main stem. This paper therefore presents a new open-access workflow for detecting accurate picking locations on main stems, along with the data used in the case study. It also compares several network architectures for main stem detection and segmentation, selecting YOLOv5 and PSPNet as the most promising models for the detection and segmentation tasks, respectively. The workflow combines deep learning with traditional image processing to compute the picking point’s location in a litchi image: YOLOv5 detects the main stem, the detected region of interest (ROI) is extracted, PSPNet semantically segments the ROI, image post-processing yields the pixel coordinates of the picking point within the ROI, and a coordinate conversion maps these back to the original image, where the picking point is drawn. The recall and precision of this method were 76.29% and 92.50%, respectively, which lays a foundation for subsequent work on obtaining the three-dimensional coordinates of the picking point from image depth information, although that work is not done in this paper. Full article
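As a rough illustration of the final post-processing step, the sketch below computes a picking-point pixel coordinate from a binary main-stem mask and converts it back to full-image coordinates. The function name and the midpoint heuristic are illustrative assumptions, not the paper’s exact morphology operations.

```python
import numpy as np

def picking_point_from_mask(stem_mask, roi_origin):
    """Estimate a picking point from a binary main-stem mask (hypothetical
    post-processing; the paper's exact image operations are not shown).

    stem_mask  : 2-D 0/1 array, segmentation of the main stem within the ROI
    roi_origin : (x0, y0) top-left corner of the detected ROI in the full image
    """
    ys, xs = np.nonzero(stem_mask)
    if len(ys) == 0:
        return None
    # take the vertical midpoint of the stem, then the horizontal centre
    # of the mask pixels on that row as a simple picking location
    mid_y = int(round((ys.min() + ys.max()) / 2))
    row_xs = xs[ys == mid_y]
    if len(row_xs) == 0:  # fall back to the nearest occupied row
        mid_y = int(ys[np.argmin(np.abs(ys - mid_y))])
        row_xs = xs[ys == mid_y]
    mid_x = int(round(row_xs.mean()))
    # convert ROI-local pixel coordinates to full-image coordinates
    x0, y0 = roi_origin
    return (x0 + mid_x, y0 + mid_y)
```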
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Graphical abstract

18 pages, 3697 KiB  
Article
The Optimal Phenological Phase of Maize for Yield Prediction with High-Frequency UAV Remote Sensing
by Bin Yang, Wanxue Zhu, Ehsan Eyshi Rezaei, Jing Li, Zhigang Sun and Junqiang Zhang
Remote Sens. 2022, 14(7), 1559; https://doi.org/10.3390/rs14071559 - 24 Mar 2022
Cited by 28 | Viewed by 4352
Abstract
Unmanned aerial vehicle (UAV)-based multispectral remote sensing effectively monitors agro-ecosystem functioning and predicts crop yield. However, the timing of remote sensing field campaigns can profoundly impact the accuracy of yield predictions. Little is known about how phenological phase affects the skill of high-frequency sensing observations in predicting maize yield, or about how much improvement multi-temporal data offer over mono-temporal data. We addressed those gaps with a systematic scheme employing UAV multispectral observations at nine development stages of maize (from second-leaf to maturity). The spectral and texture indices calculated from the mono-temporal and multi-temporal UAV images were then fed into a Random Forest model for yield prediction. Our results indicated that multi-temporal UAV data remarkably enhance yield prediction accuracy compared with mono-temporal UAV data (R2 increased by 8.1% and RMSE decreased by 27.4%). For a single UAV observation, the fourteenth-leaf stage was the earliest suitable time and the milking stage the optimal time to estimate grain yield. For multi-temporal UAV data, the combination of tasseling, silking, milking, and dough stages exhibited the highest yield prediction accuracy (R2 = 0.93, RMSE = 0.77 t·ha−1). Furthermore, the Normalized Difference Red Edge Index (NDRE), Green Normalized Difference Vegetation Index (GNDVI), and the dissimilarity of the near-infrared image at the milking stage were the most promising feature variables for maize yield prediction. Full article
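The two spectral indices highlighted above have standard band-ratio definitions; a minimal numpy sketch of those definitions (not the authors’ code) is:

```python
import numpy as np

def ndre(nir, red_edge):
    """Normalized Difference Red Edge Index: (NIR - RE) / (NIR + RE)."""
    nir, red_edge = np.asarray(nir, float), np.asarray(red_edge, float)
    return (nir - red_edge) / (nir + red_edge)

def gndvi(nir, green):
    """Green Normalized Difference Vegetation Index: (NIR - G) / (NIR + G)."""
    nir, green = np.asarray(nir, float), np.asarray(green, float)
    return (nir - green) / (nir + green)
```

Both functions accept per-pixel reflectance arrays, so an index map for a whole orthomosaic is a single call.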
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Figure 1

19 pages, 5357 KiB  
Article
Deep-Learning-Based Multispectral Image Reconstruction from Single Natural Color RGB Image—Enhancing UAV-Based Phenotyping
by Jiangsan Zhao, Ajay Kumar, Balaji Naik Banoth, Balram Marathi, Pachamuthu Rajalakshmi, Boris Rewald, Seishi Ninomiya and Wei Guo
Remote Sens. 2022, 14(5), 1272; https://doi.org/10.3390/rs14051272 - 05 Mar 2022
Cited by 13 | Viewed by 5062
Abstract
Multispectral images (MSIs) are valuable for precision agriculture due to the extra spectral information they acquire compared with natural color RGB (ncRGB) images. In this paper, we thus aim to generate high-spatial-resolution MSIs through a robust, deep-learning-based reconstruction method using ncRGB images. Using data from an agronomic research trial on maize and a breeding research trial on rice, we first reproduced ncRGB images from MSIs through a rendering model, Model-True to natural color image (Model-TN), which was built using a benchmark hyperspectral image dataset. Subsequently, an MSI reconstruction model, Model-Natural color to Multispectral image (Model-NM), was trained on prepared ncRGB (ncRGB-Con) image and MSI pairs, ensuring that the model can use widely available ncRGB images as input. An integrated loss function combining mean relative absolute error (MRAEloss) and spectral information divergence (SIDloss) was most effective during the building of both models, while models using the MRAEloss function were more robust to variability between growing seasons and species. The reliability of the reconstructed MSIs was demonstrated by high coefficients of determination against ground truth values, using the Normalized Difference Vegetation Index (NDVI) as an example. The advantage of the “reconstructed” NDVI over the Triangular Greenness Index (TGI), calculated directly from RGB images, was illustrated by its higher capability to differentiate three levels of irrigation treatment on maize plants. This study emphasizes that the performance of MSI reconstruction models can benefit from an optimized loss function and the intermediate step of ncRGB image preparation. The ability of the developed models to reconstruct high-quality MSIs from low-cost ncRGB images will, in particular, promote the application of plant phenotyping in precision agriculture. Full article
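The two loss terms named above have common definitions, sketched below in numpy; the exact weighting of the integrated loss and the epsilon smoothing are illustrative assumptions, not the paper’s settings.

```python
import numpy as np

def mrae(y_true, y_pred, eps=1e-8):
    """Mean Relative Absolute Error: mean(|y - y_hat| / y)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred) / (y_true + eps)))

def sid(y_true, y_pred, eps=1e-8):
    """Spectral Information Divergence between two spectra: the symmetric
    KL divergence of the band-normalized spectra."""
    p = np.asarray(y_true, float) + eps
    q = np.asarray(y_pred, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))
```

SID is zero for identical spectra and grows as the band-wise probability distributions diverge, which is why it complements a magnitude-based error such as MRAE.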
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Figure 1

20 pages, 10108 KiB  
Article
Transformer Neural Network for Weed and Crop Classification of High Resolution UAV Images
by Reenul Reedha, Eric Dericquebourg, Raphael Canals and Adel Hafiane
Remote Sens. 2022, 14(3), 592; https://doi.org/10.3390/rs14030592 - 26 Jan 2022
Cited by 80 | Viewed by 9490
Abstract
Monitoring crops and weeds is a major challenge in agriculture and food production today. Weeds compete directly with crops for moisture, nutrients, and sunlight, and therefore have a significant negative impact on crop yield if not sufficiently controlled. Weed detection and mapping is an essential step in weed control. Many existing studies recognize the importance of remote sensing systems and machine learning algorithms in weed management. Deep learning approaches have shown good performance in many agriculture-related remote sensing tasks, such as plant classification and disease detection. However, despite their success, these approaches still face many challenges, such as high computation cost, the need for large labelled datasets, and intra-class discrimination (during the growing phase, weeds and crops share many similar attributes, such as color, texture, and shape). This paper aims to show that attention-based deep networks are a promising approach to address the aforementioned problems, in the context of weed and crop recognition with drone systems. The specific objective of this study was to investigate vision transformers (ViT) and apply them to plant classification in unmanned aerial vehicle (UAV) images. Data were collected using a high-resolution camera mounted on a UAV, which was deployed in beet, parsley, and spinach fields. The acquired data were augmented to build a larger dataset; since ViT requires large sample sets for good performance, we also adopted a transfer learning strategy. Experiments were set out to assess the effect of training and validation dataset size, as well as the effect of increasing the test set while reducing the training set. The results show that with a small labeled training dataset, the ViT models outperform state-of-the-art models such as EfficientNet and ResNet. These results are promising and show the potential of ViT to be applied to a wide range of remote sensing image analysis tasks. Full article
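The core operation behind the ViT models compared above is scaled dot-product self-attention over patch embeddings. A single-head numpy sketch (illustrative only, not the study’s implementation):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x          : (n_tokens, d) patch embeddings
    wq, wk, wv : (d, d_head) query/key/value projection matrices
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])        # (n, n) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over tokens
    return weights @ v                             # attention-weighted values
```

Each output token is a weighted mixture of all value vectors, which is what lets a ViT relate distant image patches in a single layer.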
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Graphical abstract

19 pages, 2726 KiB  
Article
Quick Detection of Field-Scale Soil Comprehensive Attributes via the Integration of UAV and Sentinel-2B Remote Sensing Data
by Wanxue Zhu, Ehsan Eyshi Rezaei, Hamideh Nouri, Ting Yang, Binbin Li, Huarui Gong, Yun Lyu, Jinbang Peng and Zhigang Sun
Remote Sens. 2021, 13(22), 4716; https://doi.org/10.3390/rs13224716 - 22 Nov 2021
Cited by 15 | Viewed by 3486
Abstract
Satellite and unmanned aerial vehicle (UAV) remote sensing can be used to estimate soil properties; however, little is known about the effects of integrating UAV and satellite remote sensing data on the estimation of soil comprehensive attributes, or about how to make such estimates quickly and robustly. In this study, we tackled those gaps by employing UAV multispectral and Sentinel-2B data to estimate soil salinity and chemical properties over a large agricultural farm (400 ha) covered by different crops and harvest areas in the coastal saline-alkali land of the Yellow River Delta of China in 2019. Spatial information on soil salinity, organic matter, available/total nitrogen content, and pH at the 0–10 cm and 10–20 cm layers was obtained via ground sampling (n = 195) and two-dimensional spatial interpolation, so that the soil information could be co-located with the remote sensing information. Exploratory factor analysis was conducted to generate latent variables representing the salinity and chemical characteristics of the soil. A machine learning algorithm (random forest) was applied to estimate the soil attributes. Our results indicated that integrating UAV texture and Sentinel-2B spectral data as random forest model inputs improved the accuracy of latent soil variable estimation. Remote sensing-based information from cropland (crop-based) yielded higher accuracy than estimations performed on bare soil (soil-based). Therefore, the crop-based approach, along with the integration of UAV texture and Sentinel-2B data, is recommended for the quick assessment of soil comprehensive attributes. Full article
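The two-dimensional spatial interpolation step can be illustrated with a generic inverse-distance-weighted (IDW) interpolator; the abstract does not specify which interpolator was used, so this is an assumption for illustration.

```python
import numpy as np

def idw(sample_xy, sample_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of point soil samples.

    sample_xy   : (n, 2) sample coordinates
    sample_vals : (n,) measured values (e.g., salinity, pH)
    query_xy    : (m, 2) locations to estimate
    """
    sample_xy = np.asarray(sample_xy, float)
    query_xy = np.asarray(query_xy, float)
    vals = np.asarray(sample_vals, float)
    # pairwise distances from each query point to each sample, shape (m, n)
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)  # closer samples receive larger weights
    return (w * vals).sum(axis=1) / w.sum(axis=1)
```

Evaluating `idw` on a regular grid of query points produces the interpolated soil-property surface that can then be overlaid on the remote sensing imagery.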
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Figure 1

15 pages, 2751 KiB  
Article
Predicting Days to Maturity, Plant Height, and Grain Yield in Soybean: A Machine and Deep Learning Approach Using Multispectral Data
by Paulo Eduardo Teodoro, Larissa Pereira Ribeiro Teodoro, Fábio Henrique Rojo Baio, Carlos Antonio da Silva Junior, Regimar Garcia dos Santos, Ana Paula Marques Ramos, Mayara Maezano Faita Pinheiro, Lucas Prado Osco, Wesley Nunes Gonçalves, Alexsandro Monteiro Carneiro, José Marcato Junior, Hemerson Pistori and Luciano Shozo Shiratsuchi
Remote Sens. 2021, 13(22), 4632; https://doi.org/10.3390/rs13224632 - 17 Nov 2021
Cited by 26 | Viewed by 3856
Abstract
In soybean, there is a lack of research comparing the performance of machine learning (ML) and deep learning (DL) methods in predicting more than one agronomic variable, such as days to maturity (DM), plant height (PH), and grain yield (GY). As these variables are important to developing an overall precision farming model, we propose a machine learning approach to predict DM, PH, and GY for soybean cultivars based on multispectral bands. The field experiment considered 524 soybean genotypes in the 2017/2018 and 2018/2019 growing seasons and a multitemporal–multispectral dataset collected by a sensor embedded in an unmanned aerial vehicle (UAV). We proposed a multilayer deep learning regression network, trained for 2000 epochs using an adaptive subgradient method, random Gaussian initialization, and 50% dropout in the first hidden layer for regularization. Three scenarios, including only spectral bands, only vegetation indices, and spectral bands plus vegetation indices, were adopted to infer each variable (PH, DM, and GY). The DL model's performance was compared against shallow learning methods such as random forest (RF), support vector machine (SVM), and linear regression (LR). The results indicate that our approach has the potential to predict soybean-related variables using multispectral bands alone. Both the DL and RF models presented strong prediction capacity (r surpassing 0.77) for the PH variable, regardless of the adopted input variable group. Our results demonstrated that the DL model (r = 0.66) was superior for predicting DM when the input variables were the spectral bands. For GY, all machine learning models evaluated presented similar performance (r ranging from 0.42 to 0.44) in each tested scenario. In conclusion, this study demonstrates an efficient computational solution capable of predicting multiple important soybean crop variables from remote sensing data. Future research could build on the information presented here in subsequent processes related to soybean cultivars or other agronomic crops. Full article
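The “adaptive subgradient method” named above is AdaGrad, whose per-parameter update can be sketched as follows; the learning rate and epsilon here are illustrative, not the paper’s settings.

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.01, eps=1e-8):
    """One AdaGrad update: each parameter's effective learning rate shrinks
    with its accumulated squared gradients, so frequently updated weights
    take smaller steps.

    w     : parameter vector
    grad  : gradient of the loss at w
    accum : running sum of squared gradients (same shape as w)
    """
    accum = accum + grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum
```

A training loop would call `adagrad_step` once per minibatch, threading `accum` through the iterations.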
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Figure 1

22 pages, 5506 KiB  
Article
The Improved A* Obstacle Avoidance Algorithm for the Plant Protection UAV with Millimeter Wave Radar and Monocular Camera Data Fusion
by Xin Huang, Xiaoya Dong, Jing Ma, Kuan Liu, Shibbir Ahmed, Jinlong Lin and Baijing Qiu
Remote Sens. 2021, 13(17), 3364; https://doi.org/10.3390/rs13173364 - 25 Aug 2021
Cited by 18 | Viewed by 3106
Abstract
To enhance the obstacle avoidance abilities of the plant protection UAV in unstructured farmland, this article improved the traditional A* algorithm through dynamic heuristic functions, search point optimization, and inflection point optimization, based on millimeter wave radar and monocular camera data fusion. Obstacle information extraction experiments were carried out, the performance of the improved algorithm was compared with that of the traditional algorithm, and obstacle avoidance experiments were also conducted. The results show that the maximum error in distance measurement of the data fusion method was 8.2%, and the maximum errors in obstacle width and height measurement were 27.3% and 18.5%, respectively. The improved algorithm is more useful for path planning: it increases path length by at most 2.0% while reducing data processing time by at least 68.4%, the search grid by at least 74.9%, and turning points by at least 20.7%. The maximum trajectory offset error was proportional to the flight speed, with a maximum trajectory offset of 1.4 m. The distance between the UAV and the obstacle was inversely proportional to flight speed, with a minimum distance of 1.6 m. This method provides a new idea for obstacle avoidance for the plant protection UAV. Full article
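For reference, the baseline the paper improves on is standard grid A*. A minimal sketch with a fixed Manhattan heuristic follows; the dynamic-heuristic, search point, and inflection point optimizations are not reproduced here.

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid.

    grid : list of rows, 0 = free cell, 1 = obstacle
    start, goal : (row, col) tuples; returns the path as a list of cells,
    or None if the goal is unreachable.
    """
    def h(p):  # admissible Manhattan heuristic for 4-connected moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # breaks ties between equal-priority entries
    open_heap = [(h(start), next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, _, node, parent = heapq.heappop(open_heap)
        if node in came_from:      # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:           # walk parents back to reconstruct path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        g = g_cost[node]
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0
                    and g + 1 < g_cost.get(nbr, float("inf"))):
                g_cost[nbr] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(nbr), next(tie), nbr, node))
    return None
```

The paper’s improvements replace the fixed heuristic with a dynamic one and prune search points and turning points from this basic expansion loop.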
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Figure 1

24 pages, 11054 KiB  
Article
Deep Convolutional Neural Network for Large-Scale Date Palm Tree Mapping from UAV-Based Images
by Mohamed Barakat A. Gibril, Helmi Zulhaidi Mohd Shafri, Abdallah Shanableh, Rami Al-Ruzouq, Aimrun Wayayok and Shaiful Jahari Hashim
Remote Sens. 2021, 13(14), 2787; https://doi.org/10.3390/rs13142787 - 15 Jul 2021
Cited by 29 | Viewed by 3435
Abstract
Large-scale mapping of date palm trees is vital for their consistent monitoring and sustainable management, considering their substantial commercial, environmental, and cultural value. This study presents an automatic deep-learning-based approach for the large-scale mapping of date palm trees from very-high-spatial-resolution (VHSR) unmanned aerial vehicle (UAV) datasets. A U-shaped convolutional neural network (U-Net), based on a deep residual learning framework, was developed for the semantic segmentation of date palm trees. A comprehensive set of labeled data was established to enable the training and evaluation of the proposed segmentation model and to increase its generalization capability. The performance of the proposed approach was compared with that of various state-of-the-art fully convolutional networks (FCNs) with different encoder architectures, including U-Net (with a VGG-16 backbone), the pyramid scene parsing network, and two variants of DeepLab V3+. Experimental results showed that the proposed model outperformed the other FCNs on the validation and testing datasets. A generalizability evaluation on a comprehensive and complex testing dataset exhibited higher classification accuracy and showed that date palm trees could be automatically mapped from VHSR UAV images with an F-score, mean intersection over union, precision, and recall of 91%, 85%, 91%, and 92%, respectively. The proposed approach provides an efficient deep learning architecture for the automatic mapping of date palm trees from VHSR UAV-based images. Full article
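The four reported metrics follow the standard pixel-level definitions; a small sketch computing them from binary confusion counts:

```python
def seg_metrics(tp, fp, fn):
    """Precision, recall, F-score, and IoU for a binary segmentation class,
    computed from pixel-level true positive, false positive, and false
    negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # intersection over union
    return precision, recall, f_score, iou
```

Note that IoU is always no larger than the F-score for the same counts, consistent with the 85% vs. 91% figures reported above.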
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Graphical abstract

13 pages, 3460 KiB  
Article
Dependence of CWSI-Based Plant Water Stress Estimation with Diurnal Acquisition Times in a Nectarine Orchard
by Suyoung Park, Dongryeol Ryu, Sigfredo Fuentes, Hoam Chung, Mark O’Connell and Junchul Kim
Remote Sens. 2021, 13(14), 2775; https://doi.org/10.3390/rs13142775 - 14 Jul 2021
Cited by 10 | Viewed by 2975
Abstract
Unmanned aerial vehicle (UAV) remote sensing has become a readily usable tool for agricultural water management, with high temporal and spatial resolutions. UAV-borne thermography can monitor crop water status in near real time, which enables precise irrigation scheduling based on an accurate decision-making strategy. The crop water stress index (CWSI) is a widely adopted indicator of plant water stress for irrigation management practices; however, the dependence of its efficacy on the time of day at which data are acquired has yet to be investigated rigorously. In this paper, plant water stress captured by a series of UAV remote sensing campaigns at different times of the day (9 h, 12 h and 15 h) in a nectarine orchard was analyzed to examine the diurnal behavior of plant water stress, as represented by the CWSI, against measured plant physiological parameters. CWSI values were derived using a probability modelling approach, named ‘Adaptive CWSI’, proposed in our earlier research. Plant physiological parameters, such as stem water potential (ψstem) and stomatal conductance (gs), were measured concurrently with the flights for validation, under different irrigation regimes (0, 20, 40 and 100% of ETc). Estimated diurnal CWSIs were compared with the plant-based parameters at the different acquisition times. Results showed a strong relationship between ψstem measurements and the CWSIs at midday (12 h), with a high coefficient of determination (R2 = 0.83). Diurnal CWSIs were significantly related to gs across irrigation levels at all three times of day, with R2 = 0.92 (9 h), 0.77 (12 h) and 0.86 (15 h), respectively. The adaptive CWSI method showed a robust capability to estimate plant water stress levels, even with the small range of change present in the morning. These results indicate that CWSI values collected by UAV-borne thermography between mid-morning and mid-afternoon can be used to map plant water stress with consistent efficacy. This has important implications for extending the time window of UAV-borne thermography (and the subsequent areal coverage) for accurate plant water stress mapping beyond midday. Full article
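For context, the classical (non-adaptive) CWSI is computed from canopy temperature and wet/dry reference temperatures; a minimal sketch is below. The paper’s adaptive variant instead estimates the reference temperatures via probability modelling.

```python
import numpy as np

def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: CWSI = (Tc - Twet) / (Tdry - Twet).

    t_canopy : canopy temperature(s), e.g. from a thermal orthomosaic
    t_wet    : temperature of a fully transpiring (unstressed) reference
    t_dry    : temperature of a non-transpiring (fully stressed) reference
    Returns values clipped to [0, 1]: 0 = unstressed, 1 = fully stressed.
    """
    t_canopy = np.asarray(t_canopy, float)
    return np.clip((t_canopy - t_wet) / (t_dry - t_wet), 0.0, 1.0)
```

Applied per pixel to a thermal image, this yields the water stress map that is then compared against ψstem and gs.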
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Figure 1

17 pages, 3481 KiB  
Article
Modeling of Environmental Impacts on Aerial Hyperspectral Images for Corn Plant Phenotyping
by Dongdong Ma, Tanzeel U. Rehman, Libo Zhang, Hideki Maki, Mitchell R. Tuinstra and Jian Jin
Remote Sens. 2021, 13(13), 2520; https://doi.org/10.3390/rs13132520 - 28 Jun 2021
Cited by 8 | Viewed by 2449
Abstract
Aerial imaging technologies have been widely applied in agricultural plant remote sensing. However, an as yet unexplored challenge with field imaging is that environmental conditions, such as sun angle, cloud coverage, and temperature, can significantly alter plant appearance and thus affect the accuracy of plant feature measurements extracted from the imagery. These image alterations result from the complicated interaction between the real-time environment and the plants. Analyzing these impacts requires continuous monitoring of the changes across various environmental conditions, which has been difficult with current aerial remote sensing systems. This paper proposes a modeling method to comprehensively understand and model the environmental influences on hyperspectral imaging data. In 2019, a fixed hyperspectral imaging gantry was constructed at Purdue University’s research farm, and over 8000 repetitive images of the same corn field were taken at 2.5 min intervals over 31 days. Time-tagged local environment data, including solar zenith angle, solar irradiation, temperature, and wind speed, were also recorded during imaging. The images were processed for phenotyping data, and a time series decomposition method was applied to extract the variation in the phenotyping data caused by the changing environment. An artificial neural network (ANN) was then built to model the relationship between this variation and the environmental changes. The ANN model was able to accurately predict the environmental effects on remote sensing results and thus could be used to effectively eliminate the environment-induced variation in the phenotyping features. A test on the normalized difference vegetation index (NDVI) calculated from the hyperspectral images showed that variance in NDVI was reduced by 79%; a similar performance was confirmed with relative water content (RWC) predictions. This modeling method therefore shows great potential for aerial remote sensing applications in agriculture, significantly improving imaging quality by effectively eliminating the effects of changing environmental conditions. Full article
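The time series decomposition step can be illustrated with a simple moving-average split into a smooth trend and a residual, where the residual stands in for the environment-induced variation; this is a generic analogue, not the authors’ exact method.

```python
import numpy as np

def decompose(series, window=5):
    """Split a phenotyping time series into trend + residual.

    series : 1-D array of a phenotyping feature (e.g., NDVI over time)
    window : odd moving-average window length
    Returns (trend, residual) with trend + residual == series.
    """
    series = np.asarray(series, float)
    pad = window // 2
    # edge-pad so the smoothed trend keeps the original length
    padded = np.pad(series, pad, mode="edge")
    kernel = np.ones(window) / window
    trend = np.convolve(padded, kernel, mode="valid")
    residual = series - trend  # short-term, environment-driven variation
    return trend, residual
```

Subtracting a model of the residual (here, the ANN’s prediction from environment data) is what reduces the NDVI variance reported above.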
(This article belongs to the Special Issue UAVs in Sustainable Agriculture)

Figure 1
