Drone-Based Information Fusion for Agricultural and Forestry Applications

A special issue of Drones (ISSN 2504-446X). This special issue belongs to the section "Drones in Agriculture and Forestry".

Deadline for manuscript submissions: closed (20 August 2023) | Viewed by 16380

Special Issue Editors


Guest Editor
National Engineering & Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing 210095, China
Interests: reflectance spectroscopy; quantitative remote sensing of vegetation; crop growth monitoring; crop mapping; smart farming

Guest Editor
College of Agriculture, National Engineering and Technology Center for Information Agriculture (NETCIA), Nanjing Agricultural University, Nanjing 210095, China
Interests: UAV; precision agriculture; agricultural remote sensing; image analysis

Guest Editor
College of Big Data and Intelligence Engineering, Institute of Big Data and Artificial Intelligence, Southwest Forestry University, Kunming 650223, China
Interests: UAV; forest monitoring and mapping; agricultural remote sensing; algorithm analysis

Guest Editor
College of Forestry, Co-Innovation Center for Sustainable Forestry in Southern China, Nanjing Forestry University, Nanjing 210037, China
Interests: precision silviculture; UAV; hyperspectral imaging; LiDAR; forest monitoring and mapping

Special Issue Information

Dear Colleagues,

In the past decade, unmanned aerial vehicle (UAV) or drone-based remote sensing has received growing attention for precision agriculture and forest monitoring. UAVs offer the significant advantages of flexible sensor integration and high mobility for acquiring images at high spatial and temporal resolutions. Thus, UAVs can serve as valuable platforms complementary to satellite instruments for efficient monitoring during key crop and forest growth periods. One of their greatest advantages is that UAVs can carry various sensors, such as RGB, multispectral, hyperspectral, and thermal cameras, as well as laser scanners. Nowadays, UAV images are widely used to estimate structural, biophysical, and biochemical traits in crop growth monitoring, such as plant height, leaf area index (LAI), chlorophyll content, biomass, and yield. Meanwhile, UAVs have also been applied to estimating forest parameters (e.g., tree height, biomass, canopy structure, and volume), detecting individual tree crowns, and monitoring forest fires. Recent efforts have been dedicated to fusing multi-sensor data or extracting various types of information from regular drone images. Such information can be extracted through point cloud analysis, texture analysis, multi-view analysis, or multi-angular observations. Since much effort to date has centered on image acquisition and analysis, we expect further advances in drone-based information fusion to improve the accuracy and/or efficiency of monitoring and prediction at limited instrument cost. We are pleased to invite you to submit original research articles and reviews to this Special Issue on "Drone-Based Information Fusion for Agricultural and Forestry Applications".

This Special Issue aims to provide a forum for disseminating the achievements related to the research and applications of UAV techniques for crop and forest monitoring. In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Point cloud generation from drone imagery
  • Extraction of structural variables from point cloud data
  • Textural analysis
  • Drone-based multi-angle imaging
  • Extraction of multi-view information from drone imagery
  • Integration of multi-sensor data
  • Integration of multi-modal information
  • Integration of multi-resolution imagery
  • Crop growth monitoring and productivity prediction
  • UAV-based high-throughput field phenotyping
  • Forest inventory
  • Forest species identification and growth monitoring
  • Forest carbon accounting
  • Tree health assessment

Prof. Dr. Tao Cheng
Dr. Hengbiao Zheng
Dr. Ning Lu
Dr. Kai Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • imagery
  • point cloud
  • texture
  • index
  • information fusion
  • data integration
  • monitoring
  • prediction
  • phenotyping

Published Papers (9 papers)


Research

20 pages, 8236 KiB  
Article
Rubber Tree Recognition Based on UAV RGB Multi-Angle Imagery and Deep Learning
by Yuying Liang, Yongke Sun, Weili Kou, Weiheng Xu, Juan Wang, Qiuhua Wang, Huan Wang and Ning Lu
Drones 2023, 7(9), 547; https://doi.org/10.3390/drones7090547 - 24 Aug 2023
Viewed by 1246
Abstract
The rubber tree (Hevea brasiliensis) is an important tree species for the production of natural latex, which is an essential raw material for a variety of industrial and non-industrial products. Rapid and accurate identification of the number of rubber trees not only plays an important role in predicting biomass and yield but also is beneficial to estimating carbon sinks and promoting the sustainable development of rubber plantations. However, the existing recognition methods based on canopy characteristic segmentation are not suitable for detecting individual rubber trees due to their high canopy coverage and similar crown structure. Fortunately, rubber trees have a defoliation period of about 40 days, which makes their trunks clearly visible in high-resolution RGB images. Therefore, this study employed an unmanned aerial vehicle (UAV) equipped with an RGB camera to acquire high-resolution images of rubber plantations from three observation angles (−90°, −60°, 45°) and two flight directions (SN: perpendicular to the rubber planting rows, and WE: parallel to the rubber planting rows) during the deciduous period. Four convolutional neural networks (multi-scale attention network, MAnet; Unet++; Unet; pyramid scene parsing network, PSPnet) were utilized to explore the observation angles and directions beneficial for rubber tree trunk identification and counting. The results indicate that Unet++ achieved the best recognition accuracy (precision = 0.979, recall = 0.919, F-measure = 94.7%) among the four deep learning algorithms, with an observation angle of −60° and the SN flight mode. This research provides a new idea for tree trunk identification by multi-angle observation of forests in specific phenological periods.
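
For readers who want to prototype a similar pipeline, a minimal sketch of a binary trunk-segmentation setup with the segmentation_models_pytorch library is given below; the encoder backbone, input channels, and 0.5 binarization threshold are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not from the paper): binary trunk segmentation with Unet++
# via segmentation_models_pytorch. Encoder, channels, and the 0.5 threshold
# are illustrative assumptions.
import torch
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="resnet34",     # hypothetical backbone choice
    encoder_weights="imagenet",
    in_channels=3,               # RGB UAV imagery
    classes=1,                   # trunk vs. background
)
loss_fn = torch.nn.BCEWithLogitsLoss()

def precision_recall_f(pred_logits, target, eps=1e-7):
    """Pixel-wise precision, recall, and F-measure on binarized masks."""
    pred = (torch.sigmoid(pred_logits) > 0.5).float()
    tp = (pred * target).sum()
    precision = tp / (pred.sum() + eps)
    recall = tp / (target.sum() + eps)
    f = 2 * precision * recall / (precision + recall + eps)
    return precision.item(), recall.item(), f.item()
```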

15 pages, 6521 KiB  
Article
Estimation of Aboveground Biomass Stock in Tropical Savannas Using Photogrammetric Imaging
by Roberta Franco Pereira de Queiroz, Marcus Vinicio Neves d’Oliveira, Alba Valéria Rezende and Paola Aires Lócio de Alencar
Drones 2023, 7(8), 493; https://doi.org/10.3390/drones7080493 - 27 Jul 2023
Viewed by 1004
Abstract
The use of photogrammetry technology for aboveground biomass (AGB) stock estimation in tropical savannas is a challenging task and is still at a preliminary stage. This work aimed to use metrics derived from point clouds, constructed using photogrammetric imaging obtained by an RGB camera on board a remotely piloted aircraft (RPA), to generate a model for estimating AGB stock for the shrubby-woody stratum in savanna areas of Central Brazil (Cerrado). AGB stock was estimated using forest inventory data and an allometric equation. The photogrammetric digital terrain model (DTM) was validated with altimetric field data, demonstrating that the passive sensor can identify topographic variations in sites with discontinuous canopies. The inventory estimated an average AGB of 18.3 (±13.3) Mg ha−1 at the three sampled sites. The selected AGB model was composed of height metrics at the 10th and 95th percentiles, with an adjusted R2 of 93% and a relative root mean squared error (RMSE) of 16%. AGB distribution maps were generated from the spatialization of the metrics selected for the model, optimizing the visualization and our understanding of the spatial distribution of forest AGB. The study represents a step forward in mapping biomass and carbon stocks in tropical savannas using low-cost remote sensing platforms.
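
A minimal sketch of the plot-level workflow described above (height-percentile metrics regressed against inventory AGB) might look as follows; data loading, plot clipping, and the allometrically derived AGB values are assumed to be available already.

```python
# Minimal sketch (assumptions: plot clipping and allometric AGB values exist):
# regress plot-level AGB on 10th/95th height percentiles of normalized points.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

def height_percentiles(z_normalized, percentiles=(10, 95)):
    """Plot-level height metrics (m) from terrain-normalized point heights."""
    return [np.percentile(z_normalized, p) for p in percentiles]

def fit_agb_model(X, y):
    """X: (n_plots, 2) array of [p10, p95]; y: inventory AGB per plot (Mg/ha)."""
    model = LinearRegression().fit(X, y)
    pred = model.predict(X)
    rmse = mean_squared_error(y, pred) ** 0.5
    rrmse = 100.0 * rmse / np.mean(y)                       # relative RMSE (%)
    n, k = X.shape
    adj_r2 = 1 - (1 - r2_score(y, pred)) * (n - 1) / (n - k - 1)
    return model, adj_r2, rrmse
```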

19 pages, 15314 KiB  
Article
Tassel-YOLO: A New High-Precision and Real-Time Method for Maize Tassel Detection and Counting Based on UAV Aerial Images
by Hongli Pu, Xian Chen, Yiyu Yang, Rong Tang, Jinwen Luo, Yuchao Wang and Jiong Mu
Drones 2023, 7(8), 492; https://doi.org/10.3390/drones7080492 - 27 Jul 2023
Cited by 5 | Viewed by 2231
Abstract
The tassel is an important part of the maize plant. The automatic detection and counting of tassels using unmanned aerial vehicle (UAV) imagery can promote the development of intelligent maize planting. However, actual maize field conditions are complex, and existing algorithms struggle to meet the speed and accuracy requirements of real-time detection. To solve this problem, this study constructed a large, high-quality maize tassel dataset, which contains information from more than 40,000 tassel images at the tasseling stage. Using YOLOv7 as the original model, a Tassel-YOLO model for the task of maize tassel detection is proposed. Our model adds a global attention mechanism, adopts GSConv convolution and a VoVGSCSP module in the neck part, and replaces the loss function with the SIoU loss. For the tassel detection task, the mAP@0.5 of Tassel-YOLO reaches 96.14%, with an average prediction time of 13.5 ms. Compared with YOLOv7, the model parameters and computation cost are reduced by 4.11 M and 11.4 G, respectively. The counting accuracy has been improved to 97.55%. Experimental results show that the overall performance of Tassel-YOLO is better than that of other mainstream object detection algorithms. Therefore, Tassel-YOLO represents an effective exploration of the YOLO network architecture, as it satisfactorily meets the requirements of real-time detection and presents a novel solution for maize tassel detection based on UAV aerial images.
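
The abstract does not state how counting accuracy is computed; a simple mean-relative-error form, shown below as an assumption, is one plausible way to reproduce such a metric from per-image tassel counts.

```python
# Assumed metric form (the paper does not give the formula): counting accuracy
# as one minus the mean relative counting error over test images.
import numpy as np

def counting_accuracy(pred_counts, true_counts):
    """pred_counts/true_counts: per-image tassel counts from detection results."""
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    rel_err = np.abs(pred - true) / np.maximum(true, 1.0)
    return 1.0 - rel_err.mean()
```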

11 pages, 3367 KiB  
Article
Spatial Variability of Albedo and Net Radiation at Local Scale Using UAV Equipped with Radiation Sensors
by Anders Lindroth
Drones 2023, 7(4), 276; https://doi.org/10.3390/drones7040276 - 18 Apr 2023
Viewed by 1044
Abstract
Energy balance closure is an important feature in studies of ecosystem exchanges of energy and greenhouse gases using the eddy covariance method. Previous analyses show that this is still a problem, with imbalances on the order of 0.6–0.7 up to full closure (the latter for only a few sites). It has been suggested that mesoscale transport processes that are not captured by the eddy covariance measurements are the main reason behind the closure problem. So far, very little action has been taken to investigate another potential cause of the problem, namely, the role of spatial variation in net radiation at the scale of typical flux footprints. The reason for this knowledge gap is mainly the lack of suitable methods to perform such investigations. Here, we show that such measurements can be performed with an unmanned aerial vehicle equipped with radiation sensors. A comparison of a reference radiometer on a fixed mast with a hovering UAV equipped with pyranometers for incoming and outgoing shortwave radiation and an infrared thermometer for surface temperature measurements shows that incoming and outgoing shortwave radiation can be measured with standard errors of 7.4 W m−2 and 1.8 W m−2, respectively. An application of the system was made over a five-year-old forest flux site in Sweden. Here, the net longwave radiation was estimated from the measured surface temperature and the calculated incoming longwave radiation. The results show that, during the mission around noon on a clear day, distinct 'hotspots' existed over the plantation, with the albedo varying between 15.5 and 17.9%, the surface temperature varying between 22.2 and 25.5 °C, and the net radiation varying between 330 and 380 W m−2. These variations are large enough to have a significant impact on the energy balance closure problem. Our conclusion is that we now have the tools to investigate the spatial variability of the radiation regime over flux sites and that this should be given more attention in the future.
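
The radiation quantities discussed above follow from standard formulas; the sketch below illustrates them, with the emissivity value and the treatment of incoming longwave radiation as assumptions (the study calculates incoming longwave rather than measuring it).

```python
# Minimal sketch: albedo and net radiation from hovering-UAV measurements.
# The emissivity value is an assumption; incoming longwave is taken as given.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def albedo(sw_in, sw_out):
    """Shortwave albedo from paired pyranometer readings (W m-2)."""
    return sw_out / sw_in

def net_radiation(sw_in, sw_out, lw_in, t_surf_c, emissivity=0.98):
    """Rn = (SW_in - SW_out) + (LW_in - LW_out); LW_out follows from the
    infrared-thermometer surface temperature (deg C)."""
    t_k = t_surf_c + 273.15
    lw_out = emissivity * SIGMA * t_k ** 4 + (1 - emissivity) * lw_in
    return (sw_in - sw_out) + (lw_in - lw_out)
```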

21 pages, 33128 KiB  
Article
Improvement of Treetop Displacement Detection by UAV-LiDAR Point Cloud Normalization: A Novel Method and A Case Study
by Kaisen Ma, Chaokui Li, Fugen Jiang, Liangliang Xu, Jing Yi, Heqin Huang and Hua Sun
Drones 2023, 7(4), 262; https://doi.org/10.3390/drones7040262 - 12 Apr 2023
Cited by 1 | Viewed by 1374
Abstract
Normalized point clouds (NPCs) derived from unmanned aerial vehicle-light detection and ranging (UAV-LiDAR) data have been applied to extract relevant forest inventory information. However, detecting treetops from topographically normalized LiDAR points is challenging if the trees are located in steep terrain. In this study, a novel method for normalizing point clouds based on imitated terrain (NPCIT) was proposed to reduce the crown deformation caused by vegetation point cloud normalization in regions with high slope gradients, and the ability of the treetop detection displacement model to quantify treetop displacements and tree height changes was improved, although the model did not consider crown shape or angle. A forest farm in the mountainous region of south-central China was used as the study area, and the sample data showed that the detected treetop displacement increased rapidly in steep areas. With this work, we made an important contribution to theoretical analyses using the treetop detection displacement model with UAV-LiDAR NPCs at three levels: the method, model, and example levels. Our findings contribute to the development of more accurate treetop position identification and tree height parameter extraction methods involving LiDAR data.
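
For context, the conventional baseline that NPCIT improves on (normalizing point heights against a DTM and detecting treetops as local maxima in a canopy height model) can be sketched as follows; the window size and minimum tree height are illustrative values.

```python
# Minimal sketch of the conventional baseline: normalize point heights against
# a DTM and detect treetops as local maxima in the canopy height model (CHM).
# Window size and minimum tree height are illustrative values.
import numpy as np
from scipy.ndimage import maximum_filter

def normalize_points(z, ground_z):
    """Height above ground per point, given DTM elevation sampled at the point."""
    return z - ground_z

def detect_treetops(chm, window=5, min_height=2.0):
    """Treetop candidates = CHM cells that equal the maximum of a local window."""
    local_max = maximum_filter(chm, size=window) == chm
    return np.argwhere(local_max & (chm > min_height))
```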

16 pages, 6626 KiB  
Article
A Method for Forest Canopy Height Inversion Based on UAVSAR and Fourier–Legendre Polynomial—Performance in Different Forest Types
by Hongbin Luo, Cairong Yue, Hua Yuan, Ning Wang and Si Chen
Drones 2023, 7(3), 152; https://doi.org/10.3390/drones7030152 - 22 Feb 2023
Viewed by 1882
Abstract
Mapping forest canopy height at large regional scales is of great importance for the global carbon cycle. Polarimetric interferometric synthetic aperture radar is an efficient and irreplaceable remote sensing tool, and developing an efficient and accurate method for forest canopy height estimation is an important issue that needs to be addressed urgently. In this paper, we propose a novel four-stage forest height inversion method based on a Fourier–Legendre polynomial (FLP) with reference to the RVoG three-stage method, using the multi-baseline UAVSAR data from the AfriSAR project as the data source. The third-order FLP is used as the vertical structure function, and a small amount of ground phase and LiDAR canopy height data is used as input to solve and fix the FLP coefficients, replacing the exponential function in the RVoG three-stage method. The performance of this method was tested in different forest types (mangrove and inland tropical forests). The results show that: (1) in mangroves with a homogeneous forest structure, the accuracy of the four-stage FLP method is better than that of the RVoG three-stage method. For the four-stage FLP method, R2 is 0.82, RMSE is 6.42 m, and BIAS is 0.92 m, while for the RVoG three-stage method, R2 is 0.77, RMSE is 7.33 m, and BIAS is −3.49 m. In inland tropical forests with a complex forest structure, the inversion accuracy of the four-stage FLP method is lower than that of the RVoG three-stage method: R2 is 0.50, RMSE is 11.54 m, and BIAS is 6.53 m for the four-stage FLP method, whereas R2 is 0.72, RMSE is 8.68 m, and BIAS is 1.67 m for the RVoG three-stage method. (2) Compared to the RVoG three-stage method, the efficiency of the four-stage FLP method is improved about tenfold owing to the reduced number of model parameters. The inversion time of the FLP method in a mangrove forest is 3 min, versus 33 min for the RVoG three-stage method; in an inland tropical forest, the inversion time of the FLP method is 2.25 min, versus 21 min for the RVoG three-stage method. With the application of large regional-scale data in the future, the method proposed in this study is more efficient when conditions allow.
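
A third-order Fourier–Legendre vertical structure function of the kind used in the four-stage method can be evaluated as sketched below; the coefficient values are placeholders (in the paper they are solved from ground phase and LiDAR canopy heights), and the coherence-inversion stages themselves are omitted.

```python
# Minimal sketch: evaluate a third-order Fourier-Legendre vertical structure
# function f(z) over a canopy of height hv. Coefficients a0..a3 below are
# placeholders, not values from the study.
import numpy as np
from numpy.polynomial import legendre

def flp_structure(z, hv, coeffs):
    """f(z) = sum_k a_k * P_k(x), with z in [0, hv] mapped to x in [-1, 1]."""
    x = 2.0 * np.asarray(z, dtype=float) / hv - 1.0
    return legendre.legval(x, coeffs)

profile = flp_structure(np.linspace(0.0, 30.0, 50), hv=30.0,
                        coeffs=[1.0, 0.3, -0.2, 0.1])  # illustrative a0..a3
```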

15 pages, 3293 KiB  
Article
Retrieval of Fractional Vegetation Cover from Remote Sensing Image of Unmanned Aerial Vehicle Based on Mixed Pixel Decomposition Method
by Mengmeng Du, Minzan Li, Noboru Noguchi, Jiangtao Ji and Mengchao (George) Ye
Drones 2023, 7(1), 43; https://doi.org/10.3390/drones7010043 - 07 Jan 2023
Cited by 5 | Viewed by 2252
Abstract
FVC (fractional vegetation cover) is highly correlated with wheat plant density in the reviving period, which is an important indicator for conducting variable-rate nitrogenous topdressing. In this study, with the objective of improving the inversion accuracy of wheat plant density, an innovative approach for the retrieval of FVC values from remote sensing images of a UAV (unmanned aerial vehicle) was proposed based on the mixed pixel decomposition method. Firstly, remote sensing images of an experimental wheat field were acquired using a DJI Mini UAV and endmembers in the image were identified. Subsequently, a linear unmixing model was used to subdivide mixed pixels into components of vegetation and soil, and an abundance map of vegetation was acquired. FVC was then calculated from the abundance map of vegetation. Consequently, a linear regression model between the ground truth data of wheat plant density and FVC was established. The coefficient of determination (R2), RMSE (root mean square error), and RRMSE (relative RMSE) of the inversion model were 0.97, 1.86 plants/m2, and 0.677%, respectively, which indicates a strong correlation between the FVC derived from the mixed pixel decomposition method and wheat plant density. Therefore, we can conclude that the mixed pixel decomposition model of the remote sensing image of a UAV significantly improved the inversion accuracy of wheat plant density from FVC values, which provides methodological support and baseline data for variable-rate nitrogen fertilization during the wheat reviving period in precision agriculture.
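
A minimal sketch of the linear unmixing step (per-pixel vegetation/soil abundances, with FVC taken as the vegetation abundance) is shown below; the non-negative least squares solver and the sum-normalization are assumptions about implementation details not specified in the abstract.

```python
# Minimal sketch: per-pixel linear unmixing into vegetation and soil
# abundances, with FVC taken as the vegetation abundance. The NNLS solver and
# sum-normalization are implementation assumptions.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers):
    """pixel: (bands,); endmembers: (bands, 2) with columns [vegetation, soil]."""
    abundances, _ = nnls(endmembers, pixel)   # enforce non-negative abundances
    abundances /= abundances.sum() + 1e-12    # approximate sum-to-one constraint
    return abundances

def fvc_map(image, endmembers):
    """image: (rows, cols, bands) reflectance -> per-pixel FVC in [0, 1]."""
    rows, cols, bands = image.shape
    flat = image.reshape(-1, bands)
    fvc = np.array([unmix_pixel(px, endmembers)[0] for px in flat])
    return fvc.reshape(rows, cols)
```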

19 pages, 6164 KiB  
Article
Transferability of Models for Predicting Rice Grain Yield from Unmanned Aerial Vehicle (UAV) Multispectral Imagery across Years, Cultivars and Sensors
by Hengbiao Zheng, Wenhan Ji, Wenhui Wang, Jingshan Lu, Dong Li, Caili Guo, Xia Yao, Yongchao Tian, Weixing Cao, Yan Zhu and Tao Cheng
Drones 2022, 6(12), 423; https://doi.org/10.3390/drones6120423 - 16 Dec 2022
Cited by 3 | Viewed by 2059
Abstract
Timely and accurate prediction of crop yield prior to harvest is vital for precise agricultural management. Unmanned aerial vehicles (UAVs) provide a fast and convenient approach to crop yield prediction, but most existing crop yield models have rarely been tested across different years, cultivars and sensors. This has limited the ability of these yield models to be transferred to other years or regions or to be potentially used with data from other sensors. In this study, UAV-based multispectral imagery was used to predict rice grain yield at the booting and filling stages from four field experiments, involving three years, two rice cultivars, and two UAV sensors. Reflectance and texture features were extracted from the UAV imagery, and vegetation indices (VIs) and normalized difference texture indices (NDTIs) were computed. The models were independently validated to test the stability and transferability across years, rice cultivars, and sensors. The results showed that the red edge normalized difference texture index (RENDTI) was superior to other texture indices and vegetation indices for model regression with grain yield in most cases. However, the green normalized difference texture index (GNDTI) achieved the highest prediction accuracy in model validation across rice cultivars and sensors. The yield prediction model of Japonica rice achieved stronger transferability to Indica rice with root mean square error (RMSE), bias, and relative RMSE (RRMSE) of 1.16 t/ha, 0.08, and 11.04%, respectively. Model transferability was improved significantly between different sensors after band correction with a decrease of 15.05–59.99% in RRMSE. Random forest (RF) was found to be a good solution to improve the model transferability across different years and cultivars and obtained the highest prediction accuracy with RMSE, bias, and RRMSE of 0.94 t/ha, −0.21, and 9.37%, respectively. This study provides a valuable reference for crop yield prediction when existing models are transferred across different years, cultivars and sensors.
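
A normalized difference texture index of the general form used here (analogous to RENDTI/GNDTI) and a random forest yield model can be sketched as follows; the GLCM settings and the 'contrast' property are illustrative, and the function names follow recent scikit-image releases.

```python
# Minimal sketch: a normalized difference texture index (NDTI) built from two
# GLCM texture features, and a random forest regressor for yield. GLCM settings
# and the feature choice are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestRegressor

def texture_feature(band_8bit, prop="contrast"):
    """Single GLCM texture value for one uint8 band clipped to a plot."""
    glcm = graycomatrix(band_8bit, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, prop)[0, 0]

def ndti(t1, t2):
    """Normalized difference texture index between two texture features."""
    return (t1 - t2) / (t1 + t2)

# Plot-level NDTI features (X) against measured grain yield (y), e.g.:
# rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
```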

16 pages, 2489 KiB  
Article
A Global Multi-Scale Channel Adaptation Network for Pine Wilt Disease Tree Detection on UAV Imagery by Circle Sampling
by Dong Ren, Yisheng Peng, Hang Sun, Mei Yu, Jie Yu and Ziwei Liu
Drones 2022, 6(11), 353; https://doi.org/10.3390/drones6110353 - 15 Nov 2022
Cited by 4 | Viewed by 1534
Abstract
Pine wilt disease is extremely ruinous to forests. It is important to detect diseased trees in UAV imagery with a detection algorithm in order to hold back the transmission of the disease. However, most of the existing detection algorithms for diseased trees ignore the interference of complex backgrounds with diseased tree feature extraction in drone images. Moreover, in existing sampling methods, the sampling range of the positive samples does not match the circular shape of the diseased trees, resulting in poor-quality positive samples. This paper proposes a Global Multi-Scale Channel Adaptation Network to solve these problems. Specifically, a global multi-scale channel attention module is developed, which alleviates the negative impact of background regions on the model. In addition, a center circle sampling method is proposed to make the sampling range of the positive samples fit the circular shape of the diseased tree targets, significantly enhancing the quality of the sampled positive samples. The experimental results show that our algorithm exceeds seven mainstream algorithms on the diseased tree dataset and achieves the best detection effect. The average precision (AP) and the recall are 79.8% and 86.6%, respectively.
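
The idea behind center circle sampling (restricting positive samples to a circle centered on the roughly circular tree target rather than the full rectangular box) can be sketched as follows; the radius scaling factor is an assumption.

```python
# Minimal sketch of the center-circle sampling idea: treat a candidate anchor
# point as positive only if it falls inside a circle centered on the (roughly
# circular) diseased-tree box. The radius scaling factor is an assumption.
import numpy as np

def circle_positive_mask(points, box, radius_scale=0.75):
    """points: (N, 2) candidate centers (x, y); box: (x1, y1, x2, y2) ground truth."""
    cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
    radius = radius_scale * min(box[2] - box[0], box[3] - box[1]) / 2.0
    d2 = (points[:, 0] - cx) ** 2 + (points[:, 1] - cy) ** 2
    return d2 <= radius ** 2
```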
