Novel Applications of Optical Sensors and Machine Learning in Agricultural Monitoring

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: closed (20 June 2023) | Viewed by 37271

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Guest Editor
College of Information and Management Science, Henan Agricultural University, Zhengzhou 450002, China
Interests: remote sensing; precision agriculture; machine learning; crop model; crop mapping

Guest Editor
Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences, Hangzhou 310021, China
Interests: image segmentation; UAV; machine learning; pattern recognition; IoT

Guest Editor
Key Laboratory of Quantitative Remote Sensing in Agriculture, Ministry of Agriculture and Rural Affairs, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
Interests: UAV; biomass; nutrient management; yield mapping

Guest Editor
Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing 100081, China
Interests: UAV; smart orchard; pest management; pest risk mapping

Guest Editor
Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
Interests: remote sensing; climate change; machine learning; ecosystem model

Special Issue Information

Dear Colleagues,

Agricultural production management is facing a new era of intelligence and automation. With developments in sensor technologies, the temporal, spectral, and spatial resolutions achievable from ground, air, and space platforms have improved notably. Optical sensors play an essential role in agricultural production management; in particular, monitoring plant health, growth conditions, and insect infestations has traditionally required extensive fieldwork.

The processing and analysis of the huge volumes of data from different sensors still face many challenges. Machine learning can derive and process agricultural information from optical sensors onboard ground, air, and space platforms. Advances in optical imagery and machine learning have attracted widespread attention, but we call for more flexible solutions that can serve the wide range of applications in agricultural research.

We believe that sensors, artificial intelligence, and machine learning are not simply scientific experiments, but opportunities to make our agricultural production management more efficient and cost-effective, further contributing to the healthy development of natural-human systems.

This Special Issue seeks to compile the latest research on optical sensors and machine learning in agricultural monitoring. The following provides a general (but not exhaustive) overview of relevant topics:

  • Machine learning approaches for crop health, growth, and yield monitoring.
  • Combining multisource/multi-sensor data to improve crop parameter mapping.
  • Crop-related growth models, artificial intelligence models, algorithms, and precision management.
  • Farmland environmental monitoring and management.
  • Applications of ground, air, and space platforms in precision agriculture.
  • Development and application of field robotics.
  • High-throughput field information survey.
  • Phenological monitoring.

Dr. Jibo Yue
Dr. Chengquan Zhou
Dr. Haikuan Feng
Dr. Yanjun Yang
Dr. Ning Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • optical sensor
  • crop mapping
  • precision agriculture

Published Papers (17 papers)

Editorial

4 pages, 175 KiB  
Editorial
Novel Applications of Optical Sensors and Machine Learning in Agricultural Monitoring
by Jibo Yue, Chengquan Zhou, Haikuan Feng, Yanjun Yang and Ning Zhang
Agriculture 2023, 13(10), 1970; https://doi.org/10.3390/agriculture13101970 - 10 Oct 2023
Viewed by 942
Abstract
The rapid development of intelligent and automated technologies has provided new management opportunities for agricultural production [...] Full article

Research

19 pages, 1873 KiB  
Article
VGNet: A Lightweight Intelligent Learning Method for Corn Diseases Recognition
by Xiangpeng Fan and Zhibin Guan
Agriculture 2023, 13(8), 1606; https://doi.org/10.3390/agriculture13081606 - 14 Aug 2023
Cited by 5 | Viewed by 1411
Abstract
The automatic recognition of crop diseases based on visual perception algorithms is an important research direction in the current prevention and control of crop diseases. However, two issues must be addressed in corn disease identification: (1) a lack of multicategory corn disease image datasets that can be used to train disease recognition models, and (2) the difficulty existing methods have in satisfying the dual requirements of recognition speed and accuracy in actual corn planting scenarios. Therefore, a corn disease recognition system based on a pretrained VGG16, termed VGNet, is investigated and devised; it incorporates batch normalization (BN), global average pooling (GAP), and L2 normalization. The performance of the proposed method is improved by using transfer learning for the task of corn disease classification. Experimental results show that the Adam optimizer is more suitable for crop disease recognition than the stochastic gradient descent (SGD) algorithm. When the learning rate is 0.001, the model reaches its highest accuracy of 98.3% and its lowest loss of 0.035. After data augmentation, the precision for nine corn diseases is between 98.1% and 100%, and the recall ranges from 98.6% to 100%. Moreover, the lightweight VGNet occupies only 79.5 MB of space, and the testing time for 230 images is 75.21 s, demonstrating good transferability and accuracy in crop disease image recognition. Full article
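
To make the architecture described above concrete, the sketch below assembles a pretrained VGG16 backbone with batch normalization, global average pooling, and L2 normalization in Keras. It is a minimal illustration under assumed settings (nine classes, 224 × 224 inputs, frozen backbone), not the authors' released implementation.

```python
# Minimal sketch of a VGG16-based transfer-learning classifier with BN,
# global average pooling, and L2 normalization (not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 9  # assumed number of corn disease categories

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional features

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = layers.BatchNormalization()(x)             # BN on the feature maps
x = layers.GlobalAveragePooling2D()(x)         # GAP instead of a dense flatten
x = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1))(x)  # L2 normalization
outputs = layers.Dense(num_classes, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```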

15 pages, 2406 KiB  
Article
Spectral Detection of Peanut Southern Blight Severity Based on Continuous Wavelet Transform and Machine Learning
by Wei Guo, Heguang Sun, Hongbo Qiao, Hui Zhang, Lin Zhou, Ping Dong and Xiaoyu Song
Agriculture 2023, 13(8), 1504; https://doi.org/10.3390/agriculture13081504 - 27 Jul 2023
Cited by 3 | Viewed by 966
Abstract
Peanut southern blight has a severe impact on peanut production and is one of the most devastating soil-borne fungal diseases. We conducted a hyperspectral analysis of the spectral responses of plants to peanut southern blight to provide theoretical support for detecting the severity of the disease via remote sensing. In this study, we collected leaf-level spectral data during the winter of 2021 and the spring of 2022 in a greenhouse laboratory. We explored the spectral response mechanisms of diseased peanut leaves and developed a method for assessing the severity of peanut southern blight disease by comparing the continuous wavelet transform (CWT) with traditional spectral indices and incorporating machine learning techniques. The results showed that the SVM model performed best and was able to effectively detect the severity of peanut southern blight when using CWT (WF770~780, 5) as an input feature. The overall accuracy (OA) of the modeling dataset was 91.8% and the kappa coefficient was 0.88. For the validation dataset, the OA was 90.5% and the kappa coefficient was 0.87. These findings highlight the potential of this CWT-based method for accurately assessing the severity of peanut southern blight. Full article
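
As a rough illustration of the workflow in this abstract, the sketch below derives a continuous wavelet feature from a leaf reflectance spectrum and feeds it to an SVM classifier. The wavelet ("mexh"), the 770–780 nm window, scale 5, and the synthetic data are assumptions for demonstration only.

```python
# Hedged sketch: continuous wavelet features from leaf spectra + SVM severity
# classification. Wavelet choice, band window, and data are placeholders.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
wavelengths = np.arange(350, 2501)              # 1 nm sampling, example range
spectra = rng.random((120, wavelengths.size))   # placeholder reflectance data
labels = rng.integers(0, 4, 120)                # placeholder severity classes

def cwt_feature(spectrum, scale=5, band=(770, 781)):
    """Mean CWT coefficient at one scale over a wavelength window."""
    coeffs, _ = pywt.cwt(spectrum, scales=[scale], wavelet="mexh")
    idx = (wavelengths >= band[0]) & (wavelengths < band[1])
    return coeffs[0, idx].mean()

X = np.array([[cwt_feature(s)] for s in spectra])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=1)

clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("OA:", accuracy_score(y_te, pred), "kappa:", cohen_kappa_score(y_te, pred))
```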

21 pages, 4908 KiB  
Article
Rapid and Non-Destructive Methodology for Measuring Canopy Coverage at an Early Stage and Its Correlation with Physiological and Morphological Traits and Yield in Sugarcane
by Raja Arun Kumar, Srinivasavedantham Vasantha, Raju Gomathi, Govindakurup Hemaprabha, Srinivasan Alarmelu, Venkatarayappa Srinivasa, Krishnapriya Vengavasi, Muthalagu Alagupalamuthirsolai, Kuppusamy Hari, Chinappagounder Palaniswami, Krishnasamy Mohanraj, Chinnaswamy Appunu, Ponnaiyan Geetha, Arjun Shaligram Tayade, Shareef Anusha, Vazhakkannadi Vinu, Ramanathan Valarmathi, Pooja Dhansu and Mintu Ram Meena
Agriculture 2023, 13(8), 1481; https://doi.org/10.3390/agriculture13081481 - 26 Jul 2023
Cited by 1 | Viewed by 1228
Abstract
Screening elite sugarcane genotypes for canopy cover in a rapid and non-destructive way is important to accelerate varietal/clonal selection, yet little information is available on the relationship between canopy cover and leaf production, leaf area, biomass production, and cane yield in the sugarcane crop. In the present investigation, digital images of the sugarcane crop, analyzed using Canopeo software, were assessed for their correlation with physiological and morphological parameters and cane yield. The results revealed that, among the studied parameters, canopy coverage showed a significantly better correlation with plant height (0.581 **), leaf length (0.853 **), leaf width (0.587 **), and leaf area (0.770 **) in commercial sugarcane clones. Two-way cluster analysis led to the identification of Co 0238, Co 86249, Co 10026, Co 99004, Co 94008, and Co 95020 as clones with better physiological traits for higher sugarcane yield under a changing climate. Additionally, in another field experiment with pre-breeding, germplasm, and interspecific hybrid sugarcane clones, canopy coverage showed a significantly better correlation with germination, shoot count, leaf weight, leaf area index, and plant height, and finally with biomass (r = 0.612 **) and cane yield (r = 0.458 **). Plant height, total dry matter (TDM), and leaf area index (LAI) had significant correlations with cane yield, and the canopy cover data from digital images can act as a surrogate for these traits; furthermore, canopy cover showed a better correlation with cane yield than the other physiological traits, viz., SPAD, total chlorophyll (TC), and canopy temperature (CT), under ambient conditions. Light interception determined using a line quantum sensor had a significant positive correlation (r = 0.764 **) with canopy coverage, signifying that the latter can be determined non-destructively, rapidly, and at low cost. Full article
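
For readers unfamiliar with Canopeo-style canopy cover estimation, the following sketch computes fractional green canopy cover from an RGB image using the commonly cited red/green, blue/green, and excess-green thresholds; the exact thresholds and image handling used in the study may differ.

```python
# Illustrative sketch of a Canopeo-style fractional canopy cover calculation
# from an RGB image; the thresholds are commonly cited defaults, not
# necessarily those used in the study.
import numpy as np
from PIL import Image

def canopy_cover(path, ratio_thr=0.95, exg_thr=20):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6
    green = (r / (g + eps) < ratio_thr) & (b / (g + eps) < ratio_thr) \
            & (2 * g - r - b > exg_thr)
    return green.mean()  # fraction of pixels classified as green canopy

# Example usage (the file path is hypothetical):
# print(f"Canopy cover: {canopy_cover('plot_001.jpg'):.1%}")
```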

19 pages, 4794 KiB  
Article
Technology of Automatic Evaluation of Dairy Herd Fatness
by Sergey S. Yurochka, Igor M. Dovlatov, Dmitriy Y. Pavkin, Vladimir A. Panchenko, Aleksandr A. Smirnov, Yuri A. Proshkin and Igor Yudaev
Agriculture 2023, 13(7), 1363; https://doi.org/10.3390/agriculture13071363 - 08 Jul 2023
Cited by 3 | Viewed by 1081
Abstract
The recent global development trend in dairy farming emphasizes the automation and robotization of milk production. The rapid pace of development in dairy farming requires new technologies to increase economic efficiency and improve production. The research goal was to increase milk production efficiency by introducing a technology that automatically assesses the fatness of a dairy herd in 0.25-point steps on a 5-point scale. Experimental data were collected with an O3D 303 3D ToF camera installed in a walk-through machine on robotic free-stall farms between August 2020 and November 2022. The authors collected data on 182 animals and processed 546 images. All animals were between 450 and 700 kg in weight. Based on regression analysis, they developed software to find and identify the five main regions of interest: the spinous processes of the lumbar spine and back; the transverse processes of the lumbar spine and the gluteal fossa area; the malar and sciatic tuberosities; the tail base; and the vulva and anus region. The adequacy of the proposed technology was verified by means of a parallel expert survey. The developed technology was tested on 3 farms with a total of 1810 cows and is helpful for the non-contact evaluation of the fatness of a dairy herd within the herd’s life cycle. The developed method can evaluate the tail base area with 100% accuracy. The hunger hollow can be determined with a 98.9% probability, and the vulva and anus area with a 95.10% probability. Protruding vertebrae, namely the spinous and transverse processes, were evaluated with 52.20% and 51.10% probabilities, respectively. The system’s overall accuracy was assessed as 93.4%, which was a positive result. Animals in a condition of 2.5 to 3.5 at 5–6 months were considered healthy. The developed system makes it possible to divide the animals into three groups reflecting their physiological status: normal-range body condition, exhaustion, and obesity. By means of a correlation of R = 0.849 (Pearson method), the authors showed that animals of the same breed and in the same lactation range have a linear weight-to-fatness relationship. They also developed an algorithm for the automated assessment of animal fatness with subsequent staging of physiological state, and the economic effect of implementing the proposed system has been demonstrated. Full article

19 pages, 5060 KiB  
Article
Deep Learning Application for Crop Classification via Multi-Temporal Remote Sensing Images
by Qianjing Li, Jia Tian and Qingjiu Tian
Agriculture 2023, 13(4), 906; https://doi.org/10.3390/agriculture13040906 - 20 Apr 2023
Cited by 5 | Viewed by 3644
Abstract
The combination of multi-temporal images and deep learning is an efficient way to obtain accurate crop distributions and so has drawn increasing attention. However, few studies have compared deep learning models with different architectures, so it remains unclear how a deep learning model should be selected for multi-temporal crop classification, or what the best possible accuracy is. To address this issue, the present work compares and analyzes a crop classification application based on deep learning models and different time-series data to exploit the possibility of improving crop classification accuracy. Using multi-temporal Sentinel-2 images as source data, time-series classification datasets are constructed based on vegetation indices (VIs) and spectral stacking, respectively, following which we compare and evaluate the crop classification application based on time-series datasets and five deep learning architectures: (1) one-dimensional convolutional neural networks (1D-CNNs), (2) long short-term memory (LSTM), (3) two-dimensional CNNs (2D-CNNs), (4) three-dimensional CNNs (3D-CNNs), and (5) two-dimensional convolutional LSTM (ConvLSTM2D). The results show that the accuracy of both the 1D-CNN (92.5%) and LSTM (93.25%) is higher than that of random forest (~91%) when using a single temporal feature as input. The 2D-CNN model integrates temporal and spatial information and is slightly more accurate (94.76%), but fails to fully utilize its multi-spectral features. The accuracy of the 1D-CNN and LSTM models integrated with temporal and multi-spectral features is 96.94% and 96.84%, respectively. However, neither model can extract spatial information. The accuracy of the 3D-CNN and ConvLSTM2D models is 97.43% and 97.25%, respectively. The experimental results show limited accuracy for crop classification based on single temporal features, whereas the combination of temporal features with multi-spectral or spatial information significantly improves classification accuracy. The 3D-CNN and ConvLSTM2D models are thus the best deep learning architectures for multi-temporal crop classification. However, the ConvLSTM architecture, which combines recurrent neural networks and CNNs, should be further developed for multi-temporal image crop classification. Full article
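
A minimal sketch of two of the compared temporal architectures (a 1D-CNN and an LSTM) is given below for per-pixel time-series classification; the sequence length, feature count, and layer sizes are illustrative assumptions rather than the configurations evaluated in the paper.

```python
# Minimal sketches of two of the compared architectures (1D-CNN and LSTM) for
# per-pixel time-series crop classification; layer sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

n_steps, n_features, n_classes = 20, 10, 6   # assumed: 20 dates, 10 spectral features

def build_1d_cnn():
    return models.Sequential([
        layers.Input(shape=(n_steps, n_features)),
        layers.Conv1D(64, 3, activation="relu", padding="same"),
        layers.Conv1D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])

def build_lstm():
    return models.Sequential([
        layers.Input(shape=(n_steps, n_features)),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])

for model in (build_1d_cnn(), build_lstm()):
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
```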

19 pages, 11089 KiB  
Article
Real-Time Detection of Apple Leaf Diseases in Natural Scenes Based on YOLOv5
by Huishan Li, Lei Shi, Siwen Fang and Fei Yin
Agriculture 2023, 13(4), 878; https://doi.org/10.3390/agriculture13040878 - 15 Apr 2023
Cited by 3 | Viewed by 2543
Abstract
Aiming at the problem of accurately locating and identifying multi-scale and differently shaped apple leaf diseases from a complex background in natural scenes, this study proposed an apple leaf disease detection method based on an improved YOLOv5s model. Firstly, the model utilized the bidirectional feature pyramid network (BiFPN) to achieve multi-scale feature fusion efficiently. Then, the transformer and convolutional block attention module (CBAM) attention mechanisms were added to reduce the interference from invalid background information, improving disease characteristics’ expression ability and increasing the accuracy and recall of the model. Experimental results showed that the proposed BTC-YOLOv5s model (with a model size of 15.8M) can effectively detect four types of apple leaf diseases in natural scenes, with 84.3% mean average precision (mAP). With an octa-core CPU, the model could process 8.7 leaf images per second on average. Compared with classic detection models of SSD, Faster R-CNN, YOLOv4-tiny, and YOLOx, the mAP of the proposed model was increased by 12.74%, 48.84%, 24.44%, and 4.2%, respectively, and offered higher detection accuracy and faster detection speed. Furthermore, the proposed model demonstrated strong robustness and mAP exceeding 80% under strong noise conditions, such as exposure to bright lights, dim lights, and fuzzy images. In conclusion, the new BTC-YOLOv5s was found to be lightweight, accurate, and efficient, making it suitable for application on mobile devices. The proposed method could provide technical support for early intervention and treatment of apple leaf diseases. Full article
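
The attention mechanism mentioned above can be illustrated with a generic CBAM-style block (channel attention followed by spatial attention). The sketch follows the original CBAM formulation and is not the exact BTC-YOLOv5s module; the reduction ratio and kernel size are assumptions.

```python
# Hedged sketch of a CBAM-style attention block (channel + spatial attention)
# as it might be inserted into a detector backbone.
import tensorflow as tf
from tensorflow.keras import layers

def cbam_block(x, reduction=8, kernel_size=7):
    channels = x.shape[-1]
    # Channel attention: shared MLP over average- and max-pooled descriptors
    shared = tf.keras.Sequential([
        layers.Dense(channels // reduction, activation="relu"),
        layers.Dense(channels),
    ])
    avg = shared(layers.GlobalAveragePooling2D()(x))
    mx = shared(layers.GlobalMaxPooling2D()(x))
    ca = tf.sigmoid(avg + mx)[:, None, None, :]
    x = x * ca
    # Spatial attention: conv over channel-wise average and max maps
    avg_map = tf.reduce_mean(x, axis=-1, keepdims=True)
    max_map = tf.reduce_max(x, axis=-1, keepdims=True)
    sa = layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")(
        tf.concat([avg_map, max_map], axis=-1))
    return x * sa

# Example: apply the block to a dummy feature map
feat = tf.random.normal((1, 40, 40, 64))
print(cbam_block(feat).shape)  # (1, 40, 40, 64)
```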

19 pages, 11213 KiB  
Article
UAV-Based Remote Sensing for Soybean FVC, LCC, and Maturity Monitoring
by Jingyu Hu, Jibo Yue, Xin Xu, Shaoyu Han, Tong Sun, Yang Liu, Haikuan Feng and Hongbo Qiao
Agriculture 2023, 13(3), 692; https://doi.org/10.3390/agriculture13030692 - 16 Mar 2023
Cited by 5 | Viewed by 1806
Abstract
Timely and accurate monitoring of the fractional vegetation cover (FVC), leaf chlorophyll content (LCC), and maturity of breeding material is essential for breeding companies. This study aimed to estimate LCC and FVC on the basis of remote sensing and to monitor maturity on the basis of the LCC and FVC distribution. We collected UAV-RGB images at key growth stages of soybean, namely, the podding (P1), early bulge (P2), peak bulge (P3), and maturity (P4) stages. Firstly, based on the above multi-period data, four regression techniques, namely, partial least squares regression (PLSR), multiple stepwise regression (MSR), random forest regression (RF), and Gaussian process regression (GPR), were used in combination with vegetation indices (VIs) to estimate LCC and FVC and to map them. Secondly, the LCC images of P3 (non-maturity) were used to detect LCC and FVC anomalies in soybean materials, and this method was used to obtain the threshold values for soybean maturity monitoring. Additionally, the mature and immature regions of soybean were monitored at P4 (mature stage) by using the thresholds of P3-LCC. The LCC and FVC anomaly detection method for soybean material presents the image pixels as a histogram and gradually removes the anomalous values from the tails until the distribution approaches a normal distribution. Finally, the P4 mature region (obtained from the previous step) is extracted, and soybean harvest monitoring is carried out in this region using the LCC and FVC anomaly detection method based on the P4-FVC image. Among the four regression models, GPR performed best at estimating LCC (R2: 0.84, RMSE: 3.99) and FVC (R2: 0.96, RMSE: 0.08). This process provides a reference for the FVC and LCC estimation of soybean at multiple growth stages; the P3-LCC images, in combination with the LCC and FVC anomaly detection method, were able to effectively monitor soybean maturation regions (overall accuracy of 0.988, mature accuracy of 0.951, immature accuracy of 0.987). In addition, the LCC thresholds obtained at P3 were also applied to P4 for soybean maturity monitoring (overall accuracy of 0.984, mature accuracy of 0.995, immature accuracy of 0.955); the LCC and FVC anomaly detection method enabled accurate monitoring of soybean harvesting areas (overall accuracy of 0.981, mature accuracy of 0.987, harvested accuracy of 0.972). This study provides a new approach and technique for monitoring soybean maturity in breeding fields. Full article
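
The tail-trimming idea behind the anomaly detection method can be sketched as follows: pixel values are trimmed from both tails of the histogram until the remaining distribution passes a normality test, and the surviving range then serves as the monitoring thresholds. The step size, significance level, and normality test are assumptions, not the authors' exact procedure.

```python
# Rough sketch of histogram tail trimming until approximate normality.
import numpy as np
from scipy import stats

def trim_to_normal(values, step=0.005, alpha=0.05, max_iter=50):
    """Return (low, high) bounds after iteratively trimming both tails."""
    v = np.sort(np.asarray(values, dtype=float).ravel())
    lo_q, hi_q = 0.0, 1.0
    for _ in range(max_iter):
        lo, hi = np.quantile(v, [lo_q, hi_q])
        kept = v[(v >= lo) & (v <= hi)]
        if stats.normaltest(kept).pvalue > alpha:   # close enough to normal
            return lo, hi
        lo_q += step
        hi_q -= step
    return lo, hi

# Example with synthetic LCC-like pixel values containing outliers
rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(45, 5, 5000), rng.uniform(0, 120, 300)])
low, high = trim_to_normal(pixels)
print(f"maturity thresholds: {low:.1f} .. {high:.1f}")
```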

19 pages, 6045 KiB  
Article
Extraction of Cropland Spatial Distribution Information Using Multi-Seasonal Fractal Features: A Case Study of Black Soil in Lishu County, China
by Qi Wang, Peng Guo, Shiwei Dong, Yu Liu, Yuchun Pan and Cunjun Li
Agriculture 2023, 13(2), 486; https://doi.org/10.3390/agriculture13020486 - 18 Feb 2023
Cited by 3 | Viewed by 1317
Abstract
Accurate extraction of cropland distribution information using remote sensing technology is a key step in the monitoring, protection, and sustainable development of black soil. To obtain the precise spatial distribution of cropland, an information extraction method is developed based on a fractal algorithm integrating temporal and spatial features. The method extracts multi-seasonal fractal features from the Landsat 8 OLI remote sensing data. Its efficiency is demonstrated using black soil in Lishu County, Northeast China. First, each pixel’s upper and lower fractal signals are calculated using a blanket covering method based on the Landsat 8 OLI remote sensing data in the spring, summer, and autumn seasons. The fractal characteristics of the cropland and other land-cover types are analyzed and compared. Second, the ninth lower fractal scale is selected as the feature scale to extract the spatial distribution of cropland in Lishu County. The cropland vector data, the European Space Agency (ESA) WorldCover data, and the statistical yearbook from the same period are used to assess accuracy. Finally, a comparative analysis of this study and existing products at different scales is carried out, and the point matching degree and area matching degree are evaluated. The results show that the point matching degree and the area matching degree of cropland extraction using the multi-seasonal fractal features are 90.66% and 96.21%, and 95.33% and 83.52%, respectively, which are highly consistent with the statistical data provided by the local government. The extraction accuracy of cropland is much better than that of existing products at different scales due to the contribution of the multi-seasonal fractal features. This method can be used to accurately extract cropland information to monitor changes in black soil, and it can be used to support the conservation and development of black soil in China. Full article
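
The blanket-covering computation of upper and lower fractal signals can be illustrated with the short sketch below (after Peleg's blanket method); the neighbourhood size, number of scales, and synthetic input are assumptions made for demonstration.

```python
# Illustrative implementation of the blanket-covering idea used for the upper
# and lower fractal signals of a single band.
import numpy as np
from scipy import ndimage

def blanket_signals(band, n_scales=10):
    """Return per-scale upper (u) and lower (b) blanket volumes for one band."""
    u = band.astype(float).copy()
    b = band.astype(float).copy()
    upper, lower = [], []
    for _ in range(n_scales):
        u = np.maximum(u + 1, ndimage.maximum_filter(u, size=3))
        b = np.minimum(b - 1, ndimage.minimum_filter(b, size=3))
        upper.append(float(np.sum(u - band)))
        lower.append(float(np.sum(band - b)))
    return np.array(upper), np.array(lower)

# Example on a synthetic 64x64 "reflectance" patch
rng = np.random.default_rng(7)
patch = rng.integers(0, 255, (64, 64))
up, low = blanket_signals(patch)
print("lower fractal signal at the 9th scale:", low[8])
```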

16 pages, 2566 KiB  
Article
Monitoring of Wheat Fusarium Head Blight on Spectral and Textural Analysis of UAV Multispectral Imagery
by Chunfeng Gao, Xingjie Ji, Qiang He, Zheng Gong, Heguang Sun, Tiantian Wen and Wei Guo
Agriculture 2023, 13(2), 293; https://doi.org/10.3390/agriculture13020293 - 26 Jan 2023
Cited by 6 | Viewed by 2356
Abstract
Crop disease identification and monitoring is an important research topic in smart agriculture. In particular, it is a prerequisite for disease detection and the mapping of infected areas. Wheat fusarium head blight (FHB) is a serious threat to the quality and yield of wheat, so the rapid monitoring of wheat FHB is important. This study proposed a method based on unmanned aerial vehicle (UAV) low-altitude remote sensing and multispectral imaging technology, combined with spectral and textural analysis, to monitor FHB. First, multispectral imagery of the wheat population was collected by UAV. Second, 10 vegetation indices (VIs) were extracted from the multispectral imagery. In addition, three types of textural indices (TIs), including the normalized difference texture index (NDTI), difference texture index (DTI), and ratio texture index (RTI), were extracted for subsequent analysis and modeling. Finally, VIs, TIs, and the integration of VIs and TIs were used as input features and combined with k-nearest neighbor (KNN), the particle swarm optimization support vector machine (PSO-SVM), and XGBoost to construct wheat FHB monitoring models. The results showed that the XGBoost algorithm with the fusion of VIs and TIs as input features had the highest performance, with the accuracy and F1 score of the test set being 93.63% and 92.93%, respectively. This study provides a new approach and technology for the rapid and nondestructive monitoring of wheat FHB. Full article
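
A hedged sketch of how such texture indices can be built and combined with XGBoost is shown below, using GLCM statistics as the underlying texture measures; the GLCM settings, band choice, and synthetic labels are placeholders rather than the study's configuration.

```python
# Sketch of texture-index construction (NDTI, DTI, RTI) from GLCM statistics
# and an XGBoost classifier; data and settings are placeholders.
import numpy as np
from itertools import combinations
from skimage.feature import graycomatrix, graycoprops
from xgboost import XGBClassifier

def glcm_textures(patch, levels=32):
    """A few GLCM statistics for one 8-bit single-band image patch."""
    q = (patch / 256 * levels).astype(np.uint8)          # quantize gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    return [float(graycoprops(glcm, p)[0, 0])
            for p in ("contrast", "homogeneity", "correlation", "energy")]

rng = np.random.default_rng(1)
patches = rng.integers(0, 256, (200, 32, 32))            # placeholder plot patches
labels = rng.integers(0, 2, 200)                         # healthy / FHB-infected

features = []
for patch in patches:
    t = glcm_textures(patch)
    row = []
    for t1, t2 in combinations(t, 2):                    # pairwise texture indices
        row += [(t1 - t2) / (t1 + t2 + 1e-6),            # NDTI
                t1 - t2,                                  # DTI
                t1 / (t2 + 1e-6)]                        # RTI
    features.append(row)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(np.array(features), labels)
print("training accuracy:", model.score(np.array(features), labels))
```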

21 pages, 6089 KiB  
Article
Monitoring of Soybean Maturity Using UAV Remote Sensing and Deep Learning
by Shanxin Zhang, Hao Feng, Shaoyu Han, Zhengkai Shi, Haoran Xu, Yang Liu, Haikuan Feng, Chengquan Zhou and Jibo Yue
Agriculture 2023, 13(1), 110; https://doi.org/10.3390/agriculture13010110 - 30 Dec 2022
Cited by 8 | Viewed by 2513
Abstract
Soybean breeders must develop early-maturing, standard, and late-maturing varieties for planting at different latitudes to ensure that soybean plants fully utilize solar radiation. Therefore, timely monitoring of soybean breeding line maturity is crucial for soybean harvesting management and yield measurement. Currently, the widely used deep learning models focus more on extracting deep image features, whereas shallow image feature information is ignored. In this study, we designed a new convolutional neural network (CNN) architecture, called DS-SoybeanNet, to improve the performance of unmanned aerial vehicle (UAV)-based soybean maturity information monitoring. DS-SoybeanNet can extract and utilize both shallow and deep image features. We used a high-definition digital camera on board a UAV to collect high-definition soybean canopy digital images. A total of 2662 soybean canopy digital images were obtained from two soybean breeding fields (fields F1 and F2). We compared the soybean maturity classification accuracies of (i) conventional machine learning methods (support vector machine (SVM) and random forest (RF)), (ii) current deep learning methods (InceptionResNetV2, MobileNetV2, and ResNet50), and (iii) our proposed DS-SoybeanNet method. Our results show the following: (1) The conventional machine learning methods (SVM and RF) had faster calculation times than the deep learning methods (InceptionResNetV2, MobileNetV2, and ResNet50) and our proposed DS-SoybeanNet method. For example, the computation speed of RF was 0.03 s per 1000 images. However, the conventional machine learning methods had lower overall accuracies (field F2: 63.37–65.38%) than the proposed DS-SoybeanNet (Field F2: 86.26%). (2) The performances of the current deep learning and conventional machine learning methods notably decreased when tested on a new dataset. For example, the overall accuracies of MobileNetV2 for fields F1 and F2 were 97.52% and 52.75%, respectively. (3) The proposed DS-SoybeanNet model can provide high-performance soybean maturity classification results. It showed a computation speed of 11.770 s per 1000 images and overall accuracies for fields F1 and F2 of 99.19% and 86.26%, respectively. Full article
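
The idea of fusing shallow and deep image features can be illustrated with the toy network below, in which an early and a late convolutional block are both globally pooled and concatenated before classification; the actual DS-SoybeanNet layer configuration will differ.

```python
# Conceptual sketch of combining shallow and deep CNN features, in the spirit
# of DS-SoybeanNet; layer sizes and class count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

n_classes = 2  # assumed: mature vs. immature

inputs = tf.keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
shallow = layers.Conv2D(32, 3, activation="relu", padding="same")(x)   # shallow block

x = layers.MaxPooling2D()(shallow)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
deep = layers.Conv2D(128, 3, activation="relu", padding="same")(x)     # deep block

# Pool both feature levels and fuse them before the classifier head
fused = layers.Concatenate()([layers.GlobalAveragePooling2D()(shallow),
                              layers.GlobalAveragePooling2D()(deep)])
head = layers.Dense(64, activation="relu")(fused)
outputs = layers.Dense(n_classes, activation="softmax")(head)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```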

22 pages, 2777 KiB  
Article
Improving Land Use/Cover Classification Accuracy from Random Forest Feature Importance Selection Based on Synergistic Use of Sentinel Data and Digital Elevation Model in Agriculturally Dominated Landscape
by Sa’ad Ibrahim
Agriculture 2023, 13(1), 98; https://doi.org/10.3390/agriculture13010098 - 29 Dec 2022
Cited by 7 | Viewed by 2414
Abstract
Land use and land cover (LULC) mapping can be of great help in changing land use decisions, but accurate mapping of LULC categories is challenging, especially in semi-arid areas with extensive farming systems and seasonal vegetation phenology. Machine learning algorithms are now widely used for LULC mapping because they provide analytical capabilities for LULC classification. However, the use of machine learning algorithms to improve classification performance is still being explored. The objective of this study is to investigate how to improve the performance of LULC models to reduce prediction errors. To address this question, the study applied a Random Forest (RF)-based feature selection approach using Sentinel-1, Sentinel-2, and Shuttle Radar Topography Mission (SRTM) data. Results from RF show that the Sentinel-2 data alone achieved an out-of-bag overall accuracy of 84.2%, while the Sentinel-1 and SRTM data achieved 83% and 76.44%, respectively. Classification accuracy improved to 89.1% when Sentinel-2, Sentinel-1 backscatter, and SRTM data were combined. This represents a 4.9% improvement in overall accuracy compared to Sentinel-2 alone and a 6.1% and 12.66% improvement compared to the Sentinel-1 and SRTM data, respectively. Further independent validation, based on equally sized stratified random samples, consistently found a 5.3% difference between the Sentinel-2 and the combined datasets. This study demonstrates the importance of the synergy between optical, radar, and elevation data in improving the accuracy of LULC maps. In principle, the LULC maps produced in this study could help decision-makers in a wide range of spatial planning applications. Full article
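
The Random Forest workflow described above maps naturally onto a short scikit-learn sketch: stack per-pixel features from the three data sources, fit a forest with out-of-bag scoring, and rank the feature importances. Band names and sample data below are placeholders.

```python
# Sketch of RF-based feature importance and out-of-bag accuracy for stacked
# Sentinel-2, Sentinel-1, and SRTM features; data and names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ([f"S2_B{i}" for i in (2, 3, 4, 8, 11, 12)] +
                 ["S1_VV", "S1_VH", "SRTM_elev", "SRTM_slope"])
X = rng.random((1000, len(feature_names)))   # placeholder training samples
y = rng.integers(0, 6, 1000)                 # placeholder LULC class labels

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=42)
rf.fit(X, y)

print(f"out-of-bag overall accuracy: {rf.oob_score_:.3f}")
for name, imp in sorted(zip(feature_names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:10s} {imp:.3f}")
```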

13 pages, 4001 KiB  
Article
Winter Wheat Yield Prediction Using an LSTM Model from MODIS LAI Products
by Jian Wang, Haiping Si, Zhao Gao and Lei Shi
Agriculture 2022, 12(10), 1707; https://doi.org/10.3390/agriculture12101707 - 17 Oct 2022
Cited by 26 | Viewed by 3030
Abstract
Yield estimation using remote sensing data is a research priority in modern agriculture. The rapid and accurate estimation of winter wheat yields over large areas is an important prerequisite for food security policy formulation and implementation. In most county-level yield estimation processes, as many input datasets as possible are used for yield prediction; however, in some regions, data are more difficult to obtain, so we used the leaf area index (LAI) alone as the model input for yield prediction. In this study, the effects of different time steps and of the LAI time series on the estimation results were analyzed with respect to the properties of long short-term memory (LSTM), and multiple machine learning methods were compared with yield estimation models constructed from LSTM networks. The results show that the accuracy of the yield estimation results using LSTM did not increase with increasing step size and data volume, while the yield estimation results of the LSTM were generally better than those of conventional machine learning methods, with the best R2 and RMSE results of 0.87 and 522.3 kg/ha, respectively, in the comparison between predicted and actual yields. Although the use of LAI as a single input factor may cause yield uncertainty in some extreme years, it is a reliable and promising method for improving yield estimation, which has important implications for crop yield forecasting, agricultural disaster monitoring, food trade policy, and food security early warning. Full article
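
A minimal sketch of an LSTM regressor that maps an LAI time series to a yield value is given below; the number of composites, network size, and synthetic data are assumptions, not the paper's configuration.

```python
# Minimal sketch: LSTM regression from a MODIS-LAI-like time series to yield.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_steps = 24                      # assumed number of 8-day LAI composites
rng = np.random.default_rng(0)
lai_series = rng.random((300, n_steps, 1)).astype("float32")   # placeholder LAI
yields = rng.uniform(3000, 8000, 300).astype("float32")        # kg/ha, placeholder

model = models.Sequential([
    layers.Input(shape=(n_steps, 1)),
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),              # predicted yield (kg/ha)
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(lai_series, yields, epochs=5, batch_size=32,
          validation_split=0.2, verbose=0)
print("RMSE:", model.evaluate(lai_series, yields, verbose=0)[1])
```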

13 pages, 3174 KiB  
Article
A Synthetic Angle Normalization Model of Vegetation Canopy Reflectance for Geostationary Satellite Remote Sensing Data
by Yinghao Lin, Qingjiu Tian, Baojun Qiao, Yu Wu, Xianyu Zuo, Yi Xie and Yang Lian
Agriculture 2022, 12(10), 1658; https://doi.org/10.3390/agriculture12101658 - 10 Oct 2022
Cited by 2 | Viewed by 1285
Abstract
High-frequency imaging characteristics allow a geostationary satellite (GSS) to capture the diurnal variation in vegetation canopy reflectance spectra, which is of very important practical significance for monitoring vegetation via remote sensing (RS). However, the observation angle and solar angle of high-frequency GSS RS data usually differ, and the differences in bidirectional reflectance from the reflectance spectra of the vegetation canopy are significant, which makes it necessary to normalize angles for GSS RS data. The BRDF (Bidirectional Reflectance Distribution Function) prototype library is effective for the angle normalization of RS data. However, its spatiotemporal applicability and error propagation are currently unclear. To resolve this problem, we herein propose a synthetic angle normalization model (SANM) for RS vegetation canopy reflectance; this model exploits the GSS imaging characteristics, whereby each pixel has a fixed observation angle. The established model references a topographic correction method for vegetation canopies based on path-length correction, solar zenith angle normalization, and the Minnaert model. It also considers the characteristics of diurnal variations in vegetation canopy reflectance spectra by setting the time window. Experiments were carried out on the eight Geostationary Ocean Color Imager (GOCI) images obtained on 22 April 2015 to validate the performance of the proposed SANM. The results show that SANM significantly improves the phase-to-phase correlation of the GOCI band reflectance in the morning time window and retains the instability of vegetation canopy spectra in the noon time window. The SANM provides a preliminary solution for normalizing the angles for the GSS RS data and makes the quantitative comparison of spatiotemporal RS data possible. Full article

15 pages, 3815 KiB  
Article
Weed Detection in Peanut Fields Based on Machine Vision
by Hui Zhang, Zhi Wang, Yufeng Guo, Ye Ma, Wenkai Cao, Dexin Chen, Shangbin Yang and Rui Gao
Agriculture 2022, 12(10), 1541; https://doi.org/10.3390/agriculture12101541 - 24 Sep 2022
Cited by 17 | Viewed by 3240
Abstract
The accurate identification of weeds in peanut fields can significantly reduce the use of herbicides in the weed control process. To address the identification difficulties caused by the cross-growth of peanuts and weeds and by the variety of weed species, this paper proposes a weed identification model named EM-YOLOv4-Tiny incorporating multiscale detection and attention mechanisms based on YOLOv4-Tiny. Firstly, an Efficient Channel Attention (ECA) module is added to the Feature Pyramid Network (FPN) of YOLOv4-Tiny to improve the recognition of small target weeds by using the detailed information of shallow features. Secondly, the soft Non-Maximum Suppression (soft-NMS) is used in the output prediction layer to filter the best prediction frames to avoid the problem of missed weed detection caused by overlapping anchor frames. Finally, the Complete Intersection over Union (CIoU) loss is used to replace the original Intersection over Union (IoU) loss so that the model can reach the convergence state faster. The experimental results show that the EM-YOLOv4-Tiny network is 28.7 M in size and takes 10.4 ms to detect a single image, which meets the requirement of real-time weed detection. Meanwhile, the mAP on the test dataset reached 94.54%, which is 6.83%, 4.78%, 6.76%, 4.84%, and 9.64% higher compared with YOLOv4-Tiny, YOLOv4, YOLOv5s, Swin-Transformer, and Faster-RCNN, respectively. The method has much reference value for solving the problem of fast and accurate weed identification in peanut fields. Full article
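
The soft-NMS step mentioned above can be illustrated with a generic reference implementation (Gaussian decay variant), in which overlapping boxes have their scores decayed by IoU rather than being discarded outright; this is not the repository's code.

```python
# Generic soft-NMS (Gaussian decay) reference sketch.
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, boxes as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thr=0.001):
    boxes, scores = boxes.copy().astype(float), scores.copy().astype(float)
    keep = []
    idx = np.arange(len(scores))
    while idx.size:
        best = idx[np.argmax(scores[idx])]
        keep.append(best)
        idx = idx[idx != best]
        ious = iou(boxes[best], boxes[idx])
        scores[idx] *= np.exp(-(ious ** 2) / sigma)   # Gaussian score decay
        idx = idx[scores[idx] > score_thr]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]])
scores = np.array([0.9, 0.8, 0.75])
print("kept boxes:", soft_nms(boxes, scores))
```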

17 pages, 4896 KiB  
Article
Remotely Sensed Prediction of Rice Yield at Different Growth Durations Using UAV Multispectral Imagery
by Shanjun Luo, Xueqin Jiang, Weihua Jiao, Kaili Yang, Yuanjin Li and Shenghui Fang
Agriculture 2022, 12(9), 1447; https://doi.org/10.3390/agriculture12091447 - 13 Sep 2022
Cited by 8 | Viewed by 2357
Abstract
A precise forecast of rice yields at the plot scale is essential for both food security and precision agriculture. In this work, we developed a novel technique to integrate UAV-based vegetation indices (VIs) with brightness, greenness, and moisture information obtained via tasseled cap transformation (TCT) to improve the precision of rice-yield estimates and eliminate saturation. Eight nitrogen gradients of rice were cultivated to acquire measurements on the ground, as well as six-band UAV images during the booting and heading periods. Several plot-level VIs were then computed based on the canopy reflectance derived from the UAV images. Meanwhile, the TCT-based retrieval of the plot brightness (B), greenness (G), and a third component (T) indicating the state of the rice growing and environmental information, was performed. The findings indicate that ground measurements are solely applicable to estimating rice yields at the booting stage. Furthermore, the VIs in conjunction with the TCT parameters exhibited a greater ability to predict the rice yields than the VIs alone. The final simulation models showed the highest accuracy at the booting stage, but with varying degrees of saturation. The yield-prediction models at the heading stage satisfied the requirement of high precision, without any obvious saturation phenomenon. The product of the VIs and the difference between the T and G (T − G) and the quotient of the T and B (T/B) was the optimum parameter for predicting the rice yield at the heading stage, with an estimation error below 7%. This study offers a guide and reference for rice-yield estimation and precision agriculture. Full article
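
The feature-construction idea in this abstract (multiplying a vegetation index by tasseled-cap-derived terms such as T − G and T/B) can be sketched as follows; the tasseled cap coefficients, data, and the simple linear regressor are placeholders, since the actual six-band transformation and modeling details are specific to the study.

```python
# Sketch of building VI x (T - G) and VI x (T / B) predictors from
# tasseled-cap-like components and fitting a simple yield model.
# The coefficient matrix below is a placeholder, NOT a published transform.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
bands = rng.random((60, 6))                      # placeholder plot-mean reflectance, 6 bands
yields = rng.uniform(6, 11, 60)                  # placeholder rice yield (t/ha)

tct = np.array([                                 # hypothetical TCT coefficients (B, G, T)
    [0.3, 0.4, 0.4, 0.5, 0.4, 0.4],
    [-0.3, -0.3, -0.2, 0.7, 0.1, -0.4],
    [0.1, 0.2, -0.5, 0.3, -0.6, 0.4],
])
B, G, T = (bands @ tct.T).T                      # brightness, greenness, third component
ndvi = (bands[:, 3] - bands[:, 2]) / (bands[:, 3] + bands[:, 2] + 1e-6)

X = np.column_stack([ndvi * (T - G), ndvi * (T / (B + 1e-6))])
model = LinearRegression().fit(X, yields)
print("R2 on training data:", round(model.score(X, yields), 3))
```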

17 pages, 3016 KiB  
Article
Hyperspectral Estimates of Soil Moisture Content Incorporating Harmonic Indicators and Machine Learning
by Xueqin Jiang, Shanjun Luo, Qin Ye, Xican Li and Weihua Jiao
Agriculture 2022, 12(8), 1188; https://doi.org/10.3390/agriculture12081188 - 10 Aug 2022
Cited by 4 | Viewed by 2134
Abstract
Soil is one of the most significant natural resources in the world, and its health is closely related to food security, ecological security, and water security. Monitoring the temporal and spatial variation of soil properties scientifically and reasonably is a basic task of soil environmental quality assessment. Soil moisture content (SMC) is an important soil property, which plays an important role in agricultural practice, hydrological processes, and ecological balance. In this paper, a hyperspectral SMC estimation method for mixed soil types was proposed, combining several spectral processing techniques with principal component analysis (PCA). The original spectra were processed by wavelet packet transform (WPT), first-order differential (FOD), and harmonic decomposition (HD) successively, and then PCA dimensionality reduction was used to obtain two groups of characteristic variables: WPT-FOD-PCA (WFP) and WPT-FOD-HD-PCA (WFHP). On this basis, three regression models, namely principal component regression (PCR), partial least squares regression (PLSR), and a back-propagation (BP) neural network, were applied to compare the SMC predictive ability of the different parameters. Meanwhile, we also compared the results with estimates from conventional spectral indices. The results indicate that the estimates based on spectral indices have significant errors. Moreover, the BP models (WFP-BP and WFHP-BP) show more accurate results when the same variables are selected. For the same regression model, the choice of variables is more important. The three models based on WFHP (WFHP-PCR, WFHP-PLSR, and WFHP-BP) all show high accuracy and maintain good consistency in the prediction of high and low SMC values. The optimal model was determined to be WFHP-BP, with an R2 of 0.932 and a prediction error below 2%. This study can provide information on the soil moisture conditions of farmland before planting crops on arable land as well as a technical reference for estimating SMC from hyperspectral images (satellite and UAV, etc.). Full article
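
Part of the pre-processing chain described above can be sketched as follows, with the harmonic decomposition step omitted for brevity: wavelet-packet smoothing, first-order differentiation, PCA, and two of the compared regressors. The wavelet, decomposition level, and synthetic data are assumptions.

```python
# Sketch of WPT smoothing + first-order differentiation + PCA + regression
# for hyperspectral SMC estimation (harmonic decomposition omitted).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
spectra = rng.random((150, 2151))        # placeholder reflectance, 350-2500 nm
smc = rng.uniform(5, 40, 150)            # placeholder soil moisture content (%)

def wpt_smooth(sig, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, maxlevel=level)
    keep = pywt.WaveletPacket(data=None, wavelet=wavelet, maxlevel=level)
    keep["a" * level] = wp["a" * level].data       # keep the approximation only
    return keep.reconstruct(update=False)[: sig.size]

processed = np.array([np.gradient(wpt_smooth(s)) for s in spectra])  # WPT + FOD
X = PCA(n_components=10).fit_transform(processed)                    # dimensionality reduction

for name, reg in [("PLSR", PLSRegression(n_components=5)),
                  ("BP",   MLPRegressor(hidden_layer_sizes=(32,),
                                         max_iter=2000, random_state=0))]:
    reg.fit(X, smc)
    print(name, "training R2:", round(reg.score(X, smc), 3))
```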