Computer Vision for Intelligent Crop Identification and Crop Protection

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Agricultural Technology".

Deadline for manuscript submissions: closed (20 December 2023) | Viewed by 27271

Special Issue Editors

College of Engineering, China Agricultural University, Beijing 100083, China
Interests: smart urban agriculture; artificial intelligence; agricultural robotics; automated control; unmanned aerial vehicle; plant phenotyping; computer vision; crop plant signaling; machine (deep) learning; food processing and safety; fluorescence imaging; hyper/multispectral imaging; Vis/NIR/MIR imaging spectroscopy

Special Issue Information

Dear Colleagues,

Affected by climate change and other factors, crops are susceptible to a variety of diseases, pests, and weeds, resulting in production loss and quality degradation. Crop protection is the science and practice of managing the plant diseases, weeds, and pests that damage agricultural crops. Herbicides, insecticides, and fungicides are widely used for crop protection in agricultural areas. However, several factors, including resistance to chemicals, environmental pollution, and growing consumer concern and interest in organic food, limit the acceptability of chemical reagents in future applications. Conventional protocols for weed control or for phenotyping crop disease severity are costly and time-consuming. This context requires the development of smart technologies that accelerate the selection of disease-resistant crops, or that apply compounds or alternative products precisely to targets to control diseases, pests, or weeds.

This Special Issue focuses on computer vision using near-ground and airborne cameras to identify plant traits for crop protection. We invite experts and researchers in the field to contribute original, high-quality research articles and reviews to this peer-reviewed Special Issue, “Computer Vision for Intelligent Crop Identification and Crop Protection”, in Agriculture or Agronomy.

You may choose our Joint Special Issue in Agronomy.

Dr. Wen-Hao Su
Dr. Zhou Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • crop plant signaling
  • automatic system
  • robot–plant interaction
  • high-throughput phenotyping
  • machine learning
  • image segmentation
  • plant detection
  • physiology of crops and weeds
  • monitoring and control of diseases
  • pests and weeds


Published Papers (11 papers)


Research


22 pages, 8157 KiB  
Article
Maize Leaf Compound Disease Recognition Based on Attention Mechanism
by Ping Dong, Kuo Li, Ming Wang, Feitao Li, Wei Guo and Haiping Si
Agriculture 2024, 14(1), 74; https://doi.org/10.3390/agriculture14010074 - 30 Dec 2023
Viewed by 869
Abstract
In addition to the conventional case of a single disease on a single corn leaf, multiple diseases can overlap on one leaf (compound diseases). Current research on corn leaf disease detection predominantly focuses on single leaves with single diseases, with limited attention given to compound diseases on a single leaf. However, compound diseases complicate detection for traditional deep learning algorithms, necessitating new models for identifying compound diseases on corn leaves. To achieve rapid and accurate identification of compound diseases in corn fields, this study adopts the YOLOv5s model as the base network, chosen for its small size and fast detection speed, and proposes an attention-based compound disease recognition method, YOLOv5s-C3CBAM. To address the limited data available for corn leaf compound diseases, a CycleGAN model is employed to generate synthetic images, mitigating the scarcity of real data and providing sufficient training data for the deep learning models. An attention mechanism is introduced to enhance the network's focus on disease lesions while mitigating interference between overlapping diseases, improving recognition accuracy. The YOLOv5s-C3CBAM model achieves an average precision of 83%, an F1 score of 81.98%, and a model size of 12.6 MB. Compared to the baseline model, average precision is improved by 3.1 percentage points; the model also outperforms Faster R-CNN and YOLOv7-tiny by 27.57 and 2.7 percentage points, respectively. This recognition method can rapidly and accurately identify compound diseases on corn leaves, offering valuable insights for future research on the precise identification of compound crop diseases in field conditions.
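The C3CBAM module pairs channel attention with spatial attention (the CBAM idea). As a rough NumPy illustration only — random matrices stand in for the learned MLP and convolution weights, so this is not the authors' implementation — the two stages can be sketched as:

```python
import numpy as np


def channel_attention(x, reduction=4):
    """Channel attention: pool out spatial dims, re-weight each channel."""
    c, h, w = x.shape
    avg = x.mean(axis=(1, 2))  # (c,) average-pooled descriptor
    mx = x.max(axis=(1, 2))    # (c,) max-pooled descriptor
    # Shared 2-layer MLP; random weights here, learned in practice.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c, c // reduction)) * 0.1
    w2 = rng.standard_normal((c // reduction, c)) * 0.1

    def mlp(v):
        return np.maximum(v @ w1, 0) @ w2

    scale = 1 / (1 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate in (0, 1)
    return x * scale[:, None, None]


def spatial_attention(x):
    """Spatial attention: pool over channels, re-weight each location."""
    avg = x.mean(axis=0, keepdims=True)
    mx = x.max(axis=0, keepdims=True)
    # CBAM uses a learned 7x7 conv over [avg; max]; mean is a stand-in.
    attn = 1 / (1 + np.exp(-(avg + mx) / 2))
    return x * attn


feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 16, 16)
```

In the actual detector, these gates are learned end-to-end inside the YOLOv5s C3 blocks, so lesion regions receive higher weights than background.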

26 pages, 8500 KiB  
Article
An Integrated Multi-Model Fusion System for Automatically Diagnosing the Severity of Wheat Fusarium Head Blight
by Ya-Hong Wang, Jun-Jiang Li and Wen-Hao Su
Agriculture 2023, 13(7), 1381; https://doi.org/10.3390/agriculture13071381 - 12 Jul 2023
Cited by 1 | Viewed by 1170
Abstract
Fusarium has become a major impediment to stable wheat production in many regions worldwide. Infected wheat plants not only experience reduced yield and quality, but their spikes also generate toxins that pose a significant threat to human and animal health. Currently, there are two primary methods for effectively controlling Fusarium head blight (FHB): spraying quantitative chemical agents and breeding disease-resistant wheat varieties. The premise of both methods is to accurately diagnose the severity of wheat FHB in real time. In this study, a deep learning-based multi-model fusion system was developed for integrated detection of FHB severity. Combination schemes of network frameworks and backbones for wheat spike and spot segmentation were investigated. The training results demonstrated that Mobilev3-Deeplabv3+ exhibits strong multi-scale feature refinement capabilities and achieved a high segmentation accuracy of 97.6% for high-throughput wheat spike images. By implementing parallel feature fusion from high- to low-resolution inputs, w48-Hrnet excelled at recognizing fine and complex FHB spots, resulting in up to 99.8% accuracy. Refinement of wheat FHB grading classification from the perspectives of epidemic control (zero to five levels) and breeding (zero to 14 levels) was accomplished. In addition, the effectiveness of introducing the HSV color feature as a weighting factor into the evaluation model for grading wheat spikes was verified. The multi-model fusion algorithm, developed specifically for the all-in-one process, successfully accomplished the tasks of segmentation, extraction, and classification, with an overall accuracy of 92.6% for FHB severity grades. The integrated system, combining deep learning and image analysis, provides a reliable and nondestructive diagnosis of wheat FHB, enabling real-time monitoring for farmers and researchers.
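The severity grade in a pipeline like this is derived from the ratio of diseased pixels to spike pixels in the segmentation masks. A minimal sketch, assuming evenly spaced grade thresholds (the study's exact cut-offs and its HSV weighting are not reproduced here):

```python
import numpy as np


def fhb_severity(spike_mask, spot_mask, n_levels=5):
    """Severity = diseased-pixel fraction of the spike, binned into grades.

    spike_mask, spot_mask: boolean arrays of the same shape.
    Returns (ratio, grade) with grade in 0..n_levels.
    """
    spike_px = spike_mask.sum()
    if spike_px == 0:
        raise ValueError("empty spike mask")
    # Only diseased pixels that lie on the spike count toward severity.
    ratio = (spot_mask & spike_mask).sum() / spike_px
    # Evenly spaced bins as an illustrative grading scheme.
    grade = min(int(np.ceil(ratio * n_levels)), n_levels)
    return ratio, grade


# Synthetic 10x10 example: a 6x6 spike, half of it diseased.
spike = np.zeros((10, 10), bool)
spike[2:8, 2:8] = True          # 36 spike pixels
spot = np.zeros((10, 10), bool)
spot[2:5, 2:8] = True           # 18 diseased pixels, all on the spike
ratio, grade = fhb_severity(spike, spot)
print(ratio, grade)  # 0.5 3
```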

17 pages, 4839 KiB  
Article
ResViT-Rice: A Deep Learning Model Combining Residual Module and Transformer Encoder for Accurate Detection of Rice Diseases
by Yujia Zhang, Luteng Zhong, Yu Ding, Hongfeng Yu and Zhaoyu Zhai
Agriculture 2023, 13(6), 1264; https://doi.org/10.3390/agriculture13061264 - 19 Jun 2023
Viewed by 1716
Abstract
Rice is a staple food for over half of the global population, but it faces significant yield losses of up to 52% due to leaf blast and brown spot diseases. This study proposes a hybrid architecture, ResViT-Rice, that takes advantage of both CNNs and transformers for accurate detection of leaf blast and brown spot diseases. We employed ResNet as the backbone network to establish a detection model and introduced the encoder component of the transformer architecture. The convolutional block attention module was also integrated into ResViT-Rice to further enhance the feature-extraction ability. We processed 1648 training and 104 testing images covering the two diseases and the healthy class. To verify the effectiveness of the proposed ResViT-Rice, we conducted a comparative evaluation against popular deep learning models. The experimental results suggest that ResViT-Rice achieved promising results in the rice disease-detection task, with the highest accuracy reaching 0.9904. The corresponding precision, recall, and F1-score were all over 0.96, the AUC reached 0.9987, and the corresponding loss rate was 0.0042. In conclusion, the proposed ResViT-Rice better extracts features of different rice diseases, thereby providing a more accurate and robust classification output.
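The core operation a transformer encoder adds on top of the CNN backbone is scaled dot-product self-attention over flattened feature-map tokens. A single-head NumPy sketch, with random projections standing in for the learned Q/K/V weights (illustrative only, not the ResViT-Rice code):

```python
import numpy as np


def self_attention(x, d_k):
    """Single-head scaled dot-product attention over feature tokens.

    x: (n_tokens, d) flattened CNN feature map.
    Returns (attended tokens, attention weights).
    """
    rng = np.random.default_rng(0)
    # Random stand-ins for learned projection matrices.
    wq, wk, wv = (rng.standard_normal((x.shape[1], d_k)) * 0.1
                  for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d_k)
    # Row-wise softmax (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v, weights


# A 7x7 feature map with 32 channels flattened into 49 tokens.
tokens = np.random.default_rng(1).standard_normal((49, 32))
out, attn = self_attention(tokens, d_k=16)
print(out.shape, attn.shape)  # (49, 16) (49, 49)
```

Each output token is a weighted mix of all 49 positions, which is what lets the encoder relate lesion patches across the whole leaf rather than only within a local receptive field.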

23 pages, 13273 KiB  
Article
MYOLO: A Lightweight Fresh Shiitake Mushroom Detection Model Based on YOLOv3
by Peichao Cong, Hao Feng, Kunfeng Lv, Jiachao Zhou and Shanda Li
Agriculture 2023, 13(2), 392; https://doi.org/10.3390/agriculture13020392 - 07 Feb 2023
Cited by 9 | Viewed by 2509
Abstract
Fruit and vegetable inspection aids robotic harvesting in modern agricultural production. For rapid and accurate detection of fresh shiitake mushrooms, picking robots must overcome the complex conditions of the growing environment, diverse morphology, dense shading, and a changing field of view. Existing work focuses on improving inspection accuracy at the expense of timeliness. This paper proposes a lightweight shiitake mushroom detection model called Mushroom You Only Look Once (MYOLO), based on You Only Look Once (YOLO) v3. To reduce the complexity of the network structure and computation and improve real-time detection, a lightweight GhostNet16 was built to replace DarkNet53 as the backbone network. Spatial pyramid pooling was introduced at the end of the backbone network to achieve multiscale local feature fusion and improve detection accuracy. Furthermore, a neck network called the shuffle adaptive spatial feature pyramid network (ASA-FPN) was designed to improve the detection of fresh shiitake mushrooms, including densely shaded ones, as well as localization accuracy. Finally, the Complete Intersection over Union (CIoU) loss function was used to optimize the model and improve its convergence efficiency. MYOLO achieved a mean average precision (mAP) of 97.03% with 29.8 M parameters and a detection time of 19.78 ms, showing excellent timeliness and detectability with a 2.04% higher mAP and 2.08 times fewer parameters than the original model. Thus, it provides an important theoretical basis for the automatic picking of fresh shiitake mushrooms.
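The CIoU loss mentioned above augments plain IoU with a normalized center-distance term and an aspect-ratio consistency term, so boxes are pulled together even when they do not overlap. A plain-Python version of the standard formulation (not extracted from the MYOLO code):

```python
import math


def ciou_loss(box_a, box_b):
    """Complete-IoU loss between two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Squared center distance over squared enclosing-box diagonal.
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term v, with its trade-off weight alpha.
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v


print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0 for identical boxes
```

Unlike plain IoU loss, the distance term gives a useful gradient for disjoint boxes, which is why CIoU tends to converge faster.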

20 pages, 4951 KiB  
Article
Low Illumination Soybean Plant Reconstruction and Trait Perception
by Yourui Huang, Yuwen Liu, Tao Han, Shanyong Xu and Jiahao Fu
Agriculture 2022, 12(12), 2067; https://doi.org/10.3390/agriculture12122067 - 01 Dec 2022
Cited by 3 | Viewed by 1359
Abstract
Agricultural equipment performs poorly under low illumination, such as at nighttime: soybean plant images collected under light constraints contain more noise, and the reconstructed soybean plant model cannot fully and accurately represent the plant's growth condition. In this paper, we propose a low-illumination soybean plant reconstruction and trait perception method based on low-illumination enhancement. The image enhancement algorithm EnlightenGAN adjusts soybean plant images captured in low-illumination environments, improving the performance of the scale-invariant feature transform (SIFT) algorithm for feature detection and matching. A structure-from-motion (SfM) algorithm then generates a sparse point cloud of the soybean plants, which is densified by the patch-based multi-view stereo (PMVS) algorithm. We demonstrate that, with image enhancement in challenging low-illumination environments, the reconstructed soybean plants are close to the growth conditions of real soybean plants, expanding the application of three-dimensional reconstruction techniques to soybean plant trait perception. Our approach is aimed at enabling agricultural equipment to accurately perceive current crop growth conditions under low illumination.

20 pages, 3428 KiB  
Article
Channel–Spatial Segmentation Network for Classifying Leaf Diseases
by Balaji Natesan, Anandakumar Singaravelan, Jia-Lien Hsu, Yi-Hsien Lin, Baiying Lei and Chuan-Ming Liu
Agriculture 2022, 12(11), 1886; https://doi.org/10.3390/agriculture12111886 - 09 Nov 2022
Cited by 2 | Viewed by 2040
Abstract
Agriculture is an important resource for the global economy, while plant disease causes devastating yield loss. To control plant disease, countries around the world spend vast sums on disease management. Some recent solutions are based on computer vision techniques in plant science, which help to monitor crops such as tomato, maize, grape, citrus, potato, and cassava. Attention-based CNN networks have become effective in plant disease prediction. However, existing approaches are less precise in detecting minute-scale disease in leaves. Our proposed Channel–Spatial segmentation network helps to determine the disease in the leaf and consists of two main stages: (a) channel attention discriminates diseased and healthy parts and produces channel-focused features, and (b) spatial attention consumes the channel-focused features and highlights the diseased part for the final prediction. This investigation applies channel and spatial attention sequentially to identify diseased and healthy leaves. Finally, identified leaf diseases are divided into Mild, Medium, Severe, and Healthy classes. Our model successfully predicts diseased leaves with a highest accuracy of 99.76%. Our study presents evaluation metrics, comparison studies, and expert analysis to assess network performance. We conclude that the Channel–Spatial segmentation network can be used effectively to diagnose different disease degrees based on a combination of image processing and statistical calculation.

18 pages, 7315 KiB  
Article
Automatic Tandem Dual BlendMask Networks for Severity Assessment of Wheat Fusarium Head Blight
by Yichao Gao, Hetong Wang, Man Li and Wen-Hao Su
Agriculture 2022, 12(9), 1493; https://doi.org/10.3390/agriculture12091493 - 18 Sep 2022
Cited by 14 | Viewed by 2553
Abstract
Fusarium head blight (FHB) disease reduces wheat yield and quality. Breeding wheat varieties with resistance genes is an effective way to reduce the impact of this disease, but it requires trained experts to assess the disease resistance of hundreds of wheat lines in the field. Manual evaluation methods are time-consuming and labor-intensive, and the results are greatly affected by human factors, while traditional machine learning methods are only suitable for small-scale datasets. Intelligent and accurate assessment of FHB severity could therefore significantly facilitate the rapid screening of resistant lines. In this study, an automatic tandem dual BlendMask deep learning framework was used to simultaneously segment wheat spikes and diseased areas, enabling rapid detection of disease severity. The feature pyramid network (FPN), based on the ResNet-50 network, was used as the backbone of BlendMask for feature extraction. The model performed well in the segmentation of wheat spikes, with precision, recall, and MIoU (mean intersection over union) values of 85.36%, 75.58%, and 56.21%, respectively, and in the segmentation of diseased areas, with precision, recall, and MIoU values of 78.16%, 79.46%, and 55.34%, respectively. The final recognition accuracies of the model for wheat spikes and diseased areas were 85.56% and 99.32%, respectively. The disease severity was obtained from the ratio of the diseased area to the spike area. The average accuracy for FHB severity classification reached 91.80%, with an average F1-score of 92.22%. This study demonstrates the great advantage of a tandem dual BlendMask network in the intelligent screening of resistant wheat lines.
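The segmentation metrics reported here (precision, recall, IoU) can all be computed directly from binary masks. A small NumPy sketch for a single class, using synthetic masks:

```python
import numpy as np


def mask_metrics(pred, gt):
    """Pixel precision, recall, and IoU for one binary class."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()                 # true-positive pixels
    precision = tp / pred.sum()            # of predicted pixels, how many correct
    recall = tp / gt.sum()                 # of true pixels, how many found
    iou = tp / (pred | gt).sum()           # intersection over union
    return precision, recall, iou


# Ground truth: a 4x4 square; prediction: the same square shifted one row.
gt = np.zeros((8, 8), bool)
gt[2:6, 2:6] = True     # 16 pixels
pred = np.zeros((8, 8), bool)
pred[3:7, 2:6] = True   # 16 pixels, 12 overlapping
p, r, i = mask_metrics(pred, gt)
print(p, r, i)  # 0.75 0.75 0.6
```

Mean IoU (MIoU) as reported in the abstract is simply this IoU averaged over all classes.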

22 pages, 10682 KiB  
Article
A Diameter Measurement Method of Red Jujubes Trunk Based on Improved PSPNet
by Yichen Qiao, Yaohua Hu, Zhouzhou Zheng, Zhanghao Qu, Chao Wang, Taifeng Guo and Juncai Hou
Agriculture 2022, 12(8), 1140; https://doi.org/10.3390/agriculture12081140 - 01 Aug 2022
Cited by 5 | Viewed by 2116
Abstract
Trunk segmentation and diameter measurement of red jujube trees are important steps in harvesting red jujubes with vibration harvesting robots, as the results directly affect the effectiveness of the harvesting. This research proposes a red jujube trunk segmentation algorithm based on an improved Pyramid Scene Parsing Network (PSPNet), together with a diameter measurement algorithm. MobilenetV2 was selected as the backbone of PSPNet so that the model could be adapted to embedded mobile applications, and the Convolutional Block Attention Module (CBAM) was embedded in MobilenetV2 to enhance the feature extraction capability of the model. Furthermore, Refinement Residual Blocks (RRBs) were introduced into the main and side branches of PSPNet to enhance the segmentation result. The proposed diameter measurement algorithm uses the segmentation results to determine the trunk outline and the normals of the centerline; the Euclidean distances between the intersection points of each normal with the trunk profile are computed, and their average is taken as the final trunk diameter. Compared with the original PSPNet, the Intersection-over-Union (IoU) value, PA value, and FPS of the improved model increased by 0.67%, 1.95%, and 1.13, respectively, and the number of parameters was 5.00% of that of the original model. Compared with other segmentation networks, the improved model had fewer parameters and better segmentation results. Compared with the original network, the proposed trunk diameter measurement algorithm reduced the average absolute error and the average relative error by 3.75 mm and 9.92%, respectively, and improved the average measurement accuracy by 9.92%. In summary, the improved PSPNet jujube trunk segmentation algorithm and the trunk diameter measurement algorithm can accurately segment the trunk and measure its diameter in the natural environment, providing a theoretical basis and technical support for the clamping mechanism of jujube harvesting robots.
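The diameter is recovered from the segmentation mask by intersecting normals of the centerline with the trunk contour. For a near-vertical trunk, this reduces to measuring per-row run lengths of mask pixels; a simplified NumPy sketch of that special case (the paper's algorithm handles arbitrary trunk orientations, and the mm-per-pixel scale here is illustrative):

```python
import numpy as np


def trunk_diameter(mask, mm_per_px=1.0):
    """Average cross-sectional width of a near-vertical trunk mask.

    For an axis-aligned trunk, each centerline normal is a pixel row,
    so the chord length is the row's run of mask pixels.
    """
    widths = []
    for row in mask:
        cols = np.flatnonzero(row)
        if cols.size:
            # Width of this row's trunk cross-section in pixels.
            widths.append(cols[-1] - cols[0] + 1)
    return float(np.mean(widths)) * mm_per_px


# Synthetic mask: a 4-pixel-wide vertical trunk, 2.5 mm per pixel.
mask = np.zeros((20, 20), dtype=np.uint8)
mask[:, 8:12] = 1
print(trunk_diameter(mask, mm_per_px=2.5))  # 10.0
```

Averaging over many cross-sections, as the paper does, damps out per-row segmentation noise.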

17 pages, 848 KiB  
Article
ADDLight: An Energy-Saving Adder Neural Network for Cucumber Disease Classification
by Chen Liu, Chunjiang Zhao, Huarui Wu, Xiao Han and Shuqin Li
Agriculture 2022, 12(4), 452; https://doi.org/10.3390/agriculture12040452 - 23 Mar 2022
Cited by 6 | Viewed by 2237
Abstract
It is an urgent task to improve the applicability of cucumber disease classification models in greenhouse edge-intelligent devices. The energy consumption of disease diagnosis models designed with deep learning methods is a key factor affecting their applicability. Motivated by this, two methods, reducing the model's computation and changing the calculation method of feature extraction, were used in this study to reduce the model's computational energy consumption, thereby prolonging the working time of greenhouse edge devices on which disease models are deployed. First, a cucumber disease dataset with complex backgrounds was constructed. Second, random data augmentation was applied during model training. Third, the conventional feature extraction module, a depthwise separable feature extraction module, and the squeeze-and-excitation module were used as the main modules for constructing the classification model; the strategies of channel expansion and shortcut connection were used to further improve the model's classification accuracy. Finally, the additive feature extraction method was used to reconstruct the proposed model. The experimental results show that the computational energy consumption of the adder cucumber disease classification model is reduced by 96.1% compared with a convolutional neural network of the same structure. In addition, the model size is only 0.479 MB, the computation amount is 0.03 GFLOPs, and the classification accuracy on cucumber disease images with complex backgrounds is 89.1%. These results demonstrate that our model has high applicability in cucumber greenhouse intelligent equipment.
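"Additive feature extraction" refers to adder-network-style layers, which replace the multiply-accumulate of convolution with an L1 distance between the input patch and the kernel, so responses can be computed with additions only (the source of the energy saving). A minimal comparison of the two filter responses, not taken from the ADDLight code:

```python
import numpy as np


def adder_filter_response(patch, kernel):
    """Adder-style response: negative L1 distance, no multiplications."""
    return -np.abs(patch - kernel).sum()


def conv_filter_response(patch, kernel):
    """Conventional response: cross-correlation (multiply-accumulate)."""
    return (patch * kernel).sum()


patch = np.array([[1.0, 2.0], [3.0, 4.0]])
kernel = np.array([[1.0, 2.0], [3.0, 4.0]])
# A perfectly matching kernel gives the maximum adder response, 0.
print(adder_filter_response(patch, kernel))  # 0.0
print(conv_filter_response(patch, kernel))   # 30.0
```

Both measure patch-kernel similarity, but additions cost far less energy than multiplications on edge hardware, which is the motivation for reconstructing the model this way.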

16 pages, 11509 KiB  
Article
Development of a Three-Dimensional Plant Localization Technique for Automatic Differentiation of Soybean from Intra-Row Weeds
by Wen-Hao Su, Ji Sheng and Qing-Yang Huang
Agriculture 2022, 12(2), 195; https://doi.org/10.3390/agriculture12020195 - 31 Jan 2022
Cited by 4 | Viewed by 2669
Abstract
Soybean is a legume that is grown worldwide for its edible bean. Intra-row weeds greatly hinder the normal growth of soybeans, and the continuous emergence of herbicide-resistant weeds and the increasing labor costs of weed control are affecting the profitability of growers. Existing cultivation technology cannot control the weeds within the crop row, which are highly competitive with the soybean in early growth stages, so there is an urgent need to develop an automated weeding technology for intra-row weed control. The prerequisite for performing weeding operations is accurately determining plant locations in the field. The purpose of this study was to develop a plant localization technique based on systemic crop signalling to automatically detect the appearance of soybean. Rhodamine B (Rh-B) is a signalling compound with a unique fluorescent appearance. Different concentrations of Rh-B were applied to soybean via seed treatment for various durations prior to planting, and the potential impact of Rh-B on seedling growth in the outdoor environment was evaluated. Both 60 and 120 ppm of Rh-B were safe for soybean plants, and higher doses resulted in greater absorption. A three-dimensional plant localization algorithm was developed by analyzing fluorescence images of multiple views of the plants. The soybean location was successfully determined with an accuracy of 97%. The Rh-B in soybean plants created a machine-sensible signal that enhances weed/crop differentiation, which is helpful for performing automatic weeding tasks with weeders.
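Localizing a fluorescent plant from multiple views amounts to detecting the signal in each image and triangulating. A much-simplified two-view stereo stand-in (thresholded centroid plus the pinhole disparity-to-depth relation); the paper's multi-view algorithm is more involved, and all numbers here are synthetic:

```python
import numpy as np


def fluorescent_centroid(img, thresh):
    """Centroid (row, col) of pixels above a fluorescence threshold."""
    ys, xs = np.nonzero(img > thresh)
    return ys.mean(), xs.mean()


def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    """Stereo depth from the pinhole model: Z = f * B / disparity."""
    return focal_px * baseline_mm / (x_left - x_right)


# Synthetic rectified pair: one bright plant shifted 5 px between views.
left = np.zeros((40, 40))
left[18:22, 20:24] = 255
right = np.zeros((40, 40))
right[18:22, 15:19] = 255
_, xl = fluorescent_centroid(left, 128)
_, xr = fluorescent_centroid(right, 128)
depth = depth_from_disparity(xl, xr, focal_px=800, baseline_mm=60)
print(depth)  # 9600.0 (mm, for these made-up camera parameters)
```

The fluorescence threshold is what makes the crop, and not the weeds, the only bright object to localize, which is the point of the Rh-B signalling.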

Review


24 pages, 2834 KiB  
Review
Deep Learning Models for the Classification of Crops in Aerial Imagery: A Review
by Igor Teixeira, Raul Morais, Joaquim J. Sousa and António Cunha
Agriculture 2023, 13(5), 965; https://doi.org/10.3390/agriculture13050965 - 27 Apr 2023
Cited by 5 | Viewed by 5449
Abstract
In recent years, the use of remote sensing data obtained from satellite or unmanned aerial vehicle (UAV) imagery has grown in popularity for crop classification tasks such as yield prediction, soil classification, and crop mapping. The ready availability of information, with improved temporal, radiometric, and spatial resolution, has resulted in the accumulation of vast amounts of data. Meeting the demands of analysing these data requires innovative solutions, and artificial intelligence techniques offer the necessary support. This systematic review evaluates the effectiveness of deep learning techniques for crop classification using remote sensing data from aerial imagery. The reviewed papers cover a variety of deep learning architectures, including convolutional neural networks (CNNs), long short-term memory networks, transformers, and hybrid CNN-recurrent neural network models, and incorporate techniques such as data augmentation, transfer learning, and multimodal fusion to improve model performance. The review analyses the use of these techniques to boost crop classification accuracy by developing new deep learning architectures or by combining various types of remote sensing data. Additionally, it assesses the impact of factors such as spatial and spectral resolution, image annotation, and sample quality on crop classification. Ensembling models or integrating multiple data sources tends to enhance the classification accuracy of deep learning models. Satellite imagery is the most commonly used data source due to its accessibility and typically free availability. The study highlights the requirement for large amounts of training data and the incorporation of non-crop classes to enhance accuracy, and provides valuable insights into the current state of deep learning models and datasets for crop classification tasks.
