

Advanced Sensing and Image Processing in Agricultural Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (15 January 2024) | Viewed by 13500

Special Issue Editors

Guest Editor
Key Laboratory of Key Technology on Agricultural Machine and Equipment, Ministry of Education, College of Engineering, South China Agricultural University, Guangzhou 510642, China
Interests: agricultural robotics; image processing; motion control; neural networks; artificial intelligence; pattern recognition

Guest Editor
College of Engineering, South China Agricultural University, Guangzhou, China
Interests: agricultural machinery; rice precision direct seeding technology; farmland precision leveling technology; navigation and automatic operation

Guest Editor
Bristol Robotics Laboratory, University of the West of England, UWE Bristol - Frenchay Campus, Coldharbour Ln, Bristol BS16 1QY, UK
Interests: human-robot interaction and intelligent system design

Special Issue Information

Dear Colleagues,

Advanced sensing and image processing technology has recently attracted increasing research interest across agricultural production scenarios, such as fruit-picking robots, pest monitoring, growth-environment monitoring, agricultural planting management, and seed-quality breeding. In complex agricultural environments, however, image processing is more difficult: interference from external factors leads to misclassification and, in turn, to errors in experimental results. In addition, despite the rapid development of image processing algorithms, the application of image processing technology in agriculture still faces many challenges, including overlapping agricultural products, severe occlusion of detection targets, large numbers of detection targets, and the processing difficulties caused by variable lighting and camera angles. These problems hinder the application of image processing technology in complex agricultural environments, making advanced image processing in agriculture an inspiring and promising topic. This Special Issue aims to present state-of-the-art research achievements and advances by world-class researchers that contribute to the agricultural field in terms of image processing, environment perception, and sensor fusion. Review articles are also encouraged. The potential topics of this organized session include but are not limited to:

  • Deep learning algorithm in agricultural applications;
  • Image processing technology in pest monitoring;
  • Soil spectral data in agricultural engineering;
  • Multispectral image processing in agricultural engineering;
  • Satellite remote sensing technology in agriculture;
  • Detection and localization for agricultural robots;
  • Near-infrared image processing in agricultural engineering;
  • Hyperspectral technology in crop monitoring;
  • Integrating perception, sensor fusion and control in agricultural applications;
  • Fault detection and diagnosis in agricultural engineering.

Dr. Jiehao Li
Prof. Dr. Xiwen Luo
Prof. Dr. Chenguang Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • agricultural robotics
  • sensing information
  • computer vision
  • deep learning
  • hyperspectral imagery
  • RGB image
  • image segmentation
  • feature extraction
  • laser radar

Published Papers (10 papers)


Research

Jump to: Review

21 pages, 9142 KiB  
Article
Water Stress Index and Stomatal Conductance under Different Irrigation Regimes with Thermal Sensors in Rice Fields on the Northern Coast of Peru
by Lia Ramos-Fernández, Maria Gonzales-Quiquia, José Huanuqueño-Murillo, David Tito-Quispe, Elizabeth Heros-Aguilar, Lisveth Flores del Pino and Alfonso Torres-Rua
Remote Sens. 2024, 16(5), 796; https://doi.org/10.3390/rs16050796 - 24 Feb 2024
Viewed by 1013
Abstract
In the face of the climate change crisis, the increase in air temperature negatively impacts rice crop productivity due to stress from water scarcity. The objective of this study was to determine the rice crop water stress index (CWSI) and stomatal conductance (Gs) under different irrigation regimes, specifically continuous flood irrigation treatments (CF) and irrigations with alternating wetting and drying (AWD) at water levels of 5 cm, 10 cm, and 20 cm below the soil surface (AWD5, AWD10, and AWD20) in an experimental area of INIA-Vista Florida and in six commercial areas of the Lambayeque region using thermal images captured with thermal sensors. The results indicated that AWD irrigation generated more water stress, with CWSI values between 0.4 and 1.0. Despite this, the yields were similar in CF and AWD20. In the commercial areas, CWSI values between 0.38 and 0.51 were obtained, with Santa Julia having the highest values. Furthermore, a strong Pearson correlation (R) of 0.91 was established between the CWSI and Gs, representing a reference scale based on Gs values for evaluating water stress levels. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1
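The CWSI used in the study above is conventionally computed from canopy temperature against wet (well-watered) and dry (non-transpiring) reference temperatures. A minimal sketch of that standard formulation follows; the function name and the example temperatures are illustrative, not taken from the paper.

```python
import numpy as np

def cwsi(canopy_temp, t_wet, t_dry):
    """Crop water stress index from canopy temperature (degrees C).

    t_wet: temperature of a well-watered, non-stressed reference;
    t_dry: temperature of a fully stressed, non-transpiring reference.
    The index is clipped to its theoretical [0, 1] range.
    """
    canopy_temp = np.asarray(canopy_temp, dtype=float)
    index = (canopy_temp - t_wet) / (t_dry - t_wet)
    return np.clip(index, 0.0, 1.0)

# Example: three pixel temperatures from a thermal image of a canopy
pixels = np.array([28.0, 31.5, 35.0])
print(cwsi(pixels, t_wet=28.0, t_dry=35.0))  # [0.  0.5 1. ]
```

Values near 1.0 indicate strong water stress, matching the 0.4-1.0 range the authors report for AWD irrigation.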

23 pages, 12597 KiB  
Article
Estimating the SPAD of Litchi in the Growth Period and Autumn Shoot Period Based on UAV Multi-Spectrum
by Jiaxing Xie, Jiaxin Wang, Yufeng Chen, Peng Gao, Huili Yin, Shiyun Chen, Daozong Sun, Weixing Wang, Handong Mo, Jiyuan Shen and Jun Li
Remote Sens. 2023, 15(24), 5767; https://doi.org/10.3390/rs15245767 - 17 Dec 2023
Cited by 1 | Viewed by 854
Abstract
The relative content of chlorophyll, assessed through the soil and plant analyzer development (SPAD), serves as a reliable indicator reflecting crop photosynthesis and the nutritional status during crop growth and development. In this study, we employed machine learning methods utilizing unmanned aerial vehicle (UAV) multi-spectrum remote sensing to predict the SPAD value of litchi fruit. Input features consisted of various vegetation indices and texture features during distinct growth periods, and to streamline the feature set, the full subset regression algorithm was applied for dimensionality reduction. Our findings revealed the superiority of stacking models over individual models. During the litchi fruit development period, the stacking model, incorporating vegetation indices and texture features, demonstrated a validation set coefficient of determination (R2) of 0.94, a root mean square error (RMSE) of 2.4, and a relative percent deviation (RPD) of 3.0. Similarly, in the combined litchi growing period and autumn shoot period, the optimal model for estimating litchi SPAD was the stacking model based on vegetation indices and texture features, yielding a validation set R2, RMSE, and RPD of 0.84, 3.9, and 1.9, respectively. This study furnishes data support for the precise estimation of litchi SPAD across different periods through varied combinations of independent variables. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1
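The stacking approach the SPAD study favors combines base regressors through a meta-learner trained on their out-of-fold predictions. A hedged sketch using scikit-learn follows; the synthetic features standing in for vegetation indices and texture features, and the choice of base learners, are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Stand-in features: rows = sample plots, columns = vegetation indices
# and texture features; the target mimics SPAD readings around 40.
X = rng.normal(size=(200, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=0.3, size=200) + 40.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners feed out-of-fold predictions to a final meta-learner.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("svr", SVR(C=10.0))],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
print(round(r2_score(y_te, stack.predict(X_te)), 3))
```

The meta-learner can weight whichever base model performs best per region of feature space, which is why stacking often edges out its individual members, as reported above.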

21 pages, 27120 KiB  
Article
Visual Navigation and Obstacle Avoidance Control for Agricultural Robots via LiDAR and Camera
by Chongyang Han, Weibin Wu, Xiwen Luo and Jiehao Li
Remote Sens. 2023, 15(22), 5402; https://doi.org/10.3390/rs15225402 - 17 Nov 2023
Cited by 2 | Viewed by 1312
Abstract
Obstacle avoidance control and navigation in unstructured agricultural environments are key to the safe operation of autonomous robots, especially for agricultural machinery, where cost and stability should be taken into account. In this paper, we designed a navigation and obstacle avoidance system for agricultural robots based on LiDAR and a vision camera. The improved clustering algorithm is used to quickly and accurately analyze the obstacle information collected by LiDAR in real time. Also, the convex hull algorithm is combined with the rotating calipers algorithm to obtain the maximum diameter of the convex polygon of the clustered data. Obstacle avoidance paths and course control methods are developed based on the danger zones of obstacles. Moreover, by performing color space analysis and feature analysis on the complex orchard environment images, the optimal H-component of HSV color space is selected to obtain the ideal vision-guided trajectory images based on mean filtering and corrosion treatment. Finally, the proposed algorithm is integrated into the Three-Wheeled Mobile Differential Robot (TWMDR) platform to carry out obstacle avoidance experiments, and the results show the effectiveness and robustness of the proposed algorithm. The research conclusion can achieve satisfactory results in precise obstacle avoidance and intelligent navigation control of agricultural robots. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1
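The maximum diameter of a clustered obstacle's convex polygon, which the paper obtains with the convex hull plus rotating calipers, can be sketched compactly: since the diameter of a point set is attained between hull vertices, a pairwise search over the few hull vertices gives the same answer as calipers (in O(h^2) rather than O(h)). The function name and toy cluster below are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def max_obstacle_diameter(points):
    """Maximum diameter of the convex hull of clustered 2-D LiDAR points.

    Rotating calipers would find this in linear time over hull vertices;
    for a sketch, brute-force pairwise distances over the hull suffice.
    """
    hull = points[ConvexHull(points).vertices]
    diffs = hull[:, None, :] - hull[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).max()

# Unit square with an interior point: diameter is the diagonal, sqrt(2)
cluster = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
print(max_obstacle_diameter(cluster))
```

This diameter is what sizes the danger zone around each obstacle when planning the avoidance path.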

21 pages, 9059 KiB  
Article
Hyperspectral Prediction Model of Nitrogen Content in Citrus Leaves Based on the CEEMDAN–SR Algorithm
by Changlun Gao, Ting Tang, Weibin Wu, Fangren Zhang, Yuanqiang Luo, Weihao Wu, Beihuo Yao and Jiehao Li
Remote Sens. 2023, 15(20), 5013; https://doi.org/10.3390/rs15205013 - 18 Oct 2023
Cited by 2 | Viewed by 905
Abstract
Nitrogen content is one of the essential elements in citrus leaves (CL), and many studies have been conducted to determine the nutrient content in CL using hyperspectral technology. To address the key problem that the conventional spectral data-denoising algorithms directly discard high-frequency signals, resulting in missing effective signals, this study proposes a denoising preprocessing algorithm, complete ensemble empirical mode decomposition with adaptive noise joint sparse representation (CEEMDAN–SR), for CL hyperspectral data. For this purpose, 225 sets of fresh CL were collected at the Institute of Fruit Tree Research of the Guangdong Academy of Agricultural Sciences, to measure their elemental nitrogen content and the corresponding hyperspectral data. First, the spectral data were preprocessed using CEEMDAN–SR, Stein’s unbiased risk estimate and the linear expansion of thresholds (SURE–LET), sparse representation (SR), Savitzky–Golay (SG), and the first derivative (FD). Second, feature extraction was carried out using principal component analysis (PCA), uninformative variables elimination (UVE), and the competitive adaptive re-weighted sampling (CARS) algorithm. Finally, partial least squares regression (PLSR), support vector regression (SVR), random forest (RF), and Gaussian process regression (GPR) were used to construct a CL nitrogen prediction model. The results showed that most of the prediction models preprocessed using the CEEMDAN–SR algorithm had better accuracy and robustness. The prediction models based on CEEMDAN–SR preprocessing, PCA feature extraction, and GPR modeling had an R2 of 0.944, NRMSE of 0.057, and RPD of 4.219. The study showed that the CEEMDAN–SR algorithm can be effectively used to denoise CL hyperspectral data and reduce the loss of effective information. 
The prediction model using the CEEMDAN–SR+PCA+GPR algorithm could accurately obtain the nitrogen content of CL and provide a reference for the accurate fertilization of citrus trees. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1

17 pages, 33048 KiB  
Article
Embedded Yolo-Fastest V2-Based 3D Reconstruction and Size Prediction of Grain Silo-Bag
by Shujin Guo, Xu Mao, Dong Dai, Zhenyu Wang, Du Chen and Shumao Wang
Remote Sens. 2023, 15(19), 4846; https://doi.org/10.3390/rs15194846 - 07 Oct 2023
Cited by 2 | Viewed by 1337
Abstract
Contactless and non-destructive measuring tools can facilitate the moisture monitoring of bagged or bulk grain during transportation and storage. However, accurate target recognition and size prediction always impede the effectiveness of contactless monitoring in actual use. This paper developed a novel 3D reconstruction method upon multi-angle point clouds using a binocular depth camera and a proper Yolo-based neural model to resolve the problem. With this method, this paper developed an embedded and low-cost monitoring system for the in-warehouse grain bags, which predicted targets’ 3D size and boosted contactless grain moisture measuring. Identifying and extracting the object of interest from the complex background was challenging in size prediction of the grain silo-bag on a conveyor. This study first evaluated a series of Yolo-based neural network models and explored the most appropriate neural network structure for accurately extracting the grain bag. In point-cloud processing, this study constructed a rotation matrix to fuse multi-angle point clouds to generate a complete one. This study deployed all the above methods on a Raspberry Pi-embedded board to perform the grain bag’s 3D reconstruction and size prediction. For experimental validation, this study built the 3D reconstruction platform and tested grain bags’ reconstruction performance. First, this study determined the appropriate positions (−60°, 0°, 60°) with the least positions and high reconstruction quality. Then, this study validated the efficacy of the embedded system by evaluating its speed and accuracy and comparing it to the original Torch model. Results demonstrated that the NCNN-accelerated model significantly enhanced the average processing speed, nearly 30 times faster than the Torch model. The proposed system predicted the objects’ length, width, and height, achieving accuracies of 97.76%, 97.02%, and 96.81%, respectively. The maximum residual value was less than 9 mm. 
All the root mean square errors were less than 7 mm. In the future, the system will mount three depth cameras to achieve real-time size prediction and introduce a contactless measuring tool to finalize grain moisture detection. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1
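The multi-angle fusion step above amounts to rotating each partial cloud back into a common frame with a rotation matrix and concatenating. A minimal sketch follows, assuming turntable-style capture about the vertical axis at the paper's three angles; the function names and toy cloud are illustrative.

```python
import numpy as np

def rotation_z(deg):
    """Rotation matrix about the vertical (z) axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def fuse_views(views):
    """Rotate each partial cloud into the common frame and stack them.

    views: list of (capture_angle_deg, (N, 3) point array) pairs.
    """
    return np.vstack([pts @ rotation_z(angle).T for angle, pts in views])

# Toy partial clouds captured at the three angles used in the paper
cloud = np.array([[1.0, 0.0, 0.2]])
views = [(-60.0, cloud), (0.0, cloud), (60.0, cloud)]
fused = fuse_views(views)
print(fused.shape)  # (3, 3)
```

Because rotation is rigid, point distances from the axis are preserved, so the fused cloud can be measured directly for length, width, and height.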

27 pages, 7901 KiB  
Article
Soil Salinity Estimation for South Kazakhstan Based on SAR Sentinel-1 and Landsat-8,9 OLI Data with Machine Learning Models
by Ravil I. Mukhamediev, Timur Merembayev, Yan Kuchin, Dmitry Malakhov, Elena Zaitseva, Vitaly Levashenko, Yelena Popova, Adilkhan Symagulov, Gulshat Sagatdinova and Yedilkhan Amirgaliyev
Remote Sens. 2023, 15(17), 4269; https://doi.org/10.3390/rs15174269 - 30 Aug 2023
Cited by 2 | Viewed by 1180
Abstract
Climate change, uneven distribution of water resources and anthropogenic impact have led to salinization and land degradation in the southern regions of Kazakhstan. Identification of saline lands and their mapping is a laborious process associated with a complex of ground measurements. Data from remote sensing are widely used to solve this problem. In this paper, the problem of assessing the salinity of the lands of the South Kazakhstan region using remote sensing data is considered. The aim of the study is to analyze the applicability of machine learning methods to assess the salinity of agricultural lands in southern Kazakhstan based on remote sensing. The authors present a salinity dataset obtained from field studies and containing more than 200 laboratory measurements of soil salinity. Moreover, the authors describe the results of applying several regression reconstruction algorithms (XGBoost, LightGBM, random forest, Support vector machines, Elastic net, etc.), where synthetic aperture radar (SAR) data from the Sentinel-1 satellite and optical data in the form of spectral salinity indices are used as input data. The obtained results show that, in general, these input data can be used to estimate salinity of the wetted arable land. XGBoost regressor (R2 = 0.282) showed the best results. Supplementing the radar data with the values of salinity spectral index improves the result significantly (R2 = 0.356). For the local datasets, the best result shown by the model is R2 = 0.473 (SAR) and R2 = 0.654 (SAR with spectral indexes), respectively. The study also revealed a number of problems that justify the need for a broader range of ground surveys and consideration of multi-year factors affecting soil salinity. 
Key results of the article: (i) a set of salinity data for different geographical zones of southern Kazakhstan is presented for the first time; (ii) a method is proposed for determining soil salinity on the basis of synthetic aperture radar supplemented with optical data, and this resulted in the improved prediction of the results for the region under consideration; (iii) a comparison of several types of machine learning models was made and it was found that boosted models give, on average, the best prediction result; (iv) a method for optimizing the number of model input parameters using explainable machine learning is proposed; (v) it is shown that the results obtained in this work are in better agreement with ground-based measurements of electrical conductivity than the results of the previously proposed global model. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1
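The paper's central comparison, radar-only inputs versus radar plus a spectral salinity index, can be sketched with cross-validated boosting. The sketch below uses scikit-learn's gradient boosting as a stand-in for the XGBoost/LightGBM models in the study, and the synthetic SAR channels and index are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 200  # roughly the size of the field dataset described above
# Stand-in inputs: two SAR backscatter channels (e.g. VV, VH) plus one
# optical salinity spectral index; target mimics lab salinity values.
sar = rng.normal(size=(n, 2))
salinity_index = rng.normal(size=(n, 1))
y = 0.4 * sar[:, 0] + 0.8 * salinity_index[:, 0] + rng.normal(scale=0.5, size=n)

# Radar-only vs. radar + spectral index, mirroring the paper's comparison
r2_sar = cross_val_score(GradientBoostingRegressor(random_state=0),
                         sar, y, cv=5, scoring="r2").mean()
both = np.hstack([sar, salinity_index])
r2_both = cross_val_score(GradientBoostingRegressor(random_state=0),
                          both, y, cv=5, scoring="r2").mean()
print(round(r2_sar, 2), round(r2_both, 2))
```

As in the study, adding the optical index raises cross-validated R2 over SAR data alone whenever the index carries salinity signal the backscatter lacks.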

24 pages, 6406 KiB  
Article
HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification
by Jiaxing Xie, Jiajun Hua, Shaonan Chen, Peiwen Wu, Peng Gao, Daozong Sun, Zhendong Lyu, Shilei Lyu, Xiuyun Xue and Jianqiang Lu
Remote Sens. 2023, 15(14), 3491; https://doi.org/10.3390/rs15143491 - 11 Jul 2023
Cited by 3 | Viewed by 1735
Abstract
Crop classification of large-scale agricultural land is crucial for crop monitoring and yield estimation. Hyperspectral image classification has proven to be an effective method for this task. Most current popular hyperspectral image classification methods are based on image classification, specifically on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In contrast, this paper focuses on methods based on semantic segmentation and proposes a new transformer-based approach called HyperSFormer for crop hyperspectral image classification. The key enhancement of the proposed method is the replacement of the encoder in SegFormer with an improved Swin Transformer while keeping the SegFormer decoder. The entire model adopts a simple and uniform transformer architecture. Additionally, the paper introduces the hyper patch embedding (HPE) module to extract spectral and local spatial information from the hyperspectral images, which enhances the effectiveness of the features used as input for the model. To ensure detailed model processing and achieve end-to-end hyperspectral image classification, the transpose padding upsample (TPU) module is proposed for the model’s output. In order to address the problem of insufficient and imbalanced samples in hyperspectral image classification, the paper designs an adaptive min log sampling (AMLS) strategy and a loss function that incorporates dice loss and focal loss to assist model training. Experimental results using three public hyperspectral image datasets demonstrate the strong performance of HyperSFormer, particularly in the presence of imbalanced sample data, complex negative samples, and mixed sample classes. HyperSFormer outperforms state-of-the-art methods, including fast patch-free global learning (FPGA), a spectral–spatial-dependent global learning framework (SSDGL), and SegFormer, by at least 2.7% in the mean intersection over union (mIoU). 
It also improves the overall accuracy and average accuracy values by at least 0.9% and 0.3%, respectively, and the kappa coefficient by at least 0.011. Furthermore, ablation experiments were conducted to determine the optimal hyperparameter and loss function settings for the proposed method, validating the rationality of these settings and the fusion loss function. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1
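The mIoU metric by which HyperSFormer beats FPGA, SSDGL, and SegFormer is straightforward to compute from flat label maps. A minimal sketch, with an illustrative toy example, follows.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union from flat integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1, 2, 2])
target = np.array([0, 0, 1, 2, 2, 2])
print(round(mean_iou(pred, target, 3), 3))  # 0.722
```

Because every class contributes equally regardless of its pixel count, mIoU rewards exactly the minority-class performance that the paper's sampling strategy and dice-plus-focal loss target.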

18 pages, 5888 KiB  
Article
Multi-Scale Depthwise Separable Convolution for Semantic Segmentation in Street–Road Scenes
by Yingpeng Dai, Chenglin Li, Xiaohang Su, Hongxian Liu and Jiehao Li
Remote Sens. 2023, 15(10), 2649; https://doi.org/10.3390/rs15102649 - 19 May 2023
Cited by 6 | Viewed by 1708
Abstract
Vision is an important way for unmanned mobile platforms to understand surrounding environmental information. For an unmanned mobile platform, quickly and accurately obtaining environmental information is a basic requirement for its subsequent visual tasks. Based on this, a unique convolution module called Multi-Scale Depthwise Separable Convolution module is proposed for real-time semantic segmentation. This module mainly consists of concatenation pointwise convolution and multi-scale depthwise convolution. Not only does the concatenation pointwise convolution change the number of channels, but it also combines the spatial features from the multi-scale depthwise convolution operations to produce additional features. The Multi-Scale Depthwise Separable Convolution module can strengthen the non-linear relationship between input and output. Specifically, the multi-scale depthwise convolution module extracts multi-scale spatial features while remaining lightweight. This fully uses multi-scale information to describe objects despite their different sizes. Here, Mean Intersection over Union (MIoU), parameters, and inference speed were used to describe the performance of the proposed network. On the Camvid, KITTI, and Cityscapes datasets, the proposed algorithm compromised between accuracy and memory in comparison to widely used and cutting-edge algorithms. In particular, the proposed algorithm acquired 61.02 MIoU with 2.68 M parameters on the Camvid test dataset. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1
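The parameter savings that keep the module above lightweight come from the depthwise-separable factorization: a k x k depthwise convolution per channel followed by a 1 x 1 pointwise mix, instead of one dense k x k convolution. A short sketch of the parameter arithmetic (bias terms omitted, channel sizes illustrative):

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k per input channel, then 1 x 1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 128)                 # 147456
sep = depthwise_separable_params(3, 128, 128)  # 1152 + 16384 = 17536
print(std, sep, round(std / sep, 1))           # roughly 8.4x fewer
```

Running several depthwise kernels of different sizes in parallel, as the module does, buys multi-scale receptive fields while each branch stays this cheap.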

18 pages, 16929 KiB  
Article
Spatiotemporal Variation Characteristics of Reference Evapotranspiration and Relative Moisture Index in Heilongjiang Investigated through Remote Sensing Tools
by Siyi Wen, Zihan Liu, Yu Han, Yuyan Chen, Liangsi Xu and Qiongsa Li
Remote Sens. 2023, 15(10), 2582; https://doi.org/10.3390/rs15102582 - 15 May 2023
Cited by 1 | Viewed by 930
Abstract
Reference evapotranspiration (ET0) is one of the significant parameters in agricultural irrigation, especially in Heilongjiang, a major agricultural province in China. In this research, the spatiotemporal variation characteristics of evapotranspiration (ET), relative moisture index (MI) and influencing factors of ET0 in Heilongjiang, which was divided into six ecological districts according to landforms, were analyzed with meteorological data observed over 40 years from 1980 and MOD16 products from 2000 to 2017 using Morlet wavelet analysis and partial correlation analysis. The results indicated that (1) the spatial distribution of ET and PET in Heilongjiang in humid, normal and arid years showed a distribution of being higher in the southwest and lower in the northwest, and higher in the south and lower in the north. The PET was higher than ET from 2002 to 2017, and the difference was small, indicating that the overall moisture in Heilongjiang was sufficient in these years. (2) In the last 40 years, the ET0 increased while the annual MI decreased. The annual minimum of MI in the six regions of Heilongjiang was −0.25, showing that all six regions were drought free. (3) The importance of the meteorological factors affecting ET0 was ranked as average relative humidity > average wind speed > sunshine duration. This research provides scientific guidance for the study of using remote sensing to retrieve ET. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Figure 1
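The relative moisture index used above is commonly defined from precipitation and reference evapotranspiration; assuming that standard formulation, MI = (P - ET0) / ET0, the arithmetic behind the reported -0.25 minimum is a one-liner. The example amounts are illustrative, not values from the paper.

```python
def relative_moisture_index(precip_mm, et0_mm):
    """Relative moisture index: (precipitation - ET0) / ET0.

    Negative values indicate a water deficit; the abstract's regional
    minimum of -0.25 still sits above common drought thresholds.
    """
    return (precip_mm - et0_mm) / et0_mm

# E.g. 450 mm of precipitation against 600 mm of ET0 over the same period
print(relative_moisture_index(450.0, 600.0))  # -0.25
```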

Review

Jump to: Research

32 pages, 3720 KiB  
Review
Advancements in Utilizing Image-Analysis Technology for Crop-Yield Estimation
by Feng Yu, Ming Wang, Jun Xiao, Qian Zhang, Jinmeng Zhang, Xin Liu, Yang Ping and Rupeng Luan
Remote Sens. 2024, 16(6), 1003; https://doi.org/10.3390/rs16061003 - 12 Mar 2024
Viewed by 1158
Abstract
Yield calculation is an important link in modern precision agriculture that is an effective means to improve breeding efficiency and to adjust planting and marketing plans. With the continuous progress of artificial intelligence and sensing technology, yield-calculation schemes based on image-processing technology have many advantages such as high accuracy, low cost, and non-destructive calculation, and they have been favored by a large number of researchers. This article reviews the research progress of crop-yield calculation based on remote sensing images and visible light images, describes the technical characteristics and applicable objects of different schemes, and focuses on detailed explanations of data acquisition, independent variable screening, algorithm selection, and optimization. Common issues are also discussed and summarized. Finally, solutions are proposed for the main problems that have arisen so far, and future research directions are predicted, with the aim of achieving more progress and wider popularization of yield-calculation solutions based on image technology. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)
Show Figures

Graphical abstract
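Among the remote-sensing yield schemes this review covers, the simplest family regresses yield on a vegetation index such as NDVI. A hedged sketch of that baseline follows; the reflectances and yields are made-up illustrative numbers, not data from any reviewed study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, a common yield predictor."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Toy plot-level reflectances with a simple linear yield fit
nir = np.array([0.50, 0.55, 0.60, 0.70])
red = np.array([0.10, 0.08, 0.06, 0.05])
vi = ndvi(nir, red)
yields = np.array([5.1, 5.6, 6.2, 6.9])  # t/ha, illustrative only
slope, intercept = np.polyfit(vi, yields, 1)
print(np.round(vi, 3))
```

More recent schemes in the review replace the linear fit with machine-learned models and add texture, weather, and phenology features, but the index-to-yield mapping remains the core idea.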
