Digital Agricultural Production Based on Remote Sensing Technology, AI Applications and Robotic Systems

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (31 January 2023) | Viewed by 14508

Special Issue Editors


Guest Editor
Division of Food Systems and Bioengineering, University of Missouri-Columbia, 256 WC Stringer Wing, Eckles Hall, Columbia, MO 65211, USA
Interests: remote sensing; spatial analytics; plant science; machine learning for predictive modelling

Guest Editor
Geospatial Institute, Saint Louis University, Des Peres Hall 203B, 3694 West Pine Mall, St. Louis, MO 63108, USA
Interests: remote sensing; machine learning/AI and imagery analysis for precision agriculture and food security; computer vision/photogrammetry-aided georeferencing and GPS alternatives

Guest Editor
Geospatial Institute, Saint Louis University, Des Peres Hall, 3694 West Pine Mall, St. Louis, MO 63108, USA
Interests: remote sensing; geospatial data analysis; deep learning

Special Issue Information

Dear Colleagues,

The current global demand for food production, driven by population growth and climate change, has prompted research into improving agricultural production while protecting natural resources. While digital technologies such as computers and global positioning systems have been used in agriculture for decades, agricultural activities are projected to undergo far deeper digitalization through the application of various sensors, data collection platforms, robotic systems, communication technologies, and artificial intelligence (AI) for extracting useful information from an ever-growing flood of data.

Digital agricultural production is not only a solution to the long-standing problem of food shortage but also a path to sustainable productivity growth, maximizing yield and quality while minimizing resource use and environmental impact at the plot, farm, and regional levels. However, several challenges and limitations, such as accuracy, interoperability, data storage, computational power, and farmers' reluctance to adopt new tools, need to be addressed for the effective use of these technologies and a widespread digital transformation of agriculture. To highlight the potential of the aforementioned technologies in digital agricultural production, and to better understand the associated challenges, solicited topics for this Special Issue include, but are not limited to, the following:

  • Remote and proximal sensing in agriculture
  • Hyperspectral, multispectral, LiDAR, and thermal sensors for crop monitoring
  • Unmanned Aerial Vehicle (UAV)- or Unmanned Ground Vehicle (UGV)-based sensing for three-dimensional soil and field models, crop growth monitoring, and decision making for sustainable field management
  • Leveraging commercial and free satellite data hubs for agricultural monitoring and yield prediction
  • Multi-sensor data fusion and exploitation to assess biomass, yield, acreage, vegetation vigor, nutrient deficiency, water stress, disease, and phenological development
  • Internet of Things (IoT) for automatic real-time interaction, control, and decision-making
  • Advances in data storage and computation for remote sensing big data
  • Robotic systems for reducing farming inputs and environmental impact
  • AI and machine learning applications in agriculture

Dr. Matthew Maimaitiyiming
Dr. Vasit Sagan
Dr. Sidike Paheding
Dr. Maitiniyazi Maimaitijiang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote/proximal sensing
  • Precision agriculture
  • Digital agriculture
  • UAVs/UGVs
  • Machine learning
  • Deep learning
  • Artificial Intelligence (AI)
  • Transfer learning
  • Internet of Things (IoT)
  • Big data analytics
  • Cloud computing
  • Robotics

Published Papers (4 papers)


Research

17 pages, 8583 KiB  
Article
Detection and Counting of Maize Leaves Based on Two-Stage Deep Learning with UAV-Based RGB Image
by Xingmei Xu, Lu Wang, Meiyan Shu, Xuewen Liang, Abu Zar Ghafoor, Yunling Liu, Yuntao Ma and Jinyu Zhu
Remote Sens. 2022, 14(21), 5388; https://doi.org/10.3390/rs14215388 - 27 Oct 2022
Cited by 15 | Viewed by 2295
Abstract
Leaf age is an important trait in the process of maize (Zea mays L.) growth, and counting leaves is significant for estimating the seed activity and yield of maize. Detection and counting of maize leaves in the field are very difficult due to the complexity of field scenes and the cross-covering of adjacent seedling leaves. This study proposed a method for detecting and counting maize leaves based on deep learning with RGB images collected by unmanned aerial vehicles (UAVs). Mask R-CNN was used to separate complete maize seedlings from the complex background to reduce the impact of weeds on leaf counting, and a new loss function, SmoothLR, was proposed for Mask R-CNN to improve the segmentation performance of the model. Then, YOLOv5 was used to detect and count the individual leaves of the maize seedlings after segmentation. The 1005 field seedling images were randomly divided into training, validation, and test sets at a ratio of 7:2:1. The results showed that the segmentation performance of Mask R-CNN with ResNet50 and SmoothLR was better than that with L1 Loss. The average precision of the bounding box (Bbox) and mask (Mask) was 96.9% and 95.2%, respectively. The inference times of single-image detection and segmentation were 0.05 s and 0.07 s, respectively. YOLOv5 performed better in leaf detection than Faster R-CNN and SSD; YOLOv5x, with the largest number of parameters, had the best detection performance. The detection precision of fully unfolded leaves and newly appeared leaves was 92.0% and 68.8%, the recall rates were 84.4% and 50.0%, and the average precision (AP) was 89.6% and 54.0%, respectively. The counting accuracies for newly appeared leaves and fully unfolded leaves were 75.3% and 72.9%, respectively. These results demonstrate the feasibility of leaf counting for field-grown crops based on UAV images.
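The 7:2:1 dataset split described in the abstract can be sketched in a few lines of Python; the function name and the random seed below are illustrative, not taken from the paper.

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Randomly split items into train/validation/test subsets by the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_train = round(len(items) * ratios[0])
    n_val = round(len(items) * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# 1005 images split 7:2:1, as in the study
train, val, test = split_dataset(range(1005))
print(len(train), len(val), len(test))  # 704 201 100
```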

29 pages, 9885 KiB  
Article
Estimating Crop Seed Composition Using Machine Learning from Multisensory UAV Data
by Kamila Dilmurat, Vasit Sagan, Maitiniyazi Maimaitijiang, Stephen Moose and Felix B. Fritschi
Remote Sens. 2022, 14(19), 4786; https://doi.org/10.3390/rs14194786 - 25 Sep 2022
Cited by 9 | Viewed by 2792
Abstract
The pre-harvest estimation of seed composition from standing crops is imperative for field management practices and plant phenotyping. This paper presents, for the first time, the potential of Unmanned Aerial Vehicle (UAV)-based high-resolution hyperspectral and LiDAR data acquired from in-season standing crops for estimating the seed protein and oil compositions of soybean and corn using multisensory data fusion and automated machine learning. UAV-based hyperspectral and LiDAR data were collected during the growing season (reproductive stage five (R5)) of 2020 over a soybean test site near Columbia, Missouri and a cornfield at Urbana, Illinois, USA. Canopy spectral and texture features were extracted from the hyperspectral imagery, and canopy structure features were derived from the LiDAR point clouds. The extracted features were then used as input variables for the automated machine-learning methods available in the H2O Automated Machine-Learning framework (H2O-AutoML). The results showed that: (1) UAV hyperspectral imagery can successfully predict both the protein and oil contents of soybean and corn with moderate accuracy; (2) canopy structure features derived from LiDAR point clouds yielded slightly poorer estimates of crop-seed composition than the hyperspectral data; (3) regardless of the machine-learning method, the combination of hyperspectral and LiDAR data outperformed predictions using a single sensor alone, with an R2 of 0.79 and 0.67 for corn protein and oil and an R2 of 0.64 and 0.56 for soybean protein and oil; and (4) the H2O-AutoML framework was found to be an efficient strategy for machine-learning-based, data-driven model building. Among the specific regression methods evaluated in this study, the Gradient Boosting Machine (GBM) and Deep Neural Network (NN) exhibited superior performance. This study reveals opportunities and limitations for multisensory UAV data fusion and automated machine learning in estimating crop-seed composition.
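The fusion step described above, combining per-plot hyperspectral features with LiDAR-derived canopy structure features, amounts to aligning plots observed by both sensors and concatenating their feature vectors before they reach the learner. A minimal sketch, in which every name and value is illustrative rather than taken from the paper:

```python
def fuse_features(hyperspectral, lidar):
    """Concatenate per-plot feature vectors from two sensors.

    Both arguments map plot IDs to feature lists; only plots observed
    by both sensors are kept, so the fused rows stay aligned.
    """
    common = sorted(set(hyperspectral) & set(lidar))
    return [hyperspectral[p] + lidar[p] for p in common], common

hs = {"plot1": [0.42, 0.81], "plot2": [0.39, 0.77]}    # e.g. spectral indices
ld = {"plot1": [1.2], "plot2": [1.1], "plot3": [0.9]}  # e.g. canopy height
X, plots = fuse_features(hs, ld)  # X is ready for a regression model
```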

17 pages, 4106 KiB  
Article
Aerial Imagery Paddy Seedlings Inspection Using Deep Learning
by Mohamed Marzhar Anuar, Alfian Abdul Halin, Thinagaran Perumal and Bahareh Kalantar
Remote Sens. 2022, 14(2), 274; https://doi.org/10.3390/rs14020274 - 7 Jan 2022
Cited by 11 | Viewed by 3073
Abstract
In recent years, complex food security issues caused by climatic changes, limitations in human labour, and increasing production costs have required a strategic approach to addressing these problems. The emergence of artificial intelligence, owing to recent advances in computing architectures, could become a new alternative to existing solutions. Deep learning algorithms in computer vision for image classification and object detection can facilitate the agriculture industry, especially paddy cultivation, by alleviating human effort in laborious, burdensome, and repetitive tasks. Optimal planting density is a crucial factor in paddy cultivation, as it influences the quality and quantity of production. There have been several studies on planting density using computer vision and remote sensing approaches. While most of these studies have shown promising results, they have disadvantages and leave room for improvement. One disadvantage is that they aim to detect and count all paddy seedlings to determine planting density; the locations of defective paddy seedlings are not pointed out to help farmers during the sowing process. In this work, we explored several deep convolutional neural network (DCNN) models to determine which performs best for defective paddy seedling detection using aerial imagery. We evaluated the accuracy, robustness, and inference latency of one- and two-stage pretrained object detectors combined with state-of-the-art feature extractors such as EfficientNet, ResNet50, and MobileNetV2 as backbones. We also investigated the effect of transfer learning with fine-tuning on the performance of these pretrained models. Experimental results showed that our proposed methods were capable of detecting defective paddy rice seedlings, achieving the highest precision and F1-score of 0.83 and 0.77, respectively, using a one-stage pretrained object detector, EfficientDet-D1 with an EfficientNet backbone.
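The precision and F1-score reported above follow from true positive, false positive, and false negative counts in the standard way. A small sketch with made-up counts (not the paper's data):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from object-detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts: 83 correct detections, 17 false alarms, 29 misses
p, r, f1 = detection_metrics(tp=83, fp=17, fn=29)
```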

16 pages, 1957 KiB  
Article
Mapping Rice Paddy Distribution Using Remote Sensing by Coupling Deep Learning with Phenological Characteristics
by A-Xing Zhu, Fang-He Zhao, Hao-Bo Pan and Jun-Zhi Liu
Remote Sens. 2021, 13(7), 1360; https://doi.org/10.3390/rs13071360 - 2 Apr 2021
Cited by 17 | Viewed by 4813
Abstract
Two main approaches are used in mapping rice paddy distribution from remote sensing images: phenological methods and machine learning methods. Phenological methods can map rice paddy distribution in a simple way but with limited accuracy. Machine learning, particularly deep learning, methods that learn the spectral signatures can achieve higher accuracy yet require a large number of field samples. This paper proposed a pheno-deep method that couples the simplicity of phenological methods with the learning ability of deep learning methods to map rice paddies at high accuracy without the need for field samples. The phenological method was first used to initially delineate the rice paddies to create training samples, which were then used to train the deep learning model. The trained deep learning model was applied to map the spatial distribution of rice paddies. The effectiveness of the pheno-deep method was evaluated in Jin'an District, Lu'an City, Anhui Province, China. Results show that the pheno-deep method achieved high performance, with an overall accuracy, precision, recall, and AUC (area under the curve) of 88.8%, 87.2%, 91.1%, and 94.4%, respectively. The pheno-deep method performed much better than the phenology-only method and can overcome the noise in the training samples from the phenological method. Its overall accuracy is only 2.4% lower than that of the deep-learning-only method trained with field samples, and this difference is not statistically significant. In addition, the pheno-deep method requires no field sampling, a noteworthy advantage when large training samples are difficult to obtain. This study shows that by combining knowledge-based methods with data-driven methods, it is possible to achieve high mapping accuracy of geographic variables using remote sensing even with little field sampling effort.
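Of the metrics reported above, AUC is the least standard to compute by hand: it equals the probability that a randomly chosen positive (rice-paddy) sample is scored above a randomly chosen negative one (the Mann-Whitney U formulation). A minimal sketch with illustrative labels and scores, not the study's data:

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparisons:
    ties between a positive and a negative count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = rice paddy, 0 = other; scores are the model's confidence
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```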
