Deep and Machine Learning Applications in Remote Sensing Data to Monitor and Manage Crops Using Precision Agriculture Systems

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 39085

Special Issue Editors


Guest Editor
School of Plant, Environmental and Soil Sciences, Louisiana State University (LSU), Baton Rouge, LA, USA
Interests: precision agriculture; remote sensing; on-farm precision experimentation

Special Issue Information

Dear Colleagues,

With the evolution of orbital and proximal remote sensing technologies, the agricultural sector is generating big data that must be converted into information. Machine and deep learning approaches applied to remote sensing products have recently been used with success to analyze these data. The computational power of cloud-based systems, together with recent advances in farm machinery that enable data collection, processing, and analysis, opens up several opportunities for the development and adoption of new technologies. Large-scale on-farm precision experimentation conducted in partnership with commercial farms, and the appearance of new sensors on board UAVs, crop-duster airplanes, and satellites, such as radar technologies that allow daily remote data collection under cloudy skies, are exciting and require further investigation of several kinds. New equipment and sensors are enabling better crop monitoring and land-use mapping at a regional scale. The intent of this topical edition of Remote Sensing is to convey publications from collaborators who are working with large pools of data analyzed using deep and machine learning approaches in precision agriculture, and also to improve regional-scale remote sensing applications.

Prof. Dr. Carlos Antonio Da Silva Junior
Dr. Luciano Shiratsuchi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Precision agriculture
  • Active crop canopy sensors
  • On-farm precision experimentation
  • Monitoring crop areas
  • Neural network
  • Image processing
  • Orbital sensors

Published Papers (11 papers)


Research

22 pages, 12948 KiB  
Article
Comparing Machine Learning Algorithms for Pixel/Object-Based Classifications of Semi-Arid Grassland in Northern China Using Multisource Medium Resolution Imageries
by Nitu Wu, Luís Guilherme Teixeira Crusiol, Guixiang Liu, Deji Wuyun and Guodong Han
Remote Sens. 2023, 15(3), 750; https://doi.org/10.3390/rs15030750 - 28 Jan 2023
Cited by 8 | Viewed by 2483
Abstract
Knowledge of grassland classification in a timely and accurate manner is essential for grassland resource management and utilization. Although remote sensing imagery analysis is widely applied for land cover classification, few studies have systematically compared the performance of commonly used methods on semi-arid native grasslands in northern China, leaving grassland classification work in this region without applicable technical references. In this study, central Xilingol (China) was selected as the study area, and the performances of four widely used machine learning algorithms for mapping semi-arid grassland under pixel-based and object-based classification methods were compared: random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and naive Bayes (NB). The features comprised Landsat OLI multispectral data, spectral indices, Sentinel-1 SAR C-band data, and topographic, position (coordinate), geometric, and grey-level co-occurrence matrix (GLCM) texture variables. The findings demonstrated that (1) the object-based methods depicted a more realistic land cover distribution and had greater accuracy than the pixel-based methods; (2) in pixel-based classification, RF performed best, with OA and Kappa values of 96.32% and 0.95, respectively, while in object-based classification, RF and SVM presented no statistically different predictions, with OA and Kappa exceeding 97.5% and 0.97, respectively, and both performed significantly better than the other algorithms; and (3) in pixel-based classification, multispectral bands, spectral indices, and geographic features significantly distinguished grassland, whereas, in object-based classification, multispectral bands, spectral indices, elevation, and position features were more prominent. Although Sentinel-1 SAR variables were selected as effective variables in object-based classification, they made no significant contribution to distinguishing grassland. Full article
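
As a rough illustration of the kind of pixel-based classifier comparison described above, here is a minimal Python sketch using scikit-learn; the sample file, feature columns, and hyperparameters are hypothetical placeholders rather than the authors' data or settings.

    # Hypothetical sketch: comparing pixel-based classifiers (RF, SVM, KNN, NB) on a
    # feature table of spectral bands, indices, GLCM textures, and terrain variables.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    samples = pd.read_csv("grassland_training_samples.csv")   # placeholder sample table
    X = samples.drop(columns=["grassland_class"])             # bands, indices, GLCM, DEM, coordinates
    y = samples["grassland_class"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    models = {
        "RF": RandomForestClassifier(n_estimators=500, random_state=0),
        "SVM": SVC(kernel="rbf", C=10, gamma="scale"),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "NB": GaussianNB(),
    }
    for name, clf in models.items():
        clf.fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print(name, "OA:", accuracy_score(y_te, pred), "Kappa:", cohen_kappa_score(y_te, pred))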

18 pages, 6713 KiB  
Article
VIS-NIR-SWIR Hyperspectroscopy Combined with Data Mining and Machine Learning for Classification of Predicted Chemometrics of Green Lettuce
by Renan Falcioni, João Vitor Ferreira Gonçalves, Karym Mayara de Oliveira, Werner Camargos Antunes and Marcos Rafael Nanni
Remote Sens. 2022, 14(24), 6330; https://doi.org/10.3390/rs14246330 - 14 Dec 2022
Cited by 4 | Viewed by 2147
Abstract
VIS-NIR-SWIR hyperspectroscopy is a significant technique used in remote sensing for classification of prediction-based chemometrics and machine learning. Chemometrics, together with biophysical and biochemical parameters, is a laborious technique; however, researchers are very interested in this field because of the benefits in terms of optimizing crop yields. In this study, we investigated the hypothesis that VIS-NIR-SWIR could be efficiently applied for classification and prediction of leaf thickness and pigment profiling of green lettuce in terms of reflectance, transmittance, and absorbance data according to the variety. For this purpose, we used a spectroradiometer in the visible, near-infrared, and shortwave infrared ranges (VIS-NIR-SWIR). The results showed many chemometric parameters and fingerprints in the 400–2500 nm spectral curve range. Therefore, this technique, combined with rapid data mining, machine learning algorithms, and other multivariate statistical analyses such as PCA, MCR, LDA, SVM, KNN, and PLSR, can be used as a tool to classify plants with high accuracy and precision. The fingerprints of the hyperspectral data indicated the presence of functional groups associated with biophysical and biochemical components in green lettuce, allowing the plants to be correctly classified with high accuracy (99 to 100%). Biophysical parameters such as thickness could be predicted using PLSR models, which showed R2P and RMSEP values greater than 0.991 and 6.21, respectively, according to the relationship between absorbance and reflectance or transmittance spectroscopy curves. Thus, we report the methodology and confirm the ability of VIS-NIR-SWIR hyperspectroscopy to simultaneously classify and predict data with high accuracy and precision, at low cost and with rapid acquisition, based on a remote sensing tool, which can enable the successful management of crops such as green lettuce and other plants using precision agriculture systems. Full article
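
The PLSR prediction step mentioned above can be pictured with a minimal scikit-learn sketch; the spectra arrays, trait values, and number of latent variables are assumed placeholders, not the authors' data or model configuration.

    # Hypothetical sketch: PLSR prediction of a biophysical trait (e.g., leaf thickness)
    # from VIS-NIR-SWIR spectra. Arrays and n_components are illustrative placeholders.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    spectra = np.load("leaf_spectra_400_2500nm.npy")     # placeholder: (n_samples, n_wavelengths)
    thickness = np.load("leaf_thickness.npy")            # placeholder: (n_samples,)
    X_tr, X_te, y_tr, y_te = train_test_split(spectra, thickness, test_size=0.3, random_state=0)

    pls = PLSRegression(n_components=10)                 # assumed number of latent variables
    pls.fit(X_tr, y_tr)
    y_pred = pls.predict(X_te).ravel()
    print("R2P:", r2_score(y_te, y_pred))
    print("RMSEP:", mean_squared_error(y_te, y_pred) ** 0.5)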

21 pages, 55551 KiB  
Article
Detection of White Leaf Disease in Sugarcane Crops Using UAV-Derived RGB Imagery with Existing Deep Learning Models
by Narmilan Amarasingam, Felipe Gonzalez, Arachchige Surantha Ashan Salgadoe, Juan Sandino and Kevin Powell
Remote Sens. 2022, 14(23), 6137; https://doi.org/10.3390/rs14236137 - 03 Dec 2022
Cited by 15 | Viewed by 4524
Abstract
White leaf disease (WLD) is an economically significant disease in the sugarcane industry. This work applied remote sensing techniques based on unmanned aerial vehicles (UAVs) and deep learning (DL) to detect WLD in sugarcane fields at the Gal-Oya Plantation, Sri Lanka. The established methodology to detect WLD consists of UAV red, green, and blue (RGB) image acquisition, pre-processing of the dataset, labelling, DL model tuning, and prediction. This study evaluated the performance of existing DL models such as YOLOv5, YOLOR, DETR, and Faster R-CNN in recognizing WLD in sugarcane crops. The experimental results indicate that the YOLOv5 network outperformed the other selected models, achieving precision, recall, mean average precision@0.50 (mAP@0.50), and mean average precision@0.95 (mAP@0.95) values of 95%, 92%, 93%, and 79%, respectively. In contrast, DETR exhibited the weakest detection performance, with values of 77%, 69%, 77%, and 41% for precision, recall, mAP@0.50, and mAP@0.95, respectively. YOLOv5 was selected as the recommended architecture to detect WLD from UAV data not only because of its performance but also because of its size (14 MB), the smallest among the selected models. The proposed methodology provides technical guidelines to researchers and farmers for the accurate detection and treatment of WLD in sugarcane fields. Full article
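
A minimal inference sketch in this spirit, assuming the Ultralytics YOLOv5 torch.hub interface with hypothetical fine-tuned weights (wld_best.pt) and a hypothetical UAV tile; this is not the authors' pipeline.

    # Hypothetical sketch: running a fine-tuned YOLOv5 detector on a UAV RGB tile.
    # The weights file, image path, and confidence threshold are placeholders.
    import torch

    model = torch.hub.load("ultralytics/yolov5", "custom", path="wld_best.pt")
    model.conf = 0.5                         # assumed confidence threshold
    results = model("uav_tile_0001.jpg")     # placeholder UAV image tile
    detections = results.pandas().xyxy[0]    # one row per detected WLD patch
    print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
    results.save()                           # writes annotated images to runs/detect/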

22 pages, 5582 KiB  
Article
Generating Salt-Affected Irrigated Cropland Map in an Arid and Semi-Arid Region Using Multi-Sensor Remote Sensing Data
by Deji Wuyun, Junwei Bao, Luís Guilherme Teixeira Crusiol, Tuya Wulan, Liang Sun, Shangrong Wu, Qingqiang Xin, Zheng Sun, Ruiqing Chen, Jingyu Peng, Hongtao Xu, Nitu Wu, Anhong Hou, Lan Wu and Tingting Ren
Remote Sens. 2022, 14(23), 6010; https://doi.org/10.3390/rs14236010 - 27 Nov 2022
Viewed by 1800
Abstract
Soil salinization is a widespread environmental hazard and a major abiotic constraint affecting global food production and threatening food security. Salt-affected cropland is widely distributed in China, and the problem of salinization in the Hetao Irrigation District (HID) of the Inner Mongolia Autonomous Region is particularly prominent. Salt-affected soil in Inner Mongolia covers 1.75 million hectares, accounting for 14.8% of the total land. Therefore, mapping saline cropland in the irrigation district of Inner Mongolia could help evaluate the impacts of cropland soil salinization on the environment and food security. This study hypothesized that a reasonably accurate regional map of salt-affected cropland could be produced from a ground sampling approach based on PlanetScope images, combined with a methodology built on Sentinel multi-sensor images and a machine learning algorithm running on a cloud computing platform. Thus, a model was developed to create the salt-affected cropland map of HID in 2021 based on a modified cropland base map, valid saline and non-saline samples obtained through consistency testing, and various spectral parameters, such as reflectance bands, published salinity indices, vegetation indices, and texture information. Additionally, Sentinel multi-sensor data from the dry and wet seasons were used to determine the best solution for mapping saline cropland. The results imply that combining Sentinel-1 and Sentinel-2 data could map soil salinity in HID during the dry season with reasonable accuracy and close to real time. Indicators derived from the confusion matrix were then used to validate the established model. The combined dataset, which included reflectance bands, spectral indices, vertical transmit–vertical receive (VV) and vertical transmit–horizontal receive (VH) polarization, and texture information, achieved the highest overall accuracy at 0.8938, while the F1 scores for saline and non-saline cropland were 0.8687 and 0.9109, respectively. According to the analyses conducted for this study, salt-affected cropland can be detected more accurately during the dry season by using only Sentinel images from March to April. The findings of this study provide a clear explanation of the efficiency and standardization of salt-affected cropland mapping in arid and semi-arid regions, with significant potential for applicability outside the current study area. Full article
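
A minimal sketch of this kind of dry-season workflow in the Google Earth Engine Python API; the asset IDs, date window, band selection, and tree count below are illustrative assumptions, not the study's actual configuration.

    # Hypothetical sketch: dry-season Sentinel-1/2 composite plus random forest in
    # Google Earth Engine. Asset IDs, dates, bands, and parameters are placeholders.
    import ee

    ee.Initialize()
    region = ee.FeatureCollection("users/example/hid_cropland").geometry()   # placeholder asset
    samples = ee.FeatureCollection("users/example/salinity_samples")         # placeholder, property "saline"

    s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
          .filterBounds(region).filterDate("2021-03-01", "2021-04-30")
          .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)).median())
    s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
          .filterBounds(region).filterDate("2021-03-01", "2021-04-30")
          .filter(ee.Filter.eq("instrumentMode", "IW")).median())

    ndvi = s2.normalizedDifference(["B8", "B4"]).rename("NDVI")
    stack = (s2.select(["B2", "B3", "B4", "B8", "B11", "B12"])
             .addBands(ndvi).addBands(s1.select(["VV", "VH"])))

    training = stack.sampleRegions(collection=samples, properties=["saline"], scale=10)
    rf = ee.Classifier.smileRandomForest(300).train(training, "saline", stack.bandNames())
    salinity_map = stack.clip(region).classify(rf)   # 1 = saline cropland, 0 = non-saline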

21 pages, 6261 KiB  
Article
PODD: A Dual-Task Detection for Greenhouse Extraction Based on Deep Learning
by Junning Feng, Dongliang Wang, Fan Yang, Jing Huang, Minghao Wang, Mengfan Tao and Wei Chen
Remote Sens. 2022, 14(19), 5064; https://doi.org/10.3390/rs14195064 - 10 Oct 2022
Cited by 3 | Viewed by 2104
Abstract
The rapid growth of the global population is causing more severe food supply problems. To deal with these problems, agricultural greenhouses are an effective way to increase agricultural production within a limited space. To better guide agricultural activities and respond to future food crises, it is important to obtain both the area and the quantity distribution of agricultural greenhouses. In this study, a novel dual-task algorithm called Pixel-based and Object-based Dual-task Detection (PODD), which combines object detection and semantic segmentation, is proposed to estimate the quantity and extract the area of agricultural greenhouses from RGB remote sensing images. The algorithm obtains the quantity of agricultural greenhouses with an improved You Only Look Once X (YOLOX) network structure embedded with the Convolutional Block Attention Module (CBAM) and Adaptive Spatial Feature Fusion (ASFF). The introduction of CBAM compensates for the limited expressive ability of the feature extraction layer by retaining more important feature information, while the ASFF module makes full use of features at different scales to increase precision. The algorithm obtains the area of agricultural greenhouses with a DeeplabV3+ network using ResNet-101 as the feature extraction backbone, which not only effectively reduces hole and plaque issues but also extracts edge details. Experimental results show that the mAP and F1-score of the improved YOLOX network reach 97.65% and 97.50%, respectively, 1.50% and 2.59% higher than the original YOLOX solution. At the same time, the accuracy and mIoU of the DeeplabV3+ network reach 99.2% and 95.8%, respectively, 0.5% and 2.5% higher than the UNet solution. All of the metrics in the dual-task algorithm reach 95% or higher, showing that the PODD algorithm could be useful for the automatic extraction of agricultural greenhouses (both quantity and area) over large areas to guide agricultural policymaking. Full article
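
For the segmentation half of such a dual-task design, the sketch below uses torchvision's DeepLabV3 with a ResNet-101 backbone as a stand-in (torchvision ships DeepLabV3 rather than the DeepLabV3+ used in the paper); the class count and the random input tile are placeholders.

    # Hypothetical sketch: binary greenhouse segmentation with a DeepLabV3/ResNet-101 stand-in.
    # torchvision provides DeepLabV3 (not V3+); num_classes and the input tensor are placeholders.
    import torch
    from torchvision.models.segmentation import deeplabv3_resnet101

    model = deeplabv3_resnet101(weights=None, num_classes=2)   # 0 = background, 1 = greenhouse
    model.eval()

    rgb_tile = torch.rand(1, 3, 512, 512)                      # placeholder RGB tile, values in [0, 1]
    with torch.no_grad():
        logits = model(rgb_tile)["out"]                        # shape (1, 2, 512, 512)
    mask = logits.argmax(dim=1)                                # per-pixel class map
    print("greenhouse pixels:", int((mask == 1).sum()))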

12 pages, 25744 KiB  
Article
Cuscuta spp. Segmentation Based on Unmanned Aerial Vehicles (UAVs) and Orthomosaics Using a U-Net Xception-Style Model
by Lucia Gutiérrez-Lazcano, César J. Camacho-Bello, Eduardo Cornejo-Velazquez, José Humberto Arroyo-Núñez and Mireya Clavel-Maqueda
Remote Sens. 2022, 14(17), 4315; https://doi.org/10.3390/rs14174315 - 01 Sep 2022
Cited by 2 | Viewed by 1601
Abstract
Cuscuta spp. is a weed that infests many crops, causing significant losses. Traditional assessment methods and onsite manual measurements are time consuming and labor intensive. The precise identification of Cuscuta spp. offers a promising solution for implementing sustainable farming systems in order to apply appropriate control tactics. This study comprehensively evaluates a Cuscuta spp. segmentation model based on unmanned aerial vehicle (UAV) images and the U-Net architecture to generate orthomaps of infested areas for better decision making. The experiments were carried out on an arbol pepper (Capsicum annuum Linnaeus) crop, with four separate missions over three weeks to track the evolution of the weed. Tests with different input image sizes achieved a mean intersection-over-union (MIoU) above 70%. In addition, the proposal outperformed DeepLabV3+ in terms of prediction time and segmentation rate. The high segmentation rates allowed approximate quantification of the infestation area, ranging from 0.5 to 83 m2. The findings of this study show that the U-Net architecture is robust enough to segment the weed and provide an overview of the crop. Full article
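
To give a feel for the kind of encoder-decoder involved, here is a compact, hypothetical plain U-Net in PyTorch; it omits the Xception-style blocks of the paper's model, and all channel counts and tile sizes are placeholders.

    # Hypothetical sketch: a small plain U-Net for binary weed segmentation on RGB tiles.
    # This generic version does not reproduce the Xception-style blocks used in the paper.
    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        )

    class MiniUNet(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.enc1, self.enc2 = block(3, 32), block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = block(64, 128)
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = block(128, 64)
            self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = block(64, 32)
            self.head = nn.Conv2d(32, n_classes, 1)

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            b = self.bottleneck(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
            return self.head(d1)

    model = MiniUNet()
    print(model(torch.rand(1, 3, 256, 256)).shape)   # torch.Size([1, 2, 256, 256])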

23 pages, 9998 KiB  
Article
An Improved Apple Object Detection Method Based on Lightweight YOLOv4 in Complex Backgrounds
by Chenxi Zhang, Feng Kang and Yaxiong Wang
Remote Sens. 2022, 14(17), 4150; https://doi.org/10.3390/rs14174150 - 24 Aug 2022
Cited by 29 | Viewed by 3327
Abstract
Convolutional neural networks have recently experienced successful development in the field of computer vision. In precision agriculture, apple picking robots use computer vision methods to detect apples in orchards. However, existing object detection algorithms often face problems such as leaf shading, complex illumination environments, and small, dense recognition targets, resulting in low apple detection rates and inaccurate localization. In view of these problems, we designed an apple detection model based on lightweight YOLOv4—called Improved YOLOv4—from the perspective of industrial application. First, to improve the detection accuracy while reducing the amount of computation, the GhostNet feature extraction network with a Coordinate Attention module is implemented in YOLOv4, and depth-wise separable convolution is introduced to reconstruct the neck and YOLO head structures. Then, a Coordinate Attention module is added to the feature pyramid network (FPN) structure in order to enhance the feature extraction ability for medium and small targets. In the last 15% of epochs in training, the mosaic data augmentation strategy is turned off in order to further improve the detection performance. Finally, a long-range target screening strategy is proposed for standardized dense planting apple orchards with dwarf rootstock, removing apples in non-target rows and improving detection performance and recognition speed. On the constructed apple data set, compared with YOLOv4, the mAP of Improved YOLOv4 was increased by 3.45% (to 95.72%). The weight size of Improved YOLOv4 is only 37.9 MB, 15.53% of that of YOLOv4, and the detection speed is improved by 5.7 FPS. Two detection methods of similar size—YOLOX-s and EfficientNetB0-YOLOv3—were compared with Improved YOLOv4. Improved YOLOv4 outperformed these two algorithms by 1.82% and 2.33% mAP, respectively, on the total test set and performed optimally under all illumination conditions. The presented results indicate that Improved YOLOv4 has excellent detection accuracy and good robustness, and the proposed long-range target screening strategy has an important reference value for solving the problem of accurate and rapid identification of various fruits in standard orchards. Full article
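
As a minimal illustration of the Coordinate Attention idea mentioned above, the PyTorch sketch below follows the general structure of the published module (using ReLU in place of the original h-swish activation); channel counts and the reduction ratio are placeholders, and this is not the authors' exact implementation.

    # Hypothetical sketch: a coordinate attention block that factorizes pooling along
    # height and width, then re-weights the feature map with direction-aware attention.
    import torch
    import torch.nn as nn

    class CoordinateAttention(nn.Module):
        def __init__(self, channels, reduction=32):
            super().__init__()
            mid = max(8, channels // reduction)
            self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool over width
            self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool over height
            self.conv1 = nn.Conv2d(channels, mid, 1)
            self.bn = nn.BatchNorm2d(mid)
            self.act = nn.ReLU(inplace=True)                # original module uses h-swish
            self.conv_h = nn.Conv2d(mid, channels, 1)
            self.conv_w = nn.Conv2d(mid, channels, 1)

        def forward(self, x):
            b, c, h, w = x.size()
            xh = self.pool_h(x)                             # (b, c, h, 1)
            xw = self.pool_w(x).permute(0, 1, 3, 2)         # (b, c, w, 1)
            y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
            yh, yw = torch.split(y, [h, w], dim=2)
            ah = torch.sigmoid(self.conv_h(yh))             # (b, c, h, 1)
            aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (b, c, 1, w)
            return x * ah * aw

    x = torch.rand(2, 64, 32, 32)
    print(CoordinateAttention(64)(x).shape)   # torch.Size([2, 64, 32, 32])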

24 pages, 4691 KiB  
Article
UAV Remote Sensing for High-Throughput Phenotyping and for Yield Prediction of Miscanthus by Machine Learning Techniques
by Giorgio Impollonia, Michele Croci, Andrea Ferrarini, Jason Brook, Enrico Martani, Henri Blandinières, Andrea Marcone, Danny Awty-Carroll, Chris Ashman, Jason Kam, Andreas Kiesel, Luisa M. Trindade, Mirco Boschetti, John Clifton-Brown and Stefano Amaducci
Remote Sens. 2022, 14(12), 2927; https://doi.org/10.3390/rs14122927 - 19 Jun 2022
Cited by 12 | Viewed by 3859
Abstract
Miscanthus holds great potential in the frame of the bioeconomy, and yield prediction can help improve the Miscanthus logistic supply chain. Breeding programs in several countries are attempting to produce high-yielding Miscanthus hybrids better adapted to different climates and end-uses. Multispectral images acquired from unmanned aerial vehicles (UAVs) in Italy and in the UK in 2021 and 2022 were used to investigate the feasibility of high-throughput phenotyping (HTP) of novel Miscanthus hybrids for yield prediction and crop trait estimation. An intercalibration procedure was performed using simulated data from the PROSAIL model to link vegetation indices (VIs) derived from two different multispectral sensors. The random forest algorithm estimated yield traits (light interception, plant height, green leaf biomass, and standing biomass) with good accuracy using 15 VI time series, and predicted yield using peak descriptors derived from these VI time series with a root mean square error of 2.3 Mg DM ha−1. The study demonstrates the potential of UAV multispectral images in HTP applications and in yield prediction, providing important information needed to increase sustainable biomass production. Full article
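
The yield-prediction step can be pictured with a minimal scikit-learn sketch; the table of VI time-series peak descriptors, the column names, and the cross-validation scheme are illustrative assumptions, not the authors' setup.

    # Hypothetical sketch: random forest regression of Miscanthus yield from
    # vegetation-index time-series peak descriptors. File and columns are placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import mean_squared_error

    plots = pd.read_csv("vi_peak_descriptors.csv")     # placeholder: one row per plot
    X = plots.drop(columns=["yield_mg_dm_ha"])         # e.g., peak value, peak date, AUC per VI
    y = plots["yield_mg_dm_ha"]

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    y_pred = cross_val_predict(rf, X, y, cv=5)         # assumed 5-fold cross-validation
    rmse = mean_squared_error(y, y_pred) ** 0.5
    print(f"RMSE: {rmse:.2f} Mg DM ha-1")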

22 pages, 38899 KiB  
Article
The Classification Method Study of Crops Remote Sensing with Deep Learning, Machine Learning, and Google Earth Engine
by Jinxi Yao, Ji Wu, Chengzhi Xiao, Zhi Zhang and Jianzhong Li
Remote Sens. 2022, 14(12), 2758; https://doi.org/10.3390/rs14122758 - 08 Jun 2022
Cited by 24 | Viewed by 6554
Abstract
The extraction and classification of crops is a core issue in agricultural remote sensing. The precise classification of crop types is of great significance to the monitoring and evaluation of crop planting area, growth, and yield. Based on the Google Earth Engine and Google Colab cloud platforms, this study takes the typical agricultural oasis area of Xiangride Town, Qinghai Province, as an example. It compares traditional machine learning (random forest, RF), object-oriented classification (OO), and deep neural networks (DNN), and proposes a classification framework combining random forest with a deep neural network (RF+DNN). The spatial characteristics of band information, vegetation indices, and polarization of the main crops in the study area were constructed using Sentinel-1 and Sentinel-2 data. The temporal characteristics of crop phenology and growth state were analyzed using the curve curvature method, and the data were screened in time and space. By comparing the accuracy of the four classification methods, the advantages of the RF+DNN model and its application value in crop classification were illustrated. The results showed that, for crops in the study area during the period of good growth and development, better classification results could be obtained using the RF+DNN method, whose accuracy, training time, and prediction time were better than those of DNN alone. The overall accuracy and Kappa coefficient were 0.98 and 0.97, respectively, higher than those of random forest (OA = 0.87, Kappa = 0.82), object-oriented classification (OA = 0.78, Kappa = 0.70), and the deep neural network (OA = 0.93, Kappa = 0.90). The scalable and simple classification method proposed in this paper takes full advantage of cloud platforms for data access and computation, and shows that traditional machine learning combined with deep learning can effectively improve classification accuracy. Timely and accurate extraction of crop types at different spatial and temporal scales is of great significance for monitoring cropping pattern change, estimating crop yield, and issuing crop safety warnings. Full article
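
The abstract does not detail how the RF and DNN stages are combined; purely as an assumed illustration, the sketch below stacks random forest class probabilities onto the Sentinel-derived features and passes them to a small neural network, with scikit-learn's MLPClassifier standing in for the DNN. All file and column names are placeholders.

    # Hypothetical sketch of an RF+DNN stacking idea (an assumption, not the paper's method):
    # random forest class probabilities are appended to the Sentinel-1/2 features and
    # passed to a small neural network.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    df = pd.read_csv("crop_samples_s1_s2.csv")        # placeholder training samples
    X, y = df.drop(columns=["crop"]).to_numpy(), df["crop"].to_numpy()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    X_tr_stack = np.hstack([X_tr, rf.predict_proba(X_tr)])
    X_te_stack = np.hstack([X_te, rf.predict_proba(X_te)])

    dnn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
    dnn.fit(X_tr_stack, y_tr)
    pred = dnn.predict(X_te_stack)
    print("OA:", accuracy_score(y_te, pred), "Kappa:", cohen_kappa_score(y_te, pred))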

25 pages, 5199 KiB  
Article
Identification of Infiltration Features and Hydraulic Properties of Soils Based on Crop Water Stress Derived from Remotely Sensed Data
by Jakub Brom, Renata Duffková, Jan Haberle, Antonín Zajíček, Václav Nedbal, Tereza Bernasová and Kateřina Křováková
Remote Sens. 2021, 13(20), 4127; https://doi.org/10.3390/rs13204127 - 15 Oct 2021
Cited by 1 | Viewed by 2149
Abstract
Knowledge of the spatial variability of soil hydraulic properties is important for many reasons, e.g., for soil erosion protection, or the assessment of surface and subsurface runoff. Precision agriculture, for which knowledge of soil hydraulic properties is essential, is gaining importance, especially for the optimization of nitrogen fertilization. The present work aimed to exploit the ability of vegetation cover to identify the spatial variability of soil hydraulic properties through the expression of water stress. The assessment of the spatial distribution of saturated soil hydraulic conductivity (Ks) and field water capacity (FWC) was based on a combination of ground-based measurements and thermal and hyperspectral airborne imaging data. The crop water stress index (CWSI) was used as an indicator of crop water stress to assess the hydraulic properties of the soil, and supplementary vegetation indices were also used. The support vector regression (SVR) method was used to estimate soil hydraulic properties from aerial data. Data analysis showed that the approach estimated Ks with good results (R2 = 0.77) for stands with developed crop water stress. The regression coefficient values for estimation of FWC for topsoil (0–0.3 m) ranged from R2 = 0.38 to R2 = 0.99. The differences between study sites in the FWC estimations were greater for the subsoil layer (0.3–0.6 m), with R2 values ranging from 0.12 to 0.99. Several factors affect the quality of the soil hydraulic property estimation, such as the development of crop water stress, crop condition, and the period and time of imaging. The approach is useful for practical applications, especially in precision agriculture, owing to its relative simplicity. Full article
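
A minimal scikit-learn sketch of the SVR step, assuming a hypothetical table pairing CWSI and vegetation-index values with measured Ks; the column names and kernel settings are placeholders, not the authors' configuration.

    # Hypothetical sketch: support vector regression of saturated hydraulic conductivity (Ks)
    # from CWSI and supplementary vegetation indices. File and columns are placeholders.
    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import r2_score

    obs = pd.read_csv("field_points_cwsi_vis_ks.csv")   # placeholder ground + airborne data
    X = obs[["CWSI", "NDVI", "MSAVI"]]                   # assumed predictor columns
    y = obs["Ks"]

    svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    y_pred = cross_val_predict(svr, X, y, cv=5)
    print("R2:", r2_score(y, y_pred))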

18 pages, 63970 KiB  
Article
Crop Classification of Satellite Imagery Using Synthetic Multitemporal and Multispectral Images in Convolutional Neural Networks
by Guillermo Siesto, Marcos Fernández-Sellers and Adolfo Lozano-Tello
Remote Sens. 2021, 13(17), 3378; https://doi.org/10.3390/rs13173378 - 25 Aug 2021
Cited by 12 | Viewed by 5964
Abstract
The demand for new tools for mass remote sensing of crops, combined with the open and free availability of satellite imagery, has prompted the development of new methods for crop classification. Because this classification is frequently required to be completed within a specific time frame, performance is also essential. In this work, we propose a new method that creates synthetic images by extracting satellite data at the pixel level, processing all available bands, as well as their data distributed over time considering images from multiple dates. With this approach, data from images of Sentinel-2 are used by a deep convolutional network system, which will extract the necessary information to discern between different types of crops over a year after being trained with data from previous years. Following the proposed methodology, it is possible to classify crops and distinguish between several crop classes while also being computationally low-cost. A software system that implements this method has been used in an area of Extremadura (Spain) as a complementary monitoring tool for the subsidies supported by the Common Agricultural Policy of the European Union. Full article
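
To illustrate the idea of per-pixel synthetic images (bands arranged against acquisition dates) fed to a convolutional network, here is a minimal PyTorch sketch; the band and date counts, class count, and random input are placeholders, not the authors' configuration.

    # Hypothetical sketch: each pixel becomes a small 2D "synthetic image" of shape
    # (bands x dates), classified by a compact CNN. All sizes are placeholders.
    import torch
    import torch.nn as nn

    class PixelCropCNN(nn.Module):
        def __init__(self, n_classes):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, n_classes),
            )

        def forward(self, x):          # x: (batch, 1, n_bands, n_dates)
            return self.net(x)

    model = PixelCropCNN(n_classes=8)
    dummy = torch.randn(4, 1, 10, 12)  # 4 synthetic per-pixel images: 10 bands x 12 dates
    print(model(dummy).shape)          # torch.Size([4, 8]) class scores per pixel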
