Machine Learning for Multi-Source Remote Sensing Images Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (16 June 2023) | Viewed by 13365

Special Issue Editor


Dr. Xanthoula Eirini Pantazi
Guest Editor
Faculty of Agriculture, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Interests: artificial intelligence; machine learning; data mining; data fusion; precision agriculture; biosystems engineering; automation; sensors; yield prediction; crop disease detection; weed management

Special Issue Information

Dear Colleagues,

Recent advances in learning methods have become a powerful driving force in artificial intelligence and have encouraged many research fields to apply machine learning to long-standing problems. Several machine learning models are now widely used and perform well in multi-object and multiscale remote sensing image segmentation, classification, clustering, object recognition, anomaly detection, and prediction. Information from images acquired by multiple sources can be combined to achieve higher accuracy and more specific inferences than any single source alone. Because of the complex data processing steps involved, machine learning for multi-source data is therefore best viewed as a synergistic framework rather than merely a collection of tools and methods for integration.

Remote sensing images and data provide critical information about how solar energy is partitioned into different compartments in natural systems. Machine learning methods applied to remote sensing images and data with different spatial, spectral, radiometric, and temporal resolutions can support pre-processing, retrieval, analysis, interpretation, and mapping in an iterative and holistic way, underpinning various types of decision analysis for sustainable development.

This Special Issue aims to share high-quality research on the application of machine learning techniques to remote sensing images acquired from multiple sources, with the goal of increasing the usability and quality of these data. It will present the latest advances and trends in restoration and reconstruction algorithms for remote sensing image processing, addressing novel approaches and practical solutions for multimodal remote sensing data processing and analysis applications.

Topics of interest include but are not limited to the following:

  • Remote sensing image fusion;
  • Remote sensing image super-resolution;
  • Deep learning for multimodal land use and land cover classification/mapping;
  • Advanced ANNs for large-scale and even global object classification and recognition;
  • Multi-modal data fusion, analysis, and interpretation;
  • Multi-temporal remote sensing data for time series analysis;
  • Neural architectures optimized for multimodal remote sensing;
  • Feature fusion and learning for anomaly and object detection.

Dr. Xanthoula Eirini Pantazi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial neural networks
  • imaging spectroscopy analysis
  • deep learning
  • pattern recognition and data mining
  • data regression and classification
  • multi-spectral image and data processing
  • data fusion
  • image fusion
  • information fusion
  • sensor fusion

Published Papers (6 papers)


Research


19 pages, 12206 KiB  
Article
Integrating Topographic Skeleton into Deep Learning for Terrain Reconstruction from GDEM and Google Earth Image
by Kai Chen, Chun Wang, Mingyue Lu, Wen Dai, Jiaxin Fan, Mengqi Li and Shaohua Lei
Remote Sens. 2023, 15(18), 4490; https://doi.org/10.3390/rs15184490 - 12 Sep 2023
Cited by 1 | Viewed by 1123
Abstract
The topographic skeleton is the primary expression and intuitive understanding of topographic relief. This study integrated a topographic skeleton into deep learning for terrain reconstruction. Firstly, a topographic skeleton, such as valley, ridge, and gully lines, was extracted from a global digital elevation model (GDEM) and Google Earth Image (GEI). Then, the Conditional Generative Adversarial Network (CGAN) was used to learn the elevation sequence information between the topographic skeleton and high-precision 5 m DEMs. Thirdly, different combinations of topographic skeletons extracted from 5 m, 12.5 m, and 30 m DEMs and a 1 m GEI were compared for reconstructing 5 m DEMs. The results show the following: (1) from the perspective of the visual effect, the 5 m DEMs generated with the three combinations (5 m DEM + 1 m GEI, 12.5 m DEM + 1 m GEI, and 30 m DEM + 1 m GEI) were all similar to the original 5 m DEM (reference data), which provides a markedly increased level of terrain detail information when compared to the traditional interpolation methods; (2) from the perspective of elevation accuracy, the 5 m DEMs reconstructed by the three combinations have a high correlation (>0.9) with the reference data, while the vertical accuracy of the 12.5 m DEM + 1 m GEI combination is obviously higher than that of the 30 m DEM + 1 m GEI combination; and (3) from the perspective of topographic factors, the distribution trends of the reconstructed 5 m DEMs are all close to the reference data in terms of the extracted slope and aspect. This study enhances the quality of open-source DEMs and introduces innovative ideas for producing high-precision DEMs. Among the three combinations, we recommend the 12.5 m DEM + 1 m GEI combination for DEM reconstruction due to its relative high accuracy and open access. In regions where a field survey of high-precision DEMs is difficult, open-source DEMs combined with GEI can be used in high-precision DEM reconstruction. Full article
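
To illustrate the conditional adversarial setup this entry relies on, the sketch below pairs a small generator with a PatchGAN-style discriminator so that a DEM patch is generated conditioned on a stacked skeleton-plus-imagery tensor. It is a minimal pix2pix-style sketch under assumed shapes and channel counts, not the authors' CGAN implementation.

```python
# Minimal pix2pix-style CGAN sketch for skeleton-conditioned DEM reconstruction.
# Hypothetical shapes: condition = 2 channels (topographic skeleton + GEI band),
# target = 1-channel 5 m DEM patch. Not the paper's network or training schedule.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a conditioning stack (skeleton + imagery) to a DEM patch."""
    def __init__(self, in_ch=2, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
        )
    def forward(self, cond):
        return self.net(cond)

class Discriminator(nn.Module):
    """PatchGAN-style critic that sees the condition and the (real or fake) DEM."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),  # patch-level real/fake logits
        )
    def forward(self, cond, dem):
        return self.net(torch.cat([cond, dem], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

# One training step on a dummy batch (replace with real skeleton/GEI/DEM patches).
cond = torch.randn(4, 2, 64, 64)      # skeleton + GEI condition
real_dem = torch.randn(4, 1, 64, 64)  # 5 m reference DEM

# Discriminator update: real patches labelled 1, generated patches labelled 0.
fake_dem = G(cond).detach()
d_real, d_fake = D(cond, real_dem), D(cond, fake_dem)
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: fool the discriminator while staying close to the reference DEM.
fake_dem = G(cond)
d_fake = D(cond, fake_dem)
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + 100.0 * l1_loss(fake_dem, real_dem)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 term keeps generated elevations numerically close to the reference DEM while the adversarial term encourages realistic terrain texture; the weighting (100.0) is an assumed pix2pix-style default.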

18 pages, 3295 KiB  
Article
UAV-Based Disease Detection in Palm Groves of Phoenix canariensis Using Machine Learning and Multispectral Imagery
by Enrique Casas, Manuel Arbelo, José A. Moreno-Ruiz, Pedro A. Hernández-Leal and José A. Reyes-Carlos
Remote Sens. 2023, 15(14), 3584; https://doi.org/10.3390/rs15143584 - 18 Jul 2023
Viewed by 1438
Abstract
Climate change and the appearance of pests and pathogens are leading to the disappearance of palm groves of Phoenix canariensis in the Canary Islands. Traditional pathology diagnostic techniques are resource-demanding and poorly reproducible, and it is necessary to develop new monitoring methodologies. This study presents a tool to identify individuals infected by Serenomyces phoenicis and Phoenicococcus marlatti using UAV-derived multispectral images and machine learning. In the first step, image segmentation and classification techniques allowed us to calculate a relative prevalence of affected leaves at an individual scale for each palm tree, so that we could finally use this information with labelled in situ data to build a probabilistic classification model to detect infected specimens. Both the pixel classification performance and the model’s fitness were evaluated using different metrics such as omission and commission errors, accuracy, precision, recall, and F1-score. It is worth noting the accuracy of more than 0.96 obtained for the pixel classification of the affected and healthy leaves, and the good detection ability of the probabilistic classification model, which reached an accuracy of 0.87 for infected palm trees. The proposed methodology is presented as an efficient tool for identifying infected palm specimens, using spectral information, reducing the need for fieldwork and facilitating phytosanitary treatment. Full article
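
As a rough illustration of the two-stage workflow described above (pixel-level leaf classification followed by a probabilistic per-palm infection model), the sketch below uses generic scikit-learn estimators on synthetic band values; the features, labels, thresholds, and the logistic model are illustrative assumptions rather than the authors' exact pipeline.

```python
# Illustrative two-stage workflow: (1) classify pixels as affected/healthy leaf,
# (2) aggregate a per-palm prevalence and fit a probabilistic infection classifier.
# All band values, labels, and the aggregation rule are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Stage 1: pixel classifier on multispectral band values (dummy data).
X_pix = rng.random((2000, 5))                 # 5 spectral bands per pixel
y_pix = (X_pix[:, 3] > 0.5).astype(int)       # 1 = affected leaf (synthetic rule)
pixel_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_pix, y_pix)

# Stage 2: relative prevalence of affected-leaf pixels per palm -> infection model.
n_palms, pixels_per_palm = 200, 400
prevalence = np.empty(n_palms)
for i in range(n_palms):
    palm_pixels = rng.random((pixels_per_palm, 5))
    prevalence[i] = pixel_clf.predict(palm_pixels).mean()
# Synthetic stand-in for labelled in situ infection data.
y_palm = (prevalence + 0.1 * rng.standard_normal(n_palms) > 0.5).astype(int)

palm_clf = LogisticRegression().fit(prevalence.reshape(-1, 1), y_palm)
pred = palm_clf.predict(prevalence.reshape(-1, 1))
print("palm-level accuracy:", accuracy_score(y_palm, pred),
      "F1:", f1_score(y_palm, pred))
```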

16 pages, 2793 KiB  
Communication
A Target Imaging and Recognition Method Based on Raptor Vision
by Bitong Xu, Zhengzhou Li, Bei Cheng, Yuxin Yang and Abubakar Siddique
Remote Sens. 2023, 15(8), 2106; https://doi.org/10.3390/rs15082106 - 17 Apr 2023
Viewed by 1498
Abstract
It is a big challenge to quickly and accurately recognize targets in a complex background. The mutual constraints between a wide field of vision (FOV) and high resolution affect the optical tracking and imaging ability in a wide area. In nature, raptors possess unique imaging structures and optic nerve systems that can accurately recognize prey. This paper proposes an imaging system combined with a deep learning algorithm based on the visual characteristics of raptors, aiming to achieve wide FOV, high spatial resolution, and accurate recognition ability. As for the imaging system, two sub-optical systems with different focal lengths and various-size photoreceptor cells jointly simulate the deep fovea of a raptor’s eye. The one simulating the peripheral region has a wide FOV and high sensitivity for capturing the target quickly by means of short focal length and large-size photoreceptor cells, and the other imitating the central region has high resolution for recognizing the target accurately through the long focal length and small-size photoreceptor cells. Furthermore, the proposed algorithm with an attention and feedback network based on octave convolution (AOCNet) simulates the mechanism of the optic nerve pathway by adding it into the convolutional neural network (CNN), thereby enhancing the ability of feature extraction and target recognition. Experimental results show that the target imaging and recognition system eliminates the limitation between wide FOV and high spatial resolution, and effectively improves the accuracy of target recognition in complex backgrounds. Full article
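
The AOCNet described above builds on octave convolution; a generic octave convolution layer (not the authors' network) is sketched below to show the high-/low-frequency feature split that loosely parallels the central/peripheral imaging regions. Channel ratios and spatial sizes are assumptions.

```python
# Generic octave convolution layer: feature maps are split into a full-resolution
# high-frequency branch and a half-resolution low-frequency branch, exchanged via
# pooling and upsampling. A plain OctConv sketch, not the paper's AOCNet.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5, kernel_size=3, padding=1):
        super().__init__()
        in_lo, out_lo = int(alpha * in_ch), int(alpha * out_ch)
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        self.h2h = nn.Conv2d(in_hi, out_hi, kernel_size, padding=padding)
        self.h2l = nn.Conv2d(in_hi, out_lo, kernel_size, padding=padding)
        self.l2h = nn.Conv2d(in_lo, out_hi, kernel_size, padding=padding)
        self.l2l = nn.Conv2d(in_lo, out_lo, kernel_size, padding=padding)

    def forward(self, x_hi, x_lo):
        # High-frequency output: same-resolution path + upsampled low-frequency path.
        y_hi = self.h2h(x_hi) + F.interpolate(self.l2h(x_lo), scale_factor=2, mode="nearest")
        # Low-frequency output: downsampled high-frequency path + same-resolution path.
        y_lo = self.l2l(x_lo) + self.h2l(F.avg_pool2d(x_hi, 2))
        return y_hi, y_lo

# Dummy forward pass: 32-channel input split evenly between the two branches.
x_hi = torch.randn(1, 16, 64, 64)   # high-frequency, full resolution
x_lo = torch.randn(1, 16, 32, 32)   # low-frequency, half resolution
y_hi, y_lo = OctConv(32, 32)(x_hi, x_lo)
print(y_hi.shape, y_lo.shape)       # [1, 16, 64, 64] and [1, 16, 32, 32]
```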

26 pages, 16407 KiB  
Article
Autonomous Detection of Mouse-Ear Hawkweed Using Drones, Multispectral Imagery and Supervised Machine Learning
by Narmilan Amarasingam, Mark Hamilton, Jane E. Kelly, Lihong Zheng, Juan Sandino, Felipe Gonzalez, Remy L. Dehaan and Hillary Cherry
Remote Sens. 2023, 15(6), 1633; https://doi.org/10.3390/rs15061633 - 17 Mar 2023
Cited by 5 | Viewed by 1680
Abstract
Hawkweeds (Pilosella spp.) have become a severe and rapidly invading weed in pasture lands and forest meadows of New Zealand. Detection of hawkweed infestations is essential for eradication and resource management at private and government levels. This study explores the potential of machine learning (ML) algorithms for detecting mouse-ear hawkweed (Pilosella officinarum) foliage and flowers from Unmanned Aerial Vehicle (UAV)-acquired multispectral (MS) images at various spatial resolutions. The performances of different ML algorithms, namely eXtreme Gradient Boosting (XGB), Support Vector Machine (SVM), Random Forest (RF), and K-nearest neighbours (KNN), were analysed in their capacity to detect hawkweed foliage and flowers using MS imagery. The imagery was obtained at numerous spatial resolutions from a highly infested study site located in the McKenzie Region of the South Island of New Zealand in January 2021. The spatial resolution of 0.65 cm/pixel (acquired at a flying height of 15 m above ground level) produced the highest overall testing and validation accuracy of 100% using the RF, KNN, and XGB models for detecting hawkweed flowers. In hawkweed foliage detection at the same resolution, the RF and XGB models achieved highest testing accuracy of 97%, while other models (KNN and SVM) achieved an overall model testing accuracy of 96% and 72%, respectively. The XGB model achieved the highest overall validation accuracy of 98%, while the other models (RF, KNN, and SVM) produced validation accuracies of 97%, 97%, and 80%, respectively. This proposed methodology may facilitate non-invasive detection efforts of mouse-ear hawkweed flowers and foliage in other naturalised areas, enabling land managers to optimise the use of UAV remote sensing technologies for better resource allocation. Full article
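
A hedged sketch of the kind of multi-classifier comparison reported above is given below, using synthetic band reflectances; scikit-learn's GradientBoostingClassifier stands in for XGBoost so the example needs only one library, and the features, labels, and split sizes are assumptions.

```python
# Illustrative comparison of the four classifier families on dummy multispectral
# band reflectances. GradientBoostingClassifier substitutes for XGBoost here;
# the synthetic labelling rule below is not the study's ground truth.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((1500, 5))                                # 5 multispectral bands per sample
y = (0.6 * X[:, 2] + 0.4 * X[:, 4] > 0.5).astype(int)    # 1 = hawkweed (synthetic rule)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "XGB (gradient boosting stand-in)": GradientBoostingClassifier(random_state=1),
    "RF": RandomForestClassifier(n_estimators=200, random_state=1),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```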

21 pages, 120562 KiB  
Article
Attention-Based Matching Approach for Heterogeneous Remote Sensing Images
by Huitai Hou, Chaozhen Lan, Qing Xu, Liang Lv, Xin Xiong, Fushan Yao and Longhao Wang
Remote Sens. 2023, 15(1), 163; https://doi.org/10.3390/rs15010163 - 27 Dec 2022
Cited by 2 | Viewed by 2683
Abstract
Heterogeneous images acquired from various platforms and sensors provide complementary information. However, to use that information in applications such as image fusion and change detection, accurate image matching is essential to further process and analyze these heterogeneous images, especially if they have significant differences in radiation and geometric characteristics. Therefore, matching heterogeneous remote sensing images is challenging. To address this issue, we propose a feature point matching method named Cross and Self Attentional Matcher (CSAM) based on Attention mechanisms (algorithms) that have been extensively used in various computer vision-based applications. Specifically, CSAM alternatively uses self-Attention and cross-Attention on the two matching images to exploit feature point location and context information. Then, the feature descriptor is further aggregated to assist CSAM in creating matching point pairs while removing the false matching points. To further improve the training efficiency of CSAM, this paper establishes a new training dataset of heterogeneous images, including 1,000,000 generated image pairs. Extensive experiments indicate that CSAM outperforms the existing feature extraction and matching methods, including SIFT, RIFT, CFOG, NNDR, FSC, GMS, OA-Net, and Superglue, attaining an average precision and processing time of 81.29% and 0.13 s. In addition to higher matching performance and computational efficiency, CSAM has better generalization ability for multimodal image matching and registration tasks. Full article
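
To give a concrete feel for alternating self- and cross-attention over keypoint descriptors, the sketch below applies a generic attention block to two descriptor sets and derives candidate matches by mutual nearest neighbours. It is not the CSAM architecture; the descriptor dimension, head count, and matching rule are assumptions.

```python
# Generic alternating self-/cross-attention over two sets of keypoint descriptors,
# followed by a simple mutual-nearest-neighbour matching step. Illustrative only.
import torch
import torch.nn as nn

class SelfCrossBlock(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, desc_a, desc_b):
        # Self-attention: each image's keypoints attend to their own context.
        desc_a = desc_a + self.self_attn(desc_a, desc_a, desc_a)[0]
        desc_b = desc_b + self.self_attn(desc_b, desc_b, desc_b)[0]
        # Cross-attention: keypoints attend to the other image's keypoints.
        desc_a = desc_a + self.cross_attn(desc_a, desc_b, desc_b)[0]
        desc_b = desc_b + self.cross_attn(desc_b, desc_a, desc_a)[0]
        return desc_a, desc_b

# Dummy descriptors for two heterogeneous images (batch 1, 500 keypoints, dim 128).
desc_a, desc_b = torch.randn(1, 500, 128), torch.randn(1, 500, 128)
desc_a, desc_b = SelfCrossBlock()(desc_a, desc_b)

# Similarity matrix; mutual nearest neighbours give candidate matching point pairs.
scores = torch.einsum("bnd,bmd->bnm", desc_a, desc_b)
matches_ab = scores.argmax(dim=2)    # best match in B for each keypoint in A
matches_ba = scores.argmax(dim=1)    # best match in A for each keypoint in B
mutual = matches_ba.gather(1, matches_ab) == torch.arange(500).unsqueeze(0)
print("mutual nearest-neighbour matches:", int(mutual.sum()))
```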

Review


23 pages, 3535 KiB  
Review
A Review on UAV-Based Applications for Plant Disease Detection and Monitoring
by Louis Kouadio, Moussa El Jarroudi, Zineb Belabess, Salah-Eddine Laasli, Md Zohurul Kadir Roni, Ibn Dahou Idrissi Amine, Nourreddine Mokhtari, Fouad Mokrini, Jürgen Junk and Rachid Lahlali
Remote Sens. 2023, 15(17), 4273; https://doi.org/10.3390/rs15174273 - 31 Aug 2023
Cited by 3 | Viewed by 3792
Abstract
Remote sensing technology is vital for precision agriculture, aiding in early issue detection, resource management, and environmentally friendly practices. Recent advances in remote sensing technology and data processing have propelled unmanned aerial vehicles (UAVs) into valuable tools for obtaining detailed data on plant diseases with high spatial, temporal, and spectral resolution. Given the growing body of scholarly research centered on UAV-based disease detection, a comprehensive review and analysis of current studies becomes imperative to provide a panoramic view of evolving methodologies in plant disease monitoring and to strategically evaluate the potential and limitations of such strategies. This study undertakes a systematic quantitative literature review to summarize existing literature and discern current research trends in UAV-based applications for plant disease detection and monitoring. Results reveal a global disparity in research on the topic, with Asian countries being the top contributing countries (43 out of 103 papers). World regions such as Oceania and Africa exhibit comparatively lesser representation. To date, research has largely focused on diseases affecting wheat, sugar beet, potato, maize, and grapevine. Multispectral, red-green-blue, and hyperspectral sensors were most often used to detect and identify disease symptoms, with current trends pointing to approaches integrating multiple sensors and the use of machine learning and deep learning techniques. Future research should prioritize (i) development of cost-effective and user-friendly UAVs, (ii) integration with emerging agricultural technologies, (iii) improved data acquisition and processing efficiency, (iv) diverse testing scenarios, and (v) ethical considerations through proper regulations. Full article
