Satellite Remote Sensing with Artificial Intelligence

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Satellite Missions for Earth and Planetary Exploration".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 9959

Special Issue Editors


Guest Editor
Multidisciplinary Institute for Environment Studies “Ramon Margalef”, University of Alicante, Edificio Nuevos Institutos, Carretera de San Vicente del Raspeig s/n, 03690 San Vicente del Raspeig, Alicante, Spain
Interests: deep learning; forest; desertification; LU/LC change; convolutional neural networks; ecology; generative adversarial networks; object-based image analysis; semantic segmentation; super-resolution

Guest Editor
Climate and Livability Initiative, Division of Biological and Environmental Sciences and Engineering, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia
Interests: drylands; ecology; climate change; species distribution modeling; carbon sequestration; deep learning; object-based image analysis; UAVs; sensor fusion

Special Issue Information

Dear Colleagues,

Artificial intelligence has become a key tool for interpreting and improving remotely sensed data. Methodologies based on machine learning and deep learning are now established approaches for characterizing, modeling and enhancing remote sensing data sources. In particular, supervised machine learning algorithms widely used for classification and regression problems, and artificial neural networks designed to process pixel data for image recognition, such as convolutional neural networks, are prominent examples applied to satellite remote sensing data. These methodological advances have been accompanied by the growing availability and quality of satellite data (e.g., optical, multispectral and hyperspectral, thermal, lidar and synthetic aperture radar (SAR) sensors) and of long time series. Together, these remotely sensed data are driving the development of computer vision within artificial intelligence, which has achieved unprecedented results at local and global scales.
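To make the first of these families concrete, the sketch below trains a pixel-wise random forest classifier on multispectral reflectance values. It is a minimal, self-contained illustration with synthetic data and a placeholder number of bands and classes, not a workflow tied to any particular mission.

```python
# Minimal sketch: pixel-wise land-cover classification of a multispectral
# image with a random forest (synthetic data; a real workflow would load
# satellite bands, e.g. with rasterio, and use field-verified labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands = 5000, 6          # e.g. six spectral bands per pixel
X = rng.random((n_pixels, n_bands))  # reflectance values (placeholder)
y = rng.integers(0, 4, n_pixels)     # four hypothetical land-cover classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))
```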

This Special Issue targets studies that apply artificial intelligence in any of its subsets (e.g., machine learning, deep learning) to satellite imagery from different sensors and platforms, across a wide range of ecosystems from natural to artificial, such as forests, cropland or urban areas. Topics can range from the enhancement of satellite data using techniques such as super-resolution to the modeling of variables at all levels, from single objects to larger scales. Thus, the integration or fusion of data from multiple satellite sources, multi-scale approaches, land-use change monitoring, studies to identify and monitor ecosystem services, and restoration or desertification, among other topics, are welcome. Papers may address, but are not limited to, the following topics:

  • Multispectral and hyperspectral, active and passive microwave remote sensing data enhancement.
  • Modeling and use of Lidar and laser scanning data.
  • Modeling of local to global scale variables.
  • Land use and land cover change detection.
  • Image processing and pattern recognition.
  • Data fusion and assimilation.
  • Remote sensing applications with artificial intelligence.
  • Applications for ecosystem restoration.
  • Forest ecology.
  • Soil ecology and microbiome with remote sensing data.

Dr. Emilio Guirado
Dr. Javier Blanco-Sacristán
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • generative adversarial networks (GANs)
  • transformers
  • neural networks
  • multi-band and multi-spectral imaging
  • deep learning
  • image classification
  • object detection
  • instance segmentation
  • predictive modelling
  • species distribution models

Published Papers (4 papers)


Research

20 pages, 3125 KiB  
Article
Wavelet Transform Feature Enhancement for Semantic Segmentation of Remote Sensing Images
by Yifan Li, Ziqian Liu, Junli Yang and Haopeng Zhang
Remote Sens. 2023, 15(24), 5644; https://doi.org/10.3390/rs15245644 - 06 Dec 2023
Viewed by 1018
Abstract
With developments in deep learning, semantic segmentation of remote sensing images has made great progress. Currently, mainstream methods are based on convolutional neural networks (CNNs) or vision transformers. However, these methods are not very effective in extracting features from remote sensing images, which are usually of high resolution with plenty of detail. Operations including downsampling will cause the loss of such features. To address this problem, we propose a novel module called Hierarchical Wavelet Feature Enhancement (WFE). The WFE module involves three sequential steps: (1) performing multi-scale decomposition of an input image based on the discrete wavelet transform; (2) enhancing the high-frequency sub-bands of the input image; and (3) feeding them back to the corresponding layers of the network. Our module can be easily integrated into various existing CNNs and transformers, and does not require additional pre-training. We conducted experiments on the ISPRS Potsdam and ISPRS Vaihingen datasets, with results showing that our method improves the benchmarks of CNNs and transformers while performing little additional computation. Full article
(This article belongs to the Special Issue Satellite Remote Sensing with Artificial Intelligence)
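The three steps of the WFE module can be approximated conceptually with PyWavelets: decompose the image, amplify the high-frequency sub-bands, and return one enhancement map per scale. The sketch below is an illustrative reading of the abstract, not the authors' implementation; the wavelet choice, gain factor and energy combination are assumptions.

```python
# Conceptual sketch of hierarchical wavelet feature enhancement
# (illustrative only; the published WFE module feeds these maps into
# CNN/transformer layers rather than returning them directly).
import numpy as np
import pywt

def enhanced_highfreq_pyramid(image: np.ndarray, levels: int = 3, gain: float = 1.5):
    """Multi-scale DWT of a 2-D image; return amplified high-frequency maps per level."""
    coeffs = pywt.wavedec2(image, wavelet="haar", level=levels)
    pyramid = []
    # coeffs[0] is the coarsest approximation; the rest are (cH, cV, cD) tuples
    for cH, cV, cD in coeffs[1:]:
        detail_energy = np.sqrt(cH**2 + cV**2 + cD**2)  # combine the three sub-bands
        pyramid.append(gain * detail_energy)            # step 2: enhance high frequencies
    # step 3 (in the paper): feed each map to the matching network stage
    return pyramid

maps = enhanced_highfreq_pyramid(np.random.rand(256, 256))
print([m.shape for m in maps])   # coarse-to-fine detail maps
```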

21 pages, 7778 KiB  
Article
YOLO-RS: A More Accurate and Faster Object Detection Method for Remote Sensing Images
by Tianyi Xie, Wen Han and Sheng Xu
Remote Sens. 2023, 15(15), 3863; https://doi.org/10.3390/rs15153863 - 03 Aug 2023
Cited by 2 | Viewed by 2303
Abstract
In recent years, object detection based on deep learning has been widely applied and developed. When using object detection methods to process remote sensing images, the trade-off between the speed and accuracy of models is necessary, because remote sensing images pose additional difficulties such as complex backgrounds, small objects, and dense distribution to the detection task. This paper proposes YOLO-RS, an optimized object detection algorithm based on YOLOv4 to address the challenges. The Adaptively Spatial Feature Fusion (ASFF) structure is introduced after the feature enhancement network of YOLOv4. It assigns adaptive weight parameters to fuse multi-scale feature information, improving detection accuracy. Furthermore, optimizations are applied to the Spatial Pyramid Pooling (SPP) structure in YOLOv4. By incorporating residual connections and employing 1 × 1 convolutions after maximum pooling, both computation complexity and detection accuracy are improved. To enhance detection speed, Lightnet is introduced, inspired by Depthwise Separable Convolution for reducing model complexity. Additionally, the loss function in YOLOv4 is optimized by introducing the Intersection over Union loss function. This change replaces the aspect ratio loss term with the edge length loss, enhancing sensitivity to width and height, accelerating model convergence, and improving regression accuracy for detected frames. The mean Average Precision (mAP) values of the YOLO-RS model are 87.73% and 92.81% under the TGRS-HRRSD dataset and RSOD dataset, respectively, which are experimentally verified to be 2.15% and 1.66% higher compared to the original YOLOv4 algorithm. The detection speed reached 43.45 FPS and 43.68 FPS, respectively, with 5.29 Frames Per Second (FPS) and 5.30 FPS improvement. Full article
(This article belongs to the Special Issue Satellite Remote Sensing with Artificial Intelligence)
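One of the ingredients named above, depthwise separable convolution (the idea behind the Lightnet component), can be sketched generically in PyTorch as follows. This is a standard block for reducing model complexity, not the YOLO-RS code itself; channel sizes and the activation choice are assumptions.

```python
# Generic depthwise separable convolution block (the building idea behind
# the Lightnet component described above); a sketch, not the YOLO-RS code.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels, much cheaper than a full 3x3
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 64, 80, 80)                   # e.g. a backbone feature map
print(DepthwiseSeparableConv(64, 128)(x).shape)  # -> torch.Size([1, 128, 80, 80])
```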

22 pages, 10050 KiB  
Article
Machine Learning Classifier Evaluation for Different Input Combinations: A Case Study with Landsat 9 and Sentinel-2 Data
by Prathiba A. Palanisamy, Kamal Jain and Stefania Bonafoni
Remote Sens. 2023, 15(13), 3241; https://doi.org/10.3390/rs15133241 - 23 Jun 2023
Cited by 7 | Viewed by 1769
Abstract
High-resolution multispectral remote sensing images offer valuable information about various land features, providing essential details and spatially accurate representations. In the complex urban environment, classification accuracy is not often adequate using the complete original multispectral bands for practical applications. To improve the classification accuracy of multispectral images, band reduction techniques are used, which can be categorized into feature extraction and feature selection techniques. The present study examined the use of multispectral satellite bands, spectral indices (including Normalized Difference Built-up Index, Normalized Difference Vegetation Index, and Normalized Difference Water Index) for feature extraction, and the principal component analysis technique for feature selection. These methods were analyzed both independently and in combination for the classification of multiple land use and land cover features. The classification was performed for Landsat 9 and Sentinel-2 satellite images in Delhi, India, using six machine learning techniques: Classification and Regression Tree, Minimum Distance, Naive Bayes, Random Forest, Gradient Tree Boosting, and Support Vector Machine on Google Earth Engine platform. The performance of the classifiers was evaluated quantitatively and qualitatively to analyze the classification results with whole image (comprehensive feature) and small subset (targeted feature). The RF and GTB classifiers were found to outperform all others in the quantitative analysis of all input combinations for both Landsat 9 and Sentinel-2 datasets. RF achieved a classification total accuracy of 96.19% for Landsat and 96.95% for Sentinel-2, whereas GTB achieved 91.62% for Landsat and 92.89% for Sentinel-2 in all band combinations. Furthermore, the RF classifier achieved the highest F1 score of 0.97 in both the Landsat and Sentinel datasets. The qualitative analysis revealed that the PCA bands were particularly useful to classifiers in distinguishing even the slightest differences among the feature class. The findings contribute to the understanding of feature extraction and selection techniques for land use and land cover classification, offering insights into their effectiveness in different scenarios. Full article
(This article belongs to the Special Issue Satellite Remote Sensing with Artificial Intelligence)
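The feature-extraction step described above, computing spectral indices from band arrays and stacking them with the original bands as classifier inputs, might look like the NumPy sketch below. The band arrays are random placeholders and the epsilon term is an assumption; the study itself ran on Google Earth Engine with Landsat 9 and Sentinel-2 bands.

```python
# Sketch of the feature-extraction step: spectral indices computed from
# already-loaded band arrays and stacked with the original bands as
# classifier inputs (band arrays here are random placeholders).
import numpy as np

def norm_diff(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return (a - b) / (a + b + 1e-6)        # small epsilon avoids division by zero

rng = np.random.default_rng(0)
red, nir, green, swir = (rng.random((512, 512)) for _ in range(4))

ndvi = norm_diff(nir, red)     # vegetation
ndwi = norm_diff(green, nir)   # water
ndbi = norm_diff(swir, nir)    # built-up

# Stack original bands and indices into an (n_pixels, n_features) matrix
features = np.stack([red, nir, green, swir, ndvi, ndwi, ndbi], axis=-1)
X = features.reshape(-1, features.shape[-1])
print(X.shape)                 # -> (262144, 7)
```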

29 pages, 2389 KiB  
Article
MCANet: A Multi-Branch Network for Cloud/Snow Segmentation in High-Resolution Remote Sensing Images
by Kai Hu, Enwei Zhang, Min Xia, Liguo Weng and Haifeng Lin
Remote Sens. 2023, 15(4), 1055; https://doi.org/10.3390/rs15041055 - 15 Feb 2023
Cited by 20 | Viewed by 3943
Abstract
Because clouds and snow block the underlying surface and interfere with the information extracted from an image, the accurate segmentation of cloud/snow regions is essential for imagery preprocessing for remote sensing. Nearly all remote sensing images have a high resolution and contain complex and diverse content, which makes the task of cloud/snow segmentation more difficult. A multi-branch convolutional attention network (MCANet) is suggested in this study. A double-branch structure is adopted, and the spatial information and semantic information in the image are extracted. In this way, the model’s feature extraction ability is improved. Then, a fusion module is suggested to correctly fuse the feature information gathered from several branches. Finally, to address the issue of information loss in the upsampling process, a new decoder module is constructed by combining convolution with a transformer to enhance the recovery ability of image information; meanwhile, the segmentation boundary is repaired to refine the edge information. This paper conducts experiments on the high-resolution remote sensing image cloud/snow detection dataset (CSWV), and conducts generalization experiments on two publicly available datasets (HRC_WHU and L8 SPARCS), and the self-built cloud and cloud shadow dataset. The MIOU scores on the four datasets are 92.736%, 91.649%, 80.253%, and 94.894%, respectively. The experimental findings demonstrate that whether it is for cloud/snow detection or more complex multi-category detection tasks, the network proposed in this paper can completely restore the target details, and it provides a stronger degree of robustness and superior segmentation capabilities. Full article
(This article belongs to the Special Issue Satellite Remote Sensing with Artificial Intelligence)
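The double-branch idea described above can be sketched generically in PyTorch: a shallow branch preserves spatial detail, a deeper strided branch captures semantic context, and a simple gated fusion blends them. This is a generic pattern illustrating the abstract, not the published MCANet architecture; channel sizes, strides and the gating scheme are assumptions.

```python
# Generic two-branch fusion pattern (spatial-detail branch + semantic
# branch with gated fusion); a sketch, not the MCANet code.
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        # Shallow branch keeps full resolution for spatial detail
        self.spatial = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        # Deeper, strided branch for semantic context
        self.semantic = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=4, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        # Channel-wise gate computed from the concatenated branches
        self.fuse = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        s = self.spatial(x)
        c = self.semantic(x)
        c_up = nn.functional.interpolate(c, size=s.shape[-2:], mode="bilinear",
                                         align_corners=False)
        gate = self.fuse(torch.cat([s, c_up], dim=1))
        return s * gate + c_up * (1 - gate)   # weighted blend of the two branches

out = TwoBranchFusion()(torch.randn(1, 3, 256, 256))
print(out.shape)   # -> torch.Size([1, 64, 256, 256])
```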
