
Pattern Recognition in Remote Sensing II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 30 June 2024

Special Issue Editors


Prof. Dr. Chunlei Huo
Guest Editor
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Interests: pattern recognition; deep learning; remote sensing; image processing

Dr. Zhiqiang Zhou
Guest Editor
School of Automation, Beijing Institute of Technology, Beijing 100081, China
Interests: pattern recognition; image processing; multi-modal information fusion

Dr. Lurui Xia
Guest Editor
School of Space Information, Space Engineering University, Beijing 101416, China
Interests: remote sensing image processing

Dr. Samia Ainouz
Guest Editor
Laboratoire d’Informatique, du Traitement de l’Information et des Systèmes (LITIS), Normandie Université, UNIROUEN, UNIHAVRE, INSA Rouen, 76000 Rouen, France
Interests: pattern recognition; autonomous navigation; information fusion; non-conventional imaging; polarimetric imaging; road scene analysis; obstacle detection; ADAS; intelligent vehicles

Special Issue Information

Dear Colleagues,

Pattern recognition is a powerful tool for remote sensing image analysis. With the development of deep learning, cutting-edge performance has been achieved in several remote sensing applications over the last decade. However, remote sensing still lags behind other domains in this respect. In this context, this Special Issue encourages the submission of papers that offer recent advances and innovative solutions across the broad field of remote sensing image analysis. In particular, submissions on topics including, but not limited to, the following are welcome:

  • New pattern recognition principles and their potential in remote sensing image analysis;
  • Low-level image processing techniques (e.g., denoising, enhancing, deblurring, and rectification);
  • Mid-level image processing techniques (e.g., feature extraction, feature matching, image mosaic, image fusion, super-resolution, salience detection, and change detection);
  • High-level image processing techniques (e.g., object recognition, semantic segmentation, image classification, image captioning, and image understanding);
  • Parallel computing and cloud computing techniques;
  • Light-weight network and embedding design for remote sensing processing;
  • Applications in resource management, disaster monitoring, intelligent agriculture, and smart cities.

Prof. Dr. Chunlei Huo
Dr. Zhiqiang Zhou
Dr. Lurui Xia
Dr. Samia Ainouz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • pattern recognition
  • deep learning
  • remote sensing
  • image processing
  • artificial intelligence

Published Papers (3 papers)

Research

22 pages, 4754 KiB  
Article
A Multi-Modality Fusion and Gated Multi-Filter U-Net for Water Area Segmentation in Remote Sensing
by Rongfang Wang, Chenchen Zhang, Chao Chen, Hongxia Hao, Weibin Li and Licheng Jiao
Remote Sens. 2024, 16(2), 419; https://doi.org/10.3390/rs16020419 - 21 Jan 2024
Abstract
Water area segmentation in remote sensing is of great importance for flood monitoring. To overcome some challenges in this task, we construct the Water Index and Polarization Information (WIPI) multi-modality dataset and propose a multi-Modality Fusion and Gated multi-Filter U-Net (MFGF-UNet) convolutional neural network. The WIPI dataset can enhance the water information while reducing the data dimensionality: specifically, the Cloud-Free Label provided in the dataset can effectively alleviate the problem of labeled sample scarcity. Since a single form or uniform kernel size cannot handle the variety of sizes and shapes of water bodies, we propose the Gated Multi-Filter Inception (GMF-Inception) module in our MFGF-UNet. Moreover, we utilize an attention mechanism by introducing a Gated Channel Transform (GCT) skip connection and integrating GCT into GMF-Inception to further improve model performance. Extensive experiments on three benchmarks, including the WIPI, Chengdu and GF2020 datasets, demonstrate that our method achieves favorable performance with lower complexity and better robustness against six competing approaches. For example, on the WIPI, Chengdu and GF2020 datasets, the proposed MFGF-UNet model achieves F1 scores of 0.9191, 0.7410 and 0.8421, respectively, with the average F1 score on the three datasets 0.0045 higher than that of the U-Net model; likewise, GFLOPS were reduced by 62% on average. The new WIPI dataset, the code and the trained models have been released on GitHub.
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing II)
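
As a brief illustration of the gated multi-filter idea described in this abstract, the following PyTorch sketch runs parallel convolutions with different kernel sizes and mixes them with learned gates. It is a minimal, hypothetical example in the spirit of the GMF-Inception module, not the authors' released MFGF-UNet code; the class name, gating scheme, and hyperparameters are assumptions.

    # Minimal, hypothetical sketch of a gated multi-filter block in the spirit
    # of GMF-Inception; not the released MFGF-UNet code.
    import torch
    import torch.nn as nn

    class GatedMultiFilterBlock(nn.Module):
        def __init__(self, in_ch: int, out_ch: int, kernel_sizes=(1, 3, 5, 7)):
            super().__init__()
            # Parallel convolutions with different receptive fields, so water
            # bodies of varying size and shape can all be covered.
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
            )
            # One learned gate per branch, normalised with softmax, deciding
            # how much each filter size contributes to the fused output.
            self.gate_logits = nn.Parameter(torch.zeros(len(kernel_sizes)))
            self.act = nn.ReLU(inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            gates = torch.softmax(self.gate_logits, dim=0)
            out = sum(g * branch(x) for g, branch in zip(gates, self.branches))
            return self.act(out)

    if __name__ == "__main__":
        block = GatedMultiFilterBlock(in_ch=8, out_ch=16)
        y = block(torch.randn(2, 8, 64, 64))  # (batch, channels, height, width)
        print(y.shape)                        # torch.Size([2, 16, 64, 64])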

23 pages, 18133 KiB  
Article
NMS-Free Oriented Object Detection Based on Channel Expansion and Dynamic Label Assignment in UAV Aerial Images
by Yunpeng Dong, Xiaozhu Xie, Zhe An, Zhiyu Qu, Lingjuan Miao and Zhiqiang Zhou
Remote Sens. 2023, 15(21), 5079; https://doi.org/10.3390/rs15215079 - 24 Oct 2023
Abstract
Object detection in unmanned aerial vehicle (UAV) aerial images has received extensive attention in recent years. The current mainstream oriented object detection methods for aerial images often suffer from complex network structures, slow inference speeds, and difficulties in deployment. In this paper, we propose a fast and easy-to-deploy oriented detector for UAV aerial images. First, we design a re-parameterization channel expansion network (RE-Net), which enhances the feature representation capabilities of the network based on the channel expansion structure and efficient layer aggregation network structure. During inference, RE-Net can be equivalently converted to a more streamlined structure, reducing parameters and computational costs. Next, we propose DynamicOTA to adjust the sampling area and the number of positive samples dynamically, which solves the problem of insufficient positive samples in the early stages of training. DynamicOTA improves detector performance and facilitates training convergence. Finally, we introduce a sample selection module (SSM) to achieve NMS-free object detection, simplifying the deployment of our detector on embedded devices. Extensive experiments on the DOTA and HRSC2016 datasets demonstrate the superiority of the proposed approach.
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing II)
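
The structural re-parameterization idea behind RE-Net (train with parallel branches, then fold them into a single convolution at inference) can be illustrated with the following hedged PyTorch sketch. This is a generic two-branch example (a 3x3 plus a 1x1 convolution), not the authors' RE-Net implementation; all names and shapes are assumptions.

    # Generic, hypothetical sketch of structural re-parameterization: a 3x3
    # branch and a 1x1 branch used during training are fused into a single
    # 3x3 convolution for inference. Not the RE-Net implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RepBlock(nn.Module):
        def __init__(self, ch: int):
            super().__init__()
            self.conv3 = nn.Conv2d(ch, ch, 3, padding=1, bias=True)
            self.conv1 = nn.Conv2d(ch, ch, 1, bias=True)

        def forward(self, x):
            # Training-time forward: the two parallel branches are summed.
            return self.conv3(x) + self.conv1(x)

        def fuse(self) -> nn.Conv2d:
            # Fold both branches into one 3x3 conv: pad the 1x1 kernel to 3x3
            # and add the weights; the biases simply add as well.
            fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels,
                              3, padding=1, bias=True)
            fused.weight.data = self.conv3.weight.data + F.pad(self.conv1.weight.data, [1, 1, 1, 1])
            fused.bias.data = self.conv3.bias.data + self.conv1.bias.data
            return fused

    if __name__ == "__main__":
        x = torch.randn(1, 4, 32, 32)
        block = RepBlock(4).eval()
        # The fused convolution produces (numerically) the same output.
        print(torch.allclose(block(x), block.fuse()(x), atol=1e-5))  # True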

Other

15 pages, 6520 KiB  
Technical Note
Sensing and Navigation for Multiple Mobile Robots Based on Deep Q-Network
by Yanyan Dai, Seokho Yang and Kidong Lee
Remote Sens. 2023, 15(19), 4757; https://doi.org/10.3390/rs15194757 - 28 Sep 2023
Cited by 1
Abstract
In this paper, a novel DRL algorithm based on a DQN is proposed for multiple mobile robots to find optimized paths. The multiple robots’ states are the inputs of the DQN. The DQN estimates the Q-value of the agents’ actions. After selecting the action with the maximum Q-value, the multiple robots’ actions are calculated and sent to them. Then, the robots will explore the area and detect the obstacles. In the area, there are static obstacles. The robots should detect the static obstacles using a LiDAR sensor. The other moving robots are recognized as dynamic obstacles that need to be avoided. The robots will give feedback on the reward and the robots’ new states. A positive reward will be given when a robot successfully arrives at its goal point. If it is in a free space, zero reward will be given. If the robot collides with a static obstacle or other robots or reaches its start point, it will receive a negative reward. Multiple robots explore safe paths to the goals at the same time, in order to improve learning efficiency. If a robot collides with an obstacle or other robots, it will stop and wait for the other robots to complete their exploration tasks. The episode will end when all robots find safe paths to reach their goals or when all of them have collisions. This collaborative behavior can reduce the risk of collisions between robots, enhance overall efficiency, and help avoid multiple robots attempting to navigate through the same unsafe path simultaneously. Moreover, storage space is used to store the optimal safe paths of all robots. Finally, the multi-robots will learn the policy to find the optimized paths to go to the goal points. The goal of the simulations and experiment is to make multiple robots efficiently and safely move to their goal points.
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing II)
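
The reward scheme stated in this abstract (positive at the goal, negative on collision or on returning to the start, zero otherwise) can be summarized in a few lines of Python. The sketch below is a hedged illustration; the numeric reward values and the function name are assumptions, not taken from the paper.

    # Hedged sketch of the per-step reward described above; the numeric values
    # (+1.0 / -1.0 / 0.0) and the function name are assumptions.
    def step_reward(reached_goal: bool, collided: bool, back_at_start: bool) -> float:
        if reached_goal:
            return 1.0    # positive reward for arriving at the goal point
        if collided or back_at_start:
            return -1.0   # negative reward for hitting a static obstacle or
                          # another robot, or for returning to the start point
        return 0.0        # free space: zero reward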
