
Remote Sensing Target Recognition and Detection: Theory and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 2549

Special Issue Editors

Guest Editor
School of Electronic Engineering, Key Laboratory of Collaborative Intelligence Systems of Ministry of Education, Xidian University, Xi’an 710071, China
Interests: computational intelligence; evolutionary computation; neural networks; multi-objective optimization; remote sensing image interpretation

Guest Editor
Key Laboratory of Collaborative Intelligence Systems of Ministry of Education, School of Electronic Engineering, Xidian University, Xi’an 710071, China
Interests: artificial intelligence (in particular, machine learning, multiagent systems and their applications) and formal methods (in particular, machine-learning-based model checking)

Guest Editor
Department of Software Engineering and Artificial Intelligence, Faculty of Informatics, Complutense University of Madrid, 28040 Madrid, Spain
Interests: computer vision; image processing; pattern recognition; 3D image reconstruction; spatio-temporal image change detection and tracking; fusion and registration of imaging sensors; super-resolution from low-resolution image sensors

Special Issue Information

Dear Colleagues,

Target recognition and detection is a multidisciplinary field that involves a variety of sensors, such as synthetic aperture radar (SAR), inverse synthetic aperture radar (ISAR), side-scan sonar, multispectral/hyperspectral sensors and others. Target recognition and detection searches an image for regions containing targets of interest and determines their category and location: typically, each target of interest is localized with a rectangular bounding box and assigned a category label. As important steps in image processing and further analysis, improvements in target recognition and detection techniques are urgently needed to achieve higher performance in various tasks. Although deep learning has achieved unprecedented success in the field, open application issues remain that must be comprehensively addressed.

This Special Issue aims to gather papers presenting recent advances in Target Recognition and Detection with novel and impactful applications. Topics of interest include, but are not limited to, the following:

  • Machine learning for target recognition and detection;
  • Theory of multi-objective/multi-task optimization and learning;
  • Change detection and classification in remote sensing;
  • Remote sensing image object detection, segmentation and categorization;
  • Underwater target recognition and detection;
  • Ocean acoustic remote sensing;
  • Radar high-speed target detection, tracking, imaging and recognition;
  • Computational electromagnetics and scattering measurement theory;
  • Sensor signal detection, identification and categorization.

Research articles, review articles and short communications are welcome for submission.

Dr. Hao Li
Dr. Mingyang Zhang
Prof. Dr. Gonzalo Pajares Martinsanz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • target recognition
  • target detection
  • deep learning
  • neural networks
  • image classification
  • image processing

Published Papers (3 papers)


Research

21 pages, 7718 KiB  
Article
Planar Reconstruction of Indoor Scenes from Sparse Views and Relative Camera Poses
by Fangli Guan, Jiakang Liu, Jianhui Zhang, Liqi Yan and Ling Jiang
Remote Sens. 2024, 16(9), 1616; https://doi.org/10.3390/rs16091616 - 30 Apr 2024
Viewed by 536
Abstract
Planar reconstruction detects planar segments and deduces their 3D planar parameters (normals and offsets) from the input image; this has significant potential in the fields of digital preservation of cultural heritage, architectural design, robot navigation, intelligent transportation, and security monitoring. Existing methods mainly employ multiple-view images with limited overlap for reconstruction but lack the utilization of the relative position and rotation information between the images. To fill this gap, this paper uses two views and their relative camera pose to reconstruct indoor scene planar surfaces. Firstly, we detect plane segments with their 3D planar parameters and appearance embedding features using PlaneRCNN. Then, we transform the plane segments into a global coordinate frame using the relative camera transformation and find matched planes using the assignment algorithm. Finally, matched planes are merged by tackling a nonlinear optimization problem with a trust-region reflective minimizer. An experiment on the Matterport3D dataset demonstrates that the proposed method achieves 40.67% average precision of plane reconstruction, which is an improvement of roughly 3% over Sparse Planes, and it improves the IPAA-80 metric by 10% to 65.7%. This study can provide methodological support for 3D sensing and scene reconstruction in sparse view contexts.
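The matching-and-merging pipeline described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: the 4-vector plane parameterization [nx, ny, nz, offset], the cost function, and the function names are all assumptions; SciPy's `linear_sum_assignment` stands in for "the assignment algorithm" and `least_squares` with `method="trf"` for the trust-region reflective minimizer.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment, least_squares

def match_planes(params_a, params_b):
    """Match planes across two views by parameter distance (Hungarian assignment).

    params_a, params_b: (N, 4) arrays of [nx, ny, nz, offset],
    already expressed in a shared global frame.
    """
    cost = np.linalg.norm(params_a[:, None, :] - params_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

def merge_planes(p_a, p_b):
    """Fuse two matched plane estimates via a trust-region reflective solve."""
    def residual(p):
        # Stack the deviations from both observations; the least-squares
        # minimizer of this residual is a compromise between p_a and p_b.
        return np.concatenate([p - p_a, p - p_b])
    result = least_squares(residual, x0=(p_a + p_b) / 2.0, method="trf")
    return result.x
```

With this toy quadratic residual the merge reduces to the average of the two estimates; the paper's actual objective would encode geometric consistency between the views.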

19 pages, 6634 KiB  
Article
A Lightweight Remote Sensing Aircraft Object Detection Network Based on Improved YOLOv5n
by Jiale Wang, Zhe Bai, Ximing Zhang and Yuehong Qiu
Remote Sens. 2024, 16(5), 857; https://doi.org/10.3390/rs16050857 - 29 Feb 2024
Viewed by 894
Abstract
Deep-learning-based remote sensing object detection algorithms typically have large parameter counts, large model sizes, and high computational requirements, making them challenging to deploy on small mobile devices. This paper proposes an extremely lightweight remote sensing aircraft object detection network based on the improved YOLOv5n. This network combines Shufflenet v2 and YOLOv5n, significantly reducing the network size while ensuring high detection accuracy. It substitutes the original CIoU and convolution with EIoU and deformable convolution, optimizing for the small-scale characteristics of aircraft objects and further accelerating convergence and improving regression accuracy. Additionally, a coordinate attention (CA) mechanism is introduced at the end of the backbone to focus on orientation perception and positional information. We conducted a series of experiments, comparing our method with networks like GhostNet, PP-LCNet, MobileNetV3, and MobileNetV3s, and performed detailed ablation studies. The experimental results on the Mar20 public dataset indicate that, compared to the original YOLOv5n network, our lightweight network has only about one-fifth of its parameter count, with only a slight decrease of 2.7% in mAP@0.5. At the same time, compared with other lightweight networks of the same magnitude, our network achieves an effective balance between detection accuracy and resource consumption such as memory and computing power, providing a novel solution for the implementation and hardware deployment of lightweight remote sensing object detection networks.
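The EIoU loss mentioned in the abstract extends IoU with separate penalties for center distance, width difference, and height difference, each normalized by the smallest enclosing box. A minimal, framework-free sketch of EIoU as commonly defined (the function name and the [x1, y1, x2, y2] box format are assumptions, not taken from the paper):

```python
def eiou_loss(box_p, box_g, eps=1e-7):
    """EIoU loss between a predicted and a ground-truth axis-aligned box.

    Boxes are [x1, y1, x2, y2]. Returns 1 - IoU plus center-distance,
    width, and height penalties normalized by the enclosing box.
    """
    # Intersection area
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    # IoU from widths/heights
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    iou = inter / (wp * hp + wg * hg - inter + eps)
    # Smallest enclosing box dimensions
    cw = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    ch = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])
    # Squared center distance
    cxp, cyp = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cxg, cyg = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    rho2 = (cxp - cxg) ** 2 + (cyp - cyg) ** 2
    return (1 - iou
            + rho2 / (cw ** 2 + ch ** 2 + eps)   # center-distance penalty
            + (wp - wg) ** 2 / (cw ** 2 + eps)   # width penalty
            + (hp - hg) ** 2 / (ch ** 2 + eps))  # height penalty
```

Unlike CIoU's coupled aspect-ratio term, the separate width and height penalties give non-vanishing gradients even when the aspect ratios already match, which is the convergence benefit the abstract alludes to.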

21 pages, 1046 KiB  
Article
Transformer-Based Feature Compensation Network for Aerial Photography Person and Ground Object Recognition
by Guoqing Zhang, Chen Zheng and Zhonglin Ye
Remote Sens. 2024, 16(2), 268; https://doi.org/10.3390/rs16020268 - 10 Jan 2024
Viewed by 607
Abstract
Visible-infrared person re-identification (VI-ReID) aims at matching pedestrian images with the same identity between different modalities. Existing methods ignore the problems of detailed information loss and the difficulty in capturing global features during the feature extraction process. To solve these issues, we propose a Transformer-based Feature Compensation Network (TFCNet). Firstly, we design a Hierarchical Feature Aggregation (HFA) module, which recursively aggregates the hierarchical features to help the model preserve detailed information. Secondly, we design the Global Feature Compensation (GFC) module, which exploits the Transformer’s ability to capture long-range dependencies in sequences to extract global features. Extensive results show that the rank-1/mAP of our method on the SYSU-MM01 and RegDB datasets reaches 60.87%/58.87% and 91.02%/75.06%, respectively, outperforming most existing methods. Meanwhile, to demonstrate our method’s transferability, we also conduct related experiments on two aerial photography datasets.
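The long-range dependency capture that the GFC module exploits comes from the Transformer's core operation, scaled dot-product attention, in which every position attends to every other position in the sequence. A minimal single-head NumPy sketch of that operation (an illustration of the general mechanism, not the authors' implementation):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Single-head scaled dot-product attention.

    q: (L, d_k) queries; k: (L, d_k) keys; v: (L, d_v) values.
    Returns the globally mixed features (L, d_v) and the (L, L)
    attention weights, where row i says how much position i draws
    from every other position -- the long-range dependency part.
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                       # (L, L) affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ v, weights
```

Because the (L, L) weight matrix is dense, the first and last tokens interact in a single layer, whereas a convolutional backbone would need many stacked layers to achieve the same receptive field.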
