
Information Extraction, Processing and Analysis Methods for Remote Sensing Multi-Modal Information Navigation Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (16 December 2023) | Viewed by 18532

Special Issue Editors

School of Computing, National University of Singapore, Singapore 118404, Singapore
Interests: object retrieval; 2D/3D generation
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
Interests: interpretable artificial intelligence; multispectral LiDAR point cloud classification; semantic segmentation and target detection of RGB-T images; hyperspectral image and LiDAR data interpretation
College of Underwater Acoustic Engineering, Harbin Engineering University, Harbin 150001, China
Interests: image processing; deep learning; underwater imaging and remote sensing

Special Issue Information

Dear Colleagues,

With the rapid development of remote sensors, multi-modal information navigation has attracted considerable interest. In both urban and wild areas, multi-modal information (optical and thermal-infrared images and videos, SAR and hyperspectral images, LiDAR, etc.) plays an important role in many navigation applications, such as search and rescue in the wild, target positioning and orientation, autonomous drone navigation, disaster assessment, garbage removal, and scene change analysis for environmental protection. Moreover, rapid advances in deep-learning methods for information extraction, processing, and analysis have promoted the application of the associated algorithms and techniques to problems in many related fields, such as target detection and tracking, image segmentation, and image matching. This Special Issue aims to report the latest advances and trends in multi-modal remote sensing information processing methods for aircraft navigation applications. We welcome submissions addressing both theoretical methods and applied techniques, as well as contributions presenting new methodologies for relevant remote sensing scenarios. We look forward to receiving your contributions.

Dr. Yiming Yan
Dr. Zhedong Zheng
Dr. Qingwang Wang
Prof. Dr. Suleman Mazhar
Dr. Nan Su
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • complex morphological object positioning and extraction
  • cross-domain/cross-dimensional object/scene image matching
  • six-dimensional object pose estimation in remote sensing images
  • target detection, tracking and control for UAV platform
  • building and road extraction in multi-modal remote sensing data
  • scene change detection and analysis

Published Papers (12 papers)


Research

22 pages, 24057 KiB  
Article
UniRender: Reconstructing 3D Surfaces from Aerial Images with a Unified Rendering Scheme
by Yiming Yan, Weikun Zhou, Nan Su and Chi Zhang
Remote Sens. 2023, 15(18), 4634; https://doi.org/10.3390/rs15184634 - 21 Sep 2023
Viewed by 873
Abstract
While recent advances in the field of neural rendering have shown impressive 3D reconstruction performance, it is still a challenge to accurately capture the appearance and geometry of a scene by using neural rendering, especially for remote sensing scenes. This is because both rendering methods, i.e., surface rendering and volume rendering, have their own limitations. Furthermore, when neural rendering is applied to remote sensing scenes, the view sparsity and content complexity that characterize these scenes will severely hinder its performance. In this work, we aim to address these challenges and to make neural rendering techniques available for 3D reconstruction in remote sensing environments. To achieve this, we propose a novel 3D surface reconstruction method called UniRender. UniRender offers three improvements in locating an accurate 3D surface by using neural rendering: (1) unifying surface and volume rendering by employing their strengths while discarding their weaknesses, which enables accurate 3D surface position localization in a coarse-to-fine manner; (2) incorporating photometric consistency constraints during rendering, and utilizing the points reconstructed by structure from motion (SFM) or multi-view stereo (MVS), to constrain reconstructed surfaces, which significantly improves the accuracy of 3D reconstruction; (3) improving the sampling strategy by locating sampling points in the foreground regions where the surface needs to be reconstructed, thus obtaining better detail in the reconstruction results. Extensive experiments demonstrate that UniRender can reconstruct high-quality 3D surfaces in various remote sensing scenes. Full article
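
As a rough illustration of how a unified rendering scheme can locate a surface, the sketch below converts signed-distance values along a ray into volume-rendering weights that peak at the zero-crossing (a NeuS-style formulation). It is a minimal, self-contained example with assumed names and parameters, not the UniRender implementation.

```python
# Illustrative sketch (not the authors' code): turning SDF samples along a ray
# into alpha-compositing weights, in the spirit of unified surface/volume rendering.
import numpy as np

def sdf_to_weights(sdf_along_ray: np.ndarray, inv_s: float = 8.0) -> np.ndarray:
    """sdf_along_ray: (N,) signed distances at N ordered ray samples.
    inv_s: sharpness of the logistic CDF; larger values concentrate the
    weights more tightly around the zero-crossing (the surface)."""
    # Logistic CDF of the SDF: ~1 outside the surface, ~0 inside.
    cdf = 1.0 / (1.0 + np.exp(-inv_s * sdf_along_ray))
    # Discrete opacity between consecutive samples, clamped to [0, 1].
    alpha = np.clip((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-8), 0.0, 1.0)
    # Standard alpha compositing: transmittance times opacity.
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return transmittance * alpha  # (N-1,) weights, peaked near the surface

# Toy usage: a ray crossing a surface halfway along its 10 samples.
sdf = np.linspace(2.0, -2.0, 10)
w = sdf_to_weights(sdf)
print(np.round(w, 3), int(w.argmax()))  # weights peak at the sign change
```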

24 pages, 2603 KiB  
Article
An Underwater Side-Scan Sonar Transfer Recognition Method Based on Crossed Point-to-Point Second-Order Self-Attention Mechanism
by Jian Wang, Haisen Li, Chao Dong, Jing Wang, Bing Zheng and Tianyao Xing
Remote Sens. 2023, 15(18), 4517; https://doi.org/10.3390/rs15184517 - 14 Sep 2023
Viewed by 738
Abstract
Recognizing targets through side-scan sonar (SSS) data by deep learning-based techniques has been particularly challenging. The primary challenge stems from the difficulty and time consumption associated with underwater acoustic data acquisition, which demands systematic explorations to obtain sufficient training samples for accurate deep learning-based models. Moreover, if the sample size of the available data is small, the design of effective target recognition models becomes complex. These challenges have posed significant obstacles to developing accurate SSS-based target recognition methods via deep learning models. However, utilizing multi-modal datasets to enhance the recognition performance of sonar images through knowledge transfer in deep networks appears promising. Owing to the unique statistical properties of various modal images, transitioning between different modalities can significantly increase the complexity of network training. This issue remains unresolved, directly impacting the target transfer recognition performance. To enhance the precision of categorizing underwater sonar images when faced with a limited number of mode types and data samples, this study introduces a crossed point-to-point second-order self-attention (PPCSSA) method based on double-mode sample transfer recognition. In the PPCSSA method, first-order importance features are derived by extracting key horizontal and longitudinal point-to-point features. Based on these features, the self-supervised attention strategy effectively removes redundant features, securing the second-order significant features of SSS images. This strategy introduces a potent low-mode-type small-sample learning method for transfer learning. Classification experiment results indicate that the proposed method excels in extracting key features with minimal training complexity. Moreover, experimental outcomes underscore that the proposed technique enhances recognition stability and accuracy, achieving a remarkable overall accuracy rate of 99.28%. Finally, the proposed method maintains high recognition accuracy even in noisy environments. Full article
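
The following sketch shows one plausible reading of "crossed point-to-point" attention: first-order attention computed separately along the horizontal and vertical axes of a sonar feature map, then crossed to retain features salient along both axes. All shapes and names are assumptions; the published PPCSSA module is defined in the paper, not here.

```python
# Speculative sketch of crossed row/column self-attention over an SSS feature map.
import torch
import torch.nn.functional as F

def crossed_axis_attention(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) feature map from a side-scan sonar image encoder."""
    b, c, h, w = feat.shape
    # Horizontal (row-wise) attention: each row attends over its W positions.
    rows = feat.permute(0, 2, 3, 1).reshape(b * h, w, c)            # (B*H, W, C)
    row_att = F.softmax(rows @ rows.transpose(1, 2) / c ** 0.5, dim=-1)
    rows = (row_att @ rows).reshape(b, h, w, c)
    # Vertical (column-wise) attention: each column attends over its H positions.
    cols = feat.permute(0, 3, 2, 1).reshape(b * w, h, c)            # (B*W, H, C)
    col_att = F.softmax(cols @ cols.transpose(1, 2) / c ** 0.5, dim=-1)
    cols = (col_att @ cols).reshape(b, w, h, c).permute(0, 2, 1, 3)
    # "Cross" the two first-order results to keep features salient along both
    # axes (a second-order interaction), then add a residual connection.
    crossed = rows * cols                                           # (B, H, W, C)
    return feat + crossed.permute(0, 3, 1, 2)

out = crossed_axis_attention(torch.randn(2, 16, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```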

21 pages, 45359 KiB  
Article
Deep Spatial Graph Convolution Network with Adaptive Spectral Aggregated Residuals for Multispectral Point Cloud Classification
by Qingwang Wang, Zifeng Zhang, Xueqian Chen, Zhifeng Wang, Jian Song and Tao Shen
Remote Sens. 2023, 15(18), 4417; https://doi.org/10.3390/rs15184417 - 07 Sep 2023
Cited by 1 | Viewed by 792
Abstract
Over an extended period, considerable research has focused on elaborated mapping in navigation systems. Multispectral point clouds containing both spatial and spectral information play a crucial role in remote sensing by enabling more accurate land cover classification and the creation of more accurate maps. However, existing graph-based methods often overlook the individual characteristics and information patterns in these graphs, leading to a convoluted pattern of information aggregation and a failure to fully exploit the spatial–spectral information to classify multispectral point clouds. To address these limitations, this paper proposes a deep spatial graph convolution network with adaptive spectral aggregated residuals (DSGCN-ASR). Specifically, the proposed DSGCN-ASR employs spatial graphs for deep convolution, using spectral graph aggregated information as residuals. This method effectively overcomes the limitations of shallow networks in capturing the nonlinear characteristics of multispectral point clouds. Furthermore, the incorporation of adaptive residual weights enhances the use of spatial–spectral information, resulting in improved overall model performance. Experimental validation was conducted on two datasets containing real scenes, comparing the proposed DSGCN-ASR with several state-of-the-art graph-based methods. The results demonstrate that DSGCN-ASR better uses the spatial–spectral information and produces superior classification results. This study provides new insights and ideas for the joint use of spatial and spectral information in the context of multispectral point clouds. Full article
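
A minimal sketch of the core layer idea, assuming precomputed spatial and spectral adjacency matrices: deep convolution over the spatial graph, with spectral-graph aggregation added as a residual scaled by a learnable weight. Graph construction and network depth are placeholders, not the released DSGCN-ASR configuration.

```python
# Sketch of one spatial-graph convolution layer with an adaptive spectral residual.
import torch
import torch.nn as nn

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    adj = adj + torch.eye(adj.shape[0])
    d_inv_sqrt = adj.sum(1).clamp(min=1e-8).pow(-0.5)
    return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

class SpatialGCNWithSpectralResidual(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.spatial_fc = nn.Linear(in_dim, out_dim)
        self.spectral_fc = nn.Linear(in_dim, out_dim)
        # Adaptive residual weight, learned jointly with the network.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x, adj_spatial, adj_spectral):
        """x: (N, in_dim) per-point features; adjacency matrices: (N, N)."""
        spatial = torch.relu(normalize_adj(adj_spatial) @ self.spatial_fc(x))
        spectral = normalize_adj(adj_spectral) @ self.spectral_fc(x)
        return spatial + self.alpha * spectral   # spectral aggregation as residual

# Toy usage with random graphs over 100 multispectral points.
x = torch.randn(100, 8)
a_sp = (torch.rand(100, 100) > 0.9).float(); a_sp = (a_sp + a_sp.T).clamp(max=1)
a_spec = (torch.rand(100, 100) > 0.9).float(); a_spec = (a_spec + a_spec.T).clamp(max=1)
layer = SpatialGCNWithSpectralResidual(8, 16)
print(layer(x, a_sp, a_spec).shape)  # torch.Size([100, 16])
```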

19 pages, 26887 KiB  
Article
Spectral Swin Transformer Network for Hyperspectral Image Classification
by Baisen Liu, Yuanjia Liu, Wulin Zhang, Yiran Tian and Weili Kong
Remote Sens. 2023, 15(15), 3721; https://doi.org/10.3390/rs15153721 - 26 Jul 2023
Cited by 2 | Viewed by 1753
Abstract
Hyperspectral images are complex images that contain more spectral dimension information than ordinary images. An increasing number of HSI classification methods are using deep learning techniques to process three-dimensional data. The Vision Transformer model is gradually occupying an important position in the field of computer vision and is being used to replace the CNN structure of the network. However, it is still in the preliminary research stage in the field of HSI. In this paper, we propose using a spectral Swin Transformer network for HSI classification, providing a new approach for the HSI field. The Swin Transformer uses group attention to enhance feature representation, and the sliding window attention calculation can take into account the contextual information of different windows, which can retain the global features of HSI and improve classification results. In our experiments, we evaluated our proposed approach on several public hyperspectral datasets and compared it with several methods. The experimental results demonstrate that our proposed model achieved test accuracies of 97.46%, 99.7%, and 99.8% on the IP, SA, and PU public HSI datasets, respectively, when using the AdamW optimizer. Our approach also shows good generalization ability when applied to new datasets. Overall, our proposed approach represents a promising direction for hyperspectral image classification using deep learning techniques. Full article
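
As an illustration of window attention applied along the spectral dimension, the sketch below partitions a hyperspectral token sequence into fixed windows, attends within each window, and optionally shifts the windows Swin-style. Window size, shift, and embedding dimensions are assumptions rather than the paper's settings.

```python
# Sketch of windowed self-attention over spectral tokens (assumed design).
import torch
import torch.nn as nn

class SpectralWindowAttention(nn.Module):
    def __init__(self, dim: int, window: int = 8, heads: int = 2, shift: int = 0):
        super().__init__()
        self.window, self.shift = window, shift
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (B, S, dim), one token per spectral band group (S divisible by window)."""
        b, s, d = x.shape
        shifted = torch.roll(x, -self.shift, dims=1) if self.shift else x
        xw = shifted.reshape(b * (s // self.window), self.window, d)
        out, _ = self.attn(xw, xw, xw)            # attention within each spectral window
        out = out.reshape(b, s, d)
        if self.shift:                            # undo the shift before the residual
            out = torch.roll(out, self.shift, dims=1)
        return x + out                            # residual connection

# Two stacked blocks: plain windows, then shifted windows (Swin-style).
tokens = torch.randn(4, 64, 32)                   # 64 spectral tokens of dim 32
block1 = SpectralWindowAttention(32, window=8, shift=0)
block2 = SpectralWindowAttention(32, window=8, shift=4)
print(block2(block1(tokens)).shape)               # torch.Size([4, 64, 32])
```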

26 pages, 11502 KiB  
Article
A Global Structure and Adaptive Weight Aware ICP Algorithm for Image Registration
by Lin Cao, Shengbin Zhuang, Shu Tian, Zongmin Zhao, Chong Fu, Yanan Guo and Dongfeng Wang
Remote Sens. 2023, 15(12), 3185; https://doi.org/10.3390/rs15123185 - 19 Jun 2023
Cited by 4 | Viewed by 1554
Abstract
As an important technology in 3D vision, point-cloud registration has broad development prospects in the fields of space-based remote sensing, photogrammetry, robotics, and so on. Of the available algorithms, the Iterative Closest Point (ICP) algorithm has been used as the classic algorithm for solving point cloud registration. However, when the point cloud data are affected by noise, outliers, overlapping values, and other issues, the performance of the ICP algorithm degrades to varying degrees. This paper proposes a global structure and adaptive weight aware ICP algorithm (GSAW-ICP) for image registration. Specifically, we first proposed a global structure mathematical model based on the reconstruction of local surfaces using both the rotation of normal vectors and the change in curvature, so as to better describe the deformation of the object. The model was optimized for the convergence strategy, so that it had a wider convergence domain and a better convergence effect than either of the original point-to-point or point-to-plane constrained models. Secondly, for outliers and overlapping values, the GSAW-ICP algorithm was able to assign appropriate weights, so as to optimize both the noise and outlier interference of the overall system. Our proposed algorithm was extensively tested on noisy, anomalous, and real datasets, and the proposed method was proven to have a better performance than other state-of-the-art algorithms. Full article
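
For readers unfamiliar with weighted ICP, the sketch below shows one iteration with a simple residual-based adaptive weight that down-weights likely outliers before solving for the rigid transform via SVD. It only illustrates the general weighting idea; GSAW-ICP's global-structure model and weighting scheme are considerably more elaborate.

```python
# Minimal weighted-ICP iteration (illustrative, not GSAW-ICP itself).
import numpy as np
from scipy.spatial import cKDTree

def weighted_icp_step(src: np.ndarray, dst: np.ndarray, sigma: float = 1.0):
    """One iteration: match src->dst by nearest neighbour, weight pairs, solve R, t."""
    dists, idx = cKDTree(dst).query(src)               # correspondences
    weights = np.exp(-(dists / sigma) ** 2)            # adaptive weights: soft outlier rejection
    w = weights / weights.sum()
    src_c = src - (w[:, None] * src).sum(0)            # remove weighted centroids
    dst_m = dst[idx]
    dst_c = dst_m - (w[:, None] * dst_m).sum(0)
    H = (w[:, None] * src_c).T @ dst_c                 # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) # avoid reflections
    R = Vt.T @ D @ U.T
    t = (w[:, None] * (dst_m - src @ R.T)).sum(0)
    return R, t

# Toy usage: recover a small known rotation and translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(500, 3))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.2, -0.1, 0.05])
for _ in range(10):
    R, t = weighted_icp_step(src, dst)
    src = src @ R.T + t
print(np.round(t, 4))  # the incremental translation shrinks toward zero as it converges
```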

19 pages, 3011 KiB  
Article
A Novel Object-Level Building-Matching Method across 2D Images and 3D Point Clouds Based on the Signed Distance Descriptor (SDD)
by Chunhui Zhao, Wenxuan Wang, Yiming Yan, Nan Su, Shou Feng, Wei Hou and Qingyu Xia
Remote Sens. 2023, 15(12), 2974; https://doi.org/10.3390/rs15122974 - 07 Jun 2023
Viewed by 1148
Abstract
In this work, a novel object-level building-matching method using cross-dimensional data, including 2D images and 3D point clouds, is proposed. The core of this method is a newly proposed plug-and-play Joint Descriptor Extraction Module (JDEM) that is used to extract descriptors containing buildings’ three-dimensional shape information from object-level remote sensing data of different dimensions for matching. The descriptor is named Signed Distance Descriptor (SDD). Due to differences in the inherent properties of different dimensional data, it is challenging to match buildings’ 2D images and 3D point clouds on the object level. In addition, features extracted from the same building in images taken at different angles are usually not exactly identical, which will also affect the accuracy of cross-dimensional matching. Therefore, the question of how to extract accurate, effective, and robust joint descriptors is key to cross-dimensional matching. Our JDEM maps different dimensions of data to the same 3D descriptor SDD space through the 3D geometric invariance of buildings. In addition, Multi-View Adaptive Loss (MAL), proposed in this paper, aims to improve the adaptability of the image encoder module to images with different angles and enhance the robustness of the joint descriptors. Moreover, a cross-dimensional object-level data set was created to verify the effectiveness of our method. The data set contains multi-angle optical images, point clouds, and the corresponding 3D models of more than 400 buildings. A large number of experimental results show that our object-level cross-dimensional matching method achieves state-of-the-art outcomes. Full article
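
The sketch below illustrates only the matching stage: once the image branch and the point-cloud branch have produced descriptors in a shared space (the SDD in the paper), object-level matching reduces to nearest-neighbour search under cosine similarity. The encoders are omitted and the descriptors here are random placeholders.

```python
# Illustrative cross-dimensional descriptor matching (encoders not shown).
import numpy as np

def match_cross_dimensional(img_desc: np.ndarray, pc_desc: np.ndarray) -> np.ndarray:
    """img_desc: (N, D) image-branch descriptors; pc_desc: (M, D) point-cloud descriptors.
    Returns, for every image descriptor, the index of the best-matching point cloud."""
    a = img_desc / np.linalg.norm(img_desc, axis=1, keepdims=True)
    b = pc_desc / np.linalg.norm(pc_desc, axis=1, keepdims=True)
    similarity = a @ b.T                       # cosine-similarity matrix (N, M)
    return similarity.argmax(axis=1)

# Toy usage: 400 buildings whose two modalities map to roughly aligned descriptors.
rng = np.random.default_rng(1)
pc_desc = rng.normal(size=(400, 128))
img_desc = pc_desc + 0.3 * rng.normal(size=(400, 128))   # noisy image-view descriptor
pred = match_cross_dimensional(img_desc, pc_desc)
print("top-1 matching accuracy:", (pred == np.arange(400)).mean())
```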

21 pages, 7190 KiB  
Article
PBFormer: Point and Bi-Spatiotemporal Transformer for Pointwise Change Detection of 3D Urban Point Clouds
by Ming Han, Jianjun Sha, Yanheng Wang and Xiangwei Wang
Remote Sens. 2023, 15(9), 2314; https://doi.org/10.3390/rs15092314 - 27 Apr 2023
Cited by 1 | Viewed by 1173
Abstract
Change detection (CD) is a technique widely used in remote sensing for identifying the differences between data acquired at different times. Most existing 3D CD approaches voxelize point clouds into 3D grids, project them into 2D images, or rasterize them into digital surface models due to the irregular format of point clouds and the variety of changes in three-dimensional (3D) objects. However, the details of the geometric structure and spatiotemporal sequence information may not be fully utilized. In this article, we propose PBFormer, a transformer network with Siamese architecture, for directly inferring pointwise changes in bi-temporal 3D point clouds. First, we extract point sequences from irregular 3D point clouds using the k-nearest neighbor method. Second, we uniquely use a point transformer network as an encoder to extract point feature information from bitemporal 3D point clouds. Then, we design a module for fusing the spatiotemporal features of bi-temporal point clouds to effectively detect change features. Finally, multilayer perceptrons are used to obtain the CD results. Extensive experiments conducted on the Urb3DCD benchmark show that PBFormer outperforms other excellent approaches for 3D point cloud CD tasks. Full article
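
A small sketch of the first step named in the abstract: extracting fixed-length point sequences with k-nearest neighbours so that each point contributes a local token sequence for a transformer encoder. The Siamese encoder and change-detection head are not shown, and k is an assumed value.

```python
# Sketch of k-NN point-sequence extraction from an irregular point cloud.
import numpy as np
from scipy.spatial import cKDTree

def knn_point_sequences(points: np.ndarray, k: int = 16) -> np.ndarray:
    """points: (N, 3). Returns (N, k, 3): for each point, its k nearest neighbours
    expressed relative to that point (a local 'sequence' of tokens)."""
    _, idx = cKDTree(points).query(points, k=k)
    neighbours = points[idx]                        # (N, k, 3)
    return neighbours - points[:, None, :]          # centre each sequence on its point

cloud_t1 = np.random.rand(2048, 3)
seq_t1 = knn_point_sequences(cloud_t1)
print(seq_t1.shape)  # (2048, 16, 3) -- one token sequence per point
```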

22 pages, 3954 KiB  
Article
Co-Visual Pattern-Augmented Generative Transformer Learning for Automobile Geo-Localization
by Jianwei Zhao, Qiang Zhai, Pengbo Zhao, Rui Huang and Hong Cheng
Remote Sens. 2023, 15(9), 2221; https://doi.org/10.3390/rs15092221 - 22 Apr 2023
Cited by 5 | Viewed by 1564
Abstract
Geolocation is a fundamental component of route planning and navigation for unmanned vehicles, but GNSS-based geolocation fails under denial-of-service conditions. Cross-view geo-localization (CVGL), which aims to estimate the geographic location of the ground-level camera by matching against enormous geo-tagged aerial (e.g., satellite) images, has received a lot of attention but remains extremely challenging due to the drastic appearance differences across aerial–ground views. In existing methods, global representations of different views are extracted primarily using Siamese-like architectures, but their interactive benefits are seldom taken into account. In this paper, we present a novel approach using cross-view knowledge generative techniques in combination with transformers, namely mutual generative transformer learning (MGTL), for CVGL. Specifically, by taking the initial representations produced by the backbone network, MGTL develops two separate generative sub-modules—one for aerial-aware knowledge generation from ground-view semantics and vice versa—and fully exploits the entirely mutual benefits through the attention mechanism. Moreover, to better capture the co-visual relationships between aerial and ground views, we introduce a cascaded attention masking algorithm to further boost accuracy. Extensive experiments on challenging public benchmarks, i.e., CVACT and CVUSA, demonstrate the effectiveness of the proposed method, which sets new records compared with the existing state-of-the-art models. Our code will be available upon acceptance. Full article
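
As a rough sketch of the mutual, cross-view idea, the module below lets ground-view tokens attend to aerial tokens and vice versa, so each view generates features aware of the other. It uses standard cross-attention with assumed shapes; the actual generative sub-modules and cascaded attention masking are described only in the paper.

```python
# Sketch of mutual cross-view attention between aerial and ground token sets.
import torch
import torch.nn as nn

class MutualCrossView(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.ground_from_aerial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.aerial_from_ground = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ground: torch.Tensor, aerial: torch.Tensor):
        """ground: (B, Ng, dim) ground-view tokens; aerial: (B, Na, dim) aerial tokens."""
        # Each view queries the other view, so the benefit is mutual.
        g2a, _ = self.ground_from_aerial(ground, aerial, aerial)
        a2g, _ = self.aerial_from_ground(aerial, ground, ground)
        return ground + g2a, aerial + a2g       # residual-enhanced representations

model = MutualCrossView()
g, a = torch.randn(2, 49, 256), torch.randn(2, 64, 256)
g_out, a_out = model(g, a)
print(g_out.shape, a_out.shape)  # torch.Size([2, 49, 256]) torch.Size([2, 64, 256])
```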

11 pages, 2149 KiB  
Communication
A New Method for False Alarm Suppression in Heterogeneous Change Detection
by Cong Xu, Baisen Liu and Zishu He
Remote Sens. 2023, 15(7), 1745; https://doi.org/10.3390/rs15071745 - 24 Mar 2023
Viewed by 1052
Abstract
Heterogeneous change detection has a wide range of applications in many fields. However, to date, many existing problems of heterogeneous change detection, such as false alarm suppression, have not been specifically addressed. In this article, we discuss the problem of false alarm suppression and propose a new method based on the combination of a convolutional neural network (CNN) and graph convolutional network (GCN). This approach employs a two-channel CNN to learn the feature maps of multitemporal images and then calculates difference maps of different scales, which means that both low-level and high-level features contribute equally to the change detection. The GCN, with a newly built convolution kernel (called the partially absorbing random walk convolution kernel), classifies these difference maps to obtain the inter-feature relationships between true targets and false ones, which can be represented by an adjacent matrix. We use pseudo-label samples to train the whole network, which means our method is unsupervised. Our method is verified on two typical data sets. The experimental results indicate the superiority of our method compared to some state-of-the-art approaches, which proves the efficacy of our method in false alarm suppression. Full article
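
The sketch below uses one common formulation of partially absorbing random walks, A = (L + aI)^{-1} aI with L the graph Laplacian, to propagate change scores over a feature-similarity graph and damp isolated responses. The exact convolution kernel and training procedure in the paper may differ; everything here is illustrative.

```python
# Hedged sketch of a partially-absorbing-random-walk propagation step.
import numpy as np

def parw_kernel(features: np.ndarray, alpha: float = 1.0, sigma: float = 1.0) -> np.ndarray:
    """features: (N, D) node features. Returns an (N, N) propagation matrix."""
    sq = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))            # similarity graph between nodes
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                     # unnormalized graph Laplacian
    # A = (L + alpha*I)^{-1} * alpha*I : one common PARW formulation.
    return np.linalg.solve(L + alpha * np.eye(len(W)), alpha * np.eye(len(W)))

# Toy usage: smooth raw per-node change scores through the kernel,
# which suppresses isolated (likely false-alarm) responses.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 8))
raw_change_score = rng.random(50)
smoothed = parw_kernel(feats) @ raw_change_score
print(smoothed.shape)  # (50,)
```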

17 pages, 2073 KiB  
Article
Dynamic Data Augmentation Based on Imitating Real Scene for Lane Line Detection
by Qingwang Wang, Lu Wang, Yongke Chi, Tao Shen, Jian Song, Ju Gao and Shiquan Shen
Remote Sens. 2023, 15(5), 1212; https://doi.org/10.3390/rs15051212 - 22 Feb 2023
Cited by 2 | Viewed by 1800
Abstract
With the rapid development of urban ground transportation, lane line detection is gradually becoming a major technological direction to help realize safe vehicle navigation. However, lane line detection results may have incompleteness issues, such as blurry lane lines and the disappearance of lane lines in the distance, since the lane lines may be heavily obscured by vehicles and pedestrians on the road. In addition, low-visibility environments also pose a challenge for lane line detection. To solve the above problems, we propose a dynamic data augmentation framework based on imitating real scenes (DDA-IRS). DDA-IRS contains three data augmentation strategies that simulate different realistic scenes (i.e., shadows, dazzle, and crowded scenes). In this way, we expand from a limited scene dataset to realistically fit multiple complex scenes. Importantly, DDA-IRS is a lightweight framework that can be integrated with a variety of training-based models without modifying the original model. We evaluate the proposed DDA-IRS on the CULane dataset, and the results show that the data-enhanced model outperforms the baseline model by 0.5% in terms of F-measure. In particular, the F-measures for the “Normal”, “Crowded”, “Shadow”, “Arrow”, and “Curve” categories improve by 0.4%, 0.1%, 1.6%, 0.4%, and 1.4%, respectively. Full article
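
Simple illustrative versions of the three imitated scenes named in the abstract are sketched below: a cast shadow, a glare spot, and occluding boxes, applied to an image array. The exact transforms, parameters, and the dynamic scheduling used by DDA-IRS are the paper's own; these are stand-in examples.

```python
# Illustrative scene-imitating augmentations (assumed parameters, not DDA-IRS itself).
import numpy as np

def add_shadow(img: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Darken a random vertical band of the image, imitating a cast shadow."""
    out = img.astype(np.float32).copy()
    w = img.shape[1]
    x0, x1 = sorted(np.random.randint(0, w, size=2))
    out[:, x0:x1 + 1] *= (1.0 - strength)
    return out.clip(0, 255).astype(np.uint8)

def add_dazzle(img: np.ndarray, radius: int = 60) -> np.ndarray:
    """Add a bright radial glare spot, imitating low-sun dazzle or headlights."""
    h, w = img.shape[:2]
    cy, cx = np.random.randint(0, h), np.random.randint(0, w)
    yy, xx = np.mgrid[0:h, 0:w]
    glare = 255.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * radius ** 2))
    return (img.astype(np.float32) + glare[..., None]).clip(0, 255).astype(np.uint8)

def add_crowding(img: np.ndarray, n_boxes: int = 4) -> np.ndarray:
    """Paste dark rectangles over the lower half, imitating occluding vehicles."""
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_boxes):
        y = np.random.randint(h // 2, h - 20); x = np.random.randint(0, w - 40)
        out[y:y + 20, x:x + 40] = np.random.randint(0, 60, size=3, dtype=np.uint8)
    return out

frame = np.random.randint(0, 255, size=(288, 800, 3), dtype=np.uint8)
augmented = add_crowding(add_dazzle(add_shadow(frame)))
print(augmented.shape, augmented.dtype)
```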

16 pages, 8476 KiB  
Article
SAR and Multi-Spectral Data Fusion for Local Climate Zone Classification with Multi-Branch Convolutional Neural Network
by Guangjun He, Zhe Dong, Jian Guan, Pengming Feng, Shichao Jin and Xueliang Zhang
Remote Sens. 2023, 15(2), 434; https://doi.org/10.3390/rs15020434 - 11 Jan 2023
Cited by 2 | Viewed by 1913
Abstract
The local climate zone (LCZ) scheme is of great value for urban heat island (UHI) effect studies by providing a standard classification framework to describe the local physical structure at a global scale. In recent years, with the rapid development of satellite imaging techniques, both multi-spectral (MS) and synthetic aperture radar (SAR) data have been widely used in LCZ classification tasks. However, the fusion of MS and SAR data still faces the challenges of the different imaging mechanisms and the feature heterogeneity. In this study, to fully exploit and utilize the features of SAR and MS data, a data-grouping method was firstly proposed to divide multi-source data into several band groups according to the spectral characteristics of different bands. Then, a novel network architecture, namely Multi-source data Fusion Network for Local Climate Zone (MsF-LCZ-Net), was introduced to achieve high-precision LCZ classification, which contains a multi-branch CNN for multi-modal feature extraction and fusion, followed by a classifier for LCZ prediction. In the proposed multi-branch structure, a split–fusion-aggregate strategy was adopted to capture multi-level information and enhance the feature representation. In addition, a self channel attention (SCA) block was introduced to establish long-range spatial and inter-channel dependencies, which made the network pay more attention to informative features. Experiments were conducted on the So2Sat LCZ42 dataset, and the results show the superiority of our proposed method when compared with state-of-the-art methods. Moreover, the LCZ maps of three main cities in China were generated and analyzed to demonstrate the effectiveness of our proposed method. Full article
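
Two of the ideas called out in the abstract are easy to sketch: grouping the stacked SAR and MS bands before feeding separate branches, and a squeeze-and-excitation-style channel-attention gate. The group boundaries and the block design below are placeholders, not the paper's SCA block or band grouping.

```python
# Sketch of band grouping plus an SE-style channel-attention gate (assumed design).
import torch
import torch.nn as nn

def group_bands(x: torch.Tensor, groups: list) -> list:
    """x: (B, C, H, W) stacked SAR+MS bands; groups: list of channel-index lists."""
    return [x[:, idx] for idx in groups]

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style block: global pooling + bottleneck MLP gate."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = self.fc(x.mean(dim=(2, 3)))           # (B, C) channel descriptor
        return x * gate[:, :, None, None]            # re-weight informative channels

# Toy usage: 2 SAR bands + 10 MS bands split into three branches.
x = torch.randn(2, 12, 32, 32)
branches = group_bands(x, [[0, 1], [2, 3, 4, 5], [6, 7, 8, 9, 10, 11]])
attn = ChannelAttention(channels=4)
print([b.shape for b in branches], attn(branches[1]).shape)
```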

22 pages, 1641 KiB  
Article
Vision-Based Moving-Target Geolocation Using Dual Unmanned Aerial Vehicles
by Tingwei Pan, Jianjun Gui, Hongbin Dong, Baosong Deng and Bingxu Zhao
Remote Sens. 2023, 15(2), 389; https://doi.org/10.3390/rs15020389 - 08 Jan 2023
Cited by 3 | Viewed by 2104
Abstract
This paper develops a framework for geolocating ground-based moving targets with images taken from dual unmanned aerial vehicles (UAVs). Unlike the usual moving-target geolocation methods that rely heavily on accurate navigation state sensors or assumptions of the known target’s altitude, the proposed framework does not have the same limitations and performs geolocation of moving targets utilizing dual UAVs equipped with the low-quality navigation state sensors. Considering the Gaussian measurement errors and yaw-angle measurement bias provided by low-quality sensors, we first propose an epipolar constraint-based corresponding-point-matching method, which enables the historical measurement data to be used to estimate the current position of the moving target; after that, we propose a target altitude estimation method based on multiview geometry, which utilizes multiple images, including historical images, to estimate the altitude of the moving target; finally, considering the negative influence of yaw-angle measurement bias on the processes of target altitude estimation and parameter regression, we take advantage of multiple iterations among the two processes to accurately estimate the moving target’s two-dimensional position and the yaw-angle measurement biases of two UAVs. The effectiveness and practicability of the framework proposed in this paper are proved by simulation experiments and actual flight experiments. Full article
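
The geometric core of two-view geolocation can be illustrated with linear (DLT) triangulation from two camera projection matrices, as sketched below with synthetic cameras. The full framework additionally handles moving targets, yaw-bias estimation, and multiview altitude refinement, none of which is reproduced here.

```python
# Sketch of two-view DLT triangulation (geometric core only).
import numpy as np

def triangulate_dlt(P1: np.ndarray, P2: np.ndarray, uv1, uv2) -> np.ndarray:
    """P1, P2: (3, 4) camera projection matrices; uv1, uv2: pixel coordinates.
    Returns the estimated 3D point in the world frame."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize

# Toy usage: two cameras with identity rotation observing a known 3D point.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
def make_P(t):                               # camera centred at position t
    return K @ np.hstack([np.eye(3), -np.asarray(t, float).reshape(3, 1)])
P1, P2 = make_P([0.0, 0.0, 0.0]), make_P([50.0, 0.0, 0.0])
target = np.array([10.0, 20.0, 100.0, 1.0])
uv = lambda P: (P @ target)[:2] / (P @ target)[2]
print(np.round(triangulate_dlt(P1, P2, uv(P1), uv(P2)), 3))  # ~[ 10.  20. 100.]
```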
