
Machine Vision and Advanced Image Processing in Remote Sensing II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 9251

Special Issue Editors


Prof. Dr. Liang-Jian Deng
Guest Editor
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: image fusion; image restoration; machine learning; sparse optimization modeling; tensor decomposition; numerical PDE for image processing

Prof. Dr. Gemine Vivone
Guest Editor
Institute of Methodologies for Environmental Analysis, CNR-IMAA, Tito Scalo, 85050 Tito, Italy
Interests: data fusion; pansharpening; statistical signal processing; detection of remotely sensed images

Special Issue Information

Dear Colleagues,

With the development of sensor technology, ever more remote sensing images can be acquired from sensors mounted on satellites, aircraft, and other platforms. These data allow us to observe objects clearly and to discover the materials underlying the ground surface, opening a new window onto the world. Machine vision and image processing in remote sensing have recently become a hot topic, and we expect this trend to continue; the development of effective approaches and techniques for these problems therefore plays an increasingly critical role. In this Special Issue, we intend to collect papers in the area of machine vision and advanced image processing in remote sensing, and we hope to advance their application to tasks such as fusion, restoration, classification, unmixing, detection, and segmentation. There is no restriction on methodology, provided the proposed approach deals effectively with the tasks mentioned above.

Prof. Dr. Liang-Jian Deng
Prof. Dr. Gemine Vivone
Prof. Dr. Danfeng Hong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine vision in remote sensing
  • image processing in remote sensing
  • multispectral and hyperspectral images
  • algorithms and modeling in remote sensing
  • data fusion
  • image restoration
  • other vision and image tasks in remote sensing

Published Papers (7 papers)


Research

25 pages, 11441 KiB  
Article
D-VINS: Dynamic Adaptive Visual–Inertial SLAM with IMU Prior and Semantic Constraints in Dynamic Scenes
by Yang Sun, Qing Wang, Chao Yan, Youyang Feng, Rongxuan Tan, Xiaoqiong Shi and Xueyan Wang
Remote Sens. 2023, 15(15), 3881; https://doi.org/10.3390/rs15153881 - 04 Aug 2023
Cited by 2 | Viewed by 1362
Abstract
Visual–inertial SLAM algorithms empower robots to autonomously explore and navigate unknown scenes. However, most existing SLAM systems heavily rely on the assumption of static environments, making them ineffective when confronted with dynamic objects in the real world. To enhance the robustness and localization accuracy of SLAM systems in dynamic scenes, this paper introduces a visual–inertial SLAM framework that integrates semantic and geometric information, called D-VINS. This paper begins by presenting a method for dynamic object classification based on the current motion state of features, enabling the identification of temporarily static features within the environment. Subsequently, a feature dynamic check module is devised, which utilizes inertial measurement unit (IMU) prior information and geometric constraints from adjacent frames to calculate dynamic factors. This module also validates the classification outcomes of the temporarily static features. Finally, a dynamic adaptive bundle adjustment module is developed, utilizing the dynamic factors of the features to adjust their weights during the nonlinear optimization process. The proposed methodology is evaluated using both public datasets and a dataset created specifically for this study. The experimental results demonstrate that D-VINS is among the most accurate and robust real-time systems for dynamic scenes, showcasing its effectiveness in challenging real-world scenes.
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)
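
To make the weighting idea concrete, here is a minimal sketch of how per-feature dynamic factors could down-weight reprojection residuals inside a bundle adjustment cost. The function name and the exact weighting rule are illustrative assumptions, not the authors' D-VINS implementation:

    import numpy as np

    def dynamic_weighted_cost(residuals, dynamic_factors, eps=1e-6):
        """Down-weight residuals of likely-dynamic features (illustrative).

        residuals       : (N, 2) per-feature reprojection errors in pixels.
        dynamic_factors : (N,) values in [0, 1]; near 1 means the feature is
                          consistent with the IMU prior and geometric checks
                          (static), near 0 means it is clearly dynamic.
        """
        w = np.clip(dynamic_factors, eps, 1.0)   # keep weights strictly positive
        sq = np.sum(residuals ** 2, axis=1)      # per-feature squared error
        return float(np.sum(w * sq))             # weighted least-squares cost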

23 pages, 10758 KiB  
Article
Tensor Completion via Smooth Rank Function Low-Rank Approximate Regularization
by Shicheng Yu, Jiaqing Miao, Guibing Li, Weidong Jin, Gaoping Li and Xiaoguang Liu
Remote Sens. 2023, 15(15), 3862; https://doi.org/10.3390/rs15153862 - 03 Aug 2023
Viewed by 919
Abstract
In recent years, tensor completion algorithms have played a vital part in the reconstruction of missing elements within high-dimensional remote sensing image data. Due to the difficulty of computing the tensor rank, scholars have proposed many substitutes for it. By introducing the smooth rank function (SRF), this paper proposes a new nonconvex tensor rank substitute that adaptively weights different singular values, avoiding the performance deficiency caused by treating all singular values equally. On this basis, a novel tensor completion model that minimizes the SRF as the objective function is proposed. The model is solved efficiently by adding a warm-start method to the alternating direction method of multipliers (ADMM) framework. Extensive experiments are carried out to demonstrate the resilience of the proposed model to missing data, and the results illustrate that it is superior to other advanced models in tensor completion.
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)
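
As a rough illustration of the adaptive-weighting idea (not the paper's exact SRF model or its ADMM solver), the following sketch applies weighted soft-thresholding to the singular values of a matrix unfolding; `tau` and `delta` are assumed illustrative parameters:

    import numpy as np

    def srf_weighted_svt(M, tau=1.0, delta=10.0):
        """Weighted singular value thresholding (illustrative sketch).

        Large singular values, which carry most of the image structure,
        receive small weights and are shrunk less; small singular values,
        which are mostly noise, are shrunk more.
        """
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        w = np.exp(-s / delta)                  # adaptive weights in (0, 1]
        s_new = np.maximum(s - tau * w, 0.0)    # weighted soft-thresholding
        return (U * s_new) @ Vt                 # rebuild the low-rank estimate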

22 pages, 34338 KiB  
Article
Homography Matrix-Based Local Motion Consistent Matching for Remote Sensing Images
by Junyuan Liu, Ao Liang, Enbo Zhao, Mingqi Pang and Daijun Zhang
Remote Sens. 2023, 15(13), 3379; https://doi.org/10.3390/rs15133379 - 02 Jul 2023
Cited by 2 | Viewed by 1310
Abstract
Feature matching is a fundamental task in the field of image processing, aimed at ensuring correct correspondence between two sets of features. Putative matches constructed based on the similarity of descriptors always contain a large number of false matches. To eliminate these false matches, we propose a remote sensing image feature matching method called LMC (local motion consistency), where local motion consistency refers to the property that adjacent correct matches have the same motion. The core idea of LMC is to find neighborhoods with correct motion trends and retain matches with the same motion. To achieve this, we design a local geometric constraint using a homography matrix to represent local motion consistency. This constraint has projective invariance and is applicable to various types of transformations. To avoid outliers affecting the search for neighborhoods with correct motion, we introduce a resampling method to construct neighborhoods. Moreover, we design a jump-out mechanism to exit the loop without searching all possible cases, thereby reducing runtime. LMC can process over 1000 putative matches within 100 ms. Experimental evaluations on diverse image datasets, including SUIRD, RS, and DTU, demonstrate that LMC achieves a higher F-score and superior overall matching performance compared to state-of-the-art methods.
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)
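
The neighborhood test at the heart of this idea can be sketched with OpenCV's RANSAC homography estimator. This is only an illustration of local motion consistency, not the authors' LMC (which adds the resampling step and jump-out mechanism described above); the function name, `k`, and the threshold are assumptions for the example:

    import numpy as np
    import cv2

    def locally_consistent(src, dst, idx, k=8, reproj_thresh=3.0):
        """Keep putative match `idx` if its k nearest neighbours fit a
        common homography and the match itself agrees with that motion.

        src, dst : (N, 2) float arrays of matched keypoint coordinates.
        """
        d = np.linalg.norm(src - src[idx], axis=1)
        nb = np.argsort(d)[1:k + 1]                    # k nearest putative matches
        H, _ = cv2.findHomography(src[nb].astype(np.float32),
                                  dst[nb].astype(np.float32),
                                  cv2.RANSAC, reproj_thresh)
        if H is None:                                  # degenerate neighbourhood
            return False
        p = cv2.perspectiveTransform(
            src[idx].reshape(1, 1, 2).astype(np.float32), H)
        return float(np.linalg.norm(p.ravel() - dst[idx])) < reproj_thresh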

31 pages, 7644 KiB  
Article
Dim and Small Target Detection Based on Energy Sensing of Local Multi-Directional Gradient Information
by Xiangsuo Fan, Juliu Li, Lei Min, Linping Feng, Ling Yu and Zhiyong Xu
Remote Sens. 2023, 15(13), 3267; https://doi.org/10.3390/rs15133267 - 25 Jun 2023
Cited by 1 | Viewed by 770
Abstract
It is difficult for traditional algorithms to remove cloud edge contours in multi-cloud scenarios. To improve the detection of dim and small targets in scenes with complex edge contours, this paper proposes a new detection algorithm based on local multi-directional gradient information energy perception. Based on the information difference between the target area and the background area in four directional neighborhood blocks, an energy enhancement model for multi-directional gray aggregation (EMDGA) is constructed to preliminarily enhance the target signal. Subsequently, a local multi-directional gradient reciprocal background suppression model (LMDGR) is constructed to model the image background. Furthermore, a multi-directional gradient scale segmentation model (MDGSS) is proposed to obtain candidate target points, which is then combined with the proposed multi-frame energy-sensing detection (MFESD) algorithm to extract the true targets from sequence images. Finally, to illustrate the effect of the proposed algorithm in detecting small targets against a cloudy background, four image sequences are selected for detection. The experimental results show that, compared with traditional algorithms, the proposed algorithm can effectively suppress the edge contours of complex clouds: at a false alarm rate Pf of 0.005%, the detection rate Pd exceeds 95%.
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)
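
The following toy sketch conveys the intuition behind multi-directional gradient information (it is not the paper's EMDGA/LMDGR models): a small bright target produces large positive differences in all four directions, whereas a cloud edge does so on only one side, so fusing the directions with a minimum suppresses edge contours. The step size and the min-fusion rule are assumptions for illustration:

    import numpy as np

    def multidirectional_gradient_energy(img, step=2):
        """Minimum of positive gray-level differences taken in four
        directions (up, down, left, right).  Isolated bright targets keep
        a high response; one-sided structures such as cloud edges do not.
        """
        img = img.astype(np.float64)
        p = np.pad(img, step, mode="edge")
        h, w = img.shape
        shifts = [(-step, 0), (step, 0), (0, -step), (0, step)]
        diffs = [np.maximum(img - p[step + dy:step + dy + h,
                                    step + dx:step + dx + w], 0.0)
                 for dy, dx in shifts]
        return np.minimum.reduce(diffs)   # one-sided edge responses vanish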

19 pages, 10627 KiB  
Article
ADD-UNet: An Adjacent Dual-Decoder UNet for SAR-to-Optical Translation
by Qingli Luo, Hong Li, Zhiyuan Chen and Jian Li
Remote Sens. 2023, 15(12), 3125; https://doi.org/10.3390/rs15123125 - 15 Jun 2023
Viewed by 1200
Abstract
Synthetic aperture radar (SAR) imagery has the advantages of all-day and all-weather observation. However, due to the microwave imaging mechanism, it is difficult for nonexperts to interpret SAR images. Translating SAR imagery into optical imagery can improve the interpretation of SAR data and support further multi-source remote sensing fusion research. Methods based on generative adversarial networks (GANs) have proven effective in SAR-to-optical translation tasks. To further improve the translation results, we propose an adjacent dual-decoder UNet (ADD-UNet) based on the conditional GAN (cGAN) for SAR-to-optical translation. The proposed architecture adds a decoder at an adjacent scale to the UNet, and the multi-scale feature aggregation of the two decoders improves the structures, details, and edge sharpness of the generated images while introducing fewer parameters than UNet++. In addition, we combine multi-scale structural similarity (MS-SSIM) loss and L1 loss with the cGAN loss to help preserve structures and details. The experimental results demonstrate the superiority of our method compared with several state-of-the-art methods.
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)
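
As a hedged sketch of how the three loss terms could be combined for the generator (the weights and the `pytorch_msssim` dependency are assumptions for illustration; the paper's exact weighting is not reproduced here):

    import torch
    import torch.nn.functional as F
    from pytorch_msssim import ms_ssim   # third-party package, assumed available

    def generator_loss(disc_fake_logits, fake_opt, real_opt,
                       lambda_l1=100.0, lambda_ms=10.0):
        """cGAN adversarial term + L1 + (1 - MS-SSIM), illustrative weights.

        fake_opt, real_opt : generated and reference optical images,
                             (B, C, H, W) tensors scaled to [0, 1].
        """
        adv = F.binary_cross_entropy_with_logits(
            disc_fake_logits, torch.ones_like(disc_fake_logits))
        l1 = F.l1_loss(fake_opt, real_opt)
        ms = 1.0 - ms_ssim(fake_opt, real_opt, data_range=1.0)
        return adv + lambda_l1 * l1 + lambda_ms * ms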

20 pages, 106782 KiB  
Article
Improved Generalized IHS Based on Total Variation for Pansharpening
by Xuefeng Zhang, Xiaobing Dai, Xuemin Zhang, Yuchen Hu, Yingdong Kang and Guang Jin
Remote Sens. 2023, 15(11), 2945; https://doi.org/10.3390/rs15112945 - 05 Jun 2023
Cited by 2 | Viewed by 1195
Abstract
Pansharpening refers to the fusion of a panchromatic (PAN) and a multispectral (MS) image of the same area, aimed at generating a high-quality high-resolution MS outcome. This image fusion problem has been widely studied, but it remains challenging to balance spatial and spectral fidelity in the fused images. Spectral distortion is widespread in component substitution-based approaches due to variation in the intensity distribution of the spatial components. We revisit this idea through total variation optimization and propose a novel GIHS-TV framework for pansharpening. The framework draws its high spatial fidelity from the GIHS scheme and implements it with a simpler variational expression. An improved L1-TV constraint on the new spatial–spectral information is introduced into the GIHS-TV framework, along with a fast implementation. The objective function is solved by the iteratively reweighted norm (IRN) method. The experimental results on the "PAirMax" dataset clearly indicate that GIHS-TV effectively reduces the spectral distortion introduced by component substitution, achieving excellent results in both visual quality and evaluation metrics.
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)
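
For reference, the plain GIHS detail injection that the proposed variational framework builds on can be sketched in a few lines. The equal-weight intensity component is an assumption of this simple baseline; the paper replaces direct substitution with an L1-TV-regularized model solved by IRN:

    import numpy as np

    def gihs_fuse(ms_up, pan):
        """Baseline generalized IHS fusion: inject the PAN detail
        (PAN minus the intensity component) into every MS band.

        ms_up : (H, W, B) multispectral image upsampled to the PAN grid.
        pan   : (H, W) panchromatic image, radiometrically matched to ms_up.
        """
        intensity = ms_up.mean(axis=2)       # simple equal-weight intensity
        detail = pan - intensity             # spatial detail to inject
        return ms_up + detail[..., None]     # broadcast across bands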

22 pages, 8643 KiB  
Article
Hyperspectral Denoising Using Asymmetric Noise Modeling Deep Image Prior
by Yifan Wang, Shuang Xu, Xiangyong Cao, Qiao Ke, Teng-Yu Ji and Xiangxiang Zhu
Remote Sens. 2023, 15(8), 1970; https://doi.org/10.3390/rs15081970 - 08 Apr 2023
Viewed by 1673
Abstract
Deep image prior (DIP) is a powerful technique for image restoration that leverages an untrained network as a handcrafted prior. DIP can also be used for hyperspectral image (HSI) denoising tasks and has achieved impressive performance. Recent works further incorporate different regularization terms to enhance the performance of DIP and successfully show notable improvements. However, most DIP-based methods for HSI denoising rarely consider the distribution of complicated HSI mixed noise. In this paper, we propose the asymmetric Laplace noise modeling deep image prior (ALDIP) for HSI mixed noise removal. Based on the observation that real-world HSI noise exhibits heavy-tailed and asymmetric properties, we model the HSI noise of each band using an asymmetric Laplace distribution. Furthermore, in order to fully exploit the spatial–spectral correlation, we propose ALDIP-SSTV, which combines ALDIP with a spatial–spectral total variation (SSTV) term to preserve more spatial–spectral information. Experiments on both synthetic data and real-world data demonstrate that ALDIP and ALDIP-SSTV outperform state-of-the-art HSI denoising methods.
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)
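
The band-wise noise model can be made concrete with the negative log-likelihood of an asymmetric Laplace distribution, which would be minimized jointly with the untrained network's parameters. The parameterization below (location 0, scale `lam`, asymmetry `kappa`) is one standard form and is given only as an assumed illustration, not the paper's exact formulation:

    import torch

    def ald_nll(residual, lam, kappa):
        """Negative log-likelihood of an asymmetric Laplace distribution
        with density f(x) = lam / (kappa + 1/kappa)
                            * exp(-lam * |x| * kappa**sign(x)).

        residual   : noisy band minus network output.
        lam, kappa : positive scalar tensors (e.g. torch.nn.Parameter).
        """
        s = torch.sign(residual)
        nll = (-torch.log(lam) + torch.log(kappa + 1.0 / kappa)
               + lam * residual.abs() * kappa.pow(s))
        return nll.mean()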
