
State-of-the-Art Multimodal Remote Sensing Technologies

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 6543

Special Issue Editor


Dr. Benoit Vozel
Guest Editor
Institut d'Électronique et des Technologies du numéRique, 35700 Rennes, France
Interests: multimodal remote sensing data analysis and processing; machine and deep learning; image registration; adaptive multichannel signal and image processing; blind image restoration and blind estimation of image noise characteristics

Special Issue Information

Dear Colleagues,

The rapid development of sensor technology offers a wide range of possibilities for data acquisition and processing across different observation modalities and scales, serving multidisciplinary applications. Multimodal data naturally complement each other, conveying additional information that can significantly improve the performance of the resulting analysis, processing, and interpretation tasks.

Despite their heterogeneity and the non-linear relationships between them in intensity and information content, multimodal data are very useful, if not essential, in the processing flow from data captured at the sensor level towards knowledge exploration and data-based decision making.

However, the integration of data acquired in different ways, at different scales, and possibly at different times remains a difficult task. Many methodological questions are still open on how to realize the benefits such data can bring and avoid wasting their potential value.

These questions relate to the proper understanding and modeling of these data, including their intrinsic complexities and properties; to the most efficient means of extracting the maximum informative value from them; and to the effective use of the full added value of the information they contain, despite its uncertainties.

Responses to these questions can trigger original ideas and innovative approaches, whatever the targeted modalities (high-resolution video, depth imagery, RGB, multispectral and hyperspectral optical imagery, infrared, light detection and ranging (LiDAR), microwave imaging, synthetic aperture radar (SAR), and topographic data), observation scales (laboratory bench, ground field surveys, aerial surveys with unmanned aerial systems or airplanes, and satellite surveys), or application fields (environmental and/or infrastructure surveillance and monitoring, among others).

This Special Issue will cover and promote the latest advances in multimodal remote sensing technologies. Its scope includes current technological advances at the sensor or acquisition-platform level for combining, synchronously or not, two or more imaging techniques, as well as recent methodological advances (models and algorithms) for the efficient and successful processing of the collected multimodal data.

This includes innovative approaches based on advanced mathematics and statistics or on supervised and unsupervised deep learning, provided they are designed to infer the true informative value of remotely sensed multimodal data and showcase how this value can improve output performance.

Accordingly, a wide spectrum of the latest emerging applications highlighting both the capabilities and the benefits enabled by remotely sensed multimodal data is targeted.

Dr. Benoit Vozel
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multimodal remote sensing image analysis and processing
  • Image registration
  • Localization accuracy
  • Similarity measure
  • Feature extraction
  • Feature and data fusion
  • Uncertainty quantification
  • Multimodal clustering, consensus clustering, ensemble clustering
  • Multimodal deep learning
  • Computational scalability

Published Papers (4 papers)


Research

30 pages, 40008 KiB  
Article
Contribution of Geometric Feature Analysis for Deep Learning Classification Algorithms of Urban LiDAR Data
by Fayez Tarsha Kurdi, Wijdan Amakhchan, Zahra Gharineiat, Hakim Boulaassal and Omar El Kharki
Sensors 2023, 23(17), 7360; https://doi.org/10.3390/s23177360 - 23 Aug 2023
Cited by 3 | Viewed by 1167
Abstract
The use of a Machine Learning (ML) classification algorithm to classify airborne urban Light Detection And Ranging (LiDAR) point clouds into main classes such as buildings, terrain, and vegetation has been widely accepted. This paper assesses two strategies to enhance the effectiveness of the Deep Learning (DL) classification algorithm. Two ML classification approaches are developed and compared in this context. These approaches utilize the DL Pipeline Network (DLPN), which is tailored to minimize classification errors and maximize accuracy. The geometric features calculated from a point and its neighborhood are analyzed to select the features that will be used in the input layer of the classification algorithm. To evaluate the contribution of the proposed approach, five point-cloud datasets with different urban typologies and ground topography are employed. These point clouds exhibit variations in point density, accuracy, and the type of aircraft used (drone and plane). This diversity in the tested point clouds enables the assessment of the algorithm's efficiency. The obtained high classification accuracy, between 89% and 98%, confirms the efficacy of the developed algorithm. Finally, the results of the adopted algorithm are compared with both rule-based and ML algorithms, providing insights into the positioning of DL classification algorithms among other strategies suggested in the literature.
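The paper's exact feature set and DLPN architecture are not reproduced here. As an illustration only, the sketch below computes the standard eigenvalue-based neighborhood features (linearity, planarity, sphericity) that geometric feature analysis of point clouds typically starts from; the function name and the choice of k are assumptions.

```python
# Illustrative sketch (not the paper's code): eigenvalue-based geometric
# features from each point's k-nearest neighborhood, usable as inputs to a
# point-cloud classifier.
import numpy as np
from scipy.spatial import cKDTree

def geometric_features(points, k=20):
    """Per-point linearity, planarity, and sphericity from the covariance
    of the k-nearest neighborhood (eigenvalues l1 >= l2 >= l3)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.zeros((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                    # 3x3 covariance
        l1, l2, l3 = np.maximum(np.sort(np.linalg.eigvalsh(cov))[::-1], 1e-12)
        feats[i] = [(l1 - l2) / l1,                     # linearity (wires, edges)
                    (l2 - l3) / l1,                     # planarity (roofs, terrain)
                    l3 / l1]                            # sphericity (vegetation)
    return feats

cloud = np.random.rand(1000, 3)      # stand-in for an airborne LiDAR tile
X = geometric_features(cloud)        # candidate features for the input layer
```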

19 pages, 61563 KiB  
Article
Remote Sensing Image Fusion Based on Morphological Convolutional Neural Networks with Information Entropy for Optimal Scale
by Bairu Jia, Jindong Xu, Haihua Xing and Peng Wu
Sensors 2022, 22(19), 7339; https://doi.org/10.3390/s22197339 - 27 Sep 2022
Cited by 1 | Viewed by 1231
Abstract
Remote sensing image fusion is a fundamental issue in the field of remote sensing. In this paper, we propose a remote sensing image fusion method based on optimal-scale morphological convolutional neural networks (CNN) using the principle of entropy from information theory. We use an attentional CNN to fuse the optimal cartoon and texture components of the original images and obtain a high-resolution multispectral image. We obtain the cartoon and texture components using sparse decomposition via morphological component analysis (MCA), with an optimal threshold value determined by calculating the information entropy of the fused image. In the sparse decomposition process, the local discrete cosine transform dictionary and the curvelet transform dictionary compose the MCA dictionary. We sparsely decompose the original remote sensing images into a texture component and a cartoon component at an optimal scale, using the information entropy to control the dictionary parameter. Experimental results show that the remote sensing image fusion method proposed in this paper can effectively retain the information of the original image, improve the spatial resolution and spectral fidelity, and provide a new idea for image fusion from the perspective of multi-morphological deep learning.
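The entropy criterion itself is simple to state. As a minimal sketch (assuming 8-bit gray levels; not the paper's code), the fused image's Shannon entropy can be computed from its histogram and used to select among candidate decomposition scales:

```python
# Minimal sketch: Shannon entropy of an image, the model-selection criterion
# described above. The MCA decomposition (local DCT + curvelet dictionaries)
# is not shown; `candidates` stands in for fused images at different scales.
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins - 1))
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

candidates = [np.random.randint(0, 256, (64, 64)) for _ in range(4)]
best = max(candidates, key=image_entropy)  # keep the maximum-entropy fusion
```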

20 pages, 27923 KiB  
Article
A Novel Remote Sensing Image Registration Algorithm Based on Feature Using ProbNet-RANSAC
by Yunyun Dong, Chenbin Liang and Changjun Zhao
Sensors 2022, 22(13), 4791; https://doi.org/10.3390/s22134791 - 24 Jun 2022
Cited by 1 | Viewed by 1410
Abstract
Image registration based on features is a commonly used approach due to its robustness to complex geometric deformation and large gray-level differences. However, in practical applications, the corresponding feature point set may be contaminated by various noises, occlusions, shadows, gray-level differences, and even changes in image content, which may degrade the accuracy of the transformation model estimated with Random Sample Consensus (RANSAC). In this work, we propose a semi-automated method to create image registration training data, which greatly reduces the workload of labeling and makes it possible to train a deep neural network. In addition, we recast RANSAC model estimation from a probabilistic perspective and present a formulation of RANSAC with learned guidance of hypothesis sampling. A deep convolutional neural network, ProbNet, is built to generate a sampling probability for each corresponding feature point; these probabilities then guide the sampling of RANSAC's minimal sets to acquire a more accurate estimated model. To illustrate the effectiveness and advantages of the proposed method, qualitative and quantitative experiments were conducted. In the qualitative experiment, the effectiveness of the proposed method is illustrated by a checkerboard visualization of image pairs before and after registration. In the quantitative experiment, three other representative and popular methods (vanilla RANSAC, LMedS-RANSAC, and PROSAC-RANSAC) were compared, and seven different measures were introduced to comprehensively evaluate performance. The quantitative results show that the proposed method performs better than the other methods. Furthermore, by integrating the model estimation of image registration into the deep learning framework, it becomes possible to jointly optimize all stages of image registration via end-to-end learning, further improving registration accuracy.
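The core sampling idea can be sketched compactly. The following is a hedged illustration, not the paper's implementation: a `probs` array stands in for ProbNet's per-correspondence scores, and a 2D affine model stands in for the registration's transformation model.

```python
# Sketch of RANSAC with learned guidance of hypothesis sampling: minimal sets
# are drawn with per-point probabilities instead of uniformly. `probs` stands
# in for ProbNet's output; the affine model is an illustrative assumption.
import numpy as np

def guided_ransac_affine(src, dst, probs, iters=500, thresh=3.0, seed=None):
    """Estimate a 2D affine transform (3x2 matrix M, [x y 1] @ M -> [x' y'])."""
    rng = np.random.default_rng(seed)
    p = probs / probs.sum()
    A = np.hstack([src, np.ones((len(src), 1))])       # homogeneous coordinates
    best_inl = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        s = rng.choice(len(src), size=3, replace=False, p=p)  # guided minimal set
        M, *_ = np.linalg.lstsq(A[s], dst[s], rcond=None)
        inl = np.linalg.norm(A @ M - dst, axis=1) < thresh
        if inl.sum() > best_inl.sum():
            best_inl = inl
    if best_inl.sum() < 3:
        return None, best_inl                          # no usable consensus found
    M, *_ = np.linalg.lstsq(A[best_inl], dst[best_inl], rcond=None)  # refit
    return M, best_inl
```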

17 pages, 4998 KiB  
Article
Exhaustive Search of Correspondences between Multimodal Remote Sensing Images Using Convolutional Neural Network
by Mykhail Uss, Benoit Vozel, Vladimir Lukin and Kacem Chehdi
Sensors 2022, 22(3), 1231; https://doi.org/10.3390/s22031231 - 6 Feb 2022
Cited by 6 | Viewed by 1714
Abstract
Finding putative correspondences between a pair of images is an important prerequisite for image registration. In complex cases such as multimodal registration, a true match may be less plausible than a false match within a search zone. Under these conditions, it is important to detect all plausible matches. This could be achieved by an exhaustive search using a handcrafted similarity measure (SM, e.g., mutual information). It is promising to replace handcrafted SMs with deep-learning ones that offer better performance; however, the latter are designed not for an exhaustive search of all matches but for finding the most plausible one. In this paper, we propose a deep-learning-based solution for exhaustive multiple-match search between two images within a predefined search area. We design a computationally efficient convolutional neural network (CNN) that takes as input a template fragment from one image and a search fragment from another image, and produces an SM map covering the entire search area in the spatial dimensions. This SM map finds multiple plausible matches, locates each match with subpixel accuracy, and provides a covariance matrix of localization errors for each match. The proposed CNN is trained with a specially designed loss function that enforces the translation and rotation invariance of the SM map and enables the detection of matches that have no associated ground-truth data (e.g., multiple matches for repetitive textures). We validate the approach on multimodal remote sensing images and show that the proposed "area" SM performs better than a "point" SM.
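To make the two outputs concrete, here is a minimal, hedged sketch: normalized cross-correlation stands in for the learned SM (a handcrafted measure, not the paper's CNN), producing an SM map over the whole search area, and a quadratic fit around the map's peak illustrates subpixel localization.

```python
# Sketch only: a handcrafted SM map (NCC) over a search area, plus subpixel
# peak refinement by a 1D parabola fit per axis. The paper's CNN produces an
# analogous map (and error covariances) in a single forward pass.
import numpy as np

def sm_map(template, search):
    """NCC score of `template` at every valid placement inside `search`."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    H, W = search.shape[0] - th + 1, search.shape[1] - tw + 1
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            w = search[y:y + th, x:x + tw]
            w = (w - w.mean()) / (w.std() + 1e-12)
            out[y, x] = (t * w).mean()
    return out

def subpixel_peak(sm):
    """Refine the integer argmax of the SM map with a parabola fit per axis."""
    y, x = np.unravel_index(np.argmax(sm), sm.shape)
    dy = dx = 0.0
    if 0 < y < sm.shape[0] - 1:
        a, b, c = sm[y - 1, x], sm[y, x], sm[y + 1, x]
        if a - 2 * b + c != 0:
            dy = 0.5 * (a - c) / (a - 2 * b + c)
    if 0 < x < sm.shape[1] - 1:
        a, b, c = sm[y, x - 1], sm[y, x], sm[y, x + 1]
        if a - 2 * b + c != 0:
            dx = 0.5 * (a - c) / (a - 2 * b + c)
    return y + dy, x + dx
```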
