
Advanced Super-resolution Methods in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 32783

Special Issue Editors


Dr. Igor Yanovsky
Guest Editor
1. Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
2. Department of Mathematics, University of California, Los Angeles, CA 90095, USA
Interests: data science; remote sensing; image processing; inverse problems; optimization; computational methods

Dr. Jing Qin
Guest Editor
Department of Mathematics, University of Kentucky, Lexington, KY 40506, USA
Interests: mathematical image processing; compressive sensing; inverse problems; optimization; high-dimensional signal processing

Special Issue Information

Dear Colleagues,

High-resolution hyperspectral data in remote sensing play a crucial role in many fields, such as land surveying and weather prediction. Super-resolution image reconstruction, rooted in advances in modeling and algorithms, has attracted substantial research interest. The high dimensionality of hyperspectral data and the various types of degradation introduced during image generation and acquisition raise a series of challenges, including excessive unknown noise and blurring artifacts. Recent advances in machine learning have brought tremendous improvements in reconstruction accuracy. This issue aims to showcase some of the most advanced super-resolution imaging methods, including physical and computational techniques. It will serve as a platform to facilitate interdisciplinary research and inspire innovative ideas and perspectives. Authors are encouraged to submit high-quality, original research papers on remote-sensing image super-resolution. Topics of interest include but are not limited to the following:

  • Spatial super-resolution;
  • Temporal resolution enhancement;
  • Spatiotemporal super-resolution;
  • Spectral super-resolution;
  • Radiometric super-resolution;
  • Single-frame and multi-frame resolution enhancement;
  • Super-resolution from geometrically deformed remote-sensing images;
  • Pansharpening of remote-sensing images;
  • Fusion of multi-instrument data to enhance resolution.

Dr. Igor Yanovsky
Dr. Jing Qin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • super-resolution
  • satellite images
  • spatial resolution
  • temporal resolution
  • spectral resolution
  • radiometric resolution

Published Papers (11 papers)


Research


29 pages, 9624 KiB  
Article
Enhancing Remote Sensing Image Super-Resolution with Efficient Hybrid Conditional Diffusion Model
by Lintao Han, Yuchen Zhao, Hengyi Lv, Yisa Zhang, Hailong Liu, Guoling Bi and Qing Han
Remote Sens. 2023, 15(13), 3452; https://doi.org/10.3390/rs15133452 - 07 Jul 2023
Cited by 7 | Viewed by 3229
Abstract
Recently, optical remote-sensing images have been widely applied in fields such as environmental monitoring and land cover classification. However, due to limitations in imaging equipment and other factors, low-resolution images that are unfavorable for image analysis are often obtained. Although existing image super-resolution algorithms can enhance image resolution, these algorithms are not specifically designed for the characteristics of remote-sensing images and cannot effectively recover high-resolution images. Therefore, this paper proposes a novel remote-sensing image super-resolution algorithm based on an efficient hybrid conditional diffusion model (EHC-DMSR). The algorithm applies the theory of diffusion models to remote-sensing image super-resolution. Firstly, the comprehensive features of low-resolution images are extracted through a transformer network and CNN to serve as conditions for guiding image generation. Furthermore, to constrain the diffusion model and generate more high-frequency information, a Fourier high-frequency spatial constraint is proposed to emphasize high-frequency spatial loss and optimize the reverse diffusion direction. To address the time-consuming issue of the diffusion model during the reverse diffusion process, a feature-distillation-based method is proposed to reduce the computational load of U-Net, thereby shortening the inference time without affecting the super-resolution performance. Extensive experiments on multiple test datasets demonstrated that our proposed algorithm not only achieves excellent results in quantitative evaluation metrics but also generates sharper super-resolved images with rich detailed information. Full article
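The paper's exact formulation of the Fourier high-frequency spatial constraint is not reproduced in the abstract; a minimal sketch of a loss of this general kind, assuming a simple circular low-frequency cutoff and an L1 penalty on spectral magnitudes (both choices are illustrative, not the authors'), might look like:

```python
import numpy as np

def fourier_hf_loss(sr, hr, radius=8):
    """Penalize magnitude differences only outside a low-frequency disc.

    sr, hr: 2-D arrays (super-resolved and ground-truth images).
    radius: illustrative cutoff separating low from high frequencies.
    """
    F_sr = np.fft.fftshift(np.fft.fft2(sr))
    F_hr = np.fft.fftshift(np.fft.fft2(hr))
    h, w = sr.shape
    yy, xx = np.mgrid[:h, :w]
    # boolean mask selecting the high-frequency region of the centered spectrum
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > radius ** 2
    return float(np.mean(np.abs(np.abs(F_sr[mask]) - np.abs(F_hr[mask]))))
```

In practice such a term would be added to the usual pixel-space loss; the magnitude-only comparison ignores phase, which is one of several possible design choices.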
(This article belongs to the Special Issue Advanced Super-resolution Methods in Remote Sensing)

25 pages, 11246 KiB  
Article
Enhancing Remote Sensing Image Super-Resolution Guided by Bicubic-Downsampled Low-Resolution Image
by Minkyung Chung, Minyoung Jung and Yongil Kim
Remote Sens. 2023, 15(13), 3309; https://doi.org/10.3390/rs15133309 - 28 Jun 2023
Viewed by 1446
Abstract
Image super-resolution (SR) is a significant technique in image processing as it enhances the spatial resolution of images, enabling various downstream applications. Based on recent achievements in SR studies in computer vision, deep-learning-based SR methods have been widely investigated for remote sensing images. In this study, we proposed a two-stage approach called bicubic-downsampled low-resolution (LR) image-guided generative adversarial network (BLG-GAN) for remote sensing image super-resolution. The proposed BLG-GAN method divides the image super-resolution procedure into two stages: LR image transfer and super-resolution. In the LR image transfer stage, real-world LR images are restored to less blurry and noisy bicubic-like LR images using guidance from synthetic LR images obtained through bicubic downsampling. Subsequently, the generated bicubic-like LR images are used as inputs to the SR network, which learns the mapping between the bicubic-like LR image and the corresponding high-resolution (HR) image. By approaching the SR problem as finding optimal solutions for subproblems, the BLG-GAN achieves superior results compared to state-of-the-art models, even with a smaller overall capacity of the SR network. As the BLG-GAN utilizes a synthetic LR image as a bridge between real-world LR and HR images, the proposed method shows improved image quality compared to the SR models trained to learn the direct mapping from a real-world LR image to an HR image. Experimental results on HR satellite image datasets demonstrate the effectiveness of the proposed method in improving perceptual quality and preserving image fidelity. Full article

26 pages, 14213 KiB  
Article
CGC-Net: A Context-Guided Constrained Network for Remote-Sensing Image Super Resolution
by Pengcheng Zheng, Jianan Jiang, Yan Zhang, Chengxiao Zeng, Chuanchuan Qin and Zhenghao Li
Remote Sens. 2023, 15(12), 3171; https://doi.org/10.3390/rs15123171 - 18 Jun 2023
Viewed by 1342
Abstract
In remote-sensing image processing tasks, images with higher resolution always result in better performance on downstream tasks, such as scene classification and object segmentation. However, objects in remote-sensing images often have low resolution and complex textures due to the imaging environment. Therefore, effectively reconstructing high-resolution remote-sensing images remains challenging. To address this concern, we investigate embedding context information and object priors from remote-sensing images into current deep learning super-resolution models. Hence, this paper proposes a novel remote-sensing image super-resolution method called Context-Guided Constrained Network (CGC-Net). In CGC-Net, we first design a simple but effective method to generate inverse distance maps from the remote-sensing image segmentation maps as prior information. Combined with prior information, we propose a Global Context-Constrained Layer (GCCL) to extract high-quality features with global context constraints. Furthermore, we introduce a Guided Local Feature Enhancement Block (GLFE) to enhance the local texture context via a learnable guided filter. Additionally, we design a High-Frequency Consistency Loss (HFC Loss) to ensure gradient consistency between the reconstructed image (HR) and the original high-quality image (HQ). Unlike existing remote-sensing image super-resolution methods, the proposed CGC-Net achieves superior visual results and reports new state-of-the-art (SOTA) performance on three popular remote-sensing image datasets, demonstrating its effectiveness in remote-sensing image super-resolution (RSI-SR) tasks. Full article
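CGC-Net's inverse distance maps are derived from segmentation maps, but their precise definition is not given in the abstract. One plausible reading, sketched here with a brute-force nearest-object-pixel search and an illustrative 1/(1 + d) weighting, is:

```python
import numpy as np

def inverse_distance_map(seg):
    """Map each pixel to 1/(1 + distance to the nearest object pixel).

    seg: 2-D binary array, nonzero where the segmentation marks an object.
    Brute force (O(H * W * K)); fine for a sketch, too slow for real use.
    """
    obj = np.argwhere(seg > 0)
    h, w = seg.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            d = np.sqrt(((obj - np.array([i, j])) ** 2).sum(axis=1)).min()
            out[i, j] = 1.0 / (1.0 + d)  # 1 on objects, decaying with distance
    return out
```

A production version would use a distance transform (e.g. `scipy.ndimage.distance_transform_edt`) instead of the double loop.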

26 pages, 8571 KiB  
Article
SA-GAN: A Second Order Attention Generator Adversarial Network with Region Aware Strategy for Real Satellite Images Super Resolution Reconstruction
by Jiayi Zhao, Yong Ma, Fu Chen, Erping Shang, Wutao Yao, Shuyan Zhang and Jin Yang
Remote Sens. 2023, 15(5), 1391; https://doi.org/10.3390/rs15051391 - 01 Mar 2023
Cited by 5 | Viewed by 2601
Abstract
High-resolution (HR) remote sensing images have important applications in many scenarios, and improving the resolution of remote sensing images via algorithms is one of the key research fields. However, current super-resolution (SR) algorithms, which are trained on synthetic datasets, tend to have poor performance in real-world low-resolution (LR) images. Moreover, due to the inherent complexity of real-world remote sensing images, current models are prone to color distortion, blurred edges, and unrealistic artifacts. To address these issues, real-SR datasets using the Gao Fen (GF) satellite images at different spatial resolutions have been established to simulate real degradation situations; moreover, a second-order attention generator adversarial attention network (SA-GAN) model based on real-world remote sensing images is proposed to implement the SR task. In the generator network, a second-order channel attention mechanism and a region-level non-local module are used to fully utilize the a priori information in low-resolution (LR) images, as well as adopting region-aware loss to suppress artifact generation. Experiments on test data demonstrate that the model delivers good performance for quantitative metrics, and the visual quality outperforms that of previous approaches. The Frechet inception distance score (FID) and the learned perceptual image patch similarity (LPIPS) value using the proposed method are improved by 17.67% and 6.61%, respectively. Migration experiments in real scenarios also demonstrate the effectiveness and robustness of the method. Full article

23 pages, 8641 KiB  
Article
Blind Super-Resolution for SAR Images with Speckle Noise Based on Deep Learning Probabilistic Degradation Model and SAR Priors
by Chongqi Zhang, Ziwen Zhang, Yao Deng, Yueyi Zhang, Mingzhe Chong, Yunhua Tan and Pukun Liu
Remote Sens. 2023, 15(2), 330; https://doi.org/10.3390/rs15020330 - 05 Jan 2023
Cited by 5 | Viewed by 2399
Abstract
As an active microwave coherent imaging technology, synthetic aperture radar (SAR) images suffer from severe speckle noise and low-resolution problems due to the limitations of the imaging system, which cause difficulties in image interpretation and target detection. However, existing SAR super-resolution (SR) methods usually reconstruct the images with a fixed degradation model and hardly consider multiplicative speckle noise; meanwhile, most SR models are trained on synthetic datasets in which the low-resolution (LR) images are down-sampled from their high-resolution (HR) counterparts. These constraints cause a serious domain gap between synthetic and real SAR images. To solve the above problems, this paper proposes an unsupervised blind SR method for SAR images by introducing SAR priors in a cycle-GAN framework. First, a learnable probabilistic degradation model combined with SAR noise priors is presented to accommodate the various SAR images produced by different platforms. Then, a degradation model and an SR model are trained simultaneously in a unified cycle-GAN framework to learn the intrinsic relationship between the HR and LR domains. The model is trained with real LR and HR SAR images instead of synthetic paired images to overcome the domain gap. Finally, experimental results on both synthetic and real SAR images demonstrate the high performance of the proposed method in terms of image quality and visual perception. Additionally, the proposed SR method shows tremendous potential for target detection tasks by significantly reducing missed detections and false alarms. Full article
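Multiplicative speckle of the kind the authors model is commonly simulated with unit-mean gamma noise (the standard multi-look intensity model); a sketch under that assumption, with an illustrative `looks` parameter:

```python
import numpy as np

def add_speckle(intensity, looks=4, seed=0):
    """Simulate multi-look speckle: multiply by unit-mean gamma noise.

    For an L-look intensity image the noise is Gamma(shape=L, scale=1/L),
    so the expected value of the speckled image equals the clean image.
    """
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=intensity.shape)
    return intensity * noise
```

Increasing `looks` reduces the noise variance (1/L for unit-mean gamma), which matches the visual smoothing of multi-look SAR products.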

22 pages, 2026 KiB  
Article
Single-Image Super Resolution of Remote Sensing Images with Real-World Degradation Modeling
by Jizhou Zhang, Tingfa Xu, Jianan Li, Shenwang Jiang and Yuhan Zhang
Remote Sens. 2022, 14(12), 2895; https://doi.org/10.3390/rs14122895 - 17 Jun 2022
Cited by 15 | Viewed by 5011
Abstract
Limited resolution is one of the most important factors hindering the application of remote sensing images (RSIs). Single-image super resolution (SISR) is a technique to improve the spatial resolution of digital images and has attracted the attention of many researchers. In recent years, with the advancement of deep learning (DL) frameworks, many DL-based SISR models have been proposed and achieved state-of-the-art performance; however, most SISR models for RSIs use the bicubic downsampler to construct low-resolution (LR) and high-resolution (HR) training pairs. Considering that the quality of the actual RSIs depends on a variety of factors, such as illumination, atmosphere, imaging sensor responses, and signal processing, training on “ideal” datasets results in a dramatic drop in model performance on real RSIs. To address this issue, we propose to build a more realistic training dataset by modeling the degradation with blur kernels and imaging noises. We also design a novel residual balanced attention network (RBAN) as a generator to estimate super-resolution results from the LR inputs. To encourage RBAN to generate more realistic textures, we apply a UNet-shape discriminator for adversarial training. Both referenced evaluations on synthetic data and non-referenced evaluations on actual images were carried out. Experimental results validate the effectiveness of the proposed framework, and our model exhibits state-of-the-art performance in quantitative evaluation and visual quality. We believe that the proposed framework can facilitate super-resolution techniques from research to practical applications in RSIs processing. Full article
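The degradation pipeline the abstract describes (blur kernels plus imaging noise, rather than plain bicubic downsampling) can be sketched as follows; the Gaussian kernel, scale factor, and additive-Gaussian noise here are illustrative stand-ins, not the authors' exact settings:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 2-D Gaussian blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def degrade(hr, scale=4, sigma=1.5, noise_std=0.01, seed=0):
    """Blur an HR image, subsample it, and add sensor-like noise."""
    k = gaussian_kernel(sigma=sigma)
    kh, kw = k.shape
    # embed the kernel in an image-sized array centered at the origin,
    # then blur via circular FFT convolution
    pad = np.zeros_like(hr)
    pad[:kh, :kw] = k
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(hr) * np.fft.fft2(pad)))
    lr = blurred[::scale, ::scale]                 # subsample
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, noise_std, lr.shape)  # additive noise
```

Training pairs built this way (HR plus its synthesized LR) stand in for the "ideal" bicubic pairs the paper argues against; a faithful reproduction would sample kernels and noise levels rather than fixing them.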

22 pages, 2443 KiB  
Article
Unsupervised Remote Sensing Image Super-Resolution Guided by Visible Images
by Zili Zhang, Yan Tian, Jianxiang Li and Yiping Xu
Remote Sens. 2022, 14(6), 1513; https://doi.org/10.3390/rs14061513 - 21 Mar 2022
Cited by 6 | Viewed by 2983
Abstract
Remote sensing images are widely used in many applications. However, limited by the sensors, it is difficult to obtain high-resolution (HR) remote sensing images. In this paper, we propose a novel unsupervised cross-domain super-resolution method devoted to reconstructing a low-resolution (LR) remote sensing image guided by an unpaired HR visible natural image. To this end, an unsupervised visible image-guided remote sensing image super-resolution network (UVRSR) is built. The network is divided into two learnable branches: a visible image-guided branch (VIG) and a remote sensing image-guided branch (RIG). As HR visible images can provide rich textures and sufficient high-frequency information, the purpose of VIG is to treat them as targets and make full use of their advantages in reconstruction. Specifically, we first use a CycleGAN to drag the LR visible natural images to the remote sensing domain; then, we apply an SR network to upscale these simulated remote-sensing-domain LR images. However, the domain gap between SR remote sensing images and HR visible targets is massive. To enforce domain consistency, we propose a novel domain-ruled discriminator in the reconstruction. Furthermore, inspired by the zero-shot super-resolution network (ZSSR), which explores the internal information of remote sensing images, we add a remote sensing domain inner study to train the SR network in RIG. Extensive experiments show that UVRSR achieves superior results compared with state-of-the-art unpaired and remote sensing SR methods on several challenging remote sensing image datasets. Full article

23 pages, 9269 KiB  
Article
Saliency-Guided Remote Sensing Image Super-Resolution
by Baodi Liu, Lifei Zhao, Jiaoyue Li, Hengle Zhao, Weifeng Liu, Ye Li, Yanjiang Wang, Honglong Chen and Weijia Cao
Remote Sens. 2021, 13(24), 5144; https://doi.org/10.3390/rs13245144 - 17 Dec 2021
Cited by 13 | Viewed by 3120
Abstract
Deep learning has recently attracted extensive attention and developed significantly in remote sensing image super-resolution. Although remote sensing images are composed of various scenes, most existing methods treat each part equally. These methods ignore the salient objects (e.g., buildings, airplanes, and vehicles) that have more complex structures and require more attention in the recovery process. This paper proposes a saliency-guided remote sensing image super-resolution (SG-GAN) method to alleviate this issue while maintaining the merits of GAN-based methods for generating perceptually pleasant details. More specifically, we exploit the saliency maps of images to guide the recovery in two aspects: on the one hand, the saliency detection network in SG-GAN learns high-resolution saliency maps to provide additional structure priors; on the other hand, a well-designed saliency loss imposes a second-order restriction on the super-resolution process, which helps SG-GAN concentrate more on the salient objects of remote sensing images. Experimental results show that SG-GAN achieves competitive PSNR and SSIM compared with advanced super-resolution methods. Visual results demonstrate our superiority in restoring structures while generating remote sensing super-resolution images. Full article
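SG-GAN's saliency loss imposes a second-order restriction whose exact form the abstract does not give. A much simpler illustration of the underlying idea, weighting per-pixel reconstruction error by a saliency map (the 1 + normalized-saliency weighting is purely illustrative), is:

```python
import numpy as np

def saliency_weighted_l1(sr, hr, saliency):
    """L1 reconstruction loss with per-pixel weights boosted on salient regions.

    saliency: nonnegative 2-D map; weights end up in [1, 2], so salient
    pixels contribute up to twice as much to the loss.
    """
    w = 1.0 + saliency / (saliency.max() + 1e-8)
    return float(np.mean(w * np.abs(sr - hr)))
```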

20 pages, 15700 KiB  
Article
Single-Image Super-Resolution of Sentinel-2 Low Resolution Bands with Residual Dense Convolutional Neural Networks
by Luis Salgueiro, Javier Marcello and Verónica Vilaplana
Remote Sens. 2021, 13(24), 5007; https://doi.org/10.3390/rs13245007 - 09 Dec 2021
Cited by 9 | Viewed by 3494
Abstract
Sentinel-2 satellites have become one of the main resources for Earth observation images because they are free of charge, have great spatial coverage, and revisit frequently. Sentinel-2 senses the same location at different spatial resolutions, generating a multi-spectral image with 13 bands at 10, 20, and 60 m/pixel. In this work, we propose a single-image super-resolution model based on convolutional neural networks that enhances the low-resolution bands (20 m and 60 m) to reach the maximal resolution sensed (10 m) at the same time, whereas other approaches provide two independent models, one for each group of LR bands. Our proposed model, named Sen2-RDSR, is made up of Residual in Residual blocks that produce two final outputs at maximal resolution, one for the 20 m/pixel bands and the other for the 60 m/pixel bands. The training is done in two stages, first focusing on the 20 m bands and then on the 60 m bands. Experimental results using six quality metrics (RMSE, SRE, SAM, PSNR, SSIM, ERGAS) show that our model outperforms other state-of-the-art approaches, and it is very effective and suitable as a preliminary step for land and coastal applications, such as studies involving pixel-based classification for Land-Use-Land-Cover mapping or the generation of vegetation indices. Full article
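Three of the six metrics listed have compact standard definitions worth recalling; the sketches below assume images scaled to a known peak and spectral bands on the last axis (both layout assumptions of this example, not of the paper):

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    return float(10.0 * np.log10(peak ** 2 / np.mean((x - y) ** 2)))

def sam(x, y):
    """Spectral Angle Mapper: mean angle (degrees) between per-pixel spectra.

    x, y: arrays with spectral bands on the last axis. SAM is invariant to
    per-pixel scaling of the spectra, unlike RMSE or PSNR.
    """
    num = (x * y).sum(axis=-1)
    den = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1)
    cos = np.clip(num / (den + 1e-12), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())
```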

21 pages, 13414 KiB  
Article
Cross-Dimension Attention Guided Self-Supervised Remote Sensing Single-Image Super-Resolution
by Wenzong Jiang, Lifei Zhao, Yanjiang Wang, Weifeng Liu and Baodi Liu
Remote Sens. 2021, 13(19), 3835; https://doi.org/10.3390/rs13193835 - 25 Sep 2021
Cited by 3 | Viewed by 1802
Abstract
In recent years, the application of deep learning has achieved a huge leap in the performance of remote sensing image super-resolution (SR). However, most of the existing SR methods employ bicubic downsampling of high-resolution (HR) images to obtain low-resolution (LR) images and use the resulting LR and HR images as training pairs. Such supervised methods, which use ideal-kernel (bicubic) downsampled images to train the network, degrade significantly when applied to realistic LR remote sensing images, usually producing blurry results. The main reason is that the degradation process of real remote sensing images is more complicated, so the training data cannot reflect the SR problem of real remote sensing images. Inspired by self-supervised methods, this paper proposes a cross-dimension attention guided self-supervised remote sensing single-image super-resolution method (CASSISR). It does not require pre-training on a dataset; it only utilizes the internal information reproducibility of a single image and uses the lower-resolution image downsampled from the input image to train the cross-dimension attention network (CDAN). The cross-dimension attention module (CDAM) selectively captures more useful internal duplicate information by modeling the interdependence of channel and spatial features and jointly learning their weights. The proposed CASSISR adapts well to real remote sensing image SR tasks. Extensive experiments show that CASSISR achieves superior performance compared with current state-of-the-art methods. Full article
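The self-supervised setup described above, training on a pair built from the input image alone, can be sketched like this; the box-average downsampler is a stand-in (ZSSR-style methods typically use bicubic or an estimated kernel):

```python
import numpy as np

def internal_pair(img, scale=2):
    """Build an (LR, HR) training pair from one image.

    The input plays the HR role and a further-downsampled copy plays the
    LR role; a network trained to map LR -> HR is then applied to the
    original image to super-resolve it.
    """
    h, w = img.shape
    img = img[:h - h % scale, :w - w % scale]   # crop to a multiple of scale
    lr = img.reshape(img.shape[0] // scale, scale,
                     img.shape[1] // scale, scale).mean(axis=(1, 3))
    return lr, img
```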

Other


11 pages, 2447 KiB  
Technical Note
Fusing Sentinel-2 and Landsat 8 Satellite Images Using a Model-Based Method
by Jakob Sigurdsson, Sveinn E. Armannsson, Magnus O. Ulfarsson and Johannes R. Sveinsson
Remote Sens. 2022, 14(13), 3224; https://doi.org/10.3390/rs14133224 - 05 Jul 2022
Cited by 8 | Viewed by 3508
Abstract
The Copernicus Sentinel-2 (S2) constellation comprises two satellites in a sun-synchronous orbit. The S2 sensors have three spatial resolutions: 10, 20, and 60 m. The Landsat 8 (L8) satellite has sensors that provide seasonal coverage at spatial resolutions of 15, 30, and 60 m. Many remote sensing applications require all data to be at the highest spatial resolution possible, i.e., 10 m for S2. To address this demand, researchers have proposed various methods that exploit the spectral and spatial correlations within multispectral data to sharpen the S2 bands to 10 m. In this study, we combined S2 and L8 data. An S2 sharpening method called Sentinel-2 Sharpening (S2Sharp) was modified to include the 30 m and 15 m spectral bands from L8 and to sharpen all bands (S2 and L8) to the highest resolution of the data, which was 10 m. The method was evaluated using both real and simulated data. Full article
