Remote Sensing Data Compression

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 51009

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor: Dr. Benoit Vozel
Institute of Electronics and Telecommunications IETR UMR CNRS 6164, University of Rennes, 22305 Lannion, France
Interests: blind estimation of degradation characteristics (noise, PSF); blind restoration of multicomponent images; multimodal image correction; multicomponent image compression; multi-channel adaptive processing of signals and images; unsupervised machine learning and deep learning; multi-mode remote sensing data processing; remote sensing

Guest Editor: Dr. Joan Serra-Sagristà
Department of Information and Communications Engineering, Universitat Autònoma de Barcelona, Campus UAB, 08193 Cerdanyola del Vallès, Spain
Interests: remote sensing data compression; source data coding

Special Issue Information

Dear Colleagues,

Remote sensing has become a standard tool for solving important tasks in areas such as agriculture, forestry, hydrology, ecology, and urban planning. A huge amount of data is acquired each day and has to be transferred to image processing centers and/or to customers. Due to downlink and storage limitations on the one hand, and the rapid improvement of spatial resolution and the growing number of channels on the other, compression has to be applied on board and/or on the ground.

Despite the great successes achieved in the design of data compression methods, many tasks remain unsolved. This Special Issue is intended to address all aspects of compression methodology and applications, including but not limited to:

  • Advances in lossless compression;
  • Multi- and hyperspectral image compression;
  • Radar image compression;
  • Applications of remote sensing data compression;
  • Compression standards;
  • Practical implementation of image compression techniques;
  • Data compression hardware and software;
  • Impact of data compression on solving classification and identification tasks.
Prof. Vladimir Lukin
Dr. Benoit Vozel
Dr. Joan Serra-Sagristà
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (16 papers)


Editorial


5 pages, 211 KiB  
Editorial
Editorial to Special Issue “Remote Sensing Data Compression”
by Benoit Vozel, Vladimir Lukin and Joan Serra-Sagristà
Remote Sens. 2021, 13(18), 3727; https://doi.org/10.3390/rs13183727 - 17 Sep 2021
Cited by 1 | Viewed by 1624
Abstract
A huge amount of remote sensing data is acquired each day and transferred to image processing centers and/or to customers. Due to different limitations, compression has to be applied on board and/or on the ground. This Special Issue collects 15 papers dealing with remote sensing data compression, introducing solutions for both lossless and lossy compression, analyzing the impact of compression on different processes, investigating the suitability of neural networks for compression, and exploring low-complexity hardware and software approaches that deliver competitive coding performance.

Research


19 pages, 6662 KiB  
Article
Compressive Underwater Sonar Imaging with Synthetic Aperture Processing
by Ha-min Choi, Hae-sang Yang and Woo-jae Seong
Remote Sens. 2021, 13(10), 1924; https://doi.org/10.3390/rs13101924 - 14 May 2021
Cited by 9 | Viewed by 2462
Abstract
Synthetic aperture sonar (SAS) is a technique that acquires an underwater image by synthesizing the signal received by the sonar as it moves. By forming a synthetic aperture, the sonar overcomes physical limitations and shows superior resolution compared with a side-scan sonar, another technique for obtaining underwater images. Conventional SAS algorithms require a high concentration of sampling in the time and space domains according to Nyquist theory. Because conventional SAS algorithms rely on matched filtering, side lobes are generated, which degrades imaging performance. To overcome these shortcomings of conventional SAS algorithms, namely the low imaging performance and the requirement for high-level sampling, this paper proposes SAS algorithms that apply compressive sensing (CS). CS-based SAS imaging algorithms were formulated for a single sensor and for a uniform line array, and were verified through simulation and experimental data. The simulation showed better resolution than the ω-k algorithm, one of the representative conventional SAS algorithms, with minimal performance degradation from side lobes. The experimental data confirmed that the proposed method is superior and robust with respect to sensor loss.
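For readers unfamiliar with compressive sensing, the sketch below illustrates the generic recovery step that CS-based imaging relies on: a sparse scene is recovered from far fewer random measurements than Nyquist sampling would require, here via orthogonal matching pursuit from scikit-learn. It is a toy illustration of the CS principle under assumed dimensions, not the authors' SAS formulation.

```python
# Toy compressive-sensing recovery (not the paper's SAS algorithm):
# recover a k-sparse scene x from m << n random measurements y = Phi @ x.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                                  # scene size, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse scene
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random sensing matrix
y = Phi @ x                                                   # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
print("reconstruction error:", np.linalg.norm(omp.coef_ - x))
```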

23 pages, 2687 KiB  
Article
Real-Time Hyperspectral Data Transmission for UAV-Based Acquisition Platforms
by José M. Melián, Adán Jiménez, María Díaz, Alejandro Morales, Pablo Horstrand, Raúl Guerra, Sebastián López and José F. López
Remote Sens. 2021, 13(5), 850; https://doi.org/10.3390/rs13050850 - 25 Feb 2021
Cited by 12 | Viewed by 3431
Abstract
Hyperspectral sensors mounted on unmanned aerial vehicles (UAVs) offer many benefits for different remote sensing applications by combining the capacity to acquire a large amount of information, which allows distinguishing or identifying different materials, with the flexibility of UAVs for planning different kinds of flight missions. However, further developments are still needed to take advantage of the combination of these technologies for applications that require a supervised or semi-supervised process, such as defense, surveillance, or search and rescue missions. The main reason is that, in these scenarios, the acquired data typically need to be rapidly transferred to a ground station where they can be processed and/or visualized in real time by an operator for taking decisions on the fly. This is a very challenging task due to the high acquisition data rate of hyperspectral sensors and the limited transmission bandwidth. This research focuses on providing a working solution to the described problem by rapidly compressing the acquired hyperspectral data prior to their transmission to the ground station. It has been tested using two different NVIDIA boards as on-board computers, the Jetson Xavier NX and the Jetson Nano. The Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA) has been used for compressing the acquired data. The entire process, including the data compression and transmission, has been optimized and parallelized at different levels, also using the low-power graphics processing units (LPGPUs) embedded in the Jetson boards. Finally, several tests have been carried out to evaluate the overall performance of the proposed design. The obtained results demonstrate real-time performance when using the Jetson Xavier NX for all the configurations that could potentially be used during a real mission. With the Jetson Nano, real-time performance has only been achieved with the less restrictive configurations, which leaves room for further improvements and optimizations to reduce the computational burden of the overall design and increase its efficiency.
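The key engineering idea in the paragraph above is overlapping compression with transmission so that neither stage stalls the other. The sketch below shows a generic producer/consumer pipeline of that kind, with zlib and an in-memory queue standing in for HyperLCA and the radio link; frame sizes and queue depth are assumptions, and this is not the authors' Jetson implementation.

```python
# Generic compress-while-transmitting pipeline sketch (stand-ins only).
import queue
import threading
import zlib
import numpy as np

frames = [np.random.default_rng(i).integers(0, 4096, (1, 1024, 300), dtype=np.uint16)
          for i in range(8)]                       # toy hyperspectral frames
tx_queue = queue.Queue(maxsize=4)                  # bounded buffer between stages

def compressor():
    for frame in frames:                           # acquire -> compress
        tx_queue.put(zlib.compress(frame.tobytes(), level=1))
    tx_queue.put(None)                             # end-of-stream marker

def transmitter():
    sent = 0
    while (packet := tx_queue.get()) is not None:  # stand-in for the downlink
        sent += len(packet)
    print("bytes transmitted:", sent)

threading.Thread(target=compressor).start()
transmitter()
```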

27 pages, 2004 KiB  
Article
Reduced-Complexity End-to-End Variational Autoencoder for on Board Satellite Image Compression
by Vinicius Alves de Oliveira, Marie Chabert, Thomas Oberlin, Charly Poulliat, Mickael Bruno, Christophe Latry, Mikael Carlavan, Simon Henrot, Frederic Falzon and Roberto Camarero
Remote Sens. 2021, 13(3), 447; https://doi.org/10.3390/rs13030447 - 27 Jan 2021
Cited by 26 | Viewed by 3861
Abstract
Recently, convolutional neural networks have been successfully applied to lossy image compression. End-to-end optimized autoencoders, possibly variational, are able to dramatically outperform traditional transform coding schemes in terms of rate-distortion trade-off; however, this comes at the cost of a higher computational complexity. An intensive training step on huge databases allows autoencoders to learn jointly the image representation and its probability distribution, possibly using a non-parametric density model or a hyperprior auxiliary autoencoder to eliminate the need for prior knowledge. However, in the context of on-board satellite compression, time and memory complexities are subject to strong constraints. The aim of this paper is to design a complexity-reduced variational autoencoder that meets these constraints while maintaining performance. Apart from a network dimension reduction that systematically targets each parameter of the analysis and synthesis transforms, we propose a simplified entropy model that preserves the adaptability to the input image. Indeed, a statistical analysis performed on satellite images shows that the Laplacian distribution fits most features of their representation. A complex non-parametric distribution fitting or a cumbersome hyperprior auxiliary autoencoder can thus be replaced by a simple parametric estimation. The proposed complexity-reduced autoencoder outperforms the Consultative Committee for Space Data Systems standard (CCSDS 122.0-B) while maintaining a competitive performance, in terms of rate-distortion trade-off, in comparison with state-of-the-art learned image compression schemes.
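To make the simplified entropy model concrete, the sketch below fits a Laplacian to a latent feature map (location from the median, scale from the mean absolute deviation) and estimates the bits needed to code its integer-quantized values. It is a minimal illustration of parametric rate estimation, not the authors' network or entropy coder.

```python
# Minimal parametric rate estimate under a fitted Laplacian (illustration only).
import numpy as np
from scipy.stats import laplace

def laplacian_rate_bits(latent):
    """Estimated bits to code round(latent) under a Laplacian fitted to latent."""
    mu = np.median(latent)                       # location estimate
    b = np.mean(np.abs(latent - mu)) + 1e-9      # scale estimate
    q = np.round(latent)                         # integer quantization
    p = laplace.cdf(q + 0.5, mu, b) - laplace.cdf(q - 0.5, mu, b)
    return -np.sum(np.log2(np.maximum(p, 1e-12)))

rng = np.random.default_rng(0)
latent = rng.laplace(loc=0.0, scale=2.0, size=(64, 64))
print("estimated rate:", laplacian_rate_bits(latent) / latent.size, "bits/sample")
```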

21 pages, 10243 KiB  
Article
Spectral–Spatial Feature Partitioned Extraction Based on CNN for Multispectral Image Compression
by Fanqiang Kong, Kedi Hu, Yunsong Li, Dan Li and Shunmin Zhao
Remote Sens. 2021, 13(1), 9; https://doi.org/10.3390/rs13010009 - 22 Dec 2020
Cited by 21 | Viewed by 2872
Abstract
Recently, the rapid development of multispectral imaging technology has received great attention from many fields, which inevitably involves the problems of image transmission and storage. To address this issue, a novel end-to-end multispectral image compression method based on spectral–spatial feature partitioned extraction is proposed. The whole multispectral image compression framework is based on a convolutional neural network (CNN), whose innovation lies in the feature extraction module, which is divided into two parallel parts, one for spectral features and the other for spatial features. First, the spectral feature extraction module extracts spectral features independently, while the spatial feature extraction module obtains the separated spatial features. After feature extraction, the spectral and spatial features are fused element by element, followed by downsampling, which reduces the size of the feature maps. Then, the data are converted to a bit-stream through quantization and lossless entropy encoding. To make the data more compact, a rate-distortion optimizer is added to the network. The decoder is approximately the inverse process of the encoder. For comparison, the proposed method is tested along with JPEG2000, 3D-SPIHT, and ResConv, another CNN-based algorithm, on datasets from the Landsat-8 and WorldView-3 satellites. The results show that the proposed algorithm outperforms the other methods at the same bit rate.
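A minimal PyTorch sketch of the parallel spectral/spatial idea follows: 1x1 convolutions mix only the band dimension, 3x3 convolutions capture spatial context, and the two feature maps are fused element-wise before a strided downsampling convolution. Layer counts and channel widths are assumptions for illustration, not the authors' architecture.

```python
# Illustrative two-branch (spectral + spatial) encoder block in PyTorch.
import torch
import torch.nn as nn

class SpectralSpatialEncoder(nn.Module):
    def __init__(self, bands=8, feat=32):
        super().__init__()
        # spectral branch: 1x1 convolutions act only across bands
        self.spectral = nn.Sequential(
            nn.Conv2d(bands, feat, kernel_size=1), nn.ReLU(),
            nn.Conv2d(feat, feat, kernel_size=1))
        # spatial branch: 3x3 convolutions capture neighborhood structure
        self.spatial = nn.Sequential(
            nn.Conv2d(bands, feat, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1))
        # element-wise fusion is followed by a strided convolution (downsampling)
        self.down = nn.Conv2d(feat, feat, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        return self.down(self.spectral(x) + self.spatial(x))

y = SpectralSpatialEncoder()(torch.randn(1, 8, 64, 64))
print(y.shape)   # torch.Size([1, 32, 32, 32])
```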

35 pages, 9685 KiB  
Article
Lossy Compression of Multichannel Remote Sensing Images with Quality Control
by Vladimir Lukin, Irina Vasilyeva, Sergey Krivenko, Fangfang Li, Sergey Abramov, Oleksii Rubel, Benoit Vozel, Kacem Chehdi and Karen Egiazarian
Remote Sens. 2020, 12(22), 3840; https://doi.org/10.3390/rs12223840 - 23 Nov 2020
Cited by 21 | Viewed by 3183
Abstract
Lossy compression is widely used to decrease the size of multichannel remote sensing data. Alongside this positive effect, lossy compression may have a negative outcome, such as degraded image classification. Thus, if possible, lossy compression should be carried out carefully, controlling the quality of the compressed images. In this paper, the dependence between the classification accuracy of maximum likelihood and neural network classifiers applied to three-channel test and real-life images and the quality of the compressed images, characterized by standard and visual quality metrics, is studied. The following is demonstrated. First, classification accuracy starts to decrease faster once image quality, as the compression ratio increases, reaches the distortion visibility threshold. Second, the classes with a wider distribution of features start to “take pixels” from classes with narrower distributions of features. Third, classification accuracy might depend essentially on the training methodology, i.e., whether features are determined from original data or from compressed images. Finally, the drawbacks of pixel-wise classification are shown and some recommendations on how to improve classification accuracy are given.
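The notion of controlling the quality of compressed images can be pictured as a simple feedback loop: compress more aggressively until a quality metric crosses a chosen threshold, then stop. The toy sketch below does this with JPEG (via Pillow) and PSNR as stand-ins for the multichannel coders and metrics studied in the paper; the threshold value is an arbitrary assumption.

```python
# Toy quality-controlled compression loop (JPEG/PSNR as stand-ins).
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

img = np.uint8(np.random.default_rng(0).integers(0, 256, (128, 128, 3)))
for quality in range(95, 4, -10):                 # increasingly aggressive compression
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    rec = np.asarray(Image.open(buf))
    if psnr(img, rec) < 35.0:                     # assumed visibility threshold
        break
print("selected JPEG quality:", quality)
```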

37 pages, 5546 KiB  
Article
FPGA-Based On-Board Hyperspectral Imaging Compression: Benchmarking Performance and Energy Efficiency against GPU Implementations
by Julián Caba, María Díaz, Jesús Barba, Raúl Guerra, Jose A. de la Torre and Sebastián López
Remote Sens. 2020, 12(22), 3741; https://doi.org/10.3390/rs12223741 - 13 Nov 2020
Cited by 16 | Viewed by 3816
Abstract
Remote-sensing platforms, such as unmanned aerial vehicles, are characterized by a limited power budget and low-bandwidth downlinks. Therefore, handling hyperspectral data in this context can jeopardize the operational time of the system. FPGAs have traditionally been regarded as the most power-efficient computing platforms. However, there is little experimental evidence to support this claim, which is especially critical since the actual behavior of solutions based on reconfigurable technology is highly dependent on the type of application. In this work, a highly optimized FPGA accelerator of the novel HyperLCA algorithm has been developed and thoroughly analyzed in terms of performance and power efficiency. In this regard, a modification of the aforementioned lossy compression solution has also been proposed so that it can be efficiently executed on FPGA devices using fixed-point arithmetic. Single- and multi-core versions of the reconfigurable computing platform are compared with three GPU-based implementations of the algorithm on three NVIDIA computing boards: Jetson Nano, Jetson TX2 and Jetson Xavier NX. Results show that the single-core version of our FPGA-based solution fulfils the real-time requirements of a real-life hyperspectral application using a mid-range Xilinx Zynq-7000 SoC chip (XC7Z020-CLG484). Performance levels of the custom hardware accelerator are above the figures obtained by the Jetson Nano and TX2 boards, and power efficiency is higher for smaller sizes of the image block to be processed. To close the performance gap between our proposal and the Jetson Xavier NX, a multi-core version is proposed. The results demonstrate that a solution based on several instances of the FPGA hardware compressor core achieves similar levels of performance to the state-of-the-art GPU, with better efficiency in terms of frames processed per watt.
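The fixed-point modification mentioned above can be pictured with a small Q-format example: real-valued samples are scaled by a power of two, rounded to integers, and products are shifted back by the same number of bits. The format below (12 fractional bits) is an assumption for illustration, not the precision used in the paper's FPGA design.

```python
# Minimal Q-format fixed-point sketch (illustrative precision only).
import numpy as np

FRAC_BITS = 12                       # assumed number of fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return np.round(x * SCALE).astype(np.int32)

def fixed_mul(a, b):
    # multiply in int64, then shift right to stay in the same Q-format
    return (a.astype(np.int64) * b) >> FRAC_BITS

x = np.array([0.7071, -1.25, 3.1])
y = np.array([1.5, 0.333, -0.01])
approx = fixed_mul(to_fixed(x), to_fixed(y)) / SCALE
print(np.abs(approx - x * y))        # quantization error of the fixed-point product
```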

20 pages, 1511 KiB  
Article
An FPGA Accelerator for Real-Time Lossy Compression of Hyperspectral Images
by Daniel Báscones, Carlos González and Daniel Mozos
Remote Sens. 2020, 12(16), 2563; https://doi.org/10.3390/rs12162563 - 09 Aug 2020
Cited by 13 | Viewed by 3488
Abstract
Hyperspectral images offer great possibilities for remote studies, but can be difficult to manage due to their size. Compression helps with storage and transmission, and many efforts have been made towards standardizing compression algorithms, especially in the lossless and near-lossless domains. For long-term storage, lossy compression is also of interest, but its complexity has kept it away from real-time performance. In this paper, JYPEC, a lossy hyperspectral compression algorithm that combines PCA and JPEG2000, is accelerated using an FPGA. A tier 1 coder (a key and the most time-consuming step in JPEG2000 compression) was implemented in a heavily pipelined fashion. Results showed a performance comparable to that of existing 0.18 μm CMOS implementations, all while keeping a small footprint in FPGA resources. This enabled the acceleration of the most complex step of JYPEC, bringing the total execution time below the real-time constraint.
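The PCA stage that JYPEC applies before JPEG2000 coding amounts to projecting each pixel's spectrum onto the leading eigenvectors of the band covariance matrix. The sketch below shows that spectral-decorrelation step on a toy cube; the retained component count is an arbitrary assumption, and the JPEG2000 and FPGA parts are not reproduced.

```python
# PCA spectral decorrelation of a toy hyperspectral cube (illustration only).
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((100, 100, 50))                 # rows x cols x bands
X = cube.reshape(-1, cube.shape[-1])              # pixels as rows, bands as columns
mean = X.mean(axis=0)
cov = np.cov(X - mean, rowvar=False)              # band covariance matrix
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
components = eigvec[:, order[:10]]                # keep the 10 strongest components
scores = (X - mean) @ components                  # decorrelated planes to be coded
print(scores.shape)                               # (10000, 10)
```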

18 pages, 1237 KiB  
Article
Analysis of Variable-Length Codes for Integer Encoding in Hyperspectral Data Compression with the k2-Raster Compact Data Structure
by Kevin Chow, Dion Eustathios Olivier Tzamarias, Miguel Hernández-Cabronero, Ian Blanes and Joan Serra-Sagristà
Remote Sens. 2020, 12(12), 1983; https://doi.org/10.3390/rs12121983 - 20 Jun 2020
Cited by 5 | Viewed by 2161
Abstract
This paper examines various variable-length encoders that provide integer encoding for hyperspectral scene data within a k2-raster compact data structure. This compact data structure leads to a compression ratio similar to that produced by some of the classical compression techniques, while also providing direct access to its data elements for queries without requiring any decompression. The selection of the integer encoder is critical for obtaining a competitive performance considering both compression ratio and access time. In this research, we show experimental results for different integer encoders such as Rice, Simple9, Simple16, PForDelta codes, and DACs. Further, a method to determine an appropriate k value for building a k2-raster compact data structure with competitive performance is discussed.
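Of the encoders listed above, Rice coding is the simplest to show in a few lines: each non-negative integer is split into a unary-coded quotient and a k-bit remainder. The sketch below is a plain illustration of that code (the parameter k is chosen ad hoc), not the k2-raster implementation evaluated in the paper.

```python
# Rice coding of non-negative integers as bit strings (illustration only).
def rice_encode(value, k):
    """Unary-coded quotient, a terminating 0, then a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = bits.index("0")                  # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

for v in (0, 5, 17, 130):
    code = rice_encode(v, k=3)
    assert rice_decode(code, k=3) == v
    print(v, "->", code)
```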

33 pages, 30497 KiB  
Article
Lossy Compression of Multispectral Satellite Images with Application to Crop Thematic Mapping: A HEVC Comparative Study
by Miloš Radosavljević, Branko Brkljač, Predrag Lugonja, Vladimir Crnojević, Željen Trpovski, Zixiang Xiong and Dejan Vukobratović
Remote Sens. 2020, 12(10), 1590; https://doi.org/10.3390/rs12101590 - 16 May 2020
Cited by 14 | Viewed by 4091
Abstract
Remote sensing applications have gained in popularity in recent years, which has resulted in vast amounts of data being produced on a daily basis. Managing and delivering large sets of data becomes extremely difficult and resource-demanding for the data vendors, but even more so for individual users and third-party stakeholders. Hence, research in the field of efficient remote sensing data handling and manipulation has become a very active research topic (from both storage and communication perspectives). Driven by the rapid growth in the volume of optical satellite measurements, in this work we explore lossy compression of multispectral satellite images. We give a comprehensive analysis of the High Efficiency Video Coding (HEVC) still-image intra coding part applied to multispectral image data. Thereafter, we analyze the impact of the distortions introduced by HEVC’s intra compression in the general case, as well as in the specific context of a crop classification application. Results show that HEVC’s intra coding achieves a better trade-off between compression gain and image quality compared to the standard JPEG 2000 solution. This is also reflected in the better performance of the designed pixel-based classifier in the analyzed crop classification task. We show that HEVC can obtain a compression ratio of up to 150:1 in the context of the specific application without significantly losing classification performance compared to a classifier trained and applied on raw data. In comparison, to maintain the same performance, JPEG 2000 allows a compression ratio of only up to 70:1.

21 pages, 2294 KiB  
Article
Spectral Imagery Tensor Decomposition for Semantic Segmentation of Remote Sensing Data through Fully Convolutional Networks
by Josué López, Deni Torres, Stewart Santos and Clement Atzberger
Remote Sens. 2020, 12(3), 517; https://doi.org/10.3390/rs12030517 - 05 Feb 2020
Cited by 9 | Viewed by 3750
Abstract
This work addresses two issues simultaneously: data compression at the input space and semantic segmentation. Semantic segmentation of remotely sensed multi- or hyperspectral images through deep learning (DL) artificial neural networks (ANN) delivers as output the corresponding matrix of pixels classified elementwise, achieving competitive performance metrics. With technological progress, current remote sensing (RS) sensors have more spectral bands and higher spatial resolution than before, which means a greater number of pixels in the same area. Nevertheless, the more spectral bands and the greater the number of pixels, the higher the computational complexity and the longer the processing times. Therefore, without dimensionality reduction, the classification task is challenging, particularly if large areas have to be processed. To solve this problem, our approach maps an RS image, or third-order tensor, into a core tensor representative of the input image, with the same spatial domain but a lower number of new tensor bands, using a Tucker decomposition (TKD). Then, a new input space with reduced dimensionality is built. To find the core tensor, the higher-order orthogonal iteration (HOOI) algorithm is used. A fully convolutional network (FCN) is employed afterwards to classify each core tensor at the pixel level. The whole framework, called here HOOI-FCN, achieves performance metrics competitive with state-of-the-art methods for semantic segmentation of RS multispectral images (MSI), while significantly reducing computational complexity and, thereby, processing time. We used a Sentinel-2 image data set from Central Europe as a case study, for which our framework outperformed other methods (including the FCN itself), with an average pixel accuracy (PA) of 90% (computational time ∼90 s) for nine spectral bands, and higher average PAs of 91.97% (∼36.5 s) and 91.56% (∼9.5 s) for seven and five new tensor bands, respectively.
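The dimensionality-reduction step can be pictured as compressing only the spectral mode of the H x W x B cube: unfold the cube so that bands are columns, take the leading right singular vectors, and project. The sketch below uses a plain truncated SVD on the band mode as a simplified stand-in for the HOOI-computed Tucker decomposition; sizes and the number of new tensor bands are assumptions.

```python
# Spectral-mode reduction of an image cube via truncated SVD (HOOI stand-in).
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 12))                    # H x W x B toy image
new_bands = 5

unfolded = cube.reshape(-1, cube.shape[-1])        # mode-3 unfolding: pixels x bands
_, _, vt = np.linalg.svd(unfolded, full_matrices=False)
factor = vt[:new_bands].T                          # B x new_bands spectral factor
core = (unfolded @ factor).reshape(cube.shape[0], cube.shape[1], new_bands)
print(core.shape)                                  # (64, 64, 5) reduced input for the FCN
```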

24 pages, 1244 KiB  
Article
Using Predictive and Differential Methods with K2-Raster Compact Data Structure for Hyperspectral Image Lossless Compression
by Kevin Chow, Dion Eustathios Olivier Tzamarias, Ian Blanes and Joan Serra-Sagristà
Remote Sens. 2019, 11(21), 2461; https://doi.org/10.3390/rs11212461 - 23 Oct 2019
Cited by 9 | Viewed by 2577
Abstract
This paper proposes a lossless coder for real-time processing and compression of hyperspectral images. After applying either a predictor or a differential encoder to reduce the bit rate of an image by exploiting the close similarity of pixels between neighboring bands, it uses a compact data structure called k2-raster to further reduce the bit rate. The advantage of using such a data structure is its compactness, with a size comparable to that produced by some classical compression algorithms, while still providing direct access to its content for queries without any need for full decompression. Experiments show that using k2-raster alone already achieves much lower rates (up to 55% reduction), and with preprocessing, the rates are further reduced by up to 64%. Finally, we provide experimental results showing that the predictor is able to produce a higher rate reduction than differential encoding.
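The differential preprocessing mentioned above exploits the similarity between neighboring bands: storing each band as its difference from the previous one yields small residuals that compact data structures and entropy coders handle well. A minimal, perfectly invertible sketch of that step follows (toy data; the k2-raster structure itself is not reproduced here).

```python
# Band-to-band differential preprocessing, losslessly invertible (illustration).
import numpy as np

def band_differences(cube):
    """cube: H x W x B integer image; returns a same-shape residual cube."""
    residuals = cube.copy()
    residuals[..., 1:] = cube[..., 1:] - cube[..., :-1]
    return residuals

def inverse_band_differences(residuals):
    return np.cumsum(residuals, axis=-1)           # exact reconstruction

rng = np.random.default_rng(0)
cube = rng.integers(0, 4096, size=(32, 32, 8))     # toy 12-bit cube
res = band_differences(cube)
assert np.array_equal(inverse_band_differences(res), cube)
print("mean absolute residual:", np.mean(np.abs(res[..., 1:])))
```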

18 pages, 636 KiB  
Article
Compression of Hyperspectral Scenes through Integer-to-Integer Spectral Graph Transforms
by Dion Eustathios Olivier Tzamarias, Kevin Chow, Ian Blanes and Joan Serra-Sagristà
Remote Sens. 2019, 11(19), 2290; https://doi.org/10.3390/rs11192290 - 30 Sep 2019
Cited by 5 | Viewed by 2538
Abstract
Hyperspectral images are depictions of scenes represented across many bands of the electromagnetic spectrum. The large size of these images, as well as their unique structure, calls for specialized data compression algorithms. The redundancies found between consecutive spectral components and within components themselves favor algorithms that exploit their particular structure. One novel technique with applications to hyperspectral compression is the use of spectral graph filterbanks such as the GraphBior transform, which leads to competitive results. Such existing graph-based filterbank transforms do not yield integer coefficients, making them appropriate only for lossy image compression schemes. We propose here two integer-to-integer transforms to be used in biorthogonal graph filterbanks for the lossless compression of hyperspectral scenes: first, by applying a Triangular Elementary Reversible Matrix decomposition to the GraphBior filters and, second, by adding rounding operations to the spectral graph lifting filters. We examine the merit of our contribution by testing its performance as a spatial transform on a corpus of hyperspectral images, and share our findings through a report and analysis of the results.
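The second of the two proposed constructions, rounding inside lifting steps, can be illustrated with the simplest possible case: a rounded Haar/S-transform on a pair of bands, where the rounding keeps every coefficient an integer yet the transform remains exactly invertible. The sketch below shows that mechanism only; it is not the GraphBior filterbank itself.

```python
# Integer-to-integer lifting (rounded Haar/S-transform) on two bands.
import numpy as np

def lift_forward(a, b):
    d = b - a                                    # predict step (integer difference)
    s = a + np.floor(d / 2).astype(d.dtype)      # update step with rounding
    return s, d

def lift_inverse(s, d):
    a = s - np.floor(d / 2).astype(d.dtype)      # undo the rounded update
    b = a + d
    return a, b

rng = np.random.default_rng(0)
band0 = rng.integers(0, 1024, size=(16, 16))
band1 = rng.integers(0, 1024, size=(16, 16))
s, d = lift_forward(band0, band1)
a, b = lift_inverse(s, d)
assert np.array_equal(a, band0) and np.array_equal(b, band1)   # perfect reconstruction
print("lossless round trip verified")
```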

16 pages, 910 KiB  
Article
Performance Impact of Parameter Tuning on the CCSDS-123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression Standard
by Ian Blanes, Aaron Kiely, Miguel Hernández-Cabronero and Joan Serra-Sagristà
Remote Sens. 2019, 11(11), 1390; https://doi.org/10.3390/rs11111390 - 11 Jun 2019
Cited by 17 | Viewed by 3347
Abstract
This article studies the performance impact of different parameter choices for the new CCSDS-123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression standard. This standard supersedes CCSDS-123.0-B-1 and extends it by incorporating a new near-lossless compression capability, as well as other new features. This article studies the coding performance impact of different choices for the principal parameters of the new extensions, in addition to reviewing related parameter choices for existing features. Experimental results include data from 16 different instruments with varying detector types, image dimensions, numbers of spectral bands, bit depths, levels of noise, levels of calibration, and other image characteristics. Guidelines are provided on how to adjust the parameters in relation to their coding performance impact.
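The near-lossless capability referred to above rests on a simple mechanism: prediction residuals are quantized with a uniform quantizer whose step size is tied to a user-chosen maximum absolute error. The toy sketch below shows only that error-bounding idea; it does not reproduce the standard's predictor, coder, or parameter names.

```python
# Error-bounded uniform quantization of prediction residuals (toy sketch).
import numpy as np

def quantize_residual(residual, max_error):
    step = 2 * max_error + 1
    return np.round(residual / step).astype(int)   # quantizer index

def dequantize(index, max_error):
    return index * (2 * max_error + 1)

rng = np.random.default_rng(0)
residual = rng.integers(-50, 51, size=1000)        # toy prediction residuals
m = 2                                              # chosen absolute error limit
rec = dequantize(quantize_residual(residual, m), m)
assert np.max(np.abs(rec - residual)) <= m
print("max reconstruction error:", np.max(np.abs(rec - residual)))
```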

Other


14 pages, 3076 KiB  
Technical Note
A Task-Driven Invertible Projection Matrix Learning Algorithm for Hyperspectral Compressed Sensing
by Shaofei Dai, Wenbo Liu, Zhengyi Wang and Kaiyu Li
Remote Sens. 2021, 13(2), 295; https://doi.org/10.3390/rs13020295 - 15 Jan 2021
Cited by 3 | Viewed by 1798
Abstract
The high complexity of the reconstruction algorithm is the main bottleneck of hyperspectral image (HSI) compression technology based on compressed sensing. Compressed sensing is an important tool for retrieving the maximum number of HSI scenes on the ground, but the complexity of its algorithms is limited by the energy and hardware of spaceborne equipment. To address the high complexity of the compressed sensing reconstruction algorithm and its low reconstruction accuracy, we theoretically derive an equivalent model of the invertible transformation, which converts the complex invertible projection training model into a coupled dictionary training model. Building on the invertible projection training model, a competitive task-driven invertible projection matrix learning algorithm (TIPML) is proposed. In TIPML, we do not need to train the complex invertible projection model directly; instead, we train it indirectly through the training of the coupled dictionary. To improve the accuracy of the reconstructed data, a singular value transformation is proposed; it is verified that this transformation increases the concentration of the dictionary without reducing its expressive ability. In addition, a two-loop iterative training scheme is established to improve the accuracy of data reconstruction. Experiments show that, compared with traditional compressed sensing algorithms, the compressed sensing algorithm based on TIPML has higher reconstruction accuracy, and the reconstruction time is shortened by more than a hundred times. It is foreseeable that the TIPML algorithm will have broad application prospects in the field of HSI compression.

16 pages, 929 KiB  
Technical Note
High-Performance Lossless Compression of Hyperspectral Remote Sensing Scenes Based on Spectral Decorrelation
by Miguel Hernández-Cabronero, Jordi Portell, Ian Blanes and Joan Serra-Sagristà
Remote Sens. 2020, 12(18), 2955; https://doi.org/10.3390/rs12182955 - 11 Sep 2020
Cited by 10 | Viewed by 3598
Abstract
The capacity of the downlink channel is a major bottleneck for applications based on remote sensing hyperspectral imagery (HSI). Data compression is an essential tool to maximize the amount of HSI scenes that can be retrieved on the ground. At the same time, energy and hardware constraints of spaceborne devices impose limitations on the complexity of practical compression algorithms. To avoid any distortion in the analysis of the HSI data, only lossless compression is considered in this study. This work aims at finding the most advantageous compression–complexity trade-off within the state of the art in HSI compression. To do so, a novel comparison of the most competitive spectral decorrelation approaches combined with the best-performing low-complexity compressors of the state of the art is presented. Compression performance and execution time results are obtained for a set of 47 HSI scenes produced by 14 different sensors in real remote sensing missions. Assuming only a limited amount of energy is available, the obtained data suggest that the FAPEC algorithm yields the best trade-off. When compared to the CCSDS 123.0-B-2 standard, FAPEC is 5.0 times faster and its compressed data rates are on average within 16% of those of the CCSDS standard. In scenarios where energy constraints can be relaxed, CCSDS 123.0-B-2 yields the best average compression results of all the evaluated methods.
