
Advances in Information and Coding Theory

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (28 February 2023) | Viewed by 23876

Special Issue Editors

Department of Electrical & Computer Engineering, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada
Interests: information and coding theory; wireless communications; multimedia communications; signal and image processing; data compression and storage; networking
Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S, Canada
Interests: information theory; security and cryptography; hypothesis testing; machine learning

Special Issue Information

Dear Colleagues,

Communication and compression, the two pillars of information and coding theory, have undergone a revolution in the past decade with the advent of new paradigms and challenges (e.g., the Internet of Things, molecular and biological communications, neural network compression, and perceptual compression). Furthermore, information and coding theory has evolved from a communication- and compression-centric research field to the driving force behind a myriad of new applications (including distributed storage, cloud computing, and data analysis, among others); in addition, it has shifted from focusing almost exclusively on efficiency-oriented performance metrics to encompassing security, privacy, and fairness considerations. This Special Issue aims to highlight recent advances in information and coding theory as well as their broad impacts. It has been designed with a wide scope in mind and welcomes novel research contributions that involve information- and coding-theoretic analyses, concepts, methodologies, or applications. Areas of interest for this Special Issue include (but are not limited to) coding theory and applications, communication theory, emerging applications of information theory, coded and distributed computing, network coding and data storage, information-theoretic methods in machine learning, information theory in data science, security and privacy, network information theory, source coding, and data compression. 

Prof. Dr. Jun Chen
Dr. Sadaf Salehkalaibar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information-theoretic methods
  • coding techniques
  • distributed storage
  • cloud computing
  • machine learning
  • data analysis
  • wireless communications
  • networking
  • emerging applications

Published Papers (20 papers)


Research

47 pages, 681 KiB  
Article
Utility–Privacy Trade-Offs with Limited Leakage for Encoder
by Naruki Shinohara and Hideki Yagi
Entropy 2023, 25(6), 921; https://doi.org/10.3390/e25060921 - 11 Jun 2023
Viewed by 794
Abstract
The utilization of databases such as the IoT has progressed, and understanding how to protect the privacy of data is an important issue. As pioneering work, in 1983, Yamamoto assumed a source (database) consisting of public information and private information, and found theoretical limits (first-order rate analysis) among the coding rate, utility, and privacy for the decoder in two special cases. In this paper, we consider a more general case based on the work by Shinohara and Yagi in 2022. Introducing a measure of privacy for the encoder, we investigate the following two problems: the first is the first-order rate analysis among the coding rate, utility, privacy for the decoder, and privacy for the encoder, in which utility is measured by the expected distortion or the excess-distortion probability; the second is establishing the strong converse theorem for utility–privacy trade-offs, in which utility is measured by the excess-distortion probability. These results may lead to more refined analyses, such as second-order rate analysis. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
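The trade-off the paper formalizes can be illustrated numerically, since privacy leakage to an observer is naturally measured by mutual information. The following toy sketch (not the paper's coding scheme; the randomized-response mechanism and flip probability e are illustrative assumptions) shows the leakage I(X; Z) shrinking as a private bit is randomized more heavily:

```python
import math

def mutual_information(p_xz):
    """I(X;Z) in bits for a joint pmf given as a dict {(x, z): prob}."""
    px, pz = {}, {}
    for (x, z), p in p_xz.items():
        px[x] = px.get(x, 0.0) + p
        pz[z] = pz.get(z, 0.0) + p
    return sum(p * math.log2(p / (px[x] * pz[z]))
               for (x, z), p in p_xz.items() if p > 0)

# Toy source: private bit X (uniform), released bit Z obtained by flipping X
# with probability e. Larger e means less leakage I(X;Z), but in a real
# system also less utility for the decoder.
def leakage(e):
    p_xz = {(0, 0): 0.5 * (1 - e), (0, 1): 0.5 * e,
            (1, 0): 0.5 * e, (1, 1): 0.5 * (1 - e)}
    return mutual_information(p_xz)

assert leakage(0.5) < 1e-9              # full randomization: zero leakage
assert abs(leakage(0.0) - 1.0) < 1e-9   # no randomization: 1 bit leaked
```

The paper's rate analyses quantify exactly how much coding rate and distortion must be traded to push such leakage terms down, for both the decoder-side and the encoder-side privacy measures.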

23 pages, 624 KiB  
Article
On Decoder Ties for the Binary Symmetric Channel with Arbitrarily Distributed Input
by Ling-Hua Chang, Po-Ning Chen and Fady Alajaji
Entropy 2023, 25(4), 668; https://doi.org/10.3390/e25040668 - 16 Apr 2023
Viewed by 638
Abstract
The error probability of block codes sent under a non-uniform input distribution over the memoryless binary symmetric channel (BSC) and decoded via the maximum a posteriori (MAP) decoding rule is investigated. It is proved that the ratio of the probability of MAP decoder ties to the probability of error when no decoding ties occur grows at most linearly in blocklength, thus showing that decoder ties do not affect the code's error exponent. This result generalizes a similar recent result shown for the case of block codes transmitted over the BSC under a uniform input distribution. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
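What a "decoder tie" means is concrete in a brute-force MAP decoder. The sketch below (a toy two-codeword setting, not the paper's analysis; the codes, priors, and crossover probability are illustrative) exhibits both a unique MAP decision under a non-uniform prior and a tie under a uniform one:

```python
def bsc_likelihood(y, c, p):
    """P(y | c) for a memoryless BSC with crossover probability p."""
    d = sum(yi != ci for yi, ci in zip(y, c))
    return (p ** d) * ((1 - p) ** (len(y) - d))

def map_decode(y, code, prior, p):
    """Return all MAP-optimal codewords; more than one means a decoder tie."""
    scores = {c: prior[c] * bsc_likelihood(y, c, p) for c in code}
    best = max(scores.values())
    return {c for c, s in scores.items() if abs(s - best) < 1e-12}

p = 0.1

# Non-uniform prior over a length-3 repetition code: unique MAP decision.
code3 = [(0, 0, 0), (1, 1, 1)]
prior3 = {(0, 0, 0): 0.7, (1, 1, 1): 0.3}
assert map_decode((0, 0, 1), code3, prior3, p) == {(0, 0, 0)}

# Uniform prior over a length-2 repetition code: the received word (0, 1) is
# equidistant from both codewords, so the MAP decoder ties.
code2 = [(0, 0), (1, 1)]
prior2 = {(0, 0): 0.5, (1, 1): 0.5}
assert len(map_decode((0, 1), code2, prior2, p)) == 2
```

The paper's result says that, asymptotically, the probability mass captured by such tie events is negligible relative to the error probability at the exponential scale.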

16 pages, 424 KiB  
Article
On the Asymptotic Capacity of Information-Theoretic Privacy-Preserving Epidemiological Data Collection
by Jiale Cheng, Nan Liu and Wei Kang
Entropy 2023, 25(4), 625; https://doi.org/10.3390/e25040625 - 06 Apr 2023
Cited by 2 | Viewed by 928
Abstract
The paradigm-shifting developments of cryptography and information theory have focused on the privacy of data-sharing systems, such as epidemiological studies, where agencies collect far more personal data than they need, intruding on patients' privacy. To study the capability of data collection while protecting privacy from an information-theoretic perspective, we formulate a new distributed multiparty computation problem called privacy-preserving epidemiological data collection. In our setting, a data collector requires a linear combination of K users' data through a storage system consisting of N servers. Privacy needs to be protected when the users, servers, and data collector do not trust each other. For the users, any data are required to be protected from up to E colluding servers; for the servers, no information beyond the desired linear combination may be leaked to the data collector; and for the data collector, no single server may learn anything about the coefficients of the linear combination. Our goal is to find the optimal collection rate, defined as the ratio of the size of the user's message to the total size of downloads from the N servers to the data collector. For achievability, we propose an asymptotic capacity-achieving scheme when E < N − 1, applying the cross-subspace alignment method to our construction; for the converse, we prove an upper bound on the asymptotic rate for all achievable schemes when E < N − 1. Additionally, we show that a positive asymptotic capacity is not possible when E ≥ N − 1. The achievability and converse results meet when the number of users goes to infinity, yielding the asymptotic capacity. Our work broadens current research on data privacy in information theory and gives the best achievable asymptotic performance that any epidemiological data collector can obtain. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
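The core mechanism, recovering only a linear combination of user data through untrusting servers, can be sketched with plain additive secret sharing (a deliberate simplification: the paper uses cross-subspace alignment, and this sketch does not model coefficient privacy toward the servers; the modulus and parameters are illustrative):

```python
import random

q = 2**31 - 1   # a public prime modulus (illustrative)

def share(x, n_servers, rng):
    """Additively secret-share x into n_servers shares that sum to x mod q."""
    shares = [rng.randrange(q) for _ in range(n_servers - 1)]
    shares.append((x - sum(shares)) % q)
    return shares

def collect(user_data, coeffs, n_servers, rng):
    """Servers aggregate coefficient-weighted shares; the collector sums them."""
    per_server = [0] * n_servers
    for x, a in zip(user_data, coeffs):
        for n, s in enumerate(share(x, n_servers, rng)):
            per_server[n] = (per_server[n] + a * s) % q
    # Any n_servers - 1 shares of x are jointly uniform, so colluding servers
    # below that threshold learn nothing; the collector sees only the sum.
    return sum(per_server) % q

rng = random.Random(0)
assert collect([5, 11, 42], [2, 3, 1], n_servers=4, rng=rng) == 2*5 + 3*11 + 42
```

The paper's rate question is then how few symbols the collector must download from the N servers per message symbol, and the answer depends on the collusion threshold E relative to N − 1.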

15 pages, 11032 KiB  
Article
FLoCIC: A Few Lines of Code for Raster Image Compression
by Borut Žalik, Damjan Strnad, Štefan Kohek, Ivana Kolingerová, Andrej Nerat, Niko Lukač, Bogdan Lipuš, Mitja Žalik and David Podgorelec
Entropy 2023, 25(3), 533; https://doi.org/10.3390/e25030533 - 20 Mar 2023
Cited by 1 | Viewed by 1216
Abstract
A new approach is proposed for lossless raster image compression employing interpolative coding. A new multifunction prediction scheme is presented first. Then, interpolative coding, which has not been applied frequently for image compression, is explained briefly. Its simplification is introduced in regard to the original approach. It is determined that the JPEG LS predictor reduces the information entropy slightly better than the multi-functional approach. Furthermore, the interpolative coding was moderately more efficient than the most frequently used arithmetic coding. Finally, our compression pipeline is compared against JPEG LS, JPEG 2000 in the lossless mode, and PNG using 24 standard grayscale benchmark images. JPEG LS turned out to be the most efficient, followed by JPEG 2000, while our approach using simplified interpolative coding was moderately better than PNG. The implementation of the proposed encoder is extremely simple and can be performed in less than 60 lines of programming code for the coder and 60 lines for the decoder, which is demonstrated in the given pseudocodes. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
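The JPEG LS predictor that the comparison refers to is the standard median edge detector (MED), compact enough to quote in full; it is one plausible reading of the "few lines of code" spirit of the paper. A sketch (standard MED; the neighbour naming a = left, b = above, c = above-left follows the JPEG-LS convention):

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: a = left, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)   # horizontal or vertical edge suspected above-left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

# Smooth region: planar prediction a + b - c.
assert med_predict(10, 12, 11) == 11
# c at least as large as both neighbours -> predict min(a, b).
assert med_predict(10, 12, 12) == 10
```

The encoder then entropy-codes the prediction residuals; the paper's finding is that this predictor edges out its multifunction scheme in residual entropy, while simplified interpolative coding of the residuals beats arithmetic coding.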

13 pages, 313 KiB  
Article
Skew Constacyclic Codes over a Non-Chain Ring
by Mehmet Emin Köroğlu and Mustafa Sarı
Entropy 2023, 25(3), 525; https://doi.org/10.3390/e25030525 - 17 Mar 2023
Viewed by 1056
Abstract
In this paper, we investigate the algebraic structure of the non-local ring Rq = Fq[v]/⟨v^2 + 1⟩ and identify the automorphisms of this ring to study the algebraic structure of the skew constacyclic codes and their duals over this ring. Furthermore, we give a necessary and sufficient condition for the skew constacyclic codes over Rq to be linear complementary dual (LCD). We present some examples of Euclidean LCD codes over Rq and tabulate the parameters of Euclidean LCD codes over finite fields as the Φ-images of these codes over Rq, which are almost maximum distance separable (MDS) and near MDS. Eventually, by making use of Hermitian linear complementary duals of skew constacyclic codes over Rq and the map Φ, we give a class of entanglement-assisted quantum error correcting codes (EAQECCs) with maximal entanglement and tabulate parameters of some EAQECCs with maximal entanglement over finite fields. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
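Arithmetic in Rq = Fq[v]/⟨v^2 + 1⟩ is easy to prototype: elements are pairs a + bv with v^2 = −1. The sketch below (q = 5 is an illustrative choice; when q ≡ 1 (mod 4) the polynomial v^2 + 1 factors over Fq, which is what makes the ring non-local) verifies the defining relation and exhibits the zero divisors responsible for non-locality:

```python
q = 5  # q = 1 (mod 4): v^2 + 1 factors, so R_q = F_q[v]/<v^2+1> is non-local

def add(x, y):
    (a, b), (c, d) = x, y
    return ((a + c) % q, (b + d) % q)

def mul(x, y):
    # (a + b*v)(c + d*v) with v^2 = -1
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % q, (a * d + b * c) % q)

# The defining relation v^2 + 1 = 0 holds in R_q:
v = (0, 1)
assert add(mul(v, v), (1, 0)) == (0, 0)

# Zero divisors witness non-locality: 2^2 = -1 (mod 5), so
# (v - 2)(v + 2) = v^2 - 4 = -1 - 4 = 0 (mod 5).
assert mul((-2 % q, 1), (2, 1)) == (0, 0)
```

The paper's automorphisms of this ring are what twist the multiplication in the skew polynomial rings underlying the skew constacyclic codes.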

10 pages, 503 KiB  
Article
Fair Numerical Algorithm of Coset Cardinality Spectrum for Distributed Arithmetic Coding
by Yong Fang and Nan Yang
Entropy 2023, 25(3), 437; https://doi.org/10.3390/e25030437 - 01 Mar 2023
Viewed by 862
Abstract
As a typical symbol-wise solution of the asymmetric Slepian-Wolf coding problem, Distributed Arithmetic Coding (DAC) non-linearly partitions the source space into disjoint cosets with unequal sizes. The distribution of DAC coset cardinalities, named the Coset Cardinality Spectrum (CCS), plays an important role in both the theoretical understanding and the decoder design for DAC. In general, the CCS cannot be calculated directly. Instead, a numerical algorithm is usually used to obtain an approximation. This paper first finds that the existing numerical algorithm for the CCS is theoretically imperfect and does not converge to the real CCS. Further, to solve this problem, we refine the original numerical algorithm based on rigorous theoretical analyses. Experimental results verify that the refined numerical algorithm amends the drawbacks of the original version. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)

16 pages, 549 KiB  
Article
Analysis and Optimization of a General Linking Matrix for JSCC Scheme Based on Double LDPC Codes
by Qiwang Chen, Zhiping Xu, Huihui Wu and Guofa Cai
Entropy 2023, 25(2), 382; https://doi.org/10.3390/e25020382 - 19 Feb 2023
Viewed by 1187
Abstract
A key component of the joint source-channel coding (JSCC) scheme based on double low-density parity-check (D-LDPC) codes is the introduction of a linking matrix between the source LDPC code and channel LDPC code, by which the decoding information including the source redundancy and channel state information can be transferred iteratively. However, the linking matrix is a fixed one-to-one mapping, i.e., an identity matrix in a conventional D-LDPC code system, which may not take full advantage of the decoding information. Therefore, this paper introduces a general linking matrix, i.e., a non-identity linking matrix, connecting the check nodes (CNs) of the source LDPC code and the variable nodes (VNs) of the channel LDPC code. Further, the encoding and decoding algorithms of the proposed D-LDPC coding system are generalized. A joint extrinsic information transfer (JEXIT) algorithm is derived for calculating the decoding threshold of the proposed system with a general linking matrix. In addition, several general linking matrices are optimized with the aid of the JEXIT algorithm. Finally, the simulation results demonstrate the superiority of the proposed D-LDPC coding system with general linking matrices. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)

18 pages, 352 KiB  
Article
Linear Codes from Two Weakly Regular Plateaued Balanced Functions
by Shudi Yang, Tonghui Zhang and Ping Li
Entropy 2023, 25(2), 369; https://doi.org/10.3390/e25020369 - 17 Feb 2023
Viewed by 1010
Abstract
Linear codes with a few weights have been extensively studied due to their wide applications in secret sharing schemes, strongly regular graphs, association schemes, and authentication codes. In this paper, we choose the defining sets from two distinct weakly regular plateaued balanced functions, based on a generic construction of linear codes. Then we construct a family of linear codes with at most five nonzero weights. Their minimality is also examined and the result shows that our codes are helpful in secret sharing schemes. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
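The generic construction the abstract invokes builds codewords as trace evaluations over a defining set D: c_x = (Tr(x·d)) for d in D. A minimal sketch (over F_9 with D the set of nonzero squares, a classical illustrative choice, not the paper's plateaued-function defining sets) already produces a two-weight code:

```python
from collections import Counter

p = 3

def mul(x, y):
    """Multiplication in F_9 = F_3[u]/(u^2 + 1); elements are pairs (a, b) = a + b*u."""
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def trace(x):
    """Trace F_9 -> F_3: Tr(x) = x + x^3, which works out to 2a for x = a + b*u."""
    return (2 * x[0]) % p

units = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]
D = sorted(set(mul(y, y) for y in units))   # defining set: the squares of F_9*

def weight(x):
    """Hamming weight of the codeword c_x = (Tr(x*d) for d in D)."""
    return sum(1 for d in D if trace(mul(x, d)) != 0)

dist = Counter(weight(x) for x in [(a, b) for a in range(p) for b in range(p)])
assert dist == Counter({0: 1, 2: 4, 4: 4})  # a two-weight linear code
```

Few-weight distributions like this are exactly what make the resulting codes useful for secret sharing, where the minimality of codewords matters.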
15 pages, 693 KiB  
Article
Lossy P-LDPC Codes for Compressing General Sources Using Neural Networks
by Jinkai Ren, Dan Song, Huihui Wu and Lin Wang
Entropy 2023, 25(2), 252; https://doi.org/10.3390/e25020252 - 30 Jan 2023
Cited by 1 | Viewed by 1062
Abstract
It is challenging to design an efficient lossy compression scheme for complicated sources based on block codes, especially one approaching the theoretical distortion-rate limit. In this paper, a lossy compression scheme is proposed for Gaussian and Laplacian sources. In this scheme, a new route of “transformation-quantization” was designed to replace the conventional “quantization-compression”. The proposed scheme utilizes neural networks for transformation and lossy protograph low-density parity-check codes for quantization. To ensure the system’s feasibility, several issues in the neural networks were resolved, including parameter updating and propagation optimization. Simulation results demonstrated good distortion-rate performance. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)

11 pages, 429 KiB  
Article
Protograph Designing of P-LDPC Codes via M3 Method
by Dan Song, Meiyuan Miao and Lin Wang
Entropy 2023, 25(2), 232; https://doi.org/10.3390/e25020232 - 27 Jan 2023
Viewed by 1130
Abstract
Recently, a mesh model-based merging (M3) method and four basic graph models were proposed to construct the double protograph low-density parity-check (P-LDPC) code pair for joint source-channel coding (JSCC). Designing the protograph (mother code) of the P-LDPC code with both a good waterfall region and a low error floor is challenging, and few works have addressed it to date. In this paper, the single P-LDPC code is improved to further verify the availability of the M3 method, and its structure differs from that of the channel code in the JSCC. This construction technique yields a family of new channel codes with lower power consumption and higher reliability. The structured design and better performance demonstrate that the proposed code is hardware-friendly. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)

19 pages, 603 KiB  
Article
Lossless Image Coding Using Non-MMSE Algorithms to Calculate Linear Prediction Coefficients
by Grzegorz Ulacha and Mirosław Łazoryszczak
Entropy 2023, 25(1), 156; https://doi.org/10.3390/e25010156 - 12 Jan 2023
Cited by 2 | Viewed by 1491
Abstract
This paper presents a lossless image compression method with a fast decoding time and flexible adjustment of coder parameters affecting its implementation complexity. A comparison of several approaches for computing non-MMSE prediction coefficients with different levels of complexity was made. The data modeling stage of the proposed codec was based on linear (calculated by the non-MMSE method) and non-linear (complemented by a context-dependent constant component removal block) predictions. Prediction error coding uses a two-stage compression: an adaptive Golomb code and a binary arithmetic code. The proposed solution results in 30% shorter decoding times and a lower bit average than competing solutions (by 7.9% relative to the popular JPEG-LS codec). Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
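The first of the two entropy-coding stages, the adaptive Golomb code, reduces in its power-of-two (Rice) form to a unary quotient plus a k-bit binary remainder. A sketch (the Rice special case, with the usual zig-zag mapping of signed prediction errors to non-negative integers; in the real codec the parameter k would be adapted per context, which is not modeled here):

```python
def rice_encode(n, k):
    """Golomb-Rice code: unary quotient, '0' terminator, k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b') if k else '1' * q + '0'

def rice_decode(bits, k):
    q = 0
    i = 0
    while bits[i] == '1':   # count the unary quotient
        q += 1
        i += 1
    i += 1                  # skip the terminating '0'
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r

def zigzag(e):
    """Map signed prediction errors to non-negative integers: 0,-1,1,-2,..."""
    return 2 * e if e >= 0 else -2 * e - 1

# Round-trip check over a range of signed prediction errors with k = 2.
for e in range(-5, 6):
    n = zigzag(e)
    assert rice_decode(rice_encode(n, 2), 2) == n
```

The second stage in the paper then recompresses these codewords with a binary arithmetic coder, squeezing out the residual redundancy the Rice code leaves behind.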

14 pages, 1238 KiB  
Article
EXK-SC: A Semantic Communication Model Based on Information Framework Expansion and Knowledge Collision
by Gangtao Xin and Pingyi Fan
Entropy 2022, 24(12), 1842; https://doi.org/10.3390/e24121842 - 17 Dec 2022
Cited by 5 | Viewed by 1564
Abstract
Semantic communication is not focused on improving the accuracy of transmitted symbols, but is concerned with expressing the expected meaning that the symbol sequence exactly carries. However, the measurement of semantic messages and their corresponding codebook generation are still open issues. Expansion, which integrates simple things into a complex system and even generates intelligence, is truly consistent with the evolution of the human language system. We apply this idea to the semantic communication system, quantifying semantic transmission by symbol sequences and investigating the semantic information system in a similar way as Shannon’s method for digital communication systems. This work is the first to discuss semantic expansion and knowledge collision in the semantic information framework. Some important theoretical results are presented, including the relationship between semantic expansion and the transmission information rate. We believe such a semantic information framework may provide a new paradigm for semantic communications, and semantic expansion and knowledge collision will be the cornerstone of semantic information theory. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)

22 pages, 382 KiB  
Article
An Efficient CRT-Base Power-of-Two Scaling in Minimally Redundant Residue Number System
by Mikhail Selianinau and Yuriy Povstenko
Entropy 2022, 24(12), 1824; https://doi.org/10.3390/e24121824 - 14 Dec 2022
Cited by 3 | Viewed by 1230
Abstract
In this paper, we consider one of the key problems in modular arithmetic. It is known that scaling in the residue number system (RNS) is a rather complicated non-modular procedure, which requires expensive and complex operations at each iteration. Hence, it is time consuming and needs too much hardware for implementation. We propose a novel approach to power-of-two scaling based on the Chinese Remainder Theorem (CRT) and the rank form of the number representation in RNS. By using minimal redundancy of the residue code, we optimize and speed up the rank calculation and parity determination of divisible integers in each iteration. The proposed enhancements make power-of-two scaling simpler and faster than the currently known methods. After calculating the rank of the initial number, each iteration of modular scaling by two is performed in one modular clock cycle. The computational complexity of the proposed method of scaling by a constant S_l = 2^l, associated with the required modular addition operations and lookup tables, is estimated as k and 2k + 1, respectively, where k equals the number of primary non-redundant RNS moduli. The time complexity is log2 k + l modular clock cycles. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
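The iteration being accelerated can be sketched in a few lines: halving in RNS is cheap per modulus once the parity of the number is known, and it is exactly the parity test (done below by full CRT reconstruction, the costly non-modular step that the paper's rank-based, minimally redundant method replaces) that dominates. The moduli are illustrative:

```python
from math import prod

moduli = (3, 5, 7)          # pairwise-coprime odd RNS moduli (illustrative)
M = prod(moduli)
inv2 = tuple(pow(2, -1, m) for m in moduli)

def to_rns(x):
    return tuple(x % m for m in moduli)

def crt(residues):
    """Reconstruct x from its residues via the Chinese Remainder Theorem."""
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def halve(residues):
    """Floor-divide by 2 in RNS: fix parity, then multiply by 2^-1 per modulus."""
    if crt(residues) % 2:   # parity test: the expensive non-modular step
        residues = tuple((r - 1) % m for r, m in zip(residues, moduli))
    return tuple((r * i) % m for r, i, m in zip(residues, inv2, moduli))

def scale_pow2(x, l):
    """Compute floor(x / 2^l) while working on RNS residues."""
    r = to_rns(x)
    for _ in range(l):
        r = halve(r)
    return crt(r)

assert scale_pow2(100, 3) == 100 // 8
```

In the paper's scheme, the rank of the number is computed once up front, after which each halving step costs a single modular clock cycle rather than a full reconstruction.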
18 pages, 3891 KiB  
Article
DIR-Net: Deep Residual Polar Decoding Network Based on Information Refinement
by Bixue Song, Yongxin Feng and Yang Wang
Entropy 2022, 24(12), 1809; https://doi.org/10.3390/e24121809 - 12 Dec 2022
Viewed by 1179
Abstract
Polar codes approach the Shannon limit with relatively low coding and decoding complexity. As traditional decoding techniques suffer from high latency and low throughput, with the development of deep learning technology, some deep learning-based decoding methods have been proposed to solve these problems. Usually, the deep neural network is treated as a black box and learns to map the noisy polar codeword to the original information bits directly. In fact, it is difficult for the network to distinguish between valid and interfering information, which leads to limited BER performance. In this paper, a deep residual network based on information refinement (DIR-Net) is proposed for decoding polar-coded short packets. The proposed method works to fully distinguish the effective and interfering information in the codewords, thus obtaining a lower bit error rate. To achieve this goal, we design a two-stage decoding network comprising a denoising subnetwork and a decoding subnetwork. This structure can further improve the accuracy of the decoding method. Furthermore, we construct the whole network solely on the basis of the attention mechanism. It has a stronger information extraction ability than the traditional neural network structure. Benefiting from cascaded attention modules, information can be filtered and refined step-by-step, thus obtaining a low bit error rate. The simulation results show that DIR-Net outperforms existing decoding methods in terms of BER performance under both AWGN channels and flat fading channels. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)

22 pages, 1544 KiB  
Article
Adaptive List Flip Decoder for Polar Codes with High-Order Error Correction Capability and a Simplified Flip Metric
by Yansong Lv, Hang Yin, Zhanxin Yang, Yuhuan Wang and Jingxin Dai
Entropy 2022, 24(12), 1806; https://doi.org/10.3390/e24121806 - 10 Dec 2022
Viewed by 864
Abstract
Designing an efficient decoder is an effective way to improve the performance of polar codes with limited code length. List flip decoders have received attention due to their good performance trade-off between list decoders and flip decoders. In particular, the newly proposed dynamic successive cancellation list flip (D-SCLF) decoder employs a new flip metric to effectively correct high-order errors and thus enhances the performance potential of present list flip decoders. However, this flip metric introduces extra exponential and logarithmic operations, and the number of these operations rises exponentially with the increase in the order of error correction and the number of information bits, which then limits its application value. Therefore, we designed an adaptive list flip (ALF) decoder with a new heuristic simplified flip metric, which replaces these extra nonlinear operations in the original flip metric with linear operations. Simulation results show that the simplified flip metric does not reduce the performance of the D-SCLF decoder. Moreover, based on the in-depth theoretical analyses of the combination of the adaptive list and the list flip decoders, the ALF decoder adopts the adaptive list to further reduce the average complexity. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)

16 pages, 2521 KiB  
Article
Iterative Joint Estimation Procedure of Channel and PDP for OFDM Systems
by Ruixuan He, Xiaoran Liu, Kai Mei, Guangwei Gong, Jun Xiong and Jibo Wei
Entropy 2022, 24(11), 1664; https://doi.org/10.3390/e24111664 - 15 Nov 2022
Viewed by 1144
Abstract
The power-delay profile (PDP) estimation of wireless channels is an important step to generate a channel correlation matrix for channel linear minimum mean square error (LMMSE) estimation. Estimated channel frequency response can be used to obtain time dispersion characteristics that can be exploited by adaptive orthogonal frequency division multiplexing (OFDM) systems. In this paper, a joint estimator for PDP and LMMSE channel estimation is proposed. For LMMSE channel estimation, we apply a candidate set of frequency-domain channel correlation functions (CCF) and select the one that best matches the current channel to construct the channel correlation matrix. The initial candidate set is generated based on the traditional CCF calculation method for different scenarios. Then, the result of channel estimation is used as an input for the PDP estimation whereas the estimated PDP is further used to update the candidate channel correlation matrix. The enhancement of LMMSE channel estimation and PDP estimation can be achieved by the iterative joint estimation procedure. Analysis and simulation results show that in different communication scenarios, the PDP estimation error of the proposed method can approach the Cramér–Rao lower bound (CRLB) after a finite number of iterations. Moreover, the mean square error of channel estimation is close to the performance of accurate PDP-assisted LMMSE. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
11 pages, 521 KiB  
Article
A Nonbinary LDPC-Coded Probabilistic Shaping Scheme for a Rayleigh Fading Channel
by Weimin Kang
Entropy 2022, 24(11), 1649; https://doi.org/10.3390/e24111649 - 14 Nov 2022
Viewed by 1068
Abstract
In this paper, a novel nonbinary (NB) LDPC-coded probabilistic shaping (PS) scheme for a Rayleigh fading channel is proposed. For the NB LDPC-coded PS scheme over a Rayleigh fading channel, the rotation angles of the 16, 64, and 256 quadrature amplitude modulation (QAM) constellations are optimized by exhaustive search. The simulation results verify the information-theoretic analysis. Compared with a binary LDPC-coded PS scheme for the Rayleigh fading channel, the proposed NB LDPC-coded PS scheme improves the error performance. In summary, the proposed NB LDPC-coded PS scheme for the Rayleigh fading channel is reliable and thus suitable for future communication systems.
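An exhaustive search over constellation rotation angles, as mentioned above, can be sketched as follows. The objective used here is the minimum product distance, a standard design criterion for rotated constellations over fading channels; the paper's actual search may optimize a different (e.g., information-theoretic) metric, so this is only an illustrative stand-in.

```python
import itertools
import numpy as np

def qam_constellation(M):
    """Unit-energy square M-QAM constellation (M a perfect square)."""
    m = int(np.sqrt(M))
    pam = np.arange(-(m - 1), m, 2)
    pts = np.array([complex(i, q) for i in pam for q in pam])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

def min_product_distance(points):
    """Minimum product distance over all point pairs; it is zero whenever
    two distinct points share an I or Q coordinate, which is why the
    unrotated square QAM scores zero under this metric."""
    best = np.inf
    for a, b in itertools.combinations(points, 2):
        best = min(best, abs((a.real - b.real) * (a.imag - b.imag)))
    return best

def best_rotation_deg(M, angles_deg):
    """Exhaustive search over candidate rotation angles."""
    base = qam_constellation(M)
    scores = [min_product_distance(base * np.exp(1j * np.deg2rad(a)))
              for a in angles_deg]
    return float(angles_deg[int(np.argmax(scores))])
```

The same search applies unchanged to 64QAM and 256QAM, only with more point pairs to evaluate per angle.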
12 pages, 1645 KiB  
Article
Statistical Degree Distribution Design for Using Fountain Codes to Control the Peak-To-Average Power Ratio in an OFDM System
by Cheng Bi, Zheng Xiang, Peng Ren, Ting Yin and Yang Zhang
Entropy 2022, 24(11), 1541; https://doi.org/10.3390/e24111541 - 26 Oct 2022
Viewed by 873
Abstract
Utilizing fountain codes to control the peak-to-average power ratio (PAPR) is a classic scheme in Orthogonal Frequency Division Multiplexing (OFDM) wireless communication systems. However, because the robust soliton distribution (RSD) produces large degree values, the decoding performance is severely degraded. In this paper, we design a statistical degree distribution (SD) for the scenario in which fountain codes are used to control the PAPR. The probability of the produced PAPR is combined with the RSD to design the PRSD distribution, which increases the proportion of small degree values. Subsequently, a particle swarm optimization (PSO) algorithm is used to search for the optimal degree distribution between the binary exponential distribution (BED) and the PRSD distribution according to the minimum-average-degree principle. Simulation results demonstrate that the proposed method outperforms other relevant degree distributions under the same controlled PAPR threshold, and that the average degree value and decoding efficiency are remarkably improved.
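The PRSD and PSO steps are specific to the paper, but the robust soliton distribution (RSD) that the authors start from is standard and can be generated as below; the `c` and `delta` parameter values are illustrative defaults, not taken from the paper.

```python
import math
import numpy as np

def robust_soliton(K, c=0.05, delta=0.5):
    """Robust soliton degree distribution over degrees 1..K.
    Index d of the returned array holds P(degree = d); index 0 is unused."""
    rho = np.zeros(K + 1)
    rho[1] = 1.0 / K                      # ideal soliton part
    for d in range(2, K + 1):
        rho[d] = 1.0 / (d * (d - 1))
    R = c * math.log(K / delta) * math.sqrt(K)
    tau = np.zeros(K + 1)                 # robust correction with spike at K/R
    spike = int(round(K / R))
    for d in range(1, min(spike, K + 1)):
        tau[d] = R / (d * K)
    if 1 <= spike <= K:
        tau[spike] = R * math.log(R / delta) / K
    mu = rho + tau
    return mu / mu.sum()                  # normalize to a probability mass
```

The long tail toward large degrees visible in this distribution is exactly what the paper's PRSD design aims to suppress in favor of small degree values.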
20 pages, 2394 KiB  
Article
Variable-to-Variable Huffman Coding: Optimal and Greedy Approaches
by Kun Tu and Dariusz Puchala
Entropy 2022, 24(10), 1447; https://doi.org/10.3390/e24101447 - 11 Oct 2022
Cited by 1 | Viewed by 1332
Abstract
In this paper, we address the problem of m-gram entropy variable-to-variable coding, extending the classical Huffman algorithm to the coding of m-element sequences of symbols (i.e., m-grams) taken from the stream of input data for m > 1. We propose a procedure for determining the frequencies of occurrence of m-grams in the input data; we formulate the optimal coding algorithm and estimate its computational complexity as O(mn²), where n is the size of the input data. Since such complexity is too high for practical applications, we also propose an approximate approach with linear complexity, based on a greedy heuristic used in solving knapsack problems. In order to verify the practical effectiveness of the proposed approximate approach, experiments involving different sets of input data were conducted. The experimental study shows that the results obtained with the approximate approach were, first, close to the optimal results and, second, better than the results obtained with the popular DEFLATE and PPM algorithms for data characterized by stable and easy-to-estimate statistics.
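A minimal sketch of the two basic ingredients above: counting non-overlapping m-gram frequencies in the input stream, and building a Huffman code over the resulting m-gram alphabet. The paper's optimal parsing and its greedy knapsack-style heuristic are considerably more involved and are not reproduced here; this fixed-block simplification only illustrates the m-gram Huffman idea.

```python
import heapq
from collections import Counter

def mgram_frequencies(data, m):
    """Frequencies of non-overlapping m-grams taken from the input stream."""
    return Counter(data[i:i + m] for i in range(0, len(data) - m + 1, m))

def huffman_code(freqs):
    """Classical Huffman construction over the m-gram alphabet;
    returns a prefix-free {m-gram: bitstring} mapping."""
    heap = [(f, i, {gram: ""}) for i, (gram, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)  # unique tiebreaker so the dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {g: "0" + w for g, w in c1.items()}
        merged.update({g: "1" + w for g, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

For m = 1 this reduces to ordinary symbol-wise Huffman coding; the gain of variable-to-variable coding comes from choosing which m-grams to treat as super-symbols, which is where the optimal O(mn²) algorithm and the linear-time heuristic differ.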
13 pages, 527 KiB  
Article
A Hybrid Scheme of MCS Selection and Spectrum Allocation for URLLC Traffic under Delay and Reliability Constraints
by Yuehong Gao, Haotian Yang, Xiao Hong and Lu Chen
Entropy 2022, 24(5), 727; https://doi.org/10.3390/e24050727 - 20 May 2022
Cited by 3 | Viewed by 1683
Abstract
Ultra-Reliable Low-Latency Communication (URLLC) is expected to be an important feature of 5G and beyond networks. Supporting URLLC in a resource-efficient manner demands optimal Modulation and Coding Scheme (MCS) selection and spectrum allocation. This paper presents a study on MCS selection and spectrum allocation to support URLLC. The essential idea is to establish an analytical connection between the delay and reliability requirements of URLLC data transmission and the underlying MCS selection and spectrum allocation. In particular, the connection factors in fundamental aspects of wireless data communication, including channel quality, coding and modulation, spectrum allocation, and data traffic characteristics. With this connection, MCS selection and spectrum allocation can be performed efficiently based on the delay and reliability requirements of URLLC. Theoretical results for a 5G New Radio system are presented, in which the Signal-to-Noise Ratio (SNR) thresholds for adaptive MCS selection, the data-transmission rate and delay, and the spectrum allocation under different configurations, including data duplication, are discussed. Simulation results are also obtained and compared with the theoretical results, validating the analysis and its efficiency.
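One standard way to tie reliability and short-packet delay to MCS selection, in the spirit of the connection described above, is the finite-blocklength normal approximation to the maximal coding rate. The sketch below uses it to pick the highest supportable spectral efficiency from a hypothetical MCS rate table; the table and parameter values are illustrative assumptions, not the paper's thresholds.

```python
import math
from statistics import NormalDist

def max_rate_fbl(snr, n, eps):
    """Normal approximation to the maximal coding rate (bits/channel use)
    at blocklength n and block error probability eps:
        R ~= C - sqrt(V/n) * Qinv(eps)
    with AWGN capacity C and channel dispersion V."""
    C = math.log2(1.0 + snr)
    V = snr * (snr + 2.0) / (2.0 * (snr + 1.0) ** 2) * math.log2(math.e) ** 2
    q_inv = NormalDist().inv_cdf(1.0 - eps)  # Gaussian Q^{-1}(eps)
    return max(C - math.sqrt(V / n) * q_inv, 0.0)

def select_mcs(snr, n, eps, mcs_rates):
    """Highest spectral efficiency from a (hypothetical) MCS rate table
    that the finite-blocklength approximation supports at this SNR."""
    r_max = max_rate_fbl(snr, n, eps)
    feasible = [r for r in mcs_rates if r <= r_max]
    return max(feasible) if feasible else None
```

Sweeping `snr` for a fixed (n, eps) pair yields the kind of SNR thresholds for adaptive MCS selection discussed in the paper, and the chosen rate in turn fixes the spectrum needed for a given payload and delay budget.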