Advances in Information and Coding Theory II

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 7509

Special Issue Editors


Dr. Jun Chen
Guest Editor
Department of Electrical & Computer Engineering, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada
Interests: information and coding theory; wireless communications; multimedia communications; signal and image processing; data compression and storage; networking

Dr. Sadaf Salehkalaibar
Guest Editor
Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S, Canada
Interests: information theory; security and cryptography; hypothesis testing; machine learning

Special Issue Information

Dear Colleagues,

Communication and compression, the two pillars of information and coding theory, have undergone a revolution in the past decade with the advent of new paradigms and challenges (e.g., the Internet of Things, molecular and biological communications, neural network compression, and perceptual compression). Furthermore, information and coding theory has evolved from a communication- and compression-centric research field to the driving force behind a myriad of new applications (including distributed storage, cloud computing, and data analysis, among others); in addition, it has shifted from focusing almost exclusively on efficiency-oriented performance metrics to encompassing security, privacy, and fairness considerations.

This Special Issue aims to highlight recent advances in information and coding theory as well as their broad impacts. It has been designed with a wide scope in mind and welcomes novel research contributions that involve information- and coding-theoretic analyses, concepts, methodologies, or applications. Areas of interest include (but are not limited to) coding theory and applications, communication theory, emerging applications of information theory, coded and distributed computing, network coding and data storage, information-theoretic methods in machine learning, information theory in data science, security and privacy, network information theory, source coding, and data compression.

Dr. Jun Chen
Dr. Sadaf Salehkalaibar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information-theoretic methods
  • coding techniques
  • distributed storage
  • cloud computing
  • machine learning
  • data analysis
  • wireless communications
  • networking
  • emerging applications


Published Papers (8 papers)


Research

17 pages, 836 KiB  
Article
Bilayer LDPC Codes Combined with Perturbed Decoding for MLC NAND Flash Memory
by Lingjun Kong, Haiyang Liu, Wentao Hou and Chao Meng
Entropy 2024, 26(1), 54; https://doi.org/10.3390/e26010054 - 08 Jan 2024
Viewed by 865
Abstract
This paper presents a coding scheme based on bilayer low-density parity-check (LDPC) codes for multi-level cell (MLC) NAND flash memory. The main feature of the proposed scheme is that it exploits the asymmetric properties of the MLC flash channel and stores the extra parity-check bits in the lower page, where they are activated only after a decoding failure on the upper page. To further improve the error-correction performance, a perturbation process based on the genetic algorithm (GA) is incorporated into the decoding process: by introducing GA-trained noise, it can move uncorrectable read sequences into error-correctable regions of the corresponding decoding space. The perturbation decoding process is particularly efficient in low program-and-erase (P/E) cycle regions. Simulation results suggest that the proposed bilayer LDPC coding scheme can extend the lifetime of MLC NAND flash memory up to 10,000 P/E cycles and achieves a better balance between performance and complexity than traditional single LDPC coding schemes. These findings indicate that the proposed coding scheme is suitable for practical use in MLC NAND flash memory.
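
As a rough illustration of the retry logic above, here is a minimal Python sketch; the decoder interface, the Gaussian noise model, and the perturbation schedule are our own assumptions rather than details from the paper:

import numpy as np

def perturbed_decode(llr, decode, perturb_scales, rng):
    # llr: channel log-likelihood ratios read from the flash page (NumPy array).
    # decode: hypothetical LDPC decoder returning (codeword, success_flag).
    # perturb_scales: candidate noise strengths, stand-ins for GA-trained values.
    word, ok = decode(llr)              # plain decoding attempt first
    if ok:
        return word
    for scale in perturb_scales:        # escalate the perturbation strength
        noise = rng.normal(0.0, scale, size=llr.shape)
        word, ok = decode(llr + noise)  # nudge the read sequence toward a
        if ok:                          # correctable region of the decoding space
            return word
    return None                         # report decoding failure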

21 pages, 416 KiB  
Article
A Numerical Study on the Capacity Region of a Three-Layer Wiretap Network
by Jiahong Wu, Nan Liu and Wei Kang
Entropy 2023, 25(12), 1566; https://doi.org/10.3390/e25121566 - 21 Nov 2023
Viewed by 633
Abstract
In this paper, we study a three-layer wiretap network with the source node in the top layer, N nodes in the middle layer, and L sink nodes in the bottom layer. Each sink node must correctly recover the message generated by the source node via the middle-layer nodes it has access to. Furthermore, an eavesdropper observing a subset of the channels between the top and middle layers must learn absolutely nothing about the message. For each pair of decoding and eavesdropping patterns, we are interested in finding the capacity region consisting of (N+1)-tuples whose first element is the size of the message successfully transmitted and whose remaining elements are the capacities of the N channels from the source node to the middle-layer nodes. This problem can be seen as a generalization of the secret sharing problem. We show that when the number of middle-layer nodes is no larger than four, the capacity region is fully characterized as a polyhedral cone. When that number is five, we find the capacity regions for 74,222 decoding and eavesdropping patterns; for the remaining 274 cases, we find the linear capacity regions. The proof proceeds in three steps: (1) characterizing the Shannon region, an outer bound on the capacity region; (2) characterizing the common information region, an outer bound on the linear capacity region; and (3) finding linear schemes that achieve the Shannon region or the common information region.
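
In standard information-theoretic notation (the symbols below are our own choice; the abstract fixes none), the per-pattern requirements can be written as:

% M: the message; W_i: the signal on the channel from the source to
% middle-layer node i, whose capacity is c_i.
\begin{align*}
  H(M \mid W_{A_\ell}) &= 0      && \text{every sink } \ell \text{ decodes from its access set } A_\ell, \\
  I(M ; W_E)           &= 0      && \text{every eavesdropping set } E \text{ learns nothing}, \\
  H(W_i)               &\le c_i  && i = 1, \dots, N.
\end{align*}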

33 pages, 627 KiB  
Article
Energy-Limited Joint Source–Channel Coding of Gaussian Sources over Gaussian Channels with Unknown Noise Level
by Omri Lev and Anatoly Khina
Entropy 2023, 25(11), 1522; https://doi.org/10.3390/e25111522 - 06 Nov 2023
Viewed by 849
Abstract
We consider the problem of transmitting a Gaussian source with minimum mean square error distortion over an infinite-bandwidth additive white Gaussian noise channel with an unknown noise level, under an input energy constraint. We construct a joint source–channel coding scheme that is universal with respect to the noise level and uses modulo-lattice modulation with multiple layers; for each layer, we employ either analog linear modulation or analog pulse-position modulation (PPM). We show that the designed scheme with linear layers requires less energy than existing solutions to achieve the same distortion profile, which grows quadratically with the noise level; replacing the linear layers with PPM layers offers an additional improvement.
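
Reading "noise level" as the noise variance σ², the target can be stated as follows (notation ours, for illustration): with source sample S, channel inputs X_k, and reconstruction Ŝ,

% Energy constraint and the quadratically growing distortion profile:
\begin{align*}
  \mathbb{E}\Big[\sum\nolimits_k X_k^2\Big] \le E, \qquad
  D(\sigma^2) = \mathbb{E}\big[(S - \hat{S})^2\big] \le \beta \, (\sigma^2)^2,
\end{align*}
% and the goal is to meet this profile for every noise level sigma^2
% with the smallest possible energy budget E.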

13 pages, 556 KiB  
Article
A Novel Image-Classification-Based Decoding Strategy for Downlink Sparse Code Multiple Access Systems
by Zikang Chen, Wenping Ge, Juan Chen, Jiguang He and Hongliang He
Entropy 2023, 25(11), 1514; https://doi.org/10.3390/e25111514 - 04 Nov 2023
Viewed by 733
Abstract
The introduction of sparse code multiple access (SCMA) is driven by the high expectations for future cellular systems. In traditional SCMA receivers, the message passing algorithm (MPA) is commonly employed to decode the received signal; however, the high computational complexity of the MPA falls short of the low-latency requirements of modern communications. Deep learning (DL) has been proven applicable to signal detection, offering low computational complexity and a low bit error rate (BER). To enhance the decoding performance of SCMA systems, we present a novel approach that recasts the complex operation of separating the codewords of individual sub-users from the overlapping codewords as an image-classification task, which is suitable for efficient handling by lightweight graph neural networks. The features of the training images carry crucial information, such as the amplitude and phase of the received signals as well as the channel characteristics. Simulation results show that our proposed scheme achieves better BER performance and lower computational complexity than previous SCMA decoding strategies.
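
A sketch of the representation step only, in Python; the image dimensions and the two-channel layout are illustrative assumptions, and the classifier itself (the paper's lightweight graph neural network) is left abstract:

import numpy as np

def received_to_image(y, height=4, width=4):
    # Arrange received complex samples into a 2-channel "image"
    # (real and imaginary parts), the representation the classifier consumes.
    y = np.asarray(y, dtype=complex).reshape(height, width)
    return np.stack([y.real, y.imag], axis=0)   # shape (2, H, W)

# A classifier would then map each image directly to a sub-user codeword index:
#   codeword_index = classifier(received_to_image(y))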

9 pages, 308 KiB  
Article
A Note on the Mixing Factor of Polar Codes
by Keer Wei, Xiaoyu Jin and Weihua Yang
Entropy 2023, 25(11), 1498; https://doi.org/10.3390/e25111498 - 30 Oct 2023
Viewed by 752
Abstract
Over binary-input memoryless symmetric (BMS) channels, the performance of polar codes under successive cancellation list (SCL) decoding can approach that of the maximum likelihood (ML) algorithm when the list size L is greater than or equal to 2^MF, where MF, known as the mixing factor of the code, is the number of information bits before the last frozen bit. Recently, Yao et al. derived the upper bound on the mixing factor of decreasing monomial codes with length n = 2^m and rate R ≤ 1/2 when m is an odd number; moreover, this bound is reachable. Herein, we obtain an achievable upper bound for the case where m is even. Further, we propose a new hard-decision decoding rule for the bits beyond the last frozen bit of polar codes over BMS channels.
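
The mixing factor itself is straightforward to compute from a frozen-bit mask; a small Python sketch following the definition given above:

def mixing_factor(frozen):
    # frozen[i] is True if bit position i (0-indexed) is frozen.
    # MF = number of information bits before the last frozen bit.
    last_frozen = max(i for i, f in enumerate(frozen) if f)
    return sum(1 for i in range(last_frozen) if not frozen[i])

# Example: positions 0, 1, 2, 4 frozen; positions 3, 5, 6, 7 carry information.
# One information bit (position 3) precedes the last frozen bit (position 4),
# so MF = 1 and SCL decoding with L >= 2^MF = 2 suffices to match ML.
print(mixing_factor([True, True, True, False, True, False, False, False]))  # 1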
37 pages, 543 KiB  
Article
Variable-Length Resolvability for General Sources and Channels
by Hideki Yagi and Te Sun Han
Entropy 2023, 25(10), 1466; https://doi.org/10.3390/e25101466 - 19 Oct 2023
Viewed by 798
Abstract
We introduce the problem of variable-length (VL) source resolvability, in which a given target probability distribution is approximated by encoding a VL uniform random number, and we investigate the asymptotically minimum average length rate of the uniform random number, called the VL resolvability. We first analyze the VL resolvability with the variational distance as the approximation measure and then investigate the case where the divergence serves as the approximation measure. When asymptotically exact approximation is required, the resolvability under the two approximation measures is shown to coincide. We then extend the analysis to channel resolvability, where the target distribution is the output distribution of a general channel driven by a fixed general source at its input. The obtained characterization of channel resolvability is fully general in the sense that, when the channel is simply the identity mapping, it reduces to the general formulas for source resolvability. We also analyze the second-order VL resolvability.
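
For reference, the two approximation measures named above in their standard forms, for a target distribution P and an approximating distribution P̃ on a countable alphabet (the divergence is taken with the approximating distribution as the first argument, the usual convention in resolvability):

\begin{align*}
  d(P, \widetilde{P}) &= \frac{1}{2} \sum_{x \in \mathcal{X}} \big| P(x) - \widetilde{P}(x) \big|
      && \text{(variational distance)}, \\
  D(\widetilde{P} \,\|\, P) &= \sum_{x \in \mathcal{X}} \widetilde{P}(x) \log \frac{\widetilde{P}(x)}{P(x)}
      && \text{(divergence)}.
\end{align*}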

25 pages, 390 KiB  
Article
Lossless Transformations and Excess Risk Bounds in Statistical Inference
by László Györfi, Tamás Linder and Harro Walk
Entropy 2023, 25(10), 1394; https://doi.org/10.3390/e25101394 - 28 Sep 2023
Viewed by 721
Abstract
We study the excess minimum risk in statistical inference, defined as the difference between the minimum expected loss when estimating a random variable from an observed feature vector and the minimum expected loss when estimating the same random variable from a transformation (statistic) of the feature vector. After characterizing lossless transformations, i.e., transformations for which the excess risk is zero for all loss functions, we construct a partitioning test statistic for the hypothesis that a given transformation is lossless, and we show that for i.i.d. data the test is strongly consistent. More generally, we develop information-theoretic upper bounds on the excess risk that uniformly hold over fairly general classes of loss functions. Based on these bounds, we introduce the notion of a δ-lossless transformation and give sufficient conditions for a given transformation to be universally δ-lossless. Applications to classification, nonparametric regression, portfolio strategies, information bottlenecks, and deep learning are also surveyed.
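
Written out from the definition in the first sentence (notation ours): for a random variable Y, feature vector X, transformation T, and loss function ℓ, the excess minimum risk is

\[
  \Delta(T) \;=\; \inf_{g} \, \mathbb{E}\big[\ell\big(Y, g(T(X))\big)\big]
            \;-\; \inf_{f} \, \mathbb{E}\big[\ell\big(Y, f(X)\big)\big] \;\ge\; 0,
\]
% and T is lossless precisely when \Delta(T) = 0 for every loss function \ell.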
25 pages, 409 KiB  
Article
On Optimal and Quantum Code Construction from Cyclic Codes over F_qPQ with Applications
by Shakir Ali, Amal S. Alali, Pushpendra Sharma, Kok Bin Wong, Elif Segah Öztas and Mohammad Jeelani
Entropy 2023, 25(8), 1161; https://doi.org/10.3390/e25081161 - 02 Aug 2023
Viewed by 1436
Abstract
The key objective of this paper is to study cyclic codes over the mixed alphabet F_qPQ, where P = F_q[v]/⟨v^3 − α_2^2 v⟩ and Q = F_q[u, v]/⟨u^2 − α_1^2, v^3 − α_2^2 v⟩ are non-chain finite rings, α_i ∈ F_q \ {0} for i ∈ {1, 2}, and q = p^m for an odd prime p and a positive integer m ≥ 1. Moreover, as applications, we obtain new and better quantum error-correcting (QEC) codes. As a further application, over the ring P we obtain several optimal codes with the help of the Gray image of cyclic codes.