The Interplay between Storage, Computing, and Communications from an Information-Theoretic Perspective

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 October 2020) | Viewed by 10563

Special Issue Editors


Prof. Lawrence Ong
Guest Editor
School of Electrical Engineering and Computing, The University of Newcastle, Callaghan NSW 2308, Australia
Interests: information theory; communication theory; wireless communications; index coding

Dr. Son Hoang Dau
Guest Editor
Discipline of Computer Science & Information Technology, School of Science, RMIT University, Melbourne, VIC 3000, Australia
Interests: coding theory; discrete mathematics; blockchains

Special Issue Information

Dear Colleagues,

In the age of the Internet of Things, billions of physical devices with local computational power and local data storage are connected through ubiquitous communication links. This has led to the distribution of logical computations and storage across many physical devices, enabling a multitude of new applications. These applications have created a complex, intertwined relationship among storage, computing, and communications.

This special issue aims to consolidate recent advances in the fundamental understanding of problems that build on the interplay between storage, computing, and communications. More specifically, these problems study, from an information-theoretic perspective, how to perform tasks efficiently by jointly optimizing the distributed computation at different nodes and the communication among them through the use of local storage.

Prof. Lawrence Ong
Dr. Son Hoang Dau
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Coded computing
  • Coded caching
  • Index coding
  • Private information retrieval
  • Private computing

Published Papers (5 papers)


Research

19 pages, 425 KiB  
Article
On Grid Quorums for Erasure Coded Data
by Frédérique Oggier and Anwitaman Datta
Entropy 2021, 23(2), 177; https://doi.org/10.3390/e23020177 - 30 Jan 2021
Cited by 1 | Viewed by 1505
Abstract
We consider the problem of designing grid quorum systems for maximum distance separable (MDS) erasure code based distributed storage systems. Quorums are used as a mechanism to maintain consistency in replication based storage systems, for which grid quorums have been shown to produce optimal load characteristics. This motivates the study of grid quorums in the context of erasure code based distributed storage systems. We show how grid quorums can be built for erasure coded data, investigate the load characteristics of these quorum systems, and demonstrate how sequential consistency is achieved even in the presence of storage node failures.
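
As a rough illustration of the quorum idea behind this paper, the Python sketch below builds the classical grid quorum system over replicated data: nodes are arranged in a grid and each quorum is one full row together with one full column, so any two quorums intersect. This is only the replication baseline that the paper generalizes to MDS erasure-coded data; the grid dimensions and function names are illustrative, not the authors' construction.

```python
from itertools import product

def grid_quorums(rows, cols):
    """Classical grid quorum system over a rows x cols grid of storage nodes.

    Each quorum is the union of one full row and one full column, so any two
    quorums intersect (a row always meets a column), which is what lets
    overlapping read/write operations stay consistent.
    """
    nodes = [(r, c) for r in range(rows) for c in range(cols)]
    quorums = [
        {(r, j) for j in range(cols)} | {(i, c) for i in range(rows)}
        for r, c in product(range(rows), range(cols))
    ]
    return nodes, quorums

if __name__ == "__main__":
    nodes, quorums = grid_quorums(4, 4)
    # Pairwise intersection is the defining property of a quorum system.
    assert all(q1 & q2 for q1 in quorums for q2 in quorums)
    print(f"{len(quorums)} quorums of size {len(quorums[0])} over {len(nodes)} nodes")
```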

16 pages, 378 KiB  
Article
Cache-Aided General Linear Function Retrieval
by Kai Wan, Hua Sun, Mingyue Ji, Daniela Tuninetti and Giuseppe Caire
Entropy 2021, 23(1), 25; https://doi.org/10.3390/e23010025 - 26 Dec 2020
Cited by 4 | Viewed by 1976
Abstract
Coded caching, proposed by Maddah-Ali and Niesen (MAN), has the potential to reduce network traffic by pre-storing content in the users’ local memories when the network is underutilized and transmitting coded multicast messages that simultaneously benefit many users during peak-hour times. This paper considers the linear function retrieval version of the original coded caching setting, where users are interested in retrieving a number of linear combinations of the data points stored at the server, as opposed to a single file. This extends the scope of the authors’ past work that only considered the class of linear functions that operate element-wise over the files. On observing that the existing cache-aided scalar linear function retrieval scheme does not work in the proposed setting, this paper designs a novel coded caching scheme that outperforms uncoded caching schemes that either use unicast transmissions or let each user recover all files in the library.
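
For readers new to coded caching, the sketch below implements the basic MAN placement and XOR-based delivery for single-file demands, the baseline that this paper extends to general linear function retrieval. File contents, parameters, and function names are illustrative assumptions, not the authors' code.

```python
from itertools import combinations

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def man_placement(files, K, t):
    """MAN placement: split each file into C(K, t) subfiles indexed by t-subsets
    of users; user k caches every subfile whose index set contains k.
    Assumes t = K*M/N is an integer and the subfiles divide the file length evenly."""
    subsets = list(combinations(range(K), t))
    L = len(files[0]) // len(subsets)
    split = {n: {S: f[i * L:(i + 1) * L] for i, S in enumerate(subsets)}
             for n, f in enumerate(files)}
    cache = {k: {(n, S): split[n][S] for n in split for S in subsets if k in S}
             for k in range(K)}
    return split, cache, L

def man_delivery(split, demands, K, t, L):
    """Delivery: one XOR-coded multicast per (t+1)-subset of users; each user in
    the subset misses exactly one of the XORed subfiles and has the rest cached."""
    msgs = {}
    for S in combinations(range(K), t + 1):
        coded = bytes(L)
        for k in S:
            missing = tuple(u for u in S if u != k)
            coded = xor(coded, split[demands[k]][missing])
        msgs[S] = coded
    return msgs

if __name__ == "__main__":
    K, t = 4, 2                                   # 4 users, t = K*M/N = 2
    files = [bytes([65 + n]) * 12 for n in range(4)]
    split, cache, L = man_placement(files, K, t)
    demands = [0, 1, 2, 3]                        # user k requests file k
    msgs = man_delivery(split, demands, K, t, L)
    # User 0 decodes its missing subfile of file 0 from the multicast to {0,1,2}
    # by XOR-ing out the two subfiles it already holds in its cache.
    recovered = xor(xor(msgs[(0, 1, 2)], cache[0][(demands[1], (0, 2))]),
                    cache[0][(demands[2], (0, 1))])
    assert recovered == split[demands[0]][(1, 2)]
    print("user 0 recovered subfile", (1, 2), "of its requested file")
```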

17 pages, 403 KiB  
Article
Optimal Linear Error Correcting Delivery Schemes for Two Optimal Coded Caching Schemes
by Nujoom Sageer Karat, Anoop Thomas and Balaji Sundar Rajan
Entropy 2020, 22(7), 766; https://doi.org/10.3390/e22070766 - 13 Jul 2020
Viewed by 1620
Abstract
For coded caching problems with small buffer sizes and with the number of users no less than the number of files in the server, an optimal delivery scheme was proposed by Chen, Fan, and Letaief in 2016. This scheme is referred to as the CFL scheme. In this paper, an extension of the coded caching problem in which the link between the server and the users is error prone is considered. Closed-form expressions for the average rate and the peak rate of an error correcting delivery scheme are found for the CFL prefetching scheme using techniques from index coding. Using results from error correcting index coding, an optimal linear error correcting delivery scheme for caching problems employing CFL prefetching is proposed. Another scheme, which has a lower sub-packetization requirement than the CFL scheme for the same cache memory size, was considered by J. Gomez-Vilardebo in 2018. An optimal linear error correcting delivery scheme is also proposed for this scheme.
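
The paper constructs optimal linear error-correcting index codes on top of CFL prefetching; the toy sketch below only illustrates the much weaker, generic idea of protecting delivery-phase bits on an error-prone link, here with a simple repetition code and majority-vote decoding. This substitution is purely illustrative and is not the authors' scheme.

```python
def repetition_encode(bits, t):
    """Protect each transmitted bit with a (2t+1)-fold repetition code."""
    return [b for bit in bits for b in [bit] * (2 * t + 1)]

def repetition_decode(symbols, t):
    """Majority-vote decoding: corrects up to t symbol errors per codeword."""
    n = 2 * t + 1
    return [1 if sum(symbols[i:i + n]) > t else 0
            for i in range(0, len(symbols), n)]

if __name__ == "__main__":
    delivery = [1, 0, 1, 1]        # bits of a coded multicast transmission
    t = 1                          # number of errors to tolerate per bit
    sent = repetition_encode(delivery, t)
    sent[0] ^= 1                   # the error-prone link flips one symbol
    assert repetition_decode(sent, t) == delivery
    print("delivery recovered despite a channel error")
```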

22 pages, 554 KiB  
Article
Generalized Index Coding Problem and Discrete Polymatroids
by Anoop Thomas and Balaji Sundar Rajan
Entropy 2020, 22(6), 646; https://doi.org/10.3390/e22060646 - 10 Jun 2020
Viewed by 1666
Abstract
The connections between index coding and matroid theory have been well studied in the recent past. Index coding solutions were first connected to multilinear representations of matroids. For vector linear index codes, discrete polymatroids, which can be viewed as a generalization of matroids, were used. The index coding problem has recently been generalized to accommodate receivers that demand functions of messages and possess functions of messages. In this work, we explore the connections between generalized index coding and discrete polymatroids. The conditions that need to be satisfied by a representable discrete polymatroid for a generalized index coding problem to have a vector linear solution are established. From a discrete polymatroid, an index coding problem with coded side information is constructed, and it is shown that if the index coding problem has a certain optimal-length solution then the discrete polymatroid is representable. If the generalized index coding problem is constructed from a matroid, it is shown that the index coding problem has a binary scalar linear solution of optimal length if and only if the matroid is binary representable.
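
As a minimal illustration of the index coding setting behind the polymatroid connection, the sketch below shows a binary scalar linear code of length one: the server broadcasts the XOR of all messages, and a receiver can decode whether its side information consists of individual messages or, as in the generalized problem, a coded function of them. The scenario and names are illustrative only.

```python
import random

def broadcast(messages):
    """Scalar linear (binary) index code: the server sends the XOR of all messages."""
    x = 0
    for m in messages:
        x ^= m
    return x

def decode(coded, side_info):
    """A receiver cancels its side information (possibly itself a coded function
    of messages, as in the generalized problem) from the single broadcast."""
    return coded ^ side_info

if __name__ == "__main__":
    random.seed(1)
    x1, x2, x3 = (random.randint(0, 1) for _ in range(3))
    y = broadcast([x1, x2, x3])        # one transmitted bit
    # Receiver A demands x2 and stores x1 and x3 individually as side information.
    assert decode(y, x1 ^ x3) == x2
    # Receiver B demands x1 but possesses only the coded side information x2 XOR x3,
    # yet it decodes from the same broadcast.
    assert decode(y, x2 ^ x3) == x1
    print("both receivers recover their demands from a single broadcast bit")
```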

30 pages, 1151 KiB  
Article
Straggler-Aware Distributed Learning: Communication–Computation Latency Trade-Off
by Emre Ozfatura, Sennur Ulukus and Deniz Gündüz
Entropy 2020, 22(5), 544; https://doi.org/10.3390/e22050544 - 13 May 2020
Cited by 30 | Viewed by 3299
Abstract
When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning applications, its per-iteration computation time is limited by straggling workers. Straggling workers can be tolerated by assigning redundant computations and/or coding across data and computations, but in most existing schemes, each non-straggling worker transmits one message per iteration to the parameter server (PS) after completing all its computations. Imposing such a limitation results in two drawbacks: over-computation due to inaccurate prediction of the straggling behavior, and under-utilization due to discarding partial computations carried out by stragglers. To overcome these drawbacks, we consider multi-message communication (MMC) by allowing multiple computations to be conveyed from each worker per iteration, and propose novel straggler avoidance techniques for both coded computation and coded communication with MMC. We analyze how the proposed designs can be employed efficiently to strike a balance between the computation and communication latency. Furthermore, we identify the advantages and disadvantages of these designs in different settings through extensive simulations, both model-based and via real implementation on Amazon EC2 servers, and demonstrate that the proposed schemes with MMC can improve upon existing straggler avoidance schemes.
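
The toy simulation below contrasts one-message-per-iteration reporting with multi-message communication (MMC) under a cyclic redundant assignment of data partitions, with straggling modeled by random per-worker speeds. It is a hedged sketch of the general idea only; the redundancy pattern, timing model, and names are assumptions rather than the authors' schemes.

```python
import random

def simulate(num_workers=10, redundancy=2, mmc=True, seed=0):
    """Time until the parameter server holds a gradient for every data partition.

    Each worker is assigned `redundancy` partitions cyclically and computes them
    in sequence. With multi-message communication (mmc=True) it reports each
    partial gradient as soon as it is done; otherwise it reports only once,
    after finishing all of its assigned partitions."""
    random.seed(seed)
    assignment = {w: [(w + j) % num_workers for j in range(redundancy)]
                  for w in range(num_workers)}
    speed = {w: random.uniform(1.0, 5.0) for w in range(num_workers)}  # slow workers straggle
    arrival = {}  # partition -> earliest time its gradient reaches the server
    for w, parts in assignment.items():
        elapsed = 0.0
        for p in parts:
            elapsed += speed[w]
            ready = elapsed if mmc else speed[w] * len(parts)
            arrival[p] = min(arrival.get(p, float("inf")), ready)
    return max(arrival.values())  # full gradient available once every partition is covered

if __name__ == "__main__":
    print("completion time with MMC   :", round(simulate(mmc=True), 2))
    print("completion time without MMC:", round(simulate(mmc=False), 2))
```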
