Article

Unconditionally Secure Ciphers with a Short Key for a Source with Unknown Statistics

Boris Ryabko 1,2
1 Federal Research Center for Information and Computational Technologies, Novosibirsk 630090, Russia
2 Department of Information Technologies, Novosibirsk State University, Novosibirsk 630090, Russia
Entropy 2023, 25(10), 1406; https://doi.org/10.3390/e25101406
Submission received: 21 August 2023 / Revised: 24 September 2023 / Accepted: 28 September 2023 / Published: 30 September 2023
(This article belongs to the Special Issue Information-Theoretic Cryptography and Steganography)

Abstract

We consider the problem of constructing an unconditionally secure cipher with a short key for the case where the probability distribution of encrypted messages is unknown. Note that unconditional security means that an adversary with no computational constraints can only obtain a negligible amount of information ("leakage") about an encrypted message (without knowing the key). Here, we consider the case of a priori (partially) unknown message source statistics. More specifically, the message source probability distribution belongs to a given family of distributions. We propose an unconditionally secure cipher for this case. As an example, one can consider constructing a single cipher for texts written in any of the languages of the European Union. That is, the message to be encrypted could be written in any of these languages.

1. Introduction

The concept of unconditional security is very attractive in cryptography and has found many applications since C. Shannon described it in his famous article [1]. The concept refers to secret-key cryptography involving three participants, Alice, Bob and Eve, where Alice wants to send a message to Bob in secret from Eve, who can read all correspondence between Alice and Bob. To accomplish this, Alice and Bob use a cipher with a secret key $K$ (i.e., a word over some alphabet), which is known to them in advance (but not to Eve). When Alice wants to send some message $m$, she first encrypts $m$ using the key $K$ and sends the result to Bob, who in turn decrypts the received encrypted message using $K$. Eve also receives the encrypted message and tries to decrypt it without knowing the key. The system is called unconditionally secure, or perfect, if Eve, with computers and other equipment of unlimited power and unlimited time, cannot obtain any information about the encrypted message. Not only did C. Shannon provide a formal definition of perfect (or unconditional) secrecy, but he also showed that the so-called one-time pad (or Vernam cipher) is such a system. One specific property of this system is that the length of the secret key equals the length of the message (or its entropy); moreover, C. Shannon proved that this property must hold for any perfect system. This property often limits practical application: many modern telecommunication systems forward and store megabytes of information, and the requirement to have secret keys of the same length is quite stringent. There are, therefore, many approaches to overcoming this obstacle. These include the ideal systems proposed by C. Shannon [1], honey encryption proposed by Juels and Ristenpart [2], the entropic security proposed by Russell and Wang [3] and some others developed in recent decades [4,5,6,7,8,9].
It is worth noting that quantum key distribution (QKD), which can provide Alice and Bob with an unconditionally secure key, is currently under active research; cf. [10,11,12].
The present work is concerned with entropically secure ciphers. It is important to note that an entropically secure cipher is not perfect: Eve may obtain some information about the message (the "leakage"; see the definition below), but this leakage can be made negligible. On the other hand, an entropically secure cipher makes it possible to reduce the key length significantly compared to a perfect cipher.
The concept of entropically secure cipher was proposed in 2006 by Russell and Wang in the paper [3] where they also created the first entropically secure cipher. In their cipher, the length of the secret key is proportional to the length of the encrypted message if the min-entropy of that message is less than one bit per letter. Recently, the results of Russell and Wang have been developed such that the length of the secret key is independent of the length of the message in the case where the messages to be encrypted have a known probability distribution [9] (see the definition of the min-entropy (2) and Theorems 1 and 2 below for details).
In this paper, we consider the situation where encrypted messages obey an unknown (or partially unknown) probability distribution. We propose an entropically secure cipher for which the key length depends on the universal code (or data compressor) used for encoding the source and on the admissible leakage of the cipher. The construction of the cipher is based on entropically secure ciphers [3,5,8,9] and universal coding [13]. It is worth noting that the proposed cipher uses data compression and randomization, both of which are quite popular in unconditional security, cf. [14,15,16] and [16,17], respectively.

2. Definitions and Preliminaries

2.1. Basic Concepts

We consider the problem of symmetric encryption, where Alice wants to securely transmit a message to Bob. The messages are $n$-letter binary words obeying a certain probability distribution $p$ defined on the set $\{0,1\}^n$, $n \ge 1$. This distribution is only partially known: it is known that $p$ belongs to some given family of distributions $P$. Alice and Bob have a shared secret key $K = K_1 \ldots K_k$, and Alice encrypts the message $M \in \{0,1\}^n$ using $K$ and, possibly, some random bits. Then she sends the word $cipher(M, K)$ to Bob, who decrypts the received $cipher(M, K)$ and obtains $M$. The third participant is a computationally unconstrained adversary Eve, who knows $cipher(M, K)$ and the distribution $p$ and wants to find some information about $M$ without knowing $K$.
Russell and Wang [3] suggested a definition of entropic security, which was generalized by Dodis and Smith [5] as follows: a probabilistic map $Y$ is said to hide all functions on $\{0,1\}^n$ with leakage $\epsilon$ if, for every adversary $A$, there exists some adversary $\hat{A}$ (who does not know $Y(M)$) such that for all functions $f$,

$$ | \Pr\{ A(Y(M)) = f(M) \} - \Pr\{ \hat{A}(\,) = f(M) \} | \le \epsilon. \quad (1) $$

(Note that $\hat{A}$ does not know $Y(M)$ and, in fact, guesses the value of the function $f(M)$.) In what follows, the probabilistic map $Y$ will be $cipher(M, K)$, and $f$ is a map $f: \{0,1\}^n \to \{0,1\}^*$.
Definition 1.
The map $Y(\cdot)$ is called ϵ-entropically secure for a family of probability distributions $P$ if $Y(\cdot)$ hides all functions on $\{0,1\}^n$ with leakage $\epsilon$ whenever $p \in P$.
Note that, in a sense, Definition 1 is a generalization of Shannon's notion of perfect secrecy. Namely, if we take $\epsilon = 0$, $Y = cipher(M, K)$ and $f(x) = x$, we obtain that for any $M$

$$ | \Pr\{ A(cipher(M, K)) = M \} - \Pr\{ \hat{A}(\,) = M \} | = 0. $$

So $A$ and $\hat{A}$ obtain the same result, but $A$ estimates the probability based on $cipher(M, K)$, whereas $\hat{A}$ does so without knowledge of $cipher(M, K)$. Thus, entropic security (1) can be considered a generalization of Shannon's perfect secrecy.
We will use another important concept, the notion of indistinguishability.
Definition 2.
A randomized map $Y: \{0,1\}^n \to \{0,1\}^n$, $n \ge 1$, is ϵ-indistinguishable for some family of distributions $P$ and $\epsilon > 0$ if there is a probability distribution $G$ on $\{0,1\}^n$ such that for every probability distribution $p \in P$ we have

$$ SD(Y(M), G) \le \epsilon, $$

where for two distributions $A$, $B$

$$ SD(A, B) = \frac{1}{2} \sum_{u \in \{0,1\}^n} | \Pr\{A = u\} - \Pr\{B = u\} |. $$

Importantly, $G$ is independent of $Y(M)$.
Dodis and Smith [5] showed that the concepts of ϵ-entropic security and ϵ-indistinguishability are equivalent up to small parameter changes.
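To make the statistical distance $SD$ concrete, here is a minimal Python sketch (our illustration, not code from the paper); the distributions p and uniform are toy examples on $\{0,1\}^2$:

```python
# A minimal sketch: the statistical distance SD(A, B) of Definition 2
# for two distributions given as dicts mapping words of {0,1}^n to
# probabilities.

def statistical_distance(a: dict, b: dict) -> float:
    """SD(A, B) = 1/2 * sum_u |Pr{A = u} - Pr{B = u}|."""
    support = set(a) | set(b)
    return 0.5 * sum(abs(a.get(u, 0.0) - b.get(u, 0.0)) for u in support)

# Example: a skewed distribution vs. the uniform one on {0,1}^2.
p = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
uniform = {u: 0.25 for u in ("00", "01", "10", "11")}
print(statistical_distance(p, uniform))  # 0.25
```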

2.2. ϵ-Entropically Secure Ciphers for Distributions with Bounded Min-Entropy

In 2006, the first entropically secure cipher was developed [3] for probability distributions with a bounded value of the so-called min-entropy, which is defined as follows:

$$ h_{min}(p) = -\log \max_{a \in A} p(a), \quad (2) $$

where $p$ is a probability distribution on an alphabet $A$ and $\log \equiv \log_2$. The Russell and Wang cipher [3] was generalized and developed by Dodis and Smith [5], and their result can be formulated as follows:
Theorem 1
([5]). Let $p$ be a probability distribution on $\{0,1\}^n$, $n > 0$, whose min-entropy is not less than $h$, $h \in [0, n]$. Then there exists an ϵ-entropically secure cipher with a $k$-bit key, where

$$ k = n - h + 2\log(1/\epsilon) + 2. \quad (3) $$
Let us denote this cipher by $cipher_{rwds}$.
In a sense, this cipher generalizes the perfect Shannon cipher as follows: in a perfect cipher, the key is a word from $\{0,1\}^n$, while in an entropically secure cipher, the key belongs to a $2^k$-element subset $\mathcal{K} \subset \{0,1\}^n$ which is a so-called small-biased set. Informally, this means that for any $m \le n$, a uniformly chosen binary word $u \in \{0,1\}^m$ and any $m$ positions $i_1 < i_2 < \ldots < i_m$, the probability that $K_{i_1} K_{i_2} \ldots K_{i_m} = u$ is close to $2^{-m}$. (This construction is based on some deep results in combinatorics [5,18,19].) Thus, the key length decreases from $n$ to $k$. Note that the leakage $\epsilon$, and hence the summand $2\log(1/\epsilon) + 2$, depends on the size $2^k$ of the small-biased set (in general, a larger $k$ implies a smaller $\epsilon$).
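As a quick numeric illustration (our sketch; the distribution and the leakage value are made-up examples), the min-entropy (2) and the key length (3) of Theorem 1 can be computed as follows:

```python
# Illustrative sketch: min-entropy (2) of a toy distribution and the
# resulting key length (3) of Theorem 1. `p` and `eps` are example values.
import math

def min_entropy(p: dict) -> float:
    """h_min(p) = -log2 max_a p(a)."""
    return -math.log2(max(p.values()))

n = 2
p = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
h = min_entropy(p)                       # 1 bit
eps = 2 ** -40                           # admissible leakage (assumption)
k = n - h + 2 * math.log2(1 / eps) + 2   # key length per Theorem 1
print(h, k)                              # 1.0, 83.0
```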

2.3. ϵ-Entropically Secure Ciphers with a Reduced Secret Key

In equality (3), the linearly increasing summand $n - h$ depends on the min-entropy $h$. It therefore seems natural to transform the set $\{0,1\}^n$ so as to reduce the gap between the word length and the min-entropy, i.e., the summand $n - h$. In [9], this approach was realized as follows: let there be a set of probability distributions $P$ defined on $\{0,1\}^n$, $n \ge 1$. The key part of the cipher is a randomized map $\phi: \{0,1\}^n \to \{0,1\}^{n^*}$, $n^* \ge n$, such that there exists an inverse map $\phi^{-1}$ (i.e., $\forall u: \phi^{-1}(\phi(u)) = u$) and the min-entropy of the transformed probability distribution $\pi_p$ is close to $n^*$ (here $\pi_p$ is the distribution for which $p(u) = \sum_{v: \phi^{-1}(v) = u} \pi_p(v)$). Then $cipher_{rwds}$ can be applied to $\phi(m)$ with a shorter key, because the difference $n^* - h_{min}(\pi_p)$ will be less than $n - h_{min}(p)$; see (3). Thus, the smaller $\sup_{p \in P}(n^* - h_{min}(\pi_p))$, the shorter the secret key. The described cipher is based on data compression and randomization and is denoted in [9] by $cipher_{c\&r}$. The following theorem describes its properties.
Theorem 2
([9]). Suppose there is a family $P$ of probability distributions defined on $\{0,1\}^n$ and a randomized mapping $\phi: \{0,1\}^n \to \{0,1\}^{n^*}$, $n^* \ge n$, for which there exists a mapping $\phi^{-1}$, and let

$$ \sup_{p \in P} \big( n^* - h_{min}(\pi_p) \big) \le \Delta \quad (4) $$

for some Δ. Then,
(i) $cipher_{c\&r}$ is ϵ-entropically secure with secret key length $\Delta + 2\log(1/\epsilon) + 2$, and
(ii) $cipher_{c\&r}$ is ϵ-indistinguishable with secret key length $\Delta + 2\log(1/\epsilon) + 6$.
Now we consider a simple example to illustrate the basic idea. Let $n = 2$, $p(00) = 1/2$, $p(01) = 1/4$, $p(10) = p(11) = 1/8$. Obviously, $h_{min}(p) = 1$ and $\Delta = 2 - 1 = 1$. The map $\phi$ is constructed in two steps: first, "compress" the letters to codewords of length $-\log p(a)$, that is, in our example, $00 \to 0$, $01 \to 10$, $10 \to 110$ and $11 \to 111$. Second, randomize: map $0$ to a word chosen uniformly from $\{000, 001, 010, 011\}$, map $10$ to a word from $\{100, 101\}$, and map the last two codewords to $\{110\}$ and $\{111\}$, respectively. As a result, we obtain the set $\{0,1\}^3$ with a uniform distribution, whose min-entropy equals three, and hence $\Delta = 3 - 3 = 0$. Thus, the key length becomes one bit shorter, but the message length grows. It is proved in [9] that such a "bloated" cipher is ϵ-entropically secure.
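A short verification of this example (our sketch, not the paper's code): enumerating the padded words shows that the resulting distribution is exactly uniform on $\{0,1\}^3$:

```python
# Checking the example: the compress-and-randomize map turns p into the
# uniform distribution on {0,1}^3.
from fractions import Fraction

p = {"00": Fraction(1, 2), "01": Fraction(1, 4),
     "10": Fraction(1, 8), "11": Fraction(1, 8)}
code = {"00": "0", "01": "10", "10": "110", "11": "111"}  # Shannon code
n_star = 3

pi = {}  # distribution of phi(m): probability spread over the paddings
for m, c in code.items():
    n_pad = n_star - len(c)
    for i in range(2 ** n_pad):
        v = c + format(i, f"0{n_pad}b") if n_pad else c
        pi[v] = p[m] / 2 ** n_pad

print(sorted(pi.items()))  # every word of {0,1}^3 has probability 1/8
```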
Obviously, the key length depends on the efficiency of the compression method (code). Thus, in the case of known statistics (i.e., known $p$), the key length is $\Delta + 2\log(1/\epsilon) + 2$, where $\Delta$ is 1 or 2 and depends on the compression code chosen. If $p$ is unknown, but the messages are known to be generated by a Markov chain with known memory, then $\Delta = O(\log n)$ (and the key length is $O(\log n) + 2\log(1/\epsilon)$) [9].

2.4. Universal Coding

The problem of constructing a single code for a whole family of probability distributions (information sources) is well known in information theory, and there are currently dozens of effective universal codes based on different ideas and approaches; many of them underlie the so-called archivers (e.g., ZIP). The first universal code for Bernoulli and Markov processes was proposed by Fitingof [20], and then Krichevsky found an asymptotically optimal code for these processes [13,21]. Other universal codes include the PPM code [22], which is used together with arithmetic coding [23], the Lempel-Ziv (LZ) codes [24], the Burrows–Wheeler transform [25], which is used together with the book-stack (or MTF) code [26] (see also [27,28]), grammar-based codes [29,30] and some others [31,32,33,34].
A universal code $c$ has to "compress" sequences $x = x_1 \ldots x_n$ that obey a distribution $p \in P$ down to the Shannon entropy $h_{Sh}(p)$; the difference $r(p) = E_p(|c(x)|) - h_{Sh}(p)$ is called the redundancy [13] (here $E_p$ is the expectation and $|u|$ is the length of $u$). In [35], an algorithm was proposed to construct a code $c_{opt}$ whose maximal redundancy on $P$ is minimal, i.e., $r_{opt} = \inf_c \sup_{p \in P} r_c(p)$. It was shown in [35] that $r_{opt}$ equals the capacity of the channel whose input alphabet is $P$, whose output alphabet is the alphabet on which the distributions from $P$ are defined (in our case, $\{0,1\}^n$), and whose channel-matrix rows are the probability distributions from $P$ (see also [36] for the history of this discovery). This fact is important because it allows us to use known methods of computing the channel capacity to find the optimal code.
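Since $r_{opt}$ equals a channel capacity, it can be evaluated numerically with standard tools such as the Blahut–Arimoto iteration. Below is a minimal sketch (our illustration; the toy family W is made up, and this is not the algorithm of [35] itself):

```python
# Sketch: the minimax redundancy r_opt equals the capacity of the channel
# whose matrix rows are the distributions of P. The standard
# Blahut-Arimoto iteration computes this capacity (all entries of W are
# assumed positive to keep the code short).
import numpy as np

def capacity(W: np.ndarray, iters: int = 500) -> float:
    """W[i, j] = p_i(u_j): row i is the i-th distribution of P.
    Returns the channel capacity in bits."""
    m = W.shape[0]
    w = np.full(m, 1.0 / m)                     # prior over distributions
    for _ in range(iters):
        q = w @ W                               # induced output distribution
        d = np.sum(W * np.log2(W / q), axis=1)  # D(p_i || q) in bits
        w *= np.exp2(d)                         # Blahut-Arimoto update
        w /= w.sum()
    return float(np.sum(w * d))                 # capacity estimate = r_opt

# Toy family P of two distributions on a binary alphabet:
W = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(capacity(W))   # capacity (= minimax redundancy) of this toy P
```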
In this paper, we will use the so-called Shtarkov maximum-likelihood code $c_{Sht}$ [37], whose construction is much simpler and whose redundancy is often close to that of the optimal code. This code is described as follows: first, define

$$ p_{max}(u) = \sup_{p \in P} p(u), \; u \in \{0,1\}^n, \qquad S_P = \sum_{u \in \{0,1\}^n} p_{max}(u), \qquad q(u) = p_{max}(u) / S_P. \quad (5) $$

Clearly,

$$ \forall u: \; p(u) / q(u) \le S_P. \quad (6) $$

Shtarkov proposed to build a code $c_{Sht}$ for which $|c_{Sht}(u)| = \lceil -\log q(u) \rceil$. (Such a code exists; see [38].)
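For a small finite family $P$, the quantities of (5) and the Shtarkov codeword lengths are straightforward to compute; a minimal sketch (our toy family, not from [37]):

```python
# Sketch of the Shtarkov maximum-likelihood distribution q of (5).
import math

def shtarkov(P):
    """P: list of dicts u -> p(u). Returns the distribution q of (5) and S_P."""
    words = set().union(*P)
    p_max = {u: max(p.get(u, 0.0) for p in P) for u in words}
    S = sum(p_max.values())             # for finite P, S_P <= |P| (see Claim)
    q = {u: p_max[u] / S for u in words}
    return q, S

# Toy family of two distributions on the alphabet {0, 1}:
P = [{"0": 0.9, "1": 0.1},
     {"0": 0.5, "1": 0.5}]
q, S = shtarkov(P)
lengths = {u: math.ceil(-math.log2(q[u])) for u in q}  # |c_Sht(u)|
print(S, q, lengths)    # S_P = 1.4 <= |P| = 2
```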
Claim. If the set $P$ is finite, then $S_P \le |P|$.
Proof. 
From definition (5) we can see that

$$ S_P = \sum_{u \in \{0,1\}^n} p_{max}(u) \le \sum_{u \in \{0,1\}^n} \sum_{p \in P} p(u). $$

Clearly, $\sum_{u \in \{0,1\}^n} p(u) = 1$ for every $p \in P$, and from this and the previous inequality we obtain

$$ S_P \le \sum_{p \in P} \Big( \sum_{u \in \{0,1\}^n} p(u) \Big) = \sum_{p \in P} 1 = |P|. $$

The claim is proven. □
Note that this claim applies, in particular, when $P$ contains the probability distributions corresponding to several languages.

3. The Cipher

Now we are going to construct an ϵ-entropically secure cipher $cipher_{c\&r}$ for the case of unknown statistics: there is a set of probability distributions $P$ generating words from $\{0,1\}^n$, $n \ge 1$, and the constructed cipher should be applicable to messages obeying any $p \in P$ with a leakage not larger than $\epsilon$. In short, we apply the general method of [9] to the probability distribution $q$ of (5). In detail, Alice wants to send messages $m \in \{0,1\}^n$ to Bob, and they both know in advance that $m$ can obey any probability distribution $p$ from the set $P$. The cipher algorithm is as follows.
Constructing the cipher. We describe all calculations in the following steps (a Python sketch of steps (i)-(iii) follows this description):
(i)
Compute the distribution $q$ according to (5) and order the values $q(u)$, $u \in \{0,1\}^n$, in non-increasing order. (Denote the ordered probabilities by $q_1 \ge q_2 \ge \ldots \ge q_N$, $N = 2^n$, and let $\nu(u)$ be the index $i$ for which $q(u) = q_i$.)
(ii)
Encode the "letters" $1, 2, \ldots, N$ having the distribution $q$ by the trimmed Shannon code from [9]. Denote this code by $\lambda$ and note that

$$ \forall i: \; |\lambda(i)| < -\log q_i + 2 \quad (7) $$

and $\lambda$ is prefix-free; that is, for any $i \ne j$, neither $\lambda(i)$ is a prefix of $\lambda(j)$, nor $\lambda(j)$ a prefix of $\lambda(i)$ [9].
(iii)
Build the following randomized map $\phi$: first, find $n^* = \max_i |\lambda(i)|$ and then define, for $u \in \{0,1\}^n$,

$$ \phi(u) = \lambda(\nu(u)) \, r_{|\lambda(\nu(u))|+1} \ldots r_{n^*}, \quad (8) $$

where the $r_j$ are equiprobable independent binary digits.
(iv)
For the desired leakage ϵ, build $cipher_{rwds}$ with secret key length

$$ \log S_P + 2\log(1/\epsilon) + \delta, \quad (9) $$

where $\delta = 4$ for the ϵ-entropically secure cipher and $\delta = 8$ for the ϵ-indistinguishable one (these constants match Theorem 3 below).
It is worth noting that Alice and Bob (and Eve) can perform all the described calculations independently of each other.
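The following Python sketch illustrates steps (i)-(iii) for a small alphabet. It is our illustration under simplifying assumptions: in place of the trimmed Shannon code of [9] we use an ordinary canonical prefix code with lengths $\lceil -\log q_i \rceil$, which also satisfies (7), and step (iv), the small-bias-set cipher $cipher_{rwds}$, is omitted.

```python
# Sketch of steps (i)-(iii) of the construction (illustrative only).
import math, random

def prefix_code(lengths):
    """Canonical prefix-free code for lengths satisfying Kraft's inequality."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    codes = [""] * len(lengths)
    c, prev = 0, 0
    for i in order:
        c <<= lengths[i] - prev              # move to the next code length
        codes[i] = format(c, f"0{lengths[i]}b")
        prev = lengths[i]
        c += 1
    return codes

def build_phi(q):
    """Order q (step i), build a prefix code lambda (step ii), pad (step iii)."""
    words = sorted(q, key=q.get, reverse=True)            # step (i)
    lengths = [math.ceil(-math.log2(q[u])) for u in words]
    lam = dict(zip(words, prefix_code(lengths)))          # step (ii)
    n_star = max(lengths)
    def phi(m):                                           # step (iii), map (8)
        c = lam[m]
        pad = "".join(random.choice("01") for _ in range(n_star - len(c)))
        return c + pad
    def phi_inv(v):                          # decode via the prefix lambda
        return next(m for m, c in lam.items() if v.startswith(c))
    return phi, phi_inv, n_star
```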
Use of the cipher. Suppose Alice and Bob have a randomly chosen secret key $K$, $|K| = k$, and Alice wants to send Bob a message $m$. To accomplish this, she computes $cipher_{c\&r}(m, K)$, as described above, and sends it to Bob.
Bob receives the word $cipher_{c\&r}(m, K)$ and decrypts it with the key $K$. As a result, he obtains the word $\phi(m) = \lambda(\nu(m)) \, r_{|\lambda(\nu(m))|+1} \ldots r_{n^*}$, whose prefix $\lambda(\nu(m))$ determines the message $m$ (this is possible because $\lambda$ is prefix-free).
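Continuing the sketch above, a round trip through $\phi$ looks as follows (again illustrative only; the real cipher would additionally encrypt $\phi(m)$ with $cipher_{rwds}$):

```python
# Toy usage of build_phi with the distribution from the example of
# Section 2.3 (here q plays the role of the ordered distribution (5)).
q = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
phi, phi_inv, n_star = build_phi(q)   # n_star = 3 for this q
m = "01"
v = phi(m)                 # e.g. "100" or "101"; cipher_rwds would encrypt v
assert phi_inv(v) == m     # Bob recovers m from the prefix lambda(nu(m))
```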
The properties of this cipher are described in the following theorem.
Theorem 3.
Suppose there is a family $P$ of probability distributions defined on $\{0,1\}^n$ and some $\epsilon > 0$. If the described $cipher_{c\&r}$ is applied, then
(i) 
the $cipher_{c\&r}$ is ϵ-entropically secure with secret key length $\log S_P + 2\log(1/\epsilon) + 4$, and
(ii) 
the $cipher_{c\&r}$ is ϵ-indistinguishable with secret key length $\log S_P + 2\log(1/\epsilon) + 8$.
Proof. 
For any $p \in P$, the random map $\phi$ defines a probability distribution $\pi_p(v)$, $v \in \{0,1\}^{n^*}$, as follows: for any $u \in \{0,1\}^n$ and any possible value $v$ of $\phi(u)$,

$$ \pi_p(v) = p(u) \, 2^{-(n^* - |\lambda(\nu(u))|)}, \quad (10) $$

see (8). From the definition of $\phi$ and from (8) and (7) we obtain

$$ \pi_p(v) = p(m) \, 2^{-(n^* - |\lambda(\nu(m))|)} \le p(m) \, 2^{-(n^* - (-\log q_{\nu(m)} + 2))} $$

for any $m \in \{0,1\}^n$ and any possible value $v$ of $\phi(m)$. Then, taking into account that $p(m) \le S_P \, q(m)$ by (6),

$$ -\log \pi_p(v) \ge -\log p(m) + n^* + \log q_{\nu(m)} - 2 \ge -(\log S_P + \log q_{\nu(m)}) + n^* + \log q_{\nu(m)} - 2 = n^* - (\log S_P + 2) $$

for any such $m$ and $v$. So $h_{min}(\pi_p) = \min_{v \in \{0,1\}^{n^*}} (-\log \pi_p(v)) \ge n^* - (\log S_P + 2)$ and, hence, $\sup_{p \in P} (n^* - h_{min}(\pi_p)) \le \log S_P + 2$. From (4) (Theorem 2) and the description of the cipher (9), we can see that $cipher_{c\&r}$ is
(i)
ϵ-entropically secure with a secret key of length $\log S_P + 2\log(1/\epsilon) + 4$, and
(ii)
ϵ-indistinguishable with a secret key of length $\log S_P + 2\log(1/\epsilon) + 8$. □

4. Conclusions

We described a cipher for a family of probability distributions $P$ defined on the set $\{0,1\}^n$, $n \ge 1$, for which the length of the secret key does not depend directly on $n$ but depends on $P$. For example, if $P$ is finite, the key length is less than $\log |P| + 2\log(1/\epsilon) + O(1)$ and hence independent of $n$. This covers, in particular, the case where one needs a single cipher for texts written in different languages; here, the size of the set $P$ equals the number of languages. Thus, in some practically interesting cases, the extra length of the secret key is quite small.
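For instance, with illustrative numbers (24 official EU languages and a leakage of $\epsilon = 2^{-40}$, both our assumptions), the key length stays under a hundred bits regardless of the message length:

```python
# Back-of-the-envelope key length for the multilingual example; the
# values |P| = 24 and eps = 2^-40 are illustrative assumptions.
import math

P_size = 24                  # one distribution per language
eps = 2 ** -40               # admissible leakage
key_bits = math.log2(P_size) + 2 * math.log2(1 / eps)  # + O(1)
print(round(key_bits, 1))    # ~84.6 bits, independent of n
```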

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J. 1949, 28, 656–715.
2. Juels, A.; Ristenpart, T. Honey encryption: Security beyond the brute-force bound. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Copenhagen, Denmark, 11–15 May 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 293–310.
3. Russell, A.; Wang, H. How to fool an unbounded adversary with a short key. IEEE Trans. Inf. Theory 2006, 52, 1130–1140.
4. Jaeger, J.; Ristenpart, T.; Tang, Q. Honey encryption beyond message recovery security. In Advances in Cryptology—EUROCRYPT 2016, Proceedings of the 35th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Vienna, Austria, 8–12 May 2016; Springer: Berlin/Heidelberg, Germany, 2016.
5. Dodis, Y.; Smith, A. Entropic security and the encryption of high entropy messages. In Proceedings of the Theory of Cryptography Conference, Cambridge, MA, USA, 10–12 February 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 556–577.
6. du Pin Calmon, F.; Médard, M.; Zeger, L.M.; Barros, J.; Christiansen, M.M.; Duffy, K.R. Lists that are smaller than their parts: A coding approach to tunable secrecy. In Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 1–5 October 2012; pp. 1387–1394.
7. Calmon, F.D. Information-Theoretic Metrics for Security and Privacy. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2015.
8. Li, X.; Tang, Q.; Zhang, Z. Fooling an unbounded adversary with a short key, repeatedly: The honey encryption perspective. In Proceedings of the 2nd Conference on Information-Theoretic Cryptography (ITC 2021), Virtual, 24–26 July 2021; Schloss Dagstuhl–Leibniz-Zentrum für Informatik: Wadern, Germany, 2021.
9. Ryabko, B. Unconditionally secure short key ciphers based on data compression and randomization. Des. Codes Cryptogr. 2023, 91, 2201–2212.
10. Lucamarini, M.; Yuan, Z.L.; Dynes, J.F.; Shields, A.J. Overcoming the rate–distance limit of quantum key distribution without quantum repeaters. Nature 2018, 557, 400–403.
11. Wang, X.B.; Yu, Z.W.; Hu, X.L. Twin-field quantum key distribution with large misalignment error. Phys. Rev. A 2018, 98, 062323.
12. Liu, Y.; Zhang, W.J.; Jiang, C.; Chen, J.P.; Zhang, C.; Pan, W.X.; Ma, D.; Dong, H.; Xiong, J.M.; Zhang, C.J.; et al. Experimental twin-field quantum key distribution over 1000 km fiber distance. Phys. Rev. Lett. 2023, 130, 210801.
13. Krichevsky, R. Universal Compression and Retrieval; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993.
14. Shkel, Y.Y.; Poor, H.V. A compression perspective on secrecy measures. IEEE J. Sel. Areas Inf. Theory 2021, 2, 163–176.
15. Bloch, M.; Günlü, O.; Yener, A.; Oggier, F.; Poor, H.V.; Sankar, L.; Schaefer, R.F. An overview of information-theoretic security and privacy: Metrics, limits and applications. IEEE J. Sel. Areas Inf. Theory 2021, 2, 5–22.
16. Ryabko, B.; Fionov, A. Cryptography in the Information Society; World Scientific Publishing: Singapore, 2020; 280p.
17. Günther, C.G. A universal algorithm for homophonic coding. In Proceedings of the Workshop on the Theory and Application of Cryptographic Techniques, Davos, Switzerland, 25–27 May 1988; Springer: Berlin/Heidelberg, Germany, 1988; pp. 405–414.
18. Naor, J.; Naor, M. Small-bias probability spaces: Efficient constructions and applications. In Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, 13–17 May 1990; pp. 213–223.
19. Alon, N.; Goldreich, O.; Håstad, J.; Peralta, R. Simple constructions of almost k-wise independent random variables. Random Struct. Algorithms 1992, 3, 289–304.
20. Fitingof, B.M. Optimal coding in the case of unknown and changing message statistics. Probl. Peredachi Inf. 1966, 2, 3–11.
21. Krichevsky, R. A relation between the plausibility of information about a source and encoding redundancy. Probl. Inform. Transm. 1968, 4, 48–57.
22. Cleary, J.; Witten, I. Data compression using adaptive coding and partial string matching. IEEE Trans. Commun. 1984, 32, 396–402.
23. Rissanen, J.; Langdon, G.G. Arithmetic coding. IBM J. Res. Dev. 1979, 23, 149–162.
24. Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory 1977, 23, 337–343.
25. Burrows, M.; Wheeler, D.J. A Block-Sorting Lossless Data Compression Algorithm; SRC Research Report 124; Digital Equipment Corporation: Palo Alto, CA, USA, 1994.
26. Ryabko, B.Y. Data compression by means of a "book stack". Probl. Peredachi Inf. 1980, 16, 16–21.
27. Bentley, J.; Sleator, D.; Tarjan, R.; Wei, V. A locally adaptive data compression scheme. Commun. ACM 1986, 29, 320–330.
28. Ryabko, B.; Horspool, N.R.; Cormack, G.V.; Sekar, S.; Ahuja, S.B. Technical correspondence. Commun. ACM 1987, 30, 792–797.
29. Kieffer, J.C.; Yang, E.-H. Grammar-based codes: A new class of universal lossless source codes. IEEE Trans. Inf. Theory 2000, 46, 737–754.
30. Yang, E.-H.; Kieffer, J.C. Efficient universal lossless data compression algorithms based on a greedy sequential grammar transform. Part I: Without context models. IEEE Trans. Inf. Theory 2000, 46, 755–777.
31. Drmota, M.; Reznik, Y.; Szpankowski, W. Tunstall code, Khodak variations, and random walks. IEEE Trans. Inf. Theory 2010, 56, 2928–2937.
32. Louchard, G.; Szpankowski, W. Average profile and limiting distribution for a phrase size in the Lempel-Ziv parsing algorithm. IEEE Trans. Inf. Theory 1995, 41, 478–488.
33. Ryabko, B. Twice-universal coding. Probl. Inf. Transm. 1984, 3, 173–177.
34. Reznik, Y.A. Coding of sets of words. In Proceedings of the 2011 Data Compression Conference, Snowbird, UT, USA, 29–31 March 2011.
35. Ryabko, B. Coding of a source with unknown but ordered probabilities. Probl. Inform. Transm. 1979, 15, 134–138.
36. Ryabko, B. Comments on "A source matching approach to finding minimax codes". IEEE Trans. Inform. Theory 1981, 27, 780–781.
37. Shtar'kov, Y.M. Universal sequential coding of single messages. Probl. Peredachi Inf. 1987, 23, 3–17.
38. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 2006.