Article

Practical Entropy Accumulation for Random Number Generators with Image Sensor-Based Quantum Noise Sources

1 Department of Financial Information Security, Kookmin University, Seoul 02707, Republic of Korea
2 Department of Mathematics, Kookmin University, Seoul 02707, Republic of Korea
* Author to whom correspondence should be addressed.
Entropy 2023, 25(7), 1056; https://doi.org/10.3390/e25071056
Submission received: 14 June 2023 / Revised: 4 July 2023 / Accepted: 11 July 2023 / Published: 13 July 2023
(This article belongs to the Special Issue Quantum and Classical Physical Cryptography)

Abstract: The efficient generation of high-quality random numbers is essential to the operation of cryptographic modules. The quality of a random number generator is evaluated by the min-entropy of its entropy source. The typical method used to achieve a high min-entropy in the output sequence is entropy accumulation based on a hash function, grounded in the famous Leftover Hash Lemma, which guarantees a lower bound on the min-entropy of the output sequence. However, hash function-based entropy accumulation is generally slow. From a practical perspective, we need a new, efficient entropy accumulation with a theoretical guarantee on the min-entropy of the output sequence. In this work, we obtain a theoretical bound on the min-entropy of the output random sequence for a very efficient entropy accumulation that uses only bitwise XOR operations, under the assumption that the input sequences from the entropy source are independent. Moreover, we examine our theoretical results by applying them to a quantum random number generator that uses the dark shot noise arising from image sensor pixels as its entropy source.

1. Introduction

A random number generator (RNG) is an important component of cryptographic systems used by cryptographic modules to generate random numbers. Random numbers are used for various purposes including the generation of cryptographic keys, and their significance has been increasingly highlighted, particularly with the recent emergence of quantum key distribution [1].
An RNG can be divided into three main processes: digitization, entropy accumulation, and pseudo random number generation (PRNG). Digitization is the process of converting entropy sources into binary data. We call the converted binary data the “input sequence”. Typically, the input sequence has a low min-entropy. Entropy accumulation is the process of transforming input sequences into data with high min-entropy. We denote the input sequence that has undergone the entropy accumulation process as the “output sequence”. The PRNG is composed of deterministic algorithms, such as block ciphers or hash functions; it takes the output sequence as input and then outputs the final “random number”. The operation of the RNG is illustrated in Figure 1.
Although unpredictable random numbers can be generated using a digitized entropy source alone, this is impractical in cryptographic systems because of the significant amount of time required. The PRNG is used to address this limitation. A PRNG produces the same output from the same input, which implies that the generated random numbers are not by themselves unpredictable. However, a PRNG can generate many random numbers in a short time because its output is longer than its input. Therefore, if the input of the PRNG is short but random, the output of the PRNG provides good random numbers. Consequently, a high min-entropy of the input sequence is an important factor in constructing an RNG.
In the process of entropy accumulation, an accumulation function $H$ is a transformation from an input sequence to an output sequence. Let $X$ be an input sequence and $X'$ the corresponding output sequence; then the entropy accumulation can be expressed as $X' = H(X)$.
On the other hand, Dodis et al. [2] have proposed dividing entropy accumulation into two types depending on the characteristics of the accumulation function $H$. The first type, called “Slow-Refresh”, is characterized by a high computational complexity of $H$, leading to a slower accumulation speed but a long output sequence. The second type, the so-called “Fast-Refresh”, features a lower computational complexity of $H$, leading to a faster accumulation speed but a short output sequence.
Traditional hash function-based entropy accumulation is categorized as Slow-Refresh due to its relatively slow accumulation speed. However, this method is widely utilized because of the theoretical foundation provided by the Leftover Hash Lemma [3]. The Leftover Hash Lemma ensures the lower bound of min-entropy for the output sequence, where a low min-entropy input sequence passes through a hash function H randomly selected from a universal hash family.
There are two major considerations in the Slow-Refresh process: the construction of a universal hash family and the random selection of a hash function from the uniformly distributed universal hash family. However, constructing a universal hash family is not a trivial task. One example satisfying an appropriate property is the family of Hankel matrices [4]. If a Hankel matrix is randomly selected from the uniformly distributed family, and an $n$-bit input sequence is processed through this selected matrix, then the resulting $m$-bit output sequence will have sufficient min-entropy. Similarly, a universal hash family can also be constructed by using Toeplitz matrices [5,6].
In such a matrix-based universal hash family, the matrices typically used have input bit-lengths larger than the output bit-lengths, and the ratio m / n approaches 1 as n increases. Thus, the size of the matrix must be sufficiently large in order to minimize the lost bits. However, this leads to a high computational complexity, that is, this method falls under the Slow-Refresh category, and Slow-Refresh may not always be suitable in practical situations that require rapid entropy accumulation. Therefore, in this work, we focus on Fast-Refresh, which is clearly described in the next subsection.

1.1. Related Works

One typical example of Fast-Refresh without a hash function is the Microsoft Windows RNG [7]. The Windows RNG uses only the bitwise XOR operation and the bit permutation $rot(\alpha, n)$ for entropy accumulation. In particular, with the following notation, the entropy accumulation operation of the Windows RNG can be depicted as shown in Figure 2.
  • $Y_1, Y_2, \ldots, Y_l$: $n$-bit input sequences.
  • $\pi : \{0, 1, \ldots, n-1\} \to \{0, 1, \ldots, n-1\}$, where $\pi$ is one-to-one.
  • $A_\pi : \{0,1\}^n \to \{0,1\}^n$, $A_\pi(b_0, b_1, \ldots, b_{n-1}) := (b_{\pi(0)}, b_{\pi(1)}, \ldots, b_{\pi(n-1)})$.
  • $rot(\alpha, n) : \{0, 1, \ldots, n-1\} \to \{0, 1, \ldots, n-1\}$, $rot(\alpha, n)(i) := i + \alpha \pmod{n}$.
Figure 3 is an example of the $rot(3, 8)$ operation on an 8-bit input sequence.
  • $\Lambda_\pi(l) := Y_1$ if $l = 1$, and $\Lambda_\pi(l) := A_\pi(\Lambda_\pi(l-1)) \oplus Y_l$ if $l \ge 2$.
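The recursion $\Lambda_\pi(l)$ can be sketched in a few lines of Python. This is a minimal toy illustration of the permute-then-XOR structure, not the actual Windows RNG code; the rotation amount and the input values are our own choices:

```python
def rot(alpha, n):
    """The bit permutation rot(alpha, n): position i maps to i + alpha (mod n)."""
    return lambda i: (i + alpha) % n

def apply_perm(pi, bits):
    """A_pi: output bit i of the permuted sequence is input bit pi(i)."""
    return tuple(bits[pi(i)] for i in range(len(bits)))

def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def windows_accumulate(inputs, pi):
    """Lambda_pi(l): start from Y_1, then repeatedly permute the state and XOR in Y_l."""
    state = inputs[0]
    for y in inputs[1:]:
        state = xor(apply_perm(pi, state), y)
    return state

# Three toy 8-bit input sequences accumulated with rot(3, 8).
ys = [(0, 1, 1, 0, 0, 1, 0, 1), (1, 1, 0, 0, 1, 0, 1, 0), (0, 0, 1, 1, 0, 1, 1, 0)]
pool = windows_accumulate(ys, rot(3, 8))
```

Note that with $\alpha = 0$ the permutation is the identity, and the recursion degenerates to a plain XOR of all inputs.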
The Windows RNG accumulates output sequences relatively quickly in an entropy pool because it uses bit permutations and bitwise XOR operations without employing hash functions. Despite these advantages, it had not been proven whether the entropy accumulation of the Windows RNG guarantees a lower bound on the min-entropy of the output sequence, as the Leftover Hash Lemma does when a hash function is used. Therefore, it has been difficult to regard this method as a secure entropy accumulation. However, recent research presented at Crypto 2021 analyzed the Microsoft Windows RNG. In [2], the security of the Windows RNG was analyzed by providing the number of iterations of bit permutation and bitwise XOR needed to surpass an arbitrary min-entropy under three conditions. First, the input sequences must be independent. Second, the probability distribution of the input sequences must be a “2-monotone distribution”. Third, the “covering number” of the bit permutation must be finite. In [2], it was claimed that these three conditions are easy to satisfy. However, satisfying them may be challenging for hardware entropy sources rather than software entropy sources, particularly when managing multiple entropy sources. The first and third conditions are easily satisfied, as in the case of the Windows RNG. The second condition may seem easy to achieve, but it is not. Therefore, to handle entropy sources other than Windows, a condition more relaxed than that presented in [2] is required.

1.2. Our Contributions

The contributions of this paper can be summarized into three main aspects. First, we provide Fast-Refresh that does not require hash functions and uses only bitwise XOR operations to generate output sequences. In particular, if we employ the following notation, the proposed entropy accumulation can be depicted as shown in Figure 4.
  • $Y_1, Y_2, \ldots, Y_l$: $n$-bit input sequences.
  • $\Gamma(l) := \bigoplus_{j=1}^{l} Y_j$.
This method requires only two conditions for the input sequences, making it relatively easier to satisfy than Windows RNG entropy accumulation.
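The accumulation $\Gamma(l)$ above is simply an XOR fold over the input sequences; a minimal Python sketch (the byte-sized inputs are illustrative):

```python
from functools import reduce

def accumulate(inputs):
    """Gamma(l) = Y_1 XOR Y_2 XOR ... XOR Y_l, with n-bit sequences as integers."""
    return reduce(lambda acc, y: acc ^ y, inputs)

# Three 8-bit input sequences written as integers.
gamma = accumulate([0b01100101, 0b11001010, 0b00110110])
assert gamma == 0b10011001
```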
Second, we establish a min-entropy lower bound for a secure random number generator and demonstrate that our entropy accumulation successfully surpasses this lower bound when applied to our RNG. We used quantum random number generators (QRNGs) as an entropy source. QRNGs utilize several quantum phenomena to generate high-quality random numbers [8,9,10].
The first published QRNG was based on radioactive decay. It emerged as the need for random number generators increased alongside the rise of computer simulations in the late 20th century. QRNGs based on radioactive decay utilize the random behavior of particles emitted from radioactive materials. This method is still in use today and, along with methods using photons, is one of the most common QRNGs [8]. Various quantum phenomena are used for quantum random number generation. For example, the study in [9] uses the interference of photons to generate random numbers. Another example from [11] involves the use of tunneling signals in silicon diodes for random number generation. Furthermore, the work in [12] employs short laser pulses with quantum random phases to generate random numbers.
In this paper, we use the image sensor-based QRNG of [13]. The image sensor-based QRNG generates random numbers using dark shot noise. Dark shot noise is a fluctuation of the small current that flows through the pixels of the image sensor even when they receive no light. It is known that the number of electrons follows a Poisson distribution [14,15]. For this reason, we utilize the optical black pixels (OBP) of the image sensor, which do not receive light, as the entropy source. Furthermore, because each pixel outputs dark shot noise independently, all the entropy sources can be considered independent of each other.
Third, we conduct a comparative analysis of our proposed entropy accumulation against other entropy accumulations. The examples chosen for comparison are the Slow-Refresh used in the IDQ QRNG and the Fast-Refresh of the Windows RNG. When comparing with the Slow-Refresh, we focus on the theoretical differences between our accumulation mechanism and that of the IDQ QRNG. From the viewpoint of Fast-Refresh, we compare the efficiency of our method with that of the Windows RNG by evaluating the number of operation iterations.
The remainder of this paper is organized as follows. In Section 2, we describe the theoretical background and propose our main theorem, which guarantees the lower bound of min-entropy of the output sequence. In Section 3, we describe the process of applying the theory outlined in Section 2 to an image sensor-based QRNG. We establish a min-entropy lower bound based on three standards and provide experimental results demonstrating that the output sequences generated by applying our theory to the input sequences have a min-entropy higher than the established lower bound. In Section 4, we compare our entropy accumulation with other entropy accumulations. The first comparison is with Slow-Refresh of IDQ QRNG. We describe the theoretical background of Slow-Refresh and the Leftover Hash Lemma and explain the operation of IDQ QRNG. Then, we present two limitations of IDQ QRNG and describe the differences between IDQ QRNG and our entropy accumulation. The second comparison is with Fast-Refresh of Windows RNG. We compare the iteration number l, which is obtained when applying each entropy accumulation. The iteration number of Windows RNG is calculated using the theory in [2]. Note that without some additional components, the theory in [2] cannot be directly applied. Section 5 is the conclusion.

2. Theoretical Background and Main Theorem

In this section, we describe the theoretical background of entropy accumulation using only the XOR operation. In particular, we use the following notation:
  • $\mathbb{Z}_m^n$: the direct product of $n$ copies of the group $\mathbb{Z}_m$. Note that the bitwise XOR operation corresponds to the $+$ operation over $\mathbb{Z}_2^n$.
  • $F(\mathbb{Z}_m^n)$: the space of all complex-valued functions on $\mathbb{Z}_m^n$.
  • $\Gamma(l) = \bigoplus_{j=1}^{l} Y_j$, where $Y_j \in \mathbb{Z}_2^n$ is the $n$-bit random variable that represents the $j$-th input sequence.
  • $\|f\|_{min} = \min\{|f(x)| : x \in \mathbb{Z}_m^n\}$, $f \in F(\mathbb{Z}_m^n)$.
  • $\|f\|_{\infty} = \max\{|f(x)| : x \in \mathbb{Z}_m^n\}$, $f \in F(\mathbb{Z}_m^n)$.
  • $\|f\|_1 = \sum_{x \in \mathbb{Z}_m^n} |f(x)|$, $f \in F(\mathbb{Z}_m^n)$.
  • $D_X$: the probability distribution of the random variable $X$.
  • $H_{min}(D_X) = -\log_2 \|D_X\|_{\infty}$: the min-entropy of $D_X$.
We show that as the number $l$ of input sequences required to generate one output sequence approaches infinity, $H_{min}(D_{\Gamma(l)})$ converges to $n$. Furthermore, we provide the optimal value of $l$ necessary to surpass a specified min-entropy $\alpha\ (< n)$. First, we provide a solution for the case $n = 1$ and explain why this solution is inappropriate for the general $n$-bit case. Thereafter, we provide a general solution using the Discrete Fourier Transform and convolution.
First, we show why the problem we are trying to solve is challenging. The difficulty is that, to determine the values of $D_{\Gamma(l)}$, complex linear operations must be performed on the function values of the $D_{Y_j}$. For example, suppose $n = 2$, $Y_1, Y_2, Y_3$ are independent, and $D_{Y_1}, D_{Y_2}, D_{Y_3}$ are identical to a distribution $D$ given by $D(0,0) = \frac{1}{8}$, $D(0,1) = \frac{1}{4}$, $D(1,0) = \frac{3}{8}$, $D(1,1) = \frac{1}{4}$. Let us calculate $D_{\Gamma(3)}$, which suffices to show the complexity of the computation.
$$D_{\Gamma(3)}(0,0) = \sum_{x \oplus y \oplus z = (0,0)} D_{Y_1}(x) D_{Y_2}(y) D_{Y_3}(z) = D(0,0)D(0,0)D(0,0) + D(0,0)D(0,1)D(0,1) + \cdots + D(1,1)D(1,1)D(0,0) = \tfrac{124}{512} \approx 0.242,$$
$$D_{\Gamma(3)}(0,1) = \sum_{x \oplus y \oplus z = (0,1)} D_{Y_1}(x) D_{Y_2}(y) D_{Y_3}(z) = D(0,0)D(0,0)D(0,1) + D(0,0)D(0,1)D(0,0) + \cdots + D(1,1)D(1,1)D(0,1) = \tfrac{128}{512} = 0.25,$$
$$D_{\Gamma(3)}(1,0) = \sum_{x \oplus y \oplus z = (1,0)} D_{Y_1}(x) D_{Y_2}(y) D_{Y_3}(z) = D(0,0)D(0,0)D(1,0) + D(0,0)D(0,1)D(1,1) + \cdots + D(1,1)D(1,1)D(1,0) = \tfrac{132}{512} \approx 0.258,$$
$$D_{\Gamma(3)}(1,1) = \sum_{x \oplus y \oplus z = (1,1)} D_{Y_1}(x) D_{Y_2}(y) D_{Y_3}(z) = D(0,0)D(0,0)D(1,1) + D(0,0)D(0,1)D(1,0) + \cdots + D(1,1)D(1,1)D(1,1) = \tfrac{128}{512} = 0.25.$$
From the above calculations, we derive two observations. First, to calculate one function value of $D_{\Gamma(l)}$, we must sum $2^{(l-1)n}$ terms. That is, to calculate
$$D_{\Gamma(l)}(x) = \sum_{x_1 \oplus x_2 \oplus \cdots \oplus x_l = x} D_{Y_1}(x_1) D_{Y_2}(x_2) \cdots D_{Y_l}(x_l), \qquad (1)$$
the first $l-1$ terms $x_1, x_2, \ldots, x_{l-1}$ can take any value, and the last term $x_l$ is then determined by the equation $x_1 \oplus x_2 \oplus \cdots \oplus x_l = x$. Because there are $2^n$ choices for each free term, the total number of terms is $2^{(l-1)n}$, which quickly becomes infeasible to compute directly. Second, as $l$ grows, $D_{\Gamma(l)}$ tends to the uniform distribution. In the above example, the distance between the original distribution $D$ and the uniform distribution $I$ with respect to the infinity norm, $\|D - I\|_{\infty}$, is $\frac{1}{8}$, whereas $\|D_{\Gamma(3)} - I\|_{\infty}$ is $\frac{1}{128}$. As $l$ grows, the number of terms in each function value grows rapidly; consequently, the influence of any single probability value decreases. Although this phenomenon seems natural, the following questions remain: Under what conditions does this convergence happen? What is the convergence rate? How can we prove the related results?
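The four values above can be reproduced by brute force; the following sketch enumerates every tuple $(x_1, x_2, x_3)$ for the 2-bit example distribution and adds each product to the bucket $x_1 \oplus x_2 \oplus x_3$:

```python
from itertools import product
from fractions import Fraction

# The example distribution D on Z_2^2, encoding (0,0)->0, (0,1)->1, (1,0)->2, (1,1)->3.
D = [Fraction(1, 8), Fraction(1, 4), Fraction(3, 8), Fraction(1, 4)]

def xor_power(D, l):
    """Brute-force D_Gamma(l): add D(x_1)...D(x_l) to the bucket x_1 ^ ... ^ x_l."""
    out = [Fraction(0)] * len(D)
    for xs in product(range(len(D)), repeat=l):
        x, p = 0, Fraction(1)
        for v in xs:
            x ^= v
            p *= D[v]
        out[x] += p
    return out

# Matches the hand calculation: 124/512, 128/512, 132/512, 128/512.
assert xor_power(D, 3) == [Fraction(124, 512), Fraction(128, 512),
                           Fraction(132, 512), Fraction(128, 512)]
```

The loop touches all $2^{ln} = 4^3$ tuples, which is exactly why this direct approach does not scale.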

2.1. Entropy Accumulation with n = 1

Let us consider the relatively simple case where $n = 1$ and the $Y_j$ are independent and identically distributed (IID). In this case, all $Y_j$ follow the same distribution $D\ (= D_{Y_j})$, and the recursive relationship $\Gamma(j+1) = \Gamma(j) \oplus Y_{j+1}$ holds. Thus, we can express $D_{\Gamma(j+1)}(1)$ via the following relationship:
$$D_{\Gamma(j+1)}(1) = D_{\Gamma(j)}(1)\left[1 - D_{Y_{j+1}}(1)\right] + \left[1 - D_{\Gamma(j)}(1)\right] D_{Y_{j+1}}(1). \qquad (2)$$
Equation (2) follows from the fact that an XOR of two bits equals 1 exactly when the bits are 1 and 0, or 0 and 1. Moreover, $D_{\Gamma(j+1)}(1)$ in (2) can be interpreted as the point dividing the segment between $D_{\Gamma(j)}(1)$ and $1 - D_{\Gamma(j)}(1)$ internally in the ratio $1 - D_{Y_{j+1}}(1) : D_{Y_{j+1}}(1)$. Because $D_{\Gamma(j)}(1)$ and $1 - D_{\Gamma(j)}(1)$ are symmetric about $\frac{1}{2}$, the condition $0 < D(1) < 1$ causes $D_{\Gamma(j)}(1)$ to converge to $\frac{1}{2}$ (i.e., the maximum possible entropy) as $j$ increases. Figure 5 illustrates this scenario.
A more specific formula exists to accurately illustrate this situation. The following lemma, often referred to as the “Piling Up Lemma,” further details this [16].
Fact 1
(Piling-Up Lemma [16]). Let $Y_1, Y_2, \ldots, Y_l$ be independent one-bit random variables, and let $\Gamma(l) := \bigoplus_{j=1}^{l} Y_j$. Then,
$$D_{\Gamma(l)}(0) = \frac{1}{2} + 2^{l-1} \prod_{j=1}^{l} \left( D_{Y_j}(0) - \frac{1}{2} \right).$$
Because $0 < |D_{Y_j}(0) - \frac{1}{2}| < \frac{1}{2}$ for each $j$, the term $2^{l-1} \prod_{j=1}^{l} \left( D_{Y_j}(0) - \frac{1}{2} \right)$ converges to 0, and $D_{\Gamma(l)}(0)$ converges to $\frac{1}{2}$ as $l$ approaches infinity.
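Both the recursion (2) and the closed form of the Piling-Up Lemma are easy to check numerically; the sketch below (with illustrative biases of our choosing) confirms that they agree and that $D_{\Gamma(l)}(0)$ approaches $\frac{1}{2}$:

```python
def gamma_one_recursive(p_ones):
    """D_Gamma(l)(1) via the recursion (2), starting from the first input bit."""
    g1 = p_ones[0]
    for p in p_ones[1:]:
        g1 = g1 * (1 - p) + (1 - g1) * p
    return g1

def gamma_zero_piling_up(p_ones):
    """Closed form: D_Gamma(l)(0) = 1/2 + 2^(l-1) * prod_j (D_Yj(0) - 1/2)."""
    prod = 1.0
    for p in p_ones:
        prod *= (1 - p) - 0.5
    return 0.5 + 2 ** (len(p_ones) - 1) * prod

biases = [0.3] * 10                      # P(Y_j = 1) for ten independent bits
closed = gamma_zero_piling_up(biases)
assert abs((1 - gamma_one_recursive(biases)) - closed) < 1e-12
assert abs(closed - 0.5) < 1e-3          # already very close to uniform
```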
This equation cannot be applied when $n \ge 2$. In that case, the probability that each individual bit equals 0 or 1 still converges to $\frac{1}{2}$; however, we cannot sum the min-entropies of the individual bit positions to obtain the total min-entropy, because that is allowed only when all bits are independent of each other. Therefore, a new method is required to address this problem.

2.2. Convolution and Discrete Fourier Transform

In this subsection, we describe techniques applicable both in the special case where $n$ equals 1 and in more general cases. First, we reformulate the problem using the concept of convolution.
Definition 1
(Convolution). The convolution of $f, g \in F(\mathbb{Z}_m^n)$ is defined as
$$f * g(x) := \sum_{y \in \mathbb{Z}_m^n} f(x - y)\, g(y).$$
For the entropy accumulation problem of interest, $m = 2$, where subtraction coincides with XOR. In the language of convolution, (1) becomes $D_{\Gamma(l)} = D_{Y_1} * D_{Y_2} * \cdots * D_{Y_l}$. The entropy accumulation problem is thus reduced to a problem of handling this convolution. Fortunately, there exists a mathematical concept, the “Fourier Transform”, that harmonizes well with convolution.
Definition 2
(Discrete Fourier Transform). The Discrete Fourier Transform of $f \in F(\mathbb{Z}_m^n)$ is defined as
$$\hat{f}(t) := \sum_{x \in \mathbb{Z}_m^n} f(x)\, e^{-\frac{2\pi i}{m} x \cdot t}.$$
The Discrete Fourier Transform is a mapping from $F(\mathbb{Z}_m^n)$ to $F(\mathbb{Z}_m^n)$. In fact, this transform is a one-to-one mapping, as Proposition 1 shows.
Lemma 1.
If $t \neq 0$, then $\sum_{x \in \mathbb{Z}_m^n} e^{-\frac{2\pi i}{m} x \cdot t} = 0$.
Proof. 
First, note that
$$\sum_{x \in \mathbb{Z}_m^n} e^{-\frac{2\pi i}{m} x \cdot t} = \sum_{x \in \mathbb{Z}_m^n} e^{-\frac{2\pi i}{m} (x \cdot t \bmod m)}.$$
We define $\phi_t : \mathbb{Z}_m^n \to \mathbb{Z}_m$ as
$$\phi_t(x) := x \cdot t \pmod{m}.$$
Then, $\phi_t$ is a homomorphism:
$$\phi_t(x + y) = (x + y) \cdot t \pmod{m} = x \cdot t + y \cdot t \pmod{m} = \phi_t(x) + \phi_t(y).$$
As $t \neq 0$, $\phi_t^{-1}(0) \neq \mathbb{Z}_m^n$. Therefore, for every $s \in \mathbb{Z}_m$, $\phi_t^{-1}(s)$ contains the same number of elements. Let this number be $N$; then
$$\sum_{x \in \mathbb{Z}_m^n} e^{-\frac{2\pi i}{m} x \cdot t} = \sum_{x \in \mathbb{Z}_m^n} e^{-\frac{2\pi i}{m} \phi_t(x)} = N \sum_{s \in \mathbb{Z}_m} e^{-\frac{2\pi i}{m} s} = 0.$$
The last equality holds because the $e^{-\frac{2\pi i}{m} s}$ are exactly the roots of the complex equation $z^m - 1 = 0$, and the sum of all $m$-th roots of unity is 0.    □
Lemma 2.
Let $f$ be an element of $F(\mathbb{Z}_m^n)$ and $\hat{f}$ its Fourier Transform. Then,
$$\frac{1}{m^n} \sum_{t \in \mathbb{Z}_m^n} \hat{f}(t)\, e^{\frac{2\pi i}{m} x \cdot t} = f(x).$$
Proof. 
By Lemma 1, $\sum_{t \in \mathbb{Z}_m^n} f(s)\, e^{\frac{2\pi i}{m} (x - s) \cdot t} = 0$ if $s \neq x$. Therefore,
$$\frac{1}{m^n} \sum_{t \in \mathbb{Z}_m^n} \hat{f}(t)\, e^{\frac{2\pi i}{m} x \cdot t} = \frac{1}{m^n} \sum_{s \in \mathbb{Z}_m^n} \sum_{t \in \mathbb{Z}_m^n} f(s)\, e^{\frac{2\pi i}{m} (x - s) \cdot t} = \frac{1}{m^n} \sum_{t \in \mathbb{Z}_m^n} f(x)\, e^{\frac{2\pi i}{m} (x - x) \cdot t} = \frac{1}{m^n} \sum_{t \in \mathbb{Z}_m^n} f(x) = f(x).$$
   □
Proposition 1.
Let $f$ and $g$ be elements of $F(\mathbb{Z}_m^n)$. If $\hat{f} = \hat{g}$, then $f = g$.
Proof. 
We assume that $\hat{f} = \hat{g}$. Then,
$$f(x) = \frac{1}{m^n} \sum_{t \in \mathbb{Z}_m^n} \hat{f}(t)\, e^{\frac{2\pi i}{m} x \cdot t} = \frac{1}{m^n} \sum_{t \in \mathbb{Z}_m^n} \hat{g}(t)\, e^{\frac{2\pi i}{m} x \cdot t} = g(x)$$
holds for every $x \in \mathbb{Z}_m^n$ by Lemma 2.
   □
The following theorem asserts that the convolution product of functions is represented as a multiplication in the transformed space. This plays a significant role in proving our main theorem.
Proposition 2.
Let $f$ and $g$ be elements of $F(\mathbb{Z}_m^n)$. Then, $\widehat{f * g} = \hat{f}\, \hat{g}$.
Proof. 
By the definitions of convolution and the Discrete Fourier Transform,
$$\widehat{f * g}(t) = \sum_{x \in \mathbb{Z}_m^n} \sum_{y \in \mathbb{Z}_m^n} f(x - y)\, g(y)\, e^{-\frac{2\pi i}{m} x \cdot t} = \sum_{y \in \mathbb{Z}_m^n} g(y)\, e^{-\frac{2\pi i}{m} y \cdot t} \sum_{x \in \mathbb{Z}_m^n} f(x - y)\, e^{-\frac{2\pi i}{m} (x - y) \cdot t} = \left( \sum_{x \in \mathbb{Z}_m^n} f(x)\, e^{-\frac{2\pi i}{m} x \cdot t} \right) \left( \sum_{y \in \mathbb{Z}_m^n} g(y)\, e^{-\frac{2\pi i}{m} y \cdot t} \right) = \hat{f}(t)\, \hat{g}(t).$$
   □
To intuitively determine why Γ ( l ) converges to a uniform distribution, we must understand both the properties of the Discrete Fourier Transform applied to the distribution and the Discrete Fourier Transform of a uniform distribution.
Proposition 3.
Let $D$ be an arbitrary probability distribution and $I$ the uniform distribution on $\mathbb{Z}_m^n$. Then,
(i) For all $t \in \mathbb{Z}_m^n$, $|\hat{D}(t)| \le 1$.
(ii) $\hat{D}(0) = 1$.
(iii) For all $t \in \mathbb{Z}_m^n$, $\hat{I}(t) = \delta_{t,0}$.
The symbol $\delta_{t,0}$ denotes the Kronecker delta, defined as 1 when $t$ is the zero vector and 0 for all other $t$.
Proof. 
Proof of (i):
$$|\hat{D}(t)| = \left| \sum_{x \in \mathbb{Z}_m^n} D(x)\, e^{-\frac{2\pi i}{m} x \cdot t} \right| \le \sum_{x \in \mathbb{Z}_m^n} \left| D(x)\, e^{-\frac{2\pi i}{m} x \cdot t} \right| = \sum_{x \in \mathbb{Z}_m^n} D(x) = 1.$$
Proof of (ii):
$$\hat{D}(0) = \sum_{x \in \mathbb{Z}_m^n} D(x)\, e^{-\frac{2\pi i}{m} x \cdot 0} = \sum_{x \in \mathbb{Z}_m^n} D(x) = 1.$$
Proof of (iii): We know from part (ii) that $\hat{I}(0) = 1$. For the remaining $t \neq 0$,
$$\hat{I}(t) = \sum_{x \in \mathbb{Z}_m^n} I(x)\, e^{-\frac{2\pi i}{m} x \cdot t} = \frac{1}{m^n} \sum_{x \in \mathbb{Z}_m^n} e^{-\frac{2\pi i}{m} x \cdot t} = 0.$$
The last equality is based on Lemma 1.    □
From Proposition 2, we have $\widehat{D_{\Gamma(l)}} = \widehat{D_{Y_1}}\, \widehat{D_{Y_2}} \cdots \widehat{D_{Y_l}}$. By Proposition 3, $\widehat{D_{\Gamma(l)}}(0) = 1$, whereas for $t \neq 0$, the value of $\widehat{D_{\Gamma(l)}}(t)$ approaches 0 as $l$ increases. Specifically, $\widehat{D_{\Gamma(l)}}$ converges to $\delta_{t,0}$ as $l$ increases. Since the Discrete Fourier Transform is a one-to-one function by Proposition 1, we can infer that $D_{\Gamma(l)}$ converges to $I$ as $l$ increases.
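For $m = 2$, the exponential factors reduce to $(-1)^{x \cdot t}$, so the transform is the Walsh–Hadamard transform, and the whole convergence argument can be checked numerically. A sketch using the 2-bit example distribution from the beginning of this section:

```python
def wht(f):
    """Discrete Fourier Transform over Z_2^n: f_hat(t) = sum_x f(x) * (-1)^(x.t)."""
    return [sum(f[x] * (-1) ** bin(x & t).count("1") for x in range(len(f)))
            for t in range(len(f))]

def inverse_wht(fhat):
    """Lemma 2 with m = 2: f(x) = (1/2^n) * sum_t f_hat(t) * (-1)^(x.t)."""
    return [sum(fhat[t] * (-1) ** bin(x & t).count("1") for t in range(len(fhat)))
            / len(fhat) for x in range(len(fhat))]

D = [1 / 8, 1 / 4, 3 / 8, 1 / 4]               # example distribution on Z_2^2
Dhat = wht(D)
assert abs(Dhat[0] - 1.0) < 1e-12              # Proposition 3 (ii)

# Proposition 2: the transform of the l-fold XOR convolution is the
# pointwise l-th power; inverting it for l = 3 reproduces D_Gamma(3).
G3 = inverse_wht([v ** 3 for v in Dhat])
assert abs(G3[0] - 124 / 512) < 1e-12

# For large l the distribution is essentially uniform.
G50 = inverse_wht([v ** 50 for v in Dhat])
assert max(abs(p - 0.25) for p in G50) < 1e-12
```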

2.3. Main Theorem

In the previous subsection, we confirmed that D Γ ( l ) converges to a uniform distribution I as l increases. In this subsection, we present a solution to the entropy accumulation problem based on this approach. Specifically, we aim to find a condition for the random variable Y j and a value for l such that D Γ ( l ) achieves a specific min-entropy. The following theorem is one of the main results of our study:
Theorem 1.
Let $Y_1, Y_2, \ldots, Y_l$ be independent $n$-bit random variables, and let $\Gamma(l) := Y_1 \oplus Y_2 \oplus \cdots \oplus Y_l$. Define $\omega := \min\{\|D_{Y_1}\|_{min}, \|D_{Y_2}\|_{min}, \ldots, \|D_{Y_l}\|_{min}\}$. Then,
$$H_{min}(D_{\Gamma(l)}) \ge n - \log_2\left(1 + (2^n - 1)(1 - 2^n \omega)^l\right) \approx n - \frac{1}{\ln 2}(2^n - 1)(1 - 2^n \omega)^l.$$
Note that the $n$-bit random variables $Y_1, Y_2, \ldots, Y_l$ are not required to be IID. Because the above theorem requires only the independence of the random variables, without the condition of identical distribution, it can be effectively applied when using parallel entropy sources. We now provide the proof of Theorem 1.
Proof. 
For any function $f \in F(\mathbb{Z}_2^n)$, we have
$$\hat{f}(t) = \sum_{x \in \mathbb{Z}_2^n} f(x)\, e^{-\pi i (x \cdot t)} = \sum_{x \in \mathbb{Z}_2^n} f(x) (-1)^{x \cdot t}, \qquad (3)$$
$$\frac{1}{2^n} \sum_{t \in \mathbb{Z}_2^n} \hat{f}(t)\, e^{\pi i (x \cdot t)} = \frac{1}{2^n} \sum_{t \in \mathbb{Z}_2^n} \hat{f}(t) (-1)^{x \cdot t} = f(x). \qquad (4)$$
This is obtained from Lemma 2 with $m = 2$. Using the function $\phi_t$ from the proof of Lemma 1, (3) and (4) can be written as
$$\hat{f}(t) = \sum_{x \in \phi_t^{-1}(0)} f(x) - \sum_{x \in \phi_t^{-1}(1)} f(x), \qquad (5)$$
$$f(x) = \frac{1}{2^n} \left( \sum_{t \in \phi_x^{-1}(0)} \hat{f}(t) - \sum_{t \in \phi_x^{-1}(1)} \hat{f}(t) \right). \qquad (6)$$
We apply (5) to each $D_{Y_j}$ with $t \neq 0$. Because $\sum_{x \in \mathbb{Z}_2^n} D_{Y_j}(x) = 1$ and $D_{Y_j}(x) \ge \omega > 0$,
$$\left| \widehat{D_{Y_j}}(t) \right| = \left| \sum_{x \in \phi_t^{-1}(0)} D_{Y_j}(x) - \sum_{x \in \phi_t^{-1}(1)} D_{Y_j}(x) \right| = \left| \sum_{x \in \phi_t^{-1}(0)} \left( D_{Y_j}(x) - \omega \right) - \sum_{x \in \phi_t^{-1}(1)} \left( D_{Y_j}(x) - \omega \right) \right| \le \sum_{x \in \mathbb{Z}_2^n} \left( D_{Y_j}(x) - \omega \right) = 1 - 2^n \omega. \qquad (7)$$
The second equality holds because the two fibers $\phi_t^{-1}(0)$ and $\phi_t^{-1}(1)$ contain the same number of elements, so the subtracted $\omega$ terms cancel. From Proposition 2 together with (6) and (7),
$$D_{\Gamma(l)}(x) = \frac{1}{2^n} \left( \sum_{t \in \phi_x^{-1}(0)} \widehat{D_{\Gamma(l)}}(t) - \sum_{t \in \phi_x^{-1}(1)} \widehat{D_{\Gamma(l)}}(t) \right) \le \frac{1}{2^n} \sum_{t \in \mathbb{Z}_2^n} \left| \widehat{D_{\Gamma(l)}}(t) \right| = \frac{1}{2^n} \sum_{t \in \mathbb{Z}_2^n} \prod_{j=1}^{l} \left| \widehat{D_{Y_j}}(t) \right| = \frac{1}{2^n} \left( \prod_{j=1}^{l} \widehat{D_{Y_j}}(0) + \sum_{t \in \mathbb{Z}_2^n \setminus \{0\}} \prod_{j=1}^{l} \left| \widehat{D_{Y_j}}(t) \right| \right) \le \frac{1}{2^n} \left( 1 + (2^n - 1)(1 - 2^n \omega)^l \right).$$
Therefore,
$$H_{min}(D_{\Gamma(l)}) = \min\left\{ -\log_2 D_{\Gamma(l)}(x) : x \in \mathbb{Z}_2^n \right\} \ge n - \log_2\left( 1 + (2^n - 1)(1 - 2^n \omega)^l \right) \approx n - \frac{1}{\ln 2} (2^n - 1)(1 - 2^n \omega)^l.$$
The final approximation follows from the Taylor expansion $\log_2(1 + x) \approx \frac{x}{\ln 2}$ for small $x$.    □
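As a numerical sanity check of Theorem 1, the sketch below computes the exact min-entropy of $D_{\Gamma(l)}$ for $l$ IID copies of the 2-bit example distribution of Section 2 (via the Walsh–Hadamard transform) and verifies that it never falls below the bound:

```python
import math

def wht(f):
    """Discrete Fourier Transform over Z_2^n: f_hat(t) = sum_x f(x) * (-1)^(x.t)."""
    return [sum(f[x] * (-1) ** bin(x & t).count("1") for x in range(len(f)))
            for t in range(len(f))]

def exact_min_entropy(D, l):
    """H_min(D_Gamma(l)) for l IID inputs: invert the l-th power of the transform."""
    fhat = [v ** l for v in wht(D)]
    G = [sum(fhat[t] * (-1) ** bin(x & t).count("1") for t in range(len(D)))
         / len(D) for x in range(len(D))]
    return -math.log2(max(G))

def theorem1_bound(n, omega, l):
    """The exact (non-approximated) lower bound of Theorem 1."""
    return n - math.log2(1 + (2 ** n - 1) * (1 - 2 ** n * omega) ** l)

D = [1 / 8, 1 / 4, 3 / 8, 1 / 4]   # example distribution on Z_2^2
omega = min(D)                      # omega = 1/8
for l in range(1, 20):
    assert exact_min_entropy(D, l) >= theorem1_bound(2, omega, l)
```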

3. Applying Theorem 1 to Image Sensor-Based RNG

In this section, we describe the process of applying Theorem 1 to an image sensor-based random number generator. First, we describe the process of generating the input sequences from the entropy sources of the image sensor. Subsequently, we verify whether the generated input sequences satisfy the assumptions of Theorem 1. Next, we establish the lower bound for the min-entropy, which is considered secure based on three standards. Then, we provide the theoretical number of iterations required to achieve a min-entropy higher than the established lower bound. Furthermore, we validate our theory using experimental results. Finally, we estimate the entropy accumulation speed based on frames per second (FPS) of the image sensors used. Thereafter, we compare and analyze the random number generation speed of our system with that of the ID Quantique’s QRNG chip.

3.1. Image Sensor-Based RNG

We use the ‘PV 4209K’ image sensor, which utilizes 11,520 optical black pixels (OBP) as physical entropy sources. Each OBP of the image sensor transmits 2-bit data to a PC. The PC sequentially stores the 2-bit data transmitted by the multiple OBPs. See Figure 6.

3.2. Experimentation Process for Entropy Accumulation

Before describing the entropy accumulation experiments, we use the following notation:
  • $W_i$: the random variable corresponding to the 2-bit data of the $i$-th optical black pixel (OBP). If the value of $i$ reaches the last pixel (11,520), the next value of $i$ refers to the first pixel.
  • $Y_j := W_{4j-3} \,\|\, W_{4j-2} \,\|\, W_{4j-1} \,\|\, W_{4j}$ (concatenation). For example, if $W_1 = (0,1)$, $W_2 = (1,1)$, $W_3 = (0,0)$, $W_4 = (1,0)$, then $Y_1 = (0,1,1,1,0,0,1,0)$.
  • $\Gamma_k(l) := \bigoplus_{j=kl-l+1}^{kl} Y_j$. This refers to the $k$-th output sequence, which is generated by adding (XOR) $l$ input sequences.
To experimentally validate the entropy accumulation, we utilized the verification method outlined in [17]. Ref. [17] is a min-entropy estimation tool, which estimates the min-entropy of output sequences from a collection of 1,000,000 $n$-bit output sequences. However, it requires that the value of $n$ be at least 8. Therefore, to satisfy this condition, we created new 8-bit data $Y_j$ by concatenating the four 2-bit values $W_{4j-3}, W_{4j-2}, W_{4j-1}, W_{4j}$, and $Y_j$ becomes an input sequence. This process is shown in Figure 7.
After setting Y j , we select the XOR operation iteration number ( l 1 ) and accumulate Γ k ( l ) in the entropy pool. This process is illustrated in Figure 8.
The number $l$ is determined using Theorem 1. After collecting $\Gamma_k(l)$ $(1 \le k \le 1{,}000{,}000)$ in the entropy pool, we use [17] to verify the min-entropy.
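The construction of the input sequences $Y_j$ and the output sequences $\Gamma_k(l)$ can be sketched as follows. The random 2-bit samples below are only a stand-in for the actual OBP readouts $W_i$, which we cannot reproduce here:

```python
import random

PIXELS = 11520      # number of optical black pixels in the sensor

def read_obp(i, rng):
    """Stand-in for the 2-bit dark shot noise readout of pixel i."""
    return rng.randrange(4)             # a value in {0, 1, 2, 3}

def input_sequence(j, rng):
    """Y_j: concatenate W_{4j-3}, ..., W_{4j} (2 bits each) into one 8-bit value."""
    y = 0
    for i in range(4 * j - 3, 4 * j + 1):
        w = read_obp((i - 1) % PIXELS + 1, rng)   # wrap around after the last pixel
        y = (y << 2) | w
    return y

def output_sequence(k, l, rng):
    """Gamma_k(l): XOR of the input sequences Y_{kl-l+1}, ..., Y_{kl}."""
    g = 0
    for j in range(k * l - l + 1, k * l + 1):
        g ^= input_sequence(j, rng)
    return g

rng = random.Random(0)                  # deterministic stand-in source
pool = [output_sequence(k, 16, rng) for k in range(1, 1001)]   # l = 16: 15 XORs each
assert all(0 <= g < 256 for g in pool)
```

With $W_1 = (0,1)$, $W_2 = (1,1)$, $W_3 = (0,0)$, $W_4 = (1,0)$, this packing reproduces the example $Y_1 = (0,1,1,1,0,0,1,0)$ from the notation list.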

3.3. Setting Lower Bound of Min-Entropy

We provide three evaluation criteria that can be used to determine the lower bound of the min-entropy, which a true random number generator must exceed, acquired through the entropy accumulation process.

3.3.1. Maximum Value of the Most Common Value Estimate

In [17], when the output sequences are determined to follow an IID model, the Most Common Value Estimate is taken as the min-entropy of the output sequences. However, there is an upper bound on the min-entropy value obtainable from the Most Common Value Estimate: regardless of the output sequences used for the test, this upper bound cannot be exceeded. The Most Common Value Estimate computes the min-entropy as described in Algorithm 1.
Algorithm 1 Most Common Value Estimate
Input: $S = (s_1, \ldots, s_L)$; $L$: the length of $S$; $s_i \in \{0,1\}^n$ $(1 \le i \le L)$
Output: the min-entropy of the dataset $S$: $H_{min}$
1: Calculate the mode of $S$ and denote its frequency by $MODE$.
2: $\hat{p} = MODE / L$
3: $p_u = \min\left(1, \hat{p} + 2.576\sqrt{\hat{p}(1-\hat{p})/(L-1)}\right)$
4: $H_{min} = -\log_2 p_u$
To compute the upper bound of the Most Common Value Estimate for an 8-bit dataset $S$ of length 1,000,000, the frequency of the mode of $S$ should be as small as possible, i.e., every 8-bit value should occur equally often, so that $\hat{p} = 1/256$. Using this $\hat{p}$, we obtain $H_{min} \approx 7.94$. Therefore, it is reasonable for the lower bound of the min-entropy not to exceed 7.94.
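Algorithm 1 and the 7.94 ceiling can be reproduced directly in Python (the small flat dataset at the end is our own illustration):

```python
import math
from collections import Counter

def most_common_value_estimate(samples):
    """Algorithm 1: bound the mode probability from above, then take -log2."""
    L = len(samples)
    mode_count = Counter(samples).most_common(1)[0][1]
    p_hat = mode_count / L
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (L - 1)))
    return -math.log2(p_u)

# The ceiling: with L = 1,000,000 8-bit samples, the mode frequency cannot fall
# below L/256, so the best possible p_hat is 1/256.
L = 1_000_000
p_hat = 1 / 256
p_u = p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (L - 1))
assert round(-math.log2(p_u), 2) == 7.94

# A perfectly flat 2-bit toy dataset still lands below its own 2-bit ceiling.
assert most_common_value_estimate([0, 1, 2, 3] * 250) < 2
```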

3.3.2. True Random 8-Bit in [17]

Ref. [17] provides True Random Data in 1-bit, 4-bit, and 8-bit units as samples for evaluating min-entropy. From Figure 9, it can be confirmed that the min-entropy of true random 8-bit data in [17] is approximately 7.86.

3.3.3. Criterion of Min-Entropy by BSI AIS 20/31 [18]

The German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, BSI) asserts in the AIS 20/31 document that the output sequences of a cryptographic random number generator should have a min-entropy of 0.98 per bit. In the case of 8-bit data, 7.84 becomes the lower bound of the min-entropy.
From the three criteria mentioned above, we have chosen 7.86 as the min-entropy lower bound. This value, which is smaller than 7.94 and more stringent than 7.84, appears to be a valid choice for the lower bound.

3.4. Applying Theorem 1 to Input Sequences

In this subsection, we use Theorem 1 to obtain $l$. Instead of directly applying Theorem 1 to $Y_j$, we employ a “divide and conquer” approach to compute the total min-entropy. Before explaining this strategy, note that each $W_i$ can be regarded as an independent random variable. This assumption is reasonable because all pixels can be considered independent entropy sources.
Because we conducted the experiment with 8 bits, $n$ must be eight when applying Theorem 1 directly. However, this results in the following problem: the required number $l$ becomes exceedingly large. To address this issue, we exploit the fact that $Y_j$ is constructed by concatenating four independent entropy sources. $\Gamma_k(l)$ is generated by XORing $l$ instances of $Y_j$. Therefore, if we break $\Gamma_k(l)$ down into 2-bit segments, then $\Gamma_k(l)$ can be considered as the concatenation of four XORs of $l$ instances of the $W_i$. This can be expressed by the following formula:
$$\Gamma_k(l) = \bigoplus_{j=kl-l+1}^{kl} \left( W_{4j-3} \,\|\, W_{4j-2} \,\|\, W_{4j-1} \,\|\, W_{4j} \right) = \left( \bigoplus_{j=kl-l+1}^{kl} W_{4j-3} \right) \Big\| \left( \bigoplus_{j=kl-l+1}^{kl} W_{4j-2} \right) \Big\| \left( \bigoplus_{j=kl-l+1}^{kl} W_{4j-1} \right) \Big\| \left( \bigoplus_{j=kl-l+1}^{kl} W_{4j} \right).$$
The four parts of $\Gamma_k(l)$ are independent of each other. Therefore, we calculate the total min-entropy by individually determining the min-entropy of each of the four parts and summing them. If the min-entropy of each 2-bit segment of $\Gamma_k(l)$ is greater than 1.965, then $H_{min}(\Gamma_k(l))$ will be greater than 7.86. Theorem 1 is applied to the four 2-bit segments of $\Gamma_k(l)$. To apply Theorem 1, two conditions must be satisfied: the first pertains to the independence of the $W_i$, and the second states that the value of $\omega$ should exceed zero. For the first condition, we established that each $W_i$ can be regarded as an independent random variable. For the second condition, we can estimate the value of $\omega$ by analyzing the probability distribution of the data transmitted by each OBP. We constructed the probability distribution of each $W_i$ using the 2-bit data transmitted by each of the 11,520 OBPs over 2000 transmissions. From the obtained distributions, we can confirm that the value of $\|D_{W_i}\|_{min}$ is greater than 0.075 for all $i$. Figure 10 illustrates the probability distributions of four randomly selected OBPs. It can be observed that each value appeared at least 150 times.
Therefore, we estimate that ω is at least 0.075. From Theorem 1, the inequality
$$2 - \frac{1}{\ln 2}\left(2^2 - 1\right)\left(1 - 2^2\omega\right)^l \ge 1.965$$
provides the number l necessary for the min-entropy of each 2-bit segment of Γ_k(l) to exceed 1.965.
$$2 - \frac{1}{\ln 2}\left(2^2 - 1\right)\left(1 - 2^2\omega\right)^l \ge 1.965 \iff \frac{0.035 \cdot \ln 2}{3} \ge (0.7)^l \iff \frac{\ln\!\left(\frac{0.035 \cdot \ln 2}{3}\right)}{\ln(0.7)} \le l \iff 15.845 \le l.$$
From the last inequality, we can conclude that if we use 15 XOR operations (i.e., l = 16) to create Γ_k(l), then H_min(Γ_k(l)) exceeds 7.86.
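As a quick numeric sanity check of the Theorem 1 bound, the left-hand side can be evaluated directly; the function below is our own sketch of the stated formula with n = 2 and ω = 0.075:

```python
import math

def theorem1_bound(l, omega=0.075, n=2):
    """Lower bound from Theorem 1 on the min-entropy of the XOR of
    l independent n-bit inputs, given the distance parameter omega."""
    return n - (1 / math.log(2)) * (2**n - 1) * (1 - 2**n * omega)**l

# With the paper's l = 16, the bound on each 2-bit segment exceeds 1.965,
# so the concatenated 8-bit output exceeds 4 * 1.965 = 7.86.
assert theorem1_bound(16) > 1.965
```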

3.5. Experimental Result

We describe the experimental validation of the results in Section 3.4. We predicted that 15 XOR operations for entropy accumulation would guarantee more than 7.86 bits of min-entropy per 8 bits. In the actual experiments, it was confirmed that even with only four XOR operations, more than 7.86 bits of min-entropy per 8 bits was achieved. Table 1 presents the experimental results.
The experimental results are analyzed as follows. The case l = 1 refers to no XOR operation being used; here, the min-entropy per 8 bits is 3.305, which is lower than the min-entropy calculated from the probability distribution of the 2-bit data from the pixels.
Min-entropy depends on the maximum value of the probability distribution function. From the 2-bit probability distributions, we confirmed that this maximum value was 0.425. This can also be seen in Figure 10, where none of the counts for the values 0, 1, 2, 3 in the four randomly selected distributions exceeds 850 (out of 2000). Therefore, the lower bound of H_min(D_{W_i}) for 2-bit data is 1.2344, and the lower bound of H_min(D_{Y_j}), obtained by concatenating four D_{W_i}, is 4.9378.
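This arithmetic follows directly from the definition H_min(X) = -log2(max_x Pr[X = x]); a small sketch (the 0.425 value is the paper's measured maximum):

```python
import math

# Min-entropy is determined by the most likely outcome:
# H_min(X) = -log2(max_x Pr[X = x]).
p_max = 0.425                 # largest observed 2-bit probability (850 / 2000)
h_2bit = -math.log2(p_max)    # min-entropy lower bound for one 2-bit source
h_8bit = 4 * h_2bit           # four independent 2-bit segments concatenated

assert abs(h_2bit - 1.2344) < 1e-3
assert abs(h_8bit - 4.9378) < 1e-3
```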
The min-entropy estimated by [17] is lower than that estimated through the probability distribution because of the conservative measurement method of [17]. In [17], whether the output sequence follows the IID track or the non-IID track is determined before measuring min-entropy. If the output sequence follows the IID track, the min-entropy measured by the Most Common Value estimate becomes the final min-entropy of the output sequence. However, if the output sequence follows the non-IID track, the smallest of the min-entropies measured by the ten estimators, including the Most Common Value estimate, becomes the final min-entropy. For l = 1, the output sequence followed the non-IID track. Therefore, the min-entropy value estimated by [17] was smaller than that calculated from the probability distribution.
As l increases, the increment in min-entropy decreases. This is because the maximum possible min-entropy value is 7.94, and as the estimate approaches this ceiling, the gain obtainable from additional XOR operations becomes small. The greatest change in min-entropy occurs between l = 1 and l = 2, because the output sequence follows the IID track once l ≥ 2.

4. Comparing with Other Entropy Accumulations

In this section, we compare and analyze our entropy accumulation against other entropy accumulation schemes. The first comparison is with the Slow-Refresh used in the IDQ QRNG; the second is with the Fast-Refresh of the Windows RNG. In the comparison with Slow-Refresh, we focus on differences in the accumulation mechanism. In the comparison with Fast-Refresh, we calculate the number of operation iterations Fast-Refresh requires in our experimental environment.

4.1. Comparing with the Slow-Refresh of IDQ QRNG

IDQ QRNG uses matrix multiplication as the entropy accumulation function [19,20]. For the analysis of this method, we first explain the theoretical background of entropy accumulation and the “Leftover Hash Lemma” and then explain the entropy accumulation of IDQ QRNG. We also check whether the entropy accumulation of IDQ QRNG meets the conditions of the Leftover Hash Lemma. Lastly, we summarize the differences between our entropy accumulation and that of IDQ QRNG.

4.1.1. Leftover Hash Lemma [3,19]

The Leftover Hash Lemma is a theorem that ensures the conversion of a low min-entropy input sequence into a high min-entropy output sequence. In order to state the Leftover Hash Lemma, we need two key concepts: the 2-universal hash family and statistical distance.
Definition 3
(Statistical distance). Let X and X′ be random variables taking values in the same set. The statistical distance between the two probability distributions D_X and D_{X′} is defined as
$$\Delta\left(D_X, D_{X'}\right) = \frac{1}{2}\left\lVert D_X - D_{X'}\right\rVert_1.$$
Definition 4
(2-universal hash family). Let Y be a random variable uniformly distributed over S. A family { f_s : T → V }_{s∈S} of hash functions is called 2-universal if for any distinct inputs x ≠ x′,
$$\Pr\left[f_Y(x) = f_Y(x')\right] \le \frac{1}{|V|}.$$
Based on these definitions, we can state the Leftover Hash Lemma.
Fact 2
(Leftover Hash Lemma). Let { f_s : T → V }_{s∈S} be a 2-universal hash family. Let X and Y be independent random variables, where Y is uniformly distributed over S and X takes values in T. Let U_{S×V} be the uniform distribution on S × V. Then,
$$\Delta\left(D_{(Y, f_Y(X))},\, U_{S\times V}\right) \le 2^{-\frac{1}{2}\left(H_{min}(X) - \log_2 |V|\right)}.$$
Corollary 1.
Under the same assumptions, we obtain
$$H_{min}\left(D_{f_Y(X)}\right) \ge -\log_2\left(\frac{1}{|V|} + 2^{-\frac{1}{2}\left(H_{min}(X) - \log_2 |V|\right) + 1}\right) \approx \log_2 |V| - \frac{|V|}{\ln 2}\, 2^{-\frac{1}{2}\left(H_{min}(X) - \log_2 |V|\right) + 1} = \log_2 |V| - \epsilon.$$
Proof. 
Let U_V be the uniform distribution on V. By the triangle inequality and the Leftover Hash Lemma,
$$\begin{aligned} H_{min}\left(D_{f_Y(X)}\right) &= -\log_2 \left\lVert D_{f_Y(X)} \right\rVert_\infty \\ &\ge -\log_2\left( \left\lVert U_V \right\rVert_\infty + \left\lVert D_{f_Y(X)} - U_V \right\rVert_\infty \right) \\ &\ge -\log_2\left( \left\lVert U_V \right\rVert_\infty + \left\lVert D_{f_Y(X)} - U_V \right\rVert_1 \right) \\ &\ge -\log_2\left( \left\lVert U_V \right\rVert_\infty + \left\lVert D_{(Y, f_Y(X))} - U_{S\times V} \right\rVert_1 \right) \\ &= -\log_2\left( \left\lVert U_V \right\rVert_\infty + 2\,\Delta\left(D_{(Y, f_Y(X))},\, U_{S\times V}\right) \right) \\ &\ge -\log_2\left( \frac{1}{|V|} + 2^{-\frac{1}{2}\left(H_{min}(X) - \log_2 |V|\right) + 1} \right) \\ &\approx \log_2 |V| - \frac{|V|}{\ln 2}\, 2^{-\frac{1}{2}\left(H_{min}(X) - \log_2 |V|\right) + 1}. \end{aligned}$$
The last approximation follows from the Taylor expansion at x = 1/|V|. □
The Leftover Hash Lemma is effective in generating high-quality output sequences in the following situations:
(i) when H_min(X) is smaller than log_2 |T|;
(ii) when H_min(X) is significantly larger than log_2 |V|;
(iii) when |S| is substantially smaller than |V|.
The key point here is the last condition. To generate output sequences using the Leftover Hash Lemma, a hash function must be selected uniformly from the 2-universal hash family. This means that random numbers are required in order to generate random numbers. If the 2-universal hash family is small, a large quantity of random numbers can be generated from a small quantity of seed randomness. Therefore, constructing a 2-universal hash family of small size is the key to using the Leftover Hash Lemma.
Example 1.
Suppose that T = {0,1}^900, V = {0,1}^100, |S| = 2^30, H_min(X) = 550, and H_min(Y) = 30. Then, by Corollary 1, $H_{min}(D_{f_Y(X)}) \ge 100 - \frac{1}{\ln 2}\, 2^{-124}$. This implies that if we have a low-quality entropy source with a min-entropy of 550 out of 900 bits and a 2-universal hash family of size 2^30, we can leverage a 30-bit random number generator to produce nearly uniform 100-bit random numbers.
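The ε of Example 1 is easiest to evaluate in the log2 domain, since the raw quantities over/underflow ordinary floats; a sketch of the Corollary 1 arithmetic:

```python
import math

# Corollary 1 in log2 form:
# epsilon = (|V| / ln 2) * 2^(-(H_min(X) - log2|V|)/2 + 1)
h_min_x, log2_V = 550, 100
log2_eps = log2_V - (h_min_x - log2_V) / 2 + 1 - math.log2(math.log(2))

# The output falls short of full entropy by only about 2^-123.5 bits.
assert -124 < log2_eps < -123
```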

4.1.2. Slow-Refresh of IDQ QRNG [19,20]

IDQ QRNG uses an m × n random matrix as the entropy accumulation function. This function transforms an input sequence of length n into an output sequence of length m. It can also be readily proven that this collection of matrices forms a 2-universal hash family [4]. Using the notation from the Leftover Hash Lemma, this can be represented as
$$T = \{0,1\}^n, \quad V = \{0,1\}^m, \quad S = \{0,1\}^{m \times n}.$$
The min-entropy per bit of the quantum entropy source (which corresponds to H_min(X) in our notation) is not disclosed. However, according to [19], the ϵ value is designed to be 2^(-100). The random matrix is generated only once, and the mn elements that make up this matrix are produced by invoking a 1-bit random number generator mn times. The 1-bit random number generator creates a single bit by collecting multiple bits from the digitized entropy source and XORing them. The more bits that are XORed, the higher the min-entropy of the generated bit; this can be confirmed through the Piling-Up Lemma. Figure 11 illustrates the entropy accumulation of IDQ QRNG.
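The Piling-Up Lemma argument can be illustrated with a short sketch; the bias value 0.1 below is a hypothetical example, since the actual bias of the IDQ source is not disclosed:

```python
# Piling-Up Lemma (Matsui): XORing k independent bits, each with bias eps_i
# (Pr[bit = 0] = 1/2 + eps_i), yields a bit with bias 2^(k-1) * prod(eps_i),
# so the XOR of many slightly biased bits rapidly approaches uniform.
def xor_bias(biases):
    prod = 1.0
    for e in biases:
        prod *= e
    return 2 ** (len(biases) - 1) * prod

# Hypothetical source bits, each emitting 0 with probability 0.6 (bias 0.1):
b = xor_bias([0.1] * 8)
assert abs(b) < 0.001  # 2^7 * 0.1^8 = 1.28e-6: nearly unbiased
```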

4.1.3. Limitations of IDQ QRNG Entropy Accumulation Model

At first glance, the entropy accumulation model of IDQ QRNG seems to generate random numbers based on the Leftover Hash Lemma, but this model has two inherent limitations.
The first point concerns the safety of the 1-bit random number generator used to create the random matrix. This generator should output 0 and 1 each with probability 1/2. However, neither the min-entropy of the entropy source used in IDQ QRNG nor the number of XOR iterations is specified, which makes it impossible to confirm whether the random matrix was generated uniformly.
The second point is that the random matrix is generated only once in IDQ QRNG, which appears to be an incomplete application of the Leftover Hash Lemma. If the same random matrix is used continuously, the independence between consecutive m-bit output sequences is compromised. In this case, it is impossible to obtain the overall entropy of a long output sequence via the Leftover Hash Lemma. For example, if we generate m-bit output sequences X_i (1 ≤ i ≤ 5) using the entropy accumulation of IDQ QRNG, then although H_min(X_i) ≥ m − ϵ for each 1 ≤ i ≤ 5, it does not always hold that H_min(X_1 ∥ X_2 ∥ X_3 ∥ X_4 ∥ X_5) = 5(m − ϵ). Therefore, the matrix should be updated whenever a new output sequence is generated in order to apply the Leftover Hash Lemma properly.

4.1.4. Differences between IDQ QRNG and Our Entropy Accumulation

The simple characteristic distinguishing Fast-Refresh from Slow-Refresh is the length of the input sequence. In our entropy accumulation, we generate the output sequence by XORing five 8-bit input sequences; that is, the total input length is 40 bits for an 8-bit output sequence. In contrast, IDQ QRNG uses input sequences of 1024 or 2048 bits and output sequences of 768 or 1792 bits, respectively [19]. The reason for such long input sequences is to adjust the ϵ of the Leftover Hash Lemma to about 2^(-100) [19].
The bit loss rate is another characteristic distinguishing Fast-Refresh from Slow-Refresh. In our entropy accumulation, 32 bits are discarded from every 40 bits of input, a bit loss rate of 80%. In the IDQ QRNG entropy accumulation, 256 bits are discarded from 1024 or 2048 bits of input, giving bit loss rates of 25% or 12.5%, respectively. Thus, Slow-Refresh handles a large number of bits at once with a relatively low loss rate; however, a low bit loss rate comes at the cost of operation speed. The major differences are shown in Table 2.

4.2. Comparing with the Fast-Refresh of Windows RNG

We have already described how the Fast-Refresh of Windows RNG works in the Introduction. Here, we describe the entropy accumulation theory of [2] and calculate the number of operations that must be iterated when applying it to the input sequence Y_j. First, we describe the 2-monotone distribution and the covering number, which are essential concepts in [2]. Thereafter, we present Theorem 5.2 of [2] (the main result of [2]) in our notation.
Next, we explain why [2] cannot be applied directly to our entropy accumulation model and suggest an additional S-box operation as a solution. With this additional operation, we can apply Theorem 5.2 of [2] and overcome the limitations of the original theory. Finally, we provide the theoretical number of operations necessary to guarantee a min-entropy of 7.86 per 8 bits when Theorem 5.2 of [2] is applied to the RNG.

4.2.1. Main Results of [2]

The covering number measures the efficiency of the permutation used in entropy accumulation: the smaller the covering number of the permutation, the more efficient the entropy accumulation.
Definition 5
(Covering number). For a permutation π : {0, 1, ..., n−1} → {0, 1, ..., n−1} and an integer 1 ≤ k ≤ n, the covering number C_{π,k} is the smallest natural number m such that
$$\left\{ \pi^l(j) : 0 \le j < k,\; 0 \le l < m \right\} = \{0, 1, \ldots, n-1\}.$$
If no such m exists, then C_{π,k} = ∞.
Figure 12 illustrates the calculation of the covering number.
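The covering number of Definition 5 can be computed by direct search; a sketch (the function and the encoding of rot(2, 8) as j ↦ (j + 2) mod 8 are our own illustration):

```python
def covering_number(pi, k, n, max_m=64):
    """Smallest m with {pi^l(j) : 0 <= j < k, 0 <= l < m} = {0,...,n-1};
    returns None if no such m <= max_m exists (i.e., C is infinite)."""
    covered, positions = set(), list(range(k))   # positions = pi^0(j)
    for m in range(1, max_m + 1):
        covered.update(positions)                # add pi^(m-1)(j) for j < k
        if len(covered) == n:
            return m
        positions = [pi(j) for j in positions]   # advance to pi^m(j)
    return None

rot2 = lambda j: (j + 2) % 8                     # rot(2, 8) bit permutation
assert covering_number(rot2, k=2, n=8) == 4
```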
One special property of an entropy source in [2] is 2-monotone distribution. The definition is as follows:
Definition 6
(2-monotone distribution). The probability distribution D_X of an n-bit random variable X follows a 2-monotone distribution if its domain can be divided into two disjoint intervals, on each of which D_X is a monotone function.
A 2-monotone distribution has at most one interior extremum (peak), because its domain is divided into two monotonic intervals.
Based on these definitions, we can state Theorem 5.2 of [2].
Theorem 2
(Theorem 5.2 of [2]). Suppose that the independent n-bit random variables Y_1, Y_2, ..., Y_l have probability distributions D_{Y_1}, D_{Y_2}, ..., D_{Y_l} with min-entropy at least k (≥ 2) that follow 2-monotone distributions. Let π : {0, 1, ..., n−1} → {0, 1, ..., n−1} be a permutation and m = C_{π,k*} its covering number, where k* = ⌊k/2⌋. Let Λ_π(l) = π^{l−1}(Y_1) ⊕ π^{l−2}(Y_2) ⊕ ⋯ ⊕ Y_l. Then, for any l ≥ m,
$$H_{min}\left(D_{\Lambda_\pi(l)}\right) \ge n - \frac{n}{k^*+1} \cdot \log_2\!\left(1 + 2^{\,k^* - \frac{k^* l}{m}}\right) \approx n\left(1 - 2^{\frac{k}{2} - \frac{k l}{2m}}\right).$$
One can easily observe that as l increases, H_min(D_{Λ_π(l)}) converges to n.

4.2.2. Windows RNG Entropy Accumulation without 2-Monotone Condition

Theorem 2 provides a min-entropy lower bound for Λ_π(l) under only three restrictions. The conditions that the input sequences be independent and that the covering number of the permutation be finite are relatively easy to satisfy. However, it is challenging to satisfy the condition that all input sequences follow a 2-monotone distribution. In particular, for the image sensor entropy sources used here, input sequences that follow a 2-monotone distribution are unattainable.
We generated Y_j by concatenating four 2-bit entropy sources: W_{4j−3}, W_{4j−2}, W_{4j−1}, and W_{4j}. We used this method only for experimental verification (SP 800-90B requires at least 8-bit data), and H_min(Γ(l)) was theoretically calculated by adding the min-entropy values of the four 2-bit segments of Γ(l) (see Section 3.4). However, when applying Theorem 2 to our input sequences, we cannot use the divide-and-conquer approach. Although each D_{W_i} certainly follows a 2-monotone distribution (as can be observed in Figure 10), Theorem 2 requires input sequences with a min-entropy of two or more, and the 2-bit entropy sources W_i cannot achieve this condition. For this reason, when applying Theorem 2 to our input sequences, we must use concatenation for theoretical, rather than experimental, reasons. However, if the input sequences are concatenated, it is impossible to satisfy the 2-monotone assumption. We explain this with a 4-bit example. Figure 13 shows the probability distributions of 2-bit entropy sources W_1 and W_2, each of which follows a 2-monotone distribution. The probability distribution of W_1 ∥ W_2, created by concatenating the two sources, is shown in Figure 14.
If we consider {4i−3, 4i−2, 4i−1, 4i} as one group, there are four groups, and the overall shape of D_{W_1∥W_2} is similar to that of D_{W_1}, whereas the shape of each group's graph resembles D_{W_2}. Such a concatenated distribution tends to become more complex as more entropy sources are concatenated; the distribution of the input sequences therefore inevitably assumes a shape far from 2-monotone. However, if Y_j passes through a "good" S-box, the data can be transformed to follow a 2-monotone distribution (in particular, a monotone distribution). For example, if W_1 ∥ W_2 passes through the S-box S in Table 3, then D_{S(W_1∥W_2)} follows a monotone distribution, as illustrated in Figure 15. The method for creating such an S is simple: after obtaining the distribution of the concatenated data, sort the distribution values in ascending order and enter the corresponding domain elements into the S-box in that order. For example, as shown in Figure 14, D_{W_1∥W_2} has its minimum at x = 15 and its second-smallest value at x = 3. Arranged in this manner, the x values become 15, 3, 12, 7, ⋯, 2, 6, 10; entering these in sequence into the S-box yields the S in Table 3.
Although we provide an example of transforming 4-bit data created by concatenating two 2-bit entropy sources, this method can be applied to arbitrary n-bit data. With this method, even if the input sequences do not follow the 2-monotone distribution, Theorem 2 can be applied. However, memory is required to store the S-box, and the accumulation speed can be reduced owing to additional operations. Additionally, significant amounts of meaningful data may be required to estimate the distribution. Figure 16 illustrates the Windows RNG entropy accumulation process using an additional S-box.
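The sorting construction just described can be sketched as follows; the toy distribution is ours for illustration, not the paper's measured data:

```python
import random

def monotone_sbox(dist):
    """Sort domain points by ascending probability; the resulting S-box
    makes the distribution of S(X) monotone non-decreasing."""
    order = sorted(range(len(dist)), key=lambda x: dist[x])
    return {x: rank for rank, x in enumerate(order)}

# Toy 4-bit distribution (illustrative only)
random.seed(1)
raw = [random.random() for _ in range(16)]
total = sum(raw)
dist = [p / total for p in raw]

S = monotone_sbox(dist)
# Pr[S(X) = y] = dist[S^{-1}(y)]; listed in order of y, it is sorted.
inv = {rank: x for x, rank in S.items()}
out = [dist[inv[y]] for y in range(16)]
assert out == sorted(out)  # D_{S(X)} is monotone
```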

4.2.3. Applying Theorem 2

In this subsection, we determine the number l required for Λ_π(l) to exceed a min-entropy of 7.86 bits per 8 bits by applying Theorem 2 to S(Y_j). We did not conduct actual experiments because of the numerous S-boxes that would need to be implemented and the vast amount of data required to estimate D_{Y_j}.
To apply Theorem 2, the first step is obtaining H_min(D_{S(Y_j)}); since the S-box does not change the min-entropy, H_min(D_{Y_j}) is used instead. Using [17] to estimate H_min(D_{Y_j}) would require 1,000,000 samples of Y_j, which is practically infeasible. Therefore, we set k = 4.9378, the lower bound of H_min(D_{Y_j}) estimated from 2000 samples of W_i data, as mentioned in Section 3.5. Note that the min-entropy estimated at l = 1 in Table 1 of Section 3.5 and this estimate have explicitly different estimation targets. With k = 4.9378, k* = ⌊k/2⌋ = 2. If we set π = rot(2, 8), the covering number m = C_{π,k*} becomes four, the minimum over all possible π. Thus, we opted to use rot(2, 8) as the bit permutation for the entropy accumulation. With these settings, l can be calculated as follows:
$$n\left(1 - 2^{\frac{k}{2}\left(1 - \frac{l}{m}\right)}\right) \ge 7.86 \iff \frac{n - 7.86}{n} \ge 2^{\frac{k}{2}\left(1 - \frac{l}{m}\right)} \iff \frac{\ln(n - 7.86) - \ln n}{\ln 2} \ge \frac{k}{2}\left(1 - \frac{l}{m}\right) \iff l \ge m\left(1 - \frac{2}{k} \cdot \frac{\ln(n - 7.86) - \ln n}{\ln 2}\right) = 4\left(1 - \frac{2}{4.9378} \cdot \frac{\ln(8 - 7.86) - \ln 8}{\ln 2}\right) \approx 13.4559.$$
That is, if we take l = 14 to create Λ_π(l), then H_min(D_{Λ_π(l)}) will exceed 7.86.
In Section 3.4, our theory yielded l = 16. Although Theorem 2 yields a slightly better result, the difference is insignificant. Considering the time consumed by the additional S-box and bit permutations and the storage required for the S-boxes, we conclude that our entropy accumulation is the more practical choice.
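The calculation above can be reproduced numerically; a sketch with the paper's parameters n = 8, k = 4.9378, and m = 4:

```python
import math

# Solving the Theorem 2 bound n(1 - 2^{(k/2)(1 - l/m)}) >= 7.86 for l,
# with n = 8, k = 4.9378 and covering number m = 4 for rot(2, 8).
n, k, m = 8, 4.9378, 4
l_min = m * (1 - (2 / k) * (math.log(n - 7.86) - math.log(n)) / math.log(2))

assert abs(l_min - 13.456) < 1e-3   # matches the ~13.4559 in the text
assert math.ceil(l_min) == 14       # so l = 14 suffices
```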

5. Conclusions

The contributions of our study can be summarized as follows. First, we have proposed an entropy accumulation of the Fast-Refresh type, composed of bitwise XOR operations alone without hash functions, and have proved a theorem that requires only independence, without any identical-distribution condition, of the input sequences.
Second, we have established 7.86 as the lower bound for the min-entropy per 8 bits, which is considered secure based on the three benchmarks. To surpass this lower bound, our proposed theory yields the iteration number l = 16.
We have implemented an actual RNG to verify the theory. Our experimental results indicate that with only four XOR operations, the generated output sequences already exceed the lower bound. The entropy source used in this experiment is the image sensor PV4209K, which serves as the quantum noise source of a QRNG that utilizes dark shot noise to generate random numbers. The most important property of our entropy source is the independence of its pixels: since each piece of 2-bit data from the pixels is considered an independent random variable, we can apply the main theorem to obtain the lower bound of the min-entropy.
Finally, we have compared our entropy accumulation with two types of entropy accumulations, which are Slow-Refresh of IDQ QRNG and Fast-Refresh of Windows RNG.
As a further study, we would like to consider various entropy accumulations that have more general and practical applications than our proposed Fast-Refresh mechanism.

Author Contributions

Formal analysis, Y.C.; Writing–original draft, Y.C.; Writing—review & editing, J.-S.K. and Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021M1A2A2043893) and Ministry of Science and ICT, South Korea (1711174177).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kwek, L.C.; Cao, L.; Luo, W.; Wang, Y.; Sun, S.; Wang, X.; Liu, A.Q. Chip-based quantum key distribution. AAPPS Bull. 2021, 31, 1–8. [Google Scholar] [CrossRef]
  2. Dodis, Y.; Guo, S.; Stephens-Davidowitz, N.; Xie, Z. No time to hash: On super-efficient entropy accumulation. In Proceedings of the Advances in Cryptology—CRYPTO 2021: 41st Annual International Cryptology Conference, CRYPTO 2021, Virtual Event, 16–20 August 2021; Proceedings, Part IV 41. Springer: Berlin/Heidelberg, Germany, 2021; pp. 548–576. [Google Scholar]
  3. Shoup, V. A Computational Introduction to Number Theory and Algebra; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
  4. Dodis, Y. Randomness in Cryptography; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  5. Hayashi, M.; Tsurumaru, T. More efficient privacy amplification with less random seeds via dual universal hash function. IEEE Trans. Inf. Theory 2016, 62, 2213–2232. [Google Scholar] [CrossRef] [Green Version]
  6. Hayashi, M. Exponential decreasing rate of leaked information in universal random privacy amplification. IEEE Trans. Inf. Theory 2011, 57, 3989–4001. [Google Scholar] [CrossRef] [Green Version]
  7. Ferguson, N. The Windows 10 Random Number Generation Infrastructure; Microsoft Corporation: Redmond, WA, USA, 2019. [Google Scholar]
  8. Herrero-Collantes, M.; Garcia-Escartin, J.C. Quantum random number generators. Rev. Mod. Phys. 2017, 89, 015004. [Google Scholar] [CrossRef] [Green Version]
  9. Lin, X.; Wang, R.; Wang, S.; Yin, Z.Q.; Chen, W.; Guo, G.C.; Han, Z.F. Certified randomness from untrusted sources and uncharacterized measurements. Phys. Rev. Lett. 2022, 129, 050506. [Google Scholar] [CrossRef] [PubMed]
  10. Liu, W.B.; Lu, Y.S.; Fu, Y.; Huang, S.C.; Yin, Z.J.; Jiang, K.; Yin, H.L.; Chen, Z.B. Source-independent quantum random number generator against tailored detector blinding attacks. Opt. Express 2023, 31, 11292–11307. [Google Scholar] [CrossRef] [PubMed]
  11. Zhou, H.; Li, J.; Zhang, W.; Long, G.L. Quantum random-number generator based on tunneling effects in a Si diode. Phys. Rev. Appl. 2019, 11, 034060. [Google Scholar] [CrossRef] [Green Version]
  12. Zhou, Q.; Valivarthi, R.; John, C.; Tittel, W. Practical quantum random-number generation based on sampling vacuum fluctuations. Quantum Eng. 2019, 1, e8. [Google Scholar] [CrossRef] [Green Version]
  13. Park, B.K.; Park, H.; Kim, Y.S.; Kang, J.S.; Yeom, Y.; Ye, C.; Moon, S.; Han, S.W. Practical true random number generator using CMOS image sensor dark noise. IEEE Access 2019, 7, 91407–91413. [Google Scholar] [CrossRef]
  14. Tawfeeq, S.K. A random number generator based on single-photon avalanche photodiode dark counts. J. Light. Technol. 2009, 27, 5665–5667. [Google Scholar] [CrossRef]
  15. Wang, F.X.; Wang, C.; Chen, W.; Wang, S.; Lv, F.S.; He, D.Y.; Yin, Z.Q.; Li, H.W.; Guo, G.C.; Han, Z.F. Robust quantum random number generator based on avalanche photodiodes. J. Light. Technol. 2015, 33, 3319–3326. [Google Scholar] [CrossRef] [Green Version]
  16. Matsui, M. Linear cryptanalysis method for DES cipher. In Proceedings of the Advances in Cryptology—EUROCRYPT’93: Workshop on the Theory and Application of Cryptographic Techniques, Lofthus, Norway, 23–27 May 1993; Proceedings 12. Springer: Berlin/Heidelberg, Germany, 1994; pp. 386–397. [Google Scholar]
  17. Turan, M.S.; Barker, E.; Kelsey, J.; McKay, K.A.; Baish, M.L.; Boyle, M.E. Recommendation for the entropy sources used for random bit generation. NIST Spec. Publ. 2018, 800, 102. [Google Scholar]
  18. Peter, M.; Schindler, W. A Proposal for Functionality Classes for Random Number Generators; AIS 20/31; BSI: Hamburg, Germany, 2022. [Google Scholar]
  19. Troyer, M.; Renner, R. A randomness extractor for the Quantis device. Quantum Number Gener. 2001, 2001–2010. [Google Scholar]
  20. IDQ. Randomness Extraction for the Quantis True Random Number Generator; IDQ: Geneva, Switzerland, 2012. [Google Scholar]
Figure 1. Operation of an RNG.
Figure 2. Windows RNG entropy accumulation.
Figure 3. Example of the rot permutation when n = 8 and α = 3.
Figure 4. Entropy accumulation using only bitwise XOR operations.
Figure 5. Our entropy accumulation with n = 1.
Figure 6. Data transmission process of image sensor.
Figure 7. Concatenation of 2-bit data to form 8-bit input sequence.
Figure 8. Accumulating entropy source using XOR operation.
Figure 9. Min-entropy of true random 8-bit.
Figure 10. Probability distribution of each OBP.
Figure 11. IDQ QRNG entropy accumulation model.
Figure 12. Illustration of covering number when C_{π,2} = 3. The covering number is 3 because all bits are covered using the permutation operation twice.
Figure 13. Example of two distributions that follow a 2-monotone distribution.
Figure 14. Distribution of concatenated entropy sources.
Figure 15. Distribution of S-boxed concatenated entropy sources.
Figure 16. Windows RNG entropy accumulation with additional S-box.
Table 1. Min-entropy values corresponding to the number of XOR operation repetitions.

l | i | j | k | Min-Entropy per 8-Bit
1 | 4,000,000 | 1,000,000 | 1,000,000 | 3.305
2 | 8,000,000 | 2,000,000 | 1,000,000 | 7.115
3 | 12,000,000 | 3,000,000 | 1,000,000 | 7.700
4 | 16,000,000 | 4,000,000 | 1,000,000 | 7.852
5 | 20,000,000 | 5,000,000 | 1,000,000 | 7.864
Table 2. Major differences between our entropy accumulation and that of IDQ QRNG.

 | Our Entropy Accumulation | IDQ QRNG
Refresh Type | Fast-Refresh | Slow-Refresh
Theoretical Background | Fourier Transform | Leftover Hash Lemma
Implementation Aspect | Simple XOR operations | Difficulty of implementing a universal hash family
Input Sequence Length | 40 | 1024 or 2048
Output Sequence Length | 8 | 768 or 1792
Bit Loss Rate | 80% | 25% or 12.5%
Table 3. 4-bit S-box used with W_1 ∥ W_2 to create a monotone distribution.

x    | 15 | 3 | 12 | 7 | 13 | 11 | 0 | 1 | 4 | 14 | 8 | 5 | 9 | 2 | 6 | 10
S(x) | 0  | 1 | 2  | 3 | 4  | 5  | 6 | 7 | 8 | 9  | 10 | 11 | 12 | 13 | 14 | 15

