Article

Image Encryption Schemes Based on a Class of Uniformly Distributed Chaotic Systems

Mathematics and Physics School, University of Science and Technology Beijing, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1027; https://doi.org/10.3390/math10071027
Submission received: 16 February 2022 / Revised: 7 March 2022 / Accepted: 18 March 2022 / Published: 23 March 2022
(This article belongs to the Special Issue Chaos-Based Secure Communication and Cryptography)

Abstract

This paper proposes a method to construct a one-dimensional discrete chaotic system. First, we define a generalized distance function to control the boundedness of the one-dimensional discrete system. Based on Marotto’s theorem, one-dimensional discrete systems are proven to be chaotic in the sense of Li–Yorke, and the corresponding chaos criterion theorem is proposed. The system can be distributed uniformly by adjusting the parameters. In this paper, we propose an image encryption scheme based on a uniformly distributed discrete chaotic system and DNA encoding. DNA encoding and decoding rules are determined by plain text. The experimental results demonstrate that our encryption algorithm has a large key space, high key sensitivity, and fast encryption speed and can resist differential and statistical attacks.

1. Introduction

Chaos is a special type of complex dynamic behavior displayed by nonlinear systems, and it arises widely in mathematics, physics, psychology, biology, and other fields. Chaotic systems have many essential properties, such as ergodicity, extreme sensitivity to initial conditions, and good pseudorandom behavior, which make chaos theory a popular research subject. In recent years, chaotic systems have developed rapidly and been applied in many fields, especially electronic communications and cryptography [1,2]. As images are one of the most important information carriers, their security is very important and has drawn increasing attention from the public and researchers. However, due to a variety of intrinsic characteristics of images, such as the strong correlation of adjacent pixels, data redundancy, and high computational complexity, traditional encryption algorithms are unsuited to encrypting images. Therefore, researchers have proposed many image encryption algorithms [3,4,5]. Chaos-based image cryptosystems have become one of the most attractive encryption approaches [6,7,8] because of the main features of chaotic systems, such as sensitivity to initial conditions, ergodicity, and highly complex behavior, in addition to their mixing properties.
Researchers have been strongly attracted to constructing new chaotic systems in terms of the existing theory [9,10], which involves criteria for the existence of chaos in dynamical systems. In 1975, Li and Yorke first defined the term chaos from a mathematical perspective and proposed a criterion for the existence of chaos in one-dimensional discrete dynamical systems [11], which is well known as "period three implies chaos". Under the guidance of Li–Yorke's criterion, Marotto generalized this result to high-dimensional discrete dynamical systems in 1978, which is known as Marotto's theorem [12]. Shi and Chen proposed a new modified version of Marotto's theorem [13] in 2005. Based on Li–Yorke's criterion, a sufficient and necessary condition for the existence of the three periodic points of a quadratic polynomial was obtained by decomposing the real coefficient polynomial in a complex field [14], and a chaos criterion theorem for a cubic discrete system was established and proven in the sense of Li–Yorke [15]. Nevertheless, it is difficult to construct chaotic systems in terms of this criterion, and thus far, only a small number of related works have been reported. In contrast to Li–Yorke's criterion, Marotto's theorem is more instrumental in providing a constructive theoretical direction. Based on Marotto's theorem, Chen and Lai discussed a control problem for discrete-time dynamical systems by adding a control term and then proposed an algorithm to control the Lyapunov exponents of discrete-time dynamical systems, which is known as the Chen–Lai algorithm [16]. Moreover, several bounded functions, including modulus, sine, and saw-tooth functions, were applied to globally bound the discrete system in the Chen–Lai algorithm, where the control term is a linear function [17]. To eliminate the linear control term in the Chen–Lai algorithm, a chaos criterion theorem for a one-dimensional discrete system was provided in which a modulus function was used as the bounding function [18]. It is worth considering whether other bounded functions could lead to a similar result while eliminating the linear control term.
Low-dimensional chaotic maps have a simple structure and are easy to implement, but they usually have a small key space. The chaotic system proposed in this paper overcomes this shortcoming of traditional low-dimensional chaotic systems, and, by adjusting its parameters, a uniform distribution can be achieved.
DNA computing has attracted increasing attention since Adleman's pioneering work [19]. Cryptography utilizes DNA as an information carrier in image encryption and has shown promising results by taking advantage of excellent DNA properties, such as massive parallelism, large storage, and ultralow power consumption. The authors of [20] proposed a color image encryption scheme based on DNA operations and a spatiotemporal chaotic system; the key stream is associated with both the key and the plaintext image, which improves the ability to resist known-plaintext and chosen-plaintext attacks. Liu et al. [21] proposed a color image encryption scheme based on dynamic DNA and 4-D memristive hyperchaos, whose main feature is that a hyperchaos-driven dynamic DNA mechanism is applied in the encoding, confusion, and diffusion processes, improving the security of the algorithm. Liu et al. [22] combined DNA computing with a double-chaos system composed of a Lorenz chaotic map with variable parameters and a fourth-order Rossler hyperchaotic map and proposed an algorithm for color image encryption at the bit level; the double-chaos system compensates for the limited pseudorandomness of the two chaotic maps, making the chaotic sequences more difficult to predict.
In this paper, a bounded generalized distance function is defined and applied to control the boundedness of a discrete system. In terms of Marotto's theorem, a one-dimensional discrete system is discussed, and the corresponding chaos criterion theorem is established to determine the existence of chaos in the discrete system. The system can be distributed uniformly by adjusting the parameters. An image encryption scheme is then proposed based on this kind of uniformly distributed discrete chaotic system. First, a chaotic sequence is used to scramble and XOR-transform the image, and then DNA encoding and DNA operations are performed, the rules of which are determined by the plain text. The remainder of this paper is organized as follows. Section 2 presents a class of uniformly distributed discrete chaotic systems and analyzes their dynamic behaviors. Image encryption based on DNA coding is proposed in Section 3. Section 4 gives the simulation results and security analyses. Finally, Section 5 concludes the paper.

2. A Class of Uniformly Distributed Chaotic Systems

2.1. Chen–Lai Algorithm

The Chen–Lai algorithm considers a nonlinear discrete system, not necessarily chaotic, of the form:
$x_{k+1} = f_k(x_k), \quad x_k \in \mathbb{R}^n. \qquad (1)$
Then, a control input sequence $\{u_k\}_{k=0}^{\infty}$ is designed to investigate the anti-control of system (1), and the new controlled system is described as:
$x_{k+1} = f_k(x_k) + u_k, \quad x_k \in \mathbb{R}^n, \qquad (2)$
where the linear feedback $u_k = B_k x_k$ is considered for simplicity.
Assume the sequence $\{B_k\}$ is uniformly bounded:
$\sup_{0 \le k < \infty} \|B_k\| \le M < \infty,$
where $M$ is a positive constant and $\|\cdot\|$ denotes the spectral norm of a finite-dimensional matrix.
Under this assumption alone, it is proven that, in practice, the algorithm can provide the required Lyapunov exponents and achieve the expected anti-control of system (2).
The Chen–Lai algorithm based on the modulus, sine, and saw-tooth functions is discussed further in detail in [17]. Based on the modulus operation, the one-dimensional discrete system in the Chen–Lai algorithm has the form:
$x_{k+1} = f(x_k) + u_k \pmod{1}, \qquad (3)$
where $u_k = (N + e^{c}) x_k$ is the control term and $N$ and $c$ are two constants.
A proposition is given on the one-dimensional discrete system (3) as follows.
Lemma 1.
[17] If $f(0) = 0$, $c > 0$, and $|f'(x)| < \frac{1}{N}$ are satisfied, then the controlled system (3) is chaotic in the sense of Li–Yorke.
The chaotic systems constructed by Lemma 1 are limited by the linear control term; thus, a one-dimensional system without a linear control term was considered in [18], with the form:
$x_{k+1} = f(x_k) \pmod{1}, \qquad (4)$
where $x_k \in \mathbb{R}$, $f(x) \in C^1[0, 1]$, and $f(0) = 0$.
The chaos criterion theorem for system (4) is also provided.
Lemma 2.
[18] If $|f'(x)| > 1$ for all $x \in [0, 1)$, then system (4) is chaotic in the sense of Li–Yorke.
Evidently, Lemma 1 is a special case of Lemma 2. In short, the form of system (3) is generalized to the form of system (4), while the bounding function remains a modulus function. Naturally, under the guidance of the Chen–Lai algorithm and Lemma 2, a new bounded function can be defined to replace the modulus function, and the corresponding chaos criterion theorem can be proposed. This work is described in Section 2.2.

2.2. A One-Dimensional Discrete Chaotic System

This subsection begins by recalling, without proof, the modified version of Marotto's theorem, which is used later.
Lemma 3.
[13] Let $f: \mathbb{R}^n \to \mathbb{R}^n$ be a map with a fixed point $z \in \mathbb{R}^n$. Assume that
(1) 
$f$ is continuously differentiable in a neighborhood of $z$ and all the eigenvalues of $Df(z)$ have absolute values larger than 1, which implies that there exist a positive constant $r$ and a norm $\|\cdot\|$ in $\mathbb{R}^n$ such that $f$ is expanding in $\bar{B}_r(z)$ with respect to $\|\cdot\|$, where $\bar{B}_r(z)$ is the closed ball of radius $r$ centered at $z$ in $(\mathbb{R}^n, \|\cdot\|)$;
(2) 
$z$ is a snap-back repeller of $f$ with $f^m(x_0) = z$, $x_0 \ne z$, for some $x_0 \in B_r(z)$ and some positive integer $m$, where $B_r(z)$ is the open ball of radius $r$ centered at $z$ in $(\mathbb{R}^n, \|\cdot\|)$. Furthermore, $f$ is continuously differentiable in some neighborhoods of $x_0, x_1, \ldots, x_{m-1}$, and $\det Df(x_j) \ne 0$ for $0 \le j \le m-1$, where $x_j = f(x_{j-1})$, $1 \le j \le m-1$.
Then $f$ is chaotic in the sense of Li–Yorke.

2.2.1. A Generalized Distance Function

First, a distance function is defined.
Definition 1.
Let $x \in \mathbb{R}$; then, there exists an integer $N$ such that $x \in [N, N+1]$. The distance function is defined as $f(x) = \min\{x - N,\ N + 1 - x\}$, i.e., the distance from $x$ to the nearest integer. For simplicity, it is denoted as $f(x) = \langle x \rangle$.
Clearly, the function $\langle x \rangle$ is even. A generalized distance function is further defined by adding a scale parameter to the distance function in Definition 1.
Definition 2.
A generalized distance function is defined as $Dis_\varepsilon(x) = \varepsilon \langle x \rangle$, where the parameter $\varepsilon$ is a positive constant.
The graph of the function $Dis_\varepsilon(x)$ is displayed in Figure 1. Note that $Dis_\varepsilon(x) \in [0, \varepsilon/2]$ and that it is also an even function.
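As a quick numerical illustration of Definitions 1 and 2, the following Python sketch (ours, not part of the original paper; the paper's own experiments use MATLAB, and the function names are our choice) implements the distance function and the generalized distance function and checks the stated range and evenness.

```python
import numpy as np

def dist_to_nearest_integer(x):
    # Definition 1: <x> = min(x - N, N + 1 - x) with N = floor(x),
    # i.e., the distance from x to the nearest integer.
    n = np.floor(x)
    return np.minimum(x - n, n + 1.0 - x)

def dis_eps(x, eps):
    # Definition 2: Dis_eps(x) = eps * <x>, which takes values in [0, eps/2].
    return eps * dist_to_nearest_integer(x)

xs = np.linspace(-2.0, 2.0, 9)
print(dis_eps(xs, 2.0))                                   # all values lie in [0, 1]
print(np.allclose(dis_eps(xs, 2.0), dis_eps(-xs, 2.0)))   # even function -> True
```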

2.2.2. Two Chaos Criterion Theorems

Theorem 1.
Consider the one-dimensional linear discrete system
$x_{k+1} = Dis_\varepsilon(a x_k), \quad a \ne 0. \qquad (5)$
If $|a|\varepsilon \ge 2$ is satisfied, then system (5) is chaotic in the sense of Li–Yorke.
Proof of Theorem 1.
If $a < 0$, then $x_{k+1} = Dis_\varepsilon(a x_k) = Dis_\varepsilon(-a x_k)$ because $Dis_\varepsilon$ is even, which reduces to the case $a > 0$. Therefore, only the case $a > 0$ is proved for simplicity.
Denote $g(x) = Dis_\varepsilon(ax)$. Then, for $0 \le x \le 1/a$, $g(x)$ can be expressed as:
$g(x) = \begin{cases} a\varepsilon x, & 0 \le x < 1/(2a), \\ \varepsilon(1 - ax), & 1/(2a) \le x \le 1/a. \end{cases}$
The derivative of the map $g(x)$ satisfies $|g'(x)| = |a|\varepsilon > 1$ for $0 \le x \le 1/a$, $x \ne 1/(2a)$. In addition, the condition $|a|\varepsilon = a\varepsilon \ge 2$ gives $1/a \le \varepsilon/2$.
Denote $J_1 = (0, 1/(2a))$ and $J_2 = (1/(2a), 1/a)$.
In the interval $J_2$, solving $g(x) = \varepsilon(1 - ax) = x$ gives the fixed point:
$x^* = \frac{\varepsilon}{1 + a\varepsilon}.$
Next, the fixed point $x^*$ is proven to be a snap-back repeller.
A sequence $\{x_m \mid m = 0, 1, 2, \ldots\}$ is defined as:
$x_0 = \frac{1}{a(1 + a\varepsilon)} \in J_1, \quad x_m = \frac{\varepsilon - x_{m+1}}{a\varepsilon} \in J_2, \quad m = 1, 2, \ldots$
Then, $x^* = g(x_0) = g^2(x_1) = \cdots = g^m(x_{m+1})$.
By the Lagrange mean value theorem, there exists a point $\xi \in (x_{m+1}, x^*)$ or $\xi \in (x^*, x_{m+1})$ such that:
$|x_{m+2} - x^*| = |g(x_{m+1}) - g(x^*)| = |g'(\xi)(x_{m+1} - x^*)| > |x_{m+1} - x^*|.$
Hence, let the positive integer $m$ be large enough; then, there exists a constant $r > 0$ such that:
$x_{m+1} \in B_r(x^*) \cap J_2, \quad x_{m+k} \in B_r(x^*), \quad k = 2, 3, \ldots, m,$
where $B_r(x^*) = [x^* - r, x^* + r]$ is a closed ball and $g(x)$ is continuously differentiable in $B_r(x^*)$.
In summary, the fixed point x * satisfies the following conditions:
(a) There exists a positive constant $r > 0$ such that, for any point $x \in B_r(x^*) \cap J_2$,
$|\det\{Dg(x)\}| = |g'(x)| = |a|\varepsilon = a\varepsilon > 1,$
where $Dg(x) = g'(x)$ denotes the Jacobian matrix of $g(x)$.
That is, the eigenvalue of $Dg(x)$ is $\lambda = g'(x)$, and it satisfies $|\lambda| = |g'(x)| = a\varepsilon > 1$.
(b) There exists a point $x_{m+1} \in B_r(x^*) \cap J_2$ and a positive integer $m \ge 2$ such that $g^m(x_{m+1}) = x^*$, and
$|\det\{Dg^m(x_{m+1})\}| = \left|\prod_{i=1}^{m} \det\{Dg(x_{i+1})\}\right| = (a\varepsilon)^m \ne 0.$
In summary, $x^*$ is a snap-back repeller of system (5); by Lemma 3, system (5) is chaotic in the sense of Li–Yorke. This completes the proof. □
In the proof of Theorem 1, the derivative of the map $g(x) = Dis_\varepsilon(ax)$ satisfies $|g'(x)| = |a|\varepsilon \ge 2$, which shows that the Lyapunov exponent of system (5) is $\lambda = \ln(|a|\varepsilon) > 0$.
Theorem 2.
Consider the one-dimensional nonlinear discrete system
$x_{k+1} = Dis_\varepsilon(f(x_k)), \qquad (6)$
where $f(x) \in C^1[0, \varepsilon/2]$ and $f(0) = 0$.
If $|f'(x)| > 1$ for $x \in [0, \varepsilon/2]$ and $\varepsilon \ge 2$, then system (6) is chaotic in the sense of Li–Yorke.
Proof of Theorem 2.
The assumption $|f'(x)| > 1$ gives $f'(x) > 1$ or $f'(x) < -1$.
If $f'(x) < -1$, then $x_{k+1} = Dis_\varepsilon(f(x_k)) = Dis_\varepsilon(-f(x_k))$ because $Dis_\varepsilon$ is even, which reduces to the case $f'(x) > 1$. Therefore, only the case $f'(x) > 1$ is proved for simplicity.
Denote $g(x) = Dis_\varepsilon(f(x))$; then its derivative satisfies $|g'(x)| = |Dis_\varepsilon'(f(x))\, f'(x)| = \varepsilon f'(x) > 1$.
By the Lagrange mean value theorem, there exists a point $\xi \in [0, x]$ such that
$f(x) = f(x) - f(0) = f'(\xi)\,x > x,$
which gives $f(1/2) > 1/2$ and $f(1) > 1$.
Since $f(1/2) > 1/2 > 0 = f(0)$, there exists a point $t_0 \in (0, 1/2)$ such that $f(t_0) = 1/2$.
Since $f(1) > 1 > 1/2 = f(t_0)$, there exists a point $t_1 \in (t_0, 1)$ such that $f(t_1) = 1$.
The monotonicity of the map $f(x)$ ensures the uniqueness of the points $t_0$ and $t_1$.
Then, $g(t_0) = Dis_\varepsilon(f(t_0)) = \varepsilon/2$ and $g(t_1) = Dis_\varepsilon(f(t_1)) = 0$.
Denote $h(x) = g(x) - x$; then $\varepsilon \ge 2$ gives
$h(t_0) = g(t_0) - t_0 = \varepsilon/2 - t_0 > 0, \quad h(t_1) = g(t_1) - t_1 = -t_1 < 0,$
which indicates that there exists a point $x^* \in (t_0, t_1)$ such that $h(x^*) = 0$. Moreover, $h'(x) = g'(x) - 1 = -\varepsilon f'(x) - 1 < 0$ on $(t_0, t_1)$, so this point is unique.
Thus, the point $x^*$ is a fixed point of the map $g(x)$ in the interval $(t_0, t_1)$.
Next, the fixed point x * is proven to be a snap-back repeller.
Since $g(0) = 0 < x^* < \varepsilon/2 = g(t_0)$, there exists a point $x_0 \in (0, t_0)$ such that $g(x_0) = x^*$.
Since $g(t_1) = 0 < x_0 < x^* = g(x^*)$, there exists a point $x_1 \in (x^*, t_1)$ such that $g(x_1) = x_0$.
Since $g(x^*) = x^* < x_1 < \varepsilon/2 = g(t_0)$, there exists a point $x_2 \in (t_0, x^*)$ such that $g(x_2) = x_1$.
Since $g(t_1) = 0 < x_2 < x^* = g(x^*)$, there exists a point $x_3 \in (x^*, t_1)$ such that $g(x_3) = x_2$.
Continuing in this way, there exists a point $x_{m+1} \in (t_0, x^*)$ or $x_{m+1} \in (x^*, t_1)$ such that $g(x_{m+1}) = x_m$.
Then, $x^* = g(x_0) = g^2(x_1) = \cdots = g^m(x_{m+1})$.
By the Lagrange mean value theorem, there exists a point $\xi \in (x_{m+1}, x^*)$ or $\xi \in (x^*, x_{m+1})$ such that
$|x_{m+2} - x^*| = |g(x_{m+1}) - g(x^*)| = |g'(\xi)(x_{m+1} - x^*)| > |x_{m+1} - x^*|.$
Hence, let the positive integer $m$ be large enough; then, there exists a constant $r > 0$ such that
$x_{m+1} \in B_r(x^*) \cap (t_0, t_1), \quad x_{m+k} \in B_r(x^*), \quad k = 2, 3, \ldots, m,$
where $B_r(x^*) = [x^* - r, x^* + r]$ is a closed ball and $g(x)$ is continuously differentiable in $B_r(x^*)$.
In summary, the fixed point x * satisfies the following conditions:
(a) There exists a positive constant $r > 0$ such that, for any point $x \in B_r(x^*) \cap (t_0, t_1)$,
$|\det\{Dg(x)\}| = |g'(x)| = \varepsilon f'(x) > 1,$
where $Dg(x) = g'(x)$ denotes the Jacobian matrix of $g(x)$.
That is, the eigenvalue of $Dg(x)$ is $\lambda = g'(x)$, and it satisfies $|\lambda| = |g'(x)| = \varepsilon f'(x) > 1$.
(b) There exists a point $x_{m+1} \in B_r(x^*) \cap (t_0, t_1)$ and a positive integer $m \ge 2$ such that $g^m(x_{m+1}) = x^*$, and
$|\det\{Dg^m(x_{m+1})\}| = \left|\prod_{i=1}^{m} \det\{Dg(x_{i+1})\}\right| = \prod_{i=1}^{m} \varepsilon f'(x_{i+1}) \ne 0.$
In summary, $x^*$ is a snap-back repeller of system (6); by Lemma 3, system (6) is chaotic in the sense of Li–Yorke. This completes the proof. □
Note that Theorem 2 does not contain Theorem 1 as a special case obtained by setting $f(x) = ax$ in Theorem 2, since Theorem 1 only requires $|a|\varepsilon \ge 2$ rather than $|a| > 1$.

2.2.3. Three Specific Propositions

To explain the application of Theorem 2, three propositions, based on Theorem 2, are proposed by designing the form of f ( x ) in system (6).
Proposition 1.
Consider a one-dimensional discrete system
$x_{k+1} = Dis_\varepsilon(x_k^2 + a x_k) = Dis_\varepsilon(f(x_k)), \qquad (7)$
where $f(x) = x^2 + ax$ and $\varepsilon \ge 2$. If $a > 1$ or $a < -\varepsilon - 1$, then system (7) is chaotic.
Note that requiring $|f'(x)| = |2x + a| > 1$ for $x \in [0, \varepsilon/2]$ gives $a > 1$ or $a < -\varepsilon - 1$.
Proposition 2.
Consider a one-dimensional discrete system
$x_{k+1} = Dis_\varepsilon(a_n x_k^n + a_{n-1} x_k^{n-1} + \cdots + a_1 x_k) = Dis_\varepsilon(f(x_k)), \qquad (8)$
where $f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x$ and $\varepsilon \ge 2$. If $|f'(x)| > 1$ for $x \in [0, \varepsilon/2]$, then system (8) is chaotic.
In particular, if $a_i > 0$ for $i = 2, 3, \ldots, n$ and $a_1 > 1$, then $f'(x) = n a_n x^{n-1} + \cdots + 2 a_2 x + a_1 > 1$ for $x \in [0, \varepsilon/2]$; that is, system (8) is chaotic.
Proposition 3.
Consider a one-dimensional discrete system
$x_{k+1} = Dis_\varepsilon\!\left(\int_0^{x_k} g(t)\,dt + a x_k\right) = Dis_\varepsilon(f(x_k)), \qquad (9)$
where $f(x) = \int_0^{x} g(t)\,dt + ax$, $g(x) \in C^1[0, \varepsilon/2]$, and $\varepsilon \ge 2$. If $g(x) > 0$ for $x \in [0, \varepsilon/2]$ and $a > 1$, then system (9) is chaotic. Note that $f'(x) = g(x) + a > 0 + 1 = 1$ for $x \in [0, \varepsilon/2]$ and $f(0) = 0$.

2.3. Dynamical Properties Analysis

In this section, the dynamic properties of the chaotic systems in Theorems 1 and 2 are analyzed by means of numerical simulations, such as bifurcation diagrams and Lyapunov exponent spectra. A bifurcation diagram describes how the states of a nonlinear system change as one parameter varies. Examining the Lyapunov exponents together with the bifurcation diagram confirms the existence of chaotic behavior.

2.3.1. Bifurcation Diagrams and Lyapunov Exponent Spectra

In Theorem 1, let $\varepsilon = 2$; then, if $|a| \ge 1$, system (5) is chaotic in the sense of Li–Yorke. Let $a = 1$; then, if $\varepsilon \ge 2$, system (5) is chaotic in the sense of Li–Yorke.
Figure 2a,b show the bifurcation diagram and Lyapunov exponent spectrum of the parameter $a$ in system (5), respectively, where $\varepsilon = 2$. Figure 2c,d show the bifurcation diagram and Lyapunov exponent spectrum of the parameter $\varepsilon$ in system (5), respectively, where $a = 1$. As shown in Figure 2, system (5) displays the chaotic characteristics expected by Theorem 1.
In Proposition 1, let $\varepsilon = 2$; then, if $a > 1$ or $a < -3$, system (7) is chaotic. The bifurcation diagram and Lyapunov exponent spectrum of the parameter $a$ in system (7) are shown in Figure 3a,b, respectively. Figure 3 shows that system (7) displays the chaotic characteristics expected by Proposition 1.
In Proposition 3, set $g(x) = \sin(x) + 1$; then, if $a > 1$, system (9) is chaotic. The bifurcation diagram and Lyapunov exponent spectrum of the parameter $a$ in system (9) are shown in Figure 4a,b, respectively. Figure 4 shows that system (9) displays the chaotic characteristics expected by Proposition 3.
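The Lyapunov exponent claim can also be checked numerically. The sketch below is ours: it estimates the Lyapunov exponent of system (5) as the orbit average of ln|g'(x_k)|, approximating g' by a central difference (a choice we made for illustration, not the paper's method); for a = 1 and eps = 3 the result should be close to ln(a·eps) = ln 3.

```python
import numpy as np

def dis_eps(x, eps):
    # Generalized distance function Dis_eps(x) = eps * distance to the nearest integer.
    n = np.floor(x)
    return eps * np.minimum(x - n, n + 1.0 - x)

def lyapunov_estimate(f, x0, eps, n=100000, h=1e-8, burn_in=1000):
    # Estimate the Lyapunov exponent of x_{k+1} = Dis_eps(f(x_k)) as the average of
    # ln|g'(x_k)| along an orbit, with g' approximated by a central difference.
    g = lambda x: dis_eps(f(x), eps)
    x = x0
    for _ in range(burn_in):                      # discard the transient
        x = g(x)
    total = 0.0
    for _ in range(n):
        deriv = (g(x + h) - g(x - h)) / (2.0 * h)
        total += np.log(max(abs(deriv), 1e-12))   # guard against hitting a corner point
        x = g(x)
    return total / n

# System (5) with a = 1, eps = 3: the theory gives lambda = ln(a * eps) = ln 3 ≈ 1.0986.
print(lyapunov_estimate(lambda x: 1.0 * x, 0.2759, 3.0))
```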

2.3.2. Correlation Analysis

In this subsection, we set $a = 1$, $\varepsilon = 3$, and $x_0 = 0.2759$ in system (5), and $a = 2$, $\varepsilon = 2$, and $x_0 = 0.2759$ in system (7).
The evolution of the state variable $x(k)$ in systems (5) and (7) for the first 3000 iterations is shown in Figure 5a,b, respectively. The dynamic behaviors of chaotic systems (5) and (7) both demonstrate chaotic attractors.
Autocorrelation and cross-correlation are two main methods to measure the pseudorandomness of chaotic systems. For a truly random series such as white noise, the autocorrelation and cross-correlation are the δ function and zero, respectively.
The autocorrelation coefficient at lag k of a series { x ( n ) } of length N is normally given as:
$autocorr(k) = \frac{\sum_{i=1}^{N}(x(i) - \bar{x})(x(i+k) - \bar{x})}{\sum_{i=1}^{N}(x(i) - \bar{x})^2},$
where $\bar{x}$ is the mean of the series $\{x(n)\}$.
The cross-correlation of two series $\{x(n)\}$ and $\{y(n)\}$ of length $N$ at lag $k$ is defined as:
$crosscorr(k) = \frac{\sum_{i=1}^{N}(x(i) - \bar{x})(y(i-k) - \bar{y})}{\sqrt{\sum_{i=1}^{N}(x(i) - \bar{x})^2 \sum_{i=1}^{N}(y(i) - \bar{y})^2}}.$
For system (5), the autocorrelation function of the chaotic sequence generated with the initial parameters $a = 1$, $\varepsilon = 3$, and $x_0 = 0.2759$ is shown in Figure 6a, and its cross-correlation with another chaotic sequence generated with $a = 1$, $\varepsilon = 3$, and $x_0 = 0.3257$ is shown in Figure 6b.
For system (7), the autocorrelation function of the chaotic sequence generated with the initial parameters $a = 2$, $\varepsilon = 2$, and $x_0 = 0.2759$ is shown in Figure 6c, and its cross-correlation with another chaotic sequence generated with $a = 2$, $\varepsilon = 2$, and $x_0 = 0.3257$ is shown in Figure 6d.
Figure 6 shows that the autocorrelation and cross-correlation functions of the chaotic systems generated by systems (5) and (7) are all ideal, as expected, which means that their pseudorandomness is very close to that of a truly random sequence.
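A direct implementation of the two correlation measures is given below (a sketch of ours; the sums are taken over the overlapping part of the series, which the printed formulas leave implicit).

```python
import numpy as np

def autocorr(x, k):
    # Autocorrelation at lag k, normalized by the total variance of the series.
    x = np.asarray(x, dtype=float)
    xm = x.mean()
    num = np.sum((x[:len(x) - k] - xm) * (x[k:] - xm))
    return num / np.sum((x - xm) ** 2)

def crosscorr(x, y, k):
    # Normalized cross-correlation of two equal-length series at lag k >= 0.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x.mean(), y.mean()
    num = np.sum((x[k:] - xm) * (y[:len(y) - k] - ym))
    den = np.sqrt(np.sum((x - xm) ** 2) * np.sum((y - ym) ** 2))
    return num / den

# Example with two short series; for chaotic sequences from systems (5) and (7),
# autocorr(k) should drop quickly for k > 0 and crosscorr(k) should stay near zero.
x = [0.2, 0.9, 0.4, 0.7, 0.1, 0.6]
y = [0.5, 0.3, 0.8, 0.2, 0.9, 0.4]
print(autocorr(x, 1), crosscorr(x, y, 1))
```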

2.3.3. Distribution Density Analysis

In practical applications, the distribution density of chaotic systems is usually required to be uniform or nearly uniform. In this section, the distribution density of chaotic systems based on Theorems 1 and 2 is investigated by means of histograms. First, the simulation method of the chaotic system distribution density is described as follows.
Step 1: Denote by $\{x(n)\}$ a chaotic sequence of length $N$ generated by a chaotic system. Assume the value range of $\{x(n)\}$ is $\Delta = [\alpha, \beta]$; in fact, $\alpha = \min\{x(n)\}$ and $\beta = \max\{x(n)\}$.
Step 2: The interval $\Delta$ is divided equally into $M$ subintervals, and the length of each subinterval is $h = (\beta - \alpha)/M$.
Step 3: The number of samples that fall into each subinterval is counted and denoted as $n_i$, $i = 1, 2, \ldots, M$.
Step 4: The probability density of the points in the $i$-th subinterval is denoted as $p_i$, $i = 1, 2, \ldots, M$; it can be approximated by
$p_i = \frac{n_i}{N \Delta_i},$
where $\Delta_i = h$ is the subinterval length. Thus, $\sum_{i=1}^{M} p_i \Delta_i = 1$.
Then, the distribution density of a chaotic system can be simulated by the corresponding probability histogram in terms of the above method, as sketched in the code below. In the following simulations, unless otherwise stated, $N = 10^6$ and $M = 500$.
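A short Python sketch of Steps 1–4 (ours; the only assumptions are the concrete map and parameters used to generate the test sequence, which match the system (5) example below):

```python
import numpy as np

def dis_eps(x, eps):
    n = np.floor(x)
    return eps * np.minimum(x - n, n + 1.0 - x)

def density_histogram(seq, M=500):
    # Steps 1-4: split [alpha, beta] into M equal subintervals of length h and
    # return the approximate probability density p_i = n_i / (N * h) on each one.
    seq = np.asarray(seq)
    alpha, beta = seq.min(), seq.max()
    h = (beta - alpha) / M
    counts, edges = np.histogram(seq, bins=M, range=(alpha, beta))
    p = counts / (len(seq) * h)
    return edges[:-1], p

# Chaotic sequence of system (5) with a = 1, eps = 3 (N = 10**6 samples, M = 500 bins).
eps, x = 3.0, 0.2759
seq = np.empty(10**6)
for i in range(seq.size):
    x = dis_eps(1.0 * x, eps)
    seq[i] = x
_, p = density_histogram(seq, M=500)
print(p.min(), p.max(), 2.0 / eps)   # for a near-uniform density, every p_i stays close to 2/eps
```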
In Theorem 1, let $a = 1$; then, $\varepsilon = 3$, $\varepsilon = 3.3$, $\varepsilon = 3.8$, and $\varepsilon = 3.99$ are set, and histograms of the chaotic sequences generated by system (5) are shown in Figure 7a–d.
In each histogram, the bars represent the approximate values of the probability density $p_i$, and the curve $p(x) = 2/\varepsilon$ is used to fit the simulated distribution density of the chaotic system. Figure 7 shows that the distribution density of system (5) can be close to a uniform distribution by varying the system parameters, as shown in Figure 7a,d.
In fact, if we set $a = 1$ and $\varepsilon = 2$, the linear system (5) is a tent map, which follows a uniform distribution. It can be verified, in terms of the proof for the tent map, that if $|a\varepsilon| \ge 2$ is an integer, then system (5) follows a uniform distribution. Now, the distribution density of the nonlinear system (6) is studied; for simplicity, system (7) is utilized as an example.
Without loss of generality, we set ε = 2 in system (7) and then set a = 1 , a = 1.5 , a = 1.8 , and a = 2 , respectively, and histograms of the chaotic sequences generated by system (7) are shown in Figure 8a–d. As shown in Figure 8, the distribution density of system (7) can also be close to the uniform distribution by varying the system parameters.

3. The Proposed Image Encryption Scheme

3.1. DNA Encoding and Computing Rules

A DNA sequence consists of four basic nucleic acids: A (adenine), C (cytosine), G (guanine), and T (thymine). According to the pair rules, A and T are complementary, as are C and G. In the binary system, 0 and 1 are complementary. Therefore, binary numbers 00 and 11, and 10 and 01 are also complementary. If we use the four basic nucleic acids (i.e., A, C, G, T) to denote the four binary numbers 00, 01, 10, and 11, there are in total 4! = 24 kinds of encoding rules. However, only eight rules which satisfy the Watson–Crick complementary requirement are valid. The rules are shown in Table 1.
According to the rules of DNA encoding and decoding, DNA sequences can be computed using algebraic calculation, such as addition, subtraction, and XOR operations. Table 2 lists the three operations for DNA sequences according to Rule 1.
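As an illustration of the encoding and computing rules, the following Python sketch (ours, not the paper's MATLAB code) uses Rule 1 of Table 1; with this rule, a bitwise XOR of the 2-bit codes reproduces the XOR table in Table 2.

```python
# Rule 1 of Table 1: 00 -> A, 01 -> G, 10 -> C, 11 -> T.
ENCODE_RULE1 = {0b00: 'A', 0b01: 'G', 0b10: 'C', 0b11: 'T'}
DECODE_RULE1 = {v: k for k, v in ENCODE_RULE1.items()}

def byte_to_dna(byte, encode=ENCODE_RULE1):
    # Encode one 8-bit pixel as four DNA bases (2 bits per base, most significant bits first).
    return ''.join(encode[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0))

def dna_to_byte(dna, decode=DECODE_RULE1):
    # Decode four DNA bases back into one 8-bit value.
    value = 0
    for base in dna:
        value = (value << 2) | decode[base]
    return value

def dna_xor(s1, s2, encode=ENCODE_RULE1, decode=DECODE_RULE1):
    # XOR two DNA strings base by base; with Rule 1 this matches the XOR table of Table 2.
    return ''.join(encode[decode[a] ^ decode[b]] for a, b in zip(s1, s2))

print(byte_to_dna(0b00011011))        # 'AGCT'
print(dna_xor('AGCT', 'GGGG'))        # 'GATC'
print(dna_to_byte(dna_xor(byte_to_dna(0x5A), byte_to_dna(0x3C))) == (0x5A ^ 0x3C))  # True
```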

3.2. Iterations of Chaotic Systems

For chaotic system (5), we take three groups of parameters $\{a_1, x_{01}\}$, $\{a_2, x_{02}\}$, and $\{a_3, x_{03}\}$ and iterate $N_0 + MN$ times. To avoid the harmful effect of the transient, we discard the first $N_0$ values and obtain three chaotic sequences of length $M \times N$:
$X_1 = \{x_1(i), i = 1, 2, \ldots, M \times N\},\quad X_2 = \{x_2(i), i = 1, 2, \ldots, M \times N\},\quad X_3 = \{x_3(i), i = 1, 2, \ldots, M \times N\}.$
The values of the first chaotic sequence $X_1$ are sorted in ascending order, and the corresponding subscripts are recorded to obtain the ordered subscript sequence $XP_1 = \{xp_1(i), i = 1, 2, \ldots, M \times N\}$. For example, $xp_1(3) = 18$ means that the third smallest value of $X_1$ is located at position 18 of the whole sequence $X_1$.
Similarly, the ordered subscript sequence X P 2 of chaotic sequence X 2 can be obtained.
The third chaotic sequence $X_3$ is transformed into an integer sequence $r$ with values in $[0, 255]$:
$r = \mathrm{mod}(\mathrm{ceil}(x_3 \times 10^{8}), 256).$
The integer sequence $r$ is arranged row by row into a matrix of size $M \times N$, which is denoted as $R$.
For chaotic system (8), two sets of parameters $\{b_1, b_2, b_3, b_4, b_5, b_6, x_{04}\}$ and $\{c_1, c_2, c_3, c_4, c_5, c_6, x_{05}\}$ are taken and iterated $N_0 + MN$ times. Discarding the first $N_0$ values, we obtain two chaotic sequences of length $M \times N$:
$X_4 = \{x_4(i), i = 1, 2, \ldots, M \times N\},\quad X_5 = \{x_5(i), i = 1, 2, \ldots, M \times N\}.$
The chaotic sequences $X_4$ and $X_5$ are transformed into two pseudorandom sequences with values in $[0, 255]$:
$x_4(i) = \mathrm{mod}\!\left(\mathrm{floor}\!\left(\frac{L\,(x_4(i) - \min(x_4))}{\max(x_4) - \min(x_4)}\right),\ 256\right),\quad x_5(i) = \mathrm{mod}\!\left(\mathrm{floor}\!\left(\frac{L\,(x_5(i) - \min(x_5))}{\max(x_5) - \min(x_5)}\right),\ 256\right),$
where $L = 255^2 \times 10^{8}$ is a constant.
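A possible Python sketch of this key-stream preparation (ours; the degree-6 polynomial used for X4 is only one choice satisfying Proposition 2, since the exact assignment of the parameters b_1, ..., b_6 to the polynomial coefficients is our assumption, and all helper names are ours):

```python
import numpy as np

def dis_eps(x, eps):
    n = np.floor(x)
    return eps * np.minimum(x - n, n + 1.0 - x)

def chaotic_sequence(f, x0, eps, length, n0=1000):
    # Iterate x_{k+1} = Dis_eps(f(x_k)); discard the first n0 values (the transient).
    x = x0
    for _ in range(n0):
        x = dis_eps(f(x), eps)
    out = np.empty(length)
    for i in range(length):
        x = dis_eps(f(x), eps)
        out[i] = x
    return out

M, N = 256, 256

# X1 and its ordered subscript sequence XP1 (used later for pixel permutation).
x1 = chaotic_sequence(lambda x: 2.0 * x, 0.2759, 2.0, M * N)
xp1 = np.argsort(x1)

# X3 quantized to integers in [0, 255] and reshaped into the matrix R.
x3 = chaotic_sequence(lambda x: 2.0 * x, 0.28, 2.0, M * N)
R = np.mod(np.ceil(x3 * 1e8), 256).astype(np.uint8).reshape(M, N)

# X4 from a degree-6 polynomial map of type (8), normalized to [0, 255] with L = 255**2 * 1e8.
x4 = chaotic_sequence(lambda x: x**6 + x**5 + x**4 + x**3 + x**2 + 2.0 * x, 0.6871, 2.0, M * N)
L = 255**2 * 1e8
x4_int = np.mod(np.floor(L * (x4 - x4.min()) / (x4.max() - x4.min())), 256).astype(np.uint8)
```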

3.3. Proposed Image Encryption Scheme

Suppose the plain image is $P = (p(i,j))_{M \times N}$, $i = 1, 2, \ldots, M$, $j = 1, 2, \ldots, N$.
Step 1: Convert the image matrix $P$ into a one-dimensional array; the pixel positions of the image $P$ are permuted by the ordered subscript sequence $XP_1$ to obtain the image $P_1$.
Step 2: The image $P_1$ is divided into blocks: $N$ pixels are taken in turn as a group to obtain $M$ subimages $PR_i = \{pr_i(j), j = 1, 2, \ldots, N\}$, $i = 1, 2, \ldots, M$.
Step 3: Combined with the chaotic sequence $X_4$, the pixel values of each subimage $PR_i$ are transformed in the forward direction to obtain the subimage $PRN_i = \{prn_i(j), j = 1, 2, \ldots, N\}$. The specific transformation is:
$prn_i(j) = pr_i(j) \oplus x_4((i-1)N + j) \oplus k_{i-1}, \quad j = 1, 2, \ldots, N, \quad i = 1, 2, \ldots, M,$
$k_i = prn_i(1) \oplus prn_i(2) \oplus \cdots \oplus prn_i(N), \quad i = 1, 2, \ldots, M-1,$
where $\oplus$ represents the XOR operation and $k_0 = \mathrm{mod}\left(\sum_{i=1}^{M}\sum_{j=1}^{N} pr_i(j) + i + j,\ 256\right)$ is used as a key.
Step 4: All pixel values in each subimage $PRN_i$ are circularly shifted to the left by $d \in \{1, 2, \ldots, 8\}$ bits to obtain the subimage $PRB_i = \{prb_i(j), j = 1, 2, \ldots, N\}$. The specific transformation is:
$prb_i(j) = \mathrm{mod}(prn_i(j) \times 2^{d}, 256) + \mathrm{floor}(prn_i(j) / 2^{8-d}).$
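The circular left shift of Step 4 can be written compactly; the sketch below is ours, not the paper's MATLAB code.

```python
import numpy as np

def circular_left_shift(pixels, d):
    # Rotate the 8 bits of each pixel left by d positions, as in Step 4:
    # mod(p * 2**d, 256) + floor(p / 2**(8 - d)).
    p = np.asarray(pixels, dtype=np.uint16)   # widen so the shift does not overflow
    return ((p << d) % 256 + (p >> (8 - d))).astype(np.uint8)

print(circular_left_shift([0b10110001], 3))   # [141] == 0b10001101
```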
Step 5: Combined with the chaotic sequence $X_5$, the pixel values of each subimage $PRB_i$ are transformed in the reverse direction to obtain a new subimage $PRC_i$. The specific transformation is:
$prc_i(j) = prb_i(j) \oplus x_5((i-1)N + j) \oplus l_{i-1}, \quad j = N, N-1, \ldots, 1, \quad i = M, M-1, \ldots, 1,$
$l_{i-2} = prc_i(1) \oplus prc_i(2) \oplus \cdots \oplus prc_i(N), \quad i = M, M-1, \ldots, 2,$
where $l_{M-1} = \mathrm{mod}\left(\sum_{i=1}^{M}\sum_{j=1}^{N} prb_i(j) + i + j,\ 256\right)$ is used as a key.
Step 6: The subimages $PRC_i$ are spliced to obtain the image $P_2$, and the pixel positions of the image $P_2$ are permuted by the ordered subscript sequence $XP_2$. The transformed image $P_3$ is rearranged into an image of size $M \times N$.
Step 7: The image $P_3$ is encoded as a DNA matrix, and the matrix $R$ from Section 3.2 is also encoded as a DNA matrix. A DNA operation is performed on the two DNA matrices, the resulting DNA matrix is decoded into a binary matrix, and finally, it is converted into a decimal matrix, which is the final ciphertext image. The encoding, decoding, and operation rules in this step are determined by the plaintext image and are calculated as follows:
$rb = \mathrm{mod}(\mathrm{floor}(\mathrm{sum}(P) \times 0.68), 8) + 1,\quad rr = \mathrm{mod}(\mathrm{sum}(P) + i + j,\ 8) + 1,\quad ry = \mathrm{mod}(\mathrm{sum}(P), 3),\quad rj = \mathrm{mod}(\mathrm{ceil}(\mathrm{sum}(P)/126), 8) + 1,$
where $\mathrm{sum}(P)$ represents the sum of the pixel values of image $P$, $rb$ is the encoding rule of image $P_3$, $rr$ is the encoding rule of matrix $R$, $ry$ is the operation rule, and $rj$ is the decoding rule.
The encryption process can be described by Figure 9.
The decryption algorithm is the reverse process of the encryption algorithm.

4. Simulation Results and Security Analysis

We used MATLAB 2016a to verify the proposed encryption algorithm on a personal computer with an Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz and 8.00 GB of memory, running Microsoft Windows 10. The test images are Lena (256 × 256), Girl (256 × 256), and Baboon (256 × 256). The system parameters of the chaotic systems are:
$a_1 = 2$, $x_{01} = 0.2759$, $a_2 = 1.5$, $x_{02} = 0.3257$, $a_3 = 2$, $x_{03} = 0.28$, $b_1 = 2$, $b_2 = b_3 = b_4 = b_5 = b_6 = 1$, $x_{04} = 0.6871$, $c_1 = 3$, $c_2 = 2$, $c_3 = c_4 = c_5 = c_6 = 1$, $x_{05} = 0.4179$, $n_0 = 1000$, $d = 3$.
The simulation results are shown in Figure 10.

4.1. Key Space Analysis

The key space measures the ability to resist exhaustive attacks. The key space size needs to be analyzed and calculated in combination with the system parameters involved in encryption, the initial value conditions, and the computational precision. Generally, the more key parameters there are, the greater the sensitivity of the key, the larger the key space, and the more difficult the cipher is to crack. In many other studies, the calculation precision is usually $10^{-14}$; therefore, this paper also sets the calculation precision to $10^{-14}$ so that key spaces of the same scale can be compared.
The key space of this algorithm is $(10^{14})^{20} = 10^{280}$. Since the keys $\{n_0, k_0, l_{M-1}, d, e, f, g, h\}$ are integers, they are not taken into account in this calculation. Therefore, the key space of this algorithm is much larger than the minimum required to resist brute-force attacks and larger than the key spaces in the references below. The comparison with the key spaces of other references is provided in Table 3.

4.2. Key Sensitivity Analysis

A good encryption scheme should be sensitive to the key in the encryption process. Below, the key is slightly disturbed, and NPCR and UACI are used to measure the differences of the images before and after the perturbation.
The NPCR and UACI are expressed as follows:
$\mathrm{NPCR} = \frac{\sum_{i=1}^{W}\sum_{j=1}^{H} D(i,j)}{W \times H} \times 100\%,$
$\mathrm{UACI} = \frac{1}{W \times H}\sum_{i=1}^{W}\sum_{j=1}^{H} \frac{|p_1(i,j) - p_2(i,j)|}{L - 1} \times 100\%,$
$D(i,j) = \begin{cases} 0, & p_1(i,j) = p_2(i,j), \\ 1, & p_1(i,j) \ne p_2(i,j), \end{cases}$
where $W \times H$ is the number of pixels and $L$ is the number of gray levels.
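The two metrics are straightforward to compute; the sketch below (ours) also shows the values expected for two unrelated uniformly random 8-bit images, roughly 99.61% and 33.46%.

```python
import numpy as np

def npcr_uaci(c1, c2, levels=256):
    # NPCR: fraction of differing pixels; UACI: mean absolute difference scaled by (levels - 1).
    c1 = np.asarray(c1, dtype=np.int32)
    c2 = np.asarray(c2, dtype=np.int32)
    npcr = (c1 != c2).mean() * 100.0
    uaci = (np.abs(c1 - c2) / (levels - 1)).mean() * 100.0
    return npcr, uaci

a = np.random.randint(0, 256, (256, 256))
b = np.random.randint(0, 256, (256, 256))
print(npcr_uaci(a, b))   # approximately (99.61, 33.46)
```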
The key sensitivity analysis results of the encryption algorithm are shown in Table 4. The values of NPCR and UACI are very close to the ideal values of 99.609% and 33.464%, respectively, after a minor disturbance of the parameters. Therefore, there are large differences in decrypted images and high key sensitivity.

4.3. Histogram Analysis

The statistical histogram directly reflects the encryption effect by comparing the pixel histogram of the original image with that of the encrypted image. It is generally required that the histogram of an encrypted image be approximately uniform. Figure 11 shows the histogram analysis of the Lena image. The histogram of the encrypted image is flat, which hides the statistical regularities of the pixels well, so the scheme can effectively resist statistical attacks and ciphertext-only attacks.

4.4. Correlation Analysis

The correlation analysis computes the correlation coefficients of adjacent pixels in the original image and in the encrypted image and compares their absolute values to judge how the correlation of adjacent pixels changes after encryption.
In digital images, the gray values of adjacent pixels are often very close, which means that adjacent pixels are strongly correlated; this weakens the security of an encryption scheme. When the absolute value of the correlation coefficient is close to 1, adjacent pixels are considered strongly correlated; when it is close to 0, adjacent pixels are considered weakly correlated or uncorrelated. The correlation coefficient of adjacent pixels is calculated in three directions: horizontal, vertical, and diagonal. Equations (10)–(12) are used to calculate the correlation between adjacent pixels of an image:
$\mathrm{Cov}(x, y) = \frac{1}{n}\sum_{i=1}^{n}[x_i - E(x)][y_i - E(y)],$
$D(x) = \frac{1}{n}\sum_{i=1}^{n}[x_i - E(x)]^2,$
$E(x) = \frac{1}{n}\sum_{i=1}^{n} x_i,$
where $x$ and $y$ are the gray values of two adjacent pixels, and $E(x)$, $D(x)$, and $\mathrm{Cov}(x, y)$ are the expectation, variance, and covariance, respectively; the correlation coefficient is then $r_{xy} = \mathrm{Cov}(x, y)/\sqrt{D(x)D(y)}$. In this paper, we select 2000 pairs of adjacent pixels in Lena's plain image and cipher image.
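A sketch (ours) of the adjacent-pixel correlation test: sample 2000 random pairs in one direction and combine Equations (10)–(12) into the correlation coefficient.

```python
import numpy as np

def adjacent_pixel_correlation(img, direction='horizontal', pairs=2000, seed=0):
    # Sample random pairs of adjacent pixels and return their correlation coefficient,
    # i.e., Cov(x, y) / sqrt(D(x) * D(y)) as in Equations (10)-(12).
    rng = np.random.default_rng(seed)
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    if direction == 'horizontal':
        r = rng.integers(0, h, pairs); c = rng.integers(0, w - 1, pairs)
        x, y = img[r, c], img[r, c + 1]
    elif direction == 'vertical':
        r = rng.integers(0, h - 1, pairs); c = rng.integers(0, w, pairs)
        x, y = img[r, c], img[r + 1, c]
    else:  # diagonal
        r = rng.integers(0, h - 1, pairs); c = rng.integers(0, w - 1, pairs)
        x, y = img[r, c], img[r + 1, c + 1]
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / np.sqrt(x.var() * y.var())
```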
The relevant distributions of adjacent pixels of the Lena image along the horizontal, vertical, and diagonal directions are shown in Figure 12a–c, respectively, and the corresponding distributions of encrypted images are shown in Figure 12d–f. The results show that the correlation between adjacent pixels in ordinary images is greatly reduced. The comparison results with other studies are shown in Table 5.

4.5. Information Entropy Analysis

Information entropy is used to judge the degree of unpredictability, uncertainty, and randomness of an information source. It is expressed as Equation (13):
$H(S) = -\sum_{i=1}^{n} p_i \log_2 p_i,$
where $S = \{x_1, x_2, \ldots, x_n\}$ is an information source, $P$ is the probability distribution of $S$, and the probability of $x_i$ is $p_i$. According to the principle of maximum information entropy, when the probability distribution of the source is uniform, $p_i = \frac{1}{n}$, the maximum information entropy $\log_2 n$ is obtained.
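A short Python sketch (ours) of Equation (13) applied to an 8-bit grayscale image:

```python
import numpy as np

def information_entropy(img, levels=256):
    # Shannon entropy H(S) = -sum p_i * log2(p_i) over the gray-level histogram.
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

# A uniformly random 8-bit image should give a value close to the ideal 8 bits.
print(information_entropy(np.random.randint(0, 256, (256, 256))))
```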
Since the images used in this experiment are 256-level grayscale images, the closer the information entropy is to 8, the better the encryption algorithm performs in this test. The information entropy of the original image is 7.4455, and that of the image encrypted by the proposed algorithm is 7.9994, which is very close to the ideal value of 8. Table 6 compares the entropy with other studies.

4.6. Robustness Analysis

In the process of image transmission, some data may be modified or lost. Therefore, an algorithm should have the ability to resist noise attacks or data loss. A robust encryption algorithm means that most of the useful information of the plain image can still be recovered when such situations occur.
Currently, almost all transmission channels are noisy. When data propagate through a channel, they are subject to various types of noise interference, such as Gaussian noise and salt-and-pepper noise. A robust image encryption algorithm should be immune to noise interference. In practical analysis, a small amount of a certain type of noise is usually added to the encrypted image, and the noisy encrypted image is then decrypted. The smaller the difference between the decrypted image and the original image, the stronger the ability of the algorithm to resist noise attacks.
We added 0.01%, 0.03%, and 0.05% salt and pepper noise. The results are shown in Figure 13, Figure 14 and Figure 15, respectively. These figures show that the decryption algorithm can still restore the original image well, that is, it has a certain ability to resist noise attacks.
During transmission, the encrypted data may be partially modified or lost, so an encryption algorithm should also be immune to data loss. In practical analysis, a small block of the encrypted image is usually removed, and the damaged encrypted image is then decrypted. The smaller the difference between the decrypted image and the original image, the stronger the ability of the algorithm to resist data loss attacks. The 1 × 8, 8 × 8, and 8 × 16 subblocks in the upper left corner of the encrypted image in Figure 10b are removed, and the decryption algorithm is applied to the damaged encrypted images. The decryption results are shown in Figure 16b, Figure 17b, and Figure 18b. Figure 16, Figure 17 and Figure 18 show that the decryption algorithm can still restore the original image well; that is, it has a certain ability to resist data loss.

4.7. Time Complexity Analysis

The time consumed by the encryption algorithm and decryption algorithm provided in this paper can be divided into two stages: (1) preparation stage, that is, the generation of chaotic pseudorandom sequences; and (2) formal encryption and decryption stage. The time of phase (1) in this algorithm is 8.948 s. For phase (2), the time consumed by the encryption algorithm and decryption algorithm is 3.702 s and 3.893 s, respectively. Therefore, the chaotic image encryption algorithm in this paper consumes less time; that is, its time complexity is low.

5. Conclusions

This paper proposes a method to construct a one-dimensional discrete chaotic system and an image encryption scheme based on a uniformly distributed chaotic system. Based on Marotto's theorem, the one-dimensional discrete systems are proven to be chaotic in the sense of Li–Yorke, and the corresponding chaos criterion theorems are proposed. The system can be distributed uniformly, which implies better randomness. We propose an image encryption scheme based on a uniformly distributed discrete chaotic system and DNA encoding. The experimental results demonstrate that our encryption algorithm has a large key space, high key sensitivity, and fast encryption speed and can resist differential attacks and statistical attacks.

Author Contributions

H.Z. carried out numerical simulation and analysis; M.T. and X.W. studied the proof of relevant theories and proposed the image encryption scheme; H.Z. and M.T. wrote the paper; and H.Z. and M.T. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tutueva, A.V.; Nepomuceno, E.G.; Karimov, A.I.; Andreev, V.S.; Butusov, D.N. Adaptive chaotic maps and their application to pseudo-random numbers generation. Chaos Solitons Fractals 2020, 133, 109615. [Google Scholar] [CrossRef]
  2. Wang, L.Y.; Cheng, H. Pseudo-Random Number Generator Based on Logistic Chaotic System. Entropy 2019, 21, 960. [Google Scholar] [CrossRef] [Green Version]
  3. Lambic, D. A new discrete-space chaotic map based on the multiplication of integer numbers and its application in S-box design. Nonlinear Dyn. 2020, 100, 699–711. [Google Scholar] [CrossRef]
  4. Jiang, D.H.; Liu, L.D.; Zhu, L.Y.; Wang, X.Y.; Rong, X.W.; Chai, H.X. Adaptive embedding: A novel meaningful image encryption scheme based on parallel compressive sensing and slant transform. Signal Process. 2021, 188, 108220. [Google Scholar] [CrossRef]
  5. Cheng, G.F.; Wang, C.H.; Chen, H. A Novel Color Image Encryption Algorithm Based on Hyperchaotic System and Permutation-Diffusion Architecture. Int. J. Bifurc. Chaos 2019, 29, 1950115. [Google Scholar] [CrossRef]
  6. Chai, X.L.; Fu, X.L.; Gan, Z.H.; Lu, Y.; Chen, Y.R. A color image cryptosystem based on dynamic DNA encryption and chaos. Signal Process. 2019, 155, 44–62. [Google Scholar] [CrossRef]
  7. Wang, J.; Zhi, X.C.; Chai, X.L.; Lu, Y. Chaos-based image encryption strategy based on random number embedding and DNA-level self-adaptive permutation and diffusion. Multimed. Tools Appl. 2021, 80, 16087–16122. [Google Scholar] [CrossRef]
  8. Wang, X.Y.; Zhao, H.Y.; Hou, Y.T.; Luo, C.; Zhang, Y.Q.; Wang, C.P. Chaotic image encryption algorithm based on pseudo-random bit sequence and DNA plane. Mod. Phys. Lett. B 2019, 33, 1950263. [Google Scholar] [CrossRef]
  9. Zang, H.; Zhao, X.; Wei, X. Construction and application of new high-order polynomial chaotic maps. Nonlinear Dyn. 2022, 107, 1247–1261. [Google Scholar] [CrossRef]
  10. Wei, X.Y.; Zang, H.Y. Construction and complexity analysis of new cubic chaotic maps based on spectral entropy algorithm. J. Intell. Fuzzy Syst. 2019, 37, 4547–4555. [Google Scholar] [CrossRef]
  11. Li, T.Y.; Yorke, J.A. Period Three Implies Chaos. Am. Math. Mon. 1975, 82, 985–992. [Google Scholar] [CrossRef]
  12. Marotto, F.R. Snap-back repellers imply chaos in Rn. J. Math. Anal. Appl. 1978, 63, 199–223. [Google Scholar] [CrossRef] [Green Version]
  13. Chen, G.; Hsu, S.; Zhou, J. Snapback repellers as a cause of chaotic vibration of the wave equation with a van der Pol boundary condition and energy injection at the middle of the span. J. Math. Phys. 1998, 39, 6459–6489. [Google Scholar] [CrossRef] [Green Version]
  14. Zhou, H.L.; Song, E.B. Discrimination of the 3-periodic points of a quadratic polynomial. J. Sichuan Univ. 2009, 46, 561–564. [Google Scholar]
  15. Yang, X.P.; Min, L.Q.; Wang, X. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption. Chaos 2015, 25, 053104. [Google Scholar] [CrossRef] [PubMed]
  16. Chen, G.R.; Lai, D.J. Feedback control of Lyapunov exponent for discrete-time dynamical systems. Int. J. Bifurc. Chaos 1996, 6, 1341–1349. [Google Scholar] [CrossRef]
  17. Yu, S.M.; Lv, J.H.; Chen, G.R. Anti-Control Method of Dynamical Systems and Its Application, 2nd ed.; Science Press: Beijing, China, 2013. [Google Scholar]
  18. Zang, H.Y.; Li, J.; Li, G.D. A One-dimensional Discrete Map Chaos Criterion Theorem with Applications in Pseudo-random Number Generator. J. Electron. Inf. Technol. 2018, 40, 1992–1997. [Google Scholar]
  19. Adleman, L.M. Molecular computation of solutions to combinatorial problems. Science 1994, 266, 1021–1024. [Google Scholar] [CrossRef] [Green Version]
  20. Kang, X.J.; Guo, Z.H. A new color image encryption scheme based on DNA encoding and spatiotemporal chaotic system. Signal Process. Image Commun. 2020, 80, 115670. [Google Scholar]
  21. Liu, Z.T.; Wu, C.X.; Wang, J.; Hu, Y.H. A Color Image Encryption Using Dynamic DNA and 4-D Memristive Hyper-Chaos. IEEE Access 2019, 7, 78367–78378. [Google Scholar] [CrossRef]
  22. Liu, Q.; Liu, L.F. Color Image Encryption Algorithm Based on DNA Coding and Double Chaos System. IEEE Access 2020, 8, 83596–83610. [Google Scholar] [CrossRef]
  23. Song, C.Y.; Qiao, Y.L. A Novel Image Encryption Algorithm Based on DNA Encoding and Spatiotemporal Chaos. Entropy 2015, 17, 6954–6968. [Google Scholar] [CrossRef]
  24. Cavusoglu, U.; Kacar, S.; Pehlivan, I.; Zengin, A. Secure image encryption algorithm design using a novel chaos based S-Box. Chaos Solitons Fractals 2017, 95, 92–101. [Google Scholar] [CrossRef]
  25. Zhang, S.J.; Liu, L.F.; Xiang, H.Y. A Novel Plain-Text Related Image Encryption Algorithm Based on LB Compound Chaotic Map. Mathematics 2021, 9, 2778. [Google Scholar] [CrossRef]
  26. Nkandeu, Y.P.K.; Tiedeu, A. An image encryption algorithm based on substitution technique and chaos mixing. Multimed. Tools Appl. 2019, 78, 10013–10034. [Google Scholar] [CrossRef]
Figure 1. Image of function D i s ε ( x ) .
Figure 2. Bifurcation diagrams and Lyapunov exponent spectra of system (5). Let ε = 2 : (a) bifurcation diagram of a and (b) Lyapunov exponent spectrum of a. Let a = 1 : (c) bifurcation diagram of ε and (d) Lyapunov exponent spectrum of ε .
Figure 3. System (7): (a) bifurcation diagram of a and (b) Lyapunov exponent spectrum of a .
Figure 4. System (9): (a) bifurcation diagram of a and (b) Lyapunov exponent spectrum of a .
Figure 5. The evolution of the state variable: (a) system (5) and (b) system (7).
Figure 6. System (5): (a) autocorrelation function and (b) cross-correlation function. System (7): (c) autocorrelation function and (d) cross-correlation function.
Figure 7. Histogram of system (5) with a = 1 : (a) ε = 3 ; (b) ε = 3.3 ; (c) ε = 3.8 ; and (d) ε = 3.99 .
Figure 8. Histogram of system (7) where ε = 2 : (a) a = 1 ; (b) a = 1.5 ; (c) a = 1.8 ; and (d) a = 2 .
Figure 9. Image encryption scheme.
Figure 10. Image encryption and decryption results: (a,d,g) original images; (b,e,h) encrypted images; and (c,f,i) decrypted images.
Figure 11. The result of Lena histogram analysis: (a) original image; (b) encrypted image; and (c) decrypted image.
Figure 12. Correlation analysis of the Lena image. (a) Horizontal direction; (b) vertical direction; and (c) diagonal direction. Encrypted image of (d) horizontal direction; (e) vertical direction; and (f) diagonal direction.
Figure 13. Salt and pepper noise level of 0.01%: (a) encrypted image and (b) decrypted image.
Figure 14. Salt and pepper noise level of 0.03%: (a) encrypted image and (b) decrypted image.
Figure 15. Salt and pepper noise level of 0.05%: (a) encrypted image and (b) decrypted image.
Figure 16. The removal of a 1 × 8 subblock: (a) encrypted image and (b) decrypted image.
Figure 17. The removal of an 8 × 8 sub block: (a) encrypted image and (b) decrypted image.
Figure 18. The removal of an 8 × 16 sub block: (a) encrypted image and (b) decrypted image.
Table 1. DNA encoding rules.

Binary   Rule 1   Rule 2   Rule 3   Rule 4   Rule 5   Rule 6   Rule 7   Rule 8
00       A        A        T        T        C        C        G        G
01       G        C        C        G        A        T        A        T
10       C        G        G        C        T        A        T        A
11       T        T        A        A        G        G        C        C
Table 2. DNA computing rules.

+     A   G   C   T
A     A   G   C   T
G     G   C   T   A
C     C   T   A   G
T     T   A   G   C

−     A   G   C   T
A     A   T   C   G
G     G   A   T   C
C     C   G   A   T
T     T   C   G   A

XOR   A   G   C   T
A     A   G   C   T
G     G   A   T   C
C     C   T   A   G
T     T   C   G   A
Table 3. Key space analysis.

            Ref. [23]   Ref. [24]   Ref. [25]   Ref. [26]   Proposed
Key space   10^93       10^98       10^84       10^142      10^280
Table 4. Key sensitivity analysis.

Initial Parameter   Minor Disturbance   NPCR     UACI
a_1                 +10^-14             99.63%   33.51%
x_01                +10^-14             99.61%   33.45%
a_2                 +10^-14             99.62%   33.49%
x_02                +10^-14             99.60%   33.52%
a_3                 +10^-14             99.60%   33.45%
x_03                +10^-14             99.62%   33.46%
b_1                 +10^-14             99.59%   33.47%
b_2                 +10^-14             99.62%   33.47%
b_3                 +10^-14             99.60%   33.44%
b_4                 +10^-14             99.63%   33.47%
b_5                 +10^-14             99.61%   33.45%
b_6                 +10^-14             99.61%   33.38%
x_04                +10^-14             99.61%   33.40%
c_1                 +10^-14             99.58%   33.47%
c_2                 +10^-14             99.60%   33.39%
c_3                 +10^-14             99.62%   33.45%
c_4                 +10^-14             99.59%   33.45%
c_5                 +10^-14             99.62%   33.44%
c_6                 +10^-14             99.60%   33.45%
x_05                +10^-14             99.61%   33.48%
Table 5. Correlation analysis.

Correlation   Horizontal   Vertical   Diagonal
Lena          0.9849       0.9704     0.9611
Ref. [23]     0.0007       0.0015     0.0014
Ref. [24]     -            -          -
Ref. [25]     −0.0034      −0.0079    0.0010
Ref. [26]     0.001        −0.014     −0.006
Proposed      0.0072       0.0055     −0.0008
Table 6. Information entropy analysis.

                      Ref. [23]   Ref. [24]   Ref. [25]   Ref. [26]   Proposed
Information entropy   7.9967      7.95667     7.9977      7.9994      7.9994
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
