Article

Novel Block Sorting and Symbol Prediction Algorithm for PDE-Based Lossless Image Compression: A Comparative Study with JPEG and JPEG 2000

by Časlav Livada 1,*, Tomislav Horvat 2 and Alfonzo Baumgartner 1
1 Chair of Visual Computing, Faculty of Electrical Engineering, Computer Science and Information Technology Osijek, Josip Juraj Strossmayer University of Osijek, Kneza Trpimira 2B, 31000 Osijek, Croatia
2 Department of Electrical Engineering, University North, 104. Brigade 3, 42000 Varaždin, Croatia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(5), 3152; https://doi.org/10.3390/app13053152
Submission received: 16 January 2023 / Revised: 22 February 2023 / Accepted: 24 February 2023 / Published: 28 February 2023
(This article belongs to the Special Issue Advances in Digital Image Processing)

Abstract:
In this paper, we present a novel compression method based on partial differential equations, complemented by block sorting and symbol prediction. Block sorting is performed using the Burrows–Wheeler transform, while symbol prediction is performed using the context mixing method. With these transformations, the range coder is used as a lossless compression method. The objective and subjective quality evaluation of the reconstructed image illustrates the efficiency of this new compression method, which is compared with the current standards, JPEG and JPEG 2000.

1. Introduction

Partial differential equation (PDE)-based image compression techniques use PDEs to model the image data and convert them into a set of parameters that can be efficiently encoded. There are several PDE-based image compression techniques that differ in the way they model and transform the image data. PDE-based image enhancement methods were invented to restore missing image regions by uniformly transferring information from the known surrounding regions. The filling effect has also become the main feature of PDE-based inpainting methods such as [1,2,3,4]. The main idea is to consider the image data as Dirichlet boundary conditions and interpolate the data in the inpainting regions by solving the appropriate boundary value problems.
Why use PDE-based image compression? PDE-based image compression methods offer several advantages over traditional compression techniques such as JPEG and MPEG. Firstly, they offer higher compression ratios while maintaining better image quality. This is achieved by modeling the image as a set of partial differential equations and transforming it into a set of parameters that can be efficiently encoded. The resulting compressed image contains much less data, making it easier to store and transmit [5]. Secondly, PDE-based compression methods are more robust to noise and distortion. Unlike traditional compression techniques, PDE-based methods rely on modeling the image using differential equations, which can adapt to changes in the image data, making them more tolerant to noise and distortion. This makes them particularly useful in scenarios where the image quality needs to be maintained despite noise and other distortions [6,7]. Thirdly, PDE-based compression methods are versatile and can be adapted to various applications. For instance, they can be used for compressing medical images, satellite images, and video streams, among others. This makes them a valuable tool in various domains, including medical imaging, remote sensing, and multimedia [8,9]. Overall, PDE-based image compression methods offer a compelling alternative to traditional compression techniques. They offer higher compression ratios, better image quality, and are more robust to noise and distortion. Moreover, they are versatile and can be adapted to different applications.
Existing PDE-based compression algorithms. The Embedded Zero-Tree Wavelet (EZW) algorithm is a compression technique that uses the wavelet transform to decompose an image into sub-bands, and then applies a PDE-based thresholding to produce a sparse representation of the image. The resulting sparse representation is then encoded using entropy coding. EZW is widely used in image and video compression [10]. The SPIHT (Set Partitioning In Hierarchical Trees) algorithm is a wavelet-based compression algorithm that uses a PDE-based algorithm to sort and partition the wavelet coefficients in a hierarchical tree structure. This tree structure is then used to encode the wavelet coefficients using binary arithmetic coding. SPIHT has been used in various applications, such as satellite imagery and medical imaging [11].
The Geometric Image Compression (GIC) algorithm is a PDE-based compression algorithm that uses a geometric representation of images based on a set of partial differential equations. GIC models the image data as a series of curves, areas, and volumes, and then applies PDE-based compression techniques to the curves, areas, and volumes to create a compact representation of the image. GIC has proven to be very powerful in compressing geometric data such as 3D meshes and point clouds [12]. The Variational Image Compression (VIC) algorithm is a PDE-based compression algorithm that uses a variational approach to model the image data. VIC solves an optimization problem that minimizes an energy function containing both fidelity to the original image and a regularization term that promotes the smoothness of the compressed image. The resulting compressed image is then encoded using entropy coding. VIC has shown good performance in compressing natural and medical images [13].
Variational and PDE-based interpolation and inpainting techniques have been used to interpolate scattered data. Lower-order PDEs often result in singularities at isolated interpolation points on images, while higher-order PDEs provide smoother solutions, but the violation of an extremum principle can result in undesirable overshoots and undershoots [14]. Recently, PDE-based image compression techniques have been developed using PDE-based interpolation of scattered data [15,16,17,18,19,20]. The idea is to keep only a small number of pixels and reconstruct the remaining data using PDE-based interpolation. There are many subdivision strategies, in particular methods based on quadtree decompositions [21,22], and adaptive triangulation ideas can be found in [23,24,25]. In R-EED [18], the adaptive triangulation from [17] is replaced by a subdivision into a rectangular structure with various modifications.
Reliability and efficiency. PDE-based image compression techniques have been proposed as a means of reducing the size of image data while preserving image quality. However, these methods have both strengths and limitations that affect their reliability and efficiency. One of the limitations of PDE-based compression techniques is their high computational complexity, which can make them computationally intensive and time-consuming, especially for large images or real-time applications [26]. Another limitation of PDE-based compression techniques is their sensitivity to image content. PDE-based methods rely on modeling the image data using PDEs, which may not always accurately capture the structure and content of the image. In some cases, PDE-based methods may not perform well for certain types of images, such as those with sharp edges or high-frequency content [27].
Most PDE-based compression techniques are lossy, which means that they discard some information from the original image in order to achieve compression. While lossy compression can be effective in reducing the size of the image data, it can also result in perceptual distortions or artifacts in the compressed image, which can affect its quality and reliability [28]. In addition, PDE-based compression techniques may not always achieve the same level of compression performance as other state-of-the-art image compression methods, such as those based on deep learning or neural networks. In some cases, PDE-based methods may not be able to achieve the same level of compression while preserving the same level of image quality [7,29].
In conclusion, while PDE-based image compression techniques have shown promise in reducing the size of image data, their reliability and efficiency depend on various factors, such as the specific method used, the nature of the image data being compressed, and the compression requirements of the application. Further research is needed to improve the reliability and efficiency of PDE-based compression methods, and to address their limitations and challenges.
Our contribution. The aim of this paper is to investigate the compression capabilities of unaided PDE-based compression methods and PDE-based compression methods using data transformations and symbol prediction. The compression methods and ratios for grayscale compression and binary tree structure compression are evaluated in terms of the extent to which the quality of a reconstructed image depends on the choice of compression methods. The improvement of the compression algorithms is performed by applying data transforms to the input data stream. The data transformations used are delta coding, context tree weighting, the Burrows–Wheeler transform, prediction by partial matching, and dynamic Markov coding.
Related work. PDE-based compression was evaluated with range coding and the Burrows–Wheeler transform for grey values only [30], and it was concluded that EED performed better with this combination. In [31], further analysis of the range coder was performed in combination with context tree weighting, prediction by partial matching, and dynamic Markov coding.
Organization of the paper. We begin with a brief introduction to PDE-based inpainting in Section 2. Section 3 discusses possible encoders for grey-level compression. In Section 4, we describe data transformations applied to grey values to increase their effectiveness. In Section 5, we introduce the context-mixing method as a means to improve the compression. In Section 6, we perform a detailed objective and subjective quality analysis of four images. Section 7 concludes our work with a summary and an outlook on future work.

2. Partial Differential Equations in Image Compression

Partial differential equations are mainly used for image preprocessing [32,33,34] or as a tool for the postprocessing of image errors arising during coding. Partial differential equations are particularly useful for data compression and interpolation of scattered data, which was enabled by the improvement of binary tree coding by Distasi et al. [23].

2.1. PDE-Based Interpolation

The main goal of PDE-based interpolation is to reconstruct the original image from the known pixels without losing their primary information. With PDE-based interpolation, the image can be reconstructed from sparse data with relative accuracy.
The concept of diffusion is known mainly from the physical context. It is a process that compensates for concentration differences without creating or destroying mass. This idea can be applied to image processing tasks, and we will formulate it in a continuous framework.
Let $\Omega \subset \mathbb{R}^n$ be an $n$-dimensional image domain. An unknown scalar-valued function $v : \Omega \to \mathbb{R}$ must be recovered, of which only its values on a subset $\Omega_1 \subset \Omega$ are known. The goal is to find an interpolating function $u : \Omega \to \mathbb{R}$ that is smooth and close to $v$ in $\Omega \setminus \Omega_1$ and identical to $v$ in $\Omega_1$.
This problem can be embedded in an evolution setting with an evolution parameter $t \ge 0$. Its solution $u(x, t)$ yields the desired interpolating function as its steady state ($t \to \infty$). The evolution is initialized with a function $f : \Omega \to \mathbb{R}$ that is identical to $v$ on $\Omega_1$ and set to an arbitrary value on $\Omega \setminus \Omega_1$:
$$f(x) := \begin{cases} v(x) & \text{if } x \in \Omega_1, \\ 0 & \text{else.} \end{cases}$$
The evolution considered is
$$\partial_t u = (1 - c(x))\, L u - c(x)\, (u - f)$$
with $f$ as initial value,
$$u(x, 0) = f(x),$$
and reflecting (homogeneous Neumann) boundary conditions on the image boundary $\partial \Omega$. The function $c : \Omega \to \mathbb{R}$ is the characteristic function of $\Omega_1$, i.e.,
$$c(x) := \begin{cases} 1 & \text{if } x \in \Omega_1, \\ 0 & \text{else,} \end{cases}$$
and $L$ is some elliptic differential operator. The idea is to solve the steady state equation
$$(1 - c(x))\, L u - c(x)\, (u - f) = 0$$
with reflecting boundary conditions. In $\Omega_1$, $c(x) = 1$, such that the interpolation condition $u(x) = f(x) = v(x)$ is fulfilled. In $\Omega \setminus \Omega_1$, it follows from $c(x) = 0$ that the solution has to satisfy $L u = 0$. This elliptic PDE can be regarded as the steady state of the evolution equation
$$\partial_t u = L u$$
with the Dirichlet boundary conditions provided by the interpolation data on $\partial \Omega_1$.
Specific Smoothing Operators. There are many possibilities for the elliptic differential operator $L$. The simplest and best studied uses the Laplace operator $L u := \Delta u$, which leads to linear isotropic diffusion [35]:
$$\partial_t u = \Delta u,$$
also called homogeneous diffusion (HD).
A prototype for a higher-order differential operator is the biharmonic operator $L u := -\Delta^2 u$, providing the biharmonic smoothing (BS) evolution
$$\partial_t u = -\Delta^2 u.$$
Using it for interpolation comes down to thin plate spline interpolation [36], a rotationally invariant multidimensional generalization of cubic spline interpolation.
The higher-order nonlinear diffusion considered in this paper is related to the Laplacian-based differential operator $L u := -\Delta ( g(|\Delta u|^2)\, \Delta u )$ proposed by [37]. The fourth-order nonlinear diffusion equation is
$$\partial_t u = -\Delta ( g(|\Delta u|^2)\, \Delta u ),$$
where the diffusivity function $g$ is
$$g(|\Delta u|^2) = \frac{1}{\sqrt{1 + |\Delta u|^2 / \lambda^2}}$$
with some contrast parameter $\lambda > 0$. Since the highest derivative order is 4, it will be denoted as fourth-order Charbonnier diffusion (4ChD).
Finally, the nonlinear anisotropic diffusion considers $L u := \operatorname{div} ( g(\nabla u_\sigma \nabla u_\sigma^T)\, \nabla u )$, namely edge-enhancing diffusion (EED) [38] with
$$\partial_t u = \operatorname{div} ( g(\nabla u_\sigma \nabla u_\sigma^T)\, \nabla u ),$$
where the diffusivity function $g$ is the Charbonnier diffusivity [39]
$$g(|\nabla u|^2) = \frac{1}{\sqrt{1 + |\nabla u|^2 / \lambda^2}}.$$
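To make the interpolation idea concrete, a minimal sketch of the simplest case, homogeneous diffusion inpainting with an explicit finite-difference scheme, is given below. The grid handling, the time step `tau`, and the iteration count are our own illustrative choices, not the implementation used in the paper.

```python
import numpy as np

def hd_inpaint(f, mask, tau=0.2, iters=2000):
    """Homogeneous diffusion inpainting: iterate d_t u = Laplace(u)
    on the unknown pixels while keeping the known pixels fixed.

    f    : 2-D array holding the known grey values where mask is True
    mask : boolean array, True on the known set Omega_1
    """
    u = f.astype(float).copy()
    for _ in range(iters):
        # reflecting (homogeneous Neumann) boundaries via edge padding
        p = np.pad(u, 1, mode="edge")
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)
        u = u + tau * lap          # explicit diffusion step
        u[mask] = f[mask]          # re-impose the data on Omega_1
    return u
```

With `tau <= 0.25` the explicit scheme is stable; the known pixels act as Dirichlet data, and the remaining pixels relax toward the steady state $L u = 0$.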

2.2. Binary Tree Triangular Coding

In the B-Tree Triangular Coding Scheme (BTTC), the image is decomposed into a large number of triangular areas so that they can be reconstructed to a satisfactory quality using vertex interpolation. The data obtained during triangulation is stored in a binary tree structure.
This process is shown in Figure 1, where an initial image, Figure 1a, is approximated by its four vertices. The interpolation is performed using the information about the location of the vertices and their grey values. If the original image is denoted by $f_{i,j}$ and its approximation by $u_{i,j}$, then an error $e_{i,j}$ can be defined as $e_{i,j} := |u_{i,j} - f_{i,j}|$. If $e_{i,j} \le \varepsilon$ holds for every image element $(i, j)$, with a given error threshold parameter $\varepsilon > 0$, the representation with triangles is considered sufficiently good.
If the above inequality is not satisfied, the triangle containing $(i, j)$ is divided into two similar triangles along the altitude to its hypotenuse, Figure 1d. This is repeated until the given threshold for triangle formation is reached.
During this process, two images are generated. One is the mask image containing the coordinates of the vertices, and the other is a sparse image containing the information of the grey values in the same vertices.

2.3. Binary Tree Structure

During the triangulation process, a binary tree structure is formed in which each triangle that is divided further is represented by a node, while the triangles that are no longer divided are represented by leaves. A triangle is always divided into two subordinate triangles. To save this tree structure, a preorder traversal is performed, and a 1 is stored for each node and a 0 for each leaf. Additional space saving is achieved by storing two numbers: the minimum tree depth and the maximum tree depth. The minimum tree depth is the depth up to which all nodes divide (all values are 1s), and the maximum tree depth is the depth above which no node divides any more (all values are 0s).
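The 1/0 preorder storage can be sketched in a few lines (a minimal illustration with a hypothetical `Node` class, separate from the triangle data):

```python
class Node:
    """Hypothetical tree node; a leaf has no children."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def preorder_bits(node):
    """Store a 1 for every inner node and a 0 for every leaf,
    visiting the tree in preorder."""
    if node.left is None and node.right is None:
        return "0"
    return "1" + preorder_bits(node.left) + preorder_bits(node.right)
```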
The process of creating a binary tree structure is shown in Figure 2. In Figure 2a, the process is shown for a square image, and Figure 2b shows the resulting binary tree. First, the image is divided by its main diagonal, then the process is repeated for the newly created triangles. To compress the grey values in all vertices, a zigzag traversal of the sparse image is performed and the values are stored in the grey value stream. The pseudocode is given below:
Main algorithm (Image is a grayscale image):
Step 1. roottriangle = new Triangle(0, 0, Image.width, Image.height)
Step 2. root = new Node(roottriangle, '')
Step 3. CreateChildNodes(root)
Step 4. EncodeImage(root)
Recursive procedure CreateChildNodes(node):
Step 1. Let t = node.triangle
Step 2. midx = t.x + t.width/2, midy = t.y + t.height/2
Step 3. topleft = new Triangle(t.x, t.y, midx - t.x, midy - t.y)
Step 4. botright = new Triangle(midx, midy, t.x + t.width - midx, t.y + t.height - midy)
Step 5. tlaverage = ComputeAverageColorValue(topleft)
Step 6. braverage = ComputeAverageColorValue(botright)
Step 7. If tlaverage ≠ braverage then
      node.left = new Node(topleft, node.code + '0')
      node.right = new Node(botright, node.code + '1')
      CreateChildNodes(node.left)
      CreateChildNodes(node.right)
Procedure EncodeImage(node):
Step 1. If node = NULL return
Step 2. Print node.code
Step 3. EncodeImage(node.left)
Step 4. EncodeImage(node.right)
Finally, due to the high similarity of the tree structure, the data stream is compressed using Huffman coding [40].
The final compressed file format consists of the following:
  • Image height and width (4 bytes);
  • Minimum and maximum binary tree depth (2 bytes);
  • Binary tree structure in binary form (1 bit for every node);
  • Huffman coded gray values:
    First gray value in the stream (1 byte);
    Minimum and maximum Huffman binary tree depth (2 bytes);
    Huffman binary tree binary characters (1 bit for every node);
    Gray values coded with Huffman coding.
We have further improved this encoding by a lossy requantization step that reduces the number of grey values in the original image from 256 to 64.

Binary Tree Decoding and Interpolation

Decompression of the encoded image is performed in two steps. In the first step, a reconstruction of the mask image is performed, and the stored grey values are placed in the appropriate locations so that the sparse image can be created. For the reconstruction of the mask vertices, a tree is created in the same order as it was saved. The second step is the image interpolation, where the vertex mask is used as the interpolation mask. In their BTTC scheme, Distasi et al. [23] used linear interpolation. In this study, homogeneous diffusion, biharmonic and triharmonic smoothing, absolutely minimizing Lipschitz extension (AMLE), Charbonnier diffusion, and edge-enhancing diffusion are tested.
To check the quality of the interpolation methods, two quantitative error measures were used: the average absolute error and the mean square error between the decoded image $u_{i,j}$ and the original image $v_{i,j}$.
Average absolute error (AAE):
$$\mathrm{AAE}(u, v) := \frac{1}{nm} \sum_{i,j} |u_{i,j} - v_{i,j}|,$$
where $m$ denotes the image height and $n$ the image width. A smaller AAE value represents a smaller deformation of the decoded image compared with the original one.
Mean square error (MSE):
$$\mathrm{MSE}(u, v) := \frac{1}{nm} \sum_{i,j} (u_{i,j} - v_{i,j})^2.$$
The squaring in the MSE dampens small differences between two image elements but emphasizes big ones. A smaller MSE value represents a smaller error.
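Both measures follow directly from their definitions; a minimal NumPy sketch (the function names are our own):

```python
import numpy as np

def aae(u, v):
    """Average absolute error over all n*m image elements."""
    return float(np.mean(np.abs(u.astype(float) - v.astype(float))))

def mse(u, v):
    """Mean square error over all n*m image elements."""
    return float(np.mean((u.astype(float) - v.astype(float)) ** 2))
```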
The different interpolation types were applied to the test image horse, shown in Figure 1a; the results are shown in Table 1.
According to the results shown in Table 1, the best results were achieved by using the edge-enhancing diffusion; therefore, this compression algorithm will be referred to as EEDC—Edge-Enhancing Diffusion Compression.

3. Gray Value Compression

To increase the compression ratio of the EEDC, a first possible approach is grey value compression. A randomly chosen portion of the grey values serves as an example of the data to be compressed and is shown in Figure 3. This data stream is generated by the binary triangular encoding method described above.
When looking at the data stream in Figure 3, the regularity, repetition, and redundancy stand out, i.e., all the phenomena that compression methods use to increase the compression ratio. The compression methods described and used in this article are: Huffman coding, arithmetic coding, range coding, and LZ coding family.

3.1. Huffman Coding

Huffman encoding is a lossless variable-length prefix encoding. This greedy algorithm considers the occurrence of each symbol in an optimal way, i.e., the characters that occur more frequently receive shorter codewords, while the characters that occur less frequently receive longer codewords [41]. In this way, Huffman coding reduces the number of bits needed to represent a set of symbols.
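A minimal sketch of this construction (our own illustration, not the encoder used in the paper) builds the codewords by repeatedly merging the two least frequent entries:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix code: frequent symbols get shorter codewords."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate one-symbol stream
        return {symbol: "0" for symbol in freq}
    # heap entries: (frequency, tie-breaker, {symbol: partial codeword})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)       # two least frequent subtrees
        fb, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (fa + fb, tie, merged))
        tie += 1
    return heap[0][2]
```

The resulting code is prefix-free, so the concatenated codewords can be decoded without separators.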
In this subsection, EEDC compression without modification of the grey values is compared with the JPEG and JPEG 2000 compression methods at five compression ratios (0.8, 0.4, 0.2, 0.1, and 0.05 bpp). The compression analysis is performed for the previously mentioned Figure 1a.
The smaller the value of the AAE parameter, the better the quality of the reconstructed image. At a compression ratio of 0.8 bpp, as we can see from Table 2, the JPEG and JPEG 2000 compression methods are better than EEDC. At 0.4 bpp, JPEG 2000 achieves the best result, followed by EEDC, while JPEG has the worst result. At 0.2 bpp, JPEG 2000 is still better than EEDC, but, at 0.1 and 0.05 bpp, EEDC achieves the best quality of the reconstructed image. Figure 4 shows a graphical relationship of AAE between the above compression methods at different compression ratios.

3.2. Arithmetic Coding

Arithmetic coding, similar to Huffman coding, belongs to the group of entropy coders; it assigns codes to symbols so that the code length corresponds to the probability of the occurrence of symbols, while other entropy coders decompose the input data stream into constituents, i.e., symbols, and replace each symbol with a codeword. Arithmetic coding encodes the entire message into a single number $n$ with $n \in [0, 1)$ [42]. Instead of the Huffman coding used in the original EEDC compression, arithmetic coding is used, and the encoder is called EEDC-Arith. An equivalent analysis is performed as in the previous subsection. The results shown in Figure 5 were obtained by applying the new compression algorithm to the grey value stream.
From the graphical representation of the results in Figure 5, it can be concluded that the introduction of arithmetic coding did not yield better results, so this compression method is discarded.
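The interval narrowing at the heart of arithmetic coding can be illustrated as follows (a toy sketch with exact rationals and a fixed symbol model; a practical coder renormalizes the interval and emits bits incrementally):

```python
from fractions import Fraction

def arithmetic_interval(message, probs):
    """Narrow the interval [0, 1) once per symbol; any number inside
    the final interval identifies the whole message."""
    cumulative, c = {}, Fraction(0)
    for symbol, p in probs.items():          # cumulative lower bounds
        cumulative[symbol] = c
        c += p
    low, width = Fraction(0), Fraction(1)
    for symbol in message:
        low += width * cumulative[symbol]    # shift into the sub-interval
        width *= probs[symbol]               # and shrink it
    return low, low + width
```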

3.3. Range Coding

Range coding also belongs to the family of entropy coders and is very similar to arithmetic coding, with the main difference being the choice of the interval within which to search for a number that can represent a range of data, as in [43]. In arithmetic coding, the interval within which a codeword is searched is $[0, 1)$, while in range coding the interval is $[0, N)$, where $N$ is an arbitrarily chosen upper bound.
Looking at the results presented in Figure 6, we can see that the quality of the reconstructed image has not improved, i.e., the direct indicator—AAE—is not lower at any compression ratio. The EEDC with Huffman coding for gray value compression is better than the EEDC with range coding.

3.4. LZ-Family Coding

In this subsection, a family of compression algorithms based on the work of Lempel and Ziv [44,45] is analyzed. These algorithms take a different approach to symbol compression. Instead of assigning codewords to known symbols a priori, the codewords are assigned to recurring strings in the input data stream. The length of the recurring strings can adopt values from 1 to a certain constant value. There are three basic Lempel–Ziv algorithms: LZ77, which was described by Lempel and Ziv in 1977 [44]; LZ78, which was described in 1978 [45]; and LZW, which was described by Welch in 1984 [46]. In addition to the three main algorithms, there are numerous other versions, but we will not discuss them in this article.

3.4.1. LZ77

The LZ77 algorithm is also known as the sliding window compression algorithm. This encoding algorithm holds $n - L_s$ symbols ($L_s$ is the maximum string length to be compressed, and $n$ is the size of the input buffer) in order to find a substring within those $n - L_s$ symbols that corresponds to the prefix of the input stream to be encoded. The found prefix is encoded with the position of the substring, its length, and the first symbol following the matched prefix. The length of the prefix can vary from 0 to $L_s$, and the greater it is, the greater the coding efficiency.
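A minimal sketch of this scheme (our own illustration; window and match-length limits are arbitrary) emits the classic (offset, length, next symbol) triples:

```python
def lz77_encode(data, window=32, max_len=15):
    """Emit (offset, length, next_symbol) triples over a sliding window."""
    tokens, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # matches may run into the lookahead region (j + length < i + length)
            while (length < max_len and i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        tokens.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

def lz77_decode(tokens):
    out = []
    for off, length, nxt in tokens:
        for _ in range(length):
            out.append(out[-off])   # copies may overlap the match itself
        out.append(nxt)
    return "".join(out)
```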

3.4.2. LZ78

The LZ78 coding algorithm is also a dictionary-based coding algorithm. Instead of searching for the largest substring within a constant number of symbols as the LZ77 algorithm does, the LZ78 algorithm searches for the largest substring among all symbols that have appeared so far and that represent the prefix to be compressed. In the case of an infinitely large input string, the location of the largest substring is unbounded, which complicates the encoding. To avoid this situation, the input string is divided into large blocks of length $n$ so that the largest strings can only begin at certain locations and the location determines the length of the substring.

3.4.3. LZW Coding Algorithm

LZW uses the same coding principles as LZ78 but features technical improvements. In the LZW algorithm, when incrementally parsing the input stream, each word begins with the extension character of the previously parsed word. LZW uses fixed-length codewords, while LZ78 uses variable-length codewords. Welch suggests 12-bit codewords in his article [46]. Due to the change in incremental parsing, the codeword contains only the index of the corresponding word in the dictionary, so the output stream consists of dictionary indices alone, without explicit characters.
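The compression side of LZW can be sketched as follows (a minimal illustration with a byte-valued starting dictionary; a real coder would pack the indices into fixed-width, e.g., 12-bit, codewords):

```python
def lzw_compress(data):
    """LZW: dictionary primed with all single characters; the output is
    a list of dictionary indices, with no explicit characters."""
    dictionary = {chr(i): i for i in range(256)}
    w, output = "", []
    for ch in data:
        wc = w + ch
        if wc in dictionary:
            w = wc                            # extend the current match
        else:
            output.append(dictionary[w])      # emit index of longest match
            dictionary[wc] = len(dictionary)  # learn the new string
            w = ch
    if w:
        output.append(dictionary[w])
    return output
```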

3.4.4. Mutual Comparison of the LZ-Family Coders

In this subsection, LZ77, LZ78, and LZW are compared with each other, and the procedure that achieves the best results is then compared with the original EEDC. The results of the mutual comparison in terms of coding efficiency are shown in Table 3.
From the results in Table 3, we can conclude that the best algorithm is LZ77. Although algorithms LZ78 and LZW are improvements of algorithm LZ77, they showed worse results in this example because their disadvantage is that they do not work well on small files. They only show their full potential with files that contain large amounts of data. At the beginning of the encoding process, the initialization of the dictionary occupies a lot of free memory due to the large amount of different data. As new entries are added to the dictionary, the free space becomes smaller and smaller. For comparison with the original EEDC algorithm, only LZ77 is used.
Although the LZ77 algorithm is the best in its family, according to Figure 7, it performs worse than the unmodified EEDC algorithm with Huffman coding. Therefore, the LZ77 algorithm is also rejected because it did not improve the results.

3.5. Comparative Analysis of the Described Coders

In this subsection, all the described coders are compared at five compression rates (0.8, 0.4, 0.2, 0.1, and 0.05 bpp) on the test image horse. The cumulative results are shown in Table 4.
From careful examination of Table 4, it can be concluded that none of the described changes to the grey level compression improved the quality of the reconstructed image. Changing the compression algorithm alone does not produce better results, so a modification of the input data stream is required.

4. Data Transformation Impact on Gray Values Compression

To increase the efficiency of compression, it is necessary to transform an input data stream, as shown in Figure 3. The methods for transforming input data streams described in this article are:
  • Delta coding;
  • CTW—Context Tree Weighting;
  • BWT—Burrows–Wheeler Transformation;
  • PPM—Prediction by Partial Matching;
  • DMC—Dynamic Markov Compression.

4.1. Delta Coding

Delta coding is a data stream transformation in which symbols are stored as the difference between the current character and the previous character [47]. Fewer bits are needed to represent the differences between successive characters than the characters themselves. The histogram of the image horse can be seen in Figure 8.
After applying delta coding, the resulting histogram can be seen in Figure 9.
Analysis of Figure 9 shows that this frequency distribution is better suited for entropy coders because the differences in the frequencies of occurrence of the individual elements are greater. For example, in Figure 9, there are about 250 elements with value 0 and about 10 elements with value 100. This difference can be exploited by Huffman coding to increase the effectiveness of the compression.
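The transformation itself is a one-liner in each direction; a minimal sketch (our own illustration, with the first value stored relative to an implicit 0):

```python
def delta_encode(values):
    """Replace each value by its difference to the previous one."""
    prev, out = 0, []
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas):
    """Invert delta coding by accumulating the differences."""
    prev, out = 0, []
    for d in deltas:
        prev += d
        out.append(prev)
    return out
```

On slowly varying grey values, the deltas cluster near zero, which is exactly the skewed distribution an entropy coder can exploit.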

4.2. Context Tree Weighting

The context tree weighting method was first described by Willems, Shtarkov, and Tjalkens in their 1995 article [48]. Context Tree Weighting (CTW for short) is an effective implementation of weighting the distribution of codes. The main components of the CTW codec are the CTW model estimator and the entropy encoder. The CTW model estimator is responsible for modeling the input stream, i.e., evaluating the probability of the next symbol occurring, while the entropy encoder uses the predicted probabilities to compress the input stream.
A very important part of the CTW algorithm is the context tree, which is dynamically created during encoding and decoding. The context is defined as all previous symbols of the input stream being encoded. Each context that occurs is stored as a path in the context tree. The encoding process consists of four stages:
  • Searching for a path in the context tree that matches the current context;
  • In every node on the path, predicting the probability of the next symbol $P_e$ using data stored in the node itself (the estimated probability is calculated using the Krichevsky–Trofimov estimation method or the Zero-Redundancy estimation method);
  • Calculating the weighted probability $P_w$ over all $P_e$ values;
  • Sending the weighted probability to the entropy coder.
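The Krichevsky–Trofimov estimate used in the second stage has a simple closed form: after observing $a$ zeros and $b$ ones in a context, the estimated probability that the next bit is 1 is $(b + \tfrac{1}{2})/(a + b + 1)$. A sketch with exact rationals:

```python
from fractions import Fraction

def kt_probability(zeros, ones, next_bit):
    """Krichevsky-Trofimov estimate for the next bit, given that
    `zeros` 0s and `ones` 1s have been observed in this context."""
    total = zeros + ones + 1
    count = ones if next_bit == 1 else zeros
    # (count + 1/2) / total, kept exact by doubling both sides
    return Fraction(2 * count + 1, 2 * total)
```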

4.3. Burrows–Wheeler Transformation

The Burrows–Wheeler transformation, or BWT for short, was described in a 1994 article by Burrows and Wheeler [49]. In BWT, the input data are not treated as a string, but the input stream is divided into blocks and each block is encoded independently. This transformation is most efficient when the input data are processed as one block. The basic idea is to reorder a current stream S with N symbols into another stream L, and the mathematical term for the reordering is permutation.
The encoding algorithm accepts a stream $S$ consisting of $N$ symbols $S[0], \ldots, S[N-1]$ selected from the sorted alphabet $X$. The encoder creates an $N \times N$ matrix $M$ and stores $S$ in its first row, followed by $N - 1$ copies of the stream $S$, each cyclically shifted one symbol further to the left. The matrix is sorted lexicographically by row, and the output of the BWT algorithm is the last symbol of each row together with the row number at which the original stream is found.
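This construction can be sketched naively by materializing all rotations (our own illustration; practical implementations use suffix sorting instead of building the full matrix):

```python
def bwt_encode(s):
    """Naive BWT of one block: sort all cyclic rotations and return the
    last column plus the row index of the original stream."""
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    last_column = "".join(r[-1] for r in rotations)
    return last_column, rotations.index(s)

def bwt_decode(last_column, index):
    """Invert the transform by repeatedly sorting prepended columns."""
    n = len(last_column)
    table = [""] * n
    for _ in range(n):
        table = sorted(last_column[i] + table[i] for i in range(n))
    return table[index]
```

The last column groups equal symbols that share similar contexts, which is what makes the transformed stream easier to compress.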

4.4. Prediction by Partial Matching

Prediction by Partial Matching (PPM) was invented in 1984 by Cleary and Witten [50]. It is a statistical method for modeling input data streams with a limited context and can be viewed as merging multiple context models to predict the next symbol. The main idea of Prediction by Partial Matching is to exploit the knowledge of the previous K symbols to form a conditional probability for the current symbol.
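The idea can be sketched as follows (an illustrative Python class with assumed names; a real PPM coder additionally encodes explicit escape symbols when falling back from a longer context to a shorter one, which this simplified version only approximates):

```python
from collections import Counter, defaultdict

class SimplePPM:
    """PPM-style predictor: keep symbol counts for every context of
    length 0..K and predict with the longest context already seen,
    falling back to shorter contexts (simplified escape mechanism)."""

    def __init__(self, k=3):
        self.k = k
        self.counts = defaultdict(Counter)  # context string -> symbol counts

    def update(self, history, symbol):
        # Record the symbol under every context of length 0..K.
        for order in range(min(self.k, len(history)) + 1):
            context = history[len(history) - order:]
            self.counts[context][symbol] += 1

    def predict(self, history, symbol):
        # Longest-context-first prediction with fallback.
        for order in range(min(self.k, len(history)), -1, -1):
            context = history[len(history) - order:]
            seen = self.counts.get(context)
            if seen:
                return seen[symbol] / sum(seen.values())
        return 0.0  # nothing observed yet
```

Training the model symbol by symbol on a string such as "abracadabra" and then querying it shows how the longest matching context dominates the prediction.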

4.5. Dynamic Markov Coding

Dynamic Markov coding is a two-phase adaptive statistical compression procedure described by Cormack and Horspool in 1987 [51]. The first phase consists of a finite state machine that predicts the probability of the next symbol; the second phase consists of an entropy encoder, usually arithmetic coding, that performs the compression. The algorithm was originally developed for binary data, i.e., machine code, images, and sound. It reads a bit from the input stream, assigns it a probability based on its previous occurrences, and moves to the next state depending on the bit's value. The algorithm starts with a small finite state machine to which new states are added during the encoding process, which makes it adaptive. As a consequence, the finite state automaton can grow very quickly and fill all the free memory. The algorithm is divided into two parts:
  • Probability calculation;
  • New states addition.
The probability calculation is performed by counting the 0s and 1s in the input stream. Assume that the finite state machine has been in state S several times in the past and has received s_0 zeros and s_1 ones. The simplest way to assign a probability is the following expression:
p(0) = s_0 / (s_0 + s_1), the probability that the input bit is 0;
p(1) = s_1 / (s_0 + s_1), the probability that the input bit is 1.
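The counting rule above can be sketched directly (an illustrative class for the per-state counters only; a full DMC implementation additionally clones states as the automaton grows):

```python
class DmcState:
    """Per-state bit counters behind the DMC probability rule."""

    def __init__(self):
        self.s = [0, 0]  # s[0] zeros seen, s[1] ones seen in this state

    def probability(self, bit):
        total = self.s[0] + self.s[1]
        if total == 0:
            return 0.5  # no evidence yet: both bits equally likely
        return self.s[bit] / total

    def observe(self, bit):
        self.s[bit] += 1
```

After observing two zeros and a one, the state predicts 0 with probability 2/3, matching the expressions above.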

4.6. Possible Combinations of Transformation and Compression

Using the information in the last two sections, which describe all the compression methods and data transformations used, a table of all possible combinations can be created (Table 5).
From Table 5, it can be seen that all compression methods can be used with delta coding, the Burrows–Wheeler transform, and Prediction by Partial Matching. With context tree weighting, all entropy encoders can be used, while with Dynamic Markov coding only certain entropy encoders can be used, i.e., arithmetic coding and range coding. For all possible combinations, the AAE parameter is analyzed for the image horse at a compression ratio of 0.2 bpp (Table 6).
For the compression to be more efficient than the original EEDC, the value of the AAE parameter must be lower than 3.80. The results in Table 6 show that the entropy coders achieve better results than the dictionary coders, which cannot exploit their full potential because of the small size of the input data. The best result, an AAE value of 3.61, was obtained by combining range coding with the Burrows–Wheeler transform. In the further analysis, this combination is referred to as EEDC-RANGE(BWT).

4.7. Analysis of the EEDC-RANGE(BWT) Algorithm

In this subsection, the new algorithm is tested at five compression ratios on the standard test image horse. Table 7 shows the values of the average absolute error for four encoders: JPEG, JPEG 2000, EEDC, and EEDC-RANGE(BWT).
A careful look at Table 7 shows that the novel algorithm beats its predecessor at all compression ratios. EEDC-RANGE(BWT) is better than JPEG at 0.1, 0.2, and 0.4 bpp, while it is better than JPEG 2000 only at 0.05, 0.1, and 0.2 bpp. The graphical representation is shown in Figure 10.
Figure 11 shows a series of reconstructed images at all compression ratios for all compression methods. Looking at the images, it is clear that the differences are more pronounced at higher compression ratios. The JPEG method was not able to produce a reconstructed image at 0.05 bpp at all. At 0.1 and 0.2 bpp, the new method is clearly better than the other methods; at 0.4 and 0.8 bpp, it is difficult to see any differences between the tested methods. The analysis and results in this section show that the compression of the grey values directly affects the quality of the reconstructed image; the improvement is due to the introduction of the Burrows–Wheeler transform and range coding. Still, this method is no better than JPEG 2000 at 0.4 and 0.8 bpp. The question arises, "Is it possible to achieve better results than JPEG 2000 at all compression ratios?", and the answer follows in the next section.

5. Binary Tree Structure Compression

In the original compression method (EEDC), the binary tree structure is not compressed in any way. This structure, a stream of 1s and 0s, is stored in groups of 8 bits (bytes). In this section, the compression of the binary tree structure is analyzed using the compression methods and data transformation algorithms described earlier. An example of a binary tree structure can be seen in Figure 12.
Looking at Figure 12, certain blocks/sequences can be seen, but they have a size of only 2 or 3 bits, which cannot be exploited effectively to increase the compression ratio. The compression method EEDC-RANGE(BWT) is the basis of this research, and the binary tree structure compression is implemented within it. The results after applying the data transformation and compression methods are shown in Table 8.
The data reported in Table 8 show that not a single value fell below the reference value of 3.61 obtained by EEDC-RANGE(BWT) at 0.2 bpp. The best compression was achieved with the dictionary coders, especially LZ77, due to the repetition of substrings in the data stream. With delta coding, a new character, −1, appeared, which made the compression more difficult, as the results in Table 8 show. Rearranging the bits, as BWT does, does not provide better results either. The best result is obtained by combining DMC with arithmetic coding, but it is still above the value of 3.61.

Context Mixing Method

The shortcoming of methods that use contexts to predict the next symbol (DMC, PPM, and CTW) is that the context must be contiguous. The best symbol predictors for images are the adjacent horizontal and vertical pixels, but these do not form a contiguous context, and the previously mentioned methods provide no mechanism for combining statistics from such contexts. The context mixing method combines predictions from a large number of independent models using weighted averaging [52].
The input data are represented as a stream of 1s and 0s. For each bit, each model independently provides two counts n_0, n_1 ≥ 0, which can be read as the model's assertion that the next bit will be 0 with probability n_0/n or 1 with probability n_1/n, where n = n_0 + n_1 expresses the model's relative confidence in this prediction. Since the models are independent of each other, the confidence is only meaningful when comparing two predictions of the same model. The models are combined by a weighted summation of n_0 and n_1 over all models as follows:
S_0 = ϵ + Σ_i w_i n_{0,i} (evidence for 0),
S_1 = ϵ + Σ_i w_i n_{1,i} (evidence for 1),
S = S_0 + S_1 (total evidence),
p_0 = S_0 / S (probability that the next bit is 0),
p_1 = S_1 / S (probability that the next bit is 1),
where w_i ≥ 0 is the weight of the i-th model, n_{0,i} and n_{1,i} are the outputs of the i-th model, and ϵ > 0 is a constant that guarantees S_0, S_1 > 0 and p_0, p_1 ∈ (0, 1).
After coding each bit, the weights are adjusted in favor of the models that predicted that bit correctly. Let x be the bit being encoded. The cost of the optimal encoding of x is log_2(1/p(x)). Taking the partial derivative of this coding cost with respect to each w_i in (15), under the constraint that the weights must be non-negative, yields the weight adjustment:
w_i ← max(0, w_i + (x − p_1)(S n_{1,i} − S_1 n_i) / (S_0 S_1)),
where n_i = n_{0,i} + n_{1,i}. The term (x − p_1) is the prediction error. The weights tend to grow logarithmically because the term S_0 S_1 grows along with the weights.
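The mixing and weight-update rules can be sketched as follows (an illustrative Python sketch; ϵ is chosen arbitrarily here, and each model is represented simply as an (n_0, n_1) count pair):

```python
EPS = 1e-6  # the constant epsilon from the text; value chosen arbitrarily

def mix(models, weights):
    """Weighted evidence combination: models is a list of (n0, n1)
    count pairs, one per model. Returns (p0, p1)."""
    s0 = EPS + sum(w * n0 for w, (n0, _) in zip(weights, models))
    s1 = EPS + sum(w * n1 for w, (_, n1) in zip(weights, models))
    s = s0 + s1
    return s0 / s, s1 / s

def update_weights(models, weights, x):
    """Adjust weights after coding bit x: models that predicted x
    correctly gain weight, clamped at zero from below."""
    s0 = EPS + sum(w * n0 for w, (n0, _) in zip(weights, models))
    s1 = EPS + sum(w * n1 for w, (_, n1) in zip(weights, models))
    s = s0 + s1
    p1 = s1 / s
    return [max(0.0, w + (x - p1) * (s * n1 - s1 * (n0 + n1)) / (s0 * s1))
            for w, (n0, n1) in zip(weights, models)]
```

For two models with counts (3, 1) and (0, 4) and equal weights, the mixture predicts p_0 = 0.375; after coding a 1, the weight of the second model (which favored 1) increases relative to the first.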
The predictions from Equation (15) are sent to the arithmetic coder. If the input to the coder is x, then the output of the coder is the length of x together with a number from the half-open interval [p_<x, p_<x + p(x)), where p_<x is the probability that a randomly picked string is lexicographically smaller than x. Within this interval, there is a number whose base-B encoding has at most 1 + log_B(1/p(x)) digits [42]. When p(x) is expressed as a product of conditional probabilities
p(x_1, x_2, …, x_n) = Π_i p(x_i | x_1, x_2, …, x_{i−1})
over the binary alphabet of 1s and 0s, the arithmetic code can be computed efficiently: the coder starts with the interval [0, 1) and, for each bit x_i, divides the interval into two parts proportional to p_0 and p_1 from Equation (15) and replaces it by the sub-interval corresponding to p(x_i). If the current range is [low, high) and the probability that x_i is 0 is p_0, the interval is updated as follows:
mid = low + p_0 (high − low),
[low, high) ← [low, mid) if x_i = 0, or [mid, high) if x_i = 1.
As the interval shrinks, the leading digits of low and high begin to match, and these digits are output immediately.
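The interval update can be sketched directly (an illustrative floating-point Python sketch; a practical coder works with integer ranges and renormalization instead):

```python
def update_interval(low, high, p0, bit):
    """One step of the binary arithmetic coder: split [low, high) in
    proportion p0 : (1 - p0) and keep the half selected by the bit."""
    mid = low + p0 * (high - low)
    return (low, mid) if bit == 0 else (mid, high)

# Encode the bit sequence 1, 0, 0 with p0 = 0.9 at every step.
low, high = 0.0, 1.0
for bit, p0 in [(1, 0.9), (0, 0.9), (0, 0.9)]:
    low, high = update_interval(low, high, p0, bit)
```

After these three steps the interval width equals p(x) = 0.1 · 0.9 · 0.9 = 0.081, and both bounds share the leading digit 9, which could already be output.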
The context mixing method applied to the binary tree compression consists of 18 context models (10 models for general use, 4 models for long contexts, and 4 models for short contexts) and 8 sets of weights, selected by a 4-bit context consisting of the 4 most significant bits of the previous data stream. The method was implemented in the EEDC-RANGE(BWT) encoder and tested on the horse image at a compression ratio of 0.2 bpp. The best AAE value so far, 3.61, was obtained with EEDC-RANGE(BWT); with context mixing, the AAE parameter drops to 3.47, which is significantly better than the previous algorithm. The new algorithm is named EEDC-BSSP (Block-Sorting Symbol Prediction) because it uses a block-sorting algorithm (the Burrows–Wheeler transform) for compressing the grey values and a symbol prediction method (context mixing) for compressing the binary tree structure.

6. Quality Analysis of the Reconstructed Image

The measures used to determine the quality of the reconstructed image are divided into objective and subjective. The objective measurements are determined using mathematical operations or measuring devices, while the subjective measurements are determined by human judgement.
For the quality analysis of the EEDC-BSSP compression method, four images are used (horse, beauty, mask, and pills), as shown in Figure 13.
The images have a size of 257 × 257 pixels. The images are compressed using JPEG, JPEG 2000, and EEDC-BSSP compression algorithms with five compression ratios (0.8, 0.4, 0.2, 0.1, and 0.05 bpp). The images are then decompressed and the reconstructed images are subjected to further analysis.

6.1. Objective Quality Analysis

Objective quality analysis is based on the difference between the original image f_i,j and the reconstructed image u_i,j. Two metrics have already been introduced: AAE (13) and MSE (14).
The signal-to-noise ratio (SNR) is a ratio of the power of the signal and power of the background noise:
SNR(f, g) = 10 · log_10(σ_g² / MSE).
The amplitude of the image elements lies in the range [0, 2^q − 1], and its maximum value is
MAX := 2^q − 1,
where q is the number of bits needed to display the amplitudes of the original image. The MSE does not take MAX into consideration so the Peak Signal-to-Noise Ratio (PSNR) is introduced:
PSNR(u, v) := 10 log_10(MAX² / MSE) = 20 log_10(MAX / √MSE).
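As a direct transcription of the formula (with q = 8 giving MAX = 255 for 8-bit images):

```python
import math

def psnr(mse, q=8):
    """Peak signal-to-noise ratio in dB; q is the bit depth,
    so MAX = 2^q - 1."""
    max_val = (1 << q) - 1
    return 10 * math.log10(max_val ** 2 / mse)
```

For an 8-bit image, an MSE equal to MAX² yields a PSNR of 0 dB, and an MSE of 1 yields about 48.1 dB.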
SSIM (Structural Similarity) is a method for calculating the similarity of two images [27]. The main idea is that the human visual system is adapted to processing structural information, and this measurement strives to quantify the differences in this information between the original and reconstructed images. Let x_i and y_i be two discrete signals, where i = 1, 2, …, N. The average value of the discrete signal x_i is
μ_x := (1/N) Σ_{i=1}^{N} x_i.
The average value of the discrete signal y_i is
μ_y := (1/N) Σ_{i=1}^{N} y_i.
The variance of x_i is
σ_x² := 1/(N − 1) Σ_{i=1}^{N} (x_i − μ_x)².
The variance of y_i is
σ_y² := 1/(N − 1) Σ_{i=1}^{N} (y_i − μ_y)².
The covariance of x_i and y_i is
σ_xy := 1/(N − 1) Σ_{i=1}^{N} (x_i − μ_x)(y_i − μ_y).
The SSIM is calculated locally over a window of fixed size (usually 8 × 8). For every local window, three parameters are calculated: brightness, contrast, and structure. The value that describes the change in brightness intensity is calculated by:
l(x, y) := 2 μ_x μ_y / (μ_x² + μ_y²).
The contrast is calculated from the variances of the original and reconstructed images as
c(x, y) := 2 σ_x σ_y / (σ_x² + σ_y²).
The structural similarity uses the covariance of the two images and is calculated by
s(x, y) := σ_xy / (σ_x σ_y).
These three components are combined as
SSIM(x, y) := l(x, y)^α · c(x, y)^β · s(x, y)^γ,
where α, β, and γ are parameters that define the relative importance of each component. In the special case α = β = γ = 1, the SSIM index reduces to
SSIM(x, y) = 4 μ_x μ_y σ_xy / ((μ_x² + μ_y²)(σ_x² + σ_y²)).
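The special case can be computed directly for a single window (a pure-Python sketch of the formula exactly as written, i.e., without the stabilizing constants that practical SSIM implementations add to avoid division by zero):

```python
def ssim_window(x, y):
    """SSIM for one local window with alpha = beta = gamma = 1,
    computed from the means, variances, and covariance defined above."""
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((xi - mu_x) ** 2 for xi in x) / (n - 1)
    var_y = sum((yi - mu_y) ** 2 for yi in y) / (n - 1)
    cov = sum((xi - mu_x) * (yi - mu_y) for xi, yi in zip(x, y)) / (n - 1)
    return (4 * mu_x * mu_y * cov) / ((mu_x ** 2 + mu_y ** 2) * (var_x + var_y))
```

Identical windows give an SSIM of exactly 1, while a constant brightness shift lowers it through the luminance term.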
MS-SSIM (Multi-Scale Structural Similarity) is based on SSIM [53], but the contrast and structure are calculated at several frequency scales, so it tests the image quality at different observation distances. The brightness is calculated only at the last scale, and the MS-SSIM is obtained as a combination of the coefficients calculated at all scales. A low-pass filter is iteratively applied to the original and reconstructed images, and the filtered image is downsampled by a factor of 2. The original image is referred to as scale 1 and the coarsest scale as scale M. The change in brightness intensity (26) is calculated only at scale M and denoted l_M(x, y). At a given scale j, the contrast c_j(x, y) is computed with (27) and the structural similarity s_j(x, y) with (28).
MS-SSIM(x, y) := l_M(x, y)^{α_M} · Π_{j=1}^{M} c_j(x, y)^{β_j} s_j(x, y)^{γ_j}.
As in (29), the exponents α_M, β_j, and γ_j assign the relative importance to the different components. Just as with SSIM, the result of this measurement ranges from 0 (no similarity) to 1 (identical images). The disadvantage of this measurement is that its calculation takes longer than that of SSIM.
VIF (Visual Information Fidelity) [54] quantifies the information shared between the reference image and the distorted image relative to the information contained in the reference image. It combines NSS (Natural Scene Statistics) with an image distortion model and a model of the human visual system (HVS). The result ranges from 0 (no similarity) to 1 (identical images).

6.2. Review and Analysis of Objective Measurement Results

In this subsection, a comparative analysis of the three encoders (JPEG, JPEG 2000, and EEDC-BSSP) is performed for five compression ratios on the four images in Figure 13. These images were selected based on their characteristics to test the new compression method comprehensively. The image horse contains a large number of intensity gradients, with the details concentrated in a few parts of the image (head and tail). The image beauty was chosen because it contains parts with more detail (face and hair) as well as parts with less detail (the wall in the background). The image mask contains a small number of intensity gradients; the main reasons this image was chosen are the fast transitions between gradients and the details scattered throughout the image. The image pills was selected because it contains a large amount of detail and repetitive shapes.

6.2.1. Objective Quality of the Test Image Horse

The reconstructed images for each compression method and ratio are shown in Figure 14. At the lowest compression ratio (top row), the images look almost the same. At 0.4 bpp, the differences between JPEG 2000 and EEDC-BSSP are imperceptible, while degradation begins for the image compressed with JPEG. At 0.2 bpp, artifacts begin to appear on the JPEG image and degradation is observed on the JPEG 2000 image; on the EEDC-BSSP image, loss of detail is evident but the structure is preserved. At 0.1 bpp, the JPEG image is severely degraded, the structure and details of the JPEG 2000 image are distorted, and the EEDC-BSSP image still retains the original structure but lacks details. At 0.05 bpp, JPEG cannot reproduce the image and JPEG 2000 is very blurred, while the EEDC-BSSP image is degraded in structure and detail, but a horse is still recognizable. The data in Table 9, Table 10 and Table 11 are the numerical evidence of the image degradation shown in Figure 14.
From Table 9, it can be seen that the lower the AAE parameter, the smaller the difference between the original and the reconstructed image, i.e., the smaller the distortion. Likewise, a lower MSE and a higher SNR indicate a reconstructed image of better visual quality.
In Table 10, the PSNR parameter behaves like SNR, i.e., its value increases with the quality of the reconstructed image. The values of SSIM and MS-SSIM in Table 10 and of VIF in Table 11 range from 0 (no similarity) to 1 (identical images). According to the SSIM values, EEDC-BSSP is better than the other two compression methods at 0.05, 0.1, and 0.2 bpp, equal to JPEG 2000 at 0.4 bpp, and equal to JPEG at 0.8 bpp, where JPEG 2000 has the best result. MS-SSIM shows similar results. The VIF results favor EEDC-BSSP at all compression ratios except 0.8 bpp.

6.2.2. Objective Quality of the Test Image Beauty

The reconstructed images for each compression method and ratio are shown in Figure 15. Already at the lowest compression ratio (top row), the quality of the JPEG image drops noticeably; JPEG 2000 retains some edges, while EEDC-BSSP is smooth. JPEG has the worst image quality at all compression ratios, while JPEG 2000 and EEDC-BSSP are on par, with only a slight difference: EEDC-BSSP is smoother in the details. At 0.05 bpp, the face is not recognizable in the JPEG 2000 image, while EEDC-BSSP preserves recognizable facial features. The results of the objective quality assessment are shown in Table 12, Table 13 and Table 14.
From Table 12, it can be concluded that EEDC-BSSP performs better than JPEG 2000 only at 0.1 and 0.05 bpp, while it is better than JPEG at all compression ratios except 0.8 bpp.
The SSIM and MS-SSIM parameters in Table 13 and the VIF in Table 14 show results similar to the AAE, MSE, and SNR values. EEDC-BSSP is very similar to JPEG, but JPEG 2000 outperforms the new compression method at 0.2, 0.4, and 0.8 bpp.

6.2.3. Objective Quality of the Test Image Mask

The reconstructed images for each compression method and ratio are shown in Figure 16. Because of the low level of detail in this image, it is quite difficult to distinguish between the images at 0.8 and 0.4 bpp. At 0.2 bpp, JPEG starts to degrade and artifacts appear, especially at 0.1 bpp. JPEG 2000 and EEDC-BSSP are quite similar, except that JPEG 2000 emphasizes the artifacts that degrade the image more.
Table 15, Table 16 and Table 17 show the results that follow the quality of the reconstructed images on Figure 16.

6.2.4. Objective Quality of the Test Image Pills

Figure 17 shows all the reconstructed images of the image pills. Due to the high level of detail, i.e., the high frequencies in the image, it is very difficult to detect errors in the images at low compression ratios (0.8 and 0.4 bpp). The quality of the JPEG images deteriorates rapidly as the compression rate increases. JPEG 2000 and EEDC-BSSP go head-to-head at high compression rates; at the highest compression rate (0.05 bpp), JPEG 2000 loses its integrity, while EEDC-BSSP manages to preserve the shapes.
Table 18, Table 19 and Table 20 show the results that follow the quality of the reconstructed images on Figure 17.

6.3. Subjective Quality Analysis

The decisive measure of image quality is the subjective image quality: even though two images may have identical values of the objective quality parameters, the human visual system may still distinguish them. The standard procedures for subjective assessment are specified in Recommendation ITU-R BT.500-11 [55].
The subjective measurements in the ITU-R 500 recommendation can be divided into several groups, the basic classification being into general and alternative methods. The two general methods are the double stimulus impairment scale (DSIS) and the double stimulus continuous quality scale (DSCQS). The alternative methods are the single stimulus methods: single stimulus categorical rating and the single stimulus continuous quality scale.
In the double stimulus impairment scale method, a pair of images is observed, one of which is the original image and the other the reconstructed image. For each pair, the observers rate the distortion of the reconstructed image compared to the original. The ratings range from 1 to 5, with 1 representing complete distortion of the image and 5 representing imperceptible distortion; the rating scale is provided in Table 21.
After image grading, the average grade for all images is calculated and that grade is the Mean Opinion Score—MOS:
MOS = Σ_{i=1}^{5} i · p(i),
where p(i) is the share of grade i in the total number of ratings. The double stimulus impairment scale provides stable results for small amounts of deterioration caused by changes in the level of distortion.
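The MOS formula can be transcribed directly (an illustrative sketch assuming integer grades from 1 to 5):

```python
def mos(ratings):
    """Mean Opinion Score: sum over grades i of i * p(i), where p(i)
    is the share of grade i among all ratings."""
    n = len(ratings)
    return sum(i * ratings.count(i) / n for i in range(1, 6))
```

For the ratings [3, 4, 4, 5], the grade shares are p(3) = p(5) = 1/4 and p(4) = 1/2, giving a MOS of 4.0, i.e., the arithmetic mean of the grades.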
In the double stimulus continuous quality scale method, the original image is shown first, followed by either the same image or a degraded version. The observer rates the quality of the second image. The evaluation of the reference image and the test image is performed using the descriptive scores in Table 22.
For optimal conditions in determining subjective quality, it is necessary to provide the same screen, distance, and viewing angle for all observers.

6.4. Review and Analysis of Subjective Measurement Results

In this article, the subjective assessment of image quality was performed using the double stimulus continuous quality scale. The image evaluation was performed by 40 observers, 8 female and 32 male. The observers first saw the original image, and then either the same image or the decoded image was shown. The subjective evaluation was performed using Table 22. After grading, the cumulative MOS was calculated for each tested image. The analysis for the image horse is shown in Table 23.
As can be seen from Table 23, the competition was again between JPEG 2000 and EEDC-BSSP. EEDC-BSSP provided better results at high compression ratios and even received the best score at 0.4 bpp. At 0.8 bpp, JPEG was surprisingly rated the best. A graphical representation can be found in Figure 18.
From Figure 18, it is easy to see which compression method was found to be the best for a given compression ratio.
The quantitative analysis of the MOS result for the test image beauty is provided in Table 24 with its graphical representation on Figure 19.
The values provided and the corresponding graph show that, for image beauty, JPEG 2000 performs best at all compression ratios except 0.05 bpp, and JPEG performs by far the worst at all compression ratios.
The calculated values of MOS for the test image mask are listed in Table 25. The graph of agreement is shown in Figure 20.
For the image mask, JPEG was rated the worst at all compression ratios, while JPEG 2000 and EEDC-BSSP are very close according to the results; it is genuinely hard to distinguish these two algorithms on this image. Their results are similar, except that, at 0.2 bpp, JPEG 2000 was rated better than EEDC-BSSP, whereas, at 0.4 bpp, the roles were reversed, i.e., EEDC-BSSP was better than JPEG 2000.
The numerical values of the MOS rating for the pills image are provided in Table 26 and the graph for this image is shown in Figure 21.
For the image pills, JPEG 2000 was generally rated the best compression method, but it received a lower value than EEDC-BSSP at 0.2 bpp (seen as a "drop" at 0.2 bpp in Figure 21). Interestingly, JPEG was ranked better than EEDC-BSSP at 0.8 bpp, but all other values followed the same pattern as for the previous images.

7. Discussion

The objective and subjective image quality analyses performed in this study provide insight into the effectiveness of the presented EEDC-BSSP algorithm compared to the two standardized image compression methods, JPEG and JPEG 2000. The four test images (horse, beauty, mask, and pills) were compressed at five compression ratios ranging from 0.8 bpp (lowest compression) to 0.05 bpp (highest compression). The results for each image are presented in an image grid, allowing a clear visualization of the quality loss of each image.
The objective image analysis was performed using several metrics, including AAE, MSE, SNR, PSNR, SSIM, MS-SSIM, and VIF. These metrics were presented in tables and used to evaluate the performance of each compression method on the four test images. The objective analysis showed that EEDC-BSSP outperforms JPEG and JPEG 2000 in terms of image quality at high compression rates.
The subjective image analysis was performed by calculating the MOS for each image using the double stimulus continuous quality scale method. The MOS values were presented in tables and graphs, which allow a clear comparison of the performance of the three compression methods. The subjective analysis confirmed the superiority of EEDC-BSSP over JPEG and JPEG 2000 at high compression ratios, as evidenced by the higher MOS values.
Overall, the objective and subjective analyses show that the presented EEDC-BSSP algorithm is more effective in preserving image quality than the two standardized compression methods evaluated in this study. The results could have important implications for applications that require high compression ratios without a significant loss of image quality, such as medical imaging and satellite imagery. Future work could include further optimization of the EEDC-BSSP algorithm to improve its performance for specific applications.

8. Conclusions

In this work, we have presented two extensions of the PDE-based image compression algorithm EEDC and demonstrated the superior performance of the resulting method compared to JPEG and JPEG 2000 at high compression rates. In particular, we analyzed the impact of grey-level compression and binary tree compression on the quality of the reconstructed image. Although PDE-based methods are more computationally intensive than transform-based compression methods (DCT in JPEG and DWT in JPEG 2000), the clever use of the available data and its statistics enables remarkable performance.
In our future work, we intend to optimize the EEDC-BSSP method for specific applications. Possible improvements include automatic selection of the diffusion and convergence rate parameters, i.e., whether decomposition and inpainting can be achieved in fewer iterations.

Author Contributions

Conceptualization, Č.L. and A.B.; methodology, Č.L.; software, A.B.; validation, T.H.; formal analysis, A.B.; investigation, Č.L.; resources, Č.L.; data curation, T.H.; writing—original draft preparation, Č.L.; writing—review and editing, Č.L., T.H., and A.B.; visualization, Č.L.; supervision, Č.L.; project administration, Č.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bertalmío, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image inpainting. In Proceedings of the ACM SIGGRAPH, New Orleans, LA, USA, 23–28 July 2000; pp. 417–424. [Google Scholar]
  2. Chan, T.F.; Shen, J. Non-texture inpainting by curvature-driven diffusions (CDD). J. Vis. Commun. Image Represent. 2001, 12, 436–449. [Google Scholar] [CrossRef]
  3. Grossauer, H.; Scherzer, O. Using the complex Ginzburg–Landau equation for digital inpainting in 2D and 3D. In Scale-Space Methods in Computer Vision, Volume 2695 of Lecture Notes in Computer Science; Springer: Berlin, Germany, 2003; pp. 225–236. [Google Scholar]
  4. Tschumperlé, D.; Deriche, R. Vector-valued image regularization with PDEs: A common framework for different applications. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 506–516. [Google Scholar] [CrossRef] [Green Version]
  5. Kansal, K.; Singh, P. A review on image compression using partial differential equation. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2013, 3, 1192–1202. [Google Scholar]
  6. Li, H.; Li, C.; Li, X.; Xu, X. An overview of partial differential equation-based image processing techniques. J. Electron. Imaging 2015, 24, 013011. [Google Scholar]
  7. Saha, S.; Pramanik, S. A review on image compression using partial differential equations. J. Comput. Theor. Nanosci. 2018, 15, 1574–1584. [Google Scholar]
  8. Li, S.; Zhang, Y.; Xie, X.; Li, X. Recent progress on partial differential equation-based image processing: Denoising, segmentation, inpainting, and beyond. J. Electron. Imaging 2017, 26, 011007. [Google Scholar]
  9. Tai, X.C.; Chan, T.F. A survey on image denoising using PDEs. J. Vis. Commun. Image Represent. 2018, 55, 247–267. [Google Scholar]
  10. Shapiro, J.M. Embedded image coding using zero trees of wavelet coefficients. IEEE Trans. Signal Process. 1993, 41, 3445–3462. [Google Scholar] [CrossRef]
Figure 1. BTTC process. (a) Original image. (b) Triangle division process. (c) Reconstruction of inner values by interpolation. (d) Repeated triangle division process.
Figure 2. Process of creating the B-tree structure. (a) Process of dividing the triangles. (b) Corresponding binary tree structure.
Figure 3. Random part of the gray value stream acquired using the EEDC.
Figure 4. AAE for the image horse with the unmodified gray value compression algorithm.
Figure 5. AAE for the image horse with modified gray value compression using arithmetic coding.
Figure 6. AAE for the image horse with modified gray value compression using range coding.
Figure 7. AAE for the image horse with modified gray value compression using LZ77 coding.
Figure 8. Histogram of the image horse.
Figure 9. Histogram of the image horse after applying delta coding.
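Figures 8 and 9 contrast the gray value histogram before and after delta coding: replacing each value by its difference from the predecessor concentrates the histogram around zero, which the subsequent entropy coder exploits. A minimal sketch of the idea (the modulo-256 wraparound for 8-bit values is an assumption for illustration, not necessarily the paper's exact implementation):

```python
def delta_encode(values):
    """Keep the first byte, then store successive differences mod 256."""
    out = [values[0]]
    for prev, cur in zip(values, values[1:]):
        out.append((cur - prev) % 256)
    return out

def delta_decode(deltas):
    """Rebuild the original stream by accumulating the differences."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append((out[-1] + d) % 256)
    return out

# A smooth gray value ramp becomes a stream dominated by values near 0 (or 255).
stream = [120, 121, 123, 122, 122, 125]
assert delta_encode(stream) == [120, 1, 2, 255, 0, 3]
assert delta_decode(delta_encode(stream)) == stream
```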
Figure 10. AAE comparison for image horse.
Figure 11. Comparative overview of reconstructed images ranging from 0.8 bpp (top row) to 0.05 bpp (bottom row). Compression methods: (a) JPEG, (b) JPEG 2000, (c) EEDC, and (d) EEDC-RANGE (BWT).
Figure 12. A segment of the binary tree structure obtained using triangular coding.
Figure 13. Test images: (a) horse, (b) beauty, (c) mask, and (d) pills.
Figure 14. Comparative overview of reconstructed images for image horse ranging from 0.8 bpp (top row) to 0.05 bpp (bottom row). Compression methods: (a) JPEG, (b) JPEG 2000, and (c) EEDC-BSSP.
Figure 15. Comparative overview of reconstructed images for image beauty ranging from 0.8 bpp (top row) to 0.05 bpp (bottom row). Compression methods: (a) JPEG, (b) JPEG 2000, and (c) EEDC-BSSP.
Figure 16. Comparative overview of reconstructed images for image mask ranging from 0.8 bpp (top row) to 0.05 bpp (bottom row). Compression methods: (a) JPEG, (b) JPEG 2000, and (c) EEDC-BSSP.
Figure 17. Comparative overview of reconstructed images for image pills ranging from 0.8 bpp (top row) to 0.05 bpp (bottom row). Compression methods: (a) JPEG, (b) JPEG 2000, and (c) EEDC-BSSP.
Figure 18. MOS values comparison for EEDC-BSSP with JPEG and JPEG 2000 for image horse.
Figure 19. MOS values comparison for EEDC-BSSP with JPEG and JPEG 2000 for image beauty.
Figure 20. MOS values comparison for EEDC-BSSP with JPEG and JPEG 2000 for image mask.
Figure 21. MOS values comparison for EEDC-BSSP with JPEG and JPEG 2000 for image pills.
Table 1. Average absolute error (AAE) and mean square error (MSE) for the sparse data interpolation of the image horse.
PDE Method                 AAE     MSE
Homogeneous diffusion      14.98   366.94
Biharmonic smoothing       13.16   373.52
Triharmonic smoothing      16.25   491.15
AMLE                       14.91   376.99
Charbonnier diffusion      18.66   572.03
Edge-enhancing diffusion   12.52   353.12
Table 2. Comparison of Average Absolute Error for image horse on an unmodified compression algorithm.
bpp    JPEG    JPEG 2000   EEDC
0.8    1.65    1.39        2.08
0.4    2.91    2.29        2.67
0.2    6.20    3.78        3.80
0.1    10.96   6.75        5.94
0.05   /       12.52       9.31
Table 3. Average absolute error for image horse and LZ-family coders.
bpp    LZ77    LZ78    LZW
0.8    3.01    3.77    3.62
0.4    3.51    3.92    3.84
0.2    4.33    4.71    4.58
0.1    7.12    7.90    7.33
0.05   14.22   15.20   14.78
Table 4. Comparison of average absolute error for image horse and all modified compression algorithms.
bpp    JPEG    JPEG 2000   EEDC   EEDC-Arith   EEDC-Range   EEDC-LZ77   EEDC-LZ78   EEDC-LZW
0.8    1.65    1.39        2.08   2.49         2.24         3.01        3.77        3.62
0.4    2.91    2.29        2.67   3.03         2.88         3.51        3.92        3.84
0.2    6.20    3.78        3.80   3.98         3.86         4.33        4.71        4.58
0.1    10.96   6.75        5.94   6.50         6.31         7.12        7.90        7.33
0.05   /       12.52       9.31   10.20        10.08        14.22       15.20       14.78
Table 5. Possible combinations of coders and transformations.
Compression Method    Delta Coding   CTW   BWT   PPM   DMC
Huffman coding        ✓              ✓     ✓     ✓     –
Arithmetic coding     ✓              ✓     ✓     ✓     ✓
Range coding          ✓              ✓     ✓     ✓     ✓
LZ77                  ✓              –     ✓     ✓     –
LZ78                  ✓              –     ✓     ✓     –
LZW                   ✓              –     ✓     ✓     –
Table 6. AAE for all possible combinations of the compression and transformation.
Compression Method    Delta Coding   CTW    BWT    PPM    DMC
Huffman coding        3.89           3.91   3.84   4.12   /
Arithmetic coding     4.02           3.91   3.88   4.09   4.53
Range coding          3.91           3.84   3.61   3.93   4.11
LZ77                  4.21           /      3.91   4.51   /
LZ78                  4.55           /      4.17   5.01   /
LZW                   4.34           /      3.99   4.89   /
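The advantage of the BWT column in Table 6 comes from block sorting: the Burrows–Wheeler transform sorts all rotations of a block and keeps the last column, which groups occurrences of similar symbols together without losing any information. A small illustrative sketch (the naive rotation sort and column-rebuilding inverse, not the efficient suffix-array implementation a real codec would use):

```python
def bwt(s, eos="\0"):
    """Burrows-Wheeler transform: append a unique end marker, sort all
    rotations of the block, and keep the last column."""
    s += eos
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, eos="\0"):
    """Invert by repeatedly prepending the transform to the sorted table
    of partial rotations (a simple O(n^2 log n) sketch)."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(t[i] + table[i] for i in range(len(t)))
    return next(r for r in table if r.endswith(eos)).rstrip(eos)

# The transform is a permutation of the input, and it is fully reversible.
assert sorted(bwt("banana")) == sorted("banana" + "\0")
assert ibwt(bwt("banana")) == "banana"
```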
Table 7. AAE values comparison.
bpp    JPEG    JPEG 2000   EEDC   EEDC-RANGE (BWT)
0.8    1.65    1.39        2.08   2.02
0.4    2.91    2.29        2.67   2.52
0.2    6.20    3.78        3.80   3.61
0.1    10.96   6.75        5.94   5.65
0.05   /       12.52       9.31   8.68
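The entropy stage of EEDC-RANGE in Table 7 is a range coder, which, like arithmetic coding, narrows a working interval in proportion to each symbol's probability so that a single number identifies the whole message. The sketch below uses exact rational arithmetic to show only the interval-narrowing principle; a practical range coder instead operates on finite-precision integer registers with renormalization, and the fixed symbol frequencies here are an illustrative assumption:

```python
from fractions import Fraction

def interval_encode(symbols, freqs):
    """Narrow [0, 1) by each symbol's probability band; any number inside
    the final interval identifies the entire message."""
    total = sum(freqs.values())
    starts, acc = {}, 0
    for s in sorted(freqs):
        starts[s] = acc
        acc += freqs[s]
    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        low += width * Fraction(starts[s], total)
        width *= Fraction(freqs[s], total)
    return low, width

def interval_decode(value, n, freqs):
    """Recover n symbols by locating value's band and rescaling."""
    total = sum(freqs.values())
    bands, acc = [], 0
    for s in sorted(freqs):
        bands.append((s, Fraction(acc, total), Fraction(acc + freqs[s], total)))
        acc += freqs[s]
    out = []
    for _ in range(n):
        for s, lo, hi in bands:
            if lo <= value < hi:
                out.append(s)
                value = (value - lo) / (hi - lo)
                break
    return "".join(out)

freqs = {"a": 3, "b": 1}
low, width = interval_encode("abba", freqs)
assert interval_decode(low, 4, freqs) == "abba"
```

More probable symbols shrink the interval less, so they cost fewer bits, which is exactly where the context-mixing symbol predictions pay off.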
Table 8. AAE for possible combinations of data transformation and compression at binary tree structure coding.
Compression Method    None   Delta Coding   CTW    BWT    PPM    DMC
Huffman coding        4.75   6.12           4.22   4.68   4.66   /
Arithmetic coding     5.02   7.10           4.08   5.11   4.21   3.80
Range coding          5.33   7.24           3.99   5.39   4.13   3.84
LZ77                  4.68   5.88           /      4.65   4.34   /
LZ78                  4.89   6.01           /      4.93   4.48   /
LZW                   4.76   5.94           /      4.87   4.42   /
Table 9. Results of the AAE, MSE, and SNR metrics for image horse.
        AAE                               MSE                                SNR
bpp     JPEG    JPEG 2000   EEDC-BSSP     JPEG     JPEG 2000   EEDC-BSSP     JPEG    JPEG 2000   EEDC-BSSP
0.8     1.65    1.39        1.97          6.51     3.50        6.22          27.09   29.88       27.44
0.4     2.91    2.29        2.45          19.74    11.15       11.04         22.28   24.89       24.91
0.2     6.20    3.78        3.47          75.08    36.45       29.00         16.61   19.66       20.64
0.1     10.96   6.75        5.46          231.92   115.60      88.25         11.57   14.60       15.82
0.05    /       12.52       8.84          /        353.12      257.27        /       9.47        11.05
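The objective scores in Tables 9–11 follow the standard error-metric definitions. A brief sketch of how such values are computed (the 8-bit peak value of 255 for PSNR is an assumption for grayscale images):

```python
import numpy as np

def aae(orig, rec):
    """Average absolute error between original and reconstructed image."""
    return float(np.mean(np.abs(orig.astype(float) - rec.astype(float))))

def mse(orig, rec):
    """Mean square error."""
    return float(np.mean((orig.astype(float) - rec.astype(float)) ** 2))

def snr_db(orig, rec):
    """Signal-to-noise ratio in dB, relative to the signal power."""
    return float(10 * np.log10(np.mean(orig.astype(float) ** 2) / mse(orig, rec)))

def psnr_db(orig, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    return float(10 * np.log10(peak ** 2 / mse(orig, rec)))

orig = np.array([[100, 100], [100, 100]], dtype=np.uint8)
rec = np.array([[101, 99], [101, 99]], dtype=np.uint8)
assert aae(orig, rec) == 1.0 and mse(orig, rec) == 1.0
```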
Table 10. Results of the PSNR, SSIM, and MS-SSIM metrics for image horse.
        PSNR                              SSIM                           MS-SSIM
bpp     JPEG    JPEG 2000   EEDC-BSSP     JPEG   JPEG 2000   EEDC-BSSP   JPEG   JPEG 2000   EEDC-BSSP
0.8     39.99   42.70       40.19         0.97   0.98        0.97        1.00   1.00        0.99
0.4     35.18   37.66       37.70         0.93   0.96        0.96        0.99   0.99        0.99
0.2     29.38   32.51       33.51         0.84   0.92        0.93        0.94   0.98        0.98
0.1     24.48   27.50       28.67         0.74   0.84        0.88        0.87   0.95        0.95
0.05    /       22.65       24.03         /      0.72        0.87        /      0.82        0.89
Table 11. Results of the VIF metric for image horse.
bpp    JPEG   JPEG 2000   EEDC-BSSP
0.8    0.75   0.80        0.75
0.4    0.59   0.66        0.67
0.2    0.38   0.50        0.54
0.1    0.23   0.33        0.38
0.05   /      0.17        0.23
Table 12. Results of the AAE, MSE, and SNR metrics for image beauty.
        AAE                               MSE                                SNR
bpp     JPEG    JPEG 2000   EEDC-BSSP     JPEG     JPEG 2000   EEDC-BSSP     JPEG    JPEG 2000   EEDC-BSSP
0.8     2.53    2.08        3.01          15.53    8.23        17.16         25.05   27.84       24.70
0.4     4.34    3.27        4.10          41.19    22.82       33.87         20.82   23.41       21.68
0.2     8.34    4.91        5.12          131.02   48.28       51.51         15.80   19.83       19.31
0.1     12.40   8.08        7.82          270.45   140.47      128.15        12.50   15.46       15.81
0.05    /       13.46       12.20         /        384.84      300.65        /       10.86       11.89
Table 13. Results of the PSNR, SSIM, and MS-SSIM metrics for image beauty.
        PSNR                              SSIM                           MS-SSIM
bpp     JPEG    JPEG 2000   EEDC-BSSP     JPEG   JPEG 2000   EEDC-BSSP   JPEG   JPEG 2000   EEDC-BSSP
0.8     36.22   38.98       35.79         0.95   0.97        0.94        0.99   0.99        0.99
0.4     31.98   34.55       32.83         0.89   0.93        0.91        0.98   0.99        0.98
0.2     26.96   31.01       30.48         0.76   0.88        0.87        0.91   0.97        0.97
0.1     23.81   26.66       27.05         0.68   0.79        0.80        0.85   0.92        0.93
0.05    /       22.28       23.35         /      0.66        0.73        /      0.78        0.85
Table 14. Results of the VIF metric for image beauty.
bpp    JPEG   JPEG 2000   EEDC-BSSP
0.8    0.69   0.74        0.65
0.4    0.53   0.60        0.55
0.2    0.32   0.49        0.46
0.1    0.21   0.32        0.34
0.05   /      0.19        0.23
Table 15. Results of the AAE, MSE, and SNR metrics for image mask.
        AAE                               MSE                                SNR
bpp     JPEG    JPEG 2000   EEDC-BSSP     JPEG     JPEG 2000   EEDC-BSSP     JPEG    JPEG 2000   EEDC-BSSP
0.8     2.53    2.08        3.01          15.53    8.23        17.16         25.05   27.84       24.70
0.4     4.34    3.27        4.10          41.19    22.82       33.87         20.82   23.41       21.68
0.2     8.34    4.91        5.12          131.02   48.28       51.51         15.80   19.83       19.31
0.1     12.40   8.08        7.82          270.45   140.47      128.15        12.50   15.46       15.81
0.05    /       13.46       12.20         /        384.84      300.65        /       10.86       11.89
Table 16. Results of the PSNR, SSIM, and MS-SSIM metrics for image mask.
        PSNR                              SSIM                           MS-SSIM
bpp     JPEG    JPEG 2000   EEDC-BSSP     JPEG   JPEG 2000   EEDC-BSSP   JPEG   JPEG 2000   EEDC-BSSP
0.8     36.22   38.98       35.79         0.95   0.97        0.94        0.99   0.99        0.99
0.4     31.98   34.55       32.83         0.89   0.93        0.91        0.98   0.99        0.98
0.2     26.96   31.01       30.48         0.76   0.88        0.87        0.91   0.97        0.97
0.1     23.81   26.66       27.05         0.68   0.79        0.80        0.85   0.92        0.93
0.05    /       22.28       23.35         /      0.66        0.73        /      0.78        0.85
Table 17. Results of the VIF metric for image mask.
bpp    JPEG   JPEG 2000   EEDC-BSSP
0.8    0.69   0.74        0.65
0.4    0.53   0.60        0.55
0.2    0.32   0.49        0.46
0.1    0.21   0.32        0.34
0.05   /      0.19        0.23
Table 18. Results of the AAE, MSE, and SNR metrics for image pills.
        AAE                               MSE                                SNR
bpp     JPEG    JPEG 2000   EEDC-BSSP     JPEG     JPEG 2000   EEDC-BSSP     JPEG    JPEG 2000   EEDC-BSSP
0.8     1.75    1.48        1.92          6.55     3.95        5.98          26.15   28.44       26.67
0.4     3.29    2.47        2.59          22.05    12.19       12.40         20.88   23.56       23.41
0.2     6.17    4.25        4.19          72.37    39.12       38.71         15.74   18.41       18.46
0.1     11.94   7.23        6.51          243.65   114.56      102.60        10.55   13.68       14.13
0.05    /       13.34       10.29         /        362.35      257.54        /       8.20        9.92
Table 19. Results of the PSNR, SSIM, and MS-SSIM metrics for image pills.
        PSNR                              SSIM                           MS-SSIM
bpp     JPEG    JPEG 2000   EEDC-BSSP     JPEG   JPEG 2000   EEDC-BSSP   JPEG   JPEG 2000   EEDC-BSSP
0.8     39.97   42.17       40.36         0.97   0.98        0.97        1.00   1.00        0.99
0.4     34.70   37.27       37.20         0.92   0.96        0.96        0.98   0.99        0.99
0.2     29.54   32.21       32.25         0.82   0.90        0.92        0.94   0.97        0.97
0.1     24.26   27.54       28.02         0.68   0.82        0.85        0.84   0.93        0.94
0.05    /       22.54       24.02         /      0.69        0.76        /      0.77        0.86
Table 20. Results of the VIF metric for image pills.
bpp    JPEG   JPEG 2000   EEDC-BSSP
0.8    0.76   0.81        0.77
0.4    0.59   0.68        0.68
0.2    0.39   0.50        0.52
0.1    0.21   0.34        0.37
0.05   /      0.18        0.24
Table 21. Image distortion ratings.
Rating   Distortion
1        Particularly irritating
2        Irritating
3        Slightly irritating
4        Noticeable
5        Not noticeable
Table 22. Image quality ratings.
Rating   Quality
1        Unwatchable
2        Barely watchable
3        Watchable
4        Good
5        Excellent
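The MOS values reported in Tables 23–26 are averages of the viewers' ratings on the five-grade scales above. A trivial sketch of the computation (the range check is an added safeguard for illustration, not part of the assessment methodology):

```python
def mean_opinion_score(ratings):
    """Mean opinion score: the average of subjective 1-5 ratings."""
    if not ratings:
        raise ValueError("no ratings collected")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must lie on the 1-5 scale")
    return sum(ratings) / len(ratings)

# Five viewers rate one reconstructed image.
assert mean_opinion_score([5, 4, 4, 5, 3]) == 4.2
```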
Table 23. MOS values for image horse at various compression ratios.
bpp    JPEG   JPEG 2000   EEDC-BSSP
0.8    4.53   4.52        4.13
0.4    3.41   3.80        4.24
0.2    2.06   2.81        2.65
0.1    1.23   1.61        1.96
0.05   /      1.13        1.57
Table 24. MOS values for image beauty at various compression ratios.
bpp    JPEG   JPEG 2000   EEDC-BSSP
0.8    4.51   4.56        4.52
0.4    3.25   4.03        3.71
0.2    2.07   3.05        2.84
0.1    1.31   1.67        1.53
0.05   /      1.11        1.42
Table 25. MOS values for image mask at various compression ratios.
bpp    JPEG   JPEG 2000   EEDC-BSSP
0.8    4.91   4.95        4.92
0.4    3.50   4.04        4.46
0.2    2.91   3.36        3.16
0.1    2.05   2.16        2.17
0.05   /      1.16        1.11
Table 26. MOS values for image pills at various compression ratios.
bpp    JPEG   JPEG 2000   EEDC-BSSP
0.8    4.73   4.84        4.00
0.4    3.16   3.92        3.62
0.2    2.40   2.34        2.73
0.1    1.22   1.91        1.76
0.05   /      1.00        1.14
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


Livada, Č.; Horvat, T.; Baumgartner, A. Novel Block Sorting and Symbol Prediction Algorithm for PDE-Based Lossless Image Compression: A Comparative Study with JPEG and JPEG 2000. Appl. Sci. 2023, 13, 3152. https://doi.org/10.3390/app13053152
