1. Introduction
Multiresolution representations are one of the most efficient tools for data compression, and in particular for image compression. The multi-scale representation of a signal is well adapted to quantization or simple thresholding.
We start the algorithm with an input data set, obtaining a multiresolution version of the initial data, which is processed according to the desired application in mind. After decoding the processed representation, we obtain a discrete set which is expected to be close to the original discrete set. In order for this to be true, some form of stability is needed; i.e., the error after decoding must be controlled by the size of the perturbations introduced in the processed representation.
Harten’s framework for multiresolution provides an adequate setting for the design of discrete multiresolution representations [
1]. Discrete resolution levels are connected by inter-resolution operators, named decimation (from fine to coarse) and prediction (from coarse to fine). These inter-scale operators are directly related to the discretization and reconstruction operators, which act between the continuous level (where a function f, related to the discrete data, lives) and each discrete level (where the discrete data live). The greatest advantage of Harten's general framework lies in its adaptability. The fundamental role played by the reconstruction operator makes it possible to perform specific adaptive treatments at singularities. In general, this involves data-dependent reconstruction operators, which lead to nonlinear prediction schemes and, hence, to nonlinear multiresolution decompositions [
1].
Linear multiresolution schemes derived following Harten's framework can also be recovered from the theory of wavelets. Many applications have been found for this kind of algorithm; see, for example, References [2,3]. Nonlinearity in these contexts can bring improvements when discontinuities are present in the data.
Some nonlinear multiresolution schemes have been previously studied, and they have been the starting point for the improvement that we propose in this paper. In particular, we refer to the PPH nonlinear multiresolution scheme presented in References [
4,
5,
6], which gives quite nice visual effects in the reconstructions. This scheme is proven to be stable in one dimension (see Reference [7]), but nothing is proven for higher dimensions. A possibility to find stable nonlinear two-dimensional multiresolution schemes is to consider the non-separable approach introduced in Reference [8]. However, a good candidate for the prediction operator with the right contraction properties was still to be found. In this paper, we present subdivision and multiresolution schemes based on the use of the so-called generalized means, which give rise to sharper contractivity constants through a crucial inequality for the first differences of the proposed schemes. This fact allows us to easily prove stability results both in one and in two dimensions. The generalized means have been previously used in different practical applications with interesting results; see, for example, References [
9,
10].
Since nonlinearity seems to be crucial to obtaining more accurate results, it is also important to point out the promising role that artificial intelligence could play in the design of adapted algorithms with optimal properties; see, for example, Reference [
11], for papers on this matter.
The paper is organized as follows: In
Section 2, we recall the basic concepts of point value multiresolution in 2D. In particular, we give the two-dimensional non-separable multiresolution algorithms to be used. In
Section 3, we define and study the new particular prediction operators based on the generalized means and prove important properties. In
Section 4, we present the stability results giving the main inequality ensuring stability. Some numerical experiments are given in
Section 5. Finally, in
Section 6, we present some conclusions and future perspectives.
2. Harten Multiresolution in 2D
We introduce in this section the basic concepts about multiresolution that we will need in the rest of the paper. In particular, we will be working mainly in the point value setting. We refer the interested reader to Reference [
12] for a more detailed description about multiresolution.
Let us consider a uniform grid and the discretization operator for point values, which assigns to a continuous function the sequence of its values at the grid points. The domain of this operator is a space of continuous functions, and its range is a space of real sequences whose dimension is related with the resolution of the grid.
An associated reconstruction operator for this discretization is any right inverse of the discretization operator, which means that discretizing the reconstruction of any sequence returns that same sequence. Thus, for the point value setting, the reconstruction operator amounts to an interpolation.
The sequences of discretization and reconstruction operators define a multiresolution transform, and the prediction operator defines an associated subdivision scheme. If the reconstruction is a nonlinear operator, then the corresponding subdivision and multiresolution schemes are also nonlinear.
The decimation operator is always linear and, in the point value setting considered here, simply subsamples the fine-grid values at the even-indexed positions.
We also need to define the prediction errors, given by the difference between the fine-level values and their predictions. It is easy to prove that the errors belong to the null space of the decimation operator; therefore, taking into account that the prediction operator inherits the consistency property from the reconstruction operator, i.e., it is a right inverse of the decimation operator, the errors vanish at the even–even positions. In practice, this means that there is redundancy in the errors, and it is sufficient to keep the errors located at positions with at least one odd coordinate.
We now have all the needed ingredients to give the coding and decoding multiresolution algorithms. Let us denote first:
Then, the mentioned algorithms take the form:
These algorithms, Algorithms 1 and 2, provide nothing more than another representation of the initial data, one better adapted to compression and denoising. These processes are applied to the multiresolution representation of the data before the decompression stage. Notice that the better the nonlinear prediction, the larger the compression attained after simple truncation, since many details will be close to zero. We would also like to emphasize a strategy that allows Algorithms 1 and 2 to control the rate of compression: keep only a chosen percentage of the details in the multiresolution representation and set the rest to zero. If, on the contrary, one wants to monitor the total accumulated error expected after processing the multiresolution representation of the data and applying the decoding algorithm, then one needs to consider Algorithm 3, which includes some slight modifications according to the theoretical result in Theorem 1.
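To make the structure of the coding/decoding algorithms concrete, here is a minimal one-dimensional sketch of one step in the point value setting. The two-point average used as prediction is only a placeholder (the paper's nonlinear operator of Section 3 would take its place); the roundtrip is exact when no details are discarded.

```python
# Sketch of one coding/decoding step in 1D (point value setting).
# The midpoint average is a placeholder prediction operator, not the
# paper's nonlinear one; it is consistent (a right inverse of decimation),
# so details vanish at even positions and only odd-position details are kept.

def decimate(fine):
    # Point-value decimation: keep the even-indexed samples.
    return fine[::2]

def predict(coarse):
    # Placeholder prediction: copy at even positions, average at odd ones.
    fine = [0.0] * (2 * len(coarse) - 1)
    fine[::2] = coarse
    for i in range(len(coarse) - 1):
        fine[2 * i + 1] = 0.5 * (coarse[i] + coarse[i + 1])
    return fine

def encode(fine):
    # One coding step: coarse values plus details at the odd positions.
    coarse = decimate(fine)
    pred = predict(coarse)
    details = [fine[j] - pred[j] for j in range(1, len(fine), 2)]
    return coarse, details

def decode(coarse, details):
    # One decoding step: prediction plus stored details.
    fine = predict(coarse)
    for m, j in enumerate(range(1, len(fine), 2)):
        fine[j] += details[m]
    return fine

data = [0.0, 0.1, 0.4, 0.9, 1.6, 2.5, 3.6, 4.9, 6.4]  # 2^3 + 1 samples
coarse, details = encode(data)
reconstructed = decode(coarse, details)
assert max(abs(a - b) for a, b in zip(data, reconstructed)) < 1e-14
```

If the details were thresholded before decoding, the reconstruction error would be controlled by the discarded details, which is precisely the stability issue addressed later.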
Algorithm 1: (Coding)
for l = L,…,1
for i1, i2 = 0,…, Jl−1
compute the values at the coarser level by decimation
for the fine positions with at least one odd index
compute the prediction from the coarser values
store the corresponding detail coefficient
end
end
end
Algorithm 2: (Decoding)
for l = 1,…,L
for i1, i2 = 0,…, Jl−1
for the fine positions with at least one odd index
compute the prediction from the coarser values
recover the fine value by adding the stored detail
end
copy the values at the even–even positions from the coarser level
end
end
Algorithm 1 starts by descending one scale from the original data and then reorganizes the coefficient matrix at each step in order to continue working with the significant coefficients of the multiresolution representation to compute another scale. In Figure 1, we show a related application in image processing of the cell average version of Algorithm 1, in which it is easy to observe the scales and the different types of coefficients. On the right of Figure 1, we see two scales of the multiresolution version of the data. In the upper left corner, one can see the second step of Algorithm 1 applied to the significant coefficients resulting from the first step. In the upper right, bottom left, and bottom right corners appear the detail coefficients, which, in some cases, are below a given tolerance and have been set to zero (this is why they appear in black).
Algorithm 3: (Alternative coding to monitor the accumulated error)
Given a prescribed tolerance
for l = L,…,1
for i1, i2 = 0,…, Jl−1
compute the values at the coarser level by decimation
for the fine positions with at least one odd index
compute the prediction using (9), choosing the case according to the index
truncate and store the corresponding detail coefficient with the scale-dependent tolerance
end
end
end
3. A Prediction Operator Based on the Generalized Means
Our objective in this section is the definition of an adapted nonlinear prediction operator with desirable properties regarding adaption to potential discontinuities, order of approximation, and stability of the associated subdivision and multiresolution schemes.
First, we define the generalized means, which appear in the definition of the new prediction operator. The generalized mean of n positive values depends on a parameter k. We are interested in the case of two values, since we will be working with fourth order reconstructions, and in suitable values of the parameter k.
Notice that, in order to apply the mean in the definition of the prediction operator, we need to extend its definition to arguments of arbitrary sign, which is done by means of the sign of the arguments.
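Since the formulas for the means are not reproduced here, the following sketch uses a PPH-type power-mean family as an illustrative stand-in: it reduces to the harmonic mean for k = 2 and is extended to arguments of arbitrary sign by returning zero when the signs differ. Both the concrete formula and this sign convention are assumptions for illustration, not the paper's exact definition.

```python
# Illustrative stand-in for a generalized mean (an assumed formula):
#   M_k(x, y) = ((x + y) / 2) * (1 - |x - y|**k / |x + y|**k),  x, y > 0,
# which reduces to the harmonic mean 2xy/(x+y) for k = 2, extended to signed
# arguments by returning 0 for opposite signs (assumed sign convention).

def gen_mean(x, y, k=2):
    if x * y <= 0.0:          # opposite signs or a zero argument
        return 0.0
    s = 1.0 if x > 0 else -1.0
    ax, ay = abs(x), abs(y)
    return s * ((ax + ay) / 2.0) * (1.0 - abs(ax - ay) ** k / (ax + ay) ** k)

# For k = 2 and positive arguments this is exactly the harmonic mean.
assert abs(gen_mean(1.0, 3.0, k=2) - 2 * 1.0 * 3.0 / (1.0 + 3.0)) < 1e-14
# For this family, the mean is bounded by k times the smallest argument,
# which is the kind of adaption property exploited near discontinuities.
assert abs(gen_mean(0.01, 1000.0, k=3)) < 0.031
assert gen_mean(-1.0, 2.0) == 0.0
```

The boundedness by the smallest argument is what prevents a large second difference (a discontinuity signature) from polluting the prediction.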
Some of the basic properties of these means appear in the following lemma.
Lemma 1. For any couple of values, the generalized mean, regarded as a function of its arguments and of the parameter k, satisfies a list of basic properties: among them, it reduces to classical means for particular values of k, it is continuous in k, and it admits bounds in terms of its arguments.
We now state three more properties of these means that will be useful later to attain adaption in the case of discontinuities, order of approximation in smooth areas, and stability results, respectively.
Lemma 2. (Adaption to discontinuities) For any couple of values of very different magnitudes, the generalized mean remains bounded in terms of the smallest of them. Proof. Without loss of generality, we consider a fixed ordering of the arguments; the bound then follows directly from the definition of the mean. □
Lemma 3. (Order of approximation) For any couple of close positive values, differing by a quantity of the order of the grid size, the generalized mean agrees with the arithmetic mean up to the corresponding order of approximation. Proof. In order to get this result, it will be useful to rewrite the means in an equivalent form.
Then, our proof is based on three observations: (a) an algebraic identity for the rewritten mean; (b) a bound obtained when the two values are close to each other; and (c) an expansion obtained from the Newton binomial theorem.
The proof of the first observation comes from a direct computation. For the second observation, we simply apply the Lagrange mean value theorem to the corresponding function, with c an intermediate point between A and the other value.
Finally, to prove the third observation, we expand using the Newton binomial theorem; using well known properties of the binomial coefficients, we can regroup terms and obtain the desired expansion. Combining the three observations, it is straightforward to finish the proof.
□
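The order-of-approximation behavior of Lemma 3 can be checked numerically. The block below again assumes a PPH-type power-mean formula as an illustrative stand-in (not the paper's exact definition); for that family, the deviation from the arithmetic mean scales like h^k when the two arguments differ by O(h), so halving h should divide the deviation by about 2^k.

```python
# Numerical order check for an assumed PPH-type power mean:
#   M_k(x, y) = ((x + y) / 2) * (1 - |x - y|**k / |x + y|**k),  x, y > 0.
# For A, B = A + O(h), the deviation |M_k(A, B) - (A + B)/2| behaves like
# O(h^k), so the ratio of deviations for h and h/2 approaches 2**k.

def m_k(x, y, k):
    return ((x + y) / 2.0) * (1.0 - abs(x - y) ** k / abs(x + y) ** k)

def deviation(h, k):
    A, B = 1.0, 1.0 + h
    return abs(m_k(A, B, k) - (A + B) / 2.0)

k = 3
ratio = deviation(0.01, k) / deviation(0.005, k)
assert 7.5 < ratio < 8.5   # close to 2**k = 8
```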
Lemma 4. (Lipschitz continuity, needed for stability reasons) The generalized mean is Lipschitz continuous with respect to its arguments. Proof. The property is trivial when both means vanish. Let us consider now the case where one of the means vanishes and the other does not; supposing, without loss of generality, a convenient ordering of the arguments, the bound follows directly from the definition. The same arguments apply in the symmetric case. If both couples lie in the region where the mean is differentiable, we can use the mean value theorem for several variables, and we directly get a bound in terms of the gradient evaluated at a point in the segment between the two couples. Therefore, the proof will be finished for this case just by getting a suitable bound in the infinity norm for the gradient. Computing the first partial derivative and simplifying the resulting expression, we obtain the required bound; by symmetry, the same bound holds for the other partial derivative, and this finishes the proof of this case. The remaining case, in which the two couples lie in different regions of the definition, can be handled by combining the previous arguments.
□
For the upcoming proofs, we will also need the following lemma.
Lemma 5. The auxiliary function Z, defined from the generalized mean, satisfies two boundedness properties.
Proof. The proof of the first point comes from the bounds given by property 6 of Lemma 1 and by Lemma 2; therefore, the first affirmation is proven. The second point is trivially true just by using the previous one. □
We now use these properties of the generalized means for our purposes.
Definition 1. Given the uniform grid at scale L with its associated grid spacing, we define the prediction operator based on the generalized means by replacing the arithmetic mean of the second order divided differences appearing in the linear prediction with the generalized mean.
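The formula of the prediction operator is not reproduced here, but the classical PPH construction it generalizes replaces the arithmetic mean of the two neighboring second differences in the fourth order midpoint rule with a nonlinear mean. Here is a sketch under that assumption, using a harmonic-type mean with the usual sign convention as a placeholder for the generalized mean.

```python
# Sketch of a PPH-type nonlinear midpoint prediction (assumed structure):
#   p = (f0 + f1)/2 - (1/8) * mean(D0, D1),
# with D0, D1 the neighboring second differences. With the arithmetic mean
# this is the fourth order Lagrange midpoint rule; here the mean is replaced
# by a harmonic-type mean (a placeholder for the paper's generalized mean).

def nonlinear_mean(x, y):
    # Harmonic mean with sign convention: zero for opposite signs or zeros.
    return 2.0 * x * y / (x + y) if x * y > 0.0 else 0.0

def predict_midpoint(fm1, f0, f1, f2, mean=nonlinear_mean):
    d0 = fm1 - 2.0 * f0 + f1    # second difference, left
    d1 = f0 - 2.0 * f1 + f2     # second difference, right
    return 0.5 * (f0 + f1) - 0.125 * mean(d0, d1)

# Linear data: second differences vanish and the prediction is exact.
assert predict_midpoint(1.0, 2.0, 3.0, 4.0) == 2.5
# Near a jump: the linear rule overshoots below the data range (a Gibbs-type
# effect), while the nonlinear mean keeps the prediction inside the range.
linear = predict_midpoint(0.0, 0.0, 0.0, 1.0, mean=lambda a, b: (a + b) / 2)
assert linear == -1.0 / 16.0
assert predict_midpoint(0.0, 0.0, 0.0, 1.0) == 0.0
```

The last two assertions illustrate the adaption property: when one second difference carries the discontinuity, the nonlinear mean stays close to the smaller one and the prediction avoids the overshoot.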
In Reference [
4], it is proven that the replacement of the arithmetic mean in (1) by an adequate nonlinear mean gives rise to desirable properties regarding adaption to potential singularities, while maintaining the approximation order. The gain in using the generalized means instead of only the harmonic mean (which is recovered for a particular value of the parameter) as in Reference [4] is noticeable both in practice, giving better adaption to potential singularities (see Lemma 2), and in theory, obtaining better Lipschitz constants (see Lemma 4), which gives rise to better stability behavior and simpler stability proofs. We now focus on stability results. To start with, we introduce the following proposition, which is the basis of the stability proofs for the associated subdivision schemes in one dimension (see Reference [13]), and it will also be needed for the two-dimensional non-separable multiresolution that we present.
Proposition 1. If, removing the index L for simplicity, we consider the differences of the values produced by the proposed prediction operator, then they satisfy three bounds, distinguishing the cases given by the parity of the indexes; these bounds provide the one-step contraction needed for stability.
Proof. Let us prove the first point. Considering the first family of indexes, we have, using Definition 1, an explicit expression for the differences; using now property 1 of Lemma 5, we get the desired bound. For the remaining family of indexes, using property 6 of Lemma 1, we obtain the bound as well. Let us now prove the second point. Again, we consider the two families of indexes separately, and the corresponding bounds follow from the same expressions. Finally, to prove the third point, we also consider the two families of indexes and proceed analogously.
□
Notice that equality occurs only for the limiting value of the parameter, which leaves the previously known reconstruction operator, recovered in this setting for that value, outside of the upcoming stability results. This means that, for the admissible values of the parameter, we get the contractivity of the differences in only one step of subdivision, and this simplifies the theory to a great extent and allows us to obtain stability also for two dimensions in an easy way, as shown in the next section.
4. Stability Results for a Non-Separable Multiresolution in Two Dimensions
Let us consider the non-separable multiresolution transformations given by Algorithms 1 and 2 in Section 2. These algorithms are quite general and valid for a large range of prediction operators. However, in order to apply the upcoming stability theorem, we need to define a prediction operator satisfying several properties; in particular, its differences must be controlled by a linear operator verifying a contraction property.
We define our prediction operator in two dimensions in the following way. Supposing the data at the coarser scale is already known, we compute the data at the finer scale using the proposed one-dimensional prediction operator defined in Section 3. Since our one-dimensional prediction is local, so is the two-dimensional one. In Figure 2, we can see the disposition of the considered cells used to compute the proposed prediction operator.
If we suppose that we have the values at the coarser level, then we propose the following calculations to get the needed values at the finer level, applying the one-dimensional prediction operator along appropriate directions. Of course, the values at the even–even positions are just projections from the coarser level. Notice that, from the definition, we immediately get that the required properties for the prediction operator are satisfied, just by using Proposition 1.
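A minimal sketch of this two-dimensional construction: even–even fine values are projections of the coarse data, and the remaining values come from applying a one-dimensional prediction along rows and then along columns. The concrete 1D operator below is a midpoint-average placeholder, and this particular ordering of directions is an assumption, since the stencil of Figure 2 is not reproduced here.

```python
# Sketch of a 2D prediction built from a 1D one: even-even positions are
# projections from the coarse level; the rest are filled by applying the
# 1D prediction along rows and then along columns. The midpoint average is
# a placeholder for the nonlinear 1D operator of Section 3.

def predict_1d(coarse_row):
    fine = [0.0] * (2 * len(coarse_row) - 1)
    fine[::2] = coarse_row                      # projections
    for i in range(len(coarse_row) - 1):
        fine[2 * i + 1] = 0.5 * (coarse_row[i] + coarse_row[i + 1])
    return fine

def predict_2d(coarse):
    # Step 1: refine every coarse row (fills the even fine rows).
    rows = [predict_1d(r) for r in coarse]
    # Step 2: refine every column of the intermediate grid.
    cols = list(zip(*rows))
    fine_cols = [predict_1d(list(c)) for c in cols]
    return [list(r) for r in zip(*fine_cols)]

coarse = [[0.0, 1.0, 2.0],
          [1.0, 2.0, 3.0],
          [2.0, 3.0, 4.0]]
fine = predict_2d(coarse)
assert len(fine) == 5 and len(fine[0]) == 5
assert fine[0][0] == 0.0 and fine[2][2] == 2.0   # even-even = projections
assert fine[1][1] == 1.0                          # exact for linear data
```

Because the 1D operator is local, the resulting 2D operator is local as well, which is the property used in the stability argument.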
With all these ingredients, we can now give the following theorem regarding the stability of the two-dimensional multiresolution transform coming from the use of the prediction operator defined through the generalized means (8).
Theorem 1. The non-separable multiresolution transform associated with the prediction operator related to the generalized means satisfies a stability estimate: the distance between the reconstructions from the original and the perturbed multiresolution representations is bounded in terms of the perturbations of the coarse data and of the details. Therefore, we get the stability of the decoding multiresolution transformation.
The proof of this theorem is a particularization of a general proof, valid for prediction operators that contract in one step, which can be found in Reference [
8].
A Specific Coding Algorithm Controlling the Committed Error
Given a prescribed tolerance, and using Theorem 1, one can control how to carry out the truncation of the details at each scale of the multiresolution pyramid so that the final committed error is bounded by the specified tolerance. In this case, one loses control of how much compression is attained in favor of controlling the final error at the decompression stage. We now give a slightly modified version of Algorithm 1 such that the total accumulated error is under control as explained above. Notice that, in order to decompress the signal, one just needs to apply the same decompression Algorithm 2, without any change, to the truncated version of the multiresolution representation of the data. We use a truncation operator that sets to zero all entries of the detail vectors falling below the scale-dependent tolerance.
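A sketch of the truncation step, assuming a hard threshold with a per-scale tolerance. How the global tolerance is split among scales depends on the constants of Theorem 1, which are not reproduced here, so the uniform split below is only an assumption.

```python
# Sketch of the truncation in Algorithm 3: detail entries whose magnitude
# falls below a scale-dependent tolerance are set to zero. The uniform split
# eps/L of the global tolerance among the L scales is an assumption; the
# actual split follows from the constants in Theorem 1.

def truncate(details, tol):
    # Hard thresholding: keep an entry only if it exceeds the tolerance.
    return [d if abs(d) > tol else 0.0 for d in details]

def truncate_pyramid(pyramid, eps):
    # pyramid: list of detail vectors, one per scale l = 1, ..., L.
    L = len(pyramid)
    return [truncate(details, eps / L) for details in pyramid]

pyramid = [[0.5, 0.001, -0.2], [0.0005, -0.3, 0.004]]
out = truncate_pyramid(pyramid, eps=0.02)   # per-scale tolerance 0.01
assert out == [[0.5, 0.0, -0.2], [0.0, -0.3, 0.0]]
```

With this strategy the compression ratio is data-dependent, but the accumulated reconstruction error after decoding stays below the prescribed tolerance.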
5. Numerical Experiments
In this section, we offer some numerical tests to compare our proposed non-separable multiresolution algorithm with other existing multiresolution transformations in the literature. At the same time, we verify the numerical stability and the overall performance of the schemes. Let us consider a discrete uniform grid in a rectangle and a test function that is discontinuous along a straight line. Since the function has a jump along a curve, we can expect that nonlinear methods will work much better in this case than their linear counterparts, which are known to produce artificial maxima and minima around the jump discontinuity. These artificial maxima and minima are not reduced by taking smaller grid sizes when using linear methods. These undesirable features are widely known as Gibbs effects [
14].
We consider the discretization of the function by point values in the given rectangle. From these data, we perform a multiresolution decomposition, keep a percentage of the details, and then decode the processed multiresolution version of the data to obtain an approximation to the original data. We take into account the following prediction operators:
LAG stands for the tensor product multiresolution transform based on fourth order accurate Lagrange prediction operator. This transform is linear and, therefore, stable, but it does not adapt to discontinuities.
ENO stands for the tensor product multiresolution transform based on the fourth order accurate ENO prediction operator. This transform is nonlinear and obtains acceptable resolution of the edges when noise is not present and the discontinuities are well defined. However, it presents stability problems, forcing one to keep the majority of the details to get an appropriate performance.
The remaining three transforms stand for the proposed non-separable multiresolution scheme with three increasing values of the parameter k of the generalized means. These three multiresolution transforms are nonlinear by definition and theoretically stable, as proven in Theorem 1.
In our experiment, we have descended two scales in the multiresolution algorithm, and we have kept only a small percentage of the details. In Figure 3, we see the original function at the top-left and the reconstructions obtained with the considered schemes in the remaining panels. It is immediate to see how the proposed nonlinear scheme performs better. Gibbs effects are observed in the linear reconstruction, and stability problems are present for ENO, which translate into undesirable visual effects around the discontinuity. In Figure 4, we see how the proposed algorithms progressively improve the accuracy in the definition of the discontinuity as k increases. This fact comes from the property in Lemma 2. All these visual appreciations are reinforced by the numerical results offered in Table 1. In particular, we clearly see the Gibbs effects of the linear scheme, the instabilities of ENO, and the improvement of the proposed schemes with increasing k by paying attention to the fourth column, where the infinity norm of the errors between the original signal and the reconstructed signal is shown. All these computations and graphical representations were carried out using MATLAB on an Intel(R) Core(TM) i7-3770 processor.
Remark 1. This test could simulate the compression of real geographical data, such as elevations or depths of areas of difficult access, for instance, in oceanography. The presented methods are designed to work well especially where cliffs and similar terrain irregularities are encountered.