Article

On the Application of the Generalized Means to Construct Multiresolution Schemes Satisfying Certain Inequalities Proving Stability

1 Departamento de Matemática Aplicada y Estadística, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
2 Departamento de Matemáticas y Computación, Universidad de La Rioja, 26006 Logroño, Spain
3 Departamento de Matemáticas, Universidad de Valencia, 46100 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(5), 533; https://doi.org/10.3390/math9050533
Submission received: 9 February 2021 / Revised: 20 February 2021 / Accepted: 24 February 2021 / Published: 4 March 2021
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing)

Abstract: Multiresolution representations of data are powerful tools in data analysis and processing, and they are particularly interesting for data compression. In order to obtain a proper definition of the edges, a good option is to use nonlinear reconstructions. These nonlinear reconstructions are the heart of the prediction processes appearing in the definition of nonlinear subdivision and multiresolution schemes. We define and study some nonlinear reconstructions based on the use of nonlinear means, more concretely the so-called generalized means. These means have two interesting properties that allow us to obtain associated reconstruction operators adapted to the presence of discontinuities and attaining the maximum possible order of approximation in smooth areas. Once these nonlinear reconstruction operators are defined, we build the related nonlinear subdivision and multiresolution schemes and prove sharper inequalities regarding the contractivity of the scheme for the first differences and, in turn, the corresponding stability results. In this paper, we also define a new nonlinear two-dimensional multiresolution scheme that is non-separable, i.e., not based on a tensor product. We then study the stability of the scheme and present numerical experiments reinforcing the proven theoretical results and showing the usefulness of the algorithm.

1. Introduction

Multiresolution representations are one of the most efficient tools for data compression, and in particular for image compression. The multi-scale representation of a signal is well adapted to quantization or simple thresholding.
We start the algorithm with input data $f^L$ and obtain a multiresolution version of the initial data, which is processed according to the desired application. After decoding the processed representation, we obtain a discrete set $\hat{f}^L$, which is expected to be close to the original discrete set $f^L$. In order for this to be true, some form of stability is needed, i.e., we must require that
$$\|\hat{f}^L - f^L\| \le \sigma(\epsilon^0, \epsilon^1, \ldots, \epsilon^L),$$
where $\sigma(\cdot, \ldots, \cdot)$ satisfies
$$\lim_{\epsilon^l \to 0,\; 0 \le l \le L} \sigma(\epsilon^0, \epsilon^1, \ldots, \epsilon^L) = 0.$$
Harten’s framework for multiresolution provides an adequate setting for the design of discrete multiresolution representations [1]. Discrete resolution levels are connected by inter-resolution operators, named decimation (from the fine level $k$ to the coarse level $k-1$) and prediction (from coarse to fine). These inter-scale operators are directly related to the discretization and reconstruction operators, which act between the continuous level (where a function $f$, related to the discrete data, lives) and each discrete level (where $f^k$ lives). The greatest advantage of Harten’s general framework lies in its adaptability. The fundamental role played by the reconstruction operator makes it possible to perform specific adaptive treatments at singularities. In general, this involves data-dependent reconstruction operators, which lead to nonlinear prediction schemes and, hence, to nonlinear multiresolution decompositions [1].
Linear multiresolution schemes derived following Harten’s framework can also be recovered from the theory of wavelets. Many applications have been found for this kind of algorithm; see, for example, References [2,3]. Nonlinearity in these contexts can bring some improvements when discontinuities are present in the data.
Some nonlinear multiresolution schemes have been previously studied, and they have been the starting point for the improvement that we propose in this paper. In particular, we refer to the PPH nonlinear multiresolution scheme presented in References [4,5,6], which gives quite nice visual effects in the reconstructions. This scheme is proven to be stable in 1D (see Reference [7]), but nothing is proven for higher dimensions. A possibility for finding nonlinear stable two-dimensional multiresolution schemes is to consider the non-separable approach introduced in Reference [8]. However, a good candidate for the prediction operator with the right contraction properties was still to be found. In this paper, we present subdivision and multiresolution schemes based on the use of the so-called generalized means, which give rise to sharper contractivity constants in a crucial inequality for the first differences of the proposed schemes. This fact allows us to easily prove sharper stability results both in 1D and in 2D. We know some references where the generalized means have been previously used in different practical applications with interesting results; see, for example, References [9,10].
Since nonlinearity seems to be crucial to obtain more accurate results, it is also important to point out the promising role that artificial intelligence could play in designing adapted algorithms with optimal properties; see, for example, Reference [11].
The paper is organized as follows: In Section 2, we recall the basic concepts of point value multiresolution in 2D. In particular, we give the two-dimensional non-separable multiresolution algorithms to be used. In Section 3, we define and study the new particular prediction operators based on the generalized means and prove important properties. In Section 4, we present the stability results giving the main inequality ensuring stability. Some numerical experiments are given in Section 5. Finally, in Section 6, we present some conclusions and future perspectives.

2. Harten Multiresolution in 2D

We introduce in this section the basic concepts about multiresolution that we will need for the rest of the paper. In particular, we will be working mainly in the point value setting. We refer the interested reader to Reference [12] for a more detailed description of multiresolution.
Let us consider the grid in $[0,1]^2$ given by
$$X^l = \{(x_{i_1}^l, x_{i_2}^l)\}_{i_1,i_2=0}^{J_l}, \qquad J_l = 2^l J_0, \quad J_0 \text{ an integer}, \quad h_l = \frac{1}{J_0 2^l},$$
and the discretization operator for point values
$$\mathcal{D}_l : C([0,1]^2) \to V^l, \qquad f \mapsto f^l = (f_{i_1,i_2}^l)_{i_1,i_2=0}^{J_l},$$
where $f_{i_1,i_2}^l$, $0 \le i_s \le J_l$, $s = 1, 2$, is defined by
$$f_{i_1,i_2}^l := f(x_{i_1}^l, x_{i_2}^l).$$
$C([0,1]^2)$ is the space of continuous functions in $[0,1]^2$, and $V^l$ is the space of real sequences of dimension $(J_l + 1)^2$ related to the resolution of $X^l$.
An associated reconstruction operator $\mathcal{R}_l$ for this discretization is any right inverse of $\mathcal{D}_l$, which means that, for all $f^l \in V^l$, $\mathcal{R}_l f^l \in C([0,1]^2)$ and
$$f_{i_1,i_2}^l = (\mathcal{R}_l f^l)(x_{i_1}^l, x_{i_2}^l).$$
Thus, for the point value setting, the reconstruction operator amounts to an interpolation.
The sequences $\{\mathcal{D}_l\}$ and $\{\mathcal{R}_l\}$ define a multiresolution transform, and the prediction operator, $P_l^{l+1} := \mathcal{D}_{l+1}\mathcal{R}_l : V^l \to V^{l+1}$, defines an associated subdivision scheme. If $\mathcal{R}_l$ is a nonlinear operator, then the corresponding subdivision and multiresolution schemes are also nonlinear.
The decimation operator $D_l^{l-1} : V^l \to V^{l-1}$ is always linear and, in our case, can be expressed as
$$f_{i_1,i_2}^{l-1} = (D_l^{l-1} f^l)_{i_1,i_2} = (\mathcal{D}_{l-1}\mathcal{R}_l f^l)_{i_1,i_2} = f_{2i_1,2i_2}^l.$$
We also need to define the errors that, in this case, are given by
$$e_{j_1,j_2}^l := f_{j_1,j_2}^l - (P_{l-1}^{l} f^{l-1})_{j_1,j_2}.$$
It is easy to prove that the errors belong to the null space of the decimation operator; in fact,
$$D_l^{l-1} e_{2i_1,2i_2}^l = D_l^{l-1}\big(f_{2i_1,2i_2}^l - (P_{l-1}^{l} f^{l-1})_{2i_1,2i_2}\big) = f_{i_1,i_2}^{l-1} - D_l^{l-1}(P_{l-1}^{l} f^{l-1})_{2i_1,2i_2};$$
therefore, taking into account that the prediction operator inherits the consistency property from the reconstruction operator, i.e., it is a right inverse of the decimation operator, we have
$$e_{2i_1,2i_2}^l = 0,$$
which, in practice, means that there is redundancy in the errors, and it is sufficient to keep the errors located at positions with at least one odd coordinate.
We now have all the needed ingredients to give the coding and decoding multiresolution algorithms. Let us denote first:
$$\mathcal{J} := \{(j_1, j_2) : j_s \in \{2i_s - 1, 2i_s\},\; s = 1, 2\}, \qquad \mathcal{J}^* := \mathcal{J} \setminus \{(2i_1, 2i_2)\}.$$
Then, the mentioned algorithms take the form:
These algorithms, Algorithms 1 and 2, provide nothing more than another representation of the initial data, better adapted to compression and denoising processes. These processes are applied to the multiresolution representation of the data $\mu(f^L)$ before the decompression stage. Notice that the better the nonlinear prediction, the larger the attained compression after simple truncation, since many details will be close to zero. We would also like to emphasize a strategy that allows Algorithms 1 and 2 to control the compression rate: keep a chosen percentage of the details in the multiresolution representation and set the rest of them to zero. If, on the contrary, one wants to control the total accumulated error expected after processing the multiresolution representation of the data and applying the decoding algorithm, then one needs to consider Algorithm 3, which includes some slight modifications according to the theoretical result in Theorem 1.
Algorithm 1:  $\mu(f^L) = M f^L$   (Coding)
       for l = L, …, 1
        for i_1, i_2 = 0, …, J_{l-1}
          $f_{i_1,i_2}^{l-1} = f_{2i_1,2i_2}^{l}$
         for $(j_1, j_2) \in \mathcal{J}^*$
           $e_{j_1,j_2}^{l} = f_{j_1,j_2}^{l} - (P_{l-1}^{l} f^{l-1})_{j_1,j_2}$
         end
        end
       end
Algorithm 2:  $f^L = M^{-1}\mu(f^L)$   (Decoding)
       for l = 1, …, L
        for i_1, i_2 = 0, …, J_{l-1}
         for $(j_1, j_2) \in \mathcal{J}^*$
           $f_{j_1,j_2}^{l} = (P_{l-1}^{l} f^{l-1})_{j_1,j_2} + e_{j_1,j_2}^{l}$
         end
          $f_{2i_1,2i_2}^{l} = f_{i_1,i_2}^{l-1}$
        end
       end
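For concreteness, the coding and decoding loops above can be sketched in a few lines of NumPy. The GM-based prediction operator is only defined in Section 3, so this sketch plugs in a simple tensor-product midpoint (linear) prediction as a stand-in; the function and variable names are ours, not the paper's. Since every detail coefficient is stored, decoding inverts coding exactly:

```python
import numpy as np

def predict(coarse):
    """Stand-in linear prediction: copy even-even points (consistency with
    decimation) and fill the remaining points by midpoint interpolation."""
    n = coarse.shape[0]
    fine = np.zeros((2 * n - 1, 2 * n - 1))
    fine[::2, ::2] = coarse
    fine[::2, 1::2] = 0.5 * (coarse[:, :-1] + coarse[:, 1:])   # odd columns
    fine[1::2, :] = 0.5 * (fine[:-2:2, :] + fine[2::2, :])     # odd rows
    return fine

def encode(f, L):
    """Algorithm 1 (coding): peel off L scales, keeping detail coefficients."""
    details = []
    for _ in range(L):
        coarse = f[::2, ::2]                 # point-value decimation
        details.append(f - predict(coarse))  # errors; zero at even-even points
        f = coarse
    return f, details[::-1]                  # (f^0, [e^1, ..., e^L])

def decode(f0, details):
    """Algorithm 2 (decoding): rebuild the fine scales from f^0 upwards."""
    f = f0
    for e in details:
        f = predict(f) + e
    return f
```

With a nonlinear prediction dropped in place of `predict`, the loop structure is unchanged; only the detail coefficients (and hence the compressibility) differ.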
Algorithm 1 starts by descending one scale from the original data and then reorganizes the coefficient matrix at each step in order to continue working with the significant coefficients of the multiresolution representation to compute another scale. In Figure 1, we show a related application in image processing of the cell average version of Algorithm 1, in which it is easy to observe the scales and the different types of coefficients. In Figure 1, on the right, we see two scales of the multiresolution version of the data. In the upper left corner, one can see the second step of Algorithm 1 for $L = 2$ applied to the significant coefficients resulting after the first step for $L = 1$. In the upper right, bottom left, and bottom right corners appear the detail coefficients, which, in some cases, are below a given tolerance and have been set to zero (this is why they appear in black).
Algorithm 3:  $\tilde{\mu}(f^L) = \tilde{M} f^L$   (Alternative coding to control the accumulated error)
       Given $\epsilon$
        $\delta = \frac{\epsilon}{\tilde{C} L}$
       for l = L, …, 1
        for i_1, i_2 = 0, …, J_{l-1}
          $f_{i_1,i_2}^{l-1} = f_{2i_1,2i_2}^{l}$
         for $(j_1, j_2) \in \mathcal{J}^*$
         Compute $(P_{l-1}^{l} f^{l-1})_{j_1,j_2}$ using (9), choosing the case
         according to the index $(j_1, j_2)$
           $e_{j_1,j_2}^{l} = f_{j_1,j_2}^{l} - (P_{l-1}^{l} f^{l-1})_{j_1,j_2}$
         end
        end
         $\tilde{e}^l = tr(e^l, \delta)$
       end
   $\tilde{\mu}(f^L) = \{f^0, \tilde{e}^1, \ldots, \tilde{e}^L\}$.

3. A Prediction Operator Based on the Generalized Means

Our objective in this section is the definition of an adapted nonlinear prediction operator with desirable properties regarding adaption to potential discontinuities, order of approximation, and stability of the associated subdivision and multiresolution schemes.
First, we define the generalized means, which appear in the definition of the new prediction operator. The generalized mean of order $m \in \mathbb{Z}$ of $n$ positive values $x_1, x_2, \ldots, x_n$ is given by
$$GM_m(x_1, x_2, \ldots, x_n) := \begin{cases} (x_1 x_2 \cdots x_n)^{\frac{1}{n}}, & m = 0, \\[4pt] \left(\dfrac{1}{n}\displaystyle\sum_{i=1}^{n} x_i^m\right)^{\frac{1}{m}}, & m \ne 0. \end{cases}$$
We are interested in the case $n = 2$, since we will be working in 2D with fourth order reconstructions, and in the value of the parameter $m = -k$, $k \in \mathbb{N} \setminus \{0\}$. Therefore, the considered means read
$$GM_{-k}(x_1, x_2) := \left(\frac{2 x_1^k x_2^k}{x_1^k + x_2^k}\right)^{\frac{1}{k}}.$$
Notice that, in order to apply the $GM_{-k}$ mean in the definition of the prediction operator, we need to redefine it in $\mathbb{R}^2$ in the following way:
$$GM_{-k}(x, y) := \begin{cases} \mathrm{sgn}(x)\left(\dfrac{2 x^k y^k}{x^k + y^k}\right)^{\frac{1}{k}}, & xy > 0, \\[4pt] 0, & \text{otherwise}, \end{cases}$$
where $\mathrm{sgn}(x)$ stands for the sign of $x$.
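As a quick sanity check of this definition, the signed mean $GM_{-k}$ is short to implement. The following Python sketch uses our own naming, with absolute values inside the root so that the expression stays well defined when both arguments are negative:

```python
import numpy as np

def gm(x, y, k):
    """Signed generalized mean GM_{-k}(x, y): zero when x*y <= 0, otherwise
    the power mean of order -k of |x|, |y|, carrying the sign of x."""
    if x * y <= 0:
        return 0.0
    ax, ay = abs(x), abs(y)
    return np.sign(x) * (2 * ax**k * ay**k / (ax**k + ay**k)) ** (1.0 / k)
```

For $k = 1$ this reproduces the harmonic mean $2xy/(x+y)$, and for any $k$ the output never exceeds $2^{1/k}\min\{|x|,|y|\}$ in absolute value, which is the adaption property exploited below.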
Some basic properties of these means appear in the following lemma.
Lemma 1.
For any couple $(x, y) \in \mathbb{R}^2$, the function $GM_{-k}$, with $k \in \mathbb{N} \setminus \{0\}$, satisfies the following properties:
  • $GM_{-i}(x, y) \le GM_{-j}(x, y)$, if $i \ge j$.
  • $GM(k) = GM_{-k}(x, y)$ is continuous in $k$.
  • $GM_{-k}(x, y) = GM_{-k}(y, x)$.
  • $GM_{-k}(-x, -y) = -GM_{-k}(x, y)$.
  • $GM_{-k}(x, y) = 0$, if $xy \le 0$.
  • $\min\{|x|, |y|\} \le |GM_{-k}(x, y)| \le \max\{|x|, |y|\}$.
We now state three more properties of these means that will be useful later to attain adaption in the presence of discontinuities, order of approximation in smooth areas, and stability results, respectively.
Lemma 2.
(Adaption to discontinuities) For any couple $(x, y) \in \mathbb{R}_+^2$, i.e., $x > 0$, $y > 0$,
$$|GM_{-k}(x, y)| \le 2^{\frac{1}{k}} \min\{|x|, |y|\}.$$
Proof. 
Without loss of generality, we consider $x < y$:
$$GM_{-k}(x, y) = \sqrt[k]{\frac{2 x^k y^k}{x^k + y^k}} = x \sqrt[k]{\frac{2 y^k}{x^k + y^k}} \le 2^{\frac{1}{k}} x \sqrt[k]{\frac{y^k}{x^k + y^k}} \le 2^{\frac{1}{k}} x = 2^{\frac{1}{k}} \min\{x, y\}. \qquad \square$$
Lemma 3.
(Order of approximation) For any couple $(x, y) \in \mathbb{R}_+^2$, i.e., $x > 0$, $y > 0$, satisfying $x = O(1)$, $y = O(1)$, and $|x - y| = O(h)$, we have
$$\left|GM_{-k}(x, y) - \frac{x + y}{2}\right| = O(h^2).$$
Proof. 
In order to get this result, it will be useful to rewrite the $GM_{-k}$ means as
$$GM_{-k}(x, y) := \left(\frac{2 x^k y^k}{x^k + y^k}\right)^{\frac{1}{k}} = \left(\frac{x^k + y^k}{2}\right)^{\frac{1}{k}} \left(1 - \left(\frac{x^k - y^k}{x^k + y^k}\right)^2\right)^{\frac{1}{k}}.$$
Then, our proof is based on the following observations:
(a) $\left|\dfrac{2 x^k y^k}{x^k + y^k} - \dfrac{x^k + y^k}{2}\right| = O(h^2)$.
(b) If $A > 0$, $B > 0$ satisfy $|A - B| = O(h^2)$, then $\left|A^{\frac{1}{k}} - B^{\frac{1}{k}}\right| = O(h^2)$.
(c) $\left|\dfrac{x^k + y^k}{2} - \left(\dfrac{x + y}{2}\right)^k\right| = O(h^2)$.
The proof of the first observation comes from the fact that
$$\frac{2 x^k y^k}{x^k + y^k} = \frac{x^k + y^k}{2} \left(1 - \left(\frac{x^k - y^k}{x^k + y^k}\right)^2\right);$$
hence,
$$\left|\frac{2 x^k y^k}{x^k + y^k} - \frac{x^k + y^k}{2}\right| = \left|\frac{x^k + y^k}{2} \left(\frac{(x - y)(x^{k-1} + x^{k-2} y + \cdots + y^{k-1})}{x^k + y^k}\right)^2\right| = O(h^2).$$
For the second observation, we simply apply the Lagrange mean value theorem to the function $f(x) = x^{\frac{1}{k}}$; thus,
$$\left|A^{\frac{1}{k}} - B^{\frac{1}{k}}\right| = \left|\frac{1}{k} c^{\frac{1}{k}-1}(A - B)\right| = O(h^2),$$
with $c$ an intermediate point between $A$ and $B$, and hence $c = O(1)$.
Finally, to prove the third observation, we use the following developments based on the Newton binomial theorem:
$$\left(\frac{x + y}{2}\right)^k = \frac{1}{2^k}\sum_{j=0}^{k}\binom{k}{j} x^j y^{k-j}, \qquad \frac{x^k + y^k}{2} = \frac{1}{2^k}\left((1+1)^{k-1} x^k + (1+1)^{k-1} y^k\right) = \frac{1}{2^k}\left(\sum_{j=0}^{k-1}\binom{k-1}{j} x^k + \sum_{j=0}^{k-1}\binom{k-1}{j} y^k\right).$$
Using the following well known properties of the binomial coefficients,
$$\binom{k}{j} = \binom{k-1}{j} + \binom{k-1}{k-j} \quad \text{and} \quad \binom{k-1}{j-1} = \binom{k-1}{k-j},$$
we can regroup terms and get
$$\left|\frac{x^k + y^k}{2} - \left(\frac{x + y}{2}\right)^k\right| = \frac{1}{2^k}\left|\sum_{j=1}^{k-1}\left(\binom{k-1}{j} x^j (x^{k-j} - y^{k-j}) + \binom{k-1}{k-j} y^{k-j} (y^j - x^j)\right)\right| = \frac{1}{2^k}\left|\sum_{j=1}^{k-1}\binom{k-1}{j} (x^j - y^j)(x^{k-j} - y^{k-j})\right| = O(h^2),$$
since $x^s - y^s = (x - y)(x^{s-1} + x^{s-2} y + \cdots + x y^{s-2} + y^{s-1})$.
Finally, combining the three observations, it is straightforward to finish the proof. □
Lemma 4.
(Lipschitz, needed for stability reasons) For any couples $(x, y), (x', y') \in \mathbb{R}^2$,
$$|GM_{-k}(x, y) - GM_{-k}(x', y')| \le 2^{\frac{1}{k}} \max\{|x - x'|, |y - y'|\}.$$
Proof. 
The property is trivial if $xy \le 0$ and $x'y' \le 0$.
Let us consider now the case $xy > 0$ and $x'y' \le 0$, and let us suppose, without loss of generality, $x x' \le 0$; then,
$$|GM_{-k}(x, y) - GM_{-k}(x', y')| = |GM_{-k}(x, y)| \le 2^{\frac{1}{k}} |x| \le 2^{\frac{1}{k}} |x - x'| \le 2^{\frac{1}{k}} \max\{|x - x'|, |y - y'|\}.$$
The same arguments hold for the case $xy \le 0$ and $x'y' > 0$.
If $xy > 0$ and $x'y' > 0$ with $x x' > 0$, we can use the mean value theorem for several variables, and we directly get
$$|GM_{-k}(x, y) - GM_{-k}(x', y')| \le \|\nabla GM_{-k}(\theta)\|_1 \max\{|x - x'|, |y - y'|\},$$
where $\theta$ is a point in the segment between $(x, y)$ and $(x', y')$. Therefore, the proof will be finished for this case just by getting a suitable bound for the gradient. Computing $\partial_x GM_{-k}$, we get
$$\partial_x GM_{-k} = \frac{1}{k}\left(\frac{2 x^k y^k}{x^k + y^k}\right)^{\frac{1}{k}-1}\left(\frac{2 k x^{k-1} y^k}{x^k + y^k} - \frac{2 k x^{k-1} x^k y^k}{(x^k + y^k)^2}\right),$$
and, simplifying the last expression, $\partial_x GM_{-k} = \dfrac{2^{\frac{1}{k}} y^{k+1}}{(x^k + y^k)^{\frac{1}{k}+1}}$. By symmetry, we also have $\partial_y GM_{-k} = \dfrac{2^{\frac{1}{k}} x^{k+1}}{(x^k + y^k)^{\frac{1}{k}+1}}$. Thus,
$$|\partial_x GM_{-k}| + |\partial_y GM_{-k}| = 2^{\frac{1}{k}}\left(\left|\frac{y^k}{x^k + y^k}\,\frac{y}{(x^k + y^k)^{\frac{1}{k}}}\right| + \left|\frac{x^k}{x^k + y^k}\,\frac{x}{(x^k + y^k)^{\frac{1}{k}}}\right|\right) \le 2^{\frac{1}{k}},$$
and this finishes the proof of this case.
The remaining case is $xy > 0$, $x'y' > 0$ and $x x' < 0$. We can proceed as follows:
$$|GM_{-k}(x, y) - GM_{-k}(x', y')| \le |GM_{-k}(x, y)| + |GM_{-k}(x', y')| \le 2^{\frac{1}{k}} |x| + 2^{\frac{1}{k}} |x'| = 2^{\frac{1}{k}} |x - x'| \le 2^{\frac{1}{k}} \max\{|x - x'|, |y - y'|\}. \qquad \square$$
For the upcoming proofs, we will also need the following lemma.
Lemma 5.
The function $Z$ defined in $\mathbb{R}^3$ by $Z(x, y, z) = \dfrac{x}{2} - \dfrac{1}{8}\left(GM_{-k}(x, y) + GM_{-k}(x, z)\right)$ satisfies the following properties:
  • $|Z(x, y, z)| \le \dfrac{|x|}{2}$,
  • $\mathrm{sign}(Z(x, y, z)) = \mathrm{sign}(x)$.
Proof. 
The proof of the first point comes from the fact that $|GM_{-k}(x, y)| \le 2^{\frac{1}{k}} |x|$ and $|GM_{-k}(x, z)| \le 2^{\frac{1}{k}} |x|$, according to property 6 of Lemma 1 and Lemma 2. Thus,
$$\left|\frac{1}{8}\left(GM_{-k}(x, y) + GM_{-k}(x, z)\right)\right| \le \frac{2^{\frac{1}{k}}}{4}|x| \le \frac{1}{2}|x| \quad \text{for } k \ge 1.$$
Since both generalized means carry the sign of $x$ (or vanish), we get $|Z(x, y, z)| \le \frac{|x|}{2}$, and the first affirmation is proven. The second point then follows directly from the same observations. □
We now use these properties of the generalized means for our purposes.
Definition 1.
Given the uniform grid $X^L = (x_j^L)_{j \in \mathbb{Z}}$ at scale $L$ with grid spacing $h$, we define the prediction operator $G_L^{L+1}$ based on the generalized means by $G_L^{L+1} : \ell^\infty(\mathbb{Z}) \to \ell^\infty(\mathbb{Z})$,
$$f^{L+1} = G_L^{L+1} f^L := \begin{cases} f_{2j}^{L+1} = f_j^L, \\[4pt] f_{2j+1}^{L+1} = \dfrac{f_j^L + f_{j+1}^L}{2} - \dfrac{h^2}{4}\, GM_{-k}\big(D f_j^L, D f_{j+1}^L\big), \end{cases}$$
where $D f_s^L := \dfrac{f_{s-1}^L - 2 f_s^L + f_{s+1}^L}{2 h^2}$ stands for the second order divided difference.
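One subdivision step of $G_L^{L+1}$ can be sketched as follows. The code and boundary handling are ours (the second differences are simply set to zero at the two endpoints, which only degrades the two outermost odd predictions), and `gm` is the signed generalized mean $GM_{-k}$ as defined above:

```python
import numpy as np

def gm(x, y, k):
    """Signed generalized mean GM_{-k} (zero when x*y <= 0)."""
    if x * y <= 0:
        return 0.0
    ax, ay = abs(x), abs(y)
    return np.sign(x) * (2 * ax**k * ay**k / (ax**k + ay**k)) ** (1.0 / k)

def gm_subdivide(f, h, k=2):
    """One step of Definition 1: even points copy f; odd points take the
    midpoint value corrected by the generalized mean of the two neighboring
    second divided differences Df_s = (f_{s-1} - 2 f_s + f_{s+1}) / (2 h^2)."""
    n = len(f)
    D = np.zeros(n)                     # zero at the boundary (our assumption)
    D[1:-1] = (f[:-2] - 2.0 * f[1:-1] + f[2:]) / (2.0 * h * h)
    fine = np.zeros(2 * n - 1)
    fine[::2] = f                       # consistency: projection of coarse data
    for j in range(n - 1):
        fine[2 * j + 1] = 0.5 * (f[j] + f[j + 1]) - (h * h / 4.0) * gm(D[j], D[j + 1], k)
    return fine
```

On smooth convex data the correction matches the centered fourth order interpolant; e.g., for samples of $x^2$ the interior odd points are predicted exactly, since both neighboring second differences equal 1 and their generalized mean is again 1.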
In Reference [4], it is proven that the replacement of the arithmetic mean in (1) by an adequate nonlinear mean gives rise to desirable properties regarding adaption to potential singularities, while maintaining the approximation order. The gain from using the generalized means instead of only the harmonic mean (which coincides with $GM_{-1}$), as in Reference [4], is noticeable both in practice, giving better adaption to potential singularities (see Lemma 2), and in theory, obtaining better Lipschitz constants (see Lemma 4), which gives rise to better stability behavior and simpler stability results. We now focus on stability results. To start with, we introduce the following proposition, which is the basis of the stability proofs for the associated subdivision schemes in 1D (see Reference [13]), and it will also be needed for the 2D non-separable multiresolution that we present.
Proposition 1.
If, removing $L$ for simplicity, $\hat{f} = G_L^{L+1} f$ and $\hat{g} = G_L^{L+1} g$, then
  • $\|D\hat{f}\|_\infty \le \frac{1}{2}\|Df\|_\infty$,
  • $\|\hat{f} - \hat{g}\|_\infty \le \|f - g\|_\infty + \frac{2^{1/k}}{4} h^2 \|Df - Dg\|_\infty$,
    • $|D(\hat{f}_j - \hat{g}_j)| \le \frac{2^{1/k}}{4}\|D(f - g)\|_\infty$, for $j = 2n + 1$,
    • $|D(\hat{f}_j - \hat{g}_j)| \le \frac{2 + 2^{1/k}}{4}\|D(f - g)\|_\infty$, for $j = 2n$.
Proof. 
Let us prove the first point. Considering the indexes $j = 2n$, we have, using Definition 1,
$$D\hat{f}_{2n} = \frac{\hat{f}_{2n-1} - 2\hat{f}_{2n} + \hat{f}_{2n+1}}{2h^2} = \frac{\frac{f_{n-1} + f_n}{2} - \frac{h^2}{4} GM_{-k}(Df_{n-1}, Df_n) - 2 f_n + \frac{f_n + f_{n+1}}{2} - \frac{h^2}{4} GM_{-k}(Df_n, Df_{n+1})}{2h^2} = \frac{Df_n}{2} - \frac{1}{8}\left(GM_{-k}(Df_n, Df_{n-1}) + GM_{-k}(Df_n, Df_{n+1})\right).$$
Using now property 1 of Lemma 5, we get $|D\hat{f}_{2n}| \le \frac{|Df_n|}{2}$.
For the case $j = 2n + 1$, we have
$$D\hat{f}_{2n+1} = \frac{\hat{f}_{2n} - 2\hat{f}_{2n+1} + \hat{f}_{2n+2}}{2h^2} = \frac{f_n - 2\left(\frac{f_n + f_{n+1}}{2} - \frac{h^2}{4} GM_{-k}(Df_n, Df_{n+1})\right) + f_{n+1}}{2h^2} = \frac{GM_{-k}(Df_n, Df_{n+1})}{4}.$$
Using property 6 of Lemma 1, $|D\hat{f}_{2n+1}| \le \frac{1}{4}\max\{|Df_n|, |Df_{n+1}|\}$.
Thus, $\|D\hat{f}\|_\infty \le \frac{1}{2}\|Df\|_\infty$.
Let us now prove the second point. Again, we consider separately the indexes $j = 2n$ and $j = 2n + 1$. For $j = 2n$, we have
$$|\hat{f}_{2n} - \hat{g}_{2n}| = |f_n - g_n|;$$
therefore, $|\hat{f}_{2n} - \hat{g}_{2n}| \le \|f - g\|_\infty$.
For $j = 2n + 1$,
$$|\hat{f}_{2n+1} - \hat{g}_{2n+1}| = \left|\frac{f_n + f_{n+1}}{2} - \frac{h^2}{4} GM_{-k}(Df_n, Df_{n+1}) - \left(\frac{g_n + g_{n+1}}{2} - \frac{h^2}{4} GM_{-k}(Dg_n, Dg_{n+1})\right)\right| \le \frac{|f_n - g_n| + |f_{n+1} - g_{n+1}|}{2} + \frac{h^2}{4}\left|GM_{-k}(Df_n, Df_{n+1}) - GM_{-k}(Dg_n, Dg_{n+1})\right| \le \|f - g\|_\infty + \frac{2^{1/k}}{4} h^2 \max\{|Df_n - Dg_n|, |Df_{n+1} - Dg_{n+1}|\} \le \|f - g\|_\infty + \frac{2^{1/k}}{4} h^2 \|Df - Dg\|_\infty.$$
Finally, to prove the last two points, we also consider $j = 2n$ and $j = 2n + 1$. For $j = 2n$,
$$|D(\hat{f}_j - \hat{g}_j)| = \left|\frac{\hat{f}_{2n-1} - 2\hat{f}_{2n} + \hat{f}_{2n+1}}{2h^2} - \frac{\hat{g}_{2n-1} - 2\hat{g}_{2n} + \hat{g}_{2n+1}}{2h^2}\right| = \left|\frac{Df_n - Dg_n}{2} - \frac{1}{8}\left(GM_{-k}(Df_n, Df_{n-1}) - GM_{-k}(Dg_n, Dg_{n-1})\right) - \frac{1}{8}\left(GM_{-k}(Df_n, Df_{n+1}) - GM_{-k}(Dg_n, Dg_{n+1})\right)\right| \le \frac{|Df_n - Dg_n|}{2} + \frac{2^{1/k}}{8}\max\{|Df_n - Dg_n|, |Df_{n-1} - Dg_{n-1}|\} + \frac{2^{1/k}}{8}\max\{|Df_n - Dg_n|, |Df_{n+1} - Dg_{n+1}|\} \le \frac{2 + 2^{1/k}}{4}\|D(f - g)\|_\infty.$$
For $j = 2n + 1$,
$$|D(\hat{f}_j - \hat{g}_j)| = \left|\frac{GM_{-k}(Df_n, Df_{n+1})}{4} - \frac{GM_{-k}(Dg_n, Dg_{n+1})}{4}\right| \le \frac{2^{1/k}}{4}\max\{|Df_n - Dg_n|, |Df_{n+1} - Dg_{n+1}|\} \le \frac{2^{1/k}}{4}\|D(f - g)\|_\infty. \qquad \square$$
Notice that $\rho = \frac{2 + 2^{1/k}}{4} < 1$ holds for $k > 1$, which leaves the previous PPH reconstruction operator, recovered in this setting for $k = 1$, outside the scope of the upcoming stability results. This means that, for $k > 1$, we get the contractivity of the second order differences in only one step of subdivision, which greatly simplifies the theory and allows us to obtain stability also in two dimensions in an easy way, as shown in the next section.

4. Stability Results for a Non-Separable Multiresolution in 2D

Let us consider the non-separable multiresolution transformations given by Algorithms 1 and 2 in Section 2. These algorithms are quite general and valid for a large range of prediction operators. However, in order to apply the coming stability theorem, we need to define a prediction operator $P_L^{L+1}$ that satisfies the following properties:
  • $\|(P_L^{L+1} f^L) - (P_L^{L+1} g^L)\|_\infty \le \|f^L - g^L\|_\infty + C\|\delta(f^L - g^L)\|_\infty$,
    where $\delta$ is a linear operator verifying the contraction property in the next point.
  • $\|\delta(P_L^{L+1} f^L - P_L^{L+1} g^L)\|_\infty \le \rho\|\delta(f^L - g^L)\|_\infty$,
    with $\rho < 1$.
We easily define our prediction operator in 2D in the following way. Supposing the data $f^L$ at scale $L$ are already known, we compute the data $f^{L+1}$ at scale $L + 1$ using the proposed one-dimensional prediction operator $G_L^{L+1}$ defined in Section 3. Since the one-dimensional prediction is local, so is the two-dimensional one. In Figure 2, we can see the disposition of the cells used to compute the proposed 2D prediction operator $G_L^{L+1}$.
If we suppose that we have the values $f_{i,j}^L$, then we propose the following calculations to get the needed values $f_{2i+1,2j}^{L+1}$, $f_{2i+1,2j+2}^{L+1}$, $f_{2i,2j+1}^{L+1}$, $f_{2i+2,2j+1}^{L+1}$, $f_{2i+1,2j+1}^{L+1}$:
$$\begin{aligned}
(1)\quad & \tfrac{1}{2}\big(f_{i,j}^L + f_{i+1,j}^L\big) - \tfrac{h^2}{4}\, GM_{-k}\big(D_{x,j} f_i^L,\, D_{x,j} f_{i+1}^L\big), \\
(2)\quad & \tfrac{1}{2}\big(f_{i,j+1}^L + f_{i+1,j+1}^L\big) - \tfrac{h^2}{4}\, GM_{-k}\big(D_{x,j+1} f_i^L,\, D_{x,j+1} f_{i+1}^L\big), \\
(3)\quad & \tfrac{1}{2}\big(f_{i,j}^L + f_{i,j+1}^L\big) - \tfrac{h^2}{4}\, GM_{-k}\big(D_{i,y} f_j^L,\, D_{i,y} f_{j+1}^L\big), \\
(4)\quad & \tfrac{1}{2}\big(f_{i+1,j}^L + f_{i+1,j+1}^L\big) - \tfrac{h^2}{4}\, GM_{-k}\big(D_{i+1,y} f_j^L,\, D_{i+1,y} f_{j+1}^L\big), \\
(5)\quad & \tfrac{1}{4}\big((1) + (2) + (3) + (4)\big).
\end{aligned}$$
Of course, the values at the positions $(2i, 2j)$ are just projections from the coarser level, i.e., $f_{2i,2j}^{L+1} = f_{i,j}^L$.
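A direct transcription of formulas (1)–(5) for a single coarse cell might look as follows. This is a sketch with our own names; it assumes the stencil stays one point away from the boundary so that all second differences exist, and `gm` is the signed generalized mean $GM_{-k}$:

```python
import numpy as np

def gm(x, y, k):
    """Signed generalized mean GM_{-k} (zero when x*y <= 0)."""
    if x * y <= 0:
        return 0.0
    ax, ay = abs(x), abs(y)
    return np.sign(x) * (2 * ax**k * ay**k / (ax**k + ay**k)) ** (1.0 / k)

def predict_cell(f, i, j, h, k=2):
    """Values (1)-(5) of the non-separable 2D predictor for the cell with
    corners (i, j), ..., (i+1, j+1); requires 1 <= i, j <= n - 3."""
    def d2(a, b, c):                      # second order divided difference
        return (a - 2.0 * b + c) / (2.0 * h * h)

    def edge(a, b, da, db):               # 1D GM prediction along one edge
        return 0.5 * (a + b) - (h * h / 4.0) * gm(da, db, k)

    v1 = edge(f[i, j], f[i + 1, j],                       # -> (2i+1, 2j)
              d2(f[i - 1, j], f[i, j], f[i + 1, j]),
              d2(f[i, j], f[i + 1, j], f[i + 2, j]))
    v2 = edge(f[i, j + 1], f[i + 1, j + 1],               # -> (2i+1, 2j+2)
              d2(f[i - 1, j + 1], f[i, j + 1], f[i + 1, j + 1]),
              d2(f[i, j + 1], f[i + 1, j + 1], f[i + 2, j + 1]))
    v3 = edge(f[i, j], f[i, j + 1],                       # -> (2i, 2j+1)
              d2(f[i, j - 1], f[i, j], f[i, j + 1]),
              d2(f[i, j], f[i, j + 1], f[i, j + 2]))
    v4 = edge(f[i + 1, j], f[i + 1, j + 1],               # -> (2i+2, 2j+1)
              d2(f[i + 1, j - 1], f[i + 1, j], f[i + 1, j + 1]),
              d2(f[i + 1, j], f[i + 1, j + 1], f[i + 1, j + 2]))
    v5 = 0.25 * (v1 + v2 + v3 + v4)                       # -> (2i+1, 2j+1)
    return v1, v2, v3, v4, v5
```

Note that the four edge values use only one-dimensional stencils, while the center value (5) mixes both directions, which is what makes the scheme non-separable.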
Notice that, from the definition, we immediately get that the required properties for the prediction operator are satisfied, just by using Proposition 1. In fact, $\rho = \frac{2 + 2^{1/k}}{4} < 1$ for $k > 1$, $\delta = h^2 D$, $\|\delta\| = 2$, and $C = \frac{2^{1/k}}{4}$.
With all these ingredients, we can now give the following theorem regarding the stability of the 2D multiresolution transform coming from the use of the 2D prediction operator associated to $G_L^{L+1}$, defined through the generalized means (8), for $k > 1$.
Theorem 1.
The 2D non-separable multiresolution transform associated with the prediction operator $G_L^{L+1}$ related to the generalized means $GM_{-k}$ for $k > 1$ satisfies
$$\|f^L - g^L\|_\infty \le \tilde{C}\left(\|f^0 - g^0\|_\infty + \sum_{l=1}^{L}\|e(f)^l - e(g)^l\|_\infty\right),$$
where $\tilde{C} = 1 + \dfrac{C\|\delta\|}{1 - \rho} = \dfrac{4 + 2 \cdot 2^{1/k}}{4 - 2 \cdot 2^{1/k}}$. Therefore, we get the stability of the decoding multiresolution transformation.
The proof of this theorem is a particularization of a general proof, valid for prediction operators that contract in one step, which can be found in Reference [8].

A Specific Coding Algorithm Controlling the Committed Error

Given a prescribed tolerance $\epsilon$, Theorem 1 tells us how to carry out the truncation of the details at each scale of the multiresolution pyramid in order to ensure that the final committed error is bounded by the specified $\epsilon$. In this case, one loses control of how much compression is attained in favor of controlling the final error at the decompression stage. We now give a slightly modified version of Algorithm 1 such that the total accumulated error is kept under control, as explained above. Notice that, in order to decompress the signal, one just needs to apply the same decompression Algorithm 2, without any change, to the truncated version of the multiresolution representation of the data $\tilde{\mu}(f^L)$. We use the truncation operator $\tilde{e} = tr(e, \delta)$, defined by
$$tr(e, \delta)_j := \begin{cases} e_j, & |e_j| > \delta, \\ 0, & \text{otherwise}, \end{cases}$$
for all entries $e_j$ of the vector $e$.
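In code, the truncation operator is a hard threshold. The sketch below (our naming) also shows the per-scale threshold $\delta = \epsilon / (\tilde{C} L)$ used in Algorithm 3:

```python
import numpy as np

def truncate(e, delta):
    """tr(e, delta): keep a detail coefficient only if its magnitude exceeds
    delta; each discarded coefficient then contributes at most delta to the
    per-scale error."""
    e = np.asarray(e, dtype=float)
    return np.where(np.abs(e) > delta, e, 0.0)

def scale_threshold(eps, C_tilde, L):
    """Per-scale threshold of Algorithm 3: delta = eps / (C_tilde * L)."""
    return eps / (C_tilde * L)
```

Since there are $L$ scales and the stability bound multiplies the accumulated per-scale errors by $\tilde{C}$, this choice of $\delta$ keeps the final decoding error below $\epsilon$.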

5. Numerical Experiments

In this section, we offer some numerical tests to compare our proposed 2D non-separable multiresolution algorithm with other existing multiresolution transformations in the literature. At the same time, we verify the numerical stability and the overall performance of the schemes. Let us consider a discrete uniform grid with $65 \times 65$ points in the rectangle $[-5, 5] \times [-5, 5]$ and the function
$$f(x, y) := \begin{cases} \dfrac{3\cos\left(\frac{x^2 + y^2}{3}\right)}{3 + x^2 + y^2}, & x \le \frac{1}{2}, \\[6pt] \dfrac{3\cos\left(\frac{x^2 + y^2}{3}\right)}{3 + x^2 + y^2} + 10, & \text{otherwise}. \end{cases}$$
Since $f(x, y)$ is a discontinuous function with a jump along a curve (the straight line $x = \frac{1}{2}$), we can expect nonlinear methods to work much better in this case than their linear counterparts, which are known to produce artificial maxima and minima around the jump discontinuity. These artificial maxima and minima are not reduced by taking smaller grid sizes when using linear methods. These undesirable features are widely known as Gibbs effects [14].
We consider the discretization of the function by point values F = ( f i , j ) in the given rectangle. From these data, we will perform a multiresolution decomposition, we will keep a percentage of the details, and then we will decode the processed multiresolution version of the data to obtain an approximation to the original data. We take into account the following prediction operators:
  • LAG stands for the tensor product multiresolution transform based on the fourth order accurate Lagrange prediction operator. This transform is linear and, therefore, stable, but it does not adapt to discontinuities.
  • ENO stands for the tensor product multiresolution transform based on the fourth order accurate ENO prediction operator. This transform is nonlinear and obtains acceptable resolution of the edges when noise is not present and the discontinuities are well defined. However, it presents stability problems, forcing one to keep the majority of the details to get an appropriate performance.
  • $GM_1$ stands for the proposed non-separable multiresolution scheme with $k = 1$.
  • $GM_2$ stands for the proposed non-separable multiresolution scheme with $k = 2$.
  • $GM_3$ stands for the proposed non-separable multiresolution scheme with $k = 3$.
The last three considered multiresolution transforms, $GM_1$, $GM_2$, $GM_3$, are nonlinear by definition and theoretically stable, as proven in Theorem 1.
In our experiment, we have descended two scales in the multiresolution algorithm, and we have kept only 7% of the details. In Figure 3, we see the original function at the top-left and the reconstructions obtained using LAG at the top-right, ENO at the bottom-left, and $GM_1$ at the bottom-right. It is easy to see that the nonlinear scheme $GM_1$ performs better. Gibbs effects are observed in the LAG reconstruction. Stability problems are present for ENO, which translate into undesirable visual effects around the discontinuity. In Figure 4, we see how the algorithms $GM_2$ and $GM_3$ progressively improve the accuracy in the definition of the discontinuity. This fact comes from the property in Lemma 2. All these visual appreciations are reinforced by the numerical results offered in Table 1. In particular, we clearly see the Gibbs effects of LAG, the instabilities of ENO, and the improvement of $GM_k$ with increasing $k$ by paying attention to the fourth column, where the infinity norm of the errors between the original signal and the reconstructed signal is shown. All these computations and graphical representations were carried out using MATLAB under an Intel(R) Core(TM) i7-3770 @ 3.40 GHz processor with 8.00 GB of RAM.
Remark 1.
This test could simulate the compression of real geographical data, elevations, or depths of difficult access areas, for instance, in oceanography. The presented methods are designed to work well, especially where cliffs and similar terrain irregularities are encountered.

6. Conclusions and Perspectives

We have defined a new nonlinear reconstruction operator adapted to singularities, based on the generalized means $GM_{-k}$ for $k \ge 1$. Using a non-separable strategy, we have defined new nonlinear non-separable 2D multiresolution schemes for which stability is easy to prove (see Reference [8]). In fact, we give a specific stability result for all the presented schemes. The validity of the theoretical results has been tested in a numerical experiment, where one can observe certain improvements as $k$ increases. This improvement is in agreement with the better theoretical bounds obtained, since the larger $k$, the smaller $2^{1/k}$. The overall performance of the schemes is quite acceptable, avoiding Gibbs effects and instabilities due to the presence of discontinuities in the original data.

Author Contributions

Conceptualization, S.A., A.M., J.R., J.C.T. and D.F.Y.; methodology, S.A., A.M., J.R., J.C.T. and D.F.Y.; software, S.A., A.M., J.R., J.C.T. and D.F.Y.; validation, S.A., A.M., J.R., J.C.T. and D.F.Y.; formal analysis, S.A., A.M., J.R., J.C.T. and D.F.Y.; investigation, S.A., A.M., J.R., J.C.T. and D.F.Y.; writing—original draft preparation, S.A., A.M., J.R., J.C.T. and D.F.Y.; writing—review and editing, S.A., A.M., J.R., J.C.T. and D.F.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the FUNDACIÓN SÉNECA, AGENCIA DE CIENCIA Y TECNOLOGÍA DE LA REGIÓN DE MURCIA grant number 20928/PI/18, the Spanish national project PID2019-108336GB-I00, and the Spanish MINECO MTM2017-83942-P.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not available.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
power_p: Power Means of Order p
GP_{k,p}: Generalized Power Means

References

1. Harten, A. Multiresolution representation of data II. SIAM J. Numer. Anal. 1996, 33, 1205–1256.
2. Lakestani, M.; Jokar, M.; Dehghan, M. Numerical solution of nth-order integro-differential equations using trigonometric wavelets. Math. Methods Appl. Sci. 2011, 34, 1317–1329.
3. Razzaghi, M.; Yousefi, S. Legendre wavelets method for constrained optimal control problems. Math. Methods Appl. Sci. 2002, 25, 529–539.
4. Amat, S.; Donat, R.; Liandrat, J.; Trillo, J.C. Analysis of a new nonlinear subdivision scheme. Applications in image processing. Found. Comput. Math. 2006, 6, 193–225.
5. Trillo, J.C. Nonlinear Multiresolution and Applications in Image Processing. Ph.D. Thesis, University of Valencia, Valencia, Spain, 2007.
6. Amat, S.; Dadourian, K.; Liandrat, J.; Trillo, J.C. High order nonlinear interpolatory reconstruction operators and associated multiresolution schemes. J. Comput. Appl. Math. 2013, 253, 163–180.
7. Amat, S.; Liandrat, J. On the stability of PPH nonlinear multiresolution. Appl. Comput. Harmon. Anal. 2005, 18, 198–206.
8. Amat, S.; Dadourian, K.; Liandrat, J.; Ruiz, J.; Trillo, J.C. A family of stable nonlinear nonseparable multiresolution schemes in 2D. J. Comput. Appl. Math. 2010, 234, 1277–1290.
9. Nigmatullin, R.R.; Moroz, A.; Smith, G. Application of the generalized mean value function to the statistical detection of water in decane by near-infrared spectroscopy. Phys. A Stat. Mech. Its Appl. 2005, 352, 379–396.
10. Pyun, C.W. Generalized Means: Properties and Applications. Am. J. Phys. 1974, 42, 896–901.
11. Huang, X.; Ma, X.; Hu, F. Machine learning and intelligent communications. Mob. Netw. Appl. 2018, 23, 68–70.
12. Aràndiga, F.; Donat, R. Nonlinear Multi-scale Decomposition: The Approach of A. Harten. Numer. Algorithms 2000, 23, 175–216.
13. Guessab, A.; Moncayo, M.; Schmeisser, G. A class of nonlinear four-point subdivision schemes. Adv. Comput. Math. 2012, 37, 151–190.
14. Amat, S.; Shu, C.W.; Ruiz, J.; Trillo, J.C. On a class of splines free of Gibbs phenomenon. Math. Model. Numer. Anal. 2021, 55, 29–64.
Figure 1. Multiresolution representation of the image of a cameraman using L = 2 scales. (Left) Original image. (Right) Multiresolution representation after L = 2 scales.
Figure 2. Disposition of the cells to compute the prediction operator G L L + 1 .
Figure 3. Obtained reconstructions after the compression (93%) and decompression process using 2 scales of different MR algorithms. Top-left: Original, Top-right: LAG, Bottom-left: ENO, Bottom-right: G M 1 .
Figure 4. Obtained reconstructions after the compression (93%) and decompression process using 2 scales of different MR algorithms. Top-left: Original, Top-right: G M 1 , Bottom-left: G M 2 , Bottom-right: G M 3 .
Table 1. Obtained quality after the compression (93%) and decompression process using 2 scales of different MR algorithms: LAG, ENO, G M 1 , G M 2 , G M 3 without error control strategies.
Compression rate: 93%

| Method | ||F − F̃||_1 | ||F − F̃||_2 | ||F − F̃||_∞ |
| LAG    | 0.16049     | 0.07714     | 0.98302     |
| ENO    | 0.17761     | 0.13827     | 2.07510     |
| GM_1   | 0.05075     | 0.00436     | 0.18657     |
| GM_2   | 0.05379     | 0.00453     | 0.18561     |
| GM_3   | 0.05140     | 0.00429     | 0.17322     |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Amat, S.; Magreñan, A.; Ruiz, J.; Trillo, J.C.; Yañez, D.F. On the Application of the Generalized Means to Construct Multiresolution Schemes Satisfying Certain Inequalities Proving Stability. Mathematics 2021, 9, 533. https://doi.org/10.3390/math9050533

