Article

Some Properties of Weighted Tsallis and Kaniadakis Divergences

by Răzvan-Cornel Sfetcu 1,*, Sorina-Cezarina Sfetcu 1 and Vasile Preda 1,2,3
1 Faculty of Mathematics and Computer Science, University of Bucharest, Str. Academiei 14, 010014 Bucharest, Romania
2 “Gheorghe Mihoc-Caius Iacob” Institute of Mathematical Statistics and Applied Mathematics, Calea 13 Septembrie 13, 050711 Bucharest, Romania
3 “Costin C. Kiriţescu” National Institute of Economic Research, Calea 13 Septembrie 13, 050711 Bucharest, Romania
* Author to whom correspondence should be addressed.
Entropy 2022, 24(11), 1616; https://doi.org/10.3390/e24111616
Submission received: 29 September 2022 / Revised: 30 October 2022 / Accepted: 2 November 2022 / Published: 5 November 2022
(This article belongs to the Special Issue Information and Divergence Measures)

Abstract

We are concerned with the weighted Tsallis and Kaniadakis divergences between two measures. More precisely, we establish inequalities between these divergences and the Tsallis and Kaniadakis logarithms, prove that they obey bounds similar to those satisfied by the Kullback–Leibler divergence, and show that they are pseudo-additive.

1. Introduction

Shannon entropy, in the form we know it, was introduced by Boltzmann and used by Shannon in the context of Information Theory. This entropy has applications in Statistical Thermodynamics, Combinatorics and Machine Learning. In Machine Learning, Shannon entropy represents the basis for building decision trees and fitting classification models.
In recent years, many generalizations of Shannon entropy have appeared: Tsallis entropy, Kaniadakis entropy, Rényi entropy, Varma entropy, weighted entropy, relative entropy, cumulative entropy, etc. These entropies have applications in areas such as Physics, Information Theory, Probability, Communication Theory, and Statistics.
Tsallis entropy was introduced by C. Tsallis in [1] and is applied to income distribution (see [2,3]), the Internet (see [4]), non-coding human DNA (see [5]), plasma (see [6]), and stock exchanges (see [7,8]).
Kaniadakis entropy was introduced by G. Kaniadakis in [9] and is useful in many areas, such as Finance (see [10,11,12]), Astrophysics (see [13,14]), Networks (see [15,16]), Economics (see [17,18]), and Statistical Mechanics (see [19,20,21]).
S. Kullback and R.A. Leibler sought to measure the “distance”, or “divergence”, between statistical populations, and they generalized Shannon entropy by defining, in [22], a nonsymmetric measure called the Kullback–Leibler divergence. This divergence between two probability measures $\mu_1$ and $\mu_2$ on a measurable non-negligible set $A$ is additive, non-negative and greater than $\log\frac{\mu_1(A)}{\mu_2(A)}$, where $\log$ is the classical logarithm function. Divergences are a key tool in Information Geometry (see [23]).
The goodness-of-fit test is based on the Corrected Weighted Kullback–Leibler divergence (see [24]) and, as a consequence, inherits all the special characteristics of this divergence measure. Nawrocki and Harding proposed the use of weighted entropy as a measure of investment risk (see [25]). Afterwards, Guiaşu used the weighted entropy to group data with respect to the importance of specific regions of the domain (see [26]), Di Crescenzo and Longobardi proposed the weighted residual and past entropies (see [27]), and Suhov and Zohren proposed the quantum version of weighted entropy and studied its properties in Quantum Statistical Mechanics (see [28]).
Starting from the Kullback–Leibler divergence formula and using the same technique as in the case of the Tsallis and Kaniadakis entropies (i.e., the classical logarithm is replaced by the Tsallis logarithm, respectively, by the Kaniadakis logarithm), Tsallis and Kaniadakis divergences were introduced in several papers (see [29,30,31,32]).
Motivated by the aforementioned facts and by the papers [33,34,35], we deal with the weighted Tsallis and Kaniadakis divergences in this article.
In the following, we briefly describe the structure of the paper. Section 2 is dedicated to preliminaries. In Section 3, using some inequalities concerning the Tsallis logarithm, we obtain inequalities between the weighted Tsallis and Kaniadakis divergences on an arbitrary non-negligible measurable set and the Tsallis logarithm, respectively, the Kaniadakis logarithm (see Theorem 1); we then prove that the weighted Tsallis and Kaniadakis divergences obey bounds similar to those satisfied by the Kullback–Leibler divergence (see Theorem 2). In Section 4, we define the weighted Tsallis and Kaniadakis divergences for product measure spaces and prove some pseudo-additivity properties for them (see Theorem 3).
Other interesting results related to the present topics can be found in [36,37,38,39,40].

2. Preliminary Facts

Definition 1.
Let $k \in \mathbb{R}^*$. We consider the Tsallis logarithm given by
$$\log_k^T x = \begin{cases} \dfrac{x^k - 1}{k} & \text{if } x > 0,\\ 0 & \text{if } x = 0, \end{cases}$$
and the Kaniadakis logarithm given via
$$\log_k^K x = \begin{cases} \dfrac{x^k - x^{-k}}{2k} & \text{if } x > 0,\\ 0 & \text{if } x = 0. \end{cases}$$
Remark 1.
It is easy to see that $\log_k^K x = \frac{1}{2}\left(\log_k^T x + \log_{-k}^T x\right)$ for any $x \ge 0$.
We have $\lim_{k \to 0}\log_k^T x = \lim_{k \to 0}\log_k^K x = \log x$ for any $x > 0$ (“log” is the classical logarithm function).
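As a quick numerical sanity check, the two deformed logarithms and the identities of Remark 1 can be coded directly. This is a sketch, not part of the paper; the names `tlog`, `klog` and the sample values are ours.

```python
import math

def tlog(x, k):
    """Tsallis logarithm: log_k^T x = (x**k - 1) / k for x > 0."""
    return (x**k - 1) / k

def klog(x, k):
    """Kaniadakis logarithm: log_k^K x = (x**k - x**(-k)) / (2k) for x > 0."""
    return (x**k - x**(-k)) / (2 * k)

x, k = 2.7, 0.4
# Remark 1: log_k^K x = (log_k^T x + log_{-k}^T x) / 2
assert math.isclose(klog(x, k), 0.5 * (tlog(x, k) + tlog(x, -k)))
# Both deformed logarithms tend to the classical logarithm as k -> 0
assert abs(tlog(x, 1e-6) - math.log(x)) < 1e-5
assert abs(klog(x, 1e-6) - math.log(x)) < 1e-5
```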
Definition 2.
Let $(\Omega, \mathcal{T})$ be a measurable space and $\mu, \nu : \mathcal{T} \to \overline{\mathbb{R}}_+ = [0, \infty) \cup \{\infty\}$ two measures. We say that $\mu$ is absolutely continuous with respect to $\nu$ if, for any $A \in \mathcal{T}$ such that $\nu(A) = 0$, one has $\mu(A) = 0$.
Notation 1.
If $\mu$ and $\nu$ are absolutely continuous with respect to each other, we denote this fact by $\mu \equiv \nu$.
In the absence of other mentions, we work in the following scenario: let $(\Omega, \mathcal{T}, \mu_i)$, $i = 1, 2$, be two measure spaces and $\lambda$ a measure on $(\Omega, \mathcal{T})$ such that each $\mu_i$ is absolutely continuous with respect to $\lambda$. With the help of the Radon–Nikodým Theorem we find two non-negative measurable functions $f_1$ and $f_2$ defined on $\Omega$ such that $\mu_i(A) = \int_A f_i \, d\lambda$ for any $A \in \mathcal{T}$ and $i = 1, 2$. Consider a weight function $w : \Omega \to (0, \infty)$ (i.e., a measurable function with strictly positive values).
Definition 3.
Let $A \in \mathcal{T}$. The weighted Tsallis divergence on $A$ between $\mu_1$ and $\mu_2$ is defined via
$$D_k^{w,T}(\mu_1|\mu_2, A) = \begin{cases} \dfrac{1}{\int_A w \, d\mu_1}\displaystyle\int_A w \log_k^T \dfrac{f_1}{f_2} \, d\mu_1 & \text{if } \mu_1(A) \ne 0,\\ 0 & \text{if } \mu_1(A) = 0, \end{cases}$$
and the weighted Kaniadakis divergence on $A$ between $\mu_1$ and $\mu_2$ is given by
$$D_k^{w,K}(\mu_1|\mu_2, A) = \begin{cases} \dfrac{1}{\int_A w \, d\mu_1}\displaystyle\int_A w \log_k^K \dfrac{f_1}{f_2} \, d\mu_1 & \text{if } \mu_1(A) \ne 0,\\ 0 & \text{if } \mu_1(A) = 0. \end{cases}$$
Remark 2.
We assume that all divergences and integrals which appear in this paper are finite.
Remark 3.
We can see that the values of $D_k^{w,T}(\mu_1|\mu_2, A)$ and $D_k^{w,K}(\mu_1|\mu_2, A)$ do not depend on the choice of the reference measure $\lambda$ (because $\frac{f_1}{f_2} = \frac{d\mu_1/d\lambda}{d\mu_2/d\lambda} = \frac{d\mu_1}{d\mu_2}$).
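For intuition, Definition 3 can be specialized to a finite set $\Omega$ with $\lambda$ the counting measure, so that $\mu_i(A) = \sum_{\omega \in A} f_i(\omega)$. The following sketch (function names and data are ours, not from the paper) computes both weighted divergences and checks that the identity of Remark 1 carries over to them:

```python
def tlog(x, k):
    return (x**k - 1) / k

def klog(x, k):
    return (x**k - x**(-k)) / (2 * k)

def weighted_div(f1, f2, w, k, log):
    """D_k^{w,.}(mu_1 | mu_2, A) on a finite set A with counting reference
    measure: d(mu_1) = f1 d(lambda), so the integral over A is a weighted sum."""
    mass = sum(wi * a for wi, a in zip(w, f1))   # integral of w d(mu_1) over A
    if mass == 0:
        return 0.0
    return sum(wi * log(a / b, k) * a for wi, a, b in zip(w, f1, f2)) / mass

f1 = [0.5, 0.3, 0.2]    # density of mu_1 w.r.t. the counting measure
f2 = [0.2, 0.4, 0.4]    # density of mu_2
w  = [1.0, 2.0, 0.5]    # weight function
k = 0.3
d_T = weighted_div(f1, f2, w, k, tlog)
d_K = weighted_div(f1, f2, w, k, klog)
# the relation of Remark 1 carries over to the divergences
assert abs(d_K - 0.5 * (d_T + weighted_div(f1, f2, w, -k, tlog))) < 1e-12
```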

3. Bounds of the Weighted Tsallis and Kaniadakis Divergences

The proof of the following lemma is elementary and is omitted.
Lemma 1.
For any $x \in (0, \infty) \setminus \{1\}$, let $\varphi_x : \mathbb{R} \setminus \{0\} \to \mathbb{R}$,
$$\varphi_x(t) = \frac{x^t - 1}{t}.$$
The function $\varphi_x$ is strictly increasing.
The next two corollaries are very useful in this article.
Corollary 1.
Let $x > 0$ and $k \in [-1, \infty) \setminus \{0\}$. Then,
$$x \log_k^T x \ge x - 1,$$
and the equality is valid if and only if $x = 1$.
Corollary 2.
Let $x > 0$ and $k \in \left[-\tfrac{1}{2}, 1\right] \setminus \{0\}$. Then,
$$\frac{2\left(\sqrt{x} - 1\right)}{\sqrt{x}} \le \log_k^T x \le x - 1.$$
We have equality in these inequalities if and only if $x = 1$.
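The two corollaries, as reconstructed above, can be probed numerically on a grid of points (a sketch; the sampled values of $x$ and $k$ are ours):

```python
import math

def tlog(x, k):
    return (x**k - 1) / k

# Corollary 1: x * log_k^T x >= x - 1 for k in [-1, inf) \ {0}
for x in (0.1, 0.5, 1.0, 2.0, 10.0):
    for k in (-1.0, -0.5, 0.2, 1.0, 3.0):
        assert x * tlog(x, k) >= x - 1 - 1e-9

# Corollary 2: 2*(sqrt(x)-1)/sqrt(x) <= log_k^T x <= x - 1
# for k in [-1/2, 1] \ {0}; equality holds at the endpoints k = -1/2 and k = 1
for x in (0.1, 0.5, 1.0, 2.0, 10.0):
    for k in (-0.5, -0.1, 0.4, 1.0):
        lower = 2 * (math.sqrt(x) - 1) / math.sqrt(x)
        assert lower - 1e-9 <= tlog(x, k) <= x - 1 + 1e-9
```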
Theorem 1.
Let $A \in \mathcal{T}$ be such that $\mu_i(A) \ne 0$ for $i = 1, 2$.
(a) Assume that $k \in [-1, \infty) \setminus \{0\}$. Then,
$$D_k^{w,T}(\mu_1|\mu_2, A) \ge \log_k^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}.$$
(b) Assume that $k \in (-1, 1) \setminus \{0\}$. Then,
$$D_k^{w,K}(\mu_1|\mu_2, A) \ge \log_k^K \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}.$$
In both cases, the equality holds if and only if $\frac{f_1(\omega)}{f_2(\omega)} = \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}$ $\lambda$-a.e. for $\omega \in A$.
Proof.
(a) We make the proof in two steps.
Step 1. Assume that $\int_A w \, d\mu_1 = \int_A w \, d\mu_2$. Because $\log_k^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2} = 0$, we have to show that $D_k^{w,T}(\mu_1|\mu_2, A) \ge 0$.
According to Corollary 1, we have
$$\begin{aligned}
D_k^{w,T}(\mu_1|\mu_2, A) &= \frac{1}{\int_A w \, d\mu_1}\int_A w \log_k^T \frac{f_1}{f_2} \, d\mu_1 = \frac{1}{\int_A w \, d\mu_1}\int_A w \cdot \frac{f_2}{f_1} \cdot \frac{f_1}{f_2} \log_k^T \frac{f_1}{f_2} \, d\mu_1\\
&\ge \frac{1}{\int_A w \, d\mu_1}\int_A w \cdot \frac{f_2}{f_1}\left(\frac{f_1}{f_2} - 1\right) d\mu_1 = \frac{1}{\int_A w \, d\mu_1}\int_A w\left(1 - \frac{f_2}{f_1}\right) d\mu_1\\
&= 1 - \frac{1}{\int_A w \, d\mu_1}\int_A w \cdot \frac{f_2}{f_1} \, d\mu_1 = 1 - \frac{\int_A w \, d\mu_2}{\int_A w \, d\mu_1} = 0.
\end{aligned}$$
The equality holds if and only if $\frac{f_1(\omega)}{f_2(\omega)} = 1$ $\mu_1$-a.e., i.e., if and only if $\frac{f_1(\omega)}{f_2(\omega)} = 1$ $\lambda$-a.e.
Step 2. Let $A \in \mathcal{T}$ with $\mu_i(A) \ne 0$ for $i = 1, 2$. We define the measures $\tilde{\mu}_1$ and $\tilde{\mu}_2$ via
$$\tilde{\mu}_i(B) = \frac{\mu_i(B)}{\int_A w \, d\mu_i} \quad \text{for any } B \in \mathcal{T} \text{ and } i = 1, 2.$$
We remark that $\int_A w \, d\tilde{\mu}_1 = \int_A w \, d\tilde{\mu}_2 = 1$.
Hence the weighted Tsallis divergence between μ ˜ 1 and μ ˜ 2 on A is
$$D_k^{w,T}(\tilde{\mu}_1|\tilde{\mu}_2, A) = \int_A w \cdot \frac{f_1}{\int_A w \, d\mu_1} \log_k^T \frac{f_1 / \int_A w \, d\mu_1}{f_2 / \int_A w \, d\mu_2} \, d\lambda = \int_A w \cdot \frac{f_1}{\int_A w \, d\mu_1} \log_k^T \frac{f_1 \int_A w \, d\mu_2}{f_2 \int_A w \, d\mu_1} \, d\lambda.$$
We deduce from Step 1 that $D_k^{w,T}(\tilde{\mu}_1|\tilde{\mu}_2, A) \ge 0$.
So,
$$\begin{aligned}
0 &\le \int_A w \cdot \frac{f_1}{\int_A w \, d\mu_1} \log_k^T \frac{f_1 \int_A w \, d\mu_2}{f_2 \int_A w \, d\mu_1} \, d\lambda = \int_A w \cdot \frac{f_1}{\int_A w \, d\mu_1} \cdot \frac{\left(\frac{f_1 \int_A w \, d\mu_2}{f_2 \int_A w \, d\mu_1}\right)^k - 1}{k} \, d\lambda\\
&= \int_A w \cdot \frac{f_1}{\int_A w \, d\mu_1} \cdot \frac{\left(\frac{\int_A w \, d\mu_2}{\int_A w \, d\mu_1}\right)^k \left(\frac{f_1}{f_2}\right)^k - 1}{k} \, d\lambda\\
&= \int_A w \cdot \frac{f_1}{\int_A w \, d\mu_1} \left(\left(\frac{\int_A w \, d\mu_2}{\int_A w \, d\mu_1}\right)^k \cdot \frac{\left(\frac{f_1}{f_2}\right)^k - 1}{k} + \frac{\left(\frac{\int_A w \, d\mu_2}{\int_A w \, d\mu_1}\right)^k - 1}{k}\right) d\lambda\\
&= \left(\frac{\int_A w \, d\mu_2}{\int_A w \, d\mu_1}\right)^k D_k^{w,T}(\mu_1|\mu_2, A) + \frac{\left(\frac{\int_A w \, d\mu_2}{\int_A w \, d\mu_1}\right)^k - 1}{k}.
\end{aligned}$$
Hence,
$$D_k^{w,T}(\mu_1|\mu_2, A) \ge \left(\frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}\right)^k \cdot \frac{1 - \left(\frac{\int_A w \, d\mu_2}{\int_A w \, d\mu_1}\right)^k}{k} = \log_k^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}.$$
The equality holds if and only if $\frac{f_1(\omega)}{\int_A w \, d\mu_1} = \frac{f_2(\omega)}{\int_A w \, d\mu_2}$ $\lambda$-a.e. for $\omega \in A$, i.e., if and only if $\frac{f_1(\omega)}{f_2(\omega)} = \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}$ $\lambda$-a.e. for $\omega \in A$.
(b) Because $\log_k^K x = \frac{1}{2}\left(\log_k^T x + \log_{-k}^T x\right)$, we obtain
$$D_k^{w,K}(\mu_1|\mu_2, A) = \frac{1}{2}\left(D_k^{w,T}(\mu_1|\mu_2, A) + D_{-k}^{w,T}(\mu_1|\mu_2, A)\right) \ge \frac{1}{2}\left(\log_k^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2} + \log_{-k}^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}\right) = \log_k^K \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}$$
(see (a), applied to $k$ and $-k$).
We have equality in the preceding inequality if and only if
$$\frac{1}{2}\left(D_k^{w,T}(\mu_1|\mu_2, A) + D_{-k}^{w,T}(\mu_1|\mu_2, A)\right) = \frac{1}{2}\left(\log_k^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2} + \log_{-k}^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}\right),$$
which is equivalent to $D_k^{w,T}(\mu_1|\mu_2, A) = \log_k^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}$ and $D_{-k}^{w,T}(\mu_1|\mu_2, A) = \log_{-k}^T \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}$, and these are equivalent to $\frac{f_1(\omega)}{f_2(\omega)} = \frac{\int_A w \, d\mu_1}{\int_A w \, d\mu_2}$ $\lambda$-a.e. for $\omega \in A$.
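Theorem 1 can be sanity-checked on a discrete example with counting reference measure, as in the sketch of Section 2 (function names and data are ours):

```python
def tlog(x, k):
    return (x**k - 1) / k

def klog(x, k):
    return (x**k - x**(-k)) / (2 * k)

def weighted_div(f1, f2, w, k, log):
    mass = sum(wi * a for wi, a in zip(w, f1))
    return sum(wi * log(a / b, k) * a for wi, a, b in zip(w, f1, f2)) / mass

f1 = [0.5, 0.3, 0.2]
f2 = [0.1, 0.6, 0.3]
w  = [1.0, 2.0, 0.5]
# ratio of the weighted masses: (int_A w dmu_1) / (int_A w dmu_2)
ratio = sum(wi * a for wi, a in zip(w, f1)) / sum(wi * b for wi, b in zip(w, f2))

for k in (-1.0, -0.3, 0.5, 2.0):          # part (a): k in [-1, inf) \ {0}
    assert weighted_div(f1, f2, w, k, tlog) >= tlog(ratio, k) - 1e-9
for k in (-0.9, -0.3, 0.5, 0.9):          # part (b): k in (-1, 1) \ {0}
    assert weighted_div(f1, f2, w, k, klog) >= klog(ratio, k) - 1e-9
```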
Theorem 2.
Let $A \in \mathcal{T}$ be such that $\mu_i(A) \ne 0$ for $i = 1, 2$ and $\int_A w \, d\mu_1 = \int_A w \, d\mu_2$.
(a) Assume that $k \in \left[-\tfrac{1}{2}, 1\right] \setminus \{0\}$. Then,
$$\frac{1}{\int_A w \, d\mu_1}\int_A w\left(\sqrt{\frac{d\mu_1}{d\mu_2}} - 1\right)^2 d\mu_2 \le D_k^{w,T}(\mu_1|\mu_2, A) \le \frac{1}{\int_A w \, d\mu_1}\int_A w\left(\frac{d\mu_1}{d\mu_2} - 1\right)^2 d\mu_2.$$
(b) Assume that $k \in \left[-\tfrac{1}{2}, \tfrac{1}{2}\right] \setminus \{0\}$. Then,
$$\frac{1}{\int_A w \, d\mu_1}\int_A w\left(\sqrt{\frac{d\mu_1}{d\mu_2}} - 1\right)^2 d\mu_2 \le D_k^{w,K}(\mu_1|\mu_2, A) \le \frac{1}{\int_A w \, d\mu_1}\int_A w\left(\frac{d\mu_1}{d\mu_2} - 1\right)^2 d\mu_2.$$
Proof.
(a) We have (see Corollary 2)
$$\begin{aligned}
D_k^{w,T}(\mu_1|\mu_2, A) &= \frac{1}{\int_A w \, d\mu_1}\int_A w \cdot \frac{f_1}{f_2} \log_k^T \frac{f_1}{f_2} \, d\mu_2 \ge \frac{1}{\int_A w \, d\mu_1}\int_A w \cdot \frac{f_1}{f_2} \cdot \frac{2\left(\sqrt{\frac{f_1}{f_2}} - 1\right)}{\sqrt{\frac{f_1}{f_2}}} \, d\mu_2\\
&= \frac{1}{\int_A w \, d\mu_1}\int_A 2w\sqrt{\frac{f_1}{f_2}}\left(\sqrt{\frac{f_1}{f_2}} - 1\right) d\mu_2 = \frac{1}{\int_A w \, d\mu_1}\int_A w\left(2\,\frac{d\mu_1}{d\mu_2} - 2\sqrt{\frac{d\mu_1}{d\mu_2}}\right) d\mu_2\\
&= \frac{1}{\int_A w \, d\mu_1}\int_A w \cdot \frac{d\mu_1}{d\mu_2} \, d\mu_2 - 2 \cdot \frac{1}{\int_A w \, d\mu_1}\int_A w\sqrt{\frac{d\mu_1}{d\mu_2}} \, d\mu_2 + \frac{1}{\int_A w \, d\mu_1}\int_A w \, d\mu_2\\
&= \frac{1}{\int_A w \, d\mu_1}\int_A w\left(\sqrt{\frac{d\mu_1}{d\mu_2}} - 1\right)^2 d\mu_2,
\end{aligned}$$
where we used $\int_A w \, \frac{d\mu_1}{d\mu_2} \, d\mu_2 = \int_A w \, d\mu_1 = \int_A w \, d\mu_2$.
On the other hand (see again Corollary 2),
$$\begin{aligned}
D_k^{w,T}(\mu_1|\mu_2, A) &= \frac{1}{\int_A w \, d\mu_1}\int_A w \cdot \frac{f_1}{f_2} \log_k^T \frac{f_1}{f_2} \, d\mu_2 \le \frac{1}{\int_A w \, d\mu_1}\int_A w \cdot \frac{f_1}{f_2}\left(\frac{f_1}{f_2} - 1\right) d\mu_2\\
&= \frac{1}{\int_A w \, d\mu_1}\int_A w\left(\left(\frac{d\mu_1}{d\mu_2}\right)^2 - \frac{d\mu_1}{d\mu_2}\right) d\mu_2\\
&= \frac{1}{\int_A w \, d\mu_1}\int_A w\left(\frac{d\mu_1}{d\mu_2}\right)^2 d\mu_2 - 2 \cdot \frac{1}{\int_A w \, d\mu_1}\int_A w \, \frac{d\mu_1}{d\mu_2} \, d\mu_2 + \frac{1}{\int_A w \, d\mu_1}\int_A w \, d\mu_2\\
&= \frac{1}{\int_A w \, d\mu_1}\int_A w\left(\frac{d\mu_1}{d\mu_2} - 1\right)^2 d\mu_2,
\end{aligned}$$
again because $\int_A w \, \frac{d\mu_1}{d\mu_2} \, d\mu_2 = \int_A w \, d\mu_1 = \int_A w \, d\mu_2$.
(b) Using (a) for $k$ and $-k$, each of $D_k^{w,T}(\mu_1|\mu_2, A)$ and $D_{-k}^{w,T}(\mu_1|\mu_2, A)$ satisfies the bounds of (a), so
$$\frac{1}{\int_A w \, d\mu_1}\int_A w\left(\sqrt{\frac{d\mu_1}{d\mu_2}} - 1\right)^2 d\mu_2 \le \frac{1}{2}D_k^{w,T}(\mu_1|\mu_2, A) + \frac{1}{2}D_{-k}^{w,T}(\mu_1|\mu_2, A) \le \frac{1}{\int_A w \, d\mu_1}\int_A w\left(\frac{d\mu_1}{d\mu_2} - 1\right)^2 d\mu_2.$$
Hence,
$$\frac{1}{\int_A w \, d\mu_1}\int_A w\left(\sqrt{\frac{d\mu_1}{d\mu_2}} - 1\right)^2 d\mu_2 \le D_k^{w,K}(\mu_1|\mu_2, A) \le \frac{1}{\int_A w \, d\mu_1}\int_A w\left(\frac{d\mu_1}{d\mu_2} - 1\right)^2 d\mu_2.$$
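Theorem 2 can likewise be checked on a discrete example whose weighted masses coincide, as the hypothesis requires (densities and weights below are ours; they satisfy $\int_A w \, d\mu_1 = \int_A w \, d\mu_2$ by construction):

```python
import math

def tlog(x, k):
    return (x**k - 1) / k

def klog(x, k):
    return (x**k - x**(-k)) / (2 * k)

def weighted_div(f1, f2, w, k, log):
    mass = sum(wi * a for wi, a in zip(w, f1))
    return sum(wi * log(a / b, k) * a for wi, a, b in zip(w, f1, f2)) / mass

f1 = [0.5, 0.3, 0.2]
f2 = [0.2, 0.4, 0.4]
w  = [1.0, 2.0, 0.5]
mass = sum(wi * a for wi, a in zip(w, f1))
# hypothesis of Theorem 2: equal weighted masses
assert abs(mass - sum(wi * b for wi, b in zip(w, f2))) < 1e-12

lower = sum(wi * (math.sqrt(a / b) - 1) ** 2 * b for wi, a, b in zip(w, f1, f2)) / mass
upper = sum(wi * (a / b - 1) ** 2 * b for wi, a, b in zip(w, f1, f2)) / mass

for k in (-0.5, -0.2, 0.3, 1.0):          # part (a): k in [-1/2, 1] \ {0}
    assert lower - 1e-9 <= weighted_div(f1, f2, w, k, tlog) <= upper + 1e-9
for k in (-0.5, -0.2, 0.3, 0.5):          # part (b): k in [-1/2, 1/2] \ {0}
    assert lower - 1e-9 <= weighted_div(f1, f2, w, k, klog) <= upper + 1e-9
```

At the endpoints $k = -1/2$ and $k = 1$ the Tsallis divergence attains the lower and upper bound, respectively, mirroring the equality cases of Corollary 2.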

4. Pseudo-Additivity of the Weighted Tsallis and Kaniadakis Divergences

Let $(\Omega, \mathcal{T}, \mu_i)$, $i = 1, 2$, be two measure spaces and $\lambda_1$ a measure on $(\Omega, \mathcal{T})$ such that each $\mu_i$ is absolutely continuous with respect to $\lambda_1$. We consider the Radon–Nikodým derivatives $f_1^{(1)}$ and $f_2^{(1)}$ on $\Omega$, i.e., $f_i^{(1)} = \frac{d\mu_i}{d\lambda_1}$ for $i = 1, 2$. Let also $(S, \mathcal{S}, \nu_j)$, $j = 1, 2$, be two measure spaces and $\lambda_2$ a measure on $(S, \mathcal{S})$ such that each $\nu_j$ is absolutely continuous with respect to $\lambda_2$. We apply the Radon–Nikodým Theorem and find the non-negative measurable functions $f_1^{(2)}$ and $f_2^{(2)}$ defined on $S$ such that $f_j^{(2)} = \frac{d\nu_j}{d\lambda_2}$ for $j = 1, 2$. We take two weight functions $w_1 : \Omega \to (0, \infty)$ and $w_2 : S \to (0, \infty)$.
We consider the measure $\lambda = \lambda_1 \times \lambda_2$ on $(\Omega \times S, \mathcal{T} \times \mathcal{S})$ induced by $\lambda_1$ and $\lambda_2$. Because $\mu_i \times \nu_i$ is absolutely continuous with respect to $\lambda$, we apply the Radon–Nikodým Theorem and find two non-negative measurable functions $f_1$ and $f_2$ on $\Omega \times S$ such that $f_i = \frac{d(\mu_i \times \nu_i)}{d\lambda}$ for $i = 1, 2$.
The uniqueness part of the Radon–Nikodým Theorem assures us that $f_i(\omega, s) = f_i^{(1)}(\omega) f_i^{(2)}(s)$ for $\lambda$-almost every $(\omega, s) \in \Omega \times S$ and $i = 1, 2$.
Let $A \in \mathcal{T}$ and $B \in \mathcal{S}$.
We define the weighted Tsallis divergence for product measures via
$$D_k^{w_1 w_2,T}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = \frac{1}{\int_{A \times B} w_1 w_2 \, d(\mu_1 \times \nu_1)}\int_{A \times B} w_1(\omega) w_2(s) \log_k^T \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d(\mu_1 \times \nu_1)(\omega, s)$$
$$= \frac{1}{\int_A w_1 \, d\mu_1 \int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1(\omega, s) \log_k^T \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d\lambda(\omega, s)$$
if $(\mu_1 \times \nu_1)(A \times B) \ne 0$, and
$$D_k^{w_1 w_2,T}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = 0 \quad \text{if} \quad (\mu_1 \times \nu_1)(A \times B) = 0.$$
The weighted Kaniadakis divergence for product measures is given by
$$D_k^{w_1 w_2,K}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = \frac{1}{\int_{A \times B} w_1 w_2 \, d(\mu_1 \times \nu_1)}\int_{A \times B} w_1(\omega) w_2(s) \log_k^K \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d(\mu_1 \times \nu_1)(\omega, s)$$
$$= \frac{1}{\int_A w_1 \, d\mu_1 \int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1(\omega, s) \log_k^K \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d\lambda(\omega, s)$$
if $(\mu_1 \times \nu_1)(A \times B) \ne 0$, and
$$D_k^{w_1 w_2,K}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = 0 \quad \text{if} \quad (\mu_1 \times \nu_1)(A \times B) = 0.$$
Lemma 2
(see [41]). We have the following pseudo-additivity property for the Tsallis logarithm, valid for any $x, y > 0$:
$$\log_k^T(xy) = \log_k^T x + \log_k^T y + k\left(\log_k^T x\right)\left(\log_k^T y\right).$$
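Lemma 2 is a one-line algebraic identity and is easy to confirm numerically (the sample points are ours):

```python
import math

def tlog(x, k):
    return (x**k - 1) / k

# log_k^T(x*y) = log_k^T x + log_k^T y + k * (log_k^T x) * (log_k^T y)
for x, y, k in [(2.0, 3.0, 0.4), (0.5, 7.0, -1.2), (1.5, 1.5, 2.0)]:
    lhs = tlog(x * y, k)
    rhs = tlog(x, k) + tlog(y, k) + k * tlog(x, k) * tlog(y, k)
    assert math.isclose(lhs, rhs)
```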
Theorem 3.
Let $A \in \mathcal{T}$ and $B \in \mathcal{S}$ be such that $(\mu_1 \times \nu_1)(A \times B) \ne 0$. The weighted Tsallis and Kaniadakis divergences for product measures satisfy the following pseudo-additivity properties:
(a)
$$D_k^{w_1 w_2,T}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = D_k^{w_1,T}(\mu_1|\mu_2, A) + D_k^{w_2,T}(\nu_1|\nu_2, B) + k\,D_k^{w_1,T}(\mu_1|\mu_2, A)\,D_k^{w_2,T}(\nu_1|\nu_2, B).$$
(b)
$$D_k^{w_1 w_2,K}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = D_k^{w_1,K}(\mu_1|\mu_2, A) + D_k^{w_2,K}(\nu_1|\nu_2, B) + \frac{k}{2}\left(D_k^{w_1,T}(\mu_1|\mu_2, A)\,D_k^{w_2,T}(\nu_1|\nu_2, B) - D_{-k}^{w_1,T}(\mu_1|\mu_2, A)\,D_{-k}^{w_2,T}(\nu_1|\nu_2, B)\right).$$
(c)
$$D_k^{w_1 w_2,K}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = \left(\frac{1}{\int_B w_2 \, d\nu_1}\int_B w_2(s) f_1^{(2)}(s)\left(\frac{f_1^{(2)}(s)}{f_2^{(2)}(s)}\right)^{-k} d\lambda_2(s)\right) D_k^{w_1,K}(\mu_1|\mu_2, A) + \left(\frac{1}{\int_A w_1 \, d\mu_1}\int_A w_1(\omega) f_1^{(1)}(\omega)\left(\frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)}\right)^{k} d\lambda_1(\omega)\right) D_k^{w_2,K}(\nu_1|\nu_2, B).$$
Proof.
(a) According to Lemma 2, we have
$$\begin{aligned}
&D_k^{w_1 w_2,T}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = \frac{1}{\int_A w_1 \, d\mu_1 \int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1(\omega, s) \log_k^T \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d\lambda(\omega, s)\\
&= \frac{1}{\int_A w_1 \, d\mu_1} \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1^{(1)}(\omega) f_1^{(2)}(s) \log_k^T \frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)} \, d\lambda_1(\omega)\,d\lambda_2(s)\\
&\quad + \frac{1}{\int_A w_1 \, d\mu_1} \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1^{(1)}(\omega) f_1^{(2)}(s) \log_k^T \frac{f_1^{(2)}(s)}{f_2^{(2)}(s)} \, d\lambda_1(\omega)\,d\lambda_2(s)\\
&\quad + k \cdot \frac{1}{\int_A w_1 \, d\mu_1} \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1^{(1)}(\omega) f_1^{(2)}(s) \log_k^T \frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)} \log_k^T \frac{f_1^{(2)}(s)}{f_2^{(2)}(s)} \, d\lambda_1(\omega)\,d\lambda_2(s)\\
&= \frac{1}{\int_A w_1 \, d\mu_1}\int_A w_1(\omega) f_1^{(1)}(\omega) \log_k^T \frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)} \, d\lambda_1(\omega) \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_B w_2(s) f_1^{(2)}(s) \, d\lambda_2(s)\\
&\quad + \frac{1}{\int_A w_1 \, d\mu_1}\int_A w_1(\omega) f_1^{(1)}(\omega) \, d\lambda_1(\omega) \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_B w_2(s) f_1^{(2)}(s) \log_k^T \frac{f_1^{(2)}(s)}{f_2^{(2)}(s)} \, d\lambda_2(s)\\
&\quad + k \cdot \frac{1}{\int_A w_1 \, d\mu_1}\int_A w_1(\omega) f_1^{(1)}(\omega) \log_k^T \frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)} \, d\lambda_1(\omega) \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_B w_2(s) f_1^{(2)}(s) \log_k^T \frac{f_1^{(2)}(s)}{f_2^{(2)}(s)} \, d\lambda_2(s)\\
&= D_k^{w_1,T}(\mu_1|\mu_2, A) + D_k^{w_2,T}(\nu_1|\nu_2, B) + k\,D_k^{w_1,T}(\mu_1|\mu_2, A)\,D_k^{w_2,T}(\nu_1|\nu_2, B).
\end{aligned}$$
(b) Because $\log_k^K x = \frac{1}{2}\left(\log_k^T x + \log_{-k}^T x\right)$, we have
$$\begin{aligned}
&D_k^{w_1 w_2,K}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = \frac{1}{\int_A w_1 \, d\mu_1 \int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1(\omega, s) \log_k^K \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d\lambda(\omega, s)\\
&= \frac{1}{\int_A w_1 \, d\mu_1} \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1^{(1)}(\omega) f_1^{(2)}(s) \cdot \frac{1}{2}\left(\log_k^T \frac{f_1^{(1)}(\omega) f_1^{(2)}(s)}{f_2^{(1)}(\omega) f_2^{(2)}(s)} + \log_{-k}^T \frac{f_1^{(1)}(\omega) f_1^{(2)}(s)}{f_2^{(1)}(\omega) f_2^{(2)}(s)}\right) d\lambda_1(\omega)\,d\lambda_2(s)\\
&= \frac{1}{2}\left(D_k^{w_1 w_2,T}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) + D_{-k}^{w_1 w_2,T}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B)\right).
\end{aligned}$$
Using ( a ) , we get
$$\begin{aligned}
D_k^{w_1 w_2,K}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) &= \frac{1}{2}\left(D_k^{w_1,T}(\mu_1|\mu_2, A) + D_k^{w_2,T}(\nu_1|\nu_2, B) + k\,D_k^{w_1,T}(\mu_1|\mu_2, A)\,D_k^{w_2,T}(\nu_1|\nu_2, B)\right)\\
&\quad + \frac{1}{2}\left(D_{-k}^{w_1,T}(\mu_1|\mu_2, A) + D_{-k}^{w_2,T}(\nu_1|\nu_2, B) - k\,D_{-k}^{w_1,T}(\mu_1|\mu_2, A)\,D_{-k}^{w_2,T}(\nu_1|\nu_2, B)\right)\\
&= D_k^{w_1,K}(\mu_1|\mu_2, A) + D_k^{w_2,K}(\nu_1|\nu_2, B)\\
&\quad + \frac{k}{2}\left(D_k^{w_1,T}(\mu_1|\mu_2, A)\,D_k^{w_2,T}(\nu_1|\nu_2, B) - D_{-k}^{w_1,T}(\mu_1|\mu_2, A)\,D_{-k}^{w_2,T}(\nu_1|\nu_2, B)\right).
\end{aligned}$$
( c ) It is easy to prove that
$$\log_k^K(xy) = y^{-k} \cdot \frac{x^k - x^{-k}}{2k} + x^{k} \cdot \frac{y^k - y^{-k}}{2k} = y^{-k}\log_k^K x + x^{k}\log_k^K y \quad \text{for any } x, y \in (0, \infty).$$
Hence,
$$\begin{aligned}
&D_k^{w_1 w_2,K}(\mu_1 \times \nu_1 | \mu_2 \times \nu_2, A \times B) = \frac{1}{\int_A w_1 \, d\mu_1 \int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1(\omega, s) \log_k^K \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d\lambda(\omega, s)\\
&= \frac{1}{\int_A w_1 \, d\mu_1} \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_{A \times B} w_1(\omega) w_2(s) f_1^{(1)}(\omega) f_1^{(2)}(s)\left(\left(\frac{f_1^{(2)}(s)}{f_2^{(2)}(s)}\right)^{-k}\log_k^K \frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)} + \left(\frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)}\right)^{k}\log_k^K \frac{f_1^{(2)}(s)}{f_2^{(2)}(s)}\right) d\lambda_1(\omega)\,d\lambda_2(s)\\
&= \frac{1}{\int_A w_1 \, d\mu_1}\int_A w_1(\omega) f_1^{(1)}(\omega) \log_k^K \frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)} \, d\lambda_1(\omega) \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_B w_2(s) f_1^{(2)}(s)\left(\frac{f_1^{(2)}(s)}{f_2^{(2)}(s)}\right)^{-k} d\lambda_2(s)\\
&\quad + \frac{1}{\int_A w_1 \, d\mu_1}\int_A w_1(\omega) f_1^{(1)}(\omega)\left(\frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)}\right)^{k} d\lambda_1(\omega) \cdot \frac{1}{\int_B w_2 \, d\nu_1}\int_B w_2(s) f_1^{(2)}(s) \log_k^K \frac{f_1^{(2)}(s)}{f_2^{(2)}(s)} \, d\lambda_2(s)\\
&= \left(\frac{1}{\int_B w_2 \, d\nu_1}\int_B w_2(s) f_1^{(2)}(s)\left(\frac{f_1^{(2)}(s)}{f_2^{(2)}(s)}\right)^{-k} d\lambda_2(s)\right) D_k^{w_1,K}(\mu_1|\mu_2, A)\\
&\quad + \left(\frac{1}{\int_A w_1 \, d\mu_1}\int_A w_1(\omega) f_1^{(1)}(\omega)\left(\frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)}\right)^{k} d\lambda_1(\omega)\right) D_k^{w_2,K}(\nu_1|\nu_2, B).
\end{aligned}$$
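Theorem 3(a) can be verified on a product of two two-point spaces with counting reference measures, where the product densities and weights multiply coordinatewise (a sketch; all names and data are ours):

```python
def tlog(x, k):
    return (x**k - 1) / k

def weighted_tsallis_div(f1, f2, w, k):
    mass = sum(wi * a for wi, a in zip(w, f1))
    return sum(wi * tlog(a / b, k) * a for wi, a, b in zip(w, f1, f2)) / mass

# factor spaces (counting reference measures)
f1a, f2a, wa = [0.5, 0.5], [0.3, 0.7], [1.0, 2.0]   # mu_1, mu_2, w_1 on Omega
f1b, f2b, wb = [0.2, 0.8], [0.6, 0.4], [0.5, 1.5]   # nu_1, nu_2, w_2 on S

# on the product space, densities and weights multiply pointwise
F1 = [a * b for a in f1a for b in f1b]
F2 = [a * b for a in f2a for b in f2b]
W  = [u * v for u in wa for v in wb]

k = 0.7
dA = weighted_tsallis_div(f1a, f2a, wa, k)
dB = weighted_tsallis_div(f1b, f2b, wb, k)
d_prod = weighted_tsallis_div(F1, F2, W, k)
# Theorem 3(a): pseudo-additivity of the weighted Tsallis divergence
assert abs(d_prod - (dA + dB + k * dA * dB)) < 1e-9
```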

5. Conclusions

With the help of some inequalities concerning the Tsallis logarithm, we obtained inequalities between the weighted Tsallis and Kaniadakis divergences on an arbitrary measurable non-negligible set and the Tsallis logarithm, respectively, the Kaniadakis logarithm (Theorem 1). We showed that the aforementioned divergences obey bounds similar to those satisfied by the Kullback–Leibler divergence (Theorem 2) and proved that they are pseudo-additive (Theorem 3).

Author Contributions

Conceptualization, R.-C.S., S.-C.S. and V.P.; Formal analysis, V.P.; Investigation, R.-C.S., S.-C.S. and V.P.; Methodology, R.-C.S.; Validation, S.-C.S.; Writing—original draft, R.-C.S.; Writing—review and editing, S.-C.S. and V.P. All authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are very much indebted to the anonymous referees and to the editors for their most valuable comments and suggestions which improved the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
2. Preda, V.; Dedu, S.; Gheorghe, C. New classes of Lorenz curves by maximizing Tsallis entropy under mean and Gini equality and inequality constraints. Physica A 2015, 436, 925–932.
3. Soares, A.D.; Moura, N.J., Jr.; Ribeiro, M.B. Tsallis statistics in the income distribution of Brazil. Chaos Solitons Fractals 2016, 88, 158–171.
4. Abe, S.; Suzuki, N. Itineration of the Internet over nonequilibrium stationary states in Tsallis statistics. Phys. Rev. E 2003, 67, 016106.
5. Oikonomou, N.; Provata, A.; Tirnakli, U. Nonextensive statistical approach to non-coding human DNA. Physica A 2008, 387, 2653–2659.
6. Lima, J.; Silva, R., Jr.; Santos, J. Plasma oscillations and nonextensive statistics. Phys. Rev. E 2000, 61, 3260.
7. Jiang, Z.Q.; Chen, W.; Zhou, W.X. Scaling in the distribution of intertrade durations of Chinese stocks. Physica A 2008, 387, 5818–5825.
8. Kaizoji, T. An interacting-agent model of financial markets from the viewpoint of nonextensive statistical mechanics. Physica A 2006, 370, 109–113.
9. Kaniadakis, G. Non-linear kinetics underlying generalized statistics. Physica A 2001, 296, 405–425.
10. Preda, V.; Dedu, S.; Sheraz, M. New measure selection for Hunt–Devolder semi-Markov regime switching interest rate models. Physica A 2014, 407, 350–359.
11. Trivellato, B. The minimal k-entropy martingale measure. Int. J. Theor. Appl. Financ. 2012, 15, 1250038.
12. Trivellato, B. Deformed exponentials and applications to finance. Entropy 2013, 15, 3471–3489.
13. Abreu, E.M.C.; Neto, J.A.; Barboza, E.M., Jr.; Nunes, R.C. Jeans instability criterion from the viewpoint of Kaniadakis’ statistics. EPL 2016, 114, 55001.
14. Curé, M.; Rial, D.F.; Christen, A.; Cassetti, J. A method to deconvolve stellar rotational velocities. Astron. Astrophys. 2014, 564, A85.
15. Macedo-Filho, A.; Moreira, D.A.; Silva, R.; da Silva, L.R. Maximum entropy principle for Kaniadakis statistics and networks. Phys. Lett. A 2013, 377, 842–846.
16. Stella, M.; Brede, M. A kappa-deformed model of growing complex networks with fitness. Physica A 2014, 407, 360–368.
17. Clementi, F.; Gallegati, M.; Kaniadakis, G. A k-generalized statistical mechanics approach to income analysis. J. Stat. Mech. 2009, 2009, P02037.
18. Modanese, G. Common origin of power-law tails in income distributions and relativistic gases. Phys. Lett. A 2016, 380, 29–32.
19. Kaniadakis, G. Statistical mechanics in the context of special relativity. Phys. Rev. E 2002, 66, 056125.
20. Kaniadakis, G. Statistical mechanics in the context of special relativity. II. Phys. Rev. E 2005, 72, 036108.
21. Kaniadakis, G.; Scarfone, A.M.; Sparavigna, A.; Wada, T. Composition law of k-entropy for statistically independent systems. Phys. Rev. E 2017, 95, 052112.
22. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
23. Amari, S.; Nagaoka, H. Methods of Information Geometry (Translated from the Japanese by Daishi Harada); American Mathematical Society: Providence, RI, USA, 2000.
24. Gkelsinis, T.; Karagrigoriou, A.; Barbu, V.S. Statistical inference based on weighted divergence measures with simulations and applications. Stat. Pap. 2022, 63, 1511–1536.
25. Nawrocki, D.N.; Harding, W.H. State-value weighted entropy as a measure of investment risk. Appl. Econ. 1986, 18, 411–419.
26. Guiaşu, S. Grouping data by using the weighted entropy. J. Stat. Plan. Inference 1990, 15, 63–69.
27. Di Crescenzo, A.; Longobardi, M. On weighted residual and past entropies. arXiv 2007, arXiv:math/0703489.
28. Suhov, Y.; Zohren, S. Quantum weighted entropy and its properties. arXiv 2014, arXiv:1411.0892.
29. Furuichi, S.; Yanagi, K.; Kuriyama, K. Fundamental properties of Tsallis relative entropy. J. Math. Phys. 2004, 45, 4868–4877.
30. Huang, J.; Yong, W.-A.; Hong, L. Generalization of the Kullback–Leibler divergence in the Tsallis statistics. J. Math. Anal. Appl. 2016, 436, 501–512.
31. Sfetcu, R.-C. Tsallis and Rényi divergences of generalized Jacobi polynomials. Physica A 2016, 460, 131–138.
32. Sfetcu, R.-C.; Sfetcu, S.-C.; Preda, V. On Tsallis and Kaniadakis divergences. Math. Phys. Anal. Geom. 2022, 25, 23.
33. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy and divergence rates for Markov chains. II: The weighted case. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2018, 19, 3–10.
34. Beliş, M.; Guiaşu, S. A quantitative-qualitative measure of information in cybernetic systems. IEEE Trans. Inf. Theory 1968, 14, 593–594.
35. Guiaşu, S. Weighted entropy. Rep. Math. Phys. 1971, 2, 165–179.
36. Barbu, V.S.; Karagrigoriou, A.; Makrides, A. Semi-Markov modelling for multi-state systems. Methodol. Comput. Appl. Probab. 2017, 19, 1011–1028.
37. Bulinski, A.; Dimitrov, D. Statistical estimation of the Kullback–Leibler divergence. Mathematics 2021, 9, 544.
38. Gkelsinis, T.; Karagrigoriou, A. Theoretical aspects on measures of directed information with simulations. Mathematics 2020, 8, 587.
39. Toma, A. Optimal robust M-estimators using divergences. Stat. Probab. Lett. 2009, 79, 1–5.
40. Toma, A. Model selection criteria using divergences. Entropy 2014, 16, 2686–2698.
41. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer Science+Business Media: Berlin/Heidelberg, Germany, 2009.