
Attention to the Variation of Probabilistic Events: Information Processing with Message Importance Measure

Beijing National Research Center for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(5), 439; https://doi.org/10.3390/e21050439
Submission received: 10 March 2019 / Revised: 9 April 2019 / Accepted: 23 April 2019 / Published: 26 April 2019
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)

Abstract

Different probabilities of events attract different attention in many scenarios such as anomaly detection and security systems. To characterize the importance of events from a probabilistic perspective, the message importance measure (MIM) is proposed as a kind of semantics analysis tool. Similar to Shannon entropy, the MIM has its own role in information representation, in which the parameter of MIM plays a vital role. Actually, the parameter dominates the properties of MIM, based on which the MIM has three work regions where this measure can be used flexibly for different goals. When the parameter is positive but not large enough, the MIM not only provides a new viewpoint for information processing but also has some similarities with Shannon entropy in information compression and transmission. In this regard, this paper first constructs a system model with the message importance measure and proposes the message importance loss to enrich information processing strategies. Moreover, the message importance loss capacity is proposed to measure the information importance harvest in a transmission. Furthermore, the message importance distortion function is discussed to give an upper bound of information compression based on the MIM. Additionally, the bitrate transmission constrained by the message importance loss is investigated to broaden the scope of Shannon information theory.


1. Introduction

In recent years, massive data has attracted much attention in various realistic scenarios. Actually, there exist many challenges for data processing, such as distributed data acquisition, huge-scale data storage and transmission, as well as correlation or causality representation [1,2,3,4,5]. Facing these obstacles, a promising approach is to make good use of information theory and statistics to deal with massive information. For example, a method based on Max Entropy in Metric Space (MEMS) is utilized for local feature extraction and mechanical system analysis [6]; as an information measure different from Shannon entropy, Voronoi entropy is discussed to characterize random 2D patterns [7]; category theory, which can characterize the Kolmogorov–Sinai and Shannon entropy as unique functors, is used in autonomous and networked dynamical systems [8].
To some degree, probabilistic events attract different levels of interest according to their probabilities. For example, considering that small probability events hidden in massive data contain more semantic importance [9,10,11,12,13], people usually pay more attention to rare events (rather than common events) and design the corresponding strategies for their information representation and processing in many applications, including outlier detection in the Internet of Things (IoT), smart cities and autonomous driving [14,15,16,17,18,19,20,21,22]. Therefore, probabilistic event processing has special value in information technology based on the semantics analysis of message importance.
In order to characterize the importance of probabilistic events, a new information measure named MIM is presented to generalize Shannon information theory [23,24,25]. Here, we shall investigate information processing, including compression (or storage) and transmission, based on MIM to bring some new viewpoints to information theory. Now, we first give a short review of MIM.

1.1. Review of Message Importance Measure

Essentially, the message importance measure (MIM) is proposed to focus on the importance of probabilistic events [23]. In particular, the core idea of this information measure is that weights of importance are allocated to different events according to the corresponding events' probabilities. In this regard, as an information measure, MIM may provide an applicable criterion to characterize the message importance from the viewpoint of the inherent properties of events, without human subjective factors. For convenience of calculation, an exponential expression of MIM is defined as follows.
Definition 1.
For a discrete distribution P(X) = \{p(x_1), p(x_2), \ldots, p(x_n)\}, the exponential expression of the message importance measure (MIM) is given by
L(ϖ, X) = \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))},
where the adjustable parameter ϖ is nonnegative and p(x_i) e^{ϖ(1 - p(x_i))} is viewed as the self-scoring value of event i to measure its message importance.
Actually, from the perspective of generalized Fadeev’s postulates, the MIM is viewed as a rational information measure similar to Shannon entropy and Renyi entropy which are respectively defined by
H(X) = -\sum_{x_i} p(x_i) \log p(x_i),
H_α(X) = \frac{1}{1-α} \log \sum_{x_i} \{p(x_i)\}^{α}, \quad (0 < α < ∞, \; α \neq 1),
where the condition of variable X is the same as that described in Definition 1. In particular, a postulate for the MIM weaker than that for Shannon entropy and Renyi entropy is given by
F(PQ) \leq F(P) + F(Q),
while F ( P Q ) = F ( P ) + F ( Q ) is satisfied for Shannon entropy and Renyi entropy [26], where P and Q are two independent random distributions and F ( · ) denotes a kind of information measure.
Moreover, the crucial operator of MIM for handling probability elements is the exponential function, while the corresponding operators of Shannon entropy and Renyi entropy are the logarithmic and polynomial functions, respectively. In this case, MIM can be viewed as a map that assigns importance weights to events, or equivalently obtains the self-scoring values of events, which distinguishes it from conventional information measures.
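To make the operator comparison concrete, the following Python sketch evaluates the three measures side by side. It is only a minimal illustration: the sample distribution and the parameter values are arbitrary assumptions, not taken from the paper.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(X) in nats, Equation (2) -- logarithmic operator."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1), Equation (3) -- polynomial operator."""
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def mim(p, w):
    """Exponential expression of the MIM, Equation (1), with importance coefficient w."""
    p = np.asarray(p, dtype=float)
    return np.sum(p * np.exp(w * (1.0 - p)))

p = [0.6, 0.3, 0.09, 0.01]            # hypothetical distribution
print(shannon_entropy(p))
print(renyi_entropy(p, alpha=2.0))
print(mim(p, w=1.0))
```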
As far as the application of MIM is concerned, this information measure may provide a better way to detect unbalanced events in signal processing. Ref. [27] has investigated minor probability event detection by combining MIM and Bayes detection. Moreover, it is worth noting that the physical meaning of the components of MIM corresponds to the normalized optimal data recommendation distribution, which makes a trade-off between the users' preference and the system revenue [28]. In this respect, MIM plays a fundamental role in the recommendation system (a popular application of big data) from the theoretical viewpoint. Therefore, MIM does not come from imagination directly; rather, it is a meaningful information measure originating from practical scenarios.

1.2. The Importance Coefficient ϖ in MIM

In general, the parameter ϖ, viewed as the importance coefficient, has a great impact on the MIM. Actually, different values of the parameter ϖ lead to different properties and performances of this information measure. In particular, to measure a distribution P(X) = \{p(x_1), p(x_2), \ldots, p(x_n)\}, there are three work regions of MIM which can be classified by the parameter, whose details are discussed as follows.
(i)
If the parameter satisfies 0 ≤ ϖ ≤ 2/max{p(x_i)}, the convexity of MIM is similar to Shannon entropy and Renyi entropy. Actually, these three information measures all have maximum value properties and allocate weights for probability elements of the distribution P(X). It is notable that the MIM in this work region focuses on the typical sets rather than atypical sets, which implies that the uniform distribution reaches the maximum value. In brief, the MIM in this work region can be regarded as the same class of message measure as Shannon entropy and Renyi entropy to deal with the problems of information theory.
(ii)
If we have ϖ > 2/max{p(x_i)}, the small probability elements will be the dominant factor for MIM to measure a distribution. That is, the small probability events can be highlighted more in this work region of MIM than in the first one. Moreover, in this work region, MIM pays more attention to atypical sets, which can be viewed as a magnifier for rare events. In fact, this property corresponds to some common scenarios where anomalies catch more attention, such as anomaly detection and alarming. In this case, some problems (including communication and probabilistic event processing) can be rehandled from the perspective of rare events' importance. Particularly, the compression encoding and maximum entropy rate transmission are proposed based on the non-parametric MIM (namely NMIM) [24]; in addition, the distribution goodness-of-fit approach is also presented by use of the differential MIM (namely DMIM) [29].
(iii)
If the MIM has the parameter ϖ < 0 , the large probability elements will be the main part contributing to the value of this information measure. In other words, the normal events attract more attention in this work region of MIM than rare events. In practice, this can be used in many applications where regular events are popular such as filter systems and data cleaning.
As a matter of fact, by selecting the parameter ϖ properly, we can exploit the MIM to solve several problems in different scenarios. The importance coefficient facilitates more flexibility of MIM in applications beyond Shannon entropy and Renyi entropy.
To focus on a concrete object, in this paper, we mainly investigate the first work region of MIM (namely 0 ≤ ϖ ≤ 2/max{p(x_i)}) and intend to dig out some novelties related to this metric for information processing.
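The effect of the three work regions can be illustrated with a short sketch: for an arbitrary example distribution it computes the normalized self-scoring values p(x_i) e^{ϖ(1 − p(x_i))} for one ϖ in each region and reports which event dominates. The distribution and the specific ϖ values below are assumptions made only for illustration.

```python
import numpy as np

p = np.array([0.7, 0.2, 0.09, 0.01])   # hypothetical distribution; 2/max{p(x_i)} ~= 2.86

def self_scores(p, w):
    """Per-event self-scoring values p(x_i) * exp(w * (1 - p(x_i)))."""
    return p * np.exp(w * (1.0 - p))

for w in (1.0, 50.0, -5.0):             # samples from work regions (i), (ii), (iii)
    s = self_scores(p, w)
    print(f"w = {w:>5}: dominant event index = {np.argmax(s)}, "
          f"normalized scores = {np.round(s / s.sum(), 4)}")
```

For ϖ = 1 (region (i)) all events keep comparable weights; for a large positive ϖ (region (ii)) the rarest event dominates the measure; for a negative ϖ (region (iii)) the most probable event dominates.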

1.3. Similarities and Differences between Shannon Entropy and MIM

In fact, when the parameter ϖ satisfies 0 ≤ ϖ ≤ 2/max{p(x_i)}, MIM is similar to Shannon entropy with regard to its expression and properties. The exponential operator of MIM is a substitute for the logarithm operator of Shannon entropy. As a kind of tool based on probability distributions, the MIM with parameter 0 ≤ ϖ ≤ 2/max{p(x_i)} has the same concavity and monotonicity as Shannon entropy, which can characterize the information difference between variables.
By resorting to the exponential operator of MIM, the weights for small probability elements are amplified to some degree more than those for large probability ones, which can be regarded as message importance allocation based on the self-scoring values. In this regard, the MIM may add fresh factors to information processing, taking into account the effects of probabilistic events' importance from an objective viewpoint.
In conventional Shannon information theory, data transmission and compression can both be viewed as an information transfer process from the variable X to Y. The capacity of information transmission is achieved by maximizing the mutual information between X and Y. Actually, there exists distortion for probabilistic events during an information transfer process, which denotes the difference between the source and its corresponding reconstruction. Due to this fact, it is possible to compress data based on an allowable information loss to a certain extent [30,31,32]. In Shannon information theory, rate-distortion theory is investigated for lossy data compression, whose essence is mutual information minimization under the constraint of a certain distortion. However, in some cases involving distortion, small probability events containing more message importance require higher reliability than those with large probability. In this sense, another aspect of information distortion may be essential, in which message importance is considered as a reasonable metric. Particularly, the information transfer process is characterized by the MIM (rather than the entropy) while controlling the distortion, which can be viewed as a new kind of information compression, compared to the conventional scheme that compresses redundancy to save resources. In fact, some information measures with respect to message importance have been investigated to extend the range of Shannon information theory [33,34,35,36,37]. In this regard, it is worthwhile exploring information processing in the sense of MIM. Furthermore, it is also promising to investigate the Shannon mutual information constrained by the MIM in an information transfer process, which may become a novel system invariant.
In addition, similar to Shannon conditional entropy, a conditional message importance measure for two distributions is proposed to process conditional probability.
Definition 2.
For two discrete probability distributions P(X) = \{p(x_1), p(x_2), \ldots, p(x_n)\} and P(Y) = \{p(y_1), p(y_2), \ldots, p(y_n)\}, the conditional message importance measure (CMIM) is given by
L(ϖ, X|Y) = \sum_{y_j} p(y_j) \sum_{x_i} p(x_i|y_j) e^{ϖ(1 - p(x_i|y_j))},
where p(x_i|y_j) denotes the conditional probability of x_i given y_j. The component p(x_i|y_j) e^{ϖ(1 - p(x_i|y_j))} is similar to the self-scoring value. Therefore, the CMIM can be considered as a system invariant which indicates the average total self-scoring value in an information transfer process.
Actually, the MIM is a metric with different mathematical and physical meaning from Shannon entropy and Renyi entropy, which provides its own perspective to process probabilistic events. However, due to the similarity between the MIM and Shannon entropy, they may have analogous performance in some aspects. To this end, the information processing based on the MIM is discussed in this paper.

1.4. Motivation and Contributions

The purpose of this paper is to characterize the probabilistic events processing including compression and transmission by means of MIM. Particularly, in terms of the information processing system model shown in Figure 1, the message source φ (regarded as a random variable whose support set corresponds to the set of events’ types) can be measured by the amount of information H ( · ) and the message importance L ( · ) according to the probability distribution. Then, the information transfer process whose details are presented in Section 2 can be characterized based on these two metrics. Different from the mathematically probabilistic characterization of traditional telecommunication system, this paper mainly discusses the information processing from the perspectives of message importance. In this regard, the information importance harvest in a transmission is characterized by the proposed message importance loss capacity. Moreover, the upper bound of information compression based on the MIM is described by the message importance distortion function. In addition, we also investigate the trade-off between bitrate transmission and message importance loss to bring some inspiration to the conventional information theory.

1.5. Organization

The rest of this paper is organized as follows. In Section 2, a system model involved with message importance is constructed to help analyze data compression and transmission in big data. In Section 3, we propose a kind of message transfer capacity to investigate the message importance loss in transmission. In Section 4, the message importance distortion function is introduced and its properties are also presented in detail. In Section 5, we discuss the bitrate transmission constrained by message importance to widen the horizon of Shannon theory. In Section 6, some numerical results are presented to validate the propositions and theoretical analysis. Finally, we conclude this paper in Section 7. Additionally, the fundamental notations of this paper are summarized in Table 1.

2. System Model with Message Importance

Considering the information processing system model shown in Figure 1, the information transfer process is discussed as follows. At first, a message source φ follows a distribution P_φ = {p(φ_1), p(φ_2), ..., p(φ_n)} whose support set is {φ_1, φ_2, ..., φ_n}, corresponding to the event types. Then, the message φ is encoded or compressed into the variable φ̃ following the distribution P_φ̃ = {p(φ̃_1), p(φ̃_2), ..., p(φ̃_n)} whose alphabet is {φ̃_1, φ̃_2, ..., φ̃_n}. After the information transfer process denoted by the matrix p(Ω̃_j|φ̃_i), the received message Ω̃ originating from φ̃ is observed as a random variable, where the distribution of Ω̃ is P_Ω̃ = {p(Ω̃_1), p(Ω̃_2), ..., p(Ω̃_n)} whose alphabet is {Ω̃_1, Ω̃_2, ..., Ω̃_n}. Finally, the receiver recovers the original message φ by decoding Ω = g(Ω̃), where g(·) denotes the decoding function and Ω is the recovered message with the alphabet {Ω_1, Ω_2, ..., Ω_n}.
From the viewpoint of generalized information theory, a two-layer framework is considered to understand this model, where the first layer is based on the amount of information characterized by Shannon entropy, denoted by H(·), while the second layer relies on the message importance measure of events, denoted by L(·). Since the former has been discussed rather thoroughly, we mainly investigate the latter in this paper.
Considering the source-channel separation theorem [38], the above information processing model consists of two problems, namely data compression and data transmission. On the one hand, the data compression of the system can be achieved by using classical source coding strategies to reduce redundancy, in which the information loss is described by H(φ) - H(φ|φ̃) under the information transfer matrix p(φ̃|φ). Similarly, from the perspective of message importance, the data can be further compressed by discarding worthless messages, where the message importance loss can be characterized by L(φ) - L(φ|φ̃). On the other hand, the data transmission is discussed to obtain the upper bound of the mutual information H(φ̃) - H(φ̃|Ω̃), namely the information capacity. In a similar way, L(φ̃) - L(φ̃|Ω̃) represents the gain of message importance in the transmission.
In essence, it is apparent that data compression and transmission are both considered as information transfer processes {X, p(y|x), Y}, and they can be characterized by the difference between {X} and {X|Y}. In order to facilitate the analysis of the above model, the message importance loss is introduced as follows.
Definition 3.
For two discrete probability distributions P(X) = \{p(x_1), p(x_2), \ldots, p(x_n)\} and P(Y) = \{p(y_1), p(y_2), \ldots, p(y_n)\}, the message importance loss based on MIM and CMIM is given by
Φ_ϖ(X||Y) = L(ϖ, X) - L(ϖ, X|Y),
where L(ϖ, X) and L(ϖ, X|Y) are given by Definitions 1 and 2.
In fact, according to the intrinsic relationship between L ( ϖ , X ) and L ( ϖ , X | Y ) , it is readily seen that
Φ_ϖ(X||Y) ≥ 0,
where 0 < ϖ ≤ 2/max{p(x_i|y_j)}.
Proof. 
Considering the function f(x) = x e^{ϖ(1-x)} (0 ≤ x ≤ 1 and 0 < ϖ), it is easy to obtain ∂²f(x)/∂x² = -ϖ e^{ϖ(1-x)}(2 - ϖx), which implies that if ϖ ≤ 2/x, the function f(x) is concave.
In the light of Jensen's inequality, if 0 < ϖ ≤ 2/max{p(x_i|y_j)} is satisfied, it is not difficult to see that
L(ϖ, X) = \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))} = \sum_{x_i} \Big\{\sum_{y_j} p(y_j) p(x_i|y_j)\Big\} e^{ϖ\big(1 - \sum_{y_j} p(y_j) p(x_i|y_j)\big)} ≥ \sum_{y_j} p(y_j) \sum_{x_i} p(x_i|y_j) e^{ϖ(1 - p(x_i|y_j))} = L(ϖ, X|Y). □
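The nonnegativity of the message importance loss can also be checked numerically. The following sketch is a minimal illustration with randomly generated distributions (not any data from the paper): it computes the MIM, the CMIM and Φ_ϖ(X||Y), keeping ϖ within the stated range, and verifies that the loss stays nonnegative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mim(p, w):
    """L(w, X) of Equation (1)."""
    return np.sum(p * np.exp(w * (1.0 - p)))

def cmim(px, P, w):
    """L(w, X|Y) of Equation (4); P[i, j] = p(y_j | x_i)."""
    joint = px[:, None] * P                  # p(x_i, y_j)
    py = joint.sum(axis=0)                   # p(y_j)
    total = 0.0
    for j in range(P.shape[1]):
        if py[j] > 0:
            x_given_y = joint[:, j] / py[j]  # p(x_i | y_j)
            total += py[j] * mim(x_given_y, w)
    return total

for _ in range(1000):
    px = rng.dirichlet(np.ones(4))           # random source distribution
    P = rng.dirichlet(np.ones(3), size=4)    # random transfer matrix p(y|x)
    joint = px[:, None] * P
    cond = joint / joint.sum(axis=0)         # p(x|y), column by column
    w = 2.0 / cond.max()                     # keep 0 < w <= 2 / max p(x_i|y_j)
    assert mim(px, w) - cmim(px, P, w) >= -1e-9
print("Phi(X||Y) >= 0 held on all random trials")
```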

3. Message Importance Loss in Transmission

In this section, we will introduce the CMIM to characterize the information transfer processing. To do so, we define a kind of message transfer capacity measured by the CMIM as follows.
Definition 4.
Assume that there exists an information transfer process as
{ X , p ( y | x ) , Y } ,
where the p ( y | x ) denotes a probability distribution matrix describing the information transfer from the variable X to Y. We define the message importance loss capacity (MILC) as
C = \max_{p(x)} \{Φ_ϖ(X||Y)\} = \max_{p(x)} \{L(ϖ, X) - L(ϖ, X|Y)\},
where L(ϖ, X) = \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))}, p(y_j) = \sum_{x_i} p(x_i) p(y_j|x_i), p(x_i|y_j) = p(x_i) p(y_j|x_i)/p(y_j), L(ϖ, X|Y) is defined by Equation (4), and 0 < ϖ ≤ 2/max{p(x_i)}.
In order to have an insight into the applications of MILC, some specific information transfer scenarios are discussed as follows.

3.1. Binary Symmetric Matrix

Consider the binary symmetric information transfer matrix, where the original variable is complemented (i.e., flipped) with the transfer probability, as described in the following proposition.
Proposition 1.
Assume that there exists an information transfer process { X , p ( y | x ) , Y } , where the information transfer matrix is
p(y|x) = \begin{pmatrix} 1-β_s & β_s \\ β_s & 1-β_s \end{pmatrix},
which indicates that X and Y both follow binary distributions. In that case, we have
C(ϖ, β_s) = e^{ϖ/2} - L(ϖ, β_s),
where L(ϖ, β_s) = β_s e^{ϖ(1-β_s)} + (1-β_s) e^{ϖ β_s} (0 ≤ β_s ≤ 1) and 0 < ϖ ≤ 2/max{p(x_i)}.
Proof of Proposition 1.
Assume that the distribution of variable X is a binary distribution (p, 1-p). According to Equation (10) and Bayes' theorem (namely, p(x|y) = p(x) p(y|x)/p(y)), it is not difficult to see that
p(x|y) = \begin{pmatrix} \frac{p(1-β_s)}{p(1-β_s) + (1-p)β_s} & \frac{(1-p)β_s}{p(1-β_s) + (1-p)β_s} \\ \frac{p β_s}{p β_s + (1-p)(1-β_s)} & \frac{(1-p)(1-β_s)}{p β_s + (1-p)(1-β_s)} \end{pmatrix}.
Furthermore, in accordance with Equations (4) and (9), we have
C(ϖ, β_s) = \max_p \{C(p, ϖ, β_s)\} = \max_p \Big\{ L(ϖ, p) - \Big[ p(1-β_s) e^{ϖ\frac{(1-p)β_s}{p(1-β_s)+(1-p)β_s}} + (1-p)β_s e^{ϖ\frac{p(1-β_s)}{p(1-β_s)+(1-p)β_s}} + p β_s e^{ϖ\frac{(1-p)(1-β_s)}{p β_s+(1-p)(1-β_s)}} + (1-p)(1-β_s) e^{ϖ\frac{p β_s}{p β_s+(1-p)(1-β_s)}} \Big] \Big\},
where L(ϖ, p) = p e^{ϖ(1-p)} + (1-p) e^{ϖ p} (0 < p < 1). Then, it is readily seen that
\frac{∂C(p, ϖ, β_s)}{∂p} = (1 - ϖ p) e^{ϖ(1-p)} + [(1-p)ϖ - 1] e^{ϖ p} - \Big\{ (1-β_s)\Big[1 - \frac{ϖ p(1-β_s)β_s}{[p(1-β_s)+(1-p)β_s]^2}\Big] e^{ϖ\frac{(1-p)β_s}{p(1-β_s)+(1-p)β_s}} + (1-β_s)\Big[\frac{ϖ(1-p)β_s(1-β_s)}{[p β_s+(1-p)(1-β_s)]^2} - 1\Big] e^{ϖ\frac{p β_s}{p β_s+(1-p)(1-β_s)}} + β_s\Big[\frac{ϖ(1-p)β_s(1-β_s)}{[p(1-β_s)+(1-p)β_s]^2} - 1\Big] e^{ϖ\frac{p(1-β_s)}{p(1-β_s)+(1-p)β_s}} + β_s\Big[1 - \frac{ϖ p(1-β_s)β_s}{[p β_s+(1-p)(1-β_s)]^2}\Big] e^{ϖ\frac{(1-p)(1-β_s)}{p β_s+(1-p)(1-β_s)}} \Big\}.
In light of the positivity of ∂C(p, ϖ, β_s)/∂p on {p | p ∈ (0, 1/2)} and its negativity on {p | p ∈ (1/2, 1)} (if β_s ≠ 1/2), it is apparent that p = 1/2 is the only solution of ∂C(p, ϖ, β_s)/∂p = 0. That is, if β_s ≠ 1/2, the extreme value is indeed the maximum value of C(p, ϖ, β_s), attained at p = 1/2. Similarly, if β_s = 1/2, the solution p = 1/2 also leads to the same conclusion. □
Remark 1.
According to Proposition 1, on one hand, when β s = 1 / 2 , that is, the information transfer process is just random, we will gain the lower bound of the MILC namely C ( β s ) = 0 . On the other hand, when β s = 0 , namely there is a certain information transfer process, we will have the maximum MILC. As for the distribution selection for the variable X, the uniform distribution is preferred to gain the capacity.
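Proposition 1 can be checked with a short numerical sketch: brute-force the message importance loss over binary inputs (p, 1 − p) and compare the maximum with the closed form e^{ϖ/2} − L(ϖ, β_s). The parameter values below are examples chosen to match the figures reported in Section 6; everything else is an illustrative assumption.

```python
import numpy as np

def mim(p, w):
    return np.sum(p * np.exp(w * (1.0 - p)))

def importance_loss(p, w, P):
    """L(w,X) - L(w,X|Y) for a binary source (p, 1-p) and transfer matrix P = p(y|x)."""
    px = np.array([p, 1.0 - p])
    joint = px[:, None] * P
    py = joint.sum(axis=0)
    l_cond = sum(py[j] * mim(joint[:, j] / py[j], w)
                 for j in range(P.shape[1]) if py[j] > 0)
    return mim(px, w) - l_cond

w, beta_s = 1.0, 0.1
P = np.array([[1 - beta_s, beta_s], [beta_s, 1 - beta_s]])

brute = max(importance_loss(p, w, P) for p in np.linspace(1e-6, 1 - 1e-6, 2001))
closed = np.exp(w / 2) - (beta_s * np.exp(w * (1 - beta_s))
                          + (1 - beta_s) * np.exp(w * beta_s))
print(brute, closed)   # both ~0.4081, consistent with the value reported in Section 6
```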

3.2. Binary Erasure Matrix

The binary erasure information transfer matrix is similar to the binary symmetric one; however, in the former, a part of information is lost rather than corrupted. The MILC of this kind of information transfer matrix is discussed as follows.
Proposition 2.
Consider an information transfer process { X , p ( y | x ) , Y } , in which the information transfer matrix is described as
p(y|x) = \begin{pmatrix} 1-β_e & 0 & β_e \\ 0 & 1-β_e & β_e \end{pmatrix},
which indicates that X follows the binary distribution and Y follows the 3-ary distribution. Then, we have
C(ϖ, β_e) = (1-β_e)\{e^{ϖ/2} - 1\},
where 0 ≤ β_e ≤ 1 and 0 < ϖ ≤ 2/max{p(x_i)}.
Proof of Proposition 2.
Assume the distribution of variable X is (p, 1-p). Furthermore, according to the binary erasure matrix and Bayes' theorem, we obtain the transfer matrix conditioned on the variable Y as follows:
p(x|y) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ p & 1-p \end{pmatrix}.
Then, it is not difficult to have
L(ϖ, X|Y) = β_e p e^{ϖ(1-p)} + β_e (1-p) e^{ϖ p} + 1 - β_e.
Furthermore, it is readily seen that
C(ϖ, β_e) = \max_p \Big\{ L(ϖ, p) - \big[β_e p e^{ϖ(1-p)} + β_e (1-p) e^{ϖ p} + 1 - β_e\big] \Big\} = (1-β_e)\Big[\max_p \{L(ϖ, p)\} - 1\Big],
where L(ϖ, p) = p e^{ϖ(1-p)} + (1-p) e^{ϖ p}. Moreover, the solution p = 1/2 leads to ∂L(ϖ, p)/∂p = 0, and the corresponding second derivative is
\frac{∂^2 L(ϖ, p)}{∂p^2} = e^{ϖ(1-p)}(ϖ p - 2)ϖ + e^{ϖ p}[(1-p)ϖ - 2]ϖ < 0,
which results from the condition 0 < ϖ ≤ 2/max{p(x_i)}.
Therefore, it is readily seen that, in the case p = 1 / 2 , the capacity C ( p , ϖ , β e ) reaches the maximum value. □
Remark 2.
Proposition 2 indicates that, in the case β e = 1 , the lower bound of the capacity is obtained, that is C ( β e ) = 0 . However, if a certain information transfer process is satisfied (namely β e = 0 ), we will have the maximum MILC. Similar to Proposition 1, the uniform distribution is selected to reach the capacity in practice.

3.3. Strongly Symmetric Backward Matrix

As for a strongly symmetric backward matrix, it is viewed as a special example of information transmission. The discussion for the message transfer capacity in this case is similar to that in the symmetric matrix, whose details are given as follows.
Proposition 3.
For an information transmission from the source X to the sink Y, assume that there exists a strongly symmetric backward matrix as follows:
p(x|y) = \begin{pmatrix} 1-β_k & \frac{β_k}{K-1} & \cdots & \frac{β_k}{K-1} \\ \frac{β_k}{K-1} & 1-β_k & \cdots & \frac{β_k}{K-1} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{β_k}{K-1} & \frac{β_k}{K-1} & \cdots & 1-β_k \end{pmatrix},
which indicates that X and Y both obey K-ary distribution. We have
C(ϖ, β_k) = e^{ϖ\frac{K-1}{K}} - \Big\{(1-β_k) e^{ϖ β_k} + β_k e^{ϖ\big(1 - \frac{β_k}{K-1}\big)}\Big\},
where 0 ≤ β_k ≤ 1, K ≥ 2 and 0 < ϖ ≤ 2/max{p(x_i)}.
Proof of Proposition 3.
For given K-ary variables X and Y whose distribution are { p ( x 1 ) , p ( x 2 ) , . . . , p ( x K ) } and { p ( y 1 ) , p ( y 2 ) , . . . , p ( y K ) } respectively, we can use the strongly symmetric backward matrix to obtain the relationship between the two variables as follows:
p(x_i) = (1-β_k) p(y_i) + \frac{β_k}{K-1}\big[1 - p(y_i)\big], \quad (i = 1, 2, \ldots, K),
which implies that p(x_i) is a one-to-one and onto function of p(y_i).
In accordance with Definition 2, it is easy to see that
L(ϖ, X|Y) = \sum_{x_i}\sum_{y_j} p(y_j) p(x_i|y_j) e^{ϖ(1 - p(x_i|y_j))} = \sum_{y_j} p(y_j)\Big[(1-β_k) e^{ϖ β_k} + β_k e^{ϖ\big(1-\frac{β_k}{K-1}\big)}\Big] = (1-β_k) e^{ϖ β_k} + β_k e^{ϖ\big(1-\frac{β_k}{K-1}\big)}.
Moreover, by virtue of the definition of MILC in Equation (9), it is readily seen that
C(ϖ, β_k) = \max_{p(x)} \{L(ϖ, X)\} - \Big[(1-β_k) e^{ϖ β_k} + β_k e^{ϖ\big(1-\frac{β_k}{K-1}\big)}\Big],
where L(ϖ, X) = \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))}.
Then, by using Lagrange multiplier method, we have
G(p(x_i), λ_0) = \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))} + λ_0\Big(\sum_{x_i} p(x_i) - 1\Big).
By setting ∂G(p(x_i), λ_0)/∂p(x_i) = 0 and ∂G(p(x_i), λ_0)/∂λ_0 = 0, it can be readily verified that the extreme value of \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))} is achieved by the uniform distribution, that is, p(x_1) = p(x_2) = \ldots = p(x_K) = 1/K. In the case that 0 < ϖ ≤ 2/max{p(x_i)}, we have ∂²G(p(x_i), λ_0)/∂p²(x_i) < 0 for p(x_i) ∈ [0, 1], which implies that the extreme value of \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))} is the maximum value.
In addition, according to Equation (23), the uniform distribution of the variable X results from the uniform distribution of the variable Y.
Therefore, by substituting the uniform distribution for p ( x ) into Equation (25), we will obtain the capacity C ( ϖ , β k ) . □
Furthermore, in light of Equation (22), we have
\frac{∂C(ϖ, β_k)}{∂β_k} = \big\{1 - ϖ(1-β_k)\big\} e^{ϖ β_k} + \Big(\frac{ϖ β_k}{K-1} - 1\Big) e^{ϖ\big(1-\frac{β_k}{K-1}\big)}.
By setting ∂C(ϖ, β_k)/∂β_k = 0, it is apparent that C(ϖ, β_k) reaches its extreme value in the case that β_k = (K-1)/K. Additionally, when the parameter ϖ satisfies 0 < ϖ ≤ 2/max{p(x_i)}, we also have the second derivative of C(ϖ, β_k) as follows:
\frac{∂^2 C(ϖ, β_k)}{∂β_k^2} = ϖ\big[2 - (1-β_k)ϖ\big] e^{ϖ β_k} + \frac{ϖ}{K-1}\Big(2 - \frac{ϖ β_k}{K-1}\Big) e^{ϖ\big(1-\frac{β_k}{K-1}\big)} > 0,
which indicates that the convex C(ϖ, β_k) reaches the minimum value 0 in the case β_k = (K-1)/K.
Remark 3.
According to Proposition 3, when β_k = (K-1)/K, namely, the channel is just random, we gain the lower bound of the capacity, namely C(ϖ, β_k) = 0. On the contrary, when β_k = 0 (that is, there is a certain channel), we will have the maximum capacity.
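The closed form of Proposition 3 and the location of its minimum can be probed with a few lines of Python; the choices ϖ = 2 and K = 4 are simply the example values used in Section 6, and the grid search is only a rough illustration.

```python
import numpy as np

def milc_strong_symmetric(w, beta_k, K):
    """Closed form of Proposition 3 (uniform input achieves the capacity)."""
    l_cond = (1 - beta_k) * np.exp(w * beta_k) + beta_k * np.exp(w * (1 - beta_k / (K - 1)))
    return np.exp(w * (K - 1) / K) - l_cond

w, K = 2.0, 4
print(milc_strong_symmetric(w, 0.0, K))        # ~3.4817 for a certain channel, as in Section 6

betas = np.linspace(0.0, 1.0, 1001)
vals = [milc_strong_symmetric(w, b, K) for b in betas]
print(betas[int(np.argmin(vals))], min(vals))  # minimum ~0 reached near beta_k = (K-1)/K = 0.75
```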

4. Distortion of Message Importance Transfer

In this section, we will focus on information transfer distortion, a common problem in information processing. In a real information system, there exists inevitable information distortion caused by noise or other disturbances, even though the devices and hardware of telecommunication systems keep being updated and developed. Fortunately, there are still some benefits from allowable distortion in some scenarios. For example, in conventional information theory, rate distortion is exploited to obtain source compression such as predictive encoding and hybrid encoding, which can save a lot of hardware resources and communication traffic [39].
Similar to the rate distortion theory for Shannon entropy [38], a kind of information distortion function based on MIM and CMIM is defined to characterize the effect of distortion on the message importance loss. In particular, there are some details of discussion as follows.
Definition 5.
Assume that there exists an information transfer process {X, p(y|x), Y} from the variable X to Y, where p(y|x) denotes a transfer matrix (the distributions of X and Y are denoted by p(x) and p(y), respectively). For a given distortion function d(x, y) (d(x, y) ≥ 0) and an allowable distortion D, the message importance distortion function is defined as
R_ϖ(D) = \min_{p(y|x) \in B_D} Φ_ϖ(X||Y) = \min_{p(y|x) \in B_D} \{L(ϖ, X) - L(ϖ, X|Y)\},
in which L(ϖ, X) = \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))}, L(ϖ, X|Y) is defined by Equation (4), 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)}, and B_D = \{q(y|x) : D̄ ≤ D\} denotes the set of allowable information transfer matrices, where
D̄ = \sum_{x_i}\sum_{y_j} p(x_i) p(y_j|x_i) d(x_i, y_j),
which is the average distortion.
In this model, the information source X is given and our goal is to select an adaptive p ( y | x ) to achieve the minimum allowable message importance loss under the distortion constraint. This provides a new theoretical guidance for information source compression from the perspective of message importance.
In contrast to the rate distortion of Shannon information theory, this new information distortion function depends on the message importance loss rather than the entropy loss to choose an appropriate information compression matrix. In practice, there are some similarities and differences between rate distortion theory and the message importance distortion in terms of source compression. On the one hand, both information distortion encodings can be regarded as special information transfer processes, just with different optimization objectives. On the other hand, the new distortion theory tries to preserve the rare events as well as possible, while conventional rate distortion focuses on the amount of information itself. To some degree, by reducing more redundant common information, the new source compression strategy based on rare events (viewed as message importance) may save more computing and storage resources in big data.

4.1. Properties of Message Importance Distortion Function

In this subsection, we shall discuss some fundamental properties of the rate distortion function based on message importance in detail.

4.1.1. Domain of Distortion

Here, we investigate the domain of allowable distortion, namely [ D min , D max ] , and the corresponding message importance distortion function values as follows.
(i) The lower bound D_min: Due to the fact that d(x_i, y_j) ≥ 0, it is easy to obtain the non-negative average distortion, namely D̄ ≥ 0. Considering D̄ ≤ D, we readily have the minimum allowable distortion, that is
D_min = 0,
which implies the distortionless case, namely Y is the same as X.
In addition, when the lower bound D min (namely the distortionless case) is satisfied, it is readily seen that
L(ϖ, X|Y) = L(ϖ, X|X) = \sum_{x_i} p(x_i) p(x_i|x_i) e^{ϖ(1 - p(x_i|x_i))} = 1,
and according to Equation (29) the message importance distortion function is
R_ϖ(D_min) = L(ϖ, X) - L(ϖ, X|X) = L(ϖ, X) - 1,
where L(ϖ, X) = \sum_{x_i} p(x_i) e^{ϖ(1 - p(x_i))} and 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)}.
(ii) The upper bound D_max: When the allowable distortion satisfies D ≥ D_max, it is apparent that the variables X and Y are independent, that is, p(y|x) = p(y). Furthermore, it is not difficult to see that
D_max = \min_{p(y)} \sum_{x_i}\sum_{y_j} p(x_i) p(y_j) d(x_i, y_j) = \min_{p(y)} \sum_{y_j} p(y_j) \sum_{x_i} p(x_i) d(x_i, y_j) ≥ \min_{y_j} \sum_{x_i} p(x_i) d(x_i, y_j),
which indicates that when the distribution of the variable Y satisfies p(y_j) = 1 for the minimizing y_j and p(y_l) = 0 (l ≠ j), we have the upper bound
D_max = \min_{y_j} \sum_{x_i} p(x_i) d(x_i, y_j).
Additionally, on account of the independence of X and Y, namely p(x|y) = p(x), it is readily seen that
R_ϖ(D_max) = L(ϖ, X) - \sum_{y_j} p(y_j) L(ϖ, X) = 0.

4.1.2. The Convexity Property

For two allowable distortions D_a and D_b, whose optimal allowable information transfer matrices are p_a(y|x) and p_b(y|x) respectively, we have
R_ϖ(δ D_a + (1-δ) D_b) ≤ δ R_ϖ(D_a) + (1-δ) R_ϖ(D_b),
where 0 ≤ δ ≤ 1 and 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)}.
Proof. 
Refer to the Appendix A. □

4.1.3. The Monotonically Decreasing Property

For two given allowable distortions D_a and D_b, if 0 ≤ D_a < D_b < D_max is satisfied, we have R_ϖ(D_a) ≥ R_ϖ(D_b), where 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)}.
Proof. 
Considering that 0 ≤ D_a < D_b < D_max, we have D_b = γ D_a + (1-γ) D_max, where γ = \frac{D_max - D_b}{D_max - D_a}. On account of Equation (36) and the convexity property mentioned in Equation (37), it is not difficult to see that
R_ϖ(D_b) ≤ γ R_ϖ(D_a) + (1-γ) R_ϖ(D_max) = γ R_ϖ(D_a) < R_ϖ(D_a),
where 0 < γ < 1 . □

4.1.4. The Equivalent Expression

For an information transfer process {X, p(y|x), Y}, if we have a given distortion function d(x, y), an allowable distortion D and an average distortion D̄ defined in Equation (30), the message importance distortion function defined in Equation (29) can be rewritten as
R_ϖ(D) = \min_{D̄ = D} \{L(ϖ, X) - L(ϖ, X|Y)\},
where L(ϖ, X) and L(ϖ, X|Y) are defined by Equations (1) and (4), and 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)}.
Proof. 
For a given allowable distortion D, if there exists an allowable distortion D* (D_min ≤ D* < D < D_max) whose corresponding optimal information transfer matrix p*(y|x) leads to R_ϖ(D), we will have R_ϖ(D) = R_ϖ(D*), which contradicts the monotonically decreasing property. □

4.2. Analysis for Message Importance Distortion Function

In this subsection, we shall investigate the computation of message importance distortion function, which has a great impact on the probabilistic events analysis in practice. Actually, the definition of message importance distortion function in Equation (29) can be regarded as a special function, which is the minimization of the message importance loss with the symbol error less than or equal to the allowable distortion D. In particular, Definition 5 can also be expressed as the following optimization:
P1: \min_{p(y_j|x_i)} \{L(ϖ, X) - L(ϖ, X|Y)\}
s.t. \sum_{x_i}\sum_{y_j} p(x_i) p(y_j|x_i) d(x_i, y_j) ≤ D,
\sum_{y_j} p(y_j|x_i) = 1, \quad p(y_j|x_i) ≥ 0,
where L(ϖ, X) and L(ϖ, X|Y) are the MIM and CMIM defined in Equations (1) and (4), and 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)}.
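For small alphabets, one possible way to solve P1 numerically is to treat the entries of p(y|x) as decision variables and hand the problem to a generic constrained solver. The sketch below uses SciPy's SLSQP method; the source distribution, the Hamming distortion matrix and the parameter values are illustrative assumptions rather than anything prescribed by the paper.

```python
import numpy as np
from scipy.optimize import minimize

def mim(p, w):
    return np.sum(p * np.exp(w * (1.0 - p)))

def cmim(px, P, w):
    joint = px[:, None] * P
    py = joint.sum(axis=0)
    return sum(py[j] * mim(joint[:, j] / py[j], w)
               for j in range(P.shape[1]) if py[j] > 1e-15)

def solve_P1(px, d, D, w):
    """Minimize L(w,X) - L(w,X|Y) over transfer matrices with average distortion <= D."""
    n, m = d.shape

    def loss(flat):
        return mim(px, w) - cmim(px, flat.reshape(n, m), w)

    cons = [
        {'type': 'ineq',   # D - average distortion >= 0
         'fun': lambda flat: D - np.sum(px[:, None] * flat.reshape(n, m) * d)},
        {'type': 'eq',     # each row of p(y|x) sums to one
         'fun': lambda flat: flat.reshape(n, m).sum(axis=1) - 1.0},
    ]
    res = minimize(loss, np.full(n * m, 1.0 / m), method='SLSQP',
                   bounds=[(0.0, 1.0)] * (n * m), constraints=cons)
    return res.fun, res.x.reshape(n, m)

px = np.array([0.3, 0.7])                 # Bernoulli(0.3) source (hypothetical)
d = np.array([[0.0, 1.0], [1.0, 0.0]])    # Hamming distortion
R, P_opt = solve_P1(px, d, D=0.1, w=0.2)
print(R)
print(P_opt)
```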
To take a computable optimization problem as an example, we consider Hamming distortion as the distortion function d ( x , y ) , namely
d(x, y) = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix},
which means d(x_i, y_i) = 0 and d(x_i, y_j) = 1 (i ≠ j). In order to reveal some intrinsic meanings of R_ϖ(D), we investigate an information transfer of a Bernoulli source as follows.
Proposition 4.
For a Bernoulli(p) source denoted by a variable X and an information transfer process { X , p ( y | x ) , Y } with Hamming distortion, the message importance distortion function is given by
R_ϖ(D) = \{p e^{ϖ(1-p)} + (1-p) e^{ϖ p}\} - \{D e^{ϖ(1-D)} + (1-D) e^{ϖ D}\},
and the corresponding information transfer matrix is
p(y|x) = \begin{pmatrix} \frac{(1-D)(p-D)}{p(1-2D)} & \frac{(1-p-D)D}{p(1-2D)} \\ \frac{D(p-D)}{(1-p)(1-2D)} & \frac{(1-p-D)(1-D)}{(1-p)(1-2D)} \end{pmatrix},
where 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)} and 0 ≤ D ≤ min{p, 1-p}.
Proof of Proposition 4.
Refer to the Appendix B. □
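A quick way to sanity-check Proposition 4 is to evaluate the closed form and the stated optimal matrix and confirm that the matrix is a valid transfer matrix whose Hamming distortion equals D; the result can also be cross-checked against the numerical solver sketched above. The parameter values here are arbitrary examples.

```python
import numpy as np

def R_mim(p, D, w):
    """Closed-form message importance distortion function of Proposition 4."""
    L = lambda q: q * np.exp(w * (1 - q)) + (1 - q) * np.exp(w * q)
    return L(p) - L(D)

def optimal_transfer_matrix(p, D):
    """Equation (43), valid for 0 <= D <= min(p, 1 - p)."""
    return np.array([
        [(1 - D) * (p - D) / (p * (1 - 2 * D)),  (1 - p - D) * D / (p * (1 - 2 * D))],
        [D * (p - D) / ((1 - p) * (1 - 2 * D)),  (1 - p - D) * (1 - D) / ((1 - p) * (1 - 2 * D))],
    ])

p, D, w = 0.3, 0.1, 0.2
P = optimal_transfer_matrix(p, D)
avg_distortion = p * P[0, 1] + (1 - p) * P[1, 0]   # Hamming distortion of P
print(P.sum(axis=1))                                # each row sums to 1
print(avg_distortion)                               # equals the allowable distortion D
print(R_mim(p, D, w))                               # should match the numerical minimum
```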

5. Bitrate Transmission Constrained by Message Importance

We investigate the information capacity in the case of a limited message importance loss in this section. The objective is to achieve the maximum transmission bitrate under the constraint of a certain message importance loss ϵ. The maximum transmission bitrate is one of the system invariants in a transmission process, which provides an upper bound on the amount of information obtained by the receiver.
In an information transmission process, the information capacity is the mutual information between the encoded signal and the received signal, measured in bits per symbol. In a real transmission, there always exists an allowable distortion between the sending sequence X and the received sequence Y, while a maximum allowable message importance loss is required to avoid too much distortion of important events. From this perspective, the message importance loss is considered to be another constraint on the information transmission capacity beyond the information distortion. Therefore, this might play a crucial role in the design of transmission in information processing systems.
In particular, we characterize the maximizing mutual information constrained by a controlled message importance loss as follows:
P2: \max_{p(x)} I(X||Y)
s.t. L(ϖ, X) - L(ϖ, X|Y) ≤ ϵ,
\sum_{x_i} p(x_i) = 1, \quad p(x_i) ≥ 0,
where I(X||Y) = \sum_{x_i, y_j} p(x_i) p(y_j|x_i) \log \frac{p(y_j|x_i)}{p(y_j)}, p(y_j) = \sum_{x_i} p(x_i) p(y_j|x_i), L(ϖ, X) and L(ϖ, X|Y) are the MIM and CMIM defined in Equations (1) and (4), and 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)}.
Actually, the bitrate transmission with a message importance loss constraint has a special solution for a certain scenario. In order to give a specific example, we investigate the optimization problem in the Bernoulli(p) source with a symmetric or erasure transfer matrix as follows.

5.1. Binary Symmetric Matrix

Proposition 5.
For a Bernoulli(p) source X whose distribution is {p, 1-p} (0 ≤ p ≤ 1/2) and an information transfer process {X, p(y|x), Y} with transfer matrix
p(y|x) = \begin{pmatrix} 1-β_s & β_s \\ β_s & 1-β_s \end{pmatrix},
we have the solution for P 2 defined in Equation (44) as follows:
\max_{p(x)} I(X||Y) = \begin{cases} 1 - H(β_s), & ϵ ≥ C_{β_s}, \\ H\big(p_s(1-β_s) + (1-p_s)β_s\big) - H(β_s), & 0 < ϵ < C_{β_s}, \end{cases}
where p_s is the solution of L(ϖ, X) - L(ϖ, X|Y) = ϵ (with L(ϖ, X) and L(ϖ, X|Y) as in the optimization problem P2), whose approximate value is
p_s ≈ \frac{1 - Θ}{2},
in which the parameter Θ is given by
Θ = \sqrt{1 - \frac{4ϵ}{4ϖ + ϖ^2} - \frac{4\sqrt{(1-2β_s)^2 ϵ^2 + 2(4ϖ + ϖ^2)β_s(1-β_s)ϵ}}{(4ϖ + ϖ^2)|1-2β_s|}},
and H(·) denotes the operator for Shannon entropy, that is, H(p) = -[(1-p)\log(1-p) + p\log p], C_{β_s} = e^{ϖ/2} - \{β_s e^{ϖ(1-β_s)} + (1-β_s) e^{ϖ β_s}\} (0 ≤ β_s ≤ 1), and 0 < ϖ ≤ 2/max{p(x_i)}.
Proof of Proposition 5.
Considering the Bernoulli(p) source X following {p, 1-p} and the binary symmetric matrix, it is not difficult to obtain
I(X||Y) = H(Y) - H(Y|X) = -\{p(y_0)\log p(y_0) + p(y_1)\log p(y_1)\} - H(β_s),
where p(y_0) = p(1-β_s) + (1-p)β_s, p(y_1) = p β_s + (1-p)(1-β_s) and H(β_s) = -[(1-β_s)\log(1-β_s) + β_s \log β_s].
Moreover, define the Lagrange function as G_s(p) = I(X||Y) + λ_s(L(ϖ, X) - L(ϖ, X|Y) - ϵ), where ϵ > 0, 0 ≤ p ≤ 1/2 and λ_s ≥ 0. It is not difficult to have the partial derivative of G_s(p) as follows:
\frac{∂G_s(p)}{∂p} = \frac{∂I(X||Y)}{∂p} + λ_s \frac{∂C(p, ϖ, β_s)}{∂p},
where ∂C(p, ϖ, β_s)/∂p is given by Equation (14) and
\frac{∂I(X||Y)}{∂p} = (1-2β_s)\log\frac{(2β_s-1)p + 1-β_s}{(1-2β_s)p + β_s}.
By virtue of the monotonically increasing function log(x) for x > 0, it is easy to see that the nonnegativity of ∂I(X||Y)/∂p is equivalent to (1-2β_s)\{(2β_s-1)p + 1-β_s - [(1-2β_s)p + β_s]\} = (1-2p)(1-2β_s)^2 ≥ 0 in the case 0 ≤ p ≤ 1/2. Moreover, due to the nonnegativity of ∂C(p, ϖ, β_s)/∂p for p ∈ [0, 1/2], which is mentioned in the proof of Proposition 1, it is readily seen that ∂G_s(p)/∂p ≥ 0 is satisfied under the condition 0 ≤ p ≤ 1/2.
Thus, the optimal solution p_s^* is the maximal available p (p ∈ [0, 1/2]) as follows:
p_s^* = \begin{cases} \frac{1}{2}, & \text{for } ϵ ≥ C_{β_s}, \\ p_s, & \text{for } 0 < ϵ < C_{β_s}, \end{cases}
where p_s is the solution of L(ϖ, X) - L(ϖ, X|Y) = ϵ, and C_{β_s} is the MILC mentioned in Equation (11).
By using Taylor series expansion, the equation L(ϖ, X) - L(ϖ, X|Y) = ϵ can be expressed approximately as follows:
\Big(2ϖ + \frac{ϖ^2}{2}\Big)\Big\{(1-p)p - \frac{p(1-p)β_s(1-β_s)}{[(2β_s-1)p + 1-β_s][(1-2β_s)p + β_s]}\Big\} = ϵ,
whose solution gives the approximate p_s in Equation (47).
Therefore, by substituting the p s * into Equation (49), we have Equation (46). □
Remark 4.
Proposition 5 gives the maximum transmission bitrate under the constraint of message importance loss. Particularly, there are growth regions and smooth regions for the maximum transmission bitrate at the receiver with respect to the message importance loss ϵ. When the message importance loss ϵ is constrained to a small range, the real bitrate is less than the Shannon information capacity, and it involves the entropy of the symmetric matrix parameter β_s.
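Rather than relying on the Taylor approximation of Equation (47), the bitrate curve of Proposition 5 can be traced numerically: find the largest p ≤ 1/2 whose message importance loss does not exceed ϵ (by bisection, since the loss increases in p on [0, 1/2], as established in the proof of Proposition 1) and evaluate the mutual information there. The sketch below is a minimal illustration with assumed parameter values.

```python
import numpy as np

def mim(p, w):
    return np.sum(p * np.exp(w * (1.0 - p)))

def loss_binary_symmetric(p, w, beta_s):
    """L(w,X) - L(w,X|Y) for a Bernoulli(p) source over the binary symmetric matrix."""
    px = np.array([p, 1.0 - p])
    P = np.array([[1 - beta_s, beta_s], [beta_s, 1 - beta_s]])
    joint = px[:, None] * P
    py = joint.sum(axis=0)
    l_cond = sum(py[j] * mim(joint[:, j] / py[j], w) for j in range(2))
    return mim(px, w) - l_cond

def H2(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def max_bitrate(eps, w, beta_s):
    cap_loss = loss_binary_symmetric(0.5, w, beta_s)      # MILC of Proposition 1
    if eps >= cap_loss:
        p_star = 0.5
    else:                                                  # bisection for p_s
        lo, hi = 1e-9, 0.5
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if loss_binary_symmetric(mid, w, beta_s) <= eps:
                lo = mid
            else:
                hi = mid
        p_star = lo
    return H2(p_star * (1 - beta_s) + (1 - p_star) * beta_s) - H2(beta_s)

print(max_bitrate(eps=0.01, w=1.0, beta_s=0.1))            # bits per symbol
```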

5.2. Binary Erasure Matrix

Proposition 6.
Assume that there is a Bernoulli(p) source X following the distribution {p, 1-p} (0 ≤ p ≤ 1/2) and an information transfer process {X, p(y|x), Y} with the binary erasure matrix
p(y|x) = \begin{pmatrix} 1-β_e & 0 & β_e \\ 0 & 1-β_e & β_e \end{pmatrix},
where 0 ≤ β_e ≤ 1. In this case, the solution for P2 described in Equation (44) is
\max_{p(x)} I(X||Y) = \begin{cases} 1 - β_e, & ϵ ≥ C_{β_e}, \\ (1-β_e) H(p_e), & 0 < ϵ < C_{β_e}, \end{cases}
where p_e is the solution of (1-β_e)\{p e^{ϖ(1-p)} + (1-p) e^{ϖ p} - 1\} = ϵ, whose approximate value is
p_e ≈ \frac{1 - \sqrt{1 - \frac{8ϵ}{(1-β_e)(4ϖ + ϖ^2)}}}{2},
and H(x) = -[(1-x)\log(1-x) + x\log x], C_{β_e} = (1-β_e)(e^{ϖ/2} - 1) and 0 < ϖ ≤ 2/max{p(x_i)}.
Proof of Proposition 6.
In the binary erasure matrix, considering the Bernoulli(p) source X whose distribution is { p , 1 p } , it is readily seen that
I(X||Y) = H(Y) - H(Y|X) = (1-β_e) H(p),
where H(·) denotes the Shannon entropy operator, namely H(p) = -[(1-p)\log(1-p) + p\log p].
Moreover, according to the Definitions 1 and 2, it is easy to see that
L(ϖ, X) - L(ϖ, X|Y) = (1-β_e)\{L(ϖ, p) - 1\},
where L(ϖ, p) = p e^{ϖ(1-p)} + (1-p) e^{ϖ p}.
Similar to the proof of Proposition 5, considering the monotonically increasing H(p) and L(ϖ, p) for p ∈ [0, 1/2], it is not difficult to see that the optimal solution p_e^* is the maximal available p in the case 0 ≤ p ≤ 1/2, which is given by
p_e^* = \begin{cases} \frac{1}{2}, & \text{for } ϵ ≥ C_{β_e}, \\ p_e, & \text{for } 0 < ϵ < C_{β_e}, \end{cases}
where p_e is the solution of (1-β_e)\{L(ϖ, p) - 1\} = ϵ, and the upper bound C_{β_e} is given in Equation (16).
By resorting to Taylor series expansion, the approximate equation for ( 1 β e ) { L ( ϖ , p ) 1 } = ϵ is given by
(1-β_e)\Big(2ϖ + \frac{ϖ^2}{2}\Big)(1-p)p = ϵ,
from which the approximate solution p e in Equation (56) is obtained.
Therefore, Equation (55) is obtained by substituting the p e * into the Equation (57). □
Remark 5.
From Proposition 6, there are two regions for the maximum transmission bitrate with respect to message importance loss. The one depends on the message importance loss threshold ϵ. The other is just related to the erasure matrix parameter β e .
Note that single-letter models are discussed to show some theoretical results for information transfer under the constraint of message importance loss, which may be used in some special potential applications such as maritime international signals or switch signal processing. As a matter of fact, in practice, it is preferable to operate on multi-letter models, which can be applied to more scenarios such as multimedia communication, cooperative communications and multiple access, etc. As for these complicated cases, which may differ from conventional Shannon information theory, we shall consider them in the near future.

6. Numerical Results

This section shall provide numerical results to validate the theoretical results in this paper.

6.1. The Message Importance Loss Capacity

First of all, we give some numerical simulations with respect to the MILC in different information transmission cases. In Figure 2, it is apparent that if the Bernoulli source follows the uniform distribution, namely p = 0.5, the message importance loss will reach the maximum in the cases of different matrix parameters β_s. That is, the numerical results of MILC are obtained as {0.4081, 0.0997, 0, 0.2265} in the case of parameter β_s = {0.1, 0.3, 0.5, 0.8} and ϖ = 1, which corresponds to Proposition 1. Moreover, we also know that if β_s = 0.5, namely the random transfer matrix is satisfied, the MILC reaches the lower bound, that is, C = 0. In contrast, if the parameter β_s satisfies β_s = 0, the upper bound of MILC will be gained, such as {0.1618, 0.4191, 0.6487, 1.7183} in the case ϖ = {0.3, 0.7, 1.0, 2.0}.
Figure 3 shows that, in the binary erasure matrix, the MILC is reached under the same condition as that in the binary symmetric matrix, namely p = 0.5 . For example the numerical results of MILC with ϖ = 1 are { 0.5838 , 0.4541 , 0.3244 , 0.1297 } in the cases β e = { 0.1 , 0.3 , 0.5 , 0.8 } . However, if β e = 1 , the lower bound of MILC ( C = 0 ) is obtained in the erasure transfer matrix, different from the symmetric case.
From Figure 4, it is not difficult to see that the certain transfer matrix (namely β_k = 0) leads to the upper bound of MILC. For example, when the number of source symbols satisfies K = {4, 6, 8, 10}, the numerical results of MILC with ϖ = 2 are {3.4817, 4.2945, 4.7546, 5.0496}. In addition, the lower bound of MILC is reached in the case that β_k = 1 - 1/K.

6.2. Message Importance Distortion

We focus on the distortion of message importance transfer and give some simulations in this subsection. Figure 5 illustrates that the message importance distortion function R_ϖ(D) is monotonically non-increasing with respect to the distortion D, which validates some properties mentioned in Section 4.1. Moreover, the maximum R_ϖ(D) is obtained in the case D = 0. Taking the Bernoulli(p) source as an example, the numerical results of R_ϖ(D) with ϖ = 0.2 are {0.0379, 0.0674, 0.0884, 0.1010, 0.1052} and the corresponding probability satisfies p = {0.1, 0.2, 0.3, 0.4, 0.5}. Note that the turning point of R_ϖ(D) is reached when the probability p equals the distortion D, which conforms to Proposition 4.

6.3. Bitrate Transmission with Message Importance Loss

Figure 6 shows the allowable maximum bitrate (characterized by the mutual information) constrained by a message importance loss ϵ in a Bernoulli(p) source case. It is worth noting that there are two regions for the mutual information in both transfer matrices. In the first region, the mutual information is monotonically increasing with respect to ϵ; however, in the second region, the mutual information is stable, namely the information transmission capacity is obtained. As for the numerical results, the turning points are obtained at ϵ = {0.0328, 0.0185, 0.0082, 0.0021} and the maximum mutual information values are {0.5310, 0.2781, 0.1187, 0.0290} in the binary symmetric matrix with the corresponding parameter β_s = {0.1, 0.2, 0.3, 0.4}, while the turning points of the erasure matrix are at ϵ = {0.0416, 0.0410, 0.0359, 0.0308} in the case that β_e = {0.1, 0.2, 0.3, 0.4} with the maximum mutual information values {0.9, 0.8, 0.7, 0.6}. Consequently, Propositions 5 and 6 are validated by the numerical results.

6.4. Experimental Simulations

In this subsection, we take the binary stochastic process (in which the random variable follows Bernoulli distribution) as an example to validate theoretical results. In particular, the Bernoulli(p) source X (whose distribution is denoted by P ( X ) = { p , 1 p } where 0 < p < 1 ) with the symmetric or erasure matrix (described by Equations (10) and (15)) is considered to reveal some properties of message importance loss capacity (in Section 3), message importance distortion function (in Section 4) as well as bitrate transmission constrained by message importance (in Section 5).
From Figure 7, it is seen that the uniform information source X (that is, P(X) = {1/2, 1/2}) leads to the maximum message importance loss (namely the MILC) in both cases of the symmetric matrix and the erasure matrix, which corresponds to Propositions 1 and 2. Moreover, with the increase of the number of samples, the performance of the message importance loss tends to become smooth. In addition, the MILC in the symmetric transfer matrix is larger than that in the erasure one when the matrix parameters β_s and β_e are the same.
As for the distortion of message importance transfer, we investigate the message importance loss based on different transfer matrices, which is shown in Figure 8, where p_optimal(y|x) is described by Equation (43), p_symmetric(y|x) = \begin{pmatrix} 1-D & D \\ D & 1-D \end{pmatrix}, p_random1(y|x) = \begin{pmatrix} 1-\frac{D}{10p} & \frac{D}{10p} \\ \frac{9D}{10(1-p)} & 1-\frac{9D}{10(1-p)} \end{pmatrix}, p_random2(y|x) = \begin{pmatrix} 1-\frac{D}{5p} & \frac{D}{5p} \\ \frac{D}{5(1-p)} & 1-\frac{D}{5(1-p)} \end{pmatrix}, p_random3(y|x) = \begin{pmatrix} 1-\frac{D}{10p} & \frac{D}{10p} \\ \frac{D}{10(1-p)} & 1-\frac{D}{10(1-p)} \end{pmatrix}, p_certain(y|x) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, D is the allowable distortion and p is the probability element of the Bernoulli(p) source. From Figure 8, it is illustrated that, when p_optimal(y|x) is selected as the transfer matrix, the message importance loss reaches the minimum, which corresponds to Proposition 4. In addition, if the transfer matrix is not certain (i.e., there exists distortion), the message importance loss decreases with the increase of the allowable distortion.
Considering the transmission with a message importance loss constraint, Figure 9 shows that, when p_s^* (given by Equation (52)) and p_e^* (given by Equation (59)) are selected as the probability elements of the Bernoulli(p) source in the symmetric matrix and the erasure matrix respectively, the corresponding mutual information values are larger than those based on other probabilities (such as p_random1 = (1 - \sqrt{1 - 8ϵ})/2 and p_random2 = (1 - \sqrt{1 - 4ϵ})/2). In addition, it is not difficult to see that, when the parameter β_s is equal to β_e, the mutual information (constrained by a message importance loss) in the symmetric transfer matrix is larger than that in the erasure one.

7. Conclusions

In this paper, we investigated information processing from the perspective of an information measure, i.e., the MIM. Actually, with the help of the parameter ϖ, the MIM has more flexibility and can be used widely. Here, we focused on the MIM with 0 ≤ ϖ ≤ 2/max{p(x_i)}, which not only has the property of self-scoring values for probabilistic events but also has similarities with Shannon entropy in information compression and transmission. In particular, based on a system model with message importance processing, a message importance loss was presented. This measure can characterize the information distinction before and after a message transfer process. Furthermore, we proposed the message importance loss capacity, which provides an upper bound for the message importance harvest in information transmission. Moreover, the message importance distortion function, which selects an information transfer matrix to minimize the message importance loss, was discussed to characterize the performance of lossy information compression from the viewpoint of the message importance of events. In addition, we exploited the message importance loss to constrain the bitrate transmission so that the combined factors of message importance and amount of information are considered to guide an information transmission. To validate the theoretical analyses, some numerical results and experimental simulations were also presented in detail. As the next step of research, we look forward to exploiting real data to design some applicable strategies for information processing based on the MIM, as well as investigating the performance of multivariate systems in the sense of MIM.

Author Contributions

R.S., S.L. and P.F. all contributed to this work on investigation and writing.

Funding

The authors appreciate the support of the National Natural Science Foundation of China (NSFC) No. 61771283.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MIM    Message Importance Measure
MEMS   Max Entropy in Metric Space
IoT    Internet of Things
NMIM   Non-Parametric MIM
DMIM   Differential MIM
CMIM   Conditional Message Importance Measure
MILC   Message Importance Loss Capacity

Appendix A. Proof of the Convexity Property of R_ϖ(D)

As for an allowable distortion D_0 = δ D_a + (1-δ) D_b, we have the average distortion for the information transfer matrix p_0(y|x) = δ p_a(y|x) + (1-δ) p_b(y|x) as follows:
D̄_0 = δ \sum_{x_i}\sum_{y_j} p(x_i) p_a(y_j|x_i) d(x_i, y_j) + (1-δ) \sum_{x_i}\sum_{y_j} p(x_i) p_b(y_j|x_i) d(x_i, y_j) ≤ δ D_a + (1-δ) D_b = D_0,
which indicates that the p 0 ( y | x ) is an allowable information transfer matrix for D 0 .
Moreover, by using Jensen’s inequality and Bayes’ theorem, we have the CMIM with respect to p 0 ( y | x ) as follows:
L_0(ϖ, X|Y) = \sum_{x_i}\sum_{y_j} p(x_i) p_0(y_j|x_i) e^{ϖ\big(1 - \frac{p(x_i) p_0(y_j|x_i)}{p_0(y_j)}\big)} = \sum_{x_i}\sum_{y_j} p(x_i)\big[δ p_a(y_j|x_i) + (1-δ) p_b(y_j|x_i)\big] e^{ϖ\big(1 - \frac{p(x_i)[δ p_a(y_j|x_i) + (1-δ) p_b(y_j|x_i)]}{p_0(y_j)}\big)} ≥ \sum_{x_i}\sum_{y_j} p(x_i)\big[δ p_a(y_j|x_i)\big] e^{ϖ\big(1 - \frac{p(x_i)[δ p_a(y_j|x_i)]}{p_0(y_j)}\big)} + \sum_{x_i}\sum_{y_j} p(x_i)\big[(1-δ) p_b(y_j|x_i)\big] e^{ϖ\big(1 - \frac{p(x_i)[(1-δ) p_b(y_j|x_i)]}{p_0(y_j)}\big)} ≥ δ \sum_{x_i}\sum_{y_j} p(x_i) p_a(y_j|x_i) e^{ϖ\big(1 - \frac{p(x_i) p_a(y_j|x_i)}{p_a(y_j)}\big)} + (1-δ) \sum_{x_i}\sum_{y_j} p(x_i) p_b(y_j|x_i) e^{ϖ\big(1 - \frac{p(x_i) p_b(y_j|x_i)}{p_b(y_j)}\big)} = δ L_a(ϖ, X|Y) + (1-δ) L_b(ϖ, X|Y),
in which
p_0(y_j) = \sum_{x_i} p(x_i) p_0(y_j|x_i) = \sum_{x_i} p(x_i)\big[δ p_a(y_j|x_i) + (1-δ) p_b(y_j|x_i)\big] = δ p_a(y_j) + (1-δ) p_b(y_j),
and the parameter ϖ satisfies 0 < ϖ ≤ 2 min_j{p(y_j)}/max_i{p(x_i)}.
Furthermore, according to the Equations (29) and (A2), it is not difficult to have
R_ϖ(D_0) = \min_{p(y|x) \in B_{D_0}} \{L(ϖ, X) - L(ϖ, X|Y)\} ≤ \{L(ϖ, X) - L_0(ϖ, X|Y)\} ≤ δ\{L(ϖ, X) - L_a(ϖ, X|Y)\} + (1-δ)\{L(ϖ, X) - L_b(ϖ, X|Y)\} = δ R_ϖ(D_a) + (1-δ) R_ϖ(D_b),
where L ( ϖ , X ) is the MIM for the given information source X, while L a ( ϖ , X | Y ) and L b ( ϖ , X | Y ) denote the CMIM with respect to p a ( y | x ) and p b ( y | x ) , respectively.
Therefore, the convexity property is verified.

Appendix B. Proof of Proposition 4

Considering the fact that the Bernoulli source X is given and the equivalent expression is mentioned in Equation (39), the optimization problem P1 can be regarded as
$$
\begin{aligned}
\mathcal{P}\text{1-A}: \quad \max_{p(y_j|x_i)} \quad & L(\varpi, X|Y) \\
\text{s.t.} \quad & p(x_0)\, p(y_1|x_0) + p(x_1)\, p(y_0|x_1) = D, \\
& p(y_0|x_0) + p(y_1|x_0) = 1, \\
& p(y_0|x_1) + p(y_1|x_1) = 1, \\
& p(y_j|x_i) \ge 0, \quad (i = 0, 1;\ j = 0, 1),
\end{aligned} \tag{A5}
$$
where $L(\varpi, X|Y) = \sum_{x_i, y_j} p(x_i, y_j)\, e^{\varpi (1 - p(x_i|y_j))}$ and $0 < \varpi \le \frac{2 \min_j \{ p(y_j) \}}{\max_i \{ p(x_i) \}}$.
To simplify the above problem, we rewrite it as
$$
\begin{aligned}
\mathcal{P}\text{1-B}: \quad \max_{\alpha, \beta} \quad & L_D(\varpi, X|Y) \\
\text{s.t.} \quad & p\alpha + (1-p)\beta = D, \\
& 0 \le \alpha \le 1, \quad 0 \le \beta \le 1, \quad 0 \le p \le 1,
\end{aligned} \tag{A6}
$$
in which p and (1−p) denote p(x_0) and p(x_1), α and β denote p(y_1|x_0) and p(y_0|x_1), respectively, and
$$
\begin{aligned}
L_D(\varpi, X|Y) = {} & p(1-\alpha)\, e^{\varpi \frac{(1-p)\beta}{p(1-\alpha) + (1-p)\beta}} + (1-p)\beta\, e^{\varpi \frac{p(1-\alpha)}{(1-p)\beta + p(1-\alpha)}} \\
& + (1-p)(1-\beta)\, e^{\varpi \frac{p\alpha}{p\alpha + (1-p)(1-\beta)}} + p\alpha\, e^{\varpi \frac{(1-p)(1-\beta)}{p\alpha + (1-p)(1-\beta)}},
\end{aligned} \tag{A7}
$$
where $0 < \varpi \le \frac{2 \min_j \{ p(y_j) \}}{\max_i \{ p(x_i) \}}$.
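For a quick cross-check of the closed form in Equation (A7) (an illustrative sketch with arbitrary sample values, not the paper's code), the snippet below evaluates the four-term expression and the defining CMIM sum over the joint distribution; the two values should coincide.

```python
import numpy as np

def L_D_closed(p, alpha, beta, varpi):
    # Four-term closed form of Equation (A7): alpha = p(y_1|x_0), beta = p(y_0|x_1)
    s0 = p * (1 - alpha) + (1 - p) * beta            # p(y_0)
    s1 = p * alpha + (1 - p) * (1 - beta)            # p(y_1)
    return (p * (1 - alpha) * np.exp(varpi * (1 - p) * beta / s0)
            + (1 - p) * beta * np.exp(varpi * p * (1 - alpha) / s0)
            + (1 - p) * (1 - beta) * np.exp(varpi * p * alpha / s1)
            + p * alpha * np.exp(varpi * (1 - p) * (1 - beta) / s1))

def L_D_direct(p, alpha, beta, varpi):
    # Defining CMIM sum over the joint distribution p(x_i, y_j)
    joint = np.array([[p * (1 - alpha), p * alpha],
                      [(1 - p) * beta, (1 - p) * (1 - beta)]])   # rows: x_0, x_1; columns: y_0, y_1
    post = joint / joint.sum(axis=0)                             # posterior p(x_i|y_j)
    return float(np.sum(joint * np.exp(varpi * (1.0 - post))))

p, alpha, beta, varpi = 0.3, 0.2, 0.05, 0.1
print(L_D_closed(p, alpha, beta, varpi), L_D_direct(p, alpha, beta, varpi))  # should match
```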
Actually, it is not easy to deal with the optimization problem in Equation (A6) directly, so we use an approximate expression for its objective. By using the Taylor series expansion of $e^x$, namely $e^x = 1 + x + \frac{x^2}{2} + o(x^2)$, we have
$$
L_D(\varpi, X|Y) \approx 1 + \left( 2\varpi + \frac{\varpi^2}{2} \right) \left[ \frac{p\alpha (1-p)(1-\beta)}{p\alpha + (1-p)(1-\beta)} + \frac{p(1-\alpha)(1-p)\beta}{p(1-\alpha) + (1-p)\beta} \right]. \tag{A8}
$$
By substituting $\beta = \frac{D - p\alpha}{1 - p}$ into Equation (A8), it is easy to obtain
$$
L_D(\varpi, X|Y) \approx 1 + p \left( 2\varpi + \frac{\varpi^2}{2} \right) \left[ \frac{p\alpha^2 + (1-p-D)\alpha}{2p\alpha + (1-p-D)} + \frac{p\alpha^2 - (p+D)\alpha + D}{(p+D) - 2p\alpha} \right], \tag{A9}
$$
where $\max\{0,\, 1 + \frac{D-1}{p}\} \le \alpha \le \min\{1,\, \frac{D}{p}\}$, which results from the constraints in Equation (A6).
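The accuracy of the second-order approximation in Equations (A8) and (A9) can be examined numerically. The following sketch (illustrative; the parameter values are arbitrary) sweeps α over the feasible interval, sets $\beta = \frac{D - p\alpha}{1-p}$, and prints the exact objective of Equation (A7) next to the approximation of Equation (A9) for a small ϖ.

```python
import numpy as np

def exact_LD(p, a, b, varpi):
    # Exact objective, Equation (A7): a = p(y_1|x_0), b = p(y_0|x_1)
    s0 = p * (1 - a) + (1 - p) * b                    # p(y_0)
    s1 = p * a + (1 - p) * (1 - b)                    # p(y_1)
    return (p * (1 - a) * np.exp(varpi * (1 - p) * b / s0)
            + (1 - p) * b * np.exp(varpi * p * (1 - a) / s0)
            + (1 - p) * (1 - b) * np.exp(varpi * p * a / s1)
            + p * a * np.exp(varpi * (1 - p) * (1 - b) / s1))

def approx_LD(p, a, D, varpi):
    # Second-order approximation of Equation (A9), with b = (D - p*a) / (1 - p) substituted
    c = 1.0 - p - D
    return 1.0 + p * (2 * varpi + varpi**2 / 2) * (
        (p * a**2 + c * a) / (2 * p * a + c)
        + (p * a**2 - (p + D) * a + D) / ((p + D) - 2 * p * a))

p, D, varpi = 0.3, 0.1, 0.1                           # arbitrary sample values
lo = max(0.0, 1.0 + (D - 1.0) / p)
hi = min(1.0, D / p)
for a in np.linspace(lo + 1e-6, hi - 1e-6, 5):
    b = (D - p * a) / (1.0 - p)
    print(round(a, 4), round(float(exact_LD(p, a, b, varpi)), 6),
          round(float(approx_LD(p, a, D, varpi)), 6))
```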
Moreover, it is not difficult to obtain the partial derivative of $L_D(\varpi, X|Y)$ in Equation (A9) with respect to α as follows:
$$
\frac{\partial L_D(\varpi, X|Y)}{\partial \alpha} \approx p \left( 2\varpi + \frac{\varpi^2}{2} \right) \left\{ \frac{\big[2p\alpha + (1-p-D)\big]^2 - 2p\big[p\alpha^2 + (1-p-D)\alpha\big]}{\big[2p\alpha + (1-p-D)\big]^2} + \frac{2p\big[p\alpha^2 - (p+D)\alpha + D\big] - \big[(p+D) - 2p\alpha\big]^2}{\big[(p+D) - 2p\alpha\big]^2} \right\}. \tag{A10}
$$
By setting $\frac{\partial L_D(\varpi, X|Y)}{\partial \alpha} = 0$, it is readily seen that the solutions of α in Equation (A10) are given by $\alpha_1 = \frac{(1-p-D)D}{p(1-2D)}$ and $\alpha_2 = \frac{1-p-D}{1-2p}$, respectively.
In addition, in light of the domain of D mentioned in Equation (35), it is easy to see that $D_{\max} = \min\{p, 1-p\}$ in the Bernoulli source case. That is, the allowable distortion satisfies $0 \le D \le \min\{p, 1-p\}$. Thus, the domain of α, namely $\max\{0,\, 1 + \frac{D-1}{p}\} \le \alpha \le \min\{1,\, \frac{D}{p}\}$, reduces to $0 \le \alpha \le \frac{D}{p}$.
Then, the appropriate solution of α (the other root $\alpha_2$ does not fall within this interval) is given by
$$
\alpha^* = \frac{(1-p-D)D}{p(1-2D)}, \tag{A11}
$$
at which the second derivative $\frac{\partial^2 L_D(\varpi, X|Y)}{\partial \alpha^2}$ is non-positive, namely the maximum value is reached, and the corresponding information transfer matrix is
$$
p(y|x) = \begin{bmatrix} \dfrac{(1-D)(p-D)}{p(1-2D)} & \dfrac{(1-p-D)D}{p(1-2D)} \\[2mm] \dfrac{D(p-D)}{(1-p)(1-2D)} & \dfrac{(1-p-D)(1-D)}{(1-p)(1-2D)} \end{bmatrix}, \tag{A12}
$$
where $0 \le D \le \min\{p, 1-p\}$.
Consequently, by substituting the matrix in Equation (A12) into Equation (40), it is not difficult to verify this proposition.
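As a numerical sanity check of Equations (A11) and (A12) (again an illustrative sketch with arbitrary parameter values, not the paper's code), the code below builds the transfer matrix, confirms that its rows are normalized and that the average Hamming distortion equals D, and checks that α* matches the grid maximizer of the approximate objective in Equation (A9) over $0 \le \alpha \le D/p$.

```python
import numpy as np

def approx_LD(p, a, D, varpi):
    # Approximate objective of Equation (A9) as a function of a = p(y_1|x_0)
    c = 1.0 - p - D
    return 1.0 + p * (2 * varpi + varpi**2 / 2) * (
        (p * a**2 + c * a) / (2 * p * a + c)
        + (p * a**2 - (p + D) * a + D) / ((p + D) - 2 * p * a))

p, D, varpi = 0.3, 0.1, 0.1                            # arbitrary sample values with D <= min{p, 1-p}
alpha_star = (1 - p - D) * D / (p * (1 - 2 * D))       # Equation (A11)
trans = np.array([[(1 - D) * (p - D) / (p * (1 - 2 * D)),
                   (1 - p - D) * D / (p * (1 - 2 * D))],
                  [D * (p - D) / ((1 - p) * (1 - 2 * D)),
                   (1 - p - D) * (1 - D) / ((1 - p) * (1 - 2 * D))]])   # Equation (A12)
p_x = np.array([p, 1 - p])
hamming = 1 - np.eye(2)                                # d(x, y) = 1 if x != y, else 0
print("row sums:", trans.sum(axis=1))                  # expected [1, 1]
print("average distortion:", float(np.sum(p_x[:, None] * trans * hamming)), "vs D =", D)
grid = np.linspace(1e-6, D / p - 1e-6, 2001)
print("grid argmax of (A9):", float(grid[np.argmax(approx_LD(p, grid, D, varpi))]),
      "vs alpha* =", alpha_star)
```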

References

  1. Ju, B.; Zhang, H.; Liu, Y.; Liu, F.; Lu, S.; Dai, Z. A feature extraction method using improved multi-scale entropy for rolling bearing fault diagnosis. Entropy 2018, 20, 212. [Google Scholar] [CrossRef]
  2. Wei, H.; Chen, L.; Guo, L. KL divergence-based fuzzy cluster ensemble for image segmentation. Entropy 2018, 20, 273. [Google Scholar] [CrossRef]
  3. Rehman, S.; Tu, S.; Rehman, O.; Huang, Y.; Magurawalage, C.M.S.; Chang, C.C. Optimization of CNN through novel training strategy for visual classification problems. Entropy 2018, 20, 290. [Google Scholar] [CrossRef]
  4. Rui, S.; Liu, S.; Fan, P. Recognizing information feature variation: message importance transfer measure and its applications in big data. Entropy 2018, 20, 401. [Google Scholar] [CrossRef]
  5. Hu, H.; Wen, Y.; Chua, T.S.; Li, X. Toward scalable systems for big data analytics: A technology tutorial. IEEE Access 2017, 5, 7776–7797. [Google Scholar]
  6. Villecco, F. On the evaluation of errors in the virtual design of mechanical systems. Machines 2018, 6, 36. [Google Scholar] [CrossRef]
  7. Bormashenko, E.; Frenkel, M.; Legchenkova, I. Is the Voronoi Entropy a True Entropy? Comments on “Entropy, Shannon’s Measure of Information and Boltzmann’s H-Theorem”. Entropy 2019, 21, 251. [Google Scholar] [CrossRef]
  8. Delvenne, J. Category theory for autonomous and networked dynamical systems. Entropy 2019, 21, 301. [Google Scholar] [CrossRef]
  9. Ramaswamy, S.; Rastogi, R.; Shim, K. Efficient algorithms for mining outliers from large data sets. ACM SIGMOD Rec. 2000, 29, 427–438. [Google Scholar] [CrossRef] [Green Version]
  10. Harrou, F.; Kadri, F.; Chaabane, S.; Tahon, C.; Sun, Y. Improved principal component analysis for anomaly detection: Application to an emergency department. Comput. Ind. Eng. 2015, 88, 63–77. [Google Scholar] [CrossRef]
  11. Xu, S.; Baldea, M.; Edgar, T.F.; Wojsznis, W.; Blevins, T.; Nixon, M. An improved methodology for outlier detection in dynamic datasets. AIChE J. 2015, 61, 419–433. [Google Scholar] [CrossRef]
  12. Yu, H.; Khan, F.; Garaniya, V. Nonlinear Gaussian belief network based fault diagnosis for industrial processes. J. Process Control 2015, 35, 178–200. [Google Scholar] [CrossRef]
  13. Prieto-Moreno, A.; Llanes-Santiago, O.; Garcia-Moreno, E. Principal components selection for dimensionality reduction using discriminant information applied to fault diagnosis. J. Process Control 2015, 33, 14–24. [Google Scholar] [CrossRef]
  14. Christidis, K.; Devetsikiotis, M. Blockchains and Smart Contracts for the Internet of Things. IEEE Access 2016, 4, 2292–2303. [Google Scholar] [CrossRef]
  15. Wu, J.; Zhao, W. Design and realization of winternet: From net of things to internet of things. ACM Trans. Cyber Phys. Syst. 2017, 1, 2. [Google Scholar] [CrossRef]
  16. Lin, J.; Yu, W.; Zhang, N.; Yang, X.; Zhang, H.; Zhao, W. A Survey on Internet of Things: Architecture, Enabling Technologies, Security and Privacy, and Applications. IEEE Internet Things J. 2017, 4, 1125–1142. [Google Scholar] [CrossRef]
  17. Sun, Y.; Song, H.; Jara, A.J.; Bie, R. Internet of things and big data analytics for smart and connected communities. IEEE Access 2016, 4, 766–773. [Google Scholar] [CrossRef]
  18. Zanella, A.; Bui, N.; Zorzi, M. Internet of Things for smart cities. IEEE Internet Things J. 2014, 1, 22–32. [Google Scholar] [CrossRef]
  19. Jain, R.; Shah, H. An anomaly detection in smart cities modeled as wireless sensor network. In Proceedings of the 2016 International Conference on Signal and Information Processing (IConSIP), Nanded, India, 6–8 October 2016; pp. 1–5. [Google Scholar]
  20. Ramos, S.; Gehrig, S.; Pinggera, P.; Franke, U.; Rother, C. Detecting unexpected obstacles for self-driving cars: Fusing deep learning and geometric modeling. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Redondo Beach, CA, USA, 11–14 June 2017; pp. 1025–1032. [Google Scholar]
  21. Amaradi, P.; Sriramoju, N.; Dang, L.; Tewolde, G.S.; Kwon, J. Lane following and obstacle detection techniques in autonomous driving vehicles. In Proceedings of the 2016 IEEE International Conference on Electro Information Technology (EIT), Dekalb, IL, USA, 19–21 May 2016; pp. 674–679. [Google Scholar]
  22. Gaikwad, V.; Lokhande, S. An improved lane departure method for advanced driver assistance system. In Proceedings of the 2012 International Conference on Computing, Communication and Applications (ICCCA), Dindigul, India, 22–24 February 2012; pp. 1–5. [Google Scholar]
  23. Fan, P.; Dong, Y.; Lu, J.; Liu, S. Message importance measure and its application to minority subset detection in big data. In Proceedings of the 2016 IEEE Globecom Workshops (GC Wkshps), Washington, DC, USA, 4–8 December 2016; pp. 1–6. [Google Scholar]
  24. Liu, S.; She, R.; Fan, P.; Letaief, K.B. Non-parametric Message Importance Measure: Storage Code Design and Transmission Planning for Big Data. IEEE Trans. Commun. 2018, 66, 5181–5196. [Google Scholar] [CrossRef]
  25. She, R.; Liu, S.; Dong, Y.; Fan, P. Focusing on a probability element: Parameter selection of message importance measure in big data. In Proceedings of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; pp. 1–6. [Google Scholar]
  26. Renyi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; pp. 547–561. [Google Scholar]
  27. Wan, S.; Lu, J.; Fan, P.; Letaief, K. Minor probability events’ detection in big data: An integrated approach with bayes detection and mim. IEEE Commun. Lett. 2019, 23, 418–421. [Google Scholar] [CrossRef]
  28. Liu, S.; Dong, Y.; Fan, P.; She, R.; Wan, S. Matching users’ preference under target revenue constraints in data recommendation systems. Entropy 2019, 21, 205. [Google Scholar] [CrossRef]
  29. Liu, S.; She, R.; Fan, P. Differential message importance measure: A new approach to the required sampling number in big data structure characterization. IEEE Access 2018, 6, 42851–42867. [Google Scholar] [CrossRef]
  30. Jalali, S.; Weissman, T. Block and sliding-block lossy compression via MCMC. IEEE Trans. Commun. 2012, 60, 2187–2198. [Google Scholar] [CrossRef]
  31. Cui, T.; Chen, L.; Ho, T. Distributed distortion optimization for correlated sources with network coding. IEEE Trans. Commun. 2012, 60, 1336–1344. [Google Scholar] [CrossRef]
  32. Koken, E.; Tuncel, E. Joint source–Channel coding for broadcasting correlated sources. IEEE Trans. Commun. 2017, 65, 3012–3022. [Google Scholar] [CrossRef]
  33. Lee, W.; Xiang, D. Information-theoretic measures for anomaly detection. In Proceedings of the 2001 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 13–16 May 2001; pp. 130–143. [Google Scholar]
  34. Ando, S.; Suzuki, E. An information theoretic approach to detection of minority subsets in database. In Proceedings of the IEEE Sixth International Conference on Data Mining, Hong Kong, China, 13–15 December 2006; pp. 11–20. [Google Scholar]
  35. Touchette, H. The large deviation approach to statistical mechanics. Phys. Rep. 2009, 478, 1–69. [Google Scholar] [CrossRef] [Green Version]
  36. Curiel, R.P.; Bishop, S. A measure of the concentration of rare events. Sci. Rep. 2016, 6, 1–6. [Google Scholar]
  37. Weinberger, N.; Merhav, N. A large deviations approach to secure lossy compression. IEEE Trans. Inf. Theory 2017, 63, 2533–2559. [Google Scholar] [CrossRef]
  38. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley Series in Telecommunications and Signal Processing; John Wiley & Sons Inc.: Hoboken, NJ, USA, 2006. [Google Scholar]
  39. Sechelea, A.; Munteanu, A.; Cheng, S.; Deligiannis, N. On the rate-distortion function for binary source coding with side information. IEEE Trans. Commun. 2016, 64, 5203–5216. [Google Scholar] [CrossRef]
Figure 1. Information processing system model.
Figure 2. The performance of message importance loss and MILC (mentioned in Definition 4) in the Binary symmetric matrix. (a) the performance of message importance loss (with ϖ = 1 ) versus probability p in the cases of different symmetric matrix parameter ( β s = 0.1 , 0.3 , 0.5 , 0.8 ); (b) the performance of MILC versus matrix parameter β s in the cases of different parameter ϖ .
Figure 3. The performance of message importance loss and MILC in the Binary erasure matrix. (a) the performance of message importance loss (with ϖ = 1 ) versus probability p in the cases of different matrix parameter ( β e = 0.1 , 0.3 , 0.5 , 0.8 ); (b) the performance of MILC versus erasure matrix parameter β e in the cases of different parameter ϖ .
Figure 4. The performance of MILC in strongly symmetric matrix with K = 4 , 6 , 8 , 10 .
Figure 5. The performance of message importance distortion function R ϖ ( D ) in the case of Bernoulli(p) source ( p = 0.1 , 0.2 , 0.3 , 0.4 ).
Figure 6. The performance of mutual information I ( X | | Y ) constrained by the message importance loss ϵ (the parameter ϖ = 0.1 ). (a) the performance of I ( X | | Y ) versus ϵ in the binary symmetric matrix; (b) the performance of I ( X | | Y ) versus ϵ in the erasure matrix.
Figure 7. The message importance loss (with parameter ϖ = 1 ) versus the probability p of Bernoulli(p) source with number of samples N ( N = { 100 , 1000 , 10,000 } ). There are two different transfer matrices, namely the symmetric matrix with parameter β s = 0.1 and the erasure matrix with parameter β e = 0.1 .
Figure 8. The message importance loss (with parameter ϖ = 0.1 ) versus allowable distortion D (the corresponding distortion function is Hamming distortion) in the case of different transfer matrices. The information source X follows Bernoulli(p) distribution (where p = 0.3 , namely P ( X ) = { 0.3 , 0.7 } ) and the number of samples is n = 10,000 .
Figure 9. The mutual information I ( X | | Y ) versus the rare message importance loss threshold ϵ (the parameter ϖ = 0.1 ) in the case of Bernoulli(p) source X (that is P ( X ) = { p , 1 p } with different probability p). The number of samples observed from the source X is n = 10,000 , and transfer matrix is the symmetric matrix with parameter β s = 0.1 or the erasure matrix with parameter β e = 0.1 .
Table 1. Notations.

P(X) = {p(x_1), p(x_2), …, p(x_n)}: the discrete probability distribution with respect to the variable X
φ: the message source in the information processing system model
φ̃: the mapped or compressed message with respect to φ
Ω̃: the received message transferred from φ̃
Ω: the recovered message with respect to φ by the decoding process
ϖ: the importance coefficient
L(·): the message importance measure (MIM) described in Definition 1
H(·): the Shannon entropy, H(X) = −Σ_{x_i} p(x_i) log p(x_i), or H(p) = −p log p − (1−p) log(1−p) (0 ≤ p ≤ 1)
H_α(·): the Renyi entropy with parameter α, H_α(X) = (1/(1−α)) log Σ_{x_i} {p(x_i)}^α
L(·|·): the CMIM described in Definition 2
H(·|·): the conditional Shannon entropy, H(X|Y) = Σ_{x_i} Σ_{y_j} p(x_i, y_j) log (1/p(x_i|y_j))
Φ_ϖ(·||·): the message importance loss described in Definition 3
C: the message importance loss capacity (MILC) described in Definition 4
p(y|x): an information transfer matrix from the variable X to Y
{X, p(y|x), Y}: an information transfer process from the variable X to Y
β_s, β_e, β_k: the parameters of the binary symmetric matrix, the binary erasure matrix, and the k-ary symmetric matrix, respectively
d(x, y): the distortion function, d(x, y) ≥ 0
D: the allowable distortion (D_min ≤ D ≤ D_max)
D̄: the average distortion, D̄ = Σ_{x_i} Σ_{y_j} p(x_i) p(y_j|x_i) d(x_i, y_j)
B_D: the set of allowable information transfer matrices, B_D = {q(y|x): D̄ ≤ D}
R_ϖ(D): the message importance distortion function described in Definition 5
I(X||Y): the mutual information, I(X||Y) = Σ_{x_i} Σ_{y_j} p(x_i, y_j) log [p(x_i, y_j)/(p(x_i) p(y_j))]
