Article

Parametric Jensen-Shannon Statistical Complexity and Its Applications on Full-Scale Compartment Fire Data

by Flavia-Corina Mitroi-Symeonidis 1,2, Ion Anghel 2 and Nicușor Minculete 3,*
1 Academy of Economic Studies, Department of Applied Mathematics, Calea Dorobantilor 15-17, Sector 1, RO-010552 Bucharest, Romania
2 Police Academy “Alexandru Ioan Cuza”, Fire Officers Faculty, Str. Morarilor 3, Sector 2, RO-022451 Bucharest, Romania
3 Faculty of Mathematics and Computer Science, Transilvania University of Brașov, Str. Iuliu Maniu 50, 500091 Brașov, Romania
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(1), 22; https://doi.org/10.3390/sym12010022
Submission received: 12 November 2019 / Revised: 19 December 2019 / Accepted: 19 December 2019 / Published: 20 December 2019
(This article belongs to the Special Issue Symmetry in Applied Mathematics)

Abstract
The order/disorder characteristics of a compartment fire are investigated based on experimental data. Based on our analysis, performed with new, pioneering methods, we claim that the parametric Jensen-Shannon complexity can be successfully used to detect unusual data, and that it can also serve as a means of performing relevant analyses of fire experiments. Thoroughly comparing the performance of the different algorithms used to extract the underlying probability distribution (known as permutation entropy and two-length permutation entropy) is an essential step. We discuss some of the theoretical assumptions behind each step and stress that the role of the parameter is to fine-tune the results of the Jensen-Shannon statistical complexity. Note that the Jensen-Shannon statistical complexity is symmetric, while its parametric version displays a symmetric duality due to the a priori probabilities used.

1. Introduction

We aim to perform a local entropic analysis of the evolution of the temperature during a full-scale fire experiment and seek a straightforward, general, and process-based model of the compartment fire. We propose a new statistical complexity and compare known algorithms dedicated to the extraction of the underlying probabilities, checking their suitability for pointing out abnormal values and structure in the experimental time series. For recent research on fire phenomena using entropic tools, see Takagi, Gotoda, Tokuda and Miyano [1] and Murayama, Kaku, Funatsu, and Gotoda [2].
The experimental data were collected during a full-scale fire experiment conducted at the Fire Officers Faculty in Bucharest. We briefly include here the description of the experimental setup (Materials and Methods); details can be found in [3].
The experiment was carried out using a container (single-room compartment) with dimensions 12 m × 2.2 m × 2.6 m. A single ventilation opening was available, namely the front door of the container, which remained open during the experiment. Parts of the walls and the ceiling of the container were furnished with oriented strand boards (OSB). The fire source was a wooden crib made of 36 pieces of wood strips of 2.5 cm × 2.5 cm × 30 cm, onto which 500 mL of ethanol was poured shortly before ignition. The fire bed was situated in a corner of the compartment, 1.2 m below the ceiling. The measurement devices consisted of six built-in K-type thermocouples, which were fixed at key locations (see Figure 1) and connected to a data acquisition logger. Flames were observed to impinge on the ceiling and exit through the opening, and we also noted the ignition of crumpled newspaper and stages of fire development that are known as indicators of flashover.
In Section 2, we present the theoretical background and briefly summarize the approaches that are used to model fire.
Section 3 is dedicated to the results regarding the analysis of the collected raw data.

2. Theoretical Background and Remarks

2.1. Entropy and Statistical Complexity

The natural logarithm is used below, as everywhere else in this paper.
Shannon’s entropy [4] is defined as $H(P) = -\sum_{i=1}^{n} p_i \log p_i$, where $P = (p_1, \ldots, p_n)$ is a finite probability distribution. It is nonnegative and its maximum value is $H(U) = \log n$, where $U = (\frac{1}{n}, \ldots, \frac{1}{n})$. Throughout the paper, we use the convention $0 \cdot \log 0 = 0$.
The Kullback-Leibler divergence [5] is defined by
$$D(P \| R) = \sum_{i=1}^{n} p_i (\log p_i - \log r_i),$$
where $P = (p_1, \ldots, p_n)$ and $R = (r_1, \ldots, r_n)$ are probability distributions. It is nonnegative and it vanishes for $P = R$.
If the value 0 appears in the probability distributions $P = (p_1, \ldots, p_n)$ and $R = (r_1, \ldots, r_n)$, it must appear in the same positions for the sake of significance. Otherwise, one usually considers the conventions $0 \log \frac{0}{b} = 0$ for $b \geq 0$ and $a \log \frac{a}{0} = \infty$ for $a > 0$. We remark that these are strong limitations and that such conditions rarely occur in practice.
To overcome this issue, the following divergence, which is well defined for arbitrary pairs of distributions, is used in the literature.
The Jensen-Shannon divergence (see [6,7]) is given by
$$\mathrm{JS}(P \| R) = \frac{1}{2} D\left(P \,\Big\|\, \frac{P+R}{2}\right) + \frac{1}{2} D\left(R \,\Big\|\, \frac{P+R}{2}\right) = H\left(\frac{P+R}{2}\right) - \frac{H(P) + H(R)}{2}.$$
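For concreteness, the three quantities above can be computed directly; the following is a minimal sketch (our illustration, not code from the paper), using natural logarithms and the convention $0 \cdot \log 0 = 0$:

```python
import math

def shannon_entropy(p):
    """H(P) = -sum_i p_i log p_i, with the convention 0*log(0) = 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, r):
    """D(P||R) = sum_i p_i (log p_i - log r_i); finite only if r_i > 0 wherever p_i > 0."""
    return sum(pi * (math.log(pi) - math.log(ri)) for pi, ri in zip(p, r) if pi > 0)

def js_divergence(p, r):
    """JS(P||R) = H((P+R)/2) - (H(P) + H(R))/2; well-defined for all P, R."""
    m = [(pi + ri) / 2 for pi, ri in zip(p, r)]
    return shannon_entropy(m) - (shannon_entropy(p) + shannon_entropy(r)) / 2
```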
The disequilibrium-based statistical complexity (LMC statistical complexity), introduced in 1995 by López-Ruiz, Mancini, and Calbet in [8], is defined as $C(P) = D(P) \cdot \frac{H(P)}{\log n}$, where $D(P)$, which is interpreted as disequilibrium, is the quadratic distance $D(P) = \sum_{i=1}^{n} \left(p_i - \frac{1}{n}\right)^2$.
Interpreted as entropic non-triviality (in Lamberti et al. [9] and Zunino et al. [10]), the Jensen-Shannon statistical complexity is defined by $C^{(\mathrm{JS})}(P) = Q^{(\mathrm{JS})}(P) \cdot \frac{H(P)}{\log n}$, where the disequilibrium $Q^{(\mathrm{JS})}(P)$ is $Q^{(\mathrm{JS})}(P) = k \cdot \mathrm{JS}(P \| U)$. Here, $k = \left(\max_P \mathrm{JS}(P \| U)\right)^{-1}$ is the normalizing constant and $U = (\frac{1}{n}, \ldots, \frac{1}{n})$. Therefore, we have $0 \leq C^{(\mathrm{JS})}(P) \leq 1$.
For the convenience of the interested reader, we include the following method to determine the normalizing constant (a result stated for computational purposes, without proof, in [9]).
Proposition 1.
Using the above notation, for the computation of the normalizing constant $k = \left(\max_P \mathrm{JS}(P \| U)\right)^{-1}$, the maximum is attained for $P$ such that $p_i = 1$ for some $i$. It holds that
$$k = \left(\log 2 - \frac{1}{2}\log\frac{n+1}{n} - \frac{\log(n+1)}{2n}\right)^{-1}.$$
Proof. 
We have the following calculations:
$$\mathrm{JS}(P \| U) = H\left(\frac{P+U}{2}\right) - \frac{H(P) + H(U)}{2} = \frac{1}{2}\sum_{i=1}^{n} p_i \log p_i - \frac{1}{2}\log n - \sum_{i=1}^{n}\left(\frac{p_i}{2} + \frac{1}{2n}\right)\log\left(\frac{p_i}{2} + \frac{1}{2n}\right),$$
$$\frac{\partial\, \mathrm{JS}(P \| U)}{\partial p_i} = \frac{1}{2}\log p_i + \frac{1}{2} - \frac{1}{2}\log\left(\frac{p_i}{2} + \frac{1}{2n}\right) - \frac{1}{2} = \frac{1}{2}\log p_i - \frac{1}{2}\log\left(\frac{p_i}{2} + \frac{1}{2n}\right),$$
$$\frac{\partial^2 \mathrm{JS}(P \| U)}{\partial p_i^2} = \frac{1}{2p_i} - \frac{1}{4\left(\frac{p_i}{2} + \frac{1}{2n}\right)} = \frac{1}{2p_i} - \frac{1}{2p_i + \frac{2}{n}} > 0, \qquad \frac{\partial^2 \mathrm{JS}(P \| U)}{\partial p_i \partial p_j} = 0 \quad (i \neq j).$$
So, the Hessian of $\mathrm{JS}(P \| U)$ is everywhere positive definite, whence $\mathrm{JS}(P \| U)$ is (strictly) convex on the open convex set $\{(p_1, \ldots, p_n) : 0 < p_i < 1 \text{ for all } i,\ \sum_{i=1}^{n} p_i = 1\}$. Therefore, $\mathrm{JS}(P \| U)$ cannot have a maximum in the interior (otherwise, it would be constant), and the points of maximum must lie on the boundary; see Theorem 3.10.10 in [11] (p. 171). Such points exist, because $\mathrm{JS}(P \| U)$ is continuous on the compact set $\Delta = \{(p_1, \ldots, p_n) : 0 \leq p_i \leq 1 \text{ for all } i,\ \sum_{i=1}^{n} p_i = 1\}$. The function $\mathrm{JS}(P \| U)$ is continuous and convex on the compact convex set $\Delta$, so its maximum lies on the set of vertices of $\Delta$ (where $p_i = 1$ for one $i$); see Theorem 3.10.11 in [11] (p. 171). Since $\mathrm{JS}(P \| U)$ does not depend on the order of the components of $P$, the maximum value is attained at all vertices, so it can be straightforwardly computed by setting $P = (1, 0, \ldots, 0)$. □
Remark 1.
Note that the maximal value of $\mathrm{JS}(P \| U)$ is $\log 2 - \frac{1}{2}\log\frac{n+1}{n} - \frac{\log(n+1)}{2n} \nearrow \log 2$, as $n \to \infty$. Since $\mathrm{JS}(P \| U)$ is bounded from above by $\log 2$, independently of $n$, the normalization of $\mathrm{JS}(P \| U)$ in the definition of the Jensen-Shannon complexity does not seem to be relevant, and one could simply consider $\mathrm{JS}(P \| U) \cdot \frac{H(P)}{\log n}$.
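Proposition 1 and Remark 1 are easy to check numerically; the sketch below (an illustration, assuming js_divergence from the earlier sketch is in scope) compares $\mathrm{JS}(P \| U)$ at a vertex of the simplex with the closed-form expression:

```python
import math

n = 6
u = [1 / n] * n
p = [1.0] + [0.0] * (n - 1)  # a vertex of the simplex, P = (1, 0, ..., 0)

# closed form from Proposition 1: log 2 - (1/2) log((n+1)/n) - log(n+1)/(2n)
closed_form = math.log(2) - 0.5 * math.log((n + 1) / n) - math.log(n + 1) / (2 * n)

print(js_divergence(p, u), closed_form)  # both ~0.4539, and both < log 2 ~ 0.6931
```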
Let $\lambda \in [0, 1]$. The parametric Jensen-Shannon divergence (see, for instance, [6]) is given by
$$\mathrm{JS}_\lambda(P \| R) = (1-\lambda)\, D(P \,\|\, (1-\lambda)P + \lambda R) + \lambda\, D(R \,\|\, (1-\lambda)P + \lambda R) = H((1-\lambda)P + \lambda R) - \big((1-\lambda)H(P) + \lambda H(R)\big).$$
It is nonnegative and it vanishes for $P = R$ or for $\lambda = 0$ or $1$. See also Figure 2.
The values $1-\lambda$ and $\lambda$ are interpreted as a priori probabilities. Note that $\mathrm{JS}_\lambda(P \| R) = \mathrm{JS}_{1-\lambda}(R \| P)$ and that $\mathrm{JS}_\lambda$ is not symmetric, unless $\lambda = 0.5$.
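A direct transcription of the defining formula (a sketch, reusing shannon_entropy from the sketch above) also makes the duality $\mathrm{JS}_\lambda(P \| R) = \mathrm{JS}_{1-\lambda}(R \| P)$ easy to verify numerically:

```python
def js_lambda(p, r, lam):
    """JS_lambda(P||R) = H((1-lam)P + lam R) - ((1-lam)H(P) + lam H(R))."""
    m = [(1 - lam) * pi + lam * ri for pi, ri in zip(p, r)]
    return shannon_entropy(m) - ((1 - lam) * shannon_entropy(p) + lam * shannon_entropy(r))

# the symmetric duality induced by the a priori probabilities (1 - lambda, lambda):
p, r = [0.7, 0.2, 0.1], [0.1, 0.3, 0.6]
assert abs(js_lambda(p, r, 0.3) - js_lambda(r, p, 0.7)) < 1e-12
```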
Mutatis mutandis, from Donald’s identity (Lemma 2.12 in [12]), one has
$$\mathrm{JS}_\lambda(P \| R) + D((1-\lambda)P + \lambda R \,\|\, Q) = (1-\lambda)\, D(P \| Q) + \lambda\, D(R \| Q)$$
for an arbitrarily fixed $\lambda \in [0, 1]$; a straightforward computation checks that it holds. Therefore,
$$\mathrm{JS}_\lambda(P \| R) = \min\big\{(1-\lambda)\, D(P \| Q) + \lambda\, D(R \| Q) : Q = (q_1, \ldots, q_n) \text{ is a finite probability distribution}\big\}.$$
We introduce the parametric Jensen-Shannon statistical complexity as
$$C_\lambda^{(\mathrm{JS})}(P) := \mathrm{JS}_\lambda(P \| U) \cdot \frac{H(P)}{\log n}.$$
As in the case of the complexities $C(P)$ and $C^{(\mathrm{JS})}(P)$, the new complexity $C_\lambda^{(\mathrm{JS})}(P)$ is zero (minimum complexity) for $P = U$ or when $p_i = 1$ for some $i$. These two cases describe very different states of the system; both are extreme circumstances considered simple, namely the states of maximum and minimum entropy, respectively.
We do not need to normalize $\mathrm{JS}_\lambda(P \| U)$ in the definition of the parametric Jensen-Shannon complexity (possibly one can feel more comfortable with its normalized version in other frameworks), but we stress that one can easily prove, following the same recipe as above, that its maximum value is attained for $P$ such that $p_i = 1$ for some $i$.
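In code, the new complexity is a one-liner on top of the previous sketches (again our illustration, not the authors' implementation):

```python
import math

def parametric_js_complexity(p, lam):
    """C_lambda^{(JS)}(P) = JS_lambda(P||U) * H(P) / log n (LMC style, unnormalized)."""
    n = len(p)
    u = [1 / n] * n
    return js_lambda(p, u, lam) * shannon_entropy(p) / math.log(n)
```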
Proposition 2.
Let $\lambda \in [0, 1]$. Using the above notation, it holds that
$$\max_P \mathrm{JS}_\lambda(P \| U) = -\lambda\log\lambda - (1-\lambda)\log(1-\lambda) - (1-\lambda)\log\left(1 + \frac{\lambda}{(1-\lambda)n}\right) - \frac{\lambda}{n}\log\frac{(1-\lambda)n + \lambda}{\lambda}.$$
Moreover, $\max_P \mathrm{JS}_\lambda(P \| U) \nearrow -\lambda\log\lambda - (1-\lambda)\log(1-\lambda) \leq \log 2$, as $n \to \infty$.
Proof. 
We omit the computation of $\max_P \mathrm{JS}_\lambda(P \| U)$, which is straightforward (as above, the maximum is attained at a vertex, e.g., $P = (1, 0, \ldots, 0)$).
To justify the monotonicity, it is enough to prove that $f(x) = \frac{\lambda}{x}\log\frac{(1-\lambda)x + \lambda}{\lambda}$ is decreasing:
$$f'(x) = -\frac{\lambda}{x^2}\left[\frac{\lambda}{(1-\lambda)x + \lambda} - 1 - \log\frac{\lambda}{(1-\lambda)x + \lambda}\right] < 0, \quad \text{for } \lambda \in (0, 1) \text{ and } x > 0,$$
since $t - 1 - \log t > 0$ for $t \in (0, 1)$.
Furthermore, it is obvious that
$$(1-\lambda)\log\left(1 + \frac{\lambda}{(1-\lambda)n}\right) + \frac{\lambda}{n}\log\frac{(1-\lambda)n + \lambda}{\lambda} \geq 0.$$
The inequality $-\lambda\log\lambda - (1-\lambda)\log(1-\lambda) \leq \log 2$ in the statement follows from Jensen’s inequality, applied to the concave logarithmic function.
Therefore, $\mathrm{JS}_\lambda(P \| U)$ is bounded from above by $\log 2$, independently of $n$. □
Remark 2.
We split this result into two inequalities (of independent interest), which can be proved by the same technique. Namely, it holds that
$$\max_P D(P \,\|\, (1-\lambda)P + \lambda U) \leq -\log(1-\lambda)$$
and
$$\max_P D(U \,\|\, (1-\lambda)P + \lambda U) \leq -\log\lambda,$$
with equality in the limit as $n \to \infty$.
Proposition 3.
Let $\lambda \in [0, 1]$ and $\mu \in (0, 1)$. Using the above notation, the following inequality holds:
$$\min\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} \mathrm{JS}_\mu(P \| R) \leq \mathrm{JS}_\lambda(P \| R) \leq \max\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} \mathrm{JS}_\mu(P \| R),$$
where $P = (p_1, \ldots, p_n)$ and $R = (r_1, \ldots, r_n)$ are two finite probability distributions.
Proof. 
The result is a particular case of Theorem 3.2 from [13]. We include here an alternative proof for the sake of completeness.
It is known that the entropy $H$ is concave; hence $H((1-\lambda)P + \lambda R) \geq (1-\lambda)H(P) + \lambda H(R)$. We prove that
$$\min\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\}\big[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\big] \leq H((1-\lambda)P + \lambda R) - (1-\lambda)H(P) - \lambda H(R) \leq \max\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\}\big[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\big].$$
We consider the case $0 \leq \frac{1-\lambda}{1-\mu} \leq \frac{\lambda}{\mu}$, so $\lambda \geq \mu$. For the first inequality, the concavity of $H$ implies
$$(1-\lambda)H(P) + \lambda H(R) + \min\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\}\big[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\big]$$
$$= (1-\lambda)H(P) + \lambda H(R) + \frac{1-\lambda}{1-\mu}\big[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\big]$$
$$= \frac{\lambda-\mu}{1-\mu} H(R) + \frac{1-\lambda}{1-\mu} H((1-\mu)P + \mu R) \leq H\left(\frac{\lambda-\mu}{1-\mu} R + \frac{1-\lambda}{1-\mu}\big((1-\mu)P + \mu R\big)\right) = H((1-\lambda)P + \lambda R),$$
because it holds that $\frac{\lambda-\mu}{1-\mu} + \frac{1-\lambda}{1-\mu} = 1$ and $\lambda - \mu \geq 0$.
For the second inequality, we have
$$(1-\lambda)H(P) + \lambda H(R) + \max\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\}\big[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\big]$$
$$= (1-\lambda)H(P) + \lambda H(R) + \frac{\lambda}{\mu}\big[H((1-\mu)P + \mu R) - (1-\mu)H(P) - \mu H(R)\big]$$
$$= -\frac{\lambda-\mu}{\mu}H(P) + \frac{\lambda}{\mu}H((1-\mu)P + \mu R) \geq H((1-\lambda)P + \lambda R),$$
because it holds that $\frac{\lambda}{\mu}H((1-\mu)P + \mu R) \geq H((1-\lambda)P + \lambda R) + \frac{\lambda-\mu}{\mu}H(P)$, which is equivalent to
$$H((1-\mu)P + \mu R) \geq \frac{\mu}{\lambda}H((1-\lambda)P + \lambda R) + \frac{\lambda-\mu}{\lambda}H(P),$$
again a consequence of the concavity of $H$, since $(1-\mu)P + \mu R = \frac{\mu}{\lambda}\big((1-\lambda)P + \lambda R\big) + \frac{\lambda-\mu}{\lambda}P$.
For $0 \leq \frac{\lambda}{\mu} \leq \frac{1-\lambda}{1-\mu}$, the proof is similar. □
Remark 3.
For $\lambda \in [0, 1]$, $\mu \in (0, 1)$, and $R = U$ in Proposition 3, the following inequality holds:
$$\min\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} \mathrm{JS}_\mu(P \| U) \leq \mathrm{JS}_\lambda(P \| U) \leq \max\left\{\frac{1-\lambda}{1-\mu}, \frac{\lambda}{\mu}\right\} \mathrm{JS}_\mu(P \| U).$$
For $\mu = \frac{1}{2}$, we obtain
$$2\min\{1-\lambda, \lambda\}\, \mathrm{JS}(P \| U) \leq \mathrm{JS}_\lambda(P \| U) \leq 2\max\{1-\lambda, \lambda\}\, \mathrm{JS}(P \| U).$$
Multiplying by $\frac{H(P)}{\log n}$, we deduce the following inequality relating the parametric and the (normalized) Jensen-Shannon statistical complexities:
$$2k\min\{1-\lambda, \lambda\}\, C^{(\mathrm{JS})}(P) \leq C_\lambda^{(\mathrm{JS})}(P) \leq 2k\max\{1-\lambda, \lambda\}\, C^{(\mathrm{JS})}(P),$$
where $k = \log 2 - \frac{1}{2}\log\frac{n+1}{n} - \frac{\log(n+1)}{2n}$ (here $k$ denotes the maximum value from Proposition 1, i.e., the reciprocal of the normalizing constant).
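The chain of inequalities in Remark 3 can be verified numerically; a small check (our illustration, building on the sketches above and on the normalized complexity of [9]) might look as follows:

```python
import math

def js_complexity(p):
    """Normalized Jensen-Shannon complexity C^{(JS)} in the sense of [9]."""
    n = len(p)
    u = [1 / n] * n
    kmax = math.log(2) - 0.5 * math.log((n + 1) / n) - math.log(n + 1) / (2 * n)
    return (1 / kmax) * js_divergence(p, u) * shannon_entropy(p) / math.log(n)

p, lam = [0.5, 0.3, 0.1, 0.1], 0.3
n = len(p)
k = math.log(2) - 0.5 * math.log((n + 1) / n) - math.log(n + 1) / (2 * n)
c, c_lam = js_complexity(p), parametric_js_complexity(p, lam)
assert 2 * k * min(1 - lam, lam) * c <= c_lam <= 2 * k * max(1 - lam, lam) * c
```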

2.2. Extraction of the Underlying Probability Distribution

The permutation entropy (PE) [14] quantifies the randomness and complexity of a time series based on the occurrence of ordinal patterns, that is, on comparisons of neighboring values of the series. For further details on the PE algorithm applied to the present experimental data, see [3].
Let $T = (t_1, \ldots, t_n)$ be a time series with distinct values.
Step 1. Rearranging the components of each $j$-tuple $(t_i, \ldots, t_{i+j-1})$ increasingly as $(t_{i+r_1-1}, \ldots, t_{i+r_j-1})$ yields a unique permutation of order $j$, denoted by $\pi = (r_1, \ldots, r_j)$, which is an encoding pattern that describes the ups and downs of the considered $j$-tuple.
Simple numerical examples may help clarify the concepts throughout this section.
Example 1.
For the five-tuple $(2.3, 1, 3.1, 6.1, 5.2)$, the corresponding permutation (encoding) is $(2, 1, 3, 5, 4)$.
Step 2. The absolute frequency of this permutation (the number of $j$-tuples associated with it) is
$$k_\pi := \#\{i : i \leq n - (j-1),\ (t_i, \ldots, t_{i+j-1}) \text{ is of type } \pi\}.$$
These values sum to the number of all consecutive $j$-tuples, that is, $n - (j-1)$.
Step 3. The permutation entropy of order $j$ is defined as $\mathrm{PE}(j) := -\sum_\pi p_\pi \log p_\pi$, where $p_\pi = \frac{k_\pi}{n - (j-1)}$ is the relative frequency.
In [14], the measured values of the time series are assumed to be distinct: the authors neglect equalities and propose to break them by adding small random perturbations (random noise) to the original series.
Another known approach is to rank equal values according to their order of emergence, that is, in their sequential/chronological order (see, for instance, [15,16]). We use this method throughout the paper to compute PE(j) for j = 3, 4, 5, as sketched below.
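The three steps, together with the chronological tie-breaking, can be summarized in a short sketch (an illustration of the procedure described above, not the authors' code; a stable sort automatically ranks equal values in their order of emergence):

```python
import math
from collections import Counter

def ordinal_pattern(window):
    """The permutation pi = (r_1, ..., r_j): 1-based positions that rearrange
    the window increasingly; Python's stable sort keeps ties chronological."""
    j = len(window)
    return tuple(i + 1 for i in sorted(range(j), key=lambda i: window[i]))

def permutation_entropy(series, j):
    """PE(j) = -sum_pi p_pi log p_pi over the relative pattern frequencies."""
    total = len(series) - (j - 1)              # number of consecutive j-tuples
    counts = Counter(ordinal_pattern(series[i:i + j]) for i in range(total))
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Example 1 revisited: the 5-tuple (2.3, 1, 3.1, 6.1, 5.2) encodes as (2, 1, 3, 5, 4)
assert ordinal_pattern([2.3, 1, 3.1, 6.1, 5.2]) == (2, 1, 3, 5, 4)
```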
When the PE algorithm is applied to the experimental fire data, $C_\lambda^{(\mathrm{JS})}(P)$ cannot be zero: more than one encoding pattern occurs, and these patterns are not equiprobable, since some patterns may be rare or locally forbidden (that is, one encounters such patterns at some thermocouples, but not in all six time series), as discussed in [3].
We now briefly describe the encoding steps of the TLPE (two-length permutation entropy) algorithm given by Watt and Politi in [17]; further details are provided in [3].
Step 1. Given the $j$-tuple $T = (t_1, \ldots, t_j)$, we first encode the last $k \leq j$ elements $(t_{j-k+1}, \ldots, t_j)$ according to the ordinal position of each element; that is, every $t_s$ is replaced by a symbol indicating the position occupied by $t_s$ within the increasing rearrangement of this $k$-tuple.
Next, we encode each previous element $t_m$, down to $m = 1$, by the symbol assigned to $t_m$ when Step 1 is applied to the $k$-tuple $(t_m, \ldots, t_{m+k-1})$.
Example 2.
Encoding obtained by the chronological ordering of equal values: $(4.1, 4.1, 4.1, 5, 2.1) \mapsto (1, 1, 2, 3, 1)$ for $k = 3$ and $j = 5$.
Steps 2 and 3 coincide with Steps 2 and 3 of the PE algorithm above.
This algorithm leads, after computing the relative frequencies of the encoding sequences, to the two-length permutation entropy (TLPE ( k ,   j ) ).
Given the pair $(k, j)$ of values, the number of symbolic (encoding) sequences of length $j$ is $k!\,k^{j-k}$, which can be much smaller than $j!$; hence this algorithm is faster, involves a simplified computation, and sometimes makes the results more relevant for large values of $j$.
We deal with the equal values by using the same method as for PE; that is, we consider them ordered chronologically.
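A sketch of the TLPE encoding (again our illustration under the stated tie handling, not the code of [17]) reproduces Example 2:

```python
import math
from collections import Counter

def ordinal_positions(ktuple):
    """1-based ordinal position of each element within the increasing
    rearrangement of the k-tuple; ties ranked chronologically (stable sort)."""
    order = sorted(range(len(ktuple)), key=lambda i: ktuple[i])
    symbols = [0] * len(ktuple)
    for position, index in enumerate(order, start=1):
        symbols[index] = position
    return symbols

def tlpe_pattern(window, k):
    """Last k elements: ordinal positions within the final k-tuple (Step 1);
    each earlier t_m: the symbol of t_m within (t_m, ..., t_{m+k-1})."""
    j = len(window)
    tail = ordinal_positions(window[j - k:])
    head = [ordinal_positions(window[m:m + k])[0] for m in range(j - k)]
    return tuple(head + tail)

def tlpe_entropy(series, k, j):
    """TLPE(k, j) from the relative frequencies of the encoding sequences."""
    total = len(series) - (j - 1)
    counts = Counter(tlpe_pattern(series[i:i + j], k) for i in range(total))
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Example 2 revisited, for k = 3 and j = 5:
assert tlpe_pattern([4.1, 4.1, 4.1, 5, 2.1], 3) == (1, 1, 2, 3, 1)
```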
In the next section, we apply the above techniques and assess their capability to discern changes in the parametric Jensen-Shannon statistical complexity of the experimental data.

3. Raw Data Analysis

The raw data set under consideration consists of temperatures measured during a compartment fire: six thermocouples T1, …, T6 recorded the temperature every second during the experiment. We thus obtain six time series of 3046 entries (data points) each; we aim at a better understanding of these results by modeling the time series using information theory, and at assessing the performance of the discussed statistical complexities.
We plot the parametric Jensen-Shannon statistical complexity against the parameter (for $\lambda \in \{0, 0.2, \ldots, 1\}$). We notice unusual behavior for the time series at T5, which is definitely not caused by the position of this thermocouple: the graph corresponding to T5 lies far from the graphs of the other thermocouples, that is, smaller values were obtained for the statistical complexities, with no apparent experimental or mathematical reason. See Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7.
We conclude that the PE and TLPE algorithms can be successfully used to detect unusual data collected in fire experiments: different embedding dimensions and different algorithms for determining the underlying probabilities lead to the same conclusion, the hierarchy among the statistical complexities established for the thermocouples T1–T5 is the same, and T5 always lies at a larger distance from the rest. The position of the thermocouple T5 does not justify this large difference (see Figure 1). This also agrees with the smaller values provided at T5 by the LMC statistical complexity in Figure 8.
It is not clear in [9] why only the disequilibrium provided by $\mathrm{JS}$ was considered and why $\mathrm{JS}_\lambda$ was avoided. Using the experimental data, we have verified that the parametric Jensen-Shannon complexity can be used for the analysis of time series related to fire dynamics: except for the trivial cases $\lambda = 0$ or $1$, the results are not altered by the non-symmetry of $\mathrm{JS}_\lambda$ for $\lambda \neq 0.5$ (provided that the embedding dimension $j$ is adequate to the amount of data), so one can draw conclusions similar to those for $\lambda = 0.5$. See Figure 8 (the plots obtained by PE(3), PE(4), TLPE(3,5), and TLPE(2,5) look similar, so we do not include them here). The choice of the embedding dimension $j$ is limited, since the factorial increases fast and a larger $j$ requires a larger amount of data $n$; as a guideline, the value of the statistical complexities remains relevant for $j$ such that $n \gg j!$. See also [18].
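To illustrate how the pieces fit together, the sketch below (hypothetical stand-in data, reusing ordinal_pattern and parametric_js_complexity from the earlier sketches) extracts the distribution over all $j!$ ordinal patterns, with unobserved patterns receiving probability zero, and evaluates $C_\lambda^{(\mathrm{JS})}$ on a grid of $\lambda$ values:

```python
from itertools import permutations
from collections import Counter

def pattern_distribution(series, j):
    """Relative frequencies of all j! ordinal patterns (zeros included)."""
    total = len(series) - (j - 1)
    counts = Counter(ordinal_pattern(series[i:i + j]) for i in range(total))
    return [counts.get(tuple(s + 1 for s in perm), 0) / total
            for perm in permutations(range(j))]

# hypothetical stand-in for a thermocouple time series (one reading per second)
series = [20.0, 20.5, 21.1, 22.9, 25.0, 24.1, 26.3, 30.8, 29.9, 35.2] * 100
p = pattern_distribution(series, j=3)   # j chosen so that j! is well below len(series)
for lam in (0.2, 0.4, 0.5, 0.6, 0.8):
    print(lam, parametric_js_complexity(p, lam))
```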
Moreover, the proposed parametric Jensen-Shannon statistical complexities complement and validate the information provided by the usual LMC and Jensen-Shannon statistical complexities; see Figure 8 and Figure 9 for a quick comparison with the descriptions provided by the Jensen-Shannon and LMC statistical complexities. According to our findings, the parametric Jensen-Shannon statistical complexity is a valid tool for the analysis of the evolution of the temperature in compartment fire data. The slight differences between the upper line (corresponding to $\lambda = 0.5$) in Figure 9 and the one in Figure 10 appear because the Jensen-Shannon complexity of [9] is defined using the normalized Jensen-Shannon divergence, while we introduced the parametric Jensen-Shannon divergence in the LMC style, that is, without normalizing the disequilibrium.
The most relevant aspect is that the parametric Jensen-Shannon complexity yields similar plots regardless of the embedding dimension and of the encoding algorithm used to determine the probability distribution, so the analysis is coherent and not misleading. Similarities with other complexity formulae in use would certainly improve the whole picture and bring us one step closer to understanding their ability to capture the behavior of various phenomena, in this particular case the fire dynamics. See Figure 8, Figure 9 and Figure 10. We remark that these types of similarities might yield further mathematical results stating relationships among these notions: the (parametric) Jensen-Shannon and the LMC complexities.

4. Concluding Remarks on the Limitations of Our Study

The newly proposed complexities are used to analyze a full-scale experimental data set collected from a compartment fire.
For various algorithms and various embedding dimensions, more comparisons can be performed from this point onwards. We could not answer the questions about the merits and demerits of the known statistical complexities: such aspects are not yet clear in the literature, even in other frameworks where the permutation entropy has already been used by many researchers. Therefore, we discussed the relevance of the use of statistical complexities in the framework of fire data: small changes in the algorithms or the choice of different embedding dimensions do not affect the interpretation of the results and the conclusions. This means that the new mathematical tool (the parametric Jensen-Shannon complexity) is, informally speaking, "stable" in the framework of fire data. The accuracy of the interpretations can certainly be improved by the choice of the parameters, but the degree of improvement cannot be estimated from the data gathered in a single experiment: further research is required.
Other recent results on the analysis of this data set can be found in [19]; for the background needed to follow that material, the reader is referred to [20]. For the use of the permutation entropy in other frameworks, see [21,22].
Our results might also indicate turbulence or a malfunction of the thermocouple T5 (e.g., an improperly calibrated scale); however, a detailed discussion is beyond the scope of the present paper.

Author Contributions

The work presented here was carried out in collaboration between all authors, who contributed equally and significantly to writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant of the Romanian Ministry of Research and Innovation, CCCDI-UEFISCDI, project number PN-III-P1-1.2-PCCDI-2017-0350/38PCCDI within PNCDI III.

Acknowledgments

The authors thank Eleutherius Symeonidis for the argumentation of the proof of Proposition 1.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Takagi, K.; Gotoda, H.; Tokuda, I.T.; Miyano, T. Dynamic behavior of temperature field in a buoyancy-driven turbulent fire. Phys. Lett. A 2018, 382, 3181–3186.
2. Murayama, S.; Kaku, K.; Funatsu, M.; Gotoda, H. Characterization of dynamic behavior of combustion noise and detection of blowout in a laboratory-scale gas-turbine model combustor. Proc. Combust. Inst. 2019, 37, 5271–5278.
3. Mitroi-Symeonidis, F.-C.; Anghel, I.; Lalu, O.; Popa, C. The permutation entropy and its applications on fire tests data. arXiv 2019, arXiv:1908.04274.
4. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
5. Kullback, S.; Leibler, L.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
6. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151.
7. Rao, C.R.; Nayak, T.K. Cross entropy, dissimilarity measures, and characterizations of quadratic entropy. IEEE Trans. Inf. Theory 1985, 31, 589–593.
8. López-Ruiz, R.; Mancini, H.L.; Calbet, X. A statistical measure of complexity. Phys. Lett. A 1995, 209, 321–326.
9. Lamberti, P.W.; Martin, M.T.; Plastino, A.; Rosso, O.A. Intensive entropic non-triviality measure. Phys. A Stat. Mech. Appl. 2004, 334, 119–131.
10. Zunino, L.; Soriano, M.C.; Rosso, O.A. Distinguishing chaotic and stochastic dynamics from time series by using a multiscale symbolic approach. Phys. Rev. E 2012, 86, 046210.
11. Niculescu, C.P.; Persson, L.-E. Convex Functions and Their Applications. A Contemporary Approach, 2nd ed.; CMS Books in Mathematics; Springer-Verlag: New York, NY, USA, 2018; Volume 23.
12. Donald, M.J. Further results on the relative entropy. Math. Proc. Camb. Philos. Soc. 1987, 101, 363–373.
13. Mitroi-Symeonidis, F.-C. About the precision in Jensen-Steffensen inequality. Ann. Univ. Craiova Ser. Mat. Inform. 2010, 37, 73–84.
14. Bandt, C.; Pompe, B. Permutation entropy: A natural complexity measure for time series. Phys. Rev. Lett. 2002, 88, 174102.
15. Cao, T.; Tung, W.W.; Gao, J.B.; Protopopescu, V.A.; Hively, L.M. Detecting dynamical changes in time series using the permutation entropy. Phys. Rev. E 2004, 70, 046217.
16. Duan, S.; Wang, F.; Zhang, Y. Research on the biophoton emission of wheat kernels based on permutation entropy. Optik 2019, 178, 723–730.
17. Watt, S.J.; Politi, A. Permutation entropy revisited. Chaos Solitons Fractals 2019, 120, 95–99.
18. Riedl, M.; Müller, A.; Wessel, N. Practical considerations of permutation entropy. Eur. Phys. J. Spec. Top. 2013, 222, 249–262.
19. Mitroi-Symeonidis, F.-C.; Anghel, I.; Furuichi, S. Encodings for the calculation of the permutation hypoentropy and their applications on full-scale compartment fire data. Acta Tech. Napoc. Ser. Appl. Math. Mech. Eng. 2019, 62, 607–616.
20. Furuichi, S.; Mitroi-Symeonidis, F.-C.; Symeonidis, E. On some properties of Tsallis hypoentropies and hypodivergences. Entropy 2014, 16, 5377–5399.
21. Araujo, F.H.A.; Bejan, L.; Rosso, O.A.; Stosic, T. Permutation entropy and statistical complexity analysis of Brazilian agricultural commodities. Entropy 2019, 21, 1220.
22. Song, Y.; Ju, Y.; Du, K.; Liu, W.; Song, J. Online road detection under a shadowy traffic image using a learning-based illumination-independent image. Symmetry 2018, 10, 707.
Figure 1. Arrangement of the flashover container.
Figure 2. The parametric Jensen-Shannon divergence $\mathrm{JS}_\lambda(P \,\|\, 1-P)$, for $P = (t, 1-t)$, $t \in [0, 1]$.
Figure 3. Plot obtained using the PE(5) algorithm. PE: permutation entropy.
Figure 4. Plot obtained using the PE(4) algorithm.
Figure 5. Plot obtained using the PE(3) algorithm.
Figure 6. Plot obtained using the TLPE(3,5) algorithm. TLPE: two-length permutation entropy.
Figure 7. Plot obtained using the TLPE(2,5) algorithm.
Figure 8. Statistical complexity.
Figure 9. Jensen-Shannon statistical complexity ($\lambda = 1/2$).
Figure 10. Parametric Jensen-Shannon complexity obtained using the PE(5) algorithm.
