Article

(αF,αF¯)-Information Fusion Generated by Information Segmentation and Its Intelligent Retrieval

1
School of Mathematics and Statistics, Huanghuai University, Zhumadian 463000, China
2
School of Mathematics and Systems Science, Shandong University, Jinan 250100, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(5), 713; https://doi.org/10.3390/math10050713
Submission received: 27 January 2022 / Revised: 17 February 2022 / Accepted: 22 February 2022 / Published: 24 February 2022
(This article belongs to the Topic Dynamical Systems: Theory and Applications)

Abstract

Making use of a mathematical model with dynamic features and attribute-disjunctive characteristics, the new concepts of $\alpha^{F}$-information segmentation, $\alpha^{\bar F}$-information segmentation and $(\alpha^{F}, \alpha^{\bar F})$-information segmentation, together with their attribute characteristics, are given; the intelligent acquisition of information segmentation by matrix reasoning is presented, along with the information segmentation theorems. Moreover, the equivalence between information segmentation and information fusion is discussed, and the intelligent acquisition intelligent retrieval algorithm for information fusion is given. Based on these theoretical results, the intelligent information fusion retrieval algorithm and a simple application to health big data are presented. The results presented in this paper are based on new ideas.
MSC:
primary: 03E72, 37N35; secondary: 93A10, 15B52

1. Introduction

Facts I~III, encountered in applied research on information fusion and information retrieval, are as follows: $(x) = \{x_1, x_2, \ldots, x_k\}$ is a piece of information and $x_i \in (x)$ is an information element; $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_k\}$ is the attribute set of $(x)$ ($\alpha$ is the characteristic set of $(x)$); and the attribute $\alpha_i \in \alpha$ of an information element $x_i \in (x)$ satisfies the "disjunctive normal form". Information $(x)$ has the following dynamic characteristics:
  • I. Some information elements $x_i$ outside $(x)$ are added to $(x)$, so $(x)$ generates $(\bar{x})^{F}$ with $(x) \subseteq (\bar{x})^{F}$;
  • II. Some information elements $x_j$ in $(x)$ are deleted from $(x)$, so $(x)$ generates $(\bar{x})^{\bar F}$ with $(\bar{x})^{\bar F} \subseteq (x)$;
  • III. When I and II occur at the same time, $(x)$ generates $(\bar{x})^{F}$ and $(\bar{x})^{\bar F}$ simultaneously, and $(\bar{x})^{\bar F} \subseteq (x) \subseteq (\bar{x})^{F}$.
Facts I~III have not received much attention in applied research on information fusion and information retrieval. I is internal information fusion (an information element $x_i$ outside $(x)$ is fused into $(x)$, and $(x)$ generates $(\bar{x})^{F}$); II is external information fusion (an information element $x_j$ in $(x)$ is fused out of $(x)$, and $(x)$ generates $(\bar{x})^{\bar F}$); III is internal and external information fusion. I and II are the two basic forms of information fusion. Many authors have studied the theory and applications of information fusion (see [1,2,3,4,5,6,7]). Re-examining and re-studying I~III leads to the following new concepts:
  • I*. I is $\alpha^{F}$-information segmentation;
  • II*. II is $\alpha^{\bar F}$-information segmentation;
  • III*. III is $(\alpha^{F}, \alpha^{\bar F})$-information segmentation. "Information segmentation" thus becomes a new concept in the application of information fusion.
Several authors have looked for a mathematical model and method with dynamic characteristics to study I~III. In [8], the inverse packet sets (inverse p-sets) model and its structure are proposed; several applications of inverse p-sets are given in [8,9,10,11,12]. Inverse p-sets are obtained by introducing dynamic characteristics into a finite ordinary element set $X$ and improving $X$. The characteristics of inverse p-sets match facts I~III exactly; therefore, inverse p-sets are a new mathematical method for studying information fusion, dynamic information retrieval and their applications. Refs. [13,14,15,16,17,18,19] presented p-sets, which are the dual form of inverse p-sets, and [20,21,22] presented function inverse p-sets as the functional form of inverse p-sets. Function p-sets are the functional form of p-sets, as given in [23,24], and they are widely used in dynamic information systems.
The main results of this paper are as follows:
  • The structure and characteristics of inverse packet sets are introduced, and the fact of the existence of inverse p-sets and its logical characteristics are given; these are important and indispensable preliminaries.
  • The concept of information segmentation is given, and its attribute characteristics are discussed.
  • The intelligent acquisition theorem of information segmentation is given by using inverse p-augmented matrix reasoning.
  • The equivalence concept and theorems relating information segmentation and information fusion are given.
  • The information fusion intelligent acquisition–retrieval algorithm and its application are presented. The application example comes from the disease diagnosis–treatment block of "health big data". The conceptual and theoretical results presented in this paper are new.

2. Inverse P-Sets Mathematical Model and Its Dynamic Structure

Given a finite ordinary element set $X = \{x_1, x_2, \ldots, x_q\} \subseteq U$ with attribute set $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_q\} \subseteq V$, $\bar{X}^{F}$ is referred to as the internal inverse p-set generated by $X$,
$\bar{X}^{F} = X \cup X^{+}$ (1)
$X^{+}$ is called the F-element supplementary set,
$X^{+} = \{u_i \mid u_i \in U,\ u_i \notin X,\ f(u_i) = x_i \in X,\ f \in F\}$ (2)
if the attribute set $\alpha^{F}$ of $\bar{X}^{F}$ meets
$\alpha^{F} = \alpha \cup \{\alpha_i \mid f(\beta_i) = \alpha_i \in \alpha,\ f \in F\}$ (3)
here, in (3), $\beta_i \in V$, $\beta_i \notin \alpha$, and $f \in F$ turns $\beta_i$ into $f(\beta_i) = \alpha_i \in \alpha$; in (1), $\bar{X}^{F} = \{x_1, x_2, \ldots, x_r\}$, $q < r$, $q, r \in N^{+}$.
$\bar{X}^{\bar F}$ is called the outer inverse p-set generated by $X$,
$\bar{X}^{\bar F} = X - X^{-}$ (4)
$X^{-}$ is called the $\bar{F}$-element deletion set of $X$,
$X^{-} = \{x_i \mid x_i \in X,\ \bar{f}(x_i) = u_i \notin X,\ \bar{f} \in \bar{F}\}$ (5)
if the attribute set $\alpha^{\bar F}$ of $\bar{X}^{\bar F}$ meets
$\alpha^{\bar F} = \alpha - \{\beta_i \mid \bar{f}(\alpha_i) = \beta_i \notin \alpha,\ \bar{f} \in \bar{F}\}$ (6)
here, in (6), $\alpha_i \in \alpha$, $\bar{f} \in \bar{F}$ turns $\alpha_i$ into $\bar{f}(\alpha_i) = \beta_i \notin \alpha$, and $\alpha^{\bar F} \neq \emptyset$; in (4), $\bar{X}^{\bar F} = \{x_1, x_2, \ldots, x_p\}$, $p < q$, $p, q \in N^{+}$.
The element set pair constituted by the internal inverse p-set $\bar{X}^{F}$ and the outer inverse p-set $\bar{X}^{\bar F}$ is called the inverse p-set generated by $X$, inverse p-set for short, and is recorded as
$(\bar{X}^{F}, \bar{X}^{\bar F})$ (7)
The Cantor set $X$ is referred to as the ground set of the inverse packet set.
From (1)–(3), we have that if $\alpha_1^{F} \subseteq \alpha_2^{F} \subseteq \cdots \subseteq \alpha_{n-1}^{F} \subseteq \alpha_n^{F}$, then
$\bar{X}_1^{F} \subseteq \bar{X}_2^{F} \subseteq \cdots \subseteq \bar{X}_{n-1}^{F} \subseteq \bar{X}_n^{F}$ (8)
From (4)–(6), we also have that if $\alpha_n^{\bar F} \subseteq \alpha_{n-1}^{\bar F} \subseteq \cdots \subseteq \alpha_2^{\bar F} \subseteq \alpha_1^{\bar F}$, then
$\bar{X}_n^{\bar F} \subseteq \bar{X}_{n-1}^{\bar F} \subseteq \cdots \subseteq \bar{X}_2^{\bar F} \subseteq \bar{X}_1^{\bar F}$ (9)
From (7)–(9), we obtain
$(\bar{X}_1^{F}, \bar{X}_n^{\bar F}) \subseteq (\bar{X}_2^{F}, \bar{X}_{n-1}^{\bar F}) \subseteq \cdots \subseteq (\bar{X}_{n-1}^{F}, \bar{X}_2^{\bar F}) \subseteq (\bar{X}_n^{F}, \bar{X}_1^{\bar F})$ (10)
From (8) and (9), we obtain
$\{(\bar{X}_i^{F}, \bar{X}_j^{\bar F}) \mid i \in I, j \in J\}$ (11)
(11) is referred to as the family of inverse p-sets generated by $X$, and (11) is the general expression of inverse packet sets.
From (1)–(11), we obtain the following theorem.
Theorem 1.
In the case of $F = \bar{F} = \emptyset$, the inverse p-set $(\bar{X}^{F}, \bar{X}^{\bar F})$ and the finite ordinary element set $X$ meet:
$(\bar{X}^{F}, \bar{X}^{\bar F})_{F = \bar{F} = \emptyset} = X$ (12)
Proof. 
1. If $F = \emptyset$, then in (3), $\{\alpha_i \mid f(\beta_i) = \alpha_i \in \alpha,\ f \in F\} = \emptyset$ and $\alpha^{F} = \alpha \cup \emptyset = \alpha$; in (2), $X^{+} = \{u_i \mid u_i \in U,\ u_i \notin X,\ f(u_i) = x_i \in X,\ f \in F\} = \emptyset$; in (1), $\bar{X}^{F} = X \cup X^{+} = X \cup \emptyset = X$.
2. If $\bar{F} = \emptyset$, then in (6), $\{\beta_i \mid \bar{f}(\alpha_i) = \beta_i \notin \alpha,\ \bar{f} \in \bar{F}\} = \emptyset$ and $\alpha^{\bar F} = \alpha - \emptyset = \alpha$; in (5), $X^{-} = \{x_i \mid x_i \in X,\ \bar{f}(x_i) = u_i \notin X,\ \bar{f} \in \bar{F}\} = \emptyset$; in (4), $\bar{X}^{\bar F} = X - X^{-} = X - \emptyset = X$.
Combining 1 and 2 completes the proof. □
Theorem 2.
In the case of $F = \bar{F} = \emptyset$, the inverse p-set family
$\{(\bar{X}_i^{F}, \bar{X}_j^{\bar F}) \mid i \in I, j \in J\}$
and the finite ordinary element set $X$ meet:
$\{(\bar{X}_i^{F}, \bar{X}_j^{\bar F}) \mid i \in I, j \in J\}_{F = \bar{F} = \emptyset} = X$
The proof is similar to Theorem 1, and it is omitted.
Proposition 1.
Under static conditions ($F = \bar{F} = \emptyset$), the finite ordinary element set $X$ is a special case of the inverse p-set $(\bar{X}^{F}, \bar{X}^{\bar F})$, and the inverse p-set $(\bar{X}^{F}, \bar{X}^{\bar F})$ is the general form of the finite ordinary element set $X$.
Proposition 2.
The dynamic characteristics of the inverse p-set $(\bar{X}^{F}, \bar{X}^{\bar F})$ come from attribute supplement and attribute deletion in the attribute set $\alpha$ of $X$, and conversely.
Remark 1.
(1) $U$ is a finite element domain and $V$ is a finite attribute domain;
(2) $F = \{f_1, f_2, \ldots, f_n\}$ and $\bar{F} = \{\bar{f}_1, \bar{f}_2, \ldots, \bar{f}_n\}$ are families of element (attribute) transfers; $f \in F$ and $\bar{f} \in \bar{F}$ are element (attribute) transfers, where an element (attribute) transfer is a kind of transformation or function;
(3) The characteristic of $f \in F$ is that for an element $u_i \in U$, $u_i \notin X$, $f$ turns $u_i$ into $f(u_i) = x_i \in X$; for an attribute $\beta_i \in V$, $\beta_i \notin \alpha$, $f$ turns $\beta_i$ into $f(\beta_i) = \alpha_i \in \alpha$;
(4) The characteristic of $\bar{f} \in \bar{F}$ is that for an element $x_i \in X$, $\bar{f}$ turns $x_i$ into $\bar{f}(x_i) = u_i \notin X$; for an attribute $\alpha_i \in \alpha$, $\bar{f}$ turns $\alpha_i$ into $\bar{f}(\alpha_i) = \beta_i \notin \alpha$;
(5) The dynamic characteristic of (1) is the same as that of the accumulator $T = T + 1$;
(6) The dynamic characteristic of (4) is the same as that of the down-counter $T = T - 1$. For example, in (1), let $\bar{X}_1^{F} = X \cup X_1^{+}$, $\bar{X}_2^{F} = \bar{X}_1^{F} \cup X_2^{+} = (X \cup X_1^{+}) \cup X_2^{+}$, and so forth. A short code sketch of these dynamics is given below.
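To make the dynamic structure (1)–(6) concrete, here is a minimal Python sketch, assuming the simplest possible reading: every element of the universe $U$ carries exactly one attribute, and the ground set $X$ collects the elements whose attribute lies in $\alpha$. Supplementing attributes then enlarges $X$ to the internal inverse p-set, and deleting attributes shrinks it to the outer inverse p-set. The dictionary `attr_of` and the function names are illustrative assumptions, not notation from the paper.

```python
# A minimal sketch of the inverse p-set dynamics (1)-(6), assuming each element
# of the universe U carries exactly one attribute and the ground set X collects
# the elements whose attribute belongs to alpha.

def ground_set(universe, attr_of, alpha):
    """X = {u in U : attr_of[u] in alpha} -- the Cantor ground set."""
    return {u for u in universe if attr_of[u] in alpha}

def internal_inverse_pset(universe, attr_of, alpha, supplemented_attrs):
    """alpha^F = alpha with supplemented attributes; returns (X^F, alpha^F).
    Supplementing attributes pulls extra elements of U into X (eqs. (1)-(3))."""
    alpha_F = set(alpha) | set(supplemented_attrs)
    return ground_set(universe, attr_of, alpha_F), alpha_F

def outer_inverse_pset(universe, attr_of, alpha, deleted_attrs):
    """alpha^Fbar = alpha minus deleted attributes; returns (X^Fbar, alpha^Fbar).
    Deleting attributes pushes elements out of X (eqs. (4)-(6))."""
    alpha_Fbar = set(alpha) - set(deleted_attrs)
    return ground_set(universe, attr_of, alpha_Fbar), alpha_Fbar
```

With no attribute supplement and no attribute deletion, both calls return $X$ itself, which is the content of Theorem 1.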
The fact of the existence of inverse p-sets and its logical characteristics is shown by the following example.
$X = \{x_1, x_2, x_3, x_4, x_5\}$ is a finite ordinary element set composed of five children's toys, and $\alpha = \{\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5\}$ is the attribute set of $X$ (the colour set of the children's toys), where $\alpha_1$ denotes red, $\alpha_2$ denotes yellow, $\alpha_3$ denotes blue, $\alpha_4$ denotes green, and $\alpha_5$ denotes orange. The attribute $\alpha_i$ of $x_i \in X$ satisfies the "disjunctive" feature of mathematical logic, that is, $\alpha_i = \alpha_1 \vee \alpha_2 \vee \alpha_3 \vee \alpha_4 \vee \alpha_5$, $i = 1, 2, \ldots, 5$, where "$\vee$" is the disjunction operation.
1. If the attributes $\alpha_6$ and $\alpha_7$ are supplemented into $\alpha$, where $\alpha_6$ denotes black and $\alpha_7$ denotes purple, then $\alpha$ generates
$\alpha^{F} = \alpha \cup \{\alpha_6, \alpha_7\} = \{\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6, \alpha_7\}$
and $X$ is supplemented with $x_6$ and $x_7$, so $X$ generates $\bar{X}^{F} = X \cup \{x_6, x_7\} = \{x_1, x_2, x_3, x_4, x_5, x_6, x_7\}$; the attribute of $x_i \in \bar{X}^{F}$ is
$\alpha_i = (\alpha_1 \vee \alpha_2 \vee \alpha_3 \vee \alpha_4 \vee \alpha_5) \vee \alpha_6 \vee \alpha_7 = \alpha_1 \vee \alpha_2 \vee \alpha_3 \vee \alpha_4 \vee \alpha_5 \vee \alpha_6 \vee \alpha_7$
2. If the attributes $\alpha_4$ and $\alpha_5$ are deleted from $\alpha$, then $\alpha$ generates $\alpha^{\bar F} = \alpha - \{\alpha_4, \alpha_5\} = \{\alpha_1, \alpha_2, \alpha_3\}$, and $x_4$ and $x_5$ are deleted from $X$, so $X$ generates $\bar{X}^{\bar F} = X - X^{-} = \{x_1, x_2, x_3, x_4, x_5\} - \{x_4, x_5\} = \{x_1, x_2, x_3\}$; the attribute of $x_i \in \bar{X}^{\bar F}$ is
$\alpha_i = (\alpha_1 \vee \alpha_2 \vee \alpha_3 \vee \alpha_4 \vee \alpha_5) - (\alpha_4 \vee \alpha_5) = \alpha_1 \vee \alpha_2 \vee \alpha_3$
3. If attribute supplement and attribute deletion are carried out in $\alpha$ simultaneously, $\alpha$ generates $\alpha^{F}$ and $\alpha^{\bar F}$ at the same time, and $X$ generates $\bar{X}^{F}$ and $\bar{X}^{\bar F}$, i.e., $X$ generates $(\bar{X}^{F}, \bar{X}^{\bar F})$; the attribute $\alpha_i$ of $x_i \in \bar{X}^{F}$ and the attribute $\alpha_j$ of $x_j \in \bar{X}^{\bar F}$ satisfy $\bigl((\bigvee_{k=1}^{5}\alpha_k) \vee \bigvee_{k=6}^{7}\alpha_k,\ (\bigvee_{k=1}^{5}\alpha_k) - \bigvee_{k=4}^{5}\alpha_k\bigr)$. This is a simple fact that can be accepted by anyone, and it is replayed in code below.
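The toy example can be replayed with plain set operations; the following sketch simply encodes steps 1–3 above (the labels `x1`, `a1`, … stand for the toys and colours of the example).

```python
# Replaying the children's-toys example with plain set operations.
X     = {"x1", "x2", "x3", "x4", "x5"}
alpha = {"a1", "a2", "a3", "a4", "a5"}          # colours of the five toys

# 1. supplement a6 (black), a7 (purple): alpha -> alpha^F, X -> X^F
alpha_F = alpha | {"a6", "a7"}
X_F     = X | {"x6", "x7"}
assert X <= X_F                                  # internal inverse p-set contains X

# 2. delete a4, a5: alpha -> alpha^Fbar, X -> X^Fbar
alpha_Fbar = alpha - {"a4", "a5"}
X_Fbar     = X - {"x4", "x5"}
assert X_Fbar <= X                               # outer inverse p-set is contained in X

# 3. both at once: X generates the inverse p-set pair (X^F, X^Fbar)
inverse_p_set = (X_F, X_Fbar)
print(inverse_p_set)
```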
The concepts and models given in Section 2 prepare the ground for the theories and methods developed in Section 3, Section 4, Section 5 and Section 6.
Agreement: in the rest of the paper, $(x) = X$, $(\bar{x})^{F} = \bar{X}^{F}$, $(\bar{x})^{\bar F} = \bar{X}^{\bar F}$ and $((\bar{x})^{F}, (\bar{x})^{\bar F}) = (\bar{X}^{F}, \bar{X}^{\bar F})$ of Section 2. These notations are used in Section 3, Section 4, Section 5 and Section 6 without special explanation.

3. Information Segmentation and Its Attribute Characteristics

If there exists $\Delta x$ with $\Delta x \cap (x) = \emptyset$ such that
$(\bar{x})^{F} = (x) \cup \Delta x$ (13)
then $(\bar{x})^{F}$ is the $\alpha^{F}$-information segmentation of the information $(x)$.
If there exists $\nabla x$ with $\nabla x \subseteq (x)$ such that
$(\bar{x})^{\bar F} = (x) - \nabla x$ (14)
then $(\bar{x})^{\bar F}$ is the $\alpha^{\bar F}$-information segmentation of the information $(x)$.
The information segmentation pair composed of $(\bar{x})^{F}$ and $(\bar{x})^{\bar F}$ is called the $(\alpha^{F}, \alpha^{\bar F})$-information segmentation of the information $(x)$, recorded as
$((\bar{x})^{F}, (\bar{x})^{\bar F})$ (15)
and
$\{((\bar{x})_i^{F}, (\bar{x})_j^{\bar F}) \mid i \in I, j \in J\}$ (16)
is called the $(\alpha^{F}, \alpha^{\bar F})$-information segmentation family of the information $(x)$; (16) is the general expression of $(\alpha^{F}, \alpha^{\bar F})$-information segmentation.
In (13), $\Delta x$ consists of the information elements $x_i$ supplemented into the information $(x)$; in (14), $\nabla x$ consists of the information elements $x_j$ deleted from the information $(x)$.
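A minimal sketch of definitions (13)–(15): given the supplemented elements $\Delta x$ or the deleted elements $\nabla x$, the segmentations are plain set union and difference. The function names are illustrative assumptions.

```python
def alpha_F_segmentation(x, delta_x):
    """(x)^F = (x) union delta_x with delta_x disjoint from (x), eq. (13)."""
    assert not (set(x) & set(delta_x)), "delta_x must be disjoint from (x)"
    return set(x) | set(delta_x)

def alpha_Fbar_segmentation(x, nabla_x):
    """(x)^Fbar = (x) minus nabla_x with nabla_x a subset of (x), eq. (14)."""
    assert set(nabla_x) <= set(x), "nabla_x must consist of elements of (x)"
    return set(x) - set(nabla_x)

x = {"x1", "x2", "x3", "x4"}
pair = (alpha_F_segmentation(x, {"x5"}),      # (x)^F
        alpha_Fbar_segmentation(x, {"x4"}))   # (x)^Fbar -> together, eq. (15)
print(pair)
```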
From (13)–(16), we obtain the following.
Theorem 3.
($\alpha^{F}$-information segmentation attribute theorem) If $(\bar{x})_k^{F}$ is the $\alpha^{F}$-information segmentation of the information $(x)$, and $\alpha_k^{F}$ and $\alpha$ are the attribute sets of $(\bar{x})_k^{F}$ and $(x)$, respectively, then
$\alpha_k^{F} - (\alpha \cup \Delta\alpha) = \emptyset$ (17)
In (17), $\Delta\alpha$ satisfies $\alpha \cap \Delta\alpha = \emptyset$, and $\Delta\alpha$ consists of the attributes $\alpha_i$ added to $\alpha$.
Proof. 
From (1)–(3) in Section 2 and (13), $\alpha \subseteq \alpha_k^{F}$ and $\Delta\alpha$ is the supplementary attribute set in $\alpha$, that is, $\alpha_k^{F} = \alpha \cup \Delta\alpha$; (17) follows directly. □
Theorem 4.
($\alpha^{\bar F}$-information segmentation attribute theorem) If $(\bar{x})_k^{\bar F}$ is the $\alpha^{\bar F}$-information segmentation of the information $(x)$, and $\alpha_k^{\bar F}$ and $\alpha$ are the attribute sets of $(\bar{x})_k^{\bar F}$ and $(x)$, respectively, then
$\alpha_k^{\bar F} - (\alpha - \nabla\alpha) = \emptyset$ (18)
In (18), $\nabla\alpha \subseteq \alpha$, and $\nabla\alpha$ consists of the attributes $\alpha_j$ deleted from $\alpha$.
The proof is similar to Theorem 3, and it is omitted.
From Theorems 3 and 4, we can obtain directly the following.
Inference 1.
If $((\bar{x})_k^{F}, (\bar{x})_k^{\bar F})$ is the $(\alpha^{F}, \alpha^{\bar F})$-information segmentation of $(x)$, then the attribute set $(\alpha_k^{F}, \alpha_k^{\bar F})$ of $((\bar{x})_k^{F}, (\bar{x})_k^{\bar F})$ and the attribute set $\alpha$ of $(x)$ meet
$(\alpha_k^{F}, \alpha_k^{\bar F}) - (\alpha \cup \Delta\alpha,\ \alpha - \nabla\alpha) = (\emptyset, \emptyset)$ (19)
In (19), $(\alpha_k^{F}, \alpha_k^{\bar F}) - (\alpha \cup \Delta\alpha,\ \alpha - \nabla\alpha) = (\emptyset, \emptyset)$ represents $\alpha_k^{F} - (\alpha \cup \Delta\alpha) = \emptyset$ and $\alpha_k^{\bar F} - (\alpha - \nabla\alpha) = \emptyset$.
Theorem 5.
(Attribute disjunctive extension theorem of $\alpha^{F}$-information segmentation) If $(\bar{x})_k^{F}$ is the $\alpha^{F}$-information segmentation of the information $(x)$, then the attribute $\alpha_i$ of an information element $x_i \in (\bar{x})_k^{F}$ meets
$\alpha_i = \Bigl(\bigvee_{\lambda=1}^{q} \alpha_\lambda\Bigr) \vee \bigvee_{\lambda=q+1}^{r} \alpha_\lambda$ (20)
In (20), $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_q\}$ is the attribute set of the information $(x) = \{x_1, x_2, \ldots, x_q\}$, and $\alpha_k^{F} = \{\alpha_1, \alpha_2, \ldots, \alpha_q, \alpha_{q+1}, \ldots, \alpha_r\}$ is the attribute set of $(\bar{x})_k^{F}$.
Theorem 6.
(Attribute disjunctive contraction theorem of $\alpha^{\bar F}$-information segmentation) If $(\bar{x})_k^{\bar F}$ is the $\alpha^{\bar F}$-information segmentation of the information $(x)$, then the attribute $\alpha_j$ of an information element $x_j \in (\bar{x})_k^{\bar F}$ meets
$\alpha_j = \Bigl(\bigvee_{\lambda=1}^{q} \alpha_\lambda\Bigr) - \bigvee_{\lambda=p+1}^{q} \alpha_\lambda$ (21)
In (21), $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_p, \alpha_{p+1}, \ldots, \alpha_q\}$ is the attribute set of the information $(x) = \{x_1, x_2, \ldots, x_p, x_{p+1}, \ldots, x_q\}$, and $\alpha_k^{\bar F} = \{\alpha_1, \alpha_2, \ldots, \alpha_p\}$ is the attribute set of $(\bar{x})_k^{\bar F} = \{x_1, x_2, \ldots, x_p\}$.
Theorems 5 and 6 follow easily from the fact and logical characteristics of the existence of inverse p-sets in Section 2, and the proofs are omitted.
Inference 2.
If $(\alpha_k^{F}, \alpha_k^{\bar F})$ is the attribute set of the $(\alpha^{F}, \alpha^{\bar F})$-information segmentation $((\bar{x})_k^{F}, (\bar{x})_k^{\bar F})$, then the pair $(\alpha_i, \alpha_j)$ composed of the attribute $\alpha_i$ of an information element $x_i \in (\bar{x})_k^{F}$ and the attribute $\alpha_j$ of $x_j \in (\bar{x})_k^{\bar F}$ meets
$(\alpha_i, \alpha_j) = \Bigl(\bigl(\bigvee_{\lambda=1}^{q}\alpha_\lambda\bigr) \vee \bigvee_{\lambda=q+1}^{r}\alpha_\lambda,\ \bigl(\bigvee_{\lambda=1}^{q}\alpha_\lambda\bigr) - \bigvee_{\lambda=p+1}^{q}\alpha_\lambda\Bigr)$ (22)
here, (22) represents $\alpha_i = \bigl(\bigvee_{\lambda=1}^{q}\alpha_\lambda\bigr) \vee \bigvee_{\lambda=q+1}^{r}\alpha_\lambda$ and $\alpha_j = \bigl(\bigvee_{\lambda=1}^{q}\alpha_\lambda\bigr) - \bigvee_{\lambda=p+1}^{q}\alpha_\lambda$; in (20)–(22), $p < q < r$, $p, q, r \in N^{+}$.
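Theorems 5 and 6 only lengthen or shorten the disjunction carried by an information element. The sketch below evaluates such a disjunctive attribute over a truth assignment; the boolean encoding is an assumption made purely for illustration, not part of the paper's formalism.

```python
from typing import Dict, Iterable

def disjunctive_attribute(attrs: Iterable[str], assignment: Dict[str, bool]) -> bool:
    """Evaluate alpha_i = alpha_1 v ... v alpha_r over a truth assignment."""
    return any(assignment.get(a, False) for a in attrs)

alpha      = ["a1", "a2", "a3", "a4"]            # attribute set of (x)
alpha_F    = alpha + ["a5", "a6"]                # extended disjunction, eq. (20)
alpha_Fbar = alpha[:2]                           # contracted disjunction, eq. (21)

truth = {"a5": True}                             # only a supplemented attribute holds
print(disjunctive_attribute(alpha, truth))       # False
print(disjunctive_attribute(alpha_F, truth))     # True: extending the disjunction is monotone
print(disjunctive_attribute(alpha_Fbar, truth))  # False
```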
Using the concepts and results of Section 3 together with Ref. [25], Section 4 is given as follows.

4. Inverse P-Matrix Reasoning and Intelligent Acquisition of ( α F , α F ¯ ) -Information Segmentation

If $\bar{A}_k^{F}$, $\bar{A}_{k+1}^{F}$ and $(\bar{x})_k^{F}$, $(\bar{x})_{k+1}^{F}$ meet
$\text{if } \bar{A}_k^{F} \subseteq \bar{A}_{k+1}^{F}, \text{ then } (\bar{x})_k^{F} \subseteq (\bar{x})_{k+1}^{F}$ (23)
(23) is referred to as the internal inverse packet matrix reasoning generated by an internal inverse packet matrix; $\bar{A}_k^{F} \subseteq \bar{A}_{k+1}^{F}$ is referred to as the condition of the internal inverse packet matrix reasoning, and $(\bar{x})_k^{F} \subseteq (\bar{x})_{k+1}^{F}$ is referred to as the conclusion of the internal inverse packet matrix reasoning.
If $\bar{A}_{k+1}^{\bar F}$, $\bar{A}_k^{\bar F}$ and $(\bar{x})_{k+1}^{\bar F}$, $(\bar{x})_k^{\bar F}$ meet
$\text{if } \bar{A}_{k+1}^{\bar F} \subseteq \bar{A}_k^{\bar F}, \text{ then } (\bar{x})_{k+1}^{\bar F} \subseteq (\bar{x})_k^{\bar F}$ (24)
(24) is referred to as the outer inverse packet matrix reasoning generated by an outer inverse packet matrix; $\bar{A}_{k+1}^{\bar F} \subseteq \bar{A}_k^{\bar F}$ is referred to as the condition of the outer inverse packet matrix reasoning, and $(\bar{x})_{k+1}^{\bar F} \subseteq (\bar{x})_k^{\bar F}$ is referred to as the conclusion of the outer inverse packet matrix reasoning.
Here, in (23) and (24), “ ” is equivalent to “ ”.
If $(\bar{A}_k^{F}, \bar{A}_{k+1}^{\bar F})$, $(\bar{A}_{k+1}^{F}, \bar{A}_k^{\bar F})$ and $((\bar{x})_k^{F}, (\bar{x})_{k+1}^{\bar F})$, $((\bar{x})_{k+1}^{F}, (\bar{x})_k^{\bar F})$ meet
$\text{if } (\bar{A}_k^{F}, \bar{A}_{k+1}^{\bar F}) \subseteq (\bar{A}_{k+1}^{F}, \bar{A}_k^{\bar F}), \text{ then } ((\bar{x})_k^{F}, (\bar{x})_{k+1}^{\bar F}) \subseteq ((\bar{x})_{k+1}^{F}, (\bar{x})_k^{\bar F})$ (25)
(25) is referred to as the inverse packet matrix reasoning generated by the inverse packet matrix; $(\bar{A}_k^{F}, \bar{A}_{k+1}^{\bar F}) \subseteq (\bar{A}_{k+1}^{F}, \bar{A}_k^{\bar F})$ is referred to as the condition of the inverse packet matrix reasoning, and $((\bar{x})_k^{F}, (\bar{x})_{k+1}^{\bar F}) \subseteq ((\bar{x})_{k+1}^{F}, (\bar{x})_k^{\bar F})$ is referred to as the conclusion of the inverse packet matrix reasoning.
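The condition of reasoning (23)–(25) compares an inverse p-augmented matrix with the matrix it extends. Below is a small NumPy sketch of that check, assuming each information element contributes one labelled column of values; the column-labelling scheme is an assumption made for illustration.

```python
import numpy as np

def is_submatrix_by_columns(A_small, A_big, cols_small, cols_big):
    """Condition of reasoning (23)/(24): every labelled column of the smaller
    matrix appears unchanged in the bigger one."""
    if not set(cols_small) <= set(cols_big):
        return False
    idx = [cols_big.index(c) for c in cols_small]
    return np.array_equal(A_small, A_big[:, idx])

# (x)_k^F = {x1, x2, x3} and (x)_{k+1}^F = {x1, x2, x3, x4}
cols_k,  A_k  = ["x1", "x2", "x3"],       np.array([[1., 2., 3.], [4., 5., 6.]])
cols_k1, A_k1 = ["x1", "x2", "x3", "x4"], np.array([[1., 2., 3., 7.], [4., 5., 6., 8.]])

if is_submatrix_by_columns(A_k, A_k1, cols_k, cols_k1):  # condition A_k^F within A_{k+1}^F
    x_k, x_k1 = set(cols_k), set(cols_k1)
    assert x_k <= x_k1                                   # conclusion (x)_k^F within (x)_{k+1}^F
```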
From (23)–(25), we obtain the following.
Theorem 7.
( α F -information segmentation intelligent acquisition theorem)
If $\bar{A}_k^{F}$, $\bar{A}_{k+1}^{F}$ and $(\bar{x})_k^{F}$, $(\bar{x})_{k+1}^{F}$ meet (23), then we have the following:
1. $(\bar{x})_{k+1}^{F}$ is segmented and intelligently acquired outside $(\bar{x})_k^{F}$ and meets
$\mathrm{card}((\bar{x})_{k+1}^{F}) - \mathrm{card}((\bar{x})_k^{F}) > 0$ (26)
2. The attribute set $\alpha_{k+1}^{F}$ of $(\bar{x})_{k+1}^{F}$ and the attribute set $\alpha_k^{F}$ of $(\bar{x})_k^{F}$ meet
$\alpha_{k+1}^{F} \supseteq \alpha_k^{F}$ (27)
Proof. 
1. If $\bar{A}_k^{F}$, $\bar{A}_{k+1}^{F}$ and $(\bar{x})_k^{F}$, $(\bar{x})_{k+1}^{F}$ meet (23), then from (1)–(3) in Section 2 and (13), supplementing information elements $x_j$ into $(\bar{x})_k^{F}$ generates $(\bar{x})_{k+1}^{F}$, i.e., $(\bar{x})_k^{F} \subseteq (\bar{x})_{k+1}^{F}$; under the condition $\bar{A}_k^{F} \subseteq \bar{A}_{k+1}^{F}$, $(\bar{x})_{k+1}^{F}$ is segmented and intelligently acquired outside $(\bar{x})_k^{F}$, i.e., $\mathrm{card}((\bar{x})_{k+1}^{F}) - \mathrm{card}((\bar{x})_k^{F}) > 0$, and we get (26).
2. From $(\bar{x})_k^{F} \subseteq (\bar{x})_{k+1}^{F}$, the attribute set $\alpha_k^{F}$ of $(\bar{x})_k^{F}$ and the attribute set $\alpha_{k+1}^{F}$ of $(\bar{x})_{k+1}^{F}$ meet $\alpha_k^{F} \subseteq \alpha_{k+1}^{F}$, i.e., $\alpha_{k+1}^{F} \supseteq \alpha_k^{F}$, and we get (27). □
Theorem 8.
( α F ¯ -information segmentation intelligent acquisition theorem)
If $\bar{A}_{k+1}^{\bar F}$, $\bar{A}_k^{\bar F}$ and $(\bar{x})_{k+1}^{\bar F}$, $(\bar{x})_k^{\bar F}$ meet (24), then we have the following:
1. $(\bar{x})_{k+1}^{\bar F}$ is segmented and intelligently acquired inside $(\bar{x})_k^{\bar F}$ and meets
$\mathrm{card}((\bar{x})_{k+1}^{\bar F}) - \mathrm{card}((\bar{x})_k^{\bar F}) < 0$ (28)
2. The attribute set $\alpha_{k+1}^{\bar F}$ of $(\bar{x})_{k+1}^{\bar F}$ and the attribute set $\alpha_k^{\bar F}$ of $(\bar{x})_k^{\bar F}$ meet
$\alpha_{k+1}^{\bar F} \subseteq \alpha_k^{\bar F}$ (29)
The proof of Theorem 8 is similar to Theorem 7, and the proof is omitted.
Inference 3.
If the condition $(\bar{A}_k^{F}, \bar{A}_{k+1}^{\bar F}) \subseteq (\bar{A}_{k+1}^{F}, \bar{A}_k^{\bar F})$ of the inverse p-matrix reasoning is met, then the $\alpha^{F}$-information segmentation $(\bar{x})_{k+1}^{F}$ and the $\alpha^{\bar F}$-information segmentation $(\bar{x})_{k+1}^{\bar F}$ are intelligently acquired at the same time.
Theorem 9.
( α F -information segmentation relation theorem)
If $\{(\bar{x})_i^{F} \mid (\bar{x})_i^{F} \subseteq (\bar{x})_{i+1}^{F},\ \alpha_i^{F} \subseteq \alpha_{i+1}^{F},\ i = 1, 2, \ldots, n\}$ is the $\alpha^{F}$-information segmentation chain generated by $(x)$, then $(\bar{x})_k^{F}$ and its attribute set $\alpha_k^{F}$ meet
$\bigcup_{i=1}^{k-1} (\bar{x})_i^{F} \subseteq (\bar{x})_k^{F} \subseteq \bigcap_{i=k+1}^{n} (\bar{x})_i^{F}$ (30)
$\bigcup_{i=1}^{k-1} \alpha_i^{F} \subseteq \alpha_k^{F} \subseteq \bigcap_{i=k+1}^{n} \alpha_i^{F}$ (31)
Proof. 
The $(\bar{x})_i^{F}$ meet $(\bar{x})_1^{F} \subseteq (\bar{x})_2^{F} \subseteq \cdots \subseteq (\bar{x})_{k-1}^{F} \subseteq (\bar{x})_k^{F} \subseteq (\bar{x})_{k+1}^{F} \subseteq \cdots \subseteq (\bar{x})_n^{F}$, which gives (30) directly. The attribute sets of $(\bar{x})_1^{F}, (\bar{x})_2^{F}, \ldots, (\bar{x})_n^{F}$ meet $\alpha_1^{F} \subseteq \alpha_2^{F} \subseteq \cdots \subseteq \alpha_{k-1}^{F} \subseteq \alpha_k^{F} \subseteq \alpha_{k+1}^{F} \subseteq \cdots \subseteq \alpha_n^{F}$, which gives (31) directly. □
Theorem 10.
( α F ¯ -information segmentation relation theorem)
If $\{(\bar{x})_j^{\bar F} \mid (\bar{x})_{j+1}^{\bar F} \subseteq (\bar{x})_j^{\bar F},\ \alpha_{j+1}^{\bar F} \subseteq \alpha_j^{\bar F},\ j = 1, 2, \ldots, n\}$ is the $\alpha^{\bar F}$-information segmentation chain generated by $(x)$, then $(\bar{x})_k^{\bar F}$ and its attribute set $\alpha_k^{\bar F}$ meet
$\bigcap_{j=1}^{k-1} (\bar{x})_j^{\bar F} \supseteq (\bar{x})_k^{\bar F} \supseteq \bigcup_{j=k+1}^{n} (\bar{x})_j^{\bar F}$ (32)
$\bigcap_{j=1}^{k-1} \alpha_j^{\bar F} \supseteq \alpha_k^{\bar F} \supseteq \bigcup_{j=k+1}^{n} \alpha_j^{\bar F}$ (33)
Theorem 11.
( ( α F , α F ¯ ) -information segmentation relation theorem)
If $\{((\bar{x})_i^{F}, (\bar{x})_j^{\bar F}) \mid ((\bar{x})_i^{F}, (\bar{x})_{j+1}^{\bar F}) \subseteq ((\bar{x})_{i+1}^{F}, (\bar{x})_j^{\bar F}),\ (\alpha_i^{F}, \alpha_{j+1}^{\bar F}) \subseteq (\alpha_{i+1}^{F}, \alpha_j^{\bar F}),\ i, j = 1, 2, \ldots, n\}$ is the $(\alpha^{F}, \alpha^{\bar F})$-information segmentation chain generated by $(x)$, then $((\bar{x})_k^{F}, (\bar{x})_k^{\bar F})$ and its attribute set $(\alpha_k^{F}, \alpha_k^{\bar F})$ meet
$\Bigl(\bigcup_{i=1}^{k-1} (\bar{x})_i^{F},\ \bigcap_{j=1}^{k-1} (\bar{x})_j^{\bar F}\Bigr) \subseteq \bigl((\bar{x})_k^{F}, (\bar{x})_k^{\bar F}\bigr) \subseteq \Bigl(\bigcap_{i=k+1}^{n} (\bar{x})_i^{F},\ \bigcup_{j=k+1}^{n} (\bar{x})_j^{\bar F}\Bigr)$ (34)
$\Bigl(\bigcup_{i=1}^{k-1} \alpha_i^{F},\ \bigcap_{j=1}^{k-1} \alpha_j^{\bar F}\Bigr) \subseteq \bigl(\alpha_k^{F}, \alpha_k^{\bar F}\bigr) \subseteq \Bigl(\bigcap_{i=k+1}^{n} \alpha_i^{F},\ \bigcup_{j=k+1}^{n} \alpha_j^{\bar F}\Bigr)$ (35)
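Relations (30)–(35) are containment statements about a nested chain of segmentations and can be verified directly with set unions and intersections; the chain below is artificial and only serves as an illustration.

```python
from functools import reduce

# An artificial alpha^F-segmentation chain (x)_1^F <= (x)_2^F <= ... <= (x)_n^F
chain = [{"x1", "x2"},
         {"x1", "x2", "x3"},
         {"x1", "x2", "x3", "x4"},
         {"x1", "x2", "x3", "x4", "x5"}]

k = 2                                            # 1-based index of the middle link
left   = reduce(set.union,        chain[:k-1])   # union over i < k of (x)_i^F
right  = reduce(set.intersection, chain[k:])     # intersection over i > k of (x)_i^F
middle = chain[k-1]

assert left <= middle <= right                   # relation (30)
```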
Inference 4.
The attribute set $\Delta\alpha$ of $\Delta x$ and the attribute set $\alpha$ of $(x)$ meet $\Delta\alpha \cap \alpha = \emptyset$.
Inference 5.
The attribute set $\nabla\alpha$ of $\nabla x$ and the attribute set $\alpha$ of $(x)$ meet $\nabla\alpha \subseteq \alpha$.
Remark 2.
An $n \times n$ matrix $A_{n \times n}$ is given; supplementing $t$ columns to the $n$ columns of $A_{n \times n}$ turns it into $A_{n \times (n+t)}$, and $A_{n \times (n+t)}$ is an ordinary augmented matrix of $A_{n \times n}$. In the applied research of dynamic information systems, we often encounter the following:
(1) If $\lambda$ columns are deleted from the $n$ columns of $A_{n \times n}$, $\lambda < n$, then $A_{n \times n}$ becomes $A_{n \times (n-\lambda)}$;
(2) The matrix pair $(A_{n \times (n-\lambda)}, A_{n \times (n+t)})$ composed of $A_{n \times (n-\lambda)}$ and $A_{n \times (n+t)}$. In ordinary mathematics, we cannot find a definition or name for $A_{n \times (n-\lambda)}$ and $(A_{n \times (n-\lambda)}, A_{n \times (n+t)})$. Using the structure of p-sets: a finite ordinary set $X = \{x_1, x_2, \ldots, x_q\}$ is given with attribute set $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_k\}$; $X^{\bar F} = \{x_1, x_2, \ldots, x_p\}$ is called the internal p-set generated by $X$, $X^{F} = \{x_1, x_2, \ldots, x_r\}$ is called the outer p-set generated by $X$, and $(X^{\bar F}, X^{F})$ is called the p-set generated by $X$, where $p < q < r$, $p, q, r \in N^{+}$. If each $x_j \in X$ has $n$ element values, take $y_j = (y_{1,j}, y_{2,j}, \ldots, y_{n,j})^{T}$ as a column; then $X$ generates the matrix $A_{n \times q}$. Ref. [25] gives the following: $X^{\bar F}$ generates the matrix $A_{n \times p}^{\bar F}$ and $X^{F}$ generates the matrix $A_{n \times r}^{F}$; $A_{n \times p}^{\bar F}$ is called the internal p-augmented matrix of $A_{n \times q}$, $A_{n \times r}^{F}$ is called the outer p-augmented matrix of $A_{n \times q}$, and $(A_{n \times p}^{\bar F}, A_{n \times r}^{F})$ is called the p-augmented matrix of $A_{n \times q}$. $A_{n \times r}^{F}$ is the same concept as an ordinary augmented matrix, and $A_{n \times p}^{\bar F} \subseteq A_{n \times q}$, $A_{n \times q} \subseteq A_{n \times r}^{F}$. In Ref. [26], the augmented matrix was applied to information fusion research.
By using the research results of Ref. [25], Ref. [27] gives $\bar{A}_{n \times r}^{F}$, $\bar{A}_{n \times p}^{\bar F}$ and $(\bar{A}_{n \times r}^{F}, \bar{A}_{n \times p}^{\bar F})$, which are respectively called the internal inverse p-augmented matrix, the outer inverse p-augmented matrix and the inverse p-augmented matrix of $A_{n \times q}$. Here, $A_{n \times q} \subseteq \bar{A}_{n \times r}^{F}$ and $\bar{A}_{n \times p}^{\bar F} \subseteq A_{n \times q}$. Reasonings (23)–(25) in Section 4 can be obtained easily by using the properties of the inverse p-augmented matrix. The properties and applications of the inverse p-augmented matrix are widely discussed in Refs. [28,29,30].
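Remark 2 describes the (inverse) p-augmented matrices purely in terms of deleting or appending columns of element values. The NumPy sketch below builds $A_{n \times q}$ from value columns and forms a column-deleted and a column-appended matrix; the column labels and numerical values are illustrative.

```python
import numpy as np

# A_{n x q}: one column of element values y_j per information element x_j.
cols = ["x1", "x2", "x3", "x4"]
A = np.array([[1., 2., 3., 4.],
              [5., 6., 7., 8.]])                 # n = 2 value rows, q = 4 columns

# Outer inverse p-augmented matrix A^Fbar_{n x p}: delete columns of removed elements.
deleted = {"x3", "x4"}
keep = [i for i, c in enumerate(cols) if c not in deleted]
A_Fbar = A[:, keep]                              # shape (2, 2), p = 2 < q

# Internal inverse p-augmented matrix A^F_{n x r}: append columns of supplemented elements.
new_cols = np.array([[9.], [10.]])               # value column of a new element x5
A_F = np.hstack([A, new_cols])                   # shape (2, 5), r = 5 > q

print(A_Fbar.shape, A_F.shape)
```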
Based on the results in Section 4, Section 5 is given.

5. Equivalence of Information Segmentation and Information Fusion

If $\eta^{F}$ is the $\alpha^{F}$-fusion coefficient of $[\bar{x}]^{F}$ and $\eta^{F}$ meets
$\eta^{F} - 1 > 0$ (36)
then the $\alpha^{F}$-information segmentation $(\bar{x})^{F}$ is the $\alpha^{F}$-information fusion $[\bar{x}]^{F}$ generated by $[x]$.
If $\eta^{\bar F}$ is the $\alpha^{\bar F}$-fusion coefficient of $[\bar{x}]^{\bar F}$ and $\eta^{\bar F}$ meets
$\eta^{\bar F} - 1 < 0$ (37)
then the $\alpha^{\bar F}$-information segmentation $(\bar{x})^{\bar F}$ is the $\alpha^{\bar F}$-information fusion $[\bar{x}]^{\bar F}$ generated by $[x]$.
If $\eta^{F}$ and $\eta^{\bar F}$ form the discrete interval $[\eta^{F}, \eta^{\bar F}]_1$, then the $(\alpha^{F}, \alpha^{\bar F})$-information segmentation $((\bar{x})^{F}, (\bar{x})^{\bar F})$ is the $(\alpha^{F}, \alpha^{\bar F})$-information fusion $([\bar{x}]^{F}, [\bar{x}]^{\bar F})$ generated by $[x]$.
Here, $(\bar{x})^{F} = [\bar{x}]^{F}$, $(\bar{x})^{\bar F} = [\bar{x}]^{\bar F}$, $((\bar{x})^{F}, (\bar{x})^{\bar F}) = ([\bar{x}]^{F}, [\bar{x}]^{\bar F})$ and $(x) = [x]$; in (36), $\eta^{F} = \mathrm{card}([\bar{x}]^{F}) / \mathrm{card}([x])$; in (37), $\eta^{\bar F} = \mathrm{card}([\bar{x}]^{\bar F}) / \mathrm{card}([x])$; $[\eta^{F}, \eta^{\bar F}]_1 = [\eta^{F}, \eta^{\bar F}]$; $\mathrm{card}$ = cardinal number.
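The fusion coefficients (36) and (37) are cardinality ratios against the original information, so classifying a segmentation as $\alpha^{F}$- or $\alpha^{\bar F}$-information fusion is a one-line test. A minimal sketch, assuming $\mathrm{card}([x]) > 0$:

```python
def fusion_coefficient(fused, original):
    """eta = card(fused) / card(original), as in (36)-(37)."""
    return len(fused) / len(original)

x      = {"x1", "x2", "x3", "x4"}
x_F    = x | {"x5", "x6"}          # alpha^F-information fusion [x]^F
x_Fbar = x - {"x4"}                # alpha^Fbar-information fusion [x]^Fbar

eta_F, eta_Fbar = fusion_coefficient(x_F, x), fusion_coefficient(x_Fbar, x)
assert eta_F - 1 > 0               # (36): eta^F lies outside the unit interval
assert eta_Fbar - 1 < 0            # (37): eta^Fbar lies inside the unit interval
```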
Theorem 12.
(Interval outer point theorem of the $\eta^{F}$-fusion coefficient)
The $\alpha^{F}$-fusion coefficient $\eta^{F}$ of the $\alpha^{F}$-information fusion $[\bar{x}]^{F}$ is an outer point of the unit discrete interval $[0, 1]$ and meets
$\eta^{F} \notin [0, 1]$ (38)
Theorem 13.
(Interval interior point theorem of the $\eta^{\bar F}$-fusion coefficient)
The $\alpha^{\bar F}$-fusion coefficient $\eta^{\bar F}$ of the $\alpha^{\bar F}$-information fusion $[\bar{x}]^{\bar F}$ is an interior point of the unit discrete interval $[0, 1]$ and meets
$\eta^{\bar F} \in [0, 1]$ (39)
Theorem 14.
(Interval relation theorem of the $(\alpha^{F}, \alpha^{\bar F})$-fusion coefficient) The interval $[\eta^{F}, \eta^{\bar F}]_1$ formed by the fusion coefficients of the $(\alpha^{F}, \alpha^{\bar F})$-information fusion $([\bar{x}]^{F}, [\bar{x}]^{\bar F})$ and the unit discrete interval $[0, 1]$ meet
$[\eta^{F}, \eta^{\bar F}]_1 \not\subset [0, 1]$ (40)
here, in (38)–(40), $[0, 1]$ is the unit discrete interval of the values $0$ and $1 = \eta = \mathrm{card}([x]) / \mathrm{card}([x])$, and $\eta = 1$ is the fusion coefficient of $[x]$ itself.
Theorem 15.
(Equivalence theorem of $\alpha^{F}$-information segmentation and $\alpha^{F}$-information fusion) The $\alpha^{F}$-information segmentation $(\bar{x})^{F}$ and the $\alpha^{F}$-information fusion $[\bar{x}]^{F}$ are equivalent classes with respect to the attribute set $\alpha^{F}$:
$[(\bar{x})^{F}]_{\alpha^{F}} = [[\bar{x}]^{F}]_{\alpha^{F}}$ (41)
Theorem 16.
(Equivalence theorem of $\alpha^{\bar F}$-information segmentation and $\alpha^{\bar F}$-information fusion) The $\alpha^{\bar F}$-information segmentation $(\bar{x})^{\bar F}$ and the $\alpha^{\bar F}$-information fusion $[\bar{x}]^{\bar F}$ are equivalent classes with respect to the attribute set $\alpha^{\bar F}$:
$[(\bar{x})^{\bar F}]_{\alpha^{\bar F}} = [[\bar{x}]^{\bar F}]_{\alpha^{\bar F}}$ (42)
Theorem 17.
(Equivalence theorem of $(\alpha^{F}, \alpha^{\bar F})$-information segmentation and $(\alpha^{F}, \alpha^{\bar F})$-information fusion) The $(\alpha^{F}, \alpha^{\bar F})$-information segmentation $((\bar{x})^{F}, (\bar{x})^{\bar F})$ and the $(\alpha^{F}, \alpha^{\bar F})$-information fusion $([\bar{x}]^{F}, [\bar{x}]^{\bar F})$ are equivalent classes with respect to the attribute set $(\alpha^{F}, \alpha^{\bar F})$:
$[((\bar{x})^{F}, (\bar{x})^{\bar F})]_{(\alpha^{F}, \alpha^{\bar F})} = [([\bar{x}]^{F}, [\bar{x}]^{\bar F})]_{(\alpha^{F}, \alpha^{\bar F})}$ (43)
Remark 3.
The complete concept of "information fusion" is composed of two sub-concepts: external information fusion (or $\alpha^{F}$-information fusion) and internal information fusion (or $\alpha^{\bar F}$-information fusion). Given information $(x) = \{x_1, x_2, \ldots, x_q\}$: under certain conditions, an information element $x_i$ outside $(x)$ is migrated into $(x)$, $(x)$ generates $(\bar{x})^{F}$, and $(\bar{x})^{F}$ is the external information fusion $[\bar{x}]^{F}$ generated by $[x]$, $[x] \subseteq [\bar{x}]^{F}$. Under certain conditions, an information element $x_j$ in $(x)$ is migrated from inside $(x)$ to outside $(x)$, $(x)$ generates $(\bar{x})^{\bar F}$, and $(\bar{x})^{\bar F}$ is the internal information fusion $[\bar{x}]^{\bar F}$ generated by $[x]$, $[\bar{x}]^{\bar F} \subseteq [x]$. In the applied research of information fusion, these two basic forms, external fusion and internal fusion, are often encountered. The inverse p-sets given in Section 2 are a dynamic mathematical model for studying information fusion. The concepts of $\alpha^{F}$-information segmentation $(\bar{x})^{F}$ and $\alpha^{\bar F}$-information segmentation $(\bar{x})^{\bar F}$ given in Section 3 are obtained through a new understanding of the concept of information fusion. Obviously, $\alpha^{F}$-information segmentation and external information fusion are equivalent concepts, and $\alpha^{\bar F}$-information segmentation and internal information fusion are equivalent concepts; $\alpha^{F}$-information segmentation and $\alpha^{\bar F}$-information segmentation are new concepts for studying the two kinds of information fusion.
Using the concepts and models in Section 2, and the theoretical results in Section 3, Section 4 and Section 5, the applications of these theoretical results are given in Section 6.

6. ( α F , α F ¯ ) -Information Fusion Intelligent Acquisition-Intelligent Retrieval Algorithm and Its Application

6.1. ( α F , α F ¯ ) -Information Fusion Intelligent Acquisition Intelligent Retrieval Algorithm

In this section, only the $\alpha^{\bar F}$-information fusion intelligent acquisition intelligent retrieval algorithm is given, which is one part of the $(\alpha^{F}, \alpha^{\bar F})$-information fusion intelligent acquisition intelligent retrieval algorithm; the complete $(\alpha^{F}, \alpha^{\bar F})$-algorithm is composed of the $\alpha^{F}$-information fusion intelligent acquisition intelligent retrieval algorithm and the $\alpha^{\bar F}$-information fusion intelligent acquisition intelligent retrieval algorithm. The $\alpha^{\bar F}$-information fusion intelligent acquisition intelligent retrieval algorithm is shown in Figure 1.
The detailed process of the algorithm is as follows:
(1) Algorithm preparation: the information $(x)$ and its attribute set $\alpha$ are given;
(2) Using the information $(x)$, the attribute set $\alpha$ generates the $\alpha^{\bar F}$-information segmentations $(\bar{x})_k^{\bar F}$, $k = 1, 2, \ldots, n$;
(3) The outer inverse p-matrix reasoning is established: if $\bar{A}_{k+1}^{\bar F} \subseteq \bar{A}_k^{\bar F}$, then $(\bar{x})_{k+1}^{\bar F} \subseteq (\bar{x})_k^{\bar F}$, and the outer inverse p-matrix inference database is generated;
(4) The $\alpha^{\bar F}$-information fusions $[\bar{x}]_k^{\bar F}$ and the $\alpha^{\bar F}$-information fusion database are generated by (2) and (3), $k = 1, 2, \ldots, n$;
(5) The $\alpha^{\bar F}$-information fusion intelligent retrieval rules are given;
(6) Given the standard $\alpha^{\bar F}$-information fusion $[\bar{x}]_k^{\bar F, *}$: if $[\bar{x}]_k^{\bar F} = [\bar{x}]_k^{\bar F, *}$, the algorithm ends; if $[\bar{x}]_k^{\bar F} \neq [\bar{x}]_k^{\bar F, *}$, return to (3) and (4) and repeat (2)–(6) until $[\bar{x}]_k^{\bar F} = [\bar{x}]_k^{\bar F, *}$ is satisfied, and then the algorithm ends.
A code sketch of this loop is given after this list.
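Steps (1)–(6) amount to repeatedly deleting the elements whose attributes have been removed, recording each resulting $\alpha^{\bar F}$-information fusion, and stopping when a prescribed standard fusion is reached. The sketch below is one possible way to phrase that loop in Python; the per-step attribute deletions, the element-to-attribute map and the stopping target are illustrative inputs, not part of the algorithm's specification.

```python
def alpha_Fbar_retrieval(x, attr_of, alpha, deletions, target):
    """Iteratively apply attribute deletions, generating the alpha^Fbar-information
    fusions [x]_k^Fbar (steps (2)-(4)); stop when the standard fusion `target`
    is reached (steps (5)-(6)). Returns the list of generated fusions."""
    fusions = []
    current_x, current_alpha = set(x), set(alpha)
    for deleted in deletions:
        current_alpha -= set(deleted)
        # keep only elements that still carry at least one remaining attribute
        current_x = {e for e in current_x if attr_of[e] & current_alpha}
        fusions.append(frozenset(current_x))
        if current_x == target:                  # retrieval rule satisfied
            break
    return fusions

# Example inputs in the spirit of Section 6.2 (x_i carries the single symptom a_i):
attr_of = {f"x{i}": {f"a{i}"} for i in range(1, 7)}
steps = alpha_Fbar_retrieval(
    x={f"x{i}" for i in range(1, 7)},
    attr_of=attr_of,
    alpha={f"a{i}" for i in range(1, 7)},
    deletions=[{"a3", "a5"}, {"a1", "a6"}],
    target={"x2", "x4"},
)
print(steps)   # two fusions: {x1, x2, x4, x6}, then {x2, x4}
```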

6.2. Application of ( α F , α F ¯ ) -Information Fusion Intelligent Retrieval

In this section, only a simple application of $\alpha^{\bar F}$-information fusion intelligent retrieval is given; the applications of $\alpha^{F}$-information fusion intelligent retrieval and $(\alpha^{F}, \alpha^{\bar F})$-information fusion intelligent retrieval are omitted. The application example is taken from the "heart disease" block of "health big data". The concepts and models in Section 2, the theoretical results in Section 3, Section 4 and Section 5, and the algorithm in Section 6.1 are applied in this section.
Given the information $(x)$ and its attribute set $\alpha$, $y_j$ is the diagnostic value vector of $x_j \in (x)$ (examination values of the disease, such as blood pressure, heart rate, etc.):
$(x) = \{x_1, x_2, x_3, x_4, x_5, x_6\}$ (44)
$\alpha = \{\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6\}$ (45)
$y_j = (y_{1,j}, y_{2,j}, y_{3,j}, y_{4,j}, y_{5,j}, y_{6,j})^{T}$ (46)
$j = 1, 2, 3, 4, 5, 6$.
$(x)$ is composed of patients with "heart disease", and $\alpha_i \in \alpha$ is the "symptom" (attribute) of $x_i \in (x)$; $x_i, x_j \in (x)$ meet $i \neq j$, $\alpha_i \neq \alpha_j$. In order to protect the privacy of the patients, the patients and symptoms are represented by the information elements $x_i$ and the attributes $\alpha_i$, respectively, $i = 1, 2, \ldots, 6$. The attribute $\alpha_i$ of $x_i \in (x)$ satisfies the attribute disjunctive normal form (22): $\alpha_i = \alpha_1 \vee \alpha_2 \vee \alpha_3 \vee \alpha_4 \vee \alpha_5 \vee \alpha_6 = \bigvee_{\lambda=1}^{6} \alpha_\lambda$.
I. During the treatment period $t_1 \sim t_4 \in T$, the symptoms $\alpha_3, \alpha_5$ of $x_3, x_5$ disappear and $\alpha$ generates $\alpha_1^{\bar F}$, where
$\alpha_1^{\bar F} = \alpha - \{\alpha_3, \alpha_5\} = \{\alpha_1, \alpha_2, \alpha_4, \alpha_6\}$ (47)
$x_3, x_5 \in (x)$ return to the standard of healthy people and disappear from $(x)$, so $(x)$ generates the $\alpha_1^{\bar F}$-information segmentation $(\bar{x})_1^{\bar F}$, where
$(\bar{x})_1^{\bar F} = (x) - \{x_3, x_5\} = \{x_1, x_2, x_4, x_6\}$ (48)
and $y_j = (y_{1,j}, y_{2,j}, y_{3,j}, y_{4,j}, y_{5,j}, y_{6,j})^{T}$ in (46) generates $y_1^{\bar F}$, where
$y_1^{\bar F} = (y_{1,j}, y_{2,j}, y_{4,j}, y_{6,j})^{T}$ (49)
$y_j$ and $y_1^{\bar F}$ constitute the matrices $A_{6 \times 6}$ and $\bar{A}_{6 \times 4}^{\bar F}$, respectively; by the algorithm in Section 6.1, the $\alpha_1^{\bar F}$-information segmentation $(\bar{x})_1^{\bar F}$ is intelligently retrieved and obtained in $(x)$.
II. During the treatment period $t_1 \sim t_8 \in T$, the symptoms $\alpha_1, \alpha_6$ of $x_1, x_6$ disappear and $\alpha_1^{\bar F}$ generates $\alpha_2^{\bar F}$, where
$\alpha_2^{\bar F} = \alpha_1^{\bar F} - \{\alpha_1, \alpha_6\} = \{\alpha_2, \alpha_4\}$ (50)
$(x)$ generates the $\alpha_2^{\bar F}$-information segmentation $(\bar{x})_2^{\bar F}$, where
$(\bar{x})_2^{\bar F} = (\bar{x})_1^{\bar F} - \{x_1, x_6\} = \{x_2, x_4\}$ (51)
The $\alpha_2^{\bar F}$-information segmentation $(\bar{x})_2^{\bar F}$ is intelligently retrieved and acquired in $(\bar{x})_1^{\bar F}$. $x_2, x_4 \in (x)$ entered the ICU ward for treatment.
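The diagnostic-value matrices mentioned after (49) can be formed in the same way: step I deletes the value columns of the recovered patients $x_3, x_5$, and step II those of $x_1, x_6$. The sketch below uses synthetic placeholder values, since the real examination data are not published.

```python
import numpy as np

# Synthetic placeholder diagnostic values: column j holds y_j for patient x_j,
# giving the 6x6 matrix A_{6x6} built from (46).
patients = ["x1", "x2", "x3", "x4", "x5", "x6"]
rng = np.random.default_rng(0)
A = rng.random((6, 6))

def delete_patients(A, labels, gone):
    """Drop the value columns of recovered patients, giving an outer inverse
    p-augmented matrix as in (49)."""
    keep = [i for i, p in enumerate(labels) if p not in gone]
    return A[:, keep], [labels[i] for i in keep]

# Step I: x3, x5 recover -> A^Fbar_{6x4} and (x)_1^Fbar, eqs. (48)-(49)
A1, labels1 = delete_patients(A, patients, {"x3", "x5"})
# Step II: x1, x6 recover -> A^Fbar_{6x2} and (x)_2^Fbar, eqs. (50)-(51)
A2, labels2 = delete_patients(A1, labels1, {"x1", "x6"})

print(labels1)   # ['x1', 'x2', 'x4', 'x6']
print(labels2)   # ['x2', 'x4'] -> the patients transferred to the ICU
```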

6.3. Result Authentication in Application Example

The search results (44)–(51) given in the example are accepted and confirmed by medical experts.

7. Discussion

The inverse p-set model, with its dynamic features and with element attributes satisfying the attribute disjunctive characteristic, matches the features of a class of information fusion problems: this kind of information fusion has dynamic characteristics, and the attributes $\alpha_i$ of the information elements $x_i$ have the disjunctive characteristic. Such information fusion is commonly encountered in applied research, and the inverse p-set model provides the mathematical model and method for studying it and its applications. Information fusion is a dynamic concept with two forms: internal information fusion and external information fusion. In this paper, the new concept of information segmentation is proposed by using the inverse p-set mathematical model to re-examine the concept of information fusion. Many new results can be obtained by using the concept of information segmentation to study information fusion and its applications; the results given in Sections 3–6 are only a part of them.
In this paper, the information fusion intelligent retrieval algorithm and a simple application are given on the basis of matrix reasoning. If the results in this paper are further developed, new forms of $\alpha^{F}$-information fusion and $\alpha^{\bar F}$-information fusion are obtained, namely $\alpha^{F}$-information chain fusion and $\alpha^{\bar F}$-information chain fusion, respectively. These new studies are in progress and start from the inverse p-set family (11) in Section 2.

Author Contributions

Conceptualization, X.Z.; methodology, X.Z.; writing—original draft preparation, X.Z.; writing—review and editing, L.S.; software, L.S.; investigation and supervision, K.S.; validation and funding acquisition, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 12171193, and by the Fund of Young Backbone Teachers in Henan Province under Grant 2021GGJS158.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mnatsakanyan, Z.R.; Burkom, H.S.; Hashemian, M.R.; Coletta, M.A. Distributed information fusion models for regional public health surveillance. Inf. Fusion 2012, 13, 129–136.
  2. Zhang, W.; Zuo, J.; Guo, Q.; Ling, Z. Multisensor information fusion scheme for particle filter. Electron. Lett. 2015, 51, 486–488.
  3. Zhao, M.; Li, Y.; Hao, G.; Li, J.; Jin, H. Information fusion predictive control algorithm for time-varying systems with unknown stochastic system bias. Int. J. Hybrid Inf. Technol. 2014, 7, 173–184.
  4. Liu, J.; Chen, Y.; Cheng, C.; Huang, S.F. OWA based PCA information fusion method for classification problem. Int. J. Inf. Manag. Sci. 2010, 21, 209–225.
  5. Nicholson, D. Defense applications of agent-based information fusion. Comput. J. 2011, 54, 263–273.
  6. Biernacki, P. Songs recognition using audio information fusion. Int. J. Electron. Telecommun. 2015, 61, 37–41.
  7. Shi, K. Function inverse P-sets and the hiding information generated by function inverse P-information law fusion. In Proceedings of the Conference on e-Business, e-Services and e-Society (I3E 2014): Digital Services and Information Intelligence, Sanya, China, 28–30 November 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 224–237.
  8. Shi, K. Inverse P-sets. J. Shandong Univ. Nat. Sci. 2012, 47, 98–109. (In Chinese)
  9. Fan, C.; Huang, S. Inverse P-reasoning discovery-identification of inverse P-information. Int. J. Digit. Content Technol. Its Appl. 2012, 6, 735–744.
  10. Lin, K.; Fan, C. Embedding camouflage of inverse P-information and application. Int. J. Converg. Inf. Technol. 2012, 7, 471–480.
  11. Yu, X.; Xu, F. Random inverse packet information and its acquisition. Appl. Math. Nonlinear Sci. 2020, 5, 357–366.
  12. Huang, W.; Wei, B. Iterative intelligent camouflage and reduction of internal inverse P-information. J. Huaihai Inst. Technol. Nat. Sci. 2016, 25, 26–29. (In Chinese)
  13. Shi, K. P-sets and its applications. Adv. Syst. Sci. Appl. 2009, 9, 209–219.
  14. Hao, X.; Jiang, X. Structure characteristics on probability rough information matrix. Fuzzy Syst. Math. 2017, 31, 153–158. (In Chinese)
  15. Li, X. An algebraic model of P-sets. J. Shangqiu Norm. Univ. 2020, 36, 1–5. (In Chinese)
  16. Hao, X.; Li, N. Quantitative characteristics and applications of P-information hidden mining. J. Shandong Univ. Nat. Sci. 2019, 54, 9–14. (In Chinese)
  17. Fan, C.; Lin, K. P-sets and the reasoning-identification of disaster information. Int. J. Converg. Inf. Technol. 2012, 7, 337–345.
  18. Lin, H.; Fan, C. The dual form of P-reasoning and identification of unknown attribute. Int. J. Digit. Content Technol. Its Appl. 2012, 6, 121–131.
  19. Liu, J.; Zhang, H. Information P-dependence and P-dependence mining-sieving. Comput. Sci. 2018, 45, 202–206. (In Chinese)
  20. Shi, K. Function inverse P-sets and information law fusion. J. Shandong Univ. Nat. Sci. 2012, 47, 73–80. (In Chinese)
  21. Zhang, Y. Random function inverse P-sets and its characteristics depending on attributes. J. Shandong Univ. Nat. Sci. 2014, 49, 90–94. (In Chinese)
  22. Tang, J.; Chen, B.; Zhang, L.; Bai, X. Function inverse P-sets and the dynamic separation of inverse P-information laws. J. Shandong Univ. Nat. Sci. 2013, 48, 104–110. (In Chinese)
  23. Shi, K. Function P-Sets. Int. J. Mach. Learn. Cybern. 2011, 2, 281–288.
  24. Yu, X.; Xu, F. Function P(σ,τ)-Set and Its Characteristics. J. Jilin Univ. Nat. Sci. 2018, 56, 53–59. (In Chinese)
  25. Shi, K. P-augmented matrix and dynamic intelligent discovery-identification of information. J. Shandong Univ. Nat. Sci. 2015, 50, 1–12. (In Chinese)
  26. Xu, F.; Yu, X.; Zhang, L. Intelligent fusion of information and reasoning generation of its P-augmented matrix. J. Shandong Univ. Nat. Sci. 2019, 54, 22–28. (In Chinese)
  27. Guo, H.; Ren, X.; Zhang, L. Relationships between dynamic data mining and P-augmented matrix. J. Shandong Univ. Nat. Sci. 2016, 51, 105–110. (In Chinese)
  28. Zhang, L.; Ren, X.; Shi, K. Inverse p-matrix reasoning model-based the intelligent dynamic separation and acquisition of educational information. Microsyst. Technol. 2018, 24, 4415–4421.
  29. Hao, X.; Guo, H.; Jiang, X. Multi-attributes risk investment decision making based on dynamic probability rough sets. In Proceedings of the 2017 International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, Guilin, China, 29–31 July 2017; pp. 219–242.
  30. Zhang, L.; Ren, X. The relationship between abnormal information system and inverse P-augmented matrices. J. Shandong Univ. Nat. Sci. 2019, 54, 15–21. (In Chinese)
Figure 1. $\alpha^{\bar F}$-information fusion intelligent acquisition intelligent retrieval algorithm.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
