Article

SFCA: A Scalable Formal Concepts Driven Architecture for Multi-Field Knowledge Graph Completion

1
School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
2
Department of Computer Science & Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(11), 6851; https://doi.org/10.3390/app13116851
Submission received: 20 May 2023 / Revised: 2 June 2023 / Accepted: 4 June 2023 / Published: 5 June 2023
(This article belongs to the Topic Data Science and Knowledge Discovery)

Abstract

With the proliferation of Knowledge Graphs (KGs), knowledge graph completion (KGC) has attracted much attention. Previous KGC methods focus on extracting shallow structural information from KGs, or combine it with external knowledge, especially commonsense concepts (generally, commonsense concepts are the basic concepts in related fields required for various tasks and academic research; for example, in the general domain, “Country” can be considered a commonsense concept owned by “China”), to predict missing links. However, the technology for extracting commonsense concepts from limited databases is immature, and the scarce commonsense databases are bound to specific verticals (commonsense concepts vary greatly across verticals; a vertical is a small field subdivided vertically under a large field). Furthermore, most existing KGC models refine their performance on public KGs, making them inapplicable to actual KGs. To address these limitations, we propose a novel Scalable Formal Concept-driven Architecture (SFCA) that automatically encodes factual triples into formal concepts as a superior structural feature, supplying rich information to KGE. Specifically, we first generate dense formal concepts, then yield a handful of entity-related formal concepts by sampling, and delimit the appropriate candidate entity range via the filtered formal concepts to improve KGC inference. Compared with commonsense concepts, KGC benefits from more valuable information in formal concepts, and our self-supervised extraction method can be applied to any KG. Comprehensive experiments on five public datasets demonstrate the effectiveness and scalability of SFCA. In addition, the proposed architecture achieves SOTA performance on an industrial dataset. This method provides a new idea for promoting and applying knowledge graphs in AI downstream tasks in both general and industrial fields.

1. Introduction

Knowledge Graphs (KGs), as structured representations of interlinked descriptions of concepts, entities, relations and events, provide effective support for question answering [1], recommendation systems [2,3], information retrieval [4,5], and natural language processing [6]. Analysis of public KGs (Freebase [7], YAGO [8], and DBPedia [9]) shows that incompleteness is an inevitable problem, limited by existing KG construction technology, and requires KGC to infer new facts. Among available Knowledge Graph Completion (KGC) research, knowledge graph embedding (KGE) models show efficiency and significant performance; they embed KG components (entities and relations) into a latent space to learn topological structure information.
Existing KGE-based KGC methods can be divided into two streams: (1) Structure information-driven methods [10,11,12,13,14,15,16]. This stream focuses on learning the embedding of entities and relations from KG structural information to score the plausibility of triples for link prediction. (2) External information fusion method [17,18,19,20,21,22]. This stream augments the training or inference process by fusing additional information with structural information to improve the link prediction performance.

1.1. Limitations

Judging by recent results, methods combining external information perform better than purely structure information-driven methods. Among the various kinds of external information, commonsense concepts are recognized by many researchers as particularly appropriate and effective for enhancing KGE models. However, several challenges remain in obtaining commonsense concepts from data or a knowledge base.
(1)
For DBpedia-related KGs, only a few famous KGs have their own commonsense knowledge base, and the number of commonsense concepts they include is not large. In general, commonsense concepts from DBpedia KGs are hard to share with KGs in other verticals because they are mainly appropriate for the corresponding KGs (the more specialized the field, the greater the difference between commonsense concepts and the more difficult they are to reuse).
(2)
Specific KGs, such as industrial KGs, often do not even have a commonsense knowledge base. Meanwhile, the commonsense concepts of specific KGs are hard to collect, since they are commonly defined by the corresponding researchers or experts. For the same reason, previous models and algorithms [23,24] perform poorly on automatic commonsense extraction (the more specialized the domain, the more it relies on human-defined commonsense concepts).

1.2. Motivation

After analyzing the dilemmas involved in fetching external information, we face three questions. What kind of concept can be defined as a commonsense concept? Which part of the commonsense concept provides valuable information to the KGC model? Is it possible to distill a concept from the data itself to replace commonsense concepts? In essence, a commonsense concept is a human abstraction summarized from many instances. Thus, we try to explore new metaphysical ‘concepts’ from the latent space of KGs. Inspired by data mining theories, we argue that formal concepts with lattice structures are similar to ontologies with tree structures and can guide instance knowledge in KGC tasks. Thus, we use formal concepts to represent a concept comprising entity and relation subsets of a KG. Figure 1a shows a schematic diagram of the ontology and concept lattice structures.
In light of the definition of formal concept analysis in data mining, a formal concept is an idea or category defined by a concrete or specific set of rules, guidelines, or properties. Extended to KGs, a formal concept can be treated as an ensemble of two sets: its extent, denoting an entity set, and its intent, representing the latent relations of those entities. For the first set, identifying whether an instance belongs to a formal concept relies on the formal concept’s instance set: we compare the source instance against the instances in the formal concept’s instance collection. As shown in Figure 1b, in the lower part of the figure, diamonds represent formal concepts, and the squares and rectangles to the right of each diamond represent, respectively, the objects and properties that make up the formal concept. The source of the formal concepts is shown in the upper part of the figure. Specifically, huskies can be recognized as an instance of the formal concept “Dog” and of the formal concept “Sled Dog”, since they are contained in the instance collections of both (the three names “Dog”, “Herding Dog” and “Sled Dog” are matched by ourselves according to the instance set and attribute set of each formal concept; formal concepts themselves have no names, but the individual elements of their instance and property sets do). For the second set, formal concepts involve potential information about instance concepts, including possible properties (according to the different attributes of an instance, we can find the different formal concepts corresponding to the instance that are more focused on a certain attribute in meaning). For example, the instance Border Collie hides the attribute “Hunting” and the attribute “Herding”, which are contained in the formal concept “Dog” and the formal concept “Herding Dog”, respectively.
This sparked our interest in exploring the role of formal concepts in KGC research. Formal concepts can be found from the binary relations between known instances and attributes, which correspond exactly to entities and relations in a KG. For example, in a triple SPO, P can be regarded as an attribute of S, and S as an instance with the property P, from which we can naturally mine formal concepts in the KG. In this work, we apply formal concepts, as metaphysical structural information, to enhance the KGE model for the KGC task. Compared with commonsense concepts, formal concepts have the following advantages:
(1)
Formal concepts can be generated efficiently and automatically, while commonsense concepts require expensive manual annotation. A formal concept is derived from the KG itself and belongs to the KG’s own information; by contrast, commonsense concepts must be manually annotated from information outside the KG.
(2)
Formal concepts are not tied to particular KGs, while commonsense concepts are limited to their corresponding KGs. In detail, formal concepts can be applied to any KG, including commonsense KGs, while a commonsense concept is only applicable to a KG that has corresponding commonsense in theory and artificially annotated commonsense concepts.

1.3. Architecture for KGC

Based on this, we propose a Scalable Formal Concepts driven Architecture (SFCA) that extracts formal concepts from KGs to improve the performance of KGE. SFCA consists of three modules:
(1)
Formal concept extraction module (FCE) extracts the formal concepts in triples and links the formal concepts with the entities.
(2)
Formal concept sampling module (FCS) sifts streamlined formal concepts from the redundant ones.
(3)
Formal concept-driven link prediction module (FCLP) leverages sampled formal concepts to improve KGC performance.
This framework applies to all knowledge graphs and fully benefits from the self-supervision of the data itself. The contributions of the proposed method are summarized as follows:
(1)
We propose a scalable KGC architecture based on formal concept analysis to generate formal concepts from KGs. In comparison, existing methods either perform poorly or require manually extracted commonsense concepts. To our knowledge, we are the first to apply formal concepts to KGC.
(2)
We design a coarse-to-fine formal concept extraction strategy that chooses streamlined formal concepts of entities in the KG, which can be used to improve the computational efficiency of the model.
(3)
Comprehensive experiments on four public datasets demonstrate the effectiveness and robustness of the proposed architecture with each module. Furthermore, the results show that SFCA helps generate KG’s formal concepts, which greatly supplements the existing methods.
(4)
Extensive experiments on real industrial data prove the practicality of the proposed model. The results show that SFCA is effective and robust on KGC tasks in various fields.

2. Related Works

2.1. KGE Models

According to KGE models’ input data, the KGE models currently in use can be broadly split into two main streams:
(1)
The KG structural information-based methods include translation-based and semantic matching models. Translation-based models [10,12,15] utilize entity and relation embeddings to compute translation scores, where relations represent translation operations between entities. Translating embeddings for modeling multi-relational data (TransE) [10] is the pioneer of translation-based models; it embeds entities and relations into a space of the same dimension and regards relations as translation operations between entity vectors. TransE is simple and efficient, but it cannot model various relation patterns. Knowledge graph embedding by relational rotation in complex space (RotatE) [15] embeds entities and relations in a complex vector space, treats relations as rotation operations between entity vectors, effectively models and infers various relational patterns, and refreshed the best results on the KGC task. Learning hierarchy-aware knowledge graph embeddings for link prediction (HAKE) [12] effectively embeds the semantic hierarchy by mapping entities to a polar coordinate system, achieving SOTA results on the KGC task. Semantic matching models [11,13,14,16] compute semantic matching scores for entity and relation embeddings in the latent space. A three-way model for collective learning on multi-relational data (RESCAL) [16] treats entities as vectors and relations as matrices and calculates scores with bilinear functions. Embedding entities and relations for learning and inference in knowledge bases (DistMult) [13] simplifies RESCAL by restricting the relation matrix to be diagonal. Complex embeddings for simple link prediction (ComplEx) [14] extends DistMult by embedding entities and relations into complex space.
Quaternion knowledge graph embeddings (QuatE) [11] embed a hypercomplex value with three imaginary components to represent entities and model the relation as a rotation on a 4-dimensional space (hypercomplex space), thus unifying ComplEx [14] and RotatE [15].
(2)
External information-based methods focus on adding extra information to enrich KGE models. Most models that add external information [18,20,21] utilize logic rules mined from the knowledge graph to improve link prediction results. Fast rule mining in ontological knowledge bases with AMIE+ (AMIE+) [20], End-to-end differentiable rule mining on knowledge graphs (DRUM) [21], and Knowledge graph embedding with iterative guidance from soft rules (RUGE) [18] automatically mine logic rules in KGs and apply them to KGC tasks. A considerable part of the models that add internal information [17,19,22] use known concept information corresponding to entities to improve link prediction outcomes. Representation learning of knowledge graphs with hierarchical types (TKRL) [17] uses entity type information to design a scoring function with a hierarchical projection matrix for each entity, which improves KGC performance. Type-based multiple embedding representations for knowledge graph completion (TransT) [22] adopts entity types to construct relation types, takes the similarity between related entities and relations as prior knowledge, and utilizes this prior knowledge to improve KGC results. A scalable commonsense-aware framework for multi-view knowledge graph completion (CAKE) [19] leverages commonsense concepts of entities to improve the quality of negative sampling and the accuracy of link prediction candidate entities. Ontology-guided entity alignment via joint knowledge graph embedding (OntoEA) [25] embeds the ontology and the knowledge graph together to obtain better entity embeddings.

2.2. KG Embedding with Ontology

At present, many scholars are studying KG embedding with ontology. Semantically smooth embedding for knowledge graphs (SSE) [26] models the intrinsic geometry of KGs based on the assumption that entities belonging to the same semantic category are close to each other in the embedding space (semantic smoothness). SSE [26] uses two manifold learning algorithms, Laplacian Eigenmaps and Locally Linear Embedding, as regularization terms to model the smoothness hypothesis. Differentiating concepts and instances for knowledge graph embedding (TransC) [27] models each concept embedding as a sphere and assumes that the embedding vectors of instances belonging to that concept should lie within the sphere. TKRL [17] focuses on the type hierarchy of KGs, holds that entities have different representations under different categories, and uses the hierarchical type as a projection matrix designed with type encoders. The knowledge-driven representation learning method with ontology information constraints (TransO) [28] considers constraints from three types of ontology information (type information, relation constraint information, and hierarchical structure information) and maps entities and relations into an ontology-information-constrained space. Based on the TransE model, it defines loss functions in the basic space and the ontology-constrained space and combines the two for representation learning. Knowledge graph embedding with hierarchical relation structure (HRS) [29] learns representations of relation clusters, relations, and sub-relations separately and sums the three as the embedding vector of a relation, thus modeling the hierarchical structure of relations. A representation learning method for knowledge graphs with relation hierarchical structure (TransRHS) [30] builds on HRS [29]; it encodes each relation as a vector and a relation-specific sphere in the same space.
TransRHS [30] uses the relative positions between vectors and spheres to model sub-relations, which embodies the inherent generalization relationship between relations. The universal representation learning of knowledge bases by jointly embedding instances and ontological concepts (JOIE) [31] model considers that the KG and the ontology use different embedding spaces and enables cross-space interaction between the two embeddings. A modified joint knowledge graph embedding model for concepts and instances (JECI++) [32] simplifies hierarchical concepts and links instances to them, making it easier to identify instances based on neighboring instances and simplified concepts. It uses circular convolution to locate instances in the embedding space and employs CBOW and Skip-Gram strategies to jointly embed simplified concepts and instances.

2.3. Formal Concept Analysis

Formal Concept Analysis (FCA) [33] is a powerful and widely used method in information science that enables the creation of a concept hierarchy or formal ontology from a given set of objects and their properties. This approach is based on the mathematical theory of lattices and ordered sets, which allows for the identification of shared properties and relationships between objects in a structured and systematic way.
The resulting hierarchy of concepts represents a logical and intuitive organization of the objects and their properties, with each concept capturing a group of objects that share a set of common attributes. Moreover, each sub-concept in the hierarchy represents a more specific grouping of the objects and carries a superset of the attributes of its parent concepts.
Introduced by Rudolf Wille in 1981, Formal Concept Analysis (FCA) [33] has become a fundamental tool in various fields, including data mining, text mining, machine learning, knowledge management, semantic web, software development, chemistry, and biology. Its practical applications are diverse and numerous, ranging from discovering hidden patterns and relationships in data sets to developing more effective search algorithms and enhancing the quality of knowledge representation in various domains.

3. Methodology

We introduce the SFCA framework in this section. As shown in Figure 2, the whole architecture includes three modules. The FCE module converts the KG into a formal context and the formal context into formal concepts. The FCS module reduces the full set of formal concepts to a partial set. Finally, the FCLP module implements partial formal concept supervision for link prediction.
As shown in Figure 2, in the vertical-field KG, points of different colors correspond to different types of real-world things (type information is not included in this KG). In the formal context of the FCE module, the various entities and relations are denoted by name abbreviations and the color of the type to which they belong. In the FCE module’s concept lattice, each formal concept is depicted as a diamond, and the colored rectangles on the right side of the diamond identify the kinds of entities and relations that make up the formal concept’s entity set and relation set, respectively.
In the formal concept sampling module in the lower right corner, the initial input is the natural mapping between formal concepts and entities. First, the natural mapping between formal concepts and relations is used to find the correspondence between each triple’s head entity-relation pair and the formal concepts. Then, the natural mapping between formal concepts and entities is considered again to find the correspondence between each triple and its “formal concept triple”. Finally, in the formal concept-driven link prediction module in the lower left corner, a new triple, formed by combining the triple missing its head or tail entity with a candidate entity, is first filtered by whether it corresponds to a known “formal concept triple”, and then enters the ranking stage to output scores.

3.1. Notations and Problem Formalization

3.1.1. Preliminary Knowledge of FCA

For better understanding, we first provide a brief introduction to Formal Concept Analysis (FCA) [33]. FCA is a method for knowledge representation, information management and data analysis. Generally, FCA is regarded as a conceptual clustering method used to determine implicit associations between objects and attributes. Formal context, formal concept, and concept lattice are the three central notions in FCA. The key definitions follow:
Definition 1. 
Let G be the set of objects, M be the set of attributes, and I be the binary relationship between object set G and attribute set M. Then, the triple (G, M, I) is a formal context.
Definition 2. 
Given a formal context K = (G, M, I), for A ⊆ G and B ⊆ M, the following operations are defined:
A′ = {m ∈ M | g I m, ∀g ∈ A}
B′ = {g ∈ G | g I m, ∀m ∈ B}
A′ is the set of attributes shared by all objects in A; B′ is the set of objects possessing all attributes in B.
If A′ = B and B′ = A, then the pair (A, B) is a concept of the formal context, where A is the extent of the concept (A, B) and B is its intent.
Example 1. 
A formal context K = (G, M, I) is shown in Table 1, where the object set G = {1, 2, 3, 4, 5} and the attribute set M = {a, b, c, d, e}.
According to Definition 2, we can obtain all the concepts in the formal context K: (G, Ø), (1, ab), (2, bde), (3, bcd), (4, ce), (23, bd), (245, e), (235, d), (135, b), (Ø, M).
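The derivation operators of Definition 2 and the enumeration in Example 1 can be sketched in code. The following uses a small hypothetical context (illustrative only, not the context of Table 1) and enumerates all formal concepts by closing every subset of objects:

```python
from itertools import combinations

# A hypothetical formal context (G, M, I): objects -> attribute sets.
# (Illustrative only; not the context of Table 1.)
context = {
    1: {"a", "b"},
    2: {"b", "d", "e"},
    3: {"b", "c", "d"},
    4: {"c", "e"},
}
G = set(context)
M = set().union(*context.values())

def intent(A):
    """A' : attributes shared by all objects in A (M if A is empty)."""
    return set.intersection(*(context[g] for g in A)) if A else set(M)

def extent(B):
    """B' : objects having all attributes in B (G if B is empty)."""
    return {g for g in G if B <= context[g]}

# Enumerate all formal concepts (A, B) with A' = B and B' = A by
# closing every subset of objects: A -> A' -> A'' gives the extent.
concepts = set()
for r in range(len(G) + 1):
    for subset in combinations(sorted(G), r):
        B = intent(set(subset))
        A = extent(B)
        concepts.add((frozenset(A), frozenset(B)))

for A, B in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(A), sorted(B))
```

Brute-force closure over object subsets is exponential; practical FCA implementations use algorithms such as NextClosure, but the small sketch above suffices to illustrate the definitions.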
Definition 3. 
Let (A1, B1) and (A2, B2) be two concepts on the formal context K = (G, M, I), and A1 ⊆ A2 (equivalent to B1 ⊇ B2), then we call (A1, B1) the subconcept of (A2, B2) and (A2, B2) the parent concept of (A1, B1), denoted as (A1, B1) ≤ (A2, B2).
The concept lattice can be visualized by a Hasse diagram. For example, the corresponding concept lattice in Example 1 is shown in Figure 3.

3.1.2. KGE Score Function

Inspired by CAKE [19], we adopt a scalable architecture design that takes any KGE model as a plug-in module for direct use in SFCA. The KGE model plays the role of extracting entity and relation embeddings in the architecture. Here, we use the uniform symbol E(h, r, t) to denote the score function of any KGE model for assessing the plausibility of a triple (h, r, t). Table 2 shows the definitions of various KGE models.
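As an illustration of the uniform score function E(h, r, t), the following sketch plugs in the TransE scoring rule; the embeddings and entity names are hypothetical toy values chosen so that one triple fits exactly:

```python
# Minimal sketch of the uniform score function E(h, r, t) with TransE as
# the plug-in KGE model: a triple is plausible when h + r is close to t
# in the embedding space. Embeddings are tiny hypothetical vectors.
def transe_score(h, r, t):
    """Negative L2 norm of h + r - t: higher means more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

entity_emb = {"huskie": [1.0, 2.0], "dog": [4.0, 6.0], "car": [-9.0, 8.0]}
# Chosen so that huskie + is_a equals dog exactly, for illustration.
relation_emb = {"is_a": [3.0, 4.0]}

good = transe_score(entity_emb["huskie"], relation_emb["is_a"], entity_emb["dog"])
bad = transe_score(entity_emb["huskie"], relation_emb["is_a"], entity_emb["car"])
print(good, bad)  # good == 0.0 (perfect fit); bad < 0
```

Any other score function from Table 2 could be substituted for `transe_score` without changing the surrounding architecture, which is the point of the plug-in design.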

3.1.3. KGC

KGC is commonly divided into three subtasks: triple classification, link prediction and relation prediction. In this work, we are only concerned with link prediction, not the other KGC tasks. The link prediction task refers to finding the missing entity when the head or tail entity of a triple is missing. Specifically, we treat link prediction as an entity prediction task that searches for a reasonable entity when a triple’s head or tail entity is missing. Every entity in the KG is considered a candidate when a triple query has a missing entity. We take the top-n (n = 1, 3, 10) hits of correct entities as predicted results by ranking the scores of the candidate entities.
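The ranking protocol above can be sketched as follows; the candidate scores are hypothetical stand-ins for a trained KGE model’s outputs:

```python
# Sketch of link prediction as candidate ranking: given a query (h, r, ?),
# every entity is scored and the top-n hits are checked.
def hits_at_n(scores, correct_entity, n):
    """scores: dict entity -> plausibility (higher is better)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return correct_entity in ranked[:n]

candidate_scores = {"dog": 0.9, "cat": 0.4, "car": 0.1}
print(hits_at_n(candidate_scores, "dog", 1))   # True
print(hits_at_n(candidate_scores, "car", 3))   # True
print(hits_at_n(candidate_scores, "car", 1))   # False
```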

3.2. Formal Concept Extraction Module

According to the definition of formal concepts (see Section 3.1.1), SFCA automatically generates formal concepts from arbitrary KGs without externally annotated knowledge. To obtain high-quality formal concepts, we develop an FCE module that generates massive numbers of corresponding formal concepts from KGs while mining valuable information. All generated formal concepts contain a corresponding entity set and relation set.
Each fact triple set will first be encoded into a two-dimensional table as the formal context. Then, we produce formal concepts from the binary relations between entities and relations in these formal contexts. The formal representation of a KG is:
KG = {(e_i, r_j, e_k) | e_i ∈ E, e_k ∈ E, r_j ∈ R}
where E is the set of entities, R is the set of relations from E to E .
In this paper, entity e_i is regarded as an object and relation r_j as an attribute of e_i. The formal context can be obtained from a KG, and the concept lattice K can then be induced from the formal context.
For every e_i ∈ E, let C_{e_i}^{temp} = {(A_l, B_l) | e_i ∈ A_l}, where (A_l, B_l) is a concept in the concept lattice K.
The mapping from the formal concepts to the entity is formulated as:
f : C_{e_i}^{temp} → {e_i}, f((A_l, B_l)) = e_i for every (A_l, B_l) ∈ C_{e_i}^{temp}.
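The FCE encoding can be sketched as follows on hypothetical triples: head entities become the objects of a formal context and the relations they take part in become its attributes. For brevity, only the most specific formal concept containing each entity is derived here; a full FCE would enumerate the whole lattice:

```python
# Sketch of the FCE idea: build a formal context from triples (head
# entities as objects, their relations as attributes), then derive the
# most specific formal concept containing a given entity.
triples = [
    ("huskie", "is_a", "dog"),
    ("huskie", "pulls", "sled"),
    ("border_collie", "is_a", "dog"),
    ("border_collie", "herds", "sheep"),
]

# Formal context: object (head entity) -> attribute set (its relations).
context = {}
for h, r, t in triples:
    context.setdefault(h, set()).add(r)

def most_specific_concept(entity):
    """Close the entity's attribute set to get a formal concept (A, B)."""
    intent = context[entity]
    extent = frozenset(e for e, attrs in context.items() if intent <= attrs)
    return extent, frozenset(intent)

A, B = most_specific_concept("huskie")
print(sorted(A), sorted(B))  # ['huskie'] ['is_a', 'pulls']
```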

3.3. Formal Concept Sampling Module

An obvious problem with the dense formal concepts produced by the FCE module is that not all of them are requisite: dense formal concepts contain semantically similar and negative-gain information (not all formal concepts have corresponding real concepts in the real world, so formal concepts need to be screened before use). Thus, we propose an FCS module to reduce complexity and improve the quality of the formal concepts.
In a sizable KG, an entity can be mapped to many formal concepts. The formal concepts including the same entity can be considered a hierarchy. Among all formal concepts, we argue that the top-most formal concepts involve the most valuable information, and the relations in factual triples are sufficient to filter them out (here, the top-most formal concept refers to the formal concept whose relation set is non-empty and minimal in size, and whose entity set is closest to containing all entities). As shown in Figure 4, our FCS module is designed as a rough-to-fine sampling strategy using instance relations and partial order relations. Specifically, the formal concepts mapped by Entity 2 are marked by the blue dots in the concept lattice at the top left. The formal concepts mapped by Entity 2 and relation e after the first sampling are marked by the blue dots in the middle-left concept lattice. The formal concept mapped by Entity 2 and relation e after the second sampling is marked by the blue dot in the concept lattice at the lower left. The blue dots in the green parallelogram at the top of the figure are the sampled formal concepts.
(1) Sampling with instance relations: In the first stage, we use only the instance relations for sampling, inferring that the formal concepts mapped by the same entity under different relations should differ. Given an instance triple, if a formal concept in the set is mapped to the head entity of the triple and the relation of the triple belongs to the relation set of the formal concept, the formal concept is mapped to the entity-relation pair composed of the head entity and the relation of the instance triple:
For every (e_i, r_j, e_k) ∈ KG, let C_{e_i r_j}^{temp} = {(A_l, B_l) | (A_l, B_l) ∈ C_{e_i}^{temp}, r_j ∈ B_l}, where (A_l, B_l) is a concept in the concept lattice K.
We can get the following mapping:
g : C_{e_i r_j}^{temp} → {(e_i, r_j)}, g((A_l, B_l)) = (e_i, r_j) for every (A_l, B_l) ∈ C_{e_i r_j}^{temp}.
(2) Sampling with partial order relations: After the first sampling stage, the formal concept set mapped to the head entity-relation pair still contains several formal concepts, which form a partial order. Thus, to simplify the mapping, we select the most valuable formal concept in the set as the mapped formal concept of the head entity-relation pair.
Given an instance triple, if a formal concept in the set mapped to the head entity-relation pair of the triple has a relation set that is included in the relation sets of all formal concepts in the set, then that formal concept is selected to be mapped to the entity-relation pair composed of the triple’s head entity and relation; according to Definition 3, such a maximum concept exists in C_{e_i r_j}^{temp}.
For every (e_i, r_j, e_k) ∈ KG, let C_{e_i r_j}^{final} = {(A_l, B_l) | (A_l, B_l) ∈ C_{e_i r_j}^{temp}, ∀(A_m, B_m) ∈ C_{e_i r_j}^{temp}, (A_m, B_m) ≤ (A_l, B_l)}.
We can get the following mapping:
h : C_{e_i r_j}^{final} → {(e_i, r_j)}, h((A_l, B_l)) = (e_i, r_j) for every (A_l, B_l) ∈ C_{e_i r_j}^{final}.
After the two sampling stages, we obtain the mapping from formal concepts to instance entities, the mapping from formal concepts to the entity-relation pairs of instance triples, and the formal concept triples.
For every e_i ∈ E, let C_{e_i}^{final} = {(A_l, B_l) | (A_l, B_l) ∈ C_{e_i r_j}^{final}, (e_i, r_j, e_k) ∈ KG}.
We can get the following mapping:
l : C_{e_i}^{final} → {e_i}, l((A_l, B_l)) = e_i for every (A_l, B_l) ∈ C_{e_i}^{final}.
A collection of formal concept triples is denoted FC, where each triple consists of a head entity’s formal concept set C_{hr}^{final} and a tail entity’s formal concept set C_t^{final} associated with their instance-level relation r, defined as:
FC = {(C_{hr}^{final}, r, C_t^{final}) | (h, r, t) ∈ KG}
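The two sampling stages can be sketched as follows on hypothetical (extent, intent) pairs; stage 1 keeps concepts whose extent contains e_i and whose intent contains r_j, and stage 2 keeps the maximum concept (the one with the largest extent, per Definition 3):

```python
# Sketch of the two FCS sampling stages for an entity-relation pair
# (e_i, r_j). Concepts are hypothetical (extent, intent) pairs.
concepts = [
    (frozenset({"huskie", "border_collie"}), frozenset({"is_a"})),
    (frozenset({"huskie"}), frozenset({"is_a", "pulls"})),
    (frozenset({"border_collie"}), frozenset({"is_a", "herds"})),
]

def sample(entity, relation):
    # Stage 1: sampling with instance relations.
    stage1 = [(A, B) for A, B in concepts if entity in A and relation in B]
    # Stage 2: sampling with the partial order (keep the maximum concept).
    return max(stage1, key=lambda c: len(c[0]))

A, B = sample("huskie", "is_a")
print(sorted(A), sorted(B))  # ['border_collie', 'huskie'] ['is_a']
```

Selecting by extent size is a simplification of the partial order of Definition 3; on a real concept lattice, comparability of the stage-1 candidates guarantees a unique maximum.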

3.4. Formal Concept-Driven Link Prediction Module

To find better candidate entities and improve prediction outcomes, we propose a novel two-stage formal concept-supervised link prediction mechanism. In the first stage, candidate entities are selected from the perspective of formal concepts. Specifically, given a query (h, r, ?), we use the set of formal concept triples FC to filter plausible formal concepts of tail entities; the set of candidate formal concepts of the tail entity t is C_t^{final}, and the entities belonging to this formal concept set are taken as candidate entities.
In the second stage, for each screened candidate entity e_i, a score is calculated by the scoring function. The score of a candidate triple is:
score(e_i) = E(h, r, e_i)
Here, E(h, r, e_i) is the scoring function used to train the KGE model; the prediction then arranges the scores of the candidate entities in ascending order and outputs the top-n hits of the correct entities.
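A minimal sketch of the two-stage inference, with hypothetical concept mappings and a toy score function in place of a trained KGE model:

```python
# Sketch of two-stage FCLP inference for a query (h, r, ?): stage 1
# filters candidate tails to those whose formal concept forms a known
# formal concept triple with the head's concept and the relation;
# stage 2 ranks the survivors by a score function.
fc_triples = {("dog_concept", "is_a", "animal_concept")}
entity_to_fc = {"animal": "animal_concept", "car": "vehicle_concept"}

def predict_tail(h_fc, r, candidates, score):
    # Stage 1: concept-level filtering against known formal concept triples.
    filtered = [e for e in candidates
                if (h_fc, r, entity_to_fc[e]) in fc_triples]
    # Stage 2: rank the remaining candidates by descending score.
    return sorted(filtered, key=score, reverse=True)

ranked = predict_tail("dog_concept", "is_a", ["animal", "car"],
                      {"animal": 0.8, "car": 0.9}.get)
print(ranked)  # ['animal']  ('car' is pruned at the concept filter)
```

Note that "car" is pruned in stage 1 despite its higher raw score, which is the intended effect of the formal concept supervision.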

4. Experiments

For a comprehensive comparison, we evaluate SFCA on five real-world datasets and one industry dataset. In this section, we introduce the experimental setup in detail. First, the performance of SFCA on the public datasets is shown; second, the comparison with commonsense concepts is discussed; finally, the effectiveness on real data is demonstrated.

4.1. Experiment Settings

4.1.1. Datasets

Our evaluation is based on five public datasets (FB15K237 [34], YAGO3-10 [35], WN18RR [36], NELL-995 [37], DBpedia-242 [19]) and an industrial KG dataset collected from the workshop of an actual factory. Table 3 shows the statistics of the public and industry datasets.
FB15K237 is a link prediction dataset derived from FB15K. It was created by Toutanova and Chen to ensure that the test and evaluation splits contain no inverse relations, avoiding test leakage. YAGO3-10 is a benchmark dataset for knowledge base completion; it is a subset of YAGO3 (an extension of YAGO) and contains entities associated with at least ten different relations. WN18RR is a link prediction dataset derived from WN18, a subset of WordNet. In WN18, many test triples can be obtained by inverting triples from the training set; WN18RR was therefore created to ensure that the evaluation split has no inverse relations that leak into testing. NELL-995 is a subset of NELL suitable for multi-hop inference, extracted from the 995th iteration of the NELL system. Useless triples are first removed using relations that occur more than 2 M times in the NELL dataset; the triples with the Top-200 relations are then selected, and the dataset is obtained after adding the inverse triples. DBpedia-242 is extracted from DBpedia [9] and contains 242 concepts. It is worth mentioning that the entities in FB15K237, YAGO3-10 and NELL-995 have a corresponding ontology, while the entities in WN18RR do not.

4.1.2. Baselines

We compare our SFCA model with five baseline models: TransE [10], DistMult [13], ComplEx [14], RotatE [15], and HAKE [12]; we also integrate these baseline models into our framework. All baseline models rely only on KG structure, so our framework requires no external data as input, and we show experimentally that it can be applied to most models without external expert data.

4.1.3. Implementation Details

We use the Adam optimizer for training, and all models adopt the self-adversarial negative sampling method. For the hyperparameters, on the same dataset we use the same settings across the different baseline models, including embedding size, batch size, negative sampling size, learning rate, margin, and sampling temperature. All experiments are performed with PyTorch on an NVIDIA Quadro RTX 5000 GPU.

4.1.4. Evaluation Protocol

We choose three widely recognized evaluation metrics for comparison: mean rank (MR), mean reciprocal rank (MRR), and the proportion of correct entities ranked in the top N (Hits@N). Notably, we adopt the filtered setting, removing from the candidates all triples already present in the datasets. The detailed computing formulas and notation definitions are shown in Table 4.
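As a minimal sketch, the metrics of Table 4 can be computed from the per-query ranks of the first correct answer (assumed to already come from the filtered setting):

```python
# MR, MRR, and Hits@n from a list of per-query ranks, following Table 4.

def kgc_metrics(ranks, ns=(1, 3, 10)):
    q = len(ranks)  # |Q|: number of queries
    metrics = {
        "MR": sum(ranks) / q,                      # mean rank
        "MRR": sum(1.0 / r for r in ranks) / q,    # mean reciprocal rank
    }
    for n in ns:
        # fraction of queries whose correct answer ranks within the top n
        metrics[f"Hits@{n}"] = sum(1 for r in ranks if r <= n) / q
    return metrics

# e.g. kgc_metrics([1, 2, 10]) gives MR = 13/3 and MRR = 1.6/3
```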

4.2. Evaluation of Public KGs

Table 5 shows the link prediction performance of SFCA on the four public datasets. The formal concept-driven KGE module improves significantly, with average increases of 11.83% (3.78 points), 16.53% (4.12 points), and 19.13% (5.82 points) in the MRR of the different baselines on FB15K237, YAGO3-10 and NELL-995, respectively. On WN18RR, the Hits@10 of the different baselines improves by an average of 2.63% (1.34 points). These results show that formal concepts, as higher-level features of KG structural information, are richer and more effective for link prediction.

4.3. Common Sense Concepts vs. Formal Concepts

We also compare our SFCA with an external information-based method: the Commonsense-Aware Knowledge Embedding (CAKE) [19] framework. The comparison results are obtained by combining uniform sampling [10] and self-adversarial sampling [15] with the KGE models TransE [10] and RotatE [15]. Table 6 presents the link prediction evaluation results on the three datasets. In the MRR of the different baselines, our SFCA is on average 147.54% (45.13 points) and 9.34% (3.33 points) higher than CAKE on FB15K237 and NELL-995, respectively. On DBpedia-242, comparing the best results of the different baselines on each indicator, SFCA is at most 14.47% (2.3 points) higher and at worst 3.36% (1.5 points) lower than CAKE. These results show that formal concepts are, in most cases, more effective than commonsense concepts for the KGC task.

4.4. Evaluation of Actual Industrial KGs

Table 7 shows the link prediction evaluation results on the industrial KG dataset. Our SFCA framework improves the MRR of the different baselines by more than 13.11% (7.3 points) on the fault diagnosis industrial dataset. SFCA thus achieves remarkable performance on industrial-field KGs, and we believe it can perform well on multi-field KGs.

5. Conclusions

Motivated by formal concept analysis theory, we propose SFCA, a novel scalable formal concept-driven knowledge graph completion framework applicable to multiple verticals. SFCA automatically generates formal concepts from a KG with a coarse-to-fine extraction strategy, and its formal concept-supervised link prediction module filters candidate entities from the perspective of formal concepts. Experiments on five public datasets demonstrate the effectiveness and scalability of SFCA. In addition, experiments on a real industrial dataset show that the model performs well in both the general and industrial domains.
Despite the performance of SFCA, there are still areas for improvement. First, our application to the KGC task is based on the closed-world assumption, which treats any triple not explicitly present in the graph as negative; future work will study formal concept analysis for KGC under the open-world assumption. Second, our method currently applies only to the KGC task; we plan to explore formal concept analysis on other concept-related KG tasks, such as life-long learning. Third, the embedding-related tasks in this paper involve only knowledge graph completion; future work can use formal concept analysis to assist other knowledge graph tasks such as named entity recognition and relation extraction.

Author Contributions

Conceptualization, X.S., C.W. and S.Y.; methodology, X.S.; validation, X.S.; investigation, X.S. and C.W.; resources, C.W. and S.Y.; data curation, X.S.; writing—original draft preparation, X.S.; writing—review and editing, X.S. and C.W.; supervision, C.W. and S.Y.; project administration, S.Y.; funding acquisition, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number: 2020AAA0109300.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank our teachers and classmates for their suggestions on this paper and the program.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, X.; Luo, F.; Wu, Q.; Bao, Z. How Context or Knowledge Can Benefit Healthcare Question Answering? IEEE Trans. Knowl. Data Eng. 2021, 35, 575–588. [Google Scholar] [CrossRef]
  2. Liu, Y.; Yang, S.; Xu, Y.; Miao, C.; Wu, M.; Zhang, J. Contextualized graph attention network for recommendation with item knowledge graph. IEEE Trans. Knowl. Data Eng. 2021, 35, 181–195. [Google Scholar] [CrossRef]
  3. Kang, S.; Shi, L.; Zhang, Z. Knowledge Graph Double Interaction Graph Neural Network for Recommendation Algorithm. Appl. Sci. 2022, 12, 12701. [Google Scholar] [CrossRef]
  4. Zhong, M.; Zheng, Y.; Xue, G.; Liu, M. Reliable keyword query interpretation on summary graphs. IEEE Trans. Knowl. Data Eng. 2022, 35, 5187–5202. [Google Scholar] [CrossRef]
  5. Sun, Y.; Chun, S.-J.; Lee, Y. Learned semantic index structure using knowledge graph embedding and density-based spatial clustering techniques. Appl. Sci. 2022, 12, 6713. [Google Scholar] [CrossRef]
  6. Gu, R.; Wang, T.; Deng, J.; Cheng, L. Improving Chinese Named Entity Recognition by Interactive Fusion of Contextual Representation and Glyph Representation. Appl. Sci. 2023, 13, 4299. [Google Scholar] [CrossRef]
  7. Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; Taylor, J. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, Vancouver, BC, Canada, 10–12 June 2008; pp. 1247–1250. [Google Scholar]
  8. Suchanek, F.M.; Kasneci, G.; Weikum, G. Yago: A core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, Banff, AB, Canada, 8–12 May 2007; pp. 697–706. [Google Scholar]
  9. Lehmann, J.; Isele, R.; Jakob, M.; Jentzsch, A.; Kontokostas, D.; Mendes, P.N.; Hellmann, S.; Morsey, M.; Van Kleef, P.; Auer, S. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semant. Web 2015, 6, 167–195. [Google Scholar] [CrossRef] [Green Version]
  10. Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. Adv. Neural Inf. Process. Syst. 2013, 26, 2787–2795. [Google Scholar]
  11. Zhang, S.; Tay, Y.; Yao, L.; Liu, Q. Quaternion knowledge graph embeddings. Adv. Neural Inf. Process. Syst. 2019, 32, 2735–2745. [Google Scholar]
  12. Zhang, Z.; Cai, J.; Zhang, Y.; Wang, J. Learning hierarchy-aware knowledge graph embeddings for link prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 3065–3072. [Google Scholar]
  13. Yang, B.; Yih, W.-t.; He, X.; Gao, J.; Deng, L. Embedding entities and relations for learning and inference in knowledge bases. arXiv 2014, arXiv:1412.6575. [Google Scholar]
  14. Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; Bouchard, G. Complex embeddings for simple link prediction. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 2071–2080. [Google Scholar]
  15. Sun, Z.; Deng, Z.-H.; Nie, J.-Y.; Tang, J. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv 2019, arXiv:1902.10197. [Google Scholar]
  16. Nickel, M.; Tresp, V.; Kriegel, H.-P. A three-way model for collective learning on multi-relational data. In Proceedings of the ICML, Bellevue, WA, USA, 28 June–2 July 2011; pp. 809–816. [Google Scholar]
  17. Xie, R.; Liu, Z.; Sun, M. Representation learning of knowledge graphs with hierarchical types. In Proceedings of the IJCAI, New York, NY, USA, 9–15 July 2016; pp. 2965–2971. [Google Scholar]
  18. Guo, S.; Wang, Q.; Wang, L.; Wang, B.; Guo, L. Knowledge graph embedding with iterative guidance from soft rules. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  19. Niu, G.; Li, B.; Zhang, Y.; Pu, S. CAKE: A scalable commonsense-aware framework for multi-view knowledge graph completion. arXiv 2022, arXiv:2202.13785. [Google Scholar]
  20. Galárraga, L.; Teflioudi, C.; Hose, K.; Suchanek, F.M. Fast rule mining in ontological knowledge bases with AMIE+. VLDB J. 2015, 24, 707–730. [Google Scholar] [CrossRef] [Green Version]
  21. Sadeghian, A.; Armandpour, M.; Ding, P.; Wang, D.Z. Drum: End-to-end differentiable rule mining on knowledge graphs. Adv. Neural Inf. Process. Syst. 2019, 32, 1–13. [Google Scholar]
  22. Ma, S.; Ding, J.; Jia, W.; Wang, K.; Guo, M. Transt: Type-based multiple embedding representations for knowledge graph completion. In Proceedings of the Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2017, Proceedings, Part I 10. Skopje, North Macedonia, 18–22 September 2017; pp. 717–733. [Google Scholar]
  23. Liu, J.; Chen, T.; Wang, C.; Liang, J.; Chen, L.; Xiao, Y.; Chen, Y.; Jin, K. VoCSK: Verb-oriented commonsense knowledge mining with taxonomy-guided induction. Artif. Intell. 2022, 310, 103744. [Google Scholar] [CrossRef]
  24. Li, C.; Liang, J.; Xiao, Y.; Jiang, H. Towards Fine-Grained Concept Generation. IEEE Trans. Knowl. Data Eng. 2021, 35, 986–997. [Google Scholar] [CrossRef]
  25. Xiang, Y.; Zhang, Z.; Chen, J.; Chen, X.; Lin, Z.; Zheng, Y. OntoEA: Ontology-guided entity alignment via joint knowledge graph embedding. arXiv 2021, arXiv:2105.07688. [Google Scholar]
  26. Guo, S.; Wang, Q.; Wang, B.; Wang, L.; Guo, L. SSE: Semantically smooth embedding for knowledge graphs. IEEE Trans. Knowl. Data Eng. 2016, 29, 884–897. [Google Scholar] [CrossRef]
  27. Lv, X.; Hou, L.; Li, J.; Liu, Z. Differentiating concepts and instances for knowledge graph embedding. arXiv 2018, arXiv:1811.04588. [Google Scholar]
  28. Li, Z.; Liu, X.; Wang, X.; Liu, P.; Shen, Y. Transo: A knowledge-driven representation learning method with ontology information constraints. World Wide Web 2023, 26, 297–319. [Google Scholar] [CrossRef]
  29. Zhang, Z.; Zhuang, F.; Qu, M.; Lin, F.; He, Q. Knowledge graph embedding with hierarchical relation structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 3198–3207. [Google Scholar]
  30. Zhang, F.; Wang, X.; Li, Z.; Li, J. TransRHS: A representation learning method for knowledge graphs with relation hierarchical structure. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 7–15 January 2021. [Google Scholar]
  31. Hao, J.; Chen, M.; Yu, W.; Sun, Y.; Wang, W. Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 1709–1719. [Google Scholar]
  32. Wang, P.; Zhou, J. JECI++: A Modified Joint Knowledge Graph Embedding Model for Concepts and Instances. Big Data Res. 2021, 24, 100160. [Google Scholar] [CrossRef]
  33. Ganter, B.; Wille, R. Formal Concept Analysis: Mathematical Foundations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  34. Toutanova, K.; Chen, D. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and Their Compositionality, Beijing, China, 31 July 2015; pp. 57–66. [Google Scholar]
  35. Mahdisoltani, F.; Biega, J.; Suchanek, F. Yago3: A knowledge base from multilingual wikipedias. In Proceedings of the 7th Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, 4–7 January 2015. [Google Scholar]
  36. Shang, C.; Tang, Y.; Huang, J.; Bi, J.; He, X.; Zhou, B. End-to-end structure-aware convolutional networks for knowledge base completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 3060–3067. [Google Scholar]
  37. Xiong, W.; Hoang, T.; Wang, W.Y. Deeppath: A reinforcement learning method for knowledge graph reasoning. arXiv 2017, arXiv:1707.06690. [Google Scholar]
Figure 1. (a) A schematic diagram of ontology structure and concept lattice structure. (b) Examples of correspondence between instances and formal concepts and sources of formal concepts.
Figure 2. An overview of the SFCA framework.
Figure 3. The lattice of formal concepts for Example 1.
Figure 4. Formal Concept Sampling Module.
Table 1. A formal context.

| Object | a | b | c | d | e |
|---|---|---|---|---|---|
| 1 | 1 | 1 | 0 | 0 | 0 |
| 2 | 0 | 1 | 0 | 1 | 1 |
| 3 | 0 | 1 | 1 | 1 | 0 |
| 4 | 0 | 0 | 1 | 0 | 1 |
| 5 | 0 | 0 | 0 | 1 | 1 |
Table 2. Details of several knowledge graph embedding models.

| Model | Score Function | Parameters |
|---|---|---|
| TransE | $-\lVert h + r - t \rVert_{1/2}$ | $h, r, t \in \mathbb{R}^k$ |
| DistMult | $h^\top \mathrm{diag}(M_r) t$ | $h, r, t \in \mathbb{R}^k$ |
| ComplEx | $\mathrm{Re}(h^\top \mathrm{diag}(M_r) \bar{t})$ | $h, r, t \in \mathbb{C}^k$ |
| RotatE | $-\lVert h \circ r - t \rVert_2$ | $h, r, t \in \mathbb{C}^k$, $\lvert r_i \rvert = 1$ |
| HAKE | $-\lVert h_m \circ r_m - t_m \rVert_2 - \lambda \lVert \sin((h_p + r_p - t_p)/2) \rVert_1$ | $h_m, t_m \in \mathbb{R}^k$, $r_m \in \mathbb{R}_+^k$, $h_p, r_p, t_p \in [0, 2\pi)^k$, $\lambda \in \mathbb{R}$ |
Table 3. Statistics of datasets.

| Dataset | #Rel | #Ent | #Train | #Valid | #Test |
|---|---|---|---|---|---|
| FB15K237 | 237 | 14,541 | 272,115 | 17,535 | 20,466 |
| YAGO3-10 | 37 | 123,182 | 1,079,040 | 5,000 | 5,000 |
| WN18RR | 11 | 40,943 | 86,835 | 3,034 | 3,134 |
| NELL-995 | 200 | 75,492 | 123,370 | 15,000 | 15,843 |
| DBpedia-242 | 298 | 99,744 | 529,654 | 35,850 | 30,000 |
| Industrial KG | 15 | 57,373 | 78,787 | 3,000 | 3,000 |
Table 4. Detailed computing formulas of evaluation metrics for KGC.

| Metric | Computing Formula | Notation Definition |
|---|---|---|
| MRR | $\mathrm{MRR} = \frac{1}{\lvert Q \rvert} \sum_{i=1}^{\lvert Q \rvert} \frac{1}{rank_i}$ | $Q$: query set; $\lvert Q \rvert$: number of queries; $rank_i$: rank of the first correct answer for the $i$-th query |
| MR | $\mathrm{MR} = \frac{1}{\lvert Q \rvert} \sum_{i=1}^{\lvert Q \rvert} rank_i$ | $Q$: query set; $\lvert Q \rvert$: number of queries; $rank_i$: rank of the first correct answer for the $i$-th query |
| Hits@n | $\mathrm{Hits@}n = \frac{1}{\lvert Q \rvert} \mathrm{Count}(rank_i \le n),\ 0 < i \le \lvert Q \rvert$ | $\mathrm{Count}(\cdot)$: the number of hits within the top $n$ rankings among test examples; $Q$, $\lvert Q \rvert$, $rank_i$ as above |
Table 5. Link prediction results on four datasets. * denotes SFCA is used.

FB15K237:

| Model | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|
| TransE | 175 | 0.331 | 0.234 | 0.369 | 0.526 |
| TransE * | 110 | 0.371 | 0.275 | 0.410 | 0.563 |
| DistMult | 216 | 0.284 | 0.223 | 0.310 | 0.447 |
| DistMult * | 145 | 0.320 | 0.240 | 0.346 | 0.482 |
| ComplEx | 181 | 0.308 | 0.220 | 0.337 | 0.486 |
| ComplEx * | 119 | 0.344 | 0.258 | 0.372 | 0.518 |
| RotatE | 177 | 0.336 | 0.240 | 0.374 | 0.530 |
| RotatE * | 108 | 0.376 | 0.280 | 0.413 | 0.567 |
| HAKE | 184 | 0.343 | 0.246 | 0.380 | 0.537 |
| HAKE * | 112 | 0.380 | 0.285 | 0.417 | 0.572 |

YAGO3-10:

| Model | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|
| TransE | 1124 | 0.504 | 0.411 | 0.563 | 0.672 |
| TransE * | 648 | 0.535 | 0.452 | 0.583 | 0.688 |
| DistMult | 1723 | 0.130 | 0.084 | 0.135 | 0.219 |
| DistMult * | 1110 | 0.174 | 0.116 | 0.185 | 0.287 |
| ComplEx | 1118 | 0.198 | 0.131 | 0.214 | 0.325 |
| ComplEx * | 660 | 0.253 | 0.181 | 0.276 | 0.393 |
| RotatE | 1859 | 0.497 | 0.406 | 0.553 | 0.665 |
| RotatE * | 795 | 0.542 | 0.469 | 0.591 | 0.694 |
| HAKE | 1384 | 0.531 | 0.444 | 0.586 | 0.687 |
| HAKE * | 613 | 0.562 | 0.485 | 0.605 | 0.703 |

WN18RR:

| Model | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|
| TransE | 5081 | 0.191 | 0.004 | 0.343 | 0.478 |
| TransE * | 4209 | 0.221 | 0.029 | 0.382 | 0.500 |
| DistMult | 6043 | 0.406 | 0.358 | 0.437 | 0.490 |
| DistMult * | 4935 | 0.414 | 0.364 | 0.444 | 0.503 |
| ComplEx | 6394 | 0.457 | 0.416 | 0.481 | 0.526 |
| ComplEx * | 5274 | 0.463 | 0.421 | 0.488 | 0.535 |
| RotatE | 5648 | 0.466 | 0.435 | 0.480 | 0.527 |
| RotatE * | 4597 | 0.473 | 0.440 | 0.488 | 0.539 |
| HAKE | 4132 | 0.492 | 0.450 | 0.509 | 0.574 |
| HAKE * | 3325 | 0.504 | 0.463 | 0.519 | 0.585 |

NELL-995:

| Model | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|
| TransE | 2238 | 0.239 | 0.079 | 0.348 | 0.508 |
| TransE * | 353 | 0.325 | 0.202 | 0.394 | 0.548 |
| DistMult | 2523 | 0.348 | 0.268 | 0.385 | 0.500 |
| DistMult * | 472 | 0.398 | 0.311 | 0.435 | 0.564 |
| ComplEx | 3664 | 0.369 | 0.289 | 0.405 | 0.518 |
| ComplEx * | 727 | 0.417 | 0.331 | 0.453 | 0.582 |
| RotatE | 2327 | 0.358 | 0.269 | 0.404 | 0.526 |
| RotatE * | 337 | 0.405 | 0.313 | 0.450 | 0.580 |
| HAKE | 2419 | 0.313 | 0.205 | 0.374 | 0.514 |
| HAKE * | 348 | 0.373 | 0.270 | 0.427 | 0.567 |
Table 6. Link prediction results on three datasets. * denotes SFCA is used. +U denotes uniform sampling strategies are used. +S denotes self-adversarial negative sampling strategies are used. +C denotes the Commonsense-Aware Knowledge Embedding framework is used.

FB15K237:

| Model | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|
| TransE + U | 167 | 0.309 | 0.215 | 0.340 | 0.496 |
| TransE + S | 172 | 0.303 | 0.208 | 0.337 | 0.494 |
| TransE + C | 177 | 0.295 | 0.199 | 0.330 | 0.484 |
| TransE + U * | 29 | 0.756 | 0.709 | 0.779 | 0.850 |
| TransE + S * | 28 | 0.754 | 0.704 | 0.780 | 0.849 |
| RotatE + U | 188 | 0.316 | 0.221 | 0.351 | 0.507 |
| RotatE + S | 190 | 0.324 | 0.230 | 0.357 | 0.514 |
| RotatE + C | 192 | 0.318 | 0.226 | 0.348 | 0.503 |
| RotatE + U * | 34 | 0.761 | 0.712 | 0.784 | 0.853 |
| RotatE + S * | 32 | 0.760 | 0.711 | 0.784 | 0.855 |

NELL-995:

| Model | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|
| TransE + U | 1261 | 0.249 | 0.153 | 0.288 | 0.428 |
| TransE + S | 1212 | 0.255 | 0.156 | 0.295 | 0.436 |
| TransE + C | 485 | 0.310 | 0.187 | 0.374 | 0.532 |
| TransE + U * | 232 | 0.344 | 0.248 | 0.386 | 0.527 |
| TransE + S * | 226 | 0.351 | 0.253 | 0.392 | 0.536 |
| RotatE + U | 1389 | 0.342 | 0.249 | 0.379 | 0.522 |
| RotatE + S | 1357 | 0.351 | 0.260 | 0.385 | 0.533 |
| RotatE + C | 503 | 0.404 | 0.305 | 0.455 | 0.593 |
| RotatE + U * | 223 | 0.433 | 0.343 | 0.469 | 0.611 |
| RotatE + S * | 220 | 0.433 | 0.341 | 0.470 | 0.611 |

DBpedia-242:

| Model | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|
| TransE + U | 1332 | 0.281 | 0.123 | 0.393 | 0.538 |
| TransE + S | 1302 | 0.302 | 0.132 | 0.427 | 0.573 |
| TransE + C | 901 | 0.331 | 0.159 | 0.461 | 0.596 |
| TransE + U * | 635 | 0.316 | 0.170 | 0.415 | 0.557 |
| TransE + S * | 606 | 0.336 | 0.182 | 0.446 | 0.586 |
| RotatE + U | 1935 | 0.337 | 0.197 | 0.434 | 0.560 |
| RotatE + S | 1788 | 0.372 | 0.239 | 0.464 | 0.592 |
| RotatE + C | 1049 | 0.419 | 0.310 | 0.486 | 0.605 |
| RotatE + U * | 772 | 0.366 | 0.236 | 0.454 | 0.576 |
| RotatE + S * | 718 | 0.413 | 0.299 | 0.485 | 0.607 |
Table 7. Link prediction results on the industrial dataset. * denotes SFCA is used.

| Model | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|
| TransE | 5949 | 0.353 | 0.143 | 0.505 | 0.758 |
| TransE * | 1000 | 0.681 | 0.609 | 0.741 | 0.796 |
| DistMult | 6811 | 0.545 | 0.492 | 0.572 | 0.671 |
| DistMult * | 847 | 0.636 | 0.574 | 0.680 | 0.744 |
| ComplEx | 6246 | 0.557 | 0.498 | 0.591 | 0.685 |
| ComplEx * | 835 | 0.630 | 0.562 | 0.678 | 0.752 |
| RotatE | 7086 | 0.588 | 0.510 | 0.634 | 0.760 |
| RotatE * | 1061 | 0.672 | 0.602 | 0.726 | 0.788 |
| HAKE | 3538 | 0.584 | 0.521 | 0.610 | 0.728 |
| HAKE * | 448 | 0.699 | 0.629 | 0.756 | 0.819 |
Share and Cite

Sun, X.; Wu, C.; Yang, S. SFCA: A Scalable Formal Concepts Driven Architecture for Multi-Field Knowledge Graph Completion. Appl. Sci. 2023, 13, 6851. https://doi.org/10.3390/app13116851