Article

Knowledge Representation and Reuse of Ship Block Coating Based on Knowledge Graph

School of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212100, China
* Author to whom correspondence should be addressed.
Coatings 2024, 14(1), 24; https://doi.org/10.3390/coatings14010024
Submission received: 27 November 2023 / Revised: 20 December 2023 / Accepted: 21 December 2023 / Published: 25 December 2023
(This article belongs to the Section Corrosion, Wear and Erosion)

Abstract
Ship coating, as one of the three pillar processes in the shipbuilding industry, runs through the entire process of ship construction. However, there is currently a lack of effective organization, management methods, and mechanisms for ship coating process data, which not only leads to the dispersion of data but also limits the effective representation and reuse of the coating knowledge. To solve this problem, this paper takes the ship block coating process as the research object and proposes a method for knowledge modeling and reuse of coating knowledge using knowledge graph and question answering technology. Compared with existing strategies, this paper introduces the temporal knowledge graph, which allows for dynamic updating and generation of the knowledge graph specific to ship coating processes. In addition, we apply the knowledge embedding question answering (KEQA) method improved by the analytic hierarchy process (AHP) to facilitate high-quality retrieval and personalized question answering regarding ship block coating knowledge. We validate the proposed method using block coating process data from the 81200DWT bulk carrier and advanced ship coating methods and optimization data. The results demonstrate that the AHP-KEQA (KEQA method improved by the AHP) method improves the accuracy of knowledge question answering compared with KEQA, which further reinforces the effectiveness of the AHP-KEQA method for question answering of ship block coating knowledge.

1. Introduction

Coating is an important part of the modern product manufacturing process, providing both decoration and corrosion protection for products. In the modern shipbuilding mode of “hull outfitting and coating” integration, ship coating is one of the key technologies in shipbuilding: it begins at steel discharging and continues until the delivery of the ship, running through the whole construction process [1]. The ship coating process involves a large amount of data, which is dispersed across the various stages of ship coating in the form of documents, experience, etc. It contains not only static data, such as product models, process specifications, and equipment information that support process design and planning, but also dynamic data obtained during process implementation. However, at present, these data are not managed scientifically and effectively, and their value is not fully exploited. Therefore, researching knowledge modeling and reuse technology for the ship coating process, realizing data fusion and integration at the knowledge level, and improving the utilization rate of knowledge through knowledge question answering technology is a necessary path toward the intelligent and green development of the ship coating industry.
Knowledge modeling can effectively organize the various data in the manufacturing domain and transform them into knowledge for reasonable expression and storage. Currently, knowledge modeling approaches for the manufacturing process mainly include object-oriented methods [2,3], ontologies [4,5,6], metadata [7], and complex networks [8,9,10]. However, these methods only provide unified management for information integration; they lack a description of the semantic relationships between data and cannot express temporal relationships. The knowledge graph (KG) [11,12,13,14] is a structured knowledge organization and expression technology derived from ontology technology. It organizes information in the forms “entity–relationship–entity” and “entity–attribute–value”, can efficiently deal with multi-source heterogeneous data with complex semantic relationships, and is well suited to knowledge expression and management for complex processes. With further research, scholars at home and abroad have integrated the time dimension into the knowledge graph so that it can better describe knowledge with temporal characteristics, making knowledge representation based on knowledge graphs more widely applicable across fields and enabling the effective use of knowledge. Ma et al. [15] proposed a unified conceptual-model data fusion framework based on the knowledge graph, in which multi-source heterogeneous refueling behavior data are organized and fused to achieve monitoring and anomaly identification of refueling behavior. Xu et al. [16] innovatively proposed a temporal knowledge embedding method and defined the temporal knowledge graph (TKG). Ding et al. [17] proposed a robot-assisted assembly knowledge graph to support efficient and smooth human–machine collaborative work during disassembly. Chhim et al. [18] focused on the product design and manufacturing process and constructed a manufacturing knowledge network to improve the reuse rate of knowledge.
Song et al. [19] proposed a dynamic knowledge graph modeling method for the manufacturing process through the composition of data and processing flow analysis, which solved the problems of difficult resource information retrieval in the block workshop manufacturing process, disjointed production planning, and low production efficiency. Shen et al. [20] used the method of type-aware attention path reasoning to complete a knowledge graph by simultaneously considering KG structural information, text information, and type information. Guo et al. [21] investigated how to automatically obtain and integrate multi-source, heterogeneous, multilayer, and multidimensional retired mechanical product information, and its implicit knowledge through a knowledge graph to form a gene bank. In summary, scholars have achieved certain research results in knowledge graph modeling, but there are few reports on the progress related to knowledge modeling of the coating process in the shipbuilding field.
In recent years, with the intensive application of knowledge graphs in various fields, more and more public and domain knowledge graphs have been reported. This has led a group of researchers to explore how to exploit the well-organized structure of knowledge graphs to realize the effective reuse of knowledge, and knowledge question answering over knowledge graphs is one of the main research directions of knowledge reuse. Yang et al. [22] established an approximate model in the disciplines of hydrodynamic shape and pressure hull, and combined concurrent subspace optimization, the penalty function method, and a multipopulational genetic algorithm to propose a new method for the system design optimization of an underwater glider. Li et al. [23] combined a convolutional neural network (CNN) with a Gaussian mixture model to develop a physics-guided deep learning framework integrating supervised and unsupervised learning; acoustic emission data were then used to diagnose manifold damage mechanisms and identify various loading stages. Yin et al. [24] matched the subject in a candidate fact with the entity in the question through a character-level CNN, matched the predicate in that fact with the question through a word-level CNN, and used attention and CNNs for simple question answering. Golub and He [25] designed an improved long short-term memory (LSTM) model based on attention for encoding and decoding questions. Bao et al. [26] manually defined several types of constraints and performed constraint learning to deal with complex questions, where each question is related to several facts. Lukovnikov et al. [27] used character-level gated recurrent unit networks to project questions and relationships/entities into the same space, and proposed neural-network-based character-level knowledge graph question answering. Huang et al. [28] proposed an effective knowledge question answering framework and verified its feasibility on Freebase (an open knowledge base). The above research results provide a good theoretical foundation for the reuse of ship coating process knowledge.
At present, research on intelligent shipbuilding and intelligent shipyards is in its initial stage. To address the problems that exist in the ship coating process, such as multi-source, heterogeneous, and dispersed coating data that are difficult to transform into knowledge for scientific and effective management and utilization, this paper proposes a knowledge representation and reuse method for the ship coating process. Taking ship block coating as the research object, the TKG is introduced into the field of ship coating, and a dynamic updating and generation method of knowledge graphs for the coating process is proposed to realize the visual expression of knowledge for the ship block coating process. The weight parameter setting of the joint distance metric in the knowledge embedding question answering (KEQA) algorithm framework is subjective, which seriously affects the performance of the algorithm. This paper therefore applies the analytic hierarchy process (AHP) to improve the KEQA algorithm framework based on the contribution degree of each item of the joint distance metric, which improves the computational accuracy and, at the same time, saves optimization search time for the algorithm model.
Section 1 of this article is the introduction, Section 2 identifies the research object and analyzes the data of ship block coating, Section 3 defines and constructs a TKG of ship block coating, Section 4 improves the KEQA model based on the AHP, and Section 5 verifies question answering of the ship block coating based on the KEQA method improved by the AHP (AHP-KEQA). Section 6 describes the conclusion, limitations, and future prospects of the article.
The main novelties and contributions of this paper are summarized as follows:
  • A detailed investigation of the ship coating process has been conducted.
  • A KG for the ship coating process is modeled and constructed for the first time, with temporal characteristics introduced.
  • An importance formula is proposed for the first time, transforming objective numerical values into a judgment matrix.
  • Based on the contribution degree of each item in the joint distance metric, the AHP is applied to improve the KEQA algorithm framework.
  • A ship block coating dataset is constructed for the first time.

2. Determination of Research Objectives

2.1. Ship Block Coating Process

Ship construction has a standard and professional process procedure, which is significantly different from the production of general industrial products. Ship coating is one of the important processes in shipbuilding, which directly affects the service life and maintenance cycle of the ship [29,30]. Ship coating is usually carried out synchronously in the process of shipbuilding and must be integrated with the whole ship construction process, which can be divided into the following stages:
  • Starting from the material yard, the steel surface is pretreated and coated with shop primer before the main coating is applied.
  • After the parts are combined into blocks, the most important secondary descaling and coating operations are carried out on the blocks.
  • Secondary descaling and coating operations are performed on the block assembly.
  • Area coating, including the deck, the quay, and the dockyard, is performed before delivery.
  • Coating of outfitting parts is carried out throughout the entire process.
Among them, block coating is the most basic and significant part of the ship coating process, in which the hull needs to be coated partially or fully. The procedure of block coating mainly comprises block blasting and block painting, and the general process of block coating is shown in Figure 1. In large- and medium-sized shipbuilding enterprises, blasting and painting operations on the blocks can be completed at different sites, consisting of blasting workshops and painting workshops. In small-sized enterprises, due to limitations of site size and the number of blocks, the blasting workshops and painting workshops are integrated.
In the stage of block blasting, all processes before blasting preparation belong to the preparatory work before block blasting. The process from equipment start-up to sand collection belongs to the formal blasting work of block blasting. The process from repairing and cleaning in blocks to internal and external inspection after blasting belongs to the final work of block blasting. In the stage of block painting, the preparation work before block painting is from entering the painting workshop to painting preparation. The core work of block painting includes pre-painting, full painting, and painting patching. The final work after block painting includes internal inspection, delivery after completion, and 5S work (including SEIRI, SEITON, SEISO, SEIKETSU, and SHITSUKE).

2.2. Data Analysis of Ship Block Coating

The ship blocks can be divided into plane blocks, curved blocks, and three-dimensional blocks according to their shapes. The coating process of each block can be divided into the following workstations: blasting pretreatment, painting preparation, pre-painting, full painting, painting patching, and so on. However, due to the adoption of traditional manual management methods at each workstation, the information generated during the coating process is usually stored in tables. This makes information retrieval, queries, and interactive sharing difficult. Therefore, to ensure that the data resources of each station can be used effectively and to avoid redundancy, it is essential to analyze the data characteristics of each workstation in the block coating workshop and classify different types of data. As the block coating process has a certain timing sequence, some of the data generated are strongly time-related, while the other data are time-independent. To understand and control the various aspects of the coating process better, this paper classifies them into two dimensions: static data and temporal data.
(1) Static data: the term static data refers to data that are not time-dependent regardless of the type of block being coated. These data encompass various aspects, including planning for the coating operation stage, pretreatment procedures prior to coating, materials and equipment involved in the coating process, coating methods and associated procedures, and quality inspections conducted after coating. Static data serve primarily for the analysis and evaluation of coating quality, optimization of the coating process, formulation of coating standards, and other related tasks. By analyzing static data, potential issues in the coating process can be identified, enabling appropriate adjustments and enhancements to be implemented. Figure 2 illustrates the static data involved in the ship coating process.
(2) Temporal data: temporal data pertain to information that changes over time during the block coating process. This category includes elements such as planned task durations, coating progress, real-time monitoring of environmental parameters, presence of harmful substances, spraying parameters, paint rheological properties, paint drying time, coating thickness, and additional details. Real-time analysis of temporal data allows changes within the coating process to be monitored, thereby ensuring the stability and consistency of coating quality. The specific content and accompanying comments for the temporal data are outlined in Table 1.
The blasting pretreatment station is used as an example to illustrate the flow of data and their correlation in two dimensions, as shown in Figure 3.

3. Definition and Construction of TKG

3.1. Definition of TKG for Ship Block Coating

The TKG is an enhanced version of the traditional knowledge graph that incorporates timestamp information on the relationships. It represents a multi-relationship directed graph that extends the temporal dimension of the graph, encompassing not only the time information of events but also implying the development patterns and evolution laws between events. This extension provides significant research and application value. This paper focuses on the definition of the TKG specific to the block coating process, elucidating various types of entities and their interconnection. Essentially, the TKG in the block coating process forms a semantic network comprising entity nodes and linkage relationships, formalizing the knowledge associated with each stage of the coating process. This graph facilitates easy access and reuse of knowledge within the block coating process.
Definition 1.
TKG of block coating: in ship coating, a directed knowledge graph is constructed to represent the ship block coating process. It includes entities (E) related to materials, techniques, environmental factors, and quality standards. Relationships (R) with timestamp information connect these entities. Additionally, knowledge triples (G) capture specific associations, enriching the ship block coating knowledge representation.
$TKG = (E, R, G)$
Definition 2.
Entity set (E): the entity set is the set of all entities in the TKG of block coating, which is structured into the head entity, $e_h$, and the tail entity, $e_o$.
$E = \{e_h, e_o\}$
Definition 3.
Relationship set (R): the relationship set is the set of all relationships in the TKG of block coating, including the static relationship, $r_s$, which reflects relationships between static knowledge, and the temporal relationship, $r_t$, which reflects relationships between temporal knowledge.
$R = \{r_s, r_t\}$
Definition 4.
Knowledge triples set (G): the knowledge triples of block coating processes consist of head entities, tail entities, and relationships.
$G = \{(e_h, R, e_o) \mid e_h, e_o \in E\}$
Formulas (1)–(4) come from reference [13], and the main relationships are defined as shown in Table 2.
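To make Definitions 1–4 concrete, the structures can be sketched in Python (an illustrative sketch only; the class and field names are ours and are not part of the paper's formalism):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Relation:
    """A relationship: static (r_s) or temporal (r_t), per Definition 3."""
    name: str
    timestamp: Optional[str] = None  # set only for temporal relationships

    @property
    def is_temporal(self) -> bool:
        return self.timestamp is not None

@dataclass(frozen=True)
class Triple:
    """A knowledge triple (e_h, R, e_o), per Definition 4."""
    head: str            # head entity e_h
    relation: Relation
    tail: str            # tail entity e_o

# A tiny TKG = (E, R, G): triples G, entity set E, relationship set R
g = [
    Triple("Block B12", Relation("hasStation"), "Blasting pretreatment"),
    Triple("Blasting pretreatment", Relation("precedes", "2023-11-27"), "Full painting"),
]
E = {t.head for t in g} | {t.tail for t in g}   # entity set E (Definition 2)
R = {t.relation.name for t in g}                # relationship set R (Definition 3)
```

A static relationship carries no timestamp, while a temporal one does, matching the split of R into $r_s$ and $r_t$.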

3.2. Construction of TKG for Ship Block Coating

3.2.1. Theoretical Framework

The construction of a knowledge graph involves two primary approaches: top-down and bottom-up [31]. The top-down method emphasizes the inter-conceptual hierarchy but is better suited to constructing knowledge graphs from small datasets because of its dependence on manual work and limited ontology-layer updates. The bottom-up method, by contrast, enables fast updates and supports large-scale data in knowledge graph construction; however, knowledge acquired automatically through big data analysis and clustering tends to be noisy and inaccurate. Combining the top-down and bottom-up approaches can overcome these limitations and enhance the quality and usability of the knowledge graph. Combining the two approaches may, however, slow model processing, because it increases the complexity of the model, data redundancy, and the overhead of information transfer and integration. Nevertheless, the impact of this slowdown can be mitigated by proper algorithm design and optimization. Therefore, in practical applications, a trade-off must be made between accuracy and efficiency.
The construction of a TKG for block coating involves organizing and associating static and temporal resources in the ship block coating process. This knowledge is stored and structured to facilitate utilization. To address the dynamic and discrete nature of coating knowledge, the TKG is constructed based on the block, station, and ontology dimensions. The block and station dimensions can be seen as sequences, enabling the construction and updating of the TKG through the integration of bottom-up and top-down approaches.
The construction process involves several steps. Firstly, the top-down method analyzes and reconstructs the data. It differentiates between static and temporal data in the block coating process and extracts ontology information from heterogeneous data sources to construct the coating knowledge ontology. Then, the bottom-up method is employed for knowledge extraction. By cleaning the data, structured, semi-structured, and unstructured data are extracted using techniques such as D2R [32,33], extracting schema [34,35], and NLP [36] for triple extraction and refining the coating knowledge ontology. The extracted entities are then integrated into the ontology and stored in a graph database for visualization. Finally, feature extraction and embedded quantification are applied to the knowledge graph, facilitating knowledge exploitation and reuse. This approach inductively constructs an ontology layer through data analysis and knowledge extraction. The updated ontology layer allows for iterative updates and entity populations using new knowledge and data.
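For structured sources, the bottom-up triple extraction step can be sketched as follows (a simplified illustration; the column names and relationship labels are hypothetical, and a real pipeline would rely on the D2R and NLP tooling described above):

```python
def extract_triples(rows):
    """Map structured workshop records to (head, relation, tail) triples.
    Column names here are hypothetical examples, not the paper's schema."""
    triples = []
    for row in rows:
        block = row["block_id"]
        triples.append((block, "processedAt", row["station"]))
        triples.append((block, "usesPaint", row["paint"]))
        if row.get("finished_at"):  # a temporal relationship carries a timestamp
            triples.append((block, ("finishedAt", row["finished_at"]), row["station"]))
    return triples

rows = [
    {"block_id": "B12", "station": "blasting", "paint": "shop primer",
     "finished_at": "2023-12-01"},
    {"block_id": "B12", "station": "full painting", "paint": "epoxy",
     "finished_at": None},
]
triples = extract_triples(rows)
```

Extracted triples would then be deduplicated and integrated into the ontology before being stored in the graph database.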

3.2.2. Ontology-Based Knowledge Modeling for Ship Block Coating

The TKG of block coating can be logically structured into two layers: the schema layer (also known as the ontology layer) and the data layer. The data layer serves as the knowledge base where all triple information is stored, while the schema layer constitutes the essence of the knowledge graph by refining the knowledge structure of the data layer. Typically, the schema layer is stored with the aid of an ontology library which enables the establishment of constraints and rules. These constraints and rules serve to regulate entities, relationships, entity attributes, and the connections between attribute values. Additionally, they allow for reasoning capabilities within the knowledge graph.
Ontology models are widely used to formalize domain knowledge and its associations at the engineering semantics level. They have become an essential tool for extracting, understanding, processing, and storing knowledge in various domains. An ontology model enables the accurate representation of complex relationships and rich semantic information within a specific domain. Furthermore, it can reveal hidden information and semantics by leveraging rule-based reasoning. In this paper, we construct an ontology-based knowledge model for the ship block coating process. To facilitate knowledge query and management, we utilize Protégé, an open-source ontology editor, for constructing the ontology model of the block coating process.
To capture the characteristics of ship block coating, the ontology of block coating is divided into two distinct parts: the flow ontology and the resource ontology. This division is achieved through the definition of ontology elements. The flow ontology primarily represents the temporal sequence relationship, while the resource ontology illustrates the static resource dependency relationship. The flow ontology of the block coating process is centered on the ship and contains ontology nodes and relationships related to the coating steps. According to the defined inter-node relationships, the flow ontologies of different stages are associated to form a semantic network. Figure 4 illustrates the structure between the ontologies and entities pertaining to a specific block coating process.

3.2.3. Construction of KG for Ship Block Coating

To validate the method’s feasibility, we used the 81200DWT bulk carrier (Chengxi Shipyard Co., Ltd., Wuxi, China) double-bottom block coating process data (refer to Table 3) as an example to construct the coating knowledge ontology. By analyzing the data structure and the linking relationship between the tables, we established connections between the entities of the block coating process and the block coating paint summary tables. To achieve visualization, we employed the Neo4j graph database management system. As depicted in Figure 5, the knowledge graph consists of 991 nodes and encompasses 3 relationship types.
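The loading step into Neo4j can be sketched by generating Cypher MERGE statements from extracted triples (illustrative only; the `Entity` label and `name` property are our own choices, not the schema used for the 991-node graph):

```python
def triples_to_cypher(triples):
    """Emit one Cypher MERGE statement per (head, relation, tail) triple,
    suitable for loading into Neo4j. Label and property names are illustrative."""
    stmts = []
    for head, rel, tail in triples:
        stmts.append(
            f'MERGE (h:Entity {{name: "{head}"}}) '
            f'MERGE (t:Entity {{name: "{tail}"}}) '
            f'MERGE (h)-[:{rel}]->(t)'
        )
    return stmts

stmts = triples_to_cypher([("Block B12", "HAS_STATION", "Blasting")])
```

Using MERGE rather than CREATE keeps repeated loads idempotent, so re-running the import does not duplicate nodes.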

4. Improving KEQA Based on AHP

Question answering based on the knowledge graph (QA-KG) [37,38,39] is a significant application area for knowledge reuse. The question answering of the block coating knowledge graph aims to answer some natural language questions by using the facts in the knowledge graph. It helps users to access valuable knowledge in the knowledge graph more efficiently and easily.
Although the KEQA [28] framework can address simple question answering in QA-KG, it has a limitation: the predefined weight values in the joint distance metric formula are subjective and need to be reset when using datasets with different structures. To overcome this issue, this paper introduces the AHP [40,41] to enhance the KEQA framework. With this approach, the contribution of each item in the joint distance metric formula is calculated to determine the predefined weights. The effectiveness and accuracy of this method are validated on three datasets. To provide a comprehensive overview, Table 4 summarizes the important symbols used in this section.

4.1. KEQA Framework

The main idea of the KEQA is shown in Figure 6. The triples in the knowledge graph can be embedded into two low-dimensional spaces, and each fact $(h, l, t)$ can be represented as three latent vectors, i.e., $(e_h, r_l, e_t)$. Thus, given a question, it can be answered correctly if the corresponding head entity representation, $e_h$, and relationship representation, $r_l$, can be predicted.
The KEQA achieves its goal in three steps:
  • The KEQA utilizes embedding representations to train a relationship learning model, which takes the problem as input and predicts relationship representations in the KG embedding space as output. In a similar manner, a head entity learning model can be constructed to predict the head entity representation of the problem.
  • Given the typically large number of entities in a KG, the KEQA designs the head entity detection model to reduce the number of candidate head entities. Several head entity tokens in the problem are identified as predicted head entity names, and then the search space is reduced from whole entities to multiple entities with the same or similar names.
  • Using the relation function of the KG embedding algorithm, the tail entity representation for the question is computed. Finally, based on the designed joint distance metric, the fact in the KG closest to the predicted fact is returned as the answer.

4.2. KEQA Model Based on Bi-LSTM

4.2.1. Head Entity and Relationship Learning Models

The head entity and relationship learning model is a neural network model that trains head entity vectors and relationship vectors based on a given simple problem. In this model, the head entity vectors and relationship vectors are trained separately using the same network architecture, as shown in Figure 7. The model primarily utilizes the bidirectional long short-term memory network (Bi-LSTM) [42] and the attention layer [43].
Taking the relationship learning model as an example, given a question Q of length $t$, each word $x_j, j = 1, \dots, t$ is first mapped to a word embedding vector $e_j$ using the pre-trained GloVe model [44]. Then, the Bi-LSTM is used to learn the sequence of forward hidden states $(\overrightarrow{h}_1, \overrightarrow{h}_2, \dots, \overrightarrow{h}_t)$ and the sequence of backward hidden states $(\overleftarrow{h}_1, \overleftarrow{h}_2, \dots, \overleftarrow{h}_t)$, and the forward and backward hidden state vectors are concatenated to obtain the output, $h_j$, of the Bi-LSTM network. In the attention layer, the attention weights, $\alpha_j$, are calculated from the values of $h_j$ and $e_j$; the attention weights are applied to $h_j$ and concatenated with the word embeddings, $e_j$, to obtain the output, $s_j$, of the attention layer, and then a fully connected layer is applied to obtain the result, $r_j \in \mathbb{R}^{d \times 1}$. Finally, the representations, $r_j$, of all positions are averaged to obtain a predicted relationship vector representation, $\hat{R}_l$, comparable to the pre-embedded relationship vector.
In the Bi-LSTM layer, $a_j$ is the candidate information at position $j$, $f_j$ is the forget gate, $i_j$ is the input gate, $o_j$ is the output gate, $c_j$ is the cell state vector, $\overrightarrow{h}_j$ and $\overleftarrow{h}_j$ are the hidden state vectors of position $j$ in the forward and backward LSTM passes, $h_j$ is the output of the Bi-LSTM layer, $\odot$ is the Hadamard product, $[\cdot,\cdot]$ concatenates two vectors into a new vector, and $W$ and $b$ are the weights and biases. The formulas [28] are as follows.
$a_j = \tanh(W_a[e_j, h_{j-1}] + b_a)$
$f_j = \mathrm{sigmoid}(W_f[e_j, h_{j-1}] + b_f)$
$i_j = \mathrm{sigmoid}(W_i[e_j, h_{j-1}] + b_i)$
$o_j = \mathrm{sigmoid}(W_o[e_j, h_{j-1}] + b_o)$
$c_j = f_j \odot c_{j-1} + i_j \odot a_j$
$\overrightarrow{h}_j = o_j \odot \tanh(c_j)$
$h_j = [\mathrm{sigmoid}(W_h \overrightarrow{h}_j + b_h),\ \mathrm{sigmoid}(W_h \overleftarrow{h}_j + b_h)]$
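The gate equations above describe one step of a standard LSTM cell; a minimal NumPy sketch (with arbitrary illustrative dimensions, not the paper's configuration) is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(e_j, h_prev, c_prev, W, b):
    """One forward LSTM step. W maps each gate name ('a','f','i','o')
    to a weight matrix applied to the concatenation [e_j, h_prev]."""
    x = np.concatenate([e_j, h_prev])      # [e_j, h_{j-1}]
    a = np.tanh(W["a"] @ x + b["a"])       # candidate information a_j
    f = sigmoid(W["f"] @ x + b["f"])       # forget gate f_j
    i = sigmoid(W["i"] @ x + b["i"])       # input gate i_j
    o = sigmoid(W["o"] @ x + b["o"])       # output gate o_j
    c = f * c_prev + i * a                 # Hadamard products for the cell state
    h = o * np.tanh(c)                     # hidden state of this position
    return h, c

d_in, d_h = 4, 3                           # toy dimensions
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((d_h, d_in + d_h)) * 0.1 for k in "afio"}
b = {k: np.zeros(d_h) for k in "afio"}
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), W, b)
```

The backward pass runs the same cell over the reversed sequence; concatenating the two hidden states gives the Bi-LSTM output $h_j$.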
In the attention layer, $q_j$ is the attention score of position $j$, computed with the following formulas [28].
$q_j = \tanh(W_q[e_j, h_j] + b_q)$
$\alpha_j = \dfrac{\exp(q_j)}{\sum_{i=1}^{t}\exp(q_i)}$
$s_j = [e_j,\ \alpha_j h_j]$
At the fully connected layer, the target vector of position j is computed with the following formula [28].
$r_j = \mathrm{sigmoid}(W_r s_j + b_r)$
The final output is obtained at the output layer, where t is the length of the sentence with the following formula [28].
$\hat{R}_l = \dfrac{1}{t}\sum_{j=1}^{t} r_j^{\mathsf{T}}$
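The attention, fully connected, and output-layer steps above can be sketched in NumPy as follows (an illustration under our own toy dimensions; in the real model these weights are trained):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_output(E, H, Wq, bq, Wr, br):
    """E: (t, d_e) word embeddings e_j; H: (t, d_h) Bi-LSTM outputs h_j.
    Returns the averaged predicted relationship vector R_l_hat."""
    X = np.concatenate([E, H], axis=1)              # [e_j, h_j] per position
    q = np.tanh(X @ Wq.T + bq).ravel()              # scalar attention score q_j
    alpha = np.exp(q) / np.exp(q).sum()             # softmax weights alpha_j
    S = np.concatenate([E, alpha[:, None] * H], 1)  # s_j = [e_j, alpha_j * h_j]
    R = sigmoid(S @ Wr.T + br)                      # fully connected layer -> r_j
    return R.mean(axis=0)                           # average over all positions

rng = np.random.default_rng(1)
t, d_e, d_h, d = 5, 4, 6, 3
E, H = rng.standard_normal((t, d_e)), rng.standard_normal((t, d_h))
Wq, bq = rng.standard_normal((1, d_e + d_h)), np.zeros(1)
Wr, br = rng.standard_normal((d, d_e + d_h)), np.zeros(d)
R_hat = attention_output(E, H, Wq, bq, Wr, br)
```

Since every $r_j$ passes through a sigmoid, the averaged prediction lies in $(0, 1)^d$.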
All the weights and biases in this model are calculated from the training data, where all the entity and relationship embedded representations $(R, E)$ are obtained by pre-training with the TransE model [45].

4.2.2. Head Entity Detection Model

In this step, the goal of the model is to select one or several consecutive words as the name of the head entity in the question so that the search space can be narrowed down from the whole entity set to multiple entities with the same or similar names.
In this paper, a model based on the Bi-LSTM is used to perform the head entity token detection task, and its structure is shown in Figure 8. It has a similar structure to the relationship/head entity learning model but without the attention layer. First, the model maps the question into a sequence of word embedding vectors, and the Bi-LSTM is applied to obtain the output $h_j$. Then, a fully connected layer and a SoftMax layer are applied to $h_j$ to obtain the target vector $V_j \in \mathbb{R}^{2 \times 1}$. In this way, the token at each position is classified: one or more tokens are identified as head entity tokens, denoted as $HED_{entity}$, and tokens at other positions are identified as non-entity tokens, denoted as $HED_{non}$.
In addition, we use the questions in Q and their head entity names as training data to train the head entity detection model. Since the words of entity names in these questions are continuous, the trained model will, with high probability, also return continuous words as $HED_{entity}$. If a discontinuous $HED_{entity}$ is returned, each continuous part of it will be treated as a separate head entity name.
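The splitting of a discontinuous $HED_{entity}$ into continuous parts can be sketched as follows (the label encoding and helper name are ours):

```python
def entity_spans(labels):
    """Group consecutive head-entity tokens into continuous spans;
    each continuous run becomes a separate head entity name candidate.
    labels: per-token classification, 1 = HED_entity, 0 = HED_non."""
    spans, current = [], []
    for idx, lab in enumerate(labels):
        if lab == 1:
            current.append(idx)
        elif current:
            spans.append(current)   # close the current run
            current = []
    if current:
        spans.append(current)       # close a run ending at the last token
    return spans

# e.g., tokens 3-4 and token 7 were classified as entity tokens,
# so two separate head entity name candidates are produced
spans = entity_spans([0, 0, 0, 1, 1, 0, 0, 1])
```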

4.3. Joint Distance Metric Based on AHP

For each new question, we can predict its relationship and head entity representations, as well as its candidate head entities, through the KEQA algorithm described above. Since the tail entity obtained from the same head entity and relationship is not always unique, the best-matching fact is selected by computing a joint distance metric. However, the joint distance metric in the KEQA algorithm is highly dependent on its weights, which are generated by predefined methods and are highly subjective. The AHP-KEQA fully considers the meaningful relationship information preserved by the KG embedding representation and, for the first time, designs the predefined weights using the AHP. This makes the joint distance metric more objective and reasonable, and saves time in model optimization. The joint distance metric formula [28] is as follows.
$$
\min_{(h,l,t)\in C}\;
\alpha_1 \left\| \mathbf{r}_l - \hat{\mathbf{r}}_l \right\|_2
+ \alpha_2 \left\| \mathbf{e}_h - \hat{\mathbf{e}}_h \right\|_2
+ \alpha_3 \left\| f(\mathbf{e}_h, \mathbf{r}_l) - \hat{\mathbf{e}}_t \right\|_2
- \alpha_4\, \mathrm{sim}\!\left[ n(h), HED_{entity} \right]
- \alpha_5\, \mathrm{sim}\!\left[ n(l), HED_{non} \right]
$$
where the first term is the relationship loss, which reduces the distance between the relationship extracted by the model and its corresponding representation in the knowledge graph embedding space. The second term is the head entity loss, which reduces the distance between the head entity extracted by the model and its corresponding representation in the knowledge graph embedding space. The third term is the tail entity loss, which reduces the distance between the predicted tail entity representation and the corresponding representation in the knowledge graph embedding space. The fourth term rewards candidate facts whose head entity name matches the entity tokens returned by the head entity detection model, and the fifth term rewards those whose relationship name matches the non-entity tokens. The function sim[,] measures the similarity of two strings, and the function n() returns the name of an entity or relationship [24]. The weight parameters α₁, α₂, α₃, α₄, α₅ constrain each term in the joint distance metric formula and satisfy α₁ + α₂ + α₃ + α₄ + α₅ = 1.
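As an illustration, this scoring step can be sketched in pure Python. The helper names, the toy two-dimensional embeddings, and the example weights below are hypothetical; `difflib` stands in for the sim[,] string-similarity function, and the TransE relation function f(e_h, r_l) = e_h + r_l is assumed, matching the embedding method used later in the experiments:

```python
from difflib import SequenceMatcher
from math import dist  # Euclidean distance between two equal-length vectors

def sim(a, b):
    """String similarity in [0, 1], standing in for sim[,] in the metric."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def joint_distance(fact, r_hat, e_hat, hed_entity, hed_non, alphas):
    """Score one candidate fact; lower is better.

    fact carries the KG embeddings e_h, r_l and the names n(h), n(l).
    Under the TransE assumption, the predicted tail is e_hat + r_hat and
    the candidate's tail estimate is e_h + r_l.
    """
    a1, a2, a3, a4, a5 = alphas
    e_h, r_l = fact["e_h"], fact["r_l"]
    t_hat = [x + y for x, y in zip(e_hat, r_hat)]   # predicted tail
    t_fact = [x + y for x, y in zip(e_h, r_l)]      # f(e_h, r_l) for this fact
    return (a1 * dist(r_l, r_hat)
            + a2 * dist(e_h, e_hat)
            + a3 * dist(t_fact, t_hat)
            - a4 * sim(fact["head_name"], hed_entity)
            - a5 * sim(fact["rel_name"], hed_non))

def best_fact(candidates, r_hat, e_hat, hed_entity, hed_non, alphas):
    """Return the candidate fact that minimizes the joint distance metric."""
    return min(candidates, key=lambda f: joint_distance(
        f, r_hat, e_hat, hed_entity, hed_non, alphas))
```

Given the predicted representations and the HED output for a question, `best_fact` scans the candidate set C and returns the fact with the smallest joint distance, which is the selection step the metric formalizes.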

4.3.1. Calculation of Significance

The AHP is a comprehensive evaluation method for systems analysis and decision making created by the American operations researcher T.L. Saaty in the 1970s; it provides a principled way to quantify qualitative problems. When using the AHP, experts typically rate the importance of different objects and then compare them pairwise to construct the judgment matrix. The algorithm proposed in this article, however, can measure the performance of each function term on the dataset (denoted as a) directly, which replaces the subjective process of expert scoring and represents the importance of each object. But expert scores range from 1 to 9, whereas a ranges from 0 to 1 and cannot be compared pairwise to obtain the judgment matrix directly. The importance formula proposed in this article transforms a into the entries of the judgment matrix. The formula is shown below.
$$
C_{ij} =
\begin{cases}
\operatorname{round}\big((a_i - a_j) \times 10\big) + 1, & a_i - a_j \ge 0 \\[4pt]
\dfrac{1}{\operatorname{round}\big((a_j - a_i) \times 10\big) + 1}, & a_i - a_j < 0
\end{cases}
$$
where a_i is the performance of the i-th function term on the dataset, round denotes rounding to the nearest integer, and C_ij denotes the importance of the performance of the i-th function term relative to that of the j-th. C_ij can be calculated for every pair according to the above formula, yielding the importance judgment matrix A between the performances of the different function terms, as shown in Table 5.
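A minimal sketch of the importance formula follows. The performance scores `a` used in the test below are illustrative placeholders (the measured values from Table 8 are not reproduced here), chosen so that the resulting matrix matches the judgment matrix reported later in Section 5.2:

```python
def importance(ai, aj):
    """Map a performance gap a_i - a_j onto a 1-9-style judgment value.

    Non-negative gaps give round((a_i - a_j) * 10) + 1; negative gaps give
    the reciprocal, so C_ji == 1 / C_ij and the diagonal C_ii == 1,
    as the AHP requires.
    """
    gap = ai - aj
    if gap >= 0:
        return round(gap * 10) + 1
    return 1 / (round(-gap * 10) + 1)

def judgment_matrix(a):
    """Build the full pairwise judgment matrix from performance scores a."""
    return [[importance(ai, aj) for aj in a] for ai in a]
```

By construction the matrix is reciprocal (C_ij × C_ji = 1), which is the property that lets the transformed scores play the role of expert pairwise comparisons.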

4.3.2. Calculation of Predefined Weight

The importance judgment matrix A can be obtained through the calculation in the previous section, and this section uses the judgment matrix as the basis to calculate the predefined weights of different function terms by using the sum product method. Finally, a consistency test is conducted to verify the reasonableness of the calculated results.
Using the sum product method to calculate predefined weights and perform the consistency test, the formulas [40] are as follows:
  • Each column element of the judgment matrix is normalized to obtain the matrix M. The formula is as follows, where M_ij is the element at the corresponding position in matrix M.
$$
M_{ij} = \frac{C_{ij}}{\sum_{i=1}^{n} C_{ij}}
$$
  • Summing M_ij by rows yields the vector K, which is calculated as follows, where K_i is the element at the corresponding position in vector K.
$$
K_i = \sum_{j=1}^{n} M_{ij}
$$
  • Normalize the vector K to obtain the weight vector W. The formula is as follows, where α_i is the element at the corresponding position in the weight vector W.
$$
\alpha_i = \frac{K_i}{\sum_{i=1}^{n} K_i}
$$
  • Calculate the maximum characteristic root λ_max and carry out the consistency test. The formulas are as follows, where C.I. is the consistency index, R.I. is the random consistency index, and C.R. is the consistency ratio. R.I. is obtained from Table 6, and the consistency test passes when the resulting consistency ratio satisfies C.R. < 0.1.
$$
\lambda_{max} = \frac{1}{n} \sum_{i=1}^{n} \frac{(AW)_i}{W_i}
$$
$$
C.I. = \frac{\lambda_{max} - n}{n - 1}
$$
$$
C.R. = \frac{C.I.}{R.I.}
$$
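The four steps above can be sketched as a single pure-Python routine. The function name `ahp_weights` is a hypothetical helper, and the R.I. values are taken from Table 6:

```python
# Random consistency index R.I. for matrix orders 1..13 (Table 6).
RI = [0, 0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49, 1.51, 1.54, 1.56]

def ahp_weights(A):
    """Sum product method: judgment matrix -> (weights W, lambda_max, C.R.)."""
    n = len(A)
    # Step 1: normalize each column of A to obtain M.
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    M = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    # Step 2: row sums of M give the vector K.
    K = [sum(row) for row in M]
    # Step 3: normalizing K gives the weight vector W.
    W = [k / sum(K) for k in K]
    # Step 4: consistency test via lambda_max, C.I., and C.R. = C.I. / R.I.
    AW = [sum(A[i][j] * W[j] for j in range(n)) for i in range(n)]
    lam = sum(AW[i] / W[i] for i in range(n)) / n
    CI = (lam - n) / (n - 1)
    CR = CI / RI[n - 1]
    return W, lam, CR
```

Feeding in the 5 × 5 judgment matrix reported in Section 5.2 reproduces the weight vector W ≈ (0.3803, 0.0548, 0.3803, 0.0493, 0.1352), λ_max ≈ 5.10, and a consistency ratio well below 0.1.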
In this section, based on an analysis of the KEQA algorithm, the weight solution of the joint distance metric is improved through the AHP. In addition, an importance formula is proposed for the first time to calculate the importance ratio between function terms, so that the weights can be computed objectively and quickly.

5. Verification of Ship Block Coating QA-KG Based on AHP-KEQA

To evaluate the effectiveness of the enhanced algorithm, this section provides a description of the dataset employed in the experiment, including its characteristics, as well as some fundamental experiment settings. The validation of the AHP-KEQA algorithm’s accuracy is conducted using the dataset. The formulation of the AHP-KEQA algorithm is as follows (Algorithm 1):
Algorithm 1: The proposed AHP-KEQA framework
Input: KG, E, R, Q (entities, relationships, and questions).
Output: head entity h* and relationship l*.
/* Training the relationship learning model: */
1 for each question in Q do
2   Take the t tokens of the question as the input and its relationship l as the label, as shown in Figure 7;
3   Update the weight matrices {W} and bias terms {b} to minimize the objective function ‖r_l − (1/t) Σ_{j=1}^{t} r_j‖₂;
/* Training the head entity learning model: */
4 for each question in Q do
5   Take the t tokens of the question as the input and its head entity h as the label, as shown in Figure 7;
6   Update the weight matrices and bias terms to minimize the objective function ‖e_h − (1/t) Σ_{j=1}^{t} e_j‖₂;
/* Training the HED model: */
7 for each question in Q do
8   Take the t tokens of the question as the input and its head entity name positions as the label;
9   Update the weight matrices and bias terms, as shown in Figure 8;
/* Question answering process: */
10 Input the question into the relationship learning model to predict r̂_l;
11 Input the question into the head entity learning model to predict ê_h;
12 Input the question into the HED model to obtain HED_entity and HED_non;
13 Find the candidate fact set C from the KG based on HED_entity;
14 Return the fact (h*, l*, t*) in C that minimizes the joint distance metric in Equation (17), with the weights α derived by the AHP.

5.1. Experimental Design

5.1.1. Datasets

The data utilized in this experiment comprise both public datasets and a self-made dataset. In general, validating a proposed method with novel, advanced ship coating process data is more convincing. Reference [46] introduces process planning of non-structured surface spray equipment for ultra-large spaces in ship block manufacturing. Reference [47] tested three kinds of arc-sprayed zinc-aluminum coatings through electrochemical tests and a long-term atmospheric exposure experiment to select the best coating system for the research vessel Yongle. References [48,49,50,51] concern ship anti-fouling coatings, with different specific research contents and methods; all of them aim to develop effective anti-fouling coatings that reduce biological fouling on ship surfaces and lower friction resistance.
The above literature describes advanced coating equipment and coating processes in the field of ship coating, providing some usable data. However, it is difficult to obtain a large amount of data for constructing datasets from these sources alone. Therefore, this article focuses on the block coating process table of an 81200DWT bulk carrier, integrating some advanced ship coating methods and optimization data to create a ship block coating dataset, as illustrated in Figure 9. The entities, relationships, and their numbers in the dataset are obtained from the KG of ship block coating constructed earlier. The questions in the dataset are proposed and labeled by experts in the field, where the label "I" marks entity tokens and the label "O" marks any text other than an entity. In addition, the public datasets used are FB2M and FB5M, which are subsets of Freebase. It is important to note that duplicate facts in these datasets have been eliminated to enhance data quality.

5.1.2. Evaluating Indicator

This experiment focuses on three datasets. The input questions are encoded as word vectors using the GloVe algorithm. The theoretical outputs R and E are learned using the TransE algorithm, with the dimension of the KG embedding representations set to 250. The AHP-KEQA algorithm is employed to train, test, and validate the head entities and relationships of each question. The performance of the final model is evaluated by its accuracy in predicting the facts, as given by the following formula [52].
$$
ACC = \frac{TP + TN}{TP + TN + FP + FN}
$$
Table 7 shows the confusion matrix used to evaluate the accuracy.
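A minimal sketch of the accuracy computation from the confusion matrix counts; the helper names are hypothetical:

```python
def confusion(gold, pred):
    """Tally the confusion matrix for binary 'fact correctly predicted' labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    tn = sum(1 for g, p in zip(gold, pred) if not g and not p)
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)
    return tp, tn, fp, fn

def accuracy(tp, tn, fp, fn):
    """ACC = (TP + TN) / (TP + TN + FP + FN) over the confusion matrix."""
    return (tp + tn) / (tp + tn + fp + fn)
```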

5.2. Effectiveness of AHP-KEQA

In this paper, we distinguish the importance of different objective function terms based on the performance of each objective function term in the joint distance metric formula on the dataset. The performance of different objective function terms on the dataset FB2M is shown in Table 8.
From Equation (18), we obtain the judgment matrix
$$
A = \begin{bmatrix}
1 & 6 & 1 & 7 & 4 \\
1/6 & 1 & 1/6 & 1 & 1/3 \\
1 & 6 & 1 & 7 & 4 \\
1/7 & 1 & 1/7 & 1 & 1/4 \\
1/4 & 3 & 1/4 & 4 & 1
\end{bmatrix}.
$$
The calculation from Equation (19) yields the normalized matrix
$$
M = \begin{bmatrix}
0.3907 & 0.3529 & 0.3907 & 0.3500 & 0.4174 \\
0.0651 & 0.0588 & 0.0651 & 0.0500 & 0.0348 \\
0.3907 & 0.3529 & 0.3907 & 0.3500 & 0.4174 \\
0.0558 & 0.0588 & 0.0558 & 0.0500 & 0.0261 \\
0.0977 & 0.1765 & 0.0977 & 0.2000 & 0.1043
\end{bmatrix}.
$$
The calculation from Equation (20) yields the vector K = (1.9017, 0.2738, 1.9017, 0.2465, 0.6762)ᵀ, and Equation (21) yields the weight vector W = (0.3803, 0.0548, 0.3803, 0.0493, 0.1352)ᵀ. Equation (22) gives the largest characteristic root λ_max = 5.1011, and Equation (23) gives the consistency index C.I. = 0.0253. From Table 6, R.I. = 1.12, so Equation (24) gives the consistency ratio C.R. = 0.023 < 0.1, which passes the consistency test. After substituting these weights into the joint distance metric formula, the KEQA algorithm optimized by the AHP improves the accuracy of the final model on the FB2M dataset; the same steps are then applied to validate it on the FB5M dataset and the block coating dataset.
To verify the effectiveness of the proposed optimization method, some algorithms were used as a baseline, and the performance of the algorithms on different datasets is shown in Table 9.
The validation results on the public datasets demonstrate the superior performance of the AHP-KEQA algorithm compared with the baseline algorithms in all cases. Specifically, the AHP-KEQA algorithm achieves slight accuracy improvements of 1‰ and 7‰ over the KEQA algorithm on the FB2M and FB5M datasets, respectively. The evaluation on the block coating dataset also confirms that the AHP-KEQA algorithm outperforms the baselines, with a more significant accuracy improvement of 1.9% over the KEQA algorithm. Several factors contribute to this higher accuracy. Firstly, unlike the other baseline algorithms, the KEQA algorithm projects triples into the word vector space using the word embedding method, trains the head entity and relationship models along with the Bi-LSTM-based head entity detection model, and introduces the joint distance metric. This comprehensive consideration of the structural characteristics of the knowledge graph allows the KEQA algorithm to outperform the other baselines in accuracy. Building on this, the AHP-KEQA algorithm further improves accuracy by using the AHP to analyze the importance of each term in the joint distance metric formula. This analysis yields weight factors that prove more effective than the subjectively assigned predefined weights of the KEQA algorithm. Consequently, the AHP-KEQA algorithm surpasses the baseline models and achieves a higher accuracy rate.
Based on the data analysis, the performance of the AHP-KEQA algorithm varies across datasets. Specifically, when trained on the FB2M and FB5M datasets, the AHP-KEQA algorithm yields relatively lower accuracy than on the block coating dataset. This discrepancy can be attributed to the following reason: FB2M and FB5M are public datasets that contain a large amount of noisy data, whereas the block coating dataset is a self-made domain dataset from which obvious noisy data were filtered out during construction.

6. Conclusions and Future Prospects

In this paper, we propose integrating the TKG into the field of ship coating, leveraging the specific characteristics of the ship coating process. Our contribution includes the first dynamic updating and generation method for the coating process knowledge graph, upon which we construct a TKG for block coating. To address the subjectivity of the weight parameters in the joint distance metric of the KEQA algorithm, we propose an improvement that incorporates the AHP: an importance judgment matrix is constructed, and the weight parameters are solved from the importance ratios. Furthermore, we use the 81200DWT bulk carrier block coating process data, together with advanced ship coating methods and optimization data, to construct a dataset for the field of ship coating; it is used together with the public datasets to validate the AHP-KEQA algorithm. The experimental results demonstrate that the AHP-KEQA algorithm not only reduces the optimization time of the algorithmic model but also achieves higher accuracy than the other algorithms, which validates its effectiveness in addressing ship block coating technological issues. However, this article still has certain limitations. Firstly, due to limited time and effort, the KG of ship block coating constructed in this article is not comprehensive enough. Secondly, as question answering algorithms rapidly iterate and update, more advanced algorithms should be adopted. Thirdly, whether the importance formula proposed in this article is suitable for a wider range of case studies remains to be verified.
The future research in ship coating will focus on constructing a comprehensive, precise, and extensive knowledge graph of the coating process. This will involve integrating data from multiple heterogeneous sources to cover a broader range of ship coating knowledge, ensuring the accuracy and timeliness of the knowledge. By doing so, a more robust and reliable knowledge base can be established to enhance the performance of the question answering algorithms. Furthermore, the development of QA-KG algorithms will emphasize improving the comprehension of questions. This includes enhancing the algorithms’ ability to understand the intent behind a question, comprehend the complex structure of a question, and capture relevant contextual information. Additionally, there will be a focus on enabling more in-depth reasoning and logical inference capabilities to answer complex questions that require multi-step reasoning. These advancements will enable the algorithms to provide more comprehensive and accurate answers.

Author Contributions

H.B. revised and completed the paper; Y.P. wrote the first draft of the paper; Q.G. collected and sorted the data; H.Z. provided funding for the paper. All authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the financial support from the Ministry of Industry and Information Technology High-Tech Ship Research Project: Research on the Development and Application of a Digital Process Design System for Ship Coating (No.: MC-202003-Z01-02), the National Defense Basic Scientific Research Project: Research and Development of an Intelligent Methanol-Fueled New Energy Ship (No.: JCKY2021414B011), and the RO-RO Passenger Ship Efficient Construction Process and Key Technology Research (No.: CJ07N20).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this work.

Abbreviations

KGKnowledge graph
TKGTemporal knowledge graph
KEQAKnowledge embedding question answering
AHPAnalytic hierarchy process
AHP-KEQAKEQA method improved by the AHP
CNNConvolutional neural network
LSTMLong short-term memory network
Bi-LSTMBidirectional long short-term memory network
QA-KGQuestion answering based on the knowledge graph

References

  1. Dev, A.K.; Saha, M. Analysis of Hull Coating Renewal in Ship Repairing. J. Ship Prod. Des. 2017, 33, 197–211. [Google Scholar] [CrossRef]
  2. Otter, T. DOOML: A New Database & Object-Oriented Modeling Language for Database-Driven Web Application Design and Development. Int. J. Softw. Eng. Appl. 2022, 13, 23–31. [Google Scholar] [CrossRef]
  3. He, B.; Deng, Z.; Lv, H. Object-oriented Knowledge Modelling for Conceptual Design of Mechanisms. Int. J. Database Theory Appl. 2013, 6, 67–84. [Google Scholar] [CrossRef]
  4. Bolbakov, R.G.; Sinitsyn, A.V.; Tsvetkov, V.Y. Onomasiological modeling in the information field. J. Phys. Conf. Ser. 2022, 2373, 022010. [Google Scholar] [CrossRef]
  5. He, Y.; Hao, C.; Wang, Y.; Li, Y.; Wang, Y.; Huang, L.; Tian, X. An ontology-based method of knowledge modelling for remanufacturing process planning. J. Clean. Prod. 2020, 258, 120952. [Google Scholar] [CrossRef]
  6. Zhong, S.; Wen, Y.; Huang, Y.; Cheng, X.; Huang, L. Ontological Ship Behavior Modeling Based on COLREGs for Knowledge Reasoning. J. Mar. Sci. Eng. 2022, 10, 203. [Google Scholar] [CrossRef]
  7. Eichler, R.; Giebler, C.; Gröger, C.; Schwarz, H.; Mitschang, B. Modeling metadata in data lakes—A generic model. Data Knowl. Eng. 2021, 136, 101931. [Google Scholar] [CrossRef]
  8. Arora, V.; Ventresca, M. Action-based modeling of complex networks. Sci. Rep. 2017, 7, 6673. [Google Scholar] [CrossRef]
  9. Zhao, J.; Deng, Y. Complex network modeling of evidence theory. IEEE Trans. Fuzzy Syst. 2020, 29, 3470–3480. [Google Scholar] [CrossRef]
  10. Chen, L.; Yu, X.; Sun, C. Characteristic modeling approach for complex network systems. IEEE Trans. Syst. Man Cybern. A 2017, 48, 1383–1388. [Google Scholar] [CrossRef]
  11. Hao, X.; Ji, Z.; Li, X.; Yin, L.; Liu, L.; Sun, M.; Liu, Q.; Yang, R. Construction and Application of a Knowledge Graph. Remote Sens. 2021, 13, 2511. [Google Scholar] [CrossRef]
  12. Liu, G.; Hong, G.; Huang, M.; Xia, T.; Chen, Z. Integrated modelling of automobile maintenance expert system based on knowledge graph. J. Phys. Conf. Ser. 2021, 1983, 012118. [Google Scholar] [CrossRef]
  13. Zhu, L.; Li, N.; Bai, L.; Gong, Y.; Xing, Y. stRDFS: Spatiotemporal Knowledge Graph Modeling. IEEE Access 2020, 8, 129043–129057. [Google Scholar] [CrossRef]
  14. Cambria, E.; Ji, S.; Pan, S.; Yu, P. Knowledge graph representation and reasoning. Neurocomputing 2021, 461, 494–496. [Google Scholar] [CrossRef]
  15. Ma, B.; Jiang, T.; Zhou, X.; Zhao, F.; Yang, Y. A Novel Data Integration Framework Based on Unified Concept Model. IEEE Access 2017, 5, 5713–5722. [Google Scholar] [CrossRef]
  16. Xu, C.; Nayyeri, M.; Alkhoury, F.; Yazdi, H.; Lehmann, J. Temporal knowledge graph completion based on time series gaussian embedding. In Proceedings of the Semantic Web–ISWC 2020: 19th International Semantic Web Conference, Athens, Greece, 2–6 November 2020; pp. 654–671. [Google Scholar] [CrossRef]
  17. Ding, Y.; Xu, W.; Liu, Z.; Zhou, Z.; Pham, D. Robotic Task Oriented Knowledge Graph for Human-Robot Collaboration in Disassembly. Procedia CIRP 2019, 83, 105–110. [Google Scholar] [CrossRef]
  18. Chhim, P.; Chinnam, R.; Sadawi, N. Product design and manufacturing process based ontology for manufacturing knowledge reuse. J. Intell. Manuf. 2019, 30, 905–916. [Google Scholar] [CrossRef]
  19. Song, D.; Zhou, B.; Shen, X.; Bao, J.; Zhou, Y. Dynamic Knowledge Graph Modeling Method for Ship Segmentation Manufacturing Process. J. Shanghai Jiao Tong Univ. 2021, 55, 544–556. [Google Scholar] [CrossRef]
  20. Shen, Y.; Ding, N.; Zheng, H.T.; Li, Y.L.; Yang, M. Modeling Relation Paths for Knowledge Graph Completion. IEEE Trans. Knowl. Data Eng. 2021, 33, 3607–3617. [Google Scholar] [CrossRef]
  21. Guo, Y.Y.; Wang, L.; Zhang, Z.L.; Cao, J.H.; Xia, X.H.; Liu, Y. Integrated modeling for retired mechanical product genes in remanufacturing: A knowledge graph-based approach. Adv. Eng. Inform. 2024, 59, 102254. [Google Scholar] [CrossRef]
  22. Yang, M.; Wang, Y.H.; Liang, Y.; Wang, C. A New Approach to System Design Optimization of Underwater Gliders. IEEE-ASME Trans. Mech. 2022, 27, 3494–3505. [Google Scholar] [CrossRef]
  23. Li, D.; Nie, J.H.; Wang, H.; Ren, W.X. Loading condition monitoring of high-strength bolt connections based on physics-guided deep learning of acoustic emission data. Mech. Syst. Signal Process. 2024, 206, 110908. [Google Scholar] [CrossRef]
  24. Yin, W.; Yu, M.; Xiang, B.; Zhou, B.; Schütze, H. Simple Question Answering by Attentive Convolutional Neural Network. In Proceedings of the COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan, 11–16 December 2016; pp. 1746–1756. [Google Scholar] [CrossRef]
  25. Golub, D.; He, X. Character-level question answering with attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA, 1–5 November 2016; pp. 1598–1607. [Google Scholar] [CrossRef]
  26. Bao, J.; Duan, N.; Yan, Z.; Zhou, M.; Zhao, T. Constraint-based question answering with knowledge graph. In Proceedings of the COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan, 11–16 December 2016; pp. 2503–2514. [Google Scholar]
  27. Lukovnikov, D.; Fischer, A.; Lehmann, J.; Auer, S. Neural network-based question answering over knowledge graphs on word and character level. In Proceedings of the 26th International Conference on World Wide Web, Perth, WA, Australia, 3–7 April 2017; pp. 1211–1220. [Google Scholar] [CrossRef]
  28. Huang, X.; Zhang, J.; Li, D. Knowledge Graph Embedding Based Question Answering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, VIC, Australia, 11–15 January 2019; pp. 105–113. [Google Scholar] [CrossRef]
  29. Bu, H.; Yuan, X.; Niu, J.; Yu, W.; Ji, X.; Lyu, H.; Zhou, H. Ship painting process design based on IDBSACN-RF. Coatings 2021, 11, 1458. [Google Scholar] [CrossRef]
  30. Yuan, X.; Bu, H.; Niu, J.; Yu, W.; Zhou, H.; Ji, X.; Ye, P. Coating matching recommendation based on improved fuzzy comprehensive evaluation and collaborative filtering algorithm. Sci. Rep. 2021, 11, 14035. [Google Scholar] [CrossRef]
  31. Shen, X.; Li, X.; Zhou, B.; Jiang, Y.; Bao, J. Dynamic knowledge modeling and fusion method for custom apparel production process based on knowledge graph. Adv. Eng. Inform. 2023, 55, 101880. [Google Scholar] [CrossRef]
  32. Bizer, C.; Cyganiak, R. D2R Server—Publishing Relational Databases on the Semantic Web. In Proceedings of the 5th International Semantic Web Conference, Athens, GA, USA, 5–9 November 2006. [Google Scholar]
  33. Yu, Y.; Zhang, J. Constructing government procurement knowledge graph based on crawler data. J. Phys. Conf. Ser. 2020, 1693, 012032. [Google Scholar] [CrossRef]
  34. Yang, X.; Yang, J.; Li, R.; Li, H.; Zhang, H.; Zhang, Y. Complex Knowledge Base Question Answering for Intelligent Bridge Management Based on Multi-Task Learning and Cross-Task Constraints. Entropy 2022, 24, 1805. [Google Scholar] [CrossRef] [PubMed]
  35. Chang, C.; Hsu, C.; Lui, S. Automatic information extraction from semi-structured web pages by pattern discovery. Decis. Support Syst. 2003, 35, 129–147. [Google Scholar] [CrossRef]
  36. Tixier, A.J.; Hallowell, M.R.; Rajagopalan, B.; Bowman, D. Construction Safety Clash Detection: Identifying Safety Incompatibilities among Fundamental Attributes using Data Mining. Automat. Constr. 2017, 74, 39–54. [Google Scholar] [CrossRef]
  37. Wang, T.; Huang, R.; Wang, H.; Zhi, H.; Liu, H. Multi-Hop Knowledge Graph Question Answer Method Based on Relation Knowledge Enhancement. Electronics 2023, 12, 1905. [Google Scholar] [CrossRef]
  38. Liu, C.; Ji, X.; Dong, Y.; He, M.; Yang, M.; Wang, Y. Chinese mineral question and answering system based on knowledge graph. Expert. Syst. Appl. 2023, 231, 120841. [Google Scholar] [CrossRef]
  39. Jiang, Z.; Chi, C.; Zhan, Y. Research on medical question answering system based on knowledge graph. IEEE Access 2021, 9, 21094–21101. [Google Scholar] [CrossRef]
  40. Mu, E.; Pereyra-Rojas, M. Understanding the Analytic Hierarchy Process. In Practical Decision Making, 2nd ed.; Springer: Cham, Switzerland, 2016; pp. 7–22. ISBN 978-3-319-33860-6. [Google Scholar]
  41. Chan, H.; Sun, X.; Chung, S. When should fuzzy analytic hierarchy process be used instead of analytic hierarchy process. Decis. Support Syst. 2019, 125, 113114. [Google Scholar] [CrossRef]
  42. Jiang, Y.; Zhao, T.; Chai, Y.; Gao, P. Bidirectional LSTM-CRF models for keyword extraction in Chinese sport news. In Proceedings of the MIPPR 2019: Pattern Recognition and Computer Vision, Wuhan, China, 2–3 November 2020; pp. 86–92. [Google Scholar] [CrossRef]
  43. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010. [Google Scholar] [CrossRef]
  44. Pennington, J.; Socher, R.; Manning, C. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar] [CrossRef]
  45. Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2787–2795. [Google Scholar]
  46. Qie, J.; Miao, Y.; Liu, H.; Han, T.; Shao, Z.; Duan, J. Design and Process Planning of Non-Structured Surface Spray Equipment for Ultra-Large Spaces in Ship Section Manufacturing. J. Mar. Sci. Eng. 2023, 11, 1723. [Google Scholar] [CrossRef]
  47. Huang, G.-S.; Li, Z.-L.; Zhao, X.-S.; Xin, Y.-L.; Ma, L.; Sun, M.-X.; Li, X.-B. Degradation Behavior of Arc-Sprayed Zinc Aluminum Alloy Coatings for the Vessel Yongle in the South China Sea. Coatings 2023, 13, 1139. [Google Scholar] [CrossRef]
  48. Hu, P.; Xie, Q.; Ma, C.; Zhang, G. Silicone-Based Fouling-Release Coatings for Marine Antifouling. Langmuir 2020, 36, 2170–2183. [Google Scholar] [CrossRef]
  49. Wang, P.; He, B.; Wang, B.; Wang, L.; Yu, H.; Liu, S.; Ye, Q.; Zhou, F. Durable self-polishing antifouling coating based on fluorine-containing pyrrolidone amphiphilic copolymer-functionalized nanosilica. Prog. Org. Coat. 2022, 165, 106706. [Google Scholar] [CrossRef]
  50. Wen, S.; Wang, P.; Wang, L. Preparation and antifouling performance evaluation of fluorine-containing amphiphilic silica nanoparticles. Colloid Surf. A 2021, 611, 125823. [Google Scholar] [CrossRef]
  51. Zhang, J.; Qin, W.; Chen, W.; Feng, Z.; Wu, D.; Liu, L.; Wang, Y. Integration of Antifouling and Anti-Cavitation Coatings on Propellers: A Review. Coatings 2023, 13, 1619. [Google Scholar] [CrossRef]
  52. Wang, Y.B.; You, Z.H.; Yang, S.; Yi, H.C.; Chen, Z.H.; Zheng, K. A deep learning-based method for drug-target interaction prediction based on long short-term memory neural network. BMC Med. Inform. Decis. Mak. 2020, 20, 49. [Google Scholar] [CrossRef]
  53. Bordes, A.; Usunier, N.; Chopra, S.; Weston, J. Large-scale Simple Question Answering with Memory Networks. arXiv 2015, arXiv:1506.02075. [Google Scholar] [CrossRef]
Figure 1. The general process of block coating.
Figure 2. The static data of the ship coating process.
Figure 3. The distribution of data in different dimensions between stations.
Figure 4. The structure between the ontologies and entities of ship block coating knowledge.
Figure 5. The knowledge graph of ship block coating. *—total amount.
Figure 6. The KEQA algorithm framework.
Figure 7. The head entity and relationship learning model.
Figure 8. The head entity detection model.
Figure 9. The block coating dataset.
Table 1. The temporal data of the ship coating process.

| Temporal Data | Comments |
|---|---|
| Planned task durations | Theoretical start and finish times for each step of the coating task |
| Coating progress | Actual start and completion times of the various steps of the coating tasks |
| Environmental parameters | Temperature, humidity, and other environmental parameters that affect coating quality and work safety, monitored in real time |
| Harmful substance content | Levels of harmful substances monitored in real time, mainly volatile organic compounds (VOCs) and particulate matter |
| Spraying parameters | Real-time monitoring of spraying parameters, including nozzle pressure, nozzle distance, spraying speed, etc. |
| Paint rheological properties | Real-time monitoring of paint rheological properties, including viscosity, flow, rheological stress, etc. |
| Paint drying time | Real-time monitoring of the time from the end of spraying to paint drying |
| Coating thickness | Real-time monitoring of the thickness of the coating on the surface of the material after spraying |
Table 2. The relationship table of the ship block coating process between entities.

| Relationship Type | Relationship Name | Relationship Definition |
|---|---|---|
| Static relationship | Has | Indicates the set of basic inherent information such as coating object, coating process parameter, coating process specification, and coating package that the station has |
| | Belong_to | Indicates the ownership relationship between entity nodes |
| | Is | Indicates the attribute–value relationship of an attribute node |
| | Use | Indicates the usage relationship between nodes |
| | Use_state | Annotations shared by several nodes |
| Temporal relationship | FlowTo: T | Indicates the flow relationship between workstations and between tasks at time T |
| | Involved: T | Indicates entity nodes such as personnel, equipment, etc., involved in the coating task at time T |
| | HasWork: T | Indicates that, at time T, there are tasks at the stations that are finished with coating |
| | Is: T | Indicates the value of the attribute parameter generated by the monitoring equipment, etc., in the coating task at time T |
Table 3. The 81200DWT bulk carrier double-bottom block coating process data.

| No. | Name | Painting Station | Area | Method and Grade of 2nd Preparation | Paint Name | Coats No. | Dry Film Thickness (μm) | Total DFT (μm) | Loss Factor | QTY (L) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Bottom of a flat boat | 201 | 211 | B: Sa2.5; O: St3 | Jotaprime 510 | 1 | 150 | 395 | 1.7 | 72 |
| | | 201 | 211 | | Safeguard Plus | 1 | 100 | | 1.7 | 64 |
| | | 201 | 211 | | Seaforce 60 | 1 | 145 | | 1.5 | 79 |
| 2 | NO. 7 inner bottom plate | 201 | 165 | B: Sa2.5; O: St3 | Jotaprime 510 | 1 | 50 | 50 | 2 | 22 |
| 3 | Tube well | 201 | 245 | B: Sa2.5; O: St3 | Jotaprime 510 | 1 | 150 | 150 | 1.8 | 88 |
| 4 | NO. 5(p) ballast tank | 201 | 382 | B: Sa2.5; O: As per PSPC | Jotaprime 510 | 1 | 160 | 320 | 1.8 | 147 |
| | | 201 | 382 | | Jotaprime 510 | 1 | 160 | | 1.85 | 151 |
| 5 | NO. 5(p) ballast tank | 201 | 382 | B: Sa2.5; O: As per PSPC | Jotaprime 510 | 1 | 160 | 320 | 1.8 | 147 |
| | | 201 | 382 | | Jotaprime 510 | 1 | 160 | | 1.85 | 151 |
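The paint quantities (QTY) in Table 3 can be cross-checked with the standard paint-consumption estimate QTY (L) = area × DFT (μm) × loss factor / (10 × VS%), where VS% is the paint's volume solids. VS is not given in the table; the 75% used below is an assumed, typical value for a high-solids epoxy primer, so this is a plausibility check rather than the authors' calculation.

```python
def paint_qty(area_m2, dft_um, loss_factor, volume_solids_pct):
    """Estimated paint consumption in litres for one coat."""
    return area_m2 * dft_um * loss_factor / (10 * volume_solids_pct)

# first coat of row 1 in Table 3: area 211, DFT 150 um, loss factor 1.7
print(round(paint_qty(211, 150, 1.7, 75)))  # 72, matching the tabulated QTY
```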
Table 4. The KEQA framework symbol definitions.

| Notation | Definition |
|---|---|
| KG | knowledge graph |
| (h, l, t) | a fact, i.e., (head entity, relationship, tail entity) |
| Q | a set of simple questions with ground-truth facts |
| C | candidate fact set |
| M | total number of relationships in KG |
| N | total number of entities in KG |
| d | dimension of the embedding representations |
| R ∈ ℝ^(M×d) | embedding representations of all relationships in KG |
| E ∈ ℝ^(N×d) | embedding representations of all entities in KG |
| f(·) | relation function: given (h, l, t), e_t ≈ f(e_h, r_l) |
| r̂_l ∈ ℝ^(1×d) | predicted relationship representation |
| ê_h ∈ ℝ^(1×d) | predicted entity representation |
| HED | head entity detection model |
| HED_entity | head-entity name tokens returned by the HED |
| HED_non | non-entity name tokens returned by the HED |
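The relation function f in Table 4 can be made concrete with a translation-style embedding (TransE-like, as used in KEQA), where a fact (h, l, t) satisfies e_t ≈ e_h + r_l, so answering a question reduces to finding the entity closest to the predicted tail. The toy two-dimensional embeddings below are fabricated for demonstration only.

```python
def f(e_h, r_l):
    """Relation function: predicted tail representation e_t = e_h + r_l."""
    return [a + b for a, b in zip(e_h, r_l)]

def l2(u, v):
    """Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# toy entity and relation embeddings (d = 2)
E = {"Station-201": [1.0, 0.0], "Jotaprime-510": [1.5, 1.0], "Sa2.5": [0.2, 0.9]}
R = {"Use": [0.5, 1.0]}

# answer the query (Station-201, Use, ?) by nearest-neighbour search
e_t_hat = f(E["Station-201"], R["Use"])
best = min(E, key=lambda name: l2(E[name], e_t_hat))
print(best)  # Jotaprime-510
```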
Table 5. The importance judgment matrix.

| | Term 1 | Term 2 | Term 3 | Term 4 | Term 5 |
|---|---|---|---|---|---|
| Term 1 | C_11 | C_12 | C_13 | C_14 | C_15 |
| Term 2 | C_21 | C_22 | C_23 | C_24 | C_25 |
| Term 3 | C_31 | C_32 | C_33 | C_34 | C_35 |
| Term 4 | C_41 | C_42 | C_43 | C_44 | C_45 |
| Term 5 | C_51 | C_52 | C_53 | C_54 | C_55 |
Table 6. The random consistency index (R.I.) values.

| Matrix Order | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| R.I. | 0 | 0 | 0.58 | 0.90 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 | 1.49 | 1.51 | 1.54 | 1.56 |
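The R.I. values in Table 6 enter the standard AHP consistency check: λ_max is estimated from the judgment matrix, then C.I. = (λ_max − n)/(n − 1) and C.R. = C.I./R.I.(n), with the matrix considered acceptably consistent when C.R. < 0.1. A sketch of this check under those standard formulas, with an illustrative 3×3 judgment matrix (not data from the paper):

```python
import math

# R.I. values for matrix orders 1..13 (Table 6)
RI = [0, 0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49, 1.51, 1.54, 1.56]

def consistency_ratio(A):
    n = len(A)
    # priority weights via normalized geometric means of the rows
    gm = [math.prod(row) ** (1 / n) for row in A]
    w = [g / sum(gm) for g in gm]
    # lambda_max approximated as the mean of (A w)_i / w_i
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / RI[n - 1]

A = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]  # a perfectly consistent matrix
print(round(abs(consistency_ratio(A)), 3))  # 0.0, well under the 0.1 threshold
```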
Table 7. The confusion matrix.

| | Positive | Negative |
|---|---|---|
| True | True Positive (TP) | True Negative (TN) |
| False | False Positive (FP) | False Negative (FN) |
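From the four counts in Table 7, the usual evaluation metrics follow directly; the counts used below are illustrative, not results from the paper.

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, p, r, f1 = metrics(tp=90, tn=80, fp=10, fn=20)
print(round(acc, 2), round(p, 2), round(r, 2))  # 0.85 0.9 0.82
```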
Table 8. The accuracy of each term in the joint distance metric equation.

| Term in the Joint Distance Metric | Accuracy When Keeping Only This Term |
|---|---|
| ‖p_l − p̂_l‖₂ | 0.728 |
| ‖e_h − ê_h‖₂ | 0.195 |
| ‖f(e_h, p_l) − ê_t‖₂ | 0.730 |
| sim(n_h, HED_entity) | 0.173 |
| sim(n_l, HED_non) | 0.435 |
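The five terms in Table 8 combine into one joint distance for ranking candidate facts: three embedding distances plus two name-similarity terms, with the similarity terms subtracted so that a better token match lowers the distance. The weights below mark where AHP-KEQA plugs in the AHP-derived weights; the equal weights, the token-overlap similarity, and the toy inputs are all illustrative assumptions.

```python
def l2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def sim(name_tokens, question_tokens):
    """Simple token-overlap similarity in [0, 1] (an assumed stand-in)."""
    name, q = set(name_tokens), set(question_tokens)
    return len(name & q) / len(name) if name else 0.0

def joint_distance(cand, pred, weights=(1, 1, 1, 1, 1)):
    """Weighted sum of the five Table 8 terms; smaller is better."""
    w1, w2, w3, w4, w5 = weights
    return (w1 * l2(cand["p_l"], pred["p_l_hat"])
            + w2 * l2(cand["e_h"], pred["e_h_hat"])
            + w3 * l2(cand["e_t"], pred["e_t_hat"])
            - w4 * sim(cand["n_h"], pred["HED_entity"])
            - w5 * sim(cand["n_l"], pred["HED_non"]))

cand = {"p_l": [0.5, 1.0], "e_h": [1.0, 0.0], "e_t": [1.5, 1.0],
        "n_h": ["station", "201"], "n_l": ["use"]}
pred = {"p_l_hat": [0.5, 1.0], "e_h_hat": [1.0, 0.0], "e_t_hat": [1.5, 1.0],
        "HED_entity": ["station", "201"], "HED_non": ["use", "what"]}
print(joint_distance(cand, pred))  # -2.0 for a perfectly matching candidate
```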
Table 9. The performance of question answering methods on datasets.

| Algorithm | FB2M | FB5M | Block Coating Dataset |
|---|---|---|---|
| Bordes et al. [53] | 0.627 | 0.639 | 0.741 |
| Yin et al. [24] | 0.683 (+8.9%) | 0.672 (+5.1%) | 0.793 (+7.0%) |
| KEQA-noEmbed [28] | 0.731 (+16.6%) | 0.726 (+13.6%) | 0.861 (+16.2%) |
| KEQA | 0.754 (+20.3%) | 0.749 (+17.2%) | 0.907 (+22.4%) |
| AHP-KEQA | 0.755 (+20.4%) | 0.754 (+17.9%) | 0.921 (+24.3%) |
Bu, H.; Peng, Y.; Guo, Q.; Zhou, H. Knowledge Representation and Reuse of Ship Block Coating Based on Knowledge Graph. Coatings 2024, 14, 24. https://doi.org/10.3390/coatings14010024