Article

Chinese Brand Identity Management Based on Never-Ending Learning and Knowledge Graphs

1 School of Computer Science, Zhuhai College of Science and Technology, Zhuhai 519041, China
2 School of Humanities, Zhuhai College of Science and Technology, Zhuhai 519041, China
3 Electronic Engineering College, Heilongjiang University, Harbin 150080, China
4 Department of Information Engineering and Science, University of Trento, 38100 Trento, Italy
* Author to whom correspondence should be addressed.
Electronics 2023, 12(7), 1625; https://doi.org/10.3390/electronics12071625
Submission received: 14 February 2023 / Revised: 24 March 2023 / Accepted: 28 March 2023 / Published: 30 March 2023
(This article belongs to the Special Issue Applications of Big Data and AI)

Abstract:
Brand identity (BI) refers to the individual characteristics of an enterprise or a specific brand in the market and in the public mind. It reflects the public's evaluation and recognition of the brand and lies at the core of market strategy. Successful BI management can bring great business value. Nowadays, BI management methods based on the Internet, big data, and AI are widely adopted. However, they still face problems of accuracy, effectiveness, and sustainability, especially for Chinese BI. Our work applies knowledge graphs (KGs) and never-ending learning (NEL) to explore efficient Chinese BI management methods. We adapt the NEL framework for sustainability. To improve accuracy and effectiveness, we express BI knowledge with KGs and propose two methods in the subsystem components of NEL: (1) a BI evaluation model based on KGs and a two-dimensional bag-of-words; (2) an Apriori algorithm based on KGs. In the knowledge integrator of NEL, we propose synonym KGs for suppressing concept duplication and drift. The experimental results show that our method reaches high consistency with BI management experts and industry reports.

1. Introduction

Brand identity (BI) is the cognitive link between producers and consumers in the commodity society, reflecting the public's evaluation and recognition of a brand. The producer's positioning of the brand and the consistency of consumers' perception of it are key determinants of business success. Therefore, BI is also the core of market strategy [1].
David Ogilvy, the father of advertising, said that a brand is a complicated symbol, an intangible assembly extending to include brand attributes, brand name, packaging, price, history, reputation, and advertising [2]. In his BI theory, brands should have four core elements: (1) BI should have personality characteristics. (2) Advertising activities are long-term investments in brands. (3) Spreading the BI is more important than simply emphasizing the functions of the product. (4) Building BI can meet the psychological needs of consumers. Therefore, the establishment and management of BI should be a long-term investment that maximizes the cognitive overlap between product characteristics and customer needs.
The “original problem” of BI establishment and management includes two aspects: how brands and consumers communicate and establish relationships, and how value is generated, i.e., how to build the most basic connection between brands and consumers. Nowadays, the Internet has gradually become the main channel through which people obtain effective information. Since Web 2.0, information dissemination on the Internet has taken on new features, e.g., interaction and decentralization [3]. The effect of traditional BI management methods, which mainly include brand operators' prediction of brand development trends and customer needs, sampled collection of user information, and fixed-venue, periodic brand marketing activities, has been constantly weakened [4]. The accuracy, effectiveness, and sustainability of information interaction between brands and users is gradually lagging behind the growth of information on the Internet, and the gap between the two has gradually widened. In this case, how to effectively establish BI and conduct long-term BI building, tracking, and management remains to be addressed.
At the 24th AAAI Conference on Artificial Intelligence in July 2010, Professor Tom M. Mitchell first proposed never-ending language learning (NELL), which continuously obtains information from the Internet and extracts knowledge from it [5]. NELL improves its intelligence by constantly enriching its knowledge base with the acquired knowledge, and is a typical application of building artificial intelligence with network big data resources. The NELL system at Carnegie Mellon University runs 24/7 and mainly performs two tasks: (1) a reading task, which constantly extracts knowledge from web information and enriches the knowledge base with structured facts; (2) a learning task, which learns more intelligent reading methods from the existing text and the system's knowledge extraction ability, so as to extract increasingly accurate information. The article [5] reported that the accuracy of NELL reached 74%.
In 2018, Professor Mitchell published the progress of this work [6]. The system added learning algorithms and expanded the scope of knowledge acquisition beyond text to images and other information, and Professor Mitchell renamed it Never-Ending Learning (NEL). The NEL system mainly consists of four parts, as shown in Figure 1: (1) data resources, the basis of NEL, with data and knowledge mainly coming from the Internet and corpora; (2) subsystem components, the core of the never-ending learning model, which learn the text content through algorithms such as co-occurrence statistics and the path ranking algorithm; (3) the knowledge base, the result of NEL; and (4) the knowledge integrator, the filter of NEL, which is responsible for selecting knowledge from the set of candidate facts to form relevant conclusions, and which feeds back to the subsystem components as the basis of new learning.
In addition to Mitchell's continuous application of NELL to acquire knowledge, NELL has also been widely used to solve other related problems. Maria et al. applied NEL to condition monitoring (CM) problems [7]. They developed a CM model based on the never-ending learning paradigm and applied it to a synthetic case study and a real case study concerning the monitoring of the tank pressure of an aeroderivative gas turbine lube oil system. The CM model provides satisfactory classification accuracy while remarkably reducing the expert effort for data labeling and periodic model updating.
Elizalde applied NEL to sound understanding [8], an emerging field of machine hearing that aims to build systems that can perform sound-related tasks beyond human hearing (such as sonography, seismic sensing, and sonar), as well as systems that hear the way humans do and distinguish between music, speech, and sounds. He proposed Never-Ending Learning of Sounds (NELS), a computational program that aims to build hearing machines that understand sounds under a never-ending learning paradigm. NELS addresses challenges of sound understanding such as collecting datasets with different types of labels and annotation processes, designing and improving sound recognition models, defining knowledge about sounds, and retrieving sounds by different types of similarity.
The Knowledge Graph (KG) was first proposed by Google on 17 May 2012. It aims to describe the concepts, entities, events, and relationships of the objective world, and serves as the core basis for building the next generation of intelligent search engines [9]. A KG links different kinds of information together into a relationship network, providing the ability to analyze problems from the perspective of “relationship”. A KG is essentially a knowledge base organized as a semantic network [10], i.e., a knowledge base with a directed graph structure, and mainly consists of entities, relationships, and attributes.
NEL and KG are widely adopted in natural language processing (NLP) [11]. BI management can also be explored with NLP methods [12]. To make our data acquisition and analysis methods more consistent with the laws of BI construction, we introduce NEL and KG into BI management.
In this work, we first construct an NEL system following Professor Mitchell's model. Second, we adapt the system to the characteristics of BI management. We construct BI management KGs and a communication vocabulary as the knowledge base, and take distributed topic crawlers as the data acquisition tool of the Data Resource. In the Subsystem Components, we propose the BI evaluation model based on KG and two-dimensional bag-of-words as the method for exploring BI management strategies, and the Apriori based on KG as the tool for mining new brand words.
In our experiment, we take two brands with different features as the analysis objects of brand strategy. According to the experimental results, the BI management conclusions of our NEL are highly consistent with those of field experts and industry reports.
In summary, our work makes the following contributions:
(1) We introduce the idea of never-ending learning to solve the problem of brand identity management.
(2) We use knowledge graphs to solidify domain knowledge and experience, and design new brand management algorithms that combine communication theory and knowledge graphs.
This paper is organized as follows: Section 2 describes the proposed BI management NEL system. Section 3 provides experimental results. We draw some discussions and conclusions in Section 4.

2. Never-Ending Learning Model for Chinese Brand Identity Management

In this section, we construct the overall framework based on the NEL model and implement each part according to the characteristics of Chinese BI data, as shown in Figure 2.
For a better understanding of Chinese BI knowledge, we propose the BI management KG and a Chinese word segmentation library as the basic knowledge topology. Then, in the subsystem components of NEL, we provide the BI evaluation method based on KG and two-dimensional bag-of-words for BI statistics, and the Apriori based on KG for observing BI development. In the Knowledge Integrator of NEL, synonym KGs are designed to suppress concept duplication and drift.

2.1. Brand Identity Management Knowledge Graph

To improve the accuracy of NEL's knowledge acquisition, we use a KG to describe BI domain knowledge. There are two methods for constructing a KG: top-down and bottom-up [13]. The top-down method defines an ontology first and then completes information extraction and graph construction based on the input data; it is applicable to graphs of professional knowledge, such as enterprise knowledge graphs and domain-oriented graphs for professional users. The bottom-up method extracts highly trusted knowledge from open linked data, or extracts knowledge from unstructured text, to complete the construction of the knowledge graph; it is more suitable for general knowledge graphs, such as those of person names, organization names, and other common knowledge.
In our work, we use the top-down method, which is applicable to a KG of BI domain knowledge. The definition of the ontology is based on combing through domain knowledge, terminology dictionaries, artificial experience, etc., and is then refined in combination with the application scenarios of the KG. Finally, the ontology categories, the relationships between categories, and the attribute definitions contained in the ontology are obtained.
For the definition of the ontology, we choose David A. Aaker's brand theory, which is widely followed in communication and marketing, as the domain knowledge framework. Aaker is praised by Brand Weekly as the “Ancestor of Brand Equity” [14]; he put forward the “brand identity system” in his book Building Strong Brands [4], as shown in Figure 3.
There are four aspects and 14 independent attributes in the “brand identity system”, which includes core identity and extended identity [4]. Core identity is the core value of the brand that businesses seek to shape. The consistency between the core identity shaped by producers and that perceived by consumers is the ultimate goal of brand management. Extended identity is the expansion around core identity, a commodity attribute closer to user perception. Both types of identity are composed of one or more of the 14 attributes. Our work takes Aaker's “brand identity system” as the domain knowledge for constructing the brand KG framework. Some of the 14 attributes, e.g., 9. Personality and 14. Brand tradition, are abstract: they cannot be expressed by specific words with clear meanings, so accurate meanings cannot be obtained through NLP methods for specific brands. Therefore, we construct a multi-layer brand KG framework, as shown in Figure 4.
The Ex layer (x = 1, 2, …, 14) nearest to the Brand Core of the KG contains the 14 attributes of the “brand identity system”, e.g., E1 = 1. Product scope, E14 = 14. Brand tradition. Based on the Ex layer, we define the relationships between specific words and brand attributes in the next two layers (Ex_y and Ex_y_z, y = 1, 2, …; z = 1, 2, …). Each ontology in Ex_y and Ex_y_z consists of words with specific concepts, defined for products with different brand concepts and management strategies, e.g., coffee and air conditioners, through artificial experience combined with descriptions from communication and advertising industry experts. In addition, the specific words are constantly enriched during NEL learning. The Ex_y layer is responsible for interpreting the “brand identity system”, and the Ex_y_z layer is responsible for concept representation. Because of their specific concepts, the ontologies in the outer two layers can be used for Internet information acquisition and NLP methods.

2.2. Chinese Word Segmentation

The accuracy of Chinese word segmentation is strongly related to the correctness of NEL results. Our work takes the jieba library [15] as the word segmentation tool. Two customized domain-specific lexicons are added into jieba, containing the BI words and communication words, respectively.
The initial vocabulary of the BI words mainly comes from the brand's official website, official promotional videos, company financial reports, official brand We-Media accounts (official Weibo, WeChat official account), etc. Because the meaning of a word in brand expression changes with context, we use the KG to express these words, fixing their brand meanings in the form of relationships between ontologies, which facilitates their application in the proposed two-dimensional bag-of-words model. The words are sorted and classified, then placed into the Ex_y and Ex_y_z layers of the brand KG.
The initial vocabulary of the communication words was obtained through searching and screening by 20 third-year undergraduate students majoring in advertising at a domestic college of humanities, invited by the research team. There are 334 words in V1.0.0, including Chinese words, number combinations, English letter combinations, etc. The vocabulary is updated once a month. Part of the communication vocabulary is shown below; the bracketed text next to each Chinese word is an English gloss and is not part of the vocabulary itself.
AdNewWords = [web3.0, 绯红金粟兰(Crimson Magnolia), bg之光(bg light), 光合计划(photosynthetic plan), RTX4080, 疯校时装周(mad school fashion week), 二郎嘴(Erlang mouth), 性感熟男(sexy mature male), 天生臭脸综合征(congenital stinky face syndrome), 量子纠缠(Quantum Entanglement), KPL, 踩雷(encountering bad things), luckin, yyds, 妈生好皮(natural good skin), 神仙(carefree person), 哒咩(da baa), 集美(collect beauty), 大怨种(a sullen person who has been wronged), 破防(overwhelmed), 友宝女(a girl spoiled by her friends), 针不戳(that’s great), 绝绝子(that’s great), 你没事吧(Are you OK?), 灵动岛(the smart island of apple), kpd, 瑞斯拜(respect), 心巴(heart), 999, 刺客(assassin), 野性消费(irrational consumption), 脚艺人(playing handsome with kicks), 种草(share), 拔草(eliminate purchasing desire), 666, 沁园春(patio spring), GREE,…].
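As a minimal sketch of how the two custom lexicons could be registered with jieba (the word lists, frequency, and POS tag below are illustrative, not the paper's actual lexicon files), entries can be written in jieba's userdict format and loaded before segmentation:

```python
# Sketch (assumed setup, not the authors' exact code): preparing custom
# lexicon entries in jieba's userdict format "word freq pos_tag" so that
# brand terms are kept as single tokens during segmentation.

def make_userdict_lines(words, freq=10000, tag="n"):
    """Format entries in jieba's userdict format: 'word freq pos_tag'."""
    return ["%s %d %s" % (w, freq, tag) for w in words]

bi_words = ["悦风", "全域养鲜", "生椰拿铁"]   # sample BI words
ad_words = ["绝绝子", "破防", "野性消费"]     # sample communication words

lines = make_userdict_lines(bi_words + ad_words)

# With jieba installed, the lexicon would be registered like this:
# import jieba
# with open("bi_userdict.txt", "w", encoding="utf-8") as f:
#     f.write("\n".join(lines))
# jieba.load_userdict("bi_userdict.txt")
# # or word by word: jieba.add_word("生椰拿铁", freq=10000, tag="n")

print(lines[0])  # → 悦风 10000 n
```

A high user-defined frequency keeps multi-character brand names from being split by jieba's default HMM segmentation.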

2.3. Data Resource

In the Data Resource, the acquired information includes texts, images, and videos. We use distributed topic crawlers [16] for texts and images, which use the KG stored in Beliefs as acquisition clues. For pictures, we capture the title of the picture or use OCR [17] tools to recognize the text in the picture. For voice information in videos, we screen the videos manually first and then use iFLYTEK's “voice transfer” API [18] for conversion. Finally, all the information is kept as text.

2.3.1. Data Source Range

Referring to the opinions of industry experts, the features of target brands and the current mainstream Chinese Internet media, the data sources are limited and classified as shown in Table 1.
There are six categories in Table 1. Social media, word-of-mouth media, and short video platforms form the first group, which is mainly interactive between publishers and receivers. They have a large number of personal users, from whom we can obtain information about attributes 5, 7, 9, and 10 of the “brand identity system”. The mass media and portals disseminate information mainly in the form of one-way broadcasts and can provide information about attributes 8 and 12. The e-commerce platform category, which in our work mainly contains the top three e-commerce websites, can also provide information about attributes 8 and 12. Domain media focus on knowledge in certain professional fields and contain information about attributes 8 and 10.
For the short video platforms, we use manual retrieval to obtain the key content according to experts' suggestions. For the remaining categories, we focus on their text and images.
To prevent repeated acquisition of the same information caused by hyperlinks, the crawlers record the timestamp of each item when acquiring it, and only information from the day before the program's run time is treated as valid in the time dimension.
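The time-dimension validity rule can be sketched as follows (the record field names are illustrative assumptions, not the crawlers' actual schema):

```python
# Sketch of the previous-day validity filter: keep only records whose
# timestamp falls on the day before the crawler run, so older pages
# reached again through hyperlinks are dropped.
from datetime import date, timedelta

def filter_previous_day(records, run_date):
    """records: list of dicts with a 'timestamp' date field (illustrative)."""
    valid_day = run_date - timedelta(days=1)
    return [r for r in records if r["timestamp"] == valid_day]

records = [
    {"url": "a", "timestamp": date(2022, 9, 14)},
    {"url": "b", "timestamp": date(2022, 9, 15)},
    {"url": "c", "timestamp": date(2022, 9, 10)},  # stale hyperlink copy
]
print(filter_previous_day(records, date(2022, 9, 16)))  # keeps only "b"
```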

2.3.2. Confirming the Validity of the Content

The crawlers mainly search for new content through keywords. However, the meaning of a word varies across contexts. For example, the Chinese word “悦风”, whose literal meaning is “pleasant wind”, is used by the GREE brand as the name of a style of air conditioner. It is also used in other scenarios, e.g., “北京悦风美妆学院”, part of the name of Young Forever Beauty Makeup College. To catch only the content we need, we establish the validity relevance of words through KGs. For “悦风” to be treated as valid information, it must appear in the BI management KG of the GREE brand, and other neighboring or important BI words must appear together with it in the same context.
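The KG-based validity check can be sketched as a co-occurrence test (the neighbor list below is an illustrative fragment, not the actual GREE KG): a keyword hit counts as valid brand content only if at least one of its KG neighbor words appears in the same text.

```python
# Minimal sketch of the KG-based content validity check.
kg_neighbors = {"悦风": {"格力", "空调", "变频"}}  # illustrative KG neighbors of "悦风"

def is_valid_mention(keyword, text):
    """Valid only if the keyword and at least one KG neighbor co-occur."""
    if keyword not in text:
        return False
    return any(n in text for n in kg_neighbors.get(keyword, ()))

print(is_valid_mention("悦风", "格力悦风空调新品上市"))  # True: air conditioner context
print(is_valid_mention("悦风", "北京悦风美妆学院招生"))  # False: makeup college context
```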

2.4. SubSystem Components

Based on communication theory and Chinese NLP models, we provide the BI evaluation method based on KG and two-dimensional bag-of-words for BI statistics, and the Apriori based on KG for observing BI development.

2.4.1. BI Evaluation Model Based on KG and Two-Dimensional Bag-of-Words

There is a dedicated KG for each brand as shown in Figure 4. The dedicated KG limits the search scope of BI vocabulary, as shown in Figure 2.
In order to apply the brand knowledge space described by KGs to Chinese word segmentation and brand knowledge retrieval, we design a two-dimensional bag-of-words model, as shown in Figure 5.
The columns in the figure form the dictionary generated from the BI management KG. The Weight line represents the scene weight of each word. The location of a word has a great impact on the effect of brand communication. For example, a word appearing in the title of an article has a higher coefficient than one in a picture, which is higher than one in body text, which is higher than one in a video; in mass media, the coefficient is higher than in We-Media; in short-content video, the coefficient is higher than in live video; and so on. Therefore, we define the word scene coefficient vector according to the influence theory and attention theory of communication [19], as shown in Table 2.
The coefficients are graded, and their values are normalized, as shown in Equation (1).
$\sum_{i=1}^{10} v_i = 1$  (1)
The Count line is a vector that counts the dictionary words in all texts obtained by NEL. It has the same length as the word scene coefficient vector.
$V_{cn\_i} = [cnt_{Title\_Mass}, cnt_{Image\_Mass}, \ldots, cnt_{Advertisement\_Video}]$
The relations in the Knowledge Graph line construct a word validity relationship vector generated based on BI management KG, which are used for confirming the validity of the content.
The Total Score line is the final score of each word.
$SC_{word\_i} = V_{cn\_i} \cdot [v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8, v_9, v_{10}]^T$
NEL ranks SC_word_i every day, or according to the experts' settings, and shows the results in the form of a word cloud. The top T ranked words are used as candidates for the BI management strategy. The bottom B ranked words over cumulative days are used as candidates for keyword elimination. T and B are configurable system parameters. The Knowledge Integrator part judges the brand strategy according to the ranking results.
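The scoring step above can be sketched as a dot product between a word's per-scene count vector and the normalized scene coefficient vector (the ten coefficients below are illustrative and are not the values from Table 2):

```python
# Sketch of the two-dimensional bag-of-words scoring over 10 scenes.
# Coefficients are illustrative placeholders, normalized per Equation (1).
scene_coeffs = [0.20, 0.15, 0.12, 0.11, 0.10, 0.09, 0.08, 0.06, 0.05, 0.04]
assert abs(sum(scene_coeffs) - 1.0) < 1e-9  # Equation (1): coefficients sum to 1

def word_score(counts, coeffs=scene_coeffs):
    """counts: per-scene occurrence counts of one word (length 10)."""
    return sum(c * v for c, v in zip(counts, coeffs))

# e.g. counts of "空调" across the 10 scenes on one day (made-up numbers)
counts_kongtiao = [3, 1, 0, 5, 2, 0, 0, 1, 0, 0]
print(round(word_score(counts_kongtiao), 4))  # → 1.56
```

Words would then be ranked by this score to select the top T and bottom B candidates.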

2.4.2. Apriori Based on KG

During brand communication, brand vocabulary is updated frequently. Some BI words are actively updated by the brand communicator, and some are passively generated in the process of communication. It is difficult to track vocabulary updates comprehensively through the manual excavation of industry experts alone. Therefore, we propose the Apriori based on BI KGs for mining new words.
Apriori is a classic association rule mining algorithm used to explore association relationships among items in a data set [20]. Apriori has three metrics: support, confidence, and lift. Based on NEL and brand communication theory, we design a new support calculation method and combine it with the BI KGs to find words strongly related to the existing brand dictionary. The results are treated as candidates for new brand words. Finally, the Knowledge Integrator part completes the vocabulary selection and KG adjustment to achieve new knowledge discovery.
Our Apriori takes the words in the Data Resource part as the candidate data. First, synonyms are replaced according to the synonym KGs in the Knowledge Integrator part, which concentrates word frequencies. Second, the frequency of each word is counted. Third, the word frequency vector is constructed as shown in Figure 6. There are two parts in the vector: the KG words part (KWn) and the new words part (NWm). The words in each part are sorted separately by frequency.
The top c new words are selected as candidates for new KG words. The value of c is configurable according to the strategy of the dedicated brand, with 5 as the default. For example, for brands with many product categories, c is appropriately enlarged; for brands with few product categories, or new brands with concentrated product strategies, c is appropriately reduced.
To prevent the word frequency vector from becoming too long, we set the default value of m to 100. The value of m can also be configured according to the strategy of the dedicated brand. The n + c words are used as the input of Apriori. The remaining m − c words compose the new-word candidate pool. Their frequencies are accumulated on a weekly basis. The top rc words are used as reviving candidates and decided in the Knowledge Integrator.
We define the direction of the association rules of Apriori as follows:
{words in the KG}->{new words}
The vector of candidate new brand words has length 1, and the vector of words in the KG has length 2, composed of the top two words in the KG. Each data source (an article, a picture, a video) is recorded as one data record for Apriori.
The support, confidence, and lift are expressed as follows:
$\mathrm{Support}(X \Rightarrow Y) = \frac{P(X \cup Y)}{P(I)} = \frac{num(X \cup Y)}{num(I)}$,
$\mathrm{Confidence}(X \Rightarrow Y) = P(Y|X) = \frac{P(X \cup Y)}{P(X)}$,
$\mathrm{Lift}(X \Rightarrow Y) = \frac{P(Y|X)}{P(Y)} = \frac{P(X \cup Y)}{P(X)P(Y)}$,
where $X \subseteq$ {words in the KG}, $Y \subseteq$ {new words}, $I$ = {data source records of the day}, length(X) = 2, length(Y) = 1.
After comprehensively considering the methods in [21,22,23] and the opinions of domain experts, the default minimum support is set to 15% of the vocabulary acquired on the current day, and the default minimum confidence is set to 45%. The lift is used as a reference to further improve the accuracy of new word mining and reduce the possibility of concept drift.
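The three metrics for rules of the form {two KG words} -> {one new word} can be sketched as follows; the transactions and word sets below are illustrative examples, not data from the experiments.

```python
# Sketch of support / confidence / lift for a rule X -> Y, where each
# transaction is the word set of one data source (article/picture/video).

def rule_metrics(transactions, x, y):
    n = len(transactions)
    n_x = sum(1 for t in transactions if x <= t)          # X present
    n_y = sum(1 for t in transactions if y <= t)          # Y present
    n_xy = sum(1 for t in transactions if (x | y) <= t)   # X and Y present
    support = n_xy / n
    confidence = n_xy / n_x if n_x else 0.0
    lift = (n_xy * n) / (n_x * n_y) if n_x and n_y else 0.0
    return support, confidence, lift

transactions = [
    {"瑞幸", "拿铁", "生椰拿铁"},
    {"瑞幸", "拿铁", "生椰拿铁", "优惠券"},
    {"瑞幸", "拿铁"},
    {"优惠券", "门店"},
]
s, c, l = rule_metrics(transactions, x={"瑞幸", "拿铁"}, y={"生椰拿铁"})
print(s, c, l)  # support 0.5, confidence ≈ 0.667, lift ≈ 1.333
```

With the defaults above, a candidate rule would be kept only when its support and confidence clear the 15% and 45% thresholds, with lift consulted as a tiebreaker.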

2.5. Knowledge Base

The Knowledge Base part stores the learning achievements of NEL. In our work, the beliefs and candidate facts are designed as follows:
The beliefs include: (1) the BI management KG, which shows the relationships among the concerned brand-related knowledge; its cold start data is provided by domain experts and corrected by the Knowledge Integrator; (2) the brand word cloud along the timeline, which stores the evolution of the brand strategy.
The candidate facts store the candidate knowledge mined by Subsystem Components, mainly including the BI candidate words obtained by the BI evaluation model based on KG and two-dimensional bag-of-words, and the association rules mined by the Apriori based on KG.

2.6. Knowledge Integrator

The Knowledge Integrator is the filter of NEL, which selects beliefs from the candidate facts. In our work, it is composed of two parts: (1) brand strategy experts participate once a week to evaluate the content of the candidate facts and revise the knowledge graph according to the evaluation results; (2) concept duplication and drift are prevented based on the synonym KGs.
In the process of brand communication, many synonyms evolve. For example, “loving to eat” (爱吃) has the following synonyms in Chinese: “爱吃”(loving to eat), “饕餮”(glutton), “美食家”(foodie), “好吃鬼”(foodie), “贪吃”(gluttonous), “大胃王”(hungribles), “谗猫”(calumny cat), “经不起诱惑”(unable to withstand temptation), “品味人生”(taste life), “foodie”, “food junkie”. The last two English words are used as foreign loanwords. If all of these were included in the BI KG, the KG structure would become complex and the brand concept scattered, which would interfere with the exploration of brand strategy. Therefore, the Knowledge Integrator part maintains synonym KGs, taking “loving to eat” as an example, as shown in Figure 7. Due to the complexity of brand vocabulary semantics, the synonym KGs are currently mainly completed by industry experts.
In the synonym KGs, words are defined as ontologies, and the relationships between ontologies are weights recording the semantic distance between words, which can be merged into the two-dimensional bag-of-words. The synonym KGs integrate the meaning gaps among synonyms and represent a better alternative to synonym dictionaries.
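The synonym replacement step used before word counting (Section 2.4.2) can be sketched as a canonicalization pass; the mapping below is an illustrative fragment of a synonym KG, not the experts' actual graph.

```python
# Sketch of synonym replacement driven by a synonym KG: variants are
# mapped to one canonical ontology word before counting, so concept
# frequency is concentrated instead of scattered.

synonym_kg = {  # variant -> canonical word (illustrative)
    "饕餮": "爱吃", "美食家": "爱吃", "好吃鬼": "爱吃",
    "贪吃": "爱吃", "大胃王": "爱吃", "foodie": "爱吃",
}

def canonicalize(tokens):
    return [synonym_kg.get(t, t) for t in tokens]

tokens = ["美食家", "推荐", "大胃王", "挑战", "foodie"]
print(canonicalize(tokens))  # → ['爱吃', '推荐', '爱吃', '挑战', '爱吃']
```

A weighted synonym KG could refine this by replacing a variant only when its semantic-distance weight to the canonical word is below a threshold.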

3. Experimental Results

In order to observe the management strategies of different types of brands and verify the effectiveness of the proposed NEL model, following the suggestions of industry experts, we select two representative brands for the experiments: (1) GREE Electric, a famous brand established in 1985, whose main products are white goods such as air conditioners, refrigerators, and small appliances; with a mature brand strategy, it has experienced both traditional and new media marketing; (2) Luckin Coffee, a rapidly growing emerging brand established in 2017, which mainly adopts online marketing and an online-to-offline sales mode, focusing on the coffee and tea market.

3.1. Experimental Environment

To run NEL stably, our experiments deploy the NEL model on an NVIDIA deep learning server with the configuration shown in Table 3:
We use Python-based software and libraries as the development tools, as shown in Table 4:

3.2. Experimental Data

The data involved in the experimental results covers the learning progress of NEL on the two selected brands from February to November 2022, with a total of 21.6 × 10^4 records (2.2 TB) of valid data. Due to the effective constraints of the KGs, the number of effective samples obtained is not as large as might be expected. A summary of the data records is shown in Table 5.

3.3. The Definition of the Word Scene Coefficient Vector

The products of GREE Electric are durable goods. Its market strategy emphasizes the depth of communication, including authority, technology, reliability, cost performance, etc., as well as its guiding effect on consumers [24]. Therefore, the weight of Mass Media is increased in the coefficient design. Table 6 shows the definition of the word scene coefficient vector of GREE Electric.
The products of Luckin Coffee are fast-moving consumer goods. Its market strategy emphasizes the breadth of dissemination, relies on frequent repetition and updating of information, maintains consumers' attention and freshness, and emphasizes consumers' consumption experience. Therefore, the weight of We Media is increased in the coefficient design. Table 7 presents the definition of the word scene coefficient vector of Luckin Coffee.

3.4. The Brand Identity Management Knowledge Graphs

We provided the cold start data of the KGs on 1 February 2022. In the result-display KGs, Figures 8 and 9, ontologies containing Chinese text (either Chinese only, or Chinese in brackets) are parts of the knowledge used in the Data Resources or the Subsystem Components. The newly added words are marked in red. The green rectangles illustrate the relationships between the ontologies, which we treat as parameters. Because the frame of the KG comes from the brand identity system, the parameters are currently simply set to 1.0; they will be modified as more knowledge is acquired from the brand manager or from communication.
Figure 8 shows the KG of GREE Electric in November. The brand vocabulary of GREE is relatively stable. The five newly added words are “云逸”(Yunyi), “全域养鲜”(whole region fresh), “新轻厨”(new light kitchen), “挂式”(hanging), and “柜式”(cabinet). “云逸”(Yunyi) is the new air conditioner product of 2022, “全域养鲜”(whole region fresh) is the slogan of the new refrigerator product, and “新轻厨”(new light kitchen) is the slogan of the new kitchen appliance series; these three words come from the year's new products. “挂式”(hanging) and “柜式”(cabinet) describe air conditioner styles. Their parent ontology “style” was added along with them, although “style” itself is not considered a BI word. The reasons for adding them are explained in the following subsection.
Figure 9 shows the KG of Luckin Coffee on 30 November 2022. The brand vocabulary of Luckin Coffee changes rapidly. The newly added words include the names of new coffee products, “冰萃咖啡”(iced coffee), “生椰拿铁”(raw coconut latte), “厚乳拿铁”(thick milk latte), and “陨石拿铁”(meteorite latte), and the names of new product spokespersons, “谷爱凌”(Gu Ailing) and “肖战”(Xiao Zhan). According to the experts' suggestions, the Knowledge Integrator retained the names of former brand spokespersons, because spokespersons are updated very quickly and the former spokespersons still have a strong association with the brand in the minds of consumers.

3.5. The Results of BI Evaluation Model Based on KG and Two-Dimensional Bag-of-Words

The results of the BI evaluation model based on KG and two-dimensional bag-of-words serve as alternatives for the BI management strategy and are shown in the form of word clouds for better human–computer interaction. We take the results of the third week of September as an example, as shown in Figure 10.
The English meaning of the Chinese words in the clouds are listed as follows:
The Word Cloud of GREE Electric: 格力(GREE), 董明珠(Dong Mingzhu), 空调(air conditioner), 压缩机(compressor), 性价比(cost performance), 售后(after sales), 技术(technology), 好空调, 格力造(good air conditioning, GREE made), 贵(expensive), 匹数(horsepower rating), 变频(frequency conversion), 型号(model), 掌握核心科技(mastering core technology), 质量(quality), 让世界爱上中国造(Let the world fall in love with Made in China), 市值(market value), 十年质保(ten-year warranty), 珠海(Zhuhai), 冰箱(refrigerator), 电饭煲(rice cooker), 晶弘(Jinghong), 大松(Dasong), 能效(energy efficiency).
The Word Cloud of Luckin Coffee: 瑞幸(luckin), 新品(new product), 网友点评(comments from netizens), 年轻人(young people), 必喝(must drink), 生椰拿铁(raw coconut latte), 椰云拿铁(coconut cloud latte), 小蓝杯(Small Blue Cup), YYDS, 优惠券(coupon), 打卡(sign in), 厚乳拿铁(thick milk latte), 丝绒拿铁(velvet latte), 好喝(good-tasting), 吉祥好运(good luck), 果汁(juice), 门店(store), 新零售(new retail), 小鹿茶(Xiaolu Tea), 免费喝(free drinking), IIAC, 代言人(spokesman), 品质(quality), 白领(white collar).
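The clouds are generated with Wordcloud 1.6.0 (Table 4). A sketch of preparing its frequency input from the evaluation scores follows; the scores below are illustrative, not the paper's actual data:

```python
from collections import Counter

# Illustrative BI word scores (the brand name always dominates); a real run
# would use the output of the two-dimensional bag-of-words evaluation.
scores = Counter({"格力": 1.00, "董明珠": 0.62, "空调": 0.55, "压缩机": 0.31})
top = dict(scores.most_common(24))  # roughly the cloud size shown above

# Rendering (requires the third-party wordcloud package and a CJK font):
# WordCloud(font_path="simhei.ttf").generate_from_frequencies(top)
print(list(top)[:2])  # the two hottest words
```

Note that a CJK-capable font must be supplied to Wordcloud, or the Chinese words render as empty boxes.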
Due to the strategy of the KG and the crawlers, the brand names are always the hottest words. The strategies can be discovered from the words immediately following them. For GREE Electric, we can see that technology and service (“压缩机”, “性价比”, “售后”) receive the most attention. “董明珠” is the name of the chairwoman, who is also the spokesperson; this is an important marketing strategy of GREE Electric. The air conditioner (“空调”) is still the hottest product. The high price (“贵”) is also a focus for GREE Electric, which matches the experience of most users. Due to the high quality of its products, most users still affirm its cost performance (“性价比”).
Luckin Coffee is a beverage brand that pays more attention to customers’ feelings and feedback (“网友点评”, “必喝”). It must maintain its attraction to customers with new products and flexible prices (“新品”, “生椰拿铁”, “优惠券”). It is even better if a product in vogue (“椰云拿铁”) is created. Compared with GREE Electric, quality, spokespersons, and slogans receive less attention. Luckin Coffee must keep changing and producing hits.

3.6. The Results of Apriori Based on KG

We again take the results of the third week of September as an example. With the parameter c of the word frequency vectors set to 4, the new word lists are as follows.
NWGREE = [“美的”, “柜式”, “挂式”, “多元化”]
NWLuckin = [“复活”, “星巴克”, “加盟”, “财报”]
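The extraction of these lists can be sketched as a threshold filter: words reaching the weekly frequency threshold c that are absent from the brand's current KG vocabulary become candidate new words. The frequencies below are illustrative:

```python
from collections import Counter

def new_words(week_freq, kg_vocab, c=4):
    # keep words meeting the frequency threshold that the KG does not know yet
    return [w for w, n in week_freq.items() if n >= c and w not in kg_vocab]

freq = Counter(["美的"] * 6 + ["柜式"] * 5 + ["空调"] * 9 + ["挂式"] * 4 + ["冰箱"] * 3)
print(new_words(freq, kg_vocab={"空调", "冰箱"}))  # ['美的', '柜式', '挂式']
```

Raising c suppresses noise from rare words at the cost of detecting new vocabulary later.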
The new words have the following traits:
(1) There are relations between the brand and its main competitors, i.e., “格力 vs. 美的” and “瑞幸 vs. 星巴克”. This is because market analysts and consumers often compare brands of the same category to find the differences between them and make choices. Such comparison is also an important brand strategy for positioning the brands.
(2) There are relations between the brand and its hot events at that time. The main reason is that the lists only reflect the words of the last week in the NEL. In April 2022, Luckin Coffee completed its debt restructuring. In addition, it launched many popular drinks, and there was a profit in the financial report (“财报”) of the second quarter. Analysts and consumers use “复活”(revive) as a metaphor for this event.
(3) There are relations to the product form of durable goods. The new words of GREE, “柜式”(cabinet) and “挂式”(hanging), indicate that consumers pay more attention to the functions and less attention to the segmented products. This is also reflected in the word cloud of GREE, where the names of the segmented products, “悦风” and “云逸”, do not appear.
Based on the results of Apriori, the new associations are shown as follows:
{“格力”, “董明珠”}->{“美的”, 0.67}
{“格力”, “空调”}->{“挂式”, 0.52}
{“格力”, “空调”}->{“柜式”, 0.51}
{“格力”, “董明珠”}->{“多元化”, 0.23}
{“瑞幸”, “新品”}->{“复活”, 0.47}
{“瑞幸”, “新品”}->{“加盟”, 0.61}
{“瑞幸”, “点评”}->{“星巴克”, 0.67}
{“瑞幸”, “点评”}->{“财报”, 0.49}
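A sketch of how such rule confidences are obtained: the confidence of {X}→{y} is the fraction of text units containing X that also contain y. The toy transactions below are illustrative, not the crawled corpus:

```python
def confidence(transactions, antecedent, consequent):
    # support-based confidence: P(consequent | antecedent)
    ante = sum(1 for t in transactions if antecedent <= t)
    both = sum(1 for t in transactions if antecedent <= t and consequent in t)
    return both / ante if ante else 0.0

docs = [{"格力", "董明珠", "美的"}, {"格力", "董明珠"},
        {"格力", "董明珠", "美的"}, {"格力", "空调"}]
print(round(confidence(docs, {"格力", "董明珠"}, "美的"), 2))  # 0.67
```

In the full Apriori run, only itemsets above a minimum support are expanded, and rules such as those listed above are kept with their confidence values.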

3.7. The Results of the Knowledge Integrator

All the results of the Subsystem Components are reserved as candidate facts in the Knowledge Base. The Knowledge Integrator then makes the final decisions about updating the beliefs. It has two main missions: (1) judging the BI strategies; (2) updating the KGs.

3.7.1. Judgement of the BI Strategies

The most frequent word in the results of the BI evaluation model based on KG and two-dimensional bag-of-words is always the brand name, which is not informative enough to support the analysis of the BI strategies. Therefore, the Knowledge Integrator traces the second most frequent word of each week through a timeline, as shown in Figure 11.
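This tracing step can be sketched as follows, with illustrative weekly counts (in each week the brand name is the top word, so the runner-up is recorded):

```python
from collections import Counter

def second_hottest(weekly_counts):
    # per week, skip the ever-dominant brand name and keep the runner-up
    return [week.most_common(2)[1][0] for week in weekly_counts]

weeks = [Counter({"格力": 90, "空调": 40, "董明珠": 30}),
         Counter({"格力": 85, "性价比": 35, "空调": 33})]
print(second_hottest(weeks))  # ['空调', '性价比']
```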
The timeline of GREE Electric shows that its brand vocabulary has not changed frequently, mainly focusing on “空调” and “董明珠”. June, July, and August are the peak sales season for air conditioners, and GREE Electric carried out a price strategy (“性价比”) in May. The financial report (“财报”) in September reflected its operating conditions in the first half of the year, and investors and analysts paid close attention to these data.
From the above data, we can see that GREE Electric has adopted a relatively stable BI management strategy according to its product characteristics. Its purpose is to form a relatively unified cognition in the minds of consumers and form consumption inertia. This conclusion is basically consistent with the report of the International Finance News [25].
The timeline of Luckin Coffee shows that its brand vocabulary changed frequently, updating every two weeks on average. Approximately one sixth of the new words are the names of new products (“椰云拿铁”, “生椰拿铁”, “陨石拿铁”). The emergence of coupons (“优惠券”) shows a certain periodicity, occurring in the peak season of consumption or when there are no new products.
From the data of Luckin Coffee, we can see that it has adopted a relatively active BI management strategy. Its purpose is to maintain consumers’ attention to the brand, bring continuous freshness to consumers, ensure network traffic, and expand the sales scale. This conclusion is basically consistent with the report of the China Business Daily [26].

3.7.2. Update of the KGs

Taking the results of the Apriori based on KG as reference, the experts made the following decisions: (1) “挂式” and “柜式” were added to the BI management KG of GREE. Because there was no suitable secondary ontology for them, a new secondary ontology, “style”, was also supplemented under the product characteristics attribute. The updated KG is shown in Figure 8. (2) Although “美的” and “星巴克” have high confidence, they were not added to the KGs because, in accordance with China’s Advertising Law, comparative propaganda should be avoided [27]. (3) “复活”, “财报”, and the remaining words were not added to the KGs, because they are time-sensitive, cannot reflect the essence of the brand, and are not sustainable.
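The first expert decision can be sketched as a KG update that hangs the new words under a fresh secondary ontology. The dict-of-edges representation and names are illustrative:

```python
def add_secondary_ontology(kg_edges, attribute, ontology, terms):
    # attach a new secondary ontology and its terms; relationship
    # parameters default to 1.0 as elsewhere in the KG
    kg_edges[(attribute, ontology)] = 1.0
    for term in terms:
        kg_edges[(ontology, term)] = 1.0
    return kg_edges

kg = {("GREE", "product characteristics"): 1.0}
add_secondary_ontology(kg, "product characteristics", "style", ["挂式", "柜式"])
print(sorted(k[1] for k in kg if k[0] == "style"))  # ['挂式', '柜式']
```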

4. Discussion

Our research work applies the NEL general learning framework proposed by Professor Mitchell to the BI management domain, which expands the application scope of NEL. In order to obtain domain knowledge more accurately, we use KG as the basis for knowledge representation, which is an improvement compared to existing work.
On the other hand, our work also has certain limitations. The knowledge of the KGs is mainly maintained by domain experts, which delays the acquisition of knowledge. In the Knowledge Integrator, we adopted the same strategy as Professor Mitchell’s original model and screened knowledge through human intervention. This improves the accuracy of knowledge acquisition; however, as the amount of knowledge increases and the knowledge structure becomes more complex, it will further delay knowledge acquisition.
In the future, our research work could integrate more NLP methods based on artificial intelligence, such as ChatGPT [28,29]. Future research could also conduct two-way emotional analysis and record the emotional trend based on the characteristics of NEL to judge the interaction effect between information publishers and receivers. In addition, the intelligence of the Knowledge Integrator and automatic judgment of BI strategy could be improved. However, due to the rapid updating of communication information, the intelligent methods may cause conceptual drift and other problems. Future research will explore more solutions in practice.

5. Conclusions

Focusing on the “original problem” of BI establishment and management, we take Internet media as the target media for our research on the connection between brands and consumers. Based on communication theory and the characteristics of the BI establishment and management process, we use NEL as the data learning method for continuously tracking BI management. Given the professionalism and complexity of knowledge in the field of communication, we use the KG for the expression of domain knowledge. This enables our research to solidify domain experience as an index for information mining.
In the two-dimensional bag-of-words model, we establish the BI vocabulary scoring system by combining the knowledge system of the KGs with the text structure on the Internet. The Apriori based on KG provides a method to discover new knowledge from existing knowledge, which copes with the polysemy of language expression in communication and the complexity of knowledge dissemination on the Internet.
According to the experimental results, we have reached a BI management conclusion that is basically consistent with the field experts and industry reports. Our research work expands the application scope of NEL, enriches the information mining methods involved in NEL, and could be used by domain experts to support more effective work in brand identity analysis and management.

Author Contributions

Conceptualization, D.L. and Y.W.; methodology, D.L. and Y.L.; software, D.L. and J.L.; validation, Y.Z. and G.W.; formal analysis, D.L. and Y.W.; investigation, G.W.; resources, J.L.; data curation, D.L. and J.L.; writing—original draft preparation, D.L.; writing—review and editing, D.L., G.B. and Y.L.; visualization, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NSFC grant number 61972174, Guangdong Universities’ Innovation Team Project grant number 2021KCXTD015, Guangdong Universities’ key scientific research platforms and projects grant number 2021ZDZX1083, and Guangdong Key Disciplines Project grant number 2021ZDJS138, 2022ZDJS139.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to further research plans.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Elliott, R.H.; Rosenbaum-Elliott, R.; Percy, L.; Pervan, S. Strategic Brand Management; Oxford University Press: New York, NY, USA, 2015.
  2. Ogilvy, D.; Horgan, P. Confessions of an Advertising Man; Atheneum: New York, NY, USA, 1963.
  3. Sun, J.; Gan, W.; Chao, H.C.; Philip, S.Y.; Ding, W. Internet of Behaviors: A Survey. IEEE Internet Things J. 2023. Early Access.
  4. Aaker, D.A. Building Strong Brands; Simon and Schuster: New York, NY, USA, 2012.
  5. Carlson, A.; Betteridge, J.; Kisiel, B.; Settles, B.; Hruschka, E.; Mitchell, T. Toward an architecture for never-ending language learning. Proc. AAAI Conf. Artif. Intell. 2010, 24, 1306–1313.
  6. Mitchell, T.; Cohen, W.; Hruschka, E.; Talukdar, P.; Yang, B.; Betteridge, J.; Carlson, A.; Dalvi, B.; Gardner, M.; Kisiel, B.; et al. Never-ending learning. Commun. ACM 2018, 61, 103–115.
  7. Termite, M.R.; Baraldi, P.; Al-Dahidi, S.; Bellani, L.; Compare, M.; Zio, E. A never-ending learning method for fault diagnostics in energy systems operating in evolving environments. Energies 2019, 12, 4802.
  8. Elizalde, B.M. Never-Ending Learning of Sounds; Carnegie Mellon University: Pittsburgh, PA, USA, 2020.
  9. Ji, S.; Pan, S.; Cambria, E.; Marttinen, P.; Philip, S.Y. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 494–514.
  10. Han, J.; Sarica, S.; Shi, F.; Luo, J. Semantic networks for engineering design: State of the art and future directions. J. Mech. Des. 2022, 144, 020802.
  11. Chowdhary, K.R. Natural language processing. In Fundamentals of Artificial Intelligence; Springer Nature India Private Limited: New Delhi, India, 2020; pp. 603–649.
  12. Kang, Y.; Cai, Z.; Tan, C.W.; Huang, Q.; Liu, H. Natural language processing (NLP) in management research: A literature review. J. Manag. Anal. 2020, 7, 139–172.
  13. Lin, J.; Zhao, Y.; Huang, W.; Liu, C.; Pu, H. Domain knowledge graph-based research progress of knowledge representation. Neural Comput. Appl. 2021, 33, 681–690.
  14. Brand Weekly. Available online: https://www.brandweekly.co/ (accessed on 26 December 2022).
  15. Ding, Y.; Teng, F.; Zhang, P.; Huo, X.; Sun, Q.; Qi, Y. Research on text information mining technology of substation inspection based on improved Jieba. In Proceedings of the 2021 International Conference on Wireless Communications and Smart Grid (ICWCSG), Hangzhou, China, 13–15 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 561–564.
  16. Liu, J.; Li, X.; Zhang, Q.; Zhong, G. A novel focused crawler combining Web space evolution and domain ontology. Knowl.-Based Syst. 2022, 243, 108495.
  17. Nguyen, T.T.H.; Jatowt, A.; Coustaty, M.; Doucet, A. Survey of post-OCR processing approaches. ACM Comput. Surv. 2021, 54, 1–37.
  18. Zhao, Z.; Liu, Y.; Zhang, G.; Tang, L.; Hu, X. The Winning Solution to the iFLYTEK Challenge 2021 Cultivated Land Extraction from High-Resolution Remote Sensing Images. In Proceedings of the 2022 14th International Conference on Advanced Computational Intelligence (ICACI), Wuhan, China, 15–17 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 376–380.
  19. Santos, Z.R.; Cheung, C.M.; Coelho, P.S.; Rita, P. Consumer engagement in social media brand communities: A literature review. Int. J. Inf. Manag. 2022, 63, 102457.
  20. Qisman, M.; Rosadi, R.; Abdullah, A.S. Market basket analysis using apriori algorithm to find consumer patterns in buying goods through transaction data (case study of Mizan computer retail stores). J. Phys. Conf. Ser. 2021, 1722, 012020.
  21. Panjaitan, S.; Amin, M.; Lindawati, S.; Watrianthos, R.; Sihotang, H.T.; Sinaga, B. Implementation of apriori algorithm for analysis of consumer purchase patterns. J. Phys. Conf. Ser. 2019, 1255, 012057.
  22. Guo, Y.; Wang, M.; Li, X. Application of an improved Apriori algorithm in a mobile e-commerce recommendation system. Ind. Manag. Data Syst. 2017, 117, 287–303.
  23. Du, J.; Zhang, X.; Zhang, H.; Chen, L. Research and improvement of Apriori algorithm. In Proceedings of the 2016 Sixth International Conference on Information Science and Technology (ICIST), Dalian, China, 6–8 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 117–121.
  24. Södergren, J. Brand authenticity: 25 Years of research. Int. J. Consum. Stud. 2021, 45, 645–663.
  25. Cai, S.M. Dong MZ: GREE’s Three Perseverances. International Finance News, 17 October 2022. (In Chinese)
  26. Luan, L.; Feng, X.X. The coffee industry in 2022: The financing heat will not decrease, and the homogenization competition will be broken. China Business Daily, 13 January 2023. (In Chinese)
  27. Gao, Z. An in-depth examination of China’s advertising regulation system. Asia Pac. J. Mark. Logist. 2007, 19, 307–323.
  28. Stokel-Walker, C.; Van Noorden, R. What ChatGPT and generative AI mean for science. Nature 2023, 614, 214–216.
  29. Zhou, H.; Ke, P.; Zhang, Z.; Gu, Y.; Zheng, Y.; Zheng, C.; Wang, Y.; Wu, C.H.; Sun, H.; Yang, X.; et al. Eva: An open-domain Chinese dialogue system with large-scale generative pre-training. arXiv 2021, arXiv:2108.01547.
Figure 1. Never-Ending Language Learning model.
Figure 2. Never-Ending Language Learning model for Chinese Brand Identity Management.
Figure 3. Brand Identity System depicted in Building Strong Brands.
Figure 4. Brand Identity Management Knowledge Graph Framework following David Aaker’s “brand identity system”.
Figure 5. Two-dimensional Bag-of-words Model.
Figure 6. The Word Frequency Vector for Apriori.
Figure 7. The Synonym Knowledge Graph of “loves to eat”.
Figure 8. The BI KG of GREE Electric on 30 November 2022.
Figure 9. The BI KG of Luckin Coffee on 30 November 2022.
Figure 10. The Word Clouds of the Results of BI Evaluation Model Based on KG and Two-dimensional Bag-of-words. (a) The Word Cloud of GREE Electric. (b) The Word Cloud of Luckin Coffee.
Figure 11. The Timelines of the Second Most Frequent Words Weekly. (a) The Timeline of GREE Electric. (b) The Timeline of Luckin Coffee. The words in the brackets next to each Chinese word are its meaning notes in English.
Table 1. Internet Media for the Data Resource.

| No. | Category | Media Name | Website |
|---|---|---|---|
| 1 | Social media | Sina Weibo (新浪微博) | weibo.com |
| 2 | Social media | WeChat (微信) | weixin.qq.com |
| 3 | Word of mouth media | Zhihu (知乎) | www.zhihu.com |
| 4 | Word of mouth media | Little Red Book (小红书) | www.xiaohongshu.com |
| 5 | Word of mouth media | Douban (豆瓣) | www.douban.com |
| 6 | Short video platform | Douyin (抖音) | www.douyin.com |
| 7 | Short video platform | Kwai video (快手) | www.kuaishou.com |
| 8 | Mass media & portal | Sina (新浪网) | www.sina.com.cn |
| 9 | Mass media & portal | NetEase (网易) | www.163.com |
| 10 | Mass media & portal | Sohu (搜狐) | www.sohu.com |
| 11 | Mass media & portal | Tencent (腾讯) | www.tencent.com |
| 12 | Mass media & portal | xinhuanet (新华网) | www.news.cn |
| 13 | Mass media & portal | people (人民网) | www.people.com.cn |
| 14 | Mass media & portal | ifeng (凤凰网) | www.ifeng.com |
| 15 | E-commerce platform | taobao (淘宝) | www.taobao.com |
| 16 | E-commerce platform | JD.COM (京东) | www.jd.com |
| 17 | E-commerce platform | Pinduoduo (拼多多) | www.pinduoduo.com |
| 18 | Domain media | Hongzhoukan (证券市场红周刊) | www.hongzhoukan.com |
| 19 | Domain media | jiemian (界面新闻) | www.jiemian.com |
| 20 | Domain media | National Business Daily (每经网) | www.nbd.com.cn |
| 21 | Domain media | Rayli (瑞丽网) | www.rayli.com.cn |
| 22 | Domain media | Culture and Creativity in China (文创中国) | creativity.china.com.cn |

Note: All the websites were last accessed on 1 February 2023.
Table 2. The Word Scene Coefficient Vector.

| Media | Scenario | Level | Value |
|---|---|---|---|
| Mass Media | Title | 1 | v1 |
| Mass Media | Image | 2 | v2 |
| Mass Media | Content | 3 | v3 |
| Mass Media | Video | 4 | v4 |
| We Media | Title | 5 | v5 |
| We Media | Image | 6 | v6 |
| We Media | Content | 7 | v7 |
| We Media | Comment | 8 | v8 |
| We Media | Content Video | 9 | v9 |
| We Media | Advertisement Video | 10 | v10 |
Table 3. The configuration of the NVIDIA deep learning server deploying the NEL model.

| Item | Configuration |
|---|---|
| CPUs | 2× Intel Xeon E5-2698 v4 (2.2 GHz/20-core/50 MB/135 W) |
| GPUs | 8× NVIDIA® Tesla V100 |
| RAM | 1 TB DDR4 (2133 MHz) |
| Hard Disks | 4× 1.92 TB SSD, RAID 0 |
| Networks | Dual 10 GbE, 4× IB EDR |
| Operating System | CentOS 7.8 |
Table 4. Software and libraries for deploying NEL.

| Software & Packages | Usage |
|---|---|
| Anaconda3 | Basic Python distribution, containing Python 3.8.10, numpy 1.18.2, and pandas 1.1.2. |
| Wordcloud 1.6.0 | Generating the word clouds of brands. |
| Scrapy 2.5.0 | Generating the data resources of specific brands. |
| MongoDB 4.4.6 | Archiving knowledge graphs. |
| Networkx 2.1 | Drawing knowledge graphs. |
Table 5. Data summary displayed in the experimental results.

| Media Type | GREE Electric | Luckin Coffee | Number of Data Records |
|---|---|---|---|
| Mass Media | 5.1 × 10⁴ | 2.2 × 10⁴ | 7.3 × 10⁴ |
| We Media | 3.3 × 10⁴ | 11.0 × 10⁴ | 14.3 × 10⁴ |
| Total Number | 8.4 × 10⁴ | 13.2 × 10⁴ | 21.6 × 10⁴ |
Table 6. The Definition of the Word Scene Coefficient Vector of GREE Electric.

| Media | Scenario | Level | Value |
|---|---|---|---|
| Mass Media | Title | 1 | 0.21 |
| Mass Media | Image | 2 | 0.17 |
| Mass Media | Content | 3 | 0.14 |
| Mass Media | Video | 4 | 0.08 |
| We Media | Title | 5 | 0.12 |
| We Media | Image | 6 | 0.10 |
| We Media | Content | 7 | 0.08 |
| We Media | Comment | 8 | 0.06 |
| We Media | Content Video | 9 | 0.03 |
| We Media | Advertisement Video | 10 | 0.01 |
Table 7. The Definition of the Word Scene Coefficient Vector of Luckin Coffee.

| Media | Scenario | Level | Value |
|---|---|---|---|
| Mass Media | Title | 1 | 0.14 |
| Mass Media | Image | 2 | 0.11 |
| Mass Media | Content | 3 | 0.09 |
| Mass Media | Video | 4 | 0.06 |
| We Media | Title | 5 | 0.23 |
| We Media | Image | 6 | 0.12 |
| We Media | Content | 7 | 0.10 |
| We Media | Comment | 8 | 0.08 |
| We Media | Content Video | 9 | 0.05 |
| We Media | Advertisement Video | 10 | 0.02 |

Share and Cite

MDPI and ACS Style

Li, D.; Wang, Y.; Wang, G.; Lu, J.; Zhu, Y.; Bella, G.; Liang, Y. Chinese Brand Identity Management Based on Never-Ending Learning and Knowledge Graphs. Electronics 2023, 12, 1625. https://doi.org/10.3390/electronics12071625
